Introduction
Credit Last Week’s Demo (Lionel)
Moving some of OpenWhisk's basic concepts to Kube/Knative by providing:
“Runtime proxy” approach
An invisible proxy (a Docker layer) in the runtime to simplify/normalize platform functionality for functions
A mostly platform-agnostic, "no added code needed" approach
Parameter passing, Parameter binding
CRD - that acts as a basic "controller"
Simplifies Kube/Knative interactions (from CLI)
Config management - binding configuration values to function parameters (see the sketch after this list)
Launches Knative Services ("ksvc") for named functions
Compositions
Support from a backend (state) was important
Fundamentally agreed with the "tree/branching" use case (Max), but more…
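To make the parameter passing/binding point concrete, here is a minimal TypeScript sketch of what the proxy's merge behavior might look like: configuration values bound at deploy time are combined with per-invocation parameters into the single params object the function receives. The names (mergeParams, invoke) are illustrative, not the actual proxy API.

```typescript
// Hypothetical sketch of the proxy's parameter merge: bound (config) parameters
// and per-invocation parameters become one params object for the function.
// mergeParams/invoke are illustrative names, not the actual proxy API.

type Params = Record<string, any>;

// Bound parameters come from package/action configuration; invocation
// parameters come from the request. Invocation values typically win.
function mergeParams(bound: Params, invocation: Params): Params {
  return { ...bound, ...invocation };
}

// The user's function only ever sees the merged params object.
async function invoke(
  fn: (params: Params) => Promise<unknown>,
  bound: Params,
  invocation: Params
): Promise<unknown> {
  return fn(mergeParams(bound, invocation));
}

// Example: a config-bound API key plus a request-time value.
invoke(
  async (params) => ({ greeting: `Hello, ${params.name}` }),
  { apiKey: "from-config" }, // bound at deploy time
  { name: "world" }          // supplied per invocation
).then(console.log);
```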
OpenWhisk Approach
Developer-centric
Everything we do is for Developer simplicity
Developer only codes functions in their chosen language
No directives | pragmas | annotations
Utilize normal language module/package imports (see the sketch after this list)
May provide platform callbacks/intrinsic functionality (as functions)
Developer has NO knowledge of
Operating System (OS)
Platform (implementation)
Kube | Knative | Firecracker, etc.
Container (VM tech.)
(Micro)Service Framework
Minimalistic archive ("zip") packaging from the CLI
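As a concrete illustration of "the developer only codes a function", here is a sketch of a function exactly as it would be written and zipped: an exported main taking a single params object (the NodeJS runtime convention), an ordinary module import, and no platform directives or annotations. TypeScript is shown; it would be compiled to JS before packaging.

```typescript
// The function as the developer writes it: plain code, a normal module import,
// no platform directives, pragmas, or annotations. For the NodeJS runtime the
// convention is an exported `main` taking a single params object and returning
// a value or a Promise.

import { createHash } from "crypto"; // ordinary module import

interface Params {
  text?: string;
}

export function main(params: Params): { digest: string } {
  const text = params.text ?? "";
  const digest = createHash("sha256").update(text).digest("hex");
  return { digest };
}
```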
Observer Pattern: Event driven / Reactive
It's all about an eventing framework that scales as fast as possible!
Observer pattern reflected in Programming Model
Informs design of control and data plane (for FaaS platform)
Reactive
As expected by microservice developers; these developers will migrate to FaaS
moving "up the stack" for the developer... no more Pflask... etc.
Cold start reduction is paramount!
Event format agnostic
Cannot force everyone onto a single event format; it is either impractical to change
or the performance cost of transforms cannot be suffered...
Functions are often coupled with specific data (e.g., from IoT sensor events or data sets):
NASA PIXL, raw NoSQL data, analytics models, genetic model data
HTTP / Web accessible without APIs
Functions creating/modifying/serving HTTP (see the sketch after this list)
Content types
Web data
Edge accessible
IoT events (network response time sensitive)
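A hedged sketch of the "web accessible without APIs" point: the function itself produces the HTTP response (status code, content-type, body), following the web-action response shape, so no separate API definition is needed just to serve web content.

```typescript
// Sketch of a web-accessible function: the function itself builds the HTTP
// response (status code, content-type, body), so no separate API layer is
// required just to serve web content.

interface Params {
  name?: string;
}

export function main(params: Params) {
  const name = params.name ?? "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "text/html" },
    body: `<html><body><h1>Hello, ${name}</h1></body></html>`,
  };
}
```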
Driven by Serverless Use Cases
Informed by OW experience and ~4 years of production usage
Serverless Patterns - Unpredictable, sporadic
Even aperiodic embarrassingly parallel workloads

Anti-pattern - High volume, continuous requests

Key Use Cases
Reflect the top reasons Developers move "up-the-stack"
Minimize compute costs (smallest possible per-invocation charge)
Manage unpredictable, sporadic loads (sporadic events)
Horizontal Scaling to N (up to account limits) and down to 0
Alarm | Periodic
"batch jobs"
ETL Pipelines
Changes in Data-at-Rest, Data-in-Motion "trigger" functions
Serverless APIs
OpenAPI, Security, Rate limiting pushed to the edge
Easily interfaced with host IAM systems
Embarrassingly parallel workloads
Time/Cost considerations: cold-start compute time is additive across parallel invocations (e.g., N cold starts each add their startup time to billed compute)
Features NOT covered today that impact Developer Usability
Observer pattern (Triggers and Rules)
N Triggers -> 1 Action, 1 Trigger -> N Actions
Feed Actions (with Packages)
Using Functions to create Event providers / Feeds (see the sketch below)
Alarms, GitHub, NoSQL (Cloudant)
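A rough sketch of a feed action, i.e., using an ordinary function to connect a trigger to an external event provider. The lifecycleEvent / triggerName / authKey parameters follow the documented feed contract; the provider helpers (registerWithProvider, unregisterFromProvider) are hypothetical stand-ins for provider-specific calls such as GitHub webhook registration.

```typescript
// Rough sketch of a feed action: an ordinary function that wires a trigger to
// an external event provider. lifecycleEvent / triggerName / authKey follow the
// feed contract; registerWithProvider/unregisterFromProvider are hypothetical
// stand-ins for the provider-specific calls (e.g., GitHub webhooks).

interface FeedParams {
  lifecycleEvent: "CREATE" | "DELETE" | "PAUSE" | "UNPAUSE";
  triggerName: string;
  authKey: string;
  repo?: string; // provider-specific configuration (illustrative)
}

export async function main(params: FeedParams) {
  switch (params.lifecycleEvent) {
    case "CREATE":
      // Tell the provider to start firing the trigger (hypothetical helper).
      await registerWithProvider(params.triggerName, params.authKey, params.repo);
      return { status: "created" };
    case "DELETE":
      await unregisterFromProvider(params.triggerName, params.authKey);
      return { status: "deleted" };
    default:
      return { status: "ignored", event: params.lifecycleEvent };
  }
}

// Hypothetical provider helpers, included only to keep the sketch self-contained.
async function registerWithProvider(trigger: string, key: string, repo?: string) { /* ... */ }
async function unregisterFromProvider(trigger: string, key: string) { /* ... */ }
```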
APIs – OpenAPI “Swagger” support
Rate Limiting, OAuth, Custom domain names, HTTP Methods
Binding your own "secure key" token without OpenAPI
YAML – deploy via 'fn deploy'
Packages, Actions, Triggers, Rules, APIs, Parameters (Bindings), Annotations, Client-Server “sync”, +++
Compositions
Composer / Conductor (Lionel is a developer)
Logging / Metrics
Built in via file descriptors in the "Go" Proxy (see the sketch below)
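From the developer's side this means logging stays ordinary: write to stdout/stderr and the proxy captures those file descriptors per activation, with no logging SDK or platform import. A minimal sketch (illustrative; the capture itself lives in the Go proxy, not in user code):

```typescript
// Ordinary logging in a function: stdout/stderr are captured per activation by
// the runtime proxy, so no logging SDK or platform import is needed.

export function main(params: { orderId?: string }) {
  console.log("processing order", params.orderId); // captured from stdout
  if (!params.orderId) {
    console.error("missing orderId");              // captured from stderr
    return { error: "missing orderId" };
  }
  return { ok: true };
}
```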
Debugging Support
server side, client side (CLI)
Docker actions (any binary executable)
Docker SDK
Tekton build for OW runtimes
Builds for OpenWhisk or Knative
Execution Domain Router ("Kind" routing)
OW using Knative, V8 Isolates, Firecracker (runtime concurrency)
Target: 10K NodeJS invocations in 1 runtime Container
Scheduling (pluggable)
General scheduler (default) for all FaaS use cases
Custom schedulers (for dedicated use cases)
Heterogeneous clusters
Pools with different compute (CPU/memory)