In my last post I shared some methods for getting up and running with Knative eventing. In this post, we are going to step back a little to try and understand how the eventing component works. It gives a high level overview of a demo you can follow along with on GitHub.
This is the first in a series of posts that walk through an example, which we will build out in complexity over the coming weeks.
First, let's go over a few reasons why we might look to use eventing.
What capabilities does Knative Eventing bring?
- Ability to decouple producers and consumers (this means, for example, that consumers can subscribe to an event type before any events of that type have been produced).
- Events are published as CloudEvents (this is a topic I would like to cover separately in more detail).
- Push-based messaging
There are a number of key components that I will describe below, which together make up the initial example. Channels and subscriptions will not be included in this post; we'll discuss those another time.
What are we building?
Let’s first take a look at the diagram below to get a picture of how these components fit and interact together. This diagram shows the type of demo scenario we are looking to recreate over the next few posts.
Each of these components is deployed using yaml files, except for the broker, which is created automatically once Knative injection is enabled within the namespace. You can deploy a custom broker if you wish, but I won't include that in this post.
In this simple example, we use a Kubernetes app deployment as the source and a Knative Service as the consumer, which will subscribe to the events.
The code that I use to stream the events to the broker is available here on GitHub and gives more detailed instructions if you want to build it yourself. It also contains the yaml files used in this tutorial.
Our source is the producer of the events. It could be an application, a WebSocket, a process, etc. It produces events that other services may or may not be interested in subscribing to.
There are a number of different types of sources, each one is a custom resource. The range of sources available can be seen in the Knative documentation here. You can create your own event source if you need to.
The following yaml shows a simple Kubernetes app deployment which is our source:
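A minimal sketch of such a deployment is shown below. The app label and namespace match those used in the log commands later in this post; the image name is a placeholder for the WebSocket source app from the GitHub repo, and the `SINK` environment variable (pointing the app at the default broker's in-cluster address) is an assumption about how the app is configured:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wseventsource
  namespace: knative-eventing-websocket-source
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wseventsource
  template:
    metadata:
      labels:
        app: wseventsource
    spec:
      containers:
        - name: wseventsource
          # Placeholder image name - build and push your own from the repo
          image: docker.io/example/wseventsource:latest
          env:
            # Assumed env var: where the app sends its CloudEvents
            - name: SINK
              value: "http://default-broker.knative-eventing-websocket-source.svc.cluster.local"
```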
In the above example, the source is a Go application that streams messages via a WebSocket connection. It sends them as CloudEvents, which our service will consume. It is available to view here.
Broker and Trigger are CRDs that manage the delivery of events and abstract these details away from the related services.
The broker is where events get received. It is like a holding area, from where they can be consumed by those interested. As mentioned above, a default broker is automatically created when you label your namespace with:

kubectl label namespace my-event-namespace knative-eventing-injection=enabled
Our trigger provides a filter, by which it determines which events should be delivered to a given consumer.
Here below is an example trigger, which just defines that the event-display service subscribes to all events from the default broker.
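A sketch of such a trigger is below. With no filter specified, everything sent to the default broker is delivered to the subscriber. The trigger name is a placeholder, and the `apiVersion` values may differ depending on your Knative release:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: event-display-trigger
  namespace: knative-eventing-websocket-source
spec:
  broker: default
  # No filter: all events from the broker go to the subscriber
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```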
Under spec, I can also add a filter with attributes, and then include some CloudEvent attributes to filter by. We can then filter on CloudEvent fields, such as type or source, in order to determine which events a service subscribes to. Here is another example, which filters on a specific event type:
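A sketch of a filtered trigger, using the event type from the expression example below; the trigger name is a placeholder and the `apiVersion` values may vary with your Knative release:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: event-display-trigger-filtered
  namespace: knative-eventing-websocket-source
spec:
  broker: default
  filter:
    attributes:
      # Only events with this CloudEvent type are delivered
      type: com.github.pull.create
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```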
You can also filter on an expression such as:
expression: ce.type == "com.github.pull.create"
We can have one or multiple consumers. These are the services that are interested (or not) in the events. If you have a single consumer, you can send events straight from the source to the consumer.
Here is the Knative Service deployment yaml:
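A sketch of what this looks like. The visibility label key has changed across Knative releases (`serving.knative.dev/visibility` in older versions, `networking.knative.dev/visibility` in newer ones), and the image path for Knative's pre-built event-display app may differ by release, so treat both as assumptions to verify against your installed version:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
  namespace: knative-eventing-websocket-source
  labels:
    # Makes the service reachable only from within the cluster
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
        # Pre-built event-display image from Knative (path may vary by release)
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
```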
Because I want to send events to a Knative Service, I need to add the cluster-local visibility label (this is explained in more detail below). For the rest, I am using a pre-built image from Knative for a simple event display, the code for which can be found here.
Once you have all of these initial components, it looks like this:
I had some issues getting the events to the consumer when first trying this demo out. In the end, I found out that in order to sink to a Knative Service, you need to add the cluster-local gateway to the Istio installation. This is mentioned somewhat vaguely in the docs around installing Istio, but could probably have been a bit clearer. Luckily, I found this great post, which helped me massively!
When you install Istio you will need to ensure that you see (at least) the following:
Handy tips and stuff I wish I had known before…
If you want to add a sink URL in your source file, see the example below for a broker sink:
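A sketch of what that looks like as an environment variable on the source container. The `SINK` variable name is an assumption about how the source app is configured; the URL format shown is the in-cluster address of a default broker created via namespace injection:

```yaml
env:
  - name: SINK
    value: "http://default-broker.knative-eventing-websocket-source.svc.cluster.local"
```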
default is the name of the broker and
knative-eventing-websocket-source is the namespace.
In order to verify that events are being sent, received, and consumed, you can use the following commands as a reference:
```
# Get the logs of the source app
kubectl --namespace knative-eventing-websocket-source logs -l app=wseventsource --tail=100

# Get the logs of the broker
kubectl --namespace knative-eventing-websocket-source logs -l eventing.knative.dev/broker=default --tail=100

# Get the logs of a Knative service
kubectl logs -l serving.knative.dev/service=event-display -c user-container --since=10m -n knative-eventing-websocket-source
```
You should see something like this when you get the service logs:
Next time, we will look at building out our service and embellishing it a little, so that we can transform, visualise, and send new events back to a broker.