raspberry pi

Pairing BBC Microbit with a Raspberry Pi

For a while I have wanted to experiment with sensor events and I recently had a day off so (rather than continuing my re-exploration of the wonderful LOTR trilogy… the book, not the movie) I decided to finally get all the electronics out and give it a whirl.

I have a lot of microbits in the house from running coding clubs so I figured I would use one as a sensor. I also had a Pi Zero that I hadn’t really used so thought I would use it as the machine to which I send the sensor data.

So this is part one of what will be a series where we can explore the possibilities. For this demo, I am going to go over how I got the microbit set up to communicate with the Raspberry Pi.

Best friends forever… paired in perfect harmony

It took a lot longer than I thought :/ I had a few issues along the way (many due to my own errors :p). One thing I noticed is that my microbit was a bit sensitive and kept disconnecting from the power source every few minutes; I tried another, which seemed much more stable.

Prerequisites

In order to run this tutorial, you will need the following:

  • A microbit
  • A Raspberry Pi (I used the Pi Zero) with the following installed:
    • Bluezero – I used the latest release, 3.0 (I originally had 2.0 but had issues getting it working, so I reinstalled to the newest version available at the time of writing).
      • sudo pip3 install bluezero

Prepare the microbit

Here is a link to the microbit code I used. I started with a setup from one of the issues I found in the python-bluezero GitHub repo. However, with that setup, though the code ran, the microbit kept throwing a 020 error, which seems to relate to memory issues. I therefore removed the while loop and it now works without the error.

Let’s go over what it does:

  1. On start, it will look for the temperature and uart services. It will also show the bluetooth symbol on the LED matrix.
  2. When the microbit is connected to the Pi, it will set the connected variable to "true" and show a smiley face on the LED matrix 🙂 The bluetooth UART service is started so it can read data received from the Pi, terminating the read when it reaches the ‘#’ symbol. More information about this service can be found here. It will display the received data as a string on the microbit LED matrix.
  3. When the microbit is disconnected from the Pi, it will set the connected variable to "false" and show a sad face 😦

Download the hex file and load it onto the microbit by dragging it into the microbit drive.

Connect to the Pi

Ensure your Raspberry Pi is connected to a power source and that you know its IP address.

ssh <username>@<ip_address>

Enter the username and password set for your Raspberry Pi.

Pairing the Raspberry Pi with the Microbit

Enter bluetoothctl by typing bluetoothctl. First, we will scan to see which bluetooth devices are available.

  1. Find the microbit

First, your Raspberry Pi needs to find the microbit. To do this run the following command:

scan on

This will start scanning for any bluetooth devices and you will see them appear. The microbit one will look something like:

[NEW] Device A1:B2:C3:D4:E5:FF BBC micro:bit [a name]

Once it appears, type scan off to stop the scanning.

  2. Pair with the microbit

To pair the microbit and the Raspberry Pi, you can run

pair <device_address>

So in the example above it would be:

pair A1:B2:C3:D4:E5:FF

When you run this pair command, hold down the A+B buttons on the microbit and press the reset button on the back. You will see a bluetooth symbol; then you can release the buttons.

At first this did not work for me, but I changed the pairing settings in the MakeCode project settings and then it worked.

You can check if it is paired by making sure it is listed when you run the following command:

paired-devices

Clone the bluezero repo

This tutorial uses VS Code to build our Python code. Sadly, full VS Code remote support is currently not possible on the Raspberry Pi Zero. More details can be found in this GitHub issue.

Instead, you can use the SSH FS extension for VS Code. It won’t give you any debugging functionality, but you can navigate the folders of your Pi, make new files, write code, etc. Add the Raspberry Pi to the SSH FS extension by creating a new SSH configuration.

I started out by cloning the python-bluezero repo and using this file as the starting point. I’ve pretty much kept it the same for now so I can run over what is happening.

Additionally, I added the ability to get the temperature from the microbit’s inbuilt temperature sensor. This is going to form some basis for our sensor data for this project.

Through doing this, I found out the temperature sensor data is very boring :p so I am planning on swapping it out next time for accelerometer data instead.

Using the microbit_uart.py code as a base

The first thing the Python script does is import microbit tools from the bluezero package. It imports microbit and async_tools.

The code then creates the microbit object and sets some variable values, which are explained below:

  • adapter address: the bluetooth controller on the Raspberry Pi. You can find this by running list controller from bluetoothctl.
  • device address: the microbit's address. You can find this by running paired-devices from bluetoothctl.
  • It then defines which services are enabled and disabled (see the sketch after this list), e.g.
    • temperature_service=True because we will be using this to send the temperature from the microbit to the Raspberry Pi.
    • uart_service=True because the example script we are using needs the uart service.
    • You can also enable others. You will also need to add these into your microbit code by dragging in the relevant blocks (e.g. the bluetooth led service if you wanted to use the led display as an event input, or the bluetooth accelerometer service if you wanted to look at rotation of the microbit as an event).
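
Here is roughly how that set-up looks in code. Treat it as a sketch: the addresses are placeholders for the values you found via bluetoothctl, and the keyword arguments follow the python-bluezero microbit example, so check them against your installed version.

from bluezero import microbit

# Placeholders -- substitute your own adapter and device addresses
ubit = microbit.Microbit(adapter_addr='B8:27:EB:XX:XX:XX',  # Pi's bluetooth controller
                         device_addr='A1:B2:C3:D4:E5:FF',   # the paired microbit
                         temperature_service=True,          # temperature readings enabled
                         uart_service=True)                 # uart messaging enabled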

There are two functions already defined: ping and goodbye.

The ping function transmits a message to the microbit from the Raspberry Pi. The microbit reads the message up until the hashtag (as defined in our microbit code). The message is “ping”, which the microbit will display via the LEDs.

def ping():
    # Send 'ping' to the microbit; '#' marks the end of the message
    ubit.uart = 'ping#'
    return True

This works through UART (Universal Asynchronous Receiver/Transmitter) over bluetooth, which is used for communication across serial ports and is (as the name suggests) asynchronous. The code that makes this all work is here.

The goodbye function disconnects the microbit from the Raspberry Pi and quits the asynchronous event loop. The EventLoop class is defined here.

We are now going to need to add an additional function and some extra lines of code so we can get the temperature reading from the microbit.

Getting temperature data from the microbit

To enable getting a temperature reading from the microbit, first ensure the temperature service is set to true for the microbit:

temperature_service=True

Then add the following function within the code:

def temperature():
    # Print the current reading from the microbit's inbuilt sensor
    print('Temperature:', ubit.temperature)

Then add the following:

for i in range(3):
    # Schedule three readings, 10 seconds apart (at 0, 10 and 20 seconds)
    eloop.add_timer(i * 10000, temperature)

Finally, we need to change the event loop time input where we call the goodbye function to 50000 milliseconds (by which time all of the other functions have run).

The code should now look like this:
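
Here is a minimal sketch of the assembled script, modelled on bluezero's microbit_uart.py example with the temperature additions above. The addresses and the exact timer values are assumptions to adjust for your own setup.

from bluezero import microbit
from bluezero import async_tools

# Placeholder addresses -- use the values you found via bluetoothctl
ubit = microbit.Microbit(adapter_addr='B8:27:EB:XX:XX:XX',
                         device_addr='A1:B2:C3:D4:E5:FF',
                         temperature_service=True,
                         uart_service=True)

eloop = async_tools.EventLoop()

def ping():
    # Send 'ping' to the microbit; it reads up to the '#' terminator
    ubit.uart = 'ping#'
    # Returning True keeps the timer repeating
    return True

def temperature():
    # Print the current reading from the microbit's inbuilt sensor
    print('Temperature:', ubit.temperature)

def goodbye():
    # Disconnect from the microbit and stop the event loop
    ubit.disconnect()
    eloop.quit()

ubit.connect()

eloop.add_timer(2000, ping)                  # assumed delay before the first ping
for i in range(3):
    eloop.add_timer(i * 10000, temperature)  # readings at 0, 10 and 20 seconds
eloop.add_timer(50000, goodbye)              # quit once everything else has run

eloop.run()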

Next time, we will change this code to remove the uart functionality and add in the ability for us to get data from the accelerometer service as a stream of data that we can use to do some cool stuff with!

event driven

3 advantages of Event-Driven Architecture

My latest posts have put a lot of focus on cloud native technologies. The last few have mentioned things like CloudEvents and Knative Eventing, and it got me thinking… why might people want to implement event-driven ecosystems in the first place?

I’ve decided to put together three advantages that I think offer pretty attractive prospects for implementing an Event Driven Architecture pattern.

True Decoupling of Producers and Consumers

The nature of an Event Driven Architecture lends itself to microservices, and in this type of system there is (hopefully) loose coupling between the services. Depending on how the microservices communicate, there may still be dependencies between them (e.g. an HTTP request/response approach).

In the excellent book ‘Designing Event-Driven Systems‘, Ben Stopford tells us that the core mantra of event-driven services is “Centralize an immutable stream of facts. Decentralise the freedom to act, adapt and change”.

Because the ownership of data is separated by domain, there is a nice logical separation between the production and consumption of events. As a producer, I do not need to concern myself with how the events I produce are going to be consumed, and vice versa for the team consuming them: they are free to figure out for themselves what to do with the events and do not need to be instructed. The message structure is also not important; it can be JSON, XML, Avro, etc. Doesn’t matter.

The broker, plus some kind of trigger between it and the services, enables messages to be ingested into the event-driven ecosystem and then broadcast out to whichever services are interested in receiving them.

Business narrative of what has happened that can’t be changed

We have all heard the term ‘single source of truth’, and it is usually just a rumor (like the treasure chest hidden at the end of the rainbow). Well, in an event-driven ecosystem it really exists!

As mentioned above, an event stream should be an immutable stream of facts. This is very representative of how our daily lives unfold: as a series of events. These events happened and it’s not possible to go back and change them unless you own one of these (remember, terrible things can happen to those who meddle with time)…

This is an advantage for business data governance as you can always look back in the log for auditing or to see what happened.

It is becoming more and more common for companies to need to explain their ‘data-derived’ decisions, e.g why a customer’s application for finance or insurance has been rejected. The log of immutable events that EDA provides us can provide a key component of this auditing.

Real-time event streams for Data Science

One of the reasons I am enthusiastic about EDA is that it is particularly well suited to in-stream processing. It lends itself to fast decision making: things where milliseconds count.

Business logic can be applied while data is in motion rather than waiting for the data to land somewhere before doing the analysis. This is good for things like fraud detection and predictive analytics; oftentimes, we need to know whether a transaction is fraudulent before it completes.

Further Reading

There are many reasons you might want to use eventing as the backbone of your system and if you want to find out more about Event-Driven Architecture then I recommend the following resources as a start:

  • Designing Event-Driven Systems by Ben Stopford
  • Building Event-Driven Microservices by Adam Bellemare (pre-release)
  • Cloud Native Patterns by Cornelia Davis

cloud native, knative

Knative Eventing: Part 3 – Replying to broker

In part two of my Knative Eventing tutorials, we streamed websocket data to a broker and then subscribed to the events with an event display service that displayed the events in real time via a web UI.

In this post, I am going to show how to send reply events back to a broker and then subscribe to them from a different service. Once we are done, we will have something that should look like the following:

In this scenario, we are receiving events from a streaming source (in this case a websocket) and, each time we receive an event, we want to send another event back to the broker as a reply. In this example we keep it very simple: every time we get an event, we send another.

In the next tutorial, we will look at receiving the events and then performing some analysis on the fly, for example analysing the value of a transaction and assigning a size variable. We could then implement some logic like: if size = XL, send a reply back to the broker, which could then be listened for by an alerting application.

Setup

I am running Kubernetes on Docker Desktop for Mac. You will also need Istio and Knative Eventing installed.

Deploy a namespace:

kubectl create namespace knative-eventing-websocket-source

Apply the knative-eventing label:

kubectl label namespace knative-eventing-websocket-source knative-eventing-injection=enabled

Ensure you have the cluster local gateway set up.

Adding newEvent logic:

In our application code, we are adding some code for sending a new reply event every time an event is received. The code is here:

newEvent := cloudevents.NewEvent()
// Set the attributes we will later filter on in a trigger
newEvent.SetSource(fmt.Sprintf("https://knative.dev/jsaladas/transactionClassified"))
newEvent.SetType("dev.knative.eventing.jsaladas.transaction.classify")
newEvent.SetID("1234")
newEvent.SetData("Hi from Knative!")
// Send the new event back to the broker as the reply
response.RespondWith(200, &newEvent)

This code creates a new event, with the following information:

source = "https://knative.dev/jsaladas/transactionClassified"

type = "dev.knative.eventing.jsaladas.transaction.classify"

data = "Hi from Knative!"

I can use whichever values I want for the above; these are just the values I decided on, so feel free to change them. We can then use these fields later to filter for the reply events only. Aside from receiving an event and displaying it, our code will now generate a new event that enters the Knative Eventing ecosystem.

Initial components to run the example

The main code for this tutorial is already in the GitHub repo for part 2. If you already followed that and have it running, you will need to redeploy the event-display service with a new image. For those who didn’t join for part 2, this section shows you how to deploy the components.

Deploy the websocket source application:

kubectl apply -f 010-deployment.yaml

Here is the yaml below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wseventsource
  namespace: knative-eventing-websocket-source
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: wseventsource
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: wseventsource
          image: docker.io/josiemundi/wssourcecloudevents:latest
          env:
          - name: SINK
            value: "http://default-broker.knative-eventing-websocket-source.svc.cluster.local"

Next we will apply the trigger that sets up a subscription for the event-display service to receive events from the broker that have a source equal to "wss://ws.blockchain.info/inv". We could also filter on type, or even another CloudEvent attribute; if we left them both empty, the trigger would match all events.

kubectl apply -f 040-trigger.yaml

Here is the trigger yaml:

apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: wsevent-trigger
  namespace: knative-eventing-websocket-source
spec:
  broker: default
  filter:
    sourceAndType:
      type: ""
      source: "wss://ws.blockchain.info/inv"
  subscriber:    
    ref:
      apiVersion: v1
      kind: Service
      name: event-display

Next we will deploy the event-display service, which is the specified subscriber of the blockchain events in our trigger.yaml. This application is where we create our reply events.

This is a Kubernetes service, so we need to apply the following yaml files (a Deployment and a Service):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-display
  namespace: knative-eventing-websocket-source
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: event-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: event-display
          image: docker.io/josiemundi/test-reply-broker
---
apiVersion: v1
kind: Service
metadata:
  name: event-display
  namespace: knative-eventing-websocket-source
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    name: consumer
  - port: 9080
    protocol: TCP
    targetPort: 9080
    nodePort: 31234
    name: dashboard
  selector:
    app: event-display

If you head to localhost:31234, you should see the stream of events.

Subscribe to the reply events

Now we need to add another trigger, this time subscribing only to the reply events (that’s the newEvent we set up in the Go code). You can see that, in this case, we specify the source as "https://knative.dev/jsaladas/transactionClassified".

Here is the trigger yaml we apply:

apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: reply-trigger-test
  namespace: knative-eventing-websocket-source
spec:
  broker: default
  filter:
    sourceAndType:
      type: ""
      source: "https://knative.dev/jsaladas/transactionClassified"
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: test-display

This time, our subscriber is a Knative service called test-display, which we still need to deploy.

Run the following to deploy the knative service that subscribes to reply events:

kubectl --namespace knative-eventing-websocket-source apply --filename - << END
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: test-display
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/github.com/knative/eventing-contrib/cmd/event_display@sha256:1d6ddc00ab3e43634cd16b342f9663f9739ba09bc037d5dea175dc425a1bb955
END

We can now get the logs of the test-display service, where you should see only the reply messages:

kubectl logs -l serving.knative.dev/service=test-display -c user-container --tail=100 -n knative-eventing-websocket-source

Next time we will look at classifying the events and use transaction size as a reply to the broker.

knative, kubernetes

Knative Eventing: Part 2 – streaming CloudEvents to a UI

I’ve been looking at Knative Eventing a fair bit lately, and one of the things I have been doing is building an eventing demo (the first part of which can be found here). As part of this demo, I wanted to understand how I could get the CloudEvents sent by my producer to display in real time via a web UI (the event display service UI).

Here is a bit of info and an overview of the approach I took. The code to run through this tutorial can be found here.

Prerequisites and set-up

First, you will need to have Knative and your chosen gateway provider installed (I tried this with both Istio and Gloo, and both worked fine). You can follow the instructions here.

Initially deploy the 001-namespace.yaml by running:

kubectl apply -f 001-namespace.yaml

Verify you have a broker:

kubectl -n knative-eventing-websocket-source get broker default

You will see that the broker has a URL; this is what we will use as our SINK in the next step.

Deploy the Blockchain Events Sender Application

The application that sends the events was discussed in my Knative Eventing: Part 1 post and you can find the repo with all the code for this application here.

To get up and running you can simply run the 010-deployment.yaml file. Here is a reminder of what it looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wseventsource
  namespace: knative-eventing-websocket-source
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: wseventsource
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: wseventsource
          image: docker.io/josiemundi/wssourcecloudevents:latest
          env:
          - name: SINK
            value: "http://default-broker.knative-eventing-websocket-source.svc.cluster.local"

This is a Kubernetes app deployment. The name of the deployment is wseventsource and the namespace is knative-eventing-websocket-source. We have defined an environment variable called SINK, whose value we set to the address of our broker.
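
The sender application itself is written in Go, but to make the mechanics concrete, here is a rough Python sketch of what it does with that SINK value, assuming the CloudEvents Python SDK and the requests library (the event fields are illustrative):

import os

import requests
from cloudevents.http import CloudEvent, to_structured

# The broker address is injected into the container as the SINK env var
sink = os.environ['SINK']

# 'source' and 'type' are the fields a trigger can later filter on
attributes = {
    "source": "wss://ws.blockchain.info/inv",
    "type": "websocket-event",
}
event = CloudEvent(attributes, {"message": "a transaction payload would go here"})

# Serialise the event to an HTTP request and POST it to the broker
headers, body = to_structured(event)
requests.post(sink, data=body, headers=headers)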

Verify events are being sent by running:

kubectl --namespace knative-eventing-websocket-source logs -l app=wseventsource --tail=100 

This is what we currently have deployed:

Add a trigger – Send CloudEvents to Event-Display

Now we can deploy our trigger, which will set our event-display service as the subscriber.

# Knative Eventing Trigger to trigger the helloworld-go service
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: wsevent-trigger
  namespace: knative-eventing-websocket-source
spec:
  broker: default
  filter:
    sourceAndType:
      type: ""
      source: ""
  subscriber:    
    ref:
      apiVersion: v1
      kind: Service
      name: event-display

In the file above, we define our trigger name as wsevent-trigger along with its namespace. In spec > filter, I am basically telling the broker to send all events to the subscriber. The subscriber in this case is a Kubernetes service rather than a Knative Service.

kubectl apply -f 030-trigger.yaml

Now we have the following:

A trigger can exist before the service and vice versa. Let’s set up our event display.

Stream CloudEvents to Event Display service

I used the following packages to build the Event Display service:

Originally, I deployed my event-display application as a Knative Service and this was fine, but I could only access the events through the logs or by using curl.

Ideally, I wanted to build a stream of events that was pushed all the way to the UI. However, I discovered that deploying this way wasn’t possible for this use case, because Knative Serving does not allow multiple ports in a service deployment.

I asked the question about it in the Knative Slack channel and the response was mainly to use mux and specify a path (I saw something similar in the sockeye GitHub project).

In the end, I chose to deploy it as a native Kubernetes service instead, as that seemed the most applicable approach in terms of both functionality and security; I was a little unsure about the feasibility of using mux in production, as you may not want to expose an internal port externally.

For the kncloudevents project, I struggled to find detailed info or examples, but the code is built on top of the Go SDK for CloudEvents, and there are some detailed docs for the Python version.

We can use it to listen for HTTP CloudEvents requests. By default it will listen on port 8080. When we use the StartReceiver function, we are essentially telling our code to start listening. Because this takes up one port, we need another one on which to ListenAndServe the dashboard.
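
kncloudevents is Go, but the shape of the idea is easy to show in Python. Here is a rough equivalent, assuming Flask and the CloudEvents Python SDK (a sketch of the pattern, not the actual event-display code): CloudEvents are received on port 8080 while a second server serves the dashboard on 9080.

import threading

from cloudevents.http import from_http
from flask import Flask, request

consumer = Flask('consumer')    # receives CloudEvents (port 8080)
dashboard = Flask('dashboard')  # serves the UI (port 9080)

events = []

@consumer.route('/', methods=['POST'])
def receive():
    # Parse the incoming HTTP request into a CloudEvent (binary or structured)
    event = from_http(request.headers, request.get_data())
    events.append(event)
    return '', 204

@dashboard.route('/')
def show():
    # A very crude 'UI': dump the events received so far
    return '<br>'.join(str(e) for e in events)

if __name__ == '__main__':
    # Run the consumer in the background, the dashboard in the foreground
    threading.Thread(target=lambda: consumer.run(host='0.0.0.0', port=8080),
                     daemon=True).start()
    dashboard.run(host='0.0.0.0', port=9080)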

So here are the two yaml files that we deploy for the event-display.

App Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-display
  namespace: knative-eventing-websocket-source
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: event-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: event-display
          image: docker.io/josiemundi/bitcoinfrontendnew

Service Deployment:

apiVersion: v1
kind: Service
metadata:
  name: event-display
  namespace: knative-eventing-websocket-source
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    name: consumer
  - port: 9080
    protocol: TCP
    targetPort: 9080
    nodePort: 31234
    name: dashboard
  selector:
    app: event-display

With everything deployed we now have the following:

Now if you head to the nodeport specified in the yaml:

http://localhost:31234

Next time, we will look at how to send a reply event back into the Knative eventing space.

cloud native

What are CloudEvents?

CloudEvents is a design specification for sending events in a common and uniform way, and an interesting proposal for standardising the way we send events in an event-driven ecosystem. The specification is an open and versatile approach to sending and consuming events.

CloudEvents is currently an ‘incubating’ project with the CNCF. On the CloudEvents website, they state that the advantages of using CloudEvents are:

  • Consistency
  • Accessibility
  • Portability

Metadata about an event is contained within a CloudEvent, through a number of required (and optional) attributes including:

  • id
  • source
  • specversion
  • type

For more information about the attributes, you can take a look at the cloudevents spec.

Here is an example of a CloudEvent from my previous eventing example:

You can see the required attributes are:

  • id: 8e3cf8fb-88bb-4a00-a3fe-0635e221ce92
  • source: wss://ws.blockchain.info/inv
  • specversion: 0.3
  • type: websocket-event

There are also some extension attributes, such as knativearrivaltime, knativehistory and traceparent. We then also have the body of the message in Data.

Having these set attributes means they can be used for filtering (e.g. through a Knative Eventing trigger) and for capturing key information that other services subscribing to the events can use. I can, for example, filter for events that are only from a certain source or of a certain type.
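
To make that concrete, here is a small sketch using the CloudEvents Python SDK (the values mirror the example above; treat it as illustrative rather than the exact code behind these posts):

from cloudevents.http import CloudEvent

# 'source' and 'type' must be supplied; 'id' and 'specversion' are
# generated automatically if omitted
attributes = {
    "source": "wss://ws.blockchain.info/inv",
    "type": "websocket-event",
}
event = CloudEvent(attributes, {"value": 1234})

# Attributes are addressable by name, so a subscriber can filter on them
if event["type"] == "websocket-event":
    print("id:", event["id"], "source:", event["source"])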

CloudEvents are currently supported by Knative, Azure Event Grid and OpenFaaS.

There are a number of libraries for CloudEvents, including for Python, Go and Java. I’ve used the Go SDK for CloudEvents a lot lately and will be running through some of it in future posts.

knative, kubernetes

Step by Step: Deploy and interact with a Knative Service

In this post, I will show how to deploy a Knative service and interact with it through curl and via the browser. I’ll go over some of the useful stuff to know as I found this kind of confusing at first.

I’m running this on a Mac using the Kubernetes that’s built into Docker Desktop, so things will be a bit different if you are running another flavor of Kubernetes. You will need Istio and the Knative Serving components installed to follow along with this.

For the service, we are deploying a simple web app example from golang.org, which by default prints out “Hi there, I love (word of your choice)”. The code is at the link above, or I have a simple test image on Docker Hub that just prints out “Hi there, I love test” (oh, the lack of creativity!).

Deploying a Knative Service

First we need to create a namespace, in which our Knative service will be deployed. For example:

kubectl create namespace web-service

Here is the Knative service deployment, which is a file called service.yaml.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
spec:
  template:
    spec:
      containers:
        - image: docker.io/josiemundi/webserversimple

Deploy the service yaml by running the following command:

kubectl apply -f service.yaml -n web-service

Now run the following in order to view the Knative service and some details we will need:

kubectl get ksvc -n web-service

There are a few fields, including:

NAME: The name of the service

URL: The URL of the service, which we will need in order to interact with it. By default the URL will be “<your-service-name>.<namespace>.example.com”, but you can also set up a custom domain.

READY: This should say “True”, if not it will say “False” and there will be a reason in the REASON field.

After a little while, you might notice the service will disappear as it scales down to zero. More on that in a while.

IngressGateway

To interact with the service we just deployed, we need to understand a bit about the IngressGateway. By default, Knative uses the istio-ingressgateway as its gateway service. We need to understand this in order to expose our service outside of the local cluster.

We can look at the istio-ingressgateway using the following command:

kubectl get service istio-ingressgateway --namespace istio-system

This will return the following:

Within the gateway configuration, a number of ports and NodePorts are specified by default, including the one we will use to communicate with our service:

port:
  number: 80
  name: http2
  protocol: HTTP

To find the port for accessing the service you can run the following:

kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'   

You can customise the Gateway configuration. Details and the different ports can be found here in the Istio documentation. I’d also recommend running through the Istio httpbin example to understand a bit more about istio and ingressgateway.

To interact with our service, we will need to combine the URL (event-display.web-service.example.com) with the EXTERNAL-IP (localhost) that we saw for the istio-ingressgateway. Depending on your setup, these may not be the same as mine.

It will be something like the following:

curl -H "Host: event-display.web-service.example.com" http://127.0.0.1/test

Scaling our Service

Your initial pod has probably disappeared by now, because when a service is idle it will scale down to zero after around 90 seconds. You should see the pod start ‘Terminating’ and then disappear.

Knative uses the KPA (Knative Pod Autoscaler), which runs as a Kubernetes deployment. The KPA scales based on requests (concurrency); however, it is also possible to use the HPA (Horizontal Pod Autoscaler), which allows scaling based on CPU.

You can find out more detailed information about autoscaling here but for now just note that you can change the parameters in the ConfigMap.

To see the autoscaler config you can run the following command:

kubectl describe configmap config-autoscaler -n knative-serving

To edit the ConfigMap:

kubectl edit configmap config-autoscaler -n knative-serving 

In the result you will see some fields including:

scale-to-zero-grace-period: 30s
stable-window: 60s

The scale-to-zero-grace-period specifies how long the autoscaler waits before scaling an inactive service down to zero. The autoscaler takes a 60-second window over which to assess activity; if it determines that there have been no requests within that 60-second stable window, it waits a further 30 seconds before scaling to zero. This is why it takes around 90 seconds to terminate an inactive service.

If desired, these can be amended so that your service will scale down faster or slower. There is also a field called enable-scale-to-zero, which (if you want to be able to scale to zero) must be set to “true”.

Test using curl

Once you curl the service again you should see the pod spin up again.

curl -H "Host: event-display.web-service.example.com" http://127.0.0.1:80/test

Should return:

Hi there, I love test!

Access Knative Service through browser

If you are using Docker Desktop on a Mac, you can access the service through a browser by adding the host to the hosts file on your Mac.

sudo vi /etc/hosts

Add 127.0.0.1 event-display.web-service.example.com to the file and save it.

Alternatively, if you don’t want to (or can’t) change the hosts file, you can use the “Simple Modify Headers” browser plugin, which is what I did. Once it is installed, click on the icon and select ‘configure’. Input the parameters as follows and then click the start button.

Now open http://localhost/test and you should see:

knative, kubernetes

Knative Eventing Example: Part 1

In my last post I shared some methods for getting up and running with Knative eventing. In this post, we are going to step back a little to try and understand how the eventing component works. It gives a high level overview of a demo you can follow along with on GitHub.

This will be the first of a series of posts that walk through an example, which we will build out in complexity over the coming weeks.

First, let’s go over a few reasons why we might look to use eventing.

What capabilities does Knative Eventing bring?

  • Ability to decouple producers and consumers (this means, for example, that consumers can be subscribed to an event type before any of those event types have been produced).
  • Events are published as CloudEvents (this is a topic I would like to cover separately in more detail).
  • Push-based messaging

There are a number of key components, described below, which together make up the initial example. Channels and subscriptions will not be included in this post; we’ll discuss those another time.

What are we building?

Let’s first take a look at the diagram below to get a picture of how these components fit and interact together. This diagram shows the type of demo scenario we are looking to recreate over the next few posts.

Each of these components is deployed using yaml files, except for the broker, which is created automatically once knative injection is enabled within the namespace. You can deploy a custom broker if you wish, but I won’t include that in this post.

In this simple example, we use a Kubernetes app deployment as the source and a Knative Service as the consumer, which will subscribe to the events.

The code that I use to stream the events to the broker is available here on GitHub and gives more detailed instructions if you want to build it yourself. It also contains the yaml files used in this tutorial.

Source

Our source is the producer of the events. It could be an application, a web socket, a process etc. It produces events that other services may or may not be interested in subscribing to.

There are a number of different types of sources, each one is a custom resource. The range of sources available can be seen in the Knative documentation here. You can create your own event source if you need to.

The following yaml shows a simple Kubernetes app deployment which is our source:

In the above example, the source is a go application, which streams messages via a web socket connection. It sends them as CloudEvents, which our service will consume. It is available to view here.

Broker and Trigger are CRDs, which will manage the delivery of events and abstract away the details of these from the related services.

Broker

The broker is where events get received. It is like a holding area, from where they can be consumed by those interested. As mentioned above, a default broker is automatically created when you label your namespace with kubectl label namespace my-event-namespace knative-eventing-injection=enabled

Trigger

Our trigger provides a filter, by which it determines which events should be delivered to a given consumer.

Here below is an example trigger, which just defines that the event-display service subscribes to all events from the default broker.

Under spec, I can also add filter: > attributes: and then include some CloudEvent attributes to filter by. We can then filter on CloudEvent fields, such as type or source, in order to determine which events a service subscribes to. Here is another example, which filters on a specific event type:

You can also filter on an expression such as:

expression: ce.type == "com.github.pull.create"

Consumer

We can have one or multiple consumers. These are the services that are interested (or not) in the events. If you have a single consumer, you can send straight from the source to the consumer.

Here is the Knative Service deployment yaml:

Because I want to send events to a Knative Service, I need to have the cluster-local visibility label (this is explained in more detail below). For the rest, I am using a pre-built image from knative for a simple event display, the code for which can be found here.

Once you have all of these initial components, it looks like this:

Issues

I had some issues getting the events to the consumer when I first tried this initial demo out. In the end, I found out that in order to sink to a Knative Service, you need to add the cluster local gateway to the Istio installation. This is somewhat vaguely mentioned in the docs around installing Istio, but could probably have been a bit clearer. Luckily, I found this great post, which helped me massively!

When you install Istio you will need to ensure that you see (at least) the following:

Handy tips and stuff I wish I had known before…

If you want to add a sink URL in your source file, see the example below for a broker sink:

http://default-broker.knative-eventing-websocket-source.svc.cluster.local

Here, default is the name of the broker and knative-eventing-websocket-source is the namespace.

In order to verify events are being sent/received/consumed then you can use the following examples as a reference:

//Getting the logs of the source app
kubectl --namespace knative-eventing-websocket-source logs -l app=wseventsource --tail=100 

//Getting the logs of the broker
kubectl --namespace knative-eventing-websocket-source logs -l eventing.knative.dev/broker=default --tail=100 

//Getting the logs of a Knative service
kubectl logs -l serving.knative.dev/service=event-display -c user-container --since=10m -n knative-eventing-websocket-source

You should see something like this when you get the service logs:

Next steps

Next time we will be looking at building out our service and embellishing it a little so we can transform, visualise and send new events back to a broker.

knative, kubernetes

Up and Running with Knative Eventing on Docker desktop

I’ve been playing around with Knative Eventing and wanted to write my own post on how to get it up and running on a Kubernetes cluster. The docs are pretty straight forward but I always like to keep a record for myself, just so that it’s all in one place.

Hopefully this guide will help someone who is new to the world of Knative eventing get up and running on their local machine.

So let’s get started with our install.

Get Docker Desktop

First you will need to install Docker Desktop. I’m using a Mac so I followed the instructions from the Docker website, they also have for Windows.

Once Docker desktop is installed, go to Preferences > Kubernetes > Enable Kubernetes (this post assumes you have kubectl). Then under ‘Advanced’, you will need to change the settings to increase the resources available:

Install an Ingress controller (Istio)

Knative needs Istio for its ingress controller. You can also use Gloo, but in this example we will use Istio.

I followed the instructions from the official Istio site:

Install Istio

You can also install a lighter version of Istio for Knative, for which you can find the instructions here. The installation uses Helm, despite Istio starting to move away from it. Also, I needed to remove all the comment lines from the helm template command that creates the istio-lean.yaml file, otherwise it wouldn’t run for me.

Install Knative Serving

Then to install Knative, I first installed Knative serving (as recommended for Docker Desktop users) as per the instructions from their site:

Install Knative Serving

Install Knative Eventing CRDs

After this was installed, I then installed the CRDs for Knative eventing as per the instructions at the link below:

Install the CRDs for Knative Eventing

Check Install

Once you have installed everything, run the following command and you should see something like this:

kubectl get pods --all-namespaces

Create a namespace with Knative Eventing

Now create a test namespace:

kubectl create namespace my-event-namespace

Then we need to add the resources Knative needs to manage events into the namespace we just created. This is done using the following command:

kubectl label namespace my-event-namespace knative-eventing-injection=enabled

Cleaning Up

Rather than deleting everything, you can just scale your pods down to zero when not in use. This way you can spin them back up when you want to use them again. To do this use the following command:

kubectl get deploy -n <namespace> -o name | xargs -I % kubectl scale % --replicas=0 -n <namespace>

So, for example, to scale down any knative-eventing pods I would use:

kubectl get deploy -n knative-eventing -o name | xargs -I % kubectl scale % --replicas=0 -n knative-eventing

Next steps would be to try out some of the examples listed on the Knative website. In the next few weeks I will be posting some more on using Knative eventing so stay tuned 🙂

databricks, github, version control

Multi-user Branching Approach using Databricks

In my previous post on version control using Databricks, we looked at how to link GitHub and Databricks. Following on from this I wanted to show a simple branching methodology that could work for a small team collaborating in the Databricks environment. This is for an environment where the users will not pull code down onto their own machine and are potentially new to git in general.

This is not a CI/CD pipeline and isn’t a large-scale solution. It’s also very manual, and I would recommend looking at something like DevOps pipelines or even GitHub Actions (which I will discuss another time).

In this example, we will have two main branches: master and development. This is pretty standard practice. We are assuming that developers clone from and make pull requests to development, and that admins then sync up development and master. I won’t be discussing branch policies or anything like that, but feel free to ask if you have any questions about how to set those up.

For every feature a new branch will be created by the developer. This is called feature branching.

You will need to read the previous post, as we will start where we left off: with our master code folder.

Create a Development Branch

Now we are also going to add a branch for development code in Databricks. To do this, clone a copy of the master file to a development folder in Databricks. Ensure the filename is kept the same for consistency.

Sync this with a new branch called development in GitHub (in the same way you synced the master file). However, this time under ‘Branch’, enter development and change the Path in ‘Git Repo’ to match the master one (highlighted).

Create a Feature Branch

To create a feature branch, a developer will clone a copy of the code from the development folder into their own workspace. In Databricks, you have the option to clone a notebook by selecting the tiny arrow next to the notebook. It will give you the option of where you want to put it.

Make a new folder for the feature development (e.g jo_change_c_value) and clone the notebook into there.

Once the notebook has been cloned, we will then sync it with a new feature branch folder in the same way as the image above. In this example, I create a new branch under ‘Git Preferences’ called ‘jo_change_c_value’ and sync my notebook.

I’m then going to change the code and set the c value equal to 4.

a=1
b=2
c=4

print(a)
print(a+b)

Click on save on the right hand side and it will ask you to save and additionally commit changes to GitHub if you want. Save the revision.

Make a PR to Development branch

You will see the change in GitHub.

Click on ‘Compare & pull request’ and you will see an overview of the changes. In this example, I am asking to push changes to the development branch. This is just good practice; if you only have a master branch, then just ask to merge there.

Press the green button that says ‘Create Pull request’.

You will then see the following page.

Click ‘Merge pull request’.

Now head to your development notebook in Databricks.

This is a bit odd, but your changes will not be immediately visible (I'm not sure why Databricks does this… if anyone knows, let me know). On the right-hand side in the revision history, you will see that the second-to-top record says Commit followed by a number.

You need to click on the Commit one and this will be your newly pushed code.

Make a PR to master branch

To sync up the development and master branches, just follow the same process as above. As mentioned, ideally you would have some policies in place so that there is some sort of peer review process on PRs. However, that’s outside the scope of this post.

Databricks Version Control summary

Hopefully I have shown you how to handle version control in Databricks, both for a single-branch personal approach and for a multi-user branching approach. In my honest opinion, it’s not the most user-friendly, and there is room for human error (especially with the manual entry of folder paths). Further, it’s not very automated, which could be frustrating.

A better approach would be to employ a CI/CD approach. I’ll create a follow up post on this.

kubernetes

AKS (Azure Kubernetes Service) through azure cli

You can deploy AKS using the azure-cli. Here is a quick tutorial on how to do it!

First off, ensure you have the azure-cli installed on your machine. If you have a Mac, it’s pretty simple; you can just do:

brew install azure-cli

If you are using Windows or Linux then follow the instructions on the Microsoft page.

Log in to your Azure account using:

az login

Now we are going to create some variables for the location, resource group and cluster name that we will use to set up our Kubernetes cluster:

export LOCATION=uksouth
export RESOURCE_GROUP=aks-project-group
export CLUSTER_NAME=josiemundi-cluster

Now we create a resource group for our cluster:

az group create --name $RESOURCE_GROUP --location $LOCATION

Now let’s deploy our cluster:

az aks create --resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--generate-ssh-keys \
--node-vm-size Standard_DS2_v2

You can actually head over to your Azure portal and, within the resource group we set up, you should now see a Kubernetes service. If you click on it, you will see it is in the process of deploying (or maybe already has).

We now need to fetch the cluster credentials so that kubectl can connect to it:

az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --admin

Now we can see our cluster’s nodes with the following command:

kubectl get nodes

Here is a link to a list of the cli commands available for aks.