Using IBM Event Automation with Amazon MSK


Blog post and demo by Chris Patmore (Software Developer, IBM Event Automation) and Dale Lane (Chief Architect, IBM Event Automation)
25 Oct 2023

IBM Event Automation helps companies to accelerate their event-driven projects wherever businesses are on their journey. It provides multiple components (Event Streams, Event Endpoint Management, and Event Processing) which together lay the foundation of an event-driven architecture that can unlock the value of the streams of events that businesses have.

A key goal of Event Automation is to be composable. The three components can be used together, or they can each be used to extend and enhance an existing event-driven deployment.

Amazon MSK (Managed Streaming for Apache Kafka) is a hosted, managed Kafka service available in Amazon Web Services. If a business has started their event-driven journey using MSK, then components from Event Automation can help to enhance this: by offering management and governance of their MSK topics, and by providing an intuitive low-code authoring canvas to process the events on those topics.

Working with Amazon MSK is a nice example of the benefits of this composability: it helps businesses get more value from their existing MSK topics.

In this blog post, we want to show a few different examples of where this can be done. For each example, we'll provide a high-level diagram and description. We'll also share a demonstration that we created to show it in action.

(The descriptions below include detailed step-by-step instructions for how we built each demo.)

using Event Processing with Amazon MSK

To start with, we demonstrated how Event Processing can be used with Amazon MSK. We showed how Event Processing, based on Apache Flink, can help businesses to identify insights from the events on their MSK topics through an easy-to-use authoring canvas.

The diagram above is a simplified description of what we created. We created an MSK cluster in AWS, set up a few topics, and then started a demonstration app producing a stream of events to them. This gave us a simplified example of a live MSK cluster.

We then accessed this Amazon MSK cluster from an instance of Event Processing (that was running in a Red Hat OpenShift cluster in IBM Cloud). We used Event Processing to create a range of stateful stream processing flows.

This showed how the events on MSK topics can be processed where they are, without requiring an instance of Event Streams or for topics to be mirrored into a Kafka cluster also running in OpenShift. Using the low-code authoring canvas with Kafka topics that you already have, wherever they are, is a fantastic event-driven architecture enabler.

How we created a demonstration of this...
We started by creating an Amazon MSK cluster.

We opened the Amazon Web Services admin console, and went to the MSK (Managed Streaming for Apache Kafka) service.

To start, we clicked Create cluster.


We opted for the Custom create option so we could customize our MSK cluster.

We called the cluster loosehanger-msk because we're basing this demonstration on "Loosehanger" - a fictional clothes retailer that we have a data generator for.


We chose a Provisioned (rather than serverless) Kafka cluster type, and chose the latest version of Kafka that Amazon offered (3.5.1).

Because we only needed an MSK cluster for this short demo, we went with the small broker type.


We went with the default, and recommended, number of zones to distribute the Kafka brokers across: three.

Because we only planned to run a few applications with a small number of topics, we didn't need a lot of storage - we gave each broker 5GB of storage.


Rather than go with the default Kafka configuration, we clicked Create configuration to prepare a new custom config.


We gave the config a name similar to the MSK cluster itself, based on our scenario of the fictional clothes retailer, Loosehanger.


We started with a config that would make it easy for us to set up the topics.

auto.create.topics.enable=true
default.replication.factor=3
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.partitions=1
num.replica.fetchers=2
replica.lag.time.max.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
zookeeper.session.timeout.ms=18000
allow.everyone.if.no.acl.found=true

The key value we added to the default config was allow.everyone.if.no.acl.found, because we would be creating topics before setting up auth or access control lists.

We clicked Create to create this configuration. Once back on the MSK cluster settings screen, we chose this custom config and clicked Next to move on to the next step.


Networking was next.

We clicked Create VPC to prepare a new virtual networking environment for our demonstration.


We opted for the VPC and more option so we could set this up in a way that would support public access to the MSK cluster.

We chose three availability zones to match the config we used for the MSK cluster - this would allow us to have a separate AZ for each MSK broker.


We went with three public subnets and no private subnets, again as this would allow us to enable public access to the MSK cluster.

We left the default DNS options enabled so that we could have DNS hostnames for our addresses.

Next we clicked Create VPC.


We verified that the VPC resources we requested were created and then closed this window.


Back on the Networking step of the MSK cluster creation wizard, we were now able to choose our new VPC, and select the zones and subnets. The match of three availability zones for the MSK cluster, and three availability zones for the VPC meant it was just a matter of choosing a different zone for each broker.

We wanted to enable public access, but this can't be done at cluster creation time, so this remained on Off for now.


The final option in the networking step was the security groups. The default here was fine for our purposes, so we left this as-is.

We clicked Next to move onto the next step.


The next step was to configure the security options for the MSK cluster.

We started by disabling auth, as unauthenticated access would make it easy for us to set up our cluster and topics. We enabled auth later, once we had the cluster the way we wanted it.

For the same reason, we also left client TLS disabled as well. We turned this on later when we enabled public access to the cluster.


The default encryption key was fine for encrypting the broker storage, so we left this as-is and clicked Next.


The next step was to configure monitoring. As this was a short-lived demo cluster, we didn't have any monitoring requirements, so we left this on the default basic option and clicked Next.


Our MSK cluster specification was ready to go, so we clicked Create.


We had to wait for the MSK cluster to provision.


Once it was ready, we could move on to the next stage which was to create topics.

Next, we created some Kafka topics that we would use with Event Automation, and set up some credentials for accessing them.

Amazon MSK doesn't offer admin controls for the Kafka cluster, so we needed somewhere that we could run a Kafka admin client.

The simplest option was to create an EC2 server to run Kafka admin commands from. We went to the EC2 service within AWS, and clicked Launch instance.


We kept with our naming theme of "Loosehanger" to represent our fictional clothes retailer.

This would only be a short-lived server that we would keep around just long enough to run a few Kafka admin commands, so quick and simple was the priority. We went with the Amazon Linux quick start option.


Again, as this would only be a short-lived server running a small number of command-line scripts, a small, free-tier-eligible instance type fit our needs.

We didn't need to create a key pair as we weren't planning to make any external connections to the EC2 server.


We did need to create a new security group, in order to be able to enable terminal access to the server.

For the same reason, we needed to assign a public IP address to the server, as that is needed for web console access to the server's shell.

With this configured, we clicked Launch instance.


There was one more modification we needed to make to the server, so we clicked on the instance name at the top of the page to access the server instance details.


We needed to modify the firewall rules to allow access from the EC2 instance to our MSK cluster, so we needed to modify the security groups.

In the top-right menu, we navigated to Change security groups.


We chose the security group that was created for the MSK cluster.


Having both security groups gave our EC2 instance the permissions needed to let us connect from a web console, as well as the permissions needed for it to connect to Kafka.

Next, we clicked Save.


We were now ready to access the instance, so we clicked Connect.


We went with EC2 Instance Connect as it offers a web-based terminal console with nothing to install.

To open the console, we clicked Connect.


This gave us a shell in our Amazon Linux server.


We started by installing Java, as this is required for running the Kafka admin tools.

sudo yum -y install java-11

Next, we downloaded Kafka.

curl -o kafka.tgz https://downloads.apache.org/kafka/3.5.1/kafka_2.13-3.5.1.tgz
tar -zxf kafka.tgz
cd kafka_2.13-3.5.1

We matched the version of the MSK cluster (though it would still have worked if we had picked a different version).


To run the Kafka admin tools, we first needed to get connection details for the MSK cluster.

From the MSK cluster page, we clicked View client information.


We copied both the bootstrap address (labelled as the Private endpoint in the Bootstrap servers section) and the Plaintext ZooKeeper address.

Notice that as we hadn't yet enabled public access, these were both private DNS addresses, but our EC2 server (running in the same VPC) would be able to access them.


We created BOOTSTRAP and ZOOKEEPER environment variables using our copied values.

export BOOTSTRAP=b-3.loosehangermsk.krrnez.c3.kafka.eu-west-1.amazonaws.com:9092,b-1.loosehangermsk.krrnez.c3.kafka.eu-west-1.amazonaws.com:9092,b-2.loosehangermsk.krrnez.c3.kafka.eu-west-1.amazonaws.com:9092
export ZOOKEEPER=z-1.loosehangermsk.krrnez.c3.kafka.eu-west-1.amazonaws.com:2181,z-2.loosehangermsk.krrnez.c3.kafka.eu-west-1.amazonaws.com:2181,z-3.loosehangermsk.krrnez.c3.kafka.eu-west-1.amazonaws.com:2181

This was enough to let us create the topics we needed for our demonstration. We created a set of topics that are needed for the initial set of IBM Event Automation tutorials.

for TOPIC in ORDERS.NEW CANCELLATIONS DOOR.BADGEIN STOCK.MOVEMENT CUSTOMERS.NEW SENSOR.READINGS
do
    ./bin/kafka-topics.sh --create \
        --bootstrap-server $BOOTSTRAP \
        --replication-factor 3 \
        --partitions 3 \
        --config retention.bytes=25000000 \
        --topic $TOPIC
done

You can see a list of these topics, together with a description of the events that we would be producing to them, in the documentation for the data generator we use for these demos.
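
(Not something we captured in screenshots, but a quick way to double-check the topics is to list and describe them with the same admin tools. A minimal sketch, reusing the BOOTSTRAP variable from above:)

./bin/kafka-topics.sh --list --bootstrap-server $BOOTSTRAP

./bin/kafka-topics.sh --describe \
    --bootstrap-server $BOOTSTRAP \
    --topic ORDERS.NEW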


With the topics created, it was time to start setting up authentication.

We started by setting up permissions for a user able to produce events to, and consume events from, all of our topics.

We named this user producer.

for TOPIC in ORDERS.NEW CANCELLATIONS DOOR.BADGEIN STOCK.MOVEMENT CUSTOMERS.NEW SENSOR.READINGS
do
    ./bin/kafka-acls.sh --add \
        --authorizer-properties zookeeper.connect=$ZOOKEEPER \
        --allow-principal "User:producer" \
        --operation Write \
        --topic $TOPIC
    ./bin/kafka-acls.sh --add \
        --authorizer-properties zookeeper.connect=$ZOOKEEPER \
        --allow-principal "User:producer" \
        --operation Read \
        --group="*" \
        --topic $TOPIC
    ./bin/kafka-acls.sh --add \
        --authorizer-properties zookeeper.connect=$ZOOKEEPER \
        --allow-principal "User:producer" \
        --operation Describe \
        --group="*" \
        --topic $TOPIC
done

Next, we set up more limited permissions, for a second user that would only be allowed to consume events from our topics.

We named this user consumer.

for TOPIC in ORDERS.NEW CANCELLATIONS DOOR.BADGEIN STOCK.MOVEMENT CUSTOMERS.NEW SENSOR.READINGS
do
    ./bin/kafka-acls.sh --add \
        --authorizer-properties zookeeper.connect=$ZOOKEEPER \
        --allow-principal "User:consumer" \
        --operation Read \
        --group="*" \
        --topic $TOPIC
    ./bin/kafka-acls.sh --add \
        --authorizer-properties zookeeper.connect=$ZOOKEEPER \
        --allow-principal "User:consumer" \
        --operation Describe \
        --group="*" \
        --topic $TOPIC
done
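
(As an optional check we didn't capture in screenshots, the ACLs created by these commands could be listed with the same tool:)

./bin/kafka-acls.sh --list \
    --authorizer-properties zookeeper.connect=$ZOOKEEPER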

We didn't have any Kafka administration left to do, so we were finished with this EC2 admin instance. In the interests of cleaning up, and keeping the demo running costs down, we terminated the EC2 instance and deleted the new security group we created for it.

Then we enabled public access to our MSK cluster.

We started by turning on security. We went back to the MSK cluster instance page, and clicked the Properties tab.


We scrolled to the Security settings section and clicked Edit.


We enabled SASL/SCRAM authentication. This automatically enabled client TLS encryption as well.

We clicked Save changes.


We needed to wait for the cluster to update.

(This took over thirty minutes, so this was a good point to take a break!)


The prompt at the top of the MSK instance page showed that the next step was to set up secrets containing usernames and passwords for Kafka clients to use.

We clicked Associate secrets.


We wanted two secrets: one for our producer user and the other for our consumer user - each containing a username and password.

We clicked Create secret to get started.


We chose the Other type of secret, and used the Plaintext tab to create a JSON payload with a random password we generated for the producer user.

{
    "username": "producer",
    "password": "BE9rEMxwfC0eD7IQcVzC4s9csceBsKi3Enzi2wiY9B8uw73KsoNyR33vfFBKFozv"
}

We clicked Add new key to create a custom encryption key for this demo.


The default key type and usage were fine for our needs.


We gave it a name, again keeping with our "Loosehanger" clothing retailer scenario.


We set the permissions to make ourselves the administrator of the key.


With the new key created, we could select it as the encryption key for the secret containing our new producer credentials.


To use the secret for Amazon MSK credentials, we needed to give the secret a name starting with AmazonMSK_.

We went with AmazonMSK_producer.
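
(The same secret could also be created from the AWS CLI; a sketch, where the KMS key alias and the password are placeholders standing in for the key and value we created in the console:)

aws secretsmanager create-secret \
    --name AmazonMSK_producer \
    --kms-key-id alias/loosehanger-keys \
    --secret-string '{"username": "producer", "password": "<generated-password>"}'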


We clicked through the remaining steps until we could click Store.


We needed to repeat this to create a secret for our consumer user.

Again, we generated a long random password to use for clients that can only consume from our topics.

{
    "username": "consumer",
    "password": "RUkSRjUF6Nlw9420CCW50s3tdRTf3jq8R6Z0HQbneeUs8MiXtQ447OC003R538Nr"
}

We encrypted the consumer secret with the same encryption key that we had created earlier.


We had to give it a name with the same AmazonMSK_ prefix, so we went with the name AmazonMSK_consumer.


With these two secrets, our credentials were now ready.


We associated both of them with our MSK cluster.


We clicked Associate secrets.
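
(The same association can be scripted; a sketch using the AWS CLI, with placeholder ARNs for the cluster and the two secrets:)

aws kafka batch-associate-scram-secret \
    --cluster-arn <msk-cluster-arn> \
    --secret-arn-list <producer-secret-arn> <consumer-secret-arn>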


With authentication prepared, we were ready to modify the MSK configuration to allow public access to the Kafka cluster.

First, we needed to modify the cluster configuration to set a property that Amazon requires for public access. We went to the Amazon MSK Cluster configurations page, clicked on our config name, and then clicked Create revision.


We modified the following value:

allow.everyone.if.no.acl.found=false

And then clicked Create revision.


To use this modified configuration, we went back to the MSK cluster, and clicked on it.


We selected Edit cluster configuration.


We could then choose the new config revision, and click Save changes.


This took ten minutes or so to complete.


Once this was complete, we clicked the Properties tab.


We scrolled to the Networking settings section, and selected the Edit public access option.


We enabled Turn on and clicked Save changes.


There was a bit of a wait for this change to be applied.


After thirty minutes, the configuration change was complete.

We now had an Amazon MSK cluster, with the topics we wanted to use, configured to allow public access, and with two SASL/SCRAM usernames/passwords prepared for our applications to use.

We started an app producing events to the Amazon MSK topics we'd created.

First, we needed to modify our MSK cluster to allow connections from an app we would run on our laptop.

We went back to our MSK cluster instance, clicked on to the Properties tab, and scrolled to the Networking settings section.

We then clicked on the Security groups applied value.


On the security groups instance page, we clicked Edit inbound rules.


We needed to add a new rule for access to the Kafka port, 9196.

We could have added the specific source IP address for where we would run our app, but it was simpler for this quick demo to just allow access from applications running anywhere.
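
(For reference, the equivalent rule can also be added from the AWS CLI; a sketch with a placeholder security group ID:)

aws ec2 authorize-security-group-ingress \
    --group-id <msk-security-group-id> \
    --protocol tcp \
    --port 9196 \
    --cidr 0.0.0.0/0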


Our MSK cluster was now ready to allow connections.


To produce messages to our topics, we used a Kafka Connect connector. You can find the source connector we used at github.com/IBM/kafka-connect-loosehangerjeans-source. It is a data generator that periodically produces randomly generated messages, which we often use for giving demos.

To run Kafka Connect, we created a properties file called connect.properties.

We populated this with the following config. Note that the plugin.path location is a folder we downloaded the source connector jar to - you can find the jar on the Releases page for the data gen source connector.

bootstrap.servers=b-1-public.loosehangermsk.krrnez.c3.kafka.eu-west-1.amazonaws.com:9196

security.protocol=SASL_SSL
producer.security.protocol=SASL_SSL

sasl.mechanism=SCRAM-SHA-512
producer.sasl.mechanism=SCRAM-SHA-512

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="producer" password="BE9rEMxwfC0eD7IQcVzC4s9csceBsKi3Enzi2wiY9B8uw73KsoNyR33vfFBKFozv";
producer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="producer" password="BE9rEMxwfC0eD7IQcVzC4s9csceBsKi3Enzi2wiY9B8uw73KsoNyR33vfFBKFozv";

client.id=loosehanger
group.id=connect-group

key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect/offsets
plugin.path=/Users/dalelane/dev/demos/aws/connect/jars

We then created a properties file called connector.properties with the following config.

(You can see the other options we could have set in the connector README).

name=msk-loosehanger
connector.class=com.ibm.eventautomation.demos.loosehangerjeans.DatagenSourceConnector

We ran it using connect-standalone.sh. (This is a script included in the bin folder of the Apache Kafka zip you can download from kafka.apache.org).

connect-standalone.sh connect.properties connector.properties

We left this running to create a live stream of events that we could use for demos.
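
(To sanity-check the stream, a console consumer can be pointed at one of the topics. A sketch, assuming a consumer.properties file with the same security.protocol, sasl.mechanism, and sasl.jaas.config values as above, but using the consumer credentials:)

./bin/kafka-console-consumer.sh \
    --bootstrap-server b-1-public.loosehangermsk.krrnez.c3.kafka.eu-west-1.amazonaws.com:9196 \
    --topic ORDERS.NEW \
    --consumer.config consumer.properties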

We now had streams of events on MSK topics, ready to process using IBM Event Processing.

We started by following the Transform events to create or remove properties tutorial from ibm.github.io/event-automation/tutorials.

We were running an instance of IBM Event Processing in an OpenShift cluster in IBM Cloud. (For details of how we deployed this, you can see the first step in the tutorial instructions. It just involved running an Ansible playbook.)

We started by logging on to the Event Processing authoring dashboard.


And we started to create a new flow.

We created an event source in Event Processing using an Amazon MSK topic.

We went to the MSK instance page, and clicked View client information. From there, we copied the public bootstrap address.

We pasted that into the Server box in the event source configuration page. We needed to split up the comma-separated address we got from the Amazon MSK page, as Event Processing requires separate broker addresses.


We provided the consumer credentials we created earlier when setting up the MSK cluster.


The list of topics displayed in Event Processing matched the list of topics that we had configured the consumer user to have access to.

Following the tutorial instructions, we chose the ORDERS.NEW topic.


We copied in a sample message from the ORDERS.NEW topic.

Finally, we created an event processing flow using the events from an MSK topic.

We continued to create the flow as described in the tutorial instructions.

All of the Event Processing tutorials can be followed as written using the Amazon MSK cluster that we created.

using Event Endpoint Management with Amazon MSK

Next, we demonstrated the value that Event Endpoint Management can bring to an Amazon MSK cluster. We showed how adding MSK topics to a self-service catalog enables sharing and reuse of existing topics, wherever they are hosted. And we showed how the addition of an Event Gateway maintains control and governance of these topics as they are shared.

The diagram above is a simplified description of what we created. We used the same MSK cluster in AWS that we had used for the previous demo, as it already had a variety of topics and a data generator producing live streams of events to them. This time we used it with an instance of Event Endpoint Management (that was running in our Red Hat OpenShift cluster in IBM Cloud).

We added our Amazon MSK topics to the catalog, and configured an Event Gateway to govern access to them. We could have run the Event Gateway in OpenShift, alongside the endpoint manager. However, for this demonstration, we wanted to show the flexibility of running the Event Gateway in the same environment as the Kafka cluster. This removes the need for egress from the AWS environment when your Kafka applications are also running in AWS.

Finally, we showed all of this in action by running a Kafka consumer, consuming events from the MSK topics. The consumer was using credentials created in the Event Endpoint Management catalog and connected via the Event Gateway.

How we created a demonstration of this...
We started by creating a security group (for the load balancer that would provide the external connection for the Event Gateway).

We started by going to the Security Groups section of the EC2 service in AWS.

We clicked Create security group.


We created a security group that would accept connections on port 443.


Next, we went to the Target groups page and clicked Create target group.


We chose IP addresses as the target type, and gave it a name that explained this would be the target address for the Event Gateway.


We chose port 443. Other port numbers would have been fine, but this is consistent with the port number used to access the Event Gateway when it is running in OpenShift.

For the VPC, we chose the same VPC that the MSK cluster was in.


Next, we defined a healthcheck. Again, we did this to be consistent with the way the Event Gateway runs when managed by the Operator in OpenShift, by using the same protocol (HTTP), port number (8081), and path (/ready) that are used for probes in Kubernetes.

Then we clicked Next.


The default values were fine for the next Register targets step, so we clicked Create target group.

Then, we created the network load balancer to give the Event Gateway an external address.

We started by going to the Load balancers page within EC2, and clicked Create load balancer.


We chose Network Load Balancer and clicked Create.


We gave it a name, and made it Internet-facing because we wanted the event gateway to be accessible from outside of AWS.


We selected one of the three availability zones created when we set up the MSK cluster as the zone to put the gateway in.


Then we selected security groups for the load balancer to use: the security group created for the MSK cluster, and the new one created for the gateway.


That was everything we needed, so we clicked Create load balancer.


It took a little while to complete the setup.


We waited for it to no longer be in a Provisioning state before continuing.

Now that we had an external DNS address, we could create a certificate for the Event Gateway.

We clicked into the network load balancer that we'd just created to view the DNS name that we would give to the gateway.


We'd installed IBM Event Endpoint Management using the demo Ansible playbook. This setup uses a Kubernetes certificate manager in OpenShift to set up the SSL/TLS certs needed. This meant the issuer and CA we needed to create a certificate for the Event Gateway we would run in AWS were in OpenShift.

Our next step was to go into OpenShift so that we could create a new cert.


We just needed to copy the DNS name for the gateway to use in the cert.


That gave us what we needed to create the certificate.

Notice that we used the DNS name in two places: spec.dnsNames and spec.uris. You can see more information about the requirements for this certificate in the Event Endpoint Management documentation.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
    name: amazon-eem-gw-cert
    namespace: event-automation
spec:
    dnsNames:
    - loosehanger-event-gateway-68367c28f6f6c440.elb.eu-west-1.amazonaws.com
    duration: 2160h0m0s
    issuerRef:
        kind: Issuer
        name: my-eem-gateway-ibm-egw-iss
    privateKey:
        algorithm: RSA
        rotationPolicy: Always
    secretName: amazon-eem-gw-cert
    subject:
        organizations:
        - IBM Event Endpoint Management
    uris:
    - egw://loosehanger-event-gateway-68367c28f6f6c440.elb.eu-west-1.amazonaws.com:443/amazon-gateways/eu-west-1a
    usages:
    - client auth
    - digital signature
    - server auth

The cert manager created the certificate based on this spec, and stored it in a Kubernetes secret in the event-automation namespace where Event Endpoint Management was running.

We downloaded the ca.crt, tls.crt, and tls.key values from this secret into three files that would be needed to run the gateway.
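
(We pulled these out of the Kubernetes secret; a sketch of the kind of commands this involves, assuming the oc CLI is logged in to the OpenShift cluster:)

oc get secret amazon-eem-gw-cert -n event-automation -o jsonpath='{.data.ca\.crt}'  | base64 -d > ca.crt
oc get secret amazon-eem-gw-cert -n event-automation -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
oc get secret amazon-eem-gw-cert -n event-automation -o jsonpath='{.data.tls\.key}' | base64 -d > tls.key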

To make the certificates available to the Event Gateway we would run in AWS, we created a container image that would hold the certificates.

We decided to use a custom container image to hold the certificates that we'd just created. We wanted to store this as a private image in Amazon's container registry, so we went to the ECR service in AWS.

Then we clicked Get started.


We chose the repository name event-gateway-certs and set the visibility to Private.


That gave us an image name that we could push a container image to. We clicked on the View push commands button to get the details for this.


First we needed to copy the login command.


We used this to build and push a custom container image to ECR. Our Dockerfile looked like this:

FROM registry.access.redhat.com/ubi8/ubi-minimal:latest

COPY ca.crt /certs/eem/ca.pem
COPY tls.crt /certs/eem/client.pem
COPY tls.key /certs/eem/client.key
COPY tls.crt /certs/eem/egwclient.pem
COPY tls.key /certs/eem/egwclient-key.pem

VOLUME ["/certs"]

The role of this container image would simply be to make the certificate files that we had downloaded in the previous step available to the Event Gateway.
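
(The build and push itself followed the commands that ECR displays; a sketch with a placeholder account ID, assuming the Dockerfile and certificate files are in the current directory:)

aws ecr get-login-password --region eu-west-1 | \
    docker login --username AWS --password-stdin <account-id>.dkr.ecr.eu-west-1.amazonaws.com
docker build -t <account-id>.dkr.ecr.eu-west-1.amazonaws.com/event-gateway-certs:latest .
docker push <account-id>.dkr.ecr.eu-west-1.amazonaws.com/event-gateway-certs:latest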

More information about the certificate file names we used, and the way these files would be used, can be found in the documentation for running stand-alone Event Gateways.

We then created another security group, this time for the Event Gateway.

The previous security group we had created was for the network load balancer that would provide the external address for the Event Gateway. That was a security group that needed to allow connections from external clients.

Now we needed to create a security group for the Event Gateway container itself, which would need to receive connections from the load balancer, and make connections to the Amazon MSK cluster.

We started by going back to the Security Groups page.


We gave the security group a name that explained what this would be used for.

We defined two inbound rules.

The first was for the healthcheck address we had defined previously. This does not return sensitive data, so we allowed connections to this from anywhere, which would allow both the load balancer and ECS to use the healthcheck.

The second was for the port used for Kafka traffic coming from the network load balancer, so we specified the 8092 port number and the security group for the network load balancer.


The security group was now ready to use.

We created a task definition for running the Event Gateway in ECS.

We went to the Elastic Container Service next, to start creating a new task definition for the Event Gateway.


We gave our task definition the name loosehanger-event-gateway.

We opted for a launch type of AWS Fargate as that was a serverless option that would let us run the Event Gateway when it was being used.


We chose Linux/X86/64 and a relatively small task size for our quick demo.

We also needed to create a Task execution role to run the container. The role needed to be able to pull the Event Gateway image from the IBM Entitled Registry, so it needed access to credentials for this.


To enable this, we opened the AWS Secrets Manager so that we could Store a new secret to hold an Entitled Registry pull secret.


We created a new entitlement key from myibm.ibm.com/products-services/containerlibrary.

We used this to create another Other type of secret, using the Plaintext tab to enter a JSON payload like this:

{
    "username": "cp",
    "password": "our-entitled-registry-key"
}

Note that your entitled registry key is a password that you shouldn't share publicly. The key you can see in the screenshot above has since been revoked.


We encrypted this secret using the encryption key that we'd created earlier.


We gave the secret a helpful description to remind us what this secret contains.


The secret was ready to use. To use it in our task definition, we clicked into the instance to get the ARN for it.


We copied the ARN for the secret - so that we could add this to the container definition as the pull secret.


Back in the ECS task definition, we started filling in the details of the Event Gateway container.

As described in the Event Endpoint Management documentation for installing a stand-alone gateway, we needed to use an Image URI of cp.icr.io/cp/ibm-eventendpointmanagement/egw:11.0.5

We marked this as an Essential container, and added two port mappings: one for the healthcheck, and one for Kafka traffic.

We pasted the pull secret ARN in for pulling the Event Gateway image.

Pasting in the ARN wasn't sufficient on its own - the task execution role also needed permission to access the pull secret and decrypt it.


To do this, once the task execution role was created, we needed to attach some additional policies. Firstly, we needed to give it permission to pull our custom private certificates image from ECR.


In order for the task execution role to decrypt the Entitled Registry pull secret, we needed to give it details of the loosehanger-keys encryption key we'd created earlier. We started by getting the ARN for this key.


In order to pull the Event Gateway image from the Entitled Registry, the task execution role needed to be able to access:

* the Entitled Registry credentials we'd stored in Secrets Manager

* the encryption key needed to decrypt that secret

We added the ARNs for both of these to a custom permissions policy.
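
(The policy looked roughly like this; a sketch with placeholder ARNs:)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "<entitled-registry-secret-arn>"
        },
        {
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "<loosehanger-keys-kms-key-arn>"
        }
    ]
}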


Finally, we needed to set some environment variables to add to the Event Gateway container. To get the right values for this, we returned to the Event Endpoint Management manager in OpenShift.

We needed to copy the API endpoint URI.


This provided the value for the backendURL environment variable, which we set to https://my-eem-manager-ibm-eem-gateway-event-automation.itzroks-120000f8p4-ivahj5-6ccd7f378ae819553d37d5f2ee142bd6-0000.eu-gb.containers.appdomain.cloud

We set the value of the GATEWAY_PORT environment variable to 8092 as that is the port number we had chosen for Kafka traffic in the port mappings.

We set the value of the GATEWAY_HEALTH_PORT environment variable to 8081 as that is the port number we had chosen for the healthcheck in the port mappings.

We set the value of the KAFKA_ADVERTISED_LISTENER environment variable to loosehanger-event-gateway-68367c28f6f6c440.elb.eu-west-1.amazonaws.com:443 as it was the DNS name we had created for the Event Gateway load balancer.

We set the value of the certPaths environment variable to the following, as it matched the locations we'd used in the custom certs container we had built:

/certs/eem/client.pem,/certs/eem/client.key,/certs/eem/ca.pem,/certs/eem/egwclient.pem,/certs/eem/egwclient-key.pem

Finally, we added a few additional environment variables to match the way the gateway is run when managed by the Operator in OpenShift.

GATEWAY_REGISTRATION_SPEC_LOCATION
/opt/ibm/gateway/openapi-specs/gw-director-openapi.yaml

externalGatewayConfigJsonFile
/config/gatewayConfig.json

features
eemgw,core

These were all of the environment variables that we needed.
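
(For reference, gathered into one place and shown as name=value pairs, the full set was:)

backendURL=https://my-eem-manager-ibm-eem-gateway-event-automation.itzroks-120000f8p4-ivahj5-6ccd7f378ae819553d37d5f2ee142bd6-0000.eu-gb.containers.appdomain.cloud
GATEWAY_PORT=8092
GATEWAY_HEALTH_PORT=8081
KAFKA_ADVERTISED_LISTENER=loosehanger-event-gateway-68367c28f6f6c440.elb.eu-west-1.amazonaws.com:443
certPaths=/certs/eem/client.pem,/certs/eem/client.key,/certs/eem/ca.pem,/certs/eem/egwclient.pem,/certs/eem/egwclient-key.pem
GATEWAY_REGISTRATION_SPEC_LOCATION=/opt/ibm/gateway/openapi-specs/gw-director-openapi.yaml
externalGatewayConfigJsonFile=/config/gatewayConfig.json
features=eemgw,core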


We enabled logging, as it can be useful when identifying what the gateway is doing.


We defined a healthcheck action (a curl of the gateway's readiness URL) for the container. Again, we were following the pattern for the probes that are set up when the Operator runs the Event Gateway in Kubernetes.
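
(We didn't include the exact command in the text above, but it amounted to a curl of the readiness endpoint on the healthcheck port; in task-definition JSON terms, something like:)

"healthCheck": {
    "command": ["CMD-SHELL", "curl -f http://localhost:8081/ready || exit 1"]
}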


Finally, we needed to make the certificates we created for the gateway available, by adding the custom certificates container we built as a secondary container.


We gave the certs container a command so that ECS wouldn't treat it as a crashed container. We didn't need this container to run anything; we just needed it to share a volume with the Event Gateway.
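
(The specific command isn't important; any long-running no-op works, for example something like:)

tail -f /dev/null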


We achieved this by giving the gateway container read-only access to the storage from the certificates container.

This was everything we needed to complete the task specification, so at this point we clicked Create.

Now we were ready to create an ECS cluster to run our Event Gateway task.

We were ready to run the Event Gateway task, so we went to the Clusters page in ECS and clicked Create cluster.


We defined a Fargate cluster and gave it a name.


Then we waited for it to be provisioned.


Once it was ready, we clicked into the cluster instance, went to the Services tab, and clicked the Create button.


For the compute options, we opted for the Launch type option, as we only wanted to run a single instance of the gateway.


We chose our Event Gateway task definition, and gave the service a name.


We chose the same availability zone to run the Event Gateway in that we'd chosen when creating the network load balancer.

Then we chose the security group we had created for the gateway container.


We chose the load balancer we had created for the gateway.

For the listener, we used the listener we had defined when creating the load balancer.

Finally, we clicked Create.


Now we just needed to wait for the Event Gateway to start.

We verified that the Event Gateway running in AWS had connected to the Endpoint manager running in OpenShift.

A good way to know when the Event Gateway is ready is to check the Gateways tab in the Event Endpoint Management manager. We could see our new Event Gateway listed in the table (alongside the existing Event Gateway we had created in OpenShift in IBM Cloud).

We could now start adding Amazon MSK topics to the Event Endpoint Management catalog.

First, we went to the topics tab in the Event Endpoint Management manager.


We needed to add the Amazon MSK cluster first, so we clicked Add new cluster.


We gave this new Kafka cluster a name: Amazon MSK.


We needed to get the bootstrap address for the MSK cluster, using the same View client information button as before. Note that we needed to split the public bootstrap address we got from Amazon into separate broker addresses.


We needed to accept the certificates when prompted.


We used the consumer credentials that we had created for the MSK cluster, which had permission to consume from all of our application topics.


With the credentials accepted, we were now ready to start adding topics.


We confirmed that all of the topics our consumer credentials had access to were listed.


We selected all of them. We could have given them different aliases, but we decided to use the existing topic names as-is.


Our topics were now added to the catalog, ready for additional documentation and publishing.


For each topic, we needed to write additional documentation.


We added a recent message from the topic as a sample, to make the catalog documentation more useful.


Finally, we could click Publish.


We needed to choose the gateway groups we wanted to be able to consume this topic through. We chose only the Event Gateway running in AWS.


Our topic was published! We were now able to repeat this for our other Amazon MSK topics.

Finally, to show this all working, we created a Kafka consumer to receive events from Amazon MSK topics through the Event Gateway.

From the Event Endpoint Management catalog, we could review what people finding our topics in the catalog would see.


To show this in action, we created credentials for consuming from the topic.


This gave us a unique username and password for consuming from this topic through the Event Gateway.


We clicked Download certificates to download the Event Gateway certificate as a PEM file.

We copied the bootstrap address from the Servers section of the Catalog page.

We put all of this: the location of the downloaded certificate file, the bootstrap address, and username and password we had generated from the Catalog, into a properties file. It looked like this:

bootstrap.servers=loosehanger-event-gateway-68367c28f6f6c440.elb.eu-west-1.amazonaws.com:443
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="eem-582bdccb-f1b4-479d-940a-7480147e70b1" password="018c80b7-218b-4eed-a8f5-169d2cb64206";
ssl.truststore.location=/Users/dalelane/Downloads/certificate_loosehanger-event-gateway-68367c28f6f6c440.elb.eu-west-1.amazonaws.com_443.pem
ssl.truststore.type=PEM
group.id=test-consumer-group
enable.auto.commit=false

We could use this from kafka-console-consumer.sh to start consuming events through the Event Gateway.
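
(Assuming the properties above were saved to a file called eem.properties, the command looked something like this, with the topic name being whichever topic we had subscribed to in the catalog:)

./bin/kafka-console-consumer.sh \
    --bootstrap-server loosehanger-event-gateway-68367c28f6f6c440.elb.eu-west-1.amazonaws.com:443 \
    --topic ORDERS.NEW \
    --consumer.config eem.properties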

using Event Automation with Amazon MSK

Finally, we demonstrated how these could be combined, bringing together the value of both of the previous demonstrations.

Making Amazon MSK topics available through a self-service catalog can enable much wider reuse of these streams of events. And providing a low-code authoring canvas for processing these events can extend this use beyond just developers, enabling both business and IT teams to define the scenarios they need to respond to.

For this final demonstration, we again used the same Amazon MSK cluster, with the same range of topics and live streams of events as before. We had already added these to the Event Endpoint Management catalog for the previous demo, so for this demonstration we showed how MSK topics found in the catalog can easily be used in Event Processing to quickly identify new real-time insights.

How we created a demonstration of this...
We demonstrated that an Event Endpoint Management catalog can contain topics from many different Kafka distributions, including topics from Amazon MSK.

We had published two of the topics so far to be able to demonstrate Event Endpoint Management.


With them both available in the catalog, that was enough to demonstrate another of the Event Processing tutorials.


We started by showing that we could still add topics to the catalog from non-MSK Kafka clusters. We clicked Add topic and then chose an Event Streams cluster.


This was useful to show the benefits of aliases. If you have topics with the same name in different Kafka clusters, and want to add both to the catalog, aliases let you differentiate them.


As the owner of the topic, we could see which topic(s) were hosted on which Kafka clusters.


However, for someone finding the topics in the catalog, they all just look like Kafka topics. You can use tags to identify where the topics are from if you want developers to know, as we had done here.

Next, to process these topics using IBM Event Processing, we created a new flow.

We went back to the Event Processing UI, running in our OpenShift cluster in IBM Cloud.


We created a new event processing flow and gave it a name.

We defined event sources based on Amazon MSK topics, using details from the Event Endpoint Management catalog.

We created an event source, using the bootstrap address we copied from the Event Endpoint Management catalog page.


We needed to accept the certificates as before.


We used credentials that we created in the Event Endpoint Management catalog.


Then we confirmed the topic name.


We provided the sample message from the catalog to give Event Processing information about the properties of messages on this topic.


Then we started repeating this to add another event source.


This second event source was also based on an Amazon MSK topic that we discovered in the Event Endpoint Management catalog and accessed via the Event Gateway running in AWS.


Again, we provided the bootstrap address for the Event Gateway that we copied from the catalog page.


We needed to confirm the certificates.


We used the Event Endpoint Management catalog to generate credentials unique for this second topic.


We added these credentials to the Event Processing configuration.


Then we confirmed the topic name.


And finally provided the sample message to give Event Processing info about the properties of messages on this second topic.

Finally, we added an interval join to correlate events from these two MSK topics.

We used an Interval join to join these two MSK topics.


We gave this join a name.


We used the assistant to help us identify the attributes of each stream of events that we could join on.


We used the visualisation to help us specify the join window.


Then we chose, and named, the attributes from each stream of events that we wanted to output.


Finally, to show everything working, we ran the flow and viewed the results.

Our goal with this blog post was to demonstrate what can be done with IBM Event Automation, with a particular focus on the benefits of composability. By taking advantage of the de-facto standard nature of the Kafka protocol, we can layer additional capabilities on top of Apache Kafka clusters, wherever they are running.

Our demonstrations were intended to provide an illustrative example of using Event Automation with MSK. They were absolutely not meant to be a description of how to use Amazon services in a perfect or optimal way, but instead focused on a quick and simple way to show what is possible. We wanted to inspire you as to how you could get more out of your own Amazon MSK cluster.

For more information about any of the ideas that we have shared here, please see the Event Automation documentation, or get in touch with us.


Amazon Web Services (AWS), Amazon Managed Streaming for Apache Kafka (MSK), Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), and Amazon Elastic Container Registry (ECR) are trademarks of Amazon Web Services. This blog post is intended to explain how we demonstrated the use of these services, but does not imply any affiliation with, or endorsement by, Amazon or AWS.