
Re: Question about Critical Security Findings in kafka-exporter dependency in Strimzi images

Jakub Scholz
 

Hi Kerstin,

I'm not really a Golang expert. As for CVE-2022-23806, the crypto functions will be used by the Kafka Exporter where mTLS is used between it and the Kafka brokers. CVE-2021-38297 seems to apply only to WASM modules, in which case I wonder whether it applies here at all. But obviously it will keep showing up in scanners anyway.

Did you raise it on the Kafka Exporter project as well? There has not been much development going on, but occasional releases do happen there. The last commit seems to be from January. In general, we tend to rely on the binaries provided by the other projects, because having our own build of something like this requires a lot of time (CI, updates, know-how, etc.). But if there is no new release with a fix, we might need to decide whether we want to fork it and maintain our own build, or find some other project for exporting the consumer lag.
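In case it helps with tracking this down on your side: the Go toolchain a binary was built with is embedded in the binary itself, which is what the scanners key off. A rough sketch of checking it - the image tag and the path to the kafka_exporter binary are illustrative guesses, not the actual Strimzi image layout:

id=$(docker create quay.io/strimzi/kafka:0.29.0-kafka-3.2.0)           # illustrative tag
docker cp "$id":/opt/kafka-exporter/kafka_exporter ./kafka_exporter    # path is a guess
docker rm "$id"
go version -m ./kafka_exporter    # prints the embedded Go version (e.g. go1.17.1) and module list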

Thanks & Regards
Jakub

On Wed, Jun 22, 2022 at 4:13 PM kerstin.maier via lists.cncf.io <kerstin.maier=mercedes-benz.com@...> wrote:
Hi,
we do regular automatic security scans of the Strimzi images we use in our organization, and the latest images always have a few CRITICAL findings. At the moment these are
NVD - CVE-2021-38297 (nist.gov) and NVD - CVE-2022-23806 (nist.gov).

We took a look at where this is coming from, and it seems it's because the latest Kafka Exporter release 1.4.2 (from September 21st, 2021) still comes with Go 1.17.1:
https://github.com/danielqsj/kafka_exporter/tags

Looking at the GitHub repo of Kafka Exporter, it doesn't look as if anybody is actively working on it anymore at the moment. We are wondering: are there any plans from Strimzi to deal with such dependencies that aren't regularly updated?
I assume many projects do regular security scans of their images, and if some dependencies aren't updated regularly or at all anymore, the critical findings won't disappear.

Thanks,
Kerstin


Question about Critical Security Findings in kafka-exporter dependency in Strimzi images

kerstin.maier@...
 

Hi,
we do regular automatic security scans of the Strimzi images we use in our organization, and the latest images always have a few CRITICAL findings. At the moment these are
NVD - CVE-2021-38297 (nist.gov) and NVD - CVE-2022-23806 (nist.gov).

We took a look at where this is coming from, and it seems it's because the latest Kafka Exporter release 1.4.2 (from September 21st, 2021) still comes with Go 1.17.1:
https://github.com/danielqsj/kafka_exporter/tags

Looking at the GitHub repo of Kafka Exporter, it doesn't look as if anybody is actively working on it anymore at the moment. We are wondering: are there any plans from Strimzi to deal with such dependencies that aren't regularly updated?
I assume many projects do regular security scans of their images, and if some dependencies aren't updated regularly or at all anymore, the critical findings won't disappear.

Thanks,
Kerstin


Re: strimzi operator running namespaced

Jakub Scholz
 

Strimzi requires access to some cluster-wide resources for some important features such as rack awareness. It is also required, for example, for node-port access or disk resizing. You can disable some of them if you do not need them, but at a minimum you would need to create the CRDs and the ClusterRoles. The ClusterRoleBindings can possibly be changed to RoleBindings if you are willing to sacrifice those features.
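To give an idea of what that could look like (just a sketch - a cluster admin would still need to create the CRDs and the ClusterRoles once, and the role and ServiceAccount names below follow the upstream install files but should be double-checked against the release you deploy):

kubectl -n my-namespace create rolebinding strimzi-cluster-operator \
  --clusterrole=strimzi-cluster-operator-namespaced \
  --serviceaccount=my-namespace:strimzi-cluster-operator
kubectl -n my-namespace create rolebinding strimzi-cluster-operator-entity-operator-delegation \
  --clusterrole=strimzi-entity-operator \
  --serviceaccount=my-namespace:strimzi-cluster-operator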

Thanks & Regards
Jakub

On Tue, Jun 21, 2022 at 8:47 PM <dfernandez@...> wrote:

Hi guys,

I am trying to install the Strimzi operator 0.29, but I am running on a multi-tenant Kubernetes cluster, so I have limited permissions. For example, I can't create ClusterRole/Role objects.

Is there a way the operator could run at a namespaced level with fewer permissions over the cluster?

 

thanks!


strimzi operator running namespaced

dfernandez@...
 

Hi guys,

I am trying to install the Strimzi operator 0.29, but I am running on a multi-tenant Kubernetes cluster, so I have limited permissions. For example, I can't create ClusterRole/Role objects.

Is there a way the operator could run at a namespaced level with fewer permissions over the cluster?

 

thanks!


[ANNOUNCE] [RELEASE] Strimzi Canary 0.3.0 released

Paolo Patierno
 

We have released Strimzi Canary 0.3.0.
Strimzi canary is a tool which acts as an indicator of whether Kafka clusters are operating correctly. This is achieved by creating a canary topic and periodically producing and consuming events on the topic and getting metrics out of these exchanges.
For more details about what it does and how to use it, check the README.md file.
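As a quick way to see it in action - the service name, port and metric prefix below are assumptions rather than values taken from the install files - the metrics from those produce/consume round trips can be scraped like any other Prometheus endpoint:

kubectl -n kafka port-forward svc/strimzi-canary 8080:8080 &
sleep 2    # give the port-forward a moment to start
curl -s http://localhost:8080/metrics | grep strimzi_canary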

For more details and installation files, go to: https://github.com/strimzi/strimzi-canary/releases/tag/0.3.0

Thanks to everyone who contributed to this release!

Thanks & Regards
Strimzi team

Paolo Patierno
Principal Software Engineer @ Red Hat
Microsoft MVP on Azure

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Re: RC2 of Strimzi Canary 0.3.0

kwall@...
 

I tested out the RC3 image; all seems good for our use cases.


RC3 of Strimzi Canary 0.3.0

Paolo Patierno
 

*We have prepared the RC3 of the new Strimzi Canary 0.3.0 release*

For more details and installation files, go to: https://github.com/strimzi/strimzi-canary/releases/tag/0.3.0-rc3

Any feedback can be provided on the Strimzi mailing list, on the #strimzi Slack channel on CNCF Slack or as a GitHub issue.

Thanks & Regards
Strimzi team

Paolo Patierno
Principal Software Engineer @ Red Hat
Microsoft MVP on Azure

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


RC2 of Strimzi Canary 0.3.0

Paolo Patierno
 

*We have prepared the RC2 of the new Strimzi Canary 0.3.0 release*

For more details and installation files, go to: https://github.com/strimzi/strimzi-canary/releases/tag/0.3.0-rc2

Any feedback can be provided on the Strimzi mailing list, on the #strimzi Slack channel on CNCF Slack or as a GitHub issue.

Thanks & Regards
Strimzi team

Paolo Patierno
Principal Software Engineer @ Red Hat
Microsoft MVP on Azure

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Re: RC1 of Strimzi Canary 0.3.0

Paolo Patierno
 

Thanks for reporting this kwall!

I think that it's an important fix because without it the re-auth feature is unusable, and so is the canary itself if re-auth is enabled.
Before putting a 0.3.0-RC2 out, I would like to ping the Sarama maintainers to see if there is room for approving your fix and releasing a new patched Sarama version 1.33.1.

Thanks,
Paolo

Paolo Patierno
Principal Software Engineer @ Red Hat
Microsoft MVP on Azure

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


From: cncf-strimzi-users@... <cncf-strimzi-users@...> on behalf of kwall@... <kwall@...>
Sent: Thursday, May 26, 2022 11:15 AM
To: cncf-strimzi-users@... <cncf-strimzi-users@...>
Subject: Re: [cncf-strimzi-users] RC1 of Strimzi Canary 0.3.0
 
An issue has been discovered in the Strimzi RC1 build. The root cause is a defect in the new KIP-368 implementation within the latest Sarama release, 1.33.0, included in the Strimzi RC. The defect is described in https://github.com/Shopify/sarama/issues/2233 and leads to the canary unexpectedly disconnecting from the Kafka cluster; it may also lead to unexpected out-of-memory problems in the Kubernetes pod hosting the canary.

There is already a PR open against Sarama with a proposed fix. It is hoped that if the fix is accepted and a new Sarama micro release is made soon, the Strimzi Canary RC will be respun.


Re: RC1 of Strimzi Canary 0.3.0

kwall@...
 

An issue has been discovered in the Strimzi RC1 build. The root cause is a defect in the new KIP-368 implementation within the latest Sarama release, 1.33.0, included in the Strimzi RC. The defect is described in https://github.com/Shopify/sarama/issues/2233 and leads to the canary unexpectedly disconnecting from the Kafka cluster; it may also lead to unexpected out-of-memory problems in the Kubernetes pod hosting the canary.

There is already a PR open against Sarama with a proposed fix. It is hoped that if the fix is accepted and a new Sarama micro release is made soon, the Strimzi Canary RC will be respun.


[ANNOUNCE] [RELEASE] Strimzi Kafka Operators 0.29.0

Jakub Scholz
 

Strimzi Kafka Operators 0.29.0 has been released. The main changes in this release include:
* Support for new Apache Kafka releases (3.0.1, 3.1.1 and 3.2.0)
* Renew user certificates in User Operator only during maintenance windows
* New rebalancing modes in the `KafkaRebalance` custom resource to add or remove brokers
* Experimental KRaft mode (ZooKeeper-less Kafka)
* Experimental support for the s390x platform

For more details and installation files, go to https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.29.0

You can also check a video about the main new features on our YouTube channel: https://youtu.be/lUsIoFTZr00

Important: This release supports only the API version v1beta2 and CRD version apiextensions.k8s.io/v1. If upgrading from Strimzi 0.22, migration to v1beta2 needs to be completed for all Strimzi CRDs and CRs before the upgrade to 0.29 is done! If upgrading from a Strimzi version earlier than 0.22, you need to first install the CRDs from Strimzi 0.22 and complete the migration to v1beta2 for all Strimzi CRDs and CRs before the upgrade to 0.29 is done! For more details about the CRD upgrades, see the documentation.
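A hedged way to double-check the migration before upgrading (shown for one CRD - repeat for the other Strimzi CRDs; the expectation that only "v1beta2" remains listed is based on the CRD upgrade procedure in the documentation):

kubectl get crd kafkas.kafka.strimzi.io -o jsonpath='{.status.storedVersions}{"\n"}'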

Thanks to everyone who contributed to these releases!

Thanks & Regards
Strimzi team


RC2 of Strimzi Kafka Operators 0.29.0 is available for testing

Jakub Scholz
 

Release candidate 2 of Strimzi Kafka Operators 0.29.0 is now available for testing. The changes since RC1 are:
* Fix Kafka, KafkaConnect and Cruise Control examples
* Fix error handling in KafkaRebalance processing
* Fix bugs in Rack-awareness and upgrade system tests

Any feedback can be provided on the Strimzi mailing list, on the #strimzi Slack channel on CNCF Slack or as a GitHub issue.

Thanks & Regards
Strimzi team


RC1 of Strimzi Canary 0.3.0

Paolo Patierno
 

*We have prepared the RC1 of the new Strimzi Canary 0.3.0 release*

For more details and installation files, go to: https://github.com/strimzi/strimzi-canary/releases/tag/0.3.0-rc1

Any feedback can be provided on the Strimzi mailing list, on the #strimzi Slack channel on CNCF Slack or as a GitHub issue.

Thanks & Regards
Strimzi team

Paolo Patierno
Principal Software Engineer @ Red Hat
Microsoft MVP on Azure

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


RC1 of Strimzi Kafka Operators 0.29.0 is available for testing

Jakub Scholz
 

Release candidate 1 of Strimzi Kafka Operators 0.29.0 is now available for testing. The changes in this release include for example:
* Support for new Apache Kafka releases (3.0.1, 3.1.1 and 3.2.0)
* Renew user certificates in User Operator only during maintenance windows
* New rebalancing modes in the `KafkaRebalance` custom resource to add or remove brokers
* Experimental KRaft mode (ZooKeeper-less Kafka)
* Experimental support for the s390x platform


Any feedback can be provided on the Strimzi mailing list, on the #strimzi Slack channel on CNCF Slack or as a GitHub issue.

Thanks & Regards
Strimzi team


Re: Attempting to install Strimzi/Kafka on my microk8s cluster

Jakub Scholz
 

The first node looks ok. But if you check the description, you can see that some of the nodes are marked with the taint node.kubernetes.io/unreachable:NoSchedule.

That is what the events are complaining about. I expect that the taint is set there for some reason - but I have never used MicroK8s, so I am not sure where to look for more info. Also, the Conditions in the descriptions are Unknown, so my guess is that maybe the cluster cannot communicate with those nodes? Maybe some MicroK8s forum might give you better answers on what might be causing this.
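If it helps, a quick way to list which nodes report Ready and what taints they carry (the same kubectl commands you already use, prefixed with "microk8s"):

microk8s kubectl get nodes -o wide
microk8s kubectl describe nodes | grep -E '^(Name|Taints):'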

Jakub

On Sun, May 8, 2022 at 1:38 AM <tfv01@...> wrote:
Thank you Jakub, appreciate your response!

I'm not exactly sure I understand how to check whether each node is "reachable", but here is the output of "kubectl describe nodes". I don't know if the fact that all three Pis are listed means that they are in fact reachable.
-Tim

ubuntu@pi4-01:~$ microk8s kubectl describe nodes
Name:               pi4-01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-01
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 22:02:48 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-01
  AcquireTime:     <unset>
  RenewTime:       Sat, 07 May 2022 23:34:18 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 07 May 2022 23:29:57 +0000   Sat, 07 May 2022 21:18:54 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.10
  Hostname:    pi4-01
Capacity:
  cpu:                4
  ephemeral-storage:  245747040Ki
  memory:             7998736Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244698464Ki
  memory:             7896336Ki
  pods:               110
System Info:
  Machine ID:                 b4d5f79c8636436cbd291ba7ac7f14c9
  System UUID:                b4d5f79c8636436cbd291ba7ac7f14c9
  Boot ID:                    3e28585a-dceb-4793-8425-f9d80f92a939
  Kernel Version:             5.4.0-1052-raspi
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (10 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  default                     zookeeper-deployment-1-6c88c95964-496xq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d
  ingress                     nginx-ingress-microk8s-controller-jmkgb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kafka                       strimzi-cluster-operator-74f9f5d7c7-gs5df          200m (5%)     1 (25%)     384Mi (4%)       384Mi (4%)     4h3m
  kube-system                 coredns-588fd544bf-x9jxf                           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     306d
  kube-system                 dashboard-metrics-scraper-db65b9c6f-cq2t8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 heapster-v1.5.2-5bc67ff868-2tkbf                   100m (2%)     100m (2%)   184720Ki (2%)    184720Ki (2%)  306d
  kube-system                 kubernetes-dashboard-c5b698784-wnfqj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 monitoring-influxdb-grafana-v4-6cc44d985f-qstsn    200m (5%)     200m (5%)   600Mi (7%)       600Mi (7%)     306d
  metallb-system              controller-55d8b88d7f-tkvzq                        100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     17d
  metallb-system              speaker-ms5z4                                      100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                800m (20%)       1500m (37%)
  memory             1468816Ki (18%)  1571216Ki (19%)
  ephemeral-storage  0 (0%)           0 (0%)
Events:              <none>
 
 
Name:               pi4-02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-02
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 23:02:47 +0000
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-02
  AcquireTime:     <unset>
  RenewTime:       Fri, 25 Feb 2022 05:08:28 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.11
  Hostname:    pi4-02
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7997148Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7894748Ki
  pods:               110
System Info:
  Machine ID:                 37c8b0e6c19c4bc09f1aace54cd91d55
  System UUID:                37c8b0e6c19c4bc09f1aace54cd91d55
  Boot ID:                    02cb905f-359b-4584-b2c6-d20953e07322
  Kernel Version:             5.11.0-1023-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  default                     kafka-broker0-86669885c5-xshms             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         125d
  default                     zookeeper-deploy-7f5bb9785f-2tqf8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  ingress                     nginx-ingress-microk8s-controller-wzbnf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  metallb-system              speaker-86pmc                              100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (2%)   100m (2%)
  memory             100Mi (1%)  100Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>
 
 
Name:               pi4-03
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-03
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 23:03:01 +0000
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-03
  AcquireTime:     <unset>
  RenewTime:       Fri, 25 Feb 2022 05:08:24 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.12
  Hostname:    pi4-03
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7997148Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7894748Ki
  pods:               110
System Info:
  Machine ID:                 5f870ea041464a25a73f93fb37cec85c
  System UUID:                5f870ea041464a25a73f93fb37cec85c
  Boot ID:                    15e11c9e-4674-45c7-ac7f-021e83a17eff
  Kernel Version:             5.11.0-1023-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  default                     kafka-broker0-86669885c5-9k6f6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  default                     nginx-f89759699-8zm68                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         124d
  default                     zookeeper-deploy-7f5bb9785f-4j8kt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  ingress                     nginx-ingress-microk8s-controller-p6c9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  metallb-system              speaker-h28vg                              100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (2%)   100m (2%)
  memory             100Mi (1%)  100Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>
 
 
Name:               ubuntu
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=ubuntu
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 14:20:11 +0000
Unschedulable:      false
Lease:
  HolderIdentity:  ubuntu
  AcquireTime:     <unset>
  RenewTime:       Mon, 05 Jul 2021 22:03:34 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.11
  Hostname:    ubuntu
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  memory:             7997276Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  memory:             7894876Ki
  pods:               110
System Info:
  Machine ID:                 37c8b0e6c19c4bc09f1aace54cd91d55
  System UUID:                37c8b0e6c19c4bc09f1aace54cd91d55
  Boot ID:                    4aac5c6b-db60-4468-85e9-b5ef13b1e9f4
  Kernel Version:             5.11.0-1007-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.19
  Kube-Proxy Version:         v1.18.19
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  ingress                     nginx-ingress-microk8s-controller-bzx2x            0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 coredns-588fd544bf-ntgwl                           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     306d
  kube-system                 dashboard-metrics-scraper-db65b9c6f-rl8l2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 heapster-v1.5.2-5bc67ff868-hl5kp                   100m (2%)     100m (2%)   184720Ki (2%)    184720Ki (2%)  306d
  kube-system                 kubernetes-dashboard-c5b698784-55rhq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 monitoring-influxdb-grafana-v4-6cc44d985f-2p4gd    200m (5%)     200m (5%)   600Mi (7%)       600Mi (7%)     306d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests        Limits
  --------           --------        ------
  cpu                400m (10%)      300m (7%)
  memory             870800Ki (11%)  973200Ki (12%)
  ephemeral-storage  0 (0%)          0 (0%)
Events:              <none>


Re: Attempting to install Strimzi/Kafka on my microk8s cluster

tfv01@...
 

Thank you Jakub, appreciate your response!

I'm not exactly sure I understand how to check whether each node is "reachable", but here is the output of "kubectl describe nodes". I don't know if the fact that all three Pis are listed means that they are in fact reachable.
-Tim

ubuntu@pi4-01:~$ microk8s kubectl describe nodes
Name:               pi4-01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-01
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 22:02:48 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-01
  AcquireTime:     <unset>
  RenewTime:       Sat, 07 May 2022 23:34:18 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 07 May 2022 23:29:57 +0000   Sat, 07 May 2022 21:18:54 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.10
  Hostname:    pi4-01
Capacity:
  cpu:                4
  ephemeral-storage:  245747040Ki
  memory:             7998736Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244698464Ki
  memory:             7896336Ki
  pods:               110
System Info:
  Machine ID:                 b4d5f79c8636436cbd291ba7ac7f14c9
  System UUID:                b4d5f79c8636436cbd291ba7ac7f14c9
  Boot ID:                    3e28585a-dceb-4793-8425-f9d80f92a939
  Kernel Version:             5.4.0-1052-raspi
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (10 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  default                     zookeeper-deployment-1-6c88c95964-496xq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d
  ingress                     nginx-ingress-microk8s-controller-jmkgb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kafka                       strimzi-cluster-operator-74f9f5d7c7-gs5df          200m (5%)     1 (25%)     384Mi (4%)       384Mi (4%)     4h3m
  kube-system                 coredns-588fd544bf-x9jxf                           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     306d
  kube-system                 dashboard-metrics-scraper-db65b9c6f-cq2t8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 heapster-v1.5.2-5bc67ff868-2tkbf                   100m (2%)     100m (2%)   184720Ki (2%)    184720Ki (2%)  306d
  kube-system                 kubernetes-dashboard-c5b698784-wnfqj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 monitoring-influxdb-grafana-v4-6cc44d985f-qstsn    200m (5%)     200m (5%)   600Mi (7%)       600Mi (7%)     306d
  metallb-system              controller-55d8b88d7f-tkvzq                        100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     17d
  metallb-system              speaker-ms5z4                                      100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                800m (20%)       1500m (37%)
  memory             1468816Ki (18%)  1571216Ki (19%)
  ephemeral-storage  0 (0%)           0 (0%)
Events:              <none>
 
 
Name:               pi4-02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-02
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 23:02:47 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-02
  AcquireTime:     <unset>
  RenewTime:       Fri, 25 Feb 2022 05:08:28 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.11
  Hostname:    pi4-02
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7997148Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7894748Ki
  pods:               110
System Info:
  Machine ID:                 37c8b0e6c19c4bc09f1aace54cd91d55
  System UUID:                37c8b0e6c19c4bc09f1aace54cd91d55
  Boot ID:                    02cb905f-359b-4584-b2c6-d20953e07322
  Kernel Version:             5.11.0-1023-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  default                     kafka-broker0-86669885c5-xshms             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         125d
  default                     zookeeper-deploy-7f5bb9785f-2tqf8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  ingress                     nginx-ingress-microk8s-controller-wzbnf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  metallb-system              speaker-86pmc                              100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (2%)   100m (2%)
  memory             100Mi (1%)  100Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>
 
 
Name:               pi4-03
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-03
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 23:03:01 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-03
  AcquireTime:     <unset>
  RenewTime:       Fri, 25 Feb 2022 05:08:24 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.12
  Hostname:    pi4-03
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7997148Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7894748Ki
  pods:               110
System Info:
  Machine ID:                 5f870ea041464a25a73f93fb37cec85c
  System UUID:                5f870ea041464a25a73f93fb37cec85c
  Boot ID:                    15e11c9e-4674-45c7-ac7f-021e83a17eff
  Kernel Version:             5.11.0-1023-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  default                     kafka-broker0-86669885c5-9k6f6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  default                     nginx-f89759699-8zm68                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         124d
  default                     zookeeper-deploy-7f5bb9785f-4j8kt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  ingress                     nginx-ingress-microk8s-controller-p6c9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  metallb-system              speaker-h28vg                              100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (2%)   100m (2%)
  memory             100Mi (1%)  100Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>
 
 
Name:               ubuntu
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=ubuntu
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 14:20:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  ubuntu
  AcquireTime:     <unset>
  RenewTime:       Mon, 05 Jul 2021 22:03:34 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.11
  Hostname:    ubuntu
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  memory:             7997276Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  memory:             7894876Ki
  pods:               110
System Info:
  Machine ID:                 37c8b0e6c19c4bc09f1aace54cd91d55
  System UUID:                37c8b0e6c19c4bc09f1aace54cd91d55
  Boot ID:                    4aac5c6b-db60-4468-85e9-b5ef13b1e9f4
  Kernel Version:             5.11.0-1007-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.19
  Kube-Proxy Version:         v1.18.19
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  ingress                     nginx-ingress-microk8s-controller-bzx2x            0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 coredns-588fd544bf-ntgwl                           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     306d
  kube-system                 dashboard-metrics-scraper-db65b9c6f-rl8l2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 heapster-v1.5.2-5bc67ff868-hl5kp                   100m (2%)     100m (2%)   184720Ki (2%)    184720Ki (2%)  306d
  kube-system                 kubernetes-dashboard-c5b698784-55rhq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 monitoring-influxdb-grafana-v4-6cc44d985f-2p4gd    200m (5%)     200m (5%)   600Mi (7%)       600Mi (7%)     306d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests        Limits
  --------           --------        ------
  cpu                400m (10%)      300m (7%)
  memory             870800Ki (11%)  973200Ki (12%)
  ephemeral-storage  0 (0%)          0 (0%)
Events:              <none>


Re: Attempting to install Strimzi/Kafka on my microk8s cluster

Jakub Scholz
 

Hi Tim,

That looks like a scheduling issue between your cluster and your storage. TBH, these issues are a bit hard to decode without access to your environment. The taint suggests that some of your nodes are unreachable, so that might be the cause of the issue?
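For the storage part of the events ("no persistent volumes available for this claim and no storage class is set"), one possible workaround - just a sketch based on the quick start's single-node example, assuming the my-kafka-project namespace and my-cluster name, and keeping in mind that ephemeral storage loses all data when a pod restarts - would be to switch the Kafka custom resource to ephemeral storage so that no PersistentVolume or StorageClass is needed:

microk8s kubectl -n my-kafka-project apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF

The unreachable-node taints from the other events would still need to be sorted out separately, though.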

Thanks & Regards
Jakub

On Sun, May 8, 2022 at 12:56 AM <tfv01@...> wrote:
Hi,
I have a three-node Raspberry Pi cluster on which I'd like to be able to install and run Kafka. Strimzi seems to be a very easy environment to get that working, and I read that as of Dec 2021 it supports ARM-based processors.

I was following along with the Quick Start Guide (Strimzi Quick Start guide, 0.28.0) and got to step 2.4, Creating a Cluster.
In this step (as in all steps before), I am prefacing all "kubectl ..." commands with "microk8s kubectl ...".
Step 2.4 outputs: "kafka.kafka.strimzi.io/my-cluster created"

But the next command times out because the cluster never becomes Ready:
microk8s kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project

When I look at the Kubernetes Dashboard, I see the following Events under the "my-kafka-project" Namespace:

Message                                                                                                                                                                 Source
no persistent volumes available for this claim and no storage class is set                                                                                              persistentvolume-controller
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
No matching pods found                                                                                                                                                  controllermanager
create Pod my-cluster-zookeeper-0 in StatefulSet my-cluster-zookeeper successful                                                                                        statefulset-controller
 
I appreciate any help you could pass on as to what I might be doing wrong.

Thank you!
-Tim


Attempting to install Strimzi/Kafka on my microk8s cluster

tfv01@...
 

Hi,
I have a three-node Raspberry Pi cluster on which I'd like to be able to install and run Kafka. Strimzi seems to be a very easy environment to get that working, and I read that as of Dec 2021 it supports ARM-based processors.

I was following along with the Quick Start Guide (Strimzi Quick Start guide, 0.28.0) and got to step 2.4, Creating a Cluster.
In this step (as in all steps before), I am prefacing all "kubectl ..." commands with "microk8s kubectl ...".
Step 2.4 outputs: "kafka.kafka.strimzi.io/my-cluster created"

But the next command times out because the cluster never becomes Ready:
microk8s kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project

When I look at the Kubernetes Dashboard, I see the following Events under the "my-kafka-project" Namespace:

Message                                                                                                                                                                 Source
no persistent volumes available for this claim and no storage class is set                                                                                              persistentvolume-controller
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
No matching pods found                                                                                                                                                  controllermanager
create Pod my-cluster-zookeeper-0 in StatefulSet my-cluster-zookeeper successful                                                                                        statefulset-controller
 
I appreciate any help you could pass on as to what I might be doing wrong.

Thank you!
-Tim


[ANNOUNCE] [RELEASE] Strimzi Kafka Bridge 0.21.5

Jakub Scholz
 

New version 0.21.5 of Strimzi Kafka Bridge has been released. The main changes in this release are:
* Support for ppc64le platform
* Documentation improvements
* Dependency updates


Thanks to everyone who contributed to this release!

Regards
Strimzi team


RC1 of Strimzi Kafka Bridge 0.21.5

Jakub Scholz
 

Release Candidate 1 of Strimzi Kafka Bridge 0.21.5 is now available for testing. The main changes since 0.21.4 are:
* Support for ppc64le platform
* Documentation improvements
* Dependency updates

More details and a full list of changes can be found on the GitHub release page: https://github.com/strimzi/strimzi-kafka-bridge/releases/tag/0.21.5-rc1

Any feedback can be provided on the mailing list, on Slack or as a GitHub issue.

Thanks & Regards
Strimzi team