Re: Attempting to install Strimzi/Kafka on my microk8s cluster


Jakub Scholz
Hi Tim,

That looks like a scheduling issue between your cluster and your storage. TBH, these issues are hard to decode without access to your environment. The taint suggests that some of your nodes are unreachable, so that might be the cause of the problem.
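A few diagnostic commands could help narrow it down (a sketch, assuming the `my-kafka-project` namespace from your steps; addon names vary by MicroK8s release):

```shell
# Check which nodes are NotReady and what taints they carry
microk8s kubectl get nodes -o wide
microk8s kubectl describe nodes | grep -A1 Taints

# Inspect the pending PVC -- "no storage class is set" suggests
# the cluster has no default StorageClass
microk8s kubectl get pvc -n my-kafka-project
microk8s kubectl get storageclass

# On MicroK8s, the built-in hostpath provisioner supplies a default
# StorageClass; the addon is "storage" on older releases and
# "hostpath-storage" on newer ones
microk8s enable storage
```

If the three unreachable nodes are the Pis dropping off the network, the PVC fix alone won't be enough; `microk8s status` on each node should show whether they have rejoined the cluster.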

Thanks & Regards
Jakub

On Sun, May 8, 2022 at 12:56 AM <tfv01@...> wrote:
Hi,
I have a three-node Raspberry Pi cluster on which I'd like to be able to install and run Kafka. Strimzi seems to be a very easy way to get that working, and I read that as of Dec 2021 it supports ARM-based processors.

I was following along with the Strimzi Quick Start guide (0.28.0) and got to step 2.4, Creating a Cluster.
In this step (as in all steps before), I am prefixing all "kubectl ..." commands with "microk8s kubectl ...".
Step 2.4 outputs: "kafka.kafka.strimzi.io/my-cluster created"

But the next command times out because the cluster never becomes Ready:
microk8s kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project

When I look at the Kubernetes Dashboard, I see the following Events under the "my-kafka-project" Namespace:

Message                                                                                    Source
no persistent volumes available for this claim and no storage class is set                 persistentvolume-controller
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate. (repeated 4 times)   default-scheduler
No matching pods found                                                                     controllermanager
create Pod my-cluster-zookeeper-0 in StatefulSet my-cluster-zookeeper successful           statefulset-controller

I appreciate any help you could pass on as to what I might be doing wrong.

Thank you!
-Tim
