
RC1 of Strimzi Canary 0.3.0

Paolo Patierno
 

*We have prepared RC1 of the new Strimzi Canary 0.3.0 release.*

For more details and installation files, go to: https://github.com/strimzi/strimzi-canary/releases/tag/0.3.0-rc1

Any feedback can be provided on the Strimzi mailing list, on the #strimzi Slack channel on CNCF Slack or as a GitHub issue.

Thanks & Regards
Strimzi team

Paolo Patierno
Principal Software Engineer @ Red Hat
Microsoft MVP on Azure

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


RC1 of Strimzi Kafka Operators 0.29.0 is available for testing

Jakub Scholz
 

Release candidate 1 of Strimzi Kafka Operators 0.29.0 is now available for testing. The changes in this release include, for example:
* Support for new Apache Kafka releases (3.0.1, 3.1.1 and 3.2.0)
* Renew user certificates in User Operator only during maintenance windows
* New rebalancing modes in the `KafkaRebalance` custom resource to add or remove brokers
* Experimental KRaft mode (ZooKeeper-less Kafka)
* Experimental support for the s390x platform
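
As an illustration of the new rebalancing modes, adding brokers is selected through a `mode` field in the `KafkaRebalance` spec. A minimal sketch (the cluster name and broker IDs here are illustrative, not from the announcement):

```yaml
# Ask Cruise Control to move partition replicas onto newly added brokers.
# "my-cluster" and the broker IDs 3 and 4 are example values.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: add-brokers-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: add-brokers
  brokers: [3, 4]
```

A corresponding `remove-brokers` mode moves replicas off the listed brokers before they are scaled down.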


Any feedback can be provided on the Strimzi mailing list, on the #strimzi Slack channel on CNCF Slack or as a GitHub issue.

Thanks & Regards
Strimzi team


Re: Attempting to install Strimzi/Kafka on my microk8s cluster

Jakub Scholz
 

The first node looks OK. But if you check the descriptions, you can see that some of the other nodes are marked with the taint `node.kubernetes.io/unreachable:NoSchedule`.

That is what the events are complaining about. I expect that the taint is set there for some reason, but I never used Microk8s, so I'm not sure where to look for more info. Also, the Conditions in those descriptions are Unknown, so my guess is that maybe the cluster cannot communicate with those nodes. A Microk8s forum might give you better answers on what might be causing this.
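
If it helps, one way to surface just the node names and taints without reading the full descriptions (the `kubectl` invocation is a suggestion and untested on Microk8s; prefix it with `microk8s` as needed):

```shell
# On the cluster (hypothetical invocation):
#   kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
#
# The same filtering idea, demonstrated offline on a pasted snippet of the
# `describe nodes` output: grep for the taint and show the preceding Name line.
printf 'Name: pi4-01\nTaints: <none>\nName: pi4-02\nTaints: node.kubernetes.io/unreachable:NoSchedule\n' \
  | grep -B 1 'unreachable'
```

That quickly shows which nodes carry the `unreachable` taint, which is why the scheduler refuses to place pods there.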

Jakub

On Sun, May 8, 2022 at 1:38 AM <tfv01@...> wrote:
Thank you Jakub, appreciate your response!

I'm not exactly sure I understand how to check if each node is "reachable", but here is the output of "kubectl describe nodes", and I don't know if the fact that all three Pis are listed means that they are in fact reachable.
-Tim

ubuntu@pi4-01:~$ microk8s kubectl describe nodes
Name:               pi4-01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-01
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 22:02:48 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-01
  AcquireTime:     <unset>
  RenewTime:       Sat, 07 May 2022 23:34:18 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 07 May 2022 23:29:57 +0000   Sat, 07 May 2022 21:18:54 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.10
  Hostname:    pi4-01
Capacity:
  cpu:                4
  ephemeral-storage:  245747040Ki
  memory:             7998736Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244698464Ki
  memory:             7896336Ki
  pods:               110
System Info:
  Machine ID:                 b4d5f79c8636436cbd291ba7ac7f14c9
  System UUID:                b4d5f79c8636436cbd291ba7ac7f14c9
  Boot ID:                    3e28585a-dceb-4793-8425-f9d80f92a939
  Kernel Version:             5.4.0-1052-raspi
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (10 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  default                     zookeeper-deployment-1-6c88c95964-496xq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d
  ingress                     nginx-ingress-microk8s-controller-jmkgb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kafka                       strimzi-cluster-operator-74f9f5d7c7-gs5df          200m (5%)     1 (25%)     384Mi (4%)       384Mi (4%)     4h3m
  kube-system                 coredns-588fd544bf-x9jxf                           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     306d
  kube-system                 dashboard-metrics-scraper-db65b9c6f-cq2t8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 heapster-v1.5.2-5bc67ff868-2tkbf                   100m (2%)     100m (2%)   184720Ki (2%)    184720Ki (2%)  306d
  kube-system                 kubernetes-dashboard-c5b698784-wnfqj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 monitoring-influxdb-grafana-v4-6cc44d985f-qstsn    200m (5%)     200m (5%)   600Mi (7%)       600Mi (7%)     306d
  metallb-system              controller-55d8b88d7f-tkvzq                        100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     17d
  metallb-system              speaker-ms5z4                                      100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                800m (20%)       1500m (37%)
  memory             1468816Ki (18%)  1571216Ki (19%)
  ephemeral-storage  0 (0%)           0 (0%)
Events:              <none>
 
 
Name:               pi4-02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-02
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 23:02:47 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-02
  AcquireTime:     <unset>
  RenewTime:       Fri, 25 Feb 2022 05:08:28 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.11
  Hostname:    pi4-02
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7997148Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7894748Ki
  pods:               110
System Info:
  Machine ID:                 37c8b0e6c19c4bc09f1aace54cd91d55
  System UUID:                37c8b0e6c19c4bc09f1aace54cd91d55
  Boot ID:                    02cb905f-359b-4584-b2c6-d20953e07322
  Kernel Version:             5.11.0-1023-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  default                     kafka-broker0-86669885c5-xshms             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         125d
  default                     zookeeper-deploy-7f5bb9785f-2tqf8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  ingress                     nginx-ingress-microk8s-controller-wzbnf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  metallb-system              speaker-86pmc                              100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (2%)   100m (2%)
  memory             100Mi (1%)  100Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>
 
 
Name:               pi4-03
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-03
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 23:03:01 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-03
  AcquireTime:     <unset>
  RenewTime:       Fri, 25 Feb 2022 05:08:24 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.12
  Hostname:    pi4-03
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7997148Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7894748Ki
  pods:               110
System Info:
  Machine ID:                 5f870ea041464a25a73f93fb37cec85c
  System UUID:                5f870ea041464a25a73f93fb37cec85c
  Boot ID:                    15e11c9e-4674-45c7-ac7f-021e83a17eff
  Kernel Version:             5.11.0-1023-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  default                     kafka-broker0-86669885c5-9k6f6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  default                     nginx-f89759699-8zm68                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         124d
  default                     zookeeper-deploy-7f5bb9785f-4j8kt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  ingress                     nginx-ingress-microk8s-controller-p6c9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  metallb-system              speaker-h28vg                              100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (2%)   100m (2%)
  memory             100Mi (1%)  100Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>
 
 
Name:               ubuntu
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=ubuntu
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 14:20:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  ubuntu
  AcquireTime:     <unset>
  RenewTime:       Mon, 05 Jul 2021 22:03:34 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.11
  Hostname:    ubuntu
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  memory:             7997276Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  memory:             7894876Ki
  pods:               110
System Info:
  Machine ID:                 37c8b0e6c19c4bc09f1aace54cd91d55
  System UUID:                37c8b0e6c19c4bc09f1aace54cd91d55
  Boot ID:                    4aac5c6b-db60-4468-85e9-b5ef13b1e9f4
  Kernel Version:             5.11.0-1007-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.19
  Kube-Proxy Version:         v1.18.19
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  ingress                     nginx-ingress-microk8s-controller-bzx2x            0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 coredns-588fd544bf-ntgwl                           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     306d
  kube-system                 dashboard-metrics-scraper-db65b9c6f-rl8l2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 heapster-v1.5.2-5bc67ff868-hl5kp                   100m (2%)     100m (2%)   184720Ki (2%)    184720Ki (2%)  306d
  kube-system                 kubernetes-dashboard-c5b698784-55rhq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 monitoring-influxdb-grafana-v4-6cc44d985f-2p4gd    200m (5%)     200m (5%)   600Mi (7%)       600Mi (7%)     306d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests        Limits
  --------           --------        ------
  cpu                400m (10%)      300m (7%)
  memory             870800Ki (11%)  973200Ki (12%)
  ephemeral-storage  0 (0%)          0 (0%)
Events:              <none>


Re: Attempting to install Strimzi/Kafka on my microk8s cluster

tfv01@...
 

Thank you Jakub, appreciate your response!

I'm not exactly sure I understand how to check if each node is "reachable", but here is the output of "kubectl describe nodes", and I don't know if the fact that all three Pis are listed means that they are in fact reachable.
-Tim

ubuntu@pi4-01:~$ microk8s kubectl describe nodes
Name:               pi4-01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-01
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 22:02:48 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-01
  AcquireTime:     <unset>
  RenewTime:       Sat, 07 May 2022 23:34:18 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 07 May 2022 23:29:57 +0000   Mon, 05 Jul 2021 22:02:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 07 May 2022 23:29:57 +0000   Sat, 07 May 2022 21:18:54 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.10
  Hostname:    pi4-01
Capacity:
  cpu:                4
  ephemeral-storage:  245747040Ki
  memory:             7998736Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244698464Ki
  memory:             7896336Ki
  pods:               110
System Info:
  Machine ID:                 b4d5f79c8636436cbd291ba7ac7f14c9
  System UUID:                b4d5f79c8636436cbd291ba7ac7f14c9
  Boot ID:                    3e28585a-dceb-4793-8425-f9d80f92a939
  Kernel Version:             5.4.0-1052-raspi
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (10 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  default                     zookeeper-deployment-1-6c88c95964-496xq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d
  ingress                     nginx-ingress-microk8s-controller-jmkgb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kafka                       strimzi-cluster-operator-74f9f5d7c7-gs5df          200m (5%)     1 (25%)     384Mi (4%)       384Mi (4%)     4h3m
  kube-system                 coredns-588fd544bf-x9jxf                           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     306d
  kube-system                 dashboard-metrics-scraper-db65b9c6f-cq2t8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 heapster-v1.5.2-5bc67ff868-2tkbf                   100m (2%)     100m (2%)   184720Ki (2%)    184720Ki (2%)  306d
  kube-system                 kubernetes-dashboard-c5b698784-wnfqj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 monitoring-influxdb-grafana-v4-6cc44d985f-qstsn    200m (5%)     200m (5%)   600Mi (7%)       600Mi (7%)     306d
  metallb-system              controller-55d8b88d7f-tkvzq                        100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     17d
  metallb-system              speaker-ms5z4                                      100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                800m (20%)       1500m (37%)
  memory             1468816Ki (18%)  1571216Ki (19%)
  ephemeral-storage  0 (0%)           0 (0%)
Events:              <none>
 
 
Name:               pi4-02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-02
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 23:02:47 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-02
  AcquireTime:     <unset>
  RenewTime:       Fri, 25 Feb 2022 05:08:28 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 25 Feb 2022 05:07:09 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.11
  Hostname:    pi4-02
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7997148Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7894748Ki
  pods:               110
System Info:
  Machine ID:                 37c8b0e6c19c4bc09f1aace54cd91d55
  System UUID:                37c8b0e6c19c4bc09f1aace54cd91d55
  Boot ID:                    02cb905f-359b-4584-b2c6-d20953e07322
  Kernel Version:             5.11.0-1023-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  default                     kafka-broker0-86669885c5-xshms             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         125d
  default                     zookeeper-deploy-7f5bb9785f-2tqf8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  ingress                     nginx-ingress-microk8s-controller-wzbnf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  metallb-system              speaker-86pmc                              100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (2%)   100m (2%)
  memory             100Mi (1%)  100Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>
 
 
Name:               pi4-03
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=pi4-03
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 23:03:01 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  pi4-03
  AcquireTime:     <unset>
  RenewTime:       Fri, 25 Feb 2022 05:08:24 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Fri, 25 Feb 2022 05:03:45 +0000   Fri, 25 Feb 2022 05:12:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.12
  Hostname:    pi4-03
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7997148Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7894748Ki
  pods:               110
System Info:
  Machine ID:                 5f870ea041464a25a73f93fb37cec85c
  System UUID:                5f870ea041464a25a73f93fb37cec85c
  Boot ID:                    15e11c9e-4674-45c7-ac7f-021e83a17eff
  Kernel Version:             5.11.0-1023-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.20
  Kube-Proxy Version:         v1.18.20
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  default                     kafka-broker0-86669885c5-9k6f6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  default                     nginx-f89759699-8zm68                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         124d
  default                     zookeeper-deploy-7f5bb9785f-4j8kt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  ingress                     nginx-ingress-microk8s-controller-p6c9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  metallb-system              speaker-h28vg                              100m (2%)     100m (2%)   100Mi (1%)       100Mi (1%)     124d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (2%)   100m (2%)
  memory             100Mi (1%)  100Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>
 
 
Name:               ubuntu
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=ubuntu
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 05 Jul 2021 14:20:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  ubuntu
  AcquireTime:     <unset>
  RenewTime:       Mon, 05 Jul 2021 22:03:34 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Mon, 05 Jul 2021 22:02:48 +0000   Mon, 05 Jul 2021 22:04:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.11
  Hostname:    ubuntu
Capacity:
  cpu:                4
  ephemeral-storage:  245617500Ki
  memory:             7997276Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  244568924Ki
  memory:             7894876Ki
  pods:               110
System Info:
  Machine ID:                 37c8b0e6c19c4bc09f1aace54cd91d55
  System UUID:                37c8b0e6c19c4bc09f1aace54cd91d55
  Boot ID:                    4aac5c6b-db60-4468-85e9-b5ef13b1e9f4
  Kernel Version:             5.11.0-1007-raspi
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.2.5
  Kubelet Version:            v1.18.19
  Kube-Proxy Version:         v1.18.19
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
  ingress                     nginx-ingress-microk8s-controller-bzx2x            0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 coredns-588fd544bf-ntgwl                           100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     306d
  kube-system                 dashboard-metrics-scraper-db65b9c6f-rl8l2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 heapster-v1.5.2-5bc67ff868-hl5kp                   100m (2%)     100m (2%)   184720Ki (2%)    184720Ki (2%)  306d
  kube-system                 kubernetes-dashboard-c5b698784-55rhq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         306d
  kube-system                 monitoring-influxdb-grafana-v4-6cc44d985f-2p4gd    200m (5%)     200m (5%)   600Mi (7%)       600Mi (7%)     306d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests        Limits
  --------           --------        ------
  cpu                400m (10%)      300m (7%)
  memory             870800Ki (11%)  973200Ki (12%)
  ephemeral-storage  0 (0%)          0 (0%)
Events:              <none>


Re: Attempting to install Strimzi/Kafka on my microk8s cluster

Jakub Scholz
 

Hi Tim,

That looks like a scheduling issue between your cluster and your storage. TBH, these issues are a bit hard to debug without access to your environment. The taint suggests that some of your nodes are unreachable, so that might be the cause of the issue.
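For anyone hitting the same symptoms (unreachable-node taints plus a PVC with no storage class), a quick way to inspect both from the command line might look like the following sketch. The node name `ubuntu` and the `hostpath-storage` addon name are illustrative; the addon is called plain `storage` on older MicroK8s releases:

```shell
# Show every node together with any taints that would block scheduling
microk8s kubectl get nodes \
  -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

# Inspect the conditions of a suspect node (here: "ubuntu")
microk8s kubectl describe node ubuntu | grep -A 6 'Conditions:'

# The quick-start Kafka cluster needs a default StorageClass for its
# PersistentVolumeClaims; on MicroK8s the hostpath storage addon provides one
microk8s enable hostpath-storage
microk8s kubectl get storageclass
```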

Thanks & Regards
Jakub

On Sun, May 8, 2022 at 12:56 AM <tfv01@...> wrote:
Hi,
I have a three-node Raspberry Pi cluster on which I'd like to install and run Kafka. Strimzi seems to be a very easy way to get that working, and I read that as of Dec 2021 it supports ARM-based processors.

I was following along with the Quick Start Guide (Strimzi Quick Start guide, 0.28.0) and got to step 2.4, Creating a Cluster.
In this step (as in all steps before), I am prefacing all "kubectl..." commands with "microk8s kubectl...".
Step 2.4 outputs: "kafka.kafka.strimzi.io/my-cluster created"

But the next command times out because the cluster never becomes Ready:
microk8s kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project

When I look at the Kubernetes Dashboard, I see the following Events under the "my-kafka-project" Namespace:

Message                                                                                                                                                                 Source
no persistent volumes available for this claim and no storage class is set                                                                                              persistentvolume-controller
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
No matching pods found                                                                                                                                                  controllermanager
create Pod my-cluster-zookeeper-0 in StatefulSet my-cluster-zookeeper successful                                                                                        statefulset-controller
 
I appreciate any help you could pass-on as to what I might be doing wrong.

Thank you!
-Tim


Attempting to install Strimzi/Kafka on my microk8s cluster

tfv01@...
 

Hi,
I have a three-node Raspberry Pi cluster on which I'd like to install and run Kafka. Strimzi seems to be a very easy way to get that working, and I read that as of Dec 2021 it supports ARM-based processors.

I was following along with the Quick Start Guide (Strimzi Quick Start guide, 0.28.0) and got to step 2.4, Creating a Cluster.
In this step (as in all steps before), I am prefacing all "kubectl..." commands with "microk8s kubectl...".
Step 2.4 outputs: "kafka.kafka.strimzi.io/my-cluster created"

But the next command times out because the cluster never becomes Ready:
microk8s kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project

When I look at the Kubernetes Dashboard, I see the following Events under the "my-kafka-project" Namespace:

Message                                                                                                                                                                 Source
no persistent volumes available for this claim and no storage class is set                                                                                              persistentvolume-controller
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
0/4 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims, 3 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.      default-scheduler
No matching pods found                                                                                                                                                  controllermanager
create Pod my-cluster-zookeeper-0 in StatefulSet my-cluster-zookeeper successful                                                                                        statefulset-controller
 
I appreciate any help you could pass-on as to what I might be doing wrong.

Thank you!
-Tim


[ANNOUNCE] [RELEASE] Strimzi Kafka Bridge 0.21.5

Jakub Scholz
 

New version 0.21.5 of Strimzi Kafka Bridge has been released. The main changes in this release are:
* Support for ppc64le platform
* Documentation improvements
* Dependency updates


Thanks to everyone who contributed to this release!

Regards
Strimzi team


RC1 of Strimzi Kafka Bridge 0.21.5

Jakub Scholz
 

Release Candidate 1 of Strimzi Kafka Bridge 0.21.5 is now available for testing. The main changes since 0.21.4 are:
* Support for ppc64le platform
* Documentation improvements
* Dependency updates

More details and a full list of changes can be found on the GitHub release page: https://github.com/strimzi/strimzi-kafka-bridge/releases/tag/0.21.5-rc1

Any feedback can be provided on the mailing list, on Slack or as a GitHub issue.

Thanks & Regards
Strimzi team


Draft Strimzi proposal to CNCF "incubation" stage

Paolo Patierno
 

We would like to propose Strimzi for the "incubation" stage in CNCF. There is a PR with a draft proposal on my own GitHub profile so that everyone can take a look at it and engage. We would really love to have feedback and comments from the community before submitting the official PR to the CNCF TOC. It would be really appreciated! https://github.com/ppatierno/toc/pull/1
The Strimzi maintainers

Paolo Patierno
Principal Software Engineer @ Red Hat
Microsoft MVP on Azure

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


[ANNOUNCE] [RELEASE] Strimzi Test Container 0.101.0 is out

Maros Orsak
 

Strimzi Test Container 0.101.0 has been released. Main changes since the 0.100.0 release include:
-   ability to configure the ZooKeeper container via zookeeper.properties
-   ability to connect to the internal ZooKeeper inside StrimziKafkaContainer using the getInternalZooKeeperConnect() method
-   a constructor taking only the number of brokers
-   better log descriptions
-   support for the ppc64le architecture

**Github links**  

-   https://github.com/strimzi/test-container/releases/tag/0.101.0
-   https://github.com/strimzi/test-container-images/releases/tag/0.101.0
--

Maroš Orsák

Quality Engineer - AMQ Streams

Red Hat

morsak@...   


[ANNOUNCE] [Release Candidate 2] Strimzi test containers 0.101.0

Maros Orsak
 

Release candidate 2 of Strimzi test containers 0.101.0 is now available for testing. The main changes from version 0.101.0-rc1 in this release include:
  • a fix for the custom container network configuration, which was overridden in the doStart() method
Images used by Strimzi test container (no change in images)
quay.io/strimzi-test-container/test-container:0.101.0-rc1-kafka-3.1.0
quay.io/strimzi-test-container/test-container:0.101.0-rc1-kafka-3.0.0
quay.io/strimzi-test-container/test-container:0.101.0-rc1-kafka-2.8.1
Maven artefacts
To test the Maven artefacts which are part of this release, use the staging repository by including the following in your pom.xml:
<repositories>
  <repository>
    <id>staging</id>
    <url>https://oss.sonatype.org/content/repositories/iostrimzi-1166</url>
  </repository>
</repositories>
Best regards,
--

Maroš Orsák

Quality Engineer - AMQ Streams

Red Hat

morsak@...   


[ANNOUNCE] [Release Candidate 1] Strimzi test containers 0.101.0

Maros Orsak
 

Release candidate 1 of Strimzi test containers 0.101.0 is now available for testing. The main changes from version 0.100.0 in this release include:
- ability to configure the ZooKeeper container via `zookeeper.properties`
- ability to connect to the internal ZooKeeper inside `StrimziKafkaContainer` using the `getInternalZooKeeperConnect()` method
- a constructor taking only the number of brokers
- better log descriptions
- support for the `ppc64le` architecture

Images used by Strimzi test container
quay.io/strimzi-test-container/test-container:0.101.0-rc1-kafka-3.1.0
quay.io/strimzi-test-container/test-container:0.101.0-rc1-kafka-3.0.0
quay.io/strimzi-test-container/test-container:0.101.0-rc1-kafka-2.8.1
Maven artefacts
To test the Maven artefacts which are part of this release, use the staging repository by including the following in your pom.xml:
<repositories>
  <repository>
    <id>staging</id>
    <url>https://oss.sonatype.org/content/repositories/iostrimzi-1165</url>
  </repository>
</repositories>
Best regards,
--

Maroš Orsák

Quality Engineer - AMQ Streams

Red Hat

morsak@...   


Re: Need help setting UP Strimzi Kafka Cluster with Cruise Control

Jakub Scholz
 

You have to do it manually. Kafka is not something that is normally scaled up or down very often. It is a stateful application where partitions and their replicas have a fixed assignment to specific brokers. Scaling the cluster up or down means reassigning the partitions between the nodes, and since the partitions often contain very large amounts of data, that data needs to be transferred. That of course has its cost (it often takes a long time, generates financial costs, etc.). So usually you would scale the cluster up, for example, as you onboard new applications or as your applications grow. But you would not scale it up for a few minutes just because there seems to be some traffic peak. So normally it is the result of long-term planning rather than a quick decision by the HPA.
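As a concrete sketch of this manual flow (the cluster name `my-cluster`, the namespace `kafka`, and broker id `3` are assumptions, and the `add-brokers` rebalance mode requires a recent Strimzi release with Cruise Control enabled):

```shell
# 1. Scale the Kafka cluster up by raising spec.kafka.replicas in the Kafka CR
kubectl patch kafka my-cluster -n kafka --type merge \
  -p '{"spec":{"kafka":{"replicas":4}}}'

# 2. Ask Cruise Control to move partitions onto the new broker (id 3)
kubectl apply -n kafka -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: add-broker-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: add-brokers
  brokers: [3]
EOF

# 3. Once the operator has prepared the optimization proposal, approve it
kubectl annotate kafkarebalance add-broker-rebalance -n kafka \
  strimzi.io/rebalance=approve
```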

Thanks & Regards
Jakub

On Wed, Feb 23, 2022 at 9:48 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Jakub,

I have set up Strimzi Kafka Cluster and Operator. 

I still have one doubt that was not answered: do I need to scale the Kafka cluster and rebalance the data manually, or will it scale up/down based on the Kubernetes HPA?

Your help will be much appreciated. 


On Tue, Feb 22, 2022 at 11:47 PM Jakub Scholz <jakub@...> wrote:
No, we do not have a Helm Chart for creating the Kafka custom resource.

Thanks & Regards
Jakub

On Tue, Feb 22, 2022 at 6:40 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Jakub,

Thank you for your response.

Do we have a Helm chart as well for creating the Kafka cluster?

On Tue, Feb 22, 2022 at 11:08 PM Jakub Scholz <jakub@...> wrote:
Hi Ranjeet,

The Helm Chart installs only the operator for managing Kafka clusters. Next, you need to create a Kafka custom resource describing the Kafka cluster you want to have, its configuration, etc. The docs (https://strimzi.io/documentation/) cover all the different aspects you can configure. We also have all kinds of examples on GitHub: https://github.com/strimzi/strimzi-kafka-operator/tree/main/examples

Thanks & Regards
Jakub

On Tue, Feb 22, 2022 at 6:33 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Team,

Let me introduce myself: I am Ranjeet Ranjan, CEO & Founder of Adsizzler Media.
We have recently launched a new ad-tech startup named http://adosiz.com/

We wanted to install Strimzi, but need help setting up a Strimzi Kafka cluster with Cruise Control for auto-scaling.

I have downloaded the Helm chart from https://strimzi.io/downloads/.

I followed the steps below to install.

Step 1: helm install strimzi-kafka-operator strimzi-kafka-operator -n test

But it only installed strimzi-cluster-operator-5f56c65c5d-4pphv

Now please let me know how the Kafka cluster pods will be created.

Thanks
Ranjeet Ranjan


Re: Need help setting UP Strimzi Kafka Cluster with Cruise Control

Ranjeet Ranjan
 

Hi Jakub,

I have set up Strimzi Kafka Cluster and Operator. 

I still have one doubt that was not answered: do I need to scale the Kafka cluster and rebalance the data manually, or will it scale up/down based on the Kubernetes HPA?

Your help will be much appreciated. 


On Tue, Feb 22, 2022 at 11:47 PM Jakub Scholz <jakub@...> wrote:
No, we do not have a Helm Chart for creating the Kafka custom resource.

Thanks & Regards
Jakub

On Tue, Feb 22, 2022 at 6:40 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Jakub,

Thank you for your response.

Do we have a Helm chart as well for creating the Kafka cluster?

On Tue, Feb 22, 2022 at 11:08 PM Jakub Scholz <jakub@...> wrote:
Hi Ranjeet,

The Helm Chart installs only the operator for managing Kafka clusters. Next, you need to create a Kafka custom resource describing the Kafka cluster you want to have, its configuration, etc. The docs (https://strimzi.io/documentation/) cover all the different aspects you can configure. We also have all kinds of examples on GitHub: https://github.com/strimzi/strimzi-kafka-operator/tree/main/examples

Thanks & Regards
Jakub

On Tue, Feb 22, 2022 at 6:33 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Team,

Let me introduce myself: I am Ranjeet Ranjan, CEO & Founder of Adsizzler Media.
We have recently launched a new ad-tech startup named http://adosiz.com/

We wanted to install Strimzi, but need help setting up a Strimzi Kafka cluster with Cruise Control for auto-scaling.

I have downloaded the Helm chart from https://strimzi.io/downloads/.

I followed the steps below to install.

Step 1: helm install strimzi-kafka-operator strimzi-kafka-operator -n test

But it only installed strimzi-cluster-operator-5f56c65c5d-4pphv

Now please let me know how the Kafka cluster pods will be created.

Thanks
Ranjeet Ranjan


Re: Need help setting UP Strimzi Kafka Cluster with Cruise Control

Jakub Scholz
 

No, we do not have a Helm Chart for creating the Kafka custom resource.

Thanks & Regards
Jakub

On Tue, Feb 22, 2022 at 6:40 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Jakub,

Thank you for your response.

Do we have a Helm chart as well for creating the Kafka cluster?

On Tue, Feb 22, 2022 at 11:08 PM Jakub Scholz <jakub@...> wrote:
Hi Ranjeet,

The Helm Chart installs only the operator for managing Kafka clusters. Next, you need to create a Kafka custom resource describing the Kafka cluster you want to have, its configuration, etc. The docs (https://strimzi.io/documentation/) cover all the different aspects you can configure. We also have all kinds of examples on GitHub: https://github.com/strimzi/strimzi-kafka-operator/tree/main/examples

Thanks & Regards
Jakub

On Tue, Feb 22, 2022 at 6:33 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Team,

Let me introduce myself: I am Ranjeet Ranjan, CEO & Founder of Adsizzler Media.
We have recently launched a new ad-tech startup named http://adosiz.com/

We wanted to install Strimzi, but need help setting up a Strimzi Kafka cluster with Cruise Control for auto-scaling.

I have downloaded the Helm chart from https://strimzi.io/downloads/.

I followed the steps below to install.

Step 1: helm install strimzi-kafka-operator strimzi-kafka-operator -n test

But it only installed strimzi-cluster-operator-5f56c65c5d-4pphv

Now please let me know how the Kafka cluster pods will be created.

Thanks
Ranjeet Ranjan


Re: Need help setting UP Strimzi Kafka Cluster with Cruise Control

Ranjeet Ranjan
 

Hi Jakub,

Thank you for your response.

Do we have a Helm chart as well for creating the Kafka cluster?

On Tue, Feb 22, 2022 at 11:08 PM Jakub Scholz <jakub@...> wrote:
Hi Ranjeet,

The Helm Chart installs only the operator for managing the Kafka clusters. Next, you need to create a Kafka custom resource describing the Kafka cluster you want to have, its configuration, etc. The docs (https://strimzi.io/documentation/) cover all the different aspects you can configure. We also have all kinds of examples on GitHub: https://github.com/strimzi/strimzi-kafka-operator/tree/main/examples

Thanks & Regards
Jakub

On Tue, Feb 22, 2022 at 6:33 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Team,

Let me introduce myself: I am Ranjeet Ranjan, CEO & Founder of Adsizzler Media.
We have recently launched a new ad-tech startup named http://adosiz.com/

We wanted to install Strimzi, but need help setting up a Strimzi Kafka cluster with Cruise Control for auto-scaling.

I have downloaded the Helm chart from https://strimzi.io/downloads/.

I followed the steps below to install.

Step 1: helm install strimzi-kafka-operator strimzi-kafka-operator -n test

But it only installed strimzi-cluster-operator-5f56c65c5d-4pphv

Now please let me know how the Kafka cluster pods will be created.

Thanks
Ranjeet Ranjan


Re: Need help setting UP Strimzi Kafka Cluster with Cruise Control

Jakub Scholz
 

Hi Ranjeet,

The Helm Chart installs only the operator for managing Kafka clusters. Next, you need to create a Kafka custom resource describing the Kafka cluster you want to have, its configuration, etc. The docs (https://strimzi.io/documentation/) cover all the different aspects you can configure. We also have all kinds of examples on GitHub: https://github.com/strimzi/strimzi-kafka-operator/tree/main/examples
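For completeness, creating the cluster then boils down to applying a Kafka custom resource. A minimal sketch follows; the cluster name, replica counts, and the ephemeral storage choice are assumptions based on the examples repository linked above, and the `test` namespace matches the one used for the helm install in this thread:

```shell
kubectl apply -n test -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF
```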

Thanks & Regards
Jakub

On Tue, Feb 22, 2022 at 6:33 PM Ranjeet Ranjan <ranjeet@...> wrote:
Hi Team,

Let me introduce myself: I am Ranjeet Ranjan, CEO & Founder of Adsizzler Media.
We have recently launched a new ad-tech startup named http://adosiz.com/

We wanted to install Strimzi, but need help setting up a Strimzi Kafka cluster with Cruise Control for auto-scaling.

I have downloaded the Helm chart from https://strimzi.io/downloads/.

I followed the steps below to install.

Step 1: helm install strimzi-kafka-operator strimzi-kafka-operator -n test

But it only installed strimzi-cluster-operator-5f56c65c5d-4pphv

Now please let me know how the Kafka cluster pods will be created.

Thanks
Ranjeet Ranjan


Need help setting UP Strimzi Kafka Cluster with Cruise Control

Ranjeet Ranjan
 

Hi Team,

Let me introduce myself: I am Ranjeet Ranjan, CEO & Founder of Adsizzler Media.
We have recently launched a new ad-tech startup named http://adosiz.com/

We wanted to install Strimzi, but need help setting up a Strimzi Kafka cluster with Cruise Control for auto-scaling.

I have downloaded the Helm chart from https://strimzi.io/downloads/.

I followed the steps below to install.

Step 1: helm install strimzi-kafka-operator strimzi-kafka-operator -n test

But it only installed strimzi-cluster-operator-5f56c65c5d-4pphv

Now please let me know how the Kafka cluster pods will be created.

Thanks
Ranjeet Ranjan


Re: adding annotation to service account

Jakub Scholz
 

Hi Amit,

You can use the `template` section to declaratively customize annotations: https://strimzi.io/docs/operators/latest/full/configuring.html#assembly-customizing-kubernetes-resources-str For Service Accounts, from 0.27.0 with the ServiceAccountPatching feature gate enabled, it should be possible to use that even for existing clusters / service accounts. In earlier versions (or with the ServiceAccountPatching feature gate disabled) it will be taken into account only when a new Service Account is created. In that case, you can also annotate it simply with `kubectl annotate ...`.
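A sketch of both approaches for a Kafka Connect cluster (the cluster name `my-connect`, the namespace `kafka`, and the placeholder ARN from the question are assumptions):

```shell
# Declarative: set the annotation via spec.template in the KafkaConnect CR,
# so the operator applies it to the service account it manages
kubectl patch kafkaconnect my-connect -n kafka --type merge \
  -p '{"spec":{"template":{"serviceAccount":{"metadata":{"annotations":{"eks.amazonaws.com/role-arn":"arn:aws:iam::xxxxxxx:role/s3-read-role"}}}}}}'

# Imperative (older operator versions): annotate the generated service
# account directly; for Kafka Connect it is named <cluster>-connect
kubectl annotate serviceaccount my-connect-connect -n kafka \
  eks.amazonaws.com/role-arn=arn:aws:iam::xxxxxxx:role/s3-read-role --overwrite
```

Note that without the ServiceAccountPatching feature gate, a direct `kubectl annotate` may be the only option for an already-existing service account.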

Thanks & Regards
Jakub

On Mon, Feb 21, 2022 at 10:54 AM <amit.cahanovich@...> wrote:
Hi, 
I work with Strimzi (as Kafka Connect) on EKS.
I would like to add an AWS role annotation for S3 access to the service account (something like: eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxx:role/s3-read-role).
Is there a trivial way to do it?
Thanks,
Amit


adding annotation to service account

amit.cahanovich@...
 

Hi, 
I work with Strimzi (as Kafka Connect) on EKS.
I would like to add an AWS role annotation for S3 access to the service account (something like: eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxx:role/s3-read-role).
Is there a trivial way to do it?
Thanks,
Amit
