
RC4 of Strimzi Kafka Operators 0.17.0 is now available

Jakub Scholz
 

Release Candidate 4 of Strimzi Kafka Operators 0.17.0 is now available. There is only one additional fix since RC3:

* Make sure the ZookeeperScaler works well with custom CAs without the PKCS12 files

For more details about the release candidate and the upgrade procedure, please go to https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.17.0-rc4

Unless there are any new issues, we will probably release it tomorrow.

Thanks & Regards
Jakub


RC3 of Strimzi Kafka Operators 0.17.0 is now available

Jakub Scholz
 

Release Candidate 3 of Strimzi Kafka Operators 0.17.0 is now available. There are some important fixes and improvements since RC2:

* Fixed an NPE which happened in KafkaRoller under rare circumstances
* Fix ordering of addresses in config map to avoid random rolling updates
* Fix scaling of Zookeeper 3.5
* Fix several issues with Connector operator and Mirror Maker 2
* Many other bug fixes, plus test and docs improvements

For more details about the release candidate and the upgrade procedure, please go to https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.17.0-rc3

Thanks & Regards
Jakub


Re: Invalid value 10737418240 for configuration segment.bytes: Not a number of type INT

Tom Bentley
 

Hi,

segment.bytes uses the Java int type, which has a maximum possible value of 2147483647 (2^31 - 1). So although 10737418240 is a valid number, it is too big.
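As a quick sanity check (a Python sketch of the arithmetic, nothing Strimzi-specific), you can confirm the requested value exceeds the Java int range:

```python
# Java's int is a signed 32-bit type, so Integer.MAX_VALUE is 2^31 - 1.
JAVA_INT_MAX = 2**31 - 1  # 2147483647

requested = 10_737_418_240  # the 10 GiB value from the manifest
print(JAVA_INT_MAX)              # 2147483647
print(requested > JAVA_INT_MAX)  # True, so the broker rejects it
```

Any segment.bytes value up to 2147483647 (roughly 2 GiB) will pass the broker's validation.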

Kind regards,

Tom


On Fri, Mar 13, 2020 at 8:47 AM <yohei@...> wrote:
Hi,

I am trying to apply a custom segment.bytes value to a Kafka topic as follows.

----
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: default
  name: myname
  namespace: data
spec:
  config:
    cleanup.policy: compact
    segment.bytes: 10737418240
  partitions: 3
  replicas: 2
  topicName: mytopicname
---

But I got this error in the broker log.

2020-03-13 06:50:44,048 INFO [Admin Manager on Broker 2]: Invalid config value for resource ConfigResource(type=TOPIC, name='mytopicname'): Invalid value 10737418240 for configuration segment.bytes: Not a number of type INT (kafka.server.AdminManager) [data-plane-kafka-request-handler-2]

segment.bytes in my manifest file looks like a valid integer. What is wrong with this configuration?
Thank you.


Invalid value 10737418240 for configuration segment.bytes: Not a number of type INT

yohei@...
 

Hi,

I am trying to apply a custom segment.bytes value to a Kafka topic as follows.

----
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: default
  name: myname
  namespace: data
spec:
  config:
    cleanup.policy: compact
    segment.bytes: 10737418240
  partitions: 3
  replicas: 2
  topicName: mytopicname
---

But I got this error in the broker log.

2020-03-13 06:50:44,048 INFO [Admin Manager on Broker 2]: Invalid config value for resource ConfigResource(type=TOPIC, name='mytopicname'): Invalid value 10737418240 for configuration segment.bytes: Not a number of type INT (kafka.server.AdminManager) [data-plane-kafka-request-handler-2]

segment.bytes in my manifest file looks like a valid integer. What is wrong with this configuration?
Thank you.


Re: Adding annotations and limits of kafka connect created pods

Jakub Scholz
 

Yeah, annotation values are strings. So you need to wrap the value in quotes to make it a string. Without the quotes it will be interpreted as an array.
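For example, a minimal sketch of the quoted form (the annotation key and JSON payload mirror the snippet quoted below; the container name in the key is a placeholder):

```yaml
template:
  pod:
    metadata:
      annotations:
        # Single quotes make the JSON array a YAML string
        # instead of a YAML flow-style list.
        ad.datadoghq.com/kafka-connect-container-name.logs: '[{"type":"file", "source":"java", "sourcecategory":"sourcecode", "service":"kafka-connect"}]'
```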


On Fri, Feb 28, 2020 at 2:04 PM <alonisser@...> wrote:
Thanks, following your advice I've added:
template:
  pod:
    metadata:
      annotations:
        ad.datadoghq.com/kafka-connect-container-name.logs: [{"type":"file", "source":"java","sourcecategory":"sourcecode", "service":"kafka-connect"}]

But it didn't work, and I saw errors in the operator pods. Following the error log:
at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: io.fabric8.kubernetes.api.model.WatchEvent["object"]->io.strimzi.api.kafka.model.KafkaConnect["spec"]->io.strimzi.api.kafka.model.KafkaConnectSpec["template"]->io.strimzi.api.kafka.model.template.KafkaConnectTemplate["pod"]->io.strimzi.api.kafka.model.template.PodTemplate["metadata"]->io.strimzi.api.kafka.model.template.MetadataTemplate["annotations"]->java.util.LinkedHashMap["ad.datadoghq.com/kafka-connect-container-name.logs"])

I guessed it was about the array, so wrapping the array in a quoted string fixed the issue.
Thanks for the help again, and I hope this will be useful for someone else.


Re: Updating the kafka connect deployment with changes in the kafka connect crd

alonisser@...
 

Turns out the issue was that the changed CRD had errors the Strimzi operator couldn't handle. Following the logs of the operator pods, I found the error, and now it updates as expected.


Re: Adding annotations and limits of kafka connect created pods

alonisser@...
 

Thanks, following your advice I've added:
template:
  pod:
    metadata:
      annotations:
        ad.datadoghq.com/kafka-connect-container-name.logs: [{"type":"file", "source":"java","sourcecategory":"sourcecode", "service":"kafka-connect"}]

But it didn't work, and I saw errors in the operator pods. Following the error log:
at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: io.fabric8.kubernetes.api.model.WatchEvent["object"]->io.strimzi.api.kafka.model.KafkaConnect["spec"]->io.strimzi.api.kafka.model.KafkaConnectSpec["template"]->io.strimzi.api.kafka.model.template.KafkaConnectTemplate["pod"]->io.strimzi.api.kafka.model.template.PodTemplate["metadata"]->io.strimzi.api.kafka.model.template.MetadataTemplate["annotations"]->java.util.LinkedHashMap["ad.datadoghq.com/kafka-connect-container-name.logs"])

I guessed it was about the array, so wrapping the array in a quoted string fixed the issue.
Thanks for the help again, and I hope this will be useful for someone else.


Updating the kafka connect deployment with changes in the kafka connect crd

alonisser@...
 

I've updated the CRD after it created the deployment, pods and services (adding configs, annotations and resources), but it seems the deployment is stuck where I started.
Is there a way to update the deployment and pods besides removing the CRD and destroying the existing configuration?


Re: Is there a way to use topic operator with zookeeper?

alonisser@...
 

Thanks for the kind and authoritative response! I'll roll my own for now 


Re: Adding annotations and limits of kafka connect created pods

alonisser@...
 

Thanks, I will try that! I was missing the pod/deployment sub-level.


Re: Is there a way to use topic operator with zookeeper?

Jakub Scholz
 

I'm afraid that is not possible. The Topic Operator currently uses ZooKeeper for storing some additional data, so it is really needed there. Since ZooKeeper will be removed from Kafka in the future, we have plans to change it to use Kafka as its storage instead of ZooKeeper. Once that is done, you should be able to use it. Until then, it will need ZooKeeper access.

Thanks & Regards
Jakub

On Sun, Feb 23, 2020 at 10:24 PM <alonisser@...> wrote:
Without ZooKeeper, that is. I need the Topic Operator without touching ZooKeeper.


Re: Adding annotations and limits of kafka connect created pods

Jakub Scholz
 

Hi,

For specifying the resources, you can do something like this:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-connect
  # ...
spec:
  # ...
  resources:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      memory: 2Gi
      cpu: 1000m
  # ...

For the annotations, it depends on what exactly you want to annotate. But basically you can use this feature: https://strimzi.io/docs/latest/full.html#assembly-customizing-deployments-str
Your YAML would look something like this:
kind: KafkaConnect
metadata:
  name: my-connect
  # ...
spec:
  # ...
  template:
    deployment:
      metadata:
        annotations:
          myanno: myvalue
    pod:
      metadata:
        annotations:
          myanno: myvalue
  # ...

Thanks & Regards
Jakub




On Sun, Feb 23, 2020 at 10:42 AM <alonisser@...> wrote:
Trying to add pod annotations (for our log connector) and resource requests/limits to the KafkaConnect-created resources.
What should I add to the resource so it creates the pods/deployment with them? (I suspect I already sent this message but can't find it now.)


Re: Is there a way to use topic operator with zookeeper?

alonisser@...
 

Without ZooKeeper, that is. I need the Topic Operator without touching ZooKeeper.


Is there a way to use topic operator with zookeeper?

alonisser@...
 

I want to use it for programmatic management of my topics, but my Kafka is hosted in Confluent and I don't think I can access the ZooKeeper cluster there.


Adding annotations and limits of kafka connect created pods

alonisser@...
 

Trying to add pod annotations (for our log connector) and resource requests/limits to the KafkaConnect-created resources.
What should I add to the resource so it creates the pods/deployment with them? (I suspect I already sent this message but can't find it now.)


RC2 of Strimzi Kafka Operators 0.17.0 is now available

Jakub Scholz
 

Release Candidate 2 of Strimzi Kafka Operators 0.17.0 is now available. It took more than two weeks since RC1, but we spent that time on a lot of testing and improving the code to make sure the quality of the 0.17.0 release is good. The main changes since RC1 include:
* Support for pausing / resuming MM2 connectors
* Fix bug in Kafka rack awareness configuration
* Add network policies for Kafka Connect when using the connector operator
* Fix rolling update restart when configuration changes
* Fix ZooKeeper scale-up bug
* Validate the replication factor in relation to number of Kafka replicas
* Various bug-fixes
* Dependency upgrades to avoid CVEs
* Improved system tests and documentation

For more details about the release candidate and the upgrade procedure, please go to https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.17.0-rc2

Thanks & Regards
Jakub


Re: Does anyone here have experience with running the schema manager (for Avro) with Strimzi Kafka?

Matthew Isaacs
 

I used the open source helm chart from Confluent: https://github.com/confluentinc/cp-helm-charts/tree/v5.4.0/charts/cp-schema-registry

The heap settings and resource limits were not sufficient for the 5.4.0 release (I need to send in a patch for this and other issues).

## Schema registry JVM Heap Option
heapOptions: "-Xms1000M -Xmx1000M"
resources:
  limits:
    memory: 2Gi
Otherwise, don't forget to specify your Kafka bootstrap endpoint:
kafka:
  bootstrapServers: "Xx-Kafka-bootstrap....:9092"

On Wed, Feb 12, 2020 at 1:51 PM alonn <alonisser@...> wrote:

>
> Thanks. Yes, I'm trying with multiple instances, though I can probably do with one if that works.
> Can you please provide a sample deployment.yaml so I can see what I'm missing?
>
>
>
> On Wed, Feb 12, 2020 at 7:51 PM Matthew Isaacs <matthew.isaacs@...> wrote:
>>
>> I'm running the Schema Registry service alongside a Strimzi-deployed Kafka. It seems to be working fine for me. Are you trying to run multiple instances of the schema registry? I'm afraid I don't fully understand what problem you're having.


Does anyone here have experience with running the schema manager (for Avro) with Strimzi Kafka?

alonisser@...
 

I've set up the most basic Kafka persistent setup (see the CR below),
but I can't seem to understand how to run the schema registry against it (with Kafka-based leader election).
I see Kafka logs with endless rebalance generations like this one:
Preparing to rebalance group schema-registry in state PreparingRebalance with old generation 24
So it does seem to actually connect, but then something goes wrong.

I'm using bootstrap servers pointing to the Kafka service's 9092 port:
bootstrap.servers = [PLAINTEXT://my-fafka-bootstrap:9092]
 

The Kafka CR:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: "internal-kafka"
spec:
  kafka:
    version: 2.4.0
    replicas: 1
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "2.4"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        class: managed-premium
        deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      class: managed-premium
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}


RC1 of Strimzi Kafka Operators 0.17.0 is now available

Jakub Scholz
 

The Release Candidate 1 of Strimzi Kafka Operators 0.17.0 is now available. It doesn't include many new features, but some of those included were in high demand. The main features are:
* Add possibility to set Java System Properties via CR yaml
* Add support for Mirror Maker 2.0
* Add Jmxtrans deployment
* Add public keys of TLS listeners to the status section of the Kafka CR

And of course many more smaller improvements and bugfixes. For more details about the release candidate and the upgrade procedure, please go to https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.17.0-rc1

Thanks
Jakub


[ANNOUNCE] Strimzi Kafka bridge 0.15.2

Paolo Patierno
 

Strimzi Kafka Bridge 0.15.2 is now available with the following changes:
* Fixed missing Jackson Core dependency
 
 
Thanks to everyone who contributed to this release.
 
Paolo
