Re: support for 2.4.1

Jakub Scholz
 

Kafka 2.4.1 will be supported in Strimzi 0.18.0. 0.17.0 supports only 2.4.0.
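
If you want to pin the broker version explicitly, here is a minimal sketch of the relevant part of the Kafka CR (the cluster name is made up, field names as in the Strimzi docs):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.4.0   # on Strimzi 0.17.0; 2.4.1 only once you are on 0.18.0
    # ...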

Thanks & Regards
Jakub

On Wed, Apr 22, 2020 at 11:06 PM <alonisser@...> wrote:
Does the new 0.17 operator use Kafka Connect 2.4.1? Or does it still use the 2.4.0 image?


support for 2.4.1

alonisser@...
 

Does the new 0.17 operator use Kafka Connect 2.4.1? Or does it still use the 2.4.0 image?


[ANNOUNCE] Strimzi Kafka OAuth library 0.4.0 release

Jakub Scholz
 

The 0.4.0 version of the Strimzi Kafka OAuth library for using OAuth authentication in Kafka clients and brokers has been released and should be available in the Maven repositories. The main improvements are:
* Improved compatibility with different OAuth authorization servers
* Deprecation of some configuration options
* Updated dependencies
* Improved examples and documentation

For more information, see the release page on GitHub: https://github.com/strimzi/strimzi-kafka-oauth/releases/tag/0.4.0 

Thanks to everyone who contributed to this release.

Thanks & Regards
Jakub


RC1 of Strimzi Kafka OAuth library 0.4.0

Jakub Scholz
 

Hi,

Release Candidate 1 of the 0.4.0 version of the Strimzi Kafka OAuth library is now available for testing: https://github.com/strimzi/strimzi-kafka-oauth/releases/tag/0.4.0-rc1

The main changes are:
* Deprecated configuration options
* Compatibility improvements
* Updated dependencies
* Instructions for developers added
* Improvements to examples and documentation

To test it, you can use the staging Maven repository:

  <repositories>
    <repository>
      <id>staging</id>
      <url>https://oss.sonatype.org/content/repositories/iostrimzi-1062/</url>
    </repository>
  </repositories>

Any feedback can be provided here or as a GitHub issue.

Thanks & Regards
Jakub


[ANNOUNCE] Strimzi Kafka Operators 0.17.0 is now available

Jakub Scholz
 

Strimzi Kafka Operators 0.17.0 has been released. The main changes since 0.16 include:
* Add possibility to set Java System Properties via the CR YAML (see the sketch after this list)
* Add support for Mirror Maker 2.0
* Add Jmxtrans deployment
* Add public keys of TLS listeners to the status section of the Kafka CR
* Various bug-fixes
* Dependency upgrades to avoid CVEs
* Improved system tests and documentation
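
As a rough sketch of where the Java System Properties go in the Kafka CR (the property name and value below are just examples):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    jvmOptions:
      javaSystemProperties:   # passed to the broker JVM as -D options
        - name: javax.net.debug
          value: ssl
    # ...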

For more details about the release and the upgrade procedure, please go to https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.17.0

Full list of changes can be found under the [0.17.0 milestone](https://github.com/strimzi/strimzi-kafka-operator/milestone/16?closed=1).

Thanks to everyone who contributed to this release!

Thanks & Regards
Strimzi team


RC4 of Strimzi Kafka Operators 0.17.0 is now available

Jakub Scholz
 

Release Candidate 4 of Strimzi Kafka Operators 0.17.0 is now available. There is only one additional fix since RC3:

* Make sure the ZookeeperScaler works well with custom CAs without the PKCS12 files

For more details about the release candidate and the upgrade procedure, please go to https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.17.0-rc4

Unless there are any new issues, we will probably release it tomorrow.

Thanks & Regards
Jakub


RC3 of Strimzi Kafka Operators 0.17.0 is now available

Jakub Scholz
 

Release Candidate 3 of Strimzi Kafka Operators 0.17.0 is now available. There are some important fixes and improvements since RC2:

* Fixed NPE which happened in KafkaRoller under rare circumstances
* Fix ordering of addresses in config map to avoid random rolling updates
* Fix scaling of Zookeeper 3.5
* Fix several issues with Connector operator and Mirror Maker 2
* Many other bug fixes, test and docs improvements

For more details about the release candidate and the upgrade procedure, please go to https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.17.0-rc3

Thanks & Regards
Jakub


Re: Invalid value 10737418240 for configuration segment.bytes: Not a number of type INT

Tom Bentley
 

Hi,

segment.bytes uses the Java int type, which has a maximum possible value of 2147483647 (2^31-1). So although 10737418240 is a valid number, it's too big.
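
A value that fits into an int works fine, for example 1 GiB (only the config section shown, the rest of your topic spec unchanged):

spec:
  config:
    cleanup.policy: compact
    segment.bytes: 1073741824   # 1 GiB, below the int limit of 2147483647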

Kind regards,

Tom


On Fri, Mar 13, 2020 at 8:47 AM <yohei@...> wrote:
Hi,

I am trying to apply a custom segment.bytes value to a Kafka topic as follows.

----
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: default
  name: myname
  namespace: data
spec:
  config:
    cleanup.policy: compact
    segment.bytes: 10737418240
  partitions: 3
  replicas: 2
  topicName: mytopicname
---

But I got this error on broker log.

2020-03-13 06:50:44,048 INFO [Admin Manager on Broker 2]: Invalid config value for resource ConfigResource(type=TOPIC, name='mytopicname'): Invalid value 10737418240 for configuration segment.bytes: Not a number of type INT (kafka.server.AdminManager) [data-plane-kafka-request-handler-2]

segment.bytes in my manifest file looks like a valid integer. What is wrong with this configuration?
Thank you.


Invalid value 10737418240 for configuration segment.bytes: Not a number of type INT

yohei@...
 

Hi,

I am trying to apply a custom segment.bytes value to a Kafka topic as follows.

----
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: default
  name: myname
  namespace: data
spec:
  config:
    cleanup.policy: compact
    segment.bytes: 10737418240
  partitions: 3
  replicas: 2
  topicName: mytopicname
---

But I got this error on broker log.

2020-03-13 06:50:44,048 INFO [Admin Manager on Broker 2]: Invalid config value for resource ConfigResource(type=TOPIC, name='mytopicname'): Invalid value 10737418240 for configuration segment.bytes: Not a number of type INT (kafka.server.AdminManager) [data-plane-kafka-request-handler-2]

segment.bytes in my manifest file looks like a valid integer. What is wrong with this configuration?
Thank you.


Re: Adding annotations and limits of kafka connect created pods

Jakub Scholz
 

Yeah, annotation values are strings, so you need to wrap the value in quotes to make it a string. Without the quotes it will be interpreted as an array.
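
For example (key and value taken from your message), quoting the whole value makes it a single string:

template:
  pod:
    metadata:
      annotations:
        ad.datadoghq.com/kafka-connect-container-name.logs: '[{"type":"file", "source":"java", "sourcecategory":"sourcecode", "service":"kafka-connect"}]'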


On Fri, Feb 28, 2020 at 2:04 PM <alonisser@...> wrote:
Thanks, following your advice I've added:

template:
  pod:
    metadata:
      annotations:
        ad.datadoghq.com/kafka-connect-container-name.logs: [{"type":"file", "source":"java","sourcecategory":"sourcecode", "service":"kafka-connect"}]

But it didn't work, and I saw errors in the operator pods. Following the error log:

at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: io.fabric8.kubernetes.api.model.WatchEvent["object"]->io.strimzi.api.kafka.model.KafkaConnect["spec"]->io.strimzi.api.kafka.model.KafkaConnectSpec["template"]->io.strimzi.api.kafka.model.template.KafkaConnectTemplate["pod"]->io.strimzi.api.kafka.model.template.PodTemplate["metadata"]->io.strimzi.api.kafka.model.template.MetadataTemplate["annotations"]->java.util.LinkedHashMap["ad.datadoghq.com/kafka-connect-container-name.logs"])

I guessed it was about the array, so wrapping the array as a quoted string fixed the issue.
Thanks for the help again, and I hope this will be useful for someone else.


Re: Updating the kafka connect deployment with changes in the kafka connect crd

alonisser@...
 

Turns out the issue was that the changed CRD had errors the Strimzi operator couldn't handle. Following the logs of the operator pods, I found the error, and now it updates as expected.


Re: Adding annotations and limits of kafka connect created pods

alonisser@...
 

Thanks, following your advice I've added:

template:
  pod:
    metadata:
      annotations:
        ad.datadoghq.com/kafka-connect-container-name.logs: [{"type":"file", "source":"java","sourcecategory":"sourcecode", "service":"kafka-connect"}]

But it didn't work, and I saw errors in the operator pods. Following the error log:

at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: io.fabric8.kubernetes.api.model.WatchEvent["object"]->io.strimzi.api.kafka.model.KafkaConnect["spec"]->io.strimzi.api.kafka.model.KafkaConnectSpec["template"]->io.strimzi.api.kafka.model.template.KafkaConnectTemplate["pod"]->io.strimzi.api.kafka.model.template.PodTemplate["metadata"]->io.strimzi.api.kafka.model.template.MetadataTemplate["annotations"]->java.util.LinkedHashMap["ad.datadoghq.com/kafka-connect-container-name.logs"])

I guessed it was about the array, so wrapping the array as a quoted string fixed the issue.
Thanks for the help again, and I hope this will be useful for someone else.


Updating the kafka connect deployment with changes in the kafka connect crd

alonisser@...
 

I've updated the CRD after it created the deployment, pods and services (adding configs, annotations and resources), but it seems the deployment is stuck where I started.
Is there a way to update the deployment and pods besides removing the CRD and destroying the existing configuration?


Re: Is there a way to use topic operator with zookeeper?

alonisser@...
 

Thanks for the kind and authoritative response! I'll roll my own for now 


Re: Adding annotations and limits of kafka connect created pods

alonisser@...
 

Thanks, I will try that! I was missing the pod/deployment sub-level.


Re: Is there a way to use topic operator with zookeeper?

Jakub Scholz
 

I'm afraid that is not possible. The Topic Operator currently uses Zookeeper for storing some additional data, so it is really needed there. Since Zookeeper will be removed from Kafka in the future, we have some plans to change it to use Kafka as its storage instead of Zookeeper. Once that is done, you should be able to use it. Until then, it will need Zookeeper access.

Thanks & Regards
Jakub

On Sun, Feb 23, 2020 at 10:24 PM <alonisser@...> wrote:
Without Zookeeper, that is; I need the topic operator without touching Zookeeper.


Re: Adding annotations and limits of kafka connect created pods

Jakub Scholz
 

Hi,

For specifying the resources, you can do something like this:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-connect
  # ...
spec:
  # ...
  resources:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      memory: 2Gi
      cpu: 1000m
  # ...

For the annotations, it depends on what exactly you want to annotate. But basically you can use this feature: https://strimzi.io/docs/latest/full.html#assembly-customizing-deployments-str
Your YAML would look something like this:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-connect
  # ...
spec:
  # ...
  template:
    deployment:
      metadata:
        annotations:
          myanno: myvalue
    pod:
      metadata:
        annotations:
          myanno: myvalue
  # ...

Thanks & Regards
Jakub




On Sun, Feb 23, 2020 at 10:42 AM <alonisser@...> wrote:
Trying to add pod annotations (for our log connector) and resource requests/limits to the KafkaConnect-created resources.
What should I add to the resource so it creates the pods/deployment with them? (I suspect I already sent this message but can't find it now.)


Re: Is there a way to use topic operator with zookeeper?

alonisser@...
 

Without Zookeeper, that is; I need the topic operator without touching Zookeeper.


Is there a way to use topic operator with zookeeper?

alonisser@...
 

I want to use it for programmatic management of my topics, but my Kafka is hosted in Confluent and I don't think I can access the Zookeeper cluster there.


Adding annotations and limits of kafka connect created pods

alonisser@...
 

Trying to add pod annotations (for our log connector) and resource requests/limits to the KafkaConnect-created resources.
What should I add to the resource so it creates the pods/deployment with them? (I suspect I already sent this message but can't find it now.)
