confluent-kafka-go

Confluent's Apache Kafka Golang client

Apache-2.0 License


confluent-kafka-go - v2.3.0 Latest Release

Published by milindl 12 months ago

This is a feature release.

  • Adds support for AdminAPI DescribeCluster() and DescribeTopics() (#964, @jainruchir).
  • KIP-430: Return authorized operations in Describe Responses. (#964, @jainruchir).
  • Adds Rack to the Node type, so AdminAPI calls can expose racks for brokers (currently, all Describe Responses) (#964, @jainruchir).
  • KIP-396: completed the implementation with the addition of ListOffsets (#1029).
  • Adds cache for Schema Registry client's GetSchemaMetadata (#1042).
  • MockCluster can now be shutdown and started again to test broker availability problems (#998, @kkoehler).
  • Adds CreateTopic method to the MockCluster. (#1047, @mimikwang).
  • Honor HTTPS_PROXY environment variable, if set, for the Schema Registry client (#1065, @finncolman).
  • KIP-516: Partial support of topic identifiers. Topic identifiers in metadata response are available through the new DescribeTopics function (#1068).

Fixes

  • Fixes a bug in the mock schema registry client where the wrong ID was being returned for pre-registered schema (#971, @srlk).
  • The minimum version of Go supported has been changed from 1.16 to 1.17 (#1074).
  • Fixes an issue where testing was being imported by a non-test file, testhelpers.go. (#1049, @dmlambea).
  • Fixes the optional Coordinator field in ConsumerGroupDescription for the case where it is not known: it now contains a Node with ID -1, avoiding a segmentation fault in the underlying C library.
  • Fixes an issue with Producer.Flush. It was waiting for queue.buffering.max.ms while flushing (#1013).
  • Fixes an issue where consumer methods would not be allowed to run while the consumer was closing, and during the final partition revoke (#1073).

confluent-kafka-go is based on librdkafka v2.3.0, see the librdkafka v2.3.0 release notes for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v2.2.0

Published by milindl over 1 year ago

This is a feature release.

  • KIP-339: IncrementalAlterConfigs API (#945).
  • KIP-554: User SASL/SCRAM credentials alteration and description (#1004).

Fixes

  • Fixes a nil pointer bug in the protobuf Serializer.Serialize(), caused by an unchecked error (#997, @baganokodo2022).
  • Fixes incorrect protobuf FileDescriptor references (#989, @Mrmann87).
  • Allow fetching all partition offsets for a consumer group by passing a nil slice of partitions to AdminClient.ListConsumerGroupOffsets; previously this case was not handled correctly (#985, @alexandredantas).
  • Deprecate m.LeaderEpoch in favor of m.TopicPartition.LeaderEpoch (#1012).

confluent-kafka-go is based on librdkafka v2.2.0, see the librdkafka v2.2.0 release notes for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v2.1.1

Published by milindl over 1 year ago

This is a maintenance release.

It is strongly recommended to update to v2.1.1 if v2.1.0 is being used, as it fixes a critical issue in the consumer (#980).

confluent-kafka-go is based on librdkafka v2.1.1, see the librdkafka v2.1.1 release notes for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v2.1.0

Published by milindl over 1 year ago

This is a feature release.

  • Added Consumer SeekPartitions() method to seek multiple partitions at once and deprecated Seek() (#940).
  • KIP-320: add offset leader epoch to the TopicPartition and Message structs (#968).
  • The minimum version of Go supported has been changed from 1.14 to 1.16 (#973).
  • Add validation on the Producer, the Consumer and the AdminClient to prevent panic when they are used after close (#901).
  • Fix bug causing schema-registry URL with existing path to not be parsed correctly (#950).
  • Support for Offset types on Offset.Set() (#962, @jdockerty).
  • Added example for using rebalance callback with manual commit.

confluent-kafka-go is based on librdkafka v2.1.0, see the librdkafka v2.1.0 release notes and later ones for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v2.0.2

Published by milindl over 1 year ago

This is a feature release:

  • Added SetSaslCredentials. This new method (on the Producer, Consumer, and AdminClient) allows modifying the stored SASL PLAIN/SCRAM credentials that will be used for subsequent (new) connections to a broker (#879).
  • Channel based producer (Producer ProduceChannel()) and channel based consumer (Consumer Events()) are deprecated (#894).
  • Added IsTimeout() on Error type. This is a convenience method that checks if the error is due to a timeout (#903).
  • The timeout parameter on Seek() is now ignored and an infinite timeout is used; the method blocks until the fetcher state is updated (typically within microseconds) (#906).
  • The minimum version of Go supported has been changed from 1.11 to 1.14.
  • KIP-222 Add Consumer Group operations to Admin API.
  • KIP-518 Allow listing consumer groups per state.
  • KIP-396 Partially implemented: support for AlterConsumerGroupOffsets.
  • As a result of the above KIPs, the following were added (#923):
    • ListConsumerGroups Admin operation. Supports listing by state.
    • DescribeConsumerGroups Admin operation. Supports multiple groups.
    • DeleteConsumerGroups Admin operation. Supports multiple groups (@vsantwana).
    • ListConsumerGroupOffsets Admin operation. Currently, only supports 1 group with multiple partitions. Supports the requireStable option.
    • AlterConsumerGroupOffsets Admin operation. Currently, only supports 1 group with multiple offsets.
  • Added SetRoundtripDuration to the mock broker for setting RTT delay for a given mock broker (@kkoehler, #892).
  • Built-in support for Linux/arm64 (#933).

Fixes

  • The SpecificDeserializer.Deserialize method was not returning its result correctly, and was hence unusable. The return has been fixed (#849).
  • The schema ID to use during serialization, specified in SerializerConfig, was ignored. It is now used as expected (@perdue, #870).
  • Creating a new schema registry client with an SSL CA Certificate led to a panic. This was due to a nil pointer, fixed with proper initialization (@HansK-p, @ju-popov, #878).

Upgrade Considerations

  • OpenSSL 3.0.x upgrade in librdkafka requires a major version bump, as some legacy ciphers need to be explicitly configured to continue working, but it is highly recommended not to use them. The rest of the API remains backward compatible, see the librdkafka release notes below for details.
  • As required by the Go module system, a suffix with the new major version has been added to the module name, and package imports must reflect this change.

confluent-kafka-go is based on librdkafka v2.0.2, see the librdkafka v2.0.0 release notes and later ones for a complete list of changes, enhancements, fixes and upgrade considerations.

Note: There were no confluent-kafka-go v2.0.0 or v2.0.1 releases.

confluent-kafka-go - v1.9.2

Published by emasab about 2 years ago

v1.9.2 is a maintenance release:

  • Bundles librdkafka v1.9.2.
  • Example for using go clients with AWS lambda (@jliunyu, #823).
  • OAUTHBEARER unsecured producer, consumer and OIDC examples.

confluent-kafka-go is based on librdkafka v1.9.2, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v1.9.1

Published by emasab over 2 years ago

v1.9.1 is a feature release:

confluent-kafka-go is based on librdkafka v1.9.1, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v1.9.0

Published by emasab over 2 years ago

v1.9.0 is a feature release:

  • OAUTHBEARER OIDC support
  • KIP-140 Admin API ACL support
  • Added MockCluster for functional testing of applications without the need
    for a real Kafka cluster (by @SourceFellows and @kkoehler, #729).
    See examples/mock_cluster.

Fixes

  • Fix Rebalance events behavior for static membership (@jliunyu, #757, #798).
  • Fix consumer close taking 10 seconds when there's no rebalance needed (@jliunyu, #757).

confluent-kafka-go is based on librdkafka v1.9.0, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v1.8.2

Published by edenhill almost 3 years ago

confluent-kafka-go v1.8.2

This is a maintenance release:

  • Bundles librdkafka v1.8.2
  • Check termination channel while reading delivery reports (by @zjj)
  • Added convenience method Consumer.StoreMessage() (@finncolman, #676)

confluent-kafka-go is based on librdkafka v1.8.2, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

Note: There were no confluent-kafka-go v1.8.0 and v1.8.1 releases.

confluent-kafka-go - v1.7.0

Published by edenhill over 3 years ago

confluent-kafka-go is based on librdkafka v1.7.0, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

Enhancements

  • Experimental Windows support (by @neptoess).
  • The produced message headers are now available in the delivery report
    Message.Headers if the Producer's go.delivery.report.fields
    configuration property is set to include headers, e.g.:
    "go.delivery.report.fields": "key,value,headers"
    This comes at a performance cost and is thus disabled by default.

Fixes

  • AdminClient.CreateTopics() previously did not accept the default value (-1) of
    ReplicationFactor without specifying an explicit ReplicaAssignment; this is
    now fixed.

confluent-kafka-go - v1.6.1

Published by edenhill over 3 years ago

v1.6.1

v1.6.1 is a feature release:

  • KIP-429: Incremental consumer rebalancing - see cooperative_consumer_example.go
    for an example of how to use the new incremental rebalancing consumer.
  • KIP-480: Sticky producer partitioner - increase throughput and decrease
    latency by sticking to a single random partition for some time.
  • KIP-447: Scalable transactional producer - a single transaction producer can
    now be used for multiple input partitions.
  • Add support for go.delivery.report.fields by @kevinconaway

Fixes

  • For dynamically linked builds (-tags dynamic) there was previously a possible conflict
    between the bundled librdkafka headers and the system installed ones. This is now fixed. (@KJTsanaktsidis)

confluent-kafka-go is based on and bundles librdkafka v1.6.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v1.5.2

Published by edenhill almost 4 years ago

confluent-kafka-go v1.5.2

v1.5.2 is a maintenance release with the following fixes and enhancements:

  • Bundles librdkafka v1.5.2 - see release notes for all enhancements and fixes.
  • Documentation fixes

confluent-kafka-go is based on librdkafka v1.5.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

confluent-kafka-go - v1.4.2

Published by edenhill over 4 years ago

confluent-kafka-go v1.4.2

v1.4.2 is a maintenance release:

  • The bundled librdkafka directory (kafka/librdkafka) is no longer pruned by Go mod vendor import.
  • Bundled librdkafka upgraded to v1.4.2, highlights:
    • System root CA certificates should now be picked up automatically on most platforms
    • Fix produce/consume hang after partition goes away and comes back,
      such as when a topic is deleted and re-created (regression in v1.3.0).

librdkafka v1.4.2 changes

See the librdkafka v1.4.2 release notes for changes to the bundled librdkafka included with the Go client.

confluent-kafka-go - v1.4.0

Published by edenhill over 4 years ago

confluent-kafka-go v1.4.0

  • Added Transactional Producer API and full Exactly-Once-Semantics (EOS) support.
  • A prebuilt version of the latest version of librdkafka is now bundled with the confluent-kafka-go client. A separate installation of librdkafka is NO LONGER REQUIRED or used.
  • Added support for sending client (librdkafka) logs to Logs() channel.
  • Added Consumer.Position() to retrieve the current consumer offsets.
  • The Error type now has additional attributes, such as IsRetriable() to deem if the errored operation can be retried. This is currently only exposed for the Transactional API.
  • Removed support for Go < 1.9

Transactional API

librdkafka and confluent-kafka-go now have complete Exactly-Once-Semantics (EOS) functionality, supporting the idempotent producer (since v1.0.0), a transaction-aware consumer (since v1.2.0) and full producer transaction support (in this release).
This enables developers to create Exactly-Once applications with Apache Kafka.

See the Transactions in Apache Kafka page for an introduction and check the transactions example for a complete transactional application example.

Bundled librdkafka

The confluent-kafka-go client now comes with batteries included, namely prebuilt versions of librdkafka for the most popular platforms; you will thus no longer need to install or manage librdkafka separately.

Supported platforms are:

  • Mac OSX
  • glibc-based Linux x64 (e.g., RedHat, Debian, etc) - lacks Kerberos/GSSAPI support
  • musl-based Linux x64 (Alpine) - lacks Kerberos/GSSAPI support

These prebuilt librdkafka builds include all features (e.g., SSL, compression), except for the Linux builds, which lack Kerberos/GSSAPI support due to libsasl2 dependencies.
If you need Kerberos support, or you are running on a platform for which prebuilt librdkafka is not available (see above), you will need to install librdkafka separately (preferably through the Confluent APT and RPM repositories) and build your application with -tags dynamic to disable the built-in librdkafka and instead link your application dynamically to librdkafka.

librdkafka v1.4.0 changes

Full librdkafka v1.4.0 release notes.

Highlights:

  • KIP-98: Transactional Producer API
  • KIP-345: Static consumer group membership (by @rnpridgeon)
  • KIP-511: Report client software name and version to broker
  • SASL SCRAM security fixes.

confluent-kafka-go - v1.3.0

Published by rigelbm almost 5 years ago

confluent-kafka-go v1.3.0

  • Purge messages API (by @khorshuheng at GoJek).

  • ClusterID and ControllerID APIs.

  • Go Modules support.

  • Fixed memory leak on calls to NewAdminClient(). (discovered by @gabeysunda)

  • Requires librdkafka v1.3.0 or later

librdkafka v1.3.0 changes

Full librdkafka v1.3.0 release notes.

  • KIP-392: Fetch messages from closest replica/follower (by @mhowlett).
  • Experimental mock broker to make application and librdkafka development testing easier.
  • Fixed consumer_lag in stats when consuming from broker versions <0.11.0.0 (regression in librdkafka v1.2.0).

confluent-kafka-go - v1.1.0

Published by edenhill over 5 years ago

confluent-kafka-go v1.1.0

  • OAUTHBEARER SASL authentication (KIP-255) by Ron Dagostini (@rondagostino) at StateStreet.
  • Offset commit metadata (@damour, #353)
  • Requires librdkafka v1.1.0 or later

Noteworthy librdkafka v1.1.0 changes

Full librdkafka v1.1.0 release notes.

  • SASL OAUTHBEARER support (by @rondagostino at StateStreet)
  • In-memory SSL certificates (PEM, DER, PKCS#12) support (by @noahdav at Microsoft)
  • Pluggable broker SSL certificate verification callback (by @noahdav at Microsoft)
  • Use Windows Root/CA SSL Certificate Store (by @noahdav at Microsoft)
  • ssl.endpoint.identification.algorithm=https (off by default) to validate the broker hostname matches the certificate. Requires OpenSSL >= 1.0.2.
  • Improved GSSAPI/Kerberos ticket refresh

Upgrade considerations

  • Windows SSL users will no longer need to specify a CA certificate file/directory (ssl.ca.location), librdkafka will load the CA certs by default from the Windows Root Certificate Store.
  • SSL peer (broker) certificate verification is now enabled by default (disable with enable.ssl.certificate.verification=false)
  • %{broker.name} is no longer supported in sasl.kerberos.kinit.cmd since kinit refresh is no longer executed per broker, but per client instance.

SSL

New configuration properties:

  • ssl.key.pem - client's private key as a string in PEM format
  • ssl.certificate.pem - client's public key as a string in PEM format
  • enable.ssl.certificate.verification - enable (default) / disable OpenSSL's builtin broker certificate verification.
  • ssl.endpoint.identification.algorithm - set to https to verify the broker's hostname against its certificate (disabled by default).
  • The private key data is now securely cleared from memory after last use.
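In the Go client these map onto ordinary ConfigMap entries, e.g. (a configuration sketch, not a complete program; all values are placeholders):

```go
cfg := &kafka.ConfigMap{
	"bootstrap.servers": "broker:9093",
	"security.protocol": "ssl",
	// In-memory PEM strings instead of ssl.key.location / ssl.certificate.location:
	"ssl.key.pem":         "-----BEGIN PRIVATE KEY-----\n...",
	"ssl.certificate.pem": "-----BEGIN CERTIFICATE-----\n...",
	// Broker certificate verification is now on by default; hostname
	// verification is opt-in:
	"ssl.endpoint.identification.algorithm": "https",
}
```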

Enhancements

  • Bump message.timeout.ms max value from 15 minutes to 24 days (@sarkanyi)

Fixes

  • SASL GSSAPI/Kerberos: Don't run kinit refresh for each broker, just per client instance.
  • SASL GSSAPI/Kerberos: Changed sasl.kerberos.kinit.cmd to first attempt ticket refresh, then acquire.
  • SASL: Proper locking on broker name acquisition.
  • Consumer: max.poll.interval.ms now correctly handles blocking poll calls, allowing a longer poll timeout than the max poll interval.

confluent-kafka-go - v1.0.0

Published by edenhill over 5 years ago

confluent-kafka-go v1.0.0

This release adds support for librdkafka v1.0.0, featuring the EOS Idempotent Producer, Sparse connections, KIP-62 - max.poll.interval.ms support, zstd, and more.

See the librdkafka v1.0.0 release notes for more information and upgrade considerations.

Go client enhancements

  • Now requires librdkafka v1.0.0.
  • A new IsFatal() function has been added to KafkaError to help the application differentiate between temporary and fatal errors. Fatal errors are currently only triggered by the idempotent producer.
  • Added kafka.NewError() to make it possible to create error objects from user code / unit test (Artem Yarulin)

Go client fixes

  • Deprecate the use of default.topic.config. Topic configuration should now be set on the standard ConfigMap.
  • Reject delivery.report.only.error=true on producer creation (#306)
  • Avoid use of "Deprecated: " prefix (#268)
  • PartitionEOF must now be explicitly enabled through enable.partition.eof

Make sure to check out the Idempotent Producer example

confluent-kafka-go - v0.11.6

Published by edenhill almost 6 years ago

Admin API

This release adds support for the Topic Admin API (KIP-4):

  • Create and delete topics
  • Increase topic partition count
  • Read and modify broker and topic configuration
  • Requires librdkafka >= v0.11.6

results, err := a.CreateTopics(
	ctx,
	// Multiple topics can be created simultaneously
	// by providing additional TopicSpecification structs here.
	[]kafka.TopicSpecification{{
		Topic:             "mynewtopic",
		NumPartitions:     20,
		ReplicationFactor: 3}})

More examples.

Fixes and enhancements

  • Make sure plugins are set before other configuration options (#225, @dtheodor)
  • Fix metadata memory leak
  • Clone config before mutating it in NewProducer and NewConsumer (@vlad-alexandru-ionescu)
  • Enable Error events to be emitted from librdkafka errors, e.g., ErrAllBrokersDown, et al. (#200)

confluent-kafka-go - v0.11.4

Published by edenhill over 6 years ago

Announcements

  • This release drops support for Golang < 1.7

  • Requires librdkafka v0.11.4 or later

Message header support

Support for Kafka message headers has been added (requires broker version >= v0.11.0).

When producing messages, pass a []kafka.Header list:

        err = p.Produce(&kafka.Message{
                TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
                Value:          []byte(value),
                Headers:        []kafka.Header{{Key: "myTestHeader", Value: []byte("header values are binary")}},
        }, deliveryChan)

Message headers are available to the consumer as Message.Headers:

	msg, err := c.ReadMessage(-1)
	if err != nil {
		fmt.Printf("%% Consumer error: %v\n", err)
		continue
	}
	fmt.Printf("%% Message on %s:\n%s\n", msg.TopicPartition, string(msg.Value))
	if msg.Headers != nil {
		fmt.Printf("%% Headers: %v\n", msg.Headers)
	}

Enhancements

  • Message Headers support
  • Close event channel when consumer is closed (#123 by @czchen)
  • Added ReadMessage() convenience method to Consumer
  • producer: Make events channel size configurable (@agis)
  • Added Consumer.StoreOffsets() (#72)
  • Added ConfigMap.Get() (#26)
  • Added Pause() and Resume() APIs
  • Added Consumer.Committed() API
  • Added OffsetsForTimes() API to Consumer and Producer

Fixes

  • Static builds should now work on both OSX and Linux (#137, #99)
  • Update error constants from librdkafka
  • Enable produce.offset.report by default (unless overridden)
  • move test helpers that need testing pkg to _test.go file (@gwilym)
  • Build and run-time checking of librdkafka version (#88)
  • Remove gotos (@jadekler)
  • Fix Producer Value&Key slice referencing to avoid cgo pointer checking failures (#24)
  • Fix Go 1.10 build errors (drop pkg-config --static ..)

confluent-kafka-go - v0.11.0

Published by edenhill about 7 years ago

This is a minimal librdkafka version-synchronized release of the Go client.

Changes:

  • Requires librdkafka v0.11.0 or later
  • Added stats events (#57)
  • Updated librdkafka error codes
  • Fix signal channel buffering in example (#66)