This is a feature release.
Added DescribeCluster() and DescribeTopics() (#964, @jainruchir).
Added Rack to the Node type, so AdminAPI calls can expose racks for brokers (currently, all Describe Responses) (#964, @jainruchir).
Added a cache for the Schema Registry client's GetSchemaMetadata (#1042).
Added a CreateTopic method to the MockCluster (#1047, @mimikwang); a sketch follows this list.
Honor the HTTPS_PROXY environment variable, if set, for the Schema Registry client (#1065, @finncolman).
Updates to the DescribeTopics function (#1068).
Fixed an issue where testing was being imported by a non-test file, testhelpers.go (#1049, @dmlambea).
Fixed the Coordinator field in ConsumerGroupDescription in case it's not known. It now contains a Node with ID -1 in that case. Avoids a C segmentation fault.
Fixed Producer.Flush: it was waiting for queue.buffering.max.ms while flushing (#1013).
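As a rough illustration of the new MockCluster CreateTopic method, here is a minimal sketch; the topic name and the partition/replica counts are arbitrary, and the exact CreateTopic signature (name, partitions, replication factor) is an assumption:

package main

import (
    "fmt"

    "github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
    mc, err := kafka.NewMockCluster(1)
    if err != nil {
        panic(err)
    }
    defer mc.Close()

    // Pre-create a topic on the mock cluster instead of relying on
    // auto-creation when the first message is produced (assumed signature).
    if err := mc.CreateTopic("demo-topic", 4, 1); err != nil {
        panic(err)
    }

    // Point a real Producer or Consumer at the in-process mock brokers.
    fmt.Println("bootstrap.servers =", mc.BootstrapServers())
}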
confluent-kafka-go is based on librdkafka v2.3.0, see the librdkafka v2.3.0 release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
Published by milindl over 1 year ago
This is a feature release.
Fixed an issue in Serializer.Serialize(), caused due to an unchecked error (#997, @baganokodo2022).
Handle a nil slice in AdminClient.ListConsumerGroupOffsets, when earlier it was not processing that correctly (#985, @alexandredantas).
confluent-kafka-go is based on librdkafka v2.2.0, see the librdkafka v2.2.0 release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
Published by milindl over 1 year ago
This is a maintenance release.
It is strongly recommended to update to v2.1.1 if v2.1.0 is being used, as it fixes a critical issue in the consumer (#980).
confluent-kafka-go is based on librdkafka v2.1.1, see the librdkafka v2.1.1 release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
Published by milindl over 1 year ago
This is a feature release.
Added a SeekPartitions() method to seek multiple partitions at once and deprecated Seek() (#940); a sketch follows at the end of these notes.
Updated Offset.Set() (#962, @jdockerty).
confluent-kafka-go is based on librdkafka v2.1.0, see the librdkafka v2.1.0 release notes and later ones for a complete list of changes, enhancements, fixes and upgrade considerations.
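A minimal sketch of seeking several partitions in one call; the broker address, group id, topic and offsets are placeholders, and the assumption is that SeekPartitions returns the per-partition results alongside an error:

package main

import (
    "log"

    "github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
    c, err := kafka.NewConsumer(&kafka.ConfigMap{
        "bootstrap.servers": "localhost:9092",
        "group.id":          "seek-demo",
    })
    if err != nil {
        log.Fatal(err)
    }
    defer c.Close()

    topic := "demo-topic"
    // Manually assign two partitions so there is something to seek on.
    if err := c.Assign([]kafka.TopicPartition{
        {Topic: &topic, Partition: 0},
        {Topic: &topic, Partition: 1},
    }); err != nil {
        log.Fatal(err)
    }

    // Seek both assigned partitions to explicit offsets in a single call,
    // instead of one deprecated Seek() call per partition.
    seeked, err := c.SeekPartitions([]kafka.TopicPartition{
        {Topic: &topic, Partition: 0, Offset: 42},
        {Topic: &topic, Partition: 1, Offset: 100},
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, tp := range seeked {
        log.Printf("seeked %v", tp)
    }
}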
Published by milindl over 1 year ago
This is a feature release:
Added SetSaslCredentials. This new method (on the Producer, Consumer, and AdminClient) allows modifying the stored SASL PLAIN/SCRAM credentials that will be used for subsequent (new) connections to a broker (#879); a sketch follows these notes.
The channel based producer (ProduceChannel()) and channel based consumer (Consumer Events()) are deprecated (#894).
Added IsTimeout() on the Error type. This is a convenience method that checks if the error is due to a timeout (#903).
The timeout parameter of Seek() is now ignored and an infinite timeout is used; the method will block until the fetcher state is updated (typically within microseconds) (#906).
Added the ListConsumerGroups Admin operation. Supports listing by state.
Added the DescribeConsumerGroups Admin operation. Supports multiple groups.
Added the DeleteConsumerGroups Admin operation. Supports multiple groups (@vsantwana).
Added the ListConsumerGroupOffsets Admin operation. Currently only supports one group with multiple partitions. Supports the requireStable option.
Added the AlterConsumerGroupOffsets Admin operation. Currently only supports one group with multiple offsets.
Added SetRoundtripDuration to the mock broker for setting RTT delay for a given mock broker (@kkoehler, #892).
The SpecificDeserializer.Deserialize method was not returning its result correctly, and was hence unusable. The return has been fixed (#849).
The schema ID to use during serialization, specified in SerializerConfig, was ignored. It is now used as expected (@perdue, #870).
Fixed a nil pointer dereference with proper initialization (@HansK-p, @ju-popov, #878).
confluent-kafka-go is based on librdkafka v2.0.2, see the librdkafka v2.0.0 release notes and later ones for a complete list of changes, enhancements, fixes and upgrade considerations.
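A minimal sketch of rotating SASL credentials on a live client; the broker address, usernames and passwords are placeholders, and the assumed signature is SetSaslCredentials(username, password string) error:

package main

import (
    "log"

    "github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
    p, err := kafka.NewProducer(&kafka.ConfigMap{
        "bootstrap.servers": "broker:9092",
        "security.protocol": "SASL_SSL",
        "sasl.mechanism":    "PLAIN",
        "sasl.username":     "old-user",
        "sasl.password":     "old-secret",
    })
    if err != nil {
        log.Fatal(err)
    }
    defer p.Close()

    // Only connections opened after this call use the new credentials;
    // existing connections keep working until they are closed.
    if err := p.SetSaslCredentials("new-user", "new-secret"); err != nil {
        log.Fatal(err)
    }
}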
Note: There were no confluent-kafka-go v2.0.0 or v2.0.1 releases.
Published by emasab about 2 years ago
v1.9.2 is a maintenance release:
confluent-kafka-go is based on librdkafka v1.9.2, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Published by emasab over 2 years ago
v1.9.1 is a feature release:
Schema Registry support for Avro Generic and Specific, Protocol Buffers and JSON Schema. (@rayokota, #776).
Built-in support for Mac OSX M1 / arm64. (#818).
confluent-kafka-go is based on librdkafka v1.9.1, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.
Published by emasab over 2 years ago
v1.9.0 is a feature release:
confluent-kafka-go is based on librdkafka v1.9.0, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Published by edenhill almost 3 years ago
This is a maintenance release:
confluent-kafka-go is based on librdkafka v1.8.2, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Note: There were no confluent-kafka-go v1.8.0 and v1.8.1 releases.
Published by edenhill over 3 years ago
confluent-kafka-go is based on librdkafka v1.7.0, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
The delivery report now includes Message.Headers if the Producer's go.delivery.report.fields configuration property includes headers, e.g.: "go.delivery.report.fields": "key,value,headers" (a sketch follows below).
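A minimal sketch of opting headers into delivery reports via go.delivery.report.fields; the broker address and topic are placeholders:

package main

import (
    "fmt"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    p, err := kafka.NewProducer(&kafka.ConfigMap{
        "bootstrap.servers":         "localhost:9092",
        "go.delivery.report.fields": "key,value,headers",
    })
    if err != nil {
        panic(err)
    }
    defer p.Close()

    topic := "headers-demo"
    if err := p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte("payload"),
        Headers:        []kafka.Header{{Key: "trace-id", Value: []byte("abc123")}},
    }, nil); err != nil {
        panic(err)
    }

    // With "headers" listed above, the delivery report event carries the
    // original headers back to the application.
    for e := range p.Events() {
        if m, ok := e.(*kafka.Message); ok {
            fmt.Printf("delivered to %v, headers: %v\n", m.TopicPartition, m.Headers)
            return
        }
    }
}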
Published by edenhill over 3 years ago
v1.6.1 is a feature release:
Added the go.delivery.report.fields configuration property (by @kevinconaway).
When building with dynamic linking of librdkafka (-tags dynamic) there was previously a possible conflict, which this release resolves.
confluent-kafka-go is based on and bundles librdkafka v1.6.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Published by edenhill almost 4 years ago
v1.5.2 is a maintenance release with the following fixes and enhancements:
confluent-kafka-go is based on librdkafka v1.5.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Published by edenhill over 4 years ago
v1.4.2 is a maintenance release:
The bundled librdkafka sources (kafka/librdkafka) are no longer pruned by Go mod vendor import.
See the librdkafka v1.4.2 release notes for changes to the bundled librdkafka included with the Go client.
Published by edenhill over 4 years ago
Client logs can now be forwarded to the Logs() channel.
Added Consumer.Position() to retrieve the current consumer offsets.
The Error type now has additional attributes, such as IsRetriable(), to deem if the errored operation can be retried. This is currently only exposed for the Transactional API.
librdkafka and confluent-kafka-go now have complete Exactly-Once-Semantics (EOS) functionality, supporting the idempotent producer (since v1.0.0), a transaction-aware consumer (since v1.2.0) and full producer transaction support (in this release).
This enables developers to create Exactly-Once applications with Apache Kafka.
See the Transactions in Apache Kafka page for an introduction and check the transactions example for a complete transactional application example.
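As a rough illustration of the producer transaction flow (InitTransactions, BeginTransaction, then CommitTransaction or AbortTransaction), here is a minimal sketch; the broker address, topic and transactional.id are placeholders and error handling is kept deliberately terse:

package main

import (
    "context"
    "log"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    topic := "transactions-demo"
    p, err := kafka.NewProducer(&kafka.ConfigMap{
        "bootstrap.servers": "localhost:9092",
        "transactional.id":  "demo-transactional-producer",
    })
    if err != nil {
        log.Fatal(err)
    }
    defer p.Close()

    ctx := context.Background()

    // Register the transactional.id and fence any older producer instances.
    if err := p.InitTransactions(ctx); err != nil {
        log.Fatal(err)
    }
    if err := p.BeginTransaction(); err != nil {
        log.Fatal(err)
    }
    if err := p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte("only visible if the transaction commits"),
    }, nil); err != nil {
        log.Fatal(err)
    }

    // Commit atomically; on failure, abort so read_committed consumers
    // never see the produced messages.
    if err := p.CommitTransaction(ctx); err != nil {
        if abortErr := p.AbortTransaction(ctx); abortErr != nil {
            log.Fatal(abortErr)
        }
    }
}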
The confluent-kafka-go client now comes with batteries included, namely prebuilt versions of librdkafka for the most popular platforms, so you no longer need to install or manage librdkafka separately.
Supported platforms are:
These prebuilt librdkafka builds have all features (e.g., SSL, compression, etc.) except for the Linux builds, which, due to libsasl2 dependencies, do not have Kerberos/GSSAPI support.
If you need Kerberos support, or you are running on a platform where the prebuilt librdkafka builds are not available (see above), you will need to install librdkafka separately (preferably through the Confluent APT and RPM repositories) and build your application with -tags dynamic
to disable the builtin librdkafka and instead link your application dynamically to librdkafka.
Full librdkafka v1.4.0 release notes.
Highlights:
Published by rigelbm almost 5 years ago
Purge messages API (by @khorshuheng at GoJek); a sketch follows this list.
ClusterID and ControllerID APIs.
Go Modules support.
Fixed memory leak on calls to NewAdminClient() (discovered by @gabeysunda).
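A rough sketch of the purge API; the broker address is a placeholder, and the flag constants (PurgeQueue, PurgeInFlight) are assumptions mirroring librdkafka's purge flags:

package main

import (
    "log"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
    if err != nil {
        log.Fatal(err)
    }
    defer p.Close()

    // Drop anything still sitting in the local producer queue as well as
    // requests already in flight; purged messages fail their delivery
    // reports with a purge error. Flag names are assumed, not confirmed here.
    if err := p.Purge(kafka.PurgeQueue | kafka.PurgeInFlight); err != nil {
        log.Fatal(err)
    }
}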
Requires librdkafka v1.3.0 or later
Full librdkafka v1.3.0 release notes.
Published by edenhill over 5 years ago
Full librdkafka v1.1.0 release notes.
Added ssl.endpoint.identification.algorithm=https (off by default) to validate that the broker hostname matches the certificate. Requires OpenSSL >= 1.0.2.
If no CA certificates are configured (ssl.ca.location), librdkafka will load the CA certs by default from the Windows Root Certificate Store.
Broker certificate verification is now enabled by default (disable with enable.ssl.certificate.verification=false).
%{broker.name} is no longer supported in sasl.kerberos.kinit.cmd since kinit refresh is no longer executed per broker, but per client instance.
New configuration properties:
ssl.key.pem - client's private key as a string in PEM format
ssl.certificate.pem - client's public key as a string in PEM format
enable.ssl.certificate.verification - enable (default) / disable OpenSSL's builtin broker certificate verification.
enable.ssl.endpoint.identification.algorithm - to verify the broker's hostname with its certificate (disabled by default).
A sketch using the new PEM string properties appears at the end of these notes.
Increased the message.timeout.ms maximum value from 15 minutes to 24 days (@sarkanyi).
Changed sasl.kerberos.kinit.cmd to first attempt ticket refresh, then acquire.
max.poll.interval.ms now correctly handles blocking poll calls, allowing a longer poll timeout than the max poll interval.
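A minimal sketch of configuring a client with the new PEM string properties; the broker address and the key/certificate contents are placeholders:

package main

import (
    "log"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    p, err := kafka.NewProducer(&kafka.ConfigMap{
        "bootstrap.servers":   "broker:9093",
        "security.protocol":   "ssl",
        "ssl.key.pem":         "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
        "ssl.certificate.pem": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
        // Broker certificate verification is on by default; shown here only
        // to illustrate the property name.
        "enable.ssl.certificate.verification": true,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer p.Close()
    log.Println("producer configured with PEM credentials")
}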
Published by edenhill over 5 years ago
This release adds support for librdkafka v1.0.0, featuring the EOS Idempotent Producer, Sparse connections, KIP-62 - max.poll.interval.ms
support, zstd, and more.
See the librdkafka v1.0.0 release notes for more information and upgrade considerations.
An IsFatal() function has been added to KafkaError to help the application differentiate between temporary and fatal errors. Fatal errors are currently only triggered by the idempotent producer. A sketch follows after this list.
Added kafka.NewError() to make it possible to create error objects from user code / unit tests (Artem Yarulin).
Deprecated default.topic.config. Topic configuration should now be set on the standard ConfigMap.
enable.partition.eof is now disabled by default.
Make sure to check out the Idempotent Producer example
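A rough sketch of telling fatal errors apart from temporary ones; the error below is built with kafka.NewError purely for illustration, and the assumed signature is NewError(code ErrorCode, str string, fatal bool):

package main

import (
    "log"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    // Construct an error marked as fatal (signature assumed, see above).
    err := kafka.NewError(kafka.ErrFatal, "simulated fatal error", true)

    if err.IsFatal() {
        // A fatal error means the client instance should be torn down;
        // currently only the idempotent producer raises these.
        log.Printf("fatal: %v", err)
    } else {
        log.Printf("temporary: %v", err)
    }
}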
Published by edenhill almost 6 years ago
This release adds support for the Topic Admin API (KIP-4):
results, err := a.CreateTopics(
    ctx,
    // Multiple topics can be created simultaneously
    // by providing additional TopicSpecification structs here.
    []kafka.TopicSpecification{{
        Topic:             "mynewtopic",
        NumPartitions:     20,
        ReplicationFactor: 3}})
More examples.
Published by edenhill over 6 years ago
This release drops support for Golang < 1.7
Requires librdkafka v0.11.4 or later
Support for Kafka message headers has been added (requires broker version >= v0.11.0).
When producing messages, pass a []kafka.Header
list:
err = p.Produce(&kafka.Message{
    TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
    Value:          []byte(value),
    Headers:        []kafka.Header{{Key: "myTestHeader", Value: []byte("header values are binary")}},
}, deliveryChan)
Message headers are available to the consumer as Message.Headers
:
for {
    msg, err := c.ReadMessage(-1)
    if err != nil {
        fmt.Printf("%% Consumer error: %v\n", err)
        continue
    }
    fmt.Printf("%% Message on %s:\n%s\n", msg.TopicPartition, string(msg.Value))
    if msg.Headers != nil {
        fmt.Printf("%% Headers: %v\n", msg.Headers)
    }
}
(pkg-config --static ..)
Published by edenhill about 7 years ago
This is a minimal librdkafka version-synchronized release of the Go client.
Changes: