Karafka: Ruby and Rails efficient Kafka processing framework
Published by mensfeld about 2 years ago

- … `karafka.rb` template for Ruby on Rails.

Published by mensfeld about 2 years ago

- … `gruf` (#974).

Published by mensfeld about 2 years ago

- … `karafka info` with more descriptive Ruby version info.

Published by mensfeld about 2 years ago

Published by mensfeld about 2 years ago

- … `Karafka::Admin` for creating and destroying topics and fetching cluster info.
- … `wait_for_kafka` script.

Published by mensfeld about 2 years ago
This changelog describes changes between `1.4` and `2.0`. Please refer to appropriate release notes for changes between particular `rc` releases.
Karafka 2.0 is a major rewrite that brings many new things to the table but also removes specific concepts that happened not to be as good as I initially thought when I created them.
Please consider getting a Pro version if you want to support my work on the Karafka ecosystem!
For anyone worried that I will start converting regular features into Pro: This will not happen. Anything free and fully OSS in Karafka 1.4 will forever remain free. Most additions and improvements to the ecosystem are to its free parts. Any feature that is introduced as a free and open one will not become paid.
This section describes new things and concepts introduced with Karafka 2.0.
Karafka 2.0:

- … `max.poll.interval.ms`.
- … `Rails::Railtie` without need for any extra configuration.
- … `#revoked` method for taking actions upon topic revocation.
- … `librdkafka` via the standardized `error.occurred` monitor channel.
- Replaces `ruby-kafka` with `librdkafka` as an underlying driver.

This section describes things that are no longer part of the Karafka ecosystem.
Karafka 2.0:

- Removes `sidekiq-backend` due to the introduction of multi-threading.
- Removes the `Responders` concept in favour of WaterDrop producer usage.
- … `#shutdown`.

This section describes things that were changed in Karafka but are still present.
Karafka 2.0:

- … `librdkafka`. They are now piped through Karafka prior to being dispatched.
- … `2.x` tightly with autoconfiguration inheritance and an option to redefine it.
- … `karafka-testing` gem for RSpec, which has also been updated.
- … `cli info` to reflect the `2.0` details.
- … `kafka` configuration beyond the minimum, as the rest is handled by `librdkafka`.
- … `dry-validation`.
- … `dry-monitor`.
- … `dry-configurable`.
- Renames `Karafka::Params::BatchMetadata` to `Karafka::Messages::BatchMetadata`.
- Renames `Karafka::Params::Params` to `Karafka::Messages::Message`.
- Renames `#params_batch` in consumers to `#messages`.
- Renames `Karafka::Params::Metadata` to `Karafka::Messages::Metadata`.
- Renames `Karafka::Fetcher` to `Karafka::Runner` and aligns notifications key names.
- Renames `StdoutListener` to `LoggerListener`.
- … (`0.5`) behaves. It now builds a single consumer group instead of one per topic.
- … `2.1`.
- … (`error.occurred`).
- Changes the license to `LGPL-3.0`.
- … `karafka-core` dependency that contains common code used across the ecosystem.

Karafka 2.0 is just the beginning.
There are several things in the plan already for 2.1 and beyond, including a web dashboard, at-rest encryption, transactions support, and more.
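As a quick orientation for the 2.0 APIs named in these notes (`#messages` replacing `#params_batch`, plus the new `#revoked` hook and minimal `kafka` configuration), here is a boot-file sketch. The broker address, topic, and class names are illustrative assumptions, not part of the release notes:

```ruby
# frozen_string_literal: true

# Minimal Karafka 2.0 boot-file sketch (assumes the karafka gem and a local
# broker; topic and class names are illustrative).
require 'karafka'

class OrdersConsumer < Karafka::BaseConsumer
  # In 2.0, #messages replaces the 1.4-era #params_batch
  def consume
    messages.each do |message|
      puts message.payload
    end
  end

  # New in 2.0: called when this process loses the partition in a rebalance
  def revoked
    # e.g. flush buffers or release locks held for the partition
  end

  # Called once on process shutdown
  def shutdown
    # e.g. close external connections
  end
end

class KarafkaApp < Karafka::App
  setup do |config|
    # Only minimal kafka config is needed; the rest is handled by librdkafka
    config.kafka = { 'bootstrap.servers': '127.0.0.1:9092' }
    config.client_id = 'example_app'
  end

  routes.draw do
    topic :orders do
      consumer OrdersConsumer
    end
  end
end
```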
Published by mensfeld about 2 years ago

- … `TTIN` backtrace printing.
- … `warn` level.
- … `rdkafka` >= `0.12`.
- … `karafka-core`.
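The `TTIN` item above refers to printing thread backtraces when the process receives that signal. A conceptual plain-Ruby sketch of the mechanism (Karafka's own handler differs in detail; assumes a Unix-like system where `TTIN` is available):

```ruby
# Conceptual sketch: collect a backtrace from every live thread when the
# process receives TTIN (Unix-only). Karafka's real handler logs these.
captured = []

Signal.trap('TTIN') do
  Thread.list.each do |thread|
    captured << (thread.backtrace || []).join("\n")
  end
end

# Simulates an operator running `kill -TTIN <pid>` against the process
Process.kill('TTIN', Process.pid)
sleep 0.2 # give the trap handler a chance to run

puts "captured #{captured.size} thread backtrace(s)"
```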
Published by mensfeld about 2 years ago

- … `dry-monitor`.
- … `karafka-core`.
Published by mensfeld about 2 years ago

- … `max_wait_time` to 1 second.
- … `max_messages` to 100 (#915).
- … `:key` and `:partition_key` for Enhanced Active Job partitioning.

Published by mensfeld over 2 years ago
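`:key` and `:partition_key` matter because Kafka assigns a message to a partition by hashing its key, so all messages sharing a key stay ordered on a single partition. A plain-Ruby sketch of that property (illustrative only: librdkafka's default partitioner has its own hashing details):

```ruby
require 'zlib'

# Stable key -> partition mapping: the property that :key / :partition_key
# rely on. (Illustrative only; librdkafka's partitioner hashes differently.)
def partition_for(key, partition_count)
  Zlib.crc32(key) % partition_count
end

partitions = 6

# The same key always maps to the same partition...
p1 = partition_for('user-42', partitions)
p2 = partition_for('user-42', partitions)

# ...so all messages for one entity preserve their relative order.
puts p1 == p2
```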
- … `example_consumer.rb.erb`.
- … `#shutdown` and `#revoked` signatures to correct ones.
- … `max_wait_time` from 10s to 5s.
- … `dry-configurable` in favour of a home-brew.
- … `dry-validation` in favour of a home-brew.

Published by mensfeld over 2 years ago
Published by mensfeld over 2 years ago
- … `#seek` was issued for a partition.
- … `#revoked` when rebalance with revocation happens.

Published by mensfeld over 2 years ago
- … from `#prepare` to `#before_call` and from `#teardown` to `#after_call`, to abstract away jobs execution from any type of executors and consumers logic.
- … `before_consume` and `after_consume` completely. Those should be for internal usage only.
- … `2.3.1`.
- … `revoked?` state from PRO to regular Karafka.
- … `mark_as_consumed!` and `mark_as_consumed` as an indicator of partition ownership, and use it to switch the ownership state.
- … `poll` operation upon partition lost or max poll exceeded event.

Published by mensfeld over 2 years ago
- … `consumer.prepared.error` into `LoggerListener`.
- … `active_job_topic` to accept a block for extra topic-related settings.
- … `#prepared` to `#prepare` to better reflect its use-case.
- … (`worker.process` and `worker.processed`).
- … `LoggerListener` to include more useful information about processing and polling messages.

Published by mensfeld over 2 years ago
- … `poll`. This ensures that for async jobs that are long-living, we do not reach `max.poll.interval`.
- `Shutdown` jobs are executed in workers to align all the jobs' behaviours.
- `Shutdown` jobs are always blocking.
- `ListenersBatch` was introduced, similar to `WorkersBatch`, to abstract this concept.
- … `shutdown_timeout` to be more than `max_wait_time`, so as not to cause forced shutdown when no messages are being received from Kafka.
- … `shutdown_timeout` is more than `max_wait_time`. This will prevent users from ending up with a config that could lead to frequent forceful shutdowns.

Published by mensfeld over 2 years ago
Published by mensfeld over 2 years ago
- … `bundle install` (#820).
- … `consumer.consume` with `consumer.consumed` event to match the behaviour.
- … `consumer.consumed` event is propagated.
- … `#revoked` on partitions that were lost and assigned back upon rebalancing (#825).

Published by mensfeld over 2 years ago
Published by mensfeld over 2 years ago
Published by mensfeld over 2 years ago