kafka client - quotas, latency
https://aivarsk.com/2021/11/01/low-latency-kafka-producers/
What still makes me wonder is why librdkafka does not set TCP_NODELAY by default, since the library already does something similar to Nagle’s algorithm itself by buffering messages according to the linger.ms / queue.buffering.max.ms settings.
Long story short, to optimize producers for latency, you should set both:
- socket.nagle.disable = True
- queue.buffering.max.ms = 0
I think you should also set socket.nagle.disable for low-latency consumers, so acknowledgments are delivered as soon as possible.
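As a minimal sketch (not from the post), here is roughly how those two settings look with the confluent-kafka Python client, which wraps librdkafka; the broker address and topic name are placeholders:

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "socket.nagle.disable": True,           # sets TCP_NODELAY on the client sockets
    "linger.ms": 0,                         # alias of queue.buffering.max.ms: no batching delay
})

producer.produce("my-topic", value=b"payload")  # hypothetical topic
producer.flush()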
librdkafka - Low latency
When low-latency messaging is required, queue.buffering.max.ms should be tuned to the maximum permitted producer-side latency. Setting queue.buffering.max.ms to 1 will make sure messages are sent as soon as possible. See How to decrease message latency for more details.
https://github.com/edenhill/librdkafka/wiki/How-to-decrease-message-latency
Also read the manual’s chapter on low latency.
There are two built-in end-to-end latencies in librdkafka:
- Producer batch latency - queue.buffering.max.ms (alias linger.ms) - how long the producer waits for more messages to be produce()d by the app before sending them off to the broker in one batch of messages.
- Consumer batch latency - fetch.wait.max.ms - how much time the consumer gives the broker to fill up fetch.min.bytes worth of messages before responding.
When trying to minimize end-to-end latency it is important to adjust both of these settings:
- producer: queue.buffering.max.ms - set to 0 for immediate transmission, or some other reasonably low value (e.g. 5 ms)
- consumer: fetch.wait.max.ms - set to your allowed maximum latency, e.g. 10 (ms)
Setting fetch.wait.max.ms too low (lower than the typical time between new messages on the partition) causes the occasional FetchRequest to return empty before any new messages have been seen on the broker; this in turn kicks in the fetch.error.backoff.ms timer, which waits that long before the next FetchRequest. So you might want to decrease fetch.error.backoff.ms too.
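A matching consumer sketch with the same Python client, again with placeholder broker, group id, and topic; the millisecond values are only illustrative:

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "low-latency-group",        # hypothetical group id
    "socket.nagle.disable": True,           # send fetch requests and acks without Nagle delay
    "fetch.wait.max.ms": 10,                # how long the broker may wait to fill fetch.min.bytes
    "fetch.error.backoff.ms": 10,           # shorter pause before the next fetch after an empty/errored one
})

consumer.subscribe(["my-topic"])            # hypothetical topic
msg = consumer.poll(timeout=1.0)
if msg is not None and msg.error() is None:
    print(msg.value())
consumer.close()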
In librdkafka <=0.9.2, or on Windows, you’ll also want to minimize socket.blocking.max.ms
for both producer and consumer.
https://docs.confluent.io/platform/current/kafka/post-deployment.html#rolling-restart
custom quota for (user=user1, client-id=clientA)
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
--add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' \
--entity-type users --entity-name user1 --entity-type clients \
--entity-name clientA
default quota for user
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
--add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' \
--entity-type users --entity-default
default quota for client
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
--add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' \
--entity-type clients --entity-default
bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe \
--entity-type users --entity-type clients
Configs for user-principal 'user1', default client-id are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
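To check a single (user, client-id) pair instead of listing everything, the same tool can be pointed at the specific entities (entity names as in the example above):
bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe \
--entity-type users --entity-name user1 --entity-type clients \
--entity-name clientA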