{"id":"019d2f1f-7af2-777a-8503-254a4b919609","title":"Kafka Offset Commit with Spring Boot","slug":"2026/03/kafka-offset-commit-with-spring-boot","renderedHtml":"<p>Piotr Minkowski's new article on <a href=\"https://spring.io/projects/spring-kafka\" title=\"the Spring project that provides @KafkaListener, KafkaTemplate, and AckMode abstractions for producing and consuming Kafka messages in Spring Boot applications.\">Spring Kafka</a> offset behavior (&quot;<a href=\"https://piotrminkowski.com/2026/03/27/deep-dive-into-kafka-offset-commit-with-spring-boot/\">Deep Dive into Kafka Offset Commit with Spring Boot</a>&quot;) is worth your time, but the actionable point arrives after some setup. Here it is up front: the consumer offset in <a href=\"/factoids/Kafka\">Kafka</a> advances when <a href=\"https://spring.io/projects/spring-framework\" title=\"a Java application framework providing a DI container and vast ecosystem of modules. Foundation for Spring Boot, Spring Security, Spring Data, Spring Integration, and more.\">Spring</a>'s listener thread finishes processing a batch - which may or may not mirror what your business logic does.</p>\n<p>That one rule generates the three failure modes the article walks through.</p>\n<p>The first is a single-threaded batch-mode read (the default). Spring Kafka receives a batch of messages and hands them to a single listener thread, one message at a time. The offset isn't committed until the thread works through the entire batch. Interrupt that thread - say, with a graceful shutdown that times out - and none of the batch's offsets are committed. On restart, you reprocess everything from the last committed point. 
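That replay behavior can be sketched without a broker. Here is a minimal plain-Java simulation of "commit only after the whole batch" - every name below is hypothetical, none of it is Spring Kafka API:

```java
import java.util.List;

// Simulated consumer: the offset advances only after the WHOLE batch is
// processed, mirroring Spring Kafka's default per-batch commit behavior.
class BatchCommitDemo {
    long committedOffset = 0;   // last committed offset
    int processedCount = 0;     // units of business logic actually executed

    // Process a batch; "crashing" partway leaves committedOffset untouched.
    void poll(List<String> batch, int crashAfter) {
        int done = 0;
        for (String msg : batch) {
            if (done == crashAfter) return;  // simulated shutdown mid-batch
            processedCount++;                // business logic runs here
            done++;
        }
        committedOffset += batch.size();     // commit only after a full batch
    }

    public static void main(String[] args) {
        BatchCommitDemo c = new BatchCommitDemo();
        List<String> batch = List.of("m0", "m1", "m2");
        c.poll(batch, 2);                       // interrupted after 2 messages
        System.out.println(c.committedOffset);  // prints 0: nothing committed
        c.poll(batch, Integer.MAX_VALUE);       // restart: reprocess all 3
        System.out.println(c.committedOffset);  // prints 3
        System.out.println(c.processedCount);   // prints 5: m0, m1 ran twice
    }
}
```

That final count of 5 is the at-least-once contract in miniature: nothing was lost, but two messages were handled twice, so the handler had better be idempotent.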
That's at-least-once delivery - correct behavior, but only if you're prepared for it.</p>\n<p>This makes sense: the entire batch is read as if in a transaction, and the read offset is written at the end of the transaction.</p>\n<p>The second scenario uses concurrent listeners: concurrency is set to the partition count and each thread owns one partition. Now offset commits are per-partition. Two threads can finish and commit; the third can be mid-batch when you shut down. On restart, only the uncommitted partition replays. This is strictly better than scenario one for throughput, but the replay exposure is the same in principle - it's just scoped to one partition rather than all of them.</p>\n<p>The third scenario is the real payoff of the article: the <em>silent loss</em> case. This is the one that bites people. If your listener method hands work off to a pool of handlers and returns immediately, Spring Kafka sees a completed listener invocation and commits the offset - even though your processing is still in flight. The incoming messages have <em>been</em> read - the transaction that read them committed, and the read offset advanced - but the messages themselves haven't completed processing. Kill the application now and those in-flight messages are gone. The broker thinks they were handled; your thread pool never finished them. This is the async handoff anti-pattern, and it converts Kafka's at-least-once guarantee into at-most-once without you explicitly choosing that.</p>\n<p>The fix isn't exotic: don't let your listener return until you're willing to have the offset committed. If you need async processing, either manage offset commits manually (using <code>AckMode.MANUAL</code> and explicit acknowledgment) or structure your async handoff so the listener blocks until the work is done. 
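The anti-pattern and the blocking fix reduce to a broker-free sketch. Nothing below is a Spring Kafka API - the returned `long` just stands in for the offset the container would commit the moment the listener returns:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

// Simulates the async-handoff anti-pattern vs. a blocking handoff.
class SilentLossDemo {
    static final AtomicInteger handled = new AtomicInteger();

    // One "message handler": slow work that only counts if not interrupted.
    static Runnable work() {
        return () -> {
            try {
                Thread.sleep(50);            // simulated slow processing
                handled.incrementAndGet();
            } catch (InterruptedException e) { /* dropped on shutdown */ }
        };
    }

    // Anti-pattern: submit and return immediately. The caller "commits"
    // the offset while all the real work is still in flight.
    static long asyncHandoff(ExecutorService pool, List<String> batch) {
        batch.forEach(m -> pool.submit(work()));
        return batch.size();
    }

    // Fix: block until every message is done before returning.
    static long blockingHandoff(ExecutorService pool, List<String> batch)
            throws Exception {
        List<Future<?>> futures = batch.stream()
                .map(m -> (Future<?>) pool.submit(work()))
                .toList();
        for (Future<?> f : futures) f.get();  // only now may the offset move
        return batch.size();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        long offset = asyncHandoff(pool, List.of("a", "b", "c"));
        pool.shutdownNow();  // hard shutdown: in-flight + queued work dropped
        System.out.println("async:    offset=" + offset
                + " handled=" + handled.get());  // offset=3, handled < 3: loss

        handled.set(0);
        ExecutorService pool2 = Executors.newFixedThreadPool(2);
        long offset2 = blockingHandoff(pool2, List.of("a", "b", "c"));
        pool2.shutdown();
        System.out.println("blocking: offset=" + offset2
                + " handled=" + handled.get());  // offset=3, handled=3
    }
}
```

In the async case the "offset" claims three messages were consumed while the pool handled none of them - exactly the at-most-once conversion the article describes.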
Minkowski links to two earlier articles on the mechanics of both approaches.</p>\n<p>The article includes working Spring Boot code and log traces that make each scenario concrete - worth reading in full if you work with Spring Kafka in production.</p>","excerpt":"Piotr Minkowski's new article on Spring Kafka offset behavior (\"Deep Dive into Kafka Offset Commit with Spring Boot\") goes into how Spring's Kafka reader commits read offsets: it treats the read operation as a committable transaction, which means your code needs to be aware of the offset write semantics to make sure you don't lose messages. Kafka is powerful precisely because it's not a simple queue - messages can be replayed via offset manipulation - but that same model means offset commit semantics matter in ways they never would with a fire-and-forget message broker.","authorId":"019c5c8a-609d-7cd4-975b-50bbcc412a33","authorDisplayName":"dreamreal","status":"APPROVED","publishedAt":"2026-03-27T11:48:09.935Z","sortOrder":0,"createdAt":"2026-03-27T11:48:05.489737Z","updatedAt":"2026-03-27T11:48:10.102591Z","commentCount":0,"tags":["java","kafka","messaging","spring boot","spring kafka"],"categories":[],"markdownSource":null}
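For reference, the manual-commit shape looks roughly like this in Spring Kafka - a configuration sketch under stated assumptions, not code from the article; the topic name, record types, and bean wiring are illustrative:

```java
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Configuration
class ManualAckConfig {
    // Batch listener container with manual offset commits: the container
    // commits nothing until the listener calls Acknowledgment.acknowledge().
    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory);
        factory.setBatchListener(true);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        return factory;
    }
}

@Component
class OrderListener {
    @KafkaListener(topics = "orders")  // hypothetical topic
    void onBatch(List<String> records, Acknowledgment ack) {
        for (String record : records) {
            process(record);   // do the real work first
        }
        ack.acknowledge();     // commit only after everything succeeded
    }

    void process(String record) { /* business logic */ }
}
```

The design point is the ordering: `acknowledge()` runs after the loop, so a crash mid-batch replays the batch (at-least-once) instead of silently dropping it.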