System Design

Complete Guide to Event-Driven Architecture with Java

A comprehensive guide to implementing Event-Driven Architecture using Java, covering architecture, code examples, and production-ready patterns.

Muneer Puthiya Purayil · 18 min read

Java's mature ecosystem makes it a natural fit for event-driven architectures in enterprises that already invest in the JVM. With Spring Boot, Kafka Streams, and a rich set of observability tools, Java offers a batteries-included approach that few other platforms can match. This guide walks through production-grade implementations from basic consumers to sophisticated stream processing topologies.

Foundational Architecture

Event-driven architecture in Java typically follows one of two paths: Spring Kafka for traditional consumer/producer patterns, or Kafka Streams for stateful stream processing. The choice depends on whether you need simple event routing or complex event transformations with state.

```java
// Domain event definitions
public sealed interface OrderEvent permits OrderCreated, OrderShipped, OrderCancelled {
    String eventId();
    String orderId();
    Instant timestamp();
}

public record OrderCreated(
    String eventId,
    String orderId,
    String customerId,
    List<OrderItem> items,
    BigDecimal total,
    String currency,
    Instant timestamp
) implements OrderEvent {}

public record OrderShipped(
    String eventId,
    String orderId,
    String trackingNumber,
    String carrier,
    Instant timestamp
) implements OrderEvent {}
```

Sealed interfaces (standard since Java 17) and records (Java 16) make event definitions concise and exhaustive — combined with pattern matching for switch (Java 21), the compiler verifies that every switch expression over the sealed type handles all permitted subtypes.
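As a self-contained illustration (with the event records trimmed to a couple of fields), a switch expression with pattern matching over the sealed type needs no default branch — and deleting any case becomes a compile error:

```java
import java.time.Instant;

public class EventDescriber {

    sealed interface OrderEvent permits OrderCreated, OrderShipped, OrderCancelled {}
    record OrderCreated(String orderId, Instant timestamp) implements OrderEvent {}
    record OrderShipped(String orderId, String trackingNumber) implements OrderEvent {}
    record OrderCancelled(String orderId, String reason) implements OrderEvent {}

    // Exhaustive over the sealed hierarchy: removing any case fails to compile.
    static String describe(OrderEvent event) {
        return switch (event) {
            case OrderCreated e   -> "created " + e.orderId();
            case OrderShipped e   -> "shipped " + e.orderId() + " via " + e.trackingNumber();
            case OrderCancelled e -> "cancelled " + e.orderId() + ": " + e.reason();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new OrderShipped("order-123", "TRK-9")));
    }
}
```

This is why the string-keyed dispatch in the consumer later in this guide needs a `default` branch, while a dispatch on the sealed type itself does not.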

Spring Kafka Producer

Spring Kafka abstracts the Kafka producer API with sensible defaults while allowing fine-grained tuning:

```java
@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory(KafkaProperties properties) {
        Map<String, Object> config = new HashMap<>(properties.buildProducerProperties());
        config.put(ProducerConfig.ACKS_CONFIG, "all");
        config.put(ProducerConfig.RETRIES_CONFIG, 3);
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        config.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate(ProducerFactory<String, Object> factory) {
        return new KafkaTemplate<>(factory);
    }
}

@Service
public class EventPublisher {

    private final KafkaTemplate<String, Object> kafka;

    public EventPublisher(KafkaTemplate<String, Object> kafka) {
        this.kafka = kafka;
    }

    public CompletableFuture<SendResult<String, Object>> publish(String topic, String key, Object event) {
        var headers = new RecordHeaders();
        headers.add("event_type", event.getClass().getSimpleName().getBytes(StandardCharsets.UTF_8));
        headers.add("produced_at", Instant.now().toString().getBytes(StandardCharsets.UTF_8));

        var record = new ProducerRecord<>(topic, null, key, event, headers);
        return kafka.send(record);
    }
}
```

Consumer with Manual Acknowledgment

For production event processing, manual acknowledgment provides control over when offsets are committed:

```java
@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory,
            KafkaTemplate<String, Object> kafkaTemplate) {
        var factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(6);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        factory.setCommonErrorHandler(new DefaultErrorHandler(
            new DeadLetterPublishingRecoverer(kafkaTemplate),
            new FixedBackOff(1000L, 3)
        ));
        return factory;
    }
}

@Component
public class OrderEventConsumer {

    private static final Logger log = LoggerFactory.getLogger(OrderEventConsumer.class);
    private final OrderService orderService;
    private final ObjectMapper objectMapper;

    public OrderEventConsumer(OrderService orderService, ObjectMapper objectMapper) {
        this.orderService = orderService;
        this.objectMapper = objectMapper;
    }

    @KafkaListener(
        topics = "order-events",
        groupId = "order-processor",
        containerFactory = "kafkaListenerContainerFactory"
    )
    public void consume(ConsumerRecord<String, String> record, Acknowledgment ack) {
        var header = record.headers().lastHeader("event_type");
        if (header == null) {
            log.warn("Missing event_type header at offset {}", record.offset());
            ack.acknowledge();
            return;
        }
        String eventType = new String(header.value(), StandardCharsets.UTF_8);

        try {
            switch (eventType) {
                case "OrderCreated" -> {
                    var event = objectMapper.readValue(record.value(), OrderCreated.class);
                    orderService.handleOrderCreated(event);
                }
                case "OrderShipped" -> {
                    var event = objectMapper.readValue(record.value(), OrderShipped.class);
                    orderService.handleOrderShipped(event);
                }
                default -> log.warn("Unknown event type: {}", eventType);
            }
            ack.acknowledge();
        } catch (Exception e) {
            log.error("Failed to process event at offset {}: {}", record.offset(), e.getMessage());
            throw new RuntimeException(e); // triggers the error handler and, after retries, the DLQ
        }
    }
}
```

Kafka Streams for Stateful Processing

Kafka Streams enables complex event processing directly within your Java application — no separate cluster needed:

```java
@Configuration
public class OrderStreamConfig {

    @Bean
    public KStream<String, OrderEvent> orderStream(StreamsBuilder builder) {
        // Deserializing into the sealed interface requires polymorphic type
        // information in the JSON (e.g. Jackson type headers or annotations).
        JsonSerde<OrderEvent> eventSerde = new JsonSerde<>(OrderEvent.class);

        KStream<String, OrderEvent> events = builder.stream(
            "order-events",
            Consumed.with(Serdes.String(), eventSerde)
        );

        // Branch by event type; branch names get the "order-" prefix
        var branches = events.split(Named.as("order-"))
            .branch((key, event) -> event instanceof OrderCreated, Branched.as("created"))
            .branch((key, event) -> event instanceof OrderShipped, Branched.as("shipped"))
            .defaultBranch(Branched.as("other"));

        // Aggregate order totals per customer over a 1-hour window
        KStream<String, OrderCreated> createdEvents = branches.get("order-created")
            .mapValues(event -> (OrderCreated) event);

        KTable<Windowed<String>, BigDecimal> customerTotals = createdEvents
            .groupBy((key, event) -> event.customerId(),
                Grouped.with(Serdes.String(), new JsonSerde<>(OrderCreated.class)))
            .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofHours(1)))
            .aggregate(
                () -> BigDecimal.ZERO,
                (customerId, event, total) -> total.add(event.total()),
                Materialized.with(Serdes.String(), new BigDecimalSerde()) // custom serde; not shipped with Kafka
            );

        // Alert on high-value customers
        customerTotals.toStream()
            .filter((windowedKey, total) -> total.compareTo(new BigDecimal("10000")) > 0)
            .mapValues((windowedKey, total) -> new HighValueAlert(
                windowedKey.key(), total, windowedKey.window().startTime()))
            .to("high-value-alerts", Produced.with(
                WindowedSerdes.timeWindowedSerdeFrom(String.class, Duration.ofHours(1).toMillis()),
                new JsonSerde<>(HighValueAlert.class)
            ));

        return events;
    }
}
```
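Stripped of the Streams machinery, the aggregation step is just a BigDecimal fold per customer per window. A dependency-free sketch of that math (the `customerWindowTotal` helper is illustrative, not part of the Streams API):

```java
import java.math.BigDecimal;
import java.util.List;

public class WindowFold {

    // Same shape as the aggregate() step: start at ZERO, add each order total.
    static BigDecimal customerWindowTotal(List<BigDecimal> orderTotals) {
        return orderTotals.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    public static void main(String[] args) {
        var total = customerWindowTotal(List.of(new BigDecimal("5999.50"), new BigDecimal("4500.75")));
        System.out.println(total);
        // Does this window cross the 10000 alert threshold?
        System.out.println(total.compareTo(new BigDecimal("10000")) > 0);
    }
}
```

Using BigDecimal rather than double for money keeps the fold exact, which is why the topology needs a custom serde for the aggregate value.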


Transactional Outbox with Spring

The outbox pattern guarantees consistency between database writes and event publication:

```java
@Entity
@Table(name = "outbox_events")
public class OutboxEvent {
    @Id
    private String id;
    private String aggregateId;
    private String aggregateType;
    private String eventType;

    @Column(columnDefinition = "jsonb")
    private String payload;

    private Instant createdAt;
    private Instant publishedAt;

    // getters and setters omitted
}

@Repository
public interface OutboxRepository extends JpaRepository<OutboxEvent, String> {
    @Query("SELECT o FROM OutboxEvent o WHERE o.publishedAt IS NULL ORDER BY o.createdAt ASC")
    List<OutboxEvent> findUnpublished(Pageable pageable);
}

@Service
public class OrderService {

    private final OrderRepository orderRepo;
    private final OutboxRepository outboxRepo;
    private final ObjectMapper objectMapper;

    public OrderService(OrderRepository orderRepo, OutboxRepository outboxRepo, ObjectMapper objectMapper) {
        this.orderRepo = orderRepo;
        this.outboxRepo = outboxRepo;
        this.objectMapper = objectMapper;
    }

    @Transactional
    public Order createOrder(CreateOrderCommand cmd) {
        var order = new Order(UUID.randomUUID().toString(), cmd.customerId(), cmd.items(), "created");
        orderRepo.save(order);

        var event = new OrderCreated(
            UUID.randomUUID().toString(),
            order.getId(),
            order.getCustomerId(),
            order.getItems(),
            order.getTotal(),
            "AED",
            Instant.now()
        );

        var outbox = new OutboxEvent();
        outbox.setId(event.eventId());
        outbox.setAggregateId(order.getId());
        outbox.setAggregateType("Order");
        outbox.setEventType("OrderCreated");
        try {
            outbox.setPayload(objectMapper.writeValueAsString(event));
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Failed to serialize event", e);
        }
        outbox.setCreatedAt(Instant.now());
        outboxRepo.save(outbox);

        return order;
    }
}

@Component
public class OutboxPoller {

    private final OutboxRepository outboxRepo;
    private final EventPublisher publisher;

    public OutboxPoller(OutboxRepository outboxRepo, EventPublisher publisher) {
        this.outboxRepo = outboxRepo;
        this.publisher = publisher;
    }

    @Scheduled(fixedDelay = 100)
    @Transactional
    public void pollAndPublish() {
        var events = outboxRepo.findUnpublished(PageRequest.of(0, 100));
        for (var event : events) {
            // Wait for the broker acknowledgment before marking the row published;
            // updating it in an async callback would escape this transaction.
            publisher.publish("order-events", event.getAggregateId(), event.getPayload()).join();
            event.setPublishedAt(Instant.now());
            outboxRepo.save(event);
        }
    }
}
```

Error Handling and Dead Letter Queues

Spring Kafka's error handling integrates directly with the consumer lifecycle:

```java
@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate<String, Object> template) {
    var recoverer = new DeadLetterPublishingRecoverer(template,
        (record, ex) -> new TopicPartition(record.topic() + ".dlq", record.partition()));

    var backoff = new ExponentialBackOffWithMaxRetries(3);
    backoff.setInitialInterval(1000);
    backoff.setMultiplier(2.0);

    var handler = new DefaultErrorHandler(recoverer, backoff);
    handler.addNotRetryableExceptions(
        DeserializationException.class,
        InvalidEventException.class
    );
    return handler;
}
```
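With an initial interval of 1000 ms and a multiplier of 2.0, the three retries above fire roughly 1 s, 2 s, and 4 s after the first failure, after which the record is routed to the DLQ. A quick sketch of that schedule (the `delaysMillis` helper is illustrative, not a Spring API):

```java
import java.util.ArrayList;
import java.util.List;

public class BackoffSchedule {

    // Mirrors ExponentialBackOffWithMaxRetries(3) with initialInterval=1000, multiplier=2.0
    static List<Long> delaysMillis(long initial, double multiplier, int maxRetries) {
        List<Long> delays = new ArrayList<>();
        double next = initial;
        for (int i = 0; i < maxRetries; i++) {
            delays.add((long) next);
            next *= multiplier;
        }
        return delays;
    }

    public static void main(String[] args) {
        System.out.println(delaysMillis(1000L, 2.0, 3));
    }
}
```

Keeping the worst-case total (here about 7 s) below the consumer's `max.poll.interval.ms` matters, since retries block the polling thread.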

Observability with Micrometer

Spring Boot's Micrometer integration provides out-of-the-box Kafka metrics:

```java
@Component
public class EventMetrics {

    private final MeterRegistry registry;

    public EventMetrics(MeterRegistry registry) {
        this.registry = registry;
    }

    public void recordProcessed(String eventType, String status) {
        registry.counter("events.processed",
            "event_type", eventType,
            "status", status
        ).increment();
    }

    public void recordLatency(String eventType, Duration duration) {
        registry.timer("events.processing.duration",
            "event_type", eventType
        ).record(duration);
    }
}
```

Testing Event-Driven Components

Spring Kafka Test provides an embedded Kafka broker for integration tests:

```java
@SpringBootTest
@EmbeddedKafka(partitions = 3, topics = {"order-events", "order-events.dlq"})
class OrderEventConsumerTest {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafka;

    @Autowired
    private EventPublisher eventPublisher;

    @Autowired
    private OrderRepository orderRepository;

    @Test
    void shouldProcessOrderCreatedEvent() throws Exception {
        var event = new OrderCreated(
            UUID.randomUUID().toString(),
            "order-123",
            "customer-456",
            List.of(new OrderItem("SKU-001", 2, new BigDecimal("29.99"))),
            new BigDecimal("59.98"),
            "AED",
            Instant.now()
        );

        // Publish through EventPublisher so the event_type header the consumer
        // dispatches on is actually set
        eventPublisher.publish("order-events", event.orderId(), event).get(5, TimeUnit.SECONDS);

        await().atMost(Duration.ofSeconds(10)).untilAsserted(() -> {
            var order = orderRepository.findById("order-123");
            assertThat(order).isPresent();
            assertThat(order.get().getStatus()).isEqualTo("created");
        });
    }
}
```

Conclusion

Java's event-driven architecture ecosystem is arguably the most complete of any language. Spring Kafka handles the operational complexity of consumer groups, error handling, and offset management. Kafka Streams adds stateful stream processing without requiring a separate compute cluster. And the JVM's mature observability tooling — Micrometer, OpenTelemetry Java agent, JFR — provides deep visibility into every layer of the event pipeline.

The key architectural decision is choosing between Spring Kafka listeners for simple event routing and Kafka Streams for stateful processing. Start with listeners — they cover 80% of use cases with less conceptual overhead. Graduate to Kafka Streams when you need windowed aggregations, stream-table joins, or exactly-once stream processing semantics.



Muneer Puthiya Purayil

SaaS Architect & AI Systems Engineer. 10+ years shipping production infrastructure across fintech, automotive, e-commerce, and healthcare.
