Best Practices
Proven patterns, preconditions, and best practices for implementing event sourcing successfully.
- Events are immutable: Once written, events should never be changed or deleted. They represent historical facts that have occurred.
- Events are facts, not requests: Name events in past tense to reflect that they represent something that has already happened.
- One aggregate, one stream: Each aggregate instance should have its own event stream identified by a unique ID.
- The event store is the source of truth: All other data stores are derived views that can be rebuilt from events, as sketched below.
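To make these principles concrete, the sketch below shows an append-only store with one stream per aggregate; the `StoredEvent` and `InMemoryEventStore` names are assumptions for illustration, not a specific product's API.

```typescript
// A minimal sketch of an append-only store with one stream per aggregate instance.
interface StoredEvent {
  streamId: string;   // one stream per aggregate instance
  version: number;    // position within the stream
  type: string;       // past-tense event name, e.g. "OrderPlaced"
  data: unknown;      // the immutable fact itself
  occurredAt: string; // ISO 8601 timestamp
}

class InMemoryEventStore {
  private streams = new Map<string, StoredEvent[]>();

  // Events are only ever appended; nothing is updated or deleted.
  append(streamId: string, events: Omit<StoredEvent, "streamId" | "version">[]): void {
    const stream = this.streams.get(streamId) ?? [];
    const appended = events.map((e, i) => ({ ...e, streamId, version: stream.length + i + 1 }));
    this.streams.set(streamId, [...stream, ...appended]);
  }

  // Any derived view can be rebuilt by re-reading a stream from the start.
  read(streamId: string): readonly StoredEvent[] {
    return this.streams.get(streamId) ?? [];
  }
}
```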
Event Design
- Keep events small and focused on a single business fact
- Include all the context needed to understand the event on its own
- Avoid references to external data that may no longer exist when the event is replayed
- Use explicit, meaningful event names
- Include timestamps and correlation IDs (see the event sketch after this list)
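For example, an event following these guidelines might look like the hypothetical `OrderShipped` below; the field names are illustrative, not taken from any particular system.

```typescript
// A hypothetical event that captures a single business fact with its own context.
interface OrderShipped {
  type: "OrderShipped";     // explicit, meaningful name in the past tense
  orderId: string;          // identifies the aggregate by ID rather than a live reference
  shippedAt: string;        // timestamp of the business fact (ISO 8601)
  carrier: string;          // context needed to understand the event later
  trackingNumber: string;
  metadata: {
    correlationId: string;  // ties the event to the originating request
    causationId: string;    // the command or event that directly caused this one
  };
}
```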
Aggregate Design
- Keep aggregates small and focused
- One transaction = one aggregate
- Design around business invariants (see the aggregate sketch after this list)
- Minimize dependencies between aggregates
- Use eventual consistency between aggregates
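A small, invariant-focused aggregate might look like the sketch below; the screening/seat domain and all names are hypothetical.

```typescript
// A minimal aggregate: state is rebuilt from past events, and commands
// check a single business invariant before emitting new events.
type SeatEvent =
  | { type: "SeatReserved"; seatId: string }
  | { type: "ReservationCancelled"; seatId: string };

class Screening {
  private reserved = new Set<string>();

  constructor(private readonly capacity: number, history: SeatEvent[] = []) {
    history.forEach((e) => this.apply(e)); // rehydrate from the event stream
  }

  // Command method: enforce invariants, then return the new event.
  reserveSeat(seatId: string): SeatEvent {
    if (this.reserved.has(seatId)) throw new Error("Seat already reserved");
    if (this.reserved.size >= this.capacity) throw new Error("Screening is full");
    const event: SeatEvent = { type: "SeatReserved", seatId };
    this.apply(event);
    return event;
  }

  private apply(event: SeatEvent): void {
    if (event.type === "SeatReserved") this.reserved.add(event.seatId);
    else this.reserved.delete(event.seatId);
  }
}
```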
Event Versioning
- Plan for event schema evolution from day one
- Use versioning in event type names
- Support multiple event versions during migration
- Use upcasters to transform old events (see the sketch after this list)
- Never delete old event definitions
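One common way to handle old versions is an upcaster applied at read time. The sketch below assumes a hypothetical `CustomerRegistered` event whose single `name` field was later split in two.

```typescript
// Old events stay in the store untouched; they are upcast to the current
// shape whenever they are read. Event names and fields are illustrative.
interface CustomerRegisteredV1 { type: "CustomerRegistered.v1"; name: string }
interface CustomerRegisteredV2 { type: "CustomerRegistered.v2"; firstName: string; lastName: string }

function upcast(event: CustomerRegisteredV1 | CustomerRegisteredV2): CustomerRegisteredV2 {
  if (event.type === "CustomerRegistered.v2") return event;
  // Derive the new fields from the old payload; the v1 definition is never deleted.
  const [firstName, ...rest] = event.name.split(" ");
  return { type: "CustomerRegistered.v2", firstName, lastName: rest.join(" ") };
}
```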
Concurrency Control
- Use optimistic concurrency control
- Track event version numbers
- Handle concurrent modification conflicts
- Implement retry logic for conflicts
- Use expected version when appending events (see the sketch after this list)
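The sketch below shows expected-version appends with a simple retry loop; the stream and error types are assumptions for illustration, not a particular event store's API.

```typescript
// Optimistic concurrency: the caller states the last version it saw, and the
// append fails if another writer got there first.
class ConcurrencyError extends Error {}

class VersionedStream {
  private events: unknown[] = [];

  append(expectedVersion: number, newEvents: unknown[]): void {
    if (this.events.length !== expectedVersion) {
      throw new ConcurrencyError(
        `Expected version ${expectedVersion}, but the stream is at ${this.events.length}`,
      );
    }
    this.events.push(...newEvents);
  }
}

// On conflict, the caller typically reloads the aggregate, re-runs the command,
// and tries again; `attempt` is expected to do all three.
function retryOnConflict(attempts: number, attempt: () => void): void {
  for (let i = 0; i < attempts; i++) {
    try {
      attempt();
      return;
    } catch (err) {
      if (!(err instanceof ConcurrencyError) || i === attempts - 1) throw err;
    }
  }
}
```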
Before adopting event sourcing, ensure your team and organization are ready:
Technical Prerequisites
- Team understands domain-driven design principles
- Comfortable with eventual consistency
- Infrastructure for reliable message handling
- Ability to handle increased storage requirements
- Event store infrastructure or service
Organizational Prerequisites
- Business stakeholders understand event-driven thinking
- Clear domain boundaries and business processes
- Commitment to learning and experimentation
- Patience for the initial learning curve
Command Processing
Best practices for processing commands that generate events:
1. Validate command: Check business rules and preconditions before processing
2. Load aggregate: Reconstruct current state from events (and a snapshot, if available)
3. Execute business logic: Apply the command to the aggregate, generating new events
4. Persist events: Atomically append the new events to the event store
5. Publish events: Notify interested parties about the new events (a handler sketch follows this list)
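Tying the five steps together, a command handler might look like the sketch below; the store, bus, and `ScreeningAggregate` shown here are illustrative interfaces rather than a specific framework's API.

```typescript
// A command handler following the validate / load / execute / persist / publish steps.
type ScreeningEvent = { type: "SeatReserved"; seatId: string };

interface EventStore {
  read(streamId: string): Promise<ScreeningEvent[]>;
  append(streamId: string, expectedVersion: number, events: ScreeningEvent[]): Promise<void>;
}
interface EventBus { publish(events: ScreeningEvent[]): Promise<void> }

// Minimal aggregate used by the handler (illustrative only).
class ScreeningAggregate {
  private reserved = new Set<string>();
  constructor(history: ScreeningEvent[]) { history.forEach((e) => this.reserved.add(e.seatId)); }
  reserveSeat(seatId: string): ScreeningEvent[] {
    if (this.reserved.has(seatId)) throw new Error("Seat already reserved");
    return [{ type: "SeatReserved", seatId }];
  }
}

interface ReserveSeat { screeningId: string; seatId: string; expectedVersion: number }

async function handleReserveSeat(cmd: ReserveSeat, store: EventStore, bus: EventBus): Promise<void> {
  // 1. Validate: reject obviously invalid commands before doing any work.
  if (!cmd.seatId) throw new Error("seatId is required");

  // 2. Load: rebuild current state by replaying the stream (plus a snapshot, if available).
  const history = await store.read(cmd.screeningId);
  const screening = new ScreeningAggregate(history);

  // 3. Execute: the aggregate enforces invariants and produces new events.
  const newEvents = screening.reserveSeat(cmd.seatId);

  // 4. Persist: append atomically, guarded by the expected stream version.
  await store.append(cmd.screeningId, cmd.expectedVersion, newEvents);

  // 5. Publish: notify projections and other interested subscribers.
  await bus.publish(newEvents);
}
```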
Given-When-Then Pattern
Event sourcing makes testing straightforward with a clear pattern:
- Test business logic by verifying event output (see the sketch below)
- Use actual events for test fixtures
- Test event replay and projection building
- Verify idempotency of event handlers
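The sketch below shows the shape of such a test with a hypothetical bank-account aggregate; any assertion library can replace the `console.assert` calls.

```typescript
// Given-When-Then: given past events, when a command runs, then assert on the
// events that come out. The Account aggregate here is illustrative.
type AccountEvent =
  | { type: "MoneyDeposited"; amount: number }
  | { type: "MoneyWithdrawn"; amount: number };

class Account {
  private balance = 0;
  constructor(history: AccountEvent[]) { history.forEach((e) => this.apply(e)); }
  withdraw(amount: number): AccountEvent[] {
    if (amount > this.balance) throw new Error("Insufficient funds");
    return [{ type: "MoneyWithdrawn", amount }];
  }
  private apply(e: AccountEvent): void {
    this.balance += e.type === "MoneyDeposited" ? e.amount : -e.amount;
  }
}

// Given: the events that have already happened.
const given: AccountEvent[] = [{ type: "MoneyDeposited", amount: 100 }];
// When: the command under test is executed against the rehydrated aggregate.
const produced = new Account(given).withdraw(30);
// Then: assert purely on the emitted events.
console.assert(produced.length === 1);
console.assert(produced[0].type === "MoneyWithdrawn" && produced[0].amount === 30);
```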
Monitoring and Observability
- Track event processing lag: Monitor the delay between event creation and processing (see the sketch after this list)
- Log correlation IDs: Trace commands and events across system boundaries
- Monitor event store health: Track write throughput, read latency, and storage growth
- Alert on projection failures: Detect when read models fall behind or fail to update
- Track command success rates: Monitor business operation success and failure rates
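A projection handler can record lag and correlation IDs as it processes each event; the metrics interface below is a stand-in for whatever metrics client is actually in use.

```typescript
// Record processing lag and log the correlation ID for each handled event.
interface Metrics {
  timing(name: string, valueMs: number): void;
  increment(name: string): void;
}

interface HandledEvent { occurredAt: string; correlationId: string }

function recordEventHandled(event: HandledEvent, metrics: Metrics): void {
  // Lag = time between when the event occurred and when this handler processed it.
  const lagMs = Date.now() - new Date(event.occurredAt).getTime();
  metrics.timing("projection.lag_ms", lagMs);
  metrics.increment("projection.events_processed");

  // Logging the correlation ID lets one request be traced across services.
  console.info(`event handled correlationId=${event.correlationId} lagMs=${lagMs}`);
}
```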
Event Sourcing Everything
Don't use event sourcing for the entire system. Apply it selectively where it provides value.
Large Aggregates
Avoid aggregates that grow too large. Split into smaller aggregates or use different patterns.
Querying Event Store Directly
Don't query the event store for read operations. Build dedicated read models (projections) instead.
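A read model fed by events might look like the sketch below; queries hit this projection rather than the event streams (all names are illustrative).

```typescript
// A projection: subscribe to events, fold them into a query-friendly shape,
// and serve all reads from that shape instead of the event store.
type OrderEvent =
  | { type: "OrderPlaced"; orderId: string; total: number }
  | { type: "OrderShipped"; orderId: string };

interface OrderSummary { orderId: string; total: number; status: "placed" | "shipped" }

class OrderSummaryProjection {
  private readonly byId = new Map<string, OrderSummary>();

  // Called for every new event; replaying all events rebuilds the view from scratch.
  apply(event: OrderEvent): void {
    if (event.type === "OrderPlaced") {
      this.byId.set(event.orderId, { orderId: event.orderId, total: event.total, status: "placed" });
    } else {
      const order = this.byId.get(event.orderId);
      if (order) order.status = "shipped";
    }
  }

  // Reads are served from the projection, never by scanning event streams.
  get(orderId: string): OrderSummary | undefined {
    return this.byId.get(orderId);
  }
}
```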
Deleting or Modifying Events
Never delete or modify events. Use compensating events to correct mistakes.
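For instance, a pricing mistake stays in the stream and is corrected by a later event; the event names below are illustrative.

```typescript
// The mistaken fact is never rewritten; a compensating event records the correction.
const stream = [
  { type: "ProductPriceSet", sku: "ABC-1", price: 100 },                                   // the mistake
  { type: "ProductPriceCorrected", sku: "ABC-1", price: 10, reason: "Data entry error" },  // the fix
];
```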
No Versioning Strategy
Don't ignore event versioning. Have a strategy from day one for evolving event schemas.
Putting Logic in Events
Events should be pure data. Keep business logic in aggregates and command handlers.
Event immutability can conflict with data privacy regulations like GDPR's "right to be forgotten":
- Encrypt sensitive data: Store PII encrypted with user-specific keys
- Store references, not data: Keep sensitive data in separate stores
- Use crypto-shredding: Delete encryption keys to make data unrecoverable (see the sketch after this list)
- Anonymize historical events: Replace PII with pseudonymous identifiers
- Design for privacy: Consider data retention requirements during event design
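A minimal crypto-shredding sketch using Node's built-in crypto module is shown below; the in-memory key map is a stand-in for a real key management service.

```typescript
import { createCipheriv, randomBytes } from "node:crypto";

// PII inside events is encrypted with a per-user key. Deleting that key makes
// the data unrecoverable while the events themselves remain immutable.
const userKeys = new Map<string, Buffer>(); // stand-in for a real key management service

function keyFor(userId: string): Buffer {
  if (!userKeys.has(userId)) userKeys.set(userId, randomBytes(32));
  return userKeys.get(userId)!;
}

function encryptPii(userId: string, plaintext: string): { iv: string; data: string } {
  const iv = randomBytes(16);
  const cipher = createCipheriv("aes-256-cbc", keyFor(userId), iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv: iv.toString("base64"), data: data.toString("base64") };
}

// "Right to be forgotten": drop the key and every encrypted payload for that
// user becomes permanently unreadable, without touching the event streams.
function forgetUser(userId: string): void {
  userKeys.delete(userId);
}
```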