Most backend codebases don't start broken. They start small, coherent, and easy to change. Then time passes.
Features ship. Teams grow. Deadlines happen. And one day you realize that changing the checkout flow requires touching seven packages, three shared utilities, and a database table that four other features depend on.
This is actually two distinct problems working together. Low cohesion: related concepts are scattered across the codebase instead of living together. High coupling: unrelated modules depend on each other's internals, so changes ripple outward.
The result is the same: refactoring feels risky, and nobody quite understands how everything fits together anymore.
When coupling gets bad enough, teams usually respond in one of two ways.
Some reach for microservices. Split everything into separate services. Compile-time coupling becomes impossible because nothing can import anything directly. But this trades code complexity for operational complexity: network failures become normal failures, data consistency requires careful design, and local development gets significantly harder.
Most teams try something simpler: better discipline. Add architectural guidelines. Improve folder structure. Be more careful in code review. This is cheaper and less disruptive—but guidelines without enforcement tend to erode.
A modular monolith is a way to make the second approach actually stick. It's a monolith where architectural boundaries are enforced by the build system—where you cannot import another module's internals because the compiler won't let you.
You keep the operational simplicity of a single deployable. But you get real architecture: explicit contracts, isolated data, and the ability to change one module without accidentally breaking others.
This post is a practical guide to building one in Kotlin and Spring Boot. We'll cover where to draw boundaries, how to enforce them with Gradle, how to structure code inside modules, how to isolate data, how modules communicate, error handling, and testing. The focus is on patterns that work in real codebases, not theoretical ideals.
Part 1: Where to Draw the Lines
We want boundaries. But where should they go?
This is where Domain-Driven Design becomes useful. DDD has a reputation for academic terminology and books thicker than your laptop, but the part that matters for architecture is simple: organize around business domains, not technical layers.
Domains, Not Layers
Most backend projects start with a structure like this:
com.example.app/
├── controllers/
│ ├── ProductController.kt
│ ├── OrderController.kt
│ └── ShippingController.kt
├── services/
│ ├── ProductService.kt
│ ├── OrderService.kt
│ └── ShippingService.kt
└── repositories/
├── ProductRepository.kt
├── OrderRepository.kt
└── ShippingRepository.kt
All controllers in one folder, all services in another, all repositories in a third. It feels tidy. It looks professional in code review.
It's also a mistake.
When you add a shipping feature, you touch files in three different folders. When you want to understand how shipping works, you're jumping between layers, assembling the picture from scattered pieces. And nothing stops OrderService from calling ProductRepository directly—the folder structure creates the appearance of organization without any actual boundaries.
The alternative is organizing by domain:
com.example.app/
├── products/
│ ├── ProductController.kt
│ ├── ProductService.kt
│ └── ProductRepository.kt
├── orders/
│ ├── OrderController.kt
│ ├── OrderService.kt
│ └── OrderRepository.kt
└── shipping/
├── ShippingController.kt
├── ShippingService.kt
└── ShippingRepository.kt
Now a shipping/ folder contains everything about shipping. New developers can open one folder and understand one capability. Changes stay local. And the structure reveals where real boundaries could be enforced.
The Same Word, Different Meanings
Ask three people in an e-commerce company what "Product" means:
The catalog team says name, description, price, images, sustainability rating—it's what customers browse and search. The warehouse team says weight, dimensions, and whether it's fragile—they need to put it in a box, not describe it. The inventory team says a product ID and a quantity on a shelf—they don't care what it looks like.

These aren't three views of the same thing. They're three different concepts that happen to share a name. DDD calls this a bounded context: a boundary within which a term has a consistent meaning.
The instinct is to create one Product class that serves everyone:
data class Product(
val id: ProductId,
val name: String,
val description: String,
val price: Money,
val images: List<Image>,
val sustainabilityRating: String,
val weightGrams: Int,
val dimensions: Dimensions,
val isFragile: Boolean,
val warehouseQuantities: Map<WarehouseId, Int>,
// ... and it keeps growing
)
One class, no duplication—efficient, right? But this is how coupling starts. The catalog team adds images; now warehouse code depends on it. The warehouse team adds dimensions; now catalog code carries that weight. Everyone's afraid to touch this class, and everyone has to.
The fix is recognizing that each context should define its own model:
// In the catalog context
data class Product(
val id: ProductId,
val name: String,
val description: String,
val price: Money,
val images: List<Image>,
)
// In the shipping context
data class ShippableProduct(
val productId: ProductId,
val weightGrams: Int,
val dimensions: Dimensions,
val isFragile: Boolean,
)
Each context owns its representation. They communicate through explicit contracts, not a shared blob of fields.
Finding the Right Boundaries
There's no algorithm for this, but there are useful signals.
Listen for different vocabulary. When warehouse staff say "pick" and "pack" while marketing says "browse" and "wishlist," you're hearing two contexts. Language differences usually reflect model differences—this is more reliable than most technical heuristics.
Watch for different rates of change. Pricing rules might change weekly; shipping carrier integrations change quarterly. Bundling them means every pricing change risks breaking shipping.
Follow the org chart. The finance team owns payments. The logistics team owns shipping. If different people are responsible for different areas, those are natural seams.
Size matters too. A module that's too large is a monolith in disguise—your "Commerce" module handles products, orders, payments, and shipping, and changes in one area keep breaking another. A module that's too small creates coordination overhead—you've split orders into "OrderCreation," "OrderValidation," "OrderPersistence," and "OrderNotification," and every operation requires coordinating multiple modules.
A useful test: can you describe the module in one sentence without conjunctions? "Manages the product catalog" works. "Handles payments and shipping and user profiles" is three modules wearing a trench coat.

The Example We'll Use
For this guide, we're building an e-commerce system whose bounded contexts include a product catalog, orders, inventory, shipping, payments, and notifications—the modules that appear in the examples throughout this post.

Don't spend weeks trying to find perfect boundaries before writing code. You'll learn things as you build that you couldn't have known upfront. What matters is having reasonable starting boundaries, enforcement that makes them real, and willingness to adjust when you learn more.
A slightly wrong boundary that's enforced beats a perfect boundary on a whiteboard. Refactoring modules is easy. Refactoring spaghetti is hard.
Part 2: Enforcing the Lines
Boundaries on a whiteboard are aspirations. Boundaries in the build system are architecture.
Without enforcement, boundaries erode. It happens slowly, always with good intentions. Someone imports an internal class because it's convenient. Someone adds a "temporary" dependency to meet a deadline. Six months later, your modules are coupled in ways nobody intended, and untangling them is a project of its own.
The solution is to make invalid dependencies a compiler error, not a code review discussion.
Contracts, Not Implementations
When Shipping needs product information, the obvious approach is to call the Products service directly:
class ShippingService(
private val productService: ProductServiceImpl
) {
fun calculateWeight(productId: String): Grams {
val product = productService.getProduct(productId)
return product.weightGrams
}
}
This works, but it creates tight coupling. Shipping now depends on ProductServiceImpl—a concrete class with its own dependencies, internal structure, and implementation details. And even worse, this coupling is transitive: ProductServiceImpl depends on ProductRepository, which depends on database entities. Shipping has indirectly coupled itself to the Products database schema.
The fix is to depend on a contract instead of an implementation:
interface ProductServiceApi {
fun getProduct(id: String): ProductDto
}
class ProductServiceImpl(
private val repository: ProductRepository
) : ProductServiceApi {
override fun getProduct(id: String): ProductDto { ... }
}
class ShippingService(
private val productService: ProductServiceApi // Interface, not implementation
) {
fun calculateWeight(productId: String): Grams {
val product = productService.getProduct(productId)
return Grams(product.weightGrams)
}
}
Now Shipping depends on ProductServiceApi—an interface with no implementation details. The Products team can refactor their internals, change their database, swap out libraries. As long as they fulfill the contract, Shipping won't notice.

But where does the contract live? If the interface is inside the Products module alongside its implementation, Shipping still depends on the Products module. We need to separate the contract into its own place.
The API/Implementation Split
In Gradle, we split each bounded context into two modules:
products/
├── products-api/ # The contract
└── products-impl/    # The implementation
The rules are simple:
- -impl depends on its own -api (implements the contract)
- Other modules depend only on -api modules (never on -impl)
- No circular dependencies
// shipping-impl/build.gradle.kts
dependencies {
implementation(project(":shipping:shipping-api"))
implementation(project(":products:products-api")) // Contract only
// Cannot add products-impl - that's the whole point
}
If someone tries to import a class from products-impl, the build fails. No discussion needed.
The -api module contains the public contract: interfaces defining what the module can do, DTOs for data exchange, events other modules might listen to, and error types so callers know what can go wrong.
// products-api
interface ProductServiceApi {
fun getProduct(id: String): Result<ProductDto, ProductError>
}
data class ProductDto(
val id: String,
val name: String,
val weightGrams: Int,
)
sealed class ProductError {
data class NotFound(val id: String) : ProductError()
}
The -impl module contains everything private: domain models with business logic, service implementations, persistence layer, and controllers. Mark these internal so Kotlin reinforces the boundary:
// products-impl
@Service
internal class ProductServiceImpl(
private val repository: ProductRepository,
) : ProductServiceApi {
// Maps between internal domain model and public DTOs
}
internal data class Product(
val id: ProductId,
val name: String,
val price: Money,
) {
init {
require(name.isNotBlank()) { "Name required" }
}
}
Dependency Direction
Enforcement prevents accidental coupling. But there's a more fundamental constraint: all dependencies must flow in one direction. No cycles.
If Products depends on Inventory, Inventory cannot depend on Products. If it did, you couldn't compile one without the other, couldn't test one without the other, couldn't change one without risking the other.
It usually starts innocently. Orders needs product information, so it depends on Products. But now Products wants to check if a product has pending orders before allowing deletion:
// products-impl
class ProductServiceImpl(
private val orderService: OrderServiceApi // Products now depends on Orders
) {
fun deleteProduct(id: ProductId): Result<Unit, ProductError> {
if (orderService.hasPendingOrdersForProduct(id.value)) {
return Err(ProductError.HasPendingOrders)
}
// ...
}
}
Now you have Orders → Products and Products → Orders. A cycle. Gradle will refuse to compile it.
Invert with an SPI
Ask: which module should own this concern? Checking for pending orders before deletion is really an ordering concern, not a product concern. Products shouldn't know about orders. But Products does need a way to ask: "Can I delete this?"
The solution is a Service Provider Interface (SPI)—an interface that Products defines but does not implement. Products owns the question ("can I delete this?") because deletion is a product operation. Orders owns the answer ("no, there are pending orders") because that's order logic. The interface lives where the question is asked, and implementations live where the answers come from. This keeps the dependency direction correct: Orders depends on products-api to implement the interface, not the other way around.
// products-api/spi/ProductDeletionBlocker.kt
interface ProductDeletionBlocker {
fun canDelete(productId: String): Boolean
}
Orders implements it:
// orders-impl
@Service
internal class OrderBasedDeletionBlocker(
private val orderRepository: OrderRepository
) : ProductDeletionBlocker {
override fun canDelete(productId: String): Boolean {
return !orderRepository.existsPendingForProduct(productId)
}
}
Products consumes all implementations:
// products-impl
@Service
internal class ProductServiceImpl(
private val repository: ProductRepository,
private val deletionBlockers: List<ProductDeletionBlocker>
) : ProductServiceApi {
fun deleteProduct(id: ProductId): Result<Unit, ProductError> {
if (deletionBlockers.any { !it.canDelete(id.value) }) {
return Err(ProductError.DeletionBlocked)
}
repository.delete(id)
return Ok(Unit)
}
}
Spring automatically collects all beans implementing ProductDeletionBlocker and injects them as a list. Products doesn't know who's blocking—it just asks. The dependency direction is preserved: Orders depends on products-api to implement the interface. Products depends on nothing new.

Extract shared concepts
Sometimes two modules seem to need each other because they're both working with the same concept that doesn't belong to either. If Products and Pricing both need currency conversion, neither should own it:
common-money/
├── Money.kt
├── Currency.kt
└── CurrencyConverter.kt
Both modules depend on common-money. Neither depends on the other. Common modules are leaves in the dependency graph—they depend on nothing except other common modules.
A healthy dependency graph looks like a tree or a DAG. You can draw it top-to-bottom without any arrows pointing upward. If you find yourself drawing an arrow that points up, you have a design problem to solve—not a rule to break.
Automated Enforcement
Gradle modules prevent most violations—you can't import what you can't depend on. But some rules need explicit checks, like ensuring no -api module depends on an -impl module.
Write a validation task that fails the build on violations, and run it as part of CI. The specifics depend on your setup, but the principle is universal: rules that aren't enforced aren't rules, they're suggestions.
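For example, a small task in the root build can verify that no -api module declares a dependency on an -impl module. A minimal sketch (it only inspects declared project dependencies; the plugin linked below goes further):
// root build.gradle.kts — a sketch, not the full plugin
tasks.register("checkModuleBoundaries") {
    doLast {
        subprojects
            .filter { it.name.endsWith("-api") }
            .forEach { apiProject ->
                // Collect project dependencies of the -api module that point at -impl modules
                val implDependencies = apiProject.configurations
                    .flatMap { it.dependencies }
                    .filterIsInstance<ProjectDependency>()
                    .filter { it.name.endsWith("-impl") }
                if (implDependencies.isNotEmpty()) {
                    throw GradleException(
                        "${apiProject.path} must not depend on -impl modules: " +
                            implDependencies.joinToString { it.name }
                    )
                }
            }
    }
}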
Check the Gradle plugin from the example code that validates module dependencies: [link]
Architecture enforced by the build system survives deadlines, new team members, and "temporary" workarounds. Architecture enforced by documentation survives until the first Thursday afternoon crunch.

Alternative: Spring Modulith
The Gradle multi-module approach provides the strongest guarantees, but it's not the only option. Spring Modulith offers a lighter-weight alternative that uses package structure and test-time verification instead of separate build modules.
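Getting started is mostly a dependency declaration. A sketch of the Gradle coordinates (versions are typically managed via the spring-modulith BOM):
dependencies {
    implementation("org.springframework.modulith:spring-modulith-starter-core")
    testImplementation("org.springframework.modulith:spring-modulith-starter-test")
}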
Spring Modulith treats each top-level package under your main application package as a module. Public classes directly in a module's package are its API; anything in subpackages is considered internal:
com.example.app/
├── products/ # Module: products
│ ├── ProductService.kt # API - accessible
│ ├── ProductDto.kt # API - accessible
│ └── internal/ # Hidden from other modules
│ ├── Product.kt
│ └── ProductRepository.kt
└── shipping/ # Module: shipping
├── ShippingService.kt
└── internal/
└── ...
A test verifies the structure:
@Test
fun `verify module structure`() {
ApplicationModules.of(Application::class.java).verify()
}
This runs ArchUnit rules under the hood, failing if any module accesses another's internal packages.
Gradle Modules vs Spring Modulith
Both approaches require tests to verify your architecture. The difference is what the build system enforces versus what your tests enforce.
With Gradle modules, you can't import a class from a module you don't depend on—the compiler rejects it. Spring Modulith pushes everything to test time: imports compile fine, and violations surface when ArchUnit rules run.
Setup cost favors Modulith heavily. Converting an existing monolith takes less time: add the dependency, reorganize packages, write a verification test. Gradle modules mean restructuring your entire project—each module needs its own build.gradle.kts, source folders, and explicit dependencies.
But Gradle's structure pays dividends beyond enforcement. Each module gets its own src/main/resources, so Flyway migrations can live with the code that owns those tables. Build caching works per module—change orders-impl and only that module recompiles. Team ownership becomes obvious when modules are physically separate directories.
Spring Modulith brings features Gradle doesn't: automatic documentation generation, @ApplicationModuleTest for isolated module contexts, and runtime observability. If you want these with Gradle modules, you build them yourself.
Start with your situation. Greenfield project with clear boundaries? Gradle modules—pay the setup cost once. Existing monolith with unclear boundaries? Spring Modulith—discover your modules without restructuring everything. Some teams start with Modulith, then graduate to Gradle modules once the structure stabilizes.
Part 3: Inside the Modules
We have modules with enforced boundaries. The build system prevents coupling. The hard part is done.
What happens inside each -impl module matters less now. A mess in one module can't leak into others. You can refactor later without coordinating across teams. Internal structure is a local decision.
That said, one principle is worth following: dependencies point inward.
The Three Layers
Organize code so that outer layers depend on inner layers, never the reverse.
Domain is the core: business logic, entities, value objects, repository interfaces. No framework dependencies—just plain Kotlin. The domain defines what the system does, not how it connects to the outside world.
Application orchestrates: it implements the API contract, coordinates domain operations, handles transactions, publishes events. Application code uses domain types and repository interfaces but doesn't know about databases or HTTP.
Infrastructure connects to the outside world: controllers, repository implementations, message consumers, external API clients. This is where Spring annotations live, where SQL gets written, where HTTP requests get parsed.
The dependency direction: infrastructure → application → domain. A request flows inward: the controller (infrastructure) calls a handler (application), which uses domain types and a repository interface. The repository implementation (infrastructure) knows how to persist those types—but the domain doesn't know the implementation exists.
This keeps your business logic testable without frameworks and portable across different infrastructure choices.
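As a small illustration of that direction, using the products example from earlier (the SQL, table name, and JDBC mapping are illustrative, not the example project's):
// domain layer: a plain Kotlin port, no framework imports
internal interface ProductRepository {
    fun findById(id: ProductId): Product?
}
// infrastructure layer: implements the port using Spring and SQL
@Repository
internal class ProductRepositoryJdbc(
    private val jdbcTemplate: JdbcTemplate,
) : ProductRepository {
    override fun findById(id: ProductId): Product? =
        jdbcTemplate.query(
            "SELECT id, name, price_amount, price_currency FROM products_product WHERE id = ?",
            { rs, _ ->
                Product(
                    id = ProductId(rs.getString("id")),
                    name = rs.getString("name"),
                    price = Money(rs.getBigDecimal("price_amount"), Currency.getInstance(rs.getString("price_currency"))),
                )
            },
            id.value,
        ).firstOrNull()
}
The domain never sees JdbcTemplate; swapping the persistence technology means touching only the infrastructure class.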

Domain Models Stay Internal
Domain entities never leave the module. What crosses boundaries is a DTO—a data carrier defined in -api:
// products-impl/domain - internal, has behavior and validation
internal data class Product(
val id: ProductId,
val name: String,
val price: Money,
) {
init {
require(name.isNotBlank()) { "Product name cannot be blank" }
}
fun applyDiscount(percent: Int): Product =
copy(price = price.discountBy(percent))
}
// products-api - public, just data
data class ProductDto(
val id: String,
val name: String,
val priceInCents: Long,
)
This might feel like duplication. It's intentional. Your domain model evolves based on business needs—you might add validation rules, rename fields for clarity, or restructure relationships. DTOs are shaped by what consumers need and form a stable contract. Keeping them separate means you can refactor internal representations without breaking other modules.
The mapping happens at the boundary, in the application layer. The domain layer shouldn't know about DTOs, but the application layer knows both—it coordinates between them. You can define the mapper there as an extension function:
// products-impl/application/ProductMappers.kt
internal fun Product.toDto() = ProductDto(
id = id.value,
name = name,
priceInCents = price.toCents(),
)
The extension function syntax keeps the call site clean (product.toDto()) while the file location keeps the dependency direction correct: application depends on domain, not the reverse.
Simple Modules: Keep It Flat
A module with a few straightforward operations doesn't need elaborate structure:
notifications-impl/
└── src/main/kotlin/com/example/notifications/
├── Notification.kt
├── NotificationServiceImpl.kt
├── NotificationRepository.kt
├── NotificationRepositoryJdbc.kt
├── NotificationEntity.kt
└── EmailClient.kt
Everything in one package. The dependency direction still applies—just without the folders. When the module grows, you can reorganize. The module boundary protects you either way.
Larger Modules: Organize by Use Case
When a module offers multiple distinct features, a single service class becomes a dumping ground. ProductServiceImpl accumulates methods for creation, updates, pricing, inventory sync, bulk imports. The class grows until someone suggests splitting it, and everyone agrees but nobody wants to do it.
The alternative is organizing by use case: each operation gets its own slice.
products-impl/
└── src/main/kotlin/com/example/products/
├── create/
│ ├── CreateProductHandler.kt
│ └── CreateProductValidator.kt
├── get/
│ └── GetProductHandler.kt
├── updateprice/
│ ├── UpdatePriceHandler.kt
│ └── PriceCalculator.kt
├── domain/
│ ├── Product.kt
│ ├── ProductId.kt
│ └── Money.kt
└── persistence/
├── ProductRepository.kt
├── ProductRepositoryJdbc.kt
└── ProductEntity.kt
Each feature folder contains everything specific to that use case. When you need to understand or modify price updates, you open one folder. The domain model and repository are shared because they represent the module's core concepts—but complex logic specific to one use case stays in its slice.
// products-impl/create/CreateProductHandler.kt
@Service
internal class CreateProductHandler(
private val repository: ProductRepository,
private val eventPublisher: ApplicationEventPublisher,
) {
@Transactional
fun handle(request: CreateProductRequest): ProductDto {
val product = Product(
id = ProductId.generate(),
name = request.name,
price = Money(request.priceAmount, Currency.getInstance(request.priceCurrency)),
)
val saved = repository.save(product)
eventPublisher.publishEvent(ProductCreatedEvent(saved.id.value))
return saved.toDto()
}
}
Connecting to the API contract
Other modules depend on the -api interface. A facade delegates to handlers:
// products-impl
@Service
internal class ProductServiceFacade(
private val createHandler: CreateProductHandler,
private val getHandler: GetProductHandler,
private val updatePriceHandler: UpdatePriceHandler,
) : ProductServiceApi {
override fun createProduct(request: CreateProductRequest) =
createHandler.handle(request)
override fun getProduct(id: String) =
getHandler.handle(id)
override fun updatePrice(id: String, request: UpdatePriceRequest) =
updatePriceHandler.handle(id, request)
}
Pure delegation, no logic. Other modules see one interface; internally, work is split by feature.
Controllers can inject handlers directly since they're in the same module:
@RestController
@RequestMapping("/api/v1/products")
internal class ProductController(
private val createHandler: CreateProductHandler,
private val getHandler: GetProductHandler,
) {
@PostMapping
fun create(@RequestBody request: CreateProductRequest): ResponseEntity<ProductDto> {
val product = createHandler.handle(request)
return ResponseEntity.status(HttpStatus.CREATED).body(product)
}
@GetMapping("/{id}")
fun get(@PathVariable id: String): ResponseEntity<ProductDto> {
return getHandler.handle(id)
?.let { ResponseEntity.ok(it) }
?: ResponseEntity.notFound().build()
}
}
Part 4: Isolating Data
You can have perfectly separated modules, clean APIs, and enforced dependencies—and still end up with a tightly coupled system. The culprit? The database.
When modules share tables, they share problems. When one module writes directly to another's tables, your boundaries exist only in your imagination. When a foreign key reaches across module boundaries, you've created a dependency that no Gradle configuration can catch.
The Shared Database Trap
It usually starts innocently. The Shipping module needs product weights. Products already has a products table. Why not just join?
-- In shipping code
SELECT s.*, p.weight_grams
FROM shipments s
JOIN products p ON s.product_id = p.id
WHERE s.id = ?
This works. It's fast. It's "just one query."
It's also invisible coupling. Now Shipping depends on the Products table structure. If Products renames weight_grams, Shipping breaks. If Products moves to a different database, Shipping breaks. And nobody sees this dependency in the code—it lurks in SQL strings, waiting to cause an incident during an otherwise routine deployment.

Establishing Data Boundaries
There are two practical approaches to data isolation. Both follow the same fundamental rules: no cross-module table access, no foreign keys between modules, IDs only for references. The difference is how strictly the boundaries are enforced.
Table prefixes are the lighter option. One database, one schema, one datasource—but tables are prefixed by module: products_product, shipping_shipment, orders_order. This gives you logical separation without configuration overhead. Cross-module queries are still possible, but the prefix makes violations obvious in code review. This works well for small teams with good discipline, or when you're early in development and boundaries might shift.
Separate schemas provide harder guarantees. Each module gets its own schema, its own Flyway instance, and its own migration history. The Products module can only touch the products schema; Shipping can only touch the shipping schema. This isn't convention—it's enforced by how you configure your repositories.
You can check the example implementation here.
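To give a flavor of the separate-schemas setup, here's a minimal sketch of a per-module Flyway configuration (bean names and migration locations are assumptions; it also presumes Spring Boot's single default Flyway auto-configuration is replaced by per-module instances):
// products-impl: migrations for the products schema live inside the module
@Configuration
class ProductsFlywayConfig {
    @Bean(initMethod = "migrate")
    fun productsFlyway(dataSource: DataSource): Flyway =
        Flyway.configure()
            .dataSource(dataSource)
            .schemas("products")                          // module-owned schema
            .locations("classpath:db/migration/products") // shipped with products-impl
            .load()
}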
No Cross-Schema Foreign Keys
This is the rule that makes people uncomfortable: no foreign keys between schemas.
-- Don't do this
CREATE TABLE shipping.shipments (
id VARCHAR(255) PRIMARY KEY,
product_id VARCHAR(255) REFERENCES products.products(id) -- No!
);
-- Do this instead
CREATE TABLE shipping.shipments (
id VARCHAR(255) PRIMARY KEY,
product_id VARCHAR(255) NOT NULL -- Just data, no constraint
);
Referential integrity across module boundaries becomes your responsibility at the application level. When Shipping creates a shipment, it validates that the product exists by calling the Products API:
override fun createShipment(request: CreateShipmentRequest): Result<ShipmentDto, ShipmentError> {
val product = productService.getProduct(request.productId)
.getOrElse { return Err(ShipmentError.ProductNotFound(request.productId)) }
val shipment = Shipment(
id = ShipmentId.generate(),
productId = ProductId(request.productId),
weightGrams = product.weightGrams,
)
return Ok(shipmentRepository.save(shipment).toDto())
}
This is more work than a foreign key. But it's explicit—visible in the code, testable, and under your control.
Queries That Span Modules
The most common concern when isolating data: "I used to join Orders and Products in one query. Now what?"
For single-item lookups, compose at the application layer:
fun getOrderDetails(orderId: String): OrderDetailsDto {
val order = orderService.getOrder(orderId).getOrThrow()
val product = productService.getProduct(order.productId).getOrThrow()
val shipment = shipmentService.findByOrderId(orderId).getOrNull()
return OrderDetailsDto(
orderId = order.id,
productName = product.name,
shippingStatus = shipment?.status,
)
}
For lists, avoid the N+1 problem by batching. Don't loop through 50 orders calling getProduct for each—collect all product IDs first, fetch them in one bulk call, then join in memory:
fun getOrderSummaries(orderIds: List<String>): List<OrderSummary> {
val orders = orderService.getOrders(orderIds)
val productIds = orders.map { it.productId }.distinct()
val products = productService.getProducts(productIds).associateBy { it.id }
return orders.map { order ->
OrderSummary(order, products[order.productId])
}
}
For historical accuracy, denormalize at write time. An order should show the product name and price at the time of purchase, not whatever the product is called today. When creating an order, copy the fields that matter: productNameAtPurchase, priceAtPurchase. The order captures the truth of the transaction as it happened.
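A sketch of what that snapshot can look like (OrderId.generate() and the helper function are illustrative):
internal data class Order(
    val id: OrderId,
    val productId: ProductId,
    val productNameAtPurchase: String, // snapshot of ProductDto.name at creation time
    val priceInCentsAtPurchase: Long,  // snapshot of ProductDto.priceInCents at creation time
)
internal fun newOrderFrom(product: ProductDto) = Order(
    id = OrderId.generate(),
    productId = ProductId(product.id),
    productNameAtPurchase = product.name,
    priceInCentsAtPurchase = product.priceInCents,
)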
For analytics and reporting, pragmatism wins. Building full API-based pipelines for internal dashboards is often over-engineering. A read-only reporting schema with cross-module views is acceptable—just keep it isolated from your application code.
The Tradeoff
Data isolation requires more code. You lose database-enforced referential integrity across modules—if a product is deleted while a shipment references it, the database won't stop you. Your application code has to handle that.
What you get in return is independence. Products can restructure its tables without Shipping noticing. When there's a problem with the shipments table, there's no ambiguity about who owns it. Module tests can mock the Products API while using a real database for Shipping's own tables.
And if you ever need to extract a module into its own service, the path is clear. No queries to untangle, no foreign keys to remove. The module already operates as if it's independent.
Part 5: Module Communication
Modules need to talk to each other. An order needs product information. A shipment needs to know when payment completes. A checkout needs to validate inventory before confirming.
The question isn't whether modules communicate—it's how they communicate without reintroducing the coupling we worked so hard to eliminate.
There are two patterns: synchronous calls for immediate needs, and asynchronous events for reactions.
Synchronous Calls: When You Need an Answer Now
The simplest pattern: one module calls another's API and waits for the response.
// shipping-impl
@Service
internal class ShipmentServiceImpl(
private val productService: ProductServiceApi,
private val shipmentRepository: ShipmentRepository,
) : ShipmentServiceApi {
override fun createShipment(request: CreateShipmentRequest): Result<ShipmentDto, ShipmentError> {
val product = productService.getProduct(request.productId)
.getOrElse { return Err(ShipmentError.ProductNotFound(request.productId)) }
val shipment = Shipment(
id = ShipmentId.generate(),
productId = ProductId(request.productId),
weightGrams = product.weightGrams,
status = ShipmentStatus.PENDING,
)
return Ok(shipmentRepository.save(shipment).toDto())
}
}
This is appropriate when you need data to proceed and the operation should fail if the dependency fails. The tradeoff is runtime coupling—if Products is slow, Shipping is slow. For many operations, this is exactly right.
The Translation Layer (Anti-Corruption)
When Shipping calls Products, it receives a ProductDto. This DTO is the Products team’s view of the world. It’s full of things they care about: SEO descriptions, image URLs, sustainability ratings, and marketing tags.
If you pass this ProductDto deep into your Shipping domain logic, you haven’t decoupled anything. You’ve just imported the coupling via a method argument. Your shipping logic now depends on the Products team’s naming conventions. If they decide to rename weight_grams to weight_value or split the name field, your code breaks.
You need a border guard. In Domain-Driven Design, this is called an Anti-Corruption Layer (ACL). It sounds dramatic, but it’s just a translation step that happens immediately at the boundary.
The mapping happens in the infrastructure layer of the consuming module. You fetch the data, strip out the noise, translate it into your language, and only then pass it to your domain.
// shipping-impl/infrastructure/adapter/ProductAdapter.kt
@Component
internal class ProductAdapter(
private val productService: ProductServiceApi
) {
fun getShippableItem(productId: String): ShippableItem? {
// 1. Call the external API
val dto = productService.getProduct(productId).getOrNull() ?: return null
// 2. Translate immediately
// We discard images and descriptions. We only keep what shipping cares about.
return ShippableItem(
id = ProductId(dto.id),
weight = Weight.fromGrams(dto.weightGrams),
dimensions = Dimensions(dto.width, dto.height, dto.depth),
// We translate their concepts into ours
isFragile = dto.tags.contains("FRAGILE")
)
}
}
Asynchronous Events: When You're Announcing, Not Asking
Sometimes a module doesn't need a response. It's announcing that something happened, and other modules react if they care.
Events are defined in the publishing module's API:
// payment-api
data class PaymentCompletedEvent(
val paymentId: String,
val orderId: String,
val amount: BigDecimal,
val currency: String,
val timestamp: Instant,
)
The publisher doesn't know who's listening. It doesn't care. Shipping might react by starting fulfillment. Inventory might confirm a reservation. Notifications might send an email. Each listener is independent.

Events are appropriate when the action is a reaction rather than a prerequisite, when multiple modules might care, and when eventual consistency is acceptable.
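A sketch of both sides using Spring's in-process events (the listener class, the repository method, and startFulfillment are illustrative, not taken from the example code):
// payment-impl: announce that something happened
@Service
internal class PaymentServiceImpl(
    private val paymentRepository: PaymentRepository,
    private val eventPublisher: ApplicationEventPublisher,
) {
    fun completePayment(paymentId: String) {
        val payment = paymentRepository.markCompleted(paymentId) // assumed repository method
        eventPublisher.publishEvent(
            PaymentCompletedEvent(
                paymentId = payment.id,
                orderId = payment.orderId,
                amount = payment.amount,
                currency = payment.currency,
                timestamp = Instant.now(),
            )
        )
    }
}
// shipping-impl: react if you care, ignore otherwise
@Component
internal class StartFulfillmentOnPaymentCompleted(
    private val shipmentService: ShipmentServiceApi,
) {
    @EventListener
    fun on(event: PaymentCompletedEvent) {
        shipmentService.startFulfillment(event.orderId) // assumed API method
    }
}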
The Dual-Write Problem
There's a subtle issue with publishing events directly. Consider this sequence:
- Update database (payment marked complete)
- Publish event (notify listeners)
- Return success
What if the database write succeeds but event publishing fails? The payment is complete, but nobody knows. Shipping never starts.
What if the event publishes but then the transaction rolls back? Listeners react to something that didn't actually happen.
This is the dual-write problem: updating two systems without a shared transaction.
The Transactional Outbox
The solution is to make event publishing part of the database transaction. Instead of publishing directly, write the event to an outbox table in the same transaction as your business data.

A separate processor polls the outbox and publishes to listeners. If the business transaction rolls back, the outbox entry rolls back too. If it commits, the event is guaranteed to be published eventually.
This guarantees at-least-once delivery—if the processor crashes after publishing but before marking the entry as processed, it republishes on restart. This means listeners need to be idempotent: handling the same event twice should be safe.
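A sketch of the write side (the outbox entity, repository, and event mapper are naming assumptions; the polling processor is omitted):
// payment-impl: the event is stored, not published, inside the transaction
@Service
internal class PaymentCompletionService(
    private val paymentRepository: PaymentRepository,
    private val outboxRepository: OutboxRepository,
    private val objectMapper: ObjectMapper,
) {
    @Transactional
    fun completePayment(paymentId: String) {
        val payment = paymentRepository.markCompleted(paymentId)
        // Same transaction as the business data: both commit, or neither does.
        outboxRepository.save(
            OutboxEntry(
                id = UUID.randomUUID().toString(),
                eventType = "PaymentCompleted",
                payload = objectMapper.writeValueAsString(payment.toCompletedEvent()), // assumed mapper
                createdAt = Instant.now(),
                processedAt = null,
            )
        )
    }
}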
For events within the modular monolith using Spring's ApplicationEventPublisher, the direct approach is often sufficient—everything happens in the same process. The outbox becomes important when publishing to external message brokers or when event delivery is critical to business correctness.
Choosing the Right Pattern
Most module communication falls into one of three patterns. The right choice depends on whether you need an immediate answer, whether multiple modules might care about the same event, and how much you care about delivery guarantees.
| Pattern | Use When | Consistency |
|---|---|---|
| Synchronous call | You need data to proceed | Immediate |
| Async event (direct) | Simple cases, in-process | Best effort / Retry |
| Async event (outbox) | Reliability matters | At-least-once |
Most module communication is synchronous calls—simple and direct. Events handle reactions and loose coupling. The outbox adds reliability when you can't afford to lose events.
The patterns aren't mutually exclusive. A checkout flow might make synchronous calls to validate inventory, then write the order and an outbox entry in one transaction, then have listeners react to start fulfillment.
Part 6: Errors and Validation
Errors are where modular codebases quietly rot. One team throws exceptions, another returns nulls, a third invents custom result types. The controller layer becomes a graveyard of catch blocks trying to translate chaos into HTTP status codes.
It doesn't have to be this way.
Exceptions Are for Exceptional Things
The traditional approach to errors in Java/Kotlin:
fun getProduct(id: String): Product {
val product = repository.findById(id)
?: throw ProductNotFoundException(id)
return product
}
Nothing in the type signature hints that this might throw. You discover ProductNotFoundException exists when it crashes in production—or if you're lucky, by reading documentation that probably doesn't exist.
Exceptions make sense for truly exceptional situations: out of memory, database connection lost, disk full. But "product not found" isn't exceptional. It's a normal business case—the user typed a wrong ID, the product was deleted. This happens all the time, and the code should make it obvious.
The alternative is returning errors as values. Define them in the API module using sealed classes:
// products-api
sealed class ProductError {
data class NotFound(val id: String) : ProductError()
data class InvalidData(val reason: String) : ProductError()
data object DuplicateName : ProductError()
}
interface ProductServiceApi {
fun getProduct(id: String): Result<ProductDto, ProductError>
fun createProduct(request: CreateProductRequest): Result<ProductDto, ProductError>
}
Now the type signature tells you everything. The service returns Result<ProductDto, ProductError>—success or failure is explicit. And because ProductError is sealed, the compiler knows all possible subtypes. Add a new error case, and every call site that doesn't handle it becomes a compile error.
A note on Result types: The Result<T, E> used here isn't Kotlin's built-in Result class—that one is designed for exceptions, not typed errors. We need a proper either type with Ok and Err variants. The example project includes a custom implementation, but you can also use Michael Bull's kotlin-result or Arrow's Either. The choice matters less than the consistency—pick one and use it everywhere.
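For orientation, a minimal sketch of such a type (the example project's version has more helpers such as getOrElse and getOrThrow):
sealed class Result<out T, out E>
data class Ok<out T>(val value: T) : Result<T, Nothing>()
data class Err<out E>(val error: E) : Result<Nothing, E>()

// Exhaustive handling of both cases, used as fold(ifOk = ..., ifErr = ...) below
fun <T, E, R> Result<T, E>.fold(ifOk: (T) -> R, ifErr: (E) -> R): R = when (this) {
    is Ok -> ifOk(value)
    is Err -> ifErr(error)
}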
// products-impl
@Service
internal class ProductServiceImpl(
private val repository: ProductRepository,
) : ProductServiceApi {
override fun getProduct(id: String): Result<ProductDto, ProductError> {
val product = repository.findById(ProductId(id))
?: return Err(ProductError.NotFound(id))
return Ok(product.toDto())
}
}
Callers handle both cases explicitly:
productService.getProduct(productId).fold(
ifOk = { product -> /* use product */ },
ifErr = { error -> /* handle error */ }
)
No try-catch. No wondering what might throw.

Translate at the Boundary
The domain layer doesn't know about HTTP. It returns ProductError.NotFound. Somewhere, that needs to become a 404 response.
That translation happens at the controller—the boundary between your domain and the web:
// products-impl/infrastructure/web
@RestController
internal class ProductController(
private val productService: ProductServiceApi,
) {
@GetMapping("/products/{id}")
fun getProduct(@PathVariable id: String): ResponseEntity<ProductDto> {
return productService.getProduct(id).fold(
ifOk = { ResponseEntity.ok(it) },
ifErr = { throw it.toResponseStatusException() }
)
}
}
fun ProductError.toResponseStatusException() = when (this) {
is ProductError.NotFound -> ResponseStatusException(NOT_FOUND, "Product not found: $id")
is ProductError.InvalidData -> ResponseStatusException(BAD_REQUEST, reason)
is ProductError.DuplicateName -> ResponseStatusException(CONFLICT, "Product name already exists")
}
Spring Boot 3 converts ResponseStatusException to RFC 7807 Problem Details automatically—clients get consistent, parseable error responses. The domain stays clean. The HTTP translation is explicit and in one place.
And when you add a new error type, the when expression forces you to decide what HTTP status it maps to.
Make Invalid State Unrepresentable
Where should validation live? The traditional answer is "everywhere"—annotations on DTOs, checks in services, constraints in the database. And somehow invalid data still gets through.
A better approach: validate at construction time. If an object exists, it's valid.
Kotlin gives us two tools for this: require and check. Both throw exceptions when their condition fails, but they signal different kinds of problems. require throws IllegalArgumentException—the caller passed bad input. check throws IllegalStateException—something is wrong with the system itself. This distinction matters when you're deciding what HTTP status to return.
For request DTOs, use require. Invalid input is the caller's fault:
data class CreateProductRequest(
val name: String,
val priceAmount: BigDecimal,
val priceCurrency: String,
) {
init {
require(name.isNotBlank()) { "Name is required" }
require(priceAmount > BigDecimal.ZERO) { "Price must be positive" }
}
}
When Jackson deserializes a request with invalid data, the init block throws IllegalArgumentException. A global exception handler catches this and returns a 400—bad request, try again with valid input.
For domain objects, use check. If a domain invariant fails, something has gone wrong in your system:
internal data class Product(
val id: ProductId,
val name: String,
val price: Money,
) {
init {
check(name.isNotBlank()) { "Product name cannot be blank" }
check(name.length <= 200) { "Product name too long" }
}
}
internal data class Money(
val amount: BigDecimal,
val currency: Currency,
) {
init {
check(amount >= BigDecimal.ZERO) { "Amount cannot be negative" }
}
}
If these fail, it means your application code tried to create an invalid domain object—a bug, not bad user input. The global handler maps IllegalStateException to a 500, which is exactly right. You want to know about this.
@RestControllerAdvice
class GlobalExceptionHandler {
@ExceptionHandler(IllegalArgumentException::class)
fun handleBadInput(ex: IllegalArgumentException): ProblemDetail {
return ProblemDetail.forStatusAndDetail(
HttpStatus.BAD_REQUEST,
ex.message ?: "Invalid request"
)
}
@ExceptionHandler(IllegalStateException::class)
fun handleInternalError(ex: IllegalStateException): ProblemDetail {
// Log it—this is a bug
return ProblemDetail.forStatusAndDetail(
HttpStatus.INTERNAL_SERVER_ERROR,
"Something went wrong"
)
}
}
No validation logic in controllers. No @Valid annotations to forget. Invalid requests fail automatically, and domain bugs surface as errors rather than silent corruption.
How It Fits Together
Request DTOs validate in init blocks with require—invalid data never makes it past deserialization. Domain objects validate with check—invariant violations are bugs, not user errors. The application layer returns Result types with domain errors for expected business cases. Controllers translate those domain errors to HTTP status codes. And the global handler catches anything unexpected.
Yes, this is more code than throwing exceptions everywhere. But it's code that tells you something. When you look at a function returning Result<ProductDto, ProductError>, you know exactly what can go wrong. When you add a new error case to a sealed class, the compiler finds every place that needs updating. When a domain object exists, you know it's valid.
Part 7: Testing
A modular architecture should make testing easier, not harder. If you need to spin up the entire application to test whether a discount calculation works, something has gone wrong.
The module boundaries we've enforced aren't just about preventing coupling—they create natural test boundaries. Each module has a clear API, explicit dependencies, and isolated data. This changes how you test.
What Changes with Modules
In a traditional monolith, testing is frustrating because everything is connected. Testing the order service means setting up products, users, inventory, and payments—even if you only care about order validation logic.
With enforced module boundaries, you get a new option: test one module completely, mock the others. The Shipping module depends on ProductServiceApi, not ProductServiceImpl. In tests, you can provide a mock implementation and test Shipping in total isolation—real database, real transactions, real queries—without Products existing at all.
This is the highest-value testing strategy for a modular monolith. Unit tests are still useful for complex logic. Application-level tests still catch integration issues. But module tests sit in the sweet spot: fast enough to run frequently, realistic enough to catch real bugs.
The classic test pyramid still applies: many fast tests at the bottom, fewer slow tests at the top.
Unit tests form the base. They test individual classes with dependencies mocked, run in milliseconds, and you'll have many of them.
Module tests are the sweet spot. One module fully wired—services, repositories, database—while other modules are mocked. They verify that a module fulfills its contract and are often your highest-value tests.
Integration tests verify adapters in isolation: a repository against a real database, a controller's serialization behavior. Write them where module tests don't already provide coverage.
Application-level tests wire up everything and test critical user journeys. Write few—they're slow and brittle, but they catch integration issues nothing else will.

Unit Tests
Unit tests give you rapid feedback on complex logic. Change some code, run the test, see if it works—all in under a second.
Focus on code with meaningful logic: calculations, state transitions, validation rules, parsing. Value objects are often good candidates:
class MoneyTest {
@Test
fun `cannot create negative amount`() {
assertThatThrownBy { Money(BigDecimal("-10"), EUR) }
.isInstanceOf(IllegalStateException::class.java)
.hasMessageContaining("cannot be negative")
}
@Test
fun `discount reduces amount correctly`() {
val money = Money(BigDecimal("100"), EUR)
val discounted = money.discountBy(20)
assertThat(discounted.amount).isEqualByComparingTo(BigDecimal("80"))
}
}
For services, mock the dependencies:
class ProductServiceImplTest {
private val repository: ProductRepository = mock()
private val eventPublisher: ApplicationEventPublisher = mock()
private val service = ProductServiceImpl(repository, eventPublisher)
@Test
fun `returns error when product not found`() {
whenever(repository.findById(ProductId("unknown"))).thenReturn(null)
val result = service.getProduct("unknown")
assertThat(result.isErr).isTrue()
assertThat(result.error).isEqualTo(ProductError.NotFound("unknown"))
}
}
Skip trivial code. If a method just delegates to a repository and maps the result, a unit test adds little—the module test will cover it.
Module Tests
Module tests verify that a module fulfills its contract. One module fully wired—services, repositories, real database—while dependencies on other modules are mocked.
The key is configuring Spring to load only what you need. Create a test configuration that mocks external dependencies:
// shipping-impl/src/test/kotlin/com/example/shipping/ShippingModuleTestConfig.kt
@TestConfiguration
class ShippingModuleTestConfig {
@Bean
fun productServiceApi(): ProductServiceApi = mock()
@Bean
fun inventoryServiceApi(): InventoryServiceApi = mock()
}
Then use @SpringBootTest with limited component scanning:
@SpringBootTest(classes = [ShippingModuleTestConfig::class])
@ComponentScan(basePackages = ["com.example.shipping"])
@Import(FlywayConfig::class) // Run migrations for shipping schema
class ShippingModuleTest {
@Autowired
private lateinit var shippingService: ShippingServiceApi
@Autowired
private lateinit var productServiceApi: ProductServiceApi // The mock
@Test
fun `creates shipment for valid product`() {
// Given
whenever(productServiceApi.getProduct("prod-123")).thenReturn(
Ok(buildProductDto(id = "prod-123", weightGrams = 500))
)
// When
val result = shippingService.createShipment(
CreateShipmentRequest(productId = "prod-123", address = "123 Main St")
)
// Then
assertThat(result.isOk).isTrue()
assertThat(result.value.weightGrams).isEqualTo(500)
}
}
The Shipping module runs against a real database (use Testcontainers for Postgres), executes real SQL, and validates real transactions. But it doesn't need Products, Inventory, or any other module to exist.
When a module test fails, you know which module broke and which requirement failed. When it passes, you have confidence the module works.
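One way to supply that real database is Testcontainers. A sketch of the wiring, layered onto the configuration shown above (assumes the Testcontainers JUnit 5 and PostgreSQL artifacts are on the test classpath):
@Testcontainers
@SpringBootTest(classes = [ShippingModuleTestConfig::class])
class ShippingModuleTest {
    companion object {
        // One shared container for all tests in this class
        @Container
        @JvmStatic
        val postgres = PostgreSQLContainer<Nothing>("postgres:16")

        // Point the Spring datasource at the container before the context starts
        @DynamicPropertySource
        @JvmStatic
        fun datasourceProperties(registry: DynamicPropertyRegistry) {
            registry.add("spring.datasource.url", postgres::getJdbcUrl)
            registry.add("spring.datasource.username", postgres::getUsername)
            registry.add("spring.datasource.password", postgres::getPassword)
        }
    }
    // ... tests as shown above ...
}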
Expressing tests as specifications with BDD
The examples above work fine, but there's an alternative worth considering: Behavior-Driven Development frameworks like Cucumber let you express tests as readable specifications.
Feature: Shipment Creation
Scenario: Create shipment for valid product
Given a product exists with id "prod-123" and weight 500 grams
When I create a shipment for product "prod-123"
Then the shipment should be created with weight 500 grams
Scenario: Fail when product does not exist
Given no product exists with id "unknown"
When I create a shipment for product "unknown"
Then the shipment creation should fail with "ProductNotFound"
Step definitions wire the scenarios to code, mocking external modules the same way:
class ShipmentSteps(
private val shippingService: ShippingServiceApi,
private val productServiceApi: ProductServiceApi,
) {
private var result: Result<ShipmentDto, ShipmentError>? = null
@Given("a product exists with id {string} and weight {int} grams")
fun mockProduct(productId: String, weight: Int) {
whenever(productServiceApi.getProduct(productId)).thenReturn(
Ok(buildProductDto(id = productId, weightGrams = weight))
)
}
@When("I create a shipment for product {string}")
fun createShipment(productId: String) {
result = shippingService.createShipment(
CreateShipmentRequest(productId = productId, address = "123 Main St")
)
}
@Then("the shipment should be created with weight {int} grams")
fun verifyShipmentCreated(expectedWeight: Int) {
assertThat(result?.isOk).isTrue()
assertThat(result?.value?.weightGrams).isEqualTo(expectedWeight)
}
}
Feature files become living documentation—new team members read them and understand what the module does. When requirements change, update the scenario first; the failing test drives the implementation.
The tradeoff is complexity: Cucumber requires additional configuration and glue code. For small teams or simple modules, plain Kotlin tests may be simpler. Choose the style that fits your team.
Integration Tests
Module tests cover most scenarios, but sometimes you need focused tests on a specific adapter.
Repository tests verify complex queries work correctly:
@DataJdbcTest
@Import(FlywayConfig::class)
class ProductRepositoryJdbcTest {
@Autowired
private lateinit var repository: ProductRepositoryJdbc
@Test
fun `finds products by category with price range`() {
// Given
repository.save(buildProductEntity(category = "electronics", priceInCents = 5000))
repository.save(buildProductEntity(category = "electronics", priceInCents = 15000))
repository.save(buildProductEntity(category = "clothing", priceInCents = 3000))
// When
val results = repository.findByCategoryAndPriceRange(
category = "electronics",
minPrice = 1000,
maxPrice = 10000
)
// Then
assertThat(results).hasSize(1)
assertThat(results.first().priceInCents).isEqualTo(5000)
}
}
Controller tests verify serialization, validation, and HTTP semantics:
@WebMvcTest(ProductController::class)
class ProductControllerTest {
@Autowired
private lateinit var mockMvc: MockMvc
@MockBean
private lateinit var productService: ProductServiceApi
@Test
fun `returns 404 when product not found`() {
whenever(productService.getProduct("unknown"))
.thenReturn(Err(ProductError.NotFound("unknown")))
mockMvc.perform(get("/api/v1/products/unknown"))
.andExpect(status().isNotFound)
.andExpect(jsonPath("$.detail").value("Product not found: unknown"))
}
@Test
fun `validates request body`() {
mockMvc.perform(
post("/api/v1/products")
.contentType(MediaType.APPLICATION_JSON)
.content("""{"name": "", "priceAmount": -10}""")
)
.andExpect(status().isBadRequest)
}
}
Write integration tests when the adapter has complexity worth testing in isolation. A repository with only findById and save doesn't need its own test—the module test exercises it. A repository with custom queries, pagination, or complex joins benefits from focused tests.
Application-Level Tests
These wire up the full system to validate cross-module flows. The step definitions live in the application module and call real services:
Feature: Checkout Flow
Background:
Given I am logged in as a user
Scenario: Complete purchase of eco-friendly product
Given the product "Organic Cotton T-Shirt" is in stock
And I have added it to my cart
When I complete checkout with valid payment
Then an order should be created
And inventory should be reduced
And a shipment should be scheduled
These are slow and brittle. Write them only for high-impact user journeys where integration failures would be costly. Let module tests handle the detailed behavior.
Test Data
Tests need data, and you don't want every test constructing objects from scratch.
Test fixtures live in src/testFixtures and provide builder functions with sensible defaults. A buildProductDto() function lets you specify only the fields that matter for your test—everything else gets a reasonable value:
fun buildProductDto(
id: String = UUID.randomUUID().toString(),
name: String = "Test Product",
weightGrams: Int = 100,
) = ProductDto(id = id, name = name, weightGrams = weightGrams)
This keeps tests focused on what they're actually testing. When you read a test that says buildProductDto(weightGrams = 500), you know the weight matters; everything else is just scaffolding.
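Sharing these builders across modules is what Gradle's java-test-fixtures plugin is for; a sketch of the wiring (module paths follow the example layout):
// products-api/build.gradle.kts — expose builders from src/testFixtures
plugins {
    `java-test-fixtures`
}
// shipping-impl/build.gradle.kts — consume them in tests
dependencies {
    testImplementation(testFixtures(project(":products:products-api")))
}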
Worldview modules serve a different purpose—they provide realistic named data for runtime use: local development, demos, and staging environments. Each bounded context can have an optional -worldview submodule that seeds a coherent dataset on startup.
// products-worldview/WorldviewProduct.kt
object WorldviewProduct {
val organicCottonTShirt = ProductDto(
id = "PROD-001",
name = "Organic Cotton T-Shirt",
priceAmount = BigDecimal("29.99"),
// ...
)
val allProducts = listOf(organicCottonTShirt, bambooToothbrush, /* ... */)
}
// products-worldview/WorldviewProductDataLoader.kt
@Component
@Order(1) // Products load before orders
class WorldviewProductDataLoader(
private val productService: ProductServiceApi,
@Value("\${spring.profiles.active:}") private val activeProfile: String
) : ApplicationRunner {
override fun run(args: ApplicationArguments?) {
if (activeProfile.contains("prod") || activeProfile.contains("test")) return
WorldviewProduct.allProducts.forEach { product ->
productService.createProduct(product.toCreateRequest())
}
}
}
The @Order annotation controls load sequence—products before orders, users before orders. The profile check ensures worldview data never loads in production or during tests. The application module pulls in all worldview modules alongside the implementations, so starting the app locally gives you a populated system immediately.
Keep these concerns separate. Test fixtures build minimal, scenario-specific data inline. Worldview modules create a realistic environment that persists across restarts. Module tests should be self-documenting:
# Avoid: reader must look up the price
Given the worldview product "Organic Cotton T-Shirt" exists
When a 20% discount is applied
Then the product price should be €23.99
# Better: scenario is self-contained
Given a product "Organic Cotton T-Shirt" with price €29.99
When a 20% discount is applied
Then the product price should be €23.99
The second version states the precondition that matters. Module tests mock external dependencies anyway, so worldview data wouldn't even be available. Save worldview references for application-level tests, where you're verifying that actual user journeys work with realistic data.
Keep Module Test Fixtures Independent
It's tempting to share test setup across modules. The Orders test fixtures define "Given an order exists," and Shipping tests reuse it. Less duplication, right?
This recreates your production dependency graph in your test code. Orders needs users for its tests, so its fixtures create users. Shipping needs orders, so its fixtures pull in Orders fixtures—which pull in Users. Before long, refactoring a Users test helper breaks tests in three other modules.
There's a subtler problem too: shared steps hide complexity. "Given an order exists" might silently create a user, a product, and an inventory reservation behind the scenes. When the test fails, you're debugging through layers of setup you didn't know existed.
Module test fixtures should mock external dependencies, not call them. "Given a product exists" configures a mock, not the real Products module. Each module's tests stay self-contained, and failures stay local. If you genuinely need to test across module boundaries—real services, real data—that's what application-level tests are for.
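A minimal sketch of that idea as a Cucumber step definition, assuming Spring-managed step classes, a MockK mock of ProductServiceApi registered in the test context, and a hypothetical getProduct(id) method on the API:

import io.cucumber.java.en.Given
import io.mockk.every

class ProductStepDefinitions(
    private val productService: ProductServiceApi, // a mock, not the real Products module
) {
    @Given("a product {string} with weight {int} grams")
    fun aProductWithWeight(name: String, weightGrams: Int) {
        val product = buildProductDto(name = name, weightGrams = weightGrams)
        // Configure the mock: Shipping's tests never touch products-impl or its schema.
        every { productService.getProduct(product.id) } returns product
    }
}

The step stays local to Shipping: if Products changes its internals, this test doesn't notice.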
Prioritize Confidence Over Coverage
The architecture enables meaningful tests. Because modules have clean APIs, you can mock them. Because boundaries are enforced, module tests verify real units of functionality. Because data is isolated, you can test against a real database without setting up the world.
Write the tests that give you confidence. Skip the tests that just give you coverage.
Part 8: Putting It All Together
We've talked about modules, boundaries, layers, and tests. But how does it actually become a running application?
The good news: it's simpler than you might expect. One application module pulls in the implementations, Spring wires them together, and you deploy a single JAR.
The Application Module
The application module is the composition root—the place where all the pieces come together. Its build.gradle.kts depends on all the -impl modules: products-impl, shipping-impl, orders-impl, and so on. It also pulls in the Spring Boot starters, Flyway, and your database driver.
This is the only place that depends on -impl modules. Everyone else depends on -api modules. The application module breaks the rule because its job is to assemble everything.
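A sketch of that build file, with module paths and starter choices as assumptions rather than prescriptions:

// application/build.gradle.kts
plugins {
    id("kotlin-conventions")
    id("org.springframework.boot")   // plugin versions assumed to be managed centrally
    kotlin("plugin.spring")
}

dependencies {
    // The one place where -impl modules are allowed as dependencies.
    implementation(project(":products:products-impl"))
    implementation(project(":shipping:shipping-impl"))
    // ...and so on for the other bounded contexts.

    // Worldview modules so local runs start with seeded data.
    implementation(project(":products:products-worldview"))

    implementation("org.springframework.boot:spring-boot-starter-web")
    implementation("org.flywaydb:flyway-core")
    runtimeOnly("org.postgresql:postgresql")
}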
The application class itself is just a standard Spring Boot entry point. The only thing worth noting is scanBasePackages—it needs to cover your root namespace so Spring discovers components in all your -impl modules.
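A sketch, with package names assumed:

// application/src/main/kotlin/com/example/app/application/Application.kt
package com.example.app.application

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

// scanBasePackages points at the root namespace so Spring also discovers
// components living in the -impl modules (com.example.app.products, ...).
@SpringBootApplication(scanBasePackages = ["com.example.app"])
class Application

fun main(args: Array<String>) {
    runApplication<Application>(*args)
}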
How Spring Wires Across Modules
A common question: if ShippingServiceImpl needs ProductServiceApi, and they're in different Gradle modules, how does Spring connect them?
The answer is straightforward. Spring doesn't care about Gradle modules; it cares about the classpath. When the application starts, all the -impl modules are on the classpath. Spring scans for components, finds ProductServiceImpl (which implements ProductServiceApi), and registers it as a bean. When it creates ShippingServiceImpl, it sees a constructor parameter of type ProductServiceApi, finds the matching bean, and injects it.
// products-impl
@Service
internal class ProductServiceImpl(...) : ProductServiceApi

// shipping-impl
@Service
internal class ShippingServiceImpl(
    private val productService: ProductServiceApi // Spring injects ProductServiceImpl
) : ShippingServiceApi

The interface is public (in -api). The implementation is internal (in -impl). Spring wires them together because at runtime, they're all in the same application context.
This is one of the key benefits of a modular monolith over microservices—no service discovery, no HTTP clients, no serialization overhead. Just dependency injection.
Project Structure
The overall project structure follows naturally from the patterns we've established:
project-root/
├── build-logic/
│   └── src/main/kotlin/
│       ├── kotlin-conventions.gradle.kts
│       └── common-library.gradle.kts
├── common/
│   ├── common-money/
│   ├── common-time/
│   └── common-result/
├── products/
│   ├── products-api/
│   ├── products-impl/
│   └── products-worldview/
├── shipping/
│   ├── shipping-api/
│   └── shipping-impl/
├── application/
└── settings.gradle.kts

The build-logic module contains convention plugins that centralize shared Gradle configuration. Instead of repeating the Kotlin version, Java compatibility, and test configuration in every build.gradle.kts, modules apply a single plugin like id("kotlin-conventions"). Change the convention once, and every module picks up the change.
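A minimal sketch of such a convention plugin, assuming build-logic applies the kotlin-dsl plugin and has the Kotlin Gradle plugin on its classpath (the toolchain version and test setup are illustrative):

// build-logic/src/main/kotlin/kotlin-conventions.gradle.kts
plugins {
    id("org.jetbrains.kotlin.jvm")
}

kotlin {
    jvmToolchain(21)
}

tasks.withType<Test>().configureEach {
    useJUnitPlatform()
}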
The root settings.gradle.kts registers all modules and includes the build-logic as a composite build. Nothing surprising here—just a list of include() statements for each module in your project.
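Roughly, assuming the module names from the structure above:

// settings.gradle.kts
pluginManagement {
    // Composite build: makes the convention plugins in build-logic resolvable.
    includeBuild("build-logic")
}

rootProject.name = "project-root"

include(
    ":common:common-money",
    ":common:common-time",
    ":common:common-result",
    ":products:products-api",
    ":products:products-impl",
    ":products:products-worldview",
    ":shipping:shipping-api",
    ":shipping:shipping-impl",
    ":application",
)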
From Source to Deployment
The Spring Boot plugin packages everything into a single executable JAR. Run ./gradlew :application:bootJar and you get one artifact containing all modules, all dependencies, and an embedded server. Deploy it anywhere that runs Java. One artifact, one process, all your modules.
For local development, you typically need a database. A simple Docker Compose file handles that—start the container, run ./gradlew :application:bootRun, and you're developing. Worldview modules can seed the database with realistic data automatically, so your local environment feels like a real system rather than an empty shell.
When You Outgrow the Monolith
The architecture we've built is designed to be "split-ready," even if you never need to split it.
If a module eventually needs to become a separate service, the path is clear. The database is already separate—each module has its own schema, so you export it to a new database without untangling shared tables. The API is already defined—the -api module becomes the contract for the new service, and you replace the in-process implementation with an HTTP or gRPC client. Communication is already explicit—you swap Spring's dependency injection for network calls, and Spring events for a message broker. Even your tests are ready—module tests that mocked ProductServiceApi now mock the client instead.
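For instance, swapping the in-process implementation for an HTTP adapter might look roughly like this (the getProduct signature, endpoint, and URL are assumptions; RestClient ships with Spring Framework 6.1+):

// Lives in the consuming deployable once Products is extracted; callers still inject ProductServiceApi.
import org.springframework.stereotype.Component
import org.springframework.web.client.RestClient

@Component
internal class ProductServiceHttpClient(
    restClientBuilder: RestClient.Builder, // auto-configured by Spring Boot
) : ProductServiceApi {

    private val client = restClientBuilder
        .baseUrl("http://products-service") // hypothetical service URL
        .build()

    override fun getProduct(id: String): ProductDto? =
        client.get()
            .uri("/products/{id}", id)
            .retrieve()
            .body(ProductDto::class.java)
}

Consumers keep injecting ProductServiceApi; only the binding behind the interface changes.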
You're not refactoring a tangled mess. You're promoting a module that's already isolated.
Most teams never need to do this. The modular monolith scales further than people expect—both technically and organizationally. Multiple teams can work on different modules without stepping on each other, and the single deployment model remains manageable well into the hundreds of thousands of lines of code. But knowing the escape hatch exists makes the choice less risky. You're not betting the company on a monolith forever; you're choosing the simplest architecture that works today while keeping your options open for tomorrow.

Conclusion
A modular monolith isn't a compromise or a stepping stone to microservices. For most teams, it's the destination.
You get boundaries that hold—not because everyone remembered the guidelines, but because the compiler enforces them. You get changes that stay local, because modules can't reach into each other's internals. You get deployment that stays simple, because it's still one JAR, one process, one thing to monitor at 3am when something goes wrong.
The work we've covered—splitting API from implementation, isolating data by schema, making errors explicit in the type system—isn't ceremony for its own sake. It's buying options. The option to scale your team across modules without stepping on each other. The option to refactor one module's internals without touching its consumers. The option to extract a service later if you genuinely need to, without first untangling years of accumulated coupling.
Not everything in this guide carries equal weight. Some things are load-bearing: enforcement through Gradle modules (or Spring Modulith), the API/implementation split, data ownership. Skip these, and the "modular" part of your monolith will erode within months. Other things—how you structure folders inside a module, whether you use Cucumber or plain JUnit, what you name your handler classes—are local decisions. Get them wrong and you have a mess, but it's a contained mess. The module boundary limits the blast radius.
That last point matters more than it might seem. In a traditional monolith, every shortcut becomes everyone's problem. In a modular monolith, a messy module is just a messy module. You can clean it up later, on your own schedule, without coordinating across teams or risking the whole system.
And if you ever do outgrow it, the escape hatch is real, which makes the modular monolith less a bet than a deliberate choice.
Build the boundaries. Enforce them. Ship the JAR. And maybe, just maybe, you'll have a codebase that's still pleasant to work in two years from now.
Finally, a cheat sheet as a quick reference:
