Composable Backends with Modular Handlers: Building Services That Grow Without Rewrites
When a backend is young, everything feels fast: one service, a few routes, everyone knows where things live. A year later, the same service is handling alerts, audits, preferences, billing hooks, experiment flags, and “just one more feature” after every sprint. Regression risk climbs, onboarding slows, and even small refactors start to feel dangerous.
This post is about a different way to grow: composable Python backends built from modular feature handlers. Each feature behaves like a small plugin: it owns its routes, validators, config, teardown path, and test contract. Instead of rewriting the core when new capabilities arrive, you just add (or remove) a handler.
We’ll walk through the problem, the architecture pattern, a step-by-step process to implement it in Python (Flask/FastAPI style), and how this approach keeps services testable, CI-friendly, and maintainable as your product surface area expands.
Problem: When Features Accumulate, Backends Lose Shape
Most teams start with something like this:
- A single Python service (`app.py` / `main.py`)
- A few modules (`alerts.py`, `users.py`, `billing.py`)
- Shared DB/session clients imported everywhere
Over time, you get:
- Scattered feature logic: alerts code mixed with billing code in the same controllers
- Ad-hoc lifecycle hooks: teardown code sprinkled in `try/finally` blocks and random decorators
- Tests that mirror the mess: end-to-end tests for the entire service, but very few unit/contract tests per feature
- Risky feature additions: adding “just one more” capability feels like threading a needle through fragile wiring
If you’re supporting real devices (like a wearable tracker or smart feeder) and mobile apps on top, the surface area grows fast. When you add “alerts for low battery” or “timeline of feeding events”, those features shouldn’t require you to reason about every other feature in the backend.
That’s exactly what modular handlers give you.

Approach: Feature Modules as “Handlers” With Contracts
At a high level, a handler is a feature module that:
- Knows how to register itself with the app (routes, background jobs, events)
- Knows what dependencies it needs (DB clients, cache, config)
- Exposes a clear contract for tests (e.g., “given this input, produce this side effect”)
- Owns its teardown or cleanup (closing connections, stopping schedulers)
- Can be added or removed without touching the core router or other handlers
The core backend becomes a host:
- It configures shared infrastructure (HTTP server, DI container, logging)
- It discovers and registers handlers at startup
- It defines a minimal base interface each handler implements
- It coordinates global policies (auth, tracing, rate limiting), while handlers focus on business logic
If you’ve seen plugin architectures in frontends (e.g., modular feature bundles in React or micro-frontend setups), this is the same idea applied to backends.
Process: Designing a Composable Backend in Python
Let’s walk through one concrete way to implement this in Python using a FastAPI/Flask style stack. We’ll keep snippets small and focus on the pattern, not a specific framework.
1. Start With a Clear Handler Contract
Define what it means to be a “handler” in your system. A clean starting point:
- Name & config key
- Registration hook (e.g., attach routes)
- Lifecycle hooks (`on_startup`, `on_shutdown`)
- Tests that can run without real network calls
```python
# core/handlers/base.py
from typing import Any, Protocol


class Handler(Protocol):
    name: str

    def register_routes(self, app: Any) -> None:
        ...

    def on_startup(self) -> None:
        ...

    def on_shutdown(self) -> None:
        ...
```
Your actual interface may include DI containers, background schedulers, or event buses, but the idea is the same: define the minimal common shape.
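If you declare the Protocol `@runtime_checkable`, conformance can even be verified at runtime with `isinstance`. A minimal sketch; the `PingHandler` and the dict-based fake app are hypothetical, and the Protocol is repeated so the snippet is self-contained:

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class Handler(Protocol):
    name: str

    def register_routes(self, app: Any) -> None: ...
    def on_startup(self) -> None: ...
    def on_shutdown(self) -> None: ...


class PingHandler:
    """Satisfies the contract structurally; no inheritance required."""

    name = "ping"

    def register_routes(self, app: Any) -> None:
        # A real app would attach routes; a dict stands in here.
        app.setdefault("routes", []).append("/ping")

    def on_startup(self) -> None:
        pass

    def on_shutdown(self) -> None:
        pass


handler = PingHandler()
assert isinstance(handler, Handler)  # structural check via runtime_checkable
```

Because `Protocol` uses structural typing, feature modules never import a base class from core just to subclass it; they only need to match the shape.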
2. Give Each Feature a First-Class Home
Structure your repo so each handler is self-contained:
```text
src/
  core/
    app.py
    config.py
    handlers/
      base.py
      registry.py
  features/
    alerts/
      __init__.py
      handler.py
      models.py
      validators.py
      tests/
        test_contract.py
    preferences/
      handler.py
      models.py
      tests/
    audits/
      handler.py
      tests/
```
Rules of thumb:
- A handler folder should be safe to move or delete without breaking unrelated features.
- Each handler should have its own mini-API surface (routes + business logic), not just random helper functions.

3. Centralize Registration in a Handler Registry
You don’t want app.py importing 20 handlers manually. Instead, create a registry that knows which handlers exist and how to instantiate them.
```python
# core/handlers/registry.py
from typing import List

from core.handlers.base import Handler
from features.alerts.handler import AlertsHandler
from features.audits.handler import AuditHandler
from features.preferences.handler import PreferencesHandler


def get_all_handlers() -> List[Handler]:
    return [
        AlertsHandler(),
        PreferencesHandler(),
        AuditHandler(),
    ]
```
In core/app.py, your startup looks like:
```python
# core/app.py
from fastapi import FastAPI

from core.handlers.registry import get_all_handlers


def create_app() -> FastAPI:
    app = FastAPI()
    handlers = get_all_handlers()

    for handler in handlers:
        handler.register_routes(app)

    @app.on_event("startup")
    async def startup():
        for handler in handlers:
            handler.on_startup()

    @app.on_event("shutdown")
    async def shutdown():
        for handler in handlers:
            handler.on_shutdown()

    return app
```
This keeps the core app dumb and generic. The only thing it “knows” about features is that they implement the `Handler` contract.
You can later evolve `get_all_handlers()` into:
- A config-driven list (feature flags per environment)
- Dynamic discovery (e.g., entry points, reflection, plugin loading)
- Environment-specific handler sets (production vs internal tools)
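As one hedged sketch of the config-driven direction, handlers could register themselves into a factory map and be instantiated from an environment’s enabled list. The names `HANDLER_FACTORIES`, `register`, and `get_handlers_for` are illustrative, not part of the pattern above:

```python
from typing import Callable, Dict, List

# Illustrative registry: maps config names to zero-arg handler factories.
HANDLER_FACTORIES: Dict[str, Callable[[], object]] = {}


def register(name: str):
    """Decorator that records a handler factory under a config name."""
    def decorator(factory):
        HANDLER_FACTORIES[name] = factory
        return factory
    return decorator


@register("alerts")
class AlertsHandler:
    name = "alerts"


@register("audits")
class AuditHandler:
    name = "audits"


def get_handlers_for(enabled: List[str]) -> List[object]:
    """Instantiate only the handlers enabled in this environment's config."""
    return [HANDLER_FACTORIES[name]() for name in enabled]


# e.g. production enables both; an internal tool might enable only audits
prod_handlers = get_handlers_for(["alerts", "audits"])
```

The enabled list can then come straight from settings or feature flags, so turning a handler off in one environment is a config change, not a code change.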
4. Make Handlers Dependency-Injected, Not Global
Handlers will need dependencies: DB sessions, cache clients, message queues, etc. Avoid importing globals directly; instead, inject a container or context.
Example: simple container object:
```python
# core/container.py
class Container:
    def __init__(self, db, cache, settings):
        self.db = db
        self.cache = cache
        self.settings = settings
```
Then in the handler:
```python
# features/alerts/handler.py
class AlertsHandler:
    name = "alerts"

    def __init__(self, container):
        self.db = container.db
        self.cache = container.cache

    def register_routes(self, app):
        @app.get("/alerts")
        def list_alerts(user_id: str):
            # use self.db, self.cache
            ...

    def on_startup(self):
        # preload alert templates, warm caches, etc.
        ...

    def on_shutdown(self):
        # optional cleanup
        ...
```
Now your registry can pass the container:
```python
# core/handlers/registry.py
from typing import List

from core.cache import cache_client
from core.container import Container
from core.db import db_client
from core.handlers.base import Handler
from features.alerts.handler import AlertsHandler
from features.audits.handler import AuditHandler
from features.preferences.handler import PreferencesHandler

container = Container(db=db_client, cache=cache_client, settings={...})


def get_all_handlers() -> List[Handler]:
    return [
        AlertsHandler(container),
        PreferencesHandler(container),
        AuditHandler(container),
    ]
```
This pattern scales nicely for:
- Device-driven systems where handlers manage streams from wearables or smart bowls
- Experimentation systems where handlers manage feature flags and cohorts
- Anything where testability and swap-ability matter
5. Treat Teardown as a First-Class Concern
Most backends treat teardown as an afterthought. In long-lived systems with background tasks, this becomes a source of leaks.
By putting an explicit `on_shutdown` hook in the handler contract, you can:
- Cleanly stop background jobs or polling loops
- Flush last-minute metrics or checkpoints
- Release any non-framework resources (e.g., custom threads, file handles)
For example, a handler that schedules periodic reconciliation:
```python
# features/audits/handler.py
import threading


class AuditHandler:
    name = "audits"

    def __init__(self, container):
        self.db = container.db
        self._thread = None
        self._stop = threading.Event()

    def on_startup(self):
        def worker():
            while not self._stop.is_set():
                self._run_reconciliation()  # defined elsewhere in the handler
                # Event.wait returns early once the stop flag is set, so
                # shutdown isn't stuck waiting out a full sleep interval.
                self._stop.wait(60)

        self._thread = threading.Thread(target=worker, daemon=True)
        self._thread.start()

    def on_shutdown(self):
        self._stop.set()
        if self._thread:
            self._thread.join(timeout=5)
```
Because this behavior is per handler, you don’t end up with a giant “teardown” method in app.py that needs to know about every background job in the service.

6. Make Handlers Testable-by-Design
If each handler is a module with a clear contract, testing becomes composable too.
Patterns that work well:
- Contract tests per handler
  - For example, a `test_contract.py` that asserts certain routes, status codes, and side effects
  - Can run with a fake container in isolation
- Integration tests for combinations of handlers
  - Use the same `get_all_handlers()` registry, but substitute a subset when needed
  - For example, run the app with only `AlertsHandler` and `PreferencesHandler` to validate shared behavior
- Golden tests for responses
  - Store JSON responses in fixtures and ensure handlers don’t break existing contracts
Example contract test (conceptual):
```python
# features/alerts/tests/test_contract.py
from fastapi.testclient import TestClient

from core.app import create_app
from core.handlers.registry import get_all_handlers_for_testing


def test_alerts_list_contract():
    # Assumes create_app has grown an optional handlers override
    # beyond the zero-argument version shown earlier.
    app = create_app(handlers=get_all_handlers_for_testing(["alerts"]))
    client = TestClient(app)

    response = client.get("/alerts?user_id=test-user")

    assert response.status_code == 200
    payload = response.json()
    assert "items" in payload
    # Further assertions on schema, ordering, etc.
```
The key idea: handlers should be testable with a fake container and without real external services.
This is especially useful when you have device-driven features (like ingesting sensor data from a wearable or feeding events from a bowl): you can simulate the event stream in the handler’s tests without booting the entire stack.
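A sketch of that style of test, assuming a hypothetical `ingest_event` method on the alerts handler; the fakes are plain objects, so no framework, network, or database is involved:

```python
class FakeDB:
    """In-memory stand-in for the real DB client."""

    def __init__(self):
        self.rows = []

    def insert(self, row):
        self.rows.append(row)


class FakeContainer:
    """Minimal container exposing the same attributes handlers expect."""

    def __init__(self):
        self.db = FakeDB()
        self.cache = {}


class AlertsHandler:
    name = "alerts"

    def __init__(self, container):
        self.db = container.db

    def ingest_event(self, event: dict) -> None:
        # Business logic under test: low-battery events become alerts.
        if event.get("battery") is not None and event["battery"] < 20:
            self.db.insert({"type": "low_battery", "device": event["device"]})


def test_low_battery_creates_alert():
    handler = AlertsHandler(FakeContainer())
    handler.ingest_event({"device": "collar-1", "battery": 12})
    handler.ingest_event({"device": "collar-2", "battery": 80})
    assert handler.db.rows == [{"type": "low_battery", "device": "collar-1"}]
```

Because the handler only ever touches `container.db`, swapping the fake for the real client is a one-line change in the registry, not in the handler.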
Results: How Modular Handlers Change Day-to-Day Work
Once you apply this architecture, a few things shift:
1. Adding Features Stops Feeling Dangerous
Adding a new capability becomes:
- Create `features/new_feature/handler.py`
- Implement the `Handler` contract (routes + lifecycle)
- Register it in `get_all_handlers()`
- Add contract tests in `features/new_feature/tests/`
You rarely touch:
- Core routing
- Other handlers
- Global teardown logic
This reduces regression risk dramatically and encourages experimentation, especially for cross-cutting capabilities like new alert types, audit sinks, or notification channels.
2. Onboarding Becomes Map-Driven, Not Detective Work
New engineers can understand the architecture by looking at:
- The handler registry: list of features and their names
- The features folder: each handler’s home
Instead of grepping for “alerts” across 40 files, they:
- Open `features/alerts/`
- See its routes, models, validators, and tests in one place
This is a big win for teams with multiple products or device types. If you add a new integration (say, a different wearable device or hardware revision), you can give it its own handler module and keep risk contained.
3. CI Pipelines Become More Targeted
You can structure CI so that:
- Changes in `features/alerts/` trigger:
  - Unit tests for alerts
  - Contract tests for alerts
- Changes in `core/` trigger:
  - A broader smoke suite
- Changes in shared models trigger:
  - All affected handler tests (dependency map)
You can even run handlers’ test suites in parallel, since they’re designed to be isolated.

Context: Why This Matters at Hoomanely
At Hoomanely, we’re building long-lived systems around pet wellness: smart devices in the home, mobile apps, and backend pipelines that translate raw data into actionable insights. That means:
- Multiple device types and firmware generations over time
- New analytics and alerting features as we learn from real-world behavior
- A need to evolve quickly without constantly rewriting the backend core
A modular handler architecture lets us:
- Add new event processors (e.g., for additional sensors or feeding patterns) as separate handlers
- Evolve alerting or personalization logic in tight, testable modules
- Keep the core HTTP/DI layer stable while device and feature capabilities grow
Even if you’re not working on pet tech, the same principles apply to any product backend that needs to grow while staying reliable and testable.
Practical Advice & Common Pitfalls
If you’re adopting this pattern, a few guidelines help:
- Keep the handler contract small but opinionated
  - Don’t turn it into a god interface
  - Include only what every feature truly needs (routes + lifecycle + name)
- Avoid cross-handler imports
  - If two handlers need shared logic, extract it into a shared module (`shared/`)
  - Handlers should communicate via events, queues, or shared services, not direct imports
- Be intentional about config
  - Give each handler its own config namespace
  - Keep secrets and credentials in global config, but feature toggles per handler
- Enforce handler boundaries in code review
  - Reject PRs that put feature logic directly in `core/app.py`
  - Ask, “should this be its own handler?” whenever a new feature shows up
- Document the handler catalog
  - Maintain a simple table: handler name, purpose, owned routes, dependencies
  - This becomes your map of the backend for onboarding and incident response
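The catalog can even be generated from the registry itself, so the table never drifts from the code. This sketch assumes handlers carry hypothetical `purpose` and `routes` attributes alongside `name`:

```python
from typing import List


class AlertsHandler:
    name = "alerts"
    purpose = "Low-battery and activity alerts"
    routes = ["/alerts"]


class AuditHandler:
    name = "audits"
    purpose = "Periodic reconciliation and audit log"
    routes = ["/audits"]


def render_catalog(handlers: List[object]) -> str:
    """Render the handler catalog as a markdown table."""
    lines = ["| Handler | Purpose | Routes |", "| --- | --- | --- |"]
    for h in handlers:
        lines.append(f"| {h.name} | {h.purpose} | {', '.join(h.routes)} |")
    return "\n".join(lines)


print(render_catalog([AlertsHandler(), AuditHandler()]))
```

Running this in CI and committing the output keeps the onboarding map current with zero manual upkeep.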
Takeaways
- Think in feature handlers, not sprawling modules. Each handler owns its routes, lifecycle, and tests.
- Make the core boring. It should host handlers, configure infra, and enforce global policies; nothing more.
- Let handlers define their dependencies explicitly. Use a container or DI so they’re easy to test and refactor.
- Treat lifecycle hooks as design, not cleanup hacks. Startup and shutdown are first-class parts of the handler contract.
- Align architecture with how your team works. Handlers make it easy to assign ownership, slice CI, and onboard new engineers.
Composable backends with modular handlers won’t solve every problem, but they give you a sane way to grow: new capabilities arrive as plug-in modules, not invasive rewrites. For long-lived Python services — especially those supporting devices, apps, and evolving products — that difference is what keeps your backend fast to change and safe to maintain, years down the line.