
AI coding agents can write features, open PRs, and pass test suites. But without the right infrastructure around them, they'll also cross architectural boundaries no one wrote down, consult documentation that's out of date, and ship UIs that don't look quite right - all while reporting green builds. The problem isn't the agents. It's that most codebases rely on tribal knowledge and human judgment for things that need to be explicit and machine-checkable.
This session walks through the practical infrastructure we built for a multi-workspace monorepo. We'll cover four layers of formalization, each with concrete examples:
- Executable architecture checks. How we used dependency-cruiser and custom structural analysis scripts to enforce cross-workspace import boundaries and ownership rules in CI (see the rule sketch after this list).
- Documentation as a hierarchy. Why a monolithic instructions file doesn't work for agents, and how restructuring into a scoped hierarchy with clear delegation made agents load only the context relevant to their current task (layout sketched below).
- Explicit cross-workspace contract ownership. How assigning owners and consumers to shared contracts made cross-workspace ripple effects visible at PR time instead of as bugs in production (manifest sketched below).
- Visual verification for agent workflows. How we added Playwright screenshots and screen recordings to agent-generated PRs so that there's visual evidence of what the agent built - not just passing tests (test sketched below). This is what moves agent output from "probably fine" to "I can see that it works."
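To make the first layer concrete, here is a minimal sketch of the kind of dependency-cruiser rule the session walks through, assuming an apps/ + packages/ workspace layout (the layout and rule name are illustrative). It uses dependency-cruiser's group-matching feature so a single rule forbids every app-to-app import:

```ts
// .dependency-cruiser.js -- dependency-cruiser reads a plain JS config;
// the apps/ + packages/ layout below is an assumed example structure
module.exports = {
  forbidden: [
    {
      name: 'no-cross-app-imports',
      comment: 'apps may depend on shared packages, never on each other',
      severity: 'error',
      // $1 captures the importing app, so one rule covers all apps
      from: { path: '^apps/([^/]+)/' },
      to: { path: '^apps/', pathNot: '^apps/$1/' },
    },
  ],
  options: {
    doNotFollow: { path: 'node_modules' },
  },
};
```

Running `npx depcruise --config .dependency-cruiser.js apps packages` in CI then fails the build on any boundary violation.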
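For the documentation layer, the shape matters more than the tooling. A hypothetical layout (file names illustrative; conventions such as AGENTS.md or CLAUDE.md work the same way) might look like:

```
AGENTS.md                      # repo-wide rules only; delegates everything else
apps/web/AGENTS.md             # UI conventions: routing, styling, components
apps/api/AGENTS.md             # service conventions: errors, logging, auth
packages/contracts/AGENTS.md   # how shared contracts are owned and changed
```

An agent working in apps/web loads the root file plus the web-scoped one, and nothing else.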
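The third layer can start as a machine-readable manifest. A minimal sketch, assuming a hypothetical contracts/ownership.ts module (all paths and workspace names are illustrative) that a CI script diffs each PR against:

```ts
// contracts/ownership.ts -- hypothetical manifest; names are illustrative.
// A CI script can compare a PR's changed files against this map, require
// sign-off from the owner, and flag every consumer workspace.
export interface ContractOwnership {
  owner: string;        // workspace allowed to change the contract
  consumers: string[];  // workspaces that must be notified and re-tested
}

export const contracts: Record<string, ContractOwnership> = {
  'packages/contracts/billing-events.ts': {
    owner: 'apps/api',
    consumers: ['apps/web', 'apps/admin'],
  },
  'packages/contracts/user-profile.ts': {
    owner: 'apps/api',
    consumers: ['apps/web'],
  },
};
```

When a PR touches an owned contract, the check surfaces the ripple effect at review time rather than after merge.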
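And for the visual-verification layer, the core mechanism is an ordinary Playwright test that leaves artifacts behind. A minimal sketch, assuming a hypothetical /dashboard route; video capture is switched on in the Playwright config (e.g. `use: { video: 'on' }`):

```ts
// e2e/agent-evidence.spec.ts -- minimal sketch; route and heading are illustrative
import { test, expect } from '@playwright/test';

test('capture visual evidence for the PR', async ({ page }) => {
  await page.goto('/dashboard');  // feature under review (assumed route)
  await expect(
    page.getByRole('heading', { name: 'Dashboard' }),
  ).toBeVisible();
  // Full-page screenshot saved as a CI artifact for the PR
  await page.screenshot({ path: 'artifacts/dashboard.png', fullPage: true });
});
```

CI then attaches the artifacts directory (screenshots plus recorded videos) to the agent-generated PR.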
Throughout the session, we'll connect each layer back to a core insight: the same properties that make a codebase easier for humans to maintain make it dramatically easier for AI agents to contribute correctly. If you're already writing code with AI assistance, this talk will give you a concrete playbook for making that collaboration more reliable.
Audience: Intermediate
Session Category: Emerging Technologies
Speaker(s)