Orchestrating Four Agents to Build a Secure Internal Site in Under Two Hours
From Google OAuth configuration to production deployment: How we parallelized traditionally sequential work and achieved 4x faster delivery through systematic agent coordination.
Reading time: 14 min Tags: Multi-Agent Systems, Infrastructure Automation, DevOps, Authentication, Case Study
When your team grows from two founders to include interns, contractors, and advisors, scattered Google Docs and Slack threads stop scaling. We needed an internal site—employee directory, documentation, company updates—locked down to @briefcasebrain.com emails only.
The traditional approach would take a week. Configure OAuth credentials, wait for Google verification, scaffold the Next.js project, wire up authentication, build the UI, provision AWS infrastructure, configure DNS, obtain SSL certificates. Each step blocking the next in a waterfall of dependencies.
We finished in under two hours using four parallel agents. This is how we did it, what went wrong, and what we learned about orchestrating autonomous systems for infrastructure work.
The Sequential Trap
Here's what most teams do when building an authenticated internal site:
- Developer creates Google Cloud project and OAuth credentials (30 minutes, then 24-48 hour verification wait)
- Developer scaffolds Next.js project locally (20 minutes)
- Developer integrates NextAuth.js, discovers OAuth credentials aren't ready (blocked)
- Developer provisions AWS infrastructure (45 minutes)
- Developer deploys, discovers domain isn't configured (blocked)
- Developer configures Route53, waits for DNS propagation (15 minutes setup, 24-48 hours propagation)
- Developer requests SSL certificate, waits for validation (10 minutes setup, hours to days for issuance)
Total wall-clock time: 3-5 days, depending on external service response times.
The actual development work is maybe 6 hours. The rest is waiting. Waiting for OAuth verification, DNS propagation, SSL certificate issuance. Each dependency creates a blocking point that forces context switches and kills momentum.
Identifying the Critical Path
Before assigning agents, we mapped every task's dependencies and estimated completion times. The goal wasn't just parallelization—it was identifying which long-poll operations would gate the entire project.
Three operations dominated our timeline:
Google OAuth App Creation — Google's OAuth consent screen requires verification for apps accessing sensitive scopes. For an internal app restricted to a single Google Workspace domain, no external review is needed, but the configuration itself takes 15-20 minutes and must complete before any authentication code can be tested against real credentials.
DNS Configuration — Route53 publishes changes to its own name servers within about a minute, but resolver caching means a new record can take up to 48 hours to become visible everywhere; in practice it's usually 10-30 minutes. And with DNS validation, ACM won't issue an SSL certificate until its validation record resolves.
SSL Certificate Provisioning — AWS Certificate Manager issues certificates quickly for domains you control, but validation requires either DNS records or email confirmation. This creates a dependency chain: infrastructure → DNS → SSL → deployment.
With these bottlenecks identified, we structured our agent deployment to start long-running operations immediately while parallel tracks handled everything else.
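The map itself was small enough to write down by hand, but it helps to see it as data. A minimal sketch, with task names invented for illustration and durations taken from the rough estimates above:
// Task dependency map used to spot the long-poll operations (illustrative names, rough estimates in minutes)
type Task = { name: string; estimateMin: number; dependsOn: string[] }

const tasks: Task[] = [
  { name: 'oauth-config', estimateMin: 20, dependsOn: [] },
  { name: 'scaffold', estimateMin: 20, dependsOn: [] },
  { name: 'auth-code', estimateMin: 30, dependsOn: ['scaffold'] }, // testable with mock credentials before oauth-config lands
  { name: 'infra', estimateMin: 25, dependsOn: [] },               // CloudFront deployment dominates this estimate
  { name: 'dns', estimateMin: 15, dependsOn: ['infra'] },
  { name: 'ssl', estimateMin: 15, dependsOn: ['dns'] },
  { name: 'deploy', estimateMin: 10, dependsOn: ['auth-code', 'ssl', 'oauth-config'] },
]

// Longest dependency chain ending at a task = the wall-clock floor for that task
function criticalPath(name: string, byName = new Map(tasks.map((t) => [t.name, t]))): number {
  const task = byName.get(name)!
  return task.estimateMin + Math.max(0, ...task.dependsOn.map((dep) => criticalPath(dep, byName)))
}

console.log(criticalPath('deploy')) // the infra → dns → ssl chain gates everything, so it starts first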
Agent Architecture
We deployed four agents with distinct responsibilities and clear interfaces between them:
Agent 1: Infrastructure & Cloud Services
Responsible for: Google Cloud OAuth setup, AWS infrastructure provisioning (S3, CloudFront, Route53), SSL certificate requests, environment configuration.
Agent 2: Project Scaffolding & Dependencies
Responsible for: Next.js 15.1.6 initialization, TypeScript configuration, dependency installation (NextAuth.js v5, Tailwind, Radix UI), build pipeline setup.
Agent 3: Design System & Components
Responsible for: UI component extraction from existing projects, Tailwind configuration with existing design tokens, layout components, navigation structure.
Agent 4: Authentication & Security
Responsible for: NextAuth.js integration, Google OAuth provider configuration, domain restriction middleware, session management, route protection.
The key insight was that Agents 2, 3, and 4 could work in parallel on code that would eventually need Agent 1's outputs (OAuth credentials, infrastructure endpoints), but could be developed and tested with mock values.
Execution Timeline
T+0:00 — Parallel Initialization
All four agents started simultaneously:
Agent 1 began creating the Google Cloud project and OAuth credentials. We knew this would be our first potential blocker, so it got priority attention.
# Agent 1's first actions
gcloud projects create briefcase-internal-prod
gcloud services enable oauth2.googleapis.com
Agent 2 initialized the Next.js project structure:
# Agent 2's initialization
npx create-next-app@15.1.6 briefcase-ai-internal \
--typescript \
--tailwind \
--eslint \
--app \
--src-dir
Agent 3 started extracting design tokens and components from our existing landing page repository. This required read access to briefcase-ai-landing but no coordination with other agents.
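Concretely, the extraction amounted to reusing one theme object in the new site's Tailwind config. A sketch of the idea, with the token module path and its exports invented for illustration:
// tailwind.config.ts (sketch; the ./src/design/tokens module and its exports are illustrative)
import type { Config } from 'tailwindcss'
import { colors, fontFamily } from './src/design/tokens' // tokens copied over from briefcase-ai-landing

const config: Config = {
  content: ['./src/**/*.{ts,tsx}'],
  theme: {
    extend: {
      colors,     // same brand palette as the landing page
      fontFamily, // same type stack as the landing page
    },
  },
}

export default config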
Agent 4 began writing authentication logic using placeholder OAuth credentials:
// Agent 4's initial auth config (placeholder values)
import GoogleProvider from 'next-auth/providers/google'

export const authConfig = {
  providers: [
    GoogleProvider({
      clientId: process.env.GOOGLE_CLIENT_ID ?? 'placeholder',
      clientSecret: process.env.GOOGLE_CLIENT_SECRET ?? 'placeholder',
    }),
  ],
}
T+0:15 — First Synchronization Point
Agent 1 completed OAuth credential creation and published the client ID and secret to our secrets manager. This triggered Agent 4 to swap placeholder values for real credentials.
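The handoff itself was just a read from the shared store. A sketch of the consuming side, assuming AWS Secrets Manager and an illustrative secret name:
// Agent 4's credential swap (sketch; the secrets manager choice, secret name, and region are assumptions)
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager'

const client = new SecretsManagerClient({ region: 'us-east-1' })

export async function loadGoogleOAuthCredentials(): Promise<{ clientId: string; clientSecret: string }> {
  const result = await client.send(
    new GetSecretValueCommand({ SecretId: 'briefcase-internal/google-oauth' }),
  )
  const { clientId, clientSecret } = JSON.parse(result.SecretString ?? '{}')
  return { clientId, clientSecret }
}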
Agent 2 had finished scaffolding and started installing dependencies. A version conflict emerged: NextAuth.js v5 requires specific React versions that conflicted with what create-next-app installed.
# Conflict discovered by Agent 2
npm warn peer react@"^18.2.0" from next-auth@5.0.0
npm warn peer react@"19.0.0" installed
Agent 2 resolved this by pinning React to 18.2.0 and documented the decision for future maintenance.
T+0:25 — Infrastructure Bottleneck
Agent 1 hit our first real blocker: AWS CloudFront distribution creation takes 15-25 minutes to deploy globally. Rather than waiting, Agent 1 proceeded to configure Route53 with the CloudFront domain name that AWS provides immediately (even before the distribution is active).
# Agent 1 created Route53 records pointing to CloudFront
aws route53 change-resource-record-sets \
--hosted-zone-id Z0123456789 \
--change-batch '{
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "internal.briefcasebrain.com",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "Z2FDTNDATAQYW2",
"DNSName": "d1234567890.cloudfront.net",
"EvaluateTargetHealth": false
}
}
}]
}'
This let DNS propagation happen in parallel with CloudFront deployment.
T+0:35 — Authentication Integration
Agent 4 implemented the domain restriction middleware—the core security requirement:
// middleware.ts
import { NextResponse } from 'next/server'
import { auth } from '@/auth'
export default auth((req) => {
const session = req.auth
// Allow access to auth endpoints
if (req.nextUrl.pathname.startsWith('/api/auth')) {
return NextResponse.next()
}
// Require authentication for all other routes
if (!session) {
return NextResponse.redirect(new URL('/api/auth/signin', req.url))
}
// Validate email domain
const email = session.user?.email
if (!email?.endsWith('@briefcasebrain.com')) {
return NextResponse.redirect(new URL('/unauthorized', req.url))
}
return NextResponse.next()
})
export const config = {
matcher: ['/((?!_next/static|_next/image|favicon.ico).*)'],
}
The critical detail here is the order of checks. We first allow auth endpoints through (otherwise users can't sign in), then require authentication, then validate the domain. Getting this sequence wrong creates either security holes or infinite redirect loops.
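The domain rule itself is small enough to pull into a pure helper, which makes it testable without running the OAuth flow at all. A sketch, with the helper name invented for illustration:
// lib/allowed-domain.ts (illustrative; the middleware above inlines this check)
export function isAllowedEmail(email: string | null | undefined, domain = 'briefcasebrain.com'): boolean {
  // The leading "@" matters: it rejects lookalike domains such as "evil-briefcasebrain.com"
  return typeof email === 'string' && email.toLowerCase().endsWith(`@${domain}`)
}

// Quick checks that run without a test framework
console.assert(isAllowedEmail('ada@briefcasebrain.com') === true)
console.assert(isAllowedEmail('ada@evil-briefcasebrain.com') === false)
console.assert(isAllowedEmail(undefined) === false)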
T+0:50 — Design System Merge
Agent 3 completed component extraction and created a pull request against Agent 2's scaffold. This was our first code merge between agents.
The merge revealed a namespace collision: both the existing landing page and the new internal site used a component called Header. Agent 3 renamed the internal version to InternalHeader and updated all imports.
// Agent 3's resolution
// Before: import { Header } from '@/components/Header'
// After: import { InternalHeader } from '@/components/internal/Header'
T+1:05 — Integration Testing
With core components merged, we ran integration tests against a local development server. Agent 4 discovered that the OAuth callback URL configured in Google Cloud didn't match the local development URL.
Error: redirect_uri_mismatch
The redirect URI in the request does not match the ones authorized.
Agent 1 added http://localhost:3000/api/auth/callback/google to the authorized redirect URIs. This required navigating Google Cloud Console's OAuth configuration—a reminder that some operations still need human-accessible interfaces even in an automated workflow.
T+1:20 — Production Deployment Preparation
Agent 1 confirmed CloudFront distribution was active. SSL certificate validation was still pending—AWS Certificate Manager was waiting for DNS propagation to complete before issuing the certificate.
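Agent 1 kept an eye on the certificate in the background. A sketch of what that poll can look like with the AWS SDK v3 ACM client (the function name and polling interval are illustrative):
// Poll ACM until the certificate is issued (sketch; CloudFront requires the certificate in us-east-1)
import { ACMClient, DescribeCertificateCommand } from '@aws-sdk/client-acm'

const acm = new ACMClient({ region: 'us-east-1' })

export async function waitForCertificate(certificateArn: string, intervalMs = 30_000): Promise<void> {
  for (;;) {
    const { Certificate } = await acm.send(new DescribeCertificateCommand({ CertificateArn: certificateArn }))
    if (Certificate?.Status === 'ISSUED') return
    if (Certificate?.Status === 'FAILED' || Certificate?.Status === 'VALIDATION_TIMED_OUT') {
      throw new Error(`Certificate validation ended with status ${Certificate.Status}`)
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
}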
Rather than block on SSL, we deployed to staging first:
# Agent 2's staging deployment
npm run build
aws s3 sync out/ s3://briefcase-internal-staging/
aws cloudfront create-invalidation \
--distribution-id E1234567890 \
--paths "/*"
T+1:35 — SSL Resolution
DNS propagation completed and ACM issued our SSL certificate. Agent 1 attached the certificate to the CloudFront distribution and updated the origin configuration.
The final production deployment required coordinating environment variables across all services:
# Production environment variables
NEXTAUTH_URL=https://internal.briefcasebrain.com
NEXTAUTH_SECRET=<generated-secret>
GOOGLE_CLIENT_ID=<from-agent-1>
GOOGLE_CLIENT_SECRET=<from-agent-1>
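A small pre-deploy check catches a missing or malformed variable earlier than a failed login would. A minimal sketch (the script name is illustrative):
// scripts/check-env.ts (illustrative; run before the production deploy)
const required = ['NEXTAUTH_URL', 'NEXTAUTH_SECRET', 'GOOGLE_CLIENT_ID', 'GOOGLE_CLIENT_SECRET']
const missing = required.filter((name) => !process.env[name])
if (missing.length > 0) {
  throw new Error(`Missing environment variables: ${missing.join(', ')}`)
}
// The cookie issue found later in QC started exactly here: NEXTAUTH_URL must be an https URL in production
if (!process.env.NEXTAUTH_URL?.startsWith('https://')) {
  throw new Error('NEXTAUTH_URL must use https:// in production')
}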
T+1:50 — Quality Control
Our final phase was comprehensive validation. Rather than relying on any single agent's assessment, we ran a systematic checklist:
Security Validation
- Attempted login with non-briefcasebrain.com email → Correctly rejected
- Attempted direct URL access without auth → Correctly redirected to login
- Verified session expiration after configured timeout → Working
- Checked for secure headers (HSTS, CSP, X-Frame-Options) → Present
Functionality Testing
- Navigation between all internal pages → Working
- User profile display and logout → Working
- Responsive design on mobile viewports → Working
- Content rendering from markdown sources → Working
Infrastructure Verification
- SSL certificate valid and properly chained → Verified
- CloudFront serving from correct S3 origin → Verified
- DNS resolving to correct CloudFront distribution → Verified
- All environment variables correctly set → Verified
One issue surfaced during QC: the session cookie wasn't being set with the Secure flag in production because NEXTAUTH_URL was initially set without https://. This would have allowed session hijacking over unencrypted connections. Agent 4 fixed the configuration and redeployed.
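The corrected NEXTAUTH_URL was the real fix, since NextAuth derives cookie security from the deployment URL, but the flag can also be pinned explicitly so a misconfigured URL can't silently downgrade it. A sketch; verify the option against your installed NextAuth version:
// Force secure session cookies in production regardless of how NEXTAUTH_URL is set (sketch)
import GoogleProvider from 'next-auth/providers/google'

export const hardenedAuthConfig = {
  providers: [
    GoogleProvider({
      clientId: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
    }),
  ],
  useSecureCookies: process.env.NODE_ENV === 'production',
}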
What We Learned
Long-Poll Identification Is Everything
The difference between a 2-hour project and a 5-day project wasn't the amount of work—it was understanding which operations would block everything else. Starting CloudFront provisioning before writing a single line of application code saved us 20+ minutes of wall-clock time.
Mock Values Enable Parallelization
Agent 4 couldn't test real OAuth flows until Agent 1 finished, but could write and unit test all authentication logic using placeholder credentials. When real values became available, swapping them in took seconds.
Clear Interface Boundaries Prevent Conflicts
Each agent had explicit responsibility boundaries. Agent 2 owned the package.json and build configuration. Agent 3 owned component files but not their integration points. Agent 4 owned auth configuration but not infrastructure. When Agent 3's components needed to merge with Agent 2's scaffold, the interface was clear: Agent 3 proposed changes via pull request, Agent 2 reviewed and merged.
Quality Control Can't Be Distributed
Individual agents optimized for their specific domains. Agent 1 knew infrastructure was configured correctly. Agent 4 knew authentication logic was sound. But nobody except a dedicated QC pass could verify that everything worked together. The session cookie security issue was invisible to any single agent—it only appeared at the integration level.
The Numbers
| Metric | Traditional Approach | Agent-Orchestrated |
|--------|---------------------|--------------------|
| Wall-clock time | 3-5 days | 1 hour 50 minutes |
| Developer hours | 6-8 hours | ~2 hours (parallel) |
| Blocking wait time | 48+ hours | 25 minutes |
| Integration issues found | Post-deployment | Pre-deployment |
The 4x improvement in developer hours understates the real gain. In a traditional workflow, those 6-8 hours are spread across 3-5 days, with constant context switches as developers wait for external services. The concentrated 2-hour execution preserved focus and momentum.
When This Approach Works
Agent orchestration excels when:
External service latency dominates — OAuth verification, DNS propagation, SSL issuance, CI/CD pipelines. Any project gated by external systems benefits from starting those operations early and working in parallel.
Clear module boundaries exist — Authentication can be developed independently from UI components. Infrastructure can be provisioned before application code is written. When responsibilities are separable, parallelization is safe.
Quality gates are well-defined — We knew exactly what "done" meant: domain-restricted OAuth working in production. Fuzzy success criteria make it impossible to verify completion.
The approach struggles when:
Work is inherently sequential — Database migrations that depend on schema design that depends on data modeling. Some work genuinely can't be parallelized.
Integration surfaces are large — If every component touches every other component, agents will spend more time resolving conflicts than working independently.
Requirements are evolving — Parallel execution assumes stable targets. If requirements change mid-execution, agents may produce incompatible outputs.
Infrastructure as Testable Output
We instrumented every agent with observability hooks—tracking decisions made, resources created, and validations performed. When the QC phase found the session cookie issue, we could trace exactly which agent set the configuration and why.
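The hooks don't need to be elaborate to be useful. A minimal sketch of the shape we mean, with types and names invented for illustration rather than taken from any SDK:
// A per-agent decision log (illustrative types and names)
interface AgentDecision {
  agent: string      // e.g. "agent-4-auth"
  action: string     // what was done
  resource?: string  // resource created or modified
  reason: string     // why the agent chose this action
  timestamp: string
}

const decisions: AgentDecision[] = []

function recordDecision(decision: Omit<AgentDecision, 'timestamp'>): void {
  decisions.push({ ...decision, timestamp: new Date().toISOString() })
}

// The entry that made the QC finding traceable would look something like this
recordDecision({
  agent: 'agent-4-auth',
  action: 'set NEXTAUTH_URL for production',
  resource: 'production environment configuration',
  reason: 'initial production value, set without the https:// scheme',
})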
This is the same principle behind Briefcase AI's core product: capturing the decision-making process, not just the outputs. When an agent configures infrastructure incorrectly, understanding why matters more than knowing that it happened.
The internal site we built now serves as our documentation hub, employee directory, and company knowledge base. It took less time to build than it would take to explain to a new hire how to navigate our scattered Google Docs.
More importantly, it demonstrated that infrastructure work—traditionally seen as sequential and blocking—can be parallelized when you understand the dependency graph and design clear interfaces between executing agents.
Briefcase AI provides observability infrastructure for AI agents in high-stakes environments. If you're building systems where AI decisions need to be traced, reproduced, and audited, get in touch.