2025-11-01 · [agents] · 12 min read

Building reliable agents: governed gateways

From direct calls to a governed agent platform: the gateway layer that makes demos shippable.

The gap between an agent demo and a production agent is mostly infrastructure. Demos assume the model is always available, the context is always right, and the agent's actions are always safe. None of that holds at scale.

This post covers eight patterns that establish governance layers around model and tool interactions — the stuff that makes agents actually shippable.

Figure: Agents calling models and tools directly — the starting point

Calling models

Pattern 1: AI gateway fronting all model calls

The problem: Teams call language models directly using hardcoded API keys, creating visibility gaps around costs and access control.

The approach: Route all LLM calls through a centralized gateway handling authentication, rate limiting, provider selection, and logging. This single choke point enables organizational-level controls and simplifies key rotation across deployments.
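A minimal sketch of this choke point, assuming a hypothetical `AIGateway` class with in-memory keys, rate counters, and logs (a real deployment would back these with a secrets store and a log pipeline):

```python
# Hypothetical sketch of a minimal AI gateway: one choke point that
# authenticates callers, rate-limits them, and logs every model call.
import time
from collections import defaultdict

class AIGateway:
    def __init__(self, providers, rate_limit_per_min=60):
        self.providers = providers          # provider name -> callable(prompt) -> str
        self.api_keys = {}                  # API key -> team name
        self.rate_limit = rate_limit_per_min
        self.calls = defaultdict(list)      # team -> recent call timestamps
        self.audit_log = []

    def register_team(self, api_key, team):
        self.api_keys[api_key] = team

    def complete(self, api_key, provider, prompt):
        # Authentication: only registered keys get through.
        team = self.api_keys.get(api_key)
        if team is None:
            raise PermissionError("unknown API key")
        # Rate limiting: sliding one-minute window per team.
        now = time.time()
        recent = [t for t in self.calls[team] if now - t < 60]
        if len(recent) >= self.rate_limit:
            raise RuntimeError("rate limit exceeded")
        self.calls[team] = recent + [now]
        # Logging: every call is attributable to a team and provider.
        response = self.providers[provider](prompt)
        self.audit_log.append({"team": team, "provider": provider, "prompt": prompt})
        return response
```

Because every call carries a team identity, cost attribution and key rotation become gateway concerns rather than per-agent ones.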

Pattern 2: Multi-provider fallback & routing

The problem: Dependence on a single model provider is a single point of failure: a provider outage or rate limit takes every dependent agent down with it.

The approach: Define model equivalence classes with ranked provider lists. The gateway automatically retries using alternative providers within latency and token budgets when primary options fail. Routing decisions can factor in latency, cost, regional requirements, or current availability metrics.
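The retry logic can be sketched as a ranked loop over an equivalence class, here with a hypothetical `call_with_fallback` helper and a wall-clock budget:

```python
import time

def call_with_fallback(providers, prompt, budget_s=2.0):
    """Try each provider in rank order until one succeeds within the budget.

    `providers` is a ranked list of (name, callable) pairs — an equivalence
    class of interchangeable models, best option first.
    """
    deadline = time.monotonic() + budget_s
    errors = []
    for name, call in providers:
        if time.monotonic() >= deadline:
            break  # out of budget; stop escalating
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))  # record and fall through to next rank
    raise RuntimeError(f"all providers failed within budget: {errors}")
```

Routing on cost, region, or live availability amounts to reordering (or filtering) the ranked list before this loop runs.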

Pattern 3: Policy engine at the gateway

The problem: Agents may inadvertently send sensitive data to external models or generate harmful content.

The approach: Implement mandatory pre- and post-processing at the gateway level. Request-side filtering strips PII and blocks disallowed tools; response-side processing detects and redacts sensitive information or blocks harmful outputs. This centralizes safety enforcement without requiring individual agent implementation.

Figure: AI Gateway with input and output guardrails
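A toy version of the two hooks, assuming regex-based email redaction on the request side and a phrase blocklist on the response side (production systems would use proper PII classifiers and moderation models):

```python
import re

# Request-side filter: strip PII before the prompt leaves the boundary.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def preprocess(prompt):
    return EMAIL.sub("[REDACTED_EMAIL]", prompt)

# Response-side filter: block outputs containing disallowed material.
def postprocess(response, blocked=("BEGIN PRIVATE KEY",)):
    for phrase in blocked:
        if phrase in response:
            raise ValueError("response blocked by policy")
    return response
```

The point is the placement, not the sophistication: because both hooks run at the gateway, no individual agent can opt out of them.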

Pattern 4: Dedicated inference service with batching

The problem: Per-request model calls waste GPU resources and become cost-prohibitive at scale.

The approach: Deploy self-hosted models behind an inference service that uses continuous batching and KV-cache reuse (e.g., vLLM, Ray Serve). This maximizes GPU throughput while keeping latency within user-experience targets.

Figure: AI Gateway routing to external providers and dedicated inference service
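Continuous batching itself lives inside engines like vLLM, but the intuition can be shown with a toy `MicroBatcher` (a made-up name) that drains queued prompts in fixed-size batches, amortizing per-call overhead:

```python
class MicroBatcher:
    """Toy sketch of request batching: collect prompts, then run them through
    the model in batches of at most `max_batch`, so per-call overhead is
    amortized the way GPU batching amortizes kernel launches."""

    def __init__(self, batch_fn, max_batch=8):
        self.batch_fn = batch_fn    # list[str] -> list[str], one model pass per batch
        self.max_batch = max_batch
        self.pending = []

    def submit(self, prompt):
        self.pending.append(prompt)

    def flush(self):
        results = []
        while self.pending:
            # Drain the queue in max_batch-sized slices.
            batch, self.pending = self.pending[:self.max_batch], self.pending[self.max_batch:]
            results.extend(self.batch_fn(batch))
        return results
```

Real continuous batching goes further: it admits new requests into a batch mid-generation instead of waiting for a flush, which is what makes engines like vLLM efficient under streaming load.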

Calling tools

Pattern 5: Centralized tool calling via tool gateway

The problem: Direct agent-to-API connections mean a compromised prompt can trigger unauthorized operations, with no central point to stop or record them.

The approach: Route all tool invocations through a gateway that validates permissions, enforces rate limits, and maintains audit trails. The gateway handles authentication, tool registration verification, and optional protocol transcoding to backend systems.

Figure: Tools Gateway with per-tool authorization
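A sketch of the enforcement path, with a hypothetical `ToolGateway` holding an in-memory permission table and audit log:

```python
class ToolGateway:
    def __init__(self):
        self.tools = {}        # tool name -> callable
        self.permissions = {}  # agent_id -> set of permitted tool names
        self.audit = []        # (agent_id, tool_name, kwargs) tuples

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def grant(self, agent_id, tool_name):
        self.permissions.setdefault(agent_id, set()).add(tool_name)

    def invoke(self, agent_id, tool_name, **kwargs):
        # Permission check happens before the tool runs, not after.
        if tool_name not in self.permissions.get(agent_id, set()):
            raise PermissionError(f"{agent_id} may not call {tool_name}")
        result = self.tools[tool_name](**kwargs)
        # Every successful invocation leaves an audit trail entry.
        self.audit.append((agent_id, tool_name, kwargs))
        return result
```

Rate limits and protocol transcoding slot into the same `invoke` path; the essential property is that no tool call bypasses it.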

Pattern 6: Tool registry with versioned capabilities

The problem: API schema changes break agent implementations when tools lack versioning.

The approach: Agents discover available tools dynamically from a versioned registry rather than relying on hardcoded tool references. Each tool entry includes version information, JSON schemas, and access policies, enabling safe evolution of backend APIs.

Figure: Tools Gateway with Tools Registry for validation and versioning
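A registry entry can be as simple as a (name, version) key mapping to a schema and policy; this hypothetical `ToolRegistry` resolves the latest version at discovery time:

```python
class ToolRegistry:
    def __init__(self):
        self.entries = {}  # (name, version_tuple) -> {"schema": ..., "policy": ...}

    def register(self, name, version, schema, policy="default"):
        # Versions are tuples like (1, 2) so they compare numerically.
        self.entries[(name, version)] = {"schema": schema, "policy": policy}

    def latest(self, name):
        # Agents discover tools here instead of hardcoding references,
        # so backends can ship new versions without breaking callers.
        versions = [v for (n, v) in self.entries if n == name]
        if not versions:
            raise KeyError(name)
        v = max(versions)
        return v, self.entries[(name, v)]
```

Pinning an agent to an explicit version instead of `latest` is the same lookup with a fixed key, which is what makes deprecation windows enforceable.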

Pattern 7: Protocol transcoding for flexibility

The problem: Heterogeneous backend systems (REST, gRPC, GraphQL, proprietary RPC) force agents to understand multiple protocols.

The approach: Standardize a front-facing protocol (typically Model Context Protocol) for agent interactions while allowing the gateway to translate requests to whatever backend protocol each service requires. The registry maintains mapping information linking tool names to backend descriptors and policies.

Figure: Protocol transcoding — MCP to gRPC and other backends
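The mapping layer reduces to a lookup from tool name to backend descriptor. This sketch uses made-up descriptors to show REST and gRPC targets behind one front-facing call shape:

```python
# Hypothetical registry mappings: front-facing tool name -> backend descriptor.
BACKENDS = {
    "get_weather": {"protocol": "rest", "method": "GET", "url": "/v1/weather"},
    "create_order": {"protocol": "grpc", "service": "orders.OrderService", "rpc": "Create"},
}

def transcode(tool_name, args):
    """Translate one front-facing tool call into a backend-specific request."""
    desc = BACKENDS[tool_name]
    if desc["protocol"] == "rest":
        return {"method": desc["method"], "url": desc["url"], "params": args}
    if desc["protocol"] == "grpc":
        return {"service": desc["service"], "rpc": desc["rpc"], "request": args}
    raise ValueError(f"unsupported backend protocol: {desc['protocol']}")
```

The agent only ever sees tool names and JSON arguments; whether the backend speaks REST, gRPC, or something proprietary is the gateway's problem.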

Pattern 8: Capability-based / least-privilege access

The problem: By default, agents can invoke any tool the gateway exposes, not just the ones their task requires; a compromised prompt inherits that full reach.

The approach: Issue short-lived, scoped capabilities for each agent run specifying exactly which tools that execution can invoke. The tool gateway enforces these boundaries, preventing prompt injection attacks from expanding agent permissions.
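One way to implement scoped capabilities is an HMAC-signed token carrying the run id, allowed tools, and an expiry; `SECRET`, `mint_capability`, and `check_capability` below are illustrative names, not a real library:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # assumption: a signing key held only by the tool gateway

def mint_capability(agent_run_id, tools, ttl_s=600):
    """Issue a short-lived capability scoped to an explicit tool list."""
    payload = {"run": agent_run_id, "tools": sorted(tools), "exp": time.time() + ttl_s}
    raw = json.dumps(payload).encode()
    sig = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
    return base64.b64encode(raw).decode(), sig

def check_capability(token, sig, tool_name):
    """Gateway-side check: valid signature, unexpired, tool in scope."""
    raw = base64.b64decode(token)
    expected = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise PermissionError("bad signature")
    payload = json.loads(raw)
    if time.time() > payload["exp"]:
        raise PermissionError("capability expired")
    if tool_name not in payload["tools"]:
        raise PermissionError("tool outside capability scope")
    return True
```

Because the tool list is baked into the signed token at mint time, a prompt injection mid-run cannot expand it; at worst the attacker uses tools the run was already allowed.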

The full picture

Figure: Full architecture — Agents, AI Gateway, Tools Gateway, Tools Registry, Models, and Tools

Priority framework

Adopt patterns based on operational needs:

  • Multiple teams/agents → Central authentication and logging (P1)
  • Provider downtime → Failover mechanisms (P2)
  • PII/regulated data → Input/output filtering (P3)
  • Self-hosted models → Batching and cache management (P4)
  • Agent-to-API calls → Authorization and auditing (P5)
  • Frequently changing APIs → Versioned discovery (P6)
  • Mixed backend protocols → Protocol translation (P7)
  • Multi-tenant environments → Scoped permissions (P8)

Trade-offs

Each pattern introduces operational complexity alongside reliability improvements:

| Pattern | Trade-off | Mitigation |
| --- | --- | --- |
| P1 Gateway | Additional network hop; SPOF | Multi-region active-active; circuit breakers |
| P2 Fallback | Latency variance across providers | Strict time budgets; provider allowlists |
| P3 Policy | False positives; processing overhead | Category tuning; decision caching |
| P4 Inference | Operational complexity; batch effects on UX | Separate streaming routes; autoscaling |
| P5 Tool Gateway | Additional choke point; schema drift | Health probes; scalable control plane |
| P6 Registry | Version proliferation; migration work | Deprecation windows; code generation |
| P7 Transcoding | Error semantic mismatches | Canonical error taxonomy; conformance testing |
| P8 Capabilities | Token management complexity | Short TTL (~10 min); deny-by-default |

Hybrid approaches work well — direct provider calls in low-risk development environments, gateways with policies in production. SaaS models for variable loads, inference services for predictable, cost-sensitive workloads.


These eight patterns establish the governance infrastructure that transforms agent development from chaotic, fragmented systems toward centralized, observable, and controlled platforms. Next up: runtime shapes, sandboxing, observability, evals, memory, and orchestration SDK design.