Let's dive into the technical architecture that makes the SignalWire AI Agents SDK so powerful. This isn't just about features. It's about fundamental design decisions that enable scalability, reliability, and developer productivity.
The SDK embraces stateless architecture as a first-class principle. Why? Because stateless services scale horizontally without coordination: any instance can serve any request, so adding capacity is simply a matter of adding instances:
@AgentBase.tool(name="check_order_status")
def check_order(self, args, raw_data):
    order_id = args.get("order_id")
    # Fetch from database - no local state needed
    status = fetch_order_status(order_id)
    return SwaigFunctionResult(f"Order {order_id} is {status}")
When you need state (and sometimes you do), we provide clean abstractions:
# Optional state management when needed
result = SwaigFunctionResult("Processing your request")
result.add_action("set_global_data", {"session_id": unique_id})
This approach means your agents can run on a single server or across a thousand Lambda functions with zero code changes.
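The round trip can be sketched framework-free. In this sketch, SwaigFunctionResult is a minimal stand-in stub, and the assumption that values stored via the set_global_data action arrive back under a global_data key in raw_data is for illustration only:

```python
class SwaigFunctionResult:  # stand-in stub for the SDK class, for illustration
    def __init__(self, response):
        self.response = response

def resume_session(args, raw_data):
    # Assumption for this sketch: values stored with set_global_data
    # are delivered under a "global_data" key on later requests
    session_id = raw_data.get("global_data", {}).get("session_id")
    if session_id is None:
        return SwaigFunctionResult("No active session found")
    return SwaigFunctionResult(f"Resuming session {session_id}")
```

Because the state travels with the request rather than living in process memory, any instance can pick up the session.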
Write once, deploy anywhere. The same agent code runs in multiple environments:
# Standalone server
if __name__ == "__main__":
    agent = MyAgent()
    agent.run()  # Starts FastAPI server on port 8080

# AWS Lambda
def lambda_handler(event, context):
    agent = MyAgent()
    return agent.handle_request(event)

# Google Cloud Function
def cloud_function(request):
    agent = MyAgent()
    return agent.handle_request(request)

# CGI script
#!/usr/bin/env python3
agent = MyAgent()
agent.run()  # Auto-detects CGI environment
The SDK automatically detects the deployment environment and configures appropriately. No environment-specific code required.
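Conceptually, this kind of auto-detection can be done by inspecting environment variables each platform sets; the following is an illustrative sketch, not the SDK's actual logic:

```python
import os

def detect_environment() -> str:
    """Sketch of serverless environment detection via well-known
    platform environment variables (illustrative only)."""
    if os.environ.get("AWS_LAMBDA_FUNCTION_NAME"):
        return "lambda"  # set by the AWS Lambda runtime
    if os.environ.get("FUNCTION_TARGET") or os.environ.get("K_SERVICE"):
        return "cloud_function"  # set by Google Cloud Functions / Cloud Run
    if os.environ.get("GATEWAY_INTERFACE", "").startswith("CGI"):
        return "cgi"  # set by web servers invoking CGI scripts
    return "server"  # default: run the FastAPI server
```

Because detection happens at startup, the same entry point works everywhere without branching in your agent code.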
Building sophisticated systems often requires specialized agents working together. The SDK provides first-class support for multi-agent architectures:
from signalwire_agents import AgentServer
# Create a server hosting multiple agents
server = AgentServer(host="0.0.0.0", port=3000)
# Register specialized agents
server.register(TriageAgent(), "/")
server.register(SalesAgent(), "/sales")
server.register(SupportAgent(), "/support")
server.run()
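The routing idea behind hosting several agents on one server can be sketched as longest-prefix path matching (illustrative only, not the AgentServer implementation):

```python
def route(path: str, agents: dict):
    """Pick the agent whose registered prefix best matches the
    request path; "/" acts as the catch-all (illustrative sketch)."""
    best = None
    for prefix, agent in agents.items():
        matches = (
            path == prefix
            or prefix == "/"
            or path.startswith(prefix.rstrip("/") + "/")
        )
        if matches and (best is None or len(prefix) > len(best[0])):
            best = (prefix, agent)
    return best[1] if best else None
```

With the registrations above, a request to /sales/swml would reach the sales agent, while any unmatched path falls through to the triage agent at "/".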
Agent handoffs preserve complete context:
# In the triage agent
self.add_skill("swml_transfer", {
    "tool_name": "transfer_to_specialist",
    "required_fields": {
        "customer_name": "Customer's name",
        "issue_summary": "Summary of the issue"
    },
    "transfers": {
        "/sales/i": {
            "url": sales_url,
            "message": "Transferring to our sales team..."
        }
    }
})
The receiving agent automatically has access to all collected information through ${call_data.customer_name} and similar variables.
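The substitution itself amounts to simple template expansion. This sketch of ${call_data.field} expansion is purely illustrative; the SDK performs the substitution server-side:

```python
import re

def expand_call_data(template: str, call_data: dict) -> str:
    """Replace ${call_data.field} placeholders with collected values
    (illustrative sketch of the substitution semantics)."""
    def repl(match):
        return str(call_data.get(match.group(1), ""))
    return re.sub(r"\$\{call_data\.(\w+)\}", repl, template)
```

For example, a prompt containing "Hello ${call_data.customer_name}" would greet the customer by the name collected during triage.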
Real-world applications need to adapt behavior based on context. The SDK provides powerful dynamic configuration:
def configure_dynamically(self, query_params, body_params, headers, agent):
    tier = query_params.get('customer_tier', 'standard')
    if tier == 'premium':
        agent.set_params({
            "ai_model": "gpt-4",
            "attention_timeout": 30000
        })
        agent.add_skill("advanced_analytics")
    else:
        agent.set_params({
            "ai_model": "gpt-3.5-turbo",
            "attention_timeout": 15000
        })
    # Configure transfer URLs with proxy detection
    base_url = agent.get_full_url(include_auth=True)
    # Automatically handles direct connections, proxies, and tunnels
This enables true multi-tenant deployments where a single agent codebase serves different customers with different configurations.
Enterprise-grade security is built in, not bolted on:
# Secure credentials are generated automatically by default.
# Or set custom credentials via environment:
export SWML_AUTH_USER=myuser
export SWML_AUTH_PASS=securepass

# SSL/HTTPS support
export SWML_SSL_ENABLED=true
export SWML_SSL_CERT_PATH=/path/to/cert.pem

# Function-specific authentication
@AgentBase.tool(name="sensitive_operation", auth_token="special_token")
def protected_function(self, args, raw_data):
    # Only callable with the correct token
    return SwaigFunctionResult("Sensitive operation completed")
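A sketch of what automatic secure credential generation can look like, using Python's secrets module; the SDK's actual scheme may differ:

```python
import secrets

def generate_credentials() -> tuple[str, str]:
    """Generate a random basic-auth username/password pair
    (illustrative sketch of automatic credential generation)."""
    user = "user_" + secrets.token_hex(4)   # 8 random hex characters
    password = secrets.token_urlsafe(24)    # ~32 URL-safe characters
    return user, password
```

The key property is that credentials come from a cryptographically secure source and are unique per deployment, so an agent is never exposed without authentication.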
The SDK includes everything needed for production deployment: built-in /health endpoints, plus the swaig-test CLI for development and debugging.
# Test locally with full debugging
swaig-test my_agent.py --list-tools
swaig-test my_agent.py --exec get_weather --location "London"
# Simulate production environments
swaig-test my_agent.py --simulate-serverless lambda
swaig-test my_agent.py --simulate-serverless cloud_function --gcp-project my-project
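Under the hood, --exec amounts to looking up a registered tool by name and invoking it with the parsed arguments. This sketch uses a hypothetical registry, not the real swaig-test code:

```python
# Hypothetical tool registry illustrating the --exec mechanism
TOOLS = {}

def tool(name):
    """Register a function under a tool name (illustrative decorator)."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_weather")
def get_weather(args, raw_data):
    # Stand-in tool body for the sketch
    return f"Weather in {args['location']}: sunny"

def exec_tool(name, **args):
    """Look up a tool by name and invoke it with CLI-style arguments."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](args, raw_data={})
```

Running `swaig-test my_agent.py --exec get_weather --location "London"` corresponds to `exec_tool("get_weather", location="London")` in this sketch.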
This architecture isn't theoretical. It's battle-tested in production deployments handling millions of conversations.