Responses API
The Responses API is fully compatible with OpenAI SDKs. Change the `base_url` and `api_key` to integrate Alphagent into your existing OpenAI workflow.
OpenAI Compatible
Drop-in replacement for OpenAI SDKs. Same interface, different endpoint.
Multi-Agent Models
Access specialized financial agents with deep research capabilities.
Streaming Support
Real-time SSE streaming with tool call visibility.
# Endpoint

```
POST https://api.alphagent.co/v1/responses
```

# Quick Start
Install the OpenAI SDK and configure it to use the Alphagent API endpoint.
```shell
$ pip install openai
```

```python
from openai import OpenAI

# Initialize the client with the Alphagent endpoint
client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Make a request
response = client.responses.create(
    model="alphagent-smart",
    input="What is AAPL trading at?"
)
print(response.output_text)
```
# Available Models
Choose the model that best fits your use case. Single-agent models are faster, while deep research models provide comprehensive analysis.
| Model | Description | Type |
|---|---|---|
| alphagent-fast | Quick responses using Gemini Flash | Single-agent |
| alphagent-smart | Balanced performance with Gemini Pro | Single-agent |
| alphagent-pro | High reasoning effort with Gemini Pro | Single-agent |
| alphagent-deep-research-fast | Multi-agent with Gemini Flash | 6-agent |
| alphagent-deep-research-pro | Multi-agent with high reasoning | 6-agent |
Multi-Agent Architecture
Deep research models use a 6-agent architecture with specialized agents for market data, fundamentals, options/risk, research, and verification. Multi-domain queries are processed in parallel for comprehensive analysis.
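The parallel fan-out described above can be pictured with a small conceptual sketch. The agent names and functions below are illustrative placeholders only; the actual multi-agent pipeline runs server-side and its implementation is not part of the public API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the specialized domain agents
AGENTS = {
    "market_data": lambda q: f"market data for {q}",
    "fundamentals": lambda q: f"fundamentals for {q}",
    "options_risk": lambda q: f"options/risk for {q}",
    "research": lambda q: f"research on {q}",
}

def fan_out(query):
    """Dispatch the query to every domain agent in parallel and collect results."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in AGENTS.items()}
        return {name: f.result() for name, f in futures.items()}
```

The verifier step then checks the combined answer against these per-domain outputs (see Claim Verification below).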
# Streaming Responses
Enable streaming to receive responses in real-time as they are generated.
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Stream the response
stream = client.responses.create(
    model="alphagent-smart",
    input="Analyze TSLA fundamentals",
    stream=True
)

for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
    elif event.type == "response.completed":
        print("\nDone!")
```
# Conversations
Group related requests into conversations by providing a `conversation_id`. This enables history tracking and retrieval.
```python
import uuid

from openai import OpenAI

client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Start a new conversation
conversation_id = str(uuid.uuid4())

# First message
response = client.responses.create(
    model="alphagent-smart",
    input="What is AAPL trading at?",
    extra_body={"conversation_id": conversation_id}
)

# Follow-up in the same conversation
response = client.responses.create(
    model="alphagent-smart",
    input="What about its P/E ratio?",
    extra_body={"conversation_id": conversation_id}
)
```
# Code Execution
Enable server-side code execution for complex calculations, data analysis, and visualizations. Code runs in a secure sandboxed environment with Python 3.11, pandas, numpy, and matplotlib.
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Enable code execution
response = client.responses.create(
    model="alphagent-smart",
    input="Calculate correlation between AAPL and MSFT returns",
    tools=[{"type": "code_execution"}]
)

# Get the container_id for reuse
if response.metadata:
    container_id = response.metadata.get("container_id")
    outputs = response.metadata.get("code_execution_outputs", [])

# Reuse the container in a follow-up request
response = client.responses.create(
    model="alphagent-smart",
    input="Now plot the results",
    tools=[{"type": "code_execution"}],
    extra_body={"container_id": container_id}
)
```
# Request Parameters
| Parameter | Type | Description |
|---|---|---|
| model | string | Required. Model ID to use. |
| input | string \| array | Required. The input prompt or conversation messages. |
| stream | boolean | Enable SSE streaming. Default: false |
| conversation_id | string | UUID v4 to group requests into a conversation. |
| instructions | string | Custom system prompt that replaces the default. |
| tools | array | Tools available for the model to use. |
| max_output_tokens | integer | Maximum tokens in the response. |
| temperature | number | Sampling temperature (0-2). |
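To sketch how these parameters fit together, the hypothetical helper below assembles the keyword arguments for `client.responses.create()` and enforces the documented constraints (`temperature` in 0-2, `conversation_id` a UUID). The helper itself is illustrative and not part of the SDK or the API.

```python
import uuid

def build_request(model, input, *, stream=False, conversation_id=None,
                  instructions=None, tools=None, max_output_tokens=None,
                  temperature=None):
    """Assemble kwargs for responses.create(), validating documented constraints."""
    if temperature is not None and not (0 <= temperature <= 2):
        raise ValueError("temperature must be between 0 and 2")
    if conversation_id is not None:
        uuid.UUID(conversation_id)  # raises ValueError if not a valid UUID
    params = {"model": model, "input": input}
    if stream:
        params["stream"] = True
    if instructions is not None:
        params["instructions"] = instructions
    if tools is not None:
        params["tools"] = tools
    if max_output_tokens is not None:
        params["max_output_tokens"] = max_output_tokens
    if temperature is not None:
        params["temperature"] = temperature
    if conversation_id is not None:
        # Alphagent-specific parameters go through extra_body in the OpenAI SDK
        params["extra_body"] = {"conversation_id": conversation_id}
    return params
```

A call would then look like `client.responses.create(**build_request("alphagent-smart", "What is AAPL trading at?", temperature=0.7))`.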
# Response Format
The response follows the OpenAI Responses API format with additional metadata for verification results.
```json
{
  "id": "resp_abc123",
  "object": "response",
  "model": "alphagent-smart",
  "status": "completed",
  "created_at": 1736360000,
  "output": [
    {
      "id": "msg_1",
      "type": "message",
      "role": "assistant",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "text": "AAPL is trading at $187.12.",
          "annotations": []
        }
      ]
    }
  ],
  "output_text": "AAPL is trading at $187.12.",
  "usage": {
    "input_tokens": 18,
    "output_tokens": 26,
    "total_tokens": 44
  }
}
```

# Claim Verification
Deep research models include a Verifier agent that checks claims against tool outputs. The verification report is included in the response metadata.
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Use a deep research model for verification
response = client.responses.create(
    model="alphagent-deep-research-pro",
    input="Analyze AAPL's current valuation"
)

# Access the verification report
if response.metadata:
    verification = response.metadata.get("verification", {})
    print("Confidence:", verification.get("confidence"))
    print("Verified claims:", verification.get("verified_claims"))
    print("Flagged claims:", verification.get("flagged_claims"))
```
Verification Report Fields
- `verified_claims` - Claims matched to supporting tool output
- `flagged_claims` - Claims with issues (contradictions or unsourced)
- `confidence` - Overall confidence: `"high"`, `"medium"`, or `"low"`
# TypeScript / Node.js
The same OpenAI SDK compatibility applies to the Node.js/TypeScript SDK.
```shell
$ npm install openai
```

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-alphagent-api-key',
  baseURL: 'https://api.alphagent.co/v1',
});

async function main() {
  // Non-streaming request
  const response = await client.responses.create({
    model: 'alphagent-smart',
    input: 'What is AAPL trading at?',
  });
  console.log(response.output_text);

  // Streaming request
  const stream = await client.responses.create({
    model: 'alphagent-smart',
    input: 'Analyze TSLA fundamentals',
    stream: true,
  });
  for await (const event of stream) {
    if (event.type === 'response.output_text.delta') {
      process.stdout.write(event.delta);
    }
  }
}

main();
```
API Key Security
Never commit your API keys to version control. Use environment variables like ALPHAGENT_API_KEY to store secrets securely.
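For example, the key can be read from the environment at startup; `load_alphagent_key` is an illustrative helper, not part of the SDK.

```python
import os

def load_alphagent_key(env_var="ALPHAGENT_API_KEY"):
    """Read the API key from the environment instead of hard-coding it."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set")
    return key

# client = OpenAI(api_key=load_alphagent_key(),
#                 base_url="https://api.alphagent.co/v1")
```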