Version: v2.0

Agents Quickstart

Deploy autonomous AI agents with the aiXplain SDK—backed by 200+ tools and 170+ LLMs—all accessible through a single API key.

Prerequisites:

  • aiXplain account and API key (get one ↗)
  • Credits in your wallet (or a voucher)
  • pip install aixplain

What You Need to Deploy an Agent

Requirement                Details
Agent name & description   Human-readable label and purpose
LLM                        A default model is pre-selected; override with any marketplace model
Tools                      Marketplace tools, knowledge bases, or custom Python functions

Building and deploying agents is free—you only pay for what you run:

Cost per run: supplier rates per model/tool used + 20% service fee. Find supplier rates on any asset card in Studio.
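The pricing above can be sketched as a quick calculation. The rates used here are made up for illustration; real supplier rates are listed on each asset card in Studio:

```python
SERVICE_FEE = 0.20  # 20% added on top of supplier rates

def run_cost(supplier_charges: list[float]) -> float:
    """Total cost of one run: sum of supplier charges plus the service fee."""
    subtotal = sum(supplier_charges)
    return round(subtotal * (1 + SERVICE_FEE), 6)

# e.g. one LLM call at $0.004 and one tool call at $0.001
print(run_cost([0.004, 0.001]))  # 0.006
```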


Setup

from aixplain import Aixplain

aix = Aixplain(api_key="YOUR_API_KEY")

Quick Start

Create your first agent with a marketplace tool in under 60 seconds.

# Get a marketplace tool
search_tool = aix.Tool.get("tavily/tavily-web-search")

# Create and save an agent
agent = aix.Agent(
    name="Search Agent",
    description="Searches and answers questions",
    tools=[search_tool],
)
agent.save()

# Run it
response = agent.run("What's the latest AI news?")
print(response.data.output)

agent.save() promotes the agent from DRAFT to ONBOARDED, giving it a persistent cloud endpoint.


1. Agent Basics

Minimal Agent

The simplest agent uses only LLM reasoning—no tools required.

agent = aix.Agent(
    name="Hello Agent",
    description="Answers general questions clearly",
    # Defaults to OpenAI GPT-4o
)

response = agent.run(query="What is machine learning?")
print(response.data.output)

Full Configuration

agent = aix.Agent(
    name="Research Assistant",
    description="Answers questions with research and citations",  # Visible in UI and traces
    instructions="Always cite sources. Be concise but thorough.",  # Internal guidance, not shown to users
    # tools=[],             # Attach tools here
    # llm=None,             # Override the default LLM
    output_format="text",   # "text" (default) | "markdown" | "json"
    expected_output=None,   # Required when output_format="json"
)
Parameter        Type   Required  Default  Description
name             str    Yes       —        Display name shown in Studio and traces
description      str    Yes       —        User-facing purpose of the agent
instructions     str    No        None     Internal behaviour guidance (not shown to users)
tools            list   No        []       Tools the agent can invoke
llm              Model  No        GPT-4o   Override the reasoning model
output_format    str    No        "text"   "text" | "markdown" | "json"
expected_output  str    No        None     Required when output_format="json"

Agent Lifecycle

State      Meaning
DRAFT      Testing mode — expires in 24 hours
ONBOARDED  Production — persistent endpoint, deployed asset
DELETED    Asset and endpoint removed
# Save to production
agent.save()
print(agent.status) # ONBOARDED

# Update and persist
agent.description = "Updated description"
agent.instructions = "New behaviour instructions"
agent.save()

# Delete
agent.delete()

Learn more about Agents.


2. Tools

Marketplace Tools

Browse aiXplain Marketplace for 200+ ready-to-use tools and models.

# Get a tool by asset path
search_tool = aix.Tool.get("tavily/tavily-web-search")

# Test before attaching (optional)
result = search_tool.run({"query": "AI news today", "num_results": 2})
print(result.data)
# Optionally pair with a specific LLM
llm = aix.Model.get("anthropic/claude-3-7-sonnet")
llm.inputs.temperature = 0.7
llm.inputs.max_tokens = 20000

# Build an agent with both
agent = aix.Agent(
    name="Search Agent",
    description="Searches and answers questions",
    tools=[search_tool, llm],
)
agent.save()

response = agent.run(query="Find an animal shelter in San Jose")
print(response.data.output)

Learn more about Models and Tools.

Python Script Tool

Deploy secure, sandboxed Python functions as callable tools.

import inspect

def calculate_bmi(weight_kg: float, height_m: float) -> dict:
    """
    Calculate Body Mass Index (BMI).

    Args:
        weight_kg: Weight in kilograms
        height_m: Height in meters

    Returns:
        dict: BMI value and category
    """
    bmi = weight_kg / (height_m ** 2)

    if bmi < 18.5:
        category = "underweight"
    elif bmi < 25:
        category = "normal"
    elif bmi < 30:
        category = "overweight"
    else:
        category = "obese"

    return {"bmi": round(bmi, 2), "category": category}

# Extract source code
script_content = inspect.getsource(calculate_bmi)

# Create the tool
bmi_tool = aix.Tool(
    name="BMI Tool",
    integration="688779d8bfb8e46c273982ca",  # Script Integration ID
    config={"code": script_content, "function_name": "calculate_bmi"},
)
bmi_tool.save()

# Attach to an agent
health_agent = aix.Agent(
    name="Health Assistant",
    description="Calculates BMI and provides health insights",
    tools=[bmi_tool],
)
health_agent.save()

response = health_agent.run("Calculate my BMI. I weigh 70kg and I'm 1.75m tall")
print(response.data.output)

Learn more about the Python Script Tool.

Integration Tools (Composio & MCP)

Connect to 600+ services—Slack, Google Calendar, Salesforce, and more.

Setup:

  1. Connect the integration in aiXplain Studio
  2. Get the generated tool ID
  3. Scope actions to reduce context bloat
# Create a Slack integration tool
slack_tool = aix.Tool(
    name="Slack Connection Tool",
    description="Sends messages to Slack.",
    integration="composio/slack",
    config={"token": "YOUR_SLACK_API_KEY"},
)
slack_tool.save()

# Test it
response = slack_tool.run(
    {"text": "Hello :)", "channel": "integrations-test"},
    action="SLACK_SENDS_A_MESSAGE_TO_A_SLACK_CHANNEL"
)
print(response.data)

# Scope to only the actions you need (improves performance)
slack_tool.allowed_actions = ["SLACK_SENDS_A_MESSAGE_TO_A_SLACK_CHANNEL"]

# Attach to an agent
agent = aix.Agent(
    name="Slack Notifier",
    description="Sends notifications to Slack channels.",
    tools=[slack_tool],
)
agent.save()

Learn more about Commercial Integrations.


3. Team Agents

Orchestrate multiple specialised agents to solve complex tasks.

How team agents work:

Mentalist          → Breaks the query into a task graph
Orchestrator       → Routes tasks to the right subagent
Inspector          → Validates output quality, triggers retries
Response Generator → Synthesises the final response

Basic Team

tavily_tool = aix.Tool.get("tavily/tavily-web-search")

researcher = aix.Agent(
    name="Researcher",
    description="Searches for information and gathers data",
    tools=[tavily_tool],
)

writer = aix.Agent(
    name="Writer",
    description="Writes clear, well-structured reports",
    output_format="markdown",
)

team = aix.Agent(
    name="Research Team",
    description="Researches topics and produces written reports",
    agents=[researcher, writer],
)

# save_subcomponents=True saves all subagents first, then the team
team.save(save_subcomponents=True)

response = team.run(query="Research quantum computing and write a summary")
print(response.data.output)

Team with Predefined Workflow

Define explicit task dependencies for deterministic execution.

# Agent 1: Find leads (no dependencies — runs first)
lead_finder = aix.Agent(
    name="Lead Finder",
    description="Find EdTech leads",
    tools=[tavily_tool],
    tasks=[aix.Agent.Task(
        name="find_leads",
        instructions="Generate list of EdTech companies",
        expected_output="List of companies with contact info",
    )]
)

# Agent 2: Analyse leads (depends on Agent 1)
lead_analyzer = aix.Agent(
    name="Lead Analyzer",
    description="Qualify EdTech leads",
    tasks=[aix.Agent.Task(
        name="analyze_leads",
        instructions="Prioritise leads by platform alignment",
        expected_output="Qualified and prioritised list",
        dependencies=["find_leads"],
    )]
)

lead_gen_team = aix.Agent(
    name="Lead Gen Team",
    description="Generate and qualify EdTech leads",
    agents=[lead_finder, lead_analyzer],
)
lead_gen_team.save(save_subcomponents=True)

response = lead_gen_team.run(
    query="Find and qualify EdTech companies for AI platform partnership"
)
print(response.data.output)
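Under the hood, dependencies like these define a task graph that the team resolves into a deterministic execution order. A small sketch of that ordering logic using the standard library (my own illustration, not the SDK's implementation):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on
tasks = {
    "find_leads": set(),              # no dependencies — runs first
    "analyze_leads": {"find_leads"},  # waits for find_leads to finish
}

order = list(TopologicalSorter(tasks).static_order())
print(order)  # ['find_leads', 'analyze_leads']
```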

Learn more about Team Agents.


4. Response Structure

Reading Output

response = agent.run("Search for AI news")

print(response.data.output) # Final answer
print(response.status) # True | False
print(response.error_message) # None if successful
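When wiring an agent into application code, it helps to check the status before trusting the output. A defensive sketch using the attribute names shown above (the helper itself is illustrative, not an SDK function):

```python
def extract_output(response) -> str:
    """Return the final answer, or raise with the error message on failure."""
    if not response.status:
        raise RuntimeError(f"Agent run failed: {response.error_message}")
    return response.data.output
```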

Debugging with the Reasoning Trace

for i, step in enumerate(response.data.steps or []):
    print(f"\n--- Step {i+1}: {step.get('agent')} ---")
    print("Reason:", step.get("reason"))  # Chain-of-thought

    for tool_step in step.get("tool_steps", []):
        print(f"\n  Tool: {tool_step.get('tool')}")
        print(f"  Input: {tool_step.get('input')}")
        print(f"  Output: {str(tool_step.get('output'))[:200]}...")
        print(f"  Error: {tool_step.get('error')}")

Execution Metrics

stats = response.data.execution_stats or {}

print(f"Runtime: {stats.get('runtime')}s")
print(f"API calls: {stats.get('api_calls')}")
print(f"Credits: ${stats.get('credits')}")
print(f"Assets used: {stats.get('assets_used')}")
print(f"Session ID: {stats.get('session_id')}")
print(f"Run ID: {stats.get('params', {}).get('id')}")
print(f"Request ID: {stats.get('request_id')}")

5. Save & Integrate

Save an Agent

agent.save()
print(f"Agent ID: {agent.id}")
print(f"Agent path: {agent.path}")

Python SDK

Load a saved agent by ID or path and run it with optional session memory.

from aixplain import Aixplain

aix = Aixplain(api_key="YOUR_API_KEY")
agent = aix.Agent.get("YOUR_AGENT_PATH")

# Optional: enable memory across conversations
session_id = agent.generate_session_id()

response = agent.run(query="What is the capital of France?", session_id=session_id)
print(response.data.output)

# Follow-up — agent remembers prior context
response = agent.run(query="What did I just ask you?", session_id=session_id)
print(response.data.output)

cURL / REST API

note

cURL accepts plain text input only. Use the Python SDK for structured inputs.

# 1. Submit a run
curl -X POST 'https://platform-api.aixplain.com/sdk/v2/agents/<AGENT_ID>/run' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"query": "Your question here", "sessionId": "user_123_session"}'
# 2. Poll for result
curl -X GET 'https://platform-api.aixplain.com/sdk/v2/agents/<REQUEST_ID>/result' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json'

JavaScript / TypeScript

// Submit a run
const response = await fetch(
  "https://platform-api.aixplain.com/sdk/v2/agents/AGENT_ID/run",
  {
    method: "POST",
    headers: { "x-api-key": "YOUR_API_KEY", "Content-Type": "application/json" },
    body: JSON.stringify({ query: "Your question here", sessionId: "user_123_session" }),
  }
);
const { requestId } = await response.json();

// Poll for result
const result = await fetch(
  `https://platform-api.aixplain.com/sdk/v2/agents/${requestId}/result`,
  { headers: { "x-api-key": "YOUR_API_KEY", "Content-Type": "application/json" } }
);
const data = await result.json();
console.log(data.output);

OpenAI-Compatible API

Use aiXplain agents as a drop-in replacement in any OpenAI-compatible client.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://models.aixplain.com/api/v1/",
)

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "How do I create an agent?"}],
    model="agent-696e105a070d2931c0963a87",
)
print(response.choices[0].message.content)

Async Processing

Fire a query without blocking, then poll for the result.

import time
from aixplain import Aixplain

aix = Aixplain(api_key="YOUR_API_KEY")
agent = aix.Agent.get("YOUR_AGENT_ID")

# Single async job
response = agent.run_async(query="What are AI Agents?")

while True:
    result = agent.poll(response.url)
    if result.completed:
        print(result.data.output)
        break
    time.sleep(5)

Run multiple queries in parallel:

queries = ["Define happy", "Define excited", "Define content"]

# Fire all jobs
jobs = [(q, agent.run_async(query=q)) for q in queries]

# Poll until all complete
pending = {j.url: q for q, j in jobs if j.url}
outputs = {}

while pending:
    done_urls = []
    for url, q in list(pending.items()):
        res = agent.poll(url)
        if res.completed:
            outputs[q] = res.data.output
            done_urls.append(url)
    for url in done_urls:
        pending.pop(url, None)
    if pending:
        time.sleep(2)

print([outputs[q] for q in queries])
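Fixed-interval polling works, but long-running jobs waste fewer requests with exponential backoff. A generic sketch under stated assumptions: poll_fn stands in for a call like agent.poll(url) and is not an SDK API.

```python
import time

def poll_until_done(poll_fn, is_done, max_wait: float = 60.0, base: float = 1.0):
    """Call poll_fn with exponentially increasing delays until is_done(result) is true."""
    delay = base
    waited = 0.0
    while True:
        result = poll_fn()
        if is_done(result):
            return result
        if waited >= max_wait:
            raise TimeoutError("job did not complete in time")
        time.sleep(delay)
        waited += delay
        delay = min(delay * 2, 16.0)  # double the interval, capped at 16s

# Demo with a fake poller that completes on the third call
calls = {"n": 0}
def fake_poll():
    calls["n"] += 1
    return {"completed": calls["n"] >= 3}

print(poll_until_done(fake_poll, lambda r: r["completed"], base=0.01))  # {'completed': True}
```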

List and Inspect Agents

# List all deployed agents
agents = aix.Agent.search()
for agent in agents["results"]:
    print(f"{agent.name}: {agent.id}")
# Get a specific agent
agent = aix.Agent.get("YOUR_AGENT_ID")
print(f"Name: {agent.name}")
print(f"Description: {agent.description}")
print(f"Status: {agent.status}")
print(f"Tools: {agent.tools}")

6. Troubleshooting

Agent Not Using Tools

# 1. Verify tools are attached
for tool in agent.tools:
    print(tool)

# 2. Inspect the reasoning trace
# response.data.steps shows what the agent attempted

# 3. Sharpen the tool description if needed
# search_tool.description = "Search the web for current information and news"
# agent.save()

Maximum Iterations Reached

agent.max_iterations = 20  # Default: 5
agent.save()

Token Limit Exceeded

llm = aix.Model.get("openai/gpt-4o")
llm.inputs.max_tokens = 100000 # Affects all LLM reasoning — can be expensive
agent.llm = llm

Duplicate Tool Names

# List current tools
for i, tool in enumerate(agent.tools):
    print(i, tool)

# Remove by index
# agent.tools.pop(2)
# agent.save()

# Add safely
# if new_tool.id not in {getattr(t, "id", None) for t in agent.tools}:
#     agent.tools.append(new_tool)
#     agent.save()
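The "add safely" pattern above generalises to a small dedupe helper. A sketch over plain objects; it relies only on the same getattr(t, "id", None) access the snippet above uses:

```python
from types import SimpleNamespace

def dedupe_by_id(tools: list) -> list:
    """Keep the first tool for each id, preserving order."""
    seen = set()
    unique = []
    for tool in tools:
        tool_id = getattr(tool, "id", None) or id(tool)
        if tool_id not in seen:
            seen.add(tool_id)
            unique.append(tool)
    return unique

# Demo with stand-in tool objects
a = SimpleNamespace(id="tool-a")
b = SimpleNamespace(id="tool-b")
print([t.id for t in dedupe_by_id([a, b, a])])  # ['tool-a', 'tool-b']
```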

Next Steps

  • Agents — Full agent configuration, session management, and execution options
  • Tools & Integrations — Marketplace tools, Python sandbox, databases, MCP, and commercial integrations
  • Team Agents — Orchestrate multi-agent workflows with task dependencies
  • Knowledge Base — Add semantic search and RAG to your agents
  • Cookbook — End-to-end agent examples and tutorials