Self-hosting, Tools API, and new providers
February 7, 2026
A lot has shipped over the last few months (which feel like years in AI time) — self-hosting, two new Azure providers, a full Tools REST API, and truncation controls. Here's a rundown.
Self-hosted Superinterface
@superinterface/server is open-source and ready to run on your own infrastructure. Deploy with Docker, Vercel, Azure Container Apps, or import the route handlers straight into an existing Next.js app.
You get the same REST API and React components as Superinterface Cloud — point @superinterface/react at your deployment and everything works. Pick your own Postgres database, bring your own AI provider keys, and keep full control over data and environment.
npx @superinterface/server@latest prisma deploy
npx @superinterface/server@latest organizations create
npx @superinterface/server@latest organizations api-keys create
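Once the server is up, the client side is just a matter of pointing @superinterface/react at it. Here's a minimal sketch; the component and prop names (SuperinterfaceProvider, Thread, baseUrl, variables) are illustrative assumptions rather than confirmed API, so check the @superinterface/react docs for the exact usage:
// Minimal sketch: point the React package at a self-hosted server.
// SuperinterfaceProvider, Thread, baseUrl, and variables are assumed
// names for illustration, not the confirmed @superinterface/react API.
import { SuperinterfaceProvider, Thread } from '@superinterface/react'

export const App = () => (
  <SuperinterfaceProvider
    baseUrl="https://superinterface.example.com/api" // your own deployment (hypothetical URL)
    variables={{ assistantId: 'YOUR_ASSISTANT_ID' }}
  >
    <Thread />
  </SuperinterfaceProvider>
)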
Read the self-hosting docs for deployment guides, environment setup, and how to embed the server in your own Next.js project.
Tools API
Assistants can use native AI provider tools like web search, file search, code interpreter, image generation, and computer use. Until now these were only configurable from the dashboard. Today you can create, list, get, update, and delete tools on any assistant via REST.
Create a tool
Enable web search on an assistant with a single request:
curl -X POST "https://superinterface.ai/api/cloud/assistants/{assistantId}/tools" \
-H "Authorization: Bearer YOUR_PRIVATE_API_KEY" \
-H "Content-Type: application/json" \
-d '{ "type": "WEB_SEARCH" }'
Tools that have configurable settings accept them inline:
curl -X POST "https://superinterface.ai/api/cloud/assistants/{assistantId}/tools" \
-H "Authorization: Bearer YOUR_PRIVATE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"type": "IMAGE_GENERATION",
"imageGenerationTool": {
"model": "gpt-image-1",
"quality": "HIGH",
"size": "SIZE_1024_1024",
"background": "TRANSPARENT"
}
}'
Update tool settings
Change settings on an existing tool without recreating it:
curl -X PATCH "https://superinterface.ai/api/cloud/assistants/{assistantId}/tools/{toolId}" \
-H "Authorization: Bearer YOUR_PRIVATE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"imageGenerationTool": {
"quality": "LOW",
"outputFormat": "WEBP"
}
}'
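The rest of the CRUD surface follows the same route shape. Here's a rough TypeScript sketch of listing and deleting; the GET and DELETE calls are assumptions based on the routes above, so confirm the exact endpoints in the Tools API reference:
// Sketch (Node 18+, ESM top-level await): list an assistant's tools, then delete one.
// The GET and DELETE routes are assumed to mirror the POST/PATCH routes shown above.
const base = 'https://superinterface.ai/api/cloud'
const assistantId = 'YOUR_ASSISTANT_ID'
const toolId = 'TOOL_ID_TO_DELETE'
const headers = { Authorization: 'Bearer YOUR_PRIVATE_API_KEY' }

// List tools on the assistant (assumed: GET on the collection)
const listResponse = await fetch(`${base}/assistants/${assistantId}/tools`, { headers })
console.log(await listResponse.json())

// Remove a tool (assumed: DELETE on the tool resource)
await fetch(`${base}/assistants/${assistantId}/tools/${toolId}`, { method: 'DELETE', headers })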
Provider availability
The API enforces the same provider rules as the dashboard. For example, image generation only works with OpenAI + Responses API and Azure OpenAI + Responses API. Attempting to create a tool that isn't supported by the assistant's provider configuration returns a clear 400 error.
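In code that just means checking the status before assuming the tool was created. A minimal sketch (the error body format isn't shown here, so it's logged as plain text):
// Sketch: creating a tool the provider doesn't support returns HTTP 400.
const assistantId = 'YOUR_ASSISTANT_ID'
const response = await fetch(
  `https://superinterface.ai/api/cloud/assistants/${assistantId}/tools`,
  {
    method: 'POST',
    headers: {
      Authorization: 'Bearer YOUR_PRIVATE_API_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ type: 'IMAGE_GENERATION' }),
  },
)

if (response.status === 400) {
  // Not supported by this assistant's provider configuration.
  console.error(await response.text())
}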
See the full Tools API reference for all five tool types, their settings, and the complete availability matrix.
Azure Responses API
Store threads and run responses on Azure. Connect an Azure AI Project, pick Azure Responses in assistant settings, and your assistants execute on Azure infrastructure with the same native tools — web search, image generation, computer use, and more.
Azure Agents
Azure AI Foundry Agent Service is now available as a provider. Managed conversations, tool orchestration, and built-in safety with enterprise integrations like Bing, SharePoint, and Azure AI Search — all accessible from the same assistant settings.
Truncation settings
You can now configure how conversation history is truncated when creating or updating assistants via the API. Two new optional fields are available:
truncationType — AUTO (default), LAST_MESSAGES, or DISABLED
truncationLastMessagesCount — number of recent messages to keep when using LAST_MESSAGES
curl -X PATCH "https://superinterface.ai/api/cloud/assistants/{assistantId}" \
-H "Authorization: Bearer YOUR_PRIVATE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"truncationType": "LAST_MESSAGES",
"truncationLastMessagesCount": 20
}'
Both fields are returned in all assistant API responses. This is useful when you want to keep context windows predictable across different models or cap token usage for cost control.
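The same PATCH from TypeScript, reading the fields back from the response (a sketch; it assumes the response body is the updated assistant object, which isn't shown above):
// Sketch: set truncation via the PATCH route above, then read both fields back.
const assistantId = 'YOUR_ASSISTANT_ID'
const response = await fetch(`https://superinterface.ai/api/cloud/assistants/${assistantId}`, {
  method: 'PATCH',
  headers: {
    Authorization: 'Bearer YOUR_PRIVATE_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    truncationType: 'LAST_MESSAGES',
    truncationLastMessagesCount: 20,
  }),
})

// Assumes the response body is the updated assistant object.
const assistant = await response.json()
console.log(assistant.truncationType, assistant.truncationLastMessagesCount)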
Get started