Model Context Protocol (MCP) Now Generally Available
Dec 20, 2024
Model Context Protocol (MCP) is now generally available on Superinterface. After a successful closed beta, MCP is ready for all users, offering streamlined integrations and proven reliability.
Key Updates:
General Availability: Open to all users, with no beta restrictions.
SSE Support: Persistent connections for efficient interactions.
Refined Performance: Improved based on real-world feedback during the beta.
Set up MCP in the Assistant → MCP servers section to start integrating advanced capabilities into your assistants. Read more about it here.
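For a sense of what the SSE transport does, here is a minimal sketch of a client connecting to an MCP server over SSE and listing its tools, using the official MCP TypeScript SDK. The endpoint URL is a placeholder; Superinterface manages this connection for you once a server is configured in the dashboard.

```ts
// Minimal sketch: connect to an MCP server over SSE and list its tools.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Placeholder endpoint; substitute your MCP server's SSE URL.
const transport = new SSEClientTransport(new URL("https://example.com/mcp/sse"));

const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: {} },
);

await client.connect(transport);

// Discover the tools the server exposes.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));
```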
Humiris Mix Model Support
Dec 14, 2024
Superinterface now supports Humiris Mix Model, allowing you to combine multiple AI models into a custom blend tailored to your needs. Learn more about Humiris here.
Humiris Mix Model support is live—select it in your assistant's model settings and start building smarter solutions today. Read more about the update here.
Interactive Components
Dec 13, 2024
Interactive Components are here.
Superinterface now supports Interactive Components, introducing dynamic elements like clickable cards, forms, and rich media to your AI-powered interfaces.
Highlights:
Boost User Engagement: Add interactive cards, surveys, and media for a more engaging experience.
MDX Integration: Automatically generate and integrate interactive elements directly into your interface.
Versatile Applications: Perfect for product showcases, data collection, and step-by-step tutorials.
Interactive Components are live—start using them today by opening your interface and building with the AI Builder. Read more here.
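Superinterface's built-in component set is covered in the docs; purely to illustrate the idea, here is a hypothetical interactive survey card written as a self-contained React component. All component, prop, and variable names below are invented for this sketch.

```tsx
// Hypothetical sketch of an interactive survey card, similar in spirit to
// what Interactive Components render in chat. All names are invented.
import { useState } from "react";

type SurveyCardProps = {
  question: string;
  options: string[];
};

export function SurveyCard({ question, options }: SurveyCardProps) {
  const [choice, setChoice] = useState<string | null>(null);

  return (
    <div role="group" aria-label={question}>
      <p>{question}</p>
      {options.map((option) => (
        <button key={option} onClick={() => setChoice(option)}>
          {option}
        </button>
      ))}
      {choice && <p>You picked: {choice}</p>}
    </div>
  );
}
```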
Gemini Support
Dec 12, 2024
Gemini support now in Superinterface.
Superinterface now supports Google’s Gemini models, enabling seamless integration for advanced reasoning and efficient task handling. The latest Gemini models come with competitive pricing and a generous free tier. Available now—select Gemini in your model settings to get started. Read more here.
All Models All Providers
Dec 11, 2024
Support for the latest models of every AI provider is now live.
Superinterface now supports all models from all providers automatically and in real-time. No delays, no manual updates—new models, fine-tunes, and beta releases are instantly available in your model dropdown.
This feature is live for all users—just connect your AI provider and get started. Read more about it in our blog post.
Model Context Protocol (MCP) Support
Dec 10, 2024
MCP support is now live in closed beta.
Superinterface now supports Model Context Protocol (MCP), available in closed beta. This integration simplifies how AI assistants connect to external tools, enabling them to perform advanced tasks with minimal setup.
Key Features:
Streamlined Integrations: Instantly access pre-configured tools via MCP servers.
Enhanced Capabilities: Manage files in tools like Google Drive or automate database workflows with ease.
Simplified Development: Reduce manual setup by leveraging a single MCP server integration for multiple functions.
Learn More:
Explore how MCP transforms AI assistant capabilities and book a demo to try MCP support in beta here.
AI Builder
Dec 9, 2024
Build and customize AI-powered interfaces with AI Builder.
As part of our Launch Midnights week, we’ve released AI Builder (Beta)—a new tool to build and customize AI-powered interfaces by chatting with an assistant.
Key features:
Design interfaces with natural language input.
Generate production-ready React code automatically.
Deploy apps instantly to a subdomain, your domain, or as reusable components.
AI Builder is currently in beta, and we’re improving it every day. Learn more here.
In-Chat Image Rendering for Code Interpreter Outputs
Nov 14, 2024
Visualizing Revenue Data with Code Interpreter.
You can now view image files generated by Code Interpreter directly in chat. When an image file is returned, it will automatically display in the chat window.
Note: This feature requires Code Interpreter to be enabled in your assistant setup and is currently available only with the OpenAI and Azure OpenAI providers.
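For those consuming the API directly: Code Interpreter images arrive as `image_file` content blocks on assistant messages, and the file contents can be fetched by ID. A rough sketch with the OpenAI Node SDK (the thread ID is a placeholder):

```ts
// Sketch: find Code Interpreter images in a thread and save them to disk.
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Placeholder thread ID.
const messages = await openai.beta.threads.messages.list("thread_abc123");

for (const message of messages.data) {
  for (const part of message.content) {
    if (part.type === "image_file") {
      // Download the generated image by its file ID.
      const response = await openai.files.content(part.image_file.file_id);
      const bytes = Buffer.from(await response.arrayBuffer());
      fs.writeFileSync(`${part.image_file.file_id}.png`, bytes);
    }
  }
}
```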
Video and Audio Components
Oct 27, 2024
You can now play video and audio files directly in chat by embedding links with supported formats (e.g., mp4, ogg, mp3, wav).
Example: ![Media description](https://files.vidstack.io/sprite-fight/720p.mp4)
Azure OpenAI Support
Oct 25, 2024
Connect Azure OpenAI with Superinterface.
Added support for Azure OpenAI: both the Azure OpenAI Completions API and the Azure OpenAI Assistants API are now supported.
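The values Superinterface asks for correspond to standard Azure OpenAI credentials. For reference, here is a minimal sketch of the same connection made directly with the OpenAI Node SDK's Azure client; the endpoint, deployment name, and API version are placeholders:

```ts
import { AzureOpenAI } from "openai";

// All three values are placeholders; use your Azure resource's settings.
const client = new AzureOpenAI({
  endpoint: "https://your-resource.openai.azure.com",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-06-01",
});

const completion = await client.chat.completions.create({
  model: "your-gpt-4o-deployment", // with Azure, this is the deployment name
  messages: [{ role: "user", content: "Hello from Azure!" }],
});

console.log(completion.choices[0].message.content);
```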
Check out our step-by-step guide here.
Firecrawl Support
Oct 22, 2024
Add scraping and crawling capabilities to your assistant with Firecrawl functions.
Integrated support for Firecrawl, allowing assistants to scrape websites, perform Google searches, and extract content from web pages.
Learn how to set up functions that enable your assistants to scrape, crawl, and search the web using Firecrawl here.
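To sketch what such a function does behind the scenes, here is the equivalent call made directly with Firecrawl's JS SDK (the API key comes from the environment and the target URL is a placeholder):

```ts
import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

// Scrape a single page as markdown; the URL is a placeholder.
const result = await firecrawl.scrapeUrl("https://example.com", {
  formats: ["markdown"],
});

console.log(result);
```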
Enhanced Error Handling
Oct 12, 2024
Improved user experience by adding a friendly toast notification whenever there’s an issue with sending messages or receiving assistant responses.
Toggle File Upload
Oct 11, 2024
Introduced a setting to enable or disable file uploads from the user's side within Interfaces.
Customizable Annotations
Oct 10, 2024
An example of how annotations are displayed in the Minimal style.
Added the option to customize how file annotations appear in chat. They can now be set as:
Minimal (shows "File cited." on annotation icon click)
Source (displays the actual source file in a modal on annotation icon click)
Note: For source annotations, reimport files via the vector stores tab in Superinterface, since the OpenAI Assistants API doesn’t provide access to original source files.
See a step-by-step guide for how to customize annotations here.
Client-Side Tools
Oct 8, 2024
New function handler, "Client tool," allows assistants to call functions defined on the client side (frontend).
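Superinterface's exact registration API is described in the docs; conceptually, a client tool is a function living in your frontend that runs when the assistant emits a matching tool call. A hypothetical sketch of the pattern, with all names invented:

```ts
// Hypothetical sketch of the client-tool pattern. These names are invented
// and are not Superinterface's actual API.
type ClientTool = (args: Record<string, unknown>) => Promise<string>;

const clientTools: Record<string, ClientTool> = {
  // Runs in the browser, so it can read client-only state like the DOM.
  getSelectedText: async () => window.getSelection()?.toString() ?? "",
};

// Dispatch a tool call emitted by the assistant to the matching function
// and return its result so it can be fed back into the run.
async function handleToolCall(name: string, args: Record<string, unknown>) {
  const tool = clientTools[name];
  if (!tool) throw new Error(`Unknown client tool: ${name}`);
  return tool(args);
}

console.log(await handleToolCall("getSelectedText", {}));
```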
Iframe Integration
Oct 2, 2024
Native support for iframe integration is now available.
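As a sketch of the simplest possible embed, you can point an iframe at a published interface from script. The URL below is a placeholder; see the docs for the exact snippet to copy:

```ts
// Sketch: embed a published interface in any page via an iframe.
// The src URL is a placeholder, not a real interface address.
const frame = document.createElement("iframe");
frame.src = "https://your-interface.example.com";
frame.style.width = "400px";
frame.style.height = "600px";
frame.style.border = "none";
document.body.appendChild(frame);
```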
Group Interfaces
Oct 1, 2024
Configure group interface.
Expanded from single-assistant interfaces to include Group Interfaces, enabling a ChatGPT-like UI experience.
An example of a published group interface.
OpenRouter & o1 Model Support
Sep 16, 2024
Added support for models like OpenAI’s o1 via OpenRouter. Find it here.
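OpenRouter exposes an OpenAI-compatible API, which is what makes this integration straightforward. A minimal sketch of calling o1 through OpenRouter with the OpenAI Node SDK (the model ID follows OpenRouter's naming at the time and may have changed):

```ts
import OpenAI from "openai";

// OpenRouter speaks the OpenAI API, so the standard SDK works
// once the base URL is pointed at it.
const openrouter = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const completion = await openrouter.chat.completions.create({
  model: "openai/o1-preview", // OpenRouter's model ID for o1 at the time
  messages: [{ role: "user", content: "Say hello." }],
});

console.log(completion.choices[0].message.content);
```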
Code Examples
Sep 15, 2024
Published examples to illustrate how to use Superinterface via code. See examples here.
Replicate Support
Sep 6, 2024
Generate visual and auditory content with Replicate integration.
Added support for calling any Replicate model (e.g., flux). This enables assistants to generate images or other content using Replicate.
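As a sketch of the underlying call, here is a Flux image generation run made directly with Replicate's JS client; the prompt is arbitrary and the output shape depends on the model:

```ts
import Replicate from "replicate";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

// Run a Flux image model; output is typically a list of image URLs.
const output = await replicate.run("black-forest-labs/flux-schnell", {
  input: { prompt: "a watercolor fox in a forest" },
});

console.log(output);
```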
Custom Domains for Webpage Integration
Aug 28, 2024
Publish your assistant on your domain.
Custom domains are now supported. Users can set up DNS records, making assistants accessible through their own domain.
Logs
Aug 25, 2024
Introduced assistant logs, allowing users to view logs; these are especially useful for diagnosing errors such as exceeded OpenAI credit limits, invalid API keys, or connection errors.
Assistant Descriptions
Aug 18, 2024
Add public descriptions to assistants.
Added the ability to set descriptions for assistants, which are displayed in Group Interfaces.
Avatars
Aug 18, 2024
Customize your assistant's avatar.
Users can now customize avatars by choosing an icon or uploading their own image.
Groups
Aug 18, 2024
Introduced grouping and ordering of assistants, enabling Group Interfaces (ChatGPT-like) with multiple assistants.
Multi-Agent Support
Jul 11, 2024
Added support for complex multi-agent flows via the new "Superinterface Assistant" function handler.
Learn how to set up multi-agent flows here.