Self-hosted

This example shows how to self-host a thread using the OpenAI Assistants API and Superinterface.
Thread assistant

AI provider: Superinterface Cloud
Model: gpt-4.1-mini
Instructions: Use this to embed AI into your product. Superinterface is an AI infrastructure platform.
Functions: none used.
```tsx
'use client'

import { SuperinterfaceProvider, Thread } from '@superinterface/react'
import { Theme, Flex } from '@radix-ui/themes'
import '@radix-ui/themes/styles.css'
import { QueryClient, QueryClientProvider } from '@tanstack/react-query'

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 10000,
    },
  },
})

export default function Page() {
  return (
    <QueryClientProvider client={queryClient}>
      <Theme
        accentColor="gray"
        grayColor="gray"
        appearance="light"
        radius="medium"
        scaling="100%"
      >
        <SuperinterfaceProvider
          baseUrl="http://localhost:3000/api"
          variables={{
            assistantId: process.env.NEXT_PUBLIC_ASSISTANT_ID,
          }}
        >
          <Flex
            flexGrow="1"
            height="100dvh"
            p="5"
          >
            <Thread />
          </Flex>
        </SuperinterfaceProvider>
      </Theme>
    </QueryClientProvider>
  )
}
```
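Because the page reads its assistant id from `process.env` at render time, a missing variable only shows up as a broken thread in the browser. A minimal sketch of a fail-fast approach, using a hypothetical `buildConfig` helper (not part of `@superinterface/react`; the env variable names match the page code above, the fallback `NEXT_PUBLIC_BASE_URL` key is an assumption):

```typescript
// Hypothetical helper: derives SuperinterfaceProvider props from the
// environment and throws early if the assistant id is missing.
type SuperinterfaceConfig = {
  baseUrl: string
  variables: { assistantId: string }
}

const buildConfig = (
  env: Record<string, string | undefined>,
): SuperinterfaceConfig => {
  const assistantId = env.NEXT_PUBLIC_ASSISTANT_ID
  if (!assistantId) {
    // Fail at startup instead of rendering an empty <Thread />.
    throw new Error('NEXT_PUBLIC_ASSISTANT_ID is not set; copy .env.example to .env')
  }
  return {
    // NEXT_PUBLIC_BASE_URL is an assumed override key; the example
    // defaults to the local API route used above.
    baseUrl: env.NEXT_PUBLIC_BASE_URL ?? 'http://localhost:3000/api',
    variables: { assistantId },
  }
}
```

The resulting object can be spread onto `SuperinterfaceProvider` (`<SuperinterfaceProvider {...buildConfig(process.env)}>`), keeping the validation in one place.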
Superinterface self-hosted example.
This example is hosted at https://examples-self-hosted.superinterface.ai.

Getting Started

Set up .env by copying .env.example to .env and filling in the required values.
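A minimal `.env` sketch, assuming only the variable the page code actually reads (`NEXT_PUBLIC_ASSISTANT_ID`); treat `.env.example` as the source of truth for any other keys:

```shell
# Hypothetical .env contents; asst_... is a placeholder for your assistant id.
# Assistant rendered by the <Thread /> component in app/page.tsx.
NEXT_PUBLIC_ASSISTANT_ID=asst_...
```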
First, run the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open http://localhost:3000 with your browser to see the result.
You can start editing the page by modifying app/page.tsx. The page auto-updates as you edit the file.