In this tutorial we'll build a production-ready AI chat feature using the OpenAI API. By the end you'll have a streaming chat interface, server-side API routes, and a solid understanding of prompt engineering for apps.
## What You'll Build

1. **Set Up the OpenAI Client.** Install the SDK, configure environment variables, and write a simple wrapper for the chat completions endpoint.
2. **Build a Streaming API Route.** Create a Next.js API route that streams tokens back to the client using the ReadableStream API.
3. **Build the Chat UI.** Create a React component that renders streaming text in real time, with message history and a loading state.
4. **Prompt Engineering & Safety.** Add a system prompt, handle rate limits gracefully, and sanitise user input before sending it to the API.
5. **Deploy to Production.** Deploy on Vercel with environment variable management and the edge runtime for the lowest latency.
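The heart of step 2 is turning the async-iterable stream the OpenAI SDK yields (when called with `stream: true`) into a web `ReadableStream` the browser can consume. A minimal sketch of that conversion, using a fake token generator in place of a live API response (the helper names `iterableToStream` and `fakeTokens` are ours, not part of the SDK):

```typescript
// Convert an async iterable of text chunks into a web ReadableStream
// of UTF-8 bytes, suitable for returning from a Next.js route handler.
function iterableToStream(iterable: AsyncIterable<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const chunk of iterable) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  });
}

// Stand-in for the token stream a chat completions call would yield.
async function* fakeTokens(): AsyncGenerator<string> {
  yield 'Hello, ';
  yield 'world!';
}

// Drain a ReadableStream back into a single string (useful in tests).
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let out = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    out += decoder.decode(value, { stream: true });
  }
  return out;
}
```

In a real route handler you would iterate over the SDK's stream, pull the text delta out of each chunk, and return `new Response(iterableToStream(...))` with a `text/plain` or `text/event-stream` content type.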
## Install the SDK

```bash
npm install openai
# or
pnpm add openai
```

```typescript
// lib/openai.ts
import OpenAI from 'openai';

export const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```
> **Tip:** Never expose your API key in client-side code. Always proxy requests through a server-side route.
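Step 4's input sanitisation can start as a small pure helper that runs server-side before the message reaches the API. A minimal sketch (the function name and the 4000-character cap are illustrative choices, not API requirements):

```typescript
// Strip control characters, trim whitespace, and cap the length so a
// single message can't blow the token budget. The cap is an example
// value; tune it to your model's context window.
function sanitiseInput(raw: string, maxLength = 4000): string {
  return raw
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, '') // drop control chars
    .trim()
    .slice(0, maxLength);
}
```

This is only a first line of defence; rate limiting and moderation (e.g. the OpenAI moderation endpoint) belong alongside it.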