
ai-elements

Production-quality skill for developing AI-native applications with AI Elements (https://ai-sdk.dev/elements). Use when building chat interfaces, conversational AI applications, or any AI-powered UI that needs components like Message, Conversation, Response, Code Block, Reasoning, Tool calls, Sources, Prompt Input, or other AI-specific UI patterns. Essential for Next.js projects integrating the Vercel AI SDK with shadcn/ui components.

AI Elements Development Skill

Comprehensive guide for building production-quality AI-native applications using AI Elements, the official component library built on shadcn/ui for the Vercel AI SDK.

Overview

AI Elements is a component library and custom registry that provides pre-built, customizable React components specifically designed for AI applications. It addresses the unique challenges of AI interfaces: streaming responses, tool calls, reasoning blocks, citations, and conversational patterns that standard UI libraries don't handle well.

Key Characteristics:

  • Built on shadcn/ui - components live in your codebase for full customization
  • TypeScript-first with proper type safety
  • Designed for Vercel AI SDK integration (useChat, streamText, etc.)
  • Handles streaming, incomplete markdown, and AI-specific UI patterns
  • Production-ready with security, accessibility, and performance built-in

Quick Start Workflow

1. Project Setup

Initialize a new Next.js project with the required dependencies:

```bash
# Create Next.js project (choose Tailwind CSS during setup)
npx create-next-app@latest my-ai-app && cd my-ai-app

# Install AI SDK dependencies
npm install ai @ai-sdk/react zod

# Install AI Elements (this also sets up shadcn/ui if not configured)
npx ai-elements@latest
```

Important: AI Elements requires:

  • Next.js project with AI SDK installed
  • Tailwind CSS configured
  • shadcn/ui CSS Variables mode (automatically configured by CLI)
  • Node.js 18 or later

2. Install Components

```bash
# Install all components at once
npx ai-elements@latest

# Or install specific components
npx ai-elements@latest add message
npx ai-elements@latest add conversation
npx ai-elements@latest add response
npx ai-elements@latest add prompt-input
npx ai-elements@latest add code-block
npx ai-elements@latest add reasoning
npx ai-elements@latest add tool
npx ai-elements@latest add sources
```

Alternative Method (using shadcn CLI):

```bash
# Install all components
npx shadcn@latest add https://registry.ai-sdk.dev/all.json

# Install specific component
npx shadcn@latest add https://registry.ai-sdk.dev/message.json
```

3. Setup API Route

Create /app/api/chat/route.ts:

```typescript
import { openai } from '@ai-sdk/openai';
import { streamText, convertToModelMessages, UIMessage } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```

4. Build Frontend Component

Create /app/page.tsx:

```typescript
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
import {
  Conversation,
  ConversationContent,
  ConversationEmptyState,
} from '@/components/ai-elements/conversation';
import {
  Message,
  MessageContent,
} from '@/components/ai-elements/message';
import {
  PromptInput,
  PromptInputTextarea,
  PromptInputSubmit,
} from '@/components/ai-elements/prompt-input';
import { Response } from '@/components/ai-elements/response';

export default function Chat() {
  const { messages, sendMessage, status } = useChat();
  const [input, setInput] = useState('');

  return (
    <div className="flex flex-col h-screen max-w-4xl mx-auto">
      <Conversation className="flex-1">
        <ConversationContent>
          {messages.length === 0 ? (
            <ConversationEmptyState />
          ) : (
            messages.map((message) => (
              <Message key={message.id} from={message.role}>
                <MessageContent>
                  {message.parts.map((part, i) => {
                    if (part.type === 'text') {
                      return <Response key={i}>{part.text}</Response>;
                    }
                    return null;
                  })}
                </MessageContent>
              </Message>
            ))
          )}
        </ConversationContent>
      </Conversation>
      <PromptInput
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onSubmit={(e) => {
          e.preventDefault();
          if (input.trim()) {
            sendMessage({ text: input });
            setInput('');
          }
        }}
        status={status}
      >
        <PromptInputTextarea placeholder="Ask anything..." />
        <PromptInputSubmit />
      </PromptInput>
    </div>
  );
}
```

5. Configure globals.css

Add to /app/globals.css (required for the Response component):

```css
@source "../node_modules/streamdown/dist/index.js";
```

This imports Streamdown styles used by the Response component for markdown rendering.

Core Components

Essential Components

  1. Conversation - Container with auto-scrolling for chat messages
  2. Message - Individual chat messages with avatars and styling
  3. Response - Markdown renderer with streaming support (uses Streamdown)
  4. Prompt Input - Advanced input with file upload and model selection
  5. Code Block - Syntax-highlighted code with copy button
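
As a quick illustration of the last item, a Code Block can also be used standalone, outside of a Response. This is a hedged sketch: the code and language props and the CodeBlockCopyButton child reflect the typically generated component, so verify the exact API against your installed components/ai-elements/code-block.tsx.

```typescript
import {
  CodeBlock,
  CodeBlockCopyButton,
} from '@/components/ai-elements/code-block';

// Renders syntax-highlighted code with a copy-to-clipboard action.
// Prop names (code, language) are assumptions; check the installed
// component for the exact signature.
export function ExampleSnippet() {
  return (
    <CodeBlock code={`console.log('hello');`} language="javascript">
      <CodeBlockCopyButton />
    </CodeBlock>
  );
}
```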

AI-Specific Components

  1. Reasoning - Collapsible AI reasoning/thinking display
  2. Tool - Tool call visualization with input/output
  3. Sources - Citation and source attribution
  4. Branch - Response variation navigation
  5. Loader - Loading states for AI operations

Interactive Components

  1. Actions - Action buttons for responses (copy, regenerate, etc.)
  2. Suggestion - Quick action suggestions
  3. Task - Task completion tracking
  4. Inline Citation - Inline source citations
  5. Image - AI-generated image display
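
As an example of the Suggestion component above, quick suggestions are typically rendered near the prompt input and feed the clicked text back into sendMessage. This is a hedged sketch: the Suggestions/Suggestion names and the suggestion/onClick props are assumptions based on the installed component, so verify against your generated suggestion.tsx.

```typescript
import {
  Suggestions,
  Suggestion,
} from '@/components/ai-elements/suggestion';

// Clicking a suggestion forwards its text to the chat (e.g. via sendMessage).
// Component and prop names are assumptions; check the installed component.
const quickPrompts = ['Summarize this page', 'Explain it more simply'];

export function QuickSuggestions({ onPick }: { onPick: (text: string) => void }) {
  return (
    <Suggestions>
      {quickPrompts.map((prompt) => (
        <Suggestion key={prompt} suggestion={prompt} onClick={() => onPick(prompt)} />
      ))}
    </Suggestions>
  );
}
```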

Advanced Components

  1. Web Preview - Embedded web page previews
  2. Shimmer - Loading shimmer effects
  3. Queue - Operation queue visualization

For detailed documentation on each component, see references/COMPONENTS.md.

AI SDK Integration Patterns

Pattern 1: Basic Chat with useChat

```typescript
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, sendMessage, status } = useChat({
    api: '/api/chat', // default endpoint
  });

  // messages: array of { id, role, parts }
  // sendMessage: function to send new messages
  // status: 'ready' | 'streaming' | 'submitted' | 'error'

  return (/* render UI */);
}
```

Key Points:

  • useChat manages state, streaming, and API calls automatically
  • Messages contain an ordered parts array (text, tool-call, reasoning, etc.)
  • Status indicates streaming state for UI feedback
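
Concretely, a message handed back by useChat has roughly the following shape. This is a simplified sketch of the UIMessage structure; real messages can carry additional part types (tool, reasoning, file) and optional metadata.

```typescript
import type { UIMessage } from 'ai';

// Simplified example of one assistant turn as useChat exposes it.
const example: UIMessage = {
  id: 'msg_123',
  role: 'assistant',
  parts: [
    { type: 'text', text: 'Here is a summary of the document...' },
    // Tool and reasoning parts appear alongside text parts in this array.
  ],
};
```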

Pattern 2: Handling Message Parts

Messages contain parts that represent different types of content:

```typescript
message.parts.map((part, i) => {
  switch (part.type) {
    case 'text':
      return <Response key={i}>{part.text}</Response>;
    case 'tool-call':
      return (
        <Tool key={i}>
          <ToolHeader type="tool-call" state="output-available" />
          <ToolContent>
            <ToolInput input={JSON.stringify(part.args)} />
          </ToolContent>
        </Tool>
      );
    case 'tool-result':
      return (
        <Tool key={i}>
          <ToolContent>
            <ToolOutput output={part.result} />
          </ToolContent>
        </Tool>
      );
    default:
      return null;
  }
});
```

Pattern 3: Streaming API Route

Backend route using streamText:

```typescript
import { openai } from '@ai-sdk/openai';
import { streamText, convertToModelMessages, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages: convertToModelMessages(messages),
    tools: {
      get_weather: tool({
        description: 'Get weather for a city',
        parameters: z.object({
          city: z.string().describe('City name'),
        }),
        execute: async ({ city }) => {
          // Tool implementation
          return { temperature: 72, conditions: 'sunny' };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```

Pattern 4: Multiple Model Support

```typescript
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [selectedModel, setSelectedModel] = useState('gpt-4o-mini');
  const { messages, sendMessage } = useChat({
    body: { model: selectedModel }, // Pass to backend
  });

  return (
    <PromptInput /* ... */>
      <PromptInputModelSelect
        value={selectedModel}
        onChange={setSelectedModel}
      >
        <PromptInputModelSelectTrigger />
        <PromptInputModelSelectContent>
          <PromptInputModelSelectItem value="gpt-4o-mini">
            GPT-4o Mini
          </PromptInputModelSelectItem>
          <PromptInputModelSelectItem value="gpt-4o">
            GPT-4o
          </PromptInputModelSelectItem>
        </PromptInputModelSelectContent>
      </PromptInputModelSelect>
    </PromptInput>
  );
}
```

Backend handles model selection:

```typescript
export async function POST(req: Request) {
  const { messages, model } = await req.json();

  const result = streamText({
    model: openai(model || 'gpt-4o-mini'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```

Advanced Patterns

Reasoning Display (DeepSeek R1, etc.)

For models that output reasoning tokens:

```typescript
// Frontend
<Message from={message.role}>
  <MessageContent>
    {message.parts.map((part, i) => {
      if (part.type === 'reasoning') {
        return (
          <Reasoning
            key={i}
            isStreaming={
              status === 'streaming' &&
              i === message.parts.length - 1 &&
              message.id === messages.at(-1)?.id
            }
          >
            <ReasoningTrigger />
            <ReasoningContent>{part.text}</ReasoningContent>
          </Reasoning>
        );
      }
      return <Response key={i}>{part.text}</Response>;
    })}
  </MessageContent>
</Message>
```
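
On the backend, reasoning parts only reach the client if the route forwards them. A minimal sketch, assuming a reasoning-capable model via @ai-sdk/deepseek and the sendReasoning option on toUIMessageStreamResponse (verify both against your AI SDK version):

```typescript
import { deepseek } from '@ai-sdk/deepseek';
import { streamText, convertToModelMessages, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    // deepseek-reasoner emits reasoning tokens alongside the final answer.
    model: deepseek('deepseek-reasoner'),
    messages: convertToModelMessages(messages),
  });

  // sendReasoning forwards reasoning parts so the Reasoning component can
  // render them on the client. (Assumed option; confirm for your version.)
  return result.toUIMessageStreamResponse({ sendReasoning: true });
}
```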

Tool Calls with Status

```typescript
<Tool>
  <ToolHeader
    type="tool-call"
    state={
      part.result
        ? 'output-available'
        : part.error
          ? 'output-error'
          : 'input-available'
    }
  />
  <ToolContent>
    <ToolInput input={JSON.stringify(part.args, null, 2)} />
    {part.result && (
      <ToolOutput output={JSON.stringify(part.result, null, 2)} />
    )}
    {part.error && (
      <ToolOutput errorText={part.error} />
    )}
  </ToolContent>
</Tool>
```

Web Search with Citations

```typescript
// Backend: streamText with sources
const result = streamText({
  model: openai('gpt-4o'),
  messages,
  experimental_transform: (stream) => {
    return stream.pipeThrough(
      new TransformStream({
        transform(chunk, controller) {
          // Add source metadata
          if (chunk.type === 'text-delta') {
            controller.enqueue({
              ...chunk,
              sources: [/* source objects */],
            });
          }
        },
      })
    );
  },
});

// Frontend: render sources
{part.type === 'text' && part.sources && (
  <Sources>
    {part.sources.map((source, i) => (
      <Source
        key={i}
        title={source.title}
        url={source.url}
        description={source.description}
      />
    ))}
  </Sources>
)}
```

File Upload Support

```typescript
<PromptInput
  onFilesChange={(files) => {
    // Handle file upload
    console.log('Files:', files);
  }}
  maxFiles={5}
  maxFileSize={10 * 1024 * 1024} // 10MB
  accept={{
    'image/*': ['.png', '.jpg', '.jpeg', '.gif'],
    'application/pdf': ['.pdf'],
  }}
>
  <PromptInputTextarea />
  <PromptInputToolbar>
    <PromptInputFileUpload />
  </PromptInputToolbar>
  <PromptInputSubmit />
</PromptInput>
```

Component Customization

All components are installed directly into your codebase at @/components/ai-elements/, allowing full customization:

Example: Customize Message Styling

Edit components/ai-elements/message.tsx:

```typescript
// Change user message background color
className: cn(
  'group-[.is-user]:bg-blue-500', // Changed from bg-primary
  'group-[.is-user]:text-white',
  // ... other classes
)
```

Example: Custom Response Components

```typescript
import { Response } from '@/components/ai-elements/response';

<Response
  components={{
    h1: ({ children }) => (
      <h1 className="text-4xl font-bold my-custom-class">
        {children}
      </h1>
    ),
    code: CustomCodeBlock,
    a: CustomLink,
  }}
>
  {markdown}
</Response>
```

Best Practices

1. Component Organization

```
app/
├── api/
│   └── chat/
│       └── route.ts            # API route
├── components/
│   ├── ai-elements/            # AI Elements components (auto-installed)
│   ├── chat/
│   │   ├── chat-interface.tsx
│   │   ├── chat-message.tsx
│   │   └── chat-input.tsx
│   └── ui/                     # Other shadcn/ui components
├── lib/
│   ├── ai/
│   │   ├── providers.ts        # Model provider configuration
│   │   └── tools.ts            # Tool definitions
│   └── utils.ts
└── page.tsx
```
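
One way to fill in the lib/ai/tools.ts slot from the tree above is to keep tool definitions in their own module and spread them into streamText. A sketch following the tool() pattern from Pattern 3; the weather tool is illustrative only:

```typescript
// lib/ai/tools.ts
import { tool } from 'ai';
import { z } from 'zod';

export const tools = {
  get_weather: tool({
    description: 'Get weather for a city',
    parameters: z.object({
      city: z.string().describe('City name'),
    }),
    execute: async ({ city }) => {
      // Replace with a real weather lookup.
      return { city, temperature: 72, conditions: 'sunny' };
    },
  }),
};
```

The API route can then import { tools } and pass them to streamText, keeping route.ts focused on request handling.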

2. Type Safety

Always use TypeScript and import proper types:

```typescript
import type { UIMessage } from 'ai';
import type { UseChatHelpers } from '@ai-sdk/react';

interface ChatProps {
  initialMessages?: UIMessage[];
  onError?: (error: Error) => void;
}
```

3. Error Handling

```typescript
const { messages, error, reload } = useChat({
  onError: (error) => {
    console.error('Chat error:', error);
    toast.error('Failed to send message');
  },
});

{error && (
  <div className="text-red-500">
    Error: {error.message}
    <button onClick={() => reload()}>Retry</button>
  </div>
)}
```

4. Loading States

```typescript
<PromptInputSubmit disabled={status !== 'ready'}>
  {status === 'streaming' ? 'Streaming...' : 'Send'}
</PromptInputSubmit>

{status === 'streaming' && <Loader />}
```

5. Performance Optimization

  • Use React.memo for expensive components
  • Implement virtualization for long conversations (see the sketch after the memo example below)
  • Stream responses instead of waiting for completion
  • Use proper key props in lists
```typescript
import { memo } from 'react';

const ChatMessage = memo(({ message }: { message: UIMessage }) => {
  return (
    <Message from={message.role}>
      {/* content */}
    </Message>
  );
});
```
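
For the virtualization bullet above, one option (not part of AI Elements) is react-virtuoso, which only mounts the rows currently in view. A minimal sketch, assuming messages follow the UIMessage shape used elsewhere in this guide; note that it replaces Conversation's own scroll container:

```typescript
import { Virtuoso } from 'react-virtuoso';
import type { UIMessage } from 'ai';
import { Message, MessageContent } from '@/components/ai-elements/message';
import { Response } from '@/components/ai-elements/response';

// Only visible messages are rendered; followOutput keeps the list pinned
// to the newest message while a response streams in.
export function VirtualizedConversation({ messages }: { messages: UIMessage[] }) {
  return (
    <Virtuoso
      className="flex-1"
      data={messages}
      followOutput="smooth"
      itemContent={(_, message) => (
        <Message from={message.role}>
          <MessageContent>
            {message.parts.map((part, i) =>
              part.type === 'text' ? <Response key={i}>{part.text}</Response> : null
            )}
          </MessageContent>
        </Message>
      )}
    />
  );
}
```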

6. Accessibility

All AI Elements components are built with accessibility in mind:

  • Keyboard navigation support
  • Proper ARIA labels
  • Focus management
  • Screen reader compatibility

Enhance with custom ARIA attributes:

```typescript
<Message
  role="article"
  aria-label={`Message from ${message.role}`}
>
  {/* content */}
</Message>
```

7. AI Gateway Integration

For production applications, use Vercel AI Gateway:

```bash
# .env.local
AI_GATEWAY_API_KEY=your_key_here

# No model API keys needed - AI Gateway handles routing
```

Benefits:

  • $5/month free usage
  • Single API key for all providers
  • Built-in caching and rate limiting
  • Usage analytics
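
With the gateway key set, API routes can reference models through the gateway instead of provider-specific clients. A minimal sketch, assuming the AI SDK's "provider/model" string ids resolve through the Vercel AI Gateway when AI_GATEWAY_API_KEY is present (confirm against the current AI Gateway docs):

```typescript
import { streamText, convertToModelMessages, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // A plain "provider/model" string is routed via the gateway - no
  // @ai-sdk/openai client or OPENAI_API_KEY needed. (Assumed behavior.)
  const result = streamText({
    model: 'openai/gpt-4o-mini',
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```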

Common Patterns

Multi-Turn Conversations with Context

```typescript
const { messages, sendMessage } = useChat({
  initialMessages: [
    {
      id: '1',
      role: 'system',
      parts: [
        {
          type: 'text',
          text: 'You are a helpful assistant.',
        },
      ],
    },
  ],
});
```

Regenerate Response

```typescript
const { messages, reload } = useChat();

<Actions>
  <ActionButton onClick={() => reload()}>
    <RefreshIcon />
    Regenerate
  </ActionButton>
</Actions>
```

Stop Streaming

```typescript
const { status, stop } = useChat();

{status === 'streaming' && (
  <button onClick={stop}>Stop generating</button>
)}
```

Branch Conversations

```typescript
<Branch>
  {responses.map((response, i) => (
    <BranchItem
      key={i}
      active={i === currentIndex}
      onClick={() => setCurrentIndex(i)}
    >
      Response {i + 1}
    </BranchItem>
  ))}
</Branch>
```

Environment Configuration

Minimal Setup (.env.local)

```bash
# Option 1: Use AI Gateway (recommended)
AI_GATEWAY_API_KEY=your_gateway_key

# Option 2: Direct provider keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```

Provider Configuration (lib/ai/providers.ts)

```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';
import { createGoogleGenerativeAI } from '@ai-sdk/google';

export const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

export const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY,
});
```

Troubleshooting

For comprehensive troubleshooting information including version compatibility, tool calling errors, and debugging patterns, see references/TROUBLESHOOTING.md.

Common Issues

Issue: Response component styles not applied
Solution: Ensure @source "../node_modules/streamdown/dist/index.js"; is present in globals.css

Issue: Components not found after installation
Solution: Check components.json for the correct path configuration. The default is @/components/ai-elements/

Issue: TypeScript errors with message.parts
Solution: Import and use the proper types: import type { UIMessage } from 'ai';

Issue: Streaming not working
Solution: Ensure the API route returns result.toUIMessageStreamResponse(), not result.toDataStreamResponse() (AI SDK v5+). For AI SDK 3.x/4.x, use result.toDataStreamResponse(). See TROUBLESHOOTING.md for version differences.

Issue: Message parts not rendering
Solution: Always check part.type and handle all cases (text, tool-call, tool-result, reasoning)

Issue: Tool calling schema validation errors
Solution: See TROUBLESHOOTING.md for detailed guidance on tool schema validation, especially the "Invalid schema for function: got 'type: None'" error.

Project Templates

See assets/templates/ for complete project templates:

  • basic-chat - Simple chat interface
  • advanced-chat - Chat with tools, reasoning, and sources
  • multimodal-chat - Chat with image and file upload
  • agentic-chat - Multi-agent conversation system

Additional Resources

Official Documentation

  • AI Elements: https://ai-sdk.dev/elements
  • Vercel AI SDK: https://ai-sdk.dev

Version Compatibility

  • AI SDK: v5.x (v6 beta available)
  • Next.js: 14.x - 15.x
  • React: 18.x - 19.x
  • Node.js: 18.x or later
  • TypeScript: 5.x recommended

Last Updated: January 2025
