# Building ChatGpMe: AI Integration with Cloudflare Workers
I built ChatGpMe as an experiment to explore Cloudflare’s AI capabilities. Here’s a detailed look at how I implemented both text and image generation features.
## Chat Implementation
The chat system uses streaming responses for a better user experience. Here’s the core implementation:
```js
export async function onRequest({ request, env }) {
  const model = "@cf/meta/llama-3.3-70b-instruct-fp8-fast";
  const { messages } = await request.json();

  const response = await env.AI.run(
    model,
    { messages, stream: true },
    { gateway: { id: 'martingpt', skipCache: true } }
  );

  // SSEToStream is a small custom TransformStream that converts the
  // model's server-sent events into newline-delimited JSON.
  const stream = response
    .pipeThrough(new TextDecoderStream())
    .pipeThrough(new SSEToStream())
    .pipeThrough(new TextEncoderStream());

  return new Response(stream, {
    headers: { 'content-type': 'application/x-ndjson' },
  });
}
```
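The `SSEToStream` class above is a custom transform, not a built-in. A minimal sketch of what it might look like, assuming the standard Workers AI SSE framing (`data: {...}` events separated by blank lines, terminated by a `[DONE]` sentinel):

```javascript
// Strip the "data: " framing from one SSE event and return it as an
// NDJSON line; return null for empty events or the [DONE] sentinel.
function sseEventToLine(event) {
  const data = event.replace(/^data:\s*/, '').trim();
  if (!data || data === '[DONE]') return null;
  return data + '\n';
}

// TransformStream that buffers partial events across chunks and emits
// one JSON line per complete SSE event.
class SSEToStream extends TransformStream {
  constructor() {
    let buffer = '';
    super({
      transform(chunk, controller) {
        buffer += chunk;
        const events = buffer.split('\n\n');
        buffer = events.pop() ?? ''; // keep any incomplete trailing event
        for (const event of events) {
          const line = sseEventToLine(event);
          if (line) controller.enqueue(line);
        }
      },
    });
  }
}
```

Buffering across chunk boundaries matters here: the network can split an SSE event mid-line, so only complete `\n\n`-terminated events are parsed and forwarded.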
## Image Generation
I implemented image generation using the Flux model from Black Forest Labs, available through Cloudflare Workers AI:
```js
export async function generateImage({ request, env }) {
  const { prompt } = await request.json();
  const gateway_id = "martingpt";

  const response = await env.AI.run(
    "@cf/black-forest-labs/flux-1-schnell",
    { prompt },
    { gateway: { id: gateway_id, skipCache: true } }
  );

  // The response includes the image as a base64-encoded string
  const dataURI = `data:image/jpeg;charset=utf-8;base64,${response.image}`;
  return Response.json({ dataURI });
}
```
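Consuming this endpoint from the client is a single fetch. A sketch, where the `/api/image` route name is an assumption, not something from the code above:

```javascript
// Hypothetical client helper for the image endpoint; the /api/image
// route name is assumed for illustration.
async function fetchImage(prompt) {
  const res = await fetch('/api/image', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  const { dataURI } = await res.json();
  return dataURI; // usable directly as an <img> src
}
```

Because the server returns a data URI rather than raw bytes, the client needs no blob handling: the string drops straight into an image element's `src`.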
## Frontend Integration
The chat interface uses React with TypeScript for type safety:
```tsx
import { useState } from 'react';

interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

function ChatInterface() {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [isLoading, setIsLoading] = useState(false);

  async function handleSubmit(userMessage: string) {
    setIsLoading(true);
    const history: ChatMessage[] = [...messages, { role: 'user', content: userMessage }];
    // Show the user message plus an empty assistant placeholder right away;
    // the placeholder is replaced as chunks stream in.
    setMessages([...history, { role: 'assistant', content: '' }]);
    try {
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ messages: history }),
      });
      if (!response.body) throw new Error('No response body');

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let accumulatedResponse = '';

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        accumulatedResponse += decoder.decode(value, { stream: true });
        setMessages(prev => [
          ...prev.slice(0, -1),
          { role: 'assistant', content: accumulatedResponse },
        ]);
      }
    } finally {
      setIsLoading(false);
    }
  }

  return (
    // Chat interface JSX
  );
}
```
## Cloudflare AI Gateway
The application routes every model call through Cloudflare’s AI Gateway, which provides several built-in features:

- Request caching at the edge (disabled per-request here with `skipCache: true`, since chat responses shouldn’t be reused)
- Built-in rate limiting protection
- Request queueing and load balancing
- Usage analytics and monitoring

This lets me focus on the application logic while the gateway handles scaling and protection.
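For reference, the `env.AI` object used throughout comes from the Workers AI binding declared in `wrangler.toml`; a minimal sketch:

```toml
# Exposes the Workers AI binding as env.AI inside the Worker.
# The gateway id is passed per-request in code, as shown above.
[ai]
binding = "AI"
```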