# Chat Demo
A streaming chat UI with a mock LLM that returns Markdown-formatted responses.
## Run

Run `uv run uvicorn examples.chat_demo:app --reload`, then open http://localhost:8000 in your browser.
## Code
"""Chat demo with mock LLM streaming.
Run:
uv run uvicorn examples.chat_demo:app --reload
"""
import asyncio
from fastapi import FastAPI, Request
from kokage_ui import KokageUI, Page
from kokage_ui.ai import ChatMessage, ChatView, chat_stream
app = FastAPI()
ui = KokageUI(app)
MOCK_RESPONSE = (
"こんにちは!何かお手伝いできることはありますか?\n\n"
"例えば、以下のようなことができます:\n\n"
"- **質問に回答** する\n"
"- **コードを書く** ことをサポートする\n"
"- **文章を要約** する\n\n"
"```python\n"
'print("Hello, World!")\n'
"```\n\n"
"お気軽にどうぞ!"
)
@ui.page("/")
def chat_page():
return Page(
ChatView(
send_url="/api/chat",
messages=[
ChatMessage(role="assistant", content="こんにちは!何でも聞いてください。"),
],
assistant_name="AI",
user_name="あなた",
),
title="Chat Demo",
include_marked=True,
include_highlightjs=True,
)
@app.post("/api/chat")
async def chat(request: Request):
data = await request.json()
user_message = data["message"]
async def generate():
# Mock LLM: stream the response character by character
for char in MOCK_RESPONSE:
yield char
await asyncio.sleep(0.02)
return chat_stream(generate())
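The endpoint's streaming generator can be exercised on its own, outside FastAPI. A minimal stdlib-only sketch (with a shortened mock response, and the 0.02 s delay dropped so it runs instantly) that collects the streamed characters:

```python
import asyncio

MOCK_RESPONSE = "Hello!"


async def generate():
    # Stream the response character by character, like a token stream.
    for char in MOCK_RESPONSE:
        yield char
        await asyncio.sleep(0)  # yield control; the demo uses 0.02 s


async def main() -> str:
    # Consume the async generator and reassemble the full message.
    chunks = [chunk async for chunk in generate()]
    return "".join(chunks)


print(asyncio.run(main()))  # Hello!
```

This is the same pattern the real endpoint uses; swapping `generate()` for a call to an actual LLM client's streaming API is the only change needed.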
## Features Demonstrated
- `ChatView` — full chat interface with DaisyUI chat bubbles
- `ChatMessage` — initial assistant greeting message
- `chat_stream` — SSE streaming response from an async generator
- Markdown rendering — `include_marked=True` for rich text display
- Code highlighting — `include_highlightjs=True` for syntax highlighting in code blocks
## Key Patterns
### Streaming Response
```python
async def generate():
    for char in text:
        yield char
        await asyncio.sleep(0.02)  # Simulate LLM latency

return chat_stream(generate())
```
The `chat_stream()` helper wraps any async string generator in an SSE `StreamingResponse`. Each yielded string becomes a `{"token": "..."}` event.
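The exact wire format is internal to kokage_ui, but a typical SSE framing for such token events (an illustrative sketch, not the library's code) serializes each token as a `data:` line followed by a blank line:

```python
import json


def sse_event(token: str) -> str:
    # One SSE message: a "data:" line, then a blank line as the delimiter.
    return f"data: {json.dumps({'token': token})}\n\n"


print(sse_event("Hi"), end="")  # data: {"token": "Hi"}
```

Encoding each token with `json.dumps` keeps newlines and quotes inside tokens from corrupting the event stream.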
### Custom Names
The `assistant_name` and `user_name` display names appear in the chat bubble headers.