
Chat Demo

A streaming chat UI with a mock LLM that returns Markdown-formatted responses.

Run

uvicorn examples.chat_demo:app --reload

Open http://localhost:8000 in your browser.

Code

"""Chat demo with mock LLM streaming.

Run:
    uv run uvicorn examples.chat_demo:app --reload
"""

import asyncio

from fastapi import FastAPI, Request

from kokage_ui import KokageUI, Page
from kokage_ui.ai import ChatMessage, ChatView, chat_stream

app = FastAPI()
ui = KokageUI(app)

MOCK_RESPONSE = (
    "Hello! Is there anything I can help you with?\n\n"
    "For example, I can:\n\n"
    "- **Answer questions**\n"
    "- **Help write code**\n"
    "- **Summarize text**\n\n"
    "```python\n"
    'print("Hello, World!")\n'
    "```\n\n"
    "Feel free to ask!"
)


@ui.page("/")
def chat_page():
    return Page(
        ChatView(
            send_url="/api/chat",
            messages=[
                ChatMessage(role="assistant", content="Hello! Ask me anything."),
            ],
            assistant_name="AI",
            user_name="You",
        ),
        title="Chat Demo",
        include_marked=True,
        include_highlightjs=True,
    )


@app.post("/api/chat")
async def chat(request: Request):
    data = await request.json()
    user_message = data["message"]

    async def generate():
        # Mock LLM: stream the response character by character
        for char in MOCK_RESPONSE:
            yield char
            await asyncio.sleep(0.02)

    return chat_stream(generate())

Features Demonstrated

  • ChatView — Full chat interface with DaisyUI chat bubbles
  • ChatMessage — Initial assistant greeting message
  • chat_stream — SSE streaming response from async generator
  • Markdown rendering — include_marked=True for rich text display
  • Code highlighting — include_highlightjs=True for syntax highlighting in code blocks

Key Patterns

Streaming Response

async def generate():
    for char in text:
        yield char
        await asyncio.sleep(0.02)  # Simulate LLM latency

return chat_stream(generate())

The chat_stream() helper wraps any async string generator into an SSE StreamingResponse. Each yielded string becomes a {"token": "..."} event.
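For intuition, that framing can be sketched as a generator that wraps another generator. This is an illustrative sketch, not kokage_ui's actual implementation; the sse_frames name is made up here, and only the {"token": "..."} payload shape comes from the description above:

```python
import asyncio
import json
from typing import AsyncIterator


async def sse_frames(tokens: AsyncIterator[str]) -> AsyncIterator[str]:
    """Frame each yielded string as a server-sent event with a JSON payload."""
    async for token in tokens:
        # One SSE event per token: a "data:" line plus a blank-line terminator.
        yield f'data: {json.dumps({"token": token})}\n\n'


async def demo() -> list[str]:
    async def tokens():
        for t in ("Hi", "!"):
            yield t

    return [frame async for frame in sse_frames(tokens())]


frames = asyncio.run(demo())
# frames[0] == 'data: {"token": "Hi"}\n\n'
```

In the real helper, these framed strings would then be handed to a StreamingResponse with the SSE media type.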

Custom Names

ChatView(
    send_url="/api/chat",
    assistant_name="AI",
    user_name="You",
)

Display names appear in the chat bubble headers.
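Outside ChatView, the same stream can be consumed by any SSE-aware client. A minimal sketch of reassembling the streamed text from the data: lines (the parser below is illustrative, not part of kokage_ui, and assumes the {"token": "..."} event shape described above):

```python
import json


def parse_sse_tokens(raw: str) -> str:
    """Reassemble streamed text from raw SSE 'data:' lines."""
    tokens = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            payload = json.loads(line[len("data: "):])
            tokens.append(payload["token"])
    return "".join(tokens)


raw = 'data: {"token": "Hel"}\n\ndata: {"token": "lo"}\n\n'
print(parse_sse_tokens(raw))  # Hello
```

A browser client would normally get the same result for free via EventSource or a fetch-based SSE reader.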