Welcome to Kedi

A typed reasoning layer for LLM orchestration.

Kedi is a DSL for building LLM-powered applications: write multi-step AI workflows with strong typing, Python interop, and clear dataflow, all in a clean, indentation-scoped syntax that compiles to production-ready code.

The Power of Kedi

research_assistant.kedi
# Prelude: Import Python libraries available throughout
from datetime import datetime
from collections import Counter
# Define structured types for type-safe LLM outputs
~Source(title, url, credibility_score: float, summary)
~ResearchFinding(claim, evidence: list[str], confidence: float, sources: list[Source])
~ResearchReport(topic, findings: list[ResearchFinding], methodology, \
    conclusion, generated_at: str)

# Procedure to analyze a single source with typed output
@analyze_source(url, topic) -> Source:
    Analyze the source at <url> for information about "<topic>". \
    Provide [title], [summary], and [credibility_score: float] (0.0-1.0).
    = `Source(title=title, url=url, credibility_score=credibility_score, summary=summary)`

# Procedure to synthesize findings from multiple sources
@synthesize_findings(sources: list[Source], topic) -> list[ResearchFinding]:
    Given these sources about "<topic>": \
    <`'\n'.join([f"- {s.title}: {s.summary}" for s in sources])`> \
    Extract key [findings: list[ResearchFinding]] with evidence and confidence scores.
    = `findings`

# AI-generated procedure with specification
@generate_methodology(topic, source_count: int) -> str:
    > Generate a methodology section describing how <source_count> sources \
    were analyzed to research the topic. Include search strategy, \
    evaluation criteria, and synthesis approach.

# Main research pipeline
@research(topic, urls: list[str]) -> ResearchReport:
    # Analyze all sources in parallel (Python list comprehension)
    [sources: list[Source]] = `[analyze_source(url, topic) for url in urls]`

    # Filter credible sources
    [credible: list[Source]] = `[s for s in sources if s.credibility_score > 0.6]`

    # Synthesize findings from credible sources
    [findings: list[ResearchFinding]] = `synthesize_findings(credible, topic)`

    # Generate methodology and conclusion
    [methodology] = `generate_methodology(topic, len(credible))`

    Based on these findings about "<topic>": \
    <`'\n'.join([f.claim for f in findings])`> \
    Write a comprehensive [conclusion] summarizing the research.

    = `ResearchReport(
        topic=topic,
        findings=findings,
        methodology=methodology,
        conclusion=conclusion,
        generated_at=datetime.now().isoformat()
    )`

# Execute the research
[urls: list[str]] = `[
    "https://example.com/ai-safety",
    "https://example.com/ml-research",
    "https://example.com/tech-trends"
]`
[report: ResearchReport] = `research("AI Safety in 2025", urls)`

= <`report.conclusion`>

What This Demonstrates

  • Custom Types (~Source, ~ResearchFinding, ~ResearchReport) for structured LLM outputs
  • Typed Procedures with parameters and return types
  • Python Interop with list comprehensions and datetime
  • AI-Generated Procedures using > specification syntax
  • Line Continuation with \ for readable multi-line prompts
  • Multi-step Pipelines chaining LLM calls with data transformations

Key Features

  • Typed Procedures


    Define inputs and outputs with full type annotations. Get compile-time validation and structured data from LLMs automatically.

    @greet(name: str) -> str:
        Hello! A warm [greeting] for <name>.
        = `greeting`
    
  • Python Interop


    Seamlessly mix Python logic with LLM prompts. Use inline expressions, multiline blocks, and import any library (a combined sketch follows this list).

    [result: float] = `math.sqrt(16)`
    
  • Structured Output


    Define custom types with ~ syntax. Get validated Pydantic objects back from LLMs automatically.

    ~Person(name, age: int, email)
    [user: Person] = ...
    
  • Multiple Adapters


    Use PydanticAI, DSPy, or create custom adapters. Run against OpenAI, Anthropic, Groq, or local models.

    kedi --adapter pydantic
    kedi --adapter dspy
    
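As a follow-up to the Python Interop feature above, here is a minimal sketch that combines a prelude import, inline expressions inside a prompt, and a typed return. It uses only constructs that appear in the research example; the procedure name and prompt wording are illustrative:

    # Prelude: Python imports are available throughout
    from statistics import mean

    @summarize_scores(scores: list[float]) -> str:
        The average of <`len(scores)`> scores is <`round(mean(scores), 2)`>. \
        Write a short [summary] of what this average suggests.
        = `summary`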

Core Syntax at a Glance

Syntax        Purpose                  Example
@name()       Define a procedure       @greet(name: str) -> str:
~Type()       Define a custom type     ~Person(name, age: int)
<var>         Substitute a variable    Hello, <name>!
[out]         Capture LLM output       The capital is [capital].
[out: type]   Typed LLM output         [cities: list[str]]
\             Continue to next line    Long prompt \
`expr`        Inline Python            <`2 + 2`>
```           Python code block        Multiline Python execution
>             AI-generated procedure   > Specification for AI...
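
Putting several of these together, here is a minimal sketch that follows the same conventions as the research example above; the type, procedure, and values are illustrative:

    ~City(name, country)

    @capital_of(country_name) -> City:
        The capital of <country_name> is [capital: City]. \
        Give the city name and its country.
        = `capital`

    [result: City] = `capital_of("France")`
    = <`result.name`>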

Quick Start

pip install kedi

hello.kedi
@greet(name) -> str:
    Hello! A warm [greeting] for <name>.
    = `greeting`

= <`greet("World")`>

kedi hello.kedi --adapter pydantic

Explore the Documentation