🚧 FLIN is in active development. Coming soon! Thank you for your patience. Join Discord for updates.

One Line to AI

No SDKs. No configuration. Just call:

ai-example.flin
// Claude (Anthropic)
answer = ask_claude("What is FLIN?")

// OpenAI
answer = ask_openai("Write a haiku about programming")

// Grok (X.AI)
answer = ask_grok("Tell me a joke")

// Mistral
answer = ask_mistral("Translate to French: Hello")

// FLIN's own AI (api.flin.sh)
answer = ask_flin("Generate a User entity")

Automatic Fallback

Use ask_ai() to automatically try providers in order:

fallback.flin
// Tries: Claude → OpenAI → Grok → Mistral → FLIN
answer = ask_ai("Explain quantum computing simply")

print(answer)
💡 Smart Fallback

If Claude is down or rate-limited, FLIN automatically tries the next available provider. No code changes needed.

With Options

Pass an options object for more control:

with-options.flin
answer = ask_claude("Write production-ready code", {
    model: "claude-sonnet-4-20250514",
    max_tokens: 4000,
    temperature: 0.3,
    system_prompt: "You are a senior Rust developer."
})

Available Options

Option         Type   Default           Description
model          text   Provider default  Model to use
max_tokens     int    1024              Maximum response tokens
temperature    float  1.0               Creativity (0.0 - 2.0)
system_prompt  text   none              System instructions

Supported Providers

Function        Provider    Default Model             Env Variable
ask_claude()    Anthropic   claude-sonnet-4-20250514  ANTHROPIC_API_KEY
ask_openai()    OpenAI      gpt-4o                    OPENAI_API_KEY
ask_grok()      X.AI        grok-3                    XAI_API_KEY
ask_mistral()   Mistral     mistral-large-latest      MISTRAL_API_KEY
ask_flin()      FLIN Cloud  flin-coder                FLIN_API_KEY
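
Each function uses its provider's default model unless you override it with the model option, as in the options example above. A minimal sketch (gpt-4o-mini here is just an illustrative model name, not a FLIN default):

override-model.flin
// OpenAI defaults to gpt-4o; request a different model explicitly
answer = ask_openai("Summarize FLIN in one sentence", {
    model: "gpt-4o-mini"
})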

API Keys Configuration

Set your API keys as environment variables:

~/.zshrc or ~/.bashrc
# AI Gateway Keys
export ANTHROPIC_API_KEY="sk-ant-api03-..."
export OPENAI_API_KEY="sk-proj-..."
export XAI_API_KEY="xai-..."
export MISTRAL_API_KEY="..."

# FLIN Cloud
export FLIN_API_KEY="flin_..."
⚠ Security

Never commit API keys to git. Use .env files (add to .gitignore) or environment variables.
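
The same keys can live in a .env file instead; the format drops the export keyword, one KEY=value per line. A sketch, assuming FLIN reads .env from the project root:

.env
# Keep this file out of git (add it to .gitignore)
ANTHROPIC_API_KEY=sk-ant-api03-...
OPENAI_API_KEY=sk-proj-...
XAI_API_KEY=xai-...
MISTRAL_API_KEY=...
FLIN_API_KEY=flin_...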

Built-in Features

Caching

LRU cache with a 1-hour TTL. An identical prompt returns the cached response instantly, with no extra API cost.
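
Because responses are cached, repeating a prompt within the TTL is free. A minimal sketch using the calls shown above:

cache-example.flin
// First call hits the API
first = ask_claude("What is FLIN?")

// Identical prompt within 1 hour: served from the cache, no API cost
second = ask_claude("What is FLIN?")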

Retry Logic

Exponential backoff with jitter. 429 rate-limit and 503 server-error responses are retried automatically.

Circuit Breaker

Per-provider circuit breaker. If a provider fails 5 times, it's bypassed for 60 seconds.

Validation

Prompt size limits, token bounds, and a temperature range check (0.0 - 2.0) reject invalid requests before they waste API calls.

Real Example: AI Chat

app/index.flin
question = ""
answer = none
loading = false

fn askQuestion() {
    if question.len > 0 {
        loading = true
        answer = ask_ai(question)
        loading = false
    }
}

<div class="chat">
    <input
        placeholder="Ask anything..."
        value={question}
        enter={askQuestion()} />

    <button click={askQuestion()} disabled={loading}>
        {if loading then "Thinking..." else "Ask"}
    </button>

    {if answer}
        <div class="response">{answer}</div>
    {/if}
</div>

Where to Get API Keys

Provider            Dashboard
Anthropic (Claude)  console.anthropic.com
OpenAI              platform.openai.com
X.AI (Grok)         console.x.ai
Mistral             console.mistral.ai
FLIN Cloud          flin.dev/api-keys