Multi-agent Workflows

Combine staik models in agent workflows. Each model has its strengths — use the right model for the right task.

Model            | Strengths                          | Agent role
qwen3.6:35b-a3b  | Coding, complex tasks, vision      | Coder, problem solver, image analysis
gemma4:31b       | Accuracy, review, language, vision | Reviewer, orchestrator, image analysis
qwen3.5:9b       | Fast, simpler tasks                | Routing, summarization
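The "Routing" role in the table can be sketched without a framework: the fast model classifies the request, and the stronger model is called only when needed. This is a minimal sketch, not part of any SDK; the `ask` callable (a hypothetical `(model, system, user) -> reply` wrapper around one chat completion) is injected so the routing logic stays testable:

```python
from typing import Callable

# Hypothetical helper signature: with the OpenAI SDK, `ask` would wrap
# client.chat.completions.create(...) and return the message content.
Ask = Callable[[str, str, str], str]

ROUTES = {
    "code": "qwen3.6:35b-a3b",   # coding and complex tasks
    "review": "gemma4:31b",      # accuracy and review
    "simple": "qwen3.5:9b",      # fast, simpler tasks
}

def route(question: str, ask: Ask) -> str:
    """Classify with the fast model, then answer with the best-suited model."""
    label = ask(
        "qwen3.5:9b",
        "Classify the request as exactly one word: code, review or simple.",
        question,
    ).strip().lower()
    model = ROUTES.get(label, "qwen3.5:9b")  # unknown label: fall back to the fast model
    return ask(model, "You are a helpful assistant.", question)
```

With a real client, `ask` is a thin wrapper around `client.chat.completions.create`, like the `pipeline_step` helper shown later on this page.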

1. Coder + Reviewer Loop

Two agents take turns: one writes code, the other reviews it, and the cycle repeats until the reviewer approves. The CrewAI example below runs one code-and-review pass; to iterate on feedback, re-run the crew with the reviewer's comments appended to the task description.

Coder (qwen3.6:35b) → writes code
  ↓
Reviewer (gemma4:31b) → reviews + gives feedback
  ↓
Approved? → Yes: done | No: back to coder
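The loop itself can also be written framework-free. A minimal sketch, assuming a hypothetical `ask(model, system, user)` wrapper around one chat completion (injected here so the control flow is easy to test); the reviewer is instructed to answer `APPROVED` when satisfied:

```python
from typing import Callable

Ask = Callable[[str, str, str], str]  # (model, system, user) -> reply

def code_review_loop(task: str, ask: Ask, max_rounds: int = 3) -> str:
    """Alternate coder and reviewer until approval or max_rounds is reached."""
    feedback = ""
    code = ""
    for _ in range(max_rounds):
        # Feed the reviewer's feedback back into the coder's prompt.
        prompt = task if not feedback else f"{task}\n\nReviewer feedback:\n{feedback}"
        code = ask(
            "qwen3.6:35b-a3b",
            "You are a senior Python developer. Return only code.",
            prompt,
        )
        verdict = ask(
            "gemma4:31b",
            "Review the code. Reply APPROVED if correct, otherwise list concrete fixes.",
            code,
        )
        if verdict.strip().startswith("APPROVED"):
            return code
        feedback = verdict
    return code  # best effort after max_rounds
```

Capping the iterations with `max_rounds` keeps a never-satisfied reviewer from looping forever.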
Python (CrewAI)
from crewai import Agent, Task, Crew, LLM

coder_llm = LLM(
    model="openai/qwen3.6:35b-a3b",
    base_url="https://api.staik.se/v1",
    api_key="sk-st-your-key",
)

reviewer_llm = LLM(
    model="openai/gemma4:31b",
    base_url="https://api.staik.se/v1",
    api_key="sk-st-your-key",
)

coder = Agent(
    role="Developer",
    goal="Write clean, working Python code",
    backstory="Senior Python developer with focus on readability.",
    llm=coder_llm,
)

reviewer = Agent(
    role="Code Reviewer",
    goal="Review code for bugs, style and correctness",
    backstory="Meticulous code reviewer who catches edge cases.",
    llm=reviewer_llm,
)

code_task = Task(
    description="Write a Python function that validates email addresses with regex.",
    expected_output="A correct Python function with docstring.",
    agent=coder,
)

review_task = Task(
    description="Review the code. Check edge cases, security, and readability. Give concrete feedback.",
    expected_output="Approved or list of improvement suggestions.",
    agent=reviewer,
)

crew = Crew(agents=[coder, reviewer], tasks=[code_task, review_task])
result = crew.kickoff()
print(result)

2. Orchestrator

A central agent receives the task, breaks it down, and delegates parts to specialized agents using different models.

Orchestrator (gemma4:31b) → analyzes the task
  ├→ Coder (qwen3.6:35b) → writes implementation
  ├→ Tester (qwen3.5:9b) → writes tests (fast)
  └→ Orchestrator → compiles results
Python (CrewAI, manager agent)
from crewai import Agent, Task, Crew, Process, LLM

def staik_llm(model: str) -> LLM:
    return LLM(
        model=f"openai/{model}",
        base_url="https://api.staik.se/v1",
        api_key="sk-st-your-key",
    )

orchestrator = Agent(
    role="Project Manager",
    goal="Break down tasks and coordinate the team",
    backstory="Experienced tech lead who delegates work and reviews results.",
    llm=staik_llm("gemma4:31b"),
)

coder = Agent(
    role="Developer",
    goal="Implement features based on specifications",
    backstory="Senior developer who writes clean, tested code.",
    llm=staik_llm("qwen3.6:35b-a3b"),
)

tester = Agent(
    role="QA Engineer",
    goal="Write comprehensive test cases",
    backstory="Thorough QA engineer who thinks in edge cases.",
    llm=staik_llm("qwen3.5:9b"),
)

task = Task(
    description="Build a REST API endpoint for user registration with validation and tests.",
    expected_output="Complete implementation with tests.",
    # No agent is pinned here: in a hierarchical crew the manager delegates the work.
)

crew = Crew(
    agents=[coder, tester],  # the manager agent must not also appear in this list
    tasks=[task],
    process=Process.hierarchical,
    manager_agent=orchestrator,
)
result = crew.kickoff()
print(result)
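Since the coder and tester work on independent parts of the spec, the orchestrator's fan-out can also run them concurrently. A framework-free sketch using `concurrent.futures`; as before, `ask` is a hypothetical `(model, system, user) -> reply` wrapper around one chat completion, injected so the fan-out is testable:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

Ask = Callable[[str, str, str], str]

def fan_out(spec: str, ask: Ask) -> dict:
    """Run implementation and test-writing in parallel, then combine the results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        impl = pool.submit(
            ask, "qwen3.6:35b-a3b", "Implement the feature. Return only code.", spec
        )
        tests = pool.submit(
            ask, "qwen3.5:9b", "Write pytest tests for this spec.", spec
        )
        return {"implementation": impl.result(), "tests": tests.result()}
```

Threads are enough here because the work is I/O-bound: each worker just waits on an API response.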

3. Human-in-the-Loop (HITL)

The agent works autonomously but pauses at critical steps for human approval before continuing.

Agent (qwen3.6:35b) → generates proposal
  ↓
Human → approves / modifies / rejects
  ↓
Agent → implements based on feedback
Python (OpenAI SDK + HITL)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.staik.se/v1",
    api_key="sk-st-your-key",
)

def generate_and_review(task: str) -> str:
    # Step 1: Agent generates a proposal
    proposal = client.chat.completions.create(
        model="qwen3.6:35b-a3b",
        messages=[
            {"role": "system", "content": "You are a senior developer. Generate a proposal."},
            {"role": "user", "content": task},
        ],
    ).choices[0].message.content

    print(f"\n--- PROPOSAL ---\n{proposal}\n")

    # Step 2: Human reviews
    feedback = input("Approve (enter) or give feedback: ").strip()

    if not feedback:
        return proposal

    # Step 3: Agent revises based on feedback
    revised = client.chat.completions.create(
        model="qwen3.6:35b-a3b",
        messages=[
            {"role": "system", "content": "Revise your proposal based on the feedback."},
            {"role": "user", "content": task},
            {"role": "assistant", "content": proposal},
            {"role": "user", "content": feedback},
        ],
    ).choices[0].message.content

    return revised

result = generate_and_review("Design a database schema for an e-commerce app.")
print(result)

4. Pipeline

Sequential flow where each step is processed by a specialized model. Output from one step becomes input to the next.

Write (qwen3.6:35b) → Review (gemma4:31b) → Translate (qwen3.5:9b)
Python (OpenAI SDK pipeline)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.staik.se/v1",
    api_key="sk-st-your-key",
)

def pipeline_step(model: str, system: str, content: str) -> str:
    return client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": content},
        ],
    ).choices[0].message.content

# Step 1: Write technical documentation
draft = pipeline_step(
    "qwen3.6:35b-a3b",
    "You are a technical writer. Write clear documentation.",
    "Document how to set up a WebSocket server in Python.",
)
print("Draft complete")

# Step 2: Review language and accuracy
reviewed = pipeline_step(
    "gemma4:31b",
    "You are an editor. Improve text without changing technical content.",
    draft,
)
print("Review complete")

# Step 3: Translate to Swedish
translated = pipeline_step(
    "qwen3.5:9b",
    "Translate to fluent Swedish. Keep code examples unchanged.",
    reviewed,
)
print("Translation complete")
print(translated)
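The three steps share the same shape, so any number of them can be chained with a small helper. This is a sketch, not part of the OpenAI SDK; `ask` has the same `(model, system, content) -> reply` signature as the `pipeline_step` function above:

```python
from typing import Callable

Step = tuple[str, str]  # (model, system prompt)
Ask = Callable[[str, str, str], str]

def run_pipeline(steps: list[Step], text: str, ask: Ask) -> str:
    """Feed each step's output into the next step."""
    for model, system in steps:
        text = ask(model, system, text)
    return text

# The write -> review -> translate pipeline from above, as data:
STEPS = [
    ("qwen3.6:35b-a3b", "You are a technical writer. Write clear documentation."),
    ("gemma4:31b", "You are an editor. Improve text without changing technical content."),
    ("qwen3.5:9b", "Translate to fluent Swedish. Keep code examples unchanged."),
]
```

With the client above, `run_pipeline(STEPS, prompt, pipeline_step)` reproduces the three explicit calls, and adding a stage is a one-line change to `STEPS`.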

SDK Examples

staik works out of the box with popular frameworks. Just change the base_url and API key.

Python (OpenAI SDK)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.staik.se/v1",
    api_key="sk-st-your-key",
)

response = client.chat.completions.create(
    model="gemma4:31b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GDPR briefly."}
    ],
    temperature=0.7,
    max_tokens=500,
)
print(response.choices[0].message.content)
Python (LangChain)
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(
    model="gemma4:31b",
    base_url="https://api.staik.se/v1",
    api_key="sk-st-your-key",
)

response = llm.invoke([
    HumanMessage(content="Write a haiku about Stockholm")
])
print(response.content)
TypeScript (Vercel AI SDK)
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const staik = createOpenAI({
  baseURL: "https://api.staik.se/v1",
  apiKey: "sk-st-your-key",
});

const { text } = await generateText({
  model: staik("gemma4:31b"),
  prompt: "Explain GDPR in simple terms.",
});
console.log(text);
Node.js (OpenAI SDK)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.staik.se/v1",
  apiKey: "sk-st-your-key",
});

const response = await client.chat.completions.create({
  model: "gemma4:31b",
  messages: [
    { role: "user", content: "Hello from Node.js!" }
  ],
});
console.log(response.choices[0].message.content);

Ready to build?