01 // LLM Configuration
Generate your API key from your provider's dashboard:
- OpenAI
- Anthropic (Claude)
- Groq
- Mistral
- Together AI
How to use Ollama: install Ollama from docs.ollama.com, run `ollama serve`, then pull a local model such as `ollama pull llama3.2:3b`. In this app, choose Ollama (local), leave the API key blank, and select a local model.
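The local setup described above amounts to a few terminal commands. The model tag below is only an example; pick any model available in the Ollama library:

```shell
# Start the Ollama server (after installing it from docs.ollama.com)
ollama serve

# In another terminal, download a small local model (example tag)
ollama pull llama3.2:3b

# Optional: confirm the model is installed and available
ollama list
```

Once the server is running and a model is pulled, the app can talk to Ollama locally with no API key.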
02 // Article Input
Instructions:
- Enter your API key above.
- Choose a model provider and model.
- Upload the article PDF or enter its DOI.
- Press Run Critical Appraisal.
Use this when you know the paper's DOI and want the app to fetch the article's text or abstract automatically.
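DOI lookups like this are typically served by a public metadata API such as Crossref. A minimal sketch of building such a lookup URL (the endpoint is real Crossref; whether this app uses Crossref specifically is an assumption):

```python
from urllib.parse import quote

# Public Crossref metadata endpoint; returns JSON with title, abstract, etc.
CROSSREF_API = "https://api.crossref.org/works/"

def doi_lookup_url(doi: str) -> str:
    """Build a Crossref metadata URL for a DOI string like '10.1000/xyz123'.

    The DOI is percent-encoded (safe="" also encodes '/') so it survives
    being embedded in the URL path.
    """
    return CROSSREF_API + quote(doi.strip(), safe="")
```

For example, `doi_lookup_url("10.1000/xyz123")` yields `https://api.crossref.org/works/10.1000%2Fxyz123`, which can then be fetched with any HTTP client.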
Use a text-based PDF when possible; very large or scanned (image-only) PDFs may yield weaker text extraction.
Quick Guide
- Enter your API key in the first field.
- Select your provider from the dropdown.
- Pick the model you want to use.
- Type the DOI or upload the article PDF.
- Press Run Critical Appraisal.
- Read the verdict, reasons, applicability notes, and GRADE output.
What You Receive
- A practice-oriented verdict on whether the paper should change clinical thinking.
- A GRADE-based certainty summary with domain-level commentary.
- A concise structured summary of design, population, outcomes, and effect size.