/v1/ask — LLM-Free Web Q&A
Ask any question and get an answer sourced from the live web — no LLM API key required. WebPeel searches, fetches, and ranks the most relevant sentences using BM25 scoring.
How It Works
The ask pipeline runs four steps automatically:
- Search — Query DuckDuckGo (or Brave if configured) for the question
- Pre-rank — Score results by domain authority + primary source detection before fetching (saves time by only fetching the best candidates)
- Fetch — Retrieve and extract clean text from the top results in parallel (5s timeout per source)
- Score & Rank — Compute a combined score using BM25 + domain authority + freshness + primary source, then return the highest-scoring passage
The response includes the ranked answer, confidence score, enriched source metadata (authority, freshness, isPrimarySource), and the method used — so you always know where the answer came from and why it was ranked first.
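The four steps above can be sketched end to end. This is a minimal illustration with stubbed search, fetch, and score backends (the service's internals are not public), not WebPeel's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Authority tiers from this page, highest trust first (pre-rank uses cheap signals only)
AUTHORITY_RANK = {"official": 3, "institutional": 2, "major": 1, "general": 0}

def ask_pipeline(question, search, fetch, score, sources=3):
    # 1. Search: delegate to a search backend (DuckDuckGo or Brave in WebPeel)
    candidates = list(search(question))
    # 2. Pre-rank: order by authority so only the best candidates get fetched
    candidates.sort(key=lambda c: AUTHORITY_RANK.get(c["authority"], 0), reverse=True)
    top = candidates[:sources]
    # 3. Fetch: retrieve page text in parallel (the API enforces a 5s per-source timeout)
    with ThreadPoolExecutor(max_workers=max(1, len(top))) as pool:
        texts = list(pool.map(lambda c: fetch(c["url"]), top))
    # 4. Score & rank: return the passage with the highest score for the question
    best, text = max(zip(top, texts), key=lambda ct: score(question, ct[1]))
    return {"answer": text, "source": best["url"]}
```

The `score` callback stands in for the combined BM25 + authority + freshness + primary-source scoring described later on this page.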
GET /v1/ask
Ask a question using query parameters.
Request
GET https://api.webpeel.dev/v1/ask?q=<question>&sources=3
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| q | string | Yes | The question to answer |
| sources | number | No | Number of web sources to fetch (default: 3, max: 10) |
curl Example
curl "https://api.webpeel.dev/v1/ask?q=What+is+BM25+scoring&sources=3" \
-H "Authorization: Bearer wp_live_xxxx"
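The same GET request can be made from a script with only the Python standard library. The base URL and parameters are as documented above; clamping sources to 10 mirrors the stated maximum:

```python
from urllib.parse import urlencode
import urllib.request

API_BASE = "https://api.webpeel.dev/v1/ask"

def build_ask_url(question: str, sources: int = 3) -> str:
    # q is the question; sources is clamped to the documented max of 10
    return f"{API_BASE}?{urlencode({'q': question, 'sources': min(sources, 10)})}"

def ask_get(question: str, api_key: str, sources: int = 3) -> bytes:
    # Same Authorization header as the curl example
    req = urllib.request.Request(
        build_ask_url(question, sources),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read()
```

The 30-second client timeout is an arbitrary choice, not an API constant.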
POST /v1/ask
Ask a question with a JSON body — useful for longer or structured questions.
Request Body
{
"question": "What is the capital of France?",
"sources": 3
}
curl Example
curl -X POST "https://api.webpeel.dev/v1/ask" \
-H "Authorization: Bearer wp_live_xxxx" \
-H "Content-Type: application/json" \
-d '{
"question": "What is BM25 and how does it rank documents?",
"sources": 5
}'
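A stdlib-only way to issue the same POST from Python; the endpoint, body fields, and headers match the curl call above (the 30 s client timeout is an arbitrary choice):

```python
import json
import urllib.request

def build_ask_post(question: str, api_key: str, sources: int = 3) -> urllib.request.Request:
    # Same JSON body as the curl example: question + optional sources
    body = json.dumps({"question": question, "sources": sources}).encode("utf-8")
    return urllib.request.Request(
        "https://api.webpeel.dev/v1/ask",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def ask_post(question: str, api_key: str, sources: int = 3) -> dict:
    with urllib.request.urlopen(build_ask_post(question, api_key, sources), timeout=30) as resp:
        return json.load(resp)
```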
Response
{
"question": "What is BM25 scoring?",
"answer": "BM25 (Best Match 25) is a ranking function used by search engines to rank documents by relevance to a query. It extends TF-IDF with document length normalization and saturating term frequency.",
"confidence": 0.87,
"sources": [
{
"url": "https://en.wikipedia.org/wiki/Okapi_BM25",
"title": "Okapi BM25 — Wikipedia",
"snippet": "BM25 is a bag-of-words retrieval function that ranks documents by term frequency...",
"confidence": 0.87,
"authority": "institutional",
"freshness": "this-year",
"isPrimarySource": false
},
{
"url": "https://www.elastic.co/blog/practical-bm25-part-1-how-shards-affect-relevance-scoring-in-elasticsearch",
"title": "Practical BM25 — Elastic Blog",
"snippet": "BM25 remains the gold standard for keyword search relevance in production systems...",
"confidence": 0.71,
"authority": "major",
"freshness": "this-year",
"isPrimarySource": false
}
],
"method": "bm25",
"elapsed": 1420
}
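Because the response is plain JSON, downstream code can gate on confidence and keep only well-supported citations. A small sketch (the 0.5 cutoff is an illustrative example, not an API constant):

```python
def cite_answer(resp: dict, min_confidence: float = 0.5) -> dict:
    # Keep only sources whose per-source BM25 confidence clears the cutoff
    cited = [s["url"] for s in resp["sources"] if s["confidence"] >= min_confidence]
    return {
        "answer": resp["answer"],
        "confidence": resp["confidence"],
        "citations": cited,
        "elapsed_ms": resp["elapsed"],
    }
```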
Response Fields
| Field | Type | Description |
|---|---|---|
| question | string | The original question |
| answer | string | Best answer assembled from ranked sentences |
| confidence | number | BM25 relevance score for the top passage (0–1) |
| sources | array | Ranked source list with enriched scoring metadata |
| sources[].url | string | Source URL |
| sources[].title | string | Page title |
| sources[].snippet | string | Relevant excerpt from the page |
| sources[].confidence | number | BM25 score for the best passage found in this source (0–1) |
| sources[].authority | string | Domain authority tier: official, institutional, major, general |
| sources[].freshness | string | Content freshness: recent (≤30 days), this-month (≤90 days), this-year, older |
| sources[].isPrimarySource | boolean | true if the source domain matches the query entity (e.g. openai.com for an OpenAI question) |
| method | string | Scoring method used (always "bm25") |
| elapsed | number | Total time in milliseconds |
How Ranking Works
Sources are scored using a weighted combination of four signals:
| Signal | Weight (standard) | Weight (factual queries) | Description |
|---|---|---|---|
| BM25 | 40% | 35% | Term-frequency relevance of the best passage to your question |
| Domain authority | 25% | 15% | Tier-based trust score: .gov/official docs score highest, general blogs lowest |
| Freshness | 20% | 35% | How recently the page was published/updated (from Open Graph / article metadata) |
| Primary source | 15% | 15% | Whether the source domain is the subject of the query (e.g. openai.com for "OpenAI pricing") |
Factual queries (questions about pricing, limits, rates, versions) automatically double the freshness weight so stale pricing pages don't rank first.
Domain deduplication: A maximum of 2 results per registered domain are kept, so you get diverse sources rather than 5 results from wikipedia.org.
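The weighting table and the dedup rule can be expressed directly. This sketch assumes each signal is already normalized to 0–1 (the service's exact normalization isn't documented) and uses the bare hostname for dedup rather than a true registered-domain (public-suffix) lookup:

```python
from collections import Counter
from urllib.parse import urlparse

WEIGHTS = {
    # Signal weights from the table above
    "standard": {"bm25": 0.40, "authority": 0.25, "freshness": 0.20, "primary": 0.15},
    "factual":  {"bm25": 0.35, "authority": 0.15, "freshness": 0.35, "primary": 0.15},
}

def combined_score(signals: dict, factual: bool = False) -> float:
    # Factual queries shift weight from authority to freshness
    w = WEIGHTS["factual" if factual else "standard"]
    return sum(w[k] * signals[k] for k in w)

def dedupe_by_domain(results: list, max_per_domain: int = 2) -> list:
    # Keep at most two results per domain so one site can't dominate
    seen, kept = Counter(), []
    for r in results:
        domain = urlparse(r["url"]).netloc
        if seen[domain] < max_per_domain:
            seen[domain] += 1
            kept.append(r)
    return kept
```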
CLI Usage
The webask CLI command is a wrapper around /v1/ask:
# Ask any question
webpeel webask "What is the current price of Bitcoin?"
# Control number of sources
webpeel webask "How does BM25 work?" --sources 5
# Or use npx without installing
npx webpeel webask "What is WebPeel?"
Use Cases
- Fact lookup — Quick answers without spinning up an LLM
- Research pipelines — Gather cited facts from the web as a first pass
- Cost reduction — Use BM25 answers for simple questions, only escalate to LLMs for complex reasoning
- RAG pre-fetch — Collect source documents and confidence scores before sending to your LLM
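For the cost-reduction pattern, a simple gate on the response decides when to escalate; the 0.6 threshold here is an illustrative assumption, so tune it against your own traffic:

```python
def needs_llm(resp: dict, threshold: float = 0.6) -> bool:
    # Escalate to an LLM when the BM25 answer is missing or weakly supported
    return not resp.get("answer") or resp.get("confidence", 0.0) < threshold
```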
vs. /v1/fetch with question=
/v1/ask and /v1/fetch?question= both use BM25, but serve different needs:
- /v1/ask — Searches the web first, then fetches + ranks across multiple sources
- /v1/fetch?question= — Fetches a specific URL you provide, then scores sentences within that page
Use /v1/ask when you don't have a specific URL. Use /v1/fetch?question= when you already know the page.
Related
- Deep Research — Multi-round research agent for comprehensive cited reports (requires LLM key)
- LLM-Free Q&A — BM25 scoring on a single page via /v1/fetch?question=
- Fetch API — Full parameter reference including question= and summary=
- Browser Sessions — Stateful sessions for multi-step web interaction
- Changelog — See what else shipped in v0.19.0