
/v1/ask — LLM-Free Web Q&A

Ask any question and get an answer sourced from the live web — no LLM API key required. WebPeel searches, fetches, and ranks the most relevant sentences using BM25 scoring.

⚡ No LLM required. /v1/ask uses BM25 term-frequency ranking — it's fast, deterministic, and free from hallucinations. Results come from real web sources with source attribution. Added in v0.19.0.
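To make the ranking idea concrete, here is a minimal, self-contained BM25 sketch — an illustration of the scoring principle, not WebPeel's internal code:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc (a list of tokens) against the query tokens."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency: how many docs contain each term
    df = Counter()
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            # Rarer terms get a higher IDF; term frequency saturates via k1,
            # and b normalizes for document length
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

query = "bm25 ranking".split()
docs = [s.split() for s in [
    "bm25 is a ranking function for search",
    "paris is the capital of france",
]]
scores = bm25_scores(query, docs)
best = max(range(len(docs)), key=scores.__getitem__)  # index of top passage
```

Because the score is pure term arithmetic, the same question over the same pages always ranks the same way — which is where the determinism above comes from.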

How It Works

The ask pipeline runs four steps automatically:

  1. Search — Query DuckDuckGo (or Brave if configured) for the question
  2. Pre-rank — Score results by domain authority + primary source detection before fetching (saves time by only fetching the best candidates)
  3. Fetch — Retrieve and extract clean text from the top-ranked sources in parallel (5s timeout per source)
  4. Score & Rank — Compute a combined score using BM25 + domain authority + freshness + primary source, then return the highest-scoring passage

The response includes the ranked answer, confidence score, enriched source metadata (authority, freshness, isPrimarySource), and the method used — so you always know where the answer came from and why it was ranked first.
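The four steps above can be sketched as a small orchestration function. Note that search_web, fetch_page, and passage_score here are hypothetical stand-ins, not WebPeel internals:

```python
def ask(question, sources=3, search_web=None, fetch_page=None, passage_score=None):
    # 1. Search: get candidate results for the question
    results = search_web(question)
    # 2. Pre-rank on cheap signals only, before any fetching happens
    results.sort(key=lambda r: (r["authority_score"], r["is_primary"]), reverse=True)
    candidates = results[:sources]
    # 3. Fetch clean text from the surviving candidates
    #    (the real pipeline fetches in parallel with a 5s timeout per source)
    pages = [fetch_page(r["url"]) for r in candidates]
    # 4. Score & rank with the combined BM25 + authority + freshness + primary score
    scored = sorted(pages, key=lambda p: passage_score(question, p), reverse=True)
    top = scored[0]
    return {"answer": top["best_passage"], "sources": scored, "method": "bm25"}
```

The key design point is step 2: pre-ranking on metadata that's already in the search results means only the best few candidates pay the cost of a full fetch.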

GET /v1/ask

Ask a question using query parameters.

Request

GET https://api.webpeel.dev/v1/ask?q=<question>&sources=3

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| q | string | Yes | The question to answer |
| sources | number | No | Number of web sources to fetch (default: 3, max: 10) |

curl Example

curl "https://api.webpeel.dev/v1/ask?q=What+is+BM25+scoring&sources=3" \
  -H "Authorization: Bearer wp_live_xxxx"
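When building the GET URL from code, make sure the question is URL-encoded. A minimal Python sketch (standard library only):

```python
from urllib.parse import urlencode

def ask_url(question, sources=3):
    """Build the GET /v1/ask URL with a properly encoded question."""
    params = urlencode({"q": question, "sources": sources})
    return f"https://api.webpeel.dev/v1/ask?{params}"

url = ask_url("What is BM25 scoring?")
# → https://api.webpeel.dev/v1/ask?q=What+is+BM25+scoring%3F&sources=3
```

urlencode handles the characters (spaces, question marks, quotes) that break hand-assembled query strings.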

POST /v1/ask

Ask a question with a JSON body — useful for longer or structured questions.

Request Body

{
  "question": "What is the capital of France?",
  "sources": 3
}

curl Example

curl -X POST "https://api.webpeel.dev/v1/ask" \
  -H "Authorization: Bearer wp_live_xxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "What is BM25 and how does it rank documents?",
    "sources": 5
  }'
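The same POST call from Python, using only the standard library. This sketch builds the request without sending it (the wp_live_xxxx key is a placeholder); pass the result to urllib.request.urlopen() to execute it:

```python
import json
import urllib.request

def build_ask_request(question, sources=3, api_key="wp_live_xxxx"):
    """Build (but don't send) a POST /v1/ask request."""
    body = json.dumps({"question": question, "sources": sources}).encode()
    return urllib.request.Request(
        "https://api.webpeel.dev/v1/ask",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_ask_request("What is BM25 and how does it rank documents?", sources=5)
```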

Response

{
  "question": "What is BM25 scoring?",
  "answer": "BM25 (Best Match 25) is a ranking function used by search engines to rank documents by relevance to a query. It extends TF-IDF with document length normalization and saturating term frequency.",
  "confidence": 0.87,
  "sources": [
    {
      "url": "https://en.wikipedia.org/wiki/Okapi_BM25",
      "title": "Okapi BM25 — Wikipedia",
      "snippet": "BM25 is a bag-of-words retrieval function that ranks documents by term frequency...",
      "confidence": 0.87,
      "authority": "institutional",
      "freshness": "this-year",
      "isPrimarySource": false
    },
    {
      "url": "https://www.elastic.co/blog/practical-bm25-part-1-how-shards-affect-relevance-scoring-in-elasticsearch",
      "title": "Practical BM25 — Elastic Blog",
      "snippet": "BM25 remains the gold standard for keyword search relevance in production systems...",
      "confidence": 0.71,
      "authority": "major",
      "freshness": "this-year",
      "isPrimarySource": false
    }
  ],
  "method": "bm25",
  "elapsed": 1420
}
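Client code typically takes the answer but checks confidence before trusting it. A sketch of that pattern (the 0.6 threshold is an arbitrary choice for illustration, not part of the API):

```python
import json

def extract_answer(raw, min_confidence=0.6):
    """Parse a /v1/ask response; return (answer, top_source_url) or None if weak."""
    resp = json.loads(raw)
    if resp["confidence"] < min_confidence:
        return None  # caller should fall back or rephrase the question
    top = resp["sources"][0] if resp["sources"] else None
    return resp["answer"], (top["url"] if top else None)
```

Keeping the top source URL alongside the answer preserves the attribution the endpoint is designed to give you.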

Response Fields

| Field | Type | Description |
|-------|------|-------------|
| question | string | The original question |
| answer | string | Best answer assembled from ranked sentences |
| confidence | number | BM25 relevance score for the top passage (0–1) |
| sources | array | Ranked source list with enriched scoring metadata |
| sources[].url | string | Source URL |
| sources[].title | string | Page title |
| sources[].snippet | string | Relevant excerpt from the page |
| sources[].confidence | number | BM25 score for the best passage found in this source (0–1) |
| sources[].authority | string | Domain authority tier: official, institutional, major, general |
| sources[].freshness | string | Content freshness: recent (≤30 days), this-month (≤90 days), this-year, older |
| sources[].isPrimarySource | boolean | true if the source domain matches the query entity (e.g. openai.com for an OpenAI question) |
| method | string | Scoring method used (always "bm25") |
| elapsed | number | Total time in milliseconds |

How Ranking Works

Sources are scored using a weighted combination of four signals:

| Signal | Weight (standard) | Weight (factual queries) | Description |
|--------|-------------------|--------------------------|-------------|
| BM25 | 40% | 35% | Term-frequency relevance of the best passage to your question |
| Domain authority | 25% | 15% | Tier-based trust score: .gov/official docs score highest, general blogs lowest |
| Freshness | 20% | 35% | How recently the page was published/updated (from Open Graph / article metadata) |
| Primary source | 15% | 15% | Whether the source domain is the subject of the query (e.g. openai.com for "OpenAI pricing") |

Factual queries (questions about pricing, limits, rates, versions) automatically boost the freshness weight from 20% to 35% so stale pricing pages don't rank first.
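To make the weighting concrete, here is a sketch of how the four signals could combine. The weights come from the table above; treating each signal as a value normalized to 0–1 is an assumption for illustration:

```python
WEIGHTS = {
    "standard": {"bm25": 0.40, "authority": 0.25, "freshness": 0.20, "primary": 0.15},
    "factual":  {"bm25": 0.35, "authority": 0.15, "freshness": 0.35, "primary": 0.15},
}

def combined_score(signals, factual=False):
    """signals: dict mapping signal name -> value in [0, 1]."""
    w = WEIGHTS["factual" if factual else "standard"]
    return sum(w[name] * signals[name] for name in w)

# A fresh blog post vs. a stale but authoritative page, same BM25 relevance
fresh_blog = {"bm25": 0.7, "authority": 0.3, "freshness": 1.0, "primary": 0.0}
stale_gov  = {"bm25": 0.7, "authority": 1.0, "freshness": 0.1, "primary": 0.0}
```

Under the factual weights, the fresh blog post wins comfortably — exactly the behavior the boost is there to produce.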

Domain deduplication: A maximum of 2 results per registered domain are kept, so you get diverse sources rather than 5 results from wikipedia.org.
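A sketch of that deduplication pass. The domain extraction here is naive (last two host labels); a real registered-domain check needs a public-suffix list:

```python
from urllib.parse import urlparse

def dedupe_by_domain(results, max_per_domain=2):
    """Keep at most max_per_domain results per domain, preserving rank order."""
    seen = {}
    kept = []
    for r in results:
        # Naive "registered domain": last two labels of the hostname
        domain = ".".join(urlparse(r["url"]).hostname.split(".")[-2:])
        seen[domain] = seen.get(domain, 0) + 1
        if seen[domain] <= max_per_domain:
            kept.append(r)
    return kept
```

Because results arrive already ranked, a single ordered pass keeps each domain's two best entries automatically.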

CLI Usage

The webask CLI command is a wrapper around /v1/ask:

# Ask any question
webpeel webask "What is the current price of Bitcoin?"

# Control number of sources
webpeel webask "How does BM25 work?" --sources 5

# Or use npx without installing
npx webpeel webask "What is WebPeel?"

Use Cases

vs. /v1/fetch with question=

/v1/ask and /v1/fetch?question= both use BM25, but serve different needs:

Use /v1/ask when you don't have a specific URL. Use /v1/fetch?question= when you already know the page.

Related