Strategy · Medium risk · Intermediate
AI model
Let a language model or classifier predict the true probability of an event, then trade where the market mis-prices it. polybot treats LLMs as probability oracles, not decision-makers.
Published Apr 10, 2026
The idea
Prediction markets quote a probability (the price). A well-calibrated model quotes a probability too. If the model says 0.62 and the market says 0.51, buy YES; if the model says 0.34, buy NO. Over many independent trades with a genuine edge, you come out ahead.
The subtle point: the model isn’t choosing trades. It’s producing probabilities. polybot’s strategy turns probabilities into sized, risk-gated signals.
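The arithmetic behind "genuine edge" is simple. A minimal worked example with the hypothetical numbers above (a YES share pays $1 on a YES resolution, $0 otherwise):

```python
# Model estimate vs. market quote (illustrative numbers from the text above).
model_prob = 0.62
market_price = 0.51

# Expected value of buying one YES share, before fees and slippage:
# win $1 with probability 0.62, pay 0.51 up front.
ev_per_share = model_prob * 1.0 - market_price
print(round(ev_per_share, 2))  # 0.11 of expected edge per share
```

If the model said 0.34 instead, the same arithmetic gives negative EV for YES, which is exactly the case where the strategy buys NO.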
How polybot implements it
The ai_model strategy delegates probability estimation to an AIModelPlugin. polybot ships three:
- llm_plugin.py — Anthropic Claude (default: claude-sonnet-4) or OpenAI GPT (gpt-4o).
- perplexity_plugin.py — Perplexity’s Sonar model for web-grounded probability calls.
- example_plugin.py — a calibration example you can fork.
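The plugin contract is small: a market goes in, a probability and a confidence come out. A minimal sketch of that contract, with illustrative names rather than polybot's verbatim interface:

```python
from abc import ABC, abstractmethod


class AIModelPlugin(ABC):
    """A probability oracle: market in, (probability, confidence) out."""

    @abstractmethod
    def probability(self, market):
        """Return (probability, confidence), each in [0, 1]."""


class CoinFlipPlugin(AIModelPlugin):
    """Toy plugin: always 50/50 with low confidence, so it never trades."""

    def probability(self, market):
        return 0.5, 0.1


print(CoinFlipPlugin().probability(None))  # (0.5, 0.1)
```

Anything that satisfies this shape, from a hosted LLM to a fine-tuned classifier, can slot into the strategy.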
# pseudocode of src/polybot/strategies/ai_model.py
prob, confidence = self.model.probability(market)
edge = prob - market.last_price  # positive edge favours YES, negative favours NO
if abs(edge) >= self.config.min_edge and confidence >= self.config.min_confidence:
    size = self.kelly_fraction * confidence * abs(edge) * self.risk.available_usd
    side = "YES" if edge > 0 else "NO"
    self.emit(Signal(market.id, side=side, size=size))
What the LLM sees
polybot constructs a structured prompt — market question, resolution criteria, recent news snippets (optional, via Perplexity), current price, volume. The model returns a JSON object with probability (0-1) and confidence (0-1). No free-form reasoning is used for trading; only the structured output.
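Because only the structured output is trusted, parsing and clamping that JSON is the whole interface. A hedged sketch of what a parser for the contract described above could look like (polybot's actual parser may differ):

```python
import json


def parse_model_output(raw):
    """Parse the model's JSON reply and clamp both fields to [0, 1].

    Illustrative only: shows the shape of the structured-output
    contract, not polybot's real parsing code.
    """
    data = json.loads(raw)
    prob = min(max(float(data["probability"]), 0.0), 1.0)
    conf = min(max(float(data["confidence"]), 0.0), 1.0)
    return prob, conf


print(parse_model_output('{"probability": 0.62, "confidence": 0.8}'))  # (0.62, 0.8)
```

Clamping matters: an LLM that occasionally emits 1.4 should never be allowed to size a position as if certainty above 100% existed.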
Kelly sizing, capped
The strategy uses fractional Kelly (default 0.25) scaled by confidence and a global risk budget. You cannot accidentally size the whole book on one high-confidence call — per-market caps and global exposure limits are enforced by the risk service.
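A back-of-envelope version of that sizing rule, mirroring the pseudocode earlier in this page. The per-market cap shown here stands in for the risk service; the real service also enforces global exposure limits not modelled below:

```python
def position_size(prob, price, confidence, bankroll,
                  kelly_fraction=0.25, max_size_usd=300.0):
    """Fractional Kelly scaled by confidence, capped per market.

    Illustrative sketch only; defaults match the configuration
    example on this page, not necessarily polybot's shipped defaults.
    """
    edge = prob - price
    raw = kelly_fraction * confidence * abs(edge) * bankroll
    return min(raw, max_size_usd)


# Hypothetical numbers: model 0.62 vs market 0.51, confidence 0.8, $10k bankroll.
print(round(position_size(0.62, 0.51, 0.8, 10_000), 2))  # 220.0, under the cap
```

Note how the cap bites: a huge edge at full confidence on the same bankroll would ask for $2,000 but still gets clipped to $300.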
What this is not
- Not a chat agent. The LLM doesn’t decide to trade; it decides on probability. Decisions about sizing, timing, and risk are polybot’s job.
- Not a replacement for specialised models. If you have a fine-tuned classifier for sports outcomes, wire it up as an AIModelPlugin — you’ll outperform general LLMs on that domain.
- Not a free lunch. LLMs hallucinate, fixate, and go stale. Calibrate on shadow trades for 60+ days before going live.
Calibration matters more than cleverness
A model that says 0.9 and is right 90% of the time is a gold mine. A model that says 0.9 and is right 60% of the time will systematically lose money, especially with Kelly sizing. polybot ships a calibration report (polybot strategy report ai_model --calibration) that plots predicted probability vs realised outcome; use it weekly.
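The underlying computation is just binning: group predictions by predicted probability and compare each bin's mean prediction to its realised hit rate. A back-of-envelope version of what such a report plots (polybot's own report may bin and weight differently):

```python
def calibration_table(predictions, outcomes, bins=10):
    """Per-bin (mean predicted prob, realised hit rate, count).

    A well-calibrated model has mean_pred close to hit_rate in every
    bin; a model that says 0.9 but hits 0.6 shows up immediately.
    """
    table = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        idx = [i for i, p in enumerate(predictions) if lo <= p < hi]
        if not idx:
            continue
        mean_pred = sum(predictions[i] for i in idx) / len(idx)
        hit_rate = sum(outcomes[i] for i in idx) / len(idx)
        table.append((round(mean_pred, 2), round(hit_rate, 2), len(idx)))
    return table


# Ten predictions of 0.9, nine of which resolved YES: well calibrated.
print(calibration_table([0.9] * 10, [1] * 9 + [0]))  # [(0.9, 0.9, 10)]
```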
Configuration
polybot plugin enable anthropic
polybot strategy enable ai_model
polybot strategy config ai_model \
--plugin anthropic \
--min-edge 0.05 \
--min-confidence 0.6 \
--kelly-fraction 0.25 \
--max-size-usd 300
polybot strategy shadow ai_model --enable
Swap models by changing --plugin:
polybot strategy config ai_model --plugin perplexity # web-grounded
polybot strategy config ai_model --plugin openai # gpt-4o
polybot strategy config ai_model --plugin my_custom # your fine-tuned classifier
FAQ
Which LLM is best for this? Honest answer: it depends on the market. On news-driven markets, Perplexity’s web grounding helps. On mechanical markets (will X exceed Y by date), any capable model with the resolution criteria performs similarly. A/B via shadow mode — polybot makes this trivial.
How often does the model get called? On market updates, subject to a per-market cooldown (default: once per 5 minutes per market). Without a cooldown, you’ll burn tokens re-scoring unchanged markets.
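The cooldown is a per-market rate limiter. A sketch of the behaviour described above, not polybot's actual implementation:

```python
import time


class Cooldown:
    """Skip re-scoring a market until its cooldown window has elapsed."""

    def __init__(self, seconds=300.0):
        self.seconds = seconds
        self._last = {}  # market_id -> timestamp of last scoring

    def ready(self, market_id, now=None):
        now = time.monotonic() if now is None else now
        last = self._last.get(market_id)
        if last is not None and now - last < self.seconds:
            return False  # still cooling down: don't burn tokens
        self._last[market_id] = now
        return True


cd = Cooldown(seconds=300)
print(cd.ready("mkt-1", now=0.0))    # True:  first call scores the market
print(cd.ready("mkt-1", now=100.0))  # False: inside the 5-minute window
print(cd.ready("mkt-1", now=301.0))  # True:  window elapsed, score again
```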
Are LLM calls expensive? With Claude Sonnet + prompt caching (polybot uses cache_control on the system prompt and resolution criteria), typical cost is < $0.01 per market scored. Token budget is enforced per-strategy in the risk service.
Can I use a local model? Yes — implement AIModelPlugin and point at a local endpoint. See writing a custom AI plugin.
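For the local-model case, the plugin is a thin HTTP client. A hypothetical sketch: the endpoint URL, payload shape, and response fields below are assumptions about your local server (e.g. a llama.cpp or vLLM instance), not part of polybot's API:

```python
import json
import urllib.request


class LocalModelPlugin:
    """Hypothetical AIModelPlugin backed by a local HTTP endpoint.

    Adapt the URL and payload to whatever your local server expects;
    only the (probability, confidence) return contract is fixed.
    """

    def __init__(self, url="http://localhost:8000/score"):
        self.url = url

    @staticmethod
    def _parse(data):
        # Response is assumed to mirror the structured-output contract.
        return float(data["probability"]), float(data["confidence"])

    def probability(self, market):
        payload = json.dumps({"question": market["question"]}).encode()
        req = urllib.request.Request(
            self.url, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            return self._parse(json.load(resp))
```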
Source: src/polybot/strategies/ai_model.py, src/polybot/plugins/llm_plugin.py.
Want this strategy tuned for your book?
Cryptuon can adapt polybot strategies to your capital, risk budget, and markets. Shadow-deployed before you go live.