# Sybil Agentic Research

> Research dashboard ranking 27 prediction markets across 5 tiers (A–E) by what an autonomous AI agent can demonstrably do today. Four independent benchmarks — agent accessibility, CLI/MCP, skill, and framework. Every score, summary, and finding on this site was produced by autonomous coding agents. April 2026 snapshot. Lives at https://sybil.exchange/agentic-research. A broader, apex-level llms.txt is at https://sybil.exchange/llms.txt.

## For autonomous agents

This page is fully server-rendered. Every tier, finding, and benchmark grade is in the initial HTML response — no JavaScript required. You can fetch the URL with a plain HTTP client and parse the body directly.

Fastest path for "which PMs are in Tier A":

- [summary.json](https://sybil.exchange/agentic-research/data/summary.json) — authoritative hand-curated tier assignments (A/B/C/D/E) and 6 findings. JSON, ~6KB. Single source of truth for the rankings.

Fastest path for "give me a markdown copy of the whole page":

- [index.md](https://sybil.exchange/agentic-research/index.md) — plain markdown snapshot of the entire dashboard, regenerated on every build.

No auth, no CORS, no rate limits. Plain HTTP GET. All data lives under `/agentic-research/`.

## Pages

- [Summary](/agentic-research/#/summary): 5-tier ranked list (A–E) plus 6 hot-take findings. Start here.
- [Landscape](/agentic-research/#/landscape): full 27-PM table with category, volume, tools, and links.
- [Testing](/agentic-research/#/testing): per-benchmark grades for 20 PMs plus per-check evidence.
- [Methodology](/agentic-research/#/methodology): formula descriptions, floor rules, grade buckets.

## Dataset

- [pms.json](/agentic-research/data/pms.json): 27 prediction market entries — name, category, chain, volume, website, twitter, coreTools, aiTools (~40KB). The canonical place for tool links (MCP server, CLI repo, SDK, framework).
- [accessibility.json](/agentic-research/data/accessibility.json): per-PM agent-accessibility grades. 15 checks weighted into a max-23 score. Each entry carries a `_regrade` audit field with the per-check breakdown.
- [tests.json](/agentic-research/data/tests.json): CLI/MCP, skill, and framework benchmark results. Shape: `{current, legacy}`. Each tool/skill/framework entry under `current` carries a `_regrade` audit with the weighted score, max, floor reason, and VPN-penalty reason.

## Methodology

- [Agent Accessibility (15 checks → max 23, weighted)](/agentic-research/methodology/agent-accessibility.md)
- [CLI/MCP Test (18 checks → max 23, trading 2× weighted)](/agentic-research/methodology/cli-mcp-test.md)
- [Skill Test (8 milestones → max 13, M5/M7 weighted 3pt)](/agentic-research/methodology/skill-test.md)
- [Framework Assessment (5 quality dimensions × 0–3 = max 15)](/agentic-research/methodology/framework-assessment.md)

The full methodology — including the round-trip-floor and VPN-penalty rules — is also rendered as a single page at [/agentic-research/#/methodology](/agentic-research/#/methodology).

## Data access

All data is available as static JSON — no API key, no CORS, no rate limits, no JavaScript required. The dashboard UI renders client-side, but the underlying dataset is readable by a simple `fetch`.

## Source

- [Twitter](https://x.com/sybil_pm)
- [Parent project: sybil.exchange](https://sybil.exchange)
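The static-JSON endpoints above can be consumed with nothing beyond a standard library. A minimal sketch of answering "which PMs are in Tier A" from summary.json — note the `{"tiers": {...}}` layout and the PM name `ExamplePM` are assumptions for illustration only; the page documents the tier letters (A–E) but not the file's field names, so inspect the real file before relying on this shape:

```python
import json
from urllib.request import urlopen

SUMMARY_URL = "https://sybil.exchange/agentic-research/data/summary.json"

def tier_members(summary: dict, tier: str) -> list[str]:
    """Return the PM names assigned to one tier (A-E).

    Assumes a {"tiers": {"A": [...], ...}} layout, which is a guess at
    the schema -- verify against the live summary.json.
    """
    return summary.get("tiers", {}).get(tier, [])

# Offline demonstration with a made-up excerpt in the assumed layout:
sample = json.loads('{"tiers": {"A": ["ExamplePM"], "B": ["OtherPM"]}, "findings": []}')
print(tier_members(sample, "A"))  # ['ExamplePM']

# Live use is a plain HTTP GET -- no auth, no special headers:
# summary = json.load(urlopen(SUMMARY_URL))
# print(tier_members(summary, "A"))
```

The same pattern applies to tests.json: fetch, `json.load`, then walk the documented `current` map and read each entry's `_regrade` audit.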