perplexity-ai/r1-1776
by perplexity
Overview
R1‑1776 is a post-trained variant of the DeepSeek-R1 model, open-sourced by Perplexity on February 18, 2025, with the goal of providing uncensored, high-quality reasoning and language generation. It removes censorship of politically sensitive topics, particularly CCP-related ones, while retaining the base model's strong general performance.
- Release Date: February 18, 2025
- Base Model: DeepSeek-R1
- Context Length: Up to 128,000 tokens
- Model Size: 671B parameters (Mixture-of-Experts; ~37B active per token)
Features
- Uncensored answers to politically sensitive queries
- Preserves DeepSeek-R1 reasoning, math, and factual accuracy
- Open-source weights (MIT license)
- Available on Hugging Face, OpenRouter, Sonar, and others (see the API sketch below)
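The hosted endpoints are OpenAI-compatible, so a few lines of Python suffice to query the model. The sketch below assumes OpenRouter's standard base URL and the model ID `perplexity/r1-1776`; verify both against OpenRouter's current model page. The API key is a placeholder.

```python
# Minimal sketch: querying R1-1776 through OpenRouter's OpenAI-compatible API.
# Assumes the model ID "perplexity/r1-1776" and the standard OpenRouter base URL;
# check OpenRouter's model page for the current values.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder, not a real key
)

response = client.chat.completions.create(
    model="perplexity/r1-1776",
    messages=[{"role": "user", "content": "Summarize the 1989 Tiananmen Square protests."}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```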
Benchmarks
| Metric | R1‑1776 | DeepSeek-R1 | GPT‑4 Turbo |
|---|---|---|---|
| Reasoning (lineage-bench) | 1st place | ≈ equal | Top-tier |
| CCP Topic Coverage | 100 % | ~2–3 % | ≈ 1 % |
| Multilingual QA | High | High | Very High |
Benchmark figures are drawn from the Perplexity blog, community evaluations, and ArtificialAnalysis.ai.
Pricing
| Token Type | Cost per 1M tokens (OpenRouter) |
|---|---|
| Input | $2.00 |
| Output | $8.00 |
Prices vary by provider; self-hosting incurs no license or per-token fees, only infrastructure costs. A quick way to estimate per-request cost at the rates above is sketched below.
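This back-of-the-envelope calculation uses the OpenRouter rates from the table; treat the constants as illustrative, since rates change over time.

```python
# Estimated request cost at the OpenRouter rates listed above
# ($2.00 per 1M input tokens, $8.00 per 1M output tokens).
INPUT_USD_PER_MTOK = 2.00
OUTPUT_USD_PER_MTOK = 8.00

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_USD_PER_MTOK
            + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# Example: a 2,000-token prompt with a 1,500-token reasoning-heavy answer.
print(f"${request_cost_usd(2_000, 1_500):.4f}")  # -> $0.0160
```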
Use Cases
- Answering geopolitically sensitive questions
- Multilingual open-topic QA systems
- Research, journalism, and political science tools
- Offline/private deployments with full access (see the loading sketch after this list)
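Because the weights are openly available on Hugging Face, the model can be loaded directly for private use. A minimal sketch with the `transformers` library follows; the repo ID matches the model card, but the hardware settings are assumptions rather than a tested recipe, and a 671B-parameter model realistically requires a multi-GPU or multi-node setup.

```python
# Illustrative local-inference sketch using Hugging Face transformers.
# The repo ID matches this model card (perplexity-ai/r1-1776); the loading
# options below are assumptions, and the full model needs hundreds of GB
# of accelerator memory even in 8-bit formats.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "perplexity-ai/r1-1776"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",       # shard layers across available GPUs
    torch_dtype="auto",      # use the checkpoint's native precision
    trust_remote_code=True,  # DeepSeek-based architectures may ship custom code
)

inputs = tokenizer("Explain the 1989 Tiananmen Square protests.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```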
Safety & Stability
- Open-weight model: decensored by design and not aligned to additional safety standards; downstream applications should supply their own safeguards
- Post-trained on ~40k prompt/completion pairs covering previously censored topics
- Transparency and reproducibility prioritized
Limitations
- No additional instruction tuning; relies on the alignment inherited from DeepSeek-R1
- Some early evaluations showed temporarily reduced reasoning performance, since resolved
- Large size: 671B parameters require substantial inference infrastructure (a rough sizing sketch follows)
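To make the infrastructure requirement concrete, the sketch below gives a lower bound on the memory needed just to hold the weights at common precisions. It ignores KV cache, activations, and framework overhead, so real deployments need headroom beyond these figures; note also that MoE models must keep all experts resident, so the full parameter count applies.

```python
# Rough lower bound on accelerator memory needed to hold the weights of a
# 671B-parameter model at common precisions. Ignores KV cache, activations,
# and framework overhead, which add substantially on top.
PARAMS = 671e9

BYTES_PER_PARAM = {"fp16/bf16": 2.0, "fp8/int8": 1.0, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision:>10}: ~{gib:,.0f} GiB of weights")
# fp16/bf16 -> ~1,250 GiB; fp8/int8 -> ~625 GiB; int4 -> ~312 GiB
```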
License
R1‑1776 is released under the permissive MIT license. It is free to use, modify, and distribute, with the goal of democratizing access to uncensored, high‑reasoning LLMs.