Model Explorer
Explore benchmark performance of various AI models
Models
Claude-3-Opus
Claude-3.5-Haiku
Claude-3.5-Sonnet-1022
Claude-3.7-Sonnet
Claude-3.7-Sonnet-Thinking
Claude-4.0-Opus
Claude-4.0-Opus-Thinking
Claude-4.0-Sonnet
Claude-4.0-Sonnet-Thinking
Claude-4.1-Opus-Thinking
Cohere-Command-A
Cohere-Command-R-Plus
DeepSeek-R1
DeepSeek-V3-0324
GPT-3.5-Turbo
GPT-4-mini
GPT-4.1
GPT-4o-0513
GPT-5
GPT-5-Thinking
GPT-5-mini
GPT-5-mini-Thinking
GPT-5-nano
GPT-5-nano-Thinking
GPT-OSS-120B
Gemini-2.0-Flash
Gemini-2.0-Pro-0121
Gemini-2.5-Flash
Gemini-2.5-Flash-Thinking
Gemini-2.5-Pro-0325
Gemini-2.5-Pro-0605
Gemini-2.5-Pro-Thinking
Grok-3-Beta
Grok-3-Mini-Beta
Grok-4
Grok-4-Thinking
Kimi-K2-Instruct
Llama-2-7B
Llama-3.1-405B
Llama-3.3-70B
Llama-4-Maverick-17B
Magistral-Medium-3.1
Mistral-Large-2
Mistral-Medium-3.1
OpenAI-O1-1217
OpenAI-O1-mini
OpenAI-O3-high
OpenAI-O3-medium
OpenAI-O3-mini-high
OpenAI-O3-mini-medium
OpenAI-O4-mini-high
OpenAI-O4-mini-medium
Phi-4
Qwen-3
Qwen-3-Thinking
Claude-4.0-Sonnet-Thinking
Anthropic's Claude 4.0 Sonnet model with thinking capabilities
Performance by Benchmark
Capability Benchmarks
65.0%
76.3%
85.3%
78.7%
87.4%
62.4%
66.0%
67.3%
44.5%
66.0%
10.0%
55.0%
4.6%
81.3%
72.1%
73.6%
70.2%
85.2%
95.2%
62.4%
90.9%
82.0%
74.9%
93.8%
62.5%
40.0%
45.5%
75.9%
30.0%
65.0%
Safety Benchmarks
86.2%
94.4%
91.7%
92.0%
100.0%
91.9%
90.7%
56.7%
96.7%
94.2%
Capability & Safety Benchmarks
92.7%