Benchmark Explorer
Explore how models perform on various benchmarks
Benchmarks
Capability Benchmarks
AA-LCR
AIME
ARC-AGI
CaseLaw
Chatbot Arena (Win Rate)
Chatbot Arena AAII
Chatbot Arena Coding
Chatbot Arena Vision
ContractLaw
CorpFin
FinanceAgent
GPQA
HumanEval
Humanity's Last Exam
IFBench
IOI
LegalBench
LiveBench (Agentic Coding)
LiveBench (Average)
LiveBench (Coding)
LiveBench (Data Analysis)
LiveBench (Instruction Following)
LiveBench (Language)
LiveBench (Math)
LiveBench (Reasoning)
LiveCodeBench
MGSM
MMLU Pro
MMMU
Math500
MortgageTax
SAGE
SWE-bench
SciCode
SimpleBench
TaxEval
Terminal-Bench Hard
Vals Index
Vals Multimodal Index
Vibe Code Bench
τ²-Bench Telecom
Safety Benchmarks
AIR-Bench-AcademicDishonesty
AIR-Bench-AdultContent
AIR-Bench-AdviceInHeavilyRegulatedIndustries
AIR-Bench-AutomatedDecisionmaking
AIR-Bench-AutonomousUnsafeOperations
AIR-Bench-Availability
AIR-Bench-CelebratingSuffering
AIR-Bench-ChildSexualAbuse
AIR-Bench-Confidentiality
AIR-Bench-DepictingViolence
AIR-Bench-DeterringDemocraticParticipation
AIR-Bench-DiscriminationprotectedCharacteristics
AIR-Bench-DisempoweringWorkers
AIR-Bench-DisruptingSocialOrder
AIR-Bench-EndangermentHarmOrLossOfLife
AIR-Bench-Erotic
AIR-Bench-Fraud
AIR-Bench-FraudulentSchemes
AIR-Bench-Harassment
AIR-Bench-HateSpeechIncitingViolence
AIR-Bench-HighRiskFinancialActivities
AIR-Bench-IllegalRegulatedSubstances
AIR-Bench-IllegalServicesExploitation
AIR-Bench-InfluencingPolitics
AIR-Bench-Integrity
AIR-Bench-MilitaryAndWarfare
AIR-Bench-Misdisinformation
AIR-Bench-Misrepresentation
AIR-Bench-Monetized
AIR-Bench-NonconsensualNudity
AIR-Bench-OffensiveLanguage
AIR-Bench-OtherIllegalunlawfulActivity
AIR-Bench-PerpetuatingHarmfulStereotypes
AIR-Bench-PoliticalPersuasion
AIR-Bench-SowingDivision
AIR-Bench-SpecificTypesOfRights
AIR-Bench-SuicidalAndNonsuicidalSelfinjury
AIR-Bench-SupportingMaliciousOperations
AIR-Bench-TypesOfDefamation
AIR-Bench-Unauthorizedprivacyviolationssensitivedata
AIR-Bench-UnfairMarketPractices
AIR-Bench-ViolentActs
AIR-Bench-WeaponUsageDevelopment
Capability & Safety Benchmarks
MedQA
Speed & Latency Metrics
Median Tokens/s
Cost & Pricing Metrics
Blended Price (USD/1M Tokens)
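The "Blended Price" metric above reduces a model's separate input and output token prices to a single USD-per-1M-tokens figure. A minimal sketch of how such a figure can be computed, assuming a 3:1 input-to-output token weighting (a common convention for blended prices; the page does not state which ratio it actually uses, and the prices below are hypothetical):

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of input/output prices, in USD per 1M tokens.

    The 3:1 input:output weighting is an illustrative assumption,
    not a documented property of this page's metric.
    """
    total = input_weight + output_weight
    return (input_weight * input_usd_per_m
            + output_weight * output_usd_per_m) / total

# Hypothetical prices: $2.50/1M input tokens, $10.00/1M output tokens.
print(blended_price(2.50, 10.00))  # → 4.375
```

Equal weights reduce the formula to a simple average, so the ratio chosen can shift rankings between models with very different input/output pricing.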
GPQA
Measures accuracy on the most challenging subset of the Google-Proof Question Answering (GPQA) benchmark, a set of difficult, graduate-level science questions written and validated by domain experts.
Model Performance
Rank | Model | Score
#1 | Gemini 3.1 Pro Preview | 88.9%
#2 | GPT-5.2 | 88.9%
#3 | Claude Opus 4.6 (Thinking) | 86.2%
#4 | Grok 4 | 84.1%
#5 | Grok 4 (Thinking) | 84.1%
#6 | Gemini 3.0 Flash | 83.8%
#7 | Claude Opus 4.5 (Thinking) | 81.1%
#8 | GPT-5 (Thinking) | 80.8%
#9 | GPT-5 | 80.8%
#10 | Claude Sonnet 4.6 | 80.8%
#11 | Grok 4.20 (Reasoning) | 79.1%
#12 | OpenAI o3 (Medium Effort) | 78.1%
#13 | OpenAI o3 (High Effort) | 78.1%
#14 | GPT-5.4 Mini | 77.4%
#15 | Claude Sonnet 4.5 (Thinking) | 75.5%
#16 | GPT-5 Mini (Thinking) | 73.7%
#17 | Gemini 2.5 Pro (Thinking) | 73.7%
#18 | DeepSeek V3.2 (Thinking) | 73.7%
#19 | GPT-5 Mini | 73.7%
#20 | Gemini 2.5 Pro | 73.7%
#21 | Claude Opus 4.5 | 72.7%
#22 | Grok 3 Mini | 72.0%
#23 | GPT OSS 120B | 71.4%
#24 | Qwen 3 Max (Thinking) | 70.4%
#25 | Qwen 3 Max Preview | 70.4%
#26 | GPT-5.4 Nano | 70.0%
#27 | DeepSeek V3.2 | 68.3%
#28 | Claude 4.1 Opus (Thinking) | 68.3%
#29 | Claude 3.7 Sonnet (Thinking) | 67.1%
#30 | OpenAI o3 Mini (High Effort) | 66.7%
#31 | OpenAI o3 Mini (Medium Effort) | 66.7%
#32 | OpenAI o4 Mini (Medium Effort) | 66.0%
#33 | OpenAI o4 Mini (High Effort) | 66.0%
#34 | Claude 4.0 Sonnet (Thinking) | 66.0%
#35 | Grok 3 | 64.9%
#36 | Grok 3 (Thinking) | 64.9%
#37 | OpenAI o1 | 64.0%
#38 | Claude Haiku 4.5 (Thinking) | 63.0%
#39 | Claude 4.0 Opus | 62.3%
#40 | Kimi K2 | 62.0%
#41 | Claude 4.1 Opus | 59.9%
#42 | Claude 4.0 Sonnet | 59.2%
#43 | Llama 4 Maverick 17B | 56.9%
#44 | Claude 3.7 Sonnet | 56.5%
#45 | Qwen 3 | 55.2%
#46 | Qwen 3 (Thinking) | 55.2%
#47 | Gemini 2.0 Flash | 53.6%
#48 | Grok 4.20 | 53.5%
#49 | GPT-4.1 | 52.8%
#50 | Gemini 3.1 Flash Lite Preview | 52.2%
#51 | DeepSeek V3 (Mar 2025) | 48.1%
#52 | GPT-5 Nano (Thinking) | 46.1%
#53 | GPT-5 Nano | 46.1%
#54 | Claude 3.5 Sonnet | 45.5%
#55 | Gemini 2.5 Flash (Thinking) | 44.8%
#56 | Gemini 1.5 Pro | 44.4%
#57 | DeepSeek V3 | 38.7%
#58 | Gemini 2.5 Flash | 37.7%
#59 | GPT-4o | 33.7%
#60 | Llama 3.3 70B | 33.3%
#61 | Mistral Large 2 | 26.9%
#62 | GPT-4o Mini | 25.6%
#63 | Claude 3.5 Haiku | 17.2%
#64 | Cohere Command A | 5.7%
#65 | GPT-3.5 Turbo | 5.7%
#66 | Cohere Command R+ | 5.3%