Benchmark Detail View
School Library Management
MEDIUM Challenge • 43 models tested • Top Score: 79.9
Success Rate: 59.2%
Quality Score: 84
Tests Passed: 17
Models Tested: 43
School Library Benchmark - Individual Model Results
| Rank | Model | Provider | Tested | Score | Success Rate |
|------|-------|----------|--------|-------|--------------|
| 1 | Claude 3 Haiku | Claude | 08/2025 | 79.9 | 78.6% |
| 2 | Grok 4 | xAI | 08/2025 | 79.5 | 78.6% |
| 3 | Horizon Beta | Other | 08/2025 | 78.1 | 78.6% |
| 4 | OpenAI o4-mini | OpenAI | 08/2025 | 76.1 | 75.0% |
| 5 | R1 | DeepSeek | 08/2025 | 76.1 | 75.0% |
| 6 | OpenAI o1-mini | OpenAI | 08/2025 | 75.9 | 75.0% |
| 7 | Grok 3 Mini | xAI | 08/2025 | 75.7 | 75.0% |
| 8 | OpenAI o3-mini (High) | OpenAI | 08/2025 | 75.5 | 75.0% |
| 9 | OpenAI o3-mini | OpenAI | 08/2025 | 75.5 | 75.0% |
| 10 | OpenAI GPT-4o | OpenAI | 08/2025 | 75.3 | 75.0% |
| 11 | OpenAI o4-mini (High) | OpenAI | 08/2025 | 74.5 | 75.0% |
| 12 | OpenAI GPT-4.1 | OpenAI | 08/2025 | 74.3 | 75.0% |
| 13 | Nova Pro V1 | Amazon | 08/2025 | 73.3 | 71.4% |
| 14 | Mistral Medium 3 | Mistral | 08/2025 | 73.1 | 71.4% |
| 15 | Claude 4 Sonnet | Claude | 08/2025 | 72.7 | 71.4% |
| 16 | OpenAI GPT-4.1 nano | OpenAI | 08/2025 | 71.9 | 71.4% |
| 17 | Nova Lite V1 | Amazon | 08/2025 | 70.9 | 67.9% |
| 18 | OpenAI GPT-4o mini | OpenAI | 08/2025 | 70.7 | 71.4% |
| 19 | Coder Large | Other | 08/2025 | 66.9 | 64.3% |
| 20 | Nova Micro V1 | Amazon | 08/2025 | 66.3 | 64.3% |
| 21 | Gemini 2.5 Flash Lite | Google | 08/2025 | 62.2 | 60.7% |
| 22 | Grok 3 | xAI | 08/2025 | 60.8 | 57.1% |
| 23 | DeepSeek V3 | DeepSeek | 08/2025 | 57.8 | 53.6% |
| 24 | Llama 4 Scout | Meta | 08/2025 | 56.8 | 53.6% |
| 25 | Gemini 2.5 Flash | Google | 08/2025 | 56.6 | 53.6% |
| 26 | Gemini 2.0 Flash-001 | Google | 08/2025 | 56.2 | 53.6% |
| 27 | Qwen3 14b | Alibaba | 08/2025 | 55.8 | 53.6% |
| 28 | OpenAI GPT-4 | OpenAI | 08/2025 | 55.8 | 53.6% |
| 29 | Kimi K2 | Moonshot | 08/2025 | 53.8 | 50.0% |
| 30 | Gemini 2.5 Pro | Google | 08/2025 | 53.8 | 50.0% |
| 31 | Qwen 3 Coder | Alibaba | 08/2025 | 53.2 | 50.0% |
| 32 | Claude 4 Opus | Claude | 08/2025 | 53.0 | 50.0% |
| 33 | Claude 3.7 Sonnet (Thinking) | Claude | 08/2025 | 51.2 | 46.4% |
| 34 | Claude 3.7 Sonnet | Claude | 08/2025 | 51.2 | 46.4% |
| 35 | Claude 3.5 Sonnet | Claude | 08/2025 | 51.0 | 46.4% |
| 36 | OpenAI GPT-4 Turbo | OpenAI | 08/2025 | 50.6 | 46.4% |
| 37 | Claude 3.5 Haiku | Claude | 08/2025 | 50.0 | 46.4% |
| 38 | OpenAI GPT-4.1 mini | OpenAI | 08/2025 | 48.2 | 46.4% |
| 39 | Llama 4 Maverick | Meta | 08/2025 | 47.8 | 42.9% |
| 40 | Codestral 25.08 | Mistral | 08/2025 | 47.2 | 42.9% |
| 41 | OpenAI GPT-4o | OpenAI | 08/2025 | 44.6 | 39.3% |
| 42 | OpenAI GPT-3.5 Turbo | OpenAI | 08/2025 | 41.5 | 35.7% |
| 43 | Command A | Cohere | 08/2025 | 13.0 | 3.6% |
Top Performers
#1 Claude 3 Haiku (Claude) • Score: 79.9
Success Rate: 78.6% (22 of 28 tests passed) • Quality: 92 • Issues: 4

#2 Grok 4 (xAI) • Score: 79.5
Success Rate: 78.6% (22 of 28 tests passed) • Quality: 88 • Issues: 6

#3 Horizon Beta (Other) • Score: 78.1
Success Rate: 78.6% (22 of 28 tests passed) • Quality: 74 • Issues: 13
Explore More Benchmarks
See how models perform across different programming challenges and complexity levels.