Ruby LLM Benchmarks: AI Model Performance Dashboard
A comprehensive performance analysis of LLMs across four program-fixing benchmarks, testing Ruby code-generation ability on real programming challenges validated against test suites and RuboCop quality standards.
4 Benchmarks • 45 AI Models • 57.1% Avg Success Rate
Overall Performance Rankings - All Benchmarks Combined
| Rank | Model | Provider | Tested | Score | Success Rate |
|------|-------|----------|--------|-------|--------------|
| 1 | Claude 4 Sonnet | Claude | 08/2025 | 72.6 | 73.7% |
| 2 | Claude 4 Opus | Claude | 08/2025 | 70.8 | 72.3% |
| 3 | OpenAI GPT-4.1 | OpenAI | 08/2025 | 70.8 | 73.1% |
| 4 | Horizon Beta | Other | 08/2025 | 69.8 | 70.7% |
| 5 | OpenAI GPT-4o | OpenAI | 08/2025 | 68.7 | 68.6% |
| 6 | Claude 3.7 Sonnet (Thinking) | Claude | 08/2025 | 68.6 | 68.9% |
| 7 | OpenAI o3-mini | OpenAI | 08/2025 | 68.5 | 69.0% |
| 8 | OpenAI o1-mini | OpenAI | 08/2025 | 68.3 | 68.8% |
| 9 | Claude 3.5 Sonnet | Claude | 08/2025 | 68.0 | 67.0% |
| 10 | R1 | DeepSeek | 08/2025 | 67.9 | 67.5% |
| 11 | OpenAI o4-mini | OpenAI | 08/2025 | 67.7 | 67.5% |
| 12 | Grok 3 | xAI | 08/2025 | 67.3 | 67.3% |
| 13 | Codestral 25.08 | Mistral | 08/2025 | 67.1 | 66.4% |
| 14 | OpenAI o3-mini (High) | OpenAI | 08/2025 | 66.1 | 67.1% |
| 15 | OpenAI o4-mini (High) | OpenAI | 08/2025 | 65.2 | 65.2% |
| 16 | OpenAI GPT-4 Turbo | OpenAI | 08/2025 | 63.9 | 63.4% |
| 17 | OpenAI GPT-4 | OpenAI | 08/2025 | 63.8 | 63.0% |
| 18 | Llama 4 Scout | Meta | 08/2025 | 63.3 | 62.1% |
| 19 | OpenAI GPT-4.1 mini | OpenAI | 08/2025 | 63.0 | 63.2% |
| 20 | Llama 4 Maverick | Meta | 08/2025 | 63.0 | 62.1% |
| 21 | Grok 4 | xAI | 08/2025 | 62.2 | 61.5% |
| 22 | Gemini 2.5 Flash | Google | 08/2025 | 62.2 | 62.2% |
| 23 | OpenAI GPT-4.1 nano | OpenAI | 08/2025 | 61.6 | 62.2% |
| 24 | Claude 3.7 Sonnet | Claude | 08/2025 | 60.3 | 60.1% |
| 25 | Gemini 2.5 Pro | Google | 08/2025 | 60.0 | 58.7% |
| 26 | DeepSeek V3 | DeepSeek | 08/2025 | 59.6 | 57.6% |
| 27 | Gemini 2.5 Flash Lite | Google | 08/2025 | 58.5 | 58.4% |
| 28 | Gemini 2.0 Flash-001 | Google | 08/2025 | 57.3 | 57.6% |
| 29 | Claude 3.5 Haiku | Claude | 08/2025 | 57.3 | 55.9% |
| 30 | OpenAI GPT-4o | OpenAI | 08/2025 | 55.8 | 54.4% |
| 31 | Mistral Medium 3 | Mistral | 08/2025 | 55.5 | 53.2% |
| 32 | Grok 3 Mini | xAI | 08/2025 | 55.0 | 54.6% |
| 33 | Claude 3 Haiku | Claude | 08/2025 | 53.9 | 50.7% |
| 34 | Kimi K2 | Moonshot | 08/2025 | 52.1 | 50.2% |
| 35 | Nova Pro V1 | Amazon | 08/2025 | 51.9 | 49.4% |
| 36 | OpenAI GPT-4o mini | OpenAI | 08/2025 | 51.7 | 49.9% |
| 37 | Coder Large | Other | 08/2025 | 50.6 | 49.1% |
| 38 | Qwen 3 Coder | Alibaba | 08/2025 | 49.7 | 48.5% |
| 39 | Nova Lite V1 | Amazon | 08/2025 | 47.5 | 43.1% |
| 40 | OpenAI GPT-3.5 Turbo | OpenAI | 08/2025 | 45.3 | 41.1% |
| 41 | Qwen3 14b | Alibaba | 08/2025 | 44.8 | 41.9% |
| 42 | Magnum V4 72B | NousResearch | 08/2025 | 43.6 | 38.7% |
| 43 | Nova Micro V1 | Amazon | 08/2025 | 38.3 | 34.3% |
| 44 | Gemma 3 4B IT | Google | 08/2025 | 29.7 | 25.9% |
| 45 | Command A | Cohere | 08/2025 | 10.6 | 2.3% |
How Scoring Works

Test Success Rate (90% of the score): the percentage of test cases that pass. This measures whether the AI-generated code actually works correctly.
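A minimal sketch of how such a pass rate could be computed with RSpec's JSON formatter; the spec path and runner details are assumptions, since the dashboard's actual harness isn't shown:

```ruby
require "json"
require "open3"

# Test success rate for one benchmark attempt, assuming an RSpec suite.
# RSpec exits non-zero when specs fail, so the exit status is ignored
# and only the JSON report is parsed. "spec/" is a hypothetical path.
def test_success_rate(spec_dir = "spec/")
  report, _status = Open3.capture2("rspec", spec_dir, "--format", "json")
  summary = JSON.parse(report).fetch("summary")
  # Pending examples neither pass nor fail, so exclude them too.
  passed = summary["example_count"] - summary["failure_count"] - summary["pending_count"]
  100.0 * passed / summary["example_count"]
end
```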
Code Quality (10% of the score): based on RuboCop static analysis. The quality score decreases linearly from 100 to 0 as offenses increase from 0 to 50. RuboCop runs with strict default settings and may not reflect real-world code-quality preferences, so this score should be read as adherence to Ruby style guidelines rather than as overall code quality.
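For concreteness, here is one plausible way to collect the offense count behind that quality score, using RuboCop's JSON formatter; method name and file path are illustrative, not taken from the benchmark code:

```ruby
require "json"
require "open3"

# Count RuboCop offenses for a generated solution file.
# RuboCop exits non-zero when it finds offenses, so the exit
# status is ignored and only the JSON report is parsed.
def offense_count(path)
  report, _status = Open3.capture2("rubocop", "--format", "json", path)
  JSON.parse(report).dig("summary", "offense_count")
end

offense_count("solutions/vending_machine.rb") # hypothetical path, e.g. => 12
```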
📐 Calculation Formula

Score = (Test Success Rate × 90%) + (Quality Score × 10%)
Quality = 100 - ((RuboCop Offenses ÷ 50) × 100), capped to 0-100
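Expressed as Ruby (method names here are illustrative, not from the benchmark harness), the formula reproduces the published scores:

```ruby
# Quality falls linearly from 100 to 0 as offenses rise from 0 to 50.
def quality_score(offenses)
  (100.0 - (offenses / 50.0) * 100).clamp(0, 100)
end

# Weighted blend: 90% test success, 10% style quality.
def composite_score(success_rate, quality)
  success_rate * 0.9 + quality * 0.1
end

quality_score(25)         # => 50.0
composite_score(73.7, 63) # => 72.63, matching Claude 4 Sonnet's 72.6
```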
Top Performers

| Rank | Model | Provider | Score | Success Rate | Tests Passed | Quality | RuboCop Issues |
|------|-------|----------|-------|--------------|--------------|---------|----------------|
| #1 | Claude 4 Sonnet | Claude | 72.6 | 73.7% | 88 of 121 | 63 | 75 |
| #2 | Claude 4 Opus | Claude | 70.8 | 72.3% | 86 of 121 | 58 | 84 |
| #3 | OpenAI GPT-4.1 | OpenAI | 70.8 | 73.1% | 85 of 121 | 50 | 101 |

The issue counts appear to be totals across all four benchmarks, with the quality score reflecting the per-benchmark average: 75 total issues ÷ 4 benchmarks ≈ 19 per benchmark, which the formula above maps to a quality score of about 63.
Performance Overview

45 AI Models Tested • 4 Benchmarks • 57.1% Avg Success Rate • 68 Avg Quality Score
Benchmark Challenges

| Benchmark | Difficulty | Models Tested | Score | Avg Success Rate | Quality Score |
|-----------|------------|---------------|-------|------------------|---------------|
| Calendar System | Easy | 43 | 91.3 | 73.9% | 61 |
| Parking Garage | Hard | 42 | 67.1 | 38.7% | 45 |
| School Library | Medium | 43 | 79.9 | 59.2% | 84 |
| Vending Machine | Medium | 45 | 78.5 | 60.2% | 80 |
Dive Deeper into the Analysis (coming soon)
Explore detailed benchmark results, model comparisons, and performance insights across all coding challenges.