Ruby LLM Benchmarks: AI Model Performance Dashboard

Comprehensive performance analysis of LLM models across all program-fixing benchmarks: each model is tested on real Ruby programming challenges, with its generated code validated against test suites and RuboCop quality standards.

4 Benchmarks · 67 AI Models · 58.7% Avg Success
Overall Performance Rankings - All Benchmarks Combined
| Rank | Model | Provider | Date | Score | Success Rate |
|------|-------|----------|------|-------|--------------|
| 1 | Claude 4 Sonnet | Claude | 08/2025 | 72.6 | 73.7% |
| 2 | Claude 4.5 Sonnet | Claude | 10/2025 | 72.3 | 73.2% |
| 3 | Claude 4.1 Opus | Claude | 08/2025 | 71.3 | 73.2% |
| 4 | Claude 4 Opus | Claude | 08/2025 | 70.8 | 72.3% |
| 5 | OpenAI GPT-4.1 | OpenAI | 08/2025 | 70.8 | 73.1% |
| 6 | Horizon Beta | Other | 08/2025 | 69.8 | 70.7% |
| 7 | OpenAI GPT-5 | OpenAI | 09/2025 | 68.8 | 69.8% |
| 8 | OpenAI GPT-4o | OpenAI | 08/2025 | 68.7 | 68.6% |
| 9 | Claude 3.7 Sonnet (Thinking) | Claude | 08/2025 | 68.6 | 68.9% |
| 10 | OpenAI o3-mini | OpenAI | 08/2025 | 68.5 | 69.0% |
| 11 | OpenAI o1-mini | OpenAI | 08/2025 | 68.3 | 68.8% |
| 12 | Claude 3.5 Sonnet | Claude | 08/2025 | 68.0 | 67.0% |
| 13 | R1 | DeepSeek | 08/2025 | 67.9 | 67.5% |
| 14 | OpenAI o4-mini | OpenAI | 08/2025 | 67.7 | 67.5% |
| 15 | Grok 3 | xAI | 08/2025 | 67.3 | 67.3% |
| 16 | Codestral 25.08 | Mistral | 08/2025 | 67.1 | 66.4% |
| 17 | GLM 4.6 | Other | 10/2025 | 67.1 | 67.0% |
| 18 | OpenAI OSS 120B | OpenAI | 08/2025 | 66.7 | 66.5% |
| 19 | DeepSeek V3 | DeepSeek | 10/2025 | 66.2 | 66.2% |
| 20 | OpenAI o3-mini (High) | OpenAI | 08/2025 | 66.1 | 67.1% |
| 21 | OpenAI o4-mini (High) | OpenAI | 08/2025 | 65.2 | 65.2% |
| 22 | OpenAI GPT-5 Codex | OpenAI | 10/2025 | 64.6 | 64.5% |
| 23 | Sonoma Sky Alpha | Other | 09/2025 | 64.4 | 66.2% |
| 24 | Qwen3 Max | Alibaba | 10/2025 | 64.1 | 65.1% |
| 25 | OpenAI GPT-4 Turbo | OpenAI | 08/2025 | 63.9 | 63.4% |
| 26 | OpenAI GPT-4 | OpenAI | 08/2025 | 63.8 | 63.0% |
| 27 | Llama 4 Scout | Meta | 08/2025 | 63.3 | 62.1% |
| 28 | OpenAI GPT-5 mini | OpenAI | 08/2025 | 63.3 | 65.6% |
| 29 | OpenAI GPT-4.1 mini | OpenAI | 08/2025 | 63.0 | 63.2% |
| 30 | Llama 4 Maverick | Meta | 08/2025 | 63.0 | 62.1% |
| 31 | OpenAI GPT-5 Chat | OpenAI | 08/2025 | 62.5 | 61.6% |
| 32 | Grok 4 | xAI | 08/2025 | 62.2 | 61.5% |
| 33 | Gemini 2.5 Flash | Google | 08/2025 | 62.2 | 62.2% |
| 34 | OpenAI GPT-5 mini | OpenAI | 09/2025 | 61.7 | 63.8% |
| 35 | OpenAI GPT-4.1 nano | OpenAI | 08/2025 | 61.6 | 62.2% |
| 36 | OpenAI GPT-5 nano | OpenAI | 09/2025 | 60.7 | 61.6% |
| 37 | Qwen 3 Coder | Alibaba | 10/2025 | 60.7 | 60.6% |
| 38 | Claude 3.7 Sonnet | Claude | 08/2025 | 60.3 | 60.1% |
| 39 | Kimi K2 | Moonshot | 10/2025 | 60.2 | 59.4% |
| 40 | OpenAI GPT-5 nano | OpenAI | 08/2025 | 60.1 | 59.9% |
| 41 | Gemini 2.5 Pro | Google | 08/2025 | 60.1 | 58.7% |
| 42 | OpenAI GPT-5 | OpenAI | 08/2025 | 59.7 | 60.9% |
| 43 | DeepSeek V3 | DeepSeek | 08/2025 | 59.6 | 57.6% |
| 44 | Grok 4 | xAI | 10/2025 | 59.3 | 60.8% |
| 45 | Gemini 2.5 Flash Lite | Google | 08/2025 | 58.5 | 58.4% |
| 46 | Gemini 2.0 Flash-001 | Google | 08/2025 | 57.3 | 57.6% |
| 47 | Claude 3.5 Haiku | Claude | 08/2025 | 57.3 | 55.9% |
| 48 | Claude 4.5 Haiku | Claude | 10/2025 | 56.8 | 56.1% |
| 49 | Grok Code Fast 1 | xAI | 09/2025 | 56.2 | 54.9% |
| 50 | OpenAI GPT-4o | OpenAI | 08/2025 | 55.8 | 54.4% |
| 51 | Mistral Medium 3 | Mistral | 08/2025 | 55.5 | 53.2% |
| 52 | Grok 3 Mini | xAI | 08/2025 | 55.0 | 54.6% |
| 53 | Claude 3 Haiku | Claude | 08/2025 | 53.9 | 50.7% |
| 54 | Kimi K2 | Moonshot | 08/2025 | 52.1 | 50.2% |
| 55 | Nova Pro V1 | Amazon | 08/2025 | 51.9 | 49.4% |
| 56 | OpenAI GPT-4o mini | OpenAI | 08/2025 | 51.7 | 49.9% |
| 57 | Coder Large | Other | 08/2025 | 50.6 | 49.1% |
| 58 | Qwen 3 Coder | Alibaba | 08/2025 | 49.7 | 48.5% |
| 59 | OpenAI OSS 20B | OpenAI | 08/2025 | 48.2 | 45.8% |
| 60 | GLM 4.5 | Other | 08/2025 | 48.0 | 44.2% |
| 61 | Nova Lite V1 | Amazon | 08/2025 | 47.5 | 43.1% |
| 62 | OpenAI GPT-3.5 Turbo | OpenAI | 08/2025 | 45.3 | 41.1% |
| 63 | Qwen3 14B | Alibaba | 08/2025 | 44.8 | 41.9% |
| 64 | Magnum V4 72B | NousResearch | 08/2025 | 43.6 | 38.7% |
| 65 | Nova Micro V1 | Amazon | 08/2025 | 38.3 | 34.3% |
| 66 | Gemma 3 4B IT | Google | 08/2025 | 29.7 | 25.9% |
| 67 | Command A | Cohere | 08/2025 | 10.6 | 2.3% |
How Scoring Works

Test Success Rate (90% of the score): the percentage of test cases that pass. This measures whether the AI-generated code actually works correctly.

Code Quality (10% of the score): based on RuboCop static analysis. The quality score decreases linearly from 100 to 0 as offenses increase from 0 to 50.

Calculation Formula

Score = (Test Success Rate × 90%) + (Quality Score × 10%)
Quality = 100 − ((RuboCop Offenses ÷ 50) × 100), capped to 0–100
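
The formula translates directly to Ruby. Below is a minimal sketch (method names are illustrative, not taken from the benchmark code). The worked example reproduces the #1 entry; the fractional offense count reflects that the "Issues" figures on the Top Performers cards appear to be totals across all four benchmarks (75 ÷ 4 ≈ 18.75 per benchmark):

```ruby
# Quality decreases linearly from 100 (0 offenses) to 0 (50+ offenses).
def quality_score(rubocop_offenses)
  (100 - (rubocop_offenses / 50.0) * 100).clamp(0, 100)
end

# Overall score: 90% test success rate, 10% code quality.
def overall_score(test_success_rate, rubocop_offenses)
  (test_success_rate * 0.90 + quality_score(rubocop_offenses) * 0.10).round(1)
end

# Worked example for the #1 entry (Claude 4 Sonnet):
# quality_score(18.75) = 100 - 37.5 = 62.5 (displayed as 63),
# so 73.7 * 0.9 + 62.5 * 0.1 = 66.33 + 6.25 = 72.6.
overall_score(73.7, 18.75) # => 72.6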

Top Performers

| Rank | Model | Provider | Score | Success Rate | Tests Passed | Quality | Issues |
|------|-------|----------|-------|--------------|--------------|---------|--------|
| #1 | Claude 4 Sonnet | Claude | 72.6 | 73.7% | 88 / 121 | 63 | 75 |
| #2 | Claude 4.5 Sonnet | Claude | 72.3 | 73.2% | 88 / 121 | 64 | 73 |
| #3 | Claude 4.1 Opus | Claude | 71.3 | 73.2% | 87 / 121 | 55 | 90 |

Performance Overview

67 AI Models Tested · 4 Benchmarks · 58.7% Avg Success Rate · 66 Avg Quality Score

Benchmark Challenges

Dive Deeper into the Analysis (coming soon)

Explore detailed benchmark results, model comparisons, and performance insights across all coding challenges.