Ruby LLM Benchmarks: AI Model Performance Dashboard

Comprehensive performance analysis of LLMs across four program-fixing benchmarks. Each model's Ruby code generation is exercised on real programming challenges and validated against both test suites and RuboCop quality standards.

4 benchmarks · 58 AI models · 58.0% average success rate
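
The page does not publish the harness itself, but the pipeline it describes (run each model's fixed program against the benchmark's test suite, then lint it with RuboCop) can be sketched roughly as follows. The helper names are hypothetical; the JSON output formats of `rspec --format json` and `rubocop --format json` are the tools' real formats.

```ruby
# Rough sketch of the evaluation pipeline described above.
# Helper names are illustrative, not from the dashboard's codebase.
require "json"
require "open3"

# Run the benchmark's RSpec suite against a candidate fix and report
# how many examples passed.
def run_tests(spec_dir)
  out, _err, _status = Open3.capture3("rspec", spec_dir, "--format", "json")
  summary = JSON.parse(out).fetch("summary")
  total   = summary.fetch("example_count")
  failed  = summary.fetch("failure_count")
  { passed: total - failed, total: total }
end

# Count RuboCop offenses in the candidate file (RuboCop exits non-zero
# when offenses are found, so the exit status is ignored here).
def count_offenses(file)
  out, _err, _status = Open3.capture3("rubocop", "--format", "json", file)
  JSON.parse(out).dig("summary", "offense_count")
end
```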
Overall Performance Rankings - All Benchmarks Combined
Score is the composite metric defined under "How Scoring Works" below; Success Rate is the percentage of test cases passed.

| Rank | Model | Provider | Evaluated | Score | Success Rate |
|------|-------|----------|-----------|-------|--------------|
| 1 | Claude 4 Sonnet | Claude | 08/2025 | 72.6 | 73.7% |
| 2 | Claude 4.1 Opus | Claude | 08/2025 | 71.3 | 73.2% |
| 3 | Claude 4 Opus | Claude | 08/2025 | 70.8 | 72.3% |
| 4 | OpenAI GPT-4.1 | OpenAI | 08/2025 | 70.8 | 73.1% |
| 5 | Horizon Beta | Other | 08/2025 | 69.8 | 70.7% |
| 6 | OpenAI GPT-5 | OpenAI | 09/2025 | 68.8 | 69.8% |
| 7 | OpenAI GPT-4o | OpenAI | 08/2025 | 68.7 | 68.6% |
| 8 | Claude 3.7 Sonnet (Thinking) | Claude | 08/2025 | 68.6 | 68.9% |
| 9 | OpenAI o3-mini | OpenAI | 08/2025 | 68.5 | 69.0% |
| 10 | OpenAI o1-mini | OpenAI | 08/2025 | 68.3 | 68.8% |
| 11 | Claude 3.5 Sonnet | Claude | 08/2025 | 68.0 | 67.0% |
| 12 | R1 | DeepSeek | 08/2025 | 67.9 | 67.5% |
| 13 | OpenAI o4-mini | OpenAI | 08/2025 | 67.7 | 67.5% |
| 14 | Grok 3 | xAI | 08/2025 | 67.3 | 67.3% |
| 15 | Codestral 25.08 | Mistral | 08/2025 | 67.1 | 66.4% |
| 16 | OpenAI OSS 120B | OpenAI | 08/2025 | 66.7 | 66.5% |
| 17 | OpenAI o3-mini (High) | OpenAI | 08/2025 | 66.1 | 67.1% |
| 18 | OpenAI o4-mini (High) | OpenAI | 08/2025 | 65.2 | 65.2% |
| 19 | Sonoma Sky Alpha | Other | 09/2025 | 64.4 | 66.2% |
| 20 | OpenAI GPT-4 Turbo | OpenAI | 08/2025 | 63.9 | 63.4% |
| 21 | OpenAI GPT-4 | OpenAI | 08/2025 | 63.8 | 63.0% |
| 22 | Llama 4 Scout | Meta | 08/2025 | 63.3 | 62.1% |
| 23 | OpenAI GPT-5 mini | OpenAI | 08/2025 | 63.3 | 65.6% |
| 24 | OpenAI GPT-4.1 mini | OpenAI | 08/2025 | 63.0 | 63.2% |
| 25 | Llama 4 Maverick | Meta | 08/2025 | 63.0 | 62.1% |
| 26 | OpenAI GPT-5 Chat | OpenAI | 08/2025 | 62.5 | 61.6% |
| 27 | Grok 4 | xAI | 08/2025 | 62.2 | 61.5% |
| 28 | Gemini 2.5 Flash | Google | 08/2025 | 62.2 | 62.2% |
| 29 | OpenAI GPT-5 mini | OpenAI | 09/2025 | 61.7 | 63.8% |
| 30 | OpenAI GPT-4.1 nano | OpenAI | 08/2025 | 61.6 | 62.2% |
| 31 | OpenAI GPT-5 nano | OpenAI | 09/2025 | 60.7 | 61.6% |
| 32 | Claude 3.7 Sonnet | Claude | 08/2025 | 60.3 | 60.1% |
| 33 | OpenAI GPT-5 nano | OpenAI | 08/2025 | 60.1 | 59.9% |
| 34 | Gemini 2.5 Pro | Google | 08/2025 | 60.0 | 58.7% |
| 35 | OpenAI GPT-5 | OpenAI | 08/2025 | 59.7 | 60.9% |
| 36 | DeepSeek V3 | DeepSeek | 08/2025 | 59.6 | 57.6% |
| 37 | Gemini 2.5 Flash Lite | Google | 08/2025 | 58.5 | 58.4% |
| 38 | Gemini 2.0 Flash-001 | Google | 08/2025 | 57.3 | 57.6% |
| 39 | Claude 3.5 Haiku | Claude | 08/2025 | 57.3 | 55.9% |
| 40 | Grok Code Fast 1 | xAI | 09/2025 | 56.2 | 54.9% |
| 41 | OpenAI GPT-4o | OpenAI | 08/2025 | 55.8 | 54.4% |
| 42 | Mistral Medium 3 | Mistral | 08/2025 | 55.5 | 53.2% |
| 43 | Grok 3 Mini | xAI | 08/2025 | 55.0 | 54.6% |
| 44 | Claude 3 Haiku | Claude | 08/2025 | 53.9 | 50.7% |
| 45 | Kimi K2 | Moonshot | 08/2025 | 52.1 | 50.2% |
| 46 | Nova Pro V1 | Amazon | 08/2025 | 51.9 | 49.4% |
| 47 | OpenAI GPT-4o mini | OpenAI | 08/2025 | 51.7 | 49.9% |
| 48 | Coder Large | Other | 08/2025 | 50.6 | 49.1% |
| 49 | Qwen 3 Coder | Alibaba | 08/2025 | 49.7 | 48.5% |
| 50 | OpenAI OSS 20B | OpenAI | 08/2025 | 48.2 | 45.8% |
| 51 | GLM 4.5 | Other | 08/2025 | 48.0 | 44.2% |
| 52 | Nova Lite V1 | Amazon | 08/2025 | 47.5 | 43.1% |
| 53 | OpenAI GPT-3.5 Turbo | OpenAI | 08/2025 | 45.3 | 41.1% |
| 54 | Qwen3 14B | Alibaba | 08/2025 | 44.8 | 41.9% |
| 55 | Magnum V4 72B | NousResearch | 08/2025 | 43.6 | 38.7% |
| 56 | Nova Micro V1 | Amazon | 08/2025 | 38.3 | 34.3% |
| 57 | Gemma 3 4B IT | Google | 08/2025 | 29.7 | 25.9% |
| 58 | Command A | Cohere | 08/2025 | 10.6 | 2.3% |
How Scoring Works

Test Success Rate (90% of the score)

Percentage of test cases that pass. This measures whether the AI-generated code actually works correctly.

Code Quality (10% of the score)

Based on RuboCop static analysis. The quality score decreases linearly from 100 to 0 as offenses increase from 0 to 50.

Calculation Formula

Score = (Test Success Rate × 90%) + (Quality Score × 10%)
Quality = 100 - ((RuboCop Offenses ÷ 50) × 100), capped 0-100
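
The same formula translated directly into Ruby, as a minimal sketch (this mirrors the published formula; it is not the dashboard's actual source, and the method names are illustrative):

```ruby
# 100 with zero RuboCop offenses, falling linearly to 0 at 50+ offenses.
def quality_score(offense_count)
  (100 - (offense_count / 50.0) * 100).clamp(0, 100)
end

# test_success_rate is a percentage on a 0-100 scale, e.g. 73.7 for 73.7%.
def overall_score(test_success_rate, offense_count)
  (test_success_rate * 0.90) + (quality_score(offense_count) * 0.10)
end

overall_score(80.0, 12) # => 79.6 (quality = 100 - 24 = 76)
```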

Top Performers

| # | Model | Provider | Score | Success Rate | Tests Passed | Quality Score | RuboCop Offenses |
|---|-------|----------|-------|--------------|--------------|---------------|------------------|
| 1 | Claude 4 Sonnet | Claude | 72.6 | 73.7% | 88 / 121 | 63 | 75 |
| 2 | Claude 4.1 Opus | Claude | 71.3 | 73.2% | 87 / 121 | 55 | 90 |
| 3 | Claude 4 Opus | Claude | 70.8 | 72.3% | 86 / 121 | 58 | 84 |
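
As a consistency check against the formula above, take Claude 4 Sonnet's row: Score = (73.7 × 0.9) + (63 × 0.1) = 66.33 + 6.3 ≈ 72.6, matching the listed combined score.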

Performance Overview

- 58 AI models tested
- 4 benchmarks
- 58.0% average success rate
- 66 average quality score

Benchmark Challenges

Dive Deeper into the Analysis (coming soon)

Explore detailed benchmark results, model comparisons, and performance insights across all coding challenges.