The Computer Language
Benchmarks Game

Why do people just make up stuff sometimes?

Trust, and verify

Sometimes people just make up stuff because the facts don't fit the story they want to tell you.

You will probably come across stuff that people say about the benchmarks game. Do they show that what they claim is true? If they provide a URL, does the content actually confirm what they say?

Perhaps they are genuinely mistaken. Perhaps the content changed.

Wtf kind of benchmark counts the jvm startup time?

Compare the times against [pdf] Java Microbenchmark Harness reports:

                 secs    JMH average
n-body           21.54   23.367 ± 0.062
spectral-norm     4.29    4.061 ± 0.054
meteor-contest    0.24    0.112 ± 0.001

JVM start-up, JIT, OSR… quickly become effective; a typical cold versus warmed-up comparison of most of these workloads shows only a minuscule difference. Note the exception.

(In stark contrast to the traditional expectation of warmup, some benchmarks exhibit slowdown, where the performance of in-process iterations drops over time.)
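The idea behind such in-process measurement can be sketched in plain Java (this is not actual JMH; the workload, iteration counts, and class name are illustrative assumptions): run the workload repeatedly inside one JVM, discard the early warm-up iterations while the JIT is still compiling, and average only the later runs.

```java
// Minimal in-process timing sketch (not real JMH): run a workload
// repeatedly in one JVM, discard warm-up iterations, average the rest.
// The workload and iteration counts are illustrative assumptions.
public class WarmupTiming {
    static double workload() {
        double sum = 0.0;
        for (int i = 1; i <= 1_000_000; i++) sum += 1.0 / i;  // harmonic sum
        return sum;
    }

    public static void main(String[] args) {
        final int warmup = 5, measured = 10;
        long total = 0;
        for (int i = 0; i < warmup + measured; i++) {
            long t0 = System.nanoTime();
            workload();
            long elapsed = System.nanoTime() - t0;
            if (i >= warmup) total += elapsed;  // count only post-warm-up runs
        }
        double avgMs = total / (double) measured / 1e6;  // average, not sum
        System.out.printf("average: %.3f ms over %d runs%n", avgMs, measured);
    }
}
```

Real JMH adds forked JVMs, dead-code elimination guards, and statistical reporting on top of this basic shape.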

It warmed up. But it runs at least 65x slower – hence the super-optimization…

No. The repeated measurements of the Java pi-digits program without restarting the JVM did not make the Java program seem 65x slower.

That really would have been ridiculous. The JavaOne expert's slides mysteriously fail to show that the sum was correctly divided by 65 to give an average.
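The arithmetic slip described above can be illustrated in a few lines (the per-run time and class name here are hypothetical, chosen only to show the effect): report the sum of N in-process repetitions instead of their average, and the program appears exactly N times slower.

```java
// Illustrates the reporting slip: timing N in-process repetitions and
// reporting the *sum* instead of the average makes the program look
// N times slower. The per-run time below is a made-up illustration.
public class SumVsAverage {
    public static void main(String[] args) {
        final int repetitions = 65;
        final double perRunSecs = 0.8;          // hypothetical single-run time
        double sum = repetitions * perRunSecs;  // total over all repetitions
        double average = sum / repetitions;     // the correct figure to report
        System.out.printf("sum: %.1fs, average: %.1fs (%dx apart)%n",
                sum, average, repetitions);
    }
}
```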

…to dispute a decision you basically need to pray the maintainer reopens it for some reason.

No. Follow-up comments could always be made in the project ticket tracker. There was a public discussion forum, etc.

Someone's brilliant hack was rejected. Someone took the opportunity to push traffic to their personal blog.

There's a reason they call it a game
It's a game

The name "benchmarks game" signifies nothing more than the fact that programmers contribute programs that compete (but try to remain comparable) for fun, not money.

It's what you make of it.

From April 2017 through March 2018, Google Analytics showed 477,419 users.

Popular enough that many web search results show web spam – be careful!

Unique page views
(go.html, python.html, etc., March 2018)
Go 6,403
Rust 4,352
Python 4,076
C# 3,600
PHP 2,673
JavaScript 2,633
Java 2,519
C 2,510
C++ 1,619
Ruby 1,361
Swift 1,219
Haskell 1,108
Ada 764
Lua 758
Fortran 756
Dart 733
Erlang 725
Perl 630
Lisp 608
OCaml 501
F# 470
Pascal 448
Racket 362
Chapel 325
Smalltalk 248
Hack 247

Once upon a time…

Doug Bagley had a burst of crazy curiosity: "When I started this project, my goal was to compare all the major scripting languages. Then I started adding in some compiled languages for comparison…"

That project was abandoned in 2002, restarted in 2004 by Brent Fulgham, continued from 2008 by Isaac Gouy, and interrupted in 2018 by the Debian Alioth hosting service EOL. Everything has changed; several times.