A History
Once upon a time…
Doug Bagley had a burst of crazy curiosity: "When I started this project, my goal was to compare all the major scripting languages. Then I started adding in some compiled languages for comparison…"
That project was abandoned in 2002, restarted in 2004 by Brent Fulgham, continued from 2008 by Isaac Gouy, and interrupted in 2018 by the Debian Alioth hosting service EOL. Everything has changed; several times.
August 2020 through July 2021, Google Search Console saw 329,977 clicks.
Enough that many web-search results now show spam and phishing pages. Be careful!
a good starting point
How does Java compare in terms of speed to C or C++ or C# or Python? The answer depends greatly on the type of application you're running. No benchmark is perfect, but The Computer Language Benchmarks Game is a good starting point.
Differences in approach — to memory management, parallel programming, regex, arbitrary precision arithmetic, implementation technique — are part and parcel of using different programming languages.
So we accept something intermediate between chaos and rigidity — enough flex & slop & play to allow for Haskell programs that are not just mechanically translated from Fortran; enough similarity in the basic workloads & tested results.
We both accept PCRE and GMP and … library code; and refuse custom memory pool and hash table implementations.
The best way to complain is to make things.
As one person, you can’t help everyone. And you can’t make everyone happy because your tool is only a tiny part of their life. Don’t make that your goal: focus on learning and broadening your perspective. If it stops being constructive for you, then stop. Period.
Perhaps you would make different library choices? Perhaps you would include different programming language implementations? Perhaps you would make measurements on new computers?
Please do! Please use BenchExec or hyperfine or krun, and start making the kind of measurements you would like to see.
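For instance, a minimal hyperfine sketch; the program names and the workload argument below are placeholders, not actual benchmarks game commands:

```sh
# Compare two hypothetical builds of an n-body program.
# --warmup discards 3 priming runs, --runs makes 10 measured runs,
# --export-json keeps the raw timing data for later analysis.
hyperfine --warmup 3 --runs 10 --export-json nbody.json \
    './nbody-c 50000000' './nbody-rust 50000000'
```

Note that hyperfine times the whole process, startup included; that choice is yours to make and to state.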
Dismiss, Distract
You will probably come across stuff that people have said about the benchmarks game. Did they show that what they claim is true? If they provided a URL, does the content actually confirm what they said?
Maybe they were genuinely mistaken. Maybe the content changed.
I heard that one is pretty bad…
Maybe they heard wrong.
… neither scientific nor indicative of expected performance…
… That having been said … on the current benchmarks, Rust already outperforms C++, which is a pretty big deal…
No, we have to choose —
- either we accept the "neither scientific nor indicative" dismissal and don't even consider "a pretty big deal";
- or we reject the "neither scientific nor indicative" dismissal and consider "a pretty big deal".
… nor indicative of expected performance on real-world idiomatic code.
We've certainly not attempted to prove that these measurements, of a few tiny programs, are somehow representative of the performance of any real-world applications — not known — and, in any case, Benchmarks are a crock.
There's a reason they call it a game
…
Wrong!
The name "benchmarks game" is just a name.
…to dispute a decision you basically need to pray the maintainer reopens it for some reason.
Never true. Follow-up comments could always be made in the project ticket tracker; there was a public discussion forum; etc. etc.
Someone's brilliant hack was rejected. Someone saw the opportunity to push traffic to their personal blog.
The guy that runs it arbitrarily decided to stop tracking some languages he didn't care about…
Measurements are no longer made for these —
ATS, FreeBASIC, CINT, Cyclone, Tiny C, Mono C#, Mono F#, Intel C++, Clang++, CAL, Clean, Clojure, Digital Mars D, GNU D, Gwydion Dylan, SmartEiffel, bigForth, GNU GForth, Groovy, Hack, Icon, Io, Java -client, Java -Xint, gcj Java, Substrate VM, Rhino JavaScript, SpiderMonkey, TraceMonkey, Lisaac, LuaJIT, Mercury, Mozart/Oz, Nice, Oberon-2, Objective-C, Pike, SWI Prolog, YAP Prolog, IronPython, PyPy, Rebol, Rexx, Scala, Bigloo Scheme, Chicken Scheme, Ikarus Scheme, GNU Smalltalk, Squeak Smalltalk, Mlton SML, SML/NJ, Tcl, Truffle, Zonnon.
Like everyone else, I'm sitting on my hands waiting for kostya and hanabi1224 and attractivechaos to make and publish all the other program measurements.
I know it will take more time than I choose to give. Been there; done that.
Be curious
Wtf kind of benchmark counts the jvm startup time?
How much difference does it make for these tiny programs? Tiny differences amortize over seconds and tens-of-seconds of CPU time.
Let's compare the fastest no-warmup measurements against the fastest JMH SampleTime p(0.0000) (the minimum sample) and mean measurements, taken after startup & warmup:
java 22 2024-03-19 on i5-3330, times in seconds; JMH measurements made after startup & warmup:

| program | No warmup | JMH SampleTime p(0.0000) | JMH SampleTime mean |
|---|---|---|---|
| fannkuch-redux #1 | 10.145 | 9.815 | 10.343 (N = 35) |
| fannkuch-redux #2 | 46.275 | 42.480 | 43.100 (N = 25) |
| fannkuch-redux #3 | 40.162 | 39.796 | 40.378 (N = 25) |
| n-body #1 | 7.892 | 7.718 | 7.740 (N = 50) |
| n-body #2 | 7.577 | 7.332 | 7.361 (N = 50) |
| n-body #3 | 7.624 | 7.374 | 7.390 (N = 50) |
| n-body #4 | 6.905 | 6.795 | 6.827 (N = 50) |
| n-body #5 | 6.793 | 6.677 | 6.689 (N = 50) |
| spectral-norm #1 | 7.074 | 6.795 | 6.800 (N = 50) |
| spectral-norm #2 | 2.375 | 1.950 | 2.110 (N = 131) |
| spectral-norm #3 | 1.681 | 1.569 | 1.593 (N = 175) |
Some experimental studies show that 10.9% of process executions don't reach a steady state of peak performance; that 43.5% of process executions were inconsistent; and that sometimes they are slower than what came before.
java 22 2024-03-19 on i5-3330, times in seconds:

| program | No warmup | JMH SingleShotTime after startup |
|---|---|---|
| fannkuch-redux #1 | 10.036 | 10.554 |
| fannkuch-redux #2 | 44.334 | 45.021 |
| fannkuch-redux #3 | 40.809 | 40.370 |
| n-body #1 | 7.892 | 7.823 |
| n-body #2 | 7.582 | 7.387 |
| n-body #3 | 7.629 | 7.528 |
| n-body #4 | 6.908 | 7.007 |
| n-body #5 | 6.794 | 6.740 |
| spectral-norm #1 | 7.082 | 7.587 |
| spectral-norm #2 | 2.443 | 2.398 |
| spectral-norm #3 | 1.694 | 1.629 |
On the one hand, JVM startup, JIT, OSR… take effect quickly, and these no-warmup / after-warmup comparisons show little difference.
On the other hand, for measurements of a few tenths of a second, a few tenths of a second is a huge difference.
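How might those after-warmup numbers be produced? Here is a minimal JMH sketch, with stated assumptions: the class name and the stand-in workload (a crude spectral-norm-style summation) are illustrative, not the actual benchmark programs measured above. SampleTime collects a distribution of timings after warmup, from which JMH reports p(0.0000) (the minimum) and the mean; SingleShotTime measures a single cold invocation in a freshly started JVM, matching the "JMH after startup" column.

```java
package example;

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Minimal sketch: one workload, measured two ways.
@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
@Fork(1)
public class StartupVsWarmup {

    @Param({"5500"})   // illustrative problem size, not the benchmarks game's
    int n;

    // Stand-in workload: sums entries of the spectral-norm-style matrix
    // A(i,j) = 1 / ((i+j)(i+j+1)/2 + i + 1). Purely a placeholder.
    double workload(int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                sum += 1.0 / ((i + j) * (i + j + 1) / 2 + i + 1);
        return sum;
    }

    // Steady state: discard 5 warmup iterations, then sample timings.
    // JMH reports p(0.0000) (the minimum) and the mean, as tabulated above.
    @Benchmark
    @BenchmarkMode(Mode.SampleTime)
    @Warmup(iterations = 5)
    @Measurement(iterations = 50)
    public double afterWarmup() {
        return workload(n);
    }

    // Cold: no warmup iterations, one measured invocation in the forked JVM.
    @Benchmark
    @BenchmarkMode(Mode.SingleShotTime)
    @Warmup(iterations = 0)
    @Measurement(iterations = 1)
    public double coldStart() {
        return workload(n);
    }
}
```

Either way, JMH excludes the forked JVM's own startup; a true no-warmup measurement of the whole process, like the benchmarks game's, needs an external timer around the `java` command itself.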
The benchmarks game benchmarks your code… on a 2012 desktop machine?
Let's compare normalized published times of the same n-body programs (each program's time divided by the GCC C++ time on the same machine, so the baseline is 1.00): made on a 2021 AMD EPYC 7763, made in the cloud on a 2019 Xeon 8272, and made on our 2012 i5-3330 desktop.
| n-body programs | AMD EPYC 7763 Q1'2021 | Xeon 8272 Q2'2019 | i5-3330 Q3'2012 |
|---|---|---|---|
| GCC C++ | 1.00 | 1.00 | 1.00 |
| Rust #7 | 1.33 | 1.53 | 1.47 |
| C gcc #8 | 1.53 | 1.70 | 1.89 |
| C gcc #5 | 1.86 | 1.93 | 2.81 |
| Rust #2 | 1.72 | 1.98 | 2.57 |
| C gcc #2 | 1.85 | 2.05 | 3.31 |
| Rust | 2.02 | 2.29 | 2.61 |
| Swift #7 | 2.12 | 2.48 | 2.43 |
| Chapel #2 | 1.97 | 2.54 | 2.87 |
| OCaml | 2.23 | 2.81 | 3.14 |
| Go | 2.18 | 3.03 | 3.02 |
| C# .NET #8 | 2.24 | 3.03 | 3.25 |
| Java | 2.65 | 3.46 | 3.60 |
| Node js #6 | 2.98 | 4.05 | 3.88 |
| Dart #3 | 2.67 | 5.23 | 5.78 |
| Racket #2 | | 7.24 | 6.48 |
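The normalization is simple ratio arithmetic. A tiny sketch with invented times, chosen so the ratios echo the AMD EPYC column above:

```java
public class Normalize {
    public static void main(String[] args) {
        // Invented times in seconds; only the ratios matter.
        double cppSecs = 4.09;                     // assumed GCC C++ baseline
        double rustSecs = 5.44, javaSecs = 10.84;  // assumed
        // Normalized time = program time / baseline time, same machine.
        System.out.printf("GCC C++ %.2f%n", cppSecs / cppSecs);   // 1.00
        System.out.printf("Rust #7 %.2f%n", rustSecs / cppSecs);  // 1.33
        System.out.printf("Java    %.2f%n", javaSecs / cppSecs);  // 2.65
    }
}
```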
We should expect that increasing use of hand-written vector instructions will change the relative performance of the most optimised programs.
Apples and Oranges
We compare programs against each other, as though the different programming languages had been designed for the exact same purpose — that just isn't so.
The problems introduced by multicore processors, networked systems, massive computation clusters, and the web programming model were being worked around rather than addressed head-on. Moreover, the scale has changed: today's server programs comprise tens of millions of lines of code, are worked on by hundreds or even thousands of programmers, and are updated literally every day. To make matters worse, build times, even on large compilation clusters, have stretched to many minutes, even hours.
Go was designed and developed to make working in this environment more productive.
The most common class of 'less suitable' problems is characterised by performance being a prime requirement and constant-factors having a large effect on performance … Most (all?) large systems developed using Erlang make heavy use of C for low-level code, leaving Erlang to manage the parts which tend to be complex in other languages, like controlling systems spread across several machines and implementing complex protocol logic.
Lua is a tiny and simple language, partly because it does not try to do what C is already good for, such as sheer performance, low-level operations, or interface with third-party software. Lua relies on C for those tasks.