Finally reading Programming Crystal, by Ivo Balbaert and Simon
St. Laurent. Good stuff. The Crystal language has
advanced some since the book came out, but nearly all the code runs as-is.
Something that jumped out at me was the difference between their results and
mine with the benchmarking example. Not the raw numbers. I’d be a little
confused if those were exactly the same. The ratios caught my attention.
Given this source:
require "benchmark"

IOM = IO::Memory.new

Benchmark.ips do |x|
  x.report("Appending") do
    append
    IOM.clear
  end
  x.report("Using to_s") do
    to_s
    IOM.clear
  end
  x.report("Interpolation") do
    interpolation
    IOM.clear
  end
end

def append
  IOM << 42
end

def to_s
  IOM << 42.to_s
end

def interpolation
  IOM << "#{42}"
end
Here’s what we’re told to expect.
Build the code for production:

  $ crystal build benchmarking.cr --release

Then execute it:

  $ ./benchmarking
You’ll get results like this:
Appending 34.06M ( 29.36ns) (± 3.97%) fastest
Using to_s 12.67M ( 78.92ns) (± 7.55%) 2.69× slower
It’s faster on native Linux than on WSL. That’s hardly surprising. But the
difference between to_s and interpolation is now negligible. For that
matter, both of them are closer to the speed of append than to_s was in
the book’s example!
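One plausible reason the gap narrowed, as I understand it: Crystal expands string interpolation at compile time into an append-style string build, so "#{42}" ends up doing roughly the same work as 42.to_s. A minimal sketch (mine, not from the book) showing the two forms produce identical bytes in the buffer:

```crystal
# Sketch: explicit to_s and interpolation write the same content.
io = IO::Memory.new
io << 42.to_s  # explicit conversion
io << "#{42}"  # interpolation; roughly sugar for building a string by appending
puts io.to_s   # prints "4242"
io.clear
```

If the compiler generates similar code for both, similar timings are exactly what you’d expect.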
Is the difference because of changes in Crystal? Some dependency, like LLVM?
My computer’s 40GB of RAM compared to whatever the authors used? My hard
drive? GPU? Is Mercury in retrograde?
I don’t know! I just saw different numbers and thought it was curious.
My point isn’t that the book’s wrong. Heck no. The example’s supposed to
remind you that testing your assumptions is important. All I’ve done is
emphasize the validity of the lesson.
Anyways.
Good book. Fun language. Don’t forget to try out the example code. And if
you need to care about performance? Don’t assume — benchmark.
Got a comment? A question? More of a comment than a question?