Something that jumped out at me was the difference between their results and mine with the benchmarking example. Not the raw numbers. I’d be a little confused if those were exactly the same. The ratios caught my attention.
Given this source:
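The book’s exact listing is its own, so here’s a minimal sketch of the shape of it, assuming the standard Benchmark.ips pattern from Crystal’s standard library. The report labels match the output below; the measured expressions are my guess, not the book’s code.

```crystal
# benchmarking.cr - a reconstruction, not the book's verbatim listing.
require "benchmark"

io = IO::Memory.new

Benchmark.ips do |x|
  # Append the value directly; no intermediate String is allocated.
  x.report("Appending") do
    io << 123
    io.clear
  end

  # Convert to a String first, then append it.
  x.report("Using to_s") do
    io << 123.to_s
    io.clear
  end

  # Build a String via interpolation, then append it.
  x.report("Interpolation") do
    io << "#{123}"
    io.clear
  end
end
```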
Here’s what we’re told to expect.
Build the code for production using

```
$ crystal build benchmarking.cr --release
```

and execute that with:

```
$ ./benchmarking
```

You’ll get results like this:

```
    Appending  34.06M ( 29.36ns) (± 3.97%)       fastest
   Using to_s  12.67M ( 78.92ns) (± 7.55%)  2.69× slower
Interpolation    2.8M (356.75ns) (± 3.84%) 12.15× slower
```
But with Crystal 0.36.1 on Ubuntu 20.04, running under WSL2 on Windows:
```
$ ./benchmarking
    Appending 110.36M (  9.06ns) (± 3.70%)  0.0B/op        fastest
   Using to_s  18.52M ( 54.00ns) (± 5.36%) 16.0B/op   5.96× slower
Interpolation  19.19M ( 52.12ns) (± 2.99%) 16.0B/op   5.75× slower
```
Sure, my numbers are bigger than the book’s. The surprise is that interpolation and to_s are so close to each other on my machine!
Maybe that’s WSL? After getting the day’s tasks done, I revisited the example on my computer’s Manjaro partition.
```
$ ./benchmarking
    Appending 123.54M (  8.09ns) (± 2.57%)  0.0B/op        fastest
   Using to_s  56.57M ( 17.68ns) (± 3.49%) 16.0B/op   2.18× slower
Interpolation  56.55M ( 17.68ns) (± 4.32%) 16.0B/op   2.18× slower
```
It’s faster on native Linux than on WSL. That’s hardly surprising. But the differences between to_s and interpolation are now negligible. For that matter, both of them are closer to the speed of appending than to_s was in the book’s example!
Is the difference because of changes in Crystal? Some dependency, like LLVM? My computer’s 40GB of RAM compared to whatever the authors used? My hard drive? GPU? Is Mercury in retrograde?
I don’t know! I just saw different numbers and thought it was curious.
My point isn’t that the book’s wrong. Heck no. The example’s supposed to remind you that testing your assumptions is important. All I’ve done is emphasize the validity of the lesson.
Good book. Fun language. Don’t forget to try out the example code. And if you need to care about performance? Don’t assume – benchmark.
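The template is tiny, so there’s little excuse not to. A minimal sketch, assuming two hypothetical implementations you want to compare:

```crystal
require "benchmark"

# Hypothetical alternatives; swap in whatever you actually care about.
def join_with_plus(a : String, b : String) : String
  a + " " + b
end

def join_with_interpolation(a : String, b : String) : String
  "#{a} #{b}"
end

Benchmark.ips do |x|
  x.report("plus") { join_with_plus("Hello", "Crystal") }
  x.report("interpolation") { join_with_interpolation("Hello", "Crystal") }
end
```

Benchmark.ips handles the warmup and prints iterations per second for each case, so the only real work is deciding what to measure.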