I have a burning question that has been bugging me for years. I'm not sure why but it never occurred to me to ask you. It goes like this:
Some years ago there was a little "PI Day" challenge going on here to calculate the most digits of PI on a Raspberry Pi. Sorry I can't find the link to that now.
That got me curious, so I went digging into the history of the record holders for calculating the digits of PI. Nicely summarized by Wikipedia here: https://en.wikipedia.org/wiki/Chronolog ... _of_%CF%80
Currently standing at 31,415,926,535,897 digits. I think at the time the record was only 10,000,000,000,050 digits.
Now what struck me about all this was the fact that none of the recent record holders were any of the world's supercomputers. None of them were those hugely expensive, and well, huge, machines with thousands/millions of processors and GPUs etc. Machines that nation states brag about.
Oh no, the machines used for PI digit record breaking were not much more than a few off-the-shelf PCs with masses of RAM/disk attached. Often built on a budget of almost nothing.
Even the current record holder, despite having all the resources of Google on tap, is not using anywhere near the massive parallelism a supercomputer or Google's data centers possess.
Naively a layman would assume that a supercomputer that cost hundreds of millions and consumes the electrical power of a city could get the PI digit record-breaking job done during its lunch break. Yet they do not.
So the question:
Is it really impossible to parallelize the PI calculation? Or have they just not bothered to try?
I would have thought that after spending millions of dollars on a new super computer the first thing you would do is break the digits of PI record to show off how fast it is.
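For what it's worth, one part of the problem *is* known to parallelize: the Bailey–Borwein–Plouffe (BBP) formula lets you compute hexadecimal digits of PI at an arbitrary position without computing any of the digits before them, so each position can go to a different machine. The catch is that it produces base-16 digits, not the decimal digits the records are counted in. Here's a minimal sketch of BBP digit extraction (my own illustration, not what the record holders run):

```python
# BBP digit extraction: hex digits of pi starting at an arbitrary
# fractional position n, computed independently of earlier digits.
# pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))

def _series(j, n):
    """Fractional part of sum over k of 16^(n-k) / (8k + j)."""
    s = 0.0
    # Left sum: 3-argument pow() does modular exponentiation,
    # keeping every term small enough for floating point.
    for k in range(n + 1):
        denom = 8 * k + j
        s = (s + pow(16, n - k, denom) / denom) % 1.0
    # Right tail: terms shrink by a factor of 16 each step.
    k = n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digits(n, count=8):
    """Return `count` hex digits of pi starting at fractional position n."""
    x = (4 * _series(1, n) - 2 * _series(4, n)
         - _series(5, n) - _series(6, n)) % 1.0
    out = []
    for _ in range(count):
        x *= 16
        d = int(x)
        out.append("0123456789ABCDEF"[d])
        x -= d
    return "".join(out)

print(pi_hex_digits(0))  # prints 243F6A88 (pi = 3.243F6A88... in hex)
```

Because `pi_hex_digits(n)` depends only on `n`, you could farm out millions of positions across a cluster. The decimal record attempts instead use the Chudnovsky series plus a full-precision final division, and that giant multiplication is where the easy parallelism runs out, as far as I understand it.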