In languages like C++ or Java, just about every instruction involves a side-effect to the heap.
This is a problem because in the dawning multi-core, multi-processing world, every side-effect to the heap is in fact a broadcast operation to all other threads in the address space. This is expensive in hardware because cache snooping and cache coherency quickly become critical design and performance problems. It is equally expensive in software because in the multiprocessing context every heap side-effect is a bug just waiting to happen: an open invitation for race conditions and data-structure corruption to come home to roost, costing agonizing weeks of debugging and then showing up in the field anyhow.
At the other extreme, languages like Haskell ban heap side-effects completely, at least conceptually.
This is a problem because, for many tasks, the best known algorithms require side-effects. "Wearing the hair shirt" of complete abstinence from heap side-effects can quickly start to feel like an exercise in extremism.
Mythryl occupies a happy medium on the continuum between those two extremes. It uses side-effects where, and only where, they are clearly what is logically required. The typical C++ or Java program uses side-effects about one hundred times more frequently than the typical Mythryl program.
As the number of cores per processor proceeds inevitably up the doubling curve from two to four to eight and beyond, and as programmers come under increasing pressure to make effective use of those cores in their programs, it is a safe bet that C++ and Java programmers will be spending about one hundred times more hours debugging race conditions. Not because they are stupider or less careful, but simply because their languages force them to take more risks.