Previously:
https://news.ycombinator.com/item?id=7365812 (2014, 187 comments)
https://news.ycombinator.com/item?id=10243011 (2015, 56 comments)
https://news.ycombinator.com/item?id=16513717 (2018, 78 comments)
https://news.ycombinator.com/item?id=20251750 (2019, 37 comments)
Also my past commentary about DEC64:
> Most strikingly, DEC64 doesn't do normalization, so comparison will be a nightmare (as you have to normalize in order to compare!). He tried to special-case integer-only arguments, which hides the fact that non-integer cases are much, much slower thanks to the added branches and complexity. If DEC64 were going to be "the only number type" in future languages, it would have to be much better than this.
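Roughly what I mean, as a minimal sketch: I'm using a simplified (coefficient, exponent) pair rather than DEC64's actual packed 64-bit layout, and the struct and helper names are mine, but the underlying issue is the same. Two representations of the same value compare unequal unless you normalize first.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical DEC64-like value: signed coefficient and signed decimal
       exponent, value = coef * 10^exp. (Names are mine, not DEC64's.) */
    typedef struct { int64_t coef; int exp; } dec;

    /* Equality cannot just compare the fields: 10 * 10^0 and 1 * 10^1 are the
       same number with different representations, so normalize first. */
    static dec normalize(dec x) {
        if (x.coef == 0) { x.exp = 0; return x; }
        while (x.coef % 10 == 0) { x.coef /= 10; x.exp += 1; }
        return x;
    }

    static int dec_eq(dec a, dec b) {
        a = normalize(a);
        b = normalize(b);
        return a.coef == b.coef && a.exp == b.exp;
    }

    int main(void) {
        dec a = { 10, 0 };  /* 10 * 10^0 */
        dec b = {  1, 1 };  /*  1 * 10^1 */
        printf("raw fields equal:  %d\n", a.coef == b.coef && a.exp == b.exp); /* 0 */
        printf("after normalizing: %d\n", dec_eq(a, b));                       /* 1 */
        return 0;
    }

And that normalization loop (or something like it) ends up on the hot path of every comparison, which is exactly the cost that the integer fast path hides.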
Wtf is up with the clown car that is floating point standards?
Well, for one thing, IEEE-757 was a significant improvement on the vendor-specific ways of handling floating point that it replaced ( https://people.eecs.berkeley.edu/~wkahan/ieee754status/754st... ).
I wasn't a big fan of floating point until I worked with a former college professor who had taught astrophysics. When possible, he preferred to use respected libraries that would give accurate results fast. But when he had to implement things himself, he didn't necessarily want the fastest or the most accurate implementation; he'd deliberately make, and document, the tradeoffs for his implementation. He could analyze an algorithm to estimate the accumulated units-in-the-last-place error ( https://en.wikipedia.org/wiki/Unit_in_the_last_place ), but he also recognized when that level of analysis wasn't necessary.
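For a sense of what counting error in ULPs looks like in practice (an illustrative sketch only, not his code; the ulp_distance helper is mine and assumes finite, same-sign inputs):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Distance between two doubles measured in units in the last place (ULPs),
       using the fact that the IEEE 754 bit patterns of finite, same-sign
       doubles are ordered like integers. */
    static int64_t ulp_distance(double a, double b) {
        int64_t ia, ib;
        memcpy(&ia, &a, sizeof ia);
        memcpy(&ib, &b, sizeof ib);
        return llabs(ia - ib);
    }

    int main(void) {
        /* Summing 0.1 ten times does not give exactly 1.0 ... */
        double sum = 0.0;
        for (int i = 0; i < 10; i++) sum += 0.1;
        /* ... but the accumulated rounding error is tiny when counted in ULPs. */
        printf("sum   = %.17g\n", sum);
        printf("error = %lld ULPs\n", (long long)ulp_distance(sum, 1.0));
        printf("1 ULP at 1.0 is %g\n", nextafter(1.0, 2.0) - 1.0);  /* 2^-52 */
        return 0;
    }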
Oops, I meant IEEE-754.
IEEE 754 is a floating point standard. It has a few warts that would be nice to fix if we had tabula rasa, but on the whole is one of the most successful standards anywhere. It defines a set of binary and decimal types and operations that make defensible engineering tradeoffs and are used across all sorts of software and hardware with great effect. In the places where better choices might be made knowing what we know today, there are historical reasons why different choices were made in the past.
DEC64 is just some bullshit one dude made up, and has nothing to do with “floating-point standards.”
It is important to remember that IEEE 754 is, in practice, aspirational. It is very complex and nobody gets it 100% correct. There are so many edge cases around the sticky bit, quiet vs. signaling NaNs, etc., that a processor that gets every special case 100% correct simply does not exist.
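As one example of the kind of corner I mean, here's a rough sketch of the quiet-vs-signaling NaN distinction. The bit patterns assume the usual binary64 convention (top significand bit set means quiet); whether the signaling case actually raises the invalid flag depends on how faithfully your compiler and CPU handle it, which is rather the point.

    #include <fenv.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #pragma STDC FENV_ACCESS ON   /* not all compilers honor this, which is part of the problem */

    /* Build a double from a raw IEEE 754 bit pattern. */
    static double from_bits(uint64_t bits) {
        double d;
        memcpy(&d, &bits, sizeof d);
        return d;
    }

    int main(void) {
        volatile double qnan = from_bits(0x7FF8000000000000ULL); /* quiet NaN     */
        volatile double snan = from_bits(0x7FF0000000000001ULL); /* signaling NaN */
        volatile double r;

        feclearexcept(FE_ALL_EXCEPT);
        r = qnan + 1.0;                 /* quiet NaN: should propagate silently */
        printf("qNaN raised invalid: %d\n", fetestexcept(FE_INVALID) != 0);

        feclearexcept(FE_ALL_EXCEPT);
        r = snan + 1.0;                 /* signaling NaN: should raise invalid  */
        printf("sNaN raised invalid: %d\n", fetestexcept(FE_INVALID) != 0);

        (void)r;
        return 0;
    }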
One of the most important things that IEEE 754 mandates is gradual underflow (denormals) in the smallest binade. Otherwise you have a giant non-monotonic jump between the smallest normal float and zero, which plays havoc with the stability of numerical algorithms.
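A small sketch of what gradual underflow buys you (standard C, nothing exotic): the spacing below the smallest normal double keeps shrinking gradually instead of jumping straight to zero, which is what preserves the property that x != y implies x - y != 0.

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double min_normal = DBL_MIN;                      /* smallest normal, ~2.2e-308 */
        double below      = nextafter(min_normal, 0.0);   /* largest subnormal          */

        printf("DBL_MIN            = %a\n", min_normal);
        printf("next below DBL_MIN = %a (subnormal)\n", below);
        printf("smallest subnormal = %a\n", DBL_TRUE_MIN);

        /* The stability-preserving property: if x != y then x - y != 0.
           Flushing subnormals to zero breaks it, because min_normal - below
           is itself subnormal. */
        printf("x != y: %d,  x - y != 0: %d\n",
               min_normal != below, (min_normal - below) != 0.0);
        return 0;
    }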
Sorry, no. IEEE 754 is correctly implemented in pretty much all modern hardware [1], save for the fact that optional operations (e.g., the recommended transcendental operations) are not implemented.
The problem you run into is that the compiler generally does not implement the IEEE 754 model fully strictly, especially under default flags: you have to opt into strict IEEE 754 conformance, and even then I'd be wary of the potential for bugs. (Hence one of the things I'm working on, quite slowly, is a custom compiler designed to produce 100% predictable assembly output for floating-point operations, so that I can test floating-point implementation details without pesky optimizations interfering.)
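One concrete example of the compiler deviating from the written expression: under default flags, many compilers will contract a*b + c into a fused multiply-add when the hardware has one, which rounds once instead of twice. A sketch, with constants chosen to make the difference visible; whether the first line prints 0 or the fused result depends on your compiler's contraction default, which is exactly the problem:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double a = 1.0 + 0x1p-27;   /* 1 + 2^-27 */
        double b = 1.0 - 0x1p-27;   /* 1 - 2^-27 */
        double c = -1.0;

        /* Written as two IEEE 754 operations, a*b rounds to exactly 1.0,
           so the expression evaluates to 0.0. */
        double two_roundings = a * b + c;

        /* A fused multiply-add rounds only once and keeps the -2^-54 term. */
        double fused = fma(a, b, c);

        printf("a*b + c      = %g\n", two_roundings);  /* 0 under strict evaluation */
        printf("fma(a, b, c) = %g\n", fused);          /* -5.55e-17 (= -2^-54)      */
        return 0;
    }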
[1] The biggest stumbling block is denormal support: a lot of processors opted to support denormals only by trapping on them and having an OS-level routine fix up the result. That said, both AMD and Apple have figured out how to support denormals in hardware with no performance penalty (Intel has some way to go), and from what I can tell, even most GPUs have given up and added full denormal support as well.
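For what it's worth, on x86 the escape hatch that avoids those traps is exposed to user code as the flush-to-zero bit in MXCSR (with denormals-are-zero as its companion for inputs). A rough sketch, x86-specific and assuming SSE math, which is the default on x86-64:

    #include <float.h>
    #include <stdio.h>
    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */

    int main(void) {
        volatile double x = DBL_MIN;   /* smallest normal double */

        /* Default behavior: dividing by 4 lands in the subnormal range. */
        printf("default: DBL_MIN / 4 = %a\n", x / 4.0);

        /* Flush-to-zero: subnormal results are simply replaced with zero,
           trading IEEE 754 conformance for avoiding the slow assist path. */
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        printf("FTZ on:  DBL_MIN / 4 = %a\n", x / 4.0);
        return 0;
    }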
The set of real numbers is continuous and uncountably infinite. Any attempt to fit it into a discrete finite set necessarily requires severe tradeoffs. Different tradeoffs are desirable for different applications.