Similar method to python math.isClose but in runtime with IFloatingPointIeee754
#110666
-
Python currently has a convenience method, `math.isclose`, to check whether two floating-point numbers are similar/close. Is there an equivalent built-in in the runtime?
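For context, Python's `math.isclose` is documented as the check `abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)`. A minimal sketch of that same check written against the generic math interfaces might look like the following; the `IsClose` helper and its special-case handling are my own illustration, not an existing or proposed runtime API:

```csharp
using System.Numerics;

static class FloatingPointExamples
{
    // Illustrative only: mirrors the documented math.isclose formula,
    // abs(a - b) <= max(relTol * max(abs(a), abs(b)), absTol).
    public static bool IsClose<T>(T a, T b, T relTol, T absTol)
        where T : IFloatingPointIeee754<T>
    {
        // NaN is never close to anything.
        if (T.IsNaN(a) || T.IsNaN(b))
        {
            return false;
        }

        // Exact equality covers infinities of the same sign.
        if (a == b)
        {
            return true;
        }

        // A finite value is never "close" to an infinity.
        if (T.IsInfinity(a) || T.IsInfinity(b))
        {
            return false;
        }

        T diff = T.Abs(a - b);
        return diff <= T.Max(relTol * T.Max(T.Abs(a), T.Abs(b)), absTol);
    }
}
```

Under this definition, `FloatingPointExamples.IsClose(0.1 + 0.2, 0.3, relTol: 1e-9, absTol: 0.0)` returns `true`.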
-
No, because the concept of "is close" depends on your program's concept of close. Is the magnitude of the difference important? What about the percentage? ULPs? For example, are 500 and 505 close? That's a magnitude difference of 5, but only a 1% change. In some situations that would be fine (such as milliliters in a recipe), but in other situations it's a meaningful difference.
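To make that ambiguity concrete, here is an illustrative sketch (the helpers are examples added here, not runtime APIs) asking three different "close" questions about 500 and 505:

```csharp
using System;

// Absolute difference: is the gap itself small?
static bool CloseByAbsoluteDifference(double a, double b, double tolerance)
    => Math.Abs(a - b) <= tolerance;

// Relative difference: is the gap small compared to the values themselves?
static bool CloseByRelativeDifference(double a, double b, double tolerance)
    => Math.Abs(a - b) <= tolerance * Math.Max(Math.Abs(a), Math.Abs(b));

// ULP distance for finite, positive doubles: how many representable values
// lie between a and b. (A general version must also handle sign and NaN.)
static long UlpDistance(double a, double b)
{
    long bitsA = BitConverter.DoubleToInt64Bits(a);
    long bitsB = BitConverter.DoubleToInt64Bits(b);
    return Math.Abs(bitsA - bitsB);
}

Console.WriteLine(CloseByAbsoluteDifference(500, 505, tolerance: 1));    // False: 5 units apart
Console.WriteLine(CloseByRelativeDifference(500, 505, tolerance: 0.02)); // True: roughly 1% apart
Console.WriteLine(UlpDistance(500, 505)); // tens of trillions of representable doubles apart
```

Each answer is "correct" for its own definition of close, which is exactly why no single built-in can pick the right one for you.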
-
No, and part of the reason is that such APIs are error prone, are often the incorrect thing to do/use, and have been incorrectly taken and used in places where developers did not fully understand the contexts in which they would be appropriate.

IEEE 754 floating-point arithmetic is, by spec, deterministic. Additionally, it has what is effectively "variable precision", where the delta between representable values doubles at every power of 2.

Most of the perception of needing "tolerance" comes from developers working and thinking about their math in base-10 when the values are actually base-2. They see a result such as `0.1 + 0.2` not comparing equal to `0.3` and assume the computation is imprecise. What actually happens is that each literal is rounded to the nearest representable base-2 value, the addition of those values is correctly rounded, and the result is simply a different representable value than the one nearest to `0.3`. Even for these "surprising" cases the behavior is fully deterministic and correctly rounded. But developers never really ask to see the exact base-2 values they are operating on; they only look at the rounded base-10 text output.

So the general issue is that developers see a mismatch between what they think is happening and what is actually happening. They then think (due to widespread misinformation and misunderstanding of the topic) that there is some fuzziness to the computation and that they need to account for it by saying "ehh, looks close enough", when in actuality that is the wrong thing to do and will almost consistently lead to bugs and other problems. A common example of this is it being a cause of clipping glitches in video games: rather than testing against a finite boundary, the logic instead did an "isClose" check and mistakenly allowed something through that should have otherwise collided.

Some of this misunderstanding comes from why programming languages define "epsilon" constants in the first place (in .NET, `double.Epsilon` is the smallest representable positive value, not a general-purpose comparison tolerance). Other misunderstanding comes from the handling of "near zero" (subnormal) values and from various good algorithms that use such checks to handle edge cases where division by a near-zero value would otherwise blow the result up toward infinity.

Once the format and its guarantees are understood, and algorithms account for them and for the representation, it becomes clear that there are better and more efficient ways to test for correctness, for boundary cases, and for whether something is "close enough" that it should be considered equivalent or whether the compounded error has become too large, and that the naive check of "the absolute difference is less than some epsilon" is not one of them.
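As a small sketch of the determinism and "variable precision" described above (the values shown are standard IEEE 754 double behavior):

```csharp
using System;

// The arithmetic is deterministic: the same inputs always produce the same,
// correctly rounded result. The "surprise" comes from reading base-2 values in base-10.
double sum = 0.1 + 0.2;
Console.WriteLine(sum == 0.3);            // False: 0.3 rounds to a different base-2 value
Console.WriteLine(sum.ToString("G17"));   // 0.30000000000000004
Console.WriteLine((0.3).ToString("G17")); // 0.29999999999999999

// The gap between adjacent representable doubles doubles at every power of two,
// so a fixed tolerance means something different at every magnitude.
Console.WriteLine(Math.BitIncrement(1.0) - 1.0);                 // ~2.22E-16
Console.WriteLine(Math.BitIncrement(1_000_000.0) - 1_000_000.0); // ~1.16E-10

// double.Epsilon is the smallest representable positive double (~4.94E-324);
// it is not a general-purpose comparison tolerance.
Console.WriteLine(double.Epsilon);
```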
I can't say for certain without seeing a concrete example, but serializing precise measurements as float/double will always introduce error. That error gets greater the larger the model; additionally, the larger the entire model is relative to the unit that `1.0` represents, th…
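To illustrate that last point, here is a small sketch (assuming single-precision storage; the same scaling applies to double at larger magnitudes) showing how the absolute spacing between adjacent representable values grows the further a coordinate sits from the origin relative to whatever `1.0` stands for:

```csharp
using System;

// Gap between a value and the next representable float above it.
static float SpacingAt(float value) => MathF.BitIncrement(value) - value;

Console.WriteLine(SpacingAt(1.0f));          // ~1.19E-07
Console.WriteLine(SpacingAt(10_000.0f));     // ~0.00098
Console.WriteLine(SpacingAt(10_000_000.0f)); // 1: adjacent values are a whole unit apart
```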