Why Python's ZeroDivisionError for the floating-point type is a bad and unnecessary feature

In both Python 2.7 and 3.6, dividing a floating-point number by zero raises a ZeroDivisionError:

>>> 1. / 0.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: float division by zero

This is not consistent with mathematical convention, under which $1 / 0 = \infty$ on the extended real line. Wouldn't it be better to just return inf? In fact, the IEEE 754 standard for floating-point arithmetic stipulates that the divideByZero exception return an infinity by default [1]:

The default result of divideByZero shall be an ∞ correctly signed according to the operation . . .

In many programming languages, the expression 1. / 0. evaluates to positive infinity without raising an error. In Julia, for example,

julia> 1.0 / 0.0
Inf

In C, before the C99 standard introduced the INFINITY macro, the expression 1. / 0. was sometimes used to define infinity in a compiler-independent way:

#ifndef INFINITY
#define INFINITY (1. / 0.)  /* evaluates to +inf on IEEE 754 platforms */
#endif

The fact that Python raises a ZeroDivisionError instead of returning inf thus deviates from the default behavior prescribed by the IEEE 754 standard.
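
Interestingly, IEEE-conformant behavior is already available within the Python ecosystem. As a minimal sketch, assuming NumPy is installed, its float64 division follows the IEEE default and at most emits a RuntimeWarning instead of raising (np.errstate is used here only to silence that warning):

>>> import numpy as np
>>> with np.errstate(divide='ignore'):
...     print(np.float64(1.) / np.float64(0.))
...
inf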

There is also the special case where both the dividend and the divisor are zero. IEEE 754 classifies 0 / 0 not as divideByZero but as an invalid operation, whose default result is a quiet NaN [1]. Accordingly, in many programming languages, 0. / 0. evaluates to NaN. In Haskell, for example,

ghci> (0 :: Double) / (0 :: Double)
NaN

The aforementioned mechanism for handling division-by-zero exceptions has an important advantage: it ensures that the division of two floating-point numbers always returns a floating-point number, no matter what. In mathematical terms, the set of floating-point numbers is closed under division. The resulting NaN or Inf values can easily be tested with standard library functions like isnan() and isinf(). This also has the practical benefit of sparing the programmer the hassle of error handling.
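
In Python these predicates live in the math module. A quick sketch, with the special values constructed by hand since plain float division cannot produce them:

>>> import math
>>> math.isinf(float('inf'))
True
>>> math.isnan(float('nan'))
True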

But in Python, one first has to catch the ZeroDivisionError with a try/except statement:

import math

def zerodiv(a, b):
    """Division that maps ZeroDivisionError to the IEEE 754 default results."""
    try:
        return a / b
    except ZeroDivisionError:
        # For floats, this branch is reached only when b is a (signed) zero.
        if a == 0.:
            return float('nan')  # invalid operation: 0 / 0
        # divideByZero: an infinity signed by both operands
        return math.copysign(float('inf'), a) * math.copysign(1., b)
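
Applied to a few representative inputs, the helper reproduces the IEEE defaults:

>>> zerodiv(1., 0.)
inf
>>> zerodiv(-1., 0.)
-inf
>>> zerodiv(0., 0.)
nan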

In other words, with the floating-point ZeroDivisionError, Python has created a problem that does not even exist in many other languages!

Why does Python have a ZeroDivisionError for floating-point arithmetic in the first place? I could not find a justification after a bit of searching. My impression is that the floating-point ZeroDivisionError simply mimics the integer version of the error. This is an unnecessary and bad feature, because it violates the closure of the floating-point type and requires additional code for error handling. I hope this flaw can be righted in Python 4.
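
Indeed, integer division by zero, for which IEEE 754 prescribes no special result, raises the same family of error:

>>> 1 // 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero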

References

  1. Zuras, D. et al. (2008). IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2008, 1–70, doi:10.1109/ieeestd.2008.4610935.