Floating-Point Numbers

Floating-point numbers are a way to represent real numbers in computers. Each value is encoded in three parts: a sign bit, an exponent, and a mantissa (also called the significand). This representation covers a very wide range of magnitudes, but its precision varies across that range.
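
To make the three fields concrete, here is a small Python sketch (the helper name decompose_float32 and the use of the standard struct module are illustrative choices, not part of the visualization) that splits a 32-bit float into its sign, exponent, and mantissa bits and rebuilds the value for a normal number:

```python
import struct

def decompose_float32(x):
    """Split a 32-bit float into its sign, exponent, and mantissa fields."""
    bits = int.from_bytes(struct.pack(">f", x), "big")
    sign = bits >> 31                  # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits, stored with a bias of 127
    mantissa = bits & 0x7FFFFF         # 23 bits, with an implicit leading 1 for normal numbers
    return sign, exponent, mantissa

sign, exponent, mantissa = decompose_float32(6.5)
# For a normal number: value = (-1)^sign * (1 + mantissa / 2^23) * 2^(exponent - 127)
value = (-1) ** sign * (1 + mantissa / 2**23) * 2 ** (exponent - 127)
print(sign, exponent, mantissa, value)   # 0 129 5242880 6.5
```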

16-bit (Half Precision)

  • 1 sign bit
  • 5 exponent bits
  • 10 mantissa bits
  • Range: ±65,504
  • Precision: ~3.3 decimal digits

32-bit (Single Precision)

  • 1 sign bit
  • 8 exponent bits
  • 23 mantissa bits
  • Range: ±3.4 × 10³⁸
  • Precision: ~7.2 decimal digits
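
Both sets of figures above follow directly from the bit counts. As a sketch (assuming the standard IEEE 754 layout, where the all-ones exponent is reserved for infinity and NaN; the helper name format_limits is illustrative), the largest finite value and the approximate decimal precision can be computed like this:

```python
import math

def format_limits(exponent_bits, mantissa_bits):
    """Largest finite value and approximate decimal digits for an IEEE 754 binary format."""
    bias = 2 ** (exponent_bits - 1) - 1                     # 15 for half, 127 for single
    max_exponent = bias                                     # all-ones exponent is reserved for inf/NaN
    max_value = (2 - 2 ** -mantissa_bits) * 2 ** max_exponent
    decimal_digits = (mantissa_bits + 1) * math.log10(2)    # +1 for the implicit leading bit
    return max_value, decimal_digits

print(format_limits(5, 10))    # (65504.0, ~3.31)   -> half precision
print(format_limits(8, 23))    # (~3.4028e38, ~7.22) -> single precision
```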

The visualization below shows how floating-point numbers are distributed and how their precision varies across different ranges. You can see that the gaps between representable numbers grow as their magnitude increases.
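
One way to see those growing gaps numerically (a minimal sketch assuming NumPy is available; np.spacing returns the distance from a value to the next representable value in the same format):

```python
import numpy as np

# Gap to the next representable value at different magnitudes,
# in half precision and single precision.
for magnitude in [1.0, 100.0, 10_000.0]:
    gap16 = np.spacing(np.float16(magnitude))
    gap32 = np.spacing(np.float32(magnitude))
    print(f"near {magnitude:>8.0f}: float16 gap = {float(gap16):.3e}, "
          f"float32 gap = {float(gap32):.3e}")
```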

[Interactive visualization. A Configuration panel selects the number format, extra mantissa bits (for increased precision), target value, and range. A Precision Analysis readout reports the current value, precision gap, relative precision, next representable value, total mantissa bits, and the largest gap near the target. Two charts show the Full Range Distribution and a Zoomed View of the precision gaps within ±1% of the target value.]
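
The quantities in the Precision Analysis readout can be reproduced directly. Here is a sketch (again assuming NumPy; the helper name precision_analysis is illustrative) that computes the gap, relative precision, and next representable value around a target in single precision:

```python
import numpy as np

def precision_analysis(target, dtype=np.float32):
    """Gap, next representable value, and relative precision around `target`."""
    x = dtype(target)
    gap = np.spacing(x)                          # distance to the next larger representable value
    next_value = np.nextafter(x, dtype(np.inf))  # next representable value above x
    relative = float(gap) / abs(float(x)) if x != 0 else float("inf")
    return float(x), float(gap), float(next_value), relative

print(precision_analysis(1.0))
# (1.0, 1.1920928955078125e-07, 1.0000001192092896, 1.1920928955078125e-07)
```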

Understanding Precision Gaps

  • Precision gaps grow as numbers get larger
  • At the same magnitude, 16-bit floats have larger gaps than 32-bit floats
  • Numbers close to zero have the smallest absolute gaps between representable values
  • Special values: zero, infinity, NaN (Not a Number)
  • Denormal (subnormal) numbers provide gradual underflow below the smallest normal value; the sketch after this list shows these special cases in action
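
The special values and gradual underflow can be poked at directly with Python's built-in float, which is a 64-bit IEEE 754 double; the same behavior applies to the 16- and 32-bit formats, just with different limits. A minimal sketch:

```python
import math
import sys

# Special values.
print(math.inf > sys.float_info.max)      # True: infinity is larger than any finite float
print(math.nan == math.nan)               # False: NaN compares unequal, even to itself
print(math.isnan(math.inf - math.inf))    # True: inf - inf is undefined and yields NaN

# Gradual underflow: below the smallest normal number, denormals fill the gap down to zero.
smallest_normal = sys.float_info.min      # about 2.2e-308 for a 64-bit double
print(smallest_normal / 2)                # still nonzero: a denormal (subnormal) value
print(5e-324 / 2)                         # half the smallest denormal rounds to 0.0
```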