Exploring Fixed Point Arithmetic
Some applications use floating point computation only because they were originally optimized for a different hardware architecture. As explained in "Deep Learning with INT8 Optimization on Xilinx Devices," using fixed point arithmetic for applications like deep learning can significantly improve power efficiency and reduce area while maintaining the same level of accuracy. It is recommended to explore fixed point arithmetic for your application before committing to floating point operations.