Doing the Impossible: Bringing Int8 to AI Training
Training a neural network executes billions of multiply-accumulate (MAC) operations, typically in 32-bit floating-point (FP32) precision. The processing power this demands makes training expensive and inefficient.
8-bit integer (Int8) operations require far fewer resources and are already widely used for inference. Most practitioners, however, consider Int8 too imprecise for training and regard Int8 training as impossible.
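To make the trade-off concrete, here is a minimal sketch of a generic Int8 quantized multiply-accumulate, the kind commonly used for inference. It assumes simple per-tensor symmetric quantization and is only an illustration, not the patented training method described in this article.

```python
import numpy as np

def quantize_int8(x):
    """Map an FP32 tensor to Int8 with a per-tensor scale (illustrative only)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)

# FP32 reference multiply-accumulate.
ref = a @ b

# Int8 path: quantize the inputs, accumulate in Int32, rescale back to FP32.
qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)
acc = qa.astype(np.int32) @ qb.astype(np.int32)
approx = acc.astype(np.float32) * (sa * sb)

# The small relative error is usually tolerable for a single inference pass;
# the hard part is keeping it from accumulating over billions of training updates.
rel_err = np.linalg.norm(ref - approx) / np.linalg.norm(ref)
print(f"relative error of Int8 matmul vs FP32: {rel_err:.4f}")
```

A single quantized matmul like this loses only a little accuracy, which is why Int8 inference is standard practice; the difficulty addressed here is preserving accuracy when such errors compound throughout training.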
Using a series of patented, breakthrough technologies, we became the first to make Int8 training a reality while maintaining the highest levels of accuracy.
The implications are tremendous: an 8-bit multiplier is 30x smaller than an FP32 multiplier and consumes 20x less energy.