Deep-AI uniquely provides an integrated, holistic, and accelerated deep learning solution for both training and inference at the edge.
Our solution runs on Xilinx Alveo PCIe cards, certified and available on a variety of standard servers from leading server vendors. The same hardware handles both inference and retraining of the deep learning model, enabling an ongoing iterative process that keeps the model up to date with the new data that is continuously generated.
Furthermore, most systems train at 32-bit floating point, while there is a growing need to run inference at 8-bit fixed point. In these cases, a challenging, time- and resource-consuming quantization process must be run manually to convert the 32-bit training output into an 8-bit inference input, and this conversion often results in a loss of accuracy. Our training output, by contrast, is inference-ready (including for 3rd-party inference systems):
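To illustrate the conversion step described above, here is a minimal sketch of symmetric per-tensor post-training quantization from 32-bit float to 8-bit integers, using NumPy. This is a generic textbook example under assumed round-to-nearest symmetric scaling; it is not Deep-AI's actual quantization flow, and the function names are illustrative.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map float32 weights into
    # the signed 8-bit range [-127, 127] with a single scale factor.
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float32 weights.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# With round-to-nearest, the per-weight reconstruction error is
# bounded by half a quantization step.
max_err = np.max(np.abs(w - w_hat))
print(max_err <= scale / 2 + 1e-6)
```

Even with this bounded per-weight error, small rounding differences accumulate across the layers of a deep network, which is why naive post-training quantization often degrades end-to-end accuracy and typically requires calibration or retraining to recover it.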