AI Training. For Everyone. Everywhere.


Training AI models is painful

WE SOLVED IT

We developed the technology to build the world’s smallest AI training chip with the best price-performance

Today’s solutions use the largest, most expensive chips in the world, yet training is still time-consuming

Doing the Impossible: Bringing Int8 to AI Training

Training sessions execute billions of multiply-accumulate operations, typically in 32-bit floating-point (FP32) precision. The processing power this requires makes training expensive and inefficient.

Integer operations with 8-bit precision (Int8) require far fewer resources and are already used for inference. However, most believe Int8 is too imprecise for training, making it impossible to use there.

Using a series of patented, breakthrough technologies, we became the first to make Int8 training a reality while maintaining the highest levels of accuracy.

The implications are tremendous: an 8-bit multiplier is 30x smaller than an FP32 multiplier and consumes 20x less energy.
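To make the idea concrete, here is a minimal NumPy sketch of generic symmetric Int8 quantization with 32-bit integer accumulation: values are stored as 8-bit integers with per-tensor scale factors, the products accumulate in a wide integer, and a single rescale at the end recovers a floating-point result. This is only an illustration of why Int8 multiply-accumulates are cheap; it is not DeepAI’s patented training method, and the function names and sizes are assumptions chosen for the example.

```python
# Generic illustration of Int8 multiply-accumulate with int32 accumulation.
# NOT DeepAI's patented training method; names and sizes are assumed.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: float tensor -> int8 values + scale."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Two FP32 operands, standing in for the activations and weights of one layer.
rng = np.random.default_rng(0)
a = rng.standard_normal(1024).astype(np.float32)
w = rng.standard_normal(1024).astype(np.float32)

qa, sa = quantize_int8(a)
qw, sw = quantize_int8(w)

# The hot loop: 8-bit multiplies accumulated into a 32-bit integer.
acc = np.dot(qa.astype(np.int32), qw.astype(np.int32))

# One rescale at the end recovers a floating-point result.
approx = acc * sa * sw
exact = float(np.dot(a, w))
print(f"FP32 result: {exact:+.4f}   Int8 result: {approx:+.4f}")
```

The expensive part of the computation (the dot product) touches only 8-bit operands; the floating-point scales appear once per tensor, which is why the hardware cost per multiply drops so sharply.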


Today’s AI Training is Painful

Using existing best practices and technology, a single training session can take days to weeks and cost thousands of dollars. To complicate matters, AI models must be continually retrained to maintain high accuracy. Adding to the challenge are data sources that grow exponentially, significantly increasing training time and costs.

Developers training vision models currently have two options:

  • Run on-cloud – typical 50% utilization

    $12k/month per user

  • Buy an on-prem server – shared between multiple users

    $200k

While some companies use gaming and graphics cards to reduce costs, these cards significantly increase training time and cannot scale to production.
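To put the two options side by side, here is a minimal back-of-the-envelope sketch using the figures above. The team size and project length are illustrative assumptions, not DeepAI data, and the on-prem figure ignores power, hosting and maintenance.

```python
# Rough cost comparison using the figures quoted above.
# ASSUMPTIONS (illustrative only, not DeepAI data): 3 users, 12 months.
CLOUD_COST_PER_USER_PER_MONTH = 12_000   # "$12k/month per user" at ~50% utilization
ON_PREM_SERVER_COST = 200_000            # "$200k" server, shared between users

users = 3     # assumed team size
months = 12   # assumed project length

cloud_total = CLOUD_COST_PER_USER_PER_MONTH * users * months
on_prem_total = ON_PREM_SERVER_COST  # excludes power, hosting and maintenance

print(f"On-cloud, {users} users x {months} months: ${cloud_total:,}")
print(f"On-prem server (shared):                   ${on_prem_total:,}")
```

Either way, the developer pays hundreds of thousands of dollars per year or ties up a large capital expense that must be shared across a team.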

At every price point, Int8 technology delivers 30x higher performance


DeepAI’s Revolution:

Game-Changing Pricing & Subscription Business Models

The world’s smallest AI training chip, designed by DeepAI using Int8 technology, delivers the same performance as high-end data-center GPUs at an affordable price. This enables us to offer low price points and subscription models for AI training, breaking the GPU monopoly.


Our Story

Our leadership team has a proven track record in ASIC design. We developed the world’s best-in-class network processing chips at EZchip, which was acquired by Mellanox for $800M in 2016.
Since then, we have developed a robust Int8 solution that can be implemented in silicon, built an Int8 training hardware prototype, and verified its accuracy on a variety of ML models and tasks.
Now that the technology has matured, our next step is to create the smallest AI training ASIC in the world, leveraging our patented technology.

  • Dr. Moshe Mishali – CEO, co-founder

    PhD EE/Technion
    81 – Elite IDF unit
    7 Patents
    EZchip (Architect)

  • Dr. Amir Rosen – Chief AI Scientist

    PhD EE/Technion
    81 – Elite IDF unit
    6 Patents
    EZchip (Chief Architect)
    Mellanox (Chief Architect)
    Toshiba (Senior Algorithm Engineer)

  • Eyal Lavee – VP R&D, Software Architect

    BSc CS/Technion
    EZchip (SW Dept. Mgr)
    Mellanox (SW Architect)

  • Nadav Tobias – Hardware Architect

    BSc CSE/BGU
    EZchip (Architect, Team Leader)
    Mellanox (Architect)
    Marvell (Senior Principal Engineer)

  • Dror Israel – COO, CFO

    MBA/Technion
    EZchip (CFO)
    Enzymotec (CFO)

  • Ehab Wattad – Research Team Leader

    MSc CS/Technion
    EZchip (Algo. Team Leader)
    Amazon (Software Team Leader)

Technological revolutions are rare;
this is your opportunity to take part in one.