Why it seems like every Artificial Intelligence company is making their own chip

Big tech firms are racing to build custom chips to boost Artificial Intelligence performance, cut costs and reduce reliance on vendors such as Nvidia. Intel and Meta have unveiled new designs that claim sizable gains in efficiency and speed for training and deploying large models.

Big tech firms are investing in custom chips to accelerate Artificial Intelligence workloads, reduce vendor dependence and lower the cost of deploying advanced models. Meta has introduced a new generation of in-house processors to support ranking, recommendations and future products such as smart glasses, while Intel launched the Gaudi 3 as part of a broader push by chipmakers to supply hardware tailored to training and inference. The rush mirrors moves by other players, including Google and Apple, which is reportedly preparing M4 processors with integrated Artificial Intelligence capabilities for its Mac line.

Intel said the Gaudi 3 delivers more than double the power efficiency and processes models roughly one and a half times faster than Nvidia's H100 GPU. The chip is offered in multiple configurations, including eight chips on a single motherboard and standalone cards that can be integrated into existing systems. Intel reported testing on models such as Meta's Llama and highlighted support for workloads including Stable Diffusion for image generation and OpenAI's Whisper for speech recognition. Those performance and integration claims are central to vendors' pitches that custom silicon can better match specific algorithms and deployment needs.

Industry experts quoted in the article said custom processors can lower per-customer training costs, enable more private and proprietary data control and make specialized deployments viable without relying solely on external model providers. Amrit Jassal of Egnyte said in-house chips lower the bar for training per-customer, per-task models and support high-security use cases that move beyond API consumption. Rodolfo Rosini of Vaire emphasized financial drivers and allocation constraints around popular chips, and Michal Oglodek of Ivy.ai highlighted privacy and strategic control as reasons companies will race to own hardware as well as software.


Introducing Mistral 3: open Artificial Intelligence models

Mistral 3 is a family of open, multimodal and multilingual Artificial Intelligence models that includes three Ministral edge models and a sparse Mistral Large 3 trained with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral Artificial Intelligence partner to accelerate new family of open models

NVIDIA and Mistral Artificial Intelligence announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise Artificial Intelligence deployments starting Tuesday, Dec. 2.
