Microsoft released its Maia 200 AI chip this week. The launch marks the company's second generation of proprietary silicon for artificial intelligence workloads.
The new chip targets inference operations inside Azure data centers. Microsoft designed it specifically to run AI models more efficiently than relying solely on third-party hardware.
TSMC manufactured Maia 200 on its 3nm process technology. The chip packs 216GB of HBM3e memory and 272MB of on-chip SRAM into a 750W thermal design.
Microsoft's performance figures put the chip above 10 petaFLOPS at 4-bit precision and above 5 petaFLOPS at 8-bit precision, roughly the two-to-one ratio expected when operand width is halved on the same silicon.
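For context, here is a hedged back-of-envelope sketch in Python showing what those figures could imply for serving a large model. The 200-billion-parameter model size and the 2-FLOPs-per-parameter rule of thumb are illustrative assumptions, not numbers from the announcement; only the petaFLOPS and memory figures come from the quoted specs.

```python
# Back-of-envelope inference math using the quoted Maia 200 specs.
# The 200B-parameter model and the 2-FLOPs-per-parameter rule of
# thumb are illustrative assumptions, not Microsoft figures.

PEAK_FP4_FLOPS = 10e15       # >10 petaFLOPS at 4-bit precision (quoted)
HBM_CAPACITY_GB = 216        # HBM3e capacity (quoted)

params = 200e9               # hypothetical dense model size
weight_gb = params * 0.5 / 1e9          # 4-bit weights = 0.5 bytes/param

flops_per_token = 2 * params            # ~2 FLOPs per parameter per token
peak_tokens_per_sec = PEAK_FP4_FLOPS / flops_per_token

print(f"4-bit weights: {weight_gb:.0f} GB (vs {HBM_CAPACITY_GB} GB of HBM)")
print(f"Compute-bound ceiling: {peak_tokens_per_sec:,.0f} tokens/s")
```

Under those assumptions, a 200-billion-parameter model quantized to 4 bits would occupy about 100GB, comfortably inside the chip's 216GB of HBM3e, with a theoretical compute ceiling of roughly 25,000 tokens per second; real-world serving would land well below that once memory bandwidth and batching overheads are counted.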
Microsoft claims this makes Maia 200 faster than Amazon’s Trainium 3 and Google’s TPU v7. The chip will run OpenAI’s GPT-5.2 models and power Microsoft’s Copilot services.
Nvidia stock dropped 0.64% after Microsoft's announcement. The modest decline suggests investors aren't yet treating the chip as a serious competitive threat.
Demand for Nvidia’s chips continues to outstrip supply. The company still dominates the market for training large language models and complex AI systems.
Microsoft isn’t trying to replace Nvidia entirely. The strategy focuses on building alternatives for specific workloads where custom silicon makes financial sense.
Cloud providers face rising power costs as AI usage grows. Purpose-built chips help control those expenses while maintaining profit margins.
Microsoft joins a growing list of tech giants building their own AI processors. Google has deployed Tensor Processing Units for years across its cloud infrastructure.
Amazon uses Trainium and Inferentia chips throughout AWS for similar purposes. These custom designs handle specific tasks more efficiently than general-purpose hardware.
The approach gives cloud providers more leverage when negotiating with chip suppliers. It also reduces dependence on a single vendor for critical infrastructure components.
Microsoft is releasing a Maia SDK alongside the chip. The toolkit helps developers port and tune their models for the new hardware.
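The article doesn't detail the SDK's actual API, so the sketch below uses PyTorch's stock dynamic quantization purely as a stand-in for the general category of precision-lowering work such toolkits perform; none of this is Maia SDK code.

```python
# Illustrative only: PyTorch dynamic quantization as a stand-in for
# the kind of model optimization an accelerator SDK typically does.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 4096),
)

# Swap Linear layers for int8 dynamically quantized versions,
# shrinking weights ~4x and trading precision for inference speed.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 4096)
with torch.inference_mode():
    y = quantized(x)
print(y.shape)  # torch.Size([1, 4096])
```

Lowering weights from 16-bit to 8-bit or 4-bit representations is exactly the sort of transformation that lets inference-focused chips like Maia 200 reach their quoted low-precision throughput.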
Maia 200 is already processing live workloads in Microsoft's Iowa data center. The company plans to expand deployment to additional regions throughout 2026.
The chip handles inference work while Nvidia hardware still powers the training side. This dual approach lets Microsoft optimize costs without sacrificing capability for demanding AI tasks.
Custom silicon won’t eliminate the need for Nvidia’s products anytime soon. But it gives tech companies more options as they build out massive AI infrastructure.
Microsoft’s move reflects a broader industry trend toward vertical integration in AI hardware. As computing demands grow, controlling more of the stack becomes increasingly valuable for major cloud providers.