Microsoft rolled out its Maia 200 AI chip last week. The launch represents the company’s second try at building custom silicon for artificial intelligence tasks.
One viral post captured the market chatter around the launch:

"BREAKING: Microsoft just dropped a chip that could END Nvidia's $4 trillion monopoly. It's already running GPT-5.2 in production. The real story (that everyone's missing): Nvidia controls 95% of AI chips. Margins? 70%+. Lead? Untouchable. Moat? Software lock-in…" — Shruti (@heyshrutimishra), January 26, 2026
The chip went live inside Microsoft’s Iowa datacenter. Other regions will get access later this year.
Maia 200 is fabricated on TSMC's 3nm process. The design targets inference, the stage where a trained AI model generates answers, rather than model training.
Microsoft built the chip to power Copilot and AI services sold through Azure. OpenAI’s GPT-5.2 models will also run on the new hardware.
The specs include 216GB of HBM3e memory and 272MB of on-chip SRAM. Microsoft claims the chip delivers over 10 petaFLOPS at 4-bit precision and more than 5 petaFLOPS at 8-bit precision.
All of this fits inside a 750W power envelope. Microsoft says that makes it faster than Amazon’s third-generation Trainium chip and Google’s seventh-generation TPU.
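Taken together, the claimed peak throughput and the 750W envelope imply a rough efficiency figure. A minimal back-of-envelope sketch, using only Microsoft's stated numbers (these are vendor-claimed peaks, not independent benchmarks):

```python
# Rough perf-per-watt from Microsoft's published Maia 200 figures.
# All inputs are claimed peak numbers from the announcement.
peak_fp4_flops = 10e15   # >10 petaFLOPS at 4-bit precision (claimed)
peak_fp8_flops = 5e15    # >5 petaFLOPS at 8-bit precision (claimed)
power_watts = 750        # stated power envelope

fp4_flops_per_watt = peak_fp4_flops / power_watts
fp8_flops_per_watt = peak_fp8_flops / power_watts

print(f"FP4: {fp4_flops_per_watt / 1e12:.1f} TFLOPS/W")  # ~13.3 TFLOPS/W
print(f"FP8: {fp8_flops_per_watt / 1e12:.1f} TFLOPS/W")  # ~6.7 TFLOPS/W
```

Real-world efficiency will be lower than peak, since sustained inference rarely saturates a chip's theoretical FLOPS, but numbers like these are what cloud providers weigh against rising datacenter power bills.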
Nvidia stock dropped 0.64% after the news broke. The muted response suggests investors aren’t worried about immediate threats to Nvidia’s market position.
Demand for Nvidia’s chips still outpaces supply. The company’s hardware remains the top choice for training large AI models that require massive computing power.
Microsoft isn’t trying to ditch Nvidia overnight. The goal is to gain more control over costs and reduce risk in the supply chain.
Custom chips let cloud providers handle specific tasks more efficiently. Running some workloads on proprietary hardware cuts expenses as AI usage scales up.
Power costs are climbing fast. Purpose-built chips help manage those rising bills while protecting profit margins.
Microsoft is also releasing a Maia SDK. The toolkit helps developers tune their models to run better on the new chip.
Microsoft isn’t breaking new ground here. Google has used Tensor Processing Units for years inside its cloud infrastructure.
Amazon deployed Trainium and Inferentia chips across AWS to handle similar workloads. Both companies want tighter control over their AI infrastructure.
These custom chips don’t replace Nvidia for every job. They’re designed for targeted use cases where efficiency matters most.
Big Tech companies are building parallel systems. They use Nvidia for heavy training work and their own chips for inference and lighter tasks.
This strategy spreads out risk. It also gives them leverage when negotiating prices with external chip suppliers.
Microsoft’s Maia 200 is already processing real workloads in production. The company plans to expand deployment throughout 2026 as manufacturing capacity increases.
Microsoft built its second-generation AI chip to cut costs and reduce Nvidia dependence, but investors see no immediate threat.
The post Microsoft (MSFT) Stock: Tech Giant Takes on Nvidia with New Maia 200 AI Chip appeared first on CoinCentral.