Saturday, November 1, 2025

Qualcomm Takes Aim at Nvidia with Powerful New AI Chips for Data Centers

Qualcomm has announced two new artificial-intelligence accelerator chips, the AI200 and AI250, aimed at the data-centre market, with commercial availability slated for 2026 and 2027 respectively. The move marks a clear pivot from the company’s traditional mobile-chip business into the infrastructure of generative AI and inference workloads, signalling a direct challenge to Nvidia’s long-standing dominance in the space.

Qualcomm’s decision comes at a time when the industry is seeing massive growth in demand for AI inference and large-language-model (LLM) deployments. The new chips support widely used AI frameworks and emphasise cost-efficient operation and improved memory performance, key metrics for inference workloads. The AI200 is described as supporting up to 768 GB of memory per card, while the AI250 introduces a “near-memory computing” architecture that delivers significantly higher effective memory bandwidth.

From a market perspective, Qualcomm’s announcement triggered a jump in its share price, which surged 12% on the day, reflecting investor enthusiasm for its diversification beyond smartphones. Broader market commentary noted that Qualcomm led the technology sector’s advance as part of a wider surge in semiconductor stocks.

However, the challenge for Qualcomm is steep. Nvidia currently holds an overwhelming share of the AI-data-centre accelerator market, with entrenched software ecosystems, established customer relationships and a sizeable lead in performance benchmarks. Qualcomm argues that its strength in mobile-device AI (via its Hexagon NPUs) and its experience in power efficiency give it an advantage, especially for inference workloads where cost and energy matter. But translating that strength into data-centre dominance will require not just the chips themselves but broader infrastructure, software stack compatibility, and customer trust.

Looking ahead, the industry will be watching key milestones: when Qualcomm ships its first rack-scale systems, whether the claimed memory-bandwidth advantages materialise in real-world deployments, and how quickly hyperscale cloud providers and AI operators adopt the new platforms. The Saudi-based startup Humain has already committed to deploying 200 megawatts of Qualcomm’s AI racks starting in 2026, an early signal of support. If the rollout succeeds, it could reshape the competitive landscape of AI infrastructure, where “AI chips” would mean not just GPUs from Nvidia but modular systems from new challengers.

Manuela Tecchio

With over eight years of experience in newsrooms such as CNN and Globo, Manuela is a business and finance journalist trained at FGV and Insper. She has covered the sector across Latin America and Europe, and has edited FintechScoop since its founding.