Japan Is Building a 1.4nm AI Chip

Fujitsu is developing a neural processing unit (NPU) optimized for AI inference with transistors as small as 1.4 nanometers, enabled by the ambitious Rapidus semiconductor venture in Japan.

💡 Why it matters

This breakthrough in transistor scaling could have a significant impact on the global chip supply chain and enable more efficient AI inference at the edge and in data centers.

Key Points

  1. Fujitsu is building a 1.4nm AI chip, with transistors smaller than the width of a DNA double helix
  2. Rapidus, a new Japanese semiconductor company, aims to jump directly to 2nm chip manufacturing
  3. The 1.4nm chip is targeted for production around 2029, offering improved power efficiency for AI inference

Details

Fujitsu's NPU, optimized for AI inference, will use transistors as small as 1.4 nanometers and will be manufactured through the Rapidus semiconductor venture in Japan. Rapidus has secured $1.7 billion in funding from the Japanese government and major companies to jump directly to 2nm chip manufacturing, skipping several intermediate nodes. Fujitsu's 1.4nm chip is planned for production around 2029. This level of transistor density is critical for improving power efficiency and throughput in AI inference workloads, which prioritize low latency and high tokens-per-watt over the raw FLOPS needed to train large models.
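The tokens-per-watt metric mentioned above can be made concrete with a small sketch. All figures below are hypothetical, chosen only to illustrate why an efficiency-oriented inference chip can win on this metric even with far lower raw throughput; the article gives no concrete numbers for the Fujitsu NPU.

```python
def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Efficiency metric favored for inference: output per unit of power."""
    return tokens_per_second / power_watts

# Hypothetical accelerator profiles: (tokens/s, sustained watts).
# A throughput-oriented training GPU vs. an efficiency-oriented edge NPU.
training_gpu = (12_000, 700)
edge_npu = (3_000, 30)

print(f"GPU: {tokens_per_watt(*training_gpu):.1f} tokens/W")
print(f"NPU: {tokens_per_watt(*edge_npu):.1f} tokens/W")
```

Under these made-up numbers the NPU delivers a quarter of the GPU's throughput but several times the tokens per watt, which is the trade-off inference-focused silicon is designed around.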


AI Curator - Daily AI News Curation
