Memory chip stocks took a beating Thursday after Google went public with research on a new algorithm that could dramatically reduce the amount of memory needed to run large language models — rattling a sector that had been riding an AI-fueled supply crunch straight up.
Samsung Electronics and SK Hynix, the South Korean heavyweights that dominate the high-bandwidth memory market, both fell at least 6% in Seoul trading. In the U.S., Micron Technology (MU) slid more than 7%, while Western Digital and Sandisk each dropped at least 5%. Nvidia (NVDA) was not spared either, shedding nearly 4% as broader AI infrastructure sentiment soured.
What Google Actually Did
Google’s TurboQuant algorithm, which the company publicized on X this week (the underlying research first surfaced last year), claims to cut the memory required to run large language models by at least a factor of six. The efficiency gain targets what’s known as the key-value (KV) cache, a critical bottleneck in AI inference, the process of running trained AI models to generate outputs.
If widely adopted, TurboQuant could reduce the memory footprint of AI workloads significantly, theoretically easing the supply crunch that has sent chip prices and margins soaring across the sector.
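The arithmetic behind the bottleneck is straightforward to sketch. The model dimensions below are illustrative assumptions (loosely in the range of a large open-weight model), not figures from Google's research; the sixfold saving is simply applied as a divisor, as the coverage describes it.

```python
# Back-of-the-envelope KV-cache sizing for a transformer decoder.
# All dimensions are hypothetical; the 6x factor is the reduction
# cited in coverage of TurboQuant, applied here as a plain divisor.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, bytes_per_elem):
    # 2x accounts for the separate key and value tensors per layer
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

GB = 1024 ** 3

# Illustrative 80-layer model serving 8 concurrent 32K-token contexts in fp16
fp16 = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                      seq_len=32_768, batch=8, bytes_per_elem=2)
quantized = fp16 / 6  # hypothetical 6x memory saving

print(f"fp16 KV cache:      {fp16 / GB:.1f} GiB")
print(f"after 6x reduction: {quantized / GB:.1f} GiB")
```

At these assumed dimensions the cache alone runs to roughly 80 GiB in fp16, which is why inference capacity, not just model weights, has been driving high-bandwidth memory demand.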
The Bull Case Didn’t Disappear Overnight
Context matters here. Memory chip stocks had been on an extraordinary run. SK Hynix and Samsung shares had each surged more than 50% year-to-date through Wednesday, fueled by insatiable demand from hyperscalers building out AI infrastructure at historic scale. SK Group Chairman Chey Tae-won said as recently as this week that the memory chip shortage would persist through 2030.
Morgan Stanley analyst Shawn Kim pushed back on the panic in a note, arguing the impact of Google’s research should ultimately be net positive for the industry. His logic: if AI models can run with materially lower memory requirements without sacrificing performance, the cost per query drops, making AI deployment more profitable and accelerating adoption — which in turn drives more demand for memory, not less.
Kim and analysts at JPMorgan and Citigroup all invoked the Jevons Paradox, a 19th-century economic concept holding that greater efficiency in resource use tends to increase total consumption rather than reduce it. The same argument made the rounds when DeepSeek’s low-cost AI model rattled markets last year.
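Kim's argument reduces to simple arithmetic. The numbers below are entirely hypothetical, chosen only to show the condition under which the paradox flips an efficiency gain into higher total demand: usage has to grow faster than the efficiency saving shrinks per-query consumption.

```python
# Toy Jevons Paradox arithmetic. Every number here is hypothetical,
# illustrating the analysts' argument rather than any forecast.

memory_per_query = 1.0   # arbitrary units of memory, pre-efficiency
queries = 100            # baseline query volume

baseline_demand = memory_per_query * queries

efficient_memory = memory_per_query / 6  # the cited 6x efficiency gain
queries_after = queries * 10             # assumed: cheaper queries grow usage 10x

new_demand = efficient_memory * queries_after

# Total demand rises whenever usage growth (10x) outpaces the saving (6x)
print(f"baseline: {baseline_demand:.0f}, after: {new_demand:.1f}")
```

If usage grew only, say, 4x, total memory demand would fall; the bull case rests on the bet that cheaper inference expands adoption faster than quantization shrinks each query's footprint.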
The Bigger Picture for Investors
The four largest hyperscalers — led by Amazon and Google — are collectively on track to spend roughly $650 billion this year on data center infrastructure. That spending appetite doesn’t evaporate because of one efficiency algorithm, and Ortus Advisors analyst Andrew Jackson noted the Google development may make little practical difference to near-term demand given how constrained supply remains.
For small and microcap investors with exposure to the memory supply chain — component manufacturers, equipment makers, or specialty materials companies — Thursday’s selloff may be more noise than signal. The structural demand drivers behind AI infrastructure spending remain firmly intact.
The more pressing question isn’t whether TurboQuant reduces memory demand. It’s whether the market had already priced in perfection for a sector where any efficiency headline is now treated as an existential threat.