Both a wildfire and the activity of digital "neurons" exhibit a phase transition from an active to an absorbing phase. Once a system reaches the absorbing phase, it cannot escape without outside help. Credit: Tamai et al. 2025
Researchers from the University of Tokyo, in collaboration with Aisin Corporation, have demonstrated that universal scaling laws, which describe how the properties of a system change with size and scale, apply to deep neural networks that exhibit absorbing phase transition behavior, a phenomenon typically observed in physical systems. The discovery not only provides a framework for describing deep neural networks but also helps predict their trainability and generalizability. The findings were published in the journal Âé¶¹ÒùÔºical Review Research.
In recent years, it seems no matter where we look, we come across artificial intelligence in one form or another. The current version of the technology is powered by deep neural networks: numerous layers of digital "neurons" with weighted connections between them. The network learns by modifying the weights between the "neurons" until it produces the correct output. However, a unified theory describing how the signal propagates between the layers of neurons in the system has eluded scientists so far.
"Our research was motivated by two drivers," says Keiichi Tamai, the first author. "Partially by industrial needs as brute-force tuning of these massive models takes a toll on the environment. But there was a second, deeper pursuit: the scientific understanding of the physics of intelligence itself."
This is where Tamai's background in the statistical physics of phase transitions gave him the first hint. An absorbing phase transition is a sharp shift, at a tipping point, from an active phase to an absorbing one, from which the system cannot escape without outside help. A familiar physical example is a fire burning out.
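To picture such a transition, consider a deliberately simple toy model (our illustration, not a system analyzed in the paper): a one-dimensional "fire" in which every burning site ignites each of its neighbors with a hypothetical probability p and then goes out. Below a critical value of p, the fire always dies into the absorbing state; above it, activity can keep spreading.

```python
# Toy 1D "fire" model sketching an absorbing phase transition. This is an
# illustrative assumption of ours, not the model analyzed in the paper.
# Each burning site (1) ignites its neighbors with probability p, then goes out.
import random

def step(state, p):
    """One synchronous update; returns the next configuration."""
    nxt = [0] * len(state)
    for i, burning in enumerate(state):
        if burning:
            for j in (i - 1, i + 1):
                if 0 <= j < len(state) and random.random() < p:
                    nxt[j] = 1
    return nxt

def survival_time(p, n=200, max_steps=1000):
    """Steps until the fire dies out (the absorbing state), capped at max_steps."""
    state = [0] * n
    state[n // 2] = 1  # a single burning seed in the middle
    for t in range(max_steps):
        if not any(state):
            return t  # absorbed: the system cannot reignite on its own
        state = step(state, p)
    return max_steps  # still burning: likely in the active phase

for p in (0.3, 0.5, 0.7, 0.9):
    mean_t = sum(survival_time(p) for _ in range(20)) / 20
    print(f"p={p}: mean survival time ~ {mean_t:.0f} steps")
```

Sweeping p in this toy model shows the mean survival time jumping from a handful of steps to the full simulation length, the signature of the tipping point.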
Crucially, these systems exhibit universal behaviors near the tipping point and can be described by universal scaling laws if certain properties are preserved. If deep neural networks exhibit absorbing phase transitions, then universal scaling laws may apply, providing a unified framework for describing how they function. Consequently, researchers would be able to predict whether a signal would "burn out" in a given deep learning setup.
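One way to see the "burn-out" question concretely is to push a random input through a deep, randomly initialized network and track the typical activation size layer by layer. The sketch below is our illustration in the spirit of the result, not the authors' code: it uses ReLU layers, the weight scale sigma_w is the assumed tuning knob, and the value sqrt(2) is the well-known critical scale for ReLU initialization rather than a number from the paper.

```python
# Signal propagation through a deep random ReLU network: depending on the
# weight scale sigma_w, the signal "burns out", stays roughly steady, or
# blows up. Illustrative sketch only; not the paper's experimental setup.
import numpy as np

def propagate(depth=100, width=256, sigma_w=1.0, seed=0):
    """Return the per-neuron RMS activation after each of `depth` ReLU layers."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)  # random input signal
    norms = []
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
        x = np.maximum(W @ x, 0.0)  # ReLU layer, biases omitted for simplicity
        norms.append(np.linalg.norm(x) / np.sqrt(width))
    return norms

# sqrt(2) is the classic critical weight scale for ReLU networks (He init).
for sigma_w in (1.0, np.sqrt(2.0), 2.0):
    final = propagate(sigma_w=sigma_w)[-1]
    print(f"sigma_w = {sigma_w:.3f}: per-neuron signal after 100 layers = {final:.3e}")
```

Below the critical scale the signal vanishes within a few dozen layers, the network's analog of the fire burning out; above it, activations explode; only near the critical point does the signal survive to great depth.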
Deep neural networks with different activation functions are characterized by different values of the scaling factor. Its product with the network depth plays a pivotal role in whether the network can be trained successfully. Credit: Tamai et al. 2025
To investigate, the researchers combined theory with simulations. Where possible, they derived from theory the exponents, which are universal across systems, and the scaling factors, which differ from system to system; in more complex cases, they used simulations to confirm the scaling laws.
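To give a flavor of how such an exponent is read off in practice: near a critical point, an observable often follows a power law ρ ∼ (p − p_c)^β, so β appears as the slope of a straight-line fit on log-log axes. In the minimal sketch below, the data are synthetic, and the values of p_c and β are textbook directed-percolation numbers used only for illustration; they are not taken from the paper.

```python
# Minimal sketch of extracting a critical exponent from data near a transition.
# The values of p_c and beta are illustrative (1+1)-D directed-percolation
# numbers, NOT results from the paper; the data are synthetic.
import numpy as np

p_c, beta = 0.6447, 0.2765
p = p_c + np.logspace(-3, -1, 12)   # control parameter just above the tipping point
rng = np.random.default_rng(1)
rho = (p - p_c) ** beta * (1 + 0.05 * rng.standard_normal(p.size))  # noisy observable

# On log-log axes the power law is a straight line whose slope estimates beta.
slope, _ = np.polyfit(np.log(p - p_c), np.log(rho), 1)
print(f"fitted exponent ~ {slope:.3f} (value used to generate the data: {beta})")
```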
"What a coincidence, I thought," Tamai says, remembering when he first noticed the link between deep neural networks and absorbing phase transitions. "I never imagined I would end up doing research on deep learning, let alone finding an effective use of a concept I worked on as a doctoral student in physics."
The finding also brings us closer to understanding the physics of intelligence itself, as it reinvigorates the brain criticality hypothesis, which holds that some biological neural networks operate near phase transitions. Tamai is excited about the prospects of this line of research.
"Alan Turing hinted at this connection as early as 1950, but the tools weren't ready back then. With the rapid accumulation of evidence in neuroscience and the rise of near-human-level AI, I believe we're at a perfect moment to revisit and deepen our understanding of this fundamental relationship."
More information: Keiichi Tamai et al., Universal scaling laws of absorbing phase transitions in artificial deep neural networks, Âé¶¹ÒùÔºical Review Research (2025).
Journal information: Âé¶¹ÒùÔºical Review Research
Provided by University of Tokyo