
June 30, 2025

Mathematical approach makes uncertainty in AI quantifiable

Using geometric principles, the behavior of AI can be described. Credit: Vienna University of Technology

How reliable is artificial intelligence, really? An interdisciplinary research team at TU Wien has developed a method that allows for the exact calculation of how reliably a neural network operates within a defined input domain. In other words: It is now possible to mathematically guarantee that certain types of errors will not occur—a crucial step forward for the safe use of AI in sensitive applications.

From smartphones to self-driving cars, AI systems have become an everyday part of our lives. But in applications where safety is critical, one central question arises: Can we guarantee that an AI system won't make serious mistakes—even when its input varies slightly?

A team from TU Wien—Dr. Andrey Kofnov, Dr. Daniel Kapla, Prof. Efstathia Bura and Prof. Ezio Bartocci—bringing together experts from mathematics, statistics and computer science, has now found a way to analyze neural networks, the brains of AI systems, in such a way that the possible range of outputs can be exactly determined for a given input range—and specific errors can be ruled out with certainty.

The paper is available on the arXiv preprint server, and the research has been accepted for conference presentation.

Small changes, big impact?

"Neural networks usually behave in a predictable way—they give the same output every time you feed in the same input," says Dr. Kofnov. "But in the real world, inputs are often noisy or uncertain, and cannot always be described by a single, fixed value. This uncertainty in the input leads to uncertainty in the output."

"Imagine a that receives an image as input and is tasked with identifying the animal in it," says Prof. Ezio Bartocci. "What happens if the image is slightly altered? A different camera, a bit more noise, changes in lighting—could that cause the AI to suddenly misclassify what it sees?"

"Understanding the full range of possible outputs helps in making better, safer decisions—especially in high-stakes areas like finance, health care, or engineering," adds Kofnov. "By computing the likelihood of possible outputs, we can answer important questions like: What's the chance of an extreme outcome? How much risk is involved?"

These kinds of questions are difficult to answer using conventional testing. While many scenarios can be tried out, full coverage of all possible inputs is virtually impossible. There may always be rare edge cases that were not tested—and in which the system fails.


Mathematics in multi-dimensional space

The solution developed at TU Wien uses a geometric approach: "The set of all possible inputs—for example, all possible images that could be fed into such an AI system—can be imagined as a space that is geometrically similar to our 3-dimensional world, but with an arbitrary number of dimensions," explains Prof. Bura. "We partition this multi-dimensional space into smaller subregions, each of which can be precisely analyzed to determine the outputs the neural network will produce for inputs from that region."

This makes it possible to mathematically quantify the likelihood of a range of outputs—potentially ruling out erroneous results with 100% certainty.
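The partitioning idea can be sketched with a simplified stand-in: split a 2-D input box into grid cells, push each cell through a small ReLU network with interval arithmetic to get guaranteed per-cell output bounds, and sum the probability mass of cells that could produce an output above a threshold. This is a hedged toy version of the geometric approach, not the authors' exact algorithm (their method computes exact bounds on the output distribution); the network weights and the threshold are made up for illustration.

```python
# Simplified sketch of the partition idea (not the paper's exact method):
# partition the input box, bound each cell's outputs with interval
# arithmetic, and upper-bound the probability of an "extreme" output.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def affine_bounds(lo, hi, W, b):
    """Propagate an axis-aligned box [lo, hi] through an affine layer."""
    c, r = (lo + hi) / 2, (hi - lo) / 2
    center = W @ c + b
    radius = np.abs(W) @ r             # worst-case deviation per output
    return center - radius, center + radius

def output_bounds(lo, hi):
    """Guaranteed output interval of the ReLU net over the input box."""
    l1, u1 = affine_bounds(lo, hi, W1, b1)
    l1, u1 = np.maximum(l1, 0), np.maximum(u1, 0)  # ReLU is monotone
    l2, u2 = affine_bounds(l1, u1, W2, b2)
    return float(l2[0]), float(u2[0])

# Uniform input on [-1, 1]^2, partitioned into a 20x20 grid of cells.
n, threshold = 20, 3.0
edges = np.linspace(-1, 1, n + 1)
p_bad_upper = 0.0                      # upper bound on P(output > threshold)
for i in range(n):
    for j in range(n):
        lo = np.array([edges[i], edges[j]])
        hi = np.array([edges[i + 1], edges[j + 1]])
        _, u = output_bounds(lo, hi)
        if u > threshold:              # this cell *might* exceed threshold
            p_bad_upper += 1.0 / (n * n)   # uniform: equal mass per cell

print(f"P(output > {threshold}) <= {p_bad_upper:.3f}")
```

If no cell's upper bound exceeds the threshold, the bad event is ruled out with certainty, which is the sense in which errors can be mathematically excluded; finer partitions tighten the bound at the cost of more computation.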

The theory is not yet applicable to large-scale neural networks, such as large language models. "An AI like ChatGPT is much too complex for this method. Analyzing it would require an unimaginable amount of computing power," says Kapla. "But we have shown that at least for small neural networks, rigorous error quantification is possible."

The method was developed as part of the SecInt doctoral college at TU Wien, which fosters interdisciplinary collaboration in the field of IT security. Ethical issues and the societal impact of technology also play a central role in the program.

Prof. Bartocci and Prof. Bura worked together with Dr. Kofnov (former Ph.D. student and current postdoc) and Dr. Kapla (postdoc) to develop this new method, combining ideas from AI theory, statistics, and formal methods.

More information: Andrey Kofnov et al, Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs, arXiv (2025).

Journal information: arXiv


This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.
