April 10, 2018

After Uber, Tesla incidents, can artificial intelligence be trusted?

Credit: Missouri University of Science and Technology

Given the choice of riding in an Uber driven by a human or a self-driving version, which would you choose?

Considering last month's fatal crash of a self-driving Uber that took the life of a woman in Tempe, Arizona, and the recent death of a test driver of a semi-autonomous vehicle being developed by Tesla, people's trust in the technology behind autonomous vehicles may also have taken a hit. The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans' trust in AI, machine learning and other technological advances, write two Missouri University of Science and Technology researchers in a recent journal article.

"Trust is the cornerstone of humanity's relationship with artificial intelligence," write Dr. Keng Siau, professor and chair of business and information technology at Missouri S&T, and Weiyu Wang, a Missouri S&T graduate student in information science and technology. "Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken."

The Uber and Tesla incidents point to the need to rethink the way AI applications such as autonomous driving systems are developed, and for designers and manufacturers of these systems to take certain steps to build greater trust in their products, Siau says.

Despite these recent incidents, Siau sees a strong future for AI, but one fraught with trust issues that must be resolved.

'A dynamic process'

"Trust building is a dynamic process, involving movement from initial trust to continuous trust development," Siau and Wang write in "Building Trust in Artificial Intelligence, Machine Learning, and Robotics," published in the February 2018 issue of Cutter Business Technology Journal.

In their article, Siau and Wang examine prevailing concepts of trust in general and in the context of AI applications and human-computer interaction. They discuss the three types of characteristics that determine trust in this area – human, environment and technology – and outline ways to engender trust in AI applications.

Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems.

How to maintain trust in AI

Beyond developing initial trust, however, creators of AI also must work to maintain that trust. Siau and Wang suggest seven ways of "developing continuous trust" beyond the initial phases of product development.

"The AI age is going to be unsettling, transformative and revolutionary," Siau writes in another recent article ("How Will Technology Shape Learning?" published in the March 2018 issue of the Global Analyst). But in this unsettling environment, higher education can play a significant role.

"Higher education must rise to the challenge to prepare students for the AI revolution and enable students to successfully surf in the AI age," Siau writes.
