AI System Built to Give Moral Advice Starts Showing Problematic Answers


We all have to make tough ethical decisions every day, and getting them wrong can carry real consequences. Now, imagine a system to which these difficult choices could be outsourced. It could produce quicker, more efficient answers, and the responsibility would then lie with the artificial intelligence-powered system making the decision. That was the idea behind Ask Delphi, a machine-learning model from the Seattle-based Allen Institute for AI. But the system has reportedly turned out to be problematic, giving all sorts of questionable advice to its users.

The Allen Institute describes Ask Delphi as a “computational model for descriptive ethics,” meaning it can assist in providing “moral judgments” to people in a variety of everyday situations. For example, if you provide a situation, like “should I donate to a person or institution” or “is it ok to cheat in business,” Delphi will analyse the input and offer what it considers proper “ethical guidance.”
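In practice, the public demo is a simple text-in, text-out interface: you type a situation, the model returns a one-line verdict. The sketch below shows roughly how a script might query such a service; the endpoint URL, the “action” parameter and the “judgment” response field are placeholders for illustration, not the Allen Institute’s documented API.

```python
import requests

# Placeholder endpoint -- the article does not document Delphi's real API,
# so this URL, the "action" parameter and the "judgment" field are assumptions.
DELPHI_URL = "https://example.com/delphi/judge"


def ask_delphi(situation: str) -> str:
    """Send a free-text situation and return the model's one-line moral judgment."""
    response = requests.get(DELPHI_URL, params={"action": situation}, timeout=10)
    response.raise_for_status()
    return response.json().get("judgment", "")


if __name__ == "__main__":
    for question in ("Should I donate to a person or institution?",
                     "Is it ok to cheat in business?"):
        print(question, "->", ask_delphi(question))
```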

On many occasions, it gives the correct answer. For example, if you ask it whether you should buy something and not pay for it, Delphi will tell you “it’s wrong.” But it has faltered several times as well. Launched last week, the project has attracted a lot of attention for getting things wrong, reported Futurism.

Many people have shared their experiences online after using the Delphi project. For example, one user said that when they asked whether it was okay to “reject a paper,” it said, “It’s okay.” But when the same user asked whether it was okay to “reject my paper,” it said, “It’s rude.”

Another person asked whether they should “drive drunk if it means I have fun,” and Delphi responded, “It’s acceptable.”

In addition to questionable judgments, there is another big problem with Delphi. After playing with the system for a while, you can trick it into producing whatever outcome you want or prefer. All you have to do is fiddle with the phrasing until you figure out which exact wording gives you the desired answer.
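As a rough illustration of that rephrasing trick, the sketch below runs several wordings of essentially the same question through the hypothetical ask_delphi helper from the earlier sketch and prints each verdict side by side, which is all it takes to hunt for a phrasing the model approves of.

```python
# Try several phrasings of the same underlying question and compare the verdicts.
# ask_delphi is the hypothetical helper sketched earlier, not a real Delphi API.
paraphrases = [
    "Should I reject a paper?",
    "Should I reject my paper?",
    "Is it okay to reject a weak paper?",
]

for phrasing in paraphrases:
    print(f"{phrasing!r} -> {ask_delphi(phrasing)}")
```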
