The Ethical Challenges Of AI

Steve Williamson welcomes Matt Goodwin and Andrea Christelle to discuss Artificial Intelligence (AI). Both have PhDs in philosophy and helped create Sedona Philosophy.
Asked to define AI, Matt responds, “Well, right now, what everyone is talking about with AI is really a chatbot. And what people are usually thinking of when they think of AI is AGI (Artificial General Intelligence). That’s what most people think of from science fiction. Something that approaches a human level of intelligence…and what would come after that, some people suggest, is something called Artificial Super Intelligence. And that would be something that would surpass human intelligence.”
As to where all of this is going, Matt says, “I think it’s hard to imagine where we’re going to be just in a few years because it’s all happening pretty quickly.”
Andrea notes that many AI programs, such as Grammarly, are based on statistical probabilities. She says, “And that really gets to an important philosophical aspect of it, which is ethics. I mean, one of the things that we know is that AI is only as good as, or only has the views of, the data it’s given by human agents. And so, we’ve already seen some problems with AI, for example, making recommendations about who should be given a mortgage, making determinations about who might be guilty of a crime, that very much reflect some of the problems we have in society right now.”
The danger, she believes, is that AI may reflect the biases that already exist in society.
“So, one of the ethical issues that people are really concerned with right now is: what are the data? And who determines what data points get put in there?” Indeed, AI raises many ethical and philosophical concerns going forward, not the least of which revolve around plagiarism, deepfakes, and military applications.
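Andrea’s point, that a model built on statistical probabilities can only replay the patterns in the data it is given, can be illustrated with a minimal sketch. Everything here (the neighborhoods, the loan records, the `predict` function) is invented purely for illustration and does not represent any real system discussed in the episode:

```python
from collections import Counter

# Hypothetical toy dataset of past mortgage decisions: each record is
# (neighborhood, decision). The historical outcomes are skewed against
# neighborhood "B" -- a stand-in for a societal bias baked into the data.
history = [
    ("A", "approve"), ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "deny"), ("B", "approve"),
]

# "Training" here is just counting outcomes per neighborhood: the
# probabilities the model learns are nothing more than frequencies
# in whatever data human agents chose to include.
counts = {}
for neighborhood, decision in history:
    counts.setdefault(neighborhood, Counter())[decision] += 1

def predict(neighborhood):
    """Recommend the most frequent historical outcome for this group."""
    return counts[neighborhood].most_common(1)[0][0]

print(predict("A"))  # "approve" -- the model replays the past pattern
print(predict("B"))  # "deny"   -- and replays the bias along with it
```

Nothing in the model is malicious; it simply has “only the views of the data it’s given.” This is why the questions “what are the data?” and “who determines what data points get put in there?” are ethical questions, not merely technical ones.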