AI 101 and the Future of Humanity

As published in Humanist Perspectives, November 5, 2023

It has become increasingly apparent that AI technologies have advanced rapidly over the last several years. The change has been so swift that my colleagues and I have had to reorient our research to focus primarily on the risks and governance of AI as we move into an uncertain future. Many of us believed that what we are experiencing with AI today was roughly 30 years away, and that we had plenty of time to develop plans for regulating, legislating, controlling, containing, or even stopping the potential negative effects of such emerging technologies. That all changed with the latest generation of AI systems, which includes, but is not limited to, GPT-4, Bing AI, Claude, and Bard.

In this paper, I’m going to cover the basics of Artificial Intelligence so that everyone is roughly on the same page regarding key terms, concepts, and issues. There’s a lot going on in the AI universe, so it’s important for us to become familiar with some of the ideas and developments that have brought us here, so that we can engage in meaningful and productive dialogue.

WHAT IS ARTIFICIAL INTELLIGENCE?

The great AI pioneer and Stanford professor John McCarthy defined Artificial Intelligence as:

“…the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”((http://jmc.stanford.edu/articles/whatisai/whatisai.pdf))

And decades before McCarthy’s definition, Alan Turing, the British father of computer science, asked in his groundbreaking 1950 paper ‘Computing Machinery and Intelligence’ whether machines can think. He devised the now famous ‘Turing Test’, in which a human interrogator attempts to distinguish between a computer’s and a human’s text responses. If the interrogator cannot reliably tell them apart, the computer is said to have passed the test.((See: https://academic.oup.com/mind/article/LIX/236/433/986238))

In the 1990s, scientists Stuart Russell and Peter Norvig wrote the now seminal work Artificial Intelligence: A Modern Approach, which has become the leading textbook in the study of AI. They offer four potential goals or definitions of AI, which differentiate computer systems on the basis of thinking vs. acting and on rationality vs. human fidelity. On one side is the ‘human approach’, in which systems think like humans and act like humans; this is compared to an ‘ideal approach’, in which systems think rationally and act rationally. This is an interesting distinction, and one that will become thematic throughout the development of AI technologies. Humans don’t always think rationally, because our limbic (or emotional) systems and our prefrontal cortices (or higher reasoning systems) are constantly at battle in our brains.
