Why can we think about the universe when the universe cannot think about us? This simple yet profound question leads to fascinating philosophical, theological, and scientific investigations. These same considerations converge in the arena of artificial intelligence research and its goal of artificial general intelligence. Artificial intelligence is a burgeoning field, but the research is not strictly a scientific endeavor. It necessarily involves philosophical reasoning that, I think, points toward intelligent agency behind the origin of the universe.

The pursuit of artificial general intelligence (AGI) requires that scientists figure out how to build and program a machine with the capacity to contemplate things outside itself, as well as its place within a larger context. Even more modest projects aiming for artificial narrow intelligence (ANI) that can work through open-ended problems demand that scientists carefully consider how we think and reason.

AI conversations often highlight how fast computers perform calculations compared to humans. Clearly, programs compute and work through logical processes faster than we do. However, the salient question is how well computers can make decisions based on arguments derived from incomplete data that do not permit definitive conclusions. It turns out that philosophy, specifically the philosophy of argumentation, plays an integral role in describing and developing the processes for handling such situations.
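To give a flavor of how argumentation gets formalized, here is a minimal sketch, in Python, of a Dung-style abstract argumentation framework, the kind of model surveyed in the article cited below. The specific arguments and attack relation are invented for illustration; the sketch simply computes which arguments are left standing once every attack has been answered (the grounded extension).

    # A minimal, illustrative abstract argumentation framework.
    # The arguments and the attack relation are hypothetical placeholders.
    arguments = {"A", "B", "C"}
    attacks = {("B", "A"),   # B attacks A
               ("C", "B")}   # C attacks B

    def acceptable(arg, defenders):
        """An argument is acceptable if every attacker is counterattacked by a defender."""
        attackers = {x for (x, y) in attacks if y == arg}
        return all(any((d, x) in attacks for d in defenders) for x in attackers)

    # Grounded extension: start from nothing and keep adding defensible arguments.
    extension = set()
    while True:
        new = {a for a in arguments if acceptable(a, extension)}
        if new == extension:
            break
        extension = new

    print(sorted(extension))   # ['A', 'C']: C defeats B, which reinstates A

Notice that argument A is not proven true. It is merely left standing because its only attacker has itself been defeated, and introducing a new attacker could change the verdict.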

Real-life Knowledge Is Usually Tentative

These AI scenarios differ from reasoning in the mathematical arena in that real-life knowledge is not monotonic. In formal (monotonic) logic, conclusions validly drawn from a set of basic axioms remain true no matter what else is learned, so the set of “known” things only grows. Formal logic leaves no room for contradiction or revision when new information is acquired. But the conditions required for formal logic are rarely met in everyday scenarios. Consequently, one must assess the conditions, evaluate different options for explaining them, weigh different options for how to proceed, and ultimately decide on the best way forward.

As the authors of one article state,1

. . . there are . . . a number of fundamental distinctions between the concepts “P is a formal proof that T holds” and “P is a persuasive argument for accepting T.”
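To make the contrast concrete, here is a minimal sketch, in Python, of nonmonotonic (defeasible) reasoning, using the classic “Tweety the penguin” example from the AI literature. The default rule and the facts are purely illustrative.

    # Defeasible reasoning: conclusions can be retracted when knowledge grows.
    def conclusions(facts):
        """Draw tentative conclusions from the current set of facts."""
        concluded = set(facts)
        # Default rule: birds normally fly, unless we know of an exception.
        if "tweety_is_a_bird" in facts and "tweety_is_a_penguin" not in facts:
            concluded.add("tweety_flies")
        return concluded

    facts = {"tweety_is_a_bird"}
    print(conclusions(facts))      # includes 'tweety_flies'

    # Learning one more fact forces a retraction.
    facts.add("tweety_is_a_penguin")
    print(conclusions(facts))      # 'tweety_flies' is gone

Adding a fact shrank the set of conclusions, which is exactly what the monotonic picture described above rules out and what everyday reasoning requires.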

Which Ethical System?

However, humans often disagree about whether an argument is persuasive or what the best course of action is. Any form of AI must figure out how to navigate the reality that most circumstances in life don’t have a single best answer and that any solution has benefits and consequences. Evaluating various benefits and consequences almost always involves some system of ethics and morals. This raises the question of what system the AI should use.

In our society, we seem to be moving in the direction of everyone determining what is right or wrong for themselves. Do we really want an AI (that might make decisions faster and respond more quickly than humans) making decisions with a subjective moral code? Suppose an AI makes a decision that results in someone’s injury or death. Who do we hold accountable? Can the AI be held responsible, or does responsibility fall to its creator?

Maybe a Completely Rational, Incredibly Powerful Machine Is Not the Best Option

We tend to assume that creating an AGI would work out something like Data from Star Trek: The Next Generation. Although Data was more capable than the crew of the Enterprise in almost every way (strength, intelligence, speed, etc.), he always seemed to know when to submit to the authority of his superiors. The creators of the Star Trek universe can write things however they like, but a powerful intelligence in real life would pose quite a dilemma. If Data had a genuine awareness of self, why would he choose to submit, especially when he knew his superiors’ decisions were incorrect? Human history is littered with examples of people who became powerful enough to impose their will on others, with great destruction resulting. The very thing we hope AI will do (perform human tasks with superhuman skill) also provides the platform to rain destruction upon us. This possibility brings us back to the previous point: How would we instill a set of values and ethics into such a machine, and how would we choose which values and ethics to use?

Here’s where I see a parallel between AI research and cosmology. Some scientists investigating the history and origin of the universe have claimed that philosophy is dead (or at least rather worthless) because only science has the proper tools to provide the answers we seek. But a brief look into the pursuit of AI reveals the naivety of such a statement. Well-established philosophical principles, including ethical considerations, are helping guide the development of basic AI capabilities (short of self-awareness), and the pursuit of any AGI will require further philosophical and theological input. In this way, it seems to me that a wise pursuit of AGI provides an argument for the existence of God. That is, the very questions researchers (physicists, astronomers) ask assume philosophical reasoning. Did the universe begin to exist? Has it existed forever? These are conceptual questions. A study of nature does not, by itself, furnish them; rather, intelligent, nonartificial agents created in the image of a superintelligent Being are equipped to ask them.


Endnotes
  1. T. J. M. Bench-Capon and Paul E. Dunne, “Argumentation in Artificial Intelligence,” Artificial Intelligence 171, nos. 10–15 (2007): 619–41, doi:10.1016/j.artint.2007.05.001.

 

About The Author

Jeff Zweerink

Since my earliest memories, science and the Christian faith have featured prominently in my life, but I struggled when my scientific studies seemed to collide with my early biblical training. My first contact with RTB came when I heard Hugh Ross speak at Iowa State University. It was the first time I realized it was possible to do professional work incorporating both my love of science and my desire to serve God. I knew RTB's ministry was something I was called to be a part of. While many Christians and non-Christians see the two as in perpetual conflict, I find they integrate well. They operate by the same principles and are committed to discovering foundational truths. My passion at RTB is helping Christians see how powerful a tool science is to declare God's glory and helping scientists understand how the established scientific discoveries demonstrate the legitimacy and rationality of the Christian faith.

Jeff Zweerink thought he would follow in his father's footsteps as a chemistry professor until a high school teacher piqued his interest in physics. Jeff pursued a BS in physics and a PhD in astrophysics at Iowa State University (ISU), where he focused his study on gamma rays, messengers from distant black holes and neutron stars. Upon completing his education, Jeff taught at Loras College in Dubuque, Iowa. Postdoctoral research took him to the West Coast, to the University of California, Riverside, and eventually to a research faculty position at UCLA. He has conducted research using the STACEE and VERITAS gamma-ray telescopes, and currently works on GAPS, a balloon experiment seeking to detect dark matter.

A Christian from childhood, Jeff desired to understand how the worlds of science and Scripture integrate. He struggled when his scientific studies seemed to collide with his early biblical training. While an undergrad at ISU, Jeff heard Hugh Ross speak and learned of Reasons to Believe (RTB) and its ministry of reconciliation: tearing down the presumed barriers between science and faith and introducing people to their personal Creator. Jeff knew this was something he was called to be a part of.

Today, as a research scholar at RTB, Jeff speaks at churches, youth groups, universities, and professional groups around the country, encouraging people to consider the truth of Scripture and how it connects with the evidence of science. His involvement with RTB grows from an enthusiasm for helping others bridge the perceived science-faith gap. He seeks to assist others in avoiding the difficulties he experienced. Jeff is the author of Who's Afraid of the Multiverse? and coauthor of more than 30 journal articles, as well as numerous conference proceedings. He still serves part-time on the physics and astronomy research faculty at UCLA. He directs RTB's online learning programs, Reasons Institute and Reasons Academy, and also contributes to the ministry's podcasts and daily blog, Today's New Reason to Believe.

When he isn't participating in science-faith apologetics, Jeff enjoys fishing, camping, and working on home improvement projects. An enthusiastic sports fan, he coaches his children's teams and challenges his RTB colleagues in fantasy football. He roots for the Kansas City Chiefs and for NASCAR's Ryan Newman and Jeff Gordon. Jeff and his wife, Lisa, live in Southern California with their five children.


