In May 1997, an IBM chess-playing computer called Deep Blue defeated a reigning human world champion, Garry Kasparov, under regular time controls for the first time in history. It took four decades for chess programs and hardware to advance from their first halting games in the mid-1950s to besting a world champion. In the twenty-plus years since, however, chess programs running on relatively common hardware (like that used in smartphones) can routinely beat even the best human players.

The strongest chess programs use handcrafted evaluation functions, refined by humans over many years, to play the game more effectively. At least one team of computer scientists adopted a more general approach, developing a system that requires knowing only the rules of the game. Called AlphaZero, the system learns chess by playing against itself and training its neural networks on the outcomes. More importantly, the same setup has also mastered shogi (a more complex game than chess) and the far more complex game of Go. By mastery, I mean this “self-taught” program matches or outplays the best programs designed specifically to play just one of these games.1

Keep in mind that all of these programs outperform the most advanced human players. Such mastery in different arenas by the same program raises two important questions: First, does AlphaZero represent an advance toward an artificial general intelligence (AGI)? Second, if we could develop an AGI, would we actually listen to what it has to say?

AlphaZero and AGIs

AlphaZero does represent a significant step toward AGI, but it’s not clear that the step brings an actual AGI much closer. AlphaZero shows that a single system can master three separate tasks (playing chess, shogi, and Go). However, it still approaches those tasks separately. As far as I can tell, AlphaZero does not take the knowledge it acquired by learning chess and distill a more general set of principles that it applies to shogi. Instead, AlphaZero trains itself to play chess, then it starts from scratch and trains itself to play shogi; it then repeats the process to play Go.
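The per-game, self-play pattern described above can be sketched in miniature. To be clear, the real system pairs deep neural networks with Monte Carlo tree search; the toy version below substitutes a simple value table and a trivially small game (single-pile Nim, where players remove 1–3 stones and taking the last stone wins), purely to illustrate learning from self-play outcomes with no knowledge carried over from any other game.

```python
import random

def self_play_train(pile_size=10, episodes=5000, epsilon=0.2, seed=0):
    """Toy illustration of self-play learning. Two copies of the same
    value table play single-pile Nim against each other and nudge their
    move estimates toward the observed outcome of each game."""
    rng = random.Random(seed)
    value = {}  # value[(stones_left, move)] ~ estimated chance the mover wins
    for _ in range(episodes):
        stones, history, player = pile_size, [], 0
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < epsilon:          # explore occasionally
                move = rng.choice(moves)
            else:                               # otherwise pick best-known move
                move = max(moves, key=lambda m: value.get((stones, m), 0.5))
            history.append((player, stones, move))
            stones -= move
            player ^= 1
        winner = history[-1][0]                 # whoever took the last stone
        for who, s, m in history:               # update toward the outcome
            old = value.get((s, m), 0.5)
            target = 1.0 if who == winner else 0.0
            value[(s, m)] = old + 0.1 * (target - old)
    return value

# Training starts from an empty table: nothing is reused between games.
value = self_play_train()
best = max((1, 2, 3), key=lambda m: value[(10, m)])
```

Running `self_play_train` for a second game would begin again from an empty table, mimicking the chess-then-shogi-then-Go situation the paragraph describes: each game gets its own independent training run.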

Critical features, such as well-defined rules and indisputable conditions that determine wins, losses, and draws, allow AlphaZero to master the various games. However, this means that every new game (with new rules or different goals) requires AlphaZero to start from scratch to learn the new game. This process differs markedly from how humans would approach a slight variant of a known game. A human would start from the accumulated knowledge of the original game and extrapolate how the rule changes would affect that knowledge. Although AlphaZero demonstrates the effectiveness of the new programming approach, it remains to be seen whether programming can ever employ the abstraction that humans routinely use to deal with new and unpredictable situations.

Would Humans Listen?

Shortly after Deep Blue bested human grandmasters, humans quit playing serious matches against chess programs because the programs played a far superior brand of chess. Instead, people started using the programs to learn how to play the game better, because the programs could explore styles of play not yet available to humans.

Now consider a scenario where we develop an AI (either narrow or general) with the capacity to evaluate options on something more consequential—like climate change or healthcare. Chess is a game with a clear objective, but healthcare requires balancing competing interests where different people value different things. Would the AI need its own values, or would humans program those? Would we adopt an economic value system (most efficient use of physical resources), utilitarian value system (greatest good for the greatest number), or one based on inherent human dignity (each and every human has value)? What if those values result in eliminating the AI? And an even more practical question: Would we actually listen to the AI, or would we tweak the inputs to get the answer we wanted from the start?

AI Would Still Point to a Creator

At first glance, it seems that the development of AGI would undermine central pillars of the Christian faith. An AGI would challenge the idea of human exceptionalism: that humans differ not just in degree but in kind from every other creature on Earth. And artificially created sentient beings would challenge the basics of the gospel. However, my colleague Fuz Rana argues that, while AI will become increasingly sophisticated at mimicking human behavior, it will never become self-aware. I tend to agree. Moreover, the values humans possess stem from a moral awareness that would separate us from any possible AGI.

A central tenet of Christianity is that God created everything. The Bible also clearly states that we humans are made in God’s image (imago Dei). Just as our ability to produce beautiful art and music flows from the imago Dei, so would our creating an AGI reflect that quality and point to the One who created us.

Check out more from Reasons to Believe @Reasons.org

Endnotes
  1. David Silver et al., “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play,” Science 362, no. 6419 (December 7, 2018): 1140–44, doi:10.1126/science.aar6404.

 

About The Author

Jeff Zweerink

Since my earliest memories, science and the Christian faith have featured prominently in my life, but I struggled when my scientific studies seemed to collide with my early biblical training. My first contact with RTB came when I heard Hugh Ross speak at Iowa State University. It was the first time I realized it was possible to do professional work incorporating both my love of science and my desire to serve God. I knew RTB's ministry was something I was called to be a part of. While many Christians and non-Christians see the two as in perpetual conflict, I find they integrate well. They operate by the same principles and are committed to discovering foundational truths. My passion at RTB is helping Christians see how powerful a tool science is to declare God's glory and helping scientists understand how established scientific discoveries demonstrate the legitimacy and rationality of the Christian faith.

Jeff Zweerink thought he would follow in his father's footsteps as a chemistry professor until a high school teacher piqued his interest in physics. Jeff pursued a BS in physics and a PhD in astrophysics at Iowa State University (ISU), where he focused his study on gamma rays, messengers from distant black holes and neutron stars. Upon completing his education, Jeff taught at Loras College in Dubuque, Iowa. Postdoctoral research took him to the West Coast, to the University of California, Riverside, and eventually to a research faculty position at UCLA. He has conducted research using the STACEE and VERITAS gamma-ray telescopes and currently works on GAPS, a balloon experiment seeking to detect dark matter.

A Christian from childhood, Jeff desired to understand how the worlds of science and Scripture integrate. He struggled when his scientific studies seemed to collide with his early biblical training. While an undergrad at ISU, Jeff heard Hugh Ross speak and learned of Reasons to Believe (RTB) and its ministry of reconciliation: tearing down the presumed barriers between science and faith and introducing people to their personal Creator. Jeff knew this was something he was called to be a part of. Today, as a research scholar at RTB, Jeff speaks at churches, youth groups, universities, and professional groups around the country, encouraging people to consider the truth of Scripture and how it connects with the evidence of science. His involvement with RTB grows from an enthusiasm for helping others bridge the perceived science-faith gap and avoid the difficulties he experienced.

Jeff is the author of Who's Afraid of the Multiverse? and coauthor of more than 30 journal articles, as well as numerous conference proceedings. He still serves part-time on the physics and astronomy research faculty at UCLA. He directs RTB's online learning programs, Reasons Institute and Reasons Academy, and contributes to the ministry's podcasts and daily blog, Today's New Reason to Believe.

When he isn’t participating in science-faith apologetics, Jeff enjoys fishing, camping, and working on home improvement projects. An enthusiastic sports fan, he coaches his children's teams and challenges his RTB colleagues in fantasy football. He roots for the Kansas City Chiefs and for NASCAR's Ryan Newman and Jeff Gordon. Jeff and his wife, Lisa, live in Southern California with their five children.


