What the Bible Says about Artificial Intelligence

July 05, 2024

For years now, even as headlines about the development of AI have become more frequent and more dire, I never worried about it much, because I couldn't think of anything in scripture that sounded much like a superintelligent machine. I'd read the end of the book (Revelation), and I knew how the story ended: not in a robot apocalypse. So all the fears surrounding that possibility must, I reasoned, be much ado about nothing. (I did write a fictional trilogy for young adults back in 2017 about how I imagined a near-miss robot apocalypse might look, though, because I found the topic fascinating enough to research at the time. It's called the "Uncanny Valley" trilogy; the "uncanny valley" refers to the "creepy" factor that emerges as a synthetic humanoid creature approaches human likeness.)


When I finished the trilogy, I more or less forgot about advancing AI, until some of the later iterations of ChatGPT and similar Large Language Models (LLMs). Full disclosure: I've never used any LLMs myself, mostly because (last I checked) you had to create an account with your email address before you could start asking questions. (In the third book of my series, the superintelligent bot Jaguar kept track of everyone via facial recognition cameras, recording literally everything they did in enormous data processing centers across the globe that synced with one another many times per day. Though at that point I doubt it would make any difference, I'd rather not voluntarily give Jaguar's real-life analog any data on me if I can help it!)


Particularly the recent release of GPT-4o (the "o" stands for "omni") gave me pause, though, and I had to stop and ask myself why the idea that it could be approaching actual Artificial General Intelligence (AGI) made the hairs on the back of my neck stand up. I recently read a book called "Deep Medicine" by Eric Topol on the integration of AI into the medical field, which helped allay some potential concerns: that book contended that AGI would likely never be realized, largely because AGI inherently requires experience in the real world, and a robot can never have lived experiences in the way that humans can. It painted a mostly rosy picture of narrow (specialized) AI engaging in pattern recognition (reading radiology images or recognizing pathology samples or dermatological lesions, for instance) and thus vastly improving physicians' diagnostic capabilities. Other uses might include parsing a given individual's years of medical records and offering a synopsis and recommendations, or consolidating PubMed studies and offering relevant suggestions. Topol did not seem to think that AI would ever replace the doctor, though. Rather, he contended, at the rate that data is currently exploding, doctors are drowning in the attempt to document and keep up with it all, and empathic patient care suffers as a result. AI, he argues, will actually give the doctor time to spend with the patient again, to make judgment calls with a summary of all the data at his fingertips, and to put it together in an integrated whole with his uniquely human common sense.


Synthetic Empathy and Emotions?


But, "Deep Medicine" was written in 2019, which (in the world of AI) is already potentially obsolete. I'm told that Chat GPT Omni is better than most humans at anything involving either logic or creativity, and it does a terrific approximation of empathy, too. Even "Deep Medicine" cited statistics to suggest that most humans would prefer a machine for a therapist than a person (!!), largely due to the fear that the human might judge them for some of their most secret or shameful thoughts or feelings. And if the machine makes you feel like it understands you, does it really matter whether its empathy is "real" or not?


What does "real" empathy mean, anyway? In "Uncanny Valley," my main character, as a teenager, inherited a "companion bot" who was programmed with mirror neurons (the seat of empathy in the human brain.) In the wake of her father's death, she came to regard her companion bot as her best friend. It was only as she got older that she started to ask questions like whether its 'love' for her was genuine, if it was programmed. This is essentially the theological argument for free will, too. Could God have made a world without sin? Sure, but in order to do it, we'd all have to be automatons--programmed to do His will, programmed to love Him and to love one another. Would there be any value in the love of a creature who could not do anything else? (The Calvinists might say that's the way the world actually is, for those who are predestined, but everyone else would vehemently disagree.) It certainly seems that God thought it was worth all the misery He endured since creation, for the chance that some of us might freely choose Him. I daresay that same logic is self-evident to al

Meet Your Host
Dr. Lauren Deville is the owner of Nature Cure Family Health in Tucson, Arizona. She received her NMD from Southwest College of Naturopathic Medicine in Tempe, AZ, and she holds a BS in Biochemistry and Molecular Biophysics from the University of Arizona, with minors in Spanish and Creative Writing. She is the author of The Holistic Gut Prescription and How to Be Healthy: Body, Mind, and Spirit.

In her spare time, Dr. Lauren writes young adult science fiction and fantasy novels as well as Biblical retellings under the pen name C.A. Gray, and she maintains a movie review blog with her cinephile husband.

For questions or guest inquiries, please email us at drlauren@naturecurefamilyhealth.com