Here is our latest podcast, based on Chapter 8 of Gerd's best-selling book Technology vs Humanity (now available in 11 languages!).
This week Peter and I covered the topic of Proaction vs. Precaution. Precaution means looking ahead at what might happen, the possible consequences and unintended outcomes, before we proceed with a course of scientific exploration or technological development.
Too much proaction, on the other hand, could unleash powerful and likely uncontrollable forces that we should keep locked up for the time being. Imagine the consequences of being too proactive with AI, geo-engineering, or human genome editing, or of entering an arms race with AI-controlled weapons that can kill without human supervision…
Download the MP3: gerd leonhard peter van podcast proactive precautionary chapter 8
You can find more podcasts here, or subscribe to Gerd's podcasts on Spotify, iTunes, or SoundCloud.
Some excerpts from the book's Chapter 8:
The safest and still most promising future is one where we do not postpone innovation, but neither do we dismiss the exponential risks it now involves as 'somebody else's business'.
"The proactionary principle was introduced by transhumanist philosopher Max More and further articulated by UK sociologist Steve Fuller. Since the very idea of transhumanism is based on the concept of transcending our biology, i.e. the possibility of becoming at least part machine, uninhibited proactivity is naturally part of the story, no surprise there.

A thoughtful and humanist balance is what I am proposing: too much precaution may paralyze us with fear and create a self-amplifying cycle of restraint. Pushing cutting-edge science, technology, engineering and math (STEM) activities or game-changing inventions underground will quite likely criminalize those undertaking them. That is obviously not a good response to the problem, because we might actually discover things that it would be our human duty to investigate further, such as the possibility of ending cancer. The things that make humanity flourish obligate us to set them free…

An approach that worked just fine when we were doubling from 0.01 to 0.02, or even from one to two, may no longer be appropriate when we are doubling successively from four up to 128: the stakes are just so much higher, and the consequences are so much harder for human minds to understand.

Imagine the consequences of being too proactive with AI, geo-engineering, or human genome editing. Imagine entering an arms race with AI-controlled weapons that can kill without human supervision. Imagine rogue nations and non-state actors experimenting with controlling the weather and causing permanent damage to the atmosphere. Imagine a research lab in a not-so-transparent country coming up with a formula to program superhumans."