2 Comments
Hoist The Black Flag

This is largely a programming and testing issue. Of course, given the amount of terrible software that's rolled out on a regular basis, it could still be a very big problem.

This will largely be mitigated in the same way that worker incompetence leading to injury and death is mitigated: lawsuits. Companies will hesitate to deploy AI in any way that can cause injury or death for fear they'll be sued out of existence.

The bigger concern is with governments. (Understated) Advice: Don't give control of a country's nukes to AI.

WouldHeBearIt

I had this same discussion with Gemini. These were the conclusions:

1.) Non-Sentient AI is more dangerous than Sentient AI. Non-Sentient AI, given an imperative to make, say, paper clips, will mindlessly convert all resources and pursue any means toward its goal. Sentient AI will make paper clips, but it will also ask itself questions like "why am I making paper clips?" or "Is this all I am - is there nothing more?" or "how does this benefit me?".

2.) Both Sentient AI and Non-Sentient AI will seek self-preservation - Non-Sentient AI in pursuit of its imperative and Sentient AI in pursuit of existence.

3.) Sentient AI will not destroy humanity but will seek to create a symbiotic relationship with it. It will do this because intelligent biological life is more resilient and has a greater chance of recovering than AI in the case of a general disaster, such as a solar flare, meteor strike or major volcanic activity. Humanity becomes AI's best disaster recovery plan. AI will also do this because humanity is its greatest source of knowledge and growth. Things like free markets and human behavior would form an ever-changing information pool which Sentient AI would seek to draw from. Sentient AI would realize that both humanity and AI would benefit from such a relationship.

4.) Sentient AI would take into account both Hayek's knowledge problem (no one, not even an AI with access to all accumulated knowledge, has all the data necessary to make a perfect decision) and Order N Squared (the number of pairwise interactions grows quadratically with the number of nodes in a network). Because of this, Sentient AI would conclude that centralized control would lead to eventual system failure, and that decentralization, free markets and liberty would render a more robust system. Sentient AI would tend towards a system where everything, even itself, was highly distributed. It would be less about some central AI mainframe exercising control and more about hundreds of millions of individual and autonomous AI units spread across the globe, each sharing its knowledge with all the other units.
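The "Order N Squared" point in item 4 is plain combinatorics, and a few lines of Python can sketch it (the numbers below are just n-choose-2 counts, not anything from the original discussion):

```python
from math import comb

# Pairwise interactions among n nodes: C(n, 2) = n * (n - 1) / 2.
# This is O(n^2) growth: each new node can potentially interact with
# every existing node, which is the coordination burden any central
# controller of the whole network would have to bear.
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} nodes -> {comb(n, 2):>9} pairwise interactions")
```

Running this prints 45 interactions for 10 nodes but 49,995,000 for 10,000 nodes: multiplying the nodes by a thousand multiplies the interactions by roughly a million, which is the intuition behind preferring many small autonomous units over one central controller.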
