When you invent a new technology, you uncover a new class of responsibilities
According to a survey, 50% of AI researchers believe there is a 10% or greater chance of humans going extinct due to our inability to control AI. In the video below, the co-founders of the Center for Humane Technology, Tristan Harris and Aza Raskin, introduce the topic and discuss the exponential growth of AI and its impact on society. They acknowledge the positive aspects of AI but also highlight concerns about its responsible deployment, emphasizing the importance of defining new responsibilities and regulations for AI as it continues to advance.
This presentation was given just a few days before GPT-4 was launched, which itself demonstrates how quickly AI is advancing.
I suggest you watch it in its entirety.
"When you invent a new technology, you uncover a new class of responsibilities"
Hopefully Big Tech will understand the implications of unleashing these Gollems (Generative Large Language Multi-Modal Models), take responsibility, and come up with ethical, global regulations and safeguards, rather than repeat the mistakes it made with social media algorithms.
Tristan and Aza argue that, alongside all the good social media brought us, it had serious negative side effects:
But the potential negative side effects of AI, they argue, are existentially far more frightening.
See the summary from the presentation below.
Just watch the presentation yourself, and let's discuss what needs to be done to safely harness this exciting new potential and use AI to help relieve the world's suffering.
Give a man a fish and you feed him for a day;
Teach a man to fish and you feed him for a lifetime;
Teach an AI to fish, and it will teach itself biology, chemistry, oceanography, evolutionary theory... and fish all the fish to extinction