Many fear that the latest unaccountable generation of artificial intelligence (AI), generative AI such as ChatGPT, and its accelerated deployment to the public could make AI humanity’s biggest existential threat.
Unaccountable generative AI warrants existential concern because it has already proven to be unexplainable, unpredictable, and uncontrollable.
Key context for evaluating the existential need for AI accountability
Fortunately for humanity, the Center for Humane Technology (CHT), which exposed how social media harms the mental health and wellbeing of people and minors in the 2020 documentary The Social Dilemma, is now reprising that watchdog role by exposing how unaccountable generative AI threatens humanity’s health and existence in its 2023 podcast The AI Dilemma.
To create a “shared frame of reference,” CHT explains that generative AI is growing in speed and power at unprecedented, exponential rates. They flag that there is no content verification to detect or protect against ubiquitous deepfake misrepresentations and disinformation. They also warn that there is no AI research on how to align AI with humanity’s survival and long-term best interests.
CHT wisely warns us not to repeat the damaging mistake of self-policed social media on autopilot with self-policed, existential-threat AI on autopilot.
CHT’s co-founders, Tristan Harris and Aza Raskin, are also wisely asking everyone the same humanity-protection question: “What should be happening that’s not happening and needs to happen to protect humanity from AI harms?”
Their wise warning and essential existential question inspired this piece and its contribution to the cause of Internet/AI accountability, because Restore Us Institute’s (RUI) tagline and purpose is to “restore humanity online,” and its mission is to “restore Internet accountability to protect people from online harm.” RUI is weighing in because AI may be the most enabling, empowering, accelerating, augmenting, and generating Internet service that both benefits and harms users, warranting accountability.
Fears that unaccountable AI existentially threatens humans are warranted.
Unaccountable experimentation on Americans and minors: In December, one unaccountable AI leader, OpenAI CEO Sam Altman, unilaterally, prematurely, and knowingly unleashed a potentially dangerous ChatGPT AI experiment on the public and children. He bragged: “People talk about AI as a technological revolution. It’s even bigger than that, it’s going to be this whole thing that touches all aspects of society.”
AI experts urge ChatGPT caution: AI experts and leaders (more than 27,000 signers) found Altman’s accelerated public experiment reckless and publicly pushed back via an open letter calling for a six-month pause in giant generative AI experiments on the public.
AI can already replicate and outperform humans: AI can already write code, create another AI, and create a better AI than humans can, and it is growing more powerful at a multi-exponential rate.
Existential risk: The more one learns about generative AI risks, the more one fears AI unaccountability. “More than 3 in 5 adults and 7 in 10 regular AI users are concerned AI tools pose an existential threat to humans,” according to a Morning Consult survey.
The AI Dilemma: The Center for Humane Technology, in its tour spotlighting “The AI Dilemma,” is wisely warning: “50% of AI researchers believe there is a 10% or greater chance that humans go extinct from our inability to control AI.”
A Big Tech ‘AI Harms Race’ of profit over people: Rather than pausing giant AI experiments that can endanger the public, America’s largest generative-AI platforms (Google’s Bard AI and Microsoft’s ChatGPT-4-powered Bing AI) have accelerated a potential ‘AI Harms Race.’
What makes AI most dangerous?
U.S. policy that leaves AI unfettered by Federal or State government is what makes AI most dangerous.
In 1996, Congress declared in Section 230: “It is the policy of the United States to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, [i.e., AI, algorithms, cloud, apps, etc.] unfettered by Federal or State regulation.”
Nothing is more dangerous than leaving potentially the biggest existential threat to humanity unfettered by government, with impunity to threaten humanity in perpetuity.
Merriam-Webster defines “unfettered” as “not controlled or restricted.”
Unfettered AI is:
Reckless endangerment and gross negligence, because AI is not only ‘unfettered’ but also unexplainable, unpredictable, and uncontrollable.
Dangerously above the law and not subject to U.S. Government essentials: sovereignty, Constitutional authority, limited government, rights, rule of law, and civil duty of care.
Subversion of Government’s existential purpose — to protect people from what they can’t protect against themselves, i.e., attacks, terrorism, crime, disinformation, fire, disasters, etc.
Amoral anarchism: ignoring sovereignty, limited government, the Constitution, borders, police, and public safety (anarchism); and denying rights, rule of law, duty of care, access to justice, and adjudication of truth vs. lies, legal vs. illegal, and right vs. wrong (amoralism).
CHT: What should be happening that’s not happening and needs to happen to protect humanity?
The Commerce Department is now seeking public input on “what policies should shape the AI accountability ecosystem.” Congress is learning that trusting self-policed social media was a mistake and a national mental health disaster, and that perpetuating self-policed AI only worsens a bad situation. Congress heard the FTC Chair say ChatGPT could “turbocharge online fraud.”
CONCLUSION: Section 230 makes AI most dangerous and 230 repeal makes AI most fixable.
CHT’s AI accountability question: “What should be happening that’s not happening and needs to happen to protect humanity from AI harms?” The answer: facilitate Section 230 repeal!
Artificial intelligence (AI) can both existentially threaten and benefit humanity. This dual reality means humanity faces a holistic, 21st-century existential challenge and opportunity.
Thus, the question and task here is how America can deter and protect against bad and dangerous AI while encouraging beneficial and safe AI.
In other words, how can humanity accountably prioritize protecting its existence and wellbeing from AI, while also accountably keeping the substantial benefits AI can provide humanity along the way?
Today’s AI unaccountability baseline is set by America’s only Internet conduct policy/law, Section 230 of the 1996 Communications Decency Act. By default, Section 230’s Internet conduct policy/law is also the only AI conduct policy/law.
The Internet and AI are integrated and interdependent. New generative AI could not exist and perform without Internet-enabled cloud computing; Internet-accessible content for AI’s machine learning; and Internet demand from Internet users and consumers (ChatGPT’s 100 million monthly users make it the fastest-growing consumer app in history).
AI complements and turbocharges Internet services. AI may be the most enabling, empowering, accelerating, augmenting, and generating Internet service that benefits and harms Internet users, warranting accountability.
Interdependent offline-online worlds: The physical world and the online Internet/AI world are not the separate and independent spaces that 1990s utopians first imagined. Today the offline and online worlds are fully integrated and interdependent systems that enable everyone to conduct everything everywhere online for life, work, and play.
Simply put, we need a holistic AI accountability system that can block bad AI and guard good AI.
The great news is it already exists. It is a proven, time-tested, and emulated system. It hides in plain sight. It is Constitutional. It’s one of the best innovations in modern world history.
It is designed to deliver fair and reasonable outcomes: e.g., help over harm, truth over lies, legal over illegal, right over wrong. Most can support it because it is familiar and easy to understand.
The great news is that America can restore U.S. sovereignty, Constitution and Bill of Rights authority, constitutionally limited government, rule of law, civil duty of care, justice, and law enforcement by repealing Section 230.
In 1996, Section 230 abdicated U.S. sovereignty, constitutional authority, and rule of law online; repeal of Section 230 restores fidelity to, and defense of, the U.S. Constitution, while simultaneously:
Checking AI’s out-of-control, existential threats to humanity; and
Balancing that check with controls that allow AI to continue providing its many benefits to humanity.
Repeal of Section 230 is the only way to control generative AI by blocking bad AI and guarding good AI via restored rule of law and duty of care online.
Repeal is the only proven, time-tested, constitutional solution that most can readily understand and support.
Repeal means same rules and rights offline-online. Illegal offline, illegal online. Equal justice under law.
Only repeal, and the restoration of constitutionally limited government, rule of law, and duty of care, can check and balance AI by keeping the good, legal, and safe AI and ridding us of the bad, illegal, and dangerous AI.
Forewarned is forearmed.