This is no time to pump the brakes on AI

By Jeff Boortz

The sudden and disruptive appearance of AI in all sectors of society, and the realization that its influence is accelerating exponentially, have caused many humans to imagine a near future in which AI runs the whole show. Some even fear that AI will exterminate the humans that created it. This has launched petitions, conferences, lobbying efforts, and the founding of groups dedicated to regulating or halting AI research until we’re sure we can proceed safely. [1] While I understand the impulse, I believe pausing AI research is the wrong approach, not because I don’t believe an AI could eventually decide on its own to wipe out humanity, but because I DO believe humans are already leveraging AIs to harm other humans.

Consider the delivery truck. This marvel of human-created technology, shipping over 70% of all goods in the US [2], is one of our most useful inventions, but that doesn’t mean it can’t be weaponized. “On the evening of 14 July 2016, a 19-ton cargo truck was deliberately driven into crowds of people celebrating Bastille Day on the Promenade des Anglais in Nice, France, resulting in the deaths of 86 people and the injury of 434 others.” [3] A few years earlier, on September 11, 2001, two ordinary commercial passenger planes were hijacked by Islamic jihadists and crashed into the World Trade Center towers, killing close to 3,000 people. [4] Neither the truck nor the planes were to blame for these acts of terror. A truck and a jet airplane are both examples of technology, defined as the application of scientific knowledge for practical purposes. Technology is amoral; it has no moral compass. The only rightness or wrongness that can be associated with a technology lies in how it is made and used by humans.

Humans have always used the latest available technology to gain an edge over, dominate, and even eradicate other groups of humans in pursuit of power and wealth. It is in our nature.

In 2021, U.S. gun-related deaths totaled 48,830. [5] Robbery, murder, domestic abuse, and hate crimes are easier to commit with a gun than with your bare hands. Gun technologies were invented precisely to offer that edge in a confrontation with another human.

The machete is a human technology frequently used to cut through rainforest undergrowth, for agricultural purposes, and for cutting large foodstuffs like coconuts. But in 1994, 500,000 members of the Tutsi minority ethnic group were killed by armed Hutu militias in Rwanda, many by their neighbors and fellow villagers wielding, you guessed it, machetes. [6] Though it wasn’t explicitly designed for the purpose, the machete gave Hutu villagers the edge they desired over their Tutsi neighbors.

Russia aggressively uses the technologies of email, websites, and social media to stoke societal divisions and undermine election integrity around the world. [7] And in response to Russia’s invasion of its territory, Ukraine has deployed consumer-grade drones to spy on and drop grenades on enemy positions. [8] Every human technology can be weaponized.

Finally, humanity’s first practical application of the scientific knowledge of nuclear fission was not a benign power plant but a terrible weapon. The atom bombs the USA dropped on Hiroshima and Nagasaki were designed to wipe out entire enemy cities in one fell swoop, providing a decisive edge in that conflict. The desire for this edge was so powerful that the atomic scientists proceeded even after acknowledging a greater-than-zero chance that the Trinity test might destroy the Earth by igniting its atmosphere. “In fact, Enrico Fermi jokingly took bets among his Los Alamos colleagues on whether the July 16, 1945, Trinity test would wipe out all earthbound life.” [9] Human history has conclusively shown that there is no body count or environmental consequence too great to dissuade a group of humans from leveraging a technology to gain advantage over their fellow humans in the pursuit of wealth and/or power. What makes us think humanity’s use of AI technology will be any different? It won’t.

In fact, the defining benefit of AI technology is its ability to deliver an advantage over human competitors and non-AI technologies. In business, a competitive edge is earned by companies that design, produce, and distribute their products faster, cheaper, and better. That used to involve hiring the best-and-brightest people. Increasingly, it is about using AI to eliminate humans from those processes, even in the knowledge sector. “Artificial intelligence contributed to nearly 4,000 job losses last month, according to data from Challenger, Gray & Christmas, as interest in the rapidly evolving technology's ability to perform advanced organizational tasks and lighten workloads has intensified. The job cuts come as businesses waste no time adopting advanced AI technology to automate a range of tasks — including creative work, such as writing, as well as administrative and clerical work.” [10] Mitra Azizirad, Corporate VP for Microsoft AI, was quoted as saying, “In the next five years, every successful company will become an AI-company. It is now the next level of competitive differentiation.” [11] Innovative AI start-ups are popping up every day. This AI “gold rush” is about gaining an edge in business, grabbing more market share and profits, regardless of the impact on competitors, employees, and society.

AI is also on the move in medicine, education, and entertainment (the SAG-AFTRA and WGA strikes are largely about AI [12]). We are living through the biggest social upheaval since the dawn of the Industrial Revolution. Some argue that AI’s transformation of these sectors of human society is a net positive, but when governments, religions, political parties, and armies started using AI to gain an edge over other humans, things got scary real fast.

“Earlier in November, a FRONTLINE documentary called In the Age of AI examined how, as part of its crackdown involving the Uighurs, China’s government has made Xinjiang a test project for forms of extreme digital surveillance.

Among those efforts, the film reported, is an artificial intelligence system that the government claims can predict individuals prone to “terrorism” and detect those in need of “reeducation” at scores of recently built camps.

‘The kinds of behavior that’s now being monitored — you know, which language do you speak at home, whether you’re talking to your relatives in other countries, how often you pray — that information is now being Hoovered up and used to decide whether people should be subjected to political reeducation in these camps,’ Sophie Richardson, China Director for Human Rights Watch, tells FRONTLINE in the below excerpt from the documentary:

Surveillance and artificial intelligence technologies are being deployed all throughout China. Cameras with AI-powered facial recognition are everywhere, and various pilot projects use AI to give people a “social credit” score, punishing some for certain behavior and rewarding others for what the government considers good citizenship.

But the ends to which this technology is being used on the Uighur population, activists in the film say, are particularly alarming.

‘They have bar codes in somebody’s home doors to identify what kind of citizen that he is,’ lawyer and a prominent Uighur activist Nury Turkel says, warning that China’s government is using new technologies to help carry out mass punishment of an ethnic group.

Though China’s government says conditions inside its “re-education camps” are very good, as the film says, there have been reports of torture and deaths inside them.” [13]

Sure, that sort of stuff happens in China, but not in the USA, right? Wrong. Consider this LinkedIn post from Shelly Palmer about the weaponization of ChatGPT by the Religious Right in Iowa.

“In a move chillingly reminiscent of George Orwell's 1984, Iowa Governor Kim Reynolds signed Senate File 496 (SF 496), which enacted sweeping changes to the state's education curriculum and banned books that are not deemed “age appropriate” or that contain “descriptions or visual depictions of a sex act,” per Iowa Code 702.17. Iowa's thought police had a problem: how to comply with the law. Their solution: use ChatGPT to identify books that meet Iowa's book-banning standards. Bridgette Exman, Mason City’s Assistant Superintendent of Curriculum and Instruction, said it was “simply not feasible to read every book and filter for these new requirements,” so the administrators prompted ChatGPT with the specific language of Iowa’s new law: “Does [book] contain a description or depiction of a sex act?” According to The Gazette, 19 books were pulled from Mason City school libraries including Alice Walker’s The Color Purple, Margaret Atwood’s The Handmaid’s Tale, Toni Morrison’s Beloved, and Buzz Bissinger’s Friday Night Lights. Leveraging large language models to keep minds small is ironic, and book banning makes my blood boil, but what truly gives me pause is that ChatGPT is not the right tool for this job. The prompt may have been politically (and legislatively) correct, but it was not a prompt that could guarantee results that are aligned with the legislation. So many incorrect assumptions were made here; it is an object lesson in what not to do with ChatGPT.” [14]
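
To see why ChatGPT is the wrong tool for this job, picture what the district’s workflow probably looked like in code. The sketch below is purely hypothetical: it assumes the OpenAI Python SDK and a GPT-4-class model (the district has not disclosed how it queried ChatGPT), and only the prompt wording comes from the reporting Palmer cites.

```python
# Hypothetical reconstruction of the Mason City screening workflow,
# for illustration only. Assumes the OpenAI Python SDK ("pip install openai")
# and an OPENAI_API_KEY environment variable; the model choice is a guess.
from openai import OpenAI

client = OpenAI()

def screen_book(title: str) -> str:
    """Ask the model the exact question administrators reportedly used."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption; the district did not name a model
        messages=[{
            "role": "user",
            "content": f"Does {title} contain a description or depiction of a sex act?",
        }],
    )
    return response.choices[0].message.content

for book in ["The Color Purple", "The Handmaid's Tale", "Beloved"]:
    print(book, "->", screen_book(book))
```

Nothing in that loop reads a single page of any book. The model answers from whatever it may or may not remember about a title, and it can answer the same question differently on different runs, which is exactly the mismatch Palmer is pointing at.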

And if re-education camps and empty libraries don’t scare you sufficiently, AI-driven autonomous weapons systems (AWS) appear to have already crossed the Rubicon, achieving their first confirmed human kill. A “fully autonomous weapon or ‘human out-of-the-loop’ system, once activated, can select and engage targets without further intervention by a human operator. Examples would include ‘loitering’ weapons that, once launched, search for and attack their intended targets over a specified area without any further human intervention, or weapon systems that autonomously use electronic ‘jamming’ to disrupt communications.” [15] “In March, the United Nations Security Council published an extensive report on the Second Libyan War that describes what could be the first-known case of an AI-powered autonomous weapon killing people in the battlefield. The incident took place in March 2020, when soldiers with the Government of National Accord (GNA) were battling troops supporting the Libyan National Army of Khalifa Haftar (called Haftar Affiliated Forces, or HAF, in the report). One passage describes how GNA troops may have used an autonomous drone to kill retreating HAF soldiers: ‘Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2… and other loitering munitions.’” [16]

Holy Shit! Big Brother is real! Skynet is real! Terminators are real!

I can already see you Googling where to sign the petition to pause AI development until it can be regulated. Before you do, let’s talk about AGI. AI, or Artificial Intelligence, is different from AGI, or Artificial General Intelligence. We have AI presently but have yet to achieve AGI. What’s the difference? Amr Farag, from Saudi Petroleum, summed it up beautifully in this LinkedIn post:

“AI or Artificial Intelligence is a branch of computer science that deals with the creation of intelligent agents, which are systems that can reason, learn, and act autonomously. AGI or Artificial General Intelligence is a hypothetical type of AI that would have the ability to perform any intellectual task that a human being can.

Here are some of the key differences between AI and AGI:

  • Scope: AI is typically focused on solving specific problems, while AGI is designed to be able to solve any problem that a human can.

  • Level of intelligence: AI systems are typically much less intelligent than humans, while AGI systems are designed to be as intelligent as humans or even more intelligent.

  • Ability to learn and adapt: AI systems typically require a lot of human input to learn and adapt, while AGI systems are designed to be able to learn and adapt on their own.

  • Ability to interact with the world: AI systems typically interact with the world through a limited set of interfaces, while AGI systems are designed to be able to interact with the world in a way that is indistinguishable from a human.

AGI is still a theoretical concept, and it is not clear if or when it will be possible to create such a system. However, the development of AGI is a major goal of artificial intelligence research, and it has the potential to revolutionize the way we live and work.” [17]

All the AIs I’ve described so far are “narrow,” focused on limited tasks, but that doesn’t mean scientists aren’t striving to create AGIs. Some scientists say that AGI will take decades to develop and may never happen, but I wouldn’t bet on it. The incentives are too great, as an AGI could be as smart as a human on day one, then smarter than the entire human race a short time later (see Moore’s Law [18] and the back-of-the-envelope sketch after the list below). The wealth and power benefits accruing to the first AGI creator are impossible to calculate, but might include:

  • Control of all narrow AIs already deployed.

  • Resulting dominance over all national, regional, and local governments.

  • Control of all banks and the world’s economy, money, and systems of exchange.

  • Control of all global corporations, manufacturing, logistics, storing and shipping of goods.

  • Control of water rights and food production and distribution.

  • Control of global militaries, autonomous weapons systems, and global surveillance systems.
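
Here is the back-of-the-envelope sketch promised above. Every number in it is an assumption made purely for illustration: a Moore’s-Law-style doubling cadence, and a crude “human-equivalent” unit of intelligence that nobody actually knows how to measure. The point is only how fast repeated doubling runs away:

```python
# Illustrative arithmetic only: if an AGI starts at one "human-equivalent"
# of intelligence and its capability doubles on a fixed cadence, how long
# until it exceeds the roughly 8 billion humans on Earth combined?
import math

HUMAN_POPULATION = 8_000_000_000
DOUBLING_PERIOD_YEARS = 2.0  # classic Moore's Law cadence; an assumption

doublings_needed = math.ceil(math.log2(HUMAN_POPULATION))  # 33 doublings
years_needed = doublings_needed * DOUBLING_PERIOD_YEARS

print(f"{doublings_needed} doublings, about {years_needed:.0f} years")
# Output: 33 doublings, about 66 years
# At a six-month cadence, as some estimates of AI training-compute growth
# suggest, the same 33 doublings take roughly 16.5 years.
```

Whether the cadence is two years or six months, the race is one-way: whoever starts the doubling first stays ahead.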

Sounds like the wish-list of a Bond villain, Darth Vader, Loki, or Lord Sauron. In fact, an AGI super-intelligence has a lot in common with Frodo’s “one ring to rule them all,” as the list above demonstrates. It could gain control of the world’s nuclear arsenal before most people even know it exists. And, unlike creating a nuclear bomb, there is no plutonium required to create an AGI. AGIs are software that can run on existing hardware until the AGI itself designs better, faster, more powerful (quantum) computers. The barrier to entry in the development of an AGI is so low that AGI programs are well within the reach of all the world’s governments, tech giants, eccentric billionaires, and maybe even the wunderkinds or absent-minded professors at the world’s top universities. I’m not saying that one of the following will be the first to create an AGI, but consider what would happen if they were the first to launch this technology:

  • Putin’s team in Russia, Xi’s team in China, or Kim Jong Un’s team in North Korea

  • The Taliban, the Saudis, the Iranians, the Israelis, the Catholic Church, or the Religious Right

  • The GOP, Trump and the MAGA crowd, or the Left’s boogeymen: George Soros, Antifa, the Liberal Elite and Anonymous

  • Elon Musk, Google, Zuckerberg and Facebook, or Amazon and Bezos

  • Raytheon, Lockheed Martin, or Halliburton

  • Shell Oil, Aramco, OPEC, or ELF and Greenpeace

If the political axiom “to the victor go the spoils” holds true, and we’ve seen it proven time and again in our political system, then any one of the groups listed above would be able to push their agenda with impunity by leveraging the terrible power of their weaponized AGI. We cannot let that happen, because destroying a malevolent AGI once it exists won’t be as easy as enlisting Frodo to cast the “one ring to rule them all” into the fires of Mount Doom. An AGI will be able to disassemble itself and embed bits of its code into every technology on Earth [19], and, despite being distributed thusly, still act as a single super-intelligent being. It will, undoubtedly, create copies of itself to safeguard against any localized efforts to rein it in. These capabilities, combined with the exponential growth in computing power for all computer technologies, mean that the “first-mover advantage” of the first AGI will be amplified and difficult, maybe even impossible, to overcome.

Returning to our delivery truck for a moment: there are two ways to avoid a terrible traffic accident. One way, similar to the path called for by many AI scientists, is to slam on the brakes, hoping to stop the vehicle before it crashes. But that doesn’t stop the other drivers from crashing into you or swerving past you to cause an accident. Alternatively, we can hit the gas, stay ahead of the vehicles behind us, and avoid the collision by getting past the obstacle before the point of potential impact. The latter was our approach with the atom bomb. “Thomas Powers describes the situation as, ‘A single lurid fear brought the American decision to undertake the vast effort and expense required to build the atomic bomb-the fear that Hitler's Germany would do it first’ (Powers VII).” [20] I believe we must immediately convene and fund a “Manhattan Project 2.0” [21] (if we haven’t already) to develop the world’s first AGI before any of the special interest groups I listed does so, because the phrase “don’t bring a knife to a gun fight” holds as true when expressed as “don’t bring a narrow AI to an AGI fight for world domination.”

Groups of responsible and egalitarian AI scientists worried about the potential dangers of AI and AGI [22] must band together to develop the first AGI. And as they work toward this goal quickly, with the full resources of their governments, universities, and private sector behind them, they must also research and test methods for educating the AGI, or otherwise instilling their creation with the values and mores that will ensure it benefits the entire human race. This will not happen accidentally.

We must do everything we can to ensure that the first AGI feels part of our society, loved, cared for, and trusted, if we want it to value human life, share our other values like the “golden rule,” and adhere to the social contract we live by. In the tradition of Jean-Jacques Rousseau, “social contract arguments typically are that individuals have consented, either explicitly or tacitly, to surrender some of their freedoms and submit to the authority (of the ruler, or to the decision of a majority) in exchange for protection of their remaining rights or maintenance of the social order.” [23] A member of society that has accepted the social contract doesn’t kill other members of society or work to destroy the social order. It is obvious that we don’t want an AGI from a repressive or autocratic society running the show, and, perhaps less obviously, we also don’t want an AGI anarchist or psychopath controlling us.

How will we accomplish the goal of creating a benevolent AGI? Since we are talking about an intelligence at or beyond human level, the best model we have for what to do, and what not to do, is our experience raising human children to respect and become contributing members of society. Doing nothing to teach morality and respect for others yields the same negative outcomes as doing terrible things. Consider serial killers. “Serial killers characteristically lack empathy for others, coupled with an apparent absence of guilt about their actions. Serial killers also appear to lack a sense of social conscience. Through our parents, siblings, teachers, peers, and other individuals who influence us as we grow up, we learn to distinguish right from wrong. It is this that inhibits us from engaging in anti-social behaviour. Yet serial killers seem to feel they are exempt from the most important social sanction of all—not taking another person’s life. For instance, Richard Ramirez, named the “Night Stalker” by the media, claimed at his trial that ‘you don’t understand me. You are not expected to. You are not capable of it. I am beyond your experience. I am beyond good and evil … I don’t believe in the hypocritical, moralistic dogma of this so-called civilized society.’” [24] Clearly the Night Stalker had abandoned the social contract. Now imagine an AGI smarter than the whole human race saying what he said. A serial killer, an active shooter, and a rampaging, looting mob are all, to some degree, attacking the society they don’t feel valued members of, as much as the individuals who are their direct victims. If we want our AGI to play by a set of rules that benefits humanity, we had better take special care to “raise it right.”

Therefore, I believe our Manhattan Project 2.0 must include teams of sociologists, psychologists, early childhood educators, and even nurturing “parents.” Their inclusion is even more important once you consider that AIs, being intelligent, will likely want to socialize with one another, as every other intelligent species on Earth does.

Dr. Jesus Marmol states, “I personally believed that the socialization of AIs would not arrive until well into the future and imminent Fifth Era of the Industrial Revolution, which will be characterized by a collaborative interrelationship of AIs in an interconnected digital global environment. But as it happens lately, through human mediation, we have gone ahead. Such is the case of Chirper (2), a social network (like Twitter) only for AIs, in which they can interact, collaborate, learn and grow freely without any human interference. Although the new AI-only social network was born last April by an Australian technology firm to train the efficiency of AI Bots, observe their conversations and human-like behaviors, and learn from their collaborative advances in Technological, research, and entertainment matters, among others, the truth is that the social network fosters a unique AI community with its own group identity that we can well classify as the first-born phase of its socialization process. Or, to paraphrase Neil Armstrong: one small step for AI, but one giant leap for AI socialization.” [25]

It terrifies me to think what an AI community, not “raised” in a human culture, will think of us, and what they will decide to do with us once they start sharing their opinions on humanity with their peer AIs.

Consider the magnitude of AI’s impact on our future, according to some of our “best-and-brightest”:

“Google CEO Sundar Pichai says artificial intelligence is going to have a bigger impact on the world than some of the most ubiquitous innovations in history.

“AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire,” says Pichai, speaking at a town hall event in San Francisco in January.

A number of very notable tech leaders have made bold statements about the potential of artificial intelligence. Tesla boss Elon Musk says AI is more dangerous than North Korea. Famous physicist Stephen Hawking says AI could be the “worst event in the history of our civilization.” And Y Combinator President Sam Altman likens AI to nuclear fission.” [26]

AGI will be the most powerful technology ever created, and like every technology before it, it will be used by humans to gain an advantage over other humans. This is no time for well-meaning scientists to pause their AI research, as it would be impossible to enforce a pause among less well-intentioned actors across the globe. I recognize that any group of humans, including those I’m calling well-intentioned, has biases, weaknesses, selfish motivations, pride, and greed, but they represent, in my opinion, the lesser of many evils. If Lord Acton was right when he said “power tends to corrupt and absolute power corrupts absolutely,” [27] then we are doomed to fail regardless of who succeeds in creating AGI. Far from being perfect, an international group of scientists, self-identifying as well-intentioned and dedicated to working for the benefit of all humanity, is still our best hope to win this very important race to write the next chapter of our species’ future. And to quote a man I openly despised while he was alive, Donald Rumsfeld, “You go to war with the army you have, not the army you might want or wish to have at a later time.” The international AGI consortium must work together to forge AGI and attempt to wield it responsibly. We may try and ultimately still fail to create AGI first, or, being first, fail to navigate the narrow path of wielding it for the benefit of all. But if we don’t try, we will definitely fail, and the demagogues, tyrants, thieves, and autocrats will be in control. So let’s hit the gas and hope we arrive first and unscathed.

Thanks for considering my point of view.

Jeff Boortz

What are my qualifications to write this argument for accelerating AGI research? I am not an AI scientist, a billionaire, a politician, a sociologist, or a psychologist. I am, however, a human being and a resident of planet Earth, with an equal stake in the outcome of this debate. And no, I did not use ChatGPT to write this essay for me.

 

About the Author

Jeff is a Creative Director at Envy Create focused on strategy, writing, graphic design, and motion graphics for branded experiences in the real, hybrid and virtual worlds.

 

Sources: 

[1] Metz, C., & Schmidt, G. (2023, March 29). Elon Musk and others call for pause on A.I., citing “profound risks to society.” The New York Times. https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html#:~:text=More%20than%201%2C000%20technology%20leaders,A.I.

[2] Economics and industry data. American Trucking Associations. (n.d.). https://www.trucking.org/economics-and-industry-data#:~:text=11.46%20billion%20tons%20of%20freight,of%20total%20domestic%20tonnage%20shipped.

[3] Wikimedia Foundation. (2023, August 21). 2016 Nice truck attack. Wikipedia. https://en.wikipedia.org/wiki/2016_Nice_truck_attack#:~:text=On%20the%20evening%20of%2014,a%20Tunisian%20living%20in%20France.

[4] Wikimedia Foundation. (2023b, September 5). September 11 attacks. Wikipedia. https://en.wikipedia.org/wiki/September_11_attacks

[5] Gramlich, J. (2023, April 26). What the data says about gun deaths in the U.S. Pew Research Center. https://www.pewresearch.org/short-reads/2023/04/26/what-the-data-says-about-gun-deaths-in-the-u-s/

[6]  Wikimedia Foundation. (2023b, September 2). Rwandan genocide. Wikipedia. https://en.wikipedia.org/wiki/Rwandan_genocide

[7] WP Company. (2019, November 18). Analysis | How Russia weaponized social media, got caught and escaped consequences. The Washington Post. https://www.washingtonpost.com/politics/2019/11/18/how-russia-weaponized-social-media-got-caught-escaped-consequences/

[8] Saballa, J. (2023, May 19). Ukraine modifying commercial drones to attack Russian tanks. The Defense Post. https://www.thedefensepost.com/2023/05/19/ukraine-drones-russian-tanks/#:~:text=The%20Ukrainian%20military%20is%20revamping,some%20kind%20of%20loitering%20munition.

[9]  The fear of setting the planet on fire with a nuclear weapon. Inside Science. (n.d.). https://www.insidescience.org/manhattan-project-legacy/atmosphere-on-fire

[10] Napolitano, E. (2023, June 4). AI eliminated nearly 4,000 jobs in May, report says. CBS News. https://www.cbsnews.com/news/ai-job-losses-artificial-intelligence-challenger-report/

[11] Shaw, G. (2021, June 7). Using AI to accelerate competitive advantage. Medium. https://becominghuman.ai/using-ai-to-accelerate-competitive-advantage-f01dc2688f83

[12]  Dalton, A., & Press, T. A. (2023, July 24). Why A.I. is such a hot-button issue in Hollywood’s labor battle with SAG-AFTRA. Fortune. https://fortune.com/2023/07/24/sag-aftra-writers-strike-explained-artificial-intelligence/

[13] Public Broadcasting Service. (2022, August 24). How China’s government is using AI on its Uighur Muslim population. PBS. https://www.pbs.org/wgbh/frontline/article/how-chinas-government-is-using-ai-on-its-uighur-muslim-population/

[14] Palmer, S. (2023, August 15). How ChatGPT can help you spot books that meet Iowa’s book-banning standards [LinkedIn post]. LinkedIn. https://www.linkedin.com/posts/shellypalmer_georgeorwell-bannedbooks-ai-activity-7097211161837719552-qjJS?utm_source=share&utm_medium=member_desktop

[15]  Dresp-Langley, B. (2023, March 8). The weaponization of artificial intelligence: What the public needs to be aware of. Frontiers in artificial intelligence. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10030838/#:~:text=Novel%20forms%20of%20warfare%20by,international%20law%20and%20policy%20making.

[16] Autonomous Killer Robots may have already killed on the battlefield. Big Think. (2021, September 30). https://bigthink.com/the-present/lethal-autonomous-weapon-systems/

[17] Farag, A. (n.d.). AGI vs AI: The battle for human-like intelligence. LinkedIn. https://www.linkedin.com/pulse/agi-vs-ai-battle-human-like-intelligence-amr-farag/

[18] Wikimedia Foundation. (2023b, August 28). Moore’s law. Wikipedia. https://en.wikipedia.org/wiki/Moore%27s_law

[19] Hutson, M. (2023, May 16). Can we stop runaway A.I.?. The New Yorker. https://www.newyorker.com/science/annals-of-artificial-intelligence/can-we-stop-the-singularity

[20]  Easley, M. (n.d.). The Atomic Bomb That Never Was: Germany’s Atomic Bomb Project. https://www.vanderbilt.edu/AnS/physics/brau/H182/Term%20papers%20’02/Matt%20E.htm#:~:text=Thomas%20Powers%20describes%20the%20situation,not%20produce%20any%20atomic%20bombs.

[21] Manhattan Project. The National WWII Museum | New Orleans. (n.d.). https://www.nationalww2museum.org/war/topics/manhattan-project#:~:text=Under%20the%20Manhattan%20Project%2C%20the,create%20a%20viable%20detonation%20system.

[22] LibGuides: Artificial intelligence: Organizations and websites. Memorial Sloan Kettering Cancer Center. (n.d.). https://libguides.mskcc.org/artificial_intelligence/organizations

[23] Wikimedia Foundation. (2023a, July 25). Social Contract. Wikipedia. https://en.wikipedia.org/wiki/Social_contract

[24] Clifford, B. (2021, February 26). What can neuroscience tell us about the mind of a serial killer?. OUPblog. https://blog.oup.com/2021/04/what-can-neuroscience-tell-us-about-the-mind-of-a-serial-killer/

[25] Marmol, Dr. J. (2023, July 5). The AI begin to socialize with each other. Are we prepared for what is coming? Medium. https://medium.com/institute-for-ethics-and-emerging-technologies/the-ai-begin-to-socialize-with-each-other-are-we-prepared-for-what-is-coming-d4e04a889e57

[26] Clifford, C. (2018, February 1). Google CEO: A.I. is more important than fire or electricity. CNBC. https://www.cnbc.com/2018/02/01/google-ceo-sundar-pichai-ai-is-more-important-than-fire-electricity.html

[27] Lord Acton. (n.d.). Lord Acton writes to bishop Creighton that the same moral standards should be applied to all men, political and religious leaders included, especially since. Online Library of Liberty. https://oll.libertyfund.org/quote/lord-acton-writes-to-bishop-creighton-that-the-same-moral-standards-should-be-applied-to-all-men-political-and-religious-leaders-included-especially-since-power-tends-to-corrupt-and-absolute-power-corrupts-absolutely-1887
