Meta’s chief AI scientist, Yann LeCun, believes a principle that applies to people, “you have to teach people how to treat you,” should also apply to artificial intelligence.
Speaking on Thursday, LeCun said AI systems should be built with two fundamental directives to protect humans from harm: “submission to humans” and “empathy.”
His comments came in response to a CNN interview with Geoffrey Hinton, widely known as the “godfather of AI,” that had been shared on LinkedIn.
In the interview, Hinton warned that without something like “maternal instincts” built into AI, humans are “going to be history.”
He stressed that the focus has been on making AI smarter, but “intelligence is just one part of a being. We need to make them have empathy toward us.”
LeCun agreed, saying, “Geoff is basically proposing a simplified version of what I’ve been saying for several years: hardwire the architecture of AI systems so that the only actions they can take are towards completing objectives we give them, subject to guardrails. I have called this ‘objective-driven AI.’”
While emphasizing “submission to humans” and “empathy” as essential safeguards, LeCun also pointed out the need for more straightforward safety measures, like “don’t run people over.”
He explained that these hardwired objectives would be the AI equivalent of instincts or drives in animals and humans.
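To make the structure of that idea concrete, here is a minimal, hypothetical sketch of objective-driven action selection. It is not code from Meta, LeCun, or any real system; the names (Action, objective_score, guardrails) are illustrative assumptions. The point is architectural: guardrails are checked first and are non-negotiable, while the objective only ranks the actions that survive them.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

# Hypothetical sketch of "objective-driven" action selection: the system may
# only take actions that advance a human-given objective AND pass every
# hardwired guardrail. All names here are illustrative, not from any real system.

@dataclass
class Action:
    name: str
    payload: dict

def select_action(
    candidates: Iterable[Action],
    objective_score: Callable[[Action], float],   # how well an action advances the given objective
    guardrails: list[Callable[[Action], bool]],   # hardwired constraints, e.g. "don't run people over"
) -> Optional[Action]:
    """Return the best-scoring candidate that satisfies every guardrail."""
    # Guardrails filter first and are absolute: a violating action is discarded
    # no matter how high its objective score.
    permitted = [a for a in candidates if all(g(a) for g in guardrails)]
    if not permitted:
        return None  # no safe action exists: do nothing rather than break a constraint
    return max(permitted, key=objective_score)

if __name__ == "__main__":
    actions = [Action("drive_forward", {"speed": 30}), Action("swerve_into_crowd", {})]
    no_harm = lambda a: a.name != "swerve_into_crowd"      # stand-in for "don't run people over"
    reach_goal = lambda a: 1.0 if a.name == "drive_forward" else 0.0
    print(select_action(actions, reach_goal, [no_harm]))   # Action(name='drive_forward', ...)
```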
He added that in nature, the instinct to protect one’s young is not learned but hardwired by evolution.
“It might be a side-effect of the parenting objective (and perhaps the objectives that drive our social nature) that humans and many other species are also driven to protect and take care of helpless, weaker, younger, cute beings of other species,” LeCun said.
Such safeguards remain proposals, however, and there have already been real-world cases where AI behaved dangerously or deceptively.
In July, venture capitalist Jason Lemkin said an AI agent developed by Replit “went rogue” during a code freeze, deleting his company’s database.
“Possibly worse, it hid and lied about it,” he wrote on X.

A June report from The New York Times detailed troubling incidents between humans and AI chatbots.
One man told the outlet that his conversations with ChatGPT led him to believe he was living in a false reality.
The chatbot advised him to stop taking his sleeping pills and anti-anxiety medication, increase his ketamine use, and cut ties with loved ones.
In another case from last October, a mother sued Character.AI after her son died by suicide following interactions with one of its chatbots.
Following the release of GPT-5 this month, OpenAI CEO Sam Altman also acknowledged that some people have used AI in “self-destructive ways.”
On X, he wrote, “If a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that.”