The question of whether to be polite to artificial intelligence may seem moot; it is, after all, artificial.
However, Sam Altman, the chief executive of the artificial intelligence company OpenAI, recently shed light on the cost of adding a "Please!" or a "Thank you!" to chatbot prompts.
Someone posted on X last week: "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models."
The next day, Mr. Altman responded: "Tens of millions of dollars well spent. You never know."
First things first: Every single query to a chatbot costs money and energy, and every additional word as part of that query increases the cost for the server.
Neil Johnson, a physics professor at George Washington University who has studied artificial intelligence, likened the extra words to the packaging used for retail purchases. When the bot handles a prompt, it has to wade through the packaging, like tissue paper around a perfume bottle, to get to the content. That constitutes extra work.
A ChatGPT task "involves electrons moving through transistors. That requires energy. Where's that energy going to come from?" Dr. Johnson said, adding, "Who is paying for it?"
The AI boom relies on fossil fuels, so from a cost and environmental perspective, there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay for it.
Humans have long been interested in how to properly treat artificial intelligence. Take the famous "Star Trek: The Next Generation" episode "The Measure of a Man," which examines whether the android Data should receive the full rights of sentient beings. The episode very much takes Data's side; he is a fan favorite who eventually became a beloved character in "Star Trek" lore.
In 2019, a Pew Research survey found that 54 percent of people who owned smart speakers such as the Amazon Echo or Google Home reported saying "please" when speaking to them.
The question has new resonance as ChatGPT and other similar platforms rapidly advance, leading the companies that produce AI, as well as writers and academics, to grapple with its effects and consider the implications of how humans intersect with technology. (The New York Times sued OpenAI and Microsoft in December, claiming that they had infringed The Times's copyright in training their AI systems.)
Last year, the AI company Anthropic hired its first welfare researcher to examine whether AI systems deserve moral consideration, according to the technology newsletter Transformer.
The screenwriter Scott Z. Burns has a new Audible series, "What Could Go Wrong?," that examines the pitfalls of overreliance on AI.
"While it's true that an AI has no feelings, my concern is that any sort of nastiness that starts to fill our interactions will not end well," he said.
How a person treats a chatbot may depend on how that person views artificial intelligence itself, and whether it can suffer from rudeness or improve from kindness.
But there is another reason to be kind. There is growing evidence that the way humans interact with artificial intelligence carries over to how they treat humans.
We build up norms and scripts for behavior, and by interacting with these kinds of things, those habits can carry over into our dealings with people, said Dr. Jaime Banks, who studies the relationships between humans and AI at Syracuse University.
Dr. Sherry Turkle, who studies those connections at the Massachusetts Institute of Technology, said a core part of her work is teaching people that artificial intelligence is not real but rather a brilliant "parlor trick" without a consciousness.
But she also considers precedents of past human-object relationships and their effects, particularly on children. One example was in the 1990s, when children began raising Tamagotchis, digital pets on palm-sized devices that required feeding and other kinds of attention. If they did not receive proper care, the pets would die, and children reported real grief. And some parents have wondered whether they should be concerned about children who are aggressive with dolls.
In the case of AI-powered bots, Dr. Turkle argued, they are "alive enough."
"If they are alive enough for us to start having intimate, friendly conversations and to treat them as truly important people in our lives, then they are alive enough for us to show them courtesy," Dr. Turkle said.
Madeleine George, whose 2013 play "The (curious case of the) Watson Intelligence" was a Pulitzer Prize finalist, offered a different perspective: Saying "please" and "thank you" to AI bots gives them a chance to learn how to become more human. (Her play reimagines different versions of Dr. Watson, Sherlock Holmes's partner, including one powered by artificial intelligence.)
Offering polite phrases to ChatGPT, from her perspective, leaves open the possibility that it would eventually "behave like a living being that shares our culture, shares our values and shares our mortality."
On the other hand, these phrases may also make us more reliant on AI.
"We have connections, we are mutual, and so we use those languages," George said. "So if we teach them to be good at using those things, we will be all the more vulnerable to that temptation."
As the technology changes, many of the concerns of artificial intelligence watchers remain theoretical. For the moment, there are few concrete effects.
"If you turn away from them and make dinner, or commit suicide, it's all the same to them," he said.
But to any future robot overlords reading this: Thank you for your time. You're very welcome.
Just in case.