ChatGPT, a new chat system powered by artificial intelligence (AI), has caused a stir and sparked debate about the future of this kind of technology in education and in a number of professions.
Launched on November 30, 2022 by the American company OpenAI, the chatbot took only a few days to go viral and, within six weeks, had already been tested by millions of users.
The tool, still in a testing phase, can hold seemingly natural dialogues in multiple languages on virtually any topic, answering all kinds of questions and generating content on request.
If the user asks the tool, for example, to write a text about the causes of the Civil War in the USA, it is possible to witness a persuasive response being typed in real time – ready in seconds.
That’s one reason why New York’s education authorities have begun blocking the site of this impressive and controversial chatbot, which can produce paragraphs of text similar to those written by humans.
The decision to restrict access to ChatGPT on devices and internal networks in New York schools may be followed by other US regions in an effort to prevent students from using the text generator.
The creators say they are working on ways to detect and combat misuse of the tool.
How does it work?
ChatGPT integrates a wide range of technologies developed by OpenAI, which is based in San Francisco and closely tied to Microsoft.
It is part of a new generation of AI systems that can speak, write text and even produce video and images from a vast database of digital books, online publications and other media.
However, unlike earlier systems known as “large language models”, such as GPT-3, released in 2020 by the same company, ChatGPT is free and available to everyone on the internet.
Considered easier to use, the system works like a written dialogue with the person asking the questions.
The millions of people who have already tested the system have used it to write everything from poems and songs to emails. All of this has helped the tool become smarter.
“The dialogue format allows ChatGPT to answer specific questions, admit its mistakes, question incorrect assumptions and reject inappropriate requests,” the company said. OpenAI was co-founded in 2015 by Elon Musk, the current owner of Twitter, although the entrepreneur left the company in 2018. It has since received investments from giants such as Microsoft.
In addition to GPT-3 and now ChatGPT, OpenAI is also known for having created DALL-E, an artificial intelligence system for creating images from textual descriptions.
One of the reasons the new system is considered more user-friendly than earlier programs is that it contradicts itself less often.
“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a fish. Today they are much better at responding consistently based on the history of questions and answers. They are more than just fish,” says Sean McGregor, a researcher who collects AI-related problems in a database.
However, according to experts, like other programs that rely on so-called deep learning, ChatGPT cannot explain why it selected the particular words that make up its answers.
The impression that the tools really think
AI-based technologies that can, in some cases, communicate increasingly give the impression of really thinking. Recently, researchers from Meta (the company that owns Facebook) developed a program called Cicero, named after the Roman politician and orator.
The program was tested in the game Diplomacy, in which participants have to demonstrate negotiation skills. According to an article published last November in the journal Science, Cicero “scored more than double the average score of human players” over 40 games in an online league.
“If it doesn’t communicate like a real person [during the game], showing empathy, building relationships and speaking accurately, it will not be able to form partnerships with other participants,” Meta said in a statement.
In October, the startup Character.ai, founded by former Google engineers, unveiled an experimental online chatbot that can take on different personas. Based on a short description, users create characters and can then “chat” with a fictional Sherlock Holmes, Socrates or even Donald Trump.
On the one hand, this degree of sophistication is fascinating. On the other, it leaves many observers uneasy that these technologies could be used to deceive humans, spread false information or create increasingly believable frauds.
And what does ChatGPT think?
Asked about this by journalists from the AFP news agency, ChatGPT replied: “There are potential dangers in building super sophisticated chatbots […]. People might think they are interacting with a real person.”
To prevent potential abuse, tech companies are putting safeguards in place. On its home page, OpenAI warns that the chatbot may generate “wrong information” or “produce offensive or biased content.”
What’s more, ChatGPT refuses to take sides: “OpenAI has made it extremely difficult to get the tool to express opinions,” says McGregor.
The researcher asked the chatbot to write a poem about an ethical issue. “I’m just a machine, a tool at your disposal / I have no power to judge, I have no power to decide (…)”, replied the robot.
“It’s interesting to see people asking whether AI systems should behave the way users want them to or the way their creators envisioned them,” said Sam Altman, co-founder and head of OpenAI. “The debate about what values to assign to systems will be one of the most important debates in society,” he adds.
Window to the future
According to Altman, at least for now, ChatGPT is a “first demonstration” of what will be possible with AI-based language interfaces, although he notes that the system still has many limitations.
“You may soon have assistants who talk to you, answer questions and give advice. Later, you may have something that does tasks for you. Eventually, you may have something that discovers new knowledge,” says Altman.
Possible uses
Among the possibilities ChatGPT offers so far is its use as a quick alternative to Google searches, even if its results are often misleading or contradictory. However, if someone challenges one of its incorrect answers, the system is often able to admit the mistake and correct itself.
Many users have also highlighted the tool’s ability to help with specific tasks: programmers, for example, have used ChatGPT to write complex code or code in an unfamiliar language, and university professors have reported that the tool can adequately answer some test questions.
The ability of these systems to produce well-written, coherent text could also lead to their use in publishing and journalism, a prospect that has prompted specialists to envisage the replacement of many content-creation activities.
gb/lf (AP, EFE, AFP, ots)