Pedro and Paulo Markun – Robot that generates conversations and content: the confusion is just beginning

CNET, short for “Computer Network,” is a US media website with between 500 and 1,000 employees. Its reviews, news, articles, blogs, podcasts and videos on technology and consumer electronics are viewed by more than 50 million people every month.

Since November 2022, alongside the texts of its reporters and columnists, CNET has been using artificial intelligence to create articles. The chosen channel was CNET Money, whose AI-written pieces are bylined “CNET Money Staff” and aimed at the world of finance.

The most recent, dated January 3, covers the benefits of paying off credit card charges before they are due. At the end, there is a disclaimer common in self-respecting publications:

“The editorial content on this page is based solely on objective and independent evaluations by our authors and is not influenced by advertising or partnerships. It was not provided or commissioned by any third party. However, we may receive compensation when you click on links to products or services offered by our partners.”

There is no clear mention of the use of artificial intelligence. Only elsewhere on the site does the outlet disclose that the content produced by its “Money Staff” is, in fact, generated by automation technology. Simply put, it is the work of our well-known GPT-3 chatbot from OpenAI, the company co-founded by Elon Musk that is now set to receive a large sum from Microsoft, making it one of the most attractive startups in the world.

CNET did not go all-in on GPT-3. So much so that one of its reporters, Jackson Ryan, posted an article on the website about the chatbot and predicted:

“ChatGPT is an impressive AI, but human jobs are safe (for now) (…) It definitely can’t do the job of a journalist. To say it can understates the very act of journalism.”

We decided to test

The columnists decided to check that prediction with one of the sources available: not our colleague Ryan, but GPT-3 itself. We presented Ryan’s argument and asked the chatbot to create a short article, taking on the role of a Brazilian journalist, justifying the use of GPT-3 to produce content.
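For readers curious how such a test could be posed programmatically rather than through the chat interface, here is a hypothetical sketch. The prompt wording, the `build_prompt` helper and the model name are all illustrative assumptions, not the columnists’ actual setup; actually sending the request would require the `openai` package and an API key.

```python
def build_prompt(argument: str) -> str:
    """Compose an instruction like the one the columnists describe:
    present Ryan's argument and ask the model to answer in the
    persona of a Brazilian journalist."""
    return (
        "Assume the role of a Brazilian journalist. "
        'A colleague argued: "' + argument + '" '
        "Write a short article justifying the use of GPT-3 "
        "to produce journalistic content."
    )

prompt = build_prompt("ChatGPT definitely can't do the job of a journalist.")
print(prompt)

# Sending it via OpenAI's GPT-3-era completions API would look roughly like:
# import openai
# response = openai.Completion.create(
#     model="text-davinci-003",  # an example GPT-3-family model
#     prompt=prompt,
#     max_tokens=400,
# )
# print(response.choices[0].text)
```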

The answer of our “interviewee”:

“As a journalist from Brazil, I see great potential in using GPT-3 for content production. The AI tool is capable of learning from large text datasets and can generate human-sounding text for a variety of purposes.

Using GPT-3 can help speed up the news writing process, allowing reporters to spend more time on other aspects of the story, such as fact checking and deep research. It can also be used to generate story ideas and help journalists narrow their stories down to the most important elements.

Furthermore, GPT-3 can help reduce the burden of mundane tasks such as summarizing news, cleaning up articles and producing headlines. This can help free up more time for reporters to focus on investigative reporting and better understand the context of the news they are covering.

Overall, using GPT-3 can make the news production process more efficient and allow journalists to focus on those aspects of the story that require the greatest human expertise. While it’s true that GPT-3 cannot do the job of a journalist, it certainly can be used in ways that complement and enhance the work of journalists.”

Next, we asked whether there are risks in using GPT-3 for this activity, and whether the content needs to be submitted to a human fact-check. The answer reveals prudence and common sense, let’s say:

“Yes, there are risks to using GPT-3 to produce content. As with any technology, there is the potential for errors, inaccuracies or distortions in the generated text. Therefore, it is important for journalists to review the generated text for accuracy and bias before publishing it.

It is also important to verify the content generated by GPT-3 and ensure that it is accurate and unbiased. This can be done by submitting the content to a human fact checker or by verifying the information with other sources. This is especially important when GPT-3 is used to generate ideas or headlines, as it is important to ensure that the information is correct and unbiased.”

And, in the future, will there be other possibilities to use GPT-3 in journalism?

“In the future, GPT-3 could be used for a variety of journalistic tasks. For example, it could be used to generate stories from raw data, helping journalists quickly and easily produce data-driven stories. It could be used to generate story ideas, helping journalists find interesting angles and stories to pursue.

GPT-3 can also be used to summarize stories, helping journalists quickly produce summaries and headlines for their stories. It can also be used to generate quotes from sources, helping journalists quickly and easily produce quotes for their stories. Finally, GPT-3 can be used to generate interviews, helping reporters quickly and easily conduct interviews with sources.”

After all, let’s get to the heart of Ryan’s question: does GPT-3 think journalists should fear losing their jobs as this happens? The chatbot went halfway in its answer:

“There is always the possibility that journalists will fear losing their jobs when GPT-3 is used for various tasks in journalism. However, it is important to remember that GPT-3 is not meant to replace journalists, but to help them in their work, making some activities more efficient and freeing up time for other activities that require more human skills.

GPT-3 also has the potential to create new opportunities for journalists, such as data-driven story building, rapid summary and headline generation, and quote and interview generation. So while there is the potential for some disruption, GPT-3 should ultimately help improve the work of journalists and create new opportunities for them.”

What to expect in the future? More and more uses

The text above, the various CNET articles and more and more content published on the net make it clear that the new generative artificial intelligences are here to stay and that the confusion is just beginning.

These tools will become increasingly commonplace and integrated into every content creator’s production workflow.

It’s worth mentioning that OpenAI already has a large partnership with Microsoft, which owns, among other things, Microsoft Word, software that has already become synonymous with text editors.

Imagine, then, if in the next release the company decides to integrate GPT-3 there, into everything you write. What if the system auto-completed your text with suggestions for sentences and entire paragraphs?

Or in a format where you give it five key phrases and it creates a full page of copy for further editing?

Finally, is there an ethical problem in not revealing that the texts were produced by artificial intelligence? CNET’s original disclaimer listed GPT-3 as part of the editorial team, and after the story broke, the company edited it to clarify that the articles were written with the assistance of artificial intelligence.

At the same time, any bookstore carries hundreds of published books written by ghostwriters, people who are paid to write for others, and this, while it generates debate in some quarters, has never been a major social problem.

The fact is that all these changes are arriving with impressive speed. And we will continue to cover their progress here in this column.

