Journalism

Promises

  • NLP can be used to create media content without human supervision.
  • NLP content production reduces the cost of producing disinformation at scale. Chatbots can generate text fully autonomously or produce candidate samples for a human to select from. The scalable production of fake information can reinforce feedback loops in news consumption, often referred to as “filter bubbles” or “echo chambers”, in which people are exposed only to news that confirms their existing beliefs.
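To make the scale argument concrete, the sketch below shows autonomous text generation with a toy word-level Markov chain. This is only an illustration standing in for the large neural language models the text refers to; the corpus and all function names are invented for the example.

```python
import random

def build_model(text, order=2):
    """Map each n-gram of words to the words observed to follow it."""
    words = text.split()
    model = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model.setdefault(key, []).append(words[i + order])
    return model

def generate(model, length=15, seed=0):
    """Produce text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    state = list(rng.choice(list(model.keys())))
    out = list(state)
    for _ in range(length):
        followers = model.get(tuple(out[-len(state):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Invented toy corpus: once trained, the generator can emit endless
# cheap variations, which is the core of the scalability concern.
corpus = ("officials say the report confirms the claim "
          "officials say the report disputes the claim "
          "analysts say the report confirms the fears")
model = build_model(corpus)
article = generate(model)
```

A human operator could equally sample many such outputs and hand-pick the most persuasive ones, matching the "generate samples for a human to select" mode described above.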

Opportunities

  • Disinformation campaigns often rely on current events, so chatbots whose training data is regularly updated will be especially effective at producing such content.
  • Fake media content can also be coupled with extended reality applications, like deep fake avatars – synthetic versions of people who look and sound like the real individuals – to produce even more convincing fake content.

Concerns

  • How can readers distinguish between news produced by people and news produced by chatbots if this information is not disclosed?
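There is no reliable general-purpose detector for machine-generated text. Purely as an illustration of the kind of crude stylometric signal sometimes discussed, the toy below computes a type-token ratio (lexical variety); the approach, the example strings, and the idea that generated text is more repetitive are all assumptions for this sketch, not a workable detection method.

```python
def type_token_ratio(text):
    """Fraction of distinct words in a text: a crude proxy for lexical variety."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

# Invented example strings for illustration only.
human_like = "The council met on Tuesday and voted, after a long debate, to delay the project."
repetitive = "the report says the report says the report says the report says"

# Lower ratios indicate more repetition; a real detector would need
# far richer signals, and fluent model output defeats simple heuristics.
```

The weakness of such heuristics is precisely why the concern above matters: without explicit disclosure, readers have few dependable cues.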

Boundaries

  • Mass-produced media content is potentially harmful because it can include “fake news”.
  • One of the biggest concerns about mass-produced media is that it can create a false sense of majority opinion. NLP systems and social media chatbots can be employed to inflate apparent support for a particular view on politics, finance, health, warfare, or other socially and politically sensitive topics. Since social media platforms are generally not under national control, foreign agencies can falsify what a nation’s majority opinion appears to be.
  • Chatbots can be even more efficient than humans at detecting and manipulating the recommendation algorithms that supply content to end users. NLP can be used to create content that supports a specific political view and fuels polarisation or even violent extremist views.
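The manipulation of recommendation systems described above can be sketched in miniature. The snippet below uses a naive engagement-based ranking (a deliberate simplification; real ranking systems are far more complex) with invented post names and numbers, and shows how a small coordinated bot campaign can flip which item is shown first.

```python
def rank_by_engagement(posts):
    """Order posts by raw engagement count, a simplified ranking signal."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

# Invented example feed: the balanced item initially leads.
posts = [
    {"id": "balanced_report", "engagement": 120},
    {"id": "polarising_claim", "engagement": 90},
]

# An assumed bot campaign adds synthetic engagement to the divisive item...
BOT_BOOST = 50
posts[1]["engagement"] += BOT_BOOST

# ...and the naive ranker now surfaces it first to every user.
top = rank_by_engagement(posts)[0]["id"]
```

Because the ranker only sees aggregate engagement, it cannot tell organic interest from coordinated inflation, which is what makes automated amplification effective.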