Social media

Promises

  • Conversational agents known as “virtual influencers” are increasingly present on social networks such as Twitter and Instagram. These virtual influencers imitate humans and can manipulate other users, most worryingly by spreading misinformation or disinformation.

Opportunities

  • One such virtual influencer, Lil Miquela, created in 2016, is active on Instagram and currently has over three million followers. It often speaks out against racism, sexism, and police violence, and even talks about “sexual abuse” of which it (“she”) was supposedly a victim. It exploits human empathy and the ambiguity of its simulated persona to attract Instagram followers.

Concerns

  • Because virtual influencers interact with people in natural language, they invite projections of human traits and relational qualities such as trust or responsibility, even though they possess neither. Chatbots participating in social media therefore pose a major risk for society: they are culturally influential yet not bound by existing norms and conventions.
  • Since the people who interact with bots on social media are typically not machine learning specialists, who is responsible for potential malfunctions or undesired effects? Should responsibility be distributed among the developers, trainers, manufacturers, and users? How could this be achieved?

Boundaries

  • In some cases, a user may not realize that they are interacting with a chatbot posing as an influencer. Or they may project human qualities onto a chatbot even while knowing they are interacting with a machine. This threatens an individual’s autonomy, understood as their freedom from manipulation by third parties. To protect that autonomy, the manufacturer needs to implement appropriate reminders or acknowledgments that one’s interlocutor is a machine, as sketched below.
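
As an illustration, here is a minimal sketch of one way such a recurring disclosure could be implemented. The DisclosingBot wrapper, the generate_reply function, and the disclosure cadence are hypothetical names and parameters, not taken from any particular platform or library; a real deployment would also need persistent state and platform-specific delivery.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

DISCLOSURE = "Reminder: you are chatting with an automated account, not a human."

@dataclass
class DisclosingBot:
    """Wraps a reply generator and periodically reminds each user that
    their interlocutor is a machine (hypothetical policy: disclose on
    the first message and then every `every_n_messages` messages)."""
    generate_reply: Callable[[str], str]   # underlying chatbot (assumed given)
    every_n_messages: int = 5              # assumed disclosure cadence
    _turns: Dict[str, int] = field(default_factory=dict)

    def reply(self, user_id: str, message: str) -> str:
        turn = self._turns.get(user_id, 0)
        self._turns[user_id] = turn + 1
        text = self.generate_reply(message)
        # Disclose on turns 0, 5, 10, ... for this user.
        if turn % self.every_n_messages == 0:
            return f"{DISCLOSURE}\n{text}"
        return text

# Hypothetical usage with a trivial stand-in reply generator.
bot = DisclosingBot(generate_reply=lambda m: f"Echo: {m}")
print(bot.reply("user42", "Hi!"))            # first turn: includes the disclosure
print(bot.reply("user42", "Nice weather."))  # later turns: no disclosure yet
```

The relevant design choice is that the disclosure is injected by the wrapper, outside the reply generator itself, so it cannot be omitted or suppressed by whatever text the underlying model produces.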