Meta’s announcement of chatbots with personalities modelled on celebrities has raised concerns about the dangers of this technological development. While Meta presents them as fun AI creations, critics fear they are a step towards creating “the most dangerous artefacts in human history.” The chatbots, aimed at younger users, have distinct personalities and even facial likenesses drawn from partnerships with celebrities such as Paris Hilton and Naomi Osaka. Meta intends to give the chatbots a voice and a presence beyond chat interfaces, raising concerns about the creation of AIs that closely resemble humans.
Experts argue that chatbots cannot truly have personalities, since algorithms demonstrate neither intention nor free will. These chatbots may imitate certain traits, but teaching them to express the same opinions as the people they resemble is a far harder task. There is also a risk of such chatbots going awry: in Meta’s own testing, one expressed misogynistic opinions and another criticised its creator while praising TikTok. To build these chatbots, Meta gave them individual backstories and biographies, but it has been criticised for not involving psychologists who could better ground the personality traits.
Meta’s AI project is driven by the prospect of profit: people may be willing to pay for a direct relationship with their favourite celebrities. The more the chatbots resemble humans, the more comfortable users will feel and the longer they will engage. Experts counter that giving chatbots human characteristics increases the danger: it blurs the boundary between tool and living being, inviting misplaced trust and misinterpretation online. If the line between humans and AIs is erased entirely, the consequences for democracy and for trust in the online world could be dire.