Feeling lonely even on social networks? Meta (formerly Facebook), the parent company of Instagram and other platforms, is planning to launch a variety of AI-based chatbots that will take on "personas" and be able to chat with users.
The chatbots are designed to be as human-like as possible, unlike OpenAI's ChatGPT and Google's Bard. Users will be able to choose which character to chat with based on its personality traits and customize their "new friend" accordingly. There will be a surfing-enthusiast chatbot, a chatbot that recommends travel routes, and chatbots that gather information from the web and provide content recommendations on Facebook.
In addition, Meta is testing chatbots modeled on famous historical figures that will chat with users and boost their activity, keeping them engaged with Meta's platforms for as long as possible.
The concern: collecting intimate information about the users
Behind all these shiny features lies a concern: the new initiative, set to launch this September, will let Meta collect ever more data through users' conversations with the chatbots, some of it more intimate than anything users currently share on its platforms. That information would give Meta the ability to better target its advertising across the various platforms under its control.
Ravit Dotan, an AI ethics researcher and consultant, told the Financial Times: "When users chat with chatbots, it exposes a lot more data to the company, and they can do whatever they want with that data."