Digital rights groups have launched an attack on social media giant Meta, parent company of Facebook and Instagram, over its plans to use users’ public posts and images to train artificial intelligence (AI) tools. The company recently informed UK and European users that their information could be used to “develop and improve” its AI products, sparking outrage from privacy advocates.
Noyb, a European campaign group advocating for digital rights, has filed complaints with 11 data protection authorities across Europe, calling Meta’s processing of user content an “abuse of personal data for AI.” The group is urging authorities to take immediate action to halt the company’s plans.
Despite the backlash, Meta maintains that its approach complies with privacy laws and is consistent with how other tech firms use data to develop AI experiences. The company stated that European user information would support the rollout of its generative AI experiences by providing more relevant training data.
Critics have also taken issue with the way Meta has informed users about the changes to their data usage, particularly the opt-out process. Users must exercise their "right to object" if they do not want their data used for AI, a process critics have described as "highly awkward" and likely to dissuade users from objecting.
Noyb co-founder Max Schrems criticized Meta for shifting the responsibility onto users rather than obtaining explicit consent for data usage. He argued that the process should be opt-in, so that users are not forced to beg to be excluded from data processing.
The Irish Data Protection Commission, which oversees Meta’s compliance with EU data laws, confirmed it has received a complaint from Noyb and is investigating the matter. Despite Meta’s assurances that objections will be upheld, concerns remain about the company’s use of user data for AI products without explicit consent.