WeTransfer, the Dutch file-sharing platform, has revised its terms of service after public outcry over ambiguous language that suggested user-uploaded content could be used to train machine learning models. The reversal follows mounting scrutiny from privacy-conscious users and reflects a broader wave of consumer pushback against opaque AI data practices at tech companies.
The backlash stemmed from a clause in WeTransfer’s updated policy, set to take effect on August 8, stating that files could be used to “improve performance of machine learning models.” The phrase alarmed users who feared their personal data might be fed into AI training systems without their consent.
Responding to the criticism, WeTransfer clarified that it does not use content shared via its platform for AI model training, nor does it sell or share such data with third parties. According to media reports, a company spokesperson explained that the clause was originally intended to permit future use of AI in content moderation, such as detecting harmful or abusive files. The company conceded, however, that the wording had been poorly framed and had caused “unintended confusion.”
In the revised version, WeTransfer has removed all references to AI and machine learning. The updated clause now states simply that users grant the company permission to use their content “for the purposes of operating, developing, and improving the service,” in line with its existing privacy and cookie policies.