LinkedIn's Generative AI Training on User Data Raises Data Privacy Issues
LinkedIn Trained AI Models on User Data Without Informing Users
A recent report from 404 Media has drawn attention to LinkedIn's controversial practice of using member data to train AI models. The practice began before the company revised its policy to disclose it, and many users reported discovering a new AI-training option in their settings only after the fact.
Trend in AI Using User Data
Using user data for AI training is a widespread practice across the tech industry. Meta has acknowledged using publicly available user posts to train its Llama models, and Google has updated its policies covering training data for AI models such as Gemini. The central issue with LinkedIn, however, is that it began collecting user data for this purpose without notifying users first.
- Privacy Protocol Concerns: Users expect a heads-up about data collection for privacy protection.
- Updated Policy: LinkedIn's revised terms state that it uses member data for generative AI models that create content on its platform.
- Privacy Measures: The company says it has implemented privacy-enhancing techniques to limit the amount of personal information included in training data.
Community Backlash
Since the policy update, users have voiced criticism across social media, highlighting the lack of transparency in LinkedIn's data practices. Users are encouraged to check their settings and opt out, although questions linger about whether data already used can be removed from trained AI models.