Data Security & Privacy while using Gen-AI models

Fiat Lexica
2 min read · Dec 1, 2023

Obtaining informed consent from users is an essential concern in the growing era of Artificial Intelligence. Here, I highlight a few corporate examples on this topic.

Zoom, a video conferencing company, has been criticised over its handling of informed consent, although it has since retracted its initial statements and is now exploring methods to obtain user permission. A lack of transparency can carry significant legal and financial ramifications: more than 70% of GDPR penalties are linked to a deficiency in the data controller’s transparency.

Generative AI depends heavily on large datasets for training, which raises concerns over copyright infringement and the unlawful use of individuals’ data. Zoom’s plan to use recorded conversations for its AI technology may violate the informed consent requirement of the GDPR. Getty Images, an American visual media company, has recently raised concerns over the unauthorised use of its copyrighted photographs for training AI models.

Websites sometimes impose specific limitations on data scraping for AI models, underscoring the importance of organisations complying with copyright law and privacy standards. The GDPR also places significant emphasis on accuracy, requiring organisations that use AI to conduct thorough data protection impact assessments to ensure compliance. Recent cases demonstrate the regulatory focus on ensuring that AI complies with laws and regulations while also fostering transparency in operations.
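As a concrete illustration, one common way sites signal such scraping limitations is a robots.txt file that disallows AI crawlers. The minimal Python sketch below checks that file before fetching a page; the "GPTBot" user agent and the example URL are assumptions for illustration only, not any site's actual policy, and passing this check does not by itself resolve copyright or GDPR questions.

```python
# Minimal sketch: consult a site's robots.txt before scraping a page for AI
# training data. The user agent name and URL below are illustrative only.
from urllib import robotparser
from urllib.parse import urlparse


def may_scrape(url: str, user_agent: str = "GPTBot") -> bool:
    """Return True only if the site's robots.txt allows `user_agent` to fetch `url`."""
    parsed = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()  # downloads and parses the site's robots.txt
    return rp.can_fetch(user_agent, url)


if __name__ == "__main__":
    target = "https://example.com/gallery/photo-123"  # hypothetical page
    if may_scrape(target):
        print("robots.txt permits this crawler; copyright and consent checks still apply")
    else:
        print("robots.txt disallows this crawler; skip the page")
```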

Investigations and prohibitions were imposed on rideshare and food delivery applications in Italy as a result of their Artificial Intelligence practices. Spain undertook a comprehensive review of the use of Artificial Intelligence in recruitment, with a particular focus on the need for transparency in how applicants are selected. Google’s Bard, in a case similar to the Facebook Dating launch, was temporarily suspended in the EU because a mandatory Data Protection Impact Assessment (DPIA) had not been completed.

To tackle the compliance and data protection challenges posed by AI, organisations must prioritise transparency, impartiality, and the lawful use of data. Conducting a data protection impact assessment is crucial, especially when Artificial Intelligence is used in Know Your Customer (KYC), due diligence, and job application processes. Ultimately, firms using AI must place the utmost importance on adhering to regulations and safeguarding data integrity.


Fiat Lexica

Research articles exploring the intersection of Criminal Law with AI/ML algorithms, together with pieces on Crime Science, Cybercrime, GDPR, and related topics.