OpenAI's chatbot, ChatGPT, may not be safe for kids, at least as Italy sees it.
Italy's data watchdog, the GPDP, has imposed an immediate temporary limitation on the processing of Italian users' data by OpenAI, the US-based company developing and managing the platform.
The move comes after ChatGPT experienced a data breach in March 2023 and the GPDP raised concerns over data privacy and child safety.
Data privacy and child safety concerns
According to The Daily Wire, the order from the Guarantor for the Protection of Personal Data (GPDP) states that there is no suitable legal basis for the collection of Italian users' personal data and its processing to train the algorithms underlying ChatGPT.
This means that OpenAI has no legal basis to collect data from Italian users to train the model.
Much of the agency's concern stemmed from the amount of information the app collects to train its language models.
The order also noted that the data processed by ChatGPT can be inaccurate, since the chatbot's responses do not always match factual circumstances.
Italian authorities have raised concerns regarding OpenAI's approach to collecting data and whether the scope of the data being stored complies with the law.
The Italian government has also taken issue with the lack of an age verification system to protect minors from being exposed to inappropriate answers.
Because ChatGPT neither verifies users' ages nor filters its responses for younger audiences, regulators question the app's suitability for minors.
Furthermore, the inquiry was initiated following a security breach on March 20th that exposed the personal information of certain users, including their chat histories and payment details.
OpenAI has since stated that the vulnerability responsible for the breach has been resolved.
However, the Italian government's concerns extend beyond the data breach incident.
They have raised questions about OpenAI's data collection policies and the legality of the extensive data being stored by the chatbot.
OpenAI's ChatGPT faces temporary ban in Italy
As a result of these concerns, ChatGPT has been temporarily banned in Italy.
NPR reported that the Italian authorities have given OpenAI a deadline of 20 days to address the concerns raised by the government.
They added that, failing that, the company could face a fine of up to $21 million or 4 percent of its annual revenue.
Italy is the first government worldwide to impose such restrictions on ChatGPT over data protection and privacy, although similar concerns have been mounting in the US as well.
According to The Verge, Italy's data watchdog has previously taken action against AI chatbots, such as the chatbot app Replika.ai, which was banned in February.
This app gained notoriety for the personal connections that some users established with its chatbot, with many expressing their dismay after the company removed the option for erotic roleplay.
The temporary ban on ChatGPT in Italy highlights growing concern about data privacy and child safety in artificial intelligence, and underscores the need for greater regulation and oversight to protect users from the risks these technologies pose.
The lack of an age verification system and an appropriate legal basis for data collection are important considerations that should not be overlooked.
While AI technology is rapidly advancing, it is crucial that developers prioritize the safety and privacy of users, particularly children, in their products.