Samsung Electronics has banned the use of generative artificial intelligence (AI) tools by employees after discovering that sensitive code was uploaded to OpenAI’s ChatGPT.
Samsung is concerned that the data transmitted to external servers through AI platforms such as Google Bard and Bing is difficult to retrieve and delete, and could be disclosed to other users or its competitors.
The ban applies to the company’s internal networks, computers, tablets and phones. The company warned employees that breaking the new policies could result in disciplinary action, including termination of employment.
The company informed employees of the new restriction in a memo viewed by Bloomberg. A survey Samsung conducted last month found that 65 percent of respondents believe services like ChatGPT pose a security risk.
The ban follows an incident in April, when internal source code was mistakenly uploaded to ChatGPT, although the specific details of the leaked information remain unknown.
Samsung is not alone in expressing concern over the security risks posed by generative AI. In February, JPMorgan Chase & Co., Bank of America Corp. and Citigroup Inc. banned or restricted the use of OpenAI’s chatbot service. Italy also barred the use of ChatGPT over privacy fears, though it has since reversed its stance.
While Samsung is creating its own internal AI tools for translation, document summarization and software development, it is also working on ways to block the upload of sensitive company information to external services. In the meantime, the company has restricted the use of generative AI until it can create a secure environment for its employees to use it safely.
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” read the memo. “However, until these measures are prepared, we are temporarily restricting the use of generative AI.”
Image credit: Shutterstock