Samsung employees in the company’s semiconductor division reportedly leaked confidential information into ChatGPT while using the chatbot for work tasks. The division had allowed engineers to use ChatGPT to check source code, but a report from The Economist Korea described three separate cases in which sensitive internal material was shared with the service.
In one case, an employee pasted confidential source code into ChatGPT to check for errors. Another shared code with ChatGPT and asked it to optimize the code. A third uploaded a recording of a meeting to convert into notes for a presentation. The concern is that submitted information is retained on OpenAI's servers and may be used to train the model, meaning Samsung has no practical way to retrieve or delete the exposed data.
The incidents reflect broader privacy concerns around using public AI systems for workplace tasks involving proprietary, legal, or medical information. Sharing sensitive material with such services for summarization or analysis could create compliance issues. The report notes that experts have warned this practice could violate GDPR, a concern raised alongside Italy's recent ban on ChatGPT.
Samsung responded by capping each employee's ChatGPT uploads at 1024 bytes and launching an investigation into the employees involved. The company is also considering building an internal AI chatbot to reduce the risk of future leaks. OpenAI's policy states that ChatGPT conversations are used to train its models unless users opt out, and its usage guide warns against sharing sensitive information in conversations.
