Data Used in AI Enhances Business, Introduces New Dangers

Artificial Intelligence is boosting business efficiency but brings rising privacy, security, and governance risks leaders must address.

Artificial intelligence is increasingly transforming the workplace by boosting efficiency, automation, and actionable insights. However, legal experts Michael La Marca and Samuel Grogan warn that the vast and often sensitive data fueling these systems brings significant risks if not properly controlled. Their analysis, published in Bloomberg Law, highlights five critical domains requiring urgent attention as organizational reliance on artificial intelligence deepens: privacy compliance, data governance, employee monitoring, cybersecurity, and accountability.

The foundational tension lies between artificial intelligence’s need for enormous datasets and the strict mandates of privacy regulations such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR). Modern systems often absorb personal and confidential data in opaque ways that can make fulfilling user rights—access, correction, deletion—difficult or impossible. Moreover, the profiling and automated decision-making embedded in artificial intelligence may trigger heightened regulatory requirements and expose companies to legal action if individuals are not properly informed or empowered.

Another growing risk comes from inside organizations, as employees use third-party artificial intelligence tools—sometimes without authorization or awareness—creating data governance gaps. Sensitive company data, once shared with external systems, may be used for further model training, exposed to others, or stripped of its legal protection as a trade secret. Without robust input controls and contractual safeguards with artificial intelligence vendors, organizations risk unintentional data leaks and regulatory non-compliance. The rapid spread of "shadow AI"—unvetted, unmanaged use of artificial intelligence by employees—further complicates oversight, making governance frameworks essential for mitigating risk.

Employers deploying artificial intelligence-based employee monitoring systems face a parallel set of compliance challenges. Surveillance tools can improve efficiency and ensure adherence to policies, but they may also violate federal or state privacy laws, especially where the collection of audio or biometric data is concerned. Failing to obtain proper consent or imposing excessive surveillance can give rise to privacy torts and major legal exposure.

Cybersecurity risks are also intensifying, as malicious actors now leverage artificial intelligence to automate and scale attacks, generate sophisticated phishing content, and lower the barriers to entry for cybercrime. The authors advise organizations to develop artificial intelligence-specific security protocols, conduct scenario-driven exercises, and plan rapid responses to evolving threats. With artificial intelligence tools entering workflows at an unprecedented pace, companies are urged to implement formal governance programs, designate responsible stakeholders, review third-party agreements, and regularly reassess risks to ensure both competitive advantage and regulatory compliance.
