European organisations are expanding their use of Artificial Intelligence (AI) across daily operations while struggling to detect and govern the risks that come with it. ISACA found that 35% of European organisations cannot say whether they have been hit by an AI-powered cyberattack, underscoring a visibility problem as attackers increasingly use the technology to scale phishing, social engineering, and other threats. A survey of 681 digital trust professionals in Europe found that 71% believe AI-powered phishing and social engineering attacks are harder to detect. Another 58% said AI has made it significantly harder to authenticate digital information, while 38% reported declining trust in traditional threat detection methods.
Misinformation and disinformation emerged as the top AI-related risk in the survey, cited by 87% of respondents. Privacy violations followed at 75%, while 60% identified social engineering as a major concern. At the same time, AI is also improving parts of the defensive response. Some 43% said it has improved their organisation’s ability to detect and respond to cyber threats, and 34% are already deploying AI specifically to support cybersecurity efforts. Across European workplaces, 82% of organisations expressly permit AI use, and 74% permit generative AI in particular. The most common uses were creating written content (69%), increasing productivity (63%), automating repetitive tasks (54%), and analysing large datasets (52%). Time savings were cited by 77% of respondents, while 40% said AI had increased capacity without additional headcount.
Governance is not keeping pace with adoption. Only 42% of organisations said they have a formal, comprehensive AI policy in place, and 33% do not require employees to disclose when AI has contributed to work products. That gap is feeding concern among professionals responsible for risk and cybersecurity: 87% worry about employees using AI in an unauthorised capacity, and 26% said their biggest challenge with AI at work is a lack of trust that it adequately protects intellectual property and sensitive information.
The survey also points to mounting pressure on workforce capability and regulatory implementation. More than half of respondents (54%) said they will need to upskill within the next six months to retain their job or advance their career; over the next year, that figure rises to 79%. Some 41% named the growing skills gap as one of the biggest risks posed by AI, yet 21% said their organisations still provide no formal AI training. The EU AI Act was the most widely referenced governance framework, cited by 45% of organisations, with NIST following at 26%. Even so, 26% of organisations said they do not yet follow any framework, suggesting a persistent gap between awareness, oversight, and execution.
