Meta Accused of Using Gerry Adams' Books to Train AI

Meta's Artificial Intelligence may have been trained using works by political figure Gerry Adams.

Meta, the company behind major platforms such as Facebook and Instagram, is under scrutiny for allegedly using books authored by Gerry Adams, a prominent political figure, to train its Artificial Intelligence models. Adams, known for his leadership of Sinn Féin, has voiced concerns about the use of his works without permission.

The controversy arises amid broader concerns about the data sources tech giants like Meta use to develop sophisticated AI technologies. The lack of transparency on such matters has drawn criticism from various quarters, including authors and public figures whose work may make up a significant portion of the training material for these AI systems.

This latest issue highlights growing tensions between content creators and technology firms over intellectual property rights and the commercialization of personal and political works. As AI continues to evolve, the debate over training data sources is expected to intensify, raising ethical questions about the use and distribution of creative works in the digital age.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.

Generative Artificial Intelligence is reshaping cybercrime less than feared

Research into criminal underground forums suggests generative Artificial Intelligence is being used mainly as a productivity tool rather than a transformative criminal breakthrough. The biggest near-term risks may come from automation, fraud support, and attackers adapting content to influence chatbot outputs.
