Harnessing artificial intelligence for good

USC Dornsife outlines a universitywide effort to shape the development of Artificial Intelligence, pairing ethics and foundational science with applications in health, neuroscience and education.

USC Dornsife’s “Harnessing Artificial Intelligence for Good” presents a broad, research-led vision for aligning emerging technologies with human values. Framing the moment as an intersection of the physical world with human and artificial intelligence, the initiative emphasizes that technology alone cannot determine the future. Researchers across the college are refining Artificial Intelligence while interrogating its impacts, from systems optimization and safety to drug discovery, neuroscience and creative research workflows.

A centerpiece is the USC Institute on Ethics & Trust in Computing, a collaboration between the USC Dornsife School of Philosophy, the USC Viterbi School of Engineering and interdisciplinary scholars. Supported by funds from the Lord Foundation of California, the institute is designed to guide responsible development of Artificial Intelligence, shape public discourse, partner with industry and train students to connect philosophical clarity with practical execution. The site also highlights thematic research thrusts: “Moral Coding” for decision design that prioritizes wellbeing; machine learning that surfaces brain signals governing perception and memory; optimization of complex systems; and creativity prompted through simulation and serendipity.

On the applications front, computational scientist Vsevolod Katritch’s V-SYNTHES platform reimagines drug discovery by virtually assembling molecular building blocks and predicting effects with an approach that is described as 5,000 times faster than traditional methods. The work aims to accelerate treatments for addiction, cancer and Alzheimer’s while reducing cost. Additional coverage curated by USC Dornsife spotlights philosophical debates about machine thinking, Artificial Intelligence tools that illuminate DNA structure and the launch of the ethics institute.

The initiative traces today’s breakthroughs to decades of basic research. Structural biologist Helen Berman co-founded the Protein Data Bank in 1971, establishing the first open-access repository of experimentally determined 3D protein structures. That high-quality dataset enabled machine learning systems to recognize folding patterns, underpinning Artificial Intelligence advances such as DeepMind’s AlphaFold, which helps target disease mechanisms. Building on those lessons, the USC Cell Modeling Initiative, co-led by Kate White, Berman, computer scientist Carl Kesselman and production designer Alex McDowell, uses Artificial Intelligence and virtual reality to create immersive cellular models that clarify processes like insulin production and hormone secretion.

Education is positioned as a startup-like response to rapid change. Undergraduate and graduate pathways span philosophy, quantitative and computational biology and neuroscience, with internships and coursework that blend ethics and technical fluency. Examples include QBIO 465 on Artificial Intelligence in biology and medicine; PHIL/ENGR 265g on ethics, technology and value; and a special topics course on simulation and society using extended reality and Artificial Intelligence. A companion digital brochure, “Mind the Machine: Harnessing AI for Good,” summarizes the programs and priorities.


Nvidia to sell fully integrated Artificial Intelligence servers

A report picked up by Tom’s Hardware and discussed on Hacker News says Nvidia is preparing to sell fully built rack and tray assemblies that include Vera CPUs, Rubin GPUs and integrated cooling, moving beyond supplying only GPUs and components for Artificial Intelligence workloads.

Navigating new age verification laws for game developers

Governments in the United Kingdom, the European Union, the United States and elsewhere are imposing stricter age verification rules that affect game content, social features and personalization systems. Developers must adopt proportionate age-assurance measures such as ID checks, credit card verification or Artificial Intelligence age estimation to avoid fines, bans and reputational harm.

Large language models require a new form of oversight: capability-based monitoring

The paper proposes capability-based monitoring for large language models in healthcare, organizing oversight around shared capabilities such as summarization, reasoning, translation, and safety guardrails. The authors argue this approach is more scalable than task-based monitoring inherited from traditional machine learning and can reveal systemic weaknesses and emergent behaviors across tasks.
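As a rough illustration of the idea, the Python sketch below groups monitoring checks by capability and reuses them across clinical tasks, so a weakness in a shared capability shows up once rather than as scattered per-task failures. The class names, the toy overlap metric and the example tasks are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of capability-based monitoring for an LLM deployment.
# Checks are attached to a shared capability (e.g., summarization) and applied
# to every task that relies on it, rather than being defined per task.

from dataclasses import dataclass, field
from enum import Enum
from typing import Callable


class Capability(Enum):
    SUMMARIZATION = "summarization"
    REASONING = "reasoning"
    TRANSLATION = "translation"
    SAFETY = "safety_guardrails"


@dataclass
class MonitorResult:
    capability: Capability
    task: str          # e.g. "discharge_summary", "radiology_report"
    score: float       # 0.0 (failing) .. 1.0 (passing)


@dataclass
class CapabilityMonitor:
    """Aggregates one capability's checks across every task that uses it."""
    capability: Capability
    check: Callable[[str, str], float]   # (model_output, reference) -> score
    results: list[MonitorResult] = field(default_factory=list)

    def evaluate(self, task: str, output: str, reference: str) -> MonitorResult:
        result = MonitorResult(self.capability, task, self.check(output, reference))
        self.results.append(result)
        return result

    def weakest_tasks(self, threshold: float = 0.7) -> list[str]:
        """Tasks where this shared capability underperforms; per-task
        monitoring would report these as unrelated failures."""
        return sorted({r.task for r in self.results if r.score < threshold})


def overlap_check(output: str, reference: str) -> float:
    """Toy stand-in for a real summarization metric: word overlap with a reference."""
    out, ref = set(output.lower().split()), set(reference.lower().split())
    return len(out & ref) / max(len(ref), 1)


if __name__ == "__main__":
    summarizer = CapabilityMonitor(Capability.SUMMARIZATION, overlap_check)
    # Two different clinical tasks exercise the same summarization capability.
    summarizer.evaluate("discharge_summary", "patient stable", "patient stable on discharge")
    summarizer.evaluate("radiology_report", "no findings", "no acute findings in chest film")
    print(summarizer.weakest_tasks())   # both tasks surface under one capability
```

In a task-based setup, the same two low scores would live in separate dashboards for separate workflows; grouping them under a single summarization monitor is what the authors argue makes systemic weaknesses and emergent behaviors easier to spot.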
