Future of learning with large language models

An edited volume from CRC Press surveys theory, tools, classroom practice and research directions for large language models in education, and emphasizes ethical frameworks and domain-specific model development as education enters an era of Artificial Intelligence-enablement.

Future of Learning with Large Language Models: Applications and Research in Education is an edited collection led by Myint Swe Khine, László Bognár, and Ernest Afari, published by CRC Press (copyright 2026). The volume runs 266 pages and includes 15 color and 38 black-and-white illustrations. The book appears under ISBN 9781032934327 and is positioned as a timely guide for practitioners and researchers as educational settings adopt large language models.

The book covers theoretical foundations and applied work on how large language models can enhance learning, with chapters addressing cognitive reinforcement, learning efficiency, personalization, and cross-curricular applications. It also addresses teacher training and support for model integration, the use of models in assessment and evaluation, and methods for measuring impact and affordances. The table of contents is organized into three parts: foundations, frameworks, and ethical considerations; practical tools and applications for educators; and student-centered learning and emerging trends with Artificial Intelligence. Notable chapter topics include responsible and ethical use in higher education, the EPICC framework for prompt engineering in education, improving foundation models for multicultural understanding, engagement dynamics in Artificial Intelligence-augmented classrooms, virtual teaching assistants and GPT as assistants in online learning, knowledge tagging for mathematics, and leveraging Generative Artificial Intelligence alongside open educational resources for learning path design.

The editors highlight challenges as well as responsible development and deployment strategies to ensure that models serve educators well. The book explores potential research directions, such as developing domain-specific models and creating ethical frameworks for large language model use in education. It is described as a practical and visionary resource intended to help teachers, administrators, technologists, and policymakers harness large language models to expand access to quality education, tailor learning experiences, and support the development of future innovators and critical thinkers.

Impact Score: 65

Compression and voice models reshape Artificial Intelligence efficiency

Recent releases focused on infrastructure rather than headline model breakthroughs, with gains in compression and voice systems pointing to lower inference costs and broader deployment. Google and Mistral highlighted two distinct paths for real-time audio, while TurboQuant targeted one of the most expensive bottlenecks in long-context inference.

Judge blocks Pentagon move against Anthropic

A federal judge temporarily blocked the Pentagon from labeling Anthropic a supply chain risk after finding major gaps between the government's public threats, its legal authority, and its courtroom arguments. The dispute has become a test of how far the government can go in punishing an Artificial Intelligence company over a political and contractual conflict.

Anumana wins FDA clearance for pulmonary hypertension ECG Artificial Intelligence tool

Anumana has received FDA 510(k) clearance for an Artificial Intelligence-enabled pulmonary hypertension algorithm designed for use with standard 12-lead electrocardiograms. The company says the software can help clinicians spot early signs of disease within existing workflows and without moving patient data outside the health system environment.
