Future of learning with large language models

An edited volume from CRC Press surveys theory, tools, classroom practice, and research directions for large language models in education, and emphasizes ethical frameworks and domain-specific model development as education enters an era of Artificial Intelligence-enabled learning.

Future of Learning with Large Language Models: Applications and Research in Education is an edited collection led by Myint Swe Khine, László Bognár, and Ernest Afari, published by CRC Press (copyright 2026). The volume runs 266 pages and includes 15 color and 38 black and white illustrations. The book appears under ISBN 9781032934327 and is positioned as a timely guide for practitioners and researchers as educational settings adopt large language models.

The book covers theoretical foundations and applied work on how large language models can enhance learning, with chapters addressing cognitive reinforcement, learning efficiency, personalization, and cross-curricular applications. It also treats teacher training and support for model integration, the use of models in assessment and evaluation, and methods for measuring impact and affordances. The table of contents is organized into three parts: foundations, frameworks, and ethical considerations; practical tools and applications for educators; and student-centered learning and emerging trends with Artificial Intelligence. Notable chapter topics include:

- responsible and ethical use in higher education
- the EPICC framework for prompt engineering in education
- improving foundation models for multi-cultural understanding
- engagement dynamics in Artificial Intelligence-augmented classrooms
- virtual teaching assistants and GPT as assistants in online learning
- knowledge tagging for mathematics
- leveraging Generative Artificial Intelligence alongside open educational resources for learning path design

The editors highlight challenges and responsible development and deployment strategies to ensure models serve educators well. The book explores potential research directions such as developing domain-specific models and creating ethical frameworks for large language model use in education. It is described as a practical and visionary resource intended to help teachers, administrators, technologists, and policymakers harness large language models to expand access to quality education, tailor learning experiences, and support the development of future innovators and critical thinkers.


CUDA Toolkit: features, tutorials and developer resources

The NVIDIA CUDA Toolkit provides a GPU development environment and tools for building, optimizing, and deploying GPU-accelerated applications. CUDA Toolkit 13.0 adds new programming-model and toolchain enhancements and explicit support for the NVIDIA Blackwell architecture.

Qwen 1M Integration Example with vLLM

This example demonstrates how to use the Qwen/Qwen2.5-7B-Instruct-1M model with the vLLM framework for efficient long-context inference in Artificial Intelligence applications.
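A minimal sketch of what such an integration looks like, assuming vLLM is installed (`pip install vllm`) and a GPU with sufficient memory is available. The model name comes from the source; the context length, parallelism, and sampling values are illustrative assumptions, not official recommendations.

```python
# Sketch: long-context inference with Qwen/Qwen2.5-7B-Instruct-1M on vLLM.
# The max_model_len and SamplingParams values below are illustrative only.

def build_engine_args(model: str = "Qwen/Qwen2.5-7B-Instruct-1M",
                      max_model_len: int = 1_010_000) -> dict:
    """Keyword arguments for vllm.LLM, tuned for long-context serving."""
    return {
        "model": model,
        "max_model_len": max_model_len,   # the 1M variant targets ~1M-token contexts
        "tensor_parallel_size": 1,        # raise for multi-GPU serving
    }

def run_inference(prompt: str) -> str:
    """Load the engine and generate one completion (requires a GPU)."""
    from vllm import LLM, SamplingParams  # heavyweight import, deferred on purpose

    llm = LLM(**build_engine_args())
    params = SamplingParams(temperature=0.7, max_tokens=256)
    outputs = llm.generate([prompt], params)
    return outputs[0].outputs[0].text
```

Calling `run_inference(...)` downloads the model weights on first use; for a quick smoke test, a smaller Qwen instruct variant can be substituted via the `model` argument before scaling up to the full 1M-context model.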
