New MAP framework enhances parameter-efficient fine-tuning by decoupling direction and magnitude

Researchers unveil the MAP framework, a method that improves the efficiency of fine-tuning large language models by separating parameter updates into distinct directional and magnitude components.

Researchers have introduced MAP (Matrix Adaptation via Projection), a framework designed to improve the parameter-efficient fine-tuning of large language models by decoupling weight adaptation into distinct directional and magnitude updates. Unlike traditional approaches that modify model weights as a single coupled quantity, MAP normalises the pre-trained weights and then learns directional and scalar adjustments separately during fine-tuning. This geometric strategy enables greater flexibility and precision, resulting in more efficient adaptation with fewer parameter changes.
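The decomposition described above can be illustrated with a minimal sketch. Note this is an assumption-laden reconstruction of the general idea, not the paper's actual method: the function names (`decompose`, `recompose`) and the column-wise normalisation are illustrative choices.

```python
import numpy as np

def decompose(W0):
    """Split a pre-trained weight matrix into unit-norm column
    directions V and per-column magnitudes m (hypothetical sketch)."""
    m = np.linalg.norm(W0, axis=0, keepdims=True)  # magnitudes, shape (1, n)
    V = W0 / m                                     # unit-norm directions
    return V, m

def recompose(V, m, dV=None, dm=None):
    """Rebuild an adapted weight matrix, applying the directional
    update dV and the magnitude update dm independently."""
    Vp = V if dV is None else V + dV
    Vp = Vp / np.linalg.norm(Vp, axis=0, keepdims=True)  # re-normalise direction
    mp = m if dm is None else m + dm
    return mp * Vp

rng = np.random.default_rng(0)
W0 = rng.standard_normal((4, 3))
V, m = decompose(W0)
# With zero updates, the original weights are recovered exactly.
assert np.allclose(recompose(V, m), W0)
```

Keeping the direction unit-normalised means the magnitude term alone controls the scale of each column, which is what allows the two kinds of update to be tuned (or constrained) independently.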

Developed by a team from leading institutions including Shanghai Jiao Tong University, Harvard, Alibaba Group, and the Singapore Institute of Technology, MAP was detailed in the study "MAP: Revisiting Weight Decomposition for Low-Rank Adaptation." The researchers focused their experiments on foundation models such as LLaMA-7B and LLaMA-3-8B, leveraging the General Language Understanding Evaluation (GLUE) benchmark to assess performance. They carefully tuned hyperparameters such as the LoRA rank, learning rates, and batch sizes. Training employed AdamW optimization and a warmup period to ensure stable convergence.
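The warmup period mentioned above is typically a linear ramp of the learning rate over the first training steps. A minimal sketch follows; the base rate and step counts are illustrative assumptions, not the paper's actual hyperparameters:

```python
def lr_at(step, base_lr=2e-4, warmup_steps=100):
    """Linear warmup: ramp the learning rate from near zero up to
    base_lr over warmup_steps, then hold it constant.
    (base_lr and warmup_steps are assumed values for illustration.)"""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

# The rate grows linearly during warmup, then stays at base_lr.
assert lr_at(0) < lr_at(50) < lr_at(99)
assert lr_at(99) == lr_at(500) == 2e-4
```

Ramping the rate this way avoids large, destabilising updates while the optimizer's moment estimates are still poorly calibrated, which is why warmup is standard practice in LoRA-style fine-tuning.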

The MAP approach not only demonstrated improvements in a range of tasks—including question answering, textual entailment, and sentiment analysis—but also reduced the risk of catastrophic forgetting by controlling the magnitude of updates. Its straightforward design allows for easy integration with existing parameter-efficient fine-tuning methods, potentially streamlining the deployment of large language models for domain-specific applications. The research also outlines plans to further explore the mathematical landscape of MAP, examine alternative normalisation methods, and develop intelligent strategies for automatically determining optimal scaling factors as training progresses.

Ultimately, the introduction of MAP signals a step forward for the practical and theoretical development of parameter-efficient fine-tuning strategies, offering the field a scalable and elegant mechanism for adapting large language models to diverse tasks without sacrificing previously acquired knowledge.

Impact Score: 73

Google expands agentic enterprise push

Google used Cloud Next ’26 to position itself as a more integrated enterprise Artificial Intelligence provider, combining models, infrastructure, security, and multicloud data services. The strategy broadens its reach into enterprise software while emphasizing interoperability with rival clouds and platforms.

China still blocking Nvidia H200 chip sales

Nvidia has yet to complete H200 sales into China even after the United States reopened exports. Chinese authorities are reportedly limiting imports as Beijing pushes buyers toward domestic semiconductor suppliers.
