Brain-Inspired Artificial Intelligence Enhances Machine Vision

Researchers have unveiled Lp-Convolution, a breakthrough artificial intelligence technique that enables computers to process images more like the human brain.

Researchers from the Institute for Basic Science, Yonsei University, and the Max Planck Institute have introduced Lp-Convolution, a method that advances machine vision by closely mimicking the human brain's approach to processing visual information. Unlike conventional convolutional neural networks (CNNs), which use fixed, square-shaped filters, Lp-Convolution employs a multivariate p-generalized normal distribution to dynamically shape filters, allowing them to adapt to the orientation and context of the visual data and mirroring the brain's flexible, selective processing mechanisms.
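The core idea can be sketched with a small example. The code below is an illustrative simplification, not the authors' implementation: it builds a 2-D mask from a p-generalized normal profile, `exp(-(|x/σ|^p + |y/σ|^p))`, and applies it to a convolution kernel at a single position. The function names and parameter choices here are assumptions for illustration; with `p = 2` the mask is Gaussian-like, while a large `p` flattens it toward the uniform square filter of a conventional CNN.

```python
import numpy as np

def lp_mask(size: int, p: float, sigma: float = 1.0) -> np.ndarray:
    """Build a 2-D mask from a p-generalized normal profile.

    p = 2 gives a Gaussian-like bell; a large p approaches a
    uniform square (a conventional CNN filter); a small p
    concentrates weight near the center.
    """
    coords = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(coords, coords)
    mask = np.exp(-((np.abs(xx) / sigma) ** p + (np.abs(yy) / sigma) ** p))
    return mask / mask.sum()  # normalize so the mask sums to 1

def lp_convolve_patch(patch: np.ndarray, kernel: np.ndarray, p: float) -> float:
    """Apply the masked kernel to one image patch at a single position."""
    mask = lp_mask(kernel.shape[0], p)
    return float(np.sum(patch * kernel * mask))

# With p = 2 the mask peaks at the center; with p = 16 the
# interior flattens out toward a conventional square filter.
m2 = lp_mask(7, p=2.0)
m16 = lp_mask(7, p=16.0)
```

In the actual method, `p` (and the per-axis scales) would be learnable, letting each filter's effective shape adapt during training rather than being fixed in advance.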

This innovation comes as a response to limitations in both traditional CNNs and newer Vision Transformers. While CNNs have excelled at certain image recognition tasks, their rigidity hinders nuanced pattern detection; Vision Transformers, despite higher accuracy, demand significant computational resources and vast datasets, often making them impractical outside of research environments. By reshaping how CNNs filter information, Lp-Convolution provides a middle ground—delivering strong performance and versatility without excessive computational costs.

Experimental results on widely used datasets such as CIFAR-100 and TinyImageNet demonstrate that Lp-Convolution enhances accuracy for legacy models like AlexNet and cutting-edge architectures including RepLKNet. Notably, the technology boosts resilience to corrupted input data, addressing a common challenge in real-world machine vision applications. Moreover, the team's analysis revealed that when Lp-Convolution's parameters approximate a Gaussian distribution, the artificial intelligence system's internal activity closely aligns with biological neural patterns observed in mice. The researchers foresee broad applications ranging from autonomous vehicles and medical imaging to robotics and real-time reasoning tasks, and they are making the technology and source code publicly available for continued research and deployment. The findings will be formally presented at the 2025 International Conference on Learning Representations (ICLR).
