Three young founders, one bold LLM: how HelpingAI is rewriting India's AI playbook

HelpingAI, founded by three young Indian entrepreneurs, has built Dhanishtha, a token-efficient, emotionally aware LLM that its creators say cuts latency and inference cost, aiming to accelerate AI adoption in India.
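The cost claim rests on simple arithmetic: inference price and decode latency both scale roughly linearly with the number of tokens generated, so a model that answers in fewer tokens is proportionally cheaper and faster. A minimal Python sketch, using made-up prices and throughput rather than HelpingAI's published figures:

```python
# Back-of-the-envelope sketch of why token efficiency matters: inference
# cost and decode latency both scale roughly linearly with the number of
# generated tokens. All numbers below are hypothetical placeholders, not
# HelpingAI's published pricing or throughput.
PRICE_PER_MILLION_OUTPUT_TOKENS = 10.0  # USD, hypothetical
DECODE_TOKENS_PER_SECOND = 50.0         # hypothetical decode throughput

def cost_and_latency(output_tokens: int) -> tuple[float, float]:
    """Rough per-answer cost (USD) and decode time (seconds)."""
    cost = output_tokens / 1_000_000 * PRICE_PER_MILLION_OUTPUT_TOKENS
    latency = output_tokens / DECODE_TOKENS_PER_SECOND
    return cost, latency

print(cost_and_latency(800))  # a verbose answer: (0.008, 16.0)
print(cost_and_latency(300))  # a token-efficient answer: (0.003, 6.0)
```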
Generative AI uncovers undetected bird flu exposure risks in Maryland emergency departments

Researchers used a generative AI large language model to scan emergency department notes and flag patients with potential H5N1 exposures who were never tested. The approach identified a small set of high-risk exposures among thousands of visits and could be deployed for real-time surveillance within electronic health records.
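As a rough illustration of the screening step, here is a minimal Python sketch that asks an LLM to flag exposure-like language in a free-text note. The prompt, model choice, and note format are illustrative assumptions, not the researchers' actual pipeline:

```python
# Hypothetical sketch: screening free-text ED notes for possible H5N1
# exposure mentions with an LLM. The prompt, model name, and sample note
# are illustrative assumptions, not the study's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCREEN_PROMPT = (
    "You are screening emergency department notes for possible H5N1 "
    "(avian influenza) exposure, e.g. contact with sick or dead birds, "
    "poultry, dairy cattle, or raw milk. Answer with exactly one word: "
    "FLAG if the note suggests a potential exposure, PASS otherwise."
)

def flag_note(note_text: str) -> bool:
    """Return True when the model flags a note as a potential exposure."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SCREEN_PROMPT},
            {"role": "user", "content": note_text},
        ],
        temperature=0,  # keep screening output as deterministic as possible
    )
    return response.choices[0].message.content.strip().upper() == "FLAG"

if __name__ == "__main__":
    note = "Pt reports culling sick chickens on family farm; fever, cough."
    print(flag_note(note))  # expected: True for an exposure-like note
```

In practice, flagged notes would feed a human review queue inside the electronic health record rather than trigger action directly.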
SK hynix begins mass production of 321-layer QLC NAND flash

SK hynix has started mass production of a 321-layer 2 Tb QLC NAND flash, the first QLC device with more than 300 layers. The company plans a commercial release in the first half of next year after global customer validation.
Leaked AMD roadmap hints at Zen 6 mobile APUs in 2027 with local AI focus

Leaked partner roadmaps from hardware leaker @momomo_us suggest AMD will deliver only modest mobile updates in 2026 with a Zen 5 refresh, while Zen 6-based mobile APUs with stronger graphics and local AI capabilities are slated for 2027.
Advantech unveils MIC-743 AI inference system with NVIDIA Jetson Thor

Advantech launched the MIC-743, an AI inference system powered by the NVIDIA Jetson Thor module. It is designed to bring server-grade performance to the edge for workloads such as vision language models and large language models.
NVIDIA Blackwell Ultra brings PCIe Gen 6 and 1.5x NVFP4 performance for AI servers

NVIDIA's Blackwell Ultra targets AI servers with PCIe Gen 6 support, a 208-billion-transistor die built on TSMC's 4NP process, and a 1,400 W TDP. The chip promises roughly 1.5x denser NVFP4 compute for higher tokens-per-second inference and improved training throughput.