Liquid AI Releases LFM2-8B-A1B: An On-Device Mixture-of-Experts with 8.3B Total Params and ~1.5B Active Params per Token
How much capability can a sparse 8.3B-parameter MoE with a ~1.5B active path deliver on your phone without blowing latency or memory? Liquid AI has released LFM2-8B-A1B, a small-scale Mixture-of-Experts (MoE) model built for on-device execution under tight memory, latency, and energy budgets. Unlike most MoE work optimized for cloud batch serving, LFM2-8B-A1B…
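
To make the "8.3B total, ~1.5B active per token" distinction concrete, here is a minimal, illustrative sketch of top-k expert routing in PyTorch. This is not Liquid AI's implementation; the expert count, top-k value, and dimensions below are made-up example values chosen only to show why the per-token active parameter count is a small fraction of the total.

```python
# Toy sparse-MoE layer: only top-k experts run per token, so the "active"
# parameter count per token is far below the total parameter count.
# Hypothetical sizes for illustration only -- not LFM2-8B-A1B's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyTopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=32, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.router(x)                         # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts execute for each token; the rest stay idle.
        for slot in range(self.top_k):
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out


moe = ToyTopKMoE()
total = sum(p.numel() for p in moe.parameters())
active = (sum(p.numel() for p in moe.router.parameters())
          + moe.top_k * sum(p.numel() for p in moe.experts[0].parameters()))
print(f"total params: {total:,}  approx. active per token: {active:,}")
```

The same principle scales up: all experts must fit in memory (the 8.3B total), but each token's forward pass touches only the router plus its selected experts (the ~1.5B active path), which is what keeps per-token compute and latency closer to a small model's.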
