Slim-Llama is an LLM ASIC processor that can tackle 3 billion parameters while sipping only 4.69mW - and we'll find out more about this potential AI game changer very soon
Date:
Tue, 17 Dec 2024 18:34:00 +0000
Description:
Slim-Llama is an LLM ASIC processor that can tackle 3 billion parameters while consuming only 4.69mW.
FULL STORY ======================================================================

- Slim-Llama reduces power needs using binary/ternary quantization
- Achieves a 4.59x efficiency boost, consuming 4.69-82.07mW at scale
- Supports 3B-parameter models with 489ms latency
Traditional large language models (LLMs) often suffer from excessive power demands due to frequent external memory access - however, researchers at the Korea Advanced Institute of Science and Technology (KAIST) have now developed Slim-Llama, an ASIC designed to address this issue through clever quantization and data management.
Slim-Llama employs binary/ternary quantization, which reduces the precision of model weights to just 1 or 2 bits, significantly lowering the computational and memory requirements.
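As a rough illustration of what binary/ternary quantization means in software terms, the Python sketch below maps full-precision weights to {-1, 0, +1} with a single per-tensor scale. This is a common textbook scheme shown purely for illustration - KAIST has not published Slim-Llama's exact method, and the threshold value here is arbitrary.

    import numpy as np

    def ternary_quantize(w, threshold=0.05):
        # Map each weight to -1, 0 or +1; keep one float scale so the
        # dequantized tensor approximates the original magnitudes.
        # (Illustrative scheme, not KAIST's published method.)
        mask = np.abs(w) > threshold
        scale = np.abs(w[mask]).mean() if mask.any() else 1.0
        q = np.zeros(w.shape, dtype=np.int8)
        q[w > threshold] = 1
        q[w < -threshold] = -1
        return q, scale

    w = np.random.randn(4, 4).astype(np.float32) * 0.1
    q, s = ternary_quantize(w)
    w_hat = q.astype(np.float32) * s  # 2-bit weights plus one scale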
To further improve efficiency, it integrates a Sparsity-aware Look-up Table, improving sparse data handling and reducing unnecessary computations. The design also incorporates an output reuse scheme and index vector reordering, minimizing redundant operations and improving data flow efficiency.
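To see why ternary weights pair well with a look-up table, note that a small group of 1- or 2-bit weights can only take a handful of patterns, so its dot product with the current activations can be precomputed once and fetched by pattern, with all-zero groups skipped outright. The toy model below illustrates that idea in pure Python; it is a software analogy for the concept, not a description of Slim-Llama's actual datapath, and the group size of 4 is an arbitrary choice.

    from itertools import product

    def build_luts(x, group=4):
        # One table per activation group: ternary pattern -> precomputed
        # dot product of that pattern with the group's activations.
        luts = []
        for g in range(0, len(x), group):
            chunk = x[g:g + group]
            table = {p: sum(w * a for w, a in zip(p, chunk))
                     for p in product((-1, 0, 1), repeat=len(chunk))}
            luts.append(table)
        return luts

    def lut_matvec(q_rows, x, group=4):
        luts = build_luts(x, group)          # tables shared by every row
        out = []
        for row in q_rows:
            acc = 0.0
            for i, g in enumerate(range(0, len(row), group)):
                key = tuple(row[g:g + group])
                if not any(key):             # sparsity-aware: skip zeros
                    continue
                acc += luts[i][key]          # table fetch, no multiplies
            out.append(acc)
        return out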
Reduced dependency on external memory

According to the team, the technology demonstrates a 4.59x improvement in benchmark energy efficiency compared to previous state-of-the-art solutions.
Slim-Llama achieves system power consumption as low as 4.69mW at 25MHz and scales to 82.07mW at 200MHz, maintaining impressive energy efficiency even at higher frequencies. It is capable of delivering peak performance of up to
4.92 TOPS at 1.31 TOPS/W, further showcasing its efficiency.
The chip features a total die area of 20.25mm², utilizing Samsung's 28nm CMOS technology. With 500KB of on-chip SRAM, Slim-Llama reduces dependency on external memory, significantly cutting energy costs associated with data movement. The system supports external bandwidth of 1.6GB/s at 200MHz, promising smooth data handling.
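(For context, the quoted bandwidth works out to 1.6GB/s / 200MHz = 8 bytes of external data per clock cycle - a back-of-the-envelope figure derived here, not one quoted by KAIST.)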
Slim-Llama supports models like Llama 1bit and Llama 1.5bit, with up to 3 billion parameters, and KAIST says it delivers benchmark performance that meets the demands of modern AI applications. With a latency of 489ms for the Llama 1bit model, Slim-Llama demonstrates both efficiency and performance, making it the first ASIC to run billion-parameter models with such low power consumption.
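As a rough, unofficial estimate: if the 489ms latency happened to apply at the 4.69mW operating point (the article does not pair those two numbers), a single inference would consume about 4.69mW x 0.489s ≈ 2.3mJ.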
Although it's early days, this breakthrough in energy-efficient computing could pave the way for more sustainable and accessible AI hardware solutions, catering to the growing demand for efficient LLM deployment. The KAIST team is set to reveal more about Slim-Llama at the 2025 IEEE International Solid-State Circuits Conference in San Francisco on Wednesday, February 19.
======================================================================
Link to news story:
https://www.techradar.com/pro/slim-llama-is-an-llm-asic-processor-that-can-tackle-3-bllion-parameters-while-sipping-only-4-69mw-and-we-shall-find-out-more-about-this-potential-ai-game-changer-in-february-2025
--- Mystic BBS v1.12 A47 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)