PowerInfer-2: Unlocking High-Speed Large Language Model Inference on Smartphones

In the rapidly evolving landscape of artificial intelligence, Large Language Models (LLMs) have become indispensable tools, offering unparalleled capabilities in understanding and generating human-like text. Traditionally, these models have been deployed in data centers equipped with powerful GPUs, but there's a growing trend to bring these capabilities to more ubiquitous devices like smartphones. This shift aims to leverage rich personal data while maintaining privacy by keeping computations local. However, deploying LLMs on smartphones presents significant challenges due to their limited processing power and memory. Enter PowerInfer-2, a groundbreaking framework from the Institute of Parallel and Distributed Systems (IPADS) at Shanghai Jiao Tong University, designed to tackle these challenges head-on.

Introduction to PowerInfer-2

PowerInfer-2 is an innovative framework specifically engineered for high-speed inference of LLMs on smartphones, even for models whose sizes exceed the device's memory capacity. The key to PowerInfer-2's success lies in its ability to utilize the heterogeneous computation, memory, and I/O resources available in modern smartphones. By decomposing traditional matrix computations into fine-grained neuron cluster computations, PowerInfer-2 significantly enhances inference speed and efficiency.
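
To make the idea of a "neuron cluster" concrete, here is a minimal NumPy sketch (the function name, cluster size, and shapes are illustrative assumptions, not PowerInfer-2's actual API) that splits one FFN matrix-vector product into independently schedulable clusters of neuron rows:

```python
import numpy as np

def ffn_by_neuron_clusters(x, W, cluster_size=32):
    """Compute W @ x by splitting the rows of W (one row per neuron)
    into small clusters, so each cluster becomes an independently
    schedulable unit of work (illustrative only)."""
    n_neurons = W.shape[0]
    out = np.zeros(n_neurons, dtype=x.dtype)
    for start in range(0, n_neurons, cluster_size):
        end = min(start + cluster_size, n_neurons)
        out[start:end] = W[start:end] @ x   # one neuron cluster at a time
    return out

# The clustered computation matches the monolithic matrix-vector product.
x = np.random.randn(4096).astype(np.float32)
W = np.random.randn(11008, 4096).astype(np.float32)
assert np.allclose(ffn_by_neuron_clusters(x, W), W @ x, atol=1e-2)
```

Because each cluster is an independent unit of work, it can be assigned to a particular core, kept in or evicted from memory, or skipped entirely when its neurons are not expected to activate.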

Key Features of PowerInfer-2

  1. Polymorphic Neuron Engine: Adapts computational strategies for various stages of LLM inference.
  2. Segmented Neuron Caching: Reduces I/O overhead and hides it behind computation.
  3. Fine-Grained Neuron-Cluster-Level Pipelining: Reduces computational delays caused by I/O operations.
  4. Support for Large Models: Capable of running models with up to 47 billion parameters.

Technical Insights

Heterogeneous Computation Utilization

PowerInfer-2 leverages the heterogeneous hardware present in smartphones, such as asymmetric big.LITTLE CPU cores, GPUs, and NPUs. This approach allows the framework to dynamically adapt to the strengths of each component during the different stages of LLM inference.

Prefill Stage

During the prefill stage, which processes all tokens in the input sequence concurrently, PowerInfer-2 employs the NPU to handle large matrix computations. This stage benefits from the NPU's efficiency in processing dense computations, significantly speeding up the generation of the first token.
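
The shape of the prefill workload is what makes the NPU a good fit. The rough sketch below (illustrative layer dimensions only; attention and quantization are omitted) contrasts prefill's dense matrix-matrix product with the matrix-vector product of per-token decoding:

```python
import numpy as np

hidden_dim, ffn_dim, prompt_len = 4096, 11008, 128   # illustrative sizes
W_up = np.random.randn(ffn_dim, hidden_dim).astype(np.float32)

# Prefill: all prompt tokens are processed at once, so one layer's work is
# a dense matrix-matrix product -- regular, compute-heavy, NPU-friendly.
prompt_hidden = np.random.randn(prompt_len, hidden_dim).astype(np.float32)
prefill_out = prompt_hidden @ W_up.T            # shape (prompt_len, ffn_dim)

# Decoding (for contrast): one new token per step, so the same layer
# collapses to a matrix-vector product with little arithmetic per weight.
token_hidden = np.random.randn(hidden_dim).astype(np.float32)
decode_out = W_up @ token_hidden                # shape (ffn_dim,)
```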

Decoding Stage

In the decoding stage, where tokens are generated sequentially, PowerInfer-2 utilizes small neuron clusters and CPU cores to handle the sparse computations. This method leverages the flexibility of CPU cores, which are well-suited for the lighter computational tasks associated with sparse activations.
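
The sketch below illustrates the general idea of predictor-guided sparse computation that PowerInfer-2 builds on: only the FFN neurons expected to activate for the current token are computed. The predictor, the ReLU-style activation, and the 10% activation ratio here are placeholders, not the framework's actual components:

```python
import numpy as np

def sparse_ffn_decode(x, W_up, W_down, predicted_active):
    """Decode-time FFN sketch: touch only the neuron rows that a
    (hypothetical) predictor flagged as active for this token."""
    active = np.flatnonzero(predicted_active)
    hidden = np.maximum(W_up[active] @ x, 0.0)   # ReLU-style activation
    return W_down[:, active] @ hidden            # project back to hidden_dim

hidden_dim, ffn_dim = 4096, 11008
x = np.random.randn(hidden_dim).astype(np.float32)
W_up = np.random.randn(ffn_dim, hidden_dim).astype(np.float32)
W_down = np.random.randn(hidden_dim, ffn_dim).astype(np.float32)

# Pretend the predictor expects ~10% of neurons to fire for this token.
mask = np.random.rand(ffn_dim) < 0.10
print(sparse_ffn_decode(x, W_up, W_down, mask).shape)   # (4096,)
```

Since inactive neurons are never touched, their weights need not even be resident in memory, which is what makes the neuron-granularity caching described next effective.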

Neuron Caching and Pipelining

PowerInfer-2 introduces a segmented cache that operates at the neuron granularity level. This cache is designed to enhance the cache hit rate and reduce the impact of I/O overhead on inference performance. By overlapping I/O operations with neuron cluster computations, the framework minimizes waiting times and maximizes throughput.
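
A minimal sketch of the two ideas together, assuming a toy LRU policy and hypothetical load_from_flash and compute callbacks (PowerInfer-2's actual cache and scheduler are considerably more sophisticated): clusters already resident in the cache are computed immediately while a background thread streams the missing clusters from flash, so reads overlap with computation:

```python
import threading, queue
from collections import OrderedDict

class NeuronClusterCache:
    """Toy LRU cache keyed by neuron-cluster id (policy is illustrative)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, cluster_id):
        if cluster_id not in self._data:
            return None
        self._data.move_to_end(cluster_id)
        return self._data[cluster_id]

    def put(self, cluster_id, weights):
        self._data[cluster_id] = weights
        self._data.move_to_end(cluster_id)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)      # evict least recently used

def run_layer(active_clusters, cache, load_from_flash, compute):
    """Compute cached clusters right away while a background thread streams
    the missing ones from flash, then process them as they arrive."""
    ready = queue.Queue()
    hits, misses = [], []
    for cid in active_clusters:
        weights = cache.get(cid)
        (hits if weights is not None else misses).append((cid, weights))

    def io_worker():
        for cid, _ in misses:
            weights = load_from_flash(cid)      # slow flash read
            cache.put(cid, weights)
            ready.put((cid, weights))

    threading.Thread(target=io_worker, daemon=True).start()

    for cid, weights in hits:                   # overlap: compute during I/O
        compute(cid, weights)
    for _ in range(len(misses)):                # drain streamed clusters
        compute(*ready.get())
```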

Offline Planner

Before running a new model on a smartphone, PowerInfer-2 executes an offline planning phase. This phase analyzes the model and hardware specifications to generate an execution plan that optimally configures computation, memory, and I/O resources. This plan ensures that inference is performed efficiently, even for models that do not fit entirely in memory.
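
Conceptually, the plan answers questions such as how much of the model can stay resident in DRAM and how much must be streamed from flash. The heuristic, field names, and byte budgets below are purely illustrative assumptions, not the planner's actual output format:

```python
from dataclasses import dataclass

@dataclass
class ExecutionPlan:
    resident_bytes: int     # neuron weights pinned in DRAM
    streamed_bytes: int     # weights fetched from flash on demand
    compute_threads: int    # CPU cores reserved for decoding

def plan_for_device(model_bytes, dram_bytes, os_reserved_bytes, big_cores):
    """Keep as much of the model resident as the memory budget allows and
    stream the rest from flash (an intentionally naive heuristic)."""
    budget = max(dram_bytes - os_reserved_bytes, 0)
    resident = min(model_bytes, budget)
    return ExecutionPlan(resident_bytes=resident,
                         streamed_bytes=model_bytes - resident,
                         compute_threads=big_cores)

# Illustrative numbers: a ~25 GB quantized model on a 16 GB phone.
print(plan_for_device(model_bytes=25 * 2**30,
                      dram_bytes=16 * 2**30,
                      os_reserved_bytes=6 * 2**30,
                      big_cores=4))
```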

Implementation and Evaluation

PowerInfer-2 has been implemented with an additional 12,000 lines of code on top of the original PowerInfer framework. The researchers deployed it on two smartphones, the OnePlus 12 and the OnePlus Ace 2, whose Qualcomm SoCs provide heterogeneous CPU, GPU, and NPU ("XPU") resources alongside 24 GB and 16 GB of DRAM, respectively.

Supported Models

PowerInfer-2 supports a diverse array of LLMs, including:

  • Llama-2 (7B, 13B)
  • TurboSparse-Mistral (7B)
  • TurboSparse-Mixtral (47B)

Performance

The evaluation of PowerInfer-2 shows impressive results:

  • Speed: Up to a 29.2× speedup over state-of-the-art frameworks.
  • Memory Efficiency: Approximately a 40% reduction in memory usage for smaller models that fit entirely in memory, while maintaining comparable inference speeds.

Notably, PowerInfer-2 is the first system to support the TurboSparse-Mixtral-47B model on mobile platforms, achieving a generation speed of 11.68 tokens per second.

Real-World Applications

To demonstrate its practical utility, PowerInfer-2 was tested on various real-world tasks such as multi-turn dialogue, code generation, math problem solving, and role play. The framework consistently delivered high performance across these diverse tasks, showcasing its robustness and versatility.

Conclusion

PowerInfer-2 represents a significant advancement in the deployment of LLMs on smartphones. By harnessing the heterogeneous resources of modern smartphones and optimizing computation, memory, and I/O operations, PowerInfer-2 enables high-speed, efficient inference for even the largest models. This innovation opens up new possibilities for privacy-preserving, intelligent personal assistants and other applications that require powerful language understanding and generation capabilities on mobile devices.

For more details and a demonstration video, visit the PowerInfer-2 project site.
