Tag: model-optimization
All articles with the tag "model-optimization".
Speculative Sampling for Faster LLM Inference
Posted on: June 20, 2024 at 12:00 AM
Deep dive into speculative sampling techniques for accelerating LLM inference through draft model prediction and rejection sampling.
Low-Bit MoE Quantization for Large Language Models
Posted on: July 25, 2024 at 12:00 AM
Comprehensive guide to quantizing large MoE models such as DeepSeek-V3/R1, covering techniques for efficient memory usage and inference optimization.
W4A8KV4 Quantization Summary and Best Practices
Posted on: August 30, 2024 at 12:00 AM
Comprehensive summary of W4A8KV4 quantization techniques, covering KV4 and W4A8 optimization methods with practical recommendations.