====== Pretraining Scaling ======

**Pretraining scaling** refers to the systematic improvement of large language model (LLM) performance through coordinated increases in model size, training data volume, and computational resources. This approach has become a fundamental paradigm in deep learning, enabling predictable performance gains across diverse model architectures and domains. The field is characterized by well-established **[[scaling_laws|scaling laws]]**—mathematical relationships that describe how model performance improves as a function of training inputs—which allow practitioners to forecast performance outcomes and allocate resources strategically.

===== Scaling Laws and Power Law Relationships =====

The foundation of pretraining scaling rests on the discovery that LLM performance follows predictable [[power_law|power law]] relationships across multiple dimensions (([[https://arxiv.org/abs/2001.08361|Kaplan et al. - Scaling Laws for Neural Language Models (2020)]])). These relationships indicate that loss (a measure of prediction error) decreases as an inverse power function of model size, dataset size, and training compute. Specifically, the loss can be approximated as:

**Loss ∝ N^(-α)** or **Loss ∝ D^(-β)**

where N is the number of model parameters, D is the number of training tokens, and α and β are small positive exponents (commonly in the range 0.05–0.1). This mathematical predictability enables **compute forecasting**—the ability to estimate the computational resources required to reach a specific performance target before training begins (([[https://arxiv.org/abs/2010.14701|Henighan et al. - Scaling Laws for Autoregressive Generative Modeling (2020)]])). The power law relationships hold remarkably consistently across orders of magnitude, from models with millions of parameters to models exceeding hundreds of billions of parameters.
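The power-law form above can be sketched numerically. The following is a minimal illustration, not a definitive implementation: the constants are of the rough magnitude reported by Kaplan et al. (α_N ≈ 0.076, α_D ≈ 0.095), but real values must be fit to measured training runs, so treat them as illustrative placeholders.

```python
# Illustrative sketch of the power-law loss relationships above.
# Constants are placeholders of the rough magnitude reported by
# Kaplan et al.; real values must be fit to actual training runs.

def loss_from_params(n_params: float, alpha_n: float = 0.076,
                     n_c: float = 8.8e13) -> float:
    """Loss as an inverse power of model size: L(N) = (N_c / N)^alpha_N."""
    return (n_c / n_params) ** alpha_n

def loss_from_data(n_tokens: float, alpha_d: float = 0.095,
                   d_c: float = 5.4e13) -> float:
    """Loss as an inverse power of dataset size: L(D) = (D_c / D)^alpha_D."""
    return (d_c / n_tokens) ** alpha_d

# Each 10x increase in N multiplies loss by 10^(-alpha_N): a small but
# predictable improvement that compounds over orders of magnitude.
print(loss_from_params(1e9))    # 1B-parameter model
print(loss_from_params(1e11))   # 100B-parameter model: lower loss
```

Because the exponents are small, each order of magnitude of scale buys only a modest loss reduction, which is why frontier training runs span many orders of magnitude of compute.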
===== Compute-Optimal Allocation =====

A critical insight from scaling law research is that optimal performance at a fixed compute budget requires careful allocation across model size and training data. The **[[chinchilla_paper|Chinchilla]] scaling** findings demonstrated that many existing models were undertrained relative to their size—they would achieve better performance per unit of compute with fewer parameters and proportionally more training data (([[https://arxiv.org/abs/2203.15556|Hoffmann et al. - Training Compute-Optimal Large Language Models (2022)]])).

Compute-optimal training suggests that for a given computational budget, practitioners should balance:

  * **Model parameters (N)**: Under the Chinchilla findings, scaled roughly with the square root of the compute budget rather than as fast as compute allows
  * **Training tokens (D)**: Scaled in proportion to N (roughly 20 tokens per parameter) rather than held fixed as models grow
  * **Batch size and learning rate**: Adjusted to maximize hardware utilization while maintaining convergence properties

This allocation strategy fundamentally changed how organizations approach model development, shifting the emphasis from maximizing model size toward more efficient resource usage.

===== Applications and Performance Prediction =====

Pretraining scaling principles enable several practical applications in model development. **Performance prediction** leverages scaling laws to estimate downstream task accuracy before expensive evaluation cycles. **Resource planning** uses these relationships to determine the computational requirements for specific capability targets—critical for budget allocation in large-scale projects. Organizations employ scaling laws for **architecture selection**, determining which model sizes offer the best performance-to-latency tradeoffs for deployment scenarios. This predictability also enables **research prioritization**, allowing teams to estimate whether an algorithmic improvement merits investigation before full implementation (([[https://arxiv.org/abs/2305.16934|Hoffmann et al.
- Scaling Laws and Emergent Abilities in Large Language Models (2023)]])). Recent work has extended scaling law understanding to **multimodal models**, instruction-tuned variants, and models incorporating retrieval-augmented generation (RAG), demonstrating that the core principles generalize across architectures and training paradigms.

===== Limitations and Open Questions =====

Despite their predictive power, scaling laws exhibit several important limitations.

**Emergence and phase transitions** create discontinuities where performance on specific capabilities improves suddenly at particular scale thresholds, violating smooth power law predictions (([[https://arxiv.org/abs/2206.07682|Wei et al. - Emergent Abilities of Large Language Models (2022)]])). The mechanisms underlying these transitions remain incompletely understood.

**Data quality factors** significantly affect scaling relationships. The laws assume homogeneous, high-quality data, but real datasets contain duplicates, noise, and varying relevance; the effective data scaling exponent changes substantially with curation quality, making theoretical predictions less reliable for lower-quality corpora.

**Extrapolation uncertainty** grows substantially beyond the observed parameter range. While the laws fit historical data well, confidence intervals widen dramatically for extrapolations of 100x or more in scale, particularly regarding capability emergence on novel downstream tasks.

Additionally, scaling laws primarily predict loss on the pretraining objective rather than performance on diverse downstream tasks, where transfer characteristics and task-specific optimization introduce variability not captured by simple power law models.
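The performance-prediction workflow and its extrapolation risk can both be made concrete with a small sketch. The example below fits a power law L = A·N^(-α) to hypothetical small-run measurements via least squares in log-log space, then extrapolates one order of magnitude past the largest observed run; all numbers are synthetic and purely illustrative.

```python
import math

# Hypothetical (model size, final loss) pairs from small training runs.
# These numbers are synthetic, chosen to lie near a clean power law.
runs = [(1e7, 4.10), (1e8, 3.45), (1e9, 2.91), (1e10, 2.45)]

# log L = log A - alpha * log N, so fit a line by ordinary least squares.
xs = [math.log(n) for n, _ in runs]
ys = [math.log(l) for _, l in runs]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
alpha = -slope                # fitted scaling exponent
log_a = mean_y - slope * mean_x

def predict_loss(n_params: float) -> float:
    """Extrapolate the fitted power law to a new model size."""
    return math.exp(log_a + slope * math.log(n_params))

# Extrapolating 10x beyond the largest observed run; as the limitations
# above note, confidence degrades the further the extrapolation reaches,
# and emergent capabilities are invisible to this kind of fit.
print(f"fitted alpha ~ {alpha:.3f}")
print(f"predicted loss at 1e11 params ~ {predict_loss(1e11):.2f}")
```

In practice such fits use many runs across controlled compute budgets and report uncertainty bands; a point estimate from four synthetic runs, as here, only illustrates the mechanics.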
===== See Also =====

  * [[prefill_vs_decode_scaling|Prefill vs Decode Capacity Scaling]]
  * [[reinforcement_learning_scaling|Reinforcement Learning Scaling for LLMs]]
  * [[chinchilla_paper|Chinchilla]]
  * [[pretraining_scaling_vs_rl_scaling|Pretraining Scaling Laws vs RL Scaling Laws]]
  * [[how_to_fine_tune_an_llm|How to Fine-Tune an LLM]]

===== References =====