Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19890
| Title: | On Learning What to Learn: Adaptive Data Mixtures for Robust Multi-Target LLM Pretraining |
| Authors: | Glarou, Maria Ios; Μαραγκός Πέτρος |
| Keywords: | Large Language Models; Minimax Optimization; Multi-Target Learning; Domain Reweighting; Group Distributionally Robust Optimization; Data Mixture Optimization |
| Issue Date: | 16-May-2025 |
| Abstract: | Data selection during LLM pretraining is a major driver of downstream performance. Steering the training mixture with signals from target tasks can orient learning toward representations that better serve those objectives. However, most existing approaches tune mixtures for a single target, yielding narrow specialization and weak robustness across tasks. This thesis introduces GRAPE (Group-Robust Multi-target Adaptive PrEtraining), a multi-source, multi-target reweighting framework that discovers effective training mixtures for multiple targets simultaneously. GRAPE maintains two sets of weights: task weights, encoding the relative priority of each target task, and source weights, specifying sampling proportions over source domains. Derived from a minimax formulation, the method couples two interleaved reweighting updates. First, a task-reweighting step—using group distributionally robust optimization (GDRO)—reallocates task weights toward targets showing the least progress, correspondingly easing emphasis on better-served tasks. Second, a source-reweighting step updates the sampling weights over source domains, shifting probability toward domains whose updates most effectively reduce loss on the targets in focus. Together, these updates instantiate the minimax design and dynamically steer data selection, closing performance gaps and yielding balanced, sample-efficient improvements across targets (a minimal illustrative sketch of these interleaved updates follows this record). Empirically, on ClimbLab, GRAPE outperforms strong baselines with higher average accuracy, superior data efficiency, and more balanced task-wise gains across six targets, while generalizing better to unseen reasoning benchmarks. On SlimPajama, across twelve multi-task suites, we observe consistent improvements over competing methods. In multilingual experiments on Wiki40B, GRAPE leverages six high-resource languages to improve low-resource target suites of sizes 4 and 8, achieving faster convergence and lower final perplexity. |
| URI: | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19890 |
| Appears in Collections: | Διπλωματικές Εργασίες - Theses |
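The abstract above describes two interleaved reweighting updates: a GDRO-style task-reweighting step that prioritizes the least-served targets, and a source-reweighting step that shifts sampling probability toward domains that most reduce the task-weighted target loss. The sketch below shows one way such interleaved multiplicative-weights updates could be wired together; the function names, learning rates, and gradient-alignment reward are illustrative assumptions, not the exact GRAPE formulation from the thesis.

```python
# Minimal sketch (assumed, not the thesis's exact algorithm): interleaved
# exponentiated-gradient updates for task weights and source-domain
# sampling weights, mirroring the minimax structure in the abstract.
import numpy as np

def update_task_weights(alpha, target_losses, eta_task=0.1):
    """GDRO-style step: shift task weights toward targets with the
    highest current loss (i.e., the least progress)."""
    alpha = alpha * np.exp(eta_task * target_losses)
    return alpha / alpha.sum()

def update_source_weights(w, domain_rewards, eta_src=0.1):
    """Source-reweighting step: shift sampling probability toward
    domains whose updates most reduce the task-weighted target loss."""
    w = w * np.exp(eta_src * domain_rewards)
    return w / w.sum()

def domain_rewards_from_grads(domain_grads, target_grads, alpha):
    """One possible reward (an assumption): alignment between each
    domain's gradient and the task-weighted target gradient."""
    weighted_target_grad = target_grads.T @ alpha      # shape (p,)
    return domain_grads @ weighted_target_grad         # shape (n_domains,)

# Toy usage with random stand-ins for losses and gradients.
rng = np.random.default_rng(0)
n_tasks, n_domains, p = 6, 20, 128
alpha = np.full(n_tasks, 1.0 / n_tasks)      # task weights (simplex)
w = np.full(n_domains, 1.0 / n_domains)      # source sampling weights (simplex)

for step in range(100):
    target_losses = rng.uniform(0.5, 2.0, size=n_tasks)  # per-target eval losses
    domain_grads = rng.normal(size=(n_domains, p))       # per-domain gradients
    target_grads = rng.normal(size=(n_tasks, p))         # per-target gradients

    alpha = update_task_weights(alpha, target_losses)    # inner max over tasks
    rewards = domain_rewards_from_grads(domain_grads, target_grads, alpha)
    w = update_source_weights(w, rewards)                # outer min via data selection

print("task weights:", np.round(alpha, 3))
print("top source domains:", np.argsort(w)[::-1][:5])
```

Both updates are multiplicative-weights steps, which keep the weight vectors on the probability simplex and reflect the minimax coupling described in the abstract: task weights chase the worst-performing targets, while source weights follow the domains most useful for those targets.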
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| thesis.pdf | | 17.62 MB | Adobe PDF |