Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19890
Title: On Learning What to Learn: Adaptive Data Mixtures for Robust Multi-Target LLM Pretraining
Authors: Glarou, Maria Ios
Μαραγκός Πέτρος
Keywords: Large Language Models
Minimax Optimization
Multi-Target Learning
Domain Reweighting
Group Distributionally Robust Optimization
Data Mixture Optimization
Issue Date: 16-May-2025
Abstract: Data selection during LLM pretraining is a major driver of downstream performance. Steering the training mixture with signals from target tasks can orient learning toward representations that better serve those objectives. However, most existing approaches tune mixtures for a single target, yielding narrow specialization and weak robustness across tasks. This thesis introduces GRAPE (Group-Robust Multi-target Adaptive PrEtraining), a multi-source, multi-target reweighting framework that discovers effective training mixtures for multiple targets simultaneously. GRAPE maintains two sets of weights: task weights, encoding the relative priority of each target task, and source weights, specifying sampling proportions over source domains. Derived from a minimax formulation, the method couples two interleaved reweighting updates. First, a task-reweighting step—using group distributionally robust optimization (GDRO)—reallocates task weights toward targets showing the least progress, correspondingly easing emphasis on better-served tasks. Second, a source-reweighting step updates the sampling weights over source domains, shifting probability toward domains whose updates most effectively reduce loss on the targets in focus. Together, these updates instantiate the minimax design and dynamically steer data selection, closing performance gaps and yielding balanced, sample-efficient improvements across targets. Empirically, on ClimbLab, GRAPE outperforms strong baselines with higher average accuracy, superior data efficiency, and more balanced task-wise gains across six targets, while generalizing better to unseen reasoning benchmarks. On SlimPajama, across twelve multi-task suites, we observe consistent improvements over competing methods. In multilingual experiments on Wiki40B, GRAPE leverages six high-resource languages to improve low-resource suites of sizes 4 and 8, achieving faster convergence and lower final perplexity.
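To make the interleaved procedure concrete, the following is a minimal illustrative sketch in Python, not taken from the thesis: it assumes per-target losses and per-domain utility estimates are observable each round, and it uses exponentiated-gradient (multiplicative-weights) steps on the probability simplex for both the GDRO task-reweighting update and the source-reweighting update; all names, learning rates, and the utility estimator are placeholders.

# Illustrative sketch only: GRAPE-style interleaved reweighting under assumed
# exponentiated-gradient updates; not the thesis's actual implementation.
import numpy as np

def normalize(v):
    # Renormalize a positive vector so it sums to 1 (stays on the simplex).
    return v / v.sum()

def grape_step(task_w, source_w, target_losses, domain_target_gain,
               eta_task=0.1, eta_source=0.1):
    # task_w: (T,) priorities over target tasks; source_w: (S,) sampling
    # proportions over source domains; target_losses: (T,) current losses;
    # domain_target_gain: (S, T) assumed estimate of how much training on each
    # source domain reduces loss on each target.
    # 1) GDRO-style task reweighting: shift weight toward the targets with the
    #    least progress (highest loss).
    task_w = normalize(task_w * np.exp(eta_task * target_losses))
    # 2) Source reweighting: shift sampling mass toward domains whose updates
    #    most reduce the task-weighted target loss.
    utility = domain_target_gain @ task_w  # (S,)
    source_w = normalize(source_w * np.exp(eta_source * utility))
    return task_w, source_w

# Toy usage with 3 targets, 4 source domains, and random stand-in statistics.
rng = np.random.default_rng(0)
task_w, source_w = np.full(3, 1/3), np.full(4, 1/4)
for _ in range(5):
    losses = rng.uniform(0.5, 2.0, size=3)
    gains = rng.uniform(0.0, 1.0, size=(4, 3))
    task_w, source_w = grape_step(task_w, source_w, losses, gains)
print(task_w, source_w)

The alternation mirrors the minimax structure described in the abstract: the task step plays the "max" role of a GDRO adversary emphasizing underserved targets, while the source step plays the "min" role by reallocating sampling probability toward whichever domains currently help those targets most.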
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19890
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in this item:
File: thesis.pdf (17.62 MB, Adobe PDF)

