Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19935
Full metadata record
DC Field | Value | Language
dc.contributor.author | Τσίπη, Αργυρώ | -
dc.date.accessioned | 2025-11-18T17:17:16Z | -
dc.date.available | 2025-11-18T17:17:16Z | -
dc.date.issued | 2025-11-18 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19935 | -
dc.description.abstract | Parliamentary speech generation presents challenges for large language models beyond standard text generation: speeches must exhibit not only linguistic quality but also political authenticity and ideological consistency. Current language models lack specialized training for parliamentary contexts, and existing evaluation methods focus on standard NLP metrics rather than political authenticity. To address this, we present a benchmark for parliamentary speech generation. We constructed and preprocessed a dataset of UK Parliament speeches from the ParlaMint-GB corpus to enable systematic model training. We introduce a comprehensive evaluation framework that combines computational metrics with LLM-as-a-judge assessments to measure generation quality along three dimensions: linguistic quality, semantic coherence, and political authenticity. For linguistic quality and semantic coherence, we employed metrics including Perplexity, Self-BLEU, BERTScore, GRUEN, MoverScore, and Distinct-n. We propose two novel embedding-based metrics, Political Spectrum Alignment and Party Alignment, to quantify ideological positioning. In addition, we used the LLM-as-a-judge approach to evaluate six dimensions: conciseness, coherence, authenticity, political appropriateness, overall quality, and relevance. We fine-tuned five large language models (Mistral, Gemma, Qwen, Llama, Yi) with the Unsloth framework for parameter-efficient training, generated approximately 28,000 speeches using the same contexts for each model, and evaluated them with our framework, comparing baseline and fine-tuned models. For statistical analysis of the results, we applied t-tests and ANOVA. Results show that fine-tuning yields statistically significant improvements on the majority of metrics, and that our novel metrics demonstrate strong discriminative power along political dimensions. | en_US
dc.language | en | en_US
dc.subject | Parliamentary Speech Generation | en_US
dc.subject | LLM Evaluation | en_US
dc.subject | Political Authenticity | en_US
dc.subject | Benchmark Evaluation | en_US
dc.subject | Natural Language Generation | en_US
dc.subject | Natural Language Processing | en_US
dc.subject | Ideological Alignment | en_US
dc.subject | Embedding-based Metrics | en_US
dc.subject | Parameter-Efficient Training | en_US
dc.title | Evaluating the Synthetic Generation of Political Speech Using Large Language Models | en_US
dc.description.pages | 111 | en_US
dc.contributor.supervisor | Τσανάκας Παναγιώτης | en_US
dc.department | Τομέας Τεχνολογίας Πληροφορικής και Υπολογιστών (Division of Computer Science) | en_US
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in This Item:
File | Description | Size | Format
thesis_argyro-19.pdf | | 10.76 MB | Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.