Please use this identifier to cite or link to this item:
http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19935
| Title: | Evaluating the Synthetic Generation of Political Speech Using Large Language Models |
| Authors: | Τσίπη, Αργυρώ; Τσανάκας, Παναγιώτης |
| Keywords: | Parliamentary Speech Generation; LLM Evaluation; Political Authenticity; Benchmark Evaluation; Natural Language Generation; Natural Language Processing; Ideological Alignment; Embedding-based Metrics; Parameter-Efficient Training |
| Issue Date: | 18-Nov-2025 |
| Abstract: | Parliamentary speech generation presents specific challenges for large language models beyond standard text generation tasks. Unlike general text generation, parliamentary speeches require not only linguistic quality but also political authenticity and ideological consistency. Current language models lack specialized training for parliamentary contexts, and existing evaluation methods focus on standard NLP metrics rather than political authenticity. To address this, we present a benchmark for parliamentary speech generation. We constructed and preprocessed a dataset of speeches from the UK Parliament (ParlaMint-GB) to enable systematic model training. We introduce a comprehensive evaluation framework combining computational metrics with LLM-as-a-judge assessments to measure generation quality across three dimensions: linguistic quality, semantic coherence, and political authenticity. For linguistic quality and semantic coherence, we employed metrics including Perplexity, Self-BLEU, BERTScore, GRUEN Score, MoverScore, and Distinct-n. We propose two novel embedding-based metrics, Political Spectrum Alignment and Party Alignment, to quantify ideological positioning. Additionally, we used an LLM-as-a-judge approach to evaluate six dimensions: conciseness, coherence, authenticity, political appropriateness, overall quality, and relevance. We fine-tuned five large language models (Mistral, Gemma, Qwen, Llama, Yi) using the Unsloth framework for parameter-efficient training, generated around 28,000 speeches with the same contexts for each model, and evaluated them with our framework, comparing baseline and fine-tuned models. For statistical analysis of the results, we applied t-tests and ANOVA. Results show that fine-tuning produces statistically significant improvements across the majority of metrics, and that our novel metrics demonstrate strong discriminative power for political dimensions. |
| URI: | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19935 |
| Appears in Collections: | Διπλωματικές Εργασίες - Theses |
Files in This Item:
| File | Description | Size | Format |  |
|---|---|---|---|---|
| thesis_argyro-19.pdf |  | 10.76 MB | Adobe PDF | View/Open |
Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.