Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19266
Title: Training-free Style Transfer in Diffusion Models
Authors: Κακούρης, Δημήτριος
Βουλόδημος Αθανάσιος
Keywords: Generative AI
neural networks
machine learning
transformers
μηχανική μάθηση (machine learning)
νευρωνικά δίκτυα (neural networks)
μοντέλα διάχυσης (diffusion models)
Diffusion models
Issue Date: 12-Sep-2024
Abstract: Diffusion models are a novel class of generative models that have shown promising results in various applications, including image synthesis, natural language processing, and audio generation. They operate by gradually transforming a sample from a simple distribution (e.g., Gaussian noise) into a sample from a complex data distribution through a series of iterative noise-adding and noise-removing steps. This diploma thesis extends the application of diffusion models to style transfer, a technique for steering the model's output so that the generated image closely resembles a desired art style. A key challenge is maintaining semantic alignment with the content while preserving the texture and nuances of the target style, and this thesis aims to strike a fine balance between these two components. It builds on state-of-the-art methods such as StyleID and InitNO, handling style transfer via attention key injection and semantic alignment via initial latent noise optimization, respectively.
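The sketch below illustrates, in minimal form, the two ideas named in the abstract: attention key/value injection in the spirit of StyleID, and gradient-based refinement of the initial latent noise in the spirit of InitNO. It is not the thesis code; the function names, the blending factor gamma, and the toy alignment objective are illustrative assumptions only.

    # Minimal illustrative sketch (assumed names, not the cited papers' APIs).
    import torch

    def inject_style_kv(q_content, k_content, v_content, k_style, v_style, gamma=0.75):
        """Self-attention step in which the content queries attend to keys/values
        blended with those extracted from the style image (StyleID-like key injection).
        gamma = 1.0 corresponds to full injection; gamma is an assumed knob here."""
        k = gamma * k_style + (1.0 - gamma) * k_content
        v = gamma * v_style + (1.0 - gamma) * v_content
        d = q_content.shape[-1]
        attn = torch.softmax(q_content @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        return attn @ v

    def optimize_initial_noise(score_fn, latent, steps=20, lr=1e-2):
        """InitNO-like idea: nudge the initial latent noise so that a semantic-alignment
        score is maximized, then re-normalize so the latent stays roughly N(0, I)."""
        latent = latent.clone().requires_grad_(True)
        opt = torch.optim.Adam([latent], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = -score_fn(latent)          # maximize the alignment score
            loss.backward()
            opt.step()
        with torch.no_grad():
            latent = (latent - latent.mean()) / (latent.std() + 1e-8)
        return latent.detach()

    # Toy usage with random tensors (1 image, 64 spatial tokens, 40-dim features).
    q_c, k_c, v_c, k_s, v_s = (torch.randn(1, 64, 40) for _ in range(5))
    print(inject_style_kv(q_c, k_c, v_c, k_s, v_s).shape)   # torch.Size([1, 64, 40])
    noise = torch.randn(1, 4, 64, 64)                        # Stable-Diffusion-like latent shape
    refined = optimize_initial_noise(lambda z: -(z ** 2).mean(), noise)  # toy score function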
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19266
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in This Item:
File: DiplomaThesis_03119019.pdf
Size: 29.94 MB
Format: Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.