Title: Automatic Music Synthesis using Neural Networks and Machine Learning Techniques
Authors: Χαρίτου, Δανάη-Νικολέτα
Μαραγκός, Πέτρος
Keywords: Music
Machine Learning
Artificial Intelligence
Generative Adversarial Networks
Generation from scratch
Accompaniment Generation
Issue Date: 31-Oct-2022
Abstract: Machine Learning has flourished over the last few years, resulting in the inevitable inclusion of Artificial Intelligence in our everyday life. The emulation of human mental acuity, achieved by Artificial Neural Networks, has made overwhelming progress on fundamental and even instinctive intellectual processes. On this ground, the interest of the research community is now focused on more creative and generative functionalities, one of which is music synthesis. The process of creating musical pieces is considered a higher mental function that remains unfathomed, even at a non-computational level. A musical composition is a form of expressing various attributes, such as knowledge, experience, ideas, and emotions. This inherent subjectivity therefore makes the problem of automatic music generation particularly complex. Our approach in the research field of automatic music synthesis is based on Generative Adversarial Networks, one of the most prominent system architectures in the area of generative modeling, with several applications to comparable problems over different data types, such as images, video, and text. Initially, we examine the task of polyphonic music synthesis for multiple tracks in terms of generation from scratch, that is, without any human input or supplementary information. Afterwards, we extend our model to a human-AI cooperative framework by exploring the task of accompaniment generation, namely generating the musical part that provides the rhythmic and/or harmonic support for the melody or main themes of a song composed by a human.
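For reference, Generative Adversarial Networks as mentioned above are trained with the standard minimax objective of Goodfellow et al. (2014), in which a generator G and a discriminator D compete; the abstract does not state which loss variant the thesis actually adopts:

\[
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] \;+\; \mathbb{E}_{z \sim p_z}\left[\log\bigl(1 - D(G(z))\bigr)\right]
\]

Here \(p_{\mathrm{data}}\) is the distribution of real musical samples and \(p_z\) the prior over the generator's latent input.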
Experimenting with the structure of the individual networks, the architecture of the overall system, the training algorithm, and various parameters, with respect to the generated musical samples, allows us to investigate different aspects of the procedure that an Artificial Intelligence model follows in order to compose music, while also demonstrating the impact of these components on the produced musical result. Finally, a set of objective metrics over musical features is established, and a user study is conducted for subjective evaluation. In this way, we show that our model is capable of creating novel, aesthetically pleasing music characterized by tonal, temporal, and harmonic structure, achieving competitive performance in comparison with the baseline implementation.
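The two tasks in the abstract (generation from scratch and accompaniment generation) differ mainly in how the input is conditioned. A common data representation for multi-track GAN music models is a binary piano roll indexed as track x time step x pitch; the sketch below illustrates that representation and how a human-composed melody track can be held fixed while the model fills in the rest. All names, track counts, and resolutions here are illustrative assumptions, not the thesis's actual code:

```python
# Hypothetical sketch: multi-track piano-roll representation and the
# conditioning setup for accompaniment generation (assumed, not from the thesis).

N_TRACKS = 4      # e.g. melody, chords, bass, drums (assumed track layout)
N_STEPS = 16      # time steps per bar (assumed temporal resolution)
N_PITCHES = 128   # full MIDI pitch range

def empty_pianoroll():
    """Zero-filled piano roll: roll[track][step][pitch] is 1 if the note sounds."""
    return [[[0] * N_PITCHES for _ in range(N_STEPS)] for _ in range(N_TRACKS)]

def add_note(roll, track, pitch, start, duration):
    """Activate `pitch` on `track` from `start` for `duration` time steps."""
    for step in range(start, min(start + duration, N_STEPS)):
        roll[track][step][pitch] = 1

def accompaniment_condition(roll, melody_track=0):
    """Keep the human-composed melody track fixed; return a per-track mask
    marking which tracks the generator is asked to fill in."""
    mask = [t != melody_track for t in range(N_TRACKS)]
    return roll, mask

# Usage: a human melody on track 0; a model would generate tracks 1..3.
roll = empty_pianoroll()
add_note(roll, track=0, pitch=60, start=0, duration=4)  # middle C, steps 0-3
add_note(roll, track=0, pitch=64, start=4, duration=4)  # E4, steps 4-7
roll, mask = accompaniment_condition(roll)
```

Generation from scratch corresponds to an all-True mask (no track fixed), so the same representation covers both tasks.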
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in This Item:
File: Charitou_Thesis_Final.pdf
Size: 11.13 MB
Format: Adobe PDF

Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.