Show simple item record

dc.contributor.advisor    Martín Gutiérrez, David (Tutor)    es_ES
dc.contributor.advisor    Ramos Castro, Daniel    es_ES
dc.contributor.author    Bustos Manzanet, Laura    es_ES
dc.contributor.other    UAM. Departamento de Tecnología Electrónica y de las Comunicaciones    es_ES
dc.date.accessioned    2022-02-09T11:42:30Z    en_US
dc.date.available    2022-02-09T11:42:30Z    en_US
dc.date.issued    2021-09    en_US
dc.identifier.uri    http://hdl.handle.net/10486/700206    en_US
dc.description    Master Universitario en Deep Learning for Audio and Video Signal Processing    es_ES
dc.description.abstract    Speaker identification is a field with a large body of related research, but only a few studies address identification from the singing voice rather than from speech. This gap is mainly due to the fact that the spoken voice is simpler and spans a much narrower frequency range than the singing voice. Accordingly, this Master's Final Project presents a study on identifying singers from their recorded songs; for this purpose, a more sophisticated system has been developed to cope with the increased complexity of the data and to discriminate among singers. As a preliminary step, and given the scarcity of singing-voice databases in the state of the art, the present work also includes an automatic procedure for building a novel database using Spotify's API. The database contains information on the musical genre, the artist and several musical characteristics of the 30-second preview excerpt that Spotify provides for each song. The audio files have been source-separated with the network behind the Spleeter application, so that the processed files contain only the singing voice of the original songs. The developed system uses several feature extractors from the current state of the art, drawing on both speech analysis techniques and techniques for identifying musical instruments in recordings. The resulting features feed state-of-the-art classifiers based on shallow neural networks and speaker identification networks. (Minimal code sketches of the collection and classification steps are given after the record below.)    es_ES
dc.format.extent    47 pág.    es_ES
dc.format.mimetype    application/pdf    en_US
dc.language.iso    eng    en_US
dc.rights.uri    https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject.other    SingerID    en_US
dc.subject.other    Deep Learning    en_US
dc.subject.other    wav2vec    en_US
dc.title    Deep Learning based singer identification    es_ES
dc.type    masterThesis    en_US
dc.subject.eciencia    Telecomunicaciones    es_ES
dc.rights.cc    Reconocimiento – NoComercial – SinObraDerivada    es_ES
dc.rights.accessRights    openAccess    en_US
dc.facultadUAM    Escuela Politécnica Superior
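
The abstract describes building the database from Spotify's 30-second preview clips and isolating the vocals with Spleeter. The following is a minimal sketch of that collection step, assuming the spotipy client, the requests library and Spleeter's pretrained 2-stem model; the search query, file paths and returned metadata fields are illustrative assumptions, not the exact procedure used in the thesis.

# Hypothetical sketch: fetch a 30-second Spotify preview and keep only the vocals.
# Assumes spotipy, requests and spleeter are installed and that
# SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET are set in the environment.
import requests
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
from spleeter.separator import Separator

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

def download_preview(artist_name: str, out_path: str) -> dict:
    """Download one 30 s preview for the given artist and return basic metadata."""
    hits = sp.search(q=f"artist:{artist_name}", type="track", limit=10)
    for track in hits["tracks"]["items"]:
        if track["preview_url"]:  # not every track exposes a preview clip
            audio = requests.get(track["preview_url"], timeout=30)
            with open(out_path, "wb") as f:
                f.write(audio.content)
            return {"artist": artist_name,
                    "track": track["name"],
                    "track_id": track["id"]}
    raise RuntimeError(f"No preview found for {artist_name}")

# Spleeter's 2-stem model writes vocals.wav and accompaniment.wav into a
# subfolder of the output directory named after the input file; only
# vocals.wav is kept for singer identification.
separator = Separator("spleeter:2stems")

meta = download_preview("Example Artist", "preview.mp3")
separator.separate_to_file("preview.mp3", "separated/")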
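
The abstract also mentions feeding state-of-the-art feature extractors (the record's keywords include wav2vec) into shallow neural-network classifiers. The sketch below illustrates that stage with a pretrained Wav2Vec2 encoder from the Hugging Face transformers library and a small feed-forward classifier; the checkpoint name, mean-pooling strategy, classifier shape and number of singers are assumptions for illustration, not the configuration reported in the thesis.

# Hypothetical sketch: embed a separated vocal track with wav2vec 2.0 and
# score it with a shallow feed-forward singer classifier.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

CHECKPOINT = "facebook/wav2vec2-base"          # assumed checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CHECKPOINT)
encoder = Wav2Vec2Model.from_pretrained(CHECKPOINT).eval()

def vocal_embedding(wav_path: str) -> torch.Tensor:
    """Return one utterance-level embedding for a vocals-only file."""
    waveform, sr = torchaudio.load(wav_path)
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)
    waveform = waveform.mean(dim=0)            # collapse stereo to mono
    inputs = extractor(waveform.numpy(), sampling_rate=16_000,
                       return_tensors="pt")
    with torch.no_grad():
        frames = encoder(**inputs).last_hidden_state  # shape (1, T, 768)
    return frames.mean(dim=1).squeeze(0)       # mean-pool over time

# Shallow classifier over the pooled embeddings (one logit per singer).
num_singers = 20                                # assumed number of classes
classifier = torch.nn.Sequential(
    torch.nn.Linear(768, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, num_singers),
)

logits = classifier(vocal_embedding("separated/preview/vocals.wav"))
predicted_singer = logits.argmax().item()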

