Please use this identifier to cite or link to this item: http://buratest.brunel.ac.uk/handle/2438/2427
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Zhou, H
dc.contributor.author: Sadka, A H
dc.contributor.author: Jiang, M
dc.coverage.spatial: 4 [en]
dc.date.accessioned: 2008-06-20T12:04:16Z
dc.date.available: 2008-06-20T12:04:16Z
dc.date.issued: 2008
dc.identifier.citation: The Sixth International Workshop on Content-Based Multimedia Indexing, London, UK, 18-20 June 2008 [en]
dc.identifier.uri: http://bura.brunel.ac.uk/handle/2438/2427
dc.description.abstract: Driven by the demands of information retrieval, video editing, and human-computer interfaces, we propose a novel spectral feature for music and speech discrimination. The scheme simulates a biological model using the averaged cepstrum, since human perception tends to pick up areas of large cepstral change: cepstrum data far from the mean value is exponentially reduced in magnitude. We conduct music/speech discrimination experiments comparing the classification performance of the proposed feature with that of previously proposed features. Dynamic-time-warping-based classification verifies that the proposed feature yields the best music/speech classification quality on the test database. [en]
dc.format.extent: 389806 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: IEEE [en]
dc.title: Feature extraction for speech and music discrimination [en]
dc.type: Conference Paper [en]
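The feature described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name, the attenuation parameter `alpha`, and the choice of the real cepstrum (inverse FFT of the log magnitude spectrum) are all assumptions, and the DTW classification stage is not shown.

```python
import numpy as np

def averaged_cepstrum_feature(frames, alpha=1.0):
    """Hypothetical sketch of the paper's spectral feature.

    frames: 2-D array (num_frames, frame_length) of windowed audio samples.
    Computes a per-frame real cepstrum, averages it over frames, then
    exponentially reduces the magnitude of coefficients far from the mean
    value, as the abstract describes. `alpha` (assumed) controls the decay.
    """
    # Real cepstrum per frame: inverse FFT of the log magnitude spectrum
    spectra = np.abs(np.fft.rfft(frames, axis=1)) + 1e-10  # avoid log(0)
    cepstra = np.fft.irfft(np.log(spectra), axis=1)
    avg = cepstra.mean(axis=0)              # averaged cepstrum over frames
    dev = np.abs(avg - avg.mean())          # distance from the mean value
    return avg * np.exp(-alpha * dev)       # exponential magnitude reduction
```

In a music/speech discriminator, such feature vectors would be extracted per audio segment and compared against labelled templates, e.g. with a dynamic time warping distance as in the paper.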
Appears in Collections:Electronic and Computer Engineering
Dept of Electronic and Computer Engineering Research Papers

Files in This Item:
File: Feature extraction for speech and music discrimination.pdf
Size: 380.67 kB
Format: Adobe PDF
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.