Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/12793
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Abbod, M | -
dc.contributor.author | Alani, Shayma | -
dc.date.accessioned | 2016-06-15T13:25:19Z | -
dc.date.available | 2016-06-15T13:25:19Z | -
dc.date.issued | 2015 | -
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/12793 | -
dc.description | This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. | en_US
dc.description.abstract | Classifier ensembling has long been one of the most active areas of machine learning research. The main aim of generating combined classifier ensembles is to improve prediction accuracy compared with using an individual classifier. A combined classifier ensemble can improve prediction results by compensating for individual classifiers' weaknesses in certain areas and benefiting from the better accuracy of other ensemble members in those areas. In this thesis, different algorithms are proposed for designing classifier ensemble combiners. Existing methods such as averaging, voting, the weighted average, and the optimised weighted method do not increase the accuracy of the combiner as much as the proposed advanced methods, namely genetic programming and the coalition method. The different methods are studied in detail and analysed on different databases. The aim is to increase the accuracy of the combiner compared with standard stand-alone classifiers. The proposed methods are based on generating a combiner formula using genetic programming, while the coalition method estimates the diversity of the classifiers so that a coalition with better prediction accuracy is generated. Standard performance measures are used, namely accuracy, sensitivity, specificity and area under the curve, in addition to training-error measures such as the mean squared error. The combiner methods are compared empirically with several stand-alone classifiers based on neural network algorithms. Different neural network topologies are used to generate different models. Experimental results show that the combiner algorithms are superior at creating the most diverse and accurate classifier ensembles. Ensembles of the same model type are generated to boost the accuracy of a single classifier type; an ensemble of 10 models with different initial weights is used to improve accuracy. Experiments show a significant improvement over a single-model classifier. Finally, two combining methods are studied, namely the genetic programming and coalition combination methods. The genetic programming algorithm is used to generate a formula for combining the classifiers, while the coalition method is a simple algorithm that assigns linear-combination weights derived from consensus theory. Experimental results on the same databases demonstrate the effectiveness of the proposed methods compared with conventional combining methods, and show that the coalition method outperforms genetic programming. | en_US
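As a rough sketch of the conventional combining baselines the abstract names (averaging, voting, and the weighted average), assuming binary classifiers that each output a class-1 probability; the function names and sample values below are hypothetical, not the thesis's implementation:

```python
# Illustrative sketch (not the thesis's code): three conventional combiners
# applied to per-classifier probability outputs for one test sample.

def average_combiner(probs):
    """Mean of the classifiers' predicted class-1 probabilities."""
    return sum(probs) / len(probs)

def majority_vote(probs, threshold=0.5):
    """Each classifier casts a 0/1 vote; the majority class wins."""
    votes = sum(1 for p in probs if p >= threshold)
    return 1 if votes > len(probs) / 2 else 0

def weighted_average(probs, weights):
    """Linear combination of outputs; weights sum to 1 and could be set by
    validation accuracy or, as in a coalition/consensus scheme, by estimated
    classifier diversity."""
    return sum(w * p for w, p in zip(weights, probs))

# Hypothetical outputs of three classifiers for one sample
probs = [0.9, 0.4, 0.7]

print(average_combiner(probs))                   # ~0.667
print(majority_vote(probs))                      # 1 (two of three vote positive)
print(weighted_average(probs, [0.5, 0.2, 0.3]))  # 0.74
```

The advanced combiners the thesis proposes replace these fixed rules: genetic programming evolves the combining formula itself, while the coalition method chooses the linear weights from consensus theory rather than fixing them by hand.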
dc.language.iso | en | en_US
dc.publisher | Brunel University London | en_US
dc.relation.uri | http://bura.brunel.ac.uk/bitstream/2438/12793/1/FulltextThesis.pdf | -
dc.subject | Coalition method | en_US
dc.subject | Improve prediction | en_US
dc.subject | Increase accuracy combiner | en_US
dc.subject | Generating neural network models | en_US
dc.subject | NN algorithms | en_US
dc.title | Design of intelligent ensembled classifiers combination methods | en_US
dc.type | Thesis | en_US
Appears in Collections:Electronic and Computer Engineering
Dept of Electronic and Computer Engineering Theses

Files in This Item:
File | Description | Size | Format
FulltextThesis.pdf | | 2.62 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.