Title: Using blind analysis for software engineering experiments
Authors: Sigweni, B
Shepperd, M
Keywords: Researcher bias;Blind analysis;Software engineering experimentation;Software effort estimation
Issue Date: 2015
Publisher: ACM
Citation: EASE '15: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering, 32, Nanjing, China, April 27–29, 2015
Abstract: Context: In recent years there has been growing concern about conflicting experimental results in empirical software engineering. This has been paralleled by awareness of how bias can impact research results. Objective: To explore the practicalities of blind analysis of experimental results to reduce bias. Method: We apply blind analysis to a real software engineering experiment that compares three feature weighting approaches, plus a naïve benchmark (the sample mean), applied to the Finnish software effort data set. We use this experiment as an example to explore blind analysis as a method to reduce researcher bias. Results: Our experience shows that blinding can be a relatively straightforward procedure. We also highlight various statistical analysis decisions which ought not to be guided by the hunt for statistical significance, and show that results can be inverted merely through a seemingly inconsequential statistical nicety (i.e., the degree of trimming). Conclusion: Whilst there are minor challenges and some limits to the degree of blinding possible, blind analysis is a very practical and easy-to-implement method that supports more objective analysis of experimental results. We therefore argue that blind analysis should be the norm for analysing software engineering experiments.
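The core of the blinding procedure described in the abstract is that the analyst works with opaque labels and fixes all analysis decisions (such as the degree of trimming) before the labels are unsealed. The following is a minimal sketch of that idea; the method names, error values, and helper functions are hypothetical illustrations, not the paper's actual data or code.

```python
import random
import statistics


def blind_labels(methods, seed=42):
    """Map real method names to opaque codes so the analyst cannot
    tell which result belongs to which method. The key is kept
    sealed until the analysis choices are finalised."""
    rng = random.Random(seed)
    codes = [f"method_{c}" for c in "ABCD"]
    rng.shuffle(codes)
    return dict(zip(methods, codes))


def trimmed_mean(xs, trim=0.1):
    """Mean after discarding the smallest and largest `trim`
    fraction of values; the degree of trimming is one of the
    analysis decisions that, per the paper, can invert results."""
    xs = sorted(xs)
    k = int(len(xs) * trim)
    core = xs[k:len(xs) - k] if k else xs
    return statistics.mean(core)


if __name__ == "__main__":
    # Hypothetical absolute-error samples for three weighting
    # approaches and the sample-mean benchmark.
    errors = {
        "weighting_1": [10, 12, 11, 95, 9],
        "weighting_2": [14, 13, 15, 12, 16],
        "weighting_3": [11, 10, 12, 13, 90],
        "sample_mean": [20, 21, 19, 22, 18],
    }

    key = blind_labels(list(errors))
    blinded = {key[name]: vals for name, vals in errors.items()}

    # The analyst sees only the blinded labels while choosing and
    # running the statistics.
    results = {code: trimmed_mean(vals, trim=0.2)
               for code, vals in blinded.items()}
    for code, err in sorted(results.items(), key=lambda kv: kv[1]):
        print(code, round(err, 2))

    # Only after all decisions are locked in is the key revealed.
    print({v: k for k, v in key.items()})
```

Note how changing `trim` from 0.2 to 0 would re-rank the outlier-heavy samples, which is exactly the kind of seemingly inconsequential choice the authors argue should be fixed before unblinding.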
ISBN: 978-1-4503-3350-4
Appears in Collections: Computer Science
Dept of Computer Science Research Papers

Files in This Item:
File: Fulltext.pdf (263.8 kB, Adobe PDF)

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.