Open access repository


In 2014, we launched our open-access repository, which offers full-text access to conference proceedings from many of our events, including the INC and HAISA series. These papers are free to access and distribute (subject to citing the source).


Eleventh International Symposium on Human Aspects of Information Security & Assurance (HAISA 2017)

Adelaide, Australia, November 28-30, 2017
ISBN: 978-1-84102-428-8

Title: How Reliable are Experts’ Assessments? A Case Study on UAV Security
Author(s): Abdulhadi Shoufan, Ernesto Damiani
Reference: pp104-113
Keywords: Experts’ qualitative assessment, inter-rater reliability, Fleiss’ kappa
Abstract: Expert opinion is a vital input to the information security process. However, the judgement of information security professionals is not always consistent, and different experts may provide clearly different ratings. This paper proposes an experimental design for a quantitative analysis of inter-rater reliability in the field of information security. Twenty experts were asked to rate the security objectives (confidentiality, integrity, and availability) of civilian drone communication in 45 different use cases. Three rating levels were available: low, medium, and high. The experts’ ratings were analyzed using Fleiss’ kappa to measure inter-rater reliability. The results show only slight agreement among the experts, which raises concerns about the validity of such assessments. However, the experts show higher agreement at the extremes, i.e., when a use case has clearly high or clearly low security objectives. Increasing the number of experts causes an initial improvement in Fleiss’ kappa, but the latter appears to reach a saturation point once the number of experts exceeds ten, suggesting that large panels do not guarantee increased agreement. Most polled experts appear biased towards giving a specific rating. Interestingly, unbiased experts show higher agreement among themselves than biased ones. Our findings suggest that expert ratings should be followed by a verification procedure to determine the reliability of the provided data. A purposeful identification of panel subsets with higher inter-rater agreement should also be considered.
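The reliability measure named in the abstract, Fleiss' kappa, can be computed from a subjects-by-categories count matrix, where each row records how many raters assigned a subject to each category (e.g., low/medium/high). The sketch below is an illustrative implementation, not the authors' code; the function name and the toy data are assumptions for demonstration.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a subjects-by-categories count matrix.

    ratings[i][j] = number of raters who assigned subject i to category j.
    Every row must sum to the same number of raters n.
    """
    N = len(ratings)       # number of subjects (e.g., use cases)
    n = sum(ratings[0])    # raters per subject (e.g., experts)
    k = len(ratings[0])    # number of categories (e.g., low/medium/high)

    # Mean per-subject agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1))
        for row in ratings
    ) / N

    # Chance agreement P_e from the marginal category proportions
    totals = [sum(row[j] for row in ratings) for j in range(k)]
    P_e = sum((t / (N * n)) ** 2 for t in totals)

    return (P_bar - P_e) / (1 - P_e)


# Hypothetical example: 3 subjects, 3 raters, categories low/medium/high.
# Perfect agreement on every subject yields kappa = 1.
perfect = [[3, 0, 0], [0, 3, 0], [3, 0, 0]]
print(fleiss_kappa(perfect))  # → 1.0
```

Values near 0 indicate agreement no better than chance, which is what "slight agreement" in the abstract refers to on Landis and Koch's conventional scale.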
Download count: 912

How to get this paper:

Download a free PDF copy of this paper
Buy this book at Lulu.com

A PDF copy of this paper is free to download. You may distribute this copy provided you cite this page as the source.