How do developers and users discuss human-centric issues during software development?
We are mining software repositories to better understand:
- What human-centric issues do developers discuss (and not discuss) in SE?
- What human-centric issues do users discuss in SE?
- How do human-centric issues get discussed?
- What are the differences in the discussions across human-centric issues in SE?
- How do discussions vary between platforms, e.g. Stack Overflow and GitHub?
- How do discussions vary based on human aspect, project, person, ...?
- How do discussions vary across different fields and applications?
- How do discussions vary between developers and users? And finally,
- Do developers address the human-centric issues they discuss in SE?
AH-CID: A Tool to Automatically Detect Human-Centric Issues in App Reviews
In modern software development, there is a growing emphasis on creating and designing around the end-user. This has sparked the widespread adoption of human-centred design and agile development. These concepts intersect during the user feedback stage of agile development, where user requirements are re-evaluated and fed into the next iteration of development. An issue arises when the amount of user feedback far exceeds the team's capacity to extract meaningful data. As a result, many critical concerns and issues may fall through the cracks and go unnoticed, or the team must spend a great deal of time analysing the data, time that could be better spent elsewhere. In this paper, we present a tool that analyses a large number of user reviews from 24 mobile apps. These reviews are used to train a machine learning (ML) model that automatically estimates the probability that human-centric issues are present, automating and streamlining the analysis of user feedback. Our evaluation shows an improved ability to find users' human-centric issues.
We collected a large number of reviews from 24 apps in different categories such as parking, social media, COVID-19, education, fitness, and apps developed for people with dyslexia. The reviews were classified using eight human-centric tags identified during our analysis: Disability, Age, Emotional (emotional impacts of the app), Language, Gender, Location (or culture), Privacy, and Socio-economic Status. The app reviews were initially tagged using a semi-automated keyword-based tool. They were then manually checked, revised, and used to train an ML model. We adopted a binary relevance (BR) transformation method with a support vector machine (SVM) base classifier to determine the percentage likelihood that the input text contains any of the eight specified labels. This ML model yields promising results, and our performance evaluations indicate a positive trend toward automating the user-feedback process as a viable alternative to manual analysis of reviews.
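The binary relevance transformation described above trains one independent binary classifier per label, so a single review can carry several human-centric tags at once. The sketch below illustrates the idea with scikit-learn; the toy reviews, the one-label-per-review annotations, and the use of decision scores (rather than the Platt-scaled percentage likelihoods the tool reports) are all illustrative assumptions, not the paper's actual dataset or pipeline.

```python
# Minimal binary-relevance sketch: TF-IDF features plus one linear SVM per
# label via OneVsRestClassifier. Hypothetical toy data, not AH-CID's corpus.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

LABELS = ["Disability", "Age", "Emotional", "Language",
          "Gender", "Location", "Privacy", "Socio-economic Status"]

# One made-up review per label, purely for demonstration.
reviews = [
    "no screen reader support, unusable for blind users",
    "the tiny font is hard for older people to read",
    "this update made me anxious and frustrated",
    "there is no translation, everything is in English only",
    "the avatars assume every user is male",
    "the app ignores local customs and regional date formats",
    "it shares my personal data without asking",
    "the subscription is unaffordable for low-income users",
]
# Multi-hot label matrix aligned with LABELS (here simply one label each).
y = np.eye(len(LABELS), dtype=int)

X = TfidfVectorizer().fit_transform(reviews)

# Binary relevance: an independent linear SVM is fitted for each label.
clf = OneVsRestClassifier(LinearSVC()).fit(X, y)

preds = clf.predict(X)             # multi-hot predictions, shape (8, 8)
scores = clf.decision_function(X)  # per-label confidence scores, shape (8, 8)
```

In the tool itself, per-label probabilities would come from a calibrated SVM (e.g. `SVC(probability=True)`), whereas this sketch only exposes the raw SVM decision scores.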
AH-CID is a novel tool that detects and analyses human-centric issues in text, allowing developers to ascertain which issues are adversely affecting their diverse user base. Using a machine learning approach, a model was constructed and deployed with 91.4% accuracy and a 2.1% hamming loss. The training data was also balanced, resulting in a moderate-to-high F1 score. In the future, we plan to investigate a larger set of app reviews and human-centric issues using our tool. An empirical study with users on their perception of human-centric issues is another area for future work. We also plan to extend the tool with a user review feature, so that end-users can detect human-centric issues in different apps and select and download apps more wisely from a human-centric perspective.
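For multi-label output like this, hamming loss measures the fraction of individual label decisions (cells of the multi-hot matrix) that are wrong, which is why it can be low even when whole-review accuracy is harder to achieve. The sketch below shows how such metrics are computed with scikit-learn on a hypothetical ground-truth/prediction pair; the numbers are illustrative and are not the paper's reported results.

```python
# Hamming loss and micro-averaged F1 on made-up multi-hot matrices
# (4 reviews x 8 human-centric labels), not the paper's evaluation data.
import numpy as np
from sklearn.metrics import hamming_loss, f1_score

y_true = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0, 1, 0],
                   [0, 1, 0, 0, 0, 0, 1, 0],
                   [0, 0, 0, 0, 0, 0, 0, 1]])
y_pred = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0, 1, 0],
                   [0, 1, 0, 0, 0, 0, 0, 0],  # one missed label
                   [0, 0, 0, 0, 0, 0, 0, 1]])

# One wrong cell out of 4 * 8 = 32 label decisions.
hl = hamming_loss(y_true, y_pred)             # -> 1/32 = 0.03125
f1 = f1_score(y_true, y_pred, average="micro")  # -> 8/9 ~= 0.889
print(hl, f1)
```

Micro-averaging pools true/false positives across all eight labels before computing F1, which suits imbalanced label distributions like those in app reviews.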
- Mathews, C., Ye, K., Grozdanovski, J., Marinelli, M., Zhong, K., Khalajzadeh, H., Obie, H. and Grundy, J.C., AH-CID: A Tool to Automatically Detect Human-Centric Issues in App Reviews, 16th International Conference on Software Technologies (ICSOFT 2021), July 6-8 2021, online. -- Author pre-published version PDF
For more information, please contact us by email.