**An Empirical Evaluation of Software Quality Classification Based on User Feedback Aligned with ISO/IEC 25010**
<p dir="ltr">Evaluating software quality without access to the source code is a challenging task, as traditional metrics and testing approaches often rely on internal code analysis. However, user feedback offers a valuable alternative data source by reflecting real-world quality issu...
Saved in:
| Main Author: | |
|---|---|
| Published: | 2025 |
| Subjects: | |
| Summary: | <p dir="ltr">Evaluating software quality without access to the source code is a challenging task, as traditional metrics and testing approaches often rely on internal code analysis. However, user feedback offers a valuable alternative data source by reflecting real-world quality issues and user perceptions. The use of such feedback for quality classification poses several technical challenges: the data is unstructured, class distributions are imbalanced, and labeled samples are limited. In this study, these challenges are addressed using a dataset of user reviews for mobile health (mHealth) applications, with each review labeled with one of the eight software quality characteristics defined in the ISO/IEC 25010:2011 standard. The reviews were represented with various text vectorization techniques such as Bag-of-Words, TF-IDF, Word2Vec, FastText, BERT, and RoBERTa, and were classified by machine learning algorithms including Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Random Forest (RF), Naive Bayes, Decision Tree, Stochastic Gradient Descent (SGD), XGBoost, and AdaBoost.</p> |
|---|
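The summary describes a vectorization-plus-classification pipeline over user reviews. The sketch below illustrates one such configuration (TF-IDF features with a linear SVM) using scikit-learn; the example reviews, labels, and preprocessing settings are hypothetical placeholders for illustration, not the study's data or the authors' exact setup.

```python
# Minimal sketch of a review-classification pipeline: TF-IDF vectorization
# followed by a linear SVM, one of the combinations listed in the summary.
# Assumes scikit-learn is installed; reviews and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical mHealth reviews, each tagged with an ISO/IEC 25010 characteristic.
reviews = [
    "The app crashes every time I open my medication log.",      # reliability
    "Syncing with my fitness tracker takes forever to finish.",  # performance efficiency
    "I could not figure out how to export my health report.",    # usability
    "My personal data was shared without asking for consent.",   # security
]
labels = ["reliability", "performance efficiency", "usability", "security"]

# TF-IDF features (unigrams and bigrams) feeding a linear SVM classifier.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2))),
    ("svm", LinearSVC()),
])
pipeline.fit(reviews, labels)

# Predict the quality characteristic of an unseen review.
print(pipeline.predict(["The login screen freezes after the latest update."]))
```

Swapping the vectorizer (e.g., Bag-of-Words counts or pretrained BERT/RoBERTa embeddings) or the classifier (KNN, Random Forest, XGBoost, etc.) changes only the corresponding pipeline step, which is how such combinations are typically compared.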