Automatic topic classification of test cases using text mining at an Android smartphone vendor

Junji Shimagaki, Yasutaka Kamei, Naoyasu Ubayashi, Abram Hindle

2018/08/15

Authors

Junji Shimagaki, Yasutaka Kamei, Naoyasu Ubayashi, Abram Hindle

Venue

Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), Oulu, Finland, October 2018

Notes

ESEM Best Industrial Paper Award

Abstract

Background: An Android smartphone is an ecosystem of applications, drivers, operating system components, and assets. The volume of software is large, and the number of test cases needed to cover the functionality of an Android system is substantial. Enormous effort has already been spent on properly quantifying "what features and apps were tested and verified?". This insight is provided by dashboards that summarize test coverage and results per feature. One method to achieve this is to manually tag or label test cases with the topic or function they cover, much like function points. At the studied Android smartphone vendor, tests are labelled with manually defined tags, so-called "feature labels (FLs)", and the FLs serve to categorize 100s to 1000s of test cases into 10 to 50 groups.

Aim: Unfortunately for developers, manually assigning FLs to 1000s of test cases is a time-consuming task that leads to inaccurately labelled test cases, which renders the dashboard useless. We created an automated system that suggests tags/labels to developers for their test cases rather than requiring manual labelling.

Method: We use machine learning models to predict and label the functionality tested by 10,000 test cases developed at the company.

Results: Through quantitative experiments, our models achieved acceptable F1 performance of 0.3 to 0.88. Through qualitative studies with expert teams, we also showed that the hierarchy and path of a test is a good predictor of its feature label.

Conclusions: We find that this method can reduce the tedious manual effort that software developers spend classifying test cases, while providing more accurate classification results.
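
The paper itself is not reproduced here, but the abstract's core idea, treating a test case's name and hierarchy/path as text and training a classifier that suggests feature labels (FLs), can be illustrated with a minimal sketch. The sketch below assumes Python with scikit-learn; the test paths, label names, and model choice are hypothetical and are not the vendor's actual test suite or the paper's exact pipeline.

# Minimal sketch (not the paper's implementation): treat each test case's
# hierarchy/path as a text document and train a classifier that suggests
# a feature label (FL). All data below is hypothetical.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical (test path, feature label) pairs; the studied vendor's data
# would hold thousands of test cases grouped into 10 to 50 FLs.
tests = [
    ("camera/video/test_record_1080p", "Camera"),
    ("camera/photo/test_hdr_capture", "Camera"),
    ("telephony/call/test_incoming_call", "Telephony"),
    ("telephony/sms/test_send_long_sms", "Telephony"),
    ("connectivity/wifi/test_ap_reconnect", "Connectivity"),
    ("connectivity/bt/test_pairing", "Connectivity"),
]
docs = [path.replace("/", " ").replace("_", " ") for path, _ in tests]
labels = [fl for _, fl in tests]

# TF-IDF over path tokens plus a linear classifier; the paper compared
# several models, and this is just one plausible baseline.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])

# Macro F1 via cross-validation (the paper reports F1 of 0.3 to 0.88 on
# its real data; this toy run says nothing about those numbers).
print("macro F1:", cross_val_score(clf, docs, labels, cv=2, scoring="f1_macro").mean())

# Suggest an FL for a new, unlabelled test case.
clf.fit(docs, labels)
print("suggested FL:", clf.predict(["camera video test slow motion"])[0])

In the study, the suggested labels were offered to developers rather than applied automatically, so a pipeline like this would feed a suggestion or review step instead of writing FLs directly.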

Bibtex

@inproceedings{junji2018EMSE-topics,
 abstract = {Background: An Android smartphone is an ecosystem of applications, drivers, operating system components, and assets. The volume of the software is large and the number of test cases needed to cover the functionality of an Android system is substantial. Enormous effort has been already taken to properly quantify "what features and apps were tested and verified?". This insight is provided by dashboards that summarize test coverage and results per feature. One method to achieve this is to manually tag or label test cases with the topic or function they cover, much like function points. At the studied Android smartphone vendor, tests are labelled with manually defined tags, so-called "feature labels (FLs)", and the FLs serve to categorize 100s to 1000s test cases into 10 to 50 groups.\nAim: Unfortunately for developers, manual assignment of FLs to 1000s of test cases is a time consuming task, leading to inaccurately labeled test cases, which will render the dashboard useless. We created an automated system that suggests tags/labels to the developers for their test cases rather than manual labeling.\nMethod: We use machine learning models to predict and label the functionality tested by 10,000 test cases developed at the company.\nResults: Through the quantitative experiments, our models achieved acceptable F-1 performance of 0.3 to 0.88. Also through the qualitative studies with expert teams, we showed that the hierarchy and path of tests was a good predictor of a feature's label.\nConclusions: We find that this method can reduce tedious manual effort that software developers spent classifying test cases, while providing more accurate classification results.},
 accepted = {2018-08-15},
 author = {Junji Shimagaki and Yasutaka Kamei and Naoyasu Ubayashi and Abram Hindle},
 authors = {Junji Shimagaki, Yasutaka Kamei, Naoyasu Ubayashi, Abram Hindle},
 booktitle = {Proceedings of the 12th {ACM/IEEE} International Symposium on Empirical Software Engineering and Measurement (ESEM)},
 code = {junji2018EMSE-topics},
 date = {2018-10-11},
 funding = {NSERC Discovery, JSPS},
 location = {Oulu, Finland},
 notes = {ESEM Best Industrial Paper Award},
 pagerange = {1--10},
 pages = {1--10},
 rate = {12/28 or 43%},
 role = {Author},
 title = {Automatic topic classification of test cases using text mining at an Android smartphone vendor},
 type = {inproceedings},
 url = {http://softwareprocess.ca/pubs/junji2018EMSE-topics.pdf},
 venue = {Proceedings of the 12th {ACM/IEEE} International Symposium on Empirical Software Engineering and Measurement (ESEM)},
 year = {2018}
}