Domain Adaptation for Deep Unit Test Case Generation

15 Aug 2023  ·  Jiho Shin, Sepehr Hashtroudi, Hadi Hemmati, Song Wang ·

Recently, deep learning-based test case generation approaches have been proposed to automate the generation of unit test cases. In this study, we leverage Transformer-based code models to generate unit tests with the help of Domain Adaptation (DA) at a project level. Specifically, we use CodeT5, a relatively small language model trained on source code data, and fine-tune it on the test generation task; we then further fine-tune it on each target project's data to learn project-specific knowledge (project-level DA). We use the Methods2test dataset to fine-tune CodeT5 for the test generation task and the Defects4j dataset for project-level domain adaptation and evaluation. We compare our approach with (a) CodeT5 fine-tuned on test generation without DA, (b) the A3Test tool, and (c) GPT-4, on 5 projects from the Defects4j dataset. The results show that using DA can increase the line coverage of the generated tests by 18.62%, 19.88%, and 18.02% on average compared to baselines (a), (b), and (c), respectively. The results also consistently show improvements on other metrics such as BLEU and CodeBLEU. In addition, we show that our approach can serve as a complementary solution alongside existing search-based test generation tools such as EvoSuite, increasing overall line coverage and mutation score by an average of 34.42% and 6.8%, respectively.
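The two-stage procedure described above (task-level fine-tuning followed by project-level domain adaptation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a tiny randomly initialized T5 stands in for CodeT5, and random token tensors stand in for Methods2test pairs and Defects4J project data.

```python
# Hedged sketch of two-stage fine-tuning: the same seq2seq loop is run first
# on task-level data, then continued on project-level data (domain adaptation).
# A tiny random T5 replaces CodeT5 so the example runs offline; all data here
# is synthetic stand-in tokens, not the datasets used in the paper.
import torch
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    vocab_size=100, d_model=32, d_kv=16, d_ff=64,
    num_layers=2, num_decoder_layers=2, num_heads=2,
    decoder_start_token_id=0,
)
model = T5ForConditionalGeneration(config)

def fine_tune(model, batches, lr=5e-4):
    """One fine-tuning pass: (focal method tokens) -> (unit test tokens)."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    losses = []
    for src, tgt in batches:
        loss = model(input_ids=src, labels=tgt).loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return losses

def dummy_batches(n):
    # Stand-in for tokenized (method, test) pairs.
    return [(torch.randint(1, 100, (2, 8)), torch.randint(1, 100, (2, 8)))
            for _ in range(n)]

# Stage 1: task-level fine-tuning (stands in for Methods2test).
stage1_losses = fine_tune(model, dummy_batches(3))
# Stage 2: project-level DA -- the SAME weights are fine-tuned further
# on one target project's data (stands in for a Defects4J project).
stage2_losses = fine_tune(model, dummy_batches(3))
```

The key design point is that stage 2 resumes from the stage-1 checkpoint rather than re-initializing, so the model keeps general test generation ability while specializing to the target project's identifiers and conventions.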

