The Adverse Effects of Code Duplication in Machine Learning Models of Code
The field of big code relies on mining large corpora of code to perform some learning task towards creating better tools for software engineers. A significant threat to this approach was recently identified by Lopes et al. [18], who found a large amount of near-duplicate code on GitHub. However, the impact of code duplication has not been noticed by researchers devising machine learning models for source code. In this essay, we explore the effects of code duplication on machine learning models, showing that reported performance metrics are sometimes inflated by up to 100% when testing on duplicated code corpora, compared to the performance on de-duplicated corpora, which more accurately represent how machine learning models of code are used by software engineers. We present a duplication index for widely used datasets and list best practices for collecting code corpora and evaluating machine learning models on them. Finally, we release tools to help the community avoid this problem in future research.
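The de-duplication the abstract refers to hinges on detecting near-duplicate files, not just exact copies. As a minimal sketch of the idea (the tokenizer, the Jaccard measure over token sets, and the 0.8 threshold are illustrative assumptions here, not the paper's released tooling), one can flag file pairs whose token sets overlap beyond a similarity threshold:

```python
import re

def tokens(code: str) -> set:
    """Crude identifier/number tokenizer (illustrative only)."""
    return set(re.findall(r"[A-Za-z_]\w*|\d+", code))

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_duplicates(files: dict, threshold: float = 0.8):
    """Return pairs of file names whose token-set similarity
    meets the threshold -- candidates for removal before
    splitting a corpus into train/test sets."""
    names = list(files)
    toks = {n: tokens(files[n]) for n in names}
    pairs = []
    for i, n1 in enumerate(names):
        for n2 in names[i + 1:]:
            if jaccard(toks[n1], toks[n2]) >= threshold:
                pairs.append((n1, n2))
    return pairs

corpus = {
    "a.py": "def add(x, y):\n    return x + y\n",
    "b.py": "def add(x, y):\n    # sum\n    return x + y\n",  # near-duplicate of a.py
    "c.py": "class Tree:\n    pass\n",
}
dups = near_duplicates(corpus)  # [("a.py", "b.py")]
```

If a near-duplicate pair straddles the train/test split, the model is partly evaluated on data it has effectively seen, which is exactly how the inflated metrics described above arise; removing one member of each pair before splitting avoids this.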
Presented in the Onward! Papers session, Thu 24 Oct, 14:30–15:00. Miltiadis Allamanis, Microsoft Research, Cambridge. DOI and pre-print available.