Professor Michael Ernst of the Allen School’s Programming Languages & Software Engineering group (PLSE) has been recognized by the International Conference on Software Engineering with its Most Influential Paper Award for 2017. Ernst — along with co-authors Carlos Pacheco of Google, and Shuvendu Lahiri and Thomas Ball of Microsoft Research — earned the award for their ICSE 2007 paper, “Feedback-directed random test generation.”
Each year, ICSE selects a paper from 10 years earlier that it judges to have had “the most influence on the theory or practice of software engineering during the 10 years since its original publication.” In their winning paper, Ernst and his colleagues presented a test generation tool, Randoop, which generates tests for object-oriented programs written in Java and in .NET languages. The technique put forward by the team generates one test at a time, executes the test, and classifies it as (probably) a normal execution, a failure, or an illegal input. Based on this feedback, it biases the subsequent generation process to extend good tests and avoid bad ones.
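The loop described above can be sketched in a few lines. The following is a toy illustration, not Randoop itself: the `BoundedStack` class under test, its seeded bug, and the three-way outcome labels are all invented for this example, and real Randoop works on Java classes via reflection rather than a fixed operation list.

```python
import random

# Hypothetical class under test: a small bounded stack with a seeded bug.
class BoundedStack:
    def __init__(self):
        self.items = []

    def push(self, x):
        if len(self.items) >= 3:
            raise ValueError("stack full")   # treated as an illegal input below
        self.items.append(x)

    def pop(self):
        return self.items.pop()              # bug: raises IndexError when empty

    def size(self):
        return len(self.items)

# The generator's vocabulary of calls (method name, arguments).
OPS = [("push", [1]), ("pop", []), ("size", [])]

def run(sequence):
    """Execute a call sequence and classify it: 'normal', 'illegal', or 'failure'."""
    obj = BoundedStack()
    for name, args in sequence:
        try:
            getattr(obj, name)(*args)
        except ValueError:
            return "illegal"    # precondition violation: discard, do not extend
        except Exception:
            return "failure"    # unexpected error: a candidate error-revealing test
    return "normal"

def feedback_directed_generation(budget=500, seed=0):
    random.seed(seed)
    pool = [[]]        # sequences known to execute normally ("good" tests)
    failures = []
    for _ in range(budget):
        # Extend a randomly chosen good sequence with one random call.
        seq = random.choice(pool) + [random.choice(OPS)]
        outcome = run(seq)
        if outcome == "normal":
            pool.append(seq)       # feedback: reuse as a building block
        elif outcome == "failure":
            failures.append(seq)   # report; never extended further
        # 'illegal' sequences are simply dropped
    return pool, failures
```

Running `feedback_directed_generation()` quickly turns up sequences such as `pop()` on an empty stack, the seeded bug, while the pool grows only with sequences that executed normally. This is the key contrast with generate-then-filter approaches: classification happens during generation, so bad sequences never become building blocks.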
By contrast, a typical test generation technique would generate many tests and then try to determine which ones were of value. For example, an error-revealing test is one that makes legal calls that yield incorrect results. Without a formal specification, it is difficult to know whether a given call is legal and whether its outcome is desired. Furthermore, multiple generated tests might fail for the same reason.
Automated test generation is a practically important research topic. In 2002, the National Institute of Standards and Technology (NIST) estimated the annual costs of inadequate infrastructure for software testing in the U.S. to be at least $22.2 billion and as high as $59.5 billion — with more than half of those costs borne by software users on error avoidance and mitigation.
Ernst and Pacheco first introduced feedback-directed test generation in their ECOOP 2005 paper, “Eclat: Automatic generation and classification of test inputs,” when Ernst was a member of the MIT faculty and Pacheco was his student. The ICSE 2007 paper expanded and improved upon the technique, introduced Randoop, and validated it with more extensive experiments. Randoop continues to be actively maintained 10 years on, and thanks to its scalability and simplicity, it remains the standard benchmark against which other test generation tools are measured.
Ernst and his colleagues were formally recognized at the ICSE 2017 conference held in Buenos Aires, Argentina, in May.
Congratulations, Michael!