Computer scientist Davide Spadini, PhD researcher at the Faculty of Electrical Engineering, Mathematics and Computer Science, has his head in the cloud. In this online world of software and services, he has been researching how to improve testing practices. After software has been developed, the next phase is usually testing: searching for bugs or unexpected behaviours. Simulating, or in technical terms ‘mocking’, is common practice in testing. Which types of software are most suitable to mock, and when is the right moment to do so? Davide discovered clear patterns for when to mock and when not to.
Setting fire to a washing machine
“Compare it to testing a new washing machine”, Davide explains. “If, for example, you want to test whether the machine is fireproof, you can set the whole machine on fire and find out whether there are problems and what needs to be improved. However, if you need to repeat this process every time you make a modification, it becomes a costly, slow and complicated business to set a whole new machine on fire. Sometimes it is easier to simulate, or mock, the situation. Also, if all you want is to test whether a particular part of the machine is fireproof, mocking all the other parts of the machine seems to be the more suitable option. It’s about finding the balance between real-life testing, when necessary, and simulating at other crucial moments.”
“Mocking is a common testing practice among software developers, but there is little empirical evidence on how developers actually apply the technique in their software systems. We investigated how and why developers currently use mock objects, looking at software that has been in use for a long time. We found that some types of software are mocked more than others, and we also found a clear pattern in the timing of when to mock. Our results show that developers tend to mock complex and slow classes more. A class is a unit of program code that describes an object and its actions: the “washing machine”, for example, is a class, and its behaviour could be “wash 30°”, “wash 90°”, “dry”, and so on. In a mock situation you only look at the particular actions; in real life you would also have to take dependencies in the context into account. Think of classes that are hard to set up or that depend on external resources, for example a class that makes a request and has to wait for the response.”
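The pattern Davide describes can be sketched in a few lines of Python using the standard library’s `unittest.mock`. The `TemperatureSensor` and `WashingMachine` classes here are hypothetical, invented purely to mirror the washing-machine analogy; the point is only how the slow dependency is replaced by a mock.

```python
import time
from unittest.mock import Mock

class TemperatureSensor:
    """Hypothetical slow dependency: imagine it waits on real hardware."""
    def read_celsius(self):
        time.sleep(5)  # pretend this is a slow external resource
        return 30

class WashingMachine:
    """Hypothetical class under test; it depends on the slow sensor."""
    def __init__(self, sensor):
        self.sensor = sensor

    def is_safe_to_wash(self):
        return self.sensor.read_celsius() < 90

# In the test, the slow sensor is replaced by a mock: the test runs
# instantly, and we control exactly what the dependency returns.
sensor = Mock()
sensor.read_celsius.return_value = 30
machine = WashingMachine(sensor)
assert machine.is_safe_to_wash()
sensor.read_celsius.assert_called_once()
```

The test never waits on the real sensor, which is exactly why developers in the study tended to mock slow and hard-to-set-up classes.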
In contrast, developers do not often mock classes that they can fully control. Senior developers even go as far as to say that in an ideal world there is very little need for mocking, as long as you write good code; they see excessive use of mocks as indicative of poorly engineered code. Another clear pattern is not to mock when the focus of the test is the integration itself (i.e. a class that interacts with a database): developers need to exercise the real integration to be sure that the software works. In other words, some cases are important enough to test the washing machine with real fire. “We also discussed the challenges faced when mocking, for example handling legacy (i.e. old) systems, the fact that not all junior developers know how to use mocks properly, and many others.”
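The contrasting case can be sketched too: when the purpose of the test is the database integration itself, the real resource is used. In this illustrative example an in-memory SQLite database stands in for the “real fire”; the `UserStore` class is hypothetical.

```python
import sqlite3

class UserStore:
    """Hypothetical class whose whole job is talking to the database."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# Mocking the connection here would only prove that we called the mock;
# running against a real (in-memory) database proves the SQL actually works.
store = UserStore(sqlite3.connect(":memory:"))
store.add("Davide")
assert store.count() == 1
```

Because the class exists only to interact with the database, mocking that interaction away would leave nothing meaningful to test.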
“Sharing the discovered patterns and the best moments to mock will help other scientists and developers. There is a lot in this world you have to pay for; in my opinion, knowledge should be free. That is why I wanted to publish my findings in a reliable source, secure for the future. I therefore uploaded my paper to Pure and my data to the 4TU.ResearchData Archive. The TU Delft Library helped me make my data more complete and easier to find, so that others know the context of my research and other helpful information. Filling in the extra information makes my work more accessible and increases my academic visibility.”
“On 21 May 2017, I presented my findings at the 14th International Conference on Mining Software Repositories (MSR), receiving really good feedback and positive reactions. We were also nominated as one of the 6 best papers at the conference, winning an invitation to publish an extended version in the Empirical Software Engineering (EMSE) journal. Some researchers have already asked me to share the dataset with them, and thanks to 4TU I was able to do so.”
Library information box
4TU.ResearchData offers a trustworthy long-term archive, qualified with a Data Seal of Approval, for storing and reusing research data, as well as advice on research data management. All data at 4TU.ResearchData are identified in a unique and persistent way through Digital Object Identifiers, which makes the data underlying scientific publications visible, findable and citable. The 4TU.ResearchData team provides researchers with a standardised set of metadata for every dataset, covering primary information (dataset title, creator, publication year) and additional information such as a free-text description, individual keywords and a link to the publication. The team also gives valuable feedback on the entered metadata during the upload process and offers suggestions about, for example, file formats, to amplify the discoverability of the research output.
Photography: Marcel Krijger – email@example.com
Author: Marieke Hopley TU Delft Library – firstname.lastname@example.org
Publication date: July 2017
Don’t want to miss the next edition? Subscribe here!