Property-based testing in Kotlin — Part 5
An elephant in a brown field
Framework fatigue
In part 4, we learned how to draw using the broad strokes of property-based testing. At the same time, we made all of our progress in an unusual framework, KotlinTest. Framework fatigue is as much a problem in the JVM community as it is in the JavaScript community.
That being said, a new framework is a little easier to swallow when it's scoped as a test dependency: it isn't included in your app's binary, and the test dependency space is usually less crowded (remember, we can scope a dependency to tests in Gradle with testImplementation). Still, what if you don't want to introduce a new lava layer into your brownfield project, but you do want to try property-based testing in it?
Luckily, a form of property-based testing ships with JUnit 4, so you should be able to use it in your current project! Skip this part and dance for joy if you have the leisure to use the latest framework 😉
It’s only a theory
Earlier we said that “theories” are another name for “properties”. So let me introduce you to your new BFF, org.junit.experimental.theories.Theories. It's a runner: you'll have seen runners if you've used Robolectric or Espresso on Android. In other words, it is responsible for executing the tests contained in your test class.
Execution of tests is a cross-cutting concern, indicated using annotations. So, the first step is to annotate your test class with metadata specifying the runner:
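A minimal sketch of that first step might look like this (the class name MyMaxTheories is my placeholder):

```kotlin
import org.junit.experimental.theories.Theories
import org.junit.runner.RunWith

// Tell JUnit to execute this class's tests with the Theories runner.
@RunWith(Theories::class)
class MyMaxTheories {
    // @Theory methods go here
}
```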
We can now write tests that we annotate with @Theory. Unlike normal JUnit test methods, which can't have parameters, @Theory methods can declare parameters, and the Theories runner will attempt to supply arguments for them at the time of test execution:
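For example, a sketch of a parameterised theory might look like this (it uses Boolean parameters, which, as we'll see in a moment, the runner can supply on its own; the commutativity property is my own example):

```kotlin
import org.junit.Assert.assertTrue
import org.junit.experimental.theories.Theories
import org.junit.experimental.theories.Theory
import org.junit.runner.RunWith

@RunWith(Theories::class)
class XorTheories {
    // Unlike a plain @Test method, a @Theory method may declare parameters.
    // The runner calls it once for every combination of supplied values.
    @Theory
    fun xorIsCommutative(a: Boolean, b: Boolean) {
        assertTrue((a xor b) == (b xor a))
    }
}
```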
Unfortunately, Theories is not a turnkey solution like KotlinTest. Out of the box, it knows how to supply all the values of an enum, the two values of a Boolean, and little else. Let's abuse the framework a bit by just printing the supplied values inside the test body:
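Something like the following sketch will do; the runner calls the theory once per combination of supplied values (here, four enum constants times two Booleans, so eight invocations):

```kotlin
import org.junit.experimental.theories.Theories
import org.junit.experimental.theories.Theory
import org.junit.runner.RunWith

enum class Direction { NORTH, SOUTH, EAST, WEST }

@RunWith(Theories::class)
class WhatDoIGet {
    // No assertions: we just peek at what the runner feeds us.
    @Theory
    fun printSuppliedValues(direction: Direction, flag: Boolean) {
        println("$direction / $flag")
    }
}
```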
DataPoints
DataPoints are how we supply other values for the runner to feed into our theories. Here's how we could supply interesting points for testing MyMax:
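A sketch of this, going by the Theories API: data points must be public static fields or methods on the test class, which in Kotlin means a companion object member exposed with @JvmField. The particular edge-case values below are my own choice:

```kotlin
import org.junit.experimental.theories.DataPoints
import org.junit.experimental.theories.Theories
import org.junit.runner.RunWith

@RunWith(Theories::class)
class MyMaxTheories {
    companion object {
        // Interesting edge cases for a max function: extremes, zero, both signs.
        @JvmField
        @field:DataPoints
        val ints = intArrayOf(Int.MIN_VALUE, -1, 0, 1, Int.MAX_VALUE)
    }
}
```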
Now we can do the following:
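Here is a sketch of such a theory. It assumes a MyMax.max(a, b) function along the lines of earlier parts (the stand-in implementation below is mine) and uses AssertJ for the assertion and assumption; the int data points from the previous snippet are assumed to be on the class:

```kotlin
import org.assertj.core.api.Assertions.assertThat
import org.assertj.core.api.Assumptions.assumeThat
import org.junit.experimental.theories.Theories
import org.junit.experimental.theories.Theory
import org.junit.runner.RunWith

// Stand-in for the MyMax implementation from earlier in the series.
object MyMax {
    fun max(a: Int, b: Int): Int = if (a >= b) a else b
}

@RunWith(Theories::class)
class MyMaxTheories {
    @Theory
    fun maxIsTheLargerArgument(a: Int, b: Int) {
        // Soft failure: skip this combination when the assumption fails.
        assumeThat(a).isGreaterThanOrEqualTo(b)
        assertThat(MyMax.max(a, b)).isEqualTo(a)
    }
}
```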
And, just like magic, the Theories runner will execute the test against all the DataPoints
we supplied in the companion object, attempting to falsify our theory.
Here, an assumption is a soft failure: it means “if the assumption is violated, discontinue the test”. We've used AssertJ's assumeThat to achieve a fluent syntax, but the same is possible with vanilla JUnit's Assume.
Adding a basic generator
Although we can supply values manually with DataPoints, it would be even better if we could generate them à la KotlinTest. We can do this with a ParameterSupplier, JUnit's equivalent of a generator. The first step is to define an annotation; we'll call this one @RandomInts:
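A sketch of the annotation, with the defaults described below (the supplier class it names is given later in the article):

```kotlin
import org.junit.experimental.theories.ParametersSuppliedBy

// Marks a theory parameter as "supply randomly generated lists of ints here".
// RandomIntsSupplier (shown later) does the actual generating.
@Retention(AnnotationRetention.RUNTIME)
@Target(AnnotationTarget.VALUE_PARAMETER)
@ParametersSuppliedBy(RandomIntsSupplier::class)
annotation class RandomInts(val iterations: Int = 50, val seed: Long = 0)
```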
The annotation has two parameters: the number of iterations, which we default to 50, and a seed. We default the seed for random number generation to 0 in order to get reproducible tests.
@ParametersSuppliedBy links this RandomInts annotation to a class that actually generates the values. In other words, it says “whenever you see @RandomInts, call RandomIntsSupplier with the number of iterations and the seed”. We'll give the code for this later 😉
We consume the annotation like this:
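A sketch, assuming the @RandomInts annotation above, AssertJ as before, and a list-accepting MyMax.max from earlier parts (the property itself, “the max of a list is an element of that list”, is my own example):

```kotlin
@Theory
fun maxIsAnElementOfTheList(@RandomInts ints: List<Int>) {
    // Only meaningful for non-empty lists.
    assumeThat(ints).isNotEmpty()
    assertThat(ints).contains(MyMax.max(ints))
}
```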
This syntax means “use RandomInts to supply the lists of integers for this Theory”.
RandomIntsSupplier looks like this:
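A sketch of the supplier, matching the description that follows (the 10% stopping probability is my own choice; it assumes the @RandomInts annotation from earlier):

```kotlin
import org.junit.experimental.theories.ParameterSignature
import org.junit.experimental.theories.ParameterSupplier
import org.junit.experimental.theories.PotentialAssignment
import kotlin.random.Random

class RandomIntsSupplier : ParameterSupplier() {

    override fun getValueSources(sig: ParameterSignature): List<PotentialAssignment> {
        // Retrieve the linked @RandomInts annotation; it must be present.
        val annotation = checkNotNull(sig.getAnnotation(RandomInts::class.java))
        val random = Random(annotation.seed)

        // One list per iteration, each named "ints" for error reporting.
        return generateSequence { randomInts(random) }
            .take(annotation.iterations)
            .map { PotentialAssignment.forValue("ints", it) }
            .toList()
    }

    // A list of random ints of random size: keep emitting values until
    // a stopping probability (10% here) is met.
    private fun randomInts(random: Random): List<Int> =
        generateSequence { random.nextInt() }
            .takeWhile { random.nextDouble() > 0.1 }
            .toList()
}
```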
The first line retrieves the linked annotation and checks that it is not null. We then unpack the seed and instantiate a random number generator.
Finally, we use the elegant methods from the Kotlin stdlib to build multiple lists of random integers. The private randomInts method composes a sequence of values emitted by the random number generator until a stopping probability has been met; in other words, it generates a list of random integers of a random size.
We then take as many of these lists as the iterations parameter on the annotation requires, which we defaulted to 50. PotentialAssignment simply attaches a name, “ints”, to each supplied value; this makes it easier for the Theories runner to print a meaningful error message when it falsifies a theory.
If you’re like me, you’re just itching to try it and see what it actually prints. Here you go:
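For instance, with a throwaway theory like this (again assuming the @RandomInts pieces above):

```kotlin
@RunWith(Theories::class)
class PrintRandomInts {
    @Theory
    fun whatDoTheListsLookLike(@RandomInts ints: List<Int>) {
        println(ints)
    }
}
```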
Abbreviated output:
Note we still don’t get shrinking to find the smallest counterexample. For that, you’ll need to persuade your architect to let you use KotlinTest or another framework 😉
Show her these articles and maybe she will be convinced, as I am, that property-based testing is the way forward. Then we can all start testing like it’s 2019 😉
Acknowledgements
Thanks to Roger Nesbitt and Nick Parfene for providing the platform and encouraging me to speak about this topic. Jamie Sanson showed me how to test LiveData emissions over time and very kindly reviewed the whole series.