Monday, March 27, 2017

JUnit 5 - Part II

In the first part, I gave you a brief introduction to the basic functionality of the new JUnit 5 framework: new asserts, testing exceptions and timing, parameterizing and structuring tests. In this part I will explain the extension mechanism, which serves as a replacement for the runners and rules.

The New Extension Model

JUnit 4 introduced the concept of runners, which allowed you to implement a strategy for how to run a test. You could specify the runner to use by attaching the @RunWith annotation to the test class, where the value of the annotation specified the runner class to use. You could do quite a lot of stuff with runners, but they had one central drawback: you could specify only one runner per test :-|

Since this was not flexible enough for most people, they invented the concept of rules. Rules allow you to intercept the test execution, so you can do all kinds of things here like test preparation and cleanup, but also conditionally executing a test. Additionally, you could combine multiple rules. But they could not satisfy all requirements, which is why runners were still needed to e.g. run a Spring test.

So there were two disjoint concepts applying to the same problem. In JUnit 5 both of them have been discarded and replaced by the extension mechanism. In one sentence: extensions allow you to implement callbacks that hook into the test lifecycle. You can attach an extension to a test using the @ExtendWith annotation, where the value specifies your extension class. In contrast to @RunWith, multiple extensions are allowed. Also, you may use an extension either on the test class or on a test method:

@ExtendWith(MockitoExtension.class)
class MockTests {
   // ...
}

   @ExtendWith(MockitoExtension.class)
   @Test
   void mockTest() {
      // ...
   }

An extension must implement the interface Extension, which is just a marker interface. The interesting stuff comes with the subtypes of Extension, which allow you to hook into the JUnit lifecycle.

Conditional Test Execution

This extension type allows you to decide whether a test should be executed at all. By implementing the interface ContainerExecutionCondition, you may decide about the execution of all tests in a test container, which is e.g. a test class:

public interface ContainerExecutionCondition extends Extension {

   ConditionEvaluationResult evaluate(ContainerExtensionContext context);
}

The context gives you access to the test container, e.g. the test class, so you may inspect it in order to make the decision. To decide per test instance whether to run a test or not, implement the interface TestExecutionCondition. The TestExtensionContext gives you access to the test method and the parent context:

public interface TestExecutionCondition extends Extension {

   ConditionEvaluationResult evaluate(TestExtensionContext context);
}

A practical example of a condition is the DisabledCondition, which implements both interfaces and checks whether either the test method or the container is marked with a @Disabled annotation. Have a look at the source code on GitHub.
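The core mechanic such a condition relies on, looking up an annotation on the test method or class, can be sketched in plain Java. The @SkipMe annotation below is a hypothetical stand-in for @Disabled, not part of JUnit:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class ConditionSketch {

    // Hypothetical marker annotation, standing in for JUnit's @Disabled.
    @Target({ElementType.TYPE, ElementType.METHOD})
    @Retention(RetentionPolicy.RUNTIME)
    @interface SkipMe {}

    static class SampleTests {
        @SkipMe
        void skippedTest() {}
        void normalTest() {}
    }

    // Mimics the check a condition performs: is either the method
    // or its declaring class annotated?
    static boolean isDisabled(Method testMethod) {
        return testMethod.isAnnotationPresent(SkipMe.class)
                || testMethod.getDeclaringClass().isAnnotationPresent(SkipMe.class);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isDisabled(SampleTests.class.getDeclaredMethod("skippedTest"))); // prints true
        System.out.println(isDisabled(SampleTests.class.getDeclaredMethod("normalTest")));  // prints false
    }
}
```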

TestInstancePostProcessor

The TestInstancePostProcessor allows you to - make an educated guess - post-process the test class instance. This is useful e.g. to perform dependency injection, and is used by the Spring and Mockito extensions to inject beans and mocks, respectively. We will use that soon in our practical example.

Test Lifecycle Callbacks

These extensions allow you to hook into JUnit's before/after lifecycle. You may implement one or even all callbacks, depending on your use case. The callbacks are:

BeforeAllCallback

This extension is called before all tests and before all methods marked with the @BeforeAll annotation.

BeforeEachCallback

This extension is called before each test of the associated container, and before all methods marked with the @BeforeEach annotation.

BeforeTestExecutionCallback

This extension is called before each test of the associated container, but - in contrast to the BeforeEachCallback - after all methods marked with the @BeforeEach annotation.

AfterTestExecutionCallback

This extension is called after each test of the associated container, but before all methods marked with the @AfterEach annotation.

AfterEachCallback

This extension is called after each test of the associated container, and after all methods marked with the @AfterEach annotation.

AfterAllCallback

This extension is called after all tests and after all methods marked with the @AfterAll annotation.
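To keep the ordering straight, here is a plain-Java sketch that simply logs the sequence in which the callbacks above and the user lifecycle methods run around a single test. This is a mnemonic, not real JUnit code:

```java
import java.util.ArrayList;
import java.util.List;

public class LifecycleOrderSketch {

    // Simulates the order in which JUnit 5 invokes extension callbacks
    // and user lifecycle methods for a single test.
    static List<String> runOneTest() {
        List<String> log = new ArrayList<>();
        log.add("BeforeAllCallback");
        log.add("@BeforeAll");
        log.add("BeforeEachCallback");
        log.add("@BeforeEach");
        log.add("BeforeTestExecutionCallback");
        log.add("@Test");
        log.add("AfterTestExecutionCallback");
        log.add("@AfterEach");
        log.add("AfterEachCallback");
        log.add("@AfterAll");
        log.add("AfterAllCallback");
        return log;
    }

    public static void main(String[] args) {
        runOneTest().forEach(System.out::println);
    }
}
```

Note how the *TestExecution callbacks wrap the test method itself, inside the @BeforeEach/@AfterEach methods.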

Set an Example - A Replacement for the TemporaryFolder Rule

Since extensions are a replacement for runners and rules, the old rules are no longer supported**. One frequently used rule is the TemporaryFolder rule, which provides temporary files and folders for every test, and also performs some cleanup afterwards. So we will now write an extension-based replacement using the extensions we have seen so far. You will find the source code in a GitHub repository accompanying this article. The main functionality of creating and cleaning up the files and folders will be provided by the class TemporaryFolder (we use the same name here as the original rule, so we can easily use it as a replacement). It has some methods to create files and folders, and also before() and after() methods which are supposed to be called before and after every test, respectively:

public class TemporaryFolder {
...
    public File newFile() throws IOException { ... }

    public File newFolder() throws IOException { ... }
    
    public void before() throws IOException { ... }

    public void after() throws IOException { ... }
}

We are now going to write an extension that injects the TemporaryFolder into a test instance, and automatically calls the before() and after() methods before and after executing a test, respectively. Something like this:

@ExtendWith(TemporaryFolderExtension.class)
public class TempFolderTest {

   private TemporaryFolder temporaryFolder;

   @BeforeEach
   public void setUp() throws IOException {
      assertNotNull(temporaryFolder);
   }

   @Test
   public void testTemporaryFolderInjection() {
      File file = temporaryFolder.newFile();
      assertNotNull(file);
      assertTrue(file.isFile());

      File folder = temporaryFolder.newFolder();
      assertNotNull(folder);
      assertTrue(folder.isDirectory());
   }
}

Let's start implementing that extension. We want to inject a TemporaryFolder into our test instance, and as already mentioned, the TestInstancePostProcessor is the extension designed for that use case. You get the test class instance and the extension context for the test class as parameters. So we need to inspect our test instance for fields of type TemporaryFolder, and assign a new instance to each such field:

public class TemporaryFolderExtension implements TestInstancePostProcessor {

   @Override
   public void postProcessTestInstance(Object testInstance, ExtensionContext context) throws Exception {
      for (Field field : testInstance.getClass().getDeclaredFields()) {
         if (field.getType().isAssignableFrom(TemporaryFolder.class)) {
            TemporaryFolder temporaryFolder = createTemporaryFolder(context, field);
            field.setAccessible(true);
            field.set(testInstance, temporaryFolder);
         }
      }
   }

   ...
}
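The reflection idiom used in postProcessTestInstance() can be tried out in isolation. The following self-contained sketch uses a dummy TemporaryFolder stand-in instead of the real helper class:

```java
import java.lang.reflect.Field;

public class InjectionSketch {

    // Dummy stand-in for the TemporaryFolder helper from the article.
    static class TemporaryFolder {}

    static class SampleTest {
        private TemporaryFolder temporaryFolder; // to be injected
    }

    // The same reflection idiom the extension uses: find matching fields
    // on the test instance and assign a fresh value.
    static void inject(Object testInstance) throws Exception {
        for (Field field : testInstance.getClass().getDeclaredFields()) {
            if (field.getType().isAssignableFrom(TemporaryFolder.class)) {
                field.setAccessible(true); // private fields need this
                field.set(testInstance, new TemporaryFolder());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        SampleTest test = new SampleTest();
        inject(test);
        System.out.println(test.temporaryFolder != null); // prints true
    }
}
```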

Not that hard at all. But we need to remember the created TemporaryFolder instances, in order to call the before() and after() methods on them. One might say: "No problem, just save them in some kind of collection member." But there is a catch: extensions must not have state! This was a design decision in order to be flexible about the lifecycle of extensions. But since state is essential for certain kinds of extensions, there is a store API:

interface Store {

   Object get(Object key);

   <V> V get(Object key, Class<V> requiredType);

   <K, V> Object getOrComputeIfAbsent(K key, Function<K, V> defaultCreator);

   <K, V> V getOrComputeIfAbsent(K key, Function<K, V> defaultCreator, Class<V> requiredType);

   void put(Object key, Object value);

   Object remove(Object key);

   <V> V remove(Object key, Class<V> requiredType);

}
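The getOrComputeIfAbsent() method behaves much like Map.computeIfAbsent(). Here is a plain-Java miniature of a store to illustrate the semantics (a sketch only; the real store is namespaced and also delegates lookups to parent contexts):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class StoreSketch {

    // Minimal stand-in for ExtensionContext.Store.
    static class Store {
        private final Map<Object, Object> values = new ConcurrentHashMap<>();

        // Returns the stored value for key, computing and storing it
        // on first access.
        @SuppressWarnings("unchecked")
        <K, V> V getOrComputeIfAbsent(K key, Function<K, V> defaultCreator) {
            return (V) values.computeIfAbsent(key, k -> defaultCreator.apply((K) k));
        }

        Object get(Object key) {
            return values.get(key);
        }
    }

    public static void main(String[] args) {
        Store store = new Store();
        Object first = store.getOrComputeIfAbsent("tempFolders", k -> new Object());
        Object second = store.getOrComputeIfAbsent("tempFolders", k -> new Object());
        System.out.println(first == second); // prints true: computed only once
    }
}
```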

The store is provided by the ExtensionContext, where the context is passed to the extension callbacks as a parameter. Be aware that these contexts are organized hierarchically, meaning you have a context for the test (TestExtensionContext) and for the surrounding test class (ContainerExtensionContext). And since test classes may be nested, so may be those container contexts. Each context provides its own store, so you have to take care where you are storing your stuff. Big words; let's just write our createTemporaryFolder() method that creates the TemporaryFolder, associates it in a map using the given field as the key, and saves that map in the context's store:

    protected TemporaryFolder createTemporaryFolder(ExtensionContext extensionContext, Member key) {
        Map<Member, TemporaryFolder> map =
                getStore(extensionContext).getOrComputeIfAbsent(extensionContext.getTestClass().get(),
                        (c) -> new ConcurrentHashMap<>(), Map.class);
        return map.computeIfAbsent(key, (k) -> new TemporaryFolder());
    }

    protected ExtensionContext.Store getStore(ExtensionContext context) {
        return context.getStore(ExtensionContext.Namespace.create(getClass(), context));
    }


Ok, so we now create and inject the field, and remember it in the store. Are we done now? Let's write a test. We want our extension to inject a TemporaryFolder that we will use to create files and folders - either in the set up or in a test - and these files are supposed to be deleted after the test:

@ExtendWith(TemporaryFolderExtension.class)
public class TempFolderTest {

    private List<File> createdFiles = new ArrayList<>();
    private TemporaryFolder temporaryFolder;


    private void rememberFile(File file) {
        createdFiles.add(file);
    }

    private void checkFileAndParentHasBeenDeleted(File file) {
        assertFalse(file.exists(), String.format("file %s has not been deleted", file.getAbsolutePath()));
        assertFalse(file.getParentFile().exists(), String.format("folder %s has not been deleted", file.getParentFile().getAbsolutePath()));
    }

    @BeforeEach
    public void setUp() throws IOException {
        assertNotNull(temporaryFolder);

        createdFiles.clear();

        // create a file in set up
        File file = temporaryFolder.newFile();
        rememberFile(file);
    }

    @AfterEach
    public void tearDown() throws Exception {
        for (File file : createdFiles) {
            checkFileAndParentHasBeenDeleted(file);
        }
    }

    @Test
    public void testTemporaryFolderInjection() throws Exception  {
        File file = temporaryFolder.newFile();
        rememberFile(file);
        assertNotNull(file);
        assertTrue(file.isFile());

        File folder = temporaryFolder.newFolder();
        rememberFile(folder);
        assertNotNull(folder);
        assertTrue(folder.isDirectory());
    }

}

Run the test, and...it fails:

org.opentest4j.AssertionFailedError: file C:\Users\Ralf\AppData\Local\Temp\junit6228173188033609420\junit1925268561755970404.tmp has not been deleted
   ...
   at com.github.ralfstuckert.junit.jupiter.TempFolderTest.checkFileAndParentHasBeenDeleted(TempFolderTest.java:32)
   at com.github.ralfstuckert.junit.jupiter.TempFolderTest.tearDown(TempFolderTest.java:55)

Well, no surprise, we are not cleaning up any files yet, so we need to implement that. We want to clean up the files right after the test, before the @AfterEach methods are triggered. The callback to do this is AfterTestExecutionCallback:

public class TemporaryFolderExtension implements AfterTestExecutionCallback, TestInstancePostProcessor {

   ...
   
   @Override
   public void afterTestExecution(TestExtensionContext extensionContext) throws Exception {
      if (extensionContext.getParent().isPresent()) {
         // clean up injected member
         cleanUpTemporaryFolder(extensionContext.getParent().get());
      }
   }

   protected void cleanUpTemporaryFolder(ExtensionContext extensionContext) {
      for (TemporaryFolder temporaryFolder : getTemporaryFolders(extensionContext)) {
         temporaryFolder.after();
      }
   }

   protected Iterable<TemporaryFolder> getTemporaryFolders(ExtensionContext extensionContext) {
      Map<Object, TemporaryFolder> map = getStore(extensionContext).get(extensionContext.getTestClass().get(), Map.class);
      if (map == null) {
         return Collections.emptySet();
      }
      return map.values();
   }

}

So we are now called right after the test has been executed, retrieve all TemporaryFolder instances we saved in the store, and call the after() method, which actually cleans up the files. One point to mention is that we are using the context's parent to retrieve the store. That's because we used the store of the ContainerExtensionContext (the class context) when we created the TemporaryFolders, but in afterTestExecution() we get passed the TestExtensionContext, which is the child context. So we have to climb up the context hierarchy in order to get the right context and the associated store. Let's run the test again...tada, green.




Provide the TemporaryFolder as a Parameter

We want to be able to provide a TemporaryFolder as a parameter to a test method. We will specify this as a test first:

   @Test
   public void testTemporaryFolderAsParameter(final TemporaryFolder tempFolder) throws Exception {
      assertNotNull(tempFolder);
      assertNotSame(tempFolder, temporaryFolder);

      File file = tempFolder.newFile();
      rememberFile(file);
      assertNotNull(file);
      assertTrue(file.isFile());
   }

Run the test...

org.junit.jupiter.api.extension.ParameterResolutionException: 
No ParameterResolver registered for parameter [com.github.ralfstuckert.junit.jupiter.extension.tempfolder.TemporaryFolder arg0] in executable [public void com.github.ralfstuckert.junit.jupiter.TempFolderTest.testTemporaryFolderAsParameter(com.github.ralfstuckert.junit.jupiter.extension.tempfolder.TemporaryFolder) throws java.lang.Exception].

This failure message already gives us a hint on what we have to do: a ParameterResolver. This is another extension interface, one that allows you to provide parameters for both test constructors and methods, so we will implement that. It consists of the two methods supports() and resolve(). The first one is called to check whether this extension is capable of providing the desired parameter, and the latter is then called to actually create an instance of that parameter:

public class TemporaryFolderExtension implements ParameterResolver, AfterTestExecutionCallback, TestInstancePostProcessor {

   @Override
   public boolean supports(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
      Parameter parameter = parameterContext.getParameter();
      return (extensionContext instanceof TestExtensionContext) && 
            parameter.getType().isAssignableFrom(TemporaryFolder.class);
   }

   @Override
   public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
      TestExtensionContext testExtensionContext = (TestExtensionContext) extensionContext;
      try {
         TemporaryFolder temporaryFolder = createTemporaryFolder(testExtensionContext, 
                                                   testExtensionContext.getTestMethod().get());

         Parameter parameter = parameterContext.getParameter();
         if (parameter.getType().isAssignableFrom(TemporaryFolder.class)) {
            return temporaryFolder;
         }

         throw new ParameterResolutionException("unable to resolve parameter for " + parameterContext);
      } catch (IOException e) {
         throw new ParameterResolutionException("failed to create temp file or folder", e);
      }
   }


That's it? No; if you run the test, it is still red, but with a different failure message, saying that a file has not been deleted as expected. Well, if you look at the implementation you will see that we are saving the created TemporaryFolder in the store of the testExtensionContext, using the test method as the key. Before, we remembered all instances we injected in the ContainerExtensionContext (the class context). So we have to take care of this one in our cleanup code:

   @Override
   public void afterTestExecution(TestExtensionContext extensionContext) throws Exception {
      // clean up test instance
      cleanUpTemporaryFolder(extensionContext);

      if (extensionContext.getParent().isPresent()) {
         // clean up injected member
         cleanUpTemporaryFolder(extensionContext.getParent().get());
      }
   }

Run the test again...green. Of course we could have climbed up to the class container extension context and used that store for remembering the new TemporaryFolder, but we want to fool around here a bit and try things out ;-)


More Fun with Parameters

By now we get a TemporaryFolder injected and passed as a parameter, and then we use it to create files and folders. Why that extra step? I'd like a fresh temporary file or folder directly passed as a parameter. Ok, it would be nice if we could express our desire for a temporary file. Also we need something to distinguish between files and folders, since they both have the type File...how about this:

   public void testTempFolder(@TempFolder final File folder) {
      rememberFile(folder);
      assertNotNull(folder);
      assertTrue(folder.exists());
      assertTrue(folder.isDirectory());
   }

   public void testTempFile(@TempFile final File file) {
      rememberFile(file);
      assertNotNull(file);
      assertTrue(file.exists());
      assertTrue(file.isFile());
   }

Very nice, we just mark the parameter with an annotation that describes our needs. And this is easy to accomplish with the parameter resolver. First, we need our parameter annotations:

@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFile {}

@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFolder {}

Also, we just need to extend our existing code a little bit:

   @Override
   public boolean supports(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
      Parameter parameter = parameterContext.getParameter();
      return (extensionContext instanceof TestExtensionContext) && (parameter.getType().isAssignableFrom(TemporaryFolder.class) ||
            (parameter.getType().isAssignableFrom(File.class) && (parameter.isAnnotationPresent(TempFolder.class)
                  || parameter.isAnnotationPresent(TempFile.class))));
   }

   @Override
   public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
      TestExtensionContext testExtensionContext = (TestExtensionContext) extensionContext;
      try {
         TemporaryFolder temporaryFolder = createTemporaryFolder(testExtensionContext, testExtensionContext.getTestMethod().get());

         Parameter parameter = parameterContext.getParameter();
         if (parameter.getType().isAssignableFrom(TemporaryFolder.class)) {
            return temporaryFolder;
         }
         if (parameter.isAnnotationPresent(TempFolder.class)) {
            return temporaryFolder.newFolder();
         }
         if (parameter.isAnnotationPresent(TempFile.class)) {
            return temporaryFolder.newFile();
         }

         throw new ParameterResolutionException("unable to resolve parameter for " + parameterContext);
      } catch (IOException e) {
         throw new ParameterResolutionException("failed to create temp file or folder", e);
      }
   }

Run the tests, aaaand...green. That was easy. Just one more improvement: wouldn't it be useful if we could name our test files? Like this:

   @Test
   public void testTempFile(@TempFile("hihi") final File file) {
      rememberFile(file);
      assertNotNull(file);
      assertTrue(file.exists());
      assertTrue(file.isFile());
      assertEquals("hihi", file.getName());
   }

That's easy. Just add a value to our file annotation, and evaluate it in the resolve() method:

@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFile {

   String value() default "";
}

   @Override
   public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
         ...
         if (parameter.isAnnotationPresent(TempFile.class)) {
            TempFile annotation = parameter.getAnnotation(TempFile.class);
            if (!annotation.value().isEmpty()) {
               return temporaryFolder.newFile(annotation.value());
            }
            return temporaryFolder.newFile();
         }


Annotation Composition

As already explained in the first part, JUnit 5 has support for composed and meta-annotations. This allows you to use JUnit annotations by inheritance (see the chapter on interface default methods). When searching for annotations, JUnit also inspects all superclasses, interfaces, and even the annotations themselves, meaning you can also use JUnit annotations as meta-annotations on your own annotations. Let's say you have a bunch of tests you would like to benchmark. In order to group them, you tag them with @Tag("benchmark"). The benchmark functionality is provided by your custom BenchmarkExtension:

@Tag("benchmark")
@ExtendWith(BenchmarkExtension.class)
class SearchEngineTest {
   ...

We will now extract both the tag and the extension to our own meta-annotation...

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Tag("benchmark")
@ExtendWith(BenchmarkExtension.class)
public @interface Benchmark {
}

...and use that meta-annotation in our tests instead:

@Benchmark
class SearchEngineTest {
   ...

So this is helpful for representing a bunch of annotations by one descriptive annotation.
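The article does not show the BenchmarkExtension itself, but the kind of measurement it might take, recording a start time before the test and computing the elapsed time afterwards, can be sketched with a hypothetical helper like this (not from the source):

```java
public class BenchmarkSketch {

    // A BenchmarkExtension could record System.nanoTime() in
    // beforeTestExecution() and report the difference in
    // afterTestExecution(); this helper condenses that idea.
    static long timeNanos(Runnable test) {
        long start = System.nanoTime();
        test.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeNanos(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += i; // some work to measure
            }
        });
        System.out.println("elapsed nanos: " + elapsed);
    }
}
```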

But there are other use cases, especially if you are dealing with libraries that themselves support composed and meta-annotations, like e.g. Spring. Spring has support for running integration tests with JUnit, where the application context is created before the test is run. There is support for both JUnit 4 (using the SpringJUnit4ClassRunner) and JUnit 5 (using the SpringExtension). So what is our use case? If you are working with Spring persistence, it is very easy to write integration tests that check your custom persistence logic by real interaction with the database. But after a test you need to clean up your test dirt. Some people do so by tracking the objects they have inserted for testing purposes, and deleting them after the tests. So how about writing an extension that tracks certain entities created during a test, and automatically deletes them afterwards?


The MongoCleanup Extension

Let's say we have an entity Ticket, a MongoDB-based TicketRepository, and an integration test TicketRepositoryIT. What we want to achieve is that we mark our test with the @MongoCleanup annotation, which gets passed one or multiple entity classes to watch. All instances of these entities saved during the test will be automatically deleted after the test has finished:

@MongoCleanup(Ticket.class)
@ExtendWith(SpringExtension.class)
@SpringBootTest
public class TicketRepositoryIT {

   @Autowired
   private TicketRepository repository;

   @Test
   @DisplayName("Test the findByTicketId() method")
   public void testSaveAndFindTicket() throws Exception {
      Ticket ticket1 = new Ticket("1", "blabla");
      repository.save(ticket1);
      Ticket ticket2 = new Ticket("2", "hihi");
      repository.save(ticket2);

      ...
   }

In order to do so, we have to register a bean in the Spring context that tracks saved instances and provides some functionality to delete them. Also, we need an extension that has access to the Spring context, so it can retrieve that bean and trigger the deletion after the test has finished. Beans first:

public class MongoCleaner implements ApplicationListener<AfterSaveEvent> {

   @Override
   public void onApplicationEvent(AfterSaveEvent event) {
      // remember saved entities
      ...
   }

   public void prepare(final List<Class<?>> entityTypes) {
      // prepare entities to watch
      ...
   }

   public Map<Class<?>, Set<String>> cleanup() {
      // delete watched entities
      ...
   }
   ...
}

The concrete implementation is not the point here; if you are interested, have a look at the accompanying GitHub project. The bean is provided to the Spring context using a configuration class:

@Configuration
public class MongoCleanerConfig {

   @Bean
   public MongoCleaner mongoCleaner() {
      return new MongoCleaner();
   }
}

And now the extension: it retrieves the MongoCleaner bean from the Spring context using a static function of the SpringExtension, and calls the prepare() and cleanup() methods before and after each test, respectively:

public class MongoCleanupExtension implements BeforeEachCallback, AfterEachCallback {

   @Override
   public void beforeEach(TestExtensionContext context) throws Exception {
      MongoCleaner mongoCleaner = getMongoCleaner(context);
      List<Class<?>> entityTypesToCleanup = getEntityTypesToCleanup(context);
      mongoCleaner.prepare(entityTypesToCleanup);
   }

   @Override
   public void afterEach(TestExtensionContext context) throws Exception {
      MongoCleaner mongoCleaner = getMongoCleaner(context);
      Map<Class<?>, Set<String>> cleanupResult = mongoCleaner.cleanup();
      cleanupResult.forEach((entityType, ids) -> {
         context.publishReportEntry(String.format("deleted %s entities", entityType.getSimpleName()), ids.toString());
      });
   }

   protected MongoCleaner getMongoCleaner(ExtensionContext context) {
      ApplicationContext applicationContext = SpringExtension.getApplicationContext(context);
      MongoCleaner mongoCleaner = applicationContext.getBean(MongoCleaner.class);
      return mongoCleaner;
   }

   protected List<Class<?>> getEntityTypesToCleanup(ExtensionContext context) {
      Optional<AnnotatedElement> element = context.getElement();
      MongoCleanup annotation = AnnotationUtils.findAnnotation(context.getTestClass().get(), MongoCleanup.class);
      return Arrays.asList(annotation.value());
   }

}

Well, but how is the bean configuration passed to Spring? And what about our MongoCleanupExtension, which must be provided to JUnit via an @ExtendWith annotation?!? Now that's the use case for a meta-annotation. We will create our own annotation @MongoCleanup, which is itself annotated with both the JUnit @ExtendWith AND the Spring @Import annotation:

@Target({ ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Import(MongoCleanerConfig.class)
@ExtendWith(MongoCleanupExtension.class)
public @interface MongoCleanup {

   /**
    * @return the entity classes to clean up.
    */
   Class[] value();
}

The @ExtendWith(MongoCleanupExtension.class) is processed by JUnit and hooks our extension into the test lifecycle. The @Import(MongoCleanerConfig.class) is processed by Spring and adds our MongoCleaner to the application context. So by adding one single annotation to our test class, we add functionality that hooks into two different frameworks. And this is possible since they both support composed and meta-annotations.


Conclusion

JUnit 5 is a complete rewrite, and it looks promising. The separation of the framework into a platform and a test engine SPI decouples the tool providers from the test engines, giving you support for any test engine that implements the SPI. And the engine providers may improve and refactor their code without affecting the tool providers, which was quite a problem in the past. The use of lambdas lets you write more concise test code, and nested test classes and dynamic tests give you some new flexibility to structure your tests. The runners and rules API has been replaced by the extension API, providing you with a clean mechanism to extend the framework. Be aware that the work on JUnit 5 is still in progress, so some APIs might change before the release in Q3. That was quite a lot more stuff than planned, but I hope you got an idea of what to do with JUnit 5.

Best regards
Ralf
That's what makes this a particularly difficult sort of extraordinary case. The kind I like.
Jupiter Jones - Jupiter Ascending


** Limited Support for Some Old Rules

Since some people will miss rules like TemporaryFolder, the JUnit 5 team added support for a selection of rules:

  • org.junit.rules.ExternalResource (including org.junit.rules.TemporaryFolder)
  • org.junit.rules.Verifier (including org.junit.rules.ErrorCollector)
  • org.junit.rules.ExpectedException

These are provided by the separate artifact junit-jupiter-migration-support. In order to use those rules, you have to add one of the responsible extensions to your test class; e.g. for Verifier it is the extension VerifierSupport. Or you just annotate your test with @EnableRuleMigrationSupport, which composes all rule support extensions:

@Target({ ElementType.TYPE})
@ExtendWith(ExternalResourceSupport.class)
@ExtendWith(VerifierSupport.class)
@ExtendWith(ExpectedExceptionSupport.class)
public @interface EnableRuleMigrationSupport {}

Monday, March 20, 2017

JUnit 5 - Part I

More than a decade ago I wrote an introduction to JUnit 4, which was – to be quite honest – just a catch-up with the more advanced TestNG. Now JUnit 5 is at the door, and it is a complete rewrite, so it's worth having a fresh look at it. In the first installment of this two-part article I will describe what's new in basic usage: new asserts, testing exceptions and timing, parameterizing and structuring tests. JUnit 5 comes with a comprehensive user guide; most examples shown in this part are taken from there. So if you have already read that, you may skip this and continue directly with the second part, which describes the new extension API that replaces the old runner and rules mechanisms.

One Jar fits all?

Prior to JUnit 5, all parts of the JUnit framework were packed into one jar: the API to write tests - e.g. assertions - and the API and implementation to actually run tests; for some time, even Hamcrest was baked into it. This was a problem, since parts of JUnit could not be easily refactored without affecting the tool providers. JUnit 5 is a complete redesign and has been split into a platform and test engines. The platform provides mechanisms for discovering and launching tests, and serves as a foundation for the tool providers (e.g. IDE manufacturers). The platform also defines an SPI for test engines, where those test engines actually define the API to write a test. JUnit 5 provides two engines, the jupiter and the classic engine. The jupiter engine is used to write new JUnit 5 style tests. The classic engine can be used to run your old JUnit 4 tests.

So this decouples the tool manufacturers from the test framework providers. This also means that other test frameworks may be adapted to run on the JUnit platform. You may even write your own test engine, and run your tests in any tool that supports the platform. Currently IntelliJ runs JUnit 5 tests out of the box; other IDEs will follow soon. There is also support for Gradle and Maven, and a console runner, so you may start using JUnit 5 right now.

Platform, service providers, engines, a whole bunch of JUnit jars...holy chihuahua, what do I need to write JUnit tests? Well, to write tests with JUnit 5, you just need the junit-jupiter-api artifact, which defines the JUnit 5 API. It contains the API to write tests and extensions, and this is what we are going to use in this article. The API in turn is implemented by the junit-jupiter-engine. The current version is 5.0.0-M3; the final release is scheduled for Q3 2017.
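If you are using Maven, the API artifact would be declared roughly like this (coordinates as of milestone 5.0.0-M3; check the user guide for current versions):

```xml
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <version>5.0.0-M3</version>
    <scope>test</scope>
</dependency>
```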

Annotations

The first thing to notice is that the new API lives in a new namespace, org.junit.jupiter.api. Most of the annotation names have been kept; the notable renames are the lifecycle annotations: @Before is now @BeforeEach, @BeforeClass is now @BeforeAll, and analogously for @AfterEach and @AfterAll:

import static org.junit.jupiter.api.Assertions.fail;

import org.junit.jupiter.api.AfterAll; 
import org.junit.jupiter.api.AfterEach; 
import org.junit.jupiter.api.BeforeAll; 
import org.junit.jupiter.api.BeforeEach; 
import org.junit.jupiter.api.Disabled; 
import org.junit.jupiter.api.Test; 

class ClassicTests { 

   @BeforeAll 
   static void setUpAll() { } 

   @BeforeEach 
   void setUp() { } 

   @AfterEach 
   void tearDown() { } 

   @AfterAll 
   static void tearDownAll() { } 

   @Test 
   void succeedingTest() { } 

   @Test 
   void failingTest() { 
      fail("a failing test");
   } 

   @Test 
   @Disabled("for demonstration purposes") 
   void skippedTest() { 
      // not executed 
   } 
}

Asserts

All assertions are still available via static imports, where the enclosing class is now org.junit.jupiter.api.Assertions:

import static org.junit.jupiter.api.Assertions.assertEquals;
...
  assertEquals(2, 2, "the message is now the last argument");

The message is now the last argument, in order to get the test facts into pole position. Another improvement is the possibility to create the message lazily using a lambda expression. This avoids unnecessary string construction when the assertion passes, so the test may execute faster:

assertTrue(2 == 2, () -> "This message is created lazily");

Assertions may be grouped using assertAll(). All assertions in the group will be executed, whether they fail or not, and the results are collected and reported together.

   @Test 
   void groupedAssertions() { 
      // In a grouped assertion all assertions are executed, and any 
      // failures will be reported together. 
      assertAll("address", 
         () -> assertEquals("John", address.getFirstName()), 
         () -> assertEquals("User", address.getLastName()) 
      ); 
   } 

In JUnit 4, exception testing was first done using the expected property of the @Test annotation. This had the drawback that you were neither able to inspect the exception nor to continue the test after the exception had been thrown. The ExpectedException rule was later introduced to work around this flaw. Thanks to lambda expressions, there is now an assertion for testing exceptions: it executes a block of code that is expected to throw an exception, returns that exception for inspection, and lets the test continue. And all that with just some local code:

   @Test 
   void exceptionTesting() { 
      Throwable exception = assertThrows(IllegalArgumentException.class, () -> { 
         throw new IllegalArgumentException("that hurts"); 
      }); 
      assertEquals("that hurts", exception.getMessage()); 
   }

JUnit 5 now also provides asserts that allow you to time the code under test. This assertion comes in two flavours: the first just measures the time and fails if the given time has elapsed. The second is preemptive, meaning the test is aborted immediately when the time is up:

   @Test 
   void timeoutExceeded() { 
      // The following assertion fails with an error message similar to: 
      // execution exceeded timeout of 10 ms by 91 ms 
      assertTimeout(ofMillis(10), () -> { 
         // Simulate task that takes more than 10 ms. 
         Thread.sleep(100); 
      });
   } 
   
   @Test 
   void timeoutExceededWithPreemptiveTermination() { 
      // The following assertion fails with an error message similar to: 
      // execution timed out after 10 ms 
      assertTimeoutPreemptively(ofMillis(10), () -> { 
         // Simulate task that takes more than 10 ms. 
         Thread.sleep(100); 
      }); 
   }

Farewell to assertThat()

As already said, JUnit 5 is no longer a one-size-fits-all jar, but has been split up into different responsibilities. The junit-jupiter-api artifact now contains just the core API, nothing less, nothing more. Consequently, the baked-in Hamcrest support has been thrown out. As a replacement, just use the bare Hamcrest functionality; all that changes are some imports:

import static org.hamcrest.CoreMatchers.equalTo; 
import static org.hamcrest.CoreMatchers.is; 
import static org.hamcrest.MatcherAssert.assertThat;

...
   assertThat(2 + 1, is(equalTo(3)));

Assumptions

Some assumptions are gone in JUnit 5, e.g. assumeThat() (see Farewell to assertThat()) and assumeNoException(). But you may now use lambda expressions for the condition and the (lazy) message, and there is also a signature that allows you to execute a block of code only if the condition matches:

   @Test
   void assumptionWithLambdaCondition() {
      assumeTrue(() -> "CI".equals(System.getenv("ENV")));
      // remainder of test
   }

   @Test
   void assumptionWithLambdaMessage() {
      assumeTrue("DEV".equals(System.getenv("ENV")),
         () -> "Aborting test: not on developer workstation");
      // remainder of test
   }

   @Test
   void assumptionWithCodeBlock() {
      assumingThat("CI".equals(System.getenv("ENV")),
         () -> {
            // perform these assertions only on the CI server
            assertEquals(2, 2);
         });

      // perform these assertions in all environments
      assertEquals("a string", "a string");
   }

Naming, Disabling and Filtering

Test runners use the test method name to visualize the test, so in order to make the result meaningful to us, we used to write XXXL-long method names. JUnit 5 allows you to add a @DisplayName annotation that provides a readable description of the test, which may contain blanks and all other kinds of characters. So you are no longer restricted to the set of characters allowed in Java method names:

   @Test 
   @DisplayName("Custom test name containing spaces") 
   void testWithDisplayNameContainingSpaces() { }

The @Ignore annotation has been renamed to @Disabled. You may still provide a reason. Nothing more to say on that:

   @Disabled("This test will be omitted")
   @Test void testWillBeSkipped() { }

We all use tags to mark content with some metadata in order to find, group or filter the data we are looking for in a certain context. Now tags are supported in JUnit as well. You may add one or multiple tags to a test method and/or the complete test class:

   @Test 
   @Tag("acceptance") 
   void testingCalculation() { }

Then you can use these tags to group or filter the tests you want to run. Here is a Maven example:

<plugin> 
   <artifactId>maven-surefire-plugin</artifactId> 
   <version>2.19</version> 
   <configuration> 
      <properties> 
         <includeTags>acceptance</includeTags> 
         <excludeTags>integration, regression</excludeTags> 
      </properties> 
   </configuration> 
   <dependencies> ... </dependencies> 
</plugin>
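
JUnit 5 also supports composed annotations (more on that below), so you can bundle a tag and @Test into an annotation of your own. The @AcceptanceTest name here is just a made-up example:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Meta-annotated with @Tag and @Test: methods annotated with
// @AcceptanceTest are discovered as regular tests and carry
// the "acceptance" tag, so the filter above will match them.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Tag("acceptance")
@Test
public @interface AcceptanceTest {
}
```

A test method annotated with @AcceptanceTest then needs neither @Test nor @Tag anymore, which keeps the tag name in one place.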

Nested Tests

Sometimes it makes sense to organize tests in a hierarchy, e.g. to reflect the hierarchy of the structure under test. JUnit 5 allows you to organize tests in nested classes by simply marking them with the @Nested annotation. The test discovery will organize these classes in nested test containers, which may be visualized reflecting this structure. A core benefit is that you may also organize the before and after methods hierarchically, meaning you may define a test setup for a group of nested tests:

...
import org.junit.jupiter.api.Nested;

@DisplayName("A stack")
class TestingAStackDemo {

   Stack<Object> stack;

   @Test
   @DisplayName("is instantiated with new Stack()")
   void isInstantiatedWithNew() {
      new Stack<>();
   }

   @Nested
   @DisplayName("when new")
   class WhenNew {

      @BeforeEach
      void createNewStack() {
         stack = new Stack<>();
      }

      @Test
      @DisplayName("is empty")
      void isEmpty() {
         assertTrue(stack.isEmpty());
      }

      @Test
      @DisplayName("throws EmptyStackException when popped")
      void throwsExceptionWhenPopped() {
         assertThrows(EmptyStackException.class, () -> stack.pop());
      }

      @Test
      @DisplayName("throws EmptyStackException when peeked")
      void throwsExceptionWhenPeeked() {
         assertThrows(EmptyStackException.class, () -> stack.peek());
      }

      @Nested
      @DisplayName("after pushing an element")
      class AfterPushing {

         String anElement = "an element";

         @BeforeEach
         void pushAnElement() {
            stack.push(anElement);
         }

         @Test
         @DisplayName("it is no longer empty")
         void isNotEmpty() {
            assertFalse(stack.isEmpty());
         }

         @Test
         @DisplayName("returns the element when popped and is empty")
         void returnElementWhenPopped() {
            assertEquals(anElement, stack.pop());
            assertTrue(stack.isEmpty());
         }

         @Test
         @DisplayName("returns the element when peeked but remains not empty")
         void returnElementWhenPeeked() {
            assertEquals(anElement, stack.peek());
            assertFalse(stack.isEmpty());
         }
      }
   }
}

You may nest tests arbitrarily deep, but be aware that this works for non-static inner classes only. Due to this restriction you cannot use @BeforeAll/@AfterAll methods in nested tests, since those must be static. Let's have a look at how IntelliJ represents those nested test containers:



Dynamic Tests

More than once I had the case that I needed to repeat the same test logic for different contexts, e.g. different parameters. There are certain ways to manage that, but they all suck. I always wished I could just generate my tests on the fly for the different contexts. And finally the JUnit gods must have heard my prayers: create a method that returns an Iterator, Iterable, Collection or Stream of DynamicTests, and mark it with @TestFactory:

import static org.junit.jupiter.api.DynamicTest.dynamicTest;
import org.junit.jupiter.api.DynamicTest;
...
class DynamicTestsDemo {

   @TestFactory
   Iterator<DynamicTest> dynamicTestsFromIterator() {
      return Arrays.asList(
         dynamicTest("1st dynamic test", () -> assertTrue(true)),
         dynamicTest("2nd dynamic test", () -> assertEquals(4, 2 * 2))
      ).iterator();
   }

   @TestFactory
   Collection<DynamicTest> dynamicTestsFromCollection() {
      return Arrays.asList(
            dynamicTest("3rd dynamic test", () -> assertTrue(true)),
            dynamicTest("4th dynamic test", () -> assertEquals(4, 2 * 2))
      );
   }

   @TestFactory
   Stream<DynamicTest> dynamicTestsFromIntStream() {
      // Generates tests for the first 10 even integers.
      return IntStream.iterate(0, n -> n + 2).limit(10).mapToObj(
         n -> dynamicTest("test" + n, () -> assertTrue(n % 2 == 0)));
   }
}



Unlike nested tests, dynamic tests are - at the time of this writing - not organized in nested containers, but run in the context of the test factory method. This has a significant flaw: the before/after lifecycle is not executed for every dynamic test, but only once for the test factory method. Maybe this will change before the final release.

Parameter Resolver

JUnit 5 provides an API to pass parameters to the test constructor and methods, including the before and after methods. You can define a ParameterResolver, which is responsible for providing parameters of a certain type. Besides passing parameters, this also enables things like dependency injection, and it builds the foundation for e.g. the Spring and Mockito extensions. Since this is part of the extension mechanism, it will be described in more detail in the second part of this article. For now I will just give you an example using the built-in TestInfo parameter resolver, which provides some data on the current test method. Just add TestInfo as a parameter in your test, and JUnit will automatically inject it:

   @Test
   @DisplayName("my wonderful test")
   @Tag("this is my tag")
   void test(TestInfo testInfo) {
      assertEquals("my wonderful test", testInfo.getDisplayName());
      assertTrue(testInfo.getTags().contains("this is my tag"));
   }

Interface Default Methods

Since JUnit 5 has support for composed annotations, you may also use test annotations on interfaces...and also on interface default methods. This allows you to write interface contracts as poor man's mixins instead of abstract base classes. Here is an example for the Comparable interface:

public interface ComparableContract<T extends Comparable<T>> {

   T createValue();
   
   T createSmallerValue();

   @Test
   default void returnsZeroWhenComparedToItself() {
      T value = createValue();
      assertEquals(0, value.compareTo(value));
   }

   @Test
   default void returnsPositiveNumberComparedToSmallerValue() {
      T value = createValue();
      T smallerValue = createSmallerValue();
      assertTrue(value.compareTo(smallerValue) > 0);
   }

   @Test
   default void returnsNegativeNumberComparedToLargerValue() {
      T value = createValue();
      T smallerValue = createSmallerValue();
      assertTrue(smallerValue.compareTo(value) < 0);
   }

}

If we now write a test for a class that implements the Comparable interface, we can inherit all the tests provided by ComparableContract. All we have to do is implement the two methods createValue() and createSmallerValue(), which provide appropriate values of our class.

public class StringTest implements ComparableContract<String> {

   @Override
   public String createValue() {
      return "foo";
   }

   @Override
   public String createSmallerValue() {
      return "bar"; // "bar" < "foo"
   }

    @Test
    public void someStringSpecificTest() { }

    @Test
    public void anotherStringSpecificTest() { }
}

If we now run the StringTest, both our specific tests and the tests inherited from the ComparableContract are executed.



Ok, that's enough for today. Next week I will explain the JUnit 5 extension mechanism.

Best regards
Ralf
Perhaps a random drawing might be the most impartial way to figure things out.
Jupiter Jones - Jupiter Ascending