
Monday, March 03, 2014

When/how to use Mockito Answer

by Hongfei Ding, Software Engineer, Shanghai

Mockito is a popular open source Java testing framework that allows the creation of mock objects. For example, we have the below interface used in our SUT (System Under Test):
interface Service {
  Data get();
}

In our test, normally we want to fake the Service’s behavior to return canned data, so that the unit test can focus on testing the code that interacts with the Service. We use the when-return clause to stub a method.
when(service.get()).thenReturn(cannedData);

But sometimes you need mock object behavior that's too complex for when-return. An Answer object can be a clean way to do this once you get the syntax right.

A common usage of Answer is to stub asynchronous methods that have callbacks. For example, we have mocked the interface below:
interface Service {
  void get(Callback callback);
}

Here you’ll find that when-return is not that helpful anymore. Answer is the replacement. For example, we can emulate a success by calling the onSuccess function of the callback.
doAnswer(new Answer<Void>() {
    public Void answer(InvocationOnMock invocation) {
       Callback callback = (Callback) invocation.getArguments()[0];
       callback.onSuccess(cannedData);
       return null;
    }
}).when(service).get(any(Callback.class));

Answer can also be used to make smarter stubs for synchronous methods. Smarter here means the stub can return a value depending on the input, rather than canned data. It’s sometimes quite useful. For example, we have mocked the Translator interface below:
interface Translator {
  String translate(String msg);
}

We might choose to mock Translator to return a constant string and then assert the result. However, that test is not thorough, because the input to the translator function has been ignored. To improve this, we might capture the input and do extra verification, but then we start to fall into the “testing interaction rather than testing state” trap.

A good usage of Answer is to reverse the input message as a fake translation, so that both things are assured by checking the result string: 1) translate has been invoked, and 2) the msg being translated is correct. Notice that this time we’ve used the thenAnswer syntax, a twin of doAnswer, for stubbing a non-void method.
when(translator.translate(any(String.class))).thenAnswer(reverseMsg());
...
// extracted to a method to give it a descriptive name
private static Answer<String> reverseMsg() {
  return new Answer<String>() {
    public String answer(InvocationOnMock invocation) {
      return reverseString((String) invocation.getArguments()[0]);
    }
  };
}
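The reverseString helper is referenced but never shown in the post; a minimal version (my guess at the intended implementation) might be:

```java
// Hypothetical helper assumed by reverseMsg(); not shown in the post.
final class StringUtil {
  // Reverse a string, e.g. "abc" becomes "cba".
  static String reverseString(String s) {
    return new StringBuilder(s).reverse().toString();
  }
}
```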

Last but not least, if you find yourself writing many nontrivial Answers, you should consider using a fake instead.

Wednesday, September 16, 2009

Checked exceptions I love you, but you have to go


Once upon a time Java created an experiment called checked exceptions: you know, you have to declare exceptions or catch them. Since that time, no other language (that I know of) has decided to copy this idea, yet somehow Java developers are in love with checked exceptions. Here, I am going to "try" to convince you that checked exceptions, even though they look like a good idea at first glance, are actually not a good idea at all:

Empirical Evidence

Let's start with an observation of your code base. Look through your code and tell me what percentage of catch blocks just rethrow or print the error? My guess is that it is in the high 90s. I would go as far as to say 98% of catch blocks are meaningless, since they just print an error or rethrow the exception, which will later be printed as an error. The reason for this is very simple. Most exceptions, such as FileNotFoundException, IOException, and so on, are a sign that we as developers have missed a corner case. The exceptions are used as a way of informing us that we, as developers, have messed up. So if we did not have checked exceptions, the exception would be thrown, the main method would print it, and we would be done with it (optionally we would catch all exceptions in main and log them if we are a server).

Checked exceptions force me to write catch blocks which are meaningless: more code, harder to read, and a higher chance that I will mess up the rethrow logic and eat the exception.

Lost in Noise

Now let's look at the 2-5% of catch blocks which are not rethrows, where real, interesting logic happens. Those interesting bits of useful and important information are lost in the noise, since my eye has been trained to skim over catch blocks. I would much rather have code where a catch indicates "pay attention! something interesting is happening here!" rather than "it is just a rethrow." Now, if we did not have checked exceptions, you would write your code without catch blocks, test your code (you do test, right?), realize that under some circumstances an exception is thrown, and deal with it by writing the catch block. In such a case forgetting to write a catch block is no different than forgetting to write the else block of an if statement. We don't have checked ifs and yet no one misses them, so why do we need to tell developers that FileNotFoundException can happen? What if the developer knows for a fact that it cannot happen, since he has just placed the file there, so such an exception would mean that your filesystem has just disappeared (and your application is in no place to handle that)?

Checked exceptions make me skim the catch blocks, as most are just rethrows, making it likely that I will miss something important.

Unreachable Code

I love to write tests first and implement as a consequence of tests. In such a situation you should always have 100% coverage since you are only writing what the tests are asking for. But you don't! It is less than 100% because checked exceptions force you to write catch blocks which are impossible to execute. Check this code out:
String bytesToString(byte[] bytes) {
 ByteArrayOutputStream out = new ByteArrayOutputStream();
 try {
   out.write(bytes);
   out.close();
   return out.toString();
 } catch (IOException e) {
   // This can never happen!
   // Should I rethrow? Eat it? Print an error?
   return null; // unreachable
 }
}

ByteArrayOutputStream will never throw an IOException! You can look through its implementation and see that this is true! So why are you making me catch a phantom exception which can never happen and which I cannot write a test for? As a result I cannot claim 100% coverage because of things outside my control.
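One way to keep such a dead branch honest (my suggestion, not part of the original post) is to rethrow as an AssertionError, so that if the "impossible" ever does happen the failure is loud instead of silently eaten:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

final class Bytes {
  // Same logic as above, but the dead catch block fails loudly
  // rather than forcing a choice between eating and rethrowing.
  static String bytesToString(byte[] bytes) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try {
      out.write(bytes);
      out.close();
      return out.toString();
    } catch (IOException e) {
      // ByteArrayOutputStream never actually throws IOException.
      throw new AssertionError("unreachable", e);
    }
  }
}
```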

Checked exceptions create dead code which will never execute.

Closures Don't Like You

Java does not have closures, but it has the visitor pattern. Let me explain with a concrete example. I was creating a custom class loader and needed to override the loadClass() method on MyClassLoader, which throws ClassNotFoundException under some circumstances. I use the ASM library, which allows me to inspect Java bytecodes. ASM works as a visitor pattern: I write visitors, and as ASM parses the bytecodes it calls specific methods on my visitor implementation. One of my visitors, as it is examining bytecodes, decides that things are not right and needs to throw the ClassNotFoundException which the class loader contract says it should throw. But now we have a problem. What we have on the stack is MyClassLoader -> ASMLibrary -> MyVisitor. MyVisitor wants to throw an exception which MyClassLoader expects, but it cannot, since ClassNotFoundException is checked and ASMLibrary does not declare it (nor should it). So I have to throw RuntimeClassNotFoundException from MyVisitor, which can pass through ASMLibrary, and which MyClassLoader can then catch and rethrow as ClassNotFoundException.

Checked exceptions get in the way of functional programming.
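The tunneling trick described above can be sketched as follows (the class and method names here are illustrative stand-ins, not the original ASM code):

```java
// Unchecked wrapper that can tunnel through code which does not
// declare ClassNotFoundException.
class RuntimeClassNotFoundException extends RuntimeException {
  RuntimeClassNotFoundException(String className) {
    super(className);
  }
}

class Tunneling {
  // Stands in for the ASM library: it calls our visitor but
  // declares no checked exceptions.
  static void visitBytecode(Runnable visitor) {
    visitor.run();
  }

  // Stands in for MyClassLoader: it unwraps the tunneled exception
  // back into the checked one its contract requires.
  static Class<?> loadClass(String name) throws ClassNotFoundException {
    try {
      visitBytecode(() -> {
        // MyVisitor decides things are not right:
        throw new RuntimeClassNotFoundException(name);
      });
      return Object.class; // success path (placeholder)
    } catch (RuntimeClassNotFoundException e) {
      throw new ClassNotFoundException(e.getMessage());
    }
  }
}
```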

Lost Fidelity

Suppose the java.sql package were implemented with useful exceptions such as SqlDuplicateKeyException and SqlForeignKeyViolationException and so on (we can wish), and suppose these exceptions are checked (which they are). We say that the SQL package has high-fidelity exceptions, since each exception maps to a very specific problem. Now let's say we have the same setup as before, where there is some other layer between us and the SQL package; that layer can either redeclare all of the exceptions, or, more likely, throw its own. Let's look at an example. Hibernate is an object-relational mapper, which means it converts your SQL rows into Java objects. So on the stack you have MyApplication -> Hibernate -> SQL. Here Hibernate is trying hard to hide the fact that you are talking to SQL, so it throws HibernateExceptions instead of SQLExceptions. And here lies the problem. Your code knows that there is SQL under Hibernate, and so it could have handled SqlDuplicateKeyException in some useful way, such as showing an error to the user, but Hibernate was forced to catch the exception and rethrow it as a generic HibernateException. We have gone from a high-fidelity SqlDuplicateKeyException to a low-fidelity HibernateException, and so MyApplication cannot do anything with it. Now, Hibernate could have thrown HibernateDuplicateKeyException, but that would mean Hibernate now has the same exception hierarchy as SQL, and we would be duplicating effort and repeating ourselves.

Rethrowing checked exceptions causes you to lose fidelity and hence makes it less likely that you could do something useful with the exception later on.

You can't do Anything Anyway

In most cases when an exception is thrown there is no recovery. We show a generic error to the user and log the exception so that we can file a bug and make sure that that exception will not happen again. Since 90+% of exceptions are bugs in our code, and all we do is log them, why are we forced to rethrow them over and over again?

It is rare that anything useful can be done when a checked exception happens; in most cases we die with an error! Therefore I want that to be the default behavior of my code, with no additional typing.

How I deal with the code

Here is my strategy to deal with checked exceptions in java:

  • Always catch all checked exceptions at source and rethrow them as LogRuntimeException.

    • LogRuntimeException is my runtime un-checked exception which says I don't care just log it.

    • Here I have lost Exception fidelity.

  • None of my methods declare any exceptions.

  • As I discover that I need to deal with a specific exception I go back to the source where LogRuntimeException was thrown and I change it to <Specific>RuntimeException (This is rarer than you think)

    • I am restoring the exception fidelity only where needed.

  • The net effect is that when you come across a try-catch clause you had better pay attention, as interesting things are happening there.

    • Very few try-catch clauses; the code is much easier to read.

    • Very close to 100% test coverage as there is no dead code in my catch blocks.
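The strategy above might be sketched like this (LogRuntimeException is the author's name; the surrounding code is an illustrative guess):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Unchecked "I don't care, just log it" wrapper.
class LogRuntimeException extends RuntimeException {
  LogRuntimeException(Throwable cause) {
    super(cause);
  }
}

class FileReading {
  // Catch the checked exception at the source and rethrow unchecked,
  // so no caller up the stack has to declare or catch anything.
  static String readFile(String path) {
    try {
      return new String(Files.readAllBytes(Paths.get(path)));
    } catch (IOException e) {
      throw new LogRuntimeException(e);
    }
  }
}
```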

Wednesday, January 07, 2009

Interfacing with hard-to-test third-party code

by Miško Hevery

Shahar asks an excellent question about how to deal with frameworks which we use in our projects, but which were not written with testability in mind.
Hi Misko, First I would like to thank you for the “Guide to Writing Testable Code”, which really helped me to think about better ways to organize my code and architecture. Trying to apply the guide to the code I’m working on, I came up with some difficulties. Our code is based on external frameworks and libraries. Being dependent on external frameworks makes it harder to write tests, since test setup is much more complex. It’s not just a single class we’re using, but rather a whole bunch of classes, base classes, definitions and configuration files. Can you provide some tips about using external libraries or frameworks, in a manner that will allow easy testing of the code?
-- Thanks, Shahar
There are two different kind of situations you can get yourself into:

  1. Either your code calls a third-party library (such as you calling into LDAP authentication, or JDBC driver)

  2. Or a third party library calls you and forces you to implement an interface or extend a base class (such as when using servlets).

Unless these APIs are written with testability in mind, they will hamper your ability to write tests.

Calling Third-Party Libraries

I always try to separate myself from a third-party library with a Facade and an Adapter. A Facade is an interface which presents a simplified view of the third-party API. Let me give you an example. Have a look at javax.naming.ldap. It is a collection of several interfaces and classes, with a complex way in which you have to call them. If your code depends on this interface you will drown in mocking hell. Now, I don't know why the API is so complex, but I do know that my application only needs a fraction of these calls. I also know that many of these calls are configuration-specific, and outside of bootstrapping code these APIs clutter what I have to mock out.

I start from the other end. I ask myself this question: 'What would an ideal API look like for my application?' The key here is 'my application'. An application which only needs to authenticate will have a very different 'ideal API' than an application which needs to manage the LDAP. Because we are focusing on our application, the resulting API is significantly simplified. It is very possible that for most applications the ideal interface may be something along these lines:
interface Authenticator {
 boolean authenticate(String username,
                      String password);
}

As you can see, this interface is a lot simpler to mock and work with than the original one, and as a result it is a lot more testable. In essence, the ideal interfaces are what separate the testable world from the legacy world.

Once we have an ideal interface all we have to do is implement the adapter which bridges our ideal interface with the actual one. This adapter may be a pain to test, but at least the pain is in a single location.

The benefit of this is that:

  • We can easily implement an InMemoryAuthenticator for running our application in the QA environment.

  • If the third-party APIs change, then those changes only affect our adapter code.

  • If we now have to authenticate against Kerberos or the Windows registry, the implementation is straightforward.

  • We are less likely to introduce a usage bug since calling the ideal API is simpler than calling the original API.
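An InMemoryAuthenticator like the one mentioned above might look like this (a minimal sketch; the class is named in the post but its code is not shown, and the Authenticator interface is repeated for completeness):

```java
import java.util.HashMap;
import java.util.Map;

interface Authenticator {
  boolean authenticate(String username, String password);
}

// Fake implementation of the ideal interface, useful in a QA
// environment and in tests, with no LDAP server in sight.
class InMemoryAuthenticator implements Authenticator {
  private final Map<String, String> passwords = new HashMap<>();

  void addUser(String username, String password) {
    passwords.put(username, password);
  }

  public boolean authenticate(String username, String password) {
    return password != null && password.equals(passwords.get(username));
  }
}
```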

Plugging into an Existing Framework

Let's take servlets as an example of a hard-to-test framework. Why are servlets hard to test?

  • Servlets require a no argument constructor which prevents us from using dependency injection. See how to think about the new operator.

  • Servlets pass around HttpServletRequest and HttpServletResponse which are very hard to instantiate or mock.

At a high level I use the same strategy of separating myself from the servlet APIs. I implement my actions in a separate class:
class LoginPage {
 Authenticator authenticator;
 boolean success;
 String errorMessage;
 LoginPage(Authenticator authenticator) {
   this.authenticator = authenticator;
 }

 void execute(Map<String, String> parameters,
              String cookie) {
   // do some work
   success = ...;
   errorMessage = ...;
 }

 String render(Writer writer) {
   if (success) {
     return "redirect URL";
   } else {
     writer.write(...);
     return null;
   }
 }
}

The code above is easy to test because:

  • It does not inherit from any base class.

  • Dependency injection allows us to inject mock authenticator (Unlike the no argument constructor in servlets).

  • The work phase is separated from the rendering phase. It is really hard to assert anything useful on the Writer but we can assert on the state of the LoginPage, such as success and errorMessage.

  • The input parameters to the LoginPage are very easy to instantiate. (Map<String, String>, String for cookie, or a StringWriter for the writer).
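The bullet points above can be made concrete with a test sketch. The post elides the bodies of execute() and render(), so the concrete LoginPage below is my illustrative filling-in (a username/password lookup), and plain assertions stand in for a test framework:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import java.util.HashMap;
import java.util.Map;

interface Authenticator {
  boolean authenticate(String username, String password);
}

// Illustrative concrete LoginPage; the post leaves these bodies out.
class LoginPage {
  Authenticator authenticator;
  boolean success;
  String errorMessage;

  LoginPage(Authenticator authenticator) {
    this.authenticator = authenticator;
  }

  void execute(Map<String, String> parameters, String cookie) {
    success = authenticator.authenticate(
        parameters.get("username"), parameters.get("password"));
    errorMessage = success ? null : "Invalid username or password";
  }

  String render(Writer writer) {
    try {
      if (success) {
        return "/home"; // hypothetical redirect URL
      } else {
        writer.write(errorMessage);
        return null;
      }
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
```

A test then injects a fake Authenticator (here a lambda), calls execute() with a plain HashMap, and asserts on the success and errorMessage state rather than on the Writer.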

What we have achieved is that all of our application logic is in the LoginPage, and all of the untestable mess is in the LoginServlet, which acts like an adapter. We can then test the LoginPage in depth. The LoginServlet is not so simple, and in most cases I just don't bother testing it, since there can only be wiring bugs in that code. There should be no application logic in the LoginServlet, since we have moved all of the application logic to LoginPage.

Let's look at the adapter class:
class LoginServlet extends HttpServlet {
 Provider<LoginPage> loginPageProvider;

 // no-arg constructor required by
 // the Servlet framework
 LoginServlet() {
   this(Global.injector
          .getProvider(LoginPage.class));
 }

 // Dependency-injected constructor used for testing
 LoginServlet(Provider<LoginPage> loginPageProvider) {
   this.loginPageProvider = loginPageProvider;
 }

 public void service(HttpServletRequest req,
                     HttpServletResponse resp) throws IOException {
   LoginPage page = loginPageProvider.get();
   page.execute(req.getParameterMap(),
        req.getCookies());
   String redirect = page.render(resp.getWriter());
   if (redirect != null)
     resp.sendRedirect(redirect);
 }
}

Notice the use of two constructors: one fully dependency-injected and the other no-arg. If I write a test, I will use the dependency-injected constructor, which will then allow me to mock out all of my dependencies.

Also notice that the no argument constructor is forcing me to use global state, which is very bad, but in the case of servlets I have no choice. However, I make sure that only servlets access the global state and the rest of my application is unaware of this global variable and uses proper dependency injection techniques.

BTW, there are many frameworks out there which sit on top of servlets and provide very testable APIs. They all achieve this by separating you from the servlet implementation and from HttpServletRequest and HttpServletResponse. For example, Waffle and WebWork.

Friday, July 25, 2008

TotT: Testing Against Interfaces

To quell a lingering feeling of inadequacy, you took the time to build your own planetary death ray, a veritable rite of passage in the engineering profession. Congratulations. And you were feeling pretty proud until the following weekend, when you purchased the limited-edition Star Wars trilogy with Ewok commentary, and upon watching the Death Star destroy Alderaan, you realized that you had made a bad decision: Your planetary death ray has a blue laser, but green lasers look so much cooler. But it's not a simple matter of going down to Radio Shack to purchase a green laser that you can swap into your existing death ray. You're going to have to build another planetary death ray from the ground-up to have a green laser, which is fine by you because owning two death rays instead of one will only make the neighbors more jealous.

Both your planetary death rays should interoperate with a variety of other bed-wettingly awesome technology, so it's natural that they export the same Java API:

public interface PlanetaryDeathRay {
  public void aim(double xPosition, double yPosition);
  public boolean fire(); /* call this if she says the rebel
                            base is on Dantooine */
}

public class BlueLaserPlanetaryDeathRay
    implements PlanetaryDeathRay { /* implementation here */ }
public class GreenLaserPlanetaryDeathRay
    implements PlanetaryDeathRay { /* implementation here */ }



Testing both death rays is important so there are no major malfunctions, like destroying Omicron Persei VIII instead of Omicron Persei VII. You want to run the same tests against both implementations to ensure that they exhibit the same behavior – something you could easily do if you defined the tests only once, against any PlanetaryDeathRay implementation. Start by writing the following abstract class that extends junit.framework.TestCase:

public abstract class PlanetaryDeathRayTestCase
    extends TestCase {
  protected PlanetaryDeathRay deathRay;
  @Override protected void setUp() {
    deathRay = createDeathRay();
  }
  @Override protected void tearDown() {
    deathRay = null;
  }
  protected abstract PlanetaryDeathRay createDeathRay();
      /* create the PlanetaryDeathRay to test */

  public void testAim() {
    /* write implementation-independent tests here against
       deathRay.aim() */
  }
  public void testFire() {
    /* write implementation-independent tests here against
       deathRay.fire() */
  }
}



Note that the setUp method gets the particular PlanetaryDeathRay implementation to test from the abstract createDeathRay method. A subclass needs to implement only this method to create a complete test: the testAim and testFire methods it inherits will be part of the test when it runs:

public class BlueLaserPlanetaryDeathRayTest
    extends PlanetaryDeathRayTestCase {
  protected PlanetaryDeathRay createDeathRay() {
    return new BlueLaserPlanetaryDeathRay();
  }
}



You can easily add new tests to this class to test functionality specific to BlueLaserPlanetaryDeathRay.

Remember to download this episode of Testing on the Toilet and post it in your office.

Thursday, March 20, 2008

TotT: TestNG on the Toilet

Recently, somewhere in the Caribbean Sea, you implemented the PirateShip class. You want to test the cannons thoroughly in preparation for a clash with the East India Company. This requires that you run the crucial testFireCannonDepletesAmmunition() method many times with many different inputs.

TestNG is a test framework for Java unit tests that offers additional power and ease of use over JUnit. Some of TestNG's features will help you to write your PirateShip tests in such a way that you'll be well prepared to take on the Admiral. First is the @DataProvider annotation, which allows you to add parameters to a test method and provide argument values to it from a data provider.

public class PirateShipTest {
  @Test(dataProvider = "cannons")
  public void testFireCannonDepletesAmmunition(int ballsToLoad,
         int ballsToFire,
         int expectedRemaining) {
    PirateShip ship = new PirateShip("The Black Pearl");
    ship.loadCannons(ballsToLoad);
    for (int i = 0; i < ballsToFire; i++) {
      ship.fireCannon();
    }
    assertEquals(ship.getBallsRemaining(), expectedRemaining);
  }
  @DataProvider(name = "cannons")
  public Object[][] getShipSidesAndAmmunition() {
    // Each 1-D array represents a single execution of a @Test that
    // refers to this provider. The elements in the array represent
    // parameters to the test call.
    return new Object[][] {
      {5, 1, 4}, {5, 5, 0}, {5, 0, 5}
    };
  }
}


Now let's focus on making the entire test suite run faster. An old, experienced pirate draws your attention to TestNG's capacity for running tests in parallel. You can do this in the definition of your test suite (described in an XML file) with the parallel and thread-count attributes.

<suite name="PirateShip suite" parallel="methods" thread-count="2">


A great pirate will realize that this parallelization can also help to expose race conditions in the methods under test.

Now you have confidence that your cannons fired in parallel will work correctly. But you didn't get to be a Captain by slacking off! You know that it's also important for your code to fail as expected. For this, TestNG offers the ability to specify those exceptions (and only those exceptions) that you expect your code to throw.

@Test(expectedExceptions = { NoAmmunitionException.class })
public void testFireCannonEmptyThrowsNoAmmunitionException() {
  PirateShip ship = new PirateShip("The Black Pearl");
  ship.fireCannon();
}



Remember to download this episode of Testing on the Toilet and post it in your office.