Thursday, October 23, 2008

Automatic Bug Filing

I like to automate things. This is a welcome trait at APT, as rapidly developing software with an engineering team of about 20 does not leave much time for manual testing. Out of necessity, we have built a fairly sophisticated automated testing framework, which has been critical in monitoring the integrity of our code. We have software that interacts with our product as if somebody were controlling it themselves. Along the way it checks for errors or, even worse, changed output numbers. The testing code that tells the software what to do is dynamically generated from an object-oriented, state-based model abstracted in a database. This allows us to quickly create thousands of test cases that interact with our product in a variety of different ways. These test cases are prioritized and assigned to one of about a dozen automated testing machines, which execute them constantly, every minute of every day of every week, and report on the results.

So there we have it: a distributed, prioritized, automated testing framework. What more could we want? Well, I found myself spending a lot of time examining the failed tests. If I determined that the problem encountered was not a known issue, I would file a bug report with the relevant information. Otherwise, I would have to note that we already knew about the issue and ignore that test until it was fixed. My coworkers were experiencing the same thing. As we scaled our framework to run more and more tests, there was no analogous expansion of our ability to monitor and react to the results of those tests. This is where our affinity for automating things comes in: why not automate the responses to our automated tests? And that is exactly what we did.

Now when one of our automated tests hits an error, a check is done to see whether the error is new. If it is a known error, we associate the test with it; this association keeps us from wasting any more time on subsequent failures and also records which tests to rerun to determine whether the problem has been fixed. If it is a new error, we automatically file a detailed bug report with an appropriate priority determined by characteristics of the test case and of the error that was hit.
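
To make that flow concrete, here is a rough sketch of the decision logic in Java. Every type and name in it (TestCase, KnownErrorStore, BugTracker, and so on) is a hypothetical stand-in for illustration, as is the priority rule; none of this is our actual framework code.

// Hypothetical sketch of the "associate or file" decision described above.
// All of these types are illustrative stand-ins, not our real framework.
interface TestCase        { String getName(); int getPriority(); }
interface TestError       { String getSummary(); int getSeverity(); }
interface KnownError      { void associateTest(TestCase test); }
interface BugTracker      { KnownError fileBug(String report, int priority); }
interface KnownErrorStore {
    KnownError findMatching(TestError error);   // null if the error is new
    void add(KnownError error);
}

class FailureHandler {
    void handleFailure(TestCase test, TestError error,
                       KnownErrorStore knownErrors, BugTracker tracker) {
        KnownError existing = knownErrors.findMatching(error);
        if (existing != null) {
            // Known problem: remember that this test also hits it, so later
            // failures can be skipped and the test can verify the eventual fix.
            existing.associateTest(test);
        } else {
            // New problem: file a detailed bug right away, with a priority
            // derived from the test case and the error (illustrative rule only).
            int priority = Math.min(test.getPriority(), error.getSeverity());
            String report = "Test " + test.getName() + " failed: " + error.getSummary();
            KnownError filed = tracker.fileBug(report, priority);
            knownErrors.add(filed);
            filed.associateTest(test);
        }
    }
}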

This automation was not without complexities; in fact, we are still working out some kinks. First of all, it hinges on the ability to accurately determine whether an error is new. Once that is done, you want to be able to filter out errors that are not relevant. Automatic bug filing is a fine line to walk: file too few bugs and you still must spend time going over reports, checking for things that may have been missed; file too many and you have to go through them all to separate the legitimate ones from the noise. Once the logic is tuned, however, the previously manual task of responding to the results of automated tests is itself automated. The benefits include less time spent looking over reports and zero lag between the moment a problem occurs and the moment a bug report is filed, quickly bringing the issue to the attention of product engineers. And of course there is the good feeling you get when you've automated something that used to be done manually!
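
As one illustration of the "is this error new?" check, a common approach (again a sketch, not necessarily what we do) is to normalize each error message into a signature by stripping out the pieces that vary from run to run, and then compare signatures against the errors already on file:

import java.util.regex.Pattern;

// Illustrative only: reduce an error message to a stable signature so two
// failures caused by the same underlying problem compare as equal.
class ErrorSignature {
    private static final Pattern HEX_ADDRESSES = Pattern.compile("0x[0-9a-fA-F]+");
    private static final Pattern NUMBERS       = Pattern.compile("\\d+");
    private static final Pattern WHITESPACE    = Pattern.compile("\\s+");

    static String of(String rawErrorMessage) {
        String s = rawErrorMessage;
        s = HEX_ADDRESSES.matcher(s).replaceAll("<addr>"); // memory addresses
        s = NUMBERS.matcher(s).replaceAll("<n>");          // ids, line numbers, timestamps
        s = WHITESPACE.matcher(s).replaceAll(" ").trim();
        return s;
    }
}

A failure whose signature matches a stored one is treated as that known error; an unseen signature is what triggers a new bug report.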

Monday, October 6, 2008

Starting Selenium Server in Java


For some of our automated tests we are switching to the open-source project Selenium-RC. You can read more about it at its web site, http://selenium-rc.openqa.org/, but essentially it runs a Java server that can control an internet browser, and your testing code sends commands to this server. One key part of this setup is that you need the server running while your testing code is executing. For automated testing machines it would be no big deal to make the Selenium server a service; however, developers probably don't want it running all the time. In fact, they do not want to think about it at all!
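
If you have not seen Selenium-RC before, driving a browser from Java looks roughly like this; the application URL and the locator below are placeholders, and 4444 is the server's default port:

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

// Connect to the Selenium server and drive a browser through it.
// "http://localhost:8080/" and "link=Login" are placeholder values.
Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost:8080/");
selenium.start();              // ask the server to launch a browser session
selenium.open("/");            // navigate to the application under test
selenium.click("link=Login");  // interact with the page
selenium.stop();               // shut the browser session down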

Our solution was therefore to have our testing code launch the server itself. I've seen a number of posts on various forums asking how to start the Selenium server from Java, but none of them had concrete answers, so I will reproduce our implementation here for you to use and modify as you please:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

Process p = null;
try {
    // Launch the Selenium server in its own java process.
    String[] cmd = {"java", "-jar", "C:\\<path to selenium>\\server\\selenium-server.jar"};
    p = Runtime.getRuntime().exec(cmd);
} catch (IOException e) {
    System.out.println("IOException caught: " + e.getMessage());
    e.printStackTrace();
}

System.out.println("Waiting for server...");
int sec = 0;
int timeout = 20;
boolean serverReady = false;
try {
    // Read the server's console output until it announces that it is ready.
    BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
    while (sec < timeout && !serverReady) {
        while (input.ready()) {
            String line = input.readLine();
            System.out.println("From selenium: " + line);
            if (line.contains("Started HttpContext[/,/]")) {
                serverReady = true;
            }
        }
        Thread.sleep(1000);
        ++sec;
    }
    input.close();
} catch (Exception e) {
    System.out.println("Exception caught: " + e.getMessage());
}

if (!serverReady) {
    throw new RuntimeException("Selenium server not ready");
}

System.out.println("Done waiting");

Some notes on the above code:

  • I left in some handy print statements; however, these are of course completely optional.

  • For non-automated testing machines, be sure to have the outermost try-catch block of your testing code kill the server (for example, by calling p.destroy() in a finally block), or it may be left running even after the testing code finishes.

  • For code on automated testing machines, you may want to check whether the server is already running and start it only if it is not. That way you don't waste time waiting for the server to be ready if another test already brought it up. One way to do this check is sketched after these notes.


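A simple way to perform that check (a sketch only, not necessarily how we do it) is to try opening a socket to the port the Selenium server listens on, which is 4444 unless you change it; the helper below is just an illustration:

import java.io.IOException;
import java.net.Socket;

// Illustrative helper: returns true if something is already listening on the
// given port (the Selenium server uses 4444 by default).
static boolean isSeleniumServerRunning(String host, int port) {
    Socket socket = null;
    try {
        socket = new Socket(host, port);
        return true;                 // connection accepted, so the server is already up
    } catch (IOException e) {
        return false;                // nothing listening, so go ahead and start it
    } finally {
        if (socket != null) {
            try { socket.close(); } catch (IOException e) { /* ignore */ }
        }
    }
}

The launch code can then call isSeleniumServerRunning("localhost", 4444) up front and skip both the exec call and the wait loop whenever it returns true.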