JUnit is an open source Java testing framework used to write and run repeatable tests.
It is an instance of the xUnit architecture for unit testing frameworks.
Maintained by Mike Clark
Last modified on August 17, 2002
FAQ Info:
The current version of this FAQ is maintained by Mike Clark.
Most of the wisdom contained in this FAQ comes from the collective insights and hard-won experiences of the many good folks who participate on the JUnit mailing list and the JUnit community at large.
If you see your genius represented anywhere in this FAQ without due credit to you, please send me an email and I'll make things right.
Your contributions to this FAQ are greatly appreciated! The JUnit community thanks you in advance.
To contribute to this FAQ, simply write a JUnit-related question and answer, then send the unformatted text to Mike Clark. Corrections to this FAQ are always appreciated, as well.
No reasonable contribution will be denied. Your name will always appear along with any contribution you make.
The master copy of this FAQ is available at http://junit.sourceforge.net/doc/faq/faq.htm.
The entries in this FAQ are also documented in the jGuru FAQ.
The JUnit distribution also includes this FAQ in the doc directory.
Overview:
JUnit is an open source Java testing framework used to write and run repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks.
JUnit features include:
JUnit was originally written by Erich Gamma and Kent Beck.
The official JUnit home page is http://junit.org.
There are 3 mailing lists dedicated to everything JUnit:
You can search the JUnit user list archives for answers to frequently asked questions not included here.
There is also a jGuru discussion forum dedicated to everything JUnit.
The following documents are included in the JUnit distribution in the doc directory:
The JUnit home page maintains a list of JUnit articles.
The JUnit home page publishes the latest JUnit news.
JUnit is Open Source Software, released under IBM's Common Public License Version 0.5 and hosted on SourceForge.
Getting Started:
The latest version of JUnit is available at http://download.sourceforge.net/junit/ as a ZIP distribution file named junit.zip.
Windows

To install JUnit on Windows, follow these steps:

1. Unzip the junit.zip distribution file to a directory referred to as %JUNIT_HOME%.

2. Add JUnit to the CLASSPATH:

   set CLASSPATH=%CLASSPATH%;%JUNIT_HOME%\junit.jar
Unix (bash)

To install JUnit on Unix, follow these steps:

1. Unzip the junit.zip distribution file to a directory referred to as $JUNIT_HOME.

2. Add JUnit to the CLASSPATH:

   export CLASSPATH=$CLASSPATH:$JUNIT_HOME/junit.jar

3. (Optional) Unzip the $JUNIT_HOME/src.jar file.
To test the installation, run the sample tests distributed with JUnit. Note: The sample tests are not contained in junit.jar, but directly in the installation directory. Therefore, make sure that the JUnit installation directory is in the CLASSPATH.
For the textual TestRunner, type:
java junit.textui.TestRunner junit.samples.AllTests
For the graphical TestRunner, type:
java junit.swingui.TestRunner junit.samples.AllTests
All the tests should pass with an "OK" (textual) or a green bar (graphical).
If the tests don't pass, verify that junit.jar is in the CLASSPATH.

To uninstall JUnit, remove junit.jar from the CLASSPATH and delete the installation files. JUnit does not modify the registry, so simply removing all the files will fully uninstall it.
Questions that are not answered in the FAQ or in the documentation should be posted to the jGuru discussion forum or the JUnit user mailing list.
Please stick to technical issues on the discussion forum and mailing lists. Keep in mind that these are public, so do not include any confidential information in your questions!
You should also read "How to ask questions the smart way" by Eric Raymond before participating in the discussion forum and mailing lists.
NOTE: Please do NOT submit bugs, patches, or feature requests to the discussion forum or mailing lists. Refer instead to "How do I submit bugs, patches, or feature requests?".
JUnit celebrates programmers testing their own software. In this spirit, bugs, patches, and feature requests that include JUnit tests have a better chance of being addressed than those without.
JUnit is forged on SourceForge. Please use the tools provided by SourceForge for your submissions.
Writing Tests:
To write a simple test:

1. Create a class that subclasses TestCase:

package junitfaq;

import java.util.*;
import junit.framework.*;

public class SimpleTest extends TestCase {

    public SimpleTest(String name) {
        super(name);
    }

2. Write a test method asserting expected results on the object under test:

    public void testEmptyCollection() {
        Collection collection = new ArrayList();
        assertTrue(collection.isEmpty());
    }
3. Write a suite() method that uses reflection to dynamically create a test suite containing all the testXXX() methods:

    public static Test suite() {
        return new TestSuite(SimpleTest.class);
    }
4. Write a main() method to conveniently run the test with the textual test runner:

    public static void main(String args[]) {
        junit.textui.TestRunner.run(suite());
    }
}
To run the test with the textual test runner used by main(), type:

java junitfaq.SimpleTest

The passing test results in the following textual output:

.
Time: 0

OK (1 tests)
To run the test with the graphical test runner, type:

java junit.swingui.TestRunner junitfaq.SimpleTest
The passing test results in a green bar displayed in the graphical UI.
A test fixture is useful if you have two or more tests for a common set of objects. Using a test fixture avoids duplicating the test code necessary to initialize and cleanup those common objects for each test.
Tests can share the objects in a test fixture, with each test invoking different methods on the objects in the fixture and asserting different expected results. Each test runs in its own test fixture to isolate tests from the changes made by other tests. Because the tests are isolated, they can be run in any order.
To create a test fixture, define a setUp() method that initializes common objects and a tearDown() method to cleanup those objects. The JUnit framework automatically invokes the setUp() method before each test is run and the tearDown() method after each test is run.

The following test uses a test fixture to initialize and cleanup a common Collection object such that both tests are isolated from changes made by the other:
package junitfaq;

import junit.framework.*;
import java.util.*;

public class SimpleTest extends TestCase {

    private Collection _collection;

    public SimpleTest(String name) {
        super(name);
    }

    protected void setUp() {
        _collection = new ArrayList();
    }

    protected void tearDown() {
        _collection.clear();
    }

    public void testEmptyCollection() {
        assertTrue(_collection.isEmpty());
    }

    public void testOneItemCollection() {
        _collection.add("itemA");
        assertEquals(1, _collection.size());
    }
}
Make sure the test fixture methods are defined as follows, noting that both method names are case sensitive:
protected void setUp() {
    // initialization code
}

protected void tearDown() {
    // cleanup code
}
(Submitted by: Dave Astels)
Often if a method doesn't return a value, it will have some side effect. Actually, if it doesn't return a value AND doesn't have a side effect, it isn't doing anything.
There may be a way to verify that the side effect actually occurred as expected. For example, consider the add() method in the Collection classes. There are ways of verifying that the side effect happened (i.e. the object was added). You can check the size and assert that it is what is expected:
public void testCollectionAdd() {
    Collection collection = new ArrayList();
    assertEquals(0, collection.size());
    collection.add("itemA");
    assertEquals(1, collection.size());
    collection.add("itemB");
    assertEquals(2, collection.size());
}
Another approach is to make use of MockObjects.
A related issue is to design for testing. For example, if you have a method that is meant to output to a file, don't pass in a filename, or even a FileWriter. Instead, pass in a Writer. That way you can pass in a StringWriter to capture the output for testing purposes. Then you can add a method (e.g. writeToFileNamed(String filename)) to encapsulate the FileWriter creation.
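For example, here is a minimal sketch of that approach; the Report class, its writeTo(Writer) method, and the expected output are hypothetical names used only for illustration:

import java.io.StringWriter;
import junit.framework.TestCase;

public class ReportTest extends TestCase {

    public void testWriteTo() throws Exception {
        Report report = new Report("hello");
        StringWriter out = new StringWriter();

        // The method under test accepts any Writer (hypothetical method).
        report.writeTo(out);

        // The StringWriter captures the output so it can be asserted on.
        assertEquals("hello", out.toString());
    }
}

The same test against a method that insisted on a FileWriter would have to create, read, and delete a real file.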
Unit tests are intended to alleviate fear that something might break. If you think a get() or set() method could reasonably break, or has in fact contributed to a defect, then by all means write a test.
In short, test until you're confident. What you choose to test is subjective, based on your experiences and confidence level. Remember to be practical and maximize your testing investment.
Refer also to "How simple is 'too simple to break'?".
(Submitted by: J. B. Rainsberger)
Most of the time, get/set methods just can't break, and if they can't break, then why test them? While it is usually better to test more, there is a definite curve of diminishing returns on test effort versus "code coverage". Remember the maxim: "Test until fear turns to boredom."
Assume that the getX() method only does "return x;" and that the setX() method only does "this.x = x;". If you write this test:

testGetSetX() {
    setX(23);
    assertEquals(23, getX());
}

then you are testing the equivalent of the following:

testGetSetX() {
    x = 23;
    assertEquals(23, x);
}

or, if you prefer:

testGetSetX() {
    assertEquals(23, 23);
}

At this point, you are testing the Java compiler, or possibly the interpreter, and not your component or application. There is generally no need for you to do Java's testing for them.
If you are concerned about whether a property has already been set at the point you wish to call getX(), then you want to test the constructor, and not the getX() method. This kind of test is especially useful if you have multiple constructors:

testCreate() {
    assertEquals(23, new MyClass(23).getX());
}
Catch the exception within the test method. If it isn't thrown, call the fail() method to signal the failure of the test.

The following is an example test that passes when the expected IndexOutOfBoundsException is raised:
public void testIndexOutOfBoundsException() {
    ArrayList list = new ArrayList(10);
    try {
        Object o = list.get(11);
        fail("Should raise an IndexOutOfBoundsException");
    } catch (IndexOutOfBoundsException success) {
    }
}
Declare the exception in the throws clause of the test method and don't catch the exception within the test method. Uncaught exceptions will cause the test to fail with an error.

The following is an example test that fails when the IndexOutOfBoundsException is raised:
public void testIndexOutOfBoundsExceptionNotRaised() throws IndexOutOfBoundsException {
    ArrayList list = new ArrayList(10);
    Object o = list.get(11);
}
Assertions are used to check for the possibility of failures, therefore failures are anticipated. Errors are unanticipated problems resulting in uncaught exceptions being propagated from a JUnit test method.
In the following example, the FileNotFoundException is expected and checked with an assertion. If the expected exception is not raised, then a failure is produced. If any other unexpected IOException or unchecked exception (e.g. NullPointerException) is raised, the JUnit framework catches the exception and signals an error.
public void testNonexistentFileRead() throws IOException {
    try {
        File file = new File("doesNotExist.txt");
        FileReader reader = new FileReader(file);
        assertEquals('a', (char)reader.read());
        fail("Read from a nonexistent file?!");
    } catch (FileNotFoundException success) {
    }
}
In the following example, an IOException is not expected. The JUnit framework will signal an error if an IOException (e.g. FileNotFoundException) or any unchecked exception (e.g. NullPointerException) is raised.
public void testExistingFileRead() throws IOException {
    // exists.txt created in setUp(), perhaps
    File file = new File("exists.txt");
    FileReader reader = new FileReader(file);
    assertEquals('a', (char)reader.read());
}
Both failures and errors will cause the test to fail. However, it is useful to differentiate between failures and errors because the debugging process is slightly different.
In the first example, the use of fail() will not generate a complete stack trace including the method that raised the exception. In this case that's sufficient since we anticipate that the exception will be raised. If it's not raised, then it's a problem with the test itself.
In the second example, the JUnit framework catches the exception and generates an error with a complete stack trace for the exception. Since we don't expect this exception to be raised, a complete stack trace is useful in debugging why it was raised.
Place your tests in the same package as the classes under test.
Refer to "Where should I put my test files?" for examples of how to organize tests for protected method access.
Testing private methods may be an indication that those methods should be moved into another class to promote reusability.
But if you must...
You can use reflection to subvert the access control mechanism. If you are using JDK 1.3 or higher you can use the PrivilegedAccessor class. Examples of how to use this class are available in PrivilegedAccessorTest.
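If you choose plain reflection instead of the PrivilegedAccessor class, a test might look roughly like the following sketch. The Account class, its private calculateInterest(double) method, and the expected values are hypothetical:

import java.lang.reflect.Method;
import junit.framework.TestCase;

public class PrivateMethodTest extends TestCase {

    public void testCalculateInterest() throws Exception {
        Account account = new Account();

        // Look up the private method and subvert the access control mechanism.
        Method method = Account.class.getDeclaredMethod(
            "calculateInterest", new Class[] { double.class });
        method.setAccessible(true);

        // Invoke the method reflectively and assert on its return value.
        Object result = method.invoke(account, new Object[] { new Double(1000.0) });
        assertEquals(new Double(50.0), result);
    }
}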
(Submitted by: J. B. Rainsberger)
Reporting multiple failures in a single test is generally a sign that the test does too much, compared to what a unit test ought to do. Usually this means either that the test is really a functional/acceptance/customer test or, if it is a unit test, then it is too big a unit test.
JUnit is designed to work best with a number of small tests. It executes each test within a separate instance of the test class. It reports failure on each test. Shared setup code is most natural when sharing between tests. This is a design decision that permeates JUnit, and when you decide to report multiple failures per test, you begin to fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design problem. Kent Beck is fond of saying in this case that "there is an opportunity to learn something about your design." We would like to see a pattern language develop around these problems, but it has not yet been written down.
Finally, note that a single test with multiple assertions is isomorphic to a test case with multiple tests:
One test method, three assertions:
public class MyTestCase extends TestCase {

    public void testSomething() {
        // Set up for the test, manipulating local variables
        assertTrue(condition1);
        assertTrue(condition2);
        assertTrue(condition3);
    }
}
Three test methods, one assertion each:
public class MyTestCase extends TestCase {

    // Local variables become instance variables

    protected void setUp() {
        // Set up for the test, manipulating instance variables
    }

    public void testCondition1() {
        assertTrue(condition1);
    }

    public void testCondition2() {
        assertTrue(condition2);
    }

    public void testCondition3() {
        assertTrue(condition3);
    }
}

The resulting tests use JUnit's natural execution and reporting mechanism, and failure in one test does not affect the execution of the other tests. You generally want exactly one test to fail for any given bug, if you can manage it.
What happened to the assert() method?
(Submitted by: David Stagner)
JUnit 3.7 deprecated assert() and replaced it with assertTrue(), which works exactly the same way.

Simply upgrade your JUnit to version 3.7 or higher and change all assert() calls in your existing tests to assertTrue().
Refactoring J2EE components to delegate functionality to other objects that don't have to be run in a J2EE container will improve the design and testability of the software.
Cactus is an open source JUnit extension that can be used to test J2EE components in their natural environment.
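As a rough illustration of that refactoring (the TaxCalculator class and its methods are hypothetical names, not part of Cactus or any J2EE API), business logic moved out of a servlet or EJB into a plain class can be unit tested directly, with no container:

import junit.framework.TestCase;

public class TaxCalculatorTest extends TestCase {

    public void testTaxOnOneHundred() {
        // The servlet or EJB would delegate to this plain object;
        // the unit test exercises it directly, outside any container.
        TaxCalculator calculator = new TaxCalculator(0.05);
        assertEquals(5.0, calculator.taxOn(100.0), 0.0);
    }
}

The container-specific component then becomes a thin wrapper that delegates to the plain class, and Cactus can be reserved for testing the remaining in-container behavior.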
Do I need to write a TestCase class for every class I need to test?

(Submitted by: J. B. Rainsberger)

No. It is a convention to start with one TestCase class per class under test, but it is not necessary.
TestCase classes only provide a way to organize tests, nothing more. Generally you will start with one TestCase class per class under test, but then you may find that a small group of tests belong together with their own common test fixture.[1] In this case, you may move those tests to a new TestCase class. This is a simple object-oriented refactoring: separating responsibilities of an object that does too much.
Another point to consider is that the TestSuite is the smallest execution unit in JUnit: you cannot execute anything smaller than a TestSuite at one time without changing source code. In this case, you probably do not want to put tests in the same TestCase class unless they somehow "belong together". If you have two groups of tests that you think you'd like to execute separately from one another, it is wise to place them in separate TestCase classes.
[1] A test fixture is a common set of test data and collaborating objects shared by many tests. Generally they are implemented as instance variables in the TestCase class.
Organizing Tests:
You can place your tests in the same package and directory as the classes under test.
For example:
src
   com
      xyz
         SomeClass.java
         SomeClassTest.java
Or, if you feel this clutters the source directory, you can place the tests in a separate parallel directory structure with package alignment.
For example:
src
   com
      xyz
         SomeClass.java
test
   com
      xyz
         SomeClassTest.java
These approaches allow the tests to access all the package-visible methods and fields of the classes under test.
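For example, here is a sketch of a test that relies on package visibility; SomeClass and its package-visible computeTotal() method are hypothetical:

package com.xyz;

import junit.framework.TestCase;

public class SomeClassTest extends TestCase {

    public void testComputeTotal() {
        SomeClass someClass = new SomeClass();

        // computeTotal() has default (package) visibility, so this call
        // compiles only because the test shares the com.xyz package.
        assertEquals(42, someClass.computeTotal());
    }
}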
Write a suite() method that creates a TestSuite containing all your tests.

For example:
import junit.framework.*;

public class AllTests {

    public static Test suite() {
        TestSuite suite = new TestSuite();
        suite.addTest(SomeTest.suite());
        suite.addTest(AnotherTest.suite());
        return suite;
    }

    public static void main(String args[]) {
        junit.textui.TestRunner.run(suite());
    }
}
Running AllTests will automatically run all of its contained tests in one fell swoop.
You can arbitrarily group any tests into test suites as appropriate by package, logical layers, test type, etc.
The desire to do this is usually a symptom of excessive coupling in your design. If two or more tests must share the same test fixture state, then the tests may be trying to tell you that the classes under test have some undesirable dependencies.
Refactoring the design to further decouple the classes under test and eliminate code duplication is usually a better investment than setting up a shared test fixture.
But if you must...
You can wrap the test suite containing all your tests in a subclass of TestSetup, which invokes setUp() exactly once before all the tests are run and invokes tearDown() exactly once after all the tests have been run.

The following is an example suite() method that uses a TestSetup for one-time initialization and cleanup:
import junit.framework.*;
import junit.extensions.TestSetup;

public class AllTestsOneTimeSetup {

    public static Test suite() {
        TestSuite suite = new TestSuite();
        suite.addTest(SomeTest.suite());
        suite.addTest(AnotherTest.suite());

        TestSetup wrapper = new TestSetup(suite) {
            protected void setUp() {
                oneTimeSetUp();
            }
            protected void tearDown() {
                oneTimeTearDown();
            }
        };

        return wrapper;
    }

    public static void oneTimeSetUp() {
        // one-time initialization code
    }

    public static void oneTimeTearDown() {
        // one-time cleanup code
    }
}
Running Tests:
Why do I get errors (e.g. ClassCastException or LinkageError) when using the GUI TestRunners?
(Submitted by: Scott Stirling)
JUnit's GUI TestRunners use a custom class loader (junit.runner.TestCaseClassLoader) to dynamically reload your code every time you press the "Run" button, so you don't have to restart the GUI to reload your classes if you recompile them. The default classloaders of the Java Virtual Machine do not dynamically reload changed classes. But JUnit's custom class loader finds and loads classes from the same CLASSPATH used by the JVM's system classloader. So, by design, it "sits in front of" the system loader and applies a filter to determine whether it should load a given class or delegate the loading of a class to the system classloader. This filter is configured with a list of String patterns in a properties file called excluded.properties.
The excluded.properties file contains a numbered list (excluded.0, excluded.1, excluded.2, etc.) of properties whose values are patterns for packages. This file is packaged in junit.jar as junit/runner/excluded.properties. As of JUnit 3.7 and Java 1.4, its contents are:
#
# The list of excluded package paths for the TestCaseClassLoader
#
excluded.0=sun.*
excluded.1=com.sun.*
excluded.2=org.omg.*
excluded.3=javax.*
excluded.4=sunw.*
excluded.5=java.*
There are some conditions, discussed below, where the default exclusions are insufficient and you will want to add some more to this list and then either update the junit.jar file with your customized version or place your customized version in the CLASSPATH before junit.jar.
Why do I get a LinkageError when using XML interfaces in my TestCase?
(Submitted by: Scott Stirling)
The workaround as of JUnit 3.7 is to add org.w3c.dom.* and org.xml.sax.* to your excluded.properties.
It's just a matter of time before this fix becomes incorporated into the released version of JUnit's excluded.properties, since JAXP is a standard part of JDK 1.4. It will be just like excluding org.omg.*. By the way, if you download the JUnit source from its SourceForge CVS, you will find that these patterns have already been added to the default excluded.properties, and so has a pattern for JINI. In fact, here is the current version in CVS, which demonstrates how to add exclusions to the list too:
#
# The list of excluded package paths for the TestCaseClassLoader
#
excluded.0=sun.*
excluded.1=com.sun.*
excluded.2=org.omg.*
excluded.3=javax.*
excluded.4=sunw.*
excluded.5=java.*
excluded.6=org.w3c.dom.*
excluded.7=org.xml.sax.*
excluded.8=net.jini.*
This is the most common case where the default excluded.properties list needs modification. The cause of the LinkageError is related to using JAXP in your test cases. By JAXP I mean the whole set of javax.xml.* classes and the supporting org.w3c.dom.* and org.xml.sax.* classes.
As stated above, the JUnit GUI TestRunners' classloader relies on the excluded.properties for classes it should delegate to the system classloader. JAXP is an unusual case because it is a standard Java extension library dependent on classes whose package names (org.w3c.dom.* and org.xml.sax.*) do not begin with a standard Java or Sun prefix. This is similar to the relationship between javax.rmi.* and the org.omg.* classes, which have been excluded by default in JUnit's excluded.properties for a while.
What can happen, and frequently does when using the JUnit Swing or AWT UI with test cases that reference, use or depend on JAXP classes (such as Log4J, Apache SOAP, Axis, Cocoon, etc.), is that the JUnit class loader (properly) delegates javax.xml.* classes it "sees" to the system loader. But then the system loader, in the process of initializing and loading that JAXP class, links and loads up a bunch of org.w3c.dom/org.xml.sax classes. When it does so, the JUnit custom classloader is not involved at all, because the system classloader never delegates "down" or checks with custom classloaders to see if a class is already loaded. At any point after this, if the JUnit loader is asked to load an org.w3c.dom/org.xml.sax class that it's never seen before, it will try to load it because the class's name doesn't match any of the patterns in the default exclude list. That's when a LinkageError occurs. This is really a flaw in the JUnit classloader design, but there is the workaround given above.
Java 2 JVMs keep classes (remember, classes and objects, though related, are different entities to the JVM - I'm talking about classes here, not object instances) in namespaces, identifying them by their fully qualified classname plus the instance of their defining (not initiating) loader. The JVM will attempt to assign all unloaded classes referenced by an already defined and loaded class to that class's defining loader. The JVM's class resolver routine (implemented as a C function in the JVM source code) keeps track of all these class loading events and "sees" if another classloader (such as the JUnit custom loader) attempts to define a class that has already been defined by the system loader. According to the rules of Java 2 loader constraints, in case a class has already been defined by the system loader, any attempts to load a class should first be delegated to the system loader.

A "proper" way for JUnit to handle this feature would be to load classes from a repository other than the CLASSPATH that the system classloader knows nothing about. Then the JUnit custom classloader could follow the standard Java 2 delegation model, which is to always delegate class loading to the system loader, and only attempt to load if that fails. Since they both load from the CLASSPATH in the current model, if the JUnit loader delegated like it's supposed to, it would never get to load any classes since the system loader would always find them.
You could try to hack around this in the JUnit source by catching the LinkageError in TestCaseClassLoader's loadClass() method and then making a recovery call to findSystemClass() -- thereby delegating to the system loader after the violation has been caught. But this hack only works some of the time, because now you can have the reverse problem where the JUnit loader will load a host of org.w3c.dom/org.xml.sax classes, and then the system loader violates the loader constraints at some point when it tries to do exactly what I described above with JAXP, because it doesn't ever delegate to its logical child (the JUnit loader). Inevitably, if your test cases use many JAXP and related XML classes, one or the other classloader will end up violating the constraints whatever you do.
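A rough sketch of that hack follows, assuming a loader shaped like junit.runner.TestCaseClassLoader; the loadClassFromPath() helper stands in for the loader's own CLASSPATH lookup and is not a real JUnit method:

public class PatchedTestCaseClassLoader extends ClassLoader {

    public synchronized Class loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        Class c;
        try {
            // The loader's own lookup against the CLASSPATH (assumed helper).
            c = loadClassFromPath(name);
        } catch (LinkageError violation) {
            // Recovery call: delegate to the system loader after the
            // loader-constraint violation has been caught.
            c = findSystemClass(name);
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }

    private Class loadClassFromPath(String name) throws ClassNotFoundException {
        // Placeholder for TestCaseClassLoader's CLASSPATH lookup.
        throw new ClassNotFoundException(name);
    }
}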
Why do I get a ClassCastException when I use narrow() in an EJB client TestCase?
(Submitted by: Scott Stirling)
The solution is to prevent your EJB's interface classes from being loaded by the JUnit custom class loader by adding them to excluded.properties.
This is another problem inherent to JUnit's dynamically reloading TestCaseClassLoader. It is similar to the LinkageErrors with JAXP and the org.xml.sax and org.w3c.dom classes, but with a different result.
Here's some example code:
Point point;
PointHome pointHome;

// The next line works in textui, but throws
// ClassCastException in swingui
pointHome = (PointHome)PortableRemoteObject.narrow(
    ctx.lookup("base/PointHome"), PointHome.class);
When you call InitialContext.lookup(), it returns an object that was loaded and defined by the JVM's system classloader (sun.misc.Launcher$AppClassLoader), but the PointEJBHome.class type is loaded by JUnit's TestCaseClassLoader. In the narrow(), the two fully qualified class names are the same, but the defining classloaders for the two are different, so you get the exception during the narrow because the JVM doesn't see them as being the same runtime class type.
Recall that in Java 2 an object's class (a.k.a. "runtime type") is identified in the JVM as the pair of <fully-qualified-classname;definingClassLoaderInstance> or (in shorter form) <C;L>. That is, the defining loader's identity is part of the runtime name identifying that class in the JVM. Also recall that the JVM will expect a class's defining loader to load all unloaded classes referenced by the classes it loads.
If interested for debugging purposes, you can find out more about which loader loaded which class by doing something like this:
System.out.println(ctx.lookup("base/PointEJBHome").getClass().getClassLoader());
System.out.println(PointEJBHome.class.getClassLoader());
You'll find when using the GUI TestRunners that the PointEJBHome type is defined by the JUnit TestCaseClassLoader and the object returned from InitialContext.lookup() was defined through the JVM's system class loader. When using the text-based TestRunner, they'll both have been loaded through the system loader.
If you use Ant's <batchtest> task to run your test cases and you have this problem, you can work around it by setting fork="true" on <batchtest>, which causes it to run each test in its own Java Virtual Machine, separate from Ant's launching JVM.
For further reading about the principles of Java dynamic classloading, the best resource is the short paper by Sheng Liang, the architect of the Java 2 classloader architecture: Dynamic Class Loading in the Java Virtual Machine, OOPSLA 1998.
Make sure the test contains one or more methods with names beginning with "test".
For example:
public void testSomething() { }
The debug option for the Java compiler must be enabled in order to see source file and line number information in a stack trace.
When invoking the Java compiler from the command line, use the -g option to generate all debugging info. When invoking the Java compiler from an Ant task, use the debug="on" attribute. For example:
<javac srcdir="${src}" destdir="${build}" debug="on" />
When using older pre-HotSpot JVMs (JDK 1.1 and most/all 1.2), run JUnit with the -DJAVA_COMPILER=none JVM command line argument to prevent runtime JIT compilation from obscuring line number info.
Compiling the test source with debug enabled will show the line where the assertion failed. Compiling the non-test source with debug enabled will show the line where an exception was raised in the class under test.
Why do I get a NoClassDefFoundError when trying to test JUnit or run the samples?
(Submitted by: J.B. Rainsberger and Jason Rogers)
Most likely your CLASSPATH doesn't include the JUnit installation directory.
Consider running WhichJunit to print the absolute location of the JUnit class files required to run and test JUnit and its samples.
If the CLASSPATH seems mysterious, read this!
(Submitted by: William Pietri)
Some IDEs come with a copy of JUnit, so your copy of JUnit in the project classpath isn't the one being used. Replace the junit.jar file used by the IDE with a junit.jar file containing a custom excluded.properties and your bar will once again be green.
Best Practices:
Tests should be written before the code. Test-first programming is practiced by only writing new code when an automated test is failing.
Good tests tell you how to best design the system for its intended use. They effectively communicate in an executable format how to use the software. They also prevent tendencies to over-build the system based on speculation. When all the tests pass, you know you're done!
Whenever a customer test fails or a bug is reported, first write the necessary unit test(s) to expose the bug(s), then fix them. This makes it almost impossible for that particular bug to resurface later.
Test-driven development is a lot more fun than writing tests after the code seems to be working. Give it a try!
No, just test everything that could reasonably break.
Be practical and maximize your testing investment. Remember that investments in testing are equal investments in design. If defects aren't being reported and your design responds well to change, then you're probably testing enough. If you're spending a lot of time fixing defects and your design is difficult to grow, you should write more tests.
If something is difficult to test, it's usually an opportunity for a design improvement. Look to improve the design so that it's easier to test, and by doing so a better design will usually emerge.
(Submitted by: J. B. Rainsberger)
The general philosophy is this: if it can't break on its own, it's too simple to break.
First example is the getX() method. Suppose the getX() method only answers the value of an instance variable. In that case, getX() cannot break unless either the compiler or the interpreter is also broken. For that reason, don't test getX(); there is no benefit.
The same is true of the setX() method, although if your setX() method does any parameter validation or has any side effects, you likely need to test it.
Next example: suppose you have written a method that does nothing but forward parameters into a method called on another object. That method is too simple to break.
public void myMethod(final int a, final String b) {
    myCollaborator.anotherMethod(a, b);
}
myMethod cannot possibly break because it does nothing itself: it forwards its input to another object and that's all.
The only precondition for this method is "myCollaborator != null", but that is generally the responsibility of the constructor, and not of myMethod. If you are concerned, add a test to verify that myCollaborator is always set to something non-null by every constructor.
The only way myMethod could break would be if myCollaborator.anotherMethod() were broken. In that case, test myCollaborator, and not the current class.
It is true that adding tests for even these simple methods guards against the possibility that someone refactors and makes the methods "not-so-simple" anymore. In that case, though, the refactorer needs to be aware that the method is now complex enough to break, and should write tests for it -- and preferably before the refactoring.
Another example: suppose you have a JSP and, like a good programmer, you have removed all business logic from it. All it does is provide a layout for a number of JavaBeans and never does anything that could change the value of any object. That JSP is too simple to break, and since JSPs are notoriously annoying to test, you should strive to make all your JSPs too simple to break.
Here's the way testing goes:
becomeTimidAndTestEverything
while writingTheSameThingOverAndOverAgain
    becomeMoreAggressive
    writeFewerTests
    writeTestsForMoreInterestingCases
    if getBurnedByStupidDefect
        feelStupid
        becomeTimidAndTestEverything
    end
end

The loop, as you can see, never terminates.
Run all your unit tests as often as possible, ideally every time the code is changed. Make sure all your unit tests always run at 100%. Frequent testing gives you confidence that your changes didn't break anything and generally lowers the stress of programming in the dark.
For larger systems, you may just run specific test suites that are relevant to the code you're working on.
Run all your acceptance, integration, stress, and unit tests at least once per day (or night).
Test-driven development generally lowers the defect density of software. But we're all fallible, so sometimes a defect will slip through. When this happens, write a failing test that exposes the defect. When the test passes, you know the defect is fixed!
Don't forget to use this as a learning opportunity. Perhaps the defect could have been prevented by being more aggressive about testing everything that could reasonably break.
Why not just use System.out.println()?
Inserting debug statements into code is a low-tech method for debugging it. It requires that output be scanned manually every time the program is run to ensure that the code is doing what's expected.
It generally takes less time in the long run to codify expectations in the form of an automated JUnit test that retains its value over time. If it's difficult to write a test to assert expectations, the tests may be telling you that shorter and more cohesive methods would improve your design.
Debuggers are commonly used to step through code and inspect that the variables along the way contain the expected values. But stepping through a program in a debugger is a manual process that requires tedious visual inspections. In essence, the debugging session is nothing more than a manual check of expected vs. actual results. Moreover, every time the program changes we must manually step back through the program in the debugger to ensure that nothing broke.
It generally takes less time to codify expectations in the form of an automated JUnit test that retains its value over time. If it's difficult to write a test to assert expected values, the tests may be telling you that shorter and more cohesive methods would improve your design.
Extending JUnit:
JUnit is a testing framework intended to be customized for specialized use. Browsing the JUnit source code is an excellent way to learn its design and discover how it can be extended.
Examples of JUnit extensions can be found in the junit.extensions package:
TestDecorator
A decorator for Tests. You can use it as the base class for implementing new test decorators that add behavior before or after a test is run (see the sketch below).

ActiveTestSuite
A TestSuite that runs each test in a separate thread and waits until all threads have terminated.

TestSetup
A TestDecorator to initialize and cleanup test fixture state once before the test is run.

RepeatedTest
A TestDecorator that runs a test repeatedly.

ExceptionTestCase
A TestCase that expects a particular Exception to be thrown.

Kent Beck has mentioned that ExceptionTestCase likely does not provide enough to be useful; it is just as easy to write the "exception test" yourself. Refer to the "Writing Tests" section for guidance.
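As a sketch of extending TestDecorator, the following hypothetical decorator prints how long the decorated test took to run:

import junit.extensions.TestDecorator;
import junit.framework.Test;
import junit.framework.TestResult;

public class TimedTest extends TestDecorator {

    public TimedTest(Test test) {
        super(test);
    }

    public void run(TestResult result) {
        long start = System.currentTimeMillis();
        basicRun(result); // run the decorated test
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(getTest() + " took " + elapsed + " ms");
    }
}

A suite could then wrap any test with it, for example suite.addTest(new TimedTest(new TestSuite(SimpleTest.class))).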
The JUnit home page has a complete list of available JUnit extensions.
Miscellaneous:
The JUnit home page maintains a list of IDE integration instructions.
Start the TestRunner under the debugger and configure the debugger so that it catches the junit.framework.AssertionFailedError.
How you configure this depends on the debugger you prefer to use. Most Java debuggers provide support to stop the program when a specific exception is raised.
Notice that this will only launch the debugger when an expected failure occurs.
XProgramming.com maintains a complete list of available xUnit testing frameworks.