If you are writing tests for a Python project and you (rather wisely) want to
use testtools to do so, this is the manual for you.
We assume that you already know Python and that you know something about
automated testing already.
If you are a test author of an unusually large or unusually unusual test
suite, you might be interested in testtools for framework folk.
Introduction
testtools is a set of extensions to Python’s standard unittest module.
Writing tests with testtools is very much like writing tests with standard
Python, or with Twisted’s “trial”, or nose, except a little bit easier and
more enjoyable.
Below, we’ll try to give some examples of how to use testtools in its most
basic way, as well as a sort of feature-by-feature breakdown of the cool bits
that you could easily miss.
The basics
Here’s what a basic testtools unit test looks like:
from testtools import TestCase
from myproject import silly

class TestSillySquare(TestCase):
    """Tests for silly square function."""

    def test_square(self):
        # 'square' takes a number and multiplies it by itself.
        result = silly.square(7)
        self.assertEqual(result, 49)

    def test_square_bad_input(self):
        # 'square' raises a TypeError if it's given bad input, say a
        # string.
        self.assertRaises(TypeError, silly.square, "orange")
Here you have a class that inherits from testtools.TestCase and bundles
together a bunch of related tests. The tests themselves are methods on that
class that begin with test_.
Running your tests
You can run these tests in many ways. testtools provides a very basic
mechanism for doing so:
$ python -m testtools.run exampletest
Tests running...
Ran 2 tests in 0.000s
OK
where ‘exampletest’ is a module that contains unit tests. By default,
testtools.run will not recursively search the module or package for unit
tests. To do this, you will need to either have the discover module
installed or have Python 2.7 or later, and then run:
$ python -m testtools.run discover packagecontainingtests
For more information see the Python 2.7 unittest documentation, or:
python -m testtools.run --help
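testtools.run uses the standard unittest loading machinery, so (assuming the
module is importable) you should also be able to name a single test class or
method by its dotted path:

$ python -m testtools.run exampletest.TestSillySquare.test_square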
As your testing needs grow and evolve, you will probably want to use a more
sophisticated test runner. There are many of these for Python, and almost all
of them will happily run testtools tests.
From now on, we’ll assume that you know how to run your tests.
Running tests with Distutils
If you are using Distutils to build your Python project, you can use the testtools
Distutils command to integrate testtools into your Distutils workflow:
from distutils.core import setup
from testtools import TestCommand

setup(name='foo',
      version='1.0',
      py_modules=['foo'],
      cmdclass={'test': TestCommand})
You can then run:
$ python setup.py test -m exampletest
Tests running...
Ran 2 tests in 0.000s
OK
For more information about the capabilities of the TestCommand command see:
$ python setup.py test --help
You can use the setup configuration to specify the default behavior of the
TestCommand command.
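For example, distutils commands read their defaults from setup.cfg, so you
could make ‘exampletest’ (the module from the earlier example) the default
module to test with a section like this:

[test]
test_module = exampletest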
Assertions
The core of automated testing is making assertions about the way things are,
and getting a nice, helpful, informative error message when things are not as
they ought to be.
All of the assertions that you can find in Python standard unittest can be
found in testtools (remember, testtools extends unittest). testtools changes
the behaviour of some of those assertions slightly and adds some new
assertions that you will almost certainly find useful.
Improved assertRaises
TestCase.assertRaises returns the caught exception. This is useful for
asserting more things about the exception than just the type:
def test_square_bad_input(self):
    # 'square' raises a TypeError if it's given bad input, say a
    # string.
    e = self.assertRaises(TypeError, silly.square, "orange")
    self.assertEqual("orange", e.bad_value)
    self.assertEqual("Cannot square 'orange', not a number.", str(e))
Note that this is incompatible with the assertRaises in unittest2 and
Python 2.7.
ExpectedException
If you are using a version of Python that supports the with context
manager syntax, you might prefer to use that syntax to ensure that code raises
particular errors. ExpectedException does just that. For example:
def test_square_root_bad_input_2(self):
    # 'square' raises a TypeError if it's given bad input.
    with ExpectedException(TypeError, "Cannot square.*"):
        silly.square('orange')
The first argument to ExpectedException is the type of exception you
expect to see raised. The second argument is an optional regular expression;
if provided, the str() of the raised exception must match it.
assertIn, assertNotIn
These two assertions check whether a value is in a sequence and whether a
value is not in a sequence. They are “assert” versions of the in and
not in operators. For example:
def test_assert_in_example(self):
    self.assertIn('a', 'cat')
    self.assertNotIn('o', 'cat')
    self.assertIn(5, list_of_primes_under_ten)
    self.assertNotIn(12, list_of_primes_under_ten)
assertIs, assertIsNot
These two assertions check whether values are identical to one another. This
is sometimes useful when you want to test something more strict than mere
equality. For example:
def test_assert_is_example(self):
    foo = [None]
    foo_alias = foo
    bar = [None]
    self.assertIs(foo, foo_alias)
    self.assertIsNot(foo, bar)
    self.assertEqual(foo, bar)  # They are equal, but not identical.
assertIsInstance
As much as we love duck-typing and polymorphism, sometimes you need to check
whether or not a value is of a given type. This method does that. For
example:
def test_assert_is_instance_example(self):
    now = datetime.now()
    self.assertIsInstance(now, datetime)
Note that there is no assertIsNotInstance in testtools currently.
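If you need the inverse check, one workaround (assuming your version of
testtools ships the IsInstance matcher) is to combine matchers with
assertThat:

from testtools.matchers import IsInstance, Not

def test_assert_is_not_instance_workaround(self):
    now = datetime.now()
    # There is no assertIsNotInstance, but Not(IsInstance(...)) does the job.
    self.assertThat(now, Not(IsInstance(str)))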
expectFailure
Sometimes it’s useful to write tests that fail. For example, you might want
to turn a bug report into a unit test, but you don’t know how to fix the bug
yet. Or perhaps you want to document a known, temporary deficiency in a
dependency.
testtools gives you TestCase.expectFailure to help with this. You use
it to say that you expect this assertion to fail. When the test runs and the
assertion fails, testtools will report it as an “expected failure”.
Here’s an example:
def test_expect_failure_example(self):
    self.expectFailure(
        "cats should be dogs", self.assertEqual, 'cats', 'dogs')
As long as ‘cats’ is not equal to ‘dogs’, the test will be reported as an
expected failure.
If ever by some miracle ‘cats’ becomes ‘dogs’, then testtools will report an
“unexpected success”. Unlike standard unittest, testtools treats this as
something that fails the test suite, like an error or a failure.
Matchers
The built-in assertion methods are very useful; they are the bread and butter
of writing tests. However, soon enough you will probably want to write your
own assertions. Perhaps there are domain specific things that you want to
check (e.g. assert that two widgets are aligned parallel to the flux grid), or
perhaps you want to check something that could almost but not quite be found
in some other standard library (e.g. assert that two paths point to the same
file).
When you are in such situations, you could either make a base class for your
project that inherits from testtools.TestCase and make sure that all of
your tests derive from that, or you could use the testtools Matcher
system.
Using Matchers
Here’s a really basic example using stock matchers found in testtools:
import testtools
from testtools.matchers import Equals

class TestSquare(testtools.TestCase):

    def test_square(self):
        result = square(7)
        self.assertThat(result, Equals(49))
The line self.assertThat(result, Equals(49)) is equivalent to
self.assertEqual(result, 49) and means “assert that result equals 49”.
The difference is that assertThat is a more general method that takes some
kind of observed value (in this case, result) and any matcher object
(here, Equals(49)).
The matcher object could be absolutely anything that implements the Matcher
protocol. This means that you can make more complex matchers by combining
existing ones:
def test_square_silly(self):
    result = square(7)
    self.assertThat(result, Not(Equals(50)))
Which is roughly equivalent to:
def test_square_silly(self):
    result = square(7)
    self.assertNotEqual(result, 50)
Stock matchers
testtools comes with many matchers built in. They can all be found in and
imported from the testtools.matchers module.
Equals
Matches if two items are equal. For example:
def test_equals_example(self):
    self.assertThat([42], Equals([42]))
Is
Matches if two items are identical. For example:
def test_is_example(self):
    foo = object()
    self.assertThat(foo, Is(foo))
The raises helper
Matches if a callable raises a particular type of exception. For example:
def test_raises_example(self):
    self.assertThat(lambda: 1/0, raises(ZeroDivisionError))
This is actually a convenience function that combines two other matchers:
Raises and MatchesException.
DocTestMatches
Matches a string as if it were the output of a doctest example. Very useful
for making assertions about large chunks of text. For example:
import doctest

def test_doctest_example(self):
    output = "Colorless green ideas"
    self.assertThat(
        output,
        DocTestMatches("Colorless ... ideas", doctest.ELLIPSIS))
We highly recommend using the following flags:
doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE | doctest.REPORT_NDIFF
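For instance, a sketch of the previous example with all three flags combined:

import doctest

def test_doctest_flags_example(self):
    # Combine the recommended flags into one value.
    flags = (doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE
             | doctest.REPORT_NDIFF)
    output = "Colorless   green\nideas"
    # NORMALIZE_WHITESPACE means the extra spaces and newline don't matter.
    self.assertThat(output, DocTestMatches("Colorless ... ideas", flags))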
LessThan
Matches if the given thing is less than the thing in the matcher. For
example:
def test_less_than_example(self):
    self.assertThat(2, LessThan(3))
StartsWith, EndsWith
These matchers check to see if a string starts with or ends with a particular
substring. For example:
def test_starts_and_ends_with_example(self):
    self.assertThat('underground', StartsWith('und'))
    self.assertThat('underground', EndsWith('und'))
MatchesException
Matches an exc_info tuple if the exception is of the correct type. For
example:
def test_matches_exception_example(self):
    try:
        raise RuntimeError('foo')
    except RuntimeError:
        exc_info = sys.exc_info()
    self.assertThat(exc_info, MatchesException(RuntimeError))
    self.assertThat(exc_info, MatchesException(RuntimeError('foo')))
Most of the time, you will want to use the raises helper instead.
NotEquals
Matches if something is not equal to something else. Note that this is subtly
different from Not(Equals(x)): NotEquals(x) will match a value y if
y != x, whereas Not(Equals(x)) will match y if not y == x.
You only need to worry about this distinction if you are testing code that
relies on badly written overloaded equality operators.
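To see the difference, consider this deliberately badly written class
(hypothetical, purely for illustration):

class Contrary(object):
    # Badly written: == and != both report True against anything.
    def __eq__(self, other):
        return True
    def __ne__(self, other):
        return True

def test_not_equals_distinction(self):
    # NotEquals(1) uses the != comparison, which Contrary answers with
    # True, so this matches.
    self.assertThat(Contrary(), NotEquals(1))
    # Not(Equals(1)) uses the == comparison, which Contrary also answers
    # with True, so Equals matches and this assertion fails.
    self.assertThat(Contrary(), Not(Equals(1)))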
KeysEqual
Matches if the keys of one dict are equal to the keys of another dict. For
example:
def test_keys_equal(self):
    x = {'a': 1, 'b': 2}
    y = {'a': 2, 'b': 3}
    self.assertThat(x, KeysEqual(y))
MatchesRegex
Matches a string against a regular expression, which is a wonderful thing to
be able to do, if you think about it:
def test_matches_regex_example(self):
    self.assertThat('foo', MatchesRegex('fo+'))
Combining matchers
One great thing about matchers is that you can readily combine existing
matchers to get variations on their behaviour or to quickly build more complex
assertions.
Below are a few of the combining matchers that come with testtools.
Not
Negates another matcher. For example:
def test_not_example(self):
    self.assertThat([42], Not(Equals("potato")))
    self.assertThat([42], Not(Is([42])))
If you find yourself using Not frequently, you may wish to create a custom
matcher for it. For example:
IsNot = lambda x: Not(Is(x))

def test_not_example_2(self):
    self.assertThat([42], IsNot([42]))
Annotate
Used to add custom notes to a matcher. For example:
def test_annotate_example(self):
    result = 43
    self.assertThat(
        result, Annotate("Not the answer to the Question!", Equals(42)))
Since the annotation is only ever displayed when there is a mismatch
(e.g. when result does not equal 42), it’s a good idea to phrase the note
negatively, so that it describes what a mismatch actually means.
As with Not, you may wish to create a custom matcher that describes a
common operation. For example:
PoliticallyEquals = lambda x: Annotate("Death to the aristos!", Equals(x))

def test_annotate_example_2(self):
    self.assertThat("orange", PoliticallyEquals("yellow"))
AfterPreprocessing
Used to make a matcher that applies a function to the matched object before
matching. This can be used to aid in creating trivial matchers as functions, for
example:
def test_after_preprocessing_example(self):
    def PathHasFileContent(content):
        def _read(path):
            return open(path).read()
        return AfterPreprocessing(_read, Equals(content))
    self.assertThat('/tmp/foo.txt', PathHasFileContent("Hello world!"))
MatchesAll
Combines many matchers to make a new matcher. The new matcher will only match
things that match every single one of the component matchers.
It’s much easier to understand in Python than in English:
def test_matches_all_example(self):
    has_und_at_both_ends = MatchesAll(StartsWith("und"), EndsWith("und"))
    # This will succeed.
    self.assertThat("underground", has_und_at_both_ends)
    # This will fail.
    self.assertThat("found", has_und_at_both_ends)
    # So will this.
    self.assertThat("undead", has_und_at_both_ends)
At this point some people ask themselves, “Why bother doing this at all? Why
not just have two separate assertions?”. It’s a good question.
The first reason is that when a MatchesAll gets a mismatch, the error will
include information about all of the bits that mismatched. When you have two
separate assertions, as below:
def test_two_separate_assertions(self):
    self.assertThat("foo", StartsWith("und"))
    self.assertThat("foo", EndsWith("und"))
Then you get absolutely no information from the second assertion if the first
assertion fails. Tests are largely there to help you debug code, so having
more information in error messages is a big help.
The second reason is that it is sometimes useful to give a name to a set of
matchers. has_und_at_both_ends is a bit contrived, of course, but it is
clear.
MatchesAny
Like MatchesAll, MatchesAny combines many matchers to make a new
matcher. The difference is that the new matcher will match a thing if it
matches any of the component matchers.
For example:
def test_matches_any_example(self):
    self.assertThat(42, MatchesAny(Equals(5), Not(Equals(6))))
MatchesListwise
Where MatchesAny and MatchesAll combine many matchers to match a
single value, MatchesListwise combines many matchers to match many values.
For example:
def test_matches_listwise_example(self):
    self.assertThat(
        [1, 2, 3], MatchesListwise(map(Equals, [1, 2, 3])))
This is useful for writing custom, domain-specific matchers.
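For example, a sketch of a small domain-specific matcher built on top of it
(the name and the RGB color domain are invented for illustration):

def MatchesColor(red, green, blue):
    # Match an (r, g, b) triple component-wise.
    return MatchesListwise([Equals(red), Equals(green), Equals(blue)])

def test_matches_color_example(self):
    self.assertThat((0, 255, 0), MatchesColor(0, 255, 0))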
MatchesSetwise
Combines many matchers to match many values, without regard to their order.
Here’s an example:
def test_matches_setwise_example(self):
    self.assertThat(
        [1, 2, 3], MatchesSetwise(Equals(2), Equals(3), Equals(1)))
Much like MatchesListwise, best used for writing custom, domain-specific
matchers.
MatchesStructure
Creates a matcher that matches certain attributes of an object against a
pre-defined set of matchers.
It’s much easier to understand in Python than in English:
def test_matches_structure_example(self):
    foo = Foo()
    foo.a = 1
    foo.b = 2
    matcher = MatchesStructure(a=Equals(1), b=Equals(2))
    self.assertThat(foo, matcher)
MatchesStructure.from_example takes an object and a list of attributes and
creates a MatchesStructure matcher where each attribute of the matched
object must equal each attribute of the example object. For example:
matcher = MatchesStructure.from_example(foo, 'a', 'b')
is exactly equivalent to matcher in the previous example.
Raises
Takes whatever the callable raises as an exc_info tuple and matches it against
whatever matcher it was given. For example, if you want to assert that a
callable raises an exception of a given type:
def test_raises_example(self):
    self.assertThat(
        lambda: 1/0, Raises(MatchesException(ZeroDivisionError)))
Although note that this could also be written as:
def test_raises_example_convenient(self):
    self.assertThat(lambda: 1/0, raises(ZeroDivisionError))
See also MatchesException and the raises helper.
Writing your own matchers
Combining matchers is fun and can get you a very long way indeed, but
sometimes you will have to write your own. Here’s how.
You need to make two closely-linked objects: a Matcher and a
Mismatch. The Matcher knows how to actually make the comparison, and
the Mismatch knows how to describe a failure to match.
Here’s an example matcher:
class IsDivisibleBy(object):
    """Match if a number is divisible by another number."""

    def __init__(self, divider):
        self.divider = divider

    def __str__(self):
        return 'IsDivisibleBy(%s)' % (self.divider,)

    def match(self, actual):
        remainder = actual % self.divider
        if remainder != 0:
            return IsDivisibleByMismatch(actual, self.divider, remainder)
        else:
            return None
The matcher has a constructor that takes parameters that describe what you
actually expect, in this case a number that other numbers ought to be
divisible by. It has a __str__ method, the result of which is displayed
on failure by assertThat, and a match method that does the actual
matching.
match takes something to match against, here actual, and decides
whether or not it matches. If it does match, then match must return
None. If it does not match, then match must return a Mismatch
object. assertThat will call match and then fail the test if it
returns a non-None value. For example:
def test_is_divisible_by_example(self):
    # This succeeds, since IsDivisibleBy(5).match(10) returns None.
    self.assertThat(10, IsDivisibleBy(5))
    # This fails, since IsDivisibleBy(7).match(10) returns a mismatch.
    self.assertThat(10, IsDivisibleBy(7))
The mismatch is responsible for what sort of error message the failing test
generates. Here’s an example mismatch:
class IsDivisibleByMismatch(object):

    def __init__(self, number, divider, remainder):
        self.number = number
        self.divider = divider
        self.remainder = remainder

    def describe(self):
        return "%s is not divisible by %s, %s remains" % (
            self.number, self.divider, self.remainder)

    def get_details(self):
        return {}
The mismatch takes information about the mismatch, and provides a describe
method that assembles all of that into a nice error message for end users.
You can use the get_details method to provide extra, arbitrary data with
the mismatch (e.g. the contents of a log file). Most of the time it’s fine to
just return an empty dict. You can read more about Details elsewhere in this
document.
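If you do want to provide extra data, a sketch (assuming the mismatch object
stores the relevant log text in a log_text attribute) might look like:

from testtools.content import text_content

def get_details(self):
    # Ship the captured log alongside the textual description.
    return {'log': text_content(self.log_text)}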
Sometimes you don’t need to create a custom mismatch class. In particular, if
you don’t care when the description is calculated, then you can just do that
in the Matcher itself like this:
def match(self, actual):
    remainder = actual % self.divider
    if remainder != 0:
        return Mismatch(
            "%s is not divisible by %s, %s remains" % (
                actual, self.divider, remainder))
    else:
        return None
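Here, Mismatch is the generic mismatch class that ships with testtools:

from testtools.matchers import Mismatch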
Details
As we may have mentioned once or twice already, one of the great benefits of
automated tests is that they help find, isolate and debug errors in your
system.
Frequently however, the information provided by a mere assertion failure is
not enough. It’s often useful to have other information: the contents of log
files; what queries were run; benchmark timing information; what state certain
subsystem components are in and so forth.
testtools calls all of these things “details” and provides a single, powerful
mechanism for including this information in your test run.
Here’s an example of how to add them:
from testtools import TestCase
from testtools.content import text_content

class TestSomething(TestCase):

    def test_thingy(self):
        self.addDetail('arbitrary-color-name', text_content("blue"))
        1 / 0  # Gratuitous error!
A detail is an arbitrary piece of content given a name that’s unique within
the test. Here the name is arbitrary-color-name and the content is
text_content("blue"). The name can be any text string, and the content
can be any testtools.content.Content object.
When the test runs, testtools will show you something like this:
======================================================================
ERROR: exampletest.TestSomething.test_thingy
----------------------------------------------------------------------
Text attachment: arbitrary-color-name
------------
blue
------------
Text attachment: traceback
------------
Traceback (most recent call last):
  ...
  File "exampletest.py", line 8, in test_thingy
    1 / 0  # Gratuitous error!
ZeroDivisionError: integer division or modulo by zero
------------
Ran 1 test in 0.030s
As you can see, the detail is included as a “Text attachment”, here saying
that our arbitrary-color-name is “blue”.
Content
For the actual content of details, testtools uses its own MIME-based Content
object. This allows you to attach any information that you could possibly
conceive of to a test, and allows testtools to use or serialize that
information.
The basic testtools.content.Content object is constructed from a
testtools.content.ContentType and a nullary callable that must return an
iterator of chunks of bytes that the content is made from.
So, to make a Content object that is just a simple string of text, you can
do:
from testtools.content import Content
from testtools.content_type import ContentType
text = Content(ContentType('text', 'plain'), lambda: ["some text"])
Because adding small bits of text content is very common, there’s also a
convenience method:
text = text_content("some text")
To make content out of an image stored on disk, you could do something like:
image = Content(ContentType('image', 'png'), lambda: [open('foo.png').read()])
Or you could use the convenience function:
image = content_from_file('foo.png', ContentType('image', 'png'))
The lambda helps make sure that the file is opened and the actual bytes
read only when they are needed – by default, when the test is finished. This
means that tests can construct and add Content objects freely without worrying
too much about how they affect run time.
A realistic example
A very common use of details is to add a log file to failing tests. Say your
project has a server represented by a class SomeServer that you can start
up and shut down in tests, but runs in another process. You want to test
interaction with that server, and whenever the interaction fails, you want to
see the client-side error and the logs from the server-side. Here’s how you
might do it:
from testtools import TestCase
from testtools.content import Content
from testtools.content_type import UTF8_TEXT
from myproject import SomeServer

class SomeTestCase(TestCase):

    def setUp(self):
        super(SomeTestCase, self).setUp()
        self.server = SomeServer()
        self.server.start_up()
        self.addCleanup(self.server.shut_down)
        self.addCleanup(self.attach_log_file)

    def attach_log_file(self):
        self.addDetail(
            'log-file',
            Content(UTF8_TEXT,
                    lambda: open(self.server.logfile, 'r').readlines()))

    def test_a_thing(self):
        self.assertEqual("cool", self.server.temperature)
This test case will attach the log file of SomeServer to each test that is
run. testtools will only display the log file for failing tests, so it’s not
such a big deal.
If the act of adding a detail is expensive, you might want to use
addOnException so that you only do it when a test actually raises an
exception.
Controlling test execution
addCleanup
TestCase.addCleanup is a robust way to arrange for a clean up function to
be called before tearDown. This is a powerful and simple alternative to
putting clean up logic in a try/finally block or tearDown method. For
example:
def test_foo(self):
    foo.lock()
    self.addCleanup(foo.unlock)
    ...
This is particularly useful if you have some sort of factory in your test:
def make_locked_foo(self):
    foo = Foo()
    foo.lock()
    self.addCleanup(foo.unlock)
    return foo

def test_frotz_a_foo(self):
    foo = self.make_locked_foo()
    foo.frotz()
    self.assertEqual(foo.frotz_count, 1)
Any extra arguments or keyword arguments passed to addCleanup are passed
to the callable at cleanup time.
Cleanups can also report multiple errors, if appropriate, by wrapping them in
a testtools.MultipleExceptions object:
raise MultipleExceptions(exc_info1, exc_info2)
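For example, a hypothetical cleanup that shuts down several resources and
reports every failure rather than just the first one:

import sys

from testtools import MultipleExceptions

def shut_down_all(resources):
    # Attempt every shutdown; collect each failure as an exc_info tuple.
    exc_infos = []
    for resource in resources:
        try:
            resource.shut_down()
        except Exception:
            exc_infos.append(sys.exc_info())
    if exc_infos:
        raise MultipleExceptions(*exc_infos)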
Fixtures
Tests often depend on a system being set up in a certain way, or having
certain resources available to them. Perhaps a test needs a connection to the
database or access to a running external server.
One common way of doing this is to do:
class SomeTest(TestCase):

    def setUp(self):
        super(SomeTest, self).setUp()
        self.server = Server()
        self.server.setUp()
        self.addCleanup(self.server.tearDown)
testtools provides a more convenient, declarative way to do the same thing:
class SomeTest(TestCase):

    def setUp(self):
        super(SomeTest, self).setUp()
        self.server = self.useFixture(Server())
useFixture(fixture) calls setUp on the fixture, schedules a clean up
to clean it up, and schedules a clean up to attach all details held by the
fixture to the test case. The fixture object must meet the
fixtures.Fixture protocol (version 0.3.4 or newer, see fixtures).
If you have anything beyond the most simple test set up, we recommend that
you put this set up into a Fixture class. Once there, the fixture can be
easily re-used by other tests and can be combined with other fixtures to make
more complex resources.
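A minimal sketch, wrapping the hypothetical Server from above in a fixture
(assuming the fixtures library is available):

from fixtures import Fixture

class ServerFixture(Fixture):
    """Start a Server and guarantee that it gets cleaned up."""

    def setUp(self):
        super(ServerFixture, self).setUp()
        self.server = Server()
        self.server.setUp()
        self.addCleanup(self.server.tearDown)

A test would then obtain a running server with
server = self.useFixture(ServerFixture()).server.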
Skipping tests
Many reasons exist to skip a test: a dependency might be missing; a test might
be too expensive and thus should not be run while on battery power; or perhaps
the test is testing an incomplete feature.
TestCase.skipTest is a simple way to have a test stop running and be
reported as a skipped test, rather than a success, error or failure. For
example:
def test_make_symlink(self):
    symlink = getattr(os, 'symlink', None)
    if symlink is None:
        self.skipTest("No symlink support")
    symlink(whatever, something_else)
Using skipTest means that you can make decisions about what tests to run
as late as possible, and close to the actual tests. Without it, you might be
forced to use convoluted logic during test loading, which is a bit of a mess.
Legacy skip support
If you are using this feature when running your test suite with a legacy
TestResult object that is missing the addSkip method, then the
addError method will be invoked instead. If you are using a test result
from testtools, you do not have to worry about this.
In older versions of testtools, skipTest was known as skip. Since
Python 2.7 added skipTest support, the skip name is now deprecated.
No warning is emitted yet – some time in the future we may do so.
addOnException
Sometimes, you might wish to do something only when a test fails. Perhaps you
need to run expensive diagnostic routines or some such.
TestCase.addOnException allows you to easily do just this. For example:
class SomeTest(TestCase):

    def setUp(self):
        super(SomeTest, self).setUp()
        self.server = self.useFixture(SomeServer())
        self.addOnException(self.attach_server_diagnostics)

    def attach_server_diagnostics(self, exc_info):
        self.server.prep_for_diagnostics()  # Expensive!
        self.addDetail('server-diagnostics', self.server.get_diagnostics())

    def test_a_thing(self):
        self.assertEqual('cheese', 'chalk')
In this example, attach_server_diagnostics will only be called when a test
fails. It is given the exc_info tuple of the error raised by the test, just
in case it is needed.
Twisted support
testtools provides highly experimental support for running Twisted tests –
tests that return a Deferred and rely on the Twisted reactor. You should not
use this feature right now. We reserve the right to change the API and
behaviour without telling you first.
However, if you are going to, here’s how you do it:
from testtools import TestCase
from testtools.deferredruntest import AsynchronousDeferredRunTest

class MyTwistedTests(TestCase):

    run_tests_with = AsynchronousDeferredRunTest

    def test_foo(self):
        # ...
        return d
In particular, note that you do not have to use a special base TestCase
in order to run Twisted tests.
You can also run individual tests within a test case class using the Twisted
test runner:
class MyTestsSomeOfWhichAreTwisted(TestCase):

    def test_normal(self):
        pass

    @run_test_with(AsynchronousDeferredRunTest)
    def test_twisted(self):
        # ...
        return d
Here are some tips for converting your Trial tests into testtools tests.
- Use the AsynchronousDeferredRunTest runner
- Make sure to upcall to setUp and tearDown
- Don’t use setUpClass or tearDownClass
- Don’t expect setting .todo, .timeout or .skip attributes to do anything
- flushLoggedErrors is testtools.deferredruntest.flush_logged_errors
- assertFailure is testtools.deferredruntest.assert_fails_with (see the
  sketch after this list)
- Trial spins the reactor a couple of times before cleaning it up,
AsynchronousDeferredRunTest does not. If you rely on this behavior, use
AsynchronousDeferredRunTestForBrokenTwisted.
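As a sketch, here is what a converted assertFailure test might look like
(divide_async is a hypothetical function returning a Deferred):

from testtools import TestCase
from testtools.deferredruntest import (
    AsynchronousDeferredRunTest,
    assert_fails_with,
    )

class TestDivision(TestCase):

    run_tests_with = AsynchronousDeferredRunTest

    def test_divide_by_zero(self):
        # Where Trial would use self.assertFailure(d, ZeroDivisionError).
        d = divide_async(1, 0)
        return assert_fails_with(d, ZeroDivisionError)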
Test helpers
testtools comes with a few little things that make it a little bit easier to
write tests.
TestCase.patch
patch is a convenient way to monkey-patch a Python object for the duration
of your test. It’s especially useful for testing legacy code. e.g.:
def test_foo(self):
    my_stream = StringIO()
    self.patch(sys, 'stderr', my_stream)
    run_some_code_that_prints_to_stderr()
    self.assertEqual('', my_stream.getvalue())
The call to patch above masks sys.stderr with my_stream so that
anything printed to stderr will be captured in a StringIO object that can
then be tested. Once the test is done, the real sys.stderr is restored to
its rightful place.
Creation methods
Often when writing unit tests, you want to create an object that is a
completely normal instance of its type. You don’t want there to be anything
special about its properties, because you are testing generic behaviour rather
than specific conditions.
A lot of the time, test authors do this by making up silly strings and numbers
and passing them to constructors (e.g. 42, ‘foo’, “bar” etc), and that’s
fine. However, sometimes it’s useful to be able to create arbitrary objects
at will, without having to make up silly sample data.
To help with this, testtools.TestCase implements creation methods called
getUniqueString and getUniqueInteger. They return strings and
integers that are unique within the context of the test that can be used to
assemble more complex objects. Here’s a basic example where
getUniqueString is used instead of saying “foo” or “bar” or whatever:
class SomeTest(TestCase):

    def test_full_name(self):
        first_name = self.getUniqueString()
        last_name = self.getUniqueString()
        p = Person(first_name, last_name)
        self.assertEqual(p.full_name, "%s %s" % (first_name, last_name))
And here’s how it could be used to make a complicated test:
class TestCoupleLogic(TestCase):

    def make_arbitrary_person(self):
        return Person(self.getUniqueString(), self.getUniqueString())

    def test_get_invitation(self):
        a = self.make_arbitrary_person()
        b = self.make_arbitrary_person()
        couple = Couple(a, b)
        event_name = self.getUniqueString()
        invitation = couple.get_invitation(event_name)
        self.assertEqual(
            invitation,
            "We invite %s and %s to %s" % (
                a.full_name, b.full_name, event_name))
Essentially, creation methods like these are a way of reducing the number of
assumptions in your tests and communicating to test readers that the exact
details of certain variables don’t actually matter.
See pages 419-423 of xUnit Test Patterns by Gerard Meszaros for a detailed
discussion of creation methods.