I’ll be creating more complex PIR programs soon, but first I want to stop for a minute and look at testing in Parrot. Why? Code is a weird thing. You need to pin down its behavior as specifically as you can, or it’ll become unreadable before you realize what’s going on. Good tests help you describe how your program should behave. Tests aren’t a magic pill that will guarantee perfect programs, but they will help you check that your program behaves the way you claim it does.
There are many testing libraries in the programming world, but I will focus on Test::More for Parrot.
Using Test::More to Write Tests
Test::More is more or less an implementation of Perl’s Test::More.
It provides a set of simple assertions such as ok, is, and isnt, along with a few testing-specific commands like plan and diag.
I’ll be looking at some of those simple assertions, but not spending so much time on the testing commands.
This is a Babystep, after all.
Test::More is already included in the standard Parrot runtime, so we don’t need to do anything special to install it. Even better, there’s a test_more.pir include file that imports all of the important Test::More subroutines automatically.
Let’s start writing tests.
Every test needs a plan.
The plan subroutine in Test::More tells the world one simple thing: how many tests are in this file.
Accuracy is important, because it’s no fun when you are told to expect ten tests but only five run.
The other five might not have run for a number of reasons: the test script failed, Parrot failed in some mysterious way, or you just forgot to mention that you removed half of your tests.
We don’t plan to have any tests yet, so let’s be honest.
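The original listing isn’t preserved here, but judging by the silent run shown below, an honest first test file is just a skeleton with the include and nothing else. A sketch:

```pir
.sub main :main
    .include 'test_more.pir'

    # No tests yet, and no plan declared - honest, if a little dull.
.end
```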
The .include directive inserts the contents of test_more.pir into the subroutine, which saves us a lot of namespace wrangling.
The testing starts when a plan is declared.
Of course, this is not the most exciting test plan in the world to run.
```
$ parrot example-08-01.pir
$
```
What if we lie?
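The lying version of the file (reconstructed; the original listing isn’t shown) claims ten tests and then runs none of them:

```pir
.sub main :main
    .include 'test_more.pir'

    plan(10)    # claim ten tests...
    # ...and then don't run a single one
.end
```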
Running this is a little different.
```
$ parrot example-08-02.pir
1..10
```
Now Parrot is telling whoever cares that there will be ten tests in this file. It’s true that nothing exploded. For right now, you’re going to have to trust me when I say that honesty is the best policy. You’ll see later that some tools do care about how many tests you claim to run.
Sometimes we want to make a comment in our test for the world to see.
We could just say what we want to say, but Test::More provides the diag subroutine to produce those comments in a manner that will make testers happy later.
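A reconstruction of the missing listing, adding a diag call to the lying plan:

```pir
.sub main :main
    .include 'test_more.pir'

    plan(10)
    diag("There are no tests. The plan is a lie.")
.end
```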
What does this produce?
```
$ parrot example-08-03.pir
1..10
# There are no tests. The plan is a lie.
```
That’s supposed to make our diagnostic comment stand out from the test results without confusing anyone.
But the diagnostic makes me sad.
Let’s write an actual test.
ok takes two arguments:
- The value you are testing
- A description of the test
The value being tested is obviously the most important part, but don’t underestimate the helpfulness of those descriptions. They are a form of documentation.
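Put together, an honest one-test file might look like this (a reconstruction of the missing listing):

```pir
.sub main :main
    .include 'test_more.pir'

    plan(1)
    ok(1, "`ok` tests for simple truth")
.end
```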
```
$ parrot example-08-04.pir
1..1
ok 1 - `ok` tests for simple truth
```
The test in ok is one of simple truth as seen by Parrot. We already saw that anything which looks like 0 or an empty string is considered false by Parrot, while everything else is considered true.
What happens when we introduce a test that we know will fail?
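Something like this, perhaps (reconstructed; note the plan bumped up to two tests):

```pir
.sub main :main
    .include 'test_more.pir'

    plan(2)
    ok(1, "`ok` tests for simple truth")
    ok(0, "0 is false, so this should fail.")
.end
```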
You updated your plan, right? Anyways, let’s see what this produces.
```
$ parrot example-08-05.pir
1..2
ok 1 - `ok` tests for simple truth
not ok 2 - 0 is false, so this should fail.
```
Oh hey, this is starting to get interesting!
Now we can see clearly that the output from ok is a line split into three parts:
- The result of the test: ok or not ok
- The test number
- Our description string
ok has shown us what a test result line looks like.
Let’s look at some of the other simple assertions.
Sometimes you are more concerned if something is true which shouldn’t be. For example, let’s say we have a Web site building script. It builds temporary cache files to save time when building subpage links, but those cache files need to go away when it’s done. So we would test for existence of a cache file and fail if the file exists.
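A sketch of that test, assuming Parrot’s stat op with flag 0 as the existence check (the original listing isn’t preserved):

```pir
.sub main :main
    .include 'test_more.pir'

    plan(1)

    # stat with flag 0 asks whether the file exists
    $I0 = stat 'subpages.data', 0
    nok($I0, "Cache files should be cleaned up")
.end
```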
The assertion may be nok, but the output is still ok or not ok, depending on whether the assertion held.
```
$ parrot example-08-06.pir
1..1
ok 1 - Cache files should be cleaned up
```
What does it look like if we deliberately confuse things?
```
$ touch subpages.data
$ parrot example-08-06.pir
1..1
not ok 1 - Cache files should be cleaned up
```
Yes. That’s what I hoped to see. Let’s clean up after ourselves to avoid future confusion.
```
$ rm subpages.data
```
There are many times where we want to compare two values.
Let’s continue with our Web site building tool.
This tool sets the title of a page in metadata.
We obviously want to be certain that it reads the metadata correctly.
We would use the is assertion for that kind of test.
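A sketch of that test; in the real tool the title would come from the page metadata, but it is hard-coded here so the example stands alone:

```pir
.sub main :main
    .include 'test_more.pir'

    plan(1)

    .local string title
    # Stand-in for reading the title out of the page metadata.
    title = "08 - Test::More and Tapir"
    is(title, "08 - Test::More and Tapir", "The title should be correct.")
.end
```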
Anybody know what we should see?
```
$ parrot example-08-07.pir
1..1
ok 1 - The title should be correct.
```
Let’s deliberately mess things up again so we know what failure of is looks like.
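The broken version (reconstructed from the diagnostic output below) feeds is the wrong title:

```pir
.sub main :main
    .include 'test_more.pir'

    plan(1)

    .local string title
    # Deliberately wrong, so the test fails.
    title = "I am a Walrus"
    is(title, "08 - Test::More and Tapir", "The title should be correct.")
.end
```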
A failing is produces some useful information.
```
$ parrot example-08-08.pir
1..1
not ok 1 - The title should be correct.
# Have: I am a Walrus
# Want: 08 - Test::More and Tapir
```
There’s the test result line, which shows `not ok', just like we expected. We also have a couple of diagnostic lines describing what we want and what we actually have.
ok has its opposite assertion nok, so there must be an opposite for is, right? There sure is.
Occasionally we care less about what a value is than making sure it’s not something in particular. Maybe we have a user registration process that uses social security numbers to satisfy an obscure corporate tracking requirement, but can’t save them as-is because of privacy concerns. In this case we don’t care what the stored value is. We want to be certain that it’s not the social security number.
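A sketch of that test; the obfuscated value here is a stand-in for whatever masking the registration process really does:

```pir
.sub main :main
    .include 'test_more.pir'

    plan(1)

    .local string ssn, stored
    ssn    = "5551234567"
    # Stand-in for the real obfuscation step.
    stored = "xxxxxx4567"
    isnt(stored, ssn, "SSN should not be stored as-is")
.end
```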
Really, nobody should be surprised by the output at this point.
```
1..1
ok 1 - SSN should not be stored as-is
```
What does a failed isnt look like?
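To find out, we store the number untouched (a reconstruction of the missing listing):

```pir
.sub main :main
    .include 'test_more.pir'

    plan(1)

    .local string ssn, stored
    ssn    = "5551234567"
    # Oops: stored exactly as-is, so isnt should fail.
    stored = ssn
    isnt(stored, ssn, "SSN should not be stored as-is")
.end
```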
The output diagnostic is once again straightforward.
```
$ parrot example-08-10.pir
1..1
not ok 1 - SSN should not be stored as-is
# Have: 5551234567
# Want: not 5551234567
```
The is assertion lets us down when we need to compare PMCs.
Well, it sort of works:
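Comparing two Hash PMCs with is might look like this (a sketch; the hash keys are inferred from the diagnostics shown later):

```pir
.sub main :main
    .include 'test_more.pir'

    plan(1)

    .local pmc super_man, super_woman
    super_man = new 'Hash'
    super_man['first'] = 'Super'
    super_man['last']  = 'Man'

    super_woman = new 'Hash'
    super_woman['first'] = 'Super'
    super_woman['last']  = 'Woman'

    is(super_man, super_woman, "Super Man is not Super Woman")
.end
```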
The output isn’t incredibly useful, though.
```
$ parrot example-08-11.pir
1..1
not ok 1 - Super Man is not Super Woman
# Have: Hash[0x25ee84]
# Want: Hash[0x25ee48]
```
Thankfully, we have the is_deeply assertion to tell us exactly how a test has failed.
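The same comparison, swapping in is_deeply (reconstructed; the hash contents are inferred from the diagnostic below):

```pir
.sub main :main
    .include 'test_more.pir'

    plan(1)

    .local pmc super_man, super_woman
    super_man = new 'Hash'
    super_man['first'] = 'Super'
    super_man['last']  = 'Man'

    super_woman = new 'Hash'
    super_woman['first'] = 'Super'
    super_woman['last']  = 'Woman'

    is_deeply(super_man, super_woman, "Super Man is not Super Woman")
.end
```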
Now we can see exactly which value in the PMC was different.
```
$ parrot example-08-12.pir
1..1
not ok 1 - Super Man is not Super Woman
# Mismatch at [last]: expected Man, received Woman
```
With is_deeply under our belt, we now know enough assertions to start putting them to use in real projects.
What About The Other Assertions and Commands?
We won’t be talking about them. I may eventually visit more as we get the hang of Parrot, but this is a good enough core to start with. Do you want to dig deeper? Go right ahead. The best resource for the moment is the documentation within Test::More itself.
TAP - The Test Anything Protocol
All of this output has looked remarkably consistent. There’s a reason for that. Test::More formats its result in a format known as TAP - the Test Anything Protocol. All of the output can be read by another program to provide you with a summary report. This other program is usually referred to as a test harness. The test harness runs your tests and then tells you how many of them failed, or if there were any surprises.
All I need is a test harness. I’ll be back to talk about Tapir very soon.
Hey, we can test now! We learned how to use the Test::More library, making simple assertions and reporting the results using the Test Anything Protocol. As long as we stay disciplined and run our tests regularly, we will learn immediately when we have an "inspired" moment that breaks existing code. Since I’m such a huge fan of Test-Driven Development, you can be assured of seeing many assertions in future Parrot Babysteps.