25 June 2008
Test::Unit, Rake and Hudson status
I spent a long time trying to figure out who was dropping the ball on the exit status. Was it Test::Unit? Rake? Windows? Hudson? I never found a definitive answer, though it was starting to look like it might be Test::Unit.
I did find this post and gave it a try. Unfortunately, that solution only helps when there is a non-zero exit status, and that is not the case when running Test::Unit from Rake.
The solution turned out to be much simpler than trying to figure out how to correct Rake or Test::Unit. Hudson to the rescue. Hudson supports a plugin architecture, and the repository has a plugin named Text-finder Plugin (download it here). It determines the success or failure of a build by looking for a regular expression in the build artifacts or the console output. I added a rule which looks for the expression /\d+ tests, \d+ assertions, 0 failures, 0 errors/ in the console output. If the expression is found, the build passes; otherwise it fails. Once installed, a Text Finder section appears in the job configuration panel.
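For the curious, here is the kind of summary line Test::Unit prints at the end of a run, and a quick Ruby sketch (the output line is representative, not captured from a real build) showing why that expression does the job:

# Representative Test::Unit summary line from the console output
summary = "12 tests, 34 assertions, 0 failures, 0 errors"

# The same pattern given to the Text-finder plugin
pattern = /\d+ tests, \d+ assertions, 0 failures, 0 errors/

puts(summary =~ pattern ? "build passes" : "build fails")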
Yet again Hudson solves the problem where other players are dropping the ball. The Hudson status now accurately conveys the result of running the unit tests.
14 June 2008
Ruby - Making a Mockery of Testing
The solution is to "mock" the news server. A mock is code that provides the same interface (or at least enough of it for the tests) as a "real" object or class, yet returns known data each time the test is run. Mocks can also be configured to know how many times a method should be called, in what order the calls should occur, the values that should be passed in and so on. If these conditions are not met, an error is raised and the test fails.
Ruby has several tools for creating mock objects. I settled on FlexMock after reading a few blogs. Others were passionate about Mocha - I'll take a look at that another time.
For the purposes of my test, a Ruby newsreader API needs to provide the following methods:
- Net::NNTP.new(host, port, timeout) - Specify the host, port and timeout values to use when connecting (class method)
- Net::NNTP.connect() - Connect to the host with the parameters set in new
- Net::NNTP.xover(groupname, :from=>low, :to=>high) - Retrieve the headers from group groupname in the range low..high
- Net::NNTP.group(group) - Set the group to fetch articles from
- Net::NNTP.article(id) - Retrieve the article with the given id
Notice there is one class method to mock. After a little looking, I found the answer in the FlexMock README file (see the section "Mocking Class Objects"). The answer is to have the class method return another flexmock object which serves as the instance.
nntp_mock = flexmock
# ... more stuff here to define nntp_mock (shown below) ...
flexmock(Net::NNTP).should_receive(:new).and_return(nntp_mock)
Now when the Net::NNTP.new method is called, it will return our flexmock instance, which handles the rest of the test.
Let's look at the flexmock instance object now:
nntp_mock = flexmock
nntp_mock.should_receive(:connect).once.
  with_no_args
nntp_mock.should_receive(:group).once.
  with(String).and_return(group_mock)
nntp_mock.should_receive(:xover).once.
  with(String, Hash).and_return([summary1, summary2])
nntp_mock.should_receive(:article).twice.
  with(String).and_return(article1, article2)
One interesting feature is that expectation declarations may be chained together. For example, the last declaration above specifies that:
- It responds to the method call 'article'
- It should be called exactly twice during the test (fail otherwise)
- It must receive exactly one parameter which is a string
- It returns the specified articles in order: article1 on the first call, article2 on the second
So with just a few lines of code we've written a stand-in for the newsgroup library that is sufficient for our test. Its behavior is deterministic, and the test will fail if any of the expectations set on the mock are not met. That is a lot of value for a small effort.
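To tie it all together, here is a minimal sketch of how such a test might look end to end. Note that Newsreader, fetch_articles, and the host and group names are hypothetical stand-ins for the real code under test, not names from my application:

require 'test/unit'
require 'flexmock/test_unit'
# ... plus whatever require provides Net::NNTP in your project ...

class NewsreaderTest < Test::Unit::TestCase
  def test_fetch_articles
    article1 = "article one"
    article2 = "article two"

    # Instance mock standing in for a connected NNTP session
    nntp_mock = flexmock
    nntp_mock.should_receive(:connect).once.with_no_args
    nntp_mock.should_receive(:group).once.with(String)
    nntp_mock.should_receive(:article).twice.
      with(String).and_return(article1, article2)

    # Any call to Net::NNTP.new now hands back the mock instead
    flexmock(Net::NNTP).should_receive(:new).and_return(nntp_mock)

    # Newsreader and fetch_articles are hypothetical names for
    # the code under test
    reader = Newsreader.new('news.example.com')
    assert_equal [article1, article2], reader.fetch_articles('comp.lang.ruby')
  end
end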
The README file is extensive and gave me enough information to write my mock.
The next time you need to write a test for code which references an external resource, mock it instead. You'll be happy you did.
12 June 2008
Walking the Talk, a few days later
Several tools were left out of the quality sandbox last time. RDoc (source-embedded documentation a la JavaDoc) and Log4r (logging support a la Log4J) are both useful tools for the agile developer. I'm still learning some of the finer points of each, but both were up and functioning at a basic level in no time. Expect to hear more about these in a future post.
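To give a flavor of how little setup Log4r needed, here is a minimal sketch of the kind of bootstrap I mean (the logger name 'myapp' is just a placeholder):

require 'log4r'

# Create a named logger and send its output to stdout
log = Log4r::Logger.new('myapp')
log.outputters = [Log4r::Outputter.stdout]
log.level = Log4r::INFO

log.info  'application started'   # printed
log.debug 'noisy detail'          # filtered out at INFO level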
Hudson
Hudson continues to be awesome. I just read on Kohsuke Kawaguchi's Blog that Hudson and the other projects he's been doing in the background are going to become his day job. This is exciting news for Hudson fans! The brisk development pace will likely accelerate further.
TDD
I had an epiphany about TDD several months ago at our AgileNM meeting. A member said that TDD significantly reduced the cyclomatic complexity of the code. Ah, TDD is not about having better tests, it is about writing better code! I'll relate my own experience with this a little later, but let's just say for now that it has only increased my enthusiasm! Take a look at this paper from David Janzen of the University of Kansas for more background.
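To make "cyclomatic complexity" concrete: it roughly counts the independent paths through a method, one per branch. Here is a contrived before-and-after sketch (not from my code) of the shape TDD tends to push you toward:

# Branch-heavy: four independent paths through one method
def shipping_cost_before(order)
  if order.international?
    order.express? ? 40 : 25
  else
    order.express? ? 15 : 5
  end
end

# Test-driven designs tend toward smaller, flatter methods,
# each with only a path or two
def shipping_cost(order)
  base_rate(order) + express_surcharge(order)
end

def base_rate(order)
  order.international? ? 25 : 5
end

def express_surcharge(order)
  return 0 unless order.express?
  order.international? ? 15 : 10
end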
rcov
rcov pretty much worked as expected. There were only two ways it fell short of the standard set by Cobertura for me. The first is the lack of per-line hit counts, which I found useful in Cobertura for spotting hot spots in the code even before firing up a profiler.
The other shortcoming of rcov compared to Cobertura is that it does not include cyclomatic complexity analysis. After a little Googling, it appears that Saikuro fills that gap. I will report on it as soon as I get a chance to play with it.
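For anyone wanting to try rcov, it ships with a Rake task; a minimal Rakefile sketch (the test file glob is an assumption about project layout):

require 'rcov/rcovtask'

# Produces an HTML coverage report under coverage/
Rcov::RcovTask.new do |t|
  t.test_files = FileList['test/test_*.rb']
  t.verbose = true
end

Run it with rake rcov and open the report in a browser.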
Trac
Trac was the biggest disappointment of the bunch. One area where Ruby leaves Python in the dust is package management. I had to install so many different packages and take so many manual steps just to get something that sort of worked. Compared to installing a RubyGems-based project, this is an absolute joke. And I couldn't get either of the plugins I installed to work.
The good news is an excellent issue tracker/wiki combination is on the way. Confluence, the wiki we use at work, already has a personal edition. With the 4.0 release of JIRA (date not yet announced) there will be a personal edition of JIRA as well. Both are from Atlassian, who are fantastic to work with. Goodbye Trac, it was fun knowing you, but I want to spend more time developing my app and less time screwing around with plugins.
07 June 2008
Walking the Talk
Here are the practices and tools we use at work:
- Continuous integration (Hudson)
- Version control (Subversion)
- Test Driven Development
- Unit testing (JUnit)
- Defect tracking (JIRA)
- Light documentation via a wiki (Confluence)
- Coverage analysis (Cobertura)
- Profiling (YourKit)
- Automated builds/testing (Ant)
At home I have several constraints I don't have at work. I am writing in Ruby, not Java. I am not willing to pay the money for commercial tools (JIRA, Confluence, YourKit). I think Ant is a silly way to do builds and automated tests. So I started thinking about what my own tool set might look like.
Continuous integration: Hudson. I have written about this before, but it is a fantastic continuous integration tool. Kohsuke Kawaguchi is constantly improving it, so it is just getting better and better.
Version control: Subversion. I see no reason to switch here either. Version control with only one developer is dead-simple anyway, but why bother learning something new when Subversion works so well?
Unit testing: Test::Unit. It ships with Ruby and is the spiritual equivalent of JUnit. ZenTest also looks very interesting - promising to go far beyond the simple XUnit frameworks. I'll take a look at this as well.
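For anyone who hasn't seen it, a Test::Unit case reads much like its JUnit counterpart; a minimal sketch:

require 'test/unit'

class TestArithmetic < Test::Unit::TestCase
  # Any method whose name starts with test_ is run automatically
  def test_addition
    assert_equal 4, 2 + 2
  end
end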
Defect tracking: Trac. There are endless open source defect trackers out there. Maybe some are better than Trac. But I like that it is lightweight, has an integrated wiki and has bazillions of plugins, including Hudson integration. It is also written in Python, my favorite language before Ruby.
Wiki: Trac. See Defect Tracking above.
Coverage analysis: rcov. To the best of my knowledge, this is the only game in town. The outputs look very similar to Cobertura's. I haven't played with this yet, but it is on my short list.
Profiling: ruby-prof. This seems to be the clear winner over Profile. I could not find a single reference comparing the two that didn't favor ruby-prof.
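As a taste of the API, a minimal ruby-prof sketch (the block contents are a stand-in for real work):

require 'ruby-prof'

# Profile a block of code and print a flat report to stdout
result = RubyProf.profile do
  10_000.times { [3, 1, 2].sort }
end

RubyProf::FlatPrinter.new(result).print(STDOUT)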
Automated builds/testing: Rake. There are quite a few Ruby-based build tools available. Since I am planning to use Rails in my implementation, Rake seems like the obvious choice.
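And for comparison with the Ant setup at work, a Rakefile wired for automated testing can be this small (the test file glob is an assumption about project layout):

require 'rake/testtask'

# Run the unit tests with `rake test` (or plain `rake`)
Rake::TestTask.new do |t|
  t.test_files = FileList['test/test_*.rb']
  t.verbose = true
end

task :default => :test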
That is the plan. I'll see how it all works out and report back on my progress. Even if it is a big failure, I will have taken a sip (gulp?) of my own medicine.