In the past few weeks I’ve heard several misconceptions raised about Test Driven Development:
- Does TDD really work? I’ve written about this before: Advantages of TDD. In addition, George Dinwiddie maintains a list of case studies (including one from IBM). Finally, Keith Braithwaite has recently done some research on measuring the benefit of TDD. Currently the results are buried in a presentation (pdf). Key message: “measuring over 20 projects: if you have a large number of unit tests your code will be an order of magnitude less complex.”
- Writing the tests after the code has been written is the same as Test Driven Development. It’s not; there are significant benefits to TDD that I outlined in “Test Driven Development vs Plain Old Unit Testing”.
- TDD isn’t useful for helping to design the architecture of programs.
The latter is an interesting question. You can find voices on the net that say TDD by itself (with no thought given to architecture) is bad. The best example of this is “Coplien and Martin Debate TDD, CDD and Professionalism” (with some excellent comments), and a few blog posts that follow – “The TDD Controversy – JAOO 2007” and “TDD vs good design/architecture principles” – outline the debate.
If we accept that TDD isn’t entirely sufficient for design, then the question becomes how much architecture is required and what would a good project designed with TDD look like?
The real trick is balancing upfront design with the principle of YAGNI. Since, like most developers, I’m tempted to over-design (“I know I will need this method/class later on”), I force myself to wait until the last responsible moment (typically just before I would need the code) to do my design work, saving a lot of waste in the form of unused code. In the case of the larger architecture, I discourage people from thinking more than one iteration ahead; any further ahead and the requirements are likely to change. On rare occasions this means missing a big issue that forces a lot of rework. Since I have rock-solid tests I’m not all that upset, and I believe I’ve still saved a lot of time with the architecture I didn’t build.
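To make that rhythm concrete, here is a minimal sketch of the test-first cycle in Python’s `unittest`. The `slug` function and its tests are hypothetical, invented for illustration (they come from none of the projects below): the tests are written first and fail (red), then just enough code is added to pass them (green), and any speculative features wait until a test demands them – YAGNI in miniature.

```python
import unittest

def slug(title):
    # Simplest implementation that makes the tests below pass; no
    # speculative options (custom separators, locale handling) are
    # added until a failing test demands them.
    return title.strip().lower().replace(" ", "-")

class SlugTest(unittest.TestCase):
    # In TDD these tests exist before slug() does; the first run is red.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slug("Test Driven Development"),
                         "test-driven-development")

    def test_trims_surrounding_whitespace(self):
        self.assertEqual(slug("  YAGNI  "), "yagni")
```

Run with `python -m unittest` once the tests pass; the refactor step then reshapes the code under the safety of the green bar.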
So if TDD can be used on a large scale and to help drive architecture, there must be examples. The leading members of the community (JB Rainsberger, Nat Pryce and Lasse Koskela) all have clients with products built and architected using TDD. Unfortunately, these are all closed source or belong to clients who don’t want to be discussed publicly.
In the open source world there are a number of applications and libraries developed using TDD:
- Obviously all of the Agile-related tools (JUnit, NUnit, MbUnit, Gallio, JMock, RMock, CruiseControl, CruiseControl.NET and Hudson) were developed using TDD. However, this is just a bit self-referential.
- Bazaar – a distributed version control system, used by: LaunchPad, MySQL and Mailman (Python)
- Allelogram – a program for normalizing and binning microsatellite genotypes. (Java)
- Jena – a framework for building Semantic Web applications. It provides a programmatic environment for RDF, RDFS and OWL, SPARQL and includes a rule-based inference engine. (Java) Related: Eyeball – lint for RDF/OWL
- Helium (He) – a lightweight and extremely useful templating engine based entirely on XML; 100% Java and 100% TDD. (Java)
- http.net – a dirt simple HTTP server written in C#. It supports minimal functionality and is primarily intended as example code for TDD et al. (C#)
- Joda – provides a quality replacement for the Java date and time classes. (Java)
- Joyent Connector (source) – The Connector suite of applications provides cool features such as search, tagging, and RSS feeds that its developers believe will make your life easier on a day-to-day basis. (Ruby)
- Task Coach – a task manager written in Python.
- In addition, the Google Chrome browser was developed in part with TDD: “Chromium has used a combination of test-driven and other development processes. Sometimes we write tests first and then implement features to pass them, sometimes we use existing tests as a guide to what to work on next, and sometimes we implement first and test afterward. It depends on the subject at hand, and on the individual developer.”
- Mark Shuttleworth remarks that TDD is used in developing Ubuntu.
Undoubtedly there are a number of open source projects that are test-driven; please let me know if you’re aware of some I missed. In addition to proving it’s possible, these projects also act as examples of the sort of architecture that TDD can help build.
Mark Levison has been helping Scrum teams and organizations with Agile, Scrum and Kanban style approaches since 2001. From certified scrum master training to custom Agile courses, he has helped well over 8,000 individuals, earning him respect and top-rated reviews as one of the pioneers within the industry, as well as a raft of certifications from the ScrumAlliance. Mark has been a speaker at various Agile Conferences for more than 20 years, and is a published Scrum author with eBooks as well as articles on InfoQ.com, ScrumAlliance.org and AgileAlliance.org.
Adam Sroka says
I have made contributions to both open and closed source projects where all of my code was test-driven even if some (or all) other code was not.
I find that TDD fits my brain. I also find that the code that *I* produce is better/simpler/more correct with TDD (And even better/simpler/more correct with close collaboration with at least one pair programmer.)
However, it is hard to quantify how/if this improves the overall quality and design in an environment where these practices aren’t strictly enforced. It is hard enough to know what percentage of the code benefits both directly and indirectly from the introduction of TDD let alone what percentage of suck-ness results directly and indirectly from where it is missing.
Kevin Rutherford says
reek, the code smell detection tool for Ruby, is being developed entirely using TDD. See the source at https://github.com/kevinrutherford/reek and the docs at https://reek.rubyforge.org/rdoc
Kane Mar says
Surprisingly, there is little clear evidence that TDD is beneficial. After a question was raised on a forum that I regularly read, I spent several days reading the current literature on TDD … case studies, reports, recommendations, etc. The vast majority of studies and reports simply don’t reach any firm conclusions. And for every article that promotes certain properties of TDD, there is an equal number of articles disputing those same properties.
All this came as a surprise to me because I had assumed that the benefits of TDD were a foregone conclusion … based on personal experience and practice of TDD.
What is very clear is that there is a correlation between the number of tests and the clarity (simplicity) of the code. Quoting from your article: “measuring over 20 projects: if you have a large number of unit tests your code will be an order of magnitude less complex.” I would like to point out that one doesn’t need to practice TDD to have a lot of unit tests. There are other ways to achieve the same end.
I absolutely agree with the sentiment of your post and Dave’s comments above, especially the following quote: “It’s interesting that many of those who question the effectiveness of TDD never seem to question the effectiveness of traditional methods …” I would also like to point out that the benefits of TDD are neither clear nor immediately apparent.
Mark Levison says
Have you looked into the most recent study from MS – that shows a reduction in the defect rate? I don’t remember the details right this second but I think George Dinwiddie links to it.
Kane Mar says
Yes, absolutely. And as part of their conclusion they state: “Drawing general conclusions from empirical studies in software engineering is difficult because any process depends to a large degree on a potentially large number of relevant context variables. For this reason, we cannot assume a priori that the results of a study generalize beyond the specific environment in which it was conducted.”
I’m not disputing that TDD is beneficial. I *am* saying that the benefits are neither clear, nor obvious … at least at this point in time. In several years’ time (with a body of evidence) this discussion might appear dated and quaint. As of today (January 2010) I think it’s fair to say that the jury is still out on TDD.
Dave Nicolette says
It’s interesting that many of those who question the effectiveness of TDD never seem to question the effectiveness of traditional methods, which are responsible for the dismal performance of the IT industry over the past few decades as documented in studies by Gartner, Forrester, Standish, and others. They demand that newer methods like TDD prove themselves as compared with … well, with what, exactly? What’s their baseline for comparison? Long lead times, high defect rates, untenable design debt, cost overruns? Rather than asking whether TDD “really works,” they ought to ask whether any other approach has really worked.
Ron Jeffries says
The “TDD without thinking about architecture” trope is such a straw man that I’m surprised anyone bothered with a debate. “Looking both ways without paying attention to the cars you see is not a good way to cross the street”.
TDD, red/green/REFACTOR, is a *way* of thinking about architecture. I fully agree with anyone who says that if you don’t think about architecture you’ll be in trouble. And if someone isn’t thinking about architecture, they’d not be doing TDD. (At all. Not “not doing TDD well”. Not doing TDD at all.)
Where TDD folks go off the rails, I believe, is not that they don’t *think* about architecture but more that they don’t do anything about their thoughts.