Why The Idea Of Test Driven Development (TDD) Royally Upsets Me



(Image author: Excirial / taken from Wikipedia / Creative Commons)

I am a professional developer; I write code day in and day out for a living. Not only do I write my own code, I maintain the code other people write: reviewing it, correcting it, refactoring it, formatting it, and making it as good as I can. I favor writing tests that assure the quality of the work one has done, especially testing the boundary conditions, making sure the code meets the requirements, making sure the various scenarios are covered, and eliminating post-production surprises. I have always found that writing enough tests early and often in the software development lifecycle not only reduces the amount of UI testing you need to do, it also gives you the opportunity to refactor your code and identify defects early. Everyone knows defect fixes are cheaper in the early stages of the software development lifecycle. Having said that, I am starting this article by making it crystal clear that I am a proponent of unit testing and I love writing tests for my code.

Lately, I have been hearing the Test Driven Development (TDD) buzz. I have heard management people talk about it because they heard about it at conferences they attended and would like to implement it in their teams, and I have heard tech leads who mostly spend their time designing the software, not actually coding it, talk about the greatness of this approach. I have heard and seen developers who love talking about it but never write their own unit tests. And I have met people who are developers, who have used it, and who are in a love-hate relationship with the approach. All of this holds whether you follow the waterfall model of software development or the agile paradigm. It does not matter what technology platform you are on – some of the people I have met and talked with work with .NET, Java, Grails, PHP, and other technologies.

The idea of TDD – where you write your failing test case first, then write your code (sometimes with the design of that code driven by the test cases you wrote), then run the test cases and make them pass, refactoring both the code and the tests as needed – is great, at least in theory. I can see where this philosophy leads: better code coverage, better-handled edge conditions, and developers who are compelled to write unit tests. The idea is great, brilliant, absolutely fantastic – but only in theory.
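
To see what that cycle looks like in practice, here is a minimal JUnit 4 sketch. The PriceCalculator class and its discount rule are hypothetical, invented only to illustrate the red-green-refactor steps:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Step 1 (red): write this test first; it fails because
    // PriceCalculator.discountedPrice does not exist yet.
    @Test
    public void ordersOfAtLeastHundredGetTenPercentOff() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
    }
}

// Step 2 (green): write just enough code to make the test pass.
class PriceCalculator {
    double discountedPrice(double total) {
        return total >= 100.0 ? total * 0.9 : total;
    }
}

// Step 3 (refactor): clean up both the code and the tests,
// re-running the suite after every change.
```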

In reality there are other things to contend with: time constraints, complex enterprise applications, multiple integration points, design-as-you-develop realities, deadlines that you must meet and, last but not least, common sense. TDD is not a silver bullet in every scenario.

I feel the same way this gentleman, Chris Maddox, does in his blog:

What boggles my head about TDD is the assumptions it makes. When I begin a project or a new feature, I don’t know what the fuck is going to come out of my head. That blank-canvas-driven creativity leads to novel solutions and unique approaches that keep my code fresh. Many TDD’ers I know would say that if you’re doing TDD correctly, you make 0 assumptions about implementation. I hear that, and it’s total bullshit.

While you’re writing code, deciding what to test and what is trivial, you make choices. You think about the behavior of the code. At that point, the desired behavior already implies an implementation strategy, whether you care to admit it or not. So by the time you actually get around to writing the code, you are already primed with the complexity of the project at hand.

Here are some of the reasons why it upsets me when I hear too much of TDD preaching at my workplace.

  1. Time Investment:
    As you all know, you don’t have infinite hours and infinite days for your project. You are time-constrained. My team works in two-week iterations, and some of those iterations come just before code-freeze decisions, meaning that if you don’t finish your work by the end of the iteration, it will delay the implementation and you and your team are suddenly in the spotlight, surrounded by management negativity for not getting your work done on time. TDD does not help here. TDD does not give me the freedom to refactor the code whenever I want, without the fear in the back of my mind that I will break the tests and suddenly have to spend a lot of time fixing them when I am already running out of time.
  2. It’s unnatural.
    Nothing in nature follows TDD. You don’t try to deliver a baby first, make sure you can’t, and then go have sex. You don’t eat raw meat to make sure it is uncooked and then cook the food and eat it to make sure it is cooked. You don’t rock climb without a rope, fall and injure yourself intentionally, and then do the climb with a rope.
  3. There are no proven statistics:
    So far it’s a theory. There are no proven scientific statistics showing that TDD actually makes things better, especially time-wise. People have excuses – some say the statistics are difficult to gather because teams don’t do the exact same work twice, once with TDD and once without. Others say TDD is a fairly new concept and getting industry-standard statistics is difficult.
  4. You don’t always create software, you maintain it too.
    You are not always writing software from scratch; you sometimes (or maybe a lot of the time) inherit older systems and need to maintain them. This maintenance comes in the form of micro bug fixes, smaller enhancements, technical debt reduction, and small customizations. Now you are modifying a method that is already 200 lines long, and all you need to do is change a few lines here and there. Your old system may not have any existing unit tests. How do you test such methods? (One pragmatic answer, characterization tests, is sketched after this list – and it is code-then-test, not test-first.) Another scenario: you are fixing a bug and you don’t even know what the problem is. How do you start writing tests for something when you have not yet identified where or how to fix it? Maybe you start fixing something in one place, then suddenly realize the problem is somewhere else. Why waste time and effort writing tests first for things you don’t even know how you are going to deal with? Defect fixes are more analytical. They don’t have requirements telling you how to fix a bug.
  5. TDD is a solution. What’s the problem?
    Those who are proponents of TDD keep preaching that TDD is good for this and good for that. Okay, I hear you. It’s a good solution to certain problems. For example, it encourages people to write smaller methods, it enforces unit testing, it makes you think about the requirements, it increases your test coverage, and it improves code quality. I hear that, and I get it. But even without TDD, I write smaller methods, I deliver production code with proper and valid unit tests, and I ensure the requirements are met. I read requirements carefully before I begin a project or a piece of work. That’s my professional guarantee. So why force the TDD solution on me if I don’t have the problem?
  6. It’s Stress Driven Development
    One of my colleagues joked while we were deep in a discussion about the pros and cons of TDD. She said it’s not Test Driven Development, it’s Stress Driven Development. She is 100% right. TDD keeps me stressed all the time, because every time I want to refactor my code I have to worry about my test cases breaking and needing to be refactored too. This is different from the traditional style of unit testing, where you finish your unit of work and only when you are done refactoring, recoding, renaming, and so on do you do the testing. There, refactoring does not put you under stress. TDD forces me to be less creative; TDD leaves me less freedom unless I worry constantly or spend a lot of extra time.
  7. TDD forgets that Legacy Systems exist
    Unfortunately, in our ecosystem, legacy systems that were not designed for testing do exist. TDD forgets about those. If the application has no unit testing framework support, it is simply not possible to apply TDD.
  8. Your application is more than just Java or .NET
    Java or .NET alone does not make up your application (I use these to stand for the parts that are easily testable via established testing frameworks). Although TDD is certainly independent of programming language, writing meaningful tests for technologies like XML, JavaScript, or XSLT can be difficult or impossible if your team or the application itself lacks the tools to test those components. And if you only test some components of your application and leave out the rest, that does not guarantee a quality delivery.
  9. I Prefer Highways:
    Another thing you will repeatedly hear about TDD is red-green-refactor, which refers to writing a failing unit test (red), then writing just enough code to pass the test (green), and then refactoring as needed. When you start with failing unit tests, you are in the red zone; then you write the code and make the test pass, taking you to the green zone; then you repeat, refactoring as needed. I compare this to taking local roads with low speed limits, where you stop at every other traffic light (red), then wait for the green signal and go. I prefer taking highways. It takes courage and caution to take the highway. It’s riskier than the local roads, but it’s more productive, it saves time, and the developers don’t end up resenting a stumble at every other block of the road.
  10. TDD is short-sighted:
    TDD looks only at the immediate work, because of its test-code-test nature. It helps you design and test just enough for maybe your next day of work, or just another section of your webpage. But in reality, applications cross over each other and are far more complex and probably coupled. You can’t just design the small things without looking at the big picture and still be successful. TDD does not quite let you see whether there is a dead end ahead, or whether there are bigger impacts on your application, because you design and walk only a few steps at a time without really looking at what’s ahead.

Conclusion:
I am a professional developer and have a very open mind about technologies and processes. I have no problem trying different things, failing, and learning from the mistakes. I strive to have proper tests for the code I produce, whether I produce them using TDD or the plain code-and-test style. If I had infinite time for my projects, I would have no problem using TDD. But just because TDD exists does not mean it is suitable for everyone. If I produce effective, quality code without defects, meeting the complete specification, without adding extra tech debt to the application, and if I am experienced enough not to make shitty designs, I have no need for TDD. After all, TDD is a solution to certain problems that developers have. If I don’t have those problems, I don’t need to take an extra dose of TDD.

References and further readings:

15 Best Practices for Unit Testing Your Java Code Using Junit

Disclaimer and Credit:

The excerpt below is from the book Java Development with Ant, by Erik Hatcher and Steve Loughran, published by Manning Publications. Credit goes to the authors of the book and the publisher.

The following are the best practices for unit testing.

  1. Test everything that could possibly break. This is an XP maxim and it holds.
  2. A well-written test is hard to pass. If all your tests pass the first time, you are probably not testing vigorously enough.
  3. Add a new test case for every bug you find.
  4. When a test case fails, track down the problem by writing more tests, before going to the debugger. The more tests you have, the better.
  5. Test invalid parameters to every method, rather than just valid data. Robust software needs to recognize and handle invalid data, and the tests that pass using incorrect data are often the most informative.
  6. Clear previous test results before running new tests; delete and recreate the test results and reports directories.
  7. Set haltonfailure=”false” on <junit> to allow reporting or other steps to occur before the build fails. Capture the failure/error status in a single Ant property using errorProperty and failureProperty.
  8. Pick a unique naming convention for test cases: *Test.java. Then you can use <batchtest> with Ant’s pattern-matching facility to run only the files that match the naming convention. This helps you avoid attempting to run helper or base classes.
  9. Separate test code from production code. Give them each their own unique directory tree with the same package naming structure. This lets tests live in the same package as the objects they test, while still keeping them separate during a build.
  10. Capture results using the XML formatter: <formatter type=”xml”/>.
  11. Use <junitreport>, which generates fantastic color-enhanced reports that let you quickly access detailed failure information.
  12. Fail the build if an error or failure occurred, e.g., <fail if=”test.failed”/>, where test.failed is the property captured in practice 7.
  13. Use informative names for tests. It is better to know that testDocumentLoad failed, rather than test17 failed, especially when the test suddenly breaks four months after someone in the team wrote it.
  14. Try to test only one thing per test method. If testDocumentLoad fails and this test method contains only one possible point of failure, it is easier to track down the bug than to try to find out on which one line out of twenty the failure occurred (see the short sketch after this list).
  15. Utilize the testing up-to-date technique. Design builds to work as subcomponents, and be sensitive to build inefficiencies doing unnecessary work.
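
To make practices 5, 13, and 14 concrete, here is a minimal JUnit 4 sketch. DocumentLoader and Document are hypothetical classes, stubbed in below so the example compiles:

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class DocumentLoaderTest {

    // Practice 13: an informative name tells you what broke when it fails.
    // Practice 14: one logical check per test method.
    @Test
    public void loadReturnsEmptyDocumentForEmptyPath() {
        Document doc = new DocumentLoader().load("");
        assertTrue(doc.isEmpty());
    }

    // Practice 5: feed the method invalid data, not just the happy path.
    @Test(expected = IllegalArgumentException.class)
    public void loadRejectsNullPath() {
        new DocumentLoader().load(null);
    }
}

// Hypothetical classes under test, stubbed so the sketch is self-contained.
class Document {
    private final String text;
    Document(String text) { this.text = text; }
    boolean isEmpty() { return text.isEmpty(); }
}

class DocumentLoader {
    Document load(String path) {
        if (path == null) {
            throw new IllegalArgumentException("path must not be null");
        }
        return new Document(path); // a real loader would read the file
    }
}
```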

Understanding JUnit Annotations and What They Do

Annotations have made some programming tasks easier, because they do a lot of work in the background that a programmer would otherwise have needed to code by hand. JUnit uses a handful of annotations, each representing a separate task it performs behind the scenes. Here are some of the most widely used annotations from the Java-based unit testing framework JUnit; a minimal example combining several of them follows the list. If you would like to see a comprehensive example of JUnit with all these annotations in action, see my previous article on JUnit Test Case.

  1. @Test
    • Denotes a test method. Can be used with the expected attribute to assert that the test throws a particular exception, e.g., @Test(expected = IllegalArgumentException.class).
  2. @Before
    • Runs before each test method, i.e., performs setup.
  3. @After
    • Runs after each test method, i.e., performs teardown.
  4. @BeforeClass
    • Runs once before any of the tests in a class.
  5. @AfterClass
    • Runs once after all the tests in a class have run.
  6. @Parameters
    • Allows you to run the same test with different data by defining the data parameters. The @Parameters method must be public static and return a Collection of Object arrays (e.g., Collection<Object[]>); the elements of each array are passed into the class constructor as arguments.
  7. @RunWith
    • Tells JUnit to run the test class with a specific runner – for example, @RunWith(Parameterized.class) for parameterized tests.
  8. @Ignore
    • Allows you to skip a test. You might want to skip a test if you are still working on it, if you are not convinced it is a valid test case, or if it is long-running.
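
As a quick reference, here is a minimal sketch that combines most of these annotations in one parameterized test class. The Adder class is hypothetical:

```java
import java.util.Arrays;
import java.util.Collection;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class) // use the Parameterized runner for this class
public class AdderTest {

    private final int a, b, expected;

    // Each Object[] row from data() is passed to this constructor,
    // producing one run of every @Test method per row.
    public AdderTest(int a, int b, int expected) {
        this.a = a;
        this.b = b;
        this.expected = expected;
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { 1, 2, 3 },
                { -1, 1, 0 },
        });
    }

    @BeforeClass
    public static void setUpOnce() { /* runs once, before all tests */ }

    @Before
    public void setUp() { /* runs before each test */ }

    @Test
    public void addsTwoNumbers() {
        assertEquals(expected, new Adder().add(a, b));
    }

    @Ignore("still a work in progress")
    @Test
    public void notReadyYet() { }

    @After
    public void tearDown() { /* runs after each test */ }

    @AfterClass
    public static void tearDownOnce() { /* runs once, after all tests */ }
}

// Hypothetical class under test.
class Adder {
    int add(int a, int b) { return a + b; }
}
```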