The essence of continuous integration is that the system is regression tested all the time, as people develop and commit new features or bug fixes. The basic assumption is that the code is always regression tested on multiple levels: unit, component, and system/integration.

This leads to an interesting question: Can you really have a functional continuous integration system if you don’t drive your development with tests?

The simple answer is no.

It is certain that no one will write comprehensive unit tests after the code has been put into production. And without unit-level automated tests you cannot have a fully functional CI environment. If you don't drive your development with tests, you are creating technical debt.

This does not mean that you have to become a TDD zealot, but it does mean that you should start developing with TDD or a variant of it in order to achieve fully functional continuous integration.
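As a concrete sketch of what unit-level, test-first development means here (the function and test below are hypothetical examples, assuming Python's built-in unittest):

```python
import unittest

# In TDD this production code comes second: it is written only to make
# the already-failing test below pass.
def parse_price(text):
    """Parse a price string such as '12.50 EUR' into (amount, currency)."""
    amount, currency = text.strip().split()
    return float(amount), currency

class ParsePriceTest(unittest.TestCase):
    # Step 1 (red): this test is written first and fails.
    # Step 2 (green): parse_price is implemented until it passes.
    def test_parses_amount_and_currency(self):
        self.assertEqual(parse_price("12.50 EUR"), (12.5, "EUR"))

if __name__ == "__main__":
    unittest.main()
```

In a CI environment, tests like this run on every commit, which is what makes the unit level of the regression safety net possible.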

Post filed under Agile, TDD.

4 Comments

  1. Shmoo says:

    Hi, Jussi,

    Excellent post. Very concise.

    You write that, “It is certain that no-one will write comprehensive unit tests after the code has been put to production.”

    That may or may not be true, and forgive me if I’m misinterpreting you, but you seem to be offering two choices: TDD or writing unit tests after production.

    Can’t we write unit tests after the code but before production?

    Kind regards,

    Shmoo

  2. Hi Shmoo,

    thanks for the comment!

You can write unit tests after development and before production, but then you will miss the biggest effect TDD has: testability. With TDD you write your code to be testable. Writing tests afterwards also leads to tests that are difficult to maintain and brittle, as your code was not designed to be testable.

There are plenty of writings about TDD versus TAD (test-after development) on the Internet, and I suggest that you look up a few articles. In my opinion, TDD's benefits are indisputable.

  3. Shmoo says:

    Hi, Jussi,

I must disagree, I'm afraid, with your comment, "Writing tests afterwards also leads to tests that are difficult to maintain and brittle, as your code was not designed to be testable."

Firstly, of course we write our code to be testable, even when practicing TAD. It's not that we forget that we will need to test it afterwards; that "afterwards" is typically 3-4 hours later.

    People were writing testable code long before TDD was invented.

    Also, I’ve not seen any evidence that TAD leads to tests that are difficult to maintain; can you provide a reference?

    Thanks,

    Shmoo.

  4. Shmoo,

My experience tells a different story. All the TAD tests I have seen have much less value, and are much more complex and harder to maintain, than tests that result from TDD. TAD tests achieve lower coverage, are not executed often enough, and very easily lead to happy-path testing only. With TAD tests, people verify their own expectations of the code they just wrote, which narrows the scope of the tests.

I've seen so many TAD tests during the last 12 years with substantially lower quality and less coverage that I just don't see it as a one-off event or an unskilled-people issue. Yes, you can design your code to be testable and write comprehensive tests afterwards, but I've never seen that happen. It also misses the whole point of TDD, which is a design act, not a testing act.

Maybe in your context you have brilliant people who can write testable code without doing TDD, but I haven't seen that yet during the last 12 years. When a system grows, the added complexity makes writing tests afterwards a huge burden when the design does not support it.

I can't provide you with fool-proof evidence, but in my experience TAD has always led to lower quality in both the system under construction and the tests. Googling this subject turns up a lot of texts that support my view.
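    A minimal sketch of the testability difference I mean (all names here are hypothetical, assuming Python's built-in unittest): code driven by a test tends to take its collaborators as parameters, so the test can substitute them directly.

    ```python
    import unittest

    # Test-first design: the clock is passed in, so the test controls time
    # without patching globals or sleeping.
    def is_expired(expiry_ts, now_fn):
        """Return True once now_fn() reaches the expiry timestamp."""
        return now_fn() >= expiry_ts

    class IsExpiredTest(unittest.TestCase):
        def test_expired_when_now_reaches_expiry(self):
            self.assertTrue(is_expired(100, now_fn=lambda: 150))

        def test_not_expired_before_expiry(self):
            self.assertFalse(is_expired(100, now_fn=lambda: 50))
    ```

    Code written without a test in mind would typically call time.time() directly inside is_expired, forcing any later test to patch the module clock, and that is exactly the kind of brittleness I keep seeing in TAD tests.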

Leave a Reply

Your email address will not be published. Required fields are marked *