I'm a solo developer in a pretty time-constrained work environment, where development time usually ranges from 1 to 4 weeks per project, depending on requirements, urgency, or both. At any given time I handle around 3-4 projects, some with timelines that overlap each other.

As you'd expect, code quality suffers. I also have no formal testing; it usually comes down to walking through the system until it somewhat breaks. As a result, a considerable number of bugs escape to production, which I then have to fix, and that in turn sets back my other projects.

This is where unit testing comes in. When done right, it should keep bugs, especially those that escape to production, to a minimum. On the other hand, writing tests takes a considerable amount of time, which doesn't sound good for time-constrained projects such as mine.

The question is: how much of a time difference does writing unit-tested code make over writing untested code, and how does that difference scale as project scope widens?

Comments are not for extended discussion; this conversation has been moved to chat. – maple_shaft 17 hours ago
    
You are solving the wrong problem. You are too busy and seem to have no project management support. Are you estimating project effort? Are you reserving 20% of your time for bug fixes, meetings, and other non-coding tasks? How much overtime are you working? – Tony Ennis 7 hours ago
Do you realize that you're essentially saying "I have time to do it twice, but not time to do it once the right way."? – RubberDuck 6 hours ago

12 Answers

The later you test, the more it costs to write tests.

The longer a bug lives, the more expensive it is to fix.

The law of diminishing returns ensures you can test yourself into oblivion trying to ensure there are no bugs.

Buddha taught the wisdom of the middle path. Tests are good. There is such a thing as too much of a good thing. The key is being able to tell when you are out of balance.

Every line of code you write without tests will cost significantly more to add tests to later than if you had written the tests before writing the code.

Every line of code without tests will be significantly more difficult to debug or rewrite.

Every test you write will take time.

Every bug will take time to fix.

The faithful will tell you not to write a single line of code without first writing a failing test. The test ensures you're getting the behavior you expect. It allows you to change the code quickly without worrying about affecting the rest of the system since the test proves the behavior is the same.

You must weigh all that against the fact that tests don't add features. Code adds features. And features are what pays the bills.

Pragmatically speaking, I add all the tests I can get away with. I ignore comments in favor of watching tests. I don't even trust code to do what I think it does; I trust tests. But I've been known to throw the occasional Hail Mary and get lucky.

However, many successful coders don't do TDD. That doesn't mean they don't test. They just don't obsessively insist that every line of code have an automated test against it. Even Uncle Bob admits he doesn't test his UI. He also insists you move all logic out of the UI.

As a football metaphor (that's American football): TDD is a good ground game. Manual-only testing, where you write a pile of code and hope it works, is a passing game. You can be good at either. You aren't going to make the playoffs unless you can do both, and you won't make the Super Bowl until you learn when to pick each one. But if you need a nudge in a particular direction: the officials' calls go against me more often when I'm passing.

If you want to give TDD a try, I highly recommend you practice before trying to do it at work. TDD done halfway, half-hearted, and half-assed is a big reason some don't respect it. It's like pouring one glass of water into another: if you don't commit and do it quickly and completely, you end up dribbling water all over the table.

There is such a thing as too much of a good thing: neither you nor Buddha has tested my grandmother's cookies :-) – Pierre Arlaud yesterday
The analogy at the end is priceless. – Brandon yesterday
I have no idea what that metaphor means. – njzk2 yesterday
@njzk2 water, when poured slowly, will follow the surface of the container backwards and make a mess (because physics). Poured quickly it will not 'wrap around' in this fashion. CandiedOrange is suggesting that doing a little TDD is like pouring water slowly (messy) and that to get all the value one must commit totally (pour fast) – Jared Smith yesterday
You completely lost me at the 'football metaphor'. – Pharap yesterday

I agree with the rest of the answers, but let me answer the "what is the time difference" question directly.

Roy Osherove, in his book The Art of Unit Testing, Second Edition (page 200), presents a case study of two similarly sized projects implemented by similarly skilled teams for two different clients, where one team wrote tests while the other did not.

His results were like so:

[Image: team progress and output measured with and without tests]

So at the end of the project you get both less time spent and fewer bugs. This of course depends on how big the project is.

The sample size is way too small to consider this to be scientific but I think it's representative of what lots of people experience. I find that when I do TDD, most of the extra time is spent fixing the bugs that cause my unit tests to fail, not writing the tests themselves. That's not really adding extra time, just shifting when you find and fix those issues. Any real extra time is in fixing problems that you wouldn't have found, at least not in the first go-round. – JimmyJames yesterday
@Panzercrisis integration refers to pretty much gluing together all the different classes/functionality as well as external systems like network configuration etc. – Aki K yesterday
@JimmyJames It's a case study, which is used extensively in business, and a lot in science when it's not (yet) possible to run a large scale reproducible experiment. There's psychology journals full of them. "Unscientific" is not the right word. – djechlin yesterday
Why do I think that if the outcome of that case study had shown the opposite, it would not have made it into the book ;-)? – Doc Brown 19 hours ago
@DocBrown I wonder how many case studies were made and discarded before they found one with the right answers :-) – gbjbaanb 17 hours ago

Done well, developing with unit tests can be faster even without considering the benefit of the extra bugs being caught.

The fact is, I'm not a good enough coder to simply have my code work as soon as it compiles. When I write or modify code, I have to run it to make sure it does what I think it does. On one project, this tended to end up looking like:

  1. Modify code
  2. Compile application
  3. Run application
  4. Log into application
  5. Open a window
  6. Select an item from that window to open another window
  7. Set some controls in that window and click a button

And of course, after all that, it usually took a few round trips to actually get it right.

Now, what if I'm using unit tests? Then the process looks more like:

  1. Write a test
  2. Run tests, make sure it fails in the expected way
  3. Write code
  4. Run tests again, see that it passes

This is easier and faster than manually testing the application. I still have to run the application manually (so I don't look silly when I turn in work that doesn't actually work at all), but for the most part I've already worked out the kinks, and I'm just verifying at that point. I typically make this loop even tighter by using a program that automatically reruns my tests when I save.
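
For illustration, here's a minimal sketch of that loop in Python using the standard unittest module. The parse_price function and the behavior it is tested against are hypothetical, invented just for this example:

    import unittest

    def parse_price(text):
        # Step 3: the code, written only after the tests below existed and failed.
        # Converts a price string like '$1,234.50' into a float.
        return float(text.replace("$", "").replace(",", ""))

    class ParsePriceTest(unittest.TestCase):
        def test_strips_currency_symbol_and_commas(self):
            # Step 1: written first; it failed until parse_price was implemented.
            self.assertEqual(parse_price("$1,234.50"), 1234.50)

        def test_plain_number_passes_through(self):
            self.assertEqual(parse_price("99"), 99.0)

    if __name__ == "__main__":
        unittest.main()  # Steps 2 and 4: run, watch it fail, then watch it pass

No application startup, no logging in, no clicking through windows; the whole feedback loop is one command.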

However, this depends on working in a test-friendly code base. Many projects, even those with many tests, make writing tests difficult. But if you work at it, you can have a code base that's easier to test via automated tests than with manual testing. As a bonus, you can keep the automated tests around and keep running them to prevent regressions.

And using something like nCrunch can cut steps 2 and 4, making the feedback loop even tighter. – Euphoric yesterday
"The fact is, I'm not a good enough coder to simply have my code work as soon as it compiles." Nobody can write code which always works at the first try. – Apfelsaft yesterday
@Apfelsaft: Legend has it that the Multics I/O layer was written with pen on paper and worked on the first try when the program was transferred to tape. Legend also says that that part of Multics was left unchanged for years. – slebetman yesterday
@Apfelsaft I can. As long as you don't ask me to do anything useful, of course. – Davidmh yesterday
    
@Apfelsaft, don't explain the joke. :P – Winston Ewert yesterday

There is only one study I know of that examined this in a real-world setting: Realizing quality improvement through test driven development: results and experiences of four industrial teams. Doing this sensibly is expensive, since it basically means developing the same software twice (or ideally even more often) with similar teams, and then throwing all but one version away.

The results of the study were an increase in development time of 15%-35% (nowhere near the 2x figure often quoted by TDD critics) and a decrease in pre-release defect density of 40%-90%(!). Note that none of the teams had prior experience with TDD, so one could assume that the increase in time can at least partially be attributed to learning and would therefore shrink further over time, but this was not assessed by the study.

Note that this study is about TDD, and your question is about unit testing, which are very different things, but it is the closest I could find.


Despite there being a lot of answers already, they are somewhat repetitive, and I would like to take a different tack. Unit tests are valuable if, and only if, they increase business value. Testing for testing's sake (trivial or tautological tests), or to hit some arbitrary metric (like code coverage), is cargo-cult programming.

Tests are costly, not only in the time it takes to write them but also in maintenance. They have to be kept in sync with the code they test or they're worthless, not to mention the time cost of running them on every change. That's not a deal-breaker (or an excuse for skipping the truly necessary ones), but it needs to be factored into the cost-benefit analysis.

So when deciding whether (or how) to test a function or method, ask yourself: "What end-user value am I creating or safeguarding with this test?" If you can't answer that question off the top of your head, then that test is likely not worth the cost of writing and maintaining it. (Or you don't understand the problem domain, which is a far bigger problem than a lack of tests.)

http://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
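
To make that distinction concrete, here is a minimal sketch in Python contrasting a tautological test with one that guards a business rule; the apply_discount function and its 50% cap are hypothetical, invented for this example:

    import unittest

    def apply_discount(price, rate):
        # Hypothetical business rule: discounts are capped at 50%.
        return price * (1 - min(rate, 0.5))

    class DiscountTest(unittest.TestCase):
        def test_tautological(self):
            # Near-worthless: it restates the implementation, so it can
            # never catch a regression in the business rule itself.
            self.assertEqual(apply_discount(100, 0.2), 100 * (1 - min(0.2, 0.5)))

        def test_discount_is_capped_at_fifty_percent(self):
            # Valuable: it pins down the end-user-visible rule that no
            # customer ever gets more than half off.
            self.assertEqual(apply_discount(100, 0.9), 50.0)

The first test passes by construction and safeguards nothing; the second answers the "what end-user value?" question directly.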

This is where BDD comes in, as it takes a lot of discipline to write unit tests in such a way that they're meaningful to future developers. – Robbie Dee yesterday
    
I'm not super familiar with BDD but will guess that it operates at a slightly coarser granularity than the method/function level and probably has a less tenuous connection to user-value. – Jared Smith yesterday
Related – Robbie Dee yesterday
"Testing for testing's sake (trivial or tautological tests), or to hit some arbitrary metric (like code coverage), is cargo-cult programming." So true and so well-said. Test in such a way that you feel like a cool badass - think about yourself as a ... spy, elite athlete ... DON'T test like a "government department". You know? – Joe Blow 17 hours ago
    
@JoeBlow lol I work for a government dept... – Jared Smith 16 hours ago

It depends on the person, as well as the complexity and shape of the code you're working with.

For me, on most projects, writing unit tests means I get the work done about 25% faster. Yes, even including the time to write the tests.

Because the fact of the matter is that software isn't done when you write the code; it is done when you ship it to the customer and they're happy with it. Unit tests are by far the most efficient way I know to catch most bugs, isolate them for debugging, and gain confidence that the code is good. You have to do those things anyway, so do them well.

I think it's worth noting, though, that it's an acquired skill. I see so many people hear the claim that TDD isn't a time sink upfront that pays off in the long run, it's just faster, period. Then they try it for a day and it's painful, because they have zero experience, have read zero books, have had no practice; they just expect it to magically work. There's no secret to TDD that makes you a better developer: you still need to practice, still need to think, still need to make good, educated decisions. – kai 2 days ago
@kai - +1. I spent weeks reading about TDD before I tried it. I read everything I could find. I read books. I read through all the well-known agile blogs for examples. I read xUnit Test Patterns cover-to-cover. For the first few weeks, it still took me twice as long. – Jules yesterday
I agree. TDD is hard. The mindset is difficult. Anyone who says "Just write the tests first" and claims that it's free doesn't know how to do it. It takes practice. – duffymo yesterday
    
@kai: for similar reasons a lot of people can't touch-type. They tried it once and after a whole hour still weren't typing any faster than before ;-) – Steve Jessop 4 hours ago

There is a long history of the Programmers board promoting TDD and other test methodologies. I won't rehash their arguments, and I agree with them, but here are some additional things to consider that should add a bit of nuance:

  • Testing isn't equally convenient and efficient in every context. I develop web software; tell me if you have a program to test the whole UI... Right now I'm programming Excel macros; should I really develop a test module in VBA?
  • Writing and maintaining the test software is real work that costs in the short run (it pays off in the longer run). Writing relevant tests is also an expertise to acquire.
  • Working in a team and working alone don't have the same testing requirements, because in a team you need to validate, understand, and communicate code you did not write.

I'd say testing is good, but make sure you test early and test where the gain is.

"Should I really develop a test module for VBA?" Damn right you should. rubberduckvba.com/Features#unitTesting – RubberDuck 6 hours ago

Question is, how much of a time difference would writing unit-tested code over untested code, and how does that time difference scale as project scope widens?

The problem gets worse as the project ages: whenever you add new functionality or refactor the existing implementation, you ought to retest what was previously tested to ensure it still works. So, for a long-lived (multi-year) project, you might need not only to test functionality but to re-test it 100 times or more. For this reason you might benefit from having automated tests. However, IMO it's good enough (or even better) if these are automated system tests rather than automated unit tests.

A second problem is that bugs are harder to find and fix if they're not caught early. For example, if there's a bug in the system and I know it was working perfectly before you made your latest change, then I'll concentrate my attention on that change to see how it might have introduced the bug. But if I don't know that the system was working before your latest change (because it wasn't properly tested), then the bug could be anywhere.

The above applies especially to deep code, and less to shallow code, e.g. adding new web pages, where new pages are unlikely to affect existing ones.

As a result, a considerable amount of bugs escape to production, which I have to fix and in turn sets back my other projects.

In my experience that would be unacceptable, and so you're asking the wrong question. Instead of asking whether tests would make development faster, you ought to ask what would make development more bug-free.

Better questions might be:

  • Is unit testing the right kind of testing to avoid the "considerable amount of bugs" you've been producing?
  • Are there other quality control/improvement mechanisms (apart from unit-testing) to recommend as well or instead?

Learning is a two-stage process: learn to do it well enough, then learn to do that more quickly.


Programmers, like people performing most tasks, underestimate how long it actually takes to complete them. With that in mind, spending 10 minutes writing a test can be seen as time that could have been spent writing tons of code, when in reality you would have spent that time coming up with the same function names and parameters that you did while writing the test. This is a TDD scenario.

Not writing tests is a lot like having a credit card: we tend to spend more, or write more code. More code has more bugs.

Instead of deciding on total code coverage or none at all, I suggest focusing on the critical and complicated parts of your application and having tests there. In a banking app, that might be the interest calculation. An engine diagnostic tool may have complex calibration protocols. If you've been working on a project for a while, you probably know what those parts are and where the bugs live.
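
As an illustration of aiming tests at the critical part first, here is a minimal Python sketch for the banking example; the monthly_interest function and its simple-interest rule are hypothetical:

    import unittest

    def monthly_interest(balance, annual_rate):
        # Hypothetical rule: simple interest, accrued monthly, rounded to cents.
        return round(balance * annual_rate / 12, 2)

    class InterestTest(unittest.TestCase):
        def test_typical_balance(self):
            # $1,200 at 5% per year accrues $5.00 per month.
            self.assertEqual(monthly_interest(1200.00, 0.05), 5.00)

        def test_zero_balance_accrues_nothing(self):
            self.assertEqual(monthly_interest(0.00, 0.05), 0.00)

A handful of tests like these on the money-handling core buys far more safety per minute spent than chasing full coverage.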

Start slowly. Build some fluency before you judge. You can always stop.


I believe you would benefit a lot from using TDD and unit testing. You already know the benefit:

As a result, a considerable amount of bugs escape to production, which I have to fix and in turn sets back my other projects.

There is one more benefit worth mentioning: when you fix a bug, you can know it will never come back. And when you fix a bug or add a new feature, you can be sure you are not breaking anything that was stable.
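
A common way to get that guarantee is to reproduce the bug in a test before fixing it, and then keep the test forever. A minimal Python sketch, with an invented off-by-one scenario:

    import unittest

    def paginate(items, page_size):
        # Fixed version. A hypothetical earlier version dropped the final
        # partial page; the regression test below keeps that bug dead.
        return [items[i:i + page_size] for i in range(0, len(items), page_size)]

    class PaginateRegressionTest(unittest.TestCase):
        def test_partial_last_page_is_not_dropped(self):
            # This test failed against the buggy version and now guards the fix.
            self.assertEqual(paginate([1, 2, 3, 4, 5], 2), [[1, 2], [3, 4], [5]])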

As for the costs, the numbers are fuzzy; some people say 20%-30%. I would agree with those numbers for a beginner, but if you get into the habit of always writing a test before production code, that overhead can come down to 10%-20%.

Since you already see the value, why not give it a try?

But let me tell you something: with years of experience, you can estimate the time a feature will take to implement. I do not know a single person who can tell you how fast a bug will be fixed. Debugging can really take longer, and steals more time than that 10%-20%.

Writing unit tests and interface documentation, such as Javadocs, in parallel is a time-saver for me. Both require thinking about edge cases. I decide what to do if, e.g., a pointer is null, write the decision into the interface documentation, and write the test case for it. – Patricia Shanahan yesterday

An oft-overlooked benefit of TDD is that the tests act as a safeguard to make sure you aren't introducing new bugs when you make a change.

The TDD approach is undoubtedly more time-consuming initially, but the takeaway is that you'll write less code, which means fewer things can go wrong. All those bells and whistles you often include as a matter of course won't make it into the code base.

There's a scene in the film Swordfish where, if memory serves, a hacker has to work with a gun to his head while being, erm... otherwise distracted. The point is that it's a lot easier to work when your headspace is in the code and you have time on your side, rather than months down the line with a customer screaming at you and other priorities getting squeezed.

Developers understand that fixing bugs later is more costly, but flip that on its head: if you could be paid $500 a day to code the way you code now, or $1000 a day if you wrote in a TDD way, you'd bite the hand off the person making the second offer. The sooner you stop seeing testing as a chore and start seeing it as a money saver, the better off you'll be.

That thing in your first sentence is called regression testing. – cat yesterday

Some aspects to consider that aren't mentioned in the other answers:

  • The extra benefit and extra cost depend on your experience with writing unit tests.
    • On my first unit-tested project, the extra cost tripled because I had to learn a lot and made a lot of mistakes.
    • After 10 years of experience with TDD, I need about 25% more coding time to write the tests in advance.
  • Even with test-driven modules, there is still a need for manual GUI testing and integration testing.
  • TDD only works well when done from the beginning.
    • Applying TDD to an existing, grown project is expensive and difficult, but you can implement regression tests instead.
  • Automated tests (unit tests and other kinds) incur maintenance costs to keep them working.
    • Tests created through copy and paste can make test-code maintenance expensive.
    • With growing experience, test code becomes more modular and easier to maintain.
  • With growing experience you get a feeling for when it is worth creating automated tests and when it is not.
    • For example, there is no big benefit in unit-testing simple getters/setters/wrappers.
    • I do not write automated tests that drive the GUI.
    • I take care that the business layer can be tested.

Summary

When starting with TDD, it is difficult to reach the "more benefit than cost" state as long as you are in a "time-constrained work environment", especially if there are "clever managers" who tell you to "get rid of the expensive, useless testing stuff".

Note: by "unit testing" I mean "testing modules in isolation".

Note: by "regression testing" I mean:

  • Write some code that produces some output text.
  • Write some "regression testing" code that verifies that the generated result is still the same.
  • The regression test lets you know whenever the result changes (which might be fine, or might be an indicator of a new bug).
  • The idea of "regression testing" is similar to approval tests: taking a snapshot of the results and confirming that they have not changed.
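
A minimal Python sketch of that snapshot idea (the build_report function and the snapshot file name are hypothetical; libraries such as ApprovalTests automate this pattern):

    import os
    import unittest

    def build_report():
        # Hypothetical generator whose text output we want to freeze.
        return "total: 42\nstatus: ok\n"

    class ReportRegressionTest(unittest.TestCase):
        SNAPSHOT = "report.approved.txt"

        def test_output_matches_snapshot(self):
            actual = build_report()
            if not os.path.exists(self.SNAPSHOT):
                # First run: record the current output as the approved snapshot.
                with open(self.SNAPSHOT, "w") as f:
                    f.write(actual)
            with open(self.SNAPSHOT) as f:
                approved = f.read()
            # Any later change to the output fails here; that may be fine
            # (update the snapshot) or may signal a new bug.
            self.assertEqual(actual, approved)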
