I'm a reasonably inexperienced developer, and I recently got into a debate with the manager of our current project.

It's a concept demonstration for a trial, funded internally by our company. While going over the project plan I'd put together, the project manager said we didn't want to put any time into testing. It's not a safety-critical system; it just reads and analyzes data, and no one will be making decisions based on it that might affect anything, because it's only a trial concept demonstration and no one is buying it yet. Fair enough.

However, I think my manager (not the project manager mentioned above) was getting pressure to prove we're doing what we say we're doing for project reviews etc., because he started asking us what testing we were doing and telling us to record "our testing" in a spreadsheet.

There was some confusion. My fellow developer and I were not comfortable saying we were doing any testing, formally or informally. Our manager seemed to think "no testing" meant we'd never run any code. We had used the word 'test' when describing running the code. You know what it's like: you spend 20% of your time writing some code and 80% of the time trying to work out why it isn't running properly.

I was arguing that this is not testing. My manager was saying it was, and that if we wrote it down in a spreadsheet with a date we could prove in reviews etc. that we'd done testing.

Cue 2 hours of argument/discussion on what 'testing' means. At the end, I was unwilling to budge and our fundamental disagreement was: When asked if we were doing testing, my answer would be no, and his yes.

I feel very uncomfortable moving from this position; my understanding is that testing has a more formal definition in software development, and saying that simply running the code was 'testing' would be misleading.

It came down to the guy questioning my experience (I've not got a CS degree) and asking if I would benefit from going on a 'course', which makes me even more uncomfortable. We've had fundamental disagreements in the past that we haven't been able to overcome.

Would you consider running code during development to be testing, or would saying that be misleading?

Without testing, how would you know it works right? And if it does not have to work right, why would you spend resources on it? When it has passed tests, it is "DONE" - and you need to agree on what "DONE" means. – Peter Masiar 2 days ago
Because you can demonstrate it? Demonstrate outputs? We've got requirements (that we have been trying to work out as part of the project). We can say we've completed them, and do some demonstrations without formally testing them, can't we? – Joe 2 days ago
I would call this Smoke Testing, ensuring that the basic functionality is, well, functional. – Paul Muir 2 days ago
Maybe the focus shouldn't be on whether you're testing (because that's just arguing semantics), but on what you're testing. Sure, you checked whether that button you clicked does what you expect, but did you check all edge cases? Did you check whether it still works after you made another change to some tangentially related thing? Did you record your results? Did you write down what steps you performed? Could you repeat that test on a later version of the software? Whether you're "testing" or not isn't important, the important part is what those tests actually tell you about the software. – Ajedi32 2 days ago
Find out what someone will want to see in operation. A concept demonstration is to show something. What are they looking for? It is like a proof of concept car: you wouldn't be doing your job if someone climbed in and turned the key, nothing happened, and you said, "you told us not to test anything". This "proves" only miscommunication. To paraphrase your remark: "You know what it's like, you spend 20% of your time getting work done and 80% of the time trying to work out what Management wants from you." Find out what they want. That is your first task, always. Next: give it to them. – no comprende yesterday

11 Answers

A test is an experiment. You have a hypothesis, which is normally governed by a specification, such as "When I enter a username and password that I know to be valid and click login, I am brought past the login screen and to the dashboard." or "When I log in to the program with 10,000 simultaneous users, performance is no less than 95% of that when there are 1,000 simultaneous users".

All tests start with this hypothesis. The scope of the hypothesis may be limited to a small portion of the program, like a single function or component. It may involve multiple components. It may contain known good data, known bad data, or random data. There are a lot of different things you can shove into this hypothesis. But whatever it is, the hypothesis comes from a part of the product specification.

The next part of the test is the experiment itself. You perform the action and collect the result. This might be a Selenium or JUnit test. It might be a human clicking buttons. It might be a batch job that does routine checks. But someone has to perform the action and collect observations.

Then the observation is compared to the original hypothesis to determine if it passes. If you can't tell whether or not it passes, then you need to refine the criteria. This is why things like "fun" are so hard to test for.
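
To make that hypothesis/experiment/comparison shape concrete, here is a minimal sketch in Python (pytest-style, also runnable on its own); the `login` function is an invented stand-in for whatever your specification actually describes:

```python
# A test is an experiment: state the hypothesis, perform the action,
# then compare the observation against the hypothesis.

def login(username: str, password: str) -> str:
    """Invented stand-in for the system under test: returns the page the user lands on."""
    valid = {("known_good_user", "known_good_password")}
    return "dashboard" if (username, password) in valid else "login"

def test_valid_credentials_reach_dashboard():
    # Hypothesis (from the spec): a known-valid username and password
    # take the user past the login screen to the dashboard.
    landing_page = login("known_good_user", "known_good_password")  # the experiment
    assert landing_page == "dashboard"                              # observation vs. hypothesis

if __name__ == "__main__":
    test_valid_credentials_reach_dashboard()
    print("hypothesis held: a valid login reaches the dashboard")
```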

So if you don't have a formal hypothesis and a formal comparison of observations to that hypothesis, then sorry, in my books you're not doing testing.

As a former developer, I can tell you a good rule of thumb:

If you're trying to make it work, you're developing.
If you're trying to make it break, you're testing.

I agree with this all the way up to the point where you require a "formal hypothesis and a formal comparison of observations". It's possible to perform "tests" that are informal as part of development (I expect this button to do this thing when I click it; does it?), but in practice such tests are pretty much worthless as soon as any even remotely basic level of rigor is required. – Ajedi32 2 days ago
Ajedi32, in your example, "the button will do this thing when I click it" is the hypothesis. Contrast that with "just running it on your desktop", where the hypothesis is at most, "If I run it, it will not crash." – user246 2 days ago
Something I'm struggling with about my answer is that exploratory testing doesn't fit my definition necessarily. Formal testing is a science, while exploratory testing is an art. But I hate to dismiss the value of experienced testers "going to town" on an app because they're likely to find good material to report back to the dev team. I suppose, though, as far as OP is concerned, they need to identify what level of testing they need for their prototype, and exploratory testing probably isn't in that. But that makes my answer question-specific, not canonical. :-( – corsiKa yesterday
Joined SQA just to upvote this answer for your rule of thumb. The rest of the answer is solid, but that sums it up so perfectly. – WeRelic yesterday

That is really up to your organization and your relationship with your stakeholders.

In most cases, testing is just a label with no specific meaning. The ultimate goal is to deliver a high-quality product. "Testing" is just a label for a kind of activity that might help you reach that goal. You can define that label however you want, but it is your responsibility to deliver something that meets your quality goals.

That said, your arrangement with your stakeholders (the people who will interact with your software) may stipulate how and/or when testing should take place. If you have that kind of arrangement, you will need to use those stipulations to decide whether running software on your desktop meets their definition of testing.

This is the right answer. If you are using a more casual definition of testing than the person asking the question is using, they will feel lied to. Technically, this "start it up and see if it works" testing is "manual testing", but I doubt it's what the asker means. In the meantime, make sure you document your conversation. Send a summarizing email to your boss or something, so he can't throw you under the bus later. Consider adding in some unit tests NOW. They are really hard to write in later and will change and improve your code design. #gratuitousAdvice – Ethel Evans 2 days ago
I agree that this is the right answer; however, @EthelEvans, I think you have it backwards: he's reluctant to say it's testing because "he said we didn't want to put any time into testing". Adding unit tests makes the situation worse for OP. – Tom.Bowen89 yesterday
@EthelEvans, I'd love to write unit tests; I've at least designed the UI using MVVM so it could be unit tested to a degree. But we simply don't have the resources/budget. I'm already struggling to persuade this guy the software is more complex than he thinks and explaining why features aren't done when 'they are easy, I can do that in excel'. – Joe 23 hours ago

TLDR; The specific terminology that probably describes what he is asking for is smoke testing.

The long version: You are arguing with your manager instead of working towards a solution. Terminology is important, but so is communication and getting the job done.

You haven't done unit or integration testing, and you probably haven't done any performance or usability testing; there are all kinds of specific tests that you haven't done -- forget about all of them.

Focus on what is being asked, and not the terminology used. Your manager wants you to document at least some of what you have verified about what you have built. What have you verified that can usefully be reported?

Write something up that gives him what he needs. If there is specific terminology for what he is asking for, introduce him to it; if there's not, and you are uncomfortable using the word "tested", then use another word that means what he wants. Call it verified, examined, checked, inspected, scrutinized or Fred. Whatever works and makes it clear that you didn't just bang on the keyboard until your hands got tired and then call that done.

Finally, get a clear statement of what "done" means, both for the overall project and for the documentation that he wants from you.
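
If it helps to have something concrete to hand over, here is a rough sketch of that documentation idea in Python; the two checks are made-up placeholders for whatever your concept demo actually does, and the point is simply that each row records what was checked, when, and the outcome:

```python
# Sketch: run a few basic checks and append the outcome, with a timestamp,
# to a CSV file that opens as a spreadsheet. The checks are placeholders.
import csv
from datetime import datetime

def sample_data_loads():
    # placeholder: replace with "open and parse the known-good sample file"
    return True

def analysis_runs_on_sample():
    # placeholder: replace with "run the analysis end to end on sample data"
    return True

CHECKS = [
    ("sample data loads", sample_data_loads),
    ("analysis runs on sample input", analysis_runs_on_sample),
]

def run_checks(path="verification_log.csv"):
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for name, check in CHECKS:
            try:
                passed = bool(check())
            except Exception:
                passed = False  # a crash counts as a failed check
            writer.writerow([datetime.now().isoformat(timespec="seconds"),
                             name, "PASS" if passed else "FAIL"])

if __name__ == "__main__":
    run_checks()
```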


Testing is validating a situation against a set of conditions. In your case you validate whether it runs; you could say "I am going to test whether it builds". That would mean you are actually testing something.

From an SDLC perspective you are actually just coding and checking whether you think it works and is good enough; this is not testing. Testing is a bit more structured and hopefully a repeatable way of validating the same things. Often it is also a different phase in the life-cycle, unless it is test-driven development or another automated testing effort, which would be exercised in parallel with developing the code. Test (case) design could be done in parallel, but not by one person.

In your case, if management really wants to minimize testing efforts, describe in a definition of done which steps have to be taken to release a feature to a production environment. This definition should include that it builds and that the happy path of the feature is executed to verify it is complete. I guess some developers could build systems without ever running them, but the time of punch cards is far behind us. You could skip testing efforts like automated, exploratory and/or usability testing.
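
As an illustration only (the `analyze` function below is invented, not your real code), the "happy path of the feature is executed" part of such a definition of done can be captured as a single scripted check:

```python
# Sketch of a happy-path check: with known-good input, the feature runs
# end to end and produces a complete result. `analyze` is an invented stand-in.

def analyze(readings):
    """Invented stand-in for the demo's read-and-analyze step."""
    return {"count": len(readings), "mean": sum(readings) / len(readings)}

def test_happy_path_with_known_good_input():
    report = analyze([1.0, 2.0, 3.0])   # known-good sample input
    assert report["count"] == 3
    assert abs(report["mean"] - 2.0) < 1e-9

if __name__ == "__main__":
    test_happy_path_with_known_good_input()
    print("happy path OK")
```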

Agree on a definition of done! Minimize it for now and extend it in the future when you run into trouble.

I get the feeling you are prototyping as you say the following in a comment:

the point of the project is to develop something and see if it's a viable product

Be aware that because you are taking shortcuts (e.g. skipping testing) you are creating technical debt. If the product is viable it might need a clean and well-tested reimplementation, because taking shortcuts now will mostly lead to a product that is unmaintainable in the long run. Discuss technical debt with your stakeholders so they have an honest view of the future. Selling your minimal testing efforts as testing by placing them in a sheet with a timestamp sounds like lying to yourself, your stakeholders and your clients. Take technical debt seriously. Be warned! :)


Without testing, how would you know it works right? And if it does not have to work right, why would you spend resources on it? When it has passed tests, it is "DONE" - and you need to agree on what "DONE" means.

Testing is the difference between Wally (from the Dilbert cartoon) saying "it worked for me, my test file in my browser did not crash" and saying "we can expect it will produce correct results for any input from any of our customers".

But the "testing" you did was developing. The code might work for other people processing different input files, or it might not. Testing (by other people using different inputs and possibly a different workflow) will determine whether the code is "done".

Multiple Levels of Done - you need to find out what your boss's definition of done is.

Good testing costs resources and time. If you can release a "beta version", your customers will test it for you and give you input files which will break your assumptions about input, etc. (at no cost to your boss), but there is a price: your customers will see your code's failures. It is a business decision whether it makes sense to spend resources to present a better product to your customers. E.g. the first version of Windows which was usable was 3.1; previous versions were junk for beta testers.

It could be a perfectly valid approach to release a beta version (without spending too many resources on it), so customers can see the product and try it in "beta" - if they have no expectations for it to work, produce valid input, etc. ("release early, release often"). It allows you to face customers' expectations before lots of resources are spent on something customers don't care about. Talk to your boss; we have no idea what s/he wants.

As a side note, I would never argue with a boss for two hours like that. Instead, I would ask for a time-out to research the differences. It might be that you have different definitions of "done", and your boss's definition might be perfectly fine for the product you work on. Getting some training in "agile development" would be beneficial for you - Google is your friend.

The multiple levels of done seems like a good thing to show to my boss. He seems to think we can say the code's been tested after level 1 on that list, which is what I'm uncomfortable with. The goal of the concept demonstration, I think, is to bring everything to level 3, maybe 4. I do see your point about not spending resources on it, but the point of the project is to develop something and see if it's a viable product; whether we can decide that without proving everything works as designed, and at what level, is perhaps the question. – Joe 2 days ago
I've seen all kinds of "levels of done" stuff, but this has been my go-to favorite (pdf): thebraidytester.com/downloads/YouAreNotDoneYet.pdf – panhandel yesterday
Yup, nice list. Consider how puff tubes for input might influence testing. Say no more. Great checklist, thanks. – Peter Masiar yesterday
@Joe What's important about it is that maybe that level 1 is actually fine. If it's only ever going to be internal for the next year and he's just trying to get it fully funded, it very well may be. You know quality will never be perfect, so where do you draw the line? – corsiKa yesterday
@corsiKa, I agree, level 1 would be fine, but I'd be unhappy to say we'd performed any form of testing. – Joe 23 hours ago

The TL;DR answer: No.

The explanation:

Running code during development, on the developer's machine, is simply iterative development.

Running it on another system (one that does not have the development environment and does not compile the code first) can be considered smoke-testing.

That said, here's what I'd suggest you do, given that your application is a proof-of-concept trial (and my experience with these is that a distressing number of them become production code far too quickly):

  • Before you do anything else, ask your manager what level of testing he expects. You want to make sure that both of you are using the same definitions or you'll be talking past each other. At this point, it matters less that the definitions are correct than that you are both communicating (which is a bad position to be in, but that's life).
  • You should also ask your manager why there has been a change from the original "no need to test because this is a proof of concept". That will give you a better idea of why your manager is insisting you document some form of testing.
  • If (as I suspect) there is a requirement from further up the food chain that proof of testing be shown, I'd recommend that your proof of testing be something like the following:
    • List installation version (i.e. you're running the application on a different machine) and the actions performed. Call this a smoke test.
    • For each listing in your spreadsheet, enter the modules you covered. So, for instance, you logged in and added a user, then logged out and logged in as that user. Your smoke tests covered login and user management.
    • Have a separate listing of all modules in the application and all functions within each module. Yes, it's work, but I suspect you're going to need a certain amount of covering your anatomy here.
    • Include a list of what you have not tested, and why (the "why" in your case is going to be "no time"). This list would cover the functions you expect to receive little use, the error handling (which is out of scope for smoke tests), exotic inputs, and so forth. If you're using anything resembling an agile process, make sure to be explicit that your tests cover only the happy path or steel thread - the minimum set of functionality that makes the application viable with correct inputs.
    • Also include in your documentation that you're making no guarantees about what will happen if the application is used in any way outside your happy path testing. In my experience caveats like this get ignored, but they're useful to point to when the blamestorming starts.
  • Finally, good luck. What I'm seeing in your question suggests that the politics in your workplace aren't the best. That's never fun to deal with.

Your problem is very simply a confusion of semantics. The word "testing" is too broad, hence allowing wildly different interpretations. It just makes no sense to talk about "testing" without saying what you actually mean.

Someone mentioned "definition of done". This is what you are looking for; it is a contract between you and your customer. The DoD can be as fuzzy or specific as you (and your customer, even if it is internal to your company) like. Part of the DoD is a metric of when the software is considered to be "correct". As long as you and your customer both agree to it, and your code fulfills it, you are in the clear.

In your scenario, the metric of your DoD could be: "the compiled program can be started manually with a trivial input on the developer's laptop". For a DoD, that is perfectly acceptable. Everybody will know that the software will likely fail immediately when confronted with the real world, but for a prototype/proof of concept that may just be fine.


The question is: does running code equal testing?

If it means launching the code and observing that it runs and loads the page or initial screen, then yes, it would be considered a test - a smoke test, if you will.

But there's more to testing than just a smoke test. As others have mentioned, you would do justice to the project by performing other types of tests, e.g. functional tests or even non-functional tests. There's UI or API testing. Testing against the software requirements or user stories would be a good place to start.

From what I gather, the project needs at least a Test Lead, a senior test analyst or, better yet, a Test Manager! Then you will have a test process, defect management, test design, test analysis, test strategy and test closure.


Developing without doing unit testing is, in my experience, similar to taking a used car for a test ride without looking at the engine or any other components. Yes, you might make a long ride and experience no issues, but are you confident enough that this car is worth paying for?

If this is how you do your development, it's entirely possible that iteratively running code during your development cycle might be enough to cover a great portion of issues for a smaller project, without writing tests explicitly. However, in my experience, doing this alone is not enough to keep production issues as low as you, as a developer, can make them.

So, again, in my personal experience:

  1. Not doing unit testing while coding (not necessarily TDD, but just testing tiny chunks of code while you code) has a couple of issues.

    By "running and seeing what happens" you observe the system as a "black box", evaluating its quality through the UI (or whatever results you get from the program).

    Perhaps you write a chunk of code, then try to build, and repeat until you get rid of all the syntax errors. Then you try some inputs and it looks promising. But then you try some other, and it doesn't work - it's time to debug!

    Depending on your skills, this might be fun - it's like solving puzzles throughout the day. But that tiny bug you found in a certain function just happened to manifest itself under the right conditions; the underlying problem is that this tiny function doesn't do what it's supposed to do.

    Unit testing tries to catch these errors in functions as you write them, even errors that would be very hard to provoke and only show up under rare conditions, and that's what gives your manager and stakeholders confidence in your code.

    It's also efficient - it's very easy to prove a function is doing what it's supposed to do with a couple of quick tests (see the sketch after this list). It also promotes writing code which is easily testable, which generally means as purely functional as possible, with as little state as possible. Such functions are easy to test, and you can rest assured they will work in all conditions.

  2. What you are doing while developing is a sort of manual integration testing. But not doing automated unit and integration testing means you are risking regression bugs in your commits, and making it harder to refactor your code or safely introduce changes.

    If your CI server is running a bunch of tests on each commit, and they are all green, it adds confidence in your code.

    Refactoring the code is much easier too. Needing to rewrite a part of an app is not so scary if you know there are automated integration tests which will tell you something is broken.
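
Picking up point 1 above, here is a minimal sketch of that kind of quick unit test in Python; `parse_reading` is an invented example of a small, pure function, chosen only to show how little effort the checks take:

```python
# Sketch: a tiny, pure function plus a couple of quick unit tests.
# A function with no hidden state is trivial to check in isolation,
# including the rare inputs that are hard to provoke through the UI.

def parse_reading(raw: str) -> float:
    """Parse a sensor reading like '12.5', tolerating whitespace and comma decimals."""
    cleaned = raw.strip().replace(",", ".")
    if not cleaned:
        raise ValueError("empty reading")
    return float(cleaned)

def test_parses_plain_number():
    assert parse_reading("12.5") == 12.5

def test_tolerates_whitespace_and_comma_decimal():
    assert parse_reading("  3,14 ") == 3.14

def test_rejects_empty_input():
    try:
        parse_reading("   ")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")

if __name__ == "__main__":
    test_parses_plain_number()
    test_tolerates_whitespace_and_comma_decimal()
    test_rejects_empty_input()
    print("all quick checks passed")
```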

Other types of testing (functional tests, like testing user stories, or performance testing) seem to me like obviously separate phases of the development, and you can hardly argue they can be done implicitly while programming.

Oh, I agree doing unit testing as you write is the best (or before), but we simply don't have the budget or resources! – Joe 23 hours ago
@Joe: for a start, unit testing doesn't have to be "extreme" as in "taking twice as much time as coding". Next time you write a class which should do one specific simple thing right, create that first test project (shouldn't take more than ~1 min.), and write a couple of functions which test each of the public methods (try to spend ~2 min. per method). In an 8 hour workday, spend 30 min. writing a couple of tests. In the long run, tests really pay off. You should be able to reduce lots of debugging, and you will have more reusable tested code which is a great foundation for future projects. – Groo 17 hours ago
Writing code professionally means you should follow good programming practices (like SOLID in OOP), but developers often prefer cowboy coding over clean design for these same reasons: low budget, short deadlines, no time for refactoring or even thinking about clean design. In my honest experience, this is a misconception and doesn't work in the long run. TDD (and "agile methodologies" generally) is, I agree, a cool buzzword which can be a very poor metric of code quality. – Groo 17 hours ago
Fact is, when you write a class, you do need to test it in one way or another. The only difference is whether you hit F5 to "run the app and see what happens" - in which case you are observing your class wrapped in possibly many layers - or you write a quick test which checks whether the class really does what it says it does. The latter approach is my favorite: 1) I sometimes cannot believe the number of tiny bugs I catch this way, 2) the test remains in your code forever, 3) test coverage gives you at least some confidence without even running the app and 4) it makes your boss happy. – Groo 17 hours ago

I run into this exact problem a surprising amount. People in various corners of the business use terms like "data" and "testing" ambiguously without themselves really understanding what they're asking when they utter these terms.

So when we're constructing some throw-away proof of concept, our development plan may not include any period of formal functional testing (because it's not necessary), or regression testing (because there's no prior version to compare with) or even writing regression tests (because there's not going to be a future version, either).

But of course this doesn't mean we never actually ran the code. Basic developer-level tests are an inherent part of programming. If you'd never "tested" what you wrote, you would literally be programming blind. At its most extreme, this approach would mean not even performing a check that your code compiles, and in my experience getting something to compile first time is very rare. So obviously you are going to be doing some amount of testing of your code before you announce that it works and is ready for demonstration.

But managers don't always get this. Your project manager will say "why did you spend time on testing? I said not to". And then when you decide "okay I won't count that as 'testing' because it was just basic development sanity checks" your line manager comes to you wondering why — and how — you haven't done any testing at all.

At this point I find it's best to qualify the word "testing" by replying with a sentence. Something like:

We haven't done a formal test run (and aren't planning to because it's a Proof of Concept), but my basic development-level tests show that it works well enough for the demo tomorrow.

It's worth noting that since your line manager asked you for a spreadsheet of test results, that suggests to me that you actually are expected to do some formal testing. In that case, he is actually disagreeing with the project manager and you should refer them to each other to figure out what they want your team to do.

Hahaha yeah right. Proofs of concept always end up evolving into real products. Management always ruins things by making that happen. Nowadays I try not to cut corners even on a proof of concept, because that's the best way to give yourself a tremendous ton of technical debt on the first version of your product.


Manual testing is a large area; I'd break it down into:

  • Unit testing
    Do basic functions achieve their purpose when used by the user?

  • Integrated Testing
    Are related services and datastores updated and communicated with correctly?

  • Performance Testing
    How long do responses take? How many users can be supported simultaneously (load)?

  • Exploratory Testing
    Does the system work from a real user's perspective?

These are similar to the four quadrants of Agile testing; note that the tests could be manual or automated - the areas and needs are similar.

Actually these four types do not describe anything from quadrant two: lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants Also you do not answer the question: Does running code equal testing? Or am I missing something? :) – Niels van Reijmersdal 2 days ago
