It's ultimately the Product Owner's decision whether you should release or not, but in order to make that decision she must answer that very same question. Based on what? What information can we give her to support her decision?
Most teams go about it like this:
"Hey, all unit tests and integration tests are passing and we have 98% test coverage! We've manually checked that the software satisfies the acceptance criteria for the new stories we played, and we also did some manual regression testing. Everything seems to be OK. We're happy to release."

For a team to state they're happy to release when this is all the information they can provide is an act of faith in their skills. They're assuming that because the unit tests are passing, the software is going to work as expected.
Unfortunately we've all been trained to fully believe in our unit tests and how good we are: surely if we test the individual methods and then the interactions by faking dependencies, we can infer that everything works, right?
Well, I'd argue there are a lot of assumptions and untested implications in the proposition "the system works because we have good unit test coverage and those tests are passing". See my previous post about the value of unit tests, and by all means, do your Googling... the subject's been heating up recently.
Long story short, you may have all the unit tests in the world, 100% code coverage (whatever that means), and still have no certainty that the software is going to work as the developers intended. More importantly, because unit tests verify that the software does what the developers wanted, you'll have even less certainty that the software is going to do what the Product Owner wanted!
Here's a good exercise you can do to get an idea of the assumptions you make in this process: take a couple of user stories you have already delivered and released, and try to map out the unit tests that guarantee each user story works as expected.
Take a minute and think about how you would go about it right now...
The first warning sign should be if you have never thought about this before, which is usually the case.
In most cases just imagining the exercise should be enough, but if you go ahead and actually manage to do it, you'll end up with either the realization that you have blind spots (particularly around interactions) or, in the best case, a big mess of tests scattered all around the code with no clear relation to any particular acceptance test.
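One lightweight way to start building that map (a sketch only, assuming NUnit-style tests; the story IDs and the DiscountCalculator class are hypothetical) is to tag each test with the user story and acceptance criterion it supports:

```csharp
using NUnit.Framework;

// Hypothetical class under test, standing in for your real production code.
public class DiscountCalculator
{
    public decimal Apply(decimal total, string code) =>
        code == "SPRING10" ? total * 0.9m : total;
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    [Category("STORY-42")]   // "As a shopper I can apply a discount code"
    [Category("AC-42-1")]    // acceptance criterion: a valid code gives 10% off
    public void A_valid_code_gives_ten_percent_off()
    {
        var calculator = new DiscountCalculator();

        Assert.That(calculator.Apply(20m, "SPRING10"), Is.EqualTo(18m));
    }
}
```

The runner can then list or filter tests by category, which makes the story-to-test mapping explicit instead of something you reconstruct from memory.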
If your Product Owner is not very technical, then deciding to release after being given the former assessment of the quality of the release is an act of faith in the team, one most of us have been part of. If the Product Owner happens to be a techie, then she's probably into the unit test "religion" and is as convinced as everyone else that you can release with very high (and false) certainty that things are going to work as expected.
OK, you got me thinking. What can I do?
"Hey, all unit tests are passing, and all the end-to-end automated acceptance tests, which run in a production-like environment, are passing. We've done some manual exploratory testing and didn't find any issues.

The automated acceptance tests cover 83% of all the acceptance tests, 98% of those considered critical, and 95% of the new acceptance tests we've just implemented.

We're happy to release."

If you've followed your process correctly, then the acceptance tests represent everything your Product Owner cares about. If you can tell the Product Owner that 83% of the acceptance tests have been verified in a production-like environment, you're basically telling her that at least 83% of the features are known to work (the only remaining risk being how different the test environment is from production). And she can now make an informed decision of the type "no, I really need X and Y to be verified" or "great, let's go ahead".
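For illustration, here's a minimal sketch of what one of those end-to-end acceptance tests might look like: it drives the system through its public HTTP API as deployed in a production-like environment. The base URL, endpoint, and JSON shape are hypothetical placeholders, and I'm assuming an NUnit-style runner:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
[Category("AcceptanceTest")]
public class PlaceOrderAcceptanceTests
{
    // Points at the production-like environment; the URL is a placeholder.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri(
            Environment.GetEnvironmentVariable("ACCEPTANCE_BASE_URL")
            ?? "https://staging.example.com")
    };

    [Test]
    public async Task Placing_an_order_returns_a_confirmation_number()
    {
        // Exercise the system the way a real client would: over HTTP,
        // against the deployed stack (web server, services, database, etc.).
        var body = new StringContent(
            "{ \"productId\": \"book-1\", \"quantity\": 1 }",
            Encoding.UTF8, "application/json");

        var response = await Client.PostAsync("/api/orders", body);

        Assert.That((int)response.StatusCode, Is.EqualTo(201));
        var content = await response.Content.ReadAsStringAsync();
        StringAssert.Contains("confirmationNumber", content);
    }
}
```

Tests like this are what the 83% figure is counting: acceptance criteria that have actually been exercised end to end, not inferred from unit tests.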
But isn't the end-to-end automation of acceptance tests expensive?
It's all about managing risk. If the Product Owner and the team are happy to live with the risk of regressions or bad behaviour, then it makes sense to cut down on this automation, or to apply it only to critical areas. Most teams are not in that position though, and the only reason releases get approved is that there's no clear picture of how much certainty there actually is.
Here's another exercise (more of a challenge really): next time you have to release, create a list of all the acceptance tests in the stories you have already delivered and released. Then try to explain to your Product Owner how many of those have been validated for this release. Feel free to do all the magic and hand waving you want, just stay honest.
You can say "this acceptance test is covered because we have this and this and this unit tests, you see...", and "this other was tested manually and because it uses most of the same components as acceptance test X then we consider X has also been validated". This other might also work: "we haven't touched this component for a long time and because users haven't complained so far we can safely assume it's working properly".
See how much confidence you can get from your Product Owner for that release.
If your application is in any way critical to your company, by this point you'll probably have a very pallid face in front of you. Clear exceptions might be some start-ups, experiments, prototypes, etc.
After perhaps a bit of heat and arguing, and after pointing out that it's unfeasible to manually verify every single acceptance test before each release, this would be the time to propose investing in the automation of your acceptance tests, running in a production-like environment.
Note that all of this is closely related to BDD, which has been around for a long time; but most people associate BDD with a particular framework, and BDD also implies that you write your tests first, which is not the point I'm trying to make here.
Any hints on automating the acceptance tests?
Actually, yes. There are a couple of things we've been trying that I haven't seen anywhere else, like having a single set of acceptance tests that can be run both fast, without any I/O, and against a fully-deployed environment.
Being able to run those tests fast gives developers very quick feedback on their changes, much like unit tests do, especially if you use a tool like NCrunch.
The fully-deployed version will be much slower, so it should probably run as part of your continuous integration, but it will give you much more confidence in return.
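To give an idea of what I mean (a rough sketch with hypothetical names, not our actual code), the trick is to have the acceptance tests talk to the system through a thin driver interface, and let a switch decide whether that driver runs everything in-process with no I/O or drives the fully-deployed system:

```csharp
using System;
using NUnit.Framework;

// The acceptance test only knows about this small driver interface.
public interface IOrderingDriver
{
    decimal GetTotalAfterDiscount(decimal itemPrice, string discountCode);
}

// Fast mode: runs the scenario in-process with no I/O. In a real system this
// would call your application services with in-memory fakes behind them;
// it's simplified here so the sketch compiles on its own.
public class InProcessOrderingDriver : IOrderingDriver
{
    public decimal GetTotalAfterDiscount(decimal itemPrice, string discountCode) =>
        discountCode == "SPRING10" ? itemPrice * 0.9m : itemPrice;
}

// Deployed mode: would drive the same scenario through the fully-deployed
// stack (HTTP, UI, database). Left as a placeholder in this sketch.
public class DeployedOrderingDriver : IOrderingDriver
{
    public decimal GetTotalAfterDiscount(decimal itemPrice, string discountCode) =>
        throw new NotImplementedException("drive the deployed system here");
}

[TestFixture]
public class DiscountAcceptanceTests
{
    private IOrderingDriver driver;

    [SetUp]
    public void ChooseDriver()
    {
        // One environment variable decides which mode the same tests run in.
        driver = Environment.GetEnvironmentVariable("ACCEPTANCE_MODE") == "deployed"
            ? (IOrderingDriver)new DeployedOrderingDriver()
            : new InProcessOrderingDriver();
    }

    [Test]
    public void A_valid_discount_code_gives_ten_percent_off()
    {
        Assert.That(driver.GetTotalAfterDiscount(20m, "SPRING10"), Is.EqualTo(18m));
    }
}
```

The same test then runs in milliseconds on the developer's machine and, unchanged, against the deployed environment on the CI server.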
I'll try to post more details about this approach soon, but whatever you do, please start being objective about the quality of the software you're releasing. If you decide to take the step and start automating your acceptance tests systematically, then make sure you invest in quality: you'll need to maintain this code alongside your production code.