Wednesday, August 19, 2009


There are various kinds of tests that can be performed on a system, but it looks like most engineers don't agree on the definitions. For example, in my current project a 'unit test' tests out the end to end functionality but must be performed by a developer, i.e. any test that a developer performs is a unit test!
In any case, this is my usage:

Unit Test - Generally meant to indicate that only a small subset of the code is under test. You might find a few fanatics who argue to the effect that anything that goes to the database isn't a unit test, anything that needs an external interface isn't a unit test, etc. Ignore them. The key is that the test might not exercise the flow as it finally would: there might be mocked interfaces or hardcoded data. Generally written by developers and generally easier to automate. Unit tests are also low hanging fruit, and contrary to what most agile engineers will tell you, are pretty much useless except to impress some manager with 'We have automated unit tests!', 'We have continuous builds with JUnit reports!', 'We have 99% code coverage!'. The quality of these tests is mostly poor (they are written by developers after all); the test data provided rarely covers boundary conditions, invalid data or exceptional conditions. Yes, yes, I know you should have high code coverage. It's quite easy to game code coverage (which is what happens when metrics are imposed by management), but even if it weren't, the coverage is only as good as the code and the test.
e.g.

int add(int one, int two) {
    return 4;
}

Test the function with add(2,2) and add(-1,5): 100% code coverage, automated unit tests all green, mission accomplished!?
i.e. if the developer doesn't account for boundary conditions in the code, he isn't likely to test out those scenarios either.
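To make that concrete, here is a small sketch of what boundary-condition tests for the add() example above might look like. Plain Java assertions are used instead of JUnit just to keep the sketch self-contained; the same cases translate directly into @Test methods. The check() helper and the test cases are my own invention for illustration.

```java
public class AddTest {
    // The buggy implementation from the example above.
    static int add(int one, int two) {
        return 4;
    }

    static void check(boolean ok, String name) {
        System.out.println((ok ? "PASS " : "FAIL ") + name);
    }

    public static void main(String[] args) {
        // The two happy-path cases that happen to pass and give green builds:
        check(add(2, 2) == 4, "add(2,2)");
        check(add(-1, 5) == 4, "add(-1,5)");
        // Boundary and negative cases that actually expose the bug:
        check(add(0, 0) == 0, "add(0,0)");
        check(add(Integer.MAX_VALUE, 0) == Integer.MAX_VALUE, "add(MAX,0)");
        check(add(-3, -4) == -7, "add(-3,-4)");
    }
}
```

The first two checks pass with 100% coverage of add(); the last three fail, which is the whole point: coverage measured against the first two cases alone tells you nothing.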

Integration Test - Generally tests out the interface between multiple systems, though the data might be faked. If there are only two types of tests that you can carry out on your system, then this is one of them.
These tests are extremely hard to write (or at least to make repeatable), and are very, very useful. The earlier you can have these tests up and running, the better the quality of your system. They are hard to write not because of technical problems (which exist) but because of people issues. The different systems are normally run by different teams (sometimes even external to your organisation), which have their own schedules, develop at their own pace, make their own assumptions, and are notoriously non-cooperative. The technical limitations are normally due to non-repeatable, time-sensitive data, which either needs to be reset to a known state or created from scratch.
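The "reset to a known state" idea can be sketched as follows. ExternalOrderSystem here is an invented in-memory stand-in for a real system owned by another team; in real life the hard part is getting that team to provide any reset or data-creation hook at all.

```java
import java.util.HashMap;
import java.util.Map;

public class RepeatableIntegrationTest {
    // Invented stand-in for a remote system run by another team.
    static class ExternalOrderSystem {
        Map<Integer, String> orders = new HashMap<Integer, String>();
        void submit(int id) { orders.put(id, "RECEIVED"); }
    }

    static ExternalOrderSystem system;

    // The repeatability hook: before every run, put the time-sensitive
    // data back into a known state (or recreate it from scratch).
    static void resetToKnownState() {
        system = new ExternalOrderSystem();
        system.orders.put(1, "PROCESSED"); // seed data the test assumes
    }

    public static void main(String[] args) {
        resetToKnownState();
        system.submit(2);
        System.out.println("order 2 = " + system.orders.get(2));

        // Run it again: because of the reset, the result is identical.
        resetToKnownState();
        system.submit(2);
        System.out.println("order 2 = " + system.orders.get(2));
    }
}
```

Without the reset step the second run would start from whatever state the first run left behind, which is exactly what makes most integration tests non-repeatable.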

Functional Test (End to End Test) - Tests out functionality from a user's perspective. This is the second type of test that must be run, and the earlier you can run these tests, the better the quality of the system. Normally nothing is faked; actual data is used for the tests. When these tests are performed by a business user / stakeholder, they become acceptance tests. These tests are extremely important and difficult to fully automate. They would also include UI tests ('this page doesn't work on my browser!').

Performance/Load - Functional tests run in parallel (normally a good mixture) with varying numbers of concurrent users. Easy to do if the functional tests can be automated; in most cases difficult to simulate (especially if the system is already live; easier to do the first time). Depending on the duration of the run, this also goes by different names. Some organisations refer to long running tests as smoke tests (used to smoke out memory leaks), whereas others use 'smoke tests' to refer to a few important functional tests that are run to check that no major errors exist.
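A minimal sketch of what "functional tests run in parallel" means: take an automated functional check and run it concurrently for a fixed number of simulated users. checkHomePage() is a hypothetical stand-in; a real version would hit the actual system under test and measure response times.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleLoadTest {
    static final AtomicInteger passed = new AtomicInteger();

    // Hypothetical functional check; here it only simulates work so
    // the sketch runs standalone.
    static void checkHomePage() {
        passed.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        int users = 50; // number of simulated concurrent users
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            pool.submit(SimpleLoadTest::checkHomePage);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("passed=" + passed.get() + "/" + users);
    }
}
```

Varying the pool size (and the mix of checks submitted) is what turns a suite of automated functional tests into a load test.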

System Tests - Used to check that all systems are up and running. Not really a test category by itself, but useful for aborting test runs early.
