I've been testing my existing work application with JMeter for the last 6 months, so what follows is a series of posts on things that I think aren't covered in most documentation, or aren't easily available online in one place (I had to figure out these solutions myself). There is a caveat, though: a solution may not be the best possible one, may not be efficient, and there may be more elegant ways to do what it does. If that's the case, please let me know. The solutions do have one thing in common: they worked for me. Your mileage may vary.
The rest of this post is a rant, so feel free to move on.
A few observations on a Friday (with a tip of the hat to BusyBee)
That people who keep asking for Unit Tests don't have a clue about testing web-based applications.
That data-driven integration testing between multiple systems is damn hard to do, but boy is it satisfying when you actually find a defect because of it, and boy is it worth your time to develop these tests.
That open source tools are so much better than some of the paid commercial tools, except when it comes to presentation of the reports. Which, surprisingly, is also the difference between a Techie and a Manager.
That there still is no tool to truly perform visual tests on a website.
And that someone who could develop such a tool could make a lot of money.
And that someone won't be me.
That there is no time to test all the normal cases, much less the weird corner ones. And that, in hindsight, there would always have been plenty of time if we had just gotten our act together earlier.
That a successful test is one that fails in the local, development, test, QA, and UAT environments. That a test that succeeds in all of those environments but fails in production is a pain.
That a web-based test should never underestimate the ignorance of a user.
That developers do not make good testers. And that most developers are better testers than the official testing team, especially when we test other developers' code. And that scares me.
Friday, June 05, 2009
2 comments:
Interesting blurb. Some thoughts:
on test reporting (techie vs mgr) - it's not necessarily too hard to present the results in a spreadsheet, etc., but rather how to orally present them in a meeting with the given files.
on visual website testing - screenshot comparison tools are the closest we can get for now. Something like the Sikuli tool.
on developers as testers - that's interesting, would like to hear more of an elaboration on that one.
>not necessarily too hard to present the results in a spreadsheet
Ah, but precisely. We developers speak a different language - and we usually don't switch based on audience. So most of us could rattle off "I ran a performance test with 100 active sessions, 20 concurrent, for an average response time of 7.2345 seconds" - but few would know when to switch to "the pages are too slow for normal usage of the site".
>screenshot comparison tools are the closest we can get for now.
Exactly. But even small CSS changes or visual differences get flagged as errors - a naive pixel-by-pixel diff (sketched after this list) shows why. What you really need is:
a. Does the site work across multiple browsers and their versions? (Some tools exist for this, but they are expensive and slow.)
b. Did I break anything too badly when I made this CSS/JS fix (where some visual part does actually change)?
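To make the brittleness concrete, here is a minimal sketch of the pixel-by-pixel comparison most screenshot tools boil down to. It's an illustrative example only - not how Sikuli or any particular tool is implemented - and the file names are hypothetical.

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

// Naive screenshot comparison: counts the pixels that differ between a
// baseline capture and a new capture of the same page.
public class ScreenshotDiff {

    public static double differingPixelRatio(File baselineFile, File currentFile) throws IOException {
        BufferedImage baseline = ImageIO.read(baselineFile);
        BufferedImage current = ImageIO.read(currentFile);

        // Different dimensions: treat as completely different.
        if (baseline.getWidth() != current.getWidth() || baseline.getHeight() != current.getHeight()) {
            return 1.0;
        }

        long differing = 0;
        long total = (long) baseline.getWidth() * baseline.getHeight();
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                if (baseline.getRGB(x, y) != current.getRGB(x, y)) {
                    differing++;
                }
            }
        }
        return (double) differing / total;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical file names, for illustration only.
        double ratio = differingPixelRatio(new File("baseline.png"), new File("current.png"));
        System.out.printf("%.2f%% of pixels differ%n", ratio * 100);
    }
}
```

Because a one-pixel layout shift changes thousands of raw pixel values, any threshold low enough to catch real breakage also fires on harmless CSS tweaks - which is exactly the gap between points a and b above.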
>on developers as testers
A lot of developers love to show how smart they are - which makes them excellent testers when it comes to other people's code - you get to pretend your code would have been better.
The other problem, the way I see it, is that in most cases testing is not a profession people go into voluntarily; lack of domain knowledge and poorer salaries generally mean that a developer is actually in a better position to test than the tester on the team. Automated tests need a good level of coding skill, and most testers who have that skill prefer to become developers.
However, a developer is not a good candidate to test his own code - if I didn't check for null when I coded it, I likely won't write a test case to check for null (a small example of this blind spot is sketched below). The same goes for functionality - I interpret the requirement and I code it. If I didn't think to implement a certain piece of functionality, or I interpreted the requirement incorrectly, or I didn't consider all possible inputs, it's likely that I will carry this over to my tests as well. This holds whether you do agile, TDD, or waterfall.
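A minimal sketch of that blind spot, using a made-up formatName() helper and JUnit 4 - the names and code are hypothetical, purely to illustrate the point:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The author of formatName() never considered a null input,
// so the test they write never exercises it either.
public class NameFormatterTest {

    // Production code as the developer wrote it: no null check.
    static String formatName(String first, String last) {
        return last.toUpperCase() + ", " + first;
    }

    // The developer's own test mirrors the same assumption: inputs are never null.
    @Test
    public void formatsFirstAndLastName() {
        assertEquals("SMITH, John", formatName("John", "Smith"));
    }

    // The missing test a second pair of eyes is more likely to add:
    // formatName(null, null) would throw a NullPointerException,
    // but the original author had no reason to think of writing it.
}
```

The test passes and the developer feels covered, yet the null case never existed in their head, so it never made it into the tests either.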
But this was just a rant - all anecdotal experience, not evidence.