Despite a huge push for more testing of applications, online or off, there are surprisingly few standardized systems for doing it. This is not to say there is any shortage of standard techniques; what is missing is a unifying, by-the-book approach that yields solid results. A lack of standards, however, is no reason not to test your web application. Yet without clear guidance, most providers simply try things at random, wasting a lot of effort and still ending up with a poor-quality product. Here is a general overview of the five basic approaches to testing.

Monkeys in a closet

I'll start with this one as it is the most common, and the most misguided, approach. You deploy your application and let a crowd of test subjects use the site and look for problems. The idea is that more eyeballs and cursors on the site will weed out defects. In practice this does produce a lot of defect reports, but it usually fails to detect serious defects.

The trouble starts with the individuals hired as testers. At the low end of the pay scale these are people looking for a stepping-stone job, or simply unqualified for any other position. Sticking a group of them together in a room is not going to get you good results. Parts of the site will likely go untested, while a lot of effort is spent on permutations of tests which are essentially the same. Furthermore, you are likely to create resentment in other teams, as the quality of the defect reports is generally poor.

Let the programmers do it

First in line to ensure proper operation are the people responsible for developing the web application. These are the programmers who integrate the various modules and write custom code for your unique features. They know the system well and are in the best position to perform certain kinds of tests.

A few common problems arise here. Many programmers, if not most, lack proper training in testing techniques. Furthermore, many teams have a poor work ethic and consider proper testing beneath them. Some will simply not understand why they should be testing the website at all; it is, after all, just a bunch of modules written by others.

Web applications, even smaller interactive websites, need to be treated like any other software development. This area is fortunately well documented. Your programmers should be comfortable with unit tests and have a readily available test environment. They can't do all the testing though: programmers are not a good choice for use-case or usability testing. Let them focus on unit and functional tests.
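As a minimal sketch of the kind of unit test meant here, consider a hypothetical `slugify` helper that turns article titles into URL-safe slugs (the function and its behavior are assumptions for illustration, not part of any particular framework):

```python
import re
import unittest

def slugify(title):
    """Convert an article title to a URL-safe slug (hypothetical helper)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

class SlugifyTest(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Two   Words  "), "two-words")

if __name__ == "__main__":
    unittest.main()
```

Tests like these are cheap to run on every build, which is exactly why they belong with the programmers rather than with a separate test team.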

Manual feature testing

Manual testing is something that simply can't be avoided. The standard approach is to write a test plan and expand it with test cases. Any development method will have at least something to say on this topic, and there are several specialized references on testing as well, though many say surprisingly little about the actual art of writing a good test case.

And it is the lack of good test cases which is the pitfall of testing. Team members without strong experience in writing test cases will produce results little better than the monkey scenario. Testing is a balancing act of priorities, a constant choice between breadth and depth. It is simply not possible to test everything, so a good test designer must have a method for choosing which tests to execute. Unfortunately there aren't many, if any, well-documented and complete methods.
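To make the idea of a well-formed test case concrete, here is one possible structure, sketched as a Python dataclass. The field names and the checkout scenario are purely illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    title: str
    preconditions: list   # state required before the steps begin
    steps: list           # (action, expected result) pairs
    priority: str = "medium"

checkout_empty_cart = TestCase(
    case_id="TC-042",
    title="Checkout is blocked when the cart is empty",
    preconditions=["User is logged in", "Cart contains no items"],
    steps=[
        ("Navigate to the cart page", "An 'empty cart' message is shown"),
        ("Click the checkout button", "Checkout is disabled or shows an error"),
    ],
    priority="high",
)
```

The point is not the format but the discipline: explicit preconditions, one expected result per step, and a priority that forces the breadth-versus-depth choice out into the open.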

Even without clear tactics, there is no way you can avoid manual testing and still expect to produce a high-quality site. Your biggest problem becomes finding somebody capable of writing good test cases.

Record and play automation

It sounds very appealing to simply run through the website once, record it, and then play the recording back for every future release; when something goes wrong the test framework will tell you. The tools require very little training, since the tester is simply using the website. One sits down, takes five to ten minutes to record the script, then spends another two to twenty hours manually tweaking it! If you don't tweak it, even the slightest UI code change can cause it to break. Worse, when you decide to revamp the interface, your library of recorded scripts will likely be completely useless and you will have to start again from scratch.
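To see why recorded scripts are so fragile, here is a sketch of what one typically looks like once exported: a flat list of hard-coded selectors and actions. The selector strings and the driver interface are assumptions for illustration; the point is that renaming `#loginBtn` or reordering the form breaks the recording with no single place to fix it:

```python
RECORDED_STEPS = [
    ("click", "css=#header > div:nth-child(3) > a"),           # the login link
    ("type",  "css=#loginForm > input:nth-child(1)", "alice"),
    ("type",  "css=#loginForm > input:nth-child(2)", "secret"),
    ("click", "css=#loginBtn"),
    ("assert_text", "css=.welcome", "Welcome, alice"),
]

class LogDriver:
    """Stub driver that just logs actions (stands in for a real browser)."""
    def __init__(self):
        self.log = []
    def click(self, selector):
        self.log.append(("click", selector))
    def type(self, selector, text):
        self.log.append(("type", selector, text))
    def assert_text(self, selector, text):
        self.log.append(("assert_text", selector, text))

def replay(steps, driver):
    """Replay recorded steps against a driver exposing click/type/assert_text."""
    for action, selector, *args in steps:
        getattr(driver, action)(selector, *args)
```

Every positional selector like `div:nth-child(3)` is a silent dependency on the current markup, which is exactly why the two-to-twenty hours of tweaking never really ends.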

Putting that aside, you still have the problem that it is mostly the monkeys in the closet writing the scripts. That means none of the recorded scripts will be of any higher quality than what the monkeys could achieve manually; indeed, I'd venture the results will be even worse. Ensure you have good manual test cases before moving on to automation.

Also be willing to accept that the recording feature of any testing tool is simply a minimal first step. Don't let the sales people mislead you into thinking it is the holy grail of testing. The tester will need to manage the recorded script and write additional code manually. Many tools are severely lacking in this area.

Scripted automation

Rather than rely on record-and-playback techniques, you should bring a real programming mentality to your automation effort. That is, have your testers maintain a series of testing scripts which are properly implemented as modules and functions. Hand-crafted scripts allow for dynamic testing, where every run of the script tests the application slightly differently than the last. In a well-layered system these scripts will remain usable even after UI changes.
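The layering described above can be sketched roughly as follows, loosely in the spirit of the page-object idea: test logic talks to a `LoginPage` object, and only that object knows the selectors, so a UI change is fixed in one place. The driver interface, selectors, and banner text are all assumptions made so the sketch runs standalone:

```python
import random

class FakeDriver:
    """Stand-in for a real browser driver, so the sketch is runnable."""
    def __init__(self):
        self.fields = {}
    def fill(self, selector, value):
        self.fields[selector] = value
    def click(self, selector):
        pass
    def text_of(self, selector):
        return "Welcome, %s" % self.fields.get("#user", "")

class LoginPage:
    # The only place in the test code that knows the selectors.
    USER, PASSWORD, SUBMIT = "#user", "#password", "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USER, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self.driver.text_of("#banner")

def test_login_dynamic():
    # Dynamic testing: vary the input slightly on every run.
    user = random.choice(["alice", "bob", "carol"])
    banner = LoginPage(FakeDriver()).login(user, "secret")
    assert banner == "Welcome, %s" % user
```

When the login form is redesigned, only the `LoginPage` selectors change; every script built on top of it keeps working, which is the payoff over recorded scripts.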

Having a fully automated test system which covers all the major use cases of the website is a major boon to product quality. It allows you to deploy changes with confidence that nothing critical will break. Automated testing is a lot of work, both in initial programming and in maintenance, and it cannot cover the same depth and breadth as a good manual tester in the same time frame. Thus it is not enough on its own, especially for new features: automation and manual testing must be used together.

The critical point here is that script writing is essentially programming. This leaves you the daunting task of finding a qualified programmer who is also capable of writing quality test cases. Such people are not easy to find.