Why are those tests not good enough?

We would like to emphasize that the tests we performed are NOT what we would call professional tests of the anti-virus products mentioned. Here are some reasons why.

1) Many of the products tested are integrated systems, including resident components, integrity checkers, and so on. ONLY the scanner part of the product was tested.

2) The resident scanners were not tested - only the on-demand ones were.

3) The ability of the scanners to detect viruses in memory was not tested.

4) No special tests were made on some important subclasses of viruses, such as only the polymorphic viruses (on a reasonably large number of replicants), only the viruses known to be in the wild, and so on.

5) The virus collection used is far from ideal - it does not contain enough samples of each kind of file infectable by each of the viruses. We mean not just COM/EXE/SYS/BAT replicants, but also such things as very small and very large files, EXE files with an internal overlay structure, and so on.

6) No testing has been done of the disinfection capabilities of the scanners.

7) No tests have been done to see how well the scanners perform when a stealth virus is resident in memory.

8) No attempt has been made to evaluate the user interface of the scanners, although when it is annoyingly awkward, this is mentioned. We believe that any normal user, or at least a professional reviewer, is able to test this, and have chosen to concentrate our efforts on testing the anti-virus part of the scanners - something that only a competent anti-virus researcher is able to do.

Therefore, the results obtained have only a limited value.

What are those tests good for, then?

Regardless of the drawbacks mentioned above, we believe that our tests are of some value.

1) Probably the most valuable part is the naming cross-reference. It can help the producers of the scanners to become compliant with the CARO virus naming scheme, and it can be used by users to figure out exactly which virus they have after their favorite scanner reports some name.

2) The tests do provide some overall impression of how good a scanner is at detecting viruses. If the results show that scanner X has a detection rate of 97.5%, while scanner Y has a detection rate of 96%, this does not necessarily mean that the latter is worse than the former. It just means that the former has shown slightly better results on this particular virus collection, and that both scanners have a very high detection rate. However, if the results show that scanner X is excellent, while scanner Y is total junk, then those results are pretty reliable. As Dr. Alan Solomon says, to pick a good virus scanner, you don't need to know whether it detects 96% or 97% of the known viruses - you need to know whether it is pretty good or very bad. Most of the existing scanners can be very easily divided into those two categories.
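To make the detection-rate figures above concrete, here is a minimal sketch (in Python, which is ours, not part of any tested product; the function and sample names are purely illustrative) of how such a percentage is computed from per-sample scan results:

```python
# Illustrative only: compute a scanner's detection rate over a test
# collection. "results" maps a sample name to True if the scanner
# flagged that sample as infected. All names here are hypothetical.

def detection_rate(results):
    """Return the percentage of samples the scanner detected."""
    if not results:
        return 0.0
    detected = sum(1 for hit in results.values() if hit)
    return 100.0 * detected / len(results)

# Example: a scanner that detects 39 out of 40 samples.
samples = {f"sample_{i}": True for i in range(39)}
samples["sample_39"] = False  # the one missed sample
print(round(detection_rate(samples), 1))  # 97.5
```

Note that a figure like 97.5% is only meaningful relative to the particular collection scanned, which is exactly the caveat made above.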