In developing mission-critical, real-time applications, the authors have found historical testing methods to be inadequate for personal computers (PC's). A major problem is the myth of "PC compatibility", i.e., the notion that all PC's are interchangeable and function identically regardless of the software. The authors propose that the interactions between PC hardware and software are extensive and complex, and require that system specification and testing be modified to include integrated hardware/software testing.
One of the primary reasons for the widespread use of the personal computer in commercial applications has been the openness of the system, allowing many competing vendors to develop hardware and software components. This competition has produced surprisingly rapid advances in computer technology and in system capabilities. However, the main goal of application developers continues to be producing applications that are usable, reliable, and cost-effective, both to develop and to support. The historical approach to application development has included independently specifying and testing the hardware and software components of the system. This has proven inadequate for mission-critical PC applications, largely because modern PC's are high-performance machines containing an array of specialized components. The open architecture of the PC, compared to the previous closed (proprietary) designs, allows each of these hardware and software components to come from multiple, independent sources, resulting in a never-ending stream of possible combinations. The differences between PC's and the complex interactions between the software and various hardware components require that specification and test procedures be modified to include rigorous testing of all hardware/software combinations. In addition, due to the continuing evolution of PC hardware and of software development tools, these test procedures must continue in some form throughout the product's life cycle. This paper describes the authors' experience in developing and distributing PC-based applications, emphasizing practical solutions and anticipated future improvements.
The industry relies on personal computers to provide data acquisition, increases in personal productivity, and assistance with data analysis. We are increasingly dependent on these devices and their associated software. We have all faced mixed emotions when upgrading our personal computers to new operating systems, new hardware, or both. We welcome the prospect of an analysis taking 20 to 50% less time, and in some cases of doing things not possible at all a few years ago without massive investments in workstations, mainframes, and specialized training. However, will our old applications still function in a predictable manner? Will we spend weeks of nonproductive time on the phone attempting to reach a living and knowledgeable technical support staff member, or be relegated to a combination of voice and fax-back hell? If the personal computer industry can build a device that operates 2500 percent faster than the original IBM PC, why can't all my applications still work? Sadly, the true answer is that computers provide the desired benefits only after a start-up learning curve. The game, therefore, becomes an effort to reduce this learning curve to a minimum, searching for the quickest answer to the question "The hardware has changed again; does everything still work?"