Writing device drivers is no easy task. Speak to a few device driver developers, and you’ll soon come across horror stories, such as signaling error codes in Morse code through an on-device LED. Thus, any attempt to ease the process is to be applauded.
The paper describes a methodology for discovering the states through which a device driver transitions by monitoring the calls made into the driver and the replies it returns. The logging is achieved by inserting a filter driver between the input/output (I/O) manager and the driver under test; this filter driver sends a record of every I/O request packet (IRP) to the DebugView kernel debugger. The authors profile five Windows XP and two Windows Vista device drivers using 20 different workload generators, ranging from the simple, such as disabling the device driver, to the extreme, such as running burn-in tests concurrently with commercial benchmarking tools.
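The filter-driver arrangement can be illustrated, by analogy, as a user-mode logging proxy. The sketch below is mine, not the authors’ code: the class and function names are hypothetical, and the kernel-side specifics (DbgPrint output captured by DebugView) are reduced to an in-memory log.

```python
# A user-mode analogy of the paper's filter driver: a logging proxy that
# sits between the caller (standing in for the I/O manager) and the real
# dispatch routine (standing in for the device driver), recording every
# request and its completion. All names here are hypothetical.
import time

class Irp:
    """A stand-in for a Windows I/O request packet."""
    def __init__(self, major_function, payload=None):
        self.major_function = major_function  # e.g. "IRP_MJ_READ"
        self.payload = payload

class FilterDriver:
    """Wraps a dispatch routine and logs each IRP's receipt and
    completion, much as the authors' filter driver forwards records
    to DebugView."""
    def __init__(self, dispatch):
        self.dispatch = dispatch
        self.log = []  # (timestamp, event, major function)

    def handle_irp(self, irp):
        self.log.append((time.monotonic(), "received", irp.major_function))
        status = self.dispatch(irp)  # pass through to the real driver
        self.log.append((time.monotonic(), "completed", irp.major_function))
        return status

def dummy_dispatch(irp):
    """Placeholder for the driver under test."""
    return "STATUS_SUCCESS"

filt = FilterDriver(dummy_dispatch)
filt.handle_irp(Irp("IRP_MJ_READ"))
filt.handle_irp(Irp("IRP_MJ_WRITE"))
for _, event, major in filt.log:
    print(event, major)
```

The essential property, preserved here, is that the filter is transparent: the caller and the wrapped driver behave exactly as before, while a timestamped record of every request and reply accumulates on the side.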
Because the authors assume no access to the drivers’ source code, a state is defined as the interval between receiving an IRP and completing it; should a second IRP arrive before the first completes, a new state is defined. From this data, the authors build a state transition graph annotated with the time spent in each state and the frequency of transitions to other states.
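To make the state definition concrete, such a graph might be assembled from the log as follows. This is my reconstruction under stated assumptions, not the authors’ implementation: I identify a state with the set of in-flight IRPs (the paper does not give its exact encoding), and I assume the log reduces to timestamped arrival and completion events.

```python
from collections import Counter, defaultdict

def build_state_graph(events):
    """Reconstruct a state transition graph from an IRP log.

    `events` is a time-ordered list of (timestamp, kind, irp_id, major),
    where kind is "arrive" or "complete". A new state begins whenever an
    IRP arrives or completes; a state is identified here by the set of
    in-flight IRPs (a hypothetical encoding).
    """
    in_flight = {}                      # irp_id -> major function
    state = frozenset()                 # idle: no IRPs in flight
    last_ts = None
    time_in_state = defaultdict(float)  # state -> total time spent
    transitions = Counter()             # (state, state) -> frequency
    for ts, kind, irp_id, major in events:
        if last_ts is not None:
            time_in_state[state] += ts - last_ts
        if kind == "arrive":
            in_flight[irp_id] = major
        else:
            in_flight.pop(irp_id, None)
        new_state = frozenset(in_flight.items())
        transitions[(state, new_state)] += 1
        state, last_ts = new_state, ts
    return time_in_state, transitions

# A four-event log: a read arrives, a write arrives before the read
# completes (forcing a new state, per the paper's definition), then
# both complete.
log = [
    (0.0, "arrive", 1, "IRP_MJ_READ"),
    (1.0, "arrive", 2, "IRP_MJ_WRITE"),
    (3.0, "complete", 1, "IRP_MJ_READ"),
    (4.0, "complete", 2, "IRP_MJ_WRITE"),
]
time_in_state, transitions = build_state_graph(log)
print(len(time_in_state), "states,", sum(transitions.values()), "transitions")
# prints "3 states, 4 transitions"
```

Note how the overlapping read and write produce a distinct third state (both IRPs in flight), which is exactly the behavior the authors’ definition is designed to capture.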
Having real-world usage profiles for an in-development device driver is valuable. However, I disagree with the authors’ suggestion that the profiles can be used to reduce testing effort by covering only a certain percentage of the visited states. A driver that works 95 percent of the time, by definition, fails five percent of the time. Even a 0.01 percent failure rate would be unacceptable for a driver that cycles through all possible states every few hours of use. This is particularly acute when the level of granularity is an individual IRP request.
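The arithmetic behind this objection is worth making explicit. With illustrative numbers of my own choosing (not figures from the paper), even a tiny per-IRP failure probability compounds quickly at this granularity:

```python
# Back-of-the-envelope compounding of a small per-IRP failure rate.
# Both numbers below are illustrative assumptions, not from the paper.
p_fail = 0.0001   # 0.01 percent chance that any single IRP is mishandled
irps = 10_000     # a modest IRP volume; often just seconds of real use
p_at_least_one = 1 - (1 - p_fail) ** irps
print(f"P(at least one failure over {irps} IRPs) = {p_at_least_one:.2f}")
# prints "P(at least one failure over 10000 IRPs) = 0.63"
```

At IRP granularity, "99.99 percent reliable" still means a failure is more likely than not within ten thousand requests, which is why partial state coverage is a weak testing target.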