The lack of standardization in the client-side technologies of cascading style sheets (CSS), Hypertext Markup Language (HTML), and scripting languages (such as JavaScript) means that Web application developers face the almost impossible task of ensuring that applications display and run as intended in several browsers. End users are often forced to rely on more than one browser. WebDiff is a research tool that automates the detection and reporting of cross-browser differences, easing the burden of checking for such differences manually. First, the tool performs a structural analysis that compares the document object model (DOM) trees produced by different browsers. Second, a visual analysis processes the results of the structural analysis: DOM nodes that are matched across browsers are checked for positional shifts and changes in visibility, size, and appearance. Algorithm listings and an accompanying discussion provide enough detail to allow an independent replication of the approach.
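To give a flavor of the two-phase idea, the following is a minimal sketch only, not the authors' actual matching algorithm: the Node structure, the lockstep sibling matching, and the five-pixel tolerance are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

# Assumed node structure: tag name, rendered geometry, visibility, children.
@dataclass
class Node:
    tag: str
    x: int = 0
    y: int = 0
    width: int = 0
    height: int = 0
    visible: bool = True
    children: list = field(default_factory=list)

def match_nodes(a: Node, b: Node, pairs: list) -> None:
    """Structural phase (simplified): walk the two DOM trees in lockstep and
    pair up nodes that carry the same tag at the same sibling position."""
    if a.tag != b.tag:
        return
    pairs.append((a, b))
    for ca, cb in zip(a.children, b.children):
        match_nodes(ca, cb, pairs)

def visual_differences(pairs, tolerance: int = 5):
    """Visual phase (simplified): for each matched pair, report visibility
    mismatches, positional shifts, and size changes beyond a pixel tolerance."""
    issues = []
    for a, b in pairs:
        if a.visible != b.visible:
            issues.append((a.tag, "visibility differs"))
        elif abs(a.x - b.x) > tolerance or abs(a.y - b.y) > tolerance:
            issues.append((a.tag, "positional shift"))
        elif abs(a.width - b.width) > tolerance or abs(a.height - b.height) > tolerance:
            issues.append((a.tag, "size change"))
    return issues

# Example: the same page rendered by two browsers, with one element shifted.
page_a = Node("body", children=[Node("div", x=10, y=10, width=200, height=50)])
page_b = Node("body", children=[Node("div", x=10, y=40, width=200, height=50)])

pairs: list = []
match_nodes(page_a, page_b, pairs)
print(visual_differences(pairs))   # [('div', 'positional shift')]
```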
An empirical study that applied WebDiff to nine randomly selected Web pages is reported. Table 3 shows that WebDiff found 121 true issues of different types, along with 21 false positives. WebDiff could be further engineered to reduce false positives; the authors suggest that increased sampling would improve the detection of variable elements, such as advertisements, which change from one page load to the next.
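As a rough illustration of that sampling suggestion, and not the paper's implementation, one could load the same page several times in a single browser and flag the nodes whose geometry is unstable, so that they are excluded from the cross-browser comparison; the path encoding and tolerance below are assumptions.

```python
def variable_paths(samples: list[dict], tolerance: int = 5) -> set[str]:
    """Each sample maps a node path (e.g. 'body/div[2]') to its rendered
    box (x, y, w, h) from one load of the page in the same browser.
    Paths whose box varies between loads, or that disappear, are flagged
    as variable content (e.g. rotating advertisements)."""
    unstable = set()
    for path in set().union(*samples):
        boxes = [s[path] for s in samples if path in s]
        if len(boxes) < len(samples):
            unstable.add(path)          # node absent in some loads
            continue
        base = boxes[0]
        if any(max(abs(a - b) for a, b in zip(box, base)) > tolerance
               for box in boxes[1:]):
            unstable.add(path)          # geometry changes between loads
    return unstable

# Two loads of the same page: the second div changes height (an ad slot),
# so it is flagged and could be skipped during cross-browser diffing.
load1 = {"body/div[1]": (10, 10, 200, 50), "body/div[2]": (10, 70, 300, 250)}
load2 = {"body/div[1]": (10, 10, 200, 50), "body/div[2]": (10, 70, 300, 600)}
print(variable_paths([load1, load2]))   # {'body/div[2]'}
```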
My only criticism is that no attempt was made to gauge the severity of the 121 true issues. Would a developer act on all, some, or none of these issues? Despite this criticism, I strongly recommend this paper to lobbyists for technology standardization and those interested in Web application development.