Computing Reviews
Comparing industry benchmarks for J2EE application server: IBM's Trade2 vs Sun's ECperf
Zhang Y., Liu A., Qu W. Conferences in Research and Practice in Information Technology (Proceedings of the Twenty-Sixth Australasian Computer Science Conference), Adelaide, Australia, 199-206, 2003. Type: Proceedings
Date Reviewed: Aug 19 2003

This paper attempts to compare two Java enterprise application server benchmark suites, IBM Trade2 and Sun ECperf, and illustrates the comparison with a case study on IBM WebSphere. The paper notes the difficulty of separating the effects of the underlying platform (hardware and operating system), the underlying middleware of the application server, and the application components. It also correctly highlights the high costs involved in evaluation exercises for alternative servers and applications. Section 2 describes Trade2 and ECperf, and mentions that they were designed by different software development processes for different application domains. Section 3 evaluates the two benchmark suites on WebSphere, but unfortunately with different Web stress tools for load generation, due to the different system architectures. Rather arbitrary metrics are chosen and measured for access modes such as Java database connectivity database (JDBC-DB) access, Enterprise JavaBeans database (EJB-DB) access, EJB-ALT-DB access, and so on. The paper also states that one benchmarking tool has proprietary access beans, but fails to mention that the sole supplier of the underlying "compliant" Java 2 Enterprise Edition (J2EE) middleware also has proprietary elements in its design, which may favor selected performance gains. The paper correctly mentions that there are multitudes of tunings and configurations for each of the benchmark suites.
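For readers unfamiliar with these access modes, the following sketch (not taken from the paper; the bean, table, and connection details are hypothetical) contrasts the two paths being measured: a JDBC-DB operation talks to the database directly through a driver, while an EJB-DB operation routes the same read through an enterprise bean, and so also exercises the application server's middleware layers.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;

// Hypothetical EJB 1.1-style remote interfaces for a quote entity;
// in a real suite these would extend EJBHome/EJBObject.
interface QuoteHome { Quote findByPrimaryKey(String symbol) throws Exception; }
interface Quote { double getPrice() throws Exception; }

public class AccessModes {

    // JDBC-DB mode: the benchmark thread queries the database directly
    // through a driver, bypassing the EJB container entirely.
    static double quoteViaJdbc(String symbol) throws Exception {
        // URL, credentials, and table name are illustrative placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/TRADEDB", "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                "SELECT price FROM quote WHERE symbol = ?")) {
            ps.setString(1, symbol);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getDouble(1) : 0.0;
            }
        }
    }

    // EJB-DB mode: the same read is routed through an enterprise bean,
    // so the measurement also includes the JNDI lookup, container
    // dispatch, and persistence overhead of the application server.
    static double quoteViaEjb(String symbol) throws Exception {
        InitialContext ctx = new InitialContext();
        QuoteHome home = (QuoteHome) ctx.lookup("java:comp/env/ejb/Quote");
        return home.findByPrimaryKey(symbol).getPrice();
    }
}
```

Because the EJB path's cost is dominated by server internals that differ between vendors, measurements in the two modes conflate exactly the platform, middleware, and application effects the review says are hard to separate.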

All in all, while the intentions are laudable, the heterogeneity of the technologies, applications, and benchmarking suites makes it very difficult to draw any conclusions, either on the methodology or on the respective qualities of the benchmarking suites. The most reasonable conclusion is that, at complexity levels like those of J2EE and J2EE servers, and unless specific user- and application-specific metrics and contexts are defined, it is very difficult to accept any benchmarking suite as generic for J2EE. Standardization bodies like the Object Management Group (OMG) have previously highlighted the need for rigorous benchmarking specification environments, both to determine whether a given benchmarking tool offers a partial or generic view, and to compare specific benchmarks.

Reviewer: Prof. L.-F. Pau, CBS
Review #: CR128145 (0312-1395)
Benchmarks (K.6.2)
Java (D.3.2)
Performance And Usage Measurement (K.6.2)
Servers (C.5.5)