Empirical Examination of A Collaborative Web Application

Christopher Stewart, Matthew Leventi, and Kai Shen
Department of Computer Science, University of Rochester

Online instructional applications, social networking sites, wiki-based web sites, and other emerging web applications that rely on end users to generate web content are increasingly popular. However, these collaborative web applications are still absent from the benchmark suites commonly used to evaluate online systems. This paper argues that collaborative web applications are unlike traditional online benchmarks and therefore warrant a new class of benchmarks. Specifically, request behaviors in collaborative web applications are determined by end-user contributions, which leads to qualitatively more diverse server-side resource requirements and execution patterns than in traditional online benchmarks. Our arguments stem from an empirical examination of WeBWorK, a widely used collaborative web application that allows teachers to post math or physics problems for their students to solve online. Compared to traditional online benchmarks (such as TPC-C, SPECweb, and RUBiS), WeBWorK requests are harder to cluster by their resource consumption and follow less regular patterns. Further, we demonstrate that using a WeBWorK-style benchmark would likely have led to different results in some recent research studies on request classification from event chains and type-based resource usage prediction.