Operational Analysis of Parallel Servers

Terence Kelly (HP Labs), Kai Shen (U. of Rochester), Alex Zhang (HP Labs), Christopher Stewart (U. of Rochester)

Multicore processors promise continued hardware performance improvements even as single-core performance flattens out. However, they also enable increasingly complex application software that threatens to obscure application-level performance. This paper applies operational analysis to the problem of understanding and predicting application-level performance in parallel servers. We present operational laws that offer both insight and actionable information based on lightweight, passive, external observations of black-box applications. One law accurately infers queueing delays; others predict the performance implications of expanding or reducing capacity. The former enables improved monitoring and system management; the latter enable capacity planning and dynamic resource provisioning to incorporate application-level performance in a principled way. Our laws rest upon a handful of weak assumptions that are easy to test and widely satisfied in practice. We show that the laws are broadly applicable across many practical CPU scheduling policies. Experimental results on a multicore network server in an enterprise data center demonstrate the usefulness of our laws.
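For readers new to operational analysis, the following is a minimal background sketch of two classical operational laws (the utilization law and Little's law in operational form), written in LaTeX. These are standard textbook results stated here for orientation only, not the specific laws this paper derives, and the symbols are introduced purely for illustration.

% Classical operational laws over an observation interval of length T.
% Measured quantities: C = completions, B = server busy time,
% W = accumulated job-seconds in the system.
\begin{align*}
  X &= C/T              && \text{throughput: completions per unit time}\\
  U &= B/T = X \cdot S  && \text{utilization law, with mean service demand } S = B/C\\
  N &= W/T = X \cdot R  && \text{Little's law, with mean residence time } R = W/C
\end{align*}

Little's law in this operational form assumes approximate flow balance over the interval, i.e., arrivals roughly equal completions; laws of this kind rely only on externally measurable counts and times, which is the style of assumption the abstract describes.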