The performance of a program depends on the number of instructions executed, the number of cycles it takes to execute each instruction, and how fast those cycles are. The instruction count depends on the program itself and on the compiler that translates it. The number of cycles per instruction depends on the compiler, the instruction set, and the microarchitecture. Finally, the cycle time depends on the microarchitecture and on the technology at our disposal.
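In equation form, this is the familiar decomposition of execution time:

\[ \text{Execution time} = \text{Instruction count} \times \text{CPI} \times \text{Cycle time} \]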
The CPI (Cycles Per Instruction) of a program can be expressed as the sum of the partial CPIs for each type of instruction appearing in that program, each weighted by the frequency of that instruction type.
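Writing \( f_i \) for the fraction of the program's instructions that are of type \( i \), and \( \text{CPI}_i \) for the cycles that type takes, this weighted sum is:

\[ \text{CPI} = \sum_i f_i \times \text{CPI}_i \]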
MIPS and MFLOPS claim to be program-independent, but they can be misleading. Most problems stem from the fact that they depend on the instruction set, and instruction sets across machines are not comparable. That is the main reason people resort to benchmarks.
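To see where the instruction-set dependence comes from, recall the usual definition of MIPS:

\[ \text{MIPS} = \frac{\text{Instruction count}}{\text{Execution time} \times 10^6} = \frac{\text{Clock rate}}{\text{CPI} \times 10^6} \]

The same program compiles to very different instruction counts on different instruction sets, so equal MIPS ratings do not represent equal amounts of useful work.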
SPEC seems to be the most popular benchmark suite nowadays, although people gripe about this one too. It consists of a set of integer and floating-point programs. One of the main problems with SPEC is that there is no easy way to summarize a computer's performance on the whole suite as a single number. Both the arithmetic and the geometric mean of the normalized run times have been used, but both have their problems. In particular, the arithmetic mean may give you different results depending on your normalization point, while the geometric mean rewards an improvement from 1 second to 0.5 seconds as much as an improvement from 10,000 seconds to 5,000 seconds.
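A small sketch of the normalization problem, using made-up run times for two programs on two hypothetical machines A and B: the arithmetic mean of normalized run times can make each machine look slower than the other depending on which one you normalize to, while the geometric mean gives the same verdict under either normalization.

    from math import prod

    # Made-up run times (seconds) for two programs on machines A and B.
    times_a = [1.0, 1000.0]
    times_b = [10.0, 100.0]

    def arith_mean(xs):
        return sum(xs) / len(xs)

    def geo_mean(xs):
        return prod(xs) ** (1.0 / len(xs))

    # Normalize to A: a ratio > 1 means B is slower on that program.
    b_over_a = [b / a for a, b in zip(times_a, times_b)]
    # Normalize to B instead.
    a_over_b = [a / b for a, b in zip(times_a, times_b)]

    print(arith_mean(b_over_a))  # 5.05 -> B looks ~5x slower than A
    print(arith_mean(a_over_b))  # 5.05 -> A also looks ~5x slower than B
    print(geo_mean(b_over_a))    # 1.0  -> geometric mean calls it a tie
    print(geo_mean(a_over_b))    # 1.0  -> same verdict either way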
Amdahl's law tells us that if we improve only one aspect of our computer or program, the unimproved part will end up dominating the performance of our system and limiting the overall speedup.
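In its usual form, if a fraction \( f \) of the original execution time is affected by an enhancement that makes that part \( s \) times faster, the overall speedup is:

\[ \text{Speedup} = \frac{1}{(1 - f) + \frac{f}{s}} \]

Even as \( s \to \infty \), the speedup is bounded by \( 1/(1 - f) \); for example, with \( f = 0.9 \) and \( s = 10 \) the overall speedup is only about 5.3x.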