Hacker News

It'd be awesome if they provided a comparison with other tools on the same hardware, like, "on an AWS m4.xlarge instance we were able to achieve 160M ops/sec when the dataset fit into memory, whereas Redis only did X."

Lacking that I agree it's a pretty meaningless stat.



Check out the linked paper for a detailed performance comparison.


Don't do benchmarks on a shared host.


If that’s your production environment, then that’s where you should run them.


That's not why you don't run them on a shared host.

It's because every other tenant on the machine is going to make runs of the same benchmark unpredictable, and results will likely vary greatly throughout the day.

Even taking multiple runs of each benchmark isn't sufficient, because you don't know the usage patterns of other tenants.


You're making it sound like performance on such hosts is unknowable, which isn't really accurate. 'Multiple runs of each benchmark' is vague enough to be potentially insufficient in just about any environment, to boot.


Variance matters though. Test where you can reasonably be sure about low variance.


Sure. But it can be measured and accounted for. The idea that you can't or shouldn't benchmark such environments seems weird, given that they're pretty popular.
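To make "measured and accounted for" concrete: a minimal sketch of quantifying run-to-run variance by repeating a benchmark and reporting the coefficient of variation. The workload here is a toy stand-in, not any real tool's benchmark:

```python
import statistics
import time

def bench_once(n=1_000_000):
    # Toy workload standing in for the real benchmark.
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i
    elapsed = time.perf_counter() - start
    return n / elapsed  # throughput in ops/sec

def summarize(runs=10):
    # Repeat the benchmark and report mean throughput plus the
    # coefficient of variation (stddev / mean). On a noisy shared
    # host the CV tells you how trustworthy the mean is.
    samples = [bench_once() for _ in range(runs)]
    mean = statistics.mean(samples)
    cv = statistics.stdev(samples) / mean
    return mean, cv

mean, cv = summarize()
print(f"mean {mean:,.0f} ops/sec, CV {cv:.1%}")
```

A high CV across runs (or across same-type instances) is itself a result: it bounds how much any single headline number means on that hardware.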


No benchmark means anything on an EC2 shared instance (or probably any other cloud instance) because you don't know what else is running on the machine.


What about running the benchmark multiple times on instances of the same type? I get that it would be noisy but lots of workloads run on shared instances so it’s a useful measuring stick in that way.



