Comment: Should Servers Be Overheated?

I had a rather interesting discussion with a Dell engineer, who shall stay anonymous, about servers and how upgrade trends are evolving.

Our informal chat revolved around the high-scalability, high-availability server market and how some customers are now looking to push their hardware beyond its recommended limits by voluntarily allowing operating temperatures to surge in data centres.

These limits are usually guidelines within which the servers are guaranteed to work, but just as overclockers accept the extra heat their processors generate in exchange for performance, some customers are considering letting temperatures within their data centres rise.

There are two main reasons for this. The first is maintenance: cooling servers is expensive, demands significant planning and is cumbersome.

The second is about getting more computing power per unit volume: new technologies like Nvidia's Tesla computing processor range can deliver significant performance boosts, but they also generate much more heat.

Servers, I've been told, are normally designed to last more than five years, but performance jumps between generations are often so significant that they justify replacing current hardware early.

So if a company plans to replace its hardware every 18 months anyway, it makes sense to evaluate whether it can afford to shorten the hardware's expected lifespan and save money on cooling.
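To make that trade-off concrete, here is a minimal back-of-envelope sketch in Python. The annual_cost function and every figure in it (prices, lifespans, cooling bills) are hypothetical illustrations of the reasoning, not numbers from Dell or anyone else.

```python
# Back-of-envelope sketch of the refresh-cycle trade-off.
# Every figure here is a hypothetical placeholder, not a quoted number.

def annual_cost(server_price, rated_life_years, refresh_years, cooling_per_year):
    """Yearly cost per server: hardware amortised over the time it is
    actually kept (rated life beyond the refresh cycle is wasted),
    plus the yearly cooling bill."""
    effective_life = min(rated_life_years, refresh_years)
    return server_price / effective_life + cooling_per_year

# Run cool: hardware rated for 5 years, but replaced every 18 months anyway.
cool = annual_cost(server_price=5000, rated_life_years=5.0,
                   refresh_years=1.5, cooling_per_year=1200)

# Run hot: heat cuts the rated life to ~2 years, but the cooling spend drops.
# Since 2 years still exceeds the 18-month refresh, nothing useful is lost.
hot = annual_cost(server_price=5000, rated_life_years=2.0,
                  refresh_years=1.5, cooling_per_year=400)

print(f"cool: ${cool:,.0f}/yr  hot: ${hot:,.0f}/yr")
# cool: $4,533/yr  hot: $3,733/yr
```

Under those made-up numbers, running hotter only pays off because the heat-shortened lifespan still outlasts the 18-month refresh cycle; if the heat killed servers faster than they were replaced, the sums would flip.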

Whether this will void your hardware's warranty remains to be seen, especially as it would be difficult for Dell to prove that the server's temperature was deliberately raised.
