Sheffield ICT Footprint Commentary

Summary

Annual use of electricity by ICT equipment at the University of Sheffield is nearly 9,000 MWh/y, corresponding to carbon dioxide emissions of about 3,800 tonnes. This is about 18% of the University's non-residential electricity consumption. Electricity use can be broken down into categories as in the following table. Note that the percentages do not sum to 100 due to rounding.

Category                      MWh/y    %
PCs                           4,160   48
Servers                       1,520   18
High performance computing    1,210   14
Imaging                         840   10
Networks                        690    8
Telephony                       200    2
AV                               60    1
Total                         8,680
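
As a quick cross-check of the summary figures, the short Python sketch below recomputes the total and the rounded percentages and back-calculates the carbon factor implied by the 3,800 tonne figure. The ~0.43 kg CO2 per kWh factor is an assumption (a typical UK grid figure for the period), not a value taken from this report.

    # Cross-check of the summary table. The grid carbon factor is not stated in
    # this report; ~0.43 kg CO2 per kWh (a typical UK figure for the period) is
    # assumed here purely to show how the 3,800 tonne figure could arise.
    categories_mwh = {
        "PCs": 4160, "Servers": 1520, "High performance computing": 1210,
        "Imaging": 840, "Networks": 690, "Telephony": 200, "AV": 60,
    }

    total_mwh = sum(categories_mwh.values())
    percentages = {k: round(100 * v / total_mwh) for k, v in categories_mwh.items()}
    print(total_mwh, sum(percentages.values()))      # 8680 MWh/y; 101% due to rounding

    ASSUMED_KG_CO2_PER_KWH = 0.43                    # assumption, not from the report
    co2_tonnes = total_mwh * 1000 * ASSUMED_KG_CO2_PER_KWH / 1000
    print(round(co2_tonnes))                         # ~3,700 tonnes, close to the quoted 3,800

    # 18% of non-residential electricity implies a campus non-residential total of roughly:
    print(round(total_mwh / 0.18))                   # ~48,000 MWh/y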

Background

An attempt has been made to estimate the electricity used by ICT at the University of Sheffield for HEEPI (See: http://www.heepi.org.uk/). This document sets out what is included, what is not included and the assumptions made.

I am very grateful for guidance from HEEPI staff and for the generous help of all the staff at the University of Sheffield who provided data for this report.

Caveats

Much of the survey was based on equipment connected to the University local area network, which covers all the main University buildings and almost all University staff, and roughly matches the area for which the University centrally pays the electricity bill. However, there are potential gaps (or overlaps) at the edges, particularly for staff in some research groups, commercial units and health service trusts.

Only the electricity used in powering equipment on campus has been estimated. Electricity use is estimated at the wall socket, so no allowance is made for distribution across the campus. No allowance has been made for the energy cost of equipment manufacture, disposal and use of consumables. No allowance has been made for the electricity used by:

- office equipment such as franking machines, money counters, laminating machines, etc.;
- the Print Unit or commercial suppliers for outsourced printing work;
- PCs incorporated into laboratory equipment;
- PCs and printers in student bedrooms;
- portable PCs brought on to the campus.

The full impact of ICT equipment on space cooling and heating has not been taken into account. Where equipment is in a dedicated cooled space such as a server room, an overhead of 40% for cooling has been added. However, where equipment is in a general space which may be heated in winter or cooled in summer, the impact of equipment heat output on space heating and cooling costs has not been assessed.

Estimates of electricity use are derived from a number of sources, including measurement of sample equipment, suppliers' specifications and measured electrical and air-conditioning load. While these should be broadly in line with one another, there is room for some bias in the results (for example, manufacturers' figures are perhaps 20% higher than those measured for an individual sample).

In many cases crude estimates had to be made, for example over the number of locally connected printers and how much they are used. While all reasonable efforts have been made to ensure these estimates are sound, for example by consulting support staff in a number of departments, local practice varies. In the end such estimates are little more than educated guesses, so not too much reliance should be put on the detail of the results. It is however hoped that the overall magnitude of electricity used is reasonable and robust.

Measurements were made with a Maplin power meter.[1] These units are simple to use and give repeatable results, except where the measured value is very low or changing significantly.

Servers

These are in server rooms and are run 24*7. High performance and grid computers have been separated out, because they will feature only in universities with significant science or engineering research.

The University has two central server rooms which are covered by uninterruptible power supplies (UPS) that are network connected and report instantaneous power (or current) figures via the simple network management protocol (SNMP). Some of the figures from the UPS are apparent power (VA) measurements, so higher than the real power in watts (W), but past measurements indicate that this discrepancy should be modest where equipment is active. The measurements were taken during the day in April 2008, when the weather was normal for the time of year and the equipment was under a normal load. Electricity use varies with the load on the servers, and electricity use for cooling also varies with temperature.
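
As an illustration of how such UPS readings can be collected, the sketch below polls a unit for its output power over SNMP. It assumes the UPS implements the standard UPS-MIB (RFC 1628) and that pysnmp is available; the hostname and community string are hypothetical, and many vendors expose these figures only through proprietary MIBs.

    # Illustrative only: read instantaneous output power from a UPS via SNMP.
    # Assumes the UPS supports the standard UPS-MIB (RFC 1628); real units may
    # use vendor MIBs instead, or report VA rather than W (see text).
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    UPS_HOST = "ups1.example.ac.uk"                   # hypothetical hostname
    COMMUNITY = "public"                              # hypothetical read community
    OUTPUT_POWER_OID = "1.3.6.1.2.1.33.1.4.4.1.4.1"   # upsOutputPower, output line 1, in watts

    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(COMMUNITY, mpModel=1),   # SNMPv2c
               UdpTransportTarget((UPS_HOST, 161)),
               ContextData(),
               ObjectType(ObjectIdentity(OUTPUT_POWER_OID)))
    )

    if error_indication or error_status:
        print("SNMP query failed:", error_indication or error_status.prettyPrint())
    else:
        for var_bind in var_binds:
            print(" = ".join(x.prettyPrint() for x in var_bind))   # OID = power in W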

The UPS covers both the air conditioning and the active equipment (including servers, storage, central phone and network equipment). These are “line-interactive” UPS units, which are understood to be very efficient. It was not possible to measure the air-conditioning separately, so the air-conditioning (together with UPS losses) is assessed as an overhead of 40%.

A number of departments have servers offering services such as mail, web and Windows name serving. Discussions were held with support staff from six departments, three of the larger installations were visited, and the Sun equipment maintenance contracts (though much will now be based on PC servers) were examined to assess the extent of facilities. However, there are probably twice as many server installations as this, so an estimate was made. Note further that these servers each have an IP address, so a corresponding adjustment has been made to the count of PCs.

Server spaces are air-conditioned, and often the simplest estimate of equipment load comes from the air-conditioning capacity. For example, one department has three units rated at 10kW, 10kW and 4.5kW and can manage on two out of the three, so its equipment is drawing about 14kW. An overhead of 40% has been added for powering the air-conditioning.
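
A minimal worked version of that example, taking the 14kW equipment load from the text and applying the 40% cooling overhead and the 24*7 running assumed throughout this report:

    # Departmental server room: equipment load inferred from cooling capacity,
    # plus the 40% overhead for powering the air-conditioning.
    equipment_kw = 14.0                     # roughly the 10kW + 4.5kW pair of units
    total_kw = equipment_kw * 1.40          # add 40% for cooling
    annual_mwh = total_kw * 8760 / 1000     # running 24*7
    print(round(total_kw, 1), round(annual_mwh))   # ~19.6 kW, ~172 MWh/y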

Storage

Much of the University's central data storage is held on a NetApp storage area network split across two machine rooms. The total storage available is nearly 50TB, of which 32TB is in use. The data is held on 154 SATA 3.5” disks (each about 10W) and 56 FCAL 3.5” disks (about 15W each), so in total perhaps 3kW (assuming an 80% efficient power supply). With four controllers, the total load would be around 5kW. Backup is to LTO3 autochanger tape decks. With associated servers, it is likely that the electricity used by storage is of the order of 10% of the 85kW total for central servers.
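
The disk arithmetic above can be reproduced as follows; the 2kW allowance for controllers and shelves is simply back-filled so that the total matches the ~5kW quoted:

    # Storage power, rebuilt from the disk counts.
    sata_disks, sata_w = 154, 10            # 3.5" SATA disks, ~10W each
    fcal_disks, fcal_w = 56, 15             # 3.5" FCAL disks, ~15W each
    psu_efficiency = 0.80                   # assumed 80% efficient power supplies

    disk_kw = (sata_disks * sata_w + fcal_disks * fcal_w) / psu_efficiency / 1000
    controllers_kw = 2.0                    # allowance so the total matches the ~5kW quoted
    storage_kw = disk_kw + controllers_kw

    print(round(disk_kw, 1), round(storage_kw, 1))   # ~3.0 kW of disks, ~5.0 kW in total
    print(round(100 * storage_kw / 85))              # ~6% of the 85kW central server load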

High Performance Computers

High performance computation is a separate requirement from general purpose server provision and has been split out. All the high performance equipment is in air-conditioned areas, so a 40% allowance has been made. The central computation equipment was running at about 60% load when tested. Typically, only the head node of a cluster will have a visible IP network address, so worker nodes would not show up in a count of network addresses.

It has been possible to estimate the electricity used per worker node from the total load. Centrally there is the equivalent of 220 (1U, dual-socket, AMD-based) servers at Sheffield, so each is using around 270W on average, with no allowance for air-conditioning. This roughly corresponds to measurements made on individual servers and to the Sun power calculator (dependent on memory configuration) for their X2200 AMD-based servers.[2]
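
Putting those central HPC numbers together (this covers the central clusters only; the 1,210 MWh/y category total in the summary table also includes departmental clusters):

    # Central HPC: per-node figure scaled back up, with the cooling allowance.
    node_equivalents = 220                  # equivalent 1U dual-socket AMD servers
    per_node_w = 270                        # average per node, excluding cooling

    it_load_kw = node_equivalents * per_node_w / 1000
    total_kw = it_load_kw * 1.40            # 40% air-conditioning allowance
    annual_mwh = total_kw * 8760 / 1000     # if run 24*7 at this load

    print(round(it_load_kw), round(total_kw), round(annual_mwh))
    # ~59 kW of IT load, ~83 kW with cooling, ~730 MWh/y for the central clusters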

A number of departments have local clusters. Estimates were made from the number of 1U, two-socket boxes (nodes). Note that there are a small number of four-socket boxes, and each of these was counted as two two-socket boxes. The most recent HPC servers at Sheffield use high efficiency (HE) processors, which save about 7% of electricity when running flat out.

High performance computing provision continues to grow at a significant rate, in contrast to general purpose server provision, with additional equipment on order at the time of the measurements.

Phones

Sheffield University has an Avaya Definity voice network with a mix of digital and analogue phones, generally powered from equipment in cabinets distributed over the campus and protected by UPS units. Around 12,000 phones are in use. The electricity used was estimated by adding the figures from the UPS reports. An allowance of 10% has been made for the extra electricity used by the UPS units themselves. There is a small amount of central phone equipment which is included under servers. Even taking the latter into account, it is estimated that less than 2W of electricity is used per phone, which is very modest.
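
The per-phone figure can be cross-checked against the 200 MWh/y telephony total in the summary table, assuming the phone power is drawn continuously:

    # Telephony: per-phone power implied by the 200 MWh/y category total,
    # assuming the phones draw power continuously (24*7).
    phones = 12000
    telephony_mwh = 200

    average_kw = telephony_mwh * 1000 / 8760
    watts_per_phone = average_kw * 1000 / phones
    print(round(average_kw, 1), round(watts_per_phone, 1))   # ~22.8 kW, ~1.9 W per phone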

No allowance was made for locally powered phones, answering machines and faxes. The University has a central voice mail system and the number of fax machines is believed to be modest (fewer than the number of photocopiers for example).

Network equipment

Hubs, switches, routers and wireless access points run 24*7. There are good records of installed equipment, which is highly standardised. Samples of the most commonly used equipment were measured, and the figure for fully connected but idle equipment was about 20% less than the manufacturer's published figure. The published figure was used except in one case, where it was three times the 4 watts measured. Five of the core switches are not in a main machine room but are covered by a UPS, and the figure from this, 600 watts each, was used. There is a small amount of central equipment (including a pair of central routers, YHMAN and CCTV equipment) which is not included here, but is instead included in the servers estimate.
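
For a sense of scale, the five UPS-monitored core switches alone account for only a small slice of the network total; the comparison below against the 690 MWh/y category figure is an illustrative back-calculation:

    # The five UPS-monitored core switches alone, running 24*7.
    core_switches, core_switch_w = 5, 600
    core_mwh = core_switches * core_switch_w * 8760 / 1_000_000
    print(round(core_mwh, 1))                      # ~26 MWh/y
    print(round(100 * core_mwh / 690))             # ~4% of the 690 MWh/y network total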

PCs

PCs certainly use the most electricity of any ICT equipment on campus, but estimating that use with any degree of accuracy is difficult. The number of PCs was estimated from the number of IP addresses allocated to devices that were neither network devices nor printers. An allowance was made for server equipment and also for the one department that allocates its own IP addresses. About 13,000 PCs are in use.

PC purchase records were examined over a period of 5 years from the three main suppliers of desktop machines to the University, which showed over 10,000 purchases. The suppliers report that most departments follow the central recommendations which are for modest configurations. Even though the recommendations are generally followed, there is a huge range of equipment in use, due to the rate of change of specifications. The purchase records include some portable PCs, but many more of these will have been purchased outside the central agreement. The figures suggest that PCs are used on average for about 6 years at Sheffield. PC makers do not generally give specifications for power use because of the many varied configurations. Instead estimates have been made by measuring typical equipment.

In recent months, Sheffield has specified energy efficient PCs alongside more conventional models. These have been bought for student areas, but they cost more than other models for the same performance and it is not yet known what the take-up from departments will be. PCs can exist in a number of states, each of which uses a different amount of electricity as illustrated by the following table.

Model                   Off (W)   Idle (W)   Intense (W)
Dell Optiplex SX280        1         58         100
HP dx5150S                 2         43          87
Macintosh Mini             2         22          37
Viglen VM4 Cube            3         61         108
Viglen Genie               1         92         149
Viglen EQ100               3         46          59
Dell Optiplex 210L         2         70         135
Viglen Genie Core Duo      2         65          86
IBM X40 Portable           2         27          37

Off is when the mains to the machine is powered up (and any power supply switch is on) but the machine has not been powered on at the front panel. Idle is after the PC has been booted up and is waiting for keyboard input. Intense is when the machine is actively doing arithmetic on a large dataset and writing results back to disk. Machines can also be in Standby (for wake-on-LAN) and Hibernate states, but the electricity used is generally similar to that in the Off state. In practice PCs are rarely intensely used (for example when starting up, calculating a very large spreadsheet, searching a huge file or rendering a 3-dimensional model), so the Idle figure is used for active power consumption. A figure of 2W has been used for all PCs when Off.
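
For reference, the unweighted average of the Idle column in the table above is about 54W; this simple mean is illustrative only, since the fleet is not weighted by model:

    # Unweighted average of the Idle column from the table of measured PCs (watts).
    idle_w = {
        "Dell Optiplex SX280": 58, "HP dx5150S": 43, "Macintosh Mini": 22,
        "Viglen VM4 Cube": 61, "Viglen Genie": 92, "Viglen EQ100": 46,
        "Dell Optiplex 210L": 70, "Viglen Genie Core Duo": 65, "IBM X40 Portable": 27,
    }
    print(round(sum(idle_w.values()) / len(idle_w)))   # ~54 W; Off is taken as a flat 2 W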

On the 3,000 or so managed PCs (including all central student provision), Sheffield has installed software to switch PCs off after they have been logged out for 20 minutes. In this case it is estimated that average use will be 40 hours per week; this allows both for office use (though office users may not log out) and for the fact that, although most student areas are in use for more than 40 hours per week, utilisation is lower out of term time.

For other PCs, it is expected that many users do not switch the machine off overnight or even at the weekend. Recently 162 unmanaged PCs (in Estates, Library, Materials and MBB) were monitored using Verdiem Surveyor.[3] The report says the annual electricity use per PC is 224.5kWh (at an approximate cost of £15). Verdiem claim that their software could reduce these costs by 26% if it were run actively to power down PCs instead of just monitoring activity. The electricity use figure was used indirectly to estimate the average number of hours that a PC is left switched on: about 70 hours per week, averaged over a year of 52 weeks. A lot of users must therefore be switching their PC off at night.
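
The "about 70 hours per week" figure can be back-solved from the Verdiem result. The ~60W "on" and 2W "off" powers used below are assumptions consistent with the table of measurements earlier, not values stated in the Verdiem report:

    # Back-solving the hours a typical unmanaged PC is left switched on.
    annual_kwh = 224.5
    on_w, off_w = 60.0, 2.0                # assumed "on" and "off" draws (see lead-in)
    hours_per_year = 8760

    # annual_kwh * 1000 = on_w * h_on + off_w * (hours_per_year - h_on)
    h_on = (annual_kwh * 1000 - off_w * hours_per_year) / (on_w - off_w)
    print(round(h_on / 52))                # ~69 hours per week left switched on

    # For comparison, a managed PC switched off outside 40 hours per week:
    managed_kwh = (40 * 52 * on_w + (hours_per_year - 40 * 52) * off_w) / 1000
    print(round(managed_kwh))              # ~138 kWh/y, versus 224.5 kWh/y unmanaged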

No estimate of the number of portable machines or high powered machines was made; instead it was assumed that all machines were modest office ones (as most PCs bought at Sheffield are). However, an allowance was made in the count of PC devices for those that are in fact servers run 24*7; these appear under servers. Inevitably there will be PCs of much higher specification than recommended, but the count also includes some low power devices such as print release stations.

For monitors, the default on modern operating systems is that they will power down to standby after 20 minutes, so the typical number of hours of active use will be about 40 hours per week. PCs bought in the last four years (about two thirds of the fleet) will have flat screens, which use about half the electricity of cathode ray tubes, perhaps 30W to 40W rather than 60W to 80W. Standby electricity use of modern flat screen monitors is also much better than that of CRTs, at 2W rather than about 5W. Monitors are a significant item, contributing over one third of the power used by PCs at Sheffield.
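
As a rough check that monitors plausibly contribute over a third of the PC category, the sketch below combines the figures above; the two-thirds/one-third split, the 35W and 70W mid-points and the 40 hours per week of active use are illustrative assumptions based on the text:

    # Rough check that monitors plausibly account for over a third of the PC total.
    pcs = 13000
    flat_share, crt_share = 2 / 3, 1 / 3         # flat screens vs CRTs
    flat_w, crt_w = 35, 70                       # mid-points of 30-40W and 60-80W
    flat_standby_w, crt_standby_w = 2, 5
    active_hours = 40 * 52                       # ~40 hours per week of active use

    avg_active_w = flat_share * flat_w + crt_share * crt_w
    avg_standby_w = flat_share * flat_standby_w + crt_share * crt_standby_w
    kwh_per_monitor = (avg_active_w * active_hours
                       + avg_standby_w * (8760 - active_hours)) / 1000

    fleet_mwh = pcs * kwh_per_monitor / 1000
    print(round(kwh_per_monitor), round(fleet_mwh))   # ~117 kWh/y each, ~1,500 MWh/y overall
    print(round(100 * fleet_mwh / 4160))              # ~37% of the 4,160 MWh/y PC category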