My dear reader, how are you? Peace be upon you (السلام عليكم).
… ‘Eat and drink of what Allah has provided, and commit not iniquity in the earth, creating disorder.’ (Holy Quran 2:61)
… ‘O children of Adam! look to your adornment at every time and place of worship, and eat and drink but exceed not the bounds; surely, He does not love those who exceed the bounds.’ (Holy Quran 7:32)
This post presents challenges and insights into reducing the energy consumption of computing, and efforts towards the EU vision of becoming climate-neutral by 2050 – an economy with net-zero greenhouse gas emissions.
Just like us humans, computing platforms need their food to survive and operate! By 2030, energy-hungry datacenters could eat up to 20% of global electricity production. If current trends persist, datacenter operations will be drawing 25% of Ireland's national grid capacity by 2025. The environmental concerns are serious: datacenters' carbon emissions already rival those of the airline industry, and the electricity they waste worldwide could power the whole of New York City twice over.
Figure 1: Datacenter Energy Consumption
Since its origins, the High-Performance Computing (HPC) community has pursued the objective of increasing platform performance for executing an application or set of applications. Figure 2 shows the top supercomputing platforms and the growth of their performance (in petaflops) over time. The U.S. Summit was the fastest supercomputing platform from 2018, with a peak performance of 200 petaflops. Since 2020, however, the Japanese supercomputer Fugaku has led the race with over 500 petaflops of peak performance. The next ambitious objective for an HPC platform is to achieve exascale performance.
Figure 2: Top Supercomputing Platforms (Adapted from )
On the other hand, according to a US DOE Office of Science report, the excessive energy consumption of HPC systems is now a principal and mainstream challenge for the scientific community. The energy consumption of Information and Communication Technologies (ICT) systems and devices was reported to be about 5,300 terawatt-hours (TWh) in 2019. If the trend continues, a worst-case forecast puts ICT at up to 50% of global electricity consumption in 2030, contributing 23% of greenhouse gas emissions – a serious environmental concern in its own right.
Figure 3: Worst-case forecast for ICT Energy Consumption
As William Ruckelshaus well said, nature provides a free lunch, but only if we control our appetites. To control the datacenters' appetites, energy is optimized through innovations at both the hardware and software levels. We focus exclusively on the software.
Key approaches for software-based energy consumption optimization include:
- System-level energy optimization such as Dynamic Voltage and Frequency Scaling (DVFS), Dynamic Power Management (DPM), and energy-aware scheduling.
- Application-level energy optimization techniques that use application-level parameters such as workload sizes. This class of techniques is comparatively understudied and forms the focus of this research.
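To make the system-level idea concrete, here is a minimal sketch (my own illustration, not code from this research) of why DVFS saves energy. It uses the classic CMOS approximation P_dyn ≈ C·V²·f together with the simplifying assumption that voltage scales linearly with frequency, so per-task energy for compute-bound work falls roughly as f²:

```python
# Sketch (illustrative, not vendor code): dynamic CPU power under DVFS.
# Classic CMOS approximation: P_dyn ~ C * V^2 * f. Assuming V scales
# linearly with f (a simplification), power goes as ~ f^3 while the
# runtime of a compute-bound task goes as ~ 1/f, so energy ~ f^2.

def dvfs_energy(freq_ghz: float, cycles: float, capacitance: float = 1.0,
                volts_per_ghz: float = 0.4) -> float:
    """Energy (arbitrary units) to retire `cycles` cycles at `freq_ghz`."""
    volts = volts_per_ghz * freq_ghz             # simplistic V ~ f assumption
    power = capacitance * volts ** 2 * freq_ghz  # P = C * V^2 * f
    runtime = cycles / (freq_ghz * 1e9)          # seconds for compute-bound work
    return power * runtime                       # E = P * t

if __name__ == "__main__":
    e_fast = dvfs_energy(3.0, 1e9)
    e_slow = dvfs_energy(1.5, 1e9)
    # Halving the frequency quarters per-task energy under this model.
    print(f"{e_fast / e_slow:.1f}")  # -> 4.0
```

In practice the voltage–frequency relationship is nonlinear, static power does not scale away, and slower execution extends the interval over which the platform draws idle power – which is exactly why real DVFS and energy-aware scheduling policies are harder than this toy model suggests.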
However, before optimization comes measurement, and the three mainstream measurement methods are (1) power meters, (2) on-chip hardware sensors, and (3) model-based software sensors. The first method is accurate at the system level but provides no fine-grained, component-level breakdown of energy consumption. Our published background research [2, 3] showed the second and third methods to be inaccurate and unreliable.
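As an illustration of the second method, the sketch below reads an on-chip energy counter through the standard Linux powercap interface for Intel RAPL (the sysfs path is the documented powercap location, but it only exists on supported Linux/Intel systems, so the reader degrades gracefully). The helper names are my own:

```python
# Hedged sketch: sampling an on-chip energy counter via Linux powercap (RAPL).
# /sys/class/powercap/intel-rapl:0/energy_uj is the package-domain counter on
# supported Linux/Intel systems; elsewhere the functions simply return None.
import os

RAPL_PKG = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path: str = RAPL_PKG):
    """Return the cumulative energy counter in microjoules, or None if absent."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read().strip())

def measure_joules(workload, path: str = RAPL_PKG):
    """Energy (J) attributed to `workload()` by the sensor, or None without RAPL.
    Note: the counter periodically wraps; production code must handle overflow."""
    before = read_energy_uj(path)
    workload()
    after = read_energy_uj(path)
    if before is None or after is None:
        return None
    return (after - before) / 1e6

if __name__ == "__main__":
    print(measure_joules(lambda: sum(range(10**6))))
```

Note that a reading obtained this way measures the whole package, includes concurrent activity, and inherits whatever error the vendor's internal model carries – the very accuracy and reliability issues examined in [2, 3].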
So, what you measure is not what you pay for. This is due to the lack of efficient and reliable energy sensors and of established scientific frameworks. Existing solutions are not up to the mark, causing frustration and a lack of trust among datacenter designers, cloud solution providers, and processor manufacturers. The impacts of the problem include measurements inaccurate by more than a factor of two, over 50% application-level energy losses, and 40% additional operational and capital expenses.
We bring innovations for energy savings via a three-stage process of measurement, modelling, and optimization on all modern computing platforms (see Figure 4). The machine-learning-based software models are developed using our know-how and the practical implications of our novel theory of energy predictive models, which employs the properties of the fundamental law of energy conservation.
Figure 4: High-Level View of our R&D Solutions and Technologies
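The modelling stage can be illustrated with a toy version of a linear energy predictive model over performance-counter readings (function names and coefficient values are my own invention, not the authors' API). A linear model with non-negative coefficients and no intercept is additive – the predicted energy of two workloads run back-to-back equals the sum of their individual predictions – which mirrors the conservation-of-energy property the theory in [2] builds on:

```python
# Illustrative sketch: a linear energy predictive model
#   E(x) = sum_i b_i * x_i
# over performance-counter readings x. Non-negative coefficients and a zero
# intercept keep the model consistent with energy conservation: counters only
# ever add energy, and zero activity predicts zero energy.

def predict_energy(coeffs, counters):
    assert all(b >= 0 for b in coeffs), "conservation: no negative coefficients"
    assert len(coeffs) == len(counters)
    return sum(b * x for b, x in zip(coeffs, counters))

if __name__ == "__main__":
    coeffs = [2.0e-9, 5.0e-9, 1.0e-9]   # joules per counter event (made up)
    run_a = [1e9, 2e8, 5e8]             # counter readings for application A
    run_b = [3e9, 1e8, 1e8]             # counter readings for application B
    combined = [a + b for a, b in zip(run_a, run_b)]
    e_a = predict_energy(coeffs, run_a)
    e_b = predict_energy(coeffs, run_b)
    # Additivity: E(A followed by B) == E(A) + E(B) for this model class.
    print(abs(predict_energy(coeffs, combined) - (e_a + e_b)) < 1e-9)  # -> True
```

A model that violated these properties – say, one fitted with a large negative coefficient – could still fit training data well, yet produce physically impossible predictions; this is the kind of consistency check the additive theory makes explicit.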
Our research and development in energy measurement and modelling have taught us how to help computing infrastructure providers who need to visualize and optimize their energy costs, by providing them with accurate and efficient software-based energy sensors. Unlike the inaccurate and unreliable on-chip hardware sensors on modern Intel, AMD, and NVIDIA platforms, our technologies yield sensors that are more than twice as accurate, identifying up to 160 million euros per year of savings in operational and capital expenses for an average datacenter. This further reduces wasteful resources and carbon emissions by up to 40%, and helps achieve up to 80% application-level energy savings.
A list of our publications on energy consumption measurement, modelling, and optimisation in computing can be found under Publications on our website: https://hcl.ucd.ie
Our dream is to use our research-driven technologies to save 300 TWh per year – enough to power two New Yorks.
A. Shahid, M. Fahad, R. R. Manumachu, and A. Lastovetsky, "Energy of Computing: Theory, Practical Implications for Predictive Models, and Experimental Analysis on Multicore Processors", IEEE Access, IEEE, April 2021.
M. Fahad, A. Shahid, R. Reddy, and A. Lastovetsky, "A Comparative Study of Methods for Measurement of Energy of Computing", Energies, MDPI, Vol. 12, Issue 11, June 2019.
I hope you find this post useful. If you find any errors or feel any need for improvement, let me know in your comments below.
Signing off for today. Stay tuned! Happy learning.