Boden Type DC – Newsletter Q1 2020
In this edition we report on how lessons learned in BTDC may affect data center technology in Asia, talk to one of our advisory board members, and continue our series introducing the consortium.
- Boden Type DC is a project funded by the European Union’s Horizon 2020 programme.
- The aim of the project is to build and operate the world’s most cost and energy efficient data center at minimal environmental impact.
- 5 consortium members from 4 countries.
- H1 Systems.
- Fraunhofer IOSB.
- RISE SICS North.
- Boden Business Agency.
- The pilot site of the concept called Boden Type DC One was built in Boden, Sweden in less than 5 months.
- Boden Type DC One was inaugurated in February 2019.
- 180 m2 total white space with 600 kW total capacity.
Boden Type Data Center in the Tropics
While the DCD Award 2019 winning BTDC project is implemented in the Nordics near the Arctic Circle, the lessons learned from the project can easily be applied anywhere else in the world.
One of the initial requirements of the BTDC design was to make it easily adaptable to other locations. H1 Systems and the consortium members took the following aspects into consideration:
- Use locally available renewable materials where possible: the building structure is very simple and standardized, and the materials can easily be replaced with locally available alternatives. We designed it so that no special requirements are needed beyond normal building code compliance, while the facility still remains safe and sound.
- Easy to build: use one standardized physical module which can accommodate at least 3 different internal POD designs for different kinds of IT equipment and types of data center (HPC, colocation, blockchain etc.) – this makes any further replication of the design very cost effective.
- Design the cooling so that it can easily be adapted to most weather conditions, and incorporate the possibility of using additional cooling strategies without changing the POD layout and general design.
- It has to be scalable from 200 kW up to several MW and beyond.
- The topology of the power supply is usually not location dependent; however, we emphasize the importance of a carefully chosen location, because with the right location you can substitute internal redundancies with the reliability of external utilities.
The key lies not in the absolute outdoor temperature but in the deltaT we can maintain between the front and the back of the racks while making sure there are no hot spots. With a rack inlet temperature of 26C and a stable 15K deltaT, the return air temperature will be 41C. The good thing about the tropics is that temperatures never get as high as in the Mediterranean or in a continental climate, so most of the time the “hot” 29-32C outdoor air can still cool the return air. The issue is how to supplement this natural cooling to reach a 26-27C rack inlet temperature while making sure the humidity is non-condensing and within the tolerable range. There are efficient technologies for this which we are currently investigating as part of the adaptation of BTDC to other climate zones.
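The arithmetic above can be written as a quick feasibility check. This is a minimal sketch: the 26C inlet and the 15K deltaT are the figures quoted in the text, while the helper names are our own illustration, not project code.

```python
def return_air_temp(inlet_c: float, delta_t_k: float) -> float:
    """Return-air temperature for a given rack inlet temperature and deltaT."""
    return inlet_c + delta_t_k

def outdoor_air_can_cool(outdoor_c: float, return_air_c: float) -> bool:
    """Fresh air can remove heat as long as it is cooler than the return air."""
    return outdoor_c < return_air_c

# Figures from the text: 26 C inlet with a stable 15 K deltaT -> 41 C return air.
t_return = return_air_temp(26.0, 15.0)
print(t_return)                            # 41.0
# A "hot" tropical day of 32 C is still well below the return air temperature,
# so outdoor air can still do part of the cooling work.
print(outdoor_air_can_cool(32.0, t_return))  # True
```

The larger this margin between outdoor and return-air temperature, the more of the cooling load fresh air can carry before supplemental cooling is needed.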
Tamas Balogh from H1 Systems had the honor of speaking briefly about the BTDC project at the OCP workshop in Singapore. Although the climate conditions are very different, most of the BTDC experiences, such as high density air cooling, holistic cooling control, standardized POD design, location selection, and the integration of and compliance with Open Compute solutions, are directly applicable in the tropics.
Tamas Balogh, Strategic Consultant H1 Systems talking about BTDC at the Open Compute workshop in Singapore in 2019
| | BTDC | Traditional design |
|---|---|---|
| Power density | 10-15 kW avg., up to 90 kW/rack | 5-8 kW avg., up to 15 kW/rack |
| Cooling pPUE in Nordics | 1,01 | 1,08-1,25 |
| Cooling pPUE in Tropics (est.) | 1,15 (with modified cooling) | 1,3 |
| Cooling control logic | Holistic cooling; control is based on direct HW data and feedback | SAT/RAT set-point based control |
| Source of reliability | Careful location selection, simplicity, easy to repair, limited redundancy | Increased internal redundancy |
Comparison of BTDC and traditional design in the Nordics and tropical climates
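For reference, the cooling pPUE values in the comparison follow the usual definition: IT power plus cooling power, divided by IT power. A minimal sketch with illustrative numbers (these are not measured BTDC data, only values chosen to land on the same order of magnitude as the table):

```python
def cooling_ppue(it_kw: float, cooling_kw: float) -> float:
    """Partial PUE attributable to cooling: (IT + cooling) / IT."""
    return (it_kw + cooling_kw) / it_kw

# Illustrative only: 500 kW of IT load cooled with 5 kW of fan power gives a
# cooling pPUE of 1.01, the order of magnitude reported for BTDC in the Nordics;
# 75 kW of chiller/fan power would give 1.15.
print(round(cooling_ppue(500.0, 5.0), 2))   # 1.01
print(round(cooling_ppue(500.0, 75.0), 2))  # 1.15
```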
The biggest obstacle to adopting BTDC lies not in the technology but in the resistance of IT and procurement departments. While cloud giants have been using similar technologies in their data centers for a decade now (high density, higher inlet temperatures, focus on deltaT instead of SAT, integrated infrastructure layers), enterprise, government and even some data center service providers still maintain 22C ±2C rack inlet temperatures and open-air rooms. The reason can be simple: the external impacts of such inefficiency are not internalized in the decision making of any participant in the supply chain.

Until clients are ready to pay the premium of overprotection and overdesign compared to what they can achieve with today’s IT resiliency, the data centers’ pace of change will remain slow, because the emission impacts are not internalized and the benefits of higher efficiency are in most cases not important enough to force disruptive changes. A 5-20% saving on the electricity consumption of a bank’s server room or an enterprise edge site is such a small amount that they won’t bother. For cloud players, on the other hand, electricity is a significant cost per sold VM or vCPU, which is why they are the leaders of sustainable ICT. This is where policy makers can take the lead and actually force end users to adapt and to internalize the emission externalities. “By 2030 all data centers have to be sustainable,” said Ursula von der Leyen, President of the European Commission, in one of her early interviews, which in 2020 is already an important sign for everybody thinking of setting up a new data center with a lifespan of 15+ years.
The next step for H1 Systems will be a similar proof of concept of BTDC in the tropics. Stay tuned.
Consortium members discussed this topic recently, as the importance of distributed and edge computing rises every day. Depending on the edge case, BTDC may actually be a perfect fit for distributed computing if the unit size of a site is in the range of 200-1000 kW, because it is already standardized, compact, extremely cost efficient, and very simple to implement and operate. The electrical and cooling parameters are already integrated into a virtual model, which makes it possible to orchestrate workloads based not only on virtual parameters but also on physical conditions such as site and server status, energy prices, utilization levels and temperature conditions.
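Such physically aware orchestration can be sketched as a simple scoring function over candidate sites. This is purely illustrative: the field names, weights and numbers below are invented for the sketch and are not taken from the BTDC virtual model.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    energy_price: float    # EUR/kWh (illustrative)
    outdoor_temp_c: float  # cooler air -> cheaper free cooling
    utilization: float     # 0..1, current IT utilization

def site_score(s: Site) -> float:
    """Lower is better: a toy cost proxy mixing energy price, climate and
    headroom. The weights are made up for illustration only."""
    cooling_penalty = max(0.0, s.outdoor_temp_c - 20.0) * 0.002
    headroom_penalty = s.utilization * 0.05
    return s.energy_price + cooling_penalty + headroom_penalty

def place_workload(sites: list) -> Site:
    """Pick the site where the next workload is cheapest to run."""
    return min(sites, key=site_score)

sites = [
    Site("boden", energy_price=0.04, outdoor_temp_c=5.0, utilization=0.6),
    Site("tropics", energy_price=0.08, outdoor_temp_c=30.0, utilization=0.3),
]
print(place_workload(sites).name)  # boden, in this made-up example
```

A real orchestrator would of course refresh these inputs continuously from the monitoring system rather than use static values.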
An interview with Antal Kerekes, member of the advisory board of the BTDC project
Antal is a Partner at PricewaterhouseCoopers Hungary in charge of Technology Advisory. He has managed several IoT, data center and 5G projects in CEE over the last couple of years.
What were your first impressions when you were invited as an advisory board member?
The BTDC project very much fits in PwC’s corporate social responsibility. It is very important that industries adopt innovative, efficient and sustainable technologies early on. The BTDC project helps demonstrate and disseminate these technologies. I liked the ambitious approach on efficiency but besides that decarbonization and cost improvement plans also impressed me. These are all high up on the agenda today when cloud based solutions and other innovative technologies like IoT and AI are gaining more and more ground. So I happily agreed to H1’s invitation to join the project on the advisory board.
How do you see the data center markets today? What can be used from the findings of the BTDC project?
In a 2017 report PwC projected growth in the data center markets with a certain slowdown for the years 2018-20, and named a few factors that are key to avoiding problems with DC investments. While the capacity forecast was somewhat underestimated, the other factors mentioned, such as the ability to scale operations to meet customer demand, cost and schedule forecasting, misalignment in risk sharing in construction, poor governance and controls, or insufficient power/permits, are of course still very valid. H1 Systems, as the main contractor of the data center, together with its consortium partners, managed these tasks very well. The Boden Type DC project, with its modular and scalable design, five-month installation time reducing time to market, and careful location selection, serves as a proven model for keeping these potential pain points under control.
I also see that the current incredible demand for new investments is not necessarily sustainable. Improvements in efficiency are crucial. Thanks to well-developed telecommunication networks, location-based optimization is now possible for many types of investments: selecting sites close to renewable power sources, in an optimal climatic environment, near a capable workforce or in advantageous geographies. The BTDC project is a good example of all of these.
What have you experienced in the digital markets in 2020, since the world went into emergency mode? What factors may persist after the isolation period is over?
The current crisis enforced some sort of a pilot program, a major global test of the digital ecosystem. A test to this extent would not have been possible to run in “peacetime”. So far, the digital ecosystem on the supply side seems to cope well with this extraordinary heavy load test.
On the user side, some organizations and companies have been better prepared for operating online than others. Those that are better prepared can gain a competitive advantage in the future, while the less prepared ones need to make quick investments to accelerate the transition.
In our organization we introduced a single online platform a few years ago, something that was not necessarily popular with all of our colleagues. By now we consider it one of our best decisions, as we were already well prepared for online coworking from the first moment of the lockdown.
Some of the customs and behaviors developed by organizations and individuals in this period may persist to a certain extent after the isolation period is over. Home office work, co-working platforms, streaming media, e-commerce and online education, for example, are particular areas that are booming now. Most probably, some of the users who are comfortable with the online/digital way of doing things will stick to it in the future too. I am not sure business travel will return to past levels anytime soon even after conditions are safe again, or that everyone who switched to streaming services during the lockdown will return to cinemas.
All in all, these factors will generate a lot of increased demand for the digital economy, and in particular for data centers, in the future. Again, I think it is of vital importance that investments related to this growth are efficient and made in a rational way that enables future scalability.
What are the findings of the BTDC project that impress you most? Is there any area where you see a need for improvement?
I have heard about the extraordinary partial PUE results that are even below 1,01. That is really unique, and I am sure we will soon see more data centers adopt the technology and the know-how gained in this project. As for the improvement points, it became clear during the project that defining an objective and well-balanced KPI for data center efficiency is a larger challenge than we thought, and we were probably not well prepared for that. We plan to have a session in June with the advisory board and the consortium. Then we may further close this gap, and I will be able to tell more about this as well.
Meet our consortium members – RISE
RISE is an independent, state-owned research institute. As an innovation partner for every part of society, we help develop technologies, products, services and processes that contribute to a sustainable world and a competitive business community. We do this in collaboration with and on behalf of companies, academia and the public sector. We also have a special focus on supporting small and medium-sized enterprises in their innovation processes.
RISE ICE Data Center
ICE, the Infrastructure and Cloud research & test Environment, was inaugurated in January 2016. The facility is open primarily for European projects, universities and companies to use. However, customers and partners from all over the world are welcome to use ICE for their testing and experiments.
The ICE mission is to contribute to Sweden being at the absolute forefront regarding competence in sustainable and efficient data center solutions, cloud applications and data analysis. This will be accomplished by increasing innovation capability, helping product and service companies excel, as well as attracting more researchers and companies to Sweden to make the business branch even stronger nationally.
The team working at RISE ICE Datacenter has grown from 3 in early 2016, to 23 in 2020. The picture in Figure 1 below shows the team in June 2019.
Figure 1: The RISE ICE Data center team in June 2019
The facilities at RISE ICE Datacenter
The current facilities are situated in an old storage facility located within walking distance of Luleå University of Technology. There are a number of independent data center modules and test rigs, open for testing and experiments, along with a demo space that can fit 50+ persons.
The first module, a stable environment optimized for testing IT/cloud-related applications and big data handling, has been running since February 2016. Measurement data from equipment and sensors is collected for modeling, simulation and optimization. The server infrastructure of module 1 is based on the Dell PowerEdge R730xd and is enhanced with GPU acceleration. This module is primarily used for hosting software-oriented projects and experiments.
The second module is more flexible in its physical setup and has been used for testing facility/utility innovations. It currently hosts our GPU cluster, providing over 1,1 million CUDA cores to our customers and partners.
The third module is our fresh-air cooling test facility, where we have the ability to test how fresh air-cooling units behave in different climate conditions, as the fresh-air can be altered to mimic different geographical locations. The IT in the datacenter consists of Open Compute hardware, primarily Facebook Winterfell servers.
Further, we have a 5G enabled edge data center with PV-panels, microgrid controller, thermal- and electrical storage and about 10kW IT. It is used to evaluate future concepts and challenges within the growing market of edge data centers. See also Figure 2 below.
In addition, we have a number of wind tunnels, immersion cooling and heat recovery experiments, and our external datacenter activities such as the Boden Type DC One and the soon-to-be-built fuel-cell-powered micro datacenter with high-temperature heat recovery.
Figure 2: EDGE Data center laboratory.
The test facility offers access to a unique environment for testing, demos and experiments. The ICE offer covers all parts of the stack:
- Big data and machine learning – Computing capacity, platforms and tools for handling big data and machine learning.
- IT and cloud – testing and experiment environments for software development, scaling and infrastructure optimization.
- Facility and IT HW – possibilities for testing disruptive innovations concerning the facility and hardware of a datacenter.
- Utility – measurements and research securing a sustainable society with efficient datacenters as a part of the energy system.
The BTDC project
The BTDC project has given RISE great visibility throughout Europe and created in-depth relations with the project partners, which have contributed to our future expansion into new projects, technology ventures and business opportunities.
From a technical perspective, the RISE focus has been on monitoring, operation and control of the data center. BTDC One has been an excellent data center for further developing and operating our data collection tool chain, which is built entirely on open-source software and our own code, see also Figure 3 below. The monitoring software enabled the project to collect data from all domains within the data center, including facility, environment, climate, IT and networking, utility usage and much more. Operating the data center and applying a repeated IT load at a fixed time interval was solved by installing Kubernetes and an in-house developed scheduler, distributing the load across all servers installed in BTDC One POD1. This solution gave us the flexibility to run both synthetic load patterns and realistic load patterns defined by our project partner Fraunhofer.
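The scheduling idea described above can be sketched as follows. This is a minimal illustration only: the actual RISE scheduler is an in-house Kubernetes-based tool, and every name and number here is invented for the sketch.

```python
import time

def run_load_step(servers, load_pct):
    """Placeholder for dispatching a stress job at the given load level to
    every server (in BTDC One this was done via Kubernetes; here we only
    record what would be dispatched)."""
    return [(server, load_pct) for server in servers]

def repeat_load(servers, pattern, interval_s, cycles):
    """Replay a synthetic load pattern across all servers on a fixed interval,
    repeating the whole pattern for the given number of cycles."""
    history = []
    for _ in range(cycles):
        for load_pct in pattern:
            history.extend(run_load_step(servers, load_pct))
            time.sleep(interval_s)  # hold each load level for the interval
    return history

# One cycle of a simple ramp pattern over two servers (no delay for the demo):
hist = repeat_load(["srv-1", "srv-2"], pattern=[25, 50, 100], interval_s=0, cycles=1)
print(len(hist))  # 6 dispatched steps
```

Repeating the same pattern on a fixed clock is what makes runs comparable across different cooling settings.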
One of the biggest achievements so far in the project is the implementation and operation of the holistic control approach, where we use the monitoring system to supply information about the server conditions to the facility control system. This enables us to control the temperature difference across the servers (often referred to as deltaT), which potentially reduces the overall power usage of the data center. The results have been partly presented at the European Open Compute Summit 2019 and were to be published in detail at the Open Compute Summit in San Jose, but unfortunately the conference was cancelled due to the ongoing pandemic. However, a virtual version of the event will be organized in May, where we will present the results.
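The holistic control idea, facility fans reacting to server-reported temperatures rather than to fixed supply/return set points, can be sketched as a simple proportional step. This is an assumption-laden illustration, not the actual BTDC control code: the 15K target matches the deltaT figure quoted earlier in this newsletter, while the gain and the clamping limits are invented.

```python
def adjust_fan_speed(fan_pct, measured_delta_t, target_delta_t=15.0, gain=2.0):
    """One step of a proportional controller on the server deltaT.
    If the servers run cooler than the target deltaT, more air is moving than
    needed and the fans can slow down; if hotter, speed them up.
    The gain and the 10-100% clamp are illustrative values."""
    error = measured_delta_t - target_delta_t
    return min(100.0, max(10.0, fan_pct + gain * error))

# Servers report a 17 K deltaT against a 15 K target -> fans speed up by 4 points.
print(adjust_fan_speed(50.0, 17.0))  # 54.0
```

In a real loop this step would run continuously against aggregated readings from every server, which is exactly what the direct hardware feedback in the holistic approach provides.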
Figure 3: BTDC One monitoring schematic.
Where to meet us
Join the Open Compute Virtual Summit and listen to presentations online from Jon Summers and Jeffrey Sarkinen from RISE.
MAY13 - 10.30 Pacific Time (19.30 CET) - Profiling OCP Servers for Holistic Air Cooling
MAY13 - 11.00 Pacific Time (20.00 CET) – BTDC: A Practical Demonstration of World Class Efficiency
- The consortium welcomed around 50 attendees of Vattenfall’s Nordic Data Center conference in Boden Type DC One on February 6th.
- Nils Lindh, Director Data Center Development spoke at Data Center Dynamics New York Virtual conference on April 1st, on the panel Ethically Challenged: Decarbonizing the data center - can a bold CSR policy save the world?.
- The consortium meeting planned for Karlsruhe was turned into a two-day online meeting on April 15-16. The consortium reviewed the current status of the project and planned ahead for the next few months.
- Pod 2 will be populated by a new testing partner in May.