BOPTEST Challenge:

Smart building HVAC control

Background

Buildings account for a significant portion of global energy consumption, the bulk of it for heating and cooling, and reducing energy use in buildings is critical to meeting global emissions reduction targets.  


With the introduction of more renewables and the electrification of heating systems and the mobility sector, the strain on energy grids is increasing drastically. However, buildings also have a large potential for flexibility thanks to their thermal inertia. Consequently, buildings offer one of the lowest-cost opportunities for providing the flexible demand needed to support increasing levels of variable renewable energy resources in electricity grids.


To release this flexibility potential, smart control algorithms that can respond to external signals, such as price variations, are needed. However, implementing smart control algorithms in buildings is a time-consuming and costly task. Evaluating control algorithms in a simplified virtual environment is therefore needed to identify the most promising concepts.


The Smart building HVAC control challenge aims to develop efficient and scalable control algorithms that can be applied in commercial buildings to reduce energy consumption and release their flexibility potential. 


Problem statement

The development and implementation of smart control algorithms is a complex task that requires scalable, transferable, and efficient solutions. This competition seeks to address this challenge by promoting the development and benchmarking of control strategy approaches that are applicable for implementation in real buildings.


The competition will challenge participants to develop scalable, efficient, and effective algorithms that can release the flexibility potential in building HVAC systems, while taking the following key challenges into account:


  • Activation of flexibility: The challenge will focus on the controller's ability to activate the building HVAC system's flexibility potential based on variable cost signals, while not compromising indoor air quality.

  • Scalability: Many smart control algorithms require substantial building-specific adaptation, which is time- and cost-intensive and makes them less profitable to install. The competition will promote solutions that are scalable from an implementation perspective.


Emulators

The competition will be performed using the BOPTEST framework (https://ibpsa.github.io/project1-boptest/).


Each emulator has two defined scenarios: “typical_heat_day” and “peak_heat_day”, representing periods of typical and peak heat demand, respectively. For energy prices, only the “highly_dynamic” scenario will be used in the competition evaluation.
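
Scenario selection can also be scripted through the BOPTEST REST API. The sketch below is a minimal example: the PUT /scenario endpoint and the time_period / electricity_price keys follow the public BOPTEST API, while the base URL is a placeholder for a locally running instance.

```python
def scenario_payload(time_period, electricity_price="highly_dynamic"):
    """Build the JSON body for PUT /scenario. Only the two competition
    time periods are accepted; electricity_price defaults to the
    scenario used in the competition evaluation."""
    valid = ("typical_heat_day", "peak_heat_day")
    if time_period not in valid:
        raise ValueError(f"time_period must be one of {valid}")
    return {"time_period": time_period, "electricity_price": electricity_price}

def set_scenario(base_url, time_period):
    """Apply the scenario to a running BOPTEST instance."""
    import requests  # third-party HTTP client used by the BOPTEST examples
    res = requests.put(f"{base_url}/scenario", json=scenario_payload(time_period))
    res.raise_for_status()
    return res.json()

# Example (assumes BOPTEST is running locally on the default port):
# set_scenario("http://127.0.0.1:5000", "peak_heat_day")
```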


Detailed descriptions of the emulators and the scenarios will be available on Codalab, as part of the Starting Kit.

Participation and Submission

I     Register at the Adrenalin competition dashboard (Codalab)


  1. To register a Codalab account, go to https://codalab.lisn.upsaclay.fr/accounts/signup

  2. Go to the competition’s page.
  3. Navigate to the “Participate” tab to accept the terms and conditions and register for the competition.


II    Run and train your controller


In the training phase of the competition, the testcases can be run both locally and through the BOPTEST service.

 

How to run the testcases locally:

  1. The testcase FMUs will be available on Codalab as part of the Starting Kit.
  2. Download BOPTEST from https://github.com/ibpsa/project1-boptest/releases/tag/v0.6.0, or clone the repository directly from GitHub and check out the tag “v0.6.0”.
  3. Copy the testcase folders from the Starting Kit into the BOPTEST/testcases folder.
  4. Start BOPTEST using docker compose and the testcase name, as described here: https://github.com/ibpsa/project1-boptest
  5. Follow the steps shown in the Python script in the Starting Kit. More detailed information on the BOPTEST API can be found in the BOPTEST repository: https://github.com/ibpsa/project1-boptest
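
The steps above can be sketched as a minimal control loop, assuming BOPTEST is running locally on the default port. The endpoints (PUT /initialize, PUT /step, POST /advance, GET /kpi) are the standard BOPTEST API, and the overwrite convention (<signal>_u plus <signal>_activate) follows the BOPTEST input interface; the signal name used below is a placeholder, so query GET /inputs on your testcase for the real names.

```python
def overwrite_payload(signal, value):
    """BOPTEST input overwrite: <signal>_u carries the value and
    <signal>_activate enables the overwrite for the next step."""
    return {f"{signal}_u": value, f"{signal}_activate": 1}

def run_episode(base_url, control_step=900, n_steps=4):
    """Run a short episode against a local BOPTEST testcase and
    return the accumulated KPIs."""
    import requests  # third-party HTTP client

    # Initialize the testcase (times in seconds).
    requests.put(f"{base_url}/initialize",
                 json={"start_time": 0, "warmup_period": 0}).raise_for_status()
    # Set the control step length.
    requests.put(f"{base_url}/step",
                 json={"step": control_step}).raise_for_status()

    for _ in range(n_steps):
        # An empty payload keeps the embedded baseline controller active;
        # pass overwrite_payload(...) here to apply your own control action.
        measurements = requests.post(f"{base_url}/advance", json={}).json()

    # Retrieve the KPIs accumulated over the episode.
    return requests.get(f"{base_url}/kpi").json()

# Example (assumes a testcase running at http://127.0.0.1:5000):
# kpis = run_episode("http://127.0.0.1:5000")
```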

 

Run with the service version:

  1. Follow the instructions in the Python script. More details on the service specific API are given here: https://github.com/NREL/boptest-service

 

III   Submit your results to the competition dashboard


For a result to be valid for submission to the competition, it must be run through the BOPTEST service.  

Submission is done by submitting the test_id retrieved from the BOPTEST service.

The test_id is returned when selecting a testcase from the BOPTEST service, with:

 

POST testcases/{namespace}/{testcase_name}/select

 

Make sure the testcase has run to completion before submitting the test_id to the Adrenalin dashboard. You should create and submit a separate test_id for each scenario and testcase.
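
The select call above can be scripted as follows. This is a minimal sketch: the service hostname is a placeholder, and the name of the response field holding the test_id is an assumption, so verify both against the Python script in the Starting Kit.

```python
def select_url(service_url, namespace, testcase_name):
    """Build the select endpoint path shown above."""
    return f"{service_url}/testcases/{namespace}/{testcase_name}/select"

def select_testcase(service_url, namespace, testcase_name):
    """POST to the select endpoint and return the test_id used for
    submission. The 'testid' response field is an assumption here."""
    import requests  # third-party HTTP client
    res = requests.post(select_url(service_url, namespace, testcase_name))
    res.raise_for_status()
    return res.json()["testid"]

# Example (placeholder host and names):
# test_id = select_testcase("https://<service-host>", "<namespace>", "<testcase_name>")
```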


To make a submission, a zipped Excel file has to be uploaded on Codalab. The template Excel file that needs to be modified and submitted will be available on Codalab, as part of the Starting Kit.

The Excel file contains four rows and two columns, one row for each combination of scenario and testcase. The first column is the identifier, and the second column holds the test_id retrieved from the BOPTEST service. If some test_ids are left blank, only the submitted scenarios are scored.
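
As an illustration of that layout, the four rows can be assembled programmatically before being pasted into the template. The emulator names and identifier format below are placeholders; the actual identifiers are defined in the template on Codalab.

```python
# Placeholder emulator names; the real identifiers come from the template.
EMULATORS = ("emulator_a", "emulator_b")
SCENARIOS = ("typical_heat_day", "peak_heat_day")

def submission_rows(test_ids):
    """Build the four (identifier, test_id) rows of the submission sheet.
    test_ids maps identifiers to test_ids retrieved from the service;
    missing entries stay blank, and only submitted scenarios are scored."""
    rows = []
    for emu in EMULATORS:
        for sce in SCENARIOS:
            key = f"{emu}/{sce}"
            rows.append((key, test_ids.get(key, "")))
    return rows
```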

 

The test_ids are submitted through the competition dashboard. The submission score is calculated in the back-end and displayed on the dashboard.


NOTICE: By submitting your results, you agree to make your algorithm available as open source under the BSD-3-Clause license (https://opensource.org/license/bsd-3-clause), should you be selected as one of the winners.

Evaluation Criteria

The evaluation will be twofold.

  1. Quantitative evaluation based on KPIs in the BOPTEST framework. A single number will be calculated from a weighting of the different KPIs; see the description below.
  2. Qualitative evaluation taking into account the complexity of the control, its data requirements, and its scalability. As part of the submission, participants will be asked to document their control algorithm.


Quantitative evaluation

The quantitative evaluation will use a selection of the standard KPIs from the BOPTEST framework and weight them into a total score. The relevant KPIs are described in the table below.


Table 1: KPIs

Name       Unit         Description
cost_tot   /m2          Cost: operational cost associated with the HVAC energy usage (energy x price).
idis_tot   ppm*h/zone   IAQ: the extent to which zone CO2 concentrations exceed the acceptable bounds predefined in the test case FMU for each zone, averaged over all zones.
pdih_tot   kW/m2        Peak district heating demand: the HVAC peak district heating demand (15 min).
pele_tot   kW/m2        Peak electrical demand: the HVAC peak electrical demand (15 min).
tdis_tot   Kh/zone      Thermal discomfort: the cumulative deviation of zone temperatures from the upper and lower comfort limits predefined in the test case FMU for each zone, averaged over all zones. Air temperature is used for air-based systems and operative temperature for radiant systems.


The score of a controller on a single scenario (i) is given by the following equation:


The score equation contains the energy cost plus a peak power penalty, representing a peak power tariff. The energy prices for electricity and district heating are the same and are based on electricity spot prices for the building's location. However, since an important factor in the competition is for the controller to demonstrate its ability to exploit the flexibility of the building, the spot price signal is exaggerated. The baseline controllers are price-ignorant. In addition, the score equation contains a penalty for violating the comfort constraints.

 

The total score of a submission will be the sum, over all scenarios, of the difference between the score of the baseline controller and the score of the submission controller:


For a submission to be eligible for the prize money, the score for each scenario must be positive.
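
Since the official equation and weights are published with the Starting Kit, the scoring logic described above can only be sketched here. The weights below are placeholders, not the official competition values; the KPI names match Table 1.

```python
def scenario_score(kpis, peak_weight=1.0, discomfort_weight=1.0):
    """Illustrative per-scenario score: operational cost plus weighted
    peak-demand and thermal-discomfort penalties. The weights are
    placeholders, not the official competition values."""
    peak = kpis.get("pele_tot", 0.0) + kpis.get("pdih_tot", 0.0)
    return (kpis["cost_tot"]
            + peak_weight * peak
            + discomfort_weight * kpis["tdis_tot"])

def total_score(baseline, submission, **weights):
    """Sum over scenarios of (baseline score - submission score); a
    positive total means the submission outperforms the baseline, and
    each per-scenario difference must be positive for prize eligibility."""
    return sum(scenario_score(baseline[s], **weights)
               - scenario_score(submission[s], **weights)
               for s in baseline)
```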

 

Qualitative evaluation

For the final submission, in addition to the controller performance results, participants must submit a description of their control approach.

 

The qualitative evaluation will focus on the prospect of application in real buildings. Important points are:

  • Model complexity
  • Training data demand
  • Interpretability (How well can the decisions be understood by users?)

 

Instructions for the documentation will be available on Codalab, as part of the Starting Kit.

Timeline

  •  Training stage (phase I)

    The training period will last for 2 months. In this period, the participants will get access to a training version of each of the emulators. The emulators will be available through the BOPTEST service and for download to run locally. More information on how to install and run the emulators is given in the Emulators section.

    The participants can use this period to train and test their control algorithms. Participants are, however, encouraged to submit intermediate results from training to the competition dashboard (see Participation and Submission).

    To continue to the competition stage, a full, valid result (for both emulators and both scenarios) must be run with the BOPTEST service version and submitted to the Adrenalin dashboard.

 

  • Competition stage (Phase II)

    The competition stage will last 5 days. For this stage, new versions of the emulators (the same emulators, but with slightly different boundary conditions) will be made available. The emulators will be available only through the service version of BOPTEST. To be eligible for the prize money, valid results for both scenarios in both emulators must be submitted to the Adrenalin dashboard.


Sponsors

AES Innovation
Synavision
Kiona