The project investigated the robustness of AI-based wind power forecasting models against intentional but imperceptible changes to the input data that aim to falsify the forecast.

The use of AI-based methods in critical infrastructures such as the energy sector raises potential security issues. Adversarial attacks, for instance, are a major threat: slight but carefully crafted changes to input data designed to manipulate machine learning models. AI algorithms used in the energy sector, such as wind power forecasting models, are also exposed to this threat. There is a risk that attackers deliberately manipulate the input data, falsifying the forecast in a way that is profitable to them. It is therefore important to develop suitable methods to increase the robustness of these models against adversarial attacks.

Goals

  • Developing methods to generate adversarial attacks capable of selectively manipulating AI-based wind power forecasting models
  • Analyzing the robustness of AI-based wind power forecasting models against manipulation of input data
  • Investigating whether including adversarial training in the training process of the wind power forecasting models improves not only their robustness but also their generalizability and forecasting quality
  • Publishing the project results in the form of a scientific paper

Methods

  • Encoder-decoder LSTMs consist of an encoder network that processes an input sequence and encodes it into a latent representation, and a decoder network that uses this encoding to sequentially generate the output sequence (see the first sketch below).
  • Projected gradient descent (PGD) is a gradient-based method for generating adversarial attacks that iteratively perturbs the input data within a budget set by the attacker (see the second sketch below).
  • Adversarial training is a method for increasing the robustness of AI models against input manipulation by including both clean and adversarially perturbed input data in the training process (see the third sketch below).
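
The following is a minimal PyTorch sketch of such an encoder-decoder LSTM for time-series forecasting. The layer sizes, the single power output per time step, and the autoregressive decoder input are illustrative assumptions; the architectures actually used in the project may differ.

```python
import torch
import torch.nn as nn

class EncoderDecoderLSTM(nn.Module):
    """Sequence-to-sequence LSTM: encode an input time series into a
    latent state, then decode it step by step into a power forecast."""

    def __init__(self, n_features: int, hidden_size: int, horizon: int):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(1, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_steps, n_features), e.g. wind speed forecasts
        _, state = self.encoder(x)                 # latent representation
        dec_in = torch.zeros(x.size(0), 1, 1, device=x.device)
        outputs = []
        for _ in range(self.horizon):              # generate output sequence
            dec_out, state = self.decoder(dec_in, state)
            y = self.head(dec_out)                 # power forecast for this step
            outputs.append(y)
            dec_in = y                             # feed prediction back in
        return torch.cat(outputs, dim=1).squeeze(-1)   # (batch, horizon)
```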
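
A minimal sketch of a targeted PGD attack on such a model: the attacker chooses a target forecast, and the input is iteratively nudged towards it while the perturbation stays within an L∞ budget. The function name and the values of eps, alpha, and steps are illustrative assumptions.

```python
import torch

def pgd_attack(model, x, y_target, loss_fn, eps=0.01, alpha=0.002, steps=20):
    """Targeted PGD (sketch): push the model's forecast towards y_target
    while keeping the perturbation inside an eps-ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y_target)        # distance to attacker's target
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # step towards the target
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.detach()
    return x_adv
```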
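
A sketch of a single adversarial training step under two simplifying assumptions: clean and perturbed samples are weighted equally, and the perturbation is generated with a one-step (FGSM-style) variant of the attack for brevity.

```python
import torch

def adversarial_training_step(model, optimizer, loss_fn, x, y, eps=0.01):
    """Train on a mix of clean and adversarially perturbed inputs (sketch)."""
    model.train()
    # Generate perturbed inputs by maximizing the forecast error (one ascent step).
    x_req = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x_req), y), x_req)[0]
    x_adv = (x + eps * grad.sign()).detach()

    # Equal weighting of clean and perturbed loss is an assumption.
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```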

Detailed project description

The project involved training an encoder-decoder LSTM model to forecast the wind power generated by a single wind power plant as accurately as possible. For its forecast, this model used only a small number of wind speed forecasts in the form of time series, so it was characterized by low-dimensional input data. The model proved very robust, even against stronger manipulations of the input data, and always exhibited physically correct behavior.

A second model was trained to forecast wind power generation across Germany as precisely as possible. This model was based on an encoder-decoder convolutional LSTM architecture and used wind speed forecasts in the form of weather maps, so its input data was very high-dimensional. Various analyses showed that such models are highly vulnerable to adversarial attacks: the model's forecasts could be manipulated almost at will through slight but imperceptible changes to the input data.

The project also investigated methods for improving the robustness of forecasting models against adversarial attacks. The results indicated that adversarial training can increase the robustness of the forecasting models considerably, at the cost of only a slight drop in forecasting quality. In summary, the project showed that AI-based forecasting models which receive high-dimensional input data via security-critical interfaces should always be checked for their vulnerability to adversarial attacks before being put into operation and, if necessary, protected by appropriate methods such as adversarial training.
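
As an illustration of such a vulnerability check, the following sketch compares the forecast error on clean inputs with the error under a targeted attack, reusing the pgd_attack function sketched above; the error metric and attack settings are assumptions.

```python
import torch

def robustness_gap(model, x, y, y_target, loss_fn):
    """Compare forecast error on clean and PGD-perturbed inputs (sketch).
    A large gap signals that the model is vulnerable to adversarial attacks."""
    with torch.no_grad():
        err_clean = (model(x) - y).abs().mean().item()
    x_adv = pgd_attack(model, x, y_target, loss_fn)   # from the sketch above
    with torch.no_grad():
        err_adv = (model(x_adv) - y).abs().mean().item()
    return err_clean, err_adv
```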

Project schedule

  • Creating an ETL process for processing historical weather and wind power data
  • Implementing a wind power forecasting model for an individual wind power plant as well as another model for Germany-wide wind power forecasting
  • Selecting and implementing suitable methods for creating adversarial attacks and generating compromised input data to manipulate both wind power forecasting models
  • Integrating adversarial training into the training process of the two wind power forecasting models
  • Analyzing the effects of adversarial attacks and adversarial training on the performance and robustness of the two wind power forecasting models

Project partners

Fraunhofer IEE:

  • René Heinrich
  • Dr. Christoph Scholz (christoph.scholz@iee.fraunhofer.de)

University of Kassel:

  • We collaborated with the Chair for Intelligent Embedded Systems at the University of Kassel in this project. The chair has special expertise in the field of machine learning and supported the project both with advice and by providing synthetic data on the energy feed-in of wind power plants.
  • Point of contact: Stephan Vogt (stephan.vogt@uni-kassel.de)
  • Link to website: https://www.uni-kassel.de/eecs/ies/startseite

Publications / Further information

Planned paper

  • Title of the paper: “Targeted Adversarial Attacks on Wind Power Forecasts”
  • Target journal: possibly “Scientific Reports” or “Energy and AI”

Project period

April 1, 2021 – November 30, 2021

René Heinrich

Information analysis & evaluation

Fraunhofer IEE

+49 (0) 160 – 3408484

