Large amounts of data are available for determining current energy efficiency, but the more data that must be processed to obtain the necessary result, the greater the cost in time and processing power. This data must therefore be processed frugally, so that anomalies in energy efficiency can be identified without consuming more processing power and data storage than needed, and in the minimum amount of time.
The purpose of the technique described in this paper is to address the 'curse of dimensionality', whereby high-dimensional data "may cause the deterioration of many fault detection techniques because the degree of data abnormality in fault-relevant dimensions can be obscured or even masked by irrelevant attributes".
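This masking effect can be illustrated with a small sketch (not from the paper; the data, dimensions, and distance-based score below are illustrative assumptions). A single fault-relevant attribute makes an injected anomaly stand out clearly, but once many irrelevant noise attributes are appended, the anomaly's distance-based score sinks toward that of normal points:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_irrelevant = 500, 200

# One fault-relevant attribute: normal readings ~ N(0, 1), one injected fault
relevant = rng.normal(size=(n, 1))
relevant[0, 0] = 6.0  # anomaly in row 0

# Irrelevant attributes: noise unrelated to the fault
irrelevant = rng.normal(size=(n, d_irrelevant))

def anomaly_score(X, idx=0):
    """Distance of point idx from the data mean, relative to the mean distance."""
    dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return dist[idx] / dist.mean()

score_1d = anomaly_score(relevant)                           # fault-relevant dim only
score_hd = anomaly_score(np.hstack([relevant, irrelevant]))  # plus irrelevant dims

print(f"score in 1 dim: {score_1d:.2f}")
print(f"score in {1 + d_irrelevant} dims: {score_hd:.2f}")
```

In one dimension the anomaly's score is several times the average, while in the high-dimensional version the contribution of the fault-relevant attribute is diluted by the irrelevant ones, which is exactly the deterioration the quoted passage describes.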