Daily electricity price forecasting (report)
Version of 11:33, 10 November 2009
Introduction to the project
Project description
Goal
The goal is to forecast the average daily spot price of electricity. The forecasting horizon (the maximal time segment over which the forecast error does not exceed a given value) is supposed to be one month.
Motivation
For example, one needs a precise forecast of electricity consumption for each hour of the next day to avoid transactions on the balancing market.
Data
Daily time series from 1/1/2003 until now. The time series are weather (average, low, and high daily temperature, relative humidity, precipitation, wind speed, heating and cooling degree days) and the average daily price of electricity. There is no data on electricity consumption or energy unit prices. The data on sunrise and sunset come from the internet.
Quality
The time series is split into two parts: the whole history except the last month, and the last month itself. The model is created using the first part and tested on the second. The procedure is repeated for each month of the last year. The target function is MAPE (mean absolute percentage error) for the given month.
Requirements
The monthly error of the obtained model must not exceed the error of the existing model available to the customer, which is LASSO with some modifications.
Feasibility
The average daily price of electricity contains peaks, and it seems there is no information about these peaks in the given data. There is no visible correlation between the data and the responses.
Methods
The model to be generated is a linear combination of selected features. Each primitive feature (with index <tex>j</tex>) is the set of <tex>(j+nk)</tex>-th samples of a time series, where <tex>k</tex> is a period. The set of features includes the primitive features and their superpositions.
Problem definition
We have a variables matrix <tex>X</tex> and a responses vector <tex>y</tex> for this matrix; both are time series. The data to classify follows the initial data directly in time. Our goal is to recover the regression for the variables matrix, i.e. to find a vector <tex>\beta</tex> of linear coefficients between <tex>X</tex> and <tex>y</tex>, <tex>\hat{y}=X\beta</tex>. As quality functional we use MAPE (mean absolute percentage error):

<tex>MAPE=\frac{1}{n}\sum_{i=1}^{n}\frac{|y_i-\hat{y}_i|}{|y_i|}.</tex>
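The MAPE criterion above can be sketched in a few lines of Python; the price values below are purely illustrative.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, as defined above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

# Illustrative daily prices and a forecast for them:
y = np.array([50.0, 40.0, 80.0])
y_hat = np.array([55.0, 38.0, 72.0])
print(mape(y, y_hat))  # mean of 0.1, 0.05, 0.1
```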
Algorithm description
State of the art
The main task of this subsection is to describe approaches to daily electricity price forecasting and the sort of data needed for this task. Since we deal with the German electricity market, a brief survey of it is given as well. Let's start with methods of forecasting the daily electricity price. There are many ways to solve this problem. These can be ARIMA models or autoregression [1], [4]. Artificial neural networks are also used, in combination with substantial improvements like wavelet techniques or ARIMA models [2]. SVM can be used in the closely related problem of price spike forecasting [2]. Models of noise and jumps can be constructed in other ways as well [3]. The data sets can be rather different. For neural networks the time series alone can suffice [1], but most works require additional data. Weather is an important factor in price forecasting [5]. It can be median day temperature, HDD, CDD [4] or wind speed [6]. Dates of sunset and sunrise can be useful too. Energy consumption and system load have an important impact on the daily electricity price [4]. Our goal is forecasting the daily electricity price for the German electricity market EEX, so let's present some information about it. Germany has a free electricity market, so models for free electricity markets can be applied to it. The energy production market changes every year; the main goals are phasing out nuclear energy and creating new renewable-source capacity. Germany is one of the largest consumers of energy in the world. In 2008, it consumed energy from the following sources: oil (34.8%), coal including lignite (24.2%), natural gas (22.1%), nuclear (11.6%), renewables (1.6%), and other (5.8%), whereas renewable energy is far more present in produced energy, since Germany imports about two thirds of its energy.
Germany is among the world's largest operators of non-hydro renewables capacity, including the world's largest operator of wind generation [7].
References
[1] Hsiao-Tien Pao. A Neural Network Approach to m-Daily-Ahead Electricity Price Prediction.
[2] Wei Wu, Jianzhong Zhou, Li Mo and Chengjun Zhu. Forecasting Electricity Market Price Spikes Based on Bayesian Expert with Support Vector Machines.
[3] S. Borovkova, J. Permana. Modelling Electricity Prices by the Potential Jump-Functions.
[4] R. Weron, A. Misiorek. Forecasting Spot Electricity Prices: A Comparison of Parametric and Semiparametric Time Series Models.
[5] J. Cherry, H. Cullen, M. Vissbeck, A. Small and C. Uvo. Impacts of the North Atlantic Oscillation on Scandinavian Hydropower Production and Energy Markets.
[6] Yuji Yamada. Optimal Hedging of Prediction Errors Using Prediction Errors.
[7] http://en.wikipedia.org/wiki/Energy_in_Germany
Basic hypotheses and estimations
In the case of linear regression we assume that the vector of responses <tex>y</tex> is a linear combination of a modified incoming matrix of features, which we designate as <tex>X'</tex>:

<tex>y=X'\beta.</tex>

This modified matrix is obtained from the initial data set by superposition, smoothing and autoregression. LARS [8] seems suitable for this kind of problem, so we use it in our work. The state of the art and the data suggest some additional hypotheses. We deal with a data set with two periodicities: a weekly period and a yearly period. A possible use of this is the generation of new features and the creation of a separate model for each day of the week.
Mathematical algorithm description
LARS overview
LARS (least angle regression [9]) is a new (2002) model selection algorithm. It has several advantages:
- It takes fewer steps than the older LASSO and Forward Selection.
- After a simple modification it gives the same results as LASSO and Forward Selection.
- It creates a connection between LASSO and Forward Selection.
Let's describe LARS. We have an initial data set <tex>(X,y)</tex>, where <tex>X=(x_1,\dots,x_m)</tex> is the matrix of variables, <tex>m</tex> is the number of variables, <tex>n</tex> is the number of objects, and <tex>y</tex> is the responses vector. Our goal is to recover the regression and make an estimation of <tex>\beta</tex>, where

<tex>\hat{y}=X\beta.</tex>

At each step the algorithm adds one new variable to the model and creates a parameter estimation for the current subset of variables. Finally we get the set of parameter estimations <tex>\{\hat{\beta}^{(1)},\dots,\hat{\beta}^{(m)}\}</tex>. The algorithm works under the condition that the vectors <tex>x_1,\dots,x_m</tex> are linearly independent.

Let us describe a single step of the algorithm. Let <tex>\hat{\mu}</tex> be the current estimation of <tex>y</tex> and <tex>\hat{c}=X^T(y-\hat{\mu})</tex> the vector of current correlations. The subset of vectors with the largest absolute correlations, extended by one new vector, gives the index set <tex>A=\{j: |\hat{c}_j|=\hat{C}\}</tex>, where <tex>\hat{C}=\max_j |\hat{c}_j|</tex>. Then

<tex>X_A=(\cdots\ s_j x_j\ \cdots)_{j\in A},</tex>

where <tex>s_j=\mathrm{sign}(\hat{c}_j)</tex>. Introduce

<tex>G_A=X_A^T X_A, \qquad A_A=(1_A^T G_A^{-1} 1_A)^{-1/2},</tex>

where <tex>1_A</tex> is the ones vector of size <tex>|A|</tex>. Then calculate the equiangular vector between the columns of <tex>X_A</tex>:

<tex>u_A=X_A w_A, \qquad w_A=A_A G_A^{-1} 1_A.</tex>

The vector <tex>u_A</tex> has equal angles with all vectors from <tex>X_A</tex>: <tex>X_A^T u_A=A_A 1_A</tex>.

After introducing all the necessary notation one can describe LARS. As Stagewise, the algorithm starts from the responses vector estimation <tex>\hat{\mu}_0=0</tex>. After that it makes consecutive steps in the equiangular vector direction. For each step we have <tex>\hat{\mu}_A</tex> with its estimation:
- Get the active index set <tex>A</tex> from the current correlations <tex>\hat{c}=X^T(y-\hat{\mu}_A)</tex>.
- Introduce the additional vector <tex>a=X^T u_A</tex>.
- Write the next approximation for the estimation:

<tex>\hat{\mu}_{A+}=\hat{\mu}_A+\hat{\gamma}u_A,</tex>

where

<tex>\hat{\gamma}=\min_{j\in A^c}{}^{+}\left\{\frac{\hat{C}-\hat{c}_j}{A_A-a_j},\ \frac{\hat{C}+\hat{c}_j}{A_A+a_j}\right\}.</tex>

These actions perform a minimization step over all variables with indexes from the set <tex>A</tex>. In addition, <tex>\hat{\gamma}</tex> is the least positive value such that a new index is added to <tex>A</tex> on the next step. The algorithm takes only <tex>m</tex> steps. The experiment in [9] confirms that LARS works better than LASSO and Forward Selection and makes a smaller number of iterations. The last-step estimation gives the least-squares solution.
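The quantities of a single LARS step can be sketched in numpy; the data below is randomly generated for illustration, and this is not the authors' implementation, only a check of the equiangular-vector property stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative standardized data: n objects, m variables.
n, m = 50, 4
X = rng.standard_normal((n, m))
X -= X.mean(axis=0)
X /= np.linalg.norm(X, axis=0)
y = rng.standard_normal(n)

mu = np.zeros(n)                                # current estimation of y
c = X.T @ (y - mu)                              # current correlations
C = np.max(np.abs(c))
A = np.flatnonzero(np.isclose(np.abs(c), C))    # active index set
s = np.sign(c[A])
XA = X[:, A] * s                                # sign-adjusted active columns
GA = XA.T @ XA
ones = np.ones(len(A))
AA = (ones @ np.linalg.solve(GA, ones)) ** -0.5
wA = AA * np.linalg.solve(GA, ones)
uA = XA @ wA                                    # equiangular vector

# uA makes equal angles with all active vectors: X_A^T u_A = A_A * 1_A,
# and it is a unit vector.
print(np.allclose(XA.T @ uA, AA * ones))  # True
print(np.isclose(np.linalg.norm(uA), 1.0))  # True
```

Both properties hold by construction whenever <tex>G_A</tex> is invertible, i.e. whenever the active columns are linearly independent.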
Apply LARS
We use three stages in this algorithm.
- Modify the initial data set <tex>X</tex> and the set to classify <tex>X_c</tex>, obtaining <tex>X'</tex> and <tex>X'_c</tex>.
- Apply the LARS algorithm:
- We split our initial sample into two parts: one for learning and one for control.
- To the learning sample we apply the procedure of removing spikes.
- From the learning sample we get a set of weight vectors, one for each step of the LARS algorithm.
- Using the control sample we choose the weight vector with the best MAPE rate.
- Calculate <tex>\hat{y}=X'_c\hat{\beta}</tex>.
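The model-selection stage above (choosing the step by control-sample MAPE) can be sketched as follows. The candidate weight vectors here are stand-ins obtained by least squares on growing feature subsets, not the actual LARS path, and the data is synthetic.

```python
import numpy as np

def mape(y, y_hat):
    return np.mean(np.abs(y - y_hat) / np.abs(y))

rng = np.random.default_rng(1)
n, m = 120, 5
X = rng.standard_normal((n, m))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.standard_normal(n) + 10.0

# Split into learning and control parts (control = last "month").
X_l, y_l = X[:-30], y[:-30]
X_v, y_v = X[-30:], y[-30:]

# Stand-in for the LARS path: least squares on the first i variables.
betas = []
for i in range(1, m + 1):
    b = np.zeros(m)
    b[:i] = np.linalg.lstsq(X_l[:, :i], y_l, rcond=None)[0]
    betas.append(b)

# Choose the step whose weights give the best MAPE on the control sample.
scores = [mape(y_v, X_v @ b) for b in betas]
best = int(np.argmin(scores))
print(best, scores[best])
```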
Local algorithm Fedorovoy description
This algorithm is a variation of k-means for time series. We have a time series <tex>y</tex> and suppose that its continuation depends on the preceding events. In our set we select subsets of preceding events and introduce a way to calculate the distance between two such subsets, together with a set of linear transformations for them. Note that for our problem the importance of objects increases towards the end of a vector, because we forecast the vector that immediately follows it; to account for this a weight parameter is introduced, and the distance is a weighted distance between the transformed subsequences.
From this distance definition we find the elements closest to the current subsequence. We use this set of elements, sorted by ascending distance, to create the forecast as a weighted average of their continuations, with weights determined by the distances.
So we use an integrated local averaging method to create the forecast. This algorithm needs optimization over several parameters and it is not useful without this procedure [11]. Our model requires such optimization (the forecast length is fixed and equals one month). To define the parameters we use a two-step iterative process:
- Optimization over the first group of parameters, with the second fixed.
- Optimization over the second group of parameters, with the first fixed.
It is necessary to keep in mind that we use a specialized version of the general algorithm from [11]. The distance function and the weights for each element of the neighbour set can be chosen from other basic assumptions.
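A minimal sketch of this local averaging scheme is given below. The exponential in-window weighting and the inverse-distance neighbour weights are one plausible instantiation, not necessarily the exact functions of the original algorithm.

```python
import numpy as np

def local_forecast(series, k_hist, n_neighbors, horizon, lam=0.9):
    """Forecast `horizon` points by averaging the continuations of the
    n_neighbors historical windows closest to the last k_hist points.
    Points near the window end get larger weight via lam ** age."""
    series = np.asarray(series, dtype=float)
    tail = series[-k_hist:]
    # Importance grows towards the end of the window.
    w_in = lam ** np.arange(k_hist - 1, -1, -1)

    cands = []
    for start in range(len(series) - k_hist - horizon + 1):
        win = series[start:start + k_hist]
        cont = series[start + k_hist:start + k_hist + horizon]
        d = np.sqrt(np.sum(w_in * (win - tail) ** 2))
        cands.append((d, cont))
    cands.sort(key=lambda pair: pair[0])
    nearest = cands[:n_neighbors]

    # Weight continuations inversely to their distances.
    w = np.array([1.0 / (d + 1e-9) for d, _ in nearest])
    w /= w.sum()
    return sum(wi * cont for wi, (_, cont) in zip(w, nearest))

# Illustrative series with period 7, like weekly price seasonality:
t = np.arange(140)
y = 50 + 10 * np.sin(2 * np.pi * t / 7)
print(local_forecast(y, k_hist=14, n_neighbors=3, horizon=7).round(1))
```

On a perfectly periodic series the nearest windows match the tail almost exactly, so the forecast reproduces the true continuation.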
Apply Local algorithm Fedorovoy
For the Local algorithm Fedorovoy we use only the responses vector <tex>Y</tex>. To forecast future responses we have to initialize and optimize the parameters of this algorithm (see the section above).
- Optimize the parameters of the algorithm.
- Apply the algorithm to the last <tex>l</tex> elements of our responses set <tex>Y</tex>.
Set of algorithm variants and modifications
Numerical operations
The first component consists of simple numerical operations, for example square rooting or cubing. We also add arithmetical operations like addition and multiplication. Superpositions of these functions can be used in our new variables matrix <tex>X'</tex>.
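This feature construction can be sketched as follows; the particular set of transforms (square root, cube, pairwise sums and products) is the one named in the text, while the function name is ours.

```python
import numpy as np

def numeric_features(X):
    """Augment X with element-wise transforms and pairwise combinations."""
    cols = [X]
    cols.append(np.sqrt(np.abs(X)))   # square root (abs keeps it real)
    cols.append(X ** 3)               # cube
    n, m = X.shape
    for i in range(m):
        for j in range(i + 1, m):
            cols.append((X[:, i] * X[:, j])[:, None])  # products
            cols.append((X[:, i] + X[:, j])[:, None])  # sums
    return np.hstack(cols)

X = np.arange(12.0).reshape(4, 3)
print(numeric_features(X).shape)  # (4, 15): 3 + 3 + 3 + 2 * 3 columns
```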
Smoothing function
The second component consists of smoothing functions. We use Parzen window smoothing with two parameters: the width of the Parzen window <tex>h</tex> and the period <tex>t</tex>. For each element of the sample we calculate

<tex>{\xi'}_i^k = \sum_{j=-h}^h \omega_{j} \xi_{i+tj}^l,</tex>

where <tex>k</tex>, <tex>l</tex> are indexes of elements from the data set. The only condition on <tex>\{\omega_j\}_{j=-h}^h</tex> is <tex>\sum_{j=-h}^h \omega_{j}=1</tex>. In our case we use one kernel function from the set of different kernel functions: the Epanechnikov kernel, <tex>\omega_{\pm j}=\frac{3}{4k}(1-(\frac{j}{h})^2)</tex>, where <tex>k</tex> is the normalization coefficient, <tex>k=\sum_{j=-h}^h \omega_{j}</tex>. There is a period of length 7 in the data set, so for the responses it seems reasonable to apply kernel smoothing several times with different periods <tex>t \in \{1,7\}</tex>.
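A sketch of this smoothing, assuming clipped indexing at the series boundaries (the text does not specify the edge treatment):

```python
import numpy as np

def epanechnikov_weights(h):
    """Normalized Epanechnikov weights w_j for j = -h..h (sum to 1)."""
    j = np.arange(-h, h + 1)
    w = 0.75 * (1.0 - (j / h) ** 2)
    return w / w.sum()

def parzen_smooth(x, h, t=1):
    """xi'_i = sum_{j=-h..h} w_j * x_{i + t*j}; out-of-range indexes
    are clipped to the nearest valid one (an assumption of this sketch)."""
    x = np.asarray(x, dtype=float)
    w = epanechnikov_weights(h)
    n = len(x)
    out = np.zeros_like(x)
    for i in range(n):
        idx = np.clip(i + t * np.arange(-h, h + 1), 0, n - 1)
        out[i] = np.dot(w, x[idx])
    return out

x = np.array([1.0, 5.0, 1.0, 5.0, 1.0, 5.0, 1.0, 5.0])
print(parzen_smooth(x, h=2, t=1).round(2))
```

Setting `t=7` smooths each weekday against the same weekday of neighbouring weeks, as suggested in the text.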
Autoregression
In our model we add autoregression on the variables and the responses to the set of primitive functions. For each added variable we use a parameter <tex>h\in H, H\subset \mathbb{N}</tex>, the shift of the data:

<tex>{\xi'}_j^k=\xi_{j-h}^l,</tex>

where <tex>k,l</tex> are indexes from the data matrix. For the objects in the sample to classify we have to calculate it step by step for the responses. If there is no value for <tex>\xi_{i-h}^l</tex>, <tex>i<h</tex>, we assign this value to be zero. This decreases the correlation value for this autoregression variable in the LARS algorithm. But our sample has about three thousand elements, and for small values of <tex>h</tex>, <tex>h<10</tex>, this factor decreases the correlation by about <tex>0.1-1\%</tex> or less on average.
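The shift with zero filling described above is a one-liner in numpy:

```python
import numpy as np

def lag_feature(x, h):
    """xi'_j = xi_{j-h}; the h missing leading values are set to zero,
    as described in the text. Assumes h >= 1."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    out[h:] = x[:-h]
    return out

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
print(lag_feature(x, 2))  # [0. 0. 3. 1. 4.]
```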
As an alternative we use another way to create the autoregression matrix. There are periodicities in our data set. To reshape our answers matrix we create a set of new variables from the responses. For a period of length <tex>t</tex> choose <tex>H=\{1,2,\dots,t-1\}</tex>. Then create a model for each part of the period. From the full sample of row indexes <tex>{I}_t</tex> select <tex>I_{k,t}=\{k, k+t, k+2t, \dots\}</tex>. For each <tex>k\in\{1,2,\dots,t\}</tex> LARS then fits its own linear regression model on the corresponding matrix. Equivalently, we modify our variables matrix according to the scheme

<tex>\begin{pmatrix}k&\xi_{k-1}^l&\xi_{k-2}^l&\cdots&\xi_{k-(t-1)}^l \\ k+t&\xi_{k+t-1}^l&\xi_{k+t-2}^l&\cdots&\xi_{k+t-(t-1)}^l \\ k+2t&\xi_{k+2t-1}^l&\xi_{k+2t-2}^l&\cdots&\xi_{k+2t-(t-1)}^l \\ \cdots&\cdots&\cdots&\cdots&\cdots \\ \end{pmatrix}</tex>

The first column contains the indexes that we keep in our variables matrix. To the right of them are the variables that we add to the new variables matrix.
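The index selection <tex>I_{k,t}</tex> behind this per-period modelling can be sketched as:

```python
import numpy as np

def periodic_index_sets(n_rows, t):
    """Split row indexes 0..n_rows-1 into t sets I_k = {k, k+t, k+2t, ...},
    one per position in the period (e.g. per day of week for t = 7)."""
    return [np.arange(k, n_rows, t) for k in range(t)]

sets = periodic_index_sets(30, 7)
print([s.tolist() for s in sets[:2]])  # [[0, 7, 14, 21, 28], [1, 8, 15, 22, 29]]
```

A separate regression model would then be fitted on the rows of each index set.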
Removing spikes
Another primitive function in our set is the procedure of removing spikes from the sample. This function uses two parameters: the maximum rates of the first and second type errors, <tex>r_1,r_2</tex>. At the first step our algorithm classifies the whole sample by using the initial sample <tex>X</tex>. At the second step we remove objects from the sample if the ratio of the error to the true value

<tex>\frac{|y_i-\hat{y}_i|}{|y_i|} > r_1</tex>

or if the ratio of the error to the responses' mean value

<tex>\frac{|y_i-\hat{y}_i|}{\overline{|y_i|}} > r_2.</tex>

We get <tex>r_1,r_2</tex> from cross-validation: 12 times we split the sample into a control and a learning sample. From the learning sample we create a forecast for the control sample. By optimizing over <tex>r_1,r_2</tex> we get the minimum of the mean MAPE for the control set.
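The filtering step with the two thresholds above can be sketched as follows (the initial fit that produces `y_hat` is outside the scope of this snippet):

```python
import numpy as np

def remove_spikes(X, y, y_hat, r1, r2):
    """Drop objects whose relative error exceeds r1, or whose error
    relative to the mean |y| exceeds r2 (thresholds as in the text)."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    err = np.abs(y - y_hat)
    keep = (err / np.abs(y) <= r1) & (err / np.mean(np.abs(y)) <= r2)
    return X[keep], y[keep]

X = np.arange(8.0).reshape(4, 2)
y = np.array([10.0, 10.0, 10.0, 10.0])
y_hat = np.array([11.0, 10.0, 25.0, 9.5])   # third object is a spike
X2, y2 = remove_spikes(X, y, y_hat, r1=0.2, r2=0.2)
print(len(y2))  # 3
```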
Normalization
For some variables we need normalization; this primitive function is necessary in our sample. It can be

<tex>{\xi'}_j^k=\frac{\xi_{j}^l-\xi_{min}^l}{\xi_{max}^l - \xi_{min}^l}</tex>

for each <tex>\xi_j^l</tex> from <tex>\xi^l</tex>. This normalization gives nearly equal initial conditions for each variable in LARS, which is what we need. This primitive is useful for both responses and variables.
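The min-max scaling above, applied column-wise:

```python
import numpy as np

def min_max_normalize(X):
    """Scale each column to [0, 1] as in the formula above.
    Assumes no column is constant (max > min)."""
    X = np.asarray(X, dtype=float)
    lo = X.min(axis=0)
    hi = X.max(axis=0)
    return (X - lo) / (hi - lo)

X = np.array([[1.0, 50.0], [2.0, 100.0], [3.0, 75.0]])
print(min_max_normalize(X))
```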
Additional notes
It is necessary to make some additional notes. Some functions can be useful only for a subset of the variables, and we have to take this fact into account. Another thing to pay attention to is the possible use of superpositions of primitives.
All these statements need additional research and computing experiments in the other parts of this article to create a good model for our problem.
System description
- Link to the file system.docs
- Link to the system files
Report on computational experiments
Visual analysis of the algorithm's performance
Analysis of the algorithm's quality
Analysis of the algorithm's dependence on parameters
Report on the obtained results
References
This article is an unverified study assignment.
Until the specified deadline the article must not be edited by other participants of the MachineLearning.ru project. After that date any participant may correct the article as they see fit and remove this warning, which is displayed by the {{Задание}} template. See also the methodological guidelines on using the MachineLearning.ru resource in the educational process.