Improving Electrical Power Grid Performance & Reliability Through the Optimization of FACTS

FACTS Placement Optimization

For Multi-Line Contingencies

Josh Wilkerson

November 30, 2005

Abstract

In its current state, the United States power grid is susceptible to crippling rolling blackouts, as seen in the massive failure of 2003. These large-scale failures occur through what is called a cascading failure in the power grid. Cascading failures are typically triggered by one or more line contingencies (a contingency being the event of a line becoming unable to carry its load). One available means toward a solution is the FACTS device, which provides more control over how the power of a downed line is redistributed in the system. The problem is that FACTS devices are very expensive, so it would be ideal to place a minimal number of them while still achieving a certain level of control over the system. This experiment analyzes the placement of FACTS devices in the event of multiple lines going down (multi-line contingencies).

Contents

1. Introduction
2. Problem Statement
2.1 Previous Work
3. Approach
3.1 Parameters
3.2 Solution Type
3.3 Representation
3.4 Population Initialization
3.5 Offspring Generation
3.6 Parent Selection
3.7 Survivor Selection
3.8 Termination Condition
4. Experimental Setup
6. Results
7. Conclusions
8. Future Work
9. References

1. Introduction

In August of 2003, the United States was crippled by a massive rolling blackout that caused considerable economic damage to the country. Such blackouts occur more often than they should because the existing power system was not intended for the type of use we now subject it to; specifically, there is a “collision between the physics of the system and the economic rules that now regulate it” [1]. (The inadequacies of the current power system stem from a number of problems whose specifics will not be addressed in this paper; it is sufficient for our purposes to understand that a problem of this nature exists. See [1] for further discussion of the problems with the power grid.) Typically, these massive failures arise from what is called a cascading failure due to contingencies. A contingency can be generally described as the event of a line, for whatever reason, becoming unable to carry its load. A cascading failure occurs when one or more contingencies happen initially (due to terrorism, natural interference, equipment failure, etc.), shifting their loads onto other lines; this in turn overloads one or more of those lines, causing further contingencies, and so on. The core issue is that the load of the downed lines is not redistributed optimally: other lines overload, their load is again distributed sub-optimally, still more lines go down, and the process continues until the load can balance itself again. Cascading failures are clearly a serious problem for our nation, which needs reliable power to operate.

Fortunately, a means toward a solution is available: the Flexible AC Transmission System (FACTS) device. The basic function of a FACTS device is to enhance controllability and increase power transfer capability. In practice, the device allows for more control over how load is distributed by controlling how much power passes through the line it is installed on. When placed on a line, a FACTS device can pull power away from other lines onto its own line, helping to alleviate the problem of sub-optimal power redistribution and allowing a closer-to-optimal redistribution of power in the event of one or more lines going down. The problem is that each FACTS device is very expensive, making it infeasible to place many of them. This creates a placement problem: what are the optimal positions for a minimal number of FACTS devices such that the power grid achieves a certain level of security against cascading failure? The question can be addressed by analyzing how, given a FACTS placement, the grid behaves after one or more lines go down.

2. Problem Statement

As mentioned above, the main obstacle to widespread usage of FACTS devices is their prohibitive cost (on the order of tens of millions of dollars), making it critically important that they are installed at locations where they can have a maximum positive impact on the power grid. One method of quantifying this impact is through the use of a Power Index metric such as:

PI = Σ_i (S_i / S_i^max)^(2n)

Equation 1: Power Index Formula

where S_i is the power flow through line i and S_i^max is the power flow rating of line i. This particular metric imposes a higher “penalty” on lines that are more highly loaded; in fact, by increasing n, the disparity between overloads and near-overloads can be dramatically increased. However, for simplicity’s sake we will assume that n = 1. Minimizing this metric has two beneficial results:

1. Overloads (i.e., when S_i > S_i^max) are minimized, with higher overloads incurring heavier penalties than lower overloads, and

2. Power flow is balanced, because any imbalance is penalized, resulting in fewer line losses.
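To make the metric concrete, here is a minimal sketch of Equation 1 in Python; the function name and the example values in the comments are illustrative, not part of the original work:

```python
# A minimal sketch of the Power Index (PI) metric in Equation 1:
# PI = sum over lines i of (S_i / S_i_max)^(2n), where S_i is the flow
# on line i and S_i_max is its rating. Larger n widens the gap between
# overloads and near-overloads.

def power_index(flows, ratings, n=1):
    """Sum the (loading ratio)^(2n) penalty over all lines."""
    return sum((s / s_max) ** (2 * n) for s, s_max in zip(flows, ratings))

# A line at exactly its rating contributes 1; an overloaded line
# contributes more than 1, and increasingly so for larger n.
```

With n = 1, a line at half load contributes 0.25 to the sum, while a line at 120% of its rating contributes 1.44.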

In order to fully gauge the impact of a given FACTS placement on the grid, it is not enough to simply set it against a single contingency scenario; instead, it is pitted against a number of scenarios, summing the results of Equation 1 over all scenarios considered. A considerable amount of work has already gone into analyzing FACTS placements in the face of single-line contingencies, but is it really enough to call a placement good or bad based on how it handles a single line going down? Consider this: most natural events that cause contingencies (tornados, ice storms, etc.) are not localized enough to affect only a single line; it is more likely that multiple lines in a system will be affected. Moreover, power companies already test their existing physical systems against two-line contingencies. This implies that a FACTS placement would be better gauged if it is run against multi-line contingency (MLC) scenarios.

The problem can now be stated as follows: given a fixed number of FACTS devices and a power grid with a loading profile, find the installation locations that minimize the Power Index metric of Equation 1 calculated over multiple MLC’s. For this particular experiment, the placement of 5 FACTS devices into the IEEE 118-Bus test system is analyzed.

2.1 Previous Work

As mentioned in Section 2, a considerable amount of work has already gone into performing and analyzing simulations involving FACTS placements and single-line contingencies. Much of this work has found good placements for single-line contingencies using both brute-force and natural computation methods [2]. However, little work has investigated FACTS placements in the face of MLC’s. As mentioned earlier, power companies test their systems for two-line contingencies, but FACTS devices are not typically part of that testing. [4] describes a way to evaluate contingency scenarios, but it is of limited use to this experiment because FACTS devices are not considered. After a fairly exhaustive search, very little published work on FACTS placements and MLC’s was found, implying that this experiment ventures into a somewhat new area of research.

3. Approach

This problem has a particularly large problem space: there is a significant number of both potential scenarios to consider and potential solutions available. For example, assume there are 150 lines in the power system under consideration and 5 FACTS devices to place. There are then 150 choose 5 (that is, 150!/(5!*(150-5)!)), or about 591 million, unique FACTS placements. Now, say we are going to analyze MLC scenarios in which 2 lines go down at once. There are 150 choose 2, or 11,175, unique MLC scenarios to consider. So, in order to search the entire problem space, one would have to consider about (5.91*10^8)*(11,175), or 6.611*10^12, scenario evaluations. Even for the fastest deterministic program this would be a sizable task. However, stochastic evolutionary algorithms excel at problems such as these, because the problem space is navigated in a search for (near-)optimal solutions rather than combed exhaustively for the best possible solution. This is the approach used in this experiment.
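The counting argument above can be checked directly; this snippet simply reproduces the arithmetic:

```python
import math

# Reproducing the search-space arithmetic for a 150-line system
# with 5 FACTS devices and 2-line contingency scenarios.
placements = math.comb(150, 5)  # unique FACTS placements: 591,600,030
scenarios = math.comb(150, 2)   # unique 2-line MLC scenarios: 11,175
total = placements * scenarios  # full enumeration: ~6.611 * 10^12
```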

3.1 Parameters

The evolutionary algorithm (EA) used for this experiment takes many parameters. This was done in an effort to make the EA as flexible as possible. The parameters are as follows:

  • End Condition Parameters:

-Time allowed per run

-Total number of generations per run

-Goal fitness to be reached

  • Evolution Parameters:

-Whether or not to use mutation

-Chance that an individual will mutate (only used if mutation is used)

-Given that an individual is viable for mutation, the chance that an individual gene will mutate (only used if mutation is used)

-Number of parent pairs to select

-Number of offspring per parent pair

-Population size

-Selection mode (at this point only rank-based selection and Boltzmann selection are supported)

-Number of FACTS devices to place

-Whether or not to allow genetic clones in the population

-A flag indicating whether or not to use MLC scenarios

-The number of MLC scenarios to consider (only used if in MLC mode)

-The number of lines to go down in each MLC scenario (only used if in MLC mode)

Because prior work in this area is fairly limited, the goal of this experiment is to “feel out” the search space; that is, to map the behavior of FACTS placements in MLC scenarios and to see how placements that perform well in random MLC scenarios perform in single-line contingency (SLC) scenarios. To test these attributes, the parameters this experiment focuses on are the gene mutation chance, the contingency mode (SLC’s are considered in order to provide a baseline for the MLC’s), and the number of lines to go down in each MLC scenario.

3.2 Solution Type

In general, it makes no sense to place more than one FACTS device on a single line. Thus, as each individual’s genes/placements are generated, the EA checks that each placement is unique within the individual in question. Certain lines have also been labeled as infeasible for FACTS devices, so the EA will not allow a FACTS placement on any of those lines.

3.3 Representation

A solution is represented as a fixed-length array of integers identifying the lines on which FACTS devices are placed. This representation was chosen over a binary one because the number of FACTS devices is fixed: each solution has a fixed number of placements, which in the vast majority of cases is much smaller than the total number of lines in the system (which would be the length of the gene array in a binary representation). It is therefore more efficient in most cases to use an array of integers representing line placements than a binary string representing all lines.

3.4 Population Initialization

The initial population is generated by randomly creating (valid) individuals and adding them to the population until the population size specified by the parameter file is reached. Random generation is used instead of a heuristic method because it has been shown that using a heuristic to generate a more fit initial set of individuals does not have a dramatic effect on the final result, and is thus not worth the time it would take to create such a set.
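The initialization described above, together with the validity rules of Section 3.2, can be sketched as follows; the line count, infeasible-line set, and population/device sizes here are illustrative values, not those of the actual EA:

```python
import random

# Sketch of random population initialization with the validity rules of
# Section 3.2: placements are unique within an individual and never on
# an infeasible line. NUM_LINES and INFEASIBLE are illustrative.
NUM_LINES = 186
INFEASIBLE = {3, 17, 42}  # hypothetical lines where FACTS devices cannot go

def random_individual(num_devices, rng=random):
    """Pick unique, feasible line placements for the FACTS devices."""
    feasible = [line for line in range(NUM_LINES) if line not in INFEASIBLE]
    return rng.sample(feasible, num_devices)  # sample without replacement

def init_population(pop_size, num_devices):
    return [random_individual(num_devices) for _ in range(pop_size)]

population = init_population(pop_size=50, num_devices=5)
```

Sampling without replacement from the feasible set enforces both validity rules in one step.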

3.5 Offspring Generation

The offspring are generated using the mutation and recombination methods described below. The number of offspring created each generation is equal to the amount specified by the parameter file.

3.6 Parent Selection

For this experiment, the EA used the Boltzmann selection method. The Boltzmann mechanism can be illustrated by the process of simulated annealing. In short, annealing is the process of heating and cooling a substance during manufacturing to alter the substance’s properties. In this scheme, selective pressure plays the role of temperature, increasing and decreasing as the evolutionary procedure executes. As the temperature increases (i.e., the population approaches an optimum), the chance of selecting highly fit individuals as parents decreases while the chance of selecting less fit individuals increases. Conversely, as the temperature decreases (i.e., the population is not centered on an optimum), the opposite occurs: the probability of selecting highly fit parents increases and the probability of selecting less fit parents decreases. The resulting effect is that as the process begins to converge on an optimum, less fit individuals are chosen in order to diversify the gene pool, allowing the population to escape potential local optima. When the population as a whole is not centered on a particular optimum, highly fit individuals are chosen in order to speed up convergence. Over a long enough period of time, the population would ideally bounce from optimum to optimum until it found the global optimum, from which it would not escape, since the individuals there would be more fit than those anywhere else.

The increases and decreases in selective pressure should be based on the amount of diversity in the population. Graph 1 shows the ideal selection probabilities for the two classes of individuals. These curves resemble sinusoids, so the EA uses 0.5*(cos(x) + 1) as the curve for more fit individuals and the shifted version 0.5*(cos(x + π) + 1) for less fit individuals, where 0 ≤ x ≤ π and x is a value indicating the level of diversity in the population.

Graph 1: Selective Probability

The method the EA employs to gauge population diversity is based on the average fitness and the standard deviation of the population. First, the average fitness of the population is calculated. Next, the proportion of the population lying within half a standard deviation of the average is determined; this gives a percentage indicating how much of the population is ‘in range’ of the average fitness value. This percentage is then normalized by multiplying by π, yielding the diversity value for the population.

Algorithm 1: Algorithm to Calculate the Diversity Value

This means that a large diversity value indicates a large portion of the population near the average: the population is approaching an optimum, so more individuals with lower fitness values and fewer with high fitness values should be selected, decreasing the selective pressure, which in turn should increase genetic diversity. Conversely, if the diversity value is small, few individuals are near the average and it is safe to select a large number of highly fit individuals, increasing the selective pressure and encouraging convergence. Graph 1 illustrates this. As a safeguard, if the diversity value nears one of its extremes (0 or π), the probability is simply set to 0.9 or 0.1, depending on the curve and the extreme. This ensures that the parent pool will always have a certain level of diversity. These clamped curves are the ‘Modified’ versions in Graph 1.
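The diversity calculation and the clamped probability curves described above can be sketched as follows. Note one simplifying assumption: the 0.1/0.9 clamp is applied here across the whole range, whereas the text only specifies it near the extremes.

```python
import math
import statistics

def diversity_value(fitnesses):
    """Proportion of the population within half a standard deviation
    of the mean fitness, scaled to [0, pi] (per Algorithm 1 as described)."""
    mean = statistics.mean(fitnesses)
    sd = statistics.pstdev(fitnesses)
    if sd == 0:
        return math.pi  # identical fitnesses: fully converged population
    in_range = sum(abs(f - mean) <= sd / 2 for f in fitnesses)
    return (in_range / len(fitnesses)) * math.pi

def selection_probabilities(x, eps=0.1):
    """Selection probabilities for more- and less-fit individuals as a
    function of the diversity value x in [0, pi]. The eps clamp mirrors
    the 0.9/0.1 safeguard (applied everywhere here, an assumption)."""
    p_more = 0.5 * (math.cos(x) + 1)            # high when diverse (x near 0)
    p_less = 0.5 * (math.cos(x + math.pi) + 1)  # high when converged (x near pi)
    p_more = min(max(p_more, eps), 1 - eps)
    p_less = min(max(p_less, eps), 1 - eps)
    return p_more, p_less
```

At x = 0 the clamped probabilities are 0.9 for more fit and 0.1 for less fit individuals, and the roles reverse at x = π.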

In order for all of this to work the population has to be divided into more fit and less fit sections. For this experiment, the upper 35% of the population (ranked by fitness) is considered to be “more fit”.

When using the Boltzmann selection scheme, the EA first sorts the population and calculates the diversity value using Algorithm 1. The selection probabilities are then assigned from this diversity value in the manner discussed at the beginning of this subsection. After that, individuals are chosen at random from the population. An individual from the less fit section of the population is added to the parent pool with the probability computed for less fit individuals; likewise, an individual from the more fit section is added with the probability computed for more fit individuals. This process repeats until the parent pool reaches the size specified by the parameter provided to the EA.
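A sketch of this pool-filling loop, assuming the population is already sorted best-first and the two probabilities have been computed from the diversity value; the function name and the ELITE_FRACTION constant (mirroring the 35% split above) are illustrative:

```python
import random

ELITE_FRACTION = 0.35  # upper 35% of the ranked population is "more fit"

def fill_parent_pool(population, pool_size, p_more, p_less, rng=random):
    """Draw random individuals, accepting each with the probability
    assigned to its fitness class, until the pool is full.
    Assumes population is sorted best-first."""
    cutoff = int(len(population) * ELITE_FRACTION)
    pool = []
    while len(pool) < pool_size:
        idx = rng.randrange(len(population))
        accept = p_more if idx < cutoff else p_less
        if rng.random() < accept:
            pool.append(population[idx])
    return pool
```

Because the clamp keeps both probabilities above zero, the loop always terminates.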

3.6.3 Recombination

The EA uses the uniform recombination method. In this method, each gene has a chance of coming from either parent; for this experiment the chance is equal for both parents. Since both parents may contain the same line placement, each time a new gene is supplied, all previously supplied genes are checked to ensure that the placement is unique. If the gene in question is already among the new individual’s genes, the other parent’s gene is used instead.
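A sketch of this operator, assuming both parents are valid individuals of equal length; it mirrors the fallback rule in the text (use the other parent's gene on a duplicate):

```python
import random

def uniform_recombine(parent_a, parent_b, rng=random):
    """Uniform recombination: each gene comes from either parent with
    equal chance; on a duplicate placement, fall back to the other
    parent's gene for that position."""
    child = []
    for gene_a, gene_b in zip(parent_a, parent_b):
        # Pick which parent supplies this position first.
        first, second = (gene_a, gene_b) if rng.random() < 0.5 else (gene_b, gene_a)
        child.append(first if first not in child else second)
    return child
```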

3.6.4 Mutation

In this experiment, the EA has mutation activated. After an individual is created, it is determined whether the individual is a candidate for mutation, based on the individual mutation chance provided as a parameter to the EA. If the individual is chosen for mutation, each gene is considered in turn and has a chance (given by the gene mutation chance parameter) of mutating to another random line placement that is unique within the solution.
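The two-stage mutation described above can be sketched as follows; the rate defaults and the line pool are illustrative, not the EA's actual parameter values:

```python
import random

def mutate(individual, num_lines, infeasible, ind_rate=0.2, gene_rate=0.1,
           rng=random):
    """Stage 1: the individual mutates with probability ind_rate.
    Stage 2: each gene then mutates with probability gene_rate to a
    random feasible line not already in the individual."""
    if rng.random() >= ind_rate:
        return list(individual)  # not selected for mutation
    mutant = list(individual)
    for i in range(len(mutant)):
        if rng.random() < gene_rate:
            candidates = [line for line in range(num_lines)
                          if line not in infeasible and line not in mutant]
            if candidates:
                mutant[i] = rng.choice(candidates)
    return mutant
```

Excluding the current genes from the candidate list preserves the uniqueness rule of Section 3.2.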

3.6.5 Fitness Evaluation

As would be expected, the fitness value used for a FACTS placement is the PI metric described in Section 2, calculated by averaging the result of Equation 1 over a series of MLC scenarios. As shown in Section 3, it is infeasible to consider all MLC scenarios; instead, scenarios are chosen at random using Monte Carlo sampling (in order to guarantee a good spread of scenarios for each placement). To remain near par with the SLC version of this problem, 180 different MLC scenarios are considered for each placement (the IEEE 118-Bus system has roughly 180 lines, implying roughly 180 SLC scenarios). Since the EA’s goal is to maximize the fitness function, the PI metric is altered so that the highest value is the best rather than the lowest: an individual’s fitness is the negative of its PI metric.
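This evaluation can be sketched as follows; evaluate_grid is a hypothetical stand-in for the actual power-flow simulation that returns the Equation 1 value for a placement under a given scenario, which is not part of this sketch:

```python
import random

def fitness(placement, all_lines, evaluate_grid, lines_down=2,
            num_scenarios=180, rng=random):
    """Average the PI metric over randomly sampled MLC scenarios,
    then negate it so that a maximizing EA prefers lower PI."""
    total = 0.0
    for _ in range(num_scenarios):
        scenario = rng.sample(all_lines, lines_down)  # Monte Carlo sample
        total += evaluate_grid(placement, scenario)   # PI for this scenario
    return -(total / num_scenarios)
```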