
Wednesday, June 06, 2012

ZIBopt http://zibopt.zib.de/

The ZIB Optimization Suite is a tool for generating and solving mixed integer programs. It consists of the following parts:
  • ZIMPL: a mixed integer programming modeling language
  • SoPlex: a linear programming solver
  • SCIP: a mixed integer programming solver and constraint programming framework
The user can easily generate linear programs and mixed integer programs with the modeling language ZIMPL. The resulting model can be loaded directly into SCIP and solved. In the solution process SCIP may use SoPlex as the underlying LP solver.
Since all three tools are available in source code and free for academic use, the suite is ideal for academic research and for teaching integer programming.

CVXOPT

Python Software for Convex Optimization
CVXOPT is a free software package for convex optimization based on the Python programming language. It can be used with the interactive Python interpreter, on the command line by executing Python scripts, or integrated in other software via Python extension modules. Its main purpose is to make the development of software for convex optimization applications straightforward by building on Python’s extensive standard library and on the strengths of Python as a high-level programming language.

Current release

Version 1.1.5 includes:
  • efficient Python classes for dense and sparse matrices (real and complex), with Python indexing and slicing and overloaded operations for matrix arithmetic
  • an interface to most of the double-precision real and complex BLAS
  • an interface to LAPACK routines for solving linear equations and least-squares problems, matrix factorizations (LU, Cholesky, LDLT and QR), symmetric eigenvalue and singular value decomposition, and Schur factorization
  • an interface to the fast Fourier transform routines from FFTW
  • interfaces to the sparse LU and Cholesky solvers from UMFPACK and CHOLMOD
  • routines for linear, second-order cone, and semidefinite programming problems (a small example follows this list)
  • routines for nonlinear convex optimization
  • interfaces to the linear programming solver in GLPK, the semidefinite programming solver in DSDP5, and the linear, quadratic and second-order cone programming solvers in MOSEK
  • a modeling tool for specifying convex piecewise-linear optimization problems.
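
As a small illustration, here is a minimal sketch of the linear programming interface (the problem data below is an arbitrary example chosen for this post, not something shipped with CVXOPT): minimize 2x + y subject to -x + y \leq 1, -x - y \leq -2, -y \leq 0 and x - 2y \leq 4.

from cvxopt import matrix, solvers

# cost vector and inequality constraints G x <= h;
# CVXOPT builds matrices column by column
c = matrix([2.0, 1.0])
G = matrix([[-1.0, -1.0, 0.0, 1.0],    # coefficients of x
            [1.0, -1.0, -1.0, -2.0]])  # coefficients of y
h = matrix([1.0, -2.0, 0.0, 4.0])

sol = solvers.lp(c, G, h)
print(sol['x'])   # approximately x = 0.5, y = 1.5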

Availability

A platform-independent source package and a binary Windows installer are available from the Download section. CVXOPT is also available precompiled for the major platforms.

Authors

CVXOPT is developed by Martin Andersen (martin.skovgaard.andersen@gmail.com), Joachim Dahl (dahl.joachim@gmail.com), and Lieven Vandenberghe (vandenbe@ee.ucla.edu).
CVXOPT was originally developed for use in our own work, and is being made available in the hope that it may be useful to others. We welcome feedback, bug reports, and suggestions for improvements, but can only offer very limited support.

What is a solution to a zero-sum game?

In a two-person zero-sum game, the payoff to one player is the negative of that going to the other. Although zero-sum games are not terribly interesting to economists, who typically study situations where there are gains to trade, most common parlor games such as poker and chess are zero-sum: one player wins, one loses. According to von Neumann's theory, every zero-sum game has a value. Each player can guarantee himself this value against any play of his opponent, and can prevent the other player from doing any better than this. We typically write a zero-sum game by forming a matrix and allowing one player to choose the rows and the other the columns. The entries in the matrix are the payoffs to the row player. For example, in the game of matching pennies, we can write the payoff matrix
 1  -1
-1   1
so that the row player is trying to match the column player and the column player is trying to guess the opposite of the row player. The value of the game may be calculated either as the minimum of what the row player can achieve knowing the strategy of the column player (the minimax for the row player), or as the maximum of what the row player can guarantee himself when the column player knows his strategy (the maximin for the row player). Von Neumann's famous minimax theorem shows that these two quantities are the same.
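In symbols, writing A for the payoff matrix to the row player and \Delta_m, \Delta_n for the two players' sets of mixed strategies, the minimax theorem states that
v = \min_{q \in \Delta_n} \max_{p \in \Delta_m} p^T A q = \max_{p \in \Delta_m} \min_{q \in \Delta_n} p^T A q.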
It is possible to solve a zero-sum game using the simplex algorithm, or any other algorithm that can solve a linear programming problem. This is implemented below. To solve a zero-sum game, fill in the payoffs to the row player, separated by commas and with no blank lines. The program then finds the strategy for the column player that holds the row player's payoff to a minimum. For example, in the game of matching pennies:

1, -1
-1, 1

The program finds the column player strategy that holds the row player's payoff to a minimum, and reports the value of the game to the row player.
If you have questions about the program or about zero-sum games, check out the discussion on the forum.
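
The interactive program itself is not reproduced here, but the computation it performs can be sketched with CVXOPT (described earlier in this post). The formulation below is the textbook one and is only assumed to match the original program: minimize v over the column player's mixed strategy q = (q1, q2), subject to the row player's expected payoff against each pure row being at most v.

from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

# variables are (q1, q2, v); minimize v
c = matrix([0.0, 0.0, 1.0])

# rows 1-2: (A q)_i - v <= 0 for the matching pennies matrix A = [[1,-1],[-1,1]];
# rows 3-4: q >= 0.  CVXOPT builds matrices column by column.
G = matrix([[1.0, -1.0, -1.0, 0.0],   # coefficients of q1
            [-1.0, 1.0, 0.0, -1.0],  # coefficients of q2
            [-1.0, -1.0, 0.0, 0.0]]) # coefficients of v
h = matrix([0.0, 0.0, 0.0, 0.0])

# equality constraint: q1 + q2 = 1
A = matrix([[1.0], [1.0], [0.0]])
b = matrix([1.0])

sol = solvers.lp(c, G, h, A, b)
print(sol['x'])   # q = (0.5, 0.5) and value v = 0, as expected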

Zero Sum

A zero-sum game is a special case of a constant-sum game in which all outcomes involve a sum of all players' payoffs of 0. Hence, a gain for one participant is always at the expense of another, as in most sporting events. Given the conflicting interests, the equilibrium of such games is often in mixed strategies.

Zero-sum game

From Wikipedia, the free encyclopedia
In game theory and economic theory, a zero-sum game is a mathematical representation of a situation in which a participant's gain (or loss) of utility is exactly balanced by the losses (or gains) of the utility of the other participant(s). If the total gains of the participants are added up, and the total losses are subtracted, they will sum to zero. Thus cutting a cake, where taking a larger piece reduces the amount of cake available for others, is a zero-sum game if all participants value each unit of cake equally (see marginal utility). In contrast, non-zero-sum describes a situation in which the interacting parties' aggregate gains and losses are either less than or more than zero. A zero-sum game is also called a strictly competitive game while non-zero-sum games can be either competitive or non-competitive. Zero-sum games are most often solved with the minimax theorem which is closely related to linear programming duality,[1] or with Nash equilibrium.

Definition

The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal (generally, any game where all strategies are Pareto optimal is called a conflict game).[2]
Situations where participants can all gain or suffer together are referred to as non-zero-sum. Thus, a country with an excess of bananas trading with another country for its excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with.

Solution

For 2-player finite zero-sum games, the different game theoretic solution concepts of Nash equilibrium, minimax, and maximin all give the same solution. In the solution, players play a mixed strategy.

Example

A zero-sum game (payoffs listed as Red, Blue):

         A          B          C
1     30, -30    -10, 10    20, -20
2     10, -10    20, -20    -20, 20
A game's payoff matrix is a convenient representation. Consider for example the two-player zero-sum game shown above.
The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices.
Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points.
Now, in this example game both players know the payoff matrix and attempt to maximize the number of their points. What should they do?
Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, while with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. But what happens if Blue anticipates Red's reasoning and choice of action 1, and goes for action B, so as to win 10 points? Or if Red in turn anticipates this devious trick and goes for action 2, so as to win 20 points after all?
Émile Borel and John von Neumann had the fundamental and surprising insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point-loss independent of the opponent's strategy. This leads to a linear programming problem with the optimal strategies for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.
For the example given above, it turns out that Red should choose action 1 with probability 4/7 and action 2 with probability 3/7, while Blue should assign the probabilities 0, 4/7, and 3/7 to the three actions A, B, and C. Red will then win 20/7 points on average per game.

Solving

The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem. Suppose a zero-sum game has a payoff matrix M where element M_{i,j} is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found (see ref. [2], page 740) by solving the following linear program to find a vector u:
Minimize:
\sum_{i} u_i
Subject to the constraints:
u \geq 0
M u \geq 1.
The first constraint says each element of the u vector must be nonnegative, and the second constraint says each element of the  M u vector must be at least 1. For the resulting u vector, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each of the possible pure strategies.
If the game matrix does not have all positive elements, simply add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant, and will have no effect on the equilibrium mixed strategies.
The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program. Alternatively, it can be found by applying the above procedure to a modified payoff matrix: the transpose and negation of M (with a constant added to make all its elements positive), then solving the resulting game.
If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations. So such games are equivalent to linear programs, in general.[citation needed]
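
As a concrete check, here is a minimal sketch of this procedure in CVXOPT (featured earlier in this post), applied to the example game above. Blue is the minimizing player, so M is the transpose of Red's payoff matrix, and 21 is added to every element to make them positive, as just described.

from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

# minimize u1 + u2 subject to (M + 21) u >= 1 and u >= 0, both written
# as G u <= h; CVXOPT builds matrices column by column
c = matrix([1.0, 1.0])
G = matrix([[-51.0, -11.0, -41.0, -1.0, 0.0],   # coefficients of u1
            [-31.0, -41.0, -1.0, 0.0, -1.0]])   # coefficients of u2
h = matrix([-1.0, -1.0, -1.0, 0.0, 0.0])

u = solvers.lp(c, G, h)['x']
value = 1.0 / (u[0] + u[1])
print([value * ui for ui in u])   # Red's strategy, about (4/7, 3/7)
print(value - 21.0)               # value of the original game, about 20/7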

Non-zero-sum

Economics

Many economic situations are not zero-sum, since valuable goods and services can be created, destroyed, or badly allocated in a number of ways, and any of these will create a net gain or loss of utility to numerous stakeholders. Specifically, all trade is by definition positive-sum, because when two parties agree to an exchange each party must consider the goods it is receiving to be more valuable than the goods it is delivering. In fact, all economic exchanges must benefit both parties to the point that each party can overcome its transaction costs (or the transaction would simply not take place).
There is some semantic confusion in addressing exchanges under coercion. If we assume that "Trade X", in which Adam trades Good A to Brian for Good B, does not benefit Adam sufficiently, Adam will ignore Trade X (and trade his Good A for something else in a different positive-sum transaction, or keep it). However, if Brian uses force to ensure that Adam will exchange Good A for Good B, then this says nothing about the original Trade X. Trade X was not, and still is not, positive-sum (in fact, this non-occurring transaction may be zero-sum, if Brian's net gain of utility coincidentally offsets Adam's net loss of utility). What has in fact happened is that a new trade has been proposed, "Trade Y", where Adam exchanges Good A for two things: Good B and escaping the punishment imposed by Brian for refusing the trade. Trade Y is positive-sum, because if Adam wanted to refuse the trade, he theoretically has that option (although it is likely now a much worse option), but he has determined that his position is better served in at least temporarily putting up with the coercion. Under coercion, the coerced party is still doing the best they can under their unfortunate circumstances, and any exchanges they make are positive-sum.
There is additional confusion under asymmetric information. Although many economic theories assume perfect information, economic participants with imperfect or even no information can always avoid making trades that they feel are not in their best interest. Considering transaction costs, then, no zero-sum exchange would ever take place (although asymmetric information can reduce the number of positive-sum exchanges, as occurs in The Market for Lemons).

Psychology

The simplest and most common example from the subfield of social psychology is the concept of "social traps". In some cases, pursuing our personal interests can enhance our collective well-being, but in other cases parties pursue mutually destructive behavior as they choose their own ends.

Complexity

It has been theorized by Robert Wright in his book Nonzero: The Logic of Human Destiny, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. As former US President Bill Clinton states:
The more complex societies get and the more complex the networks of interdependence within and beyond community and national borders get, the more people are forced in their own interests to find non-zero-sum solutions. That is, win–win solutions instead of win–lose solutions.... Because we find as our interdependence increases that, on the whole, we do better when other people do better as well — so we have to find ways that we can all win, we have to accommodate each other....
—Bill Clinton, Wired interview, December 2000.[3]

Extensions

In 1944, John von Neumann and Oskar Morgenstern proved that any zero-sum game involving n players is in fact a generalized form of a zero-sum game for two players, and that any non-zero-sum game for n players can be reduced to a zero-sum game for n + 1 players, with the (n + 1)th player representing the global profit or loss.[4]

Misunderstandings

Zero-sum games and particularly their solutions are commonly misunderstood by critics of game theory, usually with respect to the independence and rationality of the players, as well as to the interpretation of utility functions. Furthermore, the word "game" does not imply the model is valid only for recreational games.[1]

Fictitious Play

A process by which players assume that the strategies of their opponents are randomly chosen from some unknown stationary distribution. In each period, a player selects her best response to the historical frequency of actions of her opponents. Julia Robinson proved that the process converges to the equilibrium in two-player zero-sum games. While the process does not always converge in other settings, it is known that if it converges, then the point of convergence is a Nash equilibrium of the game.

Fictitious play

From Wikipedia, the free encyclopedia
In game theory, fictitious play is a learning rule first introduced by G.W. Brown (1951). In it, each player presumes that the opponents are playing stationary (possibly mixed) strategies. At each round, each player thus best responds to the empirical frequency of play of his opponents. Such a method is of course adequate if the opponent indeed uses a stationary strategy, while it is flawed if the opponent's strategy is nonstationary. The opponent's strategy may, for example, be conditioned on the fictitious player's last move.

History

Brown first introduced fictitious play as an explanation for Nash equilibrium play. He imagined that a player would "simulate" play of the game in his mind and update his future play based on this simulation; hence the name fictitious play. In terms of current use, the name is a bit of a misnomer, since each play of the game actually occurs. The play is not exactly fictitious.

Convergence properties

In fictitious play Nash equilibria are absorbing states. That is, if at any time period all the players play a Nash equilibrium, then they will do so for all subsequent rounds. (Fudenberg and Levine 1998, Proposition 2.1) In addition, if fictitious play converges to any distribution, those probabilities correspond to a Nash equilibrium of the underlying game. (Proposition 2.2)
Generalized Rock Paper Scissors example:

      A      B      C
a   0, 0   1, 0   0, 1
b   0, 1   0, 0   1, 0
c   1, 0   0, 1   0, 0
The interesting question, therefore, is under what circumstances fictitious play converges. The process will converge for a 2-person game if:
  1. Both players have only a finite number of strategies and the game is zero sum (Robinson 1951)
  2. The game is solvable by iterated elimination of strictly dominated strategies (Nachbar 1990)
  3. The game is a potential game (Monderer and Shapley 1996-a, 1996-b)
  4. The game has generic payoffs and is 2xN (Berger 2005)
Fictitious play does not always converge, however. Shapley (1964) proved that in the game shown above (a limit case of generalized Rock, Paper, Scissors games), if the players start by choosing (a, B), the play will cycle indefinitely.
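
Robinson's convergence result is easy to see in a simulation. The sketch below runs fictitious play on matching pennies, using the modern simultaneous-update version discussed in the Terminology section below; the initial plays are arbitrary.

import numpy as np

# payoff matrix to the row player in matching pennies
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

row_counts = np.array([1.0, 0.0])   # arbitrary first moves
col_counts = np.array([0.0, 1.0])

for t in range(100000):
    # each player best responds to the opponent's empirical frequencies
    row_br = np.argmax(A @ (col_counts / col_counts.sum()))
    col_br = np.argmin((row_counts / row_counts.sum()) @ A)
    row_counts[row_br] += 1.0
    col_counts[col_br] += 1.0

print(row_counts / row_counts.sum())   # close to [0.5, 0.5]
print(col_counts / col_counts.sum())   # close to [0.5, 0.5]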

Terminology

Berger (2007) states that "what modern game theorists describe as 'fictitious play' is not the learning process that George W. Brown defined in his 1951 paper. Brown's original version differs in a subtle detail..." and points out that modern usage involves the players updating their beliefs simultaneously. Berger goes on to say that Brown clearly states that the players update alternatingly. Berger then uses Brown's original form to present a simple and intuitive proof of convergence in the case of nondegenerate ordinal potential games.
The term "fictitious" had earlier been given another meaning in game theory. Von Neumann and Morgenstern [1944] defined a "fictitious player" as a player with only one strategy, added to an n-player game to turn it into a n+1-player zero-sum game.

Semidefinite programming

From Wikipedia, the free encyclopedia
Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
Semidefinite programming is a relatively new field of optimization which is of growing interest for several reasons. Many practical problems in operations research and combinatorial optimization can be modeled or approximated as semidefinite programming problems. In automatic control theory, SDPs are used in the context of linear matrix inequalities. SDPs are in fact a special case of cone programming and can be efficiently solved by interior point methods. All linear programs can be expressed as SDPs, and via hierarchies of SDPs the solutions of polynomial optimization problems can be approximated. Finally, semidefinite programming has been used in the optimization of complex systems.

Motivation and definition

Initial motivation

A linear programming problem is one in which we wish to maximize or minimize a linear objective function of real variables over a polyhedron. In semidefinite programming, we instead use real-valued vectors and are allowed to take the dot product of vectors. Specifically, a general semidefinite programming problem can be defined as any mathematical programming problem of the form
\begin{array}{rl}
{\displaystyle \min_{x^1, \ldots, x^n \in \mathbb{R}^n}} & {\displaystyle \sum_{i,j \in [n]} c_{i,j} (x^i \cdot x^j)} \\
\text{subject to} & {\displaystyle \sum_{i,j \in [n]} a_{i,j,k} (x^i \cdot x^j) \leq b_k \qquad \forall k}. \\
\end{array}

Equivalent formulations

An n \times n matrix M is said to be positive semidefinite if it is the Gram matrix of some vectors (i.e., if there exist vectors x^1, \ldots, x^n such that m_{i,j} = x^i \cdot x^j for all i, j). If this is the case, we denote this as M \succeq 0. Note that there are several other equivalent definitions of being positive semidefinite.
Denote by \mathbb{S}^n the space of all real symmetric n \times n matrices. The space is equipped with the inner product (where {\rm tr} denotes the trace)
\langle A,B\rangle_{\mathbb{S}^n} = {\rm tr}(A^T B) = \sum_{i,j=1}^n A_{ij}B_{ij}.
We can rewrite the mathematical program given in the previous section equivalently as
\begin{array}{rl}
{\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle_{\mathbb{S}^n} \\
\text{subject to} & \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k, \quad k = 1,\ldots,m \\
& X \succeq 0
\end{array}
where the i,j entry of C is given by c_{i,j} from the previous section and A_k is the n \times n matrix whose i,j entry is a_{i,j,k}.
Note that if we add slack variables appropriately, this SDP can be converted to one of the form
\begin{array}{rl}
{\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle_{\mathbb{S}^n} \\
\text{subject to} & \langle A_i, X \rangle_{\mathbb{S}^n} = b_i, \quad i = 1,\ldots,m \\
& X \succeq 0.
\end{array}
For convenience, an SDP may be specified in a slightly different, but equivalent form. For example, linear expressions involving nonnegative scalar variables may be added to the program specification. This remains an SDP because each variable can be incorporated into the matrix X as a diagonal entry (X_{ii} for some i). To ensure that X_{ii} \geq 0, constraints X_{ij} = 0 can be added for all j \neq i. As another example, note that for any positive semidefinite matrix X, there exists a set of vectors \{ v_i \} such that the i,j entry of X is X_{ij} = \langle v_i, v_j \rangle, the scalar product of v_i and v_j. Therefore, SDPs are often formulated in terms of linear expressions on scalar products of vectors. Given the solution to the SDP in the standard form, the vectors \{ v_i \} can be recovered in O(n^3) time (e.g., by using an incomplete Cholesky decomposition of X).
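
The recovery step can be sketched in a few lines. The snippet below deliberately substitutes an eigendecomposition for the incomplete Cholesky factorization mentioned above, since it needs no special handling when X is rank deficient; recover_vectors is an illustrative name, not a library function.

import numpy as np

def recover_vectors(X):
    # Given a positive semidefinite X, return V whose rows v_i
    # satisfy <v_i, v_j> = X_ij.
    w, Q = np.linalg.eigh(X)
    w = np.clip(w, 0.0, None)   # clip tiny negative eigenvalues from roundoff
    return Q * np.sqrt(w)       # scales column k of Q by sqrt(w_k)

X = np.array([[1.0, -0.5],
              [-0.5, 1.0]])
V = recover_vectors(X)
print(np.allclose(V @ V.T, X))   # True: the scalar products reproduce X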

Duality theory

Definitions

Analogously to linear programming, given a general SDP of the form
\begin{array}{rl}
{\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle_{\mathbb{S}^n} \\
\text{subject to} & \langle A_i, X \rangle_{\mathbb{S}^n} = b_i, \quad i = 1,\ldots,m \\
& X \succeq 0
\end{array}
(the primal problem or P-SDP), we define the dual semidefinite program (D-SDP) as
\begin{array}{rl}
{\displaystyle\max_{y \in \mathbb{R}^m}} & \langle b, y \rangle_{\mathbb{R}^m} \\
\text{subject to} & {\displaystyle\sum_{i=1}^m} y_i A_i \preceq C
\end{array}
where for any two matrices P and Q, P \succeq Q means P-Q \succeq 0.

Weak duality

The weak duality theorem states that the value of the primal SDP is at least the value of the dual SDP. Therefore, any feasible solution to the dual SDP lower-bounds the primal SDP value, and conversely, any feasible solution to the primal SDP upper-bounds the dual SDP value. This is because
\langle C, X \rangle - \langle b, y \rangle
= \langle C, X \rangle - \sum_{i=1}^m y_i b_i
= \langle C, X \rangle - \sum_{i=1}^m y_i \langle A_i, X \rangle
= \langle C - \sum_{i=1}^m y_i A_i, X \rangle
\geq 0,
where the last inequality holds because both matrices are positive semidefinite, and the trace inner product of two positive semidefinite matrices is nonnegative.

Strong duality

Under a condition known as Slater's condition, the value of the primal and dual SDPs are equal. This is known as strong duality. Unlike for linear programs, however, not every SDP satisfies strong duality; in general, the value of the dual SDP may lie strictly below the value of the primal.
(i) Suppose the primal problem (P-SDP) is bounded below and strictly feasible (i.e., there exists X_0 \in \mathbb{S}^n, X_0 \succ 0, such that \langle A_i, X_0 \rangle_{\mathbb{S}^n} = b_i, i = 1, \ldots, m). Then there is an optimal solution y^* to (D-SDP) and
\langle C, X^* \rangle_{\mathbb{S}^n} = \langle b, y^* \rangle_{\mathbb{R}^m}.
(ii) Suppose the dual problem (D-SDP) is bounded above and strictly feasible (i.e., \sum_{i=1}^m (y_0)_i A_i \prec C for some y_0 \in \mathbb{R}^m). Then there is an optimal solution X^* to (P-SDP) and the equality from (i) holds.

Examples

Example 1

Consider three random variables A, B, and C. By definition, their correlation coefficients \rho_{AB}, \ \rho_{AC}, \rho_{BC} are valid if and only if
\begin{pmatrix}
  1 & \rho_{AB} & \rho_{AC} \\
  \rho_{AB} & 1 & \rho_{BC} \\
  \rho_{AC} & \rho_{BC} & 1
\end{pmatrix} \succeq 0
Suppose that we know from some prior knowledge (empirical results of an experiment, for example) that -0.2 \leq \rho_{AB} \leq -0.1 and 0.4 \leq \rho_{BC} \leq 0.5. The problem of determining the smallest and largest values that \rho_{AC} \ can take is given by:
minimize/maximize x_{13}
subject to
-0.2 \leq x_{12} \leq -0.1
0.4 \leq x_{23} \leq 0.5
x_{11} = x_{22} = x_{33} = 1 \
\begin{pmatrix}
  1 & x_{12} & x_{13} \\
  x_{12} & 1 & x_{23} \\
  x_{13} & x_{23} & 1
\end{pmatrix} \succeq 0
where we set \rho_{AB} = x_{12}, \ \rho_{AC} = x_{13}, \ \rho_{BC} = x_{23}. This can be formulated as an SDP. We handle the inequality constraints by augmenting the variable matrix and introducing slack variables, for example
\mathrm{tr}\left(\left(\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\end{array}\right)\cdot\left(\begin{array}{cccccc}
1 & x_{12} & x_{13} & 0 & 0 & 0\\
x_{12} & 1 & x_{23} & 0 & 0 & 0\\
x_{13} & x_{23} & 1 & 0 & 0 & 0\\
0 & 0 & 0 & s_{1} & 0 & 0\\
0 & 0 & 0 & 0 & s_{2} & 0\\
0 & 0 & 0 & 0 & 0 & s_{3}\end{array}\right)\right)=x_{12} + s_{1}=-0.1
Solving this SDP gives the minimum and maximum values of \rho_{AC} = x_{13} \ as -0.978 and  0.872 respectively.
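
These numbers can be reproduced with the semidefinite programming interface of CVXOPT (featured earlier in this post). This is a sketch of one possible encoding: solvers.sdp takes the scalar bounds as Gl x <= hl and, for the 3x3 matrix constraint, one column per variable holding the column-major vectorized coefficient matrix.

from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

# variables x = (x12, x13, x23); the bounds on x12 and x23 as Gl x <= hl
Gl = matrix([[1.0, -1.0, 0.0, 0.0],    # coefficients of x12
             [0.0, 0.0, 0.0, 0.0],     # coefficients of x13
             [0.0, 0.0, 1.0, -1.0]])   # coefficients of x23
hl = matrix([-0.1, 0.2, 0.5, -0.4])

# PSD constraint: I + x12*E12 + x13*E13 + x23*E23 >= 0, rewritten as
# -x12*E12 - x13*E13 - x23*E23 <= I, where E_ij has ones at (i,j) and (j,i)
Gs = [matrix([[0., -1., 0., -1., 0., 0., 0., 0., 0.],    # -E12, vectorized
              [0., 0., -1., 0., 0., 0., -1., 0., 0.],    # -E13
              [0., 0., 0., 0., 0., -1., 0., -1., 0.]])]  # -E23
hs = [matrix([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])]

for sign, label in [(1.0, "min"), (-1.0, "max")]:
    sol = solvers.sdp(matrix([0.0, sign, 0.0]), Gl, hl, Gs, hs)
    print(label, "rho_AC =", sol['x'][1])
# prints roughly -0.978 and 0.872, matching the values above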

Example 2

Consider the problem
minimize \frac{(c^T x)^2}{d^Tx}
subject to Ax +b\geq 0
where we assume that d^Tx>0 whenever Ax+b\geq 0.
Introducing an auxiliary variable t the problem can be reformulated:
minimize t
subject to Ax+b\geq 0, \, \frac{(c^T x)^2}{d^Tx}\leq t
In this formulation, the objective is a linear function of the variables x,t.
The first restriction can be written as
\textbf{diag}(Ax+b)\geq 0
where the matrix \textbf{diag}(Ax+b) is the square matrix with values in the diagonal equal to the elements of the vector Ax+b.
The second restriction can be written as
t d^T x - (c^T x)^2 \geq 0
or equivalently
\det \underbrace{\left[\begin{array}{cc} t & c^T x \\ c^T x & d^T x \end{array}\right]}_{D} \geq 0.
Since d^T x > 0 by assumption, and a symmetric 2 \times 2 matrix with a positive diagonal entry and nonnegative determinant is positive semidefinite, this gives D \succeq 0.
The semidefinite program associated with this problem is
minimize t
subject to \left[\begin{array}{ccc}\textbf{diag}(Ax+b)&0&0\\0&t&c^Tx\\0&c^Tx&d^Tx\end{array}\right] \succeq 0

Example 3 (Goemans-Williamson MAX CUT approximation algorithm)

Semidefinite programs are important tools for developing approximation algorithms for NP-hard maximization problems. The first approximation algorithm based on an SDP is due to Goemans and Williamson (JACM, 1995). They studied the MAX CUT problem: Given a graph G = (V, E), output a partition of the vertices V so as to maximize the number of edges crossing from one side to the other. This problem can be expressed as an integer quadratic program:
Maximize \sum_{(i,j) \in E} \frac{1-v_{i} v_{j}}{2}, such that each v_i\in\{1,-1\}.
Unless P = NP, we cannot solve this maximization problem efficiently. However, Goemans and Williamson observed a general three-step procedure for attacking this sort of problem:
  1. Relax the integer quadratic program into an SDP.
  2. Solve the SDP (to within an arbitrarily small additive error \epsilon).
  3. Round the SDP solution to obtain an approximate solution to the original integer quadratic program.
For MAX CUT, the most natural relaxation is
\max \sum_{(i,j) \in E} \frac{1-\langle v_{i}, v_{j}\rangle}{2}, such that \lVert v_i\rVert^2 = 1, where the maximization is over vectors \{v_i\} instead of integer scalars.
This is an SDP because the objective function and constraints are all linear functions of vector inner products. Solving the SDP gives a set of unit vectors in \mathbf{R}^n; since the vectors are not required to be collinear, the value of this relaxed program can only be higher than the value of the original quadratic integer program. Finally, a rounding procedure is needed to obtain a partition. Goemans and Williamson simply choose a uniformly random hyperplane through the origin and divide the vertices according to which side of the hyperplane the corresponding vectors lie on. Straightforward analysis shows that this procedure achieves an expected approximation ratio (performance guarantee) of 0.87856 - \epsilon. (The expected value of the cut is the sum over edges of the probability that the edge is cut, and an edge is cut with probability \cos^{-1}\langle v_{i}, v_{j}\rangle / \pi, the angle between the endpoint vectors divided by \pi. Comparing this probability to (1-\langle v_{i}, v_{j}\rangle)/2, the expected ratio is always at least 0.87856.) Assuming the Unique Games Conjecture, it can be shown that this approximation ratio is essentially optimal.
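
The rounding step is easy to simulate. The sketch below skips the SDP solve and plugs in a known optimal solution of the relaxation for the triangle graph, namely three unit vectors at mutual angles of 120 degrees (relaxation value 9/4 versus a true maximum cut of 2), then rounds with random hyperplanes.

import numpy as np

# optimal SDP vectors for the triangle: unit vectors 120 degrees apart
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
V = np.array([[np.cos(a), np.sin(a)] for a in angles])
edges = [(0, 1), (0, 2), (1, 2)]

rng = np.random.default_rng(0)
cuts = []
for _ in range(10000):
    r = rng.normal(size=2)                  # normal of a random hyperplane
    side = np.sign(V @ r)                   # which side each vertex falls on
    cuts.append(sum(side[i] != side[j] for i, j in edges))

sdp_value = sum((1 - V[i] @ V[j]) / 2 for i, j in edges)   # 9/4
print(np.mean(cuts) / sdp_value)   # about 0.89, above the 0.87856 guarantee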
Since the original paper of Goemans and Williamson, SDPs have been applied to develop numerous approximation algorithms. Recently, Prasad Raghavendra has developed a general framework for constraint satisfaction problems based on the Unique Games Conjecture.[1]

Algorithms

There are several types of algorithms for solving SDPs. These algorithms output the value of the SDP up to an additive error \epsilon in time that is polynomial in the program description size and \log (1/\epsilon).

Interior point methods

Most codes are based on interior point methods (CSDP, SeDuMi, SDPT3, DSDP, SDPA), which are robust and efficient for general linear SDP problems, but are restricted by the fact that the algorithms are second-order methods and need to store and factorize a large (and often dense) matrix.

Bundle method

The code ConicBundle formulates the SDP problem as a nonsmooth optimization problem and solves it by the Spectral Bundle method of nonsmooth optimization. This approach is very efficient for a special class of linear SDP problems.

Other

Algorithms based on the augmented Lagrangian method (PENSDP) are similar in behavior to the interior point methods and can be specialized to some very large scale problems. Other algorithms use low-rank information and reformulation of the SDP as a nonlinear programming problem (SDPLR).

Software

The following codes are available for SDP:
SDPA, CSDP, SDPT3, SeDuMi, DSDP, PENSDP, SDPLR, ConicBundle
SeDuMi runs on MATLAB and uses the Self-Dual method for solving general convex optimization problems.

Applications

Semidefinite programming has been applied to find approximate solutions to combinatorial optimization problems, such as the solution of the max cut problem with an approximation ratio of 0.87856. SDPs are also used in geometry to determine tensegrity graphs, and arise in control theory as LMIs.