Sunday, August 30, 2009
Halal’s Predictions Concerning Genetically Modified Organisms
First, some definitions: Genetically Modified Organisms (GMOs), also known as Genetically Engineered Organisms (GEOs), are organisms whose genetic material has been directly manipulated. The techniques used are generally known as recombinant DNA technology. The basic idea is to take DNA molecules from different sources and combine them into one molecule to create a new set of genes, which is then transferred into an organism, giving it modified or novel genes. A subset of GMOs, known as transgenic organisms, is of particular interest to agriculture. Transgenic organisms are those containing inserted DNA that originated in a different species. Another type of GMO, termed cisgenic, contains no DNA from other species.
Technologies for genetically modifying foods offer the promise of meeting one of the 21st century’s greatest challenges: severe food shortages for an ever-increasing human and livestock population. GMO-based products (current or in development) include medicines, vaccines, feeds, fibers, foods, and food ingredients. Adding important traits that organisms currently lack, such as insect resistance or desired nutrients in particular crops, is certainly a desirable thing, isn’t it? Or are we playing the role of the sorcerer’s apprentice? Like all new technologies, GMOs pose risks, both known and unknown. Controversies surrounding genetically altered foods and crops commonly focus on human and environmental safety, labeling and consumer choice, and ethical issues, including food security, poverty reduction, and environmental conservation.
The amount of farmland being used for or converted to transgenic crops in recent years is staggering. According to the Human Genome Project at genomics.energy.gov, in 2006, 252 million acres of transgenic crops were planted in 22 countries by 10.3 million farmers. The majority of these crops were herbicide- and insect-resistant soybeans, corn, cotton, canola, and alfalfa. In addition, a number of crops grown commercially or field-tested had increased nutritional value, were virus resistant, and/or were able to survive extreme weather conditions. These included rice with increased iron and vitamins that may alleviate chronic malnutrition in Asian countries, a sweet potato resistant to a virus that could decimate most of the African harvest, and a variety of plants able to survive weather extremes. The near future promises fish that mature more quickly; cows that are resistant to bovine spongiform encephalopathy (mad cow disease); fruit and nut trees that yield years earlier; even bananas that produce human vaccines against infectious diseases such as hepatitis B, and plants that produce new plastics with unique properties.
Although the growth of global transgenic crop acreage is expected to level off in industrialized nations, it is increasing, and will continue to increase dramatically, in developing countries.
What about William E. Halal’s predictions in his book Technology’s Promise? Considering that futurists tend toward optimism, at times extreme optimism, I believe that Halal’s prediction that it will take 10 to 20 years to reach a 30% worldwide adoption level is actually a bit pessimistic. Why? Climatic change, drought, energy shortages, and the proliferation of nuclear and biological weapons among not only countries but terrorist groups will cause numerous nations, particularly third-world nations, to adopt genetically altered crops and livestock in order to feed their populations and maintain order. I believe that we will see a dramatic increase in adoption within the next 10 years. There will, of course, develop a “back-to-nature” or “all-natural” movement in most of the industrialized countries that can afford it. These “unaltered” foods will be the exception, not the norm, and will be a relative luxury.
The following is from the Human Genome Project (HGP) Information website – it summarizes the currently perceived benefits and concerns over GMOs.
GM Products: Benefits and Controversies
Benefits
• Crops
    o Enhanced taste and quality
    o Reduced maturation time
    o Increased nutrients, yields, and stress tolerance
    o Improved resistance to disease, pests, and herbicides
    o New products and growing techniques
• Animals
    o Increased resistance, productivity, hardiness, and feed efficiency
    o Better yields of meat, eggs, and milk
    o Improved animal health and diagnostic methods
• Environment
    o "Friendly" bioherbicides and bioinsecticides
    o Conservation of soil, water, and energy
    o Bioprocessing for forestry products
    o Better natural waste management
    o More efficient processing
• Society
    o Increased food security for growing populations
Controversies
• Safety
    o Potential human health impacts, including allergens, transfer of antibiotic resistance markers, and unknown effects
    o Potential environmental impacts, including unintended transfer of transgenes through cross-pollination, unknown effects on other organisms (e.g., soil microbes), and loss of flora and fauna biodiversity
• Access and Intellectual Property
    o Domination of world food production by a few companies
    o Increasing dependence on industrialized nations by developing countries
    o Biopiracy, or foreign exploitation of natural resources
• Ethics
    o Violation of natural organisms' intrinsic values
    o Tampering with nature by mixing genes among species
    o Objections to consuming animal genes in plants and vice versa
    o Stress for animals
• Labeling
    o Not mandatory in some countries (e.g., United States)
    o Mixing GM crops with non-GM products confounds labeling attempts
• Society
    o New advances may be skewed to interests of rich countries
Saturday, August 29, 2009
Quantum Game Theory
Popular culture was lightly exposed to Game theory with the release of the book and subsequent movie A Beautiful Mind, about the life of mathematician John Nash, whose theories revolutionized economic theory – becoming the cornerstone of modern business practices, conflict resolution, and bargaining theory, as well as affecting numerous other fields. Central to Nash’s discovery is what has come to be termed the Nash equilibrium, which we’ll briefly describe later. Quantum game theory is an interesting extension of classical Game theory into the quantum domain; but first, a little history.
The von Neumann–Morgenstern theory
In 1944, John von Neumann and Oskar Morgenstern published their book, Theory of Games and Economic Behavior. They were the first to construct a cooperative theory of n-person games. In this book, von Neumann and Morgenstern axiomatically derived, from Bernoulli’s formulation of a utility function over wealth, an expected utility function over lotteries, or gambles. By this we mean that a set of assumptions about people’s preferences must be established prior to constructing a utility function. Essentially, they assumed that various groups of players might join together to form coalitions, each having an associated value defined as the minimum amount that the coalition can ensure by its own efforts. These interactions were described as n-person games in what is termed characteristic-function form. In this form, the individual players (one-person coalitions) are listed, as well as all possible coalitions of 2 or more players, and the values that each of these coalitions could ensure if a counter-coalition comprising all other players acted to minimize the amount that the coalition can obtain. Von Neumann and Morgenstern also assumed that the characteristic function is superadditive: that is, the value of a coalition formed from 2 formerly separate coalitions is at least as great as the sum of the separate values of the two coalitions.
The sum of payments to the players in each coalition must equal the value of the coalition. Additionally, each coalition player must receive no less than what he could obtain playing alone (otherwise, he would not have joined the coalition in the first place). Each set of player payments describes a possible outcome of an n-person cooperative game and is called an imputation. According to von Neumann and Morgenstern, within a coalition S, an imputation X is said to dominate another imputation Y if each player in S gets more with X than with Y and if the players in S receive a total payment that does not exceed the coalition value of S. The important consequence of this is that players in the coalition prefer the payoff X to the payoff Y and have the power to enforce this preference through their play. From this relationship, von Neumann and Morgenstern went on to define the solution to an n-person game as a set of imputations satisfying two conditions:
(1) No imputation in the solution dominates another imputation in the solution, and
(2) Any imputation not in the solution is dominated by another one in the solution.
It must be noted that a von Neumann–Morgenstern solution is not simply a single outcome, but rather a set of possible outcomes. The solution is stable because, for the members of the coalition, any imputation outside the solution is dominated by, and is therefore less attractive than, some imputation within the solution. The imputations within the solution are viable because they are not dominated by any other imputation in the solution.
It should be noted that although there may be numerous solutions to a game (each solution representing a different “standard of behavior”), it was not initially apparent that there would always be at least one in every cooperative game. Von Neumann and Morgenstern themselves did not find a single type of game that lacked a solution, and they strongly felt that this indicated no such game existed. However, in 1967 the American mathematician William F. Lucas discovered a fairly complicated 10-person game that had no solution – the first of a number of counterexamples discovered since, indicating that the von Neumann–Morgenstern solution is not universally applicable. These exceptions notwithstanding, the von Neumann–Morgenstern solution remains compelling, particularly considering that no definitive theory of n-person cooperative games exists to this day.
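To make the characteristic-function machinery above concrete, here is a minimal Python sketch. The three-player game and its coalition values are my own illustrative choices (not an example from von Neumann and Morgenstern); the code simply checks superadditivity and tests whether one imputation dominates another through a coalition S, exactly as defined above.

```python
from itertools import combinations

# Hypothetical 3-player characteristic function (illustrative values only).
# Coalitions are represented as frozensets of player indices.
players = (1, 2, 3)
v = {
    frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
    frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60,
    frozenset({1, 2, 3}): 90,
}

def is_superadditive(v, players):
    """v(S union T) >= v(S) + v(T) for every pair of disjoint coalitions."""
    coalitions = [frozenset(c) for r in range(1, len(players) + 1)
                  for c in combinations(players, r)]
    return all(v[s | t] >= v[s] + v[t]
               for s in coalitions for t in coalitions if not (s & t))

def is_imputation(x, v, players):
    """Payments sum to v(N) and each player gets at least v({i})."""
    return (abs(sum(x.values()) - v[frozenset(players)]) < 1e-9
            and all(x[i] >= v[frozenset({i})] for i in players))

def dominates(x, y, s, v):
    """x dominates y through coalition s: every member of s strictly prefers x,
    and s can enforce x (its members' payments do not exceed v(s))."""
    return (all(x[i] > y[i] for i in s)
            and sum(x[i] for i in s) <= v[frozenset(s)])

x = {1: 30, 2: 30, 3: 30}   # equal split
y = {1: 50, 2: 20, 3: 20}   # a split favoring player 1

print(is_superadditive(v, players))                               # True for these values
print(is_imputation(x, v, players), is_imputation(y, v, players)) # True True
print(dominates(x, y, {2, 3}, v))   # True: players 2 and 3 prefer x and can enforce it
```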
John Nash – Nash’s Equilibrium
John Nash exploded onto the scene in 1950 with his submission of 2 papers that subsequently defined the direction of economic applications of Game theory even to this day. These papers addressed both the cooperative and non-cooperative modes of Game theory. Nash’s insight into the non-cooperative mode was conveyed in his simple and elegant general proof of the existence of a non-cooperative equilibrium in n-person games. In Nash’s framework each player takes the others’ strategies as given and chooses his own strategy; a Nash equilibrium is where all of these choices are mutually consistent. Nash’s approach was a natural extension of the economic framework of choice and equilibrium familiar to economists at the time, namely the standard Marshallian or Walrasian theory of competitive markets, where each individual consumer or firm takes the market prices as given and makes his or her own purchase and sale decisions; the equilibrium price is where all these choices are mutually consistent.
What made Nash’s theorem so powerful was that it worked for any number of players, with arbitrary mixtures of common interests and conflicts of interest. This ability is needed in economics to model real world scenarios, where many rational people interact, and there exist possible mutual gains from trade, as well as distributive conflicts. This is the true strength of Nash’s theorem.
John Nash – Nash’s theory of bargaining
Nash’s contribution to the theory of bargaining was as groundbreaking as his equilibrium theory. Prior to Nash’s discovery, economists believed that the outcome of bilateral bargaining was indeterminate, dependent on some vaguely defined “bargaining powers” of the participants about which economics could say little. The more formal cooperative game-theoretic approach of von Neumann and Morgenstern was (as mentioned earlier) equally indeterminate – offering a solution that consisted of a whole set of Pareto-efficient allocations.
What Nash did was to take the cooperative approach and establish a set of properties such that there would be a unique solution satisfying them for each bargaining problem in a large class of such problems. This is significant. The solution Nash derived had some features of a fair arbitration that equitably divides up the players’ gains from the deal, but this was not central to Nash’s goal. He thought of the outcome of such activities as resulting from some unspecified process of negotiation or strategizing by the individual bargainers acting in their own interests. The cooperative solution was intended as a mechanism to cut through the complex details of this process, and could also be useful for predicting the possible outcomes of such endeavors (i.e., games). (This was depicted in the movie A Beautiful Mind, in the bar scene where Nash outlines his eureka moment to his classmates, using his friends’ typical behavior of fighting over the beautiful girl to the exclusion of her friends to illustrate his point.) This notion of elaborating the connection, such that “steps of negotiation become moves in a larger non-cooperative game”, has become known as the Nash program. The best-known and most influential contribution to this line of research is Ariel Rubinstein’s work on the bargaining problem. But even before that appeared, many applications in labor economics and international trade had used Nash’s axiomatic, cooperative solution with great success for the predictive purpose he intended.
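Here is a minimal sketch of the Nash bargaining solution itself, under assumptions of my own choosing (two bargainers with linear utilities splitting a surplus of 10, with disagreement payoffs of 0 and 2): the unique solution maximizes the product of the players' gains over their disagreement points, which a simple grid search can locate.

```python
import numpy as np

# Hypothetical bargaining problem (illustrative numbers only):
# two players split a surplus of 10; if they disagree, player 1 gets d1 = 0
# and player 2 gets d2 = 2 (say, an outside option). Utilities are linear.
surplus, d1, d2 = 10.0, 0.0, 2.0

# Search a fine grid of splits and pick the one maximizing the Nash product
# (u1 - d1) * (u2 - d2), restricted to allocations both players would accept.
best_split, best_product = None, -np.inf
for x in np.linspace(0.0, surplus, 10001):
    u1, u2 = x, surplus - x
    if u1 >= d1 and u2 >= d2:
        product = (u1 - d1) * (u2 - d2)
        if product > best_product:
            best_split, best_product = (u1, u2), product

print(best_split)   # approximately (4.0, 6.0): the outside option shifts the split
```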
Quantum Game Theory
Whereas John Nash proved that multi-player games can achieve a stable solution provided that the players cannot collaborate, quantum game theory arose when it was discovered that if quantum information (yes, I mean entangled particles from which one can derive information) is introduced into multi-player games, a new type of equilibrium strategy emerges that is not found in traditional games. This new type of equilibrium strategy derives from the “entanglement” of the players’ choices (or more correctly, of the physical method they use to record their selections), which can have the effect of a contract, preventing players who betray from profiting from their betrayal.
Basically, all we need is a pair of entangled particles, which of course is easy enough to create in the laboratory. If I encounter a situation in a multi-player game that requires (or would be greatly helped by) foreknowledge of what my secret “entangled” partner is going to do (i.e., their choice), I measure my particle’s spin (which is either up or down) and answer “yes” or “no” accordingly. We can even check the answers to multiple situations if, say, I rotate my measuring apparatus by 90 degrees and my secret partner does the same, having started with their apparatus rotated 45 degrees from mine; by “reading” the particle that is entangled with theirs, we learn how our partner’s measurement will come out. The thing about entangled particles is that the outcomes of these measurements are correlated in a very particular way, and they remain so forever, even if the particles are separated.
OK, so specifically, what is the difference between Quantum game theory and classical game theory? In general, Quantum game theory differs from classical game theory in 3 primary ways:
-- Superposed initial states
-- Quantum entanglement of initial states
-- The application of superposition of strategies on the initial states.
So what does that mean? Let’s look at each of these differences in some detail.
Superposed Initial State
We can view the information transfer that occurs during a game as a physical process (which in many cases it is; e.g., pieces move). In the simplest case, a classical 2-player game with two strategies per player, each player uses a single bit (a ‘0’ or a ‘1’) to convey their current strategy. In the quantum version of the same game, the bit is replaced by the qubit, which is a quantum superposition of two or more base states. In the case of a 2-strategy game this can be implemented physically by employing, say, an electron, which has a superposed spin state, with the base states being +1/2 (plus one-half) and -1/2 (minus one-half). Each spin state represents one of the 2 possible strategies available to the players. When a measurement is made on the electron, it collapses to one of the base states, thus revealing the strategy used by the player.
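A minimal numpy sketch of this idea (the amplitudes are my own illustrative choice): a qubit is just a normalized 2-vector of complex amplitudes over the two base states/strategies, and the Born rule gives the probability of each strategy being revealed when the qubit is measured.

```python
import numpy as np

# Base states: index 0 = "cooperate" (spin +1/2), index 1 = "defect" (spin -1/2).
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# A superposed strategy qubit (illustrative amplitudes, normalized to length 1).
psi = np.sqrt(0.7) * ket0 + np.sqrt(0.3) * ket1

# Born rule: the probability of each measurement outcome is |amplitude|^2.
probabilities = np.abs(psi) ** 2
print(probabilities.round(2))                 # [0.7 0.3]
print(np.isclose(probabilities.sum(), 1.0))   # True: a valid quantum state
```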
Entangled Initial States
The set of qubits initially provided to the players (to be used to convey their choices of strategy) may be entangled. An entangled pair of qubits cannot be described as two independent states: the outcome of a measurement on one qubit is instantaneously correlated with the outcome on the other, which alters the knowledge available to the players and hence the expected payoffs of the game.
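As a sketch, here is the standard maximally entangled (Bell) pair built with numpy; the choice of this particular state is mine, for illustration only. The joint state assigns probability only to the outcomes where both players' bits agree, so each player's measured strategy is individually random yet perfectly correlated with the other's.

```python
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# A maximally entangled (Bell) pair: (|00> + |11>) / sqrt(2).
# np.kron builds the joint two-qubit state from the single-qubit states.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Joint outcome probabilities over |00>, |01>, |10>, |11>.
probabilities = np.abs(bell) ** 2
print(probabilities.round(2))   # [0.5 0.  0.  0.5]
# Each player sees a 50/50 random bit, but the two bits always agree:
# the pair behaves like a shared, unforgeable coin flip.
```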
Superposition of Strategies
The job of a player in a game is to select (and implement) a strategy. In terms of bits, this means that the player has to choose between ‘flipping’ the bit to its opposite state or leaving its current state untouched. When this operation is extended to the quantum domain, the analog (i.e., equivalent operation) is for the player to rotate the qubit to a new state, thus changing the probability amplitudes of each of the base states, or simply to leave the qubit unchanged. Such operations on the qubits are required to be unitary transformations of the initial state of the qubit. This is different from the classical procedure of assigning different probabilities to the act of selecting each of the strategies.
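Here is a minimal sketch of a quantum strategy as a unitary rotation; the one-parameter rotation below is a common textbook parametrization, chosen here only for illustration. Leaving the qubit alone, flipping it, or rotating it part-way are all unitary operations on the strategy qubit, and the intermediate angle produces a genuine superposition rather than a classical coin flip over the two strategies.

```python
import numpy as np

def rotation(theta):
    """A one-parameter family of unitary strategies: theta = 0 leaves the
    qubit unchanged, theta = pi flips it, intermediate angles superpose."""
    return np.array([[np.cos(theta / 2),  np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.cos(theta / 2)]], dtype=complex)

ket0 = np.array([1.0, 0.0], dtype=complex)   # start in the "cooperate" state

for theta in (0.0, np.pi / 2, np.pi):
    U = rotation(theta)
    psi = U @ ket0
    print(np.allclose(U.conj().T @ U, np.eye(2)),   # unitarity check: True
          (np.abs(psi) ** 2).round(3))              # outcome probabilities
    # theta = 0      -> [1.  0. ]  (pure "cooperate")
    # theta = pi / 2 -> [0.5 0.5]  (an equal superposition of both strategies)
    # theta = pi     -> [0.  1. ]  (pure "defect")
```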
Quantum Game Theory and Multiplayer games
Introducing quantum information into multiplayer games allows a new type of equilibrium strategy which is not found in traditional games. The entanglement of players’ choices can have the effect of a contract by preventing players from profiting from betrayal.
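As a concrete illustration of this “entanglement as contract” effect, here is a short numpy sketch of the Eisert–Wilkens–Lewenstein (EWL) quantum Prisoner's Dilemma. The payoff numbers (3, 0, 5, 1) and the particular gate conventions are standard textbook choices of my own, not something taken from this post; the point is only to show numerically that against the “quantum” strategy Q, unilateral defection no longer pays.

```python
import numpy as np

# Prisoner's Dilemma payoffs over joint outcomes |CC>, |CD>, |DC>, |DD>
# (illustrative values: mutual cooperation 3, temptation 5, sucker 0, mutual defection 1).
payoff_A = np.array([3, 0, 5, 1])
payoff_B = np.array([3, 5, 0, 1])

I = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])             # Pauli-Y

# Maximally entangling gate J = exp(-i*(pi/4) * sy (x) sy) and its inverse.
J = (np.kron(I, I) - 1j * np.kron(sy, sy)) / np.sqrt(2)
J_dag = J.conj().T

# Strategies as single-qubit unitaries.
C = I                                           # cooperate: do nothing
D = np.array([[0, 1], [-1, 0]], dtype=complex)  # defect: flip the qubit (up to phase)
Q = np.array([[1j, 0], [0, -1j]])               # the "quantum" strategy Q = i * sigma_z

def payoffs(U_A, U_B):
    """Expected payoffs of the EWL protocol: entangle, apply local
    strategies, disentangle, then measure in the computational basis."""
    ket00 = np.zeros(4, dtype=complex); ket00[0] = 1.0
    final = J_dag @ np.kron(U_A, U_B) @ J @ ket00
    probs = np.abs(final) ** 2
    return probs @ payoff_A, probs @ payoff_B

print(payoffs(C, C))   # (3, 3): classical mutual cooperation is reproduced
print(payoffs(C, D))   # (0, 5): classically, defection exploits a cooperator
print(payoffs(Q, Q))   # (3, 3): both playing Q recovers the cooperative payoff
print(payoffs(Q, D))   # (5, 0): defecting against Q now backfires on the defector
```

With maximal entanglement, the classical outcomes are reproduced for the classical strategies, but a player who defects against Q ends up with the worst payoff, so the entanglement enforces cooperation much as a binding contract would.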
Future Applications of Quantum Game Theory
One can view quantum game theory as an exercise in pure mathematics: given a game G, we create a new game G′ and study its properties; but game theory has historically been interesting primarily for its applications. It has been suggested that quantum games might have immediate applications in the theory of evolution, the assumption being that genetic mutations are driven by quantum events. Though an intriguing idea, to my knowledge there is no evidence to support this hypothesis. As with quantum computing, the applications of quantum game theory lie in the future.1 The immediate task is to prove theorems that we expect will be useful a generation from now.
1. Steven E. Landsburg, “Nash Equilibria in Quantum Games,” University of Rochester, RCER Working Paper No. 524, February 2006. Preprint available at http://www.landsburg.com/pdf and via rcer.econ.rochester.edu.
Sunday, August 23, 2009
Dimewise
I was self-employed for almost 2 decades, and one of the most difficult parts of my business to manage was quickly and easily recording my purchases/expenses and categorizing them. This becomes particularly important if you travel a lot, and of course at the all-important tax time. Though I am no longer self-employed, the possibility always exists. To this end, the Web 2.0 tool Dimewise provides this ability. Among other things, this tool allows you to review your expense history over the net, so you can determine where you’re spending your money, even by category. You can also set recurring expenses as well as track balances in one or more accounts, making it easier to predict future monthly expenses. I realize that there is nothing new here – there is a lot of software of varying sophistication out there that does what this Web 2.0 tool does – but Dimewise lets you do it from anywhere with a web browser, and saves you the time of setting up macros.
Wisdom of the People Forum
A Case Study in SDP
The case study in the Structured Design Process (SDP) that I selected from Christakis’ book Harness Collective Wisdom and Power was the study discussed in Chapter 16, titled “Wisdom of the People Forum”. This event occurred in Washington, DC, September 16-18, 2002. The forum consisted of 40 Indigenous leaders from the Americas and New Zealand, as well as a number of non-Indigenous experts. The intent of the forum was to create a foundation that would strongly encourage and help to establish an ever-expanding, interconnecting set of relationships and cooperation between Indigenous peoples, transnational and yet grassroots in nature, within a framework allowing for the integration of what they termed “intangible” traditional core values into modern life.
With that said, the first order of business was to examine the current state of transnational interaction between Indigenous leaders and then devise effective methods to strengthen existing ties and establish new ones, in light of current and projected levels of globalization. To accomplish this, existing barriers to this effort needed to be identified and addressed. This was achieved through what was termed “true dialogue” or “open deliberations”, exercised through the use of the Comanche circle, where members share their “medicine”, or source of inner strength and personal power, with the group, primarily relating their ideals, their reverence for the Earth and their ancestors, and their desire for peaceful co-existence among all living creatures.
In analyzing the “Wisdom of the People Forum” (WPF) case study, I will classify the various methods and components of the SDP approach utilized by the members of the forum, identifying how they were specifically employed. By this I mean that I will identify which of the 31 component constructs, across the 7 modules comprising the SDP methodology, the members of the conference employed, and how.
Module A – Consensus Methods
Of the 6 consensus methods comprising the first module, the WPF employed a combination of two, namely Interpretive Structural Modeling (ISM) and the Options Field method. Specifically, ISM was used to create an influence tree to identify the crucial root sources among a collection of observations making up a Problematique. In this case, the Problematique is the set of 79 barriers that the group felt stood in the way of worldwide cooperation between Indigenous peoples. Included in this set were barriers that, if overcome, would exert the most leverage in overcoming other barriers.
On the second day of the conference, the co-laboratory answered the following trigger question: “What are action options which, if adopted and implemented by the community of stakeholders, will help in meeting the system of barriers?” From this question, the group developed 49 action options, posting them on the wall to create an Options Field representation; its use indicating the employment of the Options Field consensus method.
Module B – Language Patterns
Of the 7 possible language patterns available, the WPF utilized 3: (1) the Problematique, as demonstrated by the initial definition of the set of 79 barriers that they believed stood in the way of worldwide cooperation between Indigenous peoples, including those barriers that, if overcome, would exert the most leverage in overcoming other barriers; (2) the influence tree pattern, used to determine those barriers to interconnectivity in light of globalization; and (3) the options field pattern, as illustrated by the Options Field representation of the 49 action options diagrammed on the wall.
Module C – The 3-Application Time Phases
All three Application Time Phases (Discovery, Designing, and Action) were of course transitioned through during the 3-day conference, culminating in 8 consensus actions included in the Consensus Action Scenario.
Module D – The 3-Key Role Responsibilities
Though not specifically identified, the WPF conference employed all 3 key roles. The case study highlighted the Content (Stakeholders/Designers) role and its actions in its description.
Module E – The 4-Stages of Interactive Inquiry
The description of the WPF case study keyed on 2 of the 4 stages of interactive inquiry, namely the 2nd stage, Design of Alternatives, and the 4th stage, Action Planning. The Design of Alternatives stage was exemplified by the construction of the influence tree of the barriers in the context of globalization and by the action-options exercise used to identify the items comprising the resulting Consensus Action Scenario.
Module F – Collaborative Software and Faculty
Interestingly enough, there was no mention of the use of collaborative software in this case study. There could be a couple of reasons for this: its specific mention and/or use may have been of no importance to the case study, or the very nature of a case study involving Indigenous peoples may imply a certain disdain for its use. I doubt the latter, given that part of the intent of the WPF was to review the current state of collaboration in light of globalization, which is heavily tied to technology, and to show, by implication, that the beliefs of Indigenous peoples are still relevant in the modern world.
Module G – The 6 Dialogue Laws
The 6 Dialogue Laws of SDP consist of the laws of Requisite: (1) Variety, (2) Parsimony, (3) Saliency, (4) Meaning and Wisdom, (5) Authenticity and Autonomy, and (6) Evolutionary Learning. Of these, laws (1) and (4) were the strongest factors during the WPF conference; namely, (1) the law of requisite variety, which demands that diverse perspectives and their stakeholders MUST be appreciated to effect a favorable outcome to a complex problem, and (4) the law of requisite meaning, which states that meaning and wisdom result ONLY when participants seek to understand relationships of similar character, e.g., priority, influence, type, etc. This is abundantly evident in the WPF case study, where variety, or diversity, and the search for common ground were the cornerstones of the gathering.
Conclusion
Though the “Wisdom of the People Forum” case study in Christakis’ book Harness Collective Wisdom and Power is the shortest of the case studies, it illustrates key elements of the Structured Design Process (SDP) and their successful application in a unique setting.
Saturday, August 1, 2009
Kurzweil’s future world
Ray Kurzweil’s 2005 book, The Singularity Is Near, describes an era (termed the Singularity) in which our intelligence will become increasingly non-biological and trillions of times more powerful than it is today. This new age, according to Kurzweil, will usher in the dawning of a new civilization marked by our ability to transcend biological limitations as well as amplify our creative abilities. It is Kurzweil’s strong conviction that humanity stands on the verge of the most transformative and exciting period in its history, in which the very nature of what it means to be human will change. Kurzweil offers a compelling argument that the ever-accelerating rate of technological change described by Moore’s Law will inevitably lead to computers that rival human intelligence at every level. Kurzweil goes on to propose that the next logical step in our inexorable evolutionary process will be the union of human and machine. This union, which he ultimately terms Human (version) 2.0, will allow the knowledge and skills embedded in our brains to be combined with human-created systems (hardware and wetware), imbuing us with vastly greater capacity, speed, and knowledge-sharing ability. This breaking of our genetic limitations will enable almost unimaginable achievements, including inconceivable increases in intelligence, material progress, and longevity; but this transition will not be without its challenges and opponents, as well as champions. In this new era, there will be no clear distinction between human and machine. The distinction between real and virtual reality will be blurred. According to Kurzweil, we will be able to assume different bodies and take on a range of personae at will.
In Kurzweil’s world, nanotechnology will make it possible to create virtually any physical product using inexpensive information processes – ultimately rendering even death a solvable problem. In other words, aging and illness will be reversible. Nanotechnology will make it possible to stop pollution and to solve world hunger and poverty.
While The Singularity Is Near maintains a radically optimistic view of the future, Kurzweil acknowledges that the social and philosophical ramifications of these changes will be profound. Kurzweil does outline what he feels are some of the outstanding threats these technologies pose. One of the most striking is the danger posed by nanotechnology-based processes that run amok, whether by accident, incompetence, or design, i.e., terrorism. Picture a world where our nanotechnology-based creations attack the biosphere, or where self-replicating nanobots reproduce out of control like bacteria run amok. Numerous other Michael Crichton-like scenarios are explored.
Ultimately, Kurzweil offers a view of the coming age that is captivating to some and positively frightening to others. To Kurzweil, this coming age is simply the culmination of our species’ centuries-old quest for environmental, physical, and spiritual improvement, and of the technological ingenuity this pursuit generates.
Do I share Kurzweil’s vision? I believe that his timeline, like that of many futurists, is overly optimistic. He strongly believes that we are at the “elbow” of the geometric growth of our innovation/technology, and that we, as a species, are about to witness an explosion of knowledge and technology never before seen in human history as we “turn beyond this elbow” in the knowledge/technology growth curve. History seems to indicate that growth at the micro view (i.e., over short timelines) is segmented, or digital – not continuous, or analog, as it appears at the macro view, i.e., the proverbial 10,000-foot level. (Where’s my flying car?) Growth comes in spurts that are heavily influenced by environmental factors, including the availability of resources (time, money, etc.), need and/or desire (read this as social/economic/political will), and a number of other agents. Further, our technological advances are coming at increasing cost – both direct and indirect – including development, manufacturing, and deployment costs. With the possible exception of small-time gadgets and software, I would argue that most innovation is incremental, synergistic, and expensive, with multiple groups working on commonly identified, potentially profitable ventures in a race to capture a (near) future market. The bottom line, in my humble opinion: Kurzweil’s world is not 25-50 years off, but 100. I will admit that his vision is seductive on many levels.