PhD Student in Computer Science and Philosophy
Department of Computer Science
University of Hertfordshire
Hatfield AL10 9AB
Telephone: +44 (0)1707 284490
I have been a PhD student at the University of Hertfordshire since December 2007. I work with the Adaptive Systems Group in the Department of Computer Science and the Department of Philosophy. My principal supervisor is Daniel Polani; my second supervisors are Daniel Hutto and Luciano Floridi.
I am a Research Fellow at the University of Hertfordshire for the CORBYS project, and a Junior Research Associate with the Oxford University Research Group on the Philosophy of Information.
My main research interest is modelling and simulating aspects of social interaction with the help of information theory. I am especially interested in the transfer of information between agents, and in how such a system can evolve from a simple model to display complex behaviour.
This work borders on several areas of philosophy, mainly theory of mind and theories of social interaction. Here we investigate how exactly those two fields are connected, and whether, and how, the results of one field can be applied to the other. This also involves clarifying several terms that are commonly used by both disciplines.
A further interest of mine is the models behind game mechanics. I am interested in how AIs can be adapted to different challenges, and how that influences game design itself.
(2012) Malte Harder, Christoph Salge and Daniel Polani: A Bivariate Measure of Redundant Information, arXiv, 2012. (Link)
(2012) Christoph Salge, Cornelius Glackin and Daniel Polani: Approximation of Empowerment in the Continuous Domain, Advances in Complex Systems (ACS), in press, 2012.
(2011) Christoph Salge and Daniel Polani: Digested Information as an Information Theoretic Motivation for Social Interaction, Journal of Artificial Societies and Social Simulation (JASSS), 14(1), 2011. (Link | Postprint | Abstract | Bibtex)
Within a universal agent-world interaction framework based on Information Theory and Causal Bayesian Networks, we demonstrate how every agent that needs to acquire information relevant to its strategy selection will automatically inject part of this information back into the environment. We introduce the concept of 'Digested Information', which both quantifies and explains this phenomenon. Based on the properties of digested information, especially the high density of relevant information in other agents' actions, we outline how this could motivate the development of low-level social interaction mechanisms, such as the ability to detect other agents.
(2009) Christoph Salge and Daniel Polani: Information Driven Organization of Visual Receptive Fields, Advances in Complex Systems (ACS), vol. 12, issue 03, pages 311-326, 2009. (Link | Postprint | Abstract | Bibtex)
By using information theory to reduce the state space of sensor arrays, such as receptive fields, for AI decision making, we offer an adaptive algorithm free of the classical biases of hand-coded approaches. This paper presents a way to build an acyclic directed graph to organize the sensor inputs of a visual receptive field. The Information Distance Metric is used to repeatedly select the two sensors that contain the most information about each other. These are then encoded into a single variable, of equal alphabet size, with a deterministic mapping function that aims to create maximal entropy while maintaining a low information distance to the original sensors. The resulting tree determines which sensors are fused to reduce the input data while maintaining a maximum of information. The structure adapts to different environments of input images by encoding groups of preferred line structures or creating a higher resolution for areas with simulated movement. These effects are created without prior assumptions about the sensor statistics or the spatial configuration of the receptive field, and are cheap to compute, since only pair-wise informational comparison of sensors is used.
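The pairwise selection step described in the abstract can be sketched as follows. This is my own illustrative code, not the paper's implementation: it estimates the information distance d(X,Y) = H(X|Y) + H(Y|X) = 2H(X,Y) - H(X) - H(Y) from samples, and picks the sensor pair sharing the most information (smallest distance) as the next fusion candidate.

```python
import math
from collections import Counter
from itertools import combinations

def entropy(counts, n):
    """Shannon entropy in bits of an empirical distribution given as counts."""
    return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

def information_distance(x, y):
    """d(X,Y) = 2*H(X,Y) - H(X) - H(Y), estimated from paired samples."""
    n = len(x)
    hx = entropy(Counter(x), n)
    hy = entropy(Counter(y), n)
    hxy = entropy(Counter(zip(x, y)), n)
    return 2 * hxy - hx - hy

def closest_pair(sensors):
    """Indices of the two sensors that contain the most information about
    each other, i.e. the pair with the smallest information distance."""
    return min(combinations(range(len(sensors)), 2),
               key=lambda ij: information_distance(sensors[ij[0]], sensors[ij[1]]))
```

The paper then fuses the selected pair through a deterministic mapping of matching alphabet size, chosen to maximise entropy while keeping the distance to the original sensors low; that optimisation is omitted from this sketch.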
(2011) Salge, C. and Polani, D.: Local Information Maximisation creates Emergent Flocking Behaviour, in Proc. European Conference on Artificial Life 2011, Paris, in press. (Abstract | Download | Bibtex)
The three boids rules of alignment, separation and cohesion, introduced by Reynolds to recreate flocking behaviour, have become a well-known standard for creating swarm behaviour. In this paper we demonstrate how similar flocking behaviour can be created by a local, agent-based model following a principle of information maximisation. The basis for our model is an extension of Vergassola's infotaxis model, where agents determine their actions based on the highest expected reduction of entropy. We adapted this approach to a grid-world-based search task and extended the agents' abilities so they could perform a Bayesian update not only with information gained from the environment, but also with information gained from other agents. The resulting global flocking behaviour is then analysed with regard to how well it resembles the basic boids rules.
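The greedy action-selection rule described above can be sketched as follows. This is a deliberately simplified, hypothetical setting of my own (a sensor that deterministically reports whether the target occupies the inspected cell); the paper's infotaxis setting and its multi-agent Bayesian updates are richer.

```python
import math

def entropy(belief):
    """Entropy in bits of a belief, given as a dict cell -> probability."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def expected_entropy_after_visit(belief, cell):
    """Expected posterior entropy after inspecting `cell`.
    A hit collapses the belief (entropy 0); a miss renormalises it."""
    p_hit = belief.get(cell, 0.0)
    if p_hit >= 1.0:
        return 0.0
    miss = {c: p / (1.0 - p_hit) for c, p in belief.items() if c != cell}
    return (1.0 - p_hit) * entropy(miss)

def greedy_move(belief, position, neighbours):
    """Infotaxis-style greedy rule: move to the neighbouring cell whose
    inspection yields the lowest expected remaining entropy."""
    return min(neighbours(position),
               key=lambda c: expected_entropy_after_visit(belief, c))
```

In the paper's extension, the same Bayesian update is also fed with information gained from observing other agents, which is what produces the flocking behaviour.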
Digested Information is a theory that aims to explain, at the non-semantic level of Information Theory, why it makes sense for one agent to observe another. Based on the formalism of Relevant Information, defined as the minimum amount of information an agent needs in order to determine its optimal strategy, I argue that, following its own motivation, an agent (1) obtains relevant information from the environment, (2) displays it in the environment through its own actions, and (3) is likely to display information at a higher density, relative to its bandwidth, than other parts of the environment. Furthermore, I argue that this information is also relevant to other, similar, agents, and that this could be used to motivate agent-agent interaction (such as observing other agents) in a framework where agent behaviour is determined by information maximisation.
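The notion of Relevant Information used above can be sketched formally (notation mine, a sketch following the verbal definition rather than the paper's exact formulation): it is the least amount of information about the world state S that any strategy must process while still achieving the optimal expected utility U*.

```latex
% Relevant information: minimise the state-action mutual information I(S;A)
% over all policies pi(a|s) that still attain the optimal expected utility U*.
I_{\mathrm{rel}}(S) = \min_{\pi(a \mid s)\,:\; \mathbb{E}_{\pi}[U] = U^{*}} I(S; A)
```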
(2010) Salge, C. and Polani, D.: Extended Abstract: From Infotaxis to Boids-like Swarm Behaviour, in Proc. of Artificial Life, Odense, Denmark, 2010. (PDF)
(2010) Salge, C. and Mahlmann, T.: Relevant Information as a Formalised Approach to Evaluate Game Mechanics, in Proc. of IEEE Computational Intelligence in Games Conference 2010, Copenhagen, pages 281-288, 2010. (PDF | Abstract | Bibtex)
We present a new approach that uses adaptive AI and Information Theory to aid the evaluation of game mechanics. Being able to evaluate the core game mechanics early during production is useful to improve the quality of a game and, ultimately, player satisfaction. A current problem with automated game evaluation via AI is to define measurable parameters that correlate with the quality of the game mechanics. We apply the Information Theory based concept of "Relevant Information" to this problem and argue that there is a relation between enjoyment-related game-play properties and the Relevant Information for an AI playing the game. We also demonstrate, with a simple game implementation, (a) how an adaptive AI can be used to approximate the Relevant Information, (b) how those measurable numerical values relate to certain game design flaws, and (c) how this knowledge can be used to improve the game.
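As an illustration of the quantity being approximated (my own sketch, not the paper's code): once an adaptive AI has been trained on a game, the empirical mutual information between the states it visits and the actions it chooses gives a crude estimate of how much state information its strategy actually uses.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical I(S;A) in bits from observed (state, action) pairs of a
    trained policy -- a rough proxy for the game's Relevant Information."""
    n = len(pairs)
    joint = Counter(pairs)
    ps = Counter(s for s, _ in pairs)
    pa = Counter(a for _, a in pairs)
    return sum((c / n) * math.log2((c / n) / ((ps[s] / n) * (pa[a] / n)))
               for (s, a), c in joint.items())
```

Intuitively, a game where the trained AI's actions are nearly independent of the state (I(S;A) near zero) offers little meaningful decision-making, which is the kind of design flaw the paper's measurements aim to surface.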
(2008) Salge, C., Lipski, C., Mahlmann, T. and Mathiak, B.: Using Genetically Optimized AIs to improve Gameplaying Fun for Strategical Games, in Proc. of SIGGRAPH Sandbox 2008, Los Angeles, pages 7-14, 2008. (PDF | Abstract | Bibtex)
Fun in computer games depends on many factors. While some factors, like uniqueness and humour, can only be measured by human subjects, in a strategy game the rule system is an important and measurable factor. Classics like chess and Go have a millennia-old history of success based on clever rule design. They have only a few rules and are relatively easy to understand, yet they offer myriad possibilities. Testing the depth of a rule set is very hard, especially for a rule system as complex as that of a classic strategy computer game. It is necessary, though, to ensure prolonged gaming fun.
This paper presents a way to build a tree-like network structure to organise the sensor inputs of a visual receptive field. The Information Distance Metric is used to repeatedly select the two sensors that contain the most information about each other. These are then encoded into a single variable of equal capacity with a mapping function that tries to create maximal entropy while maintaining a low information distance to the original sensors. The resulting tree determines which sensors are fused to reduce the input data while maintaining maximum information. The structure adapts to different environments of input images by encoding groups of preferred line structures or creating a higher resolution for areas with simulated movement. These effects are created without prior assumptions about the environment or the spatial configuration of the receptive field, and are cheap to compute, since only pairwise informational comparison of sensors is used.
Abstract: A first step towards social interaction is to observe other agents and their actions. The concept of "Relevant Information" is used to argue, from an information-theoretic perspective, why it would be beneficial to observe other agents, and why their actions should be relevant to me even if their goals are not. A simple grid-world model illustrates these points for a simple information-gathering task, and shows how to utilise this information to increase an agent's performance.
Colleagues in the Adaptive Systems Research Group who have similar research interests:
- Tom Anthony - Investigating information theoretic tools like empowerment to drive intelligent behaviour.
- Sander van Dijk - Using information theory for hierarchical structuring of behaviour and intelligent agent control.
- Malte Harder - Information-driven self-organization of agent collectives.