The iterated Prisoner's Dilemma: Early experiences with learning classifier system-based simple agents

Chen Lu Meng, Ramakrishnan Pakath

Research output: Contribution to journal › Article › peer-review

9 Scopus citations


Prior research on artificial agents/agencies involves entities using specifically tailored operational strategies (e.g., for information retrieval or purchase negotiation). In some situations, however, an agent must interact with others whose strategies are initially unknown and whose interests may run counter to its own. In such circumstances, pre-defining effective counter-strategies can be difficult or impractical. One solution, which may be viable in certain contexts, is to create agents that self-evolve increasingly effective strategies from rudimentary beginnings during actual deployment. Using the Iterated Prisoner's Dilemma (IPD) problem as a generic agent-interaction setting, we use the Learning Classifier System (LCS) paradigm to construct autonomously adapting "simple" agents. A simple agent attempts to cope by maintaining an evolving but potentially perennially incomplete and imperfect knowledge base. These agents operate against specifically tailored (non-adaptive) agents. We present a preliminary suite of simulation experiments and results. The promise evidenced leads us to identify several additional avenues of investigation that we are pursuing.
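The paper's actual LCS design is not given in this abstract, but the interaction setting it describes (an adaptive rule-based agent playing the IPD against a fixed-strategy opponent) can be illustrated with a minimal sketch. All names, parameters, and the simplified strength-update rule below are illustrative assumptions, not the authors' implementation; the learning rule is a crude stand-in for an LCS credit-assignment scheme such as the bucket brigade.

```python
import random

# Standard IPD payoff matrix: (row player's payoff, column player's payoff).
PAYOFF = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

class SimpleAgent:
    """Toy classifier-style agent (illustrative, not the paper's design).

    Rules map the opponent's last move (the "condition") to an action,
    each rule carrying a strength that is reinforced by payoffs.
    """
    def __init__(self, rng, epsilon=0.1, rate=0.2):
        self.rng = rng
        self.epsilon = epsilon      # exploration probability
        self.rate = rate            # strength-update step size
        # One rule per (condition, action); None = no history yet.
        self.strength = {(c, a): 1.0
                         for c in ('C', 'D', None) for a in ('C', 'D')}
        self.last_rule = None

    def act(self, opp_last):
        # Epsilon-greedy choice among the rules matching the condition.
        if self.rng.random() < self.epsilon:
            action = self.rng.choice(['C', 'D'])
        else:
            action = max(('C', 'D'),
                         key=lambda a: self.strength[(opp_last, a)])
        self.last_rule = (opp_last, action)
        return action

    def reward(self, payoff):
        # Nudge the fired rule's strength toward the payoff received.
        s = self.strength[self.last_rule]
        self.strength[self.last_rule] = s + self.rate * (payoff - s)

def tit_for_tat(opp_last):
    # A specifically tailored, non-adaptive opponent strategy.
    return opp_last if opp_last is not None else 'C'

def play(rounds=200, seed=0):
    """Run one IPD match; returns the adaptive agent's total payoff."""
    rng = random.Random(seed)
    agent = SimpleAgent(rng)
    a_last = b_last = None
    total = 0
    for _ in range(rounds):
        a = agent.act(b_last)
        b = tit_for_tat(a_last)
        my_pay, _ = PAYOFF[(a, b)]
        agent.reward(my_pay)
        total += my_pay
        a_last, b_last = a, b
    return total

score = play()
```

The seeded `random.Random` instance keeps runs reproducible; swapping `tit_for_tat` for another fixed strategy reproduces the abstract's setup of an adapting agent facing different non-adaptive opponents.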

Original language: English
Pages (from-to): 379-403
Number of pages: 25
Journal: Decision Support Systems
Issue number: 4
State: Published - Oct 2001


Keywords

  • Adaptive systems
  • Artificial agents
  • Iterated prisoner's dilemma
  • Learning classifier systems

ASJC Scopus subject areas

  • Management Information Systems
  • Information Systems
  • Developmental and Educational Psychology
  • Arts and Humanities (miscellaneous)
  • Information Systems and Management
