Learning that a cocaine reward is smaller than expected: A test of Redish's computational model of addiction

Katherine R. Marks, David N. Kearns, Chesley J. Christensen, Alan Silberberg, Stanley J. Weiss

Research output: Contribution to journal › Article › peer-review


Abstract

The present experiment tested the prediction of Redish's (2004) [7] computational model of addiction that drug reward expectation continues to grow even when the received drug reward is smaller than expected. Initially, rats were trained to press two levers, each associated with a large dose of cocaine. Then, the dose associated with one of the levers was substantially reduced. Thus, when rats first pressed the reduced-dose lever, they expected a large cocaine reward but received a small one. On subsequent choice tests, preference for the reduced-dose lever declined, demonstrating that the rats had learned to devalue that lever. The finding that rats lowered their reward expectation after receiving a smaller-than-expected cocaine reward contradicts the hypothesis that drug reinforcers produce a perpetual, non-correctable positive prediction error that causes the learned value of drug rewards to grow without bound. Instead, the present results suggest that standard error-correction learning rules apply even to drug reinforcers.
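For readers unfamiliar with the model under test, the sketch below contrasts a standard error-correction (delta-rule / TD-style) update with the modified prediction error Redish (2004) proposed for drug rewards, in which a dopamine term D prevents the error from falling below D. This is a simplified single-value illustration, not code or parameters from the paper: the learning rate, D, the reward magnitudes, and the omission of the discounted next-state term are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's procedure):
# compare a standard error-correction update with Redish's (2004)
# modified prediction error for drug rewards, where delta >= D > 0.

ALPHA = 0.1   # assumed learning rate
D = 1.0       # assumed drug-induced dopamine term in Redish's model

def standard_update(value, reward):
    """Standard rule: delta = reward - value can be negative,
    so the learned value tracks the received reward."""
    delta = reward - value
    return value + ALPHA * delta

def redish_update(value, reward):
    """Redish-style rule for drug rewards: delta = max(reward - value + D, D),
    so the error never falls below D and the value keeps growing."""
    delta = max(reward - value + D, D)
    return value + ALPHA * delta

v_std, v_redish = 0.0, 0.0

# Phase 1: large cocaine dose (reward = 10, arbitrary units).
for _ in range(50):
    v_std = standard_update(v_std, 10.0)
    v_redish = redish_update(v_redish, 10.0)

# Phase 2: the dose on this lever is substantially reduced (reward = 2).
for _ in range(50):
    v_std = standard_update(v_std, 2.0)
    v_redish = redish_update(v_redish, 2.0)

print(f"standard rule value after dose reduction: {v_std:.2f}")   # falls toward 2
print(f"Redish-rule value after dose reduction:   {v_redish:.2f}")  # continues to grow
```

Under these assumptions, the standard rule lowers the lever's value after the dose reduction, whereas the Redish-style rule does not; the experiment's finding of reduced preference is consistent with the former pattern.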

Original language: English
Pages (from-to): 204-207
Number of pages: 4
Journal: Behavioural Brain Research
Volume: 212
Issue number: 2
DOIs
State: Published - Oct 2010

Bibliographical note

Funding Information:
This research was supported by NIDA Grant R01-DA-08651 awarded to SJW and by NIDA Grant 5F31-DA-024493 awarded to CJC.

Keywords

  • Addiction
  • Cocaine
  • Dopamine
  • Learning
  • Rats
  • Reward prediction errors

ASJC Scopus subject areas

  • Behavioral Neuroscience
