Language, training, and experience in information system assessment

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

This paper proposes a framework of models for making information system assessments and provides empirical evidence relevant to the framework. Perceptions of the decision language and degree of structure appropriate to each model are tested, as are the impact of training and experience on the perceived usefulness of various assessment models. Results indicate that assessment models primarily based on quantitative language were perceived as more useful when executed as structured procedures, and models primarily based on qualitative language were perceived as more useful when executed as unstructured procedures. In addition, perceptions of a decision model's usefulness were affected by participants' training and experience. The findings suggest that no single model is perceived as rich enough to encompass a full range of decision languages and procedures, and that the perceived usefulness of any given model depends on an individual's training and experience. Triangulation and dialectic inquiry are suggested as possible multimodel strategies useful in enriching information system assessment practice.

Original language: English
Pages (from-to): 91-108
Number of pages: 18
Journal: Accounting, Management and Information Technologies
Volume: 1
Issue number: 1
DOIs
State: Published - 1991

Bibliographical note

Copyright 2014 Elsevier B.V., All rights reserved.

Keywords

  • Evaluation methods
  • Information system assessment
  • Knowledge
  • Language

ASJC Scopus subject areas

  • Management Information Systems
  • Information Systems
  • Organizational Behavior and Human Resource Management
  • Library and Information Sciences
  • Management of Technology and Innovation
