IT Systems Theories – Draft DBA Project
Modular Learning Outcomes
Upon successful completion of this module, the student will be able to satisfy the following outcomes:
Case
Demonstrate the ability to create a sample project using the knowledge gained in prior modules and the Module 5 SLP.
SLP
Identify and analyze a developed research problem and topic as it relates to a selected organization within the DSP process.
Discussion
Identify challenges in implementing common information technology solutions.
Module Overview
Module 2 focused on types of processes at the conceptual level. Module 4 looked at specific processes used in many firms. This module applies concepts and processes from this course and previous courses in order to develop the first stages of a suitable applied problem statement, identified within your selected organization of study, that you will investigate in your DSP.
The student will demonstrate understanding of the course by submitting a draft proposal for their personal DSP which contains a usable and identifiable problem statement within the student's selected organization used for the DSP.
Module 5 – Letter of Intent
IT Systems Theories – Draft DBA Project
The Letter of Intent (LOI) is designed to communicate your intention to conduct research at the selected site/organization and the topic you will investigate there. The LOI allows for identification and establishment of the professional relationship between you as the researcher and the various principal stakeholders involved. The LOI template should include the following elements:
- Title of proposed research
- Organization identified
- Principal stakeholder acknowledging the intended research and site selection
- Principal stakeholder offering permission to conduct said research at the site/organization
- Any additional information or stipulations to the proposed research and site selection
NOTE: The Letter of Intent (LOI) does not substitute for or replace the standard Site Permission Letter required within the IRB Application process.
After completing the Letter of Intent, including obtaining the necessary signatures, submit a scanned copy to the respective DOC800 Dropbox for LOI/DSP.
Letter of Intent to Allow Research
Date
Name of Student/Principal Researcher
Address
RE: Intent to allow Research – (Title of Research)
Dear Student/Principal Researcher:
In response to your request to conduct applied research at (Organization Name), as (Position/Title), I hereby confirm the intention to allow your research on (area of research/topic) to be conducted subject to final approval of your proposal and formal approval from the institutional IRB, if applicable.
Once your proposal is finalized and approved, you will be provided a formal approval to conduct your research at (Organization Name). This research may involve interviews and/or surveys with our personnel, observation of activities, secondary analysis of available data, and/or other data collection methods. All data collection will be reviewed and approved by us prior to implementation. The formal approval to conduct research will set forth any restrictions or limitations to your access or activities.
If you have any questions, please do not hesitate to call. I will serve as a point of contact and can be reached at (000) 000-0000 or [email protected].
Sincerely,
Name of Authorizing Administrator
Position in the Organization
Module 5 – Background
IT Systems Theories – Draft DBA Project
Search Terms: IT Theories, Technology Acceptance Model, Task-Technology Fit, IT Success Model, Unified Theory of Acceptance and Use of Technology, Business and IT Strategy Alignment, Enterprise Resource Planning Systems
Required Reading
There are no required readings for the case. The student will use parts of prior cases and SLPs and associated references for the Case Assignment.
Note: Unlike prior modules, we are not going to highlight the important sections of the first five Ph.D.-level empirical research studies used in the SLP. Please read the introductions, the background and theoretical sections, and the results/findings. Do not worry about understanding the sections that are statistical in nature; however, if you are interested, contact the instructor and he will explain them.
SLP Reading
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, September, 319-340.
Taylor, S., & Todd, P. A. (1995). Understanding information technology usage: A test of competing models. Information Systems Research, 6(2), 144-175.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95.
Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, June, 213-236.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Optional Reading
We recommend you search articles related to the following:
IT Service (Delivery) Quality
Business and IT Strategy Alignment
IT Usefulness and Ease of Use
Fred D. Davis
Computer and Information Systems
Graduate School of Business Administration
University of Michigan, Ann Arbor, Michigan 48109

Abstract

Valid measurement scales for predicting user acceptance of computers are in short supply. Most subjective measures used in practice are unvalidated, and their relationship to system usage is unknown. The present research develops and validates new scales for two specific variables, perceived usefulness and perceived ease of use, which are hypothesized to be fundamental determinants of user acceptance. Definitions for these two variables were used to develop scale items that were pretested for content validity and then tested for reliability and construct validity in two studies involving a total of 152 users and four application programs. The measures were refined and streamlined, resulting in two six-item scales with reliabilities of .98 for usefulness and .94 for ease of use. The scales exhibited high convergent, discriminant, and factorial validity. Perceived usefulness was significantly correlated with both self-reported current usage (r=.63, Study 1) and self-predicted future usage (r=.85, Study 2). Perceived ease of use was also significantly correlated with current usage (r=.45, Study 1) and future usage (r=.59, Study 2). In both studies, usefulness had a significantly greater correlation with usage behavior than did ease of use. Regression analyses suggest that perceived ease of use may actually be a causal antecedent to perceived usefulness, as opposed to a parallel, direct determinant of system usage. Implications are drawn for future research on user acceptance.

Keywords: User acceptance, end user computing, user measurement
ACM Categories: H.1.2, K.6.1, K.6.2, K.6.3
Information technology offers the potential for substantially improving white collar performance (Curley, 1984; Edelman, 1981; Sharda, et al., 1988). But performance gains are often obstructed by users' unwillingness to accept and use available systems (Bowen, 1986; Young, 1984). Because of the persistence and importance of this problem, explaining user acceptance has been a long-standing issue in MIS research (Swanson, 1974; Lucas, 1975; Schultz and Slevin, 1975; Robey, 1979; Ginzberg, 1981; Swanson, 1987). Although numerous individual, organizational, and technological variables have been investigated (Benbasat and Dexter, 1986; Franz and Robey, 1986; Markus and Bjorn-Anderson, 1987; Robey and Farrow, 1982), research has been constrained by the shortage of high-quality measures for key determinants of user acceptance. Past research indicates that many measures do not correlate highly with system use (DeSanctis, 1983; Ginzberg, 1981; Schewe, 1976; Srinivasan, 1985), and the size of the usage correlation varies greatly from one study to the next depending on the particular measures used (Baroudi, et al., 1986; Barki and Huff, 1985; Robey, 1979; Swanson, 1982, 1987). The development of improved measures for key theoretical constructs is a research priority for the information systems field.
Aside from their theoretical value, better measures for predicting and explaining system use would have great practical value, both for vendors who would like to assess user demand for new design ideas, and for information systems managers within user organizations who would like to evaluate these vendor offerings.
Unvalidated measures are routinely used in practice today throughout the entire spectrum of design, selection, implementation and evaluation activities. For example: designers within vendor organizations such as IBM (Gould, et al., 1983), Xerox (Brewley, et al., 1983), and Digital Equipment Corporation (Good, et al., 1986) measure user perceptions to guide the development of new information technologies and products; industry publications often report user surveys (e.g., Greenberg, 1984; Rushinek and Rushinek, 1986); several methodologies for software selection call for subjective user inputs (e.g., Goslar, 1986; Klein and Beck, 1987); and contemporary design principles emphasize measuring user reactions throughout the entire design process (Anderson and Olson, 1985; Gould and Lewis, 1985; Johansen and Baker, 1984; Mantei and Teorey, 1988; Norman, 1983; Shneiderman, 1987). Despite the widespread use of subjective measures in practice, little attention is paid to the quality of the measures used or how well they correlate with usage behavior. Given the low usage correlations often observed in research studies, those who base important business decisions on unvalidated measures may be getting misinformed about a system's acceptability to users.

MIS Quarterly/September 1989 319
The purpose of this research is to pursue better measures for predicting and explaining use. The investigation focuses on two theoretical constructs, perceived usefulness and perceived ease of use, which are theorized to be fundamental determinants of system use. Definitions for these constructs are formulated and the theoretical rationale for their hypothesized influence on system use is reviewed. New, multi-item measurement scales for perceived usefulness and perceived ease of use are developed, pretested, and then validated in two separate empirical studies. Correlation and regression analyses examine the empirical relationship between the new measures and self-reported indicants of system use. The discussion concludes by drawing implications for future research.
What causes people to accept or reject information technology? Among the many variables that may influence system use, previous research suggests two determinants that are especially important. First, people tend to use or not use an application to the extent they believe it will help them perform their job better. We refer to this first variable as perceived usefulness. Second, even if potential users believe that a given application is useful, they may, at the same time, believe that the system is too hard to use and that the performance benefits of usage are outweighed by the effort of using the application. That is, in addition to usefulness, usage is theorized to be influenced by perceived ease of use.
Perceived usefulness is defined here as "the degree to which a person believes that using a particular system would enhance his or her job performance." This follows from the definition of the word useful: "capable of being used advantageously." Within an organizational context, people are generally reinforced for good performance by raises, promotions, bonuses, and other rewards (Pfeffer, 1982; Schein, 1980; Vroom, 1964). A system high in perceived usefulness, in turn, is one for which a user believes in the existence of a positive use-performance relationship.
Perceived ease of use, in contrast, refers to "the degree to which a person believes that using a particular system would be free of effort." This follows from the definition of "ease": "freedom from difficulty or great effort." Effort is a finite resource that a person may allocate to the various activities for which he or she is responsible (Radner and Rothschild, 1975). All else being equal, we claim, an application perceived to be easier to use than another is more likely to be accepted by users.
The theoretical importance of perceived usefulness and perceived ease of use as determinants of user behavior is indicated by several diverse lines of research. The impact of perceived usefulness on system utilization was suggested by the work of Schultz and Slevin (1975) and Robey (1979). Schultz and Slevin (1975) conducted an exploratory factor analysis of 67 questionnaire items, which yielded seven dimensions. Of these, the "performance" dimension, interpreted by the authors as the perceived "effect of the model on the manager's job performance," was most highly correlated with self-predicted use of a decision model (r=.61). Using the Schultz and Slevin questionnaire, Robey (1979) finds the performance dimension to be most correlated with two objective measures of system usage (r=.79 and .76). Building on Vertinsky, et al.'s (1975) expectancy model, Robey (1979) theorizes that: "A system that does not help people perform their jobs is not likely to be received favorably in spite of careful implementation efforts" (p. 537). Although the perceived use-performance contingency, as presented in Robey's (1979) model, parallels our definition of perceived usefulness, the use of Schultz and Slevin's (1975) performance factor to operationalize performance expectancies is problematic for several reasons: the instrument is empirically derived via exploratory factor analysis; a somewhat low ratio of sample size to items is used (2:1); four of thirteen items have loadings below .5; and several of the items clearly fall outside the definition of expected performance improvements (e.g., "My job will be more satisfying," "Others will be more aware of what I am doing," etc.).
An alternative expectancy-theoretic model, derived from Vroom (1964), was introduced and tested by DeSanctis (1983). The use-performance expectancy was not analyzed separately from performance-reward instrumentalities and reward valences. Instead, a matrix-oriented measurement procedure was used to produce an overall index of "motivational force" that combined these three constructs. "Force" had small but significant correlations with usage of a DSS within a business simulation experiment (correlations ranged from .04 to .26). The contrast between DeSanctis's correlations and the ones observed by Robey underscores the importance of measurement in predicting and explaining use.
Self-efficacy theory

The importance of perceived ease of use is supported by Bandura's (1982) extensive research on self-efficacy, defined as "judgments of how well one can execute courses of action required to deal with prospective situations" (p. 122). Self-efficacy is similar to perceived ease of use as defined above. Self-efficacy beliefs are theorized to function as proximal determinants of behavior. Bandura's theory distinguishes self-efficacy judgments from outcome judgments, the latter being concerned with the extent to which a behavior, once successfully executed, is believed to be linked to valued outcomes. Bandura's "outcome judgment" variable is similar to perceived usefulness. Bandura argues that self-efficacy and outcome beliefs have differing antecedents and that, "In any given instance, behavior would be best predicted by considering both self-efficacy and outcome beliefs" (p. 140).
Hill, et al. (1987) find that both self-efficacy and outcome beliefs exert an influence on decisions to learn a computer language. The self-efficacy paradigm does not offer a general measure applicable to our purposes since efficacy beliefs are theorized to be situationally specific, with measures tailored to the domain under study (Bandura, 1982). Self-efficacy research does, however, provide one of several theoretical perspectives suggesting that perceived ease of use and perceived usefulness function as basic determinants of user behavior.
Cost-benefit paradigm

The cost-benefit paradigm from behavioral decision theory (Beach and Mitchell, 1978; Johnson and Payne, 1985; Payne, 1982) is also relevant to perceived usefulness and ease of use. This research explains people's choice among various decision-making strategies (such as linear compensatory, conjunctive, disjunctive and elimination-by-aspects) in terms of a cognitive trade-off between the effort required to employ the strategy and the quality (accuracy) of the resulting decision. This approach has been effective for explaining why decision makers alter their choice strategies in response to changes in task complexity. Although the cost-benefit approach has mainly concerned itself with unaided decision making, recent work has begun to apply the same form of analysis to the effectiveness of information display formats (Jarvenpaa, 1989; Kleinmuntz and Schkade, 1988).
Cost-benefit research has primarily used objective measures of accuracy and effort in research studies, downplaying the distinction between objective and subjective accuracy and effort. Increased emphasis on subjective constructs is warranted, however, since (1) a decision maker's choice of strategy is theorized to be based on subjective as opposed to objective accuracy and effort (Beach and Mitchell, 1978), and (2) other research suggests that subjective measures are often in disagreement with their objective counterparts (Abelson and Levi, 1985; Adelbratt and Montgomery, 1980; Wright, 1975). Introducing measures of the decision maker's own perceived costs and benefits, independent of the decision actually made, has been suggested as a way of mitigating criticisms that the cost/benefit framework is tautological (Abelson and Levi, 1985). The distinction made herein between perceived usefulness and perceived ease of use is similar to the distinction between subjective decision-making performance and effort.
Adoption of innovations

Research on the adoption of innovations also suggests a prominent role for perceived ease of use. In their meta-analysis of the relationship between the characteristics of an innovation and its adoption, Tornatzky and Klein (1982) find that compatibility, relative advantage, and complexity have the most consistent significant relationships across a broad range of innovation types. Complexity, defined by Rogers and Shoemaker (1971) as "the degree to which an innovation is perceived as relatively difficult to understand and use" (p. 154), parallels perceived ease of use quite closely. As Tornatzky and Klein (1982) point out, however, compatibility and relative advantage have both been dealt with so broadly and inconsistently in the literature as to be difficult to interpret.
Evaluation of information reports

Past research within MIS on the evaluation of information reports echoes the distinction between usefulness and ease of use made herein. Larcker and Lessig (1980) factor analyzed six items used to rate four information reports. Three items load on each of two distinct factors: (1) perceived importance, which Larcker and Lessig define as "the quality that causes a particular information set to acquire relevance to a decision maker," and the extent to which the information elements are "a necessary input for task accomplishment," and (2) perceived usableness, which is defined as the degree to which "the information format is unambiguous, clear or readable" (p. 123). These two dimensions are similar to perceived usefulness and perceived ease of use as defined above, respectively, although Larcker and Lessig refer to the two dimensions collectively as "perceived usefulness." Reliabilities for the two dimensions fall in the range of .64-.77, short of the .80 minimal level recommended for basic research. Correlations with actual use of information reports were not addressed in their study.
Channel disposition model

Swanson (1982, 1987) introduced and tested a model of "channel disposition" for explaining the choice and use of information reports. The concept of channel disposition is defined as having two components: attributed information quality and attributed access quality. Potential users are hypothesized to select and use information reports based on an implicit psychological trade-off between information quality and associated costs of access. Swanson (1987) performed an exploratory factor analysis in order to measure information quality and access quality. A five-factor solution was obtained, with one factor corresponding to information quality (Factor #3, "value"), and one to access quality (Factor #2, "accessibility"). Inspecting the items that load on these factors suggests a close correspondence to perceived usefulness and ease of use. Items such as "important," "relevant," "useful," and "valuable" load strongly on the value dimension. Thus, value parallels perceived usefulness. The fact that relevance and usefulness load on the same factor agrees with information scientists, who emphasize the conceptual similarity between the usefulness and relevance notions (Saracevic, 1975). Several of Swanson's "accessibility" items, such as "convenient," "controllable," "easy," and "unburdensome," correspond to perceived ease of use as defined above. Although the study was more exploratory than confirmatory, with no attempts at construct validation, it does agree with the conceptual distinction between usefulness and ease of use. Self-reported information channel use correlated .20 with the value dimension and .13 with the accessibility dimension.
Non-MIS studies

Outside the MIS domain, a marketing study by Hauser and Simmie (1981) concerning user perceptions of alternative communication technologies similarly derived two underlying dimensions: ease of use and effectiveness, the latter being similar to the perceived usefulness construct defined above. Both ease of use and effectiveness were influential in the formation of user preferences regarding a set of alternative communication technologies. The human-computer interaction (HCI) research community has heavily emphasized ease of use in design (Branscomb and Thomas, 1984; Card, et al., 1983; Gould and Lewis, 1985). For the most part, however, these studies have focused on objective measures of ease of use, such as task completion time and error rates. In many vendor organizations, usability testing has become a standard phase in the product development cycle, with large investments in test facilities and instrumentation. Although objective ease of use is clearly relevant to user performance given the system is used, subjective ease of use is more relevant to the users' decision whether or not to use the system and may not agree with the objective measures (Carroll and Thomas, 1988).
Convergence of findings

There is a striking convergence among the wide range of theoretical perspectives and research studies discussed above. Although Hill, et al. (1987) examined learning a computer language, Larcker and Lessig (1980) and Swanson (1982, 1987) dealt with evaluating information reports, and Hauser and Simmie (1981) studied communication technologies, all are supportive of the conceptual and empirical distinction between usefulness and ease of use. The accumulated body of knowledge regarding self-efficacy, contingent decision behavior and adoption of innovations provides theoretical support for perceived usefulness and ease of use as key determinants of behavior.
From multiple disciplinary vantage points, perceived usefulness and perceived ease of use are indicated as fundamental and distinct constructs that are influential in decisions to use information technology. Although certainly not the only variables of interest in explaining user behavior (for other variables, see Cheney, et al., 1986; Davis, et al., 1989; Swanson, 1988), they do appear likely to play a central role. Improved measures are needed to gain further insight into the nature of perceived usefulness and perceived ease of use, and their roles as determinants of computer use.
Scale Development and Pretest

A step-by-step process was used to develop new multi-item scales having high reliability and validity. The conceptual definitions of perceived usefulness and perceived ease of use, stated above, were used to generate 14 candidate items for each construct from past literature. Pretest interviews were then conducted to assess the semantic content of the items. Those items that best fit the definitions of the constructs were retained, yielding 10 items for each construct. Next, a field study (Study 1) of 112 users concerning two different interactive computer systems was conducted in order to assess the reliability and construct validity of the resulting scales. The scales were further refined and streamlined to six items per construct. A lab study (Study 2) involving 40 participants and two graphics systems was then conducted. Data from the two studies were then used to assess the relationship between usefulness, ease of use, and self-reported usage.
Psychometricians emphasize that the validity of a measurement scale is built in from the outset. As Nunnally (1978) points out, "Rather than test the validity of measures after they have been constructed, one should ensure the validity by the plan and procedures for construction" (p. 258). Careful selection of the initial scale items helps to assure the scales will possess "content validity," defined as "the degree to which the score or scale being used represents the concept about which generalizations are to be made" (Bohrnstedt, 1970, p. 91). In discussing content validity, psychometricians often appeal to the "domain sampling model" (Bohrnstedt, 1970; Nunnally, 1978), which assumes there is a domain of content corresponding to each variable one is interested in measuring. Candidate items representative of the domain of content should be selected. Researchers are advised to begin by formulating conceptual definitions of what is to be measured and preparing items to fit the construct definitions (Anastasi, 1986).
Following these recommendations, candidate items for perceived usefulness and perceived ease of use were generated based on their conceptual definitions, stated above, and then pretested in order to select those items that best fit the content domains. The Spearman-Brown Prophecy formula was used to choose the number of items to generate for each scale. This formula estimates the number of items needed to achieve a given reliability based on the number of items and reliability of comparable existing scales. Extrapolating from past studies, the formula suggests that 10 items would be needed for each perceptual variable to achieve reliability of at least .80 (Davis, 1986). Adding four additional items for each construct to allow for item elimination, it was decided to generate 14 items for each construct.
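The Spearman-Brown prophecy formula referred to above predicts the reliability of a scale lengthened by a factor k as k*rho / (1 + (k-1)*rho), and can be inverted to find how much longer a scale must be to reach a target reliability. The sketch below illustrates this; the starting figures (a comparable 5-item scale with reliability .67) are assumptions for illustration, not values taken from the paper.

```python
def spearman_brown(k, rho):
    """Predicted reliability of a scale lengthened by factor k,
    given its current reliability rho (Spearman-Brown prophecy formula)."""
    return k * rho / (1 + (k - 1) * rho)

def length_factor(rho_current, rho_target):
    """Lengthening factor k needed to raise reliability from
    rho_current to rho_target (algebraic inverse of the formula above)."""
    return (rho_target * (1 - rho_current)) / (rho_current * (1 - rho_target))

# Assumed example: a comparable 5-item scale with reliability .67.
k = length_factor(0.67, 0.80)   # factor needed to reach .80
items_needed = 5 * k            # roughly 10 items, matching the text's estimate
```

Doubling a scale (k=2) with reliability .50, for instance, yields a predicted reliability of 2*.50/1.50, about .67, which shows why lengthening has diminishing returns.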
The initial item pools for perceived usefulness and perceived ease of use are given in Tables 1 and 2, respectively. In preparing candidate items, 37 published research papers dealing with user reactions to interactive systems were reviewed in order to identify various facets of the constructs that should be measured (Davis, 1986). The items are worded in reference to "the electronic mail system," which is one of the two test applications investigated in Study 1, reported below. The items within each pool tend to have a lot of overlap in their meaning, which is consistent with the fact that they are intended as measures of the same underlying construct. Though different individuals may attribute slightly different meaning to particular item statements, the goal of the multi-item approach is to reduce any extraneous effects of individual items, allowing idiosyncrasies to be cancelled out by other items in order to yield a more pure indicant of the conceptual variable.
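The six-item reliabilities reported in the abstract (.98 and .94) are internal-consistency coefficients for exactly this kind of multi-item scale. A minimal sketch of how one common such coefficient, Cronbach's alpha, is computed from item responses (the ratings below are made up for illustration):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a multi-item scale.
    scores: one row per respondent, one column (rating) per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(scores[0])
    item_columns = list(zip(*scores))  # transpose to per-item columns
    item_var_sum = sum(pvariance(col) for col in item_columns)
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Made-up 7-point ratings from five respondents on a three-item scale.
ratings = [
    [7, 6, 7],
    [5, 5, 6],
    [3, 4, 3],
    [6, 6, 5],
    [2, 3, 2],
]
alpha = cronbach_alpha(ratings)  # high when items move together across respondents
```

When idiosyncratic item effects cancel out, as the paragraph above describes, the item columns covary strongly and alpha approaches 1.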
Pretest interviews were performed to further enhance content validity by assessing the correspondence between