Trust in Technology: Its Components and Measures

Trust in a Specific Technology: An Investigation of its Components and Measures

D. H. MCKNIGHT

Eli Broad College of Business, Michigan State University, U. S. A

M. CARTER

College of Business and Behavioral Sciences, Clemson University, U. S. A.

J. B. THATCHER

College of Business and Behavioral Sciences, Clemson University, U. S. A.

and

P.F. CLAY

College of Business, Washington State University, U. S. A.

______

Trust plays an important role in many Information Systems (IS)-enabled situations. Most IS research employs trust as a measure of interpersonal or person-to-firm relations, such as trust in a Web vendor or a virtual team member. Although trust in other people is important, this paper suggests that trust in the information technology (IT) itself also plays a role in shaping IT-related beliefs and behavior. To advance trust and technology research, this paper presents a set of trust in technology construct definitions and measures. We also empirically examine these construct measures using tests of convergent, discriminant, and nomological validity. This study contributes to the literature by providing: a) a framework that differentiates trust in technology from trust in people, b) a theory-based set of definitions necessary for investigating different kinds of trust in technology, and c) validated trust in technology measures useful to research and practice.

Categories and Subject Descriptors: K.8.m PERSONAL COMPUTING—Miscellaneous

General Terms: Human Factors

Additional Key Words and Phrases: Trust, Trust in Technology, Construct Development

______

1. INTRODUCTION

Trust is commonly defined as an individual’s willingness to depend on another party because of the characteristics of the other party [Rousseau et al. 1998]. This study concentrates on the latter half of this definition, the characteristics or attributes of the trustee, usually termed ‘trust’ or ‘trusting beliefs.’ Research has found trust to be not only useful, but also central [Golembiewski and McConkie 1975] to understanding individual behavior in diverse domains such as work group interaction [Jarvenpaa and Leidner 1998; Mayer et al. 1995] or commercial relationships [Arrow 1974]. For example, Jarvenpaa and Leidner [1998] report that swift trust influences how “virtual peers” interact in globally distributed teams. Trust is crucial to almost any type of situation in which either uncertainty exists or undesirable outcomes are possible [Fukuyama 1995; Luhmann 1979].

Within the Information Systems (IS) domain, as in other fields, trust is usually examined and defined in terms of trust in people, without regard for trust in the technology itself. IS trust research primarily examines how trust in people affects IT acceptance. For example, trust in specific Internet vendors [Gefen et al. 2003; Kim 2008; Lim et al. 2006; McKnight et al. 2002; Stewart 2003] has been found to influence Web consumers’ beliefs and behavior [Clarke 1999]. Additionally, research has used a subset of trust in people attributes—i.e., ability, benevolence, and integrity—to study trust in web sites [Vance et al. 2008] and trust in online recommendation agents [Wang and Benbasat 2005]. In general, Internet research provides evidence that trust in another actor (i.e., a web vendor or person) and/or trust in an agent of another actor (i.e., a recommendation agent) influences individual decisions to use technology. Comparatively little research directly examines trust in a technology, that is, in an IT artifact.

______

Authors’ addresses: D. H. McKnight, Department of Accounting and Information Systems, Eli Broad College of Business, Michigan State University, East Lansing, MI 48824, U. S. A.; M. Carter, Department of Management, College of Business and Behavioral Sciences, Clemson University, Clemson, SC 29634, U. S. A.; J. B. Thatcher, Department of Management, College of Business and Behavioral Sciences, Clemson University, Clemson, SC 29634, U. S. A.; P.F. Clay, Department of Entrepreneurship and Information Systems, College of Business, Washington State University, Pullman, WA 99164, U. S. A.

To an extent, the research on trust in recommendation agents (RAs) answers the call to focus on the IT artifact [Orlikowski and Iacono 2001]. RAs qualify as IT artifacts since they are automated online assistants that help users decide among products. Thus, to study an RA is to study an IT artifact. However, RAs tend to imitate human characteristics and interact with users in human-like ways; they may even look human-like. Because of this, RA trust studies have measured trust in RAs using trust-in-people scales. Thus, the RA has not actually been studied in terms of its technological trust traits, but rather in terms of its human trust traits (i.e., the RA is treated as a human surrogate).

The primary difference between this study and prior studies is that we focus on trust in the technology itself instead of trust in people, organizations, or human surrogates. The purpose of this study is to develop trust in technology definitions and measures and to test how they work within a nomological network. This helps address the problem that IT trust research focused on trust in people has not profited from additionally considering trust in the technology itself. Just as the Technology Acceptance Model’s (TAM) perceived usefulness and ease of use concepts directly focus on the attributes of the technology itself, so our focus is on the trust-related attributes of the technology itself. This study examines the IT artifact more directly than past studies, answering Orlikowski and Iacono’s call. Our belief is that by focusing on trust in the technology, we can better determine what it is about technology that makes the technology itself trustworthy, irrespective of the people and human structures that surround the technology. This focus should yield new insights into how trust works in a technological context.

To gain a more nuanced view of trust’s implications for IT use, MIS research needs to examine how users’ trust in the technology itself relates to value-added post-adoption use of IT. In this study, technology is defined as the IT software artifact, with whatever functionality is programmed into it. By focusing on the technology itself, trust researchers can evaluate how trusting beliefs regarding specific attributes of the technology relate to individual IT acceptance and post-adoption behavior. By so doing, research will help extend understanding of individuals’ value-added technology use after an IT “has been installed, made accessible to the user, and applied by the user in accomplishing his/her work activities” [Jasperson et al. 2005].

In order to link trust to value-added applications of existing workplace IT, this paper advances a conceptual definition and operationalization of trust in technology. In doing so, we explain how trust in technology differs from trust in people. We also develop a model that explains how trust in technology predicts the extent to which individuals continue using that technology. This is important because scant research has examined how technology-oriented trusting beliefs relate to behavioral beliefs that shape post-adoption technology use [Thatcher et al. 2011]. Thus, to further understanding of trust and individual technology use, this study addresses the following research questions: What is the nomological network surrounding trust in technology? What is the influence of trust in technology on individuals’ post-adoptive technology use behaviors?

In answering these questions, this study draws on IS literature on trust to develop a taxonomy of trust in technology constructs that extends research on trust in the context of IT use. By distinguishing between trust in technology and trust in people, our work affords researchers an opportunity to tease apart how beliefs about a vendor, such as Microsoft or Google, relate to cognitions about features of its products. By providing a literature-based conceptual and operational definition of trust in technology, our work provides research and practice with a framework for examining the interrelationships among different forms of trust and post-adoption technology use.

2. THEORETICAL FOUNDATION

IS research has primarily examined the influence of trust in people on individual decisions to use technology. One explanation for this is that it seems more “natural” to trust a person than to trust a technology. In fact, people present considerable uncertainty to the trustor because of their volition (i.e., the power to choose)—something that technology usually lacks. However, some researchers have stretched this idea so far as to doubt the viability of the trust in technology concept: “People trust people, not technology” [Friedman et al. 2000: 36]. This extreme position assumes that trust exists only when the trustee has volition and moral agency, i.e., the ability to do right or wrong. It also assumes that trust is defined narrowly as “accepted vulnerability to another’s…ill will (or lack of good will) toward one” [Friedman et al. 2000: 34]. This view suggests that technology, without a will of its own, cannot fit within this human-bound definition of trust. However, the trust literature employs a large number of definitions, many of which extend beyond this narrow view (see [McKnight and Chervany 1996]).

This paper creates trust in technology definitions and constructs that are more palatable to apply to technology than the interpersonal trust constructs used in other papers that study trust in technology. Our position is that trust situations arise when one must make oneself vulnerable by relying on another person or object, regardless of the trust object’s will or volition. Perhaps the most basic dictionary meaning of trust is to depend or rely on another [McKnight and Chervany 1996]. Thus, if one can depend on an IT’s attributes under uncertainty, then trust in technology is a viable concept. For instance, a business person can say, “I trust Blackberry®’s email system to deliver messages to my phone.” Here the trustor relies on the Blackberry device to manage email and accepts vulnerabilities tied to network outages or device failures. Hence, similar to trust in people, trust in an IT involves accepting the vulnerability that it may or may not complete a task.

Different Types of Trust

Researchers (e.g., [Lewicki and Bunker 1996; Paul and McDaniel 2004]) suggest different types of trust develop as trust relationships evolve. Initial trust rests on judgments the trustor makes before having experience with the trustee. The online trust literature has often focused on initial trust in web vendors (see Appendix A). This research (e.g., [Gefen et al. 2003; McKnight et al. 2002; Vance et al. 2008]) finds that initial trust in web vendors influences online purchase intentions. One form of initial trust is calculus-based trust [Lewicki and Bunker 1996], in which the trustor assesses the costs and benefits of extending trust. This implies the trustor makes a rational decision about the situation before extending trust [Coleman 1990]. By contrast, we use the social-psychological view of trust, which concerns perceptions of the trustee’s attributes.

Once familiar with a trustee, trustors form knowledge-based or experiential trust. Knowledge-based trust means the trustor knows the other party well enough to predict the trustee’s behavior in a situation [Lewicki and Bunker 1996]. This assumes a history of trustor-trustee interactions. In contrast to initial trust, which may erode quickly when costs and benefits change, knowledge-based trust is more persistent. Because trustors are familiar with the eccentricities of a trustee, they are more likely to continue the relationship even when circumstances change or performance lapses [Lewicki and Bunker 1996].

In recent years, limited IS trust research (e.g., [Pavlou 2003; Lippert 2007; Thatcher et al. 2011]) has investigated knowledge-based trust in technology. These studies provide evidence that it is technology knowledge that informs post-adoptive use behaviors, not cost vs. benefit assessments. The fact that other IS constructs based on cost/benefit assessments (e.g., perceived usefulness and perceived ease of use) have been shown to have less predictive power in a post-adoptive context [Kim and Malhotra 2005] supports this view. Thus, developing a knowledge-based trust in technology construct may provide insight into post-adoptive technology use. Further, even though some studies examine trust based on technology attributes, they typically do not use trust in technology measures (Appendix A). Rather, they either use trust in people measures or non-trust-related measures like website quality, which is a distinct construct [McKnight et al. 2002]. This underscores the need for trust in technology constructs and measures.

Contextual Condition

Whether they involve people or technology, trust situations feature risk and uncertainty (Table I). Trustors lack total control over outcomes because they depend on either people or a technology to complete a task [Riker 1971]. Depending on another requires the trustor to accept the risk that the trustee may not fulfill expected responsibilities, whether intentionally or not. That is, under conditions of uncertainty, one relies on a person who may intentionally (i.e., by moral choice) not fulfill their role. Alternatively, one relies on a technology which may not demonstrate the capability (i.e., without intention) to fulfill its role. For example, when an individual trusts a cloud-based application, such as Dropbox, to save data, one becomes exposed to risk and uncertainty tied to transmitting data over the Internet and storing confidential data on a server. Regardless of the source of failure, technology users assume the risk of incurring negative consequences if an application fails to act as expected [Bonoma 1976], which is similar to the risk the trustor incurs if a human trustee fails to prove worthy of interpersonal trust. Hence, both trust in people and trust in technology involve risk.

Table I: Conceptual Comparison—Trust in People versus Trust in Technology

Contextual Condition
  Trust in People: Risk, uncertainty, lack of total control
  Trust in Technology: Risk, uncertainty, lack of total user control

Object of Dependence
  Trust in People: People—in terms of moral agency and both volitional and non-volitional factors
  Trust in Technology: Technologies—in terms of amoral and non-volitional factors only

Nature of the Trustor’s Expectations (regarding the Object of Dependence)
  Trust in People:
    1. Do things for you in a competent way. (ability [Mayer et al. 1995])
    2. Are caring and considerate of you; are benevolent towards you; possess the will and moral agency to help you when needed. (benevolence [Mayer et al. 1995])
    3. Are consistent in 1 and 2 above. (predictability [McKnight et al. 1998])
  Trust in Technology:
    1. Demonstrate possession of the needed functionality to do a required task.
    2. Are able to provide you effective help when needed (e.g., through a help menu).
    3. Operate reliably or consistently without failing.

Object of Dependence

Trust in people and trust in technology differ in terms of the nature of the object of dependence (Table I, row 2). With the former, one trusts a person (a moral and volitional agent); with the latter, one trusts a specific technology (a human-created artifact with a limited range of capabilities that lacks volition [i.e., will] and moral agency). For example, when a technology user chooses between relying on a human copy editor or a word processing program, the decision reflects a comparison of the copy editor’s competence and willingness (reflecting volition) to take time to carefully edit the paper versus the word processing program’s ability (reflecting no volition) to reliably identify misspelled words or errors in grammar. Further, while a benevolent human copy editor may catch the misuse of a correctly spelled word and make appropriate changes, a word processing program can only be expected to do what it is programmed to do. Because technology lacks volition and moral agency, IT-related trust necessarily reflects beliefs about a technology’s characteristics rather than its will or motives. This does not mean trust in technology is devoid of emotion, however. Emotion arises whenever a person’s plans or goals are interrupted [Berscheid 1993]. Because we depend on less-than-reliable technology for many tasks, technology can interrupt our plans and arouse emotion. For this reason, trust in technology will often reflect the positive or negative emotions people develop towards a technology.

Nature of Trustor’s Expectations

When forming trust in people and technology, individuals consider different attributes of the object of dependence (see Table I, bottom section). Trust (more accurately called trusting beliefs) means beliefs that a person or technology has the attributes necessary to perform as expected in a situation [Mayer et al. 1995]. As with trust in people, users’ assessments of a technology’s attributes reflect their beliefs about its ability to deliver on the promise of its objective characteristics. Even if an objective technology characteristic exists, users’ beliefs about performance may differ based on their experience or the context of use. When comparing trust in people and technology, users express expectations about different attributes:

  • Competence vs. Functionality – With trust in people, one assesses the efficacy of the trustee to fulfill a promise in terms of their ability or power to do something for us [Barber 1983]. For example, an experienced lawyer might develop the capability to argue a case effectively. With technology (Table I, Nature of Trustor’s Expectations entry 1), users consider whether the technology delivers on the functionality promised by providing the feature sets needed to complete a task [McKnight 2005]. For example, while a payroll system may have the features necessary to produce a correct payroll for a set of employees, trust in the technology’s functionality hinges on that system’s capability to properly account for various taxes and deductions. The competence of a person and the functionality of a technology are similar because both represent users’ expectations about the trustee’s capability.
  • Benevolence vs. Helpfulness – With people, one hopes they care enough to offer help when needed [Rempel et al. 1985]. With technology (Table I, entry 2), users sense no caring emotions because technology itself has no moral agency. However, users do hope that a technology’s help function will provide the advice necessary to complete a task [McKnight 2005]. Evaluating helpfulness is important because, while most software has a help function, there may be substantial variance in whether users perceive that the advice offered effectively enables task performance. Consequently, trusting beliefs in helpfulness represent users’ beliefs that the technology provides adequate, effective, and responsive help.
  • Predictability/Integrity vs. Reliability – In both cases (Table I, entry 3), we hope trustees are consistent, predictable, or reliable [Giffin 1967; McKnight 2005]. With people, predictability refers to the degree to which an individual can be relied upon to act consistently. This is risky due to people’s volition, or freedom to choose. Although technology has no volition, it still may not function consistently due to built-in flaws or situational events that cause failures. By operating continually (i.e., with little or no downtime) or by responding predictably to inputs (e.g., printing on command), a technology can shape users’ perceptions of its consistency and reliability.

Note that the above expectations are perceptual, rather than objective, in nature. Having delimited a role for the knowledge-based trust in technology construct and described similarities and differences between trust in people and trust in technology, we turn to developing definitions of different types of trust in technology. In each case, the trust in technology definition corresponds to a trust in people definition so that it remains grounded in the trust literature.