Communication Ethics and Human-Computer Cognitive Systems

Charles Webster 

 

School of Health Sciences

Duquesne University

Pittsburgh, Pennsylvania, USA

webster@globe.edrc.cmu.edu

Introduction

Communication ethics, traditionally, involves the nature of the speaker (such as their character, good or bad), the quality of their arguments (for example, logical versus emotional appeals), and the manner in which presentation contributes to long-term goals (of the individual, the community, society, religious deities, etc.) (Anderson, 1991). These dimensions interact in complex ways and should concern designers of human-computer cognitive systems.

 To this author a cognitive system is any complex information processing system that perceives, problem solves, learns, and communicates. Cognitive systems can be naturally evolved, intentionally designed, or, in the case of a human-computer cognitive system, both. Any discussion of cognitive systems occurs in the context of the popular notion of "cyborg," a "human individual who has some of its bodily parts controlled by cybernetically operating devices" (The American Heritage Dictionary, quoted in Morse, 1994, p. 198). Replacing "bodily parts" with "cognitive processes" in the previous quotation results in one kind of cognitive system. Perhaps in the future computers will be implanted into the brain to assume functions lost to disease or trauma ("direct neural jacking" in cyberpunk parlance). For now the interaction of machine and user is that of tool and tool user. Cognitive systems engineering views the combined interactions of tool and user as a joint cognitive system whose performance is to be maximized with the use of cognitive technology (Woods, 1986; Dalal & Kasper, 1994). 

Among the most intimate tool-user interactions is that of patient and prosthesis, such as a person who relies on an augmentative communication device, or a blind individual who uses a video camera and tactile monitor. At one end of a spectrum are these unusual people using tools to adapt to usual circumstances. In the middle are usual people using tools to adapt to usual circumstances, such as a manager interacting with intelligent software to write polished employee evaluations. At the other extreme are usual people using tools to adapt to unusual circumstances, such as a fighter pilot locked in a tight feedback-loop embrace with a complex dynamic system, or the World Wide Web surfer. Other tool use situations are not so obvious, for example the interaction between a human client and an automated advisor or clerk, or an individual's reliance on a virtual community and its conventions.

All of these cognitive systems involve communication between human user and digital partner. Sometimes the emphasis is on signals passed between human and machine, sometimes on signals jointly generated by human and machine and directed at other cognitive systems, and sometimes on both. Communication among cognitive systems and their subsystems profoundly facilitates and constrains possible goals. Whenever designers consider more than instrumental means, and contemplate desirable ends, they begin to confront communication ethics.

Consider Habermas's (1984) ideal speech as it might be applied to cognitive system communication design. Communication acts within and among cognitive systems should be comprehensible (a criterion violated by intimidating technical jargon), true (violated by sincerely offered misinformation), justified (violated, for example, when the speaker lacks proper authority or fears repercussion), and sincere (speakers must believe their own statements). These principles can conflict, as when an utterance about a technical subject is simplified to the point of containing a degree of untruth in order to be made comprehensible to a lay person. Thus, they exist in a kind of equilibrium with each other, with circumstances attenuating the degree to which each principle is satisfied. Many designers probably already observe them intuitively. However, they (and other concepts from communication ethics) have a relevance for humane cognitive systems that deserves discussion.
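To make the framework concrete, the following minimal sketch (in Python; entirely hypothetical and not part of Habermas's account) records the four validity claims as scores a designer might assign to a machine-generated utterance, making trade-offs such as the simplification example above explicit.

```python
# Hypothetical design-time checklist for Habermas's four validity claims.
# Names and scoring scheme are illustrative only.
from dataclasses import dataclass

@dataclass
class UtteranceAssessment:
    comprehensible: float  # 0.0-1.0, penalized by intimidating jargon
    true: float            # 0.0-1.0, penalized by misinformation
    justified: float       # 0.0-1.0, penalized by lack of proper authority
    sincere: float         # 0.0-1.0, penalized when the system "asserts" what it cannot hold

    def weakest_claim(self) -> str:
        """Return the validity claim most in need of the designer's attention."""
        scores = {
            "comprehensible": self.comprehensible,
            "true": self.true,
            "justified": self.justified,
            "sincere": self.sincere,
        }
        return min(scores, key=scores.get)

# Example: simplifying jargon for a lay audience raises comprehensibility
# but may shade the truth; the checklist makes that trade-off explicit.
simplified = UtteranceAssessment(comprehensible=0.9, true=0.7, justified=1.0, sincere=1.0)
print(simplified.weakest_claim())  # -> "true"
```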
 
  

The Nature of the Speaker

First, let me dismiss artificially intelligent prostheses or digital personalities, themselves, as requiring moral concern. The capacity to consciously experience pain or pleasure is often offered as the critical property that marks objects as requiring care (Murphy & Coleman, 1990). To consider an extreme example, this author (Webster, forthcoming) has conducted computer simulations of a cognitive model of human depression, a most isolating and existentially painful human experience. I believe that the computer simulation served merely as a tool for managing theory complexity, that it did not experience emotional pain, and thus did not require my ethical concern. (However, see Culbertson (1963) for arguments that software can be sentient.) I did have to obtain the approval of an internal ethics committee, but that was out of regard for any human or animal subjects that might contribute data.

Given the choice between saving a beloved artificial agent and a complete but human stranger, I would be forced to help the stranger. Note the use of "forced." I can imagine such a choice causing me some discomfiture. Moreover, I believe that some individuals who know less about the ELIZA effect (where a simple program fooled smart people; Weizenbaum, 1966) might, in some cases, choose to save the artificial agent. In spite of intriguing possibilities, and not a little wishful thinking, the capabilities of artificial intelligence and associated disciplines and technologies are currently insufficient for the task of creating an artificial but sentient entity.

Do good character and sincere intention require sentient and reflective mindfulness? Can designed, non-sentient, non-living, cognitive systems be said to have character, good or bad, or to be sincere or insincere, or is any such quality a reflection of the cognitive system's designer? Some traditions of communication ethics hold that non-virtuous or insincere speakers cannot utter virtuous speech. Speech becomes "mere" rhetoric. To be able to persuade but not be capable of sincere conviction leaves anyone or anything open to criticism. (Locke referred to the art of rhetoric as "the artificial and figurative application of words eloquence hath invented, [which] are for nothing else but to insinuate wrong ideas, move passions, and thereby mislead the judgment." (quoted in Simons (1990, p. 1) who goes on to defend rhetoric).) 

What, then, of the combination of a virtuous and sincere human and an "intelligent" communication prosthesis? Is the speech of such a cognitive system less virtuous than that of a digitally unencumbered virtuous human? Perhaps virtuousness tracks the ratio of human to computer information processing. As we replace a human's natural cognitive system with a designed artificial one, does the resulting hybrid system become in some sense less sincere and possess less character? (This thought experiment is inspired by Searle's imaginary account of the neurons of a human brain being replaced, one by one, resulting in a hypothetical diminishment and eventual disappearance of consciousness (discussed in Herbert (1993)).)

Consider an individual with a physically devastating muscular or neurological disease who uses a communication prosthesis, a computer that allows the patient to create the appearance of certain kinds of communication competence for a conversational partner. The CHAT system (Newell, 1992) compensates for the slow and laborious movements of the user by making things up to say ("How are you?", "It is a nice day.", etc.). While the conversational partner is engaged ("deflected" may be a better word), the user struggles to program a "genuine" utterance in time to continue after CHAT's ploys are exhausted.
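The general strategy can be illustrated with a small sketch. This is not the actual CHAT implementation; the class, phrases, and end-of-utterance convention below are hypothetical, intended only to show how canned ploys might buy composition time.

```python
# Hypothetical sketch of a CHAT-like filler strategy; not the actual CHAT code.
import random

class FillerProsthesis:
    """Emit canned conversational 'ploys' while the user composes a genuine utterance."""

    PLOYS = ["How are you?", "It is a nice day.", "Tell me more."]  # illustrative phrases

    def __init__(self):
        self._draft = []                 # keystrokes of the slowly composed genuine utterance
        self._unused = list(self.PLOYS)  # ploys not yet spent in this conversation

    def add_keystroke(self, char: str) -> None:
        self._draft.append(char)

    def next_output(self) -> str:
        # Prefer the genuine utterance once it is finished (marked here by a newline).
        if self._draft and self._draft[-1] == "\n":
            utterance = "".join(self._draft).strip()
            self._draft.clear()
            return utterance
        # Otherwise deflect the partner with a canned ploy until the supply is exhausted.
        if self._unused:
            return self._unused.pop(random.randrange(len(self._unused)))
        return ""  # silence: the partner must now wait for the genuine utterance
```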

 Now consider memory or decision aids for the senile, or interactive intelligent multimedia "photo albums" for descendants of the deceased (Bos, 1995) (sort of personality prostheses for the dead). These systems are to varying degrees potentially deceitful, but which are justified and why? We will see in a later section that part of the answer must hinge on another aspect of communication ethics, the degree to which presentation contributes to ultimate goals. 

The CHAT system designer cheerfully acknowledges the deceit and points out that real life conversation is full of such subterfuges. If cognitive prostheses do introduce a form of mindlessness into conversation, what of the mindlessness of much human communication (Langer, 1989)? Much (perhaps even most) communication occurs automatically and without reflection, as the result of repetitive practice in daily affairs. Are we, can we be, "routinely" sincere? The answer emerges when communication failure forces us to repair and reflect. This capacity for failure is relevant to the combination of a human and a computer into a single cognitive system: while both contribute to the automaticity of communication, if failure occurs it is the human's responsibility to repair and reflect. The character of the cognitive system therefore derives from two combined sources, the character of the human who will step in and the character of the system designer.

Interestingly, the left and right cerebral hemispheres may observe a similar division of labor, in that the left hemisphere appears capable of both automatic and reflective behaviors, while the right hemisphere appears restricted to automatic behavior (Chiarello, 1985). Whether they work together in a manner relevant to understanding possible patterns of interaction between human and machine is highly conjectural. However, some researchers on brain asymmetry believe that, in effect, we have two minds that interact to form a cognitive system (Springer & Deutsch, 1993). Neuropsychological studies also support the notion of a moral cognitive module based on neural circuits within the frontal lobes of the cerebral hemispheres (Damasio, 1994), attesting to a kind of modularity of moral responsibility.

So, what of the point raised earlier that a computer only reflects the character of its creator? This is reminiscent of the argument that programs only reflect the intelligence of their programmers (addressed by Turing (1969) and cited in Chesebro (1993)). While this may once have been true, I believe that programs are beginning to exhibit, at a low level, intelligence beyond or in addition to that of their designers. Consider World Wide Web agents that learn where information resides through search, or artificial life creations that, on the basis of a few simple rules, exhibit complex and surprising communal and procreative behaviors. Perhaps programs will eventually possess character; nonetheless, for now, the nature of the speaker, artificial or hybrid, derives from the directly and indirectly involved human elements.
 
  

The Quality of the Argument

All communication involves persuasion, to believe, to act, to respond. Some forms of persuasion seem more ethical than others. A reasonable manner of introducing assumptions along with evidence, and a logically explicit line of reasoning, seem laudable. In ideal speech terms this would seem to involve truth of assumptions, evidence, and inference rules, and efficient, that is, comprehensible exposition. Appeal to emotions such as fear and hate, devious and duplicitous presentation of evidence and assumptions, sleight-of-hand logic, coercive threats or cajolery, and appeal to baser instincts and prejudices—to generally "rouse and exploit the sentiments and prejudices of a target audience" (Walton, 1992, p. 2)—all seem condemnable.

The natural language generation and explanation research community focuses on the quality of arguments (see, for example, the special issue on expert system explanation (Berry, 1995)). Perhaps it is the nature of this community, but its computer and cognitive scientists strive for calm, dispassionate, logical explanatory styles (in the programs they create). However, even here, there have been attempts to create programs that consult explicit internal biases to generate slanted text emphasizing or undermining differing points of view (Hovy, 1985).
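The bare idea of bias-driven slanting can be conveyed with a toy sketch. The function and data below are my own illustration, not Hovy's (1985) system: a bias setting determines which propositions are emphasized and which are hedged.

```python
# Toy sketch of bias-slanted generation; illustrative only, not Hovy's (1985) program.
def slant(propositions, bias):
    """Render propositions, emphasizing those that support `bias` and hedging the rest.

    `propositions` is a list of (text, supports) pairs, where `supports` names the
    point of view each proposition favors.
    """
    lines = []
    for text, supports in propositions:
        if supports == bias:
            lines.append(f"Notably, {text}.")          # emphasize
        else:
            lines.append(f"Some claim that {text}.")   # hedge / undermine
    return " ".join(lines)

facts = [("the plan cuts costs", "management"),
         ("the plan increases workload", "labor")]
print(slant(facts, bias="management"))
```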

Appeal to emotion is not always unethical, as when the public health service warns of the consequences of self-destructive behavior by hiring an attractive but dying spokesperson to deliver the message. As Haugeland (1985) points out, making machines more sociable and therefore more useful may require designing into them simulations of certain kinds of emotional states, if only to allow them to represent their human audiences more fully. However, even if these artificial entities can reason accurately about human emotion, and tailor their appeals accordingly, this does not mean that they can feel emotions and empathize with their audience.

 The natural language generation and explanation research community also debates the proper relation between a program's knowledge base (and logical steps to decision), and the manner in which a human user is persuaded to accept and act upon the program's advice. (The difference is reminiscent of a traditional distinction made in communication ethics between a speaker's conviction and the manner in which an audience is persuaded (Anderson, 1991).) One explanation camp believes that the maintenance of two problem solvers—one to decide and one to persuade—presents software engineering problems, such as changes in one database that must be synchronized with changes in the other. The other camp believes that expert human explanation contains elements generated post hoc to expert decision, a necessary form of rationalization after the fact (Wick, Dutta, Wineinger, & Conner, 1995). Thus, as in the case of mindlessness, we again face the prospect of holding artificial or hybrid cognitive systems to a standard higher than that applied to humans. 
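The architectural difference can be made concrete with a sketch of the second, "reconstructive" position. The knowledge base and functions below are hypothetical: a single problem solver decides, and a persuasive explanation is assembled afterward from the decision trace.

```python
# Hypothetical sketch of the "reconstructive explanation" position: one problem
# solver decides; a separate step rationalizes the decision for the user afterward.

RULES = {  # illustrative knowledge base shared by decision and explanation
    "fever and rash": "measles",
    "fever and cough": "influenza",
}

def decide(findings: str):
    """The deciding problem solver: match findings against the knowledge base."""
    diagnosis = RULES.get(findings)
    trace = [f"matched '{findings}' -> '{diagnosis}'"] if diagnosis else []
    return diagnosis, trace

def persuade(diagnosis: str, trace: list) -> str:
    """Post hoc explanation assembled from the decision trace, tailored for a lay user."""
    steps = "; ".join(trace)
    return (f"The findings point to {diagnosis}. "
            f"(Reasoning reconstructed after the fact: {steps}.)")

diagnosis, trace = decide("fever and rash")
print(persuade(diagnosis, trace))
```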

Deceits are often justified by better performance of the human-machine combination. For example, human control of some complex dynamic systems, such as vertical takeoff and landing (VTOL) aircraft, can benefit from displays that "lie" (Rouse, 1980, p. 53). These deceits may in some cases lead to performance failure in ways that the designer cannot predict. For example, displays aboard the USS Stark, a frigate struck by an Iraqi-fired missile with a loss of thirty-seven lives, used a spatial metaphor to represent incoming missiles, where display distance was a function of inferred level of hostility and not physical distance from the ship. The missile, being of French manufacture, was classified as friendly, and therefore distant and ignored (Chapman, 1994).
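To see how such a display metaphor can fail, consider the following toy sketch. It is emphatically not the Stark's actual display logic; it only illustrates how mapping inferred hostility, rather than physical range, onto display distance can push a misclassified contact to the periphery of attention.

```python
# Toy sketch only: NOT the actual shipboard display logic. It illustrates how a
# display metaphor driven by inferred hostility rather than physical range can
# make a misclassified contact look remote and ignorable.

def display_distance(physical_range_km: float, classification: str) -> float:
    """Map a contact to a display distance; less 'hostile' contacts are pushed outward."""
    deemphasis = {"hostile": 1, "unknown": 3, "friendly": 10}  # illustrative weights
    return physical_range_km * deemphasis[classification]

# An incoming missile misclassified as friendly (e.g., by manufacturer) appears
# ten times farther away than an identical contact classified as hostile.
print(display_distance(20.0, "hostile"))   # 20.0  -> drawn close, demands attention
print(display_distance(20.0, "friendly"))  # 200.0 -> drawn far away, easily ignored
```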

 I am not arguing that these kinds of misrepresentations are inherently unethical, or, if they are, that humans are not occasionally and perhaps appropriately similarly guilty. Many tools are useful exactly because they suppress one kind of verisimilitude for another representation (Norman, 1993). Rather, where there is potential for deceit, appropriateness is a complex determination and communication ethics provides one framework for sorting out some of the issues. 

Given the current level of the technology used to create, for example, roving and conversing knobots (Riecken, 1994), we cannot hold the artificial agent, itself, responsible for the quality of its arguments (only its human designer can be held responsible). Inflammatory or duplicitous communication by artificial agents must still be blamed on their creators. However, the quality of arguments, their mechanics in terms of appealing to audience biases, and the relation between deciding and persuading others to accept the decision are all topics of concern for both cognitive technologists and communication ethicists.
 
  

The Presentation's Contribution to Ultimate Goals

In addition to virtuous speakers, truth, and its efficient transmittal, there is the goal of it all. Why congregate and communicate? Answers to this question range from theories based on individual hedonism and utility maximization to belief in supernatural mandate. When designing human-computer cognitive systems, the goal may be more mundane, such as facilitating entry into a community (relying on a communication prosthesis), or accurate and timely diagnosis and management of human disease (using a medical diagnostic and management program). These involve client-professional relationships, with the human user in the role of client and the computer in the role of professional.

The most intimate traditional client-professional relationship is the patient-physician relationship. (Note that I am by no means holding up this relationship as a sort of ideal, since it is fraught with its own problems, has come under considerable examination and criticism, and is changing in response to social and economic pressure. Rather, because it has come under examination and criticism, there is a well-formed body of literature and theory that may contain useful insights.)

Four principles—observed during ethically convicted decision making—have been influential during the last decade in theorizing about medical ethics (Beauchamp & Childress, 1994): beneficence (provide benefits while weighing the risks), non-maleficence (avoid unnecessary harm), self-autonomy (respect the client's wishes), and justice (such as fairly distributing benefits and burdens, respecting individual rights, and adhering to morally acceptable laws). People from different cultures and religions will usually agree that these principles are to be generally respected, although people from different cultures or ethical traditions will often attach different relative weights to them.

These principles are part of the conviction side of a conviction-persuasion trade-off. Consider the "deceit" of the communication prosthesis that "makes up stuff" in order to provide time for the user to slowly labor over an utterance that may well be their only chance to establish human contact all day. The extremely low rate of words per minute of which some patients are capable can result in frustrated conversational partners who eventually disengage, or even come to believe the speaker to be stupid. Thus, even though CHAT's occasional statements may be less true or less sincere (in the sense that they are inane and canned), the conversation becomes more comprehensible (tolerable), serving the greater goals of benefiting the sick and sharing the burden of illness among the user (the patient), society (which funds the research and provides the prosthesis), and the audience (who is deceived slightly).
 
  

Virtual Communities of Virtual Communities

The previous discussion focused on a human and a machine. At the level of the virtual community, where groupware and telecommunications technology allow many individuals separated in time, space, and culture to form associations based on commonly agreed-upon interests and ground rules, there is (and will increasingly be) potential for creating different communication ethics traditions.

Arnett (1991) has described five traditions of communication ethics assumed, described, or prescribed by writers on speech communication. As applied to the design of virtual communities as cognitive systems, each has different emphases (connoted by sets of key words and phrases). For each I have suggested different mechanisms for facilitating or preventing certain behaviors; a brief illustrative sketch of one such mechanism follows the list.

Democratic ethics: public debate, tolerance of dissent, majority vote. Possible implications for cognitive systems: potential for anonymous electronic views or voting, methods to automatically manage issues and poll community, automated protocols to close debate when necessary. 

Universalist ethics: discovery of absolute prohibitions and requirements by enlightened activists. Possible implications for cognitive systems: prohibition of certain types of speech as a matter of a priori design of communication architecture. 

Codes and standards: creation of rules for regulating speech by designated specialists. Possible implications for cognitive systems: databases of rules for guiding behavior, control of rule representation and formation by specialists, methods for designating or certifying specialists. 

Situational ethics: personal and private morality revealed at the moment of decision or utterance. Possible implications for cognitive systems: prohibitions against violation of privacy, provision of forums for informal discussion of personal views. 

Narrative ethics: stories with morals, identification with good or bad characters, making myth or history relevant to the moment, community drama that generates meaning, coherence, and solace. Possible implications for cognitive systems: publication of personal biography (for example, personal home pages on the World Wide Web), role of entertainment as a mode for creating community, encouragement of poetic, prose, oral, and other kinds of creative expression, biography-based reasoning (analogous to story- or case-based reasoning), people and agents as characters.
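As one illustration of the mechanisms listed above, the hypothetical sketch below implements a piece of the "democratic ethics" entry: anonymous voting on an issue, with an automated protocol that closes debate once a quorum of votes has been cast. The class name, hashing scheme, and quorum rule are all my own assumptions, not Arnett's.

```python
# Hypothetical sketch of a "democratic ethics" mechanism for a virtual community:
# anonymous voting on an issue with an automated protocol to close debate.
import hashlib

class IssuePoll:
    def __init__(self, issue: str, quorum: int):
        self.issue = issue
        self.quorum = quorum
        self.votes = {}        # anonymized voter id -> vote
        self.closed = False

    def cast_vote(self, member_id: str, vote: bool) -> None:
        if self.closed:
            raise RuntimeError("Debate on this issue is closed.")
        # Anonymize the member identity before storing the vote.
        anon = hashlib.sha256(member_id.encode()).hexdigest()
        self.votes[anon] = vote
        if len(self.votes) >= self.quorum:
            self.closed = True  # automated protocol closes debate at quorum

    def tally(self):
        yes = sum(self.votes.values())
        return {"yes": yes, "no": len(self.votes) - yes, "closed": self.closed}

poll = IssuePoll("Adopt a moderation charter?", quorum=3)
for member, vote in [("alice", True), ("bob", False), ("carol", True)]:
    poll.cast_vote(member, vote)
print(poll.tally())  # {'yes': 2, 'no': 1, 'closed': True}
```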

 The world contains "ecosystems" of competing (warring, debating, etc.) and cooperating (trading, learning, etc.) communities with their own communication ethics traditions. Consider the notion of "transcendental eloquence" (Freeman, Littlejohn, & Pearce, 1992), offered as a type of speech for coping with conflict among incommensurate ethical systems. In such situations the ultimate goal of communication may be communication itself. Transcendental eloquence is philosophical (seeking to pause and reflect about fundamental knowledge and value), comparative (searching for a larger framework within which to compare different assumptions about how one lives and communicates), dialogic (inviting the creation of a conversation that explores, rather than persuades in debate), and critical (revealing methods and their limits, such as implicit biases that preserve hegemony). 

As new virtual communities form, based on common interests (hobbies, attitudes toward abortion, etc.) instead of geography, the potential for conflict among communities increases. The technology that makes communication possible also creates the ability to avoid communication, by enabling people to associate with the like-minded, aided by filters intended to cope with information glut, while preventing them from having to confront "the other." (I am reminded of the cross-newsgroup flame wars that can break out due to occasional accidental or intentional provocative cross-postings.) Designers of communities of virtual communities may find something of relevance to design in the concept of transcendental eloquence.
 
  

Conclusion

Communication ethics has something to offer to the design of humane human-computer cognitive systems, and cognitive technology has something to offer to communication ethics. As disciplines, each highly interdisciplinary and complex in its own way, they seem a bit far afield from each other. However, the announcement for this First International Cognitive Technology Conference says: "[Cognitive technology] is primarily concerned with how cognitive awareness in people may be amplified by technological means and considers the implications of such amplification on what it means to be human."

 Communication fundamentally involves the transmission of information by signals, often competing with other random and non-random signals. To amplify a signal is to increase its strength with respect to a background and to make it "rise above others," which happens to be a meaning of the Latin roots forming the word "excel" (Simpson & Weiner, 1989). Cognitive technology provides tools for achieving an excellence in human awareness that requires new and plastic means to achieve old and enduring goals. 

Anderson (1991, p. 18) closes a history of communication ethics in the following way:

The complexity of the relationship between ethics and many aspects of the communication process is such that most theorists require the reader of their works to incorporate the shared ethics of society and the individual's own ethic into the theory being developed, if a guide to ethical communication activity is to be derived. 

If guides to ethical communication, as they apply to cognitive technology and human-computer cognitive systems, are possible and relevant, then they will be so only by virtue of the prior existence of an audience already possessed of implicit or explicit theories of ethical communication. While I have listed some issues and pointed to possible implications, intending to constructively provoke, these guides should come from conversation within or among one or more communities of ethical communicators, including this one.
 
  

References

Anderson, K. (1991). A history of communication ethics. In K. Greenberg (Ed.), Conversations on communication ethics. Norwood, NJ: Ablex Publishing Co.

 Arnett, R. (1991). The status of communication ethics scholarship in speech communication journals from 1915 to 1985. In K. Greenberg (Ed.), Conversations on communication ethics. Norwood, NJ: Ablex Publishing Co. 

 Beauchamp, T. & Childress, J. (1994). Principles of biomedical ethics. New York: Oxford University Press. 

 Berry, D. (1995). Guest editor's preface: Explanation: The way forward. Expert systems with applications, 8, 4, 399-401. 

 Bos, E. (1995). Making the dead live on: An interactive talking picture of a deceased person. Computers and society, March, 7-9. 

 Chapman, G. (1994). Making sense out of nonsense: Rescuing reality from virtual reality. In G. Bender & T. Druckrey (Eds.), Culture on the brink: Ideologies of Technology. Seattle, WA: Bay Press. 

 Chesebro, J. (1993). Communication and computability: The case of Alan Mathison Turing. Communication Quarterly, 41, 1, 90-121. 

 Chiarello, C. (1985). Hemispheric dynamics in lexical access: Automatic and controlled priming. Brain and language, 26, 146-172. 

 Culbertson, J. (1963). The minds of robots. Urbana, IL: University of Illinois Press. 

 Dalal, N., & Kasper, G. (1994). The design of joint cognitive systems: The effect of cognitive coupling on performance. International journal of human-computer studies, 40, 677-702. 

Damasio, A. (1994). Descartes' error: Emotion, reason, and the human brain. New York: G. P. Putnam.

Freeman, S., Littlejohn, S., & Pearce, W. (1992). Communication and moral conflict. Western journal of communication, 56, 311-329.

 Habermas, J. (1984). The theory of communicative action. Vol. 1, Reason and the rationalization of society. Boston: Beacon Press. 

 Haugeland, J. (1985). AI, the very idea. Cambridge, MA: MIT Press. 

 Herbert, J. (1993). Elemental mind. New York: Penguin Books. 

Hovy, E. (1985). Integrating text planning and production in generation. Proceedings of the 9th international joint conference on artificial intelligence, Los Angeles.

 Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley. 

 Morse, M. (1994). What do cyborgs eat? Oral logic in an information society. In G. Bender & T. Druckrey (Eds.), Culture on the brink: Ideologies of technology. Seattle, WA: Bay Press. 

Murphy, J., & Coleman, J. (1990). Philosophy of law: An introduction to jurisprudence, Revised Edition. Boulder, CO: Westview Press.

 Newell, A. (1992). Today's dream—Tomorrow's reality. Augmentative and alternative communication, 8, 81-88. 

 Norman, D. (1993). Things that make us smart. Reading, MA: Addison- Wesley. 

Riecken, D. (Ed.) (1994). Special issue on intelligent agents. Communications of the association for computing machinery, 37, 7.

 Rouse, W. (1980). Systems engineering models of human-machine interaction. New York: Elsevier Books. 

 Simons, H. (1990). The rhetoric of inquiry as an intellectual movement. In H. Simons (Ed.), The rhetorical turn: Invention and persuasion in the conduct of inquiry. Chicago, IL: University of Chicago Press. 

 Simpson, J., & Weiner, E. (Eds.) (1989). The Oxford English Dictionary (2nd ed.). Oxford: Clarendon Press. 

 Springer, S. & Deutsch, G. (1993). Left brain, right brain. New York: Freeman. 

Turing, A. (1969). Intelligent machinery. In B. Meltzer & D. Michie (Eds.), Machine intelligence 5. New York: Halsted Press/John Wiley & Sons.

 Walton, D. (1992). The place of emotion in argument. University Park, PA: Pennsylvania State University Press. 

 Webster, C. (forthcoming). Computer modeling of adaptive depression. Behavioral Science. 

 Weizenbaum, J. (1966). ELIZA. Communications of the association for computing machinery, 9, 36-45. 

Wick, M., Dutta, P., Wineinger, T., & Conner, J. (1995). Reconstructive explanation: A case study in integral calculus. Expert systems with applications, 8, 4, 463-473.

Woods, D. (1986). Cognitive technologies: The design of joint human-machine cognitive systems. The AI magazine, 6, 86-92.