This chapter builds on the assertion that communication is a key process in system design, creation, and operation. At all stages of the system lifecycle, determining the requirements and constraints, transferring that knowledge, and then verifying and validating that the proper actions have been performed all involve communication of essential information. When disasters occur, investigators frequently point to problems in communication and leadership.

    As the Columbia Accident Investigation Board (CAIB) Report (2003) concluded, “Flawed practices embedded in NASA's organizational systems continued for 20 years and made substantial contributions to both accidents” (p. 202). Following CAIB, aerospace engineers could no longer ignore the relationship between communication and system safety, but individuals untrained in communication felt powerless to change familiar habits and institutional practice (personal conversation with a NASA systems engineer, March 2006).

    Because systems are complex, dynamic, three‐dimensional architectures, this chapter does not focus on traditional communication topics like “developing better PowerPoint presentations,” though well‐designed PowerPoint presentations may help individuals make their case to audiences who do not share the same education, disciplinary assumptions, and experience (cf. Tufte, 1995). Instead, this chapter focuses on the “Big Picture” articulation and integration of subsystem activity that must be coordinated, reconciled, and interpreted to advance system goals and objectives.

    As this chapter argues, effective communication results from an underlying communication design process that requires continual integration, feedback, and monitoring to achieve the most effective information flow in the system (cf. Kaufer and Butler, 1996). This Big Picture view of communication differentiates system‐level communication(s) from the local (and often time‐limited) communication practices that characterize subsystem communication and development (cf. Spinuzzi, 2003; Johnson, 1998; Leydens, 2008; Winsor, 1996; 2003) – most vividly exemplified in the need to reconcile data‐driven PowerPoint presentations with a Big Picture overview of critical system‐level relationships in system safety reviews and risk management.

    Grounded in psycholinguistic studies of gesture and cognition, this chapter argues that communication(s) in large systems are multimodal in two senses. In engineering, multimodal normally refers to the integration of different modes or “types” of technologies (e.g., multimodal transportation systems that incorporate buses, trains, and commuter rail). In this sense, multimodal communication describes the integration of multiple communication technologies (e.g., video, audio, radio, fax, telephone, electronic, mechanical) in a single interoperable system. In linguistics, multimodal communication also describes different modes of communication – speech, gesture, visual design, or writing. In both senses of the term, different communication modalities convey – or capture – different types of information. Thus, gesture conveys information about manner, motion, and spatial relationships that speech alone cannot express (McNeill and Duncan, 2000). Analysis of gesture can also provide clues to the three‐dimensional visual models that define relationships in systems, but agencies have not yet developed communication technologies that can capture and interpret information conveyed in speech and gesture together. Not surprisingly, particular linguistic modalities (tone, gesture) reflect underlying strategic choices about automated communication modalities.

    To maintain system health, analysts must continually reassess the strategic linkage of multiple (and often incommensurate) communication modalities; monitor information flow; and respond to failures or outages in the system. Ideally, well‐designed (automated) systems can improve the dissemination of information, increase the accuracy of hazard warnings, reduce mistakes in interpretation, and improve risk decision‐making. Too often, however, system designers focus on mechanical and technical systems and thus obscure the role of human sense‐making in the moment of crisis. Disasters are instructive because they help us identify critical junctures in the system where knowledge is transformed and refigured to communicate effectively to new audiences with different assumptions (cf. Sauer, 2003a). Unfortunately, agencies may not have the system‐level reporting and analytic capacity to understand the relevance of near‐miss data until a catastrophe ends the practice (Dillon and Tinsley, 2009), creating a culture of indifference that normalizes disaster in institutions (Vaughan, 1999). Despite CAIB, change is difficult because communication strategies are often deeply embedded in institutional practices that are resistant to change. Communication disasters are nonetheless costly, both financially and in damage to the agency's reputation. To improve system‐level communication, this chapter suggests, designers must develop a strategic understanding of how different communication modalities influence risk decision‐making at all phases in the system; articulate those functions in the initial design phase of communication(s) in systems; and – most important – evaluate, test, and redesign communication(s) channels with the same level of specification that they apply to technical and operational performance in systems.

    Although this chapter focuses on examples from the aerospace industry, similar problems occur in other large systems like coal mines, highway systems (including bridges), shipping (supply chain management), and railways. Indeed, large systems share similarities to the extent that they are imagined representations of large‐scale processes that can be modeled in laboratory and computer environments. Although each system reflects local contingencies and missed opportunities, integrated systems resemble one another to the extent that they involve generalizable processes like haulage and transportation, power, ventilation, radiation, fire, dust, and explosions. These systems also share similar processes of regulation and risk management (Sauer, 2003a). Lessons learned in one area can therefore be instructive to system designers in radically different knowledge domains and physical settings.

    As NASA moves forward, the extreme environments, distances, and material realities of Mars spaceflight will require increasingly complex and sophisticated computer decision and maintenance systems. As data systems become more complex, agencies must work together to reduce redundancy, improve efficiency, and ensure the security and utility of data. The promise of interplanetary spaceflight will also require a more complex understanding of what it means to communicate in large systems. To develop such systems, developers must collaborate in real‐time and virtual interactions with colleagues who may not share the same language, culture, knowledge, experience, education, or institutional affiliation. They must develop model‐based reasoning tools that can predict and manage unknown, probabilistic, and highly uncertain events. They must negotiate trade‐offs between efficiency, cost, and speed; integrate the data requirements of agencies with vastly different agendas; use back‐of‐the‐envelope reasoning to test assumptions and conclusions at the moment of crisis; and document back‐of‐the‐envelope assumptions to ensure knowledge capture across the system. Although many systems engineers are more comfortable with electronic (computer‐mediated) than with face‐to‐face (human) communication, their jobs require them to talk with clients and contractors to resolve problems, understand risks, and make trade‐offs that ensure mission success without compromising the integrity of the mission itself.

    Like other chapters in this collection, this chapter cannot fully describe the accumulated knowledge of practitioners and researchers engaged in linguistic, rhetorical, and socio‐linguistic research designed to produce better technical communication and risk decision‐making. As the following discussion suggests, systems engineers must actively and continuously work to ensure that the message intended was the message received. Most important, they must make their case to management in the face of political, economic, and institutional resistance because they are ultimately responsible for the outcomes of human and computer decision‐making.

