
Open Access 2024 | Original Paper | Book Chapter

6. Immersive Visualisation Systems as Alignment Strategies for Extreme Event Scenarios

Authors: Baylee Brits, Yang Song, Carlos Tirado Cortes

Published in: Climate Disaster Preparedness

Publisher: Springer Nature Switzerland



Abstract

Immersive systems are increasingly used to train first responders and prepare communities for extreme climate events. This chapter considers alignment issues that arise in their development and discusses how they might be resolved, taking as a case study the iFire system currently being designed at the iCinema Research Centre. We focus in particular on ways to maximise the two non-moral values that define the success of any such system, as well as of associated climate science: accuracy and verisimilitude. Drawing on the work of Shepherd et al. (2018) and Sharples et al. (2016), we theorise the epistemic and situational challenges to arriving at these values. Exploring solutions already proposed through the related concepts of ‘storylines’ (Shepherd, 2019), ‘scenarios’ (Lempert, 2013) and ‘tales’ (Hazeleger et al., 2015), we show how iFire’s values may be maximised through composition strategies derived from these concepts. Using this approach, we explain how iFire may ‘simulate’ links between uncertainty and affect to enhance decision making in uncertain circumstances. Our key findings are that alignment strategies for iFire are best described as ‘interpretable’ (rather than ‘explainable’) and can be achieved through qualitative methods, which describe compositional strategies deployed by the user that support reflective management of uncertainty.

6.1 Introduction

This chapter considers alignment issues for immersive environments that stage extreme event scenarios for the purposes of preparation. To show what an aligned project might look like, we use the iFire system, currently in development at the University of New South Wales’s iCinema Research Centre. Environments of this sort have the potential to use visualisation and immersive experience to inform perception, expectations and responses to wildfires (Hoang et al., 2010; Altintas et al., 2015). To create significant impact, these systems require a unique combination of accuracy and aesthetic and conceptual verisimilitude: a challenging combination in high-stakes ethical and decision-making situations. Such systems are urgently needed because fire landscapes around the world are shifting rapidly under a changing climate, with a significant increase in extreme fire events. In response, AI capabilities are being developed to assist rate-of-spread (RoS) prediction and decision making in uncertain scenarios. The form of these systems and their potential usability for fire management require an alignment model that ensures that the two key non-moral values, accuracy and verisimilitude, are maximised to achieve the overall goal of such systems: to generate insight and enhance preparedness for extreme event scenarios. This chapter provides a theoretical study of these alignment issues, developing an account of the efficacy of verisimilitude techniques, ranging from ‘storyline’ to ‘scenario’ and ‘simulation’ theories, when they are placed in an interpretable framework. ‘Alignment’ is the guiding principle for circumscribing or shaping artificial intelligence (AI) actions to conform to human values, so that these systems successfully achieve human goals while minimising harm (Yudkowsky, n.d.). Alignment is a broad field that seeks to bring together intentions, values and contingencies.
It is shaped by the nature of the impressions or assumptions that underlie our relations with AI: it is an “overarching research topic of how to develop sufficiently advanced machine intelligences, such that running them produces good outcomes in the real world” (Arbital, n.d.). Given the contestability of normative ethics, each alignment problem involves interrogating and defining what these ‘good outcomes’ might look like and the boundaries where the ‘good’ bleeds into the problematic, harmful and potentially catastrophic. The classic ‘alignment story’, whose origins are unclear, involves an AI machine designed to create paperclips, an apparently innocuous task. Through intelligence explosion and an optimisation process, this imagined paperclip machine creates a situation where it transforms “earth and then increasing portions of space into paperclip manufacturing facilities”: Armageddon in the name of wire squiggles (Yudkowsky, n.d.). This is a tale about the unknown capacities of an artificial general intelligence (AGI), and it is also, of course, a tale told ad nauseam. Values are profoundly contingent, and value-oriented questions revolve around the modification, interpretation and application of multiple competing values and their individual limits. While this sort of imagined situation demonstrates the dangers of a naive application of goals in an exceptional AGI, it does not venture into the theatrically modest but conceptually more difficult territory of Janus-faced AI systems whose goals are also values: where a goal is inherently collaborative, as in multi-agent systems, or where the relevant values are not moral. This form of alignment can be achieved through an application of ‘interpretable’ strategies, which involve explicit articulation of qualitative techniques including, for example, ‘insight’. Interpretability is a problem distinct from AI explainability and involves qualitative methods and case-by-case development.
While AI explainability (XAI) is the process where AI decision-making processes are made transparent, interpretability refers to a broader, human-centred process that involves contextualised explanations. Interpretability recognises that any “ML decision is explained differently, depending on the person to whom it is explained, the reason why the explanation is needed, the place and time of the explanation, the ergonomics of human-machine interaction, and so on” (John-Matthews, 2022). The application of strategies theorised in scenario studies, psychology and climate science provides qualitative methods that can fulfil the need for interpretability and achieve alignment. This chapter surveys a series of methods, chief among these Theodore Shepherd’s concept of ‘storylines’ (2019), which are defined as “physically self-consistent unfolding of past events, or of plausible future events or pathways” with no ‘a priori probability’ attached. In application to iFire, we couple this method with a theory of affect-informed ‘simulations’ of uncertainty (Anderson et al., 2019) and scenarios organised around perceived or discovered vulnerabilities (Lempert, 2013). Simultaneously, we show that an immersive environment has the capacity to offer the sort of “embedded experience” that climate science sorely lacks (Jasanoff, 2010; Shepherd & Lloyd, 2021).

6.2 Creating Accuracy and Verisimilitude for the Representation of Unpredictable Events

iFire assimilates geolocated data from fire simulation software to visualise wildfires in an immersive 360-degree cinema and other modules like 130-degree cinemas, single-projector displays, desktop computers and VR headsets. iFire’s goal is to accurately represent the nature of wildfires, using AI to predict and model rate of spread (RoS). It seeks to account for the dynamic and erratic characteristics of recent extreme fires, which can occur when “turbulent winds and mass spotting […] create complex spread patterns [and when these interact and coalesce with] the main fire area” (Storey et al., 2021). This system will have the capacity to simulate changing fire behaviour based on shifting variables and user interactions. iFire is also developing an “AI framework that analyses, learns from and responds to individual and group behaviour in real time” in order to develop a multi-agent collaborative decision-making system that learns from past interactions (iCinema, n.d.). This system is intended to support preparedness efforts of a range of end users: scientists will be better able to analyse potential fire scenarios, firefighters can train and test strategies, and communities can increase awareness of vulnerabilities and improve preparedness strategies by experiencing extreme events in a safe environment. iFire attempts to model fires that are scientifically accurate and deliver verisimilitude (i.e. explanatory depth) by depicting a wildfire scenario that is convincing to both firefighters and scientists.
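The chapter does not specify iFire’s internal spread model, so the following is only a toy sketch of the kind of grid-based spread update that fire visualisation systems commonly build upon; every name and rule here is a hypothetical placeholder, not iFire’s method.

```python
# Illustrative only: a toy cellular-automaton fire-spread step.
# Real RoS models (and iFire's AI framework) are far richer; every
# name and parameter here is a hypothetical placeholder.
UNBURNT, BURNING, BURNT = 0, 1, 2

def spread_step(grid, ignition_fn):
    """Advance the fire one time step on a 2D grid of cell states."""
    rows, cols = len(grid), len(grid[0])
    new_grid = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != BURNING:
                continue
            new_grid[r][c] = BURNT  # burning cells burn out
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNBURNT:
                    # local conditions (fuel, wind, slope) would set this value
                    if ignition_fn(nr, nc) >= 1.0:
                        new_grid[nr][nc] = BURNING
    return new_grid
```

With a deterministic ignition rule, a burning cell burns out and its four neighbours ignite in a single step; a real system would derive the ignition rule from fuel, weather and topography data, with AI-driven RoS prediction layered on top.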
An AI ‘misalignment’ in this context, where the AI produces inaccurate or confusing visualisations, could lead to potentially dangerous real-world decisions or actions. Equally, an AI system that does not facilitate narrative plausibility for, and elicit trust from, its users, and which does not illuminate vulnerabilities or novel scenarios, will fail to create preparedness. Jasanoff (2010) has argued that scientific work on climate “arise[s] from impersonal observation” and, as such, can “detach knowledge from meaning”. This is a unique alignment problem whereby these systems need to match accuracy with the sorts of verisimilitude that can provide “embedded experience” (ibid.). These problems are all the more pointed in the case of iFire, which aims to represent wildfires not only comprehensively but contingently, seeking to model a dynamic extreme firescape that is characterised by unpredictability. These dangers are not unique to the iFire system. iFire is comparable to Hädrich et al.’s mesoscale simulation of wildfires, a novel fire simulation system that replicates “dynamic behaviour and physics response of plant models” at forest scale using an innovative “module-based tree representation” (2021). This facilitates higher-fidelity simulation, because the system can model realistic trees and variables, such as growth, as well as capture feedback loops based on the heat radiating from burning trees. This project, however, is limited to a focus on trees and has no capacity to model grasses, undergrowth or leaf litter. Another comparable project is VFire, an immersive fire modelling system with similar aims to iFire. Although VFire is not AI enabled, it faces similar potential ethical conundrums because it does not include atmospheric circulation in its models and thus grapples with an inevitable partiality in the accuracy and realism of its fires (Hoang et al., 2010).
Users of the iFire system need to deal with significant novelty, including the unexpected frequency, enormous scale and intensity of extreme fire behaviour, which can quickly escalate into hazardous and catastrophic scenarios, such as erratic firestorms (Sharples et al., 2016). This novel generation of wildfires (in many ways stoked by climate change) poses unprecedented risk and a complex set of challenges for the scientific community, which must grapple with their attendant uncertainty and novel experiential dimensions. The broad dilemmas of AI alignment here take on a particular pertinence for supporting decision making. The iFire system, and others like it, will be considered aligned if it is able to present users with an accurate and convincing fire simulation that offers them insight without being misleading. This involves maximising several competing aesthetic and scientific measures, which bridge fiction (i.e. the non-verifiable, non-confidence-based, intuitively plausible and meaningful scenarios created in immersive visualisation systems) and fact (the data-consistent, scientifically plausible, coherent, non-arbitrary scenarios) in a productive way to foster enhanced preparedness. In a situation characterised by uncertainty, both in climate science in general and in dynamically evolving extreme fire events in particular, the iFire system needs to maximise the non-moral value of accuracy even where facts that are stable, retrospective and confidence-based might not exist.

6.3 The Challenge of Accuracy in Climate Science

Climate science is characterised by uncertainty. Significant work has identified the varieties of existing uncertainties and proposed methodologies for nevertheless enhancing knowledge and decision making under these conditions. This constitutes the ‘interpretable’ question in climate science. Shepherd et al. (2018) have developed the concept of ‘storylines’ to create reliability where uncertainty cannot be mitigated. They identify the incompatibility between increasing demands for clear, actionable climate information and the inherent uncertainty of key forms of climate data. They are responding to the fact that most public-facing climate reports rely on frequentist data to make probability statements about climate change. Frequentist statistics, generally contrasted with Bayesian statistics, treats probability as the limiting relative frequency of data and evaluates data against a null hypothesis. To demonstrate the limits of traditional probabilistic approaches to climate science, Shepherd and Lloyd (2021) detail studies of atmospheric circulation, which are “inherently regional, and involve dynamics (Newton’s second law) as well as thermodynamics”, and which they contrast with confidence-based studies of thermodynamics. Shepherd (2019) has emphasised that such studies do not meet the three criteria typical of climate models, which require them to be accepted by climate theory, found in observations and contained in modelling. This resonates in important ways with wildfire modelling. Sharples et al. (2016) have argued that current fire prediction models fall short because they “are predicated on the assumption that the rate of spread of a wildfire burning in a quasi-equilibrium state can be uniquely determined by the local conditions of fuel, weather, and topography”.
They argue that these models, which rely on a ‘quasi-steady assumption’, do not work in scenarios where the fire is not adequately explained or represented by environmental conditions (ibid.). Such fires, which do not achieve the equilibriums assumed by RoS models, are known as ‘dynamic fires’. A subset of these is called ‘extreme bushfires’, which “are associated with a higher level of energy, chaos, and nonlinearity” (Sharples et al., 2016). Storey and his colleagues (2021) argue that there is an imperative to raise awareness about these types of fires, which place significant and unprecedented demands on firefighters.
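To make the contrast concrete, here is a minimal illustrative sketch; all coefficients and names are invented, not drawn from Sharples et al. Under the quasi-steady assumption, rate of spread is a pure function of local conditions, whereas a dynamic fire’s spread also depends on the fire’s own evolving state.

```python
# Toy illustration of the quasi-steady assumption (coefficients invented):
# RoS is uniquely determined by local fuel, weather and topography.
def quasi_steady_ros(fuel_load, wind_speed, slope_deg):
    return 0.1 * fuel_load * (1 + 0.05 * wind_speed) * (1 + 0.02 * slope_deg)

# Dynamic/extreme fires break this: identical local conditions yield a
# different spread rate depending on the fire's own state (e.g. spotting,
# convective feedback), which quasi-equilibrium models cannot capture.
def dynamic_ros(fuel_load, wind_speed, slope_deg, fire_state):
    base = quasi_steady_ros(fuel_load, wind_speed, slope_deg)
    return base * (1 + fire_state.get("spotting_intensity", 0.0))
```

The point of the sketch is only that the second function takes the fire’s own history as an input: two fires in identical fuel, weather and topography can spread at very different rates, which is precisely what the ‘quasi-steady assumption’ rules out.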
Shepherd (2019) sees uncertainty arising from different sources, namely, from future climate forcing, from the climate system’s response to this forcing and from the internally variable manifestation of a local climate at a given point in time. Further, uncertainty can result from human actions (i.e. scenario uncertainty), from limits in knowledge (epistemic uncertainty) or from random interfering elements (aleatoric uncertainty), whose probability may be partially deduced (ibid.). It is the latter two types of uncertainty which Shepherd argues must be held as distinct. Epistemic uncertainty is ‘subjective’, because it relates to what we know and do not know, whereas aleatoric uncertainty is ‘objective’, because it relates to events independent of our knowledge. Both of these types of uncertainty are relevant to iFire and the broad goal of preparation for extreme events: epistemic uncertainty involves, broadly, the change in weather systems and extreme weather events under climate change, where frequentist predictions with confidence levels attached become harder to make. Aleatoric uncertainty is relevant to such wildfires, as they are characterised by unprecedented levels of inherent dynamism.
Shepherd (2019) claims to cut the ‘Gordian knot’ of climate change uncertainty by shifting the question asked by climate scientists: he argues that “the societally relevant question is not ‘What will happen?’ but rather ‘What is the impact of particular actions under an uncertain regional climate change?’” This is another way of saying that the climate discussion needs to move “from the ‘prediction [space]’ to the ‘decision space’” (ibid.), without expecting the former to be a precursor for the latter. Shepherd argues that the situational, epistemic and aleatoric uncertainties that he describes should not preclude decisions and that they make “subjectivity inevitable” (ibid.). Where objectivity is not possible because of epistemic uncertainties, there is an ethical imperative to avoid the “illusion of objectivity”, which can actually “reduce transparency” (ibid.). This, too, is a key imperative for iFire: accuracy in fire representation and behaviour must be achieved but must not be synonymous with objectivity. The simulations must offer contingent, possible future scenarios. This move from the probability space to the decision space is facilitated by the fact that epistemic uncertainty can be represented “through a discrete set of (multiple) storylines—physically self-consistent, plausible pathways, with no probability attached” (ibid.). Shepherd distinguishes storylines from scenarios through their remove from probability, so rather than asking what will happen, he asks “what would be the effect of particular interventions” (ibid.). The uncertainties that iFire must deal with are situational (depending on how humans intervene or events unfold), epistemic (climate change increases unpredictable fire behaviour) and aleatoric (the inherent dynamics of extreme wildfires).
This alignment issue of how to create accuracy in the face of uncertainty can be approached via ‘interpretability’, where contingency is an inherent part of the way a user understands the immersive system.
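Shepherd’s distinction can be made schematic in a small sketch: a storyline set is a discrete collection of self-consistent pathways that deliberately carries no probability field and is queried for the effects of interventions rather than for likelihoods. All names below are hypothetical and purely for clarity.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch: storylines as a discrete set of physically
# self-consistent, plausible pathways with no probability attached
# (after Shepherd, 2019). Note the deliberate absence of a
# `probability` field: these are conditional pathways, not forecasts.
@dataclass
class Storyline:
    name: str
    drivers: dict    # e.g. {"wind": "erratic", "spotting": "mass"}
    narrative: str   # a self-consistent unfolding of events

storylines = [
    Storyline("dynamic-fire", {"wind": "erratic", "spotting": "mass"},
              "Turbulent winds drive mass spotting that coalesces with the main fire."),
    Storyline("quasi-steady", {"wind": "steady", "spotting": "minimal"},
              "Spread is determined by local fuel, weather and topography."),
]

# A decision-space query: not "what will happen?" but "what would be the
# effect of a particular intervention under each plausible pathway?"
def effect_under(pathways, intervention):
    return {s.name: (intervention, s.narrative) for s in pathways}
```

The design choice mirrors the text: the user asks how an intervention plays out under each pathway, and the system never ranks pathways by likelihood.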

6.4 From Epistemic to Affective Uncertainty

Shepherd speaks about uncertainty as a relation to knowledge or events and in terms of the limits of the frequentist probabilities that can dominate reports from the IPCC and similar authorities. But there is an affective dimension to uncertainty that is particularly relevant to the immersive experience of iFire. Anderson et al. (2019) “define uncertainty [as] a mental state, a subjective, cognitive experience of human beings rather than a feature of the objective, material world”, which results from a conscious awareness of ignorance, i.e. lack of knowledge. There is surely scope for both definitions of uncertainty to exist (uncertainty as a quality of knowledge and uncertainty as a subjective experience), especially given that Anderson et al. distinguish a variety of experiences of uncertainty that conform, to a certain extent, to Shepherd’s scientific distinctions: probability, ambiguity and complexity. They note that probability, which they identify with risk, stems from the “randomness or indeterminacy of the future” (ibid.). Ambiguity, on the other hand, “arises from limitations in the reliability, credibility, or adequacy of probability”, and complexity is different again, because it arises from difficulty in comprehending information rather than from qualities of the information itself (ibid.). The table below outlines the affiliations between these different varieties of uncertainty. These are understood not as equivalences but as expressing related varieties of uncertainty in two different registers, each of which can be delineated based on the goals of iFire: accuracy and verisimilitude (Table 6.1).
Table 6.1 The relationship between sources of uncertainty, types of uncertainty and affinities with mental experience

| Climate science (Shepherd, 2019) | Types of uncertainty (Shepherd, 2019), accuracy based | Inflection of mental state (Anderson et al., 2019), verisimilitude based |
| --- | --- | --- |
| Uncertainty ‘in future climate forcing’ | Scenario | Complexity |
| Uncertainty in ‘the climate system response to that forcing’ | Epistemic | Ambiguity |
| Uncertainty ‘in the actual realisation of climate for a particular time window’ | Aleatory | Probability/ambiguity |
| Decision-making strategy | Storylines | Simulations |
These approaches address one aspect of the alignment question for AI systems that contribute to climate change storylines. By shifting the question to the effect of actions, from the ‘prediction space’ to the ‘decision space’, the prospect of ‘good outcomes’ (that is, outcomes that align with values of accuracy and which support the facilitation of preparedness and resilience) increases. Yet users need to be made aware of the application and implication of ‘storylines’ in this context and of their relation to concepts of uncertainty. This extends to affective and cognitive approaches as well as to experiences of uncertainty.

6.5 Storylining and Other Techniques: Tales, Simulations and Scenarios

The concept of storylines resonates with a process that Hazeleger et al. (2015) frame as Tales of Future Weather. The authors show that the conventional methodology applied in climate science, ‘MCDT’ (model[ling] the entire “climate system, correct[ing] for biases, downscal[ing] to the scales of interest and finally translat[ing] into terms suitable for application”), cannot be adequately responsive to future weather (ibid.). They suggest a “complementary methodology” that arguably can “more fully explore the uncertainty of future climate for decision-makers today” (ibid.). Their ‘tales’ approach extends Shepherd’s ‘storylines’ by seeking to reveal uncertainties. By explicitly shifting away from Shepherd’s ‘prediction space’, ‘tales’ can allow for decision making while also clarifying present uncertainties. These uncertainties do not necessarily correlate exclusively with extremes in intensity but can also extend to increases in frequency (ibid.). A ‘tales’ scenario might reveal uncertainties in either domain.
For Anderson et al. (2019), uncertainty in itself has consequences, regardless of its origin: it “can lead to suboptimal decision making, negative affect, diminished well-being [sic], and psychopathology” and demands research and action to mitigate. This is an important and easily overlooked point. While Shepherd (2019) deals with the need to be responsive to uncertainty in terms of methodology, Anderson et al. (2019) remind us that uncertainty itself requires an active response to increase preparedness, resilience and wellbeing. Their suggestion for the mitigation of the negative effects of uncertainty aligns in certain respects with Shepherd’s own method of storylines. Anderson et al. highlight the connection between uncertainty, simulation and affect, explaining that:
[M]ental simulations might represent the critical mechanistic link between uncertainty and affective responses: uncertainty invites simulation of possible situations, and simulation, in turn, generates affective responses. For instance, if someone learns they might have cancer, they simulate what they think it would be like to have cancer (e.g., painful symptoms, treatment side-effects, hair loss, and death), which in turn generates negative affective responses. (Anderson et al., 2019)
Here the use of the term ‘simulation’ does not refer to particular media but to the imaginative process that rehearses possible outcomes of a situation. These simulations are proposed as a mediator between uncertainty and affect, with the implication that different imaginative processes can create different responses to uncertainty. This theory of simulation is a useful supplement to theories of ‘storylines’ and ‘tales’, because it focuses on the affective dimensions of these strategies, which neither Shepherd et al. nor Hazeleger et al. theorise. Anderson et al. (2019) point out that affect can change perceptions of the likelihood and risk posed by extreme events. The latter are multidimensional scenarios requiring users to interact with information and content, with their identification of risk and attendant decision-making options additionally shaped by immediately preceding emotional states and individual temperament (ibid.). The work of Hazeleger et al. (2015) also reflects this: they structure their ‘tales’ in ways that generate higher levels of concern, relating information on extreme weather to likely everyday user experiences, which “was found to be a statistically significant determinant of higher levels of concern” (ibid.). This powerful affective dimension needs to be carefully considered in any use of ‘storylines’. Simulations can affect perceptions of risk and uncertainty based on emotional states, both prior and developing. This presents both possibilities and dangers for the alignment of a system like iFire. Affective states such as familiarity, optimism, capability, readiness or awareness (all affective varieties of preparedness) can contribute to a ‘good outcome’; instability, pessimism or fear would not.
Again, this demonstrates the necessity of differentiating ‘interpretability’ from ‘explainability’. Shepherd’s theory of ‘storylines’ demonstrates the need for forecast strategies that are responsive to inherent uncertainties, but the ‘storylines’ that he theorises are abstract and text based. If these storylines were transformed into immersive, visually rich simulations, they would offer a far more substantial investigation of plausible scenarios and of the effect of human actions within them. Such rich simulations could capture the reality of uncertainty theorised by Shepherd while also addressing the affective dimensions of the uncertainties and risks of extreme events raised by Anderson et al. They could achieve workable accuracy under uncertainty if users are aware of the simulations’ narrative contingencies and are involved in specifying their compositional priorities.

6.6 Storylines and Interpretability

Forecast strategies that deploy the storylines approach are closely connected to the AI concepts of ‘explainability’ and ‘interpretability’. In the field of AI ethics, these two terms have often been conflated. However, recent work endeavours to separate them in order to distil requirements for AI decision making (e.g. Marcinkevičs & Vogt, 2023). Here, explainability is associated with answering questions such as ‘Why did the AI make this specific prediction?’ or ‘What factors influenced the AI’s decision?’ Such questions target the Shepherdian ‘prediction space’. Yet, if this system is to occupy a ‘decision space’, we need to ask interpretive questions, like ‘What meaningful insights do we glean from this simulation?’ or ‘What does this reveal about vulnerabilities?’ These are fundamentally qualitative questions that involve multi-domain information and knowledge arising from interaction between the simulation and users, such as scientists, firefighters or community members (Table 6.2).
Table 6.2 Qualitative and quantitative applications of explainability and interpretability

| Explainability | Interpretability |
| --- | --- |
| Prediction space (frequentist statistics) | Decision space (storylines, simulation) |
| Quantifiable risk | Qualitative vulnerabilities |
| Computing risk of different options | Preparation, concern |
One strategy for transitioning into the ‘decision space’ is to use vulnerability as a compositional priority. This has been theorised by Robert Lempert (2013), who works in a field adjacent to storylines and simulations. His ‘scenarios’ deal with uncertainty and dynamism in that they are less confident about the future than probabilistic predictions (ibid.). Lempert taxonomises a variety of human factors that determine whether a scenario is successful:
i) the usefulness of information so that the intended users regard it as credible, legitimate, actionable, and salient; ii) the relationships among knowledge producers and users, helping these parties to engage in mutual learning and ‘coproduction of knowledge’ while increasing mutual understanding, respect, and trust; and iii) the quality of the decision, which should include all five elements described above and be regarded by the parties as having been improved by the support received. (2013)
Lempert (2013) refines his methodology, though, through a particular focus on scenarios built around vulnerabilities. These scenarios aim to understand where a given policy might fail and, subsequently, how to find solutions or alternatives. He argues that it is vital to pay attention to the ‘task’ that the scenario is created to fulfil, contrasting a “decision structuring task that involves defining the scope of the problem, the goals, and the options under consideration” with ‘a choice task’, which deals with existing options (ibid.). Lempert contends that scenarios that illuminate the former will not necessarily do so for the latter. However, scenarios that are structured to illuminate vulnerabilities, he proposes, can fulfil both of these criteria and as such are stronger compositions with more opportunity for insight. A scenario that is structured around vulnerabilities in a proposed policy would enable users to understand where their plans may fail or what the vulnerabilities in their strategies might be. This is one appropriate way to ensure that storylines and simulations address the important affective dimensions that are central to modulating ‘human’ relations to uncertainty and increasing concern in the user. A storyline that makes use of Lempert’s approach to scenarios, i.e. one that is ‘task oriented’ and uses vulnerabilities as a compositional priority, will enhance preparedness and manage uncertainty simultaneously, because it prioritises the user’s knowledge and expectations. For iFire, this may involve approaching the immersive environment with a task that is articulated prior to the experience and, in particular, making use of ‘revealing vulnerabilities’ as a particularly pertinent task. This centrality of users’ priorities mitigates the epistemic complexities of uncertainty in extreme events because these priorities, rather than the measurable likelihood of a fire event, dictate verisimilitude.
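A vulnerability-oriented scenario exercise in Lempert’s spirit can be sketched schematically: enumerate candidate scenarios, flag those in which a proposed plan fails, and treat the failures as vulnerabilities worth staging. The plan, parameter names and failure condition below are invented for illustration, not drawn from Lempert’s method.

```python
import itertools

# Illustrative sketch (not Lempert's actual algorithm): search a scenario
# space for the conditions under which a candidate plan fails.
def discover_vulnerabilities(plan_succeeds, parameter_grid):
    """Return the scenario settings under which the plan fails."""
    names = sorted(parameter_grid)
    failures = []
    for values in itertools.product(*(parameter_grid[n] for n in names)):
        scenario = dict(zip(names, values))
        if not plan_succeeds(scenario):
            failures.append(scenario)
    return failures

# A toy evacuation plan that fails when extreme spread meets short warning.
def evacuation_plan_succeeds(s):
    return not (s["spread_rate"] == "extreme" and s["warning_time"] == "short")

param_grid = {"spread_rate": ["moderate", "extreme"],
              "warning_time": ["short", "long"]}

# Each failing scenario is a candidate vulnerability worth staging as an
# immersive storyline rather than a probabilistic forecast.
vulnerable = discover_vulnerabilities(evacuation_plan_succeeds, param_grid)
```

The point is the compositional priority: the scenarios selected for immersive staging are exactly those where the user’s plan breaks down, regardless of how likely they are judged to be.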

6.7 Conclusion

iFire, like similar visualisation systems, intervenes in a field characterised by uncertainty and dynamism to create preparedness for the future. iFire aims to maximise truthful representation in an immersive environment in order to stage wildfires that will aid preparedness for scientists, firefighters and community members. This is a complex and ambitious undertaking, because such a project needs to create plausible futures that are necessarily subjective (they cannot be certain or confidence-based) and involve high levels of dynamism. iFire does not create probability-based forecasts but plausible storylines (in the sense developed by Shepherd and other climate scientists), which facilitate accuracy without the need for confidence. This also fulfils the requirement for verisimilitude, given that user intervention in storyline composition will facilitate explanatory depth. This allows iFire to intervene in the dynamic and ever-changing context of adaptation to climate change and to support preparedness not by previewing what will happen in the future but by allowing participants to survey vulnerabilities, stage events, experience contingencies and understand the consequences or impact of various actions. As such, iFire also represents a solution to the problems raised by Jasanoff regarding reports like the IPCC’s, which “detach knowledge from meaning” by neglecting “embedded experience” (2010).
In this sense, iFire departs from other fire visualisation systems by shifting its priorities away from a confidence-enabled representation of a fire towards an accurate projection of a plausible future fire scenario that is structured around user priorities and fully involves the situational uncertainty of human action alongside the epistemic and aleatoric uncertainties of climate science. As an immersive environment, iFire exceeds Shepherd’s definition of ‘storylines’ because it functions as a simulation, thus presenting an affectively significant version of a storyline. This, too, presents an alignment issue: the affective outcomes of the simulation need to be aligned with preparedness rather than panic, and with awareness rather than confusion. An interpretable system that ensures users focus on qualitative categories like insight and revelation (rather than seeking an explanation of how the AI makes decisions) has the capacity to forge links between affective states and types of knowledge.
The AI components of iFire can create optimal outcomes through an explicit task- or purpose-oriented focus, the premier example being Lempert’s positioning of vulnerability as a compositional priority. This explicit articulation of both purpose and compositional priorities is central to an ‘interpretable’ AI system. Simultaneously, this investigation shows that immersive systems such as iFire have a reciprocal alignment function. iFire, when guided by compositional methods drawn from scenario modelling, can produce storylines akin to those theorised by Shepherd et al. But it does so with the capacity to present an affectively rich storyline that can link feelings with knowledge. As such, iFire delineates the role the creative arts can play in fostering climate change knowledge and preparedness, i.e. in shaping compositional structures and interpretable systems so that they give meaning to novel and unpredictable scenarios.

Acknowledgements

Baylee Brits’ postdoctoral fellowship is funded by the Australian Government’s Office of National Intelligence (RG220479). Carlos Tirado Cortes’ postdoctoral fellowship is funded by the Australian Research Council (FL200100004, directed by Laureate Prof. Dennis Del Favero), which develops the iFire research. Yang Song and Baylee Brits are involved in this project as expert collaborators.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
Altintas, I., Block, J., De Callafon, R., Crawl, D., et al. (2015). Towards an integrated cyberinfrastructure for scalable data-driven monitoring, dynamic prediction and resilience of wildfires. Procedia Computer Science, 51(1), 1633–1642.
Anderson, E. C., Carleton, R. N., Diefenbach, M., & Han, P. K. J. (2019). The relationship between uncertainty and affect. Frontiers in Psychology, 10, 2504.
Hädrich, T., Banuti, D. T., Palubicki, W., Pirk, S., & Michels, D. L. (2021). Fire in paradise: Mesoscale simulation of wildfires. ACM Transactions on Graphics, 40(4), 1–15.
Hazeleger, W., van den Hurk, B., et al. (2015). Tales of future weather. Nature Climate Change, 5, 107–113.
Hoang, R., Sgambati, M., Brown, T., Coming, D., & Harris, F. (2010). VFire: Immersive wildfire simulation & visualization. Computers & Graphics, 34(6), 655–664.
Jasanoff, S. (2010). A new climate for society. Theory, Culture and Society, 27(2–3), 233–253.
John-Matthews, J.-M. (2022). Some critical and ethical perspectives on the empirical turn of AI interpretability. Technological Forecasting and Social Change, 174, 121209.
Lempert, R. (2013). Scenarios that illuminate vulnerabilities and robust responses. Climatic Change, 117, 627–646.
Marcinkevičs, R., & Vogt, J. E. (2023). Interpretable and explainable machine learning: A methods-centric overview with concrete examples. WIREs Data Mining and Knowledge Discovery, 13(3), e1493.
Sharples, J. J., Cary, G. J., Fox-Hughes, P., Mooney, S., et al. (2016). Natural hazards in Australia: Extreme bushfire. Climatic Change, 139, 86.
Shepherd, T. (2019). Storyline approach to the construction of regional climate change information. Proceedings of the Royal Society A, 475, 20190013.
Shepherd, T., & Lloyd, E. (2021). Meaningful climate science. Climatic Change, 169, 17.
Shepherd, T., Boyd, E., Calel, R., Chapman, S. C., et al. (2018). Storylines: An alternative approach to representing uncertainty in physical aspects of climate change. Climatic Change, 151, 555.
Storey, M., Bedward, M., Price, O. F., Bradstock, R. A., & Sharples, J. J. (2021). Derivation of a Bayesian fire spread model using large-scale wildfire observations. Environmental Modelling & Software, 144, 1–2.
DOI: https://doi.org/10.1007/978-3-031-56114-6_6