Open Access 2024 | Original Paper | Book Chapter

2. Reimagining Extreme Event Scenarios: The Aesthetic Visualisation of Climate Uncertainty to Enhance Preparedness

Authors: Dennis Del Favero, Susanne Thurow, Maurice Pagnucco, Ursula Frohne

Published in: Climate Disaster Preparedness

Publisher: Springer Nature Switzerland


Abstract

Responding to the rapidly escalating climate emergency, this chapter outlines transformative multidisciplinary research centred on the visualisation of unpredictable extreme event scenarios. It proposes a unique, scientifically grounded artistic approach to one of the world’s most immediate challenges—preparing communities for extreme climate events, such as firestorms and flash floods. As preparedness is a function of prior threat experience, it argues that visualising threat scenarios in advance can be a key to surviving and adapting in an era of increasing climate instability. This approach can enable communities to viscerally experience and rehearse threat perception, situational awareness, adaptive decision making and dynamic response to unexpected life-threatening extreme events. Using experimental case studies at The University of New South Wales’s iCinema Research Centre and international benchmarks, this chapter explores how advances in immersive visualisation and artificial intelligence aesthetics can be integrated to provide a framework that enables the virtual prototyping of unforeseen geolocated climate scenarios to facilitate readiness in the face of accelerating climate uncertainty.

2.1 Introduction

The climate emergency presents an existential global crisis resulting from the combined processes of global warming and atmospheric, hydrospheric, biospheric and pedospheric degradation. The IPCC report of 2023 found that extreme climate events are rapidly increasing around the globe, with projections indicating that they will become more frequent and severe and that their impacts will intensify and interact. The World Economic Forum’s Global Risks Report 2023 identifies, for the first time, “failure to mitigate climate change” and “failure of climate change adaptation” as the most severe risks on a global scale, followed by “natural disasters and extreme weather events” (Heading & Zahidi, 2023).
In Australia, for example, natural disasters, primarily wildfires and floods, cost over $13 billion per year (1.2% of GDP) and are “expected to rise to $39 billion per year by 2050” (Slatyer et al., 2017). Over nine million Australians have been directly affected by extreme weather events since 1990. The Australian government’s National Strategy for Disaster Resilience and its National Climate Resilience and Adaptation Strategy emphasise the critical importance of employing all possible means to mitigate the impacts of climate emergencies on communities.
In reference to the findings of the Royal Commission into National Natural Disaster Arrangements (Binskin, 2020), Emergency Leaders for Climate Action (2022) urged that “unprecedented is not a reason to be unprepared. We need to be prepared for the future”. As detailed by Cunningham et al. in Chap. 13 of this volume, Australia’s policy settings have thus far failed to facilitate a level of effective preparedness, adaptation and mitigation that can pave a way to sustaining quality of life (or life itself) on our continent. The Australian Productivity Commission (2014) reports that 97% of Australia’s investment in climate emergencies is spent on recovery and only 3% on preparedness and mitigation. As fire scientist David Bowman (2023) points out, Australia is currently “sleepwalking” into a future of extreme and escalating climate events that will inevitably change how we can live on this “most fire-prone continent on Earth”. As Cunningham et al. argue, investing in preparedness is key to meeting this challenge. However, despite its importance, tangible examples of enhancing preparedness are rare, partly because it often requires deep levels of experiential and intellectual engagement across disciplinary boundaries (Lazo et al., 2015).
The climate crisis requires fundamental shifts in how these events are envisioned. Creative arts and technology are increasingly being used to depict and model these world-defining events, using evocative imagery and abstract graphics to communicate with audiences (e.g. Altintas et al., 2015; Calkin et al., 2021; Smith, 2015; Roelstraete et al., 2023). By depicting complex climate information in dynamic visual form, art and technology enable a more immediate and intuitive understanding of extreme events in ways that are not apparent in raw data (Shepherd & Truong, 2023). The arts can achieve this by depicting plausible scenarios, while technology undertakes it by modelling probable scenarios. Yet both currently tend to provide episodes that address non-localised uncertainties, rendering citizens as passive witnesses to the aftermath or the events as abstract forces, and thereby constraining the capacity to anticipate reliable scenarios (Jasanoff, 2010; Sheppard, 2019). This leads to detached, generic observations, whereas credible and meaningful preparedness emerges from embedded sensorial experiences that can inform decision making, since preparedness is a function of the prior experience of a perceived threat (Lazo et al., 2015). Most importantly, current art and technology approaches are unable to experientially vivify the increasingly uncertain interaction between geolocated events and situated communities, amplifying a profound existential vulnerability (Sheppard, 2005).
To overcome this challenge, government agencies, emergency services and researchers are seeking multidisciplinary innovations to optimise foresight, readiness and responsiveness to face the catastrophic risks of global warming (Jasanoff, 2010). This challenge demands reformulating how these interactions are envisioned. It necessitates a shift from abstract observational depiction and modelling to a scenario visualisation that facilitates a viscerally immersive interaction with unforeseen situated threats in advance. This formulation would need to integrate the recent advances in the arts that create compelling visceral experiences with developments in artificial intelligence (AI) that process and render complex and unpredictable climate processes.
This chapter surveys the conceptual and practical requirements for integrating these approaches and establishing a climate scenario visualisation framework to address this existential challenge. It probes preliminary experimental case studies developed at The University of New South Wales’s iCinema Research Centre in tandem with international benchmark research. It outlines the state of the art in current scenario visualisation and maps how recent advances can be leveraged to address significant constraints currently limiting progress in the development of preparedness, both in understanding and practice. The chapter argues that what is required is an immersive visualisation and AI aesthetic that facilitates actionable visualisation (Sheppard, 2005) using virtual landscape scenarios in which viewers can safely curate the variables that drive extreme events and compose episodes through which to rehearse and test responses. In such dynamic landscapes, communities can viscerally and interactively enact scenarios that embody the unforeseen interactions between themselves and constantly fluctuating climate variables. Such a multidimensional aesthetic framework would act as an intelligent simulatory theatre where communities could creatively envision their transaction with the intense uncertainty and non-linearity of extreme events in their locality.

2.2 Immersive Visualisation

Recent developments in immersive visualisation enable users to navigate through dynamically modelled territories, where they can experience multisensorial vulnerabilities and safely rehearse their response to threats in advance within a credible spatial environment (e.g. Soga et al., 2021). Immersion here refers to physically embedding the user in a virtual scenario where they can navigate through its three-dimensional (3D) space, enabling them to spatially explore scenarios as if they are present (Fonnet & Prié, 2021; Suh & Prophet, 2018).
This use of immersive scenarios to simulate threat preparedness is evidenced in the iCASTS project (Shaw & Del Favero, 2010–ongoing), which to date has trained over 30,000 underground miners in New South Wales. It uses 360° 3D theatres situated on mine sites, in which plausible, naturalistic simulations of probable, evidence-based hazardous settings are presented. Integrating artistic and technological methods, trainees experientially engage with predictable threats, underpinning a 65% reduction in serious injuries and zero fatalities across the NSW mining sector (Pedram et al., 2014; Fig. 2.1).
In similar ways, the University of California’s Center for Information Technology Research in the Interest of Society (CITRIS) is developing immersive interactive projects that focus on visualising navigational decision making during a disaster. Their aim is to enhance communication between emergency services and the public and to understand how future evacuations may be improved. Stakeholders use head-mounted VR to rehearse navigating expected hazards during an evacuation simulation “to help shift [their] perspectives from reacting to events when they occur, to anticipating emergencies and looking for ways to reduce risk before disaster strikes” (Soga et al., 2021).
Virtual scenarios are proving to be powerful tools for envisioning complex interactions and for probing viable pathways for action (Havenith et al., 2019). They are multimodal, combining sight, sound and kinaesthetic dimensions, and multidimensional, enabling a multiplicity of flexible responses that are required in chaotic situations (Lempert, 2013). However, current approaches lack the ability to depict unpredictable interactions between stakeholders and geolocated events, where uncertainty is central. This severely limits their capacity to support preparedness.
Addressing these interactions demands developing immersive scenarios that model rapid, large-scale and unanticipated transactions in geolocations, which cannot be understood by human cognition alone. It requires a transformative aesthetic that integrates cutting-edge advances in creative arts and AI. This would combine the speed and scale of AI in establishing patterns and predicting behaviours, with the digitally augmented inventiveness of aesthetic practices to make sense of and process the uncertainty of situated sensorial experience (Grosz, 2001; Del Favero et al., 2023). The complex nature of climate scenarios requires a multilayered aesthetic framework that utilises different forms of sense making—encompassing picturing a situation, accounting for what is seen, communicating what is experienced and rehearsing a response (Lempert, 2013).
Such a framework would empower stakeholder communities to viscerally perceive threats, dramatise hazardous stories, prototype risk-laden encounters and probe geolocal readiness. In short, it would mobilise preparedness by previewing and rehearsing an appropriate response. Reframing visualisation from what will happen to what it could look like and how to prepare, it will transform the art and technology involved and establish a transformative knowledge domain in climate scenario visualisation. This will shift the envisioning of climate disasters from the passive observation of events into an experiential preview of hyperlocal unexplored situations (Frohne, 2023). It will enable readiness and adaptive responsiveness for unforeseen emergencies through the embedded rehearsal of probable and plausible geolocated scenarios. Investing in such a risk reduction strategy will provide a compounding dividend of avoided loss and suffering, reduced disaster costs and enhanced creative capacity, cultural cohesion and social empowerment.
Such an integration of art and technology needs to start by reconsidering how we imaginatively formulate these crises and our interaction within them. As set out by Thurow, Grehan and Pagnucco in Chap. 9, dynamic systems theory, such as that of Bruno Latour (2018), considers humans as part of a symbiotic “terrestrial habitat”. The concept of the “terrestrial” redefines humans as one of the many Earth-bound agents, co-habiting with multiple other organic and non-organic agencies, including the sensorial and cognitive aesthetic processes and forces that form part of the habitat. The climate breakdown is disrupting these terrestrial and corresponding aesthetic systems. This instability is now centre-stage and manifested in the uncertainty across all terrestrial systems and the ways they are aesthetically formulated (Willcock et al., 2023). It triggers intense climate variability, multiplicative stresses and intersystem feedback with indeterminate outcomes—as seen, for example, in the unexpectedly accelerated rise of phosphorus concentrations in freshwater reserves (ibid.). This increasing uncertainty is challenging predictive data modelling and existing imaging paradigms, necessitating an aesthetic that can address the resulting systemic instability. As Britts, Song and Cortes discuss in Chap. 6, climate scientists are proposing aesthetic modifications to simulation approaches that focus on situated scenario visualisation to augment decision making, using storylining (Shepherd et al., 2018), tales (Hazeleger et al., 2015) and scenarios (Lempert, 2013).
Such a situated scenario visualisation would need to be grounded in robust data to ensure it aligns as accurately as possible with the determinate and indeterminate physical processes that govern extreme events. As Moinuddin et al. and Song et al. demonstrate in Chaps. 4 and 5, respectively, new physics-based modelling supported by machine learning (ML) and generative AI can supply such a basis if coupled with efficient processing pipelines. However, by themselves, data analysis and established visualisation approaches are currently not capable of furnishing meaningful insight and engagement with situated extreme event modelling. They yield vast amounts of data that are most commonly visualised through abstract graphical illustrations—struggling and often failing to convey geolocated dimensions of the events, which in their immediacy directly influence people’s response in emergencies. The visceral experience of a fire, its terrifying unpredictability and sensorial scale, can easily override any preparatory plans and actions devised via abstract graphics or text-based engagement.1 To ensure people have a safe and reliable concept of what to expect when facing a firestorm or flash flood, they need to be presented with a compelling and credible rendition of what facing such events might look and feel like. Algorithmic visualisation systems can be complemented by creative methodologies so that people may experience such overwhelming scenarios in advance in a safe virtual environment, allowing the chaos of the situation to be moderated through visceral rehearsal of response strategies. As Grehan, Ostwald and Smaill argue in Chap. 14, the arts and architecture excel at aesthetically transmitting such experiential qualities and can do far more than merely facilitate evocative stories of climate change impact. They can provide the key to unlocking an affective and intelligent preparedness based on enhanced data insight, visual analytics and rehearsal approaches. These can transform isolated disciplinary approaches into a cohesive and powerful integrated approach.

2.2.1 AI Aesthetics

Actively involving viewers in the composition of climate scenarios can significantly enhance their immediacy and meaning (Stevens et al., 2023). Traditional linear narrative concepts derived from semiotic and semantic theories are ill-suited to realise such a dynamic formulation, as they fail to leverage the expanded capabilities of immersive visualisation and AI to reformulate engagement with the pillars of storytelling, namely, progression through time and space (Deleuze, 1995). Narratives that adhere to realist paradigms simulating everyday experiences of these dimensions are limited in their capacity to stimulate reflection and to afford insight into the fundamental relations that embed us in our habitat. While hyperrealistic renditions of pre-scripted scenarios, such as those provided in Belinda Chayko and Tony Ayres’s seminal TV series Fires (2021), can impress upon audiences the ferocity of violent firestorms, they cannot facilitate personally relevant foresight into viable response strategies when faced with such incidents. This is because the linear progression determined by the traditional televisual medium already prefigures the causal logic and semantic valuation of selected narratives and frames a passive viewing position. By contrast, the malleability that underpins immersive visualisation and AI aesthetics allows such detached positioning to be transformed by affording the means to radically redefine narrative forms and interactions.
Digital artefacts are multimodal in nature, “shaped by software rather than semiotic codes: [that is,] software compresses information into virtually realisable [and interpretatively] thick units” (Weibel, 2002). Rather than applying psycholinguistic approaches that understand narrative as the recovery of representational structures from semantic memory (Willemen, 2002), digital media are better served by a concept of narrative defined “as the episodic recomposition of emergent events within the affective, sensory and cultural memory” of the viewer (Deleuze, 1995). Such a definition captures and engages the layered and autonomous status of data in its virtual form, which is open to actualisation in manifold combinations and translation across varied contexts (Manovich, 2001). It develops concepts of relational semantics that have conceptually transferred meaning-making authority from the author to the receiver of information, emphasising that the aesthetics of data carry their own implications for aesthetic practice that are activated in dialogical engagement with the viewer. Narrative in these contexts becomes recombinatory, defined as a recursive system made up of a large number of self-organising and interdependent data elements that can provide the viewer with richly textured engagements. As a technical and critical framework, recombinatory narrative recycles the abundance of available information into significant multitemporal episodes. For example, rather than being presented with a predetermined set of narrative permutations, viewers can actively partake in combining and curating units of data into open-ended and infinitely malleable storylines. Applied to flood visualisation, insight into and engagement with the dynamics of these events is far better enabled by letting viewers actively explore the effects of flood variables and evacuation decisions on the progression of a scenario than by having them merely watch a recording of events as they unfold.
The effectiveness of such approaches has already been foreshadowed by seminal international research, such as Refik Anadol’s Archive Dreaming (2017), which uses an AI framework to enable a user to search and sort 1.7 million archival documents and convert these into dynamic and novel narratives within an immersive theatre. When idle, the installation generates unexpected correlations between the documents. In a similar vein, the T_Visionarium project (Del Favero et al., 2004–2009) offers the means to capture and reassemble televisual data supported by an AI image analysis and recombinatory system within a panoramic 360° theatre. It allows viewers to explore and actively compose a multitude of stories reassembled from the original data. Digital free-to-air Australian television was captured over a 1-week period. This footage was segmented and converted into a database containing over 20,000 short clips. Each clip was first manually tagged with metadata descriptors defining its properties and then processed using AI. The AI image analysis and recombinatory system assembles and displays across the 360° screen a selection of related visual material based on the metadata. The metadata include categories such as the gender of the actors, the dominant emotions they are expressing and the prevalent colouring of the scene. Dismantling the video data in this way breaks down the original linear narrative into components that then become building blocks for a new kind of interactive television. Two hundred and fifty video clips are simultaneously displayed and distributed around the panoramic theatre. The user can select, rearrange and link these video clips at will, composing them into combinations based on relations of gesture and movement. They are able to reassign connections among data layers by pleating and creasing their topology until they cascade into new episodes of autonomously unfolding events. The AI furnishes the user with multiple entry and exit points to the data, with the facility to generate narrative content on the fly (Manovich, 2001). Experiments such as these do not attempt to freeze the world into monolithic representations. Instead, they explore how the world may be dramatised in ways that shed light on how it is an assemblage of images and processes (Serres, 2000; Fig. 2.2).
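To make the recombinatory mechanism concrete, the following minimal sketch illustrates metadata-driven clip retrieval of the kind described above. The tag scheme, field names and similarity measure are hypothetical assumptions for illustration, not T_Visionarium’s actual implementation.

```python
# Minimal sketch of recombinatory clip retrieval: clips carry metadata
# descriptors, and selecting one clip pulls the most closely related clips
# onto the display. Tags and scoring are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: int
    tags: set  # e.g. {"gender:female", "emotion:anger", "colour:red"}

def similarity(a: Clip, b: Clip) -> float:
    """Jaccard overlap of metadata tags: 1.0 means identical descriptors."""
    if not a.tags or not b.tags:
        return 0.0
    return len(a.tags & b.tags) / len(a.tags | b.tags)

def related_clips(selected: Clip, database: list, k: int = 250) -> list:
    """Return the k clips most similar to the user's selection, emulating
    the panoramic display of related material around the selected clip."""
    candidates = [c for c in database if c.clip_id != selected.clip_id]
    return sorted(candidates, key=lambda c: similarity(selected, c), reverse=True)[:k]

# Usage: select a clip and recompose the screen around it.
db = [
    Clip(1, {"gender:female", "emotion:anger", "colour:red"}),
    Clip(2, {"gender:male", "emotion:anger", "colour:red"}),
    Clip(3, {"gender:female", "emotion:joy", "colour:blue"}),
]
print([c.clip_id for c in related_clips(db[0], db, k=2)])  # -> [2, 3]
```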
T_Visionarium explores the productive capabilities of recombinatory narrative: that is, deconstructing televisual sequences into their building blocks and allowing their interactive recomposition, to probe new creative modes of user-driven storytelling. When seeking to make an interactive recombinatory system useful for exploring climate processes, however, fully open-ended user control misaligns with the task of plausibly engaging with physical laws. User domination could easily override these laws and fail to account for the global-scale interactions that shape extreme events. These systems stand in continuous exchange with each other as well as with human and non-human agents (Latour, 2018). Developing compelling scenarios while retaining plausibility requires integrating artistic free-form interaction with a grounding in real-world physics. Users have to feel empowered to affect the visual environment while at the same time encountering the resistance that conveys the actions of non-human agencies. If such resistance is perceived as coherent, i.e. following discernible patterns, then a dialogue between users and an interactive system can be established and explored.
The Atmoscape project (Del Favero et al., 2012–2014) embarked on such an experimental investigation by processing low-Earth-orbit satellite data of lower-atmosphere water vapour provided by NASA. This was translated into immersive scenarios for scientific and artistic applications. For the scientific application, the immersive visualisation allowed scientists to observe and interactively study phenomena such as reticulations in water vapour layers preceding tropical cyclone formation for the very first time, as these had gone undetected in conventional raw data modelling. In its artistic application, the Nebula project (2014–2023) investigated the emergence of an aesthetic that demonstrated the potential for alternative conceptualisations to the pervasive idea of landscape, including its weather and atmosphere, as an inert backdrop to human activity. Driven by an AI particle-generation graphics engine, Nebula allows users to interact with water vapour particles underpinning a virtual landscape. They can interactively assemble resistant AI-programmed particles into a range of clustering topographies and vistas. While immersed in these vertiginous terrains, users hear a voice challenging them to explore unknown sites. As they attempt to herd the independently minded particles into recognisable landscapes, the undulating generative imaging system suggests how their actions and those of the landscape are fundamentally enmeshed, co-dependent yet autonomous. With particles constantly shifting and re-assembling, giving way to new combinations, users are forced to engage with these dynamic processes and to find ways of navigating and responding to the independent kinaesthetic flux of particle streams. In its abstraction of atmospheric data and its encoded processes, Nebula affords users the opportunity to explore an AI aesthetic that is underpinned by analysis and embodiment of water vapour dynamics. Such an AI aesthetic can drive insight into the constitution of terrestrial systems as dynamically mutating situated habitats (Fig. 2.3).
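The interplay between user herding and particle autonomy that Nebula stages can be sketched as a simple blend of forces. The toy 2D simulation below is an assumption-laden illustration (the obedience weight, swirl field and noise are invented), not Nebula’s particle engine.

```python
# Toy sketch of "resistant" particles: each particle blends the user's
# herding pull with an autonomous swirl and random flux, so clusters form
# without ever fully submitting to user control. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(500, 2))  # 500 particles in 2D

def step(positions, cursor, obedience=0.3, dt=0.05):
    """Advance one frame: a weighted sum of user attraction and autonomy."""
    to_cursor = cursor - positions                                 # herding force
    swirl = np.stack([-positions[:, 1], positions[:, 0]], axis=1)  # autonomous rotation
    noise = rng.normal(0.0, 0.2, size=positions.shape)             # independent flux
    velocity = obedience * to_cursor + (1.0 - obedience) * (swirl + noise)
    return positions + dt * velocity

cursor = np.array([0.5, 0.5])
for _ in range(100):
    positions = step(positions, cursor)
print(positions.mean(axis=0))  # the cloud drifts toward the cursor, but only partially
```

The low obedience weight is the point: coherent but incomplete compliance is what lets users perceive the particles as non-human agencies with which a dialogue can be established.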
In order to furnish an evolving transaction in which both human and other agencies mutually determine each other, capabilities for dialogical interaction need to be developed that articulate how these agencies are symbiotically asymmetrical. Narrative has to co-evolve between human and virtual agents based on autonomous yet reciprocal perception and interpretation of behaviours and interactions. The Scenario project (Brown et al., 2011–2015; Scheer, 2012) experimentally investigated such forms of co-evolutionary relationship between virtual agencies and human participants in immersive environments.
In Scenario, a female humanoid character and her children seek to flee an underground labyrinth, enlisting the users’ help to identify viable escape routes. A group of shadowy AI-enabled humanoid sentinels are tracking the family and users, trying to block their attempts at successfully navigating the nested spaces. The sentinel interaction is achieved by means of a computer vision system that tracks the users’ behaviour, linked to an AI system that allows the humanoid virtual agents to independently interpret and respond to user behaviour—with actions sampled from a knowledge database. The AI system was developed using a variant of a symbolic logic planner drawn from the cognitive robotics language Golog, capable of dealing with sensors and external actions. Animations that can be performed by a humanoid character were considered actions that needed to be modelled and controlled.2 Each action was modelled in terms of the conditions under which it could be performed3 and how it affected the environment when the action was performed. Using such modelling, the AI system planned and coordinated the actions (i.e. animations) of the humanoid characters by reasoning about the most appropriate course of action. This imbued the sentinels with a number of capacities beyond the rudimentary pre-scripted symmetrical behaviour of conventional virtual agents such as regular computer game characters. First, they were invested with the ability to sense the behaviour of individuals as well as collectives of users. Second, the AI system enabled symbolic representation of this behaviour. And third, the agents were able to deliberate on their own behaviour and respond intelligibly through gestural and clustering actions. The framework was structured so as to respect autonomous virtual agent intentionality, as opposed to the simulated intentionality of conventional games. While narrative reasoning in human-centred interactivity focuses exclusively on human judgements, the co-evolutionary narrative here allows deliberate autonomous action by virtual agents (Scheer, 2012). Scenario was designed to dramatise distinct behavioural processes, thus probing virtual agent autonomy and the cognitive gap between virtual agents and human participants. It investigates the differences in narrative reasoning between them. It probes how virtual agents, if provided with a modest ability to sense and interpret the actions of human users in a shared immersive environment, can interactively respond and co-evolve autonomously with the users (Fig. 2.4).
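The precondition-and-effect modelling described above can be conveyed by a generic STRIPS-style sketch. The facts and action names below are hypothetical, and the breadth-first planner merely stands in for the Golog-variant symbolic logic planner the project actually used.

```python
# Generic sketch of action modelling and planning: each animation is an
# action with preconditions (when it may be performed) and effects (how it
# changes the environment), and the planner searches for a sequence that
# achieves a goal. Illustrative only; Scenario used a Golog-variant planner.
from collections import deque

class Action:
    def __init__(self, name, preconditions, add, delete):
        self.name = name
        self.pre = frozenset(preconditions)  # facts required before acting
        self.add = frozenset(add)            # facts made true by the action
        self.delete = frozenset(delete)      # facts made false by the action

    def applicable(self, state):
        return self.pre <= state

    def apply(self, state):
        return (state - self.delete) | self.add

# Hypothetical sentinel actions: move next to the user, then block them
# (cf. footnote 3: pushing requires being located next to an agent).
actions = [
    Action("walk_to_user", {"sees_user"}, {"next_to_user"}, set()),
    Action("push_user", {"next_to_user"}, {"user_blocked"}, set()),
]

def plan(initial, goal, actions):
    """Breadth-first search over states for a shortest action sequence."""
    start = frozenset(initial)
    frontier, visited = deque([(start, [])]), {start}
    while frontier:
        state, seq = frontier.popleft()
        if frozenset(goal) <= state:
            return seq
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, seq + [a.name]))
    return None

print(plan({"sees_user"}, {"user_blocked"}, actions))  # ['walk_to_user', 'push_user']
```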
These AI-enabled capabilities provide a robust foundation for the exploration of evolving and cascading interaction between autonomous climate agencies and human participants in an immersive environment. This is the prerequisite for conceiving open-ended and unimagined encounters that plausibly anticipate probable extreme events.

2.3 Towards Climate Scenario Visualisation

iFire (Del Favero et al., since 2021) explores the visualisation of dynamic wildfire scenarios. It is an experimental prototype for climate scenario visualisation, integrating the advances in immersive visualisation and AI aesthetics traced in the above case studies. It is being developed in collaboration with a range of national and international partners, including the Australian Broadcasting Corporation, AFAC (Australasian Fire and Emergency Service Authorities Council), CSIRO’s Data61, Fire and Rescue NSW, the Royal College of Art London, San José State University’s Wildfire Interdisciplinary Research Center and The University of Melbourne. It assembles a team of Australian, European and US artists; AI, computer, fire and climate scientists at international universities; and partner organisations to develop immersive visualisations of extreme wildfires and their uncertain dynamics. It utilises an AI-based landscape prototype that not only interprets but learns from human interaction and behaves autonomously. It sketches a scenario visualisation system that can depict unpredictable climate event interactions.
iFire’s goals are to:
1. Establish synthetic landscapes that can envision unpredictable wildfires and explore imaginative risk perception.
2. Create improvised narratives that can dramatise unanticipated fire behaviour and embody visceral decision making.
3. Enact interactive experiences in rural settings where users can virtually rehearse unexpected fiery encounters.
4. Generate geolocated scenarios that can model unanticipated fire-laden landscapes and probe readiness.
The iFire prototype is being developed for application as an artistic and a scientific series, respectively titled Penumbra and Umbra. It consists of a database of atmospheres, flora, pyro-histories and topographies, together with AI landscapes. To ensure geophysical reliability, it applies both the SPARK and WRF-SFIRE simulation engines to geographical databases. To implement a rich sensorial credibility for these worlds, it uses the Unreal game engine and a customised interface specific to each series. This enables higher-fidelity texturing and modelling of the landscape at scale. The interfaces are venue domain-specific, such as for museums, scientific laboratories and emergency centres. For artistic applications, it utilises cinematic environments where interaction is driven by motion tracking. For science and emergency applications, it uses either a tablet for physical screen environments or a mouse for online environments. A main window display is used for landscape navigation, with smaller inset informational windows conveying variables such as wind speed. Each series is being developed in collaboration with domain-specific stakeholders. Both series are explored through three geolocated case studies: a hypothetical pine plantation fire, a grasslands fire in the Australian state of Victoria (2022) and the Bridger Foothills Fire in Montana, USA (2020). The case studies progressively explore the intensifying dynamics of fire, from the irregular rhythms of a low-intensity fire to the violent vorticity-driven lateral spread of an extreme fire.
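As a rough illustration of how such a modular architecture might be parameterised, the sketch below bundles one case study into a configuration object. Every field name and value here is a hypothetical assumption made for illustration, not iFire’s actual data structure.

```python
# Hypothetical scenario configuration reflecting the components named above:
# terrain and fuel databases, a physics backend (SPARK or WRF-SFIRE), the
# Unreal rendering frontend and a venue-specific interface.
from dataclasses import dataclass

@dataclass
class ScenarioConfig:
    case_study: str        # named geolocated case study
    terrain_db: str        # topography database (assumed identifier)
    fuel_db: str           # flora / fuel-load database (assumed identifier)
    simulator: str         # physics backend: "SPARK" or "WRF-SFIRE"
    renderer: str          # sensorial frontend, e.g. "Unreal"
    interface: str         # "motion_tracking" | "tablet" | "mouse"
    wind_speed_kmh: float  # initial value for an adjustable variable
    fuel_load_t_ha: float  # tonnes per hectare

bridger = ScenarioConfig(
    case_study="Bridger Foothills Fire",
    terrain_db="montana_dem",
    fuel_db="montana_fuels",
    simulator="WRF-SFIRE",
    renderer="Unreal",
    interface="tablet",
    wind_speed_kmh=45.0,
    fuel_load_t_ha=12.0,
)
print(bridger.simulator, bridger.interface)
```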
The artistic Penumbra series explores the multisensorial qualities of unexpected wildfire experiences for creative industry audiences. The scientific Umbra series investigates the dynamic interplay between unanticipated wildfire processes and users for scientific analysis and emergency training. Both series are remotely accessible. They are developed across a networked, fully immersive system that can translate multilayered wildfire data into 3D scenarios, allowing exploration of situations across distributed locations. This comprises immersive cinemas, wall displays, desktops and tablets. It generates hyperrealistic immersive visualisations that can stage a geolocated wildfire as it unfolds on the ground in interaction with users. The visualisations operate in two modalities—the first enables users to create compelling hypothetical scenarios, and the second allows users to recreate historical fires and generate probable future scenarios.
The AI system that underpins the wildfire landscape has three distinct functions. First, its ML system addresses shortcomings in existing empirical and physical models. For physical models, the computational complexity grows exponentially with higher resolution, and simulations take an inordinate amount of time, which is prohibitive for operational use. Empirical models, on the other hand, are mostly used in operations because of their simplicity. However, because they simplify the processes and dynamics of wildfires, empirical models often fail to achieve the accuracy required. To address the disadvantages of both physical and empirical simulations, iFire’s ML-based simulations learn multidimensional representations of data in latent space and apply physics-related constraints to a neural network. Once trained, the model provides instant and accurate inference.
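Schematically, constraining a learned surrogate with physics amounts to adding a penalty for physically implausible predictions to the ordinary data-fit loss. The sketch below invents a toy surrogate, a wind-dependent cap on spread rate and a dependency-free random-search fit; none of this is iFire’s actual formulation, which trains a neural network over latent representations.

```python
# Toy physics-constrained surrogate: the loss couples a data term (fit to
# observed spread rates) with a penalty for violating an invented physical
# constraint (spread rate non-negative and below a wind-driven ceiling).
import numpy as np

def surrogate(params, wind, fuel):
    """Toy linear model predicting fire spread rate from wind and fuel load."""
    w_wind, w_fuel, bias = params
    return w_wind * wind + w_fuel * fuel + bias

def loss(params, wind, fuel, observed, alpha=10.0):
    pred = surrogate(params, wind, fuel)
    data_term = np.mean((pred - observed) ** 2)  # fit to observed fires
    # Physics-related penalty: negative spread, or spread above a
    # (hypothetical) ceiling of 2x wind speed, is penalised quadratically.
    violation = np.maximum(0.0, -pred) + np.maximum(0.0, pred - 2.0 * wind)
    return data_term + alpha * np.mean(violation ** 2)

# Crude random-search fit keeps the sketch dependency-free; a real system
# would train a neural network with gradient descent instead.
rng = np.random.default_rng(1)
wind = rng.uniform(2.0, 20.0, 200)
fuel = rng.uniform(0.5, 3.0, 200)
observed = 0.8 * wind * (fuel / 3.0) + rng.normal(0.0, 0.5, 200)  # synthetic data
best, best_loss = None, np.inf
for _ in range(5000):
    candidate = rng.normal(0.0, 1.0, 3)
    l = loss(candidate, wind, fuel, observed)
    if l < best_loss:
        best, best_loss = candidate, l
print(best, best_loss)  # parameters fitting the data while respecting the constraint
```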
Second, the AI system implements algorithms for the interpretation of motion capture and tracking data derived from individual users and groups. It applies algorithms that enable the AI landscape to set goals, learn from and autonomously respond to user behaviour. This furnishes the landscape with the ability to develop and realise its goals while integrating what it learns from human behaviour, through exceptionally fast execution enabled by the programming language verified in the Scenario and Nebula projects. The system interprets past user behavioural patterns, models and anticipates future behaviour and makes inferences about how to act in response. It achieves this by using open-ended rules that allow the AI setting to infer from unknown situations, make predictions, decide how to act, learn from the interaction and adapt its reasoning. This ensures that the setting can make independent decisions, a critical attribute for its improvised and reciprocal interaction with users, so that it acts in unanticipated ways. Third, the AI goal-orientated system analyses user interactive decision making to both support and challenge it, in order to optimise user response to scenario uncertainty.
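The second function’s predict-and-respond loop can be sketched minimally: an agent logs user actions, predicts the most likely next action from observed transition frequencies and selects a countering behaviour. The action vocabulary and challenge table below are hypothetical.

```python
# Minimal sketch of a landscape agent that learns user behaviour patterns
# and responds by challenging the predicted next move. Action names and the
# challenge mapping are invented for illustration.
from collections import Counter, defaultdict

class LandscapeAgent:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # previous action -> next-action counts
        self.prev = None

    def observe(self, action):
        """Log a user action and update the transition statistics."""
        if self.prev is not None:
            self.transitions[self.prev][action] += 1
        self.prev = action

    def predict(self):
        """Most frequent follow-up to the user's last action, if any."""
        counts = self.transitions.get(self.prev)
        return counts.most_common(1)[0][0] if counts else None

    def respond(self):
        """Choose a behaviour that challenges the predicted next move."""
        challenge = {
            "retreat_uphill": "spread_fire_uphill",
            "cut_firebreak": "launch_ember_storm",
        }
        return challenge.get(self.predict(), "hold_pattern")

agent = LandscapeAgent()
for a in ["cut_firebreak", "retreat_uphill", "cut_firebreak", "retreat_uphill"]:
    agent.observe(a)
# After "retreat_uphill" the user has always cut a firebreak, so the
# landscape pre-empts that move with an ember storm.
print(agent.respond())  # -> launch_ember_storm
```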
The AI is applied in different ways across the two series. The artistic Penumbra series leverages AI to explore an open-ended user and landscape dialogue that co-evolves through reciprocal transactions between the user and the fire-laden terrain. For example, the landscape may choose to collaborate with the user by enlivening the terrain and propagating fire-resistant trees, or it may challenge the user’s attempt to control the fire’s spread by generating chaotic ember storms. The unpredictable nature of its behaviour opens pathways for new forms of interactive encounter that are open to user and AI co-creativity while circumscribed by physics. By translating wildfire behaviour into discrete actions, Penumbra facilitates an experimental imaginary where possibilities and vulnerabilities can be generated and investigated. Opening a new horizon for reimagining extreme fires, it enables the prototyping of an anticipatory, life-saving imaginary. By interactively exploring and transforming the dynamic range of possibilities involved, it offers artists and audiences a new genre of collaborative human and machine co-creativity through which they can compose a wide spectrum of previously unforeseen and mutating encounters with evolving fire landscapes, unrestricted in aesthetic form, complexity and inventiveness (Fig. 2.5).
The scientific Umbra series leverages the AI to furnish an analytical laboratory for scientists and a training platform for emergency personnel, such as fire crews, incident controllers and operations officers (Fig. 2.6). The focus for the first is to facilitate 3D animated models of extreme fires in geolocated landscapes where scientists can either forensically assess historical fires or test their hypotheses for upstream scenarios by manipulating variables such as wind speed and fuel load. For the second, the focus is to train emergency service personnel’s situational awareness and decision making in a safe environment. Actual wildfire grounds are impossible to use as training environments due to their clear and present dangers. Emergency personnel often have very short timespans to spot key indicators of impending conflagrations and to make life-saving decisions. This series allows personnel to adjust the situational variables to focus on a specific critical variable, such as wind direction, and to viscerally experience the dramatic changes these can trigger. They are able to explore and practise the efficacy of a range of different responses, training their perceptual skills. This allows them to momentarily step away from the hyperattention required in the real-life situation, to rethink, reflect and reconsider the complex relationships that structure and govern their action space. The AI system learns from, predicts and disrupts their reactions while presenting a challenging range of novel fire behaviours. These features allow emergency personnel to experientially deal with unpredictable scenarios and develop proactive planning through dramatisations that safely simulate complex uncertainties. Groups of responders are physically placed inside a rapidly moving wildfire landscape and confronted with evolving situations that challenge them to collaborate across geographic locations. Emergency organisations can integrate location-specific data and protocols into the simulation to build informational complexity and provide trainees with the challenging and unanticipated experiences they need to manage and mitigate risk.

2.4 Conclusion

As extreme event preparedness is a function of prior threat experience, safely visualising threat scenarios in advance is key to enhancing survival and adaptation in an era of unpredictable extreme event emergencies. A climate scenario visualisation framework aims to model a visceral imaginary to foster community preparedness by aesthetically integrating and transforming artistic, technological and scientific approaches. It would generate the capacity to make sense of an extreme event by picturing the situation, narrating its contours, interacting with its dynamics, communicating its experiences and testing a credible response. This would enhance a community’s ability to viscerally experience and develop threat perception, situational awareness, adaptive decision making and flexible response to unexpected life-threatening situations. By integrating advances in immersive visualisation and AI aesthetics, such a framework would virtually rehearse unforeseen geolocated extreme events to facilitate readiness in the face of escalating and profound climate uncertainties.

Acknowledgements

The research described in this chapter has been supported by the Australian Government through the Australian Research Council’s Laureate and Discovery funding schemes (FL200100004, iFire; DP120102243, Atmoscape; DP110101146, Nebula; DP0556659, Scenario; DP0345547, T_Visionarium). All authors are ARC investigators as listed for these funding schemes. This chapter includes extracts by the authors from the iCinema website.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. That is, those from emergency responders and local communities.
2. For example, walking to a location, pushing a character, etc.
3. For example, pushing something or someone could only be executed if located next to an object or agent.
References
Altintas, I., Block, J., De Callafon, R., Crawl, D., et al. (2015). Towards an integrated cyberinfrastructure for scalable data-driven monitoring, dynamic prediction and resilience of wildfires. Procedia Computer Science, 51(1), 1633–1642.
Ayres, T., Chayko, B., Denholm, A., & Watts, L. (executive producers). (2021). Fires [TV series]. Australian Broadcasting Corporation.
Binskin, A. C. M. M. (2020). Royal Commission into national natural disaster arrangements report. Commonwealth of Australia.
Bowman, D. (2023). ‘Australia is sleepwalking’: A bushfire scientist explains what the Hawaii tragedy means for our flammable continent. The Conversation. https://shorturl.at/chLSX. Accessed 1 Dec 2023.
Brown, N. C., Del Favero, D., Shaw, J., & Weibel, P. (2004–2017). T_Visionarium. Exhibitions have included: 2017: ‘Art of Immersion’, ZKM Media Museum, Karlsruhe, Germany. 2014: Chronus Art Center, Shanghai, China. 2010: ‘STRP Festival’, Strijp R Klokgebouw, Eindhoven, Netherlands. 2009: ‘International Architecture Biennale’, Zuiderkirk, Amsterdam, Netherlands; ‘Un Volcan Numérique’, Le Havre, France; ‘Second Nature’, Aix-en-Provence, France. 2008: ‘eARTS Festival: eLANDSCAPES’, Shanghai Zendai Museum of Modern Art, China; ‘Biennial of Seville’, Spain; ‘Sydney Festival’. 2007: ‘YOUser’, ZKM. 2006: ‘Artescienza: La Rivoluzione Algoritmica, Spazio Deformato’, Casa dell’Architettura, Rome, Italy. 2005: ‘Avignon Festival’, France. 2004: ‘Cinémas du Futur, Lille 2004 Capitale Européenne de la Culture’, Centre Euralille, Lille, France.
Calkin, D. E., O’Connor, C. D., Thompson, M. P., & Stratton, R. (2021). Strategic wildfire response decision support and the risk management assistance program. Forests, 12, 1407.
Del Favero, D., with Davidson, J. W., Green, C., Moinuddin, K., Ostwald, M. J., Pagnucco, M., & Song, Y. (since 2021). iFire. Exhibited at: 2023: ‘SIGGRAPH Asia’, ICC, Sydney; ‘Viscera’, Cavallerizza Reale, Turin, Italy. 2022: ‘Beyond the Night’, ‘Art Cologne’ and ‘Dusseldorf Cologne Open’, Galerie Brigitte Schenk, Cologne.
Del Favero, D., Shaw, J., Benford, S., & Goebel, J. (2011–2015). Scenario. Exhibited at: 2014–15: ‘Jeffrey Shaw & Hu Jieming Twofold Exhibition’, Chronus Art Center, Shanghai, China; ‘Child, Nation & World Cinema Symposium’, UNSW, Sydney. 2013: ‘ISEA’, UNSW, Sydney. 2011: ‘Sydney Film Festival’, Sydney. 2010: ‘15th Biennial Film & History Conference’, UNSW, Sydney.
Del Favero, D., Bennett, J., Brown, N., Shaw, J., & Weibel, P. (2014–2023). Nebula/Atmoscape. Exhibitions have included: 2023: ‘Viscera’, Recontemporary, Turin, Italy. 2019: ‘SIGGRAPH Asia’, Gallery of Contemporary Art, Brisbane. 2018: ‘Art Abu Dhabi’, UAE; ‘SIGGRAPH Asia’, Tokyo International Forum, Japan; ‘Visibility Matrix’, Douglas Hyde Gallery, Dublin & Void Gallery, Derry, Ireland; ‘Art Cologne’, Germany; ‘Le Printemps de Septembre’, La Fondation Espace Écureuil, Toulouse, France. 2017: ‘Art of Immersion’, ZKM Media Museum, Karlsruhe, Germany. 2016: ‘Sydney Film Festival’; ‘Art Cologne’, Germany; Galerie Marion Scharmann & Laskowski, Cologne, Germany. 2015: ‘GLOBALE’, ZKM, Karlsruhe, Germany.
Del Favero, D., Thurow, S., Frohne, U., Moinuddin, K., Sharma, A., & Song, Y. (2023). Re-imagining the climate emergency using AI visualisation. In RE:SOURCE—10th international conference on the histories of media art, science & technology. Università Ca’ Foscari Venice. September 15.
Deleuze, G. (1995). Negotiations 1972–1990. Columbia University Press.
Fonnet, A., & Prié, Y. (2021). Survey of immersive analytics. IEEE Transactions on Visualization & Computer Graphics, 27(3), 2101–2122.
Frohne, U. (2023). Night drifts above the Earth. In I. P. di Persano (Ed.), Dennis Del Favero. Viscera (pp. 28–31). Recontemporary.
Grosz, E. (2001). Architecture from the outside. MIT.
Havenith, H.-B., Cerfontaine, P., & Mreyen, A.-S. (2019). How virtual reality can help visualise and assess geohazards. International Journal of Digital Earth, 12(2), 173–189.
Hazeleger, W., van den Hurk, B. J. J. M., Min, E., van Oldenborgh, G. J., Petersen, A. C., Stainforth, D. A., Vasileiadou, E., & Smith, L. A. (2015). Tales of future weather. Nature Climate Change, 5(2), 107–113.
Heading, S., & Zahidi, S. (2023). Global risks report. World Economic Forum.
Jasanoff, S. (2010). A new climate for society. Theory, Culture & Society, 27(2–3), 233–253.
Latour, B. (2018). Down to earth. Polity.
Lazo, J., Bostrom, A., Morss, E., Demuth, J., & Lazrus, H. (2015). Factors affecting hurricane evacuation intentions. Risk Analysis, 35(10), 1837–1857.
Lempert, R. (2013). Scenarios that illuminate vulnerabilities and robust responses. Climatic Change, 117, 627–646.
Manovich, L. (2001). Post-media aesthetics. In D. Del Favero & J. Shaw (Eds.), disLOCATIONS. ZKM.
Pedram, S., Perez, P., & Palmisano, S. (2014). Evaluating the influence of virtual reality-based training on workers’ competencies in the mining industry. In A. G. Bruzzone, F. De Felice, M. Massei, Y. Merkuryev, A. Solis, & G. Zacharewicz (Eds.), 13th international conference on modeling and applied simulation (pp. 60–64). Curran.
Productivity Commission. (2014). Natural disaster funding arrangements (Inquiry report no. 74). Canberra.
Roelstraete, D., Mainetti, M., & Mattiacci, C. (2023). Everybody talks about the weather. Fondazione Prada.
Serres, M. (2000). The birth of physics. Clinamen.
Shaw, J., & Del Favero, D. (2010–ongoing). iCASTS. Exhibited at: Shenyang Research Institute, Fushun, 2012; APPEA Conference, Convention & Exhibition Centre, Brisbane, 2010; Mines Rescue Pty Ltd Australian facilities in Woonona, Argenton, Lithgow & Singleton, 2008–ongoing; SimTech Conference, Convention Centre, Melbourne, 2008.
Shepherd, T., & Truong, C. H. (2023). Storylining climes. In D. Yu & J. Wouters (Eds.), Storying multipolar climes of the Himalaya, Andes and Arctic (pp. 157–183). Routledge.
Shepherd, T., Boyd, E., Calel, R., Chapman, S. C., et al. (2018). Storylines: An alternative approach to representing uncertainty in physical aspects of climate change. Climatic Change, 151, 555.
Sheppard, S. (2005). Landscape visualisation and climate change: The potential for influencing perceptions and behaviour. Environmental Science & Policy, 8, 637–654.
Slatyer, J., Harmer, P., Callaghan, J., Ronnenberg, R., O’Sullivan, P., & Hartzer, B. (2017). Building resilience to natural disasters in our states and territories. Deloitte.
Smith, T. (2015). World picturing in contemporary art: The iconogeographic turn. Australian and New Zealand Journal of Art, 7(1), 24–46.
Soga, K., Comfort, L., Zhao, B., Lorusso, P., & Soysal, S. (2021). Integrating traffic network analysis and communication network analysis at a regional scale to support more efficient evacuation in response to a wildfire event. UC Office of the President: University of California Institute of Transportation Studies.
Stevens, B., Adami, S., Ali, T., Anzt, H., et al. (2023). Earth virtualization engines (EVE). Earth System Science Data Discussions, 1–14.
Suh, A., & Prophet, J. (2018). The state of immersive technology research: A literature analysis. Computers in Human Behavior, 86, 77–90.
Weibel, P. (2002). Narrated theory: Multiple projection and multiple narration. In M. Rieser & A. Zapp (Eds.), New screen media: Cinema/art/narrative. BFI & ZKM.
Willcock, S., Cooper, G., Addy, J., & Dearing, J. (2023). Earlier collapse of Anthropocene ecosystems driven by multiple faster and noisier drivers. Nature Sustainability, 6, 1331–1342.
Willemen, P. (2002). Reflections on digital imagery: Of mice and men. In M. Rieser & A. Zapp (Eds.), New screen media: Cinema/art/narrative. BFI & ZKM.
Metadata
Title: Reimagining Extreme Event Scenarios: The Aesthetic Visualisation of Climate Uncertainty to Enhance Preparedness
Authors: Dennis Del Favero, Susanne Thurow, Maurice Pagnucco, Ursula Frohne
Copyright year: 2024
DOI: https://doi.org/10.1007/978-3-031-56114-6_2
