Human-Computer Interaction has come a long way recently. The increased availability of mobile and multimodal devices has spurred the exploration of innovative interaction modalities. Yet, despite their widespread use and exploration in other fields, these modalities have been largely ignored by Interactive Visualization.
Embracing them is critical to realize the benefits of richer interaction scenarios, greater flexibility across varied circumstances, and a wider communication bandwidth between the user and the application.
Key concerns here include choosing an interaction modality, supporting adaptability (e.g., presenting data in multiple ways depending on the hardware or surroundings), and combining modalities.
It is also important to investigate how the wide variety of devices (smart TVs, tablets, and smartphones) can support Visualization: individually, by providing different views tailored to each device; simultaneously, by offering various (complementary) views of the same dataset that nurture a richer interaction experience; or as the basis for collaborative work.
A Case Study on Reachability of Remote Areas for Emergency Management
When a hazard event occurs, the accessibility of impacted locations is critical in determining whether the situation becomes a disaster.
Decision-makers must act swiftly and under duress to complete activities that rely on the road network, such as managing relief operations, organizing evacuation routes, or distributing food and first aid.
This study describes a method for visualizing disaster recovery and assessing the reachability of remote regions using an interactive tabletop and tablets.
We propose a straightforward method for combining and visualizing data from scientists and organizations to understand area reachability and the expected impact of future hazard events on entry points.
Additionally, our interface presents a method for evaluating alternate access routes to isolated towns via helicopter or off-road paths, based on satellite data and collaborative mapping.
This collection of visualization and interaction tools enables the creation of risk scenarios that support planning, preparation, and response for risk-related tasks. We began our investigation with a case study of a Colombian area threatened by landslides.
This section provides a high-level overview of the architectural considerations involved in supporting multimodal multi-device interaction, covering the key components of the chosen multimodal architecture and briefly detailing the multi-device approach developed.
The W3C standard for multimodal frameworks is divided into four modules:
- The interaction manager (IM), which receives all event messages and initiates actions.
- The data model, which stores the IM's information.
- The input and output modalities, which capture user interaction events or present information to the user.
- The runtime framework, which mediates the interaction between the modules and provides the services needed to run multimodal architectures.
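The event flow among these modules can be illustrated with a minimal sketch. The class and event names below (`InteractionManager`, `ModalityEvent`, `dispatch`) are hypothetical, loosely inspired by the W3C life-cycle event model rather than taken from any concrete implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModalityEvent:
    """Hypothetical event message sent by an input modality to the IM."""
    source: str    # modality component that emitted the event (e.g., "touch")
    name: str      # event name (e.g., "select")
    payload: dict  # event-specific data

class InteractionManager:
    """Receives all event messages and initiates actions (sketch only)."""

    def __init__(self):
        self.data_model: dict = {}  # the data model storing the IM's information
        self._handlers: dict[str, Callable] = {}

    def register(self, event_name: str, handler: Callable) -> None:
        # Bind an action to an event name.
        self._handlers[event_name] = handler

    def dispatch(self, event: ModalityEvent) -> None:
        # Route an incoming modality event to its registered action.
        handler = self._handlers.get(event.name)
        if handler:
            handler(self.data_model, event)

# Example: a touch modality reports a selection; the IM records it in the data model.
im = InteractionManager()
im.register("select", lambda model, e: model.update(selected=e.payload["id"]))
im.dispatch(ModalityEvent(source="touch", name="select", payload={"id": "town_17"}))
print(im.data_model)  # {'selected': 'town_17'}
```

In a full architecture, the runtime framework would carry these messages between distributed modality components and the IM; here a direct method call stands in for that transport.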
A proof-of-concept application demonstrates how the visualization modality works: users can interact with the same data and entities simultaneously, even across two or more devices used together, while selecting their preferred representation for the device in use.
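The synchronization idea behind this proof of concept can be sketched as follows. The `SharedVisualization` class and its methods are hypothetical illustrations, not the application's actual API: each device joins a shared session with its own preferred representation, and a selection made on any device is reflected on all of them:

```python
class SharedVisualization:
    """Hypothetical sketch: devices share one dataset but render it differently."""

    def __init__(self, dataset: str):
        self.dataset = dataset
        self.devices: dict[str, str] = {}  # device id -> preferred representation

    def join(self, device_id: str, representation: str) -> None:
        # A device enters the session and declares how it wants to show the data.
        self.devices[device_id] = representation

    def select(self, item: str) -> dict[str, str]:
        # A selection is broadcast: every device shows the same entity in its own way.
        return {dev: f"{rep} view of {item}" for dev, rep in self.devices.items()}

# Example: a tabletop shows a map while a tablet shows a chart of the same entity.
viz = SharedVisualization(dataset="road_network")
viz.join("tabletop", "map")
viz.join("tablet-1", "bar chart")
views = viz.select("town_17")
print(views)  # {'tabletop': 'map view of town_17', 'tablet-1': 'bar chart view of town_17'}
```

The design choice sketched here is that representation preferences live per device while the selection state is global, which is what allows distinct, complementary views of the same selected entity.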
A preliminary evaluation of the application prototype was conducted to ascertain users’ overall perceptions of the provided features (e.g., diverse representations and synchronous functionality between devices, possibly using distinct representations for each device), yielding positive results and suggestions for future work.