Virtual crowds in non-combatant environments play an important role in modern military operations and often create complications for the combat forces involved. To address this problem, we are developing a crowd simulation capable of generating crowds of non-combatant civilians that exhibit a variety of individual and group behaviors at different levels of fidelity. Commercial game technology is used to create an experimental setup that models an urban megacity environment and the physical behaviors of the human characters that make up the crowd. The main objective of this work is to verify the feasibility of designing a collaborative virtual environment (CVE) and its usability for training security agents to respond to emergency situations such as active-shooter events, bomb blasts, and fire and smoke. We present a hybrid (human-artificial) platform in which disaster-response experiments can be performed in the CVE by combining AI agents and user-controlled agents. AI agents are computer-controlled agents whose behaviors include hostile, non-hostile, leader-following, goal-following, selfish, and fuzzy agents. User-controlled agents are operated by human users in specific situational roles such as police officer, medic, firefighter, and SWAT officer. The novelty of our work lies in modeling behaviors for AI (computer-controlled) agents so that they can interact with user-controlled agents in an immersive training environment for emergency response and decision making. The hybrid platform aids in creating an experimental setup for studying human behavior in a megacity during emergency response, decision-making strategies, and what-if scenarios.
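The abstract names several behavior types for the computer-controlled crowd agents. As an illustration of how such a taxonomy might be organized, the following is a minimal sketch (not the authors' implementation; the `Agent` class, `Behavior` enum, and the fixed 10% step rate are assumptions for illustration) of per-tick movement driven by an agent's assigned behavior type:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Behavior(Enum):
    """Behavior types named in the abstract (hostile/non-hostile logic omitted)."""
    HOSTILE = auto()
    NON_HOSTILE = auto()
    LEADER_FOLLOWING = auto()
    GOAL_FOLLOWING = auto()
    SELFISH = auto()
    FUZZY = auto()

@dataclass
class Agent:
    name: str
    behavior: Behavior
    position: tuple = (0.0, 0.0)

    def step(self, leader_pos, goal_pos):
        """Advance one simulation tick according to the agent's behavior type."""
        x, y = self.position
        if self.behavior is Behavior.LEADER_FOLLOWING:
            target = leader_pos
        elif self.behavior is Behavior.GOAL_FOLLOWING:
            target = goal_pos
        elif self.behavior is Behavior.SELFISH:
            # Selfish agents flee: mirror the goal point to move away from it
            # (e.g., away from a blast site rather than toward an exit).
            target = (2 * x - goal_pos[0], 2 * y - goal_pos[1])
        else:
            target = self.position  # other behaviors left as stubs
        # Move 10% of the remaining distance toward the target each tick.
        self.position = (x + 0.1 * (target[0] - x), y + 0.1 * (target[1] - y))
```

In a full simulation, a fuzzy agent would blend several of these steering targets by weight rather than choosing a single one, which is one common way to realize the "fuzzy agents" category.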
Keywords: tactical training applications
Document Type: Research Article
Publication date: January 13, 2019