Although node-link representations of graphs are widespread and sometimes even preferred to other approaches, they suffer from obvious limitations when graphs become large or dense, inducing visual clutter and impeding the traditional visual information-seeking process. This article presents a new exploration strategy particularly suited to large, dense graphs. Users iteratively drive the exploration through the visualization of small sub-networks of interest. Our technique is especially useful with multilayer networks, whose layers typically combine into a large and dense network. Our iterative exploration process, called M-QuBE3, computes a score for each node of a graph based on structural and semantic information, so that nodes more interesting from the user's point of view receive higher scores. This in turn translates into a procedure for selecting sub-networks of interest. Within each sub-network, the user can select nodes to enrich the semantic context (and thus influence their interest scores) and iteratively refine the exploration towards more relevant sub-networks. The M-QuBE3 process natively handles multilayer networks and allows layers to be used as a semantic apparatus when driving the navigation.
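The score-and-extract loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual M-QuBE3 algorithm: the structural term is taken to be normalized degree, the semantic term to be layer overlap with user-selected seed nodes, and all names (`node_score`, `extract_subnetwork`, `alpha`, the toy graph) are hypothetical.

```python
def node_score(graph, node, seeds, alpha=0.5):
    """Combine a structural term (normalized degree) with a semantic term
    (layer overlap with user-selected seed nodes). Both terms are
    illustrative stand-ins for the paper's structural/semantic information."""
    structural = len(graph["adj"][node]) / max(1, graph["max_degree"])
    layers = graph["layers"][node]
    if seeds:
        shared = sum(len(layers & graph["layers"][s]) for s in seeds)
        semantic = shared / (len(seeds) * max(1, len(layers)))
    else:
        semantic = 0.0
    return alpha * structural + (1 - alpha) * semantic

def extract_subnetwork(graph, seeds, k=5):
    """Return the k highest-scoring nodes as the next sub-network of interest."""
    ranked = sorted(graph["adj"],
                    key=lambda n: node_score(graph, n, seeds),
                    reverse=True)
    return ranked[:k]

# Toy multilayer graph: adjacency sets plus the set of layers each node
# participates in.
graph = {
    "adj": {"a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"}},
    "layers": {"a": {1, 2}, "b": {1}, "c": {2}, "d": {2, 3}},
}
graph["max_degree"] = max(len(v) for v in graph["adj"].values())

# Iteration 1: no seeds yet, so the ranking is purely structural.
sub = extract_subnetwork(graph, seeds=set(), k=2)
# Iteration 2: the user selects node "d"; the semantic term now favors
# nodes sharing layers with it, refining the next sub-network.
sub2 = extract_subnetwork(graph, seeds={"d"}, k=2)
```

Each user selection feeds back into the semantic term, so successive extractions drift towards the regions of the graph the user has marked as interesting, which is the iterative refinement the abstract describes.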
Document Type: Research Article
Publication date: January 13, 2019
This article was made available online on January 13, 2019 as a Fast Track article with title: "M-QuBE³: Querying big multilayer graph by evolutive extraction and exploration".
For more than 30 years, the Electronic Imaging Symposium has been serving those in the broad community - from academia and industry - who work on imaging science and digital technologies. The breadth of the Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.
Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.