Research Projects: HybridReality


Project Name: Hybrid Reality
Team Members: Zhiyu He, Jason Kimball, Mark Antonijuan Tresens, and Falko Kuester
Project Sponsor: National Science Foundation (NSF), National Institutes of Health (NIH)


Overview

The pervasive nature of imaging techniques such as MRI, fMRI, CT, OCT, and PET has provided doctors in many different disciplines with a means to acquire high-resolution biomedical images. These data can serve as the foundation for the diagnosis and treatment of disease. However, experts from a multitude of backgrounds, including radiologists, surgeons, and anatomists, now contribute to the thorough analysis of the available data and collaborate on the diagnosis and the development of a treatment plan. Unfortunately, access to these specialists at the same physical location is not always possible, and new tools and techniques are required to facilitate the simultaneous, collaborative exploration of volumetric data by spatially separated domain experts. Our HybridReality research is aimed at the development of a framework for the collaborative visualization of volume data sets, supporting (i) heterogeneous computational platforms, ranging from high-end graphics workstations to low-performance TabletPCs, and (ii) heterogeneous network configurations, ranging from wireless networks to dedicated gigabit links.
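The page does not describe how node capabilities are modeled; the following is a minimal hypothetical sketch, in Python, of the kind of per-node descriptor such a framework might collect when a participant joins a session, so that rendering can adapt to platforms as different as a graphics workstation and a TabletPC. All class, field, and threshold names below are illustrative assumptions, not part of the HybridReality code base.

    # Hypothetical sketch: a per-node capability descriptor for adapting
    # rendering to heterogeneous platforms and networks. Names and numbers
    # are illustrative assumptions, not the HybridReality implementation.
    from dataclasses import dataclass

    @dataclass
    class NodeCapabilities:
        node_id: str
        gpu_memory_mb: int        # texture memory available for volume data
        fill_rate_mvoxels: float  # rough volume-rendering throughput (Mvoxels/s)
        bandwidth_mbps: float     # measured link bandwidth to the session host

        def can_render_locally(self, volume_size_mb: int) -> bool:
            # A TabletPC-class node with little GPU memory fails this check
            # and would instead receive server-rendered images.
            return self.gpu_memory_mb >= volume_size_mb

    # Example: a workstation and a TabletPC joining the same session.
    workstation = NodeCapabilities("ws-01", 8192, 500.0, 1000.0)
    tablet = NodeCapabilities("tab-07", 128, 20.0, 11.0)

    for node in (workstation, tablet):
        mode = ("local volume rendering" if node.can_render_locally(512)
                else "remote rendering with image streaming")
        print(f"{node.node_id}: {mode}")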

HybridReality provides users with tools for data visualization and annotation, together with middleware for the real-time exchange of the resulting visuals. It offers co-located 2D and 3D views of the underlying data and enables collaborative information exchange via modes such as sketching and annotation. For biomedical data, the 2D view currently provides a user-specifiable high-resolution image slice, while the 3D view provides a volume-rendered visual of the entire data set. Users can augment the 2D and 3D rendered visuals with additional information and share it in real time between multiple networked render nodes. Depending on the capabilities of the computational platforms and graphics pipes, rendering large-scale volumetric data can be a very costly operation. To address this, our renderer and middleware adapt to quality-of-service requirements such as image quality and frame rate: given a particular constraint, the system may choose to render at one central location or to distribute the task over multiple render nodes, as sketched below.
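As a rough illustration of that placement decision, the following sketch chooses between central rendering, distributed rendering, and quality degradation based on an estimated per-frame rendering cost and the participating nodes' throughput. The decision rule and all parameter names are assumptions for illustration, not the project's actual middleware logic.

    # Hypothetical sketch of QoS-driven render-task placement: given each
    # node's estimated throughput (Mvoxels/s), a target frame rate, and a
    # quality (down-sampling) factor, choose where to render. The decision
    # rule is an illustrative assumption, not the HybridReality middleware.
    def choose_render_strategy(fill_rates_mvoxels, target_fps,
                               image_scale, volume_mvoxels):
        cost = volume_mvoxels * image_scale   # per-frame work after scaling
        central = max(fill_rates_mvoxels)     # fastest single node renders
        pooled = sum(fill_rates_mvoxels)      # volume bricked across nodes

        if central / cost >= target_fps:
            return "central"        # one node renders, others receive images
        if pooled / cost >= target_fps:
            return "distributed"    # split the task over multiple render nodes
        return "reduce-quality"     # no placement meets the constraint

    # Example: a workstation (500 Mvoxels/s) and a TabletPC (20 Mvoxels/s)
    # sharing a 16-Mvoxel volume rendered at half resolution, 15 fps target.
    print(choose_render_strategy([500.0, 20.0], target_fps=15.0,
                                 image_scale=0.5, volume_mvoxels=16.0))

Under a tighter constraint (a larger volume, a higher target frame rate), the same rule would fall back to distributing the task or lowering image quality, mirroring the trade-off described above.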

Acknowledgments

This research is supported in part by the National Science Foundation under Grant Number EIA-0203528 and the California Institute for Telecommunications and Information Technology (Calit2). The above support is greatly appreciated.

Publications

  • He, Z., Kimball, J., and Kuester, F. (2005). Distributed and collaborative biomedical data exploration. Lecture Notes in Computer Science, 3804:271–278.
  • Tresens, M. A. and Kuester, F. (2004). Hybrid-Reality: Collaborative biomedical data exploration exploiting 2-D and 3-D correspondence. In Medicine Meets Virtual Reality 12. Building a Better You: The Next Tools for Medical Education, Diagnosis and Care, volume 98 of Studies in Health Technology and Informatics, pages 22–24. IOS Press.

Copyright

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by the author's copyright. This work may not be reposted without the explicit permission of the copyright holder.

