Research Projects


Display Systems and Collaborative Digital Workspaces (Cyberinfrastructure)

HIPerVerse
The vision of the HIPerVerse project is to develop freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. The first HIPerVerse prototype was demonstrated during the CalREN-XD / High Performance Research Workshop held at the University of California, San Diego on September 15-16, 2008, achieving a resolution of 507,904,000 pixels (roughly half a gigapixel) by combining Calit2's HIPerSpace and HIPerWall ultra-high-resolution displays into a single distributed visualization environment. (Read More)

HIPerSpace
The Highly Interactive Parallelized Display Space (HIPerSpace) project is brought to you by the creators of HIPerWall. HIPerSpace is the next-generation concept for ultra-high-resolution distributed display systems that can scale into the billions of pixels, providing unprecedented high-capacity visualization capabilities to experimental and theoretical researchers. HIPerSpace has held the distinction of "World's Highest Resolution Display" since it was first introduced in 2006, taking the top spot previously held by HIPerWall since 2005. HIPerSpace is powered by CGLX, our cluster graphics library, enabling the development of massively scalable visualizations. (Read More)

HIPerWall
The Highly Interactive Parallelized Display Wall (HIPerWall) project provides unprecedented high-capacity visualization capabilities to experimental and theoretical researchers. Fifty 30-inch Apple Cinema Displays driven by 25 Apple Mac G5 workstations yield a total display resolution of 200 megapixels, doubling the previous 100-megapixel world record (established just one day earlier) in the spring of 2005. (Read More)

HIPerWall3D
The HIPerWall3D Project is developing an immersive, room-scale, collaborative digital workspace. Particular focus is on multi-user interaction and data exploration in virtual environments. (Read More)

A Cross-Platform Cluster Graphics Library (CGLX) for Scalable Display Environments
CGLX (Cross-Platform Cluster Graphics Library) is a flexible, high-performance OpenGL-based graphics framework for the development of distributed high-performance visualization systems such as OptIPortals. CGLX allows OpenGL programs to transparently scale across display clusters and fully leverage the available resources, while maximizing the achievable performance and resolution of such systems. (Read More)

CGLX
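The transparent scaling described above rests on each render node deriving its own view volume from a single global camera. A minimal sketch of that per-tile computation, with function and parameter names that are illustrative rather than the actual CGLX API:

```python
def tile_frustum(global_l, global_r, global_b, global_t,
                 cols, rows, col, row):
    """Sub-frustum (left, right, bottom, top) for tile (col, row)
    in a cols x rows display wall, given the global frustum planes.
    Illustrative sketch only, not the actual CGLX interface."""
    w = (global_r - global_l) / cols
    h = (global_t - global_b) / rows
    left = global_l + col * w
    bottom = global_b + row * h
    return (left, left + w, bottom, bottom + h)

# Bottom-left tile of a 4x2 wall splitting the frustum [-2, 2] x [-1, 1]:
print(tile_frustum(-2.0, 2.0, -1.0, 1.0, 4, 2, 0, 0))
# → (-2.0, -1.0, -1.0, 0.0)
```

Because each tile renders the same scene through its own asymmetric sub-frustum, an unmodified OpenGL program can be stretched across the wall without knowing the wall exists.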

TileViewer
TileViewer is a cross-platform framework supporting the visualization of large-scale image and multi-dimensional data on massively tiled display walls. TileViewer is being developed as a proof-of-concept system exploring the technical challenges of grid-centric data analysis. Inherited from a base visualization object class, different visualization objects are supported, including 2D static image viewing, video streaming, and 3D visualization. Each visualization object can be arbitrarily transformed (translated, scaled and, for 3D objects, rotated) without being constrained by individual display tile boundaries; TileViewer manages the data distribution and synchronization between compute, render and display nodes. TileViewer has been superseded by CGLX. (Read More)

HIPerScene
HIPerScene provides a distributed scene graph that allows data to be located anywhere on the grid and streamed between visualization nodes. Particular focus is on real-time data distribution on large-area display walls such as HIPerSpace and HIPerWall. (Read More)

HD Teleconferencing and Video Mobility
This project aims at developing middleware for the acquisition, streaming and presentation of HD video and audio sources. It ties together video capture hardware with a real-time texture compression library and a multicast streaming protocol to deliver multiple HD-resolution AV streams over gigabit networks, with support for mobility, which is particularly useful for tiled displays. Input sources include HD video cameras, laptop/desktop machines and even video game consoles. (Read More)

A 720p60 HD video stream displayed on HIPerSpace that is responsive enough to play video games.
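The bandwidth argument for compressing before streaming can be checked with simple arithmetic. The sketch below assumes a DXT1-style rate of 4 bits per pixel, which is an assumption for illustration, not a confirmed detail of this project:

```python
def stream_rate_mbps(width, height, fps, bits_per_pixel):
    """Raw video bandwidth of one stream in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

raw  = stream_rate_mbps(1280, 720, 60, 24)  # uncompressed 720p60 RGB
dxt1 = stream_rate_mbps(1280, 720, 60, 4)   # DXT1-style 4 bpp (assumed)

print(round(raw))   # → 1327  (a single raw stream exceeds gigabit)
print(round(dxt1))  # → 221   (roughly four compressed streams fit)
```

This is why real-time texture compression, rather than raw pixel transport, makes multiple simultaneous HD streams feasible on a gigabit network.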

VideoBlaster
VideoBlaster is a distributed video player incorporated into CGLX. VideoBlaster synchronizes video frames between all nodes and uses audio timing to control the video release rate. It plays a wide variety of formats, such as MPEG-4, H.264 and MOV, and can also play network streams, allowing media such as DVDs to be streamed uncompressed to tiled displays. (Read More)
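The audio-driven release-rate idea, letting the audio clock decide when video frames are shown, can be sketched as follows. This is a hypothetical simplification, not VideoBlaster's actual code:

```python
def frames_due(audio_clock_s, fps, frames_released):
    """Number of video frames to release now so that playback tracks
    the audio clock. Sketch of the audio-driven release-rate idea;
    names are illustrative, not VideoBlaster's actual API."""
    target = int(audio_clock_s * fps)
    return max(0, target - frames_released)

# At audio time 1.0 s of a 30 fps video with 28 frames shown, release 2 more:
print(frames_due(1.0, 30, 28))  # → 2
```

Slaving video release to the audio clock keeps lip sync intact even when individual render nodes momentarily fall behind.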

Collaborative Visualization Environments (CVEs)
Imaging techniques such as MRI, fMRI, CT and PET have provided physicians and researchers with a means to acquire high-quality biomedical images as the foundation for the diagnosis and treatment of diseases. This research presents a framework for the collaborative visualization of biomedical datasets, supporting heterogeneous computational platforms and network configurations. The system provides the user with data visualization, annotation and the middleware to exchange the resulting visuals between all participants in real time. A resulting 2D visual provides a user-specifiable high-resolution image slice, while a resulting 3D visual provides insight into the entire dataset. To address the costly rendering of large-scale volumetric data, the visualization engine can distribute tasks over multiple render nodes. (Read More)

A picture of the CVE program with a human head CT scan.
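The user-specifiable 2D slice visual amounts to index arithmetic over the volume's memory layout. A minimal sketch, assuming an x-fastest flat array; the system's actual data layout is not documented here:

```python
def axial_slice(volume, nx, ny, nz, z):
    """Extract the z-th axial slice from a flat, x-fastest volume array
    (voxel (x, y, z) lives at index z*nx*ny + y*nx + x).
    Illustrative sketch of the 2D slice visual described above."""
    assert 0 <= z < nz
    start = z * nx * ny
    return volume[start:start + nx * ny]

# A 2x2x2 volume: slice 1 is the last four voxels.
vol = [0, 1, 2, 3, 4, 5, 6, 7]
print(axial_slice(vol, 2, 2, 2, 1))  # → [4, 5, 6, 7]
```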

VizION: A Rapid Prototyping Framework for Collaborative Digital Workspaces
The rise of ubiquitous computing environments creates the need for middleware systems that facilitate spontaneous, reliable, and efficient communication between many different devices of varying complexity. By their nature, these environments have to manage components and elements entering and leaving the system unpredictably. As appliances grow in number and complexity, the middleware needs to adapt and remain responsive even in adverse situations. VizION is a middleware framework, specifically designed for ubiquitous collaborative educational spaces such as VizClass. Its decentralized approach helps avoid performance bottlenecks and potential points of failure that would paralyze the entire environment. VizION presents a simple, unified platform to the application developer, hiding many of the complexities of distributed environments, and thus allowing ease of system extension. (Read More)

GateKeeper: Authentication for Collaborative Digital Workspaces
GateKeeper is the authentication system for the VizClass project, built on the VizION distributed system. It supports multiple authentication inputs and multiple authentications at a single point, allowing for n-form authentication. Currently supported mechanisms are text login, USB-stick login and biometric login. The system runs on a MySQL database backend. (Read More)

GateKeeper
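The n-form idea, requiring some number n of the available mechanisms to succeed before access is granted, can be sketched as follows (hypothetical names, not GateKeeper's actual interface):

```python
def authenticate(results, n_required):
    """Grant access when at least n_required independent factors succeed.
    `results` maps mechanism name -> bool outcome.
    Hypothetical sketch of n-form authentication, not GateKeeper's API."""
    passed = sum(1 for ok in results.values() if ok)
    return passed >= n_required

# Two of three mechanisms succeed, and the policy requires two:
checks = {"password": True, "usb_token": True, "fingerprint": False}
print(authenticate(checks, 2))  # → True
```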

Computer Graphics and Visualization

Visualization for Neuroscience: LOD-managed Tuboid Rendering of White Matter Tracts
In collaboration with James H. Fallon, a distinguished neuroscientist, we developed a high-performance system for the visualization of labeled white matter tracts extracted from Diffusion Tensor MRI. The system incorporated several novel GPU-based techniques to enable high-visual-quality interactive exploration of large numbers of tracts on commodity hardware. Check out the background image on the IEEE Visualization 2008, InfoVis, and VAST webpages and you will see our research prominently featured. (Read More)

Brain Circuit Visualization

Visualization of Massive Point Datasets
Three-dimensional scanning technologies make it possible to capture massive point datasets of culturally significant spaces. These datasets provide a thorough record of the spaces' appearance and geometry, a record that traditionally would have required the taking of numerous photographs and meticulous measurements. However, while the technology for capturing such a record is already reasonably mature, the technology for making it useful is not. To address this, we are developing a point visualization platform that can render datasets with hundreds of millions of points with minimal preprocessing on modest hardware, while maintaining high visual quality. (Read More)

Point-Based Rendering

Leonardo da Vinci's Lost Mural: The Battle of Anghiari
The Battle of Anghiari disappeared nearly 500 years ago when the Hall of the 500 in the Palazzo Vecchio was remodeled by Giorgio Vasari, starting in 1563. But was "The Battle of Anghiari" destroyed? Did Vasari protect it behind his own new mural? And if the da Vinci masterpiece remained in place, did it crumble, or has it survived to this day? Our group is working on advanced imaging and visualization techniques to answer these questions. (Read More)

The Battle of Anghiari

Blue Marble: A Method for Visualizing Giga-Pixel Layered Imagery for Tiled Displays
This project investigates techniques for the interactive visualization and interrogation of multi-spectral giga-pixel imagery. Co-registered image layers representing discrete spectral wavelengths and temporal information can be seamlessly displayed and fused. Users can freely pan and zoom while browsing through the data layers, enabling intuitive analysis of the massive, time-varying, multi-spectral records. A data-resource-aware display paradigm is introduced that progressively and adaptively loads data from remote storage as it is needed, at the level of resolution supported by the display device in use. This results in a highly scalable approach, equally suited for a laptop or an ultra-high-resolution display system operating at hundreds of megapixels. (Read More)

Users viewing a time-varying dataset of NASA's Blue Marble Next Generation
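The resolution-aware loading step reduces to picking the coarsest image-pyramid level that still delivers at least one data pixel per screen pixel. A minimal sketch of that selection; the names are illustrative, not the project's actual API:

```python
import math

def pyramid_level(image_px, screen_px, max_level):
    """Coarsest pyramid level that still supplies >= 1 data pixel per
    screen pixel (level 0 = full resolution; each level halves the size).
    Illustrative sketch of the data-resource-aware loading idea."""
    if screen_px >= image_px:
        return 0  # display can show full resolution
    level = int(math.floor(math.log2(image_px / screen_px)))
    return min(level, max_level)

# An 86400-pixel-wide layer shown across a 5400-pixel span of the wall:
print(pyramid_level(86400, 5400, 10))  # → 4
```

Fetching level 4 instead of level 0 cuts the transferred data by a factor of 256, which is what makes the same dataset browsable on a laptop and on a display wall alike.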


Wipe-Off: Ultra-High Resolution Data Manipulation
This project investigates visualization and interaction techniques for the interrogation of multispectral data sets, in particular, in the field of cultural forensics. The data for this project is gathered by scanning artwork using several different imaging modalities, each one useful for identifying different characteristics. Since this data ends up being extremely detailed, clever techniques are needed to seamlessly display and manipulate it. (Read More)

A view of multi-spectral layers wiped away from the St George data set


Tera-Scale Atomistic Visualization (2004-2008)
We are developing visualization algorithms for real-time exploration of massive, time-varying point-cloud datasets produced by atomistic simulations. Challenges include developing level-of-detail algorithms for unstructured point data and representing sub-pixel features such as occlusion and intersections. The massive size of these datasets (gigabytes per timestep) makes simply loading and rendering an image a challenging task. Our algorithms target efficient exploration of tera-scale datasets which contain millions to billions of objects as well as thousands of timesteps. This project is in collaboration with Mark Duchaineau of Lawrence Livermore National Laboratory. (Read More)

Image courtesy Mark Duchaineau, LLNL

Dynamic IBR techniques for fixed cost stereoscopic support
This project investigates a GPU-based implementation of an image-based rendering method that reduces the cost of stereoscopic rendering to the cost of rendering a single monoscopic image plus a small fixed cost. Our approach is to use the color and depth information from the rendered image of one eye to produce a reconstructed depthsprite, which is rendered for the other eye. A GPU-accelerated technique for producing and rendering this depthsprite at rates above 60 Hz is presented. Our technique enables the real-time stereoscopic display of complex and data-intensive objects which are currently constrained to monoscopic rendering technology. (Read More)

3D anaglyph rendered using fixed cost algorithm
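The heart of the depthsprite reprojection is a per-pixel horizontal shift determined by depth: for parallel cameras, disparity = focal length (in pixels) times eye separation divided by depth. A worked sketch with hypothetical parameter names:

```python
def reprojected_x(x_left, depth, eye_separation, focal_px):
    """Horizontal pixel position in the right-eye image of a left-eye
    pixel at column x_left with scene depth `depth` (parallel cameras):
    disparity = focal_px * eye_separation / depth.
    Illustrative of the depthsprite idea; names are hypothetical."""
    disparity = focal_px * eye_separation / depth
    return x_left - disparity

# A pixel 10 m away, with a 0.065 m eye separation and a 1000 px focal
# length, shifts by 6.5 px between the eyes:
print(reprojected_x(500.0, 10.0, 0.065, 1000.0))  # → 493.5
```

Because distant pixels shift less than near ones, the shift is cheap to evaluate per pixel, which is why the second eye costs far less than a full scene render.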

The Earth and Planetary System Science Game Engine (EPSS-GE)
The widespread use of on-line computer games makes this medium a valuable vehicle for information sharing, while scalability facilitates global collaboration between players in the game space. Game engines generally provide an intuitive interface, allowing attention to be shifted to the understanding of scientific elements rather than hiding them behind a wealth of menus and other counterintuitive user interfaces. These strengths are applied towards promoting the understanding of planetary systems and climate change. Unconventional interaction and visualization techniques are introduced as a method to experience geophysical environments. Players are provided with dynamic visualization assets, which enable them to discover, interrogate and correlate scientific data in the game space. (Read More)

GPU Accelerated Iridescence
This project explores an interactive, physics-based, rendering technique for single and multi-layer iridescence models. A hardware accelerated approach is described that can render both models for the full visible spectrum at interactive rates. (Read More)

Morpho Butterfly
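The single-layer iridescence model boils down to two-beam thin-film interference: the optical path difference 2·n·d·cos(θ) sets a wavelength-dependent phase, so the reflected color shifts with viewing angle. A simplified sketch assuming equal beam amplitudes and no interface phase shift (the project's full model also handles multi-layer films):

```python
import math

def film_intensity(wavelength_nm, thickness_nm, n_film, cos_theta):
    """Normalized two-beam interference intensity for a single thin film:
    optical path difference = 2 * n * d * cos(theta),
    intensity = cos^2(phase / 2).
    Simplified sketch (equal amplitudes, no interface phase shift)."""
    opd = 2.0 * n_film * thickness_nm * cos_theta
    phase = 2.0 * math.pi * opd / wavelength_nm
    return math.cos(phase / 2.0) ** 2

# A 200 nm film with n = 1.4 viewed head-on reinforces 560 nm light
# (opd = 560 nm = exactly one wavelength):
print(round(film_intensity(560.0, 200.0, 1.4, 1.0), 3))  # → 1.0
```

Evaluating this per wavelength across the visible spectrum, per fragment, is the kind of workload the project maps onto the GPU.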

MediaViewer/TiffViewer
This project investigates visual analytics techniques for massive, ultra-high-resolution image collections. MediaViewer and TiffViewer support arbitrary numbers of image sources that are adaptively and progressively loaded as the user interacts with individual images or image collections. The prototypes of MediaViewer and TiffViewer are integrated as the default viewers of the CGLX library for multi-tile display (wall) systems. MediaViewer can display PNG and JPEG images and video streams, while TiffViewer supports very large, tiled pyramidal TIFF files. (Read More)

TiffViewer

cglXPaint
This project aims to create ultra-high-resolution digital canvases for the creation of dynamic art on digital wallpaper. Artists will be able to paint freely, yet manipulate content all the way down to the per-pixel level. A GPU-centric approach already allows instantaneous processing of massive amounts of data for creative expression. (Read More)

Digital graffiti

Virtual Reality, Augmented Reality and Mixed Reality

A Wearable Augmented Reality System for Training in Mission Critical Environments
The focus of this project is on developing a wearable augmented reality system for training in mission critical environments. Of particular interest are new algorithms, techniques and interfaces that allow the actions of multiple-users to be tracked in real-time, passed to a remote simulation platform, and the resulting visuals to be augmented on top of a physical training space consisting of multiple control consoles. (Read More)

Augmented Reality

Tangled Reality
Tangled Reality combines physical and computer-generated artifacts to create a mixed reality playspace. Users are able to fabricate virtual environments by simply drawing maps on standard index cards, which are then projected down onto the floor. This virtual world can then be traversed with a physical radio-controlled car, captured by an overhead camera that monitors its position and orientation. Pseudo-real-world physics are enacted on the car as it interacts with virtual agents and the virtual environment. Further, a first-person view from the driver's perspective inside the vehicle is projected onto a second screen hanging in front of the user. Tangled Reality, blurring the lines between the physical and the virtual, is a culmination of Kevin Ponto's research in the fields of machine vision, teleoperation, and mixed reality. (Read More)

Virtual Bounds
This project explores a mixed reality workspace that allows users to combine arbitrary physical and computer-generated artifacts, and to control and simulate them within one fused world. All interactions are captured, monitored, modeled and represented with quasi-real world physics. The objective of this research project is to create an environment in which the virtual world and physical world have a symbiotic relationship. In this type of system, virtual objects can impose forces on the physical world and physical world objects can impose forces on the virtual world. The proof-of-concept system was designed for a tele-operated virtual environment. (Read More)

Virtual Bounds

Sonic Panoramas
This project was motivated largely by two areas of inquiry. The first is in developing compositional techniques for real-time interactive sound environments, such as those required in immersive art and VR experiences. A second area of investigation in this work concerns the ways in which humans perceive, understand, and represent physical landscapes. The objective is to enrich a participant’s experience of space through sonic interpretations of visual landscapes, providing a multi-modal interface for data exploration. The user’s physical movement through the immersive projection space is tracked in real-time and used to generate a position-specific visual and auditory representation. (Read More)

Automatic Creation of Three-Dimensional Avatars
This project focuses on developing an avatar construction pipeline designed to use multiple standard video cameras for the creation of realistic three-dimensional avatars that can be included in interactive virtual environments. Multiple images of a human subject are used to automatically reshape a synthetic three-dimensional articulated reference model into a high-quality avatar. The pipeline under development combines software and hardware-accelerated stages into one seamless system. Primary stages in this pipeline include pose estimation, skeleton fitting, body part segmentation, geometry construction and texturing. Silhouette-based modification techniques in combination with a reference model remove traditional constraints in the initial pose of the captured subject. Results can be obtained in near-real time with very limited user intervention. (Read More)

Avatar

VirtualizeMe
Tele-Immersion environments have to provide face-to-face, viewpoint-corrected, stereoscopic visuals, allowing users to naturally interact with each other and the digital environment surrounding them via realistic avatars. The focus of this project is on extracting, modeling and communicating the avatar of a user needed for Tele-Immersion applications by introducing a stereo-video-stream-based 3D reconstruction system. The combination of several 2.5D reconstruction results (disparity maps) from different viewing angles results in a high-resolution extraction of avatar models with real-time characteristics. The point-cloud-based avatars are communicated to a remote collaboration location using lossless point-cloud compression algorithms. (Read More)

3D Reconstruction from Stereo Camera Pairs

Avatar Centric Risk Evaluation
Hazard detection and prevention of natural and man-made disasters at critical civil infrastructure is becoming increasingly important. Recent events, such as earthquakes and terrorist attacks, clearly demonstrated the societal and economic impact stemming from the limitations and uncertainty within currently deployed emergency response systems. The aim of this project is to develop a new data visualization and simulation platform that will facilitate risk detection, emergency response and assessment in hazardous situations. The platform is based on the acquisition, modeling and analysis of sensor data, to capture objects and temporal changes in the observed spaces. Avatars are acquired, inserted and tracked in a virtual environment, enabling the simulation of multiple perilous situations and assisting in determining possible risk mitigation strategies. (Read More)

Risk Evaluation

User Interface Research and Human Computer Interaction

KITTY: A New Interface for Keyboard Independent Touch Typing
KITTY is a new hand- or finger-mounted data input device designed for keyboard-independent touch typing, supporting traditional touch-typing skills as a method for alphanumeric data input. This glove-type device provides an ultra-portable solution for quiet data input into portable computer systems and full freedom of movement in mobile VR and AR environments. The KITTY design follows the concept of the column-and-row layout found on traditional keyboards, allowing users to draw on existing touch-typing skills and easing the required training time. (Read More)
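The column-and-row idea can be illustrated with a toy mapping in which each left-hand finger keeps its QWERTY column and the contact selects the row, so existing touch-typing motor memory carries over. This layout is a hypothetical illustration, not the device's actual encoding:

```python
# Hypothetical left-hand mapping modeled on QWERTY columns; the real
# KITTY encoding is not documented here.
LEFT_HAND = {
    ("pinky",  "top"): "q", ("pinky",  "home"): "a", ("pinky",  "bottom"): "z",
    ("ring",   "top"): "w", ("ring",   "home"): "s", ("ring",   "bottom"): "x",
    ("middle", "top"): "e", ("middle", "home"): "d", ("middle", "bottom"): "c",
    ("index",  "top"): "r", ("index",  "home"): "f", ("index",  "bottom"): "v",
}

def key_for(finger, row):
    """Character produced by a finger contact in the given row,
    or None for an unmapped combination."""
    return LEFT_HAND.get((finger, row))

print(key_for("middle", "home"))  # → d
```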



Image-Based Modeling and Rendering

3D Reconstruction from Aerial Images
This project is developing a 3D reconstruction pipeline for aerial imaging. The main focus is the error analysis of given correspondences and the 3D reconstruction given several image pairs. The combination of 3D reconstructions from different view angles is being analyzed and developed. This is a collaborative project with Lawrence Livermore National Laboratory (LLNL). (Read More)

Aerial image of Walnut Creek, CA

Image Meshing
We developed an optimized technique to construct mesh-based linear approximations to raster images, with potential applications in geometry extraction, image compression, and image interpretation. Our approach extended a previous Simulated Annealing method with the addition of a deterministic step and an efficient implementation framework. (Read More)

Image Meshing
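The Simulated Annealing idea behind the mesh approximation can be illustrated with a 1D toy: anneal the positions of interior knots of a piecewise-linear fit to a sampled signal, accepting uphill moves with a temperature-dependent (Metropolis) probability. This is an illustrative analogue only; the actual method operates on 2D triangle meshes over raster images:

```python
import math
import random

def linear_error(signal, knots):
    """Sum of squared errors of the piecewise-linear interpolant
    through the given knot indices (sorted, including both endpoints)."""
    err = 0.0
    for a, b in zip(knots, knots[1:]):
        for x in range(a, b + 1):
            t = (x - a) / (b - a)
            approx = signal[a] * (1 - t) + signal[b] * t
            err += (signal[x] - approx) ** 2
    return err

def anneal_knots(signal, n_interior, iters=2000, temp=1.0, cool=0.995, seed=0):
    """Toy 1D analogue of mesh-based image approximation: simulated
    annealing perturbs interior knot positions, accepting worse states
    with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    last = len(signal) - 1
    knots = [0] + sorted(rng.sample(range(1, last), n_interior)) + [last]
    err = linear_error(signal, knots)
    for _ in range(iters):
        i = rng.randrange(1, len(knots) - 1)          # pick an interior knot
        lo, hi = knots[i - 1] + 1, knots[i + 1] - 1   # legal positions
        if lo > hi:
            continue
        cand = knots[:]
        cand[i] = rng.randint(lo, hi)
        cand_err = linear_error(signal, cand)
        if cand_err < err or rng.random() < math.exp((err - cand_err) / temp):
            knots, err = cand, cand_err
        temp *= cool
    return knots, err

# A ramp that flattens at x = 8 is reproduced exactly by one knot there:
signal = [min(x, 8) for x in range(17)]
knots, err = anneal_knots(signal, 1)
print(knots, err)
```

The paper's deterministic step could be layered on top as a greedy local refinement of each knot after the stochastic phase, which is the flavor of extension described above.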

Structural & Geotechnical Engineering

Seacliff erosion studies using Terrestrial Laser Scanning
The seacliffs of San Diego County are composed of weakly cemented material that erodes through subaerial and wave-based processes. As the coastline is extremely important to the economy of Southern California, researchers at the Scripps Institution of Oceanography and in the Jacobs School of Engineering at UCSD perform response surveys immediately after a failure to accurately quantify the sediment transferred from the cliff to the beach. Through this work, new Terrestrial Laser Scanning (TLS) surveying and automated alignment techniques have been developed to efficiently map the coastline, because traditional methods were unsuccessful or time-intensive. New techniques and algorithms have also been developed to accurately quantify and visualize change, allowing researchers to understand the geologic processes in motion. (Read More)

3D point cloud acquired using terrestrial laser scanning
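One simple way to quantify change between two repeat scans is a cloud-to-cloud comparison: for each point in the later scan, the distance to its nearest neighbor in the earlier scan. A brute-force sketch of that idea (real surveys use spatial indexing and more robust change metrics):

```python
def cloud_to_cloud(before, after):
    """Per-point change estimate: for each point of the 'after' scan,
    Euclidean distance to its nearest neighbor in the 'before' scan.
    Brute-force illustrative sketch, not the project's actual algorithm."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(dist2(p, q) for q in before) ** 0.5 for p in after]

before = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
after  = [(0.0, 0.0, 0.0), (1.0, 0.0, -0.5)]   # second point moved 0.5 m down
print(cloud_to_cloud(before, after))  # → [0.0, 0.5]
```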


Simulation-Based Design / Integrative Design

An Agent-Based Computational Steering Framework for Radiative Heat Transfer
This project investigates a new approach for high-performance simulations of radiative heat transfer, combining a computational steering framework and a software agent system. This approach supports interactive modeling, simulation and visualization of complex behavior of radiation-induced heat transfer in complex geometries, including an interactive virtual optimization of the geometric design and boundary conditions. The modeling environment is based on Autodesk AutoCAD, which is extended through an ObjectARX plug-in to provide automatic 3D grid generation and a communication interface for the data exchange between the geometric modeler and the numerical kernel. All geometric objects, including attributed boundary conditions, are transferred automatically from the modeler to the simulation kernel. The visualization front-end uses CGLX to create a high-resolution, visual analytics interface on HIPerSpace. (Read More)

Co-Simulation