Sandbox

The Research Thumbnail Template

{{Template:ResearchThumbnail |
|name       =  
|image      =  Image:UCSD_Calit2_GRAVITY_yourprojectnamegoeshere.jpg
|caption    =  
|summary = 
|reference = 
}}

Scratch Material

Sandbox_tmp


Leonardo da Vinci's Lost Mural: The Battle of Anghiari
The Battle of Anghiari disappeared nearly 500 years ago when the Hall of the 500 in the Palazzo Vecchio was remodeled by Giorgio Vasari, starting in 1563. But was "The Battle of Anghiari" destroyed? Did Vasari protect it behind his own new mural? And if the da Vinci masterpiece remained in place, did it crumble, or has it survived to this day? Our group is working on advanced imaging and visualization techniques to answer these questions. (Read More)

The Battle of Anghiari

Tangled Reality
Tangled Reality combines physical and computer-generated artifacts to create a mixed reality playspace. Users can fabricate virtual environments by simply drawing maps on standard index cards, which are then projected down onto the floor. This virtual world can be traversed with a physical radio-controlled car, tracked by an overhead camera that monitors its position and orientation. Pseudo-real-world physics are applied to the car as it interacts with virtual agents and the virtual environment. Further, a first-person view from the driver's perspective inside the vehicle is projected onto a second screen hanging in front of the user. Tangled Reality, blurring the lines between the physical and the virtual, is the culmination of Kevin Ponto's research in the fields of machine vision, teleoperation, and mixed reality. (Read More)
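
The summary does not spell out how the camera tracking works, but the following minimal sketch illustrates the kind of pose recovery an overhead, downward-looking camera implies: two tracked markers on the car give its floor position and heading. All names and calibration constants are hypothetical, not Tangled Reality's actual implementation.

import math

# Hypothetical calibration: metres per camera pixel and the floor-space
# origin, assuming an overhead camera looking straight down at the playspace.
METERS_PER_PIXEL = 0.004
ORIGIN_PX = (320, 240)  # pixel that maps to floor coordinate (0, 0)

def car_pose_from_markers(front_px, rear_px):
    """Estimate the car's floor position and heading from the pixel
    coordinates of two tracked markers (front and rear of the chassis)."""
    cx = (front_px[0] + rear_px[0]) / 2.0
    cy = (front_px[1] + rear_px[1]) / 2.0
    # Convert the marker midpoint from pixels to floor coordinates.
    x = (cx - ORIGIN_PX[0]) * METERS_PER_PIXEL
    y = (cy - ORIGIN_PX[1]) * METERS_PER_PIXEL
    # Heading points from the rear marker toward the front marker.
    heading = math.atan2(front_px[1] - rear_px[1], front_px[0] - rear_px[0])
    return x, y, heading

# Example: markers detected at these pixel locations in one camera frame.
print(car_pose_from_markers((400, 300), (360, 300)))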

Virtual Bounds
This project explores a mixed reality workspace that allows users to combine arbitrary physical and computer-generated artifacts, and to control and simulate them within one fused world. All interactions are captured, monitored, modeled and represented with quasi-real world physics. The objective of this research project is to create an environment in which the virtual world and physical world have a symbiotic relationship. In this type of system, virtual objects can impose forces on the physical world and physical world objects can impose forces on the virtual world. The proof-of-concept system was designed for a tele-operated virtual environment. (Read More)

Virtual Bounds

Sonic Panoramas
This project was motivated largely by two areas of inquiry. The first is in developing compositional techniques for real-time interactive sound environments, such as those required in immersive art and VR experiences. A second area of investigation in this work concerns the ways in which humans perceive, understand, and represent physical landscapes. The objective is to enrich a participant’s experience of space through sonic interpretations of visual landscapes, providing a multi-modal interface for data exploration. The user’s physical movement through the immersive projection space is tracked in real-time and used to generate a position-specific visual and auditory representation. (Read More)

Automatic Creation of Three-Dimensional Avatars
This project focuses on developing an avatar construction pipeline designed to use multiple standard video cameras for the creation of realistic three-dimensional avatars that can be included into interactive virtual environments. Multiple images of a human subject are used to automatically reshape a synthetic three-dimensional articulated reference model into a high-quality avatar. The pipeline under development combines software and hardware-accelerated stages into one seamless system. Primary stages in this pipeline include pose estimation, skeleton fitting, body part segmentation, geometry construction and texturing. Silhouette-based modification techniques in combination with a reference model remove traditional constraints in the initial pose of the captured subject. Results can be obtained in near-real time with very limited user intervention. (Read More)
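
The summary names the pipeline stages but not their interfaces. The sketch below only illustrates how those stages might chain together; every function name and data shape is an assumption, not the project's actual API.

# Illustrative-only sketch of the avatar pipeline stages named above
# (pose estimation, skeleton fitting, segmentation, geometry, texturing).
def estimate_pose(silhouettes):
    """Recover a rough body pose from the multi-view silhouettes."""
    return {"joint_angles": [0.0] * 20}

def fit_skeleton(reference_model, pose):
    """Deform the articulated reference skeleton to the estimated pose."""
    return {"model": reference_model, "pose": pose}

def segment_body_parts(silhouettes, skeleton):
    """Assign silhouette regions to body parts of the fitted skeleton."""
    return {"parts": ["head", "torso", "arms", "legs"], "skeleton": skeleton}

def build_geometry(segments):
    """Reshape the reference mesh so it matches the segmented silhouettes."""
    return {"mesh": "reshaped reference mesh", "segments": segments}

def apply_textures(geometry, images):
    """Project the camera images onto the reshaped mesh as textures."""
    return {"avatar": geometry, "textures": len(images)}

def build_avatar(images, reference_model):
    silhouettes = [img for img in images]      # silhouette extraction omitted
    pose = estimate_pose(silhouettes)
    skeleton = fit_skeleton(reference_model, pose)
    segments = segment_body_parts(silhouettes, skeleton)
    geometry = build_geometry(segments)
    return apply_textures(geometry, images)

print(build_avatar(["cam0.png", "cam1.png", "cam2.png"], "reference.obj"))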

Avatar Centric Risk Evaluation
Hazard detection and prevention of natural and man-made disasters at critical civil infrastructure is becoming increasingly important. Recent events, such as earthquakes and terrorist attacks, have clearly demonstrated the societal and economic impact stemming from the limitations and uncertainty of currently deployed emergency response systems. The aim of this project is to develop a new data visualization and simulation platform that will facilitate risk detection, emergency response and assessment in hazardous situations. The platform is based on the acquisition, modeling and analysis of sensor data, to capture objects and temporal changes in the observed spaces. Avatars are acquired, inserted and tracked in a virtual environment, enabling the simulation of multiple perilous situations and assisting in determining possible risk mitigation strategies. (Read More)


Visualization Techniques

Virtual Reality, Augmented Reality and Mixed Reality

Display Systems and Digital Workspaces (Cyberinfrastructure)

The HIPerSpace Project
The Highly Interactive Parallelized Display Space project (HIPerSpace) is brought to you by the creators of HIPerWall. HIPerSpace is the next-generation concept for ultra-high-resolution distributed display systems that can scale into the billions of pixels, providing unprecedented high-capacity visualization capabilities to experimental and theoretical researchers. HIPerSpace has held the distinction of the "World's Highest Resolution Display" since it was first introduced in 2006, taking the top spot from HIPerWall, which had held it since 2005. (Read More)

The HIPerWall Project
The Highly Interactive Parallelized Display Wall project (HIPerWall) provides unprecedented high-capacity visualization capabilities to experimental and theoretical researchers. Fifty 30-inch Apple Cinema Displays driven by 25 Apple Mac G5 workstations yield a total display resolution of 200 megapixels, doubling the previous 100-megapixel world record (established only one day earlier) in the spring of 2005. (Read More)

HIPerWall
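
The 200-megapixel figure follows directly from the panel count and the native resolution of a 30-inch Apple Cinema Display (2560 x 1600 pixels):

# Total pixel count of the HIPerWall array.
displays = 50
width, height = 2560, 1600
total_pixels = displays * width * height
print(total_pixels)            # 204,800,000 pixels
print(total_pixels / 1e6)      # ~204.8 megapixels, i.e. roughly 200 MP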

HIPerWall3D
The HIPerWall3D Project is developing an immersive, room-scale, collaborative digital workspace. Particular focus is on multi-user interaction and data exploration in virtual environments. (Read More)

HIPerWall3D

User Interface Research and Human Computer Interaction

KITTY: A New Interface for Keyboard Independent Touch Typing
KITTY is a new hand- or finger-mounted data input device designed for keyboard-independent touch typing; it supports traditional touch-typing skills as a method for alphanumeric data input. This glove-type device provides an ultra-portable solution for quiet data input into portable computer systems and full freedom of movement in mobile VR and AR environments. The KITTY design follows the column and row layout found on traditional keyboards, allowing users to draw on existing touch-typing skills and shortening the required training time. (Read More)
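
As an illustration of the column-and-row idea (not KITTY's actual contact encoding, which is not described here), the sketch below maps a finger and row to the character that finger already types on a standard QWERTY layout; the index fingers' second columns are omitted for brevity.

# Illustrative mapping only: each finger keeps the characters it already
# types in its keyboard column; rows 0/1/2 are the top/home/bottom rows.
QWERTY_COLUMNS = {
    "left_pinky":   ["q", "a", "z"],
    "left_ring":    ["w", "s", "x"],
    "left_middle":  ["e", "d", "c"],
    "left_index":   ["r", "f", "v"],
    "right_index":  ["u", "j", "m"],
    "right_middle": ["i", "k", ","],
    "right_ring":   ["o", "l", "."],
    "right_pinky":  ["p", ";", "/"],
}

def decode(finger, row):
    """Translate a (finger, row) contact into the character that finger
    would type in that row on a standard keyboard."""
    return QWERTY_COLUMNS[finger][row]

print(decode("left_middle", 1))   # 'd', the left middle finger's home-row key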

Middleware

Cross-Platform Cluster Graphics (CGLX) for Scalable Display Environments
CGLX (Cross-Platform Cluster Graphics Library) is a flexible, high-performance OpenGL-based graphics framework for the development of distributed high-performance visualization systems such as OptIPortals. CGLX allows OpenGL programs to transparently scale across display clusters and fully leverage the available resources, while maximizing the achievable performance and resolution of such systems. (Read More)

TileViewer
TileViewer is a cross-platform framework supporting the visualization of large-scale image and multi-dimensional data on massively tiled display walls. TileViewer is being developed as a proof-of-concept system exploring the technical challenges of grid-centric data analysis. Inheriting from a base visualization object class, different visualization objects are supported, including 2D static image viewing, video streaming, and 3D visualization. Each visualization object can be arbitrarily transformed (translated, scaled, and, for 3D objects, rotated) without being constrained by individual display tile boundaries; TileViewer manages the data distribution and synchronization between compute, render and display nodes. TileViewer has been superseded by CGLX. (Read More)
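
A minimal sketch of the class structure the summary describes is shown below; the class and method names are assumptions for illustration, not TileViewer's actual interface.

class VisualizationObject:
    """Base class: every object carries a transform applied in the global
    workspace, independent of individual display-tile boundaries."""
    def __init__(self):
        self.position = [0.0, 0.0]
        self.scale = 1.0

    def translate(self, dx, dy):
        self.position[0] += dx
        self.position[1] += dy

    def rescale(self, factor):
        self.scale *= factor

class StaticImage(VisualizationObject):
    def __init__(self, path):
        super().__init__()
        self.path = path

class VideoStream(VisualizationObject):
    def __init__(self, url):
        super().__init__()
        self.url = url

class SceneObject3D(VisualizationObject):
    """3D objects additionally support rotation."""
    def __init__(self, mesh):
        super().__init__()
        self.mesh = mesh
        self.rotation_deg = 0.0

    def rotate(self, degrees):
        self.rotation_deg += degrees

img = StaticImage("large_scan.tif")
img.translate(1200.0, 400.0)   # may now straddle several display tiles
img.rescale(2.0)
print(img.position, img.scale)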

HIPerScene
HIPerScene provides a distributed scene graph that allows data to be located anywhere on the grid and streamed between visualization nodes. Particular focus is on real-time data distribution on large-area display walls such as HIPerSpace and HIPerWall. (Read More)

VizION: A Rapid Prototyping Framework for Collaborative Digital Workspaces
The rise of ubiquitous computing environments creates the need for middleware systems that facilitate spontaneous, reliable, and efficient communication between many different devices of varying complexity. By their nature, these environments have to manage components and elements entering and leaving the system unpredictably. As appliances grow in number and complexity, the middleware needs to adapt and remain responsive even in adverse situations. VizION is a middleware framework, specifically designed for ubiquitous collaborative educational spaces such as VizClass. Its decentralized approach helps avoid performance bottlenecks and potential points of failure that would paralyze the entire environment. VizION presents a simple, unified platform to the application developer, hiding many of the complexities of distributed environments, and thus allowing ease of system extension. (Read More)
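
As a hypothetical illustration of handling components that enter and leave unpredictably (not VizION's actual design), a middleware layer might keep a heartbeat-based registry and silently evict devices that fall silent:

import time

class DeviceRegistry:
    """Track known devices and drop any that stop sending heartbeats."""
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def heartbeat(self, device_id):
        """Devices announce themselves periodically; new ids join implicitly."""
        self.last_seen[device_id] = time.monotonic()

    def active_devices(self):
        """Return only the devices heard from recently; stale ones are evicted."""
        now = time.monotonic()
        self.last_seen = {d: t for d, t in self.last_seen.items()
                          if now - t <= self.timeout_s}
        return list(self.last_seen)

registry = DeviceRegistry(timeout_s=5.0)
registry.heartbeat("whiteboard-01")
registry.heartbeat("tablet-07")
print(registry.active_devices())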

GateKeeper: Authentication for Collaborative Digital Workspaces
GateKeeper is the authentication system for the VizClass project, built on the VizION distributed system. It supports multiple authentication inputs and allows multiple authentications at a single point, enabling n-form authentication. Currently supported mechanisms are text login, USB-stick login, and biometric login. The system runs on a MySQL database backend. (Read More)
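
A hypothetical sketch of the n-form idea: a user is admitted only when at least n of the registered mechanisms verify them. The mechanism names follow the summary; the combination logic and all function signatures are assumptions.

# Each mechanism returns True only if its own check succeeds.
def text_login(credentials):
    return credentials.get("password") == credentials.get("expected_password")

def usb_stick_login(credentials):
    return credentials.get("usb_token") == credentials.get("expected_token")

def biometric_login(credentials):
    return credentials.get("fingerprint_match", False)

MECHANISMS = [text_login, usb_stick_login, biometric_login]

def authenticate(credentials, required_factors=2):
    """Count how many independent mechanisms accept the user."""
    passed = sum(1 for mechanism in MECHANISMS if mechanism(credentials))
    return passed >= required_factors

print(authenticate({
    "password": "s3cret", "expected_password": "s3cret",
    "usb_token": "ABC123", "expected_token": "ABC123",
    "fingerprint_match": False,
}, required_factors=2))   # True: two of the three factors verified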

Image-Based Modeling and Rendering

Multi-Modal Interfaces

Structural Engineering
