
3D Graphics

Virtual objects have to be three-dimensional. An important part of the illusion is that the objects one encounters must look as though they can be interacted with.

A major problem currently is the crudeness of the graphic images. From my own experience with CG, I know that graphic files can be very large. In film-making, when files are played out to a film recorder, they become non-interactive, and the huge files that created the film can be stored somewhere; they are no longer necessary to the animation or special effect. In VR, the graphics must remain interactive, to be acted upon and with; even with file compression, the files are too large and require too much processing to manipulate easily in interactive mode.

Highly realistic computer animation can't be used for [virtual environments]. Even a single frame of so-called photo-realistic animation requires lengthy computing time on today's workstations, yet computer-generated displays must be updated in real time as the wearer of a head-mounted display moves and turns...It will be many years, if ever, before a computer simulation will be indistinguishable from physical reality. (Sheridan & Zeltzer, 24)

Until someone figures out how to store and process viable graphics files for interactive use, I think that is correct. But I wouldn't put a time frame on it. And I wouldn't bet that someone hasn't begun to do it already, and we just don't know it yet.

Interface Design

Technology

Depending upon what you are trying to do, VR equipment can be as simple as a button to push, and as complex as the Dexterous Hand Master, designed to track and relay intricate movements of the hands to the computer interface. The amount and type of equipment required can make a virtual experience a major production--helmets, body suits, trailing wires, bulk, and discomfort. The technological thrust currently is to miniaturize and simplify the trappings.

(Note: The Dexterous Hand Master is an extremely high-precision hand input device that was designed by Beth Marcus of Exos, Inc. of Burlington, MA. The figure was taken from Aukstakalnis & Blatner, 166)

Another difficulty involves disorientation. Users of head-mounted equipment report dizziness, headaches, and other physical discomfort. "Systems rely on a tracking device which records the movement of a user's head. That mechanism relays the information to the image-generation computer...The entire process takes just a split second, but even that small time lag can induce disorientation and nausea." (Jenish & Dolphin, 46)

Sheridan and Zeltzer see the next technological advance as a way to "...incorporate eye-tracking devices in head-mounted displays; if the computer knows exactly where the eyes are looking on the display screen, it will have to generate a high-resolution image only for the small area that our eyes are able to focus on...ideally, such eye-position based displays could be shrunk to the size of a pair of contact lenses and mounted on the eyeballs." (Sheridan & Zeltzer, 25)
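The appeal of the eye-tracking idea can be shown with a back-of-envelope calculation: if the computer need only render full resolution where the fovea is looking, the high-resolution region is a tiny fraction of the display. The field-of-view figures below are illustrative assumptions, not numbers from Sheridan and Zeltzer.

```python
def foveated_pixel_fraction(display_fov_deg, foveal_fov_deg):
    """Fraction of display area covered by the high-resolution foveal
    region, treating both fields of view as square for simplicity."""
    return (foveal_fov_deg / display_fov_deg) ** 2

# Assume a 100-degree-wide head-mounted display and a ~5-degree foveal
# region (both figures are illustrative assumptions).
fraction = foveated_pixel_fraction(100.0, 5.0)
print(f"High-resolution region: {fraction:.2%} of the display")  # 0.25%
```

Even with generous assumptions, the region that must be rendered at full detail is well under one percent of the screen, which is why the scheme promises such large savings.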

As we speak, the Human Interface Technology Laboratory (HITL) of the Washington Technology Center at the University of Washington is at work on developing a "coherent light source [a laser diode]...to scan an image directly on the retina of the viewer's eye..." (For scientific details, please refer to Virtual Retinal Display Project Description)

The Process

VR is a computer world, driven by lines of program code that define, populate, and activate someplace that never was. Powerful computers, top-of-the-line workstations, complex programs, and the newest developments in display and sensor equipment are required to participate. The four components of a VR system, according to Pimentel & Teixeira, are effectors, the reality engine, the application, and the geometry.

Effectors are any input or output sensor hardware used to stimulate the senses and control the environment; examples include the head-mounted display (HMD), the wired glove, and the force ball. Sensors give the user control of the viewpoint, in order to navigate the environment. Sensors can be attached to viewpoints or objects.
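The sensor-to-viewpoint idea can be sketched as follows: a tracking sensor reports a position and orientation, and the system copies that reading onto whatever the sensor is attached to, whether a viewpoint or an object. All names here are illustrative, not drawn from any actual VR toolkit.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    """A tracking device's latest reading (e.g. a head tracker)."""
    position: tuple = (0.0, 0.0, 0.0)      # x, y, z in world units
    orientation: tuple = (0.0, 0.0, 0.0)   # yaw, pitch, roll in degrees

@dataclass
class Viewpoint:
    """The position and orientation from which the scene is drawn."""
    position: tuple = (0.0, 0.0, 0.0)
    orientation: tuple = (0.0, 0.0, 0.0)

def attach(sensor, target):
    """Drive the target (a viewpoint or an object) from the sensor."""
    target.position = sensor.position
    target.orientation = sensor.orientation

# A head tracker at standing height, facing 90 degrees to the left:
head_tracker = Sensor(position=(1.0, 1.7, 0.0), orientation=(90.0, 0.0, 0.0))
view = Viewpoint()
attach(head_tracker, view)
print(view.position)  # (1.0, 1.7, 0.0)
```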

The reality engine is the computer system and external hardware.

The application is the software that creates the virtual environment; it includes the structure of the environment and how objects and the user interact. In the application program, the event loop is the sequence of events through which the computer loops to maintain a simulation. Each pass of the event loop checks all the input devices for viewpoint positioning and command changes before instructing the computer to recreate the environment. This needs to happen at least sixteen to twenty times a second to create a sense of realism. (See Appendix)
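The event loop described above can be sketched as a simple polling loop. The device-reading and rendering functions are placeholders standing in for real sensor and display code; the twenty-passes-per-second figure comes from the text.

```python
import time

FRAME_RATE = 20            # passes per second; the text says 16-20 for realism
FRAME_TIME = 1.0 / FRAME_RATE

def read_input_devices():
    # Placeholder: a real system would poll the head tracker, glove,
    # force ball, and so on for viewpoint and command changes.
    return {"viewpoint": (0.0, 0.0, 0.0), "commands": []}

def render_environment(state):
    # Placeholder: a real system would recreate the scene from the
    # newly reported viewpoint.
    pass

def event_loop(num_passes):
    for _ in range(num_passes):
        start = time.monotonic()
        state = read_input_devices()     # 1. check all input devices
        render_environment(state)        # 2. recreate the environment
        elapsed = time.monotonic() - start
        if elapsed < FRAME_TIME:         # 3. hold each pass to the frame budget
            time.sleep(FRAME_TIME - elapsed)

event_loop(5)  # run five passes of the loop
```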

Geometry is the description of objects (discrete 3-D shapes that can be interacted with), their placement and construction, and the backdrop. Only a single backdrop is present at one time. It can't be moved or broken into smaller elements. It's the underlying structure on which the virtual environment is built. As the user's viewpoint changes, the event loop corrects the backdrop to adjust it to the new position(s).

The geometry of a virtual world is created from series of 3-D coordinate values--X, Y, and Z--that define polygons. A display might have 1,000 to 1,500 3-D polygons. The more polygons a shape contains, the more complex it is and the more processing time it requires.
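The polygon idea can be made concrete: each polygon is a list of (X, Y, Z) vertices, and per-frame processing cost grows with the total vertex count. The scene size below matches the 1,000-1,500 polygon range the text mentions; everything else is illustrative.

```python
# A polygon is a list of (X, Y, Z) coordinate triples; the simplest
# polygon is a triangle with three vertices.
triangle = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

def scene_vertex_count(polygons):
    """Total vertices the event loop must transform on each pass."""
    return sum(len(poly) for poly in polygons)

# A display in the range the text mentions: ~1,200 polygons.
scene = [triangle] * 1200
print(scene_vertex_count(scene))  # 3600 vertices to process per frame
```

Doubling the polygon count doubles the vertex work per pass, which is why more complex shapes demand more processing time.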
