A major current problem is the crudeness of the graphic images. From my own experience with CG, I know that graphics files can be very large. In film-making, once frames are played out to a film recorder, they become non-interactive; the huge files that created the film can be archived, since they are no longer necessary to the animation or special effect. In VR, because the graphics must remain interactive, to be acted upon and with, the files are too large and require too much processing, even with compression, to manipulate easily in interactive mode.
Highly realistic computer animation can't be used for [virtual environments]. Even a single frame of so-called photo-realistic animation requires lengthy computing time on today's workstations, yet computer-generated displays must be updated in real time as the wearer of a head-mounted display moves and turns... It will be many years, if ever, before a computer simulation will be indistinguishable from physical reality. (Sheridan & Zeltzer, 24)
Until someone figures out how to store and process viable
graphics files for interactive use, I think that is correct. But
I wouldn't put a time frame on it. And I wouldn't bet that
someone hasn't begun to do it already, and we just don't know it
yet.
Interface Design
(Note: The Dexterous Hand Master is an extremely high-precision hand input device that was designed by Beth Marcus of Exos, Inc. of Burlington, MA. The figure was taken from Aukstakalnis & Blatner, 166)
Another difficulty involves disorientation. Users of head-mounted equipment report dizziness, headaches, and other physical discomfort. "Systems rely on a tracking device which records the movement of a user's head. That mechanism relays the information to the image-generation computer...The entire process takes just a split second, but even that small time lag can induce disorientation and nausea." (Jenish & Dolphin, 46)
Sheridan and Zeltzer see the next technological advance as a way to "...incorporate eye-tracking devices in head-mounted displays; if the computer knows exactly where the eyes are looking on the display screen, it will have to generate a high-resolution image only for the small area that our eyes are able to focus on...ideally, such eye-position based displays could be shrunk to the size of a pair of contact lenses and mounted on the eyeballs." (Sheridan & Zeltzer, 25)
As we speak, the Human Interface Technology Laboratory (HITL) of the Washington Technology Center at the University of Washington is at work on developing a "coherent light source [a laser diode]...to scan an image directly on the retina of the viewer's eye..." (For scientific details, please refer to Virtual Retinal Display Project Description)
Effectors are any input or output sensor hardware used to stimulate the senses and control the environment; they include the head-mounted display (HMD), the wired glove, and the force ball. Sensors give the user control of the viewpoint, in order to navigate the environment, and can be attached to viewpoints or objects.
The reality engine is the computer system and external hardware.
The application is the software that creates the virtual environment; it includes the structure and how objects and the user interact. In the application program, the event loop is the sequence of events through which the computer loops to maintain a simulation. Each pass of the event loop checks all the input devices for viewpoint positioning and command changes before instructing the computer to recreate the environment. This needs to happen at least sixteen to twenty times a second to create a sense of realism. (See Appendix)
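The event loop described above can be sketched very simply. This is an illustrative outline only, not code from any actual VR system; the device and drawing functions here are hypothetical stand-ins for real effector hardware.

```python
import time

FRAME_RATE = 20           # passes per second; 16-20 Hz is the minimum cited above
FRAME_TIME = 1.0 / FRAME_RATE

def poll_head_tracker():
    """Hypothetical input device: returns the user's current viewpoint position."""
    return (0.0, 0.0, 0.0)

def poll_glove():
    """Hypothetical input device: returns any command gesture from a wired glove."""
    return None

def recreate_environment(viewpoint, command):
    """Hypothetical output step: regenerates the scene for the new viewpoint."""
    pass

def run_simulation(passes):
    for _ in range(passes):
        start = time.monotonic()
        viewpoint = poll_head_tracker()          # check each input device...
        command = poll_glove()
        recreate_environment(viewpoint, command)  # ...then redraw the environment
        # wait out the rest of the frame to hold the target rate
        elapsed = time.monotonic() - start
        if elapsed < FRAME_TIME:
            time.sleep(FRAME_TIME - elapsed)

run_simulation(passes=3)
```

The point of the sketch is the rhythm: poll every device, redraw, and repeat at least sixteen to twenty times a second, or the illusion of a continuous environment breaks down.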
Geometry is the description of objects (discrete 3-D shapes that can be interacted with), their placement and construction, and the backdrop. Only a single backdrop is present at one time. It can't be moved or broken into smaller elements. It's the underlying structure on which the virtual environment is built. As the user's viewpoint changes, the event loop redraws the backdrop to match the new position(s).
The geometry of a virtual world is built from series of 3-D
coordinate values--X, Y, and Z--that define the vertices of each
polygon. A display might contain 1,000 to 1,500 3-D polygons.
The more polygons, the more processing time; therefore, the more
complex the shape, the more processing time is required.
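To make the cost concrete, here is a minimal sketch of how such geometry might be stored. The names are illustrative and not drawn from any particular VR toolkit: each polygon is just a list of (X, Y, Z) vertices, and every pass of the event loop must transform all of them.

```python
# A polygon is a list of (X, Y, Z) vertex coordinates.
triangle = [(0.0, 0.0, 0.0),
            (1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0)]

square = [(0.0, 0.0, 1.0),
          (1.0, 0.0, 1.0),
          (1.0, 1.0, 1.0),
          (0.0, 1.0, 1.0)]

scene = [triangle, square]   # a display might hold 1,000 to 1,500 of these

# Each pass of the event loop must reposition every vertex of every
# polygon, so processing time grows with the total vertex count.
total_vertices = sum(len(poly) for poly in scene)
print(total_vertices)  # → 7
```

Doubling the polygon count roughly doubles the per-frame work, which is why complex shapes are so expensive to keep interactive.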