
Cyberspace
But what was cyberspace? Where had it come from? Cyberspace had oozed out of the world's computers like stage-magic fog. Cyberspace was an alternate reality, it was the huge interconnected computation that was being collectively run by planet Earth's computers around the clock. Cyberspace was the information Net, but more than the Net cyberspace was a shared vision of the Net as a physical space.
-- Rudy Rucker
The Hacker and the Ants, 1994
Find it Fast
Introduction
MUTech
Shared Virtual Worlds Today
VNET
Visions of Cyberspace
Pioneers
Avatars and Bots
h-anim
Get Yer Avatars Here
Virtual Communities
Technical Papers
DIS and HLA
When you think of cyberspace, you probably think of Rucker and authors William Gibson (who is credited with having coined the term), Bruce Sterling, and Neal Stephenson, perhaps the best known writers in the Cyberpunk movement in science fiction.
That these authors lacked a coherent vision didn't matter (ask any of them whether they'd rather be consistent with others in the genre or tell a good story and be prepared to be looked at as though you were an idiot). The idea of a "shared hallucination" (Gibson's words) of virtual reality caught fire. Shared virtual reality. In this new vision, cyberspace was, at least peripherally, a medium through which people could interact and where computer constructs provided sensory richness and intellectual challenge.
Unlike the cyberpunk "black ice" or the apotheosis of user-unfriendly interfaces envisioned by John Varley in "Press Enter", cyberspace is seen as essentially benign: a Disney World where cigarette butts never even hit the ground. But Epcot is about the illusion of communities; no doubt a paragraph in their security manual deals with guests who decide to settle down and homestead France. The vision of cyberspace that's alive today is about the creation of virtual communities.
As Mark Pesce notes in "A Brief History of Cyberspace", he and Tony Parisi came up with the notion of VRML as one of the essential mechanisms to implement cyberspace.
Regular visitors to these parts know that this page has been embarrassingly under construction for a couple of years. Now that Mark Pesce has been quoted (or perhaps misquoted) in an article in New Media as saying "I never bought the whole Black Sun and Oz vision of the multiuser worlds," and the whole VRML content industry has been focusing on smaller, non-immersive VRML, it seems like a perfect time to bring this page up to date.
First of all, the fundamental multi-user (MU) technology (MUtech) is very definitely built into VRML 97. You can't have shared worlds unless users can communicate over a network, and, as you can see in David Frerichs's slide, VRML 97 lets you do that through a Java interface, either in the Script node or through the External Authoring Interface (EAI). The fundamental MUtech is there now, but some problems remain. Since this page was started, remarkable progress has been made in solving those problems.
What needs to be communicated over the network to make MUtech happen? Basically, three things: visitors need to see one another, they need to communicate with one another, and they need to see each other's changes to the world.
Let's take these challenges one at a time.
Seeing one another
Normally when someone visits your VRML world, everything in the world is transferred to his own machine. So if two people download your world one after the other, they have no way of knowing that the other person has downloaded it. If you want your world to be sharable, the first thing you need is a way of keeping track of who's there. Somehow information about visitor 1 has to be passed to visitor 2. We'll talk about what information and how it might be passed in a minute. For now, let's call the piece of MUtech that handles passing information between two visitors to a VRML world the "helper". That term is vague enough that we probably won't fall into the trap of assuming anything about the helper.
The next part of this challenge is to figure out what visitor 2 will see to let her know that visitor 1 is there. Traditionally each visitor in a MU world is displayed to other visitors as an avatar.
So if Carnival and Fox (two real people) are sharing a virtual world, Carnival needs to see an avatar for Fox, and vice-versa. Carnival and Fox will each choose an avatar that represents the way they'd like others to see them. Fox (usually) won't choose an avatar to represent Carnival; Carnival will do that himself. So we've narrowed things down to a requirement: if you want to build a MU world, you need a way for people to choose avatars (which are almost certainly VRML objects), and you need a way to insert the avatar for each visitor into the other visitors' worlds.
It would be even nicer if, as Carnival moved around his copy of the VRML world, Fox would see Carnival's avatar moving around in her own copy of the world. So now we know some of the information that Carnival's helper has to transfer to Fox's helper: the position and orientation of Carnival's currently bound Viewpoint. There's plenty more that can get transferred, and we'll talk about the information and the way it might get transferred in a while, but what we know now is enough to get us started.
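To make this concrete, here's a minimal Python sketch of the kind of update Carnival's helper might send and Fox's helper might apply. The wire format is entirely hypothetical -- VRML 97 doesn't standardize one -- and all the names here are invented for illustration:

```python
import json

def make_viewpoint_update(visitor_id, position, orientation):
    """Build the update a helper sends when its visitor's bound
    Viewpoint moves.  The field names are hypothetical."""
    return json.dumps({
        "visitor": visitor_id,       # who moved
        "position": position,        # x, y, z in world coordinates
        "orientation": orientation,  # axis plus angle, as in VRML
    })

def apply_viewpoint_update(avatars, message):
    """Fox's helper: move Carnival's avatar in Fox's copy of the world."""
    update = json.loads(message)
    avatar = avatars[update["visitor"]]
    avatar["position"] = update["position"]
    avatar["orientation"] = update["orientation"]

# Carnival moves; his helper broadcasts, Fox's helper applies.
avatars = {"Carnival": {"position": [0, 1.75, 0], "orientation": [0, 1, 0, 0]}}
msg = make_viewpoint_update("Carnival", [3.0, 1.75, -2.0], [0, 1, 0, 1.57])
apply_viewpoint_update(avatars, msg)
print(avatars["Carnival"]["position"])   # [3.0, 1.75, -2.0]
```

The same pattern carries any other per-visitor information the helpers agree to exchange.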
Communicating
Carnival and Fox need to communicate with one another in this shared world. VRML is wonderful for 3D geometry, but it's pretty pathetic for text, and there's (currently) no standard for getting text from a keyboard. VRML can play audio, but it has no way to record or transmit speech (yet).
Verbal Communication. This usually means that alongside the VRML window (or possibly embedded in it, though I haven't seen any examples of that yet) there needs to be a text window similar to windows used for Web-based chat. So we have another requirement: a chat client that gets downloaded with the VRML world (if you're using Java) or that's activated by something in the VRML world or the web page in which it's embedded (if you're doing it another way). The chat client gets inputs from keyboard or voice and transmits them to the chat clients of other people who are logged in to the MU world.
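As a toy illustration of that requirement, here's a Python sketch of the relay logic. A real chat client would of course use sockets and a protocol; `ChatRoom`, `login`, and `say` are made-up names:

```python
class ChatRoom:
    """Toy stand-in for the chat relay; real clients would talk
    over the network, not through a shared dictionary."""
    def __init__(self):
        self.clients = {}          # name -> messages received so far

    def login(self, name):
        self.clients[name] = []

    def say(self, speaker, text):
        # Relay to everyone logged in to the MU world except the speaker.
        for name, inbox in self.clients.items():
            if name != speaker:
                inbox.append(f"{speaker}: {text}")

room = ChatRoom()
room.login("Carnival")
room.login("Fox")
room.say("Carnival", "Nice world!")
print(room.clients["Fox"])   # ['Carnival: Nice world!']
```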
Nonverbal Communication. People don't usually stand around like statues when they talk. They gesture.
Here the current VRML standard has everything we need. You can definitely make parts of your avatar move -- nod yes, shake your head no -- and that information can be transmitted to others just like your Viewpoint's position and orientation. When Carnival shakes his head, Fox's MU helper can ROUTE the position and orientation sent by Carnival's MU helper to the head in Carnival's avatar. But perhaps you see the problem. How does Fox's helper know that it's the head that's nodding? There's probably a tag in the message that specifies "head", but how do we know how many tags we need to plan for? Head is obvious, but do we need a tag for spleen? The Humanoid Animation Working Group (h-anim) is trying to answer that question by defining a standard humanoid.
Another problem that led to the chartering of h-anim is that gestures are difficult, or at least tedious, to make convincing. Anyone who's tried to make a humanoid walk in VRML knows that. It would be possible to have a library of reusable gestures if we had a standard for the humanoid body, its parts, and its joints.
Cindy Ballreich's slide from her presentation at VRML 98 shows how the h-anim standard fits into the grand scheme.
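The tag problem can be sketched in a few lines of Python. The receiving helper keeps a table mapping tag names to avatar parts and simply drops tags it doesn't recognize -- which is exactly why both ends need an agreed standard set of names, the thing h-anim supplies. The table and function names here are hypothetical:

```python
# Hypothetical dispatch table in Fox's helper: message tags mapped to the
# nodes in Carnival's avatar that should receive the ROUTEd values.
avatar_parts = {
    "head": {"rotation": [0, 1, 0, 0]},
    "l_shoulder": {"rotation": [0, 0, 1, 0]},
}

def route_gesture(parts, tag, rotation):
    """Apply an incoming gesture message; silently ignore unknown tags.
    Agreeing on the tag names is what a standard humanoid buys us."""
    part = parts.get(tag)
    if part is None:
        return False        # "spleen" arrives: no agreed tag, dropped
    part["rotation"] = rotation
    return True

route_gesture(avatar_parts, "head", [0, 1, 0, 0.4])      # nod: applied
route_gesture(avatar_parts, "spleen", [1, 0, 0, 0.1])    # unknown: dropped
print(avatar_parts["head"]["rotation"])                   # [0, 1, 0, 0.4]
```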
Blaxxun has taken a different approach. They have a standard set of gestures for their avatars and a menu in their chat client that allows participants to send one of the standard set (say hello, agree, disagree, etc.). This solves another problem: how do you command your avatar to do something? It's easy enough to build a sensor into a VRML model so that other people can touch your avatar and make it do something, but how does Carnival control his avatar in Fox's copy of his world?
Changing the World
Suppose Carnival picks up a glass (easy enough in VRML 97) and moves it from the table to the shelf. How does Fox see that the glass has moved? Once again, we need some tags to identify what's moving.
A much harder problem comes up after Carnival moves the glass. Now Fox signs off, eats lunch, and comes back to the world. Where is the glass? Clearly in an ordinary VRML world without MUtech the glass is on the table where it started. So far we haven't had a requirement for states in our MU worlds. Every world has an initial state, and the communication through the MU helper conveys where the avatars are now and what they're gesturing. It doesn't have to keep track of where the avatar was or what it gestured before -- that's why gestures in current MU implementations start and stop in a neutral position.
Enter the Server. But if you're going to change the world (and by the way, this is very bleeding-edge, and no one has done this outside the lab) you'll need to keep track of the state of everything in the world. We talked about clients before. Clients imply servers, and that would certainly solve our problem: the state of the world is kept in the server. The Database Working Group was originally chartered with just this issue in mind -- a database is a logical place to keep persistent, updatable information.
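Here's a Python sketch of the idea, under the assumption of a single dumb server holding all movable-object state. The class and method names are invented for illustration; a real MUtech server would persist this state to a database:

```python
class WorldServer:
    """Sketch of a server that keeps the persistent state of movable
    objects, so the glass stays on the shelf after Fox signs off."""
    def __init__(self, initial_state):
        self.state = dict(initial_state)   # object name -> where it is

    def move(self, obj, place):
        self.state[obj] = place            # record the change

    def snapshot(self):
        # A (re)joining visitor gets the current state, not the
        # initial state baked into the VRML file.
        return dict(self.state)

server = WorldServer({"glass": "table"})
server.move("glass", "shelf")      # Carnival moves the glass
fox_world = server.snapshot()      # Fox comes back from lunch
print(fox_world["glass"])          # shelf
```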
Besides databases there are several other resources that might be used: some based on CORBA (Common Object Request Broker Architecture), such as JacORB and omniORB, and others such as Jini, Linda, and TIB/Rendezvous (thanks to Christopher St. John for this list).
But servers have a big problem: scalability. Every time a visitor does something, the server (at least if it's a dumb server) has to send a message to every other visitor in the world. A crowd of 10,000 people in a VRML world is inconceivable if you rely on client-server. Current technology puts the limit well under 100. And servers suffer from the problem of having a single point of failure. Worst of all, servers are old tech. The Internet is peer-to-peer, VRML was designed so that it wouldn't require servers except to download the original world, and the world is becoming PCs and workstations while servers mean big iron.
Reducing the Noise. A conversation among 10,000 people or even 100 is madness. You obviously don't care how avatars are moving or gesturing when your back is turned, and you may want to ignore conversations between avatars who are too far away, since you can always move your avatar toward them when you want to hear what they're saying. It's possible for servers to restrict traffic so that only those visitors whose avatars are within a certain distance from a speaker or mover will get the messages. It's also possible for clients to do this on their own and to set up their helpers so that they'll ignore at a very low network level any messages that come from other clients whose avatars are too far away.
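The distance filter itself is easy to sketch in Python. The 10-meter radius and all of the names here are arbitrary choices for the example:

```python
import math

def within_earshot(a, b, radius):
    """True if two avatar positions are close enough to exchange messages."""
    return math.dist(a, b) <= radius

def interested_receivers(positions, speaker, radius=10.0):
    """Only visitors near the speaker get the message: the
    distance-based filtering described above."""
    origin = positions[speaker]
    return [name for name, pos in positions.items()
            if name != speaker and within_earshot(origin, pos, radius)]

positions = {
    "Carnival": (0.0, 0.0, 0.0),
    "Fox": (3.0, 0.0, 4.0),        # 5 m away: hears Carnival
    "Lurker": (60.0, 0.0, 80.0),   # 100 m away: filtered out
}
print(interested_receivers(positions, "Carnival"))   # ['Fox']
```

The same test can run either in a server or, as the paragraph above suggests, in each client's helper before the message ever reaches the browser.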
The Living Worlds draft specification is agnostic about the client-server issue, though they note that all of the Living Worlds implementations so far have used client-server.
This is the current state of the art. But research is going on into schemes that will have the server manage connections and maintain the state of movable objects in the world and let visitors who are close enough to one another communicate directly (perhaps through "introductions" managed by the server).
This isn't the only way to define the basic issues. Living Worlds has a set of scenarios that define an even broader set of requirements.
Shared Virtual Worlds Today
As I said above, there are in fact some shared VRML worlds up and running today.
Rachael Edwards has reviewed several of these multiuser systems at NVRCAD.
Despite some early MU VRML companies defecting or putting their VRML projects on a back burner, and despite the growth of VRML into business and other domains, there's a very lively MU VRML scene, and some people will settle for "nothing less than cyberspace."
Visions of Cyberspace
A number of these demos and projects used a proprietary format or a version of VRML that is no longer supported. They're still interesting because they represent problems solved that we in the VRML community can learn from.
An avatar, as Webster tells us, is an incarnation of a deity (especially Vishnu) in human or animal form.
Neal Stephenson used "avatar" in Snow Crash (1992) to mean an image or 3D figure that represents a person in cyberspace, and is widely credited with originating that metaphor. Stephenson himself says in the Bantam paperback edition, "after the first publication of Snow Crash I learned that the term 'avatar' has actually been in use for a number of years as part of a virtual reality system called Habitat, developed by F. Randall Farmer and Chip Morningstar."
"Bot" is short for "robot" -- the abbreviation being popularized if not coined by Mystery Science Theatre 3000. "Robot" was coined by Karel Capek in his 1920 play R.U.R. and is derived from the Czech robota -- compulsory labor. Some people are working on stuffing bot or intelligent agent technology into avatar-like figures, so that when you visit a MU world, you can't always tell at first whether there's a bot or a real person behind the avatar.
There are two kinds of technologies bots can use to mimic human avatar drivers. One is based on natural language processing (NLP) as in a system by Ergo Technologies. The other is traditional AI techniques like Eliza, a variant of which is used by Ampcom in their Conversations with Angels.
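In the spirit of Eliza, here's a toy pattern-and-response loop in Python. It doesn't reflect Ergo's or Ampcom's actual systems; it only suggests why a bot behind an avatar can pass for a person at first:

```python
import re

# Toy Eliza-style rules: a pattern and a canned response template.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
]
DEFAULT = "Tell me more."

def bot_reply(line):
    """Echo the visitor's own words back as a question, Eliza-style."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(match.group(1))
    return DEFAULT

print(bot_reply("i feel lost in this world"))  # Why do you feel lost in this world?
print(bot_reply("Nice avatar!"))               # Tell me more.
```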
Holger Grahn of Blaxxun gave the following advice on www-vrml about building avatars:
Our typical avatar height is 1.85m, so ideally your entry viewpoints has a Viewpoint position y at 1.85 and a avatarHeight value of 1.85 in the Navigation Info node....If you use a VRML model as your own avatar, it should not contain lights and the world origin 0 0 0 should be the center between the eyes.
h-anim: The Humanoid Animation Working Group (h-anim) has been one of VRML's biggest successes. Under the leadership of Bernie Roehl and Cindy Ballreich and a host of active members, this WG has developed a standard for humanoid animation that is getting enthusiastic sign-ups from the hosts of virtual communities (above) and has already been adopted into the MPEG-4 standard. To get the most out of your avatar, make sure it's compliant with the h-anim standards. Christian Babski has an animation gallery that shows some of the possibilities of avatar animation.
biota: While the biota working group isn't specifically about shared multi-user worlds, we couldn't leave them out of this section because they're working with artificial life in ways that might eventually put ordinary bots to shame.
Get Yer Avatars Here: Here are some places you can find out about and download avatars and bots:
Several of these papers are in PostScript or PDF format. Please check the link before you click it if you don't have viewers for both of these formats. Papers marked "(VRML 9x)" were given at the Symposium for the Virtual Reality Modeling Language for that year.
The U.S. Military's Distributed Interactive Simulation program was designed to help simulations interoperate in a shared virtual world. As a result, many of the problems we're encountering in the VRML community have already been studied. The Defense Modeling and Simulation Office is now in charge of all military modeling and simulation. There are two efforts going on now: one to replace DIS with a more ambitious High Level Architecture, and another, through the Simulation Interoperability Standards Organization, to bring DIS into compliance with HLA requirements.
Did I leave out something important? Let me know.
-- Bob Crispen
-- Sunday, May 23, 1999