8. Avatars

Avatars should be thought of as special SharedObjects with some additional features. However, we run into a problem when trying to define avatars in the same way for the EAI and Shout3D clients. The EAI version follows the BOMU avatar specification, which is based on discovering fields at runtime. This is in principle not possible in the Shout3D implementation, because every node is strongly typed by being implemented as a Java class, so the triggers for avatar behaviours have to be defined in advance. We therefore use two different definitions for the two clients.

EAI Client

DeepMatrix is able to use special avatars with user-controlled behaviours, termed BOMU (Browser Only Multi User) avatars. The interface was first defined and used in VNet.

A typical VRML file defining a BOMU avatar looks like the following:

PROTO SomeName [
  eventIn SFBool behaviourX
  eventIn SFBool behaviourY

  exposedField SFString inMotionBehaviour "walkBehaviour"
  exposedField SFString notInMotionBehaviour "standBehaviour"

  exposedField MFString gestures [ "behaviourX" "behaviourY" "walkBehaviour"  "standBehaviour"  ]
]
{
  # Avatar geometry and animations
}

SomeName{}

The file contains a PROTO, with any name you want, that holds the definition of the avatar's geometry, plus one instantiation of this PROTO. The PROTO also defines several eventIns of type SFBool, which the DeepMatrix system uses to trigger the avatar's different behaviours. The avatar designer just has to wire the avatar's animations or states to these eventIns.

To let the MUtech know which behaviours are present, the PROTO uses the exposedField gestures. This MFString field holds the names of the different eventIns. The MUtech reads these names and makes the behaviours available to the avatar's user; DeepMatrix shows them in a choice list under the user list, and selecting a behaviour from this list triggers it.

The two optional fields inMotionBehaviour and notInMotionBehaviour name two special behaviours that are triggered automatically when the avatar starts moving or comes to a stop.
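The client-side dispatch described above can be sketched in plain Java. This is a hypothetical illustration, not the actual DeepMatrix source; the class and method names are assumptions.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of how a MUtech client might map the gestures
// field of a BOMU avatar onto its SFBool eventIns, including the two
// automatic motion behaviours. Not the actual DeepMatrix implementation.
public class BomuGestureDispatcher {
    private final List<String> gestures;       // from the exposedField "gestures"
    private final String inMotionBehaviour;    // e.g. "walkBehaviour"
    private final String notInMotionBehaviour; // e.g. "standBehaviour"

    public BomuGestureDispatcher(List<String> gestures,
                                 String inMotionBehaviour,
                                 String notInMotionBehaviour) {
        this.gestures = gestures;
        this.inMotionBehaviour = inMotionBehaviour;
        this.notInMotionBehaviour = notInMotionBehaviour;
    }

    /** Returns the eventIn name to fire for a user-selected gesture, or null if unknown. */
    public String resolveGesture(String name) {
        return gestures.contains(name) ? name : null;
    }

    /** Returns the eventIn name to fire automatically when the motion state changes. */
    public String resolveMotionChange(boolean moving) {
        return moving ? inMotionBehaviour : notInMotionBehaviour;
    }

    public static void main(String[] args) {
        BomuGestureDispatcher d = new BomuGestureDispatcher(
            Arrays.asList("behaviourX", "behaviourY", "walkBehaviour", "standBehaviour"),
            "walkBehaviour", "standBehaviour");
        System.out.println(d.resolveGesture("behaviourX"));
        System.out.println(d.resolveMotionChange(true));
        System.out.println(d.resolveMotionChange(false));
    }
}
```

In the real client the returned name would be resolved to the PROTO's eventIn via the EAI and an SFBool TRUE event would be sent to it.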

Human avatars should stand with their feet on the ground. DeepMatrix assumes that the avatar's origin is the point where the avatar touches the ground, so a human avatar should have its origin between its feet at ground level. This is compliant with the H-Anim specification. The avatar should also face in the +z direction, as in the H-Anim spec.

Nevertheless, DeepMatrix can also use any piece of VRML as an avatar, although it cannot then guarantee all the features of dedicated avatars.

Shout3D Client

For the Shout3D client a special Avatar node was developed. It is an extension of the SharedObject node and has the following definition:

Avatar {
  # all the SharedObject fields...

  field MFString gestures []
  eventIn SFString doGesture
}

The idea is that gestures lists the possible gestures; when the user triggers one, DeepMatrix sends an event with the gesture's name to doGesture, which triggers the gesture in the avatar. This requires some scripting and is still more of a theoretical option.
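The receiving end of doGesture might look like the following sketch in plain Java. This is an assumption for illustration, not the real Shout3D API; a list stands in for the routing to the avatar's animations.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of an Avatar node's doGesture handling. In the
// real Shout3D client the Avatar node is a Java class extending the
// SharedObject node; all names here are assumptions.
public class AvatarGestures {
    private final Set<String> gestures = new LinkedHashSet<>(); // the "gestures" field
    private final List<String> played = new ArrayList<>();      // stand-in for started animations

    public AvatarGestures(String... gestureNames) {
        gestures.addAll(Arrays.asList(gestureNames));
    }

    /** Handles a doGesture event: accepts the name only if it is declared in gestures. */
    public boolean doGesture(String name) {
        if (!gestures.contains(name)) {
            return false; // unknown gesture, ignore the event
        }
        played.add(name); // stand-in for starting the gesture's animation
        return true;
    }

    public List<String> getPlayed() {
        return played;
    }
}
```

Checking the incoming name against the declared gestures keeps a malformed or stale event from another client from reaching the avatar's animation logic.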

The walking animation behaviours are not implemented yet.
