wiki:Ei4/1.0/GameEngine


Ei4 version 1.0 (DEAF07)

The game engine is implemented as a library consisting of a number of application modules, configurable using a simple configuration language and/or an automatically generated GUI. The library is used by a number of demos which add experimental functionality to the engine (which eventually ends up in the engine itself once the concept is proven and the API / implementation is more or less stabilized).

Modules are implemented as C++ classes, initially intended to be as independent as possible, but eventually contaminated by hidden assumptions about each other. This is one of the problems that have to be fixed during the next refactoring.

Ei4 Game Engine Modules

Some underlying patterns

  • We use boost::signals for inter-module connections all over the engine. Signals allow a more concise implementation of events / handlers just where you need them and introduce less coupling.
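
A minimal sketch of the pattern (LevelSwitcher and printLevel are made up for illustration, not engine classes):

    #include <boost/signal.hpp>
    #include <iostream>

    class LevelSwitcher {
    public:
        boost::signal<void (int)> onLevelChanged;   // interested modules connect here
        void switchTo(int level) { onLevelChanged(level); }
    };

    void printLevel(int level) { std::cout << "now on level " << level << "\n"; }

    int main() {
        LevelSwitcher switcher;
        switcher.onLevelChanged.connect(&printLevel);  // the emitter knows nothing about its listeners
        switcher.switchTo(2);                          // prints "now on level 2"
    }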

Infrastructure

Application base

See: DemoBase.h, DemoBase.cpp.

This is the most entangled "module"; it implements basic application functionality that may be shared among all the demos. It uses the configuration manager to decide which objects to create (~ our analog of loading modules) and how to set them up (hence it assumes that it knows details about the modules such as loading order, interfaces, expected parameters and interconnections). Usage pattern: derive a particular demo from DemoBase, override some virtual methods and create it in main() with the runDemo template from Utils.h. This class should be mostly obsoleted by wrapping the modules into a dynamic language and creating / configuring / connecting them from scripts.
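
A sketch of that usage pattern (MyDemo, the overridden method name and the exact runDemo signature are assumptions for illustration):

    #include "DemoBase.h"
    #include "Utils.h"

    class MyDemo : public DemoBase {
    protected:
        // override only the setup steps this demo customizes
        virtual void setupScene() {
            DemoBase::setupScene();
            // ... create demo-specific objects ...
        }
    };

    int main(int argc, char** argv) {
        // runDemo instantiates the demo and captures its exceptions
        return runDemo<MyDemo>(argc, argv);
    }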

The class initializes the modules in a setup() method that calls the individual setup*() methods of the various subsystems in a hard-coded order. The smaller setup*() methods may decide to skip actual initialization if the corresponding section is missing from the configuration file. setup() is implemented as a separate method because in C++ virtual calls don't work inside a constructor, but we want to be able to override the smaller setup*() methods or setup() itself in particular demos.

DemoBase also implements the main loop, in the go() method. m_goOn and requestExit() allow initiating an exit from the main loop.
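
Roughly (a sketch of the mechanism; the real go() also pumps input, updates the modules and renders):

    void DemoBase::go() {
        m_goOn = true;
        while (m_goOn) {
            // ... process one frame ...
        }
    }

    void DemoBase::requestExit() { m_goOn = false; }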

Since we don't have a generic module interface allowing modules to be loaded / connected at will, some Ei4-specific functionality is included in DemoBase:

  • head light (as an OpenGL light or mask)
  • sending light cues as MIDI
  • debugging geometry (floor grid)
  • handling scenery loading (calculating scene dimensions, assigning collision bodies and physics materials, etc.)
  • gameplay:
    • level switching
    • capturing bionts

To capture bionts, a "capture frustum" is created which is shorter and slightly narrower than the camera frustum; this frustum is attached to the same SceneNode as the (mono) camera. Ogre's scene query is then used to list all bionts in the frustum. I didn't use Newton's collision system for capture because it is too much trouble to force a physical / collision body to specific coordinates (which is how the camera has to move in Ei4 using the tracking data).
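
The query side of this could look roughly as follows (a sketch using Ogre's plane-bounded-volume scene query, not the actual engine code):

    // build a query volume from the planes of the (shorter, narrower) capture frustum
    Ogre::PlaneBoundedVolume volume;
    // ... volume.planes filled from the capture frustum ...
    Ogre::PlaneBoundedVolumeList volumes;
    volumes.push_back(volume);

    Ogre::PlaneBoundedVolumeListSceneQuery* query =
        sceneMgr->createPlaneBoundedVolumeQuery(volumes);
    Ogre::SceneQueryResult& result = query->execute();
    for (Ogre::SceneQueryResultMovableList::iterator it = result.movables.begin();
         it != result.movables.end(); ++it) {
        // capture *it if it is a biont
    }
    sceneMgr->destroyQuery(query);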

The following functionality should really belong to other modules or be initiated from scripts:

  • toggling inputs between keyboard/mouse and sensors
  • generating config GUI from parameters namespace
  • GUI callbacks

Configuration system

See: ConfigManager.h, ConfigManager.cpp.

The configuration language is a reimplementation of Ogre's built-in configuration language, which in turn is a mutation of the Windows INI file format. The following additional facilities are provided:

  • configuration saving (Ogre's ConfigFile only supports loading)
  • preservation of comments / blank lines (to avoid losing documentation comments after resaving); this feature is the reason loading was reimplemented from scratch
  • parameter modification signals, so modules can detect changes of the configuration (via GUI - implemented; OSC etc.)
  • automatic building of the frobnication GUI from a configuration namespace
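
For illustration, a fragment in this format might look like the following (the key names are made up; the [network] section itself is described below):

    # comments and blank lines survive a resave
    [network]
    localPort = 7000
    remoteIP = 192.168.0.12
    remotePort = 7001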

When scripting support is implemented, configuration should rather use a (subset of the) scripting language syntax. Automatic OSC namespace generation could still be useful, but the frobnication GUI could be obsoleted by real-time access to the interpreter from the script editor.

Scene management

See:

Ei4 only supports the generic scene manager at the moment, but no assumptions about the SceneManager are made, so it is easy to switch in the future if necessary. Initially oScene (oFusion scene format) and dotScene were supported, but dotScene support has since bitrotted.

All levels are expected to come from the same scene file, with meshes encoding their level affinity in the mesh or group name. A mesh may belong to several levels as well, with a different lightmap image on each level.

Some materials are expected to reserve the second texture unit (index 1) for lightmaps; lightmap names are generated from the mesh name and level number. This allows reusing a single mesh for many similar objects whose only difference is the lightmap (due to a different position in the scene or a different game level).
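
In Ogre terms, assigning such a lightmap could look roughly like this (the naming scheme shown is an assumption for illustration):

    // point the reserved lightmap unit (index 1) at the per-level image
    Ogre::MaterialPtr mat = entity->getSubEntity(0)->getMaterial();
    Ogre::TextureUnitState* lightmapUnit =
        mat->getTechnique(0)->getPass(0)->getTextureUnitState(1);
    lightmapUnit->setTextureName(meshName + "_lightmap_" +
                                 Ogre::StringConverter::toString(level));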

When loading the scene, every entity gets a SceneEntity object assigned as user data. SceneEntity keeps track of the levels this entity belongs to and controls the entity's appearance depending on the level.

Basic IO systems

Keyboard/mouse input handling

See: InputMapper.h, InputMapper.cpp

InputMapper allows mapping functions to input events (from keyboard or mouse) and translates Ogre input to CEGUI input in GUI mode. This functionality should be made available from the scripting language. boost::signals are used for connections; the class converts from the Ogre-style listener interface to more flexible and sane signals.
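
Purely illustrative (the real InputMapper API may differ), the core idea is a signal per input event that handlers connect to:

    #include <boost/signal.hpp>
    #include <boost/shared_ptr.hpp>
    #include <map>

    class InputMapperSketch {
        typedef boost::signal<void ()> KeySignal;
        std::map<int, boost::shared_ptr<KeySignal> > m_keyDown;  // keycode -> handlers
    public:
        KeySignal& onKeyDown(int keycode) {
            boost::shared_ptr<KeySignal>& slot = m_keyDown[keycode];
            if (!slot) slot.reset(new KeySignal());
            return *slot;
        }
        void injectKeyDown(int keycode) {  // called from the Ogre key listener
            std::map<int, boost::shared_ptr<KeySignal> >::iterator it =
                m_keyDown.find(keycode);
            if (it != m_keyDown.end()) (*it->second)();
        }
    };

A demo would then do something like mapper.onKeyDown(escapeKeycode).connect(...exit handler...), with the keycode constant and bind expression depending on the input library.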

Most of the module is spent proving Greenspun's tenth rule: various shortcuts for frequently used patterns are implemented because C++ is braindead and doesn't allow conveniently constructing functions where you need them (not even with boost::lambda hacks).

Some cross-pollination: InputMapper also handles the GUI (which is considered a virtual input device).

Network

We use OSC to communicate between different computers. OSC is implemented using the WOSCLib library. By providing a [network] section in the configuration file, Ei4 will build and set up an OSC client and server. The local and remote ports and additional parameters like the IP addresses of the other PCs in our Ei4 network need to be provided at this point.

This class is used to implement all low-level connections to our network. There are three important functions in this class: NetworkInit, PollSocket and NetworkSend. NetworkInit creates an OSC server; PollSocket checks if there are OSC packets available on the specified port and processes them, executing the proper function if a container is defined. NetworkSend sends a message to a port and IP address specified in the call. This file also contains a NetworkBaseMethod which forms the basis for functions that are called when a specific OSC message reaches the Ei4 engine; the implementation of these functions is described below.
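
The intended flow, sketched with assumed signatures (not the exact WOSCLib / Ei4 API):

    network.NetworkInit(localPort);          // create the OSC server
    while (running) {
        network.PollSocket();                // dispatch any pending OSC packets
        // ... rest of the frame ...
    }
    network.NetworkSend(remoteIP, remotePort, "/ei4/lightcue", cueValue);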

The NetManager consists of a networkInterface and knows about the different computers that our engine needs to communicate with, e.g. the backtop, the SIOS board and the midi computer. Mostly this class contains helper functions that will send a specific piece of data to an OSC address space. In order for the Ei4 engine to react to incoming OSC messages you can use the function AddNetworkMethod. This function lets you specify an address space and a networkMethod to be executed if a message in this address space reaches the engine.
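
For example (the address space and handler name here are made up):

    netManager.AddNetworkMethod("/ei4/sensors/accmag", accMagMethod);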

This class contains the functions that are executed when a specific OSC message reaches the Ei4 engine. At this point this class can only call functions from the SensorFusor and SceneManager, so eventually support needs to be added to bind boost::functions to these NetworkMethods.

Midi

Midi support was copied and pasted in as RtMidi.h, RtMidi.cpp, RtError.h.

Tracking

Measuring the player's position is done using the ultrasound positioning system called Hexamite™, mounted to the ceiling of the game space and on top of the player's head. The player's orientation is measured using a SIOS-AccMag™ sensor. The data from all these sensor systems are collected, filtered and fused in the SensorFusor. All variables specified in the config file influence how incoming data is fused and sent to the Ei4 world. The prefix of a variable gives an indication of which sensor system this variable will influence, e.g. hex variables will influence data received from the Hexamite system. Finally a position and orientation is sent to the Ei4 world using different boost::signals. These signals differ in the way the sensor data is fused together, to test different interpretations of the sensor data. The signals are now (hard-coded) connected to the camera object. The working of the SensorFusor is described below.

Orientation

SIOS-AccMag™ sensor

The orientation of the player's head is determined by one SIOS-AccMag™ sensor mounted on top of the player's head. The engine receives data from these sensors through OSC messages, from a Gumstix-SIOS-Egg device mounted on the backtop. These messages need to be activated at startup and deactivated during shutdown by sending predefined start and stop OSC messages. After a while the magnetometer also needs to be recalibrated; this is done by sending the calibrateMagnetometer OSC message.

The SIOS-AccMag™ sensor consists of three accelerometers and three magnetometers, each set measuring along the X, Y and Z axes. An absolute orientation is calculated by combining the earth's magnetic field measured using the magnetometers and the gravitational force measured using the accelerometers. The combination of these two vectors results in the absolute orientation in space of the SIOS-AccMag™ sensor. The benefit of this combination of sensors is that you get an absolute (not relative) and drift-free orientation, but there are some problems.
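
The combination step can be sketched like this (illustrative math, not the SensorFusor code; Ogre types are used for convenience):

    // gravity fixes "up", the magnetic field's horizontal component fixes "north"
    Ogre::Vector3 up   = accel.normalisedCopy();   // filtered gravity direction
    Ogre::Vector3 east = mag.crossProduct(up);     // east = magnetic field x up
    east.normalise();
    Ogre::Vector3 south = east.crossProduct(up);   // completes a right-handed basis
    // x = east, y = up, z = south, so Ogre's -z "forward" points north
    Ogre::Quaternion orientation(east, up, south);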

Magnetometers are relatively slow and don't respond quickly enough to small movements of the head. A VelocityInterpolator filter (as described below) is used to compensate for this problem. Another problem is that metal or other objects that disturb the magnetic field influence the measurements taken by the magnetometer. This is why the magnetometer is mounted on top of the player's head and not directly above the eyes: the metal of the camera disturbs the magnetic field measured by the magnetometer. The use of metal and other magnetic materials is also avoided in the final setup.

The measurement you get from the accelerometers is not solely the gravitational force but the gravitational force combined with the acceleration of the moving SIOS-AccMag™ sensor itself. This acceleration is removed using a Kalman noise filter (as described below), but this process reduces the reaction speed of the final calculated orientation.

HMD tracking sensor

Inside the HMD there are also sensors that can measure the orientation of the head. These sensors are gyroscope-based. Gyroscopes react faster to head movement and give a much smoother rotation sensation. However, these sensors measure a relative orientation, so the orientation will eventually drift, resulting in a shift between the virtual and physical world. This is not acceptable for Ei4; however, the SensorFusor can still signal the orientation measured using these sensors.

In a future version these two types of sensors need to be combined in order to get a better and more realistic tracking of the head. Some attempts were made to combine both sensor systems, but the physical distance and the changing rotational offset between both sensors turned out to be a bigger problem than expected.

Position

Hexamite™

Data received from the Hexamite™ is signaled directly as a new position to the Ei4 world. An indirect Kalman filter (as described below) can be used to improve the positions measured by Hexamite, but eventually the data rate in Pakhuis-Meesteren turned out to be too low to do any sensible filtering.

Accelerometers

Accelerometers are used to calculate a relative position between Hexamite™ measurements. In the SensorFusor some code remains that tries to calculate a velocity and position by integrating the accelerometer data, but due to the relatively small XZ accelerations of a walking human head this position and acceleration is very inaccurate. We do measure the Y acceleration of the head, and if there is enough head movement we assume that the person is walking. If a person is walking we translate the player through the XZ plane using his orientation. This method assumes that players will not walk looking sideways.
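
Sketched (the threshold, names and step size are made up for illustration):

    // enough vertical head bobbing is taken to mean "walking"
    if (std::fabs(accelY) > WALK_THRESHOLD) {
        Ogre::Vector3 forward = orientation * Ogre::Vector3::NEGATIVE_UNIT_Z;
        forward.y = 0;               // constrain the step to the XZ plane
        forward.normalise();
        position += forward * WALK_SPEED * dt;
    }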

(Kalman) Filters

In the SensorFusor we use the open source Bayes++ library to implement (Kalman) filtering. This library allows creating different filters by defining a certain filter scheme; through this scheme you measure and get your filtered data by using a certain prediction and observation model. How data is observed and predicted depends on what kind of model you choose and can be tuned by changing different parameters inside the chosen model. The chosen filter scheme decides how the observation and prediction model are merged together. In Ei4 there are three different filter combinations, implemented in classes and used to filter the data from our sensor systems. All filters have a measure and a predict function to insert data into and receive filtered data from the filter. A couple of data members influence the behavior of the filter:

  • DT changes the reaction speed of the filter
  • V_GAMMA influences how much this speed differs between measurements
  • V_NOISE indicates the amount of noise these predictions have
  • OBS_noise parameterizes the noise in our measuring system

If these values are changed at run time, the reinit function needs to be called in order for the changes to take effect.
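
The usage pattern described above, sketched (the member names follow the text, but the class name and types here are made up):

    SomeNoiseFilter filter;       // one of the three filter classes
    filter.DT = 0.02;             // reaction speed
    filter.OBS_noise = 0.5;       // noise of the measuring system
    filter.reinit();              // required after changing parameters at run time

    filter.measure(rawValue);             // insert a new sensor reading
    double smoothed = filter.predict();   // read back the filtered estimate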


See: Ei4/tags/ver-1-0/ei4/Kalmanfilters.h

This is an implementation of a simple Kalman filter and is used to filter out the noise on received sensor data. The behavior of this filter can be changed by tuning the data members of this class. In Ei4 two sets of these noise filters are used: one to reduce the noise on the magnetometer data and another to split the gravity and acceleration parts of the accelerometer data.

This is an implementation of a square root covariance filter and allows using the rate of change between previously measured data to extrapolate a new position. Multiple predictions can be called between measurement calls to improve your data rate. How much the filter trusts a new measurement and the amount of change between prediction calls can be set inside this class. One set of these filters is used to improve the data rate of the magnetometer data.

This filter uses an error filter to indirectly predict the observed data. Again you can use multiple predictions between measurements, and you can change how much the filter trusts a new measurement. This filter was used to improve positioning using Hexamite™, because it performed better with the more non-linear Hexamite data, but it was not used in the final setup because of the low Hexamite data rate.

Camera and HMD

(Virtual) camera controls

See: CameraControls.h, CameraControls.cpp

This module takes care of the virtual camera position / orientation, rendering modes (mono, split-screen stereo, frame-sequential stereo) and the HMD interface. We should consider splitting HMD / rendering modes off into a separate module.

Video Input

See: LiveStereoVideoInput.h, LiveStereoVideoInput.cpp

The class name is a misnomer, because this class allows capturing stereo video either from the STOC camera or from a sequence of BMP files (named according to Small Vision System conventions). Which one is used depends on configuration and platform: on OS X we don't have SVS and can only play back image files (using Ogre to load them); on Windows we use SVS for both live video capture and file playback.

We use SVS to access the camera because the generic FireWire driver for Windows gets confused when trying to start capture. It does see camera parameters and modes, so it could be just a driver problem. We should try to access the camera on Linux.

This class also contains traces of ARToolKit and ARToolKitPlus support, which can still be enabled via the config file.

Captured / loaded video is streamed into the stereo material "stereo_backdrop". This material is used by the background object (Rectangle2D) in DemoBase.

A stereo material is any material that defines techniques with schemes named "left" and "right". When the engine is in a stereo rendering mode it constantly switches the current technique between "left" and "right". By convention (hmm, a deal between me and myself?) the "left" technique should always be the first one defined, so that in mono rendering modes all stereo materials use the same ("left") technique. It is also possible to define a default or mono technique before both left and right, if necessary.
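
In Ogre terms the convention looks roughly like this (a sketch; the technique layout is assumed from the description above):

    // "left" is defined first so mono modes fall back to it
    Ogre::MaterialPtr mat =
        Ogre::MaterialManager::getSingleton().getByName("stereo_backdrop");
    mat->getTechnique(0)->setSchemeName("left");
    mat->getTechnique(1)->setSchemeName("right");
    // per-eye switch while rendering in a stereo mode:
    Ogre::MaterialManager::getSingleton().setActiveScheme("right");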

To mix between the video and virtual geometry the following is necessary:

  • video backdrop is the first thing that is rendered per frame
  • then a depth scene, corresponding to the actual geometry, is rendered. tracking accuracy is crucial here
  • finally, the virtual geometry is rendered

The depth scene is a model of the actual space that uses a depth-only material (depth write on, depth check on, colour write off). Eventually we should be able to mark some meshes in the scene as "real" at modelling time and build the depth scene from them at load time.
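
Such a material can be set up like this (a sketch; the material name is made up):

    Ogre::MaterialPtr depthOnly = Ogre::MaterialManager::getSingleton().create(
        "DepthOnly", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
    Ogre::Pass* pass = depthOnly->getTechnique(0)->getPass(0);
    pass->setDepthWriteEnabled(true);    // fill the depth buffer...
    pass->setDepthCheckEnabled(true);
    pass->setColourWriteEnabled(false);  // ...without writing any colour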

Ei4 Gameplay

Bionts

Apart from cross-pollination all over the infrastructure, the only gameplay class is Biont, see: Biont.h, Biont.cpp

Biont handles:

  • loading the biont mesh / materials
  • assigning collision body (a sphere containing the mesh) and physics material (for triggers)
  • behaviour (applying forces to physical body, reacting on collisions)
  • switching between free / captured / penguin state (penguin state is the free state, but the biont can't be captured)
    • material
    • behaviour

In the previous version of the engine Biont was derived from a game object class which did all this plus handled object sound; this should be restored in the next version. We should provide the different abstract aspects of game objects (visual representation, physics properties, sounds, behaviours) to the scripting layer and combine them into various compounds. Scene meshes, depth meshes and animated characters should be made such compounds as well.

Utilities

  • "screen saver": ScreenSaver.h, ScreenSaver.cpp. Saves the contents of consecutive frames into a series of BMP images, and logs the position and orientation of the camera in a text file.
  • AndCombiner: AndCombiner.h, AndCombiner.cpp. Helper class to use with boost/signal (a plausible sketch follows this list). TODO: why, how, where.
  • mesh generators: MeshGenerator.h, MeshGenerator.cpp. Several helpers to generate Ogre meshes programmatically. At the moment generates grids and boxes made of grids. Was a study of procedural mesh generation with the Ogre API; grids and boxes were used for debugging the early demos.
  • misc small utilities: Utils.h, Utils.cpp. Includes the runDemo template (which instantiates a demo base and captures its exceptions) and debugging output utilities.
  • SDL support for the OS X version (copy-pasted from some other place): SDLMain.h, SDLMain.m. Basically it implements SDLMain, which makes an Ogre application a valid Aqua application.
  • vector object: VectorObject.h, VectorObject.cpp. Creates a visual object that shows the XYZ value of a vector alongside the XYZ axes. Functions exist to change the scale, position, orientation and colour of the VectorObject.
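
For AndCombiner, a plausible shape (a guess at the intent, not the actual file contents) is a boost::signal combiner that ANDs the boolean results of all connected slots:

    #include <boost/signal.hpp>

    struct AndCombinerSketch {
        typedef bool result_type;
        template <typename InputIterator>
        bool operator()(InputIterator first, InputIterator last) const {
            bool result = true;
            for (; first != last; ++first)
                result = *first && result;   // dereferencing the iterator calls the slot
            return result;
        }
    };

    // a query signal where every connected handler must agree, e.g.:
    // boost::signal<bool (), AndCombinerSketch> canCapture;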