NeoEngine
Platform independent Open Source 3D game engine
Tutorial 4 - Room & Camera
Last update: 22/Jan/2004

This tutorial covers how to use rooms and cameras. Code that is new since the previous tutorial is shown in bold.

#include <neoengine/core.h>
#include <neoengine/render.h>
#include <neoengine/logstream.h>
#include <neoengine/input.h>
#include <neoengine/renderprimitive.h>
#include <neoengine/vertex.h>
#include <neoengine/polygon.h>
#include <neoengine/material.h>
#include <neoengine/room.h>
#include <neoengine/camera.h>

#ifdef BUILD_STATIC
#ifdef WIN32
#include <neodevd3d9/link.h>
#endif
#include <neodevopengl/link.h>
#include <neoicpng/link.h>
#include <neoabt/link.h>
#endif

using namespace std;
using namespace NeoEngine;
room.h defines the room base classes. A room is an abstraction of a portion of space that manages both static and dynamic geometry through a space partitioning algorithm. The Room base class is implemented in room modules; currently available are a BSP (Binary Space Partition) implementation and an ABT (Adaptive Binary Tree) implementation. The room manager class loads implementation modules on demand and creates rooms.
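To make the space partitioning idea concrete, here is a toy sketch (not NeoEngine's actual ABT code, just the concept): pick the axis with the largest spread, split the point set at the median, and recurse. Real room modules partition polygons and handle splitting planes; this only illustrates the adaptive binary split.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Point { float v[3]; };

struct Node
{
    std::vector<Point> pts;              // filled only in leaves
    Node *left = nullptr, *right = nullptr;
};

// Recursively split until a node holds at most maxLeaf points.
// (Nodes are leaked; fine for a demo.)
Node *Build( std::vector<Point> pts, std::size_t maxLeaf )
{
    Node *n = new Node;
    if( pts.size() <= maxLeaf ) { n->pts = pts; return n; }

    // Choose the axis with the largest extent -- the "adaptive" part
    int axis = 0; float best = -1.0f;
    for( int a = 0; a < 3; ++a )
    {
        float lo = pts[0].v[a], hi = lo;
        for( const Point &p : pts ) { lo = std::min( lo, p.v[a] ); hi = std::max( hi, p.v[a] ); }
        if( hi - lo > best ) { best = hi - lo; axis = a; }
    }

    // Median split keeps the tree balanced
    std::size_t mid = pts.size() / 2;
    std::nth_element( pts.begin(), pts.begin() + mid, pts.end(),
                      [axis]( const Point &a, const Point &b ) { return a.v[axis] < b.v[axis]; } );

    n->left  = Build( std::vector<Point>( pts.begin(), pts.begin() + mid ), maxLeaf );
    n->right = Build( std::vector<Point>( pts.begin() + mid, pts.end() ), maxLeaf );
    return n;
}

int Depth( const Node *n ) { return n->left ? 1 + std::max( Depth( n->left ), Depth( n->right ) ) : 1; }
```

With 8 points and a leaf capacity of 1, the median split yields a balanced tree of depth 4.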

camera.h defines the base camera class you can use to implement camera control in your game. The camera is a scene node: it has a rotation and a translation in 3D space, and methods to modify them. It can also generate a view frustum that the engine uses for culling (mostly in the rooms' space partitioning implementations).

For static builds we also include the link header for the room implementation module we are using, the ABT room.


class InputListener : public InputEntity
{
  public:

                     InputListener( InputGroup *pkGroup ) : InputEntity( pkGroup ) {}

    virtual void     Input( const InputEvent *pkEvent );
};

RenderDevice      *g_pkRenderDevice    = 0;
RenderCaps         g_kRenderCaps;
LogFileSink        g_kLogFile( "neoengine.log" );
InputGroup        *g_pkInputGroup      = 0;
InputListener     *g_pkInputListener   = 0;
bool               g_bRun              = true;
Room              *g_pkRoom            = 0;
Camera            *g_pkCamera          = 0;
g_pkRoom is our room, of the type implemented in the ABT module. All room implementations derive from the base Room class, so you don't need to know which type the actual implementation uses. g_pkCamera is our camera.


int main( int argc, char **argv )
{
  neolog.SetLogThreshold( DEBUG );
  neolog.AttachSink( Core::Get()->GetStdoutSink() );
  neolog.AttachSink( &g_kLogFile );

  Core::Get()->Initialize( argc, argv );

  Core::Get()->GetFileManager()->AddPackage( "../common/data" );

  if( !( g_pkRenderDevice = Core::Get()->CreateRenderDevice( "opengl" ) ) )
    goto SHUTDOWN;

  if( !g_pkRenderDevice->Open( RenderWindow( "Room & Camera", g_kRenderCaps, RenderResolution( 640, 480, 16 ) ) ) )
    goto SHUTDOWN;

  g_kRenderCaps     = g_pkRenderDevice->GetCaps();

  g_pkInputGroup    = new InputGroup;
  g_pkInputListener = new InputListener( g_pkInputGroup );

  g_pkRenderDevice->LoadCodec( "png" );

  g_pkRoom = Core::Get()->GetRoomManager()->CreateRoom( "abt" );

  {
    int x,z;

    VertexBufferPtr pkVBuffer = g_pkRenderDevice->CreateVertexBuffer( Buffer::NORENDER, 100, &NormalTexVertex::s_kDecl );

    pkVBuffer->Lock( Buffer::WRITE );

    NormalTexVertex *pkVertex = (NormalTexVertex*)pkVBuffer->GetVertex();

    for( z = 0; z < 10; ++z )
    {
      for( x = 0; x < 10; ++x, ++pkVertex )
      {
        pkVertex->m_kPosition.Set( -100.0f + float( x ) * 20.0f, 0.0f, 100.0f - float( z ) * 20.0f );
        pkVertex->m_kNormal.Set( 0.0f, 1.0f, 0.0f );

        pkVertex->m_afTexCoord[0] = ( x % 2 ) ? 1.0f : 0.0f;
        pkVertex->m_afTexCoord[1] = ( z % 2 ) ? 1.0f : 0.0f;
      }
    }

    pkVBuffer->Unlock();

    PolygonBufferPtr pkPBuffer = g_pkRenderDevice->CreatePolygonBuffer( Buffer::NORENDER, 162 );

    pkPBuffer->Lock( Buffer::WRITE );

    Polygon *pkPolygon = pkPBuffer->GetPolygon();

    for( z = 0; z < 9; ++z )
    {
      for( x = 0; x < 9; ++x )
      {
        pkPolygon->v[0] = x     +   z       * 10;
        pkPolygon->v[1] = x + 1 +   z       * 10;
        pkPolygon->v[2] = x     + ( z + 1 ) * 10;
        ++pkPolygon;

        pkPolygon->v[0] = x + 1 +   z       * 10;
        pkPolygon->v[1] = x + 1 + ( z + 1 ) * 10;
        pkPolygon->v[2] = x     + ( z + 1 ) * 10;
        ++pkPolygon;
      }
    }

    pkPBuffer->Unlock();

    MaterialPtr pkMat = new Material( "floor", 0 );

    pkMat->m_pkTexture = g_pkRenderDevice->LoadTexture( "floor" );

    g_pkRoom->AddGeometry( pkPBuffer, pkVBuffer, pkMat );
  }
First we create a room through the room manager by passing the name of the room implementation (here "abt"; for a BSP room you would pass "bsp"). The room manager loads the module if it is not already loaded and creates a room of that implementation.

We then create and set up one vertex buffer and one polygon buffer holding the static geometry we want in our room, create a material, and load the texture. Finally we add the geometry to the room (which the implementation then partitions according to the algorithm it uses).
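The buffer sizes above follow from the grid layout: a 10x10 vertex grid gives 100 vertices, and each of the 9x9 cells is split into two triangles, giving 162 polygons. A quick standalone check of that arithmetic and the row-major index pattern used in the loops:

```cpp
#include <cstddef>

// For an N x N vertex grid, each of the (N-1) x (N-1) cells is split
// into two triangles, matching the loops in the tutorial code.
std::size_t GridVertexCount( std::size_t n )   { return n * n; }
std::size_t GridTriangleCount( std::size_t n ) { return ( n - 1 ) * ( n - 1 ) * 2; }

// Index of the vertex at grid position (x, z), row-major as above
std::size_t VertexIndex( std::size_t x, std::size_t z, std::size_t n ) { return x + z * n; }
```

For n = 10 these give exactly the 100 vertices and 162 polygons requested from CreateVertexBuffer and CreatePolygonBuffer.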


  g_pkCamera = new Camera( "main cam" );

  g_pkCamera->SetRoom( g_pkRoom );
  g_pkCamera->Translate( Vector3d( 0.0f, 2.0f, 0.0f ) );
Now we create the camera object. To use the camera's Render method to render the camera view, we must tell the camera which room it is in with a call to SetRoom. We then position the camera in 3D space by translating it 2 units upwards (the +y direction), so that we can see the floor at y position 0. All scene nodes (and hence cameras too) are positioned at the origin when created.


  while( g_bRun )
  {
    Core::Get()->GetInputManager()->Process();

    g_pkRenderDevice->Clear( RenderDevice::COLORBUFFER | RenderDevice::ZBUFFER | RenderDevice::STENCILBUFFER, Color::BLACK, 1.0f, 0 );

    g_pkRenderDevice->Begin( g_pkCamera->GetViewMatrix() );
    {
      g_pkCamera->Render();
    }

    g_pkRenderDevice->End();

    g_pkRenderDevice->Flip();
  }
Herein lies the beauty of using a camera: it completely abstracts the render loop. You first call GetViewMatrix to get the view matrix for the camera's current translation and rotation, then simply call Render to have the camera render the room (it does this by generating a view frustum from the current rotation and translation, then calling the appropriate method on the room object to perform the rendering, using the frustum for culling).
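The frustum culling the camera and room perform can be sketched with a standard test (again, a generic illustration, not NeoEngine's internal code): the frustum is six inward-facing planes, and a bounding sphere that lies fully behind any one plane cannot be visible.

```cpp
// A frustum is six inward-facing planes (n.p + d >= 0 means "inside").
// Culling a bounding sphere: if it is fully behind any plane, skip it.
struct Plane { float nx, ny, nz, d; };   // unit normal + distance

bool SphereOutside( const Plane *planes, int count,
                    float cx, float cy, float cz, float radius )
{
    for( int i = 0; i < count; ++i )
    {
        float dist = planes[i].nx * cx + planes[i].ny * cy + planes[i].nz * cz + planes[i].d;
        if( dist < -radius )    // fully behind this plane -> culled
            return true;
    }
    return false;               // possibly visible
}
```

The room's space partitioning makes this cheap: whole subtrees of geometry whose bounding volumes fail the test are skipped without examining their contents.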


  SHUTDOWN:

  delete g_pkCamera;
  delete g_pkRoom;

  Core::Get()->DeleteRenderDevice( g_pkRenderDevice );

  delete g_pkInputListener;
  delete g_pkInputGroup;

  Core::Get()->Shutdown();

  return 0;
}
Delete the camera and room objects.


void InputListener::Input( const InputEvent *pkEvent )
{
  if( ( ( pkEvent->m_iType == IE_KEYDOWN ) && ( pkEvent->m_aArgs[0].m_iData == KC_ESCAPE ) ) ||
      ( ( pkEvent->m_iType == IE_SYSEVENT ) && ( pkEvent->m_aArgs[0].m_iData == IE_SYSEVENT_KILL ) ) )
    g_bRun = false;
}