The Camera Component
Prerequisites
- Transforms in 3D Space
Article
With an understanding of 3D transforms, we're going to extend our basic transform functionality to create a camera component. A camera has many of the same properties as our Transform: a position, a look vector (derived from its rotation), and the ability to move and rotate (for example, to follow a player or a predefined track).
A camera also has several additional properties: a field of view (an angle defining how wide the "lens" is), near and far viewing distances, and an aspect ratio (derived from the viewport's width and height). Together, these properties define what is known as a view frustum.
[todo: make an image]
In a later article in our rendering section, we'll use this viewing frustum, along with the Transform and bounding volume information of each object to determine whether or not they should be rendered.
Our camera properties also help us create two matrices that we need to render things on screen: a projection matrix, which defines the shape of the frustum, and a view matrix, which tells us where our camera is in the world and how our objects should orient around it. As a simple example, if an object is at an X,Y,Z position of (5, 5, 5) and our camera is at (3, 3, 3), then our view matrix will translate (5, 5, 5) to (2, 2, 2) so that our object appears correctly in relation to our camera.
While the camera could easily have been part of the rendering system, it doesn't depend on any rendering functions, so I've found it useful to make it a core component instead. If you separate out your various engine areas into modules in the future (eg. separate DLLs for rendering, physics, audio, etc.), a camera that lives in the rendering module ends up making a lot of the others dependent on rendering. For example, 3D sounds require knowledge of the camera's location to play correctly.
Let's take a look at our CameraComponent class:
class UCameraComponent : public UComponent {
public:
    UCameraComponent();
    ~UCameraComponent() = default;

    /**
     * Get view and projection matrices
     */
    matrix4 ViewMatrix();
    matrix4 ProjectionMatrix();

    /**
     * Serialize / deserialize
     */
    void SerializeComponent(UDataRecord* record);
    void DeserializeComponent(UDataRecord* record);

public:
    // Our transform / position
    UTransform transform;

    // Field of view in degrees
    float fov;

    // Near and far plane distances
    float fnear, ffar;

    // Aspect ratio (width / height)
    float aspect;

    // Primary camera?
    bool primary;
};
Here, you'll find all of the properties we discussed: a Transform, field of view (fov), near and far plane distances, an aspect ratio, and a new boolean variable that sets whether the camera is the primary camera in the scene.
Our constructor sets some reasonable defaults:
UCameraComponent::UCameraComponent() {
    fnear = 0.1f;
    ffar = 100.0f;
    fov = 45.0f;
    aspect = 1.3333f;
    primary = true;
}
This sets the near plane to 0.1 units (generally meters) in front of us (it should never be zero), our far visible distance to 100 units, our field of view to 45 degrees, our aspect ratio to 1.3333 (eg. 800 / 600), and our primary variable to true, on the assumption that this is the only camera.
Next, we have our ViewMatrix and ProjectionMatrix functions:
matrix4 UCameraComponent::ViewMatrix() {
    vector3 position = transform.Position();
    vector3 look = transform.Look();
    vector3 up = transform.Up();

    return(glm::lookAt(position, position + look, up));
}

matrix4 UCameraComponent::ProjectionMatrix() {
    return(glm::perspective(glm::radians(fov), aspect, fnear, ffar));
}
These use the same GLM library as our Transform class. The math here isn't terribly complicated, but my goal is not to reinvent the wheel. If you'd like to read more about it, take a look at this handy article at CodingLabs.
Finally, if you've read our articles on serialization, we have functions to serialize and deserialize our camera object that save its properties into a DataRecord:
void UCameraComponent::SerializeComponent(UDataRecord* record) {
    record->Set("position", transform.Position());
    record->Set("rotation", transform.Rotation());
    record->Set("fnear", fnear);
    record->Set("ffar", ffar);
    record->Set("fov", fov);
    record->Set("aspect", aspect);
    record->Set("primary", primary);
}

void UCameraComponent::DeserializeComponent(UDataRecord* record) {
    fnear = record->Get("fnear").AsFloat();
    ffar = record->Get("ffar").AsFloat();
    fov = record->Get("fov").AsFloat();
    aspect = record->Get("aspect").AsFloat();
    primary = record->Get("primary").AsBool();

    transform.Position(record->Get("position").AsVector3());
    transform.Rotation(record->Get("rotation").AsQuaternion());
}
That's it! With the addition of those properties, we have a CameraComponent that we can attach to Entity objects. One of those will be our primary scene camera, but you may have multiple cameras for cut scenes or replays. In addition, this class can be extended to implement different types of cameras - for example, a follow camera that attaches itself to another Entity, or an arcball camera that orbits on a sphere around the object it is looking at, such as the player.
We'll see more on the implementation and usage of our camera component in other areas of our engine.