Using compression at runtime with Unreal Engine

Overview

For a new project, my team wanted to toy around with a rewind mechanic for a 2D adventure game. To be totally honest, it's not too complex to implement, at least not for a quick prototype. But what happens when you need to keep track of more than just the player and enemies, and also track the scenery and the NPCs wandering around it? Then we have a nice problem to solve.

Braid as an inspiration

When the rewind mechanic was proposed, my teammates and I could not stop mentioning Braid as a reference; we kept coming back to it over and over again. As a developer, the idea of retracing your steps not only sounds cool and not too hard to implement, but managing the tracking data needed to go back can be a real challenge.

But Braid does not just let you go back in time: you can rewind as much as you want, a couple of seconds, minutes, maybe an hour? That is impressive if you consider that the game was released for the Xbox 360, with only 512 MB of GDDR3 RAM.

The algorithm for a rewind mechanic only requires pushing a snapshot of each object you want to rewind onto a stack on every update. Then, when you want to rewind, you pop the snapshots off the stack and use them to recreate each object's state on every update. That's it. This applies to every object you want to control, and that's where things can get out of hand.
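
In code, that loop is just a push per tick while playing and a pop per tick while rewinding. Here is a minimal sketch of the idea, using hypothetical names and tracking only the actor transform:

struct FActorSnapshot
{
	FTransform Transform; // whatever state we need to restore later
};

TArray<FActorSnapshot> SnapshotStack; // used as a stack: push/pop at the end

void RecordTick(const AActor& TrackedActor)
{
	// While playing, push one snapshot per tracked object every update.
	SnapshotStack.Push(FActorSnapshot{ TrackedActor.GetActorTransform() });
}

void RewindTick(AActor& TrackedActor)
{
	// While rewinding, pop the latest snapshot and restore it.
	if (SnapshotStack.Num() > 0)
	{
		TrackedActor.SetActorTransform(SnapshotStack.Pop().Transform);
	}
}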

To know how much memory we need, we can use the following formula:

MemoryToUse = FPS * Time * ObjectsToTrack * SizeOfSnapshot

Where:

  • FPS: The frame rate the game is running at.
  • Time: Seconds to record.
  • ObjectsToTrack: Number of objects being tracked.
  • SizeOfSnapshot: Size of the snapshot stored each frame (in bytes).

As a simple example, say we want to record one minute of gameplay with 200 objects in the scene, running at 60 FPS and tracking only each object's transform (48 bytes; you can check this in C++ with the sizeof operator). With these values, 34,560,000 bytes are required (around 34.56 MB). By today's standards that is not too much, but if we reduce it further we can keep track of more data, store more rewind time (if necessary), or record at a higher frame rate.
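
As a sanity check, the same budget can be computed in code. This is just an illustrative snippet (LogTemp and the constants simply mirror the example above); sizeof(FTransform) is 48 bytes in Unreal Engine 4's single-precision build:

void LogRewindMemoryBudget()
{
	// Mirrors the example above: 60 FPS, 60 seconds, 200 tracked objects,
	// one transform snapshot per object per frame.
	constexpr uint64 Fps = 60;
	constexpr uint64 SecondsToRecord = 60;
	constexpr uint64 ObjectsToTrack = 200;
	const uint64 SizeOfSnapshot = sizeof(FTransform); // 48 bytes in UE 4.27

	const uint64 MemoryToUse = Fps * SecondsToRecord * ObjectsToTrack * SizeOfSnapshot;
	UE_LOG(LogTemp, Log, TEXT("Snapshot: %llu bytes, rewind budget: %llu bytes"),
	       SizeOfSnapshot, MemoryToUse); // ~34,560,000 bytes for these values
}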

The API

Unreal Engine ships with a struct called FCompression which is used, as its name suggests, to compress blocks of data. The most important methods for our use are CompressMemory, UncompressMemory and CompressMemoryBound (to estimate the amount of memory required per block). The downside of using these methods is that you need to manage the memory allocations yourself: when a new block of data is ready to be compressed and stored, you need to allocate space for it, and when the rewind mechanic runs, each block that gets uncompressed needs to be freed.

As an example, here are two methods I wrote in a helper class that abstracts the compression operations I need:

bool URewindStorageHelper::StoreDataBlock()
{
	FDataCompressed TmpNewCompressedBlock;
	const FName CompressionFormatName = NAME_Zlib;
	const ECompressionFlags CompressionFlags = ECompressionFlags::COMPRESS_BiasSpeed;

	// Allocate memory to store the compressed block (padded well above the original size so there is room for intermediate data).
	const int32 TmpCompressSize = FCompression::CompressMemoryBound(CompressionFormatName,
	                                                                UncompressDataSize * 2,
	                                                                CompressionFlags);
	TmpNewCompressedBlock.CompressBlockData = FMemory::Malloc(TmpCompressSize);

	if (TmpNewCompressedBlock.CompressBlockData == nullptr)
	{
		UE_LOG(LogRewindStorageHelper, Error,
		       TEXT("FMemory::Malloc Could not allocate memory. DataSize %d. Allocated memory asked %d"),
		       UncompressDataSize, TmpCompressSize);
		return false;
	}

	// CompressedSize is an in/out parameter: it must hold the destination buffer's
	// capacity before the call and receives the actual compressed size after it.
	TmpNewCompressedBlock.CompressedSize = TmpCompressSize;
	const bool bCompressionStatus = FCompression::CompressMemory(CompressionFormatName,
	                                                             TmpNewCompressedBlock.CompressBlockData,
	                                                             TmpNewCompressedBlock.CompressedSize,
	                                                             UncompressArrayBuffer,
	                                                             UncompressDataSize,
	                                                             CompressionFlags);
	if (bCompressionStatus)
	{
		// Once compressed, trim the compressed block to its final size.
		TmpNewCompressedBlock.OriginalSize = UncompressDataSize;
		TmpNewCompressedBlock.CompressBlockData = FMemory::Realloc(TmpNewCompressedBlock.CompressBlockData,
		                                                           TmpNewCompressedBlock.CompressedSize);
		StackCompressedData.Add(TmpNewCompressedBlock);

		UE_LOG(LogRewindStorageHelper, Log,
		       TEXT("Ended compression, Original Size: %d bytes. Final Size: %d bytes (Ratio: %f (%f), Saved %d Bytes)"
		       ),
		       TmpNewCompressedBlock.OriginalSize, TmpNewCompressedBlock.CompressedSize,
		       1.0f * TmpNewCompressedBlock.OriginalSize / TmpNewCompressedBlock.CompressedSize,
		       1.0f * TmpNewCompressedBlock.CompressedSize / TmpNewCompressedBlock.OriginalSize,
		       TmpNewCompressedBlock.OriginalSize - TmpNewCompressedBlock.CompressedSize);
	}
	else
	{
		UE_LOG(LogRewindStorageHelper, Error,
		       TEXT("Failed to compress data :(, this should have not happened. DataSize %d. Allocated memory %d"),
		       UncompressDataSize, TmpCompressSize);

		if (TmpNewCompressedBlock.CompressBlockData)
		{
			FMemory::Free(TmpNewCompressedBlock.CompressBlockData);
		}
	}

	return bCompressionStatus;
}

bool URewindStorageHelper::LoadLastDataBlockStored()
{
	const int32 StackSize = StackCompressedData.Num();
	if (StackSize > 0)
	{
		const FDataCompressed TmpDataBlock = StackCompressedData.Last();

		// Decompress the last data block stored.
		const FName CompressionFormatName = NAME_Zlib;
		const bool bResult = FCompression::UncompressMemory(CompressionFormatName, UncompressArrayBuffer,
		                                                    TmpDataBlock.OriginalSize, TmpDataBlock.CompressBlockData,
		                                                    TmpDataBlock.CompressedSize);

		// Remove and Free the memory occupied by the compressed block.
		StackCompressedData.RemoveAt(StackSize - 1);
		FMemory::Free(TmpDataBlock.CompressBlockData);

		return bResult;
	}

	return false;
}

The class I wrote for the prototype also manages the compressed blocks it generates and discards old information in case we want to set a limit on how much data we're recording.
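
That trimming logic is not shown above, but a minimal sketch of the idea could look like the following (TrimToBudget is a hypothetical method, not part of the helper shown earlier; it assumes the oldest blocks sit at index 0 of StackCompressedData):

void URewindStorageHelper::TrimToBudget(int64 MaxCompressedBytes)
{
	// Sum the memory currently held by all compressed blocks.
	int64 TotalBytes = 0;
	for (const FDataCompressed& Block : StackCompressedData)
	{
		TotalBytes += Block.CompressedSize;
	}

	// Drop the oldest blocks (front of the array) until we are back under the budget,
	// freeing the heap allocation owned by each discarded block.
	while (TotalBytes > MaxCompressedBytes && StackCompressedData.Num() > 0)
	{
		FDataCompressed& Oldest = StackCompressedData[0];
		TotalBytes -= Oldest.CompressedSize;
		FMemory::Free(Oldest.CompressBlockData);
		StackCompressedData.RemoveAt(0);
	}
}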

Right now, with this compression algorithm (zlib), I found the following results:

  1. Storing completely random transform data gives compression ratios between 1.1 and 1.25 (10-20% savings).
  2. Storing completely random transform data without floating point gives ratios between 1.42 and 1.61 (30-35% savings).
  3. If the data stored is sequential (without floating point) we get ratios of up to 3.4 (almost 70% savings).

So there are potential savings in using compression, with little penalty (so far). And this is with zlib compression; later we can test the Oodle compressors in Unreal Engine 4.27 (in Unreal Engine 5, OodleData is not available… yet) to see if we can get even better ratios, even faster. The announcement can be found here and on the Oodle site.

Compression with Blueprints?

As far as I know, it's impossible to expose this kind of functionality to Blueprints in a generic way, since it would require a way to send an arbitrary struct from Blueprints to a C++ method (CompressMemory/UncompressMemory work only with void pointers) and somehow cast it back (Blueprint wildcards are not generic pointers). Since this is more of a C++ matter (memory management and generic pointers), it makes more sense for this behavior to stay in the source code realm.

Conclusions

There are not that many cases where we can use compression at runtime: outside a rewind mechanic (like in Braid) or a playback mechanic (like in Katana ZERO), it's hard for me to think of other cases where it can be really useful. But, at least for me, it relieves the pressure on how much rewind time we can record and on the size of the data blocks stored for each snapshot.

WRITTEN BY
Jesús Mastache Caballero
Game Developer