Memory Management
Question submitted by (22 August 2000)




I read your Q&A on flipcode.com and would like to thank you for all the advice you give. I have a question about memory management. How important is memory management for computer games on modern operating systems? Do you have any tips or advice on how to design a memory manager and what it should cover?
 

 
Nope. I have nothing at all to say on the topic.

Well, maybe a little...

Before I begin, I want to point out that there is a tool available (for a price) that you just can't beat. It's called BoundsChecker, and it's available from CompuWare/NuMega. If you're in the position to purchase this software (it's quite pricey for the hobbyist), then I definitely recommend you do so. Do not pass go. Do not collect $200. There are less than a handful of products I'll stand up and shout for, and this is one of them.

But this isn't an advertisement. And even if you can afford this tool, I recommend you still implement your own memory manager because there are many things you may need that just aren't available in any generic tool.

Let me tell you a story about a little product called Fly!.

When I started the Fly! project, the first thing they asked me to do was speed it up. There were over a million lines of code (code that I didn't write and wasn't familiar with), so I looked for the easy path and just rewrote Microsoft's __chop() function, which got us a 25% performance boost (being a flight simulator, Fly! is very floating-point intensive). Actually, I had to do a lot more, but that was the biggest improvement by far.

My next task was to reduce the memory usage. We were using 80MB of memory and needed to be down to 32MB. With allocations spread throughout a million-plus lines of code, I needed a tool to help me track down our memory consumption. A tool that worked at the level we needed didn't exist, so I wrote a memory manager. I had written a few in the past and knew it was a good idea anyway. I can honestly say that after the final rush to release Fly!, the only crash bug was related to a USB issue (a USB device was erroneously reporting itself as a joystick). There weren't any memory corruption issues. How many other projects of this magnitude can say that? I'm not boasting; I'm trying to relate how important the use of an error-handling memory manager is.

The basic idea was to override global new and delete. More specifically, we needed to take over new, new[], delete, delete[], malloc, calloc, realloc and free. This is pretty tricky when the application uses MFC (because of its DEBUG_NEW macros), but it can still be done. Whenever a new allocation is performed (new, new[], malloc, realloc, calloc), the memory manager keeps track of the __FILE__ and __LINE__ macros so we always know who owns the allocated memory. This, of course, requires the use of macros.
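To make the idea concrete, here is a minimal sketch of that macro trick. This is not Fly!'s actual manager; the logging is just printf, error handling is omitted, and the only point is to show how __FILE__ and __LINE__ reach the allocator:

// A minimal sketch of the macro trick, not Fly!'s actual manager.
#include <cstdio>
#include <cstdlib>
#include <new>

// Overloads that know who asked for the memory.
void* operator new(std::size_t size, const char* file, int line)
{
    std::printf("new    %zu bytes  (%s:%d)\n", size, file, line);
    return std::malloc(size);
}
void* operator new[](std::size_t size, const char* file, int line)
{
    std::printf("new[]  %zu bytes  (%s:%d)\n", size, file, line);
    return std::malloc(size);
}

// Matching placement deletes, required in case a constructor throws.
void operator delete(void* p, const char*, int) noexcept   { std::free(p); }
void operator delete[](void* p, const char*, int) noexcept { std::free(p); }

// The usual global new/delete are replaced too, so every block comes from the
// same heap (plain malloc/free here, for simplicity).
void* operator new(std::size_t size)      { return std::malloc(size); }
void* operator new[](std::size_t size)    { return std::malloc(size); }
void  operator delete(void* p) noexcept   { std::free(p); }
void  operator delete[](void* p) noexcept { std::free(p); }

// The macro: every 'new' after this point becomes 'new(__FILE__, __LINE__)'.
// This is the same idea as MFC's DEBUG_NEW, which is why the two collide.
#define new new(__FILE__, __LINE__)

int main()
{
    int*  single = new int(42);       // logged with this file and line
    char* buffer = new char[256];     // array form goes through new[]
    delete   single;
    delete[] buffer;
}

Notice that plain delete carries no file or line information, which is why everything else has to be looked up in the manager's own allocation table (more on that below).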

With each allocation, extra memory is also allocated at the head and tail of the allocated block. This allows you to check for memory overruns and underruns. By default, we used a size of 8 bytes. So if 100 bytes were requested, we would actually allocate 116 bytes (8 before and 8 after). These 8 bytes would be filled with a specific value so that we could later check that these values remained untouched. For example, if the user requested 100 bytes, we would arrange the 116 bytes like so:

[xxxxxxxx][user-requested memory of 100 bytes][xxxxxxxx]

You'll want the sizes of the head & tail to be variable so you can test your application pretty heavily. Periodically, increase the size of the head/tail to something like 1024 bytes. This increases your memory usage quite a bit, but also tests your application much more thoroughly.
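As a rough illustration, here is one way the guard padding might be laid out, assuming an 8-byte pad on each side. The names (guardedAlloc, guardsIntact) and the header layout are mine, not Fly!'s, and failure handling is omitted:

// A sketch of the guard padding, assuming an 8-byte pad on each side.
#include <cstdlib>
#include <cstring>
#include <cstddef>

constexpr std::size_t kPadBytes = 8;     // periodically bump this to 1024 for heavier testing
constexpr unsigned char kPadFill = 0xFD; // an arbitrary sentinel byte, the "x" in the diagram

struct Header { std::size_t userSize; }; // stored up front so we can find the tail pad later

void* guardedAlloc(std::size_t userSize)
{
    // layout: [Header][front pad][user block][back pad]
    std::size_t total = sizeof(Header) + kPadBytes + userSize + kPadBytes;
    unsigned char* raw = static_cast<unsigned char*>(std::malloc(total));
    reinterpret_cast<Header*>(raw)->userSize = userSize;
    std::memset(raw + sizeof(Header), kPadFill, kPadBytes);                        // front pad
    std::memset(raw + sizeof(Header) + kPadBytes + userSize, kPadFill, kPadBytes); // back pad
    return raw + sizeof(Header) + kPadBytes;                                       // hand back the middle
}

// Returns true if neither pad has been disturbed (no underrun, no overrun).
bool guardsIntact(void* userPtr)
{
    unsigned char* user = static_cast<unsigned char*>(userPtr);
    unsigned char* raw  = user - kPadBytes - sizeof(Header);
    std::size_t userSize = reinterpret_cast<Header*>(raw)->userSize;
    unsigned char* frontPad = user - kPadBytes;
    unsigned char* backPad  = user + userSize;
    for (std::size_t i = 0; i < kPadBytes; ++i) {
        if (frontPad[i] != kPadFill) return false; // underrun
        if (backPad[i]  != kPadFill) return false; // overrun
    }
    return true;
}

void guardedFree(void* userPtr)
{
    std::free(static_cast<unsigned char*>(userPtr) - kPadBytes - sizeof(Header));
}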

Before we would return the newly allocated pointer to the user, we would fill it with repeating copies of some very large 4-byte value. This is important, because if they try to use this memory without initializing it, it will have a higher probability of causing a crash. I chose 0xDEADBEEF because it's something you're likely to notice in a debugger. Very often the debugger would show us a pointer with the contents 0xDEADBEEF and we would immediately know what the problem was (we were using uninitialized memory.)

Conversely, when releasing memory back to the system, we would fill that memory with a different value just before we released it. I chose 0xBAADF00D because, again, it's a large number and something you're likely to notice. Make sure to use a different value from the initialization value, because you'll want to be able to tell the difference.
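A sketch of the fill helper follows. It writes the 4-byte pattern in native word order, so a debugger's 32-bit view of the block shows 0xDEADBEEF (or 0xBAADF00D) exactly as described; the function name is my own:

// A sketch of the pattern-fill helper.
#include <cstdint>
#include <cstring>
#include <cstddef>

constexpr std::uint32_t kUnusedPattern   = 0xDEADBEEF; // stamped into freshly allocated blocks
constexpr std::uint32_t kReleasedPattern = 0xBAADF00D; // stamped into blocks just before release

void fillPattern(void* block, std::size_t bytes, std::uint32_t pattern)
{
    unsigned char* p = static_cast<unsigned char*>(block);
    std::size_t i = 0;
    for (; i + sizeof(pattern) <= bytes; i += sizeof(pattern))
        std::memcpy(p + i, &pattern, sizeof(pattern));       // whole 4-byte words
    for (; i < bytes; ++i)
        p[i] = static_cast<unsigned char>(pattern & 0xFF);   // any trailing odd bytes
}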

During allocation, we would also keep track of the type of allocation (was it from new or new[]? Or a variant of malloc?). This is important, because if a block of memory was allocated with new, but released using delete[] (or vice versa) then problems can arise from this mismatch. So during every deallocation, we would validate that the memory was being released with the proper call. I can't tell you how many bugs we found with this one. Fly! is a very complex product, and with all that complexity comes ambiguity. A single memory block might be conditionally allocated from one of many classes. Each of those classes may have another purpose, so they might each act differently. At some point, one of those classes would require a modification. For example, the allocation might be done with new[], but later modifications require that the memory be reallocated, so the scheme is switched to use realloc and free. Yet at the same time, this class is still used for allocating memory for another object, which might still expect to delete[] it.
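The pairing rules themselves are simple to encode; something along these lines, with enum names of my own choosing:

// A sketch of the pairing rules: every release call must match the way the
// block was allocated.
enum class AllocKind { New, NewArray, Malloc, Calloc, Realloc };
enum class FreeKind  { Delete, DeleteArray, Free };

bool pairingIsValid(AllocKind a, FreeKind f)
{
    switch (a) {
        case AllocKind::New:      return f == FreeKind::Delete;
        case AllocKind::NewArray: return f == FreeKind::DeleteArray;
        case AllocKind::Malloc:
        case AllocKind::Calloc:
        case AllocKind::Realloc:  return f == FreeKind::Free;
    }
    return false;
}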

Tracking down all the allocation/deallocation mismatches proved to be a bit troublesome at times, but much less troublesome than a crash on a consumer's computer. Every once in a while, somebody would knock on my door and tell me there was a bug in my memory manager. I would grin, get up, and follow them to their office, where they would proceed to show me an instance of our application in the debugger. In every case, they were wrong and the memory manager was right. Again, I'm not boasting; I'm trying to drive home the point as to exactly how obscure some of these bugs really are. These programmers are very talented, and even when they were told WHERE the bug was and WHAT it was, they were still fooled. So be wary. Can you imagine how hard it would have been to try to track that bug down when it actually caused a crash? I'd rather listen to fingernails on a chalkboard than think about that one.

Deallocations would always check to make sure the memory address given matches an existing memory address that was actually allocated. This means that allocations need to keep track of all this data (owner, memory address, size, allocation type, etc.) in a table. Once the memory was properly destroyed (0xBAADF00D) and released to the system, the table entry would be removed.
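Here is a sketch of that table, keyed by address. AllocRecord and g_allocations are my names, and a production manager would use its own hand-rolled table rather than std::unordered_map so that its bookkeeping doesn't recurse through the overridden new:

// A sketch of the allocation table, keyed by address.
#include <cstddef>
#include <cstdio>
#include <string>
#include <unordered_map>

struct AllocRecord {
    std::string ownerFile;  // from __FILE__
    int         ownerLine;  // from __LINE__
    std::size_t size;       // bytes the user asked for
    int         allocKind;  // new, new[], malloc, ...
};

std::unordered_map<void*, AllocRecord> g_allocations;

void recordAllocation(void* p, const char* file, int line, std::size_t size, int kind)
{
    g_allocations[p] = AllocRecord{file, line, size, kind};
}

// Returns false (and complains) if the address was never allocated by us, which
// also catches double frees and frees of garbage pointers.
bool recordDeallocation(void* p)
{
    auto it = g_allocations.find(p);
    if (it == g_allocations.end()) {
        std::printf("ERROR: releasing unknown address %p\n", p);
        return false;
    }
    g_allocations.erase(it);
    return true;
}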

And of course, we can't forget about memory leaks. When the application shut down, the memory manager would report all the blocks of memory that were never released to the system.
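Building on the table sketch above, the leak report is little more than a walk over whatever is still registered at shutdown:

// A sketch of the shutdown leak report.
void reportLeaks()
{
    if (g_allocations.empty()) {
        std::printf("No memory leaks detected.\n");
        return;
    }
    for (const auto& kv : g_allocations) {
        const AllocRecord& rec = kv.second;
        std::printf("LEAK: %zu bytes at %p, allocated at %s:%d\n",
                    rec.size, kv.first, rec.ownerFile.c_str(), rec.ownerLine);
    }
}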

Let's talk about hiding bugs for a moment. The purpose of the 0xDEADBEEF and 0xBAADF00D values was to make as many bugs as possible appear during the development process so they could be dealt with and fixed. But when a product gets released, your priority shifts from "fixing" to "prevention". Because of this, we now want to hide bugs as best as we can. So, when new memory is allocated during a release build, we fill it with zeros, and when memory is released, we leave it alone. These are the safest things we can do to help hide any bugs we might not have caught during development. Granted, this may make a crash bug in the field a little harder to track down, but I would much rather have a hard time tracking down one really rare crash bug than an easy time tracking down ten common crash bugs that are causing the consumer to return the product.
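One way to express that policy switch, building on the fillPattern helper sketched earlier and using the standard NDEBUG macro to distinguish release builds; the hook names onAllocate and onRelease are hypothetical:

// A sketch of the build-dependent fill policy.
#include <cstring>
#include <cstddef>

void onAllocate(void* block, std::size_t bytes)
{
#ifdef NDEBUG
    std::memset(block, 0, bytes);             // release: zero-fill to help hide use-before-init bugs
#else
    fillPattern(block, bytes, 0xDEADBEEF);    // debug: make them show up loudly
#endif
}

void onRelease(void* block, std::size_t bytes)
{
#ifndef NDEBUG
    fillPattern(block, bytes, 0xBAADF00D);    // debug only; release builds leave the memory alone
#endif
}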

Until now, I've talked about a memory manager for the purpose of stability. But my task was to reduce memory usage. This is where the real power of having your own memory manager comes in handy. Since I wrote it, I can use it to do some very interesting things within our application. Having control over every allocated byte is a very powerful tool and I'm about to describe how our little memory manager was able to help us tremendously in reducing the memory used by Fly!.

To start with, I modified the memory manager to keep track of memory usage: just a couple of numbers recording how much memory was in use at the moment and the peak memory usage during execution. This only gave us the bad news, but we now knew we were dealing with accurate data. The next step was to figure out where all the memory was going, so I added a logger to the memory manager to list each and every allocation in use at any point in time.
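The counters really are just a couple of numbers, updated from the allocation and release paths; a sketch, with names of my own:

// A sketch of the usage counters.
#include <cstddef>

static std::size_t g_currentBytes = 0;
static std::size_t g_peakBytes    = 0;

void accountAllocation(std::size_t bytes)
{
    g_currentBytes += bytes;
    if (g_currentBytes > g_peakBytes)
        g_peakBytes = g_currentBytes;   // remember the high-water mark
}

void accountRelease(std::size_t bytes)
{
    g_currentBytes -= bytes;
}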

Sorting the log by allocation size gave us a lot of very useful information. 80% of all memory was allocated in about 12 allocation calls, yet there were 50,000+ individual allocations, which meant that the majority of our memory was in little pieces. So I sorted the data by owner and started to find out who was allocating all the little pieces. Usually, there would be a few thousand allocations by the same owner. Interrogating the owner revealed a loop of allocations, so I simply preallocated the memory before the loop. This saved us a little bit of memory, because your typical heap manager uses 8 extra bytes per allocation for heap management. Although 50,000 allocations only end up costing about 400,000 bytes of heap management (give or take, depending on the heap manager), we were able to reduce memory fragmentation quite a lot and also clean up our memory usage logs so we could better understand what was going on. We also saw a slight speed improvement in places because we weren't constantly allocating and reallocating all those little blocks of RAM.
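For illustration, here is the shape of that "group by owner" view of the log, again building on the g_allocations table from the earlier sketch; the report format is my own:

// A sketch of the per-owner report: total bytes and allocation count per
// __FILE__:__LINE__, sorted with the biggest consumers first.
#include <algorithm>
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

void reportByOwner()
{
    // owner ("file:line") -> { total bytes, allocation count }
    std::map<std::string, std::pair<std::size_t, std::size_t>> byOwner;
    for (const auto& kv : g_allocations) {
        const AllocRecord& rec = kv.second;
        auto& totals = byOwner[rec.ownerFile + ":" + std::to_string(rec.ownerLine)];
        totals.first  += rec.size;
        totals.second += 1;
    }

    // Sort owners by total bytes, biggest first.
    std::vector<std::pair<std::string, std::pair<std::size_t, std::size_t>>>
        sorted(byOwner.begin(), byOwner.end());
    std::sort(sorted.begin(), sorted.end(),
              [](const auto& a, const auto& b) { return a.second.first > b.second.first; });

    for (const auto& entry : sorted)
        std::printf("%10zu bytes in %6zu allocations  %s\n",
                    entry.second.first, entry.second.second, entry.first.c_str());
}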

With a more manageable memory log, we were able to find a few places where old legacy code was still allocating memory that it no longer needed. Now we're saving memory. Break out the champagne!

By simply looking at the log and comparing the owner to the memory allocated, you can learn a lot. For example, if you see that your texture manager is allocating 16MB, you might not think twice. But if you see your file manager allocating 10MB, you're going to wonder. Actually, you're going to rush to your editor to figure out why the allocation for a file manager is so large. As it turns out, the file manager needed that memory, because it was storing all of the filenames it needed. The filenames were in static-length strings, so there might be SOME savings here, but not much.

We repeated this process over and over and ended up around 52MB. We still had 20MB to go if we were to hit our 32MB mark.

It was time for some more memory-manager magic.

Remember 0xDEADBEEF (our initialization value for all allocated RAM)? Well, I added a small function to the memory manager that would interrogate all memory blocks and look for 0xDEADBEEF. Other than a few small exceptions, any block containing 0xDEADBEEF meant that the block wasn't being entirely used. So I generated yet another report that showed the percentage of unused RAM within each block, and sorted it by that percent. Holy cow! We were allocating TONS of memory that we weren't even using! Remember that file manager? It was the biggest culprit. Our new report showed us just how much memory it was wasting, and proved to us that we needed to rewrite that code to use the memory more wisely.
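A sketch of that scan, building on the earlier table sketch and assuming each block was stamped with 0xDEADBEEF in native word order at allocation time:

// A sketch of the "how much of this block is still 0xDEADBEEF?" report.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <cstddef>

double percentUnused(const void* block, std::size_t bytes)
{
    const std::uint32_t pattern = 0xDEADBEEF;
    const unsigned char* p = static_cast<const unsigned char*>(block);
    std::size_t untouched = 0;
    for (std::size_t i = 0; i + sizeof(pattern) <= bytes; i += sizeof(pattern)) {
        std::uint32_t word;
        std::memcpy(&word, p + i, sizeof(word));
        if (word == pattern)
            untouched += sizeof(pattern);   // this word was never written to
    }
    return bytes ? (100.0 * untouched) / bytes : 0.0;
}

void reportUnusedMemory()
{
    for (const auto& kv : g_allocations) {
        const AllocRecord& rec = kv.second;
        std::printf("%6.1f%% unused  %8zu bytes  %s:%d\n",
                    percentUnused(kv.first, rec.size), rec.size,
                    rec.ownerFile.c_str(), rec.ownerLine);
    }
}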

We repeated this process a few more times and finally made our way down to 32MB.

An impossible task made possible by roughly 600 lines of memory management code.

Other memory management tools (such as BoundsChecker) can do a lot of extra stuff; stuff our memory manager could never hope to do (like validating each and every read from and write to memory), but having your own memory manager is never a bad thing. In our case, our memory manager was the only tool that could have given us the information we needed. And it only took half a day to write.

MSVC has its own debug memory manager (the DEBUG_NEW issue mentioned earlier), which is quite good. There's a 1997 article about it titled "Introducing the Bugslayer: Annihilating Bugs in an Application Near You"; you can find it on your MSDN CDs or read it online. But as you may have already guessed, I still recommend adding a layer of protection on top of it. It certainly can't hurt to have your own memory manager, and you'll have a lot more freedom in dealing with memory allocation and tracking.



Response provided by Paul Nettle
This article was originally an entry in flipCode's Ask Midnight, a Question and Answer column with Paul Nettle that's no longer active.


 

Copyright 1999-2008 (C) FLIPCODE.COM and/or the original content author(s). All rights reserved.