Girl Geek
Lead Developer, Stardock Entertainment
Memory fragments
Published on May 1, 2006 by CariElf in GalCiv Journals
Our top priority since 1.1 came out has been resolving the memory and performance issues that people have been having. BoundsChecker didn't come up with any significant memory leaks, so that left us with checking the change logs to see what we might have done to cause the issues. We were obviously doing something wacky, because people with 4 GB page files were still getting out of virtual memory errors.
One of the changes we made was to the way that the shipcfg files were parsed. The shipcfg files are really just ini files, which I think Joe picked for the shipcfg format because ini files are supposed to be fast to read and write. They have two main disadvantages. The first is that you can't have an ini file greater than 64 KB on Windows ME and Windows 98; the system won't be able to read anything past the 64 KB mark. The second is that you have to know the size of the longest section name and the longest key name, or else know all the section and key names before you read the data from the file. If you don't know the section and key names, you have to call APIs to get all the section and key names in the file, and you need a buffer to store them.
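For illustration, here is roughly what that buffer juggling looks like if the parsing goes through the standard Win32 profile APIs (I'm assuming that API here; the function and buffer names below are made up for the example, not the actual GC2 code). GetPrivateProfileSectionNames copies every section name into one caller-supplied buffer, separated by nulls, so you have to guess a size up front or the list gets silently truncated:

    #include <windows.h>
    #include <cstring>
    #include <string>
    #include <vector>

    // Hypothetical helper: read all section names from an ini-style file.
    std::vector<std::string> ReadSectionNames(const char* path)
    {
        const DWORD bufSize = 4096;            // a guess -- too small and names get cut off
        std::vector<char> buf(bufSize);
        DWORD copied = GetPrivateProfileSectionNamesA(&buf[0], bufSize, path);

        // The names come back as a null-separated list ending in a double null.
        std::vector<std::string> names;
        for (const char* p = &buf[0]; *p != '\0'; p += strlen(p) + 1)
            names.push_back(p);

        // copied == bufSize - 2 is the API's way of saying the buffer was too small.
        (void)copied;
        return names;
    }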
There are two ways to allocate memory: static and dynamic. Static means that you always allocate the same amount of memory, no matter how much of it you actually need to use. If you allocate too much, you're taking up memory you aren't using; if you don't allocate enough, you run into problems because there isn't room for your data. Dynamic memory is allocated as you need it, when you need it. Since you're in charge of dynamic memory, you have to remember to deallocate (release) it. When you forget to deallocate it, it becomes a memory leak: as far as your program and the OS are concerned, that memory is still in use and is unavailable to be allocated to anything else. Another bad thing about dynamic allocation is that you can fragment memory.
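In C++ terms, the difference looks something like this (a made-up example, not GC2 code):

    #include <cstddef>

    // Static allocation: the size is fixed when the program is built,
    // whether or not all of it is needed -- and it can turn out to be too small.
    char g_staticBuffer[8192];

    // Dynamic allocation: sized at run time, but the caller has to release it.
    void ParseSomething(size_t fileSize)
    {
        char* dynamicBuffer = new char[fileSize];
        // ... use the buffer ...
        delete[] dynamicBuffer;   // forgetting this line is a memory leak
    }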
I'm assuming that most of you have seen Windows' disk defrag program. If you haven't, you might want to go to Start->All Programs->Accessories->System Tools and run it, because your hard drive probably needs it.
It will also give you a visual indication of what I'm talking about in this paragraph. When you create or copy files to your hard drive, the files are written into consecutive blocks on the disk. If you delete one or more of those files, that leaves a hole. Depending on how big the hole is, it might get filled with other files, but if it's too small, it's just wasted. Disk defrag goes through your hard drive and tries to move stuff around so that it is stored more efficiently, with bigger blocks of free space. Fragmentation can also happen in system memory, aka RAM. So even if you deallocate dynamic memory, there's a chance that the memory you released won't be used again by your program, so the program keeps using more memory. When the program runs out of available RAM, it will start using virtual memory. Virtual memory is really just a section of the hard drive that is set aside for the operating system to use when it runs out of RAM.
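Here's a contrived little C++ example of the RAM side of that (not GC2 code, just an illustration of the pattern):

    #include <cstdlib>

    // Allocate small and large blocks side by side, then free only the large
    // ones.  Free memory is now a string of ~64 KB holes separated by live
    // 64-byte blocks, so a single 1 MB request may not fit in any hole even
    // though plenty of memory is "free" in total.
    void FragmentHeap()
    {
        void* small_blocks[100];
        void* big_blocks[100];

        for (int i = 0; i < 100; ++i)
        {
            small_blocks[i] = malloc(64);
            big_blocks[i]   = malloc(64 * 1024);
        }
        for (int i = 0; i < 100; ++i)
            free(big_blocks[i]);

        void* huge = malloc(1024 * 1024);   // may force the heap to grow instead
        free(huge);

        for (int i = 0; i < 100; ++i)
            free(small_blocks[i]);
    }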
So how does all of this relate to GC2's issues? The shipcfg files originally used static memory allocation for the buffer that was used to get the section and key names, but if you added enough jewelry and components to the ship, the buffer wasn't big enough. We needed a quick way to fix this that wouldn't involve re-engineering the shipcfg code and that wouldn't make reading in the shipcfg files take longer. The quickest change to implement was to switch from static allocation to dynamic allocation. In order to make sure that the buffer was big enough, I suggested that we dynamically allocate a buffer that was the same size as the file. The bad thing about this solution was that it didn't take into account that we weren't saving the parsed data values in memory after reading in the file for the first time. Every time you built a colony ship, the colony ship cfg file was read in and a buffer was allocated and deallocated. So I'm thinking that we were probably fragmenting memory.
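The quick fix probably looked something like this sketch (the names and details here are mine, not the real code): size the buffer to the whole file, which is always enough, but pay for an allocation and a deallocation on every single read:

    #include <cstdio>
    #include <vector>

    // Sketch: allocate a parse buffer as big as the file itself.
    // Guaranteed to fit, but a new buffer is created and destroyed for
    // every ship built, which churns the heap.
    std::vector<char> MakeBufferForFile(const char* path)
    {
        long size = 0;
        if (FILE* f = fopen(path, "rb"))
        {
            fseek(f, 0, SEEK_END);
            size = ftell(f);
            fclose(f);
        }
        return std::vector<char>(size > 0 ? size : 1);   // freed again as soon as parsing ends
    }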
Last week, I wrote code to make GC2 save the parsed values from the shipcfg files in memory, so that they only have to be read in once. It means that we're hanging on to a little more memory than we were before, but it should cut down on the fragmentation. It definitely cuts down on the amount of time spent creating ship graphics, which you will notice when loading a save game. If it's not enough, we may still have to change the shipcfg file format, but keeping the parsed results in memory will help keep load times down. I wrote the new shipcfg code in such a way that I should only have to replace one function if we need to switch the file format: the one that actually reads in the data. The code that uses the stored data to put together a ship with all its components will not need to be changed.
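The caching idea boils down to something like this sketch (ShipCfgData and ParseShipCfgFile are stand-ins I made up, not the real GC2 types):

    #include <map>
    #include <string>
    #include <utility>

    struct ShipCfgData { /* parsed sections, keys and values ... */ };

    // Stand-in for the function that actually reads and parses the file;
    // this is the only piece that would change if the file format changes.
    ShipCfgData ParseShipCfgFile(const std::string& path)
    {
        ShipCfgData data;
        // ... open the file and fill in data (details omitted) ...
        (void)path;
        return data;
    }

    // Parse each shipcfg file once and hand out the stored result afterwards.
    const ShipCfgData& GetShipCfg(const std::string& path)
    {
        static std::map<std::string, ShipCfgData> cache;

        std::map<std::string, ShipCfgData>::iterator it = cache.find(path);
        if (it == cache.end())
            it = cache.insert(std::make_pair(path, ParseShipCfgFile(path))).first;
        return it->second;
    }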
Another problem area is the save game code. Writing to the hard disk is slow, so it's quicker to create the file in memory and then write it out to disk in one fell swoop rather than writing out each datum as you go. Since the exact file size of a given save game is unknown, the save game code uses dynamic memory allocation. Each object in the game (i.e. ships, planets, civs, etc.) has its own function to create a block of memory containing all the data it needs to store, which it then passes to the main save game function to be added to the main block. This is the same code as in GC1, but we had less dynamic data in GC1. Originally, all of the buffers started out at 1 KB, and whenever a buffer needed to increase in size, it would allocate its current size + 1 KB, copy the data from the old buffer to the new buffer, and then deallocate the old buffer. The process of growing and copying the buffers was taking up more time than the actual saving of data, and I needed a way to improve performance without doing major surgery to the save game code before version 1.0 came out. So I did some profiling on how big the buffers were for a gigantic galaxy in the first few turns of the game and how much they grew, and used those numbers to change the initial buffer sizes and how much they grow by for each data object. This was, admittedly, more of a band-aid than an actual fix.
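The grow-and-copy pattern described above looks roughly like this in C++ (a simplified sketch, not the actual save code); note how the initial size and growth step control how many full copies happen:

    #include <cstdlib>
    #include <cstring>

    // Simplified growable buffer: when it runs out of room it allocates a
    // bigger block, copies everything over, and frees the old block.
    class SaveBuffer
    {
    public:
        SaveBuffer(size_t initialSize, size_t growBy)
            : m_data((char*)malloc(initialSize)), m_size(initialSize),
              m_used(0), m_growBy(growBy) {}
        ~SaveBuffer() { free(m_data); }

        void Append(const void* src, size_t len)
        {
            while (m_used + len > m_size)
            {
                size_t newSize = m_size + m_growBy;          // grow by a fixed step
                char*  newData = (char*)malloc(newSize);
                memcpy(newData, m_data, m_used);             // the expensive copy
                free(m_data);
                m_data = newData;
                m_size = newSize;
            }
            memcpy(m_data + m_used, src, len);
            m_used += len;
        }

    private:
        char*  m_data;
        size_t m_size;
        size_t m_used;
        size_t m_growBy;
    };

    // Original behaviour: 1 KB start, 1 KB growth -- one copy for every extra KB written.
    //     SaveBuffer shipData(1024, 1024);
    // With profiled sizes (illustrative numbers) the buffer rarely has to grow at all:
    //     SaveBuffer shipData(256 * 1024, 64 * 1024);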
Apart from adding some new things that needed to be saved, I don't think we've really touched the save game code much. However, it is still fairly inefficient because of all the buffer growing and copying. So the next change I have started making is to have all the data objects use one buffer. Once that is done, I can make further optimizations to the code, like initializing the buffer size based on the galaxy size, and see whether it does more good than harm to keep the buffer in memory so that it doesn't have to allocate and deallocate 2-13 MB (or more) every time the game saves. At the very least, making everything use one buffer should cut down on fragmentation and make saving go quicker. I will also be reviewing the code to make sure that only necessary data is being saved rather than being recalculated, in an effort to cut down loading time.
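A single shared buffer could be as simple as this sketch (again, illustrative names and sizes, not the real code): reserve once based on the galaxy size and let every object append to the same storage, optionally keeping it alive between saves:

    #include <cstddef>
    #include <vector>

    // One buffer for the whole save.  It is kept alive between saves so the
    // 2-13 MB block isn't allocated and freed every time; call this once at
    // the start of a save.
    std::vector<char>& BeginSave(size_t expectedSizeForGalaxy)
    {
        static std::vector<char> buffer;                 // survives between saves
        if (buffer.capacity() < expectedSizeForGalaxy)
            buffer.reserve(expectedSizeForGalaxy);       // e.g. a few MB for a gigantic galaxy
        buffer.clear();                                  // reuse the existing storage
        return buffer;
    }

    // Each game object then appends its data with:
    //     buffer.insert(buffer.end(), src, src + len);
    // instead of building its own block and copying it into the main one.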
Once I've finished making our memory usage more efficient, I'll start working on the modding stuff again.
Edit: Ok, since I'm getting e-mails and comments about this, I would like to clarify that I am not blaming your hard drives for causing the crashes. The point of this article is that I am working on resolving the memory issues. The comment about running disk defrag was meant as a general statement that you should regularly defrag your hard drives, and to provide a visual representation of what is happening when memory is fragmented.
Update: In my sticky thread here [Link], I put instructions and a link for an unofficial test exe.
--Cari
Comments (Page 6)
76
Kosty
on May 06, 2006
p22, try windowed mode with the same resolution as your desktop. It will look the same, with no ALT-TAB crashes.
77
CariElf
on May 06, 2006
Alfb, we're using C/C++, and our build environment is Visual Studio .NET 2003.
p22,
Are you using one of the test builds or the regular 1.1?
78
player1_fanatic
on May 06, 2006
1.1 final
By the way, just now I experimented a little, and was not able to "break it".
So it's not 50%.
But most of my crashes before did happen from alt-tabbing back.
Tested with a freshly loaded game, so maybe it happens after longer playing time.
But I noticed that alt-tabbing back to the game takes 5-10 seconds before the game appears on screen.
Using cat6.3 with my x1600pro.
P.S.
Also, the game is pretty resource intensive in the background.
I can't scroll well in Firefox due to slowdowns.
Using 1GB RAM, 256MB video
79
Alfb
on May 06, 2006
Alfb, we're using C/C++, and our build environment is Visual Studio .NET 2003.
While the middle of a project is usually not the time for an upgrade, has anyone considered Visual Studio .NET 2005? I am not aware of any improvements that specifically target heap fragmentation, but there may be some. Also, have you considered statically allocating a big block for your own use and writing your own allocator to work within that block? Obviously you would have extra code that might slow things down, but if you don't find another solution, it might be worth considering.
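For readers wondering what that would look like, below is a minimal sketch of the kind of allocator Alfb is suggesting: a simple "bump" arena carved out of one block reserved up front (the class and names are hypothetical, nothing from GC2):

    #include <cstddef>

    // Hand out pieces of one pre-reserved block, so the OS heap never gets
    // fragmented by these allocations.  This simple arena can only release
    // everything at once (Reset), which suits per-save or per-frame scratch data.
    class Arena
    {
    public:
        explicit Arena(size_t bytes) : m_block(new char[bytes]), m_size(bytes), m_used(0) {}
        ~Arena() { delete[] m_block; }

        void* Alloc(size_t bytes)
        {
            size_t rounded = (bytes + 7) & ~size_t(7);   // keep 8-byte alignment
            if (m_used + rounded > m_size)
                return 0;                                // arena exhausted
            void* p = m_block + m_used;
            m_used += rounded;
            return p;
        }

        void Reset() { m_used = 0; }                     // "free" everything in one go

    private:
        char*  m_block;
        size_t m_size;
        size_t m_used;
    };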
80
Creston999
on May 07, 2006
Actually I have always been scared of using defrag. On a large drive it is an overnight job, and if the power goes out your system is pretty much screwed ... no? Please correct me if I am wrong here ...
This was a bug in the early, early, early days of Win98; it's long since been fixed. Even back then, it could only happen if the power went out while defrag was working on an important system file that it couldn't write back after it had read it. If you then tried to boot after that had happened, you'd be unable to because the file was missing.
Nowadays, defrag actually skips system files (they show up as the green sections in your defrag display, and those go untouched), and, IIRC, it allocates an extra storage buffer where it copies the contents of a cluster before it starts altering its actual location on the drive.
So the power could go out, but you would be fine. Your disk would still need defragging, of course.
Haven't noticed any memory leaks myself (1GB of RAM), but I haven't played that long or in very large galaxies. No leaks here at home, nor on my work comp (I love you guys for letting me install on two systems!)
81
Pnakotus
on May 07, 2006
Defragmentation is absolutely essential for people using their PCs for media. Proper defragmentation software, like Diskeeper or whatever, is both faster and more effective than the laughably poor XP defrag.