Red Gate forums :: View topic - Memory Allocation Rate

ANTS Memory Profiler 7
ANTS Memory Profiler 7 forum

Memory Allocation Rate

MichaelCederberg



Joined: 16 Aug 2011
Posts: 5

PostPosted: Tue Aug 16, 2011 12:48 pm    Post subject: Memory Allocation Rate

We have a system that has no (known) memory leaks. However, we can see that its memory allocation rate is high: pretty much all objects are collected whenever the GC runs.

Can the memory profiler help me find the types/functions in question easily? Something along the lines of "Functions Allocating Most Memory" and "Types With Most Memory Allocated" in the VS2010 profiler (but without the performance hit).
Brian Donahue



Joined: 23 Aug 2004
Posts: 6667

PostPosted: Wed Aug 17, 2011 4:40 pm

Hello,

ANTS Memory Profiler does not record the call stack for allocations, so there is no way to get "functions that use the most memory" - probably for the reasons you point out: the information is not that valuable compared with the monumental overhead of stopping the program to grab call stacks all of the time.

You can see classes that have the most instances and are using the most space quite easily, though, in the class list.

You may find the information in "checking for high memory usage" of some help:
http://www.red-gate.com/supportcenter/Content?p=ANTS%20Memory%20Profiler&c=ANTS_Memory_Profiler/help/7.0/amp_managed_usage.htm&toc=ANTS_Memory_Profiler/help/7.0/toc1286225.htm
MichaelCederberg



Joined: 16 Aug 2011
Posts: 5

PostPosted: Wed Aug 17, 2011 10:02 pm

But I thought that the class list shows the number of live objects?

In order to detect problems caused by too high a memory allocation rate, I would like to see the number of allocations performed.

Example:
Code:

   for (int i = 0; i < 10000; i++)
   {
       byte[] x = new byte[1024];
   }

   // Memory Profiler Snapshot #1

   for (int i = 0; i < 1000000; i++)
   {
       byte[] x = new byte[1024];
   }

   // Memory Profiler snapshot #2

I assume that the garbage collector has run a number of times between snapshot #1 and #2.

If I run the code above and compare snapshot #1 and #2, I would most likely see a delta in the number of live byte[1024] objects. However, the delta might be negative or positive, depending on when the GC ran relative to the snapshots. What I do not see is that 1000000 byte[1024] objects were allocated between snapshot #1 and #2, and that is what I would like to see. Keep in mind: I am not looking for a leak. I am looking for code that allocates too much memory (which causes the garbage collector to run too often).
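To make the distinction concrete, here is a hedged sketch (not profiler output; `GC.CollectionCount` is a standard .NET API) showing how a live-object snapshot delta can stay near zero while the GC is being worked hard:

```csharp
using System;

class AllocationChurnDemo
{
    static void Main()
    {
        int gen0Before = GC.CollectionCount(0);

        // Roughly 1 GB of short-lived garbage: none of it is live afterwards,
        // so a snapshot delta of live byte[] instances stays near zero.
        for (int i = 0; i < 1000000; i++)
        {
            byte[] x = new byte[1024];
        }

        // The collection count, unlike the live-instance count, reveals
        // how hard the allocations made the GC work.
        int gen0Collections = GC.CollectionCount(0) - gen0Before;
        Console.WriteLine("Gen 0 collections triggered: " + gen0Collections);
    }
}
```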

What I write above is based on my understanding of Memory Profiler v6; I am trying to decide whether to upgrade to v7.
Brian Donahue



Joined: 23 Aug 2004
Posts: 6667

PostPosted: Thu Aug 18, 2011 9:13 am

I'd suggest having a look at v7 - you can install and evaluate it for 14 days alongside your currently-installed v6. The class list shows the number of live instances, the live size, and the change in each between the two snapshots you're comparing.

I'm not sure how the total number of allocations is important compared to how well the GC is coping with cleaning up. I'm not even sure a memory profiler can provide this metric - as far as I know it can only snapshot a count of objects that are in memory at any given time.

Is the problem you are trying to troubleshoot too much time spent in GC?
MichaelCederberg



Joined: 16 Aug 2011
Posts: 5

PostPosted: Thu Aug 18, 2011 9:36 am

The reason why we are very interested in the number of allocations is that it directly affects the rate of garbage collections, and since our system is time sensitive, we strive to keep the number of GCs down. In essence, part of our code follows the rules described in this document:
http://download.microsoft.com/download/9/9/C/99CA11E6-774E-41C1-88B5-09391A70AF02/RapidAdditionWhitePaper.pdf

The VS2010 profiler can give number of allocations for each type. However, I really hate the VS2010 profiler (for various reasons) and would rather use your profiler.

Just to give you some background why this is important:

- Our system is very latency sensitive
- Our system continuously has a working set of around 3 GB
- A gen2 GC takes about 1.5 sec
- When we started performance tuning the system we had an allocation rate of around 200 MB/s; after tuning we are down to around 1 MB/s
- The performance tuning reduced the gen2 GC rate from one every 5 minutes to one every 12 hours (which is acceptable)

Now you can argue that there is something wrong with a design that creates such amounts of garbage.

1. Yes, there is something wrong ... we are fixing issues as we find them (using profilers)
2. Our task is CPU intensive. In total we have 48 cores continuously running close to 100% doing the job. CPU-intensive tasks in .NET will create a lot of garbage (but with careful programming you can minimize it by following the rules in the linked whitepaper).
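The spirit of those rules can be illustrated with a minimal sketch (the `MessageReader` class below is hypothetical, not from our codebase): pre-allocate buffers once and reuse them, so the steady-state allocation rate in the hot path approaches zero:

```csharp
using System;

// Allocation-free steady state: the buffer is allocated once at startup,
// so processing a message creates no new garbage for the GC to collect.
class MessageReader
{
    private readonly byte[] _buffer = new byte[1024];

    // Hypothetical per-message handler: fills the pre-allocated buffer
    // instead of calling "new byte[1024]" on every message.
    public int ReadInto(Random source)
    {
        source.NextBytes(_buffer);
        return _buffer.Length;
    }
}
```

Moving the `new` out of the hot path like this is the kind of change that brings the allocation rate (and with it the GC frequency) down.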
Brian Donahue



Joined: 23 Aug 2004
Posts: 6667

PostPosted: Thu Aug 18, 2011 2:07 pm

You may be able to use Performance Profiler -- if you look at the hit count for the type's constructor, that may give you the information that you need. I don't think this will work for primitive types like byte[] though.
MichaelCederberg



Joined: 16 Aug 2011
Posts: 5

PostPosted: Thu Aug 18, 2011 3:12 pm

And unfortunately it doesn't work for stuff like boxing either. Anyway, thanks for the response.
Brian Donahue



Joined: 23 Aug 2004
Posts: 6667

PostPosted: Mon Aug 22, 2011 9:48 am

Would it help to run performance profiler along with the "allocated bytes/sec" performance counter and then locate the code that ran around the time this got unacceptably high?
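For reference, that counter can also be read programmatically (a sketch; Windows-only - ".NET CLR Memory \ Allocated Bytes/sec" is a standard CLR performance counter, and the instance name here is assumed to match the process name):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class AllocationRateMonitor
{
    static void Main()
    {
        // Counter instance names for the ".NET CLR Memory" category are
        // normally keyed on the process name.
        string instance = Process.GetCurrentProcess().ProcessName;
        using (var counter = new PerformanceCounter(
            ".NET CLR Memory", "Allocated Bytes/sec", instance, readOnly: true))
        {
            counter.NextValue();          // first sample primes the counter
            Thread.Sleep(1000);
            float bytesPerSec = counter.NextValue();
            Console.WriteLine("Allocation rate: " + bytesPerSec + " bytes/sec");
        }
    }
}
```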
MichaelCederberg



Joined: 16 Aug 2011
Posts: 5

PostPosted: Wed Aug 24, 2011 12:50 pm

Unfortunately that doesn't work for me; it is nowhere near precise enough. I have decided to go back to the VS2010 profiler for memory profiling for now, even though I do not like it.

I realize that a "bytes allocated per function" metric would be fairly expensive to provide. However, "number of instances of type" and "number of bytes allocated for type" columns in the class list should be possible (the CLR Profiler from Microsoft can track this, and its source is available).
georgigenov



Joined: 30 Mar 2012
Posts: 1
Location: Wilmington, DE

PostPosted: Fri Mar 30, 2012 5:58 pm

That "allocation call stack" feature has saved me several times already. I read the article about the unimportance of where an object was allocated, and it's reasonably valid as long as you are the developer of all the code.

Now enter the realm of third-party components, where even some of the best ones have bugs - sometimes really obscure bugs. In my case I had a component that was subscribing to an event on the main Windows form, which ANTS profiler was simply showing as an entry in a linked list of event handlers (not sure if I can upload a screenshot here). Without knowing which event it was, it was extremely difficult to figure out a workaround, and the problem was made harder by the heavy obfuscation used by that third-party control. Then I remembered the allocation call stack feature of a competitor. I downloaded the trial, and in less than a minute I had enough information: it showed me the property I had assigned, which triggered the subscription to the event in question. After that it was no longer a problem to navigate the obfuscated code, knowing what to look for. I had a solution, and there was no memory leak afterwards.

One can argue that since it's an issue in someone else's code, you should have them fix it. Right, except that when you have a tight deadline you can't go to the client and say: "oh, yeah, that's that third-party control and they'll fix it in two months, so not my problem, you gotta wait".

This is not bashing the product in any way; on the contrary, I really, really like it. It's just my reason why an "allocation call stack" could save your bacon.

Hint: when I used the competitor's product a couple of years ago, this feature had a very heavy performance impact. Today, running their code on an i7, I barely feel it (still feel it, but not as bad).
AndrewH



Joined: 17 Aug 2006
Posts: 137

PostPosted: Wed Apr 04, 2012 3:31 pm

I've had a look at the problem of high allocation rates in the past. It's actually a problem that the performance profiler seems better suited for, if modified to measure memory allocation instead of time: the performance profiler is better at displaying what's happening on a continuous basis and points right at the function(s) most responsible for the problem, while the memory profiler shows an instantaneous snapshot, making it guesswork which objects are cycling fast.

You can already use the performance profiler to get this information, though it's not really ideal: select the region where the allocation rate is high, use the all-methods grid to look for the type's ..ctor function, and then use the call graph to find out what is allocating the most objects. This won't work if the problem is down to arrays, however. I've got an experimental build that substitutes kilobytes allocated for milliseconds, and that does work for arrays, so I think the principle is sound.

The problem is that this isn't intuitive. The performance profiler doesn't 'feel' like the go-to tool for this problem, and the memory profiler doesn't have a UI that can provide a good display of continuous data.

There's also the separate issue of using allocation call stacks to find where an object started life: the problem is that the link between where an object is allocated and where it later gets stored in a way that makes it a leak is somewhat weak. We have been looking at more general features to help with tracking object lifecycles (and hence the code that should be responsible for tidying up a particular object).
_________________
Andrew Hunter
Software Developer
Red Gate Software Ltd.