.NET’s Garbage Collector (GC) implements many performance optimizations. One of them, the generational model, assumes that young objects die quickly, whereas old ones live longer. This is why the managed heap is divided into three generations: Gen 0 (youngest), Gen 1 (short-lived) and Gen 2 (oldest). New objects are allocated in Gen 0. When the GC tries to allocate a new object and Gen 0 is full, it performs a Gen 0 cleanup, i.e. a partial cleanup (Gen 0 only)! It traverses the object graph, starting from the roots (local variables, static fields and more), and marks all of the referenced objects as live.
This is the first phase, called “mark”. This phase can be non-blocking; everything else the GC does is fully blocking. The GC suspends all of the application threads to perform the next steps!
Live objects are promoted (which most of the time means moved, i.e. copied!) to Gen 1, and the memory of Gen 0 is cleaned up. Gen 0 is usually very small, so this is very fast. In a perfect scenario, such as a web request, none of the objects survive: everything allocated while handling the request should die when the request ends, so the GC just resets the next-object pointer to the beginning of Gen 0. After some Gen 0 collections we get to the situation where Gen 1 is also full, so the GC can’t just promote more objects to it. It then collects Gen 1 memory as well. Gen 1 is also small, so that’s fast too. Either way, the Gen 1 survivors are promoted to Gen 2. Gen 2 objects are supposed to be long-lived. Gen 2 is very big, and collecting its memory is very time-consuming, so a Gen 2 garbage collection is something that we want to avoid, because it can visibly affect the user experience.
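To see generational promotion in action, here is a minimal sketch (a console app; the GC.Collect calls force collections, and the exact numbers can vary with runtime and build settings):

using System;

class GenerationsDemo
{
    static void Main()
    {
        object obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // 0 - freshly allocated in Gen 0

        GC.Collect(0);                            // force a Gen 0 collection
        Console.WriteLine(GC.GetGeneration(obj)); // 1 - the survivor was promoted

        GC.Collect(1);                            // force a Gen 0 + Gen 1 collection
        Console.WriteLine(GC.GetGeneration(obj)); // 2 - promoted again, now "long-lived"
    }
}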
Large Object Heap (LOH)
When a large object is allocated, it’s treated as a Gen 2 object, not Gen 0 as for small objects. The consequence is that if you run out of memory in the LOH, the GC cleans up the whole managed heap, not only the LOH: Gen 0, Gen 1 and Gen 2, including the LOH. This is called a full garbage collection and it is the most time-consuming kind. For many applications it can be acceptable, but definitely not for high-performance web servers, where a few big memory buffers are needed to handle an average web request (read from a socket, decompress, decode JSON and more).
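A quick way to observe this (a minimal sketch; the 85,000-byte LOH threshold is discussed further below):

using System;

class LohDemo
{
    static void Main()
    {
        byte[] small = new byte[1000];   // small object heap, starts in Gen 0
        byte[] large = new byte[100000]; // >= 85,000 bytes, allocated on the LOH

        Console.WriteLine(GC.GetGeneration(small)); // 0
        Console.WriteLine(GC.GetGeneration(large)); // 2 - LOH objects report Gen 2
    }
}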
The Solution
The solution is very simple: buffer pooling. A pool is a set of initialized objects that are ready to use. Instead of allocating a new object, we rent one from the pool. Once we are done using it, we return it to the pool. Every large managed object is an array or an array wrapper (a string contains a length field and an array of chars), so we need to pool arrays to avoid this problem.
ArrayPool<T>
ArrayPool<T> is a high-performance pool of managed arrays. You can find it in the System.Buffers package and its source code is available on GitHub. It’s mature and ready to use in production. It targets .NET Standard 1.1, which means that you can use it not only in your new and fancy .NET Core apps, but in existing .NET 4.5.1 apps as well!

Sample
using System.Buffers;

int minLength = 1024; // the minimum size we need; Rent may return a bigger array
var samePool = ArrayPool<byte>.Shared;
byte[] buffer = samePool.Rent(minLength);
try
{
    Use(buffer);
}
finally
{
    samePool.Return(buffer);
    // don't use the reference to the buffer after returning it!
}

void Use(byte[] buffer) // it's a regular managed array
{
    // ...
}
How to use it?
First of all you need to obtain an instance of the pool. You can do it in at least three ways:
- Recommended: use the ArrayPool<T>.Shared property, which returns a shared pool instance. It’s thread-safe and all you need to remember is that it has a default max array length equal to 2^20 (1024*1024 = 1 048 576).
- Call the static ArrayPool<T>.Create method, which creates a thread-safe pool with custom maxArrayLength and maxArraysPerBucket. You might need it if the default max array length is not enough for you. Please be warned that once you create it, you are responsible for keeping it alive (see the sketch after this list).
- Derive a custom class from the abstract ArrayPool<T> and handle everything on your own.
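As an illustration of the second option, a minimal sketch (the parameter values here are arbitrary):

using System.Buffers;

static class Buffers
{
    // A custom pool for arrays of up to 16 MB. Unlike Shared,
    // you are responsible for keeping this instance alive,
    // e.g. by storing it in a static field like this one.
    public static readonly ArrayPool<byte> LargePool =
        ArrayPool<byte>.Create(maxArrayLength: 16 * 1024 * 1024, maxArraysPerBucket: 10);
}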
The next step is to call the Rent method, which requires you to specify the minimum length of the buffer. Keep in mind that what Rent returns might be bigger than what you asked for.

byte[] webRequest = request.Bytes;
byte[] buffer = ArrayPool<byte>.Shared.Rent(webRequest.Length);

Array.Copy(
    sourceArray: webRequest,
    destinationArray: buffer,
    length: webRequest.Length); // webRequest.Length != buffer.Length!!
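Because the rented array may be longer than the data it holds, it can help to pass the logical length along with the buffer; one way (an illustrative sketch) is ArraySegment<T>:

// wrap the buffer together with the logical length, so consumers
// never read past webRequest.Length into the pool's spare capacity
var segment = new ArraySegment<byte>(buffer, 0, webRequest.Length);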
Once you are done using it, you just Return it to the SAME pool. The Return method has an overload which allows you to clear the buffer, so that a subsequent consumer renting it will not see the previous consumer’s content. By default the contents are left unchanged.
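For example, when the buffer held sensitive data:

// clearArray: true zeroes the buffer before it goes back to the pool,
// so the next renter cannot observe the previous contents
ArrayPool<byte>.Shared.Return(buffer, clearArray: true);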
A very important note from the ArrayPool code:
Once a buffer has been returned to the pool, the caller gives up all ownership of the buffer. The reference returned from a given call to Rent must be returned via Return only once.
It means that the developer is responsible for doing things right. If you keep using the reference to the buffer after returning it to the pool, you are risking unexpected behavior. As far as I know, there is no static code analysis tool that can verify correct usage (as of today). ArrayPool<T> is part of the corefx library; it’s not a part of the C# language.
Source : https://adamsitnik.com/Array-Pool/
Garbage collection is one of the premiere features of the .NET managed coding platform. As the platform has become more capable, we’re seeing developers allocate more and more large objects. Since large objects are managed differently than small objects, we’ve heard a lot of feedback requesting improvement. Today’s post is by Surupa Biswas and Maoni Stephens from the garbage collection feature team. — Brandon

The CLR manages two different heaps for allocation, the small object heap (SOH) and the large object heap (LOH). Any allocation greater than or equal to 85,000 bytes goes on the LOH. Copying large objects has a performance penalty, so unlike the SOH, the LOH is not compacted. Another defining characteristic is that the LOH is only collected during a generation 2 collection. Together, these have the built-in assumption that large object allocations are infrequent.
Because the LOH is not compacted, memory management is more like a traditional allocator. The CLR keeps a free list of available blocks of memory. When allocating a large object, the runtime first looks at the free list to see if it will satisfy the allocation request. When the GC discovers adjacent objects that died, it combines the space they used into one free block which can be used for allocation. Because a lot of interaction with the free list takes place at the time of allocation, there are tradeoffs between speed and optimal placement of memory blocks.
A condition known as fragmentation can occur when nothing on the free list can be used. This can result in an out-of-memory exception despite the fact that collectively there is enough free memory. For developers who work with a lot of large objects, this error condition may be familiar. We’ve received a lot of feedback requesting a solution to LOH fragmentation.
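To make the fragmentation failure mode concrete, here is a toy first-fit free-list allocator (an illustrative sketch only, not the CLR’s actual implementation): an allocation fails when no single free block is big enough, even if the blocks’ combined size would suffice.

using System.Collections.Generic;

// Illustrative only: a toy free list in the spirit of the LOH allocator.
// Each entry is the (offset, size) of a free block within a large region.
class ToyFreeList
{
    private readonly List<(long Offset, long Size)> _free = new List<(long, long)>();

    public ToyFreeList(long regionSize) => _free.Add((0, regionSize));

    // First-fit allocation: returns the offset of the first block that is
    // big enough, or -1 ("out of memory") when every block is too small -
    // the fragmentation case, even if total free space exceeds 'size'.
    public long Allocate(long size)
    {
        for (int i = 0; i < _free.Count; i++)
        {
            var (offset, blockSize) = _free[i];
            if (blockSize < size) continue;
            if (blockSize == size) _free.RemoveAt(i);
            else _free[i] = (offset + size, blockSize - size); // shrink the block
            return offset;
        }
        return -1;
    }

    // Returning a block: a real allocator also coalesces adjacent free blocks,
    // as the CLR does when it combines the space of adjacent dead objects.
    public void Free(long offset, long size) => _free.Add((offset, size));
}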
A Better LOH Allocator
In .NET 4.5, we made two improvements to the large object heap. First, we significantly improved the way the runtime manages the free list, thereby making more effective use of fragments. Now the memory allocator will revisit the memory fragments that earlier allocations couldn’t use. Second, when in server GC mode, the runtime balances LOH allocations between each heap. Prior to .NET 4.5, we only balanced the SOH. We’ve observed substantial improvements in some of our LOH allocation benchmarks as a result of both changes.
We’re also starting to collect telemetry about how the LOH is used. We’re tracking how often out-of-memory conditions in managed applications are due to LOH fragmentation. We’ll use this data to measure and improve memory management of real-world applications.