You are right, a stack would offer better data locality. I am changing that now.
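Roughly, the change amounts to swapping the Queue for a Stack so the most recently released buffer (the one most likely still warm in cache) is handed out next. A quick, untested sketch against the BufferManager class quoted below; only the container type and the two pool operations change:

private readonly Stack<ArraySegment<byte>> m_Buffers;   // was Queue<ArraySegment<byte>>

// CreateNewSegment() would call m_Buffers.Push(chunk) instead of Enqueue(chunk)

public ArraySegment<byte> CheckOut() {
    lock (m_LockObject) {
        if (m_Buffers.Count == 0) {
            CreateNewSegment();
        }
        return m_Buffers.Pop();     // was Dequeue()
    }
}

public void CheckIn(ArraySegment<byte> _Buffer) {
    lock (m_LockObject) {
        m_Buffers.Push(_Buffer);    // was Enqueue()
    }
}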
"For the memory release problem, you could offer a manual TrimExcess() or a Trim(int segmentCount) method. What do you think ?" These methods would be very troublesome as you would have to insure that no buffers are being used in a given segment (with many small arrays this is easy as there is no sharing). It is quite possible that you could be in a situation where you have 8 segments of 1000 buffers each (say 8 mb segments of 8k buffers). It is quite possible that 7992 of these buffers are currently free with 8 being used for async operations (1 for each of the 8 segments). This would lead to a situation where although you have lots of free memory you can't free up a buffer. Using a stack would help minimize the possibility of this happenning but would not eliminate it. One could do some things like a "pre-emptive remove" where it went through all of the free buffers and removed all from a given segment. It would then have to find any holes there may be. When a new buffer was checked back in it would not place it back to the free pool but would instead fill in a hole with it. When all holes were filled it could release the segment. In looking at all of this logic I can't help but feel that you are better off to just eat the extra memory ... this is fairly common in larger scale systems anyways (to ramp up then stay there as opposed to cutting back on resources during slow times) Cheers, Greg On 8/9/06, Sébastien Lorion <[EMAIL PROTECTED]> wrote:
On 8/9/06, Sébastien Lorion <[EMAIL PROTECTED]> wrote:

I really like your solution! Making this class a singleton as Yun Jin did does not seem like a good choice, because different scenarios will most probably require different parameters.

Maybe a stupid idea, but I think using a stack would provide better cache locality, since it will reuse the last buffer released. Also, the stack Push/Pop methods are faster (about 3 times on my machine).

Really just a matter of taste, but personally I would use Request/Release instead of CheckIn/CheckOut. And to be clearer, the AvailableBuffers property should be named AvailableBufferCount.

Thanks for your code!

Sébastien

On 8/9/06, gregory young <[EMAIL PROTECTED]> wrote:
> After some research this seems like a much better method. Here is the
> BufferManager class ... you can use the new overloads for send/receive
> that take ArraySegments in conjunction with it.
>
> Yun Jin gives a great explanation of the problem I am trying to
> resolve here: https://blogs.msdn.com/yunjin/archive/2004/01/27/63642.aspx
>
> I think this is better than the solution presented there for a few reasons:
>
> 1) the object will be in the LOH (less work for the GC, as it never deals
> with compacting the LOH)
> 2) buffers can be dynamically sized ... if you need more buffer space, just
> ask for more array segments (the socket methods take an IList<ArraySegment<byte>>)
>
> One area where this suffers is in dealing with memory management (I
> cannot easily release memory), but generally in such systems that is
> not done often anyway.
>
> Love to hear some feedback.
>
> Cheers,
>
> Greg
>
> BufferManager.cs
>
> /// <summary>
> /// A manager to handle buffers for the socket connections
> /// </summary>
> /// <remarks>
> /// When used in an async call a buffer is pinned. Large numbers of pinned buffers
> /// cause problems with the GC (in particular they cause heap fragmentation).
> ///
> /// This class maintains a set of large segments and gives clients pieces of these
> /// segments that they can use for their buffers. The alternative to this would be to
> /// create many small arrays which it then maintained. This methodology should be slightly
> /// better than the many-small-arrays methodology because in creating only a few very
> /// large objects it will force these objects to be placed on the LOH. Since the
> /// objects are on the LOH they are at this time not subject to compacting, which would
> /// require an update of all GC roots as would be the case with lots of smaller arrays
> /// that were in the normal heap.
> /// </remarks>
> public class BufferManager {
>     private readonly int m_SegmentChunks;
>     private readonly int m_ChunkSize;
>     private readonly int m_SegmentSize;
>     private readonly Queue<ArraySegment<byte>> m_Buffers;
>     private readonly object m_LockObject = new Object();
>     private readonly List<byte[]> m_Segments;
>
>     /// <summary>
>     /// The current number of buffers available
>     /// </summary>
>     public int AvailableBuffers {
>         get { return m_Buffers.Count; } // do we really care about volatility here?
>     }
>
>     /// <summary>
>     /// The total size of all buffers
>     /// </summary>
>     public int TotalBufferSize {
>         get { return m_Segments.Count * m_SegmentSize; } // do we really care about volatility here?
>     }
>
>     /// <summary>
>     /// Creates a new segment and makes its buffers available
>     /// </summary>
>     private void CreateNewSegment() {
>         byte[] bytes = new byte[m_SegmentChunks * m_ChunkSize];
>         m_Segments.Add(bytes);
>         for (int i = 0; i < m_SegmentChunks; i++) {
>             ArraySegment<byte> chunk = new ArraySegment<byte>(bytes, i * m_ChunkSize, m_ChunkSize);
>             m_Buffers.Enqueue(chunk);
>         }
>     }
>
>     /// <summary>
>     /// Checks out a buffer from the manager
>     /// </summary>
>     /// <remarks>
>     /// It is the client's responsibility to return the buffer to the manager by
>     /// calling <see cref="CheckIn"></see> on the buffer
>     /// </remarks>
>     /// <returns>An <see cref="ArraySegment{T}"></see> that can be used as a buffer</returns>
>     public ArraySegment<byte> CheckOut() {
>         lock (m_LockObject) {
>             if (m_Buffers.Count == 0) {
>                 CreateNewSegment();
>             }
>             return m_Buffers.Dequeue();
>         }
>     }
>
>     /// <summary>
>     /// Returns a buffer to the control of the manager
>     /// </summary>
>     /// <remarks>
>     /// The buffer must previously have been obtained from this manager via
>     /// <see cref="CheckOut"></see>
>     /// </remarks>
>     /// <param name="_Buffer">The <see cref="ArraySegment{T}"></see> to return to the manager</param>
>     public void CheckIn(ArraySegment<byte> _Buffer) {
>         lock (m_LockObject) {
>             m_Buffers.Enqueue(_Buffer);
>         }
>     }
>
>     #region constructors
>
>     /// <summary>
>     /// Constructs a new <see cref="BufferManager"></see> object
>     /// </summary>
>     /// <param name="_SegmentChunks">The number of chunks to create per segment</param>
>     /// <param name="_ChunkSize">The size of a chunk in bytes</param>
>     public BufferManager(int _SegmentChunks, int _ChunkSize)
>         : this(_SegmentChunks, _ChunkSize, 1) { }
>
>     /// <summary>
>     /// Constructs a new <see cref="BufferManager"></see> object
>     /// </summary>
>     /// <param name="_SegmentChunks">The number of chunks to create per segment</param>
>     /// <param name="_ChunkSize">The size of a chunk in bytes</param>
>     /// <param name="_InitialSegments">The initial number of segments to create</param>
>     public BufferManager(int _SegmentChunks, int _ChunkSize, int _InitialSegments) {
>         m_SegmentChunks = _SegmentChunks;
>         m_ChunkSize = _ChunkSize;
>         m_SegmentSize = m_SegmentChunks * m_ChunkSize;
>         m_Buffers = new Queue<ArraySegment<byte>>(_SegmentChunks * _InitialSegments);
>         m_Segments = new List<byte[]>();
>         for (int i = 0; i < _InitialSegments; i++) {
>             CreateNewSegment();
>         }
>     }
>
>     #endregion
> }
>
> On 8/1/06, gregory young <[EMAIL PROTECTED]> wrote:
> > Ok, so I think everyone can agree that creating buffers on the fly in
> > an async socket server is bad ... there is a lot of literature
> > available on the problems this will cause with the heap. I am looking
> > at a few options to get around this.
> >
> > 1) Have a BufferPool class that hands out ArraySegment<byte> portions
> > of a larger array (large enough that it would be in the LOH). If all
> > of the array is used, create another big segment.
> >
> > 2) Create a bunch of smaller arrays for use by the buffer pool class
> > and have it hand them back.
> >
> > In both 1 and 2 I would probably have the connection use its buffer
> > for the duration of the connection. I would internally hold a list of
> > the free blocks. When a connection was done with its buffer it would
> > have to release it back to this pool. My thought is that #2 might be
> > better for dealing with cases where I want to shrink the number of
> > buffers allocated from the previous maximum if needed.
> > In general I lean towards #1 ... but figured I would check if I might
> > be missing something.
> >
> > Thanks in advance,
> >
> > Greg Young
>
> --
> If knowledge can create problems, it is not through ignorance that we
> can solve them.
>
> Isaac Asimov

--
Sébastien Lorion
Software Architect / Architecte organique
[EMAIL PROTECTED]
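For anyone wanting to try the class quoted above, here is a minimal usage sketch of the CheckOut/CheckIn pattern with the new BeginReceive overload that takes an IList<ArraySegment<byte>>. This is my example only; the pool sizes and the Receiver/StartReceive/OnReceive names are illustrative, not part of the original post:

using System;
using System.Collections.Generic;
using System.Net.Sockets;

class Receiver {
    // 1000 chunks of 8 KB per segment; sizes are illustrative,
    // but large enough that each segment lands on the LOH.
    private static readonly BufferManager s_Buffers = new BufferManager(1000, 8192);

    public static void StartReceive(Socket socket) {
        // Borrow a chunk from the pool and hand it to the async receive.
        ArraySegment<byte> buffer = s_Buffers.CheckOut();
        List<ArraySegment<byte>> buffers = new List<ArraySegment<byte>>();
        buffers.Add(buffer);
        socket.BeginReceive(buffers, SocketFlags.None, OnReceive,
            new object[] { socket, buffer });
    }

    private static void OnReceive(IAsyncResult ar) {
        object[] state = (object[])ar.AsyncState;
        Socket socket = (Socket)state[0];
        ArraySegment<byte> buffer = (ArraySegment<byte>)state[1];
        try {
            int read = socket.EndReceive(ar);
            // process buffer.Array from buffer.Offset for 'read' bytes here
        }
        finally {
            s_Buffers.CheckIn(buffer); // always hand the chunk back to the pool
        }
    }
}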
--
If knowledge can create problems, it is not through ignorance that we can solve them.

Isaac Asimov
