Oh that's awesome!  So it looks at the value, and if it is going to be too 
large it automatically changes the AllocatedUnits to act as a scaling factor.  
Do we know which version this feature was first released in?

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf 
Of Dave Shield
Sent: Thursday, March 01, 2012 10:03 AM
To: Day, Robert
Cc: [email protected]
Subject: Re: How does the hrStorageTable handle very large disks (> 20tb, for 
instance).

On 1 March 2012 14:45, Day, Robert <[email protected]> wrote:
> Most of our servers are running 5.5.

> Would you be able to point me to a resource that shows how to set the 
> AllocatedUnit manually rather than having it be blocksize?

What the current code does is take the overall size and blocksize values, and 
shift them up/down one bit at a time, until the size fits into a
32-bit value.  This is handled automatically - no manual tweaking is needed.
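The scaling described above can be sketched roughly as follows.  (This is a
minimal illustration, not the actual net-snmp code - the function name
`scale_storage` and its interface are invented for the example; the real
implementation is in the commit linked below.)

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: scale a 64-bit total size so the unit count
 * fits in a signed 32-bit Integer32 (as hrStorageSize requires),
 * doubling the allocation-unit size (one bit shift) each time the
 * count is still too large. */
static void scale_storage(uint64_t total_bytes,
                          uint64_t *units, uint64_t *unit_size)
{
    uint64_t size  = *unit_size ? *unit_size : 1;
    uint64_t count = total_bytes / size;

    while (count > 0x7fffffffUL) {  /* max positive Integer32 */
        size  <<= 1;                /* double the allocation unit */
        count >>= 1;                /* halve the reported count   */
    }
    *units     = count;
    *unit_size = size;
}
```

For example, a 20 TB disk with 512-byte blocks would report roughly
1,342,177,280 units of 16,384 bytes each instead of overflowing the
32-bit counter.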

This functionality was only added relatively recently (Feb last year), so 
doesn't appear to be in any of the current releases.

I'd suggest that you try retrieving the current master Git source and 
compiling it yourself, to see if that works for you.

(If you want to see the specific changes that were involved, see 
http://net-snmp.git.sourceforge.net/git/gitweb.cgi?p=net-snmp/net-snmp;a=commit;h=71d8293f387a6cd66bb0dbb13c0f50174d2e678b)

Dave

