Hi Tobi,
On Thu, Jun 12, 2008 at 08:16:59AM +0200, Tobias Oetiker wrote:
> > Tobi, btw., the patch I posted to this thread earlier could imho go into
> > the 1.4 development branch. I don't think it should be put into 1.3 as
> > it breaks the ABI (there are a couple of functions expecting off_t which
> > would then expect 8 bytes for that argument).
Hi Sebastian,
> Tobi, btw., the patch I posted to this thread earlier could imho go into
> the 1.4 development branch. I don't think it should be put into 1.3 as
> it breaks the ABI (there are a couple of functions expecting off_t which
> would then expect 8 bytes for that argument).
but you were
Hi Karl,
On Thu, Jun 12, 2008 at 12:41:28AM +0200, Karl Fischer wrote:
> >> If that's the case I can split up into multiple rrd's
> >> with a lower number of DS per rrd and each of them will
> >> still stay below 2GB ...
> >>
> >> Any comments on the performance such a rrd might show?
> >> Think about storing values for a 5+ year period with a
> >> one minute resolution ...
Hi Nick,
Yesterday Nick Ellson wrote:
>
> I have been pointed to RRDtool as the possible cause of an artifact I am
> seeing in my Cacti graphs of a Cisco ASA VPN Connection Count OID. If I
> understand what is happening, the squarewave-looking graphs I had in MRTG
> showing the number of remote users currently logged in are not possible
> with
Nick Ellson wrote:
> Google landed me on something finally...
>
> http://osdir.com/ml/db.rrdtool.user/2002-10/msg00091.html
>
> Though if there are Cacti pros that have solved this and wouldn't mind
> sharing how you did it, I would appreciate it. I am barely getting the hang
> of modifying graph templates and such.
Google landed me on something finally...
http://osdir.com/ml/db.rrdtool.user/2002-10/msg00091.html
Though if there are Cacti pros that have solved this and wouldn't mind
sharing how you did it, I would appreciate it. I am barely getting the hang
of modifying graph templates and such.
Nick
I have been pointed to RRDtool as the possible cause of an artifact I am
seeing in my Cacti graphs of a Cisco ASA VPN Connection Count OID. If I
understand what is happening, the squarewave-looking graphs I had in MRTG
showing the number of remote users currently logged in are not possible
with
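The fix the osdir thread below points at is to store an instantaneous value
like a session count as a GAUGE rather than a COUNTER, which would graph the
rate of change instead of the count itself. A minimal sketch with the
rrd-python binding; the file name, DS name, and intervals are illustrative,
not taken from Nick's setup:

  import rrdtool

  # A VPN session count is an instantaneous value, so store it as a GAUGE.
  rrdtool.create(
      'vpn_sessions.rrd',
      '--step', '300',                # Cacti's usual 5-minute polling
      'DS:sessions:GAUGE:600:0:U',    # heartbeat 600s, min 0, no max
      'RRA:AVERAGE:0.5:1:2016')       # one week of 5-minute samples

  # Feed it the value polled from the ASA OID (42 is a placeholder):
  rrdtool.update('vpn_sessions.rrd', 'N:42')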
> Hi Karl,
>
>> If that's the case I can split up into multiple rrd's
>> with a lower number of DS per rrd and each of them will
>> still stay below 2GB ...
>>
>> Any comments on the performance such a rrd might show?
>> Think about storing values for a 5+ year period with a
>> one minute resolution ...
> [1] http://bugs.debian.org/451852#49
Sebastian,
just studied the thread ...
Well, creating a database that spans ~1150 years not only breaks LONG,
it also breaks the current 32-bit UNIX time_t, which allows for a range
of only ~68 years (01.01.1970 - 19.01.2038) ...
;-)
- Karl
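Karl's ~68 years falls straight out of the width of a signed 32-bit time_t;
a quick sanity check in Python:

  # A signed 32-bit time_t counts seconds from 01.01.1970 and overflows
  # after 2**31 - 1 seconds.
  max_seconds = 2**31 - 1
  years = max_seconds / (365.25 * 24 * 3600)
  print(round(years, 2))   # -> 68.05, which lands on 19.01.2038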
Hi Karl,
> If that's the case I can split up into multiple rrd's
> with a lower number of DS per rrd and each of them will
> still stay below 2GB ...
>
> Any comments on the performance such a rrd might show?
> Think about storing values for a 5+ year period with a
> one minute resolution ... (3+
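For a rough feel for the sizes under discussion: at one-minute resolution,
five years is about 2.6 million rows, and each stored value costs 8 bytes
per data source. A back-of-the-envelope estimate (the 200-DS figure is the
upper end of the range quoted above):

  rows = 5 * 365.25 * 24 * 60        # ~2.63 million one-minute rows
  data_sources = 200
  total = rows * data_sources * 8    # every stored value is an 8-byte double
  print('%.1fM rows, ~%.1f GiB' % (rows / 1e6, total / 2.0**30))
  # -> 2.6M rows, ~3.9 GiB -- before file headers and any extra RRAs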
> Hi Karl,
>
> (Not going to comment on the usefulness of having such large RRD's. ;-))
Hi Sebastian,
what should I do if the requirement is to keep data for a long period
of time without losing resolution?
> On Wed, Jun 11, 2008 at 03:14:18AM +0200, Karl Fischer wrote:
>> I'm planning to create a really huge rrd with 5-10 million rows
Hi,
On Wed, Jun 11, 2008 at 11:34:28PM +0200, Sebastian Harl wrote:
> Attached to this E-mail you can find a patch that adds _very_ _basic_
> large file support.
D'oh! Now, it really is attached to this E-mail ;-)
Cheers,
Sebastian
--
Sebastian "tokkee" Harl +++ GnuPG-ID: 0x8501C7FC +++ http:/
Hi Karl,
(Not going to comment on the usefulness of having such large RRD's. ;-))
On Wed, Jun 11, 2008 at 03:14:18AM +0200, Karl Fischer wrote:
> I'm planning to create a really huge rrd with 5-10 million rows
> holding about 100-200 entries each, so the complete rrd will reach
> a size of 5 .. 20 GB ...
On Wed, Jun 11, 2008 at 11:38:53AM -0400, Emily Chouinard wrote:
> Any pointers in the right direction or suggestions would be appreciated.
You could play with the contents of /proc/stat to monitor CPU
usage:
[...]
cpu 823945 102664 419218 112203483 434859 6285 334160 0 0
[...]
just create COUNTER data sources ...
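A minimal sketch of that approach with the rrd-python binding. Note it uses
DERIVE with a minimum of 0 instead of plain COUNTER, a common substitution
that yields unknown rather than a huge bogus spike if the kernel counters
ever reset; the file and DS names are made up:

  import rrdtool

  # The first line of /proc/stat holds cumulative jiffy counts:
  #   cpu  user nice system idle iowait irq softirq ...
  with open('/proc/stat') as f:
      fields = f.readline().split()
  user, system, idle = int(fields[1]), int(fields[3]), int(fields[4])

  rrdtool.create(
      'cpu.rrd',
      '--step', '60',                 # one sample per minute
      'DS:user:DERIVE:120:0:U',       # min 0 guards against counter resets
      'DS:system:DERIVE:120:0:U',
      'DS:idle:DERIVE:120:0:U',
      'RRA:AVERAGE:0.5:1:1440')       # one day of 1-minute averages

  rrdtool.update('cpu.rrd', 'N:%d:%d:%d' % (user, system, idle))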
So, being a beginner, I have worked through all the available examples on the
web; those by Corey Goldberg were by far the most helpful. What I would
like to do now is work with my Linux computer and measure something in
real time to create my very first RRD. I am working with the rrd-python
binding,
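For other beginners landing on this thread, here is a complete hypothetical
"first RRD" with the rrd-python binding, sampling the 1-minute load average
once per minute; none of the names or numbers come from the original post:

  import os
  import time
  import rrdtool

  # One GAUGE sample per minute, keeping 24 hours of raw values.
  rrdtool.create(
      'load.rrd',
      '--step', '60',
      'DS:load1:GAUGE:120:0:U',       # heartbeat 120s, min 0, no max
      'RRA:AVERAGE:0.5:1:1440')

  # Sample the 1-minute load average a few times, once per step.
  for _ in range(3):
      rrdtool.update('load.rrd', 'N:%f' % os.getloadavg()[0])
      time.sleep(60)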
Users!
After many beta and rc versions I have finally released RRDtool 1.3.0.
Get it from http://oss.oetiker.ch/rrdtool/
Here is the news:
NEW Fast file access methods (Bernhard Fischer / Tobi Oetiker)
* introduced file-accessor functions rrd_rea
On Wed, Jun 11, 2008 at 12:39 AM, Fabien Wernli <[EMAIL PROTECTED]> wrote:
> On Wed, Jun 11, 2008 at 03:14:18AM +0200, Karl Fischer wrote:
>> I'm planning to create a really huge rrd with 5-10 million rows
>> holding about 100-200 entries each, so the complete rrd will reach
>> a size of 5 .. 20 GB
On Wed, Jun 11, 2008 at 03:14:18AM +0200, Karl Fischer wrote:
> I'm planning to create a really huge rrd with 5-10 million rows
> holding about 100-200 entries each, so the complete rrd will reach
> a size of 5 .. 20 GB ...
> Are there any problems to expect when creating and using an rrd
> that si
Hi Karl,
Today Karl Fischer wrote:
> Tobias Oetiker wrote:
> > Hi Karl,
> >
> >> so do you mean splitting up the rrd into multiple files by putting
> >> only a few (10 to 20) entries into each file (but still holding 5-10
> >> million rows) would be better?
> >
> > it would certainly be more in line with what other people do ...
Tobias Oetiker wrote:
> Hi Karl,
>
>> so do you mean splitting up the rrd into multiple files by putting
>> only a few (10 to 20) entries into each file (but still holding 5-10
>> million rows) would be better?
>
> it would certainly be more in line with what other people do ...
>
> do you act
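A sketch of what that split might look like with the rrd-python binding,
assuming the values can be grouped arbitrarily; the file naming, GAUGE type,
and chunk size are all made up for illustration:

  import rrdtool

  TOTAL_DS = 200      # total values to store, upper end of Karl's range
  DS_PER_FILE = 20    # a few DS per file instead of one huge rrd
  STEP = 60           # one-minute resolution
  ROWS = 2629800      # five years of one-minute rows

  for i in range(TOTAL_DS // DS_PER_FILE):
      args = ['chunk%02d.rrd' % i, '--step', str(STEP)]
      # 20 DS per file: ~20 * 2.6M rows * 8 bytes = ~420 MB per file,
      # comfortably below the 2GB limit discussed in this thread.
      args += ['DS:val%03d:GAUGE:%d:U:U' % (i * DS_PER_FILE + j, STEP * 2)
               for j in range(DS_PER_FILE)]
      args.append('RRA:AVERAGE:0.5:1:%d' % ROWS)
      rrdtool.create(*args)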