On Thu, Dec 24, 2009 at 12:10:51PM, Daniel Pocock wrote:
> Vladimir Vuksan wrote:
>>
>> The issue is the value of this data. If these were financial transactions
>> then no loss would be acceptable; however, these are not. They are
>> performance, trending data which get "averaged" down as time passes.
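The "averaging down" here is rrdtool's consolidation: each RRD keeps
several RRAs of progressively coarser averages, so old data survives only
in summarized form. A sketch of the kind of definition involved (the step,
heartbeat and row counts below are illustrative placeholders, not
necessarily what gmetad creates):

    # one gauge data source, consolidated into coarser and coarser averages
    rrdtool create metric.rrd --step 15 \
        DS:sum:GAUGE:120:U:U \
        RRA:AVERAGE:0.5:1:5856 \
        RRA:AVERAGE:0.5:4:20160 \
        RRA:AVERAGE:0.5:40:52704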
Vladimir Vuksan wrote:
> On Mon, 21 Dec 2009, Spike Spiegel wrote:
>
>>> a. Get all the rrds (rsync) from gmetad2 before you restart gmetad1
>> which, unless you have a small amount of data or a fast network between the
>> two nodes, won't complete before the next write is initiated, meaning
>> they won't be identical.
On Mon, 21 Dec 2009, Spike Spiegel wrote:
>> a. Get all the rrds (rsync) from gmetad2 before you restart gmetad1
> which, unless you have a small amount of data or a fast network between the
> two nodes, won't complete before the next write is initiated, meaning
> they won't be identical.
Granted they
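For reference, the pre-restart sync being discussed is roughly the
following (gmetad's default rrd_rootdir /var/lib/ganglia/rrds and the
host name gmetad2 are assumptions; adjust for your layout):

    # stop the local gmetad, pull the RRDs across, then restart
    /etc/init.d/gmetad stop
    rsync -a gmetad2:/var/lib/ganglia/rrds/ /var/lib/ganglia/rrds/
    /etc/init.d/gmetad start

Any RRD written on gmetad2 after rsync has already scanned it is missed,
which is exactly why the two trees end up close but not identical.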
On Sun, Dec 20, 2009 at 7:35 PM, Vladimir Vuksan wrote:
> If you lose a day or
> two or even a week of trending data, that is not gonna be a disaster as long
> as that data is present somewhere else.
sure, but where? how would the ganglia frontend tell?
> Thus I proposed a simple solution
> where e
On Sun, Dec 20, 2009 at 04:02:36PM, Spike Spiegel wrote:
> On Mon, Dec 14, 2009 at 10:28 AM, Carlo Marcelo Arenas Belon
> wrote:
> >
> >> b) you can afford to have duplicate storage - if your storage
> >> requirements are huge (retaining a lot of historic data or lots of data
> >> at short intervals)
On Sun, Dec 20, 2009 at 11:02, Spike Spiegel wrote:
> On Mon, Dec 14, 2009 at 10:28 AM, Carlo Marcelo Arenas Belon
> wrote:
>>> a) you are only concerned with redundancy and not looking for
>>> scalability - when I say scalability, I refer to the idea of maybe 3 or
>>> more gmetads running in parallel collecting data from huge numbers of agents
On Sun, Dec 20, 2009 at 10:49, Spike Spiegel wrote:
> On Mon, Dec 14, 2009 at 2:00 AM, Vladimir Vuksan wrote:
>> I think you guys are complicating this too much :-). Can't you simply have multiple
>> gmetads in different sites poll a single gmond? That way if one gmetad fails,
>> data is still available and updated on the other gmetads.
On Mon, Dec 14, 2009 at 10:28 AM, Carlo Marcelo Arenas Belon
wrote:
>> a) you are only concerned with redundancy and not looking for
>> scalability - when I say scalability, I refer to the idea of maybe 3 or
>> more gmetads running in parallel collecting data from huge numbers of agents
>
> what i
On Mon, Dec 14, 2009 at 2:00 AM, Vladimir Vuksan wrote:
> I think you guys are complicating this too much :-). Can't you simply have multiple
> gmetads in different sites poll a single gmond? That way if one gmetad fails,
> data is still available and updated on the other gmetads. That is what we
> used to do.
On Mon, Dec 14, 2009 at 09:26:01AM, Daniel Pocock wrote:
> Vladimir Vuksan wrote:
> > I think you guys are complicating this too much :-). Can't you simply have
> > multiple gmetads in different sites poll a single gmond? That way if
> > one gmetad fails, data is still available and updated on the other gmetads.
Vladimir Vuksan wrote:
> I think you guys are complicating this too much :-). Can't you simply have
> multiple gmetads in different sites poll a single gmond? That way if
> one gmetad fails, data is still available and updated on the other
> gmetads. That is what we used to do.
That is a good solution un
I think you guys are complicating this too much :-). Can't you simply have multiple
gmetads in different sites poll a single gmond? That way if one gmetad
fails, data is still available and updated on the other gmetads. That is
what we used to do.
Vladimir
On Sun, 13 Dec 2009, Spike Spiegel wrote:
> in
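In gmetad.conf terms, the setup Vladimir describes is just the same
data_source line on every collector, e.g. (cluster name and host below are
placeholders; 8649 is gmond's default port):

    # identical entry on each redundant gmetad host,
    # polling the same gmond every 15 seconds
    data_source "mycluster" 15 gmond-host.example.com:8649

Each gmetad then polls and writes its own RRDs independently, which gives
redundancy but, as discussed above, two trees that slowly drift apart.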
On Fri, Dec 11, 2009 at 1:34 PM, Daniel Pocock wrote:
> Thanks for sharing this - could you comment on the total number of RRDs per
> gmetad, and do you use rrdcached?
the largest colo has 140175 rrds and we use the tmpfs + cron hack, no rrdcached.
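For anyone unfamiliar with it, the tmpfs + cron hack is usually something
along these lines (mount point, size, target path and interval are all
assumptions, and you also need to copy the tree back into the tmpfs after
a reboot):

    # keep the live RRDs in RAM so gmetad's many small writes stay cheap
    mount -t tmpfs -o size=2g tmpfs /var/lib/ganglia/rrds
    # crontab entry: flush the in-memory tree to persistent disk
    */10 * * * * rsync -a /var/lib/ganglia/rrds/ /var/lib/ganglia/rrds.persist/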
> I was thinking about gmetads attached to the
Spike Spiegel wrote:
> On Wed, Nov 25, 2009 at 4:20 PM, Daniel Pocock wrote:
>
>> One problem I've been wondering about recently is the scalability of
>> gmetad/rrdtool.
>>
>
> [cut]
>
>
>> In a particularly large organisation, moving around the RRD files as
>> clusters grow could become quite a chore.
On Wed, Nov 25, 2009 at 4:20 PM, Daniel Pocock wrote:
> One problem I've been wondering about recently is the scalability of
> gmetad/rrdtool.
[cut]
> In a particularly large organisation, moving around the RRD files as
> clusters grow could become quite a chore. Is anyone putting their RRD
> f