On Jun 10, 2013, at 14:06, Robert Hajime Lanning wrote:
>>
>>> Think about it... Hope this helps.
>>
>> Sorry to be insistent, but this really bothers me.
>
> It is persisted to disk. You can check this yourself by going through
> /var/lib/glusterd and looking at the .vol file and others.
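For anyone who wants to check this themselves, a quick way to inspect the persisted state (paths as on a typical 3.x install; the volume name "dist" is the one from this thread, adjust to yours):

```shell
# glusterd keeps its full state under /var/lib/glusterd;
# each volume gets its own directory with the generated .vol files.
ls /var/lib/glusterd/vols/

# 'info' holds the volume definition (type, bricks, options)...
cat /var/lib/glusterd/vols/dist/info

# ...and the .vol files are the translator graphs the daemons load.
ls /var/lib/glusterd/vols/dist/*.vol
```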
Hi,
I created the following volume some time ago:
gluster> volume info
Volume Name: dist
Type: Striped-Replicate
Volume ID: 045d90f6-3881-4c63-88d6-5b2b024f2db5
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: galatea-ib:/data/dist1
Brick2: mimas-ib:/data/dist1
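For reference, a 1 x 2 x 2 = 4 striped-replicated volume like the one above would have been created with something along these lines. This is a sketch, not the exact command used: only two bricks are shown in the output, so server3-ib/server4-ib here are hypothetical placeholders for the remaining pair.

```shell
# stripe 2 + replica 2 over four bricks => Number of Bricks: 1 x 2 x 2 = 4
gluster volume create dist stripe 2 replica 2 transport tcp \
    galatea-ib:/data/dist1 mimas-ib:/data/dist1 \
    server3-ib:/data/dist1 server4-ib:/data/dist1
gluster volume start dist
```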
On Mon, Jun 10, 2013 at 3:35 PM, Jay Vyas wrote:
> Hi James: I didn't know you were behind this :) I saw it the other day ..
>
> I guess I'd better play around with https://forge.gluster.org/puppet-gluster to
> see what's available, and maybe I'll post directly here or leave feedback on
> glusterforge
Cool
Hi James: I didn't know you were behind this :) I saw it the other day ..
I guess I'd better play around with https://forge.gluster.org/puppet-gluster
to see what's available, and maybe I'll post directly here or leave feedback
on glusterforge
On Mon, Jun 10, 2013 at 3:23 PM, John Mark Walker wrote:
>
>> https://github.com/purpleidea/puppet-gluster
>
> I'm pretty sure you meant https://forge.gluster.org/puppet-gluster :)
Err, um, yeah! I did mean that! Actually, I updated my main repo,
and when I tried to push there I ran into problems
- Original Message -
> On Mon, Jun 10, 2013 at 3:01 PM, Jay Vyas wrote:
> > Hi gluster !
> >
> > Should an automated KVM gluster deployment creation script be included in
> > the gluster source code, so that people can easily test and build gluster
> > issues in a transparent and reproducible manner?
On Mon, Jun 10, 2013 at 3:01 PM, Jay Vyas wrote:
> Hi gluster !
>
> Should an automated KVM gluster deployment creation script be included in
> the gluster source code, so that people can easily test and build gluster
> issues in a transparent and reproducible manner?
I think so. I think this is
Hi gluster !
Should an automated KVM gluster deployment creation script be included in
the gluster source code, so that people can easily test and build gluster
issues in a transparent and reproducible manner?
We could centralize it in gluster so that anyone could spin up a gluster
instance and
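As a rough sketch of what such a script might look like (every name, image path, and size here is a made-up placeholder, not anything that exists in the gluster tree):

```shell
#!/bin/sh
# Hypothetical sketch: spin up N KVM guests for a throwaway gluster test cluster.
N=4
BASE_IMG=/var/lib/libvirt/images/centos-base.qcow2   # assumed prebuilt base image

for i in $(seq 1 "$N"); do
    # Thin clone of the base image per node, so guests stay cheap to create.
    qemu-img create -f qcow2 -b "$BASE_IMG" \
        "/var/lib/libvirt/images/gluster-node$i.qcow2"
    # Boot the clone as a libvirt guest on the default NAT network.
    virt-install --name "gluster-node$i" --ram 1024 --vcpus 1 \
        --disk "path=/var/lib/libvirt/images/gluster-node$i.qcow2" \
        --import --network network=default --noautoconsole
done
```

From there a provisioning step (puppet, or plain ssh) would install gluster, probe the peers, and create a test volume.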
On Fri, Jun 7, 2013 at 3:59 PM, Martin Röbert
wrote:
> Hey James,
>
> I don't use a gluster specific module - I compile and run gluster via exec
> commands.
Sounds like ghosts.
I can suggest two things:
1) Either use one of the already available Puppet modules.
I wrote one and semiosis wrote one
I am out of the office until 06/17/2013.
I will be out of the office for training.
Note: This is an automated response to your message "Gluster-users Digest,
Vol 62, Issue 21" sent on 6/10/2013 7:00:01 AM.
This is the only notification you will receive while this person is away.
Hi,
@Joop:
thanks for your info! :)
To be honest I planned the same here, as iptables will work, but it also brings
some extra CPU load and network latency with it.
@all:
I read in the documentation that the Gluster server pool automatically tells the
GlusterFS client which storage server it should use
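That matches how the native client works: it only needs one reachable server to fetch the volume graph, after which it talks to all bricks directly. A fallback volfile server can be given at mount time; the option name below is from the 3.3/3.4-era mount.glusterfs docs, and the hostnames are the ones from earlier in this digest, so check both against your setup:

```shell
# The client fetches the volfile from galatea-ib, but falls back
# to mimas-ib if that server is down at mount time.
mount -t glusterfs -o backupvolfile-server=mimas-ib \
    galatea-ib:/dist /mnt/dist
```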
Hey James,
I don't use a gluster specific module - I compile and run gluster via exec
commands.
Thanks
Martin
Am 07.06.2013 um 21:15 schrieb James :
> Which Puppet module are you using?
>
> On Wed, May 15, 2013 at 8:59 AM, Martin Röbert
> wrote:
>> Hey there,
>>
>> I try to install the la
Hi Jeff,
many thanks for the fast response... the DNS || iptables solution is pretty easy
to implement; you made my day! :)
I'm really looking forward to the GlusterFS roadmap, great work!!
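In case it helps anyone else reading the thread, a minimal iptables sketch for the 3.3-era ports might look like this (24007/24008 for the management daemon, 24009 and up for bricks; 3.4 moved bricks to 49152+, and the 10.0.0.0/24 trusted network here is just an example):

```shell
# Allow the trusted client network to reach glusterd and the brick ports...
iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 24007:24008 -j ACCEPT
iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 24009:24029 -j ACCEPT
# ...and drop everyone else.
iptables -A INPUT -p tcp --dport 24007:24029 -j DROP
```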
Cheers,
Sven
Sven Knohsalla | System Administration | Netbiscuits
Office +49 631 68036 433 | Fax +49 631 68036 1
On 06/10/13 01:07, Wolfgang Hennerbichler wrote:
On 10.06.2013, at 09:48, James wrote:
On Mon, Jun 10, 2013 at 1:16 AM, Wolfgang Hennerbichler wrote:
On Jun 9, 2013, at 23:30 , Mohit Anchlia wrote:
volume changes should survive the node failures
even if ALL nodes fail?
I mean if all th
On Mon, Jun 10, 2013 at 4:07 AM, Wolfgang Hennerbichler wrote:
>
>> Think about it... Hope this helps.
>
> Sorry to be insistent, but this really bothers me.
I think you need to set up a small test cluster (maybe even some VMs)
and experiment yourself. It's the best way to learn :)
On 10.06.2013, at 09:48, James wrote:
> On Mon, Jun 10, 2013 at 1:16 AM, Wolfgang Hennerbichler
> wrote:
>> On Jun 9, 2013, at 23:30 , Mohit Anchlia wrote:
>>
>>> volume changes should survive the node failures
>>
>> even if ALL nodes fail?
> I mean if all the nodes "fail", then this is equivalent to setting all
> of the hardware on fire...
On Mon, Jun 10, 2013 at 1:16 AM, Wolfgang Hennerbichler wrote:
> On Jun 9, 2013, at 23:30 , Mohit Anchlia wrote:
>
>> volume changes should survive the node failures
>
> even if ALL nodes fail?
I mean if all the nodes "fail", then this is equivalent to setting all
of the hardware on fire...
So no