Yes, I think an option (--force maybe) that says "I know I'm about to lose
data, that's what I want" sounds like a reasonable compromise. And the
given examples have clarified the current behavior in my mind. Thank you
for the replies everyone. I'm better informed now.
Adding a --yes-i-know-what-im-doing type option is something I would get behind
(and have suggested, myself). File a bug report as an enhancement request.
"Dr. Jörg Petersen" wrote:
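The guard being proposed could work like the sketch below: a wrapper that refuses to touch a brick unless the caller passes an explicit flag. This is purely illustrative — the flag name and the `wipe_brick` helper are hypothetical, not a real gluster option.

```shell
# Hypothetical sketch of the proposed "--yes-i-know" style guard.
# wipe_brick and --force are illustrative names, not gluster options.
wipe_brick() {
    brick=$1; flag=$2
    if [ "$flag" != "--force" ]; then
        echo "refusing to wipe $brick without --force" >&2
        return 1
    fi
    rm -rf "$brick/.glusterfs"
}

d=$(mktemp -d)
mkdir -p "$d/.glusterfs"
wipe_brick "$d" || echo "blocked"         # no flag: nothing is deleted
wipe_brick "$d" --force && echo "wiped"   # explicit flag: proceeds
```

The point of the two-call usage is that the destructive path is unreachable by accident: forgetting the flag leaves the data in place.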
Hello,
what I regularly do:
1) Create a snapshot (btrfs) of the brick
2) reassemble the snapshots into a new (snapshot) Gluster volume
When reassembling the snapshots I have to remove all xattrs and the
.glusterfs directory.
Since btrfs is painfully slow at deleting, I would prefer an option to re
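The cleanup described above can be sketched against a scratch directory standing in for a snapshot of a brick. The xattr names (`trusted.glusterfs.volume-id`, `trusted.gfid`) are the ones gluster conventionally sets on a brick root, but verify them against your gluster version; `setfattr` comes from the attr package.

```shell
# Scratch directory standing in for a btrfs snapshot of a brick.
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.glusterfs/00/00"
touch "$BRICK/important-data"

# Drop gluster's internal tree but keep the user data:
rm -rf "$BRICK/.glusterfs"
# Clear the brick-root xattrs; guarded since setfattr may be absent
# and the xattrs are unset on this scratch dir anyway:
setfattr -x trusted.glusterfs.volume-id "$BRICK" 2>/dev/null || true
setfattr -x trusted.gfid "$BRICK" 2>/dev/null || true

ls -A "$BRICK"
```

On a real snapshot the `rm -rf` of `.glusterfs` is the slow step on btrfs, which is what motivates wanting an option to skip it.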
On 09/20/2012 11:56 AM, Doug Hunley wrote:
On Thu, Sep 20, 2012 at 2:47 PM, Joe Julian wrote:
Because it's a vastly higher priority to preserve data. Just because I
delete a volume doesn't mean I want the data deleted. In fact, more often
than not, it's quite the opposite. The barrier to data loss is high, and
it should remain high.
On Thu, Sep 20, 2012 at 02:56:00PM -0400, Doug Hunley wrote:
> OK, again I'll ask: what is a typical scenario for me as a gluster
> admin to delete a volume and want to add one (or more) of its former
> bricks to another volume and keep that data intact? I can't think of
> a real world example.
Y
On Thu, Sep 20, 2012 at 2:47 PM, Joe Julian wrote:
> Because it's a vastly higher priority to preserve data. Just because I
> delete a volume doesn't mean I want the data deleted. In fact, more often
> than not, it's quite the opposite. The barrier to data loss is high, and it
> should remain high
On Wed, Sep 19, 2012 at 4:05 PM, Anand Avati wrote:
> There have been far too many instances where users delete a volume, but fail
> to understand that those brick directories still contain their data
I think this is kinda the point though. Why would the user expect the
brick to still contain the data?
> > > you can simply rmdir/mkdir the directory when you want to delete
> > > a gluster volume.
> > >
> > > You can clear the xattrs or "nuke it from orbit" with mkfs on the
> > > volume device.
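The "rm and recreate" route from the quote above, sketched on a scratch path. On a real system the brick path would be a directory inside the mounted filesystem; the mkfs alternative destroys everything on the device and is shown only as a comment.

```shell
# Scratch path standing in for a brick directory.
BRICK=$(mktemp -d)/brick
mkdir -p "$BRICK/.glusterfs"

rm -rf "$BRICK"     # throw the whole brick directory away...
mkdir -p "$BRICK"   # ...and recreate it empty, with no xattrs

# Heavier alternative, wiping the underlying device entirely
# (device name is illustrative):
#   mkfs.xfs -f /dev/sdb1

ls -A "$BRICK"      # empty: safe to hand to "gluster volume create"
```

Recreating the directory sidesteps the xattr problem entirely, since a fresh directory carries no gluster metadata.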
Hi Harry,
Thanks for your reply. I tried to manually delete everything from the
brick filesystems (including the hidden files/dirs), but that didn't
help:
[root@farm-ljf0 ~]# ls -la /mnt/sdb1/
total 4
drwxr-xr-x 2 root root 6 Sep 18 11:22 .
drwxr-xr-x. 3 root root 17 Sep 13 09:45 ..
You can clear the xattrs or "nuke it from orbit" with mkfs on the volume device.
- Original Message -
From: "Lonni J Friedman"
To: gluster-users@gluster.org
Sent: Tuesday, September 18, 2012 2:03:35 PM
Subject: [Gluster-users] cannot create a new volume with a brick that used to
be part of a deleted vol
I believe gluster writes 2 entries into the top level of your gluster brick
filesystems:
-rw-r--r--   2 root root   36 2012-06-22 15:58 .gl.mount.check
drw------- 258 root root 8192 2012-04-16 13:20 .glusterfs
You will have to remove these as well as all the other fs info from the volume
to reuse it.
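A quick way to check a brick for the leftovers Harry lists, sketched on a scratch directory that simulates them (on a real brick you would run the `ls` and `getfattr` lines directly against the brick root). `getfattr` is in the attr package and is guarded here in case it is not installed.

```shell
# Simulate the leftover entries on a scratch directory.
BRICK=$(mktemp -d)
touch "$BRICK/.gl.mount.check"
mkdir -p "$BRICK/.glusterfs"

# Hidden entries don't show up in plain ls, which is why manual
# deletes often miss them:
ls -la "$BRICK"

# xattrs on the brick root are invisible to ls entirely; getfattr
# dumps them all:
getfattr -d -m . "$BRICK" 2>/dev/null || true
```

If either the hidden entries or the brick-root xattrs survive, a subsequent `gluster volume create` will refuse the brick as already in use.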
Greetings,
I'm running v3.3.0 on Fedora16-x86_64. I used to have a replicated
volume on two bricks. This morning I deleted it successfully:
[root@farm-ljf0 ~]# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to
continue? (y/n) y
Stopping volume gv0 has been successful