Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
On 2015-02-12 02:55, Atin Mukherjee wrote:
> On 02/12/2015 12:36 AM, Ernie Dunbar wrote:
>> I nuked the entire partition with mkfs, just to be *sure*, and I
>> still get the error message:
>>
>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>
>> Clearly, there's some bit of data being kept somewhere else besides
>> in /brick1?
>
> This shouldn't happen unless you have an existing volume or you
> haven't removed the xattrs. Can you please double check the output of
> gluster volume info? Also you can query for the xattr with this path.
>
> ~Atin

Okay, here we go:

root@nfs1:/var/log/glusterfs# xattr -l /brick1/gv0/
trusted.glusterfs.volume-id: 7E 71 B0 DD D6 8A 46 55 B8 BE 21 D1 81 E1 C4 C9  ~qFU..!.

I couldn't find any xattr on /brick1/.

root@nfs2:/home/ernied# xattr -l /brick1/gv0/
root@nfs2:/home/ernied#

Gluster volume info:

root@nfs1:/var/log/glusterfs# gluster volume info
No volumes present

root@nfs2:/home/ernied# gluster volume info
No volumes present

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
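[The trusted.glusterfs.volume-id xattr shown above on nfs1 is exactly what glusterd's staging check rejects as "already part of a volume". A minimal cleanup sketch follows; it is an assumption based on this thread, not a verified procedure: it must run as root on nfs1, needs setfattr from the attr package, and uses this thread's brick path.]

```shell
# Sketch: remove the stale volume-id xattr on nfs1 that the failed
# "volume create" staging check trips over. Run as root.
setfattr -x trusted.glusterfs.volume-id /brick1/gv0

# Some leftover bricks also carry a trusted.gfid xattr and internal
# .glusterfs metadata; clear those too if present (hedged: may not
# exist on a freshly mkfs'd partition, hence the || true).
setfattr -x trusted.gfid /brick1/gv0 2>/dev/null || true
rm -rf /brick1/gv0/.glusterfs
```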
Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
Sorry to top-post, Roundcube doesn't do quoting properly.

The answer to your question is yes, both NFS1 and NFS2 had /brick1
formatted. I find some information about the brick in /

Also, here are my logs from the last time I tried to create the volume
on NFS1:

[2015-02-11 18:59:58.394420] E [glusterd-utils.c:7944:glusterd_new_brick_validate] 0-management: Host nfs2 is not in 'Peer in Cluster' state
[2015-02-11 18:59:58.394476] E [glusterd-volume-ops.c:896:glusterd_op_stage_create_volume] 0-management: Host nfs2 is not in 'Peer in Cluster' state
[2015-02-11 18:59:58.394488] E [glusterd-syncop.c:1151:gd_stage_op_phase] 0-management: Staging of operation 'Volume Create' failed on localhost : Host nfs2 is not in 'Peer in Cluster' state
[2015-02-11 19:00:05.842154] I [glusterd-handler.c:1015:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req nfs2 24007
[2015-02-11 19:00:15.378343] E [glusterd-utils.c:8112:glusterd_is_path_in_use] 0-management: /brick1/gv0 is already part of a volume
[2015-02-11 19:00:15.378364] E [glusterd-syncop.c:1151:gd_stage_op_phase] 0-management: Staging of operation 'Volume Create' failed on localhost : /brick1/gv0 is already part of a volume
[2015-02-11 19:01:41.295018] I [glusterd-handler.c:1280:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2015-02-11 19:20:28.482341] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2015-02-11 19:20:28.482408] E [glusterd-op-sm.c:3195:glusterd_dict_set_volid] 0-management: Volume gv0 does not exist
[2015-02-11 19:20:28.482421] E [glusterd-syncop.c:1610:gd_sync_task_begin] 0-management: Failed to build payload for operation 'Volume Status'
[2015-02-11 20:26:31.899032] E [glusterd-utils.c:8112:glusterd_is_path_in_use] 0-management: /brick1/gv0 is already part of a volume
[2015-02-11 20:26:31.899064] E [glusterd-syncop.c:1151:gd_stage_op_phase] 0-management: Staging of operation 'Volume Create' failed on localhost : /brick1/gv0 is already part of a volume

On 2015-02-12 07:21, Justin Clift wrote:
> On 11 Feb 2015, at 19:06, Ernie Dunbar wrote:
>> I nuked the entire partition with mkfs, just to be *sure*, and I
>> still get the error message:
>>
>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>
>> Clearly, there's some bit of data being kept somewhere else besides
>> in /brick1?
>
> Yeah, this frustrates the heck out of me every time too.
>
> As a thought, did you nuke the /brick1/gv0 directory on *both* of the
> servers? Looking at the cut-n-pasted log below, it seems like you
> nuked the dir on nfs1, but not on nfs2.
Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
On 11 Feb 2015, at 19:06, Ernie Dunbar wrote:
> I nuked the entire partition with mkfs, just to be *sure*, and I
> still get the error message:
>
> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>
> Clearly, there's some bit of data being kept somewhere else besides
> in /brick1?

Yeah, this frustrates the heck out of me every time too.

As a thought, did you nuke the /brick1/gv0 directory on *both* of the
servers? Looking at the cut-n-pasted log below, it seems like you
nuked the dir on nfs1, but not on nfs2.

And it'd probably really help on our end if the "failed: /brick1/gv0
is already part of a volume" message included the node name as well,
just for it to be super clear. ;)

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes,
and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
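[A quick way to act on Justin's suggestion and confirm which node still carries the stale xattr; a sketch that assumes root ssh access to both hosts and getfattr from the attr package.]

```shell
# Dump the Gluster-reserved xattrs on the brick directory of each
# node; a surviving trusted.glusterfs.volume-id is what makes the
# brick "already part of a volume".
for host in nfs1 nfs2; do
  echo "== $host =="
  ssh root@"$host" 'getfattr -d -m . -e hex /brick1/gv0 2>/dev/null'
done
```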
Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
On 02/12/2015 12:36 AM, Ernie Dunbar wrote:
> I nuked the entire partition with mkfs, just to be *sure*, and I still
> get the error message:
>
> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>
> Clearly, there's some bit of data being kept somewhere else besides in
> /brick1?

This shouldn't happen unless you have an existing volume or you haven't
removed the xattrs. Can you please double check the output of gluster
volume info? Also you can query for the xattr with this path.

~Atin

> On 2015-02-11 01:03, Kaushal M wrote:
>> This happens because of 2 things:
>>
>> 1. GlusterFS writes an extended attribute containing the volume-id to
>>    every brick directory when a volume is created. This is done to
>>    prevent data being written to the root partition, in case the
>>    partition containing the brick wasn't mounted for any reason.
>> 2. Deleting a GlusterFS volume does not remove any data in the brick
>>    directories, or the brick directories themselves. We leave the
>>    decision of cleaning up the data to the user. The extended
>>    attribute is also not removed, so that an unused brick is not
>>    inadvertently added to another volume, as that could lead to losing
>>    existing data.
>>
>> So if you want to reuse a brick, you need to clean it up and recreate
>> the brick directory.
>>
>> On Wed, Feb 11, 2015 at 4:38 AM, Ernie Dunbar wrote:
>>> I'm just going to paste this here to see if it drives you as mad as
>>> it does me.
>>>
>>> I'm trying to re-create a new volume in gluster. The old volume is
>>> empty and can be removed. And besides that, this is just an
>>> experimental server that isn't in production just yet. Who cares. I
>>> just want to start over again because it's not working.
>>>
>>> root@nfs1:/home/ernied# gluster
>>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>> gluster> vol in
>>> No volumes present
>>> gluster> root@nfs1:/home/ernied# ^C
>>> root@nfs1:/home/ernied# rm -r /brick1/gv0
>>> root@nfs1:/home/ernied# gluster
>>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>>> volume create: gv0: failed: Host nfs2 is not in 'Peer in Cluster' state
>>> gluster> peer probe nfs2
>>> peer probe: success. Host nfs2 port 24007 already in peer list
>>> gluster> volume list
>>> No volumes present in cluster
>>> gluster> volume delete gv0
>>> Deleting volume will erase all information about the volume. Do you
>>> want to continue? (y/n) y
>>> volume delete: gv0: failed: Volume gv0 does not exist
>>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>> gluster> vol create evil replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>>> volume create: evil: failed: /brick1/gv0 is already part of a volume
>>> gluster>

--
~Atin
Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
I nuked the entire partition with mkfs, just to be *sure*, and I still
get the error message:

volume create: gv0: failed: /brick1/gv0 is already part of a volume

Clearly, there's some bit of data being kept somewhere else besides in
/brick1?

On 2015-02-11 01:03, Kaushal M wrote:
> This happens because of 2 things:
>
> 1. GlusterFS writes an extended attribute containing the volume-id to
>    every brick directory when a volume is created. This is done to
>    prevent data being written to the root partition, in case the
>    partition containing the brick wasn't mounted for any reason.
> 2. Deleting a GlusterFS volume does not remove any data in the brick
>    directories, or the brick directories themselves. We leave the
>    decision of cleaning up the data to the user. The extended attribute
>    is also not removed, so that an unused brick is not inadvertently
>    added to another volume, as that could lead to losing existing data.
>
> So if you want to reuse a brick, you need to clean it up and recreate
> the brick directory.
>
> On Wed, Feb 11, 2015 at 4:38 AM, Ernie Dunbar wrote:
>> I'm just going to paste this here to see if it drives you as mad as it
>> does me.
>>
>> I'm trying to re-create a new volume in gluster. The old volume is
>> empty and can be removed. And besides that, this is just an
>> experimental server that isn't in production just yet. Who cares. I
>> just want to start over again because it's not working.
>>
>> root@nfs1:/home/ernied# gluster
>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>> gluster> vol in
>> No volumes present
>> gluster> root@nfs1:/home/ernied# ^C
>> root@nfs1:/home/ernied# rm -r /brick1/gv0
>> root@nfs1:/home/ernied# gluster
>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>> volume create: gv0: failed: Host nfs2 is not in 'Peer in Cluster' state
>> gluster> peer probe nfs2
>> peer probe: success. Host nfs2 port 24007 already in peer list
>> gluster> volume list
>> No volumes present in cluster
>> gluster> volume delete gv0
>> Deleting volume will erase all information about the volume. Do you
>> want to continue? (y/n) y
>> volume delete: gv0: failed: Volume gv0 does not exist
>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>> gluster> vol create evil replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>> volume create: evil: failed: /brick1/gv0 is already part of a volume
>> gluster>
Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
This happens because of 2 things:

1. GlusterFS writes an extended attribute containing the volume-id to
   every brick directory when a volume is created. This is done to
   prevent data being written to the root partition, in case the
   partition containing the brick wasn't mounted for any reason.
2. Deleting a GlusterFS volume does not remove any data in the brick
   directories, or the brick directories themselves. We leave the
   decision of cleaning up the data to the user. The extended attribute
   is also not removed, so that an unused brick is not inadvertently
   added to another volume, as that could lead to losing existing data.

So if you want to reuse a brick, you need to clean it up and recreate
the brick directory.

On Wed, Feb 11, 2015 at 4:38 AM, Ernie Dunbar wrote:
> I'm just going to paste this here to see if it drives you as mad as it
> does me.
>
> I'm trying to re-create a new volume in gluster. The old volume is
> empty and can be removed. And besides that, this is just an
> experimental server that isn't in production just yet. Who cares. I
> just want to start over again because it's not working.
>
> root@nfs1:/home/ernied# gluster
> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
> volume create: gv0: failed: /brick1/gv0 is already part of a volume
> gluster> vol in
> No volumes present
> gluster> root@nfs1:/home/ernied# ^C
> root@nfs1:/home/ernied# rm -r /brick1/gv0
> root@nfs1:/home/ernied# gluster
> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
> volume create: gv0: failed: Host nfs2 is not in 'Peer in Cluster' state
> gluster> peer probe nfs2
> peer probe: success. Host nfs2 port 24007 already in peer list
> gluster> volume list
> No volumes present in cluster
> gluster> volume delete gv0
> Deleting volume will erase all information about the volume. Do you
> want to continue? (y/n) y
> volume delete: gv0: failed: Volume gv0 does not exist
> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
> volume create: gv0: failed: /brick1/gv0 is already part of a volume
> gluster> vol create evil replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
> volume create: evil: failed: /brick1/gv0 is already part of a volume
> gluster>
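[Kaushal's "clean it up and recreate the brick directory" advice could be sketched as below for this thread's two-node layout. This is a hedged sketch, not a verified procedure: it assumes root ssh access from the admin host to both nodes, and it destroys any leftover brick data, which is only acceptable here because the old volume is confirmed empty.]

```shell
# On every node hosting a brick: remove the old brick directory
# (which also drops its trusted.glusterfs.volume-id xattr and any
# .glusterfs metadata), then recreate it empty.
for host in nfs1 nfs2; do
  ssh root@"$host" 'rm -rf /brick1/gv0 && mkdir /brick1/gv0'
done

# With both bricks clean, the create should now pass staging.
gluster volume create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
```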