Re: [Gluster-users] Expanding Volumes and Geo-replication

2014-09-03 Thread Vijaykumar Koppad
On Wed, Sep 3, 2014 at 8:20 PM, M S Vishwanath Bhat 
wrote:

> On 01/09/14 23:09, Paul Mc Auley wrote:
>
>> Hi Folks,
>>
>> Bit of a query on the process for setting this up and the best practices
>> for same.
>>
>> I'm currently working with a prototype using 3.5.2 on vagrant and I'm
>> running into assorted failure modes with each pass.
>>
>> The general idea is I start with two sites A and B where A has 3 bricks
>> used to build a volume vol at replica 3 and
>> B has 2 bricks at replica 2, also used to build vol.
>> I create 30 files in A::vol and then set up geo-replication from A to B
>> after which I verify that the files have appeared in B::vol.
>> What I want to do then is double the size of volumes (presumably growing
>> one and not the other is a bad thing)
>> by adding 3 bricks to A and 2 bricks to B.
>>
>> I've had this fail a number of ways, and so I have a number of questions.
>>
>> Is geo-replication from a replica 3 volume to a replica 2 volume possible?
>>
> Yes. Geo-replication just needs two gluster volumes (master -> slave). It
> doesn't matter what configuration the master and slave have, but the slave
> should be big enough to hold all the data in the master.
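For instance, a minimal sketch of such a setup (the hostnames and brick paths
below are made up for illustration, and passwordless root SSH from a master
node to the slave node is assumed to be in place before the create step):

(on site A, the master)
# gluster volume create vol replica 3 a1:/bricks/vol a2:/bricks/vol a3:/bricks/vol
# gluster volume start vol
(on site B, the slave)
# gluster volume create vol replica 2 b1:/bricks/vol b2:/bricks/vol
# gluster volume start vol
(back on a master node: distribute the pem keys, then create and start the session)
# gluster system:: execute gsec_create
# gluster volume geo-replication vol b1::vol create push-pem
# gluster volume geo-replication vol b1::vol start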
>
>  Should I stop geo-replication before adding additional bricks? (I assume
>> yes)
>>
> There is no need to stop geo-rep while adding more bricks to the volume.
>
>  Should I stop the volume(s) before adding additional bricks? (Doesn't
>> _seem_ to be the case)
>>
> No.
>
>  Should I rebalance the volume(s) after adding the bricks?
>>
> Yes. After add-brick, rebalance should be run.
>
>  Should I need to recreate the geo-replication to push-pem subsequently,
>> or can I do that out-of-band?
>> ...and if so should I have to add the passwordless SSH key back in? (As
>> opposed to the restricted secret.pem)
>> For that matter, in the initial setup, is it an expected failure mode that
>> the initial geo-replication create will fail if the slave host's SSH key
>> isn't known?
>>
> After the add-brick, the newly added node will not have any pem files. So
> you need to do "geo-rep create push-pem force". This will actually push the
> pem files to the newly added node as well. And then you need to do "geo-rep
> start force" to start the gsync processes in newly added node.
>
> So the sequence of steps for you will be,
>
> 1. Add new nodes to both master and slave using gluster add-brick command.
>
After this, we need to run "gluster system:: execute gsec_create" on a
master node and then proceed with step 2 (a fuller command sketch follows
after step 3).

> 2. Run geo-rep create push-pem force and start force.
> 3. Run rebalance.
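Putting those steps together, a rough sketch (host names and brick paths are
made up for illustration; "vol" is the volume name from the original mail and
b1 is assumed to be the slave host used when the session was first created):

(1. on master site A and slave site B respectively, add bricks in multiples of the replica count)
# gluster volume add-brick vol a4:/bricks/vol a5:/bricks/vol a6:/bricks/vol
# gluster volume add-brick vol b3:/bricks/vol b4:/bricks/vol
(then collect the pem keys again on a master node)
# gluster system:: execute gsec_create
(2. re-push the keys and restart gsyncd so the new nodes participate)
# gluster volume geo-replication vol b1::vol create push-pem force
# gluster volume geo-replication vol b1::vol start force
(3. rebalance the expanded volume, on the master and likewise on the slave)
# gluster volume rebalance vol start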
>
> Hope this works and hope it helps :)
>
>
> Best Regards,
> Vishwanath
>
>
>
>> Thanks,
>> Paul
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to manually/force remove a bad geo-rep agreement?

2014-04-30 Thread Vijaykumar Koppad


On 04/29/2014 07:15 PM, Steve Dainard wrote:
Do you have a link to the docs that mention a specific sequence
particular to geo-replication-enabled volumes? I don't see anything on
the gluster doc page here:
http://www.gluster.org/community/documentation/index.php/Main_Page.
Check out, 
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5


Regards,
Vijaykumar


Thanks,
Steve

On Tue, Apr 29, 2014 at 2:29 AM, Vijaykumar Koppad <vkop...@redhat.com> wrote:



On 04/28/2014 08:31 PM, Steve Dainard wrote:

3.4.2 doesn't have the force option.

Oh, I got confused with releases.



I went through an upgrade to 3.5 which ended in my replica pairs
not being able to sync and all commands coming back with no
output. Individually each node would mount its volumes and report
status ok until the other node was contacted. Logs had no useful
information. I did an upgrade on another replica pair without
issue before attempting my primary storage pair and had no issues.

With geo-rep, there are some special steps you need to follow when
upgrading.
If you have followed them, it looks like there was some problem with
the steps.

-Vijaykumar



I ended up restoring from backups and rebuilding my primary
storage pair.

Not looking forward to the next upgrade.

*Steve Dainard *
IT Infrastructure Manager
Miovision <http://miovision.com/> | /Rethink Traffic/

*Blog <http://miovision.com/blog>  | **LinkedIn
<https://www.linkedin.com/company/miovision-technologies>  |
Twitter <https://twitter.com/miovision>  | Facebook
<https://www.facebook.com/miovision>*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please
delete the e-mail and any attachments and notify us immediately.


On Mon, Apr 28, 2014 at 3:09 AM, Vijaykumar Koppad
<vkop...@redhat.com> wrote:


On 04/24/2014 07:40 PM, Steve Dainard wrote:

# gluster volume geo-replication rep1
gluster://10.0.11.4:/rep1 stop
geo-replication command failed'

Try out

# gluster volume geo-replication rep1
gluster://10.0.11.4:/rep1 stop force
# gluster volume geo-replication rep1
gluster://10.0.11.4:/rep1 delete

-Vijaykumar


cli.log:
[2014-04-24 14:08:03.916112] W
[rpc-transport.c:175:rpc_transport_load] 0-rpc-transport:
missing 'option transport-type'. defaulting to "socket"
[2014-04-24 14:08:03.918050] I [socket.c:3480:socket_init]
0-glusterfs: SSL support is NOT enabled
[2014-04-24 14:08:03.918068] I [socket.c:3495:socket_init]
0-glusterfs: using system polling thread
[2014-04-24 14:08:04.146710] I [input.c:36:cli_batch] 0-:
Exiting with: -1



*Steve Dainard *
IT Infrastructure Manager
Miovision <http://miovision.com/> | /Rethink Traffic/

*Blog <http://miovision.com/blog> | **LinkedIn
<https://www.linkedin.com/company/miovision-technologies>  |
Twitter <https://twitter.com/miovision>  | Facebook
<https://www.facebook.com/miovision>*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please
delete the e-mail and any attachments and notify us immediately.


On Thu, Apr 24, 2014 at 9:47 AM, Steve Dainard
<sdain...@miovision.com> wrote:

Version: glusterfs-server-3.4.2-1.el6.x86_64

I have an issue where I'm not getting the correct status
for geo-replication, this is shown below. Also I've had
issues where I've not been able to stop geo-replication
without using a firewall rule on the slave. I would get
back a cryptic error and nothing useful in the logs.

# gluster volume geo-replication status
NODE                      MASTER     SLAVE                           STATUS
---------------------------------------------------------------------------
ovirt001.miovision.corp   rep1       gluster://10.0.11.4:/rep1       faulty
ovirt001.miovision.corp   miofiles   gluster://10.0.11.4:/miofiles   faulty

# gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 start
geo-replication session between rep1 & gluster://10.0.11.4:/rep1
already started
geo-replication command failed

Re: [Gluster-users] How to manually/force remove a bad geo-rep agreement?

2014-04-28 Thread Vijaykumar Koppad


On 04/24/2014 07:40 PM, Steve Dainard wrote:

# gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 stop
geo-replication command failed'

Try out

# gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 stop force
# gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 delete



cli.log:
[2014-04-24 14:08:03.916112] W [rpc-transport.c:175:rpc_transport_load] 
0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
[2014-04-24 14:08:03.918050] I [socket.c:3480:socket_init] 
0-glusterfs: SSL support is NOT enabled
[2014-04-24 14:08:03.918068] I [socket.c:3495:socket_init] 
0-glusterfs: using system polling thread

[2014-04-24 14:08:04.146710] I [input.c:36:cli_batch] 0-: Exiting with: -1



*Steve Dainard *
IT Infrastructure Manager
Miovision  | /Rethink Traffic/

*Blog  | **LinkedIn 
  | Twitter 
  | Facebook 
*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



On Thu, Apr 24, 2014 at 9:47 AM, Steve Dainard wrote:


Version: glusterfs-server-3.4.2-1.el6.x86_64

I have an issue where I'm not getting the correct status for
geo-replication, this is shown below. Also I've had issues where
I've not been able to stop geo-replication without using a
firewall rule on the slave. I would get back a cryptic error and
nothing useful in the logs.

# gluster volume geo-replication status
NODE                      MASTER     SLAVE                           STATUS
---------------------------------------------------------------------------
ovirt001.miovision.corp   rep1       gluster://10.0.11.4:/rep1       faulty
ovirt001.miovision.corp   miofiles   gluster://10.0.11.4:/miofiles   faulty

# gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 start
geo-replication session between rep1 & gluster://10.0.11.4:/rep1
already started
geo-replication command failed

[root@ovirt001 ~]# gluster volume geo-replication status
NODE                      MASTER     SLAVE                           STATUS
---------------------------------------------------------------------------
ovirt001.miovision.corp   rep1       gluster://10.0.11.4:/rep1       faulty
ovirt001.miovision.corp   miofiles   gluster://10.0.11.4:/miofiles   faulty


How can I manually remove a geo-rep agreement?

Thanks,


*Steve
*




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo-replication issues/questions

2013-01-06 Thread Vijaykumar Koppad
Hi Matthew,

   This is weird. There might be some inconsistency between gsyncd and glusterd
communication.
Just try restarting glusterd once. If it doesn't work, I guess we need some
more info on what you did before this happened.
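A minimal sketch of that restart (which form applies depends on the init
system on the node; restarting glusterd only bounces the management daemon,
the brick processes keep running):

# service glusterd restart      (SysV-style init, e.g. EL6)
# systemctl restart glusterd    (systemd-based systems)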

Thanks,
Vijaykumar 
 

- Original Message -
> From: "Matthew Nicholson" 
> To: "Matthew Nicholson" 
> Cc: gluster-users@gluster.org
> Sent: Saturday, January 5, 2013 12:48:33 AM
> Subject: Re: [Gluster-users] geo-replication issues/questions
> 
> just to point out more weirdness:
> 
> [root@sum1-gstore02 nichols2]# gluster volume geo-replication gstore status
> MASTER    SLAVE                                   STATUS
> ---------------------------------------------------------
> gstore    gluster://ox60-gstore01:gstore-rep      OK
> 
> [root@sum1-gstore02 nichols2]# gluster volume geo-replication gstore
> gluster://ox60-gstore01:gstore-rep stop
> geo-replication session between gstore &
> gluster://ox60-gstore01:gstore-rep not active
> geo-replication command failed
> 
> [root@sum1-gstore02 nichols2]# gluster volume geo-replication gstore
> gluster://ox60-gstore01:gstore-rep config
> gluster_log_file:
> /var/log/glusterfs/geo-replication/gstore/gluster%3A%2F%2F10.242.64.121%3Agstore-rep.gluster.log
> ssh_command: ssh -oPasswordAuthentication=no
> -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem
> remote_gsyncd: /usr/libexec/glusterfs/gsyncd
> state_file:
> /var/lib/glusterd/geo-replication/gstore/gluster%3A%2F%2F10.242.64.121%3Agstore-rep.status
> gluster_command_dir: /usr/sbin/
> pid_file:
> /var/lib/glusterd/geo-replication/gstore/gluster%3A%2F%2F10.242.64.121%3Agstore-rep.pid
> log_file:
> /var/log/glusterfs/geo-replication/gstore/gluster%3A%2F%2F10.242.64.121%3Agstore-rep.log
> gluster_params: xlator-option=*-dht.assert-no-child-down=true
> 
> is there really no simple way to turn this off and start fresh?
> 
> On Fri, Jan 4, 2013 at 10:39 AM, Matthew Nicholson
>  wrote:
> > Oh further more, the slave as listed in the info file are things
> > like:
> >
> > slave2=6456206b-fe19-4b65-b7ab-0c9e7ce6221e:ssh://gstore-rep:/gstore-rep
> >
> > but:
> >
> > gluster volume geo-replication gstore gstore-rep:/gstore-rep stop
> > geo-replication session between gstore & gstore-rep:/gstore-rep not
> > active
> > geo-replication command failed
> >
> > and
> >
> > gluster volume geo-replication gstore
> > gluster://gstore-rep:/gstore-rep stop
> > geo-replication session between gstore & gstore-rep:/gstore-rep not
> > active
> > geo-replication command failed
> >
> > BUT:
> >
> > gluster volume geo-replication gstore
> > gluster://gstore-rep:/gstore-rep stop
> > geo-replication session between gstore &
> > gluster://gstore-rep:/gstore-rep not active
> > geo-replication command failed
> > [root@sum1-gstore01 ~]# gluster volume geo-replication gstore
> > gluster://gstore-rep:/gstore-rep config
> > gluster_log_file:
> > /var/log/glusterfs/geo-replication/gstore/gluster%3A%2F%2F10.242.64.125%3A%2Fgstore-rep.gluster.log
> > ssh_command: ssh -oPasswordAuthentication=no
> > -oStrictHostKeyChecking=no -i
> > /var/lib/glusterd/geo-replication/secret.pem
> > session_owner: b5532175-bf49-413b-b0d7-b834ee9ec619
> > remote_gsyncd: /usr/libexec/glusterfs/gsyncd
> > state_file:
> > /var/lib/glusterd/geo-replication/gstore/gluster%3A%2F%2F10.242.64.125%3A%2Fgstore-rep.status
> > gluster_command_dir: /usr/sbin/
> > pid_file:
> > /var/lib/glusterd/geo-replication/gstore/gluster%3A%2F%2F10.242.64.125%3A%2Fgstore-rep.pid
> > log_file:
> > /var/log/glusterfs/geo-replication/gstore/gluster%3A%2F%2F10.242.64.125%3A%2Fgstore-rep.log
> > gluster_params: xlator-option=*-dht.assert-no-child-down=true
> >
> > so, it's got a config!
> >
> > I know much of this comes from the gsync.conf file, but I'm not clear
> > on how safe this is to edit, or how to get it sync'd properly among
> > the nodes that make up this volume...
> >
> >
> >
> > On Fri, Jan 4, 2013 at 10:26 AM, Matthew Nicholson
> >  wrote:
> >> I've got a bunch, but I'll start with my current one:
> >>
> >> [root@sum1-gstore01 ~]# gluster volume geo-replication gstore status
> >> MASTER    SLAVE    STATUS
> >> -----------------------------
> >>
> >>
> >> but ps axf |grep gsync shows lots of procs, and
> >> /var/lib/glusterd/vols/gstore/info
> >>
> >> shows a couple slave entires.
> >>
> >>
> >> trying to turn off geo indexing yields:
> >>
> >> [root@sum1-gstore01 ~]# gluster volume set gstore
> >> geo-replication.indexing off
> >> geo-replication.indexing cannot be disabled while geo-replication
> >> sessions exist
> >> Set volume unsuccessful
> >>
> >> so, my question is, how to do i stop these outright?
> >>
> >> I've been pinging IRC, but it's pretty dead in there, and attempts to
> >> subscribe to this list have been failing too
> >>
> >>
> >>
> >>
> >>
> >> --
> >> Matthew Nicholson
> >> matthew_nicho

Re: [Gluster-users] geo-replication status 'corrupt' and can't clear this status

2012-10-10 Thread Vijaykumar Koppad
Hi,

  If the geo-rep logs say the master is corrupt, then it must be because the
xtimes are corrupt, and you need to turn the indexing off and back on to fix
that state. But if only the status says corrupt, it can be for many reasons.
If you send the geo-rep logs, it'll be helpful to debug the proper cause.
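As a rough sketch of that indexing toggle, using the volume and slave names
from the report below (note that, as seen in another thread here, indexing
cannot be disabled while a geo-rep session exists, so the session may need to
be stopped and restarted around it):

# gluster volume geo-replication object-manager gluster://10.251.5.1:geo-object-manager stop
# gluster volume set object-manager geo-replication.indexing off
# gluster volume set object-manager geo-replication.indexing on
# gluster volume geo-replication object-manager gluster://10.251.5.1:geo-object-manager start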

Thanks 
-Vijaykumar 

- Original Message -
From: "미래성공남박은준" 
To: gluster-users@gluster.org
Sent: Tuesday, September 25, 2012 4:23:25 PM
Subject: [Gluster-users] geo-replication status 'corrupt' and can't clear this 
status



hi, 


I got a message after running this command (sudo gluster volume geo-replication
status):



MASTER           SLAVE                                      STATUS
--------------------------------------------------------------------
object-manager   gluster://10.251.5.1:geo-object-manager    corrupt




so I ran this command (sudo gluster volume set object-manager
geo-replication.indexing on) to clear this status. Log below:

[2012-09-25 09:38:03.991194] I 
[glusterd-handler.c:1729:glusterd_handle_gsync_set] 0-: master not found, while 
handling geo-replication options 
[2012-09-25 09:38:03.991245] I 
[glusterd-handler.c:1736:glusterd_handle_gsync_set] 0-: slave not not found, 
while handling geo-replication options 
[2012-09-25 09:38:03.991290] I [glusterd-utils.c:243:glusterd_lock] 0-glusterd: 
Cluster lock held by 26ce3eaa-ed38-474c-8dc3-4c63464b295a 
[2012-09-25 09:38:03.991305] I [glusterd-handler.c:420:glusterd_op_txn_begin] 
0-glusterd: Acquired local lock 
[2012-09-25 09:38:03.991768] I 
[glusterd-rpc-ops.c:758:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC 
from uuid: 0546f091-36f5-4bd2-b9a8-a80a3bd9872d 
[2012-09-25 09:38:03.991830] I 
[glusterd-op-sm.c:6737:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req to 
1 peers 
[2012-09-25 09:38:03.992142] I 
[glusterd-rpc-ops.c:1056:glusterd3_1_stage_op_cbk] 0-glusterd: Received ACC 
from uuid: 0546f091-36f5-4bd2-b9a8-a80a3bd9872d 
[2012-09-25 09:38:04.182485] I 
[glusterd-op-sm.c:6854:glusterd_op_ac_send_commit_op] 0-glusterd: Sent op req 
to 1 peers 
[2012-09-25 09:38:04.182816] I 
[glusterd-rpc-ops.c:1242:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC 
from uuid: 0546f091-36f5-4bd2-b9a8-a80a3bd9872d 
[2012-09-25 09:38:04.183162] I 
[glusterd-rpc-ops.c:817:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received 
ACC from uuid: 0546f091-36f5-4bd2-b9a8-a80a3bd9872d 
[2012-09-25 09:38:04.183215] I [glusterd-op-sm.c:7250:glusterd_op_txn_complete] 
0-glusterd: Cleared local lock 
[2012-09-25 09:38:04.185857] W [socket.c:1494:__socket_proto_state_machine] 
0-socket.management: reading from socket failed. Error (Transport endpoint is 
not connected), peer (127.0.0.1:1020) 




and then ran this command (sudo gluster volume set object-manager
geo-replication.indexing on). Log below:

[2012-09-25 10:48:11.994094] W [socket.c:1494:__socket_proto_state_machine] 
0-socket.management: reading from socket failed. Error (Transport endpoint is 
not connected), peer (127.0.0.1:1020) 
[2012-09-25 10:48:16.76174] I 
[glusterd-handler.c:796:glusterd_handle_cli_get_volume] 0-glusterd: Received 
get vol req 
[2012-09-25 10:48:16.76857] I 
[glusterd-handler.c:796:glusterd_handle_cli_get_volume] 0-glusterd: Received 
get vol req 
[2012-09-25 10:48:16.79213] W [socket.c:1494:__socket_proto_state_machine] 
0-socket.management: reading from socket failed. Error (Transport endpoint is 
not connected), peer (127.0.0.1:1020) 
[2012-09-25 10:48:38.3159] I 
[glusterd-handler.c:1729:glusterd_handle_gsync_set] 0-: master not found, while 
handling geo-replication options 
[2012-09-25 10:48:38.3224] I 
[glusterd-handler.c:1736:glusterd_handle_gsync_set] 0-: slave not not found, 
while handling geo-replication options 
[2012-09-25 10:48:38.3278] I [glusterd-utils.c:243:glusterd_lock] 0-glusterd: 
Cluster lock held by 26ce3eaa-ed38-474c-8dc3-4c63464b295a 
[2012-09-25 10:48:38.3296] I [glusterd-handler.c:420:glusterd_op_txn_begin] 
0-glusterd: Acquired local lock 
[2012-09-25 10:48:38.3734] I 
[glusterd-rpc-ops.c:758:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC 
from uuid: 0546f091-36f5-4bd2-b9a8-a80a3bd9872d 
[2012-09-25 10:48:38.3790] I 
[glusterd-op-sm.c:6737:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req to 
1 peers 
[2012-09-25 10:48:38.4083] I [glusterd-rpc-ops.c:1056:glusterd3_1_stage_op_cbk] 
0-glusterd: Received ACC from uuid: 0546f091-36f5-4bd2-b9a8-a80a3bd9872d 
[2012-09-25 10:48:38.189599] I 
[glusterd-op-sm.c:6854:glusterd_op_ac_send_commit_op] 0-glusterd: Sent op req 
to 1 peers 
[2012-09-25 10:48:38.190021] I 
[glusterd-rpc-ops.c:1242:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC 
from uuid: 0546f091-36f5-4bd2-b9a8-a80a3bd9872d 
[2012-09-25 10:48:38.190357] I 
[glusterd-rpc-ops.c:817:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received 
ACC from uuid: 0546f091-36f5-4bd2-b9a8-a80a3bd9872d 
[2012-09-25 10:48:38.190383] I [glusterd-op-sm.c:7250:glusterd_op_txn_complete] 
0-glusterd: Cleared local lock 
[2012-09-25 10:48:38.192709] W [socket.c:1494:__socket

Re: [Gluster-users] Limit access to a volume

2011-12-14 Thread Vijaykumar Koppad
 And if you want to unset it in Release 3.2.x, you have to set it back to
the default value, which is listed for all the options in volume set help.
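As a sketch of that on 3.2.x (the volume and option names are taken from
elsewhere in this thread; the actual default has to be read from the help
output, so the value below is only a placeholder):

# gluster volume set help | grep -A 3 data-self-heal-algorithm
# gluster volume set sesam data-self-heal-algorithm <default-value-from-help>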


From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Marc Muehlfeld [marc.muehlf...@medizinische-genetik.de]
Sent: Wednesday, December 14, 2011 7:31 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Limit access to a volume

On 14.12.2011 14:57, Vijaykumar Koppad wrote:
> Consider you have set the option data-self-heal-algorithm to diff. So if
> you want to reset it to the default value, you have to use "volume reset
> <VOLNAME> data-self-heal-algorithm". This will unset only that option, and
> this option is available only in master.

Ok. Thanks.
I just called the "reset" without arguments, which resets all changes.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Limit access to a volume

2011-12-14 Thread Vijaykumar Koppad
   Consider you have set the option data-self-heal-algorithm to diff. So if you
want to reset it to the default value, you have to use "volume reset <VOLNAME>
data-self-heal-algorithm". This will unset only that option, and this option is
available only in master.
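Concretely, a one-line sketch (the volume name is a placeholder):

# gluster volume reset <VOLNAME> data-self-heal-algorithm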

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Marc Muehlfeld [marc.muehlf...@medizinische-genetik.de]
Sent: Wednesday, December 14, 2011 7:20 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Limit access to a volume

On 14.12.2011 14:48, Vijaykumar Koppad wrote:
>   To unset any options, you can use the volume reset command.
> Usage: volume reset <VOLNAME> [option] [force].

I saw that. But this unsets all options I have set, right? Is there a way just
to unset _one_ setting?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Limit access to a volume

2011-12-14 Thread Vijaykumar Koppad
Hi Marc, 
 To unset any options, you can use the volume reset command.
Usage: volume reset <VOLNAME> [option] [force].
We will look into the other question.

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Marc Muehlfeld [marc.muehlf...@medizinische-genetik.de]
Sent: Wednesday, December 14, 2011 4:00 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Limit access to a volume

Hi,

I created a volume and have to restrict access now. Just one client should be
allowed to access the volume. So I tried:

# gluster volume set sesam auth.allow 192.168.20.1

But I can still mount the volume from clients other than 192.168.20.1. But
when I try to access the directory from an unallowed machine, my shell hangs.
If I try to unmount, it says that the device is busy. The only way to get it
out of the client is to kill the glusterfs process.

And one more question: If I set an option by "gluster volume set" - how do I
unset it?


Regards,
Marc
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users