Since the VM files (VHDs) are available on LVM, can a new gluster volume be
created and exported over NFS without risk of data loss?
From: srinivas jonn
To: Krishnan Parthasarathi ; "gluster-users@gluster.org"
Sent: Monday, 3 June 2013 11:42 PM
Subject: Re:
Can you ping/check server 10.0.0.30? Is that address ok? Is that brick
added?
Pablo.
On 03/06/2013 03:12 p.m., srinivas jonn wrote:
[2013-06-03 12:03:24.842904] E
[glusterd-volume-ops.c:842:glusterd_op_stage_start_volume] 0-: Unable
to resolve brick 10.0.0.30:/export/brick1
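One quick way to rule out basic connectivity to the unresolved brick host is a TCP probe of glusterd's port (24007 by default). This is only a sketch: port_open is a hypothetical helper name, and the /dev/tcp device requires bash.

```shell
# Hypothetical helper: probe a TCP port using bash's /dev/tcp device.
# glusterd listens on 24007 by default; 10.0.0.30 is the brick host
# from the log above.
port_open() {
    # usage: port_open HOST PORT -> exit 0 if the port accepts a connection
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
# Example (not run here): port_open 10.0.0.30 24007 && echo "glusterd reachable"
```

If the probe fails, check the peer with "gluster peer status" before retrying the volume start.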
___
Hello Gluster users,
I thought of posing a more refined question, thanks to the support of Krish.
problem statement: Gluster volume start - failure
RPM installation of 3.3.0 on CentOS 6.1 - XFS is the filesystem layer -
NFS export, distributed single node, TCP
this server has experienced a acciden
Did you run "gluster volume start gvol1"? Could you attach
/var/log/glusterfs/.cmd_log_history (log file)?
From the logs you have pasted, it looks like volume-stop is the last command
you executed.
thanks,
krish
- Original Message -
> the volume is not starting - this was the issue.. pl
Also, because of the mistake I made removing files and folders (the
.dropbox-cache folder removal on all the bricks) directly from the bricks,
maybe it's better to scan the .glusterfs folder and remove any broken symlink
before upgrading (but after having stopped glusterd, of course)?
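A scan along those lines can be done read-only first. A minimal sketch, assuming a brick at /export/brick1 (adjust the path to your layout); it only prints dangling symlinks under .glusterfs rather than deleting them:

```shell
# List symlinks under a brick's .glusterfs tree whose targets are gone.
# find_dangling is a made-up name; review its output before removing
# anything, and only touch bricks with glusterd stopped.
find_dangling() {
    # -type l: symlinks only; "test -e" fails when the link target is missing
    find "$1" -type l ! -exec test -e {} \; -print
}
if [ -d /export/brick1/.glusterfs ]; then
    find_dangling /export/brick1/.glusterfs
fi
```

Once the list looks sane, the same find expression with -delete in place of -print would remove them.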
Please let me k
the volume is not starting - this was the issue.. please let me know the
diagnostic or debug procedures,
logs:
usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293) [0x30cac0a443]
/usr/sbin/glusterfsd(glusterfs_handle_terminate+0x15) [0x40a955]))) 0-:
received signum (15), shutting down
[
Srinivas,
The volume is in stopped state. You could start the volume by running
"gluster volume start gvol1". This should make your attempts at mounting
the volume successful.
thanks,
krish
- Original Message -
> Krish,
> this is giving general volume information , can the state of volu
Krish,
this is giving general volume information; can the state of the volume be known
from any specific logs?
#gluster volume info gvol1
Volume Name: gvol1
Type: Distribute
Volume ID: aa25aa58-d191-432a-a84b-325051347af6
Status: Stopped
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.0
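The "Status:" line in output like the above is the field to key off before attempting a mount. A small sketch (volume_status is a made-up helper; it just extracts that field from "gluster volume info" output):

```shell
# Extract the Status field from "gluster volume info" output on stdin.
volume_status() {
    awk -F': ' '/^Status:/ { print $2 }'
}
# Usage sketch (not run here):
#   if [ "$(gluster volume info gvol1 | volume_status)" = "Stopped" ]; then
#       gluster volume start gvol1
#   fi
```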
Srinivas,
Could you paste the output of "gluster volume info gvol1"?
This should give us an idea of the state of the volume before the power loss.
thanks,
krish
- Original Message -
> Hello Gluster users:
> sorry for long post, I have run out of ideas here, kindly let me know
Hello Gluster users:
sorry for the long post; I have run out of ideas here. Kindly let me know if I
am looking at the right places for logs, and any suggested actions. Thanks.
a sudden power loss caused a hard reboot - now the volume does not start
GlusterFS 3.3.1 on CentOS 6.1, transport: TCP
shari
So, just to recap, is it OK to clone the repo from GitHub, check out tag
3.3.2qa3, stop glusterd, then configure, make & make install?
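For reference, those steps might be sketched like this (repo URL, tag name, and service name are assumptions; it is defined as a function so nothing runs until invoked deliberately):

```shell
# Sketch of the recapped source install; upgrade_glusterfs is my own
# wrapper name. Verify the tag exists with "git tag -l" first, and note
# the service may be called glusterfs-server depending on packaging.
upgrade_glusterfs() {
    git clone https://github.com/gluster/glusterfs.git &&
    cd glusterfs &&
    git checkout v3.3.2qa3 &&
    service glusterd stop &&
    ./autogen.sh && ./configure && make && make install
}
```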
Regards,
Stefano
On Mon, Jun 3, 2013 at 5:54 PM, Vijay Bellur wrote:
> On 06/02/2013 01:42 PM, Stefano Sinigardi wrote:
>
>> Also directories got removed. I did a really b
Hey gluster-users,
I just stumbled on a problem in our current test-setup of gluster 3.3.2.
This is a simple replicated setup with 2 bricks (on XFS) in 1 volume running on
glusterfs version 3.3.2qa3 on ubuntu lucid.
The client mounting this volume on /mnt/gfs sits on a mother machine and is
usin
On 06/02/2013 01:42 PM, Stefano Sinigardi wrote:
Also, directories got removed. I did a really bad job in that script:
the wrong sed, and the path was not truncated and replaced with the FUSE
mountpoint...
Yes I can move to 3.3.2qa3. At the moment I have gluster installed as
per the semiosis repository. Wh