[Gluster-users] QEMU-GlusterFS native integration demo video

2012-08-27 Thread Bharata B Rao
Hi,

If you are interested and/or curious to know how QEMU can be used to
create and boot VMs from a GlusterFS volume, take a look at the demo
video I have created at:

http://www.youtube.com/watch?v=JG3kF_djclg
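
For reference, the integration addresses VM images with a gluster://
URI handled natively by QEMU's GlusterFS block driver. A minimal sketch
of the kind of workflow the demo covers (host and volume names are
placeholders, and the exact syntax may change before it is merged
upstream):

qemu-img create -f qcow2 gluster://server1/testvol/vm.qcow2 10G
qemu-system-x86_64 -m 1024 \
  -drive file=gluster://server1/testvol/vm.qcow2,if=virtio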

Regards,
Bharata.
-- 
http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Ownership changed to root

2012-08-27 Thread Brian Candler
On Mon, Aug 27, 2012 at 03:08:21PM +0200, Stephan von Krawczynski wrote:
> The gluster version is 2.X and cannot be changed.

Ah, that's the important bit. If you have a way to replicate the problem
with current code, it will be easier to get someone to look at it.

> AFAIK the glusterfsd
> versions are not backward compatible to the point where one could build a
> setup with one brick on 2.X and the other on 3.X, which is - if true - a
> general design flaw amongst others.

It's true. You can't even upgrade from 3.2.5 to 3.3.0 without a total
shutdown and restart of everything.  However, I understand they are planning
to maintain protocol compatibility from that point onwards.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Failed to get names of volumes

2012-08-27 Thread s19n
* s19n  [2012 08 27, 11:25]:
> - cut here -
> # gluster volume status all
> operation failed
>  
> Failed to get names of volumes
> - cut here -

 Additional update: it seems that the command (and many others) is not
working because a 'local lock' is being held. I have tried running the
command both on the host holding the lock and on other hosts, with no
success. An extract from etc-glusterfs-glusterd.vol.log follows:

- cut here -
[2012-08-27 17:41:17.842441] I
  [glusterd-volume-ops.c:583:glusterd_handle_cli_statedump_volume]
  0-glusterd: Received statedump request for volume storage with options 

[2012-08-27 17:41:17.842510] E [glusterd-utils.c:277:glusterd_lock]
  0-glusterd: Unable to get lock for uuid:
  34b665ea-d315-489b-bd0f-172bb6b85ee1, lock held by:
  34b665ea-d315-489b-bd0f-172bb6b85ee1

[2012-08-27 17:41:17.842527] E
  [glusterd-handler.c:453:glusterd_op_txn_begin] 0-management: Unable to
  acquire local lock, ret: -1
- cut here -

 Could you please describe why these locks are being held and, if
possible, how to clear them?
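
 Note that in the extract above the requesting UUID and the holding
UUID are identical, so glusterd appears to be blocked on a stale lock
of its own, left over from an earlier transaction. My assumption
(unconfirmed) is that restarting the management daemon on that node
would release the lock, since it is held in glusterd's memory; brick
processes should keep serving data in the meantime:

- cut here -
# on the node whose UUID is shown as the lock holder
service glusterd restart
- cut here -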

Thank you very much for your kind attention,
Best regards

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Ownership changed to root

2012-08-27 Thread Stephan von Krawczynski
On Sun, 26 Aug 2012 20:01:20 +0100
Brian Candler  wrote:

> On Sun, Aug 26, 2012 at 03:50:16PM +0200, Stephan von Krawczynski wrote:
> > I'd like to point you to "[Gluster-devel] Specific bug question", dated a
> > few days ago, where I describe a trivial situation in which owner changes
> > on a brick can occur, and ask if someone can point me to a patch for that.
> 
> I guess this is
> http://lists.gnu.org/archive/html/gluster-devel/2012-08/msg00130.html
> ?
> 
> This could be helpful, but as far as I can see a lot of important
> information is missing: e.g. which glusterfs version you are using, which
> operating system and kernel version, and which underlying filesystem is
> used for the bricks. Is the volume mounted on a separate client machine,
> or on one of the brick servers? The output of "gluster volume info" would
> be useful too.

In fact I gave the pieces of information that seemed really important to me;
apparently they were unclear. The setup has two independent hardware bricks
and one client (on separate hardware). It is an all-Linux setup with ext4 on
the bricks. The kernel versions are of no real use because I tested quite a
few, and the behaviour is always the same.
The problem is related to the load on the client; that is about the only
thing I can say for sure.
The gluster version is 2.X and cannot be changed. AFAIK the glusterfsd
versions are not backward compatible to the point where one could build a
setup with one brick on 2.X and the other on 3.X, which is - if true - a
general design flaw amongst others.
I did not actually intend to enter a big discussion about this point. I
thought there must be at least one person who knows the code well enough
that my question could be answered immediately in one sentence. All you need
to know is how a "mv" command can overtake an earlier one that should
already have completed its job, given that it exited successfully.

> Regards,
> 
> Brian.
> 


-- 
Regards,
Stephan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Failed to get names of volumes

2012-08-27 Thread s19n
* Tao Lin  [2012 08 27, 17:54]:
> There are issues with gluster on ext4, so you have to use other file
> systems (e.g. xfs, ext3) instead of ext4.

If you are referring to

http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/

 I don't think I was experiencing that problem, since I shouldn't be
running an affected kernel version.
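
 For anyone wanting to verify their own setup, the kernel version and
the brick filesystem type can be confirmed with something like the
following (the brick mount point is illustrative):

- cut here -
uname -r
mount | grep /exports
- cut here -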

 Update: now I am able to issue the command, and indeed noticed that one
of the nodes was offline. In the brick log I found:

- cut here -
patchset: git://git.gluster.com/glusterfs.git
signal received: 7
time of crash: 2012-08-24 23:22:30
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.3.0
/lib/libc.so.6(+0x33af0)[0x7f024da35af0]
/usr/lib/libglusterfs.so.0(__dentry_grep+0x8e)[0x7f024e7879de]
/usr/lib/libglusterfs.so.0(inode_grep+0x66)[0x7f024e787c56]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(resolve_entry_simple+0x91)[0x7f02491eb641]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve_entry+0x24)[0x7f02491ebd14]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve+0x98)[0x7f02491ebb88]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve_all+0x9e)[0x7f02491ebcbe]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(resolve_and_resume+0x14)[0x7f02491ebd84]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_lookup+0x18f)[0x7f024920525f]
/usr/lib/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293)[0x7f024e550ce3]
/usr/lib/libgfrpc.so.0(rpcsvc_notify+0x93)[0x7f024e550e53]
/usr/lib/libgfrpc.so.0(rpc_transport_notify+0x28)[0x7f024e5518b8]
/usr/lib/glusterfs/3.3.0/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f024acd0734]
/usr/lib/glusterfs/3.3.0/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7f024acd0817]
/usr/lib/libglusterfs.so.0(+0x3e394)[0x7f024e79b394]
/usr/sbin/glusterfsd(main+0x58a)[0x407aaa]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7f024da20c4d]
/usr/sbin/glusterfsd[0x404a59]
- cut here -
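
 If I read the header right, signal 7 is SIGBUS. My understanding, not
verified here, is that "gluster volume start ... force" respawns only
the brick processes that are down, so the offline brick could be
brought back with:

- cut here -
gluster volume start storage force
- cut here -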

Regards

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Failed to get names of volumes

2012-08-27 Thread s19n

Hello,

 Gluster 3.3.0 distributed replicated, ext4 bricks. Since this morning I
am unable to check the status of the filesystem:

- cut here -
# gluster volume status all
operation failed
 
Failed to get names of volumes
- cut here -

Extract from cli.log:

- cut here -
[2012-08-27 11:10:00.341089] W [rpc-transport.c:174:rpc_transport_load]
  0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
[2012-08-27 11:10:00.439743] E
  [cli-rpc-ops.c:5657:gf_cli_status_volume_all] 0-cli: status all failed
[2012-08-27 11:10:00.439791] I [input.c:46:cli_batch] 0-: Exiting with:
  -22
- cut here -

"gluster peer" reports all nodes connected but it seems to be known as
not really reliable.
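
 For completeness, this is roughly what I have been checking, with the
log path from a default install:

- cut here -
gluster peer status
tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
- cut here -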

Any suggestion would be really appreciated.
Best regards

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ext4 issues

2012-08-27 Thread Shishir Gowda
Hi Jon,

No, this issue is common to nfs and native (gluster) clients.

The patch at http://review.gluster.org/#change,3679 is under review. As of
now, the fix handles the ext4 issue for the native client.

NFS is still broken, and we are investigating it.

With regards,
Shishir

- Original Message -
From: "Jon Tegner" 
To: "Gluster General Discussion List" 
Sent: Friday, August 24, 2012 3:40:38 PM
Subject: [Gluster-users] ext4 issues


Hi, I have seen that there are issues with gluster on ext4. Just to be clear,
is this something that only affects clients using NFS, i.e., can I happily use
gluster (without downgrading the kernel) if all clients are using the gluster
native client?


Thanks, 


/jon 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks not rebalancing

2012-08-27 Thread James Kahn
Thanks for the info Shishir.

What algorithm does Gluster use to hash files to bricks?
Will files in one directory be mostly distributed amongst multiple bricks?

Most of our files are named with a 37-character GUID, if that makes a
difference.
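
(In case it helps anyone digging into this: my understanding, not
verified here, is that the hash ranges distribute assigns per directory
can be read directly off a brick via an extended attribute; the
directory path below is illustrative.)

getfattr -n trusted.glusterfs.dht -e hex /exports/exp1/somedir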




-Original Message-
From: Shishir Gowda 
Date: Monday, 27 August 2012 4:30 PM
To: James Kahn 
Cc: "gluster-users@gluster.org" 
Subject: Re: [Gluster-users] Bricks not rebalancing

>Hi James,
>
>Distribute hashes files based on their names, not their sizes.
>
>It looks like in your case, most of the files hash to the same brick.
>
>Rebalance would not help you, as rebalance only moves files from their
>non-hashed subvolume to their hashed subvolume.
>
>You might try adding bricks and then issuing a rebalance, to distribute
>your data better.
>
>With regards,
>Shishir
>
>- Original Message -
>From: "James Kahn" 
>To: gluster-users@gluster.org
>Sent: Saturday, August 25, 2012 2:19:32 PM
>Subject: [Gluster-users] Bricks not rebalancing
>
>I have a two brick distributed volume (single host) that has become
>significantly lopsided between bricks. One brick has 1.8TB allocated and
>the other only has 500GB allocated. Most of the files are very large.
>
>I'm trying to rebalance between bricks and gluster isn't doing anything.
>
>Any ideas on how to fix this?
>
>gluster> volume status storage1 detail
>Status of volume: storage1
>------------------------------------------------------------------------------
>Brick: Brick abc2.xx.com:/exports/exp1
>Port : 24013
>Online   : Y
>Pid  : 1553
>File System  : xfs
>Device   : /dev/xvdb1
>Mount Options: rw,inode64
>Inode Size   : 256
>Disk Space Free  : 179.6GB
>Total Disk Space : 2.0TB
>Inode Count  : 419429824
>Free Inodes  : 419428942
>------------------------------------------------------------------------------
>Brick: Brick abc2.xx:/exports/exp2
>Port : 24014
>Online   : Y
>Pid  : 1559
>File System  : xfs
>Device   : /dev/xvdc1
>Mount Options: rw,inode64
>Inode Size   : 256
>Disk Space Free  : 1.5TB
>Total Disk Space : 2.0TB
>Inode Count  : 419429824
>Free Inodes  : 419429032
> 
>gluster> volume rebalance storage1 start
>Starting rebalance on volume storage1 has been successful
>gluster> volume rebalance storage1 status
>Node        Rebalanced-files    size    scanned    failures    status
>---------   ----------------    ----    -------    --------    ---------
>localhost                  0       0        262           0    completed
>
>
>Thanks.
>
>
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
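
Following up on your suggestion, I assume the sequence would be roughly
the following (the new brick path here is hypothetical):

gluster volume add-brick storage1 abc2.xx.com:/exports/exp3
gluster volume rebalance storage1 start
gluster volume rebalance storage1 status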


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users