Hi all,
I have one particular replicated volume keep getting these errors in the
log.
glustershd.log:
[2013-04-08 17:18:30.178495] W
[client3_1-fops.c:592:client3_1_unlink_cbk] 0-CSEVOL-client-1: remote
operation failed: No such file or directory
[2013-04-08 17:18:30.179174] W
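Those unlink warnings are logged by the self-heal daemon. A hedged way to inspect what it is trying to heal (volume name taken from the log above; these subcommands exist in GlusterFS 3.3+):

```shell
# Run on any server node in the cluster.
gluster volume heal CSEVOL info              # entries pending self-heal
gluster volume heal CSEVOL info heal-failed  # entries that failed to heal
gluster volume heal CSEVOL info split-brain  # entries in split-brain
```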
Hi Toby,
According to the crash log, the cause may be the following:
glusterfsd/src/glusterfsd-mgmt.c line 1394
static char oldvolfile[131072];
so if the volume
file (/var/lib/glusterd/glustershd/glustershd-server.vol) is larger
than 128K then it simply crashes. This happens if there are a lot of
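The 128K limit described above can be checked ahead of time. A minimal sketch (the volfile path is the one from the message; `stat -c %s` assumes GNU coreutils, and the helper name is my own):

```shell
# Report whether a volfile fits in the 131072-byte static buffer
# (sizeof(oldvolfile) in glusterfsd-mgmt.c, per the message above).
check_volfile_size() {
    limit=131072
    # stat prints the size in bytes; fall back to 0 if the file is missing
    size=$(stat -c %s "$1" 2>/dev/null || echo 0)
    if [ "$size" -ge "$limit" ]; then
        echo "too large: $size bytes (limit $limit)"
    else
        echo "ok: $size bytes"
    fi
}

check_volfile_size /var/lib/glusterd/glustershd/glustershd-server.vol
```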
Hi gluster users,
I just upgraded 3.2.5 to 3.3.1 for a Distributed-Replicate volume with
about 2M directories to get a working replace-brick and now see it hang
up the entire gluster volume for all clients for several minutes, and
subsequently hang up the glusterfs on the destination brick.
I
This crash is because of some other reason. Fix for which is already merged in
upstream.
http://review.gluster.com/4767
Thanks for reporting the issue.
Pranith.
- Original Message -
From: 符永涛 yongta...@gmail.com
To: Toby Corkindale toby.corkind...@strategicdata.com.au
Cc:
On 04/06/2013 11:05 AM, Emmanuel Dreyfus wrote:
Gluster Build System jenk...@build.gluster.org wrote:
SRC:
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.
tar.gz
This is almost one month old. Are we going to have a new snapshot? alpha3
or beta1?
Yes, intend having
Vijay Bellur vbel...@redhat.com writes:
On 04/06/2013 11:05 AM, Emmanuel Dreyfus wrote:
Gluster Build System jenk...@build.gluster.org wrote:
SRC:
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.
tar.gz
This is almost one month old. Are we going to have a new
This is also imminent - there are a couple of backports that need to be
completed yet.
-JM
On Mon, Apr 8, 2013 at 3:09 PM, Shawn Nock n...@nocko.se wrote:
Vijay Bellur vbel...@redhat.com writes:
On 04/06/2013 11:05 AM, Emmanuel Dreyfus wrote:
Gluster Build System
Is the RDMA support going to be fixed with 3.4 ?
Regards,
--
Bartek Krawczyk
network and system administrator
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
Hi James!
1) Yes, right now, we run as root. Thanks for noticing :) ... We are
working on changing this in the very near future. The problem is that
the plugin attempts to mount a filesystem, but we have recently discussed
that the auto-mount behaviour may be a superfluous feature, since
Hi,
Thanks for reporting the issue.
It is fixed now; the patch needs to be reviewed and will be merged soon.
https://bugzilla.redhat.com/show_bug.cgi?id=947824
Regards,
S.Venkatesh
- Original Message -
From: Vijay Bellur vbel...@gmail.com
To: Venkatesh Somyajulu vsomy...@redhat.com
Gluster Build System jenk...@build.gluster.org wrote:
SRC:
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.
tar.gz
This is almost one month old. Are we going to have a new snapshot? alpha3
or beta1?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Hi All,
I'm setting up a GlusterFS to provide a shared storage for our application.
I have 3 machines at various locations and would like to create a gluster
volume from the same.
1) Server 1 : Dubai HO
2) Server 2 : Kuwait HO
Hi,
We have a set of 4 gluster nodes, all in replicated (design?)
We use it to store our qcow2 images for kvm. These images have a variable
IO, though most of them are for reading only.
I tried to find some documentation re. performance optimization, but it's
either unclear to me, or I
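For VM image workloads on a replicated volume, a few translator options are commonly tuned. A hedged starting point (the volume name "vmstore" is a placeholder; these option names exist in the 3.3.x series, but the values are only a guess and should be benchmarked against your own IO load):

```shell
# io-cache read cache per brick process
gluster volume set vmstore performance.cache-size 256MB
# number of io-threads serving requests on each brick
gluster volume set vmstore performance.io-thread-count 32
# write-behind buffering per open file
gluster volume set vmstore performance.write-behind-window-size 4MB
```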
Hello,
It seems the gluster hadoop plugin assumes all hadoop daemons/commands are
run as root? I was having trouble getting the jobtracker to start because
every time the fs is initialized a system call "mount -t glusterfs ..." is
issued. Cloudera runs all daemons as the mapred user, who is not
Hi James:
It looks like standard Hadoop wants to keep the files at permission
700, just as you mention in your email:
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
Just a guess, but maybe it will work
This looks like you are replicating every file to all bricks?
What is TCP running on? 1G NICs? 10G? IPoIB (40-80G)?
I think you want to have Distribute-Replicate, i.e. 4 bricks with replica = 2.
Unless you are running at least 10G NICs you are going to have serious
IO issues with your KVM/qcow2
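The 2x2 layout suggested above can be sketched as follows (server and brick names are placeholders; with "replica 2", bricks are paired in the order they are listed, and files are then distributed across the two pairs):

```shell
gluster volume create vmstore replica 2 \
    server1:/export/brick1 server2:/export/brick1 \
    server3:/export/brick1 server4:/export/brick1
gluster volume start vmstore
```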