Use the 3.6 disperse feature, but it is only at beta2 now; you could use it once it reaches GA.
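For reference, creating such a raid-5-like volume with 3.6's disperse feature would look roughly like this (a sketch based on the 3.6 syntax; the volume name and brick paths are placeholders):

  # 3 bricks, tolerating the loss of any 1 (similar to raid-5)
  gluster volume create testvol disperse 3 redundancy 1 \
      node1:/export/brick1 node2:/export/brick2 node3:/export/brick3
  gluster volume start testvol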
On Wed, Sep 24, 2014 at 2:55 PM, Sahina Bose sab...@redhat.com wrote:
[+gluster-users]
On 09/24/2014 11:59 AM, Demeter Tibor wrote:
Hi,
Is there any method in glusterfs like raid-5?
I have three nodes,
Is your 3.4 cluster a newly deployed one or one upgraded from 3.3?
If yours is newly deployed, you cannot use a 3.3 client to mount, because the op-version is set to 2.
If yours is an upgraded one, you can use a 3.3 client to mount, because the op-version is set to 1.
The op-version is newly introduced in 3.4.
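If you want to confirm which op-version your cluster is at, one place to look (assuming the usual glusterd working directory; the exact key may vary by build) is glusterd's state file:

  # prints e.g. operating-version=2 on a fresh 3.4 deployment
  grep operating-version /var/lib/glusterd/glusterd.info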
Vijay,
Is there any Bugzilla item for this issue, so I could track it? If there is a bugfix for it, I could backport it to my deployment :)
On Wed, Apr 23, 2014 at 6:34 AM, Vijay Bellur vbel...@redhat.com wrote:
On 04/22/2014 01:53 PM, Vijay Bellur wrote:
On 04/21/2014 11:42 PM, Mingfan Lu
the subjected volume [1] and find the
result?
http://review.gluster.org/#/c/7412/7/doc/release-notes/3.5.0.md
--Humble
On Tue, Apr 22, 2014 at 10:46 AM, Mingfan Lu mingfan...@gmail.com wrote:
I saw something in
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes :
1) gluster volume set volname server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol to contain this line:
option rpc-auth-allow-insecure on
After 2), restarting glusterd is necessary; the edited file is sketched below.
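For clarity, the edited glusterd.vol would look roughly like this (a sketch; the surrounding options are whatever your installation already has):

  volume management
      type mgmt/glusterd
      option working-directory /var/lib/glusterd
      # added for step 2: allow connections from unprivileged ports
      option rpc-auth-allow-insecure on
  end-volume

Then: service glusterd restart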
On Tue, Apr 22, 2014 at 11:55 AM, Mingfan Lu mingfan...@gmail.com wrote:
I have seen some errors like gfid different on subvolume in my deployment.
e.g.
[2014-03-26 07:56:17.224262] W
[afr-common.c:1196:afr_detect_self_heal_by_iatt]
0-sh_ugc4_mams-replicate-1: /operation_1/video/2014/03/26/24/19: gfid
different on subvolume
my clients (3.3) have already backported the
When I use a gluster 3.3 client to mount a gluster 3.4.2 volume, I get a mount failed error: 0-glusterd: Client x.x.x.x:709 (1 - 1) doesn't support required op-version (2). Rejecting volfile request.
Any comments?
Any chance you can install/update your client?
On Fri, Mar 28, 2014 at 12:06 PM, Mingfan Lu mingfan...@gmail.com wrote:
When I use a gluster 3.3 client to mount a gluster 3.4.2 volume, I get a mount failed error: 0-glusterd: Client x.x.x.x:709 (1 - 1) doesn't support required op-version (2). Rejecting volfile request.
Hi,
I saw one of our clients dying after we saw gfid different on subvolume. Here is the log from the client:
[2014-03-03 08:30:54.286225] W
[afr-common.c:1196:afr_detect_self_heal_by_iatt] 0-bj-mams-replicate-6:
/operation/video/2014/03/03/e7/35/71/81f0a6656c077a16cad663e540543a78.pfvmeta:
gfid different on subvolume
In 3.3, when we ran service glusterd stop, it would stop glusterd and all glusterfsd processes (the server processes for volumes) but not stop glusterfs processes (client processes).
But in my installation of 3.4.2, it only stops glusterd; all glusterfsd processes (server processes) are still alive (this
I have tried both a clean installation and an upgrade from 3.3, and I have seen the same problem.
Of course, I rebooted.
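A quick way to see what is still running after the stop (a sketch; pgrep is assumed to be available):

  service glusterd stop
  pgrep -xl glusterd     # management daemon: should be gone
  pgrep -xl glusterfsd   # brick (server) processes
  pgrep -xl glusterfs    # client mounts and auxiliary daemons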
On Wed, Feb 26, 2014 at 1:20 AM, Khoi Mai khoi...@up.com wrote:
When you tried gluster 3.4.2-1, did you mean you upgraded it in place while glusterd was running? Are you missing
I have tried the latest glusterfs 3.4.2.
I found that I could start the service with service glusterd start, and all volumes come up.
But when I ran service glusterd status, it reported that glusterd is stopped:
glusterd dead but subsys locked
I found
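One common cause of the dead but subsys locked status (an assumption here, not confirmed in this thread) is a stale lock file left behind by the init script:

  # verify and clear the stale subsys lock, then start glusterd again
  ls -l /var/lock/subsys/glusterd
  rm -f /var/lock/subsys/glusterd
  service glusterd start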
Hi,
We saw such an issue.
One client (fuse mount) updated a file; then another client (also a fuse mount) copied the same file, and the reader found that the copied file was out of date.
If the reader ran the ls command to list the entries of the directory containing the target file, then it
If my server is upgraded to 3.4 while many clients still use 3.3,
is there any problem, or should I update all clients?
:afr_self_heal_completion_cbk]
0-search-prod-replicate-4: background meta-data self-heal completed on
/index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0
On Tue, Feb 18, 2014 at 2:10 PM, Mingfan Lu mingfan...@gmail.com wrote:
Hi,
We saw such an issue.
One client (fuse mount) updated a file
thanks. I will try.
On Tue, Feb 18, 2014 at 3:04 PM, Vijay Bellur vbel...@redhat.com wrote:
On 02/18/2014 11:40 AM, Mingfan Lu wrote:
If my server is upgraded to 3.4 while many clients still use 3.3,
is there any problem, or should I update all clients?
3.4 and 3.3 are protocol-compatible
I found the tool pstack is what I need. thanks
On Sat, Feb 8, 2014 at 5:13 PM, Mingfan Lu mingfan...@gmail.com wrote:
use pstree to get the threads of a brick server process
I got something like below; could we know which threads are io-threads and which are the threads that run self-heal? How about
use pstree to get the threads of a brick server process
I got something like below; could we know which threads are io-threads and which are the threads that run self-heal? How about the others?
(just going by the tid to know the sequence of creation)
[root@10.121.56.105 ~]# pstree -p 6226
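For reference, pstack prints each thread's stack, and a thread's role can usually be read off the functions in it (a sketch; 6226 is the brick PID from the pstree run above):

  pstack 6226 | less   # io-threads and self-heal threads are recognizable
                       # by the xlator functions on their stacks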
CPU load on some of the brick servers is very high and write performance is very slow.
dd'ing one file to the volume, the result is only 10+ KB/sec.
Any comments?
More information:
Volume Name: prodvolume
Type: Distributed-Replicate
Volume ID: f3fc24b3-23c7-430d-8ab1-81a646b1ce06
Status: Started
Number of
- Original Message -
From: Mingfan Lu mingfan...@gmail.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: haiwei.xie-soulinfo haiwei@soulinfo.com,
Gluster-users@gluster.org List gluster-users@gluster.org
Sent: Thursday, January 30, 2014 12:43:44 PM
Subject: Re: [Gluster-users] help, all
in ?? ()
No symbol table info available.
On Thu, Jan 30, 2014 at 3:14 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
Could you give us the backtrace?
Pranith
- Original Message -
From: Mingfan Lu mingfan...@gmail.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: haiwei.xie
the notifications immediately.
Pranith
- Original Message -
From: Mingfan Lu mingfan...@gmail.com
To: haiwei.xie-soulinfo haiwei@soulinfo.com
Cc: Gluster-users@gluster.org List gluster-users@gluster.org
Sent: Tuesday, January 28, 2014 7:44:14 AM
Subject: Re: [Gluster-users] help, all
Hi,
I have a distributed, replica=3 volume (not using stripe) in a cluster. I used dd to write 120 files as a test. I found the write performance of some files is much lower than the others. All these BAD files are stored on the same three brick servers for replication (I call them node1, node2
P.S.
We use a single 1Mbps NIC. I found that network bandwidth is not an issue.
On Tue, Jan 28, 2014 at 4:49 PM, Mingfan Lu mingfan...@gmail.com wrote:
Hi,
I have a distributed, replica=3 volume (not using stripe) in a cluster. I used dd to write 120 files as a test. I found the
Try writing your files across as many threads as possible, and see what the performance improvement is like.
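Something like this would exercise several writers at once (a sketch; the mount point and file names are placeholders):

  # write 8 files in parallel, then compare the aggregate throughput
  # with the single-stream dd run
  for i in $(seq 1 8); do
    dd if=/dev/zero of=/mnt/volume/testfile.$i bs=1M count=120 &
  done
  wait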
-Dan
Dan Mons
Skunk Works
Cutting Edge
http://cuttingedge.com.au
On 28 January 2014 18:49, Mingfan Lu mingfan...@gmail.com wrote:
Hi,
I have a distributed
I unmounted and remounted, and it seems there are no BAD results.
Interesting.
On Tue, Jan 28, 2014 at 6:00 PM, Mingfan Lu mingfan...@gmail.com wrote:
In the client's log, I found:
[2014-01-28 17:54:36.839220] I [afr-self-heal-data.c:712:afr_sh_data_fix]
0-sh-ugc1-mams-replicate-7: no active
One of our clients (3.3.0.5) crashed when writing data; the log is:
pending frames:
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame
the volume is distributed (replication = 1)
On Mon, Jan 27, 2014 at 4:01 PM, Mingfan Lu mingfan...@gmail.com wrote:
One of our clients (3.3.0.5) crashed when writing data; the log is:
pending frames:
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame
I ran gluster volume profile my_volume info, and I got something like:
 0.00    0.00 us    0.00 us    0.00 us     30  FORGET
 0.00    0.00 us    0.00 us    0.00 us    185  RELEASE
 0.00    0.00 us    0.00 us    0.00 us     11
All my clients hang when they create directories.
On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu mingfan...@gmail.com wrote:
I ran gluster volume profile my_volume info, and I got something like:
 0.00    0.00 us    0.00 us    0.00 us     30  FORGET
 0.00    0.00 us
lock info in the bricks, print lock request info
in afr and dht.
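A statedump is one way to see the lock tables (a sketch; on the brick servers the dumps typically land under /var/run/gluster):

  gluster volume statedump my_volume
  grep -i inodelk /var/run/gluster/*.dump.*   # look for piled-up lock requests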
All my clients hang when they create directories.
On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu mingfan...@gmail.com
wrote:
I ran gluster volume profile my_volume info, and I got something like:
 0.00    0.00 us    0.00 us
I found that on the brick servers, .glusterfs/indices/xattrop of one volume has many stale files (260,000; most of them were created 2 months ago). Could I delete them directly?
Another question: how were these stale files left behind? I think when a file is created or self-healed, the files should be
We have lots of (really many) files on our gluster brick servers, and every day we generate lots more; the number of files increases very quickly. Could I disable updatedb on the brick servers? If I do that, will the glusterfs servers be impacted?
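One option, rather than disabling updatedb entirely (a sketch; the brick path is a placeholder, and the suggestion itself is not quoted below), is to prune the brick directories and fuse mounts in /etc/updatedb.conf:

  # /etc/updatedb.conf -- append to the existing values
  PRUNEFS = "... fuse.glusterfs"
  PRUNEPATHS = "... /export/bricks"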
thanks, I will try this.
On Sun, Jan 26, 2014 at 7:23 PM, James purplei...@gmail.com wrote:
On Sun, Jan 26, 2014 at 6:13 AM, Mingfan Lu mingfan...@gmail.com wrote:
We have lots of (really many) files on our gluster brick servers, and every
day we generate lots more; the number of files increases
More hints:
I found node23 and node24 have many files in
.glusterfs/indices/xattrop.
There should be some problem; could anyone give some suggestions to resolve it?
On Thu, Jan 23, 2014 at 5:04 PM, Mingfan Lu mingfan...@gmail.com wrote:
I profiled node22; I found that most latency comes from:
 139544869  INODELK
 41.94    9273.78 us    10.00 us    7193886.00 us    282853490  LOOKUP
On Wed, Jan 22, 2014 at 12:05 PM, Mingfan Lu mingfan...@gmail.com wrote:
I have a volume (distribute-replica (*3)); today I found an interesting problem.
node22, node23 and node24 are replica-7 from
inode numbers but the server is using 64-bit inode numbers,
you could have something like this happen, I think.
If your client is 32-bit but the GlusterFS server is 64-bit, could you
confirm whether the volume is mounted with the enable-ino32 option?
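For reference, the option is passed at mount time (a sketch; the server and volume names are placeholders):

  # return 32-bit inode numbers to applications on this client
  mount -t glusterfs -o enable-ino32 server1:/myvolume /mnt/myvolume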
On Tue, Jan 21, 2014 at 7:37 PM, Mingfan Lu
Hi,
I failed to create directories (using python os.makedirs) occasionally; the
following is an example.
At 00:01:56, my application tried to create the directories
/mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/, but finally
my application failed to write the file to the directories.
I have a volume (distribute-replica (*3)); today I found an interesting problem.
node22, node23 and node24 are replica-7 from client A,
but the annoying thing is that when I create a dir or write a file from the client to
replica-7,
date;dd if=/dev/zero of=49 bs=1MB count=120
Wed Jan 22 11:51:41 CST 2014
I saw lots of logs, any thoughts?
[2013-12-24 11:35:19.659143] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
I tried to set auth.allow using the hostnames (not IPs) of clients,
such as:
gluster volume set VOLUME auth.allow hostname1,hostname2,hostname3
But the clients could not mount the volume.
If I use IPs, it definitely works.
But my clients use DHCP, so I don't think using IPs is a good idea, for
they
client resolve the hostname?
On Wed, Dec 4, 2013 at 11:49 AM, Mingfan Lu mingfan...@gmail.com wrote:
I tried to set auth.allow using the hostnames (not IPs) of clients,
such as:
gluster volume set VOLUME auth.allow hostname1,hostname2,hostname3
But the clients could not mount the volume.
If I use IPs
then there should be something wrong; otherwise
you have a DNS problem.
-C.B.
P.S. I guessed this because I believe that whenever a connection comes in, the
server does not know anything other than the IP and port.
On 12/3/2013 8:38 PM, Mingfan Lu wrote:
In the gluster nodes, I could ping the clients using
(PTR record) for those IPs, which
may involve your upstream ISP, or be even more complicated than that ...
http://en.wikipedia.org/wiki/Reverse_DNS_lookup
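You can check what the server would see with a reverse lookup (a sketch; a.b.c.d stands for a client IP):

  dig -x a.b.c.d +short   # should print the client's hostname (the PTR record)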
- C.B.
On 12/3/2013 9:24 PM, Mingfan Lu wrote:
When I ran host a.b.c.d, I got:
Host a.b.c.d.in-addr.arpa. not found: 3(NXDOMAIN)
a.b.c.d stands for the client's IP.
I have a volume named upload.
I uploaded a file to the volume
and then used mv to rename the file;
after that, I found the file size is 0 with 777 permissions.
== volume information ==
gluster volume info upload
Volume Name: upload
Type: Distributed-Replicate
Volume