Re: [Gluster-users] yum updates

2014-04-15 Thread Joe Julian
There is a bug filed already. Please start the glusterd service after upgrading 
until a corrected spec file is merged. 
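
A minimal sketch of the workaround on a CentOS 6 box with the stock init
scripts (package glob assumed; adjust for your distro):

  # upgrade the packages -- this is the step that currently leaves the daemons down
  yum update glusterfs\*

  # restart the management daemon by hand until the fixed spec file lands
  service glusterd status || service glusterd start
  chkconfig glusterd on    # keep it enabled across reboots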

On April 15, 2014 5:51:06 PM PDT, Franco Broi  wrote:
>
>Just discovered that doing a yum update of glusterfs on a running
>server
>is a bad idea. This was just a test system but I wouldn't have expected
>updating the software to cause the running daemons to fail.
>
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://supercolony.gluster.org/mailman/listinfo/gluster-users

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

[Gluster-users] yum updates

2014-04-15 Thread Franco Broi

Just discovered that doing a yum update of glusterfs on a running server
is a bad idea. This was just a test system but I wouldn't have expected
updating the software to cause the running daemons to fail.



Re: [Gluster-users] v3.4.3: Changelog?

2014-04-15 Thread Peter B.
On 04/14/2014 05:24 PM, Kaleb KEITHLEY wrote:
>>
>> I don't seem to be able to add a page for 3.4.3 there, however you can
>> always find the release notes in the source in .../doc/release-notes/.
>>
>
> https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_343_Release_Notes
>

Awesome!
Thanks a lot.

Pb



Re: [Gluster-users] Writing is slow when there are 10 million files.

2014-04-15 Thread Liam Slusser
Our application also stores the path of each file in a database.  Accessing
a file directly is normally pretty speedy.  However, getting the files into
the database required searching parts of the filesystem, which was really
slow.  We also had users working directly on the filesystem to fix things,
all with standard unix shell tools (ls/cp/mv etc), and again, really slow.

And the biggest problem I had was that if one of the nodes went down for a
reboot/patching/whatever, "resyncing" the filesystems took weeks because of
the huge number of files.
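
For what it's worth, on a replicated volume that resync is the self-heal
pass; roughly (volume name assumed):

  gluster volume heal myvol full    # queue a full self-heal once the node is back
  gluster volume heal myvol info    # list entries still pending heal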

thanks,
liam



On Tue, Apr 15, 2014 at 3:15 AM, Terada Michitaka wrote:

> >> To Liam:
>
> >I had about 100 million files in Gluster and it was unbelievably
> painfully slow.  We had to ditch it for other technology.
>
> Has slow down occurred on writing file?, listing files, or both?
>
> In our application, path of the data is managed in database.
> "ls" is slow, but not influence to my application, but writing file slow
> down is critical.
>
> >> To All:
>
> I uploaded a statistics when writing test(32kbyte x 10 million, 6 bricks).
>
>   http://gss.iijgio.com/gluster/gfs-profile_d03r2.txt
>
> Line 15, average-latency value is about 30 ms.
> I cannot judge this value is a normal(ordinary?) performance or not.
>
> Is it slow?
>
> Thanks,
> --Michika Terada
>
>
>
>
> 2014-04-15 16:05 GMT+09:00 Franco Broi :
>
>
>> My bug report is here
>> https://bugzilla.redhat.com/show_bug.cgi?id=1067256
>>
>> On Mon, 2014-04-14 at 23:51 -0700, Joe Julian wrote:
>> > If you experience pain using any filesystem, you should see your
>> > doctor.
>> >
>> > If you're not actually experiencing pain, perhaps you should avoid
>> > hyperbole and instead talk about what version you tried, what your
>> > tests were, how you tried to fix it, and what the results were.
>> >
>> > If you're using a current version with a kernel that has readdirplus
>> > support for fuse it shouldn't be that bad. If it is, file a bug report
>> > - especially if you have the skills to help diagnose the problem.
>> >
>> > On April 14, 2014 11:30:26 PM PDT, Liam Slusser 
>> > wrote:
>> >
>> > I had about 100 million files in Gluster and it was
>> > unbelievably painfully slow.  We had to ditch it for other
>> > technology.
>> >
>> >
>> > On Mon, Apr 14, 2014 at 11:24 PM, Franco Broi
>> >  wrote:
>> >
>> > I seriously doubt this is the right filesystem for
>> > you, we have problems
>> > listing directories with a few hundred files, never
>> > mind millions.
>> >
>> > On Tue, 2014-04-15 at 10:45 +0900, Terada Michitaka
>> > wrote:
>> > > Dear All,
>> > >
>> > >
>> > >
>> > > I have a problem with slow writing when there are 10
>> > million files.
>> > > (Top level directories are 2,500.)
>> > >
>> > >
>> > > I configured GlusterFS distributed cluster(3 nodes).
>> > > Each node's spec is below.
>> > >
>> > >
>> > >  CPU: Xeon E5-2620 (2.00GHz 6 Core)
>> > >  HDD: SATA 7200rpm 4TB*12 (RAID 6)
>> > >  NW: 10GBEth
>> > >  GlusterFS : glusterfs 3.4.2 built on Jan  3 2014
>> > 12:38:06
>> > >
>> > > This cluster(volume) is mounted on CentOS via FUSE
>> > client.
>> > > This volume is storage of our application and I want
>> > to store 3
>> > > hundred million to 5 billion files.
>> > >
>> > >
>> > > I performed a writing test, writing 32KByte file ×
>> > 10 million to this
>> > > volume, and encountered a problem.
>> > >
>> > >
>> > > (1) Writing is so slow and slow down as number of
>> > files increases.
>> > >   In non clustering situation(one node), this node's
>> > writing speed is
>> > > 40 MByte/sec at random,
>> > >   But writing speed is 3.6MByte/sec on that cluster.
>> > > (2) ls command is very slow.
>> > >   About 20 second. Directory creation takes about 10
>> > seconds at
>> > > lowest.
>> > >
>> > >
>> > > Question:
>> > >
>> > >  1)5 Billion files are possible to store in
>> > GlusterFS?
>> > >   Has someone succeeded to store billion  files to
>> > GlusterFS?
>> > >
>> > >  2) Could you give me a link for a tuning guide or
>> > some information of
>> > > tuning?
>> > 

[Gluster-users] Volume add-brick: failed: (with no error message)

2014-04-15 Thread Iain Milne
Hi folks,

We've had a 2 node gluster array working great for the last year. Each
brick is a 37TB xfs mount. It's now on Centos 6.5 (x64) running gluster
3.4.3-2

Volume Name: gfs
Type: Distribute
Volume ID: ddbb46bb-821e-44db-bc7e-32f43334f62c
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/data
Brick2: server2:/mnt/data


We've just bought a new server (identical in every way to the previous
two) and we're trying to get it added to the volume.

The peering process goes fine:

Number of Peers: 2

Hostname: server2
Uuid: 02f1a25b-afd8-49e2-8708-95456f6b8473
State: Peer in Cluster (Connected)

Hostname: server3
Port: 24007
Uuid: 3fc9df26-bb49-4c74-8eae-4b3f37389224
State: Peer in Cluster (Connected)


The only thing of interest (?) there is the addition of the port number
for the new server. Neither of the old servers shows a port, even when
running the peer status command on any of the boxes.

The main problem is the addition of the new server/brick:

[root@server1 glusterfs]# gluster volume add-brick gfs server3:/mnt/data
volume add-brick: failed:


There's no error there at all: just a blank after the colon.
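
For reference, the sequence is essentially (names as above; the rebalance
at the end is just the usual follow-up once add-brick succeeds -- we never
get that far):

  gluster peer probe server3                      # fine, shows up in peer status
  gluster volume add-brick gfs server3:/mnt/data  # fails with the blank error above
  gluster volume rebalance gfs start              # next step on a Distribute volume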

The logs on server1 (the one trying to do the add):

W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing
'option transport-type'. defaulting to "socket"
I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread
I [cli-cmd-volume.c:1336:cli_check_gsync_present] 0-: geo-replication not
installed
I [cli-rpc-ops.c:1695:gf_cli_add_brick_cbk] 0-cli: Received resp to add brick
I [input.c:36:cli_batch] 0-: Exiting with: -1


And the logs on server3 (the one being added):

E [glusterd-op-sm.c:3719:glusterd_op_ac_stage_op] 0-management: Stage
failed on operation 'Volume Add brick', Status : -1


The current storage array is live and in-use by users, so it can't be
taken offline at short notice.

For completeness, here's glusterd on server3 running in debug mode when
the add-brick command was attempted:

[2014-04-15 15:03:33.133976] D
[glusterd-handler.c:549:__glusterd_handle_cluster_lock] 0-management:
Received LOCK from uuid: 881743a9-b71e-45a9-8528-cc932837ebb8
[2014-04-15 15:03:33.134013] D
[glusterd-utils.c:4936:glusterd_friend_find_by_uuid] 0-management: Friend
found... state: Peer in Cluster
[2014-04-15 15:03:33.134031] D
[glusterd-op-sm.c:5355:glusterd_op_sm_inject_event] 0-management: Enqueue
event: 'GD_OP_EVENT_LOCK'
[2014-04-15 15:03:33.134051] D
[glusterd-handler.c:572:__glusterd_handle_cluster_lock] 0-management:
Returning 0
[2014-04-15 15:03:33.134065] D [glusterd-op-sm.c:5432:glusterd_op_sm]
0-management: Dequeued event of type: 'GD_OP_EVENT_LOCK'
[2014-04-15 15:03:33.134083] D [glusterd-utils.c:340:glusterd_lock]
0-management: Cluster lock held by 881743a9-b71e-45a9-8528-cc932837ebb8
[2014-04-15 15:03:33.134096] D [glusterd-op-sm.c:2445:glusterd_op_ac_lock]
0-management: Lock Returned 0
[2014-04-15 15:03:33.134153] D
[glusterd-handler.c:1776:glusterd_op_lock_send_resp] 0-management:
Responded to lock, ret: 0
[2014-04-15 15:03:33.134171] D
[glusterd-utils.c:5598:glusterd_sm_tr_log_transition_add] 0-management:
Transitioning from 'Default' to 'Locked' due to event 'GD_OP_EVENT_LOCK'
[2014-04-15 15:03:33.134187] D
[glusterd-utils.c:5600:glusterd_sm_tr_log_transition_add] 0-management:
returning 0
[2014-04-15 15:03:33.135409] D
[glusterd-utils.c:4936:glusterd_friend_find_by_uuid] 0-management: Friend
found... state: Peer in Cluster
[2014-04-15 15:03:33.135452] D
[glusterd-handler.c:604:glusterd_req_ctx_create] 0-management: Received op
from uuid 881743a9-b71e-45a9-8528-cc932837ebb8
[2014-04-15 15:03:33.135481] D
[glusterd-op-sm.c:5355:glusterd_op_sm_inject_event] 0-management: Enqueue
event: 'GD_OP_EVENT_STAGE_OP'
[2014-04-15 15:03:33.135497] D [glusterd-op-sm.c:5432:glusterd_op_sm]
0-management: Dequeued event of type: 'GD_OP_EVENT_STAGE_OP'
[2014-04-15 15:03:33.135524] D
[glusterd-utils.c:1209:glusterd_volinfo_find] 0-: Volume gfs found
[2014-04-15 15:03:33.135537] D
[glusterd-utils.c:1216:glusterd_volinfo_find] 0-: Returning 0
[2014-04-15 15:03:33.135554] D
[glusterd-utils.c:5223:glusterd_is_rb_started] 0-: is_rb_started:status=0
[2014-04-15 15:03:33.135600] D
[glusterd-utils.c:5232:glusterd_is_rb_paused] 0-: is_rb_paused:status=0
[2014-04-15 15:03:33.135643] D
[glusterd-utils.c:803:glusterd_brickinfo_new] 0-management: Returning 0
[2014-04-15 15:03:33.135662] D
[glusterd-utils.c:865:glusterd_brickinfo_new_from_brick] 0-management:
Returning 0
[2014-04-15 15:03:33.135677] D [glusterd-utils.c:665:glusterd_volinfo_new]
0-management: Returning 0
[2014-04-15 15:03:33.135698] D
[glusterd-utils.c:749:glusterd_volume_brickinfos_delete] 0-management:
Returning 0
[2014-04-15 15:03:33.135713] D
[glusterd-utils.c:777:glusterd_volinfo_delete] 0-management: Returning 0
[2014-04-15 15:03:33.135729] D
[glusterd-utils.c:803:glusterd_brickinfo_new] 0-management: Returning 0
[2014

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-15 Thread Paul Penev
I am in the same boat as Fabio. I'm using glusterfs in production too.
Rebooting a brick might mean losing a customer at this time.

However, I did shorten the ping-timeout from the default 42 seconds to
5 seconds. I have been more successful at rebooting bricks since, but I am
still sometimes seeing VMs die (although not every time).
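
In case it helps anyone, the timeout change is just a volume option
(volume name assumed):

  gluster volume set myvol network.ping-timeout 5   # default is 42 seconds
  gluster volume info myvol                         # shows it under "Options Reconfigured"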

I am setting up a small test cluster for experimenting more easily.
This will be a simple replica with two bricks only.


Re: [Gluster-users] RPMs for libvirt-1.2.3 for CentOS now available

2014-04-15 Thread Bharata B Rao
On Mon, Apr 7, 2014 at 8:54 PM, Nux!  wrote:

> On 07.04.2014 15:24, Lalatendu Mohanty wrote:
>
>> On 04/07/2014 07:28 PM, Nux! wrote:
>>
>>> On 07.04.2014 14:41, Lalatendu Mohanty wrote:
>>>
 RPMs for the latest libvirt upstream release, i.e. version 1.2.3, are
 available for CentOS at the Yum repo[1].

 Libvirt 1.2.3 has major bug fixes for GlusterFS and also supports
 qemu/libvirt snapshots on GlusterFS. The change log can be found
 here[2].

 However, for snapshot support from QEMU, we need QEMU 2.0, which will
 be released in a couple of weeks from the upstream project.

 [1] http://download.gluster.org/pub/gluster/glusterfs/libvirt/CentOS/
 [2] http://libvirt.org/news.html

>>>
>>> Hello Lala,
>>>
>>> Where can I read more about these qemu/gluster snapshots? How are they
>>> different from the Qemu/qcow2 snapshots?
>>>
>>> Lucian
>>>
>>
>> Hey Lucian,
>>
>> Sorry I was not clear in my previous mail. The snapshots are the same
>> Qemu/qcow2 snapshots; they are now supported on VM images run through
>> libgfapi+GlusterFS. Previously, libvirt/qemu snapshots were only
>> supported for FUSE-mounted gluster volumes.
>>
>> Thanks,
>> Lala
>>
>
> Oh, this is quite a big deal. So one can't take snapshots of VMs running
> on libgfapi right now unless they use Qemu 2.0?


AFAIK, QEMU (with libgfapi) has supported offline snapshots (driven by
qemu-img) as well as live snapshots of a running VM disk image (using the
snapshot_blkdev qemu monitor command) right from QEMU 1.3.
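
Roughly, for the two cases (host, volume, image and device names below are
made up):

  # offline: internal qcow2 snapshot of a gluster-backed image via qemu-img
  qemu-img snapshot -c snap1 gluster://server1/vmvol/disk0.qcow2
  qemu-img snapshot -l gluster://server1/vmvol/disk0.qcow2    # list snapshots

  # live: external snapshot of a running guest from the qemu monitor
  # (qemu) snapshot_blkdev virtio0 gluster://server1/vmvol/disk0-snap.qcow2 qcow2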

Regards,
Bharata.

Re: [Gluster-users] Glusterfs Rack-Zone Awareness feature...

2014-04-15 Thread Jeff Darcy
> I have a little question.
> I have read glusterfs documentation looking for a replication management. I
> want to be able to localize replicas on nodes hosted in 2 Datacenters
> (dual-building).
> CouchBase provide the feature, I’m looking for GlusterFs : “Rack-Zone
> Awareness”.
> https://blog.couchbase.com/announcing-couchbase-server-25
> “Rack-Zone Awareness - This feature will allow logical groupings of Couchbase
> Server nodes (where each group is physically located on a rack or an
> availability zone). Couchbase Server will automatically allocate replica
> copies of data on servers that belong to a group different from where the
> active data lives. This significantly increases reliability in case an
> entire rack becomes unavailable. This is of particularly importance for
> customers running deployments in public clouds.”

> Do you know if Glusterfs provide a similar feature ?
> If not, do you plan to develop it, in the near future ?

There are two parts to the answer. Rack-aware placement in general is part of 
the "data classification" feature planned for the 3.6 release. 

http://www.gluster.org/community/documentation/index.php/Features/data-classification
 

With this feature, files can be placed according to various policies using any 
of several properties associated with objects or physical locations. Rack-aware 
placement would use the physical location of a brick. Tiering would use the 
performance properties of a brick and the access time/frequency of an object. 
Multi-tenancy would use the tenant identity for both bricks and objects. And so 
on. It's all essentially the same infrastructure. 

For replication decisions in particular, there needs to be another piece. Right 
now, the way we use N bricks with a replication factor of R is to define N/R 
replica sets each containing R members. This is sub-optimal in many ways. We 
can still compare the "value" or "fitness" of two replica sets for storing a 
particular object, but our options are limited to the replica sets as defined 
last time bricks were added or removed. The differences between one choice and 
another effectively get smoothed out, and the load balancing after a failure is 
less than ideal. To do this right, we need to use more (overlapping) 
combinations of bricks. Some of us have discussed ways that we can do this 
without sacrificing the modularity of having distribution and replication as 
two separate modules, but there's no defined plan or date for that feature 
becoming available. 
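
As a concrete example of the current behaviour: with 6 bricks and replica 2
you get N/R = 3 fixed replica sets, formed from consecutive bricks on the
command line (volume and host names made up):

  gluster volume create myvol replica 2 \
      s1:/brick s2:/brick  s3:/brick s4:/brick  s5:/brick s6:/brick
  # replica sets are (s1,s2), (s3,s4), (s5,s6); each file lands on exactly
  # one of those pairs, chosen by the distribution hash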

BTW, note that using *too many* combinations can also be a problem. Every time 
an object is replicated across a certain set of storage locations, it creates a 
coupling between those locations. Before long, all locations are coupled 
together, so that *any* failure of R-1 locations anywhere in the system will 
result in data loss or unavailability. Many systems, possibly including 
Couchbase Server, have made this mistake and become *less* reliable as a 
result.  Emin Gün Sirer does a better job describing the problem - and 
solutions - than I do, here:

http://hackingdistributed.com/2014/02/14/chainsets/

Re: [Gluster-users] Writing is slow when there are 10 million files.

2014-04-15 Thread Franco Broi

On 15 Apr 2014 18:15, Terada Michitaka  wrote:
>
> >> To Liam:
>
> >I had about 100 million files in Gluster and it was unbelievably painfully 
> >slow.  We had to ditch it for other technology.
>
> Has slow down occurred on writing file?, listing files, or both?
>
> In our application, path of the data is managed in database.
> "ls" is slow, but not influence to my application, but writing file slow down 
> is critical.

Throughput with the fuse client is very good, and as long as you access files 
directly you won't have any problems with slow directory reads. In my 
experience it's better than NFS, especially if you have many clients.

>
> >> To All:
>
> I uploaded a statistics when writing test(32kbyte x 10 million, 6 bricks).
>
>   http://gss.iijgio.com/gluster/gfs-profile_d03r2.txt
>
> Line 15, average-latency value is about 30 ms.
> I cannot judge this value is a normal(ordinary?) performance or not.
>
> Is it slow?
>
> Thanks,
> --Michika Terada
>
>
>
>
> 2014-04-15 16:05 GMT+09:00 Franco Broi :
>>
>>
>> My bug report is here
>> https://bugzilla.redhat.com/show_bug.cgi?id=1067256
>>
>> On Mon, 2014-04-14 at 23:51 -0700, Joe Julian wrote:
>> > If you experience pain using any filesystem, you should see your
>> > doctor.
>> >
>> > If you're not actually experiencing pain, perhaps you should avoid
>> > hyperbole and instead talk about what version you tried, what your
>> > tests were, how you tried to fix it, and what the results were.
>> >
>> > If you're using a current version with a kernel that has readdirplus
>> > support for fuse it shouldn't be that bad. If it is, file a bug report
>> > - especially if you have the skills to help diagnose the problem.
>> >
>> > On April 14, 2014 11:30:26 PM PDT, Liam Slusser 
>> > wrote:
>> >
>> > I had about 100 million files in Gluster and it was
>> > unbelievably painfully slow.  We had to ditch it for other
>> > technology.
>> >
>> >
>> > On Mon, Apr 14, 2014 at 11:24 PM, Franco Broi
>> >  wrote:
>> >
>> > I seriously doubt this is the right filesystem for
>> > you, we have problems
>> > listing directories with a few hundred files, never
>> > mind millions.
>> >
>> > On Tue, 2014-04-15 at 10:45 +0900, Terada Michitaka
>> > wrote:
>> > > Dear All,
>> > >
>> > >
>> > >
>> > > I have a problem with slow writing when there are 10
>> > million files.
>> > > (Top level directories are 2,500.)
>> > >
>> > >
>> > > I configured GlusterFS distributed cluster(3 nodes).
>> > > Each node's spec is below.
>> > >
>> > >
>> > >  CPU: Xeon E5-2620 (2.00GHz 6 Core)
>> > >  HDD: SATA 7200rpm 4TB*12 (RAID 6)
>> > >  NW: 10GBEth
>> > >  GlusterFS : glusterfs 3.4.2 built on Jan  3 2014
>> > 12:38:06
>> > >
>> > > This cluster(volume) is mounted on CentOS via FUSE
>> > client.
>> > > This volume is storage of our application and I want
>> > to store 3
>> > > hundred million to 5 billion files.
>> > >
>> > >
>> > > I performed a writing test, writing 32KByte file ×
>> > 10 million to this
>> > > volume, and encountered a problem.
>> > >
>> > >
>> > > (1) Writing is so slow and slow down as number of
>> > files increases.
>> > >   In non clustering situation(one node), this node's
>> > writing speed is
>> > > 40 MByte/sec at random,
>> > >   But writing speed is 3.6MByte/sec on that cluster.
>> > > (2) ls command is very slow.
>> > >   About 20 second. Directory creation takes about 10
>> > seconds at
>> > > lowest.
>> > >
>> > >
>> > > Question:
>> > >
>> > >  1)5 Billion files are possible to store in
>> > GlusterFS?
>> > >   Has someone succeeded to store billion  files to
>> > GlusterFS?
>> > >
>> > >  2) Could you give me a link for a tuning guide or
>> > some information of
>> > > tuning?
>> > >
>> > > Thanks.
>> > >
>> > >
>> > > -- Michitaka Terada
>> >
>> > > ___
>> > > Gluster-users mailing list
>> > > Gluster-users@gluster.org
>> >

Re: [Gluster-users] Writing is slow when there are 10 million files.

2014-04-15 Thread Terada Michitaka
>> To Liam:

>I had about 100 million files in Gluster and it was unbelievably painfully
slow.  We had to ditch it for other technology.

Did the slowdown occur when writing files, when listing files, or both?

In our application, the path of each file is managed in a database.
"ls" being slow does not affect my application, but a slowdown when writing
files is critical.

>> To All:

I uploaded statistics from a write test (32 KByte x 10 million files, 6 bricks).

  http://gss.iijgio.com/gluster/gfs-profile_d03r2.txt

At line 15, the average-latency value is about 30 ms.
I cannot judge whether this is normal (ordinary?) performance or not.

Is it slow?
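
(The statistics were collected with GlusterFS's built-in volume profiling;
roughly, with the volume name as a placeholder:)

  gluster volume profile myvol start   # begin collecting per-brick latency stats
  # ... run the 32 KByte x 10 million write test ...
  gluster volume profile myvol info    # dump cumulative and interval stats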

Thanks,
--Michika Terada




2014-04-15 16:05 GMT+09:00 Franco Broi :

>
> My bug report is here
> https://bugzilla.redhat.com/show_bug.cgi?id=1067256
>
> On Mon, 2014-04-14 at 23:51 -0700, Joe Julian wrote:
> > If you experience pain using any filesystem, you should see your
> > doctor.
> >
> > If you're not actually experiencing pain, perhaps you should avoid
> > hyperbole and instead talk about what version you tried, what your
> > tests were, how you tried to fix it, and what the results were.
> >
> > If you're using a current version with a kernel that has readdirplus
> > support for fuse it shouldn't be that bad. If it is, file a bug report
> > - especially if you have the skills to help diagnose the problem.
> >
> > On April 14, 2014 11:30:26 PM PDT, Liam Slusser 
> > wrote:
> >
> > I had about 100 million files in Gluster and it was
> > unbelievably painfully slow.  We had to ditch it for other
> > technology.
> >
> >
> > On Mon, Apr 14, 2014 at 11:24 PM, Franco Broi
> >  wrote:
> >
> > I seriously doubt this is the right filesystem for
> > you, we have problems
> > listing directories with a few hundred files, never
> > mind millions.
> >
> > On Tue, 2014-04-15 at 10:45 +0900, Terada Michitaka
> > wrote:
> > > Dear All,
> > >
> > >
> > >
> > > I have a problem with slow writing when there are 10
> > million files.
> > > (Top level directories are 2,500.)
> > >
> > >
> > > I configured GlusterFS distributed cluster(3 nodes).
> > > Each node's spec is below.
> > >
> > >
> > >  CPU: Xeon E5-2620 (2.00GHz 6 Core)
> > >  HDD: SATA 7200rpm 4TB*12 (RAID 6)
> > >  NW: 10GBEth
> > >  GlusterFS : glusterfs 3.4.2 built on Jan  3 2014
> > 12:38:06
> > >
> > > This cluster(volume) is mounted on CentOS via FUSE
> > client.
> > > This volume is storage of our application and I want
> > to store 3
> > > hundred million to 5 billion files.
> > >
> > >
> > > I performed a writing test, writing 32KByte file ×
> > 10 million to this
> > > volume, and encountered a problem.
> > >
> > >
> > > (1) Writing is so slow and slow down as number of
> > files increases.
> > >   In non clustering situation(one node), this node's
> > writing speed is
> > > 40 MByte/sec at random,
> > >   But writing speed is 3.6MByte/sec on that cluster.
> > > (2) ls command is very slow.
> > >   About 20 second. Directory creation takes about 10
> > seconds at
> > > lowest.
> > >
> > >
> > > Question:
> > >
> > >  1)5 Billion files are possible to store in
> > GlusterFS?
> > >   Has someone succeeded to store billion  files to
> > GlusterFS?
> > >
> > >  2) Could you give me a link for a tuning guide or
> > some information of
> > > tuning?
> > >
> > > Thanks.
> > >
> > >
> > > -- Michitaka Terada
> >
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > >
> >
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> >
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> >
> > __
> >
> >

[Gluster-users] Glusterfs Rack-Zone Awareness feature...

2014-04-15 Thread COCHE Sébastien
Hi all,

 

I have a little question.

I have read the GlusterFS documentation looking for a way to manage replica 
placement. I want to be able to localize replicas on nodes hosted in 2 
datacenters (dual-building).

CouchBase provides the feature I'm looking for in GlusterFS: "Rack-Zone 
Awareness".

https://blog.couchbase.com/announcing-couchbase-server-25 
 

"Rack-Zone Awareness - This feature will allow logical groupings of Couchbase 
Server nodes (where each group is physically located on a rack or an 
availability zone). Couchbase Server will automatically allocate replica copies 
of data on servers that belong to a group different from where the active data 
lives. This significantly increases reliability in case an entire rack becomes 
unavailable. This is of particular importance for customers running 
deployments in public clouds."

 

Do you know if GlusterFS provides a similar feature?

If not, do you plan to develop it in the near future?

 

Thanks in advance.

 

Sébastien Coché

 


Re: [Gluster-users] Writing is slow when there are 10 million files.

2014-04-15 Thread Franco Broi

My bug report is here
https://bugzilla.redhat.com/show_bug.cgi?id=1067256

On Mon, 2014-04-14 at 23:51 -0700, Joe Julian wrote:
> If you experience pain using any filesystem, you should see your
> doctor. 
> 
> If you're not actually experiencing pain, perhaps you should avoid
> hyperbole and instead talk about what version you tried, what your
> tests were, how you tried to fix it, and what the results were. 
> 
> If you're using a current version with a kernel that has readdirplus
> support for fuse it shouldn't be that bad. If it is, file a bug report
> - especially if you have the skills to help diagnose the problem. 
> 
> On April 14, 2014 11:30:26 PM PDT, Liam Slusser 
> wrote:
> 
> I had about 100 million files in Gluster and it was
> unbelievably painfully slow.  We had to ditch it for other
> technology. 
> 
> 
> On Mon, Apr 14, 2014 at 11:24 PM, Franco Broi
>  wrote:
> 
> I seriously doubt this is the right filesystem for
> you, we have problems
> listing directories with a few hundred files, never
> mind millions. 
> 
> On Tue, 2014-04-15 at 10:45 +0900, Terada Michitaka
> wrote:
> > Dear All,
> >
> >
> >
> > I have a problem with slow writing when there are 10
> million files.
> > (Top level directories are 2,500.)
> >
> >
> > I configured GlusterFS distributed cluster(3 nodes).
> > Each node's spec is below.
> >
> >
> >  CPU: Xeon E5-2620 (2.00GHz 6 Core)
> >  HDD: SATA 7200rpm 4TB*12 (RAID 6)
> >  NW: 10GBEth
> >  GlusterFS : glusterfs 3.4.2 built on Jan  3 2014
> 12:38:06
> >
> > This cluster(volume) is mounted on CentOS via FUSE
> client.
> > This volume is storage of our application and I want
> to store 3
> > hundred million to 5 billion files.
> >
> >
> > I performed a writing test, writing 32KByte file ×
> 10 million to this
> > volume, and encountered a problem.
> >
> >
> > (1) Writing is so slow and slow down as number of
> files increases.
> >   In non clustering situation(one node), this node's
> writing speed is
> > 40 MByte/sec at random,
> >   But writing speed is 3.6MByte/sec on that cluster.
> > (2) ls command is very slow.
> >   About 20 second. Directory creation takes about 10
> seconds at
> > lowest.
> >
> >
> > Question:
> >
> >  1)5 Billion files are possible to store in
> GlusterFS?
> >   Has someone succeeded to store billion  files to
> GlusterFS?
> >
> >  2) Could you give me a link for a tuning guide or
> some information of
> > tuning?
> >
> > Thanks.
> >
> >
> > -- Michitaka Terada
> 
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> >
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users 
> 
> 
> 
> 
> __
> 
> 
> 
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 

