On Thu, Nov 10, 2016 at 12:58 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 10 Nov 2016 08:22, "Raghavendra" wrote:
> >
> > Kyle,
> >
> > Thanks for your response :). This really helps. From 13s to 0.23s
> seems like a huge improvement.
On 10 Nov 2016 08:22, "Raghavendra" wrote:
>
> Kyle,
>
> Thanks for your response :). This really helps. From 13s to 0.23s
> seems like a huge improvement.
From 13 minutes to 23 seconds, not from 13 seconds :)
Kyle,
Thanks for your response :). This really helps. From 13s to 0.23s
seems like a huge improvement.
regards,
Raghavendra
On Tue, Nov 8, 2016 at 8:21 PM, Kyle Johnson wrote:
> Hey there,
>
> We have a number of processes which daily walk our entire directory tree
>
Hey Atul,
You'd need to provide adequate information for us to get to the actual
issue. I'd recommend you provide the following:
1. A detailed description of the problem.
2. The steps you performed on the cluster.
3. Log files from all the nodes in the cluster.
You can file a bug for the
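Not part of the original reply, but as a minimal sketch of how that information is usually gathered (the log path is the default location and may differ on your distribution):

    # Overall cluster and volume state
    gluster peer status
    gluster volume info
    gluster volume status

    # Collect the logs from every node; brick, glusterd and client logs
    # live under the default log directory
    tar czf gluster-logs-$(hostname).tar.gz /var/log/glusterfs/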
> And that's why I really prefer Gluster, without any metadata or
> similar.
> But metadata servers aren't mandatory to achieve automatic rebalance.
> Gluster is already able to rebalance and move data around the cluster,
> and already has the tools to add a single server even in a replica 3.
>
>
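As a minimal sketch of the rebalance workflow mentioned above (volume, server and brick names are hypothetical; for a replica 3 volume, bricks normally have to be added in multiples of the replica count):

    # Add capacity to an existing distributed volume...
    gluster volume add-brick myvol newserver:/data/brick1
    # ...then spread the existing data onto the new brick
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status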
On 11/09/2016 11:22 AM, Gandalf Corvotempesta wrote:
2016-11-09 19:32 GMT+01:00 Joe Julian:
> Yes, and ceph has a metadata server to manage this
And that's why I really prefer Gluster, without any metadata or similar.
But metadata servers aren't mandatory to achieve
2016-11-09 19:32 GMT+01:00 Joe Julian :
> Yes, and ceph has a metadata server to manage this
And that's why I really prefer Gluster, without any metadata or similar.
But metadata servers aren't mandatory to achieve automatic rebalance.
Gluster is already able to rebalance
Awesome, thanks Joe!!
To answer your question, it would be for improved availability.
Dan-Joe Lopez
PSD | DevOps CoE
From: Joe Julian [mailto:j...@julianfamily.org]
Sent: Wednesday, November 9, 2016 10:26 AM
To: Lopez,
On 11/08/2016 10:53 PM, Gandalf Corvotempesta wrote:
On 9 Nov 2016 1:23 AM, "Joe Julian" wrote:
>
> Replicas are defined in the order bricks are listed in the volume
> create command. So gluster volume create myvol replica 2
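The quoted sentence is cut off above; as a minimal sketch of the point (server and brick names are hypothetical), with replica 2 the bricks are paired in the order they are listed, so server1/server2 form one replica set and server3/server4 the next:

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1
    gluster volume start myvol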
On 11/09/2016 10:21 AM, Lopez, Dan-Joe wrote:
Thanks Joe and Gandalf!
I’ve looked at the blog post that you wrote, Joe, but it seems to reference a
more complicated scenario than I am working with.
We have a `replica n` volume, and I want to make it a `replica n+1`
volume. Is that possible?
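None of the quoted replies spell out the command, but as a hedged sketch (volume, server and brick names are hypothetical): going from replica n to replica n+1 is typically done with add-brick and the new replica count, supplying one new brick per existing replica set. For a single-replica-set replica 2 volume, for example:

    gluster volume add-brick myvol replica 3 server3:/data/brick1
    # then let the new brick catch up
    gluster volume heal myvol full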
Thanks Joe and Gandalf!
I’ve looked at the blog post that you wrote, Joe, but it seems to reference a more
complicated scenario than I am working with.
We have a `replica n` volume, and I want to make it a `replica n+1` volume. Is
that possible?
Dan-Joe Lopez
From: gluster-users-boun...@gluster.org
Disks are SAS disks on the server. No hardware RAID (JBOD), no SSDs,
XFS for the brick filesystem.
On Wed, Nov 9, 2016 at 8:28 PM, Alastair Neil wrote:
> Serkan
>
> I'd be interested to know how your disks are attached (SAS?)? Do you use
> any hardware RAID, or zfs and do you
Serkan,
I'd be interested to know how your disks are attached (SAS?). Do you use
any hardware RAID or ZFS, and do you have any SSDs in there?
On 9 November 2016 at 06:17, Serkan Çoban wrote:
> Hi, I am using 26x8TB disks per server. There are 60 servers in gluster
>
Hi, I am using 26x8TB disks per server. There are 60 servers in the gluster cluster.
Each disk is a brick and the configuration is 16+4 EC, a 9PB single volume.
Clients are using FUSE mounts.
Even with 1-2K files in a directory, ls from clients takes ~60 secs.
So if you are sensitive to metadata operations,
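The message is cut off above; purely as an illustrative sketch (server and brick names are hypothetical, not the poster's actual layout), a 16+4 dispersed (erasure-coded) volume is created with bricks in multiples of 20, for example:

    gluster volume create bigvol disperse 20 redundancy 4 \
        server{01..20}:/bricks/disk01
    gluster volume start bigvol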
Could anyone reply on this?
On Wed, Nov 9, 2016 at 11:08 AM, ABHISHEK PALIWAL wrote:
> Hi,
>
> We can see that syncing of the GlusterFS bricks is failing due to the
> error "Transport endpoint is not connected"
>
> [2016-10-31 04:06:03.627395] E [MSGID:
Since you said you want to have 3 or 4 replicas, I would use the ZFS
knowledge and build 1 zpool per node with whatever config you know is
fastest on this kind of hardware and as safe as you need (stripe,
mirror, raidz1..3 - resilvering ZFS is faster than healing Gluster, I
think). 1 node -> 1
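The advice is cut off above; as a minimal sketch of the "one zpool per node, used as a Gluster brick" idea (device, pool, dataset and volume names are hypothetical, and the vdev layout is just one of the options mentioned):

    # One pool per node; pick mirror/raidz1..3 to suit your hardware
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    zfs create -o mountpoint=/bricks/brick1 tank/brick1
    # The dataset then serves as the brick, e.g.:
    # gluster volume create myvol replica 3 node{1..3}:/bricks/brick1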