I have been using Gluster since the 3.3.x days and have been burned a few times; if it
were not for the help of the community (one specific time I was saved big by Joe
Julian), I would not have continued using it.
My main use was originally as a self-hosted engine via NFS and a file
server for Windows clients with
Hey Erik,
I actually meant that there is no point in using controllers with fast
storage like SAS SSDs or NVMe drives.
They (the controllers) usually have 1-2 GB of RAM to buffer writes until the
RISC processor analyzes the requests and stacks them - thus JBOD (in 'replica
3') makes much more
> For NVMe/SSD - a RAID controller is pointless, so JBOD makes the most sense.
I am game for an education lesson here. We're still using spinning drives
with big RAID caches, but we keep discussing SSDs in the context of RAID. I
have read that for many real-world workloads, RAID0 makes no sense with
modern
We had a distributed replicated volume of 3 x 7 HDDs. The volume was used
for a small-file workload with heavy IO. We decided to replace the
bricks with SSDs because of IO saturation on the disks, so we started by
swapping the bricks one by one, and the fun started: some files lost their
attributes
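A one-by-one swap like the one described above usually boils down to a
replace-brick per brick, waiting for self-heal to finish before touching the
next one. A minimal Python sketch of that loop; the volume name, hosts and
brick paths are placeholders, not the setup from this thread:

import subprocess
import time

VOLUME = "vol0"  # placeholder volume name
SWAPS = [
    # (old HDD brick, new SSD brick) - placeholder hosts and paths
    ("node1:/bricks/hdd1/brick", "node1:/bricks/ssd1/brick"),
    ("node2:/bricks/hdd1/brick", "node2:/bricks/ssd1/brick"),
]

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

for old, new in SWAPS:
    # Swap one brick; Gluster then heals the new brick from the other replicas.
    run(["gluster", "volume", "replace-brick", VOLUME, old, new, "commit", "force"])
    # Do not touch the next brick until "heal info" shows no pending entries.
    while True:
        out = run(["gluster", "volume", "heal", VOLUME, "info"])
        pending = [l for l in out.splitlines()
                   if l.startswith("Number of entries:") and not l.endswith(": 0")]
        if not pending:
            break
        time.sleep(60)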
Hi Hubert,
keep in mind RH recommends disks of size 2-3 TB, not 10. I guess that has
changed the situation.
For NVMe/SSD - a RAID controller is pointless, so JBOD makes the most sense.
Best Regards,
Strahil Nikolov
On 22 June 2020 at 07:58:56 GMT+03:00, Hu Bert wrote:
>On Sun., 21 June 2020 at
On Sun., 21 June 2020 at 19:43, Gionatan Danti wrote:
> For the RAID6/10 setup, I found no issues: simply replace the broken
> disk without involving Gluster at all. However, this also means facing
> the "iops wall" I described earlier for a single-brick node. Going
> full-Gluster with JBODs
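On the RAID side of that comparison, "replace the broken disk without
involving Gluster" might look like the following under Linux md software RAID
(hardware controllers have their own tools); the array and device names are
placeholders:

import subprocess

ARRAY, FAILED, SPARE = "/dev/md0", "/dev/sdb1", "/dev/sdc1"  # placeholders

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["mdadm", "--manage", ARRAY, "--fail", FAILED])    # mark the dying member failed
run(["mdadm", "--manage", ARRAY, "--remove", FAILED])  # remove it from the array
run(["mdadm", "--manage", ARRAY, "--add", SPARE])      # add the replacement; rebuild starts

# The rebuild happens below the filesystem, so the brick and the Gluster
# volume stay online the whole time; watch progress here.
print(open("/proc/mdstat").read())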
On 2020-06-22 06:58, Hu Bert wrote:
On Sun., 21 June 2020 at 19:43, Gionatan Danti wrote:
For the RAID6/10 setup, I found no issues: simply replace the broken
disk without involving Gluster at all. However, this also means facing
the "iops wall" I described earlier for a single-brick
On 2020-06-21 20:41, Mahdi Adnan wrote:
Hello Gionatan,
Using a Gluster brick in a RAID configuration might be safer and
require less work from Gluster admins, but it is a waste of disk
space.
Gluster bricks are replicated "assuming you're creating a
distributed-replica volume", so when a brick
Hello Gionatan,
Using a Gluster brick in a RAID configuration might be safer and require
less work from Gluster admins, but it is a waste of disk space.
Gluster bricks are replicated "assuming you're creating a
distributed-replica volume", so when a brick goes down, it should be easy to
recover it
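That recovery path is mostly a matter of getting the brick process back and
letting self-heal copy the missing data from the surviving replicas. A small
sketch, with "vol0" as a placeholder volume name:

import subprocess

VOLUME = "vol0"  # placeholder

# "start force" restarts any brick processes that are not currently running.
subprocess.run(["gluster", "volume", "start", VOLUME, "force"], check=True)

# Queue a full self-heal, then check that the pending-entry counts drop to zero.
subprocess.run(["gluster", "volume", "heal", VOLUME, "full"], check=True)
subprocess.run(["gluster", "volume", "heal", VOLUME, "info"], check=True)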
On 2020-06-21 14:20, Strahil Nikolov wrote:
With every community project, you are in the position of a Beta
Tester - no matter Fedora, Gluster or CEPH. So far, I have had
issues with upstream projects only during and immediately after
patching - but this is properly mitigated with a
On 21 June 2020 at 10:53:10 GMT+03:00, Gionatan Danti wrote:
>On 2020-06-21 01:26, Strahil Nikolov wrote:
>> The efforts are far less than reconstructing the disk of a VM from
>> CEPH. In Gluster, just run a find on the brick searching for the
>> name of the VM disk and you will find
I agree with this assessment for the most part. I'll just add that,
during development of Gluster-based solutions, we had internal use of
Red Hat Gluster. This was over a year and a half ago when we started.
For my perhaps non-mainstream use cases, I found the latest versions of
Gluster 7 actually
On 2020-06-21 01:26, Strahil Nikolov wrote:
The efforts are far less than reconstructing the disk of a VM from
CEPH. In Gluster, just run a find on the brick searching for the
name of the VM disk and you will find the VM_IMAGE.xyz (where xyz is
just a number) and then concatenate the
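A rough Python sketch of that find-and-concatenate approach. It assumes the
volume uses the shard feature (shards named after the base file's gfid under
/.shard is how the shard translator is generally understood to lay files out),
that the brick path and image name below are placeholders, and that every
shard of the file is present on this brick; on a distributed volume the shards
may be spread across bricks, and missing shards that represent holes would
need zero-filling:

import os
import shutil

BRICK = "/bricks/brick1"        # placeholder brick mount point
IMAGE = "vm_image.qcow2"        # placeholder VM disk name
OUT = "/root/recovered.qcow2"

# 1. Find the base file (it holds the first shard) somewhere under the brick.
base = None
for root, dirs, files in os.walk(BRICK):
    dirs[:] = [d for d in dirs if d not in (".glusterfs", ".shard")]
    if IMAGE in files:
        base = os.path.join(root, IMAGE)
        break
assert base, "base file not found on this brick"

# 2. The trusted.gfid xattr of the base file names the remaining shards,
#    which live in <brick>/.shard as <gfid>.1, <gfid>.2, ...
raw = os.getxattr(base, "trusted.gfid").hex()
gfid = "-".join([raw[0:8], raw[8:12], raw[12:16], raw[16:20], raw[20:32]])

shard_dir = os.path.join(BRICK, ".shard")
shards = sorted((f for f in os.listdir(shard_dir) if f.startswith(gfid + ".")),
                key=lambda f: int(f.rsplit(".", 1)[1]))

# 3. Concatenate the base file plus the shards, in order, into the new image.
with open(OUT, "wb") as out:
    for path in [base] + [os.path.join(shard_dir, f) for f in shards]:
        with open(path, "rb") as part:
            shutil.copyfileobj(part, out)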
On 20 June 2020 at 19:08:49 GMT+03:00, Gionatan Danti wrote:
>DISCLAIMER: I *really* appreciate this project and I thank all people
>involved.
>
>On 2020-06-19 21:33, Mahdi Adnan wrote:
>> The strength of Gluster, in my opinion, is the simplicity of creating
>> distributed volumes that
DISCLAIMER: I *really* appreciate this project and I thank all people
involved.
On 2020-06-19 21:33, Mahdi Adnan wrote:
The strength of Gluster, in my opinion, is the simplicity of creating
distributed volumes that can be consumed by different clients, and
this is why we chose Gluster
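That simplicity amounts to one create command, one start, and a mount on any
client. A minimal sketch with placeholder host names, brick paths and volume
name; it assumes the peers are already probed into one trusted pool:

import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

VOLUME = "vol0"  # placeholder
# 2 x 3 bricks: each group of three consecutive bricks becomes one replica set.
BRICKS = [f"node{n}:/bricks/brick{b}/data" for b in (1, 2) for n in (1, 2, 3)]

run(["gluster", "volume", "create", VOLUME, "replica", "3"] + BRICKS)
run(["gluster", "volume", "start", VOLUME])

# Any client with the GlusterFS FUSE client installed can now mount it.
run(["mount", "-t", "glusterfs", "node1:/" + VOLUME, "/mnt/gluster"])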
My concern with the Glusterd2 deprecation is that it tried to implement and fix
several things that we need in Gluster, and the promised features were not
carried over into Gluster afterward (better logging, Journal Based Replication).
We're running both Ceph and Gluster, while both solutions are great in
On 6/18/20 12:41 PM, Stephan von Krawczynski wrote:
Top Poster ;-)
And in fact, it's not true. The clear message to me once was: we are not able
to make a kernel version.
Which I understood as: we do not have the knowledge to do that.
Since that was quite some time before Red Hat stepped in there
On Thu, 18 Jun 2020 13:27:19 -0400
Alvin Starr wrote:
> > [me]
> This is an amazingly unreasonable comment.
> First off ALL distributed file systems are slower than non-distributed
> file systems.
Obviously you fail to understand my point: the design of glusterfs implies
that it can be as
On 18.06.2020 20:41, Stephan von Krawczynski wrote:
Since 2009, when I entered the list, there was not a single month in which
there were no complaints about gluster being slow.
It is slow not because the client works in userspace; we have no problems
here with CPU load caused by context switching,
On Thu, 18 Jun 2020 07:40:36 -0700
Joe Julian wrote:
> You're still here and still hurt about that? It was never intended to be in
> kernel. It was always intended to run in userspace. After all these years I
> thought you'd be over that by now.
Top Poster ;-)
And in fact, it's not true. The
You're still here and still hurt about that? It was never intended to be in
kernel. It was always intended to run in userspace. After all these years I
thought you'd be over that by now.
On June 18, 2020 1:54:18 AM PDT, Stephan von Krawczynski wrote:
>On Wed, 17 Jun 2020 00:06:33 +0300
>Mahdi
On 18.06.2020 13:55, Stephan von Krawczynski wrote:
On Thu, 18 Jun 2020 13:06:51 +0400
Dmitry Melekhov wrote:
On 18.06.2020 12:54, Stephan von Krawczynski wrote:
_FS IN USERSPACE IS SH*T_ - understand that.
we use qemu and it uses gfapi... :-)
And exactly this kind of "insight" is the basis of my
On Thu, 18 Jun 2020 13:06:51 +0400
Dmitry Melekhov wrote:
> 18.06.2020 12:54, Stephan von Krawczynski пишет:
> >
> > _FS IN USERSPACE IS SH*T_ - understand that.
> >
>
> we use qemu and it uses gfapi... :-)
And exactly this kind of "insight" is the basis of my criticism. gfapi is _userspace_
on
On 18.06.2020 12:54, Stephan von Krawczynski wrote:
_FS IN USERSPACE IS SH*T_ - understand that.
we use qemu and it uses gfapi... :-)
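That qemu-over-gfapi path can be exercised with qemu-img alone, assuming a
qemu build with Gluster support; the host, volume and image name below are
placeholders. The image is reached through libgfapi in userspace, with no
FUSE mount in the path:

import subprocess

# gluster://<host>/<volume>/<path> is handled by qemu's gluster block driver.
url = "gluster://node1/vol0/vm-disk.qcow2"  # placeholders

# Create a 20G qcow2 image directly on the volume, then read its header back.
subprocess.run(["qemu-img", "create", "-f", "qcow2", url, "20G"], check=True)
subprocess.run(["qemu-img", "info", url], check=True)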
On Wed, 17 Jun 2020 00:06:33 +0300
Mahdi Adnan wrote:
> [gluster going down ]
I have been following this project for quite some years now, probably longer than
most of the people nowadays on the list. The project started with the
brilliant idea of making a fs on top of classical fs's distributed over
On 6/17/20 6:19 AM, Dmitry Melekhov wrote:
On 17.06.2020 01:06, Mahdi Adnan wrote:
Hello,
I'm wondering what the current and future plans for the Gluster project are
overall. I see that the project is not as busy as it was before (at least
this is what I'm seeing); there are fewer blogs about
> ...one by one and some minutes later the Ceph cluster is clean again.
>>
>> If others have more insights I'd be very happy to hear them.
>>
>> Stefan
>>
>>
>> ----- Original Message -----
>> > Date: Tue, 16 Jun 2020 20:30:34 -0700
>> > From: Artem
> ...with Ceph we can achieve
> 100% uptime, we regularly reboot our hosts one by one and some minutes later
> the Ceph cluster is clean again.
>
> If others have more insights I'd be very happy to hear them.
>
> Stefan
>
>
> ----- Original Message -----
> > Date: Tue,
We never ran tests with Ceph mostly due to time constraints in
engineering. We also liked that, at least when I started as a novice,
gluster seemed easier to set up. We use the solution in automated
setup scripts for maintaining very large clusters. Simplicity in
automated setup is critical here
[Gluster-users] State of Gluster project
Hi Mahdi,
I am writing my views as an external contributor outside of Red Hat.
Hopefully, someone from Red Hat will respond with their focus/roadmap on
Gluster development.
Recently Amar wrote about the focus of the GlusterFS core
team (https://www.gluster.org
The note from Mahdi raises some key aspects by which our community
evaluates the project, and we should consider those as important
indicators.
The essence of what was put together at
https://www.gluster.org/building-a-longer-term-focus-for-gluster/
continues to hold true and a summary of the
On 17.06.2020 01:06, Mahdi Adnan wrote:
Hello,
I'm wondering what the current and future plans for the Gluster project are
overall. I see that the project is not as busy as it was before (at least
this is what I'm seeing); there are fewer blogs about what the roadmap or
future plans of the
Hi Mahdi,
On Wed, Jun 17, 2020 at 5:02 AM Strahil Nikolov wrote:
> Hey Mahdi,
>
> For me it looks like Red Hat is focusing more on CEPH than on Gluster.
> I hope the project remains active, because it's very difficult to find
> software-defined storage as easy and as scalable as Gluster.
>
>
Hey Mahdi,
For me it looks like Red Hat is focusing more on CEPH than on Gluster.
I hope the project remains active, because it's very difficult to find
software-defined storage as easy and as scalable as Gluster.
Best Regards,
Strahil Nikolov
On 17 June 2020 at 0:06:33 GMT+03:00, Mahdi
Has anyone tried to pit Ceph against gluster? I'm curious what the ups and
downs are.
On Tue, Jun 16, 2020, 4:32 PM Strahil Nikolov wrote:
> Hey Mahdi,
>
> For me it looks like Red Hat is focusing more on CEPH than on Gluster.
> I hope the project remains active, because it's very difficult to
Hello,
I'm wondering what the current and future plans for the Gluster project are
overall. I see that the project is not as busy as it was before (at least
this is what I'm seeing); there are fewer blogs about what the roadmap or
future plans of the project are, the deprecation of Glusterd2, even Red