performance.
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
> On Tue, 12 Dec 2023 at 13:03, Danny
> wrote:
>
>> MTU is already 9000, and as you can see from the IPERF results, I've got
>> a nice, fast connection.
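A minimal sketch of how a jumbo-frame and throughput claim like the one above is usually verified; node1/node2 are placeholder hostnames, not hosts from this thread:

    # Confirm a 9000-byte MTU end to end: 8972 = 9000 minus 28 bytes of IP/ICMP headers
    ping -M do -s 8972 node2
    # Raw TCP throughput between the peers (assumes iperf3 is installed on both)
    iperf3 -s                     # on node2
    iperf3 -c node2 -P 4 -t 30    # on node1: four parallel streams for 30 seconds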
>
>
>
> Best Regards,
> Strahil Nikolov
>
> On Monday, December 11, 2023, 3:32 PM, Danny
> wrote:
>
> Hello list, I'm hoping someone can let me know what setting I missed.
>
> Hardware:
> Dell R650 servers, Dual 24 Core Xeon 2.8 GHz, 1 TB RAM
> 8x SSDs, Negotiated Speed 12 Gbps
for your
> replica volume.
>
> Best regards!
>
> Ramon
>
>
> On 12/12/23 at 19:10, Danny wrote:
>
> Sorry, I noticed that too after I posted, so I instantly upgraded to 10.
> Issue remains.
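For a performance complaint like this one, a rough first-pass sketch of gathering evidence with the stock Gluster CLI; VOLNAME is a placeholder, and none of these are a confirmed fix for the problem reported here:

    gluster volume get VOLNAME all | grep -E 'event-threads|io-threads|write-behind'
    gluster volume profile VOLNAME start
    # ... run the slow workload, then:
    gluster volume profile VOLNAME info      # per-brick latency and fop statistics
    gluster volume profile VOLNAME stop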
>
> On Tue, Dec 12, 2023 at 1:09 PM Gilberto Ferreira <
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/
>
> I know this doc is somewhat out of date, but it could be a hint.
>
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
> On Tue, 12 Dec 2023 at
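The libgfapi document linked above is about bypassing the FUSE mount entirely. As an illustration only (assuming a QEMU build with GlusterFS support, and placeholder host/volume names), a VM disk image can be addressed directly over gfapi like this:

    # Create and inspect a disk image on volume "datavol" via libgfapi, no FUSE mount involved
    qemu-img create -f qcow2 gluster://node1/datavol/vm1.qcow2 50G
    qemu-img info gluster://node1/datavol/vm1.qcow2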
Hello list, I'm hoping someone can let me know what setting I missed.
Hardware:
Dell R650 servers, Dual 24 Core Xeon 2.8 GHz, 1 TB RAM
8x SSDs, Negotiated Speed 12 Gbps
PERC H755 Controller - RAID 6
Created virtual "data" disk from the above 8 SSD drives, for a ~20 TB
/dev/sdb
OS:
CentOS Stream
?
--Danny
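As context for the hardware above, here is the generic recipe from the Gluster quick-start docs for turning a disk like the ~20 TB /dev/sdb into a replica-3 volume; server and volume names are placeholders, not necessarily what was done here:

    mkfs.xfs -i size=512 /dev/sdb            # XFS with 512-byte inodes, as the docs recommend
    mkdir -p /data/brick1
    mount /dev/sdb /data/brick1
    gluster peer probe server2 && gluster peer probe server3    # run from server1
    gluster volume create gv0 replica 3 \
        server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
    gluster volume start gv0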
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
remember which case is
which by remembering that the capital letter shows up when it's probably an
error (execute should generally be set). :)
--Danny
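Assuming the fragment above is describing the lowercase/uppercase convention for the setuid/setgid/sticky bits in ls -l output (plain ls(1) behaviour, not anything Gluster-specific), a quick illustration:

    touch f
    chmod 4755 f; ls -l f    # -rwsr-xr-x  lowercase 's': setuid AND execute are set
    chmod 4644 f; ls -l f    # -rwSr--r--  capital 'S': setuid WITHOUT execute, usually a mistake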
Original message
From: Khoi Mai <khoi...@up.com>
Date: 2014/02/12 10:27 AM (GMT-06:00)
To: gluster-users@gluster.org
Subject:
figure
out a solution.
--Danny
Steve Dainard sdain...@miovision.com wrote:
Hi Danny,
Did you get anywhere with this geo-rep issue? I have a similar problem running
on CentOS 6.5 when trying anything other than 'start' with geo-rep.
Thanks,
Steve
On Tue, Feb 25, 2014 at 9:45 AM, Danny
volume. The
documentation isn't quite complete on the new geo-replication, and I haven't
quite gotten a handle on the source code to just figure it out yet. Has the
syntax changed in a way that I'm not properly guessing, or is this no longer
supported?
Thanks,
Danny
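For reference, a sketch of the distributed geo-replication syntax as documented around the 3.5-era releases; mastervol, slavehost and slavevol are placeholders, and this is the documented form rather than a confirmed answer to the question above:

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status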
Hello,
We are currently figuring out how to add GlusterFS to our system to make
our systems highly available using scripts. We are using Gluster 3.7.11.
Problem:
Trying to migrate to GlusterFS from a non-clustered system to a 3-node
glusterfs replicated cluster using scripts. Tried various
<hei...@fh-lausitz.de> wrote:
> Am Di, 21.06.2016, 19:22 schrieb Danny Lee:
> > Hello,
> >
> >
> > We are currently figuring out how to add GlusterFS to our system to make
> > our systems highly available using scripts. We are using Gluster 3.7.11.
> >
>
handle_status_volume] 0-management:
Received status volume req for volume volname" repeated 8 times between
[2016-06-26 11:35:13.583212] and [2016-06-26 11:35:14.358853]
On Sat, Jun 25, 2016 at 11:17 AM, Joe Julian <j...@julianfamily.org> wrote:
> Notice it actually tells you t
thing I noticed was that the notes
> states "need
> > to be defined in the /etc/hosts". Would using the IP address directly be
> a
> > problem?
> >
> > On Tue, Jun 21, 2016 at 2:10 PM, Heiko L. <hei...@fh-lausitz.de> wrote:
> >
> >> Am
n't output anything.
On Sun, Jun 26, 2016 at 2:02 PM, Danny Lee <dan...@vt.edu> wrote:
> Originally, I ran "sudo gluster volume heal appian full" on server-ip-1
> and then tailed the logs for all of the servers. The only thing that
> showed up was the logs for serve
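Alongside a full heal like the one described above, the heal status is usually checked with commands along these lines; "appian" is the volume name already mentioned in the thread:

    gluster volume heal appian info                      # entries still pending heal, per brick
    gluster volume heal appian info split-brain          # entries in split-brain, if any
    gluster volume heal appian statistics heal-count     # pending-heal counters per brick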
Hi,
I have a 3-node replicated cluster using the native glusterfs mount, and
through some heavy IO load, the gluster logs show that one of the clients
(Client A) disconnected from one of the bricks (Brick 1) because of a 42
second ping timeout.
After waiting two hours, Client A never reconnected
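The 42 seconds mentioned above is Gluster's default network.ping-timeout. A small sketch of inspecting or setting it (VOLNAME is a placeholder; tuning it does not by itself explain the failure to reconnect):

    gluster volume get VOLNAME network.ping-timeout      # defaults to 42 seconds
    gluster volume set VOLNAME network.ping-timeout 42
    gluster volume status VOLNAME clients                # which clients each brick currently sees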
Hi,
Environment:
Gluster Version: 3.8.3
Operating System: CentOS Linux 7 (Core)
Kernel: Linux 3.10.0-327.28.3.el7.x86_64
Architecture: x86-64
Replicated 3-Node Volume
~400GB of around a million files
Description of Problem:
One of the bricks dies. The only suspect log I see is in the
dd the old brick back to get back to a
normal working state? If I do this migration again I'd probably just
do a direct dd of the ext4 file system onto the new mount while the
brick was offline.
Cheers,
Danny
Danny Webb
Senior Linux and Virtualisation Engineer
The Hut Group<http://www.thehutgro
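For the question above about putting the old brick back, one way this is commonly handled is the reset-brick command available in modern releases; VOLNAME, host and brick path below are placeholders, not values from this thread:

    gluster volume reset-brick VOLNAME server1:/data/brick1/gv0 start
    # repair or recreate the underlying filesystem, then reuse the same path:
    gluster volume reset-brick VOLNAME server1:/data/brick1/gv0 server1:/data/brick1/gv0 commit force
    gluster volume heal VOLNAME full        # let self-heal repopulate the brick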
r.org/#/c/glusterfs/+/21380/
>
> On 10/10/2018 05:20 PM, Danny Lee wrote:
>
> Great news! Awesome job, Pranith!
>
> Do you have a link to the patch? I tried looking it up but had no luck.
>
> On Wed, Oct 10, 2018, 6:02 AM Dmitry Melekhov wrote:
>
>> 10.10.2018 13:58
Great news! Awesome job, Pranith!
Do you have a link to the patch? I tried looking it up but had no luck.
On Wed, Oct 10, 2018, 6:02 AM Dmitry Melekhov wrote:
> > 10.10.2018 13:58, Ravishankar N wrote:
> > Hi,
> > Sorry for the delay, should have gotten to this earlier. We uncovered
> > the
Ran into this issue too on 4.1.5 with an arbiter setup. Also could not
run a statedump due to "Segmentation fault".
Tried with 3.12.13 and had issues with locked files as well. We were able
to do a statedump and found that some of our files were "BLOCKED"
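For context, a statedump is normally taken and searched for blocked locks like the ones described above along these lines; VOLNAME is a placeholder and the dump directory assumes the default server.statedump-path:

    gluster volume statedump VOLNAME all
    ls /var/run/gluster/*.dump.*                  # one dump file per brick process
    grep -n 'BLOCKED' /var/run/gluster/*.dump.*   # blocked inode/entry locks, with owners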
Alright. Built and installed origin/release-6 with the cherry-pick
mentioned by Amar and it looks good. No error logs.
On Tue, Mar 3, 2020, 11:54 AM Danny Lee wrote:
> Tried building off of release-6 with the cherry-pick, but reached a
> blocking point due to the bug with Bugzilla ID 1683574.
Tried building off of release-6 with the cherry-pick, but reached a
blocking point due to the bug with Bugzilla ID 1683574. So I couldn't test it.
On Tue, Mar 3, 2020, 11:04 AM Danny Lee wrote:
> I tested a cluster with version 6.7. It's still happening. I was able to
> reproduce the log b
I tested a cluster with version 6.7. It's still happening. I was able to
reproduce the log by just copying a large file and then interrupting it
(Ctrl+C). I guess something in my application is interrupting file io
On Tue, Mar 3, 2020, 3:47 AM Strahil Nikolov wrote:
> On March 3, 2020 7:27:03
This was happening for us on our 3-node replicated server. In one day,
the log grew to 3 GB; over a week, it reached more than 15 GB.
Our gluster version is 6.5.
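A sketch of the log-level knobs usually checked when logs grow at the rate described above; VOLNAME is a placeholder, and lowering the level only quiets the logging, it does not fix whatever is emitting the messages:

    gluster volume get VOLNAME diagnostics.client-log-level
    gluster volume set VOLNAME diagnostics.client-log-level WARNING
    gluster volume set VOLNAME diagnostics.brick-log-level WARNING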
On Mon, Mar 2, 2020, 5:26 PM Strahil Nikolov wrote:
> Hi Felix,
>
> can you test /on non-prod system/ the latest minor version of