-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable
Network: 1 gbit/s
Filesystem: XFS
Best Regards,
Strahil Nikolov
er dev
teams.
Best Regards,
Strahil Nikolov
in order to fix the situation as the
fuse client is the best way to use glusterfs, and it seems the glusterfs-server
is not the guilty one.
Thanks in advance for your guidance. I have learned so much.
Best Regards,
Strahil Nikolov
From: Strahil
To: Amar Tumballi Suryanarayan
Cc: Gluster
t2 are on /dev/gluster_vg_ssd/gluster_lv_engine
, while arbiter is on /dev/gluster_vg_sda3/gluster_lv_engine.
Is that the issue? Should I rename my brick's VG? If so, why is there no
mention of it in the documentation?
Best Regards,
Strahil Nikolov
0.16 1.58
As you can see, all bricks are thin LVs and space is not the issue.
Can someone hint me how to enable debug logging, so the gluster logs can show the reason
for that pre-check failure?
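If it helps, the per-volume log levels can usually be raised while reproducing the failure; a hedged sketch (the volume name 'myvol' is made up, the options are the standard diagnostics ones):
gluster volume set myvol diagnostics.brick-log-level DEBUG
gluster volume set myvol diagnostics.client-log-level DEBUG
# set both back to INFO once the snapshot pre-check failure has been captured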
Best Regards,
Strahil Nikolov
On Wednesday, April 10, 2019, 9:05:15 GMT-4, Rafi Kavungal Chun
I hope this is the last update on the issue -> opened a bug
https://bugzilla.redhat.com/show_bug.cgi?id=1699309
Best regards,
Strahil Nikolov
On Friday, April 12, 2019, 7:32:41 GMT-4, Strahil Nikolov
wrote:
Hi All,
I have tested gluster snapshot without systemd.automo
It seems that I got confused. So you see the files on the bricks (servers),
but not when you mount glusterfs on the clients?
If so, this is not the sharding feature, as it works the opposite way.
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019, 0:35:04 GMT+3, Paul van der
of the
NFS locks, so no disruption will be felt by the clients.
Still, this will be a lot of work to achieve.
Best Regards,
Strahil Nikolov
On Apr 30, 2019 15:19, Jim Kinney wrote:
>
> +1!
> I'm using nfs-ganesha in my next upgrade so my client systems can use NFS
> instead of
cluster.enable-shared-storage: enable
Any issues expected when downgrading the version ?
Best Regards,
Strahil Nikolov
On Monday, April 22, 2019, 0:26:51 GMT-4, Strahil
wrote:
Hello Community,
I have been left with the impression that FUSE mounts will read from both local
in advance for your response.
Best Regards,
Strahil Nikolov
It seems that whenever I reboot a gluster node, I get this problem - so it's
not an arbiter issue. Obviously there is something wrong with v6.6, as I never
had such issues with v6.5.
Any ideas where I should start digging?
Best Regards,
Strahil Nikolov
On Wednesday, November 13, 2019, 22:23:38
and you can keep the
number of servers the same ... still the server bandwidth will be a limit at
some point.
I'm not sure how other SDS deal with such elasticity. I guess many users on
the list will hate me for saying this, but have you checked CEPH for your
needs?
Best Regards,
Strahil Nikolov
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
[Install]
RequiredBy=shutdown.target
Of course systemd has to be reloaded :)
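For completeness, a minimal sketch of how such a unit could look and be enabled - the unit name is made up; the script path and the [Install] section are the ones quoted above:
cat > /etc/systemd/system/gluster-stop-all.service <<'EOF'
[Unit]
Description=Stop remaining Gluster processes at shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

[Install]
RequiredBy=shutdown.target
EOF
systemctl daemon-reload
systemctl enable gluster-stop-all.service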
Best Regards,
Strahil Nikolov
On Wednesday, November 27, 2019, 8:07:52 GMT-5, Sankarshan Mukhopadhyay
wrote:
On Wed, Nov 27, 2019 at 6:10 PM Rav
.
Best Regards,
Strahil Nikolov
On Thursday, December 19, 2019, 02:28:55 GMT+2, David Cunningham
wrote:
Hi Raghavendra and Strahil,
We are using GFS version 5.6-1.el7 from the CentOS repository. Unfortunately we
can't modify the application and it expects to read and write
fore the vendor provided a new patch), so the issues in Gluster are nothing
new, and we should not forget that Gluster is free (and doesn't cost millions
like some arrays).
The only mitigation is to thoroughly test each patch on a cluster that provides
storage for your dev/test clients.
I hope you
ry data.
Another way to migrate the data is to (a rough command sketch follows the list):
1. Add the new disks on the old srv1,2,3
2. Add the new disks to the VG
3. pvmove all LVs to the new disks (I prefer to use the '--atomic' option)
4. vgreduce with the old disks
5. pvremove the old disks
6. Then just delete the block devices from the
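A hedged sketch of steps 1-5 with the corresponding LVM commands (device and VG names are made up for illustration):
pvcreate /dev/sdX                      # new disk added to the server
vgextend gluster_vg /dev/sdX           # add the new PV to the existing VG
pvmove --atomic /dev/sdOLD /dev/sdX    # move all extents off the old PV
vgreduce gluster_vg /dev/sdOLD         # drop the old PV from the VG
pvremove /dev/sdOLD                    # wipe the LVM label from the old disk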
Hi Felix,
can you test /on non-prod system/ the latest minor version of gluster v6 ?
Best Regards,
Strahil Nikolov
On Monday, March 2, 2020, 21:43:48 GMT+2, Felix Kölzow
wrote:
Dear Community,
this message appears for me too on GlusterFS 6.0.
Before that, we had GlusterFS
ransport-type: tcp
>> >>>> Bricks:
>> >>>> Brick1: myhost:/nodirectwritedata/gluster/gvol0
>> >>>> Options Reconfigured:
>> >>>> transport.address-family: inet
>> >>>> nfs.disable: on
>> >>
ix is added while adding public key to remote
>>>> node’s authorized_keys file, So that if anyone gain access using
>this key
>>>> can access only gsyncd command.
>>>>
>>>> ```
>>>> command=gsyncd ssh-key….
>>>> ```
>>>>
>>>>
>>>>
>>>> Thanks for your help.
>>>>
>>>> --
>>>> David Cunningham, Voisonics Limited
>>>> http://voisonics.com/
>>>> USA: +1 213 221 1092
>>>> New Zealand: +64 (0)28 2558 3782
>>>>
>>>>
>>>>
>>>>
>>>> Community Meeting Calendar:
>>>>
>>>> Schedule -
>>>> Every Tuesday at 14:30 IST / 09:00 UTC
>>>> Bridge: https://bluejeans.com/441850968
>>>>
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>>
>>>>
>>>> —
>>>> regards
>>>> Aravinda Vishwanathapura
>>>> https://kadalu.io
>>>>
>>>>
>>>
>>> --
>>> David Cunningham, Voisonics Limited
>>> http://voisonics.com/
>>> USA: +1 213 221 1092
>>> New Zealand: +64 (0)28 2558 3782
>>>
>>
>>
>> --
>> David Cunningham, Voisonics Limited
>> http://voisonics.com/
>> USA: +1 213 221 1092
>> New Zealand: +64 (0)28 2558 3782
>>
>>
>>
Hey David,
Why don't you set the B cluster's hostnames in /etc/hosts of all A cluster
nodes ?
Maybe you won't need to rebuild the whole B cluster.
I guess the A cluster nodes need to be able to reach all nodes of the B cluster,
so you might need to change the firewall settings.
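For illustration, the /etc/hosts entries on every A-cluster node could look roughly like this (addresses and hostnames are made up):
192.0.2.11  b-node1.example.com  b-node1
192.0.2.12  b-node2.example.com  b-node2
192.0.2.13  b-node3.example.com  b-node3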
Best Regards,
Strahil Nikolov
emains in v6.0. Actually, we do not have a non-prod gluster system, so
>it will take some time
>
>to do this.
>
>Regards,
>
>Felix
>
>
>On 02/03/2020 23:25, Strahil Nikolov wrote:
>> Hi Felix,
>>
>> can you test /on non-prod system/ the latest minor
Gluster-users@gluster.org mailto:Gluster-users@gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
>> >
>> > >
>>
>>
>> Met vriendelijke groet, With kind regards,
>>
>> Jorick Astrego
>>
f anyone gain access using
>this key
>>> can access only gsyncd command.
>>>
>>> ```
>>> command=gsyncd ssh-key….
>>> ```
>>>
>>>
>>>
>>> Thanks for your help.
>>>
>>> --
>>> David C
ith following
>> parameters.
>> >
>> > net.ipv6.conf.all.disable_ipv6 = 1
>> > net.ipv6.conf.default.disable_ipv6 = 1
>> >
>> > That did not help.
>> >
>> > Volumes are configured with inet.
>> >
>> > sudo gluster volum
Hello Community,
I am experiencing the ACL issue again, and none of the previously stated fixes
are helping.
Bug report -> https://bugzilla.redhat.com/show_bug.cgi?id=1797099
Any ideas would be helpful.
Best Regards,
Strahil Nikolov
ome other things should be different.
>>
>>
>> Greetings,
>>
>> Paolo
>>
>> On 06/02/20 23:30, Christian Reiss wrote:
>>> Hey,
>>>
>>> I hit this bug, too. With disastrous results.
>>> I second this post.
>>
e 4.2 and there were 2 older VM's which had snapshots
>from
>prior versions, while the leaf was in compatibility level 4.2. note;
>the
>backup was taken on the engine running 4.3.
>
>Thanks Olaf
>
>
>
>On Tue, Jan 28, 2020 at 17:31, Strahil Nikolov
>wrote:
>
>> On
Hi Ravi,
This is the third time an oVirt user (one of them is me, and I think my email is in the
list) has reported such an issue.
We need a thorough investigation, as this keeps recurring.
Best Regards,
Strahil Nikolov
mation will allow more experienced administrators and the developers
to identify any pattern that could cause the symptoms.
Tuning Gluster is one of the hardest topics, so you should prepare yourself
for a lot of testing until you reach the optimal settings for your volumes.
Best Regards,
Strahil Nikolov
On February 10, 2020 5:32:29 PM GMT+02:00, Matthias Schniedermeyer
wrote:
>On 10.02.20 16:21, Strahil Nikolov wrote:
>> On February 10, 2020 2:25:17 PM GMT+02:00, Matthias Schniedermeyer
> wrote:
>>> Hi
>>>
>>>
>>> I would describe our basic use
rsion of gluster are you using ?
In my case only a downgrade has restored the operation of the cluster, so you
should consider that as an option (last, but still an option).
You can try to run a find against the FUSE mountpoint: 'find /path/to/fuse -exec
setfacl -m u:root:rw {} \;'
Maybe that will force
2020-03-11 20:08:55.286410] I [master(worker
>/srv/media-storage):1441:process] _GMaster: Batch Completed
>changelog_end=1583917610 entry_stime=None changelog_start=1583917610
>stime=None duration=153.5185 num_changelogs=1 mode=xsync
>[2020-03-11 20:08:55.315442] I [master(worker
>/srv/media
_all_smalldom.m pe_PB.in
>PE_Data_Comparison_glider_sp011_smalldom.m pe_PB.log
>PE_Data_Comparison_glider_sp064_smalldom.m pe_PB_short.in
>PeManJob.log PlotJob
>
>mseas(DSMccfzR75deg_001b)% ls PeManJob
>PeManJob
>
>mseas(DSMccfzR75deg_001b)% ls
-node Active
>Hybrid Crawl N/A
>
>Any idea? please. Thank you.
Hi Etem,
Have you checked the logs on both source and destination? Maybe they can hint
at what the issue is.
Best Regards,
Strahil Nikolov
Email: pha...@mit.edu
>Center for Ocean Engineering Phone: (617) 253-6824
>Dept. of Mechanical Engineering    Fax: (617) 253-8125
>MIT, Room 5-213    http://web.mit.edu/phaley/www/
>77 Massachusetts Avenue
>Cambridge, MA 02139-4301
>
>
(-->
>> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ) 0-glusterfs-fuse:
>> writing to fuse device failed: No such file or directory
>> [2020-03-08 11:40:54.362173] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> 0-glusterfs-fuse: extended attribute not supported by
; Pid : 4650
>> File System : xfs
>> Device : /dev/sda
>> Mount Options : rw
>> Inode Size : 512
>> Disk Space Free : 325.3GB
>> Total Disk Space : 91.0TB
>> Inode Count : 692001992
M, Pat Haley wrote:
>>
>> Hi,
>>
>> I get the following
>>
>> [root@mseas-data2 bricks]# gluster volume get data-volume all | grep
>
>> cluster.min-free
>> cluster.min-free-disk 10%
>> cluster.min-free-inodes 5%
>>
>>
>> On 3
before,
>
>the performance is lower than the individual brick performance. Is this
>a normal behavior or
>
>or what can be done to improve the single client performance as pointed
>out in this case?
>
>
>Regards,
>
>Felix
>
>
>
>
>On 20/02/2020 22:26, Stra
xample
mounted on /gluster) from which you can create 6 directories. The arbiter stores
only metadata, so the SSD's random access performance will be the optimal
approach.
Something like:
arbiter:/gluster/data1
arbiter:/gluster/data2
arbiter:/gluster/data3
arbiter:/gluster/data4
arbiter:/gluster/data5
egards,
>Hubert
>
>On Sat, Apr 11, 2020 at 11:12 AM, Strahil Nikolov
>wrote:
>>
>> On April 11, 2020 8:40:47 AM GMT+03:00, Hu Bert
> wrote:
>> >Hi,
>> >
>> >so no one has seen the problem of disabled systemd units before?
>>
8
>>
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
>
>Community Meeting Calendar:
>
>Schedule -
>Every 2nd and 4th Tuesday at 14:30 IST / 09:0
known?
>>
>>
>> Regards,
>> Hubert
>
>
>
>
>Community Meeting Calendar:
>
>Schedule -
>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>Bridge: https://bluejeans.com/441850968
>
>Gluster-users mailing list
>Gluster-user
the data to
fresh volumes and everything is working.
Best Regards,
Strahil Nikolov
On Monday, April 20, 2020, 17:02:46 GMT+3, Rinku Kothiya
wrote:
Hi,
The Gluster community is pleased to announce the release of Gluster 7.5
(packages available at [1]).
Release notes
our user, or to use ACLs
(maybe with a find -exec).
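For the ACL route, a hedged example along the lines of the find/setfacl pattern mentioned elsewhere in this thread (user name and path are made up):
find /path/to/volume -exec setfacl -m u:appuser:rwX {} \;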
You still have the option of '0777', but then security will be just a word.
I think the first one is easier to implement.
Best Regards,
Strahil Nikolov
On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici
wrote:
>Dear All,
>
>some users tht use regularly our gluster file system are experiencing a
>strange error during attempting to remove a empty directory.
>All bricks are up and running, no perticular error has been detected,
>but they are
Take a look at Stefan Solbrig's e-mail
Best Regards,
Strahil Nikolov
On Wednesday, March 25, 2020, 22:55:23 GMT+2, Mauro Tridici
wrote:
Hi Strahil,
unfortunately, no process is holding file or directory.
Do you know if some other community user could help me?
Thank you,
Mauro
x 2 das oclab_prod 4096 Mar 25 10:02 .
drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..
Any other idea related this issue?
Many thanks,
Mauro
> On 25 Mar 2020, at 18:32, Strahil Nikolov wrote:
>
> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici
> wrote:
>> Dear All,
Community Meeting Calendar:
>
>Schedule -
>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>Bridge: https://bluejeans.com/441850968
>
>Gluster-users mailing list
>Gluster-users@gluster.org
>https://lists.gluster.org/mailman/listinfo/gluster-users
Everything was moved
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>Erik Jacobson
>Software Engineer
>
>erik.jacob...@hpe.com
>+1 612 851 0550 Office
>
>Eagan, MN
>hpe.com
>
>
hus a 'replica 3' volume or a 'replica 3 arbiter 1' volume should be used, and
a different set of options is needed (compared to other workloads).
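For reference, a hedged example of creating such a volume (hostnames and brick paths are made up):
gluster volume create myvol replica 3 arbiter 1 \
  node1:/bricks/b1/brick node2:/bricks/b1/brick node3:/bricks/arb1/brick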
Best Regards,
Strahil Nikolov
nch on which patch would have caused an
>increase
>in logs!
>
>-Amar
>
>
>>
>> On Sat, May 2, 2020, 12:47 AM Strahil Nikolov
>> wrote:
>>
>>> On May 1, 2020 8:03:50 PM GMT+03:00, Artem Russakovskii <
>>> archon...@gmail.com
opposite on
brick2 - then only the metadata at the arbiter level can show us which data is
good and which has to be fixed.
>
>On Sat, 25 Apr 2020 at 19:41, Strahil Nikolov
>wrote:
>
>> On April 25, 2020 9:00:30 AM GMT+03:00, David Cunningham <
>> dcunning...@voisonics.com
>Gluster-users mailing list
>Gluster-users@gluster.org
>https://lists.gluster.org/mailman/listinfo/gluster-users
An inode size of 1024 is the recommended one for Gluster used with OpenStack (Swift),
so it shouldn't have any issues.
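The inode size is set at mkfs time; a hedged example (device name is made up):
mkfs.xfs -f -i size=1024 /dev/gluster_vg/swift_brick_lv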
Best Regards,
Strahil Nikolov
; Could you try disabling syncing xattrs and check ?
>>
>> gluster vol geo-rep :: config sync-xattrs false
>>
>> On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov
>
>> wrote:
>>
>>> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu&
,
Strahil Nikolov
On Thursday, September 3, 2020, 10:11:42 GMT+3, Ward Poelmans
wrote:
Hi Strahil,
On 2/09/2020 21:30, Strahil Nikolov wrote:
> you shouldn't do that,as it is intentional - glusterd is just a management
> layer and you might need to restart it in order to recon
Regards,
Strahil Nikolov
On Sunday, August 23, 2020, 11:05:27 GMT+3, mabi
wrote:
Hello,
So to be precise I am exactly having the following issue:
https://github.com/gluster/glusterfs/issues/1332
I could not wait any longer to find some workarounds or quick fixes so I
decided
It seems that this time your email went to Yahoo's spam folder; I am still too lazy to
get my own domain ...
It is good to know that they fixed your issues.
Best Regards,
Strahil Nikolov
On Friday, September 4, 2020, 02:00:32 GMT+3, Computerisms Corporation
wrote:
Hi Strahil
the cutover time comes, you can just stop the geo rep and
reconfigure the clients to use the new cluster.
Best Regards,
Strahil Nikolov
On Tuesday, September 1, 2020, 22:01:49 GMT+3, Thomas Cameron
wrote:
Howdy, all -
I have inherited a system which is running an ancient version
Hi Pat,
I checked
https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes
and indeed it is possible for that to affect your performance.
What is your current status ? Did you run a new rebalance ?
Best Regards,
Strahil Nikolov
On Thursday, August 27
to an
ongoing healing operation. Also, check the logs on all nodes for any errors
during the healing - maybe you have some issues that were not noticed before.
Best Regards,
Strahil Nikolov
On Friday, September 11, 2020, 12:13:06 GMT+3, Martin Bähr
wrote:
Excerpts from Gionatan Danti's
ll-gluster-processes.sh are provided
by the glusterfs-server package.
Best Regards,
Strahil Nikolov
On Wednesday, September 2, 2020, 21:59:45 GMT+3, Ward Poelmans
wrote:
Hi,
I've been playing with glusterfs on a couple of VMs to get some feeling for
it. The setup is 2 bricks with replicat
And it seems gdeploy is deprecated in favour of gluster-ansible ->
gluster/gluster-ansible (a core library of gluster-specific roles and modules
for Ansible/Ansible Tower).
Best Regards,
Strahil Nikolov
On Wednesday
Do you have the option to update your cluster to 8.1?
Are your clients in an HCI setup (server & client are the same system)?
Best Regards,
Strahil Nikolov
On Thursday, October 8, 2020, 17:07:31 GMT+3, Knoth, Benjamin
wrote:
Dear community,
actually, I'm running
t-o-direct=on
performance.read-ahead=off
performance.io-cache=off
performance.readdir-ahead=off
performance.client-io-threads=on
server.event-threads=4
client.event-threads=4
performance.read-after-open=yes
At least it is a good starting point.
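For reference, these are applied per volume; a hedged example with a made-up volume name:
gluster volume set myvol performance.read-ahead off
gluster volume set myvol server.event-threads 4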
Best Regards,
Strahil Nikolov
On Tuesday, October 13, 2020,
Thanks for sharing.
Best Regards,
Strahil Nikolov
On Tuesday, October 13, 2020, 18:17:23 GMT+3, Benjamin Knoth
wrote:
Dear all,
I added the community repository to update Gluster to 8.1.
This fixed my memory leak. But in my logfile I now get many errors every second
512 /dev/arbiter/disk
3. mount /dev/arbiter/disk /new/path/to/brick
Next just add the brick:
gluster volume add-brick VOL replica 3 arbiter 1 Arbiter:/new/path/to/brick
In both cases (the reset-brick and remove/add-brick) you need to work with a
fresh new brick.
Best Regards,
Strahil Nikolov
need to
wipe the fs and 'reset-brick'.
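For reference, a hedged sketch of the reset-brick sequence (volume, host and brick path are made up):
gluster volume reset-brick myvol node1:/bricks/b1/brick start
# wipe and recreate the filesystem on the brick device here
gluster volume reset-brick myvol node1:/bricks/b1/brick node1:/bricks/b1/brick commit force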
Best Regards,
Strahil Nikolov
be
controlled easily.
Best Regards,
Strahil Nikolov
On Monday, October 12, 2020, 12:12:18 GMT+3, Alex K
wrote:
On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato wrote:
> On 10/10/20 16:53, Alex K wrote:
>
>> Reading from the docs i see that this is not recomme
ements, I would add some SSDs into the game (tier 1+
storage) and use the SSD-based LUNs for LVM caching.
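A hedged sketch of the LVM caching idea (PV, VG and LV names are made up):
pvcreate /dev/nvme0n1                 # fast SSD/NVMe device
vgextend data_vg /dev/nvme0n1         # add it to the VG holding the brick LV
lvcreate --type cache -L 100G -n brick_cache data_vg/brick_lv /dev/nvme0n1   # attach a cache to the brick LV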
Best Regards,
Strahil Nikolov
Hi Rusty,
please provide more details like:
gluster volume status VOL
gluster volume info VOL
and from the client:
df -h /path/to/mounted/volume
P.S.: What was the command you used to add and then rebalance the volume ?
Best Regards,
Strahil Nikolov
On Monday, October 19, 2020
but should not stale).
Best Regards,
Strahil Nikolov
On Monday, October 19, 2020, 14:40:01 GMT+3, Nico van Royen
wrote:
Hello,
>4 cores is quite low, especially when healing.
The 4 cores (and, by default, 8GB RAM), is a standard offering in our
situations. It would be up to
/proc/cmdline'
- tuned profile
- Number of clients and TSP nodes in the cluster
- Are you using it in HCI mode (where client and server is the same system)
Best Regards,
Strahil Nikolov
On Monday, October 5, 2020, 21:27:28 GMT+3, Adrian Quintero
wrote:
Hi,
Have you tried tuned
on community edition is 64MB.
Best Regards,
Strahil Nikolov
On August 18, 2020 16:47:01 GMT+03:00, Gilberto Nunes
wrote:
>>> What's your workload?
>I have 6 KVM VMs which have Windows and Linux installed on it.
>
>>> Read?
>>> Write?
>iostat (I am usin
of VMs.
Best Regards,
Strahil Nikolov
On Friday, August 28, 2020, 19:57:18 GMT+3, Pat Haley
wrote:
Hi All,
We have a distributed gluster filesystem across 2 servers. We recently
realized that one of the servers (mseas-data3) has 2 hostnames for the other
server (mseas
On August 20, 2020 3:46:41 GMT+03:00, Computerisms Corporation
wrote:
>Hi Strahil,
>
>so over the last two weeks, the system has been relatively stable. I
>have powered off both servers at least once, for about 5 minutes each
>time. server came up, auto-healed what it needed to, so all
>master# gluster vol profile webisms start
>Profile on Volume webisms is already started
It seems that it was already started. Can you stop it and check the node's load
before starting it again?
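For reference, a hedged example of the stop/start cycle with the volume name from the output above:
gluster volume profile webisms stop
# check the node's load here
gluster volume profile webisms start
gluster volume profile webisms info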
Best Regards,
Strahil Nikolov
On August 21, 2020 7:44:35 GMT+03:00, Computerisms Corpo
Sadly I have no idea why rebalance did that, so you should check the logs on
all nodes for clues.
Is there any reason why you used "force" in that command?
Best Regards,
Strahil Nikolov
On Thursday, August 27, 2020, 17:32:24 GMT+3, Pat Haley
wrote:
Hi
to the 'ProjectB' volume's brick directories and remove the dirty flag.
When you stat that dir ('du' or 'stat' from fuse should work), the quota should
get fixed .
Best Regards,
Strahil Nikolov
On August 14, 2020 14:39:49 GMT+03:00, "João Baúto"
wrote:
>Hi Strahil,
>
>I have tried
Best Regards,
Strahil Nikolov
On August 14, 2020 17:08:57 GMT+03:00, Gilberto Nunes
wrote:
>Yes! I see!
>For many small files is complicated...
>Here I am generally using 2 or 3 large files (VM disk images!)...
>I think this could be at least some progress bar or percent about
It might help narrow down the problem.
Best Regards,
Strahil Nikolov
On August 14, 2020 20:22:16 GMT+03:00, Matthew Benstead
wrote:
>Hi,
>
>We are building a new storage system, and after geo-replication has
>been
>running for a few hours the server runs out of memory and oo
Sadly I can't help much here.
Is this a Hyperconverged setup (host is also a client) ?
Best Regards,
Strahil Nikolov
On Tuesday, September 29, 2020, 18:29:20 GMT+3, Shreyansh Shah
wrote:
Hi All,
Can anyone help me out with this?
On Tue, Sep 22, 2020 at 2:59 PM Shreyansh Shah
At least you can bump the cluster op-version (in case you don't plan to add
older clients) via:
gluster volume set all cluster.op-version 50400
If it happens again, try to remount the client in order to verify that it is
not a memory leak.
Best Regards,
Strahil Nikolov
On Wednesday, 30
to update it to 5.11 (if the
Gluster Cluster is on 5.11 or higher) and monitor it closely.
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 18:22:33 GMT+3, Shreyansh Shah
wrote:
Hi Strahil,
Thanks for taking out time to help me.
This is not a hyperconverged setup. We
On Thursday, September 17, 2020, 13:16:06 GMT+3, Alexander Iliev
wrote:
On 9/16/20 9:53 PM, Strahil Nikolov wrote:
> On Wednesday, September 16, 2020, 11:54:57 GMT+3, Alexander Iliev
> wrote:
>
> From what I understood, in order to be able to scale
, so
keep that in mind.
You can check
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance
for more details.
Best Regards,
Strahil Nikolov
On Friday, September 18, 2020, 17:01:30 GMT
to upgrade without downtime to higher
versions of Gluster. As far as I remember I upgraded from that version to 5.5
some time ago, but it should also work with 6.X ...
Best Regards,
Strahil Nikolov
On Friday, October 2, 2020, 19:19:38 GMT+3, Pasi Kärkkäinen
wrote:
Hello list,
I
Hi Rafi,
I have a test oVirt 4.3.9 cluster with Gluster v7.5 on CentOS7.
Can you provide the rpms and I will try to test.
Also, please share the switch that disables this behaviour (in case something
goes wrong).
Best Regards,
Strahil Nikolov
On May 27, 2020 14:54:34 GMT+03:00, RAFI KC
Also,
can you provide a ping between the nodes, so we get an idea of the latency
between them?
Also, I'm interested in how much time a 'du' takes directly on the bricks.
Best Regards,
Strahil Nikolov
On May 27, 2020 10:27:34 GMT+03:00, Karthik Subrahmanya
wrote:
>Hi,
>
>Pleas
On May 25, 2020 5:49:00 GMT+03:00, Olivier
wrote:
>Strahil Nikolov writes:
>
>> On May 23, 2020 7:29:23 AM GMT+03:00, Olivier
> wrote:
>>>Hi,
>>>
>>>I have been struggling with NFS Ganesha: one gluster node with
>ganesha
>>>serving
I forgot to mention that you need to verify/set the VMware machines for a
high-performance/low-latency workload.
On May 25, 2020 17:13:52 GMT+03:00, Strahil Nikolov
wrote:
>
>
>On May 25, 2020 5:49:00 GMT+03:00, Olivier
> wrote:
>>Strahil Nikolov writes:
>>
>
Hey Rafi,
what do you mean by volume configuration and tree structure?
Best Regards,
Strahil Nikolov
On May 27, 2020 16:18:36 GMT+03:00, RAFI KC wrote:
>Sure, I have back-ported the patch to release-7. Now I will see How I
>can build the rpms.
>
>On the other hand, if possibl
move the file away from the slave, does it fix the
issue?
Best Regards,
Strahil Nikolov
On May 30, 2020 1:10:56 GMT+03:00, David Cunningham
wrote:
>Hello,
>
>We're having an issue with a geo-replication process with unusually
>high
>CPU use and giving "Entry n
Yesterday another oVirt user hit the issue (or a similar one) after a gluster v6.6
to 6.8 upgrade.
I guess Adrian can provide the logs, so we can check whether it is the same issue or not.
Best Regards,
Strahil Nikolov
On May 28, 2020 14:15:55 GMT+03:00, Alan Orth wrote:
>We upgraded from 5.10 or 5
ncy level 3 (8 +3)
12 bricks with redundancy level 4 (8 + 4)
In your case, if 2 bricks fail the volume will be available without any
disruption. Sadly there is no way to convert a replicated volume to a dispersed one, and
based on your workload a dispersed volume might not be suitable.
Best Regards,
Strahil Nikolov
Hello Naranderan,
what OS are you using ? Do you have SELINUX in enforcing mode (verify via
'sestatus') ?
Best Regards,
Strahil Nikolov
On Saturday, May 30, 2020, 13:33:05 GMT+3, Naranderan Ramakrishnan
wrote:
Dear Developers/Users,
A geo-rep session of a sub-volume
this article (it's for small files tuning, but describes
the options above):
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/small_file_performance_enhancements
Best Regards,
Strahil Nikolov
On May 29, 2020 22:25:29 GMT+03:00, Qing Wang wrote
looping over some data causing the CPU hog.
Sadly, I can't find instructions for increasing the log level of the geo-rep
log.
Best Regards,
Strahil Nikolov
On June 2, 2020 6:14:46 GMT+03:00, David Cunningham
wrote:
>Hi Strahil and Sunny,
>
>Thank you for the replies. I checked
Actually I used 'replace-brick' several times and I had no issues.
I guess you can also 'remove-brick replica <N> old_brick' and later
'add-brick replica <N> new_brick' ...
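A hedged sketch of the replace-brick route (volume, hosts and brick paths are made up):
gluster volume replace-brick myvol oldnode:/bricks/b1/brick newnode:/bricks/b1/brick commit force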
Best Regards,
Strahil Nikolov
On Wednesday, September 16, 2020, 08:41:29 GMT+3, Alex Wakefield
wrote:
Hi all,
We have
reason to use ZFS? It uses a lot of memory.
Best Regards,
Strahil Nikolov
er will pick up the third node to be an
arbiter...
Is there any parameter to pass during volume creation, or just replica 1 and
then gluster will analyze the hosts and pick the less powerful one?
Sorry for the newbie question...
Thanks
---
Gilberto Nunes Ferreira
On Wed., Oct. 21,