Hello Guenther,
thank you for the update. Next Monday is a public holiday in Germany, so
Tuesday to Friday would be fine. The time suggestions mentioned in the
previous mails should be suitable.
Regards
David
On Thu, 6 Jun 2019 at 16:58, Günther Deschner wrote <
gdesc..
If I remember correctly, in the video they suggested not making a
RAID 10 too big (i.e. too many (big) disks), because the RAID resync
could then take a long time. They didn't mention a limit; on my 3
servers with 2 RAID 10 arrays each (1x4 disks, 1x6 disks), no disk has
failed so far, but there were automatic pe
Hi Abhishek,
Please use statedumps taken at intervals to determine where the memory is
increasing. See [1] for details.
Regards,
Nithya
[1] https://docs.gluster.org/en/latest/Troubleshooting/statedump/
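Taking statedumps at intervals can be sketched with a small loop like the one below. This is only an illustration: `VOLNAME` is a placeholder for the actual volume, and the interval/count are arbitrary; the dumps land in /var/run/gluster/ by default, where the mempool and allocation sections can be compared across dumps to see what grows.

```shell
# Take a statedump of all brick processes every 10 minutes, 6 times.
# Replace VOLNAME with the affected volume; dumps go to /var/run/gluster/.
for i in $(seq 1 6); do
    gluster volume statedump VOLNAME
    sleep 600
done
```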
On Fri, 7 Jun 2019 at 08:13, ABHISHEK PALIWAL
wrote:
> Hi Nithya,
>
> We are having the se
Hi Nithya,
We have a setup where we copy a file to the Gluster mount point and then
delete it from there to update to the latest file. We noticed some memory
increase in the glusterfsd process due to this.
To find the memory leak we are using valgrind, but it didn't help.
That's why we contacted t
nfs.disable: on
transport.address-family: inet
# gluster volume heal elastic-volume info
Brick dev01:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news
I have about 200TB in a gluster replicate only 3-node setup. We stopped
using hardware RAID6 after the third drive failed on one array at the
same time we replaced the other two and before recovery could complete.
200TB is a mess to resync.
So now each hard drive is a single entity. We add 1 drive
I need help cleaning up a faulty geo-replication session. I tried
deleting all related directories and files, but I am currently in a state
where, when I try to recreate the session via
gluster> volume geo-replication
icp_kube-system_nfs-pvc_69753a58-819f-11e9-b3a0-005056b694b5
root@rmtwrk1
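For reference, a faulty session is normally removed through the CLI rather than by deleting directories by hand. The commands below are a sketch; `<MASTERVOL>`, `<SLAVEUSER>`, `<SLAVEHOST>`, and `<SLAVEVOL>` are placeholders for the session's actual endpoints.

```shell
# Stop the session (force, since it is faulty), then delete it.
# 'reset-sync-time' clears the stored sync metadata so a recreated
# session starts from scratch.
gluster volume geo-replication <MASTERVOL> <SLAVEUSER>@<SLAVEHOST>::<SLAVEVOL> stop force
gluster volume geo-replication <MASTERVOL> <SLAVEUSER>@<SLAVEHOST>::<SLAVEVOL> delete reset-sync-time
```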
Hi
On 06.06.19 at 18:48, Eduardo Mayoral wrote:
> Your comment actually helps me more than you think; one of the main
> doubts I have is whether to go for JBOD with replica 3 or SW RAID 6 with
> replica 2 + arbiter. Before reading your email I was leaning more
> towards JBOD, as reconstruction of
Yes to the 10 GbE NICs (they are already on the servers).
Nice idea with the SSDs, but I do not have a HW RAID card on these
servers or the possibility to get / install one.
What I do have is an extra SSD disk per server which I plan to use as
LVM cache for the bricks (Maybe just 1 disk, maybe 2 w
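Attaching an SSD as an LVM cache for a brick could look roughly like this. The volume group, LV, and device names below are made up for illustration; sizes depend on the SSD.

```shell
# Assuming the SSD is /dev/sdX, the bricks live in VG 'gluster' on LV
# 'brick1', and the SSD has already been prepared with pvcreate.
vgextend gluster /dev/sdX

# Create a cache pool on the SSD and attach it to the brick LV.
lvcreate --type cache-pool -L 400G -n brick1cache gluster /dev/sdX
lvconvert --type cache --cachepool gluster/brick1cache gluster/brick1
```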
What if you have two fast 2 TB SSDs per server in hardware RAID 1, with 3
hosts in replica 3 and dual 10 GbE enterprise NICs? This would end up being
a single 2 TB volume, correct? Seems like that would offer great speed and
pretty decent survivability.
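The capacity arithmetic behind that "single 2 TB volume" can be sketched as a toy calculation (not Gluster code; the function name is made up): in a replicated volume, every file is written to all bricks in a replica set, so each set contributes the size of one brick.

```python
def usable_capacity_tb(brick_tb: float, replica: int, n_bricks: int) -> float:
    """Usable size of a replicated Gluster volume.

    Each group of `replica` bricks stores full copies of the same data,
    so it contributes the capacity of a single brick.
    """
    if n_bricks % replica != 0:
        raise ValueError("brick count must be a multiple of the replica count")
    return brick_tb * (n_bricks // replica)

# 3 hosts, one 2 TB RAID-1 brick each, replica 3 -> one 2 TB volume
print(usable_capacity_tb(2.0, 3, 3))  # 2.0
```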
On Wed, Jun 5, 2019 at 11:54 PM Hu Bert wrote:
Your comment actually helps me more than you think; one of the main
doubts I have is whether to go for JBOD with replica 3 or SW RAID 6 with
replica 2 + arbiter. Before reading your email I was leaning more
towards JBOD, as reconstruction of a moderately big RAID 6 with mdadm
can be painful too. Now
You should not have used this one:
>
> gluster-mountbroker remove --volume code-misc --user sas
-- That command removes the volume/user from the mountbroker.
Please try setting up the mountbroker once again.
-Sunny
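Re-creating the mountbroker setup could look like the sketch below; the mountbroker root path and group name follow the standard documentation defaults and may differ on your system, while the volume and user match the thread.

```shell
# Set up the mountbroker root and group, then re-add the volume/user.
gluster-mountbroker setup /var/mountbroker-root geogroup
gluster-mountbroker add code-misc sas

# Verify the configuration on all nodes.
gluster-mountbroker status
```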
On Thu, Jun 6, 2019 at 5:28 PM deepu srinivasan wrote:
>
> Hi Sunny
> Please find the lo
What's the current traceback? Please share it.
-Sunny
On Thu, Jun 6, 2019 at 4:53 PM deepu srinivasan wrote:
>
> Hi Sunny
> I have changed the file in /usr/libexec/glusterfs/peer_mountbroker.py as
> mentioned in the patch.
> Now the "gluster-mountbroker status" command is working fine. But the
> geo-r
The above error is tracked here:
https://bugzilla.redhat.com/show_bug.cgi?id=1709248
and the patch is here:
https://review.gluster.org/#/c/glusterfs/+/22716/
You can apply the patch and test it; however, it is still waiting on
regression tests to pass before it can be merged.
-Sunny
On Thu, Jun 6, 2019 at 4:00 PM deepu srinivasan w
Hi Abhishek,
I am still not clear as to the purpose of the tests. Can you clarify why
you are using valgrind and why you think there is a memory leak?
Regards,
Nithya
On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL
wrote:
> Hi Nithya,
>
> Here is the Setup details and test which we are doing as
On Tue, Jun 4, 2019 at 7:36 AM Xie Changlong wrote:
> To me, all 'df' commands on specific(not all) nfs client hung forever.
> The temporary solution is disable performance.nfs.write-behind and
> cluster.eager-lock.
>
> I'll try to get more info back if I encounter this problem again.
>
If you ob