[Gluster-devel] [Fwd: [Gluster-infra] Regarding the recent arrival of emails on list]

2019-07-18 Thread Michael Scherer
(works better without an error in the address)
-- 
Michael Scherer
Sysadmin, Community Infrastructure



--- Begin Message ---
Hi,

While working on https://bugzilla.redhat.com/show_bug.cgi?id=1730962,
I noticed that no one had been moderating the list since the end of April, so
I decided to clean the queue (around 200 to 300 mails, most of them not
legitimate, but some were). That's why old emails suddenly arrived: I figured
that even though they were old, some might be useful for the archives or
something.

Sorry for not having looked earlier. It seems we might need to
redistribute responsibility for this task, since a few of the people
who were doing it have likely moved on from the project.



-- 
Michael Scherer
Sysadmin, Community Infrastructure





signature.asc
Description: This is a digitally signed message part
___
Gluster-infra mailing list
gluster-in...@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-infra
--- End Message ---


signature.asc
Description: This is a digitally signed message part
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Quick question about the latest glusterfs and client side selinux support

2019-07-18 Thread Desai, Janak
Thank you so much Jiffin for the quick response!

-Janak

From: Jiffin Thottan 
Sent: Thursday, June 20, 2019 11:58:52 PM
To: Desai, Janak
Cc: Gluster Devel; nfs-ganesha-devel
Subject: Re: Quick question about the latest glusterfs and client side selinux 
support

Hi Janak,

Currently, it is supported in glusterfs (from 2.8 onwards) and cephfs (already 
there in 2.7) for nfs-ganesha.

--
Jiffin

- Original Message -
From: "Janak Desai" 
To: "Jiffin Tony Thottan" 
Sent: Thursday, June 20, 2019 9:29:09 PM
Subject: Re: Quick question about the latest glusterfs and client side selinux 
support

Hi Jiffin,



I came across your presentation “NFS-Ganesha Weather Report” that you gave at 
FOSDEM’19 in early February this year. In it you mentioned that ongoing 
developments in v2.8 include “labelled NFS” support. I see that v2.8 is now 
out. Do you know if labelled NFS support made it in? If it did, is it only 
supported in the CEPHFS FSAL, or do other FSALs also include support for it? I 
took a cursory look at the release documents and didn’t see labelled NFS in 
them, so I thought I would bug you directly.



Thanks.



-Janak





From: Jiffin Tony Thottan 
Date: Tuesday, August 28, 2018 at 12:50 AM
To: Janak Desai , "nde...@redhat.com" 
, "mselv...@redhat.com" 
Cc: "p...@paul-moore.com" 
Subject: Re: Quick question about the latest glusterfs and client side selinux 
support



Hi Janak,

Thanks for the interest. A basic selinux xlator is present in the gluster 
server stack; it stores the selinux context on the backend as an xattr. When 
we developed that xlator, there was no client available to test the 
functionality, and I don't know whether the required change in FUSE got merged 
or not.

As you mentioned, here we first need to figure out whether the issue is on the 
server side. Could you collect a packet trace using tcpdump on the client 
while setting/getting the selinux context, and send it with your mail?
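
A rough sketch of such a capture (the mount point, file name and port range here are assumptions; glusterd listens on 24007 and brick ports usually start at 49152):

# on the client, while reproducing:
tcpdump -i any -s 0 -w /tmp/selinux-xattr.pcap port 24007 or portrange 49152-49251
# reproduce by setting and reading back a context on the mount:
chcon -t httpd_sys_content_t /mnt/glustervol/testfile
getfattr -n security.selinux /mnt/glustervol/testfile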

Regards,

Jiffin



On Tuesday 28 August 2018 04:14 AM, Desai, Janak wrote:

Hi Niels, Manikandan, Jiffin,



I work for Georgia Tech Research Institute’s CIPHER Lab and am investigating 
the suitability of glusterfs for a couple of large upcoming projects. My 
‘google research’ is yielding confusing and inconclusive results, so I thought 
I would try to reach out to some of the core developers to get some clarity.



We use SELinux extensively in our software solution. I am trying to find out 
if, with the latest version 4.1 of glusterfs running on the latest version of 
rhel, I should be able to associate and enforce SELinux contexts from glusterfs 
clients. I see in the 3.11 release notes that the selinux feature was 
implemented, but I also see references to kernel work that is not done yet. 
I also could not find any documentation or examples on how to add/integrate 
this selinux translator to set up and enforce SELinux labels from the client 
side. In my simple test setup, which I mounted using the “selinux” option 
(which gluster does seem to recognize), I am getting the “operation not 
supported” error. I guess either I am not pulling in the selinux translator or 
I am running up against other missing functionality in the kernel. I would 
really appreciate it if you could clear this up for me. If I am not configuring 
my mount correctly, I would appreciate it if you could point me to a document 
or an example. Our other option is the Lustre filesystem, since it does have 
working client-side association and enforcement of SELinux contexts. However, 
Lustre appears to be a lot more difficult to set up and maintain, and I would 
rather use glusterfs. We need a distributed (or parallel) filesystem that can 
work with Hadoop. If glusterfs doesn’t pan out, then I will look at labelled 
NFS 4.2, which is now available in rhel7. However, my google research shows 
much more Hadoop affinity for glusterfs than NFS v4.
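
For reference, the mount in my test setup looks roughly like this (server, volume and mount point anonymized):

mount -t glusterfs -o selinux <server>:/<volume> /mnt/glustervol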



I am also copying Paul Moore, with whom I collaborated a few years ago as part 
of the team that took Linux through its common criteria evaluation, and whom I 
haven’t bugged lately ☺, to see if he can shed some light on any missing kernel 
dependencies. I am currently testing with rhel7.5, but would be willing to try 
an upstream kernel if I have to in order to get this proof of concept going. I 
know the underlying problem in the kernel is supporting extended attrs on FUSE 
file systems, but I was wondering (and hoping) that at least setup/enforcement 
of SELinux contexts from the client side is possible for glusterfs.



Thanks.



-Janak




___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] No healing on peer disconnect - is it correct?

2019-07-18 Thread Martin
My VMs use Gluster as storage through libgfapi support in QEMU, but I don't 
see any healing of the reconnected brick.

Thanks Karthik / Ravishankar in advance!

> On 10 Jun 2019, at 16:07, Hari Gowtham  wrote:
> 
> On Mon, Jun 10, 2019 at 7:21 PM snowmailer  > wrote:
>> 
>> Can someone advice on this, please?
>> 
>> BR!
>> 
>> On 3 Jun 2019 at 18:58, Martin wrote:
>> 
>>> Hi all,
>>> 
>>> I need someone to explain if my gluster behaviour is correct. I am not sure 
>>> if my gluster works as it should. I have simple Replica 3 - Number of 
>>> Bricks: 1 x 3 = 3.
>>> 
>>> When one of my hypervisor is disconnected as peer, i.e. gluster process is 
>>> down but bricks running, other two healthy nodes start signalling that they 
>>> lost one peer. This is correct.
>>> Next, I restart gluster process on node where gluster process failed and I 
>>> thought It should trigger healing of files on failed node but nothing is 
>>> happening.
>>> 
>>> I run VMs disks on this gluster volume. No healing is triggered after 
>>> gluster restart, remaining two nodes get peer back after restart of gluster 
>>> and everything is running without down time.
>>> Even VMs that are running on “failed” node where gluster process was down 
>>> (bricks were up) are running without down time.
> 
> I assume your VMs use gluster as the storage. In that case, the
> gluster volume might be mounted on all the hypervisors.
> The mount/ client is smart enough to give the correct data from the
> other two machines which were always up.
> This is the reason things are working fine.
> 
> Gluster should heal the brick.
> Adding people how can help you better with the heal part.
> @Karthik Subrahmanya  @Ravishankar N do take a look and answer this part.
> 
>>> 
>>> Is this behaviour correct? I mean No healing is triggered after peer is 
>>> reconnected back and VMs.
>>> 
>>> Thanks for explanation.
>>> 
>>> BR!
>>> Martin
>>> 
>>> 
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org 
>> https://lists.gluster.org/mailman/listinfo/gluster-users 
>> 
> 
> 
> 
> -- 
> Regards,
> Hari Gowtham.

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Failure during “git review –setup”

2019-07-18 Thread Saleh, Amgad (Nokia - US/Naperville)
Never mind – it worked. The $USER when adding the gerrit remote should be my 
GitHub user; the documentation was not clear!
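
For anyone hitting the same issue, the working setup was roughly the following (the exact remote URL form is an assumption; confirm it on the Gerrit settings page):

git remote add gerrit ssh://<gerrit-username>@review.gluster.org/glusterfs
git review --setup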

From: Saleh, Amgad (Nokia - US/Naperville)
Sent: Thursday, May 23, 2019 11:34 PM
To: gluster-devel@gluster.org
Subject: RE: Failure during “git review –setup”
Importance: High


Looking at the document 
https://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/
I ran ./rfc.sh and got the following:

[ahsaleh@null-d4bed9857109 glusterfs]$ ./rfc.sh
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100  4780  100  47800 0438  0  0:00:10  0:00:10 --:--:--  1087
[bugfix-pureIPv6 d14e550] Return IPv6 when exists and not -1
1 file changed, 10 insertions(+), 5 deletions(-)
Commit: "Return IPv6 when exists and not -1"
Reference (Bugzilla ID or Github Issue ID): amgads
Invalid reference ID (amgads)!!!
Commit: "Return IPv6 when exists and not -1"
Reference (Bugzilla ID or Github Issue ID): amgads
Invalid reference ID (amgads)!!!
Commit: "Return IPv6 when exists and not -1"
Reference (Bugzilla ID or Github Issue ID): 677
Select yes '(y)' if this patch fixes the bug/feature completely,
or is the last of the patchset which brings feature (Y/n): y
[detached HEAD 8a5bb4a] Return IPv6 when exists and not -1
1 file changed, 10 insertions(+), 5 deletions(-)
Successfully rebased and updated refs/heads/bugfix-pureIPv6.
./rfc.sh: line 287: clang-format: command not found
[ahsaleh@null-d4bed9857109 glusterfs]$

Is the code submitted for review? Please advise what’s needed next – this is my 
first time using the process.

Submit for review
To submit your change for review, run the rfc.sh script,
$ ./rfc.sh
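
On the "clang-format: command not found" message: rfc.sh calls the clang-format tool for its formatting check, so installing it and re-running the script is the simplest way forward (the package name is distribution-dependent; on Fedora it ships in clang-tools-extra):

# dnf install clang-tools-extra
$ ./rfc.sh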


From: Saleh, Amgad (Nokia - US/Naperville)
Sent: Thursday, May 23, 2019 11:19 PM
To: gluster-devel@gluster.org
Subject: Failure during “git review –setup”


Hi:

After submitting a Pull Request at GitHub, I got the message about the gerrit 
review (attached).
I followed the steps and added a public key, but failed at the “git review 
--setup” step – please see the errors below.

Your urgent support is appreciated!

Regards,
Amgad Saleh
Nokia.
……
[ahsaleh@null-d4bed9857109 glusterfs]$ git review --setup
Problem running 'git remote update gerrit'
Fetching gerrit
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
error: Could not fetch gerrit
Problems encountered installing commit-msg hook
The following command failed with exit code 1
"scp 
ahsa...@review.gluster.org:hooks/commit-msg
 .git/hooks/commit-msg"
---
Permission denied (publickey).
---
[ahsaleh@null-d4bed9857109 glusterfs]$

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] No healing on peer disconnect - is it correct?

2019-07-18 Thread Strahil
Hi Martin,

By default gluster will proactively start to heal every 10 min, so this is not 
OK.

Usually I do not wait for that to get triggered and I run 'gluster volume heal 
<VOLNAME> full' (using replica 3 with 4 MB sharding, the oVirt default).
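
For the archives, a minimal sketch of triggering a heal and checking the result (<VOLNAME> is a placeholder):

# gluster volume heal <VOLNAME> full
# gluster volume heal <VOLNAME> info

The 'info' output lists the entries still pending heal on each brick, so it should go down to zero once healing has caught up.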

Best Regards,
Strahil Nikolov

On Jun 3, 2019 19:58, Martin wrote:
>
> Hi all,
>
> I need someone to explain if my gluster behaviour is correct. I am not sure 
> if my gluster works as it should. I have simple Replica 3 - Number of Bricks: 
> 1 x 3 = 3.
>
> When one of my hypervisor is disconnected as peer, i.e. gluster process is 
> down but bricks running, other two healthy nodes start signalling that they 
> lost one peer. This is correct.
> Next, I restart gluster process on node where gluster process failed and I 
> thought It should trigger healing of files on failed node but nothing is 
> happening.
>
> I run VMs disks on this gluster volume. No healing is triggered after gluster 
> restart, remaining two nodes get peer back after restart of gluster and 
> everything is running without down time.
> Even VMs that are running on “failed” node where gluster process was down 
> (bricks were up) are running without down time.
>
> Is this behaviour correct? I mean No healing is triggered after peer is 
> reconnected back and VMs.
>
> Thanks for explanation.
>
> BR!
> Martin 
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] No healing on peer disconnect - is it correct?

2019-07-18 Thread snowmailer
Can someone advise on this, please?

BR!

On 3 Jun 2019 at 18:58, Martin wrote:

> Hi all,
> 
> I need someone to explain if my gluster behaviour is correct. I am not sure 
> if my gluster works as it should. I have simple Replica 3 - Number of Bricks: 
> 1 x 3 = 3.
> 
> When one of my hypervisor is disconnected as peer, i.e. gluster process is 
> down but bricks running, other two healthy nodes start signalling that they 
> lost one peer. This is correct.
> Next, I restart gluster process on node where gluster process failed and I 
> thought It should trigger healing of files on failed node but nothing is 
> happening.
> 
> I run VMs disks on this gluster volume. No healing is triggered after gluster 
> restart, remaining two nodes get peer back after restart of gluster and 
> everything is running without down time.
> Even VMs that are running on “failed” node where gluster process was down 
> (bricks were up) are running without down time. 
> 
> Is this behaviour correct? I mean No healing is triggered after peer is 
> reconnected back and VMs.
> 
> Thanks for explanation.
> 
> BR!
> Martin 
> 
> 
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] No healing on peer disconnect - is it correct?

2019-07-18 Thread Martin
Hi all,

I need someone to explain whether my gluster behaviour is correct. I am not 
sure if my gluster works as it should. I have a simple Replica 3 - Number of 
Bricks: 1 x 3 = 3.

When one of my hypervisors is disconnected as a peer, i.e. the gluster process 
is down but the bricks are running, the other two healthy nodes start 
signalling that they lost one peer. This is correct.
Next, I restart the gluster process on the node where it failed, and I thought 
it should trigger healing of files on the failed node, but nothing is 
happening.

I run VM disks on this gluster volume. No healing is triggered after the 
gluster restart, the remaining two nodes get the peer back after the restart, 
and everything runs without downtime.
Even VMs that are running on the “failed” node where the gluster process was 
down (bricks were up) keep running without downtime.

Is this behaviour correct? I mean that no healing is triggered after the peer 
is reconnected, even though the VMs keep running.

Thanks for the explanation.

BR!
Martin 


___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-07-18 Thread Andrey Volodin
As per
https://helpful.knobs-dials.com/index.php/INFO:_task_blocked_for_more_than_120_seconds
the informational warning could be suppressed with:

"echo 0 > /proc/sys/kernel/hung_task_timeout_secs"

Moreover, as per their website: "*This message is not an error*.
It is an indication that a program has had to wait for a very long time,
and what it was doing."
More reference:
https://serverfault.com/questions/405210/can-high-load-cause-server-hang-and-error-blocked-for-more-than-120-seconds
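
To make that persistent across reboots, something along these lines should work (the file name is arbitrary, assuming a standard /etc/sysctl.d layout):

# echo 'kernel.hung_task_timeout_secs = 0' > /etc/sysctl.d/99-hung-task.conf
# sysctl -p /etc/sysctl.d/99-hung-task.conf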

Regards,
Andrei

On Mon, May 13, 2019 at 7:32 AM Martin Toth  wrote:

> Cache in qemu is none. That should be correct. This is full command :
>
> /usr/bin/qemu-system-x86_64 -name one-312 -S -machine
> pc-i440fx-xenial,accel=kvm,usb=off -m 4096 -realtime mlock=off -smp
> 4,sockets=4,cores=1,threads=1 -uuid e95a774e-a594-4e98-b141-9f30a3f848c1
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-one-312/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
> -no-shutdown -boot order=c,menu=on,splash-time=3000,strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
> -drive file=/var/lib/one//datastores/116/312/*disk.0*
> ,format=raw,if=none,id=drive-virtio-disk1,cache=none
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1
> -drive file=gluster://localhost:24007/imagestore/
> *7b64d6757acc47a39503f68731f89b8e*
> ,format=qcow2,if=none,id=drive-scsi0-0-0-0,cache=none
> -device
> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
> -drive file=/var/lib/one//datastores/116/312/*disk.1*
> ,format=raw,if=none,id=drive-ide0-0-0,readonly=on
> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
>
> -netdev tap,fd=26,id=hostnet0
> -device e1000,netdev=hostnet0,id=net0,mac=02:00:5c:f0:e4:39,bus=pci.0,addr=0x3
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0
> -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-one-312/org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
> -vnc 0.0.0.0:312,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
>
> I’ve highlighted disks. First is VM context disk - Fuse used, second is
> SDA (OS is installed here) - libgfapi used, third is SWAP - Fuse used.
>
> Krutika,
> I will start profiling on Gluster Volumes and wait for next VM to fail.
> Than I will attach/send profiling info after some VM will be failed. I
> suppose this is correct profiling strategy.
>
> Thanks,
> BR!
> Martin
>
> On 13 May 2019, at 09:21, Krutika Dhananjay  wrote:
>
> Also, what's the caching policy that qemu is using on the affected vms?
> Is it cache=none? Or something else? You can get this information in the
> command line of qemu-kvm process corresponding to your vm in the ps output.
>
> -Krutika
>
> On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay 
> wrote:
>
>> What version of gluster are you using?
>> Also, can you capture and share volume-profile output for a run where you
>> manage to recreate this issue?
>>
>> https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
>> Let me know if you have any questions.
>>
>> -Krutika
>>
>> On Mon, May 13, 2019 at 12:34 PM Martin Toth 
>> wrote:
>>
>>> Hi,
>>>
>>> there is no healing operation, not peer disconnects, no readonly
>>> filesystem. Yes, storage is slow and unavailable for 120 seconds, but why,
>>> its SSD with 10G, performance is good.
>>>
>>> > you'd have it's log on qemu's standard output,
>>>
>>> If you mean /var/log/libvirt/qemu/vm.log there is nothing. I am looking
>>> for problem for more than month, tried everything. Can’t find anything. Any
>>> more clues or leads?
>>>
>>> BR,
>>> Martin
>>>
>>> > On 13 May 2019, at 08:55, lemonni...@ulrar.net wrote:
>>> >
>>> > On Mon, May 13, 2019 at 08:47:45AM +0200, Martin Toth wrote:
>>> >> Hi all,
>>> >
>>> > Hi
>>> >
>>> >>
>>> >> I am running replica 3 on SSDs with 10G networking, everything works
>>> OK but VMs stored in Gluster volume occasionally freeze with “Task XY
>>> blocked for more than 120 seconds”.
>>> >> Only solution is to poweroff (hard) VM and than boot it up again. I
>>> am unable to SSH and also login with console, its stuck probably on some
>>> disk operation. No error/warning logs or messages are store in VMs logs.
>>> >>
>>> >
>>> > As far as I know this should be unrelated, I get this during heals
>>> > without any freezes, it just means the storage is slow I think.
>>> >
>>> >> KVM/Libvirt(qemu) using libgfapi and fuse mount to access VM disks on
>>> replica volume. Can someone advice 

Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-07-18 Thread Martin
Hi Krutika,

> Also, gluster version please?

I am running the old 3.7.6. (Yes, I know I should upgrade ASAP.)

I first applied "network.remote-dio off"; the behaviour did not change and the 
VMs got stuck again after some time.
Then I set "performance.strict-o-direct on" and the problem completely 
disappeared. No more hangs at all (7 days without any problems at all). This 
SOLVED the issue.

Can you explain what the remote-dio and strict-o-direct options changed in the 
behaviour of my Gluster? It would be great for the archive/later users to 
understand what solved my issue, and why.
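
(For later readers: on recent gluster releases the effective per-volume values of these options can be checked as below; <VOLNAME> is a placeholder, and very old versions such as 3.7.x may not have "volume get".)

# gluster volume get <VOLNAME> network.remote-dio
# gluster volume get <VOLNAME> performance.strict-o-direct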

Anyway, Thanks a LOT!!!

BR, 
Martin

> On 13 May 2019, at 10:20, Krutika Dhananjay  wrote:
> 
> OK. In that case, can you check if the following two changes help:
> 
> # gluster volume set $VOL network.remote-dio off
> # gluster volume set $VOL performance.strict-o-direct on
> 
> preferably one option changed at a time, its impact tested and then the next 
> change applied and tested.
> 
> Also, gluster version please?
> 
> -Krutika
> 
> On Mon, May 13, 2019 at 1:02 PM Martin Toth  > wrote:
> Cache in qemu is none. That should be correct. This is full command :
> 
> /usr/bin/qemu-system-x86_64 -name one-312 -S -machine 
> pc-i440fx-xenial,accel=kvm,usb=off -m 4096 -realtime mlock=off -smp 
> 4,sockets=4,cores=1,threads=1 -uuid e95a774e-a594-4e98-b141-9f30a3f848c1 
> -no-user-config -nodefaults -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-one-312/monitor.sock,server,nowait
>  -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime 
> -no-shutdown -boot order=c,menu=on,splash-time=3000,strict=on -device 
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 
> 
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
> -drive 
> file=/var/lib/one//datastores/116/312/disk.0,format=raw,if=none,id=drive-virtio-disk1,cache=none
>   -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1
> -drive file=gluster://localhost:24007/imagestore/ 
> <>7b64d6757acc47a39503f68731f89b8e,format=qcow2,if=none,id=drive-scsi0-0-0-0,cache=none
>   -device 
> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
> -drive 
> file=/var/lib/one//datastores/116/312/disk.1,format=raw,if=none,id=drive-ide0-0-0,readonly=on
>   -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
> 
> -netdev tap,fd=26,id=hostnet0 -device 
> e1000,netdev=hostnet0,id=net0,mac=02:00:5c:f0:e4:39,bus=pci.0,addr=0x3 
> -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
> -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-one-312/org.qemu.guest_agent.0,server,nowait
>  -device 
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
>  -vnc 0.0.0.0:312 ,password -device 
> cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
> 
> I’ve highlighted disks. First is VM context disk - Fuse used, second is SDA 
> (OS is installed here) - libgfapi used, third is SWAP - Fuse used.
> 
> Krutika,
> I will start profiling on Gluster Volumes and wait for next VM to fail. Than 
> I will attach/send profiling info after some VM will be failed. I suppose 
> this is correct profiling strategy.
> 
> About this, how many vms do you need to recreate it? A single vm? Or multiple 
> vms doing IO in parallel?
> 
> 
> Thanks,
> BR!
> Martin
> 
>> On 13 May 2019, at 09:21, Krutika Dhananjay > > wrote:
>> 
>> Also, what's the caching policy that qemu is using on the affected vms?
>> Is it cache=none? Or something else? You can get this information in the 
>> command line of qemu-kvm process corresponding to your vm in the ps output.
>> 
>> -Krutika
>> 
>> On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay > > wrote:
>> What version of gluster are you using?
>> Also, can you capture and share volume-profile output for a run where you 
>> manage to recreate this issue?
>> https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
>>  
>> 
>> Let me know if you have any questions.
>> 
>> -Krutika
>> 
>> On Mon, May 13, 2019 at 12:34 PM Martin Toth > > wrote:
>> Hi,
>> 
>> there is no healing operation, not peer disconnects, no readonly filesystem. 
>> Yes, storage is slow and unavailable for 120 seconds, but why, its SSD with 
>> 10G, performance is good.
>> 
>> > you'd have it's log on qemu's standard output,
>> 
>> If you mean /var/log/libvirt/qemu/vm.log there is nothing. I am looking for 
>> problem for more than month, tried everything. Can’t find 

Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-07-18 Thread Renaud Fortier
IMO, you should keep storhaug and maintain it. At the beginning, we were on 
pacemaker and corosync. Then we moved to storhaug with the upgrade to gluster 
4.1.x. Now you are talking about going back to how it was. Maybe it will be 
better with pacemaker and corosync, but the important thing is to have a 
solution that will be stable and maintained.

thanks
Renaud

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On behalf of Jim Kinney
Sent: 30 April 2019 08:20
To: gluster-us...@gluster.org; Jiffin Tony Thottan ; 
gluster-us...@gluster.org; Gluster Devel ; 
gluster-maintain...@gluster.org; nfs-ganesha ; 
de...@lists.nfs-ganesha.org
Subject: Re: [Gluster-users] Proposing to previous ganesha HA cluster solution 
back to gluster code as gluster-7 feature

+1!
I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
instead of fuse mounts. Having an integrated, designed-in process to coordinate 
multiple nodes into an HA cluster will be very welcome.
On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan 
mailto:jthot...@redhat.com>> wrote:

Hi all,

Some of you folks may be familiar with the HA solution provided for nfs-ganesha 
by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project 
"Storhaug". However, Storhaug has not progressed much over the last two years 
and its development is currently halted, hence the plan to restore the old HA 
ganesha solution back to the gluster code repository, with some improvements, 
targeting the next gluster release, 7.

I have opened up an issue [1] with the details and posted an initial set of 
patches [2].

Please share your thoughts on the same.

Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)

--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Improve stability between SMB/CTDB and Gluster (together with Samba Core Developer)

2019-07-18 Thread David Spisla
Hi Poornima,

That's fine. I would suggest these dates and times:

May 15th – 17th at 12:30, 13:30, 14:30 IST (9:00, 10:00, 11:00 CEST)
May 20th – 24th at 12:30, 13:30, 14:30 IST (9:00, 10:00, 11:00 CEST)

I have added Volker Lendecke from SerNet to the mail; he is the Samba expert.
Can one of you provide a host via bluejeans.com? If not, I will try 
GoToMeeting (https://www.gotomeeting.com).

@all: Please write your preferred dates and times. For me, all of the above 
dates and times are fine.

Regards
David



David Spisla
Software Engineer
david.spi...@iternity.com
+49 761 59034852
iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg
Deutschland
Website

Newsletter

Support Portal
iTernity GmbH. Geschäftsführer: Ralf Steinemann.
​Eingetragen beim Amtsgericht Freiburg: HRB-Nr. 701332.
​USt.Id DE242664311. [v01.023]
Von: Poornima Gurusiddaiah 
Gesendet: Montag, 13. Mai 2019 07:22
An: David Spisla ; Anoop C S ; Gunther 
Deschner 
Cc: Gluster Devel ; gluster-us...@gluster.org List 

Betreff: Re: [Gluster-devel] Improve stability between SMB/CTDB and Gluster 
(together with Samba Core Developer)

Hi,

We would definitely be interested in this. Thank you for contacting us. For a 
start we can have an online conference. Please suggest a few possible dates 
and times for the week (preferably between 7:00 AM and 9:00 PM IST).
Adding Anoop and Gunther who are also the main contributors to the 
Gluster-Samba integration.

Thanks,
Poornima



On Thu, May 9, 2019 at 7:43 PM David Spisla 
mailto:spisl...@gmail.com>> wrote:
Dear Gluster Community,
At the moment we are improving the stability of SMB/CTDB and Gluster. For this 
purpose we are working together with an experienced Samba core developer. He 
did some debugging but needs more information about Gluster core behaviour.

Would any of the Gluster developers want to have an online conference with him 
and me?

I would organize everything. In my opinion this is a good chance to improve 
the stability of Glusterfs, which is at the moment one of the major issues in 
the community.

Regards
David Spisla
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-07-18 Thread Martin Toth
Cache in qemu is none. That should be correct. This is the full command:

/usr/bin/qemu-system-x86_64 -name one-312 -S -machine 
pc-i440fx-xenial,accel=kvm,usb=off -m 4096 -realtime mlock=off -smp 
4,sockets=4,cores=1,threads=1 -uuid e95a774e-a594-4e98-b141-9f30a3f848c1 
-no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-one-312/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime 
-no-shutdown -boot order=c,menu=on,splash-time=3000,strict=on -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 

-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
-drive 
file=/var/lib/one//datastores/116/312/disk.0,format=raw,if=none,id=drive-virtio-disk1,cache=none
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1
-drive 
file=gluster://localhost:24007/imagestore/7b64d6757acc47a39503f68731f89b8e,format=qcow2,if=none,id=drive-scsi0-0-0-0,cache=none
-device 
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
-drive 
file=/var/lib/one//datastores/116/312/disk.1,format=raw,if=none,id=drive-ide0-0-0,readonly=on
-device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0

-netdev tap,fd=26,id=hostnet0 -device 
e1000,netdev=hostnet0,id=net0,mac=02:00:5c:f0:e4:39,bus=pci.0,addr=0x3 -chardev 
pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-one-312/org.qemu.guest_agent.0,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
 -vnc 0.0.0.0:312,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on

I’ve highlighted the disks. The first is the VM context disk (FUSE used), the 
second is SDA (the OS is installed here; libgfapi used), the third is SWAP 
(FUSE used).

Krutika,
I will start profiling on the Gluster volumes and wait for the next VM to fail. 
Then I will attach/send the profiling info after a VM has failed. I suppose 
this is the correct profiling strategy.
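
For the archives, the profiling run would look roughly like this, following the documentation linked earlier in the thread (<VOLNAME> is a placeholder):

# gluster volume profile <VOLNAME> start
# ... wait for a VM to hang, then collect:
# gluster volume profile <VOLNAME> info > /tmp/profile-$(date +%F).txt
# gluster volume profile <VOLNAME> stop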

Thanks,
BR!
Martin

> On 13 May 2019, at 09:21, Krutika Dhananjay  wrote:
> 
> Also, what's the caching policy that qemu is using on the affected vms?
> Is it cache=none? Or something else? You can get this information in the 
> command line of qemu-kvm process corresponding to your vm in the ps output.
> 
> -Krutika
> 
> On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay  > wrote:
> What version of gluster are you using?
> Also, can you capture and share volume-profile output for a run where you 
> manage to recreate this issue?
> https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
>  
> 
> Let me know if you have any questions.
> 
> -Krutika
> 
> On Mon, May 13, 2019 at 12:34 PM Martin Toth  > wrote:
> Hi,
> 
> there is no healing operation, not peer disconnects, no readonly filesystem. 
> Yes, storage is slow and unavailable for 120 seconds, but why, its SSD with 
> 10G, performance is good.
> 
> > you'd have it's log on qemu's standard output,
> 
> If you mean /var/log/libvirt/qemu/vm.log there is nothing. I am looking for 
> problem for more than month, tried everything. Can’t find anything. Any more 
> clues or leads?
> 
> BR,
> Martin
> 
> > On 13 May 2019, at 08:55, lemonni...@ulrar.net 
> >  wrote:
> > 
> > On Mon, May 13, 2019 at 08:47:45AM +0200, Martin Toth wrote:
> >> Hi all,
> > 
> > Hi
> > 
> >> 
> >> I am running replica 3 on SSDs with 10G networking, everything works OK 
> >> but VMs stored in Gluster volume occasionally freeze with “Task XY blocked 
> >> for more than 120 seconds”.
> >> Only solution is to poweroff (hard) VM and than boot it up again. I am 
> >> unable to SSH and also login with console, its stuck probably on some disk 
> >> operation. No error/warning logs or messages are store in VMs logs.
> >> 
> > 
> > As far as I know this should be unrelated, I get this during heals
> > without any freezes, it just means the storage is slow I think.
> > 
> >> KVM/Libvirt(qemu) using libgfapi and fuse mount to access VM disks on 
> >> replica volume. Can someone advice  how to debug this problem or what can 
> >> cause these issues? 
> >> It’s really annoying, I’ve tried to google everything but nothing came up. 
> >> I’ve tried changing virtio-scsi-pci to virtio-blk-pci disk drivers, but 
> >> its not related.
> >> 
> > 
> > Any chance your gluster goes readonly ? Have you checked your gluster
> > logs to see if maybe they lose each other some times ?
> > /var/log/glusterfs
> > 
> > For libgfapi accesses you'd have 

Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA clustersolution back to gluster code as gluster-7 feature

2019-07-18 Thread Strahil Nikolov
Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up 
properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only 
hosting bricks but also other workloads.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith 
resources 'shoot the other node in the head'.
I would be happy to see something easy to deploy (let's say 
'cluster.enable-ha-ganesha true'), with gluster bringing up the floating IPs 
and taking care of the NFS locks, so that no disruption is felt by the clients.

Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney  wrote:
>  
> +1!
> I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
> instead of fuse mounts. Having an integrated, designed in process to 
> coordinate multiple nodes into an HA cluster will very welcome.
> 
> On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
> wrote:
>>  
>> Hi all,
>> 
>> Some of you folks may be familiar with HA solution provided for nfs-ganesha 
>> by gluster using pacemaker and corosync.
>> 
>> That feature was removed in glusterfs 3.10 in favour for common HA project 
>> "Storhaug". Even Storhaug was not progressed
>> 
>> much from last two years and current development is in halt state, hence 
>> planning to restore old HA ganesha solution back
>> 
>> to gluster code repository with some improvement and targetting for next 
>> gluster release 7.
>> 
>>  I have opened up an issue [1] with details and posted initial set of 
>>patches [2]
>> 
>> Please share your thoughts on the same
>> 
>> 
>> Regards,
>> 
>> Jiffin  
>> 
>> [1] https://github.com/gluster/glusterfs/issues/663
>> 
>> [2] 
>> https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)
>> 
>> 
> 
> -- 
> Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
> reflect authenticity.

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA clustersolution back to gluster code as gluster-7 feature

2019-07-18 Thread Strahil
Hi Jiffin,

No vendor will support your corosync/pacemaker stack if you do not have proper 
fencing.
As Gluster is already a cluster of its own, it makes sense to control 
everything from there.

Best Regards,
Strahil Nikolov

On May 3, 2019 09:08, Jiffin Tony Thottan wrote:
>
>
> On 30/04/19 6:59 PM, Strahil Nikolov wrote: 
> > Hi, 
> > 
> > I'm posting this again as it got bounced. 
> > Keep in mind that corosync/pacemaker  is hard for proper setup by new 
> > admins/users. 
> > 
> > I'm still trying to remediate the effects of poor configuration at work. 
> > Also, storhaug is nice for hyperconverged setups where the host is not only 
> > hosting bricks, but  other  workloads. 
> > Corosync/pacemaker require proper fencing to be setup and most of the 
> > stonith resources 'shoot the other node in the head'. 
> > I would be happy to see an easy to deploy (let say 
> > 'cluster.enable-ha-ganesha true') and gluster to be bringing up the 
> > Floating IPs and taking care of the NFS locks, so no disruption will be 
> > felt by the clients. 
>
>
> It do take care those, but need to follow certain prerequisite, but 
> please fencing won't configured for this setup. May we think about in 
> future. 
>
> -- 
>
> Jiffin 
>
> > 
> > Still, this will be a lot of work to achieve. 
> > 
> > Best Regards, 
> > Strahil Nikolov 
> > 
> > On Apr 30, 2019 15:19, Jim Kinney  wrote: 
> >>    
> >> +1! 
> >> I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
> >> instead of fuse mounts. Having an integrated, designed in process to 
> >> coordinate multiple nodes into an HA cluster will very welcome. 
> >> 
> >> On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan 
> >>  wrote: 
> >>>    
> >>> Hi all, 
> >>> 
> >>> Some of you folks may be familiar with HA solution provided for 
> >>> nfs-ganesha by gluster using pacemaker and corosync. 
> >>> 
> >>> That feature was removed in glusterfs 3.10 in favour for common HA 
> >>> project "Storhaug". Even Storhaug was not progressed 
> >>> 
> >>> much from last two years and current development is in halt state, hence 
> >>> planning to restore old HA ganesha solution back 
> >>> 
> >>> to gluster code repository with some improvement and targetting for next 
> >>> gluster release 7. 
> >>> 
> >>>    I have opened up an issue [1] with details and posted initial set of 
> >>>patches [2] 
> >>> 
> >>> Please share your thoughts on the same 
> >>> 
> >>> 
> >>> Regards, 
> >>> 
> >>> Jiffin 
> >>> 
> >>> [1] https://github.com/gluster/glusterfs/issues/663 
> >>> 
> >>> [2] 
> >>> https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)
> >>>  
> >>> 
> >>> 
> >> -- 
> >> Sent from my Android device with K-9 Mail. All tyopes are thumb related 
> >> and reflect authenticity. 
> > Keep in mind that corosync/pacemaker  is hard for proper setup by new 
> > admins/users. 
> > 
> > I'm still trying to remediate the effects of poor configuration at work. 
> > Also, storhaug is nice for hyperconverged setups where the host is not only 
> > hosting bricks, but  other  workloads. 
> > Corosync/pacemaker require proper fencing to be setup and most of the 
> > stonith resources 'shoot the other node in the head'. 
> > I would be happy to see an easy to deploy (let say 
> > 'cluster.enable-ha-ganesha true') and gluster to be bringing up the 
> > Floating IPs and taking care of the NFS locks, so no disruption will be 
> > felt by the clients. 
> > 
> > Still, this will be a lot of work to achieve. 
> > 
> > Best Regards, 
> > Strahil NikolovOn Apr 30, 2019 15:19, Jim Kinney  
> > wrote: 
> >> +1! 
> >> I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
> >> instead of fuse mounts. Having an integrated, designed in process to 
> >> coordinate multiple nodes into an HA cluster will very welcome. 
> >> 
> >> On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan 
> >>  wrote: 
> >>> Hi all, 
> >>> 
> >>> Some of you folks may be familiar with HA solution provided for 
> >>> nfs-ganesha by gluster using pacemaker and corosync. 
> >>> 
> >>> That feature was removed in glusterfs 3.10 in favour for common HA 
> >>> project "Storhaug". Even Storhaug was not progressed 
> >>> 
> >>> much from last two years and current development is in halt state, hence 
> >>> planning to restore old HA ganesha solution back 
> >>> 
> >>> to gluster code repository with some improvement and targetting for next 
> >>> gluster release 7. 
> >>> 
> >>> I have opened up an issue [1] with details and posted initial set of 
> >>> patches [2] 
> >>> 
> >>> Please share your thoughts on the same 
> >>> 
> >>> Regards, 
> >>> 
> >>> Jiffin 
> >>> 
> >>> [1] https://github.com/gluster/glusterfs/issues/663 
> >>> 
> >>> [2] 
> >>> https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)
> >>>  
> >> 
> >> -- 
> >> Sent from my Android device with K-9 Mail. All tyopes are thumb related 
> >> and reflect authenticity. 

Re: [Gluster-devel] Failure during “git review –setup”

2019-07-18 Thread Saleh, Amgad (Nokia - US/Naperville)

Looking at the document 
https://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/
I ran ./rfc.sh and got the following:

[ahsaleh@null-d4bed9857109 glusterfs]$ ./rfc.sh
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100  4780  100  47800 0438  0  0:00:10  0:00:10 --:--:--  1087
[bugfix-pureIPv6 d14e550] Return IPv6 when exists and not -1
1 file changed, 10 insertions(+), 5 deletions(-)
Commit: "Return IPv6 when exists and not -1"
Reference (Bugzilla ID or Github Issue ID): amgads
Invalid reference ID (amgads)!!!
Commit: "Return IPv6 when exists and not -1"
Reference (Bugzilla ID or Github Issue ID): amgads
Invalid reference ID (amgads)!!!
Commit: "Return IPv6 when exists and not -1"
Reference (Bugzilla ID or Github Issue ID): 677
Select yes '(y)' if this patch fixes the bug/feature completely,
or is the last of the patchset which brings feature (Y/n): y
[detached HEAD 8a5bb4a] Return IPv6 when exists and not -1
1 file changed, 10 insertions(+), 5 deletions(-)
Successfully rebased and updated refs/heads/bugfix-pureIPv6.
./rfc.sh: line 287: clang-format: command not found
[ahsaleh@null-d4bed9857109 glusterfs]$

Is the code submitted for review? Please advise what’s needed next – this is my 
first time using the process.

Submit for review
To submit your change for review, run the rfc.sh script,
$ ./rfc.sh


From: Saleh, Amgad (Nokia - US/Naperville)
Sent: Thursday, May 23, 2019 11:19 PM
To: gluster-devel@gluster.org
Subject: Failure during “git review –setup”


Hi:

After submitting a Pull Request at GitHub, I got the message about the gerrit 
review (attached).
I followed the steps and added a public key, but failed at the “git review 
--setup” step – please see the errors below.

Your urgent support is appreciated!

Regards,
Amgad Saleh
Nokia.
……
[ahsaleh@null-d4bed9857109 glusterfs]$ git review --setup
Problem running 'git remote update gerrit'
Fetching gerrit
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
error: Could not fetch gerrit
Problems encountered installing commit-msg hook
The following command failed with exit code 1
"scp 
ahsa...@review.gluster.org:hooks/commit-msg
 .git/hooks/commit-msg"
---
Permission denied (publickey).
---
[ahsaleh@null-d4bed9857109 glusterfs]$

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Invitation: End Date for Feature/Scope proposal for Release-7 @ Wed May 22, 2019 11am - 12pm (IST) (gluster-devel@gluster.org)

2019-07-18 Thread rkothiya
BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:REQUEST
BEGIN:VEVENT
DTSTART:20190522T053000Z
DTEND:20190522T063000Z
DTSTAMP:20190515T065614Z
ORGANIZER;CN=Gluster Rlease:mailto:redhat.com_r3hootcr6t1v4ag631ocgseshg@gr
 oup.calendar.google.com
UID:3u1ets2h596u5dr3cu8u1d4...@google.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=gluster-devel@gluster.org;X-NUM-GUESTS=0:mailto:gluster-devel@glust
 er.org
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=maintain...@gluster.org;X-NUM-GUESTS=0:mailto:maintainers@gluster.o
 rg
X-MICROSOFT-CDO-OWNERAPPTID:963603482
CREATED:20190515T061308Z
DESCRIPTION:This is just a reminder/notification for announcing that\, 22-M
 ay-2019 is the end date for feature/scope proposal for the Release-7\n\n-::
 ~:~::~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:
 ~::~:~::-\nPlease do not edit this section of the description.\n\nView your
  event at https://www.google.com/calendar/event?action=VIEW=M3UxZXRzMmg
 1OTZ1NWRyM2N1OHUxZDQxbDUgZ2x1c3Rlci1kZXZlbEBnbHVzdGVyLm9yZw=NjMjcmVkaGF
 0LmNvbV9yM2hvb3RjcjZ0MXY0YWc2MzFvY2dzZXNoZ0Bncm91cC5jYWxlbmRhci5nb29nbGUuY2
 9tMzgzZGUyNzQ1ZTg2NDg2NmU0ODliMzkyMWY2OGY1YmFmOGViODNlNQ=Asia%2FKolkata
 =en=1.\n-::~:~::~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~
 :~:~:~:~:~:~:~:~:~::~:~::-
LAST-MODIFIED:20190515T065612Z
LOCATION:
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:End Date for Feature/Scope proposal for Release-7
TRANSP:OPAQUE
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:This is an event reminder
TRIGGER:-P1D
END:VALARM
BEGIN:VALARM
ACTION:EMAIL
DESCRIPTION:This is an event reminder
SUMMARY:Alarm notification
ATTENDEE:mailto:gluster-devel@gluster.org
TRIGGER:-P4D
END:VALARM
BEGIN:VALARM
ACTION:EMAIL
DESCRIPTION:This is an event reminder
SUMMARY:Alarm notification
ATTENDEE:mailto:gluster-devel@gluster.org
TRIGGER:-P2D
END:VALARM
END:VEVENT
END:VCALENDAR


invite.ics
Description: application/ics
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] [Gluster-Maintainers] Release 7: Kick off!

2019-07-18 Thread Rinku Kothiya
It is time to start some activities for release-7.

## Scope
It is time to collect and determine the scope for the release. As usual,
please send the features/enhancements that you are working towards
maturing for this release to the devel list, and mark/open the
corresponding github issue with the required milestone [1].

## Schedule

Currently, planning the schedule backwards, here's what we have:
- Announcement: Week of Aug 4th, 2019
- GA tagging: Aug-02-2019
- RC1: On demand before GA
- RC0: July-03-2019
- Late features cut-off: Week of June-24th, 2019
- Branching (feature cutoff date): June-17-2019
  (~45 days prior to branching)
- Feature/scope proposal for the release (end date): May-22-2019


Regards
Rinku Kothiya
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] gluster-devel, rkoth...@redhat.com recommends that you use Google Calendar

2019-07-18 Thread rkothiya
I've been using Google Calendar to organize my calendar, find interesting  
events, and share my schedule with friends and family members. I thought  
you might like to use Google Calendar, too.


rkoth...@redhat.com recommends that you use Google Calendar.

To accept this invitation and register for an account, please visit:  
[https://www.google.com/calendar/render?cid=cmVkaGF0LmNvbV9yM2hvb3RjcjZ0MXY0YWc2MzFvY2dzZXNoZ0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t=gluster-devel@gluster.org:885485af2a01c0e13938156dbb3531c58af68e52]


Google Calendar helps you keep track of everything going on in
your life and those of the important people around you, and also help
you discover interesting things to do with your time.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA clustersolution back to gluster code as gluster-7 feature

2019-07-18 Thread Strahil
Keep in mind that corosync/pacemaker is hard for new admins/users to set up 
properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only 
hosting bricks but also other workloads.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith 
resources 'shoot the other node in the head'.
I would be happy to see something easy to deploy (let's say 
'cluster.enable-ha-ganesha true'), with gluster bringing up the floating IPs 
and taking care of the NFS locks, so that no disruption is felt by the clients.

Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney wrote:
>
> +1!
> I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
> instead of fuse mounts. Having an integrated, designed in process to 
> coordinate multiple nodes into an HA cluster will very welcome.
>
> On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
> wrote:
>>
>> Hi all,
>>
>> Some of you folks may be familiar with HA solution provided for nfs-ganesha 
>> by gluster using pacemaker and corosync.
>>
>> That feature was removed in glusterfs 3.10 in favour for common HA project 
>> "Storhaug". Even Storhaug was not progressed
>>
>> much from last two years and current development is in halt state, hence 
>> planning to restore old HA ganesha solution back
>>
>> to gluster code repository with some improvement and targetting for next 
>> gluster release 7.
>>
>> I have opened up an issue [1] with details and posted initial set of patches 
>> [2]
>>
>> Please share your thoughts on the same
>>
>> Regards,
>>
>> Jiffin  
>>
>> [1] https://github.com/gluster/glusterfs/issues/663
>>
>> [2] 
>> https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)
>
>
> -- 
> Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
> reflect authenticity.___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-07-18 Thread Jim Kinney
+1!
I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
instead of fuse mounts. Having an integrated, designed-in process to coordinate 
multiple nodes into an HA cluster will be very welcome.

On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
wrote:
>Hi all,
>
>Some of you folks may be familiar with HA solution provided for 
>nfs-ganesha by gluster using pacemaker and corosync.
>
>That feature was removed in glusterfs 3.10 in favour of the common HA 
>project "Storhaug". However, Storhaug has not progressed
>
>much over the last two years and its development is currently halted,
>so we are 
>planning to restore the old HA ganesha solution
>
>to the gluster code repository, with some improvements, targeting the
>next 
>gluster release (7).
>
>I have opened up an issue [1] with details and posted initial set of 
>patches [2]
>
>Please share your thoughts on the same
>
>Regards,
>
>Jiffin
>
>[1] https://github.com/gluster/glusterfs/issues/663
>
>
>[2] 
>https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)

-- 
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] gluster-block v0.4 is alive!

2019-07-18 Thread Vlad Kopylov
straight from

./autogen.sh && ./configure && make -j install


CentOS Linux release 7.6.1810 (Core)


May 17 19:13:18 vm2 gluster-blockd[24294]: Error opening log file: No such
file or directory
May 17 19:13:18 vm2 gluster-blockd[24294]: Logging to stderr.
May 17 19:13:18 vm2 gluster-blockd[24294]: [2019-05-17 23:13:18.966992]
CRIT: trying to change logDir from /var/log/gluster-block to
/var/log/gluster-block [at utils.c+495 :]
May 17 19:13:19 vm2 gluster-blockd[24294]: No such path
/backstores/user:glfs
May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service: main process
exited, code=exited, status=1/FAILURE
May 17 19:13:19 vm2 systemd[1]: Unit gluster-blockd.service entered failed
state.
May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service failed.
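A few checks that might narrow this down (only a sketch; paths and unit names
assume the stock tcmu-runner/targetcli install on CentOS 7):

  # the first error suggests the log directory was never created
  mkdir -p /var/log/gluster-block

  # the user:glfs backstore is provided by tcmu-runner's glfs handler;
  # make sure the daemon is running and the handler module is installed
  systemctl status tcmu-runner
  ls /usr/lib64/tcmu-runner/ | grep -i glfs

  # confirm the user-backed backstores are visible to targetcli
  targetcli ls /backstores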



On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever  wrote:

> Hello Gluster folks,
>
> Gluster-block team is happy to announce the v0.4 release [1].
>
> This is the new stable version of gluster-block; lots of new and
> exciting features and interesting bug fixes are made available as part
> of this release.
> Please find the big list of release highlights and notable fixes at [2].
>
> Details about installation can be found in the easy install guide at
> [3]. Find the details about prerequisites and the setup guide at [4].
> If you are a new user, check out the demo video attached to the README
> doc [5], which is a good introduction to the project.
> There are good examples of how to use gluster-block both in the man
> pages [6] and in the test file [7] (also in the README).
>
> gluster-block is part of the Fedora package collection; an updated package
> with release version v0.4 will be made available soon, and
> community-provided packages will soon be available at [8].
>
> Please spend a minute to report any kind of issue that comes to your
> notice with this handy link [9].
> We look forward to your feedback, which will help gluster-block get better!
>
> We would like to thank all our users and contributors for bug filing and
> fixes, as well as the whole team involved in the huge effort of
> pre-release testing.
>
>
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/releases
> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
> [4] https://github.com/gluster/gluster-block#usage
> [5] https://github.com/gluster/gluster-block/blob/master/README.md
> [6] https://github.com/gluster/gluster-block/tree/master/docs
> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
> [8] https://download.gluster.org/pub/gluster/gluster-block/
> [9] https://github.com/gluster/gluster-block/issues/new
>
> Cheers,
> Team Gluster-Block!
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-07-18 Thread Martin Toth
Hi,

there is no healing operation, no peer disconnects, no readonly filesystem. 
Yes, the storage is slow and unavailable for 120 seconds, but why? It's SSD with 
10G networking, and performance is otherwise good.

> you'd have its log on qemu's standard output,

If you mean /var/log/libvirt/qemu/vm.log, there is nothing there. I have been 
looking into this problem for more than a month and have tried everything, but 
can’t find anything. Any more clues or leads?

BR,
Martin

> On 13 May 2019, at 08:55, lemonni...@ulrar.net wrote:
> 
> On Mon, May 13, 2019 at 08:47:45AM +0200, Martin Toth wrote:
>> Hi all,
> 
> Hi
> 
>> 
>> I am running replica 3 on SSDs with 10G networking; everything works OK, but 
>> VMs stored in the Gluster volume occasionally freeze with “Task XY blocked for 
>> more than 120 seconds”.
>> The only solution is to power off (hard) the VM and then boot it up again. I am 
>> unable to SSH in or log in via console; it is probably stuck on some disk 
>> operation. No error/warning logs or messages are stored in the VMs' logs.
>> 
> 
> As far as I know this should be unrelated, I get this during heals
> without any freezes, it just means the storage is slow I think.
> 
>> KVM/Libvirt (qemu) uses libgfapi and a fuse mount to access VM disks on the 
>> replica volume. Can someone advise how to debug this problem or what can 
>> cause these issues? 
>> It’s really annoying; I’ve tried to google everything but nothing came up. 
>> I’ve tried changing the disk driver from virtio-scsi-pci to virtio-blk-pci, but it’s 
>> not related.
>> 
> 
> Any chance your gluster goes readonly? Have you checked your gluster
> logs to see if maybe the nodes lose each other sometimes?
> /var/log/glusterfs
> 
> For libgfapi accesses you'd have its log on qemu's standard output,
> which might contain the actual error at the time of the freeze.
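A minimal way to scan for that (log locations are the defaults; the client/fuse
logs live on the hypervisor, the brick logs on the storage nodes):

  # look for disconnects or read-only transitions around the freeze time
  grep -iE 'disconnect|read-only|readonly' /var/log/glusterfs/*.log
  grep -iE 'disconnect' /var/log/glusterfs/bricks/*.log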
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-07-18 Thread Andrey Volodin
What is the context from dmesg?

On Mon, May 13, 2019 at 7:33 AM Andrey Volodin 
wrote:

> as per
> https://helpful.knobs-dials.com/index.php/INFO:_task_blocked_for_more_than_120_seconds.
> the informational warning can be suppressed with:
>
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>
> Moreover, as per their website: "*This message is not an error*.
> It is an indication that a program has had to wait for a very long time,
> and what it was doing."
> More reference:
> https://serverfault.com/questions/405210/can-high-load-cause-server-hang-and-error-blocked-for-more-than-120-seconds
>
> Regards,
> Andrei
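For completeness, the equivalent sysctl form of that suppression, persisted
across reboots (it only hides the watchdog message and does not address the
underlying slow I/O; the drop-in file name is arbitrary):

  sysctl -w kernel.hung_task_timeout_secs=0
  echo 'kernel.hung_task_timeout_secs = 0' > /etc/sysctl.d/99-hung-task.conf
  sysctl --system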
>
> On Mon, May 13, 2019 at 7:32 AM Martin Toth  wrote:
>
>> Cache in qemu is none. That should be correct. This is the full command:
>>
>> /usr/bin/qemu-system-x86_64 -name one-312 -S -machine
>> pc-i440fx-xenial,accel=kvm,usb=off -m 4096 -realtime mlock=off -smp
>> 4,sockets=4,cores=1,threads=1 -uuid e95a774e-a594-4e98-b141-9f30a3f848c1
>> -no-user-config -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-one-312/monitor.sock,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
>> -no-shutdown -boot order=c,menu=on,splash-time=3000,strict=on -device
>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>>
>> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
>> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
>> -drive file=/var/lib/one//datastores/116/312/*disk.0*
>> ,format=raw,if=none,id=drive-virtio-disk1,cache=none
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1
>> -drive file=gluster://localhost:24007/imagestore/
>> *7b64d6757acc47a39503f68731f89b8e*
>> ,format=qcow2,if=none,id=drive-scsi0-0-0-0,cache=none
>> -device
>> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
>> -drive file=/var/lib/one//datastores/116/312/*disk.1*
>> ,format=raw,if=none,id=drive-ide0-0-0,readonly=on
>> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
>>
>> -netdev tap,fd=26,id=hostnet0
>> -device 
>> e1000,netdev=hostnet0,id=net0,mac=02:00:5c:f0:e4:39,bus=pci.0,addr=0x3
>> -chardev pty,id=charserial0 -device
>> isa-serial,chardev=charserial0,id=serial0
>> -chardev 
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-one-312/org.qemu.guest_agent.0,server,nowait
>> -device
>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
>> -vnc 0.0.0.0:312,password -device
>> cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
>>
>> I’ve highlighted the disks. The first is the VM context disk (fuse used), the second
>> is SDA (the OS is installed here; libgfapi used), and the third is SWAP (fuse used).
>>
>> Krutika,
>> I will start profiling on the Gluster volumes and wait for the next VM to fail.
>> Then I will attach/send the profiling info once a VM has failed. I
>> suppose this is the correct profiling strategy.
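For reference, the usual profile sequence looks roughly like this ('imagestore'
is the volume visible in the qemu command line above; adjust as needed):

  gluster volume profile imagestore start
  # ...wait for the next VM to hang, then capture the stats...
  gluster volume profile imagestore info > profile-$(date +%F-%H%M).txt
  gluster volume profile imagestore stop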
>>
>> Thanks,
>> BR!
>> Martin
>>
>> On 13 May 2019, at 09:21, Krutika Dhananjay  wrote:
>>
>> Also, what's the caching policy that qemu is using on the affected vms?
>> Is it cache=none? Or something else? You can get this information from the
>> command line of the qemu-kvm process corresponding to your vm in the ps output.
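For example, one way to pull the cache settings out of the running qemu
processes (a generic one-liner, not specific to any tool mentioned here):

  ps -eo args | grep '[q]emu-system' | tr ',' '\n' | grep '^cache='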
>>
>> -Krutika
>>
>> On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay 
>> wrote:
>>
>>> What version of gluster are you using?
>>> Also, can you capture and share volume-profile output for a run where
>>> you manage to recreate this issue?
>>>
>>> https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
>>> Let me know if you have any questions.
>>>
>>> -Krutika
>>>
>>> On Mon, May 13, 2019 at 12:34 PM Martin Toth 
>>> wrote:
>>>
 Hi,

 there is no healing operation, no peer disconnects, no readonly
 filesystem. Yes, the storage is slow and unavailable for 120 seconds, but why?
 It's SSD with 10G networking, and performance is otherwise good.

 > you'd have its log on qemu's standard output,

 If you mean /var/log/libvirt/qemu/vm.log, there is nothing there. I have been
 looking into this problem for more than a month and have tried everything, but
 can’t find anything. Any more clues or leads?

 BR,
 Martin

 > On 13 May 2019, at 08:55, lemonni...@ulrar.net wrote:
 >
 > On Mon, May 13, 2019 at 08:47:45AM +0200, Martin Toth wrote:
 >> Hi all,
 >
 > Hi
 >
 >>
 >> I am running replica 3 on SSDs with 10G networking; everything works
 OK, but VMs stored in the Gluster volume occasionally freeze with “Task XY
 blocked for more than 120 seconds”.
 >> The only solution is to power off (hard) the VM and then boot it up again. I
 am unable to SSH in or log in via console; it is probably stuck on some
 disk operation. No error/warning logs or messages are stored in the VMs' logs.
 >>
 >
 > As far as I know this 

Re: [Gluster-devel] [Gluster-users] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-07-18 Thread Harold Miller
Amar,

Thanks for the clarification.  I'll go climb back into my cave. :)

Harold

On Fri, Apr 26, 2019 at 9:20 AM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

>
>
> On Fri, Apr 26, 2019 at 6:27 PM Kaleb Keithley 
> wrote:
>
>>
>>
>> On Fri, Apr 26, 2019 at 8:21 AM Harold Miller  wrote:
>>
>>> Has Red Hat security cleared the Slack systems for confidential /
>>> customer information?
>>>
>>> If not, it will make it difficult for support to collect/answer
>>> questions.
>>>
>>
>> I'm pretty sure Amar meant it as a replacement for the freenode #gluster and
>> #gluster-dev channels, given that he sent this to the public gluster
>> mailing lists @gluster.org. Nobody should have even been posting
>> confidential and/or customer information to any of those lists or channels.
>> And AFAIK nobody ever has.
>>
>>
> Yep, I am only talking about IRC (freenode: #gluster, #gluster-dev,
> etc.).  Also, I am not saying we are 'replacing IRC'. Gluster as a project
> started in the pre-Slack era, and we have many users who prefer to stay on IRC.
> So, for now, there is no pressure to make a statement calling the Slack channel
> a 'replacement' for IRC.
>
>
>> Amar, would you like to clarify which IRC channels you meant?
>>
>>
>
> Thanks Kaleb. I was a bit confused about why this concern came up in this
> group.
>
>
>
>>
>>> On Fri, Apr 26, 2019 at 6:00 AM Scott Worthington <
>>> scott.c.worthing...@gmail.com> wrote:
>>>
 Hello, are you not _BOTH_ Red Hat FTEs or contractors?


> Yes! But we come from very different internal teams.
>
> Michael supports the Gluster (the project) team's infrastructure needs, and
> has valid concerns from his perspective :-) I, on the other hand, care
> more about code, users, and how to make sure we are up to date with other
> technologies and communities, from the engineering viewpoint.
>
>
>> On Fri, Apr 26, 2019, 3:16 AM Michael Scherer 
 wrote:

> On Friday, 26 April 2019 at 13:24 +0530, Amar Tumballi Suryanarayan
> wrote:
> > Hi All,
> >
> > We have wanted to move our official communication channel from IRC to
> > Slack
> > for some time, but couldn't, as we didn't have a proper URL to
> > register. 'gluster' was taken and we didn't know who had it
> > registered.
> > Thanks to constant asking from Satish, the Slack team has now agreed to let
> > us use
> > https://gluster.slack.com and I am happy to invite you all there.
> > (Use this
> > link
> > <
> >
> https://join.slack.com/t/gluster/shared_invite/enQtNjIxMTA1MTk3MDE1LWIzZWZjNzhkYWEwNDdiZWRiOTczMTc4ZjdiY2JiMTc3MDE5YmEyZTRkNzg0MWJiMWM3OGEyMDU2MmYzMTViYTA
> > >
> > to
> > join)
> >
> > Please note that it won't be a replacement for the mailing list, but it can
> > be
> > used by all developers and users for quick communication. Also note
> > that
> > no information there will be 'stored' beyond 10k lines, as we are
> > using the
> > free version of Slack.
>
> Aren't we concerned about the ToS of Slack? Last time I read them,
> they were quite scary (for example, if you use your corporate email, you
> engage your employer, and that wasn't the worst part).
>
> Also, to anticipate the question, my employer's Legal department told me
> not to set up a bridge between IRC and Slack, due to the said ToS.
>
>
> Again, reiterating here: we are not planning to use any bridges from IRC to
> Slack. I re-read the Slack API terms and conditions, and they make sense.
> They surely don't want us to build another Slack, or to abuse Slack with too
> many API requests made for collecting logs.
>
> Currently, to start with, we are not adding any bots (other than the GitHub
> bot). Hopefully, that will keep us within the proper usage guidelines.
>
> -Amar
>
>
>> --
> Michael Scherer
> Sysadmin, Community Infrastructure
>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

 ___
 Gluster-users mailing list
 gluster-us...@gluster.org
 https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>>
>>> --
>>>
>>> HAROLD MILLER
>>>
>>> ASSOCIATE MANAGER, ENTERPRISE CLOUD SUPPORT
>>>
>>> Red Hat
>>>
>>> 
>>>
>>> har...@redhat.com   T: (650)-254-4346
>>> 
>>> TRIED. TESTED. TRUSTED. 
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>
> --
> Amar Tumballi (amarts)
>


-- 

HAROLD MILLER

ASSOCIATE MANAGER, ENTERPRISE CLOUD SUPPORT

Red Hat



har...@redhat.com   T: (650)-254-4346

TRIED. TESTED. TRUSTED. 

Re: [Gluster-devel] [Gluster-users] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-07-18 Thread Harold Miller
Has Red Hat security cleared the Slack systems for confidential / customer
information?

If not, it will make it difficult for support to collect/answer questions.


Harold Miller, Associate Manager,
Red Hat, Enterprise Cloud Support
Desk - US (650) 254-4346



On Fri, Apr 26, 2019 at 6:00 AM Scott Worthington <
scott.c.worthing...@gmail.com> wrote:

> Hello, are you not _BOTH_ Red Hat FTEs or contractors?
>
> On Fri, Apr 26, 2019, 3:16 AM Michael Scherer  wrote:
>
>> On Friday, 26 April 2019 at 13:24 +0530, Amar Tumballi Suryanarayan
>> wrote:
>> > Hi All,
>> >
>> > We have wanted to move our official communication channel from IRC to
>> > Slack
>> > for some time, but couldn't, as we didn't have a proper URL to
>> > register. 'gluster' was taken and we didn't know who had it
>> > registered.
>> > Thanks to constant asking from Satish, the Slack team has now agreed to let
>> > us use
>> > https://gluster.slack.com and I am happy to invite you all there.
>> > (Use this
>> > link
>> > <
>> >
>> https://join.slack.com/t/gluster/shared_invite/enQtNjIxMTA1MTk3MDE1LWIzZWZjNzhkYWEwNDdiZWRiOTczMTc4ZjdiY2JiMTc3MDE5YmEyZTRkNzg0MWJiMWM3OGEyMDU2MmYzMTViYTA
>> > >
>> > to
>> > join)
>> >
>> > Please note that it won't be a replacement for the mailing list, but it can
>> > be
>> > used by all developers and users for quick communication. Also note
>> > that
>> > no information there will be 'stored' beyond 10k lines, as we are
>> > using the
>> > free version of Slack.
>>
>> Aren't we concerned about the ToS of Slack? Last time I read them,
>> they were quite scary (for example, if you use your corporate email, you
>> engage your employer, and that wasn't the worst part).
>>
>> Also, to anticipate the question, my employer's Legal department told me
>> not to set up a bridge between IRC and Slack, due to the said ToS.
>>
>> --
>> Michael Scherer
>> Sysadmin, Community Infrastructure
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users



-- 

HAROLD MILLER

ASSOCIATE MANAGER, ENTERPRISE CLOUD SUPPORT

Red Hat



har...@redhat.com   T: (650)-254-4346

TRIED. TESTED. TRUSTED. 
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-07-18 Thread Scott Worthington
Hello, are you not _BOTH_ Red Hat FTEs or contractors?

On Fri, Apr 26, 2019, 3:16 AM Michael Scherer  wrote:

> On Friday, 26 April 2019 at 13:24 +0530, Amar Tumballi Suryanarayan
> wrote:
> > Hi All,
> >
> > We have wanted to move our official communication channel from IRC to
> > Slack
> > for some time, but couldn't, as we didn't have a proper URL to
> > register. 'gluster' was taken and we didn't know who had it
> > registered.
> > Thanks to constant asking from Satish, the Slack team has now agreed to let
> > us use
> > https://gluster.slack.com and I am happy to invite you all there.
> > (Use this
> > link
> > <
> >
> https://join.slack.com/t/gluster/shared_invite/enQtNjIxMTA1MTk3MDE1LWIzZWZjNzhkYWEwNDdiZWRiOTczMTc4ZjdiY2JiMTc3MDE5YmEyZTRkNzg0MWJiMWM3OGEyMDU2MmYzMTViYTA
> > >
> > to
> > join)
> >
> > Please note that it won't be a replacement for the mailing list, but it can
> > be
> > used by all developers and users for quick communication. Also note
> > that
> > no information there will be 'stored' beyond 10k lines, as we are
> > using the
> > free version of Slack.
>
> Aren't we concerned about the ToS of Slack? Last time I read them,
> they were quite scary (for example, if you use your corporate email, you
> engage your employer, and that wasn't the worst part).
>
> Also, to anticipate the question, my employer's Legal department told me
> not to set up a bridge between IRC and Slack, due to the said ToS.
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure
>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel