Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread mabi
‐‐‐ Original Message ‐‐‐
On Tuesday, March 3, 2020 6:11 AM, Hari Gowtham  wrote:

> I checked on the backport and found that this patch hasn't yet been 
> backported to any of the release branches.
> If this is the fix, it would be great to have them backported for the next 
> release.

Thanks to everyone who responded to my post. Now I wanted to ask whether the fix for
this bug will also be backported to GlusterFS 5, and if so, whether it will be
available in the next GlusterFS release, 5.13?



Community Meeting Calendar:

Schedule -
Every Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Felix Kölzow

Hi Strahil,


> can you test /on non-prod system/ the latest minor version of gluster v6 ?

On the client side I can update the version to the latest minor version, but the
server still remains on v6.0. Actually, we do not have a non-prod gluster system,
so it will take some time to do this.
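
A quick sketch of how the two sides can be compared, in case it helps (these are
generic commands, nothing specific to our volumes is assumed):

# On a client: version of the installed FUSE client
glusterfs --version | head -n1

# On a server: installed version plus the cluster-wide op-version, which stays
# at the lowest common level until it is explicitly bumped
gluster --version | head -n1
gluster volume get all cluster.op-version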

Regards,

Felix


On 02/03/2020 23:25, Strahil Nikolov wrote:

Hi Felix,

can you test /on non-prod system/ the latest minor version of gluster v6 ?

Best Regards,
Strahil Nikolov


On Monday, March 2, 2020 at 21:43:48 GMT+2, Felix Kölzow wrote:





Dear Community,


This message appears for me too, on GlusterFS 6.0.

Before that, we had GlusterFS 3.12 and the client log file was almost empty.
After upgrading to 6.0 we are facing these log entries.

Regards,

Felix

On 02/03/2020 15:17, mabi wrote:

Hello,

On the FUSE clients of my GlusterFS 5.11 two-node replica + arbiter setup I see quite 
a lot of the following error message repeatedly:

[2020-03-02 14:12:40.297690] E [fuse-bridge.c:219:check_and_dump_fuse_W] (--> 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13e)[0x7f93d5c13cfe] (--> 
/usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x789a)[0x7f93d331989a] (--> 
/usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x7c33)[0x7f93d3319c33] (--> 
/lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f93d4e8f4a4] (--> 
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] ) 0-glusterfs-fuse: writing to 
fuse device failed: No such file or directory

Both the server and clients are Debian 9.

What exactly does this error message mean? Is it normal, or what should I do to
fix it?

Regards,
Mabi











Re: [Gluster-users] Geo-replication

2020-03-02 Thread Strahil Nikolov
On March 3, 2020 4:13:38 AM GMT+02:00, David Cunningham 
 wrote:
>Hello,
>
>Thanks for that. When we re-tried with push-pem from cafs10 (on the
>A/master cluster) it failed with "Unable to mount and fetch slave
>volume
>details." and in the logs we see:
>
>[2020-03-03 02:07:42.614911] E
>[name.c:258:af_inet_client_get_remote_sockaddr] 0-gvol0-client-0: DNS
>resolution failed on host nvfs10.local
>[2020-03-03 02:07:42.638824] E
>[name.c:258:af_inet_client_get_remote_sockaddr] 0-gvol0-client-1: DNS
>resolution failed on host nvfs20.local
>[2020-03-03 02:07:42.664493] E
>[name.c:258:af_inet_client_get_remote_sockaddr] 0-gvol0-client-2: DNS
>resolution failed on host nvfs30.local
>
>These .local addresses are the LAN addresses that B/slave nodes nvfs10,
>nvfs20, and nvfs30 replicate with. It seems that the A/master needs to
>be
>able to contact those addresses. Is that right? If it is then we'll
>need to
>re-do the B cluster to replicate using publicly accessible IP addresses
>instead of their LAN.
>
>Thank you.
>
>
>On Mon, 2 Mar 2020 at 20:53, Aravinda VK  wrote:
>
>> Looks like setup issue to me. Copying SSH keys manually is not
>required.
>>
>> Command prefix is required while adding to authorized_keys file in
>each
>> remote nodes. That will not be available if ssh keys are added
>manually.
>>
>> Geo-rep specifies /nonexisting/gsyncd in the command to make sure it
>> connects via the actual command specified in authorized_keys file, in
>your
>> case Geo-replication is actually looking for gsyncd command in
>> /nonexisting/gsyncd path.
>>
>> Please try with push-pem option during Geo-rep create command.
>>
>> —
>> regards
>> Aravinda Vishwanathapura
>> https://kadalu.io
>>
>>
>> On 02-Mar-2020, at 6:03 AM, David Cunningham
>
>> wrote:
>>
>> Hello,
>>
>> We've set up geo-replication but it isn't actually syncing. Scenario
>is
>> that we have two GFS clusters. Cluster A has nodes cafs10, cafs20,
>and
>> cafs30, replicating with each other over a LAN. Cluster B has nodes
>nvfs10,
>> nvfs20, and nvfs30 also replicating with each other over a LAN. We
>are
>> geo-replicating data from the A cluster to the B cluster over the
>internet.
>> SSH key access is set up, allowing all the A nodes password-less
>access to
>> root on nvfs10
>>
>> Geo-replication was set up using these commands, run on cafs10:
>>
>> gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 create
>> ssh-port 8822 no-verify
>> gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 config
>> remote-gsyncd /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
>> gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 start
>>
>> However after a very short period of the status being
>"Initializing..."
>> the status then sits on "Passive":
>>
>> # gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 status
>> MASTER NODE    MASTER VOL    MASTER BRICK                        SLAVE USER    SLAVE                        SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
>> -----------------------------------------------------------------------------------------------------------------------------------------------------------------
>> cafs10         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    nvfs30.local    Passive    N/A             N/A
>> cafs30         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    N/A             Created    N/A             N/A
>> cafs20         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    N/A             Created    N/A             N/A
>>
>> So my questions are:
>> 1. Why does the status on cafs10 mention "nvfs30.local"? That's the
>LAN
>> address that nvfs10 replicates with nvfs30 using. It's not accessible
>from
>> the A cluster, and I didn't use it when configuring geo-replication.
>> 2. Why does geo-replication sit in Passive status?
>>
>> Thanks very much for any assistance.
>>
>>
>> On Tue, 25 Feb 2020 at 15:46, David Cunningham
>
>> wrote:
>>
>>> Hi Aravinda and Sunny,
>>>
>>> Thank you for the replies. We have 3 replicating nodes on the master
>>> side, and want to geo-replicate their data to the remote slave side.
>As I
>>> understand it if the master node which had the geo-replication
>create
>>> command run goes down then another node will take over pushing
>updates to
>>> the remote slave. Is that right?
>>>
>>> We have already taken care of adding all master node's SSH keys to
>the
>>> remote slave's authorized_keys externally, so won't include the
>push-pem
>>> part of the create command.
>>>
>>> Mostly I wanted to confirm the geo-replication behaviour on the
>>> replicating master nodes if one of them goes down.
>>>
>>> Thank you!
>>>
>>>
>>> On Tue, 25 Feb 2020 at 14:32, Aravinda VK 
>wrote:
>>>
 Hi David,


 On 25-Feb-2020, at 3:45 AM, David Cunningham
>
 wrote:

 Hello,

 I've a couple of questions on geo-replication

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Hari Gowtham
Hi Amar,

I checked on the backport and found that this patch hasn't yet been
backported to any of the release branches.
If this is the fix, it would be great to have it backported for the next
release.
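
For anyone who wants to verify the backport status themselves, a rough sketch
(assuming a local clone of https://github.com/gluster/glusterfs with the release
branches fetched; the SHA is the commit Amar linked below):

# The upstream SHA only shows up on branches it was merged to directly:
git branch -r --contains 1166df1920dd9b2bd5fce53ab49d27117db40238

# Backports are cherry-picks with new SHAs, so look at recent commits touching
# the fuse bridge on a release branch instead:
git log --oneline origin/release-6 -- xlators/mount/fuse/src/fuse-bridge.c | head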

On Tue, Mar 3, 2020 at 7:22 AM Amar Tumballi  wrote:

> This is not normal at all.
>
> I guess the fix was
> https://github.com/gluster/glusterfs/commit/1166df1920dd9b2bd5fce53ab49d27117db40238
>
> I didn't check if its backported to other release branches.
>
> Csaba, Rinku, hari can you please confirm on this?
>
> Regards,
> Amar
>
>
> On Tue, Mar 3, 2020 at 4:25 AM Danny Lee  wrote:
>
>> This was happening for us on our 3-node replicated server.  For one day,
>> the log amassed to 3GBs.  Over a week, it took over 15GBs.
>>
>> Our gluster version is 6.5.
>>
>> On Mon, Mar 2, 2020, 5:26 PM Strahil Nikolov 
>> wrote:
>>
>>> Hi Felix,
>>>
>>> can you test /on non-prod system/ the latest minor version of gluster v6
>>> ?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>> On Monday, March 2, 2020 at 21:43:48 GMT+2, Felix Kölzow <felix.koel...@gmx.de> wrote:
>>>
>>>
>>>
>>>
>>>
>>> Dear Community,
>>>
>>>
>>> this message appears for me to on GlusterFS 6.0.
>>>
>>> Before that, we had GlusterFS 3.12 and the client log-file was almost
>>> empty. After
>>>
>>> upgrading to 6.0 we are facing this log entries.
>>>
>>> Regards,
>>>
>>> Felix
>>>
>>> On 02/03/2020 15:17, mabi wrote:
>>> > Hello,
>>> >
>>> > On the FUSE clients of my GlusterFS 5.11 two-node replica+arbitrer I
>>> see quite a lot of the following error message repeatedly:
>>> >
>>> > [2020-03-02 14:12:40.297690] E
>>> [fuse-bridge.c:219:check_and_dump_fuse_W] (-->
>>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13e)[0x7f93d5c13cfe]
>>> (-->
>>> /usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x789a)[0x7f93d331989a]
>>> (-->
>>> /usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x7c33)[0x7f93d3319c33]
>>> (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f93d4e8f4a4] (-->
>>> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] )
>>> 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
>>> >
>>> > Both the server and clients are Debian 9.
>>> >
>>> > What exactly does this error message mean? And is it normal? or what
>>> should I do to fix that?
>>> >
>>> > Regards,
>>> > Mabi


-- 
Regards,
Hari Gowtham.






Re: [Gluster-users] Geo-replication

2020-03-02 Thread David Cunningham
Hello,

Thanks for that. When we re-tried with push-pem from cafs10 (on the
A/master cluster) it failed with "Unable to mount and fetch slave volume
details." and in the logs we see:

[2020-03-03 02:07:42.614911] E
[name.c:258:af_inet_client_get_remote_sockaddr] 0-gvol0-client-0: DNS
resolution failed on host nvfs10.local
[2020-03-03 02:07:42.638824] E
[name.c:258:af_inet_client_get_remote_sockaddr] 0-gvol0-client-1: DNS
resolution failed on host nvfs20.local
[2020-03-03 02:07:42.664493] E
[name.c:258:af_inet_client_get_remote_sockaddr] 0-gvol0-client-2: DNS
resolution failed on host nvfs30.local

These .local addresses are the LAN addresses that B/slave nodes nvfs10,
nvfs20, and nvfs30 replicate with. It seems that the A/master needs to be
able to contact those addresses. Is that right? If it is then we'll need to
re-do the B cluster to replicate using publicly accessible IP addresses
instead of their LAN.
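
If re-doing the B cluster turns out to be too disruptive, one alternative might be
to make those names resolvable from the A/master nodes instead; a minimal sketch,
with placeholder addresses (203.0.113.x) that would have to be IPs the master side
can actually reach the bricks on:

# On each A/master node:
cat >> /etc/hosts <<'EOF'
203.0.113.10  nvfs10.local
203.0.113.20  nvfs20.local
203.0.113.30  nvfs30.local
EOF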

Thank you.


On Mon, 2 Mar 2020 at 20:53, Aravinda VK  wrote:

> Looks like setup issue to me. Copying SSH keys manually is not required.
>
> Command prefix is required while adding to authorized_keys file in each
> remote nodes. That will not be available if ssh keys are added manually.
>
> Geo-rep specifies /nonexisting/gsyncd in the command to make sure it
> connects via the actual command specified in authorized_keys file, in your
> case Geo-replication is actually looking for gsyncd command in
> /nonexisting/gsyncd path.
>
> Please try with push-pem option during Geo-rep create command.
>
> —
> regards
> Aravinda Vishwanathapura
> https://kadalu.io
>
>
> On 02-Mar-2020, at 6:03 AM, David Cunningham 
> wrote:
>
> Hello,
>
> We've set up geo-replication but it isn't actually syncing. Scenario is
> that we have two GFS clusters. Cluster A has nodes cafs10, cafs20, and
> cafs30, replicating with each other over a LAN. Cluster B has nodes nvfs10,
> nvfs20, and nvfs30 also replicating with each other over a LAN. We are
> geo-replicating data from the A cluster to the B cluster over the internet.
> SSH key access is set up, allowing all the A nodes password-less access to
> root on nvfs10
>
> Geo-replication was set up using these commands, run on cafs10:
>
> gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 create
> ssh-port 8822 no-verify
> gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 config
> remote-gsyncd /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
> gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 start
>
> However after a very short period of the status being "Initializing..."
> the status then sits on "Passive":
>
> # gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 status
> MASTER NODE    MASTER VOL    MASTER BRICK                        SLAVE USER    SLAVE                        SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------
> cafs10         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    nvfs30.local    Passive    N/A             N/A
> cafs30         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    N/A             Created    N/A             N/A
> cafs20         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    N/A             Created    N/A             N/A
>
> So my questions are:
> 1. Why does the status on cafs10 mention "nvfs30.local"? That's the LAN
> address that nvfs10 replicates with nvfs30 using. It's not accessible from
> the A cluster, and I didn't use it when configuring geo-replication.
> 2. Why does geo-replication sit in Passive status?
>
> Thanks very much for any assistance.
>
>
> On Tue, 25 Feb 2020 at 15:46, David Cunningham 
> wrote:
>
>> Hi Aravinda and Sunny,
>>
>> Thank you for the replies. We have 3 replicating nodes on the master
>> side, and want to geo-replicate their data to the remote slave side. As I
>> understand it if the master node which had the geo-replication create
>> command run goes down then another node will take over pushing updates to
>> the remote slave. Is that right?
>>
>> We have already taken care of adding all master node's SSH keys to the
>> remote slave's authorized_keys externally, so won't include the push-pem
>> part of the create command.
>>
>> Mostly I wanted to confirm the geo-replication behaviour on the
>> replicating master nodes if one of them goes down.
>>
>> Thank you!
>>
>>
>> On Tue, 25 Feb 2020 at 14:32, Aravinda VK  wrote:
>>
>>> Hi David,
>>>
>>>
>>> On 25-Feb-2020, at 3:45 AM, David Cunningham 
>>> wrote:
>>>
>>> Hello,
>>>
>>> I've a couple of questions on geo-replication that hopefully someone can
>>> help with:
>>>
>>> 1. If there are multiple nodes in a cluster on the master side (pushing
>>> updates to the geo-replication slave), which node actually does the
>>> pushing? Does Gluster

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Amar Tumballi
This is not normal at all.

I guess the fix was
https://github.com/gluster/glusterfs/commit/1166df1920dd9b2bd5fce53ab49d27117db40238

I didn't check if it's backported to other release branches.

Csaba, Rinku, Hari, can you please confirm this?

Regards,
Amar


On Tue, Mar 3, 2020 at 4:25 AM Danny Lee  wrote:

> This was happening for us on our 3-node replicated server.  For one day,
> the log amassed to 3GBs.  Over a week, it took over 15GBs.
>
> Our gluster version is 6.5.
>
> On Mon, Mar 2, 2020, 5:26 PM Strahil Nikolov 
> wrote:
>
>> Hi Felix,
>>
>> can you test /on non-prod system/ the latest minor version of gluster v6 ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>> On Monday, March 2, 2020 at 21:43:48 GMT+2, Felix Kölzow <felix.koel...@gmx.de> wrote:
>>
>>
>>
>>
>>
>> Dear Community,
>>
>>
>> this message appears for me to on GlusterFS 6.0.
>>
>> Before that, we had GlusterFS 3.12 and the client log-file was almost
>> empty. After
>>
>> upgrading to 6.0 we are facing this log entries.
>>
>> Regards,
>>
>> Felix
>>
>> On 02/03/2020 15:17, mabi wrote:
>> > Hello,
>> >
>> > On the FUSE clients of my GlusterFS 5.11 two-node replica+arbitrer I
>> see quite a lot of the following error message repeatedly:
>> >
>> > [2020-03-02 14:12:40.297690] E
>> [fuse-bridge.c:219:check_and_dump_fuse_W] (-->
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13e)[0x7f93d5c13cfe]
>> (-->
>> /usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x789a)[0x7f93d331989a]
>> (-->
>> /usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x7c33)[0x7f93d3319c33]
>> (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f93d4e8f4a4] (-->
>> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] )
>> 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
>> >
>> > Both the server and clients are Debian 9.
>> >
>> > What exactly does this error message mean? And is it normal? or what
>> should I do to fix that?
>> >
>> > Regards,
>> > Mabi


Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Danny Lee
This was happening for us on our 3-node replicated server. In one day the log
grew to 3 GB; over a week it took up more than 15 GB.

Our gluster version is 6.5.
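
Until a fixed build is in place, rotating the client log aggressively at least
keeps it from filling the disk; a rough sketch of a logrotate drop-in (log path
assumed to be the default /var/log/glusterfs, and the packages may already ship
their own rotation config):

cat > /etc/logrotate.d/glusterfs-client <<'EOF'
/var/log/glusterfs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
EOF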

On Mon, Mar 2, 2020, 5:26 PM Strahil Nikolov  wrote:

> Hi Felix,
>
> can you test /on non-prod system/ the latest minor version of gluster v6 ?
>
> Best Regards,
> Strahil Nikolov
>
>
> On Monday, March 2, 2020 at 21:43:48 GMT+2, Felix Kölzow <felix.koel...@gmx.de> wrote:
>
>
>
>
>
> Dear Community,
>
>
> this message appears for me to on GlusterFS 6.0.
>
> Before that, we had GlusterFS 3.12 and the client log-file was almost
> empty. After
>
> upgrading to 6.0 we are facing this log entries.
>
> Regards,
>
> Felix
>
> On 02/03/2020 15:17, mabi wrote:
> > Hello,
> >
> > On the FUSE clients of my GlusterFS 5.11 two-node replica+arbitrer I see
> quite a lot of the following error message repeatedly:
> >
> > [2020-03-02 14:12:40.297690] E [fuse-bridge.c:219:check_and_dump_fuse_W]
> (-->
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13e)[0x7f93d5c13cfe]
> (-->
> /usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x789a)[0x7f93d331989a]
> (-->
> /usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x7c33)[0x7f93d3319c33]
> (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f93d4e8f4a4] (-->
> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] )
> 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
> >
> > Both the server and clients are Debian 9.
> >
> > What exactly does this error message mean? And is it normal? or what
> should I do to fix that?
> >
> > Regards,
> > Mabi


Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Strahil Nikolov
Hi Felix,

can you test /on non-prod system/ the latest minor version of gluster v6 ?

Best Regards,
Strahil Nikolov


On Monday, March 2, 2020 at 21:43:48 GMT+2, Felix Kölzow wrote:





Dear Community,


This message appears for me too, on GlusterFS 6.0.

Before that, we had GlusterFS 3.12 and the client log file was almost empty.
After upgrading to 6.0 we are facing these log entries.

Regards,

Felix

On 02/03/2020 15:17, mabi wrote:
> Hello,
>
> On the FUSE clients of my GlusterFS 5.11 two-node replica+arbitrer I see 
> quite a lot of the following error message repeatedly:
>
> [2020-03-02 14:12:40.297690] E [fuse-bridge.c:219:check_and_dump_fuse_W] (--> 
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13e)[0x7f93d5c13cfe]
>  (--> 
> /usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x789a)[0x7f93d331989a]
>  (--> 
> /usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x7c33)[0x7f93d3319c33]
>  (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f93d4e8f4a4] (--> 
> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] ) 
> 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
>
> Both the server and clients are Debian 9.
>
> What exactly does this error message mean? And is it normal? or what should I 
> do to fix that?
>
> Regards,
> Mabi


Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Felix Kölzow

Dear Community,


This message appears for me too, on GlusterFS 6.0.

Before that, we had GlusterFS 3.12 and the client log file was almost empty.
After upgrading to 6.0 we are facing these log entries.

Regards,

Felix

On 02/03/2020 15:17, mabi wrote:

Hello,

On the FUSE clients of my GlusterFS 5.11 two-node replica + arbiter setup I see quite 
a lot of the following error message repeatedly:

[2020-03-02 14:12:40.297690] E [fuse-bridge.c:219:check_and_dump_fuse_W] (--> 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13e)[0x7f93d5c13cfe] (--> 
/usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x789a)[0x7f93d331989a] (--> 
/usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x7c33)[0x7f93d3319c33] (--> 
/lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f93d4e8f4a4] (--> 
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] ) 0-glusterfs-fuse: writing to 
fuse device failed: No such file or directory

Both the server and clients are Debian 9.

What exactly does this error message mean? Is it normal, or what should I do to
fix it?

Regards,
Mabi











[Gluster-users] set larger field width for status command

2020-03-02 Thread Brian Andrus

All,

A quick question:

How can I get the "Gluster process" field to be wider when doing a 
"gluster volume status" command?


It word-wraps that field so I end up with 2 lines for some bricks and 1 
for others depending on the length of the path to the brick or hostname...
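
I'm not aware of a CLI switch for the column width, but the --xml output is not
word-wrapped, so it can be reflowed to any width as a stop-gap. A rough sketch,
assuming a volume named gvol0 and the usual <hostname>/<path>/<status> elements
per brick in the XML:

gluster volume status gvol0 --xml \
  | grep -E '<(hostname|path|status)>' \
  | paste - - - \
  | sed 's/<[^>]*>//g' \
  | awk '{ printf "%-20s %-55s %s\n", $1, $2, $3 }'

A proper field-width option would still be nicer, of course.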


Brian Andrus







[Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread mabi
Hello,

On the FUSE clients of my GlusterFS 5.11 two-node replica + arbiter setup I see quite 
a lot of the following error message repeatedly:

[2020-03-02 14:12:40.297690] E [fuse-bridge.c:219:check_and_dump_fuse_W] (--> 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13e)[0x7f93d5c13cfe]
 (--> 
/usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x789a)[0x7f93d331989a]
 (--> 
/usr/lib/x86_64-linux-gnu/glusterfs/5.11/xlator/mount/fuse.so(+0x7c33)[0x7f93d3319c33]
 (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f93d4e8f4a4] (--> 
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] ) 
0-glusterfs-fuse: writing to fuse device failed: No such file or directory

Both the server and clients are Debian 9.

What exactly does this error message mean? Is it normal, or what should I do to
fix it?

Regards,
Mabi











Re: [Gluster-users] Advice on moving volumes/bricks to new servers

2020-03-02 Thread Ronny Adsetts
Ronny Adsetts wrote on 01/03/2020 00:02:
[...]
> 
> When I look at the FUSE-mounted volume, the file is there and correct
> but the file permissions of this and lots of others are screwed. Lots
> of dirs with d- permissions, lots of root:root owned files.

Replying to myself here...

I tried a second add-brick/remove-brick test which completed fine this time and 
all subvolumes were migrated to the new servers. I specifically checked for 
pending heals prior to each remove-brick.

However, I'm seeing some anomalies following the migration. During a 
remove-brick, as a test, I did a "mv linux-5.4.22 linux-5.4.22-orig" and the 
linux-5.4.22-orig folder has issues:

1. We're seeing lots of directories with no permissions ("d---------") and owned by 
root:root. Not all, but more than zero is a worry.

2. The folder has files missing. Diff shows 5717 files. This is obviously 
unexpected.

$ sudo du -s linux-5.4.22 linux-5.4.22-orig2 linux-5.4.22-orig
898898  linux-5.4.22
898898  linux-5.4.22-orig2
830588  linux-5.4.22-orig

$ ls -ald linux-5.4.22*
drwxr-xr-x 24 ronny allusers  4096 Feb 24 07:37 linux-5.4.22
d- 24 root  root  4096 Mar  2 12:17 linux-5.4.22-orig
drwxr-xr-x 24 ronny allusers  4096 Feb 24 07:37 linux-5.4.22-orig2
-rw-r--r--  1 ronny allusers 109491488 Feb 24 07:44 linux-5.4.22.tar.xz

$ sudo ls -al linux-5.4.22-orig
total 807
d-  24 root  root   4096 Mar  2 12:17 .
drwxr-xr-x   9 ronny allusers   4096 Mar  2 14:08 ..
d-  27 root  root   4096 Mar  2 12:33 arch
d-   3 root  root   4096 Mar  2 12:34 block
d-   2 root  root   4096 Mar  2 12:16 certs
-rw-r--r--   1 ronny allusers  15318 Feb 24 07:37 .clang-format
-rw-r--r--   1 ronny allusers 59 Feb 24 07:37 .cocciconfig
-rw-r--r--   1 ronny allusers423 Feb 24 07:37 COPYING
-rw-r--r--   1 ronny allusers  99537 Feb 24 07:37 CREDITS
d-   4 root  root   4096 Mar  2 12:34 crypto
drwxr-xr-x  82 ronny allusers   4096 Mar  2 12:44 Documentation
drwxr-xr-x 138 ronny allusers   4096 Mar  2 12:16 drivers
drwxr-xr-x  76 ronny allusers   4096 Mar  2 12:35 fs
-rw-r--r--   1 ronny allusers 71 Feb 24 07:37 .get_maintainer.ignore
-rw-r--r--   1 ronny allusers 30 Feb 24 07:37 .gitattributes
-rw-r--r--   1 ronny allusers   1740 Feb 24 07:37 .gitignore
drwxr-xr-x  27 ronny allusers   4096 Mar  2 12:16 include
d-   2 root  root   4096 Mar  2 12:17 init
d-   2 root  root   4096 Mar  2 12:35 ipc
-rw-r--r--   1 ronny allusers   1321 Feb 24 07:37 Kbuild
-rw-r--r--   1 ronny allusers595 Feb 24 07:37 Kconfig
d-  18 root  root   4096 Mar  2 12:17 kernel
d-  18 root  root   4096 Mar  2 12:44 lib
d-   6 root  root   4096 Mar  2 12:15 LICENSES
-rw-r--r--   1 ronny allusers  13825 Feb 24 07:37 .mailmap
-rw-r--r--   1 ronny allusers 529379 Feb 24 07:37 MAINTAINERS
-rw-r--r--   1 ronny allusers  60910 Feb 24 07:37 Makefile
d-   3 root  root   4096 Mar  2 12:17 mm
drwxr-xr-x  70 ronny allusers   4096 Mar  2 12:17 net
-rw-r--r--   1 ronny allusers727 Feb 24 07:37 README
drwxr-xr-x  29 ronny allusers   4096 Mar  2 12:17 samples
d-  15 root  root   4096 Mar  2 12:44 scripts
drwxr-xr-x  12 ronny allusers   4096 Mar  2 12:35 security
d-  26 root  root   4096 Mar  2 12:17 sound
drwxr-xr-x  35 ronny allusers   4096 Mar  2 12:45 tools
d-   3 root  root   4096 Mar  2 12:35 usr
drwxr-xr-x   4 ronny allusers   4096 Mar  2 12:44 virt


Have I missed something in doing the remove-brick? Trying to get to the bottom 
of this before I press go on production data.
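
For reference, the sort of checks that should catch problems before the final
commit, as a sketch (volume and brick names below are placeholders, not our real
ones):

# Wait for the rebalance started by "remove-brick ... start" to finish, and check
# the failures/skipped counters before committing:
gluster volume remove-brick myvol server1:/data/brick1 server2:/data/brick1 status

# And make sure nothing is pending heal before and after:
gluster volume heal myvol info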

Thanks.

Ronny

-- 
Ronny Adsetts
Technical Director
Amazing Internet Ltd, London
t: +44 20 8977 8943
w: www.amazinginternet.com

Registered office: 85 Waldegrave Park, Twickenham, TW1 4TJ
Registered in England. Company No. 4042957






[Gluster-users] Announcing Gluster release 5.12

2020-03-02 Thread Hari Gowtham
Hi,

The Gluster community is pleased to announce the release of Gluster
5.12 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:
None

Thanks,
Gluster community

[1] Packages for 5.12:
https://download.gluster.org/pub/gluster/glusterfs/5/5.12/

[2] Release notes for 5.12:
https://docs.gluster.org/en/latest/release-notes/5.12/


-- 
Regards,
Hari Gowtham.



