Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work

2018-07-11 Thread Kotresh Hiremath Ravishankar
Hi Marcus,

I think the fix [1] is needed in 4.1.
Could you please try this out and let us know if it works for you?

[1] https://review.gluster.org/#/c/20207/
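
(A rough sketch of how the change could be pulled in for a test build, assuming
the standard Gerrit change-ref layout; the patch-set number is a placeholder,
so use the latest one shown on the review page:)

# Fetch change 20207 from the glusterfs Gerrit and check it out for a test build
git clone https://github.com/gluster/glusterfs.git && cd glusterfs
git fetch https://review.gluster.org/glusterfs refs/changes/07/20207/<patch-set>
git checkout FETCH_HEAD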

Thanks,
Kotresh HR

On Thu, Jul 12, 2018 at 1:49 AM, Marcus Pedersén 
wrote:

> Hi all,
>
> I have upgraded from 3.12.9 to 4.1.1 and have been following the upgrade
> instructions for an offline upgrade.
>
> I upgraded the geo-replication (slave) side first, 1 x (2+1), and the master
> side after that, 2 x (2+1).
>
> Both clusters work the way they should on their own.
>
> After the upgrade, the status for all geo-replication nodes on the master
> side is Stopped.
>
> I tried to start the geo-replication from the master node and the response
> was "started successfully".
>
> Status again: Stopped
>
> Tried to start again and got the response "started successfully"; after that
> glusterd crashed on all master nodes.
>
> After a restart of all the glusterd daemons the master cluster was up again.
>
> Status for geo-replication is still Stopped, and every attempt to start it
> after this returns "successful" but the status remains Stopped.
>
>
> Please help me get the geo-replication up and running again.
>
>
> Best regards
>
> Marcus Pedersén
>
>
> Part of geo-replication log from master node:
>
> [2018-07-11 18:42:48.941760] I [changelogagent(/urd-gds/gluster):73:__init__]
> ChangelogAgent: Agent listining...
> [2018-07-11 18:42:48.947567] I 
> [resource(/urd-gds/gluster):1780:connect_remote]
> SSH: Initializing SSH connection between master and slave...
> [2018-07-11 18:42:49.363514] E 
> [syncdutils(/urd-gds/gluster):304:log_raise_exception]
> : connection to peer is broken
> [2018-07-11 18:42:49.364279] E [resource(/urd-gds/gluster):210:errlog]
> Popen: command returned errorcmd=ssh -oPasswordAuthentication=no
> -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret\
> .pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-hjRhBo/
> 7e5534547f3675a710a107722317484f.sock geouser@urd-gds-geo-000
> /nonexistent/gsyncd --session-owner 5e94eb7d-219f-4741-a179-d4ae6b50c7ee
> --local-id .%\
> 2Furd-gds%2Fgluster --local-node urd-gds-001 -N --listen --timeout 120
> gluster://localhost:urd-gds-volume   error=2
> [2018-07-11 18:42:49.364586] E [resource(/urd-gds/gluster):214:logerr]
> Popen: ssh> usage: gsyncd.py [-h]
> [2018-07-11 18:42:49.364799] E [resource(/urd-gds/gluster):214:logerr]
> Popen: ssh>
> [2018-07-11 18:42:49.364989] E [resource(/urd-gds/gluster):214:logerr]
> Popen: ssh>  {monitor-status,monitor,
> worker,agent,slave,status,config-check,config-get,config-set,config-reset,
> voluuidget,d\
> elete}
> [2018-07-11 18:42:49.365210] E [resource(/urd-gds/gluster):214:logerr]
> Popen: ssh>  ...
> [2018-07-11 18:42:49.365408] E [resource(/urd-gds/gluster):214:logerr]
> Popen: ssh> gsyncd.py: error: argument subcmd: invalid choice:
> '5e94eb7d-219f-4741-a179-d4ae6b50c7ee' (choose from 'monitor-status',
> 'monit\
> or', 'worker', 'agent', 'slave', 'status', 'config-check', 'config-get',
> 'config-set', 'config-reset', 'voluuidget', 'delete')
> [2018-07-11 18:42:49.365919] I [syncdutils(/urd-gds/gluster):271:finalize]
> : exiting.
> [2018-07-11 18:42:49.369316] I [repce(/urd-gds/gluster):92:service_loop]
> RepceServer: terminating on reaching EOF.
> [2018-07-11 18:42:49.369921] I [syncdutils(/urd-gds/gluster):271:finalize]
> : exiting.
> [2018-07-11 18:42:49.369694] I [monitor(monitor):353:monitor] Monitor:
> worker died before establishing connection   brick=/urd-gds/gluster
> [2018-07-11 18:42:59.492762] I [monitor(monitor):280:monitor] Monitor:
> starting gsyncd worker   brick=/urd-gds/gluster
> slave_node=ssh://geouser@urd-gds-geo-000:gluster://
> localhost:urd-gds-volume
> [2018-07-11 18:42:59.558491] I 
> [resource(/urd-gds/gluster):1780:connect_remote]
> SSH: Initializing SSH connection between master and slave...
> [2018-07-11 18:42:59.559056] I [changelogagent(/urd-gds/gluster):73:__init__]
> ChangelogAgent: Agent listining...
> [2018-07-11 18:42:59.945693] E 
> [syncdutils(/urd-gds/gluster):304:log_raise_exception]
> : connection to peer is broken
> [2018-07-11 18:42:59.946439] E [resource(/urd-gds/gluster):210:errlog]
> Popen: command returned errorcmd=ssh -oPasswordAuthentication=no
> -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret\
> .pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-992bk7/
> 7e5534547f3675a710a107722317484f.sock geouser@urd-gds-geo-000
> /nonexistent/gsyncd --session-owner 5e94eb7d-219f-4741-a179-d4ae6b50c7ee
> --local-id .%\
> 2Furd-gds%2Fgluster --local-node urd-gds-001 -N --listen --timeout 120
> gluster://localhost:urd-gds-volume   error=2
> [2018-07-11 18:42:59.946748] E [resource(/urd-gds/gluster):214:logerr]
> Popen: ssh> usage: gsyncd.py [-h]
> [2018-07-11 18:42:59.946962] E [resource(/urd-gds/gluster):214:logerr]
> Popen: ssh>
> [2018-07-11 18:42:59.947150] E [resource(/urd-gds/gluster):214:logerr]
> Popen: ssh>  {monitor-status,monitor,
> 

[Gluster-users] Upgrade to 4.1.1 geo-replication does not work

2018-07-11 Thread Marcus Pedersén
Hi all,

I have upgraded from 3.12.9 to 4.1.1 and have been following the upgrade
instructions for an offline upgrade.

I upgraded the geo-replication (slave) side first, 1 x (2+1), and the master
side after that, 2 x (2+1).

Both clusters work the way they should on their own.

After the upgrade, the status for all geo-replication nodes on the master side
is Stopped.

I tried to start the geo-replication from the master node and the response was
"started successfully".

Status again: Stopped

Tried to start again and got the response "started successfully"; after that
glusterd crashed on all master nodes.

After a restart of all the glusterd daemons the master cluster was up again.

Status for geo-replication is still Stopped, and every attempt to start it
after this returns "successful" but the status remains Stopped.
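
(For reference, a sketch of the status/start commands referred to above; the
session name is inferred from the log below and may differ on your setup:)

gluster volume geo-replication urd-gds-volume geouser@urd-gds-geo-000::urd-gds-volume status
gluster volume geo-replication urd-gds-volume geouser@urd-gds-geo-000::urd-gds-volume start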


Please help me get the geo-replication up and running again.


Best regards

Marcus Pedersén


Part of geo-replication log from master node:

[2018-07-11 18:42:48.941760] I [changelogagent(/urd-gds/gluster):73:__init__] 
ChangelogAgent: Agent listining...
[2018-07-11 18:42:48.947567] I [resource(/urd-gds/gluster):1780:connect_remote] 
SSH: Initializing SSH connection between master and slave...
[2018-07-11 18:42:49.363514] E 
[syncdutils(/urd-gds/gluster):304:log_raise_exception] : connection to 
peer is broken
[2018-07-11 18:42:49.364279] E [resource(/urd-gds/gluster):210:errlog] Popen: 
command returned errorcmd=ssh -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret\
.pem -p 22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-hjRhBo/7e5534547f3675a710a107722317484f.sock 
geouser@urd-gds-geo-000 /nonexistent/gsyncd --session-owner 
5e94eb7d-219f-4741-a179-d4ae6b50c7ee --local-id .%\
2Furd-gds%2Fgluster --local-node urd-gds-001 -N --listen --timeout 120 
gluster://localhost:urd-gds-volume   error=2
[2018-07-11 18:42:49.364586] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh> usage: gsyncd.py [-h]
[2018-07-11 18:42:49.364799] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh>
[2018-07-11 18:42:49.364989] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh>  
{monitor-status,monitor,worker,agent,slave,status,config-check,config-get,config-set,config-reset,voluuidget,d\
elete}
[2018-07-11 18:42:49.365210] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh>  ...
[2018-07-11 18:42:49.365408] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh> gsyncd.py: error: argument subcmd: invalid choice: 
'5e94eb7d-219f-4741-a179-d4ae6b50c7ee' (choose from 'monitor-status', 'monit\
or', 'worker', 'agent', 'slave', 'status', 'config-check', 'config-get', 
'config-set', 'config-reset', 'voluuidget', 'delete')
[2018-07-11 18:42:49.365919] I [syncdutils(/urd-gds/gluster):271:finalize] 
: exiting.
[2018-07-11 18:42:49.369316] I [repce(/urd-gds/gluster):92:service_loop] 
RepceServer: terminating on reaching EOF.
[2018-07-11 18:42:49.369921] I [syncdutils(/urd-gds/gluster):271:finalize] 
: exiting.
[2018-07-11 18:42:49.369694] I [monitor(monitor):353:monitor] Monitor: worker 
died before establishing connection   brick=/urd-gds/gluster
[2018-07-11 18:42:59.492762] I [monitor(monitor):280:monitor] Monitor: starting 
gsyncd worker   brick=/urd-gds/gluster  
slave_node=ssh://geouser@urd-gds-geo-000:gluster://localhost:urd-gds-volume
[2018-07-11 18:42:59.558491] I [resource(/urd-gds/gluster):1780:connect_remote] 
SSH: Initializing SSH connection between master and slave...
[2018-07-11 18:42:59.559056] I [changelogagent(/urd-gds/gluster):73:__init__] 
ChangelogAgent: Agent listining...
[2018-07-11 18:42:59.945693] E 
[syncdutils(/urd-gds/gluster):304:log_raise_exception] : connection to 
peer is broken
[2018-07-11 18:42:59.946439] E [resource(/urd-gds/gluster):210:errlog] Popen: 
command returned errorcmd=ssh -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret\
.pem -p 22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-992bk7/7e5534547f3675a710a107722317484f.sock 
geouser@urd-gds-geo-000 /nonexistent/gsyncd --session-owner 
5e94eb7d-219f-4741-a179-d4ae6b50c7ee --local-id .%\
2Furd-gds%2Fgluster --local-node urd-gds-001 -N --listen --timeout 120 
gluster://localhost:urd-gds-volume   error=2
[2018-07-11 18:42:59.946748] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh> usage: gsyncd.py [-h]
[2018-07-11 18:42:59.946962] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh>
[2018-07-11 18:42:59.947150] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh>  
{monitor-status,monitor,worker,agent,slave,status,config-check,config-get,config-set,config-reset,voluuidget,d\
elete}
[2018-07-11 18:42:59.947369] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh>  ...
[2018-07-11 18:42:59.947552] E [resource(/urd-gds/gluster):214:logerr] Popen: 
ssh> gsyncd.py: error: argument subcmd: invalid choice: 
'5e94eb7d-219f-4741-a179-d4ae6b50c7ee' (choose from 'monitor-status', 'monit\
or', 

[Gluster-users] Geo replication manual rsync

2018-07-11 Thread Marcus Pedersén
Hi all,
I have set up a gluster system with geo-replication (CentOS 7, gluster 3.12).
I have moved about 30 TB to the cluster.
It seems that the data is synchronized to the geo-replication slave really
slowly.
It has been active for weeks and still only 9 TB has ended up on the slave side.
I pause the replication once a day and make a snapshot with a script.
Does this slow things down?
Is it possible to pause replication and do a manual rsync, or does this disturb
the geo sync when it is resumed?
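
(For reference, a minimal sketch of the daily pause/snapshot/resume cycle
described above; the volume, slave and snapshot names are placeholders, not
taken from this thread:)

#!/bin/bash
# Pause geo-replication, snapshot the master volume, then resume.
MASTERVOL=mastervol
SLAVE=geoaccount@slavehost::slavevol

gluster volume geo-replication $MASTERVOL $SLAVE pause
gluster snapshot create daily-$(date +%Y%m%d) $MASTERVOL no-timestamp
gluster volume geo-replication $MASTERVOL $SLAVE resume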

Thanks!

Best regards
Marcus



Marcus Pedersén
Systemadministrator
Interbull Centre

Sent from my phone


---
E-mailing SLU will result in SLU processing your personal data. For more 
information on how this is done, click here 

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Release 3.12.12: Scheduled for the 11th of July

2018-07-11 Thread Jiffin Tony Thottan

Hi Mabi,

I have checked with the afr maintainer; all of the required changes are
merged in 3.12.


Hence moving forward with 3.12.12 release

Regards,

Jiffin


On Monday 09 July 2018 01:04 PM, mabi wrote:

Hi Jiffin,

Based on the issues I have been encountering on a nearly daily basis for
2-3 months now (see the "New 3.12.7 possible split-brain on replica 3"
thread in this ML), I would be really glad if the required fixes mentioned
by Ravi could make it into the 3.12.12 release. Ravi mentioned the following:


afr: heal gfids when file is not present on all bricks
afr: don't update readables if inode refresh failed on all children
afr: fix bug-1363721.t failure
afr: add quorum checks in pre-op
afr: don't treat all cases all bricks being blamed as split-brain
afr: capture the correct errno in post-op quorum check
afr: add quorum checks in post-op

Right now I only see the first one pending in the review dashboard. It 
would be great if all of them could make it into this release.


Best regards,
Mabi



‐‐‐ Original Message ‐‐‐
On July 9, 2018 7:18 AM, Jiffin Tony Thottan  wrote:


Hi,

It's time to prepare the 3.12.12 release, which falls on the 10th of
each month, and hence would be 11-07-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.12? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.12

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Community to supported

2018-07-11 Thread Colin Coe
Hi

The initial response I got from GSS was to re-install, which is why I posted
here.  I used the process I outlined earlier (quoted below) and it seemed to
work, but I was looking for advice/war stories.
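
For reference, a rough per-node sketch of the channel swap from that process
(channel labels as listed in the quoted mail; a sketch, not a validated
procedure):

# Run on each node in turn, arbiter first (devfil03, then devfil02, then devfil01)
systemctl stop glusterd
pkill glusterfs
yum erase $(rpm -qa | grep glusterfs)
rhn-channel -r -c el-x86_64-glusterfs-7                # drop the community gluster channel
rhn-channel -a -c rhel-x86_64-server-7-rh-gluster-3 \
            -c rhel-x86_64-server-7-rh-gluster-3-nfs \
            -c rhel-x86_64-server-7-rh-gluster-3-samba # add the RHGS channels
yum install glusterfs-server -y
systemctl restart glusterd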

Thanks

On Wed, Jul 11, 2018 at 7:55 PM Vijay Bellur  wrote:

> Hi Colin,
>
> I think it would be ideal to vet this process with your Red Hat support
> channel. It is difficult to provide authoritative advice on any downstream
> product based on upstream GlusterFS in this forum.
>
> Thanks,
> Vijay
>
> On Tue, Jul 10, 2018 at 10:42 PM Colin Coe  wrote:
>
>> Hi all
>>
>> I did some basic testing and did a Gluster to RHGS migration using the
>> following process:
>>
>> Built three new VMs (devfil01, devfil02, devfil03) and installed
>> community gluster version 3.8.7.
>>
>> Using devfil01, added all three nodes to the pool:
>> - peer probe devfil02
>> - peer probe devfil03
>>
>> Created a volume matching the basic setup of production:
>> - gluster volume create testvol replica 3 arbiter 1
>>   devfil01:/bricks/brick1/brick devfil02:/bricks/brick1/brick
>>   devfil03:/bricks/brick1/brick
>>
>> On devfil03 (arbiter first):
>> - systemctl stop glusterd
>> - pkill glusterfs
>> - yum erase $(rpm -qa | grep glusterfs)
>> - rhn-channel -r -c el-x86_64-glusterfs-7  # community gluster
>> - rhn-channel -a -c rhel-x86_64-server-7-rh-gluster-3
>>   -c rhel-x86_64-server-7-rh-gluster-3-nfs
>>   -c rhel-x86_64-server-7-rh-gluster-3-samba
>> - yum install glusterfs-server -y
>> - systemctl restart glusterd
>>
>> Repeat for devfil02, then for devfil01.
>>
>> Any thoughts on this process?  Anything I'm missing or could do a better
>> way?
>>
>> Thanks
>>
>>
>> On Wed, Jul 4, 2018 at 8:11 AM Colin Coe  wrote:
>>
>>> Hi all
>>>
>>> We've been running community supported gluster for a few years and now
>>> we've bought support subscriptions for RHGS.
>>>
>>> We currently have a 3 node system (2 replicas plus quorum) in production
>>> hosting several volumes with a TB or so of data.
>>>
>>> I've logged a support ticket requesting the best path forward to
>>> migrating from the community version to the RH supported version and was
>>> told to re-install.  Obviously I'd rather not do that as it will be very
>>> disruptive.
>>>
>>> Is there a way to migrate from gluster to RHGS without re-installing?
>>>
>>> Thanks
>>>
>>> CC
>>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Community to supported

2018-07-11 Thread Vijay Bellur
Hi Colin,

I think it would be ideal to vet this process with your Red Hat support
channel. It is difficult to provide authoritative advice on any downstream
product based on upstream GlusterFS in this forum.

Thanks,
Vijay

On Tue, Jul 10, 2018 at 10:42 PM Colin Coe  wrote:

> Hi all
>
> I did some basic testing and did a Gluster to RHGS migration using the
> following process:
>
> Built three new VMs (devfil01, devfil02, devfil03) and installed community
> gluster version 3.8.7.
>
> Using devfil01, added all three nodes to the pool:
> - peer probe devfil02
> - peer probe devfil03
>
> Created a volume matching the basic setup of production:
> - gluster volume create testvol replica 3 arbiter 1
>   devfil01:/bricks/brick1/brick devfil02:/bricks/brick1/brick
>   devfil03:/bricks/brick1/brick
>
> On devfil03 (arbiter first):
> - systemctl stop glusterd
> - pkill glusterfs
> - yum erase $(rpm -qa | grep glusterfs)
> - rhn-channel -r -c el-x86_64-glusterfs-7  # community gluster
> - rhn-channel -a -c rhel-x86_64-server-7-rh-gluster-3
>   -c rhel-x86_64-server-7-rh-gluster-3-nfs
>   -c rhel-x86_64-server-7-rh-gluster-3-samba
> - yum install glusterfs-server -y
> - systemctl restart glusterd
>
> Repeat for devfil02, then for devfil01.
>
> Any thoughts on this process?  Anything I'm missing or could do a better
> way?
>
> Thanks
>
>
> On Wed, Jul 4, 2018 at 8:11 AM Colin Coe  wrote:
>
>> Hi all
>>
>> We've been running community supported gluster for a few years and now
>> we've bought support subscriptions for RHGS.
>>
>> We currently have a 3 node system (2 replicas plus quorum) in production
>> hosting several volumes with a TB or so of data.
>>
>> I've logged a support ticket requesting the best path forward to
>> migrating from the community version to the RH supported version and was
>> told to re-install.  Obviously I'd rather not do that as it will be very
>> disruptive.
>>
>> Is there a way to migrate from gluster to RHGS without re-installing?
>>
>> Thanks
>>
>> CC
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term Maintenance)

2018-07-11 Thread Niels de Vos
On Wed, Jul 11, 2018 at 09:26:45AM +0200, Paolo Margara wrote:
> Hi Niels,
> 
> I just want to report that the packages for releases 3.12.10 and 3.12.11 are
> still not available on the mirrors.

Thanks for reporting! The CentOS-6 packages are available, but the CentOS-7
ones are not. I'll work out with the CentOS team why this is the case.

Niels


> 
> 
> Greetings,
> 
>     Paolo
> 
> 
> On 04/07/2018 09:11, Niels de Vos wrote:
> > On Tue, Jul 03, 2018 at 05:20:44PM -0500, Darrell Budic wrote:
> >> I’ve now tested 3.12.11 on my centos 7.5 ovirt dev cluster, and all
> >> appears good. Should be safe to move from -test to release for
> >> centos-gluster312
> > Thanks for the report! Someone else already informed me earlier as well.
> > The packages are not available in the release yet, the CentOS team was
> > busy with CentOS 6.10 and that caused some delays. I expect the packages
> > to show up on the mirrors later this week.
> >
> > Niels
> >
> >
> >> Thanks!
> >>
> >>   -Darrell
> >>
> >>> From: Jiffin Tony Thottan 
> >>> Subject: [Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term 
> >>> Maintenance)
> >>> Date: June 15, 2018 at 12:53:03 PM CDT
> >>> To: gluster-users@gluster.org, Gluster Devel; annou...@gluster.org
> >>>
> >>> The Gluster community is pleased to announce the release of Gluster 
> >>> 3.12.10 (packages available at [1,2,3]).
> >>>
> >>> Release notes for the release can be found at [4].
> >>>
> >>> Thanks,
> >>> Gluster community
> >>>
> >>>
> >>> [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.10/
> >>> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 
> >>> 
> >>> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> >>> [4] Release notes: 
> >>> https://gluster.readthedocs.io/en/latest/release-notes/3.12.10/
> >>>
> >>> ___
> >>> Gluster-users mailing list
> >>> Gluster-users@gluster.org
> >>> http://lists.gluster.org/mailman/listinfo/gluster-users
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users


signature.asc
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-07-11 Thread Mauro Tridici
Hi Hari, Hi Sanoj,

thank you very much for your patience and your support! 
The problem has been solved following your instructions :-)

N.B.: in order to reduce the running time, I executed the “du” command as 
follows:

for i in {1..12}
do
 # walk only the affected directory on each local brick to reduce the running time
 du /gluster/mnt$i/brick/CSP/ans004/ftp
done

and not on each brick at "/gluster/mnt$i/brick" tree level.

I hope it was a correct idea :-)

Thank you again for helping me to solve this issue.
Have a good day.
Mauro


> On 11 July 2018, at 09:16, Hari Gowtham  wrote:
> 
> Hi,
> 
> There was an accounting issue in your setup.
> The directories ans004/ftp/CMCC-CM2-VHR4-CTR/atm/hist and
> ans004/ftp/CMCC-CM2-VHR4
> had wrong size values on them.
>
> To fix it, you will have to set the dirty xattr (an internal gluster
> xattr) on these directories, which will mark them for recalculation.
> Then do a du on the mount after setting the xattrs. This will trigger a
> stat that will calculate and update the right values.
> 
> To set dirty xattr:
> setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 
> This has to be done for both the directories one after the other on each 
> brick.
> Once done for all the bricks issue the du command.
> 
> Thanks to Sanoj for the guidance
> On Tue, Jul 10, 2018 at 6:37 PM Mauro Tridici  wrote:
>> 
>> 
>> Hi Hari,
>> 
>> sorry for the late reply.
>> Yes, the gluster volume is a single volume that is spread across all 3
>> nodes and has 36 bricks.
>> 
>> In attachment you can find a tar.gz file containing:
>> 
>> - gluster volume status command output;
>> - gluster volume info command output;
>> - the output of the following script execution (it generated 3 files per 
>> server: s01.log, s02.log, s03.log).
>> 
>> This is the “check.sh” script that has been executed on each server (servers 
>> are s01, s02, s03).
>> 
>> #!/bin/bash
>> 
>> #set -xv
>> 
>> host=$(hostname)
>> 
>> for i in {1..12}
>> do
>> ./quota_fsck_new-6.py --full-logs --sub-dir CSP/ans004 /gluster/mnt$i/brick 
>> >> $host.log
>> done
>> 
>> Many thanks,
>> Mauro
>> 
>> 
>> On 10 July 2018, at 12:12, Hari Gowtham  wrote:
>> 
>> Hi Mauro,
>> 
>> Can you send the gluster v status command output?
>> 
>> Is it a single volume that is spread between all the 3 node and has 36 
>> bricks?
>> If yes, you will have to run on all the bricks.
>> 
>> In the command use sub-dir option if you are running only for the
>> directory where limit is set. else if you are
>> running on the brick mount path you can remove it.
>> 
>> The full-log will consume a lot of space as its going to record the
>> xattrs for each entry inside the path we are
>> running it. This data is needed to cross check and verify quota's
>> marker functionality.
>> 
>> To reduce resource consumption you can run it on one replica set alone
>> (if its replicate volume)
>> But its better if you can run it on all the brick if possible and if
>> the size consumed is fine with you.
>> 
>> Make sure you run it with the script link provided above by Sanoj. (patch 
>> set 6)
>> On Tue, Jul 10, 2018 at 2:54 PM Mauro Tridici  wrote:
>> 
>> 
>> 
>> Hi Hari,
>> 
>> thank you very much for your answer.
>> I will try to use the script mentioned above pointing to each backend bricks.
>> 
>> So, if I understand, since I have a gluster cluster composed by 3 nodes 
>> (with 12 bricks on each node), I have to execute the script 36 times. Right?
>> 
>> You can find below the “df” command output executed on a cluster node:
>> 
>> /dev/mapper/cl_s01-gluster   100G   33M100G   1% /gluster
>> /dev/mapper/gluster_vgd-gluster_lvd  9,0T  5,6T3,5T  62% /gluster/mnt2
>> /dev/mapper/gluster_vge-gluster_lve  9,0T  5,7T3,4T  63% /gluster/mnt3
>> /dev/mapper/gluster_vgj-gluster_lvj  9,0T  5,7T3,4T  63% /gluster/mnt8
>> /dev/mapper/gluster_vgc-gluster_lvc  9,0T  5,6T3,5T  62% /gluster/mnt1
>> /dev/mapper/gluster_vgl-gluster_lvl  9,0T  5,8T3,3T  65% /gluster/mnt10
>> /dev/mapper/gluster_vgh-gluster_lvh  9,0T  5,7T3,4T  64% /gluster/mnt6
>> /dev/mapper/gluster_vgf-gluster_lvf  9,0T  5,7T3,4T  63% /gluster/mnt4
>> /dev/mapper/gluster_vgm-gluster_lvm  9,0T  5,4T3,7T  60% /gluster/mnt11
>> /dev/mapper/gluster_vgn-gluster_lvn  9,0T  5,4T3,7T  60% /gluster/mnt12
>> /dev/mapper/gluster_vgg-gluster_lvg  9,0T  5,7T3,4T  64% /gluster/mnt5
>> /dev/mapper/gluster_vgi-gluster_lvi  9,0T  5,7T3,4T  63% /gluster/mnt7
>> /dev/mapper/gluster_vgk-gluster_lvk  9,0T  5,8T3,3T  65% /gluster/mnt9
>> 
>> I will execute the following command and I will put here the output.
>> 
>> ./quota_fsck_new.py --full-logs --sub-dir /gluster/mnt{1..12}
>> 
>> Thank you again for your support.
>> Regards,
>> Mauro
>> 
>> On 10 July 2018, at 11:02, Hari Gowtham  wrote:
>> 
>> Hi,
>> 
>> There is no explicit command to backup all the quota limits as per my
>> understanding. need to look further about this.
>> But you can do the following to backup and 

Re: [Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term Maintenance)

2018-07-11 Thread Paolo Margara
Hi Niels,

I just want to report that the packages for releases 3.12.10 and 3.12.11 are
still not available on the mirrors.


Greetings,

    Paolo


On 04/07/2018 09:11, Niels de Vos wrote:
> On Tue, Jul 03, 2018 at 05:20:44PM -0500, Darrell Budic wrote:
>> I’ve now tested 3.12.11 on my centos 7.5 ovirt dev cluster, and all
>> appears good. Should be safe to move from -test to release for
>> centos-gluster312
> Thanks for the report! Someone else already informed me earlier as well.
> The packages are not available in the release yet, the CentOS team was
> busy with CentOS 6.10 and that caused some delays. I expect the packages
> to show up on the mirrors later this week.
>
> Niels
>
>
>> Thanks!
>>
>>   -Darrell
>>
>>> From: Jiffin Tony Thottan 
>>> Subject: [Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term 
>>> Maintenance)
>>> Date: June 15, 2018 at 12:53:03 PM CDT
>>> To: gluster-users@gluster.org, Gluster Devel; annou...@gluster.org
>>>
>>> The Gluster community is pleased to announce the release of Gluster 3.12.10 
>>> (packages available at [1,2,3]).
>>>
>>> Release notes for the release can be found at [4].
>>>
>>> Thanks,
>>> Gluster community
>>>
>>>
>>> [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.10/
>>> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 
>>> 
>>> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
>>> [4] Release notes: 
>>> https://gluster.readthedocs.io/en/latest/release-notes/3.12.10/
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Old gluster PPA repositories

2018-07-11 Thread Igor Cicimov
Hi,

I was looking to upgrade an old 3.4 cluster running on Ubuntu, but it seems
that versions older than 3.10 have been removed from the gluster PPA. How do
people proceed with an upgrade in this case? From what I read in the upgrade
documentation I can upgrade from 3.4 to 3.6, but since the repository is
gone, what are my options now?

I tried to compile 3.6.9 on Ubuntu 14.04 but it errored out:

make[5]: *** [keys.lo] Error 1
make[4]: *** [all-recursive] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

By the way, why are the old PPA's being removed really?

Thanks,
Igor
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-07-11 Thread Hari Gowtham
Hi,

There was an accounting issue in your setup.
The directories ans004/ftp/CMCC-CM2-VHR4-CTR/atm/hist and ans004/ftp/CMCC-CM2-VHR4
had wrong size values on them.

To fix it, you will have to set the dirty xattr (an internal gluster
xattr) on these directories, which will mark them for recalculation.
Then do a du on the mount after setting the xattrs. This will trigger a
stat that will calculate and update the right values.

To set dirty xattr:
setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 
This has to be done for both the directories one after the other on each brick.
Once done for all the bricks issue the du command.
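
(A rough sketch of how this could be scripted per server; the brick layout and
the on-brick directory paths below follow the earlier messages in this thread
and are assumptions, so adjust them to your setup:)

#!/bin/bash
# Mark the two mis-accounted directories dirty on every local brick,
# so that quota recalculates their sizes.
for i in {1..12}
do
  for d in CSP/ans004/ftp/CMCC-CM2-VHR4-CTR/atm/hist CSP/ans004/ftp/CMCC-CM2-VHR4
  do
    setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /gluster/mnt$i/brick/$d
  done
done
# Afterwards, run du against these directories on a client mount of the volume
# to trigger the stat that updates the accounting.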

Thanks to Sanoj for the guidance
On Tue, Jul 10, 2018 at 6:37 PM Mauro Tridici  wrote:
>
>
> Hi Hari,
>
> sorry for the late reply.
> Yes, the gluster volume is a single volume that is spread across all 3
> nodes and has 36 bricks.
>
> In attachment you can find a tar.gz file containing:
>
> - gluster volume status command output;
> - gluster volume info command output;
> - the output of the following script execution (it generated 3 files per 
> server: s01.log, s02.log, s03.log).
>
> This is the “check.sh” script that has been executed on each server (servers 
> are s01, s02, s03).
>
> #!/bin/bash
>
> #set -xv
>
> host=$(hostname)
>
> for i in {1..12}
> do
>  ./quota_fsck_new-6.py --full-logs --sub-dir CSP/ans004 /gluster/mnt$i/brick 
> >> $host.log
> done
>
> Many thanks,
> Mauro
>
>
> On 10 July 2018, at 12:12, Hari Gowtham  wrote:
>
> Hi Mauro,
>
> Can you send the gluster v status command output?
>
> Is it a single volume that is spread between all the 3 node and has 36 bricks?
> If yes, you will have to run on all the bricks.
>
> In the command use sub-dir option if you are running only for the
> directory where limit is set. else if you are
> running on the brick mount path you can remove it.
>
> The full-log will consume a lot of space as its going to record the
> xattrs for each entry inside the path we are
> running it. This data is needed to cross check and verify quota's
> marker functionality.
>
> To reduce resource consumption you can run it on one replica set alone
> (if its replicate volume)
> But its better if you can run it on all the brick if possible and if
> the size consumed is fine with you.
>
> Make sure you run it with the script link provided above by Sanoj. (patch set 
> 6)
> On Tue, Jul 10, 2018 at 2:54 PM Mauro Tridici  wrote:
>
>
>
> Hi Hari,
>
> thank you very much for your answer.
> I will try to use the script mentioned above pointing to each backend bricks.
>
> So, if I understand, since I have a gluster cluster composed by 3 nodes (with 
> 12 bricks on each node), I have to execute the script 36 times. Right?
>
> You can find below the “df” command output executed on a cluster node:
>
> /dev/mapper/cl_s01-gluster   100G   33M100G   1% /gluster
> /dev/mapper/gluster_vgd-gluster_lvd  9,0T  5,6T3,5T  62% /gluster/mnt2
> /dev/mapper/gluster_vge-gluster_lve  9,0T  5,7T3,4T  63% /gluster/mnt3
> /dev/mapper/gluster_vgj-gluster_lvj  9,0T  5,7T3,4T  63% /gluster/mnt8
> /dev/mapper/gluster_vgc-gluster_lvc  9,0T  5,6T3,5T  62% /gluster/mnt1
> /dev/mapper/gluster_vgl-gluster_lvl  9,0T  5,8T3,3T  65% /gluster/mnt10
> /dev/mapper/gluster_vgh-gluster_lvh  9,0T  5,7T3,4T  64% /gluster/mnt6
> /dev/mapper/gluster_vgf-gluster_lvf  9,0T  5,7T3,4T  63% /gluster/mnt4
> /dev/mapper/gluster_vgm-gluster_lvm  9,0T  5,4T3,7T  60% /gluster/mnt11
> /dev/mapper/gluster_vgn-gluster_lvn  9,0T  5,4T3,7T  60% /gluster/mnt12
> /dev/mapper/gluster_vgg-gluster_lvg  9,0T  5,7T3,4T  64% /gluster/mnt5
> /dev/mapper/gluster_vgi-gluster_lvi  9,0T  5,7T3,4T  63% /gluster/mnt7
> /dev/mapper/gluster_vgk-gluster_lvk  9,0T  5,8T3,3T  65% /gluster/mnt9
>
> I will execute the following command and I will put here the output.
>
> ./quota_fsck_new.py --full-logs --sub-dir /gluster/mnt{1..12}
>
> Thank you again for your support.
> Regards,
> Mauro
>
> On 10 July 2018, at 11:02, Hari Gowtham  wrote:
>
> Hi,
>
> There is no explicit command to back up all the quota limits as per my
> understanding; I need to look further into this.
> But you can do the following to back them up and set them again.
> "gluster volume quota <volname> list" will print all the quota
> limits on that particular volume.
> You will have to make a note of the directories with their respective limit 
> set.
> Once noted down, you can disable quota on the volume and then enable it.
> Once enabled, you will have to set each limit explicitly on the volume.
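
(A minimal sketch of that backup/restore idea; the volume name, path and limit
below are placeholders:)

# Save the current limits, then disable/enable quota and re-apply each limit.
gluster volume quota <volname> list > /tmp/quota-limits.txt
gluster volume quota <volname> disable
gluster volume quota <volname> enable
# For every saved entry, re-set its limit explicitly, e.g.:
gluster volume quota <volname> limit-usage /CSP/ans004 10TB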
>
> Before doing this, we suggest you try running the script
> mentioned above with the backend brick path instead of the mount path.
> You need to run this on the machines where the backend bricks are
> located, not on the mount.
> On Mon, Jul 9, 2018 at 9:01 PM Mauro Tridici  wrote:
>
>
> Hi Sanoj,
>
> could you provide me the command that I need in order to