566 0 201 completed
> 0:00:08
>
> Is the rebalancing option working fine? Why did gluster throw the error
> saying that "Error : Request timed out"?
> On Tue, Feb 5, 2019 at 4:23 PM Nithya Balachandran
> wrote:
>
>> Hi,
>> Pleas
and for a very long time there
> were no failures, and then at some point these 17000 failures appeared and it
> stayed like that.
>
> Thanks
>
> Kashif
>
> Let me explain a little bit of background.
>
>
> On Mon, Feb 4, 2019 at 5:09 AM Nithya
On Thu, 16 May 2019 at 03:05, Paul van der Vlis wrote:
> On 15-05-19 at 15:45, Nithya Balachandran wrote:
> > Hi Paul,
> >
> > A few questions:
> > Which version of gluster are you using?
>
> On the server and some clients: glusterfs 4.1.2
> On a new c
Hi Brandon,
Please send the following:
1. The gluster volume info
2. Information about which brick was removed
3. The rebalance log file for all nodes hosting removed bricks.
Regards,
Nithya
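A rough sketch of gathering that information, assuming a volume named "myvol" and the default log location (both are placeholders, not taken from this thread):

gluster volume info myvol
# the rebalance log is typically written per node as <volname>-rebalance.log
ls -l /var/log/glusterfs/myvol-rebalance.log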
On Fri, 24 May 2019 at 19:33, Ravishankar N wrote:
> Adding a few DHT folks for some possible
cted to glusterfs community.
>
> Regards,
> Abhishek
>
> On Thu, Jun 6, 2019, 16:08 Nithya Balachandran
> wrote:
>
>> Hi Abhishek,
>>
>> I am still not clear as to the purpose of the tests. Can you clarify why
>> you are using valgrind and why you think t
On Sat, 8 Jun 2019 at 01:29, Alan Orth wrote:
> Dear Ravi,
>
> In the last week I have completed a fix-layout and a full INDEX heal on
> this volume. Now I've started a rebalance and I see a few terabytes of data
> going around on different bricks since yesterday, which I'm sure is good.
>
>
the below script to see the memory increase while the
> above script is running in the background.
>
> *ps_mem.py*
>
> I am attaching the script files as well as the result got after testing
> the scenario.
>
> On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran
>
Hi,
Writing to a volume should not affect glusterd. The stack you have shown in
the valgrind output looks like the memory used to initialise the structures
glusterd uses; it will be freed only when glusterd is stopped.
Can you provide more details to what it is you are trying to test?
Regards,
Nithya
On Tue,
On Fri, 17 May 2019 at 06:01, David Cunningham
wrote:
> Hello,
>
> We're adding an arbiter node to an existing volume and having an issue.
> Can anyone help? The root cause error appears to be
> "----0001: failed to resolve (Transport
> endpoint is not connected)", as
On Thu, 16 May 2019 at 14:17, Paul van der Vlis wrote:
> On 16-05-19 at 05:43, Nithya Balachandran wrote:
> >
> >
> > On Thu, 16 May 2019 at 03:05, Paul van der Vlis wrote:
> >
> > On 15-05-19 at 15:45,
Hi Paul,
A few questions:
Which version of gluster are you using?
Did this behaviour start recently? As in were the contents of that
directory visible earlier?
Regards,
Nithya
On Wed, 15 May 2019 at 18:55, Paul van der Vlis wrote:
> Hello Strahil,
>
> Thanks for your answer. I don't find the
Did you see this behaviour with previous Gluster versions?
Regards,
Nithya
On Wed, 3 Jul 2019 at 21:41, wrote:
> Am I the only one having this problem?
>
> ----- Original Message -----
> From: n...@furyweb.fr
> To: "gluster-users"
> Sent: Friday, 21 June 2019 09:48:47
> Subject: [Gluster-users] Parallel
On Fri, 28 Jun 2019 at 14:34, Dave Sherohman wrote:
> On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
> > > My objective is to remove nodes B and C entirely.
> > >
> > > First up is
Hi Patrick,
Did this start only after the upgrade?
How do you determine which brick process to kill?
Are there a lot of files to be healed on the volume?
Can you provide a tcpdump of the slow listing from a separate test client
mount?
1. Mount the gluster volume on a different mount point
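The remaining numbered steps are cut off in this snippet. A minimal capture sketch, assuming a fresh test mount at /mnt/test and a generic tcpdump filter (server, volume and directory names are placeholders):

mount -t glusterfs server1:/myvol /mnt/test
tcpdump -i any -s 0 -w /var/tmp/slow-listing.pcap tcp and not port 22 &
ls -l /mnt/test/slow-directory    # reproduce the slow listing while capturing
kill %1                           # stop the capture, then share the pcap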
ored. Ravi and Krutika, please take a look at the other files.
Regards,
Nithya
On Fri, 28 Jun 2019 at 19:56, Dave Sherohman wrote:
> On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> > There are some edge cases that may prevent a file from being migrated
> > duri
On Tue, 30 Jul 2019 at 05:44, Diego Remolina wrote:
> Unfortunately statedump crashes on both machines, even freshly rebooted.
>
Do you see any statedump files in /var/run/gluster? This looks more like
the gluster cli crashed.
>
> [root@ysmha01 ~]# gluster --print-statedumpdir
>
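For reference, a minimal sketch of checking the statedump directory and triggering a brick statedump from the CLI ("myvol" is a placeholder volume name; /var/run/gluster is the usual default):

gluster --print-statedumpdir       # usually prints /var/run/gluster
gluster volume statedump myvol     # asks the brick processes to write statedumps
ls -ltr /var/run/gluster/*.dump.*  # newly created dump files should appear here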
ill the actual process or simply trigger the dump? Which
> process should I kill? The brick process in the system or the fuse mount?
>
> Diego
>
> On Mon, Jul 29, 2019, 23:27 Nithya Balachandran
> wrote:
>
>>
>>
>> On Tue, 30 Jul 2019 at 05:44, Diego Remolina wro
On Sat, 31 Aug 2019 at 22:59, Herb Burnswell
wrote:
> Thank you for the reply.
>
> I started a rebalance with force on serverA as suggested. Now I see
> 'activity' on that node:
>
> # gluster vol rebalance tank status
> Node Rebalanced-files size
>
idance..
>
>
What is the output of the rebalance status command?
Can you check if there are any errors in the rebalance logs on the node on
which you see rebalance activity?
If there are a lot of small files on the volume, the rebalance is expected
to take time.
Regards,
Nithya
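A sketch of those two checks, using the volume name "tank" from this thread (the log path is the usual default and may differ on a given installation):

gluster volume rebalance tank status
grep " E " /var/log/glusterfs/tank-rebalance.log    # error-level entries in the rebalance log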
re not doing anything.
Hope this helps.
Regards,
Nithya
> behaviour.
>
> > Regards,
> >
> > Nithya
>
> Best Regards,
> Strahil Nikolov
> On Sep 9, 2019 06:36, Nithya Balachandran wrote:
>
>
>
> On Sat,
Only a single node per replica set would migrate files in
the version used in this case.
Regards,
Nithya
Best Regards,
> Strahil Nikolov
>
> On Friday, 6 September 2019 at 15:29:20 GMT+3, Herb Burnswell <
> herbert.burnsw...@gmail.com> wrote:
>
>
>
>
> On Th
ff986c54ce0] )
> 0-tank-client-10: forced unwinding frame type(GlusterFS 3.3)
> op(FXATTROP(34)) called at 2019-09-08 15:40:44.040333 (xid=0x7f8cfac)
>
> Does this type of failure cause data corruption? What is the best course
> of action at this point?
>
> Thanks,
>
>
er.org/en/latest/Administrator%20Guide/Accessing%20Gluster%20from%20Windows/
>
> Diego
>
>
>
> On Mon, Jul 29, 2019 at 11:52 PM Nithya Balachandran
> wrote:
>
>>
>> Hi Diego,
>>
>> Please do the following:
>>
>> gluster v get readdir-ahead
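The quoted command is truncated here; the full get/set syntax would look roughly like this, with "myvol" as a placeholder volume name:

gluster volume get myvol readdir-ahead
gluster volume set myvol readdir-ahead off    # only if the value actually needs changing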
Hi,
This is the expected behaviour for a distribute volume. Files that hash to
a brick that is down will not be created. This is to prevent issues in case
the file already exists on that brick.
To prevent this, please use distribute-replicate volumes.
Regards,
Nithya
On Thu, 8 Aug 2019 at
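As a sketch of what a distribute-replicate layout looks like at creation time (hostnames and brick paths are placeholders, not from this thread):

gluster volume create myvol replica 3 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
    server4:/bricks/b2 server5:/bricks/b2 server6:/bricks/b2
gluster volume start myvol

Each group of three bricks forms a replica set, so a file whose hash lands on a subvolume with one brick down can still be created on the surviving replicas.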
but I wasn't sure.
>
> Thanks,
> -Matthew
>
> --
> Matthew Benstead
> System Administrator
> Pacific Climate Impacts Consortium <https://pacificclimate.org/>
> University of Victoria, UH1
> PO Box 1800, STN CSC
> Victoria, BC, V8W 2Y2
> Phone: +1-250-721-8432
>
01
>
> trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x5d307baa00023ec0
> trusted.glusterfs.quota.dirty=0x3000
>
> trusted.glusterfs.quota.size.2=0x1b71d5279e763e320005cd53
> trusted.glusterfs.volume-id=0x6f95525a94d74174bac4e1a18fe010a2
>
62T 77% /storage
> [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex
> /mnt/raid6-storage/storage/
> # file: /mnt/raid6-storage/storage/
>
> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.gfid=0x00
cts Consortium <https://pacificclimate.org/>
> University of Victoria, UH1
> PO Box 1800, STN CSC
> Victoria, BC, V8W 2Y2
> Phone: +1-250-721-8432
> Email: matth...@uvic.ca
> On 7/24/19 9:30 PM, Nithya Balachandran wrote:
>
>
>
> On Wed, 24 Jul 2019 at 22:1
Hi Pat,
Do you still see the problem of missing files? If yes please provide the
following :
1. gluster volume info
2. ls -l of the directory containing the missing files from the mount point
and from the individual bricks.
Regards,
Nithya
On Thu, 29 Aug 2019 at 18:57, Pat Riehecky wrote:
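A sketch of how those two items might be gathered, assuming a client mount at /mnt/myvol and a brick path of /bricks/b1 (both placeholders):

gluster volume info myvol
ls -l /mnt/myvol/problem_dir      # from the client mount point
ls -l /bricks/b1/problem_dir      # repeat on each brick hosting the directory
getfattr -d -m . -e hex /bricks/b1/problem_dir    # optional: per-brick directory xattrs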
On Thu, 19 Sep 2019 at 15:40, Milewski Daniel
wrote:
> I've observed an interesting behavior in Gluster 5.6. I had a file
> which was placed on an incorrect subvolume (apparently by the rebalancing
> process). I could stat and read the file just fine over the FUSE mount
> point, with this entry
On Thu, 27 Jun 2019 at 12:17, Nithya Balachandran
wrote:
> Hi,
>
>
> On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
>
>> I have a 9-brick, replica 2+A cluster and plan to (permanently) remove
>> one of the three subvolumes. I think I've worked out how to do it,
Hi,
On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
> I have a 9-brick, replica 2+A cluster and plan to (permanently) remove
> one of the three subvolumes. I think I've worked out how to do it, but
> want to verify first that I've got it right, since downtime or data loss
> would be Bad
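The reply is not shown in this snippet, but the usual sequence for retiring one replica set from a distribute-replicate volume looks roughly like this (volume and brick names are placeholders):

gluster volume remove-brick myvol nodeB:/brick nodeC:/brick nodeA:/arbiter start
gluster volume remove-brick myvol nodeB:/brick nodeC:/brick nodeA:/arbiter status
# commit only after status reports the migration completed with no failures
gluster volume remove-brick myvol nodeB:/brick nodeC:/brick nodeA:/arbiter commit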
On Thu, 7 Nov 2019 at 15:15, Shreyansh Shah
wrote:
> Hi,
> Running distributed gluster 5.10 with 6 nodes and 2 bricks on each node (12
> in total).
> For some reason the files under /.glusterfs were deleted.
> After that we have loads of broken symlinks, but the data on the disks exists.
> Due to
Hi,
For the memory increase, please capture statedumps of the process at
one-hour intervals and send them across.
https://docs.gluster.org/en/latest/Troubleshooting/statedump/ describes how
to generate a statedump for the client process.
Regards,
Nithya
On Wed, 13 Nov 2019 at 05:18, Jamie
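Per the linked statedump documentation, a client (fuse mount) statedump is triggered by sending SIGUSR1 to the glusterfs client process, with the output landing in the statedump directory. A rough hourly-capture sketch, with the mount point as a placeholder:

pid=$(pgrep -f "glusterfs.*/mnt/myvol" | head -n1)   # pid of the fuse client for this mount
for i in 1 2 3; do
    kill -USR1 "$pid"     # writes a glusterdump.<pid>.dump.<timestamp> file
    sleep 3600
done
ls -ltr /var/run/gluster/glusterdump.*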
Fermi National Accelerator Laboratory
> www.fnal.gov
> www.scientificlinux.org
>
>
> ________
> From: Nithya Balachandran
> Sent: Thursday, September 19, 2019 10:14 PM
> To: Patrick Riehecky
> Cc: gluster-users
> Subject: Re: [Gluster-user