Hi,
We would definitely be interested in this. Thank you for contacting us. To
start, we can have an online conference. Could you suggest a few possible
dates and times for the week (preferably between 7:00 AM and 9:00 PM IST)?
Adding Anoop and Gunther, who are also the main contributors to the
Gluster-Sa
> > require one of those options set to 'on'?
> >
> > I'll start another test shortly and activate one of those 2 options;
> > maybe there's a connection between those 3 options?
> >
> >
> > Best Regards,
> > Hubert
Thank you for reporting this. I had tested this on my local setup and the
issue was resolved even with quick-read enabled. Let me test it again.
Regards,
Poornima
On Mon, Apr 15, 2019 at 12:25 PM Hu Bert wrote:
> fyi: after setting performance.quick-read to off, network traffic
> dropped to normal
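For reference, the workaround described above is a single volume-set command;
a minimal sketch, with VOLNAME as a placeholder:
$ gluster volume set VOLNAME performance.quick-read off  # the workaround
$ gluster volume set VOLNAME performance.quick-read on   # revert once on a fixed build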
Do you have a plain distributed volume without any replication? If so,
replace-brick should copy the data from the faulty brick to the new brick,
unless there is some old data which would also need a rebalance.
Using add-brick followed by remove-brick and doing a rebalance is
inefficient; I think we sh
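A minimal sketch of the replace-brick path for a plain distributed volume;
the volume name, host, and brick paths below are hypothetical:
$ gluster volume replace-brick VOLNAME server1:/bricks/old server1:/bricks/new commit force
$ gluster volume rebalance VOLNAME start  # only if older data needs redistributing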
+Sunny
On Wed, Apr 10, 2019, 9:02 PM Gomathi Nayagam
wrote:
> Hi User,
>
> We are testing Gluster geo-replication. It is taking nearly 8 minutes to
> transfer 16 GB of data between the DCs, while transferring the same
> data over plain rsync took only 2 minutes. Can we know if we are miss
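One thing worth checking for throughput questions like the above is the
geo-rep sync parallelism; a sketch with hypothetical session names (sync_jobs
is a real tunable, the value shown is only illustrative):
$ gluster volume geo-replication mastervol slavehost::slavevol config
$ gluster volume geo-replication mastervol slavehost::slavevol config sync_jobs 6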
n replace brick/offline
migration.
[1] https://gluster.github.io/devblog/write-for-gluster
Thanks,
Poornima
> -Tom
>
> On Mon, Apr 1, 2019 at 9:56 PM Poornima Gurusiddaiah
> wrote:
>
>> You could also try xfsdump and xfsrestore if your brick filesystem is xfs
>> and the destinati
You could also try xfsdump and xfsrestore if your brick filesystem is XFS
and the destination disk can be attached locally. This will be much faster.
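A sketch of the dump-and-restore, assuming the brick is an XFS filesystem and
the new disk is mounted locally at a hypothetical path:
$ xfsdump -l 0 - /bricks/brick1 | xfsrestore - /mnt/newdisk  # level-0 dump piped into the restore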
Regards,
Poornima
On Tue, Apr 2, 2019, 12:05 AM Tom Fite wrote:
> Hi all,
>
> I have a very large (65 TB) brick in a replica 2 volume that needs t
On Fri, Mar 29, 2019, 10:03 PM Jim Kinney wrote:
> Currently running 3.12 on CentOS 7.6. Doing cleanups on split-brain and
> out-of-sync files that need healing.
>
> We need to migrate the three replica servers to gluster v. 5 or 6. Also
> will need to upgrade about 80 clients as well. Given that a comp
This feature is not under active development, as it was not widely used.
AFAIK it's not a supported feature.
+Nithya +Raghavendra for further clarifications.
Regards,
Poornima
On Wed, Mar 27, 2019 at 12:33 PM Lucian wrote:
> Oh, that's just what the doctor ordered!
> Hope it works, thanks
>
> On 27
From the client log, it looks like the host is null and the port is 0, hence the
client is not able to connect to the bricks (Gluster volume). The client
tries to connect to the glusterd daemon on the host specified in the mount
command to get the hosts and ports (volfile) on which the bricks are running.
Have
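A sketch of what that means in practice; the host and volume names are
hypothetical:
$ mount -t glusterfs server1:/gv0 /mnt/gv0  # server1 only serves the volfile
$ gluster volume status gv0                 # verify the brick hosts/ports glusterd hands out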
> On Fri,
This high memory consumption is not normal; it looks like a memory leak.
Is it possible to try it on a test setup with gluster-6rc? What kind
of workload goes into the FUSE mount: large files or small files? We need
the following information to debug further:
- Gluster volume info output
-
On Thu, Feb 28, 2019, 8:44 PM Tami Greene wrote:
> I'm missing some information about how the Gluster volume creates the
> metadata allowing it to see and find the data on the bricks. I've been
> told not to write anything to the bricks directly, as glusterfs cannot
> create the metadata and
On Wed, Feb 27, 2019, 11:52 PM Ingo Fischer wrote:
> Hi Amar,
>
> sorry to jump into this thread with a connected question.
>
> When installing via "apt-get" (and so using Debian packages, and also
> systemd to start/stop glusterd), is the online upgrade process from
> 3.x/4.x to 5.x still needed as
If you are referring to the single head server as the Samba node, then deploy
Samba on the other server nodes and create a Samba cluster using CTDB.
Regards,
Poornima
On Sat, Feb 9, 2019, 8:31 PM Jim Laib wrote:
> I'm using a single server (gluster client) to mount a gluster replicated
> volume usi
Is this a new volume? Has it never been mounted successfully? If so, try
changing the firewall settings to allow the Gluster ports, and also check the
SELinux settings.
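A sketch of the firewall and SELinux checks, assuming firewalld; the port
ranges are the common defaults, so verify them for your version:
$ firewall-cmd --permanent --add-port=24007-24008/tcp  # glusterd management
$ firewall-cmd --permanent --add-port=49152-49251/tcp  # brick ports, one per brick
$ firewall-cmd --reload
$ getenforce  # if mounting works after a temporary 'setenforce 0', SELinux policy is the culprit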
Regards,
Poornima
On Fri, Oct 26, 2018, 1:26 AM Oğuz Yarımtepe
wrote:
> One more addition:
>
> # gluster volume info
>
>
> Volume Name: vol
On Tue, Oct 2, 2018 at 5:26 PM Diego Remolina wrote:
> Dear all,
>
> I have a two-node setup running on CentOS with gluster version
> glusterfs-3.10.12-1.el7.x86_64.
>
> One of my nodes died (motherboard issue). Since I had to keep things
> running, I modified the quorum to below 50% to make sure I c
To enable nl-cache, please use the group option instead of a single volume set:
# gluster vol set VOLNAME group nl-cache
This sets a few other things, including timeouts, invalidation, etc.
To enable the option Raghavendra mentioned, you'll have to set it
explicitly, as it's not part of the group option.
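A sketch of the difference; the volume name is a placeholder, and
nl-cache-timeout only stands in for whichever option needs explicit setting:
$ gluster volume set VOLNAME group nl-cache                    # whole profile at once
$ gluster volume set VOLNAME performance.nl-cache-timeout 600  # one option, set explicitly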
Hi,
Parallel-readdir is an experimental feature in 3.10; can you disable the
performance.parallel-readdir option and see if the files are visible? Does an
unmount and remount help?
Also, if you want to use parallel-readdir in production, please use 3.11 or
greater.
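The toggle in question, sketched with placeholder volume and mount names:
$ gluster volume set VOLNAME performance.parallel-readdir off
$ umount /mnt/vol && mount -t glusterfs server1:/VOLNAME /mnt/vol  # then re-list from a fresh mount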
Regards,
Poornima
- Original Message -
> From: "Shyam"
> To: "Gluster Devel"
> Cc: gluster-users@gluster.org
> Sent: Thursday, April 13, 2017 8:17:34 PM
> Subject: Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and
> feature tracking
>
> On 02/28/2017 10:17 AM, Shyam wrote:
> > Hi,
> >
>
Hi,
Rajesh Joseph and I would like to present on:
Topic: Performance bottlenecks for metadata workload in Gluster
Category: Performance and Stability
Abstract:
We will present an analysis of the profile info for different
metadata workloads (create, listing, rename, copy, etc.) and what are
Hi,
The error that you see in the log file is fixed as part of the patch
http://review.gluster.org/#/c/10206/ (release 3.8.0).
But these errors are not responsible for the "Transport endpoint not
connected" issues. Can you check if there are any other errors reported in the log?
Regards,
Poornima
- Original Message -
> From: "Lindsay Mathieson"
> To: "Kaushal M" , gluster-users@gluster.org
> Cc: "Gluster Devel"
> Sent: Monday, July 4, 2016 4:23:37 PM
> Subject: Re: [Gluster-users] 3.7.12/3.8.qemu/proxmox testing
>
> On 4/07/2016 7:16 PM, Kaushal M wrote:
> > An update on this, w
Hi,
Whenever a new fd is created, it is allocated from the mem-pool; if the mem-pool
is full, it will be calloc'd. The current limit for the fd mem-pool is 1024; if
there are more than 1024 fds open, then performance may be affected.
Also, the unix socket used with glfs_set_volfile_server() is only f
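A minimal gfapi sketch showing where glfs_set_volfile_server() and the fd
allocations fit; the volume and host names are hypothetical, and the header
path may vary by version (build with: cc demo.c -lgfapi):

#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("gv0");                          /* hypothetical volume */
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);  /* glusterd mgmt port */
    if (glfs_init(fs) != 0) {
        glfs_fini(fs);
        return 1;
    }
    /* each open fd comes from the fd mem-pool; past the 1024 limit
     * mentioned above, further fds are calloc'd */
    glfs_fd_t *fd = glfs_creat(fs, "/example", O_CREAT | O_WRONLY, 0644);
    if (fd)
        glfs_close(fd);
    glfs_fini(fs);
    return 0;
}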
Hi,
So, you can find the documentation for configuring CTDB for a Gluster backend
here:
https://github.com/gluster/glusterdocs/blob/master/Administrator%20Guide/Accessing%20Gluster%20from%20Windows.md
I guess you have missed creating the CTDB volume, mounting it on all nodes, and
using that vol
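In case it helps, a sketch of the two CTDB files that doc walks through; all
IPs and interface names below are made up:

/etc/ctdb/nodes (private IP of every Samba node, one per line):
10.0.0.1
10.0.0.2

/etc/ctdb/public_addresses (floating IPs that CTDB fails over):
192.168.1.100/24 eth0
192.168.1.101/24 eth0

The recovery lock file should live on the mounted CTDB volume, e.g.
CTDB_RECOVERY_LOCK=/gluster/lock/lockfile (path illustrative).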
> To: "Pranith Kumar Karampuri"
> Cc: "Gluster Devel" , "Patrick Glomski"
> , gluster-users@gluster.org, "David Robinson"
> , "Poornima Gurusiddaiah"
> Sent: Friday, January 22, 2016 8:37:50 AM
> Subject: Re: [Gluster-devel] [Gluster-
Answers inline
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Ankireddypalle Reddy" , "Vijay Bellur"
> , gluster-users@gluster.org,
> "Shyam" , "Niels de Vos"
> Sent: Wednesday, December 16, 2015 1:14:35 PM
> Subject: Re: [Gluster-users] libgfapi access
>
>
>
> On 12/1
Hi,
This looks related to POSIX locks in Gluster; does the Gluster log file report
any errors or warnings?
There are a few known issues when a client/server goes down while holding locks;
were there any client/server reconnects?
Does restarting the brick processes (volume stop and start) solve the issue?
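The restart plus a lock inspection, sketched with a placeholder volume name
(note that a volume stop is disruptive):
$ gluster volume stop VOLNAME
$ gluster volume start VOLNAME
$ gluster volume statedump VOLNAME  # statedumps include the held-locks state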
Hi,
If you are connected to a node via Samba and that node goes down, you will
have to manually connect to the other Samba node,
unless the Samba nodes are clustered and you have an HA solution on top of Samba.
CTDB could be one of your options; it provides both clustering and IP failover
for S
Could you please provide the backtrace of the dump, and the complete client log
at the time of the crash?
Is the crash seen on 3.6?
Regards,
Poornima
- Original Message -
> From: "Josh Boon"
> To: "Gluster-users@gluster.org List"
> Sent: Sunday, March 15, 2015 7:55:52 PM
> Subject: Re
fly
> (when receiving the list of snap names from glusterd?) so that the timezone
> application can be dynamic (which is what users would expect).
> Thanks
> On Thu Jan 08 2015 at 3:21:15 AM Poornima Gurusiddaiah < pguru...@redhat.com
> > wrote:
> > Hi,
>
> &g
Hi,
Windows has a feature called shadow copy. This is widely used by
Windows users to view the previous versions of a file.
For shadow copy to work with a glusterfs backend, the problem was that
the clients expect snapshot names to contain a timestamp in some
format.
After evaluating th
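For context, a hedged sketch of the Samba side once snapshot names carry such
a timestamp; the share name and format string are illustrative:
[share]
    vfs objects = glusterfs shadow_copy2
    shadow:snapdir = .snaps                   # gluster USS snapshot directory
    shadow:format = @GMT-%Y.%m.%d-%H.%M.%S   # timestamp format embedded in snap names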
Could be this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1168080
Regards,
Poornima
- Original Message -
From: "Eric Ewanco"
To: gluster-users@gluster.org
Sent: Wednesday, November 26, 2014 12:43:56 AM
Subject: Re: [Gluster-users] Gluster volume not automounted when peer is d
Hi,
This is strange behavior; I tried reproducing it with a simple batch script
that checks if the file exists, and I didn't see any issue.
Could you please provide the following data:
$ testparm -v | grep case
$ testparm -v | grep unix
Also, could you try disabling metadata caching for glust
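Assuming "metadata caching" here refers to md-cache/stat-prefetch, a sketch
with a placeholder volume name:
$ gluster volume set VOLNAME performance.stat-prefetch off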
u provide the workload (data size, number of files, operations) that is
leading to the memory leak?
This will help us reproduce and debug.
Regards,
Poornima
- Original Message -
From: "Tamas Papp"
To: "Pranith Kumar Karampuri" , "Poornima Gurusiddaiah"
Cc
extent.
For further debugging, could you provide the core dump or steps to reproduce, if
available?
Regards,
Poornima
- Original Message -
From: "Tamas Papp"
To: "Poornima Gurusiddaiah"
Cc: Gluster-users@gluster.org
Sent: Sunday, August 3, 2014 10:33:17 PM
Subject: Re: [
Hi,
Can you provide the statedump of the process? It can be obtained as follows:
$ gluster --print-statedumpdir  # create this directory if it doesn't exist.
$ kill -USR1 <pid-of-the-process>  # generates the statedump.
Also, exporting Gluster via the Samba-VFS-plugin method is preferred over a FUSE
mount export. For more deta
Hi Jon,
I believe the bug is fixed as part of the patch
http://review.gluster.org/#/c/8374/.
But this patch (fix) is not in glusterfs-api-3.5.1-1.el6.x86_64; I have posted
the same for the 3.5-2 release.
Hopefully the fix will be available in the next release.
Regards,
Poornima
- Original Message
Hi,
Accessing the same volume via different threads using gfapi should work fine.
The crash you seem to be hitting may be because of a mismatch between the version
of libgfapi QEMU was built against and the libgfapi currently present on the
system. Check
the same; if that doesn't seem to be an issue, plea
Hi,
It seems to be a bug; this may have something to do with flock. In the
vfs_glusterfs
module, the function vfs_gluster_kernel_flock() returns an ENOSYS error.
To solve this issue, either modify the above-mentioned function to return 0
instead of ENOSYS, or there may be some parameter in smb.conf t
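A hedged guess at the smb.conf side, since the parameter isn't named above;
"kernel share modes = no" should bypass the kernel flock path entirely:
[gluster-share]
    kernel share modes = no  # assumption: avoids the call into vfs_gluster_kernel_flock()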