On Thu, May 4, 2017 at 4:36 PM, Xavier Hernandez
wrote:
> Hi,
>
> On 30/04/17 06:03, Raghavendra Gowdappa wrote:
>
>> All,
>>
>> It's a common perception that the resolution of a file having a linkto file
>> on the hashed-subvol requires two hops:
>>
>> 1. client to
Hi, I have a gluster volume with bricks spread over several physical
drives. I now want to upgrade my server to a new system and plan to move
the drives from the old server to the new server, with a different host
name and IP address. Can I shut down the gluster volume on the old server,
move
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Raghavendra Gowdappa"
> Cc: "Gluster Devel" , "gluster-users"
>
> Sent: Thursday, May 4, 2017 4:03:18 PM
> Subject: Re:
On Thu, May 4, 2017 at 3:19 AM, Jiffin Tony Thottan
wrote:
>
>
> On 04/05/17 02:03, Praveen George wrote:
>
> Hi Team,
>
> We’ve been intermittently seeing issues where postgresql is unable to
> create a table, or some info is missing.
>
> Postgresql logs the following
Hi,
I'm copying a web folder with a total size of 109 GB to a (locally) mounted volume.
It has already consumed 260 GB and is still copying.
Is there any sizing guide so I can predict usage in the future?
It is an ext4-formatted logical volume.
Thanks.
- Original Message -
> From: "David Miller"
> To: gluster-users@gluster.org
> Sent: Thursday, May 4, 2017 2:48:38 PM
> Subject: [Gluster-users] Very odd performance issue
>
> Background: 4 identical gluster servers with 15 TB each in 2x2 setup.
> CentOS Linux release
Vijay,
Thanks for pointing to the relevant source code.
Sent from my iPhone
On May 4, 2017, at 6:32 PM, Vijay Bellur
> wrote:
On Thu, May 4, 2017 at 9:12 AM, Ankireddypalle Reddy
>
On Thu, May 4, 2017 at 9:12 AM, Ankireddypalle Reddy
wrote:
> Hi,
>
>    Can a glusterfs snapshot volume be accessed through libgfapi?
>
Yes, activated snapshots can be accessed through libgfapi. User-serviceable
snapshots in Gluster make use of gfapi to access
Hi,
we are deploying a large (24-node/45-brick) cluster and noted that the RHES
guidelines limit the number of data bricks in a disperse set to 8. Is
there any reason for this? I am aware that you want this to be a power of
2, but as we have a large number of nodes we were planning on going with
Background: 4 identical gluster servers with 15 TB each in 2x2 setup.
CentOS Linux release 7.3.1611 (Core)
glusterfs-server-3.9.1-1.el7.x86_64
client systems are using:
glusterfs-client 3.5.2-2+deb8u3
The cluster has ~12 TB in use with 21 million files. Lots of jpgs. About 12
clients
Thanks for the reply. I will try it out, but I am also facing one more issue:
replicated volumes returning different timestamps.
So is this because of Bug 1426548 (Openshift Logging ElasticSearch FSLocks
when using GlusterFS storage backend)?
Hi,
Same here: when I reboot the node I have to manually execute "pcs cluster start
gluster01", even though pcsd is already enabled and started.
Gluster 3.8.11
Centos 7.3 latest
Installed using CentOS Storage SIG repository
--
Respectfully
Mahdi A. Mahdi
From:
On Thu, May 4, 2017 at 10:41 PM, Abhijit Paul
wrote:
> Since I am new to gluster, could you please explain how to turn off/disable the
> "perf xlator options"?
>
>
$ gluster volume set <volname> performance.stat-prefetch off
$ gluster volume set <volname> performance.read-ahead off
$ gluster
Since I am new to gluster, could you please explain how to turn off/disable the
"perf xlator options"?
> On Wed, May 3, 2017 at 8:51 PM, Atin Mukherjee
> wrote:
>
>> I think there is still some pending stuff in some of the gluster perf
>> xlators to make that work complete. Cced the
Rafi,
Thanks. Will change the volume name to the notation that you
provided and try it out.
Thanks and Regards,
Ram
From: Mohammed Rafi K C [mailto:rkavu...@redhat.com]
Sent: Thursday, May 04, 2017 11:11 AM
To: Ankireddypalle Reddy; Gluster Devel (gluster-de...@gluster.org);
Hi Ram,
You can access a snapshot through libgfapi; the volname just becomes
something like /snaps/<snapname>/<volname>. I can give you some example
programs if you have any trouble doing so.
Or you can use the USS feature to access a snapshot through the main volume
via libgfapi (it also uses the above
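For illustration, a tiny hypothetical helper (the function name is my own, not part of gfapi) that builds a snapshot volname following the convention described above:

```python
# Hypothetical helper, not a gfapi API: builds the volume name used to
# address an activated snapshot through libgfapi, per the
# /snaps/<snapname>/<volname> convention mentioned above.
def snap_volname(snapname, volname):
    """Return the gfapi volname for an activated snapshot."""
    return "/snaps/{}/{}".format(snapname, volname)

# e.g. snapshot "daily-1" of volume "gv0":
print(snap_volname("daily-1", "gv0"))  # /snaps/daily-1/gv0
```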
Hi,
I would like some advice on setting up a replicated or geo replicated setup
in Gluster.
Right now the setup consists of 1 storage server with no replica serving
gluster volumes to clients.
We need to have some sort of replication of it by adding a second server
but the catch is this second
Hi,
     Can a glusterfs snapshot volume be accessed through libgfapi?
Thanks and Regards,
Ram
On Thu, May 4, 2017 at 4:38 PM, Niels de Vos wrote:
> On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
>> On Wed, May 3, 2017 at 2:36 PM, Kaushal M wrote:
>>
>> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>> >
On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
> On Wed, May 3, 2017 at 2:36 PM, Kaushal M wrote:
>
> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
> > wrote:
> > >
> > >
> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam
Hi,
On 30/04/17 06:03, Raghavendra Gowdappa wrote:
All,
It's a common perception that the resolution of a file having a linkto file on the
hashed-subvol requires two hops:
1. client to hashed-subvol.
2. client to the subvol where file actually resides.
While it is true that a fresh lookup
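The two-hop description above can be sketched as a toy simulation. This is purely illustrative Python, not GlusterFS code; the hash function and data structures are stand-ins for DHT's internals:

```python
# Toy model of DHT's linkto indirection: a file's name hashes to a
# "hashed" subvolume; if the data actually lives elsewhere (e.g. after a
# rename or rebalance), the hashed subvol holds only a linkto entry
# naming the "cached" subvol, so a fresh lookup costs two hops.
import hashlib

SUBVOLS = ["subvol-0", "subvol-1", "subvol-2"]

def hashed_subvol(name):
    """Pick a subvol from the file name (stand-in for DHT's real hash)."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return SUBVOLS[h % len(SUBVOLS)]

def lookup(name, linkto, data):
    """Return (subvol holding the data, number of hops)."""
    sv = hashed_subvol(name)
    hops = 1                    # hop 1: client -> hashed subvol
    if name in data.get(sv, {}):
        return sv, hops         # file really lives on its hashed subvol
    target = linkto[sv][name]   # linkto file names the cached subvol
    hops += 1                   # hop 2: client -> cached subvol
    return target, hops

# A file that ended up on a non-hashed subvol:
name = "report.txt"
hsv = hashed_subvol(name)
cached = next(s for s in SUBVOLS if s != hsv)
data = {cached: {name: b"contents"}}
linkto = {hsv: {name: cached}}
print(lookup(name, linkto, data))  # two hops needed
```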
+Krutika
Krutika started work on this, but it is very long term; not a simple thing to do.
On Thu, May 4, 2017 at 3:53 PM, Ankireddypalle Reddy
wrote:
> Pranith,
>
> Thanks. Is there any work in progress to add this support?
>
>
>
> Thanks and Regards,
>
On Sun, Apr 30, 2017 at 9:33 AM, Raghavendra Gowdappa
wrote:
> All,
>
> It's a common perception that the resolution of a file having a linkto file
> on the hashed-subvol requires two hops:
>
> 1. client to hashed-subvol.
> 2. client to the subvol where file actually resides.
>
Pranith,
        Thanks. Is there any work in progress to add this support?
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Thursday, May 04, 2017 6:17 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-de...@gluster.org);
Pranith,
        Thanks. Does it mean that a given file can be written by only
one client at a time? If multiple clients try to access the file in write mode,
does it lead to any kind of data inconsistencies?
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri
On Thu, May 4, 2017 at 3:43 PM, Ankireddypalle Reddy
wrote:
> Pranith,
>
> Thanks. Does it mean that a given file can be written by
> only one client at a time? If multiple clients try to access the file in
> write mode, does it lead to any kind of data
On Wed, May 3, 2017 at 2:36 PM, Kaushal M wrote:
> On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
> wrote:
> >
> >
> > On Sun, Apr 30, 2017 at 9:01 PM, Shyam wrote:
> >>
> >> Hi,
> >>
> >> Release 3.11 for gluster has been
Since I am new to gluster, could you please explain how to turn off/disable the
"perf xlator options"?
On Wed, May 3, 2017 at 8:51 PM, Atin Mukherjee wrote:
> I think there is still some pending stuff in some of the gluster perf
> xlators to make that work complete. Cced the
On 04/05/17 02:03, Praveen George wrote:
Hi Team,
We’ve been intermittently seeing issues where postgresql is unable to
create a table, or some info is missing.
Postgresql logs the following error:
ERROR: unexpected data beyond EOF in block 53 of relation
base/16384/12009
HINT: This
Hi
I'm trying to remove 2 bricks from a Distributed-Replicate volume without losing data,
but it fails during rebalance.
Any help is appreciated...
What I do:
# gluster volume remove-brick glu_linux_dr2_oracle replica 2
glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle