Taste-Of-IT wrote:
> On 2016-02-03 21:24, Raghavendra Bhat wrote:
>
>> I think this is what is happening. Someone please correct me if I am
>> wrong.
>>
>> I think this is happening because the nfs client, nfs server and
>> bricks are on the same machine. What happens is, w
the issue happens again?
Regards,
Raghavendra
On Wed, Feb 3, 2016 at 2:32 PM, Taste-Of-IT wrote:
> On 2016-02-03 20:09, Raghavendra Bhat wrote:
>
>> Hi,
>>
>> Is your nfs client mounted on one of the gluster servers?
>>
>> Regards,
>> Raghavendra
Hi,
Is your nfs client mounted on one of the gluster servers?
Regards,
Raghavendra
On Wed, Feb 3, 2016 at 10:08 AM, Taste-Of-IT wrote:
> Hello,
>
> I hope some expert can help. I have a 2-brick, 1-volume distributed GlusterFS
> in version 3.7.6 on Debian. The volume is shared via nfs. If I copy v
But, why should client version be a problem when brick-log-level is being
changed?
Regards,
Raghavendra
On Wed, Jan 27, 2016 at 9:57 AM, Atin Mukherjee
wrote:
> gluster volume status clients can give you the list of clients
> connected. With that, you would be able to scan through all clients and se
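A minimal sketch of the command mentioned above (the volume name `myvol` is a placeholder):

```shell
# List the clients currently connected to each brick of a volume.
# "myvol" is a placeholder volume name; substitute your own.
gluster volume status myvol clients
```

On recent releases the output typically lists, per brick, each connected client along with bytes read and written, which can then be cross-checked against client versions.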
Hi Laurent,
You can use either xfs or ext4 as the backend filesystem.
Regards,
Raghavendra
On Mon, Jan 25, 2016 at 11:10 AM, Laurent Le Van
wrote:
> Hello everyone,
>
> I'm trying to use GlusterFS in a Docker Container but volume creation
> doesn't work.
> I'm facing the "Setting extended attr
moving the peer to rejected state
1277822 - glusterd: probing a new node(>=3.6) from 3.5 cluster is
moving the peer to rejected state
Regards,
Raghavendra Bhat
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mail
. If you have a suitable topic to
discuss, please add it to the agenda.
Regards,
Raghavendra Bhat
Hi,
I have included the release notes in the announcement mail of
glusterfs-3.6.6.
http://www.gluster.org/pipermail/gluster-devel/2015-September/046821.html
Regards,
Raghavendra Bhat
On Fri, Oct 9, 2015 at 11:02 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> Adding 3.6.
Hi Oleksandr,
You are right. The description should have said it is the limit on the
number of inodes in the LRU list of the inode cache. I have sent a patch
for that.
http://review.gluster.org/#/c/12242/
Regards,
Raghavendra Bhat
On Thu, Sep 24, 2015 at 1:44 PM, Oleksandr Natalenko <
ole
FUSE mount cannot use more than 32 groups
1256245 - AFR: gluster v restart force or brick process restart doesn't
heal the files
1258069 - gNFSd: NFS mount fails with "Remote I/O error"
1173437 - [RFE] changes needed in snapshot info command's xml output.
Re
For this 2nd example (where the file is opened, unlinked and a graph
switch happens), there was a patch submitted long back.
http://review.gluster.org/#/c/5428/
Regards,
Raghavendra Bhat
3. Open-behind and unlink from a different client:
==
Wh
On 09/02/2015 12:45 PM, Raghavendra Bhat wrote:
Hi Christian,
I have been working on it for the past couple of days. I have not been
able to recreate the issue. I will continue trying to recreate it and get
back to you in a day or two.
Regards,
Raghavendra Bhat
Hi Christian,
As per our tests (me and
On 09/02/2015 12:45 AM, Christian Rice wrote:
This is still an issue for me, I don’t need
as we read it" even
though the file was not changed
Regards,
Raghavendra Bhat
does not store updated peerinfo objects.
1238074 - protocol/server doesn't reconfigure auth.ssl-allow options
1233036 - Fix shd coredump from tests/bugs/glusterd/bug-948686.t
Regards,
Raghavendra Bhat
- glusterfsd crashed after directory was removed from the mount
point, while self-heal and rebalance were running on
the volume
Regards,
Raghavendra Bhat
for that
node (thus discarding the existing uuid) and start using that new uuid.
Regards,
Raghavendra Bhat
with error: "E [socket.c:2495:socket_poller]
0-tcp.gluster-native-volume-3G-1-server: error in polling loop"
1211840 - glusterfs-api.pc versioning breaks QEMU
1204140 - "case sensitive = no" is not honored when "preserve case =
yes" is present in sm
tor).
Regards,
Raghavendra Bhat
r failed.
1182490 - Internal ec xattrs are allowed to be modified
1187547 - self-heal-algorithm with option "full" doesn't heal sparse
files correctly
1174170 - Glusterfs outputs a lot of warnings and errors when quota is
enabled
1212684 - GlusterD segfa
nsactions are run
1188064 - log files get flooded when removexattr() can't find a
specified key or value
1165938 - Fix regression test spurious failures
1192522 - index heal doesn't continue crawl on self-heal failure
1193970 - Fix spurious ssl-authz.t regression failure (ba
rsion.
What are the chances of getting this pushed into a release soon? I
can patch our hosts manually for the moment, but having this in the
package makes life much easier for maintenance.
Hi,
Since this is related to mounting, I can accept the patch for 3.6.3.
Regards,
Raghavendra Bhat
http://review.gluster.org/#/c/9712/
Regards,
Raghavendra Bhat
On Thursday 19 February 2015 07:34 PM, Venky Shankar wrote:
Hi folks,
Listed below is the initial patchset for the upcoming bitrot detection
feature targeted for GlusterFS 3.7. As of now, these set of patches
implement object signing. Myself and
oesn't heal sparse
files correctly
1174170 - Glusterfs outputs a lot of warnings and errors when quota is
enabled
1186119 - tar on a gluster directory gives message "file changed as we
read it" even though no updates to file in progress
Regards,
Raghavendra Bhat
On Tuesday 03 February 2015 06:36 PM, Kingsley wrote:
On Tue, 2015-02-03 at 12:14 +0530, Raghavendra Bhat wrote:
Hi Kingsley,
I will be making a beta release of 3.6.3 by the end of this week.
Regards,
Raghavendra Bhat
Hi,
That's great news.
Is there a page somewhere that lists the ch
available? I'd like to roll
our cluster out into production soon but would prefer to get this fixed
first.
Adding Raghavendra Bhat, maintainer of release-3.6, to provide an
update of the schedule for 3.6.3.
Cheers,
Vijay
defined incorrectly
1175645 - [USS]: Typo error in the description for USS under "gluster
volume set help"
1171259 - mount.glusterfs does not understand -n option
Regards,
Raghavendra Bhat
On Friday 26 December 2014 12:22 PM, Raghavendra Bhat wrote:
Hi,
glusterfs-3.6.2beta1 has been released and the rpms can be found here.
Regards,
Raghavendra Bhat
Hi Scott,
Can you please give the log files of the volume? You can find them in
/var/log/glusterfs/.
Regards,
Raghavendra Bhat
/var/log/glusterfs.
Regards,
Raghavendra Bhat
i - Niels de Vos & Shyam Ranganathan
index & io-threads - Pranith Karampuri
posix - Pranith Karampuri & Raghavendra Bhat
I'm wondering if there are any volunteers for maintaining the FUSE
component?
And maybe rewrite it to use libgfapi and drop the mount.glusterfs
script?
Niels
CCing people who know well about DHT and rebalance.
Regards,
Raghavendra Bhat
ASAP with my finding.
Regards,
Raghavendra Bhat
On 17 July 2014 17:31, Raghavendra Bhat wrote:
On Wednesday 16 July 2014 10:18 AM, David Raffelt wrote:
Hi Raghavendra,
No
Thanks
Dave
As per the cmd_log_history file (a h
)
fails.
Dave,
Please let me know if I have missed anything. This is my observation
based on the log files.
CCing Raghavendra G who might be able to clarify whether this is what
happened.
Regards,
Raghavendra Bhat
On 16 July 2014 14:47, Raghavendra Bhat wrote:
ROR
diagnostics.client-log-level: ERROR
server.root-squash: enable
Hi Dave,
Was rebalance running when you did above operations?
Regards,
Raghavendra Bhat
On 15 July 2014 15:29, Raghavendra Bhat wrote:
On Monday 14 July 2014 09:10 PM
On Monday 14 July 2014 09:10 PM, Pranith Kumar Karampuri wrote:
CCed Raghavendra Bhat who may know about the issue
Pranith
On 07/14/2014 08:01 PM, Joe Julian wrote:
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
Please file a bug report.
On July 14, 2014 12:38:11 AM PDT, David
On Friday 11 July 2014 10:04 AM, Pranith Kumar Karampuri wrote:
Niels may know. CCed him.
Can you try this option and see if it works?
gluster volume set nfs.trusted-sync on
By default this option is off.
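The suggestion above can be sketched as follows; note that `gluster volume set` also takes a volume name (shown here as the placeholder `myvol`):

```shell
# Enable trusted-sync for the gluster NFS server on a volume
# ("myvol" is a placeholder). The option is off by default.
gluster volume set myvol nfs.trusted-sync on

# The reconfigured option should then be visible under volume info.
gluster volume info myvol
```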
Regards,
Raghavendra Bhat
Pranith
On 07/11/2014 06:10 AM, Franco Broi wrote:
Hi
s may pause for a moment to read and check the message.
We can even list the snaps to be deleted even if we don't ask for
confirmation for each.
Raghavendra Talur
Agree with Raghavendra Talur. It would be better to ask the user without
force option.
On Wednesday 04 June 2014 11:23 AM, Rajesh Joseph wrote:
- Original Message -
From: "M S Vishwanath Bhat"
To: "Rajesh Joseph"
Cc: "Vijay Bellur" , "Seema Naik" , "Gluster
Devel"
Sent: Tuesday, June 3, 2014 5:55:27 PM
Subject: Re: [Gluster-devel] autodelete in snapshots
On 3 June 20
A bug has already been logged for this issue and we are working on it.
https://bugzilla.redhat.com/show_bug.cgi?id=762989
Regards,
Raghavendra Bhat
- Original Message -
From: "David Coulson"
To: "Raghavendra Bhat"
Cc: "Gluster General Discussion List"
all the ports
starting from 1023, found that those ports are not free (used by other processes),
found that port 80 is free, and just bound to it.
Regards,
Raghavendra Bhat
- Original Message -
From: "Tomasz Chmielewski"
To: "Gluster General Discussion List"
the above command, what is rails? In the description you have not
mentioned anything about rails. Can you please provide information about it?
>
>
>
> On Sep 8, 2011, at 2:32 AM, Raghavendra Bhat wrote:
>
> Hi Kazuyoshi Tlacaelel,
>
> Can you please provide the glusterfs client
from rpm,
then logs will be present in /var/log/glusterfs.
Also can you provide the output of gluster volume info ?
Thanks
Regards,
Raghavendra Bhat
On Thu, Sep 8, 2011 at 2:49 AM, Kazuyoshi Tlacaelel wrote:
> Two master-servers, one client.
>
> *Client mount point is*: */gluster*
>
log file about the 2nd brick also crossing the minimum free
disk limit.
Regards,
Raghavendra Bhat
On Wed, Sep 7, 2011 at 4:27 PM, Dan Bretherton wrote:
>
> On 17/08/11 16:19, Dan Bretherton wrote:
>
>>
>>
>>>
>>>
>>> Dan Bretherton wrote:
>
If you create a volume with only one brick and then add one more brick to the
volume, the volume will be of distribute type and not replicate. If the
replica feature is needed, then a replicate volume itself should be created;
to create a replicate volume, a minimum of 2 bricks is needed.
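A minimal sketch of creating a replicate volume up front; the hostnames (`server1`, `server2`), brick paths, and volume name below are placeholders:

```shell
# Create a 2-way replicate volume; hostnames and brick paths
# are placeholder names for illustration.
gluster volume create myvol replica 2 \
    server1:/data/brick1 server2:/data/brick1
gluster volume start myvol
```

With `replica 2`, every file is mirrored on both bricks; bricks added later must come in multiples of the replica count.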
-
Hi Chris,
http://patches.gluster.com/patch/3151/
Can you please apply this patch and see if this works for you?
Thanks
Regards,
Raghavendra Bhat
> Tejas,
>
> We still have hundreds of GBs to copy, and have not put the new file
> system into the test. So far the clients works