memory. A brief look at the git log of xlators/barrier
tells me there have been no major changes.
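For reference, one way to check this yourself from a glusterfs source
checkout; the exact path is an assumption (current trees keep the
translator under xlators/features/barrier):

    # recent commit history touching the barrier translator
    git log --oneline -- xlators/features/barrier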
So, in any case, I would just read the code of the barrier translator unless
the current maintainer can answer this.
Varun
On Mon, Jan 22, 2024 at 11:30 AM Stefan Kania
wrote:
> Hi to all,
>
> The do
Feel free to contribute to the documentation :-)
On Tue, Jan 23, 2024 at 12:43 AM Stefan Kania
wrote:
> Hi Varun,
>
> On 23.01.24 at 01:37, Varun wrote:
> > I'm not sure which doc you are referring to. It would help if you can
> > share it.
> Here,
>
>
into glusterd logs. And
can you check whether the file on the backend contains extended attributes
related to quota (trusted.glusterfs.quota.*)?
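As a minimal sketch of such a check (the brick path is an assumption; run
it on the brick's backend directory, not on the mount):

    # dump quota-related extended attributes in hex
    getfattr -d -m 'trusted.glusterfs.quota' -e hex /bricks/brick1/path/to/dir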
Thanks
Varun Shastry
On Friday 08 February 2013 11:09 PM, Nux! wrote:
Hello,
I've upgraded my glusterfs servers and clients to 3.4qa8. Now I'm trying to
set the client read quotas with /usr/bin/quota as usual?
No.
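Quota usage on a gluster volume is reported through the gluster CLI
instead; a minimal sketch, assuming a volume name of VOLNAME:

    # list configured limits and current usage for all quota-limited paths
    gluster volume quota VOLNAME list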
Varun
Hi Michael,
Please take a look at
http://gluster.org/pipermail/gluster-users/2013-April/035953.html. I
believe this is the same issue as the one mentioned there.
- Varun Shastry
On Tuesday 30 April 2013 08:32 PM, Michael Brown wrote:
I'm hitting the 'cannot find stripe size' bug:
[2013-04-29 17:42
Hi,
gluster volume reset is the counterpart of gluster volume set: it brings
options back to their default state.
Usage: volume reset VOLNAME [option|all] [force]
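A minimal usage sketch, reusing the volume and option from the question
below:

    # reset a single option to its default
    gluster volume reset MYVOLUME nfs.export-dir
    # or reset all options on the volume
    gluster volume reset MYVOLUME all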
- Varun Shastry
On Thursday 16 May 2013 04:53 PM, deb...@boku.ac.at wrote:
Hi,
I set this option:
gluster volume set MYVOLUME nfs.export-dir /test
How can I delete this option or set
Thanks
Varun Shastry
On Thursday 01 August 2013 02:00 PM, Nux! wrote:
Hello again,
Another error I'm seeing a lot in the logs:
== /var/log/glusterfs/nfs.log ==
[2013-08-01 08:24:03.512013] W [quota.c:2167:quota_fstat_cbk]
0-488_1152-quota: quota context not set in inode
(gfid:f862ec15-3739-42a9
- Varun Shastry
I am trying to figure out a way to mount a directory within a gluster
volume on a web server. This directory has quota enabled to limit
a user's usage.
gluster config:
Volume Name: test-volume
features.limit-usage: /gluster/Images:1GB
features.quota: on
I want to mount
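The question is cut off here, but one hedged sketch of mounting such a
subdirectory (server name and mount point are assumptions, and native
subdirectory mounts need a reasonably recent GlusterFS):

    # mount only the quota-limited subdirectory on the web server
    mount -t glusterfs server1:/test-volume/gluster/Images /var/www/images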
Hi Khoi,
Please go through this mail thread for the same question.
http://gluster.org/pipermail/gluster-users/2009-April/002041.html
- Varun Shastry
On Wednesday 12 February 2014 09:57 PM, Khoi Mai wrote:
In my 4-node gluster setup I was hunting down a split-brain report. 2 of
my 4 bricks show
.) from fuse/native mount can resolve the issue until the
next restart of the bricks.
- Varun Shastry
On Sunday 30 March 2014 10:32 PM, Khoi Mai wrote:
[2014-03-30 17:00:39.330462] E
[marker-quota-helper.c:229:mq_dict_set_contribution]
(-->/usr/lib64/glusterfs/3.4.2/xlator/debug/io-stats.so
Can you please provide this info?
- gluster --version
- getfattr -d -m . -e hex <quota-limited-directories-in-the-backend>
- logs
Thanks
Varun Shastry
On Monday 07 April 2014 06:23 PM, Barry Stetler wrote:
whether we have a problem at level (i). I
have only one brick's information here, so can you please check
where the problem lies among the above two cases?
- Varun Shastry
Server space is mounted on
[root@glusterfront1 dump]# gluster
/
I think this feature is already (partially?) implemented as part of the
snapshot feature.
The feature proposed here concentrates only on the user serviceability
of the snapshots taken.
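For context, a hedged sketch of how user-serviceable snapshots are
switched on as the feature later shipped (volume name and mount point are
assumptions):

    # enable user-serviceable snapshots (USS) on a volume
    gluster volume set myvol features.uss enable
    # snapshots then show up under the hidden .snaps directory on the mount
    ls /mnt/myvol/.snaps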
- Varun Shastry
- Original Message -
From: Anand Subramanian ansub...@redhat.com
To: Paul Cuzner
-snapview-server.
Yes, it is handled through glusterd portmapper.
* Since a snap volume will refer to multiple bricks, we'll need
more brick daemons as well. How are *those* managed?
Brick processes associated with the snapshot will be started.
- Varun Shastry
* How does snapview-server get distributed?
Hi Jesper,
This is expected when the volume is plain distribute: when you lose the
connection to the brick(s), there is no way to
access the data residing on that (those) brick(s).
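If data must stay reachable when a brick goes down, replication is the
usual answer; a hedged sketch (hosts and brick paths are assumptions):

    # a 2-way replicated volume instead of a plain distribute one
    gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1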
- Varun Shastry
) implementation and
has no effect on the new one (3.5); also, the option needs to be
deprecated (better to track it through a bug).
Please use the options
- gluster volume quota <volname> [hard|soft]-timeout <time>
while working with quota in 3.5.z versions.
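For illustration, a hedged sketch of the 3.5 quota workflow; volume name,
path, limit, and timeout values are assumptions:

    gluster volume quota vol0 enable
    gluster volume quota vol0 limit-usage /data 10GB
    gluster volume quota vol0 soft-timeout 60
    gluster volume quota vol0 hard-timeout 5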
- Varun Shastry
- gluster volume quota vol0 enable
/bash_completion.d/gluster)
To the current bash session: source extras/command-completion/gluster.bash
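A compact sketch of both ways to enable it (the completion.d destination
is an assumption and varies by distribution):

    # system-wide, for future shells
    cp extras/command-completion/gluster.bash /etc/bash_completion.d/gluster
    # or only for the current shell
    source extras/command-completion/gluster.bash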
Thanks
Varun Shastry
On Monday 16 June 2014 05:18 PM, Justin Clift wrote:
On 16/06/2014, at 11:02 AM, Varun Shastry wrote:
The patch (http://review.gluster.org/7979), currently merged upstream, adds
bash command/tab completion
(https://en.wikipedia.org/wiki/Command-line_completion) for gluster
should be read/write on both nodes (for better
performance). In case node A fails, data should be accessed from node B.
Hi Chandrahasa,
I think only the Erasure Coding feature (which is not *yet* merged but under
review) can provide failure tolerance without using replication.
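As the feature later landed, erasure coding is exposed as the 'disperse'
volume type; a hedged sketch where hosts, brick paths, and counts are
assumptions:

    # 3 bricks, tolerating the loss of any 1 of them
    gluster volume create myvol disperse 3 redundancy 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1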
- Varun Shastry
as bricks, and computes the owner of the bricks, which is
a set of n bricks. The stripe module is responsible for contacting the
actual physical brick.
- Varun Shastry
Thanks,
Cyril.