+1 to the idea.
On Mon, Aug 26, 2019 at 2:41 PM Niels de Vos wrote:
> On Mon, Aug 26, 2019 at 08:08:36AM -0700, Joe Julian wrote:
> > You can also see diffs between force pushes now.
>
> That is great! It is the feature that I was looking for. I have not
> noticed it yet, will pay attention to it.
I have sent an RFC patch [1] for review.
https://review.gluster.org/#/c/glusterfs/+/23011/
On Thu, Jul 4, 2019 at 1:13 AM Pranith Kumar Karampuri
wrote:
>
>
> On Wed, Jul 3, 2019 at 10:59 PM FNU Raghavendra Manjunath <
> rab...@redhat.com> wrote:
>
>>
>>
On Wed, Jul 3, 2019 at 3:28 AM Pranith Kumar Karampuri
wrote:
>
>
> On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N
> wrote:
>
>>
>> On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote:
>>
>>
>> Hi All,
>>
>> In glusterfs, there is a
Hi All,
In glusterfs, there is an issue with the fallocate behavior. In short,
if someone issues fallocate from the mount point with a size greater than the
space available in the backend filesystem where the file resides, then
fallocate can fail with only a subset of the required number of blocks allocated.
Yes. I have sent this patch [1] for review. With it, the regression tests are
no longer failing (i.e. uss.t is not failing).
[1] https://review.gluster.org/#/c/glusterfs/+/22728/
Regards,
Raghavendra
On Sat, Jun 1, 2019 at 7:25 AM Atin Mukherjee wrote:
> subdir-mount.t has started failing in brick mux r
The idea looks OK. One of the things that probably needs to be considered
(more of an implementation detail, though) is how to generate
frame->root->unique.
Because, for fuse, frame->root->unique is obtained from finh->unique, which
IIUC comes from the incoming fop sent by the kernel itself.
For protocol/ser
apview-server and also log level being changed
to TRACE in the .t file)
[1] https://review.gluster.org/#/c/glusterfs/+/22649/
[2] https://review.gluster.org/#/c/glusterfs/+/22728/
Regards,
Raghavendra
On Wed, May 1, 2019 at 11:11 AM Sanju Rakonde wrote:
> Thank you Raghavendra.
>
>
I am working on the other uss issue, i.e. the occasional failure of uss.t due
to delays in the brick-mux regression. Rafi, can you please look into this?
Regards,
Raghavendra
On Thu, May 16, 2019 at 9:48 AM Sanju Rakonde wrote:
> In most of the regression jobs
> ./tests/bugs/snapshot/bug-1399598-u
+1 to this.
There is also one more thing: for some reason, the community meeting is not
visible in my calendar (especially for the NA region). I am not sure if anyone
else is also facing this issue.
Regards,
Raghavendra
On Tue, May 7, 2019 at 5:19 AM Ashish Pandey wrote:
> Hi,
>
> While we send a mail o
Hi All,
There is a good chance that the inode on which the unref came has already been
zero-refed and added to the purge list. This can happen when the inode table is
being destroyed (glfs_fini is one of the things that destroys the inode table).
Consider a directory 'a' which has a file 'b'. Now, as part of
/
Regards,
Raghavendra
On Tue, Apr 30, 2019 at 10:42 AM FNU Raghavendra Manjunath <
rab...@redhat.com> wrote:
>
> The failure looks similar to the issue I had mentioned in [1]
>
> In short for some reason the cleanup (the cleanup function that we call in
> our .t files) seems to be
The failure looks similar to the issue I had mentioned in [1]
In short, for some reason the cleanup (the cleanup function that we call in
our .t files) seems to be taking more time and also not cleaning up
properly. This leads to problems in the 2nd iteration (where basic things
such as volume cre
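For context, a typical .t regression test brackets itself with the cleanup helper, which is why a slow or incomplete cleanup spills over into the next iteration. A minimal sketch of that skeleton (paths and variables follow the usual regression-test conventions; the test body is illustrative only):

  #!/bin/bash
  . $(dirname $0)/../../include.rc    # provides TEST, EXPECT, cleanup, $CLI, $V0, ...
  . $(dirname $0)/../../volume.rc

  cleanup;                            # wipe leftovers from any previous run

  TEST glusterd
  TEST $CLI volume create $V0 $H0:$B0/${V0}{1..2}
  TEST $CLI volume start $V0

  cleanup;                            # tear everything down for the next test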
Hi,
This is the agenda for tomorrow's community meeting for the NA/EMEA timezones.
https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both
On Thu, Apr 11, 2019 at 4:56 AM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:
> Hi All,
>
> Below is the final details of our community meeting, and I wi
ectory issues in
the next iteration cause the failure of uss.t in the 2nd iteration.
Regards,
Raghavendra
On Wed, Apr 10, 2019 at 4:07 PM FNU Raghavendra Manjunath
wrote:
>
>
> On Wed, Apr 10, 2019 at 9:59 AM Atin Mukherjee
> wrote:
>
>> And now for last 15 da
On Wed, Apr 10, 2019 at 9:59 AM Atin Mukherjee wrote:
> And now for last 15 days:
>
> https://fstat.gluster.org/summary?start_date=2019-03-25&end_date=2019-04-10
>
> ./tests/bitrot/bug-1373520.t 18 ==> Fixed through
> https://review.gluster.org/#/c/glusterfs/+/22481/, I don't see this
> fail
On Thu, Oct 4, 2018 at 2:47 PM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
>
>
> On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan
> wrote:
>
>> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
>> > RC1 would be around 24th of Sep. with final release tagging around 1st
>> > of Oct.
On Fri, Sep 28, 2018 at 4:01 PM Shyam Ranganathan
wrote:
> We tested with ASAN and without the fix at [1], and it consistently
> crashes at the mdcache xlator when brick mux is enabled.
> On 09/28/2018 03:50 PM, FNU Raghavendra Manjunath wrote:
> >
> > I was looking into t
I was looking into the issue and this is what I could find while working
with Shyam.
There are 2 things here.
1) The multiplexed brick process for the snapshot(s) getting the client
volfile (I suspect it happened when the restore operation was performed).
2) Memory corruption happening while t
From the snapview-client perspective, one important thing to note: for building
the context for the entry point (by default ".snaps"), an explicit lookup has
to be done on it. The dentry for ".snaps" is not returned when readdir is
done on its parent directory (not even when ls -a is done). So for buildi
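A quick way to see this from a mount point, assuming USS (features.uss) is enabled on the volume (the mount path is an assumption):

  ls -a /mnt/glusterfs/dir          # ".snaps" does not appear in the readdir output
  stat /mnt/glusterfs/dir/.snaps    # an explicit lookup on the entry point still succeeds
  ls /mnt/glusterfs/dir/.snaps      # and the snapshots of the directory become accessible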
Tested bitrot-related aspects. Created data, enabled bitrot, and created
more data. The files were signed by the bitrot daemon. Simulated
corruption by editing a file directly in the backend.
Triggered scrubbing (on demand). Found that the corrupted files were marked
bad by the scrubber.
Also r
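A rough sketch of that test flow with the bitrot CLI (the volume name and backend brick path are assumptions):

  gluster volume bitrot testvol enable          # start the signer and scrubber
  # ... create data and wait for the signer to sign the files ...
  echo garbage >> /bricks/brick1/testvol/file1  # simulate corruption directly on the backend
  gluster volume bitrot testvol scrub ondemand  # trigger scrubbing on demand
  gluster volume bitrot testvol scrub status    # the corrupted file should be reported as bad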
I would recommend the following tests.
1) The tests in our regression test suite
2) Creating a data set of many files (of different sizes) and then enabling
bit-rot on the volume.
3) Scrubber throttling (see the CLI sketch after this list)
4) Tests such as open + write + close and again open + write + close (i.e.
before the bit-rot daemon
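For the throttling test (item 3), the scrubber's pace and schedule can be tuned through the bitrot CLI, roughly as follows (the volume name is an assumption):

  gluster volume bitrot testvol scrub-throttle lazy     # lazy | normal | aggressive
  gluster volume bitrot testvol scrub-frequency daily   # hourly | daily | weekly | biweekly | monthly
  gluster volume bitrot testvol scrub status            # check scrub settings and progress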
+1
On Fri, Sep 2, 2016 at 11:30 PM, Raghavendra Gowdappa
wrote:
> Checking for inode/fd leaks should be top priority. We have seen a bunch
> of high memory consumption issues recently. [1] is first step towards that.
>
> [1] http://review.gluster.org/15318
>
> - Original Message -
> > Fr
Hi,
Any idea how big the files being read were?
Can you please attach the logs from all the gluster server and client
nodes? (the logs can be found in /var/log/glusterfs)
Also please provide the /var/log/messages from all the server and client
nodes.
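In case it helps, something along these lines on each node should gather what is being asked for (the archive name is arbitrary):

  tar czf gluster-logs-$(hostname).tar.gz /var/log/glusterfs /var/log/messages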
Regards,
Raghavendra
On Fri, Jun
+1.
On Tue, May 10, 2016 at 2:58 AM, Kaushal M wrote:
> On Tue, May 10, 2016 at 12:01 AM, Vijay Bellur wrote:
> > Hi All,
> >
> > We are blocked on 3.7.12 owing to this proposal. Appreciate any
> > feedback on this!
> >
> > Thanks,
> > Vijay
> >
> > On Thu, Apr 28, 2016 at 11:58 PM, Vijay Bel
okup on root inode, then we need to create inode-ctx in
> a posix_acl_ctx_get() function.
>
> Thanks,
> Vijay
> On 28-Mar-2016 7:37 PM, "FNU Raghavendra Manjunath"
> wrote:
>
>> CCing Vijay Kumar who made the acl related changes in that patch.
>>
>>
CCing Vijay Kumar, who made the acl-related changes in that patch.
Vijay, can you please look into it?
Regards,
Raghavendra
On Mon, Mar 28, 2016 at 9:57 AM, Avra Sengupta wrote:
> Hi Raghavendra,
>
> As part of the patch (http://review.gluster.org/#/c/13730/16), the
> inode_ctx is not created i
.openstack.org/show/335506/
>
> Regards,
> -Prashanth Pai
>
> - Original Message -
> > From: "FNU Raghavendra Manjunath"
> > To: "Soumya Koduri"
> > Cc: "Ira Cooper" , "Gluster Devel" <
> gluster-devel@gluster.org>
.
Just my 2 cents.
Regards,
Raghavendra
On Thu, Mar 24, 2016 at 10:31 AM, Soumya Koduri wrote:
> Thanks for the information.
>
> On 03/24/2016 07:34 PM, FNU Raghavendra Manjunath wrote:
>
>>
>> Yes. I think the caching example mentioned by Shyam is a good example of
&g
Yes. I think the caching example mentioned by Shyam is a good example of
an ESTALE error. User Serviceable Snapshots (USS) also relies heavily on
ESTALE errors, because the files/directories from the snapshots are
assigned a virtual gfid on the fly when they are looked up. If those inodes
are purged out
Hi,
I have raised a bug for it (
https://bugzilla.redhat.com/show_bug.cgi?id=1315465).
A patch has been sent for review (http://review.gluster.org/#/c/13628/).
Regards,
Raghavendra
On Mon, Mar 7, 2016 at 11:04 AM, Poornima Gurusiddaiah
wrote:
> Hi,
>
> I see a bitrot crash caused by a dht te
Hi,
glusterfs-3.6.9 has been released and the packages for RHEL/Fedora/CentOS
can be found here: http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/
Requesting people running 3.6.x to please try it out and let us know if
there are any issues.
This release supposedly fixes the bugs listed
+1
On Wed, Mar 2, 2016 at 12:48 AM, Venky Shankar wrote:
> On Wed, Mar 02, 2016 at 10:47:03AM +0530, Kaushal M wrote:
> > Couldn't reply earlier as I was asleep at the time.
> >
> > The time change should have announced during last weeks meeting, but
> > no one around remembered this (I'd forgot
the translators, the way
> they are stacked on client & server side, how control flows between them.
> Can somebody please help?
>
> - Ajil
>
>
> On Thu, Feb 25, 2016 at 7:27 AM, FNU Raghavendra Manjunath <
> rab...@redhat.com> wrote:
>
>> Hi Ajil,
>>
Hi Ajil,
The expiry policy tells the signer (the bit-rot daemon) to wait for a specific
period of time before signing an object.
Whenever an object is modified, a notification is sent to the signer by
the brick process (the bit-rot-stub xlator sitting in the I/O path) upon getting a
release (i.e. when all the fds
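For tuning that delay while testing, the signing wait is exposed as a volume option; a sketch (the option name features.expiry-time and the value are assumptions based on the bit-rot-stub option, and the default may differ):

  gluster volume set testvol features.expiry-time 20   # sign objects 20 seconds after release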
Wed, Feb 17, 2016 at 7:18 PM, Kaushal M wrote:
> > I'm online now. We can figure out what the problem is.
> >
> > On Feb 17, 2016 7:17 PM, "FNU Raghavendra Manjunath"
> > wrote:
> >>
> >> Hi, Kaushal,
> >>
> >> I have b
Hi Kaushal,
I have been trying to merge a few patches. But every time I try (i.e. do a
cherry-pick in Gerrit), a new patch set gets submitted. I need some help in
resolving this.
Regards,
Raghavendra
On Wed, Feb 17, 2016 at 8:31 AM, Kaushal M wrote:
> Hey Johnny,
>
> Could you please provide an
Venky,
Yes. You are right. We should not remove the quarantine entry in forget.
We have to remove it upon getting negative lookups in bit-rot-stub and upon
getting an unlink.
I have attached a patch for it.
Unfortunately rfc.sh is failing for me with the below error.
ssh: connect to host git.glust