Another simple test in code would be to check whether inode->fd_list is
empty, as fd_list represents the list of all fds opened on that inode.
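A minimal sketch of what such a check could look like inside an xlator, assuming
the usual libglusterfs inode_t layout (fd_list guarded by inode->lock) and the
LOCK/UNLOCK and list_empty helpers; the function name is made up here and the
field details should be verified against the glusterfs version in use:

    /* Sketch only: returns true if any fd is currently open on the inode.
     * Assumes libglusterfs' inode.h, locking.h and list.h are in scope. */
    static gf_boolean_t
    xlator_inode_has_open_fds (inode_t *inode)
    {
            gf_boolean_t has_fds = _gf_false;

            if (!inode)
                    return _gf_false;

            LOCK (&inode->lock);
            {
                    /* fd_list holds every fd_t opened on this inode */
                    if (!list_empty (&inode->fd_list))
                            has_fds = _gf_true;
            }
            UNLOCK (&inode->lock);

            return has_fds;
    }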
On Fri, Jan 12, 2018 at 4:38 AM, Vijay Bellur wrote:
> Hi Ram,
>
> Do you want to check this from within a translator? If so, you can look
> for GLUSTERFS_OPEN_FD_COUNT
On 12/01/2018 3:14 AM, Darrell Budic wrote:
It would also add physical resource requirements to future client
deploys, requiring more than 1U for the server (most likely), and I’m
not likely to want to do this if I’m trying to optimize for client
density, especially with the cost of GPUs today.
On Thu, Jan 11, 2018 at 10:44 PM, Darrell Budic
wrote:
> Sounds like a good option to look into, but I wouldn’t want it to take
> time & resources away from other, non-GPU based, methods of improving this.
> Mainly because I don’t have discrete GPUs in most of my systems. While I
> could add them
Hi Ram,
Do you want to check this from within a translator? If so, you can look
for GLUSTERFS_OPEN_FD_COUNT in xlators like dht, afr, ec, etc., where they
check for open file descriptors in various FOPs.
Regards,
Vijay
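As an illustration of the GLUSTERFS_OPEN_FD_COUNT pattern Vijay describes (a
sketch, not code copied from dht/afr/ec): the xlator places the key in the
request xdata so the bricks report the open-fd count, and then reads the value
back from the response xdata in the fop callback. The wrapper names below are
hypothetical; check the dict API usage against your glusterfs version.

    /* Request side: the key's presence in xdata asks the brick to fill in
     * how many fds are open on the file; the value sent here is ignored. */
    static int
    request_open_fd_count (dict_t *xdata_req)
    {
            return dict_set_int32 (xdata_req, GLUSTERFS_OPEN_FD_COUNT, 0);
    }

    /* Callback side: read the count the brick filled into the response
     * xdata; a non-zero value means some client still has the file open. */
    static int
    read_open_fd_count (dict_t *xdata_rsp, int32_t *open_fd_count)
    {
            if (!xdata_rsp)
                    return -1;

            return dict_get_int32 (xdata_rsp, GLUSTERFS_OPEN_FD_COUNT,
                                   open_fd_count);
    }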
On Thu, Jan 11, 2018 at 10:40 AM, Ram Ankireddypalle
wrote:
> Hi,
>
>
I like the idea immensely, as long as the GPU usage can be specified as
server-only, client and server, or client and server with a client limit of X.
I don't want to take GPU cycles away from machine learning for file I/O.
It also must support multiple GPUs and GPU pinning. Really useful for
encryption
Hello Xavi,
From: Xavi Hernandez [mailto:jaher...@redhat.com]
Sent: Thursday, January 11, 2018 10:51
To: David Spisla
Cc: Amar Tumballi ; gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Simulating some kind of "virtual file"
Hi David,
On Wed, Jan 10, 2018 at 3:24 PM, David Spisla
ma
Hi,
>>> In which protocol are you seeing this issue? Fuse/NFS/SMB?
It is FUSE, within a mountpoint created by the “mount -t glusterfs …“ command.
Thanks & Best Regards,
George
From: gluster-devel-boun...@gluster.org
[mailto:gluster-devel-boun...@gluster.org] On Behalf Of Pranith Kumar Karampuri
Sent: Wednesd
Hi,
Please see the detailed test steps at
https://bugzilla.redhat.com/show_bug.cgi?id=1531457
How reproducible:
Steps to Reproduce:
1. Create a replicated volume named "test".
2. Set the volume option cluster.consistent-metadata to on:
   gluster v set test cluster.consistent-metadata on
3. Mount the volume
Hello Amar, Xavi
From: Amar Tumballi [mailto:atumb...@redhat.com]
Sent: Wednesday, January 10, 2018 14:16
To: Xavi Hernandez ; David Spisla
Cc: gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Simulating some kind of "virtual file"
Check the files in $mountpoint/.meta/ directory. These a
Sounds like a good option to look into, but I wouldn’t want it to take time &
resources away from other, non-GPU based, methods of improving this. Mainly
because I don’t have discrete GPUs in most of my systems. While I could add
them to my main server cluster pretty easily, many of my clients a
Lian, George (NSB - CN/Hangzhou) would like to recall the message,
"[Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS
cache stat after writes"".
Hi, Pranith Kumar,
I have created a bug on Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1531457
After my investigation of this link issue, I suppose it is related to your
change in afr-dir-write.c for the issue "Don't let NFS cache stat after
writes"; your fix is like:
---
Hello Xavier,
now adding gluster-devel 😉
From: Xavi Hernandez [mailto:jaher...@redhat.com]
Sent: Tuesday, January 9, 2018 23:02
To: David Spisla
Cc: gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Simulating some kind of "virtual file"
Hi David,
adding again gluster-devel.
On Tue, J
Hi,
Is it possible to find out within a cluster whether a file is currently open
by any of the clients, the self-heal daemon, or any other daemons in the cluster?
Please point to sample code in any of the xlators that does such a check.
Thanks and Regards,
Ram
Gluster Users,
The Gluster community is deprecating running regression tests for every
commit on NetBSD, and will in the future continue with only build sanity
(and handling of any build breakages) on FreeBSD.
We lack contributors who can help us keep the *BSD infrastructure and
functionality up to date and henc
Gluster Users,
This is to inform you that from the 4.0 release onward, packages for
CentOS 6 will not be built by the gluster community. This also means
that the CentOS SIG will not receive updates for 4.0 gluster packages.
Gluster release 3.12 and its predecessors will receive CentOS 6 updates
t
On 01/11/2018 11:34 AM, Shyam Ranganathan wrote:
>>>
>>> One thing not covered above is what happens when GD2 fixes a high priority
>>> bug between releases of glusterfs.
>>>
>>> One option is we wait until the next release of glusterfs to include the
>>> update to GD2.
>>>
>>> Or we can respin (r
I have updated the comment.
Thanks!!!
---
Ashish
- Original Message -
From: "Shyam Ranganathan"
To: "Ashish Pandey"
Cc: "Gluster Devel"
Sent: Thursday, January 11, 2018 10:12:54 PM
Subject: Re: [Gluster-users] Integration of GPU with glusterfs
On 01/11/2018 01:12 AM, Ashish
On 01/11/2018 01:12 AM, Ashish Pandey wrote:
> There is a github issue opened for this. Please provide your comment or
> reply to this mail.
>
> A - https://github.com/gluster/glusterfs/issues/388
Ashish, the first comment on the github issue is carrying the default message
that we populate.
It would ma
On 01/11/2018 02:01 AM, Kaushal M wrote:
>> - (thought/concern) Jenkins smoke job (or other jobs) that builds RPMs
>> will not build GD2 (as the source is not available) and will continue as
>> is (which means there is enough spec file magic here that we can specify
>> during release packaging to
On 01/11/2018 02:04 AM, Kaushal M wrote:
> On Thu, Jan 11, 2018 at 1:56 AM, Kaleb S. KEITHLEY
> wrote:
>> comments inline
>>
>> On 01/10/2018 02:08 PM, Shyam Ranganathan wrote:
>>
>> Hi, (GD2 team, packaging team, please read)
>>
>> Here are some things we need to settle so that we can ship/relea
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-01-11-a601db69
Hi David,
On Wed, Jan 10, 2018 at 3:24 PM, David Spisla
wrote:
> Hello Amar, Xavi
>
>
>
> *From:* Amar Tumballi [mailto:atumb...@redhat.com]
> *Sent:* Wednesday, January 10, 2018 14:16
> *To:* Xavi Hernandez ; David Spisla <
> david.spi...@iternity.com>
> *Cc:* gluster-devel@gluster.org
> *Subject:* Re: [Gluster-devel] Simulating some kind of "virtual file"