To: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>, "Darrell Budic" <bu...@onholyground.com>,
"Gluster Users" <gluster-us...@gluster.org>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Friday, January 12, 2018 6:00:25 PM
Subject: Re: [Gluster-devel] [Gluster-users] Integration of GPU with glusterfs

On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson wrote:
>On 12/01/2018 3:14 AM, Darrell Budic wrote:
>> It would also add physical resource requirements [...]
There are 3 issues with adding GPU support code to GlusterFS:
1. Cost: since you cannot stick just any GTX card inside your server, you'll be
forced to buy a $2K+ card for that. That could be an issue if the GlusterFS
cluster is made out of old/cheap servers and the company/institution doesn't
have money for [...]
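For a sense of the computation being discussed for offload: the candidate work is bulk per-block arithmetic such as the parity math that disperse (erasure-coded) volumes compute. Below is a hedged Python sketch using plain XOR parity as a deliberately simplified stand-in; GlusterFS's actual disperse code uses Reed-Solomon over GF(2^8), which is far heavier per byte, and numpy here merely stands in for a GPU kernel. The function name `xor_parity` is illustrative, not a real GlusterFS symbol.

```python
import numpy as np

def xor_parity(fragments):
    """XOR equal-sized data fragments into a single parity fragment.

    Simplified stand-in for erasure-coding math: real disperse volumes
    use Reed-Solomon over GF(2^8), which is exactly the kind of bulk,
    branch-free arithmetic a GPU kernel could batch.
    """
    parity = np.zeros_like(fragments[0])
    for frag in fragments:
        parity ^= frag  # elementwise XOR over the whole block
    return parity

rng = np.random.default_rng(0)
# three 1 MiB data fragments
frags = [rng.integers(0, 256, size=1 << 20, dtype=np.uint8) for _ in range(3)]
parity = xor_parity(frags)

# With XOR parity, any single lost fragment is rebuilt from the rest
recovered = xor_parity([parity, frags[1], frags[2]])
assert np.array_equal(recovered, frags[0])
```

The point of the sketch is only that the per-byte work scales with fragment count and block size, which is why the thread weighs GPU cost against CPU-side improvements.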
On 12/01/2018 3:14 AM, Darrell Budic wrote:
It would also add physical resource requirements to future client
deploys, requiring more than 1U for the server (most likely), and I’m
not likely to want to do this if I’m trying to optimize for client
density, especially with the cost of GPUs today.
I like the idea immensely, as long as the GPU usage can be specified as
server-only, client and server, or client and server with a client limit of X.
Don't want to take GPU cycles away from machine learning for file IO.
Also must support multiple GPUs and GPU pinning. Really useful for
encryption [...]
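The knobs requested above (a server-only vs. client-and-server mode, a client-side job limit, and device pinning) could be modeled roughly as in the sketch below. Every name here is hypothetical and for illustration only; GlusterFS has no such options or types today.

```python
from dataclasses import dataclass

@dataclass
class GpuOffloadPolicy:
    """Hypothetical policy object; not a real GlusterFS structure."""
    mode: str          # "server-only" or "client-and-server"
    client_limit: int  # max concurrent client-side GPU jobs (0 = unlimited)
    devices: list      # pinned GPU device ids, e.g. [0, 2]

    def client_may_use_gpu(self, active_client_jobs: int) -> bool:
        """Whether a client-side job may take a GPU under this policy."""
        if self.mode == "server-only":
            return False
        if self.client_limit and active_client_jobs >= self.client_limit:
            return False  # client limit X already reached
        return True

policy = GpuOffloadPolicy(mode="client-and-server", client_limit=2, devices=[0, 1])
assert policy.client_may_use_gpu(active_client_jobs=1)      # under the limit
assert not policy.client_may_use_gpu(active_client_jobs=2)  # limit reached
assert not GpuOffloadPolicy("server-only", 0, [0]).client_may_use_gpu(0)
```

The client limit is what keeps file-IO offload from starving other GPU consumers (e.g. machine-learning jobs) on the same host, which is the concern raised above.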
Sounds like a good option to look into, but I wouldn’t want it to take time &
resources away from other, non-GPU based, methods of improving this. Mainly
because I don’t have discrete GPUs in most of my systems. While I could add
them to my main server cluster pretty easily, many of my clients [...]
I have updated the comment.
Thanks!!!
---
Ashish
- Original Message -
From: "Shyam Ranganathan"
To: "Ashish Pandey"
Cc: "Gluster Devel"
Sent: Thursday, January 11, 2018 10:12:54 PM
Subject: Re: [Gluster-users] Integration of GPU with glusterfs
On 01/11/2018 01:12 AM, Ashish Pandey wrote:
> There is a github issue opened for this. Please provide your comment or
> reply to this mail.
>
> A - https://github.com/gluster/glusterfs/issues/388

Ashish, the github issue's first comment is carrying the default message
that we populate.
It would ma[...]