I should mention I thought about the layout of a GPUfs some time ago. I
just lack lots of knowledge about this. The gist was to write shader
code (source or compiled?) into some files, write image data and mesh
data to other files, and read results from other files. But as I said, I
lack lots of knowledge about how GPUs work and never wrote any OpenGL
code myself, only shader code. It always seemed like it takes hundreds
and hundreds of lines of code just to draw a triangle (which is the
basic hello world program).
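
Very roughly, the usage I had in mind would look something like the
sketch below. Every path and the ctl message are made up purely for
illustration (and error checking is omitted); nothing like this exists
anywhere yet:

#include <u.h>
#include <libc.h>

/*
 * hypothetical GPUfs layout; every path and the ctl message below
 * are invented for illustration and exist nowhere today
 */
void
main(void)
{
	int fd;
	char *shader = "/* shader source (or compiled code?) goes here */";
	static uchar frame[64*1024];

	/* write the shader into a shader file */
	fd = open("/dev/gpu/shader", OWRITE);
	write(fd, shader, strlen(shader));
	close(fd);

	/*
	 * image and mesh data would be written the same way,
	 * e.g. to /dev/gpu/tex and /dev/gpu/mesh
	 */

	/* kick off the work through a ctl file... */
	fd = open("/dev/gpu/ctl", OWRITE);
	fprint(fd, "draw\n");
	close(fd);

	/* ...and read the result back from another file */
	fd = open("/dev/gpu/frame", OREAD);
	read(fd, frame, sizeof frame);
	close(fd);

	exits(nil);
}

Part of the appeal of such a file interface is that, at least in
theory, the same files could be imported from a remote machine.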
sirjofri
22.08.2021 12:04:41 Frank D. Engel, Jr. <fde...@fjrhome.net>:
While not necessarily unwelcome as a possibility, I don't think
GPU-based drawing/gaming is as relevant to this discussion (or as
important a goal for Plan 9 / 9front) as GPU compute (GPGPU).
The ability to leverage GPU resources across CPU servers for
computation purposes would be of great benefit to the platform, and
working out a driver interface by starting the process remotely via
drawterm seems like a sensible step in that direction.
On 8/22/21 3:07 AM, sirjofri wrote:
22.08.2021 05:16:42 Eli Cohen <echol...@gmail.com>:
deep learning is another interest of mine too. hardware support is a
big deal for that... some kind of support for GPUs would be nice.
people have discussed that for years... hardware drivers are
difficult
and important to do correctly!
I always really liked the "XCPU" and drawterm type ideas of using
other OSes for their existing strengths along with Plan 9. maybe
drawterm could have a GPU device driver or something... that being
said I have sometimes found it ends up surprisingly easier doing it
all on Plan 9...
That's also something I thought about a few times already: drawterm
with GPU support. The only issue I see is that, for realtime
applications like games, the draw times would be network bound and thus
pretty slow. It would work for GPU-heavy applications with almost no
draw calls (no textures, very low-poly meshes, ...), but for anything
with more draw traffic we'd need to address that.
That's the benefit of a native driver: you could run the server side
(heavy CPU calculations) on a cpu server, the client/frontend side
(including draw calls) on a terminal, and the pure graphics on the GPU.
I'd still give the drawterm GPU a shot. Maybe I can set drawterm up
for compilation on my work PC (two GTX 1080Ti) and try figuring out
how to do all that stuff. However, I've never written graphics
applications on Windows or anywhere else using OpenGL or DirectX (I'd
try OpenGL for portability), only shaders so far. I'll surely need some
time (which is always scarce as a game developer).
Btw I don't know the specifics of how GPUs are used for neural
networks. I assume it's all compute shaders? Maybe it's even kind of a
black box: put stuff in (draw call), read things out. I assume this can
work perfectly fine as far as draw times go, depending on the data.
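
To give an idea of the shape such compute work takes: a compute shader
is basically a small kernel that reads from input buffers and writes to
output buffers, with no geometry or draw calls involved. A hypothetical
SAXPY kernel in GLSL (everything about it is made up for illustration;
shown here as a C string, like something one might write into a GPUfs
shader file) would look roughly like this:

char *saxpysrc =
	"#version 430\n"
	"layout(local_size_x = 64) in;\n"
	"/* input/output buffers: put data in, read results out */\n"
	"layout(std430, binding = 0) buffer X { float x[]; };\n"
	"layout(std430, binding = 1) buffer Y { float y[]; };\n"
	"uniform float a;\n"
	"void main()\n"
	"{\n"
	"	uint i = gl_GlobalInvocationID.x;\n"
	"	y[i] = a*x[i] + y[i];	/* one element per GPU invocation */\n"
	"}\n";

The host side just fills the buffers, dispatches enough invocations to
cover them, and reads the result buffer back, which fits the
put-stuff-in, read-things-out model pretty well.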
sirjofri