Dear Chuck,

Thanks a lot for this suggestion. I never thought about this approach for this 
setup!
That would probably be a great solution. I'll give it a try tomorrow.

Thank you!
Harald

> On 11.04.2016 at 20:54, Chuck Atkins <chuck.atk...@kitware.com> wrote:
> 
> Hi Harald,
> 
> I don't have an answer to your VGL problem yet, but in the meantime, are you 
> tied to using it that way?  If not, you also have the option of running 
> ParaView in client / server mode, which may actually be a better fit for 
> your situation: pvserver runs on the headless server, the ParaView GUI 
> client runs on your local machine (workstation / desktop / laptop, etc.), 
> and the client connects to the remote server.  Doing so, you can bypass 
> VirtualGL entirely and will likely get much better performance.  If an X 
> server is running on the remote machine, you can do this by pointing your 
> DISPLAY variable at it:
> 
> ssh foo@bigserver
> [foo@bigserver ~]$ DISPLAY=:0.0 pvserver
> Waiting for client...
> Connection URL: cs://bigserver.supercooldomain.com:11111
> Accepting connection(s): bigserver.supercooldomain.com:11111
> 
> Then, in the client, select [Connect] -> [Add Server] to create the 
> connection to the remote machine.
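> 
> If port 11111 on the remote machine is not reachable directly from your 
> workstation (because of a firewall, for example), one possible variation is 
> to tunnel the connection over SSH and point the client at localhost instead. 
> A rough sketch, reusing the host and account from the example above:
> 
> ssh -L 11111:localhost:11111 foo@bigserver
> [foo@bigserver ~]$ DISPLAY=:0.0 pvserver
> 
> Then add a server in the GUI with host "localhost" and port 11111; the SSH 
> tunnel carries the traffic to the pvserver listening on the remote machine.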
> 
> Alternatively, I also noticed that you're running a very recent version of 
> the NVIDIA driver.  I believe it actually has EGL support, which would allow 
> you to build ParaView configured to use the GPU for off-screen rendering 
> with no X server necessary at all.  To do this, configure ParaView with the 
> following CMake options:
>       • -DVTK_RENDERING_BACKEND=OpenGL2
>       • -DVTK_USE_OFFSCREEN_EGL=ON
> You would then run pvserver on the remote machine with no need for an X 
> server and connect to it with the GUI client in the same way.
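> 
> As a rough sketch (the source and build directory names below are just 
> placeholders, and any other options you normally use still apply), the 
> configure and run steps could look like:
> 
> mkdir paraview-egl-build && cd paraview-egl-build
> cmake ../ParaView-v5.0.1-source \
>       -DVTK_RENDERING_BACKEND=OpenGL2 \
>       -DVTK_USE_OFFSCREEN_EGL=ON
> make -j8
> ./bin/pvserver    # no X server or DISPLAY needed; rendering goes through EGL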
> 
> 
> - Chuck
> 
> On Mon, Apr 11, 2016 at 1:07 PM, Harald Klimach <har...@klimachs.de> wrote:
> Dear Chuck, dear Ken,
> 
> Thanks for the replies.
> 
> > I've had similar issues with VirtualGL and VTK in the past.  The "Shader 
> > object was not initialized, cannot attach it." error is often the result 
> > of a specific call lacking the OpenGL version support it requires.  A few 
> > questions:
> >       • Are you able to run on the actual machine without Virtual GL?
> The machine is a headless server down in our machine room, and I cannot 
> easily access it.
> 
> >       • What does the output of glxinfo running in the x2go+VirtualGL 
> > environment look like?
> Here is what it looks like:
> $ vglrun glxinfo
> name of display: :55
> display: :55  screen: 0
> direct rendering: Yes
> server glx vendor string: VirtualGL
> server glx version string: 1.4
> server glx extensions:
>     GLX_ARB_create_context, GLX_ARB_create_context_profile,
>     GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_swap_control,
>     GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating,
>     GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_make_current_read,
>     GLX_SGI_swap_control, GLX_SUN_get_transparent_index
> client glx vendor string: VirtualGL
> client glx version string: 1.4
> client glx extensions:
>     GLX_ARB_create_context, GLX_ARB_create_context_profile,
>     GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_swap_control,
>     GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating,
>     GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_make_current_read,
>     GLX_SGI_swap_control, GLX_SUN_get_transparent_index
> GLX version: 1.4
> GLX extensions:
>     GLX_ARB_create_context, GLX_ARB_create_context_profile,
>     GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_swap_control,
>     GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating,
>     GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_make_current_read,
>     GLX_SGI_swap_control, GLX_SUN_get_transparent_index
> OpenGL vendor string: NVIDIA Corporation
> OpenGL renderer string: Tesla M2075/PCIe/SSE2
> OpenGL core profile version string: 4.5.0 NVIDIA 361.28
> OpenGL core profile shading language version string: 4.50 NVIDIA
> OpenGL core profile context flags: (none)
> OpenGL core profile profile mask: core profile
> …
> lots of extensions
> …
> OpenGL version string: 4.5.0 NVIDIA 361.28
> OpenGL shading language version string: 4.50 NVIDIA
> OpenGL context flags: (none)
> OpenGL profile mask: (none)
> 
> 
> ---
> Without vglrun I get:
> 
> $ glxinfo
> name of display: :55
> display: :55  screen: 0
> direct rendering: No (If you want to find out why, try setting 
> LIBGL_DEBUG=verbose)
> server glx vendor string: SGI
> server glx version string: 1.2
> server glx extensions:
>     GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info,
>     GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGIS_multisample,
>     GLX_SGIX_fbconfig, GLX_SGIX_hyperpipe, GLX_SGIX_swap_barrier,
>     GLX_SGI_make_current_read
> client glx vendor string: NVIDIA Corporation
> client glx version string: 1.4
> client glx extensions:
>     GLX_ARB_context_flush_control, GLX_ARB_create_context,
>     GLX_ARB_create_context_profile, GLX_ARB_create_context_robustness,
>     GLX_ARB_fbconfig_float, GLX_ARB_get_proc_address, GLX_ARB_multisample,
>     GLX_EXT_buffer_age, GLX_EXT_create_context_es2_profile,
>     GLX_EXT_create_context_es_profile, GLX_EXT_fbconfig_packed_float,
>     GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_stereo_tree,
>     GLX_EXT_swap_control, GLX_EXT_swap_control_tear,
>     GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating,
>     GLX_NV_copy_buffer, GLX_NV_copy_image, GLX_NV_delay_before_swap,
>     GLX_NV_float_buffer, GLX_NV_multisample_coverage, GLX_NV_present_video,
>     GLX_NV_swap_group, GLX_NV_video_capture, GLX_NV_video_out,
>     GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control,
>     GLX_SGI_video_sync
> GLX version: 1.2
> GLX extensions:
>     GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
>     GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig
> OpenGL vendor string: Mesa project: www.mesa3d.org
> OpenGL renderer string: Mesa GLX Indirect
> OpenGL version string: 1.2 (1.5 Mesa 6.4.1)
> 
> 
> ---
> On my local box, I get:
> 
> $ glxinfo
> name of display: 
> /private/tmp/com.apple.launchd.vDdJxiOxqs/org.macosforge.xquartz:0
> display: /private/tmp/com.apple.launchd.vDdJxiOxqs/org.macosforge.xquartz:0  
> screen: 0
> direct rendering: Yes
> server glx vendor string: SGI
> server glx version string: 1.4
> server glx extensions:
>     GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info,
>     GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGIS_multisample,
>     GLX_SGIX_fbconfig
> client glx vendor string: Mesa Project and SGI
> client glx version string: 1.4
> client glx extensions:
>     GLX_ARB_create_context, GLX_ARB_create_context_profile,
>     GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
>     GLX_ARB_framebuffer_sRGB, GLX_ARB_get_proc_address, GLX_ARB_multisample,
>     GLX_EXT_buffer_age, GLX_EXT_create_context_es2_profile,
>     GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
>     GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info,
>     GLX_EXT_visual_rating, GLX_INTEL_swap_event, GLX_MESA_copy_sub_buffer,
>     GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
>     GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control,
>     GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
>     GLX_SGIX_visual_select_group, GLX_SGI_make_current_read,
>     GLX_SGI_swap_control, GLX_SGI_video_sync
> GLX version: 1.4
> GLX extensions:
>     GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
>     GLX_EXT_visual_info, GLX_EXT_visual_rating,
>     GLX_MESA_multithread_makecurrent, GLX_OML_swap_method,
>     GLX_SGIS_multisample, GLX_SGIX_fbconfig
> OpenGL vendor string: Intel
> OpenGL renderer string: Intel(R) Iris(TM) Graphics 6100
> OpenGL version string: 2.1 INTEL-10.6.33
> OpenGL shading language version string: 1.20
> 
> ParaView 5.0.0 works on my local Mac, but I am not 100% sure whether it is 
> using OpenGL2. The paraview.org website is not responding right now, so I 
> can’t download 5.0.1 and give it another try.
> 
> ---
> I also get a segmentation fault, without any further information, if I run 
> with plain X forwarding over a regular SSH connection:
> $ paraview
> Segmentation fault (core dumped)
> 
> ---
> If I run in an x2go session, I get:
> $ paraview
> failed to get the current screen resources
> QXcbConnection: XCB error: 172 (Unknown), sequence: 166, resource id: 163, 
> major code: 149 (Unknown), minor code: 20
> Segmentation fault (core dumped)
> 
> (I believe the XCB error is unrelated.)
> 
> ---
> If I run paraview in a vglconnect session without x2go, I get the same error 
> as within the x2go session:
> 
> $ vglrun paraview
> ERROR: In 
> /home/gk772/abs/paraview/src/ParaView-v5.0.1-source/VTK/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx,
>  line 575
> vtkXOpenGLRenderWindow (0x3673960): GLEW could not be initialized.
> 
> 
> ERROR: In 
> /home/gk772/abs/paraview/src/ParaView-v5.0.1-source/VTK/Rendering/OpenGL2/vtkShaderProgram.cxx,
>  line 399
> vtkShaderProgram (0x36651f0): Shader object was not initialized, cannot 
> attach it.
> 
> 
> Segmentation fault (core dumped)
> 
> 
> > On Mon, Apr 11, 2016 at 9:56 AM, Ken Martin <ken.mar...@kitware.com> wrote:
> > As I understand it, VirtualGL should transparently forward OpenGL/GLX/etc. 
> > calls from the server to the client so that the client GPU is used. So my 
> > guess would be to first try running PV on the client and see if it works. 
> > If that seems to work fine, then you know your client graphics supports 
> > PV's use of OpenGL, and the issue could be in VirtualGL or some 
> > PV/VirtualGL interaction.
> I am always confused by the server and client terminology in the X context.
> The remote machine on which I am trying to run ParaView has a relatively 
> powerful NVIDIA graphics card, which we would like to utilize via the 
> VirtualGL mechanism. With x2go we also try to keep the X server on that box 
> as much as possible, so both the X server and the ParaView application run 
> there. As far as I understand it, the graphics capabilities of my local 
> laptop should not really matter much in this scenario.
> I should probably try to run ParaView on the box itself, but as I said 
> above, it is not that easily accessible.
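> 
> One thing I could probably do without physical access is to check, over a 
> plain SSH login, which driver sits behind the X server running on the box 
> itself (assuming it is on display :0), something like:
> 
> $ DISPLAY=:0.0 glxinfo | grep -i "opengl renderer"
> 
> That should at least tell me whether the NVIDIA driver is the one serving 
> that display.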
> 
> Thanks a lot!
> Harald
> 
