Hello,

 

Ok, I think I know what is going on now. I'll summarize my conclusions for
others (I've seen that the same question was already asked, but without a
sufficient answer).
The problem seems to be the size of the dataset. It doesn't fit into the
card's memory for interactive rendering, so vtkKWEVolumeMapper chooses
not to use GPURayCast at all and switches to the next available mapper
for that card, in my case vtkTextureMapper3D. Interaction is then smooth,
but the final rendering is only average.

So when GPU ray casting is forced via vtkKWEGPUVolumeRayCastMapper, it
produces a nice rendering, but interaction is nearly impossible.

Once I reduce the dataset to a size small enough to fit,
vtkKWEVolumeMapper uses GPURayCast. Interaction is nice and the result is
still quite good, but the final rendering is slightly worse than what is
possible (of course, since the dataset is smaller).
Another drawback is that you would actually need to compute the size of
the volume and compare it with the available VRAM, something that should
not be the task of the programmer using the GPUMapper.

The thing that puzzled me the whole time was: why does it work in VolView
(using the same hardware and dataset), with fast interaction and a very
good final rendering, but not with the example provided with vtkEdge?
I guess the answer is that VolView uses some kind of self-implemented
LODActor for interaction, and not vtkKWEVolumeMapper off the shelf, as
one is led to think when reading the API documentation: "If GPU ray
casting is supported, this mapper will be used for all rendering."
(which in fact is not true). Why self-implemented? Because vtkLODActor
(or vtkLODProp3D) needs at least two different mappers to work and simply
switches between them at runtime. That works, but comes with the penalty
that both mappers need to initialize and load the data once, which
results in long waiting times before you can interact with the volume.



The other bad thing about this approach is that if you force the final
image to be rendered with vtkKWEGPUVolumeRayCastMapper (bypassing
vtkKWEVolumeMapper), you will probably get into trouble if the graphics
card does not support it, so you would need your own checks and a
fallback method. But that is actually the job of vtkKWEVolumeMapper.



So I think vtkKWEVolumeMapper needs better LOD management, to make sure
that the final image always uses GPURayCast instead of just falling back
to another mapper once the volume is too big.

If you want to reproduce this yourself (using an appropriate graphics
card, of course), just download the PHENIX dataset from
www.osirix-viewer.com.
Try loading it with vtkEdgeGPURenderDemo:

vtkEdgeGPURenderDemo -DICOM [path_to_dataset] -CT_Muscle



Now use VolView and load this dataset.
Compare the final rendering of both. vtkEdge example: average/poor
result; VolView: excellent result.



Start the demo again with:

vtkEdgeGPURenderDemo -DICOM [path_to_dataset] -CT_Muscle -ReductionFactor 0.8

Compare the final rendering of both. vtkEdge example: good result;
VolView: excellent result.

 

Another question: is it possible to catch progress events while
vtkKWEVolumeMapper loads the data into VRAM?

 

Regards,

 

Michael

 

_______________________________________________
VtkEdge mailing list
[email protected]
http://public.kitware.com/cgi-bin/mailman/listinfo/vtkedge
