The idea is to use OpenGL only for snapping operations that involve edges and 
vertices. Snapping to faces (i.e. raycasting) would therefore not be affected.
A while ago I started a discussion to gather users' opinions on feature 
proposals for precision modeling:
https://devtalk.blender.org/t/discussions-for-better-snapping-and-precision-modeling-to-come/5351
The feedback was very positive, and indeed many other proposals were presented 
(these will be considered in a later stage of development).
As shown in that topic, the first item to be worked on is the addition of two 
new snap options: "Middle" and "Perpendicular".
On the CPU, the solution would be to modify the callback used for the BVH of 
edges.
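For illustration, here is a minimal sketch of the geometry such an edge 
callback would need to compute for the two new options. The names are 
hypothetical, not Blender's actual BVH API; the perpendicular candidate is 
just the clamped projection of a reference point onto the edge segment:

/* Hypothetical sketch, not Blender's real code: per-edge computation of
 * the "Middle" and "Perpendicular" snap candidates. */
#include <math.h>

static float dot_v3(const float a[3], const float b[3])
{
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

static void edge_snap_candidates(const float v0[3], const float v1[3],
                                 const float p[3],
                                 float r_middle[3], float r_perp[3])
{
  float dir[3], rel[3];
  for (int i = 0; i < 3; i++) {
    dir[i] = v1[i] - v0[i];
    rel[i] = p[i] - v0[i];
  }
  const float len_sq = dot_v3(dir, dir);
  float t = (len_sq > 0.0f) ? dot_v3(rel, dir) / len_sq : 0.0f;
  t = fmaxf(0.0f, fminf(1.0f, t)); /* clamp to the segment */

  for (int i = 0; i < 3; i++) {
    r_middle[i] = 0.5f * (v0[i] + v1[i]); /* "Middle": edge midpoint */
    r_perp[i] = v0[i] + t * dir[i];       /* "Perpendicular" */
  }
}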
It would not really be a problem to continue with this CPU solution, but using 
BVHs has some drawbacks:
1. The resulting BVH can consume a large amount of memory:
To give an idea, the BVH of a Suzanne subdivided 4 times consumes 32,428.66 KB.
For comparison, a 4K uint texture (which would be used on the GPU) consumes 
33,177.60 KB (3840 × 2160 pixels × 4 bytes per pixel).
Currently, to avoid duplicating memory, snapping to edges and vertices reuses 
the BVH of triangles.
This is not as efficient as it would be with dedicated BVHs (one for edges and 
another for vertices).
The cost of this workaround would be aggravated by the new snap options 
("Middle" and "Perpendicular").
2. It is necessary to resort to workarounds to simulate occlusion:
Since we can't use OpenGL, we can't use a depth map to know what is in front 
of or behind an object.
The current CPU-based solution in Blender is to first cast a ray to find the 
polygon under the mouse cursor and snap to the vertices and edges of that 
polygon.
That polygon is also used to create a plane that separates the elements in the 
3D view that will be tested for snapping.
These steps would be avoided with a depth map obtained through OpenGL.
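As a rough sketch of the occlusion test a depth map would allow (assuming a 
current GL context and a candidate snap point already projected to window 
coordinates, with win_z in [0, 1]; this is illustrative, not Blender's actual 
code):

/* Sketch: occlusion test against the OpenGL depth buffer. */
#include <GL/gl.h>
#include <stdbool.h>

static bool snap_point_is_occluded(int win_x, int win_y, float win_z)
{
  float depth = 1.0f;
  glReadPixels(win_x, win_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
  /* Small bias so points lying exactly on a surface are not rejected. */
  return win_z > depth + 1e-5f;
}

(In practice one would probably read the depth buffer once per snap update 
rather than one pixel per candidate.)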
3. Big and complicated code:
BVHs work with callbacks; the snapping system uses one callback for raycasting 
and another for mixed snapping.
These callbacks also have to be compatible with different object types (Mesh, 
EditMesh, DispLists).
So within these callbacks there are further callbacks to get the coordinates 
of the vertices depending on the object type.
This complication would be avoided with a single texture mapping all the 
element IDs (sketched below).
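As a rough illustration of that idea, assuming the vertex/edge indices were 
previously drawn into an offscreen GL_R32UI color attachment (much like the 
edit-mesh selection buffer) and that a GL 3+ header/loader is available, the 
lookup reduces to a read-back; the function name is hypothetical:

/* Sketch: read the element ID under the cursor from an ID texture.
 * GL_RED_INTEGER requires an OpenGL 3+ header or extension loader. */
#include <GL/gl.h>

static unsigned int snap_id_under_cursor(int win_x, int win_y)
{
  unsigned int id = 0; /* 0 could be reserved for "no element" */
  glReadPixels(win_x, win_y, 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &id);
  return id;
}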
On the other hand, a GPU-based solution would have these advantages:
1. It takes advantage of an existing Blender solution:
Blender already does something similar in the edit-mesh selection system, so 
the existing code could be reused and improved.
2. The GPU depth test can be used for occlusion.
Using the GPU to snap is not strictly necessary, but I would like to hear your 
opinions on this subject.
Thanks,
Germano