Hi Marcus!

Thanks for your answer and help! Well, I'm still working on the 
optimisation. I used json for the readable info (influences, etc.) and 
cPickle for the weights array. But I think most of the optimisation should 
now come from how the values get exported from and imported back onto the 
vertices.
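In case it's useful, here's a minimal sketch of that split (file names and 
the data layout are placeholders, not my actual exporter):

import json
import cPickle

def export_weights(info, weights, base_path):
    # human-readable part (influences, etc.) -> json
    with open(base_path + '.json', 'w') as f:
        json.dump(info, f, indent=2)
    # the big flat list of float weights -> binary pickle
    with open(base_path + '.weights', 'wb') as f:
        cPickle.dump(weights, f, cPickle.HIGHEST_PROTOCOL)

def import_weights(base_path):
    with open(base_path + '.json', 'r') as f:
        info = json.load(f)
    with open(base_path + '.weights', 'rb') as f:
        weights = cPickle.load(f)
    return info, weights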
Exporting is not that expensive (~0.78s for 39k vertices and 2 
influences), but importing is still ~4.76s. It seems there are different 
ways of reading/writing weights, and it takes some time to try them all! 
For now, skinPercent is definitely the worst option (about 29s for 
importing ^^), and I read that MFnSkinCluster is not necessarily the best 
one either, at least via getWeights() and setWeights() 
(http://www.macaronikazoo.com/?p=417). The fastest may be setting the 
values directly through API plugs.
Long story short, there are a lot of ways of doing it, so I need to try 
all of them, but I think the part I need to work on is more the Maya side 
than the 'data' side.
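For reference, the plug-based version I want to time next looks roughly 
like this (just a sketch from reading around, not measured yet; it writes 
straight into the skinCluster's weightList[vtx].weights[inf] plugs):

import maya.OpenMaya as om

def set_weights_via_plugs(skincluster_name, weights):
    # weights: {vertex_id: {influence_index: value}}
    sel = om.MSelectionList()
    sel.add(skincluster_name)
    node = om.MObject()
    sel.getDependNode(0, node)

    fn = om.MFnDependencyNode(node)
    weight_list = fn.findPlug('weightList', False)

    for vtx_id, influences in weights.items():
        # weightList[vtx] is a compound; child(0) is its 'weights' multi
        weights_plug = weight_list.elementByLogicalIndex(vtx_id).child(0)
        for inf_id, value in influences.items():
            weights_plug.elementByLogicalIndex(inf_id).setDouble(value)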
HDF5 looks great (I wish I could have had a look at the book you mentioned 
on StackOverflow, too late now... ;-), but unfortunately it's not native 
to Maya (because of the numpy dependency?). I'm not really informed about 
Alembic's possibilities and what you can or can't do with it, but it's 
definitely something I want to investigate, it looks super powerful!
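If I do end up pulling in h5py, the random access you describe would look 
something like this, if I've understood the docs correctly (again just a 
sketch, not benchmarked):

import h5py
import numpy

def write_weights(path, weights):
    # weights: 2D numpy array, shape (num_vertices, num_influences)
    with h5py.File(path, 'w') as f:
        f.create_dataset('weights', data=weights)

def read_some_weights(path, vertex_ids):
    with h5py.File(path, 'r') as f:
        # only the requested rows are read from disk;
        # h5py wants fancy indices in increasing order
        return f['weights'][sorted(vertex_ids)]

# e.g. 39k vertices, 2 influences:
write_weights('skin_weights.h5', numpy.random.rand(39000, 2))
print read_some_weights('skin_weights.h5', [10, 20000, 38999])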
Thanks again for the help!

On Thursday, 13 October 2016 at 14:12:20 UTC+2, Marcus Ottosson wrote:
>
> Hey @fruity, how did it go with this? Did you make any progress? :)
>
> I came to think of another method with which to do what you’re after, 
> in regards to random access. That is, being able to query weights for 
> any given vertex without (1) reading it all into memory and (2) 
> physically searching for it.
>
> There’s a file format called HDF5 <https://support.hdfgroup.org/HDF5/> 
> which was designed for this purpose (and which has Python bindings as 
> well). It’s written by the scientific community, but applies well to VFX 
> in that they also deal with large datasets of high precision (in this 
> case, millions of vertices and floating point weights). To give you some 
> intuition for how it works, I formulated a StackOverflow question 
> <https://stackoverflow.com/questions/22125778/how-is-hdf5-different-from-a-folder-with-files> 
> about it a while back, comparing it to a “filesystem in a file”, which 
> has some good discussion around it.
>
> In more technical terms, you can think of it as Alembic. In fact, 
> Alembic is a “fork” of HDF5, which was later rewritten (i.e. “Ogawa 
> <https://github.com/alembic/alembic/tree/master/lib/Alembic/Ogawa>”) but 
> maintains (to my knowledge) the gist of how things are organised and 
> accessed internally.
>
> At the end of the day, it means you can store your weights in one of 
> these HDF5 files and read them back either as you would any normal file 
> (i.e. entirely into memory) or via random access - for example, if 
> you’re only interested in applying weights to a selected area of a 
> highly dense polygonal mesh. Or, if you have multiple “channels” or 
> “versions” of weights within the same file (e.g. 50 GB of weights), you 
> could pick one without requiring all that memory to be readily 
> available.
>
