>
> Generally speaking, binary compressed data would help with faster I/O.


Just to expand a bit on this: I have successfully used modules like gzip
<https://docs.python.org/2/library/gzip.html> (for compression /
decompression) and struct
<https://docs.python.org/2/library/struct.html> (for reading / writing
binary data) to write and read huge chunks of data. JSON, YAML etc. do
provide excellent functionality for structuring / layering data, but when
it comes to things like baked-out simulations, skin weights etc. (as in
your case), which require fast and efficient I/O, I would not care for
sophistication but would design my own custom binary file format that is
closer to the metal (with some kind of header for schema version info and
misc metadata) and then lay out the bytes in binary. Seek operations are
much faster this way. Also, I presume that since you have weight info that
needs to be applied once, the seeks would be mostly linear, making it even
faster (or is it per-frame data samples?).
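To make that concrete, here is a minimal sketch of the idea using only gzip and struct; the file layout, magic string and field names are my own invention for illustration, not any standard format:

```python
import gzip
import struct

# Illustrative layout: 4-byte magic, schema version, vertex count,
# then one little-endian 32-bit float per vertex.
MAGIC = b"WGT1"
VERSION = 1


def write_weights(path, weights):
    """Write a flat list of per-vertex weights as gzip-compressed binary."""
    header = struct.pack("<4sII", MAGIC, VERSION, len(weights))
    payload = struct.pack("<%df" % len(weights), *weights)
    with gzip.open(path, "wb") as f:
        f.write(header + payload)


def read_weights(path):
    """Read the cache back; returns (version, list_of_weights)."""
    with gzip.open(path, "rb") as f:
        data = f.read()
    magic, version, count = struct.unpack_from("<4sII", data, 0)
    if magic != MAGIC:
        raise ValueError("not a weights cache: %r" % magic)
    offset = struct.calcsize("<4sII")
    weights = struct.unpack_from("<%df" % count, data, offset)
    return version, list(weights)
```

Note the weights are stored as 32-bit floats, so values round-trip only to float32 precision; for a real per-frame cache you would extend the header with a frame count and seek to fixed-size frame blocks.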

Another advantage of this kind of custom binary (library-agnostic) format
is that, later on, a suite of faster compiled tools in C++, C#, C etc. can
more easily be written to read/write/apply the data in DCCs.

You could perhaps also use Alembic as a data container, as it supports
arbitrary data and the whole tool ecosystem (with Python bindings) is
available to you.

I wrote a custom binary transform cache format a few years back (before we
had Alembic), with C++ tools/plugins for Maya, Houdini and Softimage doing
the I/O, plus some Python code to do the same when reading / writing was
not time-critical.


On Thu, Oct 6, 2016 at 2:34 AM, Marcus Ottosson <konstrukt...@gmail.com>
wrote:

> @fruity, I think you'll need to clarify whether you are looking for advice
> on serialisation/deserialisation or on reading/writing from disk. You can
> have something that serialises quickly, but writes slowly. You can also
> have something which serialises slowly, but writes quickly.
>
> For your tool, you'll need to determine where the bottleneck is, and focus
> on that. Perhaps it will be in serialising the data to json. Perhaps it
> will be writing because you are writing to the cloud. Best advice will
> differ based on which you are having problems with.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Python Programming for Autodesk Maya" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to python_inside_maya+unsubscr...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/python_inside_maya/CAFRtmOBpaakkOftf%2BjoypGsYiYRxHA3yVYgzke626OdK3z37gQ%40mail.gmail.com.
>
> For more options, visit https://groups.google.com/d/optout.
>


