Before getting into how to make something fast, I must point out that your
journey towards performance is meaningless without having something to run.
Your first port of call should be to build your tool in the simplest, most
straightforward and obvious way possible, and only then start to look at
how to make it faster.
------------------------------

Now, when it comes to speed, reading and writing are generally in conflict
with each other. The faster it writes, the slower it reads.

The fastest thing to write is also the smallest - that could mean
compressing your data, for example.

import zlib

data = b"My very long string, "
compressed = zlib.compress(data)

At this point, writing will be at a peak speed, only dependent on the
quality of your chosen compression method and amount of content you
compress.
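To see the write side concretely, here's a rough sketch that times writing the raw bytes versus the compressed bytes to a temporary file (the helper name and the use of a temp file are my own, not anything prescribed; actual numbers will depend heavily on your disk and OS caching):

```python
import os
import tempfile
import time
import zlib

data = b"My very long string, " * 10000
compressed = zlib.compress(data)

def timed_write(payload):
    """Write payload to a temporary file, returning the elapsed seconds."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
        return time.perf_counter() - start
    finally:
        os.remove(path)

raw_time = timed_write(data)
compressed_time = timed_write(compressed)

# Fewer bytes on disk generally means less I/O work per write.
print("raw: %.6fs, compressed: %.6fs" % (raw_time, compressed_time))
```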

But reading suffers.

The fastest thing to read is also the smallest - but to get your data back,
you first have to decompress it.

import zlib

data = b"My very long string, "
compressed = zlib.compress(data)

decompressed = zlib.decompress(compressed)
assert decompressed == data

If you measure the size of compressed against decompressed, you’ll find
that the compressed version is actually larger than the original.

import zlib

data = b"My very long string, "
compressed = zlib.compress(data)
decompressed = zlib.decompress(compressed)
assert len(compressed) > len(decompressed)

What gives? Because the data is so small, the overhead added by the
compression method outweighs the benefits. Compression, with zlib and
likely other algorithms, is most effective on large, repetitive data
structures.

import zlib

data = b"My very long string, " * 10000
compressed = zlib.compress(data)
decompressed = zlib.decompress(compressed)
assert len(compressed) < len(decompressed)  # Notice the flipped comparison sign
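zlib also lets you trade compression time for size with an optional level argument (1 is fastest, 9 is smallest) - a quick sketch using the same repetitive sample data:

```python
import zlib

data = b"My very long string, " * 10000

fast = zlib.compress(data, 1)   # level 1: quickest to compress, larger output
small = zlib.compress(data, 9)  # level 9: slowest to compress, smallest output

# Both levels round-trip back to the original bytes.
assert zlib.decompress(fast) == data
assert zlib.decompress(small) == data

# On repetitive data, the higher level should not produce a larger result.
assert len(small) <= len(fast)
```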

To get a sense of the savings made, you could try something like this.

import sys
import zlib

data = b"My very long string, " * 10000
compressed = zlib.compress(data)
decompressed = zlib.decompress(compressed)
assert decompressed == data

original_size = sys.getsizeof(data)
compressed_size = sys.getsizeof(compressed)

print("original: %.2f kb\ncompressed: %.2f kb\n= %i times smaller" % (
    original_size / 1024.0,
    compressed_size / 1024.0,
    original_size / compressed_size)
)

Which on my system (Windows 10, Python 3.3 x64) prints:

original: 205.11 kb
compressed: 0.58 kb
= 356 times smaller

Now which is it, do you need it to *read* fast, or *write* fast? :)

Best,
Marcus

-- 
You received this message because you are subscribed to the Google Groups 
"Python Programming for Autodesk Maya" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/python_inside_maya/CAFRtmOCAjZUST94WFyWE-JjmzN1z9m9hkHbpXtJyLWWv14X-LA%40mail.gmail.com.