Great. How would this look for the ndb package?
Is this a valid implementation?

import marshal, zlib
from google.appengine.ext import ndb

MARSHAL_VERSION = 2  # marshal format version (2 on Python 2.x)

class JsonMarshalZipProperty(ndb.BlobProperty):
    def _to_base_type(self, value):
        return zlib.compress(marshal.dumps(value, MARSHAL_VERSION))
    def _from_base_type(self, value):
        return marshal.loads(zlib.decompress(value))
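And a quick usage sketch of what I have in mind (the model and field names are just placeholders):

class Report(ndb.Model):
    data = JsonMarshalZipProperty()

key = Report(data={'rows': range(1000)}).put()
# The value comes back already parsed; no json.loads() needed.
rows = key.get().data['rows']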
On Jun 4, 2012, at 3:24 PM, Bryce Cutt wrote:
aschmid: The ndb BlobProperty has optional compression built in (see
ndb.model.BlobProperty). You could implement the MarshalProperty like this:
class MarshalProperty(ndb.BlobProperty):
    def _to_base_type(self, value):
        return marshal.dumps(value, MARSHAL_VERSION)
    def _from_base_type(self, value):
        return marshal.loads(value)
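For reference, the built-in compression Bryce mentions is the compressed flag on BlobProperty, which subclasses inherit; a minimal sketch of wiring it up (the model name is made up):

from google.appengine.ext import ndb

class Snapshot(ndb.Model):
    # compressed=True makes ndb zlib-compress the marshaled bytes
    # transparently, so no explicit zlib calls are needed here.
    payload = MarshalProperty(compressed=True)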
OK, good to know.
But this still does not help with the 1 MB entity size limit... even after
compressing, some of my JSON objects would still be over that size.
I think the only solution here is to use the Blobstore with the Files API.
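Roughly what I expect that to look like (untested sketch; the helper name and chunk size are my own):

import marshal, zlib
from google.appengine.api import files

def write_to_blobstore(obj):
    data = zlib.compress(marshal.dumps(obj, 2))
    file_name = files.blobstore.create(mime_type='application/octet-stream')
    with files.open(file_name, 'a') as f:
        # Individual API calls are size-limited, so append in ~900 KB chunks.
        for i in range(0, len(data), 900000):
            f.write(data[i:i + 900000])
    files.finalize(file_name)
    # Store this key on a small datastore entity to find the blob later.
    return files.blobstore.get_blob_key(file_name)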
Andreas,
Yup. I have had to resort to using the blobstore on many occasions for
exactly this reason.
One gotcha that I have run into when doing this is that there appears
to be no way to write a new blob to the blobstore (using the files
API) inside of a transaction that also modifies a datastore entity.
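One possible workaround is to write and finalize the blob before the transaction starts and then only attach its key transactionally; a sketch (entity and field names are illustrative):

from google.appengine.ext import ndb

def attach_blob(entity_key, blob_key):
    # The blob is already finalized here, so the transaction only
    # touches the datastore entity. If the transaction fails, the
    # blob is orphaned and needs separate cleanup.
    @ndb.transactional
    def txn():
        entity = entity_key.get()
        entity.payload_ref = blob_key  # assumed BlobKeyProperty field
        entity.put()
    txn()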
The docs for marshal seem to indicate there are no guarantees marshaled data is
compatible between Python versions. That worries me. If I decide to eventually
upgrade my Python 2.5 apps to 2.7, am I going to have to convert all my data
between marshal versions? While pickle is not always as fast, its format is at
least guaranteed to stay readable across Python versions.
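For what it's worth, ndb already ships a PickleProperty (see ndb.model.PickleProperty), which avoids the marshal version question and accepts the same compressed flag:

from google.appengine.ext import ndb

class Document(ndb.Model):
    # Pickled data stays loadable across Python 2.x upgrades; marshal
    # makes no such promise. compressed=True adds zlib on top.
    data = ndb.PickleProperty(compressed=True)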
On Jun 1, 2012, at 2:40 PM, Andrin von Rechenberg wrote:
Hey there
If you want to store megabytes of JSON in datastore
and get it back from datastore into python already parsed,
this post is for you.
I ran a couple of performance tests where I wanted to store
a 4 MB JSON object in the datastore and then get it back at
a later point and process it.
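A minimal version of that round trip looks like this (synthetic payload; not the original test script or numbers):

import json, marshal, time, zlib

# Synthetic stand-in for the ~4 MB parsed JSON object.
obj = {'rows': [{'id': i, 'value': 'x' * 40} for i in range(40000)]}

for name, dumps, loads in [
        ('json', json.dumps, json.loads),
        ('marshal', lambda o: marshal.dumps(o, 2), marshal.loads)]:
    start = time.time()
    blob = zlib.compress(dumps(obj))
    loads(zlib.decompress(blob))
    print '%s round trip: %.3fs (%d compressed bytes)' % (
        name, time.time() - start, len(blob))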