Great, thanks for sharing this! First, two short remarks: your ...-src.jar
only contains the .class files, and the URL pointing to the Subversion
repository is wrong. No problem, though: I was able to get the sources
with a Subversion client.
You have created a wrapper around the datastore service that hides its
limitations. Interesting idea. I have done something similar but with a
different approach: I implemented Java InputStream and OutputStream
subclasses that read from and write to the datastore without the 1 MB
limit. It basically consists of two classes, GoogleBlobOutputStream and
GoogleBlobInputStream, used like this:

        byte[] buf1 = ...
        byte[] buf2 = ...

        // generate some test data
        RandomInputStream rs = new RandomInputStream();

        try {
          rs.read(buf1);
        } catch (IOException e) {
          e.printStackTrace();
          fail("Failed to read from RandomInputStream");
        }

        // write buffer to output stream
        GoogleBlobOutputStream out = new GoogleBlobOutputStream(kind, name);
        try {
          out.write(buf1);
          out.close();
          _id = out.getId(); // Google Stream specific
        } catch (IOException e) {
          e.printStackTrace();
          fail("Failed to write to OutputStream");
        }

        // read from input stream
        try {
          GoogleBlobInputStream in = new GoogleBlobInputStream(_id); // Google Stream specific ctor with id
          in.read(buf2);
          in.close();
        } catch (IOException e) {
          ...
        } catch (EntityNotFoundException e) {
          ...
        }

Internally, the data are held in memory until a threshold (at most 1 MB)
is exceeded and are then persisted in an entity with a Blob property.
This repeats until the stream is closed, at which point the remaining
data are persisted.
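To make the buffering scheme concrete, here is a minimal, self-contained sketch of that write path. All names in it (ChunkedBlobOutputStream, ChunkSink) are hypothetical stand-ins, not the actual GoogleBlobOutputStream internals; the persist step is abstracted behind an interface where the real class would create a datastore Entity holding a Blob.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the buffering described above: bytes accumulate
// in memory, and whenever the threshold is reached the buffer is
// persisted as one chunk. Closing the stream persists the remainder.
class ChunkedBlobOutputStream extends OutputStream {

    // Stand-in for "persist this buffer as a datastore entity with a Blob".
    interface ChunkSink {
        void persist(byte[] chunk) throws IOException;
    }

    private final int threshold; // e.g. just under 1 MB on App Engine
    private final ChunkSink sink;
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

    ChunkedBlobOutputStream(int threshold, ChunkSink sink) {
        this.threshold = threshold;
        this.sink = sink;
    }

    @Override
    public void write(int b) throws IOException {
        buf.write(b);
        if (buf.size() >= threshold) {
            flushChunk();
        }
    }

    @Override
    public void close() throws IOException {
        // persist whatever is left when the stream is closed
        if (buf.size() > 0) {
            flushChunk();
        }
    }

    private void flushChunk() throws IOException {
        sink.persist(buf.toByteArray());
        buf.reset();
    }

    public static void main(String[] args) throws IOException {
        List<byte[]> chunks = new ArrayList<byte[]>();
        // tiny threshold so the chunking is visible
        ChunkedBlobOutputStream out =
            new ChunkedBlobOutputStream(4, chunks::add);
        out.write(new byte[] {1, 2, 3, 4, 5, 6});
        out.close();
        for (byte[] c : chunks) {
            System.out.println("chunk of " + c.length + " bytes");
        }
    }
}
```

Writing six bytes with a threshold of four yields one full chunk of four bytes plus a final chunk of two on close, which is exactly the "persist on threshold, persist remainder on close" behavior described above.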

The inner workings and calculations of your implementation are quite
complex, and I have not yet understood them in detail. Since your
interface is the same as Google's, it should be easy to set up a test
app and compare the two.

I would also be interested to see whether you run into the same Blob
performance I do. Storing a trivial Blob of 25 KB costs about 500-800 ms
of CPU time. This is terribly slow and quickly eats up your quota. The
number does not increase much with size: with 500 KB it is about
1000 ms. I am not sure how to improve this. You have to measure this on
the production server, since the local dev datastore behaves
differently.

I will try your stuff and see how it compares. I will let you know
when I have news.

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.
