Hi guys,

I'm playing around with protocol buffers for a project at work and I've
run into what looks like a performance problem. I have the following
setup in my main():

    std::cerr << "creating file" << std::endl;
    int fd = open("blah.repo", O_WRONLY | O_CREAT, 0644);
    if ( fd == -1 ) {
        std::cerr << "ERROR: " << errno << " " << strerror(errno) << std::endl;
        return 1;
    }

    ZeroCopyOutputStream* raw_output = new FileOutputStream(fd);
    GzipOutputStream* gzip_output =
        new GzipOutputStream(raw_output, GzipOutputStream::ZLIB);
    CodedOutputStream* coded_output = new CodedOutputStream(gzip_output);
    // CodedOutputStream* coded_output = new CodedOutputStream(raw_output);
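
In case it matters, the write loop is essentially the following (Foo and
its id field are just stand-ins for my actual message type, which only
has a couple of scalar fields):

    for (int i = 0; i < 100000; ++i) {
        Foo msg;            // stand-in for my real (simple) message type
        msg.set_id(i);
        // Length-prefix each message so they can be read back one by one.
        coded_output->WriteVarint32(msg.ByteSize());
        msg.SerializeToCodedStream(coded_output);
    }

    // Tear down in reverse order so each layer flushes into the one below.
    delete coded_output;
    delete gzip_output;
    delete raw_output;
    close(fd);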

This version takes about 8 seconds to create and serialize 100k simple
messages. If I switch to the commented-out line and skip the
GzipOutputStream, it takes roughly 1 second. Running gzip(1) on the
resulting file takes less than half a second.

Is there an option I should be setting to bring it up to parity with
the command-line tool, or could there be a bug in GzipOutputStream? For
what it's worth, on the read side GzipInputStream is roughly on par
with a raw CodedInputStream.
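
For reference, the only knob I've been able to find is
GzipOutputStream::Options (assuming my protobuf version has it; the
compression_level and buffer_size fields below may not exist in older
releases). I was planning to try something like:

    GzipOutputStream::Options options;
    options.format = GzipOutputStream::ZLIB;
    options.compression_level = 1;    // trade compression ratio for speed
    options.buffer_size = 256 << 10;  // bigger internal buffer, in case
                                      // many small writes are the cost
    GzipOutputStream* gzip_output = new GzipOutputStream(raw_output, options);

but I don't know whether that actually addresses the gap or just papers
over it.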

Thanks,
Pete
