There is a default size limit of 64MB when parsing protocol buffers:
if a message is larger than that, it will fail to parse. This limit can
be raised if you really need to parse larger messages, but doing so is
generally not recommended. Additionally, ByteSize() returns a 32-bit
integer, so there is an implicit 2GB ceiling on the size of any message
that can be serialized.
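
For reference, here is a minimal C++ sketch of raising the limit with
CodedInputStream::SetTotalBytesLimit, which takes the new byte limit and
a threshold at which to start logging warnings. The LargeMessage type
and large_message.pb.h header are hypothetical stand-ins for your own
generated message:

    #include <fstream>

    #include <google/protobuf/io/coded_stream.h>
    #include <google/protobuf/io/zero_copy_stream_impl.h>

    #include "large_message.pb.h"  // hypothetical generated header

    // Parse a message that may exceed the default 64MB limit by
    // attaching a CodedInputStream with a raised total-bytes limit.
    bool ParseLargeMessage(const char* path, LargeMessage* message) {
      std::ifstream file(path, std::ios::in | std::ios::binary);
      google::protobuf::io::IstreamInputStream raw_input(&file);
      google::protobuf::io::CodedInputStream coded_input(&raw_input);
      // Raise the limit to 256MB; start warning at 128MB.
      coded_input.SetTotalBytesLimit(256 << 20, 128 << 20);
      return message->ParseFromCodedStream(&coded_input);
    }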

You can certainly use protocol buffers with large data sets, but it's
not recommended to represent your entire data set as a single message.
Instead, see if you can break it up into smaller messages, e.g. by
streaming them one at a time as sketched below.
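
As an illustration, here is a minimal C++ sketch of length-delimited
streaming: each record is written with a varint size prefix so a reader
can parse one small message at a time rather than one huge one. The
Record type and record.pb.h header are hypothetical stand-ins for your
own generated message:

    #include <fstream>
    #include <vector>

    #include <google/protobuf/io/coded_stream.h>
    #include <google/protobuf/io/zero_copy_stream_impl.h>

    #include "record.pb.h"  // hypothetical generated header

    // Write each record as a varint size prefix followed by its bytes,
    // so a reader can parse one small message at a time.
    void WriteRecords(const std::vector<Record>& records,
                      std::ostream* out) {
      google::protobuf::io::OstreamOutputStream raw_output(out);
      google::protobuf::io::CodedOutputStream coded_output(&raw_output);
      for (size_t i = 0; i < records.size(); ++i) {
        coded_output.WriteVarint32(records[i].ByteSize());
        records[i].SerializeToCodedStream(&coded_output);
      }
    }

    // Read the next record; returns false at end of stream.
    bool ReadRecord(google::protobuf::io::CodedInputStream* in,
                    Record* record) {
      google::protobuf::uint32 size;
      if (!in->ReadVarint32(&size)) return false;
      google::protobuf::io::CodedInputStream::Limit limit =
          in->PushLimit(size);
      bool ok = record->ParseFromCodedStream(in) &&
                in->ConsumedEntireMessage();
      in->PopLimit(limit);
      return ok;
    }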

On Mon, May 17, 2010 at 1:05 PM, sanikumbh <saniku...@gmail.com> wrote:

> I wanted to get some opinions on large data sets and protocol buffers.
> The Protocol Buffers project page by Google says that for data > 1
> megabyte one should consider something different, but they don't
> mention what would happen if one crosses this limit. Are there any
> known failure modes when it comes to large data sets?
> What are your observations and recommendations from your experience on
> this front?
