I wanted to get some opinions on large data sets and Protocol Buffers.
The Protocol Buffers project page from Google says that for data larger
than one megabyte one should consider something different, but it
doesn't mention what actually happens if that limit is crossed. Are
there any known failure modes with large data sets?
What are your observations and recommendations from your experience on
this front?
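For context on what "something different" might look like: the protobuf
documentation's usual suggestion for large collections is to keep each
individual message small and write many of them in a self-delimiting
(length-prefixed) stream rather than building one multi-megabyte message.
Below is a minimal Java sketch of that idea, assuming a hypothetical
generated `Record` class (from a hypothetical `records.proto`); it is an
illustration of the technique, not code from the protobuf project itself.

// Hypothetical generated class, e.g. from:
//   message Record { string id = 1; bytes payload = 2; }
import com.example.records.Record;  // assumed package for the generated code

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class DelimitedStreamDemo {

    // Write many small messages, each prefixed with its own varint length,
    // instead of serializing one huge message in a single buffer.
    static void writeAll(Iterable<Record> records, String path) throws IOException {
        try (FileOutputStream out = new FileOutputStream(path)) {
            for (Record r : records) {
                r.writeDelimitedTo(out);  // length prefix + message bytes
            }
        }
    }

    // Read the messages back one at a time; parseDelimitedFrom returns null
    // at end of stream, so memory use stays proportional to a single record.
    static void readAll(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            Record r;
            while ((r = Record.parseDelimitedFrom(in)) != null) {
                System.out.println(r.getId());
            }
        }
    }
}

With framing like this, the per-message size stays well under the
one-megabyte guideline even when the overall data set is much larger,
which is presumably the situation the project page is warning about.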
