I saw that Protocol Buffers has been benchmarked using the Northwind
data set: about 130 KB of data covering 3,000 objects, including orders
and order line items.

This is an excellent review:  
http://code.google.com/p/protobuf-net/wiki/Performance

Wouldn't it be more realistic to benchmark with a much larger file, in
which we are interested in only a few records, and only a few fields
within those records?

For example: out of 10,000 order line items, we want only the line
items with a particular product code. Or we want to pick out the orders
for a particular customer type, or with a particular description.
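To make the scenario concrete, here is a minimal sketch in Python of
what such a selective read might look like. Everything here is an
assumption for illustration: a generated module northwind_pb2 with an
OrderLine message carrying a product_code field, and a 4-byte length
prefix as framing (the wire format itself defines no container for a
sequence of messages).

    import struct

    import northwind_pb2  # assumed protoc output for a Northwind-style .proto

    def filter_order_lines(path, product_code):
        """Return the OrderLine messages whose product_code matches."""
        matches = []
        with open(path, "rb") as f:
            while True:
                header = f.read(4)
                if len(header) < 4:
                    break  # end of file
                (size,) = struct.unpack("<I", header)  # little-endian length prefix
                line = northwind_pb2.OrderLine()
                line.ParseFromString(f.read(size))
                if line.product_code == product_code:
                    matches.append(line)
        return matches

Note that even this "selective" read still has to decode every record
in the file, which is exactly why a benchmark over a much larger file
would be interesting.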

Are there use cases where data is stored in Protocol Buffer format in a
file and read into memory?
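For completeness, the write side of such a file-based store is equally
small; this continues the same assumptions (and the same hypothetical
framing) as the sketch above:

    import struct

    import northwind_pb2  # same assumed generated module as above

    def append_order_line(path, line):
        """Append one OrderLine, prefixed with its serialized byte length."""
        payload = line.SerializeToString()
        with open(path, "ab") as f:
            f.write(struct.pack("<I", len(payload)))
            f.write(payload)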

Another issue is that the size seems rather small: at only 256 bytes
per object, I would imagine there are many use cases where the objects
are much bigger.

Many use cases will involve much larger objects and will select m out
of N fields, where m might be 5 and N might be 20. This is because an
application rarely wants all of the information in a protocol buffer
generated by another program.
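One way to express that m-out-of-N selection with stock Protocol
Buffers is to declare a second message type that keeps only the needed
field numbers and parse the same bytes with it; the parser skips any
tags it does not recognize. A hedged sketch, with the generated module
and both message definitions assumed purely for illustration:

    # The .proto behind this might look like (tag numbers shared):
    #
    #   message Order     { /* twenty fields, tags 1..20 */ }
    #   message OrderView { optional string customer_type = 2;
    #                       optional string description   = 7; }

    import order_view_pb2  # assumed: a projection keeping only m of the tags

    def project(raw_bytes):
        """Parse full Order bytes as the slimmer OrderView."""
        view = order_view_pb2.OrderView()
        view.ParseFromString(raw_bytes)  # unrecognized field numbers are skipped
        return view

The parser still has to walk all N fields on the wire, but only the m
declared fields are materialized as objects, and that difference is the
kind of thing a benchmark like this could measure.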

Any comments?