Never mind, I had to select the MacPorts python for this to work.
--
You received this message because you are subscribed to the Google Groups
Protocol Buffers group.
To view this discussion on the web visit
https://groups.google.com/d/msg/protobuf/-/VVJ0bHA7IsQJ.
To post to this group, send
Hi,
I've got nested messages like:
message A
{
    required double value = 1;
}

message B
{
    required A a = 1;
}

message C
{
    repeated B entries = 1;
}
The C object is saved in a file and may contain any number of B entries. However,
now I'd like to save a copy of C in a different file with B's
Oh, I forgot to add some info on Protobuf: I use v2.3.0 with C++.
Hi,
I am a bit confused: has the final version v2.4.0 of ProtoBuf been released? I've
heard from multiple sources that it has, even though the official project
web page links to v2.4.0a. Is that an alpha version? What does the "a" mean?
Is it a stable release?
Thanks.
Hi,
I guess ProtoBuf was designed as a very simple data container from the
very beginning. The user (programmer) is supposed to write wrappers around these
containers. AFAIK, there is no access-level control; all set/get methods are
public.
Don't forget that ProtoBuf is only a simple way to
I like the interest in the topic.
I've put 1 GB to emphasize that the use case is safe. In fact, I save
messages to the file in the following way:
XYXYXYXYXY
where X is the size of the message and Y is the message itself. Each message
is read in a loop and overwritten. Clearly, I do *not* read the
Hi,
I have a large set of files with a number of messages of the same type saved.
My code reads messages in sequence from these files, one after another.
I've measured the running time of the code (with the terminal `time` command)
and got something like:
READING
===
Read ProtoBuf
Processed events:
Thanks for a quick reply.
Honestly, I fill a set of histograms for each event. I've added this feature
only recently and have a version of the code without histograms.
Here is the same performance measurement without histograms:
READING
===
Read ProtoBuf
Processed events: 100
real
I've added the synced cout wrapper and fixed the C float function use.
Eventually the code started working as expected; for example, on an 8-core
computer the performance measurements are:
Generate 20 files with 10 events in each
WRITING
===
Generate ProtoBuf
real 0m15.608s
user
btw, ProtoBuf is really fast and easy to use. I like it.
Well, I have to update the very first value after all messages are
written. I do not know a priori how many messages will be stored. Therefore,
I use fstream::seekp(0) to move the write pointer before the file is closed
and update the value. Of course, the number is written without
Hi,
Are there any examples of how to use GzipOutputStream in ProtoBuf?
So far I've managed this combo:
_raw_out.reset(new ::google::protobuf::io::OstreamOutputStream(_output));
_coded_out.reset(new ::google::protobuf::io::CodedOutputStream(_raw_out.get()));
(both objects are
Cool, it worked great.
Can I mix raw output and Gzip output in the same file?
Say, I'd like to write a raw number (4 bytes) at the beginning of the file
and then add the messages through the Gzip stream. Visually, my file would
look like:
.
where the first 4 bytes written
Hmm, thanks for the advice. It may work fine. Nevertheless, in that case I
have to skip the previously read messages every time the CodedInputStream is read.
In fact, I faced a different problem recently. It turns out I can write
arbitrarily long files, even 7 GB, with no problems.
Unfortunately, reading does
How come? I explicitly track the largest message written to the file
with: http://goo.gl/SAKlU
Here is an example of the output I get:
[1 ProtoBuf git.hist]$ ./bin/write data.pb; echo ---===---; ./bin/read data.pb
Saved: 100040 events
Largest message size written: 1815 bytes
---===---
File has:
I think I found the source of the problem: CodedInputStream has an internal
counter of how many bytes have been read so far with the same object.
In my case, there are a lot of small messages saved in the same file. I do
not read them all at once and therefore do not care about large
Hmm, it makes sense now and explains everything. Unfortunately, I didn't see
a way to write a fixed-width number with CodedOutputStream. Is there a way
to do this?
Hi,
I am wondering how Protocol Buffers read input files. Is the entire
file read into memory, or is some proxy technique used so that entries are
read only when required?
This is a vital feature for large lists, say, a dataset with 10^9
messages.
Do Protocol Buffers use any additional