Re: [capnproto] Re: Can not build capnp from master. Linking error.

2017-04-12 Thread 'Kenton Varda' via Cap'n Proto
(Ugh the Google Groups web interface apparently doesn't know how to CC the
right people.)

On Wed, Apr 12, 2017 at 8:30 PM, kenton via Cap'n Proto <
capnproto@googlegroups.com> wrote:

> Hi Iryna,
>
> I am not able to reproduce this failure, BUT I did find an error in the
> Makefile which looks like it could relate to the problem. I have pushed a
> fix. Can you try again and let me know if it works correctly now?
>
> Sorry for the inconvenience!
>
> -Kenton
>
> On Wednesday, April 12, 2017 at 12:15:55 PM UTC-7, Ірина Микитин wrote:
>>
>> Hello!
>>
>> I am trying to build capnp following instructions like this:
>>
>> git clone https://github.com/sandstorm-io/capnproto
>> cd capnproto
>> git checkout master
>> cd c++
>> autoreconf -i
>> ./configure
>> make -j2
>> sudo make install
>>
>> But during the last step (sudo make install) I am getting a linking error:
>>
>> [screenshot of the linker error omitted]
>>
>> I need to have capnp built from master. Could you please help me to
>> resolve this issue?
>>
>> --
>>
>>
>> *Best Regards, Iryna Mykytyn*
>>
>> *mobile: +380931200324*
>>
>>
>> *skype: irynamykytyn*
>>



Re: [capnproto] Detecting broken message and recovering from it

2017-04-14 Thread 'Kenton Varda' via Cap'n Proto
Hi Stepan,

No, there's no easy way to detect the corruption you describe. In fact,
for most serialization formats, there's no solution to this problem. Once
you've lost track of message boundaries, it's impossible to tell the
difference between the start of a new message vs. data in the previous
message, since any message can contain arbitrary byte blobs (e.g. via the
`Data` type).

If what you describe is a requirement for your use case, you could
accomplish it with an additional framing layer.

Option 1: Choose a 128-bit unguessable random number before you start
writing. Write that number before each message. Now you can scan the bytes
of the file looking for this 128-bit sequence and, if you see it, you can
be fairly certain (p ~= 2^-128) that a new message starts after it. You
have to use a new random number for every file in case you ever embed a
whole file into another file.
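
For illustration, a minimal C++ sketch of this framing (the delimiter layout
and helper name are just illustrative, not an established format):

// Sketch: frame each message with a per-file random 128-bit delimiter.
#include <capnp/message.h>
#include <capnp/serialize.h>
#include <kj/io.h>

void writeDelimitedMessage(kj::OutputStream& out,
                           kj::ArrayPtr<const kj::byte> delimiter,  // 16 random bytes
                           capnp::MessageBuilder& message) {
  out.write(delimiter.begin(), delimiter.size());  // delimiter first
  capnp::writeMessage(out, message);               // then the framed message
}
// To recover after corruption, scan the file for the 16-byte delimiter and
// try parsing a message starting right after each match.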

Option 2: Choose a magic number to write before each message, *and* scan
the contents of each message for this number, replacing it with an "escape
sequence" if seen. Do the opposite transformation while reading. This
allows you to detect boundaries "perfectly" (zero probability of false
positive) but you lose the benefits of zero-copy due to the need to process
escape sequences.

-Kenton

On Fri, Apr 14, 2017 at 12:35 PM,  wrote:

> I have a message that serializes into 24 bytes. I write two messages to a
> file, resulting in a file that's 48 bytes long. Now I truncate the file to 40
> bytes and write one message, so the file now looks like this: 1 full
> message, one broken, 1 full message. Is there any way to iterate over the
> file and, when encountering the broken message, detect that it is broken and
> skip directly to the second full message? I've been using Python to read
> such a file with the following code:
>
> def main():
>     with open('dates.txt', 'r') as fp:
>         for date in date_capnp.Date.read_multiple(fp):
>             print(date)
>
> But it fails with the following message:
>
> Message contains non-struct pointer where struct pointer was expected
>
> Also, if it's possible to detect such a message, is it possible to get its
> position and length? Thank you.
>



Re: [capnproto] Detecting broken message and recovering from it

2017-04-14 Thread 'Kenton Varda' via Cap'n Proto
FWIW capnp messages already encode their own size at the start of the
message (or, rather, they encode a segment table, which you can sum up to
get the total size).

This might be useful:
https://github.com/sandstorm-io/capnproto/blob/master/c++/src/capnp/serialize.h#L111
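
For example, here's a rough sketch of decoding that segment table yourself to
find where a message ends (the helper is illustrative only; it assumes a
little-endian host and a prefix you already trust):

#include <cstdint>
#include <cstring>
#include <cstddef>

// Framing layout: uint32 (segmentCount - 1), then one uint32 word count per
// segment, padded so the table occupies a whole number of 64-bit words.
size_t messageSizeFromPrefix(const uint8_t* prefix, size_t prefixLen) {
  if (prefixLen < 4) return 0;
  uint32_t segmentCount;
  std::memcpy(&segmentCount, prefix, 4);
  segmentCount += 1;                                 // stored as count minus one
  if (prefixLen < 4 + size_t(segmentCount) * 4) return 0;
  size_t tableBytes = ((segmentCount / 2) + 1) * 8;  // table padded to a word
  size_t dataWords = 0;
  for (uint32_t i = 0; i < segmentCount; i++) {
    uint32_t words;
    std::memcpy(&words, prefix + 4 + i * 4, 4);
    dataWords += words;
  }
  return tableBytes + dataWords * 8;                 // total message size in bytes
}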

-Kenton

On Fri, Apr 14, 2017 at 1:17 PM,  wrote:

> Thanks for the reply. Option 1 seems pretty reasonable to me. I would
> probably go as far as to frame the messages with magic + message size; that
> way I can verify that if there's another magic number (or end of file) at
> current position + message size, it's probably correct.
>
> On Friday, April 14, 2017 at 1:08:55 PM UTC-7, Kenton Varda wrote:
>>
>> Hi Stepan,
>>
>> No, there's no easy way to detect the corruption you describe. In fact,
>> for most serialization formats, there's no solution to this problem. Once
>> you've lost track of message boundaries, it's impossible to tell the
>> difference between the start of a new message vs. data in the previous
>> message, since any message can contain arbitrary byte blobs (e.g. via the
>> `Data` type).
>>
>> If what you describe is a requirement for your use case, you could
>> accomplish it with an additional framing layer.
>>
>> Option 1: Choose a 128-bit unguessable random number before you start
>> writing. Write that number before each message. Now you can scan the bytes
>> of the file looking for this 128-bit sequence and, if you see it, you can
>> be fairly certain (p ~= 2^-128) that a new message starts after it. You
>> have to use a new random number for every file in case you ever embed a
>> whole file into another file.
>>
>> Option 2: Choose a magic number to write before each message, *and* scan
>> the contents of each message for this number, replacing it with an "escape
>> sequence" if seen. Do the opposite transformation while reading. This
>> allows you to detect boundaries "perfectly" (zero probability of false
>> positive) but you lose the benefits of zero-copy due to the need to process
>> escape sequences.
>>
>> -Kenton
>>
>> On Fri, Apr 14, 2017 at 12:35 PM,  wrote:
>>
>>> I have a message that serializes into 24 bytes. I write two messages to
>>> a file, resulting in a file that's 48 bytes long. Now I truncate the file to
>>> 40 bytes and write one message, so the file now looks like this: 1 full
>>> message, one broken, 1 full message. Is there any way to iterate over the
>>> file and, when encountering the broken message, detect that it is broken and
>>> skip directly to the second full message? I've been using Python to read
>>> such a file with the following code:
>>>
>>> def main():
>>>     with open('dates.txt', 'r') as fp:
>>>         for date in date_capnp.Date.read_multiple(fp):
>>>             print(date)
>>>
>>> But it fails with the following message:
>>>
>>> Message contains non-struct pointer where struct pointer was expected
>>>
>>> Also, if it's possible to detect such a message, is it possible to get
>>> its position and length? Thank you.
>>>



[capnproto] Cap'n Proto security advisory CVE-2017-7892

2017-04-17 Thread 'Kenton Varda' via Cap'n Proto
Hi all,

(This was announced in various places early Monday but I forgot to send it
here -- doh!)

I discovered a vulnerability in Cap'n Proto C++. It appears to affect only
32-bit builds, seemingly only when built with Apple's compiler, and I think
it's only a DoS -- but my analysis could be wrong on any of these points.

I've released version 0.5.3.1 with the fix.

Details: https://github.com/sandstorm-io/capnproto/blob/master/security-advisories/2017-04-17-0-apple-clang-elides-bounds-check.md

-Kenton



Re: [capnproto] capnp Readers and Builders: how to write what I've read

2017-04-24 Thread 'Kenton Varda' via Cap'n Proto
You don't need the `.getRoot()` on the last line. Do:

builder.setRoot(pyRegionImplProto);

-Kenton

On Mon, Apr 24, 2017 at 5:50 PM,  wrote:

> Hi Kenton, I got into a pickle trying to convert a reader into a builder
> in C++. The relevant code looks like this:
>
> ```
> PyObject* Network::_readPyRegion(const std::string& moduleName,
>  const std::string& className,
>  RegionProto::Reader& proto)
> {
>   capnp::AnyPointer::Reader implProto = proto.getRegionImpl();
>
>   PyRegionProto::Reader pyRegionImplProto = implProto.getAs<PyRegionProto>();
>
>   // See PyRegion::read implementation for reference
>
>   capnp::MallocMessageBuilder builder;
>   builder.setRoot(pyRegionImplProto.getRoot()); // copy
> ```
>
> The last line chokes during compilation: "error: no member named
> 'getRoot' in 'PyRegionProto::Reader'"
>
> Thank you,
> Vitaly
>



Re: [capnproto] Need help making pycapnp/capnproto work across python and extension boundaries

2017-04-25 Thread 'Kenton Varda' via Cap'n Proto
Hi,

Since regionImpl is an AnyPointer, it doesn't have a direct setter.
Instead, do:

regionProto.getRegionImpl().setAs<PyRegionProto>(_writePyRegion());
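
In context, a minimal sketch (assuming capnp/message.h and the generated
header for your schema below are included; variable names are illustrative):

capnp::MallocMessageBuilder implMessage;
PyRegionProto::Reader impl = implMessage.initRoot<PyRegionProto>().asReader();

capnp::MallocMessageBuilder message;
RegionProto::Builder regionProto = message.initRoot<RegionProto>();
regionProto.getRegionImpl().setAs<PyRegionProto>(impl);  // copies impl into message

// Reading it back out later:
PyRegionProto::Reader roundTrip =
    regionProto.asReader().getRegionImpl().getAs<PyRegionProto>();  // no copy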

-Kenton

On Tue, Apr 25, 2017 at 11:17 AM,  wrote:

> Hi Kenton, I thought I was almost there, but got stuck here:
>
> One of the use cases involves a "Network" class in C++ python extension
> code that needs to serialize several subordinate "region" instances, some
> of which are implemented in C++ and some in Python. I am having a problem
> with the latter. To demonstrate that specific problem, I defined the
> following schemas:
>
> struct NetworkProto {
>   region @2 : RegionProto;
> }
>
> struct RegionProto {
>   # This stores the data for the RegionImpl. This will be a PyRegionProto
>   # instance if it is a PyRegion.
>   regionImpl @0 :AnyPointer;
> }
>
> struct PyRegionProto {
>   regionImpl @0 :AnyPointer;
> }
>
> As you recommended, we're passing byte buffers between the python and C++
> layers. In this case, I have the C++ method _writePyRegion in the extension
> that makes the call into the python layer and converts the bytes returned
> by the python layer into `PyRegionProto::Reader`: `PyRegionProto::Reader
> Network::_writePyRegion()`.
>
> Then, the following higher level method attempts to stuff the result of
> `Network::_writePyRegion` into "NetworkProto::RegionProto::regionImpl",
> but the compilation fails with "error: no member named 'setRegionImpl'
> in 'RegionProto::Builder'":
>
> void Network::write(NetworkProto::Builder& proto) const
> {
>   // Serialize the python region
>   auto regionProto = proto.initRegion();
>   regionProto.setRegionImpl(_writePyRegion()); // copy
> }
>



Re: [capnproto] Need help making pycapnp/capnproto work across python and extension boundaries

2017-04-26 Thread 'Kenton Varda' via Cap'n Proto
Hi,

I think what you want here is for pycapnp to be extended with some API that
other Python extensions can use to interact with it in order to wrap and
unwrap builders. pycapnp builders are actually wrapping a
capnp::DynamicStruct::Builder under the hood, which is easy to cast back
and forth to your native builder type. You just need pycapnp to give you
access somehow.
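
For reference, the cast itself on the C++ side is simple (a sketch, with Foo
standing in for any generated struct type):

#include <capnp/dynamic.h>
#include <capnp/message.h>

capnp::MallocMessageBuilder message;
Foo::Builder typed = message.initRoot<Foo>();

// Wrap the typed builder as the DynamicStruct::Builder that pycapnp holds.
capnp::DynamicStruct::Builder dynamic = capnp::toDynamic(typed);

// Cast back from the dynamic builder to the native generated type.
Foo::Builder typedAgain = dynamic.as<Foo>();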

I unfortunately do not know very much about how pycapnp and cython work, so
I'm not sure I can help. This may be a question for Jason Paryani.

By the way, if you guys are in the Bay Area, you should come to our Cap'n
Proto 0.6 release party on May 18 at Cloudflare:
https://www.meetup.com/Sandstorm-SF-Bay-Area/events/239341254/

-Kenton

On Wed, Apr 26, 2017 at 2:40 PM,  wrote:

> Here is more complete C++ code snippet for my prior post:
>
> PyObject* Network::_readPyRegion(const std::string& moduleName,
>  const std::string& className,
>  const RegionProto::Reader& proto)
> {
>   capnp::AnyPointer::Reader implProto = proto.getRegionImpl();
>
>   PyRegionProto::Reader pyRegionImplProto = implProto.getAs<PyRegionProto>();
>   // no copy here, right?
>
>   // Extract data bytes from reader to pass to python layer
>
>   capnp::MallocMessageBuilder builder;
>   builder.setRoot(pyRegionImplProto); // copy
>   auto array = capnp::messageToFlatArray(builder); // copy
>   // Copy from array to PyObject so that we can pass it to the Python layer
>   py::String pyRegionImplBytes((const char *)array.begin(),
>                                sizeof(capnp::word)*array.size()); // copy
>
> }
>



[capnproto] 0.6 release candidate!

2017-04-28 Thread 'Kenton Varda' via Cap'n Proto
Hi all,

There's a release candidate!

https://capnproto.org/capnproto-c++-0.6.0-rc1.tar.gz
https://capnproto.org/capnproto-c++-win32-0.6.0-rc1.zip

Please play with it and tell me what happens.

Highlights:
- *All* of Cap'n Proto now works on MSVC! That includes the dynamic
library, RPC, schema parsing, and building the capnp tool itself. Thanks to
Harris Hancock for doing most of this work. MSVC 2017 is required.
- JSON parser/serializer.
- HTTP library (somewhat WIP).
- Thorough fuzz testing.
- No more gtest dependency.
- Lots and lots of other stuff. I'll have a longer (but nowhere near
complete) list with the final release.

-Kenton



Re: [capnproto] capnp::canonicalize versus capnp::messageToFlatArray ?

2017-05-02 Thread 'Kenton Varda' via Cap'n Proto
Hi Vitaly,

capnp::canonicalize() does a lot more work to make sure that two messages
with the same content will produce exactly the same output bytes,
independent of e.g. in which order they were initialized. This makes it
slower than messageToFlatArray, so you should still use messageToFlatArray
unless you specifically need canonicalization.
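
For illustration, a sketch of the two calls side by side (MyStruct stands in
for your own schema type; check message.h for the exact canonicalize()
signature):

#include <capnp/message.h>
#include <capnp/serialize.h>

capnp::MallocMessageBuilder message;
MyStruct::Builder root = message.initRoot<MyStruct>();
// ... populate root ...

// Fast path: serialize the message as-is (layout depends on how it was built).
kj::Array<capnp::word> flat = capnp::messageToFlatArray(message);

// Canonical path: deterministic output for a given content, e.g. for hashing.
kj::Array<capnp::word> canonical = capnp::canonicalize<MyStruct>(root.asReader());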

-Kenton

On Tue, May 2, 2017 at 10:20 AM, vitaly numenta <
vitaly.krugl.nume...@gmail.com> wrote:

> I just read about capnp::canonicalize in the announcement. How does it
> differ from capnp::messageToFlatArray?
>
> Thank you,
> Vitaly
>



Re: [capnproto] Blocking and Non-Blocking Message Handling

2017-05-02 Thread 'Kenton Varda' via Cap'n Proto
Hi,

Yes, Cap'n Proto features an advanced event loop framework allowing for
non-blocking operation. It is also easy to make blocking calls provided you
are not inside a callback (callbacks must be non-blocking to avoid stalling
the event loop).
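
To illustrate both styles with EzRpc (a sketch; the Calculator interface here
is just the sample schema standing in for your own):

#include <capnp/ez-rpc.h>
#include "calculator.capnp.h"  // placeholder generated header

int main() {
  capnp::EzRpcClient client("localhost:5923");
  Calculator::Client calculator = client.getMain<Calculator>();
  auto& waitScope = client.getWaitScope();

  // Blocking style: wait() runs the event loop until the result is ready.
  auto response = calculator.evaluateRequest().send().wait(waitScope);

  // Non-blocking style: chain a continuation; nothing blocks here, and the
  // returned promise must be waited on (or otherwise kept alive) later.
  auto promise = calculator.evaluateRequest().send().then(
      [](capnp::Response<Calculator::EvaluateResults>&& r) {
        // ... use r ...
      });
  promise.wait(waitScope);
  return 0;
}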

That said, usually when someone requires a library to be non-blocking,
there is a requirement that the library is compatible with some
pre-existing event framework. Cap'n Proto provides its own framework, which
may be problematic if you need to work with an existing one. That said, it
is possible to integrate Cap'n Proto with other event loops, it just takes
a bit of engineering. Let me know if you need help with this.

-Kenton

On May 2, 2017 4:47 PM, "Hedge Hog"  wrote:

Hi,
I'm evaluating Cap'n Proto against the middleware requirements for the
Machinekit project.  We likely don't have many people familiar with
promise pipelining, or at least I am not, so there is some personal seed
investment required if CP were to be used as middleware.
I would like to be able to point to a definitive statement that
addresses this requirement, such that I can add a Y, N or some clearly
qualified note:

### Blocking and Non-Blocking Message Handling:
The transport library shall be able to send and receive messages in a
blocking as well as a non-blocking fashion without resorting to
cyclically polling for new messages to be available.

Thanks in advance.

Best wishes
Hedge

--
πόλλ' οἶδ ἀλώπηξ, ἀλλ' ἐχῖνος ἓν μέγα
[The fox knows many things, but the hedgehog knows one big thing.]
  Archilochus, Greek poet (c. 680 BC – c. 645 BC)
http://hedgehogshiatus.com



Re: [capnproto] Ez Rpc deprecated import/exportCap

2017-05-03 Thread 'Kenton Varda' via Cap'n Proto
Hi Mark,

There's currently no way to avoid the warnings except to disable them with
pragmas or compiler flags.

For example:

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
  // ... call importCap here ...
#pragma GCC diagnostic pop

I would like to remove these eventually, though...

How sticky is this embedded board software? Is it likely to be around for
years to come, without updates? If so, maybe we're stuck with these.

FWIW, importCap/exportCap were introduced in 0.4 (2013) and deprecated in
0.5 (2014), so they've now spent far more time deprecated than not...

-Kenton

On Tue, May 2, 2017 at 1:52 AM,  wrote:

> I'm connecting to some embedded hardware running both client and server
> EzRpc interfaces with the old importCap/exportCap methods and I want to
> connect to them without any warnings in my code about deprecated
> interfaces.
>
> Is it possible to have a client and server which have the new
> getMain<.*>() functionality talk to server and client using the original
> methods?
>
> I don't have access to change the embedded board software.
>



Re: [capnproto] Defaulted sub-structure fields

2017-05-15 Thread 'Kenton Varda' via Cap'n Proto
Hi Preston,

If you change `initField2()` to `getField2()`, you'll get the behavior you
want.

This is a somewhat unfortunate quirk in the nature of default values and
Cap'n Proto memory allocation. Imagine you had a very deep default value,
with a tree of nested structs. When you call the getter for that field the
first time, the implementation needs to make a copy of the whole default
value into the message proper so that you can modify it. However, it may be
that you're going to modify it to something that isn't nested. In that
case, not only has a bunch of time been wasted copying the tree and
allocating objects, but because of the arena-style memory allocation, the
objects you remove still end up taking space in the message.

Hence, I introduced "init" with a different policy: "init" always allocates
exactly one object, initialized to the *type*'s default value -- which
always corresponds to zeros on the wire due to the XOR defaults trick.
Hence "init" is always the fastest possible thing, but then it's up to you
to set the fields.
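
Concretely, with the schema from your message (a sketch, assuming
capnp/message.h and the generated header are included):

capnp::MallocMessageBuilder message;
MyRecord::Builder myRecord = message.initRoot<MyRecord>();

// getField2() copies the field's declared default into the message first,
// so the schema-level default (flag1 = true) shows through.
auto viaGet = myRecord.getField2();
// viaGet.getFlag1() == true, viaGet.getFlag2() == false

// initField2() allocates a fresh, zeroed object and ignores the field's
// declared default -- which is why you saw 0 0.
auto viaInit = myRecord.initField2();
// viaInit.getFlag1() == false, viaInit.getFlag2() == false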

In all honesty, I don't think I've ever set a default value on a
struct-typed field, so I'm not sure if it was worth having them (and hence
this confusion) in the first place. Oh well.

This is actually mentioned in the docs, but admittedly it's easy to miss...

https://capnproto.org/cxx.html#structs

-Kenton

On Sun, May 14, 2017 at 10:24 PM, Preston Elder  wrote:

> So according to the spec (https://capnproto.org/language.html#structs)
> the following schema should be valid:
>
> struct FieldWithFlags(Type) {
>   value @0 : Type;
>   flag1 @1 : Bool = false;
>   flag2 @2 : Bool = false;
> }
>
>
> struct MyRecord {
>   field1 @0 : FieldWithFlags(Text);
>   field2 @1 : FieldWithFlags(Text) = (flag1 = true);
> }
>
>
> However when I use this, using the C++ compiled code, when I do:
>
> auto field2 = myRecord.initField2();
> printf("%d, %d\n", field2.getFlag1(), field2.getFlag2());
>
>
> Will always print 0 0.  In other words, it ignored the default of true for
> flag1 I set in MyRecord.
>
> I get the same result even if I DON'T default the values of flag1 and
> flag2 in FieldWithFlags.
>
> I'm using CapnProto 0.5.3.
>
> Is there something I'm missing?  Is this a bug?  It should be defaulting
> the flag1 for field2 to true.
>
> Preston
>



Re: [capnproto] Problems with 0.6.0 header files on Windows Visual Studio 2015

2017-05-16 Thread 'Kenton Varda' via Cap'n Proto
Another possible solution is to convert macros into proper symbols in the
global scope. E.g.

#ifdef VOID
typedef VOID VOID_;
#undef VOID
typedef VOID_ VOID;
#endif

This approach then allows the application to use both the Windows VOID
symbol and capnp::VOID without doing its own macro tricks.

That said, the ability to emulate system_header for MSVC would be nice.

-Kenton

On Mon, May 15, 2017 at 6:01 PM, Harris Hancock  wrote:

> Kenton,
>
> I have a suggestion for a solution over on the GitHub issue for this:
> https://github.com/sandstorm-io/capnproto/issues/284
>
> It involves `#include`ing a preamble and postamble header at the start and
> end of every public header -- the preamble uses `#pragma push_macro` on
> problematic defines and the postamble restores them with `#pragma
> pop_macro`. I stole the idea from Boost.Asio. I know it's gross, but it
> seems like the most robust solution to me, and allows us to emulate
> `system_header` behavior. I can submit a PR implementing this idea for
> consideration in a week or two.
>
> Harris
>
> On Fri, May 12, 2017 at 6:22 PM, Kenton Varda  wrote:
>
>> Hi,
>>
>> The problem is that windows.h #defines the symbol VOID. You will either
>> need to include windows.h after capnp headers, or you will need to #undef
>> VOID after the include.
>>
>> We should probably make this more automatic, or at least provide a better
>> error message...
>>
>> -Kenton
>>
>> On May 12, 2017 6:20 PM,  wrote:
>>
>>> I'm trying to get Cap'n Proto 0.6.0 compiled as a plugin under
>>> UnrealEngine 4.13 using Visual Studio 2015 (update 3). I have compiled this
>>> version under Visual Studio from source. To test, if I create just a plain
>>> C++ project and use the installed Cap'n headers and libraries I can
>>> generate test message classes that seem to compile with a few compiler
>>> warnings. However when compiling an Unreal Project (or a Win32 project)
>>> with these same classes I get all sorts of strange errors in kj and cap'n
>>> (common.h, etc.) as it imports the header files. My best guess is that it
>>> has something to do with the precompiled headers in those environments?
>>> Does anyone have a clue or have any advice using Cap'n Proto under Unreal
>>> Engine/ Windows?
>>>
>>> Example of the errors:
>>>
>>> thirdparty\capnproto\include\capnp\common.h(64): error C2628:
>>> 'capnp::Void' followed by 'void' is illegal (did you forget a ';'?)
>>> thirdparty\capnproto\include\capnp\common.h(64): error C2513:
>>> 'capnp::Void': no variable declared before '='
>>> thirdparty\capnproto\include\capnp\common.h(96): error C2062: type
>>> 'void' unexpected
>>> thirdparty\capnproto\include\capnp\common.h(106): error C2143: syntax
>>> error: missing ';' before '}'
>>> thirdparty\capnproto\include\capnp\common.h(130): error C2065: 'Void':
>>> undeclared identifier
>>> thirdparty\capnproto\include\capnp\common.h(130): error C2923:
>>> '_::Kind_': 'Void' is not a valid template type argument for parameter 'T'
>>> thirdparty\capnproto\include\capnp\common.h(130): error C2913: explicit
>>> specialization; '_::Kind_' is not a specialization of a class template
>>> thirdparty\capnproto\include\capnp\common.h(131): error C2913: explicit
>>> specialization; '_::Kind_' is not a specialization of a class template
>>>
>>> I can include these  headers in an "empty project" and compile these
>>> classes fine so I'm not sure what the deal is.
>>>



Re: [capnproto] Error reporting with async/push interfaces?

2017-05-23 Thread 'Kenton Varda' via Cap'n Proto
Hi Ian,

I'm not sure I understand. write() can throw an exception. Does that not
solve the problem?

-Kenton

On Mon, May 22, 2017 at 5:01 PM, Ian Denhardt  wrote:

> Are there established best practices for handling errors that occur with
> async/push style interfaces, such as sandstorm's Util.ByteStream[1]?
> That interface doesn't seem to supply a way to report e.g. an IO error
> that occurs while streaming, and since the call that obtained the
> ByteStream has already completed, the error can't be reported via the
> rpc protocol's exception mechanism.
>
> I suppose one could just add a method for reporting the error. But I
> wanted to poll to see if there was any standard pattern for this.
>
> [1]: https://github.com/sandstorm-io/sandstorm/blob/master/src/sandstorm/util.capnp#L69
>



Re: [capnproto] Error reporting with async/push interfaces?

2017-05-23 Thread 'Kenton Varda' via Cap'n Proto
Ah.

Two things:
1) If the stream is dropped without done() ever having been called, you
know at least that the data is incomplete.
2) Usually, I would recommend that the method you call to say "write data
to this stream" should not return until all data is written.

get @0 (filename :Text, stream :ByteStream);
# Writes the content of the given file to the given stream. Returns once
# all data is written and stream.done() has completed successfully.

-Kenton

On Tue, May 23, 2017 at 12:56 PM, Ian Denhardt  wrote:

> I'm talking about reporting an error to the receiver of the data, not
> the caller of write().  E.g. I have a capnp interface for a remote
> filesystem, and I want to read data from a file. I pass the server a
> ByteStream to use to send me the data. How does the server report an
> error *to me*?
>
> Quoting Kenton Varda (2017-05-23 11:42:44)
> >Hi Ian,
> >I'm not sure I understand. write() can throw an exception. Does that
> >not solve the problem?
> >-Kenton
> >On Mon, May 22, 2017 at 5:01 PM, Ian Denhardt <[1]i...@zenhack.net>
> >wrote:
> >
> >  Are there established best practices for handling errors that occur
> >  with
> >  async/push style interfaces, such as sandstorm's Util.ByteStream[1]?
> >  That interface doesn't seem to supply a way to report e.g. an IO
> >  error
> >  that occurs while streaming, and since the call that obtained the
> >  ByteStream has already completed, the error can't be reported via
> >  the
> >  rpc protocol's exception mechanism.
> >  I suppose one could just add a method for reporting the error. But I
> >  wanted to poll to see if there was any standard pattern for this.
> >  [1]: [2]https://github.com/sandstorm-io/sandstorm/blob/master/src/sandstorm/util.capnp#L69
> > References
> >
> >1. mailto:i...@zenhack.net
> >2. https://github.com/sandstorm-io/sandstorm/blob/master/src/sandstorm/util.capnp#L69
>



Re: [capnproto] Build Cap'n Proto failure with Clang 3.5

2017-05-25 Thread 'Kenton Varda' via Cap'n Proto
Hi,

Clang 3.5 should be OK.

In the build log you gave, the "-std=gnu++1y" flag is not being passed to
the compiler, which explains the errors that follow.

One reason this could happen is if you are building from source and you are
using an older version of libtool. libtool is known to drop flags from
CXXFLAGS if it doesn't recognize them, and perhaps it doesn't recognize
"-std=gnu++1y". Note that normally the configure script automatically
detects which -std= flag to use and automatically works around this bug in
libtool, however since you specified -std= manually in your CXXFLAGS,
you've disabled the auto-detection.

I suggest starting over and letting the autodetection do its thing:

make distclean
./configure CXX=clang++ CXXFLAGS=-g

Or if you really want to enable C++14 (even though Cap'n Proto doesn't use
it), put it in CXX to work around libtool:

make distclean
./configure CXX="clang++ -std=gnu++1y" CXXFLAGS=-g

BTW, I see you are using 0.5.3.1, but note that the current version is
0.6.0.

-Kenton

On Thu, May 25, 2017 at 12:14 PM, Lucky Boy  wrote:

> Hi everyone,
>
> I am a beginner on Cap'n Proto and I would like to ask a build question. I
> am using Clang 3.5.0 to build Cap'n Proto but failed. What I did  for build
> was
>
> ./configure CXX=clang++ CXXFLAGS='-std=gnu++1y -g'
>
> make
>
> and the build error is as follows:
>
>
> luckyboy@cse-322osu10:capnproto-c++-0.5.3.1$ make
> depbase=`echo src/capnp/compiler/module-loader.o | sed
> 's|[^/]*$|.deps/&|;s|\.o$||'`;\
> clang++ -stdlib=libc++ -DHAVE_CONFIG_H -I.-I./src -I./src
> -DKJ_HEADER_WARNINGS -DCAPNP_HEADER_WARNINGS 
> -DCAPNP_INCLUDE_DIR='"/usr/local/include"'
> -pthread -g -pthread -MT src/capnp/compiler/module-loader.o -MD -MP -MF
> $depbase.Tpo -c -o src/capnp/compiler/module-loader.o
> src/capnp/compiler/module-loader.c++ &&\
> mv -f $depbase.Tpo $depbase.Po
> In file included from src/capnp/compiler/module-loader.c++:22:
> In file included from src/capnp/compiler/module-loader.h:29:
> In file included from src/capnp/compiler/compiler.h:29:
> In file included from ./src/capnp/compiler/grammar.capnp.h:7:
> In file included from ./src/capnp/generated-header-support.h:31:
> In file included from ./src/capnp/layout.h:36:
> ./src/kj/common.h:35:4: error: "This code requires C++11. Either your
> compiler does not support it or it is not enabled."
>   #error "This code requires C++11. Either your compiler does not support
> it or it is not enabled."
>^
> ./src/kj/common.h:38:6: error: "Pass -std=c++11 on the compiler command
> line to enable C++11."
> #error "Pass -std=c++11 on the compiler command line to enable C++11."
>  ^
> ./src/kj/common.h:289:39: warning: alias declarations are a C++11
> extension [-Wc++11-extensions]
> template  using NoInfer = typename NoInfer_::Type;
>   ^
> ./src/kj/common.h:295:43: warning: alias declarations are a C++11
> extension [-Wc++11-extensions]
> template  using RemoveConst = typename RemoveConst_::Type;
>   ^
> ./src/kj/common.h:297:56: error: unknown type name 'constexpr'
> template  struct IsLvalueReference_ { static constexpr bool
> value = false; };
>^
> ./src/kj/common.h:297:66: error: expected member name or ';' after
> declaration specifiers
> template  struct IsLvalueReference_ { static constexpr bool
> value = false; };
>  ^
> ./src/kj/common.h:298:62: error: unknown type name 'constexpr'
> template  struct IsLvalueReference_ { static constexpr
> bool value = true; };
>  ^
> ./src/kj/common.h:298:72: error: expected member name or ';' after
> declaration specifiers
> template  struct IsLvalueReference_ { static constexpr
> bool value = true; };
>    ^
> ./src/kj/common.h:300:8: error: unknown type name 'constexpr'
> inline constexpr bool isLvalueReference() { return
> IsLvalueReference_::value; }
>^
> ./src/kj/common.h:300:18: error: expected unqualified-id
> inline constexpr bool isLvalueReference() { return
> IsLvalueReference_::value; }
>  ^
> ./src/kj/common.h:303:30: error: explicit specialization of non-template
> struct 'Decay_'
> template  struct Decay_ { typedef typename Decay_::Type
> Type; };
>  ^ 
> ./src/kj/common.h:303:71: error: no type named 'Type' in 'Decay_'
> template  struct Decay_ { typedef typename Decay_::Type
> Type; };
>   ^~~~
> ./src/kj/common.h:304:38: warning: rvalue references are a C++11 extension
> [-Wc++11-extensions]
> template  struct Decay_ { typedef typename
> Decay_::Type Type; };
>  ^
> ./src/kj/common.h:311:37: warning: alias declarations are 

Re: [capnproto] build w/ yocto

2017-05-26 Thread 'Kenton Varda' via Cap'n Proto
Hi Eric,

I don't know why the behavior would differ on your system (maybe automake
changed?), but I imagine this should fix it:

https://github.com/sandstorm-io/capnproto/commit/5f20b5cc1032df29d830409433c7d7a0c71266e5

Let me know if that helps.

-Kenton

On Fri, May 26, 2017 at 3:08 AM, Schwarz, Eric  wrote:

> Hello,
>
> w/ Yocto's krogoth branch everything works fine. With Yocto's morty branch
> the "src" directory in the "build" directory is missing and "capnp compile"
> complains about that. Just creating a "src" directory is enough. Then
> everything works fine.
> What would be the proper solution to fix this? Is there a configure option
> available?
> We use capnproto 0.6.0 release. Please find the Yocto recipes attached. -
> Maybe someone may post it on [1].
>
> Cheers
> Eric
>
> [1]... https://layers.openembedded.org/layerindex/branch/master/layers/
>
>



Re: [capnproto] build w/ yocto

2017-05-26 Thread 'Kenton Varda' via Cap'n Proto
Hi Eric,

On Fri, May 26, 2017 at 11:37 AM, Schwarz, Eric  wrote:

> Hello Kenton,
>
> many thanks for the fast reply and check-in. - I will give it a try and
> get back to you.
> Three things are really near to my heart:
> 1.) Please abstract the usage of commands
> - mkdir -> $(MKDIR)
> - touch -> $(TOUCH)
> - make -> $(MAKE)
> - ...
> This is really necessary since e.g. Yocto brings it very own
> environment including native tools along. Actually every build system does.
>

Yocto provides a custom mkdir? Why?


> Using bare commands is like the 80's hackers did.
>

This ad hominem insult isn't helpful.

> We just committed a fix to U-Boot recently abstracting "python".
> There should then be a file defining the commands such as:
> MKDIR ?= mkdir -p
> TOUCH ?= touch
> MAKE ?= make
> This also gives you the opportunity to define certain parameters
> along w/ the command. Please note the question mark. This enables any other
> make environment to override the commands.
>

Defining MKDIR to "mkdir -p" would be wrong as -p changes the behavior.

It looks like automake already provides a MKDIR_P binding, so I guess I'll
use that.

https://github.com/sandstorm-io/capnproto/commit/bd15dd218910d48df13209546e3767a9daa8e8da

> Apart, capnproto has got two build systems. - Which one shall be
> actually used and why are there two of it?
>

You can use either. The automake build is canonical but a lot of developers
prefer cmake, especially when integrating Cap'n Proto into a cmake project.
cmake also supports Visual Studio.

> 2.) The second thing I would like to address is that there should
> be branches for Yocto. E.g. "krogoth" and "morty". We have already the
> openjdk maintainer convinced that this makes definitely sense.
>

Sorry, I don't understand what you're asking for here.


> 3.) Third, will you push the recipes to [1]? - I might remove the
> hack w/ the "src" directory before.
>

No, I don't plan to do this. You are welcome to do it, if you'd like.

On Fri, May 26, 2017 at 11:48 AM, Schwarz, Eric  wrote:

> the more elegant way would be:
>
> DEPS_DIRS := src
>
> $(DEPS_DIRS):
> 	$(MKDIR) $@
>
> test_capnpc_middleman: $(test_capnpc_inputs) | $(DEPS_DIRS)
>
> test_capnpc_middleman: capnp$(EXEEXT) capnpc-c++$(EXEEXT)
> $(test_capnpc_inputs) | $(DEPS_DIRS)
>

I'm happy with my solution, thanks.

-Kenton



Re: [capnproto] build w/ yocto

2017-05-26 Thread 'Kenton Varda' via Cap'n Proto
On Fri, May 26, 2017 at 12:54 PM, Schwarz, Eric  wrote:

> Well, there exist different branches for Yocto which bring along different
> versions of tools needed to build the stuff e.g. autotools. Obviously the
> versions are not compatible as it is in the case of e.g. LaTeX (always the
> same behaviour for existing features). Thus, the build behaviour changes
> and there might be patches necessary which may exclude each other for
> different branches. Thus, having a "krogoth" and "morty" branch for the
> 0.6.0 release where just those build fixes and nothing else gets pushed on
> top would make life a lot easier. Only if things build smoothly and are
> easy to integrate people will use it.
>

If we were to extend this to every platform on which people use Cap'n
Proto, we would quickly have an unmanageable number of branches -- at least
dozens, maybe hundreds.

I would rather have one branch that is expected to work everywhere. If
different platforms require divergent code, then we need to auto-detect
which behavior to use, rather than maintain separate branches.

That said, you are of course welcome to maintain your own fork of Cap'n
Proto where you apply fixes as needed for your environment.

-Kenton



Re: [capnproto] Build Cap'n Proto failure with Clang 3.5

2017-05-27 Thread 'Kenton Varda' via Cap'n Proto
Hi LuckyBoy,

It looks like your compiler is crashing, which is definitely a bug in the
compiler, not in Cap'n Proto. Is this really an unmodified build of Clang
3.5? Are you targeting an unusual architecture, or is there anything else
unusual about your compiler configuration? I tested 0.6.0 using Clang 3.5
on Linux before release and it worked for me.

-Kenton

On Sat, May 27, 2017 at 8:39 AM, Lucky Boy  wrote:

> Hi Kenton,
>
> Thanks very much and I appreciate your comments! I tried again and again
> but unfortunately still couldn't get capnproto built with Clang 3.5... the
> information is as follows:
>
>
> luckyboy@cse-322osu10:capnproto-c++-0.6.0$ ./configure CXX=clang++ CXXFLAGS=-g
> checking for a BSD-compatible install... /usr/bin/install -c
> checking whether build environment is sane... yes
> checking for a thread-safe mkdir -p... /bin/mkdir -p
> checking for gawk... gawk
> checking whether make sets $(MAKE)... yes
> checking whether make supports nested variables... yes
> checking whether UID '1001' is supported by ustar format... yes
> checking whether GID '1001' is supported by ustar format... yes
> checking how to create a ustar tar archive... gnutar
> checking for gcc... gcc
> checking whether the C compiler works... yes
> checking for C compiler default output file name... a.out
> checking for suffix of executables...
> checking whether we are cross compiling... no
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether gcc accepts -g... yes
> checking for gcc option to accept ISO C89... none needed
> checking whether gcc understands -c and -o together... yes
> checking for style of include used by make... GNU
> checking dependency style of gcc... gcc3
> checking whether we are using the GNU C++ compiler... yes
> checking whether clang++ accepts -g... yes
> checking dependency style of clang++... gcc3
> checking whether clang++ supports C++11 features by default... no
> checking whether clang++ supports C++11 features with -std=gnu++11... yes
> checking whether clang++ -std=gnu++11 supports C++11 library features by
> default... yes
> checking build system type... x86_64-pc-linux-gnu
> checking host system type... x86_64-pc-linux-gnu
> checking for the pthreads library -lpthreads... no
> checking whether pthreads work without any flags... no
> checking whether pthreads work with -Kthread... no
> checking whether pthreads work with -kthread... no
> checking for the pthreads library -llthread... no
> checking whether pthreads work with -pthread... yes
> checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE
> checking if more special flags are required for pthreads... no
> checking whether to check for GCC pthread/shared inconsistencies... yes
> checking whether -pthread is sufficient with -shared... yes
> checking whether pthread flag is sufficient with -nostdlib... no
> checking whether adding -lpthread fixes that... yes
> checking how to print strings... printf
> checking for a sed that does not truncate output... /bin/sed
> checking for grep that handles long lines and -e... /bin/grep
> checking for egrep... /bin/grep -E
> checking for fgrep... /bin/grep -F
> checking for ld used by gcc... /usr/local/bin/ld
> checking if the linker (/usr/local/bin/ld) is GNU ld... yes
> checking for BSD- or MS-compatible name lister (nm)... /usr/local/bin/nm -B
> checking the name lister (/usr/local/bin/nm -B) interface... BSD nm
> checking whether ln -s works... yes
> checking the maximum length of command line arguments... 1572864
> checking how to convert x86_64-pc-linux-gnu file names to
> x86_64-pc-linux-gnu format... func_convert_file_noop
> checking how to convert x86_64-pc-linux-gnu file names to toolchain
> format... func_convert_file_noop
> checking for /usr/local/bin/ld option to reload object files... -r
> checking for objdump... objdump
> checking how to recognize dependent libraries... pass_all
> checking for dlltool... no
> checking how to associate runtime and link libraries... printf %s\n
> checking for ar... ar
> checking for archiver @FILE support... @
> checking for strip... strip
> checking for ranlib... ranlib
> checking command to parse /usr/local/bin/nm -B output from gcc object... ok
> checking for sysroot... no
> checking for a working dd... /bin/dd
> checking how to truncate binary pipes... /bin/dd bs=4096 count=1
> checking for mt... mt
> checking if mt is a manifest tool... no
> checking how to run the C preprocessor... gcc -E
> checking for ANSI C header files... yes
> checking for sys/types.h... yes
> checking for sys/stat.h... yes
> checking for stdlib.h... yes
> checking for string.h... yes
> checking for memory.h... yes
> checking for strings.h... yes
> checking for inttypes.h... yes
> checking for stdint.h... yes
> checking for unistd.h... yes
> checking for dlfcn.h... yes
> checking for objdir... .libs
> checking if gcc supports -fno-rtti -fno-exceptions... no
> checking for gcc option to 

Re: [capnproto] build w/ yocto

2017-05-30 Thread 'Kenton Varda' via Cap'n Proto
Hi Eric,

My normal branching policy is to cherry-pick minor bug fixes into the
release branch as needed. I don't normally distinguish between build fixes
vs. other kinds of fixes, but I avoid cherry-picking anything that has any
chance of breaking anyone.

It sounds like you'd like me to cherry-pick these build fixes into the 0.6
release?

-Kenton

On Sun, May 28, 2017 at 7:58 AM, Schwarz, Eric  wrote:

> Hello Kenton,
>
> mhhh, I see ... what about just one branch based on the release branch
> only containing build fixes? - Would that be feasible?
> Cap'n Proto is IMHO a very complex thing and one may really distinguish
> between build and source code fixes.
> Also explicit tags as they are used in the Linux kernel development (e.g.
> build: ...) concerning the commit messages would make life a lot easier.
>
> We have got here a rather complete CI infrastructure (Windows+VS/Mac
> OSX/Linux - almost any version of anything). If I hook some CI builds for
> Cap'n Proto would you be interested in getting direct feedback via e-mail
> concerning build breaks?
>
> Cheers
> Eric
>
> From: Kenton Varda [mailto:ken...@cloudflare.com]
> Sent: Friday, 26 May 2017 22:10
> To: Schwarz, Eric
> Cc: capnproto@googlegroups.com
> Subject: Re: [capnproto] build w/ yocto
>
> On Fri, May 26, 2017 at 12:54 PM, Schwarz, Eric  wrote:
> Well, there exist different branches for Yocto which bring along different
> versions of tools needed to build the stuff e.g. autotools. Obviously the
> versions are not compatible as it is in the case of e.g. LaTeX (always the
> same behaviour for existing features). Thus, the build behaviour changes
> and there might be patches necessary which may exclude each other for
> different branches. Thus, having a "krogoth" and "morty" branch for the
> 0.6.0 release where just those build fixes and nothing else gets pushed on
> top would make life a lot easier. Only if things build smoothly and are
> easy to integrate people will use it.
>
> If we were to extend this to every platform on which people use Cap'n
> Proto, we would quickly have an unmanageable number of branches -- at least
> dozens, maybe hundreds.
>
> I would rather have one branch that is expected to work everywhere. If
> different platforms require divergent code, then we need to auto-detect
> which behavior to use, rather than maintain separate branches.
>
> That said, you are of course welcome to maintain your own fork of Cap'n
> Proto where you apply fixes as needed for your environment.
>
> -Kenton
>



Re: [capnproto] RPC pipelining on a struct's union members

2017-05-31 Thread 'Kenton Varda' via Cap'n Proto
Hi Johannes,

Thanks for the feedback.

I'm curious: In your use case, does the caller somehow know in advance when
getValue() is going to return an interface, thus allowing pipelining? If
so, would it make sense to have a separate method, e.g. getInterface(), for
that case, which just returns the interface without the union, thus
allowing pipelining today? Or if the caller does not know in advance, then
what would you like the behavior to be if the caller makes a pipelined
request but the result ends up not being an interface? I guess the
pipelined call should throw an exception?

-Kenton

On Wed, May 31, 2017 at 2:00 AM, Johannes Zeppenfeld 
wrote:

> Hi Kenton,
>
> I'm reviving this thread because I've just had the same wish - to be able
> to pipeline a union access. I have a type-erased interface Getter that can
> return its value through a union struct. The returned value could be either
> a builtin type, a struct or an interface:
>
> interface MyInterface {
>  myFunction @0 ();
> }
>
> struct Any {
>  union {
>   invalid @0 :Void;
>   uint32 @1 :UInt32;
>   myInterface @2 :MyInterface;
>   # etc...
>  }
> }
>
> interface Getter {
>  getValue @0 () -> (value :Any);
> }
>
>
> Given a Getter and wanting to create the associated MyInterface client, I
> currently need to do two round trips:
>
> MyInterface::Client myInterface = getter.getValueRequest().send().then(
>   [](capnp::Response<Getter::GetValueResults> &&response)
>   -> MyInterface::Client {
>     return response.getValue().getMyInterface();
>   }
> );
>
>
> It would be a welcome optimization (both performance-wise and
> boilerplate-wise) to be able to pipeline this:
> MyInterface::Client myInterface = getter.getValueRequest().send()
>   .getValue().getMyInterface();
>
>
> Just to add another voice to those desiring this feature :)
>
> Thanks for your work on this amazing library!
> Johannes
>
>



Re: [capnproto] Making a Cap'n Proto Github organization

2017-06-05 Thread 'Kenton Varda' via Cap'n Proto
OK, I've sent out invites and started populating the org.

On Mon, Jun 5, 2017 at 8:48 AM, Ross Light  wrote:

> SGTM.  For Go, I will need to hack my vanity URL resolver so that "
> zombiezen.com/go/capnproto2" will resolve to the org repository before I
> can do the move.  Short-term, this is fine, since the import path won't
> change at all.
>

FWIW github is pretty good about automatic redirects, so maybe you don't
actually need to do anything?


>
> Long-term, we may want to consider changing Go to use the new GitHub org
> in the import path, instead of being tied to my domain for the reasons
> mentioned above.  This will be much easier to do once Go 1.9 is released
> and introduces type aliases.  That way, I can introduce a stub import path
> that just aliases the new import path.
>
> -Ross
>
> On Mon, Jun 5, 2017 at 8:42 AM Julián Díaz  wrote:
>
>> Big 👍 from me, I'll happily move my typescript/js implementation to the
>> new org once it's prod ready.
>>
>> On Sun, Jun 4, 2017 at 8:25 PM, Kenton Varda  wrote:
>>
>>> Hi all,
>>>
>>> Currently capnproto is a project inside the github organization for
>>> Sandstorm, i.e. github.com/sandstorm-io/capnproto.
>>>
>>> However, Sandstorm has become less-active lately
>>> 
>>>  whereas I'm now working on Cap'n Proto fairly actively at Cloudflare
>>> ,
>>> independent of Sandstorm. Hence, it seems like it no longer makes a lot of
>>> sense to treat it as a sub-project of Sandstorm.
>>>
>>> Moreover, several people are maintaining Cap'n Proto implementations in
>>> various languages hosted under their own github user accounts. I like for
>>> repositories to be separate in this way, to delineate maintainership and
>>> avoid unnecessarily tying together projects and release cycles. However, it
>>> is admittedly disorganized, and things get particularly awkward when
>>> maintainership changes over time.
>>>
>>> I propose, therefore, that we create a Cap'n Proto organization. I
>>> further propose that any implementation of Cap'n Proto which we consider
>>> production-ready should be moved into this organization. Maintainership /
>>> ownership of repositories won't change, but this will make it easier for
>>> people to find all the code in one place. And if a maintainer wants to step
>>> down or designate other maintainers, it will be much easier to do so with
>>> the repos under an organization.
>>>
>>> I've gone ahead and created the org here:
>>>   https://github.com/capnproto
>>>
>>> I propose that the following repositories be moved into the org:
>>> - C++ main repo (Kenton Varda)
>>> - Rust (David Renshaw)
>>> - Java (David Renshaw)
>>> - Python (Jason Paryani)
>>> - Go v2 (Ross Light)
>>> - Lua (Cloudflare / Jiale Zhi) (Jiale no longer works at Cloudflare, but
>>> Cloudflare definitely uses this code in prod!)
>>> - C (David Lamparter)
>>> - Node.js (Kenton Varda)
>>> - OCaml (Paul Pelzl)
>>>
>>> (It looks to me like the other implementations -- Javascript, Nim, Ruby,
>>> Scala, and Erlang -- are currently either still incomplete or not actively
>>> maintained. However, if I've misjudged, let me know.)
>>>
>>> Thoughts? Objections?
>>>
>>> -Kenton
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Cap'n Proto" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to capnproto+unsubscr...@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/capnproto.
>>>
>>
>> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] Does a message serialized by 0.5.3 can be read by 0.6 if scheme file is same?

2017-06-07 Thread 'Kenton Varda' via Cap'n Proto
Moreover, the encoding will never change in a backwards-incompatible way.
If for some reason I invented a new incompatible encoding I'd give it a new
name entirely.

(However, we may occasionally add new features which aren't understood by
old versions, but such features would be opt-in.)

-Kenton

On Wed, Jun 7, 2017 at 9:20 AM, Ian Denhardt  wrote:

> Yes. The encoding hasn't changed at all.
>
> Quoting Vitaliy Bondarchuk (2017-06-07 07:39:46)
> >Hi
> >I use cap'n'proto messages as storage format in NoSQL database. Can I
> >continue use data with 0.6 prepared with older version?
> >Thanks
> >
> >--
> >You received this message because you are subscribed to the Google
> >Groups "Cap'n Proto" group.
> >To unsubscribe from this group and stop receiving emails from it, send
> >an email to [1]capnproto+unsubscr...@googlegroups.com.
> >Visit this group at [2]https://groups.google.com/group/capnproto.
> >
> > References
> >
> >1. mailto:capnproto+unsubscr...@googlegroups.com
> >2. https://groups.google.com/group/capnproto
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] Re: Making a Cap'n Proto Github organization

2017-06-07 Thread 'Kenton Varda' via Cap'n Proto
Hi Thomas,

I've given you write access and given Anil admin access. Sorry, I'd assumed
Anil already had admin.

-Kenton

On Wed, Jun 7, 2017 at 1:59 AM, Thomas Leonard  wrote:

> We've moved the OCaml repository to https://github.com/capnproto/c
> apnp-ocaml but I can't commit to it.
> Could someone add me (talex5) to it? It would be good if at least one of
> talex5 and avsm had admin access so we could add other people, configure
> Travis, etc.
>
> See: https://github.com/capnproto/capnp-ocaml/issues/12#issuecomm
> ent-306296506
>
> (BTW, I also have some experimental-and-incomplete OCaml RPC support at
> https://github.com/mirage/capnp-rpc which it might be worth merging
> eventually)
>
>
>
> On Monday, June 5, 2017 at 1:26:07 AM UTC+1, Kenton Varda wrote:
>>
>> Hi all,
>>
>> Currently capnproto is a project inside the github organization for
>> Sandstorm, i.e. github.com/sandstorm-io/capnproto.
>>
>> However, Sandstorm has become less-active lately
>> 
>>  whereas I'm now working on Cap'n Proto fairly actively at Cloudflare
>> ,
>> independent of Sandstorm. Hence, it seems like it no longer makes a lot of
>> sense to treat it as a sub-project of Sandstorm.
>>
>> Moreover, several people are maintaining Cap'n Proto implementations in
>> various languages hosted under their own github user accounts. I like for
>> repositories to be separate in this way, to delineate maintainership and
>> avoid unnecessarily tying together projects and release cycles. However, it
>> is admittedly disorganized, and things get particularly awkward when
>> maintainership changes over time.
>>
>> I propose, therefore, that we create a Cap'n Proto organization. I
>> further propose that any implementation of Cap'n Proto which we consider
>> production-ready should be moved into this organization. Maintainership /
>> ownership of repositories won't change, but this will make it easier for
>> people to find all the code in one place. And if a maintainer wants to step
>> down or designate other maintainers, it will be much easier to do so with
>> the repos under an organization.
>>
>> I've gone ahead and created the org here:
>>   https://github.com/capnproto
>>
>> I propose that the following repositories be moved into the org:
>> - C++ main repo (Kenton Varda)
>> - Rust (David Renshaw)
>> - Java (David Renshaw)
>> - Python (Jason Paryani)
>> - Go v2 (Ross Light)
>> - Lua (Cloudflare / Jiale Zhi) (Jiale no longer works at Cloudflare, but
>> Cloudflare definitely uses this code in prod!)
>> - C (David Lamparter)
>> - Node.js (Kenton Varda)
>> - OCaml (Paul Pelzl)
>>
>> (It looks to me like the other implementations -- Javascript, Nim, Ruby,
>> Scala, and Erlang -- are currently either still incomplete or not actively
>> maintained. However, if I've misjudged, let me know.)
>>
>> Thoughts? Objections?
>>
>> -Kenton
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



[capnproto] 0.6.1 released

2017-06-08 Thread 'Kenton Varda' via Cap'n Proto
Hi all,

I rolled up some bug fixes into a 0.6.1 release. These include:

- Work-around GCC 4.9.2 bug that caused test case
"Capability/DynamicServerPipelining"
to segfault. The bug is fixed in GCC 4.9.4 but Debian Jessie is stuck at
4.9.2. The bug only affected the test.
- Work around SFINAE bug in MSVC involving List(T) when T is a generic type
parameter (see issue #479).
- Fix bug in HTTP library where large writes would be randomly canceled.
(Note that the HTTP library has had several other smaller bugfixes and
shouldn't be considered ready for production until the next release.)
- Work around automake build problem where `src` directory sometimes was
not created before `capnp` was invoked.

-Kenton



Re: [capnproto] Re: Reviving the JavaScript implementation

2017-06-09 Thread 'Kenton Varda' via Cap'n Proto
Sweet!

On Thu, Jun 8, 2017 at 7:01 PM, Julián Díaz  wrote:

> If anyone is really interested in using this stuff *today* please reach
> out to me so I can better understand what you need and perhaps rearrange
> how I implement things. Otherwise, there's still lots to do before 1.0.0!
>

I can wait until its more ready, but once it is I'll be eager to try it out
in Sandstorm.

It would be amazing if we could retire node-capnp which Sandstorm uses
currently. It leaks a lot of memory due to inability to GC through C++
objects and the V8 C++ API being hard to use correctly in general. Of
course, we'll need RPC before this can happen.

PS: For the compiler nerds: the schema compiler actually uses the
> TypeScript compiler API directly to build an AST before printing it to a
> file.
>

Nice!

-Kenton


>
> On Thursday, May 11, 2017 at 12:58:03 PM UTC-4, Julián Díaz wrote:
>>
>> Can't use proxies at all in TypeScript unless I set the target to ES6 -
>> right now I want to keep it compiling to ES5 so it's immediately useful for
>> a wider range of people.
>>
>> It also seems like it's going to perform like crap:
>> http://thecodebarbarian.com/thoughts-on-es6-proxies-performance.html
>>
>> I'll add it to the TODO, regardless; a separate ES6 build would be useful
>> for many people. I could document it with the caveat that .get() will
>> always be faster.
>>
>> On Thursday, May 11, 2017 at 11:10:16 AM UTC-4, Kenton Varda wrote:
>>>
>>> On Wed, May 10, 2017 at 11:17 PM, Mark Miller  wrote:
>>>
 https://kangax.github.io/compat-table/es6/

 It looks like proxies are supported everywhere.

>>>
>>> Unfortunately a lot of people still use old browsers.
>>>
>>> http://caniuse.com/#feat=proxy -- click on the "usage relative" box.
>>>
>>> -Kenton
>>>
>>>


 On Wed, May 10, 2017 at 8:54 PM, Kenton Varda 
 wrote:

> Sweet!
>
> Totally random comment from totally randomly opening a file and
> looking at it:
>
> I see lists are accessed via a method .get(n). Have you considered
> using proxies to allow array subscript [] syntax? I guess some 10-20% of
> browsers still don't support proxies but that number will only go down.
>
> -Kenton
>
> On Tue, May 9, 2017 at 2:03 PM, Julián Díaz  wrote:
>
>> I'm happy to report some real progress!
>>
>> https://github.com/jdiaz5513/capnp-ts
>>
>> Right now it's not very useful at all (I just barely have
>> serialization working) but it's a solid starting point to wrap up the
>> serialization API. The peanut gallery can start poking around to see how 
>> I
>> organized things – it does depart slightly from the reference
>> implementation but I'm still aiming to make an external API that's very
>> similar to the C++ one.
>>
>> Once the Struct/List classes are complete I'll move on to the schema
>> compiler, which looks like it'll be a cinch. Hoping I can keep up the
>> steady progress from here.
>>
>> On Monday, April 10, 2017 at 1:45:01 AM UTC-4, Ian Denhardt wrote:
>>>
>>> Quoting Kenton Varda (2017-04-09 18:35:48)
>>>
>>> >1) libcapnp and libkj together add up to some 730k of code
>>> (text
>>> >segment) these days. Unless emscripten builds are significantly
>>> >smaller, that's probably too big.
>>>
>>> Hard to know without trying it, but it may well be the case that
>>> wasm
>>> builds will be smaller. The VM seems to be designed for small code
>>> size
>>> (sensibly, given its target use case). This obviously doesn't apply
>>> to
>>> the asm.js output.
>>>
>>> That said, I agree having a pure JS implementation is preferable.
>>>
>> --
>> You received this message because you are subscribed to the Google
>> Groups "Cap'n Proto" group.
>> To unsubscribe from this group and stop receiving emails from it,
>> send an email to capnproto+...@googlegroups.com.
>> Visit this group at https://groups.google.com/group/capnproto.
>>
>
> --
> You received this message because you are subscribed to the Google
> Groups "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to capnproto+...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



 --
   Cheers,
   --MarkM

 --
 You received this message because you are subscribed to the Google
 Groups "Cap'n Proto" group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to capnproto+...@googlegroups.com.
 Visit this group at https://groups.google.com/group/capnproto.

>>>
>>> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegro

Re: [capnproto] Shared memory communication

2017-06-13 Thread 'Kenton Varda' via Cap'n Proto
Indeed, this was a major design goal!

I haven't personally used it this way yet, though I've used it for mmap()
many times, which has similar properties.

I would like to develop an RPC transport which uses shared memory. It would
be especially neat if the code could automatically upgrade to shared memory
whenever it detects that it's communicating over a unix socket.

One interesting hurdle for comms is what happens if you don't trust the
sending process. If the memory is still shared, they can potentially modify
the data while you're consuming it. If you make sure to read each bit of
data no more than once, then in theory no attack is possible -- but I
haven't yet reviewed the Cap'n Proto library itself to check that the
pointer validation reads each bit once, which would need to be guaranteed
before an application could consider relying on this.

Linux's memfds support "sealing" the memory after writing it so that the
consuming end can be assured that no further changes are occurring.
However, messages need to be pretty large before this becomes worthwhile --
for typical-size messages, an upfront memcpy() will be much faster.
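
(To make the shared-memory path concrete, here's a rough sketch of the writer
side, assuming Linux with memfd_create() available; error handling is omitted
and the capnp part is just the ordinary flat-array serialization:)

#include <sys/mman.h>       // memfd_create, mmap (Linux-only)
#include <fcntl.h>          // fcntl, F_ADD_SEALS, F_SEAL_*
#include <unistd.h>         // ftruncate, close
#include <cstring>
#include <capnp/message.h>
#include <capnp/serialize.h>

// Copies a flat Cap'n Proto message into a memfd, then seals it so the
// receiving process can mmap the fd and know the bytes can't change anymore.
int writeSealedMessage(capnp::MessageBuilder& builder) {
  kj::Array<capnp::word> words = capnp::messageToFlatArray(builder);
  size_t size = words.asBytes().size();

  int fd = memfd_create("capnp-msg", MFD_ALLOW_SEALING);
  ftruncate(fd, size);
  void* mem = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  memcpy(mem, words.asBytes().begin(), size);
  munmap(mem, size);  // writable mappings must be gone before F_SEAL_WRITE

  fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE);
  return fd;  // pass to the receiver over a unix socket via SCM_RIGHTS
}

The receiver would mmap() the fd read-only, hand the mapping to
FlatArrayMessageReader, and verify via fcntl(fd, F_GET_SEALS) that
F_SEAL_WRITE is actually present before trusting the contents.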

-Kenton

On Tue, Jun 13, 2017 at 11:26 AM, Omnifarious  wrote:

> Cap'n Proto seems like it would be ideal for a shared memory
> communications channel, right down to the relative pointers. Has anybody
> used it this way? What are the hurdles?
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] Shared memory communication

2017-06-13 Thread 'Kenton Varda' via Cap'n Proto
If you're aiming to slot Cap'n Proto into an existing system with existing
protocols, you probably want to use Cap'n Proto's serialization layer only,
not the RPC layer. In that case it's just a byte blob, so embedding is
easy. The RPC layer, OTOH, dictates a lot about the design of your
application code, and requires an underlying transport in which messages
can be sent in either direction at any time (which often doesn't fit well
into existing protocols).
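
(Concretely, "just a byte blob" means something like the following sketch; the
frame format around it is whatever your existing protocol already defines:)

#include <kj/io.h>
#include <capnp/message.h>
#include <capnp/serialize.h>

// Sending side: flatten the message and hand the raw bytes to your framing.
void embed(capnp::MessageBuilder& builder, kj::OutputStream& frameBody) {
  kj::Array<capnp::word> words = capnp::messageToFlatArray(builder);
  frameBody.write(words.asBytes().begin(), words.asBytes().size());
}

// Receiving side: the blob must be 8-byte aligned (copy it if your framing
// can't guarantee that); then it can be read in place with no further copies.
void readEmbedded(kj::ArrayPtr<const kj::byte> frameBody) {
  kj::ArrayPtr<const capnp::word> words(
      reinterpret_cast<const capnp::word*>(frameBody.begin()),
      frameBody.size() / sizeof(capnp::word));
  capnp::FlatArrayMessageReader reader(words);
  auto root = reader.getRoot<capnp::AnyPointer>();  // or your own schema type
  // ... use `root` while `reader` is still alive ...
}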

-Kenton

On Tue, Jun 13, 2017 at 1:24 PM, Eric Hopper  wrote:

> I'm working with some stuff right now in which there is a need for mass
> structured data transfer between two processes running on the same machine.
> They already have a protocol, and didn't use shared memory out of security
> concerns.
>
> While I'm not in a position to get them to completely rethink the whole
> protocol at this time, I can certainly make it a recommendation for the
> future. Using memfds with memory sealing (which I will have to read about
> in more depth) might well be a good solution in the future.
>
> Unfortunately, there is a need to have some compatibility with a Windows
> implementation, but they can just use Cap'n Proto based named pipes or TCP
> connections and still be somewhat better off than the TCP-based protocol
> they use now. Right now they also move pointers around to point at
> structures read in off the wire. Those structures contain no pointers of
> course, but still...
>
> How much of a state machine is Cap'n Protocol internally? Can another
> transport be slid in underneath it really easily? How about if that
> transport multiplexes between several different senders, each of which is
> using Cap'n Proto, but which then have their own frame structure around it
> in which those frames may not correspond neatly to protocol units in Cap'n
> Proto (i.e. one Cap'n Proto protocol unit might be split between two
> frames)?
>
>
> On Tue, Jun 13, 2017 at 12:07 PM, Kenton Varda 
> wrote:
>
>> Indeed, this was a major design goal!
>>
>> I haven't personally used it this way yet, though I've used it for mmap()
>> many times, which has similar properties.
>>
>> I would like to develop an RPC transport which uses shared memory. It
>> would be especially neat if the code could automatically upgrade to shared
>> memory whenever it detects that it's communicating over a unix socket.
>>
>> One interesting hurdle for comms is what happens if you don't trust the
>> sending process. If the memory is still shared, they can potentially modify
>> the data while you're consuming it. If you make sure to read each bit of
>> data no more than once, then in theory no attack is possible -- but I
>> haven't yet reviewed the Cap'n Proto library itself to check that the
>> pointer validation reads each bit once, which would need to be guaranteed
>> before an application could consider relying on this.
>>
>> Linux's memfds support "sealing" the memory after writing it so that the
>> consuming end can be assured that no further changes are occurring.
>> However, messages need to be pretty large before this becomes worthwhile --
>> for typical-size messages, an upfront memcpy() will be much faster.
>>
>> -Kenton
>>
>> On Tue, Jun 13, 2017 at 11:26 AM, Omnifarious 
>> wrote:
>>
>>> Cap'n Proto seems like it would be ideal for a shared memory
>>> communications channel, right down to the relative pointers. Has anybody
>>> used it this way? What are the hurdles?
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Cap'n Proto" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to capnproto+unsubscr...@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/capnproto.
>>>
>>
>>
>
>
> --
> Please only send email to this account that you consider public enough to
> be published on a public web page.
> Eric Hopper -- http://www.omnifarious.org/~hopper/
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] Shared memory communication

2017-06-13 Thread 'Kenton Varda' via Cap'n Proto
(That said, the underlying transport is suitably abstracted such that you
can plug in your own. Though you'll need to integrate your application's
event loop with the KJ event loop, which can be tedious.)

-Kenton

On Tue, Jun 13, 2017 at 1:29 PM, Kenton Varda  wrote:

> If you're aiming to slot Cap'n Proto into an existing system with existing
> protocols, you probably want to use Cap'n Proto's serialization layer only,
> not the RPC layer. In that case it's just a byte blob, so embedding is
> easy. The RPC layer, OTOH, dictates a lot about the design of your
> application code, and requires an underlying transport in which messages
> can be sent in either direction at any time (which often doesn't fit well
> into existing protocols).
>
> -Kenton
>
> On Tue, Jun 13, 2017 at 1:24 PM, Eric Hopper 
> wrote:
>
>> I'm working with some stuff right now in which there is a need for mass
>> structured data transfer between two processes running on the same machine.
>> They already have a protocol, and didn't use shared memory out of security
>> concerns.
>>
>> While I'm not in a position to get them to completely rethink the whole
>> protocol at this time, I can certainly make it a recommendation for the
>> future. Using memfds with memory sealing (which I will have to read about
>> in more depth) might well be a good solution in the future.
>>
>> Unfortunately, there is a need to have some compatibility with a Windows
>> implementation, but they can just use Cap'n Proto based named pipes or TCP
>> connections and still be somewhat better off than the TCP-based protocol
>> they use now. Right now they also move pointers around to point at
>> structures read in off the wire. Those structures contain no pointers of
>> course, but still...
>>
>> How much of a state machine is Cap'n Protocol internally? Can another
>> transport be slid in underneath it really easily? How about if that
>> transport multiplexes between several different senders, each of which is
>> using Cap'n Proto, but which then have their own frame structure around it
>> in which those frames may not correspond neatly to protocol units in Cap'n
>> Proto (i.e. one Cap'n Proto protocol unit might be split between two
>> frames)?
>>
>>
>> On Tue, Jun 13, 2017 at 12:07 PM, Kenton Varda 
>> wrote:
>>
>>> Indeed, this was a major design goal!
>>>
>>> I haven't personally used it this way yet, though I've used it for
>>> mmap() many times, which has similar properties.
>>>
>>> I would like to develop an RPC transport which uses shared memory. It
>>> would be especially neat if the code could automatically upgrade to shared
>>> memory whenever it detects that it's communicating over a unix socket.
>>>
>>> One interesting hurdle for comms is what happens if you don't trust the
>>> sending process. If the memory is still shared, they can potentially modify
>>> the data while you're consuming it. If you make sure to read each bit of
>>> data no more than once, then in theory no attack is possible -- but I
>>> haven't yet reviewed the Cap'n Proto library itself to check that the
>>> pointer validation reads each bit once, which would need to be guaranteed
>>> before an application could consider relying on this.
>>>
>>> Linux's memfds support "sealing" the memory after writing it so that the
>>> consuming end can be assured that no further changes are occurring.
>>> However, messages need to be pretty large before this becomes worthwhile --
>>> for typical-size messages, an upfront memcpy() will be much faster.
>>>
>>> -Kenton
>>>
>>> On Tue, Jun 13, 2017 at 11:26 AM, Omnifarious 
>>> wrote:
>>>
 Cap'n Proto seems like it would be ideal for a shared memory
 communications channel, right down to the relative pointers. Has anybody
 used it this way? What are the hurdles?

 --
 You received this message because you are subscribed to the Google
 Groups "Cap'n Proto" group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to capnproto+unsubscr...@googlegroups.com.
 Visit this group at https://groups.google.com/group/capnproto.

>>>
>>>
>>
>>
>> --
>> Please only send email to this account that you consider public enough to
>> be published on a public web page.
>> Eric Hopper -- http://www.omnifarious.org/~hopper/
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Cap'n Proto" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to capnproto+unsubscr...@googlegroups.com.
>> Visit this group at https://groups.google.com/group/capnproto.
>>
>
>



Re: [capnproto] C# support functioning?

2017-06-14 Thread 'Kenton Varda' via Cap'n Proto
My understanding is that the repo was never production-ready and is no
longer being actively developed. :(

I think Marc is looking for a new maintainer.

-Kenton

On Wed, Jun 14, 2017 at 4:02 PM, Emil Christopher Solli Melar <
typ...@gmail.com> wrote:

> Hi! I am looking into this and comparing capn'proto to Flatbuffers,
> Protobuf, MsgPack.
>
> But I need C# support as well. https://github.com/
> StackExchange/capnproto-net
>
> It's 3 years with no activity. Will it work today, or is it in dire need
> of an overhaul?
>
> Thanks!
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] Cap'n Proto for D

2017-06-21 Thread 'Kenton Varda' via Cap'n Proto
Nice!

Would you be interested in moving this into the Cap'n Proto github
organization? We've recently been working on consolidating production-ready
implementations there: https://github.com/capnproto

-Kenton

On Wed, Jun 21, 2017 at 4:11 AM, Thomas Brix Larsen 
wrote:

> I have been working on a D port of the Java implementation by dwrensha. It
> has reached a point where I consider it stable for production use.
>
> Repo can be found here: https://github.com/ThomasBrixLarsen/capnproto-
> dlang
>
> Please add it to the list of other languages. Serialization only.
>
> - Thomas
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] What happens on kj::joinPromises failure?

2017-06-22 Thread 'Kenton Varda' via Cap'n Proto
Hi Amit,

Yeah, I'll avoid changing the behavior of joinPromises() -- if I implement
the new behavior I'll make it a new function joinPromisesFailfast().

That said, for the use case you describe, I'd suggest an RAII-style design.
That is, cleanup/teardown should always happen in destructors, not in
exception handlers. The biggest reason for this is that a catch handler
won't ever run if the top-level promise is canceled (dropped/destroyed),
whereas destructors will run. But also, you could probably come up with a
design where the correct cleanup would happen whether or not joinPromises()
failed fast or waited for all promises to finish.
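
(As a concrete illustration, a minimal sketch of such a guard; UndoGuard and
its undo callbacks are invented for the example, while kj::joinPromises(),
.then(), .attach() and kj::heap() are the real KJ pieces involved:)

#include <kj/async.h>
#include <kj/function.h>
#include <kj/vector.h>

// Runs its registered undo actions on destruction unless commit() was called.
// The destructor runs whether the joined promise succeeds, fails, or is
// dropped/canceled -- which is exactly why cleanup belongs there.
class UndoGuard {
public:
  ~UndoGuard() {
    if (!committed) {
      for (auto& undo : undos) undo();  // undo actions must not throw
    }
  }
  void add(kj::Function<void()> undo) { undos.add(kj::mv(undo)); }
  void commit() { committed = true; }
private:
  kj::Vector<kj::Function<void()>> undos;
  bool committed = false;
};

kj::Promise<void> runAll(kj::Array<kj::Promise<void>> promises) {
  auto guard = kj::heap<UndoGuard>();
  // Each sub-operation registers its undo action on the guard as it succeeds.
  UndoGuard& g = *guard;
  return kj::joinPromises(kj::mv(promises))
      .then([&g]() { g.commit(); })
      .attach(kj::mv(guard));  // guard is destroyed once the chain settles
}

This behaves the same whichever way joinPromises() ends up being defined,
since the undo pass is tied to the guard's lifetime rather than to a catch
handler.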

-Kenton

On Wed, Jun 21, 2017 at 11:38 PM,  wrote:

> Hi Kenton,
>
> Sorry to bring this up from a long time ago, but if it's possible I think
> the behavior of joinPromises should be well-defined. For my use case, which
> I share below, it's preferable for all promises to complete before an
> exception is propagated, but I understand the reasons to go the other way.
>
> Here's my use case:
> For each successful promise in the array I would like to call an "undo"
> promise in case one of the others fail. So I can write code similar to this:
> kj::Vector<kj::Promise<void>> vec;
> std::shared_ptr cleanup; // This is like an "async guard" to
> undo successful promises in case one fails
> for (p in promises) {
>   vec.add(p.then([cleanup]() { cleanup->add(undo(p)); }));
> }
> kj::joinPromises(vec.releaseAsArray()).catch_([cleanup](){
> cleanup->Go(); // Calls all added undo promises
>  });
>
>
> If an exception is called after all promises complete (successfully or
> unsuccessfully) - I believe this code is correct.
> However, if a single failure propagates immediately - this code is
> incorrect, as one promise can be halfway to successful completion and the
> cleanup won't be called for it when it completes.
>
> Thanks,
> Amit
>
> On Saturday, November 14, 2015 at 1:00:08 AM UTC+2, Kenton Varda wrote:
>>
>> Yes, an exception from any one promise becomes an exception from the
>> combined promise.
>>
>> I think joinPromises() still waits for all to complete before propagating
>> the exception. Arguably it should cancel all the other promises as soon as
>> one resolves to an exception.
>>
>> -Kenton
>>
>> On Fri, Nov 13, 2015 at 2:26 PM, Nathan Hourt  wrote:
>>
>>> If I use kj::joinPromises to convert my Array<Promise<T>> to
>>> Promise<Array<T>>, and one of the promises breaks, what happens to the
>>> joined promise? Does it break and thereby throw away all of the resolved
>>> promises?
>>>
>>> Thanks!
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Cap'n Proto" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to capnproto+...@googlegroups.com.
>>> Visit this group at http://groups.google.com/group/capnproto.
>>>
>>
>> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] [Java] Failing to read multiple messages from a single file

2017-07-10 Thread 'Kenton Varda' via Cap'n Proto
Hi Farid,

Does the problem happen around 2^31 bytes in? My guess is that the library
is using an `int` somewhere where it should be using a `long`. Or, perhaps
it's trying to map more than 2^31 bytes in a single ByteBuffer, which won't
work since ByteBuffer seems to use `int`s for indexes.

David, any thoughts?

-Kenton

On Fri, Jul 7, 2017 at 11:55 AM, Farid Zakaria 
wrote:

> Hi everyone,
>
> I'm looking for some guidance on what I may be doing wrong.
> I'm serializing (unpacked) multiple MessageBuilder to FileChannel via a
> BufferedOutputStreamWrapper
>
> Here is a snippet of the code in Kotlin
>
> val partitionFileInputStream = FileOutputStream(filename, true).channel
> val buffered = BufferedOutputStreamWrapper(partitionFileInputStream)
>
> val recordIterator = RecordIterator(dataSource)
> recordIterator.asSequence().map { row ->
> converter(row)
> }.forEach {  message ->
> Serialize.write(buffered, message)
> }
>
> buffered.flush()
> buffered.close()
>
>
>
> I'm writing millions of records to a file which is several GB in size at the 
> end.
>
> I then try to read the file:
>
>
> val fileChannel = RandomAccessFile(filePath.toFile(), "r").getChannel()
>
> for(message in SerializedIterator(fileChannel)) {
> val record = message.getRoot(SomeClass.Object.factory)
> }
>
>
> here is the iterator implementation:
>
>
> class SerializedIterator(readChan: ReadableByteChannel) : 
> AbstractIterator<MessageReader>(), AutoCloseable {
>
> val buffer = BufferedInputStreamWrapper(readChan)
>
> override fun close() {
> buffer.close()
> }
>
> override fun computeNext() {
> try {
> setNext(Serialize.read(buffer))
> } catch (e : Error) {
> close()
> done()
> }
> }
>
> }
>
>
>
> It seems to go fine for several million records and then I get hit with:
>
>
> java.lang.NegativeArraySizeException: null
>   at org.capnproto.Serialize.read(Serialize.java:91) 
> ~[runtime-0.1.1.jar:0.1.1]
>   at org.capnproto.Serialize.read(Serialize.java:51) 
> ~[runtime-0.1.1.jar:0.1.1]
>   at SerializedIterator.computeNext(SerializedIterator.kt:18)
>
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] Re: Reviving the JavaScript implementation

2017-07-17 Thread 'Kenton Varda' via Cap'n Proto
So I looked at this today and I'm pretty impressed! Code looks clean and
seems to be following best practices. I made some notes on the issue
tracker, as you probably saw, but generally looks pretty good. I'm pretty
excited to start using this -- and I really want to migrate Sandstorm to
TypeScript.

One question: Have you written tests using the test data in the capnp repo?

https://github.com/capnproto/capnproto/tree/master/c++/src/capnp/testdata

This would help check for any misreads of the spec.

-Kenton

On Thu, Jun 8, 2017 at 7:01 PM, Julián Díaz  wrote:

> For those itching to get on the bleeding edge, I've got a working schema
> compiler now!
>
> Serialization seems to be working okay with some unimplemented edges here
> and there. Perhaps not surprisingly, I'm already seeing places where this
> can outperform JSON.parse, so that's a major win!
>
> I almost had compile-to-JS support working as well, but the TypeScript
> compiler is refusing to play nice with me right now. (See:
> https://github.com/jdiaz5513/capnp-ts/issues/5)
>
> If anyone is really interested in using this stuff *today* please reach
> out to me so I can better understand what you need and perhaps rearrange
> how I implement things. Otherwise, there's still lots to do before 1.0.0!
>
> PS: For the compiler nerds: the schema compiler actually uses the
> TypeScript compiler API directly to build an AST before printing it to a
> file.
>
> On Thursday, May 11, 2017 at 12:58:03 PM UTC-4, Julián Díaz wrote:
>>
>> Can't use proxies at all in TypeScript unless I set the target to ES6 -
>> right now I want to keep it compiling to ES5 so it's immediately useful for
>> a wider range of people.
>>
>> It also seems like it's going to perform like crap:
>> http://thecodebarbarian.com/thoughts-on-es6-proxies-performance.html
>>
>> I'll add it to the TODO, regardless; a separate ES6 build would be useful
>> for many people. I could document it with the caveat that .get() will
>> always be faster.
>>
>> On Thursday, May 11, 2017 at 11:10:16 AM UTC-4, Kenton Varda wrote:
>>>
>>> On Wed, May 10, 2017 at 11:17 PM, Mark Miller  wrote:
>>>
 https://kangax.github.io/compat-table/es6/

 It looks like proxies are supported everywhere.

>>>
>>> Unfortunately a lot of people still use old browsers.
>>>
>>> http://caniuse.com/#feat=proxy -- click on the "usage relative" box.
>>>
>>> -Kenton
>>>
>>>


 On Wed, May 10, 2017 at 8:54 PM, Kenton Varda 
 wrote:

> Sweet!
>
> Totally random comment from totally randomly opening a file and
> looking at it:
>
> I see lists are accessed via a method .get(n). Have you considered
> using proxies to allow array subscript [] syntax? I guess some 10-20% of
> browsers still don't support proxies but that number will only go down.
>
> -Kenton
>
> On Tue, May 9, 2017 at 2:03 PM, Julián Díaz  wrote:
>
>> I'm happy to report some real progress!
>>
>> https://github.com/jdiaz5513/capnp-ts
>>
>> Right now it's not very useful at all (I just barely have
>> serialization working) but it's a solid starting point to wrap up the
>> serialization API. The peanut gallery can start poking around to see how 
>> I
>> organized things – it does depart slightly from the reference
>> implementation but I'm still aiming to make an external API that's very
>> similar to the C++ one.
>>
>> Once the Struct/List classes are complete I'll move on to the schema
>> compiler, which looks like it'll be a cinch. Hoping I can keep up the
>> steady progress from here.
>>
>> On Monday, April 10, 2017 at 1:45:01 AM UTC-4, Ian Denhardt wrote:
>>>
>>> Quoting Kenton Varda (2017-04-09 18:35:48)
>>>
>>> >1) libcapnp and libkj together add up to some 730k of code
>>> (text
>>> >segment) these days. Unless emscripten builds are significantly
>>> >smaller, that's probably too big.
>>>
>>> Hard to know without trying it, but it may well be the case that
>>> wasm
>>> builds will be smaller. The VM seems to be designed for small code
>>> size
>>> (sensibly, given its target use case). This obviously doesn't apply
>>> to
>>> the asm.js output.
>>>
>>> That said, I agree having a pure JS implementation is preferable.
>>>
>> --
>> You received this message because you are subscribed to the Google
>> Groups "Cap'n Proto" group.
>> To unsubscribe from this group and stop receiving emails from it,
>> send an email to capnproto+...@googlegroups.com.
>> Visit this group at https://groups.google.com/group/capnproto.
>>
>
> --
> You received this message because you are subscribed to the Google
> Groups "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to capnproto+...@googlegroups.com.
> Visit this group at https://groups.google.c

Re: [capnproto] Re: Reviving the JavaScript implementation

2017-07-18 Thread 'Kenton Varda' via Cap'n Proto
On Tue, Jul 18, 2017 at 9:00 AM, Julián Díaz  wrote:

> Appreciate the endorsement!
>
> I did in fact borrow some of that test data directly, though there's an
> interesting divergence in the TypeScript version of the packing algorithm
> so I wound up editing segmented-packed by hand to match:
> https://github.com/jdiaz5513/capnp-ts/pull/10.
>

Hmm. I suspect this is because the C++ implementation packs each segment
separately, and so the two runs of zeros came before and after a segment
boundary. Unfortunately, assuming my suspicion is correct, then I suspect
the C++ implementation will not be able to unpack your version. This is
because it unpacks each segment into a separate array, but for speed
reasons the core unpacking loop targets a single array at a time and has no
way to pass along its state to the next call. I suppose this needs to be
documented.

You can check if this is the case by using `capnp decode -p` on your
version of the data. I suspect it will throw an error.

If it doesn't throw an error, then I'm confused.
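
(If it's easier to check from code than from the CLI, the rough equivalent
with the C++ library is the sketch below; it reads into AnyPointer since no
schema is needed just to validate the framing:)

#include <capnp/any.h>
#include <capnp/serialize-packed.h>
#include <kj/io.h>

// Throws kj::Exception if `packedBytes` isn't packed framing that the C++
// implementation can unpack (the whole message is unpacked inside the
// PackedMessageReader constructor).
void checkPacked(kj::ArrayPtr<const kj::byte> packedBytes) {
  kj::ArrayInputStream in(packedBytes);
  capnp::PackedMessageReader reader(in);
  reader.getRoot<capnp::AnyPointer>();
}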

-Kenton


> Still making slow but steady progress writing tests and finding broken
> stuff. I'll publish to npm once the serialization part isn't so... broken.
>
> - Julián
>
> On Monday, July 17, 2017 at 8:36:36 PM UTC-4, Kenton Varda wrote:
>>
>> So I looked at this today and I'm pretty impressed! Code looks clean and
>> seems to be following best practices. I made some notes on the issue
>> tracker, as you probably saw, but generally looks pretty good. I'm pretty
>> excited to start using this -- and I really want to migrate Sandstorm to
>> TypeScript.
>>
>> One question: Have you written tests using the test data in the capnp
>> repo?
>>
>> https://github.com/capnproto/capnproto/tree/master/c++/src/capnp/testdata
>>
>> This would help check for any misreads of the spec.
>>
>> -Kenton
>>
>> On Thu, Jun 8, 2017 at 7:01 PM, Julián Díaz  wrote:
>>
>>> For those itching to get on the bleeding edge, I've got a working schema
>>> compiler now!
>>>
>>> Serialization seems to be working okay with some unimplemented edges
>>> here and there. Perhaps not surprisingly, I'm already seeing places where
>>> this can outperform JSON.parse, so that's a major win!
>>>
>>> I almost had compile-to-JS support working as well, but the TypeScript
>>> compiler is refusing to play nice with me right now. (See:
>>> https://github.com/jdiaz5513/capnp-ts/issues/5)
>>>
>>> If anyone is really interested in using this stuff *today* please reach
>>> out to me so I can better understand what you need and perhaps rearrange
>>> how I implement things. Otherwise, there's still lots to do before 1.0.0!
>>>
>>> PS: For the compiler nerds: the schema compiler actually uses the
>>> TypeScript compiler API directly to build an AST before printing it to a
>>> file.
>>>
>>> On Thursday, May 11, 2017 at 12:58:03 PM UTC-4, Julián Díaz wrote:

 Can't use proxies at all in TypeScript unless I set the target to ES6 -
 right now I want to keep it compiling to ES5 so it's immediately useful for
 a wider range of people.

 It also seems like it's going to perform like crap:
 http://thecodebarbarian.com/thoughts-on-es6-proxies-performance.html

 I'll add it to the TODO, regardless; a separate ES6 build would be
 useful for many people. I could document it with the caveat that .get()
 will always be faster.

 On Thursday, May 11, 2017 at 11:10:16 AM UTC-4, Kenton Varda wrote:
>
> On Wed, May 10, 2017 at 11:17 PM, Mark Miller 
> wrote:
>
>> https://kangax.github.io/compat-table/es6/
>>
>> It looks like proxies are supported everywhere.
>>
>
> Unfortunately a lot of people still use old browsers.
>
> http://caniuse.com/#feat=proxy -- click on the "usage relative" box.
>
> -Kenton
>
>
>>
>>
>> On Wed, May 10, 2017 at 8:54 PM, Kenton Varda 
>> wrote:
>>
>>> Sweet!
>>>
>>> Totally random comment from totally randomly opening a file and
>>> looking at it:
>>>
>>> I see lists are accessed via a method .get(n). Have you considered
>>> using proxies to allow array subscript [] syntax? I guess some 10-20% of
>>> browsers still don't support proxies but that number will only go down.
>>>
>>> -Kenton
>>>
>>> On Tue, May 9, 2017 at 2:03 PM, Julián Díaz 
>>> wrote:
>>>
 I'm happy to report some real progress!

 https://github.com/jdiaz5513/capnp-ts

 Right now it's not very useful at all (I just barely have
 serialization working) but it's a solid starting point to wrap up the
 serialization API. The peanut gallery can start poking around to see 
 how I
 organized things – it does depart slightly from the reference
 implementation but I'm still aiming to make an external API that's very
 similar to the C++ one.

 Once the Struct/List classes are comple

Re: [capnproto] Cap'n Proto for C#/.NET

2017-07-19 Thread 'Kenton Varda' via Cap'n Proto
Nice!

What makes it slow? Presumably things that can be fixed?

-Kenton



On Tue, Jul 18, 2017 at 12:55 PM, Thomas Brix Larsen  wrote:

> Following the success of my recent D port, I have been working on a C#
> port based on the Java and D implementations.
>
> Repo can be found here: https://github.com/ThomasBrixLarsen/capnproto-
> dotnet
>
> Serialization only. It is slow. Very slow. But at least it works.
>
> - Thomas
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>



Re: [capnproto] Some questions about the RPC spec

2017-07-19 Thread 'Kenton Varda' via Cap'n Proto
On Wed, Jul 19, 2017 at 2:57 PM, Ross Light  wrote:

> Replies inline (with the disclaimer that I'm not Kenton, my only
> credentials are that I have stared at this file for a long time):
>
> On Wed, Jul 19, 2017 at 1:46 PM Thomas Leonard  wrote:
>
>> Hi,
>>
>> I'm trying to write an implementation of the RPC spec (level 1, in
>> OCaml). I found a few parts of the spec unclear - could someone clarify
>> them for me?
>>
>> It says:
>>
>> [ExportId]
>> > The exporter chooses an ID before sending a capability over the wire. If
>> > the capability is already in the table, the exporter should reuse the
>> same ID.
>>
>> But later:
>>
>> [CapDescriptor]
>> > senderHosted @1 :ExportId;
>> > A capability newly exported by the sender.  This is the ID of the new
>> capability in the
>> > sender's export table (receiver's import table).
>>
>> How can the exporter reuse the same ID, if it has to be newly exported?
>>
>
> That seems like a doc/spec typo.  You can always specify an existing
> capability.  I think the wording should be something like: "A capability
> exported by the sender.  This may or may not be a new ID in the sender's
> export table (receiver's import table)."
>

Correct.


>
>
>> [Message]
>> > This could be e.g.  because the sender received an invalid or
>> nonsensical
>> > message (`isCallersFault` is true) or because the sender had an
>> internal error
>> > (`isCallersFault` is false).
>>
>> isCallersFault appears to be deprecated (`obsoleteIsCallersFault` appears
>> much later).
>>
>
> Yup, Exception has changed (IMO for the better).  Instead of placing blame
> on sender or receiver (such distinctions are hard to draw in general),
> exceptions are now about what action that caller is advised to take based
> on the failure.
>

Correct.


>
> [Call.sendResultsTo]
>> > When `yourself` is used, the receiver must still send a `Return` for
>> the call, but sets the
>> > field `resultsSentElsewhere` in that `Return` rather than including the
>> results.
>>
>> When should `resultsSentElsewhere` be returned? Once the result is known?
>> Or
>> once the first takeFromOtherQuestion collects it?
>>
>
> (I haven't implemented this for Go yet, but want to.) AFAICT
> resultsSentElsewhere should be sent once the result is known.
>

I think the answer here is "it doesn't really matter".

When Alice calls Bob.foo(), and Bob tail-calls back to Alice.bar(), Bob
sends the Call to bar() with "send to yourself" *immediately* followed by
the Return for foo() with "take from other question". Eventually Alice
sends a Return for bar(), but Bob doesn't really do anything with this
Return, so it actually doesn't matter when it is sent. That said, the C++
implementation appears to wait for bar() to finish before sending the
Return.


>
>
>> Can takeFromOtherQuestion be used more than once for a single source
>> question?
>>
>
> I would assume that it could be used until Finish message is sent for that
> question, much like other question-based data.  In practice, every call's
> result is held in the answers table until Finish is received.
>

No, it can only be used once.

For languages without garbage collection, it would be annoying for the
protocol to specify that some messages can potentially be shared.


>
>
>> > The `Call` for bar'() has `sendResultsTo` set to `yourself`, with the
>> value being the
>> > question ID originally assigned to the bar() call.
>>
>> What does "the value" refer to here? `yourself` has type `Void`.
>>
>
>> > Vat B receives the `Return` for bar'() and sends a `Return` for bar(),
>> with
>> > `receivedFromYourself` set in place of the results.
>>
>> `receivedFromYourself` does not appear anywhere else in the spec.
>>
>
> I think this whole example is stale and probably needs another draft.
>

Yeah. I must have had an earlier version where the child call specified its
parent, rather than the parent return specifying the child.


>
>> [Return.releaseParamCaps]
>> > If true, all capabilities that were in the params should be considered
>> released.
>>
>> Just to be sure: as if the sender had sent a release message for each one
>> with `count=1`?
>>
>
> (I might be wrong on this point, it's been a while since I've looked.  The
> docs should probably spell this out.)  Usually.  The list of CapDescriptors
> in a Payload could point to the same capability multiple times.  A release
> message of count=1 per CapDescriptor is a more accurate way of phrasing
> this.
>

Correct.


>
>
>> [Payload]
>> Why is it not possible to send exceptions in payloads? Should I export
>> each
>> broken capability as an export and then immediately send a Resolve for
>> each
>> one, resolving it to an exception?
>>
>
> Payload is only used for parameters and results.  It doesn't make sense
> for parameters to be an exception, and results is inside a union where you
> could specify an exception that is an alternative.  I'm not sure I
> understand the use-case where you are sending a broken capability.
>

Correct that Payload i

Re: [capnproto] Concurrency model in RPC protocol

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
On Wed, Jul 19, 2017 at 10:26 PM, Ross Light  wrote:

> So in this old thread
> ,
> it's stated that the "call is received" event requires calling into
> application code.  From an implementation standpoint, this is declaring
> that receiving a call in the RPC system is a critical section that involves
> crossing over into application code boundary, which may try to acquire the
> same mutex (by making a call on the connection).  While you can postpone
> this problem by adding queueing, I'm already a little nervous about how
> much queueing is required by the protocol.
>
> I'd like to suggest that the E model be considered: each capability is a
> separate single queue.  Instead of "call A is received happens before call
> B is received", "call A returns happens before call B starts".  The reason
> this simplifies implementation is that because it prescribes what ought to
> happen in the critical section (enqueue or throw an overload exception),
> then no application needs to be invoked in the critical section.  This
> might not be a problem for the C++ implementation right now, but once
> fibers are involved, I think it would become one.
>

If I understand what you're proposing correctly, then another way of saying
it is: "A call must return a result before the next call on the same object
can begin."

(To be clear, this certainly isn't "the E model". Strictly speaking, in E,
calls don't "return" anything, but they do eventually resolve a promise,
and there's no requirement that that resolution happens before the next
call can proceed.)

I don't think this proposal would work. You're saying that if a method call
foo() wants to allow other methods to be called before foo() produces a
result, then foo() must produce a result via a callback rather than via
return. foo() would need to take, as one of its parameters, a capability
which it calls with the eventual results.

This would, of course, lead to all the usual "callback hell" problems we
see in async I/O. Over time, we've reached a consensus that the right way
to solve "callback hell" is to use promises. Promises allow us to express
the eventual results of an async function as a return value, which is much
more natural than using callbacks. Also, it makes it much clearer how
errors are to be propagated, and makes it harder to accidentally leak
errors.

So the next logical step would be to introduce a notion of promises into
Cap'n Proto interface specs. Let methods return promises instead of raw
values, and then they are free to interleave as necessary.

But then what would happen? Probably, everyone would declare all of their
methods to return promises, to give themselves flexibility to change their
implementation in the future if needed. In fact, there'd be even more
temptation to do this then there is in C++ and Javascript today, because
the client of a Cap'n Proto interface already has to treat the result as a
promise for latency and pipelining reasons. So, making a method return a
promise would create no new inconvenience on the client side (because
clients already have to deal with promises either way), and it would create
no inconvenience on the server side (because you can return an
immediately-resolved promise basically just as easily as returning a
value). So, everyone would declare every method to return a promise.

The next step, then, would be to say: OK, since everyone is declaring all
returns to be promises anyway, we're just going to say that it's implied.
All methods actually return promises. You don't need to declare it.

And then... we'd be back to *exactly* where we are today.

Today, the right way to think about Cap'n Proto methods is to say that all
methods return promises.
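
(That's already how the generated C++ API reads on the client side; Foo and
foo() here are made-up names, `cap` is a Foo::Client, and `waitScope` is a
kj::WaitScope from the event loop:)

// Every call's result is a promise from the client's point of view: send()
// returns a RemotePromise, which can be awaited, chained, or pipelined.
auto promise = cap.fooRequest().send();     // RemotePromise<Foo::FooResults>
auto response = promise.wait(waitScope);    // or .then(...), or pipeline on it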


> I believe that the same properties can be obtained by pushing this into
> interface definitions: if an interface really wants to declare that
> operations can happen in parallel, then there can be a root capability that
> creates a capability for each operation.  Then the RPC subsystem can know
> much more about how much work is being scheduled.
>
> I realize this would be a big change, but I can't see a good way to avoid
> this problem in any implementation of the RPC protocol that tries to use a
> connection concurrently.  Effectively, this forces all implementations to
> be single-threaded AFAICT.  Let me know what you think.
>

Cap'n Proto technically only requires that each *object* is
single-threaded. It's based on the actor model, which is actually similar
to the CSP model on which Go's concurrency is based. In fact, maybe the
problem is that we're trying to map Cap'n Proto to the wrong idioms in Go.

Imagine this design: Instead of mapping capnp methods to Go functions, we
map them to messages on a Go channel. Each object reads from a channel.
Each message on the channel initiates a call, and specifies a response
channel to which the call results should be sent when they are ready.

This

Re: [capnproto] Cap'n Proto Deterministic Encoding

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
Hi Ghadi,

As Ian points out, there is a specification for canonicalization, and this
spec is implemented at least in the C++ library. But, you have to invoke it
explicitly.

The details of determinism is Cap'n Proto are completely different from
Protobuf, due to differences in the encoding. Cap'n Proto does not
serialize fields as key/value pairs -- the fields of a struct are always in
the same order -- and it does not currently support maps. However, Cap'n
Proto uses pointers, which means that whole objects can be ordered
arbitrarily with respect to each other; protobuf has nothing like that.

-Kenton

On Thu, Jul 20, 2017 at 7:41 AM, Ghadi Shayban  wrote:

> Does Cap'n Proto guarantee that generated payloads are byte-for-byte
> identical when generated from different systems?
>
> A prerequisite for this is ensuring that key serialization order is
> consistent when serializing map/dicts. There are other sources of
> nondeterminism though. [1]   My use case is storing payloads in a Merkle
> tree, where each node is SHA2'd, and may be generated from different
> languages.
>
>
> [1] protobuf determinism https://stackoverflow.com/questions/31208725/is-
> protocol-buffer-serialization-output-fully-deterministic
>
>



Re: [capnproto] Some questions about the RPC spec

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
On Thu, Jul 20, 2017 at 4:02 AM, Thomas Leonard  wrote:

> I thought that must be the reason originally, but it seems that
> takeFromOtherQuestion requires sharing even if it can only be used
> once, because the struct is held by the original answer (for
> pipelining) and also by the question that took it.
>

True, but this is a restricted case, and may still allow the implementation
more freedom than general sharing would. For example, for pipelining
purposes, technically the implementation only needs to keep the
capabilities around, along with remembering their pointer paths. It doesn't
otherwise need to remember the content of the response.


> >> If you're implementing level 1 (two-party), then really the only place
> >> where this applies is when you receive a capability that the receiver
> hosts
> >> as part of a return or resolve after you have made calls on the promised
> >> capability.  This implies that the RPC system needs to keep track of
> which
> >> parts of the answer have had calls made on them.  When this occurs, the
> >> receiver gives the application code an embargoed client, and then sends
> a
> >> Disembargo with senderLoopback set.  It releases the embargo once the
> same
> >> disembargo ID is returned with receiverLoopback set.
>
> Maybe I got this bit wrong. I attached the "used" flags to the
> question, but maybe I should be tagging the reference to the question
> instead. Can different references to the same question need different
> disembargoes? e.g. should forwarding a message mark the promised
> answer as needing a disembargo or not?
>

Sorry, I don't understand your question here.


> > Example:
>
> That example is straight-forward, but there are more complex cases
> that are unclear to me. Here's one I'm not sure about:
>
> There are two vats, Client and Server, each of which starts with a
> reference to the other's bootstrap service. All calls either return a
> single capability (field-name `x`) or Void.
>
> 1. Client makes a call, q1, on the server's bootstrap object, getting
> a promise a=q1.x
> 2. Client makes another call, q2, on the same target, getting promise
> b=q2.x.
> 3. Server asks one question, q3 (c=q3.x)
> 4. Client responds to q3 with a (the unresolved promised cap from its q1)
> 5. Server responds to q1 with client_bs (the client's bootstrap
> service, resolved)
> 6. Server responds to q2 with c (q3.x, still unresolved)
> 7. Client makes call m1 on b (sent to q2)
> 8. Client receives response a=client_bs (no embargo needed)
> 9. Client receives response b=q3.x, which is a.
> This was q1.x at the time q3 returned, but client_bs now. Which
> should it use?
> If client_bs, it embargoes the target due to m1.
> If not, b now points at the returned q1, which seems odd.
>
> 10. Client makes call m2 on b (which is then held at the embargo).
> 11. Server receives response that c=q1.x (which is client_bs).
> 12. Server receives m1 and forwards it to q3.x.
> 13. Server sends disembargo response back to client.
> 14. Client receives m1 and forwards it to q1 (the resolution it gave for
> q3).
> 15. Client disembargoes b and sends m2 to client_bs.
>

Nice example!

It looks like the C++ implementation today will decide b = q1.x, and never
allow it to further resolve to client_bs. This "works" but is clearly
suboptimal.

For a correct solution, we need to recognize that Disembargo messages can
"bounce" multiple times:

The disembargo sent in step 9 has a final destination of client_bs.

In order to get there, it has to bounce back and forth between the client
and server twice:
* The client sends it towards q2.x.
* The server, recognizing that it resolved q2.x to q3.x, reflects the
embargo towards q3.x.
* The client, recognizing that it resolved q3.x to q1.x, reflects back to
the server again.
* The server, recognizing that q1.x resolved to client_bs, finally reflects
back to client_bs.

This gives m1 enough time to arrive before the disembargo.

It looks like this is not implemented correctly in C++ currently. It
appears the C++ implementation ignores disembargo.messageTarget in the case
that the Disembargo has type `receiverLoopback`. This is incorrect -- it
needs to verify that the embargo has reached its final destination, not an
intermediate promise. (However, it is "saved" by the suboptimal behavior
mentioned above.)

On another note, you say you found this with AFL, which is amazing. Could
your fuzzing strategy be applied to the C++ implementation as well?

-Kenton



Re: [capnproto] Some questions about the RPC spec

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
Will respond in more detail later, but:

On Thu, Jul 20, 2017 at 1:05 PM, Thomas Leonard  wrote:

> OK, I'll try to match the C++ behaviour for now.
>

I don't think there's any need to. The difference in behavior is entirely
on the side that initiates the embargoes, and is only to protect invariants
on that end. So you don't need cooperation from the other end to implement
correct behavior now.

-Kenton



Re: [capnproto] [C++] Fastest way to deserialize multiple messages in a file

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
Hi Farid,

Try using mmap() (disclaimer: haven't tried compiling this):

// Needs <sys/mman.h>, <sys/stat.h>, kj/debug.h, and capnp/serialize.h.
struct stat stats;
KJ_SYSCALL(fstat(fd, &stats));
size_t size = stats.st_size;
// Map the whole file read-only; pages are loaded lazily by the kernel.
void* data = mmap(nullptr, size, PROT_READ, MAP_PRIVATE, fd, 0);

if (data == MAP_FAILED) {
  KJ_FAIL_SYSCALL("mmap", errno);
}

KJ_DEFER(KJ_SYSCALL(munmap(data, size)) { break; });

KJ_SYSCALL(madvise(data, size, MADV_SEQUENTIAL));


kj::ArrayPtr<const capnp::word> words(
    reinterpret_cast<const capnp::word*>(data),
    size / sizeof(capnp::word));

while (words.size() > 0) {
  // Each FlatArrayMessageReader parses one message in place, zero-copy.
  capnp::FlatArrayMessageReader message(words);
  Message::Reader chunk = message.getRoot<Message>();
  count++;
  words = kj::arrayPtr(message.getEnd(), words.end());
}


(On Windows you'll need to use CreateFileMapping() and MapViewOfFile() or
whatever.)

-Kenton

On Thu, Jul 20, 2017 at 2:11 PM, Farid Zakaria 
wrote:

> Looking for some guidance on possibly the fastest way to read multiple
> messages in a file that has multiple MessageRoots
>
> So far I have this written:
>
> capnp::MallocMessageBuilder messageBuilder;
> //What is a good size for our words? As long as its smaller?
> capnp::word scratch[1024];
> kj::ArrayPtr<capnp::word> scratchSpace(scratch);
> kj::FdInputStream stream(fd);
> kj::BufferedInputStreamWrapper buff(stream);
>
> unsigned long count = 0;
> while (buff.tryGetReadBuffer().size() != 0) {
> capnp::InputStreamMessageReader message(buff, capnp::ReaderOptions(), 
> scratchSpace);
> Message::Reader chunk = message.getRoot<Message>();
> count++;
> }
>
>
> I tried the helper methods first, but they seem too slow I think without the 
> BufferedInputStreamWrapper.
>
>
> I don't do much C++ so I appreciate any help :)
>
> Btw, Is there an IRC chat or something ?
>
>
>



Re: [capnproto] [C++] Fastest way to deserialize multiple messages in a file

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
On Thu, Jul 20, 2017 at 3:40 PM, Farid Zakaria 
 wrote:

> Is MMAP the only way to randomly seek to an offset in the file?
>
> I can't seem to find a way to do that with kj::FdInputStream ?
>
>
> I'm trying to create an index of the elements in the file.
>

kj::InputStream doesn't assume the stream is seekable and doesn't track the
current location. You could create a custom wrapper around InputStream or
around BufferedInputStream that remembers how many bytes have been read.
You can also lseek() the underlying fd directly, though of course you'll
have to discard any buffers after that.
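
A rough sketch of such a wrapper (untested; tryRead() is the only required
override on kj::InputStream, the rest is illustration):

#include <kj/io.h>

// Wraps another InputStream and remembers how many bytes have been read, so
// the caller can record offsets for an index.
class CountingInputStream final: public kj::InputStream {
public:
  explicit CountingInputStream(kj::InputStream& inner): inner(inner) {}

  size_t tryRead(void* buffer, size_t minBytes, size_t maxBytes) override {
    size_t n = inner.tryRead(buffer, minBytes, maxBytes);
    offset += n;
    return n;
  }

  uint64_t getOffset() const { return offset; }

private:
  kj::InputStream& inner;
  uint64_t offset = 0;
};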

But indeed, if you use mmap() this will all be a lot easier, and faster. I
highly recommend using mmap() here.

On Thu, Jul 20, 2017 at 4:14 PM, Farid Zakaria 
wrote:

> One more question =)
>
> I need to copy the root from a FdStream to a vector
> Do I need to copy it into a MallocMessageBuilder ?
>

With InputStreamMessageReader, yes. You have to destroy the
InputStreamMessageReader before you can read the next message, and that
invalidates the root Reader and all other Readers pointing into it.

However, with the mmap strategy, you don't need to delete the
FlatArrayMessageReader before reading the next message. So, you can
allocate them on the heap and put them into your vector, and then all the
Readers pointing into them remain valid, as long as the
FlatArrayMessageReaders exist and the memory is still mapped. (In this case
you should remove the madvise() line since you plan to go back and randomly
access the data later.)

Again, I *highly* recommend this strategy instead of using a stream. With
the mmap strategy, not only do you avoid copying into a builder, but you
avoid copying the underlying data when you read it. The operating system
causes the memory addresses to point directly at its in-memory cache of the
file data. If multiple programs mmap() the same file, they share the
memory, rather than creating their own copies. Moreover, the operating
system is free to evict the data from memory and then load it again later
on-demand. There are tons of advantages to this approach and it is exactly
what Cap'n Proto is designed to enable.

-Kenton



Re: [capnproto] [C++] Fastest way to deserialize multiple messages in a file

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
On Thu, Jul 20, 2017 at 5:25 PM, Farid Zakaria 
wrote:

> All the items in my message array seem to be always pointing to the last
> item read.
> I'm not sure what I'm doing wrong here.
>
>
> auto messages = std::make_unique<std::deque<Message::Reader*>>(10);
>
> while (words.size() > 0) {
> capnp::FlatArrayMessageReader * reader = new 
> capnp::FlatArrayMessageReader(words);
> Message::Reader message = reader->getRoot<Message>();
> words = kj::arrayPtr(message->getEnd(), words.end());
> messages->at(index++) = & message;
> }
>
There are multiple problems with this code. They are C++ usage errors, not
specifically Cap'n Proto related.

messages->at(index++) = & message;

First, on this line you are taking the address of a temporary stack object
(`message`). That object then goes out-of-scope, so this pointer is no
longer valid. But you are storing the pointer in a long-lived object. You
should make your deque contain instances of `Message::Reader`, not
`Message::Reader*`.

Second, on this same line, there's no guarantee that `index` is a valid
index into your deque. It looks like you're allocating a 10-element deque
but if there are more than 10 messages you're running off the end of the
deque.

Third, though it wouldn't prevent the code from functioning, it has a
memory leak:

capnp::FlatArrayMessageReader * reader = new
capnp::FlatArrayMessageReader(words);

You aren't ever deleting this object that you created with `new`.
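
For illustration, a corrected loop might look roughly like this (untested
sketch, reusing the `words` array from the mmap example earlier in this
thread; it keeps each heap-allocated FlatArrayMessageReader alive alongside
its root Reader):

#include <vector>
#include <kj/memory.h>
#include <capnp/serialize.h>

std::vector<kj::Own<capnp::FlatArrayMessageReader>> readers;
std::vector<Message::Reader> roots;

while (words.size() > 0) {
  auto reader = kj::heap<capnp::FlatArrayMessageReader>(words);
  roots.push_back(reader->getRoot<Message>());
  words = kj::arrayPtr(reader->getEnd(), words.end());
  readers.push_back(kj::mv(reader));  // keeps the roots valid
}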

-Kenton


>
> On Thursday, July 20, 2017 at 4:35:29 PM UTC-7, Kenton Varda wrote:
>>
>> On Thu, Jul 20, 2017 at 3:40 PM, Farid Zakaria 
>>  wrote:
>>
>>> Is MMAP the only way to randomly seek to an offset in the file?
>>>
>>> I can't seem to find a way to do that with kj::FdInputStream ?
>>>
>>>
>>> I'm trying to create an index of the elements in the file.
>>>
>>
>> kj::InputStream doesn't assume the stream is seekable and doesn't track
>> the current location. You could create a custom wrapper around InputStream
>> or around BufferedInputStream that remembers how many bytes have been read.
>> You can also lseek() the underlying fd directly, though of course you'll
>> have to discard any buffers after that.
>>
>> But indeed, if you use mmap() this will all be a lot easier, and faster.
>> I highly recommend using mmap() here.
>>
>> On Thu, Jul 20, 2017 at 4:14 PM, Farid Zakaria 
>> wrote:
>>
>>> One more question =)
>>>
>>> I need to copy the root from a FdStream to a vector
>>> Do I need to copy it into a MallocMessageBuilder ?
>>>
>>
>> With InputStreamMessageReader, yes. You have to destroy the
>> InputStreamMessageReader before you can read the next message, and that
>> invalidates the root Reader and all other Readers pointing into it.
>>
>> However, with the mmap strategy, you don't need to delete the
>> FlatArrayMessageReader before reading the next message. So, you can
>> allocate them on the heap and put them into your vector, and then all the
>> Readers pointing into them remain valid, as long as the
>> FlatArrayMessageReaders exist and the memory is still mapped. (In this case
>> you should remove the madvise() line since you plan to go back and randomly
>> access the data later.)
>>
>> Again, I *highly* recommend this strategy instead of using a stream. With
>> the mmap strategy, not only do you avoid copying into a builder, but you
>> avoid copying the underlying data when you read it. The operating system
>> causes the memory addresses to point directly at its in-memory cache of the
>> file data. If multiple programs mmap() the same file, they share the
>> memory, rather than creating their own copies. Moreover, the operating
>> system is free to evict the data from memory and then load it again later
>> on-demand. There are tons of advantages to this approach and it is exactly
>> what Cap'n Proto is designed to enable.
>>
>> -Kenton
>>
>



Re: [capnproto] [C++] Fastest way to deserialize multiple messages in a file

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
On Thu, Jul 20, 2017 at 5:44 PM, Farid Zakaria 
wrote:

> Finally (sorry I keep making separate messages) --
>
> The reason why I was seeking a FdInputStream solution is because it seems
> to be much faster than an MMAP solution.
> Although my file is quite large (10GB) -- memory is not much of a concern.
>

This is very surprising. Can you show your complete code that is faster
with InputStreamMessageReader than with mmap()? Probably there is a problem
in the code that causes the difference.

> How does one copy from InputStreamMessageReader into the
> MallocMessageReader ?
>

I assume you mean MallocMessageBuilder. You would do:

builder.setRoot(reader.getRoot<Message>());

-Kenton


>
> On Thursday, July 20, 2017 at 5:30:30 PM UTC-7, Farid Zakaria wrote:
>>
>> I had to actually store the FlatArrayMessageReader rather than the
>> Message::Reader for it to work ?
>> I think i'm not grokking why that matters -- I thought
>> FlatArrayMessageReader is just a pointer into the MMAP file.
>> Why would it matter if it cast it to the reader ?
>>
>>
>> hmm.
>>
>> On Thursday, July 20, 2017 at 5:25:00 PM UTC-7, Farid Zakaria wrote:
>>>
>>> All the items in my message array seem to be always pointing to the last
>>> item read.
>>> I'm not sure what I'm doing wrong here.
>>>
>>>
>>> auto messages = std::make_unique >(10);
>>>
>>> while (words.size() > 0) {
>>> capnp::FlatArrayMessageReader * reader = new 
>>> capnp::FlatArrayMessageReader(words);
>>> Message::Reader message = reader->getRoot();
>>> words = kj::arrayPtr(message->getEnd(), words.end());
>>> messages->at(index++) = & message;
>>> }
>>>
>>>
>>> On Thursday, July 20, 2017 at 4:35:29 PM UTC-7, Kenton Varda wrote:

 On Thu, Jul 20, 2017 at 3:40 PM, Farid Zakaria 
  wrote:

> Is MMAP the only way to randomly seek to an offset in the file?
>
> I can't seem to find a way to do that with kj::FdInputStream ?
>
>
> I'm trying to create an index of the elements in the file.
>

 kj::InputStream doesn't assume the stream is seekable and doesn't track
 the current location. You could create a custom wrapper around InputStream
 or around BufferedInputStream that remembers how many bytes have been read.
 You can also lseek() the underlying fd directly, though of course you'll
 have to discard any buffers after that.

 But indeed, if you use mmap() this will all be a lot easier, and
 faster. I highly recommend using mmap() here.

 On Thu, Jul 20, 2017 at 4:14 PM, Farid Zakaria 
 wrote:

> One more question =)
>
> I need to copy the root from a FdStream to a vector
> Do I need to copy it into a MallocMessageBuilder ?
>

 With InputStreamMessageReader, yes. You have to destroy the
 InputStreamMessageReader before you can read the next message, and that
 invalidates the root Reader and all other Readers pointing into it.

 However, with the mmap strategy, you don't need to delete the
 FlatArrayMessageReader before reading the next message. So, you can
 allocate them on the heap and put them into your vector, and then all the
 Readers pointing into them remain valid, as long as the
 FlatArrayMessageReaders exist and the memory is still mapped. (In this case
 you should remove the madvise() line since you plan to go back and randomly
 access the data later.)

 Again, I *highly* recommend this strategy instead of using a stream.
 With the mmap strategy, not only do you avoid copying into a builder, but
 you avoid copying the underlying data when you read it. The operating
 system causes the memory addresses to point directly at its in-memory cache
 of the file data. If multiple programs mmap() the same file, they share the
 memory, rather than creating their own copies. Moreover, the operating
 system is free to evict the data from memory and then load it again later
 on-demand. There are tons of advantages to this approach and it is exactly
 what Cap'n Proto is designed to enable.

 -Kenton

>



Re: [capnproto] Some questions about the RPC spec

2017-07-20 Thread 'Kenton Varda' via Cap'n Proto
On Thu, Jul 20, 2017 at 1:05 PM, Thomas Leonard  wrote:

> I did start off trying to implement it that way, but then I realised
> that questions don't usually hang around long anyway, so it didn't
> seem worth the effort.
>

Yes, that's probably the right decision.


> >> Maybe I got this bit wrong. I attached the "used" flags to the
> >> question, but maybe I should be tagging the reference to the question
> >> instead. Can different references to the same question need different
> >> disembargoes? e.g. should forwarding a message mark the promised
> >> answer as needing a disembargo or not?
> >
> > Sorry, I don't understand your question here.
>
> Maybe it doesn't make sense, or only with my implementation, but it
> seems we have two objects for a question/export:
>
> - a proxy that always sends to the remote peer
> - a switchable proxy that forwards to the previous object until the
> question returns, and then sends to the new target (possibly after a
> disembargo)
>
> I was just wondering which proxy should track whether it has been used
> (and, therefore, whether it needs a disembargo).
>
> If we had the implementor's guide that was mentioned earlier, it could
> probably cover this. My current implementation muddles these two up,
> which is why it's delivering things out of order, so my question is
> probably muddled up too. I'll need to think about this a bit more.
>

OK, makes sense.


> > Nice example!
> >
> > It looks like the C++ implementation today will decide b = q1.x, and
> never
> > allow it to further resolve to client_bs. This "works" but is clearly
> > suboptimal.
> >
> > For a correct solution, we need to recognize that Disembargo messages can
> > "bounce" multiple times:
>
> Does it alternate between being a disembargo request and a disembargo
> response as this happens?
>

It alternates between `senderLoopback` and `receiverLoopback`, yes.


> Does the 3-vat case complicate things?
>

Always. :) (But I haven't thought it through lately...)

-Kenton


> > On another note, you say you found this with AFL, which is amazing. Could
> > your fuzzing strategy be applied to the C++ implementation as well?
>
> Maybe. Here's how it works:
>
> To simplify things, my OCaml capnp-rpc library is in two parts. One
> provides the RPC logic over abstract message types, and the other
> provides an implementation using the Cap'n Proto serialisation for the
> messages. Most of the unit-tests check the core logic directly, using
> a simpler message type where a payload is just a test string and an
> array of capability pointers. The fuzz tests use a mutable struct with
> things useful for checking for violations.
>
> The fuzz tests set up some vats (two or three) in a single process and
> then have them perform operations based on input from the fuzzer.
> Each step selects one vat and performs a random (fuzzer-chosen)
> operation out of:
>
> 1. Request a bootstrap capability from a random peer.
> 2. Handle one message on the incoming queue.
> 3. Call a random capability, passing randomly-selected capabilities as
> arguments.
> 4. Finish a random question.
> 5. Release a random capability.
> 6. Add a capability to a newly-created local service.
> 7. Answer a random question, passing randomly-selected capabilities as
> the response.
>
> When it runs out of input data from the fuzzer it releases all
> capabilities, answers all questions and allows the system to become
> idle.
>
> The fuzz tests include in the call's payload contents a sequence
> number and a (mutable) struct containing the source reference's
> counters:
>
> type cap_ref_counters = {
>   mutable next_to_send : int;
>   mutable next_expected : int;
> }
>
> When the message arrives, the target service checks that the counter
> in the content matches the current value of `next_expected` and
> increments it.
> So, it should always detect if messages arrive out of order.
>
> Another way it takes advantage of everything running in one process is
> that it maintains a second reference graph, but one which doesn't use
> CapTP. When it requests a bootstrap capability over CapTP, it also
> returns a direct pointer to the target service. So, it's a copy of the
> reference graph but with all vat-spanning links replaced with direct
> pointers. Then, it checks that every message is delivered to the
> service it would have been delivered to if there were no network in
> the way.
>
> I leave AFL running my binary for a while with the --fuzz option
> (which disables logging to keep things fast).
> When it finds a violation it leaves it in the crash directory. Then I
> run the fuzz binary on it manually without --fuzz, which turns on
> logging and runs a load of sanity checks at each step, as well as
> dumping the state of the system at each step. It also outputs an OCaml
> unit-test, which can be cut-and-pasted into the test-suite. The
> unit-tests look like this, after being cleaned up a bit:
>
> https://github.com/mirage/capnp-rpc/blob/f5a32455c41056eaa40
> b3

Re: [capnproto] Using Cap'n Proto in an embedded environment

2017-07-24 Thread 'Kenton Varda' via Cap'n Proto
Hi Moritz,

If you subclass capnp::MessageBuilder, you can define your own memory
allocation.

However, I think other places in the library will allocate small amounts of
heap memory here and there.

Maybe you could allocate a little bit of static space (e.g. 1MB would
probably be plenty) and turn it into a heap?
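
For what it's worth, a rough sketch of what such a subclass could look like
(untested; allocateSegment() is the real override point, the arena handling
and names are made up for illustration):

#include <capnp/message.h>
#include <kj/debug.h>

// A MessageBuilder that carves segments out of caller-provided storage (e.g.
// a statically allocated, zero-initialized array) instead of the heap.
class ArenaMessageBuilder final: public capnp::MessageBuilder {
public:
  explicit ArenaMessageBuilder(kj::ArrayPtr<capnp::word> arena): arena(arena) {}

  kj::ArrayPtr<capnp::word> allocateSegment(unsigned int minimumSize) override {
    KJ_REQUIRE(minimumSize <= arena.size() - used, "static arena exhausted");
    // Hand out everything that's left; capnp only asks again if it runs out.
    auto segment = arena.slice(used, arena.size());
    used = arena.size();
    return segment;
  }

private:
  kj::ArrayPtr<capnp::word> arena;  // must point at zeroed memory
  size_t used = 0;
};

// Usage: static storage, so no heap allocation for message segments.
// static capnp::word scratch[1024 * 1024 / sizeof(capnp::word)] = {};
// ArenaMessageBuilder builder(kj::arrayPtr(scratch, kj::size(scratch)));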

Alternatively, there is a C implementation of Cap'n Proto which may work
better for embedded use cases:

https://github.com/opensourcerouting/c-capnproto

-Kenton

On Sat, Jul 22, 2017 at 5:37 AM,  wrote:

> Hi,
>
> I would love to use Cap'n Proto in an embedded environment. The main
> problem is that heap allocation is not possible. However, the documentation
> stated that given enough scratch space, this should be possible. Static
> memory is relatively cheap, 4GB, so creating a large statically allocated
> scratch space would be possible. The source mentions the use of "
> MallocMessageBuilder", but there is no detailed information for heap-less
> systems.
>
> Any pointers to experience of using Cap'n Proto in the embedded space?
>
> Best regards,
> Moritz
>
>



Re: [capnproto] Concurrency model in RPC protocol

2017-07-24 Thread 'Kenton Varda' via Cap'n Proto
On Fri, Jul 21, 2017 at 1:52 PM, Ross Light  wrote:

> (Sorry for the long response.  I did not have time to make it shorter.)
>
> I see your point about how "a call returns happens before the next call
> starts" reaches an undesirable state for applications.  I had an inkling
> that this could be the case, but hadn't fully experimented to see the
> results.  However, just on terminology, I'm not sure I agree with your
> assessment that objects are single-threaded: because returns can come back
> in a different order than the calls arrived, this implies some level of
> concurrent (but perhaps not parallel) execution.
>

To clarify, what I meant was that Cap'n Proto is based on the actor model.
In the basic actor model, an actor repeatedly receives messages and, in
response to each message, may update its state and may send additional
messages to other actors (perhaps a reply, perhaps not). These message
handlers are sequential (only one runs at a time) and are non-blocking.

Conceptually, each actor has an internal event loop and only does one thing
at a time. But this doesn't mean the actor model is single-threaded:
multiple actors can be executing simultaneously. Since each only has access
to its own state, there's no need for complex synchronization.

Cap'n Proto extends the model by making request/response pairings explicit,
but it doesn't require that a response be sent before a new request arrives.

Technically, it's common under Cap'n Proto implementations for one actor to
implement multiple objects (where an object is an endpoint for messages,
i.e. what a capability points to) -- or, put another way, multiple objects
may share the same event loop, and thus their event handlers are
serialized. In C++ in particular, currently most (all?) Cap'n Proto
applications are single-threaded, hence the entire process acts like one
actor. But what I'd like to do (in C++) in the future is make it easy to
have multiple actors (and thus multiple threads) in a process, each
possibly handling multiple objects.


> As for your idea of mapping Cap'n Proto methods to messages on Go
> channels: it shuffles the problem around a bit but doesn't escape this
> deadlock issue.  In fact, the first draft I had of the RPC system used a
> lot more channels, but I found it made the control flow hard to reason
> about (but it could still be implemented this way).  Let me give you enough
> background on how Go concurrency works so that we're talking with each
> other.
>
> *Background*
>

Thanks, I understand the issue better now. Let me know if this is correct:

Go's channels don't really map to the actor model in the way I imagined,
because channels in Go are really used for one-way messages, whereas when
you have request/response, it's considered more idiomatic to use a blocking
function call. If you were trying to match the actor model, you would send
a call message over a channel, and include in that call message a new
channel to which the response is to be sent. You'd then use a select block
or a goroutine to wait for those responses at the same time as waiting for
further calls. But that's not how people usually write things in Go,
perhaps because it makes for difficult-to-follow code.

But indeed, it seems awkward to support a threaded model with concurrent
calls while also supporting e-order, since you now need to convince people
to explicitly acknowledge calls.

If you make it explicitly illegal to call back into the RPC system before
acknowledging the current call -- e.g. panicking if they do -- then
programmers ought to notice the mistake quickly.

Alternatively, what if making a new call implicitly acknowledged the
current call? This avoids cognitive overhead and probably produces the
desired behavior?

On Sat, Jul 22, 2017 at 8:58 PM, Ross Light  wrote:

> I have been thinking about this more, and I think I have a solution.
> Instead of making the call to the capability directly in the critical
> section, the connection could have a goroutine that receives calls on a
> buffered channel. Importantly, the send would abort if the queue is full,
> so that it never blocks. The effect would be that any calls would be
> deferred until after the critical section, but they would have the same
> order. While it still introduces a queue, it's only one per connection,
> which is less troublesome to me.
>

Hmm, do calls on separate objects potentially block each other? Note that
normally E-order only applies within a single object; calls on independent
objects need not be ordered. (However, in practice I do think there are
some use cases for defining "object groups" that have mutual e-order, but
this is not the original guarantee that E gave.)

What happens when the queue is full? Do you start rejecting calls? Or do
you stop reading from the connection? That could, of course, also lead to
deadlock, if a string of dependent calls are bouncing back and forth.

I wonder if a missing piece here is some way to apply backpressure on a
si

Re: [capnproto] Generating field numbers automatically

2017-07-25 Thread 'Kenton Varda' via Cap'n Proto
Hi Branislav,

Thanks for the feedback.

This is a frequent request for protobuf and Cap'n Proto. Presumably the
compiler would automatically number the fields in the order they appear.

My concern is that in this mode, it is very easy to accidentally introduce
backwards-incompatibility by inserting a field into the middle of the
struct, or by re-ordering field declarations during refactoring. I think it
would be non-obvious to many developers that the ordering of field
declarations mattered -- many developers assume that fields are identified
on the wire by name, by sending the schema, or by some other magic. Such
accidents likely wouldn't be caught in automated testing since it's unusual
to test compatibility with binaries built with different versions of the
protocol, and they wouldn't be caught in review since reviewers may be
similarly confused about the importance of the field ordering.

Meanwhile I think that if we made field numbers optional, most developers
would in fact omit them, since developers tend to take the path of least
resistance.

OTOH, with field numbers being required, the path of least resistance is
actually pretty robust. Most developers will infer that the numbers are
probably important for something, and that renumbering fields is likely to
be a bad idea, even if they don't know exactly what they are for. In my
experience, it is remarkably uncommon for people to accidentally make
backwards-incompatible changes in Protobuf or Cap'n Proto, which is a great
thing.

-Kenton

On Tue, Jul 25, 2017 at 6:10 AM, Branislav Katreniak 
wrote:

> When prototyping capnproto structs, it feels like a burden to keep the
> compiler happy by assigning valid numbers to all fields.
>
> What about having and option to let the compiler generate the sequence
> numbers automatically if the field number is not assigned to any field?
>
> Flatbuffers IDL works this way.
>
>



Re: [capnproto] Concurrency model in RPC protocol

2017-07-26 Thread 'Kenton Varda' via Cap'n Proto
On Wed, Jul 26, 2017 at 9:16 AM, Ross Light  wrote:
>
> Cap'n Proto extends the model by making request/response pairings
>> explicit, but it doesn't require that a response be sent before a new
>> request arrives.
>>
>
> Good point; I'm not arguing for that restriction.  I'm fine with this
> sequence (which conceptually only requires one actor):
>
> 1. Alice sends Bob foo1()
> 2. Bob starts working on foo1()
> 3. Alice sends Bob foo2().  Bob queues it.
> 4. Alice sends Bob foo3().  Bob queues it.
> 5. Bob finishes foo1() and returns foo1()'s response to Alice
> 6. Bob starts working on foo2()
> 7. Bob finishes foo2() and returns foo2()'s response to Alice
> 8. Bob starts working on foo3()
> 9. Bob finishes foo3() and returns foo3()'s response to Alice
>

In this example, you're saying Bob can't start working on a new request
until after sending a response for the last request. That's what I'm saying
is *not* a constraint imposed by Cap'n Proto.


> Here's the harder sequence (which IIUC, C++ permits.  *If it doesn't*,
> then it simplifies everything.):
>
> 1. Alice sends Bob foo1()
> 2. Bob starts working on foo1().  It's going to do something that will
> take a long time (read as: requires a future), so it acknowledges delivery
> and keeps going.  Bob now has has multiple conceptual actors for the same
> capability, although I can see how this can be also be thought of as a
> single actor receiving request messages and sending response messages.
> 3. Alice sends Bob foo2()
> 4. Bob starts working on foo2().
> 5. foo2() is short, so Bob returns a result to Alice.
> 6. foo1()'s long task completes.  Bob returns foo1()'s result to Alice.
>

This does not create "multiple conceptual actors". I think you may be
mixing up actors with threads. The difference between a (conceptual) thread
and an (conceptual) actor is that a thread follows a call stack (possibly
crossing objects) while an actor follows an object (sending asynchronous
messages to other objects).

In step 2, when Bob initiates "something that will take a long time", in
your threaded approach in Go, he makes a blocking call of some sort. But in
the actor model, blocking calls aren't allowed. Bob would initiate a
long-running operation by sending a message. When the operation completes,
a message is sent back to Bob with the results. In between these messages,
Bob is free to process other messages. The important thing is that only one
message handler is executing in Bob at a time, therefore Bob's state does
not need to be protected by a mutex. However, message handlers cannot block
-- they always complete immediately.

Concretely speaking, in C++, the implementation of Bob.foo() will call some
other function that returns a promise, and then foo() will return a promise
chained off of it. As soon as foo() returns that promise, then a new method
on Bob can be invoked immediately, without waiting for the returned promise
to resolve.
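
Roughly, in code (a sketch with made-up names -- Bob, foo(), the `value`
result field, and startLongOperation() returning a kj::Promise<int> are all
just for illustration):

class BobImpl final: public Bob::Server {
public:
  kj::Promise<void> foo(FooContext context) override {
    // Kick off the long-running work and chain the result onto the promise
    // we return. foo() itself returns right away, so other calls on this
    // object can be delivered in the meantime.
    return startLongOperation().then([context](int result) mutable {
      context.getResults().setValue(result);
    });
  }
};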

This of course suffers from the "function coloring problem" you referenced
earlier. All Cap'n Proto methods are colored red (asynchronous).

I think what the function coloring analogy misses, though, is that
permitting functions to block doesn't really avoid the function-coloring
problem, it only sweeps the problem under the rug. Even in a multi-threaded
program, it is incredibly important to know which functions might block.
Because, in a multi-threaded program, you almost certainly don't want to
call a blocking function while holding a mutex lock. If you do, you risk
blocking not only your own thread, but all other threads that might need to
take that lock. And in the case of bidirectional communication, you risk
deadlock.

This is, I think, exactly the problem you're running into here.

Alternatively, what if making a new call implicitly acknowledged the
>> current call? This avoids cognitive overhead and probably produces the
>> desired behavior?
>>
>
> I don't think this is a good idea, since it seems common to want to start
> off a call (or multiple) before acknowledging delivery.
>

I guess I meant: *Waiting* on results of a sub call should implicitly
acknowledge the super call / unblock concurrent calls. So you could *start*
multiple sub calls while still being protected from concurrent calls, but
as soon as you *wait* on one, you're no longer protected.


>   I thought about this a bit more over the last couple of days and I think
> I have a way out (finally).  Right now, operating on the connection
> acquires a mutex.  I think I need to extend this to be a mutex+condition,
> where the condition is for is-connection-making-call.  When the connection
> makes a call, it marks the is-connection-making-call bit, then plumbs the
> is-in-a-connection-call info through the Context (think of as thread-local
> storage, except explicit).  When the connection acquires the mutex,
> non-send-RPC operations will block on the is-connection-making-call bit to
> be cleared and send-RPC operations will not block.  I've exami

Re: [capnproto] Generating field numbers automatically

2017-07-26 Thread 'Kenton Varda' via Cap'n Proto
Ah, yes, if you're just converting to JSON then the field numbers are
irrelevant.

Conceptually, I would be OK with a way to annotate a struct which is not
allowed to be serialized in capnp format. When such a struct is constructed
in a message, the message would be marked such that attempts to serialize
it would fail. But, it could be used for JSON or similar operations.

Practically speaking, though, I think the invasiveness of such a change may
not be worth the convenience it brings.

I'm open to other opinions on this.

-Kenton

On Tue, Jul 25, 2017 at 11:45 PM, Branislav Katreniak 
wrote:

> Hi Kenton
>
> Thank you for prompt reply.
>
> I agree that requiring field numbers greatly helps to ensure capnproto
> binary format backwards compatibility, and I like the current convenience /
> backwards-compatibility trade-off.
>
> On the other hand, when capnproto is used as an IDL for JSON APIs, the field
> numbers feel like pure noise. It would be nice to have a less verbose
> option for this use case. What do you think about a special struct annotation?
>
>   Brano
>
>
> Kind regards
>  Brano
>
> On Tue, Jul 25, 2017 at 5:56 PM, Kenton Varda 
> wrote:
>
>> Hi Branislav,
>>
>> Thanks for the feedback.
>>
>> This is a frequent request for protobuf and Cap'n Proto. Presumably the
>> compiler would automatically number the fields in the order they appear.
>>
>> My concern is that in this mode, it is very easy to accidentally
>> introduce backwards-incompatibility by inserting a field into the middle of
>> the struct, or by re-ordering field declarations during refactoring. I
>> think it would be non-obvious to many developers that the ordering of field
>> declarations mattered -- many developers assume that fields are identified
>> on the wire by name, by sending the schema, or by some other magic. Such
>> accidents likely wouldn't be caught in automated testing since it's unusual
>> to test compatibility with binaries built with different versions of the
>> protocol, and they wouldn't be caught in review since reviewers may be
>> similarly confused about the importance of the field ordering.
>>
>> Meanwhile I think that if we made field numbers optional, most developers
>> would in fact omit them, since developers tend to take the path of least
>> resistance.
>>
>> OTOH, with field numbers being required, the path of least resistance is
>> actually pretty robust. Most developers will infer that the numbers are
>> probably important for something, and than renumbering fields is likely to
>> be a bad idea, even if they don't know exactly what they are for. In my
>> experience, it is remarkably uncommon for people to accidentally make
>> backwards-incompatible changes in Protobuf or Cap'n Proto, which is a great
>> thing.
>>
>> -Kenton
>>
>> On Tue, Jul 25, 2017 at 6:10 AM, Branislav Katreniak > > wrote:
>>
>>> When prototyping capnproto structs, I feel a bit of burden to make the
>>> compiler happy to assign valid numbers to all fields.
>>>
>>> What about having and option to let the compiler generate the sequence
>>> numbers automatically if the field number is not assigned to any field?
>>>
>>> Flatbuffers IDL works this way.
>>>
>>>
>>
>>
>



Re: [capnproto] JSON decode: fails to distinguish Void in union

2017-07-31 Thread 'Kenton Varda' via Cap'n Proto
Sounds like a bug! Care to file an issue on github?

Thanks,
-Kenton

On Mon, Jul 31, 2017 at 10:53 AM,  wrote:

> Is the Json Decoder supposed to be able to parse Void fields in a union
> member?
>
> Given
> struct TestDistinguishVoidsInUnion {
>   union {
> first @0 :Void;
> second @1 :Void;
>   }
> }
>
> the following test fails (Windows, v0.6.0):
> KJ_TEST("decode voids in union") {
>   MallocMessageBuilder message;
>   auto root = message.initRoot<TestDistinguishVoidsInUnion>();
>
>   JsonCodec json;
>   json.decode("{\"first\":null}", root);
>   KJ_EXPECT(root.isFirst() == true);
>
>   json.decode("{\"second\":null}", root);
>   KJ_EXPECT(root.isSecond() == true); //<-- this fails
> }
>
>
> The encoding seems to work fine:
> KJ_TEST("encode voids in union") {
>   MallocMessageBuilder message;
>   auto root = message.getRoot<TestDistinguishVoidsInUnion>();
>
>   JsonCodec json;
>
>   root.setFirst();
>   auto encoded = json.encode(root);
>   KJ_EXPECT(encoded == "{\"first\":null}");
>
>   root.setSecond();
>   encoded = json.encode(root);
>   KJ_EXPECT(encoded == "{\"second\":null}");
> }
>
>
> Thanks!
> Stijn
>
>



Re: [capnproto] Problem building on Opensuse Tumbleweed

2017-07-31 Thread 'Kenton Varda' via Cap'n Proto
Hi Brian,

It looks like the project you reference is hard-coded to download Cap'n
Proto version 0.5.3. The bug you describe was fixed in 0.5.3.1 and 0.6.0. I
suggest updating the project to the current release, 0.6.1.

-Kenton

On Mon, Jul 31, 2017 at 1:30 PM,  wrote:

> Shalom
>
> I was trying to build this project
>
> https://github.com/thekvs/cpp-serializers
>
> and have run into an error
>
> In file included from /home/brian/build-cpp/external
> /capnproto/src/capnproto/c++/src/capnp/generated-header-support.h:31:0,
>  from /home/brian/build-cpp/external
> /capnproto/src/capnproto/c++/src/capnp/compiler/grammar.capnp.h:7,
>  from /home/brian/build-cpp/external
> /capnproto/src/capnproto/c++/src/capnp/compiler/compiler.h:29,
>  from /home/brian/build-cpp/external
> /capnproto/src/capnproto/c++/src/capnp/compiler/module-loader.h:29,
>  from /home/brian/build-cpp/external
> /capnproto/src/capnproto/c++/src/capnp/compiler/module-loader.c++:22:
> /home/brian/build-cpp/external/capnproto/src/capnproto/c++/src/capnp/layout.h:129:65:
> error: could not convert template argument ‘b’ from ‘bool’ to ‘capnp::Kind’
>  template  struct ElementSizeForType> {
>  ^
> /home/brian/build-cpp/external/capnproto/src/capnproto/c++/src/capnp/layout.h:129:66:
> error: template argument 1 is invalid
>  template  struct ElementSizeForType> {
>   ^~
> make[3]: *** [Makefile:1746: src/capnp/compiler/module-loader.o] Error 1
>
>
> I have several compilers on this machine and am not 100% sure which one
> it's using (the output above doesn't say).
>
> /usr/bin/c++ -v
>
> gives g++ 7.1.1 and I think that's what it is using.  Have you seen this?
> Any suggestions on how to fix it?   Thanks.
>
>
> Brian
> Ebenezer Enterprises - In G-d we trust.
> http://webEbenezer.net
>
>



Re: [capnproto] RPC: simple return values and a "finish" roundtrip

2017-08-08 Thread 'Kenton Varda' via Cap'n Proto
Hi Tomáš,

Indeed, there may be some room for optimization here. It's a little tricky
since *normally* the rule is that Question table entries are allocated and
freed strictly by the caller, hence requiring a Call and a Finish. If we
allow releasing a table entry to happen on the opposite side from
allocation (as we do with the export/import tables) then we have to think
carefully about race conditions.

Perhaps we could add a bool to Return which allows the callee to say: "I
don't need you to send a Finish." The callee would only set this bool in
cases where there are no capabilities at all in the results. It would mark
the table entry as "optimistically freed"* and could release all associated
resources.

The caller would still be allowed to send a Finish message, which it might
do for two reasons:
- Because it sent Finish before it had received Return, in an attempt to
cancel the call.
- Because it doesn't implement the new flag, so always sends Finish.

So the callee should be prepared to receive a Finish for an "optimistically
freed" table entry, in which case it changes the table entry's state to
plain "free". It should also be prepared to receive no Finish, which means
it may later receive a Call that re-allocates the table entry.

Also, of course, the callee may receive promise-pipeline messages
referencing this table entry. This would happen if the caller *expected*
the results to contain a capability, which could be the case even if the
results ultimately did not contain any such entry. In this case the callee
would treat any pipelining attempts as if they were trying to pipeline on a
null capability.

I think this could work, but to be sure I'd have to review all the other
places where question IDs are referenced.

* Technically, it's not necessary for "optimistically freed" and "free" to
be separate states, but at least for common low-numbered table entries this
should be "almost free" to store and may help catch bugs.

-Kenton

On Tue, Aug 8, 2017 at 1:40 AM,  wrote:

> Hi everyone,
>
> I apologize in advance if I have missed this in the docs or other thread,
> but I have not been able to understand why, when a method gets called,
> there are three packets being sent: call (@2), return (@3), finish (@4)
> even if no capabilities are involved (except for the bootstrap). The setup
> I have used is just plain "Hello world" type bootstrap interface like
> "interface Hello { hello @0 (a :Text) -> (a :Text); }" and I saw it in
> Python, C++ and Rust. (Same effect with any numeric call/return type).
> (Observing the network traffic with wireshark.)
>
> Why is there the need for a finish message even when the returned content
> contains no capabilities? (I figure that message may only point to caps
> declared in its capTable.)
>
> When the return payload contains no capability, is it possible to somehow
> reuse the server-side content? (I did not figure out how, wireshark shows
> the data being sent back and forth.) Is anything from the reply actually
> stored server-side after the server sends the return without caps?
>
> Until the client does delete the replied payload (for example because it
> wants to keep the data in capnp without copying), the finish is not sent,
> which might require the server to retain some info on the call (used ID? or
> even the data?), even though I have not found it in the sources.
>
> (I believe it is not because of IDs, as for every client independently, if
> the server receives e.g. an abort (actually a finish (@4)) with an ID that
> does not match a call it has not yet returned, it can be sure the abort
> is meant for an ID it has returned recently and can ignore the abort.)
>
> Note that this is not a big performance problem for my use-case but since
> we are mostly sending simple one-time messages between long-lived caps, it
> seems like a waste.
>
> Thanks for any clarification and all the great work!
> Tomáš Gavenčiak
>
>
>



Re: [capnproto] RPC: simple return values and a "finish" roundtrip

2017-08-11 Thread 'Kenton Varda' via Cap'n Proto
Hi Tomáš,

I think a flag in Return will work better than one in Call because the
library doesn't know whether the caller plans to use pipelining at the time
that the Call is sent, but on the callee end the library definitely knows
when the results don't contain any capabilities.

I think this would be a good optimization to implement.

I would add a field to Return like:

finishNeeded @8 :Bool = true;


Doc comment:

If true, the receiver must send a Finish message for this call in order to
release resources and before reusing the question ID. If false, then a
Finish message is optional; the receiver can choose not to send a Finish
message, in which case it is free to reuse the question ID in a new Call
immediately.

This is useful to implement an optimization: If the call results contain no
capabilities, then there's no need for the callee to retain any state about
the call after the Return message is sent, and therefore no reason for the
caller to send a Finish. Any attempts to pipeline on the original call are
clearly invalid (because there's nothing to pipeline on), and there are no
capabilities that need to be released.


Then update whichever implementations you care about. Note that Python uses
the C++ implementation. rpc.c++ is... pretty complicated, but it might not
be too hard to add this flag. You'll also want to add a test, of course.

Alternatively, file an issue and I'll implement it when I have a chance.

-Kenton

On Wed, Aug 9, 2017 at 9:24 AM, Tomáš Gavenčiak  wrote:

> Hi Kenton,
>
> thanks for the quick reply and clarification! I was not 100% sure the
> server-side result (without capabilities) may not be reused in some way. I
> was thinking about some flag system and I like your solution. (I was
> thinking about a caller-set feature flag in addition but it is likely not
> necessary).
>
> How can I help with specifying and implementing this? I would be happy to
> give it a shot but would be grateful for some guidance. And how does it
> align with any future plans you have for capnp?
>
> All the best,
> Tomáš
>
> On Tue, Aug 8, 2017 at 6:24 PM, Kenton Varda 
> wrote:
>
>> Hi Tomáš,
>>
>> Indeed, there may be some room for optimization here. It's a little
>> tricky since *normally* the rule is that Question table entries are
>> allocated and freed strictly by the caller, hence requiring a Call and a
>> Finish. If we allow releasing a table entry to happen on the opposite side
>> from allocation (as we do with the export/import tables) then we have to
>> think carefully about race conditions.
>>
>> Perhaps we could add a bool to Return which allows the callee to say: "I
>> don't need you to send a Finish." The callee would only set this bool in
>> cases where there are no capabilities at all in the results. It would mark
>> the table entry as "optimistically freed"* and could release all associated
>> resources.
>>
>> The caller would still be allowed to send a Finish message, which it
>> might do for two reasons:
>> - Because it sent Finish before it had received Return, in an attempt to
>> cancel the call.
>> - Because it doesn't implement the new flag, so always sends Finish.
>>
>> So the callee should be prepared to receive a Finish for an
>> "optimistically freed" table entry, in which case it changes the table
>> entry's state to plain "free". It should also be prepared to receive no
>> Finish, which means it may later receive a Call that re-allocates the table
>> entry.
>>
>> Also, of course, the callee may receive promise-pipeline messages
>> referencing this table entry. This would happen if the caller *expected*
>> the results to contain a capability, which could be the case even if the
>> results ultimately did not contain any such entry. In this case the callee
>> would treat any pipelining attempts as if they were trying to pipeline on a
>> null capability.
>>
>> I think this could work, but to be sure I'd have to review all the other
>> places where question IDs are referenced.
>>
>> * Technically, it's not necessary for "optimistically freed" and "free"
>> to be separate states, but at least for common low-numbered table entries
>> this should be "almost free" to store and may help catch bugs.
>>
>> -Kenton
>>
>> On Tue, Aug 8, 2017 at 1:40 AM,  wrote:
>>
>>> Hi everyone,
>>>
>>> I apologize in advance if I have missed this in the docs or other
>>> thread, but I have not been able to understand why, when a method gets
>>> called, there are three packets being sent: call (@2), return (@3), finish
>>> (@4) even if no capabilities are involved (except for the bootstrap). The
>>> setup I have used is just plain "Hello world" type bootstrap interface like
>>> "interface Hello { hello @0 (a :Text) -> (a :Text); }" and I saw it in
>>> Python, C++ and Rust. (Same effect with any numeric call/return type).
>>> (Observing the network traffic with wireshark.)
>>>
>>> Why is there the need for a finish message even when the returned
>>> content contains no capabilities? (I figure that m

Re: [capnproto] Build failures on OSX

2017-08-15 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Brent,

In order to use RPC, you'll need to link against libcapnp-rpc and
libkj-async, in addition to libcapnp and libkj.

Let me know if that helps.

-Kenton

On Tue, Aug 15, 2017 at 12:25 PM, Brent Murphy 
wrote:

> Hi there,
>
> I'm trying to incorporate cap'n proto into a C++ server project. It's my
> first dabble with C++ so I'm struggling a bit and wondered if anyone could
> offer some advice on the error below?
>
> Many thanks,
> Brent
>
> Cmake error:
>
> cmake -H. -Bbuild ; cmake --build build --  -j3
>
> ✗ ✭ ✱
> -- Configuring done
> -- Generating done
> -- Build files have been written to: /Users/burt/Development/go/src/
> github.com/brentmurphy/piper_server/build
> Scanning dependencies of target piper_server
> [ 25%] Building CXX object CMakeFiles/piper_server.dir/src/main.cpp.o
> Apple LLVM version 8.1.0 (clang-802.0.42)
> Target: x86_64-apple-darwin16.7.0
> Thread model: posix
> InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/bin
>  "/Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/bin/clang" -cc1 -triple
> x86_64-apple-macosx10.12.0 -Wdeprecated-objc-isa-usage
> -Werror=deprecated-objc-isa-usage -emit-obj -mrelax-all -disable-free
> -disable-llvm-verifier -discard-value-names -main-file-name main.cpp
> -mrelocation-model pic -pic-level 2 -mthread-model posix -mdisable-fp-elim
> -masm-verbose -munwind-tables -target-cpu penryn -target-linker-version
> 278.4 -v -dwarf-column-info -debugger-tuning=lldb -coverage-file
> /Users/burt/Development/go/src/github.com/brentmurphy/
> piper_server/build/CMakeFiles/piper_server.dir/src/main.cpp.o
> -resource-dir /Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0 -I /usr/local/include
> -I /Users/burt/Development/go/src/github.com/brentmurphy/piper
> -stdlib=libc++ -std=c++11 -fdeprecated-macro -fdebug-compilation-dir
> /Users/burt/Development/go/src/github.com/brentmurphy/piper_server/build
> -ferror-limit 19 -fmessage-length 181 -stack-protector 1 -fblocks
> -fobjc-runtime=macosx-10.12.0 -fencode-extended-block-signature
> -fcxx-exceptions -fexceptions -fmax-type-align=16 -fdiagnostics-show-option
> -fcolor-diagnostics -o CMakeFiles/piper_server.dir/src/main.cpp.o -x c++
> /Users/burt/Development/go/src/github.com/brentmurphy/
> piper_server/src/main.cpp
> clang -cc1 version 8.1.0 (clang-802.0.42) default target
> x86_64-apple-darwin16.7.0
> ignoring nonexistent directory "/usr/include/c++/v1"
> ignoring duplicate directory "/usr/local/include"
>   as it is a non-system directory that duplicates a system directory
> #include "..." search starts here:
> #include <...> search starts here:
>  /Users/burt/Development/go/src/github.com/brentmurphy/piper
>  /Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/bin/../include/c++/v1
>  /usr/local/include
>  /Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/include
>  /Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/include
>  /usr/include
>  /System/Library/Frameworks (framework directory)
>  /Library/Frameworks (framework directory)
> End of search list.
> [ 50%] Linking CXX executable ../bin/piper_server
> Apple LLVM version 8.1.0 (clang-802.0.42)
> Target: x86_64-apple-darwin16.7.0
> Thread model: posix
> InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/bin
>  "/Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/bin/ld" -demangle -lto_library
> /Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/lib/libLTO.dylib -dynamic -arch x86_64
> -macosx_version_min 10.12.0 -o ../bin/piper_server -search_paths_first
> -headerpad_max_install_names CMakeFiles/piper_server.dir/src/main.cpp.o
> CMakeFiles/piper_server.dir/Users/burt/Development/go/src/
> github.com/brentmurphy/piper/piper.capnp.c++.o
> CMakeFiles/piper_server.dir/Users/burt/Development/go/src/
> github.com/brentmurphy/piper/piper.rpc.capnp.c++.o /usr/local/lib/libkj.a
> /usr/local/lib/libcapnp.a -lc++ -lSystem /Applications/Xcode.app/
> Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/
> bin/../lib/clang/8.1.0/lib/darwin/libclang_rt.osx.a
> Undefined symbols for architecture x86_64:
>   
> "capnp::Capability::Client::makeLocalClient(kj::Own&&)",
> referenced from:
>   capnp::Capability::Client::Client void>(kj::Own&&) in main.cpp.o
>   "capnp::Capability::Server::internalUnimplemented(char const*, char
> const*, unsigned long long, unsigned short)", referenced from:
>   
> piper::rpc::Piper::Server::list(capnp::CallContext piper::rpc::Piper::ListResults>) in piper.rpc.capnp.c++.o
>   
> piper::rpc::Piper::Server::load(capnp::CallContext piper::rpc::Piper::LoadResults>) in piper.rpc.capnp.c++.o
>   piper::rpc::Piper::Server::configure(capnp:

Re: [capnproto] Receiving CAPNP Messages over ZeroMQ Multipart Messages with zero-copy?

2017-08-31 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Stephan,

When using UDP, is there a limit on message size? I would think that if
ZeroMQ doesn't allow multi-part messages with UDP, it's because they're
trying to fit each message in a single packet, which would imply that your
messages have to be less than 64k and should probably be kept under the
network MTU which is typically 512-1500 bytes.

If you know your messages will always be small, then you can use the
constructor parameters to MallocMessageBuilder to make sure that the first
segment is always large enough for the whole message. Then, your message
will always be 1 segment and you only need to send that segment.
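
For illustration, a rough sketch of that approach (the 128-word segment size
and the variable names are placeholders, not from this thread):

capnp::MallocMessageBuilder message(128);  // first segment: 128 words
// ... build the message root here ...
auto segments = message.getSegmentsForOutput();
KJ_ASSERT(segments.size() == 1);  // holds as long as the message fits
kj::ArrayPtr<const capnp::byte> bytes = segments[0].asBytes();
// bytes.begin() / bytes.size() are what you hand to the UDP send call.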

-Kenton

On Wed, Aug 30, 2017 at 11:32 PM, Stephan Opfer 
wrote:

> Hi Kenton,
>
> I run into the problem, that I need to use UDP and Multi-Part messages are
> not supported by zeromq while using UDP. The goal of Multi-Part messages
> are to be delivered in an all or nothing principle. This guarantee is
> impossible to hold, according to some guys from the ZeroMQ Mailinglist, if
> you use UDP.
>
> Greetings,
>   Stephan
>


Re: [capnproto] Concurrency model in RPC protocol

2017-09-01 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
The C++ RPC implementation has one, limited, form of backpressure created
specifically for Sandstorm sandboxing purposes: setFlowLimit().

https://github.com/capnproto/capnproto/blob/master/c++/src/capnp/rpc.h#L115

This simple approach works well enough to prevent buggy Sandstorm apps from
filling up the front-end's memory. It can theoretically lead to deadlock,
though, in the case where a recursive call bounces back and forth enough
times to fill the limit, then gets stuck waiting.
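
A minimal sketch of applying that limit, assuming a two-party server where
`stream` is an accepted kj::AsyncIoStream and `bootstrap` is the exported
capability (names are illustrative):

capnp::TwoPartyVatNetwork network(stream, capnp::rpc::twoparty::Side::SERVER);
auto rpc = capnp::makeRpcServer(network, bootstrap);
rpc.setFlowLimit(8 * 1024 * 1024);  // see the setFlowLimit() comment in rpc.h for units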

-Kenton

On Fri, Sep 1, 2017 at 9:26 AM, Ross Light  wrote:

> Just wanted to close this thread off: I think I have what I need to
> unblock Go RPC improvements.  My ramblings on implementation at the end
> didn't make much sense and were more complicated than what's needed.  Don't
> mind me. :)
>
> Time permitting, I'll try to collect my observations about backpressure in
> Cap'n Proto in some sort of sensible documentation.  Perhaps this would be
> a good candidate for some of the non-normative docs of the RPC spec.  I
> agree that being able to apply backpressure to a single capability without
> blocking the whole connection would be a boon.
>
> One thing I'm currently curious about in the C++ implementation: does the
> RPC system provide any backpressure for sending calls to the remote vat?
> AFAICT there's no bound on the EventLoop queue.
>
> On Wed, Jul 26, 2017 at 10:38 AM Kenton Varda 
> wrote:
>
>> On Wed, Jul 26, 2017 at 9:16 AM, Ross Light  wrote:
>>>
>>> Cap'n Proto extends the model by making request/response pairings
 explicit, but it doesn't require that a response be sent before a new
 request arrives.

>>>
>>> Good point; I'm not arguing for that restriction.  I'm fine with this
>>> sequence (which conceptually only requires one actor):
>>>
>>> 1. Alice sends Bob foo1()
>>> 2. Bob starts working on foo1()
>>> 3. Alice sends Bob foo2().  Bob queues it.
>>> 4. Alice sends Bob foo3().  Bob queues it.
>>> 5. Bob finishes foo1() and returns foo1()'s response to Alice
>>> 6. Bob starts working on foo2()
>>> 7. Bob finishes foo2() and returns foo2()'s response to Alice
>>> 8. Bob starts working on foo3()
>>> 9. Bob finishes foo3() and returns foo3()'s response to Alice
>>>
>>
>> In this example, you're saying Bob can't start working on a new request
>> until after sending a response for the last request. That's what I'm saying
>> is *not* a constraint imposed by Cap'n Proto.
>>
>>
>>> Here's the harder sequence (which IIUC, C++ permits.  *If it doesn't*,
>>> then it simplifies everything.):
>>>
>>> 1. Alice sends Bob foo1()
>>> 2. Bob starts working on foo1().  It's going to do something that will
>>> take a long time (read as: requires a future), so it acknowledges delivery
>>> and keeps going.  Bob now has has multiple conceptual actors for the same
>>> capability, although I can see how this can be also be thought of as a
>>> single actor receiving request messages and sending response messages.
>>> 3. Alice sends Bob foo2()
>>> 4. Bob starts working on foo2().
>>> 5. foo2() is short, so Bob returns a result to Alice.
>>> 6. foo1()'s long task completes.  Bob returns foo1()'s result to Alice.
>>>
>>
>> This does not create "multiple conceptual actors". I think you may be
>> mixing up actors with threads. The difference between a (conceptual) thread
>> and an (conceptual) actor is that a thread follows a call stack (possibly
>> crossing objects) while an actor follows an object (sending asynchronous
>> messages to other objects).
>>
>> In step 2, when Bob initiates "something that will take a long time", in
>> your threaded approach in Go, he makes a blocking call of some sort. But in
>> the actor model, blocking calls aren't allowed. Bob would initiate a
>> long-running operation by sending a message. When the operation completes,
>> a message is sent back to Bob with the results. In between these messages,
>> Bob is free to process other messages. The important thing is that only one
>> message handler is executing in Bob at a time, therefore Bob's state does
>> not need to be protected by a mutex. However, message handlers cannot block
>> -- they always complete immediately.
>>
>> Concretely speaking, in C++, the implementation of Bob.foo() will call
>> some other function that returns a promise, and then foo() will return a
>> promise chained off of it. As soon as foo() returns that promise, then a
>> new method on Bob can be invoked immediately, without waiting for the
>> returned promise to resolve.
>>
>> This of course suffers from the "function coloring problem" you
>> referenced earlier. All Cap'n Proto methods are colored red (asynchronous).
>>
>> I think what the function coloring analogy misses, though, is that
>> permitting functions to block doesn't really avoid the function-coloring
>> problem, it only sweeps the problem under the rug. Even in a multi-threaded
>> program, it is incredibly important to know which functions might block.
>> Because, in a multi-threaded program, you almost c

Re: [capnproto] Sending Cap'n Proto via ZeroMQ

2017-09-05 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Stephan,

You need to make sure the kj::Array&lt;capnp::word&gt; returned by
messageToFlatArray() is not destroyed until you're done with it. You can
move it to the heap like:

kj::Array<capnp::word>* arrayPtr = new kj::Array<capnp::word>(kj::mv(wordArray));

Then you can define a free function:

template <typename T>
void freeArray(void *data, void *hint) {
  delete reinterpret_cast<kj::Array<T>*>(hint);
}

Then you can pass these to zmq:

zmq_msg_init_data(&msg, byteArray.begin(), byteArray.size(),
&freeArray<capnp::word>, arrayPtr);

This is all standard C/C++ programming here, not specific to Cap'n Proto
nor KJ. I recommend reading up on C++11 move semantics and RAII, as KJ and
Cap'n Proto use these very heavily, so it'll be hard to understand how to
use them if you aren't comfortable with these topics. Of course, since ZMQ
is a C interface, you need to abandon RAII a bit in this example, creating
a bare pointer and a free function instead...
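
Putting the pieces together, a rough sketch of a zero-copy send helper (the
function name is illustrative; the socket handling mirrors the snippet quoted
below):

void sendMessage(void* socket, capnp::MessageBuilder& builder) {
  auto* arrayPtr = new kj::Array<capnp::word>(capnp::messageToFlatArray(builder));
  auto bytes = arrayPtr->asBytes();

  zmq_msg_t msg;
  // ZeroMQ calls freeArray<capnp::word>() once it is done with the buffer,
  // which deletes the heap-allocated kj::Array.
  zmq_msg_init_data(&msg, bytes.begin(), bytes.size(),
                    &freeArray<capnp::word>, arrayPtr);
  zmq_msg_send(&msg, socket, 0);
  zmq_msg_close(&msg);
}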

-Kenton

On Tue, Sep 5, 2017 at 12:40 AM, Stephan Opfer 
wrote:

> Hi Kenton,
>
> thanks for fishing my message out of the spam filter (was driving crazy to
> write the message a third time :) ).
>
> Currently I have changed my code a little. I send the wordArray now
> directly. That works, as long as I keep the capnproto message in memory,
> until zmq has sent its content. That is, because I try to sent it with
> zero-copy semantic.
>
> So my situation/problem is like this:
>
> zmq_msg_init_data(&msg, byteArray.begin(), byteArray.size(), NULL, NULL);
>
> This is the crucial call to zeromq for creating a zeromq message with 
> zero-copy effort. Here is the API-Reference for this method 
> .
> Short arguments summary:
>
>
>1. The zeromq message to be created
>2. Pointer to the start of the data or content of the message
>3. The size of the data, the pointer is pointing to.
>4. Function-Pointer of the form void(void*,void*). This method will be
>called, when zeromq cleans up the message content. First argument of the
>function is the pointer to the messages content. The second argument is the
>pointer you can pass as 5. argument (see next).
>5. void* that will be the second argument of the function in argument
>4.
>
> The question is: How can I keep the message content, until zeromq cleaned
> it up and how to I recognize that zeromq clean it up.
>
> Passing a shared_ptr as 5th argument does not work, according to the
> zeromq dev mailing list. It also seems that the function (4th argument)
> must be static (not sure about that).
>
> Basically my c++ software engineering skills come to an end here. I think,
> that it must work somehow, but I couldn't find it out, so far. Some
> snippets would be nice... :)
>
> Greetings,
>   Stephan
>
>
> Am Dienstag, 5. September 2017 03:26:11 UTC+2 schrieb Kenton Varda:
>>
>> Hi Stephan,
>>
>> For some reason Google Groups thought your message was spam, but I fished
>> it out of the spam bucket. (It looked like you sent three different
>> versions. I accepted this one and deleted the other two.)
>>
>> I think your problem may be here:
>>
>> auto byteArray = capnp::messageToFlatArray(msgBuilder).asBytes();
>>
>> asBytes() returns an ArrayPtr which points back at the array on which it
>> was called. But in this line, you're calling it on a temporary value (the
>> return value of messageToFlatArray), so you end up with `byteArray` being a
>> dangling pointer (and so its contents will be garbage). You could fix that
>> like this:
>>
>> auto wordArray = capnp::messageToFlatArray(msgBuilder);
>> auto byteArray = wordArray.asBytes();
>>
>> Or like this:
>>
>> auto byteArray = capnp::messageToFlatArray(msgBuilder).releaseAsBytes();
>>
>> -Kenton
>>
>> On Wed, Aug 30, 2017 at 2:36 AM,  wrote:
>>
>>> Hi all,
>>>
>>> like some other people before me, I would like to send Cap'n Proto
>>> Messages via ZeroMQ. Nevertheless I did not manage to make it work, yet.
>>> The exception I receive on the receiver side is:
>>>
>>> Exception catched:  Receiver - src/capnp/serialize.c++:43: failed:
>>> expected array.size() >= offset; Message ends prematurely in segment table.
>>>
>>> *Here is the sending code:*
>>>
>>> // cap'n proto part
>>> ::capnp::MallocMessageBuilder msgBuilder;
>>> discovery_msgs::Beacon::Builder beaconMsgBuilder =
>>> msgBuilder.initRoot<discovery_msgs::Beacon>();
>>> beaconMsgBuilder.setIp(this->wirelessIpAddress);
>>> beaconMsgBuilder.setPort();
>>> beaconMsgBuilder.setUuid(kj::arrayPtr(this->uuid, sizeof(this->uuid)));
>>> auto byteArray = capnp::messageToFlatArray(msgBuilder).asBytes();
>>>
>>> // zmq part
>>>
>>> this->ctx = zmq_ctx_new();
>>> this->socket = zmq_socket(ctx, ZMQ_RADIO);
>>> zmq_connect(this->socket, "udp://224.0.0.1:");
>>> zmq_msg_t msg;
>>> zmq_msg_init_data(&msg, byteArray.begin(), byteArray.size(), NULL, NULL);
>>> zmq_msg_set_group(&msg, "TestMCGroup");
>>> zmq_msg_send(&msg, this->socket, 0);
>>> zmq_msg_close(&msg);
>>>
>>>
>>> *On the receiving side it is:*
>>>
>>> this->ctx 

Re: [capnproto] RPC Level 2 questions - SturdyRefs, restorers, authentication and encryption

2017-09-06 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Thomas,

Sorry I missed this earlier!

So, "level 2" of the RPC protocol / SturdyRefs turn out to be something
that does not make sense to specify as part of the Cap'n Proto
implementation itself. Probably level 2 should be ripped out of the spec
entirely. This is what we learned as we build Sandstorm: SturdyRefs and how
to restore them turned out to be intrinsically tied to the Sandstorm
environment, and attempts to define an "abstract" SturdyRef format not
dependent on Sandstorm did not seem to fit what Sandstorm needed.

To understand this, consider a few different cases:
- A Sandstorm grain (app instance).
- A node in the Blackrock (clustered Sandstorm) infrastructure.
- An application on the general internet that has nothing to do with
Sandstorm.

Now try to answer the question: What does a SturdyRef express, and how does
one restore it?

The answer is totally different depending on the context:

- For a Sandstorm grain, a SturdyRef can be an opaque byte string, which
refers to an object in another grain. The client grain passes the SturdyRef
to the Sandstorm API to restore it. The Sandstorm infrastructure then looks
up the token in its database, verifies that the token belongs to the
requesting grain, finds out to what grain the token points, starts up that
grain, asks that grain for a live ref of the desired capability, and then
returns that to the requesting grain.

- For a Blackrock node, a SturdyRef typically refers to another component
of the Blackrock infrastructure: maybe an object in Blackrock storage (a
graph store of Cap'n Proto objects), or a running container on one of the
worker nodes. Or, it could also refer to something hosted in a grain, or a
totally external capability. See the definition here:
https://github.com/sandstorm-io/blackrock/blob/master/src/blackrock/cluster-rpc.capnp#L67
Notice how the namespace of SturdyRefs as seen by the infrastructure itself
is completely different from the namespace of SturdyRefs seen by apps --
although some kinds of objects can be represented by both. Also notice that
depending on the type of SturdyRef, the process for restoring is different:
for "transient" objects located on a specific machine, the restorer
connects directly to the target, but for stored object, the restorer
connects to "the storage service" which it is introduced to independently
at startup, and for external caps, it connects to "the gateways", etc.

- For the public internet, you probably want a SturdyRef to encode a
hostname and perhaps a pinned certificate list. Maybe it even encodes an
HTTP URL, to which a Cap'n Proto session can be created over WebSocket or
streaming HTTP/2. Additionally, it would encode some sort of object ID,
probably as an AnyPointer. The target host would provide a bootstrap
interface with a restore() method that takes this AnyPointer.

As you can see, in each case the format of a SturdyRef and the procedure
for restoring it is completely different, so much so that it doesn't appear
that any "standard" definition makes sense.

At some point I do want to spec out the "public internet" SturdyRef format
and protocol. But, for now, I think implementations should leave SturdyRefs
up to the application to define.

On a side note, it seems like you were confused a bit by EZ RPC's mechanism
for exporting capabilities by name. We deprecated this in favor of a
singleton bootstrap interface because you can trivially implement the same
thing by defining a bootstrap interface with a "restore(name)" method.
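
For illustration, a minimal sketch of that pattern. The `Restorer` interface
and the `AdderImpl` capability are hypothetical, not part of Cap'n Proto:

// Assumed schema:
//   interface Restorer {
//     restore @0 (name :Text) -> (cap :Capability);
//   }
class RestorerImpl final: public Restorer::Server {
public:
  kj::Promise<void> restore(RestoreContext context) override {
    auto name = context.getParams().getName();
    if (name == "adder") {
      context.getResults().setCap(kj::heap<AdderImpl>());
    } else {
      KJ_FAIL_REQUIRE("no such capability", name);
    }
    return kj::READY_NOW;
  }
};

Serving RestorerImpl as the bootstrap capability gives clients the same
"look up an object by name" behavior described above.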

-Kenton

On Wed, Aug 23, 2017 at 7:48 AM, Thomas Leonard  wrote:

> Hi,
>
> I'm currently trying to implement RPC level 2 (for the OCaml RPC
> implementation - see https://github.com/mirage/capnp-rpc#encryption-and-
> authentication for the current status).
>
> I have some questions...
>
> https://capnproto.org/cxxrpc.html says:
>
> Current Status: As of version 0.4, Cap’n Proto’s C++ RPC implementation is
>> a Level 1 implementation. Persistent capabilities, three-way introductions,
>> and distributed equality are not yet implemented.
>>
>
> But I imagine this is out of date.
>
> The RPC spec says:
>
> How exactly a SturdyRef is restored to a live object is specified along
>> with the SturdyRef definition (i.e. not by rpc.capnp).
>>
>
> and
>
> However, in practice, the ability to restore SturdyRefs is itself a
>> capability that may require going through an authentication process to
>> obtain. Thus, it makes more sense to define a "restorer service" as a full
>> Cap'n Proto interface. If this restorer interface is offered as the vat's
>> bootstrap interface, then this is equivalent to the old arrangement.
>
>
> I imagine this must be some network-realm-wide restorer API, because if
> every SturdyRef has its own restorer API then an RPC implementation won't
> know how to authenticate to it when the user does something like:
>
> liveRef := sturdyRef.getRcvr()
>
> Is this restorer API specified somewhere? The Python docs mention an
> *ez_restore* method and say

Re: [capnproto] Joining Promises

2017-09-08 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Johannes,

Actually I don't think using braced initialization format ever worked, due
to the C++ standard's unfortunate decision that elements of an
std::initializer_list should be const. My example code there is erroneous.

What you actually need to do is something like:

auto builder = kj::heapArrayBuilder<kj::Promise<void>>(2);
builder.add(kj::mv(promise1));
builder.add(kj::mv(promise2));
auto joined = kj::joinPromises(builder.finish());

-Kenton

On Fri, Sep 8, 2017 at 2:20 AM, Johannes Zeppenfeld 
wrote:

> Hi Kenton,
>
> in [1] you give an example of joining two Promises to produce a
> Promise that is fulfilled when both other promises have fulfilled (to
> chain a list of related promises, avoiding a TaskSet and allowing to wait
> on the result).
>
> In Capnp 6.0.1 using g++ 5.4.0 this gives me the following error:
>
> error: no matching function for call to ‘joinPromises( initializer list>)’
>tasks = kj::joinPromises({kj::mv(tasks), kj::mv(newTask)});
>
> /usr/local/include/kj/async.h:312:24: note: candidate: kj::Promise
> kj::joinPromises(kj::Array >&&)
>friend Promise joinPromises(Array>&& promises);
> ^
> /usr/local/include/kj/async.h:312:24: note:   no known conversion for
> argument 1 from ‘’ to
> ‘kj::Array >&&’
>
> Has something changed here to make this no longer possible? Do I have to
> use an ArrayBuilder? Is there some other way to join Promises without
> needing to allocate an Array?
>
> Thanks!
> Johannes
>
>
> [1] https://github.com/capnproto/capnproto/issues/286#
> issuecomment-185975985
>


Re: [capnproto] Joining Promises

2017-09-13 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Johannes,

joinPromises() actually takes ownership of the array (the parameter is
taken by-move), so it doesn't have to allocate a copy internally.
joinPromises() has to store the list of promises somewhere, so I don't
think you can actually do much better than this, unless we wanted to
special-case for certain fixed sizes (2, maybe), but I think that's
probably over-engineering.

Also note that promises in general involve a lot of heap allocation, so the
overhead of allocating this particular array is probably negligible.

I don't like that promises are allocation-heavy, but it's hard to see what
we can do about that without making them harder to use. I think in the
longer run, the introduction of async/await in C++ will help reduce the
number of allocations needed considerably, but we need to wait for the
compilers to support it.

-Kenton

On Wed, Sep 13, 2017 at 3:08 AM, Johannes Zeppenfeld 
wrote:

> Hi Kenton,
>
> thanks for the clarification.
>
> Are there any plans to allow joining of Promises without the need
> for allocating a separate Array? I'd think this is a fairly common
> operation, so avoiding a heap-allocated array would be a welcome
> optimization.
>
> -Johannes
>
> On Friday, September 8, 2017 at 6:34:30 PM UTC+2, Kenton Varda wrote:
>>
>> Hi Johannes,
>>
>> Actually I don't think using braced initialization format ever worked,
>> due to the C++ standard's unfortunate decision that elements of an
>> std::initializer_list should be const. My example code there is erroneous.
>>
>> What you actually need to do is something like:
>>
>> auto builder = kj::heapArrayBuilder<kj::Promise<void>>(2);
>> builder.add(kj::mv(promise1));
>> builder.add(kj::mv(promise2));
>> auto joined = kj::joinPromises(builder.finish());
>>
>> -Kenton
>>
>> On Fri, Sep 8, 2017 at 2:20 AM, Johannes Zeppenfeld 
>> wrote:
>>
>>> Hi Kenton,
>>>
>>> in [1] you give an example of joining two Promises to produce a
>>> Promise that is fulfilled when both other promises have fulfilled (to
>>> chain a list of related promises, avoiding a TaskSet and allowing to wait
>>> on the result).
>>>
>>> In Capnp 6.0.1 using g++ 5.4.0 this gives me the following error:
>>>
>>> error: no matching function for call to ‘joinPromises(>> initializer list>)’
>>>tasks = kj::joinPromises({kj::mv(tasks), kj::mv(newTask)});
>>>
>>> /usr/local/include/kj/async.h:312:24: note: candidate:
>>> kj::Promise kj::joinPromises(kj::Array >&&)
>>>friend Promise joinPromises(Array>&& promises);
>>> ^
>>> /usr/local/include/kj/async.h:312:24: note:   no known conversion for
>>> argument 1 from ‘’ to
>>> ‘kj::Array >&&’
>>>
>>> Has something changed here to make this no longer possible? Do I have to
>>> use an ArrayBuilder? Is there some other way to join Promises without
>>> needing to allocate an Array?
>>>
>>> Thanks!
>>> Johannes
>>>
>>>
>>> [1] https://github.com/capnproto/capnproto/issues/286#issuecomme
>>> nt-185975985
>>>


Re: [capnproto] Handling Signals with EzRpcServer

2017-10-02 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Richard,

The EZ classes are really designed for the most basic use case. The intent
is that if you're doing anything more complicated, you skip them and use
the underlying APIs.

In particular you'll want to use kj::setupAsyncIo() to get your initial I/O
context, and then use capnp::TwoPartyServer and capnp::TwoPartyClient in
<capnp/rpc-twoparty.h>.

Honestly I should probably deprecate the EZ classes altogether and direct
people to these interfaces instead... it's only a couple extra lines to set
up and then you have full flexibility.
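
For reference, a bare-bones client-side sketch using those underlying APIs
(the socket address and `MyInterface` are placeholders):

#include <capnp/rpc-twoparty.h>
#include <kj/async-io.h>

int main() {
  auto io = kj::setupAsyncIo();
  auto addr = io.provider->getNetwork()
      .parseAddress("unix:/tmp/example.sock").wait(io.waitScope);
  auto connection = addr->connect().wait(io.waitScope);
  capnp::TwoPartyClient client(*connection);
  MyInterface::Client cap = client.bootstrap().castAs<MyInterface>();
  // ... make calls on `cap`, using io.waitScope to wait on the results ...
}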

-Kenton

On Wed, Sep 27, 2017 at 2:01 AM, Richard Petri  wrote:

> Hello,
>
> I'm trying to implement a graceful shutdown of a server if SIGTERM is
> received. I'm new to capnproto, so I'm not sure on how to approach this.
> Looking through the source, the UnixEventPort seems to be a way to catch
> signals, but from what I can see this won't work together with the
> EzRPCServer. For example, if a adapt the simple server example from
> waiting forever like this
>
> kj::NEVER_DONE.wait(waitScope);
>
> and instead wait like this:
>
> kj::UnixEventPort::captureSignal(SIGTERM);
> kj::UnixEventPort evport;
> kj::Promise p = evport.onSignal(SIGTERM).wait(waitScope);
> std::cout << "Shutting down server..." << std::endl;
>
> it appears to catch the signal (the server won't terminate when SIGTERM
> arrives, so the default signal handler isn't called at least), but will
> never handle it / fulfill the promise. I guess I don't understand enough
> about the EventPorts, EventLoops, etc., and how this all works together.
>
> Looking into the source, the AsyncIoContext created by an EzRpcServer
> can access the eventport, but this is hidden.
>
> Is there an easy way to achieve this that I'm missing, or will I have to
> implement my own RpcServer spinoff?
>
> Best Regards,
> Richard
>


[capnproto] Re: Shared Library Name

2017-10-03 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Eric,

The numbers that come after .so are not related to the library's marketing
version. They are instead an indication of ABI compatibility. Cap'n Proto
makes no attempt to be ABI-compatible between releases (because it is very
hard to do so in C++), hence we don't attempt to produce any ABI
compatibility indicator, but rather vary the whole library name. It would
be incorrect for us to move the version number to be after .so.

Cap'n Proto's approach is not unusual -- it's a normal, supported way of
using libtool. Why doesn't Yocto support it?

-Kenton

On Tue, Oct 3, 2017 at 10:02 AM, Schwarz, Eric  wrote:

> Hello Kenton,
>
> would you mind changing the shared library name according to „standard“
> naming convention (libcapnp-0.6.0.so => libcapnp.so.0.6.0)?
> Otherwise, yes, at least Yocto will complain at build time and one needs
> to suppress the warning w/ „INSANE_SKIP = "dev-so"“.
>
> Many thanks
> Eric


Re: [capnproto] Handling Signals with EzRpcServer

2017-10-03 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
On Tue, Oct 3, 2017 at 1:01 PM, Richard Petri  wrote:

> On 10/02/2017 08:45 PM, Kenton Varda wrote:
> > In particular you'll want to use kj::setupAsyncIo() to get your initial
> > I/O context, and then use capnp::TwoPartyServer and
> > capnp::TwoPartyClient in <capnp/rpc-twoparty.h>.
>
> With this hint, I came up with the following:
>
> kj::AsyncIoContext asyncio = kj::setupAsyncIo();
> auto& waitScope = asyncio.waitScope;
> auto& ioprovider = *asyncio.provider;
> auto& network = ioprovider.getNetwork();
> auto addr = network.parseAddress(SOCKET_FILE).wait(waitScope);
> auto listener = addr->listen();
> capnp::TwoPartyServer server(kj::heap<TestInterfaceImpl>());
> auto serverPromise = server.listen(*listener);
> // Run until SIGTERM
> kj::UnixEventPort::captureSignal(SIGTERM);
> asyncio.unixEventPort.onSignal(SIGTERM).wait(waitScope);
> std::cout << "Shutting down server..." << std::endl;
>
> Works as far as I can see, thanks! From what I understand, the server
> will close the socket if the serverPromise will be destroyed?
>

Yes.

That said, consider joining the promises:

asyncio.unixEventPort.onSignal(SIGTERM)
.exclusiveJoin(kj::mv(serverPromise))
.wait(waitScope);

A promise formed by exclusiveJoin() will finish (or throw) as soon as
either of its inputs finishes (or throws), automatically cancelling the
other input.

This way, if serverPromise throws an exception, it will propagate up rather
than sitting in limbo.

In general, a promise that is just sitting without being consumed is always
risky since you won't find out if it throws.

-Kenton



Re: [capnproto] Suggested addition to capnproto.org/install.html Windows Installation instructions

2017-10-03 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hmm, but we shouldn't be telling people they need to run as Administrator
to do things.

Maybe the install prefix should default somewhere else? Harris, any
thoughts?

-Kenton

On Tue, Oct 3, 2017 at 11:40 AM,  wrote:

> 4. Open the “Cap’n Proto” solution in Visual Studio.
>
>
> Leads to an error on point 7 because CMAKE_INSTALL_PREFIX is "C:/Program
> Files/..." by default, which requires administrative permissions to access.
>
> 4. Open the “Cap’n Proto” solution in Visual Studio as administrator.
>>
>
> The above addition hints at the solution to the issue.
>


Re: [capnproto] Cap'n'proto on microcontrollers

2017-10-05 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi David,

I'm not sure, but I suspect you may have a tough time with the RPC
implementation -- and the KJ async framework in general -- in such a
constrained environment. :/

Disabling exceptions should be OK -- everything is designed such that it
should still work (although fatal errors may abort the process). But the
async stuff is pretty malloc-heavy and I don't think it would be easy to
guarantee that memory usage stays under your threshold.

-Kenton

On Thu, Oct 5, 2017 at 12:06 PM, David Ondrušek <
david.ondru...@student.sps-cl.cz> wrote:

> Hi Kenton,
>
> I'm developing an IoT device using the ESP8266 wifi-enabled
> microcontroller. When connected to wifi it has 48kB free RAM.
> Cap'n'proto is useful for my project since message building/reading is a
> static process. So it's easy to check if there is enough free memory to
> process the message. That's important because, while modern microcontroller
> toolchains do compile C++ code, C++ exceptions aren't really supported.
>
> What i'm unsure about is if i should also port over the two party RPC
> implementation. It would make communication a lot easier. But i don't know
> if that's even feasible given the low memory available.
>


Re: [capnproto] capnp::SchemaLoader::CompatibilityChecker

2017-10-16 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Liam,

What kind of negotiation are you imagining, exactly?

Note that just because two schemas are "compatible" as checked by
CompatibilityChecker does not prove that they are compatible at the
application level. So, CompatibilityChecker can't really prove
compatibility -- it can only prove incompatibility, in cases where the
schemas are plainly mismatching.

CompatibilityChecker could be used, for example, in a git commit hook, to
check for changes made to a capnp file that appear to introduce
incompatibilities. But, in this role it would be like a linter -- it can't
catch all bugs, just some obvious ones.

I would be wary of relying on CompatibilityChecker for correctness at
runtime as part of a "negotiation", since it could falsely claim
compatibility between protocols that aren't actually compatible.

With all that said, although CompatibilityChecker isn't directly exposed as
a public API today, you can access its functionality by creating a
SchemaLoader and loading the two schemas you want to check into it (making
sure they have the same type ID). If it detects they are incompatible, this
will throw an exception.
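
For illustration, a small sketch of that check (the function name and the way
the two schema::Node readers are obtained are my own, not an official API):

#include <capnp/schema-loader.h>
#include <kj/exception.h>

// oldNode and newNode are schema::Node readers carrying the same type ID,
// e.g. taken from two versions of a compiled schema.
bool looksCompatible(capnp::schema::Node::Reader oldNode,
                     capnp::schema::Node::Reader newNode) {
  capnp::SchemaLoader loader;
  loader.load(oldNode);
  // Loading a second definition under the same ID triggers the internal
  // compatibility check; an exception means a definite mismatch.
  auto exception = kj::runCatchingExceptions([&]() { loader.load(newNode); });
  return exception == nullptr;
}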

-Kenton

On Sun, Oct 15, 2017 at 10:06 PM, Liam Staskawicz  wrote:

> Greetings,
>
> I'm interested in negotiating/validating the compatibility of data types
> prior to establishing a session between two peers.
>
> I came across capnp::SchemaLoader::CompatibilityChecker and found it to
> be quite similar to & perhaps a good match for what I am interested in but,
> unfortunately, not part of the public capnp api.
>
> Would you recommend reimplementing the parts of CompatibilityChecker
> needed for my application? Is there any interest/value/sense in making it
> (or something like it) public, to allow for reuse instead?
>
> Thank you,
> Liam
>


Re: [capnproto] capnp::SchemaLoader::CompatibilityChecker

2017-10-18 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Liam,

The point is that CompatibilityChecker can catch *some* cases of
incompatibility, but not all -- and it's impossible for it to cover all
cases. So, you could indeed use it as a "safety check", just remember that
there could be false positives (where it reports that the protocols are
compatible, but they actually are not).

The right way to do this is by loading the schemas into SchemaLoader, and
letting it throw an exception if they aren't compatible.

-Kenton

On Tue, Oct 17, 2017 at 5:03 PM, Liam Staskawicz  wrote:

> Hi Kenton,
>
> Thanks for the response. I'm interested in identifying scenarios in which
> changes have been made to a schema that break backward compatibility.
>
> Do you think CompatibilityChecker is the wrong approach for that purpose?
> Or just needs to cover more cases before it can be relied upon in that
> context? Something else altogether?
>
> Thanks,
> Liam
>
>
> On Mon, Oct 16, 2017, at 12:09 PM, Kenton Varda wrote:
>
> Hi Liam,
>
> What kind of negotiation are you imagining, exactly?
>
> Note that just because two schemas are "compatible" as checked by
> CompatibilityChecker does not prove that they are compatible at the
> application level. So, CompatibilityChecker can't really prove
> compatibility -- it can only prove incompatibility, in cases where the
> schemas are plainly mismatching.
>
> CompatibilityChecker could be used, for example, in a git commit hook, to
> check for changes made to a capnp file that appear to introduce
> incompatibilities. But, in this role it would be like a linter -- it can't
> catch all bugs, just some obvious ones.
>
> I would be wary of relying on CompatibilityChecker for correctness at
> runtime as part of a "negotiation", since it could falsely claim
> compatibility between protocols that aren't actually compatible.
>
> With all that said, although CompatibilityChecker isn't directly exposed
> as a public API today, you can access its functionality by creating a
> SchemaLoader and loading the two schemas you want to check into it (making
> sure they have the same type ID). If it detects they are incompatible, this
> will throw an exception.
>
> -Kenton
>
> On Sun, Oct 15, 2017 at 10:06 PM, Liam Staskawicz  wrote:
>
>
> Greetings,
>
> I'm interested in negotiating/validating the compatibility of data types
> prior to establishing a session between two peers.
>
> I came across capnp::SchemaLoader::CompatibilityChecker and found it to
> be quite similar to & perhaps a good match for what I am interested in but,
> unfortunately, not part of the public capnp api.
>
> Would you recommend reimplementing the parts of CompatibilityChecker
> needed for my application? Is there any interest/value/sense in making it
> (or something like it) public, to allow for reuse instead?
>
> Thank you,
> Liam
>
>
>


[capnproto] Memory leak at master since Sep 3

2017-10-23 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi all,

FYI, if you build Cap'n Proto C++ from master, you may have been affected
by a memory leak introduced on September 3rd. The leak causes zero-sized
heap-allocated arrays often not to be freed. The Cap'n Proto RPC library
has been observed to allocate zero-sized arrays frequently.

Although the arrays have zero size, malloc() is required to return a unique
pointer value even for zero-sized allocations. So, each allocation likely
consumes 8 or 16 bytes. It takes quite a while for such leaks to consume
much memory, but a high-traffic long-running service can be affected.

The change that introduced this has been reverted at master.

If you use a release build, you were not affected. Note that we run leak
check analysis using Valgrind as part of pre-release tests, hence this
would have been caught before the next release. We should probably start
running said analysis as part of regular CI builds, so that this doesn't
happen again.

-Kenton



Re: [capnproto] how can i compile cap‘s proto for android?

2017-11-13 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Jaccen,

I build and run the tests on Android before every release. You can see how
I do that here:

https://github.com/capnproto/capnproto/blob/master/super-test.sh#L113-L160

However, this probably isn't what you want to do for an app. What I'm doing
here is basically cross-compiling the tests as a regular old Unix command-line
binary, then running them on an Android root shell. I don't know anything
about building Android apps, so I don't know what you need to do
differently there. I assume you'd still do some cross compiling but I don't
know how libraries are managed, exactly.

-Kenton

On Wed, Nov 1, 2017 at 11:56 PM,  wrote:

> is there any doc ?
>


Re: [capnproto] possible to access MessageBuilder from RootType::Builder?

2017-11-22 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Liam,

Currently there's no way to do this.

But I think it would lead to a different kind of type unsafety anyway: If
you accept a T::Builder, the implication is that it doesn't need to be the
root builder of the message. If someone passed an inner object, or an
orphan, what should happen?

What I'd suggest instead is that you encapsulate MessageBuilder into your
own API, such that people can only construct a MessageBuilder of the right
type. Two ways you might do that:

1) You could define a type `FooMessage` which privately contains a
MallocMessageBuilder and which has a getRoot() method that only returns the
appropriate builder type. Then have your project require a `FooMessage`
instead of a `MessageBuilder`. If you have a lot of root types, you could
of course make this a template.

template <typename T>
class TypedMessage {
public:
  typename T::Builder getRoot() { return inner.getRoot<T>(); }

private:
  capnp::MallocMessageBuilder inner;
};

2) If your framework controls the scope in which the message is supposed to
be constructed (like, your framework calls "up" to the application to
request a message), then it could allocate the MessageBuilder internally
and pass the T::Builder up to be filled in, so the app never sees the
MessageBuilder. This is what Cap'n Proto RPC does, for example. This has
the neat advantage that your framework can play with message allocation
strategies. E.g. Cap'n Proto RPC aims to be able to allocate messages
directly inside a shared memory segment, if using a shared memory
transport, without the application knowing that this is the case.
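
For completeness, a hypothetical usage sketch for the TypedMessage wrapper
from option 1; `MyRoot`, its setter, and the send call are stand-ins, not a
real API:

TypedMessage<MyRoot> msg;
MyRoot::Builder root = msg.getRoot();
root.setSomeField(123);
sendOnChannel(msg);  // your framework's send entry point, assumed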

-Kenton

On Wed, Nov 22, 2017 at 1:10 PM, Liam Staskawicz  wrote:

> Hello,
>
> Part of my current project provides a mechanism to transmit capnp messages
> and I'd like to accept a T::Builder as input, rather than a MessageBuilder
> if possible, to help avoid messages of the incorrect type being sent on a
> channel.
>
> Is it possible to access the MessageBuilder that a T::Builder is
> associated with through the T::Builder? Is there a better way to approach
> this issue?
>
> Thanks!
> Liam
>


Re: [capnproto] About random access for Cap'N proto message

2017-12-05 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Tao,

You can get random access to files on disk by memory mapping the file. In
Java, you would use FileChannel.map() to get a MappedByteBuffer. You can
then pass that ByteBuffer off to Cap'n Proto and use it like any other
ByteBuffer. The operating system will not actually read in the data from
disk until your program attempts to access the corresponding part of the
MappedByteBuffer, which Cap'n Proto will only do when you invoke the
accessor for a field located there. So, somewhat magically, you get random
access.

Unfortunately, you cannot get random access to compressed data this way,
unless the compression is implemented inside the OS / filesystem. (And most
compression methods are not random-access-friendly anyhow.)

-Kenton

On Tue, Dec 5, 2017 at 10:38 AM,  wrote:

> Hi,
>
> I am working on a project which is using protobuf to encode/decode
> messages. I am evaluating if it is worth to migrate to Cap'N proto. I am
> using the Java implementation of Cap'N. https://github.com/capn
> proto/capnproto-java
>
> From the documentation, https://capnproto.org/index.html, Random access
> is mentioned as a key feature. But I am not able to find any piece of code
> example to demonstrate this feature. Am I misunderstanding it? Does "random
> access" simply means we can access any field without "deserializing" the
> whole message (it actually not serialized at all if not packed)?
>
> What I thought about "random access" is Cap'N is able to read any field
> back from disk without loading the whole bunch of message data to memory.
> But from the java API implementation (the source code), it seems that it
> always read the whole message back to byte buffer, getRoot and then access
> any field. So, I guess my understanding is wrong, isn't it?
>
> Our scenario:
> Our current protobuf message schema has many fields (~100) with embedded
> other messages. The serialized message size varies from hundreds bytes to
> tens of kilobytes and a few large messages may over 1 megabytes. We store
> the messages in term of compressed byte array to underlying KV store and
> read back from KV store, uncompress and then parse to protobuf object.
>
> In this case, do you think it is worth to migrate from protobuf to cap'N ?
> If so, how can I benefit from "random access" feature?
>
>
> Thanks,
> Tao
>


Re: [capnproto] Re: About random access for Cap'N proto message

2017-12-06 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
On Tue, Dec 5, 2017 at 2:27 PM,  wrote:

> Thanks a lot. I got it. In my case, I will always read the compressed byte
> array back from KV store, decompress and then read fields. So, in this
> case, "random access" means Cap'N will only create the object of that field
> from unpacked message without creating the temp objects of other fields, in
> other word, all other fields will still be the flat bytes without any
> managed objects created. Is that correct?
>

Yes. However, if you're reading *packed* messages, then packed bytes do
need to be unpacked upfront. They are unpacked into another ByteBuffer. No
message objects are created, but this does require reading through all the
bytes.

The memory mapping strategy I described does not work for packed messages.


> Moreover, another question is how to write message in packed format to a
> byte array. Because I have to allocate a ByteBuffer with enough capacity to
> store the message. But it is not possible to know the packed message size
> without packing it first. Currently, I have to allocate with its unpacked
> size (computeSerializedSizeInWords * 8), then use a tricky way to trim the
> tailing zeros. Do you know if there is any better way to do this?
>

The only way to know the packed size is to actually run the packing
algorithm. You could run the algorithm twice, once where you throw away the
data just to get the size, and then another time to save it. Or, you could
allocate successive buffers on-demand, and then assemble them into one big
buffer at the end. Or, if you're going to write to an OutputStream anyway,
write the bytes to the OutputStream as they are being packed, rather than
packing everything first and writing second.

-Kenton



Re: [capnproto] Issue with Int8 and UInt8

2017-12-07 Thread &#x27;Kenton Varda&#x27; via Cap'n Proto
Hi Shuo,

The problem you are seeing is a classic problem with C++'s iostream
classes. int8_t and uint8_t are aliases for `signed char` and `unsigned
char`. std::ostream::operator<<() treats both of these types as equivalent
to `char`. So, it writes the value as a single character rather than as an
integer. If the value is an unprintable character, it looks like nothing is
printed.

FWIW, the `kj::str()` universal stringification function does not have this
problem. It treats only bare `char` as a character. `signed char` and
`unsigned char` are treated as integers.
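
To make the difference concrete, here is a minimal stand-alone sketch (added
for illustration, not from the original exchange):

    #include <cstdint>
    #include <iostream>
    #include <kj/string.h>

    int main() {
      int8_t value = 127;
      std::cout << value << "\n";                    // streamed as a char: prints DEL, looks empty
      std::cout << static_cast<int>(value) << "\n";  // widen explicitly: prints 127
      std::cout << kj::str(value).cStr() << "\n";    // kj::str() treats int8_t as an integer: 127
    }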

As to your second question, yes, a size of 16 is correct for both messages.
The first segment of a message always starts with an 8-byte root pointer
pointing to the root object (usually a struct). Object sizes are rounded up
to the nearest 8-byte boundary, so unless the object is totally empty, you
end up with at least 16 bytes per message. Of course, most of this size is
zero-valued so would compress away with capnp packing or any decent
compression algorithm.

-Kenton

On Thu, Dec 7, 2017 at 2:55 PM,  wrote:

> Hello,
>
> I wanted to test the performance of serializing a single scalar type of
> message. For example:
> @0xf123cfa3565bb5a6;
>
> struct TestBool {
>  value @0 :Bool;
> }
> struct TestInt8 {
>  value @0 :Int8;
> }
>
> When I was testing the scalar types Int8 and UInt8, by calling
> auto r2 = message.getRoot<TestInt8>();
> cout << r2.getValue() << "\nsize: " << size << endl;
>
> returned nothing for the value:
>
> size: 16
>
>
> However, when I set the field to Int16, the correct value is returned.
>
> 127
>
> size: 16
>
> The complete code is pasted below:
> capnp::MallocMessageBuilder message;
> TestInt8::Builder r1 = message.getRoot<TestInt8>();
> r1.setValue(127);
> auto serialized = message.getSegmentsForOutput();
> // auto serialized = capnp::messageToFlatArray(message);
>
> //capnp::SegmentArrayMessageReader reader(serialized);
> size_t size = 0;
> for (auto segment : serialized) {
>   size += segment.asBytes().size();
> }
> auto r2 = message.getRoot<TestInt8>();
> cout << r2.getValue() << "\nsize: " << size << endl;
>
> Is there anything wrong? It only happened for Int8 and UInt8 cases.
> Also, is the size I am getting the correct serialized size?
>
> Best,
> Shuo
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] How to create kj::ArrayPtr from char *?

2017-12-11 Thread 'Kenton Varda' via Cap'n Proto
Hi zosiasmail,

On Mon, Dec 11, 2017 at 8:43 AM,  wrote:

> std::vector bytes(1024);
> kj::ArrayPtr words(reinterpret_cast capnp::byte*>(bytes.data()), bytes.size() / sizeof(capnp::word));
>

This code will work if you change the reinterpret_cast to:

reinterpret_cast<const capnp::word*>(bytes.data())

That is, you are casting to the wrong type (and wrong constness).

However, there's a deeper problem this doesn't solve, which is ensuring
that your buffer is aligned. There's no guarantee that the bytes in a
vector are aligned to a word boundary. Since Cap'n Proto doesn't have a
separate decoding step, it's necessary that the message be properly-aligned
for direct access of types up to 64 bits.

The trick is to allocate your backing buffer as words in the first place:

std::vector<capnp::word> words(128);

Now you can read into this vector like:

read(fd, words.begin(), words.size() * sizeof(capnp::word))

Better yet, don't use std::vector; use kj::Array<capnp::word> all the way through:

auto words = kj::heapArray<capnp::word>(128);
ssize_t n = read(fd, words.begin(), words.size() * sizeof(capnp::word));
// TODO: check errors, etc.
auto messageWords = words.slice(0, n / sizeof(capnp::word));
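
And, as a follow-up sketch (added for illustration), reading the message from
that buffer, where MyStruct is just a placeholder for your schema-generated
type and the words are assumed to hold one complete unpacked message:

    #include <capnp/serialize.h>

    void readMessage(kj::ArrayPtr<const capnp::word> messageWords) {
      capnp::FlatArrayMessageReader reader(messageWords);
      MyStruct::Reader root = reader.getRoot<MyStruct>();
      // ... use root ...
    }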

If you are using an I/O library that insists on giving you strictly bytes
with no alignment guarantee, then you might have a problem. You may be
forced to make a copy in this case.

-Kenton

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] How to create kj::ArrayPtr from char *?

2017-12-12 Thread 'Kenton Varda' via Cap'n Proto
FWIW, it's actually pretty likely that a vector's buffer will
actually be aligned in practice, because it is almost certainly allocated
using malloc() which always returns aligned buffers. But, there's
technically no guarantee.

Given this, perhaps you could write some code which *checks* for alignment,
and if so, does a reinterpret_cast, but if the buffer isn't aligned, then
it falls back to a copy.

bool aligned = reinterpret_cast<uintptr_t>(bytes.begin()) % sizeof(void*) == 0;

(I use sizeof(void*) as the denominator because 32-bit systems usually
require only 32-bit alignment even for 64-bit data types.)
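
Putting the two ideas together, a minimal sketch (added for illustration, with
`bytes` assumed to hold one complete unpacked message) of checking alignment and
falling back to a copy:

    #include <capnp/serialize.h>
    #include <kj/array.h>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    kj::ArrayPtr<const capnp::word> asWords(const std::vector<char>& bytes,
                                            kj::Array<capnp::word>& scratch) {
      size_t wordCount = bytes.size() / sizeof(capnp::word);
      bool aligned = reinterpret_cast<uintptr_t>(bytes.data()) % sizeof(void*) == 0;
      if (aligned) {
        // Aligned: reinterpret the buffer directly, no copy.
        return kj::arrayPtr(reinterpret_cast<const capnp::word*>(bytes.data()), wordCount);
      } else {
        // Not aligned: fall back to copying into a word-aligned buffer.
        scratch = kj::heapArray<capnp::word>(wordCount);
        memcpy(scratch.begin(), bytes.data(), wordCount * sizeof(capnp::word));
        return scratch.asPtr();
      }
    }

    // Usage:
    //   kj::Array<capnp::word> scratch;
    //   capnp::FlatArrayMessageReader reader(asWords(bytes, scratch));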

-Kenton

On Tue, Dec 12, 2017 at 1:54 AM, Zosia A  wrote:

> Thank you for fast reply!
>
> Yeah, I was doing it just how you've shown, but now I'm trying to
> integrate it with existing project and trying to do this with as little
> changes possible. I wasn't aware, that vector doesn't guarantee alignment
> for any standard type, so I will have to stick to the first version. Thank
> you once again :).
>
>
> On Tue, Dec 12, 2017 at 3:50 AM, Kenton Varda 
> wrote:
>
>> Hi zosiasmail,
>>
>> On Mon, Dec 11, 2017 at 8:43 AM,  wrote:
>>
>>> std::vector bytes(1024);
>>> kj::ArrayPtr words(reinterpret_cast>> capnp::byte*>(bytes.data()), bytes.size() / sizeof(capnp::word));
>>>
>>
>> This code will work if you change the reinterpret_cast to:
>>
>> reinterpret_cast<const capnp::word*>(bytes.data())
>>
>> That is, you are casting to the wrong type (and wrong constness).
>>
>> However, there's a deeper problem this doesn't solve, which is ensuring
>> that your buffer is aligned. There's no guarantee that the bytes in a
>> vector are aligned to a word boundary. Since Cap'n Proto doesn't have a
>> separate decoding step, it's necessary that the message be properly-aligned
>> for direct access of types up to 64 bits.
>>
>> The trick is to allocate your backing buffer as words in the first place:
>>
>> std::vector<capnp::word> words(128);
>>
>> Now you can read into this vector like:
>>
>> read(fd, words.begin(), words.size() * sizeof(capnp::word))
>>
>> Better yet, don't use std::vector; use kj::Array<capnp::word> all the way through:
>>
>> auto words = kj::heapArray<capnp::word>(128);
>> ssize_t n = read(fd, words.begin(), words.size() * sizeof(capnp::word));
>> // TODO: check errors, etc.
>> auto messageWords = words.slice(0, n / sizeof(capnp::word));
>>
>> If you are using an I/O library that insists on giving you strictly bytes
>> with no alignment guarantee, then you might have a problem. You may be
>> forced to make a copy in this case.
>>
>> -Kenton
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Problem with C++ generator and minimum long value

2017-12-13 Thread 'Kenton Varda' via Cap'n Proto
Hi Mathias,

The place to modify is c++/src/capnp/compiler/capnpc-c++.c++, line 532.

What is your proposed fix here? I guess we need to write
(-9223372036854775807ll-1)?
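
For illustration (a sketch, not the actual generated output):

    #include <cstdint>

    // -9223372036854775808ll is parsed as unary minus applied to the literal
    // 9223372036854775808ll, which does not fit in a signed 64-bit type, hence the warning.
    // const int64_t bad = -9223372036854775808ll;
    const int64_t good = (-9223372036854775807ll - 1);  // evaluates to INT64_MIN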

-Kenton

On Wed, Dec 13, 2017 at 9:07 AM,  wrote:

> When setting a constant of value -9223372036854775808, the C++ generated
> is the two following tokens -9223372036854775808ll which causes a warning
> with GCC, integer constant is too large.
> Could someone point me to the right place in the code so that I can fix it?
>
> Thanks
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Do i need to change the unique file id each time i change the schema?

2018-01-05 Thread 'Kenton Varda' via Cap'n Proto
Hi Muhamad,

No, in fact, you should not change the ID. The purpose of the ID is to
recognize when two different files are versions of the same schema, so it
should remain the same across versions.

-Kenton

On Thu, Jan 4, 2018 at 4:22 AM,  wrote:

> The `capnp` unique file id, do i have to change it (generate a new one)
> after adding new fields to my schema ?
>
> Thanks
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Can't move unique pointer to MallocMessageBuilder

2018-01-09 Thread 'Kenton Varda' via Cap'n Proto
Hi Kevin,

Moving a unique_ptr doesn't touch the pointed-to type at all, so the
problem here couldn't possibly have anything to do with
MallocMessageBuilder. I don't see anything obviously wrong with the code
you provided, so the bug must be elsewhere in your code. It is possible
that my_class is an invalid reference? What do you mean when you say that
std::move() "fails"? I honestly can't think of any way that std::move()
could fail at runtime, since it only casts one reference type to another.
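
For reference, a minimal stand-alone sketch (added for illustration) of moving
such a pointer, which works fine:

    #include <capnp/message.h>
    #include <memory>
    #include <utility>

    struct Holder {
      std::unique_ptr<capnp::MallocMessageBuilder> builder;
    };

    void takeOwnership(Holder& holder, std::unique_ptr<capnp::MallocMessageBuilder> builder) {
      holder.builder = std::move(builder);  // only the pointer changes owners
    }

    int main() {
      auto builder = std::make_unique<capnp::MallocMessageBuilder>();
      Holder holder;
      takeOwnership(holder, std::move(builder));
      // `builder` is now null; holder.builder owns the MallocMessageBuilder.
    }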

-Kenton

On Mon, Jan 8, 2018 at 10:57 PM,  wrote:

> I'm trying to figure out how to move a MallocMessageBuilder. I thought
> that if I created a std::unique_ptr to it, then I could move that pointer,
> but it is failing at runtime because the rvalue constructor is deleted.
>
> Am I doing something wrong with my move?
>
> Pseudo-code:
> void my_func(std::unique_ptr<::capnp::MallocMessageBuilder> builder) {
>   my_class.builder = std::move(builder);
> }
> auto builder = std::unique_ptr<::capnp::MallocMessageBuilder>();
> // ... initialize the builder ...
> my_func(std::move(builder));
>
> Is there a way to move a MallocMessageBuilder without just using a raw
> pointer? It fails at runtime in the std::move call in my function.
>
> Thanks!
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Checking the consistency of PackedMessage

2018-01-11 Thread 'Kenton Varda' via Cap'n Proto
Hi Shuo,

Could you possibly provide a minimal self-contained test program that I'd
actually be able to build and debug? It's hard to see what might be wrong
just from the code snippets.

Incidentally, while this doesn't explain your problem, note that you don't
need to pass Readers or Builders by pointer. The Reader and Builder types
already behave as pointers themselves, so you should pass them by value.
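
For example (illustrative only; RobotControl comes from the poster's schema),
the print function quoted below could simply take the Reader by value:

    void print_function(RobotControl::Reader r) {  // by value instead of by pointer
      auto states = r.getState();                  // '.' instead of '->'
      // ... body otherwise unchanged ...
    }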

-Kenton

On Thu, Jan 11, 2018 at 10:38 AM,  wrote:

> I am trying to check whether the packed message is handled correctly.
> The message was serialized using:
>   kj::VectorOutputStream output;
>   capnp::MallocMessageBuilder message;
>   typename msgT::Builder r1 = message.initRoot<msgT>();
>   assign_function(&r1);
>   capnp::writePackedMessage(output, message);
>   serialized_size = output.getArray().size();
> And was deserialized using:
>   typename msgT::Reader r2;
>   kj::ArrayInputStream input(output.getArray());
>   capnp::PackedMessageReader reader(input);
>   r2 = reader.getRoot<msgT>();
> It was then checked using:
>   capnp::AnyStruct::Reader left = r1.asReader();
>   capnp::AnyStruct::Reader right = r2;
>   return left == right;
> The message assign function is:
> void capnproto::assign_function(RobotControl::Builder *r) {
>   auto states = r->initState(nested_iter);
>   for (int i = 0; i < nested_iter; i++) {
>     auto command = states[i].initRobotCommand();
>     if (i % 2 == 0) {
>       command.setMove(1);
>     } else {
>       command.setGrasp(1);
>     }
>     auto pose = states[i].initPose();
>     auto pos = pose.initPos();
>     auto ori = pose.initOri();
>     pos.setX(static_cast(i + 1.0));
>     pos.setY(static_cast(i + 1.0));
>     pos.setZ(static_cast(i + 1.0));
>     ori.setX(static_cast(i + 1.0));
>     ori.setY(static_cast(i + 1.0));
>     ori.setZ(static_cast(i + 1.0));
>     ori.setW(static_cast(i + 1.0));
>   }
> }
> I also tried to print out r2, but it returned:
> unknown file: Failure
> C++ exception with description "capnp/layout.c++:2240: failed: expected
> ref->kind() == WirePointer::LIST; Message contains non-list pointer where
> list pointer was expected.
> stack: 0x4e86bb 0x4dfb0b 0x611d71 0x60b375 0x5ee6f8 0x5ef090 0x5ef783
> 0x5f68ca 0x613349 0x60c1b7 0x5f5366 0x4dd3c8 0x7f84f1d6c830 0x4ddde9"
> thrown in the test body.
> The print function is:
> void capnproto::print_function(RobotControl::Reader *r) {
>   auto states = r->getState();
>   for (int i = 0; i < nested_iter; i++) {
>     auto command = states[i].getRobotCommand();
>     if (i % 2 == 0) {
>       std::cout << command.isMove() << std::endl;
>     } else {
>       std::cout << command.isGrasp() << std::endl;
>     }
>     auto pose = states[i].getPose();
>     auto pos = pose.getPos();
>     auto ori = pose.getOri();
>     std::cout << pos.getX() << " " << pos.getY() << " " << pos.getZ() << std::endl;
>     std::cout << ori.getX() << " " << ori.getY() << " " << ori.getZ() << " " << ori.getW() << std::endl;
>   }
> }
> However, this test for simple scalar value message passed.
> Can anyone help me with this?
>
> Best,
> Shuo
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Expected error on pipelined request when first message wasn't sent?

2018-01-30 Thread 'Kenton Varda' via Cap'n Proto
Hi Alex,

This sounds like a bug. The pipelined request should fail with the same
error as the original request threw, not some cryptic thing about
questionIds. Can you file an issue? If you have some sample code that could
help too.

-Kenton

On Tue, Jan 30, 2018 at 9:08 AM,  wrote:

> Hi all,
>
> As of about a year ago capnp won't send a message if it thinks it would
> get rejected for size issues on the remote end. Our service is structured
> such that we send a very large request to open a remote object, then make a
> series of small queries against that. We naturally let those queries be
> pipelined, so don't call `.wait()` immediately on the initial query.
>
> If the initial upload fails, it seems to happen pretty much silently. So
> where if we call `.wait()` immediately on the upload, we get the expected
> "Trying to send Cap'n proto message larger than single-message size limit."
> If we pipeline a second request, we instead get the cryptic
> "PromisedAnswer.questionId is not a current question." Is there a way to
> return a more interpretable error, possibly mentioning that somewhere along
> your pipelined requests a message wasn't sent?
>
> Thanks!
> -Alex
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Re: MallocMessageBuilder mmap

2018-01-31 Thread 'Kenton Varda' via Cap'n Proto
Hi Sachin,

MallocMessageBuilder is really intended to allocate space with malloc. If
you are using mmap, then you are not allocating using malloc, and
MallocMessageBuilder probably isn't the right way to go.

Instead, you should implement a custom subclass of MessageBuilder. Then,
when more space is needed, Cap'n Proto will call your custom
allocateSegment() implementation which can mmap more space (or throw an
exception, etc.).

Note also that by implementing a custom MessageBuilder subclass, you can
access the constructors of MessageBuilder that allow you to initialize it
with an existing message to be modified in-place. MallocMessageBuilder only
works for creating new messages from scratch. (However, note that there are
security concerns -- if you don't trust the message content, you should
always start a new message and copy the old message into it. See the
comments in the header file for more info.)
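
A minimal sketch of such a subclass (the class name and error handling below
are made up for illustration; see the comments in message.h for the real
contract):

    #include <capnp/message.h>
    #include <kj/debug.h>

    // Hands out segments from one fixed, word-aligned, pre-zeroed buffer
    // (e.g. an mmap'ed region).
    class FixedBufferMessageBuilder final : public capnp::MessageBuilder {
    public:
      explicit FixedBufferMessageBuilder(kj::ArrayPtr<capnp::word> buffer)
          : buffer(buffer), used(0) {}

      kj::ArrayPtr<capnp::word> allocateSegment(uint minimumSize) override {
        KJ_REQUIRE(used + minimumSize <= buffer.size(), "fixed buffer exhausted");
        // Hand out the remainder of the buffer as one segment; only the words
        // actually allocated into it end up in the serialized message.
        auto result = buffer.slice(used, buffer.size());
        used = buffer.size();
        return result;
      }

    private:
      kj::ArrayPtr<capnp::word> buffer;
      size_t used;
    };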

-Kenton

On Wed, Jan 31, 2018 at 7:04 AM, Sachin Kumar  wrote:

> Some updates from my own reading of other message on this group:
>
> I figured out how to construct a kj::ArrayPtr from a raw char*
> pointer claimed from shared memory. Moreover, the raw buffer I get from my
> mmap'ed file is 64-bit aligned. So this means there should be no issues
> backing a MallocMessageBuilder with this memory, correct?
>
> What still eludes me is whether this is an intended use case for MallocMessageBuilder,
> particularly since I need to ensure that the MallocMessageBuilder does not
> write more than the fixed size buffer I claimed, and that the
> MallocMessageBuilder does not take ownership of the backed memory -- i.e.
> does not try to deallocate it.
>
> Any suggestions on this use case?
>
>
> On Wednesday, January 31, 2018 at 12:44:28 AM UTC-5, Sachin Kumar wrote:
>>
>> Hi,
>>
>> I'm trying to use MallocMessageBuilder that's backed by a fixed size
>> chunk of mmap'ed memory. So, in pseudo-code:
>>
>> char* buf = claim_mmap_memory(1024);
>> ::capnp::MallocMessageBuilder message(buf);
>> ... build message
>>
>> The questions I have:
>>
>> 1) Am I correct in trying to do this with MallocMessageBuilder given that
>> the memory claimed is fixed size (1024 bytes in the above example). I can
>> generally assume that the message should *not* exceed 1024 -- perhaps I can
>> programmatically check the size used as the message is built and throw an
>> exception if it exceeds 1024?
>>
>> 2) Is it efficient to instance a malloc message builder like this in a
>> tight loop repeatedly to send many messages?
>>
>> 3) How do I convert the raw char* buffer into a kj::ArrayPtr so
>> that I can pass into the constructor of MallocMessageBuilder?
>>
>>
>> Thanks,
>>
>> Sachin
>>
>> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Shared memory builder

2018-02-12 Thread 'Kenton Varda' via Cap'n Proto
Hi Roman,

Indeed, allocateSegment() is expected to return memory that has been
pre-zero'd. This appears to be missing from its doc comment. :( I've filed
an issue for that: https://github.com/capnproto/capnproto/issues/636

The reason for this is actually performance: Whenever you initialize an
object, it has to start out containing zeros, so if the underlying memory
is known to be zero'd already, then the implementation doesn't have to zero
each object on allocation. Many memory allocators are already guaranteed to
return zero'd memory, so pushing this responsibility to the memory
allocator possibly eliminates an extra zeroing pass.

The reason individual objects have to start out containing zeros is:
1) Part of Cap'n Proto's forwards/backwards-compatibility story is that if
you don't explicitly set a field, it gets set to its default value. Since
field values are XOR'd against the default on the wire, the wire encoding
of the default value is always zero. So zeroing is the same thing as
default-initializing, which is a necessary step.
2) When communicating with a party you don't trust, exposing uninitialized
memory in the message would be a major security flaw. By pre-zeroing
messages we can be more easily assured that that is not a problem.
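
So, as a small sketch (added for illustration), a recycled ring-buffer chunk
should be cleared before being handed out as a segment:

    #include <capnp/common.h>
    #include <cstring>

    kj::ArrayPtr<capnp::word> recycleChunk(capnp::word* chunk, size_t wordCount) {
      // Zero the reused memory so newly allocated objects start out default-initialized.
      memset(chunk, 0, wordCount * sizeof(capnp::word));
      return kj::arrayPtr(chunk, wordCount);
    }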

Regarding debug mode, defining DEBUG should be sufficient. Invoke configure
like:

./configure CXXFLAGS="-g -DDEBUG"

BTW, 0.5.3 is many years old. I strongly recommend updating to 0.6.1.

-Kenton

On Mon, Feb 12, 2018 at 12:50 PM,  wrote:

> Hi,
>
> We are trying to use capnp (0.5.3) for inter-process communication. For
> that we are going to use a shared memory organized as a ring buffer
> (between C++ and Rust apps).
> The allocateSegment() method (in C++) is allocating a chunk of unused
> memory and returns it as kj::array.
> Unfortunately we noticed that if the underlining memory chunk (which is
> reused, as we have a ring buffer) is not nullified, that is memset(0) for
> the whole size, capn'p would crash.
> Further analysis revealed that the library tries to read from the newly
> allocated chunk at the function WireHelpers::zeroObject by putting a read
> watch on the newly allocated memory chunk:
>
> #0  0x0064af43 in 
> capnp::_::WireHelpers::zeroObject(capnp::_::SegmentBuilder*,
> capnp::_::WirePointer*) ()
> #1  0x006468e8 in 
> capnp::_::PointerBuilder::initStruct(capnp::_::StructSize)
> ()
> #2  0x00425d9c in capnp::_::PointerHelpers (capnp::Kind)3>::init (builder=...) at /sandbox/common/include/capnp/
> pointer-helpers.h:52
> #3  0x004259e6 in capnp::AnyPointer::Builder::initAs
> (this=0x76b0dd10) at /sandbox/common/include/capnp/any.h:690
> #4  0x0042557e in capnp::MessageBuilder::initRoot
> (this=0x76b0ddd0) at /sandbox/common/include/capnp/message.h:432
> #5  0x004241c6 in MsgWriterRunner::run (this=0x96bbc8) at
> /sandbox/mylib/unittest/capnp_message_test.cpp:246
> ...
>
> The capnp is simple
> struct Test {
>   name @0 :Text;
>   number @1 : UInt64;
> }
>
> Looking at MallocMessageBuilder we notice that calloc() is actually used, that is,
> buffers are always zeroed.
>
>
> Is that an implicit requirement that the memory returned by the overriding
> allocateSegment is zeroed?
> If so, it would have significant implications for performance, which is
> the reason we chose Cap'n Proto in the first place.
>
> Any assistance appreciated.
>
> A side question: what is the right way to build a debug version of the
> libraries on linux? Found no documentation on that; tried ./configure with
> -DDEBUG but the results are not satisfying - some functions remain inlined.
>
> Thanks,
> Roman.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Passing and calling capabilities with pycapnp

2018-02-15 Thread 'Kenton Varda' via Cap'n Proto
It looks like capnp.wait_forever() is actually implemented as
kj::NEVER_DONE.wait() under the hood.

Can you provide more complete example code that we would be able to build
and run?

-Kenton

On Wed, Feb 14, 2018 at 2:45 PM,  wrote:

> I have a use-case which is similar to streaming RPC to a server written in
> C++ - here is a simplified example:
>
> # Setup a client and connect it to our task server
> client = capnp.TwoPartyClient('localhost:8000')
> task_mgr = client.bootstrap().cast_as(schema.Task)
>
> class Notifier(schema.Notifier.Server):
>     def notify(self, params, **kwargs):
>         print "notifying"
>
> task = task_mgr.create(type=0)
> n = Notifier()
> task.add_notifier(n).wait()
>
> task.run().wait()
>
> capnp.wait_forever()
>
> The C++ server will kick off the task which will call the notifier's
> notify asynchronously.  The C++ client works as expected (I get notify
> callbacks when the task wants to notify).  The python client seems to be
> unable to pump the message loop.  I see an example with a threaded client
> which involves repeatedly chaining promises to make something like this
> work, but is there an easier way?
>
> I was hoping that the capnp.wait_forever() would take care of pumping the
> loop and executing any callbacks, similar to my 
> kj::NEVER_DONE.wait(client.getWaitScope());
> in the C++ client...
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] initialise nested lists

2018-02-20 Thread 'Kenton Varda' via Cap'n Proto
Hi Frank,

I think you want something like:

auto outer = name.initNestedList(vec.size());
for (uint i = 0; i < vec.size(); i++) {
  auto inner = outer.init(i, vec[i].size());
  for (uint j = 0; j < vec[i].size(); j++) {
inner.set(j, vec[i][j]);
  }
}

-Kenton

On Tue, Feb 20, 2018 at 3:04 PM, 'Frank' via Cap'n Proto <
capnproto@googlegroups.com> wrote:

> Hi,
>
>
> I use Capnproto to serialise data. What is the best way to fill
> nestedList and list from std::vector<std::vector<bool>> resp.
> std::vector<bool>? For list I used initList() and the set() method in a
> loop but I am not sure how to proceed with nestedList. Do I have to
> create kj::Arrays for the nested lists?
>
> --
> struct Name {
> nestedList @0 :List(List(Bool));
> list @1 :List(Bool);
> }
> --
>
>
> Best,
>
> Frank
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Re: Bikeshedding the capnp format

2018-02-27 Thread 'Kenton Varda' via Cap'n Proto
Fun stuff.

I, too, am not entirely happy with the Cap'n Proto wire format in
retrospect. Here's what I'd do if compatibility were not an issue:

1) Eliminate the concept of segments from the encoding. Segments need only
exist at write time, to allow for progressive allocation of additional
message space. But, tracking this could be handled entirely in the
MessageBuilder implementation. All the segments could be concatenated at
write time, and the receiver could then treat the message as one large
segment. Inter-segment pointers no longer need to be far pointers; as long
as the distance is under 4GB, a regular relative pointer is fine. (Builder
implementations would recognize cross-segment pointers by the fact that
they fail the bounds check.)

2) All pointers -- including far pointers, if they exist -- should be
relative to the pointer location, never absolute. (Currently, far pointers
contain an absolute segment index.) Making all pointers relative means that
it's always possible to embed an existing message into a larger message
without doing a tree copy, which turns out to be a pretty nifty thing to be
able to do.

3) Recognize that there's no fundamental need to distinguish on the wire
whether a pointer points to a struct or a list. All we really need to know
is the object location, size, and which bits are data vs. pointers. One we
recognize this, the pointer format can instead focus on optimizing the
common case, with fallbacks for less-common cases. The "common" pointer
encoding could be:

1 bit: 1 to indicate this encoding.
31 bits: offset
16 bits: element count
8 bits: data words per element (with special reserved values for 0-bit,
1-bit, 8-bit, 16-bit, 32-bit)
8 bits: pointers per element

This encoding would cover the vast majority of use cases -- including
struct lists without the need for a tag. Note that for a simple struct (not
a list), the element count would be 1. We then add a fallback encoding used
when any of these fields is not large enough. When the first bit is 0, this
indicates an "uncommon pointer", which could be any of:

- Null pointer (all-zero).
- Capability reference.
- Tag pointer: Encodes a 61-bit word offset pointing to a tagged object. A
tagged object starts with a tag word that encodes a 32-bit element count,
16-bit data word per element, 16-bit pointers per element.
- Trampoline pointer: Like today's "double-far" pointer: points to a
two-word object which contains tag information and a further pointer to the
actual object content. Here we can have 2x16-bit segment sizes, 61-bit
offset, and 35-bit element count, which would become the new upper limit on
list sizes (compared to today's 29-bit limit).
- Other pointer types to be defined later.

-Kenton

On Mon, Feb 26, 2018 at 5:47 PM,  wrote:

> On Tue, 2018-02-27 at 02:35 +0100, ashpil...@gmail.com wrote:
> > Let me now describe the format that results from these observations,
> > without fixing the numerical constants.  The pointer formats are:
>
> Forgot to include the far pointer format; it is, of course,
>
>   +--+-+--+---+---+
>   |Ty|M|Pad offset|  Pad segment  |  Obj segment  |
>   +--+-+--+---+---+
>
>   Ty  ( 2 bits) = "far pointer"
>   M   ( 1 bit ) = more bit
>   Pad offset  (29 bits) = offset of pad, in words
>   Pad segment (16 bits) = segment of pad
>   Obj segment (16 bits) = segment of object
>
> The destination object is referred to by the pointer located at (Pad
> segment):(Pad offset) (the "landing pad"), with the offset inside the
> pointer being interpreted relative to (Obj segment).
>
> Alex
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Re: Bikeshedding the capnp format

2018-02-28 Thread 'Kenton Varda' via Cap'n Proto
On Wed, Feb 28, 2018 at 1:08 PM,  wrote:

> Hm.  I don't see how you could do this without a full-on O(n)
> compaction pass on transmission/write-out.  (Though you could argue
> that writing out the message is O(n) in the first place, so you don't
> lose anything.)  Even with one, I'm not sure I know how to do it (and
> still allow the message to grow arbitrarily in several places).
>

I don't understand. What I described shouldn't change write logistics.

Think of it this way: We're saying that far pointers can instead be
represented as regular pointers that happen to have an out-of-bounds
offset. Previously, such pointers were simply invalid. Now, we interpret
them as pointing into the next segment.

That said, given that segment boundaries no longer matter, it might make
sense to replace the segment table with a single 64-bit size -- or in the
case of files (where the filesystem tracks the size), no prefix is needed
at all.

> Oh, I didn't think of that.  Yes, that probably makes relative pointers
> worth keeping.  You still need to get the message data into the buffer,
> and that's still O(n) [though a much faster O(n) than a tree traversal]
> unless you use something like scatter/gather, which is not exactly a
> standard feature of any of the current capnp libraries (or popular
> operating systems for that matter).
>

Eh? kj::OutputStream and kj::AsyncOutputStream both support gather-writes
and Cap'n Proto uses it when serializing to a stream. This translates into
a writev() syscall.

Moreover, the C++ implementation of Cap'n Proto supports linking a large
external byte blob into a message without copying it. The blob becomes a
message segment.

I actively use this. Sandstorm.io's spk tool, which constructs a
capnp-encoded archive from a directory tree, actually mmap()s each file and
links it into the archive this way. It then writes out the whole archive at
once. All this happens without ever reading the bytes of any of the files
in userspace. In theory a really smart kernel and copy-on-write filesystem
has the information it needs to reflink the underlying bytes, though
admittedly that's unlikely to happen in practice.


> > 3) Recognize that there's no fundamental need to distinguish on the
> > wire whether a pointer points to a struct or a list. All we really
> > need to know is the object location, size, and which bits are data
> > vs. pointers.
>
> I thought about that too when writing the original email (and mentioned
> it in a parenthetical remark), but I wasn't quite sure not
> distinguishing between an object and a list of pointers was safe wrt
> schema evolution.
>

I think it only creates new compatibilities, not incompatibilities.
Specifically a struct field would be compatible with a list-of-struct field
so long as the list contained only one element. I'd probably take it a step
further and say that replacing a struct with a list-of-structs is a
supported upgrade, and that old programs will always use the first element
of the list. This is actually a fairly common change to want to make in real-world
protocols.


> > The "common" pointer encoding could be:
> > [...]
> > 16 bits: element count
> > 8 bits: data words per element (with special reserved values for 0-bit,
> 1-bit, 8-bit, 16-bit, 32-bit)
> > 8 bits: pointers per element
>
> I know I ramble a lot, but if you can stand it, my original email
> describes why you don't actually need a pointer count, provided you can
> spare a bit in each pointer and a lookup table with 32 bits per struct
> field in the decoder for every struct you'll want to read from.  (Well,
> OK, you need a "has pointers?" flag, but that's it.)
>

I read that, but I don't like it. I'm uncomfortable with the idea that, to
determine the size of the sections, you either need a lookup table based on
the schema, or you need to do an O(n) scan over the pointers. I'll grant
you that you can make this work for the obvious use cases: copying is O(n)
anyway, and bounds-checking in accessor methods could be based on a check
against the total size rather than a check against the specific location
you're trying to read. But, the code to implement both these cases becomes
significantly more complicated.

I think other, obscure cases may be broken. For example, take the case
where you initialize a builder by deep-copying a message from a reader,
then you traverse that message in the builder. Say you follow some struct
pointer. How do you determine if the target object has a compatible layout?
It could be that the total size is a match, but the section sizes don't
match. You have two options:

1) Scan the pointers to verify there are the correct number. This would be
unacceptably slow -- repeatedly get()ing the same pointer should be fast,
but here we need to do an O(n) scan each time.
2) Just assume that if the struct has the right overall size, then it must
have been created with the same version of the schema and therefore has the
correct section sizes. In this case, though, you now h

Re: [capnproto] Re: Bikeshedding the capnp format

2018-03-01 Thread 'Kenton Varda' via Cap'n Proto
On Wed, Feb 28, 2018 at 7:35 PM,  wrote:

> The difference is that previously you could resize a segment without
> adjusting any pointers. (Now, pointers into the next segment will
> change their targets.) Conversely, if you allocated too big a buffer
> for a segment, you could just ignore the extra part. (Now, again, you
> have to write/transmit the padding, or pointers into the next segment
> will get invalid.)
>

The usual usage pattern here is that you allocate space for a segment,
progressively fill it, and when you're out of space, then you allocate the
next segment. Some space might be left over at the end of an allocated
block if the objects didn't fit just right, but that space is not included
in the serialized message.

Under my proposal, this all works the same.

Admittedly, the current implementation is able to "go back" to previous
segments sometimes and fill in space that was left empty before, if the
objects happen to fit right. That's still possible under the new model too,
but harder: the extra space has to be treated as a new segment, which just
happens to be part of the same memory allocation. In most cases it probably
makes more sense to ignore the wasted space; if you allocate each segment
to be twice the size of the last, then the wasted memory will amortize away.


> (For some reason I was thinking about Linux-specific asynchronous
> scatter/gather I/O, which _is_ a pain to use.)
>

Eh, it's only a pain in that there's a limit on how many buffers you can
pass per syscall, but I've handled that in the library.

(Windows, OTOH, doesn't seem to support scatter/gather except with some
very finicky requirements that are unrealistic.)

-Kenton

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Compilation failure with JsonCodec

2018-03-01 Thread 'Kenton Varda' via Cap'n Proto
Hi Krzystof,

The parameter to encode() should be a specific typed Reader, not a
MessageReader. So, something like:

   capnp::JsonCodec json;
   kj::String json_encoded = json.encode(builder.getRoot<MyStructType>().asReader());
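
Or, as a slightly fuller sketch (with MyStructType still standing in for the
actual root type):

    #include <capnp/message.h>
    #include <capnp/compat/json.h>

    kj::String toJson(capnp::MallocMessageBuilder& builder) {
      capnp::JsonCodec json;
      // encode() wants a typed Reader, not a MessageReader.
      return json.encode(builder.getRoot<MyStructType>().asReader());
    }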

-Kenton

On Thu, Mar 1, 2018 at 1:50 PM, Krzysztof Sakrejda <
krzysztof.sakre...@gmail.com> wrote:

> I'm looking for a suggestion about where to look next for fixing this
> issue.  With these three lines I get a compilation failure:
>
>    capnp::JsonCodec json;
>    capnp::SegmentArrayMessageReader reader(builder.getSegmentsForOutput());
>    kj::String json_encoded = json.encode(reader);
>
>
> The failure is on the third line, I'm failing to see if I did something
> wrong or if there are caveats to the JSON
> encode function that I'm missing.  These are schema I can already write
> out to a binary stream successfully and
> read back so I think those are ok.
>
> Here's the compiler message:
>
> [ 83%] Building CXX object src/capnstan/CMakeFiles/config_writer.dir/
> config_writer.cpp.o
> In file included from /home/krzysztof/packages/capnStan/downloads/
> capnproto-c++/src/capnp/raw-schema.h:29:0,
>  from /home/krzysztof/packages/capnStan/capnStan/../
> downloads/capnproto-c++/src/capnp/generated-header-support.h:31,
>  from /home/krzysztof/packages/capnStan/capnStan/src/capnp
> /stan-config.capnp.h:7,
>  from /home/krzysztof/packages/capnStan/capnStan/src/
> capnstan/config_switch.hpp:34,
>  from /home/krzysztof/packages/capnStan/capnStan/src/
> capnstan/config_writer.cpp:1:
> /home/krzysztof/packages/capnStan/downloads/capnproto-c++/src/capnp/common
> .h: In substitution of ‘template using FromAny = typename capnp::
> FromAny_::Type [with T = kj::Decay_ &>::Type]’:
> /home/krzysztof/packages/capnStan/capnStan/../downloads/capnproto-c++/src/
> capnp/compat/json.h:215:33:   required from ‘kj::String capnp::JsonCodec::
> encode(T&&) [with T = capnp::SegmentArrayMessageReader&]’
> /home/krzysztof/packages/capnStan/capnStan/src/capnstan/config_writer.cpp:
> 26:51:   required from here
> /home/krzysztof/packages/capnStan/downloads/capnproto-c++/src/capnp/common
> .h:290:43: error: invalid use of incomplete type ‘struct capnp::FromAny_<
> capnp::SegmentArrayMessageReader, void>’
>  using FromAny = typename FromAny_::Type;
>^
> /home/krzysztof/packages/capnStan/downloads/capnproto-c++/src/capnp/common
> .h:256:8: note: declaration of ‘struct capnp::FromAny_ ArrayMessageReader, void>’
>  struct FromAny_;
> ^~~~
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> Visit this group at https://groups.google.com/group/capnproto.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to capnproto+unsubscr...@googlegroups.com.
Visit this group at https://groups.google.com/group/capnproto.


Re: [capnproto] Re: Segfault during JSON encode() in v0.6.1

2018-03-02 Thread 'Kenton Varda' via Cap'n Proto
Hi Marc,

Do you have any custom type handlers registered via addTypeHandler()? Is it
possible that the handler class has gone out-of-scope (been destroyed) by
the time the encoder is executed?

-Kenton

On Fri, Mar 2, 2018 at 10:18 AM, Marc Sune  wrote:

> > That's about all that I can get, even though capnproto is compiled with
> DEBUG.
>
> Or shall I say, it should :/
>
>
> On Friday, March 2, 2018 at 7:17:02 PM UTC+1, Marc Sune wrote:
>>
>> Hi guys,
>>
>> I am experiencing a _very_ strange segfault during JSON encoding of
>> message:
>>
>> ```
>> Program received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 27946]
>> 0x01e4b81a in std::__detail::_Hashtable_ebo_helper<1,
>> capnp::(anonymous namespace)::TypeHash, 
>> true>::_S_cget(std::__detail::_Hashtable_ebo_helper<1,
>> capnp::(anonymous namespace)::TypeHash, true> const&) ()
>> (gdb) bt
>> #0  0x01e4b81a in std::__detail::_Hashtable_ebo_helper<1,
>> capnp::(anonymous namespace)::TypeHash, 
>> true>::_S_cget(std::__detail::_Hashtable_ebo_helper<1,
>> capnp::(anonymous namespace)::TypeHash, true> const&) ()
>> #1  0x01e4b11a in std::__detail::_Hash_code_base> std::pair,
>> std::__detail::_Select1st, capnp::(anonymous namespace)::TypeHash,
>> std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash,
>> true>::_M_h1() const ()
>> #2  0x01e4a976 in std::__detail::_Hash_code_base> std::pair,
>> std::__detail::_Select1st, capnp::(anonymous namespace)::TypeHash,
>> std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash,
>> true>::_M_hash_code(capnp::Type const&) const ()
>> #3  0x01e4a401 in std::_Hashtable> std::pair,
>> std::allocator> capnp::JsonCodec::HandlerBase*> >, std::__detail::_Select1st,
>> std::equal_to, capnp::(anonymous namespace)::TypeHash,
>> std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash,
>> std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits> false, true> >::find(capnp::Type const&) const ()
>> #4  0x01e483a3 in std::unordered_map> capnp::JsonCodec::HandlerBase*, capnp::(anonymous namespace)::TypeHash,
>> std::equal_to, std::allocator> capnp::JsonCodec::HandlerBase*> > >::find(capnp::Type const&) const ()
>> #5  0x01e443d5 in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type, capnp::JsonValue::Builder) const ()
>> #6  0x01e44a6f in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type, capnp::JsonValue::Builder) const ()
>> #7  0x01e45a36 in 
>> capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
>> capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
>> #8  0x01e45377 in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type, capnp::JsonValue::Builder) const ()
>> #9  0x01e45a36 in 
>> capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
>> capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
>> #10 0x01e45377 in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type, capnp::JsonValue::Builder) const ()
>> #11 0x01e45a36 in 
>> capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
>> capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
>> #12 0x01e45377 in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type, capnp::JsonValue::Builder) const ()
>> #13 0x01e45a36 in 
>> capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
>> capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
>> #14 0x01e45618 in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type, capnp::JsonValue::Builder) const ()
>> #15 0x01e45a36 in 
>> capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
>> capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
>> #16 0x01e45618 in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type, capnp::JsonValue::Builder) const ()
>> #17 0x01e43fcc in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type) const ()
>> #18 0x009e02c7 in capnp::JsonCodec::encode> const&> (this=0x722e1210, value=...) at /home/marc/.../capnp/compat/js
>> on.h:216
>> ```
>>
>> That's about all that I can get, even though capnproto is compiled with
>> DEBUG.
>>
>> The message trying to be encoded (sorry, I am not sure I can share the
>> entire set of schemas), is a series of simple objects, which in the
>> inner-most object contains a list that is initialized normally:
>>
>> 115 s.initIfaceType(1);
>>
>> The funny part; not initializing it, doesn't make JsonCodec crash. But
>> initializing it, or initializing it + setting a value (valid one), produces
>> the crash always.
>>
>> Valgrind etc... doesn't complain until that point.
>>
>> I am trying to isolate the problem, to make it reproducible, but I am not
>> able yet.
>>
>> Any ideas on this?
>>
>> Thanks
>>

Re: [capnproto] Re: Segfault during JSON encode() in v0.6.1

2018-03-02 Thread 'Kenton Varda' via Cap'n Proto
Sorry, I don't really have any ideas here. The stack trace is deep in STL
code for the type handler map, inside find(). If you've registered no type
handlers, that map should be empty. It's hard to imagine how find() on an
empty std::unordered_map could ever segfault...

-Kenton

On Fri, Mar 2, 2018 at 11:07 AM, Marc Sune  wrote:

> Kenton,
>
> On Friday, March 2, 2018 at 7:27:30 PM UTC+1, Kenton Varda wrote:
>>
>> Hi Marc,
>>
>> Do you have any custom type handlers registered via addTypeHandler()? Is
>> it possible that the handler class has gone out-of-scope (been destroyed)
>> by the time the encoder is executed?
>>
>
> No, I am not using custom handlers. I am just dumping the Builder like
> this (simplified):
>
> template<typename T>
> void dump_msg(T& msg){
>     try{
>         capnp::JsonCodec enc;
>         enc.setPrettyPrint(PPRINT);
>         auto t = enc.encode(msg);
>         fprintf(stderr, "MSG: %s\n", t.cStr());
>     }catch(...){}
> }
>
> Where, in this case, msg is:
>
> 147 capnp::MallocMessageBuilder builder;
>
> 148 auto msg = builder.initRoot();
>
> //Fill it
>
> 254  dump_msg(msg);
>
> So the builder and encoder al valid, I believe in the entire dumping.
> Moreover, valgrind would complain before the SEGFAULT if something would be
> out of the stack, and the only thing I get is the direct SEGFAULT.
>
> Any thoughts? I will keep trying to isolate the problem
>
> marc
>
> -Kenton
>>
>> On Fri, Mar 2, 2018 at 10:18 AM, Marc Sune  wrote:
>>
>>> > That's about all that I can get, even though capnproto is compiled
>>> with DEBUG.
>>>
>>> Or shall I say, it should :/
>>>
>>>
>>> On Friday, March 2, 2018 at 7:17:02 PM UTC+1, Marc Sune wrote:

 Hi guys,

 I am experiencing a _very_ strange segfault during JSON encoding of
 message:

 ```
 Program received signal SIGSEGV, Segmentation fault.
 [Switching to Thread 27946]
 0x01e4b81a in std::__detail::_Hashtable_ebo_helper<1,
 capnp::(anonymous namespace)::TypeHash, 
 true>::_S_cget(std::__detail::_Hashtable_ebo_helper<1,
 capnp::(anonymous namespace)::TypeHash, true> const&) ()
 (gdb) bt
 #0  0x01e4b81a in std::__detail::_Hashtable_ebo_helper<1,
 capnp::(anonymous namespace)::TypeHash, 
 true>::_S_cget(std::__detail::_Hashtable_ebo_helper<1,
 capnp::(anonymous namespace)::TypeHash, true> const&) ()
 #1  0x01e4b11a in std::__detail::_Hash_code_base>>> std::pair,
 std::__detail::_Select1st, capnp::(anonymous namespace)::TypeHash,
 std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash,
 true>::_M_h1() const ()
 #2  0x01e4a976 in std::__detail::_Hash_code_base>>> std::pair,
 std::__detail::_Select1st, capnp::(anonymous namespace)::TypeHash,
 std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash,
 true>::_M_hash_code(capnp::Type const&) const ()
 #3  0x01e4a401 in std::_Hashtable>>> std::pair,
 std::allocator>>> capnp::JsonCodec::HandlerBase*> >, std::__detail::_Select1st,
 std::equal_to, capnp::(anonymous namespace)::TypeHash,
 std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash,
 std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits>>> false, true> >::find(capnp::Type const&) const ()
 #4  0x01e483a3 in std::unordered_map>>> capnp::JsonCodec::HandlerBase*, capnp::(anonymous
 namespace)::TypeHash, std::equal_to,
 std::allocator>>> capnp::JsonCodec::HandlerBase*> > >::find(capnp::Type const&) const ()
 #5  0x01e443d5 in 
 capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
 capnp::Type, capnp::JsonValue::Builder) const ()
 #6  0x01e44a6f in 
 capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
 capnp::Type, capnp::JsonValue::Builder) const ()
 #7  0x01e45a36 in 
 capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
 capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
 #8  0x01e45377 in 
 capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
 capnp::Type, capnp::JsonValue::Builder) const ()
 #9  0x01e45a36 in 
 capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
 capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
 #10 0x01e45377 in 
 capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
 capnp::Type, capnp::JsonValue::Builder) const ()
 #11 0x01e45a36 in 
 capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
 capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
 #12 0x01e45377 in 
 capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
 capnp::Type, capnp::JsonValue::Builder) const ()
 #13 0x01e45a36 in 
 capnp::JsonCodec::encodeField(capnp::StructSchema::Field,
 capnp::DynamicValue::Reader, capnp::JsonValue::Builder) const ()
 #14 0x01e45618 in 
 capnp::JsonCodec::en

Re: [capnproto] Re: Segfault during JSON encode() in v0.6.1

2018-03-06 Thread 'Kenton Varda' via Cap'n Proto
Hi Marc,

I would guess each encode() stack frame is on the order of 100 bytes.
Unless your stacks are *really* small, that doesn't seem like it could be
the problem.

It looks like the real problem has something to do with unexpectedly null
pointers appearing in the schema objects. These objects are declared as
static constants in the generated code. Here's an example:

  const ::capnp::_::RawSchema s_e682ab4cf923a417 = {
0xe682ab4cf923a417, b_e682ab4cf923a417.words, 225, d_e682ab4cf923a417,
m_e682ab4cf923a417,
8, 14, i_e682ab4cf923a417, nullptr, nullptr, { &s_e682ab4cf923a417,
nullptr, nullptr, 0, 0, nullptr }
  }

Your latest stack trace shows a case where, somehow, the second field of
this turned out null, which seems impossible.

Is it at all possible that your generated code was created with a different
version of Cap'n Proto compiler vs. the runtime library and/or headers you
are compiling against?

-Kenton

On Mon, Mar 5, 2018 at 2:34 PM, Marc Sune  wrote:

> Kenton,
>
> Is it remotely possible some of the encode() methods are consuming large
> amounts of stack? I would need to get deep into the code, but glancing
> over the code I see some parameters that _seem_ to be passed by value, and
> a few autos which I am _not sure_ whether they might perform a copy or not.
>
> Marc
>
> On Monday, March 5, 2018 at 10:43:11 PM UTC+1, Marc Sune wrote:
>>
>> Thanks Kenton,
>>
>> An update on this. I am leaning to think it is a stack overflow,
>> although I couldn't confirm 100%. The truth is that, calling the same code
>> from the main(), with the same exact objects and encoding routine, doesn't
>> crash.
>>
>> However, when the encode() method is called from a thread that is created
>> by an external library, I consistently get a SIGSEGV. I've tried the
>> typical ulimit / pthread_attr_setstacksize() without much of a success (not
>> sure why though).
>>
>> Re-arranging the code so that the encode() is called with less stack (I
>> cut 2 or 3 frames), I get past the point of libstdc++, but I consistenly
>> crash here:
>>
>> (gdb) bt
>> #0  0x009d4e46 in capnp::_::DirectWireValue::get
>> *(this=0x0)* at /home/marc/target/rootfs/include/capnp/endian.h:80
>> #1  0x01e64d37 in 
>> capnp::_::WirePointer::target(capnp::_::SegmentReader*)
>> const ()
>> #2  0x01e6a83c in capnp::_::WireHelpers::readStr
>> uctPointer(capnp::_::SegmentReader*, capnp::_::CapTableReader*,
>> capnp::_::WirePointer const*, capnp::word const*
>> , int) ()
>> #3  0x01e5e7e7 in capnp::_::PointerReader::getStruct(capnp::word
>> const*) const ()
>> #4  0x01e7fe61 in capnp::_::PointerHelpers> (capnp::Kind)3>::get(capnp::_::PointerReader, capnp::word const*) ()
>> #5  0x01e7f6ec in capnp::ReaderFor_> (kind)()>::Type 
>> capnp::AnyPointer::Reader::getAs()
>> const ()
>> #6  0x01e7de31 in capnp::schema::Node::Reader
>> capnp::readMessageUnchecked(capnp::word const*) ()
>> #7  0x01e780db in capnp::Schema::getProto() const ()
>> #8  0x01e7995f in capnp::EnumSchema::getEnumerants() const ()
>> #9  0x01e8068f in capnp::DynamicEnum::getEnumerant() const ()
>> #10 0x01e45c74 in 
>> capnp::JsonCodec::encode(capnp::DynamicValue::Reader,
>> capnp::Type, capnp::JsonValue::Builder) const ()
>>
>> Really odd... So I think it is not related to capnproto code
>>
>> Thanks
>> marc
>>
>> On Saturday, March 3, 2018 at 1:19:15 AM UTC+1, Kenton Varda wrote:
>>>
>>> Sorry, I don't really have any ideas here. The stack trace is deep in
>>> STL code for the type handler map, inside find(). If you've registered no
>>> type handlers, that map should be empty. It's hard to imagine how find() on
>>> an empty std::unordered_map could ever segfault...
>>>
>>> -Kenton
>>>
>>> On Fri, Mar 2, 2018 at 11:07 AM, Marc Sune  wrote:
>>>
 Kenton,

 On Friday, March 2, 2018 at 7:27:30 PM UTC+1, Kenton Varda wrote:
>
> Hi Marc,
>
> Do you have any custom type handlers registered via addTypeHandler()?
> Is it possible that the handler class has gone out-of-scope (been
> destroyed) by the time the encoder is executed?
>

 No, I am not using custom handlers. I am just dumping the Builder like
 this (simplified):

 template <typename T>
 void dump_msg(T& msg){
     try{
         capnp::JsonCodec enc;
         enc.setPrettyPrint(PPRINT);
         auto t = enc.encode(msg);
         fprintf(stderr, "MSG: %s\n", t.cStr());
     }catch(...){}
 }

 Where, in this case, msg is:

 147 capnp::MallocMessageBuilder builder;

 148 auto msg = builder.initRoot();

 //Fill it

 254  dump_msg(msg);

 So the builder and encoder are valid, I believe, for the entire dump.
 Moreover, valgrind would complain before the SEGFAULT if something had gone
 out of scope, and the only thing I get is the direct SEGFAULT.

 Any thoughts? I will keep trying to isolate the problem.

Re: [capnproto] Waiting for a promise from inside a server callback?

2018-03-16 Thread 'Kenton Varda' via Cap'n Proto
Hi The Cheaterman,

The trick here is that your RPC methods can return a promise rather than a
result. The method call is not considered "done" until the returned promise
completes. If you don't do this, then the method is considered to be done
immediately, and all the promises you created but didn't use are discarded
/ canceled.

In your case you are constructing several promises, one for each client.
So, you also need to join them.

def send(self, message, _context, **kwargs):
    message = message.as_builder()
    self.chatroom.messages.append(message)
    promises = []
    for client in self.chatroom.users:
        promises.append(client.send(message))
    return capnp.join_promises(promises)

Note that if you wanted to do something with the result of the
client.send() calls, you would use .then() to register a callback. You can
see some examples in the calculator server sample here:

https://github.com/capnproto/pycapnp/blob/develop/examples/calculator_server.py
https://github.com/capnproto/pycapnp/blob/develop/examples/calculator.capnp

-Kenton

On Fri, Mar 16, 2018 at 4:37 AM, The Cheaterman wrote:

> Hi everyone, I hope you're doing goodie!
>
> First, thanks a lot Kenton for all the magic! I want to use it! However,
> I'd like to use it from Python if possible hehe, but as you may know
> pycapnp is somewhat unmaintained (one thing I'd like to see it have is
> support for more than one client when wrapping a socket FD, or support for
> UDP connection strings somehow, maybe with udp: or something), and that's
> why I'm here instead - I'm probably misunderstanding how to use Capnp RPC
> more than anything else.
>
> So, let's get to the point: I'm trying to learn how to use Capnp RPC in
> Python, so I made what seemed to me like one of the simplest things to do -
> a chat program. I managed to make it work with extremely poor network
> architecture where the client polls the server (several times per second)
> for new messages to be received. The code is here:
> https://github.com/Cheaterman/capnchat/ - the master branch still uses
> the "polling" protocol, while the "push_messages" branch attempts to push
> the messages directly to the client.
>
> The issue happens here:
> https://github.com/Cheaterman/capnchat/blob/push_messages/server.py#L69
> (you probably already guessed I was trying something like that from the
> title), which attempts to wait on this promise, which corresponds to this
> part of the proto (implemented here in the client - BTW I have no idea if
> having this ".Server" class on the client is a good idea).
>
> As you already mentioned in other threads, the reason this doesn't work is
> that I can't make the main event loop wait for things (AIUI), because
> nothing would be processed in the meantime; and callbacks are executed in
> the event loop. The problem is, if I don't wait() on that promise, it's
> never actually executed, so I don't quite know what I should be doing
> here... You also mentioned in other threads that I might want to add my
> promise to the TaskSet or something? I don't think this is exposed in
> Python, but I imagine wrapping it wouldn't be an issue (even with my
> limited knowledge of modern C++ and how to use it from Cython).
>
> In any case, thanks in advance for clearing out any misunderstanding I may
> have as to how I am supposed to implement that kind of things. If things go
> well, I'd like to use Capnp RPC to implement a multiplayer games protocol,
> please tell me if that makes sense at all or if I should stop right there
> :-P
>
> Thanks for reading!
>



Re: [capnproto] Getting client IP... Somehow?

2018-03-19 Thread 'Kenton Varda' via Cap'n Proto
If you've already managed to get the information to your bootstrap object,
then the right thing to do from there is to have the bootstrap object add
wrappers to other objects which add knowledge of the IP to them.

For example:

interface Bootstrap {
  getRoom @0 (name :Text) -> (room :Room);
}

The getRoom() method might do something like:

getRoom(this, name):
  room = rooms.find(name)
  return RoomWrapper(room, this.client_ip)

RoomWrapper is a class that implements the `Room` RPC interface for a
specific client, with knowledge of that client's IP. Whenever it receives
an RPC, it can call into the wrapped room object and pass along the IP
address as well.
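
In C++ terms, a minimal sketch of such a wrapper might look like the
following. The Room interface, its send() method, and the senderIp field are
all hypothetical stand-ins for whatever your chat schema actually defines:

// Hypothetical schema: interface Room { send @0 (text :Text, senderIp :Text); }
class RoomWrapper final: public Room::Server {
public:
  RoomWrapper(Room::Client inner, kj::String clientIp)
      : inner(kj::mv(inner)), clientIp(kj::mv(clientIp)) {}

  kj::Promise<void> send(SendContext context) override {
    // Forward the call to the real room, attaching this client's IP.
    auto req = inner.sendRequest();
    req.setText(context.getParams().getText());
    req.setSenderIp(clientIp);
    return req.send().ignoreResult();
  }

private:
  Room::Client inner;
  kj::String clientIp;
};

The same shape works in any language binding: the wrapper holds the real
capability plus the per-client context, and every method forwards.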

This is a common design pattern in Cap'n Proto. Since capnp is
capability-based, we like to avoid "ambient authority" (information about
the call context that is not expressed in the parameters).

As for how the bootstrap interface itself gets the info: In C++ there's a
concept of a BootstrapFactory which receives a callback each time a client
connects, and receives the identity of the client. I imagine this isn't
exposed yet in Python, but this would be the way to do it.

-Kenton

On Mon, Mar 19, 2018 at 2:59 PM, The Cheaterman wrote:

> Hi everyone, I hope you're doing great!
>
> As you may already know, I'm doing a small chat system to familiarize
> myself with Capnp before I do more ambitious things. I would like to have
> some sort of way to implement a banlist on the chat server. I do realize
> the whole point of capabilities is to have the same behavior no matter
> where the capability is called from. However, I feel like users
> (administrators) of server software are used to filtering users by IP (when
> it comes to that). Alternatively, I'd like to find something unique (but
> persistent for a given computer - OS install? hardware? not sure) I could
> send during the handshake, to filter undesired users. Basically, I feel
> like I need some sort of persistent authentication system that's relatively
> hard to refresh, if I can't get access to the IP:port of the user even in
> the bootstrap object. I currently managed to hack pycapnp to get a method
> called on the bootstrap object when a client connects with IP and port as
> arguments, but even if I store them I have no way of knowing which client
> calls a given callback (which is a design choice I imagine).
>
> I'd like to know your thoughts on the subject :-) thanks a lot in advance!
>



Re: [capnproto] Next stable release

2018-05-03 Thread 'Kenton Varda' via Cap'n Proto
Hi Vitaliy,

Is there a particular feature that's landed in master that you're looking
for? Most of the development lately has been on the HTTP library, but I'm
not sure if I'm ready to call that API "stable".

-Kenton

On Wed, May 2, 2018 at 2:43 PM,  wrote:

> Hello.
> When is the next stable release planned?
>



Re: [capnproto] Considerations for larger, recursive datasets.

2018-05-04 Thread 'Kenton Varda' via Cap'n Proto
Hi Stefan,

On Fri, May 4, 2018 at 11:08 AM,  wrote:

> Hello!
>
> I'm interested in using Cap'n Proto for serializing a large-ish quad tree
> data structure that is used for querying geospatial data. I have a few
> questions that I was hoping you could help me out with:
>
>- Have you come across any message size limitations or performance
>issues with larger data in Cap'n Proto? I'd love to be able to represent
>indices on the order of 100 MB+.
>
As long as you are mmap()ing the file, the total size of the file will
have no bearing on performance -- the only thing that matters is how much
of the object tree you actually traverse.

Note that mmap() doesn't allow for any userspace compression. If you find
you're spending a lot of time in disk I/O, you may want to enable
compression at the filesystem level.
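
A minimal sketch of the mmap() approach, with error handling omitted and
QuadTree standing in for whatever your root struct type is; note that for
100 MB+ messages you will also want to raise the default traversal limit:

#include <capnp/message.h>
#include <capnp/serialize.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int fd = open("index.capnp.bin", O_RDONLY);
struct stat st;
fstat(fd, &st);
const void* mapping = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

capnp::ReaderOptions options;
options.traversalLimitInWords = st.st_size / sizeof(capnp::word);

// Zero-copy reader over the mapped bytes; only the parts you touch are read.
capnp::FlatArrayMessageReader reader(
    kj::arrayPtr(reinterpret_cast<const capnp::word*>(mapping),
                 st.st_size / sizeof(capnp::word)),
    options);
auto root = reader.getRoot<QuadTree>();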

>
>- In a post made on 8/1/14 ("Recursive Schemas"), you mentioned that
>there can only be a single pointer to other structs -- is this still the
>case? I would love to be able to have a pointer to parent nodes so that I
>can traverse up through the quad tree.
>
Sorry, but capnp is still a tree structure, not a graph. You'll need to
remember parent node pointers on a stack as you traverse.

>
>- How are structs laid out in memory (via arena allocation) when
>working with nested structs? Is it based on the order of "init" statements?
>I'd like to maintain a breadth-first layout of nested structs in my
>serialized output to maintain locality of nodes at a specific depth -- do I
>just need to initialize the structs in the order at which I want them laid
>out in the serialized representation?
>
Yes, they will be ordered in memory in the order in which they were
allocated (which happens when you call "init").

Note you may want to tune the constructor parameters to
MallocMessageBuilder to make sure you are allocating large segments, to
avoid fragmentation near the start of the message. Or you may want to
implement your own MessageBuilder subclass.
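
For example, a sketch of the constructor parameters (the segment size here is
just an illustrative number):

#include <capnp/message.h>

// Ask for a large first segment (sized in 8-byte words) and keep later
// segments the same fixed size instead of letting them grow heuristically.
capnp::MallocMessageBuilder builder(
    1024 * 1024,                             // first segment: 1M words = 8 MB
    capnp::AllocationStrategy::FIXED_SIZE);  // later segments: same size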

> Are there any other considerations I should take into account?

I'm assuming this is a data structure that you'll build once, and then use
many times without modifying it further. If so, it should work well. If you
need to modify it continuously, Cap'n Proto isn't very good at that right now.

-Kenton



Re: [capnproto] how to use the function named writePackedMessage?

2018-05-14 Thread 'Kenton Varda' via Cap'n Proto
Hi Max,

You'll need to write a custom implementation of the kj::OutputStream
interface, then pass it to `writePackedMessage()`. Or, you could use
kj::ArrayOutputStream or kj::VectorOutputStream, if you just want to write
to an in-memory byte array.
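
For instance, a minimal sketch using kj::VectorOutputStream (the helper's name
is just illustrative):

#include <capnp/message.h>
#include <capnp/serialize-packed.h>
#include <kj/io.h>

// Pack a message into an in-memory byte array instead of writing a file.
kj::Array<kj::byte> packToBytes(capnp::MessageBuilder& message) {
  kj::VectorOutputStream out;
  capnp::writePackedMessage(out, message);
  // getArray() points into the stream's own buffer, so copy it out.
  return kj::heapArray(out.getArray());
}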

-Kenton

On Mon, May 7, 2018 at 5:34 AM, max <389167...@qq.com> wrote:

>  I am reading the code,
> and I can write a message to a file, but how can I serialize the message
> to a stream? Using writePackedMessage?
> I just want to serialize the message, but not write the serialized
> message to a file.
> Can you help me?
>



Re: [capnproto] Thinking of building a size profiler -- thoughts, ideas?

2018-05-18 Thread 'Kenton Varda' via Cap'n Proto
Hi,

This sounds neat. I'm not aware of anyone having built such a tool yet.

It should indeed be straightforward using the Dynamic API, or maybe the
"Any" API (AnyPointer/AnyList/AnyStruct), which gives you a lower-level
view of the object tree.
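
A minimal sketch of that traversal using the Dynamic API, only tallying struct
fields and leaving lists and blobs as an exercise (the path bookkeeping mirrors
what you described):

#include <capnp/dynamic.h>
#include <capnp/schema.h>
#include <cstdint>
#include <map>
#include <string>

// Tally, per field path, the total size in words of each struct reachable
// from that path.
void profile(capnp::DynamicStruct::Reader node, const std::string& path,
             std::map<std::string, uint64_t>& wordsByPath) {
  for (auto field: node.getSchema().getFields()) {
    if (!node.has(field)) continue;
    auto value = node.get(field);
    if (value.getType() == capnp::DynamicValue::STRUCT) {
      auto child = value.as<capnp::DynamicStruct>();
      std::string childPath = path + "." + field.getProto().getName().cStr();
      wordsByPath[childPath] += child.totalSize().wordCount;
      profile(child, childPath, wordsByPath);
    }
  }
}

The root reader can come from reader.getRoot<capnp::DynamicStruct>(schema),
where the schema is loaded at runtime, e.g. via capnp::SchemaLoader.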

-Kenton

On Mon, May 14, 2018 at 8:46 AM,  wrote:

> Hi,
> For the project I'm working on I need to distribute some zipped capnproto
> data. I'd like the data itself to be fairly small, but in particular I'd
> like the result of zipping it to be the smallest I can absolutely make it.
>
> I used to use protocol buffers and implemented a size profiler for those.
> It basically traversed the entire structure while keeping track of the path
> that led to each point and counted the size of data encountered against a
> fixed-size suffix of the path. It was pretty simple but really useful in
> identifying where the problem points were. Now I've switched to capnproto
> and am considering doing the same for that, possibly as a stand-alone tool
> if I have time. I'm assuming it won't be all that hard to do with the
> reflection api. The plan then is to use it separately but in particular to
> combine it with a zip profiler I already have to find parts of the data
> that don't compress well.
>
> My question is, is this something anyone has already done or has thought
> about so they have any input into how such a tool should work? Also, I
> wonder if this is even something that might be useful to anyone else.
>
>
> c
>


