Re: [proto] Using a derived class as terminals in Boost.proto

2016-04-14 Thread Eric Niebler
Proto grammars and transforms handle this better than evaluators, which are
deprecated. It would pay to look into some examples that use transforms.
Sorry, that's all the advice I have time for at the moment.

\e
On Apr 14, 2016 10:33 AM, "Mathias Gaunard" wrote:

> I'd try to use IsVector.
> I'm not sure how to do this with a grammar (maybe someone can pitch in)
> but you could do something like this
>
> enable_if< IsVector<typename proto::result_of::value<Expr>::type> >
>
> On 14 April 2016 at 18:04, Frank Winter wrote:
>
>> I made some progress. If I specialize struct VectorSubscriptCtx::eval
>> with Vector10, like
>>
>>
>> struct VectorSubscriptCtx
>> {
>>     VectorSubscriptCtx(std::size_t i) : i_(i) {}
>>
>>     template<typename Expr, typename EnableIf = void>
>>     struct eval
>>       : proto::default_eval<Expr, VectorSubscriptCtx const>
>>     {};
>>
>>     template<typename Expr>
>>     struct eval<
>>         Expr
>>       , typename boost::enable_if<
>>             proto::matches<Expr, proto::terminal<Vector10> >
>>         >::type
>>     >
>>     {
>>         //..
>>     };
>>
>>     std::size_t i_;
>> };
>>
>> then it works (it was previously specialized with Vector). It also works
>> when using the Proto wildcard _ (match anything), like
>>
>> template<typename Expr>
>> struct eval<
>>     Expr
>>   , typename boost::enable_if<
>>         proto::matches<Expr, proto::terminal<proto::_> >
>>     >::type
>> >
>>
>>
>> However, I feel this is not good style. Can this be expressed with the
>> is_base_of trait instead?
>>
>>
>>
>>
>>
>> On 04/14/2016 10:10 AM, Mathias Gaunard wrote:
>>
>>> On 14 April 2016 at 14:43, Frank Winter wrote:
>>>
>>> Hi all!
>>>
>>> Suppose you'd want to implement a simple EDSL (Embedded Domain
>>> Specific Language) with Boost.proto with the following requirements:
>>>
>>>  - Custom class 'Vector' as terminal
>>>  - Classes derived from 'Vector' work as terminals too, e.g. Vector10
>>>
>>> [...]
>>>
>>> template<typename T>
>>> struct IsVector
>>>    : mpl::false_
>>> {};
>>>
>>>
>>> template<>
>>> struct IsVector< Vector >
>>>: mpl::true_
>>> {};
>>>
>>>
>>> Surely this should be true for all types derived from Vector.
>>>
>>> template<typename T, typename EnableIf = void>
>>> struct IsVector
>>>    : mpl::false_
>>> {};
>>>
>>> template<typename T>
>>> struct IsVector<T, typename enable_if< is_base_of<Vector, T> >::type>
>>>    : mpl::true_
>>> {};
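
[Editor's note] The quoted trait can be checked in isolation using the C++11 standard-library equivalents of the Boost components (`std::enable_if`, `std::is_base_of`); the class names below are just placeholders for the poster's types:

```cpp
#include <cassert>
#include <type_traits>

struct Vector {};
struct Vector10 : Vector {};   // a derived "working terminal"
struct NotAVector {};

// Primary template: false by default. The second parameter exists only
// as a SFINAE hook for the partial specialization below.
template<typename T, typename Enable = void>
struct IsVector : std::false_type {};

// True for Vector and for anything publicly derived from it.
template<typename T>
struct IsVector<T,
    typename std::enable_if<std::is_base_of<Vector, T>::value>::type>
  : std::true_type {};
```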
>>>
>>>
>>> ___
>>> proto mailing list
>>> proto@lists.boost.org
>>> http://lists.boost.org/mailman/listinfo.cgi/proto
>>>
>>>
>>
>>
>
>
>
>


Re: [proto] Multi assignment expressions for vector DSEL

2014-01-06 Thread Eric Niebler
On 01/06/2014 07:25 AM, Ole Svensson wrote:
> Hi,
> 
> for my vector expression DSEL I would like to be able to write multi 
> assignment expressions like
> 
> y = x = 3.0 * z;
> 
> that effectively translate to 
> 
> for(i = 0; i < n; ++i) {
>     y[i] = x[i] = 3.0 * z[i];
> }
> 
> Currently, my proto-fied vector class has an operator=() that calls the 
> respective context to evaluate the RHS of an assignment operation. With multi 
> assignment expressions, I obviously have to include the assignment operator 
> into my grammar. If I do this, how can I execute a context when I just write 
> "y = x = 3.0 * z;"?
> 
> Thank you very much!

Your chosen syntax is a little problematic because for any given
assignment operation, you can't tell statically if it's the left-most
(and should trigger evaluation) or not. One possibility would be to have
assignment return a temporary object, and have the evaluation happen in
the destructor. I don't encourage doing non-trivial work in a destructor
however. Special care would be needed to keep exceptions from leaking out.
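
[Editor's note] A minimal, self-contained sketch of the proxy idea Eric describes (names like `vec` and `assign_proxy` are invented, not library code; as he notes, real code would also have to keep exceptions from escaping the destructor, which this sketch omits):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// operator= returns a temporary proxy; each outer assignment in the chain
// takes over the targets and disarms the inner proxy, so the element-wise
// loop runs exactly once, in the destructor of the left-most proxy.
struct vec
{
    std::vector<double> data;
    explicit vec(std::size_t n) : data(n) {}

    struct assign_proxy
    {
        std::vector<vec*> targets;
        std::vector<double> rhs;   // stands in for the RHS expression tree
        bool armed;

        assign_proxy(vec& t, std::vector<double> r)
          : targets{&t}, rhs(std::move(r)), armed(true) {}

        assign_proxy(assign_proxy&& o)
          : targets(std::move(o.targets)), rhs(std::move(o.rhs)), armed(o.armed)
        { o.armed = false; }

        ~assign_proxy()
        {
            if (!armed) return;
            // The generated loop: y[i] = x[i] = rhs[i];
            for (std::size_t i = 0; i != rhs.size(); ++i)
                for (vec* t : targets)
                    t->data[i] = rhs[i];
        }
    };

    assign_proxy operator=(std::vector<double> r)      // x = expr
    { return assign_proxy(*this, std::move(r)); }

    assign_proxy operator=(assign_proxy&& p)           // y = (x = expr)
    {
        p.armed = false;                               // inner proxy won't fire
        assign_proxy q(*this, std::move(p.rhs));
        q.targets.insert(q.targets.end(), p.targets.begin(), p.targets.end());
        return q;
    }
};
```

With this, `y = x = std::vector<double>{...};` runs the loop once, when the outermost proxy dies at the end of the full expression.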

I recommend finding a different syntax.

Eric



Re: [proto] Clang compile times

2013-11-21 Thread Eric Niebler
On 11/20/2013 02:36 AM, Bart Janssens wrote:
> Hello,
> 
> I recently upgraded the OS and XCode on my Mac, resulting in the
> following clang version:
> Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn).
> The previous version was "Apple LLVM version 4.2 (clang-425.0.24)
> (based on LLVM 3.2svn)"
> 
> The new version is about 4 times slower when compiling proto code, but
> only uses about half as much RAM. Does anyone here know if this may be
> due to some clang setting that I can revert back? I'd like to use more
> RAM again and compile faster.

Ugh, this is terrible news. If you have a self-contained repro
(preprocessed translation unit), please file a clang bug. They'll take a
regression of this magnitude seriously.

Thanks,
Eric


Re: [proto] Holding children by copy or reference

2013-10-01 Thread Eric Niebler
On 10/1/2013 12:05 AM, Bart Janssens wrote:
> On Tue, Oct 1, 2013 at 12:59 AM, Mathias Gaunard wrote:
>> To clarify, in terms of performance, from best-to-worst:
>> 1) everything by reference: no problem with performance (but problematic
>> dangling references in some scenarios)
>> 2) everything by value: no CSE or other optimizations
>> 3) nodes by value, terminals by reference: no CSE or other optimizations +
>> loads when accessing the terminals
> 
> Just out of interest: would holding the a*b temporary node by rvalue
> reference be possible and would it be of any help?

Possible in theory, yes. In practice, it probably doesn't work since
proto-v4 is not C++11 aware. But even if it worked, it wouldn't solve
anything. Rvalue refs have the same lifetime issues that (const) lvalue
refs have. The temporary object to which they refer will not outlive the
full expression.
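
[Editor's note] Eric's point can be checked with a few lines of plain C++ (the `Temp`/`Node` names are invented for illustration): an rvalue reference stored inside a node does not extend the referred-to temporary's lifetime past the full expression.

```cpp
static int live_temporaries = 0;
struct Temp {
    Temp()  { ++live_temporaries; }
    ~Temp() { --live_temporaries; }
};

// A node that holds its child by rvalue reference, as proposed.
struct Node { Temp&& child; };

// The temporary created at the call site dies at the end of the full
// expression; the rvalue reference inside Node does not keep it alive.
Node wrap(Temp&& t) { return Node{static_cast<Temp&&>(t)}; }
```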

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Holding children by copy or reference

2013-09-30 Thread Eric Niebler
On 9/30/2013 11:08 AM, Mathias Gaunard wrote:
> On 30/09/13 08:01, Eric Niebler wrote:
> 
>>> Therefore, to avoid performance issues, I'm considering moving to always
>>> using references (with the default domain behaviour), and relying on
>>> BOOST_FORCEINLINE to make it work as expected.
>>
>> Why is FORCEINLINE needed?
> 
> The scenario is
> 
> terminal a, b, c, r;
> 
> auto tmp = a*b*c;
> r = tmp + tmp;
> 
> Assuming everything is held by reference, when used in r, tmp will refer
> to a dangling reference (the a*b node).
> 
> If everything is inlined, the problem may be avoided because it doesn't
> require things to be present on the stack.

Yikes! You don't need me to tell you that's UB, and you really shouldn't
encourage people to do that.

You can independently control how intermediate nodes are captured, as
opposed to how terminals are captured. In this case, you want a,b,c held
by reference, and the temporary "a*b" to be held by value. Have you
tried this, and still found it to be slow?
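
[Editor's note] The capture policy Eric suggests can be illustrated with a tiny hand-rolled expression template (this is not Proto; `Terminal`/`Mult`/`term`/`mult` are invented names): terminals refer to their data, while intermediate nodes are embedded by value in their parents, so a stored sub-expression does not dangle.

```cpp
// Terminal held by reference to the user's data.
struct Terminal {
    double const& value;
    double eval() const { return value; }
};

// Intermediate node: children held BY VALUE, so "auto tmp = mult(a, b);"
// keeps its own copy of the inner node instead of a dangling reference.
template<typename L, typename R>
struct Mult {
    L left;
    R right;
    double eval() const { return left.eval() * right.eval(); }
};

inline Terminal term(double const& d) { return Terminal{d}; }

template<typename L, typename R>
Mult<L, R> mult(L l, R r) { return Mult<L, R>{l, r}; }
```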

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Holding children by copy or reference

2013-09-30 Thread Eric Niebler
On 9/30/2013 1:54 PM, Mathias Gaunard wrote:
> Hi,
> 
> A while ago, I recommended to set up domains so that Proto contains its
> children by value, except for terminals that should either be references
> or values depending on the lvalue-ness. This helps avoid dangling
> reference problems when storing expressions or using 'auto'.
> I also said there was no overhead to doing this in the case of Boost.SIMD.
> 
> After having done more analyses with more complex code, it appears that
> there is indeed an overhead to doing this: it confuses the alias
> analysis of the compiler which becomes unable to perform some
> optimizations that it would otherwise normally perform.
> 
> For example, an expression like this:
> r = a*b + a*b;
> 
> will not anymore get optimized to
> tmp = a*b;
> r = tmp + tmp;

Interesting!

> If terminals are held by reference, the compiler can also emit extra
> loads, which it doesn't do if the the terminal is held by value or if
> all children are held by reference.
> 
> It is a bit surprising that this affects compiler optimizations like
> this, but it is replicable on both Clang and GCC, with all versions I
> have access to.

It's very surprising. I suppose it's because the compiler can't assume
equational reasoning holds for some user-defined type. That's too bad.

> Therefore, to avoid performance issues, I'm considering moving to always
> using references (with the default domain behaviour), and relying on
> BOOST_FORCEINLINE to make it work as expected.

Why is FORCEINLINE needed?

> Of course this has the caveat that if the force inline is disabled (or
> doesn't work), then you'll get segmentation faults.

I don't understand why that should make a difference. Can you clarify? A
million thanks for doing the analysis and reporting the results, by the way.

As an aside, in Proto v5, terminals and intermediate nodes are captured
as you describe by default, which means perf problems. I still think
this is the right default for C++11, and for most EDSLs. I'll have to be
explicit in the docs about the performance implications, and make it
easy for people to get the by-ref capture behavior when they're ok with
the risks.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Proto v5

2013-06-17 Thread Eric Niebler
On 6/16/2013 11:59 AM, Agustín K-ballo Bergé wrote:
> On 15/06/2013 10:59 p.m., Eric Niebler wrote:
>>> >- Some specific uses of Proto actions in constant expressions fail. GCC
>>> >reports an ambiguity with ref-qualifiers in the following scenario:
>>> >
>>> > struct foo
>>> > {
>>> > int& bar() &
>>> > { return _bar; }
>>> > //~ int&& bar() &&
>>> > //~ { return static_cast<int&&>(_bar); }
>>> > constexpr int const& bar() const &
>>> > { return _bar; }
>>> > constexpr int const&& bar() const &&
>>> > { return static_cast<int const&&>(_bar); }
>>> >
>>> > int _bar;
>>> > };
>>> >
>>> > foo().bar();
>>> >
>>> >   For that to work correctly, the 4 overloads need to be provided.
>> Huh. According to the standard, or according to gcc? I won't work around
>> a bug in a compiler without filing it first.
>>
> 
> I got a thorough explanation on the subject from this SO question:
> http://stackoverflow.com/questions/17130607/overload-resolution-with-ref-qualifiers
> . The answer confirms this is a GCC bug, and hints to a "better
> workaround" that would retain constexpr functionality. I may pursue this
> alternative workaround if I ever get to play with the constexpr side of
> Proto v5 (that is, if I use it in a place other than next to an `omg` or
> `srsly` identifier :P).
> 
> Another GCC bug (as far as I understand) is that instantiations within
> template arguments to a template alias are completely ignored when the
> aliased type does not depend on those, thus breaking SFINAE rules. I
> have attached a small code sample that reproduces this issue.

Thanks for your research. When I get a chance, I'll check gcc's bugzilla
to see if they have been filed already, unless you beat me to it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Proto v5

2013-06-15 Thread Eric Niebler
On 13-06-15 03:40 PM, Agustín K-ballo Bergé wrote:
> On 15/06/2013 03:43 a.m., Agustín K-ballo Bergé wrote:
>> On 14/06/2013 11:06 p.m., Eric Niebler wrote:
>>>
>>> (Sorry for the top-posting. I'm away from my computer.)
>>>
>>> The repository *is* compilable, if your compiler is clang built from
>>> trunk. I suspect there are bugs in Proto, gcc, and clang, and sorting
>>> it all out will be fun.
>>>
>>> Thanks for your patch. I'll apply it as soon as I can.
>>>
>>> Eric
>>
>> That's the green light I was expecting to start porting Proto v5 to GCC.
>> I just got the first test compiling and passing successfully
>> (action.cpp). I have pushed all the changes to my fork of the
>> repository, so if you are interested keep an eye on it.
>>
>> Even after disabling the substitution_failure machinery (to get the full
>> instantiation spew), going through the compiler output is mind
>> bending... My respects to you, sir!
>>
>>>
> 
> The fork of Proto v5 at https://github.com/K-ballo/proto-0x correctly
> compiles and passes (almost*) all test cases and examples with GCC
> 4.8.1.

Wow! This is huge.

> There are two caveats:
> 
> - GCC does not allow the use of `this` within noexcept specifications,
> so those are disabled. This is a bug in GCC (reported by Dave Abrahams
> here: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52869)

Thanks. There is also this one that I filed (and I see you worked around
with mpl::identity): http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57384

> - Some specific uses of Proto actions in constant expressions fail. GCC
> reports an ambiguity with ref-qualifiers in the following scenario:
> 
> struct foo
> {
> int& bar() &
> { return _bar; }
> //~ int&& bar() &&
> //~ { return static_cast<int&&>(_bar); }
> constexpr int const& bar() const &
> { return _bar; }
> constexpr int const&& bar() const &&
> { return static_cast<int const&&>(_bar); }
> 
> int _bar;
> };
> 
> foo().bar();
> 
>   For that to work correctly, the 4 overloads need to be provided.
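
[Editor's note] The four-overload pattern from the quoted code, in standalone compilable form (`constexpr` dropped, and the string returns are only there to label which overload gets chosen):

```cpp
#include <string>
#include <utility>

struct foo {
    int _bar = 42;
    std::string bar() &        { return "lvalue"; }
    std::string bar() &&       { return "rvalue"; }
    std::string bar() const &  { return "const lvalue"; }
    std::string bar() const && { return "const rvalue"; }
};
```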

Huh. According to the standard, or according to gcc? I won't work around
a bug in a compiler without filing it first.

> This,
> in turn, means that non-const rvalues (?) cannot be used in constant
> expressions since constexpr implies const (in C++11, not anymore in
> C++14). Anyway, this is more than I can digest at the moment.
> 
> (*) the bit failing to compile is a use of Proto actions as a constant
> expression [the `omg` case at everywhere.cpp], due to the issue with
> ref-qualifier overloads.

I see you included all of these fixes in your one pull request. I'll
need to go through this carefully and file compiler bugs where
necessary. I also want to use BOOST_WORKAROUND if we actually include
any temporary workarounds in the code. I say "temporary" because I have
every intention of ripping out all workarounds once the bugs actually
get fixed. I intend to keep this code as clean as possible.

A million thanks for your work. It's a huge help.

-- 
Eric Niebler
BoostPro Computing
www.boostpro.com





Re: [proto] Proto v5

2013-06-14 Thread Eric Niebler

I've made no effort so far to port Proto v5 to any compiler other than clang. 
I'm sure it would be a big job. I welcome any contributions. Otherwise, it'll 
get ported eventually, but probably not before I get the API settled.

Eric


Sent via tiny mobile device

-Original Message-
From: Agustín K-ballo Bergé 
Sender: "proto"
Date: Fri, 14 Jun 2013 16:19:23
To: Discussions about Boost.Proto and DSEL design
Reply-To: "Discussions about Boost.Proto and DSEL design"

Subject: [proto] Proto v5

Hi,

I watched the C++Now session about Proto v5, and now I want to play with 
it. I do not have the luxury of a Clang build from trunk, but I do have 
GCC 4.8.1 which should do pretty well.

I cloned the repository at https://github.com/ericniebler/proto-0x/. 
After jumping through a few hoops, I am now left with tons of instances of the 
same errors:

- error: no type named 'proto_grammar_type' in ...
  using type = typename Ret::proto_grammar_type(Args...);

- error: no type named 'proto_action_type' in ...
  using type = typename Ret::proto_action_type(Args...);

For at least some cases, those are clear errors, since the Ret type 
represents an empty struct (e.g. `not_`).

What is going on? What should I be doing to get Proto v5 to compile?

Regards,

-- 
Agustín K-ballo Bergé.-
http://talesofcpp.fusionfenix.com


Re: [proto] problems with proto::matches

2012-12-13 Thread Eric Niebler
On 12/13/2012 4:51 AM, Thomas Heller wrote:
> Hi,
> 
> I recently discovered a behavior which I find quite odd:
> proto::matches<Expression, Grammar>::type fails when Expression is not a
> proto expression. I would have expected that it just returns false in
> this case. What am I missing? A patch is attached for what I think would
> be a better behavior of that meta-function.

Hi Thomas,

Thanks for the patch. Pros and cons to this. Pro: it works in more
situations, including yours. (Could you tell me a bit about your
situation?) Also, the implementation is dead simple and free of extra
TMP overhead.

Cons: Someone might expect a non-Proto type to be treated as a terminal
of that type and be surprised at getting a false where s/he expected
true (a fair assumption since Proto treats non-expressions as terminals
elsewhere; e.g., in its operator overloads). It slightly complicates the
specification of matches. It is potentially breaking in that it changes
the template arity of proto::matches. (Consider what happens if someone
is doing mpl::quote2<proto::matches>.)

I'm inclined to say this is not a bug and that it's a prerequisite of
matches that Expression is a proto expression. If you want to use it
with types that aren't expressions, you can already do that:

  template<typename Expr, typename Grammar>
  struct maybe_matches
    : mpl::if_<
          proto::is_expr<Expr>
        , proto::matches<Expr, Grammar>
        , mpl::false_
      >::type
  {};

Would the above work for you? I realize that's more expensive than what
you're doing now. :-(
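
[Editor's note] The lazy branch selection that makes this work can be demonstrated without Boost (the `some_expr`/`matches_impl`/`is_expr` names below are invented stand-ins for Proto's components): the rejected branch is only named, never instantiated.

```cpp
#include <type_traits>

// Local void_t so this also builds pre-C++17.
template<typename...> struct make_void { typedef void type; };
template<typename... Ts> using void_t = typename make_void<Ts...>::type;

// Stand-in for a Proto expression type.
struct some_expr { using proto_tag = void; using match_result = std::true_type; };

// Hard-errors for non-expressions, like proto::matches does.
template<typename T>
struct matches_impl : T::match_result {};

// Detection idiom standing in for proto::is_expr.
template<typename T, typename = void>
struct is_expr : std::false_type {};
template<typename T>
struct is_expr<T, void_t<typename T::proto_tag>> : std::true_type {};

// The conditional picks a branch *type* first; only the chosen branch is
// instantiated by the derivation, so matches_impl<int> is never touched.
template<typename T>
struct maybe_matches
  : std::conditional<is_expr<T>::value, matches_impl<T>, std::false_type>::type
{};
```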

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Transform result_of nightmare and preserving terminal identity

2012-10-31 Thread Eric Niebler
On 10/31/2012 12:30 PM, Agustín K-ballo Bergé wrote:
> On 16/10/2012 03:50 p.m., Agustín K-ballo Bergé wrote:
>> On 16/10/2012 02:22 a.m., Eric Niebler wrote:
>>> Hi Agustín,
>>>
>>> This is just a quick note to let you know that I'm currently at the
>>> standard committee meeting in Portland, and that I'll be unable to look
>>> at this until I get back next week.
>>>
>>
>> Thank you for letting me know.
> 
> For future reference, my issue was resolved at StackOverflow. You can
> find it here
> http://stackoverflow.com/questions/13146537/boost-proto-and-complex-transform

Heh, answered by me! Funny, I thought Bart's solution on this list had
answered your question, so I didn't come back to it.

> Preliminary tests for 1 evaluations of a simple expression `p = q
> + r * 3.f` where p, q and r are geometric vectors of 3 ints give the
> following promising times:
> 
> Regular: 1.15s
> Proto: 1.2s
> Hand-Unrolled: 0.39s
> Proto-Unrolled: 0.8s
> 
> Proto expression build and optimization times are not taken into
> account. There is a considerable number of expression copies made by the
> expression optimization that cannot be avoided by the compiler. 

Expression copies ... during expression evaluation? I wonder why that's
necessary.

> I will
> continue my research by implementing a custom evaluation context that
> does this optimization 'on the fly', without actually modifying the
> expression.

Evaluation contexts are weaker than transforms. If it can't be done with
a transform, it can't be done with a context. I can't tell from the code
fragment what exactly you're doing with the transforms you've written,
or whether they can be improved.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Transform result_of nightmare and preserving terminal identity

2012-10-15 Thread Eric Niebler
On 10/13/2012 4:20 PM, Agustín K-ballo Bergé wrote:
> Hi All,
> 
> I'm experimenting with Proto to build a DSEL that operates on geometric
> vectors. I'm trying to write a transform that would take an assign
> expression and unroll it component wise. For instance, I want to replace

Hi Agustín,

This is just a quick note to let you know that I'm currently at the
standard committee meeting in Portland, and that I'll be unable to look
at this until I get back next week. Sorry for the delay. Maybe
someone else on this list might be able to help (nudge!). You might also
pose this question on stackoverflow.com.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Joining Proto List

2012-08-18 Thread Eric Niebler
(Sorry for the delay. I've been traveling.)

Hi John, I'm sure many on this list are familiar with FC++. You might
even be able to enlist help here to add some of FC++ to Phoenix. Why
don't you describe what you have in mind, or, if you have specific
questions about Proto or Phoenix, you could ask them here.

While Bartosz's articles are very interesting and educational, I don't
think they'll help you learn Proto. A better start would be the series
of articles I wrote for cpp-next.com, which start here:

http://cpp-next.com/archive/2010/08/expressive-c-introduction/

Welcome,
Eric


On 8/17/2012 2:08 AM, Fletcher, John P wrote:
> Eric suggested that I join this list.
> 
>  
> 
> I have been working for some years on FC++ (the old web site died
> recently unfortunately but there is a link to it on
> http://c2.com/cgi/wiki?FunctoidsInCpp ).  I have extended it a lot and
> also worked on a version using concepts to do the return type analysis,
> so I was sad when concepts were dropped from C++11 and hope they come
> back soon.
> 
>  
> 
> I have linked FC++ and Boost Lambda in the past and was working on doing
> the same with Boost Phoenix when it became necessary to learn about
> Boost Proto as well.  So here I am.
> 
>  
> 
> I have also been looking at some articles by Bartosz Milewski about the
> implementation of compile time Monads in C++, which seem to be related
> to Proto.  I have had a look around here and not seen anything that
> obviously relates to that.
> 
>  
> 
> John Fletcher
> 
>  
> 
> Dr John P. Fletcher Tel: (44) 121 204 3389 (direct line), FAX: (44) 121
> 204 3678
> 
> Chemical Engineering and Applied Chemistry (CEAC),
> 
> formerly Associate Dean - External Relations,
> 
> School of Engineering and Applied Science (EAS),
> 
> Aston University, Aston Triangle, BIRMINGHAM B4 7ET  U.K.  
> 
>  
> 
>  
> 
> 
> 

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] fold_tree and grammar using external_transforms and state

2012-07-27 Thread Eric Niebler
On 7/27/2012 12:19 AM, Joel Falcou wrote:
> On 27/07/2012 08:11, Eric Niebler wrote:
>> Naming is becoming an issue, though. We already have proto::transform.
>> You'd be adding proto::functional::transform that would be totally
>> unrelated. I think I screwed up with the namespaces. It should probably
>> be proto::functional::fusion::transform. Urg.
> 
> Well, I guess this is a breaking change :s

I could import the existing stuff into proto::functional for back-compat.

> What I need is maybe more generic as I need to apply an arbitrary
> function with an arbitrary number of parameters, the first being the
> flattened tree, the others being whatever:
> 
> transform( f, [a b c d], stuff, thingy )
> => [f(a,stuff,thingy) f(b,stuff,thingy) f(c,stuff,thingy)]

Seems to me you want to be able to bind the 2nd and 3rd arguments to f
so that you can do this with a standard transform.

   transform( [a b c], bind(f, _1, stuff, thingy) )

=> [f(a,stuff,thingy) f(b,stuff,thingy) f(c,stuff,thingy)]

> I'll try and make it work out of the box first and see how it can be
> generalized.

I'll take transform and bind if you write them. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] fold_tree and grammar using external_transforms and state

2012-07-26 Thread Eric Niebler
On 7/26/2012 10:26 PM, Joel Falcou wrote:
> Yeah, I figured the code was amiss.
> After corrections and using your tip, it works.

Good.

> Then I discovered it was not what I wanted ;)

Oops. :-)

> What I actually need to do is that when I encounter a bunch of
> bitwise_and_ node, I need to flatten them then pass this flattened
> tree + the initial tuple to the equivalent of fusion transform that will
> do:
> 
> skeleton_grammar(current, current value from state, current
> external_transforms)
> 
> I guess proto::functional::transform is not there and needs to be done
> by hand?

You mean, a proto callable that wraps fusion::transform? No, we don't
have one yet. If you write one, I'll put it in proto.

Naming is becoming an issue, though. We already have proto::transform.
You'd be adding proto::functional::transform that would be totally
unrelated. I think I screwed up with the namespaces. It should probably
be proto::functional::fusion::transform. Urg.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] fold_tree and grammar using external_transforms and state

2012-07-26 Thread Eric Niebler
On 7/26/2012 10:30 AM, Mathias Gaunard wrote:
> On 26/07/2012 17:28, Joel Falcou wrote:
>> Here is the deal:
>>
>> https://gist.github.com/3182676
> 
> Doesn't fold_tree normally take template arguments? I can't see any in
> the code here.

Right, this code seems like it can't possibly work as-is. But to get to
Joel's original question. ..

Yes, fold is using the state parameter and yours is getting dropped on
the floor. You have some control over this, however. Both fold and
fold_tree are used like ...

  fold< sequence, state0, fun >

All three are transforms. The third is called at each iteration, but the
first two are called before the iteration begins, so they have access to
the state before the fold algorithm nukes it. If you want to save it off
somewhere, you can. You could, for instance, make state0 something like
"proto::functional::make_pair(proto::_state, )".
Then you'll need to use functional::first and functional::second to get
at the parts. Kind of a pain. The long-sought let<> transform would give
you an easier way around this.
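
[Editor's note] The make_pair trick can be illustrated outside Proto with an ordinary fold (`fold_keeping_state` is an invented name): when the fold is about to consume the state slot, smuggle the original state through the accumulator as the second half of a pair, then unpack it afterwards — the moral equivalent of `functional::first`/`functional::second`.

```cpp
#include <numeric>
#include <utility>
#include <vector>

int fold_keeping_state(std::vector<int> const& seq, int incoming_state)
{
    // state0 = make_pair(running value, saved original state)
    std::pair<int, int> state0{0, incoming_state};
    std::pair<int, int> r = std::accumulate(
        seq.begin(), seq.end(), state0,
        [](std::pair<int, int> s, int x) {
            return std::make_pair(s.first + x, s.second); // saved part rides along
        });
    return r.first + r.second;   // both halves available after the fold
}
```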

I've wanted a way for fold and fold_tree to automatically save off the
state and restore it later, but without let<> there really isn't a good
place. And I'm not too enamored with that idea either since it would
change the data parameter, which would be a breaking change -- and a
somewhat surprising one. Probably better to let the state drop and tell
folks to use let<> explicitly if they need it.

Gotta write let<> first, tho. :-P

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Visitor Design Pattern

2012-07-22 Thread Eric Niebler
On 10/25/2010 10:03 PM, Eric Niebler wrote:
> On 10/25/2010 10:01 PM, joel.fal...@lri.fr wrote:
>>
>>> There, that's better. I don't think I'll mess with it any more. Go ahead
>>> and use it, Thomas.
>>
>> just a small question: what if I need a transform that uses external data?
>> In nt2, we have this compute transform that recursively eats the AST and
>> calls the appropriate function, passing an n-dimensional position tuple as
>> the data.
>>
>> I guess I could pass it as the state, but will we have any other alternative?
> 
> You could pass it as state or bundle it with the external transforms.
> All you need is a nested when template. Does that help?

[Resurrecting this ancient thread]

This was the thread that led to the current design of external
transforms, which uses the data parameter. Joel here wanted to know how
to use external transforms if you're already using the data parameter
for something else. I never felt good about the answer, and now I have a
better one: use the transform environment slot I've created for this
purpose:

  MyEval( expr, state, (proto::data = foo, proto::transforms = bar) );

In this case, bar is an object of a type derived from
proto::external_transforms that contains the mapping from rules to
transforms.

The proto::_data transform will only return the value of foo.

The change is backward compatible. If the "proto::transforms" key is not
found in the environment, or the environment is an old-style blob,
things behave as they did before.

This change lives on trunk and will be moved to release after 1.51 ships.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com



Re: [proto] proto-11 progress report

2012-07-20 Thread Eric Niebler
On 7/17/2012 6:14 PM, Eric Niebler wrote:
> I'm considering adding the slots mechanism to proto-current so that this
> can be made to work there, also. The problem is that once you use a
> slot, the data parameter is no longer just a dumb blob. I can make
> proto::_data ignore the slots and just return the dumb blob as before,
> and that will satisfy some. But any custom primitive transforms that
> folks have written will need to be ready for the fact that they could
> get passed something different than what they were expecting. I don't
> think it will break code until you decide to use a slot (or use a
> transform that uses a slot). Then you'll need to fix up your transforms.
> 
> Does anybody object to the above scheme?

This is now implemented on trunk. It's implemented in a backward
compatible way.[*]

What this means is that instead of a monolithic blob of data, the third
parameter to a Proto transform can be a structured object with O(1)
lookup based on tag. You define a tag with:

  BOOST_PROTO_DEFINE_ENV_VAR(my_tag_type, my_key);

Then, you can use it like this:

  some_transform()(expr, state, (my_key = 42, your_key = "hello"));

In your transforms, you can access the value associated with a
particular key using the proto::_env_var transform.

You can still pass an unstructured blob, and things work as they did
before. The proto::_data transform checks to see if the data parameter
is a blob or structured. If it's a blob, that simply gets returned. If
it's structured, it returns the value associated with the
proto::data_type tag. In other words, these two are treated the same:

  int i = 42;
  some_transform()(expr, state, i);
  some_transform()(expr, state, (proto::data= i));
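
[Editor's note] A toy analogue (not Proto's implementation; the tag names and `push_env`/`env_entry` helpers are invented) of such a tag-keyed environment: each layer stores one key/value pair, and overload resolution on the key's type performs the O(1) lookup.

```cpp
struct data_tag {};
struct scale_tag {};   // example keys

struct empty_env {
    // base case so that every derived 'using Base::get;' has a target
    void get() const {}
};

template<typename Tag, typename Value, typename Base>
struct env_entry : Base {
    Value value;
    env_entry(Value v, Base b) : Base(b), value(v) {}
    using Base::get;                       // keep the outer layers visible
    Value get(Tag) const { return value; } // this layer answers for its Tag
};

template<typename Tag, typename Value, typename Base>
env_entry<Tag, Value, Base> push_env(Tag, Value v, Base b)
{ return env_entry<Tag, Value, Base>(v, b); }
```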

There's more, but I'll save it. It's a big change, and docs are yet to
be written. Y'all might want to test your code against trunk and report
problems early. (This will *not* be part of 1.51.)

[*] I had to make some changes to Phoenix though because unfortunately
Phoenix makes use of some undocumented parts of Proto.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Refining the Proto concepts

2012-07-18 Thread Eric Niebler
On 7/18/2012 3:59 PM, Mathias Gaunard wrote:
> On 18/07/2012 18:29, Eric Niebler wrote:
> 
>> Is there some code in Proto that is forcing the instantiation of those
>> specializations? Probably, and that would be unintended. One approach would
>> be to replace these normalized forms with an equivalent incomplete type
>> and fix all places where the code breaks.
> 
> Doesn't
> 
> template<typename T>
> struct foo
> {
>    typedef bar<T> baz;
> };
> 
> foo<int> f = {};
> 
> instantiate bar<int>?

No, that merely mentions the specialization bar<int>, but it doesn't
instantiate it. Nothing about that typedef requires bar<int> to be
complete. You can try it yourself. If you make bar incomplete, the
above code still compiles.
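
[Editor's note] Eric's "try it yourself" suggestion, in compilable form (template names as in Mathias's example):

```cpp
template<typename T> struct bar;   // declared, never defined

template<typename T>
struct foo {
    typedef bar<T> baz;            // names bar<T>; does not instantiate it
};

foo<int> f = {};                   // compiles; bar<int> stays incomplete
```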

Also, matching against the partial specializations of detail::matches_
in matches.hpp doesn't require the basic_expr specialization to be
complete. But like I said, if there is some sloppy code in there that
requires that nested typedef to be complete, it *will* get instantiated.
Replacing it with an incomplete type will change it from a compile-time
perf bug to a hard error, and those are easy to find and fix.

> The problem I see is that for a regular Proto expression, the whole tree
> gets instantiated twice for expr and basic_expr.

If this is indeed happening, cleaning it up would be a nice perf win.
Want to give it a shot?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Refining the Proto concepts

2012-07-18 Thread Eric Niebler
On 7/18/2012 2:59 AM, Mathias Gaunard wrote:
> In an experiment to reduce compile times, I'd like to try reducing the
> number of template instantiations tied to the use of Proto.
> 
> To do this, one could start by defining his own expression types instead
> of wrappers of proto::expr, which is something that is more-or-less
> promoted by proto-11.
> 
> However, the proto_base_expr and proto_grammar typedefs still force
> those instantiations.
> 
> Would it be possible to refine the concepts so as to avoid this?

The key to doing this would be in the implementation of proto::matches.
It uses proto_base_expr and proto_grammar as "normalized forms" of an
expression. In matches.hpp, you'll find lots of specializations of
detail::matches_ specified in terms of basic_expr. This works for
arbitrary expression types -- even expression extensions -- because of
the presence of the proto_base_expr and proto_grammar typedefs.

Is there some code in Proto that is forcing the instantiation of those
specializations? Probably, and that would be unintended. One approach would
be to replace these normalized forms with an equivalent incomplete type
and fix all places where the code breaks.

The presence of proto_base() is also going to cause problems. It returns
the normalized form so that Proto can quickly get at child nodes. It's
also used by virtual members to build an expression on the fly. You'd
need to find equivalents for these.

I'd say, not impossible, but tricky.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] proto-11 progress report

2012-07-17 Thread Eric Niebler
On 6/25/2012 12:21 PM, Mathias Gaunard wrote:
> There is a function which is very simple and that I found to be very
> useful when dealing with expression trees.
> 
> unpack(e, f0, f1) which calls
> f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))
> 
> I can do recursion or not with the right f1, and I can 'unpack' an
> expression to an n-ary operation f0.
> 
> Here f0 is typically a function that uses its own overloading-based
> dispatching mechanism.

Now with proto-11, it's not too hard to implement Mathias' unpack
function in terms of primitives. You can find an example here:

<https://github.com/ericniebler/home/blob/master/src/proto/libs/proto/test/bind.cpp>

The functions f0 and f1 can be stateful. It uses the new unpacking
patterns, the slot-based transform environment, and a new bind function
object.

I'm considering adding the slots mechanism to proto-current so that this
can be made to work there, also. The problem is that once you use a
slot, the data parameter is no longer just a dumb blob. I can make
proto::_data ignore the slots and just return the dumb blob as before,
and that will satisfy some. But any custom primitive transforms that
folks have written will need to be ready for the fact that they could
get passed something different than what they were expecting. I don't
think it will break code until you decide to use a slot (or use a
transform that uses a slot). Then you'll need to fix up your transforms.

Does anybody object to the above scheme? The advantage is that things
like Mathias' unpack function become possible. Also, I'll finally be
able to implement the let<> transform. I'd probably be able to change
the implementation of the external transforms feature to make use of
slots, too (although that might be a breaking change, in which case I'd
wait until proto-11).

BTW, the "unpacking patterns" feature will be part of 1.51.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com



Re: [proto] Precomputing common matrix products in an expression

2012-07-15 Thread Eric Niebler
On 7/14/2012 11:07 PM, Mathias Gaunard wrote:
> On 07/15/2012 08:04 AM, Mathias Gaunard wrote:
> 
>> Assuming your expressions are CopyConstructible

For the record, in proto-11 no rvalues are stored by reference within
expressions by default.

> You'd also need them to be EqualityComparable or to use a comparator to
> use them as keys. A recursive call to fusion::equal_to would probably be
> a good definition.

That wouldn't check the tag type. But as Bart says, this is unnecessary
in his case.

In proto-11, expressions currently satisfy EqualityComparable.
operator== builds an expression node that has an implicit conversion to
bool IFF the left and right operands are compatible. Likewise for the
other relational operators. This is admittedly a kludge and I'm not sure
how I feel about it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com




Re: [proto] Precomputing common matrix products in an expression

2012-07-13 Thread Eric Niebler
On 7/13/2012 12:51 PM, Bart Janssens wrote:
> Hi guys,
> 
> I've been thinking about a feature for our DSEL, where lots of matrix
> products can occur in an expression. Part of an expression might be:
> nu_eff * transpose(nabla(u)) * nabla(u) + transpose(N(u) +
> coeffs.tau_su*u_adv*nabla(u)) * u_adv*nabla(u)
> 
> Here, u_adv*nabla(u) is a vector-matrix product that occurs twice, so
> it would be beneficial to calculate it only once. I was wondering if
> it would be doable to construct a fusion map, with as keys the type of
> each product occurring in an expression, and evaluate each member of
> the map before evaluating the actual expression. When the expression
> is evaluated, matrix products would then be looked up in the map.
> 
> Does this sound like something that's doable? I'm assuming the fold
> transform can help me in the construction of the fusion map. Note also
> that each matrix has a compile-time size, so any stored temporary
> would need to have its type computed.

This is an instance of the larger "common subexpression elimination"
problem. The problem is complicated by the fact that it can't be solved
with type information alone. u_adv*nabla(u) might have the same type as
u_adv*nabla(v) but different values. A hybrid solution is needed where
you cache a set of results indexed both by type information and the
identities (addresses) of the constituents of the subexpressions. This
is hard, and nobody has attempted it yet. I can give you encouragement,
but not guidance. If you find a general and elegant solution, it would
certainly be worth putting in Proto, since lots of folks would benefit.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com




Re: [proto] _unpack transform

2012-07-13 Thread Eric Niebler
On 7/13/2012 6:37 AM, Mathias Gaunard wrote:
> On 07/11/2012 06:55 PM, Eric Niebler wrote:
> 
>> You're referring to this:
>>
>> http://lists.boost.org/proto/2010/11/0304.php
>>
>> I should have followed through! The code referenced there isn't
>> available anymore. I remember putting it on my TODO list to understand
>> the compile-time implications of it, because of your warning about
>> compile times. And then ... I don't remember. :-P
> 
> It's available here
> <https://raw.github.com/MetaScale/nt2/169b69d47e4598e403caad0682dd6d24b8fd4668/modules/boost/dispatch/include/boost/dispatch/dsl/proto/unpack.hpp>

Thanks.

> As I said earlier we got rid of it because it wasn't very practical to
> use, this is from an old revision.

Impractical because of the compile times? Did you replace it with
anything? Would you have any interest in giving the new unpacking
patterns a spin and letting me know if they meet your need, when you
have time?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] _unpack transform

2012-07-12 Thread Eric Niebler
On 7/11/2012 10:32 AM, Eric Niebler wrote:
>   f0(f1(f2(pack(_))...))
> 
> That's not so bad, actually. Now, the question is whether I can retrofit
> this into proto-current without impacting compile times.

This is now implemented on boost trunk for proto-current. Seems to work
without a significant perf hit (my subjective sense). Docs forthcoming.
It's also implemented for proto-11. This is a good feature, I think.
Thanks for all the feedback that led to it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] _unpack transform

2012-07-11 Thread Eric Niebler
On 7/11/2012 9:55 AM, Eric Niebler wrote:
> I'm going to keep playing with this. Your suggested syntax is nice. I
> wonder how close I can get. (Although I kinda like my pseudo-pack
> expansions, too. :-)



But based on Thomas' suggestion, I'm considering this alternate syntax:

  f0(unpack(_)...)

One of the problems I see with Thomas' original syntactic suggestion was
that it gave you no control over /how/ the arguments get unpacked.
Better to have the unpack to appear in a *pattern* that gets repeated.
With the above syntax, I can do:

  f0(f1(f2(unpack(_))...))

and it's clear I want:

  f0(f1(f2(child0), f2(child1), f2(child2), /*...*/))

But keeping Thomas' unpack keyword has 2 nice benefits:

1) The wildcard keeps its meaning. It always means "the current
expression". In my original proposal, a pseudo-pack expansion changed
the meaning of the wildcard to mean "the current child of the current
expression." That might be a bit confusing.

2) I could use a different transform as an argument to unpack. For instance:

  f0(f1(f2(unpack(_child0))...))

That would mean, unpack the children of the current expression's first
child.

But perhaps "unpack" is the wrong name now, since "unpack" and "..."
seem redundant. Maybe it should be "children_of":

  f0(f1(f2(children_of(_))...))

Blech, that sucks too. Maybe "pack":

  f0(f1(f2(pack(_))...))

That's not so bad, actually. Now, the question is whether I can retrofit
this into proto-current without impacting compile times.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com




Re: [proto] _unpack transform

2012-07-11 Thread Eric Niebler
On 7/11/2012 4:47 AM, Bart Janssens wrote:
> On Tue, Jul 10, 2012 at 11:18 PM, Eric Niebler  wrote:
>> The _unpack transform is pretty general, allowing a lot of variation
>> within the pack expansion pattern. There can be any number of Tfx
>> transforms, and the wildcard can be arbitrarily nested. So these are all ok:
>>
>>   // just call f0 with all the children
>>   _unpack<f0(_...)>
> 
> Hi Eric,
> 
> Is it correct that the above example just generates a sequence of
> calls to f0, one for every child of the expression? 

No, it calls f0 (once) with all the children. It's like:

  f0(child0, child1, child2...)

> If so, we are
> currently implementing that functionality like this:
> https://github.com/coolfluid/coolfluid3/blob/master/cf3/solver/actions/Proto/ExpressionGroup.hpp
> 
> So for us this would avoid the (in this case quite simple) primitive 
> transform.

Ah, you need a for_each transform. That wouldn't be hard to add. Feel
free to file a feature request so I don't lose track of it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] _unpack transform

2012-07-11 Thread Eric Niebler
On 7/11/2012 12:42 AM, Thomas Heller wrote:
> On 07/10/2012 11:18 PM, Eric Niebler wrote:
>> I just committed to the proto-11 codebase a new transform called
>> _unpack. You use it like this:
>>
>>   _unpack<f0(Tfx, f1(_)...)>
>>
>> Where Tfx represents any transform (primitive or otherwise), f0 is any
>> callable or object type, and f1(_) is an object or callable transform.
>> The "..." denotes pseudo-pack expansion (although it's really a C-style
>> vararg ellipsis). The semantics are to replace "f1(_)..." with
>> "f1(_child<0>), f1(_child<1>), etc.".
>>
>> With this, the _default transform is trivially implemented like this:
>>
>> struct _default
>>   : proto::or_<
>>         proto::when<proto::terminal<_>, proto::_value>
>>       , proto::otherwise<
>>             proto::_unpack<eval(proto::tag_of<_>(), _default(_)...)>
>>         >
>>     >
>> {};
>>
>> ...where eval is:
>>
>> struct eval
>> {
>>     template<typename E0, typename E1>
>>     auto operator()(proto::tag::plus, E0 && e0, E1 && e1) const
>>     BOOST_PROTO_AUTO_RETURN(
>>         static_cast<E0 &&>(e0) + static_cast<E1 &&>(e1)
>>     )
>>
>>     template<typename E0, typename E1>
>>     auto operator()(proto::tag::multiplies, E0 && e0, E1 && e1) const
>>     BOOST_PROTO_AUTO_RETURN(
>>         static_cast<E0 &&>(e0) * static_cast<E1 &&>(e1)
>>     )
>>
>>     // Other overloads...
>> };
>>
>> The _unpack transform is pretty general, allowing a lot of variation
>> within the pack expansion pattern. There can be any number of Tfx
>> transforms, and the wildcard can be arbitrarily nested. So these are
>> all ok:
>>
>>    // just call f0 with all the children
>>    _unpack<f0(_...)>
>>
>>    // some more transforms first
>>    _unpack<f0(Tfx0, Tfx1, Tfx2, f1(_)...)>
>>
>>    // and nest the wildcard deeply, too
>>    _unpack<f0(Tfx, f1(f2(f3(_)))...)>
>>
>> I'm still playing around with it, but it seems quite powerful. Thoughts?
>> Would there be interest in having this for Proto-current? Should I
>> rename it to _expand, since I'm modelling C++11 pack expansion?
>>
> i think _expand would be the proper name. Funny enough i proposed it
> some time ago for proto-current, even had an implementation for it, and
> the NT2 guys are using that exact implementation ;)
> Maybe with some extensions.
> So yes, Proto-current would benefit from such a transform.

You're referring to this:

http://lists.boost.org/proto/2010/11/0304.php

I should have followed through! The code referenced there isn't
available anymore. I remember putting it on my TODO list to understand
the compile-time implications of it, because of your warning about
compile times. And then ... I don't remember. :-P

I remember that it was an invasive change to how Proto evaluates all
transforms, which made me nervous. I also don't think that exact syntax
can be implemented without forcing everybody to pay the compile-time
hit, whether they use the feature or not. In contrast, having a separate
_unpack transform isolates the complexity there.

I'm going to keep playing with this. Your suggested syntax is nice. I
wonder how close I can get. (Although I kinda like my pseudo-pack
expansions, too. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com




[proto] _unpack transform (was: proto-11 progress report)

2012-07-10 Thread Eric Niebler
I just committed to the proto-11 codebase a new transform called
_unpack. You use it like this:

  _unpack<f0(Tfx, f1(_)...)>

Where Tfx represents any transform (primitive or otherwise), f0 is any
callable or object type, and f1(_) is an object or callable transform.
The "..." denotes pseudo-pack expansion (although it's really a C-style
vararg ellipsis). The semantics are to replace "f1(_)..." with
"f1(_child<0>), f1(_child<1>), etc.".

With this, the _default transform is trivially implemented like this:

struct _default
  : proto::or_<
        proto::when<proto::terminal<_>, proto::_value>
      , proto::otherwise<
            proto::_unpack<eval(proto::tag_of<_>(), _default(_)...)>
        >
    >
{};

...where eval is:

struct eval
{
    template<typename E0, typename E1>
    auto operator()(proto::tag::plus, E0 && e0, E1 && e1) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E0 &&>(e0) + static_cast<E1 &&>(e1)
    )

    template<typename E0, typename E1>
    auto operator()(proto::tag::multiplies, E0 && e0, E1 && e1) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E0 &&>(e0) * static_cast<E1 &&>(e1)
    )

    // Other overloads...
};

The _unpack transform is pretty general, allowing a lot of variation
within the pack expansion pattern. There can be any number of Tfx
transforms, and the wildcard can be arbitrarily nested. So these are all ok:

  // just call f0 with all the children
  _unpack<f0(_...)>

  // some more transforms first
  _unpack<f0(Tfx0, Tfx1, Tfx2, f1(_)...)>

  // and nest the wildcard deeply, too
  _unpack<f0(Tfx, f1(f2(f3(_)))...)>

I'm still playing around with it, but it seems quite powerful. Thoughts?
Would there be interest in having this for Proto-current? Should I
rename it to _expand, since I'm modelling C++11 pack expansion?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com



Re: [proto] proto-11 progress report

2012-07-01 Thread Eric Niebler
On 6/29/2012 4:49 AM, Mathias Gaunard wrote:
> On 28/06/2012 21:09, Eric Niebler wrote:
> 
>> After meditating on this for a bit, a thought occurred to me. Your
>> unpack function is a generalization of the pattern used by the _default
>> transform.
> 
> It is indeed.

Right. Providing the higher-level primitive transform is on my to-do
list. Thanks for the suggestion.

>> Generators are intended to meet this need. What are they lacking for
>> you? Is it the lack of an "unpack" transform?
> 
> We use generators for something else. Generators are in charge of
> putting a raw expression in the NT2 domain, which involves computing the
> logical size of the expression as well as the type it would have it
> evaluated.
> 
> Doing the expression rewriting in the generator itself causes dependency
> problems, since expression rewriting is defined in terms of unpacking
> then re-making expressions, which involves calling the generator.
> 
> I don't know yet how we could do what we need in a generator-less world.

Well, in the present generator world, the generator is passed an
expression, and your mission is to rewrite it (or wrap it). Rewriting
the expression causes a recursive invocation of the generator for the
current expression. This is the cause of the trouble, IIUC.

In the future generator-less world, your domain's make_expr function
object is passed a tag and a bunch of child nodes. You get to decide how
to turn that into an expression. If you want to transform the child
nodes first, you should be able to do that, and it wouldn't recurse
back. You're recursing only on the children, not on the current
expression. That should work. In theory.

>>> Both optimize and schedule require containing children by value to
>>> function correctly.
>>
>> How is this relevant?
> 
> When using normal Proto features, the expressions built from operators
> contain their children by reference while the expression built from
> transforms contain their children by value.
> 
> Since in our case we use the same code for both, we had to always
> contain children by value.

I'm afraid I'm being dense. I still don't see how that relates to the
need for your unpack function or the limitations of transforms.

>> Transforms are not pure functional. The data parameter can be mutated
>> during evaluation.
> 
> The transform language is functional since the only thing it does is
> define a calling expression which is a combination of primitive
> transforms. Of course each primitive transform doesn't have to be
> functional, but the language to use them cannot define state, it can
> just pass it along like a monad.

Your unpack function is no less functional in nature. I'm not denying
that transforms have limitations that your unpack function doesn't (see
below). I'm just saying that you're barking up the wrong tree with your
argument about transform's functional nature.

> I guess it's not pure functional though because of proto::and_.

I don't understand this statement.

> In any case, a lot of non-trivial expression evaluation strategies
> cannot be practically implemented as a transform and require a primitive
> transform.

Ah! By "transform" you mean "callable transform or object transform, but
not primitive transform". But the term "transform" includes primitive
transforms. Is that why we're talking past each other?

> If everything ends up being primitive transforms, we might as well use
> simple function objects directly, which are not tied by constraints such
> as arity, state, data, environment etc., just store whatever state is
> needed in the function object or bind that state with boost::bind or
> similar.

I see what you're getting at. Primitive transforms have limitations on
the number of arguments and the meaning of those arguments. I understand.

> I'd like to see Proto provide algorithms like this that accept arbitrary
> function objects and that are not intended to require transforms to do
> useful things.

Sure. However, I have a reason for wanting to make these things play
nicely with transforms. See below.

>> You know about proto::vararg, right? It lets you handle nodes of
>> arbitrary arity.
> 
> The transformations that can currently be done as a non-primitive
> transform when the transformation must not rely on an explicit arity of
> the expression are extremely limited.

Limiting the discussion to non-primitive transforms, I agree. I didn't
know that's what we were discussing.

> Adding unpacking transforms would certainly make it more powerful, but
> still not as powerful as what you could do with a simple function object
> coupled with m

Re: [proto] proto-11 progress report

2012-06-28 Thread Eric Niebler
On 6/27/2012 2:11 PM, Mathias Gaunard wrote:
> On 25/06/2012 23:30, Eric Niebler wrote:
>> On 6/25/2012 12:21 PM, Mathias Gaunard wrote:
> 
>>> There is a function which is very simple and that I found to be very
>>> useful when dealing with expression trees.
>>>
>>> unpack(e, f0, f1) which calls
>>> f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))
>>>
>>> I can do recursion or not with the right f1, and I can 'unpack' an
>>> expression to an n-ary operation f0.
>>>
>>> Here f0 is typically a function that uses its own overloading-based
>>> dispatching mechanism.
>>
>> OK, thanks for the suggestion. Where have you found this useful?
> 
> For example I can use this to call
> 
> functor<tag1>()(functor<tag2>()(a, b), c);
> 
> from a tree like
> 
> expr<tag1, list2<
> expr<tag2, list2<a, b> >, expr<terminal, term<c> > > >
> 
> NT2 uses a mechanism like this to evaluate expressions.

OK.

> For element-wise expressions (i.e. usual vector operations), the f1 is
> the run(expr, pos) function -- actually more complicated, but there is
> no need to go into details -- which by default simply calls unpack
> recursively.
> 
> What the f0 does is simply unpack the expression and call a functor
> associated to the tag (i.e. run(expr, i) with expr<tag::pow, list2<foo,
> bar> > calls pow(run(foo, i), run(bar, i)) ).
> 
> The important bit is that it is also possible to overload run for a
> particular node type.
> 
> On terminals, the run is defined to do a load/store at the given
> position. This means run(a + b * sqrt(c) / pow(d, e), i) calls
> plus(a[i], multiplies(b[i], divides(sqrt(c[i]), pow(d[i], e[i])))).
> 
> Each function like plus, multiplies, sqrt, pow, etc. is overloaded so
> that if any of the arguments is an expression, it does a make_expr. If
> the values are scalars, it does the expected operations on scalars. If
> they're SIMD vectors, it does the expected operations on vectors.
> 
> run is also overloaded for a variety of operations that depend on the
> position itself, such as restructuring, concatenating or repeating data;
> the position is modified before running the children or different things
> may be run depending on a condition.
> 
> A simple example is the evaluation of cat(a, b) where run(expr, i) is
> defined as something a bit like i < size(child0(expr)) ?
> run(child0(expr), i) : run(child1(expr), i - size(child0(expr)))
> 
> unpack is also used for expression rewriting: before expressions are
> run, the expression is traversed recursively and reconstructed. Whenever
> operations that are not combinable are found, those sub-expressions are
> evaluated and the resulting terminal is inserted in their place in the
> tree. In NT2 this is done by the schedule function.

After meditating on this for a bit, a thought occurred to me. Your
unpack function is a generalization of the pattern used by the _default
transform. The _default<X> transform unpacks an expression, transforms
each child with X, then recombines the result using the "C++ meaning"
corresponding to the expression's tag type. In other words,
_default<X>()(e) is unpack(e, f0, f1) where f1 is X and f0 is
hard-coded. I could very easily provide unpack as a fundamental
transform and then trivially implement _default in terms of that. I very
much like this idea.

Aside: The pass_through transform almost fits this mold too, except that
each child can get its own f1. Hmm.

Unpack doesn't sound like the right name for it, though. It reminds me a
bit of Haskell's fmap, but instead of always putting the mapped elements
back into a box of the source type, it lets you specify how to box
things on the way out.

> Similarly we have a phase that does the same kind of thing to replace
> certain patterns of combinations by their optimized counterparts
> (optimize). We'd like to do this at construction time but it's currently
> not very practical with the way Proto works.

Generators are intended to meet this need. What are they lacking for
you? Is it the lack of an "unpack" transform?

> Both optimize and schedule require containing children by value to
> function correctly.

How is this relevant?

> Transforms are not used much since they're only useful for the most
> simple operations due to their pseudo-DSL pure functional nature. It's
> especially problematic when performance sensitive code, which needs to
> be stateful, is involved. 

Transforms are not pure functional. The data parameter can be mutated
during evaluation. Even expressions themselves can be mutated by a
transform, as long as they're non-const.

> Finally it's also not practical to do anything
> involving nodes of arbitrary arity with them.

You know about proto::vararg, right? It lets you handle nodes of
arbitrary arity.

Re: [proto] proto-11 progress report

2012-06-25 Thread Eric Niebler
On 6/25/2012 12:44 PM, Bart Janssens wrote:
> On Sun, Jun 24, 2012 at 1:10 AM, Eric Niebler  wrote:
>> Data parameter uses a slot mechanism
>> 
>> In proto today, transforms take 3 parameters: expression, state and
>> data. As you can see from above, transforms in proto-11 take an
>> arbitrary number of parameters. However, that can make it hard to find
>> the piece of data you're looking for. Which position will it be in?
>> Instead, by convention most transforms will still only deal with the
>> usual 3 parameters. However, the data parameter is like a fusion::map:
>> it will have slots that you can access in O(1) by tag.
> 
> Intersting! Our current "data" element contains a fusion vector, also
> to enable grouping of strongly typed data. This is a major source of
> complexity in our code, so it will be simpler if we can reuse this
> mechanism.

Yes, the slots mechanism greatly simplifies that kind of code. You can
find a simple lambda library example here:

https://github.com/ericniebler/home/blob/master/src/proto/libs/proto/example/lambda.cpp

Notice how the lambda_eval_ function uses the slots feature to store the
lambda's arguments on line 48, and how the algorithm accesses the values
of those slots on line 36. Formerly, this had to be done with a fusion
vector, as you are doing. This way is much simpler, I think.

I primarily added this feature so that I have a way to implement the
let<> transform I've described here previously. It'll give you a way to
create local variables within a transform. The current let "stack frame"
will use one of these slots.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com





Re: [proto] proto-11 progress report

2012-06-25 Thread Eric Niebler
On 6/25/2012 12:21 PM, Mathias Gaunard wrote:
> On 24/06/2012 01:10, Eric Niebler wrote:
> 
>> As for what is not changing:
>>
>> Grammars, Transforms and Algorithms
>> ===
>> It would be wonderful if there were a more natural syntax for describing
>> proto algorithms rather than with structs, function objects, proto::or_,
>> proto::when, and friends. If there is one, I haven't found it yet. On
>> the up side, it means that many current proto-based libraries can be
>> upgraded with little effort. On the down side, the learning curve will
>> still be pretty steep. If anybody has ideas for how to use C++11 to
>> simplify pattern matching and the definition of recursive tree
>> transformation algorithms, I'm all ears.
> 
> There is a function which is very simple and that I found to be very
> useful when dealing with expression trees.
> 
> unpack(e, f0, f1) which calls
> f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))
> 
> I can do recursion or not with the right f1, and I can 'unpack' an
> expression to an n-ary operation f0.
> 
> Here f0 is typically a function that uses its own overloading-based
> dispatching mechanism.

OK, thanks for the suggestion. Where have you found this useful?

Along those lines, I've been thinking about adding a kind of transform
that gives an exploded view of an expression, kind of like what
evaluation contexts give you. If you tell me your use case, I can try to
make the new transform support it.

For the record, I'll be killing off evaluation contexts completely in
proto.next.

>> It needs clang trunk to compile.
> 
> Why doesn't it work with GCC?

The big missing feature is ref qualifiers for member functions (n2439).
AFAIK, clang is the only compiler that has implemented it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com







Re: [proto] proto-11 progress report

2012-06-25 Thread Eric Niebler
On 6/25/2012 12:39 AM, Joel Falcou wrote:
> On 06/24/2012 01:10 AM, Eric Niebler wrote:

>>int i = LambdaEval()(_1 + 42, 0, proto::tag::data = 8);
>> 
>> The 3rd parameter associates the value 8 with the data tag.

> 
> How do you set up a new tag? Is mytag just some
> 
> mytag_type mytag = {};
> 
> ?
> 
> or should mytag_type inherit/be wrapped from some special stuff

Special stuff. Tags are defined as follows:

struct my_tag_type
  : proto::tags::def<my_tag_type>
{
    using proto::tags::def<my_tag_type>::operator=;
};

namespace
{
    constexpr my_tag_type const & my_tag =
        proto::utility::static_const<my_tag_type>::value;
}

The gunk in the unnamed namespace is for strict ODR compliance. A simple
global const would be plenty good for most purposes.

>> As for what is not changing:
>>
>> Grammars, Transforms and Algorithms
>> ===
>> It would be wonderful if there were a more natural syntax for describing
>> proto algorithms rather than with structs, function objects, proto::or_,
>> proto::when, and friends. If there is one, I haven't found it yet. On
>> the up side, it means that many current proto-based libraries can be
>> upgraded with little effort. On the down side, the learning curve will
>> still be pretty steep. If anybody has ideas for how to use C++11 to
>> simplify pattern matching and the definition of recursive tree
>> transformation algorithms, I'm all ears.
> 
> There is not so much way to describe something that looks like
> a grammar definition anyway. BNF/EBNF is probably the simplest
> way to do it.

That would be overkill IMO. Proto grammars don't need to worry about
precedence and associativity. Forcing folks to write E?BNF would mean
forcing them to think about stuff they don't need to think about.

> Now on the syntactic clutter front, except wrapping everything in round
> lambda
> or use object/function call in a hidden decltype call, I don't see what we
> can do better :s

More round lambda, sure. Fixing inconsistencies, also. But I tend to
doubt that using grammar-like expressions in a decltype would be a
significant improvement. Folks still can't write transforms in straight,
idiomatic C++, which is what I want.

> Glad it is picking up steam :D

C++11 has made this pretty fun. I'm ready to stop supporting all my
C++03 libraries now. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com





Re: [proto] Streamulus v0.1 is out: An EDSL for Event Stream Processing with C++

2012-06-24 Thread Eric Niebler
On 6/24/2012 4:42 PM, Dave Abrahams wrote:
> 
> on Sun Jun 24 2012, Eric Niebler wrote:
> 
>> On 6/24/2012 8:50 AM, Irit Katriel wrote:
>>>
>>> In the accumulators library, all the accumulators are invoked for
>>> every update to the input. This is why the visitation order can be
>>> determined at compile time.
>>
>> That's correct.
> 
> Are you forgetting about "droppable" accumulators?

Not forgetting. It doesn't change the fact that the visitation order is
set at compile time. There is no centralized, automatic, dynamic flow
control in the accumulators library.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com







Re: [proto] Streamulus v0.1 is out: An EDSL for Event Stream Processing with C++

2012-06-24 Thread Eric Niebler

On 6/24/2012 8:50 AM, Irit Katriel wrote:
> 
> On 24 Jun 2012, at 03:47, Dave Abrahams wrote:
>> 
>> Well, I think the hello world example is too simple to illustrate
>> what this does, and the blog posting is TL;DR, but I skimmed it,
>> and still didn't really have a clue.  Have you looked at 
>> http://www.boost.org/doc/libs/1_49_0/doc/html/accumulators.html?
>> It seems to have some overlap with the problems you're solving.
>> 
> 
> I was not aware of accumulators, and I agree that there is some 
> overlap. As far as I can tell, the main differences are as follows.
> Please correct me if I misunderstood anything.
> 
> In the accumulators library, all the accumulators are invoked for
> every update to the input. This is why the visitation order can be
> determined at compile time.

That's correct.

> I am building a dependency graph that is used at runtime to
> determine which nodes need to be activated, so that inputs only
> propagate through the part of the expression that may be affected. 
> Obviously I am imagining large expressions on many inputs, so that
> this is worthwhile doing.

Very interesting! So ... data flow? Or does this take inspiration from
stream databases?

> In addition, I am trying to achieve a more programming-like syntax
> for complex expressions, by making the expression (rather than the 
> accumulator type) encode the dependencies between nodes.

Since accumulators solves a simpler problem, a DSL isn't needed
there. For the more general problem you're solving, I think a DSL
makes sense.

> Example: you want to compute f(g(x),h(x)) over a stream. With 
> streamulus you define each of the functions f,g,h, and when you 
> subscribe the expression f(g(x),h(x)), and only then, f learns that
> its inputs are the outputs of g and h. With accumulators you would
> need to define the accumulator f_of_g_and_h(), make it depend on g
> and h, give it a tag, and then add it to the accumulators list.
> 
> Similarly, with streamulus you can define your function f, and then
> use it multiple times within the same expression:  sqrt(sqrt(x)).
> This will build a graph with three nodes : (x--> sqrt-->sqrt) where
> x is an input node and the two sqrt's don't need to know or care
> about each other because a different instance was constructed for
> each node.
> 
> I'm sorry about the too-long blog post.. It mostly deals with the
> bad things that can happen if you DIY your stream computations. You
> can skip most of it and just look at the definition of the sample
> application in the beginning, and then the streamulus solution in
> the end.

Looks pretty interesting. Thanks for sharing.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


[proto] proto-11 progress report

2012-06-23 Thread Eric Niebler

I've made some good progress on the C++11 proto rewrite that I'd like to
share. So far, it's been a less radical shift than I expected.

Expressions vs. Grammars
========================
Many new users are confused by the difference between terminal<T> and
terminal<T>::type. In proto.next, there is no difference. Forget the
::type. Things just work.

Custom transforms are simpler
=============================
Currently, defining a custom transform means defining a struct with a
nested impl class template of 3 parameters, correctly inheriting and
following a protocol. In the rewrite, I wanted to simplify things. Here
for instance, is how the _expr transform is defined:

struct _expr
  : transform<_expr>
{
    template<typename E, typename ...Rest>
    auto operator()(E && e, Rest &&...) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E &&>(e)
    )
};

A custom transform is simply a struct that inherits from
proto::transform and that has an operator() that accepts an arbitrary
number of parameters. (The use of BOOST_PROTO_AUTO_RETURN is not
necessary. It simply handles the return statement, the return type, and
the noexcept clause.)

Data parameter uses a slot mechanism
====================================
In proto today, transforms take 3 parameters: expression, state and
data. As you can see from above, transforms in proto-11 take an
arbitrary number of parameters. However, that can make it hard to find
the piece of data you're looking for. Which position will it be in?
Instead, by convention most transforms will still only deal with the
usual 3 parameters. However, the data parameter is like a fusion::map:
it will have slots that you can access in O(1) by tag.

Here is how a proto algorithm will be invoked:

  int i = LambdaEval()(_1 + 42, 0, proto::tag::data = 8);
   
The 3rd parameter associates the value 8 with the data tag. The _data
transform returns the data associated with that tag. Additionally, you
can define your own tags and pass along another blob of data, as follows:

  int i = LambdaEval()(_1 + 42, 0, (proto::tag::data = 8, mytag = 42));

The _data transform will still just return 8, but you can use
_env to fetch the 42. The third parameter has been
generalized from an unstructured blob of data to a structured collection
of environment variables. Slots can even be reused, in which case they
behave like FILO queues (stacks).

proto::callable and proto::is_callable will no longer be necessary
==================================================================
Much work has gone into eliminating the need for proto::callable and
proto::is_callable. The present need comes from a difference between
the proto call and make transforms. In make, X is treated as a
lambda describing a type. For instance, X might be
std::vector<proto::_value>, in which case proto::_value is evaluated
and the result (say, int) is used to instantiate std::vector<int>. proto::call
doesn't do that, and that decision has caused no end of grief. In
proto.next, a transform like X(Y) is evaluated by first treating X as a
lambda like make does today. After a real type has been computed, it's
safe to ask whether the new type is a transform, callable or other.
Transforms are quite simply anything that has inherited from
proto::transform. And in C++11, there is no reason to tag something as
callable or not; extended SFINAE can do the job for us. Either X()(y)
compiles or it doesn't. As a result, proto can easily and reliably
distinguish callable transforms from object transforms with no help from
the user.

As for what is not changing:

Grammars, Transforms and Algorithms
===================================
It would be wonderful if there were a more natural syntax for describing
proto algorithms rather than with structs, function objects, proto::or_,
proto::when, and friends. If there is one, I haven't found it yet. On
the up side, it means that many current proto-based libraries can be
upgraded with little effort. On the down side, the learning curve will
still be pretty steep. If anybody has ideas for how to use C++11 to
simplify pattern matching and the definition of recursive tree
transformation algorithms, I'm all ears.

For those curious to taking a peek, the code lives here:
https://github.com/ericniebler/home/tree/master/src/proto

It needs clang trunk to compile. I also had to manually fix a bug in my
gcc headers to get the tests to compile. This is still in a very raw
state, and not useful for real work yet.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com

Re: [proto] [proto-11] expression extension

2012-06-14 Thread Eric Niebler
On 6/14/2012 12:03 AM, Joel Falcou wrote:
> Just a question that just struck me. Will this rewrite be backward
> compatible with C++03  for the features that make sense ? I think the
> C++03 version may benefit from the new expression extension mechanism etc.

'Fraid not. The new extension mechanism is a breaking interface change
to proto::domain. The proto you know in boost will stay as it is.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [proto-11] expression extension

2012-06-05 Thread Eric Niebler
On 6/5/2012 11:10 PM, Mathias Gaunard wrote:
> On 03/06/2012 09:41, Eric Niebler wrote:
>>
>> Hey all, this is just an FYI. I've been hard at work at a ground-up
>> redesign of proto for C++11. I've gotten far enough along that I know
>> what expression extension will look like, so I thought I'd share. This
>> should interest those who want finer control over how expressions in
>> their domain are constructed. Without further ado:
>>
>>  template<typename Tag, typename Args>
>>  struct MyExpr;
>>
>>  struct MyDomain
>>    : proto::domain<MyDomain>
>>  {
>>      struct make_expr
>>        : proto::make_custom_expr<MyExpr, MyDomain>
>>      {};
>>  };
>>
>>  template<typename Tag, typename Args>
>>  struct MyExpr
> 
> Wouldn't it be more interesting for make_custom_expr to take a
> meta-function class?

The template template parameter is distasteful, I agree, but I can shave
template instantiations this way. There's no need to instantiate a
nested apply template for every new expression type created. Especially
now with template aliases, it's quite painless to adapt a template to
have the interface that make_custom_expr expects. That was my reasoning,
at least.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Hold terminals by "smart" reference

2012-06-05 Thread Eric Niebler
On 6/4/2012 5:51 PM, Mathias Gaunard wrote:
> On 04/06/2012 17:52, Eric Niebler wrote:
> 
>> I don't know what you mean by the "right type". If you want it held by
>> shared_ptr to manage lifetime, then shared_ptr is the right type, it
>> seems to me. Or use a wrapper around a shared_ptr, whichever.
> 
> I want all tree manipulation and transformation algorithms to see the
> value as if it were a T and not a shared_ptr or ref_holder.

OK, I see.

> shared_ptr is the right type to contain the terminal in the proto
> nullary expression, but in that particular case it is not the logical
> type of the value associated to that leaf node.
> 
> I want to be able to manipulate the tree using straight Proto tools
> (otherwise I might as well not use Proto at all -- the point is to have
> a well-defined de-facto standard framework for tree manipulation).

I understand your frustration.

> Those algorithms should not need to know how the value is stored in the
> expressions. It is just noise as far as they're concerned.
> 
> Alternatively I'll need to provide substitutes for value,
> result_of::value and _value, and ban those from my code and the
> programming interface, telling developers to use mybettervalue instead
> of proto's. That saddens me a bit.

I want you to understand that I'm not just being obstructionist or
obstinate. Proto's value functions are very simple and low-level and are
called very frequently. Adding metaprogramming overhead there, as would
be necessary for adding a customization point, has the potential to slow
compiles down for everybody, as well as complicating the code, tests and
docs. There are also unanswered questions. For instance, how does
proto::matches work with these terminals? Does it match on the actual
value or the logical one? There are arguments on both sides, but one
needs to be picked. Going with the logical value will force
proto::matches to go through this customization point for every
terminal. I also am thinking about how it effects other proto features
such as: as_expr, as_child, make_expr, unpack_expr, literal+lit, fusion
integration, display_expr, fold and all the other transforms, etc. etc.
I worry that allowing the logical type of a terminal to differ from the
actual type opens a Pandora's box of tough question with no obvious
answers, and trying to add this would cause unforeseen ripple effects
through the code. It makes me very uneasy, especially considering the
workaround on your end (sorry) is very simple.

Making proto's users happy must be balanced against feature-creep-ism,
which hurts everybody in the long run. So I'm afraid I'm still leaning
against adding this customization point. But I encourage you to file a
feature request, and if you can find a patch that doesn't negatively
affect compile times or end-user complexity and integrates cleanly with
all the other features of proto and has docs and tests, then -- and only
then -- would I add it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [proto-11] expression extension

2012-06-04 Thread Eric Niebler
On 6/4/2012 6:08 PM, Mathias Gaunard wrote:
>> Eric Niebler wrote:
>>> Proto-11 will probably take many months. I'm taking my time and
>>> rethinking everything. Don't hold your work up waiting for it.
> 
> Best thing to do is probably to make it lighter, keep separate things
> separate, and truly extendable.
> 
> For example, transforms seem too tighly coupled with the rest in the
> current Proto version, and their limitations are quite intrusive.

Can you be more specific and give some examples? BTW, I appreciate your
helping to improve proto.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [proto-11] expression extension

2012-06-04 Thread Eric Niebler
On 6/4/2012 12:48 PM, Joel Falcou wrote:
> Le 04/06/2012 21:18, Eric Niebler a écrit :
>> Assuming your types are efficiently movable, the default should just do
>> the right thing, and your expression trees can be safely stored in local
>> auto variables without dangling references. Does that help?
> 
> I was thinking of the case where we constructed a foo expression by
> calling expression constructor one  into the other. I guess it fixes that.

One into the other? I must be dense. Not getting it. ???

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [proto-11] expression extension

2012-06-04 Thread Eric Niebler
On 6/3/2012 11:47 PM, Joel Falcou wrote:
> On 03/06/2012 09:41, Eric Niebler wrote:
>> 2) Rather than writing generators, you'll be defining per-domain
>> make_expr function objects that accept a tag and a number of children.
>> How you decide to assemble these into an expression is up to you, but
>> you can use a helper like make_custom_expr above to simplify things.
> 
> It's very important that those make_expr function objects can be extended
> externally to any structure. By the look of it, it looks like
> it'll behave similarly to the switch_ construct, aka a template functor
> inside a struct to be extended outside.

The make_expr function object takes as arguments the tag and the
children. You can do whatever you want. If open extensibility matters,
you can dispatch to a function found by ADL or to a template specialized
on the tag like proto::switch_. It's up to you.

>> 3) There are other per-domain customization points: (a) store_value,
>> which specifies the capture policy for non-proto objects in expressions,
>> and (b) store_child, which specifies how children are stored. For both
>> (a) and (b), the default is: lvalues are stored by reference and rvalues
>> are stored by (moved from) value. Expressions can safely be stored in
>> auto variables by default.
> 
> So I guess it also fixes the problem we faced with Mathias on having to
> store everything by value to have proper chains of expression-building
> functions work properly ?

Not sure what you mean. Are you referring to the current discussion
about having to use shared_ptr to store something? That seems unrelated
to me.

Assuming your types are efficiently movable, the default should just do
the right thing, and your expression trees can be safely stored in local
auto variables without dangling references. Does that help?

>> Thanks all for now. Feedback welcome. If you have wishlist features for
>> proto-11, speak now.
> 
> One of my PhD students will start converting Quaff to C++11 this July, so
> depending on your advancement on Proto-11, we may give it a shot and
> report any missing features.

Proto-11 will probably take many months. I'm taking my time and
rethinking everything. Don't hold your work up waiting for it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Hold terminals by "smart" reference

2012-06-04 Thread Eric Niebler
On 6/3/2012 5:37 PM, Mathias Gaunard wrote:
> On 03/06/2012 18:36, Eric Niebler wrote:
> 
>>> Plus I don't have a good way to distinguish between a terminal of
>>> shared_ptr<T> and a terminal of T which is held through a shared_ptr.
>>
>> Have you tried a grammar? Something like (untested) proto::terminal<
>> boost::shared_ptr< proto::_ > > ?
> 
> That would match expressions of the form (assuming I have binary plus in
> my grammar)
> 
> shared_ptr<T> p1, p2;
> p1 + p2;

I don't understand. p1 and p2 are not Proto terminals, so "p1 + p2"
doesn't make sense. Even if it did, it would build a plus node which
would *not* match the grammar I gave above.

Let's back up. What are you trying to do?

> This is exactly what I do not want. I don't want my grammar to be
> cluttered by implementation details. It makes no sense semantically for
> shared_ptr to be values, it's just a technique used for life time
> management of specific values.
> 
> If I ever introduced shared_ptrs as values in my grammar, they might do
> something entirely different.

OK. If you want to hide the use of shared_ptr, I suggest writing a
custom wrapper class named, say, my_detail::by_ref_holder<T> which
stores a shared_ptr<T> as a data member. You can then choose how you
want to handle terminals of this type.

> To separate this more or less cleanly, I use a special tag for nullary
> expressions where shared_ptr is just an implementation detail, but it's
> still not really satisfying since the value doesn't have the right type
> in the tree.

I don't know what you mean by the "right type". If you want it held by
shared_ptr to manage lifetime, then shared_ptr is the right type, it
seems to me. Or use a wrapper around a shared_ptr, whichever.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Hold terminals by "smart" reference

2012-06-03 Thread Eric Niebler
On 6/3/2012 6:23 AM, Mathias Gaunard wrote:
> On 03/06/2012 02:59, Eric Niebler wrote:
> 
>> There is no way to hook proto::value to return anything but what is
>> stored in the terminal. It's a very dumb, low-level function. You could
>> easily define your own value function that does something smarter, tho.
> 
> I'm currently doing it with my own function, but that means I need to
> use my own value instead of Proto's everywhere (value, result_of::value,
> or _value).

Right.

> Plus I don't have a good way to distinguish between a terminal of
> shared_ptr<T> and a terminal of T which is held through a shared_ptr.

Have you tried a grammar? Something like (untested) proto::terminal<
boost::shared_ptr< proto::_ > > ?

> I was considering specializing something like expr term_shared, 0> and basic_expr, 0> to do
> this, but it appears I also need to change proto::value itself to call a
> function instead of returning child0 directly.

In my opinion, defining your own value function would be a much easier,
cleaner solution.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


[proto] [proto-11] expression extension

2012-06-03 Thread Eric Niebler

Hey all, this is just an FYI. I've been hard at work at a ground-up
redesign of proto for C++11. I've gotten far enough along that I know
what expression extension will look like, so I thought I'd share. This
should interest those who want finer control over how expressions in
their domain are constructed. Without further ado:

template<typename Tag, typename Args>
struct MyExpr;

struct MyDomain
  : proto::domain<MyDomain>
{
    struct make_expr
      : proto::make_custom_expr<MyExpr, MyDomain>
    {};
};

template<typename Tag, typename Args>
struct MyExpr
  : proto::basic_expr<Tag, Args, MyDomain>
  , proto::expr_assign<MyExpr<Tag, Args>, MyDomain>
  , proto::expr_subscript<MyExpr<Tag, Args>, MyDomain>
  , proto::expr_function<MyExpr<Tag, Args>, MyDomain>
{
    BOOST_PROTO_REGULAR_TRIVIAL_CLASS(MyExpr);
    using proto::basic_expr<Tag, Args, MyDomain>::basic_expr;
    using proto::expr_assign<MyExpr<Tag, Args>, MyDomain>::operator=;
};

Things to note:

1) Rather than writing expression wrappers, you'll be writing actual
expression types. Proto provides helpers that make this easy. To get the
basics, inherit from basic_expr. To get tree-building assign, subscript,
and function call operators, inherit from expr_assign, expr_subscript
and expr_function respectively.

2) Rather than writing generators, you'll be defining per-domain
make_expr function objects that accept a tag and a number of children.
How you decide to assemble these into an expression is up to you, but
you can use a helper like make_custom_expr above to simplify things.

3) There are other per-domain customization points: (a) store_value,
which specifies the capture policy for non-proto objects in expressions,
and (b) store_child, which specifies how children are stored. For both
(a) and (b), the default is: lvalues are stored by reference and rvalues
are stored by (moved from) value. Expressions can safely be stored in
auto variables by default.

4) Expressions are both Regular and Trivial. Regular means they have
normal copy and assign semantics (movable, too). Trivial means they can
be statically initialized. All their constructors are constexpr. Yes,
expressions can have both regular assignment semantics *and*
tree-building assignment operators. "x=y" is normal assignment when x
and y have the same type. It builds a tree node when they don't. Also,
you'll have to opt in to get the address-of operator.

Caveat: there is no compiler that can handle the above yet. Clang is
close, but it doesn't support inheriting constructors. Instead of this:

using proto::basic_expr<Tag, Args, MyDomain>::basic_expr;

you must do this:

typedef proto::basic_expr<Tag, Args, MyDomain> proto_basic_expr;
BOOST_PROTO_INHERIT_EXPR_CTORS(MyExpr, proto_basic_expr);

It's just a temporary hack.

Thanks all for now. Feedback welcome. If you have wishlist features for
proto-11, speak now.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] restructuring expression

2012-05-31 Thread Eric Niebler
On 5/30/2012 9:26 AM, Eric Niebler wrote:
> On 5/30/2012 4:33 AM, Joel Falcou wrote:
>> Won't having a way to build it properly from the get go be a better
>> solution ?
>>
>> This basically require the feature we spoke about earlier so that
>> building a X * Y
>> node check which from X or Y is a double and put it in the proper place ?
>>
>> Then when doing X * Expr, check if there is a double at child<0> of expr
>> and restructure the whole tree at generation time ?
> 
> That's not a bad suggestion. You can do this today with a custom generator.

Incidentally, I'm now working on an C++11 rewrite of proto. Mathias'
feature request and this problem have only served to reinforce a feeling
I've had for a while that generators are the wrong abstraction. It's
fine for simple expression wrapping, but for anything more complicated,
it simply makes no sense to build an expression only to have the
generator rip it apart and build a different one.

There's already a way to define a per-domain as_expr and as_child. What
is needed is a per-domain make_expr. Rather than building an expression
and passing it to a generator, I'll be passing child expressions and a
tag to a domain and asking it to build an expression from them. This, I
think, is the way it should have been designed from the beginning.

However, it means that you will no longer be able to use a proto grammar
as a generator. That was cute functionality, but I don't think it was
terribly useful in practice. How do folks feel about the loss of that
functionality?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] restructuring expression

2012-05-30 Thread Eric Niebler
On 5/30/2012 4:22 PM, Karsten Ahnert wrote:
> Next trial:
> 
> template< typename I > struct placeholder : I {};
> 
> proto::terminal< placeholder< mpl::size_t< 0 > > >::type const arg1 = {};
> 
> proto::display_expr(
> fusion::fold(
> proto::flatten( arg1 ) ,
> proto::functional::make_terminal()( 1.0 ) ,
> proto::functional::make_multiplies()
>   )
> );
> 
> gives a compilation error:
> 
> /boost/proto/fusion.hpp:86:20: error: no type named ‘proto_tag’ in
> ‘const struct placeholder<mpl::size_t<0ul> >’
>
> It is difficult for me to figure out what happens here. Any ideas?


Right. proto::flatten uses the type of the top-most node to figure out
how to flatten the expression tree. E.g., if you passed arg1 * 32, the
top-most node is a multiplication, so it would produce a list [arg1,
32]. You're passing just a terminal, so it creates a 1-element list
containing the value of the terminal: [placeholder]. Passing this to
proto::functional::make_multiplies results in the above error because
it's expecting a proto expression.

Try this:

#include <boost/proto/proto.hpp>
#include <boost/fusion/include/fold.hpp>
#include <boost/mpl/size_t.hpp>

namespace mpl = boost::mpl;
namespace proto = boost::proto;
namespace fusion = boost::fusion;
using proto::_;

template< typename I > struct placeholder : I {};

proto::terminal< placeholder< mpl::size_t< 0 > > >::type const arg1
= {};

template< class Expr >
void eval( const Expr &e )
{
proto::display_expr(
fusion::fold(
proto::flatten( e ) ,
proto::functional::make_terminal()( 1.0 ) ,
proto::when<_,
proto::_make_multiplies(proto::_byval(proto::_state), proto::_byval(_))>()
)
);
}

int main()
{
eval( 2 * arg1 * 42.0 * arg1 );
}

It takes the multiplication tree, flattens it, and turns it back into a
multiplication tree in reversed order.

HTH,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] restructuring expression

2012-05-30 Thread Eric Niebler
On 5/30/2012 4:33 AM, Joel Falcou wrote:
> On 05/29/2012 08:21 PM, Eric Niebler wrote:
>> On 5/29/2012 1:44 AM, Karsten Ahnert wrote:
>>> I have an arithmetic expression template where multiplication is
>>> commutative. Is there an easy way to order a chain of multiplications
>>> such that terminals with values (like proto::terminal< double >) appear
>>> at the beginning? For example that
>>>
>>> arg1 * arg1 * 1.5 * arg1
>>>
>>> will be transformed to
>>>
>>> 1.5 * arg1 * arg1 * arg1
>>>
>>> ?
>>>
>>> I can imagine some complicated algorithms swapping expressions and child
>>> expressions but I wonder if there is a simpler way.
>> There is no clever built-in Proto algorithm for commutative
>> transformations like this, I'm afraid. I was going to suggest flattening
>> to a fusion vector and using fusion sort, but I see there is no fusion
>> sort! :-( Nevertheless, that seems like a promising direction to me.
>> Once you have the sorted vector, you should(?) be able to use
>> fusion::fold to build the correct proto tree from it.
>>
> 
> Won't having a way to build it properly from the get go be a better
> solution ?
> 
> This basically require the feature we spoke about earlier so that
> building a X * Y
> node check which from X or Y is a double and put it in the proper place ?
> 
> Then when doing X * Expr, check if there is a double at child<0> of expr
> and restructure the whole tree at generation time ?

That's not a bad suggestion. You can do this today with a custom generator.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] restructuring expression

2012-05-30 Thread Eric Niebler
On 5/29/2012 3:29 PM, Karsten Ahnert wrote:
> On 05/29/2012 08:21 PM, Eric Niebler wrote:
>> On 5/29/2012 1:44 AM, Karsten Ahnert wrote:
>>> I have an arithmetic expression template where multiplication is
>>> commutative. Is there an easy way to order a chain of multiplications
>>> such that terminals with values (like proto::terminal< double >) appear
>>> at the beginning? For example that
>>>
>>> arg1 * arg1 * 1.5 * arg1
>>>
>>> will be transformed to
>>>
>>> 1.5 * arg1 * arg1 * arg1
>>>
>>> ?
>>>
>>> I can imagine some complicated algorithms swapping expressions and child
>>> expressions but I wonder if there is a simpler way.
>>
>> There is no clever built-in Proto algorithm for commutative
>> transformations like this, I'm afraid. I was going to suggest flattening
>> to a fusion vector and using fusion sort, but I see there is no fusion
>> sort! :-( Nevertheless, that seems like a promising direction to me.
>> Once you have the sorted vector, you should(?) be able to use
>> fusion::fold to build the correct proto tree from it.
> 
> Ok, this looks promising. But I failed to get proto::fold to work. As a
> first step I tried to flatten an expression and reconstruct it with
> proto::fold:
> 
> struct back_fold :
> proto::fold<
> proto::_ ,
> proto::_state ,
> proto::functional::make_multiplies( proto::_state , proto::_ )
> > { };
> 
> template< class Expr >
> void eval( const Expr &e )
> {
> back_fold b;
> proto::display_expr(
>   b( proto::flatten( e ) , proto::_make_terminal()( 1.0 ) )
>   );
> }
> 
> eval( 2.0 * arg1 * arg1 * arg1 * 1.0 );
> 
> Unfortunately this does not compile. Any ideas what is wrong here?

Yes, I know. I said you should use fusion::fold, not proto::fold. Once
you flatten a proto expression, you get a fusion sequence. That means
you have to use a fusion algorithm on it. Proto's transforms expect
their first parameter to be proto expression trees. The result of
proto::flatten is emphatically not.
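The flatten/sort/refold idea can be illustrated without any Proto or Fusion machinery. Below is a minimal, self-contained C++ sketch (the `factor`, `hoist_constants`, and `refold` names are invented for this example, and the constant formatting is deliberately simplistic): it represents a flattened commutative product as a sequence, stably moves the constants to the front, then folds the sequence back into a left-nested product — the role fusion::fold would play on the real expression tree.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// One element of a flattened multiplication chain: either a numeric
// constant or a named placeholder such as "arg1".
struct factor {
    bool is_constant;
    double value;       // meaningful when is_constant is true
    std::string name;   // meaningful otherwise
};

// Stable-partition the constants to the front. This mirrors the
// "sort the flattened sequence" step; stability preserves the
// relative order of the placeholders.
std::vector<factor> hoist_constants(std::vector<factor> chain) {
    std::stable_partition(chain.begin(), chain.end(),
                          [](factor const& f) { return f.is_constant; });
    return chain;
}

// Fold the sequence back into a left-nested textual tree -- what
// fusion::fold would do when rebuilding the Proto expression.
std::string refold(std::vector<factor> const& chain) {
    std::string out;
    for (factor const& f : chain) {
        // substr(0, 3) is a crude "1.5"-style formatting, good
        // enough for this illustration only.
        std::string term = f.is_constant
            ? std::to_string(f.value).substr(0, 3)
            : f.name;
        out = out.empty() ? term : "(" + out + " * " + term + ")";
    }
    return out;
}
```

With this sketch, `arg1 * arg1 * 1.5 * arg1` flattened and reordered refolds as `(((1.5 * arg1) * arg1) * arg1)`.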

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] restructuring expression

2012-05-29 Thread Eric Niebler
On 5/29/2012 1:44 AM, Karsten Ahnert wrote:
> I have an arithmetic expression template where multiplication is
> commutative. Is there an easy way to order a chain of multiplications
> such that terminals with values (like proto::terminal< double >) appear
> at the beginning? For example that
> 
> arg1 * arg1 * 1.5 * arg1
> 
> will be transformed to
> 
> 1.5 * arg1 * arg1 * arg1
> 
> ?
> 
> I can imagine some complicated algorithms swapping expressions and child
> expressions but I wonder if there is a simpler way.

There is no clever built-in Proto algorithm for commutative
transformations like this, I'm afraid. I was going to suggest flattening
to a fusion vector and using fusion sort, but I see there is no fusion
sort! :-( Nevertheless, that seems like a promising direction to me.
Once you have the sorted vector, you should(?) be able to use
fusion::fold to build the correct proto tree from it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Feature request: control how the built-in operator overloads build expressions

2012-05-11 Thread Eric Niebler
On 5/11/2012 8:04 AM, Mathias Gaunard wrote:
> Proto comes with operator overloads that call make_expr if the arguments
> are in a compatible domain and if the grammar is satisfied.
> 
> In some cases however, we'd like to build custom trees that don't
> exactly map to what the default operator overloads would do. We suggest
> adding an extension point per-domain to specify how to make_expr but
> only in the context of the provided operator overloads.
> 
> The transformation could arguably be done in the generator or in a later
> pass, but we'd prefer doing that early for different reasons:
>  - This only affects expressions generated through the built-in operator
> overloads. Other expressions are generated through our custom functions,
> which already do what we want.
>  - We'd like to keep the responsibility of the generator of converting a
> naked expression to an expression in our domain (which, in the case of
> NT2, involves computing the size and logical type of the elements).
>  - Doing it at a later pass means we have to run the generator on things
> that don't necessarily have the desired form
> 
> The alternative is for us to not rely on Proto-defined operator
> overloads and to overload all that stuff ourselves.

This seems quite reasonable. Could you file a feature request on trac so
I don't lose track of it? Thanks.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Restructuring nodes in generator

2012-04-28 Thread Eric Niebler
On 4/28/2012 3:38 AM, Mathias Gaunard wrote:
> On 27/04/12 21:47, Joel Falcou wrote:
>> How can I use a custom generator to turn a specific node expression into
>> a different version of itself without triggering endless recursive calls?
>>
>> My use case is the following: I want to catch all function nodes looking
>> like
>>
>> tag::function( some_terminal, grammar, ..., grammar )
>>
>> with any number of grammar instances
>>
>> into
>>
>> tag::function( some_terminal, my_tuple_terminal,
>> some_other_info )
>>
>> basically making an n-ary function node into a ternary node with a
>> specific structure. Of course this new node should live in whatever domain
>> some_terminal is coming from.


And some_terminal is not in your domain? How does your generator get
invoked? I guess I'm confused. Can you send a small repro?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] The proper way to compose function returning expressions

2012-04-26 Thread Eric Niebler
On 4/26/2012 9:35 AM, Mathias Gaunard wrote:
> On 26/04/12 18:02, Eric Niebler wrote:
> 
>> Interesting. I avoided this design because I was uncertain whether the
>> compiler would be able to optimize out all the copies of the
>> intermediate nodes. You're saying NT2 does it this way and doesn't
>> suffer performance problems? And you've hand-checked the generated code
>> and found it to be optimal? That would certainly change things.
>>
> 
> NT2 treats large amounts of data per expression, so construction time is
> not very important. It's the time to evaluate the tree in a given
> position that matters (which only really depends on proto::value and
> proto::child_c, which are always inlined now).
> 
> We also have another domain that does register-level computation, where
> construction overhead could be a problem. The last tests we did with
> this was a while ago and was with the default Proto behaviour. That
> particular domain didn't get sufficient testing to give real conclusions
> about the Proto overhead.

In that case, I will hold off making any core changes to Proto until I
have some evidence that it won't cause performance regressions.

Thanks,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] The proper way to compose function returning expressions

2012-04-26 Thread Eric Niebler
On 4/25/2012 1:41 PM, Mathias Gaunard wrote:
> On 24/04/12 22:31, Eric Niebler wrote:
>> On 4/23/2012 10:17 PM, Joel Falcou wrote:
>>> On 04/24/2012 12:15 AM, Eric Niebler wrote:
>>>
>>> I think this is an important issue to solve as far as Proto grokability
>>> goes.
>>
>> Agreed. It would be very nice to have. But you still have to know when
>> to use it.
>>
>>> One of my coworkers on NT2 tried to do just this (the norm2 thingy) and
>>> he got puzzled by the random crashes.
>>>
> [...]
>>
>> The implicit_expr code lived in a detail namespace in past versions of
>> proto. You can find it if you dig through subversion history. I'm not
>> going to do that work for you because the code was broken in subtle ways
>> having to do with the consistency of terminal handling. Repeated
>> attempts to close the holes just opened new ones. It really should be
>> left for dead. I'd rather see what you come up with on your own.
> 
> The issue Joel had in NT2 was probably unrelated to this. In NT2 we hold
> all expressions by value unless the tag is boost::proto::tag::terminal.
> This was done by modifying as_child in our domain.
> 
> I strongly recommend doing this for most proto-based DSLs. It makes auto
> foo = some_proto_expression work as expected, and allows expression
> rewriting of the style that was shown in the thread without any problem.
> 
> There is probably a slight compile-time cost associated to it, though.

Interesting. I avoided this design because I was uncertain whether the
compiler would be able to optimize out all the copies of the
intermediate nodes. You're saying NT2 does it this way and doesn't
suffer performance problems? And you've hand-checked the generated code
and found it to be optimal? That would certainly change things.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] The proper way to compose function returning expressions

2012-04-24 Thread Eric Niebler
On 4/23/2012 10:17 PM, Joel Falcou wrote:
> On 04/24/2012 12:15 AM, Eric Niebler wrote:
>> implicit_expr() returns an object that holds its argument and is
>> convertible to any expression type. The conversion is implemented by
>> trying to implicitly convert all the child expressions, recursively.
>> It sort of worked, but I never worked out all the corner cases, and
>> documenting it would have been a bitch. Perhaps I should take another
>> look. Patches welcome. :-) 
> 
> I think this is an important issue to solve as far as Proto grokability
> goes.

Agreed. It would be very nice to have. But you still have to know when
to use it.

> One of my coworkers on NT2 tried to do just this (the norm2 thingy) and
> he got puzzled by the random crashes.
> 
> I think we should at least document the issues (I can write that and
> submit a patch for the doc) and
> maybe resurrect this implicit_expr. Do you have any remnant of code
> lying around so I don't start from scratch ?

The implicit_expr code lived in a detail namespace in past versions of
proto. You can find it if you dig through subversion history. I'm not
going to do that work for you because the code was broken in subtle ways
having to do with the consistency of terminal handling. Repeated
attempts to close the holes just opened new ones. It really should be
left for dead. I'd rather see what you come up with on your own.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] The proper way to compose function returning expressions

2012-04-23 Thread Eric Niebler
On 4/23/2012 1:01 PM, Joel Falcou wrote:
> Let's say we have a bunch of functions like sum and sqr defined on a
> proto domain to return
> expression of tag sum_ and sqr_ in this domain. One day we want to make
> a norm2(x) function
> which is basically sum(sqr(x)).
> 
> My feeling is that I should be able to write it using sqr and sum
> expressions.
> Alas it seems this results in dangling references, crashes, and some sad pandas.
> 
> Then I remember about proto::deep_copy but I have a worries. x is
> usually a terminal
> holding a huge matrix like value and I just don't want this huge matrix
> to be copied.
> 
> What's the correct way to handle such a problem ? How can I build new
> function returning
> expressions built from expression composition without incurring a huge
> amount of copy ?

Right. The canonical way of doing this is as follows:

#include <boost/proto/proto.hpp>
namespace proto = boost::proto;

struct sum_ {};
struct sqr_ {};

namespace result_of
{
template<typename T>
struct sum
  : proto::result_of::make_expr<sum_, T>
{};

template<typename T>
struct sqr
  : proto::result_of::make_expr<sqr_, T>
{};

template<typename T>
struct norm2
  : sum<typename sqr<T>::type>
{};
}

template<typename T>
typename result_of::sum<T &>::type const
sum(T &t)
{
return proto::make_expr<sum_>(boost::ref(t));
}

template<typename T>
typename result_of::sqr<T &>::type const
sqr(T &t)
{
return proto::make_expr<sqr_>(boost::ref(t));
}

template<typename T>
typename result_of::norm2<T &>::type const
norm2(T &t)
{
return
proto::make_expr<sum_>(proto::make_expr<sqr_>(boost::ref(t)));
}

int main()
{
sum(proto::lit(1));
sqr(proto::lit(1));
norm2(proto::lit(1));
}


As you can see, the norm2 is not implemented in terms of the sum and sqr
functions. That's not really ideal, but it's the only way I know of to
get fine grained control over which parts are stored by reference and
which by value.

You always need to use make_expr to build expression trees that you
intend to return from a function. That's true even for the built-in
operators. You can't ever return the result of expressions like "a+b*42"
... because of the lifetime issues.

You can't use deep_copy for the reason you mentioned.
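The lifetime issue is easy to see in miniature. The sketch below is not Proto code; it is a toy expression template with invented names (`plus_node`, `literal`, `make_plus`, `make_sum`) whose factory stores children by value. That is why the returned tree can safely outlive the temporaries used to build it — the same property a make_expr-style factory gives you when you control which children are wrapped in boost::ref and which are copied.

```cpp
// Minimal expression-template nodes. Holding a child by reference is
// fine within a single full-expression, but returning such a tree from
// a function would leave it referring to dead temporaries -- the
// lifetime problem described above.
template <class L, class R>
struct plus_node {
    L left;   // stored by value here, so the tree is safe to return
    R right;
    double eval() const { return left.eval() + right.eval(); }
};

struct literal {
    double v;
    double eval() const { return v; }
};

// The make_expr-style factory: the caller decides what gets copied.
template <class L, class R>
plus_node<L, R> make_plus(L l, R r) { return {l, r}; }

// Because every child is stored by value, the returned tree outlives
// the temporaries used to build it.
plus_node<literal, literal> make_sum(double a, double b) {
    return make_plus(literal{a}, literal{b});
}
```

Capturing terminals by value is the safe default for small payloads; for a huge matrix terminal you would instead keep a reference and ensure the referent outlives the tree, which is exactly the trade-off boost::ref expresses.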

I once had a function proto::implicit_expr, which you could have used
like this:

template<typename T>
typename result_of::norm2<T &>::type const
norm2(T &t)
{
return proto::implicit_expr(sum(sqr(t)));
}

implicit_expr() returns an object that holds its argument and is
convertible to any expression type. The conversion is implemented by
trying to implicitly convert all the child expressions, recursively. It
sort of worked, but I never worked out all the corner cases, and
documenting it would have been a bitch. Perhaps I should take another
look. Patches welcome. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Held nodes by value for Fundamental types

2012-04-09 Thread Eric Niebler
On 4/9/2012 2:21 PM, Fernando Pelliccioni wrote:
> Hello,
> 
> I'm wondering if it would be appropriate to treat the fundamental types
> (char, short, int, double, ...) by value, by default.
> 
> I wrote this simple piece of code.
> I'm not sure if I'm leaving out any other implication,
> but I think it may be an improvement.
> Please, tell me if I am wrong.

Thanks. I thought long about whether to handle the fundamental types
differently than user-defined types and decided against it. The
capture-everything-by-reference-by-default model is easy to explain and
reason about. Special cases can be handled on a per-domain basis as needed.

There is a way to change the capture behavior for your domain. The newly
released version of Proto documents how to do this (although the
functionality has been there for a few releases already).

http://www.boost.org/doc/libs/1_49_0/doc/html/proto/users_guide.html#boost_proto.users_guide.front_end.customizing_expressions_in_your_domain.per_domain_as_child

In short, you'll need to define an as_child metafunction in your domain
definition:

class my_domain
  : proto::domain< my_generator, my_grammar >
{
// Here is where you define how Proto should handle
// sub-expressions that are about to be glommed into
// a larger expression.
template< typename T >
struct as_child
{
typedef unspecified-Proto-expr-type result_type;

result_type operator()( T & t ) const
{
return unspecified-Proto-expr-object;
}
};
};

In as_child, you'll have to do this (pseudocode):

if (is_expr<T>)
  return T &
else if (is_fundamental<T>)
  return proto::terminal<T>::type
else
  return proto::terminal<T &>::type

The metaprogramming is left as an exercise. :-)
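As a rough answer to the exercise — using standard type traits rather than Proto's, and an invented `terminal`/`is_expr` stand-in, so this is a sketch of the dispatch logic only, not Proto's actual as_child machinery:

```cpp
#include <type_traits>

// Stand-ins for the real machinery (hypothetical names, not Proto's).
template <class T> struct terminal { T value; };
template <class T> struct is_expr : std::false_type {};
template <class T> struct is_expr<terminal<T>> : std::true_type {};

struct matrix {};  // stand-in for a user-defined terminal type

// One way to spell the pseudocode above: expressions pass through by
// reference, fundamentals are wrapped by value, and everything else
// is wrapped by reference.
template <class T>
struct as_child_result
  : std::conditional<
        is_expr<T>::value,
        T&,                      // already an expression: pass through
        typename std::conditional<
            std::is_fundamental<T>::value,
            terminal<T>,         // copy small built-in types
            terminal<T&>         // refer to user-defined objects
        >::type>
{};
```

In a real domain, `as_child::operator()` would then construct and return the `result_type` this metafunction selects.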

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Best way to change tag in generator?

2012-01-31 Thread Eric Niebler
On Mon, Jan 30, 2012 at 4:53 AM, Mathias Gaunard
 wrote:
> For a couple of reasons, I'm considering replacing the proto-provided
> operator tags by my own in the generator of my domain.
>
> What would the easiest and less costly way of doing it?

You mean, for instance, you want Proto's operator+ to return an
expression with a tag other than tag::plus? Can I ask why? You can do
this in a generator, but you're right to be cautious of compile times.
Proto will instantiate a bunch of templates while assembling each
expression before passing them to your generator, which would just rip
them apart and throw them away.

Your other option would be to use the (undocumented)
BOOST_PROTO_DEFINE_UNARY_OPERATOR and
BOOST_PROTO_DEFINE_BINARY_OPERATOR to define a complete alternate set
of operator overloads that use your tags instead of proto's. See
proto/operators.hpp. Nobody has ever done this and I don't know if it
would work.

Eric


Re: [proto] user docs for advanced features

2012-01-04 Thread Eric Niebler
On 1/4/2012 7:37 AM, Thomas Heller wrote:

> Thanks for adding this documentation!

Great feedback. I've just accommodated all of it. Thanks!

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


[proto] user docs for advanced features

2012-01-02 Thread Eric Niebler
Proto's users guide has been behind the times for a while. No longer.
More recent and powerful features are now documented. Feedback welcome.

Sub-domains:

http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/front_end/customizing_expressions_in_your_domain/subdomains.html

Per-domain as_child customization:
==
http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/front_end/customizing_expressions_in_your_domain/per_domain_as_child.html

External Transforms:
===
http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/back_end/expression_transformation/external_transforms.html

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Grouping expressions

2011-12-29 Thread Eric Niebler
On 12/29/2011 10:07 AM, Bart Janssens wrote:
> Hi guys,
> 
> Currently, we are using the proto operator() overload and grammars to
> "group" expressions together. The result is one big expression that
> consists of several sub-expressions. An example can be found here:
> https://github.com/coolfluid/coolfluid3/blob/master/plugins/UFEM/src/UFEM/NavierStokes.cpp#L93
> 
> The problem with this approach is that for example the file above
> takes over 2GB of RAM to compile. I think this is due to the size of
> the expression (the group(...) call starting on line 93 is a single
> expression).
> 
> I was wondering if it would be useful to find some other way to treat
> a group of expressions, maybe by using a fusion vector of expressions,
> or even using the new variadic templates of C++11? Could this have a
> significant impact in reducing the compiler memory usage? An extra
> complication is that the groups may be nested. In the example, there
> is a second sort of group inside the element_quadrature call.

Are you certain your problem is caused by using operator() for grouping?
I think this is just a very big expression template, and any syntax you
choose for grouping will result in long compile times and heavy memory
usage.

Can I ask, what version of Boost are you using? I see you #define
BOOST_PROTO_MAX_ARITY to 10 at the top. In recent versions of Proto, 10
is the default. And newer Proto versions already make use of variadic
templates for operator() if available.

Other things to think about: does this really need to be all one big
expression, or can parts of it be broken up and type-erased, as with
spirit::qi::rule?
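The type-erasure suggestion can be sketched in plain C++11, independent of Spirit (the names `erased_term`, `erase`, `eval_group`, and `demo` are made up for this example):

```cpp
#include <functional>
#include <vector>

// Type-erasing sub-expressions behind std::function, the same trick
// spirit::qi::rule uses: each erased piece is compiled on its own, so
// one giant expression becomes several independently compiled parts.
using erased_term = std::function<double(double)>;

template <class Expr>
erased_term erase(Expr e) {
    // The static type of Expr disappears behind the std::function.
    return [e](double x) { return e(x); };
}

// A "group" then degenerates to a runtime container of erased parts,
// instead of one expression whose type grows with every sub-expression.
double eval_group(std::vector<erased_term> const& group, double x) {
    double sum = 0;
    for (erased_term const& t : group) sum += t(x);
    return sum;
}

// Hypothetical stand-in for two grouped sub-expressions.
double demo(double x) {
    std::vector<erased_term> group;
    group.push_back(erase([](double v) { return v * v; }));
    group.push_back(erase([](double v) { return 2 * v; }));
    return eval_group(group, x);
}
```

The trade-off is a virtual-call-like indirection at evaluation time in exchange for much smaller template instantiations — usually a win when a single expression costs gigabytes of compiler memory.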

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Problems with unary function node

2011-10-28 Thread Eric Niebler
On 10/28/2011 6:29 AM, Mathias Gaunard wrote:
> On 28/10/2011 07:30, Eric Niebler wrote:
> 
>> Regardless, I'm convinced that a complete fix is possible, and I have it
>> mostly coded. It would require you (the user) to disable unary function
>> and assign in your domain via a grammar. But. It's expensive at compile
>> time, and everybody pays. I need to be convinced before I proceed.
> 
> I think it would also be fine to just document the issue, and let people
> special-case the generator if needed.
> 
> I wouldn't want all Proto-based code to become slower just because of this.
> 
> I would be quite interested to see what the fix is, though.

I opened a trac issue, and attached the patch.

https://svn.boost.org/trac/boost/ticket/6070

I don't think this solves your problem, though. See below.

>> Your
>> example code was very contrived. (Certainly you don't need to ask a
>> Proto expression extension type whether it is a proto expression. The
>> answer will always be yes.)
>> So what is your realistic usage scenario?
>> What type categorization do you want to do on the extension type that
>> you can't do on the raw passed-in expression?
> 
> The call goes through generic components that don't necessarily deal
> with Proto expressions, but some of them may have special
> specializations for Proto expressions.
> 
> The categorization of non-raw expressions is of course richer, because
> each of those went through their own generators and attached special
> semantic information to each node type.
> 
> In NT2, we generate expressions that look like this
> 
> template<class Expr>
> struct expression
>   : proto::extends< Expr, expression<Expr> >
> {
> typedef typename extent_type<Expr>::type extent_type;
> 
> expression(Expr const& expr, extent_type const& ext)
>   : proto::extends< Expr, expression<Expr> >(expr)
>   , extent_(ext)
> {
> }
> 
> extent_type const& extent() const { return extent_; }
> 
> private:
> extent_type extent_;
> };
> 
> It not only wraps the naked Proto expression, but it also contains what
> the expression represents from a logical point of view (at some point we
> wanted to use domains for this, but it's just not practical).
> The expression also contains its logical size, which is computed at
> runtime by the generator, and is tightly coupled with what the
> expression represents.
> 
> So we categorize a naked tag(a0, a1) expression as
> expr_< unspecified_, domain, tag >
> 
> The same expression with the result type information of int32_t is
> expr_< scalar_< int32_ >, domain, tag >
> 
> In certain situations, I ended up with a category of unspecified_
> on some of my expression types because the categorization meta-function
> was instantiated on an incomplete type (the expression type for the
> terminal).
> This was quite unexpected and hard to debug.
> 
> Regardless, we like to have the ability to inject specializations *after*
> the expression type has been defined, which only works if instantiation
> is delayed until the function is actually called.

It could be that I'm stupid, or that I'm tired, or that your explanation
is insufficient ... but I don't get it. Regardless, it seems to me that
you still want your expressions to have a unary function operator, but
that you don't want to compute its return type eagerly. And as you said,
that's just not possible in today's language. My patch solves the
problem only for those people who want to turn off unary function
completely. Ditto for assign.

What should you do? Avoid calling those generic categorization
metafunctions in your generator. The generator has to assume the type it
computes is incomplete. Don't pass it to anything that tries to
introspect it. You have to find another way to get the information you need.

Sorry I can't be more helpful.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Problems with unary function node

2011-10-27 Thread Eric Niebler
On 10/22/2011 3:02 PM, Mathias Gaunard wrote:
> On 10/18/2011 05:53 AM, Eric Niebler wrote:
>> On 10/12/2011 2:24 PM, Mathias Gaunard wrote:
>>> There seems to be a significant problem with the unary function node
>>> (and by that I mean (*this)() ) generated by proto::extends and
>>> BOOST_PROTO_EXTENDS_USING_FUNCTION().
>> 
>>
>> Sorry for the delay, and I'm afraid I don't have news except to say that
>> this is on my radar. I hope to look into this soon. But if someone were
>> to beat me to it, that'd be pretty awesome. :-)
> 
> I don't think it can really be fixed in C++03.
> In C++11 though, it's pretty easy, you can just make it a template with
> a default template argument.

It should already be "fixed" for C++11 because operator() uses variadics
if they're available. It's been that way for a while. But in
investigating this problem, I've found that the copy assign operator can
cause the same problem, and that can't be "fixed" this way, even in C++11.

Regardless, I'm convinced that a complete fix is possible, and I have it
mostly coded. It would require you (the user) to disable unary function
and assign in your domain via a grammar. But. It's expensive at compile
time, and everybody pays. I need to be convinced before I proceed. Your
example code was very contrived. (Certainly you don't need to ask a
Proto expression extension type whether it is a proto expression. The
answer will always be yes.) So what is your realistic usage scenario?
What type categorization do you want to do on the extension type that
you can't do on the raw passed-in expression?

Thanks,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Problems with unary function node

2011-10-17 Thread Eric Niebler
On 10/12/2011 2:24 PM, Mathias Gaunard wrote:
> There seems to be a significant problem with the unary function node
> (and by that I mean (*this)() ) generated by proto::extends and
> BOOST_PROTO_EXTENDS_USING_FUNCTION().


Sorry for the delay, and I'm afraid I don't have news except to say that
this is on my radar. I hope to look into this soon. But if someone were
to beat me to it, that'd be pretty awesome. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Recent (rvalue support) changes in proto causes a whole bunch of regressions in Spirit

2011-10-10 Thread Eric Niebler
On 10/10/2011 4:50 PM, Joel de Guzman wrote:
> Everything's green now, Eric. The problem was not pervasive after
> all; just a couple of fixes solved everything.

Whew!

> There's no need to revert. It makes me wonder though if we've missed
> something that will blow up in the future. I'll probably have to
> scrutinize classes that use proto::extends and check for overloads.

The problem is that Spirit defines its own operator overload(s) that
compete with Proto's. The officially sanctioned way of doing this with
Proto is to use a grammar to restrict Proto's overloads, and then define
your own.

Admittedly, Proto's docs could be more clear about best practice here.

> You might want to warn other proto users of this potential
> breaking change.

This will need to be mentioned in the release notes, yes.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Recent (rvalue support) changes in proto causes a whole bunch of regressions in Spirit

2011-10-10 Thread Eric Niebler
On 10/10/2011 2:08 AM, Joel de Guzman wrote:
> On 10/10/2011 1:52 PM, Joel de Guzman wrote:
>> On 10/10/2011 12:38 PM, Eric Niebler wrote:
>>>
>>> Bummer. I suggest adding an rvalue ref overload of operator%= that
>>> shares an implementation with the const lvalue one. Should just be a few
>>> lines of code. Is that a problem?
>>
>> Not really a problem. But, %= is just an example of the problems
>> (plural with an s). We suspect that it is more widespread. I'll
>> see how pervasive the changes need to be and get back to you.
> 
> Ok, adding the %= for rvalue refs for Qi and Karma fixed a lot of
> the failing tests. However, I am not sure how to fix the tests
> involving Lex (crashes on VC10 but OK on GCC). The compiler
> tutorial I am working on also got broken. I am not sure what else
> in the examples got broken. It's quite difficult to ascertain
> where the problem is because the code builds without errors
> but either crashes or does not work as expected at runtime.
> This is an insidious critter.
> 
> Hartmut, I committed the fix for Qi and Karma. Can you please
> take a look at the Lex regressions? There's a good chance
> that the problem with the examples is also related to the Lex
> problem.

I just ran the spirit_v2/lex and spirit_v2/lex_regressions test suites
on msvc-10.0, and everything passed for me. Is this fixed already? Let
me know asap so I know whether to revert my Proto changes.

(Note: If they don't make it in this time, these Proto changes will
eventually go back in for next release, so Qi/Karma/Lex will need to be
fixed eventually.)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Recent (rvalue support) changes in proto causes a whole bunch of regressions in Spirit

2011-10-09 Thread Eric Niebler
On 10/9/2011 8:32 PM, Joel de Guzman wrote:
> Hi,
> 
> This code:
> 
>   #include <boost/spirit/include/qi.hpp>
> 
>   int main()
>   {
>   namespace qi = boost::spirit::qi;
> 
>   qi::rule r;
>   r %= (+qi::char_);
> 
>   return 0;
>   }
> 
> no longer works as expected. r %= (+qi::char_) becomes a no-op.
> Before it calls either:
> 
>   template <typename Expr>
>   friend rule& operator%=(rule& r, Expr const& expr);
> 
>   // non-const version needed to suppress proto's %= kicking in
>   template <typename Expr>
>   friend rule& operator%=(rule& r, Expr& expr);
> 
> Both defined in the rule class.
> 
> Correct me if I'm wrong, but it seems the changes in proto forces
> us to support rvalue refs, but we are not ready for that yet and
> it's too late in the release cycle.
> 
> Thoughts?

Bummer. I suggest adding an rvalue ref overload of operator%= that
shares an implementation with the const lvalue one. Should just be a few
lines of code. Is that a problem?

As an alternative, you can use a domain/grammar to disable Proto's
operator%= for Spirit rules.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Patch for extending proto::switch_

2011-09-23 Thread Eric Niebler
>> On 9/16/2011 9:09 AM, Pierre Esterie wrote:
>>> I'm working with Joel Falcou and I just submitted a patch in the tracker
>>> about extending proto::switch_.
>>>
>>> Here is the previous thread : http://lists.boost.org/proto/2011/08/0559.php
>>> And here is the ticket : https://svn.boost.org/trac/boost/ticket/5905


Just fyi, this has now been merged to release and the ticket is closed.
Thanks for the contribution!

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Patch for extending proto::switch_

2011-09-19 Thread Eric Niebler
On Monday, September 19, 2011 11:30:17 AM, Eric Niebler wrote:
> On 9/16/2011 9:09 AM, Pierre Esterie wrote:
>> Hi everybody,
>>
>> I'm working with Joel Falcou and I just submitted a patch in the tracker
>> about extending proto::switch_.
>>
>> Here is the previous thread : http://lists.boost.org/proto/2011/08/0559.php
>> And here is the ticket : https://svn.boost.org/trac/boost/ticket/5905
>>
>> All comments and discussions are welcomed !
>
> Thanks for your work on this, Pierre! I just got back from a week-long
> vacation. I hope to give this a look in the next few days.

Thanks, Pierre. My feedback is attached to the ticket. When you change 
the reference docs for switch_, you only need to document the behavior 
of the primary template, not the specialization.



Re: [proto] Patch for extending proto::switch_

2011-09-19 Thread Eric Niebler
On 9/16/2011 9:09 AM, Pierre Esterie wrote:
> Hi everybody,
> 
> I'm working with Joel Falcou and I just submitted a patch in the tracker
> about extending proto::switch_.
> 
> Here is the previous thread : http://lists.boost.org/proto/2011/08/0559.php
> And here is the ticket : https://svn.boost.org/trac/boost/ticket/5905
> 
> All comments and discussions are welcomed !

Thanks for your work on this, Pierre! I just got back from a week-long
vacation. I hope to give this a look in the next few days.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Extending proto::switch_

2011-08-29 Thread Eric Niebler
On 8/28/2011 6:17 PM, Joel Falcou wrote:
> On 06/08/2011 08:10, Eric Niebler wrote:
>> On 8/5/2011 10:55 PM, Joel falcou wrote:
>>> On 06/08/11 07:30, Eric Niebler wrote:
>>>> That wouldn't be enough because proto::matches "knows" about
>>>> proto::switch_. It would be easy enough to extend proto::switch_ to
>>>> take an optional mpl metafunction that accepts an expression and
>>>> returns a type to dispatch on. It would default to proto::tag_of. Or,
>>>> for the sake of consistency with the rest of proto, it should
>>>> probably be a transform, in which case it would default to
>>>> proto::tag_of<_>().
>>>
>>> OK
>>>
>>>> Could you open a feature request?
>>>
>>> Well, we wanted to know the correct road. I have someone to do it, so
>>> let's say we'll provide you with a patch request instead ;)
>>
>> Even better. :-)
>>
> 
> Here is a first try:
> 
> https://github.com/MetaScale/nt2/blob/30251fccec639a3823179fc04100fb3fba0688b2/modules/sdk/include/nt2/sdk/dsl/select.hpp
> 
> 
> Not sure it is perfect but it works, seemingly :)

Good. Obviously, this needs to be called switch_ instead of select_.
There needs to be an appropriate default for the Transform parameter,
something like tag_of<_>(). There should also be a specialization of
switch_ when the transform is tag_of<_>() to make it as efficient as the
current switch_ (but it should be backward-compatible without the
specialization -- test this!). And of course docs and tests.

> My main concern is: can I remove this intermediate when + result_of?

No need to replace the internal use of when, but you can access when's
nested impl template directly instead of using result_of's needlessly
complicated machinery. See the implementation of if_.

Thanks!

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Defining the result domain of a proto operator

2011-08-26 Thread Eric Niebler
On 8/26/2011 11:44 AM, Eric Niebler wrote:
> Proto will
> compute the domain of m*v to be matrix. It will use matrix_domain's
> generator to post-process the new expression. That generator can do
> anything -- including placing the new expression in the vector domain.
> In short, there is no requirement that a domain's generator must produce
> expressions in that domain. Just hack matrix_domain's generator.

Expanding on this a bit ... there doesn't seem to be a sub-/super-domain
relationship between matrix and vector. Why not make them both (together
with covector) sub-domains of some abstract nt2_domain, which has all
the logic for deciding which sub-domain a particular expression should
be in based on its structure? Its generator could actually be a Proto
algorithm, like:

  struct nt2_generator
    : proto::or_<
          proto::when<
              vector_grammar
            , proto::generator< vector_expr >(_)
          >
        , proto::when<
              covector_grammar
            , proto::generator< covector_expr >(_)
          >
        , proto::otherwise<
              proto::generator< matrix_expr >(_)
          >
      >
  {};

  struct nt2_domain
    : proto::domain< nt2_generator >
  {};

Etc...

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Defining the result domain of a proto operator

2011-08-26 Thread Eric Niebler
On 8/26/2011 11:23 AM, Joel Falcou wrote:
> On 26/08/2011 17:18, Eric Niebler wrote:
>> Why can't you use a grammar to recognize patterns like these and
>> take appropriate action?
> 
> we do. Another point is that container-based operations in our system
> need to know the number of dimensions of the container. Domains carry
> this dimension information, as we don't want to mix different-sized
> containers in the same expression. The containers we have are:
> 
> table, which can have 1 to MAX_DIM dimensions; matrix, which behaves
> as table<2> when mixed with table; covector and vector, which act as a
> matrix when mixed with matrix and as table<2> with table.
> 
> The domains are then flagged with this dimension information.

OK, then I'll just assume you guys know what you're doing ('cause you
clearly do).

The original question was:

> Is there a mechanism in Proto to define how the domain of a new node
> should be computed depending on the tag and the domains of the
> children?

The answer is no, but you don't need that, I don't think. Proto will
compute the domain of m*v to be matrix. It will use matrix_domain's
generator to post-process the new expression. That generator can do
anything -- including placing the new expression in the vector domain.
In short, there is no requirement that a domain's generator must produce
expressions in that domain. Just hack matrix_domain's generator.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Defining the result domain of a proto operator

2011-08-26 Thread Eric Niebler
On 8/26/2011 10:56 AM, Joel Falcou wrote:
> On 26/08/2011 16:45, Eric Niebler wrote:
>> Before I answer, can you tell me why you've decided to put vector and
>> matrix operations into separate domains? This seems like an artificial
>> and unnecessary separation to me.
> 
> We have a system of specialisation where being able to make this
> distinction allowed us to replace a Proto sub-tree with a pregenerated
> call to some BLAS functions, or to apply some other linear-algebra-based
> simplification.
> 
> We also have a covector domain which allows us to know that covector *
> vector is a dot product while vector * covector generates a matrix. In
> the same way, covector * matrix and matrix * vector can be recognized
> and handled in a proper way.

Why can't you use a grammar to recognize patterns like these and take
appropriate action?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Defining the result domain of a proto operator

2011-08-26 Thread Eric Niebler
On 8/26/2011 10:25 AM, Mathias Gaunard wrote:
> With the following Proto expression:
> m * v;
> 
> with m in the matrix_domain and v in the vector_domain.
> vector_domain is a sub-domain of matrix_domain, so the common domain is
> matrix_domain.
> 
> We want the '*' operation to model the matrix multiplication.
> matrix times vector yields a vector.
> Therefore, the result of m * v should be in the vector_domain.
> 
> If we define operator* ourselves, then we can easily put the domain we
> want when calling proto::make_expr.
> 
> However, using Proto-provided operator overloads, this doesn't appear to
> be possible.
> 
> Is there a mechanism in Proto to define how the domain of a new node
> should be computed depending on the tag and the domains of the children?

Before I answer, can you tell me why you've decided to put vector and
matrix operations into separate domains? This seems like an artificial
and unnecessary separation to me.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Extending proto::switch_

2011-08-06 Thread Eric Niebler
On 8/6/2011 1:26 PM, Joel falcou wrote:
> On 06/08/11 21:01, Eric Niebler wrote:
>> Besides, enable_if is yuk.
> 
> Care to elaborate? (Not like we use it over 9000 times in our code
> base...)

I just don't like it. SFINAE is an ugly hack. There may also be
compile-time perf impacts. Consider:

template<class T, class Enable = void>
struct foo;

template<class T>
struct foo< T, typename enable_if< X<T> >::type >;

template<class T>
struct foo< T, typename enable_if< Y<T> >::type >;

template<class T>
struct foo< T, typename enable_if< Z<T> >::type >;

Now, imagine that X, Y and Z are all expensive metafunctions. They must
*all* be computed before the compiler can select the correct
specialization. Had we written it as:

template<class T>
struct select { typedef ... type; }; // computes X, Y or Z exactly once

template<class T, class Which = typename select<T>::type>
struct foo;

template<class T>
struct foo< T, X >;

template<class T>
struct foo< T, Y >;

template<class T>
struct foo< T, Z >;

Then we bring the number of template instantiations from O(N) (in the
number of cases) down to O(1) (assuming that select is O(1)).

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Extending proto::switch_

2011-08-06 Thread Eric Niebler
On 8/6/2011 3:47 AM, Mathias Gaunard wrote:
> On 08/06/2011 07:30 AM, Eric Niebler wrote:
>> On 8/5/2011 8:52 AM, Joel falcou wrote:
>>> There are a few use cases where I wish I could have a proto::switch_-like
>>> transform that is extendable externally, but based on something other than
>>> the expression tag, such as the result of an arbitrary metafunction.
>>>
>>> Is cloning proto::switch_ and changing the way it dispatches over its
>>> internal cases_ enough?
>>
>> That wouldn't be enough because proto::matches "knows" about
>> proto::switch_. It would be easy enough to extend proto::switch_ to take
>> an optional mpl metafunction that accepts an expression and returns a
>> type to dispatch on. It would default to proto::tag_of. Or, for
>> the sake of consistency with the rest of proto, it should probably be a
>> transform, in which case it would default to proto::tag_of<_>().
>>
>> Could you open a feature request?
> 
> Why not just add an extra "class Enable = void" template parameter to
> case_, which would allow the use of SFINAE for case_ partial
> specializations?
> 
> template<typename Tag, typename Enable = void>
> struct case_;
> 
> template<typename Tag>
> struct case_< Tag, typename enable_if< some_condition<Tag> >::type >
>  : ....
> {
> };

That doesn't change the fact that switch_ dispatches to the cases using
the tag type, which may not be desired. Besides, enable_if is yuk.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Extending proto::switch_

2011-08-05 Thread Eric Niebler
On 8/5/2011 10:55 PM, Joel falcou wrote:
> On 06/08/11 07:30, Eric Niebler wrote:
>> That wouldn't be enough because proto::matches "knows" about
>> proto::switch_. It would be easy enough to extend proto::switch_ to take
>> an optional mpl metafunction that accepts an expression and returns a
>> type to dispatch on. It would default to proto::tag_of. Or, for
>> the sake of consistency with the rest of proto, it should probably be a
>> transform, in which case it would default to proto::tag_of<_>().
> 
> OK
> 
>> Could you open a feature request?
> 
> Well, we wanted to know the correct road. I have someone to do it, so
> let's say we'll provide you with a patch request instead ;)

Even better. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Extending proto::switch_

2011-08-05 Thread Eric Niebler
On 8/5/2011 8:52 AM, Joel falcou wrote:
> There are a few use cases where I wish I could have a proto::switch_-like
> transform that is extendable externally, but based on something other than
> the expression tag, such as the result of an arbitrary metafunction.
> 
> Is cloning proto::switch_ and changing the way it dispatches over its
> internal cases_ enough?

That wouldn't be enough because proto::matches "knows" about
proto::switch_. It would be easy enough to extend proto::switch_ to take
an optional mpl metafunction that accepts an expression and returns a
type to dispatch on. It would default to proto::tag_of. Or, for
the sake of consistency with the rest of proto, it should probably be a
transform, in which case it would default to proto::tag_of<_>().

Could you open a feature request?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Deep copy while keeping references in terminals

2011-07-03 Thread Eric Niebler
On 7/2/2011 4:44 PM, Bart Janssens wrote:
> Hello,
> 
> In the framework I'm developing, I need to deep-copy expressions to
> store them in some sort of wrapper objects. This works fine in most
> cases, but when terminals are copied the value they refer to seems to
> be copied as well. In the following expression, "tau" is a POD struct
> with some coefficients that need to be computed, and compute_tau is a
> proto terminal, where the grammar is used to give meaning to the
> operator() :
> compute_tau(u, tau)
> 
> The problem is that I use this tau in other expressions that are
> defined later on, so every expression needs to refer to the same tau,
> but it seems that after proto::deep_copy, each expression has its own
> tau.
> 
> Is there an easy way around this? Note that I also like to use things
> like tau.ps, with ps a double in the tau struct, directly in
> expressions.

You can do pretty much anything -- including reimplement
proto::deep_copy with slight variations -- with Proto transforms. From
your mail, it's a little unclear if you want the terminal nodes
/themselves/ held by reference, or if you want the values referred to by
the terminals held by reference but the terminal nodes held by value. I
implemented the second:

// Hold all intermediate nodes by value. If a terminal is holding
// a value by reference, leave it as a reference.
struct DeepCopy1
  : proto::or_<
        proto::terminal<_>,
        proto::nary_expr<
            _,
            proto::vararg<
                proto::when< _, proto::_byval(DeepCopy1) >
            >
        >
    >
{};

HTH,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Expression as *other* fusion sequence

2011-06-20 Thread Eric Niebler
On 6/17/2011 6:46 PM, Joel falcou wrote:
> On 17/06/11 01:25, Eric Niebler wrote:
>> Doable, but not easy. The problem you'll have is that all Proto
>> expression types have a nested fusion_tag that is a typedef for
>> proto::tag::proto_expr. That is how Fusion figures out how to iterate
>> over Proto expressions. You'll need to define your own tag, use
>> proto::extends (not BOOST_PROTO_EXTENDS) to define an expression
>> extension, and hide the fusion_tag typedef in the base with your own.
>> Then you'll need to implement the necessary Fusion hooks for your custom
>> Fusion tag type.
> 
> OK, but is there any internal proto part relying on the proper fusion
> behavior that may get hampered by this?

No. There are only a few places where Proto's core makes use of the Fusion
interface of Proto expressions. proto::fold is one, but it detects when
it's operating on a Proto expression and skips Fusion to iterate over
the children directly (for better compile times).

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Expression as *other* fusion sequence

2011-06-16 Thread Eric Niebler
On 6/17/2011 7:30 AM, Joel falcou wrote:
> Proto expressions are Fusion sequences that iterate over the node's
> children. All fine and dandy.
> 
> Now here is my use case. I have expressions whose terminals are Fusion
> sequences that access the terminal values (think of a terminal holding
> a std::array, for example), and I wished to have expressions over these
> terminals be Fusion sequences themselves, so I can do stuff like:
> 
> at_c<0>( x + y * 3 )
> 
> where x and y are such terminals, this statement returning me the
> equivalent of:
> 
> at_c<0>( x ) + at_c<0>( y ) * 3
> 
> Obviously, no candy, as both Fusion registrations conflict with each
> other. My Fusion-fu being quite weak, is there a way to have this AND
> still have Proto expressions behave as they should in other contexts?

Doable, but not easy. The problem you'll have is that all Proto
expression types have a nested fusion_tag that is a typedef for
proto::tag::proto_expr. That is how Fusion figures out how to iterate
over Proto expressions. You'll need to define your own tag, use
proto::extends (not BOOST_PROTO_EXTENDS) to define an expression
extension, and hide the fusion_tag typedef in the base with your own.
Then you'll need to implement the necessary Fusion hooks for your custom
Fusion tag type.

HTH,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [Proto] Expression as fusion/MPL sequence

2011-06-03 Thread Eric Niebler
On 6/2/2011 11:12 AM, Joel falcou wrote:
> On 01/06/11 22:24, Eric Niebler wrote:
> 
>> Proto expressions are random access, but flattened views are
>> forward-only. That's a limitation of the current implementation of the
>> segmented Fusion stuff. It's a known problem. Segmented fusion needs a
>> complete rewrite, but it's a metaprogramming Everest, and I'm too tired
>> to climb it again. Some hot-shot metaprogramming wunderkind should try
>> cutting his/her teeth on that problem. They'd earn my eternal admiration
>> and appreciation.
> 
> Oh OK. So I may just need to *not* flatten them.

I just updated the docs to state that flatten returns a Fusion Forward
Sequence.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [Proto] Expression as fusion/MPL sequence

2011-06-01 Thread Eric Niebler
On 6/2/2011 7:02 AM, Joel Falcou wrote:
> Seems something crooked on this front. Calling fusion::at_c on an
> expression ends up in an error even after including boost/proto/fusion.hpp.
> In the same way, flatten used as a transform seems not to give me a type
> that can be passed to any Fusion or MPL function. Looking at
> proto/fusion.hpp, I noticed that the iterator is indeed random_access,
> but not the view itself, which has a forward_traversal tag. Even
> after fixing this, no dice: at_c<0>(some_proto_expr) still fails to
> compile.

That's odd. Proto's fusion tests are passing on trunk and release, and
the following program compiles for me (on trunk):

  #include <boost/proto/proto.hpp>
  #include <boost/fusion/include/at.hpp>

  namespace proto = boost::proto;
  namespace fusion = boost::fusion;

  int main()
  {
    proto::terminal<int>::type i = {42};
    fusion::at_c<1>(i + i);
  }

Can you post some code that demonstrates the problem?

Proto expressions are random access, but flattened views are
forward-only. That's a limitation of the current implementation of the
segmented Fusion stuff. It's a known problem. Segmented fusion needs a
complete rewrite, but it's a metaprogramming Everest, and I'm too tired
to climb it again. Some hot-shot metaprogramming wunderkind should try
cutting his/her teeth on that problem. They'd earn my eternal admiration
and appreciation.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


[proto] proto at boostcon

2011-05-19 Thread Eric Niebler
For those who don't know, BoostCon is going on right now
(http://www.boostcon.com), and several talks are being given about Proto
and libraries built with Proto like Phoenix. Sadly, I'm not there, but
Joel Falcou is, Hartmut Kaiser is, and so is my old friend Bartosz
Milewski who gave a talk about the similarities between Proto and
Haskell monads. The slides are now online here:

https://github.com/boostcon/2011_presentations/tree/master/tue/haskell

Even more exciting, Hartmut, Joel and Bartosz have been working on a
C++0x rewrite of Proto that bases it solidly on Bartosz' ideas, adding
mathematical rigor that Proto currently lacks. I hope they report their
progress here!

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] proto::expr vs. proto::basic_expr

2011-05-16 Thread Eric Niebler
On 5/16/2011 12:31 PM, Thomas Heller wrote:
> You might get even faster. I noticed some places in Proto that still
> instantiate a proto::expr.
> That is, the creation of operators (the lazy_matches in
> operators.hpp) instantiates a proto::expr. And the generator of
> basic_expr is the default generator, which instantiates, when called, a
> proto::expr as well. Not sure if that may actually matter.

Yeah, THAT's not right. Thanks for pointing it out. I fixed the problem
on trunk. I had to muck with domain deduction, which makes me nervous.
The regression tests aren't running right now, so please let me know if
you spot any problems.

Thanks,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] proto::expr vs. proto::basic_expr

2011-05-15 Thread Eric Niebler
On 5/15/2011 9:19 PM, Thomas Heller wrote:
> Hi,
> 
> Today I experimented a little bit with phoenix and proto.
> My goal was to decrease the compile time of phoenix. When I started the 
> development of phoenix, Eric advised me to use proto::basic_expr to reduce 
> compile times.
> Which makes sense, given the argument that when instantiating the
> expression node, basic_expr has a lot fewer member functions etc., so
> the compiler needs to instantiate less. So much for the theory.
> In practice, sadly, this is not the case. Today I made sure that phoenix uses 
> basic_expr exclusively (did not commit the changes).
> 
> The result of this adventure was that compile times stayed the same. I was a 
> little bit disappointed by this result.
> 
> Does anybody have an explanation for this?

Impossible to say with certainty. I suspect, though, that your use case
is different than mine or, say, Christophe's. With xpressive or MSM,
compiles weren't effected by pre-preprocessing, which shows that we're
hitting limits in the speed of semantic analysis and code gen (possibly
template instantiation). Pre-preprocessing sped up Phoenix, which shows
that you're more hamstrung by lexing. The choice of either proto::expr
or proto::basic_expr doesn't matter for lexing because they're both
going to be lexed regardless.

I know when I introduced basic_expr, I ran tests that showed it was a
perf win for xpressive. But it was a small win, on the order of about
5%, IIRC. YMMV.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Compiling in debug mode

2011-05-12 Thread Eric Niebler
On 5/13/2011 5:45 AM, Bart Janssens wrote:
> Hi guys,
> 
> I've followed the recent discussion about compilation performance,
> it's good to know things are getting better and hopefully support for
> the new standard will help even more.

Probably, but someone needs to adapt Proto to use variadics/rvalue refs.
Patches welcome. :-)

> Currently, my main problem is not so much the compile time itself, but
> how much RAM gets used in debug mode (GCC 4.5.2 on ubuntu 11.04). I'm
> still using proto from boost 1.45, would the recent changes help
> anything in reducing RAM usage in debug mode? 

I don't think so, but I haven't tested.

> Is anyone aware of
> tweaks for GCC that reduce memory usage, but still produce useful
> debug info (just using -g now, no optimization)?

I'll leave this for the gcc experts.

> I've gotten to the point where a compile can use upwards of 1.5GB for
> a single test, resulting in much swapping, especially when compiling
> with make -j2 (which I try to remember not to do, now ;).

Ouch. Do you have to use gcc? Perhaps clang might give you better results.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Latest proto commit on trunk.

2011-05-09 Thread Eric Niebler
On 5/10/2011 8:36 AM, Eric Niebler wrote:
> On 5/10/2011 3:22 AM, Joel Falcou wrote:
>> I got these error compiling NT2 with proto trunk
>>
>> /usr/local/include/boost-latest/boost/proto/detail/decltype.hpp:67:56:
>> error: 'M0' has not been declared

> 
> This is what happens to people who use stuff under a detail/ directory
> or in a detail namespace. :-) You already know what I'm going to say:
> don't do that.

FWIW, this was due to a missing #include, which I've since fixed. This
*should* work again, but it's not part of Proto's public documented
interface. I reserve the right to break your code. ;-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Latest proto commit on trunk.

2011-05-09 Thread Eric Niebler
On 5/10/2011 3:22 AM, Joel Falcou wrote:
> I got these error compiling NT2 with proto trunk
> 
> /usr/local/include/boost-latest/boost/proto/detail/decltype.hpp:67:56:
> error: 'M0' has not been declared
> /usr/local/include/boost-latest/boost/proto/detail/decltype.hpp:67:1:
> error: expected identifier before '~' token
> /usr/local/include/boost-latest/boost/proto/detail/decltype.hpp:67:1:
> error: expected ')' before '~' token
> /usr/local/include/boost-latest/boost/proto/detail/decltype.hpp:67:1:
> error: ISO C++ forbids declaration of
> 'BOOST_PP_REPEAT_1_BOOST_PROTO_MAX_ARITY' with no type
> /usr/local/include/boost-latest/boost/proto/detail/decltype.hpp:67:1:
> error: expected ';' before '~' token
> 
> Our code is :
> 
> #include <boost/config.hpp>
> #include <boost/detail/workaround.hpp>
> 
> #if BOOST_WORKAROUND(BOOST_MSVC, >= 1600) && defined BOOST_NO_DECLTYPE
> #undef BOOST_NO_DECLTYPE
> #endif
> 
> #include <boost/proto/detail/decltype.hpp>
> #define NT2_DECLTYPE(EXPR, TYPE) BOOST_PROTO_DECLTYPE_(EXPR, TYPE)
> 
> Is detail/decltype.hpp a no-go to reuse this way ?

Right, that's not going to work. I'm surprised it ever did.

> As for why we do this, we have to fight against some MSVC bug w/r to
> decltype that PROTO_DECLTYPE seemed to fix.

This is what happens to people who use stuff under a detail/ directory
or in a detail namespace. :-) You already know what I'm going to say:
don't do that.

Can you use boost/typeof.hpp?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [phoenix][msm] proto limits increased to 10, phoenix broken on trunk

2011-05-09 Thread Eric Niebler
On 5/8/2011 9:54 PM, Thomas Heller wrote:
> Phoenix is up and running now again and should play nice with other libs!

Proto is now fully preprocessed on trunk, and I've just merged it all to
release. That means that the corresponding changes to Phoenix and MSM
also should be merged now also.

Christophe, there's a good chance that you won't see the promised
compile-time improvements from this. Compile times for xpressive didn't
budge. I think it's because xpressive's compile times are determined
primarily by the large number of templates it instantiates, which swamps
the PP time. I'm guessing MSM is the same. Oh, well. At least we've
fixed the mess with the predefined limits.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [phoenix][msm] proto limits increased to 10, phoenix broken on trunk

2011-05-08 Thread Eric Niebler
On 5/8/2011 11:51 PM, Christophe Henry wrote:
> Thanks Eric. Unfortunately, the compile-time went up after the change
> for VC. Not much but up. Example, CompositeTutorialEuml.
> VC:
> Before: 52s
> Now: 57s (ignoring a first compile of 77s, statistical VC error)
> 
> g++ 4.4: Roughly the same time (21s / 21.7). Yes, much faster than VC, I 
> know...
> 
> I suppose what I gain from preprocessing is lost from moving arity from 7 to 
> 10.
> It's not a big change but from previous burns, I know that the
> slightest increase is enough to make some working code crash a VC
> compiler on a user's machine, so I would not welcome another increase.
> 
> Thanks for the hard work anyway, I'm happy to get rid of my #define's.

Don't despair yet. I'm not done with the pre-preprocessing work. I'll
let you know when to run benchmarks.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [phoenix][msm] proto limits increased to 10, phoenix broken on trunk

2011-05-08 Thread Eric Niebler
On 5/8/2011 11:36 PM, Hartmut Kaiser wrote:
> Hmmm, maybe I misunderstand the situation, but MPL and Phoenix have 5
> different versions of preprocessed headers which will be used depending on
> the LIMITs specified by the user. No unnecessary overhead is created this
> way. For any LIMIT <= 10 Phoenix uses one set of pp files, for LIMITs <= 20
> the next set, etc.
> 
> This surely creates some additional burden for the author as you have to run
> wave 5 times, but that's it.

Ah, I see what you mean. Yes, I could do that. I'll add it to the list.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [phoenix][msm] proto limits increased to 10, phoenix broken on trunk

2011-05-08 Thread Eric Niebler
On 5/8/2011 11:08 PM, Hartmut Kaiser wrote:
> Just a question: what's your rationale for limiting the generated pp headers 
> to an arity of 10? 
> MPL and Phoenix have it set up for higher arities as well (as you probably 
> know).

Phoenix doesn't have it set higher. Or, it did, but it was a bug.
Perhaps you meant Fusion. Yes, it's higher for Fusion and MPL. The
reason for 10 and not something higher (yet) is that there are N^2
overloads of expr::operator() on compilers that don't support variadic
templates. And with BLL and Bind and Phoenix, there's a history of
supporting arities up to 10 and no more. I'm balancing keeping it fast
and light(-ish) and making it useful in the real world.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


[proto] [phoenix][msm] proto limits increased to 10, phoenix broken on trunk (was: [phoenix] not playing nice with other libs)

2011-05-08 Thread Eric Niebler
On 5/2/2011 6:18 PM, Thomas Heller wrote:
> On Mon, May 2, 2011 at 12:54 PM, Eric Niebler wrote:
>> Phoenix is changing the following fundamental constants:
>>
>>  BOOST_PROTO_MAX_ARITY
>>  BOOST_MPL_LIMIT_METAFUNCTION_ARITY
>>  BOOST_PROTO_MAX_LOGICAL_ARITY
>>  BOOST_RESULT_OF_NUM_ARGS
>>
>> IMO, Phoenix shouldn't be touching these. It should work as best it can
>> with the default values. Users who are so inclined can change them.
> 
> Eric,
> This problem is well known. As of now I have no clue how to fix it properly.


The Proto pre-preprocessing work on trunk has progressed to the point
where compiling with all the arities at 10 now compiles *faster* than
unpreprocessed Proto with the arities at 5. So I've bumped everything to 10.

A few things:

1) Phoenix is now broken. My Proto work involved pruning some
unnecessary headers, and Phoenix isn't including everything it needs.
Thomas, I'll leave this for you to fix.

2) Phoenix is actually setting Proto's max arity to 11, not to 10. I
think this is unnecessary. Locally, I un-broke Phoenix and ran its tests
with 10, and only one test broke. That was due to a bug in Phoenix. I'm
attaching a patch for that.

3) After the patch is applied, Phoenix should be changed such that it
includes proto_fwd.hpp and then acts accordingly based on the values of
the constants. IMO, that should mean graceful degradation of behavior
with lower arities, until a point such that Phoenix cannot function at
all, in which case it should #error out.

4) Phoenix no longer needs to change BOOST_MPL_LIMIT_METAFUNCTION_ARITY
and BOOST_RESULT_OF_NUM_ARGS, but BOOST_RESULT_OF_NUM_ARGS should be
given the same treatment as (3).

5) MSM should do the same.

My pre-preprocessing work continues, and all EDSLs that use Proto will
benefit from faster compiles. I'd like to thank Hartmut for his work on
Wave and Thomas for getting me set up.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
Index: boost/phoenix/core/function_equal.hpp
===
--- boost/phoenix/core/function_equal.hpp   (revision 71779)
+++ boost/phoenix/core/function_equal.hpp   (working copy)
@@ -8,6 +8,7 @@
 #ifndef BOOST_PHOENIX_CORE_FUNCTION_EQUAL_HPP
 #define BOOST_PHOENIX_CORE_FUNCTION_EQUAL_HPP
 
+#include 
 #include 
 #include 
 #include 
@@ -134,7 +135,7 @@
 
 BOOST_PP_REPEAT_FROM_TO(
 1
-  , BOOST_PROTO_MAX_ARITY
+  , BOOST_PP_INC(BOOST_PROTO_MAX_ARITY)
   , BOOST_PHOENIX_FUNCTION_EQUAL
   , _
 )
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [phoenix] not playing nice with other libs

2011-05-07 Thread Eric Niebler
On 5/5/2011 12:32 AM, Eric Niebler wrote:
> I'll also need to investigate why Proto depends on
> BOOST_MPL_LIMIT_METAFUNCTION_ARITY.

Proto no longer depends on BOOST_MPL_LIMIT_METAFUNCTION_ARITY. At least,
not on trunk.

I'm working on pre-preprocessing stuff. So far, it doesn't seem to be
having the dramatic impact on compile-time performance that I had hoped
for, but I still have a ways to go.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [phoenix] not playing nice with other libs

2011-05-04 Thread Eric Niebler
On 5/5/2011 2:27 AM, Bart Janssens wrote:
> On Wed, May 4, 2011 at 7:55 PM, Eric Niebler 
>  wrote:
>> Bart, how high can N go in your EDSL? Is it really arbitrarily large?
> 
> I didn't hit any limit in the real application (most complicated case
> is at 9) and just did a test that worked up to 30. Compilation (debug
> mode) took about 2-3 minutes at that point, with some swapping, so I
> didn't push it any further.
> 
> I've attached the header defining the grouping grammar, there should
> be no dependencies on the rest of our code.

We're talking about picking a sane and useful default for
BOOST_PROTO_MAX_ARITY. You seem to be saying that 10 would cover most
practical uses of your EDSL. Is that right?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [phoenix] not playing nice with other libs

2011-05-04 Thread Eric Niebler
On 5/4/2011 6:39 PM, Bart Janssens wrote:
> On Wed, May 4, 2011 at 1:25 PM, Thomas Heller
>  wrote:
>> 2) Have some kind of completely variadic proto expression: not by
>> having variadic templates, but by creating the list of children as some
>> kind of cons list. This might require quite a substantial change in
>> Proto; I haven't fully investigated that option.
> 
> I needed something like this to implement a "group(expr1, ..., exprN)"
> function that would group several expressions into one. I also hit the
> max arity limit with that solution, so I changed the syntax to "group
> << (expr1, ..., exprN)". Using the overloaded comma operator from
> proto, this becomes a binary tree that can be as large as you want,
> and can easily be converted to a list using flatten.

Clever! But not your preferred syntax, so not ideal.

> Probably useless
> for phoenix, but I thought I'd mention it anyhow :) I would also be
> interested to go back to the group(...) syntax without needing to
> modify the limits, so any progress on this would be great.

Bart, how high can N go in your EDSL? Is it really arbitrarily large?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [phoenix] not playing nice with other libs

2011-05-04 Thread Eric Niebler
On 5/4/2011 6:25 PM, Thomas Heller wrote:
> On Wed, May 4, 2011 at 10:58 AM, Eric Niebler  wrote:
>> On 5/2/2011 6:18 PM, Thomas Heller wrote:
>>> The default BOOST_PROTO_MAX_ARITY is 5.
>>
>> I see. So this is inherently a limitation in Proto. I set Proto's max
>> arity to 5 because more than that causes compile time issues. That's
>> because there are N*M proto::expr::operator() overloads, where N is
>> Proto's max arity and M is Proto's max function call arity. However:
>>
>> - IIRC, Phoenix doesn't use proto::expr. It uses proto::basic_expr, a
>> lighter weight expression container that has no member operator overloads.
> 
> Correct. But we also need the arity in:
> proto::call, proto::or_ and maybe some others

I'd like more details here, please. You never really *need* to increase
BOOST_PROTO_MAX_LOGICAL_ARITY because you can nest multiple proto::or_'s
and proto::and_'s. And if you need that many, you might think about
refactoring your grammar. proto::or_ can be rewritten more efficiently
as proto::switch_, for instance.



>> The solution then is in some combination of (a) allowing basic_expr to
>> have a greater number of child expressions than expr, (b) bumping the
>> max arity while leaving the max function call arity alone, (c)
>> pre-preprocessing, (d) adding a variadic operator() for compilers that
>> support it, and (e) just living with worse compile times until compilers
>> catch up with C++0x.
>>
>> Not sure where the sweet spot is, but I'm pretty sure there is some way
>> we can get Proto to support 10 child expressions for Phoenix's usage
>> scenario. It'll take some work on my end though. Help would be appreciated.
> 
> Yes, I was thinking of possible solutions:
> 1) splittling the expressions in half, something like this:
> proto::basic_expr<
> tag
>   , proto::basic_expr<
> sub_tag
>   , Child0, ..., Child(BOOST_PROTO_MAX_ARITY)
> >
>   , proto::basic_expr<
> sub_tag
>   , Child(BOOST_PROTO_MAX_ARITY), ... Child(BOOST_PROTO_MAX_ARITY * 2)
> >
> >
> 
> This would only need some additional work on the phoenix side.
> Not sure if its actually worth it ... or even working.

Not this. It's like that early prototype of Phoenix where every
expression was a terminal and the value was a Fusion sequence of other
Proto expressions. You can't use Proto's transforms to manipulate such
beasts.

Admittedly, Proto is rather inflexible when it comes to how children are
stored. My excuse is that I do it to bring down compile times.

> 2) Have some kind of completely variadic proto expression: not by
> having variadic templates, but by creating the list of children as some
> kind of cons list. This might require quite a substantial change in
> Proto; I haven't fully investigated that option.

You would go from instantiating 1 template per node to instantiating N
templates, where N is the number of child nodes. This is then multiplied
by the number of nodes in an expression tree. Not good.

>>> The BOOST_RESULT_OF_NUM_ARGS constant needed to be changed because I
>>> needed to provide 11 arguments in a "call" to boost::result_of. But I
>>> guess a workaround can be found in this specific case.
>>
>> What workaround did you have in mind?
> 
> Calling F::template result<...> directly, basically reimplementing
> result_of for Phoenix's own limits.

As an implementation detail? Sure, no problem.

>>> I wonder what qualifies as "User". Phoenix is certainly a user of mpl,
>>> result_of and proto. Spirit is a user of proto and phoenix. Spirit needs an 
>>> arity of 7 (IIRC).
>>
>> By "user" I meant "end-user" ... a user of Boost. You have to consider
>> that someone may want to use Phoenix and MPL and Numeric and ... all in
>> the same translation unit. We shouldn't make that hard. This
>> proliferation of interdependent constants is a maintenance nightmare.
> 
> I agree. I don't think there really is a general solution to that.
> There have been reports by Michael Caisse of some macro-definition
> nightmare while using MSM together with Spirit. If I remember the
> details correctly, MSM changes the Proto constants as well. This
> problem is not really Phoenix-specific!

Oh, yeah. MSM changes Proto's max arity to be 7. OK, I can see that 5 is
too low for folks. Proto needs some work.

>> I tend to agree with Jeff Hellrung who said that Phoenix should make do
>> with the defaults and document any backwards incompatibilities and how
>> to fix them. But w
