Re: [proto] Using a derived class as terminals in Boost.proto

2016-04-14 Thread Eric Niebler
Proto grammars and transforms handle this better than evaluators, which are
deprecated. It would pay to look into some examples that use transforms.
Sorry, that's all the advice I have time for at the moment.

\e
On Apr 14, 2016 10:33 AM, "Mathias Gaunard" wrote:

> I'd try to use IsVector.
> I'm not sure how to do this with a grammar (maybe someone can pitch in)
> but you could do something like this
>
> enable_if< IsVector<typename proto::result_of::value<Expr>::type> >
>
> On 14 April 2016 at 18:04, Frank Winter  wrote:
>
>> I made some progress. If I specialize struct VectorSubscriptCtx::eval
>> with Vector10, like
>>
>>
>> struct VectorSubscriptCtx
>> {
>>   VectorSubscriptCtx(std::size_t i) : i_(i) {}
>>
>>   template<typename Expr, typename EnableIf = void>
>>   struct eval
>>     : proto::default_eval<Expr, VectorSubscriptCtx const>
>>   {};
>>
>>   template<typename Expr>
>>   struct eval<
>>       Expr
>>     , typename boost::enable_if<
>>           proto::matches<Expr, proto::terminal<Vector10> >
>>       >::type
>>   >
>>   {
>>     //..
>>   };
>>
>>   std::size_t i_;
>> };
>>
>> then it works (previously it was specialized with Vector). It also works
>> when using the Boost proto::_ wildcard (match anything), like
>>
>> template<typename Expr>
>> struct eval<
>>     Expr
>>   , typename boost::enable_if<
>>         proto::matches<Expr, proto::terminal<proto::_> >
>>     >::type
>> >
>>
>>
>> However, I feel this is not good style. Can this be expressed with the
>> is_base_of trait instead?
>>
>>
>>
>>
>>
>> On 04/14/2016 10:10 AM, Mathias Gaunard wrote:
>>
>>> On 14 April 2016 at 14:43, Frank Winter wrote:
>>>
>>> Hi all!
>>>
>>> Suppose you'd want to implement a simple EDSL (Embedded Domain
>>> Specific Language) with Boost.proto with the following requirements:
>>>
>>>  Custom class 'Vector' as terminal
>>>  Classes derived from 'Vector' are working terminals too, e.g.
>>> Vector10
>>>
>>> [...]
>>>
>>> template<typename T>
>>> struct IsVector
>>>    : mpl::false_
>>> {};
>>>
>>>
>>> template<>
>>> struct IsVector< Vector >
>>>    : mpl::true_
>>> {};
>>>
>>>
>>> Surely this should be true for all types derived from Vector.
>>>
>>> template<typename T, typename Enable = void>
>>> struct IsVector
>>>    : mpl::false_
>>> {};
>>>
>>> template<typename T>
>>> struct IsVector<T, typename boost::enable_if< boost::is_base_of<Vector, T> >::type>
>>>    : mpl::true_
>>> {};
>>>
>>>
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Clang compile times

2013-11-21 Thread Eric Niebler
On 11/20/2013 02:36 AM, Bart Janssens wrote:
 Hello,
 
 I recently upgraded the OS and XCode on my Mac, resulting in the
 following clang version:
 Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn).
 The previous version was Apple LLVM version 4.2 (clang-425.0.24)
 (based on LLVM 3.2svn)
 
 The new version is about 4 times slower when compiling proto code, but
 only uses about half as much RAM. Does anyone here know if this may be
 due to some clang setting that I can revert back? I'd like to use more
 RAM again and compile faster.

Ugh, this is terrible news. If you have a self-contained repro
(preprocessed translation unit), please file a clang bug. They'll take a
regression of this magnitude seriously.

Thanks,
Eric
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Holding children by copy or reference

2013-10-01 Thread Eric Niebler
On 10/1/2013 12:05 AM, Bart Janssens wrote:
 On Tue, Oct 1, 2013 at 12:59 AM, Mathias Gaunard
 mathias.gaun...@ens-lyon.org wrote:
 To clarify, in terms of performance, from best-to-worst:
 1) everything by reference: no problem with performance (but problematic
 dangling references in some scenarios)
 2) everything by value: no CSE or other optimizations
 3) nodes by value, terminals by reference: no CSE or other optimizations +
 loads when accessing the terminals
 
 Just out of interest: would holding the a*b temporary node by rvalue
 reference be possible and would it be of any help?

Possible in theory, yes. In practice, it probably doesn't work since
proto-v4 is not C++11 aware. But even if it worked, it wouldn't solve
anything. Rvalue refs have the same lifetime issues that (const) lvalue
refs have. The temporary object to which they refer will not outlive the
full expression.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Holding children by copy or reference

2013-09-30 Thread Eric Niebler
On 9/30/2013 1:54 PM, Mathias Gaunard wrote:
 Hi,
 
 A while ago, I recommended to set up domains so that Proto contains its
 children by value, except for terminals that should either be references
 or values depending on the lvalue-ness. This makes it possible to avoid dangling
 reference problems when storing expressions or using 'auto'.
 I also said there was no overhead to doing this in the case of Boost.SIMD.
 
 After having done more analyses with more complex code, it appears that
 there is indeed an overhead to doing this: it confuses the alias
 analysis of the compiler which becomes unable to perform some
 optimizations that it would otherwise normally perform.
 
 For example, an expression like this:
 r = a*b + a*b;
 
 will not anymore get optimized to
 tmp = a*b;
 r = tmp + tmp;

Interesting!

 If terminals are held by reference, the compiler can also emit extra
 loads, which it doesn't do if the terminal is held by value or if
 all children are held by reference.
 
 It is a bit surprising that this affects compiler optimizations like
 this, but this is replicable on both Clang and GCC, with all versions I
 have access to.

It's very surprising. I suppose it's because the compiler can't assume
equational reasoning holds for some user-defined type. That's too bad.

 Therefore, to avoid performance issues, I'm considering moving to always
 using references (with the default domain behaviour), and relying on
 BOOST_FORCEINLINE to make it work as expected.

Why is FORCEINLINE needed?

 Of course this has the caveat that if the force inline is disabled (or
 doesn't work), then you'll get segmentation faults.

I don't understand why that should make a difference. Can you clarify? A
million thanks for doing the analysis and reporting the results, by the way.

As an aside, in Proto v5, terminals and intermediate nodes are captured
as you describe by default, which means perf problems. I still think
this is the right default for C++11, and for most EDSLs. I'll have to be
explicit in the docs about the performance implications, and make it
easy for people to get the by-ref capture behavior when they're ok with
the risks.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Holding children by copy or reference

2013-09-30 Thread Eric Niebler
On 9/30/2013 11:08 AM, Mathias Gaunard wrote:
 On 30/09/13 08:01, Eric Niebler wrote:
 
 Therefore, to avoid performance issues, I'm considering moving to always
 using references (with the default domain behaviour), and relying on
 BOOST_FORCEINLINE to make it work as expected.

 Why is FORCEINLINE needed?
 
 The scenario is
 
 terminal a, b, c, r;
 
 auto tmp = a*b*c;
 r = tmp + tmp;
 
 Assuming everything is held by reference, when used in r, tmp will refer
 to a dangling reference (the a*b node).
 
 If everything is inlined, the problem may be avoided because it doesn't
 require things to be present on the stack.

Yikes! You don't need me to tell you that's UB, and you really shouldn't
encourage people to do that.

You can independently control how intermediate nodes are captured, as
opposed to how terminals are captured. In this case, you want a,b,c held
by reference, and the temporary a*b to be held by value. Have you
tried this, and still found it to be slow?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Proto v5

2013-06-17 Thread Eric Niebler
On 6/16/2013 11:59 AM, Agustín K-ballo Bergé wrote:
 On 15/06/2013 10:59 p.m., Eric Niebler wrote:
 - Some specific uses of Proto actions in constant expressions fail. GCC
 reports an ambiguity with ref-qualifiers in the following scenario:
 
  struct foo
  {
      int & bar() &
      { return _bar; }
      //~ int && bar() &&
      //~ { return static_cast<int &&>(_bar); }
      constexpr int const & bar() const &
      { return _bar; }
      constexpr int const && bar() const &&
      { return static_cast<int const &&>(_bar); }
 
      int _bar;
  };
 
  foo().bar();
 
For that to work correctly, the 4 overloads need to be provided.
 Huh. According to the standard, or according to gcc? I won't work around
 a bug in a compiler without filing it first.

 
 I got a thorough explanation on the subject from this SO question:
 http://stackoverflow.com/questions/17130607/overload-resolution-with-ref-qualifiers
 . The answer confirms this is a GCC bug, and hints to a better
 workaround that would retain constexpr functionality. I may pursue this
 alternative workaround if I ever get to play with the constexpr side of
 Proto v5 (that is, if I use it in a place other than next to an `omg` or
 `srsly` identifier :P).
 
 Another GCC bug (as far as I understand) is that instantiations within
 template arguments to a template alias are completely ignored when the
 aliased type does not depend on those, thus breaking SFINAE rules. I
 have attached a small code sample that reproduces this issue.

Thanks for your research. When I get a chance, I'll check gcc's bugzilla
to see if they have been filed already, unless you beat me to it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Proto v5

2013-06-14 Thread Eric Niebler

I've made no effort so far to port Proto v5 to any compiler other than clang. 
I'm sure it would be a big job. I welcome any contributions. Otherwise, it'll 
get ported eventually, but probably not before I get the API settled.

Eric


Sent via tiny mobile device

-Original Message-
From: Agustín K-ballo Bergé <kaball...@hotmail.com>
Sender: proto <proto-boun...@lists.boost.org>
Date: Fri, 14 Jun 2013 16:19:23
To: Discussions about Boost.Proto and DSEL design <proto@lists.boost.org>
Reply-To: Discussions about Boost.Proto and DSEL design <proto@lists.boost.org>
Subject: [proto] Proto v5

Hi,

I watched the C++Now session about Proto v5, and now I want to play with 
it. I do not have the luxury of a Clang build from trunk, but I do have 
GCC 4.8.1 which should do pretty well.

I cloned the repository at https://github.com/ericniebler/proto-0x/. 
After jumping a few hoops, I am now left with tons of instances of the 
same errors:

- error: no type named 'proto_grammar_type' in ...
  using type = typename Ret::proto_grammar_type(Args...);

- error: no type named 'proto_action_type' in ...
  using type = typename Ret::proto_action_type(Args...);

For at least some cases, those are clear errors since the Ret type 
represents an empty struct (e.g. `not_`).

What is going on? What should I be doing to get Proto v5 to compile?

Regards,

-- 
Agustín K-ballo Bergé.-
http://talesofcpp.fusionfenix.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] problems with proto::matches

2012-12-13 Thread Eric Niebler
On 12/13/2012 4:51 AM, Thomas Heller wrote:
 Hi,
 
 I recently discovered a behavior which I find quite odd:
 proto::matches<Expression, Grammar>::type fails when Expression is not a
 proto expression. I would have expected that it just returns false in
 this case. What am I missing? Patch is attached for what I think would
 be a better behavior of that meta function.

Hi Thomas,

Thanks for the patch. Pros and cons to this. Pro: it works in more
situations, including yours. (Could you tell me a bit about your
situation?) Also, the implementation is dead simple and free of extra
TMP overhead.

Cons: Someone might expect a non-Proto type to be treated as a terminal
of that type and be surprised at getting a false where s/he expected
true (a fair assumption since Proto treats non-expressions as terminals
elsewhere; e.g., in its operator overloads). It slightly complicates the
specification of matches. It is potentially breaking in that it changes
the template arity of proto::matches. (Consider what happens if someone
is doing mpl::quote2<proto::matches>.)

I'm inclined to say this is not a bug and that it's a prerequisite of
matches that Expression is a proto expression. If you want to use it
with types that aren't expressions, you can already do that:

  template<class MaybeExpr, class Grammar>
  struct maybe_matches
    : mpl::if_<
          proto::is_expr<MaybeExpr>
        , proto::matches<MaybeExpr, Grammar>
        , mpl::false_
      >::type
  {};
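
For instance (a quick, untested sketch; the asserts assume <boost/mpl/assert.hpp>
and use proto::terminal<int> as a stand-in grammar):

  BOOST_MPL_ASSERT(( maybe_matches< proto::terminal<int>::type, proto::terminal<int> > ));
  BOOST_MPL_ASSERT_NOT(( maybe_matches< float, proto::terminal<int> > ));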

Would the above work for you? I realize that's more expensive than what
you're doing now. :-(

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Transform result_of nightmare and preserving terminal identity

2012-10-15 Thread Eric Niebler
On 10/13/2012 4:20 PM, Agustín K-ballo Bergé wrote:
 Hi All,
 
 I'm experimenting with Proto to build a DSEL that operates on geometric
 vectors. I'm trying to write a transform that would take an assign
 expression and unroll it component wise. For instance, I want to replace

Hi Agustín,

This is just a quick note to let you know that I'm currently at the
standard committee meeting in Portland, and that I'll be unable to look
into this until I get back next week. Sorry for the delay. Maybe
someone else on this list might be able to help (nudge!). You might also
pose this question on stackoverflow.com.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] fold_tree and grammar using external_transforms and state

2012-07-27 Thread Eric Niebler
On 7/27/2012 12:19 AM, Joel Falcou wrote:
 On 27/07/2012 08:11, Eric Niebler wrote:
 Naming is becoming an issue, though. We already have proto::transform.
 You'd be adding proto::functional::transform that would be totally
 unrelated. I think I screwed up with the namespaces. It should probably
 be proto::functional::fusion::transform. Urg.
 
 Well, I guess this is a breaking change :s

I could import the existing stuff into proto::functional for back-compat.

 What I need is maybe more generic, as I need to apply an arbitrary
 function with an arbitrary number of parameters, the first being the
 flattened tree, the others being whatever:
 
 transform( f, [a b c d], stuff, thingy )
 = [f(a,stuff,thingy) f(b,stuff,thingy) f(c,stuff,thingy)]

Seems to me you want to be able to bind the 2nd and 3rd arguments to f
so that you can do this with a standard transform.

   transform( [a b c], bind(f, _1, stuff, thingy) )

= [f(a,stuff,thingy) f(b,stuff,thingy) f(c,stuff,thingy)]

 I'll try and make it work out of the box first and see how it can be
 generalized.

I'll take transform and bind if you write them. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] proto-11 progress report

2012-07-21 Thread Eric Niebler
On 7/17/2012 6:14 PM, Eric Niebler wrote:
 I'm considering adding the slots mechanism to proto-current so that this
 can be made to work there, also. The problem is that once you use a
 slot, the data parameter is no longer just a dumb blob. I can make
 proto::_data ignore the slots and just return the dumb blob as before,
 and that will satisfy some. But any custom primitive transforms that
 folks have written will need to be ready for the fact that they could
 get passed something different than what they were expecting. I don't
 think it will break code until you decide to use a slot (or use a
 transform that uses a slot). Then you'll need to fix up your transforms.
 
 Does anybody object to the above scheme?

This is now implemented on trunk. It's implemented in a backward
compatible way.[*]

What this means is that instead of a monolithic blob of data, the third
parameter to a Proto transforms can be a structured object with O(1)
lookup based on tag. You define a tag with:

  BOOST_PROTO_DEFINE_ENV_VAR(my_tag_type, my_key);

Then, you can use it like this:

  some_transform()(expr, state, (my_key= 42, your_key= hello));

In your transforms, you can access the value associated with a
particular key using the proto::_env_var<my_tag_type> transform.

You can still pass an unstructured blob, and things work as they did
before. The proto::_data transform checks to see if the data parameter
is a blob or structured. If it's a blob, that simply gets returned. If
it's structured, it returns the value associated with the
proto::data_type tag. In other words, these two are treated the same:

  int i = 42;
  some_transform()(expr, state, i);
  some_transform()(expr, state, (proto::data= i));
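
As a rough, untested sketch of how the pieces above fit together (GetMyKey
and the dummy literal passed to it are made up for illustration):

  BOOST_PROTO_DEFINE_ENV_VAR(my_tag_type, my_key);

  // A transform that ignores the expression and simply reads my_key
  // out of the environment.
  struct GetMyKey
    : proto::when< proto::_, proto::_env_var<my_tag_type> >
  {};

  int i = GetMyKey()(proto::lit(0), 0, (my_key = 42));  // i should be 42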

There's more, but I'll save it. It's a big change, and docs are yet to
be written. Y'all might want to test your code against trunk and report
problems early. (This will *not* be part of 1.51.)

[*] I had to make some changes to Phoenix though because unfortunately
Phoenix makes use of some undocumented parts of Proto.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Refining the Proto concepts

2012-07-18 Thread Eric Niebler
On 7/18/2012 3:59 PM, Mathias Gaunard wrote:
 On 18/07/2012 18:29, Eric Niebler wrote:
 
 Is there some code in Proto that is forcing the instantiation of those
 specializations? Probably, and that would unintended. One approach would
 be to replace these normalized forms with an equivalent incomplete type
 and fix all places where the code breaks.
 
 Doesn't
 
 template<class T>
 struct foo
 {
    typedef bar<T> baz;
 };
 
 foo<int> f = {};
 
 instantiate bar<T>?

No, that merely mentions the specialization bar<T>, but it doesn't
instantiate it. Nothing about that typedef requires bar<T> to be
complete. You can try it yourself. If you make bar<T> incomplete, the
above code still compiles.
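
Concretely, this self-contained snippet compiles even though bar is never
defined:

    template<class T> struct bar;          // declared, never defined

    template<class T>
    struct foo
    {
        typedef bar<T> baz;                // merely names bar<T>; no instantiation
    };

    int main()
    {
        foo<int> f = {};                   // OK: bar<int> is never required to be complete
        (void)f;
        return 0;
    }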

Also, matching against the partial specializations of detail::matches_
in matches.hpp also doesn't require the basic_expr specialization to be
complete. But like I said, if there is some sloppy code in there that
requires that nested typedef to be complete, it *will* get instantiated.
Replacing it with an incomplete type will change it from a compile-time
perf bug to a hard error, and those are easy to find and fix.

 The problem I see is that for a regular Proto expression, the whole tree
 gets instantiated twice for expr and basic_expr.

If this is indeed happening, cleaning it up would be a nice perf win.
Want to give it a shot?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] _unpack transform

2012-07-13 Thread Eric Niebler
On 7/13/2012 6:37 AM, Mathias Gaunard wrote:
 On 07/11/2012 06:55 PM, Eric Niebler wrote:
 
 You're referring to this:

 http://lists.boost.org/proto/2010/11/0304.php

 I should have followed through! The code referenced there isn't
 available anymore. I remember putting it on my TODO list to understand
 the compile-time implications of it, because of your warning about
 compile times. And then ... I don't remember. :-P
 
 It's available here
 https://raw.github.com/MetaScale/nt2/169b69d47e4598e403caad0682dd6d24b8fd4668/modules/boost/dispatch/include/boost/dispatch/dsl/proto/unpack.hpp

Thanks.

 As I said earlier we got rid of it because it wasn't very practical to
 use, this is from an old revision.

Impractical because of the compile times? Did you replace it with
anything? Would you have any interest in giving the new unpacking
patterns a spin and letting me know if they meet your need, when you
have time?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] _unpack transform

2012-07-12 Thread Eric Niebler
On 7/11/2012 10:32 AM, Eric Niebler wrote:
   f0(f1(f2(pack(_))...))
 
  That's not so bad, actually. Now, the question is whether I can retrofit
 this into proto-current without impacting compile times.

This is now implemented on boost trunk for proto-current. Seems to work
without a significant perf hit (my subjective sense). Docs forthcoming.
It's also implemented for proto-11. This is a good feature, I think.
Thanks for all the feedback that led to it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] _unpack transform

2012-07-11 Thread Eric Niebler
On 7/11/2012 12:42 AM, Thomas Heller wrote:
 On 07/10/2012 11:18 PM, Eric Niebler wrote:
 I just committed to the proto-11 codebase a new transform called
 _unpack. You use it like this:

    _unpack<f0(Tfx, f1(_)...)>

 Where Tfx represents any transform (primitive or otherwise), f0 is any
 callable or object type, and f1(_) is an object or callable transform.
 The ... denotes pseudo-pack expansion (although it's really a C-style
 vararg ellipsis). The semantics are to replace f1(_)... with
 f1(_child0), f1(_child1), etc..

 With this, the _default transform is trivially implemented like this:

 struct _default
   : proto::or_<
         proto::when<proto::terminal<_>, proto::_value>
       , proto::otherwise<
             proto::_unpack<eval(proto::tag_of<_>(), _default(_)...)>
         >
     >
 {};

 ...where eval is:

 struct eval
 {
     template<typename E0, typename E1>
     auto operator()(proto::tag::plus, E0 && e0, E1 && e1) const
     BOOST_PROTO_AUTO_RETURN(
         static_cast<E0 &&>(e0) + static_cast<E1 &&>(e1)
     )

     template<typename E0, typename E1>
     auto operator()(proto::tag::multiplies, E0 && e0, E1 && e1) const
     BOOST_PROTO_AUTO_RETURN(
         static_cast<E0 &&>(e0) * static_cast<E1 &&>(e1)
     )

     // Other overloads...
 };

 The _unpack transform is pretty general, allowing a lot of variation
 within the pack expansion pattern. There can be any number of Tfx
 transforms, and the wildcard can be arbitrarily nested. So these are
 all ok:

// just call f0 with all the children
_unpack<f0(_...)>

// some more transforms first
_unpack<f0(Tfx0, Tfx1, Tfx2, f1(_)...)>

// and nest the wildcard deeply, too
_unpack<f0(Tfx0, Tfx1, Tfx2, f1(f2(f3(_)))...)>

 I'm still playing around with it, but it seems quite powerful. Thoughts?
 Would there be interest in having this for Proto-current? Should I
 rename it to _expand, since I'm modelling C++11 pack expansion?

 I think _expand would be the proper name. Funny enough, I proposed it
 some time ago for proto-current, even had an implementation for it, and
 the NT2 guys are using that exact implementation ;)
 Maybe with some extensions.
 So yes, Proto-current would benefit from such a transform.

You're referring to this:

http://lists.boost.org/proto/2010/11/0304.php

I should have followed through! The code referenced there isn't
available anymore. I remember putting it on my TODO list to understand
the compile-time implications of it, because of your warning about
compile times. And then ... I don't remember. :-P

I remember that it was an invasive change to how Proto evaluates all
transforms, which made me nervous. I also don't think that exact syntax
can be implemented without forcing everybody to pay the compile-time
hit, whether they use the feature or not. In contrast, having a separate
_unpack transform isolates the complexity there.

I'm going to keep playing with this. Your suggested syntax is nice. I
wonder how close I can get. (Although I kinda like my pseudo-pack
expansions, too. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


[proto] _unpack transform (was: proto-11 progress report)

2012-07-10 Thread Eric Niebler
I just committed to the proto-11 codebase a new transform called
_unpack. You use it like this:

  _unpack<f0(Tfx, f1(_)...)>

Where Tfx represents any transform (primitive or otherwise), f0 is any
callable or object type, and f1(_) is an object or callable transform.
The ... denotes pseudo-pack expansion (although it's really a C-style
vararg ellipsis). The semantics are to replace f1(_)... with
f1(_child0), f1(_child1), etc..

With this, the _default transform is trivially implemented like this:

struct _default
  : proto::or_<
        proto::when<proto::terminal<_>, proto::_value>
      , proto::otherwise<
            proto::_unpack<eval(proto::tag_of<_>(), _default(_)...)>
        >
    >
{};

...where eval is:

struct eval
{
    template<typename E0, typename E1>
    auto operator()(proto::tag::plus, E0 && e0, E1 && e1) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E0 &&>(e0) + static_cast<E1 &&>(e1)
    )

    template<typename E0, typename E1>
    auto operator()(proto::tag::multiplies, E0 && e0, E1 && e1) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E0 &&>(e0) * static_cast<E1 &&>(e1)
    )

    // Other overloads...
};

The _unpack transform is pretty general, allowing a lot of variation
within the pack expansion pattern. There can be any number of Tfx
transforms, and the wildcard can be arbitrarily nested. So these are all ok:

  // just call f0 with all the children
  _unpack<f0(_...)>

  // some more transforms first
  _unpack<f0(Tfx0, Tfx1, Tfx2, f1(_)...)>

  // and nest the wildcard deeply, too
  _unpack<f0(Tfx0, Tfx1, Tfx2, f1(f2(f3(_)))...)>

I'm still playing around with it, but it seems quite powerful. Thoughts?
Would there be interest in having this for Proto-current? Should I
rename it to _expand, since I'm modelling C++11 pack expansion?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com

___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] proto-11 progress report

2012-07-01 Thread Eric Niebler
On 6/29/2012 4:49 AM, Mathias Gaunard wrote:
 On 28/06/2012 21:09, Eric Niebler wrote:
 
 After meditating on this for a bit, a thought occurred to me. Your
 unpack function is a generalization of the pattern used by the _default
 transform.
 
 It is indeed.

Right. Providing the higher-level primitive transform is on my to-do
list. Thanks for the suggestion.

 Generators are intended to meet this need. What are they lacking for
 you? Is it the lack of an unpack transform?
 
 We use generators for something else. Generators are in charge of
 putting a raw expression in the NT2 domain, which involves computing the
 logical size of the expression as well as the type it would have if it
 were evaluated.
 
 Doing the expression rewriting in the generator itself causes dependency
 problems, since expression rewriting is defined in terms of unpacking
 then re-making expressions, which involves calling the generator.
 
 I don't know yet how we could do what we need in a generator-less world.

Well, in the present generator world, the generator is passed an
expression, and your mission is to rewrite it (or wrap it). Rewriting
the expression causes a recursive invocation of the generator for the
current expression. This is the cause of the trouble, IIUC.

In the future generator-less world, your domain's make_expr function
object is passed a tag and a bunch of child nodes. You get to decide how
to turn that into an expression. If you want to transform the child
nodes first, you should be able to do that, and it wouldn't recurse
back. You're recursing only on the children, not on the current
expression. That should work. In theory.

 Both optimize and schedule require containing children by value to
 function correctly.

 How is this relevant?
 
 When using normal Proto features, the expressions built from operators
 contain their children by reference while the expression built from
 transforms contain their children by value.
 
 Since in our case we use the same code for both, we had to always
 contain children by value.

I'm afraid I'm being dense. I still don't see how that relates to the
need for your unpack function or the limitations of transforms.

 Transforms are not pure functional. The data parameter can be mutated
 during evaluation.
 
 The transform language is functional since the only thing it does is
 define a calling expression which is a combination of primitive
 transforms. Of course each primitive transform doesn't have to be
 functional, but the language to use them cannot define state, it can
 just pass it along like a monad.

Your unpack function is no less functional in nature. I'm not denying
that transforms have limitations that your unpack function doesn't (see
below). I'm just saying that you're barking up the wrong tree with your
argument about transform's functional nature.

 I guess it's not pure functional though because of proto::and_.

I don't understand this statement.

 In any case, a lot of non-trivial expression evaluation strategies
 cannot be practically implemented as a transform and require a primitive
 transform.

Ah! By transform you mean callable transform or object transform, but
not primitive transform. But the term transform includes primitive
transforms. Is that why we're talking past each other?

 If everything ends up being primitive transforms, we might as well use
 simple function objects directly, which are not tied by constraints such
 as arity, state, data, environment etc., just store whatever state is
 needed in the function object or bind that state with boost::bind or
 similar.

I see what you're getting at. Primitive transforms have limitations on
the number of arguments and the meaning of those arguments. I understand.

 I'd like to see Proto provide algorithms like this that accept arbitrary
 function objects and that are not intended to require transforms to do
 useful things.

Sure. However, I have a reason for wanting to make these things play
nicely with transforms. See below.

 You know about proto::vararg, right? It lets you handle nodes of
 arbitrary arity.
 
 The transformations that can currently be done as a non-primitive
 transform when the transformation must not rely on an explicit arity of
 the expression are extremely limited.

Limiting the discussion to non-primitive transforms, I agree. I didn't
know that's what we were discussing.

 Adding unpacking transforms would certainly make it more powerful, but
 still not as powerful as what you could do with a simple function object
 coupled with macros or variadic templates.
 
 I think it's important to keep in mind that transforms are a possible
 solution that can work well for some languages, but that there should be
 other solutions as well.
 
 Once there is an unpack transform, will you still feel this way?
 
 We already have the unpack algorithm that I described. It's relatively
 simple and straightforward code. We used to have it defined as a
 primitive transform, it was much more complicated

Re: [proto] proto-11 progress report

2012-06-28 Thread Eric Niebler
On 6/27/2012 2:11 PM, Mathias Gaunard wrote:
 On 25/06/2012 23:30, Eric Niebler wrote:
 On 6/25/2012 12:21 PM, Mathias Gaunard wrote:
 
 There is a function which is very simple and that I found to be very
 useful when dealing with expression trees.

 unpack(e, f0, f1) which calls
 f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))

 I can do recursion or not with the right f1, and I can 'unpack' an
 expression to an n-ary operation f0.

 Here f0 is typically a function that uses its own overloading-based
 dispatching mechanism.

 OK, thanks for the suggestion. Where have you found this useful?
 
 For example I can use this to call
 
 functor<tag::plus>()(functor<tag::multiplies>()(a, b), c);
 
 from a tree like
 
 expr<tag::plus, list2< expr<tag::multiplies, list2< expr<tag::terminal>,
 expr<tag::terminal> > >, expr<tag::terminal> > >
 
 NT2 uses a mechanism like this to evaluate expressions.

OK.

 For element-wise expressions (i.e. usual vector operations), the f1 is
 the run(expr, pos) function -- actually more complicated, but there is
 no need to go into details -- which by default simply calls unpack
 recursively.
 
 What the f0 does is simply unpack the expression and call a functor
 associated to the tag (i.e. run(expr, i) with expr<tag::pow, list2<foo,
 bar> > calls pow(run(foo, i), run(bar, i)) ).
 
 The important bit is that it is also possible to overload run for a
 particular node type.
 
 On terminals, the run is defined to do a load/store at the given
 position. This means run(a + b * sqrt(c) / pow(d, e), i) calls
 plus(a[i], multiplies(b[i], divides(sqrt(c[i]), pow(d[i], e[i]))))
 
 Each function like plus, multiplies, sqrt, pow, etc. is overloaded so
 that if any of the arguments is an expression, it does a make_expr. If
 the values are scalars, it does the expected operations on scalars. If
 they're SIMD vectors, it does the expected operations on vectors.
 
 run is also overloaded for a variety of operations that depend on the
 position itself, such as restructuring, concatenating or repeating data;
 the position is modified before running the children or different things
 may be run depending on a condition.
 
 A simple example is the evaluation of cat(a, b) where run(expr, i) is
 defined as something a bit like i < size(child0(expr)) ?
 run(child0(expr), i) : run(child1(expr), i-size(child0(expr)))
 
 unpack is also used for expression rewriting: before expressions are
 run, the expression is traversed recursively and reconstructed. Whenever
 operations that are not combinable are found, those sub-expressions are
 evaluated and the resulting terminal is inserted in their place in the
 tree. In NT2 this is done by the schedule function.

After meditating on this for a bit, a thought occurred to me. Your
unpack function is a generalization of the pattern used by the _default
transform. The _default<X> transform unpacks an expression, transforms
each child with X, then recombines the result using the C++ meaning
corresponding to the expression's tag type. In other words,
_default<X>()(e) is unpack(e, f0, f1) where f1 is X and f0 is
hard-coded. I could very easily provide unpack as a fundamental
transform and then trivially implement _default in terms of that. I very
much like this idea.
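
Spelled out for a binary node, the unpack(e, f0, f1) pattern is just this
(a rough sketch in C++11 notation; unpack2, f0 and f1 are illustrative names,
not Proto APIs):

    template<class Expr, class F0, class F1>
    auto unpack2(Expr const &e, F0 f0, F1 f1)
      -> decltype(f0(f1(boost::proto::child_c<0>(e)), f1(boost::proto::child_c<1>(e))))
    {
        // apply f1 to each child, then hand the results to f0
        return f0(f1(boost::proto::child_c<0>(e)), f1(boost::proto::child_c<1>(e)));
    }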

Aside: The pass_through transform almost fits this mold too, except that
each child can get its own f1. Hmm.

Unpack doesn't sound like the right name for it, though. It reminds me a
bit of Haskell's fmap, but instead of always putting the mapped elements
back into a box of the source type, it lets you specify how to box
things on the way out.

 Similarly we have a phase that does the same kind of thing to replace
 certain patterns of combinations by their optimized counterparts
 (optimize). We'd like to do this at construction time but it's currently
 not very practical with the way Proto works.

Generators are intended to meet this need. What are they lacking for
you? Is it the lack of an unpack transform?

 Both optimize and schedule require containing children by value to
 function correctly.

How is this relevant?

 Transforms are not used much since they're only useful for the most
 simple operations due to their pseudo-DSL pure functional nature. It's
 especially problematic when performance sensitive code, which needs to
 be stateful, is involved. 

Transforms are not pure functional. The data parameter can be mutated
during evaluation. Even expressions themselves can be mutated by a
transform, as long as they're non-const.

 Finally it's also not practical to do anything
 involving nodes of arbitrary arity with them.

You know about proto::vararg, right? It lets you handle nodes of
arbitrary arity.
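For example (just a sketch; AnyCall is a made-up name), a grammar that matches
function-call nodes with any number of terminal arguments can be written as:

    struct AnyCall
      : proto::function<
            proto::terminal< proto::_ >                   // the callee
          , proto::vararg< proto::terminal< proto::_ > >  // any number of terminal args
        >
    {};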

 Unfortunately I cannot say transforms are good enough for the kind of
 thing NT2 does.

Once there is an unpack transform, will you still feel this way?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


___
proto mailing list
proto@lists.boost.org
http://lists.boost.org

Re: [proto] proto-11 progress report

2012-06-25 Thread Eric Niebler
On 6/25/2012 12:39 AM, Joel Falcou wrote:
 On 06/24/2012 01:10 AM, Eric Niebler wrote:
snip
int i = LambdaEval()(_1 + 42, 0, proto::tag::data = 8);
 
 The 3rd parameter associates the value 8 with the data tag.
snip
 
 How do you set up a new tag? Is mytag just some
 
 mytag_type mytag = {};
 
 ?
 
 or should mytag_type inherit/be wrapped from some special stuff

Special stuff. Tags are defined as follows:

struct my_tag_type
  : proto::tags::def<my_tag_type>
{
    using proto::tags::def<my_tag_type>::operator=;
};

namespace
{
    constexpr my_tag_type const & my_tag =
        proto::utility::static_const<my_tag_type>::value;
}

The gunk in the unnamed namespace is for strict ODR compliance. A simple
global const would be plenty good for most purposes.

 As for what is not changing:

 Grammars, Transforms and Algorithms
 ===
 It would be wonderful if there were a more natural syntax for describing
 proto algorithms rather than with structs, function objects, proto::or_,
 proto::when, and friends. If there is one, I haven't found it yet. On
 the up side, it means that many current proto-based libraries can be
 upgraded with little effort. On the down side, the learning curve will
 still be pretty steep. If anybody has ideas for how to use C++11 to
 simplify pattern matching and the definition of recursive tree
 transformation algorithms, I'm all ears.
 
 There are not many ways to describe something that looks like
 a grammar definition anyway. BNF/EBNF is probably the simplest
 way to do it.

That would be overkill IMO. Proto grammars don't need to worry about
precedence and associativity. Forcing folks to write E?BNF would mean
forcing them to think about stuff they don't need to think about.

 Now on the syntactic clutter front, except for wrapping everything in round
 lambdas
 or using object/function calls in a hidden decltype call, I don't see what we
 can do better :s

More round lambda, sure. Fixing inconsistencies, also. But I tend to
doubt that using grammar-like expressions in a decltype would be a
significant improvement. Folks still can't write transforms in straight,
idiomatic C++, which is what I want.

 Glad it is picking up steam :D

C++11 has made this pretty fun. I'm ready to stop supporting all my
C++03 libraries now. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com



signature.asc
Description: OpenPGP digital signature
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Streamulus v0.1 is out: An EDSL for Event Stream Processing with C++

2012-06-24 Thread Eric Niebler
On 6/24/2012 4:42 PM, Dave Abrahams wrote:
 
 on Sun Jun 24 2012, Eric Niebler wrote:
 
 On 6/24/2012 8:50 AM, Irit Katriel wrote:

 In the accumulators library, all the accumulators are invoked for
 every update to the input. This is why the visitation order can be
 determined at compile time.

 That's correct.
 
 Are you forgetting about droppable accumulators?

Not forgetting. It doesn't change the fact that the visitation order is
set at compile time. There is no centralized, automatic, dynamic flow
control in the accumulators library.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com





signature.asc
Description: OpenPGP digital signature
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [proto-11] expression extension

2012-06-06 Thread Eric Niebler
On 6/5/2012 11:10 PM, Mathias Gaunard wrote:
 On 03/06/2012 09:41, Eric Niebler wrote:

 Hey all, this is just an FYI. I've been hard at work at a ground-up
 redesign of proto for C++11. I've gotten far enough along that I know
 what expression extension will look like, so I thought I'd share. This
 should interest those who want finer control over how expressions in
 their domain are constructed. Without further ado:

  template<typename Tag, typename Args>
  struct MyExpr;

  struct MyDomain
    : proto::domain<MyDomain>
  {
      struct make_expr
        : proto::make_custom_expr<MyExpr, MyDomain>
      {};
  };

  template<typename Tag, typename Args>
  struct MyExpr
 
 Wouldn't it be more interesting for make_custom_expr to take a
 meta-function class?

The template template parameter is distasteful, I agree, but I can shave
template instantiations this way. There's no need to instantiate a
nested apply template for every new expression type created. Especially
now with template aliases, it's quite painless to adapt a template to
have the interface that make_custom_expr expects. That was my reasoning,
at least.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Hold terminals by smart reference

2012-06-05 Thread Eric Niebler
On 6/4/2012 5:51 PM, Mathias Gaunard wrote:
 On 04/06/2012 17:52, Eric Niebler wrote:
 
 I don't know what you mean by the right type. If you want it held by
 shared_ptr to manage lifetime, then shared_ptr is the right type, it
 seems to me. Or use a wrapper around a shared_ptr, whichever.
 
 I want all tree manipulation and transformation algorithms to see the
 value as if it were a T and not a shared_ptr<T> or ref_holder<T>.

OK, I see.

 shared_ptr<T> is the right type to contain the terminal in the proto
 nullary expression, but in that particular case it is not the logical
 type of the value associated to that leaf node.
 
 I want to be able to manipulate the tree using straight Proto tools
 (otherwise I might as well not use Proto at all -- the point is to have
 a well-defined de-facto standard framework for tree manipulation).

I understand your frustration.

 Those algorithms should not need to know how the value is stored in the
 expressions. It is just noise as far as they're concerned.
 
 Alternatively I'll need to provide substitutes for value,
 result_of::value and _value, and ban those from my code and the
 programming interface, telling developers to use mybettervalue instead
 of proto's. That saddens me a bit.

I want you to understand that I'm not just being obstructionist or
obstinate. Proto's value functions are very simple and low-level and are
called very frequently. Adding metaprogramming overhead there, as would
be necessary for adding a customization point, has the potential to slow
compiles down for everybody, as well as complicating the code, tests and
docs. There are also unanswered questions. For instance, how does
proto::matches work with these terminals? Does it match on the actual
value or the logical one? There are arguments on both sides, but one
needs to be picked. Going with the logical value will force
proto::matches to go through this customization point for every
terminal. I am also thinking about how it affects other proto features
such as: as_expr, as_child, make_expr, unpack_expr, literal+lit, fusion
integration, display_expr, fold and all the other transforms, etc. etc.
I worry that allowing the logical type of a terminal to differ from the
actual type opens a Pandora's box of tough questions with no obvious
answers, and trying to add this would cause unforeseen ripple effects
through the code. It makes me very uneasy, especially considering the
workaround on your end (sorry) is very simple.

Making proto's users happy must be balanced against feature-creep-ism,
which hurts everybody in the long run. So I'm afraid I'm still leaning
against adding this customization point. But I encourage you to file a
feature request, and if you can find a patch that doesn't negatively
affect compile times or end-user complexity and integrates cleanly with
all the other features of proto and has docs and tests, then -- and only
then -- would I add it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [proto-11] expression extension

2012-06-04 Thread Eric Niebler
On 6/4/2012 12:48 PM, Joel Falcou wrote:
 On 04/06/2012 21:18, Eric Niebler wrote:
 Assuming your types are efficiently movable, the default should just do
 the right thing, and your expression trees can be safely stored in local
 auto variables without dangling references. Does that help?
 
 I was thinking of the case where we constructed a foo expression by
 calling expression constructors one into the other. I guess it fixes that.

One into the other? I must be dense. Not getting it. ???

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [proto-11] expression extension

2012-06-04 Thread Eric Niebler
On 6/4/2012 6:08 PM, Mathias Gaunard wrote:
 Eric Niebler wrote:
 Proto-11 will probably take many months. I'm taking my time and
 rethinking everything. Don't hold your work up waiting for it.
 
 Best thing to do is probably to make it lighter, keep separate things
 separate, and truly extensible.
 
 For example, transforms seem too tighly coupled with the rest in the
 current Proto version, and their limitations are quite intrusive.

Can you be more specific and give some examples? BTW, I appreciate your
helping to improve proto.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] restructuring expression

2012-05-29 Thread Eric Niebler
On 5/29/2012 1:44 AM, Karsten Ahnert wrote:
 I have an arithmetic expression template where multiplication is
 commutative. Is there an easy way to order a chain of multiplications
 such that terminals with values (like proto::terminal<double>) appear
 at the beginning? For example that
 
 arg1 * arg1 * 1.5 * arg1
 
 will be transformed to
 
 1.5 * arg1 * arg1 * arg1
 
 ?
 
 I can imagine some complicated algorithms swapping expressions and child
 expressions but I wonder if there is a simpler way.

There is no clever built-in Proto algorithm for commutative
transformations like this, I'm afraid. I was going to suggest flattening
to a fusion vector and using fusion sort, but I see there is no fusion
sort! :-( Nevertheless, that seems like a promising direction to me.
Once you have the sorted vector, you should(?) be able to use
fusion::fold to build the correct proto tree from it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Restructuring nodes in generator

2012-04-28 Thread Eric Niebler
On 4/28/2012 3:38 AM, Mathias Gaunard wrote:
 On 27/04/12 21:47, Joel Falcou wrote:
 How can I use a custom generator to turn a specific node expression into
 a different version of itself without triggering endless recursive calls?

 My use case is the following: I want to catch all function nodes looking
 like

 tag::function( some_terminal, grammar, ..., grammar )

 with any nbr of grammar instances

 into

 tag::function( some_terminal, my_tuple_terminal<grammar, ..., grammar>,
 some_other_info )

 basically making an n-ary function node into a ternary node with a specific
 structure. Of course this new node should live in whatever domain
 some_terminal is coming from.
snip

And some_terminal is not in your domain? How does your generator get
invoked? I guess I'm confused. Can you send a small repro?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] The proper way to compose function returning expressions

2012-04-26 Thread Eric Niebler
On 4/25/2012 1:41 PM, Mathias Gaunard wrote:
 On 24/04/12 22:31, Eric Niebler wrote:
 On 4/23/2012 10:17 PM, Joel Falcou wrote:
 On 04/24/2012 12:15 AM, Eric Niebler wrote:

 I think this is an important issue to solve as far as Proto grokability
 goes.

 Agreed. It would be very nice to have. But you still have to know when
 to use it.

 One of my coworkers on NT2 tried to do just this (the norm2 thingy) and
 he got puzzled by the random crashes.

 [...]

 The implicit_expr code lived in a detail namespace in past versions of
 proto. You can find it if you dig through subversion history. I'm not
 going to do that work for you because the code was broken in subtle ways
 having to do with the consistency of terminal handling. Repeated
 attempts to close the holes just opened new ones. It really should be
 left for dead. I'd rather see what you come up with on your own.
 
 The issue Joel had in NT2 was probably unrelated to this. In NT2 we hold
 all expressions by value unless the tag is boost::proto::tag::terminal.
 This was done by modifying as_child in our domain.
 
 I strongly recommend doing this for most proto-based DSLs. It makes auto
 foo = some_proto_expression work as expected, and allows expression
 rewriting of the style that was shown in the thread without any problem.
 
 There is probably a slight compile-time cost associated to it, though.

Interesting. I avoided this design because I was uncertain whether the
compiler would be able to optimize out all the copies of the
intermediate nodes. You're saying NT2 does it this way and doesn't
suffer performance problems? And you've hand-checked the generated code
and found it to be optimal? That would certainly change things.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] The proper way to compose function returning expressions

2012-04-26 Thread Eric Niebler
On 4/26/2012 9:35 AM, Mathias Gaunard wrote:
 On 26/04/12 18:02, Eric Niebler wrote:
 
 Interesting. I avoided this design because I was uncertain whether the
 compiler would be able to optimize out all the copies of the
 intermediate nodes. You're saying NT2 does it this way and doesn't
 suffer performance problems? And you've hand-checked the generated code
 and found it to be optimal? That would certainly change things.

 
 NT2 treats large amounts of data per expression, so construction time is
 not very important. It's the time to evaluate the tree in a given
 position that matters (which only really depends on proto::value and
 proto::child_c<N>, which are always inlined now).
 
 We also have another domain that does register-level computation, where
 construction overhead could be a problem. The last tests we did with
 this was a while ago and was with the default Proto behaviour. That
 particular domain didn't get sufficient testing to give real conclusions
 about the Proto overhead.

In that case, I will hold off making any core changes to Proto until I
have some evidence that it won't cause performance regressions.

Thanks,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] The proper way to compose function returning expressions

2012-04-24 Thread Eric Niebler
On 4/23/2012 10:17 PM, Joel Falcou wrote:
 On 04/24/2012 12:15 AM, Eric Niebler wrote:
 implicit_expr() returns an object that holds its argument and is
 convertible to any expression type. The conversion is implemented by
 trying to implicitly convert all the child expressions, recursively.
 It sort of worked, but I never worked out all the corner cases, and
 documenting it would have been a bitch. Perhaps I should take another
 look. Patches welcome. :-) 
 
 I think this is an important issue to solve as far as Proto grokability
 goes.

Agreed. It would be very nice to have. But you still have to know when
to use it.

 One of my coworkers on NT2 tried to do just this (the norm2 thingy) and
 he got puzzled by the random crashes.
 
 I think we should at least document the issues (I can write that and
 submit a patch for the doc) and
 maybe resurrect this implicit_expr. Do you have any remnant of code
 lying around so I don't start from scratch ?

The implicit_expr code lived in a detail namespace in past versions of
proto. You can find it if you dig through subversion history. I'm not
going to do that work for you because the code was broken in subtle ways
having to do with the consistency of terminal handling. Repeated
attempts to close the holes just opened new ones. It really should be
left for dead. I'd rather see what you come up with on your own.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] The proper way to compose function returning expressions

2012-04-23 Thread Eric Niebler
On 4/23/2012 1:01 PM, Joel Falcou wrote:
 Let's say we have a bunch of functions like sum and sqr defined on a
 proto domain to return
 expressions of tag sum_ and sqr_ in this domain. One day we want to make
 a norm2(x) function
 which is basically sum(sqr(x)).
 
 My feeling is that I should be able to write it using sqr and sum
 expressions.
 Alas it seems this results in dangling references, crashes and some sad pandas.
 
 Then I remember about proto::deep_copy but I have a worry. x is
 usually a terminal
 holding a huge matrix-like value and I just don't want this huge matrix
 to be copied.
 
 What's the correct way to handle such a problem? How can I build new
 functions returning
 expressions built from expression composition without incurring a huge
 amount of copy ?

Right. The canonical way of doing this is as follows:

#include <boost/proto/proto.hpp>
namespace proto = boost::proto;

struct sum_ {};
struct sqr_ {};

namespace result_of
{
    template<typename T>
    struct sum
      : proto::result_of::make_expr<sum_, T>
    {};

    template<typename T>
    struct sqr
      : proto::result_of::make_expr<sqr_, T>
    {};

    template<typename T>
    struct norm2
      : sum<typename sqr<T>::type>
    {};
}

template<typename T>
typename result_of::sum<T &>::type const
sum(T &t)
{
    return proto::make_expr<sum_>(boost::ref(t));
}

template<typename T>
typename result_of::sqr<T &>::type const
sqr(T &t)
{
    return proto::make_expr<sqr_>(boost::ref(t));
}

template<typename T>
typename result_of::norm2<T &>::type const
norm2(T &t)
{
    return
        proto::make_expr<sum_>(proto::make_expr<sqr_>(boost::ref(t)));
}

int main()
{
    sum(proto::lit(1));
    sqr(proto::lit(1));
    norm2(proto::lit(1));
}


As you can see, the norm2 is not implemented in terms of the sum and sqr
functions. That's not really ideal, but it's the only way I know of to
get fine grained control over which parts are stored by reference and
which by value.

You always need to use make_expr to build expression trees that you
intend to return from a function. That's true even for the built-in
operators. You can't ever return the result of expressions like a+b*42
... because of the lifetime issues.

You can't use deep_copy for the reason you mentioned.

I once had a function proto::implicit_expr, which you could have used
like this:

template<typename T>
typename result_of::norm2<T &>::type const
norm2(T &t)
{
    return proto::implicit_expr(sum(sqr(t)));
}

implicit_expr() returns an object that holds its argument and is
convertible to any expression type. The conversion is implemented by
trying to implicitly convert all the child expressions, recursively. It
sort of worked, but I never worked out all the corner cases, and
documenting it would have been a bitch. Perhaps I should take another
look. Patches welcome. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Held nodes by value for Fundamental types

2012-04-09 Thread Eric Niebler
On 4/9/2012 2:21 PM, Fernando Pelliccioni wrote:
 Hello,
 
 I'm wondering if it would be appropriate to treat the fundamental types
 (char, short, int, double, ...) by value, by default.
 
 I wrote this simple piece of code.
 I'm not sure if I'm overlooking some other implication,
 but I think it may be an improvement.
 Please, tell me if I am wrong.

Thanks. I thought long about whether to handle the fundamental types
differently than user-defined types and decided against it. The
capture-everything-by-reference-by-default model is easy to explain and
reason about. Special cases can be handled on a per-domain basis as needed.

There is a way to change the capture behavior for your domain. The newly
released version of Proto documents how to do this (although the
functionality has been there for a few releases already).

http://www.boost.org/doc/libs/1_49_0/doc/html/proto/users_guide.html#boost_proto.users_guide.front_end.customizing_expressions_in_your_domain.per_domain_as_child

In short, you'll need to define an as_child metafunction in your domain
definition:

class my_domain
  : proto::domain< my_generator, my_grammar >
{
    // Here is where you define how Proto should handle
    // sub-expressions that are about to be glommed into
    // a larger expression.
    template< typename T >
    struct as_child
    {
        typedef unspecified-Proto-expr-type result_type;

        result_type operator()( T & t ) const
        {
            return unspecified-Proto-expr-object;
        }
    };
};

In as_child, you'll have to do this (pseudocode):

if (is_expr<T>)
  return T &
else if (is_fundamental<T>)
  return proto::terminal<T>::type
else
  return proto::terminal<T &>::type

The metaprogramming is left as an exercise. :-)
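For what it's worth, here is one way the exercise might be spelled out. This is an
untested sketch with a stand-in grammar and the default generator; a production
version may need more care with cv-qualified types:

#include <boost/proto/proto.hpp>
#include <boost/mpl/if.hpp>
#include <boost/type_traits/is_fundamental.hpp>
#include <boost/utility/enable_if.hpp>

namespace proto = boost::proto;

struct my_grammar : proto::_ {};   // stand-in grammar

struct my_domain
  : proto::domain< proto::default_generator, my_grammar >
{
    // Proto expressions pass through by reference, unchanged.
    template< typename T, typename Enable = void >
    struct as_child
    {
        typedef T & result_type;
        result_type operator()( T & t ) const { return t; }
    };

    // Everything else becomes a terminal: fundamental types are held
    // by value, all other types by reference.
    template< typename T >
    struct as_child< T, typename boost::disable_if< proto::is_expr< T > >::type >
    {
        typedef typename boost::mpl::if_<
            boost::is_fundamental< T >
          , typename proto::terminal< T >::type
          , typename proto::terminal< T & >::type
        >::type result_type;

        result_type operator()( T & t ) const
        {
            result_type that = {t};
            return that;
        }
    };
};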

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] user docs for advanced features

2012-01-04 Thread Eric Niebler
On 1/4/2012 7:37 AM, Thomas Heller wrote:
snip many good suggestions
 Thanks for adding this documentation!

Great feedback. I've just accommodated all of it. Thanks!

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


[proto] user docs for advanced features

2012-01-02 Thread Eric Niebler
Proto's users guide has been behind the times for a while. No longer.
More recent and powerful features are now documented. Feedback welcome.

Sub-domains:

http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/front_end/customizing_expressions_in_your_domain/subdomains.html

Per-domain as_child customization:
==
http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/front_end/customizing_expressions_in_your_domain/per_domain_as_child.html

External Transforms:
===
http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/back_end/expression_transformation/external_transforms.html

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Problems with unary function node

2011-10-27 Thread Eric Niebler
On 10/22/2011 3:02 PM, Mathias Gaunard wrote:
 On 10/18/2011 05:53 AM, Eric Niebler wrote:
 On 10/12/2011 2:24 PM, Mathias Gaunard wrote:
 There seems to be a significant problem with the unary function node
 (and by that I mean (*this)() ) generated by proto::extends and
 BOOST_PROTO_EXTENDS_USING_FUNCTION().
 snip

 Sorry for the delay, and I'm afraid I don't have news except to say that
 this is on my radar. I hope to look into this soon. But if someone were
 to beat me to it, that'd be pretty awesome. :-)
 
 I don't think it can really be fixed in C++03.
 In C++11 though, it's pretty easy, you can just make it a template with
 a default template argument.

It should already be fixed for C++11 because operator() uses variadics
if they're available. It's been that way for a while. But in
investigating this problem, I've found that the copy assign operator can
cause the same problem, and that can't be fixed this way, even in C++11.

Regardless, I'm convinced that a complete fix is possible, and I have it
mostly coded. It would require you (the user) to disable unary function
and assign in your domain via a grammar. But. It's expensive at compile
time, and everybody pays. I need to be convinced before I proceed. Your
example code was very contrived. (Certainly you don't need to ask a
Proto expression extension type whether it is a proto expression. The
answer will always be yes.) So what is your realistic usage scenario?
What type categorization do you want to do on the extension type that
you can't do on the raw passed-in expression?

Thanks,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [Proto] Expression as fusion/MPL sequence

2011-06-03 Thread Eric Niebler
On 6/2/2011 11:12 AM, Joel falcou wrote:
 On 01/06/11 22:24, Eric Niebler wrote:
 
 Proto expressions are random access, but flattened views are
 forward-only. That's a limitation of the current implementation of the
 segmented Fusion stuff. It's a known problem. Segmented fusion needs a
 complete rewrite, but it's a metaprogramming Everest, and I'm too tired
 to climb it again. Some hot-shot metaprogramming wunderkind should try
 cutting his/her teeth on that problem. They'd earn my eternal admiration
 and appreciation.
 
 Oh OK. So i may just need to *not* flatten them.

I just updated the docs to state that flatten returns a Fusion Forward
Sequence.
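For reference, a small sketch of the sort of thing that does work with a
flattened view (forward iteration only; random access such as at_c is what is
not supported):

#include <boost/proto/proto.hpp>
#include <boost/proto/fusion.hpp>
#include <boost/fusion/include/for_each.hpp>
#include <iostream>

namespace proto = boost::proto;
namespace fusion = boost::fusion;

struct print_value
{
    template<typename Expr>
    void operator()(Expr const &e) const
    {
        // each element of the flattened view is a terminal sub-expression
        std::cout << proto::value(e) << '\n';
    }
};

int main()
{
    proto::terminal<int>::type i = {1}, j = {2}, k = {3};

    // flatten(i + j + k) is a forward-only Fusion view over the terminals
    fusion::for_each(proto::flatten(i + j + k), print_value());
}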

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [Proto] Expression as fusion/MPL sequence

2011-06-01 Thread Eric Niebler
On 6/2/2011 7:02 AM, Joel Falcou wrote:
 Seems something crooky on this front. Calling fusion::at_c on an expression
 ends up in an error even after including boost/proto/fusion.hpp.
 Same way, flatten used as a transform seems to not give me a type that
 can be passed to any fusion or mpl function. Looking at
 proto/fusion.hpp I noticed that the iterator is indeed random_access but
 not the view itself, which has a forward_traversal tag. Even
 after fixing this, no dice, at_c<N>(some_proto_expr) still fails to
 compile.

That's odd. Proto's fusion tests are passing on trunk and release, and
the following program compiles for me (on trunk):

  #include <boost/proto/proto.hpp>
  #include <boost/fusion/include/at_c.hpp>

  namespace proto = boost::proto;
  namespace fusion = boost::fusion;

  int main()
  {
    proto::terminal<int>::type i = {42};
    fusion::at_c<1>(i + i);
  }

Can you post some code that demonstrates the problem?

Proto expressions are random access, but flattened views are
forward-only. That's a limitation of the current implementation of the
segmented Fusion stuff. It's a known problem. Segmented fusion needs a
complete rewrite, but it's a metaprogramming Everest, and I'm too tired
to climb it again. Some hot-shot metaprogramming wunderkind should try
cutting his/her teeth on that problem. They'd earn my eternal admiration
and appreciation.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Compiling in debug mode

2011-05-12 Thread Eric Niebler
On 5/13/2011 5:45 AM, Bart Janssens wrote:
 Hi guys,
 
 I've followed the recent discussion about compilation performance,
 it's good to know things are getting better and hopefully support for
 the new standard will help even more.

Probably, but someone needs to adapt Proto to use variadics/rvalue refs.
Patches welcome. :-)

 Currently, my main problem is not so much the compile time itself, but
 how much RAM gets used in debug mode (GCC 4.5.2 on ubuntu 11.04). I'm
 still using proto from boost 1.45, would the recent changes help
 anything in reducing RAM usage in debug mode? 

I don't think so, but I haven't tested.

 Is anyone aware of
 tweaks for GCC that reduce memory usage, but still produce useful
 debug info (just using -g now, no optimization)?

I'll leave this for the gcc experts.

 I've gotten to the point where a compile can use upwards of 1.5GB for
 a single test, resulting in much swapping, especially when compiling
 with make -j2 (which I try to remember not to do, now ;).

Ouch. Do you have to use gcc? Perhaps clang might give you better results.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [phoenix] not playing nice with other libs

2011-05-07 Thread Eric Niebler
On 5/5/2011 12:32 AM, Eric Niebler wrote:
 I'll also need to investigate why Proto depends on
 BOOST_MPL_LIMIT_METAFUNCTION_ARITY.

Proto no longer depends on BOOST_MPL_LIMIT_METAFUNCTION_ARITY. At least,
not on trunk.

I'm working on pre-preprocessing stuff. So far, it doesn't seem to be
having the dramatic impact on compile-time performance that I had hoped
for, but I still have a ways to go.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Manipulating an expression tree

2011-05-01 Thread Eric Niebler
On 4/15/2011 10:45 PM, Karsten Ahnert wrote:
 @All: Are there more examples and docs for proto transforms somewhere? They
 are quite complicated and maybe a bit underrepresented in the official
 users guide.

(Funny, I *know* I replied to this, but I don't see it in the archive.
Sorry if y'all get this twice.)

Have you read the Expressive C++ article series on C++Next? It covers
grammars and transforms step-by-step. Here's the first one in the series:

http://cpp-next.com/archive/2010/08/expressive-c-introduction/

HTH,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Manipulating an expression tree

2011-04-06 Thread Eric Niebler
(Please don't top-post. Rearranging...)

On 4/7/2011 5:45 AM, Karsten Ahnert wrote:
 On 04/06/2011 10:53 PM, Bart Janssens wrote:
 On Wed, Apr 6, 2011 at 10:29 PM, Karsten Ahnert
 karsten.ahn...@ambrosys.de wrote:
 Is there a direct way to transform an expression tree into another one?
  For example, is it possible that every proto::plus<> node is
  transformed to its left child? I tried to solve this problem via proto's
  built-in transforms without success. It seems that they are suited for
 evaluation of an existing tree, but I might be wrong.

 Hi Karsten,

 I'm pretty sure they can do both. For your example, I think something
 along the lines of this might work (untested):

 struct LeftPlus :
   boost::proto::or_
   <
     boost::proto::terminal<boost::proto::_>,
     boost::proto::when
     <
       boost::proto::plus<boost::proto::_, boost::proto::_>,
       LeftPlus(boost::proto::_left)
     >,
     boost::proto::nary_expr< boost::proto::_, boost::proto::vararg<LeftPlus> >
   >
 {};

 This should recurse through expressions and replace sequences of
 pluses with the left-most terminal. You may need some other criteria
 to end the recursion depending on your use case.

 Disclaimer: I'm relatively new to proto myself, so the experts might
 have better solutions!

 Cheers,

 Great! It works perfectly, although I don't understand the code
 completely yet.

It takes time, but it'll be worth it. You can't do much with Proto without
grokking grammars and transforms.

 Another question is: can a node have state? In my algorithm it would
 be nice if every proto::multiplies<> node stored some intermediate
 values which are used during later evaluations of the tree.

No. Intermediate expression nodes carry no run-time state in their
nodes. They only carry compile-time information in the form of the tag.
That's how a plus node is distinguished from a minus node, for instance.

If you need to compute intermediate values, you can use a transform to
build a parallel structure.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com





Re: [proto] invalid use of incomplete type 'detail::uncvref...'

2011-02-28 Thread Eric Niebler
On 3/1/2011 3:15 AM, Hossein Haeri wrote:
 Hi Eric,
 
 boost/proto/matches.hpp:391:13: error: invalid use of incomplete
 type 'struct
 boost::proto::detail::uncvref<arity_caller::CanBeCalled<Plus2,
 mpl_::integral_c<int, 2> > >::type' Now look at how you've defined
 CanBeCalled:
 
 template<typename Fun, typename Int> struct CanBeCalled;
 
 Thanks. I added another specialisation for mpl::integral_c<int, n>
 and it worked. But, now I'm wondering why on earth was that basically
 needed? I had never touched mpl::integral_c in my code snippet. That
 should have been generated by Proto then, right? 

No.

 And, in that case,
 may I please know why?

Somewhere in your code you're adding two MPL integers. That's just how
MPL works. Your code is assuming a particular type of Integral Constant
(mpl::int_). MPL only promises to give you /a/ MPL Integral Constant.

 On the other hand, I'm wondering why GCC never nagged about the need
 for mpl::integral_c<int, 1> when I wrote:
 
 EW1<InpPool, GameState, AmmoMsg>()  Plus1();
 
 In other words, why is mpl::int_<n> used for the above line (when n
 == 1), whereas mpl::integral_c<int, n> is used for the following one
 (when n == 2)?
 
 (EW1<InpPool, GameState, AmmoMsg>() || EW1<InpPool, GameState,
 AmmoMsg>())  Plus2();

Probably because the former isn't doing an addition of MPL Integral
Constants.

This is an MPL gotcha. Others have complained about it on the boost
list, IIRC.
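A tiny illustration of the gotcha (the exact result type is an MPL
implementation detail, which is the point):

#include <boost/mpl/plus.hpp>
#include <boost/mpl/int.hpp>
#include <boost/mpl/assert.hpp>

namespace mpl = boost::mpl;

typedef mpl::plus< mpl::int_<1>, mpl::int_<1> >::type two_t;

// two_t is guaranteed to be *an* Integral Constant with value 2 ...
BOOST_MPL_ASSERT_RELATION( two_t::value, ==, 2 );

// ... but it is not guaranteed to be mpl::int_<2>; in practice it comes
// back as mpl::integral_c<int, 2>, which is what the error above shows.

int main() {}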

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Nested Transforms

2011-02-28 Thread Eric Niebler
On 2/28/2011 11:40 PM, Nate Knight wrote:
 
 On Feb 26, 2011, at 4:55 AM, Eric Niebler wrote:

 I think I know what's causing this. Can you try compiling with
 BOOST_PROTO_STRICT_RESULT_OF?
 
 Eric,
 
 Thanks for the information.  Compiling with BOOST_PROTO_STRICT_RESULT_OF
 allows the first commented line to compile.  I guess we'll wait to hear from 
 Joel about 
 the impact of this change on the run times of his library.  
 
 The second commented line does not compile.  This seems to be because there 
 are no
 'const Expr' overloads of boost::proto::transform::operator().  Is this 
 correct? The pertinent 
 part of the compiler error (from clang) is 

snip

That's correct. This code compiles on trunk. I made a late fix that
didn't make it into 1.46. If we ship a point release, I'll merge it over.

But even with this fix, it can crop up in other circumstances. This is a
Proto gotcha. It's missing a bunch of overloads in the interest of
compile times (transform_impl is instantiated /everywhere/), but I need
to reconsider this because you're not the first to get bitten by this.

If you file a bug, I'll get around to it eventually. Some day, I'll use
rvalue refs and this problem will just go away.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] proto performance

2011-02-20 Thread Eric Niebler
On 2/20/2011 5:52 PM, Joel Falcou wrote:
 1/ how do you measure performances ? Anything which is not the median of
 1-5K runs is meaningless.

You can see how he measures it in the code he posted.

 2/ Don't use contexts, transforms are usually better optimized by compilers

That really shouldn't matter.

 3/ are you using gcc on a 64-bit system? On this configuration a gcc
 bug prevents proto from being inlined.

Naive question: are you actually compiling with optimizations on? -O3
-DNDEBUG? And are you sure the compiler isn't lifting the whole thing
out of the loop, since the computation is the same with each iteration?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com





Re: [proto] proto performance

2011-02-20 Thread Eric Niebler
On 2/20/2011 6:40 PM, Joel Falcou wrote:
 On 20/02/11 12:31, Karsten Ahnert wrote:
 It is amazing that the proto expression is faster then the naive one.
 The compiler must really love the way proto evaluates an expression.
 
 I still dont really know why. Usual speed-up in our use cases here is
 like ranging from 10 to 50%.

That's weird.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [Proto] Nested Types in Transforms

2011-02-16 Thread Eric Niebler
On 2/16/2011 10:23 PM, Hossein Haeri wrote:
 Hi Eric,
 
 When you access a member of a class template, it causes the
 template to be instantiated. CanBeCalled cannot legally be
 instantiated with two function types. Hence the error.
 
 Function types? Are you really speaking of types of ordinary C++
 functions? If so, I have to say that, by coincidence, I had not
 passed any function types at all. Or, am I missing anything here?

You did. Look again:

   arity_caller::CanBeCalled<
     boost::proto::_value(boost::proto::_right),
     EmtnTermOrGram(boost::proto::_left)
   >::type

What do you think _value(_right) and EmtnTermOrGram(_left) are?

 Also, proto::if_ takes as its template parameter a Transform. It
 should be a transform that evaluates to a compile-time Boolean.
 
 So, this problem can simply be solved by replacing the '::type' part
 in my code with '()' in CanBeCalled...() even despite the fact that
 I never designed my CanBeCalled to be a transform?

Yes. That makes it an ObjectTransform. Check the docs for
ObjectTransform and proto::make.
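To make the distinction concrete, a small self-contained example of an
ObjectTransform: the "call" constructs an object, because std::pair is not
marked as callable, and proto::make does the work behind the scenes.

#include <boost/proto/proto.hpp>
#include <utility>
#include <cassert>

namespace proto = boost::proto;

struct MakePair
  : proto::when<
        proto::plus< proto::terminal<int>, proto::terminal<int> >
        // ObjectTransform: builds std::pair<int, int>(left value, right value)
      , std::pair<int, int>( proto::_value(proto::_left)
                           , proto::_value(proto::_right) )
    >
{};

int main()
{
    proto::terminal<int>::type i = {1}, j = {2};

    std::pair<int, int> p = MakePair()(i + j);
    assert(p.first == 1 && p.second == 2);
}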

 You can easily solve both problems by making the parameter to if_
 an ObjectTransform, as follows:
 snip
 
 Unfortunately, this didn't quite help. Despite the fact that
 EmtnShiftFObjGram itself does compile, GCC 4.5.1 (under MinGW32,
 WinXP, SP3) fails to compile all the code I like. Here are my test
 cases, where the line annotated with *** doesn't compile. The error
 message I get can be found in the PS:

I don't know, and without a complete example that I can compile, I can't
help you further. And a warning: I'm completely swamped with work and am
not likely to be much help in the near future. Maybe you post your code
to the proto list (cross-posting) and someone can chime in there.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Matching terminals holding a function pointer

2011-02-07 Thread Eric Niebler
On 2/8/2011 3:38 AM, Bart Janssens wrote:
 Hi,
 
 I may be overlooking the obvious here, but I can't seem to find an
 easy way to match terminals containing a pointer to a function (of
 arbitrary type). 
snip

Sure there's an easy way. You can use proto::if_ and type traits:

// untested
struct FunctionPointer
  : proto::and_<
        proto::terminal< _ >
      , proto::if_< is_pointer< proto::_value >() >
      , proto::if_< is_function<
            remove_pointer< proto::_value > >() >
    >
{};
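A quick sanity check of how the grammar behaves, assuming the FunctionPointer
definition and the using-declarations from the snippet above are in scope:

#include <boost/proto/proto.hpp>
#include <boost/mpl/assert.hpp>

int main()
{
    typedef proto::terminal< int (*)(int) >::type fun_ptr_term;
    typedef proto::terminal< int >::type          int_term;

    BOOST_MPL_ASSERT(( proto::matches< fun_ptr_term, FunctionPointer > ));
    BOOST_MPL_ASSERT_NOT(( proto::matches< int_term, FunctionPointer > ));
}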

HTH,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Default constructor of proto::extends

2011-02-06 Thread Eric Niebler
On 2/7/2011 6:01 AM, Antoine de Maricourt wrote:
 Hi Eric,
 
 is there any reason why the default ctor of proto::extends uses the
 following form:
 
   extends()
  : proto_expr_()
   {}
 
 instead of simply
 
   extends() {}
 
 I use proto::extends over expressions that hold x86 SIMD registers (like
 proto::terminal<__m128i>), and the current version of proto::extends
 default ctor yields to a __m128i() call, which fills the register with
 zeros, while I wanted to keep it uninitialized.
 
 Apparently the compiler (g++, 4.6 and previous versions as well) is
 unable to detect that this is unneeded, as the register is in fact
 initialized a few instructions later, and the only way I was able to get
 rid of this was to remove the explicit call to the proto_expr_ ctor in the
 extends ctor.

You could use BOOST_PROTO_EXTENDS instead of proto::extends. I'm a
little uncomfortable with a base class that leaves members in an
undefined state, but I'm willing to consider it. Feel free to file a
bug. Thanks.
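A rough sketch of the macro-based alternative. With BOOST_PROTO_EXTENDS the
extension type stays a POD, so no base-class constructor runs and the wrapped
terminal is left uninitialized (an int terminal stands in for the __m128i case
here, and simd_domain is just a placeholder name):

#include <boost/proto/proto.hpp>

namespace proto = boost::proto;

template<typename Expr> struct simd_expr;

struct simd_domain
  : proto::domain< proto::pod_generator<simd_expr> >
{};

template<typename Expr>
struct simd_expr
{
    // POD extension: no constructors are defined, so default-initialization
    // leaves the held value untouched.
    BOOST_PROTO_EXTENDS(Expr, simd_expr<Expr>, simd_domain)
};

int main()
{
    simd_expr< proto::terminal<int>::type > e;   // value left uninitialized
    (void)e;
}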

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Active operator/function generation checking

2011-01-30 Thread Eric Niebler
On 1/31/2011 2:55 AM, Joel Falcou wrote:
 I'm trying to polish the last layer of compile-time error handling in nt2.
 My concern at the moment is that, if I have a function foo(a,b) that works
 on any real a and any char b, I want my foo function working on nt2
 containers to accept nothing but a matrix of reals and a matrix of chars.
 nt2 has an is_callable_with metafunction that basically checks for this at
 the scalar level.
 
 Considering the huge number of functions nt2 has to support and their
 complex type requirements, grammars are a bit unusable here.
 
 Is it OK to have a custom nt2 generator that basically static_asserts
 over is_callable_with to prevent the wrong container expression from being
 built, and hence to avoid ending up with an error way down in the
 expression evaluation code?

This is a judgment call that only you, as library author, can make. If
doing the checking early imposes too high a compile-time requirement,
then it may make sense to delay it until it's less expensive to do, and
accept worse error messages.

You might also consider a debugging mode controlled with a compiler
switch, where things are checked up-front. Just a suggestion.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] : Proto transform with state

2011-01-29 Thread Eric Niebler
On 11/18/2010 4:56 AM, Eric Niebler wrote:
 I think Proto transforms need a let statement for storing intermediate
 results. Maybe something like this:
 
   struct RenumberFun
     : proto::fold<
           _
         , make_pair(fusion::vector0(), proto::_state)
         , let<
               _a( Renumber(_, second(proto::_state)) )
             , make_pair(
                   push_back(
                       first(proto::_state)
                     , first(_a)
                   )
                 , second(_a)
               )
           >
       >
   {};
 
 I haven't a clue how this would be implemented.
 
 It's fun to think about this stuff, but I wish it actually payed the bills.

Bills be damned. I just committed to trunk an implementation of
proto::let, along with tests and reference docs. End-user docs are still
todo.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] : Proto transform with state

2011-01-29 Thread Eric Niebler
On 1/29/2011 7:49 PM, Eric Niebler wrote:
 Bills be damned. I just committed to trunk an implementation of
 proto::let, along with tests and reference docs. End-user docs are still
 todo.

<sigh> As often happens, I woke up this morning knowing this code was
broken, so I pulled it. I think I finally know how to fix it, though.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com





[proto] new proto article, please vote it up

2011-01-26 Thread Eric Niebler
There's a new article about proto on cpp-next.com. Go to the following
reddit page and vote it up. Thanks!

http://www.reddit.com/r/cpp/comments/f9ek6/expressive_c_expression_optimization_eliminating/

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Proto Transform Questions

2011-01-15 Thread Eric Niebler
On 1/15/2011 6:41 AM, Nate Knight wrote:
 On Jan 14, 2011, at 12:29 PM, Nate Knight wrote:
 
 I've pasted some code below where I am trying to transform expressions of 
 the form

 (a op b op c)[i] 

 to 

 (a[i] op b[i] op c[i])

 I managed to get this to work for the simple case

 (a+b)[i]

 but I'm curious about how to generalize this to include other operators 
 (without explicitly handling them all).  Also, as written the transform 
 doesn't recurse properly, and I'm having some trouble seeing how to correct 
 this.


This is a fun little problem. The answer is very simple, but requires
some knowledge of proto's pass_through transform, possessed by
proto::nary_expr (and friends):

// Take any expression and turn each node
// into a subscript expression, using the
// state as the RHS.
struct Distribute
  : or_<
        when<terminal<_>, _make_subscript(_, _state)>
      , nary_expr<_, vararg<Distribute> >
    >
{};

struct Vectorize
  : or_<
        terminal<_>
      , when<subscript<_, _>, Distribute(_left, _right)>
      , nary_expr<_, vararg<Vectorize> >
    >
{};

int main()
{
    terminal<char const *>::type a = {"a"};
    terminal<char const *>::type b = {"b"};
    terminal<char const *>::type c = {"c"};
    terminal<int>::type i = {42};

display_expr( (a+b+c)[i] );

display_expr( Vectorize()((a+b+c)[i]) );
}

HTH,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Adding stuff in proto operator

2010-12-29 Thread Eric Niebler
On 12/29/2010 5:40 AM, Joel Falcou wrote:
 Error found.
 
 The problem was in the and_impl transform. It uses comma operator to
 chain calls to each and_ alternatives.
 However, when this is used in a grammar used as a Generator, it enters a
 subtle infinite loop as each comma
 wants to build an expression with the newly generated expression.
snip

The problem is that in this expression, Proto's overloaded comma
operator is considered. It shouldn't be. I was being a little too cute
when I used the comma operator like this.

 Is this fix acceptable or am I doing something wrong all together ?
 If yes, Eric, any objections that I merge this into trunk ?

I made a few adjustments and committed it myself. Thanks, Joel!

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] expanding Proto's library of callables

2010-12-28 Thread Eric Niebler
On 12/28/2010 5:39 AM, Thomas Heller wrote:
 I just saw that you added functional::at.
 I was wondering about the rationale of your decision to make it a non 
 template.
 My gut feeling would have been to have proto::functional::at<N>(seq)
 and not proto::functional::at(seq, N).

Think of the case of Phoenix placeholders, where the index is a
parameter:

  when< terminal<placeholder_>, _at(_state, _value) >

For the times when the index is not a parameter, you can easily do:

  _at(_state, mpl::int_<N>())

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] expanding Proto's library of callables

2010-12-28 Thread Eric Niebler
On 12/28/2010 11:43 AM, Thomas Heller wrote:
 Eric Niebler wrote:
 
 On 12/28/2010 5:39 AM, Thomas Heller wrote:
 I just saw that you added functional::at.
 I was wondering about the rationale of your decision to make it a non
 template.
 My gut feeling would have been to have proto::functional::at<N>(seq)
 and not proto::functional::at(seq, N).

 Think of the case of Phoenix placeholders, where the index is a
 parameter:

   when< terminal<placeholder_>, _at(_state, _value) >
 
 vs:
 
 when<terminal<placeholder_>, _at<_value>(_state)>

Have you tried that? Callable transforms don't work that way. It would
have to be:

 lazy<at<_value>(_state)>

Blech.

 For the times when the index is not a parameter, you can easily do:

   _at(_state, mpl::int_<N>())
 
 vs:
 
 _at<mpl::int_<N> >(_state)
 
 just wondering ... the second version looks more natural and consistent

Still think so?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] Adding stuff in proto operator

2010-12-28 Thread Eric Niebler
On 12/28/2010 5:05 PM, Joel Falcou wrote:
 Here is my use case. I guess Eric's answer will be "do this at
 evaluation time".

"Do this at evaluation time." Just kidding.

 but let's say I have some array/matrix DSEL going on. I
 want to test whether two expressions containing said matrices have
 compatible sizes before creating a proto AST node.
 
 e.g. if a,b are matrices, a + b should assert if size(a) != size(b)
 (in the Matlab meaning of size).
 
 Now I can do the check when evaluating the expression before trying
 to assign it, BUT it irks me that the assert triggers inside the
 matrix expression evaluator instead of at the line where said + was
 wrongly called.
 
 Could we have some way to specify code to call before returning a
 new operator AST node? Should I overload the operators myself? Should
 I stick with the assert in the eval policy and try to come up with a way
 to tell the user which operator failed in which expression? Did I
 miss the obvious?

You missed the Generator parameter to proto::domain. It's a unary
function object that accepts all new proto expressions and does
something to them. That something can include asserting if matrix/vector
sizes don't match.
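A minimal sketch of how such a generator might look. It is untested;
check_sizes and matrix_grammar are placeholders, and a real domain would
typically also wrap each new node in its expression extension (e.g. via
proto::generator<>), which is omitted here:

#include <boost/proto/proto.hpp>
#include <boost/assert.hpp>

namespace proto = boost::proto;

struct matrix_grammar : proto::_ {};          // placeholder grammar

template<typename Expr>
bool check_sizes(Expr const &)                // placeholder extent check
{
    return true;
}

struct size_checking_generator
  : proto::callable
{
    template<typename Sig> struct result;

    template<typename This, typename Expr>
    struct result<This(Expr)> { typedef Expr type; };

    template<typename This, typename Expr>
    struct result<This(Expr const &)> { typedef Expr type; };

    template<typename Expr>
    Expr operator()(Expr const &expr) const
    {
        // fires at the line where the offending operator is used
        BOOST_ASSERT(check_sizes(expr));
        return expr;
    }
};

struct matrix_domain
  : proto::domain<size_checking_generator, matrix_grammar>
{};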

HTH,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] looking for an advise

2010-12-27 Thread Eric Niebler
On 12/27/2010 5:26 AM, Joel Falcou wrote:
 On 27/12/10 11:02, Maxim Yanchenko wrote:
 Hi Eric and other gurus,

 Sorry in advance for a long post.

 I'm making a mini-language for message processing in our system.
 It's currently implemented in terms of overloaded functions with
 enable_if<match<Grammar> > dispatching, but now I see that

 Don't. This increases compile time and provides unclear errors. Accept
 any kind of expression and use matches in a
 static_assert with a clear error ID.
 
 (a) I'm reimplementing Phoenix which is not on Proto yet in Boost
 1.45.0 (that's
 how I found this mailing list). It would be great to reuse what
 Phoenix has;

 Isn't it in trunk already Thomas ?

No, it's still in the sandbox. Maxim, you can find Phoenix3 in svn at:

https://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3

According to Thomas, the docs are still a work in progress, but the code
is stable.

 (b) I need to do several things on expressions and I don't know what
 would be
 the best way to approach them all.

 Here is a background.
 Every message is a multiset of named fields (i.e. is a multimap
 FieldName->FieldValue).
 I have a distinct type for each FieldName, so I can do some
 multiprogramming on
 sets of FieldNames, like making generating a structure that will hold
 values of
 the fields I need, by list of field names (e.g. fusion::map).

 While processing a message, I can do some checks like if particular
 field is
 present, if it's equal or not to some value, if it matches a predicate
 etc.
 They are implemented as a set of predicate functions condition like

template < class Msg, class Expr >
typename boost::enable_if < proto::matches< Expr, proto::equal_to<
 proto::_,
 proto::_ > >, bool >::type
condition( const Msg & msg, const Expr & expr )

 with various condition grammars in enable_if<matches<...> >.

 Again, use matches inside the function body.

 (a) everything runs on enable_if. I expect it to become more concise
 and clean
 if I use either transforms or contexts.

 You need none. Put your grammar into a domain with a proper context and
 proto will check operator overloads for you.

 (b) a lot of Phoenix is basically reimplemented from scratch (thanks
 Eric, with
 Proto it was very easy to do!). But I don't know how to extend Phoenix
 so it
 could work in my expressions with my things like any_field, optional,
 mandatory etc.

 Better see what Thomas has up his sleeves in Phoenix.

Right. Maxim, you are totally correct that you are reimplementing much
of Phoenix3, and that the extensibility of Phoenix3 is designed with
precisely these use cases in mind. The only thing missing are docs on
the customization points you'll need to hook.

Thomas, this is a GREAT opportunity to put the extensibility of Phoenix3
to test. Can you jump in here and comment?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [Spirit-devel] [Spirit Development] Bug in today's boost_trunk -- ambiguous operator

2010-12-26 Thread Eric Niebler
On 12/25/2010 4:59 PM, Hartmut Kaiser wrote:
 Anyways, I'd suggest to do the outlined changes to Fusion and Proto in any
 case, as in other contexts the very same problem might pop up again.

Agreed. Once the customization point has been moved out of Fusion's
detail namespace, I'll use it in Proto to disable the troublesome overload.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] phoenix 3 refactoring complete.

2010-12-23 Thread Eric Niebler
On 12/23/2010 7:57 AM, Thomas Heller wrote:
 I just wanted you to know that phoenix3 is in a working state once 
 again.

Woo!

 I refactored everything with the changes we discussed ...
 
 All tests from boost.bind are passing!

Woo!

 Placeholder unification is in place!

Woo!

 Now ... up to documentation writing and checking for BLL 
 compatibility ...

This is awesome news, Thomas. You should send this around to the boost
developers list.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] [Spirit-devel] [Spirit Development] Bug in today's boost_trunk -- ambiguous operator

2010-12-23 Thread Eric Niebler
On 12/23/2010 9:42 PM, Hartmut Kaiser wrote:
 I have a fairly large program that compiled just fine on boost_trunk
 version 67416, but is now broken.  I traced the problem to an
 ambiguous operator overload that occurs when both qi.hpp and
 fusion/tuple.hpp are included.

 The following code does not compile as of today, but should compile
 on version 67416.

 #include <boost/fusion/tuple.hpp> // including this file caused ambiguity errors
 #include <boost/spirit/include/qi.hpp>
 #include <string>

 int main()
 {
   static const boost::spirit::qi::rule<std::string::const_iterator> a;
   static const boost::spirit::qi::rule<std::string::const_iterator> b;
   boost::spirit::qi::rule<std::string::const_iterator> rule = a > b;
 }


 Ohh, that looks serious! The ambiguity is caused by a proto/fusion
 overlap.
 A qi::rule is both a proto expression and a fusion sequence (as all
 proto expressions are fusion sequences now).

 Proto's overload for operator>() is a valid choice for this, as is
 fusion's
 operator>().

 I'm cc'ing Eric and Christopher, perhaps they have an idea how to
 proceed.

 I suggest defining a Fusion trait, is_less_than_comparable, and defining it
 for the built-in Fusion sequences (tuple, vector, etc.). That can be used
 to SFINAE out Fusion's operator<, which should not be picked up for Proto
 expressions. Ditto for the other operations on Fusion sequences.

 I have no better ideas at the moment.
 
 Fusion already has a traits class for this, but it's not SFINAE enabled (see
 boost/fusion/sequence/comparison/detail/enable_comparison.hpp), even if it's
 in 'namespace detail'. 
 
 Adding an additional template parameter to make it into a customization
 point:
 
 namespace boost { namespace fusion { namespace detail
 {
 template <typename Seq1, typename Seq2, typename Enable = void>
 struct enable_equality ...
 
 template <typename Seq1, typename Seq2, typename Enable = void>
 struct enable_comparison ...
 }}}
 
 which allows to add:
 
 namespace boost { namespace fusion { namespace detail
 {
 template <typename T1, typename T2>
 struct enable_equality<T1, T2,
   typename enable_if<mpl::or_<proto::is_expr<T1>, proto::is_expr<T2> >
  >::type>
   : mpl::false_
 {};
 
 template <typename T1, typename T2>
 struct enable_comparison<T1, T2,
   typename enable_if<mpl::or_<proto::is_expr<T1>, proto::is_expr<T2> >
   >::type>
   : mpl::false_
 {};
 }}}
 
 I'm currently running the Fusion tests to see whether the additional
 'typename Enable = void' breaks anything. But I expect everything to work -
 Yep everything is fine.
 
 Eric, would you be willing to add the specializations above to Proto's
 Fusion adaptation code?

Doable, but I would feel better if these were documented customization
points of Fusion. At least move them out of the detail namespace.

Note that if we had Concepts, the Fusion operators would be restricted
to models of IsComparable, which presumably would require that each
element of the sequence could be compared and that the result was
convertible to bool. By that definition, a Proto expression is *not* a
model of IsComparable. What you're asking me (and everyone else) to do
is to explicitly state that my type does *not* model the concept. It
might be expedient in this case, but it feel vaguely wrong to me.

I'm still curious why an operator in the Fusion namespace is even being
considered. How did boost::fusion become an associated namespace of
spirit::qi::rule?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] current state

2010-12-20 Thread Eric Niebler
On 12/20/2010 8:10 AM, Maxim Yanchenko wrote:
 Hi all,
 
 I see there are going to be breaking changes in Proto and Phoenix (like contexts
 must go, visitors etc).

I don't think there will be massive breaking changes anytime soon. I
/might/ deprecate contexts, but I doubt I could remove them. Too many
people use them. Someday I might come out with a completely different
C++0x version of Proto with significant breakages, but until then, I
think you're safe.

And I might not even deprecate contexts. A week ago I tried porting
Proto's examples from contexts to transforms and found they got
significantly more complicated. I don't have any definite plans wrt
contexts and transforms yet.

 So the question is: for a beginner who just started to learn Proto, which 
 parts
 of Proto should be studied now and which are likely to be dropped/changed? 
 Just
 to avoid wasting time learning something deprecated.

Despite the fact that they're simpler for some things, I'd say avoid
contexts. They won't get you as far as transforms.

 P.S. Eric, there is a bug in proto::display_expr, I wrote about it (and 
 posted a
 patch) in comments to your Playing with syntax article on cpp-next, could 
 you
 please take a look when you have time?

I just committed a fix to trunk. Thanks for bringing this to my attention.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


[proto] expanding Proto's library of callables

2010-12-17 Thread Eric Niebler
Proto ships with a very small collection of callables for use in Proto
transforms: wrappers for fusion algorithms like reverse and pop_front
and the like. For 1.46, there will be a few more: make_pair, first,
second, and wrappers for a few more Fusion algorithms. It's woefully
incomplete, though.

I have an idea. Phoenix3 defines *all* these juicy callables under the
stl/ directory. I can't #include them in Proto because that would create
a circular dependency. Why don't we just move the definitions of the
function objects into Proto and make them Proto callables? Phoenix3 can
just #include 'em and use 'em.

Thoughts?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] grammars, domains and subdomains

2010-12-10 Thread Eric Niebler
On 12/10/2010 3:23 AM, Thomas Heller wrote:
 I think there is a misunderstanding in how sub-domaining works.
 The solution you propose is that a sub domain extends its super domain in 
 the way that the expressions in the sub domain also become valid in the 
 super domain. I guess this is the way it should work.

Right.

 However, the solution I am looking for is different.
 The sub-domain i tried to define should also extend its super domain, BUT 
 expressions valid in this sub-domain should not be valid in the super 
 domain, only in the sub-domain itself.

Because you don't want phoenix::_a to be a valid Phoenix expression
outside of a phoenix::let, right?

 I think proto should support both forms of operation. The first one can be 
 easily (more or less) achieved by simply changing proto::matches in the way 
 you demonstrated earlier, I think. I am not sure, how to do the other stuff 
 properly though.

OK, let's back up. Let's assume for the moment that I don't have time to
do intensive surgery on Proto sub-domains for Phoenix (true). How can we
get you what you need?

My understanding of your needs: you want a way to define the Phoenix
grammar such that (a) it's extensible, and (b) it guarantees that local
variables are properly scoped. You have been using proto::switch_ for
(a) and sub-domains for (b), but sub-domains don't get you all the way
there. Have I summarized correctly?

My recommendation at this point is to give up on (b). Don't enforce
scoping in the grammar at this point. You can do the scope checking
later, in the evaluator of local variables. If a local is not in scope
by then, you'll get a horrific error. You can improve the error later
once we decide what the proper solution looks like.

If I have mischaracterized what you are trying to do, please clarify.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] grammars, domains and subdomains

2010-12-10 Thread Eric Niebler
On 12/9/2010 10:51 AM, Thomas Heller wrote:
 Eric Niebler wrote:
 
 On 12/8/2010 5:30 PM, Thomas Heller wrote:
 I don't really know how to do it otherwise with the current design.
 There is really only this part of the puzzle missing. If it is done, we
 have a working and clean Phoenix V3.
 For the time being, I can live with the workaround I did.
 However, I will focus my efforts of the next days on working out a patch
 for this to work properly.

 I made a simple fix to proto::matches on trunk that should get you
 moving. The overloads can still be smarter about sub-domains, but for
 now you can easily work around that by explicitly allowing expressions
 in sub-domains in your super-domain's grammar. See the attached solution
 to your original problem.
 
 It solves the problem as it succeeds to compile.
 However, there are a few problems with that solution:
   1) t2 is matched by grammar1
   2) I have to add the plus rule in grammar2 (this could be solved with the 
  grammar parametrisation from my earlier post)
   3) The expression in a subdomain is matched in grammar1 on the pure fact 
  that it is a subdomain of domain1; it should be matched against the
  subdomain's grammar as well.
 
 Right now, I am questioning the whole deduce_domain part and the
 selection of the resulting domains in proto's operator overloads.
 Here is what I think should happen (without loss of generality I
 restrict myself to binary expressions):
 
 If the two operands are in the same domain, the situation is clear:
 The operands need to match the grammar belonging to the domain, and the
 result has to as well.
 
 If one of the operands is in a different domain the situation gets
 complicated. IMHO, the domain of the resulting expression should be
 selected differently.
 Given is a domain (domain1) which has a certain grammar (grammar1) and
 sub-domain (domain2) with another grammar (grammar2).
 When combining two expressions from these domains with a binary op, the
 resulting expression should be in domain2.
 Why? Because when writing grammar1 there is no way to account for the
 expressions which should be valid in grammar2. With the current deduce_domain,
 this is bound to always fail. Additionally, conceptually, it makes
 no sense that an expression containing t1 and t2 be in domain1.

Why not? What about this:

class D1 {};
class D2 : public D1 {};

D1 d1;
D2 d2;

D1 *p1 = &d1; // OK
D1 *p2 = &d2; // OK
D2 *p3 = &d1; // ERROR

You're suggesting an analogous behavior for expressions and domains.
What happens when you have a situation where ...

D1 is a sub-domain of B
D2 is a sub-domain of B

E1 is an expression in D1
E2 is an expression in D2

E1 + E2; // What domain is this in?

The current implementation gives an unambiguous answer -- B -- and
nicely mirrors the implicit conversions of C++ inheritance. Your
solution leads to ambiguities.

 When the domains are not compatible (meaning they have no domain - sub
 domain relationship), the resulting domain should be common_domain.

No, that should be an error.

 These considerations are based on the assumption that an expression in a
 sub-domain should not be matched by the grammar of the super domain.

That /may/ be the case or it may not.

 Which makes sense, given the context of the local variables in phoenix.
 Remember, local variables shall only be valid when embedded in a let or
 lambda expression.
 Maybe, the sub-domain idea is not suited at all for that task.

Possibly.

 OK ... thinking along ... The stuff which is already in place, and your
 suggested fix, makes sense when seeing sub-domains really as extensions to
 the super domain, and the grammar of these ...
 
 Thoughts?

I really don't think sub-domains are what you're looking for. Why can't
you just give locals like _a a tag type that is unrecognized by the main
Phoenix grammar, but recognized by the grammar of let expressions? That
seems simplest.
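A stripped-down sketch of that idea at the terminal level (names are
hypothetical; the real Phoenix grammars would build on top of this):

#include <boost/proto/proto.hpp>
#include <boost/mpl/assert.hpp>

namespace proto = boost::proto;

struct local_tag {};

// locals like _a become terminals of a dedicated tag type
typedef proto::terminal< local_tag >::type local_a_type;

// ordinary terminals: anything *except* a local
struct NonLocalTerminal
  : proto::and_<
        proto::terminal< proto::_ >
      , proto::not_< proto::terminal< local_tag > >
    >
{};

// terminals allowed inside a let body: locals are fine here
struct LetBodyTerminal
  : proto::terminal< proto::_ >
{};

int main()
{
    BOOST_MPL_ASSERT_NOT(( proto::matches< local_a_type, NonLocalTerminal > ));
    BOOST_MPL_ASSERT(( proto::matches< local_a_type, LetBodyTerminal > ));
}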

Aside: I've given more thought to the problem of grammar-checking in the
presence of sub-domains, and here's my current thinking (and note that
this doesn't address your problem):

The common domain is calculated as always. The existing solution has the
operator overloads building a new expression type and then checking it
against the grammar of the common domain. Let's say Left is already in
the common domain and Right is in a sub-domain. E.g. operator+ builds
proto::plus<Left, Right>::type and checks against common_domain<Left,
Right>::type::proto_grammar.

What if instead we build a dummy expression type like this: proto::plus<
Left, proto::_ >::type and match THAT against the common domain's
grammar? It's currently undefined what happens when you use the wildcard
as an expression type when doing pattern matching, but we can say that
it's an expression that trivially matches any grammar. (It's already a
grammar that matches any expression.)

Why this is a good thing: expressions that are sub-domains of the common
domain are grandfathered

Re: [proto] grammars, domains and subdomains

2010-12-08 Thread Eric Niebler
On 12/8/2010 5:30 AM, Thomas Heller wrote:
 Eric Niebler wrote:
 On 12/7/2010 2:37 PM, Thomas Heller wrote:
 So, How to handle that correctly?

 Yup, that's a problem. I don't have an answer for you at the moment,
 sorry.
 
 I think i solved the problem. The testcase for this solution is attached.
 Let me restate what I wanted to accomplish:
snip

Thomas,

A million thanks for following through. The holidays and my day job are
taking their toll, and I just don't have the time to dig into this right
now. It's on my radar, though. I'm glad you have a work-around, but it
really shouldn't require such Herculean efforts to do this. There are 2
bugs in Proto:

1) Proto operator overloads are too fragile in the presence of
subdomains. Your line (5) should just work. It seems like a problem that
Proto is conflating grammars with subdomain relationships the way it is,
but I really need to sit down and think it through.

One possible solution is an implicit modification of the grammar used to
check expressions when the children are in different domains. For
instance, in the expression A+B, the grammar used to check the
expression currently is: common_domain<A, B>::type::proto_grammar.
Instead, it should be:

    or_<
        typename common_domain<A, B>::type::proto_grammar
      , if_<
            is_subdomain_of<
                typename common_domain<A, B>::type
              , domain_of< _ >
            >()
        >
    >

That is, any expression in a subdomain of the common domain is by
definition a valid expression in the common domain. is_subdomain_of
doesn't exist yet, but it's trivial to implement. However ...

2) Using domain_of<_> in proto::if_ doesn't work because grammar
checking is currently done after stripping expressions of all their
domain-specific wrappers. That loses information about what domain an
expression is in. Fixing this requires some intensive surgery on how
Proto does pattern matching, but I foresee no inherent obstacles.

It'd be a big help if you could file these two bugs.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] : Proto transform with state

2010-12-07 Thread Eric Niebler
On 12/6/2010 4:50 PM, Thomas Heller wrote:
 Eric Niebler wrote:
 I played with the let transform idea over the weekend. It *may* be
 possible to accomplish without the two problems I described above. See
 the attached let transform (needs latest Proto trunk). I'm also
 attaching the Renumber example, reworked to use let.
snip
 
 Without having looked at it too much ... this looks a lot like the 
 environment in phoenix. Maybe this helps in cleaning it out a bit.

I tend to doubt it would help clean up the implementation of Phoenix
environments. These features exist on different meta-levels: one
(proto::let) is a feature for compiler-construction (Proto), the other
(phoenix::let) is a language feature (Phoenix). The have roughly the
same purpose within their purview, but as their purviews are separated
by one great, big Meta, it's not clear that they have anything to do
with each other.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] grammars, domains and subdomains

2010-12-07 Thread Eric Niebler
On 12/7/2010 2:37 PM, Thomas Heller wrote:
 Hi,
 
 I have been trying to extend a domain by subdomaining it. The sole
 purpose of this subdomain was to allow another type of terminal expression.
 
 Please see the attached code, which is a very simplified version of what
 I was trying to do.
snip

 So, How to handle that correctly?

Yup, that's a problem. I don't have an answer for you at the moment, sorry.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] : Proto transform with state

2010-12-06 Thread Eric Niebler
On 11/18/2010 3:31 PM, Eric Niebler wrote:
 On 11/18/2010 1:45 PM, Thomas Heller wrote:
 Eric Niebler e...@... writes:
 It's REALLY hard. The let context needs to be bundled with the Expr,
 State, or Data parameters somehow, but in a way that's transparent. I
 don't actually know if it's possible.

 Very hard ... yeah. I am thinking that we can maybe save these variables in 
 the 
 transform?
 
 I'm thinking we just stuff it into the Data parameter. We have a
 let_scope template that is effectively a pair containing:
 
 1. The user's original Data, and
 2. A Fusion map from local variables (_a) to values.
 
 The let transform evaluates the bindings and stores the result in the
 let_scope's Fusion map alongside the user's Data. We pass the let_scope
 as the new Data parameter. _a is itself a transform that looks up the
 value in Data's Fusion map. The proto::_data transform is changed to be
 aware of let_scope and return only the original user's Data. This can
 work. We also need to be sure not to break the new
 proto::external_transform.
 
 The problems with this approach as I see it:
 
 1. It's not completely transparent. Custom primitive transforms will see
 that the Data parameter has been monkeyed with.
 
 2. Local variables like _a are not lexically scoped. They are, in fact,
 dynamically scoped. That is, you can access _a outside of a let
 clause, as long as you've been called from within a let clause.
 
 Might be worth it. But as there's no pressing need, I'm content to let
 this simmer. Maybe we can think of something better.

I played with the let transform idea over the weekend. It *may* be
possible to accomplish without the two problems I described above. See
the attached let transform (needs latest Proto trunk). I'm also
attaching the Renumber example, reworked to use let.

This code is NOT ready for prime time. I'm not convinced it behaves
sensibly in all cases. I'm only posting it as a curiosity. You're insane
if you use this in production code. Etc, etc.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
#ifndef BOOST_PP_IS_ITERATING

///
/// \file let.hpp
/// Contains definition of the let transform.
//
//  Copyright 2010 Eric Niebler. Distributed under the Boost
//  Software License, Version 1.0. (See accompanying file
//  LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

#ifndef BOOST_PROTO_TRANSFORM_LET_HPP_EAN_12_04_2010
#define BOOST_PROTO_TRANSFORM_LET_HPP_EAN_12_04_2010

#include <boost/preprocessor/cat.hpp>
#include <boost/preprocessor/facilities/intercept.hpp>
#include <boost/preprocessor/repetition/repeat.hpp>
#include <boost/preprocessor/repetition/enum.hpp>
#include <boost/preprocessor/repetition/enum_trailing.hpp>
#include <boost/preprocessor/repetition/enum_params.hpp>
#include <boost/preprocessor/repetition/enum_binary_params.hpp>
#include <boost/preprocessor/repetition/enum_trailing_params.hpp>
#include <boost/preprocessor/repetition/enum_params_with_a_default.hpp>
#include <boost/preprocessor/repetition/enum_trailing_binary_params.hpp>
#include <boost/preprocessor/facilities/intercept.hpp>
#include <boost/preprocessor/iteration/iterate.hpp>
#include <boost/mpl/if.hpp>
#include <boost/mpl/eval_if.hpp>
//#include <boost/mpl/print.hpp>
#include <boost/mpl/identity.hpp>
#include <boost/mpl/aux_/template_arity.hpp>
#include <boost/mpl/aux_/lambda_arity_param.hpp>
#include <boost/fusion/include/map.hpp>
#include <boost/fusion/include/at_key.hpp>
#include <boost/proto/proto_fwd.hpp>
#include <boost/proto/traits.hpp>
#include <boost/proto/transform/impl.hpp>

namespace boost { namespace proto
{
// Fwd declarations to be moved to proto_fwd.hpp
    template<
        BOOST_PP_ENUM_PARAMS_WITH_A_DEFAULT(BOOST_PROTO_MAX_ARITY, typename Local, void)
      , typename Transform = void
    >
    struct let;

    template<typename Tag>
    struct local;

    namespace detail
    {
        // A structure that holds both a map of local variables as
        // well as the original Data parameter passed to the let transform
        template<typename LocalMap, typename Data>
        struct let_scope
        {
            typedef LocalMap local_map_type;
            typedef Data data_type;

            let_scope(LocalMap &l, Data &d)
              : locals(l)
              , data(d)
            {}

            LocalMap &locals;
            Data &data;

        private:
            let_scope &operator=(let_scope const &);
        };

        template<typename Expr, typename State, typename Data
            BOOST_PP_ENUM_TRAILING_BINARY_PARAMS(BOOST_PROTO_MAX_ARITY,
                typename Local, = void BOOST_PP_INTERCEPT)
          , typename Transform = void
        >
        struct init_local_map

Re: [proto] Proto documentation, tutorials, developer guide and general Publis Relations

2010-12-04 Thread Eric Niebler
On 12/4/2010 12:10 PM, joel falcou wrote:
 On 04/12/10 18:01, Eric Niebler wrote:
 Yes, this an some other newer features are not described in the users'
 guide at all. That includes sub-domains, per-domain control over
 as_child and as_expr, external transforms, and now the expanded set of
 functional callables.
 
 and the member thing or is it still in flux ?

Oh, right. Darn. I really need to sit down and convince myself that that
code is legit. It may still change. :-/

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Re: [proto] problem with constness of operator return types

2010-12-02 Thread Eric Niebler
On 12/2/2010 6:51 AM, Thomas Heller wrote:
 Hi,
 
 I just encountered a somewhat stupid problem. The possibility is high that I
 missed something.
 
 The problem is that proto's default transform cannot handle op_assign
 correctly. This is due to the fact that operator OP returns a const proto 
 expression, which turns every value type in proto terminals into a const 
 value. 

Actually, it doesn't. See below:

 Meaning, that codes like the following don't compile:
 
 // following lines copy the literal 9 into the terminal expression:
 boost::proto::terminal<int>::type t1 = {9};
 boost::proto::terminal<int>::type t2 = {9};
 
 // try to plus assign t2 to t1:
 boost::proto::default_context ctx;
 boost::proto::eval(t1 += t2, ctx);
 
 This fails due to the following code:

Did you actually try to compile this code?
snip

 With that type of expression creation, proto's default evaluation fails with 
 this error message:
 ./boost/proto/context/default.hpp:142:13: error: read-only variable is not 
 assignable

The following code compiles for me (msvc-10)

  #include <boost/proto/proto.hpp>

  int main()
  {
// following lines copy the literal 9 into the terminal expression:
boost::proto::terminal<int>::type t1 = {9};
boost::proto::terminal<int>::type t2 = {9};

// try to plus assign t2 to t1:
boost::proto::default_context ctx;
boost::proto::eval(t1 += t2, ctx);
  }

But please don't use contexts. :-) The following also works:

  boost::proto::_default<> eval;
  eval(t1 += t2);

Either way, t1 ends up with the value 18.

 So far so good ... or not good. This should work!
 The problem is that expressions const qualified above, return a const 
 terminal value as well.

Consider the following code:

  struct S { int &i; };
  int i = 9;
  S const s = {i}; // S is const, s.i is not.
  s.i += 9; // OK

Proto holds children by reference by default, and respects their
constness. So what problem are you seeing, exactly?
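
For comparison, here is the same point with an actual Proto terminal -- a
minimal sketch using only the documented proto::terminal<> and proto::value()
facilities; the variable names are made up:

  #include <boost/proto/proto.hpp>
  namespace proto = boost::proto;

  int main()
  {
      int i = 9;
      // The expression object is const, but it holds i by reference:
      proto::terminal<int &>::type const t = {i};
      proto::value(t) += 9; // OK: mutates i through the reference; i == 18
  }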

I can guess that in Phoenix, you are seeing this problem because we told
Proto that in the Phoenix domain, children need to be held by value. In
this case, the top-level const really is a problem.

What's the right way to fix this? For one thing, your patch is *very*
incomplete. There are a million places in Proto where return types are
const-qualified. I do it in the absence of rvalue references to get
Proto temporaries to bind to lvalue references. That reduces the number
of overloads needed, which (I think) brings down compilation times. If
you removed *all* return type const-qualification, Proto would stop
compiling. You'd need to add a bunch of overloads to make it work again,
and then benchmark that Proto didn't compile slower.

I don't actualy believe this is the right fix. Phoenix stores terminals
by value *by design*. I argue that t1 += t2 should NOT compile.
Consider this Phoenix code:

  auto t1 = val(9);
  std::for_each(c.begin(), c.end(), t1 += _1);

What's the value of t1 after for_each returns? It's 9! The for_each is
actually mutating a temporary object. The const stuff catches this error
for you at compile time. You're forced to write:

  int t1 = 9;
  std::for_each(c.begin(), c.end(), ref(t1) += _1);

Now the terminal is held by reference, and it works as expected.
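
For completeness, a self-contained sketch of that second form (assuming the
Phoenix v3 headers and the documented names boost::phoenix::ref and
boost::phoenix::arg_names::_1; the container and values are invented):

  #include <boost/phoenix/core.hpp>
  #include <boost/phoenix/operator.hpp>
  #include <algorithm>
  #include <vector>

  int main()
  {
      using boost::phoenix::ref;
      using boost::phoenix::arg_names::_1;

      std::vector<int> c;
      c.push_back(1); c.push_back(2); c.push_back(3);

      int t1 = 9;
      // ref(t1) holds t1 by reference, so the accumulation is visible here:
      std::for_each(c.begin(), c.end(), ref(t1) += _1);
      // t1 is now 15
  }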

If you think there are legitimate usage scenarios that are busted by
this const stuff, please let me know.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


[proto] unpack transform (was: : Proto transform with state)

2010-11-18 Thread Eric Niebler
On 11/18/2010 3:58 AM, Thomas Heller wrote:
 Btw, i just finished implementing the unpack feature we were talking about 

Awesome!

 ...
 Short description:
 
 Calling some_callable with the expression unpacked:
 proto::when<proto::_, some_callable(unpack)>
 
 Calling some_callable with an arbitrary sequence unpacked:
 proto::when<proto::_, some_callable(unpack(some_fusion_seq()))>
 
 Calling some_callable with an arbitrary sequence unpacked, and apply a proto 
 transform before:
 proto::when<proto::_, some_callable(unpack(some_fusion_seq(),
 some_transform))>

Perfect.

 Additionally it is possible to have arbitrary parameters before or after 
 unpack. 

Whoa. You said that was going to be impossible or very expensive. I'm
impressed.

 The implementation is located at:
 http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/boost/phoenix/core/unpack.hpp
 
 Just a whole mess of preprocessor generation ... this is not really fast to 
 compile at the moment, PROTO_MAX_ARITY of 5 is fine, everything above will 
 just blow the compiler :(

That's a bit worrying. When I have some time, I'll try to grok your work
and see if there's any way to speed things up.

Thanks for your work on this!

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] : Proto transform with state

2010-11-17 Thread Eric Niebler
On 11/17/2010 2:18 PM, joel falcou wrote:
 On 17/11/10 19:46, Eric Niebler wrote:
 See the attached code. I wish I had a better answer. It sure would be
 nice to generalize this for other times when new state needs to bubble
 up and back down.
 
 Just chiming in. We had the exact same problem in quaff where needed to
 carry on a process ID over the trasnform of parallel statement. If it can
 make you worry less Eric, we ended with the exact same workaround.

There's another issue. Look here:

  // don't evaluate T at runtime, but default-construct
  // an object of T's result type.
  template<typename T>
  struct type_of
    : proto::make<proto::call<T> >
  {};

  struct RenumberFun
    : proto::fold<
          _
        , make_pair(fusion::vector0(), proto::_state)
        , make_pair(
              push_back(
                  first(proto::_state)
                  //--1
                , first(Renumber(_, second(proto::_state)))
              )
              //---2
            , type_of<second(Renumber(_, second(proto::_state)))>
          )
      >
  {};

Notice that the Renumber algorithm needs to be invoked twice with the
same arguments. In this case, we can avoid the runtime overhead of the
second invocation by just using the type information, but that's not
always going to be the case. There doesn't seem to be a way around it,
either.

I think Proto transforms need a let statement for storing intermediate
results. Maybe something like this:

  struct RenumberFun
    : proto::fold<
          _
        , make_pair(fusion::vector0(), proto::_state)
        , let<
              _a( Renumber(_, second(proto::_state)) )
            , make_pair(
                  push_back(
                      first(proto::_state)
                    , first(_a)
                  )
                , type_of<second(_a)>
              )
          >
      >
  {};

I haven't a clue how this would be implemented.

It's fun to think about this stuff, but I wish it actually paid the bills.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


[proto] latest proto article is posted

2010-11-04 Thread Eric Niebler
This one has little directly to do with Proto, but lays the foundation
for a deeper understanding of Proto transforms and why they are
necessarily purely functional.

http://cpp-next.com/archive/2010/11/expressive-c-fun-with-function-composition/

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Using proto with expressions containing matrices from the EIgen library

2010-10-27 Thread Eric Niebler
On 10/27/2010 12:57 PM, Bart Janssens wrote:
 So to rephrase my question:
 let's say I want to evaluate proto expression _cout << A * B * C. To
 do this, I traverse the tree and the following should happen:
   1. B * C gets evaluated first, into an (Eigen) expression template
 that stores a reference to B and C. This is OK, since B and C are not
 temporaries.

Yes.

   2. A*(result of B*C) gets evaluated, which may produce something
 that stores (result of B*C) by reference

Yes.

   3. The final expression result is output using <<, and at that point
 the Eigen expression templates execute, expecting that all the
 referred variables still exist.

Yes. And they do because all the temporary objects that have been
created live until the end of the full expression, which includes the
output expression.

 So how can I make sure the (result of B*C) gets stored somewhere? 

(result of B*C) is a temporary object (X) that holds B and C by
reference. A*(B*C) is a temporary object (Y) that holds the temporary
object (X) by reference. This is all kosher. However, that's not what
your example was doing. Your example was RETURNING the equivalent of
A*(B*C) from a function. NOT GOOD. The temporary object (X) dies at the
end of the full expression in which it was created. That's the return
statement. By the time you try to traverse the expression to evaluate
it, (X) is dead and buried.

 If I
 can do that, then I can use this stored data to construct the
 A*(result of B*C) step, and it's safe even if it is done by reference.

You should be asking yourself why you're trying to return expression
templates from a function. If you really need to do that, then you can't
go returning references to temporary objects.
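
To make the lifetime issue concrete, here is a tiny self-contained sketch (not
Eigen or Proto code; the types are invented purely for illustration) of an
expression template that holds its operands by reference, and what goes wrong
when it is returned from a function:

  struct Vec { double x; };

  // Node types hold their operands by reference, like typical ET libraries.
  struct Prod
  {
      Prod(Vec const &l, Vec const &r) : l(l), r(r) {}
      Vec const &l; Vec const &r;
  };

  struct Sum
  {
      Sum(Vec const &l, Prod const &r) : l(l), r(r) {}
      Vec const &l; Prod const &r;
  };

  Prod operator*(Vec const &l, Vec const &r) { return Prod(l, r); }
  Sum  operator+(Vec const &l, Prod const &r) { return Sum(l, r); }

  Sum bad(Vec const &a, Vec const &b, Vec const &c)
  {
      // The Prod temporary created by b * c dies at the end of this
      // statement, so the returned Sum holds a dangling reference.
      return a + b * c;
  }

  // Inside a single full expression, e.g. evaluate(a + b * c), the same
  // temporaries are still alive, which is why the non-returning case is fine.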

 For more complex expressions, all the steps in the tree would need to
 be stored like this. Once the call to operator finishes, this
 temporary tree can be discarded.
 
 The problem I have here appears to be general to expression template
 matrix libraries, before Eigen we used our own matrix lib, and it
 exhibited the same problem.

Right. This problem has nothing at all to do with Proto.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-25 Thread Eric Niebler
On 10/25/2010 4:44 AM, Thomas Heller wrote:
 Thank you very much! So, we are good to changing the internals of
 phoenix3 to use this extension mechanism?

Yes. But today I'm going to made some changes, based on my experience
playing with this code last night. In particular, it should be possible
to have *only* the named rules be customization points; for all other
rules, actions are attached to rules the old way via proto::when. I
think that if we pass the Actions bundle as the Data parameter to the
transforms, these things become possible.

 Regarding naming 
 I like renaming phoenix::actor to phoenix::lambda. But what about the existing
  phoenix::lambda? Rename it to protect (from Boost.Lambda)?

See Joel's response.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-25 Thread Eric Niebler
On 10/25/2010 4:44 PM, Joel de Guzman wrote:
 On 10/26/2010 4:30 AM, Eric Niebler wrote:
 
 [...]
 
 Voila! The implementation is trivial: one specialization of proto::when
 on the new (incomplete) proto::external type. God, why didn't I think of
 this sooner?

 The naming issue goes away completely. There is no fancy new proto
 transform to be named. Also, proto::named_rule goes away, too.

 [...]

 One potential further simplification would be to give users a nicer way
 to map rules to actions. I'll think about it.
 
 This is awesome, Eric! I thought we had a winner. Now you doubled
 the win! :-) Don't stop! ;-)

I just committed this to trunk. I also added an action_map class that
makes it pretty easy and intuitive to define action parameters. The
previous example becomes:

struct my_grammar
  : proto::or_<
        proto::when< int_terminal, proto::external >
      , proto::when< char_terminal, proto::external >
      , proto::when<
            proto::plus< my_grammar, my_grammar >
          , proto::fold< _, int(), my_grammar >
        >
    >
{};

struct my_actions
  : proto::action_map<
        proto::when<int_terminal, print(proto::_value)>
      , proto::when<char_terminal, print(proto::_value)>
    >
{};

It uses mpl::map under the hood for optimized look-up time on the rule
type. Of course, using action_map makes your actions non-extensible, so
this feature is not interesting for Phoenix. You can still do it the
other way with a nested when template that you can specialize on rules.

I might rename action_map to transform_map or even just
transforms. I think this is the only place in proto where transforms
are referred to as actions. Opinions?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-23 Thread Eric Niebler
On 10/23/2010 10:45 AM, Thomas Heller wrote:
 On Saturday 23 October 2010 19:30:18 Eric Niebler wrote:
 On 10/23/2010 10:12 AM, Eric Niebler wrote:
 I've tweaked both the traversal example you sent around as well as my
 over toy Phoenix. Tell me what you guys think.

 Actually, I think it's better to leave the definition of some_rule
 alone and wrap it in named_rule at the point of use. A bit cleaner.
 See attached.
 
 I like that.
 With that named_rule approach, we have some kind of in code documentation: 
 Look, here that rule is a customization point.

Exactly.

 Why not just rule? Less characters to type.

I almost called it rule, but *everything* in Proto is a rule including
proto::or_ and proto::switch_. What makes these rules special is that
they have a name.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-23 Thread Eric Niebler
On 10/23/2010 5:10 PM, Joel de Guzman wrote:
 On 10/24/2010 2:18 AM, Thomas Heller wrote:
 On Saturday 23 October 2010 19:47:59 Eric Niebler wrote:
 On 10/23/2010 10:45 AM, Thomas Heller wrote:
 Why not just rule? Less characters to type.

 I almost called it rule, but *everything* in Proto is a rule including
 proto::or_ and proto::switch_. What makes these rules special is that
 they have a name.

 True. But you could look at proto::or_ and proto::switch_ or any other
 already exisiting rules as anonymous rules. While rule or named_rule
 explicitly name them.
 
 Well, in parsing land, rules are always named. There's no such thing
 as anonymous rules, AFAIK. 

I think there is, at least in the context we're discussing. For
instance, in Spirit, you might have:

rule r = (this >> that) [action1] | (the >> other) [action2] ;

We're discussing the ability to make action1 and action2 external and
pluggable. To do that, you'd look them up by the rules (this >> that)
and (the >> other). Those don't have names.

Regardless, I feel that since assigning a name is the sole purpose of
thing, that named_rule is the right choice.

Now, what to call the traversal/algorithm/action/on thingy. None of those
feel right. Maybe if I describe in words what it does, someone can come
up with a good name. Given a Proto grammar that has been built with
named rules, and a set of actions that can be indexed with those rules,
it creates a Proto algorithm. The traversal is fixed, the actions can
float. It's called <insert good name here>.

 What's the counterpart of parser in the
 proto world?

The C++ compiler. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Eric Niebler
On 10/21/2010 7:09 PM, Joel de Guzman wrote:
 Check out the doc I sent (Annex A). It's really, to my mind,
 generic languages -- abstraction of rules and templated grammars
 through metanotions and hyper-rules. 

Parameterized rules. Yes, I can understand that much. My understanding
stops when I try to imagine how to build a parser that recognizes a
grammar with parameterized rules.

 I have this strong feeling that
 that's the intent of Thomas and your recent designs. Essentially,
 making the phoenix language a metanotion in itself that can be
 extended post-hoc through generic means.

I don't think that's what Thomas and I are doing. vW-grammars change the
descriptive power of grammars. But we don't need more descriptive
grammars. Thomas and I aren't changing the grammar of Phoenix at all.
We're just plugging in different actions. The grammar is unchanged.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Eric Niebler
On 10/22/2010 12:33 AM, Thomas Heller wrote:
 On Friday 22 October 2010 09:15:47 Eric Niebler wrote:
 On 10/21/2010 7:09 PM, Joel de Guzman wrote:
 Check out the doc I sent (Annex A). It's really, to my mind, 
 generic languages -- abstraction of rules and templated grammars 
 through metanotions and hyper-rules.
 
 Parameterized rules. Yes, I can understand that much. My
 understanding stops when I try to imagine how to build a parser
 that recognizes a grammar with parameterized rules.
 
 And I can't understand how expression templates relate to parsing.

It doesn't in any practical sense, really. No parsing ever happens in
Proto. The C++ compiler parses expressions for us and builds the tree.
Proto grammars are patterns that match trees. (It is in this sense
they're closer to schemata, not grammars that drive parsers.)

They're called grammars in Proto not because they drive the parsing
but because they describe the valid syntax for your embedded language.

 I have this strong feeling that that's the intent of Thomas and
 your recent designs. Essentially, making the phoenix language a
 metanotion in itself that can be extended post-hoc through
 generic means.
 
 I don't think that's what Thomas and I are doing. vW-grammars
 change the descriptive power of grammars. But we don't need more
 descriptive grammars. Thomas and I aren't changing the grammar of
 Phoenix at all. We're just plugging in different actions. The
 grammar is unchanged.
 
 Exactly.
 Though, I think this is the hard part to wrap the head around. We
 have a grammar, and this very same grammar is used to describe
 visitation.

It's for the same reason that grammars are useful for validating
expressions that they are also useful for driving tree traversals:
pattern matching. There's no law that the /same/ grammar be used for
validation and evaluation. In fact, that's often not the case.
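
As a small illustration of that last point -- a sketch using only documented
Proto facilities (proto::matches, proto::when, proto::_value, proto::_default);
the grammar names and values are invented -- one grammar validates and a
different one evaluates:

  #include <boost/proto/proto.hpp>
  #include <boost/mpl/assert.hpp>
  namespace proto = boost::proto;

  // Validation only: ints combined with +.
  struct IntPlus
    : proto::or_<
          proto::terminal<int>
        , proto::plus<IntPlus, IntPlus>
      >
  {};

  // Evaluation: the same shape, but with transforms attached.
  struct EvalIntPlus
    : proto::or_<
          proto::when<proto::terminal<int>, proto::_value>
        , proto::when<
              proto::plus<EvalIntPlus, EvalIntPlus>
            , proto::_default<EvalIntPlus>
          >
      >
  {};

  int main()
  {
      proto::terminal<int>::type i = {1}, j = {2};
      // Pattern matching drives validation ...
      BOOST_MPL_ASSERT((proto::matches<proto::terminal<int>::type, IntPlus>));
      // ... and the same mechanism drives evaluation.
      int result = EvalIntPlus()(i + j); // 3
      return result == 3 ? 0 : 1;
  }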

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Eric Niebler
On 10/22/2010 10:01 AM, Thomas Heller wrote:
 I think this is the simplification of client proto code we searched for. It 
 probably needs some minor polishment though.
snip

Hi Thomas, this looks promising. I'm digging into this now.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-21 Thread Eric Niebler
On 10/20/2010 11:41 PM, Thomas Heller wrote:
 On Thu, Oct 21, 2010 at 7:50 AM, Thomas Heller
 thom.hel...@googlemail.com wrote:
 On Thursday 21 October 2010 05:11:49 Eric Niebler wrote:
 On 10/20/2010 7:49 AM, Thomas Heller wrote:
 snip
 Here it goes:
 namespace detail
 {
 template <
 typename Grammar, typename Visitor, typename IsRule = void>
 struct algorithm_case
 : Grammar
 {};

 Why inherit from Grammar here instead of:
   : proto::when<
         Grammar
       , typename Visitor::template visit<Grammar>
     >

 ?

 Because I wanted to have an escape point. There might be some valid
 usecase, that does not want to dispatch to the Visitor/Actions. This is btw
 the reason i didn't reuse or_, but introduced the rules template. To
 distinguish between: 1) regular proto grammars -- no dispatch 2) the
 rules, which do the dispatch.
 
 Ok ... after rereading your mini phoenix you solve that problem with
 default_actions.
 Very neat as well!

Right. In fact, I don't think it's necessary or desirable to let the
Grammar parameter to algorithm_case be anything that isn't an
instantiation of rules. That is, you have algorithm_case and
algorithm_case_rule. Nuke algorithm_case and rename algorithm_case_rule
to algorithm_case. Also, nuke rule. The variadic rules is all that's
needed. A few orthogonal features are better than lots needless
distracting flexibility.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-21 Thread Eric Niebler
On 10/21/2010 6:41 PM, Joel de Guzman wrote:
 I like it when we are talking on a conceptual level :-). I think part
 of the difficulty is in combining two domains: language/parsing and OOP.

nod

 As much as I do not have any problems with visitation, it's also
 intersecting with the notion of semantic actions. If we add yet one
 more set of parlance: schemas and documents (which is not totally
 unrelated -- schemas are basically just grammars), I'm afraid we'll
 add more confusion.

Thanks, Joel. Yes, let's not mix our metaphors. (But I do think that
XML/Schema shares much with Language/Grammar. And a Proto expression is
quite a lot like an XML DOM in that is has a tree structure, and the
Schema describes the valid trees. But yes, we're talking about DSLs and
compiler-construction toolkits, so I can keep the XML/Schema talk to
myself. :-)

 I'd say stick to only one domain's parlance. Since proto is closer to
 the language/parsing domain, I think we should stick to (semantic)actions,
 rules, grammars etc. 

This is my feeling as well.

 If you want to go meta on parsing, then you might
 get some inspiration on 2-level grammars (inspired by van Wijngaarden
 grammars) with the notion of hyper-rules, etc. This document:
 
  http://www.cl.cam.ac.uk/~mgk25/iso-14977.pdf
 
 gives a better glimpse into 2-level grammars (see Annex A).
 
   Although the notation (also known as a van Wijngaarden grammar,
   or a W-grammar) is more powerful, it is more complicated and,
   as the authors of Algol 68 recognized, “may be difficult
   for the uninitiated reader”.
 
 I'm not really sure how this relates to the current design, but
 I think we should be getting closer to this domain and it deserves
 some notice.

You're not the first to bring up vW-grammars in relation to Proto.
Someone suggested them to implement EDSL type systems. I spent a good
amount of time reading about them, and couldn't get my head around it. My
understanding is that they're powerfully descriptive, but that building
compilers with vW-grammars is very expensive. I don't really know. I
think I'd need to work with a domain expert to make that happen. Any
volunteers? :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [phoenix3] New design proposal

2010-10-19 Thread Eric Niebler
On 10/19/2010 1:33 AM, Thomas Heller wrote:
 On Tue, Oct 19, 2010 at 6:21 AM, Joel de Guzman wrote:
 Can we also focus on one very specific use-case that demonstrates
 the motivation behind the need of such a refactoring and why the
 old(er) design is not sufficient? I'd really want to sync up with
 you guys.
 
 With the old design (the one which is currently in the gsoc svn
 sandbox) I had problems with defining what phoenix expressions really
 are. We had at least two types of expressions. First were the ones we
 reused from proto (plus, multiplies, function and so on), Second were
 these proto::function constructs which had a funcwrapT struct and
 an env placeholder. This env placeholder just wastes a valuable slot
 for potential arguments. The second point why this design is not
 good, is that data and behaviour is not separated. The T in funcwrap
 defines how the phoenix expression will get evaluated.
 
 This design solves this two problems: Data and behaviour are cleanly
 separated. Additionally we end up with only one type of expressions:
 A expression is a structure which has a tag, and a variable list of
 children. You define what what a valid expression is by extending the
 phoenix_algorithm template through specialisation for your tag. The
 Actions parameter is responsible for evaluating the expression. By
 template parametrisation of this parameter we allow users to easily 
 define their own evaluation schemes without worrying about the
 validity of the phoenix expression. This is fixed by the meta grammar
 class.

What Thomas said. We realized that for Phoenix to be extensible at the
lowest level, we'd need to document its intermediate form: the Proto
tree. That way folks have the option to use Proto transforms on it.
(There are higher-level customization points that don't expose Proto,
but I'm talking about real gear-heads here.)

There were ugly things about the intermediate form we wanted to clean up
before we document it. That started the discussion. Then the discussion
turned to, Can a user just change a semantic action here and there
without having to redefine the whole Phoenix grammar in Proto, which is
totally non-trivial? I forget offhand what the use case was, but it
seemed a reasonable thing to want to do in general. So as Thomas says,
the goal is two-fold: (a) a clean-up of the intermediate form ahead of
its documentation, and (b) a way to easily plug in user-defined semantic
actions without changing the grammar.

I think these changes affect the way to define new Phoenix syntactic
constructs, so it's worth doing a before-and-after comparison of the
extensibility mechanisms. Thomas, can you send around such a comparison?
How hard is it to add a new statement, for instance?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Phoenix3 at BoostCon?

2010-10-17 Thread Eric Niebler
On 10/17/2010 8:20 AM, Hartmut Kaiser wrote:
 Eric Niebler wrote:
 IMO, Phoenix3 is one of the most important Boost development over the past
 year. There should unquestionably be a presentation at BoostCon about it.
 I think I'll go, and would at the very least like to help. Is anybody else
 going, and are they interested in collaborating?

 There is obviously lots to talk about. Choosing a direction would be
 tough, but I think it should focus the things that are new in v3 over v2.
 I think an end-user-centric talk would be more valuable than a talk about
 implementation details (despite how much fun it would be to give such a
 talk). So I'm thinking about a talk on AST manipulation a-la Scheme
 macros. There is evidence that there is already excitement about this
 topic.[*] Thoughts?

 [*] http://cplusplus-soup.com/2010/07/23/lisp-macro-capability-in-c/
 
 Sounds like a very valuable idea. Who would be willing to present that?

I would.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-09 Thread Eric Niebler
On 10/8/2010 12:12 AM, Thomas Heller wrote:
 On Thursday 07 October 2010 23:06:24 Eric Niebler wrote:
 On 10/4/2010 1:55 PM, Eric Niebler wrote:
 The idea of being able to specify the transforms separately from the
 grammar is conceptually very appealing. The grammar is the control
 flow, the transform the action. Passing in the transforms to a grammar
 would be like passing a function object to a standard algorithm: a
 very reasonable thing to do. I don't think we've yet found the right
 formulation for it, though. Visitors and tag dispatching are too
 ugly/hard to use.

 I have some ideas. Let me think some.

 Really quickly, what I have been thinking of is something like this:

 template<class Transforms>
 struct MyGrammar
   : proto::or_<
         proto::when< rule1, typename Transforms::tran1 >
       , proto::when< rule2, typename Transforms::tran2 >
       , proto::when< rule3, typename Transforms::tran3 >
     >
 {};
 
 I don't think this is far away from what i proposed.
 Consider the following:
 
 template <typename>
 struct my_grammar
   : proto::or_<
         rule1
       , rule2
       , rule3
     >
 {};
 
 template <typename> struct my_transform;
 
 // corresponding to the tag of expression of rule1

Do you mean expression tag, like proto::tag::plus, or some other more
abstract tag?

 template <> struct my_transform<tag1>
 : // transform
 {};
 
 // corresponding to the tag of expression of rule2
 template <> struct my_transform<tag2>
 : // transform
 {};
 
 // corresponding to the tag of expression of rule3
 template <> struct my_transform<tag3>
 : // transform
 {};
 
 typedef proto::visitor<my_transform, my_grammar>
 algorithm_with_specific_transforms;
 
 In my approach, both the transform and the grammar can be exchanged at will.

I don't know what this can possibly mean. Grammars and transforms are
not substitutable for each other in *any* context.

 What i am trying to say is, both the transforms and the control flow (aka the 
 grammar) intrinsically depend on the tag of the expressions, because the tag 
 is what makes different proto expressions distinguishable.

This is where I disagree. There are many cases where the top-level tag
is insufficient to distinguish between two expressions. That's why Proto
has grammars. Proto::switch_ dispatches on tags, but I consider switch_
to be primarily an optimization technique (although it does have that
nice open-extensibility feature that we're using for Phoenix).
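
A quick sketch of that point, using only documented Proto facilities (the
grammar and the values are invented for illustration): both expressions below
have the same top-level tag, proto::tag::plus, yet only one matches the
grammar.

  #include <boost/proto/proto.hpp>
  #include <cassert>
  namespace proto = boost::proto;

  struct IntPlusInt
    : proto::plus<proto::terminal<int>, proto::terminal<int> >
  {};

  template<typename Expr>
  bool is_int_plus_int(Expr const &)
  {
      return proto::matches<Expr, IntPlusInt>::value;
  }

  int main()
  {
      proto::terminal<int>::type    i = {1};
      proto::terminal<double>::type d = {2.0};

      assert( is_int_plus_int(i + i)); // tag::plus, matches
      assert(!is_int_plus_int(i + d)); // also tag::plus, but does not match
  }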

 This imminent characteristic of a proto expression is what drove Joel Falcou 
 (i am just guessing here) and me (I know that for certain) to this tag based 
 dispatching of transforms and grammars.

Understood. OK, the problem you're trying to solve is:

A) Have an openly extensible grammar.
B) Have an equally extensible set of transforms.
C) Be able to substitute out a whole other (extensible) set of transforms.

Is that correct?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-04 Thread Eric Niebler
On 10/4/2010 12:20 PM, Thomas Heller wrote:
 On Mon, Oct 4, 2010 at 8:53 PM, joel falcou joel.fal...@lri.fr wrote:
 On 04/10/10 20:45, Eric Niebler wrote:

 I'm not opposed to such a thing being in Proto, but I (personally) don't
 feel a strong need. I'd be more willing if I saw a more strongly
 motivating example. I believe Joel Falcou invented something similar.
 Joel, what was your use scenario?


 NT2 ;)

 More specifically, all our transforms are built the same way:
 visit the tree, dispatch on visitor type + tag and act accordingly.
 It was needed for us cause the grammar could NOT have been written by hand
 as we support 200+ functions on nt2 terminals. All our code is something like
 for each node, do Foo with variable Foo depending on the pass and
 duplicating
 the grammar was a no-no.

 We ended up with something like this, except without switch_ (which I like
 btw), so we
 can easily add new transform on the AST from the external view point of user
 who
 didn't have to know much proto. As I only had to define one grammar (the
 visitor) and only specialisation of the
 visitor for some tag, it compiled fast and that was what we wanted.

 Thomas, why not showign the split example ? It's far better than this one
 and I remember I and Eric
 weren't able to write it usign grammar/transform back in the day.
 
 The split example was one of the motivating examples, that is correct,
 though it suffers the exact points Eric is criticizing.
 The split example was possible because i added some new transforms
 which proto currently misses, but i didn't want to shoot out all my
 ammunition just yet :P
 But since you ask for it:
 http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/splitter.cpp

Can you describe in words or add some comments? It's not immediately
obvious what this code does.

 the new thing i added is transform_expr, which works like fusion::transform:
 It creates the expression again, with transformed child nodes (the
 child nodes get transformed according to the transform you specify as
 template parameter

How is that different than what the pass_through transform does?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-04 Thread Eric Niebler
On Mon, Oct 4, 2010 at 12:43 PM, Thomas Heller
thom.hel...@googlemail.comwrote:

 On Mon, Oct 4, 2010 at 8:45 PM, Eric Niebler e...@boostpro.com wrote:
  On 10/4/2010 6:49 AM, Thomas Heller wrote:
  Hi,
 
  I spent some time on thinking how one could make the traversal of a
 proto
  expression tree easier. I know want to share my findings with you all,
 and
  propose a new addition to proto.
 
  My first idea was to decouple grammars from transforms, to follow the
 idea of
  separation of data and algorithm.
 
  Data and algorithm are already separate in Proto. Data is the
  expression to traverse. Algorithm is the transform, driven by
  pattern-matching in the grammar.

 True, but you can only attach one grammar to one transform. Thus for
 every transformation you want to make, you need (in theory) replicate
 your grammar, see further down that this is not the case in real life.



OK, but let's be clear: Proto strictly enforces the separation of data and
algorithm. The grammar is not part of the data. We have:

Expression: data
Grammar: schema
Transforms: algorithm

The idea of being able to specify the transforms separately from the grammar
is conceptually very appealing. The grammar is the control flow, the
transform the action. Passing in the transforms to a grammar would be like
passing a function object to a standard algorithm: a very reasonable thing
to do. I don't think we've yet found the right formulation for it, though.
Visitors and tag dispatching are too ugly/hard to use.

I have some ideas. Let me think some.




  Currently, a transform is only applyable
  when a certain expression is matched, this is good and wanted, though
  sometimes you get yourself into a situation that requires you to
 reformulate
  your grammar, and just exchange the transform part.
 
  True. But in my experience, the grammar part is rarely unchanged.

 Yep, unchanged, and therefore you don't want to write it again.



I said rarely unchanged. That is, you would *have* to write it again in
most cases.




  Let me give you an example of a comma separated list of proto expression
 you
  want to treat like an associative sequence:
 
  opt1 = 1, opt2 = 2, opt3 = 3
 
  A proto grammar matching this expression would look something like this:
 
 
  snip
 
 
  This code works and everybody is happy, right?
  Ok, what will happen if we want to calculate the number of options we
  provided in our expression?
  The answer is that we most likely need to repeat almost everything from
  pack. except the transform part:
 
 struct size :
     or_<
         when<
             comma<pack, spec>
           , mpl::plus<size(_left), size(_right)>()
         >
       , when<
             spec
           , mpl::int_<1>()
         >
     >
 {};
 
  This trivial example doesn't make your point, because the grammar that
  gets repeated (comma<pack, spec> and spec) is such a tiny fraction
  of the algorithm.

 Right, this example does not show the full potential.

  Now think of it if you are having a very complicated grammar, cp the
 whole
  grammar, or even just parts of it no fun.
 
  This is true, but it can be alleviated in most (all?) cases by building
  grammars in stages, giving names to parts for reusability. For instance,
  if comma<pack, spec> were actually some very complicated grammar, you
  would do this:
 
   struct packpart
     : comma<pack, spec>
   {};
 
  And then reuse packpart in both algorithms.

 Right, that also works!

 snip

 
  I'm not opposed to such a thing being in Proto, but I (personally) don't
  feel a strong need. I'd be more willing if I saw a more strongly
  motivating example. I believe Joel Falcou invented something similar.
  Joel, what was your use scenario?
 
  I've also never been wild for proto::switch_, which I think is very
  syntax-heavy. It's needed as an alternative to proto::or_ to bring down
  the instantiation count, but it would be nice if visitor has a usage
  mode that didn't require creating a bunch of (partial) template
  specializations.

 Ok, point taken. The partial specializations is also something i don't
 like. I thought about
 doing the same as you did with contexts (tag as parameter to operator()
 calls),
 but decided against it, cause you would end up with an enormous set of
 operator()
 overloads. With this solution you end up with an enormous set of
 classes, but this
 solution is extendable (even from the outside), both from the grammar
 part and the transform part. Meaning you can add new tags without
 changing th intrinsics of your other grammars/transforms.



Yes, that's very important.




 
  I'll also point out that this solution is FAR more verbose that the
  original which duplicated part of the grammar. I also played with such
  visitors, but every solution I came up with suffered from this same
  verbosity problem.

 Ok, the verbosity is a problem, agreed. I invented this because of
 phoenix, actually. As a use case i

[proto] latest proto post

2010-09-24 Thread Eric Niebler
This one is likely to start a language war. Read the article. Vote it
up. Enjoy.

http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template_errors_suck_and_what/

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com



___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] question about sub-domains

2010-09-22 Thread Eric Niebler
On 9/22/2010 4:35 PM, Christophe Henry wrote:
 Hi,
 
 The subject of the day seems to be sub-domains and it's great because
 I wanted to check that a use I just made of them was correct.
 So the problem is:
 In eUML, states are terminals of my state machine domain (sm_domain).
 The domain's grammar disables as little as possible (address_of):
 
 struct terminal_grammar : proto::not_< proto::address_of<proto::_> >
 {};
 
 // Forward-declare an expression wrapper
 template<typename Expr>
 struct euml_terminal;
 
 struct sm_domain
 : proto::domain< proto::generator<euml_terminal>, terminal_grammar >
 {};
 
 Now, I just implemented serialization for state machines using
 boost::serialization which happens to define its own DSEL, archive <<
 fsm and archive >> fsm.
 As state machines are states in eUML, I created a conflict between
 both DSELs: << and >> for serialization and my own eUML, like init_ <<
 some_state.

Right.

 I solved the conflict by creating a sub-domain for states with a
 stricter grammar:
 
 struct state_grammar :
     proto::and_<
         proto::not_<proto::address_of<proto::_> >,
         proto::not_<proto::shift_right<proto::_, proto::_> >,
         proto::not_<proto::shift_left<proto::_, proto::_> >,
         proto::not_<proto::bitwise_and<proto::_, proto::_> >
     >
 {};
 struct state_domain
     : proto::domain< proto::generator<euml_terminal>, state_grammar, sm_domain >
 {};
 
 As init_ is in the super domain sm_domain, init_ << state is allowed,
 but as states are in the sub-domain, archive << fsm calls the correct
 boost::serialization overload.

Clever!

 However, I have some doubt that what I'm doing really is ok or happens
 to be working by pure luck.
 If it's an acceptable usage, sub-domains helped me out of a tight spot
 and I'm really going to be tempted to push this further ;-)

This is really ok. I'm very glad this feature is finding real-world
uses. Push it further and send us status updates. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] for your review: intro to a series of blog posts about proto and phoenix v3

2010-08-10 Thread Eric Niebler
On 8/10/2010 11:14 AM, Robert Jones wrote:
 Well, as a complete novice to code of this sophistication I
 understood that piece perfectly, as far as it goes. Naturally, as the
 opening piece of a series it raises far more questions than it
 answers.

That's great feedback, thank you.

 It also scares me somewhat. This stuff could mark an absolute
 explosion of complexity in the code your average jobbing programmer
 is expected to get to grips with, and in my experience the technology
 is already slipping from the grasp of most of us! When you get this
 stuff wrong, what do the error messages look like? Boost.Bind &
 Boost.Lambda errors are already enough to send most of us running for
 the hills, 

A great point! (I've held back a whole rant about how long template
error messages are library bugs and should be filed as such. That's a
whole other blog post.) I sort of address this when I say that a good
dsel toolkit would force dsel authors to rigorously define their dsels,
leading to better usage experiences. That's pretty vague, though. I
could be more explicit. But certainly the intention here is that proto
makes it easier for dsel authors to give their users more succinct error
messages.

 and tool support is somewhat lacking as far as I know,
 being pretty much limited to STLFilt.
 
 Maybe I'm just too long in the tooth for this!
 
 Still, great piece, and I look forward to subsequent installments.

Thanks,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] So I heard proto make AST ...

2010-08-10 Thread Eric Niebler
On 8/10/2010 2:52 PM, joel.fal...@lri.fr wrote:
 Eric Niebler wrote:
 A pre-order traversal, pushing each visited node into an mpl vector? How
 about:
snip
 I'm on a tiny mobile, but my idea was to have such algo as proto
 transforms & grammar

Good. Now if you are saying that Proto's existing transforms are too
low-level and that things like pre- and post-order traversals should be
first class Proto citizens ... no argument. Patch? :-)
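
For reference, one way such a pre-order node collector could be sketched with
the transforms Proto already has (my own illustration modeled on the
CountLeaves idiom from the docs, not the code snipped above):

  #include <boost/proto/proto.hpp>
  #include <boost/mpl/vector.hpp>
  #include <boost/mpl/push_back.hpp>
  namespace proto = boost::proto;
  namespace mpl = boost::mpl;

  // Visit the current node first, then fold over its children:
  // the state is an MPL sequence of the node types seen so far.
  struct PreOrder
    : proto::or_<
          proto::when<
              proto::terminal<proto::_>
            , mpl::push_back<proto::_state, proto::_expr>()
          >
        , proto::otherwise<
              proto::fold<
                  proto::_
                , mpl::push_back<proto::_state, proto::_expr>()
                , PreOrder
              >
          >
      >
  {};

  int main()
  {
      proto::terminal<int>::type i = {0};
      // The result object's type is an mpl sequence listing the visited
      // nodes in pre-order: plus, terminal, multiplies, terminal, terminal.
      PreOrder()(i + i * i, mpl::vector<>());
  }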

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Proto v4.1

2010-08-04 Thread Eric Niebler
On 8/4/2010 1:35 PM, joel falcou wrote:
 On 04/08/10 01:00, Eric Niebler wrote:
 Most folks here don't know this, but the version of Proto y'all
 are using is actually v4. (Three times the charm wasn't true for
 Proto.) Anyway, there are so many goodies coming in Boost 1.44 that
 think of it as Proto v4.1.
snip
 
 Would you like me to write some lines on my compile-time performance
 and figures to include somewhere in the doc. I remember you wanted to
 do that at some point.

IIRC, you use some tricks to bring down compile times, right? I think
that would make a very good section for the docs, yes.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


[proto] Proto v4.1

2010-08-03 Thread Eric Niebler
Most folks here don't know this, but the version of Proto y'all are
using is actually v4. (Three times the charm wasn't true for Proto.)
Anyway, there are so many goodies coming in Boost 1.44 that I think of it
as Proto v4.1.

I just posted the release notes for this version to give you guys an
heads-up of the coming changes. There are a few very small breaking
changes that you should take careful note of.

Most of the interesting stuff is in the new features: sub-domains and
per-domain control of as_expr and as_child. Have a look. Let me know if
you have any questions:

  Boost 1.44 release notes:
  http://tinyurl.com/242ln7f

FYI, most of these changes were motivated by the Phoenix3 work. That
sure is one demanding DSEL.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] using BOOST_PROTO_EXTENDS_FUNCTION()

2010-07-27 Thread Eric Niebler
On 7/27/2010 8:48 AM, Christophe Henry wrote:
 Hi,
 
 I have a small issue which puzzles me a little.
 I want to provide my DSEL a syntax very much like the map_list_of example:
 func( some_grammar_expr )..; //extra parens coming later = func(
 some_grammar_expr )( some_other_grammar_expr )
 which I would like to generate me a functor object. 

snip

 template<typename Expr>
 struct my_expr
 {
 BOOST_PROTO_BASIC_EXTENDS(Expr, my_expr<Expr>, my_dom)
 BOOST_PROTO_EXTENDS_FUNCTION()
 /* here comes the fun part */
 
 };

snip

 But a conversion to any
 functor? Hmmm, ok then I define my_expr to be a functor itself:
 template <typename A0>
 ...
 operator()(A0 & a0) const
 {
 /* evaluate my_creation_grammar, forward the call to the functor
 returned by evaluating the grammar */
 }
 
 Interestingly, this sometimes works, but in some cases, the compiler
 complains that I now have a conflict with the operator() provided by
 BOOST_PROTO_EXTENDS_FUNCTION().

Yep, you have created an ambiguity. Sometimes you want func( a )( b ) to
create an expression template, and sometimes you want it to evaluate the
function object created by func( a ). If it's not obvious to the
compiler, it might not be obvious to your users either. I would rethink
your syntax.

If you're really enamored with this syntax, you need to distinguish the
calls somehow. You should have to SFINAE operator() on whether the
argument is a proto expression or not (with enable_if<is_expr<Arg0> >).
That means BOOST_PROTO_EXTENDS_FUNCTION isn't going to work for you and
you need to define them by hand.
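
A minimal sketch of that dispatch (proto::is_expr and enable_if/disable_if are
documented facilities; the dispatcher class and its messages are invented just
to show which overload fires):

  #include <boost/proto/proto.hpp>
  #include <boost/utility/enable_if.hpp>
  #include <iostream>
  namespace proto = boost::proto;

  struct dispatcher
  {
      // Chosen when the argument is itself a Proto expression; in the real
      // DSEL this overload would extend the expression template.
      template<typename A0>
      typename boost::enable_if<proto::is_expr<A0>, void>::type
      operator()(A0 const &) const
      {
          std::cout << "build expression template\n";
      }

      // Chosen for anything else; here the real DSEL would evaluate the
      // functor produced by its creation grammar and forward the call.
      template<typename A0>
      typename boost::disable_if<proto::is_expr<A0>, void>::type
      operator()(A0 const &) const
      {
          std::cout << "evaluate functor\n";
      }
  };

  int main()
  {
      proto::terminal<int>::type i = {42};
      dispatcher d;
      d(i);  // first overload
      d(42); // second overload
  }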

HTH,

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] So I heard proto make AST ...

2010-07-27 Thread Eric Niebler
On 7/27/2010 9:01 AM, joel falcou wrote:
 what about having some Tree related transform/function/meta-function then ?
 
 I'm often thinking : dang, this transform is basically a BFS for a node
 verifying meta-function foo
 and have to rewrite a BFS using default_ and such, which is relatively
 easy.
 
 Now, sometimes it is dang, this code is basically splitting an AST into
 multiple ASTs every time I found a bar tag or I need to do a DFS
 or even worse, I need to make the AST a DAG :E ...
 
 Do people think such stuff (maybe in proto::tree:: or smthg ?) be useful
 additions ?

That would be awesome, Joel!

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto