Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread William E. Kempf

Peter Dimov said:
> William E. Kempf wrote:
>>>> R result() const
>>>> {
>>>> boost::mutex::scoped_lock lock(m_mutex);
>>>> while (!m_result)
>>>> m_cond.wait(lock);
>>>
>>> This changes result()'s semantics to "block until op() finishes";
>>> what happens if nobody calls op()? Or it throws an exception?
>>
>> Changes the semantics?  I thought this was what was expected and
>> illustrated in every example thus far?
>
> No, my example throws an exception when the call hadn't been made. It
> would also throw when the call has been made but did not complete due to
> an exception, had I added a try block in operator() that eats the
> exception.

That would work, but it can complicate (though not prevent) usage patterns
where the future<> is shared across threads.  However, see below.

 future(const future& other)
 {
 mutex::scoped_lock lock(m_mutex);
>>>
>>> I don't think you need a lock here, but I may be missing something.
>>
>> I have to double check the implementation of shared_ptr<>, but I was
>> assuming all it did was to synchronize the ref count manipulation.
>> Reads/writes of the data pointed at needed to be synchronized
>> externally.
>
> Yes, you are right. The lock is necessary to achieve the level of thread
> safety that you implemented. I think that a "thread neutral" (as safe as
> an int) future<> would be acceptable, since this is the thread safety
> level I expect from lightweight CopyConstructible components, but that's
> not a big deal.

I was focusing on making the type sharable across threads in a thread safe
manner.  However, stepping back and thinking on it, I think that may have
been the wrong approach.  Most usage patterns won't involve this sharing,
so paying the cost for synchronization regardless is probably not a good
idea.  I think it would be better to take the "thread neutral" approach. 
Thanks for getting me to think about this issue.
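For illustration, the "thread neutral" copy might look like this minimal sketch, with std::shared_ptr standing in for boost::shared_ptr, the async_call body elided, and all names illustrative:

```cpp
#include <cassert>
#include <memory>

// Illustrative stub; the real async_call holds the function,
// mutex, condition, and optional result discussed in this thread.
template <typename R>
struct async_call { R value; };

template <typename R>
class future
{
public:
    explicit future(R v) : m_pimpl(new async_call<R>{v}) {}

    // "Thread neutral": no mutex here.  Copying is as safe as
    // copying an int -- concurrent use of *distinct* copies is
    // fine; concurrent mutation of *one* object is the user's
    // responsibility.
    future(const future& other) : m_pimpl(other.m_pimpl) {}

    R result() const { return m_pimpl->value; }

private:
    std::shared_ptr<async_call<R> > m_pimpl;
};
```

Copying distinct future objects concurrently stays safe because shared_ptr's reference count is internally synchronized; only concurrent assignment to a single shared object would need external locking.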

-- 
William E. Kempf


___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread Peter Dimov
William E. Kempf wrote:
> Sorry for late reply... had a hard disk problem that prevented
> accessing e-mail.
>
> Peter Dimov said:
>> William E. Kempf wrote:

[...]

>>> void operator()()
>>> {
>>> mutex::scoped_lock lock(m_mutex);
>>> if (m_result)
>>> throw "can't call multiple times";
>>
>> operator() shouldn't throw; it's being used as a thread procedure,
>> and
>> the final verdict on these was to terminate() on exception, I
>> believe.
>> But you may have changed that. :-)
>
> I'm not sure how the terminate() on exception semantics (which haven't
> changed) apply, exactly.

Operator() should not report errors with exceptions because no one could
possibly listen. It's being executed in a separate thread, and if an
exception escapes, terminate() will be called.

In my example, I omitted the try block in operator() since I didn't want to
implement the exception translation logic, but such a try block is
necessary. Thread procedures effectively have a throw() specification.
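A minimal sketch of such a try block, with std::exception_ptr standing in for the exception-translation machinery omitted above (all names are illustrative, not part of any proposed interface):

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <string>

// A thread-procedure wrapper that honors an effective throw()
// specification: any exception from the wrapped call is captured
// instead of escaping (which would call terminate()), and is
// rethrown later from result().
class call_wrapper
{
public:
    explicit call_wrapper(std::string (*f)()) : m_func(f) {}

    // Never lets an exception escape the thread procedure.
    void operator()() /* effectively throw() */
    {
        try {
            m_value = m_func();
        } catch (...) {
            m_error = std::current_exception();
        }
    }

    // Transports the captured exception back to the caller.
    std::string result() const
    {
        if (m_error)
            std::rethrow_exception(m_error);
        return m_value;
    }

private:
    std::string (*m_func)();
    std::string m_value;
    std::exception_ptr m_error;
};
```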

> But I assume you (and probably Dave) would
> prefer this to just be an assert and documented undefined behavior.  I
> have no problems with that.

I would prefer a second call to either complete, overwriting the result, or
to be silently ignored.

>>> R result() const
>>> {
>>> boost::mutex::scoped_lock lock(m_mutex);
>>> while (!m_result)
>>> m_cond.wait(lock);
>>
>> This changes result()'s semantics to "block until op() finishes";
>> what happens if nobody calls op()? Or it throws an exception?
>
> Changes the semantics?  I thought this was what was expected and
> illustrated in every example thus far?

No, my example throws an exception when the call hadn't been made. It would
also throw when the call has been made but did not complete due to an
exception, had I added a try block in operator() that eats the exception.

>>> future(const future& other)
>>> {
>>> mutex::scoped_lock lock(m_mutex);
>>
>> I don't think you need a lock here, but I may be missing something.
>
> I have to double check the implementation of shared_ptr<>, but I was
> assuming all it did was to synchronize the ref count manipulation.
> Reads/writes of the data pointed at needed to be synchronized
> externally.

Yes, you are right. The lock is necessary to achieve the level of thread
safety that you implemented. I think that a "thread neutral" (as safe as an
int) future<> would be acceptable, since this is the thread safety level I
expect from lightweight CopyConstructible components, but that's not a big
deal.




Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread William E. Kempf

David Abrahams said:
> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>
>>> From: David Abrahams <[EMAIL PROTECTED]>
>>> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>>
>>> > It's a tool that allows high-level interfaces to be built. Whether
>>> people will want/need to build their own high-level interfaces is
>>> another story.
>>>
>>> I think it's a valuable question to ask whether /everyone/ will want
>>> to create /the same/ high-level interface ;-).  In other words, as
>>> long as we have a bunch of low-level thread primitives, I prefer to
>>> reduce interface complexity and increase encapsulation unless we can
>>> find a specific use for a medium-level interface.
>>
>> How about this compromise:
>
> 
>
> I don't want either of these to have a separate function (operator() in
> this case) which initiates the call, for reasons described earlier
>
> My suggestion:
>
>   template <typename R>
>   class future
>   {
>   public:
>   template <typename F, typename Executor>
>   future(F const& f, Executor const& e)
>   : m_pimpl(new async_call<R>(f))
>   {
>   (*get())();
>   }
>
>   future(const future& other)
>   {
>   mutex::scoped_lock lock(m_mutex);
>   m_pimpl = other.m_pimpl;
>   }
>
>   future& operator=(const future& other)
>   {
>   mutex::scoped_lock lock(m_mutex);
>   m_pimpl = other.m_pimpl;
>   }
>
>   R result() const
>   {
>   return get()->result();
>   }
>
>   private:
>   shared_ptr<async_call<R> > get() const
>   {
>   mutex::scoped_lock lock(m_mutex);
>   return m_pimpl;
>   }
>
>   shared_ptr<async_call<R> > m_pimpl;
>   mutable mutex m_mutex;
>   };
>
>   // Not convinced that this helps, but...
>   template <typename R>
>   R result(future<R> const& f)
>   {
>   return f.result();
>   }
>
> ...and I don't care whether async_call gets implemented as part of the
> public interface or not, but only because I can't see a compelling
> reason to have it yet.

OK.  Thanks for the input.  I'll go from here.

-- 
William E. Kempf





Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread William E. Kempf

Peter Dimov said:
> William E. Kempf wrote:
>>
>> It's not just the efficiencies that concern me with dynamic
>> allocation.  It's the additional points of failure that occur in this
>> case as well.  For instance, check out the article on embedded coding
>> in the most recent CUJ (sorry, don't have the exact title handy).
>> Embedded folks generally avoid dynamic memory whenever possible, so
>> I'm a little uncomfortable with a solution that mandates that the
>> implementation use dynamic allocation of memory.  At least, if that's
>> the only solution provided.
>
> This allocation isn't much different than the allocation performed by
> pthread_create. An embedded implementation can simply impose an upper
> limit on the total number of async_calls and never malloc.

True enough.

-- 
William E. Kempf





Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread William E. Kempf
Sorry for late reply... had a hard disk problem that prevented accessing
e-mail.

Peter Dimov said:
> William E. Kempf wrote:
>>
>> How about this compromise:
>>
>> template <typename R>
>> class async_call
>> {
>> public:
>> template <typename F>
>> explicit async_call(const F& f)
>> : m_func(f)
>> {
>> }
>>
>> void operator()()
>> {
>> mutex::scoped_lock lock(m_mutex);
>> if (m_result)
>> throw "can't call multiple times";
>
> operator() shouldn't throw; it's being used as a thread procedure, and
> the final verdict on these was to terminate() on exception, I believe.
> But you may have changed that. :-)

I'm not sure how the terminate() on exception semantics (which haven't
changed) apply, exactly.  But I assume you (and probably Dave) would
prefer this to just be an assert and documented undefined behavior.  I
have no problems with that.

>> lock.unlock();
>> R temp(m_func());
>> lock.lock();
>> m_result.reset(temp);
>> m_cond.notify_all();
>> }
>>
>> R result() const
>> {
>> boost::mutex::scoped_lock lock(m_mutex);
>> while (!m_result)
>> m_cond.wait(lock);
>
> This changes result()'s semantics to "block until op() finishes"; what
> happens if nobody calls op()? Or it throws an exception?

Changes the semantics?  I thought this was what was expected and
illustrated in every example thus far?  Failure to call op() is a user
error that will result in deadlock if result() is called.  The only other
alternative is to throw in result() if op() wasn't called, but I don't
think that's appropriate.  The exception question still needs work.  We
probably want result() to throw in this case; the question is what it will
throw.  IOW, do we build the mechanism for propagating exception types
across thread boundaries, or just throw a single generic exception type?
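The first alternative, propagating the exception across the thread boundary so result() rethrows it, might be sketched like this, with modern std:: primitives standing in for the Boost.Threads ones under discussion (names are illustrative):

```cpp
#include <cassert>
#include <condition_variable>
#include <exception>
#include <functional>
#include <mutex>
#include <stdexcept>
#include <thread>

// result() blocks until op() finishes, then either returns the
// value or rethrows the exception op() caught, transporting it
// from the executing thread to the caller.
template <typename R>
class async_call
{
public:
    template <typename F>
    explicit async_call(const F& f) : m_func(f), m_done(false) {}

    void operator()()
    {
        R temp = R();
        std::exception_ptr err;
        try { temp = m_func(); }          // run outside the lock
        catch (...) { err = std::current_exception(); }

        std::lock_guard<std::mutex> lock(m_mutex);
        m_result = temp;
        m_error = err;
        m_done = true;
        m_cond.notify_all();
    }

    R result() const
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        while (!m_done)
            m_cond.wait(lock);
        if (m_error)
            std::rethrow_exception(m_error);  // propagate across threads
        return m_result;
    }

private:
    std::function<R()> m_func;
    R m_result;
    std::exception_ptr m_error;
    bool m_done;
    mutable std::mutex m_mutex;
    mutable std::condition_variable m_cond;
};
```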

>> return *m_result.get();
>> }
>>
>> private:
>> boost::function0<R> m_func;
>> optional<R> m_result;
>> mutable mutex m_mutex;
>> mutable condition m_cond;
>> };
>>
>> template <typename R>
>> class future
>> {
>> public:
>> template <typename F>
>> explicit future(const F& f)
>> : m_pimpl(new async_call<R>(f))
>> {
>> }
>>
>> future(const future& other)
>> {
>> mutex::scoped_lock lock(m_mutex);
>
> I don't think you need a lock here, but I may be missing something.

I have to double check the implementation of shared_ptr<>, but I was
assuming all it did was to synchronize the ref count manipulation. 
Reads/writes of the data pointed at needed to be synchronized externally. 
If that's the case, the assignment here needs to be synchronized in order
to ensure it doesn't interrupt the access in op().
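A small sketch of that distinction, with std::shared_ptr assumed to behave like boost::shared_ptr here: copying the pointer concurrently is safe (the count is synchronized), but the pointee still needs its own mutex:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

// The pointed-to object carries its own mutex; shared_ptr only
// protects its reference count, not the pointee.
struct counter
{
    std::mutex m;     // external synchronization for the pointee
    long value = 0;

    void increment()
    {
        std::lock_guard<std::mutex> lock(m);
        ++value;
    }
};

long run(int threads, int iters)
{
    auto p = std::make_shared<counter>();
    std::vector<std::thread> pool;
    for (int i = 0; i < threads; ++i)
        pool.emplace_back([p, iters] {   // copying p: refcount-safe
            for (int j = 0; j < iters; ++j)
                p->increment();          // pointee: mutex-protected
        });
    for (auto& t : pool) t.join();
    return p->value;
}
```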

>> m_pimpl = other.m_pimpl;
>> }
>>
>> future& operator=(const future& other)
>> {
>> mutex::scoped_lock lock(m_mutex);
>
> -"-
>
>> m_pimpl = other.m_pimpl;
>> }
>>
>> void operator()()
>> {
>> (*get())();
>> }
>>
>> R result() const
>> {
>> return get()->result();
>> }
>>
>> private:
>> shared_ptr<async_call<R> > get() const
>> {
>> mutex::scoped_lock lock(m_mutex);
>
> -"-
>
>> return m_pimpl;
>> }
>>
>> shared_ptr<async_call<R> > m_pimpl;
>> mutable mutex m_mutex;
>> };
>
> As for the "big picture", ask Dave. ;-) I tend towards a refcounted
> async_call.

That's what future<> gives you, while async_call<> requires no dynamic
memory allocation, which is an important consideration for many uses.

-- 
William E. Kempf





Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> I can. On the other hand, I can implement the thread primitives and
> optional, too. The point is that if, while building a high-level interface
> implementation, we discover a useful low-level primitive that offers
> greater expressive power (if less safety), we should consider exposing it,
> too, unless there are strong reasons not to.

And we should also consider NOT exposing it, unless there are strong
reasons to do so ;-).  I'm all for considering everything, but let's
be careful not to generalize this too much, too early.  If we discover
that people really need fine-grained control over the way their
async_calls work, we can go with a policy-based design ;-)

>> 2. Is that much different (or more valuable than)
>>
>>   R f() -> { construct(), R result() }
>>
>>which is what I was suggesting?
>
> I don't know. Post the code. ;-) 

done.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

> No, I was asking anyone interested in responding, and you're certainly
> not wasting your breath.  I think I reached a compromise on these
> issues/questions, and would appreciate your response (it's in another
> post).

Done.

>>  Allocation can be pretty darned efficient when it matters.  See my
>> fast smart pointer allocator that Peter added to shared_ptr for
>> example.
>
> It's not just the efficiencies that concern me with dynamic
> allocation.  It's the additional points of failure that occur in this
> case as well.  For instance, check out the article on embedded coding
> in the most recent CUJ (sorry, don't have the exact title handy).
> Embedded folks generally avoid dynamic memory whenever possible, so
> I'm a little uncomfortable with a solution that mandates that the
> implementation use dynamic allocation of memory.  At least, if that's
> the only solution provided.

"What Peter said."

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

>> From: David Abrahams <[EMAIL PROTECTED]>
>> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>
>> > It's a tool that allows high-level interfaces to be built. Whether
>> > people will want/need to build their own high-level interfaces is
>> > another story.
>> 
>> I think it's a valuable question to ask whether /everyone/ will want
>> to create /the same/ high-level interface ;-).  In other words, as
>> long as we have a bunch of low-level thread primitives, I prefer to
>> reduce interface complexity and increase encapsulation unless we can
>> find a specific use for a medium-level interface.
>
> How about this compromise:



I don't want either of these to have a separate function (operator()
in this case) which initiates the call, for reasons described earlier

My suggestion:

  template <typename R>
  class future
  {
  public:
  template <typename F, typename Executor>
  future(F const& f, Executor const& e)
  : m_pimpl(new async_call<R>(f))
  {
  (*get())();
  }

  future(const future& other)
  {
  mutex::scoped_lock lock(m_mutex);
  m_pimpl = other.m_pimpl;
  }

  future& operator=(const future& other)
  {
  mutex::scoped_lock lock(m_mutex);
  m_pimpl = other.m_pimpl;
  }

  R result() const
  {
  return get()->result();
  }

  private:
  shared_ptr<async_call<R> > get() const
  {
  mutex::scoped_lock lock(m_mutex);
  return m_pimpl;
  }

  shared_ptr<async_call<R> > m_pimpl;
  mutable mutex m_mutex;
  };

  // Not convinced that this helps, but...
  template <typename R>
  R result(future<R> const& f)
  {
  return f.result();
  }

...and I don't care whether async_call gets implemented as part of the
public interface or not, but only because I can't see a compelling
reason to have it yet.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
William E. Kempf wrote:
>
> It's not just the efficiencies that concern me with dynamic
> allocation.  It's the additional points of failure that occur in this
> case as well.  For instance, check out the article on embedded coding
> in the most recent CUJ (sorry, don't have the exact title handy).
> Embedded folks generally avoid dynamic memory whenever possible, so
> I'm a little uncomfortable with a solution that mandates that the
> implementation use dynamic allocation of memory.  At least, if that's
> the only solution provided.

This allocation isn't much different than the allocation performed by
pthread_create. An embedded implementation can simply impose an upper limit
on the total number of async_calls and never malloc.
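Such an embedded strategy might be sketched as a fixed-capacity pool (illustrative names; not part of any proposed interface):

```cpp
#include <cassert>
#include <cstddef>

// Hands out storage slots from a fixed static buffer, imposing an
// upper limit on the number of live objects instead of ever
// falling back to malloc.
template <typename T, std::size_t Capacity>
class fixed_pool
{
public:
    fixed_pool() : m_used(0) {}

    // Returns storage for one T, or nullptr once the limit is hit.
    void* allocate()
    {
        if (m_used >= Capacity)
            return nullptr;              // never calls malloc
        return &m_slots[m_used++];
    }

private:
    // Suitably aligned raw storage for Capacity objects of type T.
    alignas(T) unsigned char m_slots[Capacity][sizeof(T)];
    std::size_t m_used;
};
```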




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
William E. Kempf wrote:
>
> How about this compromise:
>
> template <typename R>
> class async_call
> {
> public:
> template <typename F>
> explicit async_call(const F& f)
> : m_func(f)
> {
> }
>
> void operator()()
> {
> mutex::scoped_lock lock(m_mutex);
> if (m_result)
> throw "can't call multiple times";

operator() shouldn't throw; it's being used as a thread procedure, and the
final verdict on these was to terminate() on exception, I believe. But you
may have changed that. :-)

> lock.unlock();
> R temp(m_func());
> lock.lock();
> m_result.reset(temp);
> m_cond.notify_all();
> }
>
> R result() const
> {
> boost::mutex::scoped_lock lock(m_mutex);
> while (!m_result)
> m_cond.wait(lock);

This changes result()'s semantics to "block until op() finishes"; what
happens if nobody calls op()? Or it throws an exception?

> return *m_result.get();
> }
>
> private:
> boost::function0<R> m_func;
> optional<R> m_result;
> mutable mutex m_mutex;
> mutable condition m_cond;
> };
>
> template <typename R>
> class future
> {
> public:
> template <typename F>
> explicit future(const F& f)
> : m_pimpl(new async_call<R>(f))
> {
> }
>
> future(const future& other)
> {
> mutex::scoped_lock lock(m_mutex);

I don't think you need a lock here, but I may be missing something.

> m_pimpl = other.m_pimpl;
> }
>
> future& operator=(const future& other)
> {
> mutex::scoped_lock lock(m_mutex);

-"-

> m_pimpl = other.m_pimpl;
> }
>
> void operator()()
> {
> (*get())();
> }
>
> R result() const
> {
> return get()->result();
> }
>
> private:
> shared_ptr<async_call<R> > get() const
> {
> mutex::scoped_lock lock(m_mutex);

-"-

> return m_pimpl;
> }
>
> shared_ptr<async_call<R> > m_pimpl;
> mutable mutex m_mutex;
> };

As for the "big picture", ask Dave. ;-) I tend towards a refcounted
async_call.




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>
>> David Abrahams wrote:
>>> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>>>
 Unspecified, but I don't think we can avoid that with the low-level
 interface. High level wrappers that package creation and execution
 would be immune to this problem.
>>>
>>> Is there really a need for a low-level async_call interface?  After
>>> all, the existing threads interface provides all the low-levelness
>>> you can handle.
>>
>> I don't know. But the low-levelness contributed by async_call is
>> unique, and not covered by boost::thread at present. I'm thinking of
>> the R f() -> { void f(), R result() } transformation, with the
>> associated synchronization and (possibly) encapsulated exception
>> transporting/translation from the execution to result().
>
> 1. Are you saying you can't implement that in terms of existing thread
>primitives and optional?

I can. On the other hand, I can implement the thread primitives and
optional, too. The point is that if, while building a high-level interface
implementation, we discover a useful low-level primitive that offers
greater expressive power (if less safety), we should consider exposing it,
too, unless there are strong reasons not to.

> 2. Is that much different (or more valuable than)
>
>   R f() -> { construct(), R result() }
>
>which is what I was suggesting?

I don't know. Post the code. ;-) You can use stub synchronous threads and
thread_pools for illustration:

struct thread
{
template <typename F> explicit thread(F f) { f(); }
};

struct thread_pool
{
template <typename F> void dispatch(F f) { f(); }
};




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf
> From: David Abrahams <[EMAIL PROTECTED]>
> "William E. Kempf" <[EMAIL PROTECTED]> writes:
> >> > I lean towards simple undefined behavior.  How do you feel about it?
> 
> I have a feeling that I'm not being asked here, and maybe even that
> it's wasted breath because you've grown tired of my emphasis on a
> high-level interface, but there's a lot to be said for eliminating
> sources of undefined behavior, especially when it might have to do
> with the ordering of operations in a MT context.

No, I was asking anyone interested in responding, and you're certainly not wasting 
your breath.  I think I reached a compromise on these issues/questions, and would 
appreciate your response (it's in another post).
 
> >> Seems entirely reasonable. I don't think that we can "fix" this. Accessing
> >> an object after it has been destroyed is simply an error; although this is
> >> probably a good argument for making async_call copyable/counted so that the
> >> copy being executed can keep the representation alive.
> >
> > Yes, agreed.  I'm just not sure which approach is more
> > appropriate... to use dynamic allocation and ref-counting in the
> > implementation or to simply require the user to strictly manage the
> > lifetime of the async_call so that there's no issues with a truly
> > asynchronous Executor accessing the return value after it's gone out
> > of scope.
> 
> Allocation can be pretty darned efficient when it matters.  See my
> fast smart pointer allocator that Peter added to shared_ptr for
> example.

It's not just the efficiencies that concern me with dynamic allocation.  It's the 
additional points of failure that occur in this case as well.  For instance, check out 
the article on embedded coding in the most recent CUJ (sorry, don't have the exact 
title handy).  Embedded folks generally avoid dynamic memory whenever possible, so 
I'm a little uncomfortable with a solution that mandates that the implementation use 
dynamic allocation of memory.  At least, if that's the only solution provided.
 


William E. Kempf
[EMAIL PROTECTED]




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf
> From: David Abrahams <[EMAIL PROTECTED]>
> "Peter Dimov" <[EMAIL PROTECTED]> writes:

> > It's a tool that allows high-level interfaces to be built. Whether
> > people will want/need to build their own high-level interfaces is
> > another story.
> 
> I think it's a valuable question to ask whether /everyone/ will want
> to create /the same/ high-level interface ;-).  In other words, as
> long as we have a bunch of low-level thread primitives, I prefer to
> reduce interface complexity and increase encapsulation unless we can
> find a specific use for a medium-level interface.

How about this compromise:

template <typename R>
class async_call
{
public:
template <typename F>
explicit async_call(const F& f)
: m_func(f)
{
}

void operator()()
{
mutex::scoped_lock lock(m_mutex);
if (m_result)
throw "can't call multiple times";
lock.unlock();
R temp(m_func());
lock.lock();
m_result.reset(temp);
m_cond.notify_all();
}

R result() const
{
boost::mutex::scoped_lock lock(m_mutex);
while (!m_result)
m_cond.wait(lock);
return *m_result.get();
}

private:
boost::function0<R> m_func;
optional<R> m_result;
mutable mutex m_mutex;
mutable condition m_cond;
};

template <typename R>
class future
{
public:
template <typename F>
explicit future(const F& f)
: m_pimpl(new async_call<R>(f))
{
}

future(const future& other)
{
mutex::scoped_lock lock(m_mutex);
m_pimpl = other.m_pimpl;
}

future& operator=(const future& other)
{
mutex::scoped_lock lock(m_mutex);
m_pimpl = other.m_pimpl;
}

void operator()()
{
(*get())();
}

R result() const
{
return get()->result();
}

private:
shared_ptr<async_call<R> > get() const
{
mutex::scoped_lock lock(m_mutex);
return m_pimpl;
}

shared_ptr<async_call<R> > m_pimpl;
mutable mutex m_mutex;
};

The async_call gives us the low-level interface with a minimum of overhead, while 
the future gives us a higher-level interface for ease of use.  This higher-level 
interface should even allow the syntax suggested elsewhere in this thread:

template <typename R, typename F, typename E>
future<R> execute(F function, E executor)
{
   future<R> res(function);
   executor(res);
   return res;
}

template <typename F>
void thread_executor(F function)
{
   thread thrd(function);
}

future<double> res = execute<double>(foo, &thread_executor);
double d = res.result();

(And yes, I would offer these interfaces as well.)

Thoughts?


William E. Kempf
[EMAIL PROTECTED]




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

>> > Sort of... I was thinking about the refactoring where you don't hold
>> > the mutex the entire time the function is being called.  But even
>> > with out the refactoring, there is some room for error:
>> >
>> > thread1: call()
>> > thread2: call()
>> > thread1: result() // which result?
>> 
>> Unspecified, but I don't think we can avoid that with the low-level
>> interface. High level wrappers that package creation and execution would be
>> immune to this problem.
>
> Agreed.

I don't know if it was already mentioned, but you can prevent this
problem by having construction initiate the call, and not providing
another way to do it.  Having an interface that eliminates the
possibility of this kind of error and associated semantic confusion is
a huge win.
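A minimal sketch of that design, where construction is the only way to initiate the call: std::thread stands in for an Executor, and all names are illustrative:

```cpp
#include <cassert>
#include <thread>

// The constructor launches the call exactly once; there is no
// separate operator() to invoke, so "which result?" races between
// two call() invocations cannot arise by construction.
template <typename R>
class future_value
{
public:
    template <typename F>
    explicit future_value(F f)
        : m_result(), m_thread([this, f] { m_result = f(); })
    {
    }

    // Blocks until the call finishes; join() synchronizes the
    // spawned thread's write of m_result with our read.
    R result()
    {
        if (m_thread.joinable())
            m_thread.join();
        return m_result;
    }

    ~future_value()
    {
        if (m_thread.joinable())
            m_thread.join();
    }

    future_value(const future_value&) = delete;
    future_value& operator=(const future_value&) = delete;

private:
    R m_result;             // declared first: constructed before the thread starts
    std::thread m_thread;
};
```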

>> >>> Actually, there's another minor issue as well.  The user can call
>> >>> operator() and then let the async_call go out of scope without ever
>> >>> calling result().  Mayhem would ensue.  The two options for dealing
>> >>> with this are to either block in the destructor until the call has
>> >>> completed or to simply document this as undefined behavior.
>> >>
>> >> Yes, good point, I missed that.
>> >
>> > I lean towards simple undefined behavior.  How do you feel about it?

I have a feeling that I'm not being asked here, and maybe even that
it's wasted breath because you've grown tired of my emphasis on a
high-level interface, but there's a lot to be said for eliminating
sources of undefined behavior, especially when it might have to do
with the ordering of operations in a MT context.

>> Seems entirely reasonable. I don't think that we can "fix" this. Accessing
>> an object after it has been destroyed is simply an error; although this is
>> probably a good argument for making async_call copyable/counted so that the
>> copy being executed can keep the representation alive.
>
> Yes, agreed.  I'm just not sure which approach is more
> appropriate... to use dynamic allocation and ref-counting in the
> implementation or to simply require the user to strictly manage the
> lifetime of the async_call so that there's no issues with a truly
> asynchronous Executor accessing the return value after it's gone out
> of scope.

Allocation can be pretty darned efficient when it matters.  See my
fast smart pointer allocator that Peter added to shared_ptr for
example.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> David Abrahams wrote:
>> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>>
>>> Unspecified, but I don't think we can avoid that with the low-level
>>> interface. High level wrappers that package creation and execution
>>> would be immune to this problem.
>>
>> Is there really a need for a low-level async_call interface?  After
>> all, the existing threads interface provides all the low-levelness
>> you can handle.
>
> I don't know. But the low-levelness contributed by async_call is unique, and
> not covered by boost::thread at present. I'm thinking of the R f() -> { void
> f(), R result() } transformation, with the associated synchronization and
> (possibly) encapsulated exception transporting/translation from the
> execution to result(). 

1. Are you saying you can't implement that in terms of existing thread
   primitives and optional?

2. Is that much different (or more valuable than)

  R f() -> { construct(), R result() }

   which is what I was suggesting?

> It's a tool that allows high-level interfaces to be built. Whether
> people will want/need to build their own high-level interfaces is
> another story.

I think it's a valuable question to ask whether /everyone/ will want
to create /the same/ high-level interface ;-).  In other words, as
long as we have a bunch of low-level thread primitives, I prefer to
reduce interface complexity and increase encapsulation unless we can
find a specific use for a medium-level interface.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>
>> Unspecified, but I don't think we can avoid that with the low-level
>> interface. High level wrappers that package creation and execution
>> would be immune to this problem.
>
> Is there really a need for a low-level async_call interface?  After
> all, the existing threads interface provides all the low-levelness
> you can handle.

I don't know. But the low-levelness contributed by async_call is unique, and
not covered by boost::thread at present. I'm thinking of the R f() -> { void
f(), R result() } transformation, with the associated synchronization and
(possibly) encapsulated exception transporting/translation from the
execution to result(). It's a tool that allows high-level interfaces to be
built. Whether people will want/need to build their own high-level
interfaces is another story.




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf

> 
> From: "Peter Dimov" <[EMAIL PROTECTED]>
> Date: 2003/02/10 Mon PM 12:54:28 EST
> To: "Boost mailing list" <[EMAIL PROTECTED]>
> Subject: Re: Re: [boost] Re: A new boost::thread implementation?
> 
> William E. Kempf wrote:
> >> From: "Peter Dimov" <[EMAIL PROTECTED]>
> >>>> // step 2: execute an async_call
> >>>> call();
> >>>
> >>> This example, and the implementation above, are just complex
> >>> synchronous calls.  I assume you really meant for either the
> >>> constructor or this call to also take an Executor concept?
> >>
> >> This line could be
> >>
> >> boost::thread exec( ref(call) );
> >>
> >> or
> >>
> >> boost::thread_pool pool;
> >> pool.dispatch( ref(call) );
> >>
> >> I didn't have a prebuilt Boost.Threads library handy when I wrote
> >> the code (rather quickly) so I used a plain call.
> >
> > No, it couldn't be, because async_call isn't copyable ;).  But I get
> > the point.
> 
> Note that I diligently used ref(call) above. ;-)

Yeah, I noticed that when I received my own response.  Sorry about not reading it more 
carefully.

> >> Since operator() is synchronized, i don't see a race... am I missing
> >> something?
> >
> > Sort of... I was thinking about the refactoring where you don't hold
> > the mutex the entire time the function is being called.  But even
> > with out the refactoring, there is some room for error:
> >
> > thread1: call()
> > thread2: call()
> > thread1: result() // which result?
> 
> Unspecified, but I don't think we can avoid that with the low-level
> interface. High level wrappers that package creation and execution would be
> immune to this problem.

Agreed.
 
> >>> Actually, there's another minor issue as well.  The user can call
> >>> operator() and then let the async_call go out of scope without ever
> >>> calling result().  Mayhem would ensue.  The two options for dealing
> >>> with this are to either block in the destructor until the call has
> >>> completed or to simply document this as undefined behavior.
> >>
> >> Yes, good point, I missed that.
> >
> > I lean towards simple undefined behavior.  How do you feel about it?
> 
> Seems entirely reasonable. I don't think that we can "fix" this. Accessing
> an object after it has been destroyed is simply an error; although this is
> probably a good argument for making async_call copyable/counted so that the
> copy being executed can keep the representation alive.

Yes, agreed.  I'm just not sure which approach is more appropriate... to use dynamic 
allocation and ref-counting in the implementation or to simply require the user to 
strictly manage the lifetime of the async_call so that there are no issues with a truly 
asynchronous Executor accessing the return value after it's gone out of scope.
 


William E. Kempf
[EMAIL PROTECTED]




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> Unspecified, but I don't think we can avoid that with the low-level
> interface. High level wrappers that package creation and execution would be
> immune to this problem.

Is there really a need for a low-level async_call interface?  After
all, the existing threads interface provides all the low-levelness
you can handle.

>>>> Actually, there's another minor issue as well.  The user can call
>>>> operator() and then let the async_call go out of scope without ever
>>>> calling result().  Mayhem would ensue.  The two options for dealing
>>>> with this are to either block in the destructor until the call has
>>>> completed or to simply document this as undefined behavior.
>>>
>>> Yes, good point, I missed that.
>>
>> I lean towards simple undefined behavior.  How do you feel about it?
>
> Seems entirely reasonable. I don't think that we can "fix" this. Accessing
> an object after it has been destroyed is simply an error; although this is
> probably a good argument for making async_call copyable/counted so that the
> copy being executed can keep the representation alive.

Seems like this is also pointing at a high-level interface...

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
William E. Kempf wrote:
>> From: "Peter Dimov" <[EMAIL PROTECTED]>
>>>> // step 2: execute an async_call
>>>> call();
>>>
>>> This example, and the implementation above, are just complex
>>> synchronous calls.  I assume you really meant for either the
>>> constructor or this call to also take an Executor concept?
>>
>> This line could be
>>
>> boost::thread exec( ref(call) );
>>
>> or
>>
>> boost::thread_pool pool;
>> pool.dispatch( ref(call) );
>>
>> I didn't have a prebuilt Boost.Threads library handy when I wrote
>> the code (rather quickly) so I used a plain call.
>
> No, it couldn't be, because async_call isn't copyable ;).  But I get
> the point.

Note that I diligently used ref(call) above. ;-)

>> Since operator() is synchronized, I don't see a race... am I missing
>> something?
>
> Sort of... I was thinking about the refactoring where you don't hold
> the mutex the entire time the function is being called.  But even
> with out the refactoring, there is some room for error:
>
> thread1: call()
> thread2: call()
> thread1: result() // which result?

Unspecified, but I don't think we can avoid that with the low-level
interface. High level wrappers that package creation and execution would be
immune to this problem.

>>> Actually, there's another minor issue as well.  The user can call
>>> operator() and then let the async_call go out of scope with out ever
>>> calling result().  Mayhem would ensue.  The two options for dealing
>>> with this are to either block in the destructor until the call has
>>> completed or to simply document this as undefined behavior.
>>
>> Yes, good point, I missed that.
>
> I lean towards simple undefined behavior.  How do you feel about it?

Seems entirely reasonable. I don't think that we can "fix" this. Accessing
an object after it has been destroyed is simply an error; although this is
probably a good argument for making async_call copyable/counted so that the
copy being executed can keep the representation alive.




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf
> From: "Peter Dimov" <[EMAIL PROTECTED]>
> >> // step 2: execute an async_call
> >> call();
> >
> > This example, and the implementation above, are just complex
> > synchronous calls.  I assume you really meant for either the
> > constructor or this call to also take an Executor concept?
> 
> This line could be
> 
> boost::thread exec( ref(call) );
> 
> or
> 
> boost::thread_pool pool;
> pool.dispatch( ref(call) );
> 
> I didn't have a prebuilt Boost.Threads library handy when I wrote the code
> (rather quickly) so I used a plain call.

No, it couldn't be, because async_call isn't copyable ;).  But I get the point.
 
> >> // step 3: obtain result
> >> try
> >> {
> >> std::cout << call.result() << std::endl;
> >> }
> >> catch(std::exception const & x)
> >> {
> >> std::cout << x.what() << std::endl;
> >> }
> >> }
> >
> > The one "issue" I see with using operator() to invoke the function is
> > the race conditions that can occur if the user calls this multiple
> > times.  I'd consider it a non-issue personally, since you'd have to
> > go out of your way to use this design incorrectly, but thought I
> > should at least point it out.
> 
> Since operator() is synchronized, I don't see a race... am I missing
> something?

Sort of... I was thinking about the refactoring where you don't hold the mutex the 
entire time the function is being called.  But even without the refactoring, there is 
some room for error:

thread1: call()
thread2: call()
thread1: result() // which result?

But like I said, you have to do some pretty obviously stupid things to get a race 
here, and I have no intentions of trying to prevent them in our interface.  I just 
know I'll have to document them.

> > Actually, there's another minor issue as well.  The user can call
> > operator() and then let the async_call go out of scope with out ever
> > calling result().  Mayhem would ensue.  The two options for dealing
> > with this are to either block in the destructor until the call has
> > completed or to simply document this as undefined behavior.
> 
> Yes, good point, I missed that.

I lean towards simple undefined behavior.  How do you feel about it?


William E. Kempf
[EMAIL PROTECTED]




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

>>  From: David Abrahams <[EMAIL PROTECTED]> Date: 2003/02/10
>> Mon AM 11:15:31 EST To: Boost mailing list <[EMAIL PROTECTED]>
>> Subject: Re: [boost] Re: A new boost::thread implementation?
>> 
>> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>> 
>> > Actually, there's another minor issue as well.  The user can call
>> > operator() and then let the async_call go out of scope with out
>> > ever calling result().  Mayhem would ensue.  The two options for
>> > dealing with this are to either block in the destructor until the
>> > call has completed or to simply document this as undefined
>> > behavior.
>>  If you want async_call to be copyable you'd need to have a
>> handle-body idiom anyway, and something associated with the thread
>> could be used to keep the body alive.
>
> True enough.  The code provided by Mr. Dimov wasn't copyable, however.
> Is it important enough to allow copying to be worth the issues
> involved with dynamic memory usage here (i.e. a point of failure in
> the constructor)?  I think it probably is, I just want to see how
> others feel.

I don't have an opinion.  The answer may depend on the relative
expense of acquiring the asynchronous executor resource (thread).

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf

> 
> From: David Abrahams <[EMAIL PROTECTED]>
> Date: 2003/02/10 Mon AM 11:15:31 EST
> To: Boost mailing list <[EMAIL PROTECTED]>
> Subject: Re: [boost] Re: A new boost::thread implementation?
> 
> "William E. Kempf" <[EMAIL PROTECTED]> writes:
> 
> > Actually, there's another minor issue as well.  The user can call
> > operator() and then let the async_call go out of scope with out ever
> > calling result().  Mayhem would ensue.  The two options for dealing
> > with this are to either block in the destructor until the call has
> > completed or to simply document this as undefined behavior.
> 
> If you want async_call to be copyable you'd need to have a handle-body
> idiom anyway, and something associated with the thread could be used
> to keep the body alive.

True enough.  The code provided by Mr. Dimov wasn't copyable, however.  Is allowing 
copying important enough to be worth the issues involved with dynamic memory usage 
here (i.e. a point of failure in the constructor)?  I think it probably is, I just 
want to see how others feel.


William E. Kempf
[EMAIL PROTECTED]




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
William E. Kempf wrote:
>> Here's some compilable code, to put things in perspective:
>
> Thanks.  This helps me, at least.
>
>> #include <boost/bind.hpp>
>> #include <boost/function.hpp>
>> #include <boost/detail/lightweight_mutex.hpp>
>> #include <iostream>
>> #include <stdexcept>
>> #include <new>
>>
>> template<class R> class async_call
>> {
>> public:
>>
>> template<class F> explicit async_call(F f): f_(f), ready_(false)
>> {
>> }
>>
>> void operator()()
>> {
>> mutex_type::scoped_lock lock(mutex_);
>> new(result_) R(f_());
>> ready_ = true;
>> }
>
> Hmm... is this truly portable?  Don't you have to use the same
> techniques as optional<> here... or even just use optional<> in the
> implementation?  Also, though not on subject with discussion of the
> design, it's probably a bad idea to lock the mutex in this way
> (mutexes shouldn't be held for extensive periods of time), though the
> alternative implementation requires 2 copies of R and a condition for
> waiting on the result.

No, this is not portable; it's also incorrect since it doesn't execute ~R
when it should, and I didn't block copy/assignment. Using optional<> would
be a much better choice. This is only a sketch.

I've used the "illustrative" mutex to specify the synchronization semantics,
not implementation.

>> // step 2: execute an async_call
>> call();
>
> This example, and the implementation above, are just complex
> synchronous calls.  I assume you really meant for either the
> constructor or this call to also take an Executor concept?

This line could be

boost::thread exec( ref(call) );

or

boost::thread_pool pool;
pool.dispatch( ref(call) );

I didn't have a prebuilt Boost.Threads library handy when I wrote the code
(rather quickly) so I used a plain call.

>> // step 3: obtain result
>> try
>> {
>> std::cout << call.result() << std::endl;
>> }
>> catch(std::exception const & x)
>> {
>> std::cout << x.what() << std::endl;
>> }
>> }
>
> The one "issue" I see with using operator() to invoke the function is
> the race conditions that can occur if the user calls this multiple
> times.  I'd consider it a non-issue personally, since you'd have to
> go out of your way to use this design incorrectly, but thought I
> should at least point it out.

Since operator() is synchronized, I don't see a race... am I missing
something?

> Actually, there's another minor issue as well.  The user can call
> operator() and then let the async_call go out of scope with out ever
> calling result().  Mayhem would ensue.  The two options for dealing
> with this are to either block in the destructor until the call has
> completed or to simply document this as undefined behavior.

Yes, good point, I missed that.




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

> Actually, there's another minor issue as well.  The user can call
> operator() and then let the async_call go out of scope with out ever
> calling result().  Mayhem would ensue.  The two options for dealing
> with this are to either block in the destructor until the call has
> completed or to simply document this as undefined behavior.

If you want async_call to be copyable you'd need to have a handle-body
idiom anyway, and something associated with the thread could be used
to keep the body alive.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf

> 
> From: "Peter Dimov" <[EMAIL PROTECTED]>
> Date: 2003/02/10 Mon AM 09:08:08 EST
> To: "Boost mailing list" <[EMAIL PROTECTED]>
> Subject: Re: [boost] Re: A new boost::thread implementation?
> 
> David Abrahams wrote:
> > "Peter Dimov" <[EMAIL PROTECTED]> writes:
> >
> >> I am not saying that this is never useful, but syntax should target
> >> the typical scenario, not corner cases.
> >
> > Agreed.  I suppose that you'll say it doesn't target the typical
> > scenario because of its confusability.  I wouldn't argue.  Any other
> > reasons?
> >
> > What about:
> >
> > result(f)
> 
> Unqualified? ;-)

Sorry, I don't understand this response.

> >> It makes a lot more sense (to me) to reserve operator() for the
> >> Runnable concept, since that's what Boost.Threads currently uses.
> >
> > And prevent any other concepts from using operator()?  Surely you
> > don't mean that.
> 
> No, I meant in that particular case.

I tend to agree with this.
 
> We have three concepts: Runnable, Executor (executes Runnables), and
> HasResult (for lack of a better name.) The AsyncCall concept I had in mind
> is both Runnable and HasResult, so it can't use operator() for both.
> x.result() or result(x) are both fine for HasResult.

I tend to prefer x.result() because it doesn't require a friend declaration.  Adding a 
result(x) on top of that is certainly easy, and if we think it's useful enough to be 
provided by the library, I'd vote for providing both forms because of this.

> Here's some compilable code, to put things in perspective:

Thanks.  This helps me, at least.

> #include <boost/bind.hpp>
> #include <boost/function.hpp>
> #include <boost/detail/lightweight_mutex.hpp>
> #include <iostream>
> #include <stdexcept>
> #include <new>
> 
> template<class R> class async_call
> {
> public:
> 
> template<class F> explicit async_call(F f): f_(f), ready_(false)
> {
> }
> 
> void operator()()
> {
> mutex_type::scoped_lock lock(mutex_);
> new(result_) R(f_());
> ready_ = true;
> }

Hmm... is this truly portable?  Don't you have to use the same techniques as 
optional<> here... or even just use optional<> in the implementation?  Also, though 
not on subject with discussion of the design, it's probably a bad idea to lock the 
mutex in this way (mutexes shouldn't be held for extensive periods of time), though 
the alternative implementation requires 2 copies of R and a condition for waiting on 
the result.
 
> R result() const
> {
> mutex_type::scoped_lock lock(mutex_);
> if(ready_) return reinterpret_cast<R const &>(result_);
> throw std::logic_error("async_call not completed");
> }
> 
> private:
> 
> typedef boost::detail::lightweight_mutex mutex_type;
> 
> mutable mutex_type mutex_;
> boost::function<R()> f_;
> char result_[sizeof(R)];
> bool ready_;
> };
> 
> int f(int x)
> {
> return x * 2;
> }
> 
> int main()
> {
> // step 1: construct an async_call
> async_call<int> call( boost::bind(f, 3) );
> 
> // 1a: attempt to obtain result before execution
> try
> {
> std::cout << call.result() << std::endl;
> }
> catch(std::exception const & x)
> {
> std::cout << x.what() << std::endl;
> }
> 
> // step 2: execute an async_call
> call();

This example, and the implementation above, are just complex synchronous calls.  I 
assume you really meant for either the constructor or this call to also take an 
Executor concept?
 
> // step 3: obtain result
> try
> {
> std::cout << call.result() << std::endl;
> }
> catch(std::exception const & x)
> {
> std::cout << x.what() << std::endl;
> }
> }

The one "issue" I see with using operator() to invoke the function is the race 
conditions that can occur if the user calls this multiple times.  I'd consider it a 
non-issue personally, since you'd have to go out of your way to use this design 
incorrectly, but thought I should at least point it out.

Actually, there's another minor issue as well.  The user can call operator() and then 
let the async_call go out of scope without ever calling result().  Mayhem would 
ensue.  The two options for dealing with this are to either block in the destructor 
until the call has completed or to simply document this as undefined behavior.


William E. Kempf
[EMAIL PROTECTED]




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> David Abrahams wrote:
>> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>>
>>> We have three concepts: Runnable, Executor (executes Runnables), and
>>> HasResult (for lack of a better name.) The AsyncCall concept I had
>>> in mind is both Runnable and HasResult, so it can't use operator()
>>> for both. x.result() or result(x) are both fine for HasResult.
>>
>> I see that you separate the initiation of the call from its creation.
>> I was going under the assumption that a call IS a call, i.e. verb, and
>> it starts when you construct it.  An advantage of this arrangement is
>> that you get simple invariants: you don't need to handle the "tried to
>> get the result before initiating the call" case.  Are there
>> disadvantages?
>
> No, the "tried to get the result but the call did not complete" case needs
> to be handled anyway. The call may have been initiated but ended with an
> exception.

Yes, good point.  It's still one fewer case to handle, though.  I
prefer interfaces which disallow erroneous usage, and asking for the
result before the call is even initiated is definitely erroneous.

> This particular arrangement is one way to cleanly separate the
> synchronization/result storage/exception translation from the actual
> execution: async_call doesn't know anything about threads or thread pools.
> Other alternatives are possible, too, but I think that we've reached the
> point where we need actual code and not just made-up syntax.

Agreed.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>
>> We have three concepts: Runnable, Executor (executes Runnables), and
>> HasResult (for lack of a better name.) The AsyncCall concept I had
>> in mind is both Runnable and HasResult, so it can't use operator()
>> for both. x.result() or result(x) are both fine for HasResult.
>
> I see that you separate the initiation of the call from its creation.
> I was going under the assumption that a call IS a call, i.e. verb, and
> it starts when you construct it.  An advantage of this arrangement is
> that you get simple invariants: you don't need to handle the "tried to
> get the result before initiating the call" case.  Are there
> disadvantages?

No, the "tried to get the result but the call did not complete" case needs
to be handled anyway. The call may have been initiated but ended with an
exception.

This particular arrangement is one way to cleanly separate the
synchronization/result storage/exception translation from the actual
execution: async_call doesn't know anything about threads or thread pools.
Other alternatives are possible, too, but I think that we've reached the
point where we need actual code and not just made-up syntax.




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> David Abrahams wrote:
>> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>>
>>> I am not saying that this is never useful, but syntax should target
>>> the typical scenario, not corner cases.
>>
>> Agreed.  I suppose that you'll say it doesn't target the typical
>> scenario because of its confusability.  I wouldn't argue.  Any other
>> reasons?
>>
>> What about:
>>
>> result(f)
>
> Unqualified? ;-)

Good question ;-)

Maybe so, for the generic case.  If you know you're dealing with
Boost.Threads, then qualified should work.

>>> It makes a lot more sense (to me) to reserve operator() for the
>>> Runnable concept, since that's what Boost.Threads currently uses.
>>
>> And prevent any other concepts from using operator()?  Surely you
>> don't mean that.
>
> No, I meant in that particular case.

Okay.

> We have three concepts: Runnable, Executor (executes Runnables), and
> HasResult (for lack of a better name.) The AsyncCall concept I had in mind
> is both Runnable and HasResult, so it can't use operator() for both.
> x.result() or result(x) are both fine for HasResult.

I see that you separate the initiation of the call from its creation.
I was going under the assumption that a call IS a call, i.e. verb, and
it starts when you construct it.  An advantage of this arrangement is
that you get simple invariants: you don't need to handle the "tried to
get the result before initiating the call" case.  Are there
disadvantages?

I'll also observe that tying initiation to creation (RAII ;-)) also
frees up operator() for other uses.  It is of course arguable whether
those other uses are good ones ;-)

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>
>> I am not saying that this is never useful, but syntax should target
>> the typical scenario, not corner cases.
>
> Agreed.  I suppose that you'll say it doesn't target the typical
> scenario because of its confusability.  I wouldn't argue.  Any other
> reasons?
>
> What about:
>
> result(f)

Unqualified? ;-)

>> It makes a lot more sense (to me) to reserve operator() for the
>> Runnable concept, since that's what Boost.Threads currently uses.
>
> And prevent any other concepts from using operator()?  Surely you
> don't mean that.

No, I meant in that particular case.

We have three concepts: Runnable, Executor (executes Runnables), and
HasResult (for lack of a better name.) The AsyncCall concept I had in mind
is both Runnable and HasResult, so it can't use operator() for both.
x.result() or result(x) are both fine for HasResult.

Here's some compilable code, to put things in perspective:

#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/detail/lightweight_mutex.hpp>
#include <iostream>
#include <stdexcept>
#include <new>

template<class R> class async_call
{
public:

template<class F> explicit async_call(F f): f_(f), ready_(false)
{
}

void operator()()
{
mutex_type::scoped_lock lock(mutex_);
new(result_) R(f_());
ready_ = true;
}

R result() const
{
mutex_type::scoped_lock lock(mutex_);
if(ready_) return reinterpret_cast<R const &>(result_);
throw std::logic_error("async_call not completed");
}

private:

typedef boost::detail::lightweight_mutex mutex_type;

mutable mutex_type mutex_;
boost::function<R()> f_;
char result_[sizeof(R)];
bool ready_;
};

int f(int x)
{
return x * 2;
}

int main()
{
// step 1: construct an async_call
async_call<int> call( boost::bind(f, 3) );

// 1a: attempt to obtain result before execution
try
{
std::cout << call.result() << std::endl;
}
catch(std::exception const & x)
{
std::cout << x.what() << std::endl;
}

// step 2: execute an async_call
call();

// step 3: obtain result
try
{
std::cout << call.result() << std::endl;
}
catch(std::exception const & x)
{
std::cout << x.what() << std::endl;
}
}




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> David Abrahams wrote:
>> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>>
>>> David Abrahams said:
>>>> No, my fault.  Syntactically, I should've written this:
>>>>
>>>> async_call f(create_thread(), bind(g,1,2));
>>>> int r = f();
>>>
>>> Do you want it to be "int r = f();" or just "int r = f;" or even
>>> "int r = f.result();" or similar?
>>
>> My other message makes it clear (I hope) that I want what I wrote
>> above.
>
> Are you sure that you really want the above? Or are you speculating that you
> might want something like the above in a hypothetical situation that you
> haven't encountered yet but you might in the future?

:-)

OK, you caught me out.  It's the latter.

>>> The f() syntax makes it look like you're invoking the call at that
>>> point, when in reality the call was invoked in the construction of
>>> f.
>>
>> You're just invoking a function to get the result.  f itself is not
>> the result, so I don't want to use implicit conversion, and f.result()
>> does not let f behave polymorphically in a functional programming
>> context.
>
> f does not behave polymorphically, or rather, its polymorphic behavior isn't
> useful. In the generic contexts I'm familiar with, generators are only used
> to produce a sequence of values.
>
> I am not saying that this is never useful, but syntax should target the
> typical scenario, not corner cases.

Agreed.  I suppose that you'll say it doesn't target the typical
scenario because of its confusability.  I wouldn't argue.  Any other
reasons?

What about:

result(f)

> It makes a lot more sense (to me) to reserve operator() for the Runnable
> concept, since that's what Boost.Threads currently uses.

And prevent any other concepts from using operator()?  Surely you
don't mean that.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>
>> David Abrahams said:
>>> No, my fault.  Syntactically, I should've written this:
>>>
>>> async_call f(create_thread(), bind(g,1,2));
>>> int r = f();
>>
>> Do you want it to be "int r = f();" or just "int r = f;" or even
>> "int r = f.result();" or similar?
>
> My other message makes it clear (I hope) that I want what I wrote
> above.

Are you sure that you really want the above? Or are you speculating that you
might want something like the above in a hypothetical situation that you
haven't encountered yet but you might in the future?

>> The f() syntax makes it look like you're invoking the call at that
>> point, when in reality the call was invoked in the construction of
>> f.
>
> You're just invoking a function to get the result.  f itself is not
> the result, so I don't want to use implicit conversion, and f.result()
> does not let f behave polymorphically in a functional programming
> context.

f does not behave polymorphically, or rather, its polymorphic behavior isn't
useful. In the generic contexts I'm familiar with, generators are only used
to produce a sequence of values.

I am not saying that this is never useful, but syntax should target the
typical scenario, not corner cases.

It makes a lot more sense (to me) to reserve operator() for the Runnable
concept, since that's what Boost.Threads currently uses.




RE: [boost] Re: A new boost::thread implementation?

2003-02-09 Thread Darryl Green
> -Original Message-
> From: William E. Kempf [mailto:[EMAIL PROTECTED]]
>
> > .. borrowing Dave's async_call syntax and Alexander's
> > semantics (which aren't really any different to yours):
> 
> Dave's semantics certainly *were* different from mine (and the Futures
> link posted by Alexander).  In fact, I see Alexander's post as
> strengthening my argument for semantics different from Dave's.  Which
> leaves us with my semantics (mostly), but some room left to argue the
> syntax.

Isn't that what I said? In any case it is what I meant :-)

> 
> > async_call later1(foo, a, b, c);
> > async_call later2(foo, d, e, f);
> > thread_pool pool;
> > pool.dispatch(later1);
> > pool.dispatch(later2);
> > d = later1.result() + later2.result();
> 
> You've not used Dave's semantics, but mine (with the variation of when
> you bind).

Yes - I thought I said I was using Alexander's/your semantics? Anyway
the result is it looks to me like we agree - so I'll go back to
lurking...

> 
> >> More importantly, if you really don't like the syntax of my design, it
> >> at least allows you to *trivially* implement your design.  Sometimes
> >> there's something to be said for being "lower level".
> >
> > Well as a user I'd be *trivially* implementing something to produce the
> > above. Do-able I think (after I have a bit of a look at the innards of
> > bind), but it's hardly trivial.
> 
> The only thing that's not trivial with your syntax changes above is
> dealing with the requisite reference semantics without requiring dynamic
> memory allocation.  But I think I can work around that.  If people prefer
> the early/static binding, I can work on this design.  I think it's a
> little less flexible, but won't argue that point if people prefer it.

I'm not sure that it is flexible enough for everyone - I was just
putting up a "what one user would like" argument. I see that Dave wants
results obtained from/as a function object for a start - and I'm
prepared to believe that that is more important than whether the syntax
is a little odd/inside-out at first glance.

Regards
Darryl Green.



Re: [boost] Re: A new boost::thread implementation?

2003-02-09 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

> David Abrahams said:
>> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>>>> int r = async_call(create_thread(), bind(g, 1, 2));
>>>>
>>>> int r = async(boost::thread(), g, 1, 2);
>>>>              ^^^
>>>>
>>>> int r = async_call(rpc(some_machine), bind(g,1,2));
>>>>
>>>> int r = async_call(my_message_queue, bind(g,1,2));
>>>
>>> None of these make much sense to me.  You're executing the function
>>> object in a supposedly asynchronous manner, but the immediate
>>> assignment to int renders it a synchronous call.  Am I missing
>>> something again?
>>
>> No, my fault.  Syntactically, I should've written this:
>>
>> async_call f(create_thread(), bind(g,1,2));
>> int r = f();
>
> Do you want it to be "int r = f();" or just "int r = f;" or even "int r =
> f.result();" or similar?  

My other message makes it clear (I hope) that I want what I wrote
above.

> The f() syntax makes it look like you're invoking the call at that
> point, when in reality the call was invoked in the construction of
> f.

You're just invoking a function to get the result.  f itself is not
the result, so I don't want to use implicit conversion, and f.result()
does not let f behave polymorphically in a functional programming
context.

>>> Ouch.  A tad harsh.
>>
>> Sorry, not intended.  I was just trying to palm off responsibility for
>> justifying that line on you ;o)
>>
>>> But yes, I do see this concept applying to RPC invocation.  "All"
>>> that's required is the proxy that handles wiring the data and the
>>> appropriate scaffolding to turn this into an Executor.  Obviously this
>>> is a much more strict implementation than thread
>>> creation... you can't just call any function here.
>>
>> I don't get it.  Could you fill in more detail?  For example, where does
>> the proxy come from?
>
> From a lot of hard work? ;)  Or often, it's generated by some IDL-like
> compiler.  Traditional RPCs behave like synchronous calls, but DCOM and
> other such higher level RPC mechanisms offer you alternative designs where
> the initial invocation just wires the input to the server and you retrieve
> the results (which have been asynchronously wired back to the client and
> buffered) at a later point.  In my own work we often deal with RPC
> mechanisms like this, that can take significant amounts of time to
> generate and wire the results back, and the client doesn't need/want to
> block waiting the whole time.

Sure, sure, but all the details about the proxy, etc, are missing. Not
that I think it matters... ;-)

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-09 Thread William E. Kempf

David Abrahams said:
> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>
>> David Abrahams said:
>>> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>>>
 David Abrahams said:
>>> ...and if it can't be default-constructed?
>>
>> That's what boost::optional<> is for ;).
>
> Yeeeh. Once the async_call returns, you have a value, and should be
> able to count on it.  You shouldn't get back an object whose
> invariant allows there to be no value.

 I'm not sure I can interpret the "yeeeh" part. Do you think there's
 still an issue to discuss here?
>>>
>>> Yes.  Yeeeh means I'm uncomfortable with asking people to get
>>> involved with complicated state like "it's there or it isn't there"
>>> for something as conceptually simple as a result returned from
>>> waiting on a thread function to finish.
>>
>> OK, *if* I'm totally understanding you now, I don't think the issue
>> you see actually exists.  The invariant of optional<> may allow
>> there to be no value, but the invariant of a future/async_result
>> doesn't allow this *after the invocation has completed*.  (Actually,
>> there is one case where this might occur, and that's when the
>> invocation throws an exception if we add the async exception
>> functionality that people want here.  But in this case what happens is
>> a call to res.get(), or whatever name we use, will throw an
>> exception.)  The optional<> is just an implementation detail that
>> allows you to not have to use a type that's default-constructible.
>
> It doesn't matter if the semantics of future ensures that the optional
> is always filled in; returning an object whose class invariant is more
> complicated than the actual intended result complicates life for the
> user.  The result of a future leaves it and propagates through other
> parts of the program where the invariant established by future isn't as
> obvious.  Returning an optional where a double is intended is
> akin to returning a vector that has only one element.  Use the
> optional internally to the future if that's what you need to do. The
> user shouldn't have to mess with it.
>
>> If, on the other hand, you're concerned about the uninitialized state
>> prior to invocation... we can't have our cake and eat it too, and since
>> the value is meaningless prior to invocation anyway, I'd rather allow
>> the solution that doesn't require default-constructible types.
>
> I don't care if you have an "uninitialized" optional internally to the
> future.  The point is to encapsulate that mess so the user doesn't have
> to look at it, read its documentation, etc.

I think there's some serious misunderstanding here. I never said the user
would use optional<> directly, I said I'd use it in the implementation of
this "async" concept.
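A minimal sketch of what "optional<> as an implementation detail" means (modern C++, with std::optional standing in for boost::optional; `async_result` and its members are illustrative names, not the proposed API):

```cpp
#include <cassert>
#include <optional>
#include <stdexcept>

// The result slot starts empty, so R need not be default-constructible;
// get() exposes only R -- the user never sees the optional itself.
template <typename R>
class async_result {
public:
    void set(R v) { value_ = v; }   // done when the invocation completes
    R get() const {                 // user-facing accessor
        if (!value_)
            throw std::logic_error("result not ready");
        return *value_;
    }
private:
    std::optional<R> value_;        // hidden implementation detail
};
```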

>> I *think* I understand what you're saying.  So, the interface would be
>> more something like:
>>
>> future<double> f1 = thread_executor(foo, a, b, c);
>> thread_pool pool;
>> future<double> f2 = thread_pool_executor(pool, foo, d, e, f);
>> double d = f1.get() + f2.get();
>>
>> This puts a lot more work on the creation of "executors" (they'll have
>> to obey a more complex interface design than just "anything that can
>> invoke a function object"), but I can see the merits.  Is this
>> actually what you had in mind?
>
> Something very much along those lines.  I would very much prefer to
> access the value of the future with its operator(), because we have lots
> of nice mechanisms that work on function-like objects; to use get you'd
> need to go through mem_fn/bind, and according to Peter we
> wouldn't be able to directly get such a function object from a future
> rvalue.

Hmmm... OK, more pieces are falling into place.  I think the f() syntax
conveys something that's not the case, but I won't argue the utility of
it.

 Only if you have a clearly defined "default case".  Someone doing a
 lot of client/server development might argue with you about thread
 creation being a better default than RPC calling, or even
 thread_pool usage.
>>>
>>> Yes, they certainly might.  Check out the systems that have been
>>> implemented in Erlang with great success and get back to me ;-)
>>
>> Taking a chapter out of Alexander's book?
>
> Ooooh, touché!  ;-)
>
> Actually I think it's only fair to answer speculation about what
> people will like with a reference to real, successful systems.

I'd agree with that, but the link you gave led me down a VERY long
research path, and I'm in a time crunch right now ;).  Maybe a short code
example or a more specific link would have helped.

>> As for the alternate interface you're suggesting here, can you spell it
>> out for me?
>
> I'm not yet wedded to a particular design choice, though I am getting
> closer; I hope you don't think that's a cop-out.  What I'm aiming for is
> a particular set of design requirements:

Not a cop-out, though I wasn't asking for a final design from you.

> 1. Simple syntax, for some definition of "simple".
> 2. A way

Re: [boost] Re: A new boost::thread implementation?

2003-02-09 Thread William E. Kempf

David Abrahams said:
> "William E. Kempf" <[EMAIL PROTECTED]> writes:
> int r = async_call(create_thread(), bind(g, 1, 2));
>
> int r = async(boost::thread(), g, 1, 2);
>>>^^^
>
> int r = async_call(rpc(some_machine), bind(g,1,2));
>
> int r = async_call(my_message_queue, bind(g,1,2));
>>
>> None of these make much sense to me.  You're executing the function
>> object in a supposedly asynchronous manner, but the immediate
>> assignment to int renders it a synchronous call.  Am I missing
>> something again?
>
> No, my fault.  Syntactically, I should've written this:
>
> async_call f(create_thread(), bind(g,1,2));
> int r = f();

Do you want it to be "int r = f();" or just "int r = f;" or even "int r =
f.result();" or similar?  The f() syntax makes it look like you're
invoking the call at that point, when in reality the call was invoked in
the construction of f.

 though. How do you envision this working? A local opaque function
 object can't be RPC'ed.
>>>
>>> It would have to not be opaque ;-)
>>>
>>> Maybe it's a wrapper over Python code that can be transmitted across
>>> the wire.  Anyway, I agree that it's not very likely.  I just put it
>>> in there to satisfy Bill, who seems to have some idea how RPC can be
>>> squeezed into the same mold as thread invocation ;-)
>>
>> Ouch.  A tad harsh.
>
> Sorry, not intended.  I was just trying to palm off responsibility for
> justifying that line on you ;o)
>
>> But yes, I do see this concept applying to RPC invocation.  "All"
>> that's required is the proxy that handles wiring the data and the
>> appropriate scaffolding to turn this into an Executor.  Obviously this
>> is a much more strict implementation than thread
>> creation... you can't just call any function here.
>
> I don't get it.  Could you fill in more detail?  For example, where does
> the proxy come from?

From a lot of hard work? ;)  Or often, it's generated by some IDL-like
compiler.  Traditional RPCs behave like synchronous calls, but DCOM and
other such higher level RPC mechanisms offer you alternative designs where
the initial invocation just wires the input to the server and you retrieve
the results (which have been asynchronously wired back to the client and
buffered) at a later point.  In my own work we often deal with RPC
mechanisms like this, that can take significant amounts of time to
generate and wire the results back, and the client doesn't need/want to
block waiting the whole time.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

> David Abrahams said:
>> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>>
>>> David Abrahams said:
>> ...and if it can't be default-constructed?
>
> That's what boost::optional<> is for ;).

 Yeeeh. Once the async_call returns, you have a value, and should be
 able to count on it.  You shouldn't get back an object whose
 invariant allows there to be no value.
>>>
>>> I'm not sure I can interpret the "yeeeh" part. Do you think there's
>>> still an issue to discuss here?
>>
>> Yes.  Yeeeh means I'm uncomfortable with asking people to get
>> involved with complicated state like "it's there or it isn't there" for
>> something as conceptually simple as a result returned from waiting on a
>> thread function to finish.
>
> OK, *if* I'm totally understanding you now, I don't think the issue
> you see actually exists.  The invariant of optional<> may allow
> there to be no value, but the invariant of a future/async_result
> doesn't allow this *after the invocation has completed*.  (Actually,
> there is one case where this might occur, and that's when the
> invocation throws an exception if we add the async exception
> functionality that people want here.  But in this case what happens
> is a call to res.get(), or whatever name we use, will throw an
> exception.)  The optional<> is just an implementation detail that
> allows you to not have to use a type that's default-constructible.

It doesn't matter if the semantics of future ensures that the optional
is always filled in; returning an object whose class invariant is more
complicated than the actual intended result complicates life for the
user.  The result of a future leaves it and propagates through other
parts of the program where the invariant established by future isn't
as obvious.  Returning an optional where a double is intended
is akin to returning a vector that has only one element.  Use
the optional internally to the future if that's what you need to do.
The user shouldn't have to mess with it.

> If, on the other hand, you're concerned about the uninitialized state
> prior to invocation... we can't have our cake and eat it too, and since the
> value is meaningless prior to invocation anyway, I'd rather allow the
> solution that doesn't require default-constructible types.

I don't care if you have an "uninitialized" optional internally to the
future.  The point is to encapsulate that mess so the user doesn't
have to look at it, read its documentation, etc.

> I *think* I understand what you're saying.  So, the interface would be
> more something like:
>
> future<double> f1 = thread_executor(foo, a, b, c);
> thread_pool pool;
> future<double> f2 = thread_pool_executor(pool, foo, d, e, f);
> double d = f1.get() + f2.get();
>
> This puts a lot more work on the creation of "executors" (they'll have to
> obey a more complex interface design than just "anything that can invoke a
> function object"), but I can see the merits.  Is this actually what you
> had in mind?

Something very much along those lines.  I would very much prefer to
access the value of the future with its operator(), because we have
lots of nice mechanisms that work on function-like objects; to use get
you'd need to go through mem_fn/bind, and according to Peter we
wouldn't be able to directly get such a function object from a future
rvalue.

>>> Only if you have a clearly defined "default case".  Someone doing a
>>> lot of client/server development might argue with you about thread
>>> creation being a better default than RPC calling, or even thread_pool
>>> usage.
>>
>> Yes, they certainly might.  Check out the systems that have been
>> implemented in Erlang with great success and get back to me ;-)
>
> Taking a chapter out of Alexander's book?

Ooooh, touché!  ;-)

Actually I think it's only fair to answer speculation about what
people will like with a reference to real, successful systems.

>>> The suggestion that the binding occur at the time of construction is
>>> going to complicate things for me, because it makes it much more
>>> difficult to handle the reference semantics required here.
>>
>> a. What "required reference semantics?"
>
> The reference semantics required for asynchronous calls ;).
>
> Seriously, though, you have to pass a reference across thread
> boundaries here.  With late binding you have a separate entity
> that's passed as the function object, which can carry the reference
> semantics.  With the (specific) early binding syntax suggested it's
> the future<> itself which is passed, which means it has to be copy
> constructable and each copy must reference the same instance of the
> value.

OK, I understand.  That sounds right.
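The copy-with-shared-result behavior being agreed on here can be sketched like so (modern C++; copies alias a common state block through shared_ptr, with std::optional for the empty slot -- names are illustrative and the locking real code would need is elided):

```cpp
#include <cassert>
#include <memory>
#include <optional>

// Every copy of the future must observe the same result slot, so the
// state lives behind a shared_ptr and copies alias it.
template <typename R>
class future {
public:
    future() : state_(std::make_shared<state>()) {}
    void set(R v) { state_->value = v; }      // done by the executing thread
    bool ready() const { return state_->value.has_value(); }
    R get() const { return *state_->value; }  // real code would wait/lock here
private:
    struct state { std::optional<R> value; };
    std::shared_ptr<state> state_;            // copies share this block
};
```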

> Well, I did say I was open to alternative designs.  Whether said designs
> are high level or low level means little to me, so long as they fulfill
> the requirements.  The suggestions made so far didn't, AFAICT.
>
> As for the alternate interface you're suggesting here, can you

Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>> The line in the middle won't work, actually, but that's another story.
>> boost::thread() creates a "handle" to the current thread. ;-) Score
>> another one for defining concepts before using them.
>
> Oh, I'm not up on the new interface.  How are we going to create a new
> thread?

Nothing new about the interface in this regard.  The default constructor
has always behaved this way.  New threads are created with the overloaded
constructor taking a function object.

BTW: I'm not opposed to changing the semantics or removing the default
constructor in the new design, since it's Copyable and Assignable.  If
there are reasons to do this, we can now switch to a self() method for
accessing the current thread.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

>>> except that the async_call doesn't need to know about Executors.
>>
>> ...and that you don't need a special syntax to get the result out that
>> isn't compatible with functional programming.  If you want to pass a
>> function object off from the above, you need something like:
>>
>> bind(&async_call::result, async_call(bind(g, 1, 2)))
>
> I think the light is dawning for me.  Give me a little bit of time to work
> out a new design taking this into consideration.

...and there was much rejoicing!!

 int r = async_call(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);
>>^^^

 int r = async_call(rpc(some_machine), bind(g,1,2));

 int r = async_call(my_message_queue, bind(g,1,2));
>
> None of these make much sense to me.  You're executing the function
> object in a supposedly asynchronous manner, but the immediate
> assignment to int renders it a synchronous call.  Am I missing
> something again?

No, my fault.  Syntactically, I should've written this:

async_call f(create_thread(), bind(g,1,2));
int r = f();

async_call f(thread_pool(), bind(g,1,2));
int r = f();

int r = async_call(create_thread(), bind(g, 1, 2))();
   ^^
int r = async(boost::thread(), g, 1, 2)();
   ^^
int r = async_call(rpc(some_machine), bind(g,1,2))();
   ^^
int r = async_call(my_message_queue, bind(g,1,2))();
  ^^

But you're also right about the synchronous thing.  The usage isn't
very likely.  More typically, you'd pass an async_call object off to
some other function, which would eventually invoke it to get the
result (potentially blocking if necessary until the result was
available).
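That typical usage might look like this (a sketch; `consume_later` and the std::function signature are made-up stand-ins for "some other function" that receives the async_call):

```cpp
#include <cassert>
#include <functional>

// The caller constructs the async_call and hands it off; the consumer
// invokes it later, blocking at that point only if the result is not
// yet available.
int consume_later(std::function<int()> pending) {
    // ... unrelated work happens here while the call runs ...
    return pending();  // potentially blocks until the result is ready
}
```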

>>> though. How do you envision this working? A local opaque function
>>> object can't be RPC'ed.
>>
>> It would have to not be opaque ;-)
>>
>> Maybe it's a wrapper over Python code that can be transmitted across the
>> wire.  Anyway, I agree that it's not very likely.  I just put it in
>> there to satisfy Bill, who seems to have some idea how RPC can be
>> squeezed into the same mold as thread invocation ;-)
>
> Ouch.  A tad harsh.  

Sorry, not intended.  I was just trying to palm off responsibility for
justifying that line on you ;o)

> But yes, I do see this concept applying to RPC invocation.  "All"
> that's required is the proxy that handles wiring the data and the
> appropriate scaffolding to turn this into an Executor.  Obviously
> this is a much more strict implementation than thread
> creation... you can't just call any function here.

I don't get it.  Could you fill in more detail?  For example, where
does the proxy come from?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

>>> except that the async_call doesn't need to know about Executors.
>>
>> ...and that you don't need a special syntax to get the result out that
>> isn't compatible with functional programming.  If you want to pass a
>> function object off from the above, you need something like:
>>
>> bind(&async_call::result, async_call(bind(g, 1, 2)))
>
> Hmm. Actually I'll need a similar three-liner:
>
> async_call f( bind(g, 1, 2) );
> // execute f using whatever Executor is appropriate
> // pass bind(&async_call::result, &f) to whoever is interested

A little bit worse, you gotta admit.

> Synchronous RPC calls notwithstanding, the point of the async_call is that
> the creation+execution (lines 1-2) are performed well in advance so that the
> background thread has time to run. It doesn't make sense to construct and
> execute an async_call if the very next thing is calling result(). So in a
> typical scenario there will be other code between lines 2 and 3.

I agree, but I'm not sure what difference it makes.

> The bind() can be syntax-sugared, of course. :-)

I think some useful syntax-sugaring is what I'm trying to push for ;-)

 int r = async_call(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);
>>^^^

 int r = async_call(rpc(some_machine), bind(g,1,2));

 int r = async_call(my_message_queue, bind(g,1,2));
>>>
>>> All of these are possible with helper functions (and the <int> could
>>> be made optional.)
>>
>> Yup, note the line in the middle.
>
> The line in the middle won't work, actually, but that's another story.
> boost::thread() creates a "handle" to the current thread. ;-) Score another
> one for defining concepts before using them.

Oh, I'm not up on the new interface.  How are we going to create a new
thread?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
Alexander Terekhov <[EMAIL PROTECTED]> writes:

> But I >>still insist<< ( ;-) ) on a rather simple interface 
> for creating a thread object (that shall kinda-"encapsulate" 
> that "async_call"-thing "representing" the thread routine 
> with its optional parameter(s) and return value... and which 
> can be canceled [no-result-ala-PTHREAD_CANCELED] and timedout-
> on-timedjoin() -- also "no result" [reported by another "magic" 
> pointer value]):
>
> http://groups.google.com/groups?selm=3D5D59A3.E6C97827%40web.de
> (Subject: Re: High level thread design question)
>
> http://groups.google.com/groups?selm=3D613D44.9B67916%40web.de
> (Well, "futures" aside for a moment, how about the following...)
>  ^^ ;-) ;-)

Hmm, good point.  If we are going to get results back in this
straightforward way we probably ought to be thinking about exception
propagation also.  However, that's a *much* harder problem, so I'm
inclined to defer solving it.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>
>> David Abrahams wrote:
>>> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>>>
 With the above AsyncCall:

 async_call f( bind(g, 1, 2) ); // can offer syntactic sugar here
 thread t(f); // or thread(f); for extra cuteness
 int r = f.result();

 The alternative seems to be

 async_call f( bind(g, 1, 2) );
 int r = f.result();

 but now f is tied to boost::thread. A helper

 int r = async(g, 1, 2);
>>>
>>> Another alternative might allow all of the following:
>>>
>>> async_call f(create_thread(), bind(g,1,2));
>>> int r = f();
>>>
>>> async_call f(thread_pool(), bind(g,1,2));
>>> int r = f();
>>
>> Using an undefined-yet Executor concept for the first argument. This
>> is not much different from
>>
>> async_call f( bind(g, 1, 2) );
>> // execute f using whatever Executor is appropriate
>> int r = f.result();
>>
>> except that the async_call doesn't need to know about Executors.
>
> ...and that you don't need a special syntax to get the result out that
> isn't compatible with functional programming.  If you want to pass a
> function object off from the above, you need something like:
>
> bind(&async_call::result, async_call(bind(g, 1, 2)))

I think the light is dawning for me.  Give me a little bit of time to work
out a new design taking this into consideration.

>>> int r = async_call(create_thread(), bind(g, 1, 2));
>>>
>>> int r = async(boost::thread(), g, 1, 2);
>^^^
>>>
>>> int r = async_call(rpc(some_machine), bind(g,1,2));
>>>
>>> int r = async_call(my_message_queue, bind(g,1,2));

None of these make much sense to me.  You're executing the function object
in a supposedly asynchronous manner, but the immediate assignment to int
renders it a synchronous call.  Am I missing something again?

>> All of these are possible with helper functions (and the <int> could
>> be made optional.)
>
> Yup, note the line in the middle.
>
>> I've my doubts about
>>
>>> int r = async_call(rpc(some_machine), bind(g,1,2));
>>
>> though. How do you envision this working? A local opaque function
>> object can't be RPC'ed.
>
> It would have to not be opaque ;-)
>
> Maybe it's a wrapper over Python code that can be transmitted across the
> wire.  Anyway, I agree that it's not very likely.  I just put it in
> there to satisfy Bill, who seems to have some idea how RPC can be
> squeezed into the same mold as thread invocation ;-)

Ouch.  A tad harsh.  But yes, I do see this concept applying to RPC
invocation.  "All" that's required is the proxy that handles wiring the
data and the appropriate scaffolding to turn this into an Executor. 
Obviously this is a much more strict implementation than thread
creation... you can't just call any function here.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>
>> David Abrahams said:
> ...and if it can't be default-constructed?

 That's what boost::optional<> is for ;).
>>>
>>> Yeeeh. Once the async_call returns, you have a value, and should be
>>> able to count on it.  You shouldn't get back an object whose
>>> invariant allows there to be no value.
>>
>> I'm not sure I can interpret the "yeeeh" part. Do you think there's
>> still an issue to discuss here?
>
> Yes.  Yeeeh means I'm uncomfortable with asking people to get
> involved with complicated state like "it's there or it isn't there" for
> something as conceptually simple as a result returned from waiting on a
> thread function to finish.

OK, *if* I'm totally understanding you now, I don't think the issue you
see actually exists.  The invariant of optional<> may allow there to be no
value, but the invariant of a future/async_result doesn't allow this
*after the invocation has completed*.  (Actually, there is one case where
this might occur, and that's when the invocation throws an exception if we
add the async exception functionality that people want here.  But in this
case what happens is a call to res.get(), or whatever name we use, will
throw an exception.)  The optional<> is just an implementation detail that
allows you to not have to use a type that's default-constructible.

If, on the other hand, you're concerned about the uninitialized state
prior to invocation... we can't have our cake and eat it too, and since the
value is meaningless prior to invocation anyway, I'd rather allow the
solution that doesn't require default-constructible types.

>> These are the two obvious (to me) alternatives, but the idea is to
>> leave the call/execute portion orthogonal and open.  Alexander was
>> quite right that this is similar to the "Future" concept in his Java
>> link.  The "Future" holds the storage for the data to be returned and
>> provides the binding mechanism for what actually gets called, while
>> the "Executor" does the actual invocation.  I've modeled the "Future"
>> to use function objects for the binding, so the "Executor" can be any
>> mechanism which can invoke a function object.  This makes thread,
>> thread_pool and other such classes "Executors".
>
> Yes, it is a non-functional (stateful) model which allows efficient
> re-use of result objects when they are large, but complicates simple
> designs that could be better modeled as stateless functional programs.
> When there is an argument for "re-using the result object", C++
> programmers tend to write void functions and pass the "result" by
> reference anyway.  There's a good reason people write functions
> returning non-void, though.  There's no reason to force them to twist
> their invocation model inside out just to achieve parallelism.

I *think* I understand what you're saying.  So, the interface would be
more something like:

future<double> f1 = thread_executor(foo, a, b, c);
thread_pool pool;
future<double> f2 = thread_pool_executor(pool, foo, d, e, f);
double d = f1.get() + f2.get();

This puts a lot more work on the creation of "executors" (they'll have to
obey a more complex interface design than just "anything that can invoke a
function object"), but I can see the merits.  Is this actually what you
had in mind?
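One way such an executor helper could be put together (a modern-C++ sketch using std::promise and std::thread as stand-ins for whatever Boost.Threads machinery would actually be used; fixed to int results for brevity):

```cpp
#include <cassert>
#include <future>
#include <thread>
#include <utility>

// Launches f(args...) on a new thread and hands back a future; the
// promise is the channel that wires the result back to the caller.
template <typename F, typename... Args>
std::future<int> thread_executor(F f, Args... args) {
    std::promise<int> p;
    std::future<int> result = p.get_future();
    std::thread([p = std::move(p), f, args...]() mutable {
        p.set_value(f(args...));  // fulfill the future
    }).detach();
    return result;
}
```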

 And there's other examples as well, such as RPC mechanisms.
>>>
>>> True.
>>>
 And personally, I find passing such a "creation parameter" to be
 turning the design inside out.
>>>
>>> A bit, yes.
>
> It turns _your_ design inside out, which might not be a bad thing for
> quite a few use cases ;-)

We're obviously not thinking of the same interface choices here.

 It might make things a little simpler for the default case, but it
 complicates usage for all the other cases.  With the design I
 presented every usage is treated the same.
>>>
>>> There's a lot to be said for making "the default case" very easy.
>>
>> Only if you have a clearly defined "default case".  Someone doing a
>> lot of client/server development might argue with you about thread
>> creation being a better default than RPC calling, or even thread_pool
>> usage.
>
> Yes, they certainly might.  Check out the systems that have been
> implemented in Erlang with great success and get back to me ;-)

Taking a chapter out of Alexander's book?

 More importantly, if you really don't like the syntax of my design,
 it at least allows you to *trivially* implement your design.
>>>
>>> I doubt most users regard anything involving typesafe varargs as
>>> "trivial to implement."
>>
>> Well, I'm not claiming to support variadic parameters here.  I'm only
>> talking about supporting a 0..N for some fixed N interface.
>
> That's what I mean by "typesafe varargs"; it's the best we can do in
> C++98/02.
>
>>  And with Boost.Bind already available, that makes other such
>> interfaces "trivial to implement".  At least usually.
>
> For an expert in library design familiar wi

Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread Peter Dimov
David Abrahams wrote:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>
>> Using an undefined-yet Executor concept for the first argument. This
>> is not much different from
>>
>> async_call f( bind(g, 1, 2) );
>> // execute f using whatever Executor is appropriate
>> int r = f.result();
>>
>> except that the async_call doesn't need to know about Executors.
>
> ...and that you don't need a special syntax to get the result out that
> isn't compatible with functional programming.  If you want to pass a
> function object off from the above, you need something like:
>
> bind(&async_call::result, async_call(bind(g, 1, 2)))

Hmm. Actually I'll need a similar three-liner:

async_call f( bind(g, 1, 2) );
// execute f using whatever Executor is appropriate
// pass bind(&async_call::result, &f) to whoever is interested

Synchronous RPC calls notwithstanding, the point of the async_call is that
the creation+execution (lines 1-2) are performed well in advance so that the
background thread has time to run. It doesn't make sense to construct and
execute an async_call if the very next thing is calling result(). So in a
typical scenario there will be other code between lines 2 and 3.
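The timeline Peter describes, sketched with std::async standing in for the async_call + Executor pair (illustrative only):

```cpp
#include <cassert>
#include <future>

// Lines 1-2: create and launch well in advance; line 3: collect the result.
int pipeline() {
    auto f = std::async(std::launch::async, [] { return 6 * 7; });
    int other = 0;
    for (int i = 0; i < 1000; ++i)  // "other code between lines 2 and 3"
        other += i;
    (void)other;
    return f.get();  // waits here only if the call is still running
}
```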

The bind() can be syntax-sugared, of course. :-)

>>> int r = async_call(create_thread(), bind(g, 1, 2));
>>>
>>> int r = async(boost::thread(), g, 1, 2);
>^^^
>>>
>>> int r = async_call(rpc(some_machine), bind(g,1,2));
>>>
>>> int r = async_call(my_message_queue, bind(g,1,2));
>>
>> All of these are possible with helper functions (and the <int> could
>> be made optional.)
>
> Yup, note the line in the middle.

The line in the middle won't work, actually, but that's another story.
boost::thread() creates a "handle" to the current thread. ;-) Score another
one for defining concepts before using them.




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> David Abrahams wrote:
>> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>>
>>> With the above AsyncCall:
>>>
>>> async_call f( bind(g, 1, 2) ); // can offer syntactic sugar here
>>> thread t(f); // or thread(f); for extra cuteness
>>> int r = f.result();
>>>
>>> The alternative seems to be
>>>
>>> async_call f( bind(g, 1, 2) );
>>> int r = f.result();
>>>
>>> but now f is tied to boost::thread. A helper
>>>
>>> int r = async(g, 1, 2);
>>
>> Another alternative might allow all of the following:
>>
>> async_call f(create_thread(), bind(g,1,2));
>> int r = f();
>>
>> async_call f(thread_pool(), bind(g,1,2));
>> int r = f();
>
> Using an undefined-yet Executor concept for the first argument. This is not
> much different from
>
> async_call f( bind(g, 1, 2) );
> // execute f using whatever Executor is appropriate
> int r = f.result();
>
> except that the async_call doesn't need to know about Executors.

...and that you don't need a special syntax to get the result out that
isn't compatible with functional programming.  If you want to pass a
function object off from the above, you need something like:

bind(&async_call::result, async_call(bind(g, 1, 2)))


>> int r = async_call(create_thread(), bind(g, 1, 2));
>>
>> int r = async(boost::thread(), g, 1, 2);
   ^^^
>>
>> int r = async_call(rpc(some_machine), bind(g,1,2));
>>
>> int r = async_call(my_message_queue, bind(g,1,2));
>
> All of these are possible with helper functions (and the  could be made
> optional.) 

Yup, note the line in the middle.

> I've my doubts about
>
>> int r = async_call(rpc(some_machine), bind(g,1,2));
>
> though. How do you envision this working? A local opaque function object
> can't be RPC'ed.

It would have to not be opaque ;-)

Maybe it's a wrapper over Python code that can be transmitted across
the wire.  Anyway, I agree that it's not very likely.  I just put it
in there to satisfy Bill, who seems to have some idea how RPC can be
squeezed into the same mold as thread invocation ;-)

> int r = rpc_call("g", 1, 2);
>
> looks much easier to implement.

Agreed.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread Peter Dimov
David Abrahams wrote:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
>
>> With the above AsyncCall:
>>
>> async_call f( bind(g, 1, 2) ); // can offer syntactic sugar here
>> thread t(f); // or thread(f); for extra cuteness
>> int r = f.result();
>>
>> The alternative seems to be
>>
>> async_call f( bind(g, 1, 2) );
>> int r = f.result();
>>
>> but now f is tied to boost::thread. A helper
>>
>> int r = async(g, 1, 2);
>
> Another alternative might allow all of the following:
>
> async_call f(create_thread(), bind(g,1,2));
> int r = f();
>
> async_call f(thread_pool(), bind(g,1,2));
> int r = f();

Using an as-yet-undefined Executor concept for the first argument. This is not
much different from

async_call f( bind(g, 1, 2) );
// execute f using whatever Executor is appropriate
int r = f.result();

except that the async_call doesn't need to know about Executors.

> int r = async_call(create_thread(), bind(g, 1, 2));
>
> int r = async(boost::thread(), g, 1, 2);
>
> int r = async_call(rpc(some_machine), bind(g,1,2));
>
> int r = async_call(my_message_queue, bind(g,1,2));

All of these are possible with helper functions (and the  could be made
optional.) I've my doubts about

> int r = async_call(rpc(some_machine), bind(g,1,2));

though. How do you envision this working? A local opaque function object
can't be RPC'ed.

int r = rpc_call("g", 1, 2);

looks much easier to implement.




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> David Abrahams wrote:
>
>> That's the general idea.  Of course we can haggle over the syntactic
>> details, but the main question is whether you can get a return value
>> from invoking a thread function or whether you have to declare some
>> "global" state and ask the thread function to modify it.
>
> With the above AsyncCall:
>
> async_call f( bind(g, 1, 2) ); // can offer syntactic sugar here
> thread t(f); // or thread(f); for extra cuteness
> int r = f.result();
>
> The alternative seems to be
>
> async_call f( bind(g, 1, 2) );
> int r = f.result();
>
> but now f is tied to boost::thread. A helper
>
> int r = async(g, 1, 2);

Another alternative might allow all of the following:

async_call f(create_thread(), bind(g,1,2));
int r = f();

async_call f(thread_pool(), bind(g,1,2));
int r = f();

int r = async_call(create_thread(), bind(g, 1, 2));

int r = async(boost::thread(), g, 1, 2);

int r = async_call(rpc(some_machine), bind(g,1,2));

int r = async_call(my_message_queue, bind(g,1,2));

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread Peter Dimov
David Abrahams wrote:
> "Peter Dimov" <[EMAIL PROTECTED]> writes:
> 
>> David Abrahams wrote:
>> 
>> [...]
>> 
>>> Well, I don't really feel like arguing about this much longer.
>> 
>> I'd love to contribute to this discussion but there's no firm ground
>> to stand on. What _are_ the concepts being discussed? I think I see
>> 
>> AsyncCall
>> 
>>   AsyncCall(function f);
>> 
>>   void operator()();
>> 
>> // effects: f();
>> 
>>   R result() const;
>> 
>> // if operator()() hasn't been invoked, throw;
>> // if operator()() is still executing, block;
>> // otherwise, return the value returned by f().
>> 
>> but I'm not sure.
> 
> That's the general idea.  Of course we can haggle over the syntactic
> details, but the main question is whether you can get a return value
> from invoking a thread function or whether you have to declare some
> "global" state and ask the thread function to modify it.

With the above AsyncCall:

async_call f( bind(g, 1, 2) ); // can offer syntactic sugar here
thread t(f); // or thread(f); for extra cuteness
int r = f.result();

The alternative seems to be

async_call f( bind(g, 1, 2) );
int r = f.result();

but now f is tied to boost::thread. A helper

int r = async(g, 1, 2);

seems possible with either approach.




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
"Peter Dimov" <[EMAIL PROTECTED]> writes:

> David Abrahams wrote:
>
> [...]
>
>> Well, I don't really feel like arguing about this much longer.
>
> I'd love to contribute to this discussion but there's no firm ground to
> stand on. What _are_ the concepts being discussed? I think I see
>
> AsyncCall
>
>   AsyncCall(function f);
>
>   void operator()();
>
> // effects: f();
>
>   R result() const;
>
> // if operator()() hasn't been invoked, throw;
> // if operator()() is still executing, block;
> // otherwise, return the value returned by f().
>
> but I'm not sure.

That's the general idea.  Of course we can haggle over the syntactic
details, but the main question is whether you can get a return value
from invoking a thread function or whether you have to declare some
"global" state and ask the thread function to modify it.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread Peter Dimov
David Abrahams wrote:

[...]

> Well, I don't really feel like arguing about this much longer.

I'd love to contribute to this discussion but there's no firm ground to
stand on. What _are_ the concepts being discussed? I think I see

AsyncCall

  AsyncCall(function f);

  void operator()();

// effects: f();

  R result() const;

// if operator()() hasn't been invoked, throw;
// if operator()() is still executing, block;
// otherwise, return the value returned by f().

but I'm not sure.




Re: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

> David Abrahams said:
 ...and if it can't be default-constructed?
>>>
>>> That's what boost::optional<> is for ;).
>>
>> Yeeeh. Once the async_call returns, you have a value, and should be able
>> to count on it.  You shouldn't get back an object whose invariant allows
>> there to be no value.
>
> I'm not sure I can interpret the "yeeeh" part. Do you think there's still
> an issue to discuss here?

Yes.  "Yeeeh" means I'm uncomfortable with asking people to get
involved with complicated state like "it's there or it isn't there"
for something as conceptually simple as a result returned from waiting
on a thread function to finish.

>>> It's not "thread-creation" in this case.  You don't create threads
>>> when you use a thread_pool.
>>
>> OK, "thread acquisition", then.
>
> No, not even that.  An RPC mechanism, for instance, isn't acquiring
> a thread.  

Yes, but we don't have an RPC mechanism in Boost.  It's important to
be able to use a generic interface that will handle RPC, but for
common tasks where nobody's interested in RPC it's important to be
able to do something reasonably convenient and uncomplicated.

Anyway, if you want to stretch this to cover RPC it's easy enough:
just call it "acquisition of an executor resource."

> And a message queue implementation wouldn't be acquiring
> a thread either.  

But it _would_ be acquiring an execution resource.

> These are the two obvious (to me) alternatives, but the idea is to
> leave the call/execute portion orthogonal and open.  Alexander was
> quite right that this is similar to the "Future" concept in his Java
> link.  The "Future" holds the storage for the data to be returned
> and provides the binding mechanism for what actually gets called,
> while the "Executor" does the actual invocation.  I've modeled the
> "Future" to use function objects for the binding, so the "Executor"
> can be any mechanism which can invoke a function object.  This makes
> thread, thread_pool and other such classes "Executors".

Yes, it is a non-functional (stateful) model which allows efficient
re-use of result objects when they are large, but complicates simple
designs that could be better modeled as stateless functional programs.
When there is an argument for "re-using the result object", C++
programmers tend to write void functions and pass the "result" by
reference anyway.  There's a good reason people write functions
returning non-void, though.  There's no reason to force them to twist
their invocation model inside out just to achieve parallelism.  

If you were designing a language from the ground up to support
parallelism, would you encourage or rule out a functional programming
model?  I bet you can guess what the designers of Erlang
(http://www.erlang.org/) chose to do ;o)

>>> And there's other examples as well, such as RPC mechanisms.
>>
>> True.
>>
>>> And personally, I find passing such a "creation parameter" to be
>>> turning the design inside out.
>>
>> A bit, yes.

It turns _your_ design inside out, which might not be a bad thing for
quite a few use cases ;-)

>>> It might make things a little simpler for the default case, but it
>>> complicates usage for all the other cases.  With the design I
>>> presented every usage is treated the same.
>>
>> There's a lot to be said for making "the default case" very easy.
>
> Only if you have a clearly defined "default case".  Someone doing a lot of
> client/server development might argue with you about thread creation being
> a better default than RPC calling, or even thread_pool usage.

Yes, they certainly might.  Check out the systems that have been
implemented in Erlang with great success and get back to me ;-)

>>> More importantly, if you really don't like the syntax of my design, it
>>> at least allows you to *trivially* implement your design.
>>
>> I doubt most users regard anything involving typesafe varargs as
>> "trivial to implement."
>
> Well, I'm not claiming to support variadic parameters here.  I'm only
> talking about supporting a 0..N for some fixed N interface. 

That's what I mean by "typesafe varargs"; it's the best we can do in
C++98/02.

>  And with Boost.Bind already available, that makes other such
> interfaces "trivial to implement".  At least usually.  

For an expert in library design familiar with the workings of boost
idioms like ref(x), yes.  For someone who just wants to accomplish a
task using threading, no.

> The suggestion that the binding occur at the time of construction is
> going to complicate things for me, because it makes it much more
> difficult to handle the reference semantics required here.

a. What "required reference semantics?"

b. As a user, I don't really care if I'm making it hard for the
   library provider, (within reason).  It's the library provider's job
   to make my life easier.

>>> Sometimes there's something to be said for being "lower level".
>>
>> Sometimes.  I think users have complained all along that

Re: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread William E. Kempf

David Abrahams said:
>>> ...and if it can't be default-constructed?
>>
>> That's what boost::optional<> is for ;).
>
> Yeeeh. Once the async_call returns, you have a value, and should be able
> to count on it.  You shouldn't get back an object whose invariant allows
> there to be no value.

I'm not sure I can interpret the "yeeeh" part. Do you think there's still
an issue to discuss here?

 Second, and this is more important, you've bound this concept to
 boost::thread explicitly.  With the fully separated concerns of my
 proposal, async_result can be used with other asynchronous call
 mechanisms, such as the coming boost::thread_pool.

async_result res1, res2;
>
> no fair - I'm calling it async_call now ;-)
>
thread_pool pool;
pool.dispatch(bind(res1.call(foo), a, b, c));
pool.dispatch(bind(res2.call(foo), d, e, f));
d = res1.value() + res2.value();
>>>
>>> This one is important.  However, there are other ways to deal with
>>> this.
>>>  An async_call object could take an optional thread-creation
>>> parameter,
>>> for example.
>>
>> It's not "thread-creation" in this case.  You don't create threads
>> when you use a thread_pool.
>
> OK, "thread acquisition", then.

No, not even that.  An RPC mechanism, for instance, isn't acquiring a
thread.  And a message queue implementation wouldn't be acquiring a thread
either.  These are the two obvious (to me) alternatives, but the idea is
to leave the call/execute portion orthogonal and open.  Alexander was
quite right that this is similar to the "Future" concept in his Java link.
 The "Future" holds the storage for the data to be returned and provides
the binding mechanism for what actually gets called, while the "Executor"
does the actual invocation.  I've modeled the "Future" to use function
objects for the binding, so the "Executor" can be any mechanism which can
invoke a function object.  This makes thread, thread_pool and other such
classes "Executors".

>> And there's other examples as well, such as RPC mechanisms.
>
> True.
>
>> And personally, I find passing such a "creation parameter" to be
>> turning the design inside out.
>
> A bit, yes.
>
>> It might make things a little simpler for the default case, but it
>> complicates usage for all the other cases.  With the design I
>> presented every usage is treated the same.
>
> There's a lot to be said for making "the default case" very easy.

Only if you have a clearly defined "default case".  Someone doing a lot of
client/server development might argue with you about thread creation being
a better default than RPC calling, or even thread_pool usage.

>> More importantly, if you really don't like the syntax of my design, it
>> at least allows you to *trivially* implement your design.
>
> I doubt most users regard anything involving typesafe varargs as
> "trivial to implement."

Well, I'm not claiming to support variadic parameters here.  I'm only
talking about supporting a 0..N for some fixed N interface.  And with
Boost.Bind already available, that makes other such interfaces "trivial to
implement".  At least usually.  The suggestion that the binding occur at
the time of construction is going to complicate things for me, because it
makes it much more difficult to handle the reference semantics required
here.

>> Sometimes there's something to be said for being "lower level".
>
> Sometimes.  I think users have complained all along that the
> Boost.Threads library takes the "you can implement it yourself using our
> primitives" line way too much.  It's important to supply
> simplifying high-level abstractions, especially in a domain as
> complicated as threading.

OK, I actually believe this is a valid criticism.  But I also think it's
wrong to start at the top of the design and work backwards.  In other
words, I expect that we'll take the lower level stuff I'm building now and
use them as the building blocks for the higher level constructs later.  If
I'd started with the higher level stuff, there'd be things that you
couldn't accomplish.

 > That's what we mean by the terms "high-level" and "encapsulation"
 ;-)

 Yes, but encapsulation shouldn't hide the implementation to the
 point that users aren't aware of what the operations actually are.
 ;)
>>>
>>> I don't think I agree with you, if you mean that the implementation
>>> should be apparent from looking at the usage.  Implementation details
>>> that must be revealed should be shown in the documentation.
>>
>> I was referring to the fact that you have no idea if the "async call"
>> is being done via a thread, a thread_pool, an RPC mechanism, a simple
>> message queue, etc.  Sometimes you don't care, but often you do.
>
> And for those cases you have a low-level interface, right?

Where's the low level interface if I don't provide it? ;)

-- 
William E. Kempf
[EMAIL PROTECTED]



Re: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread David Abrahams
"William E. Kempf" <[EMAIL PROTECTED]> writes:

> Dave Abrahams said:
>> On Thursday, February 06, 2003 12:33 PM [GMT+1=CET],
>> William E. Kempf <[EMAIL PROTECTED]> wrote:
>>
> I didn't say it wasn't a result, I said that it wasn't "only" a result. 
> In your case it's also the call.

OK.

>>> An asynchronous call can be bound to this result more than once.
>>
>> ...and if it can't be default-constructed?
>
> That's what boost::optional<> is for ;).

Yeeeh. Once the async_call returns, you have a value, and should be
able to count on it.  You shouldn't get back an object whose invariant
allows there to be no value.

>>> 2) You're still hiding the thread creation.
>>
>> Absolutely.  High-level vs. low-level.
>
> But I think too high-level.  I say this, because it ties you solely
> to thread creation for asynchronous calls.

I understand your argument.  I'm not suggesting we mask the low-level
interface.

>>> Second, and this is more important, you've bound this concept to
>>> boost::thread explicitly.  With the fully separated concerns of my
>>> proposal, async_result can be used with other asynchronous call
>>> mechanisms, such as the coming boost::thread_pool.
>>>
>>>async_result res1, res2;

no fair - I'm calling it async_call now ;-)

>>>thread_pool pool;
>>>pool.dispatch(bind(res1.call(foo), a, b, c));
>>>pool.dispatch(bind(res2.call(foo), d, e, f));
>>>d = res1.value() + res2.value();
>>
>> This one is important.  However, there are other ways to deal with this.
>>  An async_call object could take an optional thread-creation parameter,
>> for example.
>
> It's not "thread-creation" in this case.  You don't create threads
> when you use a thread_pool.

OK, "thread acquisition", then.  

> And there's other examples as well, such as RPC mechanisms.  

True.

> And personally, I find passing such a "creation parameter" to be
> turning the design inside out.  

A bit, yes.

> It might make things a little simpler for the default case, but it
> complicates usage for all the other cases.  With the design I
> presented every usage is treated the same.

There's a lot to be said for making "the default case" very easy.

> More importantly, if you really don't like the syntax of my design,
> it at least allows you to *trivially* implement your design.

I doubt most users regard anything involving typesafe varargs as
"trivial to implement."

> Sometimes there's something to be said for being "lower level".

Sometimes.  I think users have complained all along that the
Boost.Threads library takes the "you can implement it yourself using
our primitives" line way too much.  It's important to supply
simplifying high-level abstractions, especially in a domain as
complicated as threading.

>>> > That's what we mean by the terms "high-level" and "encapsulation"
>>> ;-)
>>>
>>> Yes, but encapsulation shouldn't hide the implementation to the point
>>> that users aren't aware of what the operations actually are. ;)
>>
>> I don't think I agree with you, if you mean that the implementation
>> should be apparent from looking at the usage.  Implementation details
>> that must be revealed should be shown in the documentation.
>
> I was referring to the fact that you have no idea if the "async call" is
> being done via a thread, a thread_pool, an RPC mechanism, a simple message
> queue, etc.  Sometimes you don't care, but often you do.

And for those cases you have a low-level interface, right?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




RE: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread William E. Kempf

Darryl Green said:
>> -Original Message-
>> From: William E. Kempf [mailto:[EMAIL PROTECTED]]
>>
>> Dave Abrahams said:
>> > Hm? How is the result not a result in my case?
>>
>> I didn't say it wasn't a result, I said that it wasn't "only" a
> result.
>> In your case it's also the call.
>
> Regardless of whether it invokes the function the result must always be
> associated with the function at some point. It would be nice if this
> could be done at creation as per Dave's suggestion but providing
> behaviour like the futures alexander refers to. That is, bind the
> function, its parameters and the "async_result" into a single
async_call/future object that is a function/executable object. It can be
> passed to (executed by) a thread or a thread_pool (or whatever).

I'm not sure that binding the result and "the function" at creation time
is that helpful.  Actual results aren't that way.  This allows you to
reuse the result variable in multiple calls to different functions.  But
if people aren't comfortable with this binding scheme, I'm not opposed to
changing it.  Doing so *will* complicate things a bit, however, on the
implementation side.  So let me explore it some.

>> It's not "thread-creation" in this case.  You don't create threads
> when
>> you use a thread_pool.  And there's other examples as well, such as
> RPC
>> mechanisms.  And personally, I find passing such a "creation
> parameter" to
>> be turning the design inside out.
>
> But this doesn't (borrowing Dave's async_call syntax and Alexander's
> semantics, which aren't really any different from yours):

Dave's semantics certainly *were* different from mine (and the Futures
link posted by Alexander).  In fact, I see Alexander's post as
strengthening my argument for semantics different from Dave's.  Which
leaves us with my semantics (mostly), but some room left to argue the
syntax.

> async_call later1(foo, a, b, c);
> async_call later2(foo, d, e, f);
> thread_pool pool;
> pool.dispatch(later1);
> pool.dispatch(later2);
> d = later1.result() + later2.result();

You've not used Dave's semantics, but mine (with the variation of when you
bind).

>> More importantly, if you really don't like the syntax of my design, it
> at
>> least allows you to *trivially* implement your design.  Sometimes
> there's
>> something to be said for being "lower level".
>
Well, as a user I'd be *trivially* implementing something to produce the
above. Doable, I think (after I have a bit of a look at the innards of
bind), but it's hardly trivial.

The only thing that's not trivial with your syntax changes above is
dealing with the requisite reference semantics without requiring dynamic
memory allocation.  But I think I can work around that.  If people prefer
the early/static binding, I can work on this design.  I think it's a
little less flexible, but won't argue that point if people prefer it.

-- 
William E. Kempf
[EMAIL PROTECTED]





RE: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread Darryl Green
> -Original Message-
> From: William E. Kempf [mailto:[EMAIL PROTECTED]]
>
> Dave Abrahams said:
> > Hm? How is the result not a result in my case?
> 
> I didn't say it wasn't a result, I said that it wasn't "only" a
result.
> In your case it's also the call.

Regardless of whether it invokes the function the result must always be
associated with the function at some point. It would be nice if this
could be done at creation as per Dave's suggestion but providing
behaviour like the futures alexander refers to. That is, bind the
function, its parameters and the "async_result" into a single
async_call/future object that is a function/executable object. It can be
passed to (executed by) a thread or a thread_pool (or whatever).
 
> 
> >> 2) You're still hiding the thread creation.
> >
> > Absolutely.  High-level vs. low-level.
> 
> But I think too high-level.  I say this, because it ties you solely to
> thread creation for asynchronous calls.
> 
> >> This is a mistake to me for
> >> two reasons.  First, it's not as obvious that a thread is being
> >> created here (though the new names help a lot).
> >
> > Unimportant, IMO.  Who cares how an async_call is implemented under
the
> > covers?
> 
> I care, because of what comes next ;).
> 
> >> Second, and this is more
> >> important, you've bound this concept to boost::thread explicitly.
> >> With the fully seperated concerns of my proposal, async_result can
be
> >> used with other asynchronous call mechanisms, such as the coming
> >> boost::thread_pool.
> >
> > This one is important.  However, there are other ways to deal with
this.
> >  An async_call object could take an optional thread-creation
parameter,
> > for example.
> 
> It's not "thread-creation" in this case.  You don't create threads
when
> you use a thread_pool.  And there's other examples as well, such as
RPC
> mechanisms.  And personally, I find passing such a "creation
parameter" to
> be turning the design inside out.

But this doesn't (borrowing Dave's async_call syntax and Alexander's
semantics, which aren't really any different from yours):

async_call later1(foo, a, b, c);
async_call later2(foo, d, e, f);
thread_pool pool;
pool.dispatch(later1);
pool.dispatch(later2);
d = later1.result() + later2.result();

> More importantly, if you really don't like the syntax of my design, it
at
> least allows you to *trivially* implement your design.  Sometimes
there's
> something to be said for being "lower level".

Well, as a user I'd be *trivially* implementing something to produce the
above. Doable, I think (after I have a bit of a look at the innards of
bind), but it's hardly trivial.

Regards
Darryl Green.

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

Dave Abrahams said:
> On Thursday, February 06, 2003 12:33 PM [GMT+1=CET],
> William E. Kempf <[EMAIL PROTECTED]> wrote:
>
>> Dave Abrahams said:
>>
>> > > Hmm... that would be
>> > > an interesting alternative implementation.  I'm not sure it's as
>> "obvious" as the syntax I suggested
>> >
>> > Sorry, IMO there's nothing "obvious" about your syntax.  It looks
>> cumbersome and low-level to me.  Let me suggest some other syntaxes
>> for async_result, though:
>> >
>> > async_call later(foo, a, b, c)
>> >
>> > or, if you don't want to duplicate the multi-arg treatment of
>> bind(), just:
>> >
>> > async_call later(bind(foo, a, b, c));
>> > ...
>> > ...
>> > double d = later(); // call it to get the result out.
>>
>> The two things that come to mind for me with this suggestion are:
>>
>> 1) You've explicitly tied the result into the call.  I chose the other
>> design because the result is just that, only a result.
>
> Hm? How is the result not a result in my case?

I didn't say it wasn't a result, I said that it wasn't "only" a result. 
In your case it's also the call.

>> An asynchronous call can be bound to this result more than once.
>
> ...and if it can't be default-constructed?

That's what boost::optional<> is for ;).

>> 2) You're still hiding the thread creation.
>
> Absolutely.  High-level vs. low-level.

But I think too high-level.  I say this, because it ties you solely to
thread creation for asynchronous calls.

>> This is a mistake to me for
>> two reasons.  First, it's not as obvious that a thread is being
>> created here (though the new names help a lot).
>
> Unimportant, IMO.  Who cares how an async_call is implemented under the
> covers?

I care, because of what comes next ;).

>> Second, and this is more
>> important, you've bound this concept to boost::thread explicitly.
>> With the fully separated concerns of my proposal, async_result can be
>> used with other asynchronous call mechanisms, such as the coming
>> boost::thread_pool.
>>
>>async_result res1, res2;
>>thread_pool pool;
>>pool.dispatch(bind(res1.call(foo), a, b, c));
>>pool.dispatch(bind(res2.call(foo), d, e, f));
>>d = res1.value() + res2.value();
>
> This one is important.  However, there are other ways to deal with this.
>  An async_call object could take an optional thread-creation parameter,
> for example.

It's not "thread-creation" in this case.  You don't create threads when
you use a thread_pool.  And there's other examples as well, such as RPC
mechanisms.  And personally, I find passing such a "creation parameter" to
be turning the design inside out.  It might make things a little simpler
for the default case, but it complicates usage for all the other cases. 
With the design I presented every usage is treated the same.

More importantly, if you really don't like the syntax of my design, it at
least allows you to *trivially* implement your design.  Sometimes there's
something to be said for being "lower level".

>> > That's what we mean by the terms "high-level" and "encapsulation"
>> ;-)
>>
>> Yes, but encapsulation shouldn't hide the implementation to the point
>> that users aren't aware of what the operations actually are. ;)
>
> I don't think I agree with you, if you mean that the implementation
> should be apparent from looking at the usage.  Implementation details
> that must be revealed should be shown in the documentation.

I was referring to the fact that you have no idea if the "async call" is
being done via a thread, a thread_pool, an RPC mechanism, a simple message
queue, etc.  Sometimes you don't care, but often you do.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread Dave Abrahams
On Thursday, February 06, 2003 12:33 PM [GMT+1=CET],
William E. Kempf <[EMAIL PROTECTED]> wrote:

> Dave Abrahams said:
>
> > > Hmm... that would be
> > > an interesting alternative implementation.  I'm not sure it's as
> > > "obvious" as the syntax I suggested
> >
> > Sorry, IMO there's nothing "obvious" about your syntax.  It looks
> > cumbersome and low-level to me.  Let me suggest some other syntaxes for
> > async_result, though:
> >
> > async_call later(foo, a, b, c)
> >
> > or, if you don't want to duplicate the multi-arg treatment of bind(),
> > just:
> >
> > async_call later(bind(foo, a, b, c));
> > ...
> > ...
> > double d = later(); // call it to get the result out.
>
> The two things that come to mind for me with this suggestion are:
>
> 1) You've explicitly tied the result into the call.  I chose the other
> design because the result is just that, only a result.

Hm? How is the result not a result in my case?

> An asynchronous call can be bound to this result more than once.

...and if it can't be default-constructed?

> 2) You're still hiding the thread creation.

Absolutely.  High-level vs. low-level.

> This is a mistake to me for
> two reasons.  First, it's not as obvious that a thread is being created
> here (though the new names help a lot).

Unimportant, IMO.  Who cares how an async_call is implemented under the
covers?

> Second, and this is more
> important, you've bound this concept to boost::thread explicitly.  With
> the fully separated concerns of my proposal, async_result can be used with
> other asynchronous call mechanisms, such as the coming boost::thread_pool.
>
>async_result res1, res2;
>thread_pool pool;
>pool.dispatch(bind(res1.call(foo), a, b, c));
>pool.dispatch(bind(res2.call(foo), d, e, f));
>d = res1.value() + res2.value();

This one is important.  However, there are other ways to deal with this.  An
async_call object could take an optional thread-creation parameter, for
example.

> > I like the first one better, but could understand why you'd want to go
> > with the second one.  This is easily implemented on top of the existing
> > Boost.Threads interface.  Probably any of my suggestions is.
>
> Yes, all of the suggestions which don't directly modify boost::thread are
> easily implemented on top of the existing interface.

No duh ;-)

> > That's what we mean by the terms "high-level" and "encapsulation" ;-)
>
> Yes, but encapsulation shouldn't hide the implementation to the point that
> users aren't aware of what the operations actually are. ;)

I don't think I agree with you, if you mean that the implementation should
be apparent from looking at the usage.  Implementation details that must be
revealed should be shown in the documentation.

> But I'll admit that some of my own initial confusion on this particular
> case probably stems from having my brain focused on implementation details.

Ha!

> > > I found this
> > > surprising enough to require careful thought about the FULL example
> > > you posted to understand this.
> >
> > Like I said, I can't argue with user confusion.  Does the name
> > "async_call" help?
>
> Certainly... but leads to the problems I addressed above.  There's likely a
> design that will satisfy all concerns, however, that's not been given yet.

P'raps.

-- 
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

Dave Abrahams said:

>> Hmm... that would be
>> an interesting alternative implementation.  I'm not sure it's as
>> "obvious" as the syntax I suggested
>
> Sorry, IMO there's nothing "obvious" about your syntax.  It looks
> cumbersome and low-level to me.  Let me suggest some other syntaxes for
> async_result, though:
>
> async_call later(foo, a, b, c)
>
> or, if you don't want to duplicate the multi-arg treatment of bind(),
> just:
>
> async_call later(bind(foo, a, b, c));
> ...
> ...
> double d = later(); // call it to get the result out.

The two things that come to mind for me with this suggestion are:

1) You've explicitly tied the result into the call.  I chose the other
design because the result is just that, only a result.  An asynchronous
call can be bound to this result more than once.

2) You're still hiding the thread creation.  This is a mistake to me for
two reasons.  First, it's not as obvious that a thread is being created
here (though the new names help a lot).  Second, and this is more
important, you've bound this concept to boost::thread explicitly.  With
the fully separated concerns of my proposal, async_result can be used with
other asynchronous call mechanisms, such as the coming boost::thread_pool.

   async_result res1, res2;
   thread_pool pool;
   pool.dispatch(bind(res1.call(foo), a, b, c));
   pool.dispatch(bind(res2.call(foo), d, e, f));
   d = res1.value() + res2.value();

> I like the first one better, but could understand why you'd want to go
> with the second one.  This is easily implemented on top of the existing
> Boost.Threads interface.  Probably any of my suggestions is.

Yes, all of the suggestions which don't directly modify boost::thread are
easily implemented on top of the existing interface.

>> as evidenced by the questions I've raised here,
>
> Can't argue with user confusion I guess ;-)
>
>> but worth considering.  Not sure I care for "spawn(foo)(a, b, c)"
>> though. I personally still prefer explicit usage of Boost.Bind or some
>> other binding/lambda library.  But if you want to "hide" the binding,
>> why not just "spawn(foo, a, b, c)"?
>
> Mostly agree; it's just that interfaces like that tend to obscure which
> is the function and which is the argument list.

OK.  That's never bothered me, though, and is not the syntax used by
boost::bind, so I find it less appealing.

>> > This approach doesn't get the asynchronous call wound up with the
>> meaning of the "thread" concept.
>>
> >> If I fully understand it, yes it does, but to a lesser extent.  What
>> I mean by this is that the async_result hides the created thread
>> (though you do get access to it through the res.thread() syntax).
>
> That's what we mean by the terms "high-level" and "encapsulation" ;-)

Yes, but encapsulation shouldn't hide the implementation to the point that
users aren't aware of what the operations actually are. ;)

But I'll admit that some of my own initial confusion on this particular
case probably stems from having my brain focused on implementation details.

>> I found this
>> surprising enough to require careful thought about the FULL example
>> you posted to understand this.
>
> Like I said, I can't argue with user confusion.  Does the name
> "async_call" help?

Certainly... but leads to the problems I addressed above.  There's likely a
design that will satisfy all concerns, however, that's not been given yet.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread Wolfgang Bangerth

> >> async_result res;
> >> thread t(bind(res.call(), a, b, c));
> >> // do something else
> >> d = res.value();  // Explicitly waits for the thread to return a
> >> value?
> >
> > This does the same, indeed. Starting a thread this way is just a little
> > more complex (and -- in my view -- less obvious to read) than writing
> >   thread t = spawn(foo)(a,b,c);
> 
> Not sure I agree about "less obvious to read".  If your syntax had been
>thread t = spawn(foo, a, b, c);
> I think you'd have a bit more of an argument here.  And I certainly could
> fold the binding directly into boost::thread so that my syntax would
> become:
> 
> thread t(res.call(), a, b, c);
> 
> I could even eliminate the ".call()" syntax with some implicit
> conversions, but I dislike that for the obvious reasons.  I specifically
> chose not to include syntactic binding in boost::thread a long time ago,
> because I prefer the explicit separation of concerns.  So, where you think
> my syntax is "less obvious to read", I think it's "more explicit".

If you do all this, then you'll probably almost arrive at the code I 
posted :-)

Still, keeping the analogy to the usual call foo(a,b,c), I prefer the 
arguments to foo in a separate pair of parentheses. However, there is 
another point that I guess will make your approach very hard: assume
void foo(int, double, char);
and a potential constructor for your thread class
template <typename A, typename B, typename C>
thread (void (*p)(A,B,C), A, B, C);
Then you can write
foo(1,1,1)
and arguments will be converted automatically. However, you cannot write
thread t(foo, 1, 1, 1);
since template parameters must be exact matches.

There really is no other way than to get at the argument types in a
first step, and pass the arguments in a second step. You _need_ two sets
of parentheses to get the conversions.


> > Actually it does duplicate the work, but not because I am stubborn. We
> > have an existing implementation for a couple of years, and the present
> > version just evolved from this. However, there's a second point: when
> > starting threads, you have a relatively clear picture as to how long
> > certain objects are needed, and one can avoid several copying steps if
> > one  does some things by hand. It's short anyway, tuple type and tie
> > function  are your friend here.
> 
> I'm not sure how you avoid copies here.

Since you have control over lifetimes of objects, you can pass references 
instead of copies at various places.


> >   t->kill ();
> >   t->suspend ();
> > Someone sees that there's a function yield() but doesn't have the time
> > to  read the documentation, what will he assume what yield() does?
> 
> How does "someone see that there's a function yield()" without also
> seeing that it's static?  No need to read documentation for that, as it's
> an explicit part of the function's signature.

Seeing it used in someone else's code? Just not being careful when reading 
the signature?

I think it's the same argument as with void* : if applied correctly it's 
ok, but in general it's considered harmful.

W.

-
Wolfgang Bangerth email:[EMAIL PROTECTED]
  www: http://www.ticam.utexas.edu/~bangerth/





Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread Dave Abrahams
From: "William E. Kempf" <[EMAIL PROTECTED]>

> > From: "Wolfgang Bangerth" <[EMAIL PROTECTED]>
> >> This does the same, indeed. Starting a thread this way is just a
> >> little more complex (and -- in my view -- less obvious to read) than
> >> writing
> >>   thread t = spawn(foo)(a,b,c);
> >>
> >> But that's just personal opinion, and I'm arguably biased :-)
> >
> > I'm sure I'm never biased, and I tend to like your syntax better.
> > However, I recognize what Bill is concerned about.  Let me suggest a
> > compromise:
> >
> >   async_result later = spawn(foo)(a, b, c);
>
> Mustn't use the name spawn() here.  It implies a thread/process/whatever
> has been spawned at this point, which is not the case.  Or has it (he says
> later, having read on)?

It has.

> >   ...
> >   thread& t = later.thread();
>
> The thread type will be Copyable and Assignable soon, so no need for the
> reference.  Hmm... does this member indicate that spawn() above truly did
> create a thread that's stored in the async_result?

Yes.

> Hmm... that would be
> an interesting alternative implementation.  I'm not sure it's as "obvious"
> as the syntax I suggested

Sorry, IMO there's nothing "obvious" about your syntax.  It looks cumbersome
and low-level to me.  Let me suggest some other syntaxes for async_result,
though:

async_call later(foo, a, b, c)

or, if you don't want to duplicate the multi-arg treatment of bind(), just:

async_call later(bind(foo, a, b, c));
...
...
double d = later(); // call it to get the result out.

I like the first one better, but could understand why you'd want to go with
the second one.  This is easily implemented on top of the existing
Boost.Threads interface.  Probably any of my suggestions is.

> as evidenced by the questions I've raised here,

Can't argue with user confusion I guess ;-)

> but worth considering.  Not sure I care for "spawn(foo)(a, b, c)" though.
> I personally still prefer explicit usage of Boost.Bind or some other
> binding/lambda library.  But if you want to "hide" the binding, why not
> just "spawn(foo, a, b, c)"?

Mostly agree; it's just that interfaces like that tend to obscure which is
the function and which is the argument list.

> And if we go this route, should we remove the boost::thread constructor
> that creates a thread in favor of using spawn() there as well?
>
>thread t = spawn(foo, a, b, c);

Good point.  I dunno.  I don't see a problem with the idea that
async_call adds little or nothing to thread.

> > This approach doesn't get the asynchronous call wound up with the
> > meaning of the "thread" concept.
>
> If I fully understand it, yes it does, but to a lesser extent.  What I
> mean by this is that the async_result hides the created thread (though you
> do get access to it through the res.thread() syntax).

That's what we mean by the terms "high-level" and "encapsulation" ;-)

> I found this
> surprising enough to require careful thought about the FULL example you
> posted to understand this.

Like I said, I can't argue with user confusion.  Does the name "async_call"
help?

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

> From: "Wolfgang Bangerth" <[EMAIL PROTECTED]>
>> This does the same, indeed. Starting a thread this way is just a
>> little more complex (and -- in my view -- less obvious to read) than
>> writing
>>   thread t = spawn(foo)(a,b,c);
>>
>> But that's just personal opinion, and I'm arguably biased :-)
>
> I'm sure I'm never biased , and I tend to like your syntax better.
> However, I recognize what Bill is concerned about.  Let me suggest a
> compromise:
>
>   async_result later = spawn(foo)(a, b, c);

Mustn't use the name spawn() here.  It implies a thread/process/whatever
has been spawned at this point, which is not the case.  Or has it (he says
later, having read on)?

>   ...
>   thread& t = later.thread();

The thread type will be Copyable and Assignable soon, so no need for the
reference.  Hmm... does this member indicate that spawn() above truly did
create a thread that's stored in the async_result?  Hmm... that would be
an interesting alternative implementation.  I'm not sure it's as "obvious"
as the syntax I suggested, as evidenced by the questions I've raised here,
but worth considering.  Not sure I care for "spawn(foo)(a, b, c)" though. 
I personally still prefer explicit usage of Boost.Bind or some other
binding/lambda library.  But if you want to "hide" the binding, why not
just "spawn(foo, a, b, c)"?

And if we go this route, should we remove the boost::thread constructor
that creates a thread in favor of using spawn() there as well?

   thread t = spawn(foo, a, b, c);

>   // do whatever with t
>   ...
>   double now = later.join();  // or later.get()
>
> You could also consider the merits of providing an implicit conversion
> from async_result to T.

The merits, and the cons, yes.  I'll be considering this carefully at some
point.

> This approach doesn't get the asynchronous call wound up with the
> meaning of the "thread" concept.

If I fully understand it, yes it does, but to a lesser extent.  What I
mean by this is that the async_result hides the created thread (though you
do get access to it through the res.thread() syntax).  I found this
surprising enough to require careful thought about the FULL example you
posted to understand this.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread Dave Abrahams
From: "Wolfgang Bangerth" <[EMAIL PROTECTED]>

> > > double d;
> > > thread t = spawn(foo)(a,b,c);
> > > // do something else
> > > d = t.return_value();
> >
> > A solution like this has been proposed before, but I don't like it.
> > This creates multiple thread types, instead of a single thread type.  I think
> > this will only make the interface less convenient, and will make the
> > implementation much more complex.  For instance, you now must have a
> > separate thread_self type that duplicates all of thread<> except for the
> > data type specific features.  These differing types will have to compare
> > to each other, however.
>
> Make the common part a base class. That's how the proposal I sent you does
> it :-)
>
>
> > async_result res;
> > thread t(bind(res.call(), a, b, c));
> > // do something else
> > d = res.value();  // Explicitly waits for the thread to return a value?
>
> This does the same, indeed. Starting a thread this way is just a little
> more complex (and -- in my view -- less obvious to read) than writing
>   thread t = spawn(foo)(a,b,c);
>
> But that's just personal opinion, and I'm arguably biased :-)

I'm sure I'm never biased, and I tend to like your syntax better.
However, I recognize what Bill is concerned about.  Let me suggest a
compromise:

  async_result later = spawn(foo)(a, b, c);
  ...
  thread& t = later.thread();
  // do whatever with t
  ...
  double now = later.join();  // or later.get()

You could also consider the merits of providing an implicit conversion from
async_result to T.

This approach doesn't get the asynchronous call wound up with the meaning of
the "thread" concept.


--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

>
>> > double d;
>> > thread t = spawn(foo)(a,b,c);
>> > // do something else
>> > d = t.return_value();
>>
>> A solution like this has been proposed before, but I don't like it.
>> This creates multiple thread types, instead of a single thread type.
>> I think this will only make the interface less convenient, and will
>> make the implementation much more complex.  For instance, you now must
>> have a separate thread_self type that duplicates all of thread<>
>> except for the data type specific features.  These differing types
>> will have to compare to each other, however.
>
> Make the common part a base class. That's how the proposal I sent you
> does  it :-)

Simplifies the implementation, but complicates the interface.

>> async_result res;
>> thread t(bind(res.call(), a, b, c));
>> // do something else
>> d = res.value();  // Explicitly waits for the thread to return a
>> value?
>
> This does the same, indeed. Starting a thread this way is just a little
> more complex (and -- in my view -- less obvious to read) than writing
>   thread t = spawn(foo)(a,b,c);

Not sure I agree about "less obvious to read".  If your syntax had been
   thread t = spawn(foo, a, b, c);
I think you'd have a bit more of an argument here.  And I certainly could
fold the binding directly into boost::thread so that my syntax would
become:

thread t(res.call(), a, b, c);

I could even eliminate the ".call()" syntax with some implicit
conversions, but I dislike that for the obvious reasons.  I specifically
chose not to include syntactic binding in boost::thread a long time ago,
because I prefer the explicit separation of concerns.  So, where you think
my syntax is "less obvious to read", I think it's "more explicit".

> But that's just personal opinion, and I'm arguably biased :-)

As am I :).

>> Hopefully you're not duplicating efforts here, and are using
>> Boost.Bind and Boost.Function in the implementation?
>
> Actually it does duplicate the work, but not because I am stubborn. We
> have an existing implementation for a couple of years, and the present
> version just evolved from this. However, there's a second point: when
> starting threads, you have a relatively clear picture as to how long
> certain objects are needed, and one can avoid several copying steps if
> one  does some things by hand. It's short anyway, tuple type and tie
> function  are your friend here.

I'm not sure how you avoid copies here.  Granted, the current
implementation isn't optimized in this way, but it's possible for me to
reduce the number of copies down to what I think would be equivalent to a
hand coded implementation.

>> > thread<> t = spawn(foo)(a,b,c);
>> > t.yield ();// oops, who's going to yield here?
>>
>> You shouldn't really ever write code like that.  It should be
>> thread::yield().  But even if you write it the way you did, it will
>> always be the current thread that yields, which is the only thread
>> that can.  I don't agree with seperating the interfaces here.
>
> I certainly know that one shouldn't write the code like this. It's just
> that this way you are inviting people to write buglets. After all, you
> have (or may have in the future) functions
>   t->kill ();
>   t->suspend ();
> Someone sees that there's a function yield() but doesn't have the time
> to  read the documentation, what will he assume what yield() does?

How does "someone see that there's a function yield()" without also
seeing that it's static?  No need to read documentation for that, as it's
an explicit part of the function's signature.

> If there's a way to avoid such invitations for errors, one should use
> it.

I understand the theory behind this, I've just never seen a real world
case where someone's been bitten in this way.  I know I never would be. 
So I don't find it very compelling.  But as I said elsewhere, I'm not so
opposed that I wouldn't consider making these free functions instead because of
this reasoning.  I would be opposed to another class, however, as I don't
think that solves anything, but instead makes things worse.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-05 Thread William E. Kempf

>
> Hi Ove,
>
>> f. It shall be possible to send extra information, as an optional
>> extra argument
>>to the  boost::thread ctor, to the created  thread.
>> boost::thread::self shall offer a method for retrieving this extra
>> information. It is not required that this information be passed in
>> a type-safe manner, i.e. void* is okay.
>>
>> g. It shall  be possible for a thread  to exit with a return value.
>> It shall be
>>possible for  the creating side to  retrieve, as a return  value
>> from join(), that  value. It is  not required  that this  value be
>> passed in  a type-safe manner, i.e. void* is okay.
>>
>> j. The header file shall not expose any implementation specific
>> details.
>
> Incidentally, I have a scheme almost ready that does all this. In
> particular, it allows you to pass every number of parameters to the new
> thread, and to return every possible type. Both happens in a type-safe
> fashion, i.e. whereever you would call a function serially like
> double d = foo(a, b, c);
> you can now call it like
> double d;
> thread t = spawn(foo)(a,b,c);
> // do something else
> d = t.return_value();

A solution like this has been proposed before, but I don't like it.  This
creates multiple thread types, instead of a single thread type.  I think
this will only make the interface less convenient, and will make the
implementation much more complex.  For instance, you now must have a
separate thread_self type that duplicates all of thread<> except for the
data type specific features.  These differing types will have to compare
to each other, however.

I don't feel that this sort of information belongs in the thread object. 
It belongs in the thread function.  This already works very nicely for
passing data, we just need some help with returning data.  And I'm working
on that.  The current idea would be used something like this:

async_result res;
thread t(bind(res.call(), a, b, c));
// do something else
d = res.value();  // Explicitly waits for the thread to return a value?

Now thread remains type-neutral, but we have the full ability to both pass
and return values in a type-safe manner.

> Argument and return types are automatically deduced, and the number of
> arguments are only limited by the present restriction on the number of
> elements in boost::tuple (which I guess is 10). Conversions between
> types  are performed in exactly the same way as they would when calling
> a  function serially. Furthermore, it also allows calling member
> functions  with some object, without the additional syntax necessary to
> tie object  and member function pointer together.

Hopefully you're not duplicating efforts here, and are using Boost.Bind
and Boost.Function in the implementation?

> I attach an almost ready proposal to this mail, but rather than
> steamrolling the present author of the threads code (William Kempf), I
> would like to discuss this with him (and you, if you like) before
> submitting it as a proposal to boost.

Give me a couple of days to have the solution above implemented in the dev
branch, and then we can argue for or against the two designs.
branch, and then argue for or against the two designs.

> Let me add that I agree with all your other topics, in particular the
> separation of calling/called thread interface, to prevent accidents like
> thread<> t = spawn(foo)(a,b,c);
> t.yield ();// oops, who's going to yield here?

You shouldn't really ever write code like that.  It should be
thread::yield().  But even if you write it the way you did, it will always
be the current thread that yields, which is the only thread that can.  I
don't agree with separating the interfaces here.

> I would be most happy if we could cooperate and join efforts.

Certainly.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost