[kamaelia-list] Re: Bug with Unconnected UDP Peers

2009-03-03 Thread Steve

Michael,

I was looking through the UDP_ng code.  Maybe I missed something, but
I can't find any teardown code.  SimplePeer, TargettedPeer, and
PostboxPeer all exit their while loops in response to a
shutdownMicroprocess, but none of them cleans anything up.

Since my problem is they don't release their ports, I tried adding a
simple self.sock.close() immediately after the main while loop.  With
this change, the peers release their sockets on shutdown and I don't
get an exception when the same port number rolls around.

So, I think adding sock.close() is a good thing.  After this change,
though, I get intermittent (1s-10m) bad file descriptor exceptions
thrown from line 312 of Selector.py.

I tried adding remove messages to the selector service prior to the
close, but I still sometimes get a bad file descriptor exception.
Have I missed something?  After my while loop, I now have:

    self.send(removeWriter(self, ((self, "writeReady"), self.sock)),
              "_selectorSignal")
    yield 1
    self.send(removeReader(self, ((self, "readReady"), self.sock)),
              "_selectorSignal")
    yield 1
    self.sock.close()

Digging a little deeper, it looks like, despite my sending the messages
prior to the close, they are read and acted upon after the close.  That
is probably why it sometimes barfs: it tries to do something with the
descriptor without knowing the socket is closed.
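
One thing I may try next (just a sketch; the extra yields are only a
guess at how long the Selector needs, not a guarantee):

    self.send(removeWriter(self, ((self, "writeReady"), self.sock)),
              "_selectorSignal")
    self.send(removeReader(self, ((self, "readReady"), self.sock)),
              "_selectorSignal")
    # Give the Selector a few timeslices to act on the removal requests
    # before the descriptor goes away (arbitrary grace period).
    for _ in range(5):
        yield 1
    self.sock.close()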

Thanks,
Steve


On Mar 2, 1:36 pm, Steve  wrote:
> I'm running on Vista, so again maybe this is a windows specific
> problem.  I have two different applications that work
> similarly.   One is a server that rotates through a range of ports.
> It creates a UDP Peer on a port, waits for data, and after a time
> sends a shutdownMicroprocess and rotates to the next port.
>
> The problem is that my software firewall shows these local ports are
> never relinquished.  In fact, if I wait long enough the rotate
> algorithm goes through 1000 ports and starts back over.  When that
> happens I get an exception that the requested port is already bound.
> In the server code I used UDP_ng.TargettedPeer, but I think it's a
> problem in the basic peer code.
>
> The second program is a client side version.  It does the same thing
> except it binds to a port, sends data, waits for a response, rotates,
> etc.  The result is the same thing.  The ports are never
> relinquished.  In the client side code I used UDP_ng.SimplePeer, but
> again I think it's in the base code.
>
> At one point I added an extra ._isStopped() check (it wasn't stopped)
> and then called the .stop() method.  This indeed stopped the
> microprocess but still left the port bound.
>
> Any ideas?  Could this be related to the TCP ignored connection bug
> that I was describing in another thread?
>
> Thanks,
> Steve



[kamaelia-list] Re: Bug in SingleShotHTTPClient

2009-03-03 Thread Steve
Thinko, I meant:

>                waitTill = time.time() + self.connect_timeout
>                while not self.safeConnect(sock,(self.host, self.port)):
>                   if self.shutdown():
>                       return
>                   if time.time() >= waitTill:
>                       self.howDied = "timeout"
>                       raise Finality
>                   yield 1




[kamaelia-list] Re: Bug in SingleShotHTTPClient

2009-03-03 Thread Steve

> > class TCPClient(Axon.Component.component):
> >    def __init__(self,host,port,delay=0,connect_timeout=60):
> >        self.connect_timeout = connect_timeout
> >    ...
> >    connect_start = time.time()
> >    while not self.safeConnect(sock,(self.host, self.port)):
> >        if self.shutdown():
> >            return
> >        if ( time.time() - connect_start ) > self.connect_timeout:
> >            self.howDied = "timeout"
> >            raise Finality
> >        yield 1

I just updated my TCPClient to get the timeout you checked in.  May
I suggest rearranging the math a little to take it out of the loop:

   waitTill = time.time() + self.connect_timeout
   while not self.safeConnect(sock,(self.host, self.port)):
       if self.shutdown():
           return
       if time.time() >= self.connect_timeout:
           self.howDied = "timeout"
           raise Finality
       yield 1




[kamaelia-list] Re: Making self.pause() for generator components mirror self.pause() for threads components

2009-03-03 Thread Matt Hammond


> The upshot is that pause() tells the scheduler "please don't call me again
> unless there's a new message in an inbox OR a message is taken from an
> outbox".

... OR if a child component terminates :-)

> For threaded components it means the same thing. However, since a threaded
> component can genuinely sleep, if it doesn't ever bother checking its
> inboxes
> for messages, it will sleep for ever. As a result, for threaded
> components,
> it adds an optional timeout argument to say "sleep for this long". Whilst
> this was added for practicality reasons, it does mean that self.pause()
> has a
> different meaning for generator components from threaded ones.

When a ThreadedComponent calls self.pause() to go to sleep, it will be
woken by messages arriving at an inbox, or being taken from an outbox. See
lines 483-504 and 541 in ThreadedComponent.py:

http://code.google.com/p/kamaelia/source/browse/trunk/Code/Python/Axon/Axon/ThreadedComponent.py#483

http://code.google.com/p/kamaelia/source/browse/trunk/Code/Python/Axon/Axon/ThreadedComponent.py#541

The timeout feature is there, to my mind, because it provides a convenient
way to add basic timing facilities, very nearly for free (the
threading.Event() object it uses to pause already has an optional timeout
argument built in).
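
To illustrate the mechanism (plain Python, not Axon code), the shape of it
is roughly:

    import threading

    wake = threading.Event()

    def pause(timeout=None):
        # Block until something calls wake.set() (e.g. a message arriving),
        # or until the timeout expires, whichever happens first.
        wake.clear()
        wake.wait(timeout)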

In my understanding, there is therefore no difference between self.pause()
for generator based components and threaded components, except that the
pausing will not happen in a generator based component until the next
yield statement is reached.


> Now there's two ways that self.pause(timeout=delay) could gain the same
> meaning for generator components. One is to change the scheduler to become
> time aware. The other is for it to wrap up a call to "PausingService" that
> will awaken you after a minimum of delay has passed.
>
> Personally I prefer the latter, but this raises two issues:
> * By changing the scheduler we change Axon/Kamaelia in a more
> fundamental
>   way. Not necessarily the correct way. Certainly in a way that makes
> it
>   harder to hack on and modify.
> * The latter approach would mean that we're using a component
>   *inside Axon* itself. This goes against Axon's spirit to an extent, but
>   is the simpler, and probably more robust, solution.
>
> I would also suggest that the return value of self.pause() be something
> that
> is yieldable.

I like the idea of self.pause() returning something you have to yield - to
my mind that sorts out the potential confusion around the fact that
self.pause() doesn't take effect until the next yield statement.

I agree that the latter approach feels simpler and possibly cleaner; but
maybe at the expense of scope for performance optimisation (without later
going back and working out how to change the scheduler). However, we don't
seem to have that many applications where performance is that critical
anyway.

However, generalising a bit, what would be nice to add to the scheduler
would be finer grained control over what can and cannot cause a
component's generator to be woken from a paused state. For example, many
components have no interest in being awoken when a message is taken from
their outbox (because they do not support sending to size-limited
destinations). If you could say something like (but not necessarily
exactly like) this:

yield self.pause(timeout=0.001)

or this:

yield self.pause(ignoreOutboxes=["outbox","signal"])

or this:

yield self.pause(timeout=0.001, ignoreAllOutboxes=True)

then, as a component writer, I'd find that quite a nice extra level of
optimisation I could choose to incrementally add to my components.

No idea how it should be implemented though. However I think I'm making
the point that perhaps there's scope for a more general mechanism which
*would* be appropriate to incorporate into the scheduler.

My gut feeling is that whatever is returned by self.pause(...) would be
some kind of object/function that the scheduler would call. This
object/function contains the intelligence needed to enqueue some kind of
timeout event, or to block/disable other sources of events.
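
Purely as a sketch of the shape I have in mind (none of these names exist
in Axon today), something like:

    import time

    class PauseRequest(object):
        def __init__(self, timeout=None, ignoreOutboxes=()):
            self.wakeAt = None if timeout is None else time.time() + timeout
            self.ignoreOutboxes = set(ignoreOutboxes)

        def readyToWake(self, now, inboxActivity, outboxActivity):
            # Wake on any inbox activity, on activity in an outbox we have
            # not opted out of, or once the optional timeout has expired.
            if inboxActivity:
                return True
            if set(outboxActivity) - self.ignoreOutboxes:
                return True
            return self.wakeAt is not None and now >= self.wakeAt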



Matt
-- 
| Matt Hammond
|
| [anything you like unless it bounces] 'at' matthammond 'dot' org







[kamaelia-list] Making self.pause() for generator components mirror self.pause() for threads components

2009-03-03 Thread Michael Sparks

Hi,


self.pause() in generator components has the following effects:
   * It schedules a request with the scheduler to pause the generator. cf:
 self.scheduler.pauseThread(self)

This actually adds a pauseRequest for the microprocess on a threadsafe
queue. That queue is periodically processed, and the microprocess is marked
as "_GOINGTOSLEEP". This precludes it from being added to the run queue (ie
being given a timeslice).

In practice it gets unpaused by a callback in Component & Box that results
in an unpause request undoing this.
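
In pseudo-code (illustrative only, not the actual scheduler source), the
pattern is roughly:

    import Queue   # the real code uses Axon's own threadsafe machinery

    class SchedulerSketch(object):
        def __init__(self):
            self.requests = Queue.Queue()   # threadsafe request queue
            self.sleeping = set()           # microprocesses marked asleep
            self.runqueue = []

        def pauseThread(self, mprocess):
            self.requests.put(("pause", mprocess))

        def processRequests(self):
            while not self.requests.empty():
                action, mprocess = self.requests.get()
                if action == "pause":
                    self.sleeping.add(mprocess)      # "_GOINGTOSLEEP"
                else:
                    self.sleeping.discard(mprocess)  # an unpause undoes it
            self.runqueue = [m for m in self.runqueue
                             if m not in self.sleeping]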

The upshot is that pause() tells the scheduler "please don't call me again 
unless there's a new message in an inbox OR a message is taken from an 
outbox".

For threaded components it means the same thing. However, since a threaded 
component can genuinely sleep, if it doesn't ever bother checking its inboxes 
for messages, it will sleep for ever. As a result, for threaded components, 
it adds an optional timeout argument to say "sleep for this long". Whilst 
this was added for practicality reasons, it does mean that self.pause() has a 
different meaning for generator components from threaded ones.

Now there's two ways that self.pause(timeout=delay) could gain the same 
meaning for generator components. One is to change the scheduler to become 
time aware. The other is for it to wrap up a call to "PausingService" that 
will awaken you after a minimum of delay has passed.

Personally I prefer the latter, but this raises two issues:
* By changing the scheduler we change Axon/Kamaelia in a more fundamental
  way. Not necessarily the correct way. Certainly in a way that makes it
  harder to hack on and modify.
* The latter approach would mean that we're using a component
  *inside Axon* itself. This goes against Axon's spirit to an extent, but
   is the simpler, and probably more robust, solution.

I would also suggest that the return value of self.pause() be something that 
is yieldable.

For a practical, real-world use case where this would be useful, it would
allow this:
   while not self.safeConnect(sock,(self.host, self.port)):
       if self.shutdown():
           return
       if ( time.time() - startConnect ) > self.connect_timeout:
           self.howDied = "timeout"
           raise Finality
       yield 1

To become this:
   while not self.safeConnect(sock,(self.host, self.port)):
       if self.shutdown():
           return
       if ( time.time() - startConnect ) > self.connect_timeout:
           self.howDied = "timeout"
           raise Finality
       # Retry in a millisecond. Release CPU.
       yield self.pause(timeout=0.001)

This would release the CPU significantly for this particular use case. It
would also be useful in other components, but this one is topical :)
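
As a very rough sketch of what a PausingService might look like (none of
this exists yet, and the wake-up call at the end is the part that needs
actual thought):

    import time
    import Axon

    class PausingService(Axon.Component.component):
        def main(self):
            pending = []   # list of (wakeTime, component) pairs
            while 1:
                while self.dataReady("inbox"):
                    who, delay = self.recv("inbox")
                    pending.append((time.time() + delay, who))
                now = time.time()
                due = [item for item in pending if item[0] <= now]
                pending = [item for item in pending if item[0] > now]
                for _, who in due:
                    who.unpause()   # assumed wake-up mechanism; may differ
                yield 1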


Michael.
-- 
http://yeoldeclue.com/blog
http://twitter.com/kamaelian
http://www.kamaelia.org/Home




[kamaelia-list] Re: Windows socket errors and timeouts

2009-03-03 Thread Michael Sparks

On Tuesday 03 March 2009 22:11:11 Steve wrote:
> The real problem is we need a way to set a timeout on the connection
> attempt in the background without making it blocking.

Yes, this is what I've done :-) 

OK, not without sucking CPU, but I did say "The cost at present is higher
CPU usage than would be ideal". I didn't make it clear that I can also see
how we resolve that point.

Regarding your concerns around WSAEINVAL, you may wish to be aware that
what I'm doing mirrors what twisted does inside 
twisted.internet.BaseClient.doConnect.

Furthermore, there's an explanatory comment there:
   # on Windows EINVAL means sometimes that we should keep trying:
   #  
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winsock/winsock/connect_2.asp

If you follow this back, you find the rationale referred to:

Until the connection attempt completes on a nonblocking socket, all
subsequent calls to connect on the same socket will fail with the
error code WSAEALREADY, and WSAEISCONN when the connection
completes successfully. Due to ambiguities in version 1.1 of the
Windows Sockets specification, error codes returned from connect
while a connection is already pending may vary among implementations.
As a result, it is not recommended that applications use multiple calls
to connect to detect connection completion. If they do, they must be
prepared to handle WSAEINVAL and WSAEWOULDBLOCK error values the
same way that they handle WSAEALREADY, to assure robust operation.

This tracks with what I've seen in the past (this code was added a long
while back - 6 line fix - IIRC by a colleague, 4 years ago? :-).
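
If you want to see how that advice translates into code (just a sketch,
separate from what TCPClient actually does), the classification looks
roughly like this:

    import errno
    import socket

    STILL_CONNECTING = [getattr(errno, name) for name in
                        ("EINPROGRESS", "EALREADY", "EWOULDBLOCK",
                         "WSAEINVAL", "WSAEWOULDBLOCK", "WSAEALREADY")
                        if hasattr(errno, name)]
    CONNECTED = [getattr(errno, name) for name in
                 ("EISCONN", "WSAEISCONN") if hasattr(errno, name)]

    def connect_status(sock, addr):
        try:
            sock.connect(addr)
            return "connected"
        except socket.error, e:
            code = e.args[0]
            if code in CONNECTED:
                return "connected"
            if code in STILL_CONNECTING:
                return "in progress"
            raise   # anything else is a genuine failure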

The underlying issue you keep banging up against is that, in reality,
sockets in blocking mode don't provide for timeouts. For example, the code
in the Python socket module that you're seeking to use looks like this:

 http://pastebin.com/m1e2171fd

In order for that code to work, the underlying code does this:

    if (defaulttimeout >= 0.0)
        internal_setblocking(s, 0);

or for sock_settimeout, this line:

    internal_setblocking(s, timeout < 0.0);

The upshot being this: if you set a timeout, internally Python changes the
socket to non-blocking. Then any operation that can fail - for example
connection - results (on Windows) in entering a select statement to
check when the operation would be completed - cf:

    res = select(s->sock_fd+1, NULL, &fds, &fds_exc, &tv);

That &tv is the actual timeout you set originally, and then it's blocking
on select. Now the way we'd do this properly in Kamaelia is to get the
TCPClient to get access to the Selector service, and to ask the selector
service to let the TCPClient know when the socket is ready to read.

BUT in the error case - which is the case we're dealing with - the Selector
would never re-awaken the TCPClient. So waiting for the selector would need
to have a timeout mechanism itself, ie fundamentally we would still need
the timeout mechanism I added.
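
In sketch form (assuming the Selector accepts a newWriter request analogous
to the removeWriter/removeReader requests discussed elsewhere in this
thread), that would look something like:

    waitTill = time.time() + self.connect_timeout
    self.send(newWriter(self, ((self, "writeReady"), sock)),
              "_selectorSignal")
    while not self.dataReady("writeReady"):
        if time.time() >= waitTill:
            # The Selector never wakes us for a failed connect,
            # so we still need our own timeout here.
            self.howDied = "timeout"
            raise Finality
        yield 1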

That's the sort of thing that twisted implements with deferreds, and in
threaded components you can implement both aspects with self.pause().

The nice thing though about doing that with self.pause() is that it then
would give self.pause() the same sort of semantics for generator components
as it does for threaded components.

But beyond that, when we fail we also need to tell the selector that we no
longer want it to notify us, since we're done using it.

That's easy enough to do btw, but all of this is a significant complexity
jump over what we currently have, which is why I've initially gone for the
simpler timeout mechanism (ie to get something working correctly before
optimising it - which is what this would be).

This is admittedly a little complex, not something most users ever have to
deal with, and fundamentally an optimisation really. However it makes sense
to address it now since there's a real case that needs it fixed :-)

Oh, as for this which has come in as I was typing:
>  I don't understand why the TCPClient code only
> sees an infinite set of WSAEINVALIDs.

Sheer speed. As fast as you can type, you're unlikely to be able to repeat
anything manually faster than once every 8 to 20 ms (best case). That's at
least 2-3 orders of magnitude slower than Python will repeat it.

Please bear in mind that the code you're critiquing does in fact critique 
itself:
"Rather brute force".

My personal view on dealing with this is this:
* Get the code working as it should - ie allow timeouts to occur.
* Get the code working such that it doesn't suck your CPU while it's
   doing so (ie performance improvement)
* Then refactor that "not sucking CPU" code into nicer, more readable,
   more reusable, modular code.

I think we've got to stage 1, and are now on stage 2. 

:)

Michael
-- 
http://yeoldeclue.com/blog
http://twitter.com/kamaelian
http://www.kamaelia.org/Home


[kamaelia-list] Re: Windows socket errors and timeouts

2009-03-03 Thread Steve

On Mar 3, 2:11 pm, Steve  wrote:
> What should happen in my opinion is that at some point we should go
> from getting WSAEINVALID back to a single EWOULDBLOCK when the first
> connection is refused and then a brand new connection is started.  But
> we never do see that.  That transition would be a good place to
> realize we had a connection refused.  And even then we'd still never
> know if the connection was silently ignored.

This behavior that I expected for a connection refused is in fact what
I see if I run the python interpreter and manually try to repeatedly
connect to a port that refuses the connection.  Depending on how fast my
fingers are, I see 1-2 WSAEINVALIDs and then a wouldblock as a new
connection is tried.  I don't understand why the TCPClient code only
sees an infinite set of WSAEINVALIDs.

--Steve




[kamaelia-list] Re: Windows socket errors and timeouts

2009-03-03 Thread Steve

Ok, I think I'm starting to get what's going on here on my windows
box:
1) We try to connect non-blocking
2) There is no immediate error so an EWOULDBLOCK is thrown
3) The connection attempt magically continues in the background
4) We loop around and try to connect again
5) The process may still be connecting.  In a sane OS we would get
EALREADY.

Now instead of EALREADY we get WSAEINVALID.  Microsoft says we have
to treat that like EWOULDBLOCK or EALREADY.  And in fact I can see one
WSAEINVALID in a good connection.  But for a refused connection, we
just continually get WSAEINVALIDs.

What should happen in my opinion is that at some point we should go
from getting WSAEINVALID back to a single EWOULDBLOCK when the first
connection is refused and then a brand new connection is started.  But
we never do see that.  That transition would be a good place to
realize we had a connection refused.  And even then we'd still never
know if the connection was silently ignored.

The real problem is we need a way to set a timeout on the connection
attempt in the background without making it blocking.  Should we be
making these socket connections from a separate thread so that it
could block and use timeouts?
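
Something like this is what I have in mind (a very rough sketch; the queue
hand-off back to the component is made up, and the host/port/timeout values
are placeholders):

    import socket
    import threading
    import Queue

    def connect_in_thread(host, port, timeout, results):
        # Blocking connect with a bounded timeout, run off the main
        # scheduler so nothing else is held up while it waits.
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            s.connect((host, port))
            s.settimeout(0)           # back to non-blocking for data
            results.put(("connected", s))
        except socket.error, e:
            results.put(("failed", e))

    results = Queue.Queue()
    threading.Thread(target=connect_in_thread,
                     args=("127.0.0.1", 1500, 5.0, results)).start()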

--Steve



On Mar 3, 12:30 pm, Steve  wrote:
> Michael,
>
> Thank you for your detailed explanation.  I did see what you checked
> in before I started looking at the timeout capabilities.  I have to
> think there is a better way still.  I've been doing some testing and
> here is what I have found.
>
> On windows, with the timeout = 0, all you get are the WSAEINVALID
> errors which should tell us something.  Microsoft says this means:
> "Some invalid argument was supplied (for example, specifying an
> invalid level to the setsockopt function). In some instances, it also
> refers to the current state of the socket—for instance, calling accept
> on a socket that is not listening."
>
> I understand your concern about using a timeout = 10s, but please
> humor me for a moment.
>
> On windows, with settimeout(1), I still get an infinite loop because
> of this code:
>
>          elif hasattr(errno, "WSAEINVAL"):
>             if errorno == errno.WSAEINVAL:
>                 print 'WSAEINVAL'
>                 # If we are on windows, this will be the error
>                 # instead of EALREADY above.
>                 assert(self.connecting==1)
>                 return False
>          # Anything else is an error we don't handle
>          else:
>             print 'else'
>             raise socket.msg
>
> You see, we know we're on windows, but all we check and handle is the
> WSAEINVAL.  So if I change the code to catch _other_ windows errors
> like so:
>
>          elif hasattr(errno, "WSAEINVAL"):
>             if errorno == errno.WSAEINVAL:
>                 print 'WSAEINVAL'
>                 # If we are on windows, this will be the error
>                 # instead of EALREADY above.
>                 assert(self.connecting==1)
>                 return False
>             else:
>                 # On windows and unsuccessful
>                 raise socket.msg
>          # Anything else is an error we don't handle
>          else:
>             print 'else'
>             raise socket.msg
>
> Now on windows, with settimeout(1), with the above code it Does The
> Right Thing and raises a 10061 - WSAECONNREFUSED.  Yes!
>
> Now, I understand not wanting to block the system, so I tried
> settimeout(0.01).  This is interesting because it catches a
> socket.error exception, but then throws an exception trying to do
> this line:
> (errorno, errmsg) = socket.msg.args.  So I'm about to try to dig out
> the details from the socket.error that is caught to see if it is
> something we can handle.
>
> Bottom line is unless you set some timeout, windows machines won't get
> sane connection refused errors.  And I can understand an aversion to
> 10 seconds, but what about 1 second?  A half a second?  What is the
> shortest we can block and still get (somehow) a useful error?  I'm
> going to keep experimenting.
>
> --Steve



[kamaelia-list] Windows socket errors and timeouts

2009-03-03 Thread Steve

Michael,

Thank you for your detailed explanation.  I did see what you checked
in before I started looking at the timeout capabilities.  I have to
think there is a better way still.  I've been doing some testing and
here is what I have found.

On windows, with the timeout = 0, all you get are the WSAEINVALID
errors which should tell us something.  Microsoft says this means:
"Some invalid argument was supplied (for example, specifying an
invalid level to the setsockopt function). In some instances, it also
refers to the current state of the socket—for instance, calling accept
on a socket that is not listening."

I understand your concern about using a timeout = 10s, but please
humor me for a moment.

On windows, with settimeout(1), I still get an infinite loop because
of this code:

         elif hasattr(errno, "WSAEINVAL"):
            if errorno == errno.WSAEINVAL:
                print 'WSAEINVAL'
                # If we are on windows, this will be the error
                # instead of EALREADY above.
                assert(self.connecting==1)
                return False
         # Anything else is an error we don't handle
         else:
            print 'else'
            raise socket.msg

You see, we know we're on windows, but all we check and handle is the
WSAEINVAL.  So if I change the code to catch _other_ windows errors
like so:

         elif hasattr(errno, "WSAEINVAL"):
            if errorno == errno.WSAEINVAL:
                print 'WSAEINVAL'
                # If we are on windows, this will be the error
                # instead of EALREADY above.
                assert(self.connecting==1)
                return False
            else:
                # On windows and unsuccessful
                raise socket.msg
         # Anything else is an error we don't handle
         else:
            print 'else'
            raise socket.msg

Now on windows, with settimeout(1), with the above code it Does The
Right Thing and raises a 10061 - WSAECONNREFUSED.  Yes!

Now, I understand not wanting to block the system, so I tried
settimeout(0.01).  This is interesting because it catches a
socket.error exception, but then throws an exception trying to do
this line:
(errorno, errmsg) = socket.msg.args.  So I'm about to try to dig out
the details from the socket.error that is caught to see if it is
something we can handle.
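
For that last problem, I'm thinking of something along these lines (sketch
only; some socket.error variants, such as socket.timeout, seem to carry
just a message rather than an (errno, message) pair):

    try:
        sock.connect((self.host, self.port))
    except socket.error, e:
        if e.args and isinstance(e.args[0], int):
            errorno = e.args[0]
            errmsg = e.args[1] if len(e.args) > 1 else str(e)
        else:
            errorno, errmsg = None, str(e)   # e.g. a bare timeout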

Bottom line is unless you set some timeout, windows machines won't get
sane connection refused errors.  And I can understand an aversion to
10 seconds, but what about 1 second?  A half a second?  What is the
shortest we can block and still get (somehow) a useful error?  I'm
going to keep experimenting.

--Steve



[kamaelia-list] Re: Bug in SingleShotHTTPClient

2009-03-03 Thread Michael Sparks

On Tuesday 03 March 2009 19:26:57 Steve wrote:
> Michael,
>
> I was reviewing the TCPClient.py code.  

Many thanks for this. As a preface to what follows, I've put a different
implementation into TCPClient - in line with my comments yesterday. The
reason is to allow TCPClient to continue to not cause the system to freeze.

The cost at present is higher CPU usage than would be ideal, but it's during
a connection phase, so your example usage (making many many outbound
connections simultaneously) is an edge case, which we can come back to
and optimise. (personal general viewpoint: get it working, make it work
correctly[1], then optimise)

[1] eg handle edge cases "you" (me in this case) haven't considered :)

> In the runClient method you have:
>
>          sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM); yield 0.3
>          self.sock = sock # We need this for shutdown later
>          try:
>             sock.setblocking(0); yield 0.6
>             try:
>                startConnect = time.time()
>                while not self.safeConnect(sock,(self.host, self.port)):

Correct. For some history as to why it uses the "raise Finality" structure,
see here:
* http://mail.python.org/pipermail/python-list/2003-June/207723.html
  & 
http://mail.python.org/pipermail/python-list/2003-June/thread.html#207723

> And in safeConnect you have:
>
>          sock.connect(*sockArgsList); # Expect socket.error: (115, 'Operation now in progress')
>
> In the python socket module docs I see:
>
>     s.setblocking(0) is equivalent to s.settimeout(0)
>
> and
>
>     Note that the connect() operation is subject to the timeout
> setting, and in general it is   recommended to call settimeout()
> before calling connect().

Note, this code form is due to me being used to coding sockets stuff
in C, C++ & perl previously where socket calls don't contain any timeout.

Indeed, if you want an idea of the complexity of implementing timeouts
normally, it's perhaps worth looking at this page:
* http://tinyurl.com/bu8tz2

(scroll down to just past 1/2 way - "There are three ways to place a timeout
on an I/O operation involving a socket.")

The timeout you're referring to here is actually implemented inside
Python/Modules/socketmodule.c, and behind the scenes actually
uses either poll or select (depending on platform) in a blocking mode
in order to "do the right thing". (do the right thing being subjective
here relative to blocking sockets)

However in this case, setting the timeout to non-zero eventually ends up
with this piece of C code being executed:

    tv.tv_sec = (int)s->sock_timeout;
    ...
    if (writing)
        n = select(s->sock_fd+1, NULL, &fds, NULL, &tv);
    else
        n = select(s->sock_fd+1, &fds, NULL, NULL, &tv);

This turns into a blocking call, which then hangs the system. (Which is why
sock.setblocking(0) has to set the timeout to 0 as well :)

> So I get that you want the socket operations to be non-blocking.  And
> non-blocking operations should fail if they can't complete rather than
> block.  But the connect operation is using a timeout of zero because
> of the blocking setting.  And it seems like the problem I'm having on
> windows is that the connection attempt never times out.

This conflates the two issues really. The real issue is simply that I
never thought of putting timeout handling into the TCPClient code, nor
where it should go.

> So, would it be reasonable to:
> 1) setblocking(0) in runClient as it is today
> 2) In safeConnect, sock.settimeout(20)
> 3) sock.connect() as it is today
> 4) sock.settimeout(0) after the connection
>
> It seems like this would allow you to have a timeout honored for the
> connect operation without impacting non-blocking data operations post-
> connect.

From the above you should see why this isn't reasonable, but in case it
isn't clear, suppose you start 10 TCPClients as follows:

    for x in range(10):
        Pipeline( TCPClient(dest[x],port[x], connect_timeout=20),
                  OutputHandler() ).activate()

And suppose every single one is blocked. Rather than this timing out
in about 20 seconds (as it would now given the fix just put in), it would
effectively hang the system for 200 seconds, until all 10 connections time
out - effectively serialising the connection attempts. 1000 failed/filtered
consecutive connections in this manner would take 20,000 seconds or
just over 5 1/2 hours :)

Fundamentally that's why I've not taken this approach here :)

The fix put in, which solves the immediate issue, is here:
* http://tinyurl.com/covwp6


Michael.
-- 
http://yeoldeclue.com/blog
http://twitter.com/kamaelian
http://www.kamaelia.org/Home


[kamaelia-list] Re: Bug in SingleShotHTTPClient

2009-03-03 Thread Steve

Michael,

I was reviewing the TCPClient.py code.  In the runClient method you
have:

         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM); yield 0.3
         self.sock = sock # We need this for shutdown later
         try:
            sock.setblocking(0); yield 0.6
            try:
               startConnect = time.time()
               while not self.safeConnect(sock,(self.host, self.port)):

And in safeConnect you have:

         sock.connect(*sockArgsList); # Expect socket.error: (115, 'Operation now in progress')

In the python socket module docs I see:

s.setblocking(0) is equivalent to s.settimeout(0)

and

Note that the connect() operation is subject to the timeout
setting, and in general it is   recommended to call settimeout()
before calling connect().

So I get that you want the socket operations to be non-blocking.  And
non-blocking operations should fail if they can't complete rather than
block.  But the connect operation is using a timeout of zero because
of the blocking setting.  And it seems like the problem I'm having on
windows is that the connection attempt never times out.

So, would it be reasonable to:
1) setblocking(0) in runClient as it is today
2) In safeConnect, sock.settimeout(20)
3) sock.connect() as it is today
4) sock.settimeout(0) after the connection

It seems like this would allow you to have a timeout honored for the
connect operation without impacting non-blocking data operations post-
connect.
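
Concretely, the change I'm picturing is roughly this (a sketch only; the
helper name is made up and error handling is left to the existing
safeConnect logic):

    def connectWithTimeout(sock, host, port, timeout=20):
        sock.settimeout(timeout)        # bounded, but blocking, connect
        try:
            sock.connect((host, port))
        finally:
            sock.settimeout(0)          # back to non-blocking for data ops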

--Steve




[kamaelia-list] Re: Small mod to console example doesn't work?

2009-03-03 Thread Gloria W

> Ah, I forgot the original context you were working in!
>   
I did too :) I didn't communicate it clearly, I think. I was trying to 
accomplish too much too quickly.
This is exactly what I hoped for, a solution that would not force me to 
disconnect and reconnect.
Let me give this a go and get back to you tonight  EST, which is your 
morning tomorrow.
Thanks again!
Gloria
> All the suggestions I've made previously could be considered as trying to
> work around this problem: that the ConsoleReader component not only reads
> user input, but also generates a prompt for it (which it writes directly
> to the console (standard output)).
>
> It sounds like what you actually are after building is a simple chat  
> protocol - abstracted away from having to know about where its input is  
> coming from or where its output is going to. The ConsoleReader component  
> mixes a bit of both - it handles the sourcing of input, but also generates  
> some output itself (the prompt) and worse still, it puts this output onto  
> the console directly! :-)
>
> So what you're actually after is what you originally had:
>
>      Pipeline(
>          SourceOfInput(),
>          MyComponent(),
>          DestinationForOutput()
>      )
>
> So for testing your chat protocol component MyComponent - we can still use
> ConsoleReader and ConsoleEchoer. We just have to ignore the '>>>' prompt
> generated by the ConsoleReader and pretend it's not there.
>
> When you plug MyComponent into the ConnectedServer (also known as
> SimpleServer in more recent subversion copies of Kamaelia iirc) as its
> protocol, it gets wired up in much the same way as above - data coming
> from the client will get sent to its inbox and data sent out of its outbox
> will be sent back to the client.
>
> So all of the Seq stuff is not strictly necessary. All you need to do is
> encode the sequential behaviour into the main() generator in MyComponent.
> For example something along the lines of this (note the shutdown handling
> makes it a little messier than it might otherwise be):
>
>      def main(self):
>          mustStop = False
>
>          # 1) generate login prompt
>          self.send("Enter username:", "outbox")
>
>          # 2) wait for response
>          mustStop = self.doShutdown()
>          while not mustStop and not self.dataReady("inbox"):
>              self.pause()
>              yield 1
>              mustStop = self.doShutdown()
>
>          if mustStop:
>              return
>
>          username = self.recv("inbox")
>
>          # 3) welcome
>          self.send("Welcome "+username+"! Begin chatting!")
>
>          # 4) do chatting
>          ...
>
> One other thing to consider is that the ConsoleReader specifically buffers  
> input until a whole line is received from the user, before outputting that  
> as a single string message to its "outbox" outbox. Whereas the socket  
> connection may send fragments of strings before a whole line has been  
> entered. A component such as this one can sort that out for you:  
> http://www.kamaelia.org/Components/pydoc/Kamaelia.Visualisation.PhysicsGraph.chunks_to_lines.html
>
> Simply pipeline it with MyComponent and make that the protocol that's given
> to the ConnectedServer:
>
>      def myProtocol():
>          return Pipeline(
>              chunks_to_lines(),
>              MyComponent()
>          )
>
>      ...
>
>      ConnectedServer(protocol=myProtocol, port=WHATEVER).run()
>
>
>
> Matt
>
>
> On Tue, 03 Mar 2009 04:35:07 -, Gloria W  wrote:
>
>   
>> I got the carousel example working, but it sends/receives from/to the
>> original console, not any new client which connects. I realized I can
>> reuse the ConsoleEchoer, but somehow make it send a string to a client
>> upon connect, and have the server process the string instead of echoing
>> it to the server's console.
>> I tried this, but all proposed solutions so far write to the console,
>> not a newly connected client. Maybe I am missing something, but I think
>> I am close.
>>
>>  Aside from this, I was wondering if a carousel or seq would be
>> appropriate to solve my problem.
>>
>> I am trying to simulate what happens in crude chat systems such as IRC.
>> For simplicity, in this example the user connects, is prompted for a
>> name, and then can chat freely.
>> If I have two pipelines - one that detects the client connect, prompts for
>> a name, receives the response, and then hands control over to another
>> pipeline - doesn't this require a disconnect/reconnect with the client?
>> If so, this would be bad, because the user would have to re-telnet in,
>> would not be "logged in", and would have to start again.
>>
>> So I guess if I get the data flow working, I can write this logic in one
>> pipeline, and reuse the same connection, assuming the first thing a user
>> types after connecting would be the name. Or maybe there is a clever way
>> to hand off a connected pipeline to another utility, so that my flow
>> control

[kamaelia-list] Re: Installation problem under Windows

2009-03-03 Thread Michael Sparks

On Monday 02 March 2009 18:38:06 Matthew Miles Clark wrote:
> Thanks for taking a look at this.  All in all I'm very impressed by
> Kamaelia and look forward to working with it.

Glad to hear that. :-)

Incidentally, I've looked at this:

> data_files=[ ('share/kamaelia', ['App/kamaelia_logo.png']) ],
>
> and it seemed to work (it places the logo under the egg directory in
> share/kamaelia).

And it seems that I can get at the logo this way:
>>> os.listdir(os.path.join(os.path.join(Kamaelia.__file__[:Kamaelia.__file__.rfind("Kamaelia/")], "share"), "kamaelia"))
['kamaelia_logo.png']

Which seems OK. I want to check that on Mac OS X and Windows before
merging that though. (it's nice to be able to pull the logo into some
components)
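
A slightly more readable variant of the same lookup (a sketch; same
assumption about the share/kamaelia layout inside the egg, and the same
caveat about the "Kamaelia/" path separator on Windows applies):

    import os
    import Kamaelia

    base = Kamaelia.__file__[:Kamaelia.__file__.rfind("Kamaelia/")]
    logo = os.path.join(base, "share", "kamaelia", "kamaelia_logo.png")
    print os.path.exists(logo)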

Regards,


Michael.
-- 
http://yeoldeclue.com/blog
http://twitter.com/kamaelian
http://www.kamaelia.org/Home
