Re: [PD] pd and tcp: what to do against crashes?

2009-03-05 Thread mrz
hi,
i can't give any hints here, but..
i just wanna thank you guys a lot for getting your hands dirty to solve this
problem while mine stay clean.
This problem brought us (in the past ;) ) a lot of interruptions in exciting
netpd-jams.

all the best,
moritz


On Wed, Mar 4, 2009 at 7:13 PM, Roman Haefeli reduzie...@yahoo.de wrote:

 On Wed, 2009-03-04 at 09:14 -0500, Martin Peach wrote:

   martin, would you mind implementing similar changes to [tcpclient] as
   well?
  
  
 
  I'll do that today if I have time.

 yo... no hurry.. but it seems you already did it. many thanks.

 those changes to [tcpserver] and [tcpclient] enable me to solve a _lot_
 of issues with netpd (which is currently still based on
 [netclient]/[netserver]). some of them were very long-standing problems,
 such as server hangs, and it also took me a long time to understand the
 underlying causes of those problems. i am very satisfied to see that
 the current problems can be addressed now.

 i think there is nothing left to be said for now. it's definitely time
 to get my hands dirty again on the netpd-server and other related
 stuff.

 many thanks for your cooperation.

 roman





Re: [PD] pd and tcp: what to do against crashes?

2009-03-04 Thread Martin Peach
Roman Haefeli wrote:
 On Wed, 2009-03-04 at 08:08 +0900, PSPunch wrote:
 
 The former sounds like it would introduce massive overhead from TCP
 headers, especially when we are speaking of sending amounts of data that
 may flood the socket's send buffer. In the latter case, the OS may
 indicate that bytes entered the socket while they were actually only
 buffered and never sent because the connection broke.

 
 if i interpret my observations correctly, this is not a big deal, since
 not every message sent to [tcpserver] will be transmitted in its own tcp
 frame. at least on my box (ubuntu 8.04), messages are sent separately if
 there is a time interval of at least ~10ms between them. messages sent
 with shorter intervals are concatenated into one frame.
 having said this, i have to add that the above is only true if the number
 of elements of a list on the receiving side really represents the frame
 size. for instance, when i unplug the ethernet cable and fill the buffer
 on the sender side, then plug the cable back in, i get one big list with
 ~5000 elements on the receiving side (don't try to print that one, it
 will hang pd).

TCP is supposed to use the Nagle algorithm, which sends the first byte 
as soon as it is put into the buffer, then sends everything in its 
buffer whenever the other end acknowledges the previous message. That's 
the most efficient way to use packets with things like telnet, where 
someone is typing live at the keyboard.
The OS takes care of this and there is no way to control it except to 
switch it off and have every byte sent immediately.
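
If you did want to switch it off, it is just the standard setsockopt() call 
on the connected socket -- a minimal sketch of that call, not necessarily 
what [tcpserver] does:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* disable the Nagle algorithm on a connected socket, so each send()
   is pushed out immediately instead of being coalesced */
static int set_nodelay(int fd)
{
    int on = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}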

Martin


Re: [PD] pd and tcp: what to do against crashes?

2009-03-04 Thread Martin Peach
Roman Haefeli wrote:
 On Wed, 2009-03-04 at 00:45 +0100, Roman Haefeli wrote:
 how do i know, when the [tcpserver] socket is ready to transmit another
 byte? do i have to nag it every ms with a message? if i go the
 BYTE-AT-A-TIME route, the interval would even need to be slower, if
 higher throughput should be achieved. is there any strategy to avoid too
 much overhead?

 
 having thought another two minutes about it, i think i can answer my own
 question: i don't need to drip every byte out at an interval, but can just
 fill the buffer completely in zero logical time, then wait a few
 milliseconds, then do it again. depending on the wait time, the
 connection bandwidth and the buffersize, the buffer will be filled again
 before it is completely empty. this way the maximum available bandwidth
 can be exploited, when necessary, without having to pester
 [tcpserver] too much with 'buffer still full?' messages.
 

You could also try setting the buffer size the same as the message 
length for each outgoing message. Then the buffer wouldn't consume 
thousands of bytes before it stopped.


 martin, would you mind implementing similar changes to [tcpclient] as
 well?
 
 

I'll do that today if I have time.

Martin




Re: [PD] pd and tcp: what to do against crashes?

2009-03-04 Thread Roman Haefeli
On Wed, 2009-03-04 at 09:14 -0500, Martin Peach wrote:

  martin, would you mind implementing similar changes to [tcpclient] as
  well?
  
  
 
 I'll do that today if I have time.

yo... no hurry.. but it seems you already did it. many thanks. 

those changes to [tcpserver] and [tcpclient] enable me to solve a _lot_
of issues with netpd (which is currently still based on
[netclient]/[netserver]). some of them were very long-standing problems,
such as server hangs, and it also took me a long time to understand the
underlying causes of those problems. i am very satisfied to see that
the current problems can be addressed now.

i think there is nothing left to be said for now. it's definitely time
to get my hands dirty again on the netpd-server and other related
stuff. 

many thanks for your cooperation.

roman





Re: [PD] pd and tcp: what to do against crashes?

2009-03-03 Thread PSPunch

 i really wonder, how other projects handle that. i mean, if several
 people download a big file from apache, then a disappearing client
 doesn't interfere with the other clients. i guess, in apache it is
 solved by using threads. when using threads, one single thread doesn't
 necessarily need to know about the buffer state, because it could be
 blocked without harm to the other apache children. so it can try to send
 as much data as possible.
 is using threads the _only_ solution to deal with that problem? i guess,
 it would overcomplicate the programming of [tcpserver], but you sure
 know better...

From my understanding, the alternative to using multiple
threads/processes would be to set the socket to non-blocking and
implement a Pd object that buffers the messages requested to be sent.
It would then retry sending whatever the OS previously rejected.

This will also involve giving the object a timer, to declare a fault and
close the socket if no bytes seem to have traveled over a certain
period of time.

This probably calls for some decision making in the design of the Pd
object, such as how frequently to retry sending the bytes to the socket.
I was thinking a [bang] may be flexible. Whether it be triggered by
[metro] or [bang~], the one designing the patch would have the most
control over how to deal with the results of the transfer.
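
Roughly what I have in mind at the C level (just the standard non-blocking
calls, not actual [tcpserver] code, and the function names are made up):

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/types.h>

/* switch the socket to non-blocking mode once, after it is connected */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* try to flush part of the object's own buffer; returns the number of
   bytes the OS accepted (0 means "try again later"), or -1 on error */
static ssize_t try_send(int fd, const char *buf, size_t len)
{
    ssize_t n = send(fd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;  /* kernel send buffer is full, keep the data queued */
    return n;      /* may be less than len: keep the rest for the retry */
}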


I found this article helpful to gain basic understanding on which layer
of the OS or library is responsible for the actions happening beneath
the covers.

http://www.amk.ca/python/howto/sockets/


I have yet to learn and may be wrong.
Please excuse me if this has already been done.

--
David Shimamoto



Re: [PD] pd and tcp: what to do against crashes?

2009-03-03 Thread Martin Peach
PSPunch wrote:
 From my understanding, the alternative to using multiple
 threads/processes would be to set the socket to non-blocking and
 implement a Pd object that buffers the messages requested to be sent.
 Then attempts to retry sending what the OS once rejected should be made.
 

It seems like that would always end up blocking something, depending on
the reason for the inability to send the messages. If the other end has
crashed, the object would keep trying to send for hours and its buffer
would expand to fill up all available memory.
The way I do it now in [tcpserver] is to send the messages one byte at a 
time, first using a select() call to verify that each byte can be sent 
without blocking. This is similar to using non-blocking sockets but 
doesn't involve timers.
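
In outline it is just a zero-timeout select() on the write set before each
single-byte send -- simplified here, the code in svn does more bookkeeping:

#include <sys/select.h>
#include <sys/socket.h>

/* poll: returns 1 if fd can accept at least one byte right now */
static int writeable_now(int fd)
{
    fd_set wfds;
    struct timeval tv = {0, 0};  /* zero timeout: don't wait at all */

    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    return select(fd + 1, NULL, &wfds, NULL, &tv) > 0;
}

The send loop then only calls send(fd, &byte, 1, 0) while writeable_now()
keeps returning 1, and reports how many bytes actually went out.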

 This will also involve giving the object a timer to call it a fault and
 close the socket if no bytes seemed to have traveled over a certain
 period of time.

This can be done in the patch, so the user can decide what to do about 
unsendable messages. They can implement a timer and know what state it 
was in.

 
 This probably calls for some decision making in the design of the Pd
 object, such as how frequently to retry sending the bytes to the socket.
 I was thinking a [bang] may be flexible. Whether it be triggered by
 [metro] or [bang~], the one designing the patch would have the most
 control over how to deal with the results of the transfer.
 

Yes, you can do all that in the patch that uses [tcpserver]. I don't 
think the object itself needs to be overly complex. In my experience, 
'user-friendly' usually means 'opaque', 'inscrutable', 'why TF is it 
doing that?'

Martin



Re: [PD] pd and tcp: what to do against crashes?

2009-03-03 Thread PSPunch

Hi Martin,


 From my understanding, the alternative to using multiple
 threads/processes would be to set the socket to non-blocking and
 implement a Pd object that buffers the messages requested to be sent.
 Then attempts to retry sending what the OS once rejected should be made.
 
 It seems like that would always end up blocking something, depending on 
 the reason for the inability to send the messages. If the other end has 
 crashed the object would be trying sending for hours and its buffer 
 would expand to fill up all available memory.

Infinite retries will not occur if a timeout is set for [tcpserver]
to decide that the other end has crashed. This timer would not wait and
block, but would be incremented occasionally. The person programming the
patch can wait for output from the object to confirm that the previous
transfer has completed, if that is of concern to them.

Yes, the data WILL be blocked in the sense that it is buffered in
the object, but without causing a pause in the process. (Whether the
socket is set to non-blocking or select() is used with a timeout of zero
was not my concern.)

Users will also benefit: while they still have to pay attention to broken
connections, they can be less concerned about how to resend failed
packets over a socket that is not broken but simply delayed.

...Doesn't allowing this sort of usage make your objects more compatible
with the previous design?


 The way I do it now in [tcpserver] is to send the messages one byte at a 
 time, first using a select() call to verify that each byte can be sent 
 without blocking. This is similar to using non-blocking sockets but 
 doesn't involve timers.

Having said all of the above, this point was my main concern.

With your new design, does the OS also flush data a byte at a time over
the network, or does it buffer it for a reasonable duration?

The former sounds like it would introduce massive overhead from TCP
headers, especially when we are speaking of sending amounts of data that
may flood the socket's send buffer. In the latter case, the OS may
indicate that bytes entered the socket while they were actually only
buffered and never sent because the connection broke.


 This will also involve giving the object a timer to call it a fault and
 close the socket if no bytes seemed to have traveled over a certain
 period of time.
 
 This can be done in the patch, so the user can decide what to do about 
 unsendable messages. They can implement a timer and know what state it 
 was in.
 

 This probably calls for some decision making in the design of the Pd
 object, such as how frequently to retry sending the bytes to the socket.
 I was thinking a [bang] may be flexible. Whether it be triggered by
 [metro] or [bang~], the one designing the patch would have the most
 control over how to deal with the results of the transfer.

 
 Yes, you can do all that in the patch that uses [tcpserver]. I don't 
 think the object itself needs to be overly complex. In my experience, 
 'user-friendly' usually means 'opaque', 'inscrutable', 'why TF is it 
 doing that?'

I agree with you on this point.


All mentioned out of curiosity..
I don't know enough about writing externals to implement what I suggested
myself, and I do truly respect your work.

Perhaps I should just pull out a packet analyzer and confirm what goes
on myself before nagging about it.

--
David Shimamoto




Re: [PD] pd and tcp: what to do against crashes?

2009-03-03 Thread Roman Haefeli
On Mon, 2009-03-02 at 18:51 -0500, Martin Peach wrote:
 Roman Haefeli wrote:
  On Sun, 2009-03-01 at 22:53 -0500, Martin Peach wrote:
 
  So I made [tcpserver] send the messages one byte at a time. This latest 
  version should not block, since it adds only one element to the buffer 
  for each select call that says the buffer is still writeable.
  
  can you tell me something about how to use it correctly? i just compiled
  the newest version and still could prevent it from blocking pd. before
  the blocking happens, i do _not_ get any message from the new outlet
  from [tcpserver], such as 'sent 0'. so what happens is still, that i
  send messages to it until it blocks. is it different on windows? do you
  mind sending me a windows binary, if it is? 
  is there anything i can do in terms of testing on linux?
  
 
 Sorry, there was a bug in it so it was staying in the send loop even 
 though it couldn't send. I fixed it in the latest svn.

great news with great results! it works for me as well. no blocking
anymore, and feedback about what could be sent: that is what i needed.
thanks for your work.

  When I use it, if 
 I set the buffer size to 10 I can send 4 messages of 3 bytes after 
 unplugging the cable, but the last 'sent' says that only 2 bytes were
 transmitted. Subsequent attempts give 0. After a few seconds I also get 
 a message saying the connection was terminated. It should work 
 identically on linux.

same behaviour here as well, with the server on linux and the client on
OS X. the only difference: the connection is not automatically terminated
(or probably i have to wait longer than 10min). windows seems to be more
keen to close 'unused' connections. just for the record: actually i
wanted to test on only one computer running ubuntu with windows in a
virtual box, but it didn't work, because whenever i shut down the
network device in virtualbox, the connection was terminated. so i had to
physically use two computers.


i have some questions about the usage of [tcpserver]:

when sending lists of floats to it, tracking what is sent and what is
not is not trivial, since you have to count the elements of each message
'going out', so that you can compare them with the status output of
[tcpserver]. in order to resend omitted parts, you also need to store a
copy of the last sent list. now, since i want to buffer the outgoing
messages anyway, buffering them as lists is impractical and i would much
rather buffer them in an array. programming-wise, it would be much easier
to send byte-by-byte, i.e. sending a 'client 1 BYTE' message for every
BYTE separately. is that a stupid thing to do? would it cause a lot of
overhead in terms of cpu cycles when transferring big amounts of data
this way?

how do i know when the [tcpserver] socket is ready to transmit another
byte? do i have to nag it every ms with a message? if i go the
BYTE-AT-A-TIME route, the interval would even need to be slower, if
higher throughput should be achieved. is there any strategy to avoid too
much overhead?

thanks
roman
 






Re: [PD] pd and tcp: what to do against crashes?

2009-03-03 Thread Roman Haefeli
On Wed, 2009-03-04 at 00:45 +0100, Roman Haefeli wrote:
 
 how do i know, when the [tcpserver] socket is ready to transmit another
 byte? do i have to nag it every ms with a message? if i go the
 BYTE-AT-A-TIME route, the interval would even need to be slower, if
 higher throughput should be achieved. is there any strategy to avoid too
 much overhead?
 

having thought another two minutes about it, i think i can answer my own
question: i don't need to drip every byte out at an interval, but can just
fill the buffer completely in zero logical time, then wait a few
milliseconds, then do it again. depending on the wait time, the
connection bandwidth and the buffersize, the buffer will be filled again
before it is completely empty. this way the maximum available bandwidth
can be exploited, when necessary, without having to pester
[tcpserver] too much with 'buffer still full?' messages.

martin, would you mind implementing similar changes to [tcpclient] as
well?

roman 







Re: [PD] pd and tcp: what to do against crashes?

2009-03-03 Thread Roman Haefeli
On Wed, 2009-03-04 at 00:45 +0100, Roman Haefeli wrote:


 how do i know, when the [tcpserver] socket is ready to transmit another
 byte? do i have to nag it every ms with a message? if i go the
 BYTE-AT-A-TIME route, the interval would even need to be slower

i either wanted to say 'lower' or 'faster', but not 'slower'.

roman





Re: [PD] pd and tcp: what to do against crashes?

2009-03-03 Thread Roman Haefeli
On Tue, 2009-03-03 at 18:44 +0900, PSPunch wrote:
  i really wonder, how other projects handle that. i mean, if several
  people download a big file from apache, then a disappearing client
  doesn't interfere with the other clients. i guess, in apache it is
  solved by using threads. when using threads, one single thread doesn't
  necessarily need to know about the buffer state, because it could be
  blocked without harm to the other apache children. so it can try to send
  as much data as possible.
  is using threads the _only_ solution to deal with that problem? i guess,
  it would overcomplicate the programming of [tcpserver], but you sure
  know better...
 
 From my understanding, the alternative to using multiple
 threads/processes would be to set the socket to non-blocking and
 implement a Pd object that buffers the messages requested to be sent.
 Then attempts to retry sending what the OS once rejected should be made.

actually, there is no need for another external, if you implicitly meant
an external here. especially since [tcpserver] accepts only floats, or
rather lists of floats, the buffering can easily be done in pd.

 This will also involve giving the object a timer to call it a fault and
 close the socket if no bytes seemed to have traveled over a certain
 period of time.
 
when all of this is done in pd, one has much more control over it. i
guess the time limit might differ a lot depending on the circumstances
and the application. i think the kind of action that needs to be
performed when the other side stops listening also depends on the
specific application of [tcpserver].


 This probably calls for some decision making in the design of the Pd
 object, such as how frequently to retry sending the bytes to the socket.
 I was thinking a [bang] may be flexible. Whether it be triggered by
 [metro] or [bang~], the one designing the patch would have the most
 control over how to deal with the results of the transfer.

yeah, i actually mean the same.


 I found this article helpful to gain basic understanding on which layer
 of the OS or library is responsible for the actions happening beneath
 the covers.
 
 http://www.amk.ca/python/howto/sockets/


thanks. this will be my bed reading for tonight.

roman






Re: [PD] pd and tcp: what to do against crashes?

2009-03-03 Thread Roman Haefeli
On Wed, 2009-03-04 at 08:08 +0900, PSPunch wrote:

 The former sounds like it would introduce massive overhead from TCP
 headers, especially when we are speaking of sending amounts of data that
 may flood the socket's send buffer. In the latter case, the OS may
 indicate that bytes entered the socket while they were actually only
 buffered and never sent because the connection broke.
 

if i interpret my observations correctly, this is not a big deal, since
not every message sent to [tcpserver] will be transmitted in its own tcp
frame. at least on my box (ubuntu 8.04), messages are sent separately if
there is a time interval of at least ~10ms between them. messages sent
with shorter intervals are concatenated into one frame.
having said this, i have to add that the above is only true if the number
of elements of a list on the receiving side really represents the frame
size. for instance, when i unplug the ethernet cable and fill the buffer
on the sender side, then plug the cable back in, i get one big list with
~5000 elements on the receiving side (don't try to print that one, it
will hang pd).

roman







Re: [PD] pd and tcp: what to do against crashes?

2009-03-02 Thread Roman Haefeli
On Sun, 2009-03-01 at 22:53 -0500, Martin Peach wrote:
 Roman Haefeli wrote:
  On Sun, 2009-03-01 at 19:30 -0500, Martin Peach wrote:
   I think the blocking happens because the 
  connection is gone, not because of the buffer overflowing.
  
  i am not sure, if understand what you mean here. what i experience: when
  i overrun the buffer (by plugging out the ethernet cable to the client),
  the pd process of [tcpserver] is completely blocked after having sent
  $BUFFERSIZE bytes. it stays blocked, until i plug in the cable back
  again. the whole content of the buffer is sent after some seconds and
  the server pd instance responds again. this means, that even if the
  connection is not completely lost, a blocking of pd could occur. 
 
 You can send until the buffer fills up, even if it isn't emptying. I 
 think it blocks because it only checks for at least one empty space in 
 the buffer but then sends more than that.

exactly. if the next message is bigger than the remaining free space in
the buffer, pd is blocked. i guess we mean the same thing.

  if the connection to the client is permanently lost, then there is no
  way to make pd responding again, after a buffer overrun occured.
  
  If the connection is present the buffer will be emptying as fast as the 
  network can drain it, so just pacing the writes should work. Trying to 
  write an infinite amount instantly won't.
  
  i can't follow here as well. it's not about trying to send an infinite
  amount of data in zero time. it's about not knowing the current
  connection condition, and because of that risking a drop-out. since the
  connection condition changes all the time, you cannot implement a
  self-adapting system in pd without knowing the internal buffer status
  (empty or not). so even if you send data with a fixed rate, it still
  could happen, that you trigger the buffer overrun.
  
 
 
 Well I guess empty and almost full are the same as long as we stick to 
 single bytes.

ah, yes.. 

 So I made [tcpserver] send the messages one byte at a time. This latest 
 version should not block, since it adds only one element to the buffer 
 for each select call that says the buffer is still writeable.

can you tell me something about how to use it correctly? i just compiled
the newest version and could still make it block pd. before
the blocking happens, i do _not_ get any message from the new outlet
of [tcpserver], such as 'sent 0'. so what happens is still that i
send messages to it until it blocks. is it different on windows? do you
mind sending me a windows binary, if it is?
is there anything i can do in terms of testing on linux?

roman









Re: [PD] pd and tcp: what to do against crashes?

2009-03-02 Thread Martin Peach
Roman Haefeli wrote:
 On Sun, 2009-03-01 at 22:53 -0500, Martin Peach wrote:

 So I made [tcpserver] send the messages one byte at a time. This latest 
 version should not block, since it adds only one element to the buffer 
 for each select call that says the buffer is still writeable.
 
 can you tell me something about how to use it correctly? i just compiled
 the newest version and still could prevent it from blocking pd. before
 the blocking happens, i do _not_ get any message from the new outlet
 from [tcpserver], such as 'sent 0'. so what happens is still, that i
 send messages to it until it blocks. is it different on windows? do you
 mind sending me a windows binary, if it is? 
 is there anything i can do in terms of testing on linux?
 

Sorry, there was a bug in it so it was staying in the send loop even 
though it couldn't send. I fixed it in the latest svn. When I use it, if 
I set the buffer size to 10 I can send 4 messages of 3 bytes after 
unplugging the cable, but the last 'sent' says that only 2 bytes were 
transmitted. Subsequent attempts give 0. After a few seconds I also get 
a message saying the connection was terminated. It should work 
identically on linux.

Martin



Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Roman Haefeli
On Tue, 2009-02-24 at 21:15 +, Martin Peach wrote:
 Roman Haefeli wrote:
 --- Martin Peach martin.pe...@sympatico.ca wrote on Tue, 24.2.2009:
   Roman Haefeli wrote:
On Mon, 2009-02-23 at 21:03 +, Martin
   Peach wrote:
   Yes, I agree. I think a status outlet on the [tcpserver]
   could be extended later to have more messages. Some of the
   stuff that gets printed to the Pd window could go there and
   then it could be handled by the patch instead of the
   'operator'. I don't want to keep adding more
   outlets, so it would output lists with a selector, like
   [comport].
 
 i totally agree, that instead of adding more outlets it would be better to 
 provide additional information on the same outlet with appropriate 
 selector.
 
 
 OK it's done for now, in svn. Each time something is sent, you get a sent 
 message from the status outlet that gives the number of bytes that were 
 actually sent and the client number. Also a [client( message with no data 
 lists the connections using a client selector.
 The send function doesn't wait any more. If the number of bytes sent is 
 zero, you have to try again.
 It all needs to be tested...

thank you for implementing those changes. 

i finally had a chance (and time) to have a closer look, and it turned
out that the additional information is actually no gain and still
doesn't allow programming a non-blocking server.
it seems that the 'sent' message is output when something was _added_
to the 'send' buffer. actually, we would need this message to appear
when something was _removed_ from the buffer, which is when a message
was actually sent.

with the current implementation, the buffer still overruns without
giving me a chance to know this beforehand. whenever i send a message to
a client, i _immediately_ get a 'sent 1 7' message, which i use to
trigger the next message, etc., so the buffer keeps filling and filling.
when the buffer is full, [tcpserver] blocks pd. so currently the
situation is no different from the one before i started this thread.

i don't know how much control you have at c level over what is
happening at tcp level. in order to solve the current issues at
pd level, information about either the current buffer size or the amount
of sent bytes (the number of bytes removed from the buffer) would be
required. i don't know how or if this is possible at all.

i would be interested to read about the c functions providing tcp
capabilities. may i ask where you got your knowledge about those?

roman 






Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Martin Peach
Roman Haefeli wrote:
 On Tue, 2009-02-24 at 21:15 +, Martin Peach wrote:
 Roman Haefeli wrote:
 --- Martin Peach martin.pe...@sympatico.ca wrote on Tue, 24.2.2009:
 Roman Haefeli wrote:
 On Mon, 2009-02-23 at 21:03 +, Martin
 Peach wrote:
 Yes, I agree. I think a status outlet on the [tcpserver]
 could be extended later to have more messages. Some of the
 stuff that gets printed to the Pd window could go there and
 then it could be handled by the patch instead of the
 'operator'. I don't want to keep adding more
 outlets, so it would output lists with a selector, like
 [comport].
 i totally agree, that instead of adding more outlets it would be better to 
 provide additional information on the same outlet with appropriate 
 selector.

 OK it's done for now, in svn. Each time something is sent, you get a sent 
 message from the status outlet that gives the number of bytes that were 
 actually sent and the client number. Also a [client( message with no data 
 lists the connections using a client selector.
 The send function doesn't wait any more. If the number of bytes sent is 
 zero, you have to try again.
 It all needs to be tested...
 
 thank you for implementing those changes. 
 
 i finally had a chance (and time) to have a closer look and it turned
 out, that the additional information is actually no gain and this still
 doesn't allow to programm a non-blocking server.
 it seems, that the 'sent' message is output, when something was _added_
 to the 'send' buffer. actually, we would need this message to appear
 when something was _removed_ from the buffer, which is when a message
 actually was sent. 


Yes, because the actual buffer is hidden from the user. You should get a 
'sent 0' message when it would block though, I don't know why you don't.


 
 with the current implementation,  the buffer still overruns without
 having the chance to know this beforehand. whenever i send a message to
 client, i get _immediately_  a 'sent 1 7' message, which i use to
 trigger the next message, etc. so buffer keeps filling and filling. when
 the buffer is full, [tcpserver] blocks pd. so, currently the situation
 is not different from the one before i have started this thread.

It's not supposed to do that. It should return 'sent 0' when it can't 
take any more, never block. Are you sending to the same client or many 
different ones? Can you post a test patch that will reproduce the bug?
Something like this should stop when the buffer is full:

[bang][r stop]
| |
[until]
|
[send 1 7 7 7 7 7 7 7(
|
[tcpserver]
| | | | |
 [route sent]
  |
 [select 0]
  |
 [s stop]

 
 i don't know, how much control you have at c level over what is
 happening at tcp level. in order to solve the current issues at
 pd-level, information about either the current buffer size or amount of
 sent bytes  (number of bytes removed from the buffer) would be required.
 i don't know how and if this is possible at all.

I don't think it's possible (but then I'm often wrong ;(). There might 
be an ioctl that will return the buffer size so you could know how much 
is safe to send at once.
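
(On Linux there is at least SIOCOUTQ, which reports how many bytes are
still sitting unsent in the socket's send queue -- from the tcp(7) man
page, untested here:)

#include <sys/ioctl.h>
#include <linux/sockios.h>   /* SIOCOUTQ (Linux-specific) */

/* number of bytes queued in the kernel send buffer, not yet sent */
static int unsent_bytes(int fd)
{
    int pending = 0;
    if (ioctl(fd, SIOCOUTQ, &pending) < 0)
        return -1;
    return pending;
}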

 
 i would be interested to read about the c functions providing tcp
 capabilities. may i ask where you got your knowledge about those?
 

 From all over, but usually from the man pages for tcp, send, ioctl, and 
the like. For Windows, MSDN has a lot of info about winsock, which is 
very similar to unix sockets.

Martin




Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Martin Peach
Martin Peach wrote:
 Roman Haefeli wrote:
 i don't know, how much control you have at c level over what is
 happening at tcp level. in order to solve the current issues at
 pd-level, information about either the current buffer size or amount of
 sent bytes  (number of bytes removed from the buffer) would be required.
 i don't know how and if this is possible at all.
 
 I don't think it's possible (but then I'm often wrong ;(). There might 
 be an ioctl that will return the buffer size so you could know how much 
 is safe to send at once.

Yes you see I was wrong. There is a getsockopt call that will return the 
buffer size. And a setsockopt that can also set the size on a per-socket 
basis.
On WinXp I get 8192 for the default send buffer.
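
For the record, it's just SO_SNDBUF -- something like this (POSIX flavour,
untested sketch; winsock wants the optval cast to char *):

#include <sys/socket.h>

/* read the current send-buffer size for this socket */
static int get_sendbuf(int fd)
{
    int size = 0;
    socklen_t len = sizeof(size);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len) < 0)
        return -1;
    return size;
}

/* ask for a new send-buffer size (the kernel may round or clamp it,
   and Linux reports back twice the requested value) */
static int set_sendbuf(int fd, int size)
{
    return setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
}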

Martin



Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Roman Haefeli
On Sun, 2009-03-01 at 12:56 -0500, Martin Peach wrote:
 Roman Haefeli wrote:
  On Tue, 2009-02-24 at 21:15 +, Martin Peach wrote:
  Roman Haefeli wrote:
  --- Martin Peach martin.pe...@sympatico.ca wrote on Tue, 24.2.2009:
  Roman Haefeli wrote:
  On Mon, 2009-02-23 at 21:03 +, Martin
  Peach wrote:
  Yes, I agree. I think a status outlet on the [tcpserver]
  could be extended later to have more messages. Some of the
  stuff that gets printed to the Pd window could go there and
  then it could be handled by the patch instead of the
  'operator'. I don't want to keep adding more
  outlets, so it would output lists with a selector, like
  [comport].
  i totally agree, that instead of adding more outlets it would be better 
  to 
  provide additional information on the same outlet with appropriate 
  selector.
 
  OK it's done for now, in svn. Each time something is sent, you get a 
  sent 
  message from the status outlet that gives the number of bytes that were 
  actually sent and the client number. Also a [client( message with no data 
  lists the connections using a client selector.
  The send function doesn't wait any more. If the number of bytes sent is 
  zero, you have to try again.
  It all needs to be tested...
  
  thank you for implementing those changes. 
  
  i finally had a chance (and time) to have a closer look and it turned
  out, that the additional information is actually no gain and this still
  doesn't allow to programm a non-blocking server.
  it seems, that the 'sent' message is output, when something was _added_
  to the 'send' buffer. actually, we would need this message to appear
  when something was _removed_ from the buffer, which is when a message
  actually was sent. 
 
 
 Yes, because the actual buffer is hidden from the user. You should get a 
 'sent 0' message when it would block though, I don't know why you don't.
 
 
  
  with the current implementation,  the buffer still overruns without
  having the chance to know this beforehand. whenever i send a message to
  client, i get _immediately_  a 'sent 1 7' message, which i use to
  trigger the next message, etc. so buffer keeps filling and filling. when
  the buffer is full, [tcpserver] blocks pd. so, currently the situation
  is not different from the one before i have started this thread.
 
 It's not supposed to do that. It should return 'sent 0' when it can't 
 take any more, never block. Are you sending to the same client or many 
 different ones? Can you post a test patch that will reproduce the bug?
 Something like this should stop when the buffer is full:
 
 [bang][r stop]
 | |
 [until]
 |
 [send 1 7 7 7 7 7 7 7(
 |
 [tcpserver]
 | | | | |
  [route sent]
   |
  [select 0]
   |
  [s stop]
 

ah, now that i see your example, i understand how the additional
information is supposed to be used. however, i never see a 'sent 0'
message. even after plugging the cable back in, so that the buffer is
flushed, there is no 'sent 0' message, only lots of 'sent 1 7' messages.
this means that this example patch didn't work for me. pd was blocked
after 5700 or so 8-byte messages.

this is with a build from yesterday, so i will check out today's
build as well. you'll hear from me again soon. ;-)

roman








Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Roman Haefeli
On Sun, 2009-03-01 at 14:01 -0500, Martin Peach wrote:
 Martin Peach wrote:
  Roman Haefeli wrote:
  i don't know, how much control you have at c level over what is
  happening at tcp level. in order to solve the current issues at
  pd-level, information about either the current buffer size or amount of
  sent bytes  (number of bytes removed from the buffer) would be required.
  i don't know how and if this is possible at all.
  
  I don't think it's possible (but then I'm often wrong ;(). There might 
  be an ioctl that will return the buffer size so you could know how much 
  is safe to send at once.
 
 Yes you see I was wrong. There is a getsockopt call that will return the 
 buffer size. And a setsockopt that can also set the size on a per-socket 
 basis.
 On WinXp I get 8192 for the default send buffer.

hm... knowing the actual buffer size isn't really helpful. what would be
helpful is to know the state of the buffer: is it empty or
full? even the tiniest bit of information (empty or not empty) would
solve all issues.

i really wonder how other projects handle that. i mean, if several
people download a big file from apache, then a disappearing client
doesn't interfere with the other clients. i guess in apache it is
solved by using threads. when using threads, one single thread doesn't
necessarily need to know about the buffer state, because it could be
blocked without harm to the other apache children. so it can try to send
as much data as possible.
is using threads the _only_ solution to this problem? i guess
it would overcomplicate the programming of [tcpserver], but you surely
know better...

roman 






Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Roman Haefeli
On Sun, 2009-03-01 at 17:01 -0500, Martin Peach wrote:
 So I added a [clientbuf( message to [tcpserver] to get/set the size of 
 the send buffer. Apparently the actual buffer will be twice this size.

when i set the buffer to 10, i get a message:
tcpserver_buf_size: client 1 set to 2048

when no cable is plugged in, pd blocks after the second 8-byte message,
so i guess the real client buffer is 10 and not 2048.

 I'm still looking for a way to know if the buffer is so full that any 
 further data will block. 

it seems that it would be more elegant to know when it is empty. i
think this is more useful, since then you know that you can send a
message which is <= buffersize without blocking pd. otoh, when you
know that it is almost full, then it is more likely that you overrun the
buffer with a big message.

 It seems that even if the select() call returns 
 OK there still may not be enough room for any arbitrary length of data.
 Probably I need to set the sockets to nonblocking.

what does that mean, setting sockets to non-blocking? will this cause
the sockets to simply omit data that cannot be sent in time?
if so, i think that would be the worst solution of all.

roman






Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Claude Heiland-Allen
Roman Haefeli wrote:
 On Sun, 2009-03-01 at 17:01 -0500, Martin Peach wrote:
[snip]
 Probably I need to set the sockets to nonblocking.
 
 what does that mean: setting sockets to non-blocking? will this cause
 the sockets to simply ommit data, that cannot be sent in time?

http://docsrv.sco.com/SDK_netapi/sockC.nonBlockSocks.html


Claude



Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Roman Haefeli
On Sun, 2009-03-01 at 19:30 -0500, Martin Peach wrote:

 
 But if you know how much data you want to send you could set the buffer 
 to at least that size first.

yeah, this works for the very first time you send a chunk of data. the
next time, you don't know if the buffer is already empty again or if
there is still some data left to be sent. in an environment like netpd
(and many others) you cannot make any assumptions about the internal
buffer state, since it would be just too complex. there are situations
where probably some kilobytes are sent in 0 logical time (on a
state dump, for instance), whereas most of the time there are only
sporadically distributed messages to be sent.

  I think the blocking happens because the 
 connection is gone, not because of the buffer overflowing.

i am not sure if i understand what you mean here. what i experience: when
i overrun the buffer (by unplugging the ethernet cable to the client),
the pd process of [tcpserver] is completely blocked after having sent
$BUFFERSIZE bytes. it stays blocked until i plug the cable back in
again. the whole content of the buffer is sent after some seconds and
the server pd instance responds again. this means that even if the
connection is not completely lost, pd can block.

if the connection to the client is permanently lost, then there is no
way to make pd respond again after a buffer overrun has occurred.

 If the connection is present the buffer will be emptying as fast as the 
 network can drain it, so just pacing the writes should work. Trying to 
 write an infinite amount instantly won't.

i can't quite follow here either. it's not about trying to send an infinite
amount of data in zero time. it's about not knowing the current
connection condition, and because of that risking a drop-out. since the
connection condition changes all the time, you cannot implement a
self-adapting system in pd without knowing the internal buffer status
(empty or not). so even if you send data at a fixed rate, it could
still happen that you trigger a buffer overrun.


  It seems that even if the select() call returns 
  OK there still may not be enough room for any arbitrary length of data.
  Probably I need to set the sockets to nonblocking.
  
  what does that mean: setting sockets to non-blocking? will this cause
  the sockets to simply ommit data, that cannot be sent in time?
  if so, i think, that would be the worst solution of all.
  
 
 It's kind of like running each send call in its own thread, so if 
 something gets stuck it won't block everything else in the process. 
 That's better than making a thread for each connection.

sounds exciting. do you think you could implement that?

roman






Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Roman Haefeli
On Mon, 2009-03-02 at 00:02 +, Claude Heiland-Allen wrote:
 Roman Haefeli wrote:
  On Sun, 2009-03-01 at 17:01 -0500, Martin Peach wrote:
 [snip]
  Probably I need to set the sockets to nonblocking.
  
  what does that mean: setting sockets to non-blocking? will this cause
  the sockets to simply ommit data, that cannot be sent in time?
 
 http://docsrv.sco.com/SDK_netapi/sockC.nonBlockSocks.html
 

yo, although it is well described, i am not sure if i understand
everything correctly.
if blocking would have occurred, an error is returned instead, and the
call says how much data could actually be sent. so you can then try to
send the rest of the data again a bit later?

if that is the case: if you want to transmit data as fast as possible,
the only non-blocking way would be to constantly try to send data, even
while triggering the error, and only send the next chunk of data once the
error did _not_ occur? is 'polling the error' the only solution to the
issues of the currently blocking [tcpserver]?

roman







Re: [PD] pd and tcp: what to do against crashes?

2009-03-01 Thread Martin Peach
Roman Haefeli wrote:
 On Sun, 2009-03-01 at 19:30 -0500, Martin Peach wrote:
  I think the blocking happens because the 
 connection is gone, not because of the buffer overflowing.
 
 i am not sure, if understand what you mean here. what i experience: when
 i overrun the buffer (by plugging out the ethernet cable to the client),
 the pd process of [tcpserver] is completely blocked after having sent
 $BUFFERSIZE bytes. it stays blocked, until i plug in the cable back
 again. the whole content of the buffer is sent after some seconds and
 the server pd instance responds again. this means, that even if the
 connection is not completely lost, a blocking of pd could occur. 

You can send until the buffer fills up, even if it isn't emptying. I 
think it blocks because it only checks for at least one empty space in 
the buffer but then sends more than that.

 
 if the connection to the client is permanently lost, then there is no
 way to make pd responding again, after a buffer overrun occured.
 
 If the connection is present the buffer will be emptying as fast as the 
 network can drain it, so just pacing the writes should work. Trying to 
 write an infinite amount instantly won't.
 
 i can't follow here as well. it's not about trying to send an infinite
 amount of data in zero time. it's about not knowing the current
 connection condition, and because of that risking a drop-out. since the
 connection condition changes all the time, you cannot implement a
 self-adapting system in pd without knowing the internal buffer status
 (empty or not). so even if you send data with a fixed rate, it still
 could happen, that you trigger the buffer overrun.
 


Well I guess empty and almost full are the same as long as we stick to 
single bytes.
So I made [tcpserver] send the messages one byte at a time. This latest 
version should not block, since it adds only one element to the buffer 
for each select call that says the buffer is still writeable.

Martin



Re: [PD] pd and tcp: what to do against crashes?

2009-02-24 Thread Martin Peach
Roman Haefeli wrote:
 On Mon, 2009-02-23 at 19:05 -0500, Martin Peach wrote:
 Roman Haefeli wrote:
 On Mon, 2009-02-23 at 21:03 +, Martin Peach wrote:
 OK I fixed it now in svn. It works on debian. The select() call was not 
 being done properly. Now I need to test it on Windows again.
 hey, many thanks! it works. now i wonder, what happens, if the message
 is triggered: 'tcpserver_send_buf: client 1 not writeable'. does that
 indicated, that the buffer is cleared? does it mean, that when this
 message appears, that at least one message didn't come through?

 Right now it means that the message is dropped. I can't see a way of 
 holding on to it that wouldn't end up crashing Pd eventually if you keep 
 sending to an unconnected client.
 
 do i understand correctly, that if the buffer is full, there is a time
 limit for it to become emptied and if it is not emptied in that given
 time interval, the content is cleared? if this is true, i think, the one
 second interval is way to short. for instance, if a state dump happens
 in netpd (probably several hundred messages), it could well be, that the
 connection is not fast enough send enough messages in the given time, so
 they would be dropped. i guess, for my on practice, i change the code to
 use a much longer time interval.

But then it would hold up the whole process for even longer.

 
 what is not solved yet: similar to the previous version, a drop-out
 occurs, whenever a buffer overrun happens. unlike before, it is not
 possible, that pd hangs forever anymore (it will only hang at most for
 the given time limit), but there is still no mechanism provided to
 generally avoid drop-outs. 
 

Better to have it output a message immediately that states it is unable 
to deliver the data.

 somehow i need to design netpd in way, that as soon as one single
 message is lost, the connection should be shut down and established
 again, and the client should then again sync with other clients.
 otherwise very bad things could happen (patches are not transmitted
 completely and loading incomplete patches causes pd crashing). 

 Well the easiest thing would be to have [tcpserver] close the connection 
 itself when that happens.
 
 it's just too easy to trigger that. i think, it would lead to too many
 unwanted disconnects. 
 
  The next best would be to have it output a 
 message on a 'status' outlet that you could use to close the connection.
 
 personally, i find this the much better idea.
 

Yes, I'm gonna work on that.

 before the change i could be sure, that either all messages came through
 or the server crashed at some point, if messages could not be delivered.
 now, since the server doesn't crash anymore, i need to know, if messages
 were dropped. how can i know?
 
 At the moment it prints to the Pd window, which isn't much use for 
 control purposes. As I said, for me the easiest and most logical thing 
 is to have the connection closed automatically, but then you have to 
 keep track of the connection count to know whether it happened.
 What do you think?
 
 without knowing how hard it would be to implement, the best solution IMO
 (and the only one, that addresses all of above issues) would be, if the
 whole buffering would happen in the pd patch itself, so that the patch
 could adapt itself to the current network conditions. translated into
 features, this would mean, that [tcpserver] needs to provide information
 about its inner buffer state. the most simple and probably most
 effective thing i can think of, would be an additional outlet, that
 sends a bang every time, when the inner buffer is completely emptied. i
 don't know, if it has several buffers, one for each client; if so, then
 probably a number (socket number) would be more appropriate than a bang.
 this way, a patch can send only as many messages, as the bandwidth
 allows. also it would give the possibility to the patch to decide, what
 time interval of not being able to send messages is appropriate to shut
 down the connection. the time interval could be dynamically set without
 the need to change the code of [tcpserver]. 

The buffer is maintained by the TCP stack. There is no way of knowing if 
it is empty, only if it can accept more.

 
 i see, that implementing those features would make the use of and the
 programming around [tcpserver] much more complex, although it would make
 it much more powerful. personally, i am all for giving the most control
 to the patch programmer, since i believe, that only then pd can be used
 for robust programming. it's probably a matter, if someone sees pd as a
 fully featured programming language or rather as a tool for fast
 prototyping or a 'quick hacking-together' à la 'reaktor'. both
 expectations are valid, but speaking for myself, i never found, that
 things were _too_ low-level in pd. 
 [tcpserver] is actually a good example for explaining what i mean: it
 was originally designed to tranport streams of data between the server
 and clients. in 

Re: [PD] pd and tcp: what to do against crashes?

2009-02-24 Thread Roman Haefeli




--- Martin Peach martin.pe...@sympatico.ca wrote on Tue, 24.2.2009:

 Roman Haefeli wrote:
  On Mon, 2009-02-23 at 19:05 -0500, Martin Peach wrote:
  Roman Haefeli wrote:
  On Mon, 2009-02-23 at 21:03 +, Martin
 Peach wrote:
  OK I fixed it now in svn. It works on
 debian. The select() call was not being done properly. Now I
 need to test it on Windows again.
  hey, many thanks! it works. now i wonder, what
 happens, if the message
  is triggered: 'tcpserver_send_buf: client
 1 not writeable'. does that
  indicated, that the buffer is cleared? does it
 mean, that when this
  message appears, that at least one message
 didn't come through?
  
  Right now it means that the message is dropped. I
 can't see a way of holding on to it that wouldn't
 end up crashing Pd eventually if you keep sending to an
 unconnected client.
  
  do i understand correctly, that if the buffer is full,
 there is a time
  limit for it to become emptied and if it is not
 emptied in that given
  time interval, the content is cleared? if this is
 true, i think, the one
  second interval is way to short. for instance, if a
 state dump happens
  in netpd (probably several hundred messages), it could
 well be, that the
  connection is not fast enough send enough messages in
 the given time, so
  they would be dropped. i guess, for my on practice, i
 change the code to
  use a much longer time interval.
 
 But then it would hold up the whole process for even
 longer.
 
  
  what is not solved yet: similar to the previous
 version, a drop-out
  occurs, whenever a buffer overrun happens. unlike
 before, it is not
  possible, that pd hangs forever anymore (it will only
 hang at most for
  the given time limit), but there is still no mechanism
 provided to
  generally avoid drop-outs. 
 
 Better to have it output a message immediately that states
 it is unable to deliver the data.
 
  somehow i need to design netpd in way, that as
 soon as one single
  message is lost, the connection should be shut
 down and established
  again, and the client should then again sync
 with other clients.
  otherwise very bad things could happen
 (patches are not transmitted
  completely and loading incomplete patches
 causes pd crashing). 
  Well the easiest thing would be to have
 [tcpserver] close the connection itself when that happens.
  
  it's just too easy to trigger that. i think, it
 would lead to too many
  unwanted disconnects. 
   The next best would be to have it output a
 message on a 'status' outlet that you could use to
 close the connection.
  
  personally, i find this the much better idea.
  
 
 Yes, I'm gonna work on that.

juhuu!..

  before the change i could be sure, that either
 all messages came through
  or the server crashed at some point, if
 messages could not be delivered.
  now, since the server doesn't crash
 anymore, i need to know, if messages
  were dropped. how can i know?
  
  At the moment it prints to the Pd window, which
 isn't much use for control purposes. As I said, for me
 the easiest and most logical thing is to have the connection
 closed automatically, but then you have to keep track of the
 connection count to know whether it happened.
  What do you think?
  
  without knowing how hard it would be to implement, the
 best solution IMO
  (and the only one, that addresses all of above issues)
 would be, if the
  whole buffering would happen in the pd patch itself,
 so that the patch
  could adapt itself to the current network conditions.
 translated into
  features, this would mean, that [tcpserver] needs to
 provide information
  about its inner buffer state. the most simple and
 probably most
  effective thing i can think of, would be an additional
 outlet, that
  sends a bang every time, when the inner buffer is
 completely emptied. i
  don't know, if it has several buffers, one for
 each client; if so, then
  probably a number (socket number) would be more
 appropriate than a bang.
  this way, a patch can send only as many messages, as
 the bandwidth
  allows. also it would give the possibility to the
 patch to decide, what
  time interval of not being able to send messages is
 appropriate to shut
  down the connection. the time interval could be
 dynamically set without
  the need to change the code of [tcpserver]. 
 
 The buffer is maintained by the TCP stack. There is no way
 of knowing if it is empty, only if it can accept more.

i see. even knowing that it can accept more would be useful, i guess


  i see, that implementing those features would make the
 use of and the
  programming around [tcpserver] much more complex,
 although it would make
  it much more powerful. personally, i am all for giving
 the most control
  to the patch programmer, since i believe, that only
 then pd can be used
  for robust programming. it's probably a matter, if
 someone sees pd as a
  fully featured programming language or rather as a
 tool for fast
  prototyping or a 'quick hacking-together' à
 la 'reaktor'. 

Re: [PD] pd and tcp: what to do against crashes?

2009-02-24 Thread Martin Peach
Roman Haefeli wrote:
--- Martin Peach martin.pe...@sympatico.ca wrote on Tue, 24.2.2009:
  Roman Haefeli wrote:
   On Mon, 2009-02-23 at 21:03 +, Martin
  Peach wrote:
  Yes, I agree. I think a status outlet on the [tcpserver]
  could be extended later to have more messages. Some of the
  stuff that gets printed to the Pd window could go there and
  then it could be handled by the patch instead of the
  'operator'. I don't want to keep adding more
  outlets, so it would output lists with a selector, like
  [comport].

i totally agree that, instead of adding more outlets, it would be better to 
provide additional information on the same outlet with an appropriate 
selector.


OK it's done for now, in svn. Each time something is sent, you get a 'sent' 
message from the status outlet that gives the number of bytes that were 
actually sent and the client number. Also a [client( message with no data 
lists the connections using a 'client' selector.
The send function doesn't wait any more. If the number of bytes sent is 
zero, you have to try again.
It all needs to be tested...
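
In case a concrete picture helps, the status message is built more or less 
like this (a sketch with illustrative names and argument order, not the 
literal tcpserver.c code):

/* sketch: report the client number and how many bytes send() actually
   accepted as a "sent" message on a dedicated status outlet
   (names and argument order are illustrative) */
#include "m_pd.h"

static void tcpserver_status_sent(t_outlet *status_outlet, int client, int bytes_sent)
{
    t_atom output[2];

    SETFLOAT(&output[0], (t_float)client);      /* which client was sent to */
    SETFLOAT(&output[1], (t_float)bytes_sent);  /* 0 means nothing was accepted: try again */
    outlet_anything(status_outlet, gensym("sent"), 2, output);
}

On the patch side, something like [route sent client] after that outlet would 
separate the 'sent' reports from the 'client' lists.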


i am very happy to notice, that we agree and that you are willing to 
address the existing issue. many thanks for your help.


You're welcome. I too prefer functional objects;)

Martin



___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Roman Haefeli
On Sun, 2009-02-22 at 18:42 -0500, Martin Peach wrote:
 Roman Haefeli wrote:
  On Sun, 2009-02-22 at 17:30 -0500, Martin Peach wrote:
  
  Maybe you could try it (I just uploaded it to the svn at 
  http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
   
  and see if anything changes.

with the newest [tcpserver] i cannot send messages to clients anymore. i
tried both, 'send socketnumber' and 'client number'. whenever
[tcpserver] receives such a message, pd is blocked for about a second
and then i get in the console:

tcpserver_send_buf: client 1 not writeable

roman





___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Martin Peach



From: Roman Haefeli reduzie...@yahoo.de
Reply-To: reduzie...@yahoo.de
To: Martin Peach martin.pe...@sympatico.ca
CC: PD list pd-list@iem.at
Subject: Re: [PD] pd and tcp: what to do against crashes?
Date: Mon, 23 Feb 2009 19:50:44 +0100

On Sun, 2009-02-22 at 18:42 -0500, Martin Peach wrote:
  Roman Haefeli wrote:
   On Sun, 2009-02-22 at 17:30 -0500, Martin Peach wrote:
  
   Maybe you could try it (I just uploaded it to the svn at
   
http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
   and see if anything changes.

with the newest [tcpserver] i cannot send messages to clients anymore. i
tried both, 'send socketnumber' and 'client number'. whenever
[tcpserver] receives such a message, pd is blocked for about a second
and the i get in the console:

tcpserver_send_buf: client 1 not writeable


Are you sending a lot of data? That should only happen if you send more than 
a buffer's worth (how big that is, only the system knows). It blocks for exactly one 
second if the buffer is full; I was thinking that should give it enough time 
to send everything. I guess Pd isn't crashing anymore at least ;)
You could try changing line 383 of tcpserver.c to change the timeout:
timeout.tv_sec = 10; /* for ten seconds */

Martin



___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Roman Haefeli
On Mon, 2009-02-23 at 19:10 +, Martin Peach wrote:
 
 
 From: Roman Haefeli reduzie...@yahoo.de
 Reply-To: reduzie...@yahoo.de
 To: Martin Peach martin.pe...@sympatico.ca
 CC: PD list pd-list@iem.at
 Subject: Re: [PD] pd and tcp: what to do against crashes?
 Date: Mon, 23 Feb 2009 19:50:44 +0100
 
 On Sun, 2009-02-22 at 18:42 -0500, Martin Peach wrote:
   Roman Haefeli wrote:
On Sun, 2009-02-22 at 17:30 -0500, Martin Peach wrote:
   
Maybe you could try it (I just uploaded it to the svn at

 http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
and see if anything changes.
 
 with the newest [tcpserver] i cannot send messages to clients anymore. i
 tried both, 'send socketnumber' and 'client number'. whenever
 [tcpserver] receives such a message, pd is blocked for about a second
 and the i get in the console:
 
 tcpserver_send_buf: client 1 not writeable
 
 
 Are you sending a lot of data? That should only happen if you send more than 
 a buffer, whatever that is, only the system knows. It blocks for exactly one 
 second if the buffer is full, I was thinking that should give it enough time 
 to send everything. I guess Pd isn't crashing anymore at least ;)
 You could try changing line 383 of tcpserver.c to change the timeout:
 timeout.tv_sec = 10; /* for ten seconds */

i am sending messages with 8 bytes of data (lists with 8 numbers). none of
the messages is received on the other side. it is not possible to send
anything at all.

roman





___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Roman Haefeli


On Mon, 2009-02-23 at 19:10 +, Martin Peach wrote:
 
 
 From: Roman Haefeli reduzie...@yahoo.de
 Reply-To: reduzie...@yahoo.de
 To: Martin Peach martin.pe...@sympatico.ca
 CC: PD list pd-list@iem.at
 Subject: Re: [PD] pd and tcp: what to do against crashes?
 Date: Mon, 23 Feb 2009 19:50:44 +0100
 
 On Sun, 2009-02-22 at 18:42 -0500, Martin Peach wrote:
   Roman Haefeli wrote:
On Sun, 2009-02-22 at 17:30 -0500, Martin Peach wrote:
   
Maybe you could try it (I just uploaded it to the svn at

 http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
and see if anything changes.
 
 with the newest [tcpserver] i cannot send messages to clients anymore. i
 tried both, 'send socketnumber' and 'client number'. whenever
 [tcpserver] receives such a message, pd is blocked for about a second
 and the i get in the console:
 
 tcpserver_send_buf: client 1 not writeable
 
 
 Are you sending a lot of data? That should only happen if you send more than 
 a buffer, whatever that is, only the system knows. It blocks for exactly one 
 second if the buffer is full, I was thinking that should give it enough time 
 to send everything. I guess Pd isn't crashing anymore at least ;)
 You could try changing line 383 of tcpserver.c to change the timeout:
 timeout.tv_sec = 10; /* for ten seconds */

probably, i should add, that i am testing the new [tcpserver] code on
ubuntu 8.04, as i don't know how to compile on OS X or win XP. what are
you testing on?

roman







___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Martin Peach
From: Roman Haefeli reduzie...@yahoo.de
Reply-To: reduzie...@yahoo.de
To: Martin Peach martin.pe...@sympatico.ca
CC: pd-list@iem.at
Subject: Re: [PD] pd and tcp: what to do against crashes?
Date: Mon, 23 Feb 2009 20:59:41 +0100

On Mon, 2009-02-23 at 19:10 +, Martin Peach wrote:
 
  From: Roman Haefeli reduzie...@yahoo.de
  Reply-To: reduzie...@yahoo.de
  To: Martin Peach martin.pe...@sympatico.ca
  CC: PD list pd-list@iem.at
  Subject: Re: [PD] pd and tcp: what to do against crashes?
  Date: Mon, 23 Feb 2009 19:50:44 +0100
  
  On Sun, 2009-02-22 at 18:42 -0500, Martin Peach wrote:
Roman Haefeli wrote:
 On Sun, 2009-02-22 at 17:30 -0500, Martin Peach wrote:

 Maybe you could try it (I just uploaded it to the svn at

  
 http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
 and see if anything changes.
  
  with the newest [tcpserver] i cannot send messages to clients anymore. 
i
  tried both, 'send socketnumber' and 'client number'. whenever
  [tcpserver] receives such a message, pd is blocked for about a second
  and the i get in the console:
  
  tcpserver_send_buf: client 1 not writeable
  
 
  Are you sending a lot of data? That should only happen if you send more 
than
  a buffer, whatever that is, only the system knows. It blocks for exactly 
one
  second if the buffer is full, I was thinking that should give it enough 
time
  to send everything. I guess Pd isn't crashing anymore at least ;)
  You could try changing line 383 of tcpserver.c to change the timeout:
  timeout.tv_sec = 10; /* for ten seconds */

probably, i should add, that i am testing the new [tcpserver] code on
ubuntu 8.04, as i don't know how to compile on OS X or win XP. what are
you testing on?


I tried it yesterday on WinXP. I have a debian machine here I can try it on.

Martin



___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Martin Peach
Roman Haefeli wrote:

  with the newest [tcpserver] i cannot send messages to clients anymore. 
i
  tried both, 'send socketnumber' and 'client number'. whenever
  [tcpserver] receives such a message, pd is blocked for about a second
  and the i get in the console:
  
  tcpserver_send_buf: client 1 not writeable
  
 
  Are you sending a lot of data? That should only happen if you send more 
than
  a buffer, whatever that is, only the system knows. It blocks for exactly 
one
  second if the buffer is full, I was thinking that should give it enough 
time
  to send everything. I guess Pd isn't crashing anymore at least ;)
  You could try changing line 383 of tcpserver.c to change the timeout:
  timeout.tv_sec = 10; /* for ten seconds */

probably, i should add, that i am testing the new [tcpserver] code on
ubuntu 8.04, as i don't know how to compile on OS X or win XP. what are
you testing on?


OK I fixed it now in svn. It works on debian. The select() call was not 
being done properly. Now I need to test it on Windows again.

Martin



___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Roman Haefeli
On Mon, 2009-02-23 at 21:03 +, Martin Peach wrote:
 Roman Haefeli wrote:
 
   with the newest [tcpserver] i cannot send messages to clients anymore. 
 i
   tried both, 'send socketnumber' and 'client number'. whenever
   [tcpserver] receives such a message, pd is blocked for about a second
   and the i get in the console:
   
   tcpserver_send_buf: client 1 not writeable
   
  
   Are you sending a lot of data? That should only happen if you send more 
 than
   a buffer, whatever that is, only the system knows. It blocks for exactly 
 one
   second if the buffer is full, I was thinking that should give it enough 
 time
   to send everything. I guess Pd isn't crashing anymore at least ;)
   You could try changing line 383 of tcpserver.c to change the timeout:
   timeout.tv_sec = 10; /* for ten seconds */
 
 probably, i should add, that i am testing the new [tcpserver] code on
 ubuntu 8.04, as i don't know how to compile on OS X or win XP. what are
 you testing on?
 
 
 OK I fixed it now in svn. It works on debian. The select() call was not 
 being done properly. Now I need to test it on Windows again.

hey, many thanks! it works. now i wonder what happens if the message
'tcpserver_send_buf: client 1 not writeable' is triggered. does that
indicate that the buffer is cleared? does it mean that, when this
message appears, at least one message didn't come through?

somehow i need to design netpd in a way that, as soon as a single
message is lost, the connection is shut down and established
again, and the client then syncs with the other clients again.
otherwise very bad things could happen (patches are not transmitted
completely, and loading incomplete patches makes pd crash). 

before the change i could be sure that either all messages came through,
or the server crashed at some point if messages could not be delivered.
now, since the server doesn't crash anymore, i need to know if messages
were dropped. how can i know?

thanks again for all your effort.

roman







___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Martin Peach
Roman Haefeli wrote:
 On Mon, 2009-02-23 at 21:03 +, Martin Peach wrote:
 OK I fixed it now in svn. It works on debian. The select() call was not 
 being done properly. Now I need to test it on Windows again.
 
 hey, many thanks! it works. now i wonder, what happens, if the message
 is triggered: 'tcpserver_send_buf: client 1 not writeable'. does that
 indicated, that the buffer is cleared? does it mean, that when this
 message appears, that at least one message didn't come through?
 

Right now it means that the message is dropped. I can't see a way of 
holding on to it that wouldn't end up crashing Pd eventually if you keep 
sending to an unconnected client.

 somehow i need to design netpd in way, that as soon as one single
 message is lost, the connection should be shut down and established
 again, and the client should then again sync with other clients.
 otherwise very bad things could happen (patches are not transmitted
 completely and loading incomplete patches causes pd crashing). 
 

Well the easiest thing would be to have [tcpserver] close the connection 
itself when that happens. The next best would be to have it output a 
message on a 'status' outlet that you could use to close the connection.

 before the change i could be sure, that either all messages came through
 or the server crashed at some point, if messages could not be delivered.
 now, since the server doesn't crash anymore, i need to know, if messages
 were dropped. how can i know?
 

At the moment it prints to the Pd window, which isn't much use for 
control purposes. As I said, for me the easiest and most logical thing 
is to have the connection closed automatically, but then you have to 
keep track of the connection count to know whether it happened.
What do you think?

Martin

___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-23 Thread Roman Haefeli
On Mon, 2009-02-23 at 19:05 -0500, Martin Peach wrote:
 Roman Haefeli wrote:
  On Mon, 2009-02-23 at 21:03 +, Martin Peach wrote:
  OK I fixed it now in svn. It works on debian. The select() call was not 
  being done properly. Now I need to test it on Windows again.
  
  hey, many thanks! it works. now i wonder, what happens, if the message
  is triggered: 'tcpserver_send_buf: client 1 not writeable'. does that
  indicated, that the buffer is cleared? does it mean, that when this
  message appears, that at least one message didn't come through?
  
 
 Right now it means that the message is dropped. I can't see a way of 
 holding on to it that wouldn't end up crashing Pd eventually if you keep 
 sending to an unconnected client.

do i understand correctly that, if the buffer is full, there is a time
limit for it to become emptied, and if it is not emptied in that given
time interval, the content is cleared? if this is true, i think the
one-second interval is way too short. for instance, if a state dump happens
in netpd (probably several hundred messages), it could well be that the
connection is not fast enough to send all messages in the given time, so
they would be dropped. i guess for my own practice i will change the code to
use a much longer time interval.

what is not solved yet: similar to the previous version, a drop-out
occurs whenever a buffer overrun happens. unlike before, pd can no longer
hang forever (it will hang at most for the given time limit), but there
is still no mechanism provided to generally avoid drop-outs. 

  somehow i need to design netpd in way, that as soon as one single
  message is lost, the connection should be shut down and established
  again, and the client should then again sync with other clients.
  otherwise very bad things could happen (patches are not transmitted
  completely and loading incomplete patches causes pd crashing). 
  
 
 Well the easiest thing would be to have [tcpserver] close the connection 
 itself when that happens.

it's just too easy to trigger that. i think, it would lead to too many
unwanted disconnects. 

  The next best would be to have it output a 
 message on a 'status' outlet that you could use to close the connection.

personally, i find this a much better idea.

  before the change i could be sure, that either all messages came through
  or the server crashed at some point, if messages could not be delivered.
  now, since the server doesn't crash anymore, i need to know, if messages
  were dropped. how can i know?

 At the moment it prints to the Pd window, which isn't much use for 
 control purposes. As I said, for me the easiest and most logical thing 
 is to have the connection closed automatically, but then you have to 
 keep track of the connection count to know whether it happened.
 What do you think?

without knowing how hard it would be to implement, the best solution IMO
(and the only one that addresses all of the above issues) would be if the
whole buffering happened in the pd patch itself, so that the patch
could adapt itself to the current network conditions. translated into
features, this would mean that [tcpserver] needs to provide information
about its inner buffer state. the simplest and probably most effective
thing i can think of would be an additional outlet that sends a bang every
time the inner buffer is completely emptied. i don't know if it has several
buffers, one for each client; if so, then a number (socket number) would
probably be more appropriate than a bang. this way, a patch can send only
as many messages as the bandwidth allows. it would also let the patch decide
what time interval of not being able to send messages justifies shutting
down the connection. that time interval could be set dynamically, without
the need to change the code of [tcpserver]. 
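
just to make it more concrete, something like this is what i have in mind (a
very rough sketch with made-up names, not actual tcpserver.c code; as far as
i understand it, select() with a zero timeout can at least tell whether a
socket will accept more data):

/* sketch: poll each client's socket and, whenever it can accept more data,
   send that client's number out of a status outlet (all names made up) */
#include <sys/select.h>
#include "m_pd.h"

static void tcpserver_poll_writeable(t_outlet *status_outlet, int *client_sockets, int n_clients)
{
    int            i;
    fd_set         wfds;
    struct timeval timeout = {0, 0};  /* return immediately, never block pd */

    for (i = 0; i < n_clients; i++)
    {
        FD_ZERO(&wfds);
        FD_SET(client_sockets[i], &wfds);
        if (select(client_sockets[i] + 1, NULL, &wfds, NULL, &timeout) > 0)
            outlet_float(status_outlet, (t_float)(i + 1));  /* client i+1 can take more data */
    }
}

the patch could trigger something like this with a poll message (or a metro)
and only send the next chunk to clients whose number comes out.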

i see that implementing those features would make the use of, and the
programming around, [tcpserver] much more complex, although it would also
make it much more powerful. personally, i am all for giving the most control
to the patch programmer, since i believe that only then can pd be used
for robust programming. it's probably a matter of whether someone sees pd as
a fully featured programming language or rather as a tool for fast
prototyping or a 'quick hacking-together' à la 'reaktor'. both
expectations are valid, but speaking for myself, i never found that
things were _too_ low-level in pd. 
[tcpserver] is actually a good example for explaining what i mean: it
was originally designed to transport streams of data between the server
and clients. in order to transport packet-oriented protocols,
[tcpserver] would have needed to be adapted accordingly, and each
protocol would have required its own code. the fact that i can do all
that in pd lets me implement those protocols that i personally need
(without touching the code of [tcpserver]). this way, i can expand the
functionality of [tcpserver] 

Re: [PD] pd and tcp: what to do against crashes?

2009-02-22 Thread Roman Haefeli
On Sat, 2009-02-21 at 12:59 -0500, Martin Peach wrote:
 Hi Roman,
 I think it probably comes down to the code not checking for all possible 
 error conditions. 

it would be cool if it were as simple as that.

 Under udp you can send as much as you like to 
 nonexistent receivers but tcp needs an active connection.
 Most likely the code is just assuming that everything is working properly.
 It sounds as though data being sent to a client whose connection has 
 just dropped but before it has timed out, will go into nevernever land 
 and the thread will hang.

where is neverneverland?  i mean, in the tcp protocol the receiver has to
confirm that it received the messages, so i guess the sender needs to
keep all the messages that were sent to the vanished client but were
never confirmed, right? 
 
 It would be nice to have a setup that could reliably reproduce the bug, 
 then it would be much easier to fix. Probably having 2 machines 
 connected and pulling the cable out of one at the right moment should do it.
 Anyway I'll stop speculating now and have a look at the code...

let me try some test setups, though i think one needs to have at least
two computers in order to trigger the problem. it would be just awesome,
if this long-standing issue could be fixed. 

roman






___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-22 Thread Martin Peach
Roman Haefeli wrote:
 On Sat, 2009-02-21 at 12:59 -0500, Martin Peach wrote:
 Hi Roman,
 I think it probably comes down to the code not checking for all possible 
 error conditions. 
 
 cool, if it would be as simple as that.
 
 Under udp you can send as much as you like to 
 nonexistent receivers but tcp needs an active connection.
 Most likely the code is just assuming that everything is working properly.
 It sounds as though data being sent to a client whose connection has 
 just dropped but before it has timed out, will go into nevernever land 
 and the thread will hang.

After looking at the actual code, I think the above is not true. The TCP 
stack will just keep trying to send the buffer until it times out; how 
long that takes seems to be system dependent. I don't see why that 
should cause Pd to crash.

 
 where is neverneverland?  i mean, in tcp protocol, the receiver has to
 confirm, that it received the messages, so i guess, the sender needs to
 keep all the messages, that were sent to the vanished client, but were
 never confirmed, right? 


Yes, the TCP code keeps trying to send for a while. From the code it 
looks like an error "tcp_server: send blocked xxx msec" should be 
printed if the send() function doesn't return quickly, but I think that 
will only happen if there is some local problem with the network.
The send() man page says:
"When the message does not fit into the send buffer of the socket, 
send() normally blocks, unless the socket has been placed in 
non-blocking I/O mode. In non-blocking mode it would return EAGAIN in 
this case. The select(2) call may be used to determine when it is 
possible to send more data."

So I guess it's plausible that Pd is getting stuck when the send buffer 
is overrun (in blocking mode send() doesn't return until there is some 
room in the buffer, although it does return if the buffer is not full 
even if it can't be sent). The error message will never get printed 
because send has blocked forever.

I think netserver uses the exact same code.
I guess they should either be using select() to see if a socket is 
writeable before calling send() on it, or opening the socket in 
non-blocking mode and checking for errors like EAGAIN, and in either 
case shut down a socket whose send buffer is full.
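
The non-blocking variant would look roughly like this on the unix side (a 
sketch, not code that is currently in tcpserver.c; on Windows it would be 
ioctlsocket() and WSAEWOULDBLOCK instead):

/* sketch: make the socket non-blocking, then treat EAGAIN/EWOULDBLOCK from
   send() as "buffer full" instead of letting send() block Pd */
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/types.h>

static int send_nonblocking(int sockfd, const char *buf, size_t len)
{
    ssize_t sent;

    /* ensure non-blocking mode (would normally be done once, when the
       connection is accepted) */
    fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL, 0) | O_NONBLOCK);
    sent = send(sockfd, buf, len, 0);
    if (sent < 0)
    {
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 0;   /* send buffer full: report it, maybe close the socket */
        return -1;      /* some other error */
    }
    return (int)sent;   /* number of bytes the stack accepted */
}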

A way around it could be to have the clients always reply to messages, 
then have the server shut down the connections that don't answer in time.

In playing around with [tcpclient] and web servers I noticed that the 
server always closes the connection as soon as each request has been 
answered, so that problem doesn't really arise for Apache.

Martin

___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-22 Thread Roman Haefeli
On Sun, 2009-02-22 at 15:17 -0500, Martin Peach wrote:
 Roman Haefeli wrote:
  On Sat, 2009-02-21 at 12:59 -0500, Martin Peach wrote:
  Hi Roman,
  I think it probably comes down to the code not checking for all possible 
  error conditions. 
  
  cool, if it would be as simple as that.
  
  Under udp you can send as much as you like to 
  nonexistent receivers but tcp needs an active connection.
  Most likely the code is just assuming that everything is working properly.
  It sounds as though data being sent to a client whose connection has 
  just dropped but before it has timed out, will go into nevernever land 
  and the thread will hang.
 
 After looking at the actual code, I think the above is not true. The TCP 
 stack will just keep trying to send the buffer until it times out; how 
 long that takes seems to be system dependent. I don't see why that 
 should cause Pd to crash.
 
  
  where is neverneverland?  i mean, in tcp protocol, the receiver has to
  confirm, that it received the messages, so i guess, the sender needs to
  keep all the messages, that were sent to the vanished client, but were
  never confirmed, right? 
 
 
 Yes, the TCP code keeps trying to send for a while. From the code it 
 looks like an error tcp_server: send blocked xxx msec should be 
 printed if the send() function doesn't return quickly, but I think that 
 will only happen if there is some local problem with the network.
 The send() man page says:
 When the message does not fit into the send buffer of the socket, 
 send() normally blocks, unless the socket has been placed in 
 non-blocking I/O mode. In non-blocking mode it would return EAGAIN in 
 this case. The select(2) call may be used to determine when it is 
 possible to send more data. 
 
 So I guess it's plausible that Pd is getting stuck when the send buffer 
 is overrun (in blocking mode send() doesn't return until there is some 
 room in the buffer, although it does return if the buffer is not full 
 even if it can't be sent). The error message will never get printed 
 because send has blocked forever.
 
 I think netserver uses the exact same code.

good to know, since it appears to have the exact same problem.

 I guess they should either be using select() to see if a socket is 
 writeable before calling send() on it, or opening the socket in 
 non-blocking mode and checking for errors like EAGAIN, and in either 
 case shut down a socket whose send buffer is full.

hm.. i doubt that this is a good idea. in the current implementation of
all [net*] and [tcp*] classes it is very easy to hit a buffer overrun:
you only need to send a certain amount of messages in zero logical
time and the socket would be closed. i guess either those classes would
have to handle this kind of situation in a more intelligent way (i don't
know yet what that would mean, though), or there needs to be more control
in userspace. i already mentioned it before: if every net class would
output a bang whenever the send buffer is emptied, one could design a
patch in a way that it only sends messages if the other end is
listening and the buffer is not full. this way it would even be possible
to transmit at the maximum available bandwidth. i don't know how this
could be achieved without giving at least that amount of control to
userspace. 

 A way around it could be to have the clients always reply to messages, 
 then have the server shut down the connections that don't answer in time.

yeah, this would work with [tcpserver], but not with [netserver]: it
doesn't provide a method for closing connections, afaik. 

but to me it sounds awkward to reimplement at a higher level a task
that should be handled at the tcp level. i don't think a protocol over
tcp should work this way. also, it would make message-based data
transmission very slow, since for each message to be sent you would
have to wait a full round trip. 

 In playing around with [tcpclient] and web servers I noticed that the 
 server always closes the connection as soon as each request has been 
 answered, so that problem doesn't really arise for Apache.

you're right. actually, i can't think of many setups, that are similar
to what i described in my first post of the thread: one server with many
clients constantly staying connected. it seems to be the least trivial
setup.

roman







___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-22 Thread Martin Peach
I just tried with 2 machines having [tcpserver] (WinXP) repeatedly send 
to [tcpclient] (Ubuntu) while I pulled out the cable from one machine. 
The server keeps sending until it disconnects about a minute later with 
the message
tcpserver: not a valid socket number (-1)
The client however thinks it's still connected and I need to disconnect 
before reconnecting to be able to resume communication.
So there's no crash there. It's probably the buffer overflow.
Then I modified [tcpserver] to check if the socket is writeable first, 
using select(). This causes messages to appear when the buffer overflows 
instead of blocking at the send() call, but it doesn't close the socket.
I gave select a one-second timeout, which should allow time for 
multiple messages sent in zero logical time to get out.
Maybe you could try it (I just uploaded it to the svn at 
http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
 
and see if anything changes.
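
For anyone following along, the check amounts to something like this (a 
simplified sketch; the names in the actual tcpserver.c differ):

/* sketch: before send(), wait up to one second for the socket to become
   writeable; if it doesn't, report it instead of letting send() block Pd */
#include <sys/select.h>
#include "m_pd.h"

static int sock_is_writeable(int sockfd)
{
    fd_set         wfds;
    struct timeval timeout;

    FD_ZERO(&wfds);
    FD_SET(sockfd, &wfds);
    timeout.tv_sec = 1;   /* the one-second timeout mentioned above */
    timeout.tv_usec = 0;
    return select(sockfd + 1, NULL, &wfds, NULL, &timeout) > 0;
}

/* used roughly like:
   if (sock_is_writeable(sockfd)) send(sockfd, buf, length, 0);
   else post("tcpserver_send_buf: client %d not writeable", client); */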


Martin

___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-22 Thread Roman Haefeli
bytes sent before pd stopped responding:
11584 (tcpserver/linux)
131760 (tcpclient/OS X)


On Sun, 2009-02-22 at 17:30 -0500, Martin Peach wrote:
 I just tried with 2 machines having [tcpserver] (WinXP) repeatedly send 
 to [tcpclient] (Ubuntu) while I pulled out the cable from one machine. 
 The server keeps sending until it disconnects about a minute later with 
 message
 tcpserver: not a valid socket number (-1)
 The client however thinks it's still connected and I need to disconnect 
 before reconnecting to be able to resume communication.
 So there's no crash there.

i tested the same with different results. after having sent 11584 bytes
from [tcpserver] on ubuntu to [tcpclient] on OS X, which had been
disconnected from ethernet, the pd instance running [tcpserver] didn't
respond anymore. after plugging in the ethernet cable again, the client
on OS X did receive all data in one message after a few seconds. after
this happened, the server responded again. 
the difference to your test was that i sent all data (11584 bytes) in
_less_ than a minute, so the server didn't print the message:
tcpserver: not a valid socket number (-1)


i also tested it the other way around: the client (OS X) connects to the
server (linux), then i unplugged the cable and started sending messages
from the client to the server. the client pd instance stopped responding
after having sent 131760 bytes. i don't know if this difference comes
from different buffer sizes in [tcpserver] and [tcpclient] or from
different implementations on the two OSes. however, here too: a few
seconds after plugging the cable back in, the server received the whole
chunk as one message and the client started to respond again.
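
(just a guess, but those two numbers probably reflect the default socket send
buffer size on each system; a quick check like this -- hypothetical, it is not
in the current code -- would show it:)

/* sketch: print the size of a socket's send buffer, which is what limits how
   much can be written before send() blocks */
#include <sys/socket.h>
#include "m_pd.h"

static void print_sndbuf_size(int sockfd)
{
    int       size = 0;
    socklen_t len  = sizeof(size);

    if (getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &size, &len) == 0)
        post("send buffer of socket %d: %d bytes", sockfd, size);
}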

  It's probably the buffer overflow.
 Then I modified [tcpserver] to check if the socket is writeable first, 
 using select(). This causes messages to appear when the buffer overflows 
 instead of blocking at the send() call, but it doesn't close the socket.
 I gave select a one second timeout, which should allow time for 
 zero-logical time multiple messages to get out.
 Maybe you could try it (I just uploaded it to the svn at 
 http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
  
 and see if anything changes.

cool! many thanks for your effort. i am happy to perform some further
tests.

roman








___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-22 Thread Roman Haefeli
On Sun, 2009-02-22 at 17:30 -0500, Martin Peach wrote:

 Maybe you could try it (I just uploaded it to the svn at 
 http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
  
 and see if anything changes.
 


now, i cannot compile it anymore, when i do:

cd pd-svn/externals/
make mrpeach

i get:

cc -DPD -O2 -I/home/roman/pd-svn/pd/src -Wall -W -ggdb 
-I/home/roman/pd-svn/Gem/src -I/home/roman/pd-svn/externals/pdp/include -DUNIX 
-Dunix -fPIC -o /home/roman/pd-svn/externals/mrpeach/net/tcpserver.o -c 
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
'tcpserver_send_buf':
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:387: error: 'errno' 
undeclared (first use in this function)
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:387: error: (Each 
undeclared identifier is reported only once
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:387: error: for each 
function it appears in.)
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:375: warning: unused 
variable 'timebefore'
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
'tcpserver_send':
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:414: warning: unused 
parameter 's'
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
'tcpserver_client_send':
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:517: warning: unused 
parameter 's'
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
'tcpserver_broadcast':
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:549: warning: unused 
parameter 's'
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
'tcpserver_connectpoll':
/home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:595: warning: pointer 
targets in passing argument 3 of 'accept' differ in signedness
make: *** [/home/roman/pd-svn/externals/mrpeach/net/tcpserver.o] Error 1
ro...@yoyo2:~/pd-svn/externals$ 

roman






___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-22 Thread Martin Peach
Roman Haefeli wrote:
 On Sun, 2009-02-22 at 17:30 -0500, Martin Peach wrote:
 
 Maybe you could try it (I just uploaded it to the svn at 
 http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/mrpeach/net/)
  
 and see if anything changes.

 now, i cannot compile it anymore, when i do:
 
 cd pd-svn/externals/
 make mrpeach
 
 i get:
 
 cc -DPD -O2 -I/home/roman/pd-svn/pd/src -Wall -W -ggdb 
 -I/home/roman/pd-svn/Gem/src -I/home/roman/pd-svn/externals/pdp/include 
 -DUNIX -Dunix -fPIC -o /home/roman/pd-svn/externals/mrpeach/net/tcpserver.o 
 -c /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
 'tcpserver_send_buf':
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:387: error: 'errno' 
 undeclared (first use in this function)
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:387: error: (Each 
 undeclared identifier is reported only once
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:387: error: for each 
 function it appears in.)


You need to add
#include <errno.h>
for linux it seems.
I have added that and committed it.

 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:375: warning: unused 
 variable 'timebefore'
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
 'tcpserver_send':
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:414: warning: unused 
 parameter 's'
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
 'tcpserver_client_send':
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:517: warning: unused 
 parameter 's'
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
 'tcpserver_broadcast':
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:549: warning: unused 
 parameter 's'
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c: In function 
 'tcpserver_connectpoll':
 /home/roman/pd-svn/externals/mrpeach/net/tcpserver.c:595: warning: pointer 
 targets in passing argument 3 of 'accept' differ in signedness

These are just warnings.

Martin


___
Pd-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] pd and tcp: what to do against crashes?

2009-02-21 Thread Martin Peach
Hi Roman,
I think it probably comes down to the code not checking for all possible 
error conditions. Under udp you can send as much as you like to 
nonexistent receivers but tcp needs an active connection.
Most likely the code is just assuming that everything is working properly.
It sounds as though data being sent to a client whose connection has 
just dropped but before it has timed out, will go into nevernever land 
and the thread will hang.
It would be nice to have a setup that could reliably reproduce the bug, 
then it would be much easier to fix. Probably having 2 machines 
connected and pulling the cable out of one at the right moment should do it.
Anyway I'll stop speculating now and have a look at the code...

Martin


Roman Haefeli wrote:
 hi all
 
 i've been working now quite some time with setups, where different
 instances of pd spread over the world are connected with each other over
 another instance of pd (i.e. serverpatch). i tried different classes for
 establishing tcp connections between clients and servers, namely
 [netclient]/[netserver], [tcpclient]/[tcpserver] or a mix of the two. no
 matter, what configuration is used, server crashes are likely to happen
 from time to time. the 'server' means here the instance of pd, that is
 running the patch containing either [tcpserver] or [netserver]. crash
 means: pd is still running, but not responding. when i start pd with gui
 for debugging purposes, the gui is also still there, but doesn't
 respond.
 
 when i am testing on my own, running several instance of pd on my local
 box (or on some more boxes, i have access to), everything runs fine,
 even under heavy load of data being exchanged between the clients. at
 most, there are some drop-outs, but never crashes. however, when having
 a netpd-session with several people connected from everywhere, crashes
 happen much more often. from my experience, i can tell, that those
 crashes are more likely to happen, if one or more clients have an
 unreliable internet connection  (or weak wifi signal etc). since tcp is
 connection-aware - tcp requires connection establishment (handshake) but
 also connection termination - and some clients just disappear without
 proper termination, the server still expects them to be there. this is
 also indicated by the number of connected clients reported by the
 server: when a client loses connection and then reconnects, the number
 is higher than the real number of connected clients. if this happens
 several times, the reported number of connected clients rises, because
 connections weren't terminated correctly. 
 
 now, when another client is sending 'broadcast' messages (messages meant
 to be sent to all connected clients), the server still tries to send the
 messages to the disappeared clients. 
 another situation: if the client, that disappeared, sent a dump request
 to another client just before vanishing, the other client will try to
 send the whole dump to the vanished client. i wonder now, what happens,
 if all those messages cannot be delivered by the server. i suspect this
 to be the cause of the crashes.
 
 from the pd user side, there seems to be no way to address this issue,
 since there is no way for the server (i.e. the patch around
 [netserver]/[tcpserver]) to tell, if a client silently disappeared. so
 the server will still try to deliver all the messages. i am suspecting,
 that some buffer overrun occurs here, but i cannot tell really without
 understanding the code of [netserver] or [tcpserver]. also i don't know,
 at which level those buffer overruns would happen: somewhere in the
 external (netserver/tcpserver) code, in the pd code, or even in the
 kernel/OS? the only thing i know is that i haven't seen apache or some
 other tcp server crash because of clients having a bad connection.
 so there must be a solution to this problem, but i don't know where to
 look for it. another problem is that, from a pd user perspective, one
 has very little control over the things happening at tcp level. if you
 need to send a big amount of data, there is no mechanism provided to
 send the data at maximum available bandwidth. so you either send
 everything at once, which fills the internal 4kb buffer of [net*] or
 [tcp*], so that a long drop-out occurs, until the buffer is emptied
 again. or the data is sent with time intervals between  each message in
 order to artificially reduce the bandwidth used. the latter approach has
 the disadvantage of not using the whole available bandwidth. also, in
 userspace you don't see if a message could be delivered or not, which
 will, as described in the situations above, lead to more messages being
 sent to a non-existing receiver, which might fill some buffer, which
 _probably_ leads to a crash of pd. 
 
 because of the above problems, i came to the conclusion that it is currently
 not possible to have several instances of pd connected with each other
 without the system  (i.e. one or more