While I agree that bandwidth is the rate of output, and that you cannot
easily increase the bandwidth between all clients and the server, you
should be able to decrease the rate at which you use it fairly easily.
From what I've seen, page_to_char works with a buffer and sends the
client a number of lines, with that number set by the player using the
scroll command.  If you can stop the output in its tracks without
affecting the rest of the mud, why can you not send a page, wait a
pulse, send another page, wait a pulse, and so on?  In effect you are
slowing down the rate of output, or cutting the bandwidth required
between the server and client.

I did not mean to assume that wait_state itself could be used; I was
drawing an analogy.  A wait_state controls the rate of input: it does
not discard commands sent during the wait, it buffers them and executes
them after the wait_state is up.  Why could the output not work in a
similar way?

I would also add that a denial of service is not a concern when the
output pacing depends on nothing but a wait the client does not control.
For a DoS to occur the client has to have some control over the rate of
output, and in this case it doesn't: if the client ignores the data sent
from the server, the server is still going to send the rest of the data
or flush the buffer, it will just send the data more slowly than it
would without the output control.
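
Roughly what I have in mind, as a sketch only (the out_pulse_wait field,
the show_next_page() helper and PULSE_PER_PAGE are made-up names, not
stock ROM code; showstr_point is the existing paging pointer as far as I
can tell):

/* Hypothetical sketch: pace paged output by pulses instead of waiting
 * for the player to hit return.  Called once per pulse from the game
 * loop for every descriptor that still has paged text pending. */
void update_paged_output( DESCRIPTOR_DATA *d )
{
    /* nothing left to page out for this connection */
    if ( d->showstr_point == NULL || *d->showstr_point == '\0' )
        return;

    /* still waiting out the delay between pages */
    if ( d->out_pulse_wait > 0 )
    {
        d->out_pulse_wait--;
        return;
    }

    /* send the next 'scroll' lines, much as show_string() does now,
     * then arm the delay again instead of printing a prompt */
    show_next_page( d );                  /* hypothetical helper */
    d->out_pulse_wait = PULSE_PER_PAGE;   /* e.g. one or two pulses */
}

The client never sees the [hit return to continue] prompt; the pulse
delay does the pacing instead.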

-----Original Message-----
From: 'brian moore' [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 19, 2002 05:40 PM
To: Hiddukel
Subject: Re: Disconnects

On Tue, Nov 19, 2002 at 05:07:28PM -0500, Hiddukel wrote:
> If I'm not mistaken the code already allows for this type of buffering
> via the scroll option.  Basically what I am talking about is an
> automatic paging system whereby the user need not send a carriage return
> at the [hit return to continue] prompt, it would simply wait a pulse or
> two and then continue the output, preferably without a prompt in
> between.

Then design a buffer system that buffers more than 32k or so, possibly
infinite.  (Ie, can handle ALL the output from any command, as well as
intermediate MUD output such as channels that will appear mid-stream.)
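
Something along these lines, as a rough sketch (the struct and field
names are illustrative, not the actual comm.c fields, and stock ROM uses
its own bool/TRUE/FALSE rather than stdbool.h):

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* A growable output buffer: double the allocation whenever the
 * pending data would overflow it, instead of closing the socket. */
typedef struct out_buf
{
    char   *data;
    size_t  len;      /* bytes queued, not yet written */
    size_t  size;     /* current allocation */
} OUT_BUF;

bool outbuf_append( OUT_BUF *b, const char *txt, size_t n )
{
    while ( b->len + n > b->size )
    {
        size_t newsize = b->size > 0 ? b->size * 2 : 4096;
        char  *grown   = realloc( b->data, newsize );

        if ( grown == NULL )
            return false;     /* out of memory: caller decides */
        b->data = grown;
        b->size = newsize;
    }
    memcpy( b->data + b->len, txt, n );
    b->len += n;
    return true;
}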

>  The pulse or two should in most cases allow the connection to
> finish receiving some of the data before more is sent, basically slowing
> the output down without affecting the performance of the mud.  True the
> mud will seem a bit slower because of the pulse wait in the output, but
> given that we can control the rate of input using wait_state I don't see
> why we can't control the rate of output in much the same way.

In other words, increase buffer length?

I don't see the 'mud will seem a bit slower' unless you consume enough
RAM to swap.

As for controlling the 'rate of output', you can not do that without
increasing the bandwidth between the client and the server.  'rate of
output' -IS- bandwidth.

Or do you mean 'the maximum backlog of data before the connection is
assumed to be too backlogged to ever catch up and is better off closed'?
That you change by... increasing the buffer size.  Ie, buffer more, and
the buffer full condition will occur less frequently.  With an infinite
sized buffer, it will never occur.  You will have to determine what an
acceptable limit is.  (And do remember there can be a Denial of Service
attack the larger you make this: it is quite possible to configure a
client to lie and refuse to accept more packets, forcing the mud to
buffer more and more.  Limits are a good thing for out of control
clients, whether intentional or caused by network or software problems.)
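
In sketch form, with an arbitrary placeholder limit (outtop is the amount
of pending output on the descriptor, if I recall the field correctly):

/* Run each pulse: if a client's backlog passes the limit, assume it
 * will never catch up and close the connection.  The 64k figure is
 * only a placeholder; pick whatever you decide is acceptable. */
#define MAX_OUTPUT_BACKLOG  (64 * 1024)

void check_output_backlog( DESCRIPTOR_DATA *d )
{
    if ( d->outtop > MAX_OUTPUT_BACKLOG )
        close_socket( d );   /* out-of-control client, for whatever reason */
}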

> This would seem to answer my own question, however I am not familiar
> with how the wait_state works, only that it does work, so any
> information that you could give me on how to add an output wait_state
> would be much appreciated.

Wait_state has nothing to do with it.  That is for "ignore any input
from this socket for this number of pulses", ie, the delay when typing
'save'.

See page_to_char()?
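
From memory, the input side looks roughly like this inside the game
loop's descriptor pass (commands stay queued in the descriptor's input
buffer while wait counts down, which is why they run after the delay):

/* Rough shape of the per-pulse input handling: while ch->wait is
 * positive, skip reading this socket; queued input is processed
 * once the counter reaches zero. */
if ( d->character != NULL && d->character->wait > 0 )
{
    --d->character->wait;
    continue;               /* skip this descriptor this pulse */
}
read_from_buffer( d );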

> Thanks,
> Matt Bradbury
> 
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of brian
> moore
> Sent: Tuesday, November 19, 2002 01:06 PM
> To: ROM
> Subject: Re: Disconnects
> 
> <snip>
> Your thinking isn't considering the whole problem.
> 
> Yes, the telnet protocol allows for unlimited data (how else would you
> maintain a connection for days).
> 
> What it doesn't control is "when UserA issues a command that requires
> sending more data than he can receive, how do we buffer all that data so
> we can continue processing input and output for others until he's ready
> to accept more data."
> 
> Did you catch the keyword there?  It's 'buffer'.
> </snip>
> 
> 



