Sure
Zoran Vasiljevic wrote:
Am 06.01.2006 um 16:56 schrieb Vlad Seryakov:
I uploaded driver.c into SFE, it needs more testing because after my
last corrections and cleanups it seems I broke something.
I would clean/destroy the "uploadTable" AND "hosts" hashtable
in NsWaitDriversShutdown(
Am 06.01.2006 um 16:56 schrieb Vlad Seryakov:
I uploaded driver.c into SFE, it needs more testing because after
my last corrections and cleanups it seems I broke something.
I would clean/destroy the "uploadTable" AND "hosts" hashtable
in NsWaitDriversShutdown() if the drivers have been stoppe
Am 07.01.2006 um 17:41 schrieb Vlad Seryakov:
Just in case for future enhancements, we may restrict the spooling
queue and keep socks in a waiting list in the driver for a timeout or
something else.
All right.
Just in case for future enhancements, we may restrict the spooling queue
and keep socks in a waiting list in the driver for a timeout or something else.
Zoran Vasiljevic wrote:
Am 06.01.2006 um 16:56 schrieb Vlad Seryakov:
I uploaded driver.c into SFE, it needs more testing because after my
last corre
Yes, it makes sense
Zoran Vasiljevic wrote:
Am 06.01.2006 um 19:00 schrieb Vlad Seryakov:
sockPtr will be queued into the connection queue for processing, it
may happen from driver thread or from spooler thread, works equally.
Hmhmhmhmhmhmhm...
The SOCK_SPOOL is only returned from SockRea
Am 06.01.2006 um 16:56 schrieb Vlad Seryakov:
I uploaded driver.c into SFE, it needs more testing because after
my last corrections and cleanups it seems I broke something.
What is the meaning of:
case SOCK_SPOOL:
if (!SockSpoolPush(sockPtr)) {
Am 07.01.2006 um 09:18 schrieb Zoran Vasiljevic:
if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
n = SockRead(sockPtr, 1);
U, sorry, wrong part...
if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
n = SockRead(sockPtr, 0
Am 06.01.2006 um 19:00 schrieb Vlad Seryakov:
sockPtr will be queued into the connection queue for processing, it
may happen from driver thread or from spooler thread, works equally.
Hmhmhmhmhmhmhm...
The SOCK_SPOOL is only returned from SockRead(). The SockRead()
is only attempted in the Dri
sockPtr will be queued into the connection queue for processing; it may
happen from the driver thread or from the spooler thread, and both work
equally well.
Zoran Vasiljevic wrote:
Am 06.01.2006 um 18:25 schrieb Vlad Seryakov:
It is already fixed in the last uploaded driver.c; it works well now
one question:
Am 06.01.2006 um 18:25 schrieb Vlad Seryakov:
It is already fixed in the last uploaded driver.c; it works well now
one question:
what happens if the:
sockPtr->keep = 0;
if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
n = SockRead(sockPtr, 1)
It is already fixed in the last uploaded driver.c; it works well now
Zoran Vasiljevic wrote:
Am 06.01.2006 um 17:46 schrieb Vlad Seryakov:
There are a lot of options here actually like:
- config option to enable/disable spooling
I do not think this is needed. I'd enable it all the time.
Or
Am 06.01.2006 um 17:46 schrieb Vlad Seryakov:
There are a lot of options here actually like:
- config option to enable/disable spooling
I do not think this is needed. I'd enable it all the time.
Or perhaps, if we make it configurable (see below) a
0 count of spooler threads disables the functi
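If the configurable route is taken, with a 0 spooler-thread count disabling the feature, the setup could look like the sketch below. This is a hypothetical configuration fragment in the server's usual Tcl config style: the parameter names are assumptions for illustration, not a confirmed nssock interface.

```tcl
ns_section "ns/server/server1/module/nssock"
# Hypothetical parameters, for illustration only:
ns_param   spoolerthreads  1      ;# 0 would disable spooling entirely
ns_param   maxreadahead    8192   ;# bytes the driver reads before queueing
```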
I examined the patch in the RFE and it seems to me (not sure though)
that you have more or less duplicated the driver thread processing.
So, we'd have just ONE spool thread collecting data from sockets and
not a spool-thread PER socket?
Yes, it is a smaller replica of the driver thread because it doe
I uploaded driver.c into SFE; it needs more testing because after my
last corrections and cleanups it seems I broke something.
Zoran Vasiljevic wrote:
Am 04.01.2006 um 16:15 schrieb Vlad Seryakov:
The main reason to reuse driver.c is that the spooler is almost
identical to the driver thread, and
Am 04.01.2006 um 16:15 schrieb Vlad Seryakov:
The main reason to reuse driver.c is that the spooler is almost
identical to the driver thread, and uses the same functions as the driver.
The spooler can be disabled (config option); in this case the driver works
as usual. Also it does parsing and other Sock rel
I find it confusing that the actual spooling code is not in the
SpoolThread, but still in SockRead().
Take a look at nsd/task.c. I think you should be able to implement
this as a Ns_Task callback, which gives you the extra thread and all
the poll handling etc. for free. Move the spooling code
On 1/3/06, Vlad Seryakov <[EMAIL PROTECTED]> wrote:
> I am attaching the whole driver.c file because patch would not be very
> readable.
>
> See if this a good solution or not, it works and uses separate thread
> for all reading and spooling, also all upload stats are done in the
> spooler thread,
Am 04.01.2006 um 06:20 schrieb Vlad Seryakov:
I am attaching the whole driver.c file because patch would not be
very readable.
Won't go as the list limits the message to 40K.
Go and upload it into the RFE. That should work.
Zoran
I am attaching the whole driver.c file because patch would not be very
readable.
See if this is a good solution or not; it works and uses a separate thread
for all reading and spooling. Also all upload stats are done in the
spooler thread, so the driver thread now works without any locking.
Vlad Sery
Stephen Deasey wrote:
The driver thread gets the connection and reads all up to maxreadahead.
It then passes the connection to the conn thread.
The conn thread decides to run that connection entirely (POST of a
simple form
or GET, all content read-in) OR it decides to pass the connection to the
s
I submitted a Service Request with a new patch with spooler support. Very
simple, but it seems to be working well.
Zoran Vasiljevic wrote:
Am 03.01.2006 um 16:47 schrieb Vlad Seryakov:
Spool thread can be a replica of the driver thread; it will do the same
as the driver and then pass the Sock to the conn threa
Am 03.01.2006 um 16:47 schrieb Vlad Seryakov:
Spool thread can be a replica of the driver thread; it will do the same
as the driver and then pass the Sock to the conn thread. So making it
C-based is not that hard, and it will still be fast and small.
Plus, upload statistics now will be handled in the sp
Am 03.01.2006 um 18:50 schrieb Stephen Deasey:
What happens to the conn thread after this? It can't wait for
completion, that would defeat the purpose.
Do traces (logging etc.) run now, in the conn thread, or later in a
spool thread? If logging runs now, but the upload fails, the log will
be w
On 1/3/06, Zoran Vasiljevic <[EMAIL PROTECTED]> wrote:
>
> Am 03.01.2006 um 01:07 schrieb Vlad Seryakov:
>
> >
> > Will it be more generic way just to set upper limit, if we see that
> > upload exceeds that limit, pass control to conn thread and let it
> > finish reading, this way, even spooling to
The spool thread can be a replica of the driver thread: it will do the same
as the driver and then pass the Sock to the conn thread. So making it
C-based is not that hard, and it will still be fast and small.
Plus, upload statistics will now be handled in the spool thread, not the
driver thread, so no overhead and
Am 03.01.2006 um 11:21 schrieb Andrew Piskorski:
Hm, does Tcl support asynchronous (non-blocking) IO for both network
sockets and local files? Tcl has 'fconfigure -blocking 0' of course,
but I don't know for sure whether that really does what you want for
this application. If Tcl DOES support
Am 03.01.2006 um 10:49 schrieb Bernd Eidenschink:
Of course, AJAX is still evolving and so are browser features; it's
good to have stats and it is often requested, but for now there are
use cases where the client approach is still better.
All very true...
The ultimate solution is: both.
On Tue, Jan 03, 2006 at 09:49:02AM +0100, Zoran Vasiljevic wrote:
> The spooling threads operate in event-loop mode. There has to be
some kind of dispatcher which evenly distributes the processing among
> spooling threads.
Or just start out with one spool thread, support for multiple spool
threa
> o. add upload statistics and control?
Reading the current very interesting thread, I would vote for adding it if
the stats are some kind of by-product of more stability against DOS attacks,
multiple slow clients, reduced memory needs, etc., as you all pointed out.
Because what is still not
Am 03.01.2006 um 01:07 schrieb Vlad Seryakov:
Would a more generic way be just to set an upper limit? If we see that
the upload exceeds that limit, pass control to the conn thread and let it
finish reading; this way even spooling to a file will work, because
each upload conn thread will use their own
Stephen Deasey wrote:
Your single thread helper mechanism above may not work the disk as
hard as multiple threads, but it does mean that the driver thread
doesn't have to block waiting for disk.
it does not necessarily have to write to the disk.
You've also moved quota checking etc. into the
Check my last upload patch; it may be useful until a more generic
approach is developed.
Zoran Vasiljevic wrote:
Am 02.01.2006 um 21:13 schrieb Stephen Deasey:
If the problem is that threads are too "heavy", then it's pointless to
use aio_write() etc. if the underlying implementation al
As I see it, by default nsd does not use mmap, only for uploads. Yes, it
might need review again; mmap for big movie/iso files is not
appropriate, and I am using nsd for those kinds of files most of the time.
Stephen Deasey wrote:
On 1/2/06, Vlad Seryakov <[EMAIL PROTECTED]> wrote:
I d
It was kind of random, sometimes SIGBUS, next time SIGSEGV; it needs
another round of testing. The goal was just a simple replacement to see
if aio_write might work, and it looks like it does not.
I checked samba and squid; they use some kind of high-level wrappers
around AIO, most of the time their own
Am 02.01.2006 um 21:13 schrieb Stephen Deasey:
If the problem is that threads are too "heavy", then it's pointless to
use aio_write() etc. if the underlying implementation also uses
threads. It will be worse than using conn threads alone, as there
will be extra context switches as control boun
On 1/2/06, Vlad Seryakov <[EMAIL PROTECTED]> wrote:
> I did a very simple test: replaced write with aio_write and at the end
> checked aio_error/aio_return; they all returned 0, so mmap should work
> because the file is synced. When I was doing aio_write I used aio_offset, so
> each aio_write would put da
On 1/2/06, Zoran Vasiljevic <[EMAIL PROTECTED]> wrote:
>
> Am 02.01.2006 um 17:23 schrieb Stephen Deasey:
>
> >> So, what is the problem?
> >
> >
> > http://www.gnu.org/software/libc/manual/html_node/Configuration-of-
> > AIO.html
> >
>
> Still, where is the problem? The fact that Linux implements
Am 02.01.2006 um 19:13 schrieb Vlad Seryakov:
I did a very simple test: replaced write with aio_write and at the
end checked aio_error/aio_return; they all returned 0, so mmap
should work because the file is synced. When I was doing aio_write I
used aio_offset, so each aio_write would put data int
I did a very simple test: replaced write with aio_write and at the end
checked aio_error/aio_return; they all returned 0, so mmap should work
because the file is synced. When I was doing aio_write I used aio_offset,
so each aio_write would put data into a separate region of the file.
Unfortunately i re
Am 02.01.2006 um 17:23 schrieb Stephen Deasey:
So, what is the problem?
http://www.gnu.org/software/libc/manual/html_node/Configuration-of-
AIO.html
Still, where is the problem? The fact that Linux implements them as
userlevel thread does not mean much to me.
On Solaris, each thread you
On 1/2/06, Zoran Vasiljevic <[EMAIL PROTECTED]> wrote:
>
> Am 02.01.2006 um 08:36 schrieb Stephen Deasey:
>
> > On 12/31/05, Zoran Vasiljevic <[EMAIL PROTECTED]> wrote:
> >>
> >> Am 31.12.2005 um 20:12 schrieb Vlad Seryakov:
> >>
> >>> aio_read/aio_write system calls look like they are supported under Linu
Am 02.01.2006 um 12:06 schrieb Zoran Vasiljevic:
Am 02.01.2006 um 04:43 schrieb Vlad Seryakov:
Tried it; nsd started crashing, and I am guessing that the problem
is the combination of aio_write and mmap.
When I start spooling, I just submit aio_write and return
immediately, so there are a lot
Am 02.01.2006 um 04:43 schrieb Vlad Seryakov:
Tried it; nsd started crashing, and I am guessing that the problem
is the combination of aio_write and mmap.
When I start spooling, I just submit aio_write and return
immediately, so there are a lot of quick aio_write calls. By the
time I reach mmap
Am 02.01.2006 um 08:36 schrieb Stephen Deasey:
On 12/31/05, Zoran Vasiljevic <[EMAIL PROTECTED]> wrote:
Am 31.12.2005 um 20:12 schrieb Vlad Seryakov:
aio_read/aio_write system calls look like they are supported under Linux
Yes. Most modern OSes support some kind of kaio.
I have checked solari
Am 02.01.2006 um 09:21 schrieb Stephen Deasey:
POST requests with small amounts of data are really common. Think of a
user logging in via a web form. A couple of bytes.
The way you've coded it at the mo (and I realize it's just a first
cut), all requests with more than 0 bytes of body content w
Am 02.01.2006 um 09:30 schrieb Stephen Deasey:
Right. So the thread handling the upload would lock/unlock every
upload_size/10Kbytes or seconds_to_upload. The frequency of locking
for the every 10Kbytes case would increase (and hence chance of
blocking) with the capacity of your and your clien
On 12/31/05, Zoran Vasiljevic <[EMAIL PROTECTED]> wrote:
>
> Am 31.12.2005 um 19:03 schrieb Vlad Seryakov:
>
> > Could be a config option, like 1 second or 10 Kbytes
>
> Yup. For example. This could reduce locking attempts.
> Seems fine to me.
Right. So the thread handling the upload would lock/unlock
On 12/31/05, Vlad Seryakov <[EMAIL PROTECTED]> wrote:
>
> > I think we talked about this before, but I can't find it in the
> > mailing list archive. Anyway, the problem with recording the upload
> > process is all the locking that's required. You could minimize this,
> > e.g. by only recording u
On 12/31/05, Gustaf Neumann <[EMAIL PROTECTED]> wrote:
> Zoran Vasiljevic wrote:
> >
> > But that would occupy the conn thread for ages, right?
> > I can imagine several slow-line large-file uploads could
> > consume many of those. Wasn't that the reason everything
> > was moved in the driver threa
On 12/31/05, Zoran Vasiljevic <[EMAIL PROTECTED]> wrote:
>
> Am 31.12.2005 um 20:12 schrieb Vlad Seryakov:
>
> > aio_read/aio_write system calls look like they are supported under Linux
> >
>
> Yes. Most modern OSes support some kind of kaio.
> I have checked Solaris, Linux and Darwin, and all do.
Hmm
On 1/1/06, Vlad Seryakov <[EMAIL PROTECTED]> wrote:
> Another solution to reduce locking: just allocate maxconn structures,
> each with its own mutex, and perform locking for the particular struct
> only, so writers will not block other writers, only writer/reader. More
> memory but less contention.
>
> I
Another solution to reduce locking: just allocate maxconn structures,
each with its own mutex, and perform locking for the particular struct
only, so writers will not block other writers, only writer/reader. More
memory but less contention.
I will try this tomorrow.
Zoran Vasiljevic wrote:
Am 31.1
Tried it; nsd started crashing, and I am guessing that the problem is
the combination of aio_write and mmap.
When I start spooling, I just submit aio_write and return immediately,
so there are a lot of quick aio_write calls. By the time I reach mmap,
it looks like it's always ahead of the actual writing
Am 31.12.2005 um 20:12 schrieb Vlad Seryakov:
aio_read/aio_write system calls look like they are supported under Linux
Yes. Most modern OSes support some kind of kaio.
I have checked Solaris, Linux and Darwin, and all do.
The problem with spawning yet another thread is
system resources. Each new t
aio_read/aio_write system calls look like they are supported under Linux
Zoran Vasiljevic wrote:
Am 31.12.2005 um 19:26 schrieb Vlad Seryakov:
What about a separate thread doing all spooling I/O? The driver thread
will send the buffer to be written and immediately continue while the
spooling thread will do the writ
Am 31.12.2005 um 19:26 schrieb Vlad Seryakov:
What about a separate thread doing all spooling I/O? The driver thread
will send the buffer to be written and immediately continue while the
spooling thread will do the writes?
This is another option. I like the kaio because it saves you
yet-another-thread to man
Also, the blocking of the driver thread during spooling
into the file should be taken care of. I wanted to look into
the kernel async IO (as it is available on Darwin/Sol/Linux).
What about a separate thread doing all spooling I/O? The driver thread
will send the buffer to be written and immediately conti
Am 31.12.2005 um 19:03 schrieb Vlad Seryakov:
Could be a config option, like 1 second or 10 Kbytes
Yup. For example. This could reduce locking attempts.
Seems fine to me.
Also, the blocking of the driver thread during spooling
into the file should be taken care of. I wanted to look into
the kernel asy
Could be a config option, like 1 second or 10 Kbytes
Zoran Vasiljevic wrote:
Am 31.12.2005 um 18:20 schrieb Vlad Seryakov:
Another possible solution can be pre-allocating maxconn upload
structs and updating them without locks; it is an integer anyway, so
no need to lock, a 4-byte write is never innter
Am 31.12.2005 um 18:20 schrieb Vlad Seryakov:
Another possible solution can be pre-allocating maxconn upload
structs and updating them without locks; it is an integer anyway, so
no need to lock: a 4-byte write is never interrupted, usually it is 1
CPU instruction (true for Intel, maybe not for Spa
What I have/had in mind is async writes from the driver thread.
Most of the OS'es have this feature (kaio) so we can employ it.
The question of locking, however, still remains in that case.
So the decision has to be made on what is cheaper: locking,
or spooling to disk out of the conn thread? I hav
I think we talked about this before, but I can't find it in the
mailing list archive. Anyway, the problem with recording the upload
process is all the locking that's required. You could minimize this,
e.g. by only recording uploads above a certain size, or to a certain
URL.
Yes, that is true,
Am 31.12.2005 um 15:27 schrieb Gustaf Neumann:
Maybe, the receiving thread is not needed at all,
since the driver thread could handle everything as well.
This is what I meant. The driver thread is very hot, as it
does not (should not?) find itself blocking at all.
At the moment it does, as
Zoran Vasiljevic wrote:
But that would occupy the conn thread for ages, right?
I can imagine several slow-line large-file uploads could
consume many of those. Wasn't that the reason everything
was moved into the driver thread for 4.0?
What I have/had in mind is async writes from the driver th
Am 31.12.2005 um 11:58 schrieb Stephen Deasey:
I think we talked about this before, but I can't find it in the
mailing list archive. Anyway, the problem with recording the upload
process is all the locking that's required. You could minimize this,
e.g. by only recording uploads above a certai
On 12/30/05, Vlad Seryakov <[EMAIL PROTECTED]> wrote:
> On that note I have another idea I'd like to discuss before I even start
> coding a prototype. There is a thread on the aolserver mailing list about
> upload progress, so I thought: would it be a good idea to have a global
> url-specific cache of all u