Re: AIO was Re: Kernel threads
Christopher Sedore wrote:
> On Thu, 6 Jan 2000, Arjan de Vet wrote:
> > Jordan K. Hubbard wrote:
> > > This is very interesting data and I was just wondering about the
> > > actual state of functionality in our AIO code just the other day,
> > > oddly enough. Does anyone have a PR# for the mentioned patches?
> > kern/12053. A Dec 16 version of the patch can be found at:
> > http://tfeed.maxwell.syr.edu/aio-diff
> They won't apply cleanly because some new syscalls have been added.
> There may be another related PR (although a quick search a few seconds
> ago didn't show it)--this patch set also fixes a problem where signals
> were not posted for aio completion under certain circumstances (the
> code just wasn't there before). Just found the PR--kern/13075.

Now that we've found the two PRs and a reasonably up-to-date version of
the patch, are these changes going to be committed? Or are they still
under review? Just curious ;-).

Arjan

--
Arjan de Vet, Eindhoven, The Netherlands                 [EMAIL PROTECTED]
URL: http://www.iae.nl/users/devet/ for PGP key: finger [EMAIL PROTECTED]

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message
Re: AIO was Re: Kernel threads
On Mon, Jan 10, 2000 at 10:48:29PM +0100, Arjan de Vet wrote:
> Christopher Sedore wrote:
> > On Thu, 6 Jan 2000, Arjan de Vet wrote:
> > > Jordan K. Hubbard wrote:
> > > > This is very interesting data and I was just wondering about the
> > > > actual state of functionality in our AIO code just the other day,
> > > > oddly enough. Does anyone have a PR# for the mentioned patches?
> > > kern/12053. A Dec 16 version of the patch can be found at:
> > > http://tfeed.maxwell.syr.edu/aio-diff
> > They won't apply cleanly because some new syscalls have been added.
> > There may be another related PR (although a quick search a few seconds
> > ago didn't show it)--this patch set also fixes a problem where signals
> > were not posted for aio completion under certain circumstances (the
> > code just wasn't there before). Just found the PR--kern/13075.
> Now that we've found the two PRs and a reasonably up-to-date version of
> the patch, are these changes going to be committed? Or are they still
> under review? Just curious ;-).

I'm reviewing them right now.

Jason
Re: AIO was Re: Kernel threads
On Thu, 6 Jan 2000, Arjan de Vet wrote:
> Jordan K. Hubbard wrote:
> > This is very interesting data and I was just wondering about the
> > actual state of functionality in our AIO code just the other day,
> > oddly enough. Does anyone have a PR# for the mentioned patches?
> kern/12053. A Dec 16 version of the patch can be found at:
> http://tfeed.maxwell.syr.edu/aio-diff

They won't apply cleanly because some new syscalls have been added. There
may be another related PR (although a quick search a few seconds ago
didn't show it)--this patch set also fixes a problem where signals were
not posted for aio completion under certain circumstances (the code just
wasn't there before). Just found the PR--kern/13075.

-Chris
Re: AIO was Re: Kernel threads
With respect to AIO... we run a data server which multiplexes on the
select() function and uses AIO to do all its I/O. This has been a very
stable system.

    system                       : 4.0-19990827-SNAP
    start time                   : 1999/12/24 11:14:44
    up time (days hh:mm:ss)      : 12 13:32:53
    Current/Max/Total connections: 0 / 2 / 244
    Total requests               : 6228
    Total aio bytes written      : 573499989 (546.9M)
    Current/Max stat daemons     : 0 / 1
    Current/Max Queued for stat  : 0 / 0
    Current/Max cp daemons       : 0 / 2
    Current/Max Queued for cp    : 0 / 0
    Current/Max aio_write daemons: 0 / 2
    Current/Max Queued for write : 0 / 0
    Current/Max telnets          : 1 / 1

The above is a sample of the stats kept by the daemon. The numbers are
very low due to the holidays. Basically, this thing hands raw data to
applications running on NT, where the data is kept on Network Appliance
fileservers attached to FreeBSD boxes. Direct CIFS attachment to the data
from the NT system does not come close to the throughput we attain using
this process.

I would very much like to see these patches applied also. At a minimum,
it means that the following type of code loop can be removed, since we
would know immediately which aio operation completed. The marked loop
below becomes relatively hot as the max number of outstanding aio
operations is increased and the number of simultaneous hits on the
server grows.

    /*---------------------------------------------------+
    | ST_AIO                                             |
    |                                                    |
    | A task in the ST_AIO state means that one of our   |
    | aio_writes has finished.  We will loop thru all    |
    | outstanding aio_writes to see which one completed. |
    +---------------------------------------------------*/
    case ST_AIO:
        ... code deleted ...
        /*-----------------------------------------------+
        | We know we have a completed request; find out  |
        | which one it is.                               |
        +-----------------------------------------------*/
        for (j = 0; j < MAX_WRITERS; j++) {
            if (aio[j].task && aio_error(aio[j].iocb) != EINPROGRESS)
                break;
        }
        ... code deleted ...
        /*-----------------------------------------------+
        | Get the actual return code, check length,      |
        | decrement active count, send message.          |
        +-----------------------------------------------*/
        t  = aio[j].task;
        rc = aio_return(aio[j].iocb);

Since we are getting ready to bring this process forward and integrate
the new signal handling code, now would be a great time for these patches
to be applied and have some heavy testing done on them.

Thanks!

John

In article [EMAIL PROTECTED] you write:
> > The best fix I've thought of thus far (other than async I/O, which I
> > understand isn't ready for prime time) would be to have a number of
> > kernel
> Speaking of AIO, which I would really like to use if possible, how
> actively maintained is it? The copyright on vfs_aio.c is 1997,
> suggesting to me that John Dyson has moved on to other things.

Yep, that's right. Quite recently Christopher Sedore has done some work
on vfs_aio.c to make it work better with sockets, and he also added a
very useful aio_waitcomplete system call which returns the first aiocb
(AIO control block) from the 'completed' queue. It would be nice if these
patches could be added to FreeBSD-current.

About AIO not being ready for prime time: I did some experiments
recently, throwing up to 256 aio requests on one fd (a raw disk device)
into the system, and it worked without any problems. The only time I got
a panic was when (I think) I had a negative aiocb->offset (I still need
to reproduce this). See http://www.iae.nl/users/devet/freebsd/aio/ for my
aiotest.c program.

I'm thinking about using AIO for a faster Squid file system by using raw
disk devices instead of UFS, which has too much overhead for Squid.

Arjan
Re: AIO was Re: Kernel threads
This is very interesting data and I was just wondering about the actual
state of functionality in our AIO code just the other day, oddly enough.
Does anyone have a PR# for the mentioned patches?

- Jordan
Re: AIO was Re: Kernel threads
In article [EMAIL PROTECTED] you write:
> > The best fix I've thought of thus far (other than async I/O, which I
> > understand isn't ready for prime time) would be to have a number of
> > kernel
> Speaking of AIO, which I would really like to use if possible, how
> actively maintained is it? The copyright on vfs_aio.c is 1997,
> suggesting to me that John Dyson has moved on to other things.

Yep, that's right. Quite recently Christopher Sedore has done some work
on vfs_aio.c to make it work better with sockets, and he also added a
very useful aio_waitcomplete system call which returns the first aiocb
(AIO control block) from the 'completed' queue. It would be nice if these
patches could be added to FreeBSD-current.

About AIO not being ready for prime time: I did some experiments
recently, throwing up to 256 aio requests on one fd (a raw disk device)
into the system, and it worked without any problems. The only time I got
a panic was when (I think) I had a negative aiocb->offset (I still need
to reproduce this). See http://www.iae.nl/users/devet/freebsd/aio/ for my
aiotest.c program.

I'm thinking about using AIO for a faster Squid file system by using raw
disk devices instead of UFS, which has too much overhead for Squid.

Arjan

--
Arjan de Vet, Eindhoven, The Netherlands                 [EMAIL PROTECTED]
URL: http://www.iae.nl/users/devet/ for PGP key: finger [EMAIL PROTECTED]