Re: Generic I/O

2011-11-15 Thread michi1
Hi!

On 11:40 Tue 15 Nov , Kai Meyer wrote:
> On 11/15/2011 11:13 AM, mic...@michaelblizek.twilightparadox.com wrote:
...
> > You might want to take a look at wait queues (the kernel equivalent to
> > pthread "conditions"). Basically, instead of calling msleep(), you call
> > wait_event(). In the function which decrements numbios, you check whether
> > it is 0 and, if so, call wake_up().
...
> That sounds very promising. When I read up on wait_event here:
> lxr.linux.no/#linux+v2.6.32/include/linux/wait.h#L191
> 
> It sounds like it's basically doing the same thing. I would call it like so:
> 
> wait_event(wq, atomic_read(numbios) == 0);

Yes, you would do something like this.

> To make sure I understand, this seems very much like what I'm doing, 
> except I'm being woken up every time a bio finishes instead of being 
> woken up once every millisecond. That is, I'm assuming I would use the 
> same wait queue for all my bios.

You are *not* woken up every time a bio finishes. You are woken up every
time you call wake_up(). You could do something like:

if (atomic_dec_return(numbios) == 0)
        wake_up(&wq);
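
Putting the two halves together, a minimal sketch might look like this
(the wait queue head name wq follows the snippet above; declare it
wherever your counter lives):

#include <linux/wait.h>
#include <asm/atomic.h>

static DECLARE_WAIT_QUEUE_HEAD(wq);

/* Completion side, called once per finished bio. atomic_dec_return()
 * returns the post-decrement value, so exactly one decrementer sees
 * the drop to zero and issues the wakeup. */
static void one_bio_done(atomic_t *numbios)
{
        if (atomic_dec_return(numbios) == 0)
                wake_up(&wq);
}

/* Caller side, replacing the msleep() polling loop. wait_event()
 * re-checks the condition after every wakeup, so spurious wakeups
 * are harmless. */
static void wait_for_bios(atomic_t *numbios)
{
        wait_event(wq, atomic_read(numbios) == 0);
}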

> During my testing, when I do a lot of disk I/O, I may potentially have 
> hundreds of threads waiting on anywhere between 1 and 32 bios. Help me 
> understand the sort of impact you think I might see between having 
> hundreds waiting for a millisecond, and having hundreds get woken up 
> each time a bio completes. It seems like it would be very helpful in low 
> I/O scenarios, especially when there are fast disks involved. I'm 
> concerned that during heavy I/O loads, I'll be doing a lot of 
> atomic_reads, and I have the impression that atomic_read isn't the 
> cheapest operation.

The wakeups might cause some overhead. However, I would worry more about
scheduling overhead on SMP systems than about atomic_read() performance
(on most architectures atomic_read() is just a plain load).
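
One thing to watch with hundreds of waiters sharing a single wait queue
head: a plain wake_up() on a queue of non-exclusive waiters wakes all of
them, and each one then re-checks its condition. Giving each batch of
bios its own wait_queue_head_t, e.g. embedded in a per-request struct,
keeps the wakeups targeted. A hypothetical shape:

struct bio_batch {
        atomic_t numbios;
        wait_queue_head_t wq;   /* init_waitqueue_head() before use */
};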

-Michi
-- 
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com



Re: Generic I/O

2011-11-15 Thread Kai Meyer
On 11/15/2011 11:13 AM, mic...@michaelblizek.twilightparadox.com wrote:
> Hi!
>
> On 12:15 Mon 14 Nov , Kai Meyer wrote:
> ...
>
>> My
>> caller function has an atomic_t value that I set equal to the number of
>> bios I want to submit. Then I pass a pointer to that atomic_t around to
>> each of the bios which decrement it in the endio function for that bio.
>>
>> Then the caller does this:
>> while (atomic_read(numbios) > 0)
>>         msleep(1);
>>
>> I'm finding the msleep(1) is a really really really long time,
>> relatively. It seems to work ok if I just have an empty loop, but it
>> also seems to me like I'm re-inventing a wheel here.
> ...
>
> You might want to take a look at wait queues (the kernel equivalent to
> pthread "conditions"). Basically, instead of calling msleep(), you call
> wait_event(). In the function which decrements numbios, you check whether
> it is 0 and, if so, call wake_up().
>
>   -Michi

That sounds very promising. When I read up on wait_event here:
lxr.linux.no/#linux+v2.6.32/include/linux/wait.h#L191

It sounds like it's basically doing the same thing. I would call it like so:

wait_event(wq, atomic_read(numbios) == 0);
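
(For that to compile, wq has to be a wait_queue_head_t shared with
whoever calls wake_up(); a minimal declaration, name taken from the
snippet above, would be:)

static DECLARE_WAIT_QUEUE_HEAD(wq);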

To make sure I understand, this seems very much like what I'm doing, 
except I'm being woken up every time a bio finishes instead of being 
woken up once every millisecond. That is, I'm assuming I would use the 
same wait queue for all my bios.

During my testing, when I do a lot of disk I/O, I may potentially have 
hundreds of threads waiting on anywhere between 1 and 32 bios. Help me 
understand the sort of impact you think I might see between having 
hundreds waiting for a millisecond, and having hundreds get woken up 
each time a bio completes. It seems like it would be very helpful in low 
I/O scenarios, especially when there are fast disks involved. I'm 
concerned that during heavy I/O loads, I'll be doing a lot of 
atomic_reads, and I have the impression that atomic_read isn't the 
cheapest operation.

-Kai Meyer



Re: Generic I/O

2011-11-15 Thread michi1
Hi!

On 12:15 Mon 14 Nov , Kai Meyer wrote:
...

> My 
> caller function has an atomic_t value that I set equal to the number of 
> bios I want to submit. Then I pass a pointer to that atomic_t around to 
> each of the bios which decrement it in the endio function for that bio.
> 
> Then the caller does this:
> while (atomic_read(numbios) > 0)
>         msleep(1);
> 
> I'm finding the msleep(1) is a really really really long time, 
> relatively. It seems to work ok if I just have an empty loop, but it 
> also seems to me like I'm re-inventing a wheel here.
...

You might want to take a look at wait queues (the kernel equivalent to
pthread "conditions"). Basically, instead of calling msleep(), you call
wait_event(). In the function which decrements numbios, you check whether
it is 0 and, if so, call wake_up().

-Michi
-- 
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com



Generic I/O

2011-11-14 Thread Kai Meyer
I'm finding it's really simple to write generic I/O functions for block 
devices (via a "struct block_device") to mimic the POSIX read() and
write() functions (I have to supply the position, since I don't have an
fd to keep a position for me, but that's perfectly OK).

I've got a little hack that allows me to run synchronously or
asynchronously, relying on submit_bio()'s asynchronous nature to do the
concurrent work for me. My
caller function has an atomic_t value that I set equal to the number of 
bios I want to submit. Then I pass a pointer to that atomic_t around to 
each of the bios which decrement it in the endio function for that bio.
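
The decrement half of that scheme, as a rough sketch (2.6.32-era endio
signature; passing the counter through bi_private is an assumption):

static void my_endio(struct bio *bio, int error)
{
        /* bi_private was pointed at the caller's atomic_t before
         * submit_bio(); drop our share of the in-flight count. */
        atomic_dec((atomic_t *)bio->bi_private);
        bio_put(bio);
}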

Then the caller does this:
while (atomic_read(numbios) > 0)
        msleep(1);

I'm finding the msleep(1) is a really really really long time, 
relatively. It seems to work ok if I just have an empty loop, but it 
also seems to me like I'm re-inventing a wheel here. Are there 
mechanisms that are better suited for waiting for tasks to complete? Or 
even for generic block I/O functions?

-Kai Meyer



generic I/O

2011-11-03 Thread Kai Meyer
Are there existing generic block device I/O operations available 
already? I am familiar with constructing and submitting 'struct bio's, 
but what I'd like to do would be greatly simplified if there was an 
existing I/O interface similar to the POSIX 'read' and 'write'
functions. If they don't exist, I would probably end up writing 
functions like:
int blk_read(struct block_device *bdev, void *buffer, off_t length);
int blk_write(struct block_device *bdev, void *buffer, off_t length);
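
Roughly what such a blk_read() might look like, as a sketch only
(single-page, sector-aligned reads, 2.6.32-era bio API; the sector
argument is added here because a struct block_device carries no file
position):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/completion.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/string.h>

static void blk_read_endio(struct bio *bio, int error)
{
        /* Wake the submitter; bi_private carries the completion. */
        complete((struct completion *)bio->bi_private);
}

int blk_read(struct block_device *bdev, void *buffer, off_t length,
             sector_t sector)
{
        struct completion done;
        struct page *page;
        struct bio *bio;
        int err = 0;

        if (length <= 0 || length > PAGE_SIZE)
                return -EINVAL;

        page = alloc_page(GFP_KERNEL);
        if (!page)
                return -ENOMEM;

        /* bio_alloc() with GFP_KERNEL is mempool-backed, so it won't
         * fail here. */
        bio = bio_alloc(GFP_KERNEL, 1);
        bio->bi_bdev = bdev;
        bio->bi_sector = sector;
        bio->bi_end_io = blk_read_endio;
        bio->bi_private = &done;
        bio_add_page(bio, page, length, 0);

        init_completion(&done);
        submit_bio(READ, bio);
        wait_for_completion(&done);

        if (test_bit(BIO_UPTODATE, &bio->bi_flags))
                memcpy(buffer, page_address(page), length);
        else
                err = -EIO;

        bio_put(bio);
        __free_page(page);
        return err;
}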

Pros and cons to this sort of approach?

-Kai Meyer
