How to understand some code in the function __FD_SET?

2013-11-02 Thread lx
hi all:
   the code of the function is:

#undef  __FD_SET
static __inline__ void __FD_SET(unsigned long __fd, __kernel_fd_set *__fdsetp)
{
	unsigned long __tmp = __fd / __NFDBITS;
	unsigned long __rem = __fd % __NFDBITS;
	__fdsetp->fds_bits[__tmp] |= (1UL << __rem);
}


I can't understand the usage of __rem. How should I understand it? Thank you.

PS:


#undef __NFDBITS
#define __NFDBITS (8 * sizeof(unsigned long))
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Festival Shutdown : Kernelnewbies Digest, Vol 36, Issue 4

2013-11-02 Thread hemal . patel
Hi,

Due to the Diwali festival, SLS will be closed from Oct. 31st to Nov. 6th.
We will return to the office on Nov. 7th. For urgent matters, please call
001-408-852-0067.

For more information about Diwali:
http://en.wikipedia.org/wiki/Diwali

Regards,
Hemal Patel





Re: Block device driver question

2013-11-02 Thread Pranay Srivastava
On 03-Nov-2013 9:10 AM, "neha naik"  wrote:
>
> Hi Pranay,
> Your answers assume that there is always a filesystem above the block
device driver, which is not necessarily the case.
Yes.
But isn't your case the same one? You are doing i/o directly on the device,
right? Does your case differ?

---P.K.S
>
> Regards,
> Neha
>
> On Nov 2, 2013 8:58 AM, "Pranay Srivastava"  wrote:
>>
>>
>> On 01-Nov-2013 10:57 PM, "neha naik"  wrote:
>> >
>> > Hi,
>> >   I am writing a block device driver and i am using the
>> > 'blk_queue_make_request' call while registering my block device
>> > driver.
>> >   Now as far as i understand this will bypass the linux kernel queue
>> > for each block device driver (bypassing the elevator algorithm etc).
>> > However, i am still not very clear about exactly how i get a request.
>> >
>> >  1.  Consider i am doing a dd on the block device directly :
>> >   Will it bypass the buffer cache(/page cache) or will it use it.
>>
>> Page cache use is for file system. Block driver has got nothing to do
with it. So lets keep these separate.
>>
>> Bios don't care which page you give them all it needs is a page in bvec.
The file system would wait on that page to be uptodate which might be done
in bio_end_io if i/o was good.
>>
>> In case of buffer heads the same thing. Submit_bh would create a bio for
that bh so really same stuff.
>>
>> > Example if i register my block device with set_blocksize() as 512. And
>> > i do a dd of 512 bytes will i get a read because it passes through the
>> > buffer cache and since the minimum page size is 4096 it has to read
>> > the page first and then pass it to me.
>>
>> If you are writing why it would read the page? Reads would initiate
write outs i think. Take a look at generic_file_aio_write.
>>
>> > I am still unclear about the 'page' in the bvec. What does that
>> > refer to? Is it a page from the page cache or a user buffer (DMA).
>>
>> Whatever filesystem gave it. If it uses the generic functions that
should come from page cache but again it depends on how filesystem created
bio.
>>
>> So for block driver you need to know if the page you are given in bvec
is something you can use or you need to check and take measures to
successfully do i/o.
>>
>> >
>> >
>> > 2. Another thing i am not clear about is a queue. When i register my
>> > driver, the 'make_request' function gets called whenever there is an
>> > io. Now in my device driver, i have some more logic about  writing
>> > this io i.e some time may be spent in the device driver for each io.
>> > In such a case, if i get two ios on the same block one after the other
>> > (say one is writing 'a' and the other is writing 'b') then isn't it
>> > possible that i may end up passing 'b' followed by 'a' to the layer
>> > below me (changing the order because thread 'a' took more time than
>> > thread 'b').
>> Then in that case should i be using a queue in my layer -
>> > put the ios in the queue whenever i get a call to 'make_request'.
>> > Another thread keeps pulling the ios from the queue and processing
>> > them and passing it to the layer below.
>> >
>> You mean layer above right? that is the filesystem correct? But if thats
the case then wouldn't your second request be blocked until the page was
unlocked by file system which would happen i think after your driver was
done with i/o. Thats because you won't mark the request as complete so i
guess threads would wait_on_page to be unlocked.
>>
>> If however your driver "lies" about completing requests then yeah you
need to take appropriate measure.
>>
>> >
>> > Regards,
>> > Neha
>> >
>> > ___
>> > Kernelnewbies mailing list
>> > Kernelnewbies@kernelnewbies.org
>> > http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>>
>> --P.K.S


Re: Question on Alignment

2013-11-02 Thread anish singh
On Sat, Nov 2, 2013 at 7:42 PM, Shyam Sunkara  wrote:

> Hi All,
>
> I'm allocating memory for a Linux driver using kmalloc and I need to
> align it to 32 bits. How do I do it?
>
Did you mean 32 bits or 32 bytes?

>
>
> Thank you,
>
> Regards,
> Omk
>
> ___
> Kernelnewbies mailing list
> Kernelnewbies@kernelnewbies.org
> http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>
>


Re: Block device driver question

2013-11-02 Thread neha naik
Hi Pranay,
Your answers assume that there is always a filesystem above the block
device driver, which is not necessarily the case.

Regards,
Neha
 On Nov 2, 2013 8:58 AM, "Pranay Srivastava"  wrote:

>
> On 01-Nov-2013 10:57 PM, "neha naik"  wrote:
> >
> > Hi,
> >   I am writing a block device driver and i am using the
> > 'blk_queue_make_request' call while registering my block device
> > driver.
> >   Now as far as i understand this will bypass the linux kernel queue
> > for each block device driver (bypassing the elevator algorithm etc).
> > However, i am still not very clear about exactly how i get a request.
> >
> >  1.  Consider i am doing a dd on the block device directly :
> >   Will it bypass the buffer cache(/page cache) or will it use it.
>
> Page cache use is for file system. Block driver has got nothing to do with
> it. So lets keep these separate.
>
> Bios don't care which page you give them all it needs is a page in bvec.
> The file system would wait on that page to be uptodate which might be done
> in bio_end_io if i/o was good.
>
> In case of buffer heads the same thing. Submit_bh would create a bio for
> that bh so really same stuff.
>
> > Example if i register my block device with set_blocksize() as 512. And
> > i do a dd of 512 bytes will i get a read because it passes through the
> > buffer cache and since the minimum page size is 4096 it has to read
> > the page first and then pass it to me.
>
> If you are writing why it would read the page? Reads would initiate write
> outs i think. Take a look at generic_file_aio_write.
>
> > I am still unclear about the 'page' in the bvec. What does that
> > refer to? Is it a page from the page cache or a user buffer (DMA).
>
> Whatever filesystem gave it. If it uses the generic functions that should
> come from page cache but again it depends on how filesystem created bio.
>
> So for block driver you need to know if the page you are given in bvec is
> something you can use or you need to check and take measures to
> successfully do i/o.
>
> >
> >
> > 2. Another thing i am not clear about is a queue. When i register my
> > driver, the 'make_request' function gets called whenever there is an
> > io. Now in my device driver, i have some more logic about  writing
> > this io i.e some time may be spent in the device driver for each io.
> > In such a case, if i get two ios on the same block one after the other
> > (say one is writing 'a' and the other is writing 'b') then isn't it
> > possible that i may end up passing 'b' followed by 'a' to the layer
> > below me (changing the order because thread 'a' took more time than
> > thread 'b').
> Then in that case should i be using a queue in my layer -
> > put the ios in the queue whenever i get a call to 'make_request'.
> > Another thread keeps pulling the ios from the queue and processing
> > them and passing it to the layer below.
> >
> You mean layer above right? that is the filesystem correct? But if thats
> the case then wouldn't your second request be blocked until the page was
> unlocked by file system which would happen i think after your driver was
> done with i/o. Thats because you won't mark the request as complete so i
> guess threads would wait_on_page to be unlocked.
>
> If however your driver "lies" about completing requests then yeah you need
> to take appropriate measure.
>
> >
> > Regards,
> > Neha
> >
> > ___
> > Kernelnewbies mailing list
> > Kernelnewbies@kernelnewbies.org
> > http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>
> --P.K.S
>


Question on Alignment

2013-11-02 Thread Shyam Sunkara
Hi All,

I'm allocating memory for a Linux driver using kmalloc and I need to
align it to 32 bits. How do I do it?


Thank you,

Regards,
Omk


Re: PF_RING on Mellanox Card

2013-11-02 Thread Jason Ball
Not specifically with the Mellanox, no. I have successfully used this
approach with a number of other NICs, though.

On Saturday, 2 November 2013, Robert Clove wrote:

> Hi All,
>
> I want to know: has anyone used PF_RING on a Mellanox NIC?
> Does it work correctly?
>
>
> Thanks
>
>

-- 
--
Teach your kids Science, or somebody else will :/

ja...@ball.net
vk2...@google.com 
callsign: vk2vjb


Festival Shutdown : Kernelnewbies Digest, Vol 36, Issue 3

2013-11-02 Thread hemal . patel
Hi,

Due to the Diwali festival, SLS will be closed from Oct. 31st to Nov. 6th.
We will return to the office on Nov. 7th. For urgent matters, please call
001-408-852-0067.

For more information about Diwali:
http://en.wikipedia.org/wiki/Diwali

Regards,
Hemal Patel





Re: Block device driver question

2013-11-02 Thread Pranay Srivastava
On 01-Nov-2013 10:57 PM, "neha naik"  wrote:
>
> Hi,
>   I am writing a block device driver and i am using the
> 'blk_queue_make_request' call while registering my block device
> driver.
>   Now as far as i understand this will bypass the linux kernel queue
> for each block device driver (bypassing the elevator algorithm etc).
> However, i am still not very clear about exactly how i get a request.
>
>  1.  Consider i am doing a dd on the block device directly :
>   Will it bypass the buffer cache(/page cache) or will it use it.

The page cache is used by the filesystem; the block driver has nothing to
do with it, so let's keep these separate.

Bios don't care which page you give them; all they need is a page in a
bvec. The filesystem would wait on that page to become uptodate, which
might happen in bio_end_io if the i/o was good.

Buffer heads work the same way: submit_bh creates a bio for that bh, so
it's really the same thing.

> Example if i register my block device with set_blocksize() as 512. And
> i do a dd of 512 bytes will i get a read because it passes through the
> buffer cache and since the minimum page size is 4096 it has to read
> the page first and then pass it to me.

If you are writing, why would it read the page? Writes would initiate
write-outs, I think. Take a look at generic_file_aio_write.

> I am still unclear about the 'page' in the bvec. What does that
> refer to? Is it a page from the page cache or a user buffer (DMA).

Whatever the filesystem gave it. If it uses the generic functions, the
page should come from the page cache, but again it depends on how the
filesystem created the bio.

So for a block driver you need to know whether the page you are given in
the bvec is something you can use, or whether you need to check it and
take measures to do the i/o successfully.

>
>
> 2. Another thing i am not clear about is a queue. When i register my
> driver, the 'make_request' function gets called whenever there is an
> io. Now in my device driver, i have some more logic about  writing
> this io i.e some time may be spent in the device driver for each io.
> In such a case, if i get two ios on the same block one after the other
> (say one is writing 'a' and the other is writing 'b') then isn't it
> possible that i may end up passing 'b' followed by 'a' to the layer
> below me (changing the order because thread 'a' took more time than
> thread 'b').
Then in that case should i be using a queue in my layer -
> put the ios in the queue whenever i get a call to 'make_request'.
> Another thread keeps pulling the ios from the queue and processing
> them and passing it to the layer below.
>
You mean the layer above, right? That is the filesystem, correct? But if
that's the case, wouldn't your second request be blocked until the page
was unlocked by the filesystem, which I think would happen after your
driver was done with the i/o? That's because you won't mark the request
as complete, so I guess threads would wait_on_page until it is unlocked.

If, however, your driver "lies" about completing requests, then yes, you
need to take appropriate measures.

>
> Regards,
> Neha
>
> ___
> Kernelnewbies mailing list
> Kernelnewbies@kernelnewbies.org
> http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies

--P.K.S


Fwd: Packet Loss

2013-11-02 Thread Robert Clove
On Mon, Oct 28, 2013 at 5:44 PM, Kristof Provost  wrote:

> On 2013-10-28 17:32:43 (+0530), Robert Clove 
> wrote:
> > Kind of Packets :- *UDP*
> > How are you generating them? :-* Packeth (
> > http://packeth.sourceforge.net/packeth/Home.html)*
> > kernel version : uname -r -   *2.6.32-358.18.1.el6.x86_64
> > *
> > Are you *SURE* you're sending 1000? - *ya checked through packeth status
> > bar and also through ifconfig command*
> > I have connected them through LAN cable (cat 6) back to back.
> >
> > Changed cable but no use.
> > What should i do?
> >
> Break down the problem. Get that smart switch to tell you if the packets
> are lost by the sender or the receiver.
>
> Test if it's bidirectional (i.e. does it still happen if you switch the
> sender and receiver)?
>
> Does it still happen if you send only 100 packets? Do you still lose 30%
> then, or do you lose more or less?
>
> Perhaps try a kernel that isn't nearly five years old too.
>
> Also, don't top-post.
>
> Regards,
> Kristof
>
>

Hey Sir,

I just want to know: are there any driver or kernel parameters we can
adjust to get better packet capture?

Thanks
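For reference, a few knobs commonly adjusted for capture workloads (a sketch only: eth0 is a placeholder, supported ring sizes vary by NIC and driver, and the values shown are illustrative):

```shell
# Inspect the NIC's current and maximum RX ring sizes, then enlarge the ring
ethtool -g eth0
ethtool -G eth0 rx 4096

# Raise socket receive-buffer limits and the per-CPU input backlog
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.netdev_max_backlog=250000
```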


PF_RING on Mellanox Card

2013-11-02 Thread Robert Clove
Hi All,

I want to know: has anyone used PF_RING on a Mellanox NIC?
Does it work correctly?


Thanks