Re: Ethernet driver - without DMA ?

2016-07-29 Thread Ran Shalit
On Fri, Jul 29, 2016 at 10:03 PM, Greg KH  wrote:
> On Fri, Jul 29, 2016 at 09:47:40PM +0300, Ran Shalit wrote:
>>  Hello,
>>
>> Can we write an Ethernet driver without using DMA?
>
> Sure, we have USB network drivers that don't use DMA.
>
>> But still use the sk_buff APIs, as done in most drivers?
>
> Yup.
>
> What type of hardware are you wanting to write an Ethernet driver for?

OMAP4 (omap4460), which connects to a MAC controller implemented in an
FPGA. There is no template for an Ethernet driver, but I see that netx-eth
has quite a simple implementation, which looks like a good starting template:
https://github.com/torvalds/linux/blob/master/drivers/net/ethernet/netx-eth.c
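
A non-DMA receive path for a setup like this boils down to copying each
frame out of the device with programmed I/O into a freshly allocated
sk_buff and handing it to the stack. A minimal sketch, assuming a
hypothetical FIFO-style MMIO interface (the register offsets and priv
layout below are illustrative placeholders, not the FPGA MAC's real
programming model):

#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/io.h>

/* Hypothetical MMIO layout of the FPGA MAC, for illustration only. */
#define MY_RX_LEN_REG   0x00    /* length of the pending frame */
#define MY_RX_FIFO_REG  0x04    /* read window into the rx FIFO */

struct my_priv {
        void __iomem *base;
};

/* PIO receive: no DMA, just register reads into an sk_buff. */
static void my_rx_one_frame(struct net_device *ndev)
{
        struct my_priv *priv = netdev_priv(ndev);
        unsigned int len = ioread32(priv->base + MY_RX_LEN_REG);
        struct sk_buff *skb;

        skb = netdev_alloc_skb_ip_align(ndev, len);
        if (!skb) {
                ndev->stats.rx_dropped++;
                return;
        }

        /* Copy the frame out of the device FIFO with PIO. */
        memcpy_fromio(skb_put(skb, len), priv->base + MY_RX_FIFO_REG, len);

        skb->protocol = eth_type_trans(skb, ndev);
        ndev->stats.rx_packets++;
        ndev->stats.rx_bytes += len;
        netif_rx(skb);          /* hand the frame to the network stack */
}

Nothing in the sk_buff API itself requires DMA; the descriptor rings are
the only DMA-specific part of a typical Ethernet driver.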



>
> thanks,
>
> greg k-h



Re: Ethernet driver - without DMA ?

2016-07-29 Thread Greg KH
On Fri, Jul 29, 2016 at 09:47:40PM +0300, Ran Shalit wrote:
>  Hello,
> 
> Can we write an Ethernet driver without using DMA?

Sure, we have USB network drivers that don't use DMA.

> But still use the sk_buff APIs, as done in most drivers?

Yup.

What type of hardware are you wanting to write an Ethernet driver for?

thanks,

greg k-h



Ethernet driver - without DMA ?

2016-07-29 Thread Ran Shalit
 Hello,

Can we write an Ethernet driver without using DMA?
But still use the sk_buff APIs, as done in most drivers?

Thanks,
Ran



Re: doubt on schedule_work() - work task getting scheduled lately

2016-07-29 Thread Daniel.
Nice tool @Ricardo!

2016-07-29 10:48 GMT-03:00 Ricardo Ribalda Delgado :
> You can use http://lttng.org/ for analyzing this.
>
> Regards!
>
> On Fri, Jul 29, 2016 at 12:44 PM, Pranay Srivastava  wrote:
>> On Fri, Jul 29, 2016 at 4:02 PM, Muni Sekhar  wrote:
>>> Hi All,
>>>
>>> I have a doubt regarding the workqueue scheduling.
>>>
>>> I am using the workqueue for processing the Rx Interrupt data. I am
>>> calling schedule_work() on receiving the Rx interrupt from hardware.
>>>
>>> I calculated the time between calling schedule_work() and the
>>> workqueue task actually getting executed. Here I see many cases of
>>> less than 100 us (which is fairly good).
>>>
>>> But sometimes I see tens of ms, and quite a lot in the hundreds of ms.
>>> I have seen over 0.5 s too. I would like to know why the kernel
>>> sometimes takes longer (milliseconds) to schedule it. Is there any way
>>> to reduce this time gap?
>>>
>>>
>>> My code:
>>>
>>> static void my_workqueuetask(struct work_struct *work)
>>> {
>>> printk("In %s() \n",__func__);
>>>
>> You probably don't need this if it's just for your work_fn; though if
>> there's a race between this and something else...
>>> mutex_lock(&bh_mutex);
>>>
>>> //Do something here
>>>
>>> mutex_unlock(&bh_mutex);
>>> }
>>>
>>>
>>> static irqreturn_t my_irq_handler(int irq, void *dev)
>>> {
>>> ……;
>>>
>>> if(Rx Interrupt)
>>>  schedule_work(&rx_work);
>>
>> Maybe system_wq already has too much on its plate?
>> Have you tried the same with a completion and a kthread? Or with your
>> own workqueue; overkill, but you can give it a shot.
>>>
>>> return IRQ_HANDLED;
>>> }
>>>
>>> --
>>> Thanks,
>>> Sekhar
>>>
>>
>>
>>
>> --
>> ---P.K.S
>>
>
>
>
> --
> Ricardo Ribalda
>



-- 
"Do or do not. There is no try"
  Yoda Master



Re: doubt on schedule_work() - work task getting scheduled lately

2016-07-29 Thread Ricardo Ribalda Delgado
You can use http://lttng.org/ for analyzing this.

Regards!

On Fri, Jul 29, 2016 at 12:44 PM, Pranay Srivastava  wrote:
> On Fri, Jul 29, 2016 at 4:02 PM, Muni Sekhar  wrote:
>> Hi All,
>>
>> I have a doubt regarding the workqueue scheduling.
>>
>> I am using the workqueue for processing the Rx Interrupt data. I am
>> calling schedule_work() on receiving the Rx interrupt from hardware.
>>
>> I calculated the time between calling schedule_work() and the
>> workqueue task actually getting executed. Here I see many cases of
>> less than 100 us (which is fairly good).
>>
>> But sometimes I see tens of ms, and quite a lot in the hundreds of ms.
>> I have seen over 0.5 s too. I would like to know why the kernel
>> sometimes takes longer (milliseconds) to schedule it. Is there any way
>> to reduce this time gap?
>>
>>
>> My code:
>>
>> static void my_workqueuetask(struct work_struct *work)
>> {
>> printk("In %s() \n",__func__);
>>
> You probably don't need this if it's just for your work_fn; though if
> there's a race between this and something else...
>> mutex_lock(&bh_mutex);
>>
>> //Do something here
>>
>> mutex_unlock(&bh_mutex);
>> }
>>
>>
>> static irqreturn_t my_irq_handler(int irq, void *dev)
>> {
>> ……;
>>
>> if(Rx Interrupt)
>>  schedule_work(&rx_work);
>
> Maybe system_wq already has too much on its plate?
> Have you tried the same with a completion and a kthread? Or with your
> own workqueue; overkill, but you can give it a shot.
>>
>> return IRQ_HANDLED;
>> }
>>
>> --
>> Thanks,
>> Sekhar
>>
>
>
>
> --
> ---P.K.S
>



-- 
Ricardo Ribalda



Re: Help understanding block layer sample in LDD3

2016-07-29 Thread François
On Fri, Jul 29, 2016 at 04:26:41PM +0530, Pranay Srivastava wrote:
> On Fri, Jul 29, 2016 at 4:15 PM, François  wrote:
> > On Fri, Jul 29, 2016 at 03:58:28PM +0530, Pranay Srivastava wrote:
> >>
> >> I don't see req->buffer. Which version are you using?
> >
> > You're absolutely right. Both [1] and [2] seem to be outdated.
> > I'm currently compiling and testing most of my code on 3.19 on a 14.04
> > LTS Ubuntu in a VM, rather than on the current kernel. It's simpler for
> > me to work that way.
> >
> > [1] https://github.com/martinezjavier/ldd3/blob/master/sbull/sbull.c#L119
> > [2] 
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/biodoc.txt
> >
> > [...]
> >>
> >> If this is a memory-backed block driver, then perhaps you can handle
> >> multiple requests[?]. I don't think you need to actually break up the
> >> same request into multiple requests.
> >
> > Actually, it is shared-memory based. Hence, a request might be larger
> > than the available room in the shared memory. This case has to be
> > handled.
> 
> So basically you hold on to some pages[?] and use those as your disk, right?

Well, those shared pages are used to exchange chunks of data with another
party, which will itself get that data, produce bios, submit them, and put
the response in the shared region.

> I guess set_capacity() should take care of this [no?]

The set_capacity() value is the one reported by the other party, which
queries an actual block device. Even though requests could be resized, if
the other party takes time to respond, very little memory might be
available while large requests can still be enqueued. So limiting the
capacity does not seem to be an option here.

> I think if you just take care of the proper mapping of sector(s) to
> your driver then it should be alright.
>
> Too-large requests shouldn't reach your driver even when you have the
> device opened raw rather than mounted.
> 
> >
> > Thanks for your input!
> >
> > --
> > François
> 
> 
> 
> -- 
> ---P.K.S

-- 
François
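
One more knob relevant to the "too large request" concern: the block
layer honors per-queue limits declared at setup time, so a
shared-memory-backed driver can cap how big a request can ever get
before it is handed over. A minimal sketch against the 3.19-era API
discussed above (the values are arbitrary examples, to be sized to the
shared window):

#include <linux/blkdev.h>

/* Called once after the queue is created; the block layer will not
 * build requests beyond these limits, so oversized I/O never reaches
 * the request function.  The values are arbitrary examples. */
static void my_limit_queue(struct request_queue *q)
{
        /* at most 64 sectors (32 KiB) per request */
        blk_queue_max_hw_sectors(q, 64);

        /* at most 8 physical segments per request */
        blk_queue_max_segments(q, 8);
}

This bounds the worst case; it does not solve the dynamic availability
problem, which still needs the driver to hold requests until the shared
window drains.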



Re: Help understanding block layer sample in LDD3

2016-07-29 Thread Pranay Srivastava
On Fri, Jul 29, 2016 at 4:15 PM, François  wrote:
> On Fri, Jul 29, 2016 at 03:58:28PM +0530, Pranay Srivastava wrote:
>>
>> I don't see req->buffer. Which version are you using?
>
> You're absolutely right. Both [1] and [2] seem to be outdated.
> I'm currently compiling and testing most of my code on 3.19 on a 14.04
> LTS Ubuntu in a VM, rather than on the current kernel. It's simpler for
> me to work that way.
>
> [1] https://github.com/martinezjavier/ldd3/blob/master/sbull/sbull.c#L119
> [2] 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/biodoc.txt
>
> [...]
>>
>> If this is a memory-backed block driver, then perhaps you can handle
>> multiple requests[?]. I don't think you need to actually break up the
>> same request into multiple requests.
>
> Actually, it is shared-memory based. Hence, a request might be larger
> than the available room in the shared memory. This case has to be
> handled.

So basically you hold on to some pages[?] and use those as your disk, right?
I guess set_capacity() should take care of this [no?]

I think if you just take care of the proper mapping of sector(s) to
your driver then it should be alright.

Too-large requests shouldn't reach your driver even when you have the
device opened raw rather than mounted.

>
> Thanks for your input!
>
> --
> François



-- 
---P.K.S



Re: Help understanding block layer sample in LDD3

2016-07-29 Thread François
On Fri, Jul 29, 2016 at 03:58:28PM +0530, Pranay Srivastava wrote:
> 
> I don't see req->buffer. Which version are you using?

You're absolutely right. Both [1] and [2] seem to be outdated.
I'm currently compiling and testing most of my code on 3.19 on a 14.04 LTS
Ubuntu in a VM, rather than on the current kernel. It's simpler for me to
work that way.

[1] https://github.com/martinezjavier/ldd3/blob/master/sbull/sbull.c#L119
[2] 
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/biodoc.txt

[...]
> 
> If this is a memory-backed block driver, then perhaps you can handle
> multiple requests[?]. I don't think you need to actually break up the
> same request into multiple requests.

Actually, it is shared-memory based. Hence, a request might be larger than
the available room in the shared memory. This case has to be handled.

Thanks for your input!

-- 
François



Re: doubt on schedule_work() - work task getting scheduled lately

2016-07-29 Thread Pranay Srivastava
On Fri, Jul 29, 2016 at 4:02 PM, Muni Sekhar  wrote:
> Hi All,
>
> I have a doubt regarding the workqueue scheduling.
>
> I am using the workqueue for processing the Rx Interrupt data. I am
> calling schedule_work() on receiving the Rx interrupt from hardware.
>
> I calculated the time between calling schedule_work() and the
> workqueue task actually getting executed. Here I see many cases of
> less than 100 us (which is fairly good).
>
> But sometimes I see tens of ms, and quite a lot in the hundreds of ms.
> I have seen over 0.5 s too. I would like to know why the kernel
> sometimes takes longer (milliseconds) to schedule it. Is there any way
> to reduce this time gap?
>
>
> My code:
>
> static void my_workqueuetask(struct work_struct *work)
> {
> printk("In %s() \n",__func__);
>
You probably don't need this if it's just for your work_fn; though if
there's a race between this and something else...
> mutex_lock(&bh_mutex);
>
> //Do something here
>
> mutex_unlock(&bh_mutex);
> }
>
>
> static irqreturn_t my_irq_handler(int irq, void *dev)
> {
> ……;
>
> if(Rx Interrupt)
>  schedule_work(&rx_work);

Maybe system_wq already has too much on its plate?
Have you tried the same with a completion and a kthread? Or with your
own workqueue; overkill, but you can give it a shot.
>
> return IRQ_HANDLED;
> }
>
> --
> Thanks,
> Sekhar
>



-- 
---P.K.S
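
The "your own workqueue" route is only a few lines. A minimal sketch
(queue and symbol names are made up for illustration, reusing the
my_workqueuetask()/my_irq_handler() names from the thread), using
WQ_HIGHPRI so the work runs from a high-priority worker pool instead of
competing with everything queued on system_wq:

#include <linux/workqueue.h>
#include <linux/interrupt.h>

static struct workqueue_struct *rx_wq;
static struct work_struct rx_work;

static int __init my_init(void)
{
        /* Private queue with its own high-priority worker pool. */
        rx_wq = alloc_workqueue("my_rx", WQ_HIGHPRI, 0);
        if (!rx_wq)
                return -ENOMEM;
        INIT_WORK(&rx_work, my_workqueuetask);
        return 0;
}

static irqreturn_t my_irq_handler(int irq, void *dev)
{
        /* queue on the dedicated queue instead of schedule_work() */
        queue_work(rx_wq, &rx_work);
        return IRQ_HANDLED;
}

Whether this actually removes the outliers depends on what is blocking:
WQ_HIGHPRI helps when the delay comes from contention on the shared
queue, not when a previous invocation of the same work item is still
running.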



doubt on schedule_work() - work task getting scheduled lately

2016-07-29 Thread Muni Sekhar
Hi All,

I have a doubt regarding the workqueue scheduling.

I am using the workqueue for processing the Rx Interrupt data. I am
calling schedule_work() on receiving the Rx interrupt from hardware.

I calculated the time between calling schedule_work() and the
workqueue task actually getting executed. Here I see many cases of
less than 100 us (which is fairly good).

But sometimes I see tens of ms, and quite a lot in the hundreds of ms.
I have seen over 0.5 s too. I would like to know why the kernel
sometimes takes longer (milliseconds) to schedule it. Is there any way
to reduce this time gap?


My code:

static void my_workqueuetask(struct work_struct *work)
{
printk("In %s() \n",__func__);

mutex_lock(&bh_mutex);

//Do something here

mutex_unlock(&bh_mutex);
}


static irqreturn_t my_irq_handler(int irq, void *dev)
{
……;

if(Rx Interrupt)
 schedule_work(&rx_work);

return IRQ_HANDLED;
}
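
For reference, latency numbers like these can be collected with a pair
of ktime_get() timestamps; a minimal sketch, assuming a single
outstanding work item so one shared timestamp is enough:

#include <linux/ktime.h>

static ktime_t rx_stamp;        /* written in the IRQ, read in the work fn */

static irqreturn_t my_irq_handler(int irq, void *dev)
{
        rx_stamp = ktime_get();
        schedule_work(&rx_work);
        return IRQ_HANDLED;
}

static void my_workqueuetask(struct work_struct *work)
{
        s64 delay_us = ktime_us_delta(ktime_get(), rx_stamp);

        pr_info("%s: ran %lld us after schedule_work()\n", __func__, delay_us);
        /* ... rest of the work ... */
}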

-- 
Thanks,
Sekhar



Re: Help understanding block layer sample in LDD3

2016-07-29 Thread Pranay Srivastava
On Fri, Jul 29, 2016 at 3:14 PM, François  wrote:
> Hi there,
>
> I've been reading LDD3's chapter 16 on block devices a few times, and
> have made toy block-layer modules. Now I've been looking at the
> up-to-date examples provided by martinezjavier[1], but there's still a
> fundamental concept I fail to understand.
>
> Considering only the RM_FULL and RM_SIMPLE cases: a request queue is
> created, bound to a lock, and associated with a request function.
>
> In the simple case, that function, "sbull_request", processes each
> request from the request queue and delegates the work to
> "sbull_transfer", which basically performs some arithmetic and does the
> actual data copy. This function is given a sector, a number of sectors,
> a pointer to a buffer, and a read-or-write parameter, extracted from the
> request using blk_rq_pos(), blk_rq_cur_sectors(), req->buffer and
> rq_data_dir() respectively.
>
> On the other hand, the same mechanism is used, but a different function
> is associated: "sbull_full_request". That function also extracts
> requests, but delegates the actual work to "sbull_xfer_request", which
> iterates over the request's bios and calls "sbull_xfer_bio", which
> itself iterates over each bio's segments and finally calls the same
> "sbull_transfer" function.
>
> What I fail to understand is how (with the same initialization) the
> behaviour of the module using these two somewhat different mechanisms is
> equivalent.

I don't see req->buffer. Which version are you using?

>
> One has to understand the full complexity of the underlying data
> structure (requests holding bios, holding segments), while the other
> only reads the containing structure (the struct request) and does the
> same job, without iterations.
>
> Bonus point, to give some context: I'm writing an asynchronous
> block-layer driver which has to split requests into custom subrequests.
> I'm wondering which approach (of those two) I should pick up.

If this is a memory-backed block driver, then perhaps you can handle
multiple requests[?]. I don't think you need to actually break up the
same request into multiple requests.

>
> Thanks for reading so far, and for any hints :)
>
>
> [1] https://github.com/martinezjavier/ldd3/blob/master/sbull/sbull.c
> --
> François
>



-- 
---P.K.S



Help understanding block layer sample in LDD3

2016-07-29 Thread François
Hi there,

I've been reading LDD3's chapter 16 on block devices a few times, and have
made toy block-layer modules. Now I've been looking at the up-to-date
examples provided by martinezjavier[1], but there's still a fundamental
concept I fail to understand.

Considering only the RM_FULL and RM_SIMPLE cases: a request queue is
created, bound to a lock, and associated with a request function.

In the simple case, that function, "sbull_request", processes each request
from the request queue and delegates the work to "sbull_transfer", which
basically performs some arithmetic and does the actual data copy. This
function is given a sector, a number of sectors, a pointer to a buffer,
and a read-or-write parameter, extracted from the request using
blk_rq_pos(), blk_rq_cur_sectors(), req->buffer and rq_data_dir()
respectively.
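
As a concrete reference point, the simple form is essentially the loop
below. This is a sketch against a 3.x request_fn API rather than the
verbatim sbull source, with bio_data(req->bio) standing in for the
removed req->buffer:

#include <linux/blkdev.h>

/* Serve one chunk of the current request at a time until the request
 * is fully completed.  sbull_transfer() is the copy helper described
 * above. */
static void sbull_request(struct request_queue *q)
{
        struct request *req;

        while ((req = blk_fetch_request(q)) != NULL) {
                struct sbull_dev *dev = req->rq_disk->private_data;

                if (req->cmd_type != REQ_TYPE_FS) {
                        __blk_end_request_all(req, -EIO);
                        continue;
                }
                do {
                        sbull_transfer(dev,
                                       blk_rq_pos(req),          /* start sector */
                                       blk_rq_cur_sectors(req),  /* this chunk */
                                       bio_data(req->bio),       /* was req->buffer */
                                       rq_data_dir(req));
                        /* __blk_end_request_cur() retires the chunk just
                         * served and returns true while work remains */
                } while (__blk_end_request_cur(req, 0));
        }
}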
 
On the other hand, the same mechanism is used, but a different function is
associated: "sbull_full_request". That function also extracts requests,
but delegates the actual work to "sbull_xfer_request", which iterates over
the request's bios and calls "sbull_xfer_bio", which itself iterates over
each bio's segments and finally calls the same "sbull_transfer" function.
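
And the full form, stripped of error handling, is the nested walk below,
sketched against a 3.19-era API where rq_for_each_segment() yields each
bio_vec of each bio in turn, so the two explicit loops collapse into one
iterator:

#include <linux/blkdev.h>
#include <linux/highmem.h>

/* Visit every segment of every bio in the request and feed each one
 * to the same sbull_transfer() helper. */
static int sbull_xfer_request(struct sbull_dev *dev, struct request *req)
{
        struct req_iterator iter;
        struct bio_vec bvec;
        sector_t sector = blk_rq_pos(req);
        int nsect = 0;

        rq_for_each_segment(bvec, req, iter) {
                unsigned int nsectors = bvec.bv_len >> 9;  /* 512-byte sectors */
                char *buffer = kmap_atomic(bvec.bv_page) + bvec.bv_offset;

                sbull_transfer(dev, sector, nsectors, buffer,
                               bio_data_dir(iter.bio) == WRITE);
                kunmap_atomic(buffer - bvec.bv_offset);
                sector += nsectors;
                nsect += nsectors;
        }
        return nsect;
}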

What I fail to understand is how (with the same initialization) the
behaviour of the module using these two somewhat different mechanisms is
equivalent.

One has to understand the full complexity of the underlying data structure
(requests holding bios, holding segments), while the other only reads the
containing structure (the struct request) and does the same job, without
iterations.

Bonus point, to give some context: I'm writing an asynchronous block-layer
driver which has to split requests into custom subrequests. I'm wondering
which approach (of those two) I should pick up.

Thanks for reading so far, and for any hints :) 


[1] https://github.com/martinezjavier/ldd3/blob/master/sbull/sbull.c
-- 
François
