Question about task 5 of the Eudyptula Challenge

2014-05-20 Thread Peter Tosh
Hey guys,

I have written a module for task 5 that I believe should work, but it
isn't working. I have hard-coded the Vendor ID and Product ID values (for
the call to the USB_DEVICE macro) to match the USB keyboard I'm testing
with, but the Logitech driver seems to be getting selected first. My
google-fu is apparently lacking, as I haven't found a solution to this
yet. I have read chapter 14 as the task suggests, but couldn't find a
solution in there. If someone could point me in the right direction I
would appreciate it a lot.
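
For reference, the skeleton of my module looks roughly like this (the
vendor/product IDs below are placeholders, not the real ones from my
keyboard):

#include <linux/module.h>
#include <linux/usb.h>

#define MY_VENDOR_ID  0x046d   /* placeholder */
#define MY_PRODUCT_ID 0xc31c   /* placeholder */

static const struct usb_device_id my_id_table[] = {
    { USB_DEVICE(MY_VENDOR_ID, MY_PRODUCT_ID) },
    { }                        /* terminating entry */
};
MODULE_DEVICE_TABLE(usb, my_id_table);

static int my_probe(struct usb_interface *intf,
                    const struct usb_device_id *id)
{
    pr_info("my_usb: probe called\n");   /* never reached so far */
    return 0;
}

static void my_disconnect(struct usb_interface *intf)
{
    pr_info("my_usb: disconnect called\n");
}

static struct usb_driver my_driver = {
    .name       = "my_usb",
    .id_table   = my_id_table,
    .probe      = my_probe,
    .disconnect = my_disconnect,
};
module_usb_driver(my_driver);
MODULE_LICENSE("GPL");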

Cheers!


___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: two virtual address pointing to same physical address

2014-05-20 Thread 김찬

FYI,
I found out this morning that there is an SRMMU setup with an initial
page table before Linux's start_kernel runs.
That page table maps 0xc000 to 0x6000.
Chan


From: "Chan Kim"
Sent: 2014-05-20 22:29:08 ( +09:00 )
To: valdis.kletni...@vt.edu
Cc: kernelnewbies@kernelnewbies.org
Subject: Re: two virtual address pointing to same physical address

Valdis and all,
I understand we cannot access the same physical memory both as cacheable
and non-cacheable,
and I'm pretty sure the two addresses were pointing to the same physical
address. (I haven't checked with mmap yet; I will try it later.)
The point I was confused about was the __nocache_fix operation on the
address.
While writing a reply email, I remembered that the __nocache_fix
conversion of the address is used only before MMU setup.
After MMU setup (setting the context table and PGD pointer in the MMU
register), the __nocache_fix operation on the address is not used.
But __nocache_fix(0xc8000320) is 0xc0540320, and in our system we don't
have physical memory at 0xc000 ~
(we have memory at 0x6000 ~ 0x6fff).
Looking at the definition of __nocache_fix, the input and output are both
virtual addresses (VADDR).
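If I'm reading arch/sparc/include/asm/pgtsrmmu.h correctly, the
definitions are roughly these (quoting from memory, so the exact form may
differ):

#define __nocache_pa(VADDR) (((unsigned long)VADDR) - SRMMU_NOCACHE_VADDR \
                             + __pa((unsigned long)srmmu_nocache_pool))
#define __nocache_va(PADDR) (__va((unsigned long)PADDR) \
                             - (unsigned long)srmmu_nocache_pool \
                             + SRMMU_NOCACHE_VADDR)
#define __nocache_fix(VADDR) __va(__nocache_pa(VADDR))

So __nocache_fix() takes a nocache-pool virtual address and returns the
corresponding address in the kernel's linear map.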
This means accesses to virtual address 0xc054 (the nocache region) go
through the MMU to 0x6054.
Maybe at some earlier point, before the MMU setup, the 0xc054 ->
0x6054... virtual-to-physical mapping was already set up in the SRMMU. I
have to check (in the prom stage, or very early in init).
Can anybody shed some light on this?
Thanks,
Chan



From: "valdis.kletni...@vt.edu"
Sent: 2014-05-20 11:31:14 ( +09:00 )
To: 김찬
Cc: kernelnewbies@kernelnewbies.org
Subject: Re: two virtual address pointing to same physical address

On Tue, 20 May 2014 00:39:26 -, Chan Kim said:
> But still it's confusing. Can two virtual addresses from the "same process"
> (in init process, one for nocache pool, the other not) point to the same
> physical address?

I'm not sure what you're trying to say there. In general, the hardware
tags like non-cacheable and write-combining are applied to physical addresses,
not virtual.

And a moment's thought will show that treating the same address (whether it's
virtual or physical) as caching in one place and non-caching in another is just
*asking* for a stale-data bug when the non-caching reference updates data and
the caching reference thinks it's OK to use the non-flushed non-refreshed
cached data.

It's easy enough to test if two addresses from a single process can
point to the same physical address - do something like this:

/* just ensure these two map the same thing at different addresses */
foo = mmap(something,yaddayadda);
bar = mmap(something,yaddayadda);
/* modify via one reference */
*foo = 23;
/* you probably want a barrier call here so gcc doesn't screw you */
/* Now dereference it via the other reference */
printf("And what we read back is %d\n", *bar);

(Making this work is left as an exercise for the student :)
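
(Or, for the impatient, here is one way it can come together: map the
same temporary file twice with MAP_SHARED, so both virtual addresses are
backed by the same physical page. The file path and size are arbitrary,
and error handling is minimal:)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/twomap", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0 || ftruncate(fd, 4096) < 0)
        exit(1);

    /* two mappings of the same file offset -> same physical page */
    int *foo = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    int *bar = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (foo == MAP_FAILED || bar == MAP_FAILED)
        exit(1);

    printf("foo=%p bar=%p\n", (void *)foo, (void *)bar);

    *foo = 23;                        /* modify via one reference */
    asm volatile("" ::: "memory");    /* compiler barrier */
    printf("And what we read back is %d\n", *bar);  /* prints 23 */
    return 0;
}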

And figuring out why you need a barrier is fundamental to writing bug-free
code that uses shared memory. The file Documentation/memory-barriers.txt
is a good place to start.
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Predict/force PCI slot id given by the kernel to NICs

2014-05-20 Thread Valdis . Kletnieks
On Tue, 20 May 2014 15:55:09 -0500, Jaime Arrocha said:

> I am working on a ProLiant server model which has 6 physical PCI slots.
> They are going to be used for NICs with either 2 or 4 interfaces,
> installed in an arbitrary order in the slots. By this I mean that a
> certain box will only have a card (or cards) in certain slots.
>
> I need to write a program that is able to predict/read what bus id (slot
> id) the kernel gives to each interface of every card. The program should
> be able to relate the bus id to the slot where the card is physically
> installed.

Have whatever installer program (anaconda for RH-based, etc.) call an exit
script that does an 'lspci | grep your_nic_card_here' and scribbles a rule
into /etc/udev/rules.d?



___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Predict/force PCI slot id given by the kernel to NICs

2014-05-20 Thread Greg KH
On Tue, May 20, 2014 at 03:55:09PM -0500, Jaime Arrocha wrote:
> Good day all,
> 
> I am working on a ProLiant server model which has 6 physical PCI slots.
> They are going to be used for NICs with either 2 or 4 interfaces,
> installed in an arbitrary order in the slots. By this I mean that a
> certain box will only have a card (or cards) in certain slots.
> 
> I need to write a program that is able to predict/read what bus id (slot
> id) the kernel gives to each interface of every card. The program should
> be able to relate the bus id to the slot where the card is physically
> installed.

The kernel already gives you this information in sysfs, if the BIOS
provides it.
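
For example, when the firmware fills in the slot tables, each slot shows
up under /sys/bus/pci/slots/<N>/ with an "address" file you can match
against a NIC's PCI address. A rough sketch (the interface name "eth0" is
just an example, and error handling is minimal):

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char link[PATH_MAX], addr[64], line[64], path[PATH_MAX];
    DIR *d;
    struct dirent *de;

    /* the PCI device behind eth0, e.g. ".../0000:04:00.0" */
    ssize_t n = readlink("/sys/class/net/eth0/device", link,
                         sizeof(link) - 1);
    if (n < 0)
        return 1;
    link[n] = '\0';

    /* keep "0000:04:00" -- the slot address has no function part */
    snprintf(addr, sizeof(addr), "%s", strrchr(link, '/') + 1);
    *strrchr(addr, '.') = '\0';

    d = opendir("/sys/bus/pci/slots");
    if (!d)
        return 1;  /* no slot info from the firmware */
    while ((de = readdir(d))) {
        if (de->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), "/sys/bus/pci/slots/%s/address",
                 de->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(line, sizeof(line), f) &&
            !strncmp(line, addr, strlen(addr)))
            printf("eth0 is in physical slot %s\n", de->d_name);
        fclose(f);
    }
    closedir(d);
    return 0;
}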

> I understand that some cards obtain different bus ids even though they
> are connected to the same physical PCI slot. So if I could find a way for
> the program to control/predict what bus id the kernel gives to each slot,
> that would solve my problem.

You can't do that: the BIOS picks the bus ids, and they can be random
for all it cares.  You cannot pick them at all; that's not how PCI
works, sorry.

> The reason is that, by identifying the slot where a certain card is
> installed, I can use that bus id to change the udev naming rule for the
> card's interfaces.

udev already does this in a persistent way, using the bus id and slot
number, if present, by default.  Why do you have to add any additional
logic here?

> For example,
> Card at slot 1? rename its interfaces to s1pX 
> where 'p' stands for port.

Already done, look at the default naming scheme udev provides for
network devices :)

Hope this helps,

greg k-h

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Predict/force PCI slot id given by the kernel to NICs

2014-05-20 Thread Jaime Arrocha
Good day all,

I am working on a ProLiant server model which has 6 physical PCI slots.
They are going to be used for NICs with either 2 or 4 interfaces,
installed in an arbitrary order in the slots. By this I mean that a
certain box will only have a card (or cards) in certain slots.

I need to write a program that is able to predict/read what bus id (slot
id) the kernel gives to each interface of every card. The program should
be able to relate the bus id to the slot where the card is physically
installed.

I understand that some cards obtain different bus ids even though they
are connected to the same physical PCI slot. So if I could find a way for
the program to control/predict what bus id the kernel gives to each slot,
that would solve my problem.

The reason is that, by identifying the slot where a certain card is
installed, I can use that bus id to change the udev naming rule for the
card's interfaces.

For example,
Card at slot 1? rename its interfaces to s1pX
where 'p' stands for port.

I just need help being pointed in the right direction. Will I find all my
answers in pci.h?
The man page for lspci points me to that file.

Let me know if I need to be more clear.

Thanks for your time.

Jaime
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: two virtual address pointing to same physical address

2014-05-20 Thread Max Filippov
Hi,

On Tue, May 20, 2014 at 6:28 AM, valdis.kletni...@vt.edu wrote:
> On Tue, 20 May 2014 00:39:26 -, Chan Kim said:
>> But still it's confusing. Can two virtual addresses from the "same process"
>> (in init process, one for nocache pool, the other not) point to the same
>> physical address?
>
> I'm not sure what you're trying to say there.  In general, the hardware
> tags like non-cacheable and write-combining are applied to physical addresses,
> not virtual.

AFAIK most processors with MMU have cache control bits built into page
table entries, effectively controlling caching by virtual address.
E.g. x86 has Write Through and Cache Disabled bits in its PTEs,
ARM PTEs have bits B and C that determine caching and buffering
of the page, etc.
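
For illustration, the relevant bit positions (per the Intel SDM for x86
PTEs, and the ARM short-descriptor small-page format; these are
illustrative defines, not from a real kernel header):

/* x86 page table entry cache-control bits */
#define X86_PTE_PWT  (1UL << 3)   /* page write-through */
#define X86_PTE_PCD  (1UL << 4)   /* page cache disable */

/* ARM short-descriptor small-page cache-control bits */
#define ARM_PTE_B    (1UL << 2)   /* bufferable */
#define ARM_PTE_C    (1UL << 3)   /* cacheable */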

-- 
Thanks.
-- Max

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: two virtual address pointing to same physical address

2014-05-20 Thread Chan Kim
Valdis and all,
I understand we cannot access the same physical memory both as cacheable
and non-cacheable,
and I'm pretty sure the two addresses were pointing to the same physical
address. (I haven't checked with mmap yet; I will try it later.)
The point I was confused about was the __nocache_fix operation on the
address.
While writing a reply email, I remembered that the __nocache_fix
conversion of the address is used only before MMU setup.
After MMU setup (setting the context table and PGD pointer in the MMU
register), the __nocache_fix operation on the address is not used.
But __nocache_fix(0xc8000320) is 0xc0540320, and in our system we don't
have physical memory at 0xc000 ~
(we have memory at 0x6000 ~ 0x6fff).
Looking at the definition of __nocache_fix, the input and output are both
virtual addresses (VADDR).
This means accesses to virtual address 0xc054 (the nocache region) go
through the MMU to 0x6054.
Maybe at some earlier point, before the MMU setup, the 0xc054 ->
0x6054... virtual-to-physical mapping was already set up in the SRMMU. I
have to check (in the prom stage, or very early in init).
Can anybody shed some light on this?
Thanks,
Chan



From: "valdis.kletni...@vt.edu"
Sent: 2014-05-20 11:31:14 ( +09:00 )
To: 김찬
Cc: kernelnewbies@kernelnewbies.org
Subject: Re: two virtual address pointing to same physical address

On Tue, 20 May 2014 00:39:26 -, Chan Kim said:
> But still it's confusing. Can two virtual addresses from the "same process"
> (in init process, one for nocache pool, the other not) point to the same
> physical address?

I'm not sure what you're trying to say there. In general, the hardware
tags like non-cacheable and write-combining are applied to physical addresses,
not virtual.

And a moment's thought will show that treating the same address (whether it's
virtual or physical) as caching in one place and non-caching in another is just
*asking* for a stale-data bug when the non-caching reference updates data and
the caching reference thinks it's OK to use the non-flushed non-refreshed
cached data.

It's easy enough to test if two addresses from a single process can
point to the same physical address - do something like this:

/* just ensure these two map the same thing at different addresses */
foo = mmap(something,yaddayadda);
bar = mmap(something,yaddayadda);
/* modify via one reference */
*foo = 23;
/* you probably want a barrier call here so gcc doesn't screw you */
/* Now dereference it via the other reference */
printf("And what we read back is %d\n", *bar);

(Making this work is left as an exercise for the student :)

And figuring out why you need a barrier is fundamental to writing bug-free
code that uses shared memory. The file Documentation/memory-barriers.txt
is a good place to start.
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


An idea to capture kernel memory access

2014-05-20 Thread RS
Hello,
I have an idea: add some changes to the kernel, similar to kmemcheck, to
help check kernel memory.

I call it kernel_module_check_framework. It can check for memory buffer
overflow errors and other problems.
The memory buffers are whatever the user wants to monitor, not the whole
system's memory. The user can add/delete memory buffers in the framework.
The framework provides four interfaces: register/unregister functions and
add/delete functions. The user utilizes these interfaces to do the work.


When the user adds or deletes a memory buffer, the framework stores the
buffer's information and marks all the pages containing the buffer as
not-present.
Then, when an access to such a page faults, the framework checks whether
the access falls within one of the monitored buffers. If it hits, the
framework marks the page present, executes the interface function (the
hook function), and finally changes regs->flags to put the CPU into
single-step debugging mode. If not, it lets the kernel handle the fault.
Because of single-step mode, the kernel then steps into the do_debug
function in traps.c, where the framework marks the page not-present again
and restores regs->flags.
That way the framework can catch the next access to the same page.
On unregister, the framework restores all the pages and reports its
findings.
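
A rough sketch of the x86 flow I have in mind (simplified pseudo-kernel
code, not buildable as-is: lookup_monitored(), my_pte_for(),
set_pte_present(), clear_pte_present(), struct monitored_buf and hook_fn
are all made-up names, and locking/SMP/TLB details are ignored):

#define MY_EFLAGS_TF 0x00000100   /* trap flag: single-step */

/* hooked into the page-fault path (arch/x86/mm/fault.c) */
static int framework_fault(struct pt_regs *regs, unsigned long addr)
{
    struct monitored_buf *buf = lookup_monitored(addr); /* made-up helper */

    if (!buf)
        return 0;                      /* not ours: let the kernel handle it */

    set_pte_present(my_pte_for(addr)); /* allow this one access */
    if (hook_fn)
        hook_fn(buf, addr, regs);      /* user-registered handler */
    regs->flags |= MY_EFLAGS_TF;       /* single-step the faulting insn */
    return 1;
}

/* hooked into do_debug() (arch/x86/kernel/traps.c) */
static void framework_debug(struct pt_regs *regs, unsigned long addr)
{
    clear_pte_present(my_pte_for(addr)); /* re-arm the page */
    regs->flags &= ~MY_EFLAGS_TF;        /* leave single-step mode */
}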


As mentioned above, the interface function (the hook function) is a
function pointer. It is NULL at first; when the user registers with the
framework, the pointer is changed to the user's handler. So the
framework's main handler is implemented by the user.


To implement the framework I will change fault.c and traps.c and add new
files. It sounds like kmemcheck, but it is not the same: my framework
intends to capture each access to memory buffers that are dynamically
added or deleted by users, and to let the user handle it. For example, a
user can write a module that monitors a process's specified memory
buffers with the framework and gathers statistics such as buffer read and
write counts, or develop a module to check for memory access overflow
errors with it.


I don't know whether the design is feasible. Can anyone give me some
advice?


Thanks,
HeChuan
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies