Re: Where is the version.h?

2014-01-28 Thread parmenides

On 2014/1/28 3:09, valdis.kletni...@vt.edu wrote:

 No, the build gets done with a -Iinclude/generated/uapi


So, I get it. Thanks for your reply!



Where is the version.h?

2014-01-26 Thread parmenides
Hi,

According to LDD3, linux/module.h automatically includes linux/version.h,
which defines some macros to help test the kernel version. But I searched
the source tree and cannot find version.h in include/linux. Where can I
find it?

Thanks!
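
(For reference, a minimal sketch of the kind of version test those macros
allow; the generated header provides LINUX_VERSION_CODE and KERNEL_VERSION(),
and the module below is only an illustration, not taken from LDD3.)

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/version.h>

static int __init ver_init(void)
{
        /* both macros come from the generated linux/version.h */
        if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 34))
                printk(KERN_INFO "built against 2.6.34 or later\n");
        return 0;
}

static void __exit ver_exit(void)
{
}

module_init(ver_init);
module_exit(ver_exit);
MODULE_LICENSE("GPL");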



What's the meaning of CONFIG_BROKEN_ON_SMP?

2014-01-24 Thread parmenides
Hi,

I compiled a driver, and got the following message:

CONFIG_BROKEN_ON_SMP: should be set in the kernel configuration, but isn't.

Somebody suggested disabling the kernel's SMP feature. I did, and the
problem was solved.

Then I googled the meaning of CONFIG_BROKEN_ON_SMP, but did not find a
definitive explanation; pages such as
http://cateee.net/lkddb/web-lkddb/BROKEN_ON_SMP.html do not say much.

I wonder what the meaning of the configuration is. How does it work? Thx!





Re: What's the meaning of CONFIG_BROKEN_ON_SMP?

2014-01-24 Thread parmenides


On 2014/1/25 1:27, valdis.kletni...@vt.edu wrote:
 On Fri, 24 Jan 2014 17:43:35 +0800, parmenides said:

 CONFIG_BROKEN_ON_SMP: should be set in the kernel configuration, but isn't.

 I wonder what the meaning of the configuration is. How does it work? Thx!

 Drivers (and all other kernel-mode code, actually) need to do proper locking,
 so that if there's a race between code running on 2 different CPUs at the same
 time, they don't stomp all over each other (consider the case of one CPU 
 trying
 to walk a linked list at the same time that another CPU is deleting an entry
 from the list - this can leave the first CPU walking down a now corrupted list
 following now-stale pointers).

 There are a lot of old buggy drivers that don't do proper locking.  In a
 few cases, the drivers are *technically* buggy, but the bugs just happen to
 be in code that will manage to work anyhow *if there is only one CPU* (for
 instance, wrapped in a IRQ-disabled section).  These drivers get BROKEN_ON_SMP
 attached, because they can still potentially be useful for people compiling
 on architectures that only support 1 processor core, or *need* the driver and
 don't care if they only use 1 core of the 4 they have.

 The proper fix is, of course, to put proper locking in the driver - but most
 BROKEN_ON_SMP drivers are creeping horrorshows straight out of HP Lovecraft,
 and nobody wants to invest the resources needed to fix the abandonware driver.


Does that mean the BROKEN_ON_SMP drivers are all tagged, so they are not
shown by 'make menuconfig' when CONFIG_BROKEN_ON_SMP is not set? If so,
how are these drivers tagged?
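
(A sketch of how the tagging appears to work, pieced together from
init/Kconfig; the driver entry below is hypothetical. BROKEN_ON_SMP itself
can only become y when SMP is off, and the old drivers simply depend on it,
so menuconfig hides them on SMP kernels.)

# init/Kconfig (simplified): BROKEN_ON_SMP is y only when SMP is off
# (or when the catch-all BROKEN symbol is forced on).
config BROKEN_ON_SMP
        bool
        depends on BROKEN || !SMP
        default y

# hypothetical entry for an old, unlocked driver: it disappears from
# 'make menuconfig' as soon as CONFIG_SMP=y, because its dependency fails.
config OLD_UNLOCKED_DRIVER
        tristate "Some ancient driver without proper SMP locking"
        depends on BROKEN_ON_SMP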



Why can I not install the kernel headers?

2012-12-23 Thread mobile . parmenides

I am reading an article about kernel header installation; the link is as follows:
http://lxr.linux.no/#linux+v2.6.32/Documentation/make/headers_install.txt#L11

Following a command given by the article:

make headers_install ARCH=i386 INSTALL_HDR_PATH=/usr/include

the kernel headers should be installed into '/usr/include'. However, when
checking '/usr/include/linux' and '/usr/include/asm', I found that these
kernel headers had actually not been installed (judging by the timestamps).

In fact, the kernel headers have been installed into the 'include' subdirectory
of the kernel top-level directory. Obviously, the 'INSTALL_HDR_PATH' parameter
in the above command does not take effect. Is there any way to deal with this
problem? If I have to use 'cp' to install the headers by hand, the
'INSTALL_HDR_PATH' parameter does not seem to play its role.
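
(My reading of Documentation/make/headers_install.txt, offered as an
assumption to verify: the headers land under $(INSTALL_HDR_PATH)/include, so
passing INSTALL_HDR_PATH=/usr/include actually puts them in
/usr/include/include, which would explain the untouched timestamps. A sketch:)

# default: headers end up in ./usr/include inside the kernel source tree
make headers_install ARCH=i386

# to populate /usr/include itself, point INSTALL_HDR_PATH at /usr, not /usr/include
make headers_install ARCH=i386 INSTALL_HDR_PATH=/usr
ls -l /usr/include/linux /usr/include/asm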



--
mobile.parmenides


How to understand lowmem_reserve in a zone?

2012-08-18 Thread Parmenides
Hi,

I have a question about the reserved page frames in a zone. Physical
memory is split into nodes, which are further divided into zones. For
each zone, the kernel tries to reserve some page frames to satisfy
requests under low-memory conditions. There is a lowmem_reserve[] array
in the zone descriptor, which is defined like:

 struct zone {

unsigned long lowmem_reserve[MAX_NR_ZONES];
...
 };

It is obvious that lowmem_reserve[] contains MAX_NR_ZONES elements.
But I think a single integer would be enough to record the number of a
zone's reserved page frames. Why do we have to use an array? Furthermore,
lowmem_reserve[] merely stands for an amount. How does the kernel mark a
page frame as reserved?
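
(For context, a hedged sketch of how the allocator seems to use the array,
simplified from the zone watermark check: the reserve consulted depends on
which zone the allocation *could* have used, which is why one entry per
potential caller zone is needed. As far as I can tell, no individual page
frame is marked; lowmem_reserve[] is just an accounting threshold.)

#include <linux/mmzone.h>

/* simplified sketch, not the exact kernel source */
static int my_watermark_ok(struct zone *z, int classzone_idx,
                           unsigned long free_pages, unsigned long min)
{
        /* an allocation that could have been served from zone classzone_idx
         * (e.g. HIGHMEM) must leave this lower zone a bigger cushion */
        if (free_pages <= min + z->lowmem_reserve[classzone_idx])
                return 0;       /* refuse: would eat into the reserve */
        return 1;
}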



What are the differences among the various memory models?

2012-08-07 Thread Parmenides
It is said that there are three memory models, namely FLATMEM, DISCONTIGMEM
and SPARSEMEM. What are the differences among them? Thanks!


Re: Why can't processes switch in atomic context?

2012-07-04 Thread Parmenides
Thanks for all the responses to my question. As far as this question is
concerned, I am more interested in why we should do something rather than
merely in the fact that we should do it. The discussions have made it
clearer and clearer. Thanks again.



Why can't processes switch in atomic context?

2012-07-03 Thread Parmenides
Hi,

It is said that the kernel cannot be preempted in interrupt context
or when it is in a critical section protected by a spinlock.

1. For the spinlock case, it is easy to see that if preemption were allowed
in the critical section, the protection provided by the spinlock could not
readily be achieved.

2. For the interrupt context case, I think the kernel could in principle be
preempted while processing an interrupt. But this would increase interrupt
processing time, which in turn causes longer response times and data loss in
devices. Apart from that, are there any other reasons?

3. The kernel is responsible for prohibiting involuntary process switches,
namely preemption, in the above cases. But it seems that it does not take
care of voluntary process switches, namely yielding. For example, code in a
critical section protected by a spinlock could invoke schedule() to switch
processes voluntarily. Is this the case?
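
(Regarding point 3, a hedged sketch of why calling schedule() inside a
spinlocked section is a bug on a preemptible kernel: spin_lock() itself
raises the preempt count. The names my_lock and critical_section below are
only illustrative.)

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);

static void critical_section(void)
{
        spin_lock(&my_lock);    /* internally: preempt_disable() + take the lock */
        /* ... work that must neither be preempted nor sleep ...
         * calling schedule() here triggers "scheduling while atomic" */
        spin_unlock(&my_lock);  /* release the lock + preempt_enable() */
}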



How to debug a kernel thread?

2012-03-30 Thread Parmenides
Hi,

  It is said that the kernel can be debugged in qemu, so I gave it a
try. First, I started qemu with

 qemu -m 64M -kernel arch/x86/boot/bzImage -initrd
~/image.cpio.gz  -net nic -net tap,ifname=tap0  -s

 In another console

  gdb vmlinux
  (gdb) target remote localhost:1234
  (gdb) continue

An LKM (mymodule.ko) which starts a kernel thread was built with debug
info and scp'ed to the guest. In the guest, it is inserted by

  insmod mymodule.ko

Then, back to gdb

  (gdb) add-symbol-file mymodule.ko 0xc482e000
  (gdb) break mymodules.c:37
  (gdb) continue

The 37th line of mymodules.c is inside the kernel thread's loop, which
should ensure the breakpoint is hit every time the loop goes around. But
the breakpoint is not triggered as expected. Instead, the kernel thread
keeps running over and over, as indicated by its repeated output messages.
So it looks as if a kernel thread cannot be stopped by any breakpoint.

However, I thought maybe gdb needs to attach to the kernel thread.
So I checked the kernel thread's PID with ps and got 62.

 (gdb) control+C
 (gdb) attach 62

gdb prompted me that it would kill the program being debugged. I answered
'yes', and gdb told me

 ptrace: No such process.

and then the debug session was terminated and the guest was closed.

I started the qemu  with the above command again

 qemu -m 64M -kernel arch/x86/boot/bzImage -initrd
~/image.cpio.gz  -net nic -net tap,ifname=tap0  -s

And, without quitting the gdb

  (gdb) target remote localhost:1234
  (gdb) continue

In the guest, 'mymodule.ko' is inserted again

  insmod mymodule.ko

Surprisingly, I found that the breakpoint set at mymodule.c:37 was triggered
this time, and 'insmod' did not return until gdb was given another
'continue' command.

  (gdb) continue

Then the breakpoint was not triggered anymore, as before.


There are two questions:

1. Why can the kernel thread not be stopped at the breakpoint?
2. Why is the breakpoint triggered just when 'mymodule.ko' is loaded?

Thanks.
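
(One guess, stated as an assumption rather than a verified diagnosis:
add-symbol-file wants the module's real section load addresses, which can be
read from sysfs in the guest after insmod; the addresses below are
placeholders.)

# in the guest, after 'insmod mymodule.ko'
cat /sys/module/mymodule/sections/.text
cat /sys/module/mymodule/sections/.data
cat /sys/module/mymodule/sections/.bss

# back in gdb on the host, pass the data/bss addresses as well (placeholders)
(gdb) add-symbol-file mymodule.ko 0xc482e000 -s .data 0xc4830000 -s .bss 0xc4830400
(gdb) break mymodule.c:37
(gdb) continue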



Is an IRQ line disabled on local CPU or globally?

2011-11-18 Thread Parmenides
Hi,

 It is said that an IRQ line is disabled while an interrupt on the same
line is being handled. But is the IRQ line disabled on the local CPU or on
all CPUs? A related question: does disable_irq() disable an IRQ line on the
local CPU or on all CPUs?



What's the meaning of PREEMPT_ACTIVE in preempt_count?

2011-11-06 Thread Parmenides
Hi,

   The preempt_count contains a PREEMPT_ACTIVE flag. In cond_resched()
and preempt_schedule(), there is a pattern which is like this:

add_preempt_count(PREEMPT_ACTIVE);
schedule();
sub_preempt_count(PREEMPT_ACTIVE);

I wonder what role PREEMPT_ACTIVE plays. Why do we need to add it before
schedule() and subtract it afterwards?



Why can the kernel be stuck by a busy kernel thread?

2011-10-14 Thread Parmenides
Hi,

   I wrote a kernel module which does some nops. When it is inserted into
the kernel, the kernel gets stuck and no longer responds to my keypresses,
so I have to reboot to get out of it. Why?

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kthread.h>

struct task_struct *ktask = NULL;

static int thread_func(void *data)
{
        int i;
        while (!kthread_should_stop()) {
                for (i = 0; i < 1000; i++) {
                        asm volatile ("nop\n\t");
                }
        }

        return 0;
}

static int tst_init(void)
{
        ktask = kthread_run(thread_func, NULL, "mythread");

        return 0;
}

static void tst_exit(void)
{
        if (ktask) {
                kthread_stop(ktask);
                ktask = NULL;
        }
}

module_init(tst_init);
module_exit(tst_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Parmenides");
MODULE_DESCRIPTION("Something.");



Re: Why can the kernel be stuck by a busy kernel thread?

2011-10-14 Thread Parmenides
2011/10/14 Daniel Baluta daniel.bal...@gmail.com:
 Is kernel preemption activated? Could you check for # grep
 CONFIG_PREEMPT .config?


Indeed, I had not selected the CONFIG_PREEMPT option. Now I have turned it
on and things are OK. Thanks a lot.
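
(For completeness, a hedged sketch of an alternative that should keep the
system responsive even without CONFIG_PREEMPT: let the busy thread yield
voluntarily. Only the thread function changes; the rest of the module above
stays the same, and cond_resched() comes from <linux/sched.h>.)

static int thread_func(void *data)
{
        while (!kthread_should_stop()) {
                int i;

                for (i = 0; i < 1000; i++)
                        asm volatile ("nop\n\t");
                cond_resched();         /* give other tasks a chance to run */
        }
        return 0;
}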



Re: When is it safe to preempt?

2011-10-08 Thread Parmenides
2011/10/8 Chetan Nanda chetanna...@gmail.com:

 New task pick by scheduler may try to get the same lock resulting in
 deadlock

It seems that this kind of deadlock might be resolved eventually. Suppose
we have a task A which is holding a spinlock, and A is preempted by task B,
which tries to obtain the same spinlock. Although B has to busy-wait, it
will itself eventually be preempted once it uses up its timeslice. Therefore
A has a chance to be selected by the scheduler and release the spinlock.
Then B will go on when it is selected by the scheduler next time.



Re: Why are processes with higher priority allocated more timeslice?

2011-09-27 Thread Parmenides
Hi, Mulyadi

2011/9/27 Mulyadi Santosa mulyadi.sant...@gmail.com:
 simply to say that, the more important a job is, it should be given
 longer time to run... but, the process has privilege to yield before
 time slice is up...and when it comes back,it will use the remaining
 time slice.and its dynamic priority will stay the same (that's the
 property that I recall)

 well, you can think, what happen if you take the other direction for
 the policy? higher priority, but less time slice? that, IMHO, is less
 intuitive.


Initially, I thought that the scheduler should enlarge the timeslices of
CPU-bound processes to improve throughput. But now I have realized that the
two goals of schedulers, namely shorter latency and higher throughput,
cannot both be achieved at the same time. The Linux scheduler may prefer
the former. Thanks! :-)



Re: Why are processes with higher priority allocated more timeslice?

2011-09-26 Thread Parmenides
2011/9/26 Mulyadi Santosa mulyadi.sant...@gmail.com:
 Hi :)

Actually, the CFS scheduler which
 is a new scheduler in Linux kernel also does the same thing. But, I
 think this way does not fit with scheduler's principle.

 remember the keyword you ask? fairness? that is being  fair to all
 processes but since, there are always more processes than
 processors, unfairness always happen.


In fact, I am interested in the length of timeslice rather than
fairness at this point. :-)

This way ensures
 lower latency. It is also necessary that CPU-bound processes are to be
 allocated longer timeslice to improve throughput owing to less process
 switch costs. That means lower priority processes (CPU-bound) should
 be allocated longer timeslice, whichs obviously conflicts with the
 actual practice taken by the Linux's scheduler. Any explanation?

 What you refer initially is the time when time slice assignment is
 strictly derived from the static/nice level. So e.g process with nice
 level 0 has lesser time slice that nice level -5.

 But as you can see, situation change dynamically during run time, thus
 static prio must be taken into dynamic priority. And dynamic priority
 itself, must take another factor for time slice calculation. Here,
 sleep time comes into play.


OK, suppose there is a CPU-bound process and an I/O-bound process, both
assigned the same nice level 0. After some time, the I/O-bound process will
receive a higher dynamic priority owing to its frequent sleeping. Given that
the I/O-bound process mostly sleeps, why does the scheduler give it a longer
timeslice? After all, it really does not need more time.

On the other hand, the CPU-bound process will receive a lower dynamic
priority as a punishment, because it costs more CPU time. A lower dynamic
priority indicates that this process is more 'CPU-bound', i.e. that it
needs more CPU time. If the scheduler allocated a longer timeslice to this
process, the frequency of process switches would be reduced. I think that
would help to improve the throughput of the entire system.



Re: Why are processes with higher priority allocated more timeslice?

2011-09-26 Thread Parmenides
Hi Jeff,

2011/9/27 Jeff Donner jeffrey.don...@gmail.com:

 Well, if it doesn't need more time then it doesn't matter what its priority 
 is,
 when it goes to sleep waiting for some IO it yields back the
 remainder of its time. You could give it as long a timeslice
 as you like; it won't use more than it needs, because it mostly waits on IO.


 A lot of the time the IO process won't be runnable, as it's waiting on IO.
 When the kernel is looking to dole out CPU time at those times, well the
 CPU-bound process is the only one that can take it. So the kernel
 gives it to it, lower priority or not.


 CFS doesn't distort anything.

For this example, it is really OK. But dynamic priority does not necessarily
have anything to do with timeslice. I have no intention of making remarks
about any particular scheduler (forgive me if I seem to do that) :-).
Actually, a common characteristic of Linux's schedulers is that timeslices
get longer as priority rises. I am just curious why the schedulers take
this policy. IMHO, it somewhat conflicts with intuition. I think there must
be some motivation for it, but I have no idea what it is.



Re: Why does the CFS chase fairness?

2011-09-20 Thread Parmenides
Hi,

I have got a clearer idea of fairness between processes. Thanks for your
patient explanation. :-)

2011/9/20 Mulyadi Santosa mulyadi.sant...@gmail.com:
 Hi .

 I am reaching my virtual limit here, so beg me pardon :)

 On Mon, Sep 19, 2011 at 23:26, Parmenides
 Hmm..., does that mean timeslice weighting introduce unfainess? If we
 think fairness relies on each task not fetching more timeslice than
 other tasks, the eaiest way to achieve fairness is to give every task
 the same timeslice.

 At the extreme theoritical side, yes, but again that is if all are
 CPU bound the complication comes since in reality most processes
 are mixture of CPU and I/O bound...or sometimes I/O bound only.

 Can I understand like this: each task advance its progress tinier than
 traditional timeslice, which makes C has more chances to be selected
 to preempt A or B owing to its higher priority? Higer priority makes
 C's virtual time smaller than A and B.


 in non preemptive kernel i.e cooperative scheduling, your above
 suggested idea is the right way to achieve fairness in such situation.
 However, since user space (and now kernel space too) implements
 preemptive, adjusting time slice is not really necessary to make C
 kicks back into run queue.

 What the scheduler needs perhaps at this point is good priority
 recalculation is C could run ASAP. If not, even though C is in run
 queue, it still can beat the other processes in the competition of CPU
 time.


 --
 regards,

 Mulyadi Santosa
 Freelance Linux trainer and consultant

 blog: the-hydra.blogspot.com
 training: mulyaditraining.blogspot.com




Re: Why does the CFS chase fairness?

2011-09-20 Thread Parmenides
2011/9/21 Mulyadi Santosa mulyadi.sant...@gmail.com:
 Hi Permenides :)

 Looks like I made few typos here and there, so allow me to put few erratas :)

 2011/9/20 Mulyadi Santosa mulyadi.sant...@gmail.com:
 What the scheduler needs perhaps at this point is good priority
 recalculation is C could run ASAP. If not, even though C is in run

 recalculation so C could be executed ASAP. ...

 queue, it still can beat the other processes in the competition of CPU

 ...it could be beaten by other processes..


Yes, even with enough timeslice, if C does not have a high enough priority,
it cannot preempt A or B. That's the point where priority plays its role
when the scheduler favours I/O-bound tasks. Thank you again.



Re: Why does the CFS chase fairness?

2011-09-19 Thread Parmenides
Hi,

 2011/9/19 Mulyadi Santosa mulyadi.sant...@gmail.com:
 Hi :)

 Seriously, what I consider more fair is Con Kolivas BFS scheduler
 these days. No excessive time slice weighting, just priority
 stepping and very strict deadline.


Hmm..., does that mean timeslice weighting introduces unfairness? If we
think fairness means that no task fetches more timeslice than any other,
the easiest way to achieve fairness is to give every task the same
timeslice. But that seemingly cannot be considered fair either. So an exact
definition of fairness would be appreciated.


 I took this chance to add: to maximize throughput too...

 well, if you have processess let's say A, B, C. A and B are CPU bound,
 C sleeps most of the times (let's say it's vim process running)

 If a scheduler implement very fair scheduling, then whenever user
 press a key in vim window, C should be kicked in ASAP and then run.
 However, as C yields its time slice, A or B should back ASAP too.

 If the scheduler is not really fair, then C should wait longer to be
 back running. But C should not be given too much time so A and B have
 more time to complete their number crunching works


Can I understand it like this: each task advances its progress in steps
smaller than a traditional timeslice, which gives C more chances to be
selected to preempt A or B owing to its higher priority? Higher priority
makes C's virtual time smaller than A's and B's.



How to create some threads running the same function

2011-09-15 Thread Parmenides
Hi,

   I wanted to test how to create kernel threads, so I have written a
kernel module which creates a number of kernel threads running the same
function. But the results are somewhat confusing.

#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/delay.h>

#define MAX_KTHREAD  2

struct task_struct *ktask[MAX_KTHREAD];

static int my_kthread(void *data)
{
        int nr = *(int *)data;
        while (!kthread_should_stop()) {
                ssleep(1);
                printk(KERN_ALERT "This is mythread[%d].\n", nr);
        }
        return 0;
}

static int kthread_init(void)
{
        int i;

        for (i = 0; i < MAX_KTHREAD; i++) {
                ktask[i] = kthread_run(my_kthread, &i, "mythread[%d]", i);
        }
        return 0;
}

static void kthread_exit(void)
{
        int i;

        for (i = 0; i < MAX_KTHREAD; i++) {
                if (ktask[i]) {
                        kthread_stop(ktask[i]);
                        ktask[i] = NULL;
                }
        }
}

module_init(kthread_init);
module_exit(kthread_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Shakespeare");
MODULE_DESCRIPTION("This is a test program of kthread.");

The messages on the screen are:

This is mythread[-929820448].
This is mythread[1].
This is mythread[-929820448].
This is mythread[1].
This is mythread[-929820448].
... ... ...

I wonder why the first thread's number is -929820448 rather than zero.

Furthermore, when running again with MAX_KTHREAD == 3, the messages are:

This is mythread[1].
This is mythread[2].
This is mythread[1].
This is mythread[1].
This is mythread[2].
This is mythread[1].
This is mythread[1].
This is mythread[2].
... ... ...

There should be three threads running, but only two of them appear; the
first thread got lost.
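
(A hedged sketch of one way to avoid this: the loop variable i can change,
or go out of scope, before a new thread reads it through the pointer, so
give each thread its own stable integer. The thread_id[] array is an
illustrative name, not from the original module; the rest of the module
above is assumed unchanged.)

static int thread_id[MAX_KTHREAD];

static int kthread_init(void)
{
        int i;

        for (i = 0; i < MAX_KTHREAD; i++) {
                thread_id[i] = i;       /* storage that outlives the loop */
                ktask[i] = kthread_run(my_kthread, &thread_id[i],
                                       "mythread[%d]", i);
        }
        return 0;
}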



Re: Why is the PF_SUPERPRIV flag cleared?

2011-09-08 Thread Parmenides
 This flag PF_SUPERPRI, indicates used superuser privileges and not use
 superuser privileges.
I get it. This is really a misunderstanding. Thanks a lot.

2011/9/8 rohan puri rohan.pur...@gmail.com:
 Hi,

     When forking a child process, the copy_process() function will by
 default clear the PF_SUPERPRIV flag, which indicates whether a process
 use superuser privileges. That means a  superuser process will create
 a child process does not has superuser privileges. I think the child
 process of a superuser process should also be a superuser one, while
 the child process of a normal process by default should also be a
 normal one (except that the setuid bit of the child executable is turn
 on). In both cases it is not necessary that the PF_SUPERPRIV flag to
 be cleared.  So, I wonder why the PF_SUPERPRIV flag is cleared by
 defult.


 Hi,

 This flag PF_SUPERPRI, indicates used superuser privileges and not use
 superuser privileges. Which in any case, INDEPENDENT of all the processes
 which have superuser privileges, whether they had used them or not and for
 those processes which do not have superuser privileges needs to be cleared
 for the child of them (since the child process has been just created and at
 this point in time it has not used the superuser privileges) Its a kind of
 initialization you can think of.

 Regards,
 Rohan.




question about oops and panic

2011-08-29 Thread Parmenides
Hi,

1. I think oops and panic are both ways to deal with errors that occur in
kernel space. Is there any relationship between them?

2. I made a NULL pointer dereference deliberately in a kernel module and
got an oops like:

... ... ...

Aug 29 00:58:45 lfs kernel: Call Trace:
Aug 29 00:58:45 lfs kernel: Call Trace:
Aug 29 00:58:45 lfs kernel:  [<c100112d>] ? do_one_initcall+0x44/0x120
Aug 29 00:58:45 lfs kernel:  [<c10517ce>] ? sys_init_module+0xa7/0x1d9
Aug 29 00:58:45 lfs kernel:  [<c138d49d>] ? syscall_call+0x7/0xb

... ... ...

I wonder what the two numbers after a function name mean.



Re: Can not free irq 0

2011-08-29 Thread Parmenides
2011/8/29 Mulyadi Santosa mulyadi.sant...@gmail.com:
 Hi :)

 On Sun, Aug 28, 2011 at 17:13, Parmenides mobile.parmeni...@gmail.com wrote:
 The irq 8 is really occupied by rtc and its initial flags is set as
 IRQF_DISABLED.

 Ah, great you found it :) I could only guess it...

Thanks for your encouragement. Your guess was really an important guide for me. :-)


 I think it is somewhat make sense to use IRQF_DISABLED here, since
 this kind of irq should ask for exclusive line and none other should
 ever bug with it. That way RTC handling is handled as fast and as
 efficiently as possible.

So it is a good idea to maintain the kernel's logical integrity. I am
just doing this for fun rather than for any productive purpose.



Re: Can not free irq 0

2011-08-28 Thread Parmenides
 After enabling the RTC support, I have recompiled the kernel and try
 to use the irq 8. But, it seems that the 'irq_request()' can not
 register my hangler.

 isn't that 8 occupied by rtc? and it might be occupied
 exclusivelya.k.a you can put more handler there

IRQ 8 is indeed occupied by the rtc, and its initial flags are set to
IRQF_DISABLED. At the beginning I thought the IRQ's registration was done
in drivers/char/rtc.c, but this is not really the case. So I had to find
where the registration is done. I renamed request_irq() to another function
name and compiled the kernel. Checking the error messages from gcc, I found
that the actual registration is done in drivers/rtc/rtc-cmos.c like this:

retval = request_irq(rtc_irq, rtc_cmos_int_handler,
                     IRQF_DISABLED, dev_name(&cmos_rtc.rtc->dev),
                     cmos_rtc.rtc);

It is obvious that IRQ 8 (rtc_irq == 8) is not permitted to be shared with
other interrupt handlers. So I changed IRQF_DISABLED to IRQF_SHARED,
recompiled the kernel, installed it, and then rebooted. After my module was
inserted, I got the following output:

root [ ~ ]# cat /proc/interrupts
           CPU0
  0:        150    XT-PIC-XT    timer
  1:          8    XT-PIC-XT    i8042
  2:          0    XT-PIC-XT    cascade
  5:       2871    XT-PIC-XT    eth0
  6:          3    XT-PIC-XT    floppy
  8:        132    XT-PIC-XT    rtc0, myinterrupt
  9:          0    XT-PIC-XT    acpi
 10:          0    XT-PIC-XT    uhci_hcd:usb2
 11:         45    XT-PIC-XT    ioc0, ehci_hcd:usb1
 12:        116    XT-PIC-XT    i8042
 14:       2016    XT-PIC-XT    ide0
 15:         48    XT-PIC-XT    ide1
NMI:          0    Non-maskable interrupts
LOC:      14033    Local timer interrupts
SPU:          0    Spurious interrupts
PMI:          0    Performance monitoring interrupts
PND:          0    Performance pending work
RES:          0    Rescheduling interrupts
CAL:          0    Function call interrupts
TLB:          0    TLB shootdowns
TRM:          0    Thermal event interrupts
THR:          0    Threshold APIC interrupts
MCE:          0    Machine check exceptions
MCP:          8    Machine check polls

'rtc0' and 'myinterrupt' really do share IRQ 8. Then I executed the test
program from Documentation/rtc.txt to trigger some periodic interrupts from
the rtc, and found that the 'myinterrupt' handler was invoked several
times, judging by its output in /var/log/messages.

But I don't think this is an ideal way to share an IRQ, since it requires
modifying the kernel code directly. So I wonder whether there is any
EXPORTed function which can modify the flags of an existing interrupt
handler.

2011/8/28 Mulyadi Santosa mulyadi.sant...@gmail.com:
 Hi...

 On Sun, Aug 28, 2011 at 03:59, Parmenides mobile.parmeni...@gmail.com wrote:
 are you really sure? in my system (laptop with core duo cpu) it is
 increased by around 1000-2000 every 2 seconds and AFAIK it is using
 HPET.

 Yes. How can I see the timer is i8253 or HPET? I just found 'timer' in
 terms of the output of 'cat /proc/interrupts'.

 try dumping the output of
 /sys/devices/system/clocksource/clocksource*/current_clocksource

 So maybe IMO free_irq() is causing your cpu referencing null
 instruction...that might be due to free_irq is not checking whether it
 is safe to delete a handler


 --
 regards,

 Mulyadi Santosa
 Freelance Linux trainer and consultant

 blog: the-hydra.blogspot.com
 training: mulyaditraining.blogspot.com




Can not free irq 0

2011-08-27 Thread Parmenides
Hi,

I wonder how an interrupt handler works, so I am trying to write one
myself. The first problem is which IRQ number I should select to hook an
interrupt handler onto. Judging from the output below, I think IRQ 0 can be
adopted, because its interrupt count stays at 145 and does not increase any
more. This may imply that the interrupt handler of the rtc clock is not
working and can be replaced by another one. Furthermore, I also notice that
the Local timer interrupts count is increasing, and I guess that the clock
ticks are driven by that clock chip. That means IRQ 0 may be deprecated and
its handler can be replaced safely.

root [ ~/work/moduleprog ]# cat /proc/interrupts
           CPU0
  0:        145    XT-PIC-XT    timer           <--- is not working
  1:          8    XT-PIC-XT    i8042
  2:          0    XT-PIC-XT    cascade
  5:        402    XT-PIC-XT    eth0
  6:          3    XT-PIC-XT    floppy
  9:          0    XT-PIC-XT    acpi
 10:          0    XT-PIC-XT    uhci_hcd:usb2
 11:         45    XT-PIC-XT    ioc0, ehci_hcd:usb1
 12:        116    XT-PIC-XT    i8042
 14:        750    XT-PIC-XT    ide0
 15:         48    XT-PIC-XT    ide1
NMI:          0    Non-maskable interrupts
LOC:       4932    Local timer interrupts      <--- is working really
SPU:          0    Spurious interrupts
... ... ... ... ...

 The kernel module code:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>

static irqreturn_t my_interrupt(int irq, void *dev)
{
        return IRQ_HANDLED;
}

int init_module(void)
{
        free_irq(0, NULL);
        if (request_irq(0, my_interrupt, IRQF_SHARED, "myinterrupt",
                        (void *)my_interrupt)) {
                printk(KERN_ALERT "Can not install interrupt handler of irq 0.\n");
        }

        return 0;
}

void cleanup_module(void)
{
}

MODULE_LICENSE("GPL");
MODULE_AUTHOR("parmenides");
MODULE_SUPPORTED_DEVICE("mydevice");

Compiling it and then invoking insmod, I get these messages:

Aug 27 23:18:35 lfs kernel: [ cut here ]
Aug 27 23:18:35 lfs kernel: kernel BUG at mm/slab.c:521!
Aug 27 23:18:35 lfs kernel: invalid opcode:  [#1] SMP
Aug 27 23:18:35 lfs kernel: last sysfs file: /sys/kernel/uevent_seqnum
Aug 27 23:18:35 lfs kernel: Modules linked in: mydevice(+)
Aug 27 23:18:35 lfs kernel:
Aug 27 23:18:35 lfs kernel: Pid: 1688, comm: insmod Not tainted 2.6.34
#1 440BX Desktop Reference Platform/VMware Virtual Platform
Aug 27 23:18:35 lfs kernel: EIP: 0060:[c108a835] EFLAGS: 00010046 CPU: 0
Aug 27 23:18:35 lfs kernel: EIP is at kfree+0x67/0x96
Aug 27 23:18:35 lfs kernel: EAX:  EBX: c155eb40 ECX: 
EDX: c16cdde0
Aug 27 23:18:35 lfs kernel: ESI: 0282 EDI: c156fc80 EBP: 
ESP: cf323f70
Aug 27 23:18:35 lfs kernel:  DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Aug 27 23:18:35 lfs kernel: Process insmod (pid: 1688, ti=cf322000
task=cfbaa030 task.ti=cf322000)
Aug 27 23:18:35 lfs kernel: Stack:
Aug 27 23:18:35 lfs kernel:  c155eb40   c1054076
d0846080  d0846007 d0846010
Aug 27 23:18:35 lfs kernel: 0 c100112d d0846080  f63d4e2e
cf322000 c10517ce 0804b018 080488f0
Aug 27 23:18:35 lfs kernel: 0 c138ba4d 0804b018 0826 0804b008
080488f0 f63d4e2e bfddbfb8 0080
Aug 27 23:18:35 lfs kernel: Call Trace:
Aug 27 23:18:35 lfs kernel:  [c1054076] ? free_irq+0x2e/0x40
Aug 27 23:18:35 lfs kernel:  [d0846007] ? init_module+0x0/0x3d [mydevice]
Aug 27 23:18:35 lfs kernel:  [d0846010] ? init_module+0x9/0x3d [mydevice]
Aug 27 23:18:35 lfs kernel:  [c100112d] ? do_one_initcall+0x44/0x120
Aug 27 23:18:35 lfs kernel:  [c10517ce] ? sys_init_module+0xa7/0x1d9
Aug 27 23:18:35 lfs kernel:  [c138ba4d] ? syscall_call+0x7/0xb
Aug 27 23:18:35 lfs kernel: Code: e2 05 03 15 c0 65 68 c1 8b 02 25 00
80 00 00 66 85 c0 74 03 8b 52 0c 8b 02 25 00 80 00 00 66 85 c0 74 03
8b 52 0c 80 3a 00 78 04 0f 0b eb fe 8b 4a 18 64 a1 e8 5d 5f c1 8b 1c
81 8b 03 3b 43 04
Aug 27 23:18:35 lfs kernel: EIP: [c108a835] kfree+0x67/0x96 SS:ESP
0068:cf323f70
Aug 27 23:18:35 lfs kernel: ---[ end trace 00973d9f77f0e389 ]---

The problem is likely caused by free_irq(), but I don't know why and
how to resolve it.
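
(A hedged sketch of the usual pattern, assuming the goal is just to see a
handler run: rather than freeing IRQ 0, which the kernel's timer still owns,
request a line with IRQF_SHARED and a unique dev_id, and release only that
handler on exit. IRQ_NUM and my_dev are illustrative names; sharing still
requires the line's existing handler to have been registered with
IRQF_SHARED as well, as discussed in the follow-ups.)

#include <linux/module.h>
#include <linux/interrupt.h>

#define IRQ_NUM 8               /* assumption: a line whose existing handler allows sharing */

static int my_dev;              /* unique cookie identifying this handler */

static irqreturn_t my_interrupt(int irq, void *dev_id)
{
        return IRQ_NONE;        /* not our device; let the real handler run */
}

static int __init share_init(void)
{
        return request_irq(IRQ_NUM, my_interrupt, IRQF_SHARED,
                           "myinterrupt", &my_dev);
}

static void __exit share_exit(void)
{
        free_irq(IRQ_NUM, &my_dev);     /* removes only the handler registered with &my_dev */
}

module_init(share_init);
module_exit(share_exit);
MODULE_LICENSE("GPL");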



Re: Can not free irq 0

2011-08-27 Thread Parmenides
 are you really sure? in my system (laptop with core duo cpu) it is
 increased by around 1000-2000 every 2 seconds and AFAIK it is using
 HPET.

Yes. How can I see whether the timer is the i8253 or the HPET? I just
found 'timer' in the output of 'cat /proc/interrupts'.


 So maybe IMO free_irq() is causing your cpu referencing null
 instruction...that might be due to free_irq is not checking whether it
 is safe to delete a handler

After enabling RTC support, I recompiled the kernel and tried to use
IRQ 8. But it seems that request_irq() cannot register my handler.


2011/8/28 Mulyadi Santosa mulyadi.sant...@gmail.com:
 Hi...

 On Sat, Aug 27, 2011 at 22:23, Parmenides mobile.parmeni...@gmail.com wrote:
    I wonder how an interrupt handler work, and so try to make one by
 myself. The first problem is which irq number should I select to hook
 an interrupt handler on it. In terms of the following messages, I
 think irq 0 can be adopted because the number of interrupts raised
 remains 145 and does not increase any more.

 are you really sure? in my system (laptop with core duo cpu) it is
 increased by around 1000-2000 every 2 seconds and AFAIK it is using
 HPET.

 So maybe IMO free_irq() is causing your cpu referencing null
 instruction...that might be due to free_irq is not checking whether it
 is safe to delete a handler

 --
 regards,

 Mulyadi Santosa
 Freelance Linux trainer and consultant

 blog: the-hydra.blogspot.com
 training: mulyaditraining.blogspot.com




How to understand 'make oldconfig'?

2011-08-25 Thread Parmenides
Hi,

I have tried to understand the 'make oldconfig' command used when
configuring the kernel. I did some experiments and came to the following
ideas:

1. When there is no .config in /usr/src/linux:
   (1) If there is no /boot/config-x.y.z, make will ask some questions and
       then produce a .config.
   (2) Otherwise, make will copy /boot/config-x.y.z to
       /usr/src/linux/.config.

My question: according to the messages generated by make, I gather that
both 'make defconfig' and 'make oldconfig' generate a .config based on
'i386_defconfig'. Why does 'make oldconfig' ask some questions, while
'make defconfig' does not?

2. When there already is a .config in /usr/src/linux, make does nothing
but generate a copy of .config, namely .config.old.

My question: according to Love, "After making changes to your configuration
file, or when using an existing configuration file on a new kernel tree, you
can validate and update the configuration: make oldconfig." But whether I
edit .config and make some changes, or I copy an existing .config into
/usr/src/linux, 'make oldconfig' seems to do nothing. So how should I
understand 'make oldconfig' in this case?
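
(For what it's worth, a sketch of the workflow I believe 'make oldconfig' is
meant for, carrying an old configuration into a newer tree; the version
numbers are just examples.)

cd /usr/src/linux
cp /boot/config-2.6.25 .config     # configuration from the previous kernel
make oldconfig                     # prompts only for options the old file does not mention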



Re: How to make the kernel support NTFS?

2011-08-25 Thread Parmenides
 well, previously you didn't mention about make oldconfig, but then
 you said you did make oldconfig. Which one is right?


It may have been an inaccurate expression. 'make oldconfig' comes right
after copying .config; they are two separate steps, and I mean I did not do
either of them at all.

 The only way make oldconfig find the old .config file is AFAIK by
 placing it in top source directory of extracted linux kernel.

When I invoke 'make oldconfig' without a .config in /usr/src/linux,
I get these messages:

root [ /usr/src/linux ]# make oldconfig
scripts/kconfig/conf -o arch/x86/Kconfig
#
# using defaults found in /boot/config-2.6.34
#
#
# configuration written to .config
#

So I think 'make' must have taken a copy of /boot/config-2.6.34 into
/usr/src/linux.



Re: How to understand 'make oldconfig'?

2011-08-25 Thread Parmenides
 2. When there is a .config in /usr/src/linux indeed, make do nothing
 but generate a copy of .config, namely .config.old.

 I don't understand.

Other 'make XXXconfig' such as:

   make config
   make menuconfig
   make xconfig
   make defconfig

all modify the .config to a greater or lesser extent. If 'make oldconfig'
has not asked any questions, then the .config is not changed. So I think
'make oldconfig' does nothing, and I wonder what it actually does.


2011/8/26 Mulyadi Santosa mulyadi.sant...@gmail.com:
 hi

 On Thu, Aug 25, 2011 at 23:32, Parmenides mobile.parmeni...@gmail.com wrote:
 Hi,

    I have tried to understand 'make oldconfig' command while
 configurating kernel. I do some experiments and get the following
 ideas:

 1. When there is no a .config in /usr/src/linux,
    (1)  If there is no a /boot/config-x.y.z, make will ask some
 questions and then produce a .config.
    (2)  Otherwise, make will copy the /boot/config-x.y.z to
 /usr/src/linux/.config.

 My question: According to the messages generated by make, I get that
 both 'make defconfig' and 'make oldconfig' will gerenate .config based
 on 'i386_defconfig'. Why does 'make oldconfig' ask some questions,
 while 'make defconfig' does not?

 because make defconfig is simply generating a default pre configured
 config file (it's based on i386_defconfig as you said).

 make oldconfig asks something? quite likely because you tried to fetch
 that config into newer kernel version. Thus, it asks your decision on
 what to do on those new introduced options.

 2. When there is a .config in /usr/src/linux indeed, make do nothing
 but generate a copy of .config, namely .config.old.

 I don't understand.


 --
 regards,

 Mulyadi Santosa
 Freelance Linux trainer and consultant

 blog: the-hydra.blogspot.com
 training: mulyaditraining.blogspot.com




Re: How to make the kernel support NTFS?

2011-08-24 Thread Parmenides
 NFS != NTFS

Yes, I have checked my spelling.


 You are trying to mount the root file system using Network File System
 (NFS) - remote mounting.

 NFS option is in File system  Network File System  NFS

The parameter passed to kernel by grub is

kernel /boot/vmlinuz-2.6.34 root=/dev/hda1

So the kernel's decision to boot from NFS is not directed by this
parameter. I told the kernel to boot from hda1, yet the kernel decided to
boot from NFS.

Disabling 'NFS' and recompiling the kernel does not help either.






Re: How to make the kernel support NTFS?

2011-08-24 Thread Parmenides
 you said, LFS? Linux from scratch? BTW, that root-nfs message is
 really weird, are you sure you're not inaccidentally enable such
 option and push it to boot stage?

Yes, the LFS I use is really Linux From Scratch.
I have recompiled with the 'NFS' option explicitly disabled, but the
problem remains.



Re: How to make the kernel support NTFS?

2011-08-24 Thread Parmenides
The problem has been resolved by recovering an old .config file, which had
been deleted along with the old kernel. I think this problem has nothing to
do with the 'NTFS' options at all, but with an improper kernel
configuration. The steps are as follows:

step 1. Copy the old .config to /boot/config-2.6.34. This .config is from
the old 2.6.25 kernel source directory.
step 2. Enter the new 2.6.34 kernel source directory, then 'make clean &&
make mrproper' to clean the directory.
step 3. 'make menuconfig' to select the NTFS options.
step 4. 'make && make modules_install', then copy arch/x86/boot/bzImage to
/boot and modify /boot/grub/menu.lst.
step 5. Reboot successfully.

However, I have a further question about the old .config file. At step 1,
I did not copy the old .config into the new 2.6.34 kernel source directory
and run 'make oldconfig' to activate the old configuration. The only place
where the old .config exists is /boot. So at step 3, how can 'make' find
the old .config file?



How to make the kernel support NTFS?

2011-08-23 Thread Parmenides
Hi,

I have dumped an LFS 6.3 cdrom to a virtual hard disk in vmware, and it
runs without any problem. The original kernel version of LFS 6.3 is 2.6.25;
I have upgraded it to 2.6.34 and it seems to work well. But when I try to
compile the 2.6.34 kernel to include NTFS support, the kernel cannot boot
and gives me the following messages:

Root-NFS: No NFS server available, giving up.
VFS: Unable to mount root fs via NFS, trying floppy.
Kernel panic - not syncing: VFS Unable to mount root fs on unknown-block(2,0)

When doing 'make menuconfig', I selected these options:

File systems -> FUSE (Filesystem in Userspace) support
File systems -> DOS/FAT/NT Filesystems -> NTFS file system support
                                       -> NTFS write support

and all of them are compiled into the kernel, rather than as kernel modules.

The kernel parameters are also OK:

default 0
timeout 30
title LFS
root(hd0,0)
kernel /boot/vmlinuz-2.6.34 root=/dev/hda1

Given that I only added NTFS support, why does NFS come into it? How can I
make the kernel support the NTFS file system?



Why is the clear_bit() a specical bitop?

2011-07-14 Thread Parmenides
Hi,

While reading asm/bitops.h, I had some questions about the bit-clear
operations. P.S. The link:
http://lxr.linux.no/linux+v2.6.34/arch/x86/include/asm/bitops.h

The clear_bit() is defined as follows:

/**
 * clear_bit - Clears a bit in memory
 * @nr: Bit to clear
 * @addr: Address to start counting from
 *
 * clear_bit() is atomic and may not be reordered.  However, it does
 * not contain a memory barrier, so if it is used for locking purposes,
 * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
 * in order to ensure changes are visible on other processors.
*/
static __always_inline void
clear_bit(int nr, volatile unsigned long *addr)
{
        if (IS_IMMEDIATE(nr)) {
                asm volatile(LOCK_PREFIX "andb %1,%0"
                        : CONST_MASK_ADDR(nr, addr)
                        : "iq" ((u8)~CONST_MASK(nr)));
        } else {
                asm volatile(LOCK_PREFIX "btr %1,%0"
                        : BITOP_ADDR(addr)
                        : "Ir" (nr));
        }
}

1. Its two parallel functions, namely set_bit() and change_bit(), contain
their memory barriers, while clear_bit() does not have one. What makes it
deserve such special consideration?
2. How is it used for locking purposes? Is there any example?

The clear_bit_unlock() is defined as follows:

/*
 * clear_bit_unlock - Clears a bit in memory
 * @nr: Bit to clear
 * @addr: Address to start counting from
 *
 * clear_bit() is atomic and implies release semantics before the memory
 * operation. It can be used for an unlock.
 */
static inline void clear_bit_unlock(unsigned nr, volatile unsigned long *addr)
{
   barrier();
   clear_bit(nr, addr);
}

3. "clear_bit() is atomic and implies ***release semantics*** before the
memory operation." What is the meaning of release semantics?
4. Why does the barrier() precede the clear_bit()? AFAIK, barrier() causes
memory values to be reloaded into registers. Given that clear_bit() modifies
memory, why is there no barrier() after the clear_bit()?
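
(A hedged sketch of the "locking purpose" the comment alludes to: one bit
used as a tiny lock. The names lock_word, MY_LOCK_BIT, my_lock and my_unlock
are illustrative; the kernel wraps essentially this pattern in
bit_spin_lock()/bit_spin_unlock().)

#include <linux/bitops.h>
#include <asm/processor.h>      /* cpu_relax() */

static unsigned long lock_word;
#define MY_LOCK_BIT 0

static void my_lock(void)
{
        /* test_and_set_bit() implies a full barrier, so it acts as "acquire" */
        while (test_and_set_bit(MY_LOCK_BIT, &lock_word))
                cpu_relax();
}

static void my_unlock(void)
{
        /* barrier first, then the atomic clear: every store made inside the
         * critical section is visible before the bit is seen as 0 (release) */
        clear_bit_unlock(MY_LOCK_BIT, &lock_word);
}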



two questions about test_and_clear_bit()

2011-05-05 Thread Parmenides
Hi,

For the following function, I have two questions:

static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
{
        int oldbit;

        __asm__ __volatile__( LOCK_PREFIX
                "btsl %2,%1\n\tsbbl %0,%0"
                : "=r" (oldbit), "+m" (ADDR)
                : "Ir" (nr) : "memory");
        return oldbit;
}

1. There are two instructions in the inline assembly, namely btsl and
sbbl. Can a single LOCK_PREFIX ensure that the whole operation is atomic?

2. The clobber list of the inline assembly contains the string "memory".
What is the meaning of this declaration and why does this operation need
it? After all, some other operations such as clear_bit() do not need
"memory". The "memory" declaration appears here and there in the kernel
source and has bothered me for a long time, so any details about it would
be appreciated.

static inline void clear_bit(int nr, volatile unsigned long *addr)
{
        __asm__ __volatile__( LOCK_PREFIX
                "btrl %1,%0"
                : "+m" (ADDR)
                : "Ir" (nr));
}

 Thanks!
