Bits of SPI data

2009-10-29 Thread J.H.Kim
Hi, everyone

I'm trying to write data to an SPI device.
The datasheet of my SPI device requires that
SPI transfers be 22 bits long.

But if I send the data with "write(fd, buffer, 3)",
24 bits are sent, which does not fit my device.

How can I send only 22 bits to the SPI device?

Thanks in advance.

Best Regards,
J.Hwan Kim

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecar...@nl.linux.org
Please read the FAQ at http://kernelnewbies.org/FAQ



Re: Doubt regarding Virtual Memory

2009-10-29 Thread shailesh jain
a) __pa(kaddr): translates a kernel virtual address to a physical address,
which is just an arithmetic operation subtracting PAGE_OFFSET from kaddr.


b) #define virt_to_page(kaddr) (mem_map + (__pa(kaddr) >> PAGE_SHIFT))
The above macro gives you the struct page for a corresponding kernel virtual address.

c) Linear-to-physical address translation is done by the x86 processor
(32-bit without PAE) by traversing pgd->pte->page_phys_addr. The OS sets up
these entries and the processor uses them.

Yes, the above macro is not for ZONE_HIGHMEM, because ZONE_HIGHMEM is not
directly mapped by the kernel.


Shailesh Jain

On Fri, Oct 23, 2009 at 2:01 PM, Shameem Ahamed wrote:

> Hi Friends,
>
> Please help me to figure out some basic concepts in MM.
>
> From the books, I learned that VMA-to-PA translation consists of traversing
> the full page directory, which consists of the Global Directory, Middle
> Directory and Page Table.
>
> I have also read that VMA-to-PA translation is done using a macro,
> virt_to_page, defined as given below:
>
> #define virt_to_page(kaddr) (mem_map + (__pa(kaddr) >> PAGE_SHIFT))
>
>
> If there is a macro, why do we need code for traversing all the page
> directories? This macro is simple math pointing to an index into the
> global mem_map array, which contains all the pages in the system.
>
>
> So my doubts are:
> Is the macro only for ZONE_NORMAL (up to 896M, which is directly mapped by
> the kernel) memory pages?
> Does the mem_map array contain pages only up to ZONE_NORMAL?
> Does page-table traversal happen only for HIGH_MEM?
>
> Regards,
> Shameem
>
>
>
>
>


Re: Questions about linux scheduler

2009-10-29 Thread Chris Friesen
On 10/29/2009 09:08 AM, Daniel Rodrick wrote:
> Hi list,
> 
> I'm following the Robert Love's book and am trying to understand the
> Linux O(1) scheduler.

The details of Rob Love's book are now out of date and no longer
applicable to the current scheduler.  Some of the overall concepts are
still applicable though.

> So here is my understanding. The kernel allows
> the applications to specify two types of priorities
> 
> * Realtime Priorities: Range from 0 to 99

Actually, 1 to 99.

> * Non-realtime priorities: Also called "nice" values range from -20 to +19.
> 
> (The above are mutually exclusive)

Correct.

> Overall scheduling algo
> =======================
> * A total of 140 priorities (100 RT + 40 non-RT) - these priorities
> are static - do not change over time.

So far so good.

> * A lower priority process will run only if there are no runnable
> processes in priority above it - this automatically means that all RT
> processes get to run before non-RT processes.

True for RT, not true for non-RT.  In the current scheduler the non-RT
tasks are stored in a time-ordered structure rather than the 40
runqueues that were used before.  A non-RT task will run once it becomes
the most "urgent" task based on its nice level, how much cpu time it
uses, and how long it's been since it ran last relative to other tasks
on the system.

>  * tasks on the same priority level are scheduled round robin

True for RT.  For non-RT, tasks of other nice levels may be interleaved
depending on how much cpu time they've been using.

> Is my above understanding correct? Where my understanding doesn't fit
> is the concept of dynamic timeslice calculation. IMHO, the dynamic
> timeslice calculation applies only to non-RT processes, right? Because
> a higher-priority RT process should always get to run.

With the new scheduler I think it's fair to say that non-RT tasks don't
really have a fixed "timeslice".  The amount of time they get to run is
determined by their nice level, previous cpu usage, cpu usage of other
tasks, etc.

Chris




Re: Questions about linux scheduler

2009-10-29 Thread Jonathan Corbet
Hi, Daniel,

I'm not a scheduler expert by any stretch, but I can try to play one on
the net...

> I'm following the Robert Love's book and am trying to understand the
> Linux O(1) scheduler. 

Note that the O(1) scheduler is no more; it was replaced by the
completely fair scheduler in 2.6.23.  For the purposes of your
questions things haven't changed a lot, but it's completely different
internally.

> So here is my understanding. The kernel allows
> the applications to specify two types of priorities
> 
> * Realtime Priorities: Range from 0 to 99
> * Non-realtime priorities: Also called "nice" values range from -20 to +19.

You're really talking about different scheduling classes.  There are
two realtime classes (FIFO and RR), both of which trump the interactive
class (SCHED_OTHER).

> * A total of 140 priorities (100 RT + 40 non-RT) - these priorities
> are static - do not change over time.

Sort of, but realtime priorities are really an entirely different scale.

> * A lower priority process will run only if there are no runnable
> processes in priority above it - this automatically means that all RT
> processes get to run before non-RT processes.

That is true for the realtime classes.  SCHED_OTHER will give
lower-priority processes a bit of time even in the presence of runnable
high-priority processes.

>  * tasks on the same priority level are scheduled round robin

SCHED_RR does that, SCHED_FIFO does not.  SCHED_OTHER is
fairness-based, which has RR-like characteristics but is not quite the
same.

Hope that helps,

jon

Jonathan Corbet / LWN.net / cor...@lwn.net




IRQF_TRIGGER_* flags

2009-10-29 Thread Rick Brown
Hi,

This is regarding the IRQF_TRIGGER_* macros (which can be passed in the
flags argument of request_irq()) that I came across while browsing the code:

/*
 * These correspond to the IORESOURCE_IRQ_* defines in
 * linux/ioport.h to select the interrupt line behaviour.  When
 * requesting an interrupt without specifying a IRQF_TRIGGER, the
 * setting should be assumed to be "as already configured", which
 * may be as per machine or firmware initialisation.
 */
#define IRQF_TRIGGER_NONE    0x0000
#define IRQF_TRIGGER_RISING  0x0001
#define IRQF_TRIGGER_FALLING 0x0002
#define IRQF_TRIGGER_HIGH    0x0004
#define IRQF_TRIGGER_LOW     0x0008

1) I assume that the above flags can be used to configure whether the
interrupt is edge-triggered or level-triggered, and furthermore whether it
is high / low (for level-triggered) or rising / falling (for edge-triggered).
What confuses me, though, is that as per my understanding this configuration
must already have been done by the interrupt handling code at system
initialization time. So do these flags provide a way for the device driver
to alter that behaviour (override the default)? Isn't that drastic,
considering that an interrupt may be shared by drivers?

2) Secondly, should drivers be (or are they) aware of whether the line is
edge-triggered or level-triggered? Doesn't this responsibility belong more
appropriately to the interrupt handling code? Can someone give me a
practical situation where we could use these flags?

3) Finally and most importantly, does my driver need to do anything
differently depending on whether the IRQ line is edge-triggered or
level-triggered? And what does the kernel do differently in the two
cases?

TIA,

Rick.




Questions about linux scheduler

2009-10-29 Thread Daniel Rodrick
Hi list,

I'm following the Robert Love's book and am trying to understand the
Linux O(1) scheduler. So here is my understanding. The kernel allows
the applications to specify two types of priorities

* Realtime Priorities: Range from 0 to 99
* Non-realtime priorities: Also called "nice" values range from -20 to +19.

(The above are mutually exclusive)


Overall scheduling algo
=======================
* A total of 140 priorities (100 RT + 40 non-RT) - these priorities
are static - do not change over time.
* A lower priority process will run only if there are no runnable
processes in priority above it - this automatically means that all RT
processes get to run before non-RT processes.
 * tasks on the same priority level are scheduled round robin

Is my above understanding correct? Where my understanding doesn't fit
is the concept of dynamic timeslice calculation. IMHO, the dynamic
timeslice calculation applies only to non-RT processes, right? Because
a higher-priority RT process should always get to run.

Thanks,

Dan




Re: About the system call named "sys_mount".

2009-10-29 Thread 付新荣
I think there is no risk in my opinion, based on the following two reasons:
1. When the software interrupt occurs, the C2 register of CP15 does not change;
IOW, the page table of the current task is used by the MMU at this time. This page
table includes the mapping of kernel space and the mapping of the current task.
2. The priority of the data abort is higher than that of the software interrupt.

So "sys_mount" does not need to copy the parameters from user space.

Does anyone have a different opinion?
Thanks


-----Original Message-----
From: 付新荣
Sent: October 21, 2009 12:19
To: 'Joel Fernandes'; Rajat Jain
Cc: kernelnewbies@nl.linux.org
Subject: Re: About the system call named "sys_mount".

 
Hi all:
The "__get_user_asm_byte" macro is ultimately called by "sys_mount" to copy the
parameters from user space:

copy_mount_options ->
exact_copy_from_user ->
calls "__get_user" in a loop, "length" times


the "__get_user_asm_byte" macro is defined as follows:
#define __get_user_asm_byte(x,addr,err) \
__asm__ __volatile__(   \
"1: ldrbt   %1,[%2],#0\n"   \
"2:\n"  \
"   .section .fixup,\"ax\"\n"   \
"   .align  2\n"\
"3: mov %0, %3\n"   \
"   mov %1, #0\n"   \
"   b   2b\n"   \
"   .previous\n"\
"   .section __ex_table,\"a\"\n"\
"   .align  3\n"\
"   .long   1b, 3b\n"   \
"   .previous"  \
: "+r" (err), "=&r" (x) \
: "r" (addr), "i" (-EFAULT) \
: "cc")

#define __get_user(x,ptr)                       \
({                                              \
        long __gu_err = 0;                      \
        __get_user_err((x),(ptr),__gu_err);     \
        __gu_err;                               \
})

#define __get_user_err(x,ptr,err)                                       \
do {                                                                    \
        unsigned long __gu_addr = (unsigned long)(ptr);                 \
        unsigned long __gu_val;                                         \
        __chk_user_ptr(ptr);                                            \
        switch (sizeof(*(ptr))) {                                       \
        case 1: __get_user_asm_byte(__gu_val,__gu_addr,err); break;     \
        case 2: __get_user_asm_half(__gu_val,__gu_addr,err); break;     \
        case 4: __get_user_asm_word(__gu_val,__gu_addr,err); break;     \
        default: (__gu_val) = __get_user_bad();                         \
        }                                                               \
        (x) = (__typeof__(*(ptr)))__gu_val;                             \
} while (0)



I found your explanation reasonable, thanks. Now I want to find a way to
prevent the pages occupied by the "mount" task from being swapped out.

Thanks!
-----Original Message-----
From: Joel Fernandes [mailto:agnel.j...@gmail.com]
Sent: October 21, 2009 10:09
To: Rajat Jain
Cc: 付新荣; kernelnewbies@nl.linux.org
Subject: Re: About the system call named "sys_mount".

Hi Rajat,

So kernel virtual memory is always directly and permanently mapped and never
has to fault? Is this for performance, or is it because the kernel can't handle
its own faults (the kernel doesn't want to take responsibility for its own faults!)?

Also, I would be grateful if you could describe in a sentence or two how this
copy from user to kernel space happens. My guess: it looks up the process's
mm_struct and gets the physical location of its pages, whether on disk or in
physical memory, and then makes a copy into kernel space? Wouldn't this
be slow if the user memory is still on disk?

Also, at the time copy_from_user is called, it seems the memory would be
up to date anyway, and going to disk wouldn't be required. The user obviously
stored something in the memory, and the processor would have segfaulted already?

thanks,
-Joel

On Tue, Oct 20, 2009 at 4:08 AM, Rajat Jain  wrote:
>
> Hi,
>
>> Thank you for your reply.
>> It's interesting: my modified kernel image runs OK on my
>> hardware (ARM926EJ-S). I tested mounting ramfs and NFS; they are all OK.
>> Is that coincidental?
>>
>> Sorry, I don't comprehend your explanation. In my opinion,
>> if it's possible that the content of the parameters isn't in memory at
>> the time of the call, then "sys_mount" can't get them either.
>>
>> could u exp

What stops the vanilla kernel from being called an RTOS?

2009-10-29 Thread Rick Brown
Hi list,

I have asked this before, but some of my doubts remain. So I want to
know: when it comes down to code, what stops the plain Linux
kernel from being called an RTOS? Please note that I understand that
there are different Linux-based patches / projects that make Linux
realtime, but I want to know about the kernel from kernel.org - what
makes it a non-RTOS?

As far as I could understand, it is due to two factors:

1) Its use of virtual memory (because the user pages may be paged out,
thus causing unpredictable delays). But what if the user applications
always call mlockall() to lock all their pages in memory? Is this reason
then taken care of? Also, do all RTOSes avoid virtual memory?

2) No specified worst case latency (which is the definition of RTOS).
I could find that there are two kinds of latencies:

a) Scheduling latency: Well, with the latest 2.6 kernel and with Ingo
Molnar's O(1) scheduler, isn't this now deterministic?

b) Interrupt latency: This seems to be the main reason the kernel
cannot be called an RTOS, because a driver may hold spinlocks / disable
preemption for an arbitrary amount of time?

Finally, what I could conclude from googling was that the only
bottleneck with Linux is that certain areas of code are
non-preemptible (where spinlocks are held or interrupts are
disabled). If somehow we could make the entire kernel preemptible, then
it would be an RTOS. Is this right?

Lastly, aren't scheduling latency and interrupt latency contradictory?
I mean, there will always be a tradeoff between the two. Any effort to
decrease one / make it predictable will increase / make unpredictable
the other. No?

Thanks,

Rick




Re: kernel panic about kernel unaligned access

2009-10-29 Thread Anupam Kapoor
> Take "8709ed20 writeback_inodes+0xb4/0x160" for example. What does
> 0x160, the last hex number, mean? The value of a parameter?
The first number, 0xb4, means the EIP was that many bytes into
the 'writeback_inodes' function when this happened.
The second number, 0x160, means the function is 0x160 bytes long.

anupam
-- 
In the beginning was the lambda, and the lambda was with Emacs, and
Emacs was the lambda.
