Re: On loading userspace data into particular section of physical memory (ARM)

2012-12-08 Thread Joel A Fernandes
Hi Mulyadi,

How are you doing?
 PS: are you considering creating a special data section? Perhaps by
 using a custom ld script?

Yes, that would be a possibility.
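
Something like the following, I imagine (the section name .special_data and
the address below are made up, just for illustration):

/* Hypothetical example: tag variables so they land in a dedicated
 * output section instead of the normal .data section.
 */
#define SPECIAL_DATA __attribute__((section(".special_data")))

SPECIAL_DATA static char frame_buffer[4096];  /* made-up variable */
SPECIAL_DATA static int  config_table[256];   /* made-up variable */

/*
 * A custom linker script (passed via -T) could then pin that output
 * section to a chosen virtual address, e.g.:
 *
 *   SECTIONS {
 *       . = 0x40000000;   (made-up address)
 *       .special_data : { *(.special_data) }
 *   }
 */

That only pins the virtual placement, though; getting the kernel to back it
with a specific physical range is still the open question.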

Regards,
Joel



On loading userspace data into particular section of physical memory (ARM)

2012-12-04 Thread Joel A Fernandes
I am looking at a problem that might be too difficult to solve, or
might not be if I'm missing something, so I thought I'd bounce it off
this group.

Basically I have an application in userspace whose .data section
_has_ to be loaded into particular locations in physical memory. That
is, there is a chunk of physical memory that has to contain the .data
section, and no other part of physical memory should contain it.

What is the easiest way to do this? I guess changes might be required
to the ELF loaders in fs/bin*.c. Any other tricks?

Is it non-trivial to add a new memory zone to the kernel that manages
a particular section of physical memory? I thought if a new zone could
be added, then we could possibly modify the kernel ELF loader to
recognize a specially marked .data section and alloc memory from that
special zone when allocating page frames.

Let me know if you have any ideas. Thanks,

Regards,
Joel



Re: On loading userspace data into particular section of physical memory (ARM)

2012-12-04 Thread Joel A Fernandes
On Tue, Dec 4, 2012 at 11:19 PM, Subramaniam Appadodharana
c.a.subraman...@gmail.com wrote:
 Hi,


 On Tue, Dec 4, 2012 at 11:07 PM, Joel A Fernandes agnel.j...@gmail.com
 wrote:

 I am looking at a problem that might be too difficult to solve, or
 might not be if I'm missing something, so I thought I'd bounce it off
 this group.

 Basically I have an application in userspace whose .data section
 _has_ to be loaded into particular locations in physical memory. That
 is, there is a chunk of physical memory that has to contain the .data
 section, and no other part of physical memory should contain it.

 This is something unconventional. I wouldn't expect user space to mandate
 that. I think you are approaching the problem in the wrong way.

It is unconventional of course.

 What is the easiest way to do this? I guess changes might be required
 to the ELF loaders in fs/bin*.c. Any other tricks?

 Why would you want to do that? Even if you did, how would you achieve
 this for just your application? Whatever you change here (even if that
 were the right place to do it) would apply to all processes. I
 don't think you want that, do you?

No, just for this process, if we could use some directive to
mark the section in a certain way.

 Is it non-trivial to add a new memory zone to the kernel that manages
 a particular section of physical memory? I thought if a new zone could
 be added, then we could possibly modify the kernel ELF loader to
 recognize a specially marked .data section and alloc memory from that
 special zone when allocating page frames.

 Let me know if you have any ideas, Thanks,


 I am not sure why you would need the data section to be in a certain physical
 memory region, but if all you want is to copy certain data from userspace to
 be shared with a peripheral or something similar, that can be done.
 One way to do it would be to have a driver (your own memory manager, if you
 will) that reserves this physical memory during bootup, and then exposes
 a char device interface that you would call from your user app to
 request memory.

No, that would require modifications to the userspace code itself,
which doesn't work as there is a lot of code that would need to change.
The ideal solution would keep the whole physical memory allocation
transparent to the user process involved.
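
(As I read it, your suggestion amounts to roughly the sketch below; the base
address and size are made up, and the region would have to be kept away from
the kernel's allocator at boot. But it still needs the app to explicitly open
and mmap() the device, which is exactly the kind of change we can't make.)

/* Sketch of a char device that hands a reserved physical region to
 * userspace via mmap(). The addresses are illustrative only.
 */
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/mm.h>

#define RESERVED_PHYS_BASE 0x9f000000UL   /* made up, board specific */
#define RESERVED_SIZE      (4UL << 20)    /* 4 MiB, made up */

static int resmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long len = vma->vm_end - vma->vm_start;

	if (len > RESERVED_SIZE)
		return -EINVAL;

	/* Map the reserved physical pages straight into the caller. */
	return remap_pfn_range(vma, vma->vm_start,
			       RESERVED_PHYS_BASE >> PAGE_SHIFT,
			       len, vma->vm_page_prot);
}

static const struct file_operations resmem_fops = {
	.owner = THIS_MODULE,
	.mmap  = resmem_mmap,
};

static struct miscdevice resmem_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "resmem",
	.fops  = &resmem_fops,
};

static int __init resmem_init(void)
{
	return misc_register(&resmem_dev);
}

static void __exit resmem_exit(void)
{
	misc_deregister(&resmem_dev);
}

module_init(resmem_init);
module_exit(resmem_exit);
MODULE_LICENSE("GPL");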

 Would be interesting to see your use case!

The use case is one of separating the code and data sections into
different physical memory areas. Currently, the kernel allocates pages for
user code and data interspersed across physical memory (without
caring whether a given page holds code or data).
This works fine, but incurs a very high performance hit for a certain
application.



Re: Need help understanding memory models, cpu modes and address translation

2011-07-17 Thread Joel A Fernandes
On Sat, Jul 16, 2011 at 3:47 PM, Vaibhav Jain vjoss...@gmail.com wrote:
 Hi Mulyadi,

 Thanks for the explanation. It's really nice! But
 what I was referring to was this article on the virtual address layout of
 a program:

 http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory

 which mentions: "The distinct bands in the address space correspond to
 memory segments like the heap, stack, and so on. Keep in mind these segments
 are simply a range of memory addresses and have nothing to do with
 Intel-style segments."

 This gave rise to all the confusion. I used to think that the code, stack and
 heap segments in the virtual address layout of a program
 are the same as the segments we talk about when referring to hardware-provided
 segmentation. But it seems this is not the case.

Segmentation and the virtual address layout are independent of each
other, so you shouldn't confuse the two. You can have segmentation
first and then virtual addressing on top of it.

The way it works is:

logical address (segmented) -> virtual address -> physical address

The logical-to-virtual conversion is called segmentation, and the
virtual-to-physical conversion is called paging. Software always uses
logical addresses.
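
As a toy illustration of that chain (all the numbers are made up, and of
course the MMU does this in hardware, not C code):

#include <stdio.h>
#include <stdint.h>

/* Toy model of the x86 translation chain. Values are made up; on real
 * hardware the CPU/MMU performs these steps, not C code.
 */
int main(void)
{
	uint32_t seg_base = 0x00000000;  /* Linux uses a flat base of 0 */
	uint32_t offset   = 0x080499f0;  /* logical address, i.e. an offset */

	/* Segmentation: logical -> linear (virtual) address. With a base
	 * of 0 this is an identity mapping, which is why Linux can mostly
	 * ignore segmentation.
	 */
	uint32_t linear = seg_base + offset;

	/* Paging: linear -> physical. The MMU walks the page tables; here
	 * we just pretend the page lands in frame 0x1a2b3.
	 */
	uint32_t phys = (0x1a2b3u << 12) | (linear & 0xfffu);

	printf("logical 0x%08x -> linear 0x%08x -> physical 0x%08x\n",
	       (unsigned)offset, (unsigned)linear, (unsigned)phys);
	return 0;
}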

The article explains that the heap, stack, etc. are segments created by the
operating system and have nothing to do with traditional Intel-style
segments. In fact, the hardware is not even aware of the
presence of the heap; the HW only knows virtual addresses.

 I also read a little about real and protected mode and came to know that in
 32-bit protected mode all the Segment registers point to the same address

It so happens that Linux sets up the processor tables in such a way
that logical addresses map one-to-one to virtual addresses. IIRC,
for x86 there's a Global Descriptor Table that Linux manipulates to
produce this one-to-one mapping.

 This confused me even more. So I need an explanation of how all these work
 together. I am sorry if the question is
 not clear or if it sounds confusing.

Hope this clears it up; do read the introductory chapters of
Understanding the Linux Kernel, which touch on a lot of these topics.
Feel free to ask more questions.

Thanks,
Joel



Re: Need help understanding memory models, cpu modes and address translation

2011-07-17 Thread Joel A Fernandes
On Sun, Jul 17, 2011 at 1:24 AM, Vaibhav Jain vjoss...@gmail.com wrote:
 Thanks a lot Joel! This is a great explanation.
 Just one more question. I used to think that the compiler always
 assigns/generates addresses starting from 0, as Mulyadi has also mentioned.
 In the case when segmentation (Intel-style) is being used, how does the
 compiler assign addresses?


The compiler just generates code as if segmentation is not being
used. I'm not familiar with segmentation in x86 protected mode, and
because Linux doesn't use segmentation (logical addresses are
one-to-one mapped to virtual addresses), I've never really cared
about how it works ;) But I'd say some Google searches on the
Global Descriptor Table would give you some pointers.

You shouldn't worry about segmentation too much because virtual
addressing achieves everything it does and is more flexible. I'd say
ignore segmentation and focus on paging. :)

Thanks,
Joel



Re: Need help understanding memory models, cpu modes and address translation

2011-07-15 Thread Joel A Fernandes
On Fri, Jul 15, 2011 at 10:04 PM, Vaibhav Jain vjoss...@gmail.com wrote:
 Hi,

 Actually I have read that book. But when I started reading other books, such
 as those on assembly, they had these concepts of real mode, protected mode,
 the flat memory model, and the segmented memory model, which are specific to
 the Intel 32-bit architecture and which have got me highly confused. So I am
 looking for references that explain these concepts in depth.


Then you should read the book Understanding the Linux Kernel, and
Intel's reference manuals, which document these features.

Also, this is the book we read in undergrad for x86 internals :) It was
considered the bible for the topic (at least at the time):
http://www.amazon.com/Intel-Microprocessors-80186-80286-80386/dp/0132606704/ref=sr_1_1?s=booksie=UTF8qid=1310792692sr=1-1

Regards,
Joel



Re: Need help understanding memory models, cpu modes and address translation

2011-07-15 Thread Joel A Fernandes
Hi Mulyadi,

Good to read your posts. It has certainly been a long time and it
feels good to be back on this list!

 On Sat, Jul 16, 2011 at 10:14, Vaibhav Jain vjoss...@gmail.com wrote:


 Could somebody please state the difference clearly for me and explain how
 these two work together. I would
 really appreciate it if someone could explain the whole chain, from the
 generation of addresses by the compiler to the translation of
 those addresses when segmentation is working along with paging.

 when you generate object code from your source code (let's say in C)
 using gcc, your code and variables (data) are first turned into
 position-independent code. That means each symbol is just an offset. If
 there is an offset, surely we need a base address, right? But not at this
 object (.o) stage.

 Then we reach the stage of producing the final ELF binary (executable).
 Using the standard ELF rules, those offsets are turned into final
 addresses. So, say, the code is placed starting at 0x080499f0 and so on.

Very nice explanation!


 When that binary is loaded into memory, the loader (ld.so) takes that
 information and uses it as a clue on where to put the code and data.
 Using the standard mmap() syscall, a memory area is reserved and the
 data/code is loaded there. The exception is the stack, which is allocated
 dynamically (and grows down, on the Intel arch) starting at the upper
 limit of user space (near 3 GiB).


I'm just a little troubled by this bit.

AFAIK, the kernel takes ELF executables and loads the sections into the
appropriate places after parsing the ELF tables and headers. Correct
me if I'm wrong? Of course, ld.so takes care of dynamically linking/loading
shared libraries into the address space.
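
If you want to see exactly what the loader consumes, you can walk the
program headers yourself; a rough sketch with <elf.h> (32-bit ELF assumed,
with hardly any error handling):

#include <elf.h>
#include <stdio.h>

/* Print the PT_LOAD segments of a 32-bit ELF file: these are the
 * virtual addresses the kernel's ELF loader maps the file at.
 */
int main(int argc, char **argv)
{
	Elf32_Ehdr eh;
	Elf32_Phdr ph;
	FILE *f;
	int i;

	if (argc < 2)
		return 1;
	f = fopen(argv[1], "rb");
	if (!f || fread(&eh, sizeof(eh), 1, f) != 1)
		return 1;

	for (i = 0; i < eh.e_phnum; i++) {
		fseek(f, (long)eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
		if (fread(&ph, sizeof(ph), 1, f) != 1)
			break;
		if (ph.p_type == PT_LOAD)
			printf("LOAD at vaddr 0x%08x, filesz 0x%x, flags %c%c%c\n",
			       (unsigned)ph.p_vaddr, (unsigned)ph.p_filesz,
			       (ph.p_flags & PF_R) ? 'r' : '-',
			       (ph.p_flags & PF_W) ? 'w' : '-',
			       (ph.p_flags & PF_X) ? 'x' : '-');
	}
	fclose(f);
	return 0;
}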

Thanks,
Joel



Kernel development process questions

2011-07-13 Thread Joel A Fernandes
Hi,

I went over the development process documentation in Documentation/ in the
kernel tree, and had the following questions:

With respect to the Linux kernel development process, what does the word
"stage" mean? Is it the process of getting patches into Greg's staging
tree, or is it linux-next? Or does it depend on the context?

Also, is there a standard way to track the status and location of a
patch (i.e. some information about which maintainer's tree it is
currently applied to, which tree it has been sent to for applying, by
which maintainer, etc.)? I guess one way is to read the mailing lists,
but is there a generally preferred technique to get this information?

Regards,
Joel



Re: query regarding inode pages

2011-07-13 Thread Joel A Fernandes
[CC'ing list]

Hi Shubham,

I am not very familiar with the code for pdflush. But wasn't it
superseded by the per-BDI flusher threads (or similar) in recent kernels?

On Wed, Jul 13, 2011 at 10:45 AM, shubham sharma shubham20...@gmail.com wrote:
 I am trying to write a memory-based file system. The file system is
 intended to create files/directories and write their contents only to
 pages. For this I have used the grab_cache_page() function to
 get a new locked page in case the page does not exist in the radix
 tree of the inode.

 As the filesystem is memory based and all the data exists only in
 memory, I don't release the lock on the page, as I fear that the
 pdflush daemon could swap out the pages on which I have written data and
 I may never see that data again. I unlock all the pages of all the
 inodes of the file system in the kill_sb function once the file system
 is unmounted. But the problem I am facing is that if I open a
 file in which I have already written something (and whose pages are
 locked), the open system call in turn calls the __lock_page() function,
 which waits for the pages belonging to the inode to get unlocked.
 Hence the system call stalls indefinitely. I want to know if there is
 a mechanism by which I can prevent the pdflush daemon from swapping out
 the pages that my filesystem refers to.

I'm not sure pdflush is what swaps out pages; isn't that the role of
kswapd? pdflush AFAIK just writes dirty pages back to the backing
device.

I think what you're referring to is a certain page replacement
behavior under low memory conditions that you want to protect your
filesystem against. I will be interested in responses from others
about this, and will dig into the code outside office hours.

Maybe tmpfs is a good reference for your work?
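
As far as I understand it (rough, untested sketch; the helper name below is
made up), the usual pattern is to hold the page lock only around the copy,
mark the page dirty and up to date, and unlock it right away. The lock
serializes access to the page, it doesn't pin the data in memory:

#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Hypothetical helper: copy one page worth of data into the page cache
 * of an inode's mapping, holding the page lock only for the copy.
 */
static int fill_page_with_data(struct address_space *mapping,
			       pgoff_t index, const void *buf, size_t len)
{
	struct page *page;
	void *kaddr;

	page = grab_cache_page(mapping, index);  /* returned locked */
	if (!page)
		return -ENOMEM;

	kaddr = kmap(page);
	memcpy(kaddr, buf, len);
	kunmap(page);

	SetPageUptodate(page);
	set_page_dirty(page);      /* tell writeback about the new data */
	unlock_page(page);         /* don't keep the page locked */
	page_cache_release(page);  /* drop grab_cache_page()'s reference */

	return 0;
}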

Thanks,
Joel
