Re: Help on how to configure for user-defined memory protection support (GSoC 2020)

2020-05-21 Thread Gedare Bloom
>  This means that our low-level design for providing thread stack protection
> may look something like this:
>
> 1. For MPU-based processors, the number of protected stacks will depend on
> the number of protection domains, i.e. for MPUs with 8 protection domains we
> can have 7 protected stacks (one region will be assigned for global
> data). For MMU-based systems we will have a section (a page of size 1MB) for
> global data, and the task address space will be divided into smaller pages;
> page sizes will be decided by keeping in mind the number of TLB entries, in
> the manner I have described above in the thread.
>
There is value in defining a few global regions. I'll assume R/W/X
permissions. Then code (.text) should be R/X, read-only data sections
should be grouped together and made R, and data sections should be RW.
Stacks would then be added to the end. The linker scripts should be
used to group the related sections together; I think some ARM BSPs do
some of this already. That seems like a minimally useful configuration
for most users that would care: besides stack isolation, they want
protection of code from accidental overwrite, probably of data too,
and non-executable data in general. You may also have to consider a
few more permission complications (shared/cacheable) depending on the
hardware. A minimal sketch of such a region map follows.
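
A minimal sketch of that region map, assuming hypothetical names for
the linker-provided symbols and the permission bits (none of this is
an existing RTEMS API):

  #include <stddef.h>
  #include <stdint.h>

  #define REGION_R 0x1u /* readable   */
  #define REGION_W 0x2u /* writable   */
  #define REGION_X 0x4u /* executable */

  typedef struct {
    const void *begin; /* start symbol provided by the linker script */
    size_t      size;
    unsigned    perms; /* OR of the REGION_* bits above */
  } global_region;

  /* Section boundaries the linker script would export (assumed names). */
  extern const char _text_begin[],   _text_size[];
  extern const char _rodata_begin[], _rodata_size[];
  extern const char _data_begin[],   _data_size[];

  static const global_region global_regions[] = {
    { _text_begin,   (size_t)(uintptr_t) _text_size,   REGION_R | REGION_X },
    { _rodata_begin, (size_t)(uintptr_t) _rodata_size, REGION_R            },
    { _data_begin,   (size_t)(uintptr_t) _data_size,   REGION_R | REGION_W }
  };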

>  2. The protection, size, page table, and sharing attributes of each created 
> thread will be tracked.
>
I'd rather we not call this a page table. MPU-based systems have no
notion of a page table. But maybe it is OK as long as we understand
that you mean the data structure responsible for mapping out the
address space. I'm not sure what you mean by size, unless you refer to
that thread's stack. One possible shape for this per-thread tracking
structure is sketched below.
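
A possible shape for that per-thread record, with assumed field and
type names:

  #include <stddef.h>
  #include <stdint.h>

  typedef struct {
    uintptr_t stack_begin; /* base address of this thread's stack */
    size_t    stack_size;  /* the "size" attribute: extent of the stack */
    unsigned  perms;       /* R/W/X bits for the stack region */
    void     *mappings;    /* MMU: page-table fragment; MPU: region list */
  } thread_stack_attr;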

>  3. At every context switch, these attributes will be updated; the
> static-global regions will be assigned a global ASID and will not change
> during the switch, only the protected regions will be updated.
>
Yes, assuming the hardware supports ASIDs and a global attribute.

I don't know if you will be able to pin the global entries in
hardware. You'll want to keep an eye out for that. If not, you might
need to do something in software to ensure they don't get evicted
(e.g., touch them all before finishing a context switch, assuming LRU
replacement), as in the sketch below.
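
A sketch of that software fallback, reusing the global_regions table
from the earlier sketch and assuming each global region is covered by
a single TLB entry (e.g., a 1MiB section):

  /* Touch one address in each global region at the end of the context
     switch so that, under LRU replacement, the global entries are the
     most recently used and unlikely to be evicted. */
  static void repin_global_regions(const global_region *regions, size_t n)
  {
    for (size_t i = 0; i < n; ++i) {
      /* A volatile load keeps the compiler from removing the touch. */
      volatile const char *p = regions[i].begin;
      (void) *p; /* forces a TLB fill/refresh for this region */
    }
  }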

>  4. Whenever we share stacks, the page table entries of the shared stack,
> with the access bits as specified by the mmap/shm high-level APIs, will be
> installed into the current thread. This is different from simply providing
> the page table base address of the shared thread-stack (what if the user
> wants to make the shared stack only readable from another thread while the
> 'original' thread is r/w enabled?). We will also have to update the TLB by
> installing the shared regions while the global regions remain untouched.
>

Correct. I think we need to make a design decision whether a stack can
exceed one page. It will simplify things if we can assume it cannot,
but that may limit applications unnecessarily. Have to think on that;
a sketch of how the one-page constraint could be enforced follows.
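
If the decision goes the simple way, the constraint could be checked
where stacks are configured; the page-size constant and helper here
are illustrative only:

  #include <stdbool.h>
  #include <stddef.h>

  #define STACK_PAGE_SIZE 4096u /* assumed protection granularity */

  /* Reject stacks that would span more than one page under the
     single-page design. */
  static bool stack_size_fits_one_page(size_t requested)
  {
    return requested <= STACK_PAGE_SIZE;
  }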

The "page table base address" points to the entire structure that maps
out a thread's address space, so you'd have to walk it to find the
entry/entries for its stack. So, definitely not something you'd want
to do.

The shm/mmap calls should convey the privileges to the thread
requesting to share. This will result in adding the shared
entry/entries to that thread's address space, with the appropriately
set permissions. So, if the entry is created with read-only
permission, then that is how the thread will be sharing. The original
thread's entry should not be modified by the addition of an entry in
another thread for the same memory region. A sketch of that flow
follows.
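
A sketch, with hypothetical names (install_entry and the mapping
handle are assumptions, not an existing API):

  #include <stddef.h>
  #include <stdint.h>

  #define PERM_R 0x1u
  #define PERM_W 0x2u

  typedef struct mapping mapping; /* opaque per-thread address-space map */

  /* Hypothetical: writes an MPU region or page-table entry into one
     thread's mapping structure only. */
  void install_entry(mapping *m, uintptr_t va, size_t len, unsigned perms);

  /* Give the requesting thread a read-only view of another thread's
     stack; the owner's own (R/W) entry for the same memory is never
     touched. */
  static void share_stack_read_only(mapping *requester,
                                    uintptr_t owner_stack, size_t len)
  {
    install_entry(requester, owner_stack, len, PERM_R);
  }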

I lean toward thinking it is better to always pay for the TLB miss at
the context switch, which might mean synthesizing accesses to the
entries that might have been evicted, in case hardware restricts the
ability of software to install/manipulate TLB entries directly. That
is something worth looking at more, though. There is definitely a
tradeoff between predictable costs and throughput performance. It
might be worth implementing both approaches; a sketch of how they
could coexist is below.
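
Something like this would let both strategies be selected at build
time (the option name and preload_entries are made up):

  /* Hypothetical build-time switch between the two strategies. */
  #ifdef CONFIGURE_STACK_PROTECTION_EAGER_TLB
    /* Eager: synthesize the accesses (or install the entries) during
       the switch, so the cost is paid up front and is predictable. */
    #define STACK_PROT_ON_SWITCH(next) preload_entries(next)
  #else
    /* Lazy: take the TLB miss on first access after the switch;
       cheaper switches, less predictable first access. */
    #define STACK_PROT_ON_SWITCH(next) ((void) (next))
  #endif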

Gedare


Re: [PATCH] riscv: Mark htif_console_handler in htif.h as extern

2020-05-21 Thread Gedare Bloom
On Thu, May 21, 2020 at 6:36 PM Hesham Almatary
 wrote:
>
> On Mon, 18 May 2020 at 06:10, Gedare Bloom  wrote:
> >
> > you can push this one. I don't know if there are others?
>
> Thanks! Yeah, just one more here [1] but it's not vital/used yet.
>
If it's dead code, hold off until 6, thanks
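
For background on why this one-line patch matters: without extern, a
const object at file scope in a header is a tentative definition in C,
so every translation unit including the header emits its own
definition of the symbol, and the link can fail with
multiple-definition errors (GCC 10's -fno-common default exposes
this). A generic illustration, not the actual htif code:

  /* broken.h */
  const int table;        /* tentative definition: every .c file that
                             includes this header defines `table` */

  /* fixed.h */
  extern const int table; /* declaration only: the single definition
                             lives in exactly one .c file */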

> [1] https://lists.rtems.org/pipermail/devel/2020-May/059772.html
> >
> > On Sun, May 17, 2020 at 7:06 PM Hesham Almatary
> >  wrote:
> > >
> > >
> > >
> > > On Sun, 17 May 2020 at 23:45, Joel Sherrill  wrote:
> > >>
> > >> I hope you have committed these by now. :)
> > >
> > > Not yet, was waiting for approval. Shall I wait for the release first?
> > >>
> > >>
> > >> On Thu, May 7, 2020 at 3:14 PM  wrote:
> > >>>
> > >>> From: Hesham Almatary 
> > >>>
> > >>> It is defined in htif.c
> > >>> ---
> > >>>  bsps/riscv/riscv/include/dev/serial/htif.h | 2 +-
> > >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> > >>>
> > >>> diff --git a/bsps/riscv/riscv/include/dev/serial/htif.h b/bsps/riscv/riscv/include/dev/serial/htif.h
> > >>> index b0d83652b..4b16d8746 100644
> > >>> --- a/bsps/riscv/riscv/include/dev/serial/htif.h
> > >>> +++ b/bsps/riscv/riscv/include/dev/serial/htif.h
> > >>> @@ -45,7 +45,7 @@ void htif_console_putchar(rtems_termios_device_context *base, char c);
> > >>>
> > >>>  int htif_console_getchar(rtems_termios_device_context *base);
> > >>>
> > >>> -const rtems_termios_device_handler htif_console_handler;
> > >>> +extern const rtems_termios_device_handler htif_console_handler;
> > >>>
> > >>>  #ifdef __cplusplus
> > >>>  }
> > >>> --
> > >>> 2.25.1
> > >>>
> > >
> > > --
> > > Hesham
>
>
>
> --
> Hesham


Re: Help on how to configure for user-defined memory protection support (GSoC 2020)

2020-05-21 Thread Utkarsh Rai
On Thu, May 21, 2020 at 5:43 AM Hesham Almatary 
wrote:

> Yes, I completely agree with Gedare, and my reply doesn't entail
> otherwise. As Gedare stated a few requirements:
>
> "2. The basic protection isolates the text, rodata, and rwdata from
> each other. There is no notion of task-specific protection domains,
> and tasks should not incur any additional overhead due to this
> protection."
>
> Such areas are the ones I meant to be "Global." The design and
> implementation should aim to keep them resident in the TLB so they
> don't get evicted. They aren't assigned an ASID, as they are global,
> won't need to get flushed, and their mappings/attributes won't change.
>
> "3. The advanced protection strongly isolates all tasks' stacks.
> Sharing is done explicitly via POSIX/RTEMS APIs, and the heap and
> executive (kernel/RTEMS) memory are globally shared. A task shall only
> incur additional overhead in context switches and the first access to
> a protected region (other task's stack it shares) after a context
> switch."
>
> The additional overhead here is the flushing of the protected region
> (that might be a shared protected stack, for example). Only that
> region's TLB entry will differ between tasks on context switches, and
> if ASIDs are used, the hardware will make sure it gets the correct
> entry (by doing a HW page-table walk).
>
> On Wed, 20 May 2020 at 11:05, Utkarsh Rai  wrote:
> >
> >
> >
> >
> > On Wed, May 20, 2020 at 7:40 AM Hesham Almatary <
> heshamelmat...@gmail.com> wrote:
> >>
> >> On Tue, 19 May 2020 at 14:00, Utkarsh Rai 
> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, May 18, 2020 at 8:38 PM Gedare Bloom 
> wrote:
> >> >>
> >> >> On Mon, May 18, 2020 at 4:31 AM Utkarsh Rai 
> wrote:
> >> >> >
> >> >> >
> >> >> >
> >> >> >
> >> >> > On Sat, May 16, 2020 at 9:16 PM Joel Sherrill 
> wrote:
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> On Sat, May 16, 2020 at 10:14 AM Gedare Bloom 
> wrote:
> >> >> >>>
> >> >> >>> Utkarsh,
> >> >> >>>
> >> >> >>> What do you mean by "This would although mean that we would have
> page tables of  1MB."
> >> >> >>>
> >> >> >>> Check that you use plain text when inlining a reply, or at least
> that you broke the reply format.
> >> >> >>>
> >> >> >>> Gedare
> >> >> >>>
> >> >> >>> On Fri, May 15, 2020, 6:04 PM Utkarsh Rai <
> utkarsh.ra...@gmail.com> wrote:
> >> >> 
> >> >> 
> >> >> 
> >> >>  On Thu, May 14, 2020 at 10:23 AM Sebastian Huber <
> sebastian.hu...@embedded-brains.de> wrote:
> >> >> >
> >> >> > Hello Utkarsh Rai,
> >> >> >
> >> >> > On 13/05/2020 14:30, Utkarsh Rai wrote:
> >> >> > > Hello,
> >> >> > > My GSoC project,  providing thread stack protection support,
> has to be
> >> >> > > a user-configurable feature.
> >> >> > > My question is,  what would be the best way to implement
> this, my idea
> >> >> > > was to model it based on the existing system configuration
> >> >> > > <
> https://docs.rtems.org/branches/master/c-user/config/intro.html>, but
> >> >> > > Dr. Gedare pointed out that configuration is undergoing
> heavy changes
> >> >> > > and may look completely different in future releases. Kindly
> advise me
> >> >> > > as to what would be the best way to proceed.
> >> >> > before we start with an implementation. It would be good to
> define what
> >> >> > a thread stack protection support is supposed to do.
> >> >> 
> >> >> 
> >> >>  The thread stack protection mechanism will protect against
> stack overflow errors and will completely isolate the thread stacks from
> each other. Sharing of thread stack will be possible only when the user
> makes explicit calls to do so. More details about this can be found in this
> thread.
> >> >> >
> >> >> > Then there should
> >> >> > be a concept for systems with a Memory Protection Unit (MPU)
> and a
> >> >> > concept for systems with a Memory Management Unit (MMU). MMUs
> may
> >> >> > provide normal 4KiB Pages, large Pages (for example 1MiB) or
> something
> >> >> > more flexible. We should identify BSPs which should have
> support for
> >> >> > this. For each BSP should be a concept. Then we should think
> about how a
> >> >> > user can configure this feature.
> >> >> >
> >> >> > For memory protection we will have a 1:1 VA-PA address
> >> >> > translation; that means a 4KiB page size will be set for both the
> >> >> > MPU and MMU, and a 1:1 mapping will ensure we have to do fewer
> >> >> > page table walks. This would, however, mean that we would have
> >> >> > page tables of 1MB. I will first provide the support for
> >> >> > Armv7-based BSPs (RPi, BBB, etc. have MMU support); then, when I
> >> >> > have a working example, I will move on to provide the support for
> >> >> > RISC-V, which has MPU support.
> >> >> >>
> >> >> >>
> >> >> >> I think Sebastian is asking exactly what I did. What are the
> processor (specific CPU) requirements to support thread stack protection?
> >> >> >
> >> >> >
> >> >> > For thread stack protection 



RSB PC BSP packages fail on building curl (libbsd related)

2020-05-21 Thread Joel Sherrill
Hi

It appears to be a problem with the probe for gethostbyname. It fails
with this:

configure:19546: i386-rtems5-gcc -o conftest -qrtems
-B/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/i386-rtems5/lib/
-B/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/i386-rtems5/pc686/lib/
--specs bsp_specs -mtune=pentiumpro -march=pentium -O2 -ffunction-sections
-fdata-sections -Werror-implicit-function-declaration -Wno-system-headers
-isystem
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/i386-rtems5/pc686/lib/include
-L/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/i386-rtems5/pc686/lib
-mtune=pentiumpro -march=pentium
 
-L/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001/ftp/curl/home/joel/rtems-cron-5/tools/5/lib
conftest.c -lbsd -lm -lz -lrtemsdefaultconfig >&5
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/bin/../lib/gcc/i386-rtems5/7.5.0/../../../../i386-rtems5/bin/ld:
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/i386-rtems5/pc686/lib/libbsd.a(rtems-bsd-init-dhcp.c.18.o):/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/rtems-libbsd-vd38dbbe18e5315bf69a7c3916d71ef3838d4c20d-x86_64-linux-gnu-1/rtems-libbsd-d38dbbe18e5315bf69a7c3916d71ef3838d4c20d/build/i386-rtems5-pc686-default/../../rtemsbsd/include/bsp/nexus-devices.h:157:
undefined reference to `_bsd_lem_pcimodule_sys_init'
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/bin/../lib/gcc/i386-rtems5/7.5.0/../../../../i386-rtems5/bin/ld:
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/i386-rtems5/pc686/lib/libbsd.a(iflib.c.18.o):
in function `iflib_pseudo_register':
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/rtems-libbsd-vd38dbbe18e5315bf69a7c3916d71ef3838d4c20d-x86_64-linux-gnu-1/rtems-libbsd-d38dbbe18e5315bf69a7c3916d71ef3838d4c20d/build/i386-rtems5-pc686-default/../../freebsd/sys/net/iflib.c:4804:
undefined reference to `iflib_gen_mac'
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/bin/../lib/gcc/i386-rtems5/7.5.0/../../../../i386-rtems5/bin/ld:
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/i386-rtems5/pc686/lib/libbsd.a(iflib.c.18.o):(.rodata.iflib_pseudo_methods+0x4):
undefined reference to `noop_attach'
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/bin/../lib/gcc/i386-rtems5/7.5.0/../../../../i386-rtems5/bin/ld:
/home/joel/rtems-cron-5/rtems-source-builder/rtems/build/tmp/sb-1001-staging/i386-rtems5/pc686/lib/libbsd.a(iflib.c.18.o):(.rodata.iflib_pseudo_methods+0xc):
undefined reference to `iflib_pseudo_detach'
collect2: error: ld returned 1 exit status

What needs to be fixed in libbsd to correct this?

Filed as https://devel.rtems.org/ticket/3985#ticket

--joel