Re: AMD64 arch

2020-07-31 Thread Yang Chung Fan
The x86_64

> I would also recommend working off of my branch.  There is stuff in
> the current pci branch that will need to go away especially around MSI
> and MSI-X.  If you are ok using the legacy interrupt for now it should
> be ok.

I agree.
That PCI feature branch I started is quite embarrassing in my opinion, because:
 1. I am not a PCI/PCIe expert, so I did all sorts of weird things.
 2. It is based only on my limited experience working with the
Jailhouse hypervisor.
 3. It is tightly coupled to the x86_64 system, e.g. ACPI.
 4. I lack the will to push the development forward.

I do really think that Brennan's work should be the "true" solution to
PCI devices.

-- 
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)


Re: Problem booting when using Nuttx on Qemu x86_64

2020-07-03 Thread Yang Chung Fan
Hi,

I have tested it on my machine.

Using commit deb3b13759fe08, I can successfully boot into nsh.

For your reference, the following are my environments.

Build machine:
 - Windows 10, WSL2 (Ubuntu 18.04)
 - gcc (Ubuntu 9.3.0-11ubuntu0~18.04.1) 9.3.0

Execute machine:
 - Ubuntu 18.04
 - Xeon 2650 v4 / 16GB
 - Custom compiled kernel:
- Linux rtlab-linux 5.4.39-rt23+ #1 SMP PREEMPT_RT Mon May 11
22:10:55 JST 2020 x86_64 x86_64 x86_64 GNU/Linux
 - Custom compiled Qemu:
- QEMU emulator version 4.2.0 (v4.2.0-dirty)


BR,

--
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)


Re: Problem booting when using Nuttx on Qemu x86_64

2020-07-03 Thread Yang Chung Fan
In addition, may I ask for your processor model?

Intel processor feature sets vary quite a lot.

Maybe some feature settings are invalid for your processor, causing a
#GP.

BR,

-- 
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)


Re: Problem booting when using Nuttx on Qemu x86_64

2020-07-03 Thread Yang Chung Fan
Hi,

Good to hear more people are trying our NuttX port on x86_64.

> I am trying to run nuttx in a Qemu x86_64 VM but seem to be hitting some
> issue during early boot (as far as I can tell). I am using commit
> deb3b13759fe08 ("Udate TODO List") from yesterday.

That's quite new.
I was recently working with release 9.0 on the Jailhouse hypervisor
without issues.

I will try to take a look.

> Following the instructions for the board qemu-intel64, I configured and
> built the nuttx.elf using -
>
> ./tools/configure.sh qemu-intel64:nsh
> make

Although I did the x86_64 port of NuttX, I have zero experience with
the nsh port; I have only run the ostest.

But as you stated below, it doesn't seem like an application-related
problem; it looks more architecture-related.

> Seeing the output on the serial console, it feels like the cpu gets into
> a reboot loop with BIOS messages followed by grub loading the nuttx
> binary.
> ...
> Single stepping through the early startup code
> (arch/x86_64/src/intel64/intel64_head.S) suggests an unhandled exception
> is taken at the start of __nxstart. Considering the code has a comment
> indicating that it's executing from high memory, I am guessing an issue
> with the memory setup before getting here is causing an issue.

The processor has triple-faulted and reset itself.
That's why it is looping.
I suspect a #GP or a #PF happening during boot.

> I am using qemu[0] and gcc[1] shipped with Debian Bullseye (testing).

These shouldn't be an issue.

I have seen similar looping problems during porting, mainly due to an
improperly set up GDT causing a #GP, or a bad page table causing a #PF.

I suppose you didn't modify the code, so this seems strange to me.

BR,

-- 
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)


Re: Duplicate task_spawn()

2020-05-30 Thread Yang Chung Fan
>
> Any static should be conditioned on CONFIG_LIB_SYSCALL for the
> task_spawn() version in sched/task/task_spawn.c, however, that is not
> really necessary either because that version is not linked into the same
> binary as is the version in libs/libc/spawn.
>
> I suppose a user could enable CONFIG_LIB_SYSCALL in a FLAT build.  Then
> both would be linked into the same blob, but that is kind of a useless
> configuration.

Yes, this is what my current config has.

I am trying to port my cRTOS design to a newer NuttX.

While running in a FLAT build, I load Linux binaries (System V ABI),
hook their system calls, and redirect them to NuttX APIs.

Of course, I could do this in a kernel build, but I want to keep things simple.

I agree that using system calls in a FLAT build is quite meaningless.

However, I think we could somehow keep this flexibility to exploit it
for binary compatibility?

-- 
Yang

2020年5月31日(日) 1:07 Gregory Nutt :
>
>
> > Any static should be conditioned on CONFIG_LIB_SYSCALL for the
> > task_spawn() version in sched/task/task_spawn.c, however, that is not
> > really necessary either because that version is not linked into the
> > same binary as is the version in libs/libc/spawn.
> >
> > I suppose a user could enable CONFIG_LIB_SYSCALL in a FLAT build.
> > Then both would be linked into the same blob, but that is kind of a
> > useless configuration.
> >
> So, I think the preferred fix would simply be to make CONFIG_LIB_SYSCALL
> dependent on !CONFIG_BUILD_FLAT in the Kconfig file.
>
>
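
For what it's worth, the quoted suggestion would amount to a one-line
dependency in the Kconfig entry.  A sketch only: the prompt and help
text below are assumptions, not the real NuttX Kconfig source:

```
config LIB_SYSCALL
	bool "Support system calls"
	default n
	depends on !BUILD_FLAT
	---help---
		Build the system call layer.  Making it depend on
		!BUILD_FLAT keeps the sched/ and libs/libc versions of
		task_spawn() from ever being linked into the same blob.
```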


-- 
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)
Member of Softlab, Tsukuba University , Japan
Email: sonic.tw...@gmail.com


Duplicate task_spawn()

2020-05-30 Thread Yang Chung Fan
Hi,

Has anyone else noticed that, when building with CONFIG_LIB_SYSCALL=y,
the linker is unhappy about duplicate task_spawn() symbols?

One of them is in sched/ and the other in libs/libc.

-- 
Yang


Re: intel64

2020-05-19 Thread Yang Chung Fan
2020年5月18日(月) 14:47 Brennan Ashton :
>
> On Sun, May 17, 2020, 10:36 PM Takashi Yamamoto
>  wrote:
>
> > hi,
> >
> > this is just a curious question.
> > why do we use the name "intel64" for qemu things?
> > i thought it was from qemu, but qemu seems to use x86_64 or amd64.
> > i think "amd64" is more commonly used as it's from amd.
> > do we want to help intel marketing for some reasons?
> >
>
> I think it's mostly because the initial port was done against Intel
> hardware. I believe that there are some dependencies directly on Intel
> features as well right now (those could go away). x86_64 is probably most
> appropriate since it covers amd64 and em64t (Intel).
>
> There are also bits identified as QEMU that are more generic that should
> probably be moved.
>
> Personally I would be in favor of just leaving for things to settle on the
> port a bit (a few of us have patches in the works) and then see what feels
> right. But if someone wants to take it on now I would not be opposed to it,
> and would review and test.
>
> --Brennan

I did it.

Because I ported it against a Xeon 2650 v4, the original port includes
some features that are Intel-only.
However, I am still cleaning that port up, removing GPL code, etc.
I have only submitted the PR for a flat-memory version of the port.
I guess this one is free of Intel-only features.
If anyone submits a PR to change the name to amd64 or x86_64, I won't
oppose it.

Regarding QEMU, I named it that because I have only tested on QEMU and Bochs.
QEMU is easier to use and available to everyone.
I found it quite difficult to apply the ARM ecosystem idea to x86,
which has very different boards.
Perhaps "generic" is a more appropriate name?

--
Yang


Re: Possibility of nested signals.

2020-02-28 Thread Yang Chung Fan
Hi Greg,

>
> Where do you see a problem in this?
>

I found that two tasks can race in calling up_schedule_sigaction to
set sigdeliver in the TCB.
In up_schedule_sigaction, a critical section should prevent this from
happening.
However, if the calling task suspends voluntarily, e.g. by calling
syslog, another task can enter the critical section and race with it.
The first task moves the RIP in the register profile to the TCB and
sets the RIP in the register profile to sigdeliver.
The second task moves the RIP in the register profile (which is now
sigdeliver) to the TCB and again sets the RIP in the register profile
to sigdeliver.
The original RIP saved in the TCB is overwritten and destroyed.

Currently, with all voluntary schedule points removed from the
critical section, this problem no longer exists.

-- 
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)
Member of Softlab, Tsukuba University , Japan
Email: sonic.tw...@gmail.com


Possibility of nested signals.

2020-02-27 Thread Yang Chung Fan
Hi,

I noticed that the signal sending procedure in both the armv7-m and
x86 ports of NuttX cannot handle nested signals.

I am wondering about the up_sigdeliver functions.
A task A, being signaled, can be switched out because of:
 1. calling syslog via sinfo
 2. interrupts

A newly switched-in task might send a second signal to task A.
In such a case, the saved instruction pointer value in the TCB might
get overwritten, causing an incorrect execution path when returning
from the previous signal.

Am I correct, or did I miss something here?

--
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)
Member of Softlab, Tsukuba University , Japan
Email: sonic.tw...@gmail.com


Re: A x86-64 port of nuttx with Linux compatibility layer

2020-02-20 Thread Yang Chung Fan
It seems I have finally got my Apache mailing list setup working properly.

I think I need to clarify the current directory layout in the
repository.

The idea was to minimize the modifications to both Linux and NuttX
while providing the ability to run them in parallel and benefit from
each other.
Therefore, I used the Jailhouse hypervisor, which is a simple
hardware-assisted partitioning hypervisor.
Only a bootloader is needed to port an existing RTOS to Jailhouse if
the architecture is already supported by the RTOS.

About the NuttX directories:
 * I tried not to modify the architecture-independent part of NuttX,
but I may have done so in the early days of development, when I was
using a dynamic linking interface instead of a system call interface.
 * arch/x86_64: contains my x86_64 port of NuttX. I tried to maintain
the same directory structure as the i486 port, but for x86_64 I found
it difficult to separate the common, intel64, and broadwell
architecture parts cleanly.
 * arch/x86_64/src/linux_subsystem: this is where the Linux
compatibility layer lives, and where licensing can be painful.
 * config/jailhouse-intel64: this directory contains the drivers and
bootloader related to Jailhouse. I tried to make a clean split between
the Jailhouse-related components and the pure x86_64 part, but I may
have failed somewhere.

I do think that we can:
 1. Find and revert any unnecessary changes to the NuttX kernel.
 2. Clean up the arch/x86_64 code, decouple it from Jailhouse
completely, and provide a QEMU port.
 3. Work out a development environment based on QEMU instead of
Jailhouse, so that everyone can try it (setting up Jailhouse is really
painful).
 4. Continue on to integrating the Linux compatibility layer and
Linux binary loader after these are properly done.


Yang



--
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)
Member of Softlab, Tsukuba University , Japan
Email: sonic.tw...@gmail.com


A x86-64 port of nuttx with Linux compatibility layer

2020-02-18 Thread Yang Chung Fan
Hi,

I have created an x86-64 port of NuttX (tested on a Xeon 2650 v4),
along with a Linux compatibility layer, for my research.

I would like to contribute the research artifacts to upstream NuttX.
I host the current code on GitHub:
https://github.com/sonicyang/cRTOS

The basic idea of my work is to run NuttX and Linux side by side under
Jailhouse (or any other real-time hypervisor).
NuttX executes a process and handles its system calls.
If a system call is not implemented, it is delegated to Linux: let
Linux do the work for you.
(Of course, due to the semantics of system calls, some must be handled
in NuttX, e.g. clone(2).)

Feature highlights:
 * x86-64 port with MMU support
 * Jailhouse hypervisor support
 * Extended nanosecond clock accuracy
 * Linux compatibility layer with system calls
 * fork/clone support for processes
 * Remote exec for loading Linux ABI binaries
 * Remote system call support for system calls not implemented locally

Some problems and questions about doing this.
Questions:
 * Besides the code formatting, is there anything else I should do to my code?
 * Any suggestions on how to submit patches for such a large piece of
work?
 * I have modified some internal parts of NuttX, e.g. the 16550 driver
and gran_allocator, to extend their features and fix bugs. Should I
submit those separately?

Problems:
 * There is code taken from Linux (GPL) headers and the Jailhouse
(GPL/MIT) repo. I do think this causes a license conflict. (I have
limited the GPL-affected files to one; as I remember, I only used
MIT-licensed code from Jailhouse.)
 * The implementation is ugly. The x86-64 port is tightly coupled with
the Linux compatibility layer and MMU support.
 * Only Jailhouse is supported. (Theoretically, because Jailhouse is a
thin partitioning hypervisor, this should also work directly on the
actual hardware.) Someone needs to write a bootloader or multiboot2
support for it.
 * It is based on NuttX 7.27, which is a bit old.
 * The commit log is ugly and long, spanning the whole of 2019.

I hope someone can give me guidance on upstreaming this work.
Any suggestions are welcome.

The research result is being published as a paper at ACM VEE 2020,
which will be held next month in Switzerland.
DOI: https://doi.org/10.1145/3381052.3381323 (should be active after
the conference, 3/17/2020).
I will find somewhere legal to host the paper for everyone.
In short, this method performs better than most existing Linux
real-time solutions while not losing compatibility with Linux.

--
Yang Chung Fan (楊宗凡) (ヤン ゾン ファン)
Member of Softlab, Tsukuba University , Japan
Email: sonic.tw...@gmail.com