Re: NXView

2020-06-12 Thread Gregory Nutt

Hi, again,

I suppose the first question should be, "Is the FT245RL the correct 
choice?"  After all, it is only 8 bits wide and only USB 2.0.  That 
could limit the amount of instrumentation data passed to the host 
because of data overruns, or it could alter the real-time behavior of 
the target.  Ideally, the instrumentation should involve minimal 
overhead, and the behavior of the real-time system should be the same with 
or without the instrumentation enabled.  Otherwise, such a tool would 
not be a proper diagnostic tool.


I considered some PCIe parallel data acquisition devices, but did not 
see a good match.  PCIe would be hot, however.


I also looked at the FTDI FT60xx devices, but these seem so camera focused 
that it was not completely clear to me whether they would be usable.  But 
I am mostly a software guy.  Perhaps someone out there with better 
knowledge of these devices could help out.


The older FT600x, for example, has a 16-bit-wide FIFO (pretty much 
optimal for most MCUs) and a USB 3.0 interface to the host PC.  
How to use such a camera-oriented device is less obvious to me than the more 
general FT245RL.  Perhaps that is only because of the camera-oriented 
language used in the data sheets?


If you know something about these options, I would like to hear from you.

Greg


On 6/12/2020 6:02 PM, Gregory Nutt wrote:


Hi, List,

I have been contemplating a NuttX-based Open Source project today and 
I am interested in seeing if anyone is willing to participate or, even 
if not, if anyone has any insights or recommendations that could be 
useful.


Basically, I am thinking of a NuttX tool to monitor the internal state 
of the OS.  This would be conceptually similar to Segger SystemView or 
Wind River WindView:  a basic host graphical tool that exposes the 
internal behavior of tasks and threads within the OS in a "logic 
analyzer" format:


 1. Horizontal rows would indicate the state of each task, running
    or blocked (and, if blocked, why).
 2. Each row arranged vertically by task/thread priority so that the
    highest priority task is the first row and the lowest priority
    task is the bottom row.
 3. Annotations would indicate events:  interrupts, semaphore operations,
    spinlock operations, etc.
 4. This display should be realtime (with a lag, of course) and should
scroll to the right as time elapses.  It should be possible to
capture and save the event data for subsequent offline analysis.

Additional analytic displays could be considered in the future.

The hardware I am considering for this is an inexpensive 
FT245RL board which connects to the target via an 8-bit parallel 
interface and to the host via a USB 2.0 interface.  The target side is 
essentially a FIFO:  OS events would be written to the FT245RL FIFO 
and transferred to the host via USB 2.0.


The OS instrumentation is already in place to accomplish this. This is 
controlled by CONFIG_SCHED_INSTRUMENTATION and related configuration 
options that you can see in sched/Kconfig.  The target side effort is 
then:


1. Configure the parallel interface to the FT245RL's FIFO.  This would 
likely be FSMC for an initial STM32 implementation.
2. Develop the simple logic to encode the instrumented events and to 
pass them to the host via that FIFO (see the sketch below).
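For illustration, the target-side "encode and push" step could look something 
like the following minimal sketch.  The FSMC address, the absence of any flow 
control, and the raw byte layout are placeholders, not a real board 
configuration; the actual note structures are defined in 
include/nuttx/sched_note.h and the FSMC mapping is board-specific.

    #include <stdint.h>
    #include <stddef.h>

    /* Assumed: the FT245RL data bus is mapped into an FSMC bank at this address */

    #define FT245_FIFO ((volatile uint8_t *)0x60000000)

    /* Push one encoded note, byte by byte, into the FT245RL transmit FIFO */

    static void ft245_send_note(const uint8_t *note, size_t len)
    {
      size_t i;

      for (i = 0; i < len; i++)
        {
          *FT245_FIFO = note[i];  /* Each bus write clocks one byte into the FIFO */
        }
    }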


Drivers and configuration tools for the host side are already 
available from the FTDI website.  Becoming familiar with these tools 
and integrating the host-side interface would be another task.


The final task, the one that is the most daunting to me, is the 
development of the substantial host-side graphics application that 
would receive the OS instrumentation data and produce the graphic 
presentation.  I would think that such an application would be a C++ 
development and would be usable both on Windows and Linux.


I believe that such a tool would be a valuable addition to the NuttX 
ecology.  I think that such a tool would move NuttX from being a basic, 
primitive open source OS project into full competition with 
commercial products (in terms of features and usage... we are not 
actually in competition with anyone).


Is this something that would be interesting to anyone?  Does anyone 
have any input or advice?  If there is any interest I think that we 
should create a small development team to make this happen.  If that 
team is small enough, I would be happy to provide common development 
hardware (STM32 and FT245RL boards from China, of course).


What say ye?

Greg





RE: NXView

2020-06-13 Thread David Sidrane
Why not use the ETM?

-Original Message-
From: Gregory Nutt [mailto:spudan...@gmail.com]
Sent: Friday, June 12, 2020 5:03 PM
To: dev@nuttx.apache.org
Subject: NXView

Hi, List,

I have been contemplating a NuttX-based Open Source project today and I
am interested in seeing if anyone is willing to participate or, even if
not, if anyone has any insights or recommendations that could be useful.

Basically, I am thinking of a NuttX tool to monitor the internal state
of the OS.  This would be conceptually similar to Segger SystemView or
Wind River WindView:  A host basic graphical tool that exposes the
internal behavior of tasks and threads within the OS in a "logic
analyzer format":

 1. Horizontal rows would be indicate the state of each task, running or
block (and if blocked why/)
 2. Each arranged vertically by task/thread priority so that the highest
priority task is the first row and the lowest priority task is the
bottom row.
 3. Annotation to indicated events:  Interrupts, semaphore operations,
spinlock operations, etc.
 4. This display should be realtime (with a lag, of course) and should
scroll to the right as time elapses.  It should be possible to
capture and save the event data for subsequent offline analysis.

Additional analytic displays could be considered in the future.

The hardware I am thinking to accomplish this would be an inexpensive
FT245RL board which connects to the target via an 8-bit parallel
interface and to the host via a USB 2.0 interface. The target side is
essentially a FIFO:  OS events would be written to the FT245RL FIFO and
transferred to the host via USB 2.0.

The OS instrumentation is already in place to accomplish this. This is
controlled by CONFIG_SCHED_INSTRUMENTATION and related configuration
options that you can see in sched/Kconfig.  The target side effort is then:

1. Configure the parallel interface to the FT245RL's FIFO.  This would
likely be FSMC for an initial STM32 implementation.
2. Develop the simple logic to encode the instrumented events and to
pass them to host visa that FIFO.

Drivers and configuration tools for the host side are already available
from the FTDI website.  Becoming familiar with these tools and
integrating the host-side interface would be another task.

The final task, the one that is the most daunting to me, is the
development of the substantial host-side graphics application that would
receive the OS instrumentation data and produce the graphic
presentation.  I would think that such an application would be a C++
development and would be usable both on Windows and Linux.

I believe that such a tool would be a valuable addition to the NuttX
ecology.  I think that such a tool would move NuttX from a basic,
primitive open source OS project and into full competition with
commercial products (in terms of features and usage... we are not
actually in competition with anyone).

Is this something that would be interesting to anyone?  Does anyone have
any input or advice?  If there is any interest I think that we should
create a small development team to make this happen.  If that team is
small enough, I would be happy to provide common development hardware
(STM32 and FT245RL boards from China, or course).

What say ye?

Greg


Re: NXView

2020-06-14 Thread Erdem MEYDANLI
>
I would think that such an application would be a C++ development and would
be usable both on Windows and Linux.
<<

Wouldn't it be nice to have an application that uses HTML5 and CSS3 for
the UI and communicates with the low-level parts of the app via WebSockets?
(Less C++, but some additional JS.)

>
I believe that such a tool would be a valuable addition to the NuttX
ecology.
<

I do agree. That would be an invaluable tool. BTW, speaking of
particular hardware like the FT245 gives me an idea. Well, this might sound a
little bit irrelevant, but what about taking it a step further and
developing a mini-SDK (NX-SDK) like the one Zephyr has? Though not yet a very
active contributor, I am an enthusiastic follower of the NuttX project, and I
think that the project's entry barrier is still not that low.
Onboarding takes some time. Having a custom SDK that includes not only a
tracer but also other helper tools (e.g., a flasher/debugger for the
supported boards) would ease the onboarding process for newcomers.
Finally, rather than relying on particular ICs, would it be more
practical to build such a tool by creating a (custom) fork of OpenOCD?

Erdem

Gregory Nutt , 13 Haz 2020 Cmt, 02:02 tarihinde şunu
yazdı:

> Hi, List,
>
> I have been contemplating a NuttX-based Open Source project today and I
> am interested in seeing if anyone is willing to participate or, even if
> not, if anyone has any insights or recommendations that could be useful.
>
> Basically, I am thinking of a NuttX tool to monitor the internal state
> of the OS.  This would be conceptually similar to Segger SystemView or
> Wind River WindView:  A host basic graphical tool that exposes the
> internal behavior of tasks and threads within the OS in a "logic
> analyzer format":
>
>  1. Horizontal rows would be indicate the state of each task, running or
> block (and if blocked why/)
>  2. Each arranged vertically by task/thread priority so that the highest
> priority task is the first row and the lowest priority task is the
> bottom row.
>  3. Annotation to indicated events:  Interrupts, semaphore operations,
> spinlock operations, etc.
>  4. This display should be realtime (with a lag, of course) and should
> scroll to the right as time elapses.  It should be possible to
> capture and save the event data for subsequent offline analysis.
>
> Additional analytic displays could be considered in the future.
>
> The hardware I am thinking to accomplish this would be an inexpensive
> FT245RL board which connects to the target via an 8-bit parallel
> interface and to the host via a USB 2.0 interface. The target side is
> essentially a FIFO:  OS events would be written to the FT245RL FIFO and
> transferred to the host via USB 2.0.
>
> The OS instrumentation is already in place to accomplish this. This is
> controlled by CONFIG_SCHED_INSTRUMENTATION and related configuration
> options that you can see in sched/Kconfig.  The target side effort is then:
>
> 1. Configure the parallel interface to the FT245RL's FIFO.  This would
> likely be FSMC for an initial STM32 implementation.
> 2. Develop the simple logic to encode the instrumented events and to
> pass them to host visa that FIFO.
>
> Drivers and configuration tools for the host side are already available
> from the FTDI website.  Becoming familiar with these tools and
> integrating the host-side interface would be another task.
>
> The final task, the one that is the most daunting to me, is the
> development of the substantial host-side graphics application that would
> receive the OS instrumentation data and produce the graphic
> presentation.  I would think that such an application would be a C++
> development and would be usable both on Windows and Linux.
>
> I believe that such a tool would be a valuable addition to the NuttX
> ecology.  I think that such a tool would move NuttX from a basic,
> primitive open source OS project and into full competition with
> commercial products (in terms of features and usage... we are not
> actually in competition with anyone).
>
> Is this something that would be interesting to anyone?  Does anyone have
> any input or advice?  If there is any interest I think that we should
> create a small development team to make this happen.  If that team is
> small enough, I would be happy to provide common development hardware
> (STM32 and FT245RL boards from China, or course).
>
> What say ye?
>
> Greg
>
>


Re: NXView

2020-06-14 Thread Alan Carvalho de Assis
Hi Erdem,

On 6/14/20, Erdem MEYDANLI  wrote:
sic
>
> I do agree. That would be such an invaluable tool. BTW, speaking of
> particular hardware like FT245 gives me an idea. Well, this might sound a
> little bit irrelevant, but what about taking it a step further and
> developing a mini-SDK (NX-SDK) as the one Zephyr has? Still not as a very
> active contributor, but an enthusiastic follower of the NuttX project, I
> think that the entry barrier of the project is still not that low.
> Onboarding takes some time. Having a custom SDK that includes not only a
> tracer, but also other helper tools (e.g.,  flasher/debugger for the
> supported boards) would facilitate the adaptation process of the newcomers.
> Finally, rather than relying on some particular ICs, would it be more
> practical to build such a tool by creating a (custom) fork of OpenOCD?
>

In the past, NuttX had a Buildroot that was able to generate
the toolchain, etc. It is still around; David Alessio fixed it
some time ago.

At first glance, the SDK idea appears good, but there are many issues.

We have many architectures; we support MCUs/CPUs from 8- to 64-bit
(Zephyr and the others are 32-bit only and mainly ARM, RISC-V and Xtensa).
I could go on citing other issues...

Currently, at least on Linux (Debian, Ubuntu, ...) and the Ubuntu shell on
Windows, setup is very easy, just an apt/apt-get away. Even the
kconfig-frontends package is already there; you don't need to compile it
anymore. I think the main issue is that the packaged OpenOCD version is too
old. But creating a fork of OpenOCD is not a good idea.

OpenOCD is a project that deserves more attention. It is like SSH:
many people and companies use it, and people only notice it was a "backbone"
when it breaks.
The last OpenOCD release was 3 years ago and I don't see any move toward a
new release. If they released a new version now, it might take about 2 years
to get it officially into the Linux distros.

I heard that OpenOCD was going to become part of the Linux Foundation, but
nothing has happened yet. Let's see!

BR,

Alan


Re: NXView

2020-06-14 Thread Gregory Nutt




In the past NuttX used to have a Buildroot that was able to generate
the toolchain, etc. It is still around, some time ago David Alessio
fixed it.


It generates more than that.  It generates most of the tools that you 
need including kconfig-frontends, genromfs, etc.  And it is easily 
extended to include more host tools.  It is 90% of an SDK, depending on 
how you define an SDK.


It can never be a part of the Apache NuttX project, however, since most 
of the tools are GPL.  If someone wants to take over the Buildroot and 
expand on it, we can do that.






Re: NXView

2020-06-14 Thread Erdem MEYDANLI
Hi Alan,


You are right. NuttX has a more comprehensive scope. For sure, what I
proposed requires a lot of work.


With or without OpenOCD, what I meant by SDK was a combination of an (actively
maintained) buildroot and a meta-tool like *West* in Zephyr.


Those who haven't heard of Zephyr's meta-tool might want to look at this:
https://docs.zephyrproject.org/latest/guides/west/build-flash-debug.html


I assume that the 'SDK' solves all dependency issues, and the meta-tool
offers the functionality below:


nxtool flash

nxtool debug

nxtool monitor (imagine this initiates @Greg's idea as part of its
functionality.)


People who are already familiar with RTOSes and MCU development can undoubtedly
follow the current installation steps quickly. Maybe they have already
established some mini-automation for their development process. However, for
beginners, the installation can still be a pain in the neck.


So, this discussion is about the yet-unborn NXView, and I don't want to ramble
on about it. I find the NXView idea very beneficial. And, referring to
Greg's paragraph below, having a meta-tool like the one I tried to explain
above might add significant value as well.

>>
I believe that such a tool would be a valuable addition to the NuttX
ecology.  I think that such a tool would move NuttX from a basic,
primitive open source OS project and into full competition with
commercial products (in terms of features and usage... we are not
actually in competition with anyone).
<<


Alan Carvalho de Assis , 14 Haz 2020 Paz, 18:51
tarihinde şunu yazdı:

> Hi Erdem,
>
> On 6/14/20, Erdem MEYDANLI  wrote:
> sic
> >
> > I do agree. That would be such an invaluable tool. BTW, speaking of
> > particular hardware like FT245 gives me an idea. Well, this might sound a
> > little bit irrelevant, but what about taking it a step further and
> > developing a mini-SDK (NX-SDK) as the one Zephyr has? Still not as a very
> > active contributor, but an enthusiastic follower of the NuttX project, I
> > think that the entry barrier of the project is still not that low.
> > Onboarding takes some time. Having a custom SDK that includes not only a
> > tracer, but also other helper tools (e.g.,  flasher/debugger for the
> > supported boards) would facilitate the adaptation process of the
> newcomers.
> > Finally, rather than relying on some particular ICs, would it be more
> > practical to build such a tool by creating a (custom) fork of OpenOCD?
> >
>
> In the past NuttX used to have a Buildroot that was able to generate
> the toolchain, etc. It is still around, some time ago David Alessio
> fixed it.
>
> At first place the SDK idea appears good, but there are many issues.
>
> We have many architectures, we support MCU/CPU from 8 to 64-bit
> (Zephyr and others are 32-bit only and mainly ARM, RISC-V and Xtensa).
> I could go on citing other issues...
>
> Currently at least on Linux (Debian, Ubuntu, ...) and Ubuntu Shell on
> Windows it is very easy, just some apt/apt-get away. Even the
> kfrontend is already there, you don't need to compile it anymore. I
> think the main issue is that OpenOCD version is too old. But creating
> a fork of OpenOCD is not a good idea.
>
> OpenOCD is a project that deserves more attention, it is like the SSH,
> many people/companies uses it and people only not it was a "backbone"
> when it broke.
> The last OpenOCD release was 3 years ago and I don't see any move to a
> new release. If they release a new version now, maybe it will delay
> about 2 years to get it officially on Linux distros.
>
> I heard that OpenOCD was going to be part of Linux Foundation, but
> nothing happened yet. Let see!
>
> BR,
>
> Alan
>


Re: NXView

2020-06-14 Thread Alan Carvalho de Assis
Hi Erdem,

Right, I understood your idea!

In fact Maciej already created it, see:

https://hackaday.io/project/94372-upm-nuttx-project-manager

https://gitlab.com/w8jcik/upm

Did you try it?

BR,

Alan

On 6/14/20, Erdem MEYDANLI  wrote:
> Hi Alan,
>
>
> You are right. NuttX has a more comprehensive scope. For sure, what I
> proposed requires a lot of work.
>
>
> With or without OpenOCD, what I meant by SDK was a combination of (actively
> maintained) buildroot and a meta-tool like *West *in Zephyr.
>
>
> Those who haven't heard of Zephyr's meta-tool might want to look at this:
> https://docs.zephyrproject.org/latest/guides/west/build-flash-debug.html
>
>
> I assume that the 'SDK' solves all dependency issues, and the meta-tool
> offers the functionality below:
>
>
> nxtool flash
>
> nxtool debug
>
> nxtool monitor (imagine this initiates @Greg's idea as part of its
> functionality.)
>
>
> People who are already familiar with RTOSes and MCU development undoubtedly
> follow the current installation steps quickly. Maybe they already
> established a mini-automation for their development process. However, when
> it comes to beginners, the installation can still be a pain in the neck.
>
>
> So, this discussion is about the unborn NXView, and I don't want to ramble
> on about it. I find the NXView idea very beneficial. And referring to
> Greg's paragraph below, having a meta-tool that I try to explain above
> might add significant value as well.
>
>>>
> I believe that such a tool would be a valuable addition to the NuttX
> ecology.  I think that such a tool would move NuttX from a basic,
> primitive open source OS project and into full competition with
> commercial products (in terms of features and usage... we are not
> actually in competition with anyone).
> <<
>
>
> Alan Carvalho de Assis , 14 Haz 2020 Paz, 18:51
> tarihinde şunu yazdı:
>
>> Hi Erdem,
>>
>> On 6/14/20, Erdem MEYDANLI  wrote:
>> sic
>> >
>> > I do agree. That would be such an invaluable tool. BTW, speaking of
>> > particular hardware like FT245 gives me an idea. Well, this might sound
>> > a
>> > little bit irrelevant, but what about taking it a step further and
>> > developing a mini-SDK (NX-SDK) as the one Zephyr has? Still not as a
>> > very
>> > active contributor, but an enthusiastic follower of the NuttX project,
>> > I
>> > think that the entry barrier of the project is still not that low.
>> > Onboarding takes some time. Having a custom SDK that includes not only
>> > a
>> > tracer, but also other helper tools (e.g.,  flasher/debugger for the
>> > supported boards) would facilitate the adaptation process of the
>> newcomers.
>> > Finally, rather than relying on some particular ICs, would it be more
>> > practical to build such a tool by creating a (custom) fork of OpenOCD?
>> >
>>
>> In the past NuttX used to have a Buildroot that was able to generate
>> the toolchain, etc. It is still around, some time ago David Alessio
>> fixed it.
>>
>> At first place the SDK idea appears good, but there are many issues.
>>
>> We have many architectures, we support MCU/CPU from 8 to 64-bit
>> (Zephyr and others are 32-bit only and mainly ARM, RISC-V and Xtensa).
>> I could go on citing other issues...
>>
>> Currently at least on Linux (Debian, Ubuntu, ...) and Ubuntu Shell on
>> Windows it is very easy, just some apt/apt-get away. Even the
>> kfrontend is already there, you don't need to compile it anymore. I
>> think the main issue is that OpenOCD version is too old. But creating
>> a fork of OpenOCD is not a good idea.
>>
>> OpenOCD is a project that deserves more attention, it is like the SSH,
>> many people/companies uses it and people only not it was a "backbone"
>> when it broke.
>> The last OpenOCD release was 3 years ago and I don't see any move to a
>> new release. If they release a new version now, maybe it will delay
>> about 2 years to get it officially on Linux distros.
>>
>> I heard that OpenOCD was going to be part of Linux Foundation, but
>> nothing happened yet. Let see!
>>
>> BR,
>>
>> Alan
>>
>


Re: NXView

2020-06-15 Thread Erdem MEYDANLI
Hi Alan,

Thanks for sharing this. Indeed, I wasn't aware of such a tool. I'll
try it out.

BR,
Erdem

Alan Carvalho de Assis , 14 Haz 2020 Paz, 21:56
tarihinde şunu yazdı:

> Hi Erdem,
>
> Right, I understood your idea!
>
> In fact Maciej already created it, see:
>
> https://hackaday.io/project/94372-upm-nuttx-project-manager
>
> https://gitlab.com/w8jcik/upm
>
> Did you try it?
>
> BR,
>
> Alan
>
> On 6/14/20, Erdem MEYDANLI  wrote:
> > Hi Alan,
> >
> >
> > You are right. NuttX has a more comprehensive scope. For sure, what I
> > proposed requires a lot of work.
> >
> >
> > With or without OpenOCD, what I meant by SDK was a combination of
> (actively
> > maintained) buildroot and a meta-tool like *West *in Zephyr.
> >
> >
> > Those who haven't heard of Zephyr's meta-tool might want to look at this:
> > https://docs.zephyrproject.org/latest/guides/west/build-flash-debug.html
> >
> >
> > I assume that the 'SDK' solves all dependency issues, and the meta-tool
> > offers the functionality below:
> >
> >
> > nxtool flash
> >
> > nxtool debug
> >
> > nxtool monitor (imagine this initiates @Greg's idea as part of its
> > functionality.)
> >
> >
> > People who are already familiar with RTOSes and MCU development
> undoubtedly
> > follow the current installation steps quickly. Maybe they already
> > established a mini-automation for their development process. However,
> when
> > it comes to beginners, the installation can still be a pain in the neck.
> >
> >
> > So, this discussion is about the unborn NXView, and I don't want to
> ramble
> > on about it. I find the NXView idea very beneficial. And referring to
> > Greg's paragraph below, having a meta-tool that I try to explain above
> > might add significant value as well.
> >
> >>>
> > I believe that such a tool would be a valuable addition to the NuttX
> > ecology.  I think that such a tool would move NuttX from a basic,
> > primitive open source OS project and into full competition with
> > commercial products (in terms of features and usage... we are not
> > actually in competition with anyone).
> > <<
> >
> >
> > Alan Carvalho de Assis , 14 Haz 2020 Paz, 18:51
> > tarihinde şunu yazdı:
> >
> >> Hi Erdem,
> >>
> >> On 6/14/20, Erdem MEYDANLI  wrote:
> >> sic
> >> >
> >> > I do agree. That would be such an invaluable tool. BTW, speaking of
> >> > particular hardware like FT245 gives me an idea. Well, this might
> sound
> >> > a
> >> > little bit irrelevant, but what about taking it a step further and
> >> > developing a mini-SDK (NX-SDK) as the one Zephyr has? Still not as a
> >> > very
> >> > active contributor, but an enthusiastic follower of the NuttX project,
> >> > I
> >> > think that the entry barrier of the project is still not that low.
> >> > Onboarding takes some time. Having a custom SDK that includes not only
> >> > a
> >> > tracer, but also other helper tools (e.g.,  flasher/debugger for the
> >> > supported boards) would facilitate the adaptation process of the
> >> newcomers.
> >> > Finally, rather than relying on some particular ICs, would it be more
> >> > practical to build such a tool by creating a (custom) fork of OpenOCD?
> >> >
> >>
> >> In the past NuttX used to have a Buildroot that was able to generate
> >> the toolchain, etc. It is still around, some time ago David Alessio
> >> fixed it.
> >>
> >> At first place the SDK idea appears good, but there are many issues.
> >>
> >> We have many architectures, we support MCU/CPU from 8 to 64-bit
> >> (Zephyr and others are 32-bit only and mainly ARM, RISC-V and Xtensa).
> >> I could go on citing other issues...
> >>
> >> Currently at least on Linux (Debian, Ubuntu, ...) and Ubuntu Shell on
> >> Windows it is very easy, just some apt/apt-get away. Even the
> >> kfrontend is already there, you don't need to compile it anymore. I
> >> think the main issue is that OpenOCD version is too old. But creating
> >> a fork of OpenOCD is not a good idea.
> >>
> >> OpenOCD is a project that deserves more attention, it is like the SSH,
> >> many people/companies uses it and people only not it was a "backbone"
> >> when it broke.
> >> The last OpenOCD release was 3 years ago and I don't see any move to a
> >> new release. If they release a new version now, maybe it will delay
> >> about 2 years to get it officially on Linux distros.
> >>
> >> I heard that OpenOCD was going to be part of Linux Foundation, but
> >> nothing happened yet. Let see!
> >>
> >> BR,
> >>
> >> Alan
> >>
> >
>


RE: NXView

2020-06-16 Thread Nakamura, Yuuichi (Sony)
Hi, Greg.

I am developing a feature to collect NuttX internal task events and dump 
the data in the Linux ftrace format.
The dumped data can be displayed graphically by using "TraceCompass".
It extends the NuttX sched note APIs to capture enter/leave events for the 
interrupt handlers and system calls.

The details are described at:
https://github.com/YuuichiNakamura/nuttx-task-tracer-doc

And the latest implementation is available at:
https://github.com/YuuichiNakamura/incubator-nuttx
https://github.com/YuuichiNakamura/incubator-nuttx-apps
in "feature/task-tracer" branch.

I hope this is helpful for your ideas.

Thanks,
Yuuichi Nakamura

> -Original Message-
> From: Gregory Nutt 
> Sent: Saturday, June 13, 2020 9:03 AM
> To: dev@nuttx.apache.org
> Subject: NXView
> 
> Hi, List,
> 
> I have been contemplating a NuttX-based Open Source project today and I am
> interested in seeing if anyone is willing to participate or, even if not, if 
> anyone has
> any insights or recommendations that could be useful.
> 
> Basically, I am thinking of a NuttX tool to monitor the internal state of the
> OS.  This would be conceptually similar to Segger SystemView or Wind River
> WindView:  A host basic graphical tool that exposes the internal behavior of
> tasks and threads within the OS in a "logic analyzer format":
> 
>  1. Horizontal rows would be indicate the state of each task, running or
> block (and if blocked why/)
>  2. Each arranged vertically by task/thread priority so that the highest
> priority task is the first row and the lowest priority task is the
> bottom row.
>  3. Annotation to indicated events:  Interrupts, semaphore operations,
> spinlock operations, etc.
>  4. This display should be realtime (with a lag, of course) and should
> scroll to the right as time elapses.  It should be possible to
> capture and save the event data for subsequent offline analysis.
> 
> Additional analytic displays could be considered in the future.
> 
> The hardware I am thinking to accomplish this would be an inexpensive FT245RL
> board which connects to the target via an 8-bit parallel interface and to the 
> host
> via a USB 2.0 interface. The target side is essentially a FIFO:  OS events 
> would be
> written to the FT245RL FIFO and transferred to the host via USB 2.0.
> 
> The OS instrumentation is already in place to accomplish this. This is 
> controlled
> by CONFIG_SCHED_INSTRUMENTATION and related configuration options that
> you can see in sched/Kconfig.  The target side effort is then:
> 
> 1. Configure the parallel interface to the FT245RL's FIFO.  This would likely 
> be
> FSMC for an initial STM32 implementation.
> 2. Develop the simple logic to encode the instrumented events and to pass them
> to host visa that FIFO.
> 
> Drivers and configuration tools for the host side are already available from 
> the
> FTDI website.  Becoming familiar with these tools and integrating the 
> host-side
> interface would be another task.
> 
> The final task, the one that is the most daunting to me, is the development 
> of the
> substantial host-side graphics application that would receive the OS
> instrumentation data and produce the graphic presentation.  I would think that
> such an application would be a C++ development and would be usable both on
> Windows and Linux.
> 
> I believe that such a tool would be a valuable addition to the NuttX ecology. 
>  I
> think that such a tool would move NuttX from a basic, primitive open source OS
> project and into full competition with commercial products (in terms of 
> features
> and usage... we are not actually in competition with anyone).
> 
> Is this something that would be interesting to anyone?  Does anyone have any
> input or advice?  If there is any interest I think that we should create a 
> small
> development team to make this happen.  If that team is small enough, I would 
> be
> happy to provide common development hardware
> (STM32 and FT245RL boards from China, or course).
> 
> What say ye?
> 
> Greg



RE: NXView

2020-06-16 Thread Xiang Xiao
Cool! It's a great idea to generate traces compatible with the Linux ftrace 
format. How about mainlining your work?

> -Original Message-
> From: Nakamura, Yuuichi (Sony) 
> Sent: Tuesday, June 16, 2020 3:49 PM
> To: dev@nuttx.apache.org
> Cc: Nakamura, Yuuichi (Sony) 
> Subject: RE: NXView
> 
> Hi, Greg.
> 
> I am developing the feature to collect the NuttX internal task events and 
> dump the data in Linux ftrace format.
> The dumped data can be displayed graphically by using "TraceCompass".
> It extends the NuttX sched note APIs to get enter/leave event of the 
> interrupt handler and system calls.
> 
> The detail is described at :
> https://github.com/YuuichiNakamura/nuttx-task-tracer-doc
> 
> And the latest implementation is available at :
> https://github.com/YuuichiNakamura/incubator-nuttx
> https://github.com/YuuichiNakamura/incubator-nuttx-apps
> in "feature/task-tracer" branch.
> 
> I'm glad if this is helpful to your ideas.
> 
> Thanks,
> Yuuichi Nakamura
> 
> > -Original Message-
> > From: Gregory Nutt 
> > Sent: Saturday, June 13, 2020 9:03 AM
> > To: dev@nuttx.apache.org
> > Subject: NXView
> >
> > Hi, List,
> >
> > I have been contemplating a NuttX-based Open Source project today and
> > I am interested in seeing if anyone is willing to participate or, even
> > if not, if anyone has any insights or recommendations that could be useful.
> >
> > Basically, I am thinking of a NuttX tool to monitor the internal state
> > of the OS.  This would be conceptually similar to Segger SystemView or
> > Wind River
> > WindView:  A host basic graphical tool that exposes the internal
> > behavior of tasks and threads within the OS in a "logic analyzer format":
> >
> >  1. Horizontal rows would be indicate the state of each task, running or
> > block (and if blocked why/)
> >  2. Each arranged vertically by task/thread priority so that the highest
> > priority task is the first row and the lowest priority task is the
> > bottom row.
> >  3. Annotation to indicated events:  Interrupts, semaphore operations,
> > spinlock operations, etc.
> >  4. This display should be realtime (with a lag, of course) and should
> > scroll to the right as time elapses.  It should be possible to
> > capture and save the event data for subsequent offline analysis.
> >
> > Additional analytic displays could be considered in the future.
> >
> > The hardware I am thinking to accomplish this would be an inexpensive
> > FT245RL board which connects to the target via an 8-bit parallel
> > interface and to the host via a USB 2.0 interface. The target side is
> > essentially a FIFO:  OS events would be written to the FT245RL FIFO and 
> > transferred to the host via USB 2.0.
> >
> > The OS instrumentation is already in place to accomplish this. This is
> > controlled by CONFIG_SCHED_INSTRUMENTATION and related configuration
> > options that you can see in sched/Kconfig.  The target side effort is then:
> >
> > 1. Configure the parallel interface to the FT245RL's FIFO.  This would
> > likely be FSMC for an initial STM32 implementation.
> > 2. Develop the simple logic to encode the instrumented events and to
> > pass them to host visa that FIFO.
> >
> > Drivers and configuration tools for the host side are already
> > available from the FTDI website.  Becoming familiar with these tools
> > and integrating the host-side interface would be another task.
> >
> > The final task, the one that is the most daunting to me, is the
> > development of the substantial host-side graphics application that
> > would receive the OS instrumentation data and produce the graphic
> > presentation.  I would think that such an application would be a C++
> > development and would be usable both on Windows and Linux.
> >
> > I believe that such a tool would be a valuable addition to the NuttX
> > ecology.  I think that such a tool would move NuttX from a basic,
> > primitive open source OS project and into full competition with
> > commercial products (in terms of features and usage... we are not actually 
> > in competition with anyone).
> >
> > Is this something that would be interesting to anyone?  Does anyone
> > have any input or advice?  If there is any interest I think that we
> > should create a small development team to make this happen.  If that
> > team is small enough, I would be happy to provide common development
> > hardware
> > (STM32 and FT245RL boards from China, or course).
> >
> > What say ye?
> >
> > Greg




Re: NXView

2020-06-16 Thread Gregory Nutt



Hi, Greg.

I am developing the feature to collect the NuttX internal task events and dump 
the data in Linux ftrace format.
The dumped data can be displayed graphically by using "TraceCompass".
It extends the NuttX sched note APIs to get enter/leave event of the interrupt 
handler and system calls.

The detail is described at :
https://github.com/YuuichiNakamura/nuttx-task-tracer-doc

And the latest implementation is available at :
https://github.com/YuuichiNakamura/incubator-nuttx
https://github.com/YuuichiNakamura/incubator-nuttx-apps
in "feature/task-tracer" branch.

I'm glad if this is helpful to your ideas.

Thanks,
Yuuichi Nakamura


Nice work.  This seems to replicate a lot of existing logic, at least at 
the level of the block diagram:


1. sched/sched_tracer.c seems to be functionally like
   sched/sched/sched_note.c with CONFIG_SCHED_INSTRUMENTATION_BUFFER
   selected.
2. dev/tracer looks a lot like dev/misc/note_driver.c.
3. The changes to apps/nshlib seem to be a lot like apps/system/sched_note.

One difference is the ftrace output format: apps/system/sched_note just 
outputs the data like debug info to the syslog.  Are there other functional 
differences?  Should these related implementations be merged in some way?


Greg




Re: NXView

2020-06-16 Thread Gregory Nutt





Hi, Greg.

I am developing the feature to collect the NuttX internal task events and dump 
the data in Linux ftrace format.
The dumped data can be displayed graphically by using "TraceCompass".
It extends the NuttX sched note APIs to get enter/leave event of the interrupt 
handler and system calls.

The detail is described at :
https://github.com/YuuichiNakamura/nuttx-task-tracer-doc

And the latest implementation is available at :
https://github.com/YuuichiNakamura/incubator-nuttx
https://github.com/YuuichiNakamura/incubator-nuttx-apps
in "feature/task-tracer" branch.

I'm glad if this is helpful to your ideas.

Thanks,
Yuuichi Nakamura


Nice work.  This seems to replicate a lot of existing logic at least 
at the level of the block diagram:


 1. sched/sched_tracer.c seems to be functionally like
sched/sched/sched_note.c with CONFIG_SCHED_INSTRUMENTATION_BUFFER
selected.
 2. dev/tracer looks a lot like dev/misc/note_driver.c.
 3. The changes to apps/nshlib seem to be a lot like
apps/system/sched_note.

One difference is the ftrace output format: apps/system/sched_note 
just outputs the data like debug info to the syslog.  Are there other 
functional differences?  Should these related implementations be 
merged in some way?


Greg

Regardless of that decision, it would be nice if you could at least 
upstream the interrupt and system call instrumentation. That will be 
needed in any event and we should re-use that logic, not re-invent it.


Thanks,

Greg




Re: NXView

2020-06-16 Thread Gregory Nutt

eg


Regardless of that decision, it would be nice if you could at least 
upstream the interrupt and system call instrumentation. That will be 
needed in any event and we should re-use that logic, not re-invent it.



I did that.  I created PR 1256 with the commit in your name.


RE: NXView

2020-06-16 Thread Nakamura, Yuuichi (Sony)
> I did that.  I created PR 1256 with the commit in your name

Thank you!

I want to mainline the feature by issuing additional PRs for the remaining 
parts.
As you pointed out, the code is based on the existing sched note logic.
The reason for preparing separate files is that the changes for this feature 
affect the existing logic.
Of course, code duplication should be avoided, so if needed I will change it 
into a modification of the existing code.
Please let me keep discussing this on the ML or in a new PR to make the code 
acceptable into the mainline.

> Are there other functional differences?  Should these related implementations 
> be merged in some way?

The major differences are:

- Different trace data formats for the data accumulated in memory and for the 
/dev/tracer output (see the sketch after this list)
  This is done to reduce the trace data size in memory. The accumulated 
data contains packed (not aligned) values, and the
  task is recorded by its PID, not its name. The correspondence between PID and 
task name string is held in a separate task name buffer.
  On the other hand, the output from /dev/tracer contains aligned words and 
includes the task name string in each trace entry.
  This makes the data easy to handle in the application code (the nsh trace 
command).

- Additional ioctl functions in the /dev/tracer driver
  There are many features which can be controlled by the application, such as 
the system call trace filters.
  So the driver has ioctl handlers for them. Of course, sched_tracer.c has the 
code to handle the filters.
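As a sketch of the two record shapes being contrasted above (hypothetical 
layouts for illustration only, not the actual definitions from the task-tracer 
branch):

    #include <stdint.h>

    /* Accumulated in RAM: packed (GCC attribute), task identified by PID only */

    struct mem_note_s
    {
      uint8_t nc_length;      /* Record length */
      uint8_t nc_type;        /* Note type */
      uint8_t nc_pid[2];      /* Little-endian PID */
      uint8_t nc_systime[4];  /* Little-endian timestamp */
    } __attribute__((packed));

    /* Read from /dev/tracer: aligned words plus the task name in each entry */

    struct dev_note_s
    {
      uint32_t type;          /* Note type */
      uint32_t pid;           /* Task ID */
      uint32_t systime;       /* Timestamp */
      char     name[16];      /* Task name string copied into each entry */
    };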

I feel that the code should be separated into different PRs:
- the remaining system call trace support code, which requires modifications 
to the build system
- the sched tracer and device driver, which use the new sched note APIs (needs 
more discussion)

Thanks,
Yuuichi Nakamura

> -Original Message-
> From: Gregory Nutt 
> Sent: Tuesday, June 16, 2020 11:11 PM
> To: dev@nuttx.apache.org
> Subject: Re: NXView
> 
> eg
> >
> > Regardless of that decision, it would be nice if you could at least
> > upstream the interrupt and system call instrumentation. That will be
> > needed in any event and we should re-use that logic, not re-invent it.
> >
> I did that.  I created PR 1256 with the commit in your name


Re: NXView

2020-06-16 Thread Gregory Nutt

Some comments:

1. I don't think that there should be two implementations that are so 
similar.  There should be only one.  But I am open to extending/merging 
that one, common implementation.


2.  nsh is not an appropriate place for the application-side code.  That 
should go in apps/system.  Perhaps not in apps/system/sched_note; 
perhaps a unique application for your purpose.  It is not appropriate 
for the command to reside within the shell, however.


Greg


On 6/16/2020 8:03 PM, Nakamura, Yuuichi (Sony) wrote:

I did that.  I created PR 1256 with the commit in your name

Thank you!

I want to mainline the feature by issuing the additional PRs for the remaining 
part.
As you pointed out, the codes are based on the existing sched note logic.
The reason of preparing another files is the changes for this feature affects 
the existing logic.
Of course, the code duplication should be avoided. So if needed I want to 
change it to modification of the existing codes.
Please let me keep discussion in ML or new PR to make the code acceptable into 
the maineline.


Are there other functional differences?  Should these related implementations 
be merged in some way?

The major differences are:

- Different trace data format between the accumulated data in the memory and 
/dev/tracer output
   It is because to reduce the trace data size in the memory. The accumulated 
data contains packed (not aligned) values and
   task is recorded by its PID, not the name. The correspondence between PID 
and task name string is hold in the separated task name buffer.
   On the other hand, the output from /dev/tracer contains aligned words and 
contains the task name string for each trace entries.
   It is because easy to handle the data by the application code (nsh trace 
command).

- Additional ioctl functions in /dev/tracer driver
   There are many features which can be controlled by the application such as 
system call trace filters.
   So the driver has ioctl handlers for it. Of course, sched_tracer.c has the 
code to handle the filters.

I feel that the code should be separate into the different PRs:
- remaining system call trace support code which requires the modification to 
the build system
- sched tracer and device driver which uses new sched note APIs (needs more 
discussion)

Thanks,
Yuuichi Nakamura


-Original Message-
From: Gregory Nutt 
Sent: Tuesday, June 16, 2020 11:11 PM
To: dev@nuttx.apache.org
Subject: Re: NXView

eg

Regardless of that decision, it would be nice if you could at least
upstream the interrupt and system call instrumentation. That will be
needed in any event and we should re-use that logic, not re-invent it.


I did that.  I created PR 1256 with the commit in your name


Re: NXView

2020-06-16 Thread Gregory Nutt




The major differences are:

- Different trace data format between the accumulated data in the memory and 
/dev/tracer output
   It is because to reduce the trace data size in the memory. The accumulated 
data contains packed (not aligned) values and
   task is recorded by its PID, not the name. The correspondence between PID 
and task name string is hold in the separated task name buffer.
   On the other hand, the output from /dev/tracer contains aligned words and 
contains the task name string for each trace entries.
   It is because easy to handle the data by the application code (nsh trace 
command).


That is a trivial difference and there are some misconceptions.

The structures can be packed by simply adding the packed attribute to 
the structures.  That does not justify a redesign.
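For example, using the portable packing macros from include/nuttx/compiler.h 
(the structure itself is just a stand-in for illustration, not the real note 
layout):

    #include <stdint.h>
    #include <nuttx/compiler.h>

    begin_packed_struct struct note_example_s
    {
      uint8_t  nc_length;  /* Length of the note */
      uint8_t  nc_type;    /* Note type */
      uint16_t nc_pid;     /* Thread/task ID */
    } end_packed_struct;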


The current implementation does *not* use the task name; it uses the 
PID.  The task name is provided only when the task is created.  That 
provides the association between PID and name. Thereafter, only the PID is 
used.


Your implementation has too much overlap and should not come upstream as 
a separate implementation.  Extensions and improvements to the existing 
implementation are welcome, however.


Greg



RE: NXView

2020-06-16 Thread Nakamura, Yuuichi (Sony)
Thanks for the valuable comments.
I have no objection to your advice that overlapping, similar implementations 
should be avoided.
Let me change the current implementation into an extension of the existing 
code, and if there are any problems in extending it, please let me discuss 
them again.

Regarding the other issue, the place of the application-side code: it is 
because I have wanted to implement 
 trace cmd ""
It gets the trace while executing the specified command line, like the "time" 
command of nsh.
It requires the nsh_parse() nshlib-internal API.
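As an illustration of that dependency, such a built-in would have to call back 
into the shell parser, roughly like the fragment below.  cmd_trace, 
trace_start(), and trace_stop() are invented placeholders; nsh_parse() is the 
existing nshlib-internal API referred to above, used the same way the "time" 
command uses it.

    /* Hypothetical "trace cmd" built-in (fragment; argument checking elided) */

    static int cmd_trace(FAR struct nsh_vtbl_s *vtbl, int argc, FAR char **argv)
    {
      int ret;

      trace_start();                         /* Placeholder: start note collection */
      ret = nsh_parse(vtbl, argv[argc - 1]); /* Run the wrapped command line */
      trace_stop();                          /* Placeholder: stop and dump the trace */
      return ret;
    }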

Thanks,
Yuuichi Nakamura

> -Original Message-
> From: Gregory Nutt 
> Sent: Wednesday, June 17, 2020 11:13 AM
> To: dev@nuttx.apache.org
> Subject: Re: NXView
> 
> 
> > The major differences are:
> >
> > - Different trace data format between the accumulated data in the memory and
> /dev/tracer output
> >It is because to reduce the trace data size in the memory. The 
> > accumulated
> data contains packed (not aligned) values and
> >task is recorded by its PID, not the name. The correspondence between PID
> and task name string is hold in the separated task name buffer.
> >On the other hand, the output from /dev/tracer contains aligned words and
> contains the task name string for each trace entries.
> >It is because easy to handle the data by the application code (nsh trace
> command).
> 
> That is a trivial difference and there are some misconceptions.
> 
> The structures can be packed by simply adding the packed attribute to the
> structures.  That does not justify a redesign.
> 
> The current implementation does *not* use the task name, it uses the pid.  The
> task name is provided only when the task is created.  The provides the
> associated between pid and name. Thereafter only the pid is uses.
> 
> Your implementation has two much overlap and should not come upstream as a
> separate implementation.  Extensions and improvements to the existing
> implementation are welcome, however.
> 
> Greg



RE: NXView

2020-06-18 Thread Xiang Xiao
Agreed that we don't need to provide two drivers to collect similar 
information. It's better to incorporate the new features and the ftrace format 
into the current driver. The change may break binary format compatibility, but 
I think that is acceptable in exchange for more features and a cleaner 
implementation.

-Original Message-
From: Gregory Nutt  
Sent: Wednesday, June 17, 2020 10:13 AM
To: dev@nuttx.apache.org
Subject: Re: NXView


> The major differences are:
>
> - Different trace data format between the accumulated data in the memory and 
> /dev/tracer output
>It is because to reduce the trace data size in the memory. The accumulated 
> data contains packed (not aligned) values and
>task is recorded by its PID, not the name. The correspondence between PID 
> and task name string is hold in the separated task name buffer.
>On the other hand, the output from /dev/tracer contains aligned words and 
> contains the task name string for each trace entries.
>It is because easy to handle the data by the application code (nsh trace 
> command).

That is a trivial difference and there are some misconceptions.

The structures can be packed by simply adding the packed attribute to the 
structures.  That does not justify a redesign.

The current implementation does *not* use the task name, it uses the pid.  The 
task name is provided only when the task is created.  The provides the 
associated between pid and name. Thereafter only the pid is uses.

Your implementation has two much overlap and should not come upstream as a 
separate implementation.  Extensions and improvements to the existing 
implementation are welcome, however.

Greg




RE: NXView

2020-07-01 Thread Xiang Xiao
It's a reasonable functional partitioning. How about defining an interface 
like syslog_channel_s between the note logic and the driver? Then we could 
plug in different transports, like syslog.

> -Original Message-
> From: Nakamura, Yuuichi (Sony) 
> Sent: Wednesday, July 1, 2020 3:01 PM
> To: dev@nuttx.apache.org
> Cc: Nakamura, Yuuichi (Sony) 
> Subject: RE: NXView
> 
> Hi all,
> 
> After merging my syscall instrumentation patch into the
> feature/syscall-instrumentation branch, I have been considering how to
> incorporate my task trace support into the mainline.
> 
> Currently sched_note.c has the code to generate notes and the buffer
> management functions.
> Notes are generated all the time if configured to be enabled. (attached fig.1)
> 
> In the task tracer, I add filter logic for some note types, and all notes
> have to be enabled explicitly.
> The buffer management functions are also used by the task tracer, but the
> hardware solution doesn't require them.
> 
> So, I propose a new configuration option, CONFIG_SCHED_INSTRUMENTATION_FILTER,
> in sched_note.c.
> CONFIG_SCHED_INSTRUMENTATION_FILTER only enables the filter logic for each
> note type.
> And CONFIG_SCHED_INSTRUMENTATION_BUFFER would be changed to enable only the
> buffer management logic, not note generation.
> (attached fig.2)
> 
> If the hardware solution needs only the filter logic, that can be realized by
> enabling CONFIG_SCHED_INSTRUMENTATION_FILTER and disabling
> CONFIG_SCHED_INSTRUMENTATION_BUFFER.
> sched_note_add() (previously the note_add() static function in sched_note.c)
> is called when a kernel instrumentation event occurs,
> and it can be implemented to send the note data to the external hardware
> device. (attached fig.3)
> 
> The task tracer defines both CONFIG_SCHED_INSTRUMENTATION_FILTER and
> CONFIG_SCHED_INSTRUMENTATION_BUFFER.
> And if only CONFIG_SCHED_INSTRUMENTATION_BUFFER is defined, the existing
> sched_note behavior remains unchanged.
> 
> How about this proposal?
> I'm reworking my task trace code as a patch to the existing sched_note.c and
> note_driver.c.
> After that, I'd like to send a new pull request to the
> feature/syscall-instrumentation branch for review.
> 
> Thanks,
> Yuuichi Nakamura
> 
> > -----Original Message-
> > From: Nakamura, Yuuichi (Sony)
> > Sent: Wednesday, June 17, 2020 2:43 PM
> > To: dev@nuttx.apache.org
> > Cc: Nakamura, Yuuichi (Sony) 
> > Subject: RE: NXView
> >
> > Thanks for valuable comments.
> > I have no objection to your advice that overlapping the similar
> > implementation should be avoided.
> > Let me make change the current implementation into the extension of
> > the existing codes, and if there are any problems in extending, please let 
> > me discuss again.
> >
> > Regarding to another issue, the place of the application side code, it
> > is because I have wanted to implement  trace cmd ""
> > It gets the trace while executing the specified command line like
> > "time" command of nsh.
> > It requires nsh_parse() nshlib internal API.
> >
> > Thanks,
> > Yuuichi Nakamura
> >
> > > -Original Message-
> > > From: Gregory Nutt 
> > > Sent: Wednesday, June 17, 2020 11:13 AM
> > > To: dev@nuttx.apache.org
> > > Subject: Re: NXView
> > >
> > >
> > > > The major differences are:
> > > >
> > > > - Different trace data format between the accumulated data in the
> > > > memory and
> > > /dev/tracer output
> > > >It is because to reduce the trace data size in the memory. The
> > > > accumulated
> > > data contains packed (not aligned) values and
> > > >task is recorded by its PID, not the name. The correspondence
> > > > between PID
> > > and task name string is hold in the separated task name buffer.
> > > >On the other hand, the output from /dev/tracer contains aligned
> > > > words and
> > > contains the task name string for each trace entries.
> > > >It is because easy to handle the data by the application code
> > > > (nsh trace
> > > command).
> > >
> > > That is a trivial difference and there are some misconceptions.
> > >
> > > The structures can be packed by simply adding the packed attribute
> > > to the structures.  That does not justify a redesign.
> > >
> > > The current implementation does *not* use the task name, it uses the
> > > pid.  The task name is provided only when the task is created.  The
> > > provides the associated between pid and name. Thereafter only the pid is 
> > > uses.
> > >
> > > Your implementation has two much overlap and should not come
> > > upstream as a separate implementation.  Extensions and improvements
> > > to the existing implementation are welcome, however.
> > >
> > > Greg




RE: NXView

2020-07-01 Thread Nakamura, Yuuichi (Sony)
Thanks for your comment.
Then it may be better to separate the buffer management logic into another file 
like sched_note_buffer.c.
I'll try it.

> -Original Message-
> From: Xiang Xiao 
> Sent: Wednesday, July 1, 2020 10:54 PM
> To: dev@nuttx.apache.org
> Subject: RE: NXView
> 
> It's a reasonable function partitioning. How about we define an interface like
> syslog_channel_s between note and driver? So we can plug in the different
> transport like syslog.
> 
> > -Original Message-
> > From: Nakamura, Yuuichi (Sony) 
> > Sent: Wednesday, July 1, 2020 3:01 PM
> > To: dev@nuttx.apache.org
> > Cc: Nakamura, Yuuichi (Sony) 
> > Subject: RE: NXView
> >
> > Hi all,
> >
> > After merging my syscall instrumentation patch into
> > feature/syscall-instrumentation branch, I had considered how to incorporate
> my task trace support into the mainline.
> >
> > Currently sched_note.c has the codes to generate notes and buffer
> management functions.
> > notes are generated all the time if configured to be enabled.
> > (attached fig.1)
> >
> > In task tracer, I add the filter logic for some note types, and all notes 
> > have to be
> enabled explicitly.
> > The buffer management functions are also used by the task tracer, but the
> hardware solution doesn't require them.
> >
> > So, I propose the new configuration
> CONFIG_SCHED_INSTRUMENTATION_FILTER in sched_note.c.
> > CONFIG_SCHED_INSTRUMENTATION_FILTER only enables the filter logic in
> each note types.
> > And change CONFIG_SCHED_INSTRUMENTATION_BUFFER to make enable
> only buffer management logic, not note generation.
> > (attached fig.2)
> >
> > If hardware solution needs only filter logic, by enabling
> > CONFIG_SCHED_INSTRUMENTATION_FILTER and disabling
> CONFIG_SCHED_INSTRUMENTATION_BUFFER can realize it.
> > sched_note_add() (previously note_add() static function in
> > sched_note.c) is called when some kernel instrumentation event occured
> > and it can be implemented to send the note data to the external
> > hardware device. (attached fig.3)
> >
> > The task tracer defines both CONFIG_SCHED_INSTRUMENTATION_FILTER
> and CONFIG_SCHED_INSTRUMENTATION_BUFFER.
> > And if only CONFIG_SCHED_INSTRUMENTATION_BUFFER is defined, the
> existing sched_note specification remains.
> >
> > How about this proposal ?
> > I'm fixing my task trace code as the patch of existing sched_note.c and
> note_driver.c.
> > After that, I'd like to send new pull request to 
> > feature/syscall-instrumentation
> branch for the review.
> >
> > Thanks,
> > Yuuichi Nakamura
> >
> > > -Original Message-
> > > From: Nakamura, Yuuichi (Sony)
> > > Sent: Wednesday, June 17, 2020 2:43 PM
> > > To: dev@nuttx.apache.org
> > > Cc: Nakamura, Yuuichi (Sony) 
> > > Subject: RE: NXView
> > >
> > > Thanks for valuable comments.
> > > I have no objection to your advice that overlapping the similar
> > > implementation should be avoided.
> > > Let me make change the current implementation into the extension of
> > > the existing codes, and if there are any problems in extending, please 
> > > let me
> discuss again.
> > >
> > > Regarding the other issue, the location of the application-side code:
> > > it is because I wanted to implement a trace "" command.
> > > It gets the trace while executing the specified command line, like
> > > the "time" command of nsh.
> > > It requires the nsh_parse() nshlib-internal API.
> > >
> > > Thanks,
> > > Yuuichi Nakamura
> > >
> > > > -Original Message-
> > > > From: Gregory Nutt 
> > > > Sent: Wednesday, June 17, 2020 11:13 AM
> > > > To: dev@nuttx.apache.org
> > > > Subject: Re: NXView
> > > >
> > > >
> > > > > The major differences are:
> > > > >
> > > > > - Different trace data format between the accumulated data in
> > > > >   the memory and the /dev/tracer output
> > > > >   This is to reduce the trace data size in memory.  The accumulated
> > > > >   data contains packed (not aligned) values, and the task is
> > > > >   recorded by its PID, not its name.  The correspondence between PID
> > > > >   and task name string is held in a separate task name buffer.
> > > > >   On the other hand, the output f

Re: NXView

2020-07-01 Thread Gregory Nutt



It's a reasonable function partitioning. How about we define an interface like 
syslog_channel_s between note and driver? So we can plug in the different 
transport like syslog.


The correct way to redirect streams within the OS is to use NuttX stream 
interfaces.  Forget about syslog channels.  That is not relevant here.


NuttX stream interfaces are defined in include/nuttx/streams.h.  You 
would need to create an outstream that "inherits" from struct 
lib_outstream_s.  There are several examples of custom outstreams in 
that header file, but you will create a custom one for the RAM log.  You 
will need one that manages the circular RAM buffer and whatever other 
special properties.  Please follow the examples in that header file.
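
To make the idea concrete, here is a minimal sketch only, not working 
code.  It assumes that struct lib_outstream_s provides a put() callback 
and an nput counter, as the memory and raw outstream examples in 
include/nuttx/streams.h do, and the note_ramlog_* names are made up for 
illustration:

   #include <nuttx/config.h>
   #include <nuttx/streams.h>

   #include <stddef.h>
   #include <stdint.h>

   /* Hypothetical outstream that "inherits" from lib_outstream_s and
    * writes into a circular RAM buffer.
    */

   struct note_ramlog_outstream_s
   {
     struct lib_outstream_s common;  /* "Base class" -- must be first */
     FAR uint8_t *buffer;            /* Circular RAM buffer */
     size_t size;                    /* Size of the buffer */
     size_t head;                    /* Next write offset */
   };

   static void note_ramlog_putc(FAR struct lib_outstream_s *self, int ch)
   {
     FAR struct note_ramlog_outstream_s *stream =
       (FAR struct note_ramlog_outstream_s *)self;

     stream->buffer[stream->head] = (uint8_t)ch;        /* Store the byte */
     stream->head = (stream->head + 1) % stream->size;  /* Wrap around */
     self->nput++;                                      /* Count bytes put */
   }

   void note_ramlog_outstream(FAR struct note_ramlog_outstream_s *stream,
                              FAR uint8_t *buffer, size_t size)
   {
     stream->common.put  = note_ramlog_putc;
     stream->common.nput = 0;
     stream->buffer      = buffer;
     stream->size        = size;
     stream->head        = 0;
   }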


This is the universal way of redirecting byte streams within the OS.  
There are many examples since they are used in all cases.  A good 
example is libs/libc/stdio/lib_libvsprintf.c.  That implements all of 
the family of printf-like functions including printf, fprintf, dprintf, 
sprintf, snprintf, asprintf, etc.  It uses an outstream to send the 
formatted data to the correct recipient:


   int lib_vsprintf(FAR struct lib_outstream_s *stream,
                    FAR const IPTR char *fmt, va_list ap)

Functions like printf, fprintf, dprintf, sprintf, snprintf, asprintf, 
etc. then just set up the outstream instance and call lib_vsprintf().


The architecture should consist of an encoder that converts the 
sched_note call data to a byte stream by serializing/marshaling a packed 
data structure.  It should then use a global outstream to send the 
data.  Each "client" of the encoder should provide the global outstream 
and handle the data sent to the "client" byte-by-byte.  The syslog is 
only one of many possible "clients" for the encoded data, so you should 
not focus on that.
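
As a rough illustration only -- the names note_event_s, note_encode(), 
and g_note_outstream are invented here, and the real note layouts live 
in include/nuttx/sched_note.h -- the encoder could look roughly like 
this, serializing a record through whichever outstream the client 
installed:

   #include <nuttx/config.h>
   #include <nuttx/streams.h>

   #include <stddef.h>
   #include <stdint.h>

   /* Simplified stand-in for a note record (packed in practice) */

   struct note_event_s
   {
     uint8_t  type;      /* Suspend, resume, semaphore, spinlock, ... */
     uint8_t  priority;  /* Thread priority */
     uint16_t pid;       /* Thread ID */
     uint32_t systime;   /* Timestamp */
   };

   /* The "client" (RAM log, FIFO transport, syslog, ...) installs this */

   static FAR struct lib_outstream_s *g_note_outstream;

   void note_encode(FAR const struct note_event_s *note)
   {
     FAR const uint8_t *src = (FAR const uint8_t *)note;
     size_t i;

     if (g_note_outstream == NULL)
       {
         return;            /* No client has registered a stream yet */
       }

     /* Serialize the record byte-by-byte through the global outstream */

     for (i = 0; i < sizeof(struct note_event_s); i++)
       {
         g_note_outstream->put(g_note_outstream, src[i]);
       }
   }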


The byte-by-byte transfer may be too inefficient.  You could come up 
with a similar interface that transfers multiple bytes of data at a time 
(the full packed data in one transfer) -- like write() vs fputc().  That 
will probably be necessary for performance reasons.
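
For example, a hypothetical bulk variant (nothing like this exists 
today; the name is made up) could hand the transport a whole serialized 
record in one call:

   #include <nuttx/config.h>
   #include <nuttx/compiler.h>

   #include <stddef.h>
   #include <stdint.h>

   /* Hypothetical bulk transport interface: one call per serialized
    * note, analogous to write() vs fputc().
    */

   struct note_transport_s
   {
     void (*write)(FAR struct note_transport_s *self,
                   FAR const uint8_t *buf, size_t len);
   };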






RE: NXView

2020-07-01 Thread Xiang Xiao
Yes, lib_outstream_s is a better candidate.
BTW, the buffer may always be needed before the hardware transport driver finishes 
its initialization; otherwise the important initial activity will be lost.
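
One way to handle that (a sketch only; every name below is hypothetical) 
is to let the note logic fall back to a small RAM buffer until a 
transport outstream has been registered, then drain the backlog:

   #include <nuttx/config.h>
   #include <nuttx/streams.h>

   #include <stddef.h>
   #include <stdint.h>

   static uint8_t g_early_buf[512];                 /* Pre-init activity */
   static size_t  g_early_len;
   static FAR struct lib_outstream_s *g_transport;  /* NULL until driver is up */

   void note_emit_byte(int ch)
   {
     if (g_transport != NULL)
       {
         g_transport->put(g_transport, ch);          /* Normal path */
       }
     else if (g_early_len < sizeof(g_early_buf))
       {
         g_early_buf[g_early_len++] = (uint8_t)ch;   /* Defer until ready */
       }
   }

   void note_register_transport(FAR struct lib_outstream_s *stream)
   {
     size_t i;

     g_transport = stream;

     for (i = 0; i < g_early_len; i++)               /* Flush the backlog */
       {
         stream->put(stream, g_early_buf[i]);
       }

     g_early_len = 0;
   }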

> -Original Message-
> From: Gregory Nutt 
> Sent: Thursday, July 2, 2020 10:05 AM
> To: dev@nuttx.apache.org
> Subject: Re: NXView
> 
> 
> > It's a reasonable function partitioning. How about we define an interface 
> > like syslog_channel_s between note and driver? So we can
> plug in the different transport like syslog.
> 
> The correct way to redirect streams within the OS is to use NuttX stream 
> interfaces.  Forget about systlog channels.  That is not
> relevant here.
> 
> NuttX stream interfaces are defined in include/nuttx/streams.h. you would 
> need to create an oustream and "inherits" from struct
> lib_outstream_s.  There are several examples of custom outstreams in that 
> header file, but you will create a custom one for the ram
> log.  You will need one that manages the circular ram buffer and whatever 
> other special properties.  Please follow the examples in that
> header file.
> 
> This is the universal way of redirecting byte streams within the OS. there 
> are many examples since they are used in all cases.  A good
> example is libs/libc/stdio/lib_libvsprintf.c.  That implements all of the 
> family of printf-like functions including printf, fprintf, dprintf,
> sprintf, snprintf, asprintf, etc.  It uses an outstream to send the formatted 
> data to the correct recipient:
> 
> int lib_vsprintf(FAR struct lib_outstream_s *stream,
>   FAR const IPTR char *fmt, va_list ap)
> 
> Functions like printf, fprintf, dprintf, sprintf, snprintf, asprintf, etc 
> then just setup the outstream instance and call lib_vsprintf().
> 
> The architecture should consist of a encoder that converts the sched_note 
> call data to a byte stream by serializing/marshaling a
> packed data structure.  It should then use a global outstream to send the 
> data.  Each "client" of the encoder should provide the global
> outstream and handle the data sent to "client" byte-by-byte.  The syslog is 
> only one of many possible "clients" for the encoded data
> so you should not focus on that.
> 
> The byte-by-byte transfer may be too inefficient.  You could come up with a 
> similar interface that transfers multiple bytes of data at a
> time (the full packed data in one transfer) -- like write() vs fputc().  That 
> will probably be necessary for performance reasons.
> 
> 




RE: NXView

2020-07-01 Thread Nakamura, Yuuichi (Sony)
Thanks for the detailed comment.  I'll study streams.h and apply it to the note 
data interface.

> -Original Message-
> From: Gregory Nutt 
> Sent: Thursday, July 2, 2020 11:05 AM
> To: dev@nuttx.apache.org
> Subject: Re: NXView
> 
> 
> > It's a reasonable function partitioning. How about we define an interface 
> > like
> syslog_channel_s between note and driver? So we can plug in the different
> transport like syslog.
> 
> The correct way to redirect streams within the OS is to use NuttX stream
> interfaces.  Forget about systlog channels.  That is not relevant here.
> 
> NuttX stream interfaces are defined in include/nuttx/streams.h. you would need
> to create an oustream and "inherits" from struct lib_outstream_s.  There are
> several examples of custom outstreams in that header file, but you will 
> create a
> custom one for the ram log.  You will need one that manages the circular ram
> buffer and whatever other special properties.  Please follow the examples in 
> that
> header file.
> 
> This is the universal way of redirecting byte streams within the OS. there 
> are many
> examples since they are used in all cases.  A good example is
> libs/libc/stdio/lib_libvsprintf.c.  That implements all of the family of 
> printf-like
> functions including printf, fprintf, dprintf, sprintf, snprintf, asprintf, 
> etc.  It uses
> an outstream to send the formatted data to the correct recipient:
> 
> int lib_vsprintf(FAR struct lib_outstream_s *stream,
>   FAR const IPTR char *fmt, va_list ap)
> 
> Functions like printf, fprintf, dprintf, sprintf, snprintf, asprintf, etc 
> then just setup
> the outstream instance and call lib_vsprintf().
> 
> The architecture should consist of a encoder that converts the sched_note call
> data to a byte stream by serializing/marshaling a packed data structure.  It
> should then use a global outstream to send the data.  Each "client" of the
> encoder should provide the global outstream and handle the data sent to 
> "client"
> byte-by-byte.  The syslog is only one of many possible "clients" for the 
> encoded
> data so you should not focus on that.
> 
> The byte-by-byte transfer may be too inefficient.  You could come up with a
> similar interface that transfers multiple bytes of data at a time (the full 
> packed
> data in one transfer) -- like write() vs fputc().  That will probably be 
> necessary
> for performance reasons.
> 
> 



Re: [nuttx] Re: NXView

2020-06-12 Thread Brennan Ashton
On Fri, Jun 12, 2020, 5:18 PM Gregory Nutt  wrote:

> Hi, again,
>
> I suppose the first question should be, "Is the FT245RL the correct
> choice?"  After all, it is only 8-bits wide and only USB 2.0.  That could
> limit the amount of instrumentation data passed to the host because of data
> overrun or or it could alter the real-time behavior of the target.
> Ideally, the instrumentation should involve minimal overhead and the
> behavior the real time system should be the same with or without the
> instrumentation enabled.  Otherwise, such a tool would not be a proper
> diagnostic tool.
>
> I considered some PCIe parallel data acquisition devices, but did not see
> a good match.  PCIe would be hot, howeer.
>
> I also looked at FTDI FT60xx devices, but these seem so camera focused
> that it was not completely clear to me that these could be usable.  But I
> am a mostly a software guy.  Perhaps someone out there with better
> knowledge of these devices could help out.
>
> The older FT600x, for example, has a 16-bits wide FIFO (pretty much
> optimal for most MCUs) and has a USB 3.0 interface to the host PC.  Using
> such a camera-oriented device is less obvious to me than the more general
> FT245RL.  Perhaps that is only because of the camera-oriented language used
> in the data sheets?
>
> If you know something about these options, I would like to hear from you.
>
> Greg
>

If you want high-speed I/O to USB, the FX3 is probably one of the best bets.
You see it frequently used on logic analyser and software-defined radio
boards, between the USB and the FPGA.

https://www.cypress.com/products/ez-usb-fx3-superspeed-usb-30-peripheral-controller

Somewhat related, but I have in the past modified the firmware on this for
some custom debugging: https://1bitsquared.com/products/black-magic-probe

--Brennan

>


Re: [nuttx] Re: NXView

2020-06-12 Thread Gregory Nutt

Hi, Brennan,

I am inclined to stick with the FT245RL because the boards are cheap and 
readily available.  Conceptually, the basic solution does not depend on 
the selection of hardware.  The hardware does affect performance and 
scalability, but I think that the hardware selection is not critical 
for initial development.


I can get the FT245RL board for ~$11 USD on eBay and an adequate 
STM32F103/F407 for $10-15 from China.  Ready availability and inexpensive 
hardware (albeit low performance) would probably be a better starting 
point, unless you can point to a competing low-cost OTS solution.  The 
combined cost is around $20 and meets all of the initial development 
requirements.  I would have to have a strong reason to deviate 
from that.  But I could be very easily dissuaded by an alternative OTS 
hardware proposal at similar cost.


Nothing I have said precludes that alternative, higher performance 
implementation.  At a block diagram level, it does not matter.  It is 
just a matter of drivers on both sides.


Greg


On 6/12/2020 6:27 PM, Brennan Ashton wrote:

On Fri, Jun 12, 2020, 5:18 PM Gregory Nutt  wrote:


Hi, again,

I suppose the first question should be, "Is the FT245RL the correct
choice?"  After all, it is only 8-bits wide and only USB 2.0.  That could
limit the amount of instrumentation data passed to the host because of data
overrun or or it could alter the real-time behavior of the target.
Ideally, the instrumentation should involve minimal overhead and the
behavior the real time system should be the same with or without the
instrumentation enabled.  Otherwise, such a tool would not be a proper
diagnostic tool.

I considered some PCIe parallel data acquisition devices, but did not see
a good match.  PCIe would be hot, howeer.

I also looked at FTDI FT60xx devices, but these seem so camera focused
that it was not completely clear to me that these could be usable.  But I
am a mostly a software guy.  Perhaps someone out there with better
knowledge of these devices could help out.

The older FT600x, for example, has a 16-bits wide FIFO (pretty much
optimal for most MCUs) and has a USB 3.0 interface to the host PC.  Using
such a camera-oriented device is less obvious to me than the more general
FT245RL.  Perhaps that is only because of the camera-oriented language used
in the data sheets?

If you know something about these options, I would like to hear from you.

Greg


If you want high-speed io to USB the FX3 is probably one of the best bets.
You see it frequently used on logic analyser and software defined radio
boards between the USB and the FPGA.

https://www.cypress.com/products/ez-usb-fx3-superspeed-usb-30-peripheral-controller

Somewhat related but have in the past modified the firmware on this for
some custom debugging. https://1bitsquared.com/products/black-magic-probe

--Brennan





Re: [nuttx] Re: NXView

2020-06-12 Thread Brennan Ashton
On Fri, Jun 12, 2020 at 6:22 PM Gregory Nutt  wrote:
>
> Hi, Brennan,
>
> I am inclined to stick with the FT245RL because the boards are cheap and
> readily available.  Conceptually, the basic solution does not depend on
> the selection of hardware. The hardware does effect performance and
> scalability, but I think the that the hardware selection is not critical
> for initial development.
>
> I can get the RT245RL board for ~$11 USD on eBay an adequate
> STM32F103/F407 for $10-15 from China. Ready availability, inexpensive
> hardware (albeit low performance) would probably be a better starting
> point.. unless you can point to a competing low cost OTS solution.  The
> combined cost is around $20 and meets all of the initial development
> requirements.  I would have to have to have strong reason to deviate
> from that. But I could be very easily dissuaded with an alternative OTS
> hardware proposal at similar cost.
>
> Nothing I have said precludes that alternative, higher performance
> implementation.  At a block diagram level, it does not matter.  It is
> just a matter of drivers on both sides.
>
> Greg

Makes total sense if it provides enough bandwidth.  There are some
other options based on the FX2 USB 2.0 chip that are
common in low-cost ($10) 8-channel 25 MHz logic analyzers as well.  As you
said, it's a block with a few input pins, a FIFO, and a USB interface, so
if it works, sounds good.

I am wondering if the host side could be implemented by leveraging
sigrok and pulseview?
https://sigrok.org/wiki/Protocol_decoder_HOWTO

One of the advantages would be the ability to easily overlay other
data sources, since you already have some major chunks built.
Had you put much thought into what the target side would look like
from an interface standpoint?

Not trying to dissuade from building something new :)

Here is an example using sigrok+pulseview and a $15 logic analyzer
https://learn.sparkfun.com/tutorials/using-the-usb-logic-analyzer-with-sigrok-pulseview/all

--Brennan


Re: [nuttx] Re: NXView

2020-06-12 Thread Brennan Ashton
On Fri, Jun 12, 2020 at 6:52 PM Brennan Ashton
 wrote:
> I am wondering if the host side could be implemented by leveraging
> sigrok and pulseview?
> https://sigrok.org/wiki/Protocol_decoder_HOWTO
>
Another source of inspiration (or integration?) could be kernelshark
https://kernelshark.org/Documentation.html

which is a frontend for ftrace
https://www.kernel.org/doc/Documentation/trace/ftrace.txt

--Brennan


Re: [nuttx] Re: NXView

2020-06-13 Thread Petr Buchta
My two cents... I would definitely make use of some existing frontend for
tracing visualisation. Something like this in case of lttng -
https://lttng.org/viewers/

Trace Compass seems to be a fairly complete solution for visualisation -
https://www.eclipse.org/tracecompass

Petr

On Sat, Jun 13, 2020, 4:13 AM Brennan Ashton 
wrote:

> On Fri, Jun 12, 2020 at 6:52 PM Brennan Ashton
>  wrote:
> > I am wondering if the host side could be implemented by leveraging
> > sigrok and pulseview?
> > https://sigrok.org/wiki/Protocol_decoder_HOWTO
> >
> Another source of inspiration (or integration?) could be kernelshark
> https://kernelshark.org/Documentation.html
>
> which is a frontend for ftrace
> https://www.kernel.org/doc/Documentation/trace/ftrace.txt
>
> --Brennan
>


Re: [nuttx] Re: NXView

2020-06-13 Thread Gregory Nutt

Thanks, Brennan and Petr, for the recommendations.

At this point, I am only trying to ascertain if there are a few people 
interested in participating in such a project.  I think it is more than 
I can consider doing alone, so any further steps would require some 
interest in the development itself.


Brennan, you asked about the target-side effort.  That would be pretty 
small because the instrumentation was built into the OS on day one.  You 
can see this in sched/Kconfig in the description of 
CONFIG_SCHED_INSTRUMENTATION.  When that option is enabled, OS hooks 
in the form of call-outs are enabled.


There is some current use of those call-outs for target-side 
software monitoring of the scheduler.


- There is logic in sched/sched/sched_note.c that will buffer the 
instrumented data in memory.
- There is a character driver at drivers/syslog/note_driver.c that will 
export the buffered instrumentation to any application.
- There is an application at apps/system/sched_note that will use that 
driver to periodically show OS activity.


None of this, of course, would be used in the proposed host-based 
monitoring except for the raw OS call-outs.  For this proposal, a modest 
amount of new development would be needed:


- Board-specific initialization of the parallel interface (like the 
FMC/FSMC in STM32 MCUs) so that writes to a memory-mapped address will 
add data to the FIFO.
- A new module that would (1) receive the OS instrumentation call-outs, 
(2) encode/marshal the scheduler event data into a byte stream, and (3) 
transfer the data to the FIFO by writing the byte stream to the 
memory-mapped address (a rough sketch follows below).
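
A minimal sketch of item (3), assuming an FSMC-mapped FT245RL FIFO at a 
hypothetical address NOTE_FIFO_BASE (note_fifo_send() is an invented 
name, not existing NuttX code), would be little more than:

   #include <nuttx/config.h>
   #include <nuttx/compiler.h>

   #include <stddef.h>
   #include <stdint.h>

   #define NOTE_FIFO_BASE 0x60000000  /* Hypothetical FSMC bank address */

   /* Each 8-bit write to the mapped address clocks one byte into the
    * FT245RL FIFO; the encoder provides the serialized note in buf.
    */

   static inline void note_fifo_send(FAR const uint8_t *buf, size_t len)
   {
     FAR volatile uint8_t *fifo = (FAR volatile uint8_t *)NOTE_FIFO_BASE;

     while (len-- > 0)
       {
         *fifo = *buf++;
       }
   }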


That is not such a big effort.  The other efforts would be to 
configure the host-side USB driver, verify proper transfer of data, 
and develop the application to present the data graphically.  Those, I 
believe, are more effort than the work on the target side.


So, in general, I think the target side is in good shape for this use.

Greg




Re: [nuttx] Re: NXView

2020-06-13 Thread Gregory Nutt




If you want high-speed io to USB the FX3 is probably one of the best bets.
You see it frequently used on logic analyser and software defined radio
boards between the USB and the FPGA.

https://www.cypress.com/products/ez-usb-fx3-superspeed-usb-30-peripheral-controller


There are several EZ-USB FX2LP boards on eBay at $4-6.  That is only USB 
2.0 but might be a good option, although I am already up to speed with 
the FT245RL.


I actually think that USB 2.0 high speed at 480 Mbps would be adequate 
in most cases.  We would probably only realize 300 Mbps or so, but I 
think even transferring at that rate could impact real-time 
performance.  The FT60x with USB 3.0 signals at 5 Gbps (actual throughput 
would be lower).  I imagine that the EZ-USB FX3 would be similar.  But 
that rate may not be necessary.  One would have to collect some 
measurements to really understand (a) the required throughput, (b) the 
MCU overhead for making the transfers, and (c) the effect on real-time 
performance.
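
As a purely illustrative back-of-envelope calculation (both the note 
size and the event rate below are assumptions, not measurements):

   16 bytes/note x 50,000 notes/second = 800 KB/s, or about 6.4 Mbps

Even that modest load crowds USB full speed (12 Mbps) but is a small 
fraction of high speed (480 Mbps), which suggests high speed is the 
practical floor for a busy target.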



Why not use the ETM?
The solution should not conceptually depend on any particular 
transport.  Any transport that can meet the data rate requirements 
(whatever those are) and interferes only minimally with CPU performance 
could be considered.  So I suppose that ETM might also be a possibility 
on many ARMs, but not a general solution that I would personally be 
interested in.






Re: [nuttx] Re: NXView

2020-06-13 Thread Gregory Nutt




Makes total sense if it provides enough bandwidth.  There are some
other options that are based off of the FX2 USB2.0 chip that are
common in low cost ($10) 8ch 25MHZ logic analyzers as well.  As you
said it's a block with a few input pins, FIFO, and a usb interface, so
if it works, sounds good.


Well, I just discovered that although the FT245 claims to be USB 2.0, it 
does not support high speed, only 12 Mbps (full speed).  So that would be 
a bad choice.


The problem is that in the context of the OS instrumentation call-outs, 
we can do no driver operations.  With the FT245R, we could simply write 
to a memory-mapped FIFO.  Most of the FX2LP modes are more complex.  There 
is a slave FIFO, but I don't fully understand that yet.





Re: [nuttx] Re: NXView

2020-06-13 Thread Brennan Ashton
On Sat, Jun 13, 2020 at 1:56 PM Gregory Nutt  wrote:
> The problem is that in the context of the OS instrumentation call-outs,
> we can do no driver operations.  With the FT245R, it could do writes to
> a memory-mapped FIFO.  Most of the FX2LP modes are more complex.  There
> is a slave FIFO, but I don't fully understand that yet.

If you want to just stream data to the FIFO, then yes, that is probably
the right way to go.
This app note shows the basic setup:
https://www.cypress.com/file/44551/download
You have up to 16 data lines that map to the FIFO.
FLAGA/D show the status of the FIFO, so they can probably be ignored if you
are OK with losing data.
SLOE can probably be asserted at all times.
SLRD/WR can also be a fixed value since we won't be reading.
FIFOADDR just selects which bank/endpoint you are writing to, so
once again it can likely be hard-coded.

I don't think I have a dev board for this anymore, last design I did
with this was a few years back, but I would be open to tracking one
down if this is a path you want to go.

--Brennan


Re: [nuttx] Re: NXView

2020-06-13 Thread Brennan Ashton
On Sat, Jun 13, 2020 at 2:25 PM Brennan Ashton
 wrote:
>
> On Sat, Jun 13, 2020 at 1:56 PM Gregory Nutt  wrote:
> > The problem is that in the context of the OS instrumentation call-outs,
> > we can do no driver operations.  With the FT245R, it could do writes to
> > a memory-mapped FIFO.  Most of the FX2LP modes are more complex.  There
> > is a slave FIFO, but I don't fully understand that yet.
>
> If you want to just stream data to the FIFO then yes that is probably
> the right way to go.
> This app note shows the basic setup
> https://www.cypress.com/file/44551/download
> You have up-to 16 data lines that map to the fifo
> FLAGA/D show the status of the FIFO so probably can be ignored if you
> are OK with losing data
> SLOE probably can be always asserted
> SLRD/WR can also be fixed value since we won't be reading
> FIFOADDR is just selecting which bank/endpoint you are writing to, so
> once again likely can be hard coded.
>
> I don't think I have a dev board for this anymore, last design I did
> with this was a few years back, but I would be open to tracking one
> down if this is a path you want to go.
>
> --Brennan

You sucked me in. It was only $8 to get one here tomorrow... and I am
not sure I can find another use for it even if this does not work out.

For those following along, you can find the board I am talking about
on eBay, Amazon, etc., by looking for "EZ-USB FX2LP".
--Brennan


Re: [nuttx] Re: NXView

2020-06-13 Thread Gregory Nutt

On 6/13/2020 3:25 PM, Brennan Ashton wrote:

On Sat, Jun 13, 2020 at 1:56 PM Gregory Nutt  wrote:

The problem is that in the context of the OS instrumentation call-outs,
we can do no driver operations.  With the FT245R, it could do writes to
a memory-mapped FIFO.  Most of the FX2LP modes are more complex.  There
is a slave FIFO, but I don't fully understand that yet.

If you want to just stream data to the FIFO then yes that is probably
the right way to go.
This app note shows the basic setup
https://www.cypress.com/file/44551/download
You have up-to 16 data lines that map to the fifo
FLAGA/D show the status of the FIFO so probably can be ignored if you
are OK with losing data
SLOE probably can be always asserted
SLRD/WR can also be fixed value since we won't be reading
FIFOADDR is just selecting which bank/endpoint you are writing to, so
once again likely can be hard coded.

I don't think I have a dev board for this anymore, last design I did
with this was a few years back, but I would be open to tracking one
down if this is a path you want to go.

--Brennan


The FT232H would be another option.

Greg




Re: [nuttx] Re: NXView

2020-06-14 Thread Gregory Nutt




You sucked me in. It was only $8 to get one here tomorrow... and I am
not sure I can find another use for it even if this does not work out.

For those following along, you can find the board I am talking about
on ebay, amazon, etc.. by looking for "EX-USB FX2LP"
I ordered a couple of those from China too.  But I won't get them here 
in Costa Rica until August.  Looks like you will be the trailblazer.