[beagleboard] Real Time experience on Beagle?

2019-10-15 Thread Drew Fustini
Hello,

I'm presenting an overview of Beagle projects on October 31 at the Real
Time Summit in Lyon, France.

I'd appreciate any feedback if you've tried Xenomai or the PREEMPT_RT
kernel.

Thanks
Drew



[beagleboard] Re: Real Time experience on Beagle?

2019-10-15 Thread shabaz
Hi Drew,

I hope you're well!

I recently experimented briefly with both. Here are the steps I used to 
install Xenomai (Rob Nelson helped me find the pre-built kernels; the link 
to them is below). 
Not all Xenomai APIs are enabled in the kernel. Anyway, I did get reduced 
jitter when using one of the API sets, called Alchemy, which is probably the 
easiest to code with. In a nutshell, you use the API to create a task thread 
and do your low-latency work there.

For my experiment, I used this content in my makefile:

XENO_CONFIG := /usr/xenomai/bin/xeno-config

CFLAGS  := $(shell $(XENO_CONFIG) --posix --alchemy --cflags)
LDFLAGS := $(shell $(XENO_CONFIG) --posix --alchemy --ldflags)

CC := gcc
EXECUTABLE := atest

all: $(EXECUTABLE)

%: %.c
	$(CC) -o $@ $< $(CFLAGS) $(LDFLAGS)

clean:
	rm -f $(EXECUTABLE)

and the code needs to look like this:

#include <stdio.h>         /* printf, sprintf */
#include <unistd.h>        /* sleep */
#include <alchemy/task.h>  /* RT_TASK, rt_task_* (Xenomai Alchemy API) */

RT_TASK hello_task;

// function to be executed by the task
// this is your stuff for which you want low jitter
void helloWorld(void *arg)
{
  RT_TASK_INFO curtaskinfo;
  printf("Hello World!\n");

  // inquire current task
  rt_task_inquire(NULL, &curtaskinfo);

  // print task name
  printf("Task name : %s \n", curtaskinfo.name);

  while(1)
  {
    // do your stuff here in a forever loop if you like

    // use this sleep call if you need any sleep - it has low jitter
    // (the argument is in ticks; nanoseconds with the default clock resolution):
    rt_task_sleep(5);
  }
}

int main(int argc, char* argv[])
{
  char str[10];

  printf("start task\n");
  sprintf(str, "hello");

  /* Create task
   * Arguments: &task,
   *            name,
   *            stack size (0=default),
   *            priority,
   *            mode (FPU, start suspended, ...)
   */
  rt_task_create(&hello_task, str, 0, 99, 0);

  /* Start task
   * Arguments: &task,
   *            task function,
   *            function argument
   */
  rt_task_start(&hello_task, &helloWorld, 0);

  while(1)
  {
    sleep(10);
  }
}

To test latency I ran this:
cyclictest -n -p 90 -i 1000
(-n uses clock_nanosleep, -p 90 sets the real-time priority, -i 1000 sets a 
1000 us wake-up interval) and the result, with values in microseconds, was:
T: 0 ( 2914) P:90 I:1000 C:  31719 Min:  6 Act:   19 Avg:   18 Max:   51
The Max value was about ten times lower than what I measured with PREEMPT_RT.
And it was far lower than x86 Linux running a standard kernel with Ubuntu 
(the x86 Linux was a virtual machine on ESXi on an Intel NUC). That setup was 
all over the place - especially if I tried opening another terminal to do 
something. With Xenomai, it was stable.
In summary, provided one is willing to code for Xenomai, the jitter 
improvement is large - still nowhere near as good as the PRU or a 
microcontroller of course, but fantastic for Linux.
Also, it seems that the pre-built Machinekit images use PREEMPT_RT, not 
Xenomai :( I've no idea whether Machinekit is coded to support Xenomai; I've 
not really investigated that far yet.
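
For comparison, here is a minimal sketch (not from the post above) of the 
equivalent periodic loop under PREEMPT_RT: a plain POSIX program with a 
SCHED_FIFO priority, sleeping on absolute CLOCK_MONOTONIC deadlines - 
essentially what cyclictest itself measures. No Xenomai headers or libraries 
are involved.

/* Minimal PREEMPT_RT-style periodic loop (illustration only). */
#include <stdio.h>
#include <time.h>
#include <sched.h>
#include <sys/mman.h>

#define PERIOD_NS 1000000L   /* 1 ms period, matching cyclictest -i 1000 */

int main(void)
{
    struct sched_param sp = { .sched_priority = 90 };
    struct timespec next;

    /* lock memory and switch to the SCHED_FIFO real-time class */
    mlockall(MCL_CURRENT | MCL_FUTURE);
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    clock_gettime(CLOCK_MONOTONIC, &next);
    while (1) {
        /* low-jitter work goes here */

        /* advance the absolute deadline by one period and sleep until it */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}

Build with a plain gcc -o rt_loop rt_loop.c (add -lrt on older glibc) and run 
it as root so the SCHED_FIFO request succeeds.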

Installing the pre-built Xenomai kernel:


https://github.com/beagleboard/linux/releases

 

cd /opt/scripts/tools/

git pull

As root user:

./update_kernel.sh --ti-xenomai-channel --lts-4_14

 

as non-root user:

cd development

mkdir xenomi

cd xenomi

wget 
https://xenomai.org/downloads/xenomai/stable/latest/xenomai-3.0.9.tar.bz2

bunzip2 xenomai-3.0.9.tar.bz2

tar xvf xenomai-3.0.9.tar

cd xenomai-3.0.9

./configure --enable-smp CFLAGS="-march=armv7-a -mfpu=vfp3" 
LDFLAGS="-march=armv7-a -mfpu=vfp3"

make

As root user:

make install

 

Testing it:

/usr/xenomai/bin/xeno-test

 


 

cd development/xtest

make -f Makefile-a

as root user:

export LD_LIBRARY_PATH=/usr/lib:/usr/xenomai/lib

./atest



Re: [beagleboard] Which rev of LCD7 cape?

2019-10-15 Thread Mark A. Yoder
I looked on the back, but don't see it.



On Monday, October 14, 2019 at 6:02:56 PM UTC-4, RobertCNelson wrote:
>
> On Mon, Oct 14, 2019 at 4:37 PM Mark A. Yoder wrote: 
> > 
> > I have an LCD7 cape that works fine with my BB Black.  How can I tell 
> which version it is?  A1, A2 or A3. 
> > 
> > --Mark 
> > 
> > https://elinux.org/BeagleBone_LCD7 
>
> It should say on the PCB on the back.  Otherwise trust the EEPROM. 
>
> Regards, 
>
> -- 
> Robert Nelson 
> https://rcn-ee.com/ 
>



Re: [beagleboard] Re: Real Time experience on Beagle?

2019-10-15 Thread walter harms



On 15.10.2019 16:06, shabaz wrote:
> For my experiment, I used this content in my makefile:
> 
> XENO_CONFIG := /usr/xenomai/bin/xeno-config
> 
> CFLAGS := $(shell $(XENO_CONFIG)   --posix --alchemy --cflags)
> LDFLAGS := $(shell $(XENO_CONFIG)  --posix --alchemy --ldflags)
> 

You can simplify your life - just use:

atest:

clean:
	rm -f atest


The %: %.c stuff is a built-in rule, and CC already defaults to your
local compiler.

jm2c,

re,
 wh

> CC := gcc
> EXECUTABLE := atest
> 
> all: $(EXECUTABLE)
> 
> %: %.c
> $(CC) -o $@ $< $(CFLAGS) $(LDFLAGS)
> 
> clean:
> rm -f $(EXECUTABLE)



Re: [beagleboard] Re: Real Time experience on Beagle?

2019-10-15 Thread Adrian Godwin
Can you use Bela (https://bela.io/about)?



Re: [beagleboard] Re: Real Time experience on Beagle?

2019-10-15 Thread shabaz
Thanks for that! : )
I'm terrible at makefiles : ) 



[beagleboard] Re: Real Time experience on Beagle?

2019-10-15 Thread shabaz

By the way,

I've put an oscilloscope trace of a Xenomai'd program on the BBB here in 
case you'd like to show it for your presentation:
https://app.box.com/s/nfwlud613c7zoz7gu6rn9arfttticvvc

For that trace, the BBB is just toggling a GPIO pin repeatedly, in a 
Xenomai'd thread. I left it running for several minutes and the statistics 
that were collected are at the bottom of the screenshot.
The delta between the Max (66.34 usec) and Min (60.76 usec) values 
indicates that jitter was under 6 usec.

The code that produced that was the same code I pasted earlier in this 
thread, but in the real-time thread (i.e. in the helloWorld function there) I 
added some code toggling a GPIO pin, using the I/O library I wrote a while 
back - the updated version is documented here:
https://www.element14.com/community/community/designcenter/single-board-computers/blog/2019/08/15/beaglebone-black-bbb-io-gpio-spi-and-i2c-library-for-c-2019-edition

The code I had in that function was something like:

while(1) {
  pin_high(8,12);    // drive header P8 pin 12 high (pin_high/pin_low come from the library above)
  rt_task_sleep(5);
  pin_low(8,12);     // and drive it low again
  rt_task_sleep(5);
}
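
As a software cross-check of the scope numbers, the Alchemy timer API can 
time each iteration of the same loop. The sketch below is an illustration, 
not code from the post: it keeps the rt_task_sleep() calls, leaves the 
pin_high()/pin_low() calls from the library above as comments, and tracks the 
min/max loop period with rt_timer_read(), which returns the current time in 
nanoseconds.

/* Task body: measure the toggle-loop period in software (illustration). */
#include <stdio.h>
#include <alchemy/task.h>
#include <alchemy/timer.h>   /* rt_timer_read(), RTIME */

void toggle_task(void *arg)
{
    RTIME prev = rt_timer_read();
    RTIME min = ~0ULL, max = 0;
    unsigned long n = 0;

    while (1) {
        /* pin_high(8, 12);   -- GPIO calls from the library linked above */
        rt_task_sleep(5);
        /* pin_low(8, 12); */
        rt_task_sleep(5);

        RTIME now = rt_timer_read();
        RTIME period = now - prev;      /* one full high+low cycle, in ns */
        prev = now;

        if (period < min) min = period;
        if (period > max) max = period;

        /* print rarely - printf can drop the task out of primary mode */
        if (++n % 100000 == 0)
            printf("min %llu ns  max %llu ns  jitter %llu ns\n",
                   (unsigned long long)min, (unsigned long long)max,
                   (unsigned long long)(max - min));
    }
}

Start it with rt_task_create()/rt_task_start() exactly as in the hello-world 
example earlier in the thread; the max-min figure should roughly track the 
scope-measured jitter.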




[beagleboard] Re: BeagleBone AI - Severe thermal issues

2019-10-15 Thread arashed31
There are many more cores in the AM5729 SoC than in a Raspberry Pi SoC, and 
the BeagleBone AI runs hot because all of those cores are enabled at once. If 
you are not using the board connected to a display, then you should disable 
the HDMI peripheral, GPU core, and IVAHD core through the device tree. This 
can be done by setting status = "disabled" on the corresponding nodes. This 
alone will drop your temperatures to about 55C. I also modified cpufreq to 
run at a 1 GHz minimum instead of a 400 MHz minimum and I am still seeing 55C 
idle.
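
The idle figures quoted above come from the kernel's sysfs thermal interface. 
A minimal sketch for reading it (not from the post; thermal_zone0 is an 
assumption - check /sys/class/thermal/thermal_zone*/type to find the CPU zone 
on your image):

/* Print the SoC temperature reported by the sysfs thermal interface. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
    long millideg;   /* sysfs reports the value in millidegrees Celsius */

    if (!f || fscanf(f, "%ld", &millideg) != 1) {
        perror("thermal_zone0");
        return 1;
    }
    fclose(f);
    printf("SoC temperature: %.1f C\n", millideg / 1000.0);
    return 0;
}

The same file can of course just be cat'ed from a shell; the C version is 
only useful if you want to drop it into a monitoring loop.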

Regards,
Ahmad

On Thursday, September 26, 2019 at 4:09:30 PM UTC-5, Michael Zoran wrote:
>
>  I just got my BeagleBone AI today from mouser, and I'm noticing severe 
> thermal issues even when the BeagleBone isn't doing much.
> /sys/class/thermal is reporting over 100C within 10 minutes of the power 
> being connected. 
> At which point the safety features kick in and the power to the device 
> gets shut off.
>
> Is this normal???  Does the BeagleBone AI require a fan to be usable at 
> all?
>
> Also of interest is the free command is showing only 512MB of RAM instead 
> of the 1GB it should. 
> And the /proc file system is showing that a very generic device tree is 
> being used.
>
> This is an AI directly out of the box using only the preinstalled 
> software.   I'm waiting for the serial debug cable to arrive that I ordered 
> today
> before I try reinstalling everything and maybe try lowering the clock 
> speed.
>
> Just thought this would be useful to other people.
>



[beagleboard] Re: BBONE-AI - Operating Temp Min & Max

2019-10-15 Thread arashed31
Please! Disable the GPU and any other graphics accelerators if you are not 
using the board with a display. The GPU, IVA, and HDMI are not needed for 
any machine learning use. Disabling any cores you're not using will help 
temperatures greatly.

On Thursday, October 10, 2019 at 8:29:57 AM UTC-5, M S, Siddeshappa wrote:
>
> Hi Team,
>
>  
>
> Please see the issue below - a customer is asking for the operating 
> temperature min and max, if available, for the BBONE-AI board.
>
> Customer issue: 'Hello Team, the customer cannot find the operating 
> temperature min and max - please, could you provide them? Thank you so much 
> for your help. Peter'



Re: [beagleboard] BeagleBone Black Pin Mode Configuration?

2019-10-15 Thread Ralph Stormer
*UPDATE 2:*

I was able to get my version of the overlay to compile and I believe it 
works, although it is mostly original code that I don't need and am afraid to 
remove. Unfortunately, I cannot test it on my display because the connector I 
had broke :(.

Is there any way I can list the pin functions of each pin to ensure that all 
24 bits are working? I think there is a utility that allows this; however, I 
don't believe I have it installed. Is there perhaps an easier way to do this 
without downloading and installing things?

On Saturday, October 12, 2019 at 3:06:06 PM UTC-4, RobertCNelson wrote:
>
> On Sat, Oct 12, 2019 at 12:38 PM Ralph Stormer wrote: 
> > 
> > Hi Robert, 
> > 
> > I don't understand what you mean by the /bb.org-overlays/. Am I supposed 
> to place my version of the example files into this directory? 
>
> yes, "copy" your version of the file into the same folder the original 
> example lived in, then move down two dirs and run "make".. 
>
> https://github.com/beagleboard/bb.org-overlays 
>
> ./bb.org-overlays/src/arm/(location of dts's) 
> ./bb.org-overlays/ (where you run make from)... 
>
> Regards, 
>
> -- 
> Robert Nelson 
> https://rcn-ee.com/ 
>



Re: [beagleboard] BBONE-AI - Operating Temp Min & Max

2019-10-15 Thread jonnymo
Isn't the purpose of OpenCL to handle parallel processing between the CPU,
GPU, and other related devices? Disabling the GPU seems counterintuitive.
Where are you getting your data that a GPU is not needed for machine
learning? This again seems counterintuitive.

Jon




Re: [beagleboard] BBONE-AI - Operating Temp Min & Max

2019-10-15 Thread Drew Fustini
On Wed, Oct 16, 2019 at 1:57 AM jonnymo wrote:
> Isn't the purpose of OpenCL to handle parallel processing between the CPU,
> GPU, and other related devices? Disabling the GPU seems counterintuitive.
> Where are you getting your data that a GPU is not needed for machine
> learning? This again seems counterintuitive.

The TI Deep Learning (TIDL) API uses OpenCL to interface to the DSP cores and
the EVE (Embedded Vision Engine) subsystems.  Here is an overview:
http://downloads.ti.com/mctools/esd/docs/tidl-api/intro.html

Regards,
Drew



Re: [beagleboard] BBONE-AI - Operating Temp Min & Max

2019-10-15 Thread jonnymo
Yeah, I've seen this, but it was not referenced in the comment. The comment
implied the GPU is not used in any ML implementation, which I don't agree
with.

Jon

