Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Rodrigo Rosenfeld Rosas

Jan Kiszka wrote:


We discussed a lot about how to keep the user from shooting him/herself in
the foot with inter-tick timestamps, but I still think that
rtdm_clock_read_tsc() would be even worse in this regard.
 


What do you think about this documentation:
"This function is meant to be used in periodic mode for getting a 
high-resolution timestamp, independently of the system timer's tick.
Its return values must not be mixed with rtdm_clock_read() values, 
because they are not synchronised.
Driver developers are advised to state this in the driver 
documentation wherever these values are returned to end users, 
to avoid confusion.


Note: for the time being, this function is available on uniprocessor 
systems only."


I think it explains the situation and will not confuse driver developers... 
If someone eventually comes up with a good solution to the synchronisation 
problem between multiple processors, this can be changed...
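The usage rule this proposed documentation describes (TSC values are only meaningful relative to each other, never mixed with rtdm_clock_read() values) can be sketched as follows. Since rtdm_clock_read_tsc() does not exist yet, it is emulated here with the POSIX monotonic clock purely so the sketch compiles and runs outside a Xenomai kernel:

```c
#define _POSIX_C_SOURCE 199309L
#include <assert.h>
#include <stdint.h>
#include <time.h>

/* Stand-in for rtdm_clock_read_tsc(): emulated with the POSIX monotonic
 * clock so this sketch runs outside a Xenomai kernel. In a driver it
 * would be the RTDM service returning TSC-based nanoseconds. */
static uint64_t rtdm_clock_read_tsc(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Correct use per the proposed documentation: only *differences* between
 * rtdm_clock_read_tsc() values are meaningful; never mix them with
 * rtdm_clock_read() values, which are not synchronised with them. */
static uint64_t measure_elapsed_ns(void (*work)(void))
{
    uint64_t start = rtdm_clock_read_tsc();

    work();                               /* section being timed */
    return rtdm_clock_read_tsc() - start; /* relative time only */
}
```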


> ...

Rodrigo.





___ 
Yahoo! doce lar. Faça do Yahoo! sua homepage. 
http://br.yahoo.com/homepageset.html 




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] rt-video interface

2006-03-20 Thread Rodrigo Rosenfeld Rosas
On Monday 20 March 2006 21:24, Jan Kiszka wrote:
>...
>You may want to have a look at this thread regarding poll/select and RT:
>http://www.mail-archive.com/rtnet-users%40lists.sourceforge.net/msg00968.htm

I tried to. Not found. But I didn't give up so quickly. It was missing the 
final 'l':
http://www.mail-archive.com/rtnet-users%40lists.sourceforge.net/msg00968.html

>Do video capturing applications tend to have to observe multiple
>channels asynchronously via a single thread? If so, my statement about
>how often poll/select is actually required in RT-applications may have
>to be reconsidered.

Actually, I don't see any compelling reason for using select/poll in RT 
applications. But, while trying to keep the API similar to V4L2, I would 
implement them as IOCTLs and think that is OK, since it was already done for 
MMAP/MUNMAP. I don't think it is worth writing RT-style poll/select 
functions...

What could be discussed here is whether those calls should be required when 
using streaming (most designs will use streaming). I don't think they should 
be required as they are in V4L2, but they could be implemented optionally, as 
IOCTL calls. I would need to investigate this topic further, though. 
I'll do it tomorrow... I'm the last man in the lab and they are calling me 
out to close the lab...
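A select-like readiness query mapped onto an IOCTL, as suggested above, could look roughly like the following driver-side handler. All names here (RTVID_IOC_SELECT, struct rtvid_device, rtvid_ioctl) are invented for illustration; no such profile exists yet:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical request code and device state: these names do not come
 * from an actual profile, they only illustrate mapping a select-like
 * readiness query onto an IOCTL, as discussed above. */
#define RTVID_IOC_SELECT  0x1001  /* "is a frame ready?" query */

struct rtvid_device {
    unsigned int frames_ready;    /* frames queued for dequeuing */
};

/* Driver-side IOCTL handler sketch: instead of a dedicated RT
 * poll/select service, readiness is reported through an IOCTL. */
static int rtvid_ioctl(struct rtvid_device *dev, unsigned int request,
                       void *arg)
{
    switch (request) {
    case RTVID_IOC_SELECT:
        /* Non-blocking readiness check, select(2)-style 0/1 result. */
        *(int *)arg = (dev->frames_ready > 0);
        return 0;
    default:
        return -ENOTTY;  /* unknown request */
    }
}
```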

>...
>Does your time allow you to list the minimal generic services an RTDM video
>capturing driver has to provide, in a similar fashion to the serial or
>the CAN profile? If it's mostly about copying existing Linux API specs,
>feel free to just reference them. But the differences should certainly
>fill up an RT-video (or so) profile, and that would be great!

I'll think about it and will answer tomorrow.

Regards,

Rodrigo










Re: [Xenomai-core] adding xenomai MLs to Gmane

2006-03-20 Thread Jim Cromie

Jeff Webb wrote:

Jim wrote:

Philippe,

any objection to my requesting that Gmane.org
add the xenomai MLs to their site ? 


This list has already been added.  I found the archive at gmane the 
other day.  In fact, your message has already been archived there!
so it has.  I just had to refresh the nntp list (though that's not the 
search interface). 
thanks






[Xenomai-core] COMEDI over RTDM (was: rtdm_event_timedwait hang-up)

2006-03-20 Thread Jan Kiszka
Hi Alexis,

as this discussion starts to become architectural and low-level, I think
we should move it to xenomai-core and give people who are interested a
chance to jump onto it again there.

Alexis Berlemont wrote:
> Hi,
> 
> I heard rumours of some COMEDI (www.comedi.org) port over RTDM recently,
> don't know the current status.
 Indeed. The rumour is called Alex.
>>> Yes, I know. I just didn't want to drag someone into the open, saying: "Hey
>>> man, already doing your job?" ;)
>> It's part of the anti-shyness treatment I'm administering to Alex...
>>
>>> But, as we all know him now, what is the current status then?
> 
> My comedi port over RTDM is not completely finished (and far from perfect). The 
> mmap functionalities have not been rewritten yet. However, I have been 
> working on
> -> the port of the comedi infrastructure layer (comedi/comedi_fops.c 
> comedi/drivers.c comedi/range.c comedidev.h etc.) 
> -> the rewrite of the driver comedi_test.c (which becomes comedi_test1.c)
> -> other test stuff written in comedi_test2.c (replication functions 
> to test comedi_write operations)
> -> the ports of comedi_config and comedilib (partially done for comedilib), 
> etc.
> 
> This stuff is working (minor bugs apart) and I could give you a version in 
> the next few days (I have to check "make dist" ;) and minor things).
> 
> But there are plenty of things I am not happy with :
> -> the original comedilib version is not really well suited for rtdm. In this 
> library, for many reasons, you can find calls to the malloc/free 

Oops, not so nice.

> functions. If I stick to the original implementation, I either have to ask 
> for alloc support in user mode in the rtdm skin or use another skin to 
> manage allocations. Neither of these solutions seems appealing to me, for many 
> reasons. A lot of people must think I am overdoing it; it is true that the 
> comedilib allocations should be done at init time (comedi_open, comedi_close), 
> so there would be no need to fulfil real-time constraints, but I think comedi 
> should be fully rtdm compliant; this would avoid tricky corner cases for 
> developers/users. The best and simplest solution for me would be some slight 
> modifications to the comedilib API, but I doubt everyone is OK with that.

Could you give some concrete use cases of the comedilib where dynamic
allocation is involved? I don't know that library actually. What does it
manage beyond calling the driver core?

> 
> -> I think the comedi structures organization (comedi_device, subdevice, 
> async, etc.) should be reviewed considering the rtdm architecture. Of course, 
> these modifications should not induce big changes in the comedi drivers 
> source.

Please also give concrete examples here. RTDM devices should be
manageable by the user via file descriptors, just like normal devices.
What is different, what extra information is needed?

> 
> -> etc.
> 
> In fact, I wanted to propose two versions :
> -> a first implementation as close as possible to the original 
> implementation and API.
> -> a second one a bit more adapted to rtdm.

What would be different with RTDM compared to the existing RTLinux and
RTAI support of comedi? Don't they use comedilib at all? Isn't there
some LXRT adoption in RTAI? Is it providing a different API?

> 
> Thus, we could have compared the two versions and seen whether everyone agrees 
> with the idea of adapting the comedi infrastructure. It would have been a good 
> opportunity to work closely with the comedi developer community.

Yes, that would be best. But I guess it will not be too easy, as there
seems to exist a limited interest in RT on their side. Maybe it just
takes some (more) users explaining the requirements and the need. ;)

> 
> Unfortunately, my second version is not finished yet. I still have some 
> non-negligible work left on it. (I know, I know, I am slower than a turtle 
> learning programming on an Amstrad CPC 6128 with damaged floppy disks.) 
> 
> If someone is interested in getting a version right now, I will try to send 
> him a tarball as soon as possible. Compared to the original comedi deliveries, 
> I have not created two autotools tarballs (comedi and comedilib) but only one.

Release early, release often ;). I would offer to have a look, maybe it
will clarify where the RTDM-specific problems are.

Jan





Re: [Xenomai-core] rt-video interface

2006-03-20 Thread Jan Kiszka
Rodrigo Rosenfeld Rosas wrote:
> Hi Jan and others interested.
> 
> I've finally got my driver into a usable condition. It still lacks a lot of 
> functionality, but it meets my needs.
> 
> I would like to propose a real-time video interface for using with RTDM.
> 
> To make it simple to port Linux applications to Xenomai, I tried to make it 
> as close as possible to the Video for Linux 2 API. I didn't see any serious 
> problem in the specification regarding its use in real-time environments. So, 
> the changes I think would be necessary are:
> 
> o Change open/fopen to rtdm_dev_open
> o Implement MMAP/MUNMAP as an IOCTL (it cannot be done in an RT context 
> for the time being, nor should that be necessary)
> o Also implement select and poll as IOCTLs (I didn't implement them in my 
> driver because I didn't need them, but they should be necessary according to 
> the specs)

You may want to have a look at this thread regarding poll/select and RT:

http://www.mail-archive.com/rtnet-users%40lists.sourceforge.net/msg00968.htm

Do video capturing applications tend to have to observe multiple
channels asynchronously via a single thread? If so, my statement about
how often poll/select is actually required in RT-applications may have
to be reconsidered.

> o Change all timeval structs to uint64_t, or some typedef of it, to make it 
> easier to store the timestamps (we use rtdm_clock_read() instead of 
> gettimeofday())
> 
> I can't remember any other issue right now. I think these changes would be enough.
> 
> Any ideas?

Does your time allow you to list the minimal generic services an RTDM video
capturing driver has to provide, in a similar fashion to the serial or
the CAN profile? If it's mostly about copying existing Linux API specs,
feel free to just reference them. But the differences should certainly
fill up an RT-video (or so) profile, and that would be great!

Jan





Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Jan Kiszka
Rodrigo Rosenfeld Rosas wrote:
> On Monday 20 March 2006 13:51, Philippe Gerum wrote:
> 
>> ...
>> I think that you should try convincing Jan that rtdm_clock_tsc() might
>> be a good idea to provide, instead of tweaking rtdm_clock_read() in a
>> way which changes its underlying logic. ;o)
> 
> Yes, that is exactly what I want! :)
> I don't see any reason for changing rtdm_timer_read() either. I think that 
> the most common usage of high-precision timestamps is for relative-time 
> cases. It doesn't need to be in sync with Xenomai's timer... It is best to 
> keep things simple.
> 
> What do you think Jan?

We discussed a lot about how to keep the user from shooting him/herself in
the foot with inter-tick timestamps, but I still think that
rtdm_clock_read_tsc() would be even worse in this regard.

xnarch_get_cpu_tsc() and derived skin services are not supposed to
deliver consistent results across multiple CPUs, are they? While the
user could avoid such scenarios by locking tasks on a specific CPU,
drivers cannot - at least so far. So, to safely introduce such a
low-level service for RTDM, I think we need

A) CPU affinity for RTDM-registered IRQs
B) CPU affinity for RTDM kernel tasks
C) Some well written docs, explaining how to safely use TSCs at driver
level and how to provide them to the user (the latter aspect makes me
worry most)

While A) and B) might be useful for other (though rare) scenarios as
well, C) will still require a very good understanding and interface
design from the driver writer, while I don't see comparable error
dimensions with the improved rtdm_clock_read(). Comparing apples
(rtdm_clock_read()) to oranges (rt_timer_read()), there will be some
error around a tick period. But comparing apple[0]
(rtdm_clock_read_tsc() on CPU#0) to apple[1] (rtdm_clock_read_tsc() on
CPU#1), the error could become *much* larger and the design and
documentation effort to avoid this will be significant.

Ok, as a simple resolution of this problem, I could imagine introducing
a TSC timestamping service to RTDM that always falls back to the level
of accuracy which is guaranteed to be consistent: either because we run
in aperiodic mode, or on a uniprocessor, or thanks to some magic
synchronisation between all CPU clocks. This would have to be decided at
build time, in the first version likely by checking for (multiprocessor
|| aperiodic) to switch to xnpod_get_time().
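The build-time fallback described above could be sketched like this. CONFIG_SMP, CONFIG_XENO_APERIODIC and both backend functions are stand-ins here, not the real Xenomai symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Build-time selection sketch for the fallback service described above.
 * CONFIG_SMP / CONFIG_XENO_APERIODIC and both backends are stand-ins,
 * not the real Xenomai symbols. */
static inline uint64_t xnpod_get_time_stub(void)     { return 1000; } /* tick-based clock */
static inline uint64_t xnarch_get_cpu_tsc_stub(void) { return 1042; } /* raw per-CPU TSC  */

#if defined(CONFIG_SMP) || defined(CONFIG_XENO_APERIODIC)
/* Multiprocessor or aperiodic mode: xnpod_get_time() is the accuracy
 * level guaranteed to be consistent, so fall back to it. */
static uint64_t rtdm_clock_read_safe(void)
{
    return xnpod_get_time_stub();
}
#else
/* Uniprocessor, periodic mode: exposing the raw TSC is safe. */
static uint64_t rtdm_clock_read_safe(void)
{
    return xnarch_get_cpu_tsc_stub();
}
#endif
```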

Jan





Re: [Xenomai-core] CONFIG_XENO_OPT_DEBUG_LEVEL

2006-03-20 Thread Jan Kiszka
Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Hi,
>>
>> as I started to actually use XENO_ASSERT, I noticed that there is no
>> infrastructure yet to enable it. CONFIG_XENO_OPT_DEBUG_LEVEL is nowhere
>> defined. Add this as integer to Kconfig? Or better convert
>> nucleus/assert.h to CONFIG_OPT_DEBUG for now until we really feel like
>> we need more than on/off for this? I would vote for the latter ATM.
> 
> Actually, we already need more than a simple switch: I'd really like the
> queue debugging option to become a level on its own (say, 4294967295?).
> I'll add this tomorrow.

Hmm, this raises my old concern again: such "vertical" debugging implies
switching everything on when you only want queue debugging. I still think
we rather need "horizontal" control: switch on queues, asserts, ...

This level "4294967295" indicates to me where we may end up: dozens
of debug levels no one can tell apart, where you have to switch them all
on to get the relevant pieces or to be sure that you didn't miss
anything. I always have in mind the mess we once had in ndiswrapper: for
serious debugging of the USB layer you had to raise the debug level,
which dragged in bulks of (in this case) useless reports from other
subsystems.

A student (Marc Kleine-Budde) once ported a nice debug subsystem into
an internal project that required a subsystem ID for every debug
statement (I think it came from kaffe). At compilation time, or even
later during runtime, you could easily select which subsystem should
start babbling and checking and which one is not that interesting for a
specific test. I think this is far more useful than debug levels. Queues
could become such a subsystem, RTDM (with its asserts) another, and so
forth. Of course, this means maintaining those subsystem IDs in a
central place, but it's clearer than deciding which level to pick for
new debug code.
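A minimal sketch of such per-subsystem control, with invented subsystem IDs and a runtime bitmask (the GNU `##__VA_ARGS__` extension is assumed, as is common in kernel code):

```c
#include <assert.h>
#include <stdio.h>

/* Sketch of per-subsystem debug control: each debug statement names a
 * subsystem, and a bitmask selects which subsystems may emit output.
 * The subsystem IDs are invented for this example; a real implementation
 * would keep them in one central header. */
enum dbg_subsys {
    DBG_QUEUE = 1 << 0,
    DBG_RTDM  = 1 << 1,
    DBG_TIMER = 1 << 2,
};

static unsigned int dbg_mask;    /* selectable at runtime */
static unsigned int dbg_emitted; /* counts emitted messages */

#define DBG(subsys, fmt, ...)                                   \
    do {                                                        \
        if (dbg_mask & (subsys)) {                              \
            fprintf(stderr, "[%#x] " fmt "\n", (subsys),        \
                    ##__VA_ARGS__);                             \
            dbg_emitted++;                                      \
        }                                                       \
    } while (0)
```

Statements from unselected subsystems stay silent, so enabling queue debugging no longer drags in output from every other subsystem.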

Jan





[Xenomai-core] rt-video interface

2006-03-20 Thread Rodrigo Rosenfeld Rosas
Hi Jan and others interested.

I've finally got my driver into a usable condition. It still lacks a lot of 
functionality, but it meets my needs.

I would like to propose a real-time video interface for using with RTDM.

To make it simple to port Linux applications to Xenomai, I tried to make it 
as close as possible to the Video for Linux 2 API. I didn't see any serious 
problem in the specification regarding its use in real-time environments. So, 
the changes I think would be necessary are:

o Change open/fopen to rtdm_dev_open
o Implement MMAP/MUNMAP as an IOCTL (it cannot be done in an RT context 
for the time being, nor should that be necessary)
o Also implement select and poll as IOCTLs (I didn't implement them in my 
driver because I didn't need them, but they should be necessary according to 
the specs)
o Change all timeval structs to uint64_t, or some typedef of it, to make it 
easier to store the timestamps (we use rtdm_clock_read() instead of 
gettimeofday())

I can't remember any other issue right now. I think these changes would be enough.

Any ideas?

Rodrigo.






[Xenomai-core] Interrupt priorities

2006-03-20 Thread Rodrigo Rosenfeld Rosas
Hi Philippe.

I was wondering if there are any plans to provide an option to Xenomai to 
allow the use of interrupt priorities. I mean, by having the timer source as 
the most prioritary interrupt so that the scheduler could preempt the 
interrupt.

Let me explain why such an option would be good. When I was testing my 
framegrabber driver, I had to reboot my PC about 10 times until I could 
identify what was causing a total freeze of the system. The problem was in 
the interrupt handler. One issue was that I was not clearing the interrupt 
events correctly, so the handler was looping. The other was a crash inside 
the interrupt handler due to something like:

uint64_t timestamp=rtdm_clock_read();
b.timestamp = *((struct timeval *) timestamp);

Where it should be
b.timestamp = *((struct timeval *) &timestamp);

I forgot the '&' char.
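For the record, even with the '&' in place, the cast only reinterprets the bytes of the 64-bit value as a struct timeval, which does not yield a valid timeval. An explicit conversion (a sketch, assuming the timestamp is a nanosecond count as rtdm_clock_read() returns) avoids both the crash and the misinterpretation:

```c
#include <assert.h>
#include <stdint.h>
#include <sys/time.h>

/* Explicit conversion of a nanosecond timestamp (as returned by
 * rtdm_clock_read()) into a struct timeval. Safer than casting the
 * address of the 64-bit value, which merely reinterprets its bytes. */
static struct timeval ns_to_timeval(uint64_t ns)
{
    struct timeval tv;

    tv.tv_sec  = (time_t)(ns / 1000000000ULL);
    tv.tv_usec = (suseconds_t)((ns % 1000000000ULL) / 1000ULL);
    return tv;
}
```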

If it were possible to preempt the interrupt handler with a task of higher 
priority, I could write a watchdog that would disable the interrupt in case 
the system stops responding.

Do you think it would be worth providing such an option in Xenomai, at least 
as a debug feature?

Best Regards,

Rodrigo.

P.S.: BTW, I think my driver is usable now, so you (or someone else) could 
cite it in some article or documentation. When I finish my master's thesis 
(my deadline is the end of June), I think I'll have time to comment it better 
and to remove the Data Translation specific code, so that it can be made 
available as a template for real-time video interfaces. Or maybe you could 
convince Data Translation to allow me to show all the code! ;)






Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Rodrigo Rosenfeld Rosas
On Monday 20 March 2006 13:51, Philippe Gerum wrote:

>...
>I think that you should try convincing Jan that rtdm_clock_tsc() might
>be a good idea to provide, instead of tweaking rtdm_clock_read() in a
>way which changes its underlying logic. ;o)

Yes, that is exactly what I want! :)
I don't see any reason for changing rtdm_timer_read() either. I think that 
the most common usage of high-precision timestamps is for relative-time 
cases. It doesn't need to be in sync with Xenomai's timer... It is best to 
keep things simple.

What do you think Jan?

P.S: Sorry for the last message, Philippe. I didn't see that one at the time.






Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Rodrigo Rosenfeld Rosas
On Monday 20 March 2006 12:23, Philippe Gerum wrote:

>...
>It's not a matter of dealing with users always doing The Right Thing,
>but preferably preventing people from doing the wrong one.

But we then have two problems, and there are tradeoffs here. On the one hand, 
we want to keep users from making mistakes. On the other hand, we want to 
provide a way of solving the issue that started this thread.

A solution for the latter would be a function that returns a high-precision 
timestamp, not necessarily in sync with Xenomai's timer, since it would be 
used for relative-time calculations, but in sync between multiple CPUs. This 
solution, however, would raise the possibility of a user doing the wrong 
thing. Of course, the function should be well documented and state the lack 
of sync where that is the case. So, we have to choose between making such 
designs possible (as in the example I gave in my last message) or keeping 
people from doing the wrong thing. I would choose the first option, since I 
think all RT programmers are (or should be) smarter than the average 
programmer. They must pay attention when doing RT programming, so reading 
the documentation and understanding it should not be a hard task for them... 
If, on the other hand, the second approach were chosen, a user wanting to 
use an RT video interface would be forced to use the aperiodic timer to get 
reliable timestamps...

Rodrigo.






Re: [Xenomai-core] yet another test tool

2006-03-20 Thread Philippe Gerum

Gilles Chanteperdrix wrote:

Jan Kiszka wrote:
 > As a first step, I would vote for establishing that generic service to
 > redirect the userspace return path to some arbitrary handler in hard-RT
 > context. Then we can think about how to handle signal injection from
 > Linux vs. injection from Xenomai gracefully.

Ok, the first idea was a bad idea. Will try the second, or the
alternative proposed by Philippe.

 > >  > What do you think, is it worth including as a POSIX counterpart for
 > >  > testsuite/latency?
 > >
 > > There are a few details that I do not like about this tool, but we may
 > > take it, and fix the details later.
 >
 > It's a real hack, isn't it ;)? But what precisely do you mean?


What I dislike most is the lack of return value checks. This may work
well with Linux, but not with Xenomai. I would also prefer a clean
shutdown using pthread_cancel; after all, sigwait and *nanosleep are
cancellation points, so it should work even without resorting to cleanup
handlers.



The current implementation is one thing (we could fix it); the purpose 
of the tool is another, and it is actually the latter which seems 
useful to me. By sharing some common tests between native preemption and 
real-time sub-systems like Xeno, we would make performance comparisons 
more relevant. Additionally, I'm convinced that the POSIX skin is an 
underutilized goodie, even though it works damn well and obviously favours a 
close integration within the Linux environment, simply because there is 
a lack of explanation about it. Illustrating how one could leverage it 
is always a good thing.


--

Philippe.



Re: [Xenomai-core] CONFIG_XENO_OPT_DEBUG_LEVEL

2006-03-20 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

as I started to actually use XENO_ASSERT, I noticed that there is no
infrastructure yet to enable it. CONFIG_XENO_OPT_DEBUG_LEVEL is nowhere
defined. Add this as integer to Kconfig? Or better convert
nucleus/assert.h to CONFIG_OPT_DEBUG for now until we really feel like
we need more than on/off for this? I would vote for the latter ATM.


Actually, we already need more than a simple switch: I'd really like the 
queue debugging option to become a level on its own (say, 4294967295?). 
I'll add this tomorrow.




Jan








--

Philippe.



Re: [Xenomai-core] adding xenomai MLs to Gmane

2006-03-20 Thread Jeff Webb

Jim wrote:

Philippe,

any objection to my requesting that Gmane.org
add the xenomai MLs to their site ?
They will probably want to hear from you
on this, I'll forward your response if ok.


This list has already been added.  I found the archive at gmane the other day.  
In fact, your message has already been archived there!

http://www.mail-archive.com/xenomai-core@gna.org/msg01146.html
http://dir.gmane.org/gmane.linux.real-time.xenomai.devel

-Jeff



[Xenomai-core] adding xenomai MLs to Gmane

2006-03-20 Thread Jim

Philippe,

any objection to my requesting that Gmane.org
add the xenomai MLs to their site ?
They will probably want to hear from you
on this, I'll forward your response if ok.

They have a search facility that would help
folks find out for themselves the status
of support for ARM or PXA255, etc.



Re: [Xenomai-core] yet another test tool

2006-03-20 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > As a first step, I would vote for establishing that generic service to
 > redirect the userspace return path to some arbitrary handler in hard-RT
 > context. Then we can think about how to handle signal injection from
 > Linux vs. injection from Xenomai gracefully.

Ok, the first idea was a bad idea. Will try the second, or the
alternative proposed by Philippe.

 > 
 > > 
 > >  > 
 > >  > What do you think, is it worth including as a POSIX counterpart for
 > >  > testsuite/latency?
 > > 
 > > There are a few details that I do not like about this tool, but we may
 > > take it, and fix the details later.
 > 
 > It's a real hack, isn't it ;)? But what precisely do you mean?

What I dislike most is the lack of return value checks. This may work
well with Linux, but not with Xenomai. I would also prefer a clean
shutdown using pthread_cancel; after all, sigwait and *nanosleep are
cancellation points, so it should work even without resorting to cleanup
handlers.
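That clean shutdown can be sketched with plain POSIX threads. The loop body is hypothetical; the point is that nanosleep being a cancellation point is what makes pthread_cancel sufficient, with no shutdown flag or cleanup handlers:

```c
#define _POSIX_C_SOURCE 199309L
#include <assert.h>
#include <pthread.h>
#include <time.h>

/* Hypothetical measurement loop: nanosleep() is a POSIX cancellation
 * point, so pthread_cancel() ends the loop cleanly with no shutdown
 * flag and no cleanup handlers, as suggested above. */
static void *sampling_loop(void *arg)
{
    struct timespec period = { 0, 1000000 }; /* 1 ms period */

    (void)arg;
    for (;;)
        nanosleep(&period, NULL); /* thread is cancelled here */

    return NULL; /* not reached */
}
```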

-- 


Gilles Chanteperdrix.



[Xenomai-core] CONFIG_XENO_OPT_DEBUG_LEVEL

2006-03-20 Thread Jan Kiszka
Hi,

as I started to actually use XENO_ASSERT, I noticed that there is no
infrastructure yet to enable it. CONFIG_XENO_OPT_DEBUG_LEVEL is nowhere
defined. Add this as integer to Kconfig? Or better convert
nucleus/assert.h to CONFIG_OPT_DEBUG for now until we really feel like
we need more than on/off for this? I would vote for the latter ATM.

Jan





Re: [Xenomai-core] yet another test tool

2006-03-20 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Gilles Chanteperdrix wrote:


Jan Kiszka wrote:
> (...)As Xenomai does not support hard-RT signal delivery yet (...)

This is the next feature missing from the POSIX skin. I would like to
implement it, but I am not sure which way to go:
- either, if it is possible, getting the Linux signal services to run in
 every domain at the Adeos level, by replacing spinlocks with spinlocks_hw
 and that kind of trick;


Would be a nightmare, I think. Way too many paths are involved in the
vanilla kernel, and this would be overkill wrt what we want to do.
Actually, what we need is basically exposing the nucleus signal
interface to user-space, and map Linux RT signals over nucleus signals.
Other (non-RT) Linux signals would keep on being handled in secondary
mode the way they are now.



- or adding a generic service at the adeos layer (a hook called when
 returning to user-space), building a generic user-space signals
 service at the nucleus level, and finally building all posix signals
 services on top of this.


A (maybe easier) third option would be to generalize some kind of
pseudo-asynchronous call support, with a worker thread operating on a
dedicated priority level inside applications registering for
asynchronous notifications. The kernel-side would handle the server
wakeups, providing a unified interface for pending on hooks, signals,
watchdogs etc. It would also need to properly multiplex those events
notified from within the skins, and wake up the right pending server in
user-space, which would in turn fire the user provided handler, all in
primary mode. In any case, this would not be more costly latency-wise
than implementing mere callouts, since most of the switching cost comes
from the MMU switch, which we would have to do in both cases, anyway.



We would need a "shadow" priority level for each real one so that those
handlers do not cause any priority inversions (the main RT issue of
servers).


Not if the worker threads inherit the priority of the target threads 
dynamically.


 Moreover, it would require a bulk of extra threads, actually

one per used prio level, to handle all those calls with the correct
priority. My feeling: too costly, memory-wise.


You would just need one worker thread per-application and per-cpu. Async 
calls are always serialized, at least on a given CPU. The idea is to 
provide some event multiplexing capabilities to the kernel-side, instead 
of implementing a worker for each and every pended event.


The only thing that would differ from a real signal handling is the 
actual backtrace of the handler.


Another option without worker thread would be to implement the same kind 
of pending event detection the kernel does on the return path from 
Xenomai syscalls using some reserved CPU register, and ask the kernel 
side about the callout to fire and the argument to pass the handler 
whenever some pending event is detected.




Jan




--

Philippe.



Re: [Xenomai-core] yet another test tool

2006-03-20 Thread Jan Kiszka
Philippe Gerum wrote:
> Gilles Chanteperdrix wrote:
>> Jan Kiszka wrote:
>>  > (...)As Xenomai does not support hard-RT signal delivery yet (...)
>>
>> This is the next feature missing from the POSIX skin. I would like to
>> implement it, but I am not sure which way to go:
>> - either, if it is possible, getting the Linux signal services to run in
>>   every domain at the Adeos level, by replacing spinlocks with spinlocks_hw
>>   and that kind of trick;
> 
> Would be a nightmare, I think. Way too many paths are involved in the
> vanilla kernel, and this would be overkill wrt what we want to do.
> Actually, what we need is basically exposing the nucleus signal
> interface to user-space, and map Linux RT signals over nucleus signals.
> Other (non-RT) Linux signals would keep on being handled in secondary
> mode the way they are now.
> 
>> - or adding a generic service at the adeos layer (a hook called when
>>   returning to user-space), building a generic user-space signals
>>   service at the nucleus level, and finally building all posix signals
>>   services on top of this.
> 
> A (maybe easier) third option would be to generalize some kind of
> pseudo-asynchronous call support, with a worker thread operating on a
> dedicated priority level inside applications registering for
> asynchronous notifications. The kernel-side would handle the server
> wakeups, providing a unified interface for pending on hooks, signals,
> watchdogs etc. It would also need to properly multiplex those events
> notified from within the skins, and wake up the right pending server in
> user-space, which would in turn fire the user provided handler, all in
> primary mode. In any case, this would not be more costly latency-wise
> than implementing mere callouts, since most of the switching cost comes
> from the MMU switch, which we would have to do in both cases, anyway.

We would need a "shadow" priority level for each real one so that those
handlers do not cause any priority inversions (the main RT issue of
servers). Moreover, it would require a bulk of extra threads, actually
one per used prio level, to handle all those calls with the correct
priority. My feeling: too costly, memory-wise.

Jan



signature.asc
Description: OpenPGP digital signature
___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] yet another test tool

2006-03-20 Thread Philippe Gerum

Gilles Chanteperdrix wrote:

Jan Kiszka wrote:
 > (...)As Xenomai does not support hard-RT signal delivery yet (...)

This is the next feature missing to the POSIX skin. I would like to
implement this, but I am not sure which way to go :
- either, if it is possible, getting Linux signals services to run in
  every domain at Adeos level, by replacing spinlocks with spinlocks_hw
  and such kind of tricks;


Would be a nightmare, I think. Way too many paths are involved in the 
vanilla kernel, and this would be overkill wrt what we want to do. 
Actually, what we need is basically exposing the nucleus signal 
interface to user-space, and map Linux RT signals over nucleus signals. 
Other (non-RT) Linux signals would keep on being handled in secondary 
mode the way they are now.



- or adding a generic service at the adeos layer (a hook called when
  returning to user-space), building a generic user-space signals
  service at the nucleus level, and finally building all posix signals
  services on top of this.


A (maybe easier) third option would be to generalize some kind of 
pseudo-asynchronous call support, with a worker thread operating on a 
dedicated priority level inside applications registering for 
asynchronous notifications. The kernel-side would handle the server 
wakeups, providing a unified interface for pending on hooks, signals, 
watchdogs etc. It would also need to properly multiplex those events 
notified from within the skins, and wake up the right pending server in 
user-space, which would in turn fire the user provided handler, all in 
primary mode. In any case, this would not be more costly latency-wise 
than implementing mere callouts, since most of the switching cost comes 
from the MMU switch, which we would have to do in both cases, anyway.




The first approach guarantees the best integration with Linux, but
potentially adds sections in Linux that are not preemptible by any
Xenomai skin. With the second approach, all services related to signals
have to be reimplemented, plus some shortcuts to keep standard user-space
tools such as "kill" working.


 > 
 > What do you think, is it worth including as a POSIX counterpart for

 > testsuite/latency?

There are a few details that I do not like about this tool, but we may
take it, and fix the details later.





--

Philippe.



Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Philippe Gerum

Rodrigo Rosenfeld Rosas wrote:

Philippe Gerum wrote:


...
Given the description above, just that some skin might return either
nucleus ticks or corrected timestamps to the applications, which would
in turn do some arithmetics for converting values they got from the skin
between both scales internally, and mistakenly use the result while
assuming that both scales are always in sync. In this situation, and
during a fraction of the time (i.e. the jitter), both scales might not
be in sync, and the result would be unusable.



But I still can't find a real situation where the user would need these values 
to be in sync...




...
But maybe we are still discussing different issues actually, so it would
be useful that the core issue that triggered the discussion about
periodic mode precision be exposed again.



The core issue is this:
I think driver development should be largely independent from the user 
programs. That said, if the driver needs a precise timestamp, it should be 
able to get one, even if the user is fine with, say, a 100 ms periodic tick. 
If the user has a loop with a 100 ms deadline, and that loop takes two 
consecutive images to estimate a speed, the user will need higher-precision 
timestamps for the images. So the driver will need a high-precision timer 
reading to make it possible to provide those 
timestamps...




This is exactly what rt_timer_tsc() or clock_gettime(CLOCK_MONOTONIC) 
[or whatever ends up calling xnarch_get_cpu_tsc() from the underlying 
architecture] are there for.


The thing is that rt_timer_read() is expected to return values 
compatible with the timer mode, always. But rt_timer_tsc() is there to 
return the most precise timestamp available from the underlying 
architecture, regardless of the current timing mode. If no TSC exists on 
x86, then it is going to be emulated (using the PIT's channel #2), but 
in any case, you will get a high precision timestamp, up to the best 
precision the architecture can provide, that is.


The key issue is to acknowledge the fact that periodic ticks and precise 
timestamps are two _unrelated_ units. What I'm reluctant to do is try to 
find some artificial binding between both units, because there is no 
stable one.


In your example above, you would be able to estimate the elapsed time 
using something like rt_timer_tsc(), and convert this to ticks using 
rt_timer_ns2ticks(rt_timer_tsc2ns(timestamp)). The main problems would be 
the rounding here, and working around the lack of precision the periodic 
mode currently exhibits due to the constant delay between timer shots.


I think that you should try convincing Jan that rtdm_clock_tsc() might 
be a good idea to provide, instead of tweaking rtdm_clock_read() in a 
way which changes its underlying logic. ;o)




I hope that was what you were asking for...

Regards,

Rodrigo.






--

Philippe.



Re: [Xenomai-core] yet another test tool

2006-03-20 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>  > (...)As Xenomai does not support hard-RT signal delivery yet (...)
> 
> This is the next feature missing to the POSIX skin. I would like to
> implement this, but I am not sure which way to go :
> - either, if it is possible, getting Linux signals services to run in
>   every domain at Adeos level, by replacing spinlocks with spinlocks_hw
>   and such kind of tricks;
> - or adding a generic service at the adeos layer (a hook called when
>   returning to user-space), building a generic user-space signals
>   service at the nucleus level, and finally building all posix signals
>   services on top of this.
> 
> The first approach guarantees the best integration with Linux, but
> potentially adds sections in Linux that are not preemptible by any
> Xenomai skin. With the second approach, all services related to signals
> have to be reimplemented, plus some shortcuts to keep standard user-space
> tools such as "kill" working.
> 

The preemptibility of the Linux signal code path heavily depends on the
tasklist_lock, and that was still an issue even for PREEMPT_RT with all
their modifications the last time I checked their code more thoroughly
(~2 months ago). Meanwhile, things may have improved for that tree, but
I doubt that this is already in mainline, not to speak of older 2.6 or
even 2.4 kernels.

As a first step, I would vote for establishing that generic service to
redirect the userspace return path to some arbitrary handler in hard-RT
context. Then we can think about how to handle signal injection from
Linux vs. injection from Xenomai gracefully.

> 
>  > 
>  > What do you think, is it worth including as a POSIX counterpart for
>  > testsuite/latency?
> 
> There are a few details that I do not like about this tool, but we may
> take it, and fix the details later.

It's a real hack, isn't it ;)? But what precisely do you mean?

Jan





Re: [Xenomai-core] yet another test tool

2006-03-20 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > (...)As Xenomai does not support hard-RT signal delivery yet (...)

This is the next feature missing to the POSIX skin. I would like to
implement this, but I am not sure which way to go :
- either, if it is possible, getting Linux signals services to run in
  every domain at Adeos level, by replacing spinlocks with spinlocks_hw
  and such kind of tricks;
- or adding a generic service at the adeos layer (a hook called when
  returning to user-space), building a generic user-space signals
  service at the nucleus level, and finally building all posix signals
  services on top of this.

The first approach guarantees the best integration with Linux, but
potentially adds sections in Linux that are not preemptible by any
Xenomai skin. With the second approach, all services related to signals
have to be reimplemented, plus some shortcuts to keep standard user-space
tools such as "kill" working.


 > 
 > What do you think, is it worth including as a POSIX counterpart for
 > testsuite/latency?

There are a few details that I do not like about this tool, but we may
take it, and fix the details later.


-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Rodrigo Rosenfeld Rosas
Philippe Gerum wrote:
>...
>Given the description above, just that some skin might return either
>nucleus ticks or corrected timestamps to the applications, which would
>in turn do some arithmetics for converting values they got from the skin
>between both scales internally, and mistakenly use the result while
>assuming that both scales are always in sync. In this situation, and
>during a fraction of the time (i.e. the jitter), both scales might not
>be in sync, and the result would be unusable.

But I still can't find a real situation where the user would need these values 
to be in sync...

>...
>But maybe we are still discussing different issues actually, so it would
>be useful that the core issue that triggered the discussion about
>periodic mode precision be exposed again.

The core issue is this:
I think driver development should be largely independent from the user 
programs. That said, if the driver needs a precise timestamp, it should be 
able to get one, even if the user is fine with, say, a 100 ms periodic tick. 
If the user has a loop with a 100 ms deadline, and that loop takes two 
consecutive images to estimate a speed, the user will need higher-precision 
timestamps for the images. So the driver will need a high-precision timer 
reading to make it possible to provide those 
timestamps...

I hope that was what you were asking for...

Regards,

Rodrigo.




Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Philippe Gerum

Rodrigo Rosenfeld Rosas wrote:

Philippe Gerum wrote:


...
Given the description above, just that some skin might return either
nucleus ticks or corrected timestamps to the applications, which would
in turn do some arithmetics for converting values they got from the skin
between both scales internally, and mistakenly use the result while
assuming that both scales are always in sync. In this situation, and
during a fraction of the time (i.e. the jitter), both scales might not
be in sync, and the result would be unusable.



But I still can't find a real situation where the user would need these values 
to be in sync...


It's not a matter of dealing with users always doing The Right Thing, 
but preferably preventing people from doing the wrong one.


--

Philippe.



Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Jan Kiszka
Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>
>>> ...
>>> The issue that worries me - provided that we bound the adjustment offset
>>> to the duration of one tick after some jitter - is that any attempt to
>>> get intra-tick precision would lead to a possible discrepancy regarding
>>> the elapsed time according to those two different scales, between the
>>> actual count of jiffies tracked by the timer ISR on the timekeeper CPU,
>>> and the corrected time value returned by rtdm_read_clock. And this
>>> discrepancy would last for the total duration of the jitter. E.g., for a
>>> 100 us period, xnpod_get_time() could return 2 albeit rtdm_read_clock
>>> returns 300, instead of 200. Spuriously mixing both units in
>>> applications would lead to some funky chaos.
>>>
>>
>>
>> Trying to pick up this thread again, I just tried to understand your
>> concerns, but failed so far to imagine a concrete scenario. Could you
>> sketch such a "funky chaotic" situation from the application point of
>> view?
> 
> Given the description above, just that some skin might return either
> nucleus ticks or corrected timestamps to the applications, which would
> in turn do some arithmetics for converting values they got from the skin
> between both scales internally, and mistakenly use the result while
> assuming that both scales are always in sync. In this situation, and
> during a fraction of the time (i.e. the jitter), both scales might not
> be in sync, and the result would be unusable. This said, this kind of
> issue could be solved by big fat warnings in documentation, explicitly
> saying that conversions between both scales might be meaningless.

So the worst case is when a user derives relative times from two
different time sources, one purely tick-based, the other improved by the
inter-tick TSC (when available on that arch)?

Let's say the user takes timestamp t1 = rtdm_clock_read() (via some
driver) and a bit later t2 = rt_timer_tick2ns(rt_timer_read()). t1 was
set to the last tick Tn + the number of TSC ticks since then:

t1 = Tn * tick_period + TSC_offset

With, e.g., Tn=1001, tick_period = 1000 us, and TSC_offset = 589 us:

t1 = 1001 * 1000 us + 589 us = 1001569 us

As the next tick may not have struck yet when t2 is taken, that value
converted to us can be smaller:

t2 = Tn * tick_period = 1001000

Now the difference between t2 and t1 becomes negative (-589 us),
although the user may expect t2 - t1 >= 0. Is this non-monotonicity your
concern?


On the other hand, the advantage of TSC-based synchronised inter-tick
timestamps is that you can do things like

sleep_until(rt_timer_ns2ticks(rtdm_clock_read() + 100))

without risking an error beyond +/- 1 tick (+jitter). With current
jiffies vs. TSC in periodic mode, this is not easily possible. You have
to sync in the application, creating another error source when the delay
between acquiring the TSC and sync'ing the TSC on jiffies is too long.

> 
>> And what would prevent us from improving the accuracy of other
>> timestamping API functions beyond RTDM as well, e.g. on converting from
>> ticks to nanos in rt_timer_ticks2ns()?
>>
> 
> I don't understand why rt_timer_ticks2ns() should be impacted by such
> extension. This service must keep a constant behaviour, regardless of
> any outstanding timing issue. I mean, 3 ticks from a 1 kHz clock rate
> must always return 3,000,000 nanos, unless you stop passing counts of
> ticks and pass fractional/compound values instead.

Forget about this, it was (pre-lunch) nonsense.

> 
> The bottom-line is that we should not blur the line between periodic and
> aperiodic timing modes, just for getting precise timestamps in the
> former case. Additionally, and x86-wise, when no TSC is available on the
> target system, rt_timer_tsc() already returns a timestamp obtained from
> the 8254's channel #2 we use as a free running counter, which is the
> most precise source we have at hand to do so.
> 
> Periodic mode bears its own limitation, which is basically a loss of
> accuracy we trade against a lower overhead (even if that does not mean
> much except perhaps on x86). What we could do is reduce the jitter
> involved in periodic ticks, by always emulating periodic mode over
> aperiodic shots instead of using e.g. the 8254 in PIT mode (and remove
> the need for the double scale on x86, tsc + 8254 channel #1), but not
> change the basic meaning of periodic timing.

Hmm, interesting, and it also reminds me of a long-pending (slightly OT)
question I have: why not create the infrastructure (a dedicated periodic
timer) to provide round-robin scheduling even in aperiodic mode?

> 
> But maybe we are still discussing different issues actually, so it would
> be useful that the core issue that triggered the discussion about
> periodic mode precision be exposed again.

Yep, Rodrigo...?

Jan




Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


...
The issue that worries me - provided that we bound the adjustment offset
to the duration of one tick after some jitter - is that any attempt to
get intra-tick precision would lead to a possible discrepancy regarding
the elapsed time according to those two different scales, between the
actual count of jiffies tracked by the timer ISR on the timekeeper CPU,
and the corrected time value returned by rtdm_read_clock. And this
discrepancy would last for the total duration of the jitter. E.g., for a
100 us period, xnpod_get_time() could return 2 albeit rtdm_read_clock
returns 300, instead of 200. Spuriously mixing both units in
applications would lead to some funky chaos.




Trying to pick up this thread again, I just tried to understand your
concerns, but failed so far to imagine a concrete scenario. Could you
sketch such a "funky chaotic" situation from the application point of
view?


Given the description above, just that some skin might return either 
nucleus ticks or corrected timestamps to the applications, which would 
in turn do some arithmetics for converting values they got from the skin 
between both scales internally, and mistakenly use the result while 
assuming that both scales are always in sync. In this situation, and 
during a fraction of the time (i.e. the jitter), both scales might not 
be in sync, and the result would be unusable. This said, this kind of 
issue could be solved by big fat warnings in documentation, explicitly 
saying that conversions between both scales might be meaningless.


And what would prevent us from improving the accuracy of other

timestamping API functions beyond RTDM as well, e.g. on converting from
ticks to nanos in rt_timer_ticks2ns()?



I don't understand why rt_timer_ticks2ns() should be impacted by such 
extension. This service must keep a constant behaviour, regardless of 
any outstanding timing issue. I mean, 3 ticks from a 1 kHz clock rate 
must always return 3,000,000 nanos, unless you stop passing counts of 
ticks and pass fractional/compound values instead.


The bottom-line is that we should not blur the line between periodic and 
aperiodic timing modes, just for getting precise timestamps in the 
former case. Additionally, and x86-wise, when no TSC is available on the 
target system, rt_timer_tsc() already returns a timestamp obtained from 
the 8254's channel #2 we use as a free running counter, which is the 
most precise source we have at hand to do so.


Periodic mode bears its own limitation, which is basically a loss of 
accuracy we trade against a lower overhead (even if that does not mean 
much except perhaps on x86). What we could do is reduce the jitter 
involved in periodic ticks, by always emulating periodic mode over 
aperiodic shots instead of using e.g. the 8254 in PIT mode (and remove 
the need for the double scale on x86, tsc + 8254 channel #1), but not 
change the basic meaning of periodic timing.


But maybe we are still discussing different issues actually, so it would 
be useful that the core issue that triggered the discussion about 
periodic mode precision be exposed again.



--

Philippe.



[Xenomai-core] Re: 2.6.16-rc6 support

2006-03-20 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > and a fix for a name collision in
 > posix/syscall.c (there are other mutex functions in the kernel now...).

Applied, thanks.

 > PS: Gilles, as a tiny cleanup, I would suggest converting all syscall
 > wrapper functions in posix/syscall.c into static ones. They are only
 > used in that file.

Done.

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-20 Thread Jan Kiszka
Philippe Gerum wrote:
> ...
> The issue that worries me - provided that we bound the adjustment offset
> to the duration of one tick after some jitter - is that any attempt to
> get intra-tick precision would lead to a possible discrepancy regarding
> the elapsed time according to those two different scales, between the
> actual count of jiffies tracked by the timer ISR on the timekeeper CPU,
> and the corrected time value returned by rtdm_read_clock. And this
> discrepancy would last for the total duration of the jitter. E.g., for a
> 100 us period, xnpod_get_time() could return 2 albeit rtdm_read_clock
> returns 300, instead of 200. Spuriously mixing both units in
> applications would lead to some funky chaos.
> 

Trying to pick up this thread again, I just tried to understand your
concerns, but failed so far to imagine a concrete scenario. Could you
sketch such a "funky chaotic" situation from the application point of
view? And what would prevent us from improving the accuracy of other
timestamping API functions beyond RTDM as well, e.g. on converting from
ticks to nanos in rt_timer_ticks2ns()?

Jan





Re: [Xenomai-core] 2.6.16-rc6 support

2006-03-20 Thread Jan Kiszka
Jan Kiszka wrote:
> Hi,
> 
> Jeroen reported success much earlier, now I had to rebase my box on
> upcoming 2.6.16. So I would like to post the result, maybe others are
> interested in starting an early test as well. Attached is a cleanly
> applying Ipipe patch for -rc6 and a fix for a name collision in
> posix/syscall.c (there are other mutex functions in the kernel now...).
> 

Oops, someone must have whispered to me that 2.6.16 would come out very
soon, and now it has happened. The final release seems to cause no
problems for the ipipe patch either.

Jan





[Xenomai-core] Xenomai v2.0.4

2006-03-20 Thread Philippe Gerum


This is the fourth maintenance release of the 2.0 branch. Most of the 
fixes merged since 2.0.3 have been backported from 2.1:

- Generic core fix (domain migration lockup)
- x86 fix (I/O bitmap owner for 2.6.15 and above)
- ia64 fix (signal receipt handling)
- Native API fixes (mutex lock count leakage, event mask update, 
reported heap size)
- Adeos support upgrade for all architectures

http://download.gna.org/xenomai/stable/xenomai-2.0.4.tar.bz2

--

Philippe.



Re: [Xenomai-core] yet another test tool

2006-03-20 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

as I already mentioned, I experimented with the cyclictest-0.5 by Thomas
Gleixner (http://www.tglx.de/projects/misc/cyclictest), one of the
PREEMPT_RT developers. The attached patch fixes the scheduling policy
setup and locks the whole test into memory.

This tool is quite handy for running more than one timed thread, and for
basic testing of the POSIX skin. As Xenomai does not support hard-RT
signal delivery yet, the only relevant mode is -n, i.e. delaying via
clock_nanosleep. From the bugs I fixed I would say that not every
feature may work yet, but running with -n, -p 99 (highest priority
used), and -t 10 (create e.g. 10 cascading threads) looks fine to me.

What do you think, is it worth including as a POSIX counterpart for
testsuite/latency?


Yes indeed, e.g. testsuite/cyclic.



Jan




--- cyclictest.c.orig   2005-11-24 13:33:21.0 +0100
+++ cyclictest.c        2006-03-17 10:50:26.0 +0100
@@ -24,12 +24,14 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
 #include 
 #include 
 #include 
+#include 
 
 /* Ugly, but  */
 #define gettid() syscall(__NR_gettid)
@@ -158,7 +160,7 @@ void *timerthread(void *param)
 
 	memset(&schedp, 0, sizeof(schedp));
 	schedp.sched_priority = par->prio;
-	sched_setscheduler(0, policy, &schedp);
+	pthread_setschedparam(pthread_self(), policy, &schedp);
 
 	/* Get current time */
 	clock_gettime(par->clock, &now);
@@ -265,7 +267,7 @@ out:
 
 	/* switch to normal */
 	schedp.sched_priority = 0;
-	sched_setscheduler(0, SCHED_OTHER, &schedp);
+	pthread_setschedparam(pthread_self(), SCHED_OTHER, &schedp);
 
 	stat->threadstarted = -1;
 
@@ -396,6 +398,7 @@ int main(int argc, char **argv)
 	int mode;
 	struct thread_param *par;
 	struct thread_stat *stat;
+	pthread_attr_t thattr;
 	int i, ret = -1;
 
 	if (geteuid()) {
@@ -403,6 +406,8 @@
 		exit(-1);
 	}
 
+	mlockall(MCL_CURRENT | MCL_FUTURE);
+
 	process_options(argc, argv);
 
 	mode = use_nanosleep + use_system;
@@ -442,7 +447,9 @@
 		par[i].stats = &stat[i];
 		stat[i].min = 100;
 		stat[i].max = -100;
-		pthread_create(&stat[i].thread, NULL, timerthread, &par[i]);
+		pthread_attr_init(&thattr);
+		pthread_attr_setstacksize(&thattr, PTHREAD_STACK_MIN);
+		pthread_create(&stat[i].thread, &thattr, timerthread, &par[i]);
 		stat[i].threadstarted = 1;
 	}
 







--

Philippe.
