Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-24 Thread Jan Kiszka
Philippe Gerum wrote:
 Jan Kiszka wrote:
 ...
 On the other hand, the advantage of TSC-based synchronised inter-tick
 timestamps is that you can do things like

 sleep_until(rt_timer_ns2ticks(rtdm_clock_read() + 100))

 without risking an error beyond +/- 1 tick (+ jitter). With the current
 jiffies vs. TSC split in periodic mode, this is not easily possible. You
 have to sync in the application, which creates another error source when
 the delay between acquiring the TSC and syncing it to jiffies is too long.
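 
 For illustration, a rough user-space analogue of that pattern (a sketch
 only, assuming the native skin services rt_timer_tsc(), rt_timer_tsc2ns(),
 rt_timer_ns2ticks() and rt_task_sleep_until()):
 
   /* Sleep until "now + 100 us", with "now" taken from the TSC. */
   SRTIME ns = rt_timer_tsc2ns(rt_timer_tsc()) + 100000;
   /* The conversion below rounds to jiffies in periodic mode and
    * assumes the TSC and jiffy time bases are aligned; any delay
    * between reading the TSC and converting here can cross a tick
    * boundary and add one more tick of error - the sync problem
    * described above. */
   rt_task_sleep_until(rt_timer_ns2ticks(ns));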

 
 The proper way to solve this is rather to emulate the periodic mode over
 the oneshot machinery, so that we stop having this +/- 1 tick error
 margin. The periodic mode as it is now is purely an x86 legacy; even on
 some ppc boards where the auto-reload feature is available from the
 decrementer, Xeno doesn't use it.
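 
 A minimal sketch of that emulation (illustrative names, not the actual
 nucleus API):
 
   typedef unsigned long long ticks_t;
 
   extern void hw_program_oneshot(ticks_t date); /* hypothetical hw hook */
   extern void run_elapsed_timers(ticks_t now);  /* hypothetical tick work */
 
   static ticks_t next_shot, period;
 
   void emulated_periodic_tick(void)
   {
       /* Re-arm from the previous absolute expiry date, not from "now",
        * so programming latency does not accumulate as drift. */
       next_shot += period;
       hw_program_oneshot(next_shot);
       run_elapsed_timers(next_shot);
   }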
 
 The more I think of the x86 situation, the more I find it quite silly. I
 mean, picking the periodic mode means that 1) all delays can be
 expressed as multiples of a given constant interval, and 2) the constant
 interval must be large enough that you don't bring your board to its
 knees by processing useless ticks most of the time. What one saves here
 - using periodic mode - is a couple of outb's per tick on the ISA bus,
 since the PIT handles this automatically, without software intervention,
 once set up properly. We already know that the programming overhead
 (i.e. the one introduced by those outb's) is perfectly bearable even for
 high-frequency sampling like 10 kHz loops in aperiodic mode. So why on
 earth would we care about saving two outb's, and accept lousy timing
 accuracy in the same move, for constant-interval delays which are
 necessarily going to be much larger than those already supported by the
 aperiodic mode? Er...
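 
 For reference, the port writes in question amount to something like this
 (a sketch of one-shot PIT reprogramming on x86, Linux kernel I/O style;
 three outb's in this full form, fewer when only the count's LSB needs
 reloading):
 
   #include <asm/io.h>
 
   #define PIT_CH0  0x40   /* channel 0 data port */
   #define PIT_MODE 0x43   /* mode/command register */
 
   static inline void pit_load_oneshot(unsigned short count)
   {
       outb(0x30, PIT_MODE);          /* channel 0, lobyte/hibyte, mode 0 */
       outb(count & 0xff, PIT_CH0);   /* LSB of the countdown value */
       outb(count >> 8, PIT_CH0);     /* MSB of the countdown value */
   }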
 
 This is actually a shift in the underlying logic of the periodic mode we
 are discussing here. It used to be a mode where timing accuracy was
 only approximate, mostly to deal with timeouts, in the watchdog sense.
 Now, it is becoming a way to rely on a constant interval unit while
 still keeping high timing accuracy. I'm ok with this: since we don't
 rely on a true PIT when running in periodic mode (except on x86, which
 is fixable), I see no problem in raising the level of timing accuracy
 of that mode. Existing stuff would not break because of such a change,
 but would instead improve for people who care about exact durations in
 periodic mode.

Yep, getting rid of as many periodic-mode limitations as reasonable, in a
transparent way, sounds very good to me.

Jan





[Xenomai-core] Re: COMEDI over RTDM

2006-03-24 Thread Jan Kiszka
Alexis Berlemont wrote:
 Hi,
 
 But there are plenty of things I am not happy with:
 - the original comedilib version is not really well suited for rtdm; for
 example, you can find calls to malloc/free functions in it.
 Oops, not so nice.
 
 If I stick to the original implementation, I have either to ask for
 alloc stuff to be added to the user-mode part of the rtdm skin, or to
 use another skin to manage the allocations. Neither of these solutions
 seems interesting to me, for many reasons. A lot of people must be
 thinking this is overkill; it is true that the comedilib allocations
 should be done at init time (comedi_open, comedi_close), with no need to
 fulfil real-time constraints then, but I think comedi should be fully
 rtdm compliant; this would avoid tricky corner cases for
 developers/users. The best and simplest solution for me would be some
 slight modifications to the comedilib API, but I doubt everyone is OK
 with that.
 Could you give some concrete use cases of the comedilib where dynamic
 allocation is involved? I don't know that library actually. What does it
 manage beyond calling the driver core?
 
 In the function comedi_open():
 
   if (!(it = malloc(sizeof(comedi_t))))
     goto cleanup;
   memset(it, 0, sizeof(comedi_t));
 
   if ((it->fd = rt_dev_open(fn, 0)) < 0)
   {
     fprintf(stderr, "comedi_open: rt_dev_open failed (ret=%d)\n",
             it->fd);
     goto cleanup;
   }
 
   if (comedi_ioctl(it->fd, COMEDI_DEVINFO,
                    (unsigned long)&it->devinfo) < 0)
     goto cleanup;
 
   it->n_subdevices = it->devinfo.n_subdevs;
 
   get_subdevices(it);
 
 ...
 
 We can see an allocation for a structure which will contain the result (fd)
 of rt_dev_open(). And this is not over: the function get_subdevices() will
 make another allocation to keep info about the driver (subdevices, number
 of channels, etc.), and it will trigger more allocations by calling
 get_rangeinfo(). In fact, malloc() is called eight times.
 
 All deallocations are done in comedi_close().

Ok, then opening and closing comedi devices is not deterministic and
must not happen when strict timing requirements exist - but that's a
rather unlikely scenario anyway. As long as there is no
allocation/release in the acquisition code path...

 
 Starting from here, we have two alternatives:
 - preallocate enough structs the first time comedi_open() is called. mmh...
 - modify the comedilib API to let the developer handle the allocations
 (see the sketch below).
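 
 A hypothetical shape for the second alternative, with caller-owned
 storage so that comedilib itself never calls malloc() (comedi_holder_t
 and comedi_open_static() are invented names, for illustration only):
 
   typedef struct comedi_holder {
       comedi_t dev;   /* main descriptor, provided by the caller */
       /* ...fixed-size arrays for subdevice and range info here... */
   } comedi_holder_t;
 
   /* Same job as comedi_open(), but filling caller-provided memory and
    * returning a status code instead of a malloc'ed handle. */
   int comedi_open_static(comedi_holder_t *h, const char *fn);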
 
 - I think the organization of the comedi structures (comedi_device,
 subdevice, async, etc.) should be reviewed in light of the rtdm
 architecture. Of course, these modifications should not induce big
 changes in the comedi drivers' source.
 Please also give concrete examples here. RTDM devices should be
 manageable by the user via file descriptors, just like normal devices.
 
 There is a little difference between normal devices with classical
 drivers and comedi: the link between a device and a driver is not
 direct. The comedi layer binds a comedi device (/dev/comedi0..9 or
 comedi0..9) to a specific driver at runtime, thanks to a specific ioctl
 (cf. comedi_config in comedilib). This is the comedi attach stuff.
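 
 A sketch of what comedi_config does under the hood, assuming the
 standard COMEDI_DEVCONFIG ioctl from the comedi tree (error handling
 trimmed; legacy boards would also pass I/O base, IRQ, etc. in
 cfg.options[]):
 
   #include <fcntl.h>
   #include <string.h>
   #include <sys/ioctl.h>
   #include <comedi.h>
 
   int attach_driver(const char *devfile, const char *board_name)
   {
       comedi_devconfig cfg;
       int fd = open(devfile, O_RDWR);
 
       if (fd < 0)
           return -1;
 
       memset(&cfg, 0, sizeof(cfg));
       strncpy(cfg.board_name, board_name, COMEDI_NAMELEN - 1);
 
       /* Ask the abstract layer to bind this device to a driver. */
       return ioctl(fd, COMEDI_DEVCONFIG, &cfg);
   }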

Then what does comedi0..9 stand for, some interface channel?

 
 At this level, I think it would be interesting to consider the layer
 organization quite precisely. I have not fully understood the
 architectural border between what must be done by the driver and what
 must be done by the abstract layer.
 
 For example, here is a description of the attaching procedure:
 1) the devconfig ioctl is received by the abstract comedi layer;
 2) the abstract layer (in comedi/drivers.c) calls do_devconfig_ioctl(),
 which makes some allocations and a few setups in the comedi_device
 structure, then the function comedi_device_attach() is called;
 3) in comedi_device_attach(), we check whether the proper driver is
 available (insmod'ed); if so, a driver-specific function is called;
 4) in this driver function, we have access to the structures of the
 abstract layer and we modify them (comedi_subdevice);
 5) back in the abstract layer, the function postconfig() is called to
 set up the struct comedi_async (this struct belongs to a
 comedi_subdevice).
 
 To sum up:
 - comedi_device {               (managed by the abstract layer)
     - comedi_subdevice {        (managed by the driver)
         - comedi_async          (managed by the abstract layer)
       }
   }
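 
 The same containment in C struct form (a sketch only; field lists
 trimmed to the ownership relations discussed here):
 
   struct comedi_async {               /* managed by the abstract layer */
       /* buffer management, events, ... */
   };
 
   struct comedi_subdevice {           /* filled by the driver at attach */
       struct comedi_async *async;     /* async itself stays abstract-layer */
       /* channel count, ranges, ... */
   };
 
   struct comedi_device {              /* managed by the abstract layer */
       struct comedi_subdevice *subdevices;
       int n_subdevices;
   };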
 
 I am not sure I am clear (it is quite hard to explain without source code), 
 but I think the drivers should not get direct access to the structures of the 
 abstract layer. 
 
 You may find these points useless; all this stuff is not directly related
 to rtdm functionality. I just thought the rtdm port would be a good
 opportunity to think about that.

Yes, certainly. And as it's not related to real-time or RTDM, this
should definitely raise the interest of the comedi developers as well. I
can only recommend torturing them with questions