Hi,

When I compiled the following program (taken from
/usr/src/linux/Documentation/rtc.txt),


(See attached file: rtc2.c)

it gave me the following error:

[root@msatuts1 timer1]#  gcc -s -Wall -Wstrict-prototypes rtc2.c -o rtc2
In file included from rtc2.c:17:
/usr/include/linux/mc146818rtc.h:29: parse error before `rtc_lock'
/usr/include/linux/mc146818rtc.h:29: warning: data definition has no type or
storage class
rtc2.c:25: warning: return type of `main' is not `int'
[root@msatuts1 timer1]#

Is this a bug? Can anyone tell me how to remove this parse error?
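
The parse error most likely comes from the `extern spinlock_t rtc_lock;'
declaration at line 29 of mc146818rtc.h: spinlock_t is a kernel-internal
type, so that header cannot be compiled from user space. A minimal fix,
sketched below assuming a 2.4-era header layout, is to include
<linux/rtc.h> instead (it provides the RTC_* ioctl definitions without the
spinlock) and to give main its proper int return type:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>          /* RTC_UIE_ON etc., no kernel spinlock_t */

int main(void)                  /* silences the `main' return type warning */
{
        unsigned long data;
        int fd = open("/dev/rtc", O_RDONLY);

        if (fd == -1) {
                perror("/dev/rtc");
                exit(1);
        }
        /* enable update-complete interrupts (one per second), catch one */
        if (ioctl(fd, RTC_UIE_ON, 0) == -1) {
                perror("RTC_UIE_ON");
                exit(1);
        }
        read(fd, &data, sizeof data);   /* blocks until the next interrupt */
        printf("got an RTC update interrupt\n");
        ioctl(fd, RTC_UIE_OFF, 0);
        close(fd);
        return 0;
}

With those two changes the same gcc command line should build cleanly.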

With Regards,
--Niraj


---------------------- Forwarded by Niraj Punmia/HSS on 04/20/2001 04:31 PM
---------------------------


Niraj Punmia
04/12/2001 02:50 PM

To:   [EMAIL PROTECTED]
cc:

Subject:  RTC !!

Hi,

The RTC interrupt is programmable from 2 Hz to 8192 Hz, in powers of 2, so
the interrupt periods you can get are one of the following: 0.122 ms,
0.244 ms, 0.488 ms, 0.977 ms, 1.953 ms, 3.906 ms, 7.813 ms, and so on. Is
there any workaround so that I can use the RTC to meet my requirement of an
interrupt every 1.666... ms? (I know that I can use UTIME or #define HZ 600,
but I want to know if I can use the RTC for this purpose.)
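
One possible workaround, assuming the ~0.2 ms jitter tolerance from the
original post still applies: program the RTC to 8192 Hz and keep a
Bresenham-style accumulator, so that a (hypothetical) handle_timeslot()
fires every time a 1/600 s boundary is crossed. The rate is then exactly
600 Hz on average, and the worst-case error is one RTC period, about
0.122 ms. A minimal sketch:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

#define RTC_HZ  8192UL          /* rates above 64 Hz need root */
#define SLOT_HZ  600UL          /* 24 slots per 40 ms TDMA frame */

static void handle_timeslot(unsigned long slot)
{
        /* placeholder for the real per-timeslot work */
        printf("slot %lu\n", slot % 24);
}

int main(void)
{
        unsigned long acc = 0, slot = 0, data;
        int fd = open("/dev/rtc", O_RDONLY);

        if (fd == -1) {
                perror("/dev/rtc");
                exit(1);
        }
        if (ioctl(fd, RTC_IRQP_SET, RTC_HZ) == -1 ||
            ioctl(fd, RTC_PIE_ON, 0) == -1) {
                perror("rtc ioctl");
                exit(1);
        }
        for (;;) {
                read(fd, &data, sizeof data);   /* one 1/8192 s tick */
                acc += SLOT_HZ;
                if (acc >= RTC_HZ) {            /* crossed a 1/600 s boundary */
                        acc -= RTC_HZ;
                        handle_timeslot(slot++);
                }
        }
}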

With Regards,
--Niraj

---------------------- Forwarded by Niraj Punmia/HSS on 04/12/2001 02:33 PM
---------------------------


James Stevenson <[EMAIL PROTECTED]> on 04/09/2001 06:42:44 PM

Please respond to [EMAIL PROTECTED]

To:   Niraj Punmia/HSS@HSS
cc:

Subject:  Re: 1.6666.... ms interrupts needed!!





Hi

Instead of modifying the timer IRQ frequency, you could try using the real
time clock (RTC): it will generate IRQs with better timing, and you also
won't hit system performance as much as you would by modifying the timer.
Every time the timer sends an IRQ, some code is run to see if schedule()
needs to be called, and the more times schedule() is called per second, the
worse system performance gets, because of the task-switching overhead.
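
A minimal sketch of this suggestion, following the /dev/rtc interface
described in Documentation/rtc.txt: each read() returns an unsigned long
whose low byte holds the interrupt type and whose remaining bytes hold the
number of interrupts since the last read, so missed ticks are detectable.
64 Hz is the highest periodic rate available without root:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void)
{
        unsigned long data;
        int i, fd = open("/dev/rtc", O_RDONLY);

        if (fd == -1) {
                perror("/dev/rtc");
                exit(1);
        }
        if (ioctl(fd, RTC_IRQP_SET, 64UL) == -1 ||
            ioctl(fd, RTC_PIE_ON, 0) == -1) {
                perror("rtc ioctl");
                exit(1);
        }
        for (i = 0; i < 64; i++) {              /* roughly one second */
                read(fd, &data, sizeof data);   /* blocks for next IRQ */
                if ((data >> 8) > 1)            /* high bytes: IRQ count */
                        fprintf(stderr, "missed %lu interrupts\n",
                                (data >> 8) - 1);
        }
        ioctl(fd, RTC_PIE_OFF, 0);
        close(fd);
        return 0;
}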

In local.linux-kernel-list, you wrote:
>
>
>
>Hi.
>
>We are simulating the air interface of GPRS on a LAN. A TDMA (time division
>multiple access) frame duration is 40 ms. Each TDMA frame consists of 24
>timeslots, so each timeslot is 40/24 ms (i.e. 1.66666... ms) long. To know
>which timeslot is current, we need a timer interrupt every 1.6666... ms.
>Since we are implementing this on a LAN, minor jitter once in a while can be
>tolerated (say 0.2 ms more or less once in a while would be OK).
>     As of now, we are modifying the HZ value in param.h to 600. This gives
>us a CPU tick of 1.6666... ms (i.e. 1/600 sec). I want to know if it would
>affect the performance of the CPU.
>     Is there a better way to achieve the granularity of 1.666... ms? Would
>the UTIME patch be better from a performance or any other point of view than
>this method?
>
>With Regards,
>Niraj Punmia


--
---------------------------------------------
Check Out: http://stev.org
E-Mail: [EMAIL PROTECTED]
  1:10pm  up 13 days, 21:05,  5 users,  load average: 0.45, 0.45, 0.47




