Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-26 Thread Roberto Mannai
Hi all, I'm reading the bug news in
https://bugzilla.novell.com/show_bug.cgi?id=333739.
Maybe I'm too much of a newbie, but I cannot understand why audit is
enabled by default, or what the CONFIG_AUDITSYSCALL option means. Any
hints?

Rob

On 10/19/07, Ben Kevan [EMAIL PROTECTED] wrote:
 On Thursday 18 October 2007 02:25:04 pm nordi wrote:
  I built myself a new kernel and surprise: I got A LOT faster!
 
Posix 10.0   UTF8 10.3   10.3 (new kernel)
==   ==
  Dhrystone336  339 338
  Whetstone198  204 206
  Execl658  576 632
  File Copy 1024   535  481 595
  File Copy 256455  355 458
  File Copy 4096   588  717 827
  Pipe Throughput  468  278 408
  Context Switch   554  384 567
  Process Creat   1000  783 921
  Shell Scripts1   873  344 362
  Shell Scripts8   894  332 349
  System Call  904  334 819
   --   --  --
  Index Score: 569  397 496
 
  I have appended my kernel config. It is the standard config minus
  everything that I thought could potentially hurt syscall performance (8
  changes all together). Since building a kernel takes quite some time on
  my machine I haven't checked exactly which change it was.
 
  Note that I used UTF-8 again, but Shell Script performance still went up
by 17-18 points. That is a 5% speed increase for a shell script,
  simply because of the new kernel! Quite likely other applications will
  benefit as well.
 
  I think Suse should really look into this issue, getting 5% more
  performance in your applications is something that everyone would like
  to have. But so far, I have gotten no replies to my bug report [1]. Btw,
  if you have a Bugzilla account you can add yourself to the CC: list and
  get informed about all changes to this bug.
 
  Regards
  nordi
 
  [1] https://bugzilla.novell.com/show_bug.cgi?id=333739

 I did similar here.. I actually installed a vanilla 2.6.23.. and boy.. the
 shit screamed like a little kiddie in a haunted house on halloween.. I love
 it..

 Ben


 --
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]





-- 
Are you tired of making software? Play it! (http://www.codesounding.org)



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-26 Thread Andreas Jaeger
Roberto Mannai [EMAIL PROTECTED] writes:

 Hi all, I'm reading the bug news in
 https://bugzilla.novell.com/show_bug.cgi?id=333739.
 Maybe I'm too much of a newbie, but I cannot understand why audit is
 enabled by default, or what the CONFIG_AUDITSYSCALL option means. Any
 hints?

There are the Audit Quick Start and Linux Audit Framework manuals on
the Novell site ( http://www.novell.com/documentation/sles10/ ) for an
in-depth treatment.

Audit is a security tool - used not only by AppArmor - that allows you
to see what programs are doing.  Think of it as a system-wide trace mechanism.
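For anyone who wants to check this on their own machine, here is a quick sketch (not authoritative; it assumes the kernel exposes its config via /proc/config.gz or a file under /boot, and that the audit userspace tools are installed for the second command):

```shell
# Was the kernel built with syscall auditing?
# (needs CONFIG_IKCONFIG_PROC; otherwise inspect /boot/config-$(uname -r))
zgrep -E 'CONFIG_AUDIT(SYSCALL)?=' /proc/config.gz 2>/dev/null ||
    grep -E 'CONFIG_AUDIT(SYSCALL)?=' "/boot/config-$(uname -r)" 2>/dev/null ||
    echo "kernel config not exposed on this machine"

# Is auditing currently enabled at runtime? (needs auditctl, usually root)
auditctl -s 2>/dev/null || echo "auditctl not available or not permitted"
```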

Andreas
-- 
 Andreas Jaeger, Director Platform/openSUSE, [EMAIL PROTECTED]
  SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
   Maxfeldstr. 5, 90409 Nürnberg, Germany
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


pgpaqND22MaLh.pgp
Description: PGP signature


Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-18 Thread nordi

I built myself a new kernel and surprise: I got A LOT faster!

 Posix 10.0   UTF8 10.3   10.3 (new kernel)
 ==   ==
Dhrystone336  339 338
Whetstone198  204 206
Execl658  576 632
File Copy 1024   535  481 595
File Copy 256455  355 458
File Copy 4096   588  717 827
Pipe Throughput  468  278 408
Context Switch   554  384 567
Process Creat   1000  783 921
Shell Scripts1   873  344 362
Shell Scripts8   894  332 349
System Call  904  334 819
--   --  --
Index Score: 569  397 496

I have appended my kernel config. It is the standard config minus 
everything that I thought could potentially hurt syscall performance (8 
changes all together). Since building a kernel takes quite some time on 
my machine I haven't checked exactly which change it was.


Note that I used UTF-8 again, but Shell Script performance still went up 
 by 17-18 points. That is a 5% speed increase for a shell script, 
simply because of the new kernel! Quite likely other applications will 
benefit as well.


I think Suse should really look into this issue, getting 5% more 
performance in your applications is something that everyone would like 
to have. But so far, I have gotten no replies to my bug report [1]. Btw, 
if you have a Bugzilla account you can add yourself to the CC: list and 
get informed about all changes to this bug.


Regards
nordi

[1] https://bugzilla.novell.com/show_bug.cgi?id=333739



config_103fast.bz2
Description: BZip2 compressed data


Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-18 Thread Ben Kevan
On Thursday 18 October 2007 02:25:04 pm nordi wrote:
 I built myself a new kernel and surprise: I got A LOT faster!

   Posix 10.0   UTF8 10.3   10.3 (new kernel)
   ==   ==
 Dhrystone336  339 338
 Whetstone198  204 206
 Execl658  576 632
 File Copy 1024   535  481 595
 File Copy 256455  355 458
 File Copy 4096   588  717 827
 Pipe Throughput  468  278 408
 Context Switch   554  384 567
 Process Creat   1000  783 921
 Shell Scripts1   873  344 362
 Shell Scripts8   894  332 349
 System Call  904  334 819
  --   --  --
 Index Score: 569  397 496

 I have appended my kernel config. It is the standard config minus
 everything that I thought could potentially hurt syscall performance (8
 changes all together). Since building a kernel takes quite some time on
 my machine I haven't checked exactly which change it was.

 Note that I used UTF-8 again, but Shell Script performance still went up
   by 17-18 points. That is a 5% speed increase for a shell script,
 simply because of the new kernel! Quite likely other applications will
 benefit as well.

 I think Suse should really look into this issue, getting 5% more
 performance in your applications is something that everyone would like
 to have. But so far, I have gotten no replies to my bug report [1]. Btw,
 if you have a Bugzilla account you can add yourself to the CC: list and
 get informed about all changes to this bug.

 Regards
 nordi

 [1] https://bugzilla.novell.com/show_bug.cgi?id=333739

I did similar here.. I actually installed a vanilla 2.6.23.. and boy.. the 
shit screamed like a little kiddie in a haunted house on halloween.. I love 
it.. 

Ben





Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-15 Thread jdd

I just saw this in another thread of this list:

http://kerneltrap.org/node/5411

Could this be related to the benchmarking?

jdd

--
http://www.dodin.net





Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-15 Thread Patrick Kirsch
Hey,
nordi wrote:
 Looking at the syscalls in more depth, I wrote a small and simple
 program that _only_ does syscalls in a big loop, see attachment. Just
 uncomment the syscall that you want to benchmark.
Maybe it is of interest:
 http://www.opensolaris.org/os/project/libmicro/
 LibMicro is intended to measure the performance of various system and
library calls.
With that you have nearly all syscalls tested, complete with statistical
analysis (median, stddev, 99% confidence level).

Regards,
-- 
Patrick Kirsch - Quality Assurance Department
SUSE Linux Products GmbH GF: Markus Rex, HRB 16746 (AG Nuernberg)



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-15 Thread Ian Smith
On Sunday 14 October 2007 14:22:48 nordi wrote:
 Forgot to mention that we now have bug #333739, see
 
 https://bugzilla.novell.com/show_bug.cgi?id=333739

Thanks for that!

 Maybe the Suse kernel-guys know if this is a bug or a feature.

Yeah, I was wondering... is this because of different configure options SuSE 
is setting?  Or because of the kernel itself?

Right now I'm installing 10.2 and 10.3, and hopefully 10.0 and 10.1, on a test 
machine.

Ian



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

I wrote:

Suse 10.3, runlevel 2: 385.9
Suse 10.3, runlevel 1: 756.9


After noticing that the benchmark also runs significantly faster for 
root than it runs for a normal user, I started looking at the 
environment. It turns out that the LANG variable is all that matters:


$ echo $LANG 
   de_DE.UTF-8


$ time for i in {1..1000}; do ../pgms/tst.sh; done

real0m22.807s
user0m16.365s
sys 0m6.388s
$ LANG=POSIX
$ time for i in {1..1000}; do ../pgms/tst.sh; done

real0m12.113s
user0m7.092s
sys 0m5.016s


So this benchmark is not really measuring performance, it is measuring 
your language settings. Ian, you should modify all tests to use the same 
language settings everywhere, because otherwise the results are pure 
bogus. And then re-run the benchmarks on 10.2 and 10.3 and we will 
hopefully see a performance _increase_ for 10.3 ;)
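The effect above is easy to reproduce in isolation. A minimal sketch, where the throwaway script and the loop count are arbitrary stand-ins for UnixBench's tst.sh:

```shell
# Create a trivial stand-in for the benchmark's shell script.
cat > /tmp/tst.sh <<'EOF'
#!/bin/sh
exit 0
EOF
chmod +x /tmp/tst.sh

# Time 200 spawns under a UTF-8 locale, then under POSIX.
for loc in en_US.UTF-8 POSIX; do
    echo "LANG=$loc:"
    LANG=$loc
    export LANG
    time ( i=0; while [ $i -lt 200 ]; do /tmp/tst.sh; i=$((i+1)); done )
done
```

On an affected system the POSIX run should finish noticeably faster, matching the `time` figures quoted above.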


Regards
nordi



[opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Aniruddha
Where can I get Unixbench?
Are these tests done with or without beagle enabled?
-- 
Regards,

Aniruddha

Please adhere to the OpenSUSE_mailing_list_netiquette
http://en.opensuse.org/OpenSUSE_mailing_list_netiquette





Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

Aniruddha wrote:

Where can I get Unixbench?

Simply use the link that Ian provided:
http://www.hermit.org/Linux/Benchmarking/
At the bottom of the page you will find links to Unixbench.


Are these tests done with or without beagle enabled?
I don't have beagle installed, so my results are without beagle. I don't
know about Ian's.


Regards
nordi




Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

I wrote:
Ian, you should modify all tests to use the same 
language settings everywhere, because otherwise the results are pure 
bogus.
The question is: Should we use POSIX or UTF8? If we use POSIX the
results are somewhat unrealistic, because everyone uses UTF8 nowadays. If
we use UTF8, we cannot compare to older systems that do not support it.


And then re-run the benchmarks on 10.2 and 10.3 and we will 
hopefully see a performance _increase_ for 10.3 ;)
Hm, my results are not really what I had hoped for. More testing shows 
that 10.3 still seems to be much slower than 10.0 on my system:


Posix 10.0  UTF8 10.3   Posix 10.3
==  ==
Dhrystone   335.6   339.1   326.9   ok
Whetstone   198.4   203.5   201.7   ok
Execl   658.3   576.3   573.1   -13%
File Copy 1024  534.6   481.0   480.9   
File Copy 256   455.2   354.5   353.8
File Copy 4096  588.3   717.4   736.2
Pipe Throughput 468.1   277.6   283.3   -40%
Context Switch  554.3   384.1   385.4   -31%
Process Creat   1000.2  782.7   770.5   -23%
Shell Scripts1  873.0   343.8!!!721.0   -17%
Shell Scripts8  893.6   331.7!!!724.6   -19%
System Call 903.8   333.7   336.7   -63%!!!
-   -   -
Index Score:568.9   397.3   450.6

The first two only do calculations, and they are fine: some jitter, nothing
more. The last ones (syscall, pipe, switch, process creation) have a lot
of kernel involvement and score very low. The shell scripts also make
heavy use of pipes, which might explain why they still score much lower
for 10.3 than for 10.0, even though LANG=POSIX is used on both systems.


Somehow this does not look right. The kernel in 10.3 seems to be _much_ 
slower than in 10.0. Maybe someone forgot to activate some optimization 
in the kernel config?


Regards
nordi



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Anders Johansson
On Sunday 14 October 2007 15:40:58 nordi wrote:
 I wrote:
  Ian, you should modify all tests to use the same
  language settings everywhere, because otherwise the results are pure
  bogus.

 The question is: Should we use POSIX or UTF8? If we use POSIX the
 results are somehow unrealistic, because everyone uses UTF8 nowadays. If
 we use UTF8, we cannot compare to older systems that do not support it.

  And then re-run the benchmarks on 10.2 and 10.3 and we will
  hopefully see a performance _increase_ for 10.3 ;)

 Hm, my results are not really what I had hoped for. More testing shows
 that 10.3 still seems to be much slower than 10.0 on my system:

   Posix 10.0  UTF8 10.3   Posix 10.3
   ==  ==
 Dhrystone 335.6   339.1   326.9   ok
 Whetstone 198.4   203.5   201.7   ok
 Execl 658.3   576.3   573.1   -13%
 File Copy 1024534.6   481.0   480.9
 File Copy 256 455.2   354.5   353.8
 File Copy 4096588.3   717.4   736.2
 Pipe Throughput   468.1   277.6   283.3   -40%
 Context Switch554.3   384.1   385.4   -31%
 Process Creat 1000.2  782.7   770.5   -23%
 Shell Scripts1873.0   343.8!!!721.0   -17%
 Shell Scripts8893.6   331.7!!!724.6   -19%
 System Call   903.8   333.7   336.7   -63%!!!
   -   -   -
 Index Score:  568.9   397.3   450.6

 The first two only do calculations and they are ok, some jitter, not
 more. The last ones (syscall, pipe, switch, create processes) have a lot
 of kernel involvement and score very low. The shell scripts also make
 heavy use of pipes, which might explain why they still score much lower
 for 10.3 than for 10.0, even though LANG=POSIX is used on both systems.

 Somehow this does not look right. The kernel in 10.3 seems to be _much_
 slower than in 10.0. Maybe someone forgot to activate some optimization
 in the kernel config?

I don't have a 10.0 handy for testing. Could you try:

strace -T pgms/syscall 3

It will run for 3 seconds and tell you where it spends its time.

By the way, I'm assuming you're running this on the same hardware for all tests.

Also, tests that don't do text manipulation, like grep, don't need to be
tested with different locales.

Anders

-- 
Madness takes its toll



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

Seems my table got messed up for whatever reason. Let's try again:

 Posix 10.0   UTF8 10.3   Posix 10.3
 ==   ==
Dhrystone335.6339.1   326.9 ok
Whetstone198.4203.5   201.7 ok
Execl658.3576.3   573.1 -13%
File Copy 1024   534.6481.0   480.9
File Copy 256455.2354.5   353.8
File Copy 4096   588.3717.4   736.2
Pipe Throughput  468.1277.6   283.3 -40%
Context Switch   554.3384.1   385.4 -31%
Process Creat   1000.2782.7   770.5 -23%
Shell Scripts1   873.0343.8!!!721.0 -17%
Shell Scripts8   893.6331.7!!!724.6 -19%
System Call  903.8333.7   336.7 -63%!!!
--   --  --
Index Score: 568.9397.3   450.6


Regards
nordi




Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Marcus Meissner
On Sun, Oct 14, 2007 at 03:40:58PM +0200, nordi wrote:
 I wrote:
 Ian, you should modify all tests to use the same 
 language settings everywhere, because otherwise the results are pure 
 bogus.
 The question is: Should we use POSIX or UTF8? If we use POSIX the 
 results are somehow unrealistic, because everyone uses UTF8 nowadays. If 
 we use UTF8, we cannot compare to older systems that do not support it.
 
 And then re-run the benchmarks on 10.2 and 10.3 and we will 
 hopefully see a performance _increase_ for 10.3 ;)
 Hm, my results are not really what I had hoped for. More testing shows 
 that 10.3 still seems to be much slower than 10.0 on my system:
 
   Posix 10.0  UTF8 10.3   Posix 10.3
   ==  ==
 Dhrystone 335.6   339.1   326.9   ok
 Whetstone 198.4   203.5   201.7   ok
 Execl 658.3   576.3   573.1   -13%
 File Copy 1024534.6   481.0   480.9   
 File Copy 256 455.2   354.5   353.8
 File Copy 4096588.3   717.4   736.2
 Pipe Throughput   468.1   277.6   283.3   -40%
 Context Switch554.3   384.1   385.4   -31%
 Process Creat 1000.2  782.7   770.5   -23%
 Shell Scripts1873.0   343.8!!!721.0   -17%
 Shell Scripts8893.6   331.7!!!724.6   -19%
 System Call   903.8   333.7   336.7   -63%!!!
   -   -   -
 Index Score:  568.9   397.3   450.6
 
 The first two only do calculations and they are ok, some jitter, not 
 more. The last ones (syscall, pipe, switch, create processes) have a lot 
 of kernel involvement and score very low. The shell scripts also make 
 heavy use of pipes, which might explain why they still score much lower 
 for 10.3 than for 10.0, even though LANG=POSIX is used on both systems.
 
 Somehow this does not look right. The kernel in 10.3 seems to be _much_ 
 slower than in 10.0. Maybe someone forgot to activate some optimization 
 in the kernel config?

Can you run oprofile on them and see where time is wasted?

Ciao, Marcus



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

Anders Johansson wrote:

I don't have a 10.0 handy for testing. could you try'

strace -T pgms/syscall 3

it will run for 3 seconds, and tell you where it spends its time.

I have attached the trace for Suse Linux 10.0.


btw, I'm assuming you're running this on the same hardware for all tests
Sure, it wouldn't make a lot of sense if I didn't. Same hardware (Pentium M 
1.3 GHz), same binaries (as downloaded for unixbench 5.0).


Also, tests that don't do text manipulation, like grep, don't need to be 
tested with different locales

You are right, just wanted to be _really_ sure.

Regards
nordi



trace_anders.bz2
Description: BZip2 compressed data


Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Ian Smith
On Sunday 14 October 2007 02:51:40 Aniruddha wrote:
 Where can I get Unixbench?

See my original post:

 You'll find the benchmarks, system details, and full results at:
 
 http://www.hermit.org/Linux/Benchmarking/

 Are these tests done with or without beagle enabled?

No beagle, I always delete it. Hey, I know where things are... ;-)

Ian



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Anders Johansson
On Sunday 14 October 2007 16:58:28 nordi wrote:
 Anders Johansson wrote:
  I don't have a 10.0 handy for testing. could you try'
 
  strace -T pgms/syscall 3
 
  it will run for 3 seconds, and tell you where it spends its time.

 I have attached the trace for Suse Linux 10.0.

Cool. Can you do the same thing for 10.3, so we can compare?

Anders

-- 
Madness takes its toll




Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Ian Smith
What a response!  Thanks everyone.  I'm going to consolidate some replies 
here...

On Saturday 13 October 2007 13:09:37 nordi wrote:
 
 Suse 10.0, runlevel 5: 511.4
 Suse 10.0, runlevel 1: 920.7
 
 Suse 10.3, runlevel 2: 385.9
 Suse 10.3, runlevel 1: 756.9
 
 This is _very_ strange. Usually I would say the benchmark is broken, but 
 the benchmark simply starts a shell script that starts some GNU 
 utilities. There's not much you can break here.

Please note that the benchmarks themselves haven't been touched in 10 years at 
least (all my work has been on the framework around them).  There could be 
all sorts of weirdness in there.  But I don't think so, they're really very 
simple (too simple, really).

 Can someone confirm that running in runlevel 1 yields much higher 
 benchmark scores?

Yes: as the USAGE file says:

When running the tests, I do *not* recommend switching to single-user 
mode (init 1).  This seems to change the results in ways I don't
understand, and it's not realistic (unless your system will actually 
be running in this mode, of course).

No idea why, though.

On Saturday 13 October 2007 13:43:37 Anders Johansson wrote:
 
 But yes, the benchmark is broken. I haven't looked in any great 
 detail at what it does, but how it measures it is just wrong.
 
 In theory, it runs for 60 seconds, and at the end it counts how many 
 iterations it has managed to do in that time, averaged over a couple of runs
 
 The problem is that it never checks if it has run for 60 seconds.
 It sets up a signal handler for SIGALRM, and just assumes that when the
 process receives that signal, the 60 seconds are up and it's time to report.

Not true.  Most tests don't report the time taken; the Run script measures the 
*actual* time the test runs for, and uses that figure.  This is not ideal, 
because it includes the program start-up and shutdown in the test score, but 
that's how it's always been.  I'm considering changing it, though; I already 
did for the FS tests.

On Saturday 13 October 2007 16:24:53 Lew Wolfgang wrote:
 
 I didn't try Ian's benchmarks, but I did fiddle around
 a bit with a floating-point intensive one that I've
 been using for years.  It calculates very long FFT's
 and displays the accuracy.
 
 Bottom line is I didn't see any significant differences
 between runlevels 1 and 5.  The benchmark ran in 8.7
 seconds as measured by time.
 
 It did run a bit faster in 10.3 than 10.2.  However, this
 wasn't a fair test since my 10.2 is 32-bit, my 10.3 64-bit
 on the same computer

Then it's meaningless, I'm afraid.  I'm seeing a 10-15% speedup just from 
running the 64-bit version of Linux as opposed to the 32-bit version, which 
would compensate for the slowdown in 10.3.  Compare like with like.  See:

http://www.hermit.org/Linux/Benchmarking/

My test shows dhrystone *faster* on 10.3, and double-precision whetstone about 
the same.  So it's no surprise that your FP test shows no slowdown.

The slowdown I saw was in context switching, shell scripts (dramatically) and 
system calls (dramatically).

OK, next batch... ;-)

Ian



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Ian Smith
On Sunday 14 October 2007 02:03:37 nordi wrote:
 
 After noticing that the benchmark also runs significantly faster for 
 root than it runs for a normal user, I started looking at the 
 environment. It turns out that the LANG variable is all that matters:
 
 [ ... ]
 
 So this benchmark is not really measuring performance, it is measuring 
 your language settings. Ian, you should modify all tests to use the same 
 language settings everywhere, because otherwise the results are pure 
 bogus. And then re-run the benchmarks on 10.2 and 10.3 and we will 
 hopefully see a performance _increase_ for 10.3 ;)

Thanks for this investigation, it certainly looks relevant.  BUT:

I don't get what you mean.

On one hand, I guess you're saying that I should set LANG manually, so that 
people running UnixBench all around the world will see consistent results.  
That's obviously a very good idea, and I'll do that.  Thanks for the tip!

But how does that change *my* results?  My Sony and HP test systems are always 
installed as UK English; the others (they belong to my employer) as US 
English.  So when I see a slowdown between 10.2 and 10.3 on the Sony, and a 
similar slowdown on the HP, that must be caused by something else.

The slowdown seems to be worst in the shell tests and the system call overhead 
tests.  How would LANG affect the latter?

I'm going to re-install 10.2 on one of the Dell's boot partitions and do some 
more testing... now that I know about LANG, I'll take that into account.

Ian



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Ian Smith
On Sunday 14 October 2007 06:40:58 nordi wrote:
 I wrote:
  Ian, you should modify all tests to use the same 
  language settings everywhere, because otherwise the results are pure 
  bogus.
 The question is: Should we use POSIX or UTF8? If we use POSIX the 
 results are somehow unrealistic, because everyone uses UTF8 nowadays. If 
 we use UTF8, we cannot compare to older systems that do not support it.

Anyone else got any feelings on this?  Obviously we need to set it to 
something consistent.

My feeling is that I should set it to en_US.UTF-8.  Rationale:

* Every (modern) install should support en_US.UTF-8.

* Like nordi said, benchmarking with settings no-one uses is going to
  be unrealistic; for example, say there was a machine with hardware
  UTF support (it could happen) -- then if the tests were run as POSIX
  they wouldn't show the improvement.

The big drawback, as nordi said, is that you lose consistency with pre-Unicode 
systems.  Or do you?  It's the old benchmarking problem of what it is that 
you're trying to measure.  If you're measuring kernel performance, then you 
should always use POSIX, to remove the effect of things like the shell.  But 
if you're measuring system performance -- which is what UnixBench is really 
designed for -- then you should use the system's default settings, so you 
measure what the system really does.  After all, Unicode systems *really do* 
go slower than ASCII systems, and the test results should reflect that.

Ian



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Anders Johansson
On Sunday 14 October 2007 17:22:02 Ian Smith wrote:
 After all, Unicode systems *really do* go slower than ASCII systems, and
 the test results should reflect that.

Not for people whose languages aren't representable in ASCII. For them, an 
ASCII system would be much slower.

If you don't take into account missing functionality, you might as well run 
your benchmark on a machine with no OS at all (like DOS, for example) and 
declare it the overall winner. Sure it's faster, but what difference does it 
make if you can't actually use it for anything useful?

If all you're interested in is winning benchmarks, I can provide you with 
patched versions of glibc and bash (where most functions are replaced by 
NOOPs), which would beat all your systems hands down.

Like you said yourself: compare like with like.


Anders

-- 
Madness takes its toll



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Ian Smith
OK, I may have cracked the runlevel 1 issue.  It seems that LANG is not set in 
runlevel 1.

I just ran the following tests, on the Sony system, OpenSuSE 10.3, in runlevel 
1:

LANG not defined:

  System Benchmarks Partial Index  BASELINE     RESULT    INDEX
  Shell Scripts (1 concurrent)         42.4     2815.8    664.1
  System Call Overhead              15000.0   454455.1    303.0

LANG = en_US.UTF8:

  System Benchmarks Partial Index  BASELINE     RESULT    INDEX
  Shell Scripts (1 concurrent)         42.4     1416.8    334.2
  System Call Overhead              15000.0   455891.6    303.9

These last results pretty much match what I saw in runlevel 5 on the same 
machine.

Ian



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

Anders Johansson wrote:

Cool. Can you do the same thing for 10.3, so we can compare?


Here it is.

Syscalls run considerably slower in 10.3 than in 10.0. I grepped through 
the file to see how long the syscalls took:


10.0:
5 usecs: 52792
6 usecs: 79944
7 usecs:   434
weighted average: 5.61 usecs

10.3
5 usecs: 16811
6 usecs: 85497
7 usecs:  4282
weighted average: 5.88 usecs

That's only a 5% difference, but there are probably rounding errors 
hiding somewhere, since the numbers are very small: if you go by the sizes 
of the files (6.0MB versus 7.5MB), the difference is much clearer.
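The histogram above can be rebuilt from an `strace -T` log with something like the following sketch. The field layout is assumed from standard `strace -T` output, which appends each call's elapsed time in angle brackets, e.g. `<0.000006>`; a three-line sample stands in for the real trace file here:

```shell
# Sample strace -T lines standing in for trace_anders.
sample='getpid() = 42 <0.000005>
getpid() = 42 <0.000006>
getpid() = 42 <0.000006>'

# Split on the angle brackets, bucket the per-call times in whole
# microseconds, then print the histogram and the weighted average.
hist=$(printf '%s\n' "$sample" | awk -F'[<>]' '/</ {
    u = int($(NF-1) * 1000000 + 0.5)   # seconds -> rounded microseconds
    count[u]++; total += $(NF-1) * 1000000; n++
}
END {
    for (u = 0; u <= 1000; u++)
        if (u in count) printf "%d usecs: %d\n", u, count[u]
    printf "weighted average: %.2f usecs\n", total / n
}')
echo "$hist"
```

Pointing the same awk program at the real trace file instead of the sample reproduces the figures quoted above.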



Btw, I saw the following on Suse 10.3:

$ head -n 1 trace_anders > temp
$ time grep -c 'uid' temp
2494

real0m4.745s
user0m4.720s
sys 0m0.004s
$ LANG=POSIX
$ time grep -c 'uid' temp
2494

real0m0.004s
user0m0.004s
sys 0m0.000s


Grepping with UTF-8 was super-slow for me, while grepping with 
LANG=POSIX worked as expected. On Suse 10.0 I can grep the whole file 
with UTF-8 in just 0.9 seconds, while on Suse 10.3 it takes 4.7 seconds 
to grep a small fraction. Anyone else seeing this?


Regards
nordi


trace_slow.bz2
Description: BZip2 compressed data


Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Ian Smith
On Sunday 14 October 2007 08:38:06 Anders Johansson wrote:
 
 If all you're interested in is winning benchmarks, I can provide you with 
 patched versions of glibc and bash (where most functions are replaced by 
 NOOP), which would beat all your systems hands down
 
 Like you said yourself, compare like with like

So... you're agreeing that we should use UTF-8?  Seems sensible, anyhow.

One question.  I can set LANG to en_US.UTF-8, but I would like to have the 
test report include the language setting to confirm that it is set right.   
Like, if a system doesn't support en_US.UTF-8 for some reason, I want to know 
that it's not running a fair test.  So how can I tell what the system is 
really using?

For example, if I set LANG to sfsfgsfdg, then locale tells me I'm 
using sfsfgsfdg, but it actually defaults back to POSIX, and I get the 
wrong scores again.

The command

locale -a | grep $LANG

should tell me whether the locale is installed, but doesn't, because it 
reports the name in a different format!

Ideas?

Ian



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

Ian Smith wrote:

The command

locale -a | grep $LANG

should tell me whether the locale is installed, but doesn't, because it 
reports the name in a different format!


Ideas?


Try this:

$ LANG=foo
$ locale -a > /dev/null
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_COLLATE to default locale: No such file or directory

If you get any messages on stderr, then the locale is not supported.
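That stderr trick can be wrapped into a small helper; a sketch, verified in spirit against glibc's `locale` utility (other C libraries may phrase or omit the warnings, so this is not fully portable):

```shell
# Succeeds if the given locale name is usable: glibc's `locale` utility
# prints "Cannot set LC_* to default locale" warnings on stderr when the
# requested locale does not exist.
locale_ok() {
    err=$(LANG="$1" LC_ALL="$1" locale 2>&1 >/dev/null)
    [ -z "$err" ]
}

locale_ok POSIX && echo "POSIX is usable"
locale_ok sfsfgsfdg || echo "sfsfgsfdg is not usable"
```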

nordi



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Ian Smith
On Sunday 14 October 2007 09:11:24 nordi wrote:
 
 $ LANG=foo
 $ locale -a > /dev/null
 locale: Cannot set LC_CTYPE to default locale: No such file or directory
 locale: Cannot set LC_MESSAGES to default locale: No such file or directory
 locale: Cannot set LC_COLLATE to default locale: No such file or directory
 
 If you get any messages on stderr, then the locale is not supported.

Yeah, it's a bit of a hack, and I'm worried how portable that would be (I'm 
trying to keep this a Unix benchmark, not just Linux), but that may be the 
best plan.

So, UnixBench 5.2 will set LANG, and do its best to report the setting.  Any 
other feature requests while I'm at it?

Cheers,


Ian



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

nordi wrote:

Try this:

$ LANG=foo
$ locale -a >/dev/null
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_COLLATE to default locale: No such file or directory

If you get any messages on stderr, then the locale is not supported.


Nope, that doesn't work on Solaris. But Solaris formats its output so 
that your original instruction works:


$ locale -a | grep $LANG
de_DE.UTF-8
[EMAIL PROTECTED]

Looks like Linux and Solaris use different output formats, and who knows 
what the other Unixes are doing.


Regards
nordi



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread Anders Johansson
On Sunday 14 October 2007 18:01:32 nordi wrote:
 Anders Johansson wrote:
  Cool. Can you do the same thing for 10.3, so we can compare?

 Here it is.

 Syscalls run considerably slower in 10.3 than in 10.0. I grepped through
 the file to see how long the syscalls took:

 10.0:
 5 usecs: 52792
 6 usecs: 79944
 7 usecs:   434
 weighted average: 5.61 usecs

 10.3
 5 usecs: 16811
 6 usecs: 85497
 7 usecs:  4282
 weighted average: 5.88 usecs

 That's only a 5% difference, but probably there are rounding errors
 hiding somewhere since we have very small numbers: If you go by the size
 of the files (6.0MB versus 7.5MB) the difference is much clearer.

I think the main difference here is in the time the execve() call takes. On 
10.0 it's 0.000140 seconds, whereas on 10.3 it's 0.019752. That's 140 times 
slower, and it dwarfs all the other times

I wonder why that would be

Anders

-- 
Madness takes its toll



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi
Looking at the syscalls in more depth, I wrote a small and simple 
program that _only_ does syscalls in a big loop, see attachment. Just 
uncomment the syscall that you want to benchmark.


Here are the results for Suse 10.0, 10.2 (rescue system) and 10.3. 
Hopefully I get the table right this time...


              10.0    10.2    10.3
             -----   -----   -----
gethostname   8.57   11.77   14.47   seconds/run
stat         14.49   19.38   21.90   seconds/run
getuid        2.78    5.43    8.41   seconds/run
close(dup)    9.09   15.83   21.93   seconds/run

Looks like syscalls have been getting slower over time. I'm just amazed 
_how_ much slower this is. Did the same thing happen for the vanilla kernel?


Regards
nordi
#include <stdio.h>
#include <unistd.h>

#include <sys/utsname.h>
#include <sys/types.h>
#include <sys/stat.h>

int main() {
    int i;
    uid_t myuid;
    struct stat x;
    char name[5];
    for (i = 0; i < 2000; i++) {
	//gethostname(name, 1);
	//stat("/", &x);
	//myuid = getuid();
	close(dup(0));
    }
    return 0;
}


Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-14 Thread nordi

Forgot to mention that we now have bug #333739, see

https://bugzilla.novell.com/show_bug.cgi?id=333739

Maybe the Suse kernel-guys know if this is a bug or a feature.

nordi



[opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-13 Thread Ian Smith
Hi,

I've been doing some benchmarking of OpenSUSE using UnixBench 5.1.  I noticed 
that 10.3 is 15% - 25% slower than 10.2.  (10.2 was 50% faster than 10.1, 
yay!).

I was wondering if anyone knows why this might be?  And is benchmarking a part 
of the release testing process?

You'll find the benchmarks, system details, and full results at:

http://www.hermit.org/Linux/Benchmarking/

Cheers,

Ian Smith



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-13 Thread nordi

Ian Smith wrote:
I've been doing some benchmarking of OpenSUSE using UnixBench 5.1.  I noticed 
that 10.3 is 15% - 25% slower than 10.2.  (10.2 was 50% faster than 10.1, 
yay!).
The benchmark is really showing very strange numbers. The shell script 
benchmark (consisting mainly of sort and grep) is only _half_ as fast in 
10.3 as in 10.2 in your measurements. I ran that test on my system and 
got similar results. Interestingly, the performance is much higher if I 
switch to runlevel 1!!! Here are my results for "./Run shell1" on a 
Pentium M at 1.3Ghz:


Suse 10.0, runlevel 5:511.4
Suse 10.0, runlevel 1:920.7

Suse 10.3, runlevel 2:385.9
Suse 10.3, runlevel 1:756.9

Please note that Ian's Intel Core Duo Processor at 2Ghz scored only 
557.9 points on this test, while my much older 1.3Ghz Pentium M scores 
756.9 points, at least when I benchmark in runlevel 1.


This is _very_ strange. Usually I would say the benchmark is broken, but 
the benchmark simply starts a shell script that starts some GNU 
utilities. There's not much you can break here.


Can someone confirm that running in runlevel 1 yields much higher 
benchmark scores?


Puzzled
nordi



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-13 Thread Anders Johansson
On Saturday 13 October 2007 22:09:37 nordi wrote:
 Ian Smith wrote:
  I've been doing some benchmarking of OpenSUSE using UnixBench 5.1.  I
  noticed that 10.3 is 15% - 25% slower than 10.2.  (10.2 was 50% faster
  than 10.1, yay!).

 The benchmark is really showing very strange numbers. The shell script
 benchmark (consisting mainly of sort and grep) is only _half_ as fast in
 10.3 as in 10.2 in your measurements. I ran that test on my system and
 got similar results. Interestingly, the performance is much higher if I
 switch to runlevel 1!!! Here are my results for "./Run shell1" on a
 Pentium M at 1.3Ghz:

 Suse 10.0, runlevel 5:511.4
 Suse 10.0, runlevel 1:920.7

 Suse 10.3, runlevel 2:385.9
 Suse 10.3, runlevel 1:756.9

 Please note that Ian's Intel Core Duo Processor at 2Ghz scored only
 557.9 points on this test, while my much older 1.3Ghz Pentium M scores
 756.9 points, at least when I benchmark in runlevel 1.

 This is _very_ strange. Usually I would say the benchmark is broken, but
 the benchmark simply starts a shell script that starts some GNU
 utilities. There's not much you can break here.

 Can someone confirm that running in runlevel 1 yields much higher
 benchmark scores?

Well, runlevel 1 has nothing running, so anything you do will have the machine 
more or less to itself. I would be surprised if you didn't get higher scores 
there

But yes, the benchmark is broken. I haven't looked in any great detail at what 
it does, but how it measures it is just wrong.

In theory, it runs for 60 seconds, and at the end it counts how many 
iterations it has managed to do in that time, averaged over a couple of runs

The problem is that it never checks if it has run for 60 seconds. It sets up a 
signal handler for SIGALRM, and just assumes that when the process receives 
that signal, the 60 seconds are up and it's time to report. This isn't a good 
idea for any number of reasons

Now let's see what the actual job does.

Anders

-- 
Madness takes its toll



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-13 Thread nordi

Anders Johansson wrote:
Well, runlevel 1 has nothing running, so anything you do will have the machine 
more or less to itself. I would be surprised if you didn't get higher scores 
there
Higher scores, yes. But I would expect 1% more, maybe even 5% more. But 
certainly not 100% higher performance. Otherwise that would mean the few 
services running in runlevel 2 eat up 50% of my CPU time.


I also tried killing services in runlevel 2, hoping to reach the 
performance of runlevel 1. I killed every demon I could find, unloaded 
some kernel modules... nothing changed. Since you said the benchmark 
uses signals for its timing: Is there anything that might send signals 
in one runlevel and not send them in another?


Regards
nordi





Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-13 Thread Anders Johansson
On Saturday 13 October 2007 22:43:37 Anders Johansson wrote:
 Now let's see what the actual job does.

Not sure if I'm missing something here, but...

As far as I can see, the shell1 test simply runs pgms/tst.sh over and over 
again, for 60 seconds

I run it, and I got a score of 316 for the 1 concurrent instance.

Then I ran "time for i in {1..700}; do ../pgms/tst.sh; done" and it took 
21.179 seconds

Somewhere, there is something I'm missing

Anders

-- 
Madness takes its toll



Re: [opensuse] OpenSUSE 10.3 benchmarking: slower than 10.2?

2007-10-13 Thread Lew Wolfgang
nordi wrote:
 Ian Smith wrote:
 I've been doing some benchmarking of OpenSUSE using UnixBench 5.1.  I
 noticed that 10.3 is 15% - 25% slower than 10.2.  (10.2 was 50% faster
 than 10.1, yay!).
 The benchmark is really showing very strange numbers. The shell script
 benchmark (consisting mainly of sort and grep) is only _half_ as fast in
 10.3 as in 10.2 in your measurements. I ran that test on my system and
 got similar results. Interestingly, the performance is much higher if I
 switch to runlevel 1!!! Here are my results for ./Run shell1 on a
 Pentium M at 1.3Ghz:

I didn't try Ian's benchmarks, but I did fiddle around
a bit with a floating-point intensive one that I've
been using for years.  It calculates very long FFT's
and displays the accuracy.

Bottom line is I didn't see any significant differences
between runlevels 1 and 5.  The benchmark ran in 8.7
seconds as measured by time.

It did run a bit faster in 10.3 than 10.2.  However, this
wasn't a fair test since my 10.2 is 32-bit, my 10.3 64-bit
on the same computer.  Interesting nevertheless, and all
in runlevel 5.

10.2 (with 10.2 binary) 0m9.866s
10.3 (with 10.2 binary) 0m9.871s
10.3 (with 10.3 binary) 0m8.734s

Here are the particulars:

10.2
Linux train 2.6.18.8-0.5-bigsmp #1 SMP Fri Jun 22 12:17:53 UTC 2007 i686 i686 
i386 GNU/Linux

10.3
Linux train 2.6.22.9-0.4-default #1 SMP 2007/10/05 21:32:04 UTC x86_64 x86_64 
x86_64 GNU/Linux

10.2
real0m9.866s
user0m9.825s
sys 0m0.040s

10.3 (running 10.2 binary)
real0m9.871s
user0m9.813s
sys 0m0.060s

10.3 (running 10.3 binary)
real0m8.734s
user0m8.701s
sys 0m0.036s

Here's the source for the benchmark.  For the purposes
of this report, all versions were compiled:
gcc -O3 edelbench.c -lm

/* Dave Edelblut's Benchmark */
#include <stdio.h>
#include <math.h>
#include <time.h>
#define NP_MAX 8388608
/* #define NP_MAX 16777216 */
#define DATA_TYPE float
/* #define DATA_TYPE double */
typedef struct { DATA_TYPE r; DATA_TYPE i; } complex;

/*
   A Duhamel-Hollman split-radix dif fft
   Ref: Electronics Letters, Jan. 5, 1984
   Complex input and output data in arrays x and y
   Length is n
*/

int cfft( complex *x, int np )
{
int i,j,k,m,n,i0,i1,i2,i3,is,id,n1,n2,n4 ;
DATA_TYPE  a,e,a3,cc1,ss1,cc3,ss3,r1,r2,s1,s2,s3,xt ;
  x = x - 1;
  i = 2; m = 1; while (i < np) { i = i+i; m = m+1; };
  n = i; if (n != np) {
for (i = np+1; i <= n; i++)  { x[i].r=0.0; x[i].i=0.0; };
/* printf("\nuse %d point fft\n", n); */ }
  n2 = n+n;
  for (k = 1;  k <= m-1; k++ ) {
n2 = n2 / 2; n4 = n2 / 4; e = 2.0 * M_PI / n2; a = 0.0;
for (j = 1; j <= n4 ; j++) {
  a3 = 3.0*a; cc1 = cos(a); ss1 = sin(a);
  cc3 = cos(a3); ss3 = sin(a3); a = j*e; is = j; id = 2*n2;
  while ( is < n ) {
  for (i0 = is; i0 <= n-1; i0 = i0 + id) {
 i1 = i0 + n4; i2 = i1 + n4; i3 = i2 + n4;
 r1= x[i0].r - x[i2].r;
 x[i0].r = x[i0].r + x[i2].r;
 r2= x[i1].r - x[i3].r;
 x[i1].r = x[i1].r + x[i3].r;
 s1= x[i0].i - x[i2].i;
 x[i0].i = x[i0].i + x[i2].i;
 s2= x[i1].i - x[i3].i;
 x[i1].i = x[i1].i + x[i3].i;
 s3= r1 - s2; r1 = r1 + s2; s2 = r2 - s1; r2 = r2 + s1;
 x[i2].r = r1*cc1 - s2*ss1;
 x[i2].i = -s2*cc1 - r1*ss1;
 x[i3].r = s3*cc3 + r2*ss3;
 x[i3].i = r2*cc3 - s3*ss3;
 }
   is = 2*id - n2 + j; id = 4*id;
}
}
  }

  /*
-Last stage, length=2 butterfly-
*/
  is = 1; id = 4;
  while ( is < n) {
  for (i0 = is; i0 <= n; i0 = i0 + id) {
  i1 = i0 + 1; r1 = x[i0].r;
  x[i0].r = r1 + x[i1].r;
  x[i1].r = r1 - x[i1].r;
  r1 = x[i0].i;
  x[i0].i = r1 + x[i1].i;
  x[i1].i = r1 - x[i1].i;
}
  is = 2*id - 1; id = 4 * id; }
  /*
c--Bit reverse counter
*/
  j = 1; n1 = n - 1;
  for (i = 1; i <= n1; i++) {
if (i < j) {
  xt = x[j].r;
  x[j].r = x[i].r; x[i].r = xt;
  xt = x[j].i; x[j].i = x[i].i;
  x[i].i = xt;
}
k = n / 2; while (k < j) { j = j - k; k = k / 2; }
j = j + k;
  }
  return(n);
  }


/*
program to test fast fourier transform in double precision;
*/

int main(void)
{
int i,j,ib,np,npm,n2,kr,ki;
double a,enp,t,rx,y,zr,zi,pi,el_t;
clock_t ct0,ct1,ct2,ctd;
static complex x[NP_MAX];

pi = M_PI;
np = 1024;
ct0 = clock();
  printf("\n fft benchmark - double precision - GNU C\n");
  while (np <= NP_MAX){
  printf("np =%7d", np);  enp = np; npm = np/2-1;  t = pi/enp;
  x[0].r = (enp - 1.0) / 2.0;  x[0].i = 0;
  n2 = np / 2;  x[n2].r = -0.5; x[n2].i = 0.0;
  for (i = 1; i <= npm; i++) {  j = np - i;
  x[i].r = -0.5; x[j].r = -0.5;
  y = t * i;  y = -cos(y)/sin(y)/2.0;
  x[i].i = y; x[j].i = -y;
  }
  ct1 = clock(); i = cfft(x,np); ct2 = clock(); ctd = ct2 - ct1;
  el_t = (double) ctd;