ADV: Weight loss prescription medication now! Save time & money!

2002-02-17 Thread Elizabeth
Title: MedRxPharmacy filling prescriptions online

Weight-loss, sexual, and pain-relief medications online without a doctor's bill! Order prescription drugs ONLINE from a trusted pharmacy located in the USA. Just answer a few questions about your health, and one of our licensed physicians will electronically issue you a prescription and our online pharmacy will fill your order. It's that simple! You'll have your medication delivered to your door immediately. No need for an office visit. No consultation fees!

GO TO MED RX PHARMACY NOW!

Should you not wish to be contacted at this email address again, please click on this link and follow the instructions. This message is a commercial advertisement. It is compliant with all federal and state laws regarding email messages, including the California Business and Professions Code. We have provided this "opt out" link so you can be deleted from our mailing list. In addition, we have used the subject line "ADV" to notify you that this is a commercial advertisement intended for persons over 18 years old.



Process capability Cpk goals (industrial statistics)

2002-02-17 Thread Boris

Hi. Does anyone there have experience setting organizational (plant-wide)
Cpk goals using confidence intervals and/or hypothesis testing?

Most places use just point estimates for Cpk, but the literature
(e.g., the classic Montgomery SPC book) describes a confidence interval
approach.

I'd like to hear about such a Cpk assessment system and how it works in
practice.

Boris.


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Likert Scale Analysis - HELP!

2002-02-17 Thread Art Kendall

For a small set of data like this, using SPSS is pretty straightforward.
Use the Data View (spreadsheet) to put your data in. Use the Variable View
spreadsheet to define your variables. You can copy info from one
row to another. It is worthwhile to take the time to put all the labels
in. Be sure to include an ID so you can refer back to the survey instrument.
Be sure to proofread your Data View before doing any analysis.

  
In the Frequencies dialog, drag the variables you are interested in to the
"frequencies for" box, set count and percent to display, and set zero decimals.

Assuming you do this all in one session, you can then run syntax like the
following; change the lists of variables to match your questions.
correlations
  item01 to item34 with sex age/
  item01 to item05/
  item01 to item05 with item28 item13 item33/ .
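If SPSS still feels daunting and you would rather stay closer to a spreadsheet, here is a rough equivalent sketch in Python with pandas, purely as an illustration; the file name and column names below are made up, so adjust them to your survey.

import pandas as pd

# Made-up file and column names: one row per respondent, one column per item.
df = pd.read_csv("survey.csv")

items = ["item%02d" % i for i in range(1, 35)]

# Counts per response category for every item (like FREQUENCIES).
print(df[items].apply(pd.Series.value_counts))

# Correlation of each item with age, and within a small group of items.
print(df[items].corrwith(df["age"]))
print(df[["item01", "item02", "item03"]].corr())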

Why did you do the survey?

What is the nature of your items? Are they designed to be used in groups
(scales)?
If they are attitude items, are some positively worded and some negatively
worded?

Dave M wrote:

> Hi there,
>
> I have recently done a 5-point likert-style survey (with 34 questions)
> and got about 45 responses.
>
> I am not great at statistics, and have not studied it since high
> school!
>
> Can someone please give me some advice on how to analyse the data?
>
> I am not looking to do a full-on smart-ass analysis, I am realistic of
> the time I can allocate to get this done (a few days max). I *would*
> like to examine correlations between pairs (or even better - groups)
> of questions.
>
> The trouble I find with trying to learn stats is that all the books
> tell me what to use, but not WHY I should use it.
>
> I have been told that it would be good idea if I used SPSS, but it
> looks a little daunting - easy for them to say. I am pretty good with
> Excel... I am not a lazy person, but I would love to plug the figures
> into a custom-made spreadsheet and then just set it running.
>
> Assuming that I can get the variables set up in SPSS, what are the
> functions to use?
>
> thanks.



=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



585858

2002-02-17 Thread ghgfhgfh



	
  

Join the Asia Dating Center - the largest Asian personals dating website!







=== This email was sent using Caretop bulk-mailing software; the content of the message is unrelated to Caretop software. Caretop software: http://www.caretop.com ===


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=


Work At Home Biz *FREE For U.K*

2002-02-17 Thread BIGmoney237027









GET POSITIONED IN THE POWER LEG
BEFORE WE SEND THIS EMAIL TO MILLIONS...

IT IS FREE... WHAT DO YOU HAVE TO LOSE??

FREE POSITIONING UNTIL MARCH 4TH


FREE SIGN UP... I will have 5000 in your downline by month end... go to the site
right now and register for free... don't worry about the money; it
can come from your commissions... http://www.freeearlysignup.biz/mmm

I promised in the 1st e-mail to the ReferEveryone.com database that I
would get them a 1000-person downline... "I LIED" - I got them a 5000-person
downline... Sign up now, it's FREE! http://www.freeearlysignup.biz/mmm

The following are some powerful testimonials:

I AM IMPRESSED! I enrolled mid-last week. I paid up my fees today. I purchased 
my PAP leads and went for it all. I ordered the 60 signup package, doubled. 
Before the day had ended I had every signup (120) promised. And, these are 
people who are already interested in the program. AMAZING! I sure hope that 
everyone in my downline takes advantage of this amazing service before it 
expires. I have never seen support like this before on the net. NO HYPE, all 
delivery. Thank you Phil!
Randy, Phoenix, AZ


I couldn't believe it!!! I purchased the one thousand dollar pack on
Thursday, went fishing on Friday, and by Friday night I had all of my 20
PAP (plus some) members. GREAT WORK PHIL!! And I can visualize
OneQwest as my future. Keep up the great work!
Rod Deide

Sign up now, it's FREE! http://www.freeearlysignup.biz/mmm

Well... I have been in about 10 days now, and have just activated. The opportunity
is along the same lines as what I had envisioned trying to do myself, and when this
came along I jumped right on it. Sponsoring folks is quite easy: either use the
PAP system and they do all the work for you, or wait for possible spillover... I
used the PAP and had my sponsored amount within 1 day - fantastic.
Thomas Rutledge 
newbie 



Hello Phil,
You are simply amazing - placing 120 personally sponsored people in my
downline within 24 hours. You should teach your recruiting method to all the
members so Oneqwest can become the most gigantic MLM company in the world.
I must say, that you are a very trustworthy person. I started to receive my
downline even before you received my payment. I cannot thank you enough.
With your help, I'm looking forward to a very profitable year and beyond
with Oneqwest.
Thank you again,
Aloha
Tom Nishiyama


Dear Phil,
I signed up with Oneqwest on 1/25/02 with a belief that this opportunity would 
enable me to make a good living working from Home. My goal was to make $5,000 
per month and I think I will reach that goal very soon with the Oneqwest 
Business Opportunity. I will ask my wife to retire early after we continue to 
promote Oneqwest and deposit our $5,000+ checks
per month consistently for a six-month period. I believe in Oneqwest and I continue
to promote it all over the world. 
Sincerely,
Harold Estes


I have been with Oneqwest for two weeks. I paid in the FIRST week and found 169
personally enrolled people in my line, and now there are even more. At first I was
sceptical and asked many questions, which were answered to my satisfaction. Now I
want to see my first payment...
All the best to you
Hilarion

I joined free within an hour of being notified - 2 days later I activated my
position. I sent the transfer and forms last week. I live in the UK and this takes
some 4 days at a cost of $50. I am still waiting to be fully activated - I can't
wait. I am working on this venture only, as I feel it (the product) will soon be
seen on every street corner. I use 2 tools to promote my site every week to
search engines and for spidering, as well as emailing people who have used my
FFA site. I am now using ROIbot as well. NO SPAM.
Regards
Mark Sellars

Sign up now, it's FREE! http://www.freeearlysignup.biz/mmm

Aloha! The first week after I signed up, I got 7 people signed up! I am going full
speed ahead in the next two weeks and I am getting my POWER LEGS in place! Good
luck to everyone - this is the Greatest Opportunity Ever! Mahalo, Michael Yamane

P.S. See you in Hawaii - you're all welcome to visit our island of KAUAI.


I believe One Qwest is one of the greatest opportunities to enter the
network marketing field. I've never seen something build so fast, and with
such a unique product, in my over 20 years in the business. What can I say about
my upline - a true visionary.
Tom


Hello Phil --
I'm pretty amazed at the response I got from the folks I sent emails to.
I joined Sunday, January 27th, and I have personally sponsored 30 people.
My email just told them:
"This is MLM with a twist ... we don't sell the product ... the company
does."

I guess people like that idea!
Jerry Booth
Westminster, CA

Hello
I joined as soon as I got the information on the program. Mainly because of the 
downline that was being built for 

Re: Question on random number generator

2002-02-17 Thread Linda

Thanks everyone for helping me...

Regards,
Linda


Art Kendall <[EMAIL PROTECTED]> wrote in message 
news:<[EMAIL PROTECTED]>...
> try this SPSS syntax.
> 
> new file.
> * this program generates 200 cases
> * trims those outside the desired range
> * and takes the first 100  of the remaining.
> * change lines flagged with  < .
> input program.
> loop #i = 1 to 200. /* < .
> compute mu= .005. /* < .
> compute x = rv.exp(mu).
> end case.
> end loop.
> end file.
> end input program.
> formats mu (f6.3).
> select if x gt 0 and x le 150. /* < .
> compute seqnum =$casenum.
> execute.
> select if seqnum le 100. /* < .
> execute.
> 
> 
> Linda wrote:
> 
> > I want to generate a series of random variables, X with exponential
> > PDF with a given mean,MU value. However, I only want X to be in some
> > specified lower and upper limit?? Say between 0 -> 150 i.e. rejected
> > anything outside this range Does anyone have any ideas how should I do
> > that??
> >
> > Regards,
> > Linda
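
For anyone who wants to do the same thing outside SPSS, here is a rough sketch of the same oversample-and-trim idea in Python with NumPy; the mean, the limits, and the oversampling factor below are illustrative choices, not part of the original post.

import numpy as np

rng = np.random.default_rng(2002)

mu = 50.0                 # illustrative mean of the exponential
lower, upper = 0.0, 150.0
n_wanted = 100

# Oversample, drop draws outside the limits, keep the first n_wanted survivors.
x = rng.exponential(scale=mu, size=10 * n_wanted)
x = x[(x > lower) & (x <= upper)][:n_wanted]

print(len(x), x.mean())

Note that rejecting values above the upper limit pulls the sample mean below mu; if the target mean must hold after truncation, the truncated exponential should be sampled directly (for example by inverting its CDF).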


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Likert Scale Analysis - HELP!

2002-02-17 Thread Dave M

Hi, 

Well, the survey is for a project looking into ways the Internet can
enhance learning.
The first part of the survey asks pertinent questions about the respondents'
current study/learning environment, such as "I have trouble finding
library books at the right time" and "I see lectures as a major source
of learning", and they state their opinion on the 5-point range from
strongly agree to strongly disagree.

The second part uses the same scale, but asks for opinions on a number
of innovations for using the Internet in learning, in terms of whether
it would be an improvement.

Therefore, I want to find correlations between groups of factors in
the first section and opinions in the second section.


Hope that makes sense - are there some obvious ways to tackle this
analysis?


thanks.


[EMAIL PROTECTED] (Simon, Steve, PhD) wrote in message 
news:...
> It's difficult to answer a question that is asked so generally. You might
> try explaining to this group why you collected the data in the first place.
> For the most part, it is typically to:
> 
> 1. characterize a specific group of interest, 
> 2. compare two or more specific groups,
> 3. discover a pattern among several variables.
> 
> If your answer is, "because I had to do it for an assignment" then you need
> to take a step back and ask yourself why someone else might be
> interested in the data you have collected. You might also seek feedback from
> your teacher (or your boss if this was a work assignment).
> 
> There may also be multiple objectives. If so, just specify the two or three
> that are most important or interesting.
> 
> Don't be bashful and don't be vague. The more information you can provide,
> the better answer we can provide.
> 
> Do keep your objectives realistic, of course. Both because of the limited
> time you have and the small sample size that you have collected.
> 
> Steve Simon, [EMAIL PROTECTED], Standard Disclaimer.
> The STATS web page has moved to
> http://www.childrens-mercy.org/stats
> 
> 
>  -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] 
> Sent: Saturday, February 16, 2002 4:59 PM
> To:   [EMAIL PROTECTED]
> Subject:  Likert Scale Analysis - HELP!
> 
> Hi there,
> 
> I have recently done a 5-point likert-style survey (with 34 questions)
> and got about 45 responses.
> 
> I am not great at statistics, and have not studied it since high
> school!
> 
> Can someone please give me some advice on how to analyse the data? 
> 
> I am not looking to do a full-on smart-ass analysis, I am realistic of
> the time I can allocate to get this done (a few days max). I *would*
> like to examine correlations between pairs (or even better - groups)
> of questions.
> 
> The trouble I find with trying to learn stats is that all the books
> tell me what to use, but not WHY I should use it.
> 
> I have been told that it would be good idea if I used SPSS, but it
> looks a little daunting - easy for them to say. I am pretty good with
> Excel... I am not a lazy person, but I would love to plug the figures
> into a custom-made spreadsheet and then just set it running.
> 
> Assuming that I can get the variables set up in SPSS, what are the
> functions to use?
> 
> 
> thanks.
> 
> 
> 
> Instructions for joining and leaving this list, remarks about the
> problem of INAPPROPRIATE MESSAGES, and archives are available at
>   http://jse.stat.ncsu.edu/
> 
> 
> --


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Statistical Distributions

2002-02-17 Thread Alan McLean

This is a good idea, Dennis. I would like to see the sequence start with
the binomial - in a very real way, the normal occurs naturally as an
'approximation' to the binomial.

Alan
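
A quick numerical illustration of that binomial-to-normal idea, sketched in Python with SciPy; n and p are arbitrary example values, not taken from either post.

import numpy as np
from scipy import stats

n, p = 50, 0.3                                   # arbitrary example values
k = np.arange(n + 1)

exact = stats.binom.pmf(k, n, p)                 # exact binomial probabilities
approx = stats.norm.pdf(k, loc=n * p,            # normal curve with the same
                        scale=np.sqrt(n * p * (1 - p)))  # mean and variance

# The largest pointwise discrepancy shrinks as n grows.
print(np.abs(exact - approx).max())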


Dennis Roberts wrote:
> 
> Back in 1970, Glass and Stanley in their excellent Statistical Methods in
> Education and Psychology book, Prentice-Hall ... had an excellent chapter
> on several of the more important distributions used in statistical work
> (normal, chi square, F, and t) and developed how each was derived from the
> other(s). Most recent books do not develop distributions in this fashion
> anymore: they tend to discuss distributions ONLY when a specific test is
> discussed. I have found this to be a more disjointed treatment.
> 
> Anyway, I have developed a handout that parallels their chapter, and have
> used Minitab to do simulation work that supplements what they have presented.
> 
> The first form of this can be found in a PDF file at:
> 
> http://roberts.ed.psu.edu/users/droberts/papers/statdist2.PDF
> 
> Now, there is still some editing work to do AND, working with the spacing
> of text. Acrobat does not allow too much in the way of EDITING features
> and, trying to edit the original document and then convert to pdf, is also
> somewhat of a hit and miss operation.
> 
> When I get an improved version with better spacing, I will simply copy over
> the file above.
> 
> In the meantime, I would appreciate any feedback about this document and
> the general thrust of it.
> 
> Feel free to pass the url along to students and others; copy freely and use
> if you find this helpful.
> 
> Dennis Roberts, 208 Cedar Bldg., University Park PA 16802
> 
> WWW: http://roberts.ed.psu.edu/users/droberts/drober~1.htm
> AC 8148632401
> 
> =
> Instructions for joining and leaving this list, remarks about the
> problem of INAPPROPRIATE MESSAGES, and archives are available at
>   http://jse.stat.ncsu.edu/
> =

-- 
Alan McLean ([EMAIL PROTECTED])
Department of Econometrics and Business Statistics
Monash University, Caulfield Campus, Melbourne
Tel: +61 03 9903 2102    Fax: +61 03 9903 2007



=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Weibull --> Gumbel

2002-02-17 Thread Uta & Ulrich Horstmann

Hi!
I know the relationship between the Weibull and the Gumbel, i.e. if T is
Weibull then log_e(T) is Gumbel.

Also, I understand how to prove that. However, I fail to prove how the
Gumbel evolves from the 3-parameter Weibull. So far, I have the following:

R: reliability function
R_Y(y) = P(Y > y) = P(log T > y) = P(T > exp(y)) = R_T(exp(y))
       = exp(-(exp(y)/alpha)^beta) = exp(-exp((y - gamma)/eta))

with eta = 1/beta and gamma = log alpha
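
As a quick numerical check of this two-parameter relationship, a small simulation along the following lines can be run in Python with SciPy; alpha and beta are arbitrary example values.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, beta = 2.0, 1.5                        # arbitrary scale and shape

t = alpha * rng.weibull(beta, size=100_000)   # T ~ Weibull(scale alpha, shape beta)
y = np.log(t)                                 # Y = log T

# Y should follow the Gumbel (smallest extreme value) distribution with
# location gamma = log(alpha) and scale eta = 1/beta.
print(stats.kstest(y, 'gumbel_l', args=(np.log(alpha), 1.0 / beta)))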

If I try to do the same with the three-parameter Weibull, then
R_T(exp(y)) = exp(-((exp(y) - delta)/alpha)^beta)

where delta is the location parameter of the Weibull distribution.

Any idea/help available?

Thanks a lot! Ulrich






=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Process capability Cpk goals (industrial statistics)

2002-02-17 Thread Jay Warner

Boris,
There are lots of ways to use different statistically calculated
numbers. I am suspicious, nonetheless, that your concept of a plant-wide
goal for Cpk, either as a point estimate or as a confidence interval,
will not let you reach the larger goal you seek.

One can manipulate the math to show that an SPC chart's control limits are
mathematically equivalent to a Student's 't' test. The interpretation of
certain terms is not identical, exactly, but who cares? (slap your face,
Jay.)

The Cp and Cpk can be mathematically adjusted to show the mathematical
equivalence. Without going into the details, I believe you can show that
the Cpk, CI, and 't' values are closely related.

Use of point estimates of Cp and Cpk as yardsticks or standards or goals
tends to ignore the great sensitivity of these values to variation in the
standard deviation. Translation: you would be much better off using
archival documentary data from 50-100 measurements to get
decent estimates of the standard deviation. This leads to questions of
process variability and product variability.
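
To make that sensitivity concrete, here is a rough Python sketch of a Cpk point estimate together with the approximate large-sample confidence interval given in Montgomery's SPC text; the specification limits and measurements below are made-up illustrative numbers.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lsl, usl = 140.0, 160.0                    # made-up specification limits
x = rng.normal(151.0, 2.5, size=50)        # made-up sample of 50 measurements

n, xbar, s = len(x), x.mean(), x.std(ddof=1)
cpk = min(usl - xbar, xbar - lsl) / (3 * s)

# Approximate 95% CI for Cpk (large-sample formula from Montgomery's SPC book).
z = stats.norm.ppf(0.975)
half = z * cpk * np.sqrt(1 / (9 * n * cpk**2) + 1 / (2 * (n - 1)))
print("Cpk = %.2f, approx 95%% CI = (%.2f, %.2f)" % (cpk, cpk - half, cpk + half))

Rerunning this with a much smaller n shows how wide the interval gets, which is exactly the sensitivity issue described above.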

In turn, these process and product variabilities will lead (eventually) to a
question of customer needs, as interpreted by (engineering, marketing, the CEO,
whoever). Finally, all this number pushing gets down to the real issues!

Not every dimension can be easily or precisely measured.  Not every
dimension needs the same Cpk.  So much for plant wide values.

I recommend you put your key focus on the issues that really count -
customer requirements, as interpreted.  Let these determine production
requirements.

Cheers,
Jay

Boris wrote:

> Hi. Does anyone there have experience setting organizational (plant-wide)
> Cpk goals using confidence intervals and/or hypothesis testing?
>
> Most places use just point estimates for Cpk, but the literature
> (e.g., the classic Montgomery SPC book) describes a confidence interval
> approach.
>
> I'd like to hear about such a Cpk assessment system and how it works in
> practice.
>
> Boris.
>
> =
> Instructions for joining and leaving this list, remarks about the
> problem of INAPPROPRIATE MESSAGES, and archives are available at
>   http://jse.stat.ncsu.edu/
> =

--
Jay Warner
Principal Scientist
Warner Consulting, Inc.
 North Green Bay Road
Racine, WI 53404-1216
USA

Ph: (262) 634-9100
FAX: (262) 681-1133
email: [EMAIL PROTECTED]
web: http://www.a2q.com

The A2Q Method (tm) -- What do you want to improve today?






=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Newbie question

2002-02-17 Thread Rich Ulrich

On 15 Feb 2002 14:38:49 -0800, [EMAIL PROTECTED] (AP) wrote:

> Hi all:
> 
> I would appreciate your help in solving this question.
> 
> calculate the standard deviation of a sample where the mean and 
> standard deviation from the process are provided?
> E.g. Process mean = 150; standard deviation = 20. What is the SD for 
> a sample of 25?  The answer suggested is 4.0

Here is a vocabulary distinction.   Or error.
I don't know if you are repeating the problem wrong, or 
you are speaking from a tradition that I am not familiar with.

As I am familiar with it, statisticians say that 
"the standard deviation"  is the "standard deviation of the sample."

We say that the "standard deviation of the sample *mean*"
is what is frequently referred to as the "standard error"; and
"the SD of the mean [or the SE] equals SD/sqrt(N)".

That is confusing enough.  
I hope this makes your sources clear.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Statistical Distributions

2002-02-17 Thread Timothy W. Victor

I also think Alan's idea is sound. I start my students off with some
binomial expansion theory.

Alan McLean wrote:
> 
> This is a good idea, Dennis. I would like to see the sequence start with
> the binomial - in a very real way, the normal occurs naturally as an
> 'approximation' to the binomial.
> 
> Alan
> 
> Dennis Roberts wrote:
> >
> > Back in 1970, Glass and Stanley in their excellent Statistical Methods in
> > Education and Psychology book, Prentice-Hall ... had an excellent chapter
> > on several of the more important distributions used in statistical work
> > (normal, chi square, F, and t) and developed how each was derived from the
> > other(s). Most recent books do not develop distributions in this fashion
> > anymore: they tend to discuss distributions ONLY when a specific test is
> > discussed. I have found this to be a more disjointed treatment.
> >
> > Anyway, I have developed a handout that parallels their chapter, and have
> > used Minitab to do simulation work that supplements what they have presented.
> >
> > The first form of this can be found in a PDF file at:
> >
> > http://roberts.ed.psu.edu/users/droberts/papers/statdist2.PDF
> >
> > Now, there is still some editing work to do AND, working with the spacing
> > of text. Acrobat does not allow too much in the way of EDITING features
> > and, trying to edit the original document and then convert to pdf, is also
> > somewhat of a hit and miss operation.
> >
> > When I get an improved version with better spacing, I will simply copy over
> > the file above.
> >
> > In the meantime, I would appreciate any feedback about this document and
> > the general thrust of it.
> >
> > Feel free to pass the url along to students and others; copy freely and use
> > if you find this helpful.
> >
> > Dennis Roberts, 208 Cedar Bldg., University Park PA 16802
> > 
> > WWW: http://roberts.ed.psu.edu/users/droberts/drober~1.htm
> > AC 8148632401
> >
> > =
> > Instructions for joining and leaving this list, remarks about the
> > problem of INAPPROPRIATE MESSAGES, and archives are available at
> >   http://jse.stat.ncsu.edu/
> > =
> 
> --
> Alan McLean ([EMAIL PROTECTED])
> Department of Econometrics and Business Statistics
> Monash University, Caulfield Campus, Melbourne
> Tel: +61 03 9903 2102    Fax: +61 03 9903 2007
> 
> =
> Instructions for joining and leaving this list, remarks about the
> problem of INAPPROPRIATE MESSAGES, and archives are available at
>   http://jse.stat.ncsu.edu/
> =

-- 
Tim Victor
Policy Research, Evaluation, and Measurement
Psychology in Education Division
Graduate School of Education
University of Pennsylvania


=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Which is faster? ziggurat or Monty Python (or maybe something else?)

2002-02-17 Thread Glen Barnett


Alan Miller <[EMAIL PROTECTED]> wrote in message
news:OC2b8.28457$[EMAIL PROTECTED]...
> First - the reference to George's paper on the ziggurat, and the code:
> The Journal of Statistical Software (2000) at:
> http://www.jstatsoft.org/v05/i08

That I already have, thanks.

Glen



=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Which is faster? ziggurat or Monty Python (or maybe something else?)

2002-02-17 Thread Glen Barnett


Bob Wheeler <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Marsaglia's ziggurat and MCW1019 generators are
> available in the R package SuppDists. The gcc
> compiler was used.

Thanks Bob.

Glen



=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Which is faster? ziggurat or Monty Python (or maybe something else?)

2002-02-17 Thread Glen Barnett


George Marsaglia <[EMAIL PROTECTED]> wrote in message
news:0l7b8.42092$[EMAIL PROTECTED]...
> (3-year old) Timings, in nanoseconds, using Microsoft Visual C++
> and gcc under DOS on a 400MHz PC.  Comparisons are with
> methods by Leva and by Ahrens-Dieter, both said to be fast,
> using the same uniform RNG.
>
>                    MS   gcc
> Leva              307   384
> Ahrens-Dieter     161   193
> RNOR               55    65   (Ziggurat)
> REXP               77    40   (Ziggurat)
>
>
> The Monty Python method is not quite as fast as the Ziggurat.

Thanks for the information. Could you give a rough idea of the relativities?
Roughly 5% slower? 10%? 30%?

I realise it's machine-dependent, but I'm only after a rough picture.

> Some may think that Alan Miller's somewhat vague reference to
> a source for the ziggurat article suggests disdain.

I didn't get that impression.

> (I don't have a web page, so the above can be considered
>  my way to play Ozymandius.)

I wish you did!

Glen



=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=



Re: Which is faster? ziggurat or Monty Python (or maybe something else?)

2002-02-17 Thread Glen Barnett


Art Kendall <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I tend to be more concerned with the "apparent randomness" of the results
> than with the speed of the algorithm.

This will be mainly a function of the randomness of the uniform generator. If
we assume the same uniform generator for both, and assuming it's a pretty good
one (our current one is reasonable, though I want to go back and update it
soon), there shouldn't be a huge difference in the apparent randomness of the
resulting gaussians.

> As a thought experiment, what is the cumulative time difference in a run
> using the fastest vs the slowest algorithm? A whole minute? A second?
> A fractional second?

When you need millions of them (as we do; a run of 10,000 simulations could
need as many as 500 million gaussians, and we sometimes want to do more than
10,000), and you also want your program to be interactive (in the sense that
the user doesn't have to wander off and have coffee just to do one simulation
run), knowing that one algorithm is, say, 30% faster is kind of important.
Particularly if the user may want to do hundreds of simulations...

A whole minute extra on a simulation run is a big difference, if the user is
doing simulations all day.
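
Rough back-of-the-envelope arithmetic, using the nanosecond timings quoted earlier in the thread purely as an illustration:

# Back-of-the-envelope: what a 30% speed difference means at this scale.
per_gaussian_ns = 65          # illustrative per-draw cost, from the quoted timings
n_draws = 500_000_000         # gaussians for one large simulation run

fast = per_gaussian_ns * n_draws * 1e-9   # seconds
slow = fast * 1.3                         # a generator that is 30% slower
print("%.1f s vs %.1f s -> %.1f s extra per run" % (fast, slow, slow - fast))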

Glen




=
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
  http://jse.stat.ncsu.edu/
=