University Job Bank - new website
FYI: You can post and search jobs for free at the new website University Job Bank: http://www.UJobBank.com

Two sister websites:
Post-doctoral positions: http://www.post-docs.com
Graduate Assistantships: http://www.GradAsst.com

Hope this helps.

--
Find a job at the University Job Bank
http://www.UJobBank.com

Sent via Deja.com http://www.deja.com/ Before you buy.

===
This list is open to everyone. Occasionally, less thoughtful people send inappropriate messages. Please DO NOT COMPLAIN TO THE POSTMASTER about these messages because the postmaster has no way of controlling them, and excessive complaints will result in termination of the list.

For information about this list, including information about the problem of inappropriate messages and information about how to unsubscribe, please see the web page at http://jse.stat.ncsu.edu/
===
Nonparametric repeated measures
Is anyone aware of a nonparametric procedure/analogue for repeated measures ANOVA, e.g., a repeated-measures (pre/post) design with intervention and control groups?

Thanks,
SR Millis
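A sketch of the usual textbook answer (my addition, not from the thread): the Friedman test is the classic nonparametric analogue of one-way repeated measures ANOVA, and for a pre/post design with two groups one common rank-based tactic is a Mann-Whitney U test on the pre-to-post change scores. All data below are invented for illustration, and SciPy is assumed:

```python
from scipy.stats import friedmanchisquare, mannwhitneyu

# Friedman: one group of subjects measured under three conditions
# (element i of each list is the same subject).
cond_a = [8.2, 7.9, 8.8, 9.1, 7.5]
cond_b = [7.1, 6.8, 7.9, 8.2, 6.9]
cond_c = [6.5, 6.2, 7.1, 7.8, 6.1]
stat, p = friedmanchisquare(cond_a, cond_b, cond_c)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Pre/post with intervention and control groups: compare the
# change scores of the two groups with a rank test.
treat_change = [post - pre for pre, post in zip([10, 12, 11, 13], [14, 15, 13, 17])]
ctrl_change = [post - pre for pre, post in zip([11, 10, 12, 13], [11, 11, 12, 14])]
u, p_u = mannwhitneyu(treat_change, ctrl_change, alternative="two-sided")
print(f"Mann-Whitney U on change scores = {u}, p = {p_u:.4f}")
```

This is only one of several options; rank-transform ANOVA and mixed models on ranks are also used for this design.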
Re: Exploratory data analysis
We have some material on the subject you raise: http://www.autobox.com/outlier.html deals with exploratory data analysis (hypothesis generation). Also please see http://www.autobox.com/whatis.html

If they are useful to you, please let me and the group know.

Regards,
Dave Reilly

Ken wrote:
> Try
>
> http://www.itl.nist.gov/div898/handbook/eda/eda.htm
>
> and here's one that will give you a headache:
>
> http://seamonkey.ed.asu.edu/~behrens/asu/reports/Peirce/Logic_of_EDA.html
>
> Jostein Vada wrote:
> > Hi,
> > I am a Norwegian PhD student in the field of process control, and in 7 days I am going to defend my thesis. One part of the "exam" is to give a lecture on a subject which is unknown to me. Six days ago I received the title:
> >
> > "Exploratory Process Data Analysis"
> >
> > Focus is on methods and theory for the data-processing stage of model development. Issues should include nonlinear filtering techniques, robust statistics, redundant techniques for data analysis, data reconciliation, and graphical examination of data. In other words, the process from raw data to model identification.
> >
> > As far as I know, there is a huge amount of literature on all these issues. Does anyone know of books, papers or web pages which give a survey of this field?
> >
> > I'd be grateful for any information.
> >
> > Jostein Vada
Re: Texts: Factor Analysis
Check out 'Multivariate Data Analysis' (4th ed.) by Hair, Anderson, Tatham & Black. Great book.

[EMAIL PROTECTED] wrote:
> What are your favorite book(s) on factor analysis?
>
> What do you think of R. Gorsuch's book?
>
> Thanks,
> Scott Millis
Re: "Kolmogorov-Smirnov" vs "Chi Square"
On Tue, 04 Apr 2000 17:41:27 GMT, Madewell <[EMAIL PROTECTED]> wrote:
> Let me ask you guys this. If you calculated the power for the Chi-Squared test using both a small and then a large number of samples, and did the same for the KS test, what would you find?

Which chi-squared test? I keep reminding myself that there are an awful lot of tests that are ideal, or close to it, that just happen to test *different hypotheses* -- and it is rather nonsensical to compare tests without having that in mind.

The best power of the KS is likeliest to appear (though it might be elsewhere) at the median split. If you figure beforehand on a median split, you could test that single, special hypothesis with a chi-squared, and that chi-squared would outperform the KS for that alternative.

Of course, using a bunch of non-ordered categories will give you a weak test on ORDERED values, compared to any test that does treat them as ordered, whether you collapse them into categories or not.

--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
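The median-split point above can be checked by simulation. The sketch below is my own construction (distribution, sample size, and all parameters invented): the alternative differs from Uniform(0,1) ONLY in how much mass sits below 1/2, which is exactly the pre-specified-split situation described, so the single-df chi-squared concentrates all its power where the signal is while the omnibus KS spreads its power out.

```python
import numpy as np
from scipy.stats import kstest, chisquare

rng = np.random.default_rng(42)
n, n_sims, alpha = 50, 400, 0.05
p_below = 0.35  # mass below 1/2 under the alternative; the null value is 0.5

rej_split = rej_ks = 0
for _ in range(n_sims):
    below = rng.random(n) < p_below
    # Uniform within each half, but unequal mass across the halves.
    x = np.where(below, rng.random(n) * 0.5, 0.5 + rng.random(n) * 0.5)
    # Pre-specified split at the null median 1/2, tested with chi-squared:
    k = (x < 0.5).sum()
    _, p_chi2 = chisquare([k, n - k])  # default expected counts: n/2 and n/2
    _, p_ks = kstest(x, "uniform")     # omnibus KS against U(0, 1)
    rej_split += p_chi2 < alpha
    rej_ks += p_ks < alpha

print(f"power, chi-squared at the median split: {rej_split / n_sims:.2f}")
print(f"power, Kolmogorov-Smirnov:              {rej_ks / n_sims:.2f}")
```

For alternatives that are not concentrated at the split (e.g. a smooth location shift), the comparison can easily go the other way, which is the thread's larger point.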
RE: Question re Wilcoxon
S. Shapiro writes:
>I have a set of six numbers, as follows:
>
>6.77597
>7.04532
>7.17026
>7.13235
>7.56820
>6.97272
>
>which represent results from six different measurements of the same thing in six different trials, one measurement per trial. (As a consequence of measurement the samples are destroyed, so it is not possible to measure the same sample six different times. Therefore, I had to set up six separate, independent experiments and measure my parameter of interest once in each experiment.)
>
>The question I seek to answer is: are the 6 values obtained in the measuring process reproducible within statistically meaningful boundaries? I suppose another way of asking the same question is: is the null hypothesis Ho satisfied with respect to this series of measured values?

Your question is a bit vague, but let me try to answer it.

First, the phrase "statistically meaningful boundaries" is an indication that you are using statistics as a substitute for careful intellectual analysis. What you want instead of statistical boundaries is to get a scientist or engineer to specify practical boundaries that have relevance to your business or industry. For example, an expert in your area might consider a measurement process reproducible if the range is less than 2 units or if the coefficient of variation (standard deviation divided by the mean) is less than 0.25.

Statistics can tell you nothing about what is important from a practical perspective. In medicine, we might tolerate a large amount of deviation when we are measuring the body temperature of an adult, but we would want far more precision when measuring the body temperature of a pre-term infant. Only a doctor could tell you this, though. A mere statistician like me is clueless in deciding what is important from a medical perspective. Although you give no context for your data, I suspect that the same is true for your situation.
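For concreteness, the two yardsticks mentioned above (range and coefficient of variation) are easy to compute for the six values in the question. Note the "< 2 units" and "< 0.25" thresholds are Simon's hypothetical examples, not real specifications:

```python
import statistics

# The six measurements from S. Shapiro's question.
values = [6.77597, 7.04532, 7.17026, 7.13235, 7.56820, 6.97272]

mean = statistics.mean(values)
sd = statistics.stdev(values)        # sample SD, n - 1 denominator
rng_ = max(values) - min(values)     # range of the six values
cv = sd / mean                       # coefficient of variation

print(f"mean  = {mean:.4f}, sd = {sd:.4f}")
print(f"range = {rng_:.4f}  (example criterion: < 2 units)")
print(f"CV    = {cv:.4f}  (example criterion: < 0.25)")
```

Whether numbers like these count as "reproducible" is exactly the question only a subject matter expert can answer.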
No statistical summary is going to be useful until you first define what a reasonable amount of variation might be from a scientific or engineering perspective. Talk to the subject matter experts before you compute any statistics. If these measurements are for a product that you sell, you might also try asking your customers to specify what is important.

Furthermore, although you have not stated in precise terms what your null hypothesis is, I suspect that there is no reasonable null hypothesis worth testing on this data. If this data is part of an ongoing evaluation program, you might consider using control charts. Wheeler's book has a good explanation of how to use control charts (the voice of the data) and how to compare them to practical boundaries (the voice of the customer). But don't bother with anything involving statistically meaningful boundaries or testing hypotheses.

I'm sorry if these comments seem critical. One of the hardest things in statistics is deciding what your goal is when you start to collect some data. Since you only have a vague idea of what your goal is, you need to get some outside advice from experts in your area. I hope this helps.

Wheeler, Donald J. (1993). Understanding Variation: The Key to Managing Chaos. Knoxville, TN: SPC Press, Inc. (ISBN 0-945320-35-3). For the beginning student. An insightful introduction to variation in business processes, how to identify it and how to control it. A must-read for anyone working on improving quality in work processes.

Steve Simon, [EMAIL PROTECTED], Standard Disclaimer.
STATS - Steve's Attempt to Teach Statistics: http://www.cmh.edu/stats
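The control chart Simon recommends can be sketched in a few lines. A minimal version of Wheeler's individuals (XmR) chart puts the natural process limits at the mean plus or minus 2.66 times the average moving range (2.66 is the standard XmR constant, 3 divided by the bias factor d2 = 1.128). Applying it to the six values from the question is purely illustrative; six points is far too few for a real chart:

```python
# Wheeler-style XmR (individuals) chart limits for the six measurements.
values = [6.77597, 7.04532, 7.17026, 7.13235, 7.56820, 6.97272]

mean = sum(values) / len(values)
# Moving ranges: absolute differences between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

lower = mean - 2.66 * avg_mr
upper = mean + 2.66 * avg_mr
print(f"center = {mean:.3f}, natural process limits = ({lower:.3f}, {upper:.3f})")
print("points outside limits:", [v for v in values if not lower <= v <= upper])
```

Points inside the limits are read as routine (common-cause) variation; points outside signal something worth investigating, which is the "voice of the data" Wheeler contrasts with the customer's practical boundaries.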
Re: "Kolmogorov-Smirnov" vs "Chi Square"
In article <8cd9g3$c5d$[EMAIL PROTECTED]>, Madewell <[EMAIL PROTECTED]> wrote:
>Let me ask you guys this. If you calculated the power for the Chi-Squared test using both a small and then a large number of samples, and did the same for the KS test, what would you find?

The power of the KS test, as usually defined, cannot (with few exceptions) be calculated except by simulation, and it is a function of the significance level, even asymptotically. There is the asymptotic limit as the significance level goes to zero, but not too quickly, and this is also obtainable in other ways, such as the rather easily calculated asymptotic Bayes risk efficiency; see my paper with Sethuraman in Sankhya 1965. At any rate, it has reasonable power in the classical sense.

The chi-squared test with a FIXED number of cells has an efficiency in the classical sense, and at least in principle it can be calculated. However, as the number of cells increases, this efficiency goes to zero. This is because the chi-squared test ignores adjacency, and for reasonable alternatives, adjacent regions are likely to differ from the null in similar ways. In practice, the difference is surprising.

--
This address is for information only. I do not claim that these views are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
[EMAIL PROTECTED] Phone: (765) 494-6054 FAX: (765) 494-0558
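Rubin's cell-count point is easy to see by simulation. The sketch below is my own illustration (shift size, sample size, and cell counts all invented): for a smooth location-shift alternative, where adjacent cells deviate from the null in the same direction, the power of an equiprobable-cell chi-squared goodness-of-fit test drops as the number of cells grows, because the extra degrees of freedom dilute a signal the test cannot see as "adjacent":

```python
import numpy as np
from scipy.stats import norm, chisquare

rng = np.random.default_rng(1)
n, n_sims, alpha, shift = 100, 400, 0.05, 0.3  # alternative: N(0.3, 1) vs null N(0, 1)

def chi2_power(k):
    """Monte Carlo power of a k-cell equiprobable chi-squared GOF test of N(0,1)."""
    cuts = norm.ppf(np.arange(1, k) / k)  # k equal-probability cells under the null
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(shift, 1.0, n)
        observed = np.bincount(np.searchsorted(cuts, x), minlength=k)
        _, p = chisquare(observed)  # default expected counts: n/k in every cell
        rejections += p < alpha
    return rejections / n_sims

for k in (4, 8, 32):
    print(f"{k:2d} cells: power ~ {chi2_power(k):.2f}")
```

The same total "distance" from the null is being split over more cells while the chi-squared critical value climbs with the degrees of freedom, so the 32-cell test ends up markedly weaker than the 4-cell test against this alternative.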
Re: request for suggestions regarding meta-analysis
On Tue, 04 Apr 2000 14:00:34 GMT, Jerry Dallal <[EMAIL PROTECTED]> wrote:
> "Crepaz, Nicole" wrote:
> >
> > Dear all,
> >
> > As a first-time user of meta-analytical techniques, I am hoping that some of you could suggest how to choose reliable and proficient software from the various computer programs,
>
> I would suggest you don't, at least not before reading John Bailar's letter to the NEJM, Jan 1, 1998, page 62.

I think I can guess, so RIGHT!

Nicole also wrote, "Also, I would very much appreciate any suggestion regarding how to convert beta-weights deriving from regressions and odds ratios into effect sizes ..."

And, once you are educated enough to be ready to do a meta-analysis, you will *know* why it is that beta-weights and odds ratios *are* fine measures of effect size. Trying to convert those two measures to something else for a meta-analysis is like trying to convert your "dollars" for a shopping trip to New York City -- neither rubles nor cartons of cigarettes would be nearly so negotiable in NYC, though they serve a similar function in other parts of the world.

--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
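To make the "an odds ratio already IS an effect size" point concrete: meta-analyses routinely pool LOG odds ratios directly, each study weighted by the inverse of its variance, with the standard error coming from Woolf's formula. The 2x2 counts below are invented for illustration:

```python
import math

# Hypothetical 2x2 table: events / non-events in treated and control arms.
a, b, c, d = 30, 70, 18, 82

log_or = math.log((a * d) / (b * c))
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's formula for SE(log OR)
ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

print(f"OR = {math.exp(log_or):.2f}, log OR = {log_or:.3f} (SE {se:.3f})")
print(f"95% CI for OR: ({ci[0]:.2f}, {ci[1]:.2f})")
```

The (log_or, se) pair is exactly the per-study input a fixed- or random-effects pooling step consumes, which is why no conversion to some other "effect size" metric is needed.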
Re: "Kolmogorov-Smirnov" vs "Chi Square"
Let me ask you guys this. If you calculated the power for the Chi-Squared test using both a small and then a large number of samples, and did the same for the KS test, what would you find?

In article <8c4vhs$e75$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
> Herman Rubin ([EMAIL PROTECTED]) wrote:
> : How should one decide which type of test to use EXCEPT by
> : looking at its power? Statistics is not a collection of
>
> Minor details like validity come to mind. But you're exactly right, Herman: among tests that are valid, power is certainly an important, if not the most important, criterion. But tests are sometimes chosen that have a reputation for high power against corner-case alternatives over more general tests, when those alternatives are not likely for the context in question. Computability (both of the test statistic and its critical values), though less often an issue, is also a relevant criterion. Finally, interpretability and understandability are relevant. A test or diagnostic (e.g. Q-Q or P-P plots) that gives richer information than just a p-value may be much more valuable than a blind test. I've seen cases where a test was originally chosen over another because it was theoretically superior, but the superiority was in the sixth decimal place and the method was completely unintelligible to the intended audience (this was an applications journal). Of course, something to consider is multiple approaches: some more interpretable and others perhaps chosen for theoretical superiority. It might be worth pointing out that if you haven't done a histogram or Q-Q plot, you have no business performing a test.
>
> So yes, there are other criteria than power, but this is the first and perhaps most important criterion to consider.
>
> --
> Clark K. Gaylord
> Senior Research Engineer
> Communications Network Services
> Virginia Tech, Blacksburg, Virginia 24061-0506
> Voice: 540/231-2347 Fax: 540/231-3928 E-mail: [EMAIL PROTECTED]

--
Madewell
Interests: Engineering Management, Reliability Engineering, Failure Analysis, Statistical Methods.
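The "do a Q-Q plot before you test" advice above is cheap to follow even without a graphics window: SciPy's `probplot` returns the ordered data against theoretical quantiles plus the correlation r of the best-fit line, so the straightness of the plot can be read off numerically. A minimal sketch with invented data:

```python
import numpy as np
from scipy.stats import probplot

rng = np.random.default_rng(7)
normal_data = rng.normal(size=200)        # should sit on a straight line
skewed_data = rng.exponential(size=200)   # should bend away from it

# probplot returns ((theoretical quantiles, ordered data), (slope, intercept, r)).
(_, (_, _, r_norm)) = probplot(normal_data, dist="norm")
(_, (_, _, r_skew)) = probplot(skewed_data, dist="norm")

print(f"Q-Q straightness r: normal sample {r_norm:.3f}, "
      f"exponential sample {r_skew:.3f}")
```

Passing the returned arrays to matplotlib (or letting `probplot` draw via its `plot=` argument) gives the visual version; the r values alone already flag the skewed sample.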
Re: help!!
On Mon, 03 Apr 2000 14:52:58 -0400, sowmya <[EMAIL PROTECTED]> wrote:
> I'm looking for references for my thesis. I'm working with a longitudinal study with 4 waves of follow-up. At each wave, non-respondents to the previous wave are followed. In health surveys, most of the time, non-respondents are followed as long as funding is available to do so. I'm interested in being able to make a decision on when to stop following subjects based on the amount of change in the point estimates that occurs by sampling the non-respondents. So I'm ...

I am having trouble with terminology. Or you are. "Waves of follow-up" used to mean that the people who were tracked at 5 years (say) were also recorded at 10 years. Finding the "non-respondents" is something that you do several times in trying to complete a single *wave*. If they are not at the same address, you look in the phone book. Then you ask their employer/union/insurance company. Then you ask a neighbor. Then you look for death certificates. Then you ask for whatever the Social Security Administration may tell you, though I think that is very little.

All of that is in one wave; and you hope that the items you are tracking are not correlated with the difficulty of finding the people; else, you might have to make estimates about *why* there is a correlation.

Have you messed up the question?

--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
SEM course in Belgrade?
Dear colleagues,

Our department (Department of Psychology in Belgrade, Serbia) wants to organize a course on Structural Equation Modeling (Theory and Applications in Psychology). We need an expert (if possible, from a European country) who would be able to come to our department and teach such a course -- in English (because we do not expect him to know Serbian, of course)!

For details please send a message to [EMAIL PROTECTED]

Best,
Lazar Tenjovic
Department of Psychology
School of Philosophy
Belgrade, Serbia
Re: request for suggestions regarding meta-analysis
"Crepaz, Nicole" wrote:
>
> Dear all,
>
> As a first-time user of meta-analytical techniques, I am hoping that some of you could suggest how to choose reliable and proficient software from the various computer programs,

I would suggest you don't, at least not before reading John Bailar's letter to the NEJM, Jan 1, 1998, page 62.
Question re Wilcoxon
Dear Colleagues,

I have what I believe to be a rather simple-minded statistics problem, but there's no one around here with whom I can consult, hence my writing to you. I was assigned to come up with an answer to this little problem by my Direktor and (as usual) he wants a definitive answer _yesterday_.

I have a set of six numbers, as follows:

6.77597
7.04532
7.17026
7.13235
7.56820
6.97272

which represent results from six different measurements of the same thing in six different trials, one measurement per trial. (As a consequence of measurement the samples are destroyed, so it is not possible to measure the same sample six different times. Therefore, I had to set up six separate, independent experiments and measure my parameter of interest once in each experiment.)

The question I seek to answer is: are the 6 values obtained in the measuring process reproducible within statistically meaningful boundaries? I suppose another way of asking the same question is: is the null hypothesis Ho satisfied with respect to this series of measured values?

Using MINITAB 11.21 (the only statistics programme available to me) I saw that the distribution of these six values is _sort of_ symmetric though not quite normal. This observation, plus the fact that the sample size is so small (n = 6), suggested that I might obtain the answer I seek using Wilcoxon's Signed Rank Test. Using the default confidence level of 95.0, I obtained the following:

         Estimated   Achieved
     N   Median      Confidence   Confidence Interval
C1   6   7.089       94.1         (6.874, 7.369)

Four of the six measured values fall within the confidence interval, though two measured values (6.77597 and 7.56820) each lie slightly outside the confidence boundaries (which I presume are defined by the achieved confidence of 94.1).
Next I raised the confidence level from the default (95.0) to 97.5, in which case I obtained:

         Estimated   Achieved
     N   Median      Confidence   Confidence Interval
C1   6   7.089       96.4         (6.776, 7.568)

As you see, now _all_ six measurements fall within the confidence interval, whose achieved confidence I take to be 96.4.

With these results in hand, the question then becomes one of interpretation. I am given to understand that (in the absence of complicating factors) the confidence interval contains all values of Ho that would be retained had they been tested using alpha = (100 - CI) x (0.01). In that case, would I be correct to say that the six measured values are reproducible (i.e. the null hypothesis is satisfied) at the significance level alpha = (100 - 96.4) x (0.01) = 0.036?

If I am doing everything wrong, could someone please explain what the correct procedure would be for me to check the reproducibility of the six measured values in question? Please keep in mind that the question I seek to answer is (I believe) a relatively simple one, so I hope that forthcoming explanations will likewise be relatively simple. (No Bose-Einstein stats, please.)

As I do not regularly consult this newsgroup, responders are kindly requested to contact me _directly_ at [EMAIL PROTECTED]

Thanks in advance to all responders for your help in the above matter; I look forward to hearing from you at your earliest convenience, since the Direktor is already harassing me about this.

Regards,
S. Shapiro
[EMAIL PROTECTED]
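For readers puzzling over the MINITAB output above: the "Estimated Median" it reports is the Hodges-Lehmann estimator associated with the Wilcoxon signed rank test, i.e. the median of all n(n+1)/2 Walsh averages (x_i + x_j)/2 for i <= j, and the odd "achieved confidence" values (94.1, 96.4 rather than 95.0, 97.5) arise because the Wilcoxon confidence interval can only attain a discrete set of levels. A minimal sketch reproducing the point estimate from the six values:

```python
import statistics
from itertools import combinations_with_replacement

# The six measurements from the post.
values = [6.77597, 7.04532, 7.17026, 7.13235, 7.56820, 6.97272]

# All 21 Walsh averages (x_i + x_j) / 2 with i <= j, self-pairs included.
walsh = [(a + b) / 2 for a, b in combinations_with_replacement(values, 2)]
hl = statistics.median(walsh)  # Hodges-Lehmann estimate

print(f"{len(walsh)} Walsh averages; Hodges-Lehmann estimate = {hl:.3f}")
```

This reproduces the 7.089 shown in both MINITAB tables; the confidence limits at each achievable level are likewise order statistics of the sorted Walsh averages.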