ICIP2001 Registration and Accommodation

2001-06-18 Thread icip2001



ICIP 2001 REGISTRATION IS NOW OPEN
 
Dear Madam/Sir,
 
Registration for ICIP2001 is now open. We have
prepared for you a fabulous technical program of 824 papers in
oral/poster sessions, including 8 special sessions, 6 tutorials and 6 plenary
talks. The ICIP2001 plenaries focus on Digital Image Processing for cultural
presentation. We would be honored to meet you in Thessaloniki.
 
You can register either by fax or on-line.
Please take advantage of the advance registration/payment deadline (July 30, 2001)
to benefit from lower fees. The conference manager, Diastasi, will send a
confirmation to each registrant by e-mail. All payments must be sent in Greek
Drachmas (GRD). For more information about the prices, terms and conditions
please visit the conference web site: http://icip01.ics.forth.gr
 
!! MAKE YOUR ICIP 2001 HOTEL RESERVATION
EARLY !!
 
Please note that you should make your hotel
reservation as soon as possible, because October is high season in
Thessaloniki. Requests for hotel accommodation can only be made on the
official hotel reservation form, to be sent either by fax, e-mail, or
on-line through the congress web site, at the latest by July 30, 2001. In
order to guarantee the requested accommodation, full prepayment is
required. For more information about the prices, terms and conditions please
visit the conference web site: http://icip01.ics.forth.gr
 
ICIP 2001 VIRTUAL JOB FAIR, VIRTUAL EXHIBITION
 
This year ICIP2001 will organize two events for the first time:
1. Virtual Job Fair
2. Virtual Exhibition
The virtual job fair is addressed both to experienced professionals from
industry and academia and to graduate/postgraduate students nearing
completion of their degrees. For more information on how to participate
in the above events please visit the ICIP 2001 web site
(http://icip01.ics.forth.gr) and contact Dr. Adrian G. Bors, E-mail:
[EMAIL PROTECTED], with any further questions.
 
ICIP2001 On-site Exhibition and Job Fair
 
For information on participating in the on-site Exhibition and Job Fair
please contact Dr. N. Nikolaidis, e-mail: [EMAIL PROTECTED]


Re: Help me, please!

2001-06-18 Thread Doc

In article <9gmcaa$75i$[EMAIL PROTECTED]>, "Glen Barnett"
<[EMAIL PROTECTED]> wrote:

> Monica De Stefani <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > 2) Can Kendall discover nonlinear dependence?
> 
> He used to be able to, but he died.
> 
> (Look at how Kendall's tau is calculated. Notice that it is
> not affected by any monotonic increasing transformation. So
> Kendall's tau measures monotonic association - the tendency
> of two variables to be in the same order.)
> 
> Glen

I do not understand why Kendall's Tau is being used instead of the
ordinary correlation coefficient with its partials and semi-partials. 

For example, say you're reporting the correlation of X and Y (ice cream
consumed and water consumed) but Z (air temperature) might be secretly
responsible. So you correlate X and Z to find their linear dependence and
stash the residuals temporarily. Then you correlate Y and Z to find their
linear dependence--and stash the residuals again. Now you can revisit the
dependence of X and Y by correlating the residuals of X and Z versus the
residuals of Y and Z. The effect of Z has been partialed out. 

You could try this using Kendall's tau versus the ordinary correlation
coefficient to see if there is a difference. I personally have not run
into a data set where there was a difference. BTW, significance tests are not
involved.
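As an illustration, here is a minimal sketch of the residual-based partialling described above, on made-up data in which Z (air temperature) secretly drives both X and Y (the data and variable names here are hypothetical, not from any real study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: Z (air temperature) secretly drives both
# X (ice cream consumed) and Y (water consumed).
z = rng.normal(size=200)
x = 2.0 * z + rng.normal(size=200)
y = 1.5 * z + rng.normal(size=200)

def residuals(a, b):
    """Residuals of a after removing its linear dependence on b."""
    slope, intercept = np.polyfit(b, a, 1)
    return a - (slope * b + intercept)

# The raw X-Y correlation is inflated by the common cause Z...
r_xy = np.corrcoef(x, y)[0, 1]
# ...but correlating the two sets of residuals partials Z out.
r_partial = np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]

print(r_xy, r_partial)  # r_partial should be near zero
```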

Doc


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Help me, please!

2001-06-18 Thread Glen Barnett


Monica De Stefani <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> 2) Can Kendall discover nonlinear dependence?

He used to be able to, but he died.

(Look at how Kendall's tau is calculated. Notice that it is
not affected by any monotonic increasing transformation. So
Kendall's tau measures monotonic association - the tendency
of two variables to be in the same order.)

Glen
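Glen's parenthetical is easy to check numerically: tau depends only on the ordering of the values, so a monotone increasing transform leaves it unchanged. A minimal sketch on toy data (tie-free, for simplicity):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau for tie-free data:
    (concordant pairs - discordant pairs) / total pairs."""
    pairs = list(combinations(range(len(x)), 2))
    s = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
            for i, j in pairs)
    return s / len(pairs)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 0.8, 2.5, 2.4, 6.0]

t1 = kendall_tau(x, y)
# A monotone increasing (and very nonlinear) transform of x
# changes nothing, because only the ordering matters.
t2 = kendall_tau([v ** 3 for v in x], y)
print(t1, t2)  # both 0.6
```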








Edstat: I. J. Good and Walker

2001-06-18 Thread Alex Yu


In 1940 Helen M. Walker wrote an article in the Journal of Educational
Psychology regarding the concept of degrees of freedom.  In the 1970s, I. J. Good
wrote something criticizing Walker's idea. I forgot the citation. I tried
many databases and even searched the internet but got no result. Does
anyone know the citation? Thanks in advance.


Chong-ho (Alex) Yu, Ph.D., MCSE, CNE
Academic Research Professional/Manager
Educational Data Communication, Assessment, Research and Evaluation
Farmer 418
Arizona State University
Tempe AZ 85287-0611
Email: [EMAIL PROTECTED]
URL:http://seamonkey.ed.asu.edu/~alex/
   
  






Maximum likelihood Was: Re: Factor Analysis

2001-06-18 Thread Herman Rubin

In article <[EMAIL PROTECTED]>,
Ken Reed  <[EMAIL PROTECTED]> wrote:
>It's not really possible to explain this in lay person's terms. The
>difference between principal factor analysis and common factor analysis is
>roughly that PCA uses raw scores, whereas factor analysis uses scores
>predicted from the other variables and does not include the residuals.
>That's as close to lay terms as I can get.

>I have never heard a simple explanation of maximum likelihood estimation,
>but --  MLE compares the observed covariance matrix with a  covariance
>matrix predicted by probability theory and uses that information to estimate
>factor loadings etc that would 'fit' a normal (multivariate) distribution.

>MLE factor analysis is commonly used in structural equation modelling, hence
>Tracey Continelli's conflation of it with SEM. This is not correct though.

>I'd love to hear simple explanation of MLE!

MLE is triviality itself, if you do not make any attempt to
state HOW it is to be carried out.

For each possible value X of the observation, and each state
of nature \theta, there is a probability (or density with 
respect to some base measure) P(X | \theta).  There is no
assumption that X is a single real number; it can be anything;
the same holds about \theta.

What MLE does is to choose the \theta which makes P(X | \theta)
as large as possible.  That is all there is to it.
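For instance (a sketch with made-up numbers): if the observation X is 7 heads in 10 coin tosses and theta is the unknown heads probability, MLE simply picks the theta that makes P(X | theta) largest:

```python
import math

# X = 7 heads in 10 tosses; theta = the unknown heads probability.
def log_likelihood(theta, heads=7, n=10):
    return heads * math.log(theta) + (n - heads) * math.log(1 - theta)

# Choose the theta that makes P(X | theta) as large as possible
# (a crude grid search over candidate states of nature).
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=log_likelihood)
print(theta_hat)  # 0.7, the familiar closed-form answer heads/n
```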

-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558





Re: Probability Of an Unknown Event

2001-06-18 Thread Richard Beldin

The problem comes because there is often no unique way of defining events. It
is hard to think of a real example where we literally "know nothing". The
"equal probability" answer is often just a cop-out for not thinking about what
we do know.






data shaping vs. data mining

2001-06-18 Thread Data Analysis

What is the difference between data shaping and data mining?









Re: Probability Of an Unknown Event

2001-06-18 Thread Rich Ulrich

On Sat, 16 Jun 2001 23:05:52 GMT, "W. D. Allen Sr."
<[EMAIL PROTECTED]> wrote:

> It's been years since I was in school so I do not remember if I have the
> following statement correct.
> 
> Pascal said that if we know absolutely nothing
> about the probability of occurrence of an event
> then our best estimate for the probability of
> occurrence of that event is one half.
> 
> Do I have it correctly? Any guidance on a source reference would be greatly
> appreciated!

I did a little bit of Web searching and could not find that.

Here is an essay about Bayes, which (dis)credits him and his
contemporaries as assuming something like that, years before Laplace.

I found it with a google search on 
 <"know absolutely nothing"  probability> .

 http://web.onetel.net.uk/~wstanners/bayes.htm

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html





Re: Probability Of an Unknown Event

2001-06-18 Thread W. D. Allen Sr.

Thanks Robert!

WDA

end

- Original Message -
From: "Robert J. MacG. Dawson" <[EMAIL PROTECTED]>
To: "W. D. Allen Sr." <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, June 17, 2001 6:35 PM
Subject: Re: Probability Of an Unknown Event


>
>
> "W. D. Allen Sr." wrote:
> >
> > It's been years since I was in school so I do not remember if I have the
> > following statement correct.
> >
> > Pascal said that if we know absolutely nothing
> > about the probability of occurrence of an event
> > then our best estimate for the probability of
> > occurrence of that event is one half.
> >
> >
>
> [snipped]
>







Re: 3rd degree polynom curve fitting, correlation needed

2001-06-18 Thread Paige Miller

Matti Overmark wrote:

> I have fitted a 3 rd degree curve to a sample (least square method), and
> I want to compare this particular R2 with that of
> a (similarily) fitted 2 degree polynom.

I can assure you that the 3rd degree polynomial will fit as well as or
better than the 2nd degree polynomial, as measured by R-squared. If you
want a statistical test of the hypothesis that the 3rd degree model
yields a significantly better fit than the second degree model,
then you should do an "extra-sums-of-squares" test, as explained in the
fine textbook by Draper and Smith, "Applied Regression Analysis".
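A sketch of that extra-sum-of-squares F test on simulated data (the data here are made up for illustration; the true curve is quadratic, so the cubic term should not help much):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 40)
y = 1 + 0.5 * x - x**2 + rng.normal(scale=0.3, size=x.size)  # truly quadratic

def rss(degree):
    """Residual sum of squares for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((y - np.polyval(coeffs, x)) ** 2))

rss2, rss3 = rss(2), rss(3)
n = x.size
# Extra-sum-of-squares F statistic: one extra parameter (the cubic term),
# n - 4 residual degrees of freedom under the cubic model.  Compare to
# an F(1, n - 4) critical value to judge significance.
f_stat = (rss2 - rss3) / (rss3 / (n - 4))
print(rss2, rss3, f_stat)
```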
 
> I want to "see" which of the two models is the best.
> Any suggestion of a good book?

A plot would work just fine, if you want to "see" how the models fit.

-- 
Paige Miller
Eastman Kodak Company
[EMAIL PROTECTED]

"It's nothing until I call it!" -- Bill Klem, NL Umpire
"When you get the choice to sit it out or dance,
   I hope you dance" -- Lee Ann Womack





Re: Probability Of an Unknown Event

2001-06-18 Thread Art Kendall

The only time I can think of this being meaningful is in determining what size
sample to draw.  If we don't have any prior information about what
proportion of events in a population have a particular characteristic (the
probability of the characteristic), then we assume the worst case (the widest
variance) of 50%.
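A sketch of that worst-case calculation, using the standard sample-size formula for a proportion (the 95% confidence level and ±5% margin are chosen here purely for illustration):

```python
import math

def sample_size(p, margin=0.05, z=1.96):
    """n needed to estimate a proportion p within +/- margin
    at roughly 95% confidence: n = z^2 p(1-p) / margin^2."""
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

# p(1 - p) peaks at p = 0.5, so assuming 50% gives the most
# conservative (largest) required sample size.
sizes = {p: sample_size(p) for p in (0.1, 0.3, 0.5, 0.7, 0.9)}
print(sizes)  # n is largest at p = 0.5 (385)
```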

"W. D. Allen Sr." wrote:

> It's been years since I was in school so I do not remember if I have the
> following statement correct.
>
> Pascal said that if we know absolutely nothing
> about the probability of occurrence of an event
> then our best estimate for the probability of
> occurrence of that event is one half.
>
> Do I have it correctly? Any guidance on a source reference would be greatly
> appreciated!
>
> Thanks,
>
> WDA
>
> [EMAIL PROTECTED]
>
> end






Re: multivariate techniques for large datasets

2001-06-18 Thread Art Kendall

you might want to go to http://www.pitt.edu/~csna/
and then cross-post your question to CLASS-L

The Classification Society meeting this weekend had a lot of discussion of
these topics.

My first question is whether you intend to interpret the clusters.

If so, what is the nature of the 500 variables?
What is the nature of your cases?
What does the set of cases represent?
How much data is missing? What kinds of missing data do you have?
What do you want to do with the cluster results?
Are you interested in a tree or a simple clustering?


Many users of clustering use data reduction techniques such as factor
analysis to summarize the variability of the 500 with a smaller number of
dimensions.
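A toy sketch of that reduce-then-cluster idea (the dimensions, the number of components kept, and k are all made up for illustration; the real problem is on the order of 100,000 cases by 500 variables):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: 300 cases x 50 variables with two latent groups.
X = np.vstack([rng.normal(0, 1, (150, 50)),
               rng.normal(3, 1, (150, 50))])

# Step 1: summarize the variables with a few principal components (via SVD).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T  # keep the first 5 components

# Step 2: a bare-bones k-means on the reduced scores.
def kmeans(data, k, iters=50):
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([data[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

labels = kmeans(scores, 2)
print(np.bincount(labels))  # cluster sizes
```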



srinivas wrote:

> Hi,
>
>   I have a problem identifying the right multivariate tools to
> handle a dataset of dimension 1,00,000*500. The problem is further
> complicated by a lot of missing data. Can anyone suggest a way to
> reduce the data set and also to estimate the missing values? I need to
> know which clustering tool is appropriate for grouping the
> observations (based on 500 variables).






Re: 3rd degree polynom curve fitting, correlation needed

2001-06-18 Thread Mike Granaas


Judd & McClelland, _Data Analysis: A Model Comparison Approach_, chapter 8.

MG

On 18 Jun 2001, Matti Overmark wrote:

> Hi group!
> 
> I'm new to this group, so... just so you know.
> 
> I have fitted a 3 rd degree curve to a sample (least square method), and 
> I want to compare this particular R2 with that of
> a (similarily) fitted 2 degree polynom.
> 
> I want to "see" which of the two models is the best.
> Any suggestion of a good book?
> 
> Thanks in advance,
> Matti Ö.
> 
> 
> 

***
Michael M. Granaas
Associate Professor[EMAIL PROTECTED]
Department of Psychology
University of South Dakota Phone: (605) 677-5295
Vermillion, SD  57069  FAX:   (605) 677-6604
***
All views expressed are those of the author and do not necessarily
reflect those of the University of South Dakota, or the South
Dakota Board of Regents.






3rd degree polynom curve fitting, correlation needed

2001-06-18 Thread Matti Overmark

Hi group!

I'm new to this group, so... just so you know.

I have fitted a 3rd degree curve to a sample (least squares method), and
I want to compare this particular R2 with that of
a (similarly) fitted 2nd degree polynomial.

I want to "see" which of the two models is the best.
Any suggestion of a good book?

Thanks in advance,
Matti Ö.





Re: Probability Of an Unknown Event

2001-06-18 Thread Robert J. MacG. Dawson



"W. D. Allen Sr." wrote:
> 
> It's been years since I was in school so I do not remember if I have the
> following statement correct.
> 
> Pascal said that if we know absolutely nothing
> about the probability of occurrence of an event
> then our best estimate for the probability of
> occurrence of that event is one half.
> 
> Do I have it correctly? 

You may - somebody certainly said it. Laplace is the name that springs
to my mind, and I'm not at all certain about it. But whoever it was was
wrong. In such a case there is no "best estimate", all estimates are
equally silly. The estimate above gives inconsistent probabilities to
the propositions "this urn contains a ball", "this urn contains a red
ball" and "this urn contains a black ball".

One approach to pathological cases like this might be to replace
"probability" by "odds" and to give odds of 0 to 0 for such an event. In
such an approach, odds of (say) 1 to 1 would be vaguer than odds of
23.51 to 23.51, and probability would be the asymptotic limit of odds of
np to nq as n -> infinity. 

Probability can be informally defined as the value p such that you can
bet p-epsilon against the opponent's 1-p+epsilon and win in the long
run, no matter how small epsilon may be. Perhaps we could define "odds"
in this sense similarly, with the following changes:

(1) the opponent is omniscient except for the outcome of the plays.
(EG: in most jurisdictions the casino knows (to a high degree of
accuracy) the odds on its slots and the punter does not.) 
(2) the opponent gets to choose a side of the proposition
(3) you derive one unit's worth of pleasure from winning (or, more
objectively are allowed a *fixed* bonus of 1 unit if you win in
recognition of (2); or you are the "house" and can add 1 unit to your
opponent's bet as your percentage on the wager. )

Thus, odds of 0-0 would mean that you would know so little about the
situation that "the fix could already be in" either way. In such a
situation you would not take bets on either side of the proposition if
you suspected your opponent was savvy. These are the
correct odds in the situation described above.

Odds of 1-1 would mean that you would accept a long sequence of 2-1
bets on either side; odds of 20-20 that you would accept a long sequence
of 21-20 bets on either side; and so on. In the limit, you obtain
probabilities, when your knowledge of the situation is also absolute and
you can profit from any deviation whatsoever from reciprocal odds.

I have not worked this idea through; if it works, it might be useful,
but it may also involve equally fatal paradoxes if pushed a bit farther. 

-Robert Dawson





Re: Factor Analysis

2001-06-18 Thread Gottfried Helms

Ken Reed schrieb:
> 
> It's not really possible to explain this in lay person's terms. The
> difference between principal factor analysis and common factor analysis is
> roughly that PCA uses raw scores, whereas factor analysis uses scores
> predicted from the other variables and does not include the residuals.
> That's as close to lay terms as I can get.


If you have a correlation matrix of

            item 1   item 2

item 1  |    1        0.8   |  =  R
item 2  |    0.8      1     |

then you can factor it into a loadings matrix L, so that L*L' = R.
There are infinitely many options for L; in particular, infinitely
many can be found that differ only in rotational position (pairs of
columns rotated). The list below shows 5 examples drawn from the
extreme solutions (the two extremes and 3 rotational positions
between them).








                  PCA                    CFA
            factor  factor     factor  factor  factor
               1       2          1       2       3
            ------  ------     ------  ------  ------
1)
Item 1       1.000   0          1.000   0       0
Item 2       0.800   0.600      0.800   0       0.600

2)
Item 1       0.987  -0.160      0.949   0.316   0
Item 2       0.886   0.464      0.843   0       0.537

3) ---------------------------------------------------+
Item 1       0.949  -0.316      0.894   0.447   0     |
Item 2       0.949   0.316      0.894   0       0.447 |
------------------------------------------------------+

4)
Item 1       0.886   0.464      0.843   0.537   0
Item 2       0.987  -0.160      0.949   0       0.316

5)
Item 1       0.800   0.600      0.800   0.600   0
Item 2       1.000   0          1.000   0       0
======================================================

PCA:
The left list shows how a components analyst would attack the problem:
two factors; *principal* components analysis starts from configuration 3,
where the sum of the squares of the entries is at a maximum.
The reduced number of factors which a PCA analyst will select for
further work is determined by criteria like "use all factors with a
sum of squared loadings > 1" (equivalently, eigenvalues > 1), "apply the
scree test", or something like that.


CFA:
The right list shows how a *common* factor analyst would attack the problem:
a common factor, plus an item-specific factor for each item (capturing
measurement error etc.). There are again infinitely many options for how to
position the factors. One might assume that example 3 is again taken as the
default; but this is not the case, as the algorithms that identify
item-specific variance are iterative and not restricted to a particular
outcome (only the start position is given). The only restriction is that
there must be an item-specific factor *for each item*. (In a two-item case,
however, the CFA iteration converges to position 3.)
The item-specific factors are not used in further analysis, only the common
one(s). Hence the name.
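The factorization can be checked numerically. Taking position 3 from the table above, both the PCA loadings and the CFA loadings reproduce R, up to the three-decimal rounding of the printed loadings:

```python
import numpy as np

R = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# Position 3: PCA loadings (two factors) ...
L_pca = np.array([[0.949, -0.316],
                  [0.949,  0.316]])
# ... and CFA loadings (one common factor, one item-specific factor per item).
L_cfa = np.array([[0.894, 0.447, 0.000],
                  [0.894, 0.000, 0.447]])

# Both products recover R (approximately, given the rounded entries).
print(L_pca @ L_pca.T)
print(L_cfa @ L_cfa.T)
```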









> 
> I have never heard a simple explanation of maximum likelihood estimation,
> but --  MLE compares the observed covariance matrix with a  covariance
> matrix predicted by probability theory and uses that information to estimate
> factor loadings etc that would 'fit' a normal (multivariate) distribution.

It estimates the population covariance matrix in such a way that your
empirical matrix is the most likely one to arise when random samples
are drawn.


Gottfried Helms





Help me, please!

2001-06-18 Thread Monica De Stefani

1) Are there conditions under which I can apply a normal approximation
to Kendall's tau?
I was wondering whether the x observations must be
independent and the y observations must be independent in order to
apply the asymptotically normal limiting
distribution
(null hypothesis: x and y are independent).
Could you tell me something about this?

2) Can Kendall's tau discover nonlinear dependence? If not (i.e., if it
discovers only linear dependence), why does Kendall's partial tau
discover nonlinear dependence?

3) How is T(z) calculated? In some papers it is a "predefined tolerance",
but how
can I calculate it?

Thanks.
Monica De Stefani.

