Kai,
Your question is best addressed to "r-sig-fina...@stat.math.ethz.ch", as
it is a finance-related question.
Jude
___
Jude Ryan
Director, Client Analytical Services
Strategy & Business Development
UBS Financial Services Inc.
1200 Harbor Boulevard,
The Bhattacharyya distance is different from the Mahalanobis distance.
See:
http://en.wikipedia.org/wiki/Bhattacharyya_distance
There is also the Hellinger distance and the Rao distance. For the Rao
distance, see:
http://www.scholarpedia.org/article/Fisher-Rao_metric
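For two multivariate normal distributions, the difference (and the connection) between the two distances is easy to see numerically. A small R sketch, not from the original thread, with illustrative function and variable names:

bhattacharyya <- function(m1, m2, S1, S2) {
  # D_B = 1/8 (m1-m2)' S^-1 (m1-m2) + 1/2 log( det(S) / sqrt(det(S1) det(S2)) ),
  # where S is the average of the two covariance matrices
  S <- (S1 + S2) / 2
  d <- m1 - m2
  drop(0.125 * t(d) %*% solve(S, d)) +
    0.5 * log(det(S) / sqrt(det(S1) * det(S2)))
}

m1 <- c(0, 0); m2 <- c(1, 1)
S1 <- diag(2); S2 <- 2 * diag(2)
bhattacharyya(m1, m2, S1, S2)                  # uses both covariance matrices
sqrt(mahalanobis(m2, center = m1, cov = S1))   # Mahalanobis distance uses only one

When S1 = S2 the log term vanishes and D_B reduces to one-eighth of the squared Mahalanobis distance, which is why the two are related but not the same.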
Jude
__
Thanks Petr! It is good to see multiple solutions to the same problem.
Best,
Jude
-Original Message-
From: Petr PIKAL [mailto:petr.pi...@precheza.cz]
Sent: Wednesday, September 23, 2009 10:59 AM
To: Ryan, Jude
Cc: alxmil...@yahoo.it; r-help@r-project.org
Subject: Re: [R] compute differe
Alessandro Carletti wrote:
Hi,
I have a problem.
I have a data frame looking like:
ID  val
A   0.3
B   1.2
C   3.4
D   2.2
E   2.0
I need to CREATE the following TABLE:
CASE   DIFF
A-A     0
A-B    -0.9
A-C    -3.1
A-D    -1.9
A-E    -1.7
B-A    ...
B-B    ...
B-C
B-D
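Petr's reply is not quoted here; one straightforward way to build such a table (a sketch, not necessarily the solution he posted) uses outer():

df <- data.frame(ID = c("A", "B", "C", "D", "E"),
                 val = c(0.3, 1.2, 3.4, 2.2, 2.0))

d <- outer(df$val, df$val, "-")          # d[i, j] = val[i] - val[j]
out <- data.frame(CASE = paste(rep(df$ID, each = nrow(df)), df$ID, sep = "-"),
                  DIFF = as.vector(t(d)))
head(out)                                # A-A 0, A-B -0.9, A-C -3.1, ...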
Thanks for your point of view Terry! It is always fascinating to follow
the history of the field, especially as told by someone involved with
it.
Jude Ryan
-Original Message-
From: Terry Therneau [mailto:thern...@mayo.edu]
Sent: Tuesday, June 23, 2009 9:22 AM
To: Ryan, Jude; c...@datanal
I have used all 3 packages for decision trees (SAS/EM, CART and R). As
another user on the list commented, the algorithms CART uses are
proprietary. I also know that since the algorithms are proprietary, the
decision tree that you get from SAS is based on a "slightly different"
algorithm so as to n
Thanks! I did not look at the output of str(df) closely. Since y is
defined as a character variable when df is created (but stored as a
factor), it looks like str(df) is sorting the factors, at least when it
is displayed to the screen.
Jude
-Original Message-
From: David Winsemius [mailto
David Winsemius' solution:
> apply(data.matrix(df), 1, I)
  [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
x    1    2    3    4    5    6    7    8    9    10
y    1    3    4    5    6    7    8    9   10     2
For y in column [,2] above, the value is 3. Why is it not 2?
It look
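A small check (not part of the original thread) shows where the 3 comes from: y was created as a character variable and stored as a factor, and factor levels sort as strings, so "10" precedes "2":

levels(factor(as.character(1:10)))       # "1" "10" "2" "3" ... "9"
as.integer(factor(as.character(1:10)))   # 1 3 4 5 6 7 8 9 10 2 -- the codes data.matrix() returns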
Andrea,
You can calculate predictions for your validation data from nnet objects
using the predict() function (predict() can also be used for
regressions, quantile regressions, etc.).
If you create a neural net with the following code:
library(nnet)
# 3 hidden neurons, for
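The original code is cut off above; a minimal sketch of the pattern, with an illustrative data set and settings rather than Andrea's actual ones:

library(nnet)

set.seed(1)
train <- iris[seq(1, nrow(iris), by = 2), ]   # illustrative training split
valid <- iris[seq(2, nrow(iris), by = 2), ]   # illustrative validation split

# 3 hidden neurons, small weight decay
fit <- nnet(Species ~ ., data = train, size = 3, decay = 1e-3, maxit = 200)

# predictions for the validation data
pred <- predict(fit, newdata = valid, type = "class")
table(pred, valid$Species)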
Satish,
For a comparison of SAS and S, see the document "An Introduction to S
and the Hmisc and Design Libraries" by Carlos Alzola and Frank E.
Harrell. Frank Harrell is an expert in both SAS and R. You can download
this document from http://www.r-project.org/, then click on manuals, and
then c
Thanks Ronggui! I wasn't aware of this FAQ page. I will take a look at it.
Jude
-Original Message-
From: Ronggui Huang [mailto:ronggui.hu...@gmail.com]
Sent: Monday, June 01, 2009 10:11 PM
To: Ryan, Jude
Cc: r-help@r-project.org
Subject: Re: [R] warning message when running quantile regre
Hi All,
I am running quantile regression in a "for" loop, starting with 1
variable and adding one variable at a time, up to a maximum of 20
variables.
I get the following warning messages after my "for" loop runs. Should I
be concerned about these messages? I am building predictive models and
a
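As a rough illustration of the loop described above (a sketch on made-up data, assuming the quantreg package; the original variable names are not shown in the thread):

library(quantreg)

set.seed(1)
n <- 500
dat <- as.data.frame(matrix(rnorm(n * 20), n, 20))
names(dat) <- paste0("x", 1:20)
dat$y <- rowSums(dat[, 1:5]) + rnorm(n)

fits <- vector("list", 20)
for (k in 1:20) {
  f <- reformulate(paste0("x", 1:k), response = "y")
  fits[[k]] <- rq(f, tau = 0.5, data = dat)   # median regression with k predictors
}
# depending on the data, rq() can emit warnings here (e.g. about
# non-unique solutions); inspect the final fit as a check
coef(fits[[20]])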
Not that I know of.
If you do come across any, let me know, or better still, email r-help.
Good luck with what you are trying to do.
Jude Ryan
From: Filipe Rocha [mailto:filipemaro...@gmail.com]
Sent: Friday, May 29, 2009 1:17 PM
To: Ryan, Jude
Cc: r-hel
You can figure out which weights go with which connections with the
function summary(nnet.object) and nnet.object$wts. Sample code from
Venables and Ripley is below:
# Neural Network model in Modern Applied Statistics with S, Venables and
Ripley, pages 246 and 247
> library(nnet)
> attach(roc
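The VR listing is cut off above; as a stand-in, here is a small illustrative fit (not the book's example) showing how summary() labels each weight and how those labels line up with the raw $wts vector:

library(nnet)

set.seed(1)
fit <- nnet(mpg ~ wt + hp, data = mtcars, size = 3, decay = 1e-3,
            linout = TRUE, maxit = 500)

summary(fit)   # each weight printed with its connection label: b->h1, i1->h1, ..., h3->o
fit$wts        # the same weights as an unlabelled numeric vector, in the same order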
The package AMORE appears to be more flexible, but I got very poor
results using it when I tried to improve the predictive accuracy of a
regression model. I don't understand all the options well enough to be
able to fine tune it to get better predictions. However, using the
nnet() function in packa
Hi All,
I am trying to manually extract the scoring equations for a neural
network so that I can score clients on a system that does not have R
(mainframe using COBOL).
Using the example in Modern Applied Statistics with S (MASS), by
Venables and Ripley (VR), pages 246 and 247, I ran the follo
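The run itself is cut off above. Here is a sketch of the hand computation on the book's rock data, but without the skip-layer term the book's fit uses (skip = TRUE adds direct input-to-output weights, omitted here to keep the arithmetic short):

library(nnet)

# scale the inputs roughly as VR do
rock1 <- with(rock, data.frame(perm, area = area/10000, peri = peri/10000, shape))

set.seed(2)
fit <- nnet(log(perm) ~ area + peri + shape, data = rock1,
            size = 3, decay = 1e-3, linout = TRUE, maxit = 500)

wts  <- fit$wts      # ordered as in summary(fit)
n_in <- 3; n_hid <- 3

# hidden-layer weights: for each hidden unit, its bias then the input weights
Wh <- matrix(wts[1:(n_hid * (n_in + 1))], nrow = n_hid, byrow = TRUE)

# score the first record by hand
x <- as.numeric(rock1[1, c("area", "peri", "shape")])
h <- 1 / (1 + exp(-(Wh[, 1] + Wh[, -1] %*% x)))   # logistic hidden activations

# output-layer weights: bias then one weight per hidden unit; linear output
wo   <- wts[(n_hid * (n_in + 1) + 1):length(wts)]
yhat <- wo[1] + sum(wo[-1] * h)

c(by_hand = yhat, predict = predict(fit, rock1[1, ]))   # should agree

Those per-connection weights, plus the logistic form of the hidden units, are all that is needed to reproduce the score outside R (e.g. in COBOL).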
Sorry for these multiple postings.
I solved the problem using na.omit() to drop records with missing values
for the time being. I will worry about imputation, etc. later.
I calculated the sum of squared errors for 3 models, linear regression,
neural networks, and support vector machines. This
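For reference, the comparison described above amounts to something like the following sketch (made-up data; the nnet and e1071 packages assumed; none of this is the original code):

library(nnet)
library(e1071)

set.seed(1)
dat <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
dat$y <- 2 * dat$x1 - dat$x2 + rnorm(200)

sse <- function(obs, pred) sum((obs - pred)^2)

m.lm  <- lm(y ~ x1 + x2, data = dat)
m.nn  <- nnet(y ~ x1 + x2, data = dat, size = 3, decay = 1e-3,
              linout = TRUE, maxit = 500)
m.svm <- svm(y ~ x1 + x2, data = dat)

c(lm  = sse(dat$y, predict(m.lm,  newdata = dat)),
  nn  = sse(dat$y, predict(m.nn,  newdata = dat)),
  svm = sse(dat$y, predict(m.svm, newdata = dat)))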
As a follow-up to my email below:
The input data frame to nnet() has dimensions:
> dim(coreaff.trn.nn)
[1] 50888
And the predictions from the neural network (35 records are dropped -
see the email below for more details) have dimensions:
> pred <- predict(coreaff.nn1)
> dim(pred)
I am exploring neural networks (adding non-linearities) to see if I can
get more predictive power than a linear regression model I built. I am
using the function nnet and following the example of Venables and
Ripley, in Modern Applied Statistics with S, on pages 246 to 249. I have
standardized vari
Hi,
I am trying to read a SAS version 9.1.3 dataset into R (to preserve
the SAS labels), but am unable to do so (I have read in a CSV version).
I first created a transport file using the SAS code:
libname ces2 'D:\CES Analysis\Data';
filename transp 'D:\CES Analysis\Data\fadata.xpt';
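On the R side, a minimal sketch for reading that transport file (assuming the path from the SAS step above): read.xport() in the foreign package reads the .xpt file, and sasxport.get() in Hmisc does the same while carrying the SAS variable labels over:

library(foreign)
fadata <- read.xport("D:/CES Analysis/Data/fadata.xpt")
str(fadata)

# or, to keep the SAS labels as label attributes:
library(Hmisc)
fadata2 <- sasxport.get("D:/CES Analysis/Data/fadata.xpt")
label(fadata2)   # variable labels from SAS, where present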