On 30.11.2011 07:25, Indrajit Sengupta wrote:
Agreed, nobody has deemed it default, but is there any other such package
that you can think of for this purpose?
Not that I know of. Suggesting it is perfectly fine. I'd just not call any package
a default if it does not ship with R.
Best,
Uwe
On 11/29/2011 10:31 PM, RobH2011 wrote:
Hi
I have data from an experiment that used a repeated-measures factorial 2x2
design (i.e. each participant contributed data to both levels of both
factors). I need a non-parametric version of the repeated-measures factorial
ANOVA to analyse the data.
On 11/30/2011 2:09 AM, Florent D. wrote:
Thanks for your answer !
I also think your last write-up for LogLiketot (using a single
argument par) is the correct approach if you want to feed it to
optim().
I'm not wedded to the optim() function. I just want to optimise my two
parameters and the
Good morning,
Normally, if you save your R environment, some objects, like your lm,
should be saved along with it.
Otherwise, have you tried the save() or dput() functions?
On 11/30/2011 6:10 AM, arunkumar wrote:
Hi
Please let me know if we can store the linear model object in the data
base
hi
I have data like this in a dataframe
Var Value Cheque
X1   40   FALSE
X2   20   FALSE
X3   28   TRUE
I want to replace FALSE with 0 and TRUE with 1.
Is there any method by which I can do this without using a loop?
On 30/11/11 20:22, Jeff Newmiller wrote:
Don't do that.
Use $ notation to refer to elements of lists/data frames explicitly.
Or use with(). It is truly-ruly handy and effective.
cheers,
Rolf Turner
---
On 30.11.2011 09:16, arunkumar wrote:
hi
I have data like this in a dataframe
Var Value Cheque
X1   40   FALSE
X2   20   FALSE
X3   28   TRUE
I want to replace FALSE with 0 and TRUE with 1.
Is there any method by which I can do this without using a loop?
dataframe$Cheque <-
Error in mvr(Kd_nM ~ qsar, ncomp = 6, data = my, validation = "CV", method =
"kernelpls") :
Invalid number of components, ncomp
How I can fix this?
R-help@r-project.org mailing list
Hello,
I would like to perform a generalized singular value decomposition with
R. The only possibility I found is GSVD that is based on LAPACK/BLAS.
Are there other possibilities too?
If not, has anybody used LAPACK/BLAS under Windows XP? How can I install
them? Following [1] did not help.
With optimx(c(30, 50), ms = c(0.4, 0.5), fn = LogLiketot)
where
LogLiketot <- function(dist, ms) {
  res <- NULL
  for (i in 1:nrow(pop5)) {
    for (l in 1:nrow(freqvar)) {
      res <- c(res, pop5[i, l] * log(LikeGi(l, i, dist, ms)))
    }
  }
  return(-sum(res))
}
I think it will do something
er... Uwe, shouldn't that be, e.g.
dataframe$Cheque <- as.integer(dataframe$Cheque)
## or building on Rolf's suggestion
dataframe <- within(dataframe, Cheque <- as.integer(Cheque))
While I am at it, is there any practical difference in efficiency
between these two approaches?
-- Bert
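For what it's worth, the efficiency question can be checked directly. A quick sketch with made-up data (the data frame here is simulated, not the original poster's, and timings are machine-dependent):

```r
# Simulated stand-in for the poster's data frame
dataframe <- data.frame(Cheque = sample(c(TRUE, FALSE), 1e6, replace = TRUE))
# Both forms perform the same coercion; within() additionally builds a
# modified copy of the data frame, so any difference is bookkeeping overhead.
t1 <- system.time(v1 <- as.integer(dataframe$Cheque))
t2 <- system.time(v2 <- within(dataframe, Cheque <- as.integer(Cheque))$Cheque)
identical(v1, v2)  # TRUE: same result either way
```

In practice both are dominated by the coercion itself at this size.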
On 29.11.2011 07:06, Indrajit Sengupta wrote:
What have you tried so far - can you explain? fitdistrplus package is the
default package for fitting distributions.
Regards,
Indrajit
From: rch4 r...@geneseo.edu
To: r-help@r-project.org
Sent:
At 16:41 27/11/2011, Kristi Shoemaker wrote:
Hi John,
Your assumptions are correct and those examples were very helpful, thanks!
I think I'm almost there, but I'm screwing up something with the
within-subjects factor (the example has two, I only have one). See
below. Why am I not seeing my
On 30.11.2011 12:26, Bert Gunter wrote:
er... Uwe, shouldn't that be, e.g.
dataframe$Cheque <- as.integer(dataframe$Cheque)
Sure, thanks.
## or building on Rolf's suggestion
dataframe <- within(dataframe, Cheque <- as.integer(Cheque))
While I am at it, is there any practical difference in
If you mean storing it within one session, it's just like any other object
m <- lm(...)
and it can be put in lists, modified, whatever just like any other object.
Michael
On Nov 30, 2011, at 4:23 AM, Diane Bailleul diane.baill...@u-psud.fr wrote:
Good morning,
Normally, if you save your R
Dear all,
I would like to do something simple in R but I do not know how to search
for it.
Let's say that I have a list of
a <- c(1, 2, 3, 4, 5)
b <- c(6, 7, 8)
and I want to get back all their possible combinations like
1,6
1,7
1,8
2,6
2,7
2,8
3,6
3,7
3,8
and so on.
How can I do that?
B.R
Alex
expand.grid()
This one is admittedly rather hard to find...
Michael
On Nov 30, 2011, at 7:15 AM, Alaios ala...@yahoo.com wrote:
Dear all,
I would like to do something simple in R but I do not know how to
search for it.
Let's say that I have a list of
a <- c(1, 2, 3, 4, 5)
b <- c(6, 7, 8)
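A minimal sketch of how expand.grid() answers the question, using the two vectors from the post:

```r
a <- c(1, 2, 3, 4, 5)
b <- c(6, 7, 8)
# expand.grid() builds a data frame with one row per combination
combos <- expand.grid(a = a, b = b)
nrow(combos)  # 15 = 5 * 3 combinations
head(combos, 3)
```

Note that expand.grid() varies its first argument fastest, so the rows come out in a different order than listed in the question, but every pair is there.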
Hi,
Googling for "R-help gsvd" leads to a number of interesting entries on
the mailing list. It seems that GSVD is present in the lapack version
included with R and can be called (see the mailing list entries).
https://stat.ethz.ch/pipermail/r-help/2004-August/056713.html
Hi Silvano,
Your function is not part of the standard R installation. I found it
after some googling, but next time:
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
The betareg package provides
I'd suggest you do some leg-work and figure out why you are getting values > 1.
If your algorithm is motivated by some approximation then a min() or pmin()
*might* be the right fix, but if there are no approximations you may need to
start debugging properly to see why you are getting an out of
The loglikelihood() looks OK and gives some value. But I am using this
function for simulated annealing, generating the random samples from a
uniform proposal density. The code is as follows:
epiannea <- function(T0 = 1, N = 500, beta = 0.1, x0 = 0.1, rho = 0.90, eps =
0.1, loglikelihood,
Thank you for your help!
The GSVD (SVDgen) in PTAk does not perform a generalized singular value
decomposition for 2 matrices, only for 1. I should have mentioned this -
sorry.
Are there maybe other packages?
I have also found the last 2 links, but I am looking for a way to use LAPACK
with
Hi folks,
I have a question about the step() function. Is there a possibility to save the
last model generated by this method? I have a loop, so I generate 100
different models with step() and I want to know which model is the
most common.
CODE:
...
missStep <- numeric(100)
for (j in
Hi
I have a data frame which looks like this
X1 X2 X3
1 3 5
2 4 6
3 6 1
I want to apply a formula Y=6*X1 + 7*X2 + 8*X3 for every row
Thanks in Advance
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-call-a-function-for-each-row-tp4122906p4122906.html
Hi
Thanks for your reply. The package appears to be for the analysis of
longitudinal designs, and requires a time factor to be input on the relevant
. My experiment is just a standard repeated-measures design, and I just need
a direct, non-parametric equivalent of the factorial repeated-measures
Dear community,
I'm working with the data.frame attached (
http://r.789695.n4.nabble.com/file/n4122926/df1.xls df1.xls ), let's call it
df1.
I typed: df1 <- read.xls("C:/... dir .../df1.xls", colNames = TRUE, rowNames =
TRUE)
Then I split df1 randomly using the splitdf function
Emmanuel Jjunju ejjunju at gmail.com writes:
I normally use the raster or clim.pact packages to read netcdf (.nc) files.
This has always worked for me until this weekend: every time I try to
read a .nc file I get the following error
...
Could you please:
1. Consider moving this thread
Read ?apply
This is easiest:
df <- matrix(c(1, 2, 3, 3, 4, 6, 5, 6, 1), 3)
apply(df, 1, function(x) 6*x[1] + 7*x[2] + 8*x[3])
But it's much more efficient to do it with matrix multiplication. In
keeping with the best of tradition, this is left as an exercise to the
reader.
Michael
On Wed, Nov 30, 2011 at
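The matrix-multiplication version "left as an exercise" might look like this, using the same toy matrix from the message:

```r
df <- matrix(c(1, 2, 3, 3, 4, 6, 5, 6, 1), 3)
# %*% computes all the weighted row sums in a single vectorized step
as.vector(df %*% c(6, 7, 8))  # 67 88 68, matching the apply() version
```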
will this do it:
x <- read.table(text = "X1 X2 X3
1 3 5
2 4 6
3 6 1", header = TRUE)
x
X1 X2 X3
1 1 3 5
2 2 4 6
3 3 6 1
apply(x, 1, function(a) 6 * a[1] + 7 * a[2] + 8 * a[3])
[1] 67 88 68
On Wed, Nov 30, 2011 at 8:10 AM, arunkumar akpbond...@gmail.com wrote:
Hi
Homework?
I would say that this indicates that you need to open the R tutorials
and start reading.
-- Bert
On Wed, Nov 30, 2011 at 5:10 AM, arunkumar akpbond...@gmail.com wrote:
Hi
I have a data frame which looks like this
X1 X2 X3
1 3 5
2 4 6
3 6 1
I want to apply a
Put them in a list:
ModelList <- vector("list", 100)
ModelList[[i]] <- mod.step <- step(mod, direction = "both", trace = TRUE)
Then come back and use sapply() to do whatever you want to the set of
models to compare/count/etc them
Michael
On Wed, Nov 30, 2011 at 6:12 AM, Schrabauke bern...@yahoo.de
Thanks Peter, will take a look at this package.
Regards,
Indrajit
From: Peter Ruckdeschel peter.ruckdesc...@web.de
To: r-h...@stat.math.ethz.ch
Sent: Wednesday, November 30, 2011 5:03 PM
Subject: Re: [R] Negative exponential fit
I do not want to shed out
Isn't this even easier?
X1 <- c(1:3)
X2 <- c(3, 4, 6)
X3 <- c(5, 6, 1)
Y <- 6*X1 + 7*X2 + 8*X3
Y
[1] 67 88 68
Or if you really need a function:
MakeY <- function(x, y, z) 6*x + 7*y + 8*z
MakeY(X1, X2, X3)
[1] 67 88 68
--
David L Carlson
Associate
Dear Michael,
The original poster and I ended up pursuing this question off-list and, I
think, resolving the issue, discovering in the process that SPSS apparently
reports type-II tests that are really type-III tests. (Yes, one fits a
multivariate linear model prior to calling Anova() with the
tom wrote
Thank you for your help!
The GSVD (SVDgen) in PTAk does not perform a generalized singular value
decomposition for 2 matrices, only for 1. I should have mentioned this -
sorry.
Are there maybe other packages?
I have also found the last 2 links, but I look for a way to use
optimx does allow you to use bounds. The default is using only methods from
optim(), but
even though I had a large hand in those methods, and they work quite well,
there are other
tools available within optimx that should be more appropriate for your problem.
For example, the current version of
On Nov 30, 2011, at 7:18 AM, R. Michael Weylandt wrote:
expand.grid()
This one is admittedly rather hard to find...
Well, it is linked from the `combn` help page. And it is likely to
be first or second in a search with ??combinations since it is in
pkg:base and at least on my
Dear Florent,
I know that I'm asking optim to minimize my values, and that
results with a lower fvalue are better supported than those with a higher
fvalue.
My comment was just from a data point of view. I'd like ms (the second
parameter) to be as low as possible, as well as the fvalue. So a ms
On Nov 29, 2011, at 23:19 , Ben Bolker wrote:
rch4 rch4 at geneseo.edu writes:
We need help
We are doing a project for a statistics class, and we are looking at
world record times in different running events over time. We are trying to
fit the data with a negative exponential
On 11-11-30 11:32 AM, peter dalgaard wrote:
On Nov 29, 2011, at 23:19 , Ben Bolker wrote:
rch4 rch4 at geneseo.edu writes:
We need help
We are doing a project for a statistics class, and we are
looking at world record times in different running events over
time. We are
Two ways immediately come to mind:
1) Change the values before splitting
2) Use the same seed for the two splits so the rows match exactly and
then just do the changes directly
Any more automated solution will depend on whether your data has
rownames or not.
Michael
PS - As a general rule,
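Approach 2 (reuse the seed so the two splits pick identical rows) can be sketched as follows; the data frame and the sample()-based split here are made up for illustration, since the thread's df1 and splitdf() are not shown in full:

```r
df1 <- data.frame(x = 1:10, y = letters[1:10])
df2 <- transform(df1, x = x * 2)   # the copy with changed values
# Resetting the seed before each sample() makes the draws identical
set.seed(42); rows1 <- sample(nrow(df1), 5)
set.seed(42); rows2 <- sample(nrow(df2), 5)
identical(rows1, rows2)  # TRUE: both splits select the same rows
```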
On Wed, Nov 16, 2011 at 3:06 PM, Milan Bouchet-Valat nalimi...@club.frwrote:
Hi list!
I'm getting an error message when trying to fit an accelerated failure
time parametric model using the aftreg() function from package eha:
Error in optim(beta, Fmin, method = "BFGS", control = list(trace =
Hi All,
I've tried to install these multtest and preprocessCore packages in Mac,
but kept getting error messages. I tried to load the packages using 2 ways:
1. Installed from BioConductor (sources)
2. And installed from BioConductor (binaries)
Both ways, I got these error messages:
For
As the posting guide strongly suggests, I think the first step is to
update to R 2.14: there have been changes in package design and I'm
not sure back-compatibility will run that far. Doing so should let you
download fresh binaries straight from BioC.
I'm not sure why you would get compilation
My apologies: the posting guide doesn't actually suggest updating to
the most recent stable version -- though perhaps it would be a good
addition to the before posting section -- but I still think that's
the necessary fix to your problem.
Michael
On Wed, Nov 30, 2011 at 1:08 PM, R. Michael
On Nov 30, 2011, at 12:57 PM, UyenThao Nguyen wrote:
Hi All,
I've tried to install these multtest and preprocessCore packages
in Mac, but kept getting error messages. I tried to load the
packages using 2 ways:
1. Installed from BioConductor (sources)
You should read the directions
chuck.01 wrote
datum <- structure(list(Y = c(415.5, 3847.8325, 1942.83325,
1215.223,
950.142857325, 2399.585, 804.75, 579.5, 841.70825, 494.053571425
), X = c(1.081818182, 0.492727273, 0.756363636, 0.896363636,
1.518181818, 0.49917, 1.354545455, 1.61,
1. Start R as administrator (i.e. right-click on the R icon on your
desktop and select "Run as administrator").
2. Select "Packages" from the menu bar.
3. Select "Install package(s) from local zip files..." and navigate to
your zip file.
When you want to use the package you must enter
library(bayesQR)
I
You still haven't provided anything reproducible since we can't get to
your data file (also you used
the-you-really-shouldn't-use-this-function function attach) but here's
what I'd guess:
You say the problem occurs when
exp(-alpha*d^(-beta)) > 1
Of course, this happens when
alpha*d^(-beta) < 0.
Hello everybody,
A statistician performed an analysis in SAS for me which I would like to
replicate in R.
I have however problems in figuring out the R code to do that.
As I understood it was a covariance regression model. In the analysis,
baseline was used as covariate and autoregressive
Just throwing this out there: there's no probability distribution in
the equation you gave, so there's no context for an MLE: what is the
likelihood for m?
Knowing a little bit about the NS model, it seems like it makes more
sense to use nls() to fit all four parameters to the data at once.
Hello,
I have data like the following:
datum <- structure(list(Y = c(415.5, 3847.8325, 1942.83325,
1215.223,
950.142857325, 2399.585, 804.75, 579.5, 841.70825, 494.053571425
), X = c(1.081818182, 0.492727273, 0.756363636, 0.896363636,
1.518181818, 0.49917,
Hi,
I have a variable X classified into a lot of groups and I need to run the
[fitdistr] function for each group. I tried the [by] and [tapply]
functions, because my data is organized in two columns (variable and the
groups), but neither of these commands works. If somebody has a tip to help
me
Hi all;
After an overnight Ubuntu upgrade to 10.10, I had to reinstall some R packages.
But the fields package appears to be missing from my usual repository, and a
few of the other repositories I've tested:
Hi Nguyen,
Subject: [R] install multtest and preprocessCore packages in
Bioconductor library
Date: Wed, 30 Nov 2011 09:57:36 -0800
From: UyenThao Nguyen ungu...@tethysbio.com
To: r-help r-help@r-project.org
CC: uth.ngu...@ucdavis.edu uth.ngu...@ucdavis.edu
Hi All,
I've tried to install
Hello,
I am using Dr. Harrell's rms package to make a nomogram. I was able to make
a beautiful one. However, I want to change 5-year survival probability to
5-year failure probability.
I couldn’t get hazard rate from Hazard(f1) because I used cph for the model.
Here is my code:
library(rms)
Hi all,
I would like to use optim() to estimate the equation by the log-likelihood
function and gradient function which I had written. I try to use OPG (Outer
Product of Gradients) to calculate the Hessian matrix, since sometimes the
Hessian matrix is difficult to calculate. Thus I want to pick the
Hi, I used Dr. Harrell's rms package to make a nomogram.
Below is my code for nomogram and calculate total points and probability *in
original data set* used for building the nomogram. *My question is how I get
the formula for calculating the survival probability for this nomogram. Then
I can
Hello people. My problem is that when I try to install the package
tkrplot, I get the following problem:
install.packages("tkrplot")
Installing package(s) into ‘/home/marcos/R/i686-pc-linux-gnu-library/2.13’
(as ‘lib’ is unspecified)
trying URL
Hi, I've been trying to write a program for bootstrapping residuals in R but
without much success.
A lecturer wants to predict the performance of students in an end-of-year
physics exam, y. The lecturer has the students' results from a mid-term
physics exam, x, and a mid-term biology exam, z.
He
It's a scaling problem:
If you do this:
datum <- datum[order(datum$X), ]
with(datum, plot(Y ~ X))
with(datum, lines(X, 3400*exp(-1867*X)))
you'll see that your initial guess is just so far gone that the nls()
optimizer can't handle it.
If you try a more reasonable initial guess it works fine:
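Since `datum` is truncated in this digest, here is the idea with simulated data of roughly the same shape (all values here are made up for illustration):

```r
set.seed(1)
X <- runif(20, 0.4, 1.7)
Y <- 3000 * exp(-2 * X) + rnorm(20, sd = 30)
datum <- data.frame(X = X, Y = Y)
# A starting guess near the data's actual scale converges without trouble
fit <- nls(Y ~ a * exp(b * X), data = datum, start = list(a = 3000, b = -2))
coef(fit)  # close to a = 3000, b = -2
```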
Hi all,
I would like to use optim() to estimate the equation by the log-likelihood
function and gradient function which I had written. I try to use OPG (Outer
Product of Gradients) to calculate the Hessian matrix, since sometimes the
Hessian matrix is difficult to calculate. Thus I want to pick the
I like to use split() to split your data into groups, then run
lapply() to use fitdistr on each element of the list.
E.g.,
df <- data.frame(X = rnorm(500), ID = sample(letters[1:5], 500, TRUE))
temp <- split(df$X, df$ID)
lapply(temp, fitdistr, "normal")
Though it's just as easy with tapply():
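A sketch of what that tapply() version might look like (simulated data; fitdistr is from the MASS package):

```r
library(MASS)  # provides fitdistr()
set.seed(1)
df <- data.frame(X = rnorm(500), ID = sample(letters[1:5], 500, TRUE))
# tapply() applies fitdistr to X within each level of ID directly,
# without the explicit split() step
fits <- tapply(df$X, df$ID, fitdistr, "normal")
length(fits)  # one fitted distribution per group
```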
Thank you All for your prompt helps.
Dear Dan,
Before I used
source("http://bioconductor.org/biocLite.R")
biocLite("Biobase") # GET BIOCONDUCTOR PACKAGE FROM SOURCE
then, I went to Install packages in Rgui pointing to BioConductor to load
these packages in. That was why I got the error messages.
Hi Nguyen,
On Wed, Nov 30, 2011 at 12:11 PM, UyenThao Nguyen ungu...@tethysbio.com wrote:
Thank you All for your prompt helps.
Dear Dan,
Before I used
source("http://bioconductor.org/biocLite.R")
biocLite("Biobase") # GET BIOCONDUCTOR PACKAGE FROM SOURCE
then, I went to Install packages in
Marianne Stephan mariannestephan at hotmail.com writes:
A statistician performed an analysis in SAS for me which I would
like to replicate in R. I have however problems in figuring out the
R code to do that. As I understood it was a covariance regression
model. In the analysis, baseline was
On Nov 30, 2011, at 11:31 AM, bubbles1990 wrote:
Hi, I've been trying to write a program for bootstrapping residuals
in R but
without much success.
A lecturer wants to predict the performance of students in an end-of-
year
physics exam, y. The lecturer has the students' results from a
On Nov 30, 2011, at 1:15 PM, joseph.s...@geog.ubc.ca wrote:
Hi all;
After an overnight Ubuntu upgrade to 10.10, I had to reinstall some
R packages. But the fields package appears to be missing from my
usual repository, and a few of the other repositories I've tested:
Ah, I see, thank you both.
Michael Weylandt wrote
It's a scaling problem:
If you do this:
datum <- datum[order(datum$X), ]
with(datum, plot(Y ~ X))
with(datum, lines(X, 3400*exp(-1867*X)))
you'll see that your initial guess is just so far gone that the nls()
optimizer can't
I am trying to add the bigmemory packages but I get the following error
message:
In file included from bigmemory.cpp:14:0:
../inst/include/bigmemory/isna.hpp: In function 'bool neginf(double)':
../inst/include/bigmemory/isna.hpp:22:57: error: 'isinf' was not declared in
this scope
make: ***
The relevant code is contained in the following working paper:
http://ideas.repec.org/p/ucn/wpaper/201122.html
I study a part-time long-distance learning course, so I only really have access
to online sources for help; I'm sure I'll crack it eventually. I understand
how to do it with 2 variables, it's just having 3 that is confusing me
I'm currently working with some time series data with the xts package, and
would like to generate a forecast 12 periods into the future. There are
limited observations, so I am unable to use an ARIMA model for the forecast.
Here's the regression setup, after converting everything from zoo objects
hi: that model is what the political scientists call an LDV (lagged
dependent variable) model, the
marketing people call a Koyck distributed lag model, and the economists call it
an ADL(1,0) or ADL(0,1) (I forget which).
You have to be careful about the error term because, if it's not white
noise,
On Nov 30, 2011, at 15:31 , Marcos Amaris Gonzalez wrote:
Hello people. My problem is that when I try to install the package
tkrplot, I get the following problem:
You seem to be missing the tk.h file. This is, most likely, in a tcl/tk
development package that you haven't installed. This is part
On Nov 30, 2011, at 11:31 AM, bubbles1990 wrote:
Hi, I've been trying to write a program for bootstrapping residuals
in R but
without much success.
A lecturer wants to predict the performance of students in an end-of-
year
physics exam, y. The lecturer has the students' results from a
On Wed, Nov 30, 2011 at 4:40 PM, AaronB aa...@communityattributes.com wrote:
I'm currently working with some time series data with the xts package, and
would like to generate a forecast 12 periods into the future. There are
limited observations, so I am unable to use an ARIMA model for the
I'm not familiar with the package and I don't currently have it
loaded, but looking at the examples in the documentation here
http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/pls.pcr/html/mvr.html
it seems that you need ncomp = 1:6. Saying you want to do multivariate
regression on only a
if you want to store the result in a column of your data.frame:
within(df, Y <- 6*X1 + 7*X2 + 8*X3)
On Wed, Nov 30, 2011 at 9:59 AM, David L Carlson dcarl...@tamu.edu wrote:
Isn't this even easier?
X1 <- c(1:3)
X2 <- c(3, 4, 6)
X3 <- c(5, 6, 1)
Y <- 6*X1 + 7*X2 + 8*X3
Y
[1] 67 88 68
Or if you
Is there a way to use full information maximum likelihood (FIML) to
estimate missing data in the sem package? For example, suppose I have a
dataset with complete information on X1-X3, but missing data (MAR) on X4.
Is there a way to use FIML in this case? I know lavaan and openmx allow you
to do
I understand the original implementation of Random Forest was done in
Fortran code. In the source files of the R implementation there is a note
"C wrapper for random forests: get input from R and drive the Fortran
routines." I'm far from an expert on this... does that mean that the
implementation
G'day all,
Sorry if this message has been posted before, but searching for R is always
difficult...
I was hoping for a copy of the logo in eps format? Can I do this from R, or is
one available for download?
cheers
Ben
I know citation() gives the R citation to be used in publications. Has
anyone put this into EndNote nicely? I'm not very experienced with
EndNote, and the way I have it at the moment the 'R Development Core
Team' becomes R. D. C. T. etc.
Cheers.
I have R entered as a computer program, and in the Programmer Name field
I write it as:
R Development Core Team,
Notice the comma after "Team". That seems to be the key to getting Endnote to
treat the whole thing as a surname that doesn't get abbreviated.
On Wed, Nov 30, 2011 at 10:56 PM, Matt
On Wed, Nov 30, 2011 at 7:48 PM, Axel Urbiz axel.ur...@gmail.com wrote:
I understand the original implementation of Random Forest was done in
Fortran code. In the source files of the R implementation there is a note
C wrapper for random forests: get input from R and drive the Fortran
Hi,
I’m trying to use the STLperArea function in the ndvits package. I can run
the sample data (“SLPSAs_full”) without any problem. However, when I come to
run the function on my own data, I get the following message.
Waiting to confirm page change...
Error in .checkPath(path) : no TISEAN
Hi everybody,
I am unable to resolve this error using the following for loop. Would
appreciate help. The same loop works with for(i in 1:92) strangely. I
checked the .raw input file and everything is kosher, even Line 547
mentioned in the error message.
I wonder if there is any problem with the
What if you have over 50 matrices and you don't want to write them all out
one-by-one? I know there's something really quite simple, but I haven't
found it yet!...
--
View this message in context:
http://r.789695.n4.nabble.com/Average-of-Two-Matrices-tp860672p4126615.html
Sent from the R help
Dear all: I am trying to use the response.zigp and est.zigp from the ZIGP
package. First, I generated some simple data via response.zigp, and then I tried
to fit it with the est.zigp function; this is the code:
-
library(ZIGP)
#try-ZIGP-1
n <- 100
Dear all,
I am doing an ordinal data simulation. I have a question regarding the cut
off values between categories. Suppose I have three categories, if I do
regression, there should be two cut-off values. I found some simulation code for
the ordinal data. However, they usually only generate a
Well, you're rather stuck writing them all out unless you have them in
some other data structure.
## three dimensional array (50 4 x 4 matrices)
x <- array(rnorm(16 * 50), dim = c(4, 4, 50))
apply(x, 1:2, mean)
            [,1]       [,2] [,3] [,4]
[1,] -0.09460574 0.01572077
On 30.11.2011 16:13, Berend Hasselman wrote:
tom wrote
I don't think you have to install anything.
You have not told us how you are trying to load the DLLs.
You can use the GSVD function from
It is one of the rather confusing things about the RTisean package.
It does not say what your subject says: it says 'TISEAN executables'.
The DESCRIPTION *should* have a SystemRequirements line (and this has
been reported). It does say:
"It requires that you have the Tisean-3.0.1 algorithms"