Mario Falchi yahoo.com> writes:
> I'm trying to evaluate the frequency of different strings in each row of a
data.frame:
> INPUT:
> ID G1 G2 G3 G4 … GN
> 1 AA BB AB AB …
Something like
z <- data[,-1]
table(z,row(z))
?
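A toy version of this suggestion (example data of my own, in the shape of the original question; note the as.matrix(), which makes table() see the genotypes as a single vector rather than a list of columns):

```r
## example data in the shape of the original question
d <- data.frame(ID = 1:2,
                G1 = c("AA","AB"), G2 = c("BB","AB"),
                G3 = c("AB","AA"), G4 = c("AB","BB"))
z <- as.matrix(d[, -1])   # drop the ID column
table(z, row(z))          # genotype counts, one column per row of d
```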
Ben Bolker
__
Paul Murrell auckland.ac.nz> writes:
>
> Hi
>
> Kilian Plank wrote:
> > Good morning,
> >
> > in a 3D plot based on persp() the axis title (of dimension z) overlaps with
> > the axis labels.
> > How can the distance (between axis labels and axis title) be increased?
>
> Paul
Another way
; in front of them -- protect with
## verbatim environment!
Ben Bolker
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
ndicated the number of
rows in each sub-table:
x <- 1:24
nrow <- c(1,2,3)
ncol <- 4
ind <- rep(1:length(nrow),ncol*nrow)
lapply(split(x,ind),
matrix,ncol=ncol,byrow=TRUE)
seems to work.
cheers
Ben Bolker
__
ther (1) what
arctangent distribution you're talking about (it may be the Cauchy/
t-distribution with 1 df, or it may be something else) or (2) why
you want to transform the distribution, and whether it's a sensible
thing to do in the first place ... (see what the posting guide has
that's a very strong test.
Any ideas about where to start looking/diagnosing?
thanks
Ben Bolker
__
oled) within-group variances.
Bottom line is that, except for knowing about pt and pf,
this is really a basic statistics question rather than an
R question.
good luck
Ben Bolker
PS: it is too bad, but the increasing sophistication of R is
making it harder for beginners to explore the gut
, etc.) to find the answers to your
questions --- then you'd be much more likely to get a useful
response, even if it was not strictly an R question.
Ben Bolker
__
he parameter is 0, so the
estimates of the coefficients (object$coefficients)
*are* the distance from the null hypothesis.
Ben Bolker
__
or extracting the residual degrees of freedom,
they could -- or someone could change the
internal representation of lm and glm objects
(unlikely though that is) to mean that
object$df.residual was missing, or even wrong.
Ben Bolker
__
Federico Calboli imperial.ac.uk> writes:
>
> Hi All,
>
> I would like to know, is there a *ballpark* figure for how many
> parameters the minimisation routines can cope with?
>
I think I would make a distinction between theoretical
and practical limits. A lot depends on how fast your
obj
antonio rodriguez gmail.com> writes:
> I have 2 arrays:
>
> dim(a1)
> [1] 3 23 23
> dim(a2)
> [1] 3 23 23
>
> And I want a new array, to say, a3, where:
>
> dim(a3)
> [1] 6 23 23
You can (1) figure out how to transpose the
arrays (?aperm), put them together with c(),
and re-array() them, *
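If the cut-off second option was going to be the abind package (a guess on my part; abind is the usual tool for this), binding along the first dimension is a one-liner:

```r
## bind two 3 x 23 x 23 arrays along the first dimension (sketch)
library(abind)
a1 <- array(rnorm(3*23*23), c(3,23,23))
a2 <- array(rnorm(3*23*23), c(3,23,23))
a3 <- abind(a1, a2, along = 1)
dim(a3)   # 6 23 23
```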
nificant
I still had some problems profiling. Various things that may help:
(1) set parscale= option
(2) do fits on the log-parameters (which all have to be positive) OR
(3) use L-BFGS-B and set lower=0
(4) summary(m1) will give you approximate (Fisher informati
max(0,x-15)+max(0,x-90)
if x>90: x-(x-15)+(x-90) = x+15-90 = x-75
if 15<x<90: x-(x-15) = 15; if x<15: x  -- i.e. pmin(x,15)
ifelse(x>90,x-75,pmin(x,15)))
}
replicate(10,replicate(8,tmpf()))
(you could also generate 80 values and then break them
up into segments of 8)
Ben Bolker
__
mps.
>
> Hope this helps
> Adrian Baddeley
>
Just to clarify: I suggested the Strauss process for "more even
than random" point processes and the Poisson cluster process for
clumped -- the original poster asked for both.
cheers
Ben Bolker
Randy Zelick pdx.edu> writes:
>
> There is a switch to set the upper bound on rings (radial.lim) but I don't
> see a way to specify the lower bound. What I want is a "bullseye" plot
> that goes from my start value (first ring at 10) to my end value (last
> ring at 100) independent of the data
> > On Thursday 18 May 2006 14:51, Damien Joly wrote:
> > > Hi all,
> > >
> > >
> > > HOWEVER, I might be able to work around this policy if I can find a
> > > licensed software vendor, preferably in Canada, that sells R.
> > >
> > > I tried googling R vendors but was unsuccessful.
> > >
> > > An
lattice.options(default.theme = ltheme) ## set as default
ltheme <- col.whitebg()
ltheme$xyplot.background$col <- "transparent" ## change strip bg
trellis.par.set(theme=col.whitebg())
print(xyplot(y~x|tr,data=all,layout=c(3,1),aspect="iso",scales=list(draw=FALSE),
bg="
orkun deprem.gov.tr> writes:
>
> I am working in trellis package.
> what can I do to make bacground white ?
trellis.par.set(col.whitebg())
works for me.
http://wiki.r-project.org/rwiki/doku.php?id=tips:graphics-lattice
Ben Bolker
_
Robin Hankin noc.soton.ac.uk> writes:
>
> Hi everyone
>
> well, quite a few people were interested in my little sparklines
> example,
> and one suggestion was to post it on a webpage.
>
> What would be a good place to post it?
on the wiki? http://wiki.
Xin hotmail.com> writes:
>
> Dear All:
>
> Then error messga there: initial value in 'vmmin' is not finite
> In addition: There were 38 warnings (use warnings() to see them).
>
> Could you give some advice please?
>
> Thanks a lot!
>
> Xin Shi
>
> [[alternative HTML version dele
)
y <- fft(z, inverse = TRUE)/l
chop(y)
}
of course this also depends exactly what you mean by
"conditioned on the available data" -- and how do you
feel about parametric models? RandomFields has lots of
tools for conditional simulation, if you don't mind fitting
a p
:
> x <- list("a","b","c")
> x[1]
[[1]]
[1] "a"
> class(x[1])
[1] "list"
> x[[1]]
[1] "a"
> class(x[[1]])
[1] "character"
(2) cophy <- lapply(anj30,cophenetic.phylo) should process all
of your phylogen
Prof Brian Ripley stats.ox.ac.uk> writes:
>
> On Wed, 3 May 2006, Johannes Graumann wrote:
>
> > What's the canonical way of patching something like this in R? Redefining
> > the
> > function at the start of your script?
>
> There are namespace issues, so the canonical way is to change the so
istributed ... as R is trying to tell you, Poisson
distributions only make sense for integer data. Your best bets
are probably (1) reconsider the error distribution, (2) perhaps
trying using nlme instead of lmer -- it is more polished, and you
can use Pinheiro and Bates "Mixed-Effect
log-likelihood) for (1) pooled data and (2) each group
separately. Add the negative log-likelihoods for the grouped
estimates. Use the likelihood ratio test to decide whether
the reduced model (all parameters equal) is significantly worse
than the full model (all parameters different).
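The recipe above, sketched with a normal model and MASS::fitdistr (my own toy example, not from the original thread):

```r
## pooled vs. per-group fits compared with a likelihood ratio test
library(MASS)
set.seed(1)
x1 <- rnorm(50, mean = 0)
x2 <- rnorm(50, mean = 1)
nll <- function(x) -fitdistr(x, "normal")$loglik   # negative log-likelihood
nll.pooled <- nll(c(x1, x2))          # reduced model: one set of parameters
nll.group  <- nll(x1) + nll(x2)       # full model: separate parameters
lrt <- 2 * (nll.pooled - nll.group)   # LR statistic
pchisq(lrt, df = 2, lower.tail = FALSE)  # 2 extra parameters (mean, sd)
```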
Ben Bolker
ula other than entering the whole formula.
>
> Victor
>
You haven't given us enough information/described your problem
clearly enough. The maximum likelihood
estimate of _what_? fitdistr() in the MASS package might help.
If you want (log)likelihoods dlnorm (for log no
Peter Dalgaard biostat.ku.dk> writes:
>
> Watch out for the parametrization: In SAS the intercept (in *this*
> context!, it is different in other procs...) corresponds to
> parastat==1 and patsize==small, and I wager that at least the former
> is vice-versa in R, quite possibly both.
>
> > #
1348
p3 -1.084389 0.5093672
coef(s1)[,"Estimate"]+1.96*outer(coef(s1)[,"Std. Error"],c(-1,1))
          [,1]        [,2]
p1 2.07188136 3.42153311
p2 0.01547139 2.05439969
p3 -2.08274893 -0.08602966
Standard errors are slightly differ
ometricFunction.html);
Robin Hankin wrote some code (hypergeo in the Davies package on CRAN)
to compute a particular Gaussian h'geom function, and was asking
at one point on the mailing list whether anyone was interested
in other code; I don't know whether it will be generalized
enough
s the proper way to do this?
I think if you really mean array (i.e. an n-dimensional
table with n>2) then something like the following will do it:
## create an example list of matrices
z <- replicate(5,matrix(runif(9),nrow=3),simplify=FALSE)
library(abind)
do.call("ab
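The call is cut off above; presumably it was something like the following (my guess at the completion, binding the list of 3 x 3 matrices into a 3 x 3 x 5 array):

```r
## create an example list of matrices and bind along a new third dimension
library(abind)
z <- replicate(5, matrix(runif(9), nrow = 3), simplify = FALSE)
a <- do.call("abind", c(z, list(along = 3)))
dim(a)   # 3 3 5
```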
[redirected from R-devel: this really belongs on R-help]
I have two sets of time-series that I imported from Excel using RODBC
and placed in
"securities" and "factors".
What I need to do is generate t-scores for each security-factor pair.
I tried the following:
t1 <- t.test(securities[
out to 0.941, which seems reasonable ...
I'm hoping/figuring that someone more knowledgeable will
jump in with corrections if I've said something terribly
wrong ...
Ben Bolker
__
lionel humbert courrier.uqam.ca> writes:
>
> Dear R user,
>
> I have made many logistic regressions (glm function) with a second order
> polynomial formula on a data set containing 440 observations of 96
> variables. I've made the plot of AIC versus the frequency
> (presence/observations) of e
"rise time (sec)",ylab="counts",
> xlim=c(0,8))
I think you need
multhist(list(risetime[,,,1,],risetime[,,,2,]))
or for more clarity
L <- list(risetime[,,,1,],risetime[,,,2,])
multhist(L)
(try it that way first and then put in all the
extra arguments)
good l
Haifeng Xie wmin.ac.uk> writes:
>
> If I understand it correctly, something like this should do what you want
>
> x[!apply(x, 1, function(y) any(is.na(y))), ]
>
> where x is the dataframe in question.
>
> Hope that helps.
>
> Kevin
>
I believe he wants to remove *columns* with NAs, not r
Sam Steingold podval.org> writes:
>
> Hi,
> It appears that deal does not support missing values (NA), so I need to
> remove them (NAs) from my data frame.
> how do I do this?
> (I am very new to R, so a detailed step-by-step
> explanation with code samples would be nice).
If you wanted to re
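The reply is cut off above; the standard idioms for dropping rows with missing values are (my addition, with a tiny made-up data frame):

```r
## drop every row of a data frame that contains at least one NA
d <- data.frame(a = c(1, NA, 3), b = c(4, 5, NA))
na.omit(d)               # rows with any NA removed
d[complete.cases(d), ]   # equivalent, spelled out
```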
>
Not a complete solution, but you could take a look at
the likelihoods associated with Box-Cox transformations
(e.g. Venables and Ripley MASS pp. 170-172).
Ben Bolker
__
ds like you did this
in R). (The mle function from stats4 provides a *framework*
for maximum-likelihood estimation; you still need to be able to write down
a function for the likelihood of a given set of parameters.)
Ben Bolker
__
Domenico Vistocco unicas.it> writes:
>
> Is there a function in R for constrained linear least squares?
>
> I used the matlab function LSQLIN: my aim is to obtain
> non-negative regression coefficients which sum 1.
>
> Thanks in advance,
> domenico vistocco
I haven't tried it, but it looks
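One concrete way to do this (my own sketch, not necessarily what the truncated reply pointed to): pose it as a quadratic program and use solve.QP from the quadprog package, with a sum-to-one equality constraint and non-negativity constraints.

```r
## non-negative least squares with coefficients summing to 1, via quadprog
library(quadprog)
set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3)
y <- X %*% c(0.2, 0.3, 0.5) + rnorm(100, sd = 0.05)
p <- ncol(X)
Dmat <- crossprod(X)                # t(X) %*% X
dvec <- drop(crossprod(X, y))
Amat <- cbind(rep(1, p), diag(p))   # column 1: sum-to-one; rest: coef >= 0
bvec <- c(1, rep(0, p))
fit <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)  # first constraint is equality
fit$solution                        # non-negative, sums to 1
```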
RandomFields package on CRAN
from ?PrintModelList:
* 'fractalB' (fractal Brownian motion)
gamma(x) = x^a
The parameter a is in (0,2]. (Implemented for up to three
dimensions)
cheers
Ben
_
where you fill in the predicted mean and std. dev. of each.
Ben Bolker
__
ctions (optim, nlm, nlminb) that you can use to
minimize a negative log-likelihood function; the mle() function
in the stats4 package is a wrapper for this function. You have
to define your own likelihood function:
http://www.mayin.org/ajayshah/KB/R/documents/mle/mle.html
may be helpf
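A minimal example of the pattern described above (my own illustration: fit a normal distribution by writing the negative log-likelihood by hand and handing it to mle()):

```r
## hand-written negative log-likelihood, minimized via stats4::mle
library(stats4)
set.seed(1)
x <- rnorm(100, mean = 2, sd = 3)
nll <- function(mu = 0, sigma = 1) -sum(dnorm(x, mu, sigma, log = TRUE))
fit <- mle(nll, start = list(mu = 1, sigma = 1),
           method = "L-BFGS-B", lower = c(-Inf, 1e-6))  # keep sigma positive
coef(fit)
```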
Christian Hoffmann wsl.ch> writes:
>
> Hi,
>
> In a second try I will ask this list to give me some useful pointers.
[There is a general problem with the list, because of its
volunteer nature: easy questions and questions that catch peoples'
attention get answered. Others don't. I saw you
eg Tarpinian <[EMAIL PROTECTED]>
To: Ben Bolker <[EMAIL PROTECTED]>
This may not be the most helpful advice, but Mathematica is a wonderful
platform having many more built-in optimization routines than R. I have
found the simulated annealing facility to be robust and much easier to
undary constraints is to
add a quadratic penalty (perhaps not even trying to evaluate the
objective function -- e.g. substituting the value at the closest
boundary point) for parameters outside the constraints into
the objective function.
With more information (number of parameters,
Gasper Cankar ric.si> writes:
>
> Hello everyone.
>
> For reasons too long to explain I wanted to do plots similar to histograms
with plot(type="h").
> I ran into a problem - if I set line width too high, histogram isn't accurate
anymore.
try par(lend=1) instead. Far from obvious, but see
P
orking directory].
The other common problem, which you probably *aren't* having, is
hidden file extensions under Windows.
good luck
Ben Bolker
__
something like
myper$spec <- myper$spec[myper$spec<4]
Please take some time to sit down and read "An Introduction to R",
which came with your copy of R -- pages 11 and 12 will be especially
helpful. If you find it too technical, try one of the many books
or contributed
more accurate (generally and in particular situations).
That's just my best guess, someone else may have better advice ...
cheers
Ben Bolker
__
Kemp S E (Comp glam.ac.uk> writes:
>
> Hi,
>
> Does anyone know of a pre-existing function where I can get the t-test
> confidence interval for a given mean, sd, degrees of freedom and
> confidence limit.
>
> I do NOT want to run any data through the t.test function.
>
> Kind regards,
>
> Sa
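The reply is not preserved above, but the computation is short enough to do directly (my sketch; it assumes `sd` is the sample standard deviation and df = n - 1, which is one reading of the question):

```r
## t-based confidence interval for a mean from summary statistics alone
t.ci <- function(m, s, n, conf = 0.95) {
  half <- qt(1 - (1 - conf)/2, df = n - 1) * s / sqrt(n)
  c(lower = m - half, upper = m + half)
}
t.ci(10, 2, 25)   # 95% CI for a mean of 10, sd 2, n = 25
```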
ese
except Statistical Computing are listed on the R books page. Faraway
also has
"Linear models with R", which has a chapter on block designs.
That should give you some starting points ...
Ben Bolker
__
otate3d() is used with a matrix that isn't
> a rotation matrix; it may not be obvious that this is allowed, but it is.)
>
> Duncan Murdoch
>
>
>>
>> On 2/2/06, Ben Bolker <[EMAIL PROTECTED]> wrote:
>>
>>> Duncan Murdoch stats.uwo.ca> writes:
rix (the scale will want
to be something like eigen(v)$values*qnorm(0.975)
for a 95% contour, the rotation matrix is some
function of the eigenvectors), but it seems to
me that this will produce any ("free form"?) ellipsoid
you want.
cheers
Ben Bolker
> > existing method in R.
>
> The misc3d package includes a function for 3d contour plots; that should
> do what you want.
>
is contour3d really necessary or could you just plot ellipsoids?
(library(rgl); demo(shapes3d)) -- still a little bit of figuring
to do, but
B Dittmann yahoo.co.uk> writes:
>
> Hi,
>
> just run the Kolmogorov-Smirnov test on R.
> Is there any detailed documentation available for the options for the KS
> test, esp. with regard to the hypotheses.
> The help file is rather "thin".
>
> Many thanks.
>
> Bernd
>
Hmmm. Does lookin
()
select the file in the browser and then see what R thinks
f is.
My guess is that there's a hidden extension here somewhere ...
Ben Bolker
__
theoritical distribution than the
> function density. How come?
Because dnorm takes the standard deviation and dmvnorm (which
is in the mvtnorm package, not the gtools package) takes the
variance as an argument. replace var(X) with sd(X) in your call to dnorm and
everything will make
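The sd-vs-variance point in one line (my illustration; sd = 2 corresponds to a 1 x 1 covariance matrix containing the variance 4):

```r
## dnorm takes a standard deviation, dmvnorm (mvtnorm) takes a variance
library(mvtnorm)
dnorm(0.5, mean = 0, sd = 2)
dmvnorm(0.5, mean = 0, sigma = matrix(4))   # same value
```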
")
is probably what you're looking for. See ?predict.glm.
Ben Bolker
__
al, we might be able
to help.
Ben Bolker
__
're trying
to do we might be able to help (or possibly tell you
that it really can't work ...)
Ben Bolker
__
Petri Palmu geneos.fi> writes:
> I'm using gregexpr(). As a result something like this:
>
> # starting positions of the match:
> [[1]]
> [1] 7 18
>
> # length of the matched text:
> attr(,"match.length")
> [1] 4 4
>
> Now, I'd like to have a matrix,
> 74
> 18 4
>
something like
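One way to finish the thought (my sketch): bind the starting positions and the match lengths into a two-column matrix. With a pattern that reproduces the starts 7 and 18 from the post:

```r
## gregexpr result -> two-column matrix of (start, length)
g <- gregexpr("ab+", "xxxxxxabbbxxxxxxxabbb")[[1]]
m <- cbind(start = as.vector(g), length = attr(g, "match.length"))
m   # starts 7 and 18, lengths 4 and 4
```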
Robert Baer atsu.edu> writes:
> Well, consider this example:
> barplot(c(-200,300,-250,350),ylim=c(-99,400))
>
> It seems that barplot uses ylim and pretty to decide things about the axis
> but does some slightly unexpected things with the bars themselves that are
> not just at the 'zero' end o
Ben Bolker ufl.edu> writes:
>
> Bliese, Paul D LTC USAMH us.army.mil> writes:
>
> >
> > R Version 2.2.0
> >
> > Platform: Windows
> >
> > When I use barplot but select a ylim value greater than zero, the graph
> > is
Bliese, Paul D LTC USAMH us.army.mil> writes:
>
> R Version 2.2.0
>
> Platform: Windows
>
> When I use barplot but select a ylim value greater than zero, the graph
> is distorted. The bars extend below the bottom of the graph.
>
The problem is that barplot() is really designed to work
wit
Constantinos Antoniou central.ntua.gr> writes:
>
> Hello all,
>
> I would like to fit a mixed effects model, but my response is of the
> negative binomial (or overdispersed poisson) family. The only (?)
> package that looks like it can do this is glmm.ADMB (but it cannot
> run on Mac
bout breaking y axes but the same techniques
apply to the x axis.
good luck,
Ben Bolker
__
the possibility of incorporating temporal/spatial correlation
structures like those in nlme into lme4, Doug Bates said that he
wanted to work first on getting the basic framework of the package
really solid [can't blame him at all, and of course honor and
glory to him for putting so much work int
Sérgio Nunes gmail.com> writes:
>
> Hi,
>
> I'm trying to draw a 2D plot using multiple tints of red. The
> (simplified) setup is the following: || year | x | y ||
>
> My idea is that each year is plotted with a different tint of red.
> Older year (lightest) -> Later year (darkest).
how abo
me parameters with mle(), ending
up with a named vector of coefficients,
and then want to use some or all of
those coefficients as input to another
mle() call -- I have to remove the
names manually.)
Can anyone suggest why this happens/
why it is a good design/whether there
are simple
l for the result otherwise ...
you can specify covariates for both models.
As a minor point: both references you cite actually focus
on ZI Poisson (not NB) regression models, although Dobbie and
Welsh do allow for overdispersion ...
hope that helps
Ben Bolker
_
Serguei Kaniovski wifo.ac.at> writes:
>
> I've a vector of pairwise correlations in the order low-index element
> precedes the high-index element, say:
>
> corr(1,2)=0.1, corr(1,3)=0.2, corr(2,3)=0.3, corr(3,4)=0.4
>
> How can I construct the corresponding correlation matrix?
Not absolutel
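The reply is cut off; a base-R way to do it for n = 3 (my sketch): filling the upper triangle in column-major order matches the "low index precedes high index" ordering, then symmetrize.

```r
## pairwise correlations -> correlation matrix (n = 3 example)
v <- c(0.1, 0.2, 0.3)                 # corr(1,2), corr(1,3), corr(2,3)
m <- diag(3)
m[upper.tri(m)] <- v                  # column-major fill matches the ordering
m[lower.tri(m)] <- t(m)[lower.tri(m)] # mirror to make it symmetric
m
```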
e,scale) {
-sum(dfrechet(x,loc=loc,shape=shape,scale=scale,log=TRUE))
}
[totally untested]
and use mle, from the stats4 package (or just plain
old optim()) to find the parameters.
The hardest part may be to find good starting parameters.
Ben Bolker
__
jobst landgrebe gwdg.de> writes:
>
> Dear List,
>
> can anyone tell me how to plot a discontinuous y-axis (ordinate
> with a -/ /- "break sign") to fit in data with a wide range without the
> need of logarthimic transformation? My data are distributed like this:
>
> (abscissa: 1:10)
>
> 1. ve
thanks all; this makes sense now. For what
it's worth, this came up in the context of
mapply(...,SIMPLIFY=TRUE), which returned a
matrix as requested, but an odd-looking one.
cheers
Ben Bolker
__
d the doctor saying "well then, don't do that"?
Ben Bolker
__
Dimitris Rizopoulos med.kuleuven.be> writes:
>
> you don't need L-BFGS-B in this case since you can easily
> re-parameterize you problem, i.e., you can always write:
>
> pi_i = exp(a_i) / sum(exp(a_i)), with e.g., a_1 = 0
>
> and thus maximize with respect to a_i's.
>
> I hope it helps.
>
>
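The suggested reparameterization in code (my illustration): optimize over unconstrained a_2..a_k with a_1 fixed at 0, and recover the probabilities afterwards.

```r
## softmax reparameterization: unconstrained a's -> probabilities in (0,1)
softmax <- function(a) { e <- exp(c(0, a)); e / sum(e) }
softmax(c(1, -1))   # three probabilities, all positive, summing to 1
```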
extended metafile format.
the other answer that has been suggested is to install R
for windows under Wine (assuming you're running on Intel
hardware).
Ben Bolker
__
ent levels without using any specific library.
>
> Thanxs for your help,
>
> Guillaume
I don't know exactly why you want to avoid "using any specific
library", but the ellipse package (sic) would seem to do what
you want pretty conveniently ...
cheers
Ben Bolker
Ben Bolker ufl.edu> writes:
>
> Tony Gill uq.edu.au> writes:
>
> >
> > Hi All,
> >
> > I have a greyscale image ...
> library(pixmap)
> x <- read.pnm("graypic.pnm")
> str(x)
> x@grey[1,1]
>
Tony Gill uq.edu.au> writes:
>
> Hi All,
>
> I have a greyscale image that I am reading in through RGDAL and placing in a
> pixmap object.
>
>
library(pixmap)
x <- read.pnm("graypic.pnm")
str(x)
x@grey[1,1]
__
Thomas Isenbarger plantpath.wisc.edu> writes:
>
> I haven't been an R lister for a bit, but I hope to enlist someone's
> help here. I think this is a simple question, so I hope the answer
> is not much trouble. Can you please respond directly to this email
> address in addition to the li
Carlo Fezzi stat.unibo.it> writes:
>
> Dear R-helpers,
> Anybody knows which function can I use to comupute maximum likelihood
> standard errors?
>
> Using the function "nlm" I can get the estimate of the parameters of any
> likelihood that I want (for example now I am working on a jump diffus
Dan Janes oeb.harvard.edu> writes:
>
> Hi all,
> I am trying to bootstrap a small data set into 1000 "pseudodatasets" and
> then run an ANOVA on each one. Can anyone provide guidance on how I could
> do this?
>
> Thank you.
>
> -Dan Janes
>
> ***
[snip snip snip snip]
>
How about "strict" option that could be set to
disallow use of T/F variables?
I had a student run into trouble fairly recently (although can't
at the moment provide a reproducible example using T as a
variable in a formula that was passed to nls() ... I think there
m
I *think* (but am not sure) that these guys were actually (politely)
advertising a commercial package that they're developing. But, looking at
the web page, it seems that this module may be freely available -- can't
tell at the moment.
Ben
On Wed, 13 Apr 2005, Henric Nilsson
This is a little bit tricky (nonlinear, mixed, count data ...) Off the
top of my head, without even looking at the documentation, I think your
best bet for this problem would be to use the weights statement to allow
the variance to be proportional to the mean (and add a normal error term
for
uggest that the models with
autocorrelation *are* preferable, by quite a bit -- I could pursue this
further.) Or should I just not worry about it and move on?
Ben Bolker
--
simulation code:
library(MASS)
simdata <- function(sd=200,range=2,n=NULL,mmin=0) {
if (is.nu
o has a paper with Ram Myers on hockey stick
models in fisheries). The basic trick is that, unless you do some kind
of numerical smoothing, it's very easy to get stuck in local minima.
I got a little carried away with the problem and am sending you some
code off-list ...
cheers
and in this case there's no point using with().
can someone help me understand this behavior and to find a clean way to
use mle() on a predefined likelihood function that allows substitution of
an arbitrary data set?
R 2.0.0 on Gentoo (trying to stick with the package management system so
ha
o say "install all
R packages")
cheers,
Ben
On Tue, 11 May 2004, Dirk Eddelbuettel wrote:
>
> Hi Ben,
>
> On Tue, May 11, 2004 at 11:57:40AM -0400, Ben Bolker wrote:
> >
> > Just in case anyone cares or is hitting the same problem:
>
> Any re
Just in case anyone cares or is hitting the same problem:
to install current mgcv (1.0-5) on 1.9.0 on Knoppix/Debian unstable I had
to:
# cd /usr/lib
# ln -s /usr/lib/atlas/libblas.so.3 libblas-3.so
# ln -s /usr/lib/atlas/liblapack.so.3 liblapack-3.so
Otherwise compilation couldn't find -lbl
I was getting similar errors, which I finally tracked down to the
following:
I had accidentally left an extraneous "test.R" in my pkg/R directory;
that file contained a system call to an external program that created a
particular file, which I then tried to read into R. The R code that
tr
This implementation, originally written by Nici Schraudolph, allows you
to choose which branch you want. I've checked the answers for complex
arguments, non-systematically, against Mathematica's ProductLog function.
Ben Bolker
lambertW = function(z,b=0,maxiter=10,eps=.Machine$
plot(table(factor(x,levels=c("c","b","a"))))
is at least approximately what you want (the only complicated bit is
reversing the order of the bars from the default alphabetical order)
substituting barplot() for plot() also works
you may want to use ylab="something" in the plot or barplot comma
With all due respect to BDR and you, I think this behavior is not
obvious to casual/new users (using the R search page with "if else" as the
search string turns up nearly identical queries from 1998, 2001, and
2002). There's a philosophical issue here, of course, about how much we
need to hold
In the second case, R stops when it has a syntactically complete clause:
if (...) {
...
}
is complete since an else{} clause is not required. R evaluates it, then
moves onto
else { ... }
which is a syntax error (since it "doesn't have an if {} in front of it,
since that has already b
Maybe you should take this up with package maintainers (who may or may
not be reading R-help) ... this sounds like a design/documentation issue
rather than a "bug" per se (although the distinction is not always clear).
To be honest, the underlying R code in CircStats doesn't seem terribly
s
I've always built from source and almost never had to do anything beyond
"tar zxf sources.tgz; ./configure; make; make install" (on various Red Hat
versions). On the other hand ... I've been hoping to move in the
direction of an apt- or rpm-based solution to get a better handle on
tracking
Try returning list(c(Rprime,Cprime,Pprime),NULL) -- the first element in
the returned list should be a numeric *vector* of the derivatives.
Ben
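The required shape of the derivative function, sketched with the modern deSolve package (odesolve, current at the time, used the same convention; the dynamics here are made up for illustration):

```r
## derivative function for lsoda: return a list whose first element
## is a numeric vector of derivatives, one per state variable
library(deSolve)
gradfun <- function(t, y, parms) {
  with(as.list(c(y, parms)), {
    Rprime <- r * R
    Cprime <- -d * C
    Pprime <- 0
    list(c(Rprime, Cprime, Pprime))
  })
}
out <- lsoda(y = c(R = 1, C = 1, P = 1), times = 0:10,
             func = gradfun, parms = c(r = 0.1, d = 0.2))
```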
On Tue, 4 Nov 2003, Ivan Kautter wrote:
> R help list subscribers,
>
> I am a new user of R. I am attempting to use R to explore a set of
> e