Terence Broderick wrote:
> I am just trying to teach myself how to use the mle function in R because it
> is much better than what is provided in MATLAB. I am following tutorial
> material from the internet; however, it gives the following errors. Does
> anybody know what is happening to cause s
I am just trying to teach myself how to use the mle function in R because it is
much better than what is provided in MATLAB. I am following tutorial material
from the internet; however, it gives the following errors. Does anybody know
what is happening to cause such errors, or does anybody know
I need to estimate all parameters (except maybe df). Thank you for
pointing me in that direction; I will have a look.
The aim is to use a fat-tailed distribution to calculate the Value at
Risk instead of the Normal distribution.
Ben
On 11/15/06, Prof Brian Ripley <[EMAIL PROTECTED]> wrote:
> O
On Wed, 15 Nov 2006, Benjamin Dickgiesser wrote:
> Hi
> is there an easy way/ R-function to calculate the numerical maximum
> likelihood estimators for a Student's t-distribution?
> I searched the mailing list archive the last 30mins but didn't find an answer.
See fitdistr() in MASS.
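In case a concrete illustration helps, here is a minimal sketch of that suggestion on simulated data (names and values are illustrative only): fitdistr() can estimate the location and scale of a t with df held fixed, or estimate df as well.
library(MASS)
set.seed(1)
x <- 1 + 2 * rt(500, df = 5)      # simulated fat-tailed "returns"
fitdistr(x, "t", df = 5)          # estimate location m and scale s, df held fixed
fitdistr(x, "t")                  # estimate m, s and df together (can be less stable)
# e.g. a 1% quantile for a VaR-style calculation from the df-fixed fit:
# fit <- fitdistr(x, "t", df = 5); coef(fit)["m"] + coef(fit)["s"] * qt(0.01, df = 5)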
MLE of what
Hi
is there an easy way/ R-function to calculate the numerical maximum
likelihood estimators for a Student's t-distribution?
I searched the mailing list archive the last 30mins but didn't find an answer.
Regards
Ben
nand kumar yahoo.com> writes:
>
> Greetings Forum,
>
> I am new to R and am writing in hopes of getting some help.
> Our MLE results from home-grown software do not match those from R. We
> are using a censored sample and would really appreciate it if you could
> give us any pointers as to
Greetings Forum,
I am new to R and am writing in hopes of getting some help.
Our MLE results from home-grown software do not match those from R. We
are using a censored sample and would really appreciate it if you could give us
any pointers as to which MLE method is used in R... to my
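Since the original model and censoring scheme aren't shown in this snippet, the following is only a generic sketch of how a censored sample is commonly fitted by full maximum likelihood in R, using the survival package (the data below are made up):
library(survival)
time   <- c(5, 8, 12, 3, 9, 15, 7, 11)     # hypothetical observation times
status <- c(1, 1, 0,  1, 0, 1,  1, 0)      # 1 = event observed, 0 = right-censored
fit <- survreg(Surv(time, status) ~ 1, dist = "weibull")   # intercept-only parametric ML fit
summary(fit)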
I don't understand your question. First, I'm not familiar with the
'wls' package; I found no such package by that name via
"www.r-project.org" -> CRAN -> (select a local mirror) -> Packages.
The 'rq' function in the 'quantreg' package looks straightforward to
me. Have you
Hi,
I load my data set and separate it as follows:
presu <- read.table("C:/_Ricardo/Paty/qtdata_f.txt", header=TRUE, sep="\t",
na.strings="NA", dec=".", strip.white=TRUE)
dep<-presu[,3];
exo<-presu[,4:92];
Now, I want to use it with the wls and quantreg packages. How do I change the
data classes
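As a rough sketch of the direction suggested in the reply above (and assuming the explanatory columns of presu are all numeric), the columns can be converted and passed straight to rq() from quantreg:
library(quantreg)
dep <- presu[, 3]
exo <- as.matrix(presu[, 4:92])    # convert to a numeric matrix so it enters the formula as one block
fit <- rq(dep ~ exo, tau = 0.5)    # median (0.5-quantile) regression
coef(fit)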
I regularly optimize functions of over 1000 parameters for posterior
mode computations using a variant of Newton-Raphson. I have some
favorable conditions: the prior is pretty good, the posterior is
smooth, and I can compute the gradient and hessian.
albyn
On Mon, Jun 19, 2006 at 06:53:00PM +01
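For readers who want to see the shape of that approach, here is a bare-bones Newton-Raphson sketch; grad() and hess() are assumed to be supplied by the user for their own negative log-likelihood (hypothetical names):
newton <- function(theta, grad, hess, tol = 1e-8, maxit = 100) {
  for (i in seq_len(maxit)) {
    step  <- solve(hess(theta), grad(theta))   # Newton direction from gradient and Hessian
    theta <- theta - step
    if (max(abs(step)) < tol) break            # stop when the step is negligible
  }
  theta
}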
Seagulls have a very different perspective on ballparks than ants do.
Nonetheless, there is something that can be
said.
There are several variables in addition to the number of
parameters that are important. These include:
* The complexity of the likelihood
* The number of observations in the dat
Applications with lots of parameters also tend to have parameters in
a relatively small number of families, and each of these few families
could be considered to have a distribution. Splines, for example, have
lots of parameters -- sometimes more parameters than observations (as do
n
It really depends on how well-behaved your objective function is, but I've been
able to fit a few models with 10--15 parameters. But I felt like I was
stretching the limit there.
-roger
Federico Calboli wrote:
> Hi All,
>
> I would like to know, is there a *ballpark* figure for how many
> p
Federico Calboli imperial.ac.uk> writes:
>
> Hi All,
>
> I would like to know, is there a *ballpark* figure for how many
> parameters the minimisation routines can cope with?
>
I think I would make a distinction between theoretical
and practical limits. A lot depends on how fast your
obj
Hi All,
I would like to know, is there a *ballpark* figure for how many
parameters the minimisation routines can cope with?
I'm asking because I was asked if I knew.
Cheers,
Federico
--
Federico C. F. Calboli
Department of Epidemiology and Public Health
Imperial College, St. Mary's Campus
No
(0)16/337015
Web: http://www.med.kuleuven.be/biostat/
http://www.student.kuleuven.be/~m0390867/dimitris.htm
- Original Message -
From: "Alexander Nervedi" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, May 02, 2006 9:20 AM
Subject: [R] mle package
> Hi !
>
> There u
Hi !
There used to be a package called mle for maximum likelihood estimation. I
couldn't find it when I tried to get the package. Is this still available?
Perhaps under another package?
I'd appreciate any suggestion on this.
Alex
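As the later messages in this listing suggest, an mle() function ships with R in the stats4 package; a minimal sketch of a call (simulated data, illustrative names):
library(stats4)
x <- rnorm(100, mean = 3, sd = 2)
nll <- function(mu, lsigma) -sum(dnorm(x, mean = mu, sd = exp(lsigma), log = TRUE))
fit <- mle(nll, start = list(mu = 0, lsigma = 0))
coef(fit)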
cs.uct.ac.za> writes:
>
>
> Hi,
>
> I want to compute the MLE for a simple sample of data, say
> 45,26,98,65,25,36,42,62,28,36,15,48,45, of which I obviously have the mean
> and the sd. Is there a way of calling the lognormal's already-differentiated
> formula other than entering the whole
Hi,
I want to compute the MLE for a simple sample of data, say
45,26,98,65,25,36,42,62,28,36,15,48,45, of which I obviously have the mean
and the sd. Is there a way of calling the lognormal's already-differentiated
formula other than entering the whole formula?
Victor
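For a sample like the one above, the lognormal MLEs are available either in closed form or via fitdistr(); a short sketch:
library(MASS)
x <- c(45, 26, 98, 65, 25, 36, 42, 62, 28, 36, 15, 48, 45)
fitdistr(x, "lognormal")
# the closed-form MLEs, for comparison:
c(meanlog = mean(log(x)), sdlog = sqrt(mean((log(x) - mean(log(x)))^2)))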
Arun Kumar Saha gmail.com> writes:
>
> hi all,
>
> suppose I have:
>
> r[i] ~ N(0, h[i])
> h[i] = a + b*r[i-1] + c*h[i-1] for all i=2..n
>
> I want to get estimates of a, b, c by mle.
>
> Can you tell me how to do that?
>
(1) can you convince us this isn't a homework problem?
(a
hi all,
suppose I have:
r[i] ~ N(0, h[i])
h[i] = a + b*r[i-1] + c*h[i-1] for all i=2..n
I want to get estimates of a, b, c by mle.
Can you tell me how to do that?
thanks in advance,
Arun
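Since no reply with code appears in this listing, here is only a rough sketch of one way to write the stated model's negative log-likelihood for optim(), assuming h[i] is the variance of r[i] and taking h[1] = var(r) as an arbitrary starting value:
negloglik <- function(par, r) {
  a <- par["a"]; b <- par["b"]; cc <- par["c"]     # 'cc' so base::c is not masked
  n <- length(r)
  h <- numeric(n)
  h[1] <- var(r)                                   # assumed initialisation of the recursion
  for (i in 2:n) h[i] <- a + b * r[i - 1] + cc * h[i - 1]
  if (any(h[2:n] <= 0)) return(1e10)               # keep the conditional variances positive
  -sum(dnorm(r[2:n], mean = 0, sd = sqrt(h[2:n]), log = TRUE))
}
# fit <- optim(c(a = 0.1, b = 0.05, c = 0.5), negloglik, r = r, hessian = TRUE)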
Dear All,
Can somebody tell me how to do maximum likelihood estimation in R
for a non-linear function?
My function is non-linear; it has four parameters and only one explanatory
variable.
If possible, please point me to a source so that I can write my own code for the above.
Thanks,
GS
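Without the actual function it is hard to be specific, so the following is a generic sketch only: 'f' is a placeholder nonlinear mean with four parameters, and Gaussian errors are assumed purely for illustration.
f <- function(x, p) p[1] + p[2] * exp(-p[3] * x) + p[4] * x   # placeholder form
negloglik <- function(par, x, y) {
  mu    <- f(x, par[1:4])
  sigma <- exp(par[5])             # optimise log(sigma) so the scale stays positive
  -sum(dnorm(y, mean = mu, sd = sigma, log = TRUE))
}
# fit <- optim(c(1, 1, 0.1, 0, 0), negloglik, x = x, y = y, hessian = TRUE)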
Original Message -
From: "Carsten Steinhoff" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, June 29, 2005 5:19 PM
Subject: [R] MLE with optim
> Hello,
>
> I tried to fit a lognormal distribution by using optim. But sadly
> the output
> seems to be incorrect.
Carsten Steinhoff wrote:
> Hello,
>
> I tried to fit a lognormal distribution by using optim. But sadly the output
> seems to be incorrect.
> Who can tell me where the "bug" is?
>
> test = rlnorm(100,5,3)
> logL= function(parm, x,...) -sum(log(dlnorm(x,parm,...)))
> start=
Hello,
I tried to fit a lognormal distribution by using optim. But sadly the output
seems to be incorrect.
Who can tell me where the "bug" is?
test = rlnorm(100,5,3)
logL= function(parm, x,...) -sum(log(dlnorm(x,parm,...)))
start= list(meanlog=5, sdlog=3)
optim(start,log
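For what it is worth, one way to make that snippet run is to split the parameter vector inside the likelihood (the version above passes the whole vector as dlnorm's meanlog argument); a minimal corrected sketch:
test  <- rlnorm(100, meanlog = 5, sdlog = 3)
logL  <- function(parm, x) -sum(dlnorm(x, meanlog = parm[1], sdlog = parm[2], log = TRUE))
start <- c(meanlog = 5, sdlog = 3)
optim(start, logL, x = test)
# or, more directly: MASS::fitdistr(test, "lognormal")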
Just to make sure: do you have any information on events below u,
i.e. do you know how many such events there were, and do you know u for those
events? If yes, then that's called "censoring", not truncation. For
that, the survival package seems pretty good. I found the information
in Venables and Ri
Hello,
I have the following setting:
(1) Data from a source without truncation (x)
(2) Data from another source with left-truncation at threshold u (xu)
I have to fit a model to these two sources, assuming that both
are "drawn" from the same distribution (eg log
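Assuming (as the message seems to suggest) a lognormal model, a rough sketch of the joint likelihood: the untruncated sample x contributes its ordinary density, while each left-truncated observation xu contributes its density divided by P(X > u).
negloglik <- function(par, x, xu, u) {
  mu <- par[1]; sigma <- exp(par[2])
  ll_x  <- sum(dlnorm(x, mu, sigma, log = TRUE))
  ll_xu <- sum(dlnorm(xu, mu, sigma, log = TRUE) -
               plnorm(u, mu, sigma, lower.tail = FALSE, log.p = TRUE))
  -(ll_x + ll_xu)
}
# fit <- optim(c(mean(log(x)), log(sd(log(x)))), negloglik, x = x, xu = xu, u = u)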
[EMAIL PROTECTED] writes:
> Hi R users!
>
> I have a likelihood ratio statistic that depends on a parameter delta and I am
> trying to get confidence intervals for this delta using the fact that the
> likelihood ratio statistic is approx. chi-squared distributed.
>
> For this I need to maximize
Hi R users!
I have a likelihood ratio statistic that depends on a parameter delta and I am
trying to get confidence intervals for this delta using the fact that the
likelihood ratio statistic is approx. chi-squared distributed.
For this I need to maximize the two likelihoods (for the ratio stat
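Since the model itself is not shown, here is only a generic sketch of the usual profile-likelihood recipe: maximise the full likelihood once, then, for each candidate delta, maximise over the remaining parameters and keep delta if twice the log-likelihood drop stays below the chi-squared cutoff. negloglik_full() and negloglik_given_delta() are hypothetical placeholders for the user's own functions.
fit_full <- optim(start_full, negloglik_full)
in_ci <- function(delta) {
  fit_d <- optim(start_nuisance, function(p) negloglik_given_delta(delta, p))
  2 * (fit_d$value - fit_full$value) <= qchisq(0.95, df = 1)
}
# the approximate 95% CI is the set of delta values for which in_ci(delta) is TRUE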
Ben Bolker <[EMAIL PROTECTED]> writes:
>I'm trying to figure out the best way of fitting the same negative
> log-likelihood function to more than one set of data, using mle() from
> the stats4 package.
It's not the same likelihood function if the data differ, since
likelihood functions are f
I'm trying to figure out the best way of fitting the same negative
log-likelihood function to more than one set of data, using mle() from the
stats4 package.
Here's what I would have thought would work:
--
library(stats4)
## simulate values
r = rnorm(1000,mean=2)
## very basic neg.
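One pattern that fits this (a sketch, not necessarily what the eventual replies proposed): build the negative log-likelihood as a closure over each data set, then call mle() once per data set.
library(stats4)
make_nll <- function(dat) {
  function(mu, lsigma) -sum(dnorm(dat, mean = mu, sd = exp(lsigma), log = TRUE))
}
r1 <- rnorm(1000, mean = 2)
r2 <- rnorm(1000, mean = 5)
fit1 <- mle(make_nll(r1), start = list(mu = 0, lsigma = 0))
fit2 <- mle(make_nll(r2), start = list(mu = 0, lsigma = 0))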
1. Don't use "t" as a variable name. It is the name of the
matrix transpose function. In most but not all contexts, R is smart
enough to tell whether you want the system function or the local object.
2. I can't tell from your question what you want. "PLEASE do
read the posting g
Dear Spencer:
My problem got solved by using Matlab. It runs pretty
quickly (less than 5 seconds) and the result is stable
with respect to the initial values. I was amazed.
Here my t and d are as long as 2390; summing the functions
over t and d, the function becomes daunting. But I
would still like to try nlminb (I
Have you considered estimating ln.m1, ln.m2, and ln.b, which makes
the negative log likelihood something like the following:
l.ln<- function(ln.m1,ln.m2,ln.b){
m1 <- exp(ln.m1); m2 <- exp(ln.m2); b <- exp(ln.b)
lglk <- d*( ln.m1 + ln.m2
+ log1p(-exp(-(b+m2)*t)
Hi, everyone
I am trying to estimate 3 parameters for my survival
function. It's very complicated. The negative
loglikelihood function is:
l<- function(m1,m2,b) -sum(d*( log(m1) + log(m2)
+ log(1- exp(-(b + m2)*t)) ) + (m1/b - d)*log(m2 +
b*exp(-(b + m2)*t) ) + m1*t - m1/b*log(b+m2) )
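Putting the two messages together, a sketch of the completed log-parameterised version (the data vectors d and t are assumed to be in the workspace, and renaming t is advisable, as noted above) might look like:
l.ln <- function(ln.m1, ln.m2, ln.b) {
  m1 <- exp(ln.m1); m2 <- exp(ln.m2); b <- exp(ln.b)
  lglk <- d * (ln.m1 + ln.m2 + log1p(-exp(-(b + m2) * t))) +
          (m1 / b - d) * log(m2 + b * exp(-(b + m2) * t)) +
          m1 * t - m1 / b * log(b + m2)
  -sum(lglk)
}
# fit <- optim(c(0, 0, 0), function(p) l.ln(p[1], p[2], p[3]), hessian = TRUE)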
library(MASS)
?fitdistr
You can also use survreg (survival) and glm (with some help from functions
in MASS for the shape).
On Mon, 24 Nov 2003, Dominique Couturier wrote:
> I'm looking for a classical equivalent of the wle.gamma function (library
> wle), which robustly estimates the shape and the s
Dear [R]-list,
I'm looking for a classical equivalent of the wle.gamma function (library
wle), which robustly estimates the shape and scale parameters of gamma
data.
I have a vector of iid gamma rvs:
>data=rgamma(100,shape=10,scale=3)
and a vector of their weights:
>weights=c(rep(.5/70,70),rep(.2
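The classical (unweighted) maximum likelihood fit of the gamma parameters is available from MASS::fitdistr; a short sketch with the simulated data above:
library(MASS)
data <- rgamma(100, shape = 10, scale = 3)
fitdistr(data, "gamma")     # note: reports shape and rate, where rate = 1/scale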