Re: [R] State space model

2011-11-14 Thread Kristian Lind
I found an old thread on R-Sig-Finance with the same problem and a possible
solution https://stat.ethz.ch/pipermail/r-sig-finance/2007q2/001362.html

I've used it with success a few times, but it seems a bit slow.

If anyone has a better way of modelling state-dependent volatility using
one of the available packages please let me know.

Kristian

2011/11/12 Kristian Lind 

> Hi,
>
> I'm trying to estimate the parameters of a state space model of the
> following form
>
> measurement eq:
>
> z_t = a + b*y_t + eps_t
>
> transition eq
>
> y_t+h = (I -exp(-hL))theta + exp(-hL)y_t+ eta_{t+h}.
>
> The problem is that the distribution of the innovations of the transition
> equation depend on the previous value of the state variable.
> To be exact: y_t|y_{t-1} ~N(mu, Q_t) where Q is a diagonal matrix with
> elements equal to
>
> Q_{i,t} =
> sigma_i*(1-exp(-kappa_i*h)/kappa_i*(theta_i/2*(1-exp(kappa_i*h)+exp(-kappa_i*h)y_{t-1,i}
>
> The fkf returns the filtered states variables so y_{t-1,i} is available. I
> just can't figure out how to write my program in such a way  that this
> information is included and updated in the state space model for each
> iteration in optim. Any suggestions on how to solve this are much
> appreciated.
>
> Thank you,
>
> Kristian.
>
> Below my program where the variance matrices are just identity matrices
> and the data is just random numbers. I used the example from the FKF
> package as framework.
>
> library(FKF) #loading Fast Kalman Filter package
> library(Matrix) # matrix exponential package
> library(BB)
> library(alabama)
> x <- matrix(abs(rnorm(400)), nrow=10, ncol=40)
> m <- 2 # m is the number of state variables
> n <- ncol(x) # is the length of the observed sample
> d <- nrow(x) # is the number of observed variables.
> h <- 1/52
> ## creating state space representation of 2-factor CIR model
> CIR2ss <- function(K_1, K_2, sigma_1, sigma_2, lambda_1, lambda_2,
> theta_1, theta_2, delta_0, delta_1, delta_2)  {
>   ## defining auxilary parameters,
> phi_11 <- sqrt((K_1+lambda_1)^2+2*sigma_1^2*delta_1)
>   phi_21 <- sqrt((K_2+lambda_2)^2+2*sigma_2^2*delta_2)
> phi_12 <- K_1+lambda_1+phi_11
> phi_22 <- K_2+lambda_2+phi_21
> phi_13 <- 2*K_1*theta_1/sigma_1^2
> phi_23 <- 2*K_2*theta_2/sigma_2^2
> a <- array(1, c(d,n))
> phi_14 <- numeric(d)
> phi_24 <- numeric(d)
> b1 <- numeric(d)
> b2 <- numeric(d)
> for(t in 1:d){
> phi_14[t] <- 2*phi_11+phi_12*(exp(phi_11*t)-1)
> phi_24[t] <- 2*phi_21+phi_22*(exp(phi_21*t)-1)
> a[t] <- delta_0 -
> phi_13/(t)*log(2*phi_11*exp(phi_12*(t)/2)/phi_14[t])-
> phi_23/(t)*log(2*phi_21*exp(phi_22*(t)/2)/phi_24[t])
> b1[t] <- (delta_1/(t))*2*(exp(phi_11*(t))-1)/phi_14[t]
> b2[t] <- (delta_2/(t))*2*(exp(phi_21*(t))-1)/phi_24[t]
> }
> b <- array(c(b1,b2), c(d,m,n))
> j <- -matrix(c(K_1, 0, 0, K_2), c(2,2))*h
> explh <- as.matrix(expm(j))
>
> Tt <- array(explh, c(m,m,n)) #array giving the factor of the
> transition equation
> Zt <- b #array giving the factor of the measurement equation
> ct <- a #matrix giving the intercept of the measurement equation
> dt <- as.matrix((diag(m)-explh)%*%c(theta_1, theta_2)) #matrix giving
> the intercept of the transition equation
> GGt <- array(diag(d), c(d,d,1)) #array giving the variance of the
> disturbances of the measurement equation
> HHt <- diag(m) #array giving the variance of the innovations of the
> transition equation
> a0 <- c(0.5, 0.5) #vector giving the initial value/estimation of the
> state variable
> P0 <- matrix(1e6, nrow = 2, ncol = 2) # matrix giving the variance of
> a0
> return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt, GGt
> = GGt,
> HHt = HHt))
> }
> ## Objective function passed to optim
> objective <- function(parm, yt) {
>   sp <- CIR2ss(parm["K_1"], parm["K_2"], parm["sigma_1"], parm["sigma_2"],
> parm["lambda_1"], parm["lambda_2"],
>parm["theta_1"], parm["theta_2"], parm["delta_0"],
> parm["delta_1"], parm["delta_2"])
>
>   ans <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt,
>Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt, yt = yt)
>   return(-ans$logLik)
> }
>
> parm <- c(K_1 =  0.6048, K_2 = 0.656, sigma_1 =0.1855, sigma_2 =0.5524,
>lambda_1 =0.55, lambda_2 =0.009, theta_1 = 0.173, theta_2 =
> 0.12,
>delta_0 =0.686, delt

[R] State space model

2011-11-12 Thread Kristian Lind
Hi,

I'm trying to estimate the parameters of a state space model of the
following form

measurement eq:

z_t = a + b*y_t + eps_t

transition eq

y_{t+h} = (I - exp(-hL))theta + exp(-hL)y_t + eta_{t+h}.

The problem is that the distribution of the innovations of the transition
equation depends on the previous value of the state variable.
To be exact: y_t|y_{t-1} ~ N(mu, Q_t), where Q_t is a diagonal matrix with
elements equal to

Q_{i,t} =
sigma_i^2*(1-exp(-kappa_i*h))/kappa_i*(theta_i/2*(1-exp(-kappa_i*h))+exp(-kappa_i*h)*y_{t-1,i})
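
For concreteness, a direct transcription of Q_{i,t} as a small R helper (the
function name is mine, not from any package; it just evaluates the formula
above for one factor over a step of length h):

cir_cond_var <- function(y_prev, kappa, theta, sigma, h) {
  # conditional variance of a CIR factor given its previous value y_prev
  e <- exp(-kappa * h)
  sigma^2 * (1 - e) / kappa * (theta / 2 * (1 - e) + e * y_prev)
}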

fkf() returns the filtered state variables, so y_{t-1,i} is available. I
just can't figure out how to write my program so that this information is
included and updated in the state space model at each iteration of optim.
Any suggestions on how to solve this are much appreciated.

Thank you,

Kristian.

Below is my program, where the variance matrices are just identity matrices
and the data are just random numbers. I used the example from the FKF
package as a framework.

library(FKF) #loading Fast Kalman Filter package
library(Matrix) # matrix exponential package
library(BB)
library(alabama)
x <- matrix(abs(rnorm(400)), nrow=10, ncol=40)
m <- 2 # m is the number of state variables
n <- ncol(x) # is the length of the observed sample
d <- nrow(x) # is the number of observed variables.
h <- 1/52
## creating state space representation of 2-factor CIR model
CIR2ss <- function(K_1, K_2, sigma_1, sigma_2, lambda_1, lambda_2, theta_1,
theta_2, delta_0, delta_1, delta_2)  {
  ## defining auxiliary parameters
phi_11 <- sqrt((K_1+lambda_1)^2+2*sigma_1^2*delta_1)
  phi_21 <- sqrt((K_2+lambda_2)^2+2*sigma_2^2*delta_2)
phi_12 <- K_1+lambda_1+phi_11
phi_22 <- K_2+lambda_2+phi_21
phi_13 <- 2*K_1*theta_1/sigma_1^2
phi_23 <- 2*K_2*theta_2/sigma_2^2
a <- array(1, c(d,n))
phi_14 <- numeric(d)
phi_24 <- numeric(d)
b1 <- numeric(d)
b2 <- numeric(d)
for(t in 1:d){
phi_14[t] <- 2*phi_11+phi_12*(exp(phi_11*t)-1)
phi_24[t] <- 2*phi_21+phi_22*(exp(phi_21*t)-1)
a[t] <- delta_0 - phi_13/(t)*log(2*phi_11*exp(phi_12*(t)/2)/phi_14[t])-
phi_23/(t)*log(2*phi_21*exp(phi_22*(t)/2)/phi_24[t])
b1[t] <- (delta_1/(t))*2*(exp(phi_11*(t))-1)/phi_14[t]
b2[t] <- (delta_2/(t))*2*(exp(phi_21*(t))-1)/phi_24[t]
}
b <- array(c(b1,b2), c(d,m,n))
j <- -matrix(c(K_1, 0, 0, K_2), nrow = 2, ncol = 2)*h
explh <- as.matrix(expm(j))

Tt <- array(explh, c(m,m,n)) # array giving the factor of the transition equation
Zt <- b                      # array giving the factor of the measurement equation
ct <- a                      # matrix giving the intercept of the measurement equation
dt <- as.matrix((diag(m)-explh)%*%c(theta_1, theta_2)) # matrix giving the intercept of the transition equation
GGt <- array(diag(d), c(d,d,1)) # array giving the variance of the disturbances of the measurement equation
HHt <- diag(m)                  # array giving the variance of the innovations of the transition equation
a0 <- c(0.5, 0.5)               # vector giving the initial value/estimation of the state variable
P0 <- matrix(1e6, nrow = 2, ncol = 2) # matrix giving the variance of a0
return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt,
            GGt = GGt, HHt = HHt))
}
## Objective function passed to optim
objective <- function(parm, yt) {
  sp <- CIR2ss(parm["K_1"], parm["K_2"], parm["sigma_1"], parm["sigma_2"],
parm["lambda_1"], parm["lambda_2"],
   parm["theta_1"], parm["theta_2"], parm["delta_0"],
parm["delta_1"], parm["delta_2"])

  ans <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt,
   Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt, yt = yt)
  return(-ans$logLik)
}

parm <- c(K_1 =  0.6048, K_2 = 0.656, sigma_1 =0.1855, sigma_2 =0.5524,
   lambda_1 =0.55, lambda_2 =0.009, theta_1 = 0.173, theta_2 = 0.12,
   delta_0 =0.686, delta_1 =-0.003, delta_2=0.0025)

hin <- function(parm, yt){
  h<- numeric(2)
  h[1] <- 2*parm[1]*parm[7]-parm[3]^2
  h[2] <- 2*parm[2]*parm[8]-parm[4]^2
  h
}
##optimizing objective function
fit <- auglag(par = parm, fn = objective, yt = x, control.outer =
list(method = "CG"), hin = hin)
print(fit)
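
One workaround I have been experimenting with (a sketch only, not a polished
solution): fkf() also accepts a time-varying HHt with dim c(m, m, n), so the
objective can run the filter twice per call: a first pass with the constant
HHt above to obtain the filtered states ans$att, and a second pass with HHt
rebuilt element by element from Q_{i,t} using the lagged filtered states.
cir_cond_var() is the little helper defined earlier in this message; m, n and
h are as above.

objective2 <- function(parm, yt) {
  sp <- CIR2ss(parm["K_1"], parm["K_2"], parm["sigma_1"], parm["sigma_2"],
               parm["lambda_1"], parm["lambda_2"],
               parm["theta_1"], parm["theta_2"], parm["delta_0"],
               parm["delta_1"], parm["delta_2"])
  # pass 1: constant innovation variance, just to get filtered states
  f1 <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt,
            Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt, yt = yt)
  # lagged filtered states y_{t-1}; the first column falls back to a0
  ylag <- cbind(sp$a0, f1$att[, -ncol(f1$att), drop = FALSE])
  # pass 2: time-varying HHt built from Q_{i,t}
  HHt <- array(0, c(m, m, n))
  HHt[1, 1, ] <- cir_cond_var(ylag[1, ], parm["K_1"], parm["theta_1"],
                              parm["sigma_1"], h)
  HHt[2, 2, ] <- cir_cond_var(ylag[2, ], parm["K_2"], parm["theta_2"],
                              parm["sigma_2"], h)
  f2 <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt,
            Zt = sp$Zt, HHt = HHt, GGt = sp$GGt, yt = yt)
  return(-f2$logLik)
}

Whether one such update is enough, or whether the two passes should be
iterated to convergence inside each likelihood evaluation, I don't know.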



[R] Error in eigen(a$hessian) : infinite or missing values in 'x'

2011-11-05 Thread Kristian Lind
Dear R-users,

I'm estimating a two-dimensional state-space model using the FKF package.
The resulting log-likelihood function is maximized using auglag from the
alabama package. The procedure works well for a subset of my data, but if I
try to use the entire data set I get the following error message.

Error in eigen(a$hessian) : infinite or missing values in 'x'

What's even more confusing is that if I estimate the model for a sample,
say data[1:200, ], then there's convergence. If I estimate it for
data[1:300, ] then I get the error message. But if I estimate the model for
data[201:300, ] it once again converges.

Can anyone please enlighten me: where does this error stem from, and what
can I do about it?
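
One way to narrow it down (a sketch, reusing objective() and the starting
values parm from the "State space model" post above; 'data' here stands for
the full sample with rows as dates, so adjust the transpose to however your
yt is oriented):

starts <- seq(1, nrow(data) - 99, by = 50)
ll <- sapply(starts, function(s) objective(parm, yt = t(data[s:(s + 99), ])))
data.frame(start = starts, negLogLik = ll, finite = is.finite(ll))

If some windows give non-finite values, that would point to those stretches
of the data (or to parameter values that degenerate the likelihood there)
rather than to auglag itself.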

Thank you in advance.

Kristian



Re: [R] Linear Regression with Linear Equality Constraint

2011-10-31 Thread Kristian Lind
I believe the package systemfit can help you with that. Haven't tried it
myself, but give it a go.
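
If systemfit doesn't do it, a plain base-R alternative is to reparameterize:
any beta satisfying R %*% beta = q can be written beta = beta0 + Z %*% gamma,
where beta0 is a particular solution and Z spans the null space of R, and the
weighted least squares problem in gamma is then unconstrained. A sketch with
made-up data (constraint beta2 + beta3 = 5):

set.seed(1)
X <- cbind(1, matrix(rnorm(200), 100, 2))       # design with intercept
y <- drop(X %*% c(1, 2, 3) + rnorm(100))
w <- runif(100, 0.5, 2)                         # WLS weights
R <- matrix(c(0, 1, 1), nrow = 1)               # constraint matrix
q <- 5
beta0 <- drop(t(R) %*% solve(R %*% t(R)) %*% q) # minimum-norm particular solution
Z <- svd(R, nv = ncol(R))$v[, -(1:nrow(R)), drop = FALSE]  # null-space basis of R
XZ <- X %*% Z
y0 <- drop(y - X %*% beta0)
fit <- lm(y0 ~ XZ - 1, weights = w)             # unconstrained WLS in gamma
beta_hat <- drop(beta0 + Z %*% coef(fit))
beta_hat                                        # satisfies R %*% beta_hat = q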

Regards,

Kristian

2011/10/31 JW 

> Please advice on the package I should use to run a linear regression model
> (weighted least squared) with linear equality constraint.  I initially
> tried
> "constrOptim" but it turned out that it only supported inequality linear
> constraint.  Thank you very much in advance.
>
> Cheers,
>
> Jon
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Linear-Regression-with-Linear-Equality-Constraint-tp3956588p3956588.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



Re: [R] Imposing Feller condition using project constraint in spg

2011-10-23 Thread Kristian Lind
Hi Ravi,

Thank you for your reply and please excuse my late response.

Plugging w2’ = k/w1’ from (A) into (B) yields

(C) f(w1') = (w1-w1’)^2 + (w2-k/w1’)^2

The partial derivative wrt w1' is

(D) df(w1')/dw1' = -2(w1-w1') + 2(w2-k/w1')*k/(w1')^2

In order for this to be a minimum, the f.o.c. df(w1')/dw1' = 0 and the
s.o.c. d^2f(w1')/d(w1')^2 > 0 must be satisfied.

I follow you on substituting the constraint into the distance from the
iterate to (w1', w2') and then minimizing this distance, but I'm not quite
sure how to turn this into a projection function. Any suggestions?
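
Here is my attempt at turning that into a projection function for spg() (a
sketch: instead of solving the f.o.c. analytically I minimize the squared
distance numerically with optimize(); w[1] and w[4] are the two parameters
entering the constraint, as in my original post, and the value of eps and the
search interval are placeholders):

proj_feller <- function(par, k = (1 + 1e-6) / 2) {
  w1 <- par[1]
  w2 <- par[4]
  # squared distance from (w1, w2) to a point on the curve w1' * w2' = k
  dist2 <- function(w1p) (w1 - w1p)^2 + (w2 - k / w1p)^2
  w1p <- optimize(dist2, interval = c(1e-8, max(10 * abs(w1), 1)))$minimum
  par[1] <- w1p
  par[4] <- k / w1p
  par
}

# fit <- spg(par = startval, fn = loglik, project = proj_feller,
#            projectArgs = list(k = (1 + 1e-6) / 2))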

Regards,

Kristian

2011/9/8 Ravi Varadhan 

>  Hi Kristian,
>
> The idea behind projection is that you take an iterate that violates the
> constraints and project it onto the nearest point that satisfies the
> constraints.
>
> Suppose you have an iterate (w1, w2) that does not satisfy the constraint
> w1 * w2 = (1 + eps)/2.  Our goal is to find a (w1', w2'), given (w1, w2),
> such that
>
> (A) w1' * w2' = (1+eps)/2 = k
>
> (B) (w1-w1')^2 + (w2-w2')^2 is minimum.
>
> This is quite easy to solve.  We know (w1, w2).  You plug in w2' = k/w1'
> from (A) into (B) and minimize the function of w1'.  This is a simple
> calculus exercise, and I will leave this as a homework problem for you to
> solve!
>
> Best,
>
> Ravi.
>
> ---
>
> Ravi Varadhan, Ph.D.
> Assistant Professor,
> Division of Geriatric Medicine and Gerontology, School of Medicine, Johns
> Hopkins University
>
> Ph. (410) 502-2619
> email: rvarad...@jhmi.edu
>



Re: [R] write.csv naming file after function argument

2011-10-16 Thread Kristian Lind
Thank you for your help. It works now.

2011/10/13 Jean V Adams 

>
> Kristian Lind wrote on 10/13/2011 04:52:16 AM:
>
> >
> > Dear R-users,
> >
> > I'm writing a program that constructs a dataset. I wish to save the
> dataset
> > to a file.
> >
> > Here's a very simple example of what I'm trying to do
> >
> > function(x=peter){
> > y <- x/2
> > write.csv(y, file = "...\x")
> > }
> >
> > The problem is that I want to name the dataset as whatever the name of
> the
> > input is. In this case peter.
> > How do I do this?
> >
> > Thank you in advance.
> >
> > Kristian
>
>
> I think you're looking for something like this
>
> foo <- function(x){
> y <- x/2
> file.name <- paste("...\\", deparse(substitute(x)), ".csv",
> sep="")
> # I include the print() functions just so you can see what happened
> print(y)
> print(file.name)
> write.csv(y, file=file.name)
> }
>
> peter <- 12
> foo(peter)
>
>
> Jean



[R] write.csv naming file after function argument

2011-10-13 Thread Kristian Lind
Dear R-users,

I'm writing a program that constructs a dataset. I wish to save the dataset
to a file.

Here's a very simple example of what I'm trying to do

function(x=peter){
y <- x/2
write.csv(y, file = "...\x")
}

The problem is that I want to name the dataset as whatever the name of the
input is. In this case peter.
How do I do this?

Thank you in advance.

Kristian



Re: [R] Error in as.vector(data) optim() / fkf()

2011-09-23 Thread Kristian Lind
Problem solved.

It had something to do with calling expm(-array(c(K_1, 0, 0, K_2),
c(2,2))*h).
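
For the archive, the gist as far as I can tell: Matrix::expm() returns an S4
Matrix object, and feeding that straight into array() (to build Tt) is what
triggers "no method for coercing this S4 class to a vector". Coercing back to
a base matrix first, as in my later posts, avoids it. A minimal sketch
(parameter values as in the code below):

library(Matrix)
K_1 <- 0.1156; K_2 <- 0.17; h <- 1/52
j <- -matrix(c(K_1, 0, 0, K_2), nrow = 2, ncol = 2) * h
explh <- as.matrix(expm(j))       # plain "matrix" again, not an S4 object
Tt <- array(explh, c(2, 2, 100))  # array()/as.vector() now work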

2011/9/22 Kristian Lind 

> Dear R users,
>
> When running the program below I receive the following error message:
> fit <- optim(parm, objective, yt = tyield, hessian = TRUE)
> Error in as.vector(data) :
>   no method for coercing this S4 class to a vector
>
> I can't figure out what the problem is exactly. I imagine that it has
> something to do with "tyield" being a matrix.  Any help on explaining what's
> going on and how to solve this is much appreciated.
>
> Thank you,
>
> Kristian
>
> library(FKF) #loading Fast Kalman Filter package
> library(Matrix) # matrix exponential package
>
> K_1 = 0.1156
> K_2 = 0.17
> sigma_1 = 0.1896
> sigma_2 = 0.2156
> lambda_1 = 0
> lambda_2 = -0.5316
> theta_1 = 0.1513
> theta_2 = 0.2055
>
> #test data
> tyield <- matrix(data = rnorm(200), nrow =2, ncol =100)
>
> # defining dimensions
> m <- 2 # m is the number of state variables
> n <- 100 # is the length of the observed sample
> d <- 2 # is the number of observed variables.
> theta <- c(theta_1, theta_2)
> h <- t <- 1/52 # time between observations
>
> ## creating state space representation of 2-factor CIR model follwing
> Driessen and Geyer et al.
> CIR2ss <- function(K_1, K_2, sigma_1, sigma_2, lambda_1, lambda_2, theta_1,
> theta_2){
>   ## defining auxilary parameters
> phi_11 <- sqrt((K_1+lambda_1)^2+2*sigma_1^2)
>   phi_21 <- sqrt((K_1+lambda_2)^2+2*sigma_2^2)
> phi_12 <- K_1+lambda_1+phi_11
> phi_22 <- K_2+lambda_2+phi_12
> phi_13 <- -2*K_1*theta_1/sigma_1^2
> phi_23 <- -2*K_2*theta_2/sigma_2^2
> phi_14 <- 2*phi_11+phi_21*(exp(phi_11*t)-1)
> phi_24 <- 2*phi_12+phi_22*(exp(phi_12*t)-1)
> phi <- array(c(phi_11, phi_21, phi_12, phi_22, phi_13, phi_23, phi_14,
> phi_24), c(4,2))
> a <- array(0, c(d,n))
> for(t in n:1){
>   a[,n-(t+1)] <-
> -phi_13/(n-(t+1))*log(2*phi_11*exp(phi_12*(n-(t+1))/2)/phi_14)-phi_23/(n-(t+1))*log(2*phi_21*exp(phi_22*(n-(t+1))/2)/phi_24)
> }
> b <- array(c(1,0,0,1,0), c(d,m,n))
> j <- -array(c(K_1, 0, 0, K_2), c(2,2))*h
> explh <- expm(j)
> Tt <- array(explh, c(m,m,n)) #array giving the factor of the transition
> equation
> Zt <- b #array giving the factor of the measurement equation
> ct <- a #matrix giving the intercept of the measurement equation
> dt <- (diag(m)-expm(-array(c(K_1, 0, 0, K_2), c(2,2))*h))*theta #matrix
> giving the intercept of the transition equation
> GGt <- array(c(1,0,0,1), c(d,d,n)) #array giving the variance of the
> disturbances of the measurement equation
> HHt <- array(c(1,0,0,1), c(m,m,n)) #array giving the variance of the
> innovations of the transition equation
> a0 <- c(0, 0) #vector giving the initial value/estimation of the state
> variable
> P0 <- matrix(1e6, nrow = 2, ncol = 2) # matrix giving the variance of
> a0
> return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt, GGt =
> GGt,
> HHt = HHt))
> }
>
> ## Objective function passed to optim
> objective <- function(parm, yt) {
>   sp <- CIR2ss(parm["K_1"], parm["K_2"], parm["sigma_1"], parm["sigma_2"],
> parm["lambda_1"], parm["lambda_2"],
>parm["theta_1"], parm["theta_2"])
>   ans <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt,
>Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt, yt = yt)
>   return(-ans$loglik)
> }
>
> parm <- c(K_1 = 0.1156, K_2 = 0.17, sigma_1 = 0.1896, sigma_2 = 0.2156,
>   lambda_1 = 0, lambda_2 = -0.5316, theta_1 = 0.1513, theta_2 =
> 0.2055) # initial parameters
>
> ##optimizing objective function
>  fit <- optim(parm, objective, yt = tyield, hessian = TRUE)
>  print(fit)
>



[R] Error in as.vector(data) optim() / fkf()

2011-09-22 Thread Kristian Lind
Dear R users,

When running the program below I receive the following error message:
fit <- optim(parm, objective, yt = tyield, hessian = TRUE)
Error in as.vector(data) :
  no method for coercing this S4 class to a vector

I can't figure out what the problem is exactly. I imagine that it has
something to do with "tyield" being a matrix.  Any help on explaining what's
going on and how to solve this is much appreciated.

Thank you,

Kristian

library(FKF) #loading Fast Kalman Filter package
library(Matrix) # matrix exponential package

K_1 = 0.1156
K_2 = 0.17
sigma_1 = 0.1896
sigma_2 = 0.2156
lambda_1 = 0
lambda_2 = -0.5316
theta_1 = 0.1513
theta_2 = 0.2055

#test data
tyield <- matrix(data = rnorm(200), nrow =2, ncol =100)

# defining dimensions
m <- 2 # m is the number of state variables
n <- 100 # is the length of the observed sample
d <- 2 # is the number of observed variables.
theta <- c(theta_1, theta_2)
h <- t <- 1/52 # time between observations

## creating state space representation of 2-factor CIR model, following Driessen and Geyer et al.
CIR2ss <- function(K_1, K_2, sigma_1, sigma_2, lambda_1, lambda_2, theta_1,
theta_2){
  ## defining auxilary parameters
phi_11 <- sqrt((K_1+lambda_1)^2+2*sigma_1^2)
  phi_21 <- sqrt((K_1+lambda_2)^2+2*sigma_2^2)
phi_12 <- K_1+lambda_1+phi_11
phi_22 <- K_2+lambda_2+phi_12
phi_13 <- -2*K_1*theta_1/sigma_1^2
phi_23 <- -2*K_2*theta_2/sigma_2^2
phi_14 <- 2*phi_11+phi_21*(exp(phi_11*t)-1)
phi_24 <- 2*phi_12+phi_22*(exp(phi_12*t)-1)
phi <- array(c(phi_11, phi_21, phi_12, phi_22, phi_13, phi_23, phi_14,
phi_24), c(4,2))
a <- array(0, c(d,n))
for(t in n:1){
  a[,n-(t+1)] <-
-phi_13/(n-(t+1))*log(2*phi_11*exp(phi_12*(n-(t+1))/2)/phi_14)-phi_23/(n-(t+1))*log(2*phi_21*exp(phi_22*(n-(t+1))/2)/phi_24)
}
b <- array(c(1,0,0,1,0), c(d,m,n))
j <- -array(c(K_1, 0, 0, K_2), c(2,2))*h
explh <- expm(j)
Tt <- array(explh, c(m,m,n)) # array giving the factor of the transition equation
Zt <- b                      # array giving the factor of the measurement equation
ct <- a                      # matrix giving the intercept of the measurement equation
dt <- (diag(m)-expm(-array(c(K_1, 0, 0, K_2), c(2,2))*h))*theta # matrix giving the intercept of the transition equation
GGt <- array(c(1,0,0,1), c(d,d,n)) # array giving the variance of the disturbances of the measurement equation
HHt <- array(c(1,0,0,1), c(m,m,n)) # array giving the variance of the innovations of the transition equation
a0 <- c(0, 0)                      # vector giving the initial value/estimation of the state variable
P0 <- matrix(1e6, nrow = 2, ncol = 2) # matrix giving the variance of a0
return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt,
            GGt = GGt, HHt = HHt))
}

## Objective function passed to optim
objective <- function(parm, yt) {
  sp <- CIR2ss(parm["K_1"], parm["K_2"], parm["sigma_1"], parm["sigma_2"],
parm["lambda_1"], parm["lambda_2"],
   parm["theta_1"], parm["theta_2"])
  ans <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt,
   Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt, yt = yt)
  return(-ans$loglik)
}

parm <- c(K_1 = 0.1156, K_2 = 0.17, sigma_1 = 0.1896, sigma_2 = 0.2156,
  lambda_1 = 0, lambda_2 = -0.5316, theta_1 = 0.1513, theta_2 =
0.2055) # initial parameters

##optimizing objective function
 fit <- optim(parm, objective, yt = tyield, hessian = TRUE)
 print(fit)



Re: [R] Selecting from matrix and then stacking in array.

2011-09-20 Thread Kristian Lind
Thank you. It works great.

Kristian

2011/9/16 R. Michael Weylandt 

> I think changing nrow() to NROW() will get around that error. However,
> this, as you can see, this goes a little astray at the end. Perhaps
> something like this will work for you:
>
> z.ext <- rbind(z,array(0,c(9,3)))
> z.out <- array(0, c(3,3,9))
>
> for(i in 1:9){
> z.out[,,i] <- cbind(z.ext[c(i,i+3,i+6),])
> }
> print(z.out)
>
> Hope this helps,
>
> Michael Weylandt
>
>
> On Wed, Sep 14, 2011 at 3:42 PM, Kristian Lind <
> kristian.langgaard.l...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm solve a problem where I want to select every 3rd row from a matrix. I
>> do
>> this for all rows in a loop. This gives me "i" matrices with different
>> number of rows
>>
>> I want to stack these matrices in an array and fill the blanks with zero.
>>
>> Does anyone have any suggestions on how to go about this?
>>
>> So i want to go from
>>
>>       [,1] [,2] [,3]
>>  [1,]    1    1    1
>>  [2,]    2    2    2
>>  [3,]    3    3    3
>>  [4,]    4    4    4
>>  [5,]    5    5    5
>>  [6,]    6    6    6
>>  [7,]    7    7    7
>>  [8,]    8    8    8
>>  [9,]    9    9    9
>>
>> to
>>
>> , , 1
>>
>>      [,1] [,2] [,3]
>> [1,]    1    1    1
>> [2,]    4    4    4
>> [3,]    7    7    7
>>
>> , , 2
>>
>>      [,1] [,2] [,3]
>> [1,]    2    2    2
>> [2,]    5    5    5
>> [3,]    8    8    8
>>
>> , , 3
>>
>>      [,1] [,2] [,3]
>> [1,]    3    3    3
>> [2,]    6    6    6
>> [3,]    9    9    9
>>
>> , , 4
>>
>>      [,1] [,2] [,3]
>> [1,]    4    4    4
>> [2,]    7    7    7
>> [3,]    0    0    0
>>
>> ...
>>
>> , , 9
>>
>>      [,1] [,2] [,3]
>> [1,]    9    9    9
>> [2,]    0    0    0
>> [3,]    0    0    0
>>
>>
>> Here's my go at it
>>
>> z <- array(c(1:9), c(9,3))
>> z1 <- array(0, c(3,3,9))
>>for(i in 1:9){
>>if(nrow(z[seq(i,9,3),])  == 2)
>>z1[,,i] <- rbind(z[seq(i,9,3),],c(0,0,0))
>>else
>>z1[,,i] <- z[seq(i,9,3),]
>>}
>> print(z1)
>>
>> I get the following error: Error in if (nrow(z[seq(i, 9, 3), ]) == 2) z1[,
>> ,
>> i] <- rbind(z[seq(i,  :
>>  argument is of length zero
>>
>> This is because nrow(z[seq(i, 9, 3) becomes NULL.
>> Furthermore, this doesn't take care of the case where nrow(z[seq(i,9,3),])
>> ==1
>>
>> Sorry for the long email and thank you in advance for reading it ;)
>>
>> Kristian
>>
>>[[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>



[R] Selecting from matrix and then stacking in array.

2011-09-14 Thread Kristian Lind
Hi,

I'm solving a problem where I want to select every 3rd row from a matrix. I
do this for all rows in a loop. This gives me "i" matrices with different
numbers of rows.

I want to stack these matrices in an array and fill the blanks with zero.

Does anyone have any suggestions on how to go about this?

So I want to go from

      [,1] [,2] [,3]
 [1,]    1    1    1
 [2,]    2    2    2
 [3,]    3    3    3
 [4,]    4    4    4
 [5,]    5    5    5
 [6,]    6    6    6
 [7,]    7    7    7
 [8,]    8    8    8
 [9,]    9    9    9

to

, , 1

     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    4    4    4
[3,]    7    7    7

, , 2

     [,1] [,2] [,3]
[1,]    2    2    2
[2,]    5    5    5
[3,]    8    8    8

, , 3

     [,1] [,2] [,3]
[1,]    3    3    3
[2,]    6    6    6
[3,]    9    9    9

, , 4

     [,1] [,2] [,3]
[1,]    4    4    4
[2,]    7    7    7
[3,]    0    0    0

...

, , 9

     [,1] [,2] [,3]
[1,]    9    9    9
[2,]    0    0    0
[3,]    0    0    0


Here's my go at it

z <- array(c(1:9), c(9,3))
z1 <- array(0, c(3,3,9))
for(i in 1:9){
if(nrow(z[seq(i,9,3),])  == 2)
z1[,,i] <- rbind(z[seq(i,9,3),],c(0,0,0))
else
z1[,,i] <- z[seq(i,9,3),]
}
print(z1)

I get the following error: Error in if (nrow(z[seq(i, 9, 3), ]) == 2) z1[, ,
i] <- rbind(z[seq(i,  :
  argument is of length zero

This is because nrow(z[seq(i, 9, 3), ]) becomes NULL.
Furthermore, this doesn't take care of the case where
nrow(z[seq(i, 9, 3), ]) == 1.

Sorry for the long email and thank you in advance for reading it ;)

Kristian



[R] Imposing Feller condition using project constraint in spg

2011-09-07 Thread Kristian Lind
Dear R-users,

I'm running a maximization problem in which I want to impose a condition on
the relationship between 2 parameters.
The condition is that w[4] = (1+eps)/(2*w[1]), or equivalently w[4]*w[1] =
(1+eps)/2 , where eps is some small positive constant.

I've been trying to formulate a function that takes care of this, but I
can't really make it work, so any suggestions would be much appreciated.

Thank you.

Kristian Lind



Re: [R] Using capture.output within a function

2011-09-02 Thread Kristian Lind
Yes, that's true. It works as it's supposed to now.

Thank you.
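
For anyone finding this thread later, the point boils down to this
(a self-contained toy with a stand-in objective; only the last expression of
the function body is what the optimizer receives):

library(BB)
loglik <- function(w) {
  f <- sum((w - 1)^2)   # stand-in for the real likelihood value
  # write the per-iteration output first ...
  capture.output(print(w), file = "spgoutput.txt", append = TRUE)
  f                     # ... and make the scalar objective the last expression
}
fit <- spg(par = c(0, 0), fn = loglik)

With capture.output() as the last expression, its result (not the likelihood)
is returned to spg(), hence the "invalid argument to unary operator" error.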

Kristian Lind

2011/9/2 Duncan Murdoch 

> On 11-09-02 5:24 AM, Kristian Lind wrote:
>
>> Dear R-users
>>
>> I'm running a maximum likelihood procedure using the spg package. I'd like
>> to save some output produced in each iteration to a file, but if I put the
>> capture.output() within the function I get the following message; Error in
>> spg(par = startval, fn = loglik, gr = NULL, method = 3, lower = lo,  :
>>   Failure in initial function evaluation!Error in -fn(par, ...) : invalid
>> argument to unary operator
>>
>
> It looks as though you put capture.output() last in your function, so the
> result of the function is the result of the capture.output call, not the
> function value.
>
> Duncan Murdoch
>
>
>> I have considered putting the capture.output() after the function, but
>> there
>> are some issues with R stalling on me so I'd like that the output is saved
>> for each iteration and not only at completion.
>>
>> Any suggestions on how to get this done would be much appreciated.
>>
>> Kristian Lind
>>
>> *Below an example of what I'm trying to do...*
>>
>>
>> loglik<- function(w){
>>
>> state<- c(b_1 = 0,
>> b_2 = 0,
>> a = 0)
>> #declaring ODEs
>> Kristian<-function(t, state, w){
>> with(as.list(c(state, w)),
>> {
>> db_1 = -((w[1]+w[8])*b_1+(w[2]+w[6]*w[8]
>> +w[7]*w[9])*b_2+0.5*(b_1)^2+w[6]*b_1*b_2+0.5*
>> ((w[6])^2+(w[7])^2)*(b_2)^2)
>> db_2 = -w[3]*b_2+1
>> da = w[1]*w[4]*b_1+(w[2]*w[4]+w[3]*w[5])*b_2
>> list(c(db_1, db_2, da))
>> })
>> }
>>
>> # time making a sequence from t to T evaluated at each delta seq(t, T,
>> by = delta)
>> times<- seq(0, 10, by = 0.5)
>>
>> outmat<- ode(y = state, times = times, func = Kristian, parms = w)
>> print(w)
>> print(outmat)
>> .
>> .
>> .
>> f<-rep(NA, 1)
>> f[1] <- 1/(T-1)*sum(log(pJ$p)-log(pJ$J))
>> f
>> capture.output(outmat, file = "spgoutput.txt", append = TRUE)
>> }
>> fit<- spg(fn =loglik, ...)
>>
>>[[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>



[R] Using capture.output within a function

2011-09-02 Thread Kristian Lind
Dear R-users

I'm running a maximum likelihood procedure using spg from the BB package. I'd
like to save some output produced in each iteration to a file, but if I put
capture.output() within the function I get the following message:

Error in spg(par = startval, fn = loglik, gr = NULL, method = 3, lower = lo,  :
  Failure in initial function evaluation!Error in -fn(par, ...) : invalid
argument to unary operator

I have considered putting the capture.output() after the function, but there
are some issues with R stalling on me, so I'd like the output to be saved at
each iteration and not only at completion.

Any suggestions on how to get this done would be much appreciated.

Kristian Lind

*Below an example of what I'm trying to do...*


loglik <- function(w){

state <- c(b_1 = 0,
b_2 = 0,
a = 0)
#declaring ODEs
Kristian <-function(t, state, w){
with(as.list(c(state, w)),
{
db_1 = -((w[1]+w[8])*b_1+(w[2]+w[6]*w[8]
+w[7]*w[9])*b_2+0.5*(b_1)^2+w[6]*b_1*b_2+0.5*
((w[6])^2+(w[7])^2)*(b_2)^2)
db_2 = -w[3]*b_2+1
da = w[1]*w[4]*b_1+(w[2]*w[4]+w[3]*w[5])*b_2
list(c(db_1, db_2, da))
})
}

# time making a sequence from t to T evaluated at each delta seq(t, T,
by = delta)
times <- seq(0, 10, by = 0.5)

outmat <- ode(y = state, times = times, func = Kristian, parms = w)
print(w)
print(outmat)
.
.
.
f<-rep(NA, 1)
f[1] <- 1/(T-1)*sum(log(pJ$p)-log(pJ$J))
f
capture.output(outmat, file = "spgoutput.txt", append = TRUE)
}
fit <- spg(fn =loglik, ...)



[R] DLSODA error

2011-04-28 Thread Kristian Lind
Dear R-users,

I'm running an MLE procedure where some ODEs are solved for each iteration
in the maximization process. I use mle2 for the Maximum likelihood and
deSolve for the ODEs.
The problem is that somewhere along the way the ODE solver crashes and I get
the following error message:

DLSODA-  Warning..Internal T (=R1) and H (=R2) are
  such that in the machine, T + H = T on the next step
  (H = step size). Solver will continue anyway.
  In above,  R1 =  0.2630651367072D+01   R2 =  0.2055665829864D-15

Error in optim(par = par, fn = U, method = "Nelder-Mead", control =
list(maxit = 100),  :
  function cannot be evaluated at initial parameters
...
Error in BBsolve(par = par, fn = Bo, s = s, outmat = outmat, method = c(1,
:
  object 'ans.best' not found
In addition: Warning messages:
1: In lsoda(y, times, func, parms, ...) :
  an excessive amount of work (> maxsteps ) was done, but integration was
not successful - increase maxsteps
2: In lsoda(y, times, func, parms, ...) :
  Returning early. Results are accurate, as far as they go

For each iteration of the MLE my code gives me the parameters. If I use the
parameters from the last iteration before the above error message appears, I
would expect it to crash immediately, but this isn't the case. Any
suggestions as to why this is and how to keep the ODE solver from crashing
would be much appreciated.
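
(A side note on the second warning: lsoda's work limit and error tolerances
can be passed straight through ode(); a sketch with placeholder values, using
the objects defined in the code below:

outmat <- ode(y = state, times = times, func = Kristian, parms = parameters,
              method = "lsoda", rtol = 1e-8, atol = 1e-10, maxsteps = 50000)

That addresses the "increase maxsteps" message, but not necessarily the
underlying stiffness that makes the step size collapse.)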

Thank you for your time.

Kristian

the code is below.

#Loading the needed packages
library(deSolve)
library(stats4)
library(BB)
library(bbmle)
c2 <- c(3.02, 2.88, 2.91, 2.83, 2.85, 2.88, 2.91, 2.90, 2.94, 3.09, 3.17,
3.14, 3.37, 3.40, 3.50, 3.58, 3.55, 3.70, 3.90, 3.77)
c10 <- c(4.39, 4.22, 4.27, 4.21, 4.25, 4.34, 4.47, 4.40, 4.46, 4.64, 4.73,
4.60, 4.87, 4.96, 5.09, 5.10, 5.08, 5.26, 5.54, 5.37)
T = 20
#defining -log-likelihood function
minusloglik <- function(K_vv, K_rv, K_rr, theta_v, theta_r, Sigma_rv,
Sigma_rr, lambda_v, lambda_r){
##solving ODEs as functions of "parameters"
parameters <- c(K_vv,
K_rv,
K_rr,
theta_v,
theta_r,
Sigma_rv,
Sigma_rr,
lambda_v,
lambda_r)

state <- c(b_1 = 0,
b_2 = 0,
a = 0)
#declaring ODEs
Kristian <- function(t, state, parameters){
with(as.list(c(state, parameters)),
{
db_1 = -((K_vv+lambda_v)*b_1+(K_rv+Sigma_rv*lambda_v
+Sigma_rr*lambda_r)*b_2+0.5*(b_1)^2+Sigma_rv*b_1*b_2+0.5*
((Sigma_rv)^2+(Sigma_rr)^2)*(b_2)^2 )
db_2 = -K_rr*b_2+1
da = K_vv*theta_v*b_1+(K_rv*theta_v+K_rr*theta_r)*b_2
list(c(db_1, db_2, da))
})
}

# time making a sequence from t to T evaluated at each delta seq(t, T,
by = delta)
times <- seq(0, 10, by = 0.5)

outmat <- ode(y = state, times = times, func = Kristian, parms =
parameters)
#solving to equations in to unknowns.
Bo <- function(x, s, outmat)
{
f <- rep(NA, length(x))
z <- - outmat[,2:4] %*% c(x[1],x[2],1)
f[1] <- (1-exp(z[5]))/sum(exp(z[2:5])) - s[1]
f[2] <- (1-exp(z[21]))/sum(exp(z[2:21])) - s[2]
f
}
#extracting statevariables
v<- numeric(T)
r<- numeric(T)

for(i in 1:T)
{
s <- c(c2[i],c10[i] )
par <- c(50, 5)
BBsolveans <- BBsolve(par=par, fn=Bo, s=s, outmat=outmat, method
=c(1,2,3), control =list(maxit =1000), quiet =TRUE)
v[i] <- BBsolveans$par[1]
r[i] <- BBsolveans$par[2]
}
r <- r[!v < 0]
v <- v[!v < 0]

#calculating transition density as a function of parameters
transdens <- function(v1, v2, r1, r2, t1, t2) # v1 is the volatility at time i, v2 at time i+1
{
delta <- 7/365
alpha_r <- 0
c <- 2*K_vv*(1-exp(-K_vv*delta))^(-1)
q <- 2*K_vv*theta_v-1

m <-
-0.5*(v2*(K_rv*v2-2*K_rv*theta_v+K_rr*theta_r)-v1*(K_rv*v1-2*K_rv*theta_v+K_rr*theta_r))+exp(-K_rr)*r1
var <-
abs(0.5*((v2*(2*alpha_r+(Sigma_rr+Sigma_rv)^2*v2)-(v1*(2*alpha_r+(Sigma_rr+Sigma_rv)^2*v1)

f <- rep(NA, 1)
f[1]<-pchisq(2*c*v2, 2*q+2, ncp=2*c*v1*exp(-K_vv*delta), lower.tail =
TRUE, log.p = FALSE)*pnorm(r2,mean = m, sd= sqrt(var))
f
}

p<- numeric(length(v)-1)
for(i in 1:(length(v)-1)){
p[i] <- transdens(v1=v[i], v2=v[i+1], r1=r[i], r2=r[i+1], t1=i,
t2=i+1)
}

# Calculating Jacobian determinant.
gfunc <- function(x)
{
z <- - outmat[,2:4] %*% c(x[1],x[2],1)
x[1] <- x1
x[2] <- x2
c((1-exp(z[5]))/sum(exp(z[2:5])), (1-exp(z[21]))/sum(exp(z[2:21])))
}

J<- numeric(length(v)-1)

for(i in 1:(length(v)-1)){
x1 <- v[i]
x2 <- r[i]
x <- cbind(x1, x2)
Jac <- jacobian(func=gfunc, x=x)
J[i] <- abs(det(Jac))
}
J <- J[!p == 0]
p <- p[!p == 0]
#removing NAs
J <- J[!is.na(J)]
p <- p[!is.na(p)]

print(parameters)
f<-rep(NA, 1)
f[1] <- -1/(T-1)*s

Re: [R] MLE where loglikelihood function is a function of numerical solutions

2011-04-14 Thread Kristian Lind
HI Berend,

Thank you for your reply.

2011/4/13 Berend Hasselman 

> Questions:
>
> 1. why are you defining Bo within a loop?
> 2. Why are you doing library(nleqslv) within the loop?
>
Yes, I see what you mean. There's no reason for defining that within the
loop.

> Doing both those statements outside the loop once is more efficient.
>
> In your transdens function you are not using the function argument
> parameters, why?
> Shouldn't there be a with(parameters) since otherwise where is for example
> K_vv supposed to come from?
>
> I can't believe that the code "worked": in the call of nleqslv you have ...
> control=list(matxit=10) ...
> It should be maxit and nleqslv will issue an error message and stop (at
> least in the latest versions).
> And why 10? If that is required, something is not ok with starting
> values and/or functions.
>
I get the error message 6 that the Jacobian is singular, which I think
stems from the parameter values. The parameter values are from a similar
study and serve only as a starting point. The 10 was just me playing
around.

> Finally the likelihood function at the end of your code
>
> #Maximum likelihood estimation using mle package
> library(stats4)
> #defining loglikelighood function
> #T <- length(v)
> #minuslogLik <- function(x,x2)
> #{f <- rep(NA, length(x))
> #for(i in 1:T)
> #{
> #f[1] <- -1/T*sum(log(transdens(parameters = parameters, x =
> c(v[i],v[i+1])))-log(Jac(outmat=outmat, x2=c(v[i],r[i])))
> #}
> #f
> # }
>
> How do the arguments of your function x and x2 influence the calculations
> in
> the likelihood function?
>

What I was thinking was that the x and x2 would be the input for the
transdens() and Jac() functions, but I guess that is already taken care of
when defining them.

> As written now with argument x and x2 not being used in the body of the
> function, there is nothing to optimize.
>

That is correct, and that is the problem. The likelihood needs to be stated
as a function of the parameters, here the vector "parameters", because in the
maximum likelihood estimation we want to maximize the value of the likelihood
function by changing the parameter values. The likelihood function depends on
the parameters only through the nested functions: for example, Kristian() is
a function of "parameters", which feeds into Bo(), transdens() and Jac(). Do
you have any suggestions how to get around this issue?

> Shouldn't f[1] be f[i] because otherwise the question is why are looping
> for( i in 1:T)?
> But then returning f as a vector seems wrong here. Shouldn't a likelihood
> function return a scalar?
>
The likelihood function should return a scalar. I think the fix could be to
make the function calculate the function value at each "i" and then make it
return the mean of all the f[i]s.
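
Concretely, something along these lines is what I have in mind (a sketch
only: extract_states() is a hypothetical wrapper around the BBsolve()/Bo()
step that returns the state series v and r, transdens() and Jac() are as in
the commented code above, and everything is recomputed for each parameter
vector psi so that a single scalar comes out):

minuslogLik <- function(psi) {
  outmat <- ode(y = state, times = times, func = Kristian, parms = psi)
  st <- extract_states(outmat)   # hypothetical: solves the Bo() system for v, r
  v <- st$v
  r <- st$r
  ll <- numeric(T - 1)
  for (i in 1:(T - 1)) {
    ll[i] <- log(transdens(parameters = psi, x = c(v[i], v[i + 1]))) -
             log(Jac(outmat = outmat, x2 = c(v[i], r[i])))
  }
  -mean(ll)                      # scalar, so optim()/mle() can work with it
}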


> Berend
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/MLE-where-loglikelihood-function-is-a-function-of-numerical-solutions-tp3439436p3447224.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

Kristian



Re: [R] MLE where loglikelihood function is a function of numerical solutions

2011-04-13 Thread Kristian Lind
m(log(transdens(parameters = parameters, x =
c(v[i],v[i+1])))-log(Jac(outmat=outmat, x2=c(v[i],r[i])))
#}
#f
 }


2011/4/10 Albyn Jones 

> to clarify: by  "if you knew that LL(psi+eps) were well approximated by
> LL(psi), for the values of eps used to evaluate numerical derivatives of LL.
> "
> I mean the derivatives of LL(psi+eps) are close to the derivatives of
> LL(psi),
> and perhaps you would want the hessian to be close as well.
>
> albyn
>
>
> Quoting Albyn Jones :
>
>  Hi Kristian
>>
>> The obvious approach is to treat it like any other MLE problem:
>>  evaluation
>> of the log-likelihood is done as often as necessary for the optimizer you
>> are using: eg a call to optim(psi,LL,...)  where LL(psi) evaluates the log
>> likelihood at psi.  There may be computational shortcuts that would work if
>> you knew that LL(psi+eps) were well approximated by LL(psi), for the values
>> of eps used to evaluate numerical derivatives of LL.  Of course, then you
>> might need to write your own custom optimizer.
>>
>> albyn
>>
>> Quoting Kristian Lind :
>>
>>  Hi there,
>>>
>>> I'm trying to solve a ML problem where the likelihood function is a
>>> function
>>> of two numerical procedures and I'm having some problems figuring out how
>>> to
>>> do this.
>>>
>>> The log-likelihood function is of the form L(c,psi) = 1/T sum [log (f(c,
>>> psi)) - log(g(c,psi))], where c is a 2xT matrix of data and psi is the
>>> parameter vector. f(c, psi) is the transition density which can be
>>> approximated. The problem is that in order to approximate this we need to
>>> first numerically solve 3 ODEs. Second, numerically solve 2 non-linear
>>> equations in two unknowns wrt the data. The g(c,psi) function is known,
>>> but
>>> dependent on the numerical solutions.
>>> I have solved the ODEs using the deSolve package and the 2 non-linear
>>> equations using the BB package, but the results are dependent on the
>>> parameters.
>>>
>>> How can I write a program that will maximise this log-likelihood
>>> function,
>>> taking into account that the numerical procedures needs to be updated for
>>> each iteration in the maximization procedure?
>>>
>>> Any help will be much appreciated.
>>>
>>>
>>> Kristian
>>>
>>>[[alternative HTML version deleted]]
>>>
>>> __
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>>
>>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



[R] MLE where loglikelihood function is a function of numerical solutions

2011-04-10 Thread Kristian Lind
Hi there,

I'm trying to solve an ML problem where the likelihood function is a function
of two numerical procedures, and I'm having some problems figuring out how to
do this.

The log-likelihood function is of the form L(c,psi) = 1/T sum [log (f(c,
psi)) - log(g(c,psi))], where c is a 2xT matrix of data and psi is the
parameter vector. f(c, psi) is the transition density which can be
approximated. The problem is that in order to approximate this we need to
first numerically solve 3 ODEs. Second, numerically solve 2 non-linear
equations in two unknowns wrt the data. The g(c,psi) function is known, but
dependent on the numerical solutions.
I have solved the ODEs using the deSolve package and the 2 non-linear
equations using the BB package, but the results are dependent on the
parameters.

How can I write a program that will maximise this log-likelihood function,
taking into account that the numerical procedures needs to be updated for
each iteration in the maximization procedure?

Any help will be much appreciated.


Kristian



[R] dfsane arguments

2011-03-31 Thread Kristian Lind
Hi there,

I'm trying to solve 2 nonlinear equations in 2 unknowns using the BB
package.

The first part of my program solves 3 ODEs using the deSolve package. This
part works. The output is used as parameter values in the functions I need
to solve.

The second part is to solve 2 equations in 2 unknowns. This does not work:
I get the error message "unexpected end of input". So what inputs am I
missing here? As I understand it, the arguments I have excluded from
dfsane(), such as control, are set to their defaults.

parameters <- c(K_vv= 0.0047,
K_rv=-0.0268,
K_rr=0.3384,
theta_v=107.4039,
theta_r =5.68,
Sigma_rv=0.0436,
Sigma_rr=0.1145,
lambda_v=0,
lambda_r=-0.0764
)
state <- c(b_1 = 0,
b_2 = 0,
a = 0)
Kristian <- function(t, state, parameters){
with(as.list(c(state, parameters)),{
db_1 =
-((K_vv+lambda_v)*b_1+(K_rv+Sigma_rv*lambda_v+Sigma_rr*lambda_r)*b_2+0.5*(b_1)^2+Sigma_rv*b_1*b_2+0.5*((Sigma_rv)^2+(Sigma_rr)^2)*(b_2)^2
)
db_2 = -K_rr*b_2+1
da = K_vv*theta_v*b_1+(K_rv*theta_v+K_rr*theta_r)*b_2
})
}
times <- seq(0, 10, by = 0.5)
library(deSolve)
out <- ode(y = state, times = times, func = Kristian, parms = parameters)
# constructing output as a matrix
outmat <- as.matrix(out)
library(BB) #loading BB package
Bo <- function(x, s){
f <- rep(NA, length(x))
f[1] <- (1-exp(outmat[4,4]-outmat[4,3]*x[1]-outmat[4,2]*x[2]))/
( exp(-outmat[1,4]-outmat[1,3]*x[1]-outmat[1,2]*x[2])
+ exp(-outmat[2,4]-outmat[2,3]*x[1]-outmat[2,2]*x[2])
+ exp(-outmat[3,4]-outmat[3,3]*x[1]-outmat[3,2]*x[2])
+ exp(-outmat[4,4]-outmat[4,3]*x[1]-outmat[4,2]*x[2]))-s[1]

f[2] <-(1-exp(-outmat[20,4]-outmat(20,3)*x[1]-outmat[20,2]*x[2]))/
( exp(-outmat[1,4]-outmat[1,3]*x[1]-outmat[1,2]*x[2])
+ exp(-outmat[2,4]-outmat[2,3]*x[1]-outmat[2,2]*x[2])
+ exp(-outmat[3,4]-outmat[3,3]*x[1]-outmat[3,2]*x[2])
+ exp(-outmat[4,4]-outmat[4,3]*x[1]-outmat[4,2]*x[2])
+ exp(-outmat[5,4]-outmat[5,3]*x[1]-outmat[5,2]*x[2])
+ exp(-outmat[5,4]-outmat[5,3]*x[1]-outmat[5,2]*x[2])
+ exp(-outmat[6,4]-outmat[6,3]*x[1]-outmat[6,2]*x[2])
+ exp(-outmat[7,4]-outmat[7,3]*x[1]-outmat[7,2]*x[2])
+ exp(-outmat[8,4]-outmat[8,3]*x[1]-outmat[8,2]*x[2])
+ exp(-outmat[9,4]-outmat[9,3]*x[1]-outmat[9,2]*x[2])
+ exp(-outmat[10,4]-outmat[10,3]*x[1]-outmat[10,2]*x[2])
+ exp(-outmat[11,4]-outmat[11,3]*x[1]-outmat[11,2]*x[2])
+ exp(-outmat[12,4]-outmat[12,3]*x[1]-outmat[12,2]*x[2])
+ exp(-outmat[13,4]-outmat[13,3]*x[1]-outmat[13,2]*x[2])
+ exp(-outmat[14,4]-outmat[14,3]*x[1]-outmat[14,2]*x[2])
+ exp(-outmat[15,4]-outmat[15,3]*x[1]-outmat[15,2]*x[2])
+ exp(-outmat[16,4]-outmat[16,3]*x[1]-outmat[16,2]*x[2])
+ exp(-outmat[17,4]-outmat[17,3]*x[1]-outmat[17,2]*x[2])
+ exp(-outmat[18,4]-outmat[18,3]*x[1]-outmat[18,2]*x[2])
+ exp(-outmat[19,4]-outmat[19,3]*x[1]-outmat[19,2]*x[2])
+ exp(-outmat[20,4]-outmat[20,3]*x[1]-outmat[20,2]*x[2])) -s[2]
f
s <- c(0.03, 0.045)
p<-c(0.5, 0.5)
ans <- dfsane(par=p, fn=Bo, s=s)
ans$par
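
A note for the archive: the "unexpected end of input" is a parse error rather
than a missing dfsane() argument. As posted, the body of Bo() is never closed
with "}" before the line "s <- c(0.03, 0.045)", and f[2] indexes with
outmat(20,3) (parentheses) where outmat[20,3] (brackets) is needed. A
self-contained toy with the same structure, showing where the brace goes and
how the extra argument s reaches fn (the outmat values here are made up):

library(BB)
outmat <- matrix(seq(0.01, 0.80, length.out = 80), nrow = 20, ncol = 4)
Bo <- function(x, s) {
  f <- numeric(2)
  f[1] <- (1 - exp(-outmat[4, 4] - outmat[4, 3]*x[1] - outmat[4, 2]*x[2])) - s[1]
  f[2] <- (1 - exp(-outmat[20, 4] - outmat[20, 3]*x[1] - outmat[20, 2]*x[2])) - s[2]
  f
}  # <- this closing brace is what is missing in the listing above

s <- c(0.03, 0.045)
ans <- dfsane(par = c(0.5, 0.5), fn = Bo, s = s, quiet = TRUE)
ans$par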

Any help will be much appreciated!

Thank you,

Kristian Lind
