Re: [R] optimization problem

2015-10-19 Thread Hector Villalobos
Thanks to all who responded,

I've found very useful code here:

http://courses.washington.edu/fish507/notes.html

In particular, Lecture 3...

Héctor




-- 

Dr. Héctor Villalobos 

Re: [R] optimization problem

2015-10-17 Thread Berend Hasselman

Your model is producing -Inf entries in the vector Be (in function modl and LL) 
at some stage during the optimization process.
You should first do something about that before anything else.

Berend
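
A minimal sketch of one way to do that (keeping the objective from the original
post unchanged and assuming the data objects B, Ct, a and Ag from that post are
in the workspace; the name modl_safe and the 1e10 penalty are illustrative
choices, not part of the thread):

# Sketch: same recursion as modl(), but bail out with a large penalty as soon
# as the predicted biomass Be becomes non-positive, so log(Be) is never -Inf.
modl_safe <- function(par) {
  ro <- par[1]
  ko <- par[2]
  n  <- length(B)
  Be <- numeric(n)
  Be[1] <- ko * Ag
  for (k in 2:n) {
    Be[k] <- Be[k-1] + ro * a[k-1] * Be[k-1] * (1 - Be[k-1]/ko) - Ct[k-1]
    if (!is.finite(Be[k]) || Be[k] <= 0) return(1e10)   # penalize invalid trajectories
  }
  err <- (log(B) - log(Be))^2
  ee  <- sqrt(sum(err)/(n - 2))
  LL  <- (1/(sqrt(2*pi)*ee)) * exp(-(err/(2*ee^2)))
  -crossprod(LL)
}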


[R] optimization problem

2015-10-16 Thread Hector Villalobos
Dear R users,

I'm trying to find the parameters of a dynamic biomass model using maximum
likelihood estimation. I used two approaches: one by hand, with the optim()
function, and the other using the mle2() function from package bbmle. My problem
is that the results change a lot depending on the initial values... I can't
see what I am doing wrong...

Thank you for any help!


# Data
x <- 1995:2010
B <- c(3500, 3200, 3000, 2800, 2600, 3000, 3200, 3800, 4200, 4300, 4400,
4400, 4500, 4600, 5000, 4300)
Ct <- c(912, 767, 642, 482, 353, 331, 332, 309, 366, 402, 392, 478, 408,
434, 407, 637)
a <- c(0.539, 0.603, -0.948, 0.166, 1.895, 0.786, 0.901, 0.844, 0.337,
0.429, 0.304, 0.230, 1.001, 0.750, 0.507, 1.502)
Ag <- 0.55

# Function with quantity to minimize
modl <- function(par) {
  ro <- par[1]
  ko <- par[2]
  n <- length(B)
  Be <- rep(NA, n)
  Be[1] <- ko * Ag
  for ( k in 2:n)
Be[k] <- Be[k-1] + ro * a[k-1] * Be[k-1] * (1 - Be[k-1]/ko) - Ct[k-1]
  err <- (log(B) - log(Be))^2
  ee <- sqrt( sum(err)/(n-2) )
  LL <- (1/(sqrt(2*pi)*ee)) * exp( -(err/(2*ee^2) ) )
  -crossprod(LL)
}

# Using function optim()
par.optim <- optim(par = list(ro=0.4, ko=8000), modl, method = "BFGS")
ro <- par.optim$par[1]
ko <- par.optim$par[2]

# estimated values of "B"
n <- length(B)
Be <- rep(NA, n)
Be[1] <- ko * Ag
for ( k in 2:n)
  Be[k] <- Be[k-1] + ro * a[k-1] * Be[k-1] * (1 - Be[k-1]/ko) - Ct[k-1]

# Plot, estimation of "B" seems reasonable
plot(x, B, ylim=c(1000, 7000))
lines(x, Be, col="blue", lwd=2)


# ... but it is very sensitive to initial values...
par.optim2 <- optim(par = list(ro=0.4, ko=1), modl, method = "BFGS")
ro2 <- par.optim2$par[1]
ko2 <- par.optim2$par[2]

Be2 <- rep(NA, n)
Be2[1] <- ko2 * Ag
for ( k in 2:n)
  Be2[k] <- Be2[k-1] + ro2 * a[k-1] * Be2[k-1] * (1 - Be2[k-1]/ko2) -
Ct[k-1]

lines(x, Be2, col="blue", lwd=2, lty=3)



# Using the mle2 function
library(bbmle)
LL <- function(ro, ko, mu, sigma) {
  n <- length(B)
  Be <- rep(NA, n)
  Be[1] <- ko * Ag
  for ( k in 2:n)
Be[k] <- Be[k-1] + ro * a[k-1] * Be[k-1] * (1 - Be[k-1]/ko) - Ct[k-1]
  err <- log(B) - log(Be)
  R <- (dnorm(err, mu, sigma, log=TRUE))
  -sum(R)
}

Bc.mle <- mle2(LL, start = list(ro=0.4, ko=8000, mu=0, sigma=1))
summary(Bc.mle)

ro3 <- coef(Bc.mle)[1]
ko3 <- coef(Bc.mle)[2]

Be3 <- rep(NA, n)
Be3[1] <- ko3 * Ag
for ( k in 2:n)
  Be3[k] <- Be3[k-1] + ro3 * a[k-1] * Be3[k-1] * (1 - Be3[k-1]/ko3) -
Ct[k-1]

lines(x, Be3, col="red", lwd=2)


-- 

Héctor Villalobos 

Depto. de Pesquerías y Biología Marina

Centro Interdisciplinario de Ciencias Marinas-Instituto Politécnico Nacional

CICIMAR-I.P.N.

A.P. 592. Colonia Centro

La Paz, Baja California Sur, MÉXICO. 23000

Tels.: (+52 612) 122 53 44; 123 46 58; 123 47 34 ext.: 81602

Fax: (+52 612) 122 53 22


Re: [R] optimization problem

2015-10-16 Thread Bert Gunter
I made no attempt to examine your details for problems, but in general,

> My problem is that the results change a lot depending on the initial
> values... I can't see what I am doing wrong...

This is a symptom of an overparameterized model: the parameter estimates
are unstable even though the predictions may not change much. In other
words, your model may be too complex for your data.


Whether that is true here, you or others will have to determine. Try
simplifying your model as a start.

-- Bert






-- 
Bert Gunter

"Data is not information. Information is not knowledge. And knowledge is
certainly not wisdom."
   -- Clifford Stoll


Re: [R] Optimization problem

2013-04-11 Thread Rui Barradas

Hello,

You cannot change the numerical accuracy, it's a built-in constant. To 
see it use


?.Machine
.Machine$double.eps  # smallest value different from zero


Actually, .Machine$double.eps is the smallest positive 
floating-point number x such that 1 + x != 1


You can try the following function

is.zero <- function(x, eps = .Machine$double.eps^0.5) abs(x) < eps

is.zero(your_value)  # TRUE?


Or even try ?all.equal

all.equal(your_value, 0)
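
A small illustration of both checks, applied to the optimize() result discussed
further down in this thread:

f <- function(x) x^2
opt <- optimize(f, lower = -1, upper = 1)
opt$minimum                        # about -2.8e-17, i.e. not exactly 0

is.zero <- function(x, eps = .Machine$double.eps^0.5) abs(x) < eps
is.zero(opt$minimum)               # TRUE: zero up to numerical accuracy
isTRUE(all.equal(opt$minimum, 0))  # TRUE as well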


Hope this helps,

Rui Barradas

On 10-04-2013 22:13, nntx wrote:

Rui, thanks for your reply. You mean that it is an issue of accuracy? So if
I change the numerical accuracy, can my results be output? Thanks a lot!








[R] Optimization problem

2013-04-10 Thread nntx
As a simple example, I want to find the minimum value of x^2, but it can't be
obtained by:
f <- function(x) x^2
optimize(f, lower = -1, upper = 1)

What are other methods to deal with this? I tried DEoptim, and it still doesn't
work. Any suggestions will be extremely helpful! Thanks!


Shelly





Re: [R] Optimization problem

2013-04-10 Thread peter dalgaard

On Apr 10, 2013, at 03:24 , nntx wrote:

 As a simple example, I want to find the minimum value of x^2, but it can't
 be obtained by:
 f <- function(x) x^2
 optimize(f, lower = -1, upper = 1)

Works fine for me. What did you expect it to do?

f <- function(x) x^2
optimize(f, lower = -1, upper = 1)
$minimum
[1] -2.775558e-17

$objective
[1] 7.70372e-34

-pd

 
 What are other methods to deal with this? I tried DEoptim, and it still
 doesn't work. Any suggestions will be extremely helpful! Thanks!
 
 
 Shelly
 
 
 

-- 
Peter Dalgaard, Professor
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd@cbs.dk  Priv: pda...@gmail.com



Re: [R] Optimization problem

2013-04-10 Thread nntx
Thank you professor. I think the minimum value of x^2 between -1 and 1 should
be x = 0, y = 0, but the result is not that. I am wondering whether there is
anything wrong with my thinking?
Thanks for helping me out!





Re: [R] Optimization problem

2013-04-10 Thread Rui Barradas

Hello,

Your thought is mathematically right but numerically wrong. The result 
given by optimize is so close to the real minimum that numerical 
accuracy comes in and it becomes indistinguishable from the value you're 
expecting.

You get the minimum up to a certain accuracy, not more.

Hope this helps,

Rui Barradas

On 10-04-2013 19:33, nntx wrote:

Thank you professor. I think the minimum value of x^2 between -1 and 1 should
be x = 0, y = 0, but the result is not that. I am wondering whether there is
anything wrong with my thinking?
Thanks for helping me out!








Re: [R] Optimization problem

2013-04-10 Thread nntx
Rui, thanks for your reply. You mean that it is an issue of accuracy? So if
I change the numerical accuracy, can my results be output? Thanks a lot!





[R] Optimization Problem in R

2013-02-09 Thread Axel Urbiz
Dear List,

I'm new to R. I'm trying to solve a simple constrained optimization
problem.

Essentially, let's say I have a matrix as in the object 'mm' inside the
function below. My objective function should have a matrix of parameters,
one parameter for each element of 'mm' (4 in this case). The problem is to
select the values of the parameters to maximize the function 'ff' s.t. (1)
each parameter is either 0 or 1, and (2) the sum of each row in the
parameter matrix equals 1.

I'm using the function constrOptim as shown below, but obviously I'm not
doing things right. Any help is much appreciated.

ff <- function (x) {

  mm <- matrix(c(10, 25, 5, 10), 2, 2)
  matx <- matrix(NA, 2, 2)

  for (i in 1:nrow(x)) {
    for (j in 1:ncol(x)) {

      matx[i, j] <- x[i, j]
    }

  }

  -sum(apply(mm ^ matx, 1, prod))

}


constrOptim(theta = c(0, 0, 0, 0), f = ff, ui = rbind(c(1, 1), c(1, 1)),
            ci = c(1, 1))


Best,
Axel.



Re: [R] Optimization Problem in R

2013-02-09 Thread Berend Hasselman



1. Your parameter vector has length 4.
2. The ui matrix is 2x2, so ui %*% theta can't be computed.

3. ui should be something like rbind(c(1,1,0,0), c(0,0,1,1)).

4. You haven't specified the gradient function (grad); it should be a function
or NULL.

5. In your function ff, on entry x is a vector. It can be simplified to

ff <- function (x) {
  mm <- matrix(c(10, 25, 5, 10), 2, 2)
  matx <- matrix(x, 2, 2)
  -sum(apply(mm ^ matx, 1, prod))
}

then

constrOptim(theta = c(1, 1, 1, 1), f = ff, grad = NULL,
            ui = rbind(c(1, 1, 0, 0), c(0, 0, 1, 1)), ci = c(1, 1))

gives:

$par
[1]  -63.16203  120.71655 -135.39084  139.50023

$value
[1] -1.797693e+308

$counts
function gradient 
 501   NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2


$barrier.value
[1] 0


which is probably not at all what you are looking for.

Look in the CRAN Task View for Optimization for possible options for your 
optimization problem.

Berend
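
Since the stated constraints make each row of the 2x2 parameter matrix either
(1, 0) or (0, 1), there are only four feasible matrices, so a brute-force check
of all of them (a sketch, reusing the simplified ff above) sidesteps
constrOptim entirely:

# Sketch: enumerate all 0/1 parameter matrices whose rows sum to 1 and keep
# the one that maximizes sum(apply(mm ^ matx, 1, prod)), i.e. minimizes ff.
rows <- list(c(1, 0), c(0, 1))
grid <- expand.grid(r1 = 1:2, r2 = 1:2)
obj  <- sapply(seq_len(nrow(grid)), function(i) {
  P <- rbind(rows[[grid$r1[i]]], rows[[grid$r2[i]]])
  ff(c(P))                        # c(P) matches matrix(x, 2, 2) inside ff
})
best <- which.min(obj)
rbind(rows[[grid$r1[best]]], rows[[grid$r2[best]]])   # best parameter matrix
-obj[best]                                            # maximized objective value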



Re: [R] Optimization problem

2012-05-17 Thread Pacin Al
Hi Greg,


The problem is that I also have restrictions for each variable (they must be
higher than -.07 and smaller than .2) and I'm dealing with a lot of them.

I've already tried the second approach but, as far as it seems, the function
doesn't satisfy my objective.  
That's what I'm doing:

.
faum = function(aum)
{
  (...)
  ifelse(colMeans(prob) > .65, totm <- sum(marg), totm <- -1e10)
  # prob is a transformation of 'aum'; sum(marg) (always positive) is what I
  # want to maximize
  (...)
  return(-totm)
}

optim(rep(0, nrow), faum, method = "L-BFGS-B", control = list(trace = 12),
      lower = rep(-.07, nrow), upper = rep(.2, nrow))
.

The function gives me 1e10 if my initial values for the parameters don't
satisfy mean > .65, and it doesn't improve my initial conditions if they do
satisfy mean >= .65.

I know that a solution with mean > .65 exists; I can find it by hand.

 



[R] Optimization problem

2012-05-16 Thread Pacin Al
Hi,

I'm dealing with an optimization problem. I'm using 'optim' to maximize the
output of a function, given some restrictions on the input. I would like to
know if there is a way to impose some restrictions on 'intermediate
variables' of the function. An example..

fx = function (x)
{
  s <- 0
  for (i in 1:3)
  {
    s <- x[i]^3 + s
  }
  s
}

optim(rep(4, 3), fx, method = "L-BFGS-B", lower = rep(-10, 3), upper = rep(10, 3))

It would return '-10' for all variables. I want, however, a solution
satisfying mean(x) > 7.
Please, don't analyse this specific example, but the logic of satisfying a
criterion for the mean of the input (with thousands of variables). My real
problem involves price elasticity and I want to find the price increase for
each individual that would give me maximum total profit margin, but
respecting a minimum retention of clients.

Thank you very much,
John Mayer 



Re: [R] Optimization problem

2012-05-16 Thread Greg Snow
There are a couple of options.

First if you want the mean to equal 7, then that means the sum must
equal 21 and therefore you can let optim only play with 2 of the
variables, then set the 3rd to be 21-s1-s2.

If you want the mean to be greater than 7 then just put in a test, if
the mean is less than 7 then return -Inf or another really small
number, if the mean is large enough then go on to compute the function
that you want to maximize.

Also note that you don't need the loop; it can be replaced with sum(x^3).
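
A minimal sketch of both ideas on the toy example (assuming the cube-sum
objective from the original post; the 1e10 penalty and the starting values are
arbitrary illustrative choices):

fx <- function(x) sum(x^3)

# Idea 1: mean(x) == 7 means sum(x) == 21, so optimize over two free
# variables and substitute the third as 21 - x1 - x2.
# (The box constraint on the substituted variable is not enforced here.)
f2 <- function(p) fx(c(p, 21 - sum(p)))
optim(c(7, 7), f2, method = "L-BFGS-B", lower = rep(-10, 2), upper = rep(10, 2))

# Idea 2: mean(x) > 7 as a penalty -- return a huge value when the
# constraint fails (we are minimizing), otherwise evaluate the objective.
# Gradient-based methods can struggle with this hard jump.
f3 <- function(x) if (mean(x) <= 7) 1e10 else fx(x)
optim(rep(8, 3), f3, method = "L-BFGS-B", lower = rep(-10, 3), upper = rep(10, 3))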




-- 
Gregory (Greg) L. Snow Ph.D.
538...@gmail.com



Re: [R] Optimization problem with nonlinear constraint

2010-07-28 Thread Uli Kleinwechter

Dear Ravi,

As I've already written to you, the problem indeed is to find a solution 
to the transcendental equation y = x * T^(x-1), given y and T, and the 
optimization problem below was only a workaround.


John C. Nash has been so kind as to help me out here. In case anyone faces a 
similar problem in the future, the solution looks as follows:


*

func1 <- function(y, x, T){
  out <- x*T^(x-1) - y
  return(out)
}

# Assign the known values to y and T:
y <-   3
T <- 123

root <- uniroot(func1, c(-10, 100), y = y, T = T)
print(root)



Somewhat simpler than I thought

Thanks again!

Uli





Re: [R] Optimization problem with nonlinear constraint

2010-07-28 Thread Hans W Borchers
Uli Kleinwechter u.kleinwechter at uni-hohenheim.de writes:

 
 Dear Ravi,
 
 As I've already written to you, the problem indeed is to find a solution 
 to the transcendental equation y = x * T^(x-1), given y and T and the 
 optimization problem below only a workaround.


I don't think optimization is the right approach for simply inverting
a simple function.

The inverse of the function  x \to x * e^x  is the Lambert W function.
So the solution in your case is:

W(log(T)*y*T) / log(T)  # hope I transformed it correctly.

Now, how to compute Lambert's W ? Well, look into the 'gsl' package
and, alas, there is the function lambert_W0.

Your example:

y <-   3
T <- 123

library(gsl)
lambert_W0(log(T)*y*T)/log(T)
# [1] 1.191830
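
As a quick sanity check (a small addition here, assuming gsl is loaded as
above), plugging the value back into the defining equation recovers y:

x <- lambert_W0(log(T)*y*T)/log(T)
x * T^(x - 1)     # ~ 3, i.e. the given y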


Regards,  Hans Werner


 




Re: [R] Optimization problem with nonlinear constraint

2010-07-28 Thread Ravi Varadhan
Very nice, Hans!  I didn't know of the existence of the Lambert W function
(a.k.a. the Omega function) before.  I didn't know that it occurs in the
solution of exponential decay with delay: dy/dt = a * y(t - 1).  Apparently it
is more than 250 years old!

Thanks,
Ravi.



Re: [R] Optimization problem with nonlinear constraint

2010-07-28 Thread Ravi Varadhan
For those interested in the esoterica of special functions, here is a nice
reference on the Lambert W function:

http://www.cs.uwaterloo.ca/research/tr/1993/03/W.pdf

Ravi.



[R] Optimization problem with nonlinear constraint

2010-07-26 Thread Uli Kleinwechter

Dear all,

I'm looking for a way to solve a simple optimization problem with a 
nonlinear constraint. An example would be


max x   s.t.   y = x * T ^(x-1)

where y and T are known values.

optim() and constrOptim() only allow for box or linear constraints, 
so I did not succeed here. I also found hints about donlp2, but it does 
not seem to be available anymore.


Any hints are welcome,

Uli

--

Uli Kleinwechter
Agricultural and Food Policy Group (420a)
University of Hohenheim
D-70593 Stuttgart
E-mail: u.kleinwech...@uni-hohenheim.de



Re: [R] Optimization problem with nonlinear constraint

2010-07-26 Thread Ravi Varadhan
Hi Uli,

I am not sure if this is the problem that you really want to solve.  The
answer is the solution to the equation y = x * T^(x-1), provided a solution
exists.  There is no optimization involved here.  What is the real problem
that you are trying to solve?

If you want to solve a more meaningful constrained optimization problem, you
may want to try the alabama package which I just put on CRAN.  It can
optimize smooth nonlinear functions subject to linear and nonlinear equality
and inequality constraints.

http://cran.r-project.org/web/packages/alabama/index.html

Let me know if you run into any problems using it.

Best,
Ravi.
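
A minimal sketch of what that could look like for the toy problem in this
thread (assuming the alabama package; the equality constraint simply encodes
y = x * T^(x-1), so this is really root finding in disguise):

library(alabama)
y <- 3; T <- 123
fn  <- function(x) -x                    # maximize x by minimizing -x
heq <- function(x) y - x * T^(x - 1)     # equals 0 exactly when the constraint holds
auglag(par = 1, fn = fn, heq = heq)$par  # should end up near 1.1918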




Re: [R] Optimization problem

2010-06-18 Thread José E. Lozano
 How about smoothing the percentages, and then taking the second derivative
 to find the inflection point?

 which.max(diff(diff((lowess(percentages)$y))))

This solution is what I've been using so far. The only difference is that I
am smoothing the 1st derivative, since it's the one I want to be smooth;
smoothing the percentages curve does not produce good results.

Thanks!
Jose Lozano



Re: [R] Optimization problem

2010-06-18 Thread José E. Lozano
Hello:
 
 Here is a general approach using smoothing with the Gasser-Mueller kernel,
 which is implemented in the lokern package. The optimal bandwidth for
 derivative estimation is automatically chosen using a plug-in approximation.
 The code and the results are attached here.

Maybe I am missing the features.mat function?

Probably the function is inside the file features.txt, which is not
included.

 source("features.txt")

Thanks a lot,
Jose Lozano



Re: [R] Optimization problem

2010-06-18 Thread José E. Lozano
 
 How about smoothing the percentages, and then taking the second
 derivative to find the inflection point?

 which.max(diff(diff((lowess(percentages)$y))))

 This solution is what I've been using so far. The only difference is that
 I am smoothing the 1st derivative, since it's the one I want to be smooth;
 smoothing the percentages curve does not produce good results.

I've noticed something:

What I am using is something like:

which.max(abs(diff(sign(diff(diff(lowess(percentages)$y))))))

The first value where the 2nd derivative changes its sign, to find
f''(x) = 0.

But you have suggested the max value of the 2nd derivative. 

Regards,
Jose Lozano



Re: [R] Optimization problem

2010-06-18 Thread William Simpson
I don't see why one would want to pretend that the function is
continuous. It isn't.
The x variable devices is discrete.
Moreover, the whole solution space is small: the possible solutions
are integers in the range of maybe 20-30.

Bill



Re: [R] Optimization problem

2010-06-18 Thread José E. Lozano

 I don't see why one would want to pretend that the function is continuous.
It isn't.
 The x variable devices is discrete.
 Moreover, the whole solution space is small: the possible solutions are
integers in the range of maybe 20-30.

Yes, you are right, what I'd like to think is that the outcome is a
discretization of a continuous variable. How could it be? Don't know... forget
we are talking about physical devices, would it be possible/rational to speak
of half a device?

Anyway, I have to test different approaches to find a convenient solution. I
must admit that I like the idea of using a fixed criterion of a percentage,
as you suggested, but it needs some research to see if it is possible to
find a common criterion for all my datasets, or at least whether the criterion
can be based upon the number of devices, as I tried the last time I thought
about it:

criterion <- devices/10
solution <- min(devices[diff(percentages) < criterion])

Regards,
Jose Lozano



[R] Optimization problem

2010-06-17 Thread José E. Lozano
Hello,

I'm facing an optimization problem. I've already solved it, but I'm trying to
find other answers to this problem to improve the solution.

Well, to make it short: I have to set up/install a number of devices in a
building, and I have to give service to a number of customers, or better
said, to give good signal quality. The more devices I place, the
stronger the signal. The signal is measured as a (coverage) percentage: the
higher the percentage, the better the service. The max percentage is
(obviously) 100%.

As an example:

Example 1:
--
devices <- 1:49
percentages <- c(15.8, 29.3, 43.1, 52.9, 61.8, 70.4, 77.6, 84.4, 88.6, 90.9,
                 92.7, 93.2, 94.1, 94.6, 95.4, 96.1, 96.5, 97, 97.3, 97.8,
                 98.1, 98.7, 99, 99.4, 99.5, 99.6, 99.7, 99.9, rep(100, 21))
matplot(devices, percentages, type = "l")
cbind(devices, percentages)

In this example, I can place up to 49 devices, though it does not make any
sense to place more than 29 since 29 devices gives a quality of 100%, the
maximum.

My problem is that I want to minimize the number of devices while maximizing
the percentage.

I think the key is that I don't want to add a new device if there is no
significant change in the final percentage, so, looking at the graph, 11
devices seem logical. Notice that my objective is not the final percentage;
it can be 90%... or 50%. The service is ok with both numbers.

Although I've solved the problem by calculating the slope and making an axis
change, I'd like to do something statistically stronger, because in the
end what I'm doing is just some trigonometry.

If you can think of other solutions, I'd appreciate your help.

Thanks,
J. Lozano

Other examples:

Example 2:
--
devices <- 1:45
percentages <- c(19.6, 38.3, 53.2, 65, 72.9, 78.5, 82.8, 87, 89, 90.1, 91.7,
                 92.6, 93.7, 94.6, 95.4, 96, 96.5, 96.9, 97.2, 97.7, 97.9,
                 98.3, 98.6, 98.8, 99, 99.3, 99.6, 99.8, rep(100, 17))
matplot(devices, percentages, type = "l")
cbind(devices, percentages)

45 total devices, 9-10 devices seem logical.

Example 3:
--
devices <- 1:37
percentages <- c(12.4, 24.6, 35.8, 44.9, 53.4, 61.6, 70, 76.9, 82.3, 84.7,
                 87.5, 89.1, 90.7, 91.8, 92.9, 94.3, 95.6, 95.9, 97, 97.2,
                 97.4, 97.8, 98, 98.2, 98.4, 98.7, 99, 99.2, 99.4, 99.6,
                 99.8, rep(100, 6))
matplot(devices, percentages, type = "l")
cbind(devices, percentages)

37 total devices, 9 devices seem logical.

Example 4:
--
devices <- 1:52
percentages <- c(12.2, 22.4, 32, 41.3, 50.1, 56.2, 61, 64.3, 66.9, 69.2,
                 73.7, 76, 78.6, 81.7, 85.4, 87.8, 91.1, 92.8, 94.2, 95,
                 95.6, 96.3, 96.6, 97.3, 97.7, 98.2, 98.5, 98.7, 99.2, 99.6,
                 rep(100, 22))
matplot(devices, percentages, type = "l")
cbind(devices, percentages)

52 total devices, hard to choose, 9? 19?



Re: [R] Optimization problem

2010-06-17 Thread Bart Joosen

How about smoothing the percentages, and then taking the second derivative to
find the inflection point?

which.max(diff(diff((lowess(percentages)$y))))


Bart
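
As a sketch of how this applies to Example 1 from the original post (data as
given there; lowess() smooths the coverage curve and the double diff()
approximates its second derivative):

devices <- 1:49
percentages <- c(15.8, 29.3, 43.1, 52.9, 61.8, 70.4, 77.6, 84.4, 88.6, 90.9,
                 92.7, 93.2, 94.1, 94.6, 95.4, 96.1, 96.5, 97, 97.3, 97.8,
                 98.1, 98.7, 99, 99.4, 99.5, 99.6, 99.7, 99.9, rep(100, 21))
sm    <- lowess(percentages)$y        # smoothed coverage curve
elbow <- which.max(diff(diff(sm)))    # index with the largest second difference
elbow                                 # candidate number of devices
plot(devices, percentages, type = "l"); abline(v = elbow, lty = 3)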


Re: [R] Optimization problem

2010-06-17 Thread William Simpson

min(devices[percentages==max(percentages)])

Bill



Re: [R] Optimization problem

2010-06-17 Thread Ravi Varadhan
Here is a general approach using smoothing with the Gasser-Mueller kernel,
which is implemented in the lokern package.  The optimal bandwidth for
derivative estimation is automatically chosen using a plug-in approximation.
The code and the results are attached here.

Let me know if you have any questions.

Ravi.




optimal_devices_smooth_plots.pdf
Description: Adobe PDF document


[R] Optimization problem

2010-06-17 Thread William Simpson
Sorry, I thought you wanted to find the lowest value of x that produced the
maximum value of y. I see now that is not the case.

I think you have to decide on what amount of improvement per device
you judge to be 'minimal'. Then the algorithm uses the value of y that
occurs at the point where this criterion is reached.

I think this will do it:

min(devices[diff(percentages) < criterion])

Bill
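
A sketch of the whole idea on Example 1 from the original post (the criterion
of 1 percentage point per extra device is an arbitrary illustrative choice):

devices <- 1:49
percentages <- c(15.8, 29.3, 43.1, 52.9, 61.8, 70.4, 77.6, 84.4, 88.6, 90.9,
                 92.7, 93.2, 94.1, 94.6, 95.4, 96.1, 96.5, 97, 97.3, 97.8,
                 98.1, 98.7, 99, 99.4, 99.5, 99.6, 99.7, 99.9, rep(100, 21))
criterion <- 1                     # minimal worthwhile gain, in percentage points
gain <- diff(percentages)          # gain from adding the (k+1)-th device
min(which(gain < criterion))       # -> 11: gains drop below 1% after 11 devices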



[R] optimization problem with linear constraints

2010-02-14 Thread Kathie

Dear R users,

I need some advice on how to use R to optimize this function with the
following constraints.

f(x1, x2, x3, y1, y2, y3)

= gamma(x1+x2-1)/{gamma(x1)*gamma(x2)} * y1^(x2-1) * y2^(x1-1)
+ gamma(x1+x3-1)/{gamma(x1)*gamma(x3)} * y1^(x3-1) * y3^(x1-1)
+ gamma(x2+x3-1)/{gamma(x2)*gamma(x3)} * y2^(x3-1) * y3^(x2-1)

s.t.

0 < x1
0 < x2
0 < x3
1 < x1+x2
1 < x1+x3
1 < x2+x3
0 < y1
0 < y2
0 < y3

I've tried constrOptim, but I got several errors; e.g.  value out of
range in 'gammafn', etc.

Are there any other built-in functions or something else for these constraints?

Any suggestion will be greatly appreciated.

Regards,

Kathryn Lord 
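
One possible setup (a sketch added for illustration, not from the thread): work
on the log scale with lgamma() to avoid the gamma() overflow behind the
'gammafn' error, and encode the strict inequalities as constrOptim()'s
ui %*% theta - ci >= 0 with theta = c(x1, x2, x3, y1, y2, y3). It assumes the
goal is maximization; whether the objective is actually bounded on this region
is a separate question, the sketch only shows the mechanics.

# One pairwise term of f, via lgamma() so large gamma() values cannot overflow.
term <- function(xi, xj, yi, yj)
  exp(lgamma(xi + xj - 1) - lgamma(xi) - lgamma(xj) +
        (xj - 1) * log(yi) + (xi - 1) * log(yj))

negf <- function(p) {                      # constrOptim() minimizes, so negate f
  x <- p[1:3]; y <- p[4:6]
  -(term(x[1], x[2], y[1], y[2]) +
      term(x[1], x[3], y[1], y[3]) +
      term(x[2], x[3], y[2], y[3]))
}

# Constraints in ui %*% theta - ci >= 0 form: x_i > 0, y_i > 0, x_i + x_j > 1.
ui <- rbind(diag(6),
            c(1, 1, 0, 0, 0, 0),
            c(1, 0, 1, 0, 0, 0),
            c(0, 1, 1, 0, 0, 0))
ci <- c(rep(0, 6), rep(1, 3))

constrOptim(theta = c(1, 1, 1, 0.5, 0.5, 0.5), f = negf, grad = NULL,
            ui = ui, ci = ci)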


Re: [R] optimization problem

2010-01-17 Thread Hans W. Borchers
Ravi Varadhan rvaradhan at jhmi.edu writes:

 
 Interesting! 
 
 Now, if I change the cost matrix, D,  in the LSAP formulation slightly
 such that it is quadratic, it finds the best solution to your example:

Dear Ravi,

I thought your solution is ingenious, but after the discussion with 
Erwin Kalvelagen I found two things quite irritating:

(1) Why is solve_LSAP(D) different from solve_LSAP(D^2) in Erwin's
example? I believed just squaring the distance matrix would make
no difference to solving the LSAP - except for some numerical
instability which does not seem to be the case here.

(2) If you change rows and columns in the definition of D, that is

D[j, i] <- sqrt(sum((B[, j] - A[, i])^2))

then the solution to Erwin's example comes out right even with
keeping the square root.

Do you have explanations for these 'phenomena'? Otherwise, I think,
there will remain some doubts about this approach.

Very best
Hans Werner


 pMatrix.min <- function(A, B) {
 # finds the permutation P of A such that ||PA - B|| is minimum
 # in Frobenius norm
 # Uses the linear-sum assignment problem (LSAP) solver
 # in the clue package
 # Returns P%*%A and the permutation vector `pvec' such that
 # A[pvec, ] is the permutation of A closest to B
   n <- nrow(A)
   D <- matrix(NA, n, n)
   for (i in 1:n) {
   for (j in 1:n) {
 #   D[j, i] <- sqrt(sum((B[j, ] - A[i, ])^2))
   D[j, i] <- sum((B[j, ] - A[i, ])^2)  # this is better
   } }
 vec <- c(solve_LSAP(D))
 list(A = A[vec, ], pvec = vec)
 }
 
 > X <- pMatrix.min(A, B)
 > X$pvec
 [1] 6 1 3 2 4 5
 > dist(X$A, B)
 [1] 10.50172
 
 
 This should be fine.  Any counter-examples to this?!
 
 Best,
 Ravi.
 
 
 Ravi Varadhan, Ph.D.
 Assistant Professor,
 Division of Geriatric Medicine and Gerontology
 School of Medicine
 Johns Hopkins University
 
 Ph. (410) 502-2619
 email: rvaradhan at jhmi.edu


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-17 Thread klausch
Dear Erwin, Ravi and Hans Werner,

thanks a lot for your replies. I don't think I have access to Cplex and 
therefore probably cannot try that out but will read about MIQP.

I then played around a bit with Ravi's suggestions and also observed that the 
linear cost function often found the exact solution, but not always - in my tests 
the quadratic cost function version always found the correct result. I'll still 
test this in a bit more detail.

Would any of you know a good reference giving an overview of which algorithms 
exist and which problems they can be used for?

Thanks a lot again!

Klaus

 Original Message 
 Date: Sat, 16 Jan 2010 23:42:08 -0500
 From: Ravi Varadhan rvarad...@jhmi.edu
 To: Erwin Kalvelagen erwin.kalvela...@gmail.com
 CC: r-h...@stat.math.ethz.ch
 Subject: Re: [R] optimization problem

 
 Interesting! 
 
 Now, if I change the cost matrix, D,  in the LSAP formulation slightly
 such that it is quadratic, it finds the best solution to your example:
 
 
 pMatrix.min <- function(A, B) {
 # finds the permutation P of A such that ||PA - B|| is minimum
 # in Frobenius norm
 # Uses the linear-sum assignment problem (LSAP) solver
 # in the clue package
 # Returns P%*%A and the permutation vector `pvec' such that
 # A[pvec, ] is the permutation of A closest to B
   n <- nrow(A)
   D <- matrix(NA, n, n)
   for (i in 1:n) {
   for (j in 1:n) {
 #   D[j, i] <- sqrt(sum((B[j, ] - A[i, ])^2))
   D[j, i] <- sum((B[j, ] - A[i, ])^2)  # this is better
   } }
 vec <- c(solve_LSAP(D))
 list(A = A[vec, ], pvec = vec)
 }
 
 > X <- pMatrix.min(A, B)
 > X$pvec
 [1] 6 1 3 2 4 5
 > dist(X$A, B)
 [1] 10.50172
 
 
 This should be fine.  Any counter-examples to this?!
 
 Best,
 Ravi.
 
 
 Ravi Varadhan, Ph.D.
 Assistant Professor,
 Division of Geriatric Medicine and Gerontology
 School of Medicine
 Johns Hopkins University
 
 Ph. (410) 502-2619
 email: rvarad...@jhmi.edu
 
 
 - Original Message -
 From: Erwin Kalvelagen erwin.kalvela...@gmail.com
 Date: Saturday, January 16, 2010 5:26 pm
 Subject: Re: [R] optimization problem
 To: Ravi Varadhan rvarad...@jhmi.edu
 Cc: r-h...@stat.math.ethz.ch
 
 
  I believe this is a very good approximation but not a 100% correct
  formulation of the original problem.
  
  E.g. for
  
  
   A <- matrix(c(
 -0.62668585718,-0.78785288063,-1.32462887089, 0.63935994044,
 -1.99878497801, 4.42667400292, 0.65534961645, 1.86914537669,
 -0.97229929674, 2.37404268115, 0.01223810011,-1.24956493590,
  0.92711756473,-1.51859351813, 3.76707054743,-2.30641777527,
 -1.98918429361,-0.43634856903,-3.65989556798, 0.00073904070,
 -1.44153837918, 1.42004180214,-1.81388322489,-1.92423923917,
 -1.46322482727,-1.81783134835,-2.59801302663, 2.03462135096,
  1.32546711550,-0.21519150247,-1.94319654347, 0.68773346973,
 -2.75094791807,-1.44814080195, 3.14197123843,-0.52521369103
  ),nrow=6)
  
   B <- diag(nrow=6)
  
   X <- pMatrix.min(A, B)
  
  dist(X$A, B)
  
  X$pvec
  
   bestpvec <- c(6,1,3,2,4,5)
  
  dist(A[bestpvec,],B)
  
  you get a norm of 10.58374 while the true optimal norm is 10.50172.  For
  small problems you often get the optimal solution, but the error 
  caused by
  linearizing the objective becomes larger if the problems are larger. 
  But the
  approximation is actually very good.
  
  Erwin
  
  
  
  Erwin Kalvelagen
  Amsterdam Optimization Modeling Group
  er...@amsterdamoptimization.com
  
  
  
  
  On Sat, Jan 16, 2010 at 2:01 PM, Ravi Varadhan rvarad...@jhmi.edu
 wrote:
  
  
   Yes, it can be formulated as an LSAP.  It works just fine.  I have
 checked
   it with several 3 x 3 examples.
  
   Here is another convincing example:
  
    n <- 50

    A <- matrix(rnorm(n*n), n, n)

    # Find P such that ||PA - C|| is minimum

    vec <- sample(1:n, n, rep=FALSE)  # a random permutation

    C <- A[vec, ]  # the target matrix is just a permutation of the original matrix

    B <- pMatrix.min(A, C)$A

    > dist(B, C)
    [1] 0

    > dist(A, C)
    [1] 69.60859
  
   Ravi.
   
  
   Ravi Varadhan, Ph.D.
   Assistant Professor,
   Division of Geriatric Medicine and Gerontology
   School of Medicine
   Johns Hopkins University
  
   Ph. (410) 502-2619
   email: rvarad...@jhmi.edu
  
  
   - Original Message -
   From: Erwin Kalvelagen erwin.kalvela...@gmail.com
   Date: Saturday, January 16, 2010 1:36 pm
   Subject: Re: [R] optimization problem
   To: Ravi Varadhan rvarad...@jhmi.edu
   Cc: r-h...@stat.math.ethz.ch
  
  
I also have doubts this can be formulated correctly as a linear
   assignment
problem. You may want to check the results with a small example.
   
Erwin

Re: [R] optimization problem

2010-01-17 Thread Ravi Varadhan
Dear Hans,

I agree with your comments.  My intuition was that the quadratic form would be 
better behaved  than the radical form (less nonlinear!?).  So, I was hoping 
to see a change in behavior when the cost function was altered from a radical 
(i.e. sqrt) form to quadratic, but I was still surprised to actually see it.  I 
don't have a good explanation for this.  

I came up with the idea of solving Klaus' problem as an LSAP problem.  I did 
not know that the LSAP approach to this and similar problems has already been 
considered by Nick Higham.  I asked Nick last night about this problem thinking 
that he might know of more direct solutions to this problem (e.g. some kind of 
SVD or related factorizations). He said that I should take a look at the PhD 
thesis of one of his students.

Take a look at Section 3.5 of the PhD thesis

   Parallel Solution of SVD-Related Problems, With Applications

at

http://www.maths.manchester.ac.uk/~higham/misc/past-students.php


This thesis proposed algorithms for the current problem and different versions 
of it under the heading of Procrustes-type problems.  I have to read this and 
get a better handle on it.  However, I would not be able to get to this for 
another two weeks.  If you have any insights from this, in the meanwhile, do 
share with us.

Best regards,
Ravi.



Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: Hans W. Borchers hwborch...@googlemail.com
Date: Sunday, January 17, 2010 3:54 am
Subject: Re: [R] optimization problem
To: r-h...@stat.math.ethz.ch


 Ravi Varadhan rvaradhan at jhmi.edu writes:
 
  
  Interesting! 
  
  Now, if I change the cost matrix, D,  in the LSAP formulation slightly
  such that it is quadratic, it finds the best solution to your example:
 
 Dear Ravi,
 
 I thought your solution is ingenious, but after the discussion with 
 Erwin Kalvelagen I found two things quite irritating:
 
 (1) Why is solve_LSAP(D) different from solve_LSAP(D^2) in Erwin's
 example? I believed just squaring the distance matrix would make
 no difference to solving the LSAP - except for some numerical
 instability which does not seem to be the case here.
 
 (2) If you change rows and columns in the definition of D, that is
 
 D[j, i] <- sqrt(sum((B[, j] - A[, i])^2))
 
 then the solution to Erwin's example comes out right even with
 keeping the square root.
 
 Do you have explanations for these 'phenomena'? Otherwise, I think,
 there will remain some doubts about this approach.
 
 Very best
 Hans Werner
 
 
  pMatrix.min <- function(A, B) {
  # finds the permutation P of A such that ||PA - B|| is minimum
  # in Frobenius norm
  # Uses the linear-sum assignment problem (LSAP) solver
  # in the clue package
  # Returns P%*%A and the permutation vector `pvec' such that
  # A[pvec, ] is the permutation of A closest to B
  n <- nrow(A)
  D <- matrix(NA, n, n)
  for (i in 1:n) {
  for (j in 1:n) {
  #   D[j, i] <- sqrt(sum((B[j, ] - A[i, ])^2))
  D[j, i] <- sum((B[j, ] - A[i, ])^2)  # this is better
  } }
  vec <- c(solve_LSAP(D))
  list(A = A[vec, ], pvec = vec)
  }
  
  > X <- pMatrix.min(A, B)
  > X$pvec
  [1] 6 1 3 2 4 5
  > dist(X$A, B)
  [1] 10.50172
  
  
  This should be fine.  Any counter-examples to this?!
  
  Best,
  Ravi.
  
  
  Ravi Varadhan, Ph.D.
  Assistant Professor,
  Division of Geriatric Medicine and Gerontology
  School of Medicine
  Johns Hopkins University
  
  Ph. (410) 502-2619
  email: rvaradhan at jhmi.edu
 
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-17 Thread Ravi Varadhan

Klaus,

I am happy to know that the quadratic cost LSAP seems to work well for you.  

The Hungarian algorithm is a classic method for solving the linear sum assignment problem, 
which is closely related to matching in bipartite graphs.  You can google or 
wiki these terms to get papers and books on this topic. 

Also look at the PhD thesis by one of Nick Higham's students (Section 3.5):

 Parallel Solution of SVD-Related Problems, With Applications

at

http://www.maths.manchester.ac.uk/~higham/misc/past-students.php

This thesis has a good discussion of different variants of your problem, some 
of which are more general than yours.  

Your problem is one of the variants of procrustes-type problems found in 
multivariate statistics.  The very same problem that you posed is supposed to 
occur in multidimensional scaling.  So, you might also want to look in that 
literature.


Best,
Ravi.



Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: klau...@gmx.de
Date: Sunday, January 17, 2010 8:06 am
Subject: Re: [R] optimization problem
To: Ravi Varadhan rvarad...@jhmi.edu, erwin.kalvela...@gmail.com, 
hwborch...@googlemail.com
Cc: r-h...@stat.math.ethz.ch


 Dear Erwin, Ravi and Hans Werner,
 
 thanks a lot for your replies. I don't think I have access to Cplex 
 and therefore probably cannot try that out but will read about MIQP.
 
 I then played around a bit with Ravi's suggestions and also observed that 
 the linear cost function often found the exact solution, but not always - 
 in my tests the quadratic cost function version always found the correct 
 result. I'll still test this in a bit more detail.
 
 Would any of you know a good reference giving an overview of which 
 algorithms exist and which problems they can be used for?
 
 Thanks a lot again!
 
 Klaus
 
  Original Message 
  Date: Sat, 16 Jan 2010 23:42:08 -0500
  From: Ravi Varadhan rvarad...@jhmi.edu
  To: Erwin Kalvelagen erwin.kalvela...@gmail.com
  CC: r-h...@stat.math.ethz.ch
  Subject: Re: [R] optimization problem
 
  
  Interesting! 
  
  Now, if I change the cost matrix, D,  in the LSAP formulation slightly
  such that it is quadratic, it finds the best solution to your example:
  
  
  pMatrix.min <- function(A, B) {
  # finds the permutation P of A such that ||PA - B|| is minimum
  # in Frobenius norm
  # Uses the linear-sum assignment problem (LSAP) solver
  # in the clue package
  # Returns P%*%A and the permutation vector `pvec' such that
  # A[pvec, ] is the permutation of A closest to B
  n <- nrow(A)
  D <- matrix(NA, n, n)
  for (i in 1:n) {
  for (j in 1:n) {
  #   D[j, i] <- sqrt(sum((B[j, ] - A[i, ])^2))
  D[j, i] <- sum((B[j, ] - A[i, ])^2)  # this is better
  } }
  vec <- c(solve_LSAP(D))
  list(A = A[vec, ], pvec = vec)
  }
  
  > X <- pMatrix.min(A, B)
  > X$pvec
  [1] 6 1 3 2 4 5
  > dist(X$A, B)
  [1] 10.50172
  
  
  This should be fine.  Any counter-examples to this?!
  
  Best,
  Ravi.
  
  
  Ravi Varadhan, Ph.D.
  Assistant Professor,
  Division of Geriatric Medicine and Gerontology
  School of Medicine
  Johns Hopkins University
  
  Ph. (410) 502-2619
  email: rvarad...@jhmi.edu
  
  
  - Original Message -
  From: Erwin Kalvelagen erwin.kalvela...@gmail.com
  Date: Saturday, January 16, 2010 5:26 pm
  Subject: Re: [R] optimization problem
  To: Ravi Varadhan rvarad...@jhmi.edu
  Cc: r-h...@stat.math.ethz.ch
  
  
   I believe this is a very good approximation but not a 100% correct
   formulation of the original problem.
   
   E.g. for
   
   
    A <- matrix(c(
  -0.62668585718,-0.78785288063,-1.32462887089, 0.63935994044,
  -1.99878497801, 4.42667400292, 0.65534961645, 1.86914537669,
  -0.97229929674, 2.37404268115, 0.01223810011,-1.24956493590,
   0.92711756473,-1.51859351813, 3.76707054743,-2.30641777527,
  -1.98918429361,-0.43634856903,-3.65989556798, 0.00073904070,
  -1.44153837918, 1.42004180214,-1.81388322489,-1.92423923917,
  -1.46322482727,-1.81783134835,-2.59801302663, 2.03462135096,
   1.32546711550,-0.21519150247,-1.94319654347, 0.68773346973,
  -2.75094791807,-1.44814080195, 3.14197123843,-0.52521369103
   ),nrow=6)
   
    B <- diag(nrow=6)
   
    X <- pMatrix.min(A, B)
   
   dist(X$A, B)
   
   X$pvec
   
    bestpvec <- c(6,1,3,2,4,5)
   
   dist(A[bestpvec,],B)
   
   you get a norm of 10.58374 while the true optimal norm is 
 10.50172.  For
   small problems you often get the optimal solution, but the error 
   caused by
   linearizing the objective becomes larger if the problems are 
 larger. 
   But the
   approximation is actually very good.
   
   Erwin

Re: [R] optimization problem

2010-01-17 Thread Erwin Kalvelagen
No, I cannot find counterexamples, so that actually seems to give identical
solutions! That is fantastic.

I guess my homework is now to convince myself that we can go from this quadratic
objective

sum_i,j [ sum_k P[i,k]*G[k,j] - I[i,j] ]^2

to a linear one:

sum_i,j c[i,j]*P[i,j]

Thanks !

Erwin


Erwin Kalvelagen
Amsterdam Optimization Modeling Group
er...@amsterdamoptimization.com
http://amsterdamoptimization.com



On Sat, Jan 16, 2010 at 11:42 PM, Ravi Varadhan rvarad...@jhmi.edu wrote:


 Interesting!

 Now, if I change the cost matrix, D,  in the LSAP formulation slightly
 such that it is quadratic, it finds the best solution to your example:


 pMatrix.min <- function(A, B) {
 # finds the permutation P of A such that ||PA - B|| is minimum
 # in Frobenius norm
 # Uses the linear-sum assignment problem (LSAP) solver
 # in the clue package
 # Returns P%*%A and the permutation vector `pvec' such that
 # A[pvec, ] is the permutation of A closest to B
 n <- nrow(A)
 D <- matrix(NA, n, n)
 for (i in 1:n) {
 for (j in 1:n) {
 #   D[j, i] <- sqrt(sum((B[j, ] - A[i, ])^2))
 D[j, i] <- sum((B[j, ] - A[i, ])^2)  # this is better
 } }
 vec <- c(solve_LSAP(D))
 list(A = A[vec, ], pvec = vec)
 }

 > X <- pMatrix.min(A, B)
 > X$pvec
 [1] 6 1 3 2 4 5
 > dist(X$A, B)
 [1] 10.50172
 

 This should be fine.  Any counter-examples to this?!

 Best,
 Ravi.
 

 Ravi Varadhan, Ph.D.
 Assistant Professor,
 Division of Geriatric Medicine and Gerontology
 School of Medicine
 Johns Hopkins University

 Ph. (410) 502-2619
 email: rvarad...@jhmi.edu


 - Original Message -
 From: Erwin Kalvelagen erwin.kalvela...@gmail.com
 Date: Saturday, January 16, 2010 5:26 pm
 Subject: Re: [R] optimization problem
 To: Ravi Varadhan rvarad...@jhmi.edu
 Cc: r-h...@stat.math.ethz.ch


  I believe this is a very good approximation but not a 100% correct
  formulation of the original problem.
 
  E.g. for
 
 
  A <- matrix(c(
 -0.62668585718,-0.78785288063,-1.32462887089, 0.63935994044,
 -1.99878497801, 4.42667400292, 0.65534961645, 1.86914537669,
 -0.97229929674, 2.37404268115, 0.01223810011,-1.24956493590,
  0.92711756473,-1.51859351813, 3.76707054743,-2.30641777527,
 -1.98918429361,-0.43634856903,-3.65989556798, 0.00073904070,
 -1.44153837918, 1.42004180214,-1.81388322489,-1.92423923917,
 -1.46322482727,-1.81783134835,-2.59801302663, 2.03462135096,
  1.32546711550,-0.21519150247,-1.94319654347, 0.68773346973,
 -2.75094791807,-1.44814080195, 3.14197123843,-0.52521369103
  ),nrow=6)
 
  B <- diag(nrow=6)
 
  X <- pMatrix.min(A, B)
 
  dist(X$A, B)
 
  X$pvec
 
  bestpvec <- c(6,1,3,2,4,5)
 
  dist(A[bestpvec,],B)
 
  you get a norm of 10.58374 while the true optimal norm is 10.50172.  For
  small problems you often get the optimal solution, but the error
  caused by
  linearizing the objective becomes larger if the problems are larger.
  But the
  approximation is actually very good.
 
  Erwin
 
 
  
  Erwin Kalvelagen
  Amsterdam Optimization Modeling Group
  er...@amsterdamoptimization.com
 
  
 
 
  On Sat, Jan 16, 2010 at 2:01 PM, Ravi Varadhan rvarad...@jhmi.edu
 wrote:
 
  
   Yes, it can be formulated as an LSAP.  It works just fine.  I have
 checked
   it with several 3 x 3 examples.
  
   Here is another convincing example:
  
   n <- 50

   A <- matrix(rnorm(n*n), n, n)

   # Find P such that ||PA - C|| is minimum

   vec <- sample(1:n, n, rep=FALSE)  # a random permutation

   C <- A[vec, ]  # the target matrix is just a permutation of the original matrix

   B <- pMatrix.min(A, C)$A

   > dist(B, C)
   [1] 0

   > dist(A, C)
   [1] 69.60859
  
   Ravi.
   
  
   Ravi Varadhan, Ph.D.
   Assistant Professor,
   Division of Geriatric Medicine and Gerontology
   School of Medicine
   Johns Hopkins University
  
   Ph. (410) 502-2619
   email: rvarad...@jhmi.edu
  
  
   - Original Message -
   From: Erwin Kalvelagen erwin.kalvela...@gmail.com
   Date: Saturday, January 16, 2010 1:36 pm
   Subject: Re: [R] optimization problem
   To: Ravi Varadhan rvarad...@jhmi.edu
   Cc: r-h...@stat.math.ethz.ch
  
  
I also have doubts this can be formulated correctly as a linear
   assignment
problem. You may want to check the results with a small example.
   
Erwin
   

Erwin Kalvelagen
Amsterdam Optimization Modeling Group
er...@amsterdamoptimization.com
   

   
   
On Sat, Jan 16, 2010 at 9:59 AM, Ravi Varadhan rvarad

Re: [R] optimization problem

2010-01-17 Thread Hans W. Borchers
Ravi Varadhan rvaradhan at jhmi.edu writes:

 
 Dear Hans,
 
 I agree with your comments.  My intuition was that the quadratic
 form would be better behaved  than the radical form (less
 nonlinear!?).  So, I was hoping to see a change in behavior when
 the cost function was altered from a radical (i.e. sqrt) form to
 quadratic, but I was still surprised to actually see it.  I don't
 have a good explanation for this.  
 
 I came up with the idea of solving Klaus' problem as an LSAP
 problem.  I did not know that the LSAP approach to this and
 similar problems has already been considered by Nick Higham.
 I asked Nick last night about this problem thinking that he might
 know of more direct solutions to this problem (e.g. some kind of
 SVD or related factorizations). He said that I should take a look
 at the PhD thesis of one of his students.


Thanks for pointing out the thesis. As I understand, the (one-sided)
Procrustes problem is finding an orthogonal matrix minimizing

min! || A R - B || , t(R) R = I , ||.|| the Frobenius norm

and that there is a substantial theory behind it in numerical linear
algebra. The basic literature appears to be

Gower, J.C; Dijksterhuis, G.B., Procrustes Problems, Oxford
Statistical Science Series, Oxford University Press, 2004

The thesis itself will lead us astray as it treats the two-sided
Procrustes Problem, which is not our main concern.
The request on R-help only asked for permutation matrices P (from
left or right) minimizing

min! || P A - B ||

so I still think that a direct approach as you have suggested is
possible given this apparently much simpler problem.

Take your definition of D with quadratic terms:
The LSAP approach will find a permutation of the rows of B, say Bp,
such that the (linear!) sum over D_{i p(i)} is minimal, that is

sum over rows { sum (Bp - A)^2 } = sum over all entries { (Bp - A)^2 }

which is exactly the square of the Frobenius norm.
If you instead apply your first definition with square roots, it is

sum over rows { sqrt( sum (Bp - A)^2 ) }

and this cannot be rearranged into the Frobenius norm.
Therefore, the quadratic approach is indeed correct and will lead
to a true optimum, while the first variant will not.
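
A quick numerical check of this argument, using nothing beyond base R (the matrices
and the permutation below are arbitrary illustrations):

set.seed(1)
n <- 5
A <- matrix(rnorm(n * n), n, n)
B <- matrix(rnorm(n * n), n, n)
vec <- sample(n)                                    # an arbitrary assignment of rows of A to rows of B
quad  <- sum(sapply(1:n, function(i) sum((B[i, ] - A[vec[i], ])^2)))
frob2 <- sum((A[vec, ] - B)^2)                      # squared Frobenius norm of the permuted difference
isTRUE(all.equal(quad, frob2))                      # TRUE: the quadratic costs add up to it exactly
root <- sum(sapply(1:n, function(i) sqrt(sum((B[i, ] - A[vec[i], ])^2))))
isTRUE(all.equal(root, sqrt(frob2)))                # FALSE in general: the sqrt costs do not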

Hans Werner


 Take a look at Section 3.5 of the PhD thesis
 
 Parallel Solution of SVD-Related Problems, With Applications
 http://www.maths.manchester.ac.uk/~higham/misc/past-students.php
 
 This thesis proposed algorithms for the current problem and
 different versions of it under the heading of Procrustes-type
 problems.  I have to read this and get a better handle on it.  
 I would not be able to get to this for another two weeks. If you
 have any insights from this, in the meanwhile, do share with us.
 
 Best regards,
 Ravi.
 
 
 
 Ravi Varadhan, Ph.D.
 Assistant Professor,
 Division of Geriatric Medicine and Gerontology
 School of Medicine
 Johns Hopkins University
 
 Ph. (410) 502-2619
 email: rvaradhan at jhmi.edu


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-16 Thread Ravi Varadhan

Thanks, Erwin, for pointing out this mistake.

Here is the correct function for Frobenius norm.  

Klaus - Just replace the old `dist' with the following one. 

dist <- function(A, B) {
 # Frobenius norm of A - B
  n <- nrow(A)
  sqrt(sum((B - A)^2))
 }

Ravi.



Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: Erwin Kalvelagen erwin.kalvela...@gmail.com
Date: Saturday, January 16, 2010 2:35 am
Subject: Re: [R] optimization problem
To: r-h...@stat.math.ethz.ch


 Ravi Varadhan rvaradhan at jhmi.edu writes:
  dist <- function(A, B) { 
  # Frobenius norm of A - B   
  n <- nrow(A)
  sum(abs(B - A))
  }
  
 
 See http://mathworld.wolfram.com/FrobeniusNorm.html for a definition of the 
 Frobenius norm.
 
 
 Erwin
 
 
 Erwin Kalvelagen
 Amsterdam Optimization Modeling Group
 er...@amsterdamoptimization.com
 
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-16 Thread Ravi Varadhan

Klaus,

You also need to make a change in the main function, as shown below.

pMatrix.min <- function(A, B) {
# finds the permutation P of A such that ||PA - B|| is minimum
# in Frobenius norm
# Uses the linear-sum assignment problem (LSAP) solver
# in the clue package
# Returns P%*%A and the permutation vector `pvec' such that
# A[pvec, ] is the permutation of A closest to B
n <- nrow(A)
D <- matrix(NA, n, n)
for (i in 1:n) {
for (j in 1:n) {
#   D[j, i] <- sum(abs(B[j, ] - A[i, ]))
D[j, i] <- sqrt(sum((B[j, ] - A[i, ])^2)) # correct Frobenius norm
} }
vec <- c(solve_LSAP(D))
list(A = A[vec, ], pvec = vec)
}

Hope this helps,
Ravi.


Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: Ravi Varadhan rvarad...@jhmi.edu
Date: Saturday, January 16, 2010 10:00 am
Subject: Re: [R] optimization problem
To: Erwin Kalvelagen erwin.kalvela...@gmail.com
Cc: r-h...@stat.math.ethz.ch


 Thanks, Erwin, for pointing out this mistake.
 
 Here is the correct function for Frobenius norm.  
 
 Klaus - Just replace the old `dist' with the following one. 
 
 dist <- function(A, B) {
  # Frobenius norm of A - B
   n <- nrow(A)
   sqrt(sum((B - A)^2))
  }
 
 Ravi.
 
 
 
 Ravi Varadhan, Ph.D.
 Assistant Professor,
 Division of Geriatric Medicine and Gerontology
 School of Medicine
 Johns Hopkins University
 
 Ph. (410) 502-2619
 email: rvarad...@jhmi.edu
 
 
 - Original Message -
 From: Erwin Kalvelagen erwin.kalvela...@gmail.com
 Date: Saturday, January 16, 2010 2:35 am
 Subject: Re: [R] optimization problem
 To: r-h...@stat.math.ethz.ch
 
 
  Ravi Varadhan rvaradhan at jhmi.edu writes:
   dist <- function(A, B) { 
   # Frobenius norm of A - B 
   n <- nrow(A)
   sum(abs(B - A))
   }
   
  
  See http://mathworld.wolfram.com/FrobeniusNorm.html for a definition of the 
  Frobenius norm.
  
  
  Erwin
  
  
  Erwin Kalvelagen
  Amsterdam Optimization Modeling Group
  er...@amsterdamoptimization.com
  
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-16 Thread Erwin Kalvelagen
I also have doubts this can be formulated correctly as a linear assignment
problem. You may want to check the results with a small example.

Erwin


Erwin Kalvelagen
Amsterdam Optimization Modeling Group
er...@amsterdamoptimization.com
http://amsterdamoptimization.com



On Sat, Jan 16, 2010 at 9:59 AM, Ravi Varadhan rvarad...@jhmi.edu wrote:


 Thanks, Erwin, for pointing out this mistake.

 Here is the correct function for Frobenius norm.

 Klaus - Just replace the old `dist' with the following one.

 dist <- function(A, B) {
  # Frobenius norm of A - B
  n <- nrow(A)
  sqrt(sum((B - A)^2))
  }

 Ravi.

 

 Ravi Varadhan, Ph.D.
 Assistant Professor,
 Division of Geriatric Medicine and Gerontology
 School of Medicine
 Johns Hopkins University

 Ph. (410) 502-2619
 email: rvarad...@jhmi.edu


 - Original Message -
 From: Erwin Kalvelagen erwin.kalvela...@gmail.com
 Date: Saturday, January 16, 2010 2:35 am
 Subject: Re: [R] optimization problem
 To: r-h...@stat.math.ethz.ch


  Ravi Varadhan rvaradhan at jhmi.edu writes:
   dist <- function(A, B) {
   # Frobenius norm of A - B
   n <- nrow(A)
   sum(abs(B - A))
   }
  
 
  See http://mathworld.wolfram.com/FrobeniusNorm.html for a definition of the
  Frobenius norm.
 
 
  Erwin
 
  
  Erwin Kalvelagen
  Amsterdam Optimization Modeling Group
  er...@amsterdamoptimization.com
 
 



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-16 Thread Ravi Varadhan

Yes, it can be formulated as an LSAP.  It works just fine.  I have checked it 
with several 3 x 3 examples.

Here is another convincing example:

n <- 50

A <- matrix(rnorm(n*n), n, n)

# Find P such that ||PA - C|| is minimum

vec <- sample(1:n, n, rep=FALSE)  # a random permutation

C <- A[vec, ]  # the target matrix is just a permutation of the original matrix

B <- pMatrix.min(A, C)$A

> dist(B, C)
[1] 0

> dist(A, C)
[1] 69.60859

Ravi.


Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: Erwin Kalvelagen erwin.kalvela...@gmail.com
Date: Saturday, January 16, 2010 1:36 pm
Subject: Re: [R] optimization problem
To: Ravi Varadhan rvarad...@jhmi.edu
Cc: r-h...@stat.math.ethz.ch


 I also have doubts this can be formulated correctly as a linear assignment
 problem. You may want to check the results with a small example.
 
 Erwin
 
 
 Erwin Kalvelagen
 Amsterdam Optimization Modeling Group
 er...@amsterdamoptimization.com
 
 
 
 
 On Sat, Jan 16, 2010 at 9:59 AM, Ravi Varadhan rvarad...@jhmi.edu wrote:
 
 
  Thanks, Erwin, for pointing out this mistake.
 
  Here is the correct function for Frobenius norm.
 
  Klaus - Just replace the old `dist' with the following one.
 
  dist <- function(A, B) {
   # Frobenius norm of A - B
   n <- nrow(A)
   sqrt(sum((B - A)^2))
   }
 
  Ravi.
 
  
 
  Ravi Varadhan, Ph.D.
  Assistant Professor,
  Division of Geriatric Medicine and Gerontology
  School of Medicine
  Johns Hopkins University
 
  Ph. (410) 502-2619
  email: rvarad...@jhmi.edu
 
 
  - Original Message -
  From: Erwin Kalvelagen erwin.kalvela...@gmail.com
  Date: Saturday, January 16, 2010 2:35 am
  Subject: Re: [R] optimization problem
  To: r-h...@stat.math.ethz.ch
 
 
   Ravi Varadhan rvaradhan at jhmi.edu writes:
    dist <- function(A, B) {
    # Frobenius norm of A - B
    n <- nrow(A)
    sum(abs(B - A))
    }
   
  
   See http://mathworld.wolfram.com/FrobeniusNorm.html for a definition of the
   Frobenius norm.
  
  
   Erwin
  
   
   Erwin Kalvelagen
   Amsterdam Optimization Modeling Group
   er...@amsterdamoptimization.com
  
  
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-16 Thread Erwin Kalvelagen
I believe this is a very good approximation but not a 100% correct
formulation of the original problem.

E.g. for


A <- matrix(c(
   -0.62668585718,-0.78785288063,-1.32462887089, 0.63935994044,
   -1.99878497801, 4.42667400292, 0.65534961645, 1.86914537669,
   -0.97229929674, 2.37404268115, 0.01223810011,-1.24956493590,
0.92711756473,-1.51859351813, 3.76707054743,-2.30641777527,
   -1.98918429361,-0.43634856903,-3.65989556798, 0.00073904070,
   -1.44153837918, 1.42004180214,-1.81388322489,-1.92423923917,
   -1.46322482727,-1.81783134835,-2.59801302663, 2.03462135096,
1.32546711550,-0.21519150247,-1.94319654347, 0.68773346973,
   -2.75094791807,-1.44814080195, 3.14197123843,-0.52521369103
),nrow=6)

B <- diag(nrow=6)

X <- pMatrix.min(A, B)

dist(X$A, B)

X$pvec

bestpvec <- c(6,1,3,2,4,5)

dist(A[bestpvec,],B)

you get a norm of 10.58374 while the true optimal norm is 10.50172.  For
small problems you often get the optimal solution, but the error caused by
linearizing the objective becomes larger if the problems are larger. But the
approximation is actually very good.

Erwin



Erwin Kalvelagen
Amsterdam Optimization Modeling Group
er...@amsterdamoptimization.com
http://amsterdamoptimization.com



On Sat, Jan 16, 2010 at 2:01 PM, Ravi Varadhan rvarad...@jhmi.edu wrote:


 Yes, it can be formulated as an LSAP.  It works just fine.  I have checked
 it with several 3 x 3 examples.

 Here is another convincing example:

 n - 50

 A - matrix(rnorm(n*n), n, n)

  # Find P such that ||PA - C|| is minimum

 vec - sample(1:n, n, rep=FALSE)  # a random permutation

 C - A[vec, ]  # the target matrix is just a permutation of original matrix

 B - pMatrix.min(A, C)$A

  dist(B, C)
 [1] 0

  dist(A, C)
 [1] 69.60859

 Ravi.
 

 Ravi Varadhan, Ph.D.
 Assistant Professor,
 Division of Geriatric Medicine and Gerontology
 School of Medicine
 Johns Hopkins University

 Ph. (410) 502-2619
 email: rvarad...@jhmi.edu


 - Original Message -
 From: Erwin Kalvelagen erwin.kalvela...@gmail.com
 Date: Saturday, January 16, 2010 1:36 pm
 Subject: Re: [R] optimization problem
 To: Ravi Varadhan rvarad...@jhmi.edu
 Cc: r-h...@stat.math.ethz.ch


  I also have doubts this can be formulated correctly as a linear
 assignment
  problem. You may want to check the results with a small example.
 
  Erwin
 
  
  Erwin Kalvelagen
  Amsterdam Optimization Modeling Group
  er...@amsterdamoptimization.com
 
  
 
 
  On Sat, Jan 16, 2010 at 9:59 AM, Ravi Varadhan rvarad...@jhmi.edu
 wrote:
 
  
   Thanks, Erwin, for pointing out this mistake.
  
   Here is the correct function for Frobenius norm.
  
   Klaus - Just replace the old `dist' with the following one.
  
    dist <- function(A, B) {
     # Frobenius norm of A - B
     n <- nrow(A)
     sqrt(sum((B - A)^2))
     }
  
   Ravi.
  
   
  
   Ravi Varadhan, Ph.D.
   Assistant Professor,
   Division of Geriatric Medicine and Gerontology
   School of Medicine
   Johns Hopkins University
  
   Ph. (410) 502-2619
   email: rvarad...@jhmi.edu
  
  
   - Original Message -
   From: Erwin Kalvelagen erwin.kalvela...@gmail.com
   Date: Saturday, January 16, 2010 2:35 am
   Subject: Re: [R] optimization problem
   To: r-h...@stat.math.ethz.ch
  
  
Ravi Varadhan rvaradhan at jhmi.edu writes:
  dist <- function(A, B) {
  # Frobenius norm of A - B
  n <- nrow(A)
  sum(abs(B - A))
  }

   
 See http://mathworld.wolfram.com/FrobeniusNorm.html for a definition of the
 Frobenius norm.
   
   
Erwin
   

Erwin Kalvelagen
Amsterdam Optimization Modeling Group
er...@amsterdamoptimization.com
   
   
  



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-16 Thread Ravi Varadhan

Interesting! 

Now, if I change the cost matrix, D,  in the LSAP formulation slightly such 
that it is quadratic, it finds the best solution to your example:


pMatrix.min <- function(A, B) {
# finds the permutation P of A such that ||PA - B|| is minimum
# in Frobenius norm
# Uses the linear-sum assignment problem (LSAP) solver
# in the clue package
# Returns P%*%A and the permutation vector `pvec' such that
# A[pvec, ] is the permutation of A closest to B
n <- nrow(A)
D <- matrix(NA, n, n)
for (i in 1:n) {
for (j in 1:n) {
#   D[j, i] <- sqrt(sum((B[j, ] - A[i, ])^2))
D[j, i] <- sum((B[j, ] - A[i, ])^2)  # this is better
} }
vec <- c(solve_LSAP(D))
list(A = A[vec, ], pvec = vec)
}

> X <- pMatrix.min(A, B)
> X$pvec
[1] 6 1 3 2 4 5
> dist(X$A, B)
[1] 10.50172


This should be fine.  Any counter-examples to this?!

Best,
Ravi.


Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: Erwin Kalvelagen erwin.kalvela...@gmail.com
Date: Saturday, January 16, 2010 5:26 pm
Subject: Re: [R] optimization problem
To: Ravi Varadhan rvarad...@jhmi.edu
Cc: r-h...@stat.math.ethz.ch


 I believe this is a very good approximation but not a 100% correct
 formulation of the original problem.
 
 E.g. for
 
 
 A <- matrix(c(
-0.62668585718,-0.78785288063,-1.32462887089, 0.63935994044,
-1.99878497801, 4.42667400292, 0.65534961645, 1.86914537669,
-0.97229929674, 2.37404268115, 0.01223810011,-1.24956493590,
 0.92711756473,-1.51859351813, 3.76707054743,-2.30641777527,
-1.98918429361,-0.43634856903,-3.65989556798, 0.00073904070,
-1.44153837918, 1.42004180214,-1.81388322489,-1.92423923917,
-1.46322482727,-1.81783134835,-2.59801302663, 2.03462135096,
 1.32546711550,-0.21519150247,-1.94319654347, 0.68773346973,
-2.75094791807,-1.44814080195, 3.14197123843,-0.52521369103
 ),nrow=6)
 
 B <- diag(nrow=6)
 
 X <- pMatrix.min(A, B)
 
 dist(X$A, B)
 
 X$pvec
 
 bestpvec <- c(6,1,3,2,4,5)
 
 dist(A[bestpvec,],B)
 
 you get a norm of 10.58374 while the true optimal norm is 10.50172.  For
 small problems you often get the optimal solution, but the error 
 caused by
 linearizing the objective becomes larger if the problems are larger. 
 But the
 approximation is actually very good.
 
 Erwin
 
 
 
 Erwin Kalvelagen
 Amsterdam Optimization Modeling Group
 er...@amsterdamoptimization.com
 
 
 
 
 On Sat, Jan 16, 2010 at 2:01 PM, Ravi Varadhan rvarad...@jhmi.edu wrote:
 
 
  Yes, it can be formulated as an LSAP.  It works just fine.  I have checked
  it with several 3 x 3 examples.
 
  Here is another convincing example:
 
  n <- 50

  A <- matrix(rnorm(n*n), n, n)

  # Find P such that ||PA - C|| is minimum

  vec <- sample(1:n, n, rep=FALSE)  # a random permutation

  C <- A[vec, ]  # the target matrix is just a permutation of the original matrix

  B <- pMatrix.min(A, C)$A

  > dist(B, C)
  [1] 0

  > dist(A, C)
  [1] 69.60859
 
  Ravi.
  
 
  Ravi Varadhan, Ph.D.
  Assistant Professor,
  Division of Geriatric Medicine and Gerontology
  School of Medicine
  Johns Hopkins University
 
  Ph. (410) 502-2619
  email: rvarad...@jhmi.edu
 
 
  - Original Message -
  From: Erwin Kalvelagen erwin.kalvela...@gmail.com
  Date: Saturday, January 16, 2010 1:36 pm
  Subject: Re: [R] optimization problem
  To: Ravi Varadhan rvarad...@jhmi.edu
  Cc: r-h...@stat.math.ethz.ch
 
 
   I also have doubts this can be formulated correctly as a linear
  assignment
   problem. You may want to check the results with a small example.
  
   Erwin
  
   
   Erwin Kalvelagen
   Amsterdam Optimization Modeling Group
   er...@amsterdamoptimization.com
  
   
  
  
   On Sat, Jan 16, 2010 at 9:59 AM, Ravi Varadhan rvarad...@jhmi.edu
  wrote:
  
   
Thanks, Erwin, for pointing out this mistake.
   
Here is the correct function for Frobenius norm.
   
Klaus - Just replace the old `dist' with the following one.
   
 dist <- function(A, B) {
  # Frobenius norm of A - B
  n <- nrow(A)
  sqrt(sum((B - A)^2))
  }
   
Ravi.
   

   
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
   
Ph. (410) 502-2619
email: rvarad...@jhmi.edu
   
   
- Original Message -
From: Erwin Kalvelagen erwin.kalvela...@gmail.com
Date: Saturday, January 16, 2010 2

[R] optimization problem

2010-01-15 Thread klausch
Dear R-experts,

this is not a direct R-problem but I hope you can help me anyway.

I would like to minimize || PG-I || over P, where P is a p x p permutation 
matrix (obtained by permuting the rows and/or columns of the identity matrix), 
G is a given p x p matrix with full rank and I the identity matrix.  ||.|| is 
the Frobenius norm.

Does anyone know an algorithm to solve such a problem? And if yes, is it 
implemented in R?

Currently I minimize it by going through all possible permutations - but this 
is not feasible for the higher-dimensional problems I would like to consider too.

Any help is appreciated!

Thanks in advance,

Klaus


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-15 Thread Erwin Kalvelagen

 klausch at gmx.de writes:

 
 Dear R-experts,
 
 this is not a direct R-problem but I hope you can help me anyway.
 
 I would like to minimize || PG-I || over P, where P is a p x p permutation 
matrix (obtained by permuting the rows
 and/or columns of the identity matrix), G is a given p x p matrix with full 
rank and I the identity matrix. 
 ||.|| is the frobenius norm.
 
 Does anyone know an algorithm to solve such a problem? And if yes, is it 
implemented in R?
 
 Currently I minimize it by going through all possible permutations - but this 
is not feasible for higher
 dimensional problems as I would like to consider too.
 
 Any help is appreciated!
 
 Thanks in advance,
 
 Klaus
 




This could be modeled as a MIQP problem:

min sum((i,j),sqr(V(i,j)))
v(i,j) = sum(k, P(i,k)*G(k,j)) - Id(i,j);
sum(i, P(i,j)) = 1
sum(j, P(i,j)) = 1
P(i,j) in {0,1}

If you have access to Cplex then http://cran.r-project.org/web/packages/Rcplex/Rcplex.pdf 
can help.

If you can use a different norm it may be possible to use linear MIP technology 
allowing a wider range of solvers.

This is combinatorial, so for p > 20 (say) it may become slow (this also depends 
on the data).

Erwin


Erwin Kalvelagen
Amsterdam Optimization Modeling Group
er...@amsterdamoptimization.com
http://amsterdamoptimization.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-15 Thread Ravi Varadhan
Hi Klaus,

This problem can be cast as a linear sum assignment problem (LSAP), and solved 
using the `solve_LSAP' function in the clue package.  I show how to solve a 
slightly more general problem of finding a permutation of matrix A (n x n) that 
minimizes the Frobenius distance to a given matrix B (n x n).  In your case, B 
= I.  I ran this for n up to 100.  It seems to be quite fast.  

Here is how you do this:

library(clue)   # for solve_LSAP

dist <- function(A, B) { 
# Frobenius norm of A - B   
n <- nrow(A)
sum(abs(B - A))
}

pMatrix.min <- function(A, B) {
# finds the permutation P of A such that ||PA - B|| is minimum
# in Frobenius norm
# Uses the linear-sum assignment problem (LSAP) solver
# in the clue package
# Returns P%*%A and the permutation vector `pvec' such that
# A[pvec, ] is the permutation of A closest to B
n <- nrow(A)
D <- matrix(NA, n, n)
for (i in 1:n) {
for (j in 1:n) {
D[j, i] <- sum(abs(B[j, ] - A[i, ]))
} }
vec <- c(solve_LSAP(D))
list(A = A[vec, ], pvec = vec)
}

#set.seed(123)
# Find P such that ||PA - I|| is minimum
n <- 100
A <- matrix(rnorm(n*n), n, n)
dist(A, diag(n))
B <- pMatrix.min(A, diag(n))$A  # B = P%*%A
dist(B, diag(n))

# Find P such that ||PA - C|| is minimum
C <- matrix(rnorm(n*n), n, n)
B <- pMatrix.min(A, C)$A
dist(A, C)
dist(B, C)

Let me know how this works for you.

Best,
Ravi.  



Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: klau...@gmx.de
Date: Friday, January 15, 2010 9:53 am
Subject: [R] optimization problem
To: r-help@r-project.org


 Dear R-experts,
  
  this is not a direct R-problem but I hope you can help me anyway.
  
  I would like to minimize || PG-I || over P, where P is a p x p 
 permutation matrix (obtained by permuting the rows and/or columns of 
 the identity matrix), G is a given p x p matrix with full rank and I 
 the identity matrix.  ||.|| is the frobenius norm.
  
  Does anyone know an algorithm to solve such a problem? And if yes, is 
 it implemented in R?
  
  Currently I minimize it by going through all possible permutations - 
 but this is not feasible for higher dimensional problems as I would 
 like to consider too.
  
  Any help is appreciated!
  
  Thanks in advance,
  
  Klaus
  
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2010-01-15 Thread Erwin Kalvelagen
Ravi Varadhan rvaradhan at jhmi.edu writes:
 dist <- function(A, B) { 
 # Frobenius norm of A - B 
   n <- nrow(A)
   sum(abs(B - A))
 }
 

See http://mathworld.wolfram.com/FrobeniusNorm.html for a definition of the 
Frobenius norm.


Erwin


Erwin Kalvelagen
Amsterdam Optimization Modeling Group
er...@amsterdamoptimization.com
http://amsterdamoptimization.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] optimization problem with constraints..

2009-10-18 Thread Prof. John C Nash
Apologies if this shows up a second time with uninformative header 
(apparently it got filtered, but ...), as I forgot to replace the 
subject line.


As a first try, use a bounds constrained method (L-BFGS-B or one from 
the r-forge Optimizer project 
http://r-forge.r-project.org/R/?group_id=395) and then add a penalty or 
barrier function to your objective function to take care of the

x1+x2 < 1  (the other end is implicit in the lower bounds on x1 and x2).

e.g.,   - const * log(1-x1-x2)

You should provide a feasible starting point. const scales the penalty.
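
A minimal sketch of that combination (the quadratic f() below is only a hypothetical
stand-in for the real objective; `const` and the starting point are likewise just
illustrative):

f <- function(x) sum((x - c(0.6, 0.7, 1, -1, 2, 3))^2)   # placeholder objective
fb <- function(x, const = 1e-3) {
  if (x[1] + x[2] >= 1) return(1e10)         # keep the value finite for L-BFGS-B
  f(x) - const * log(1 - x[1] - x[2])        # barrier blows up as x1 + x2 -> 1
}
x0 <- c(0.3, 0.3, 0, 0, 1, 1)                # strictly feasible start
optim(x0, fb, method = "L-BFGS-B",
      lower = c(1e-8, 1e-8, -Inf, -Inf, 1e-8, 1e-8),
      upper = c(1 - 1e-8, 1 - 1e-8, Inf, Inf, Inf, Inf))$par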

Cheers, JN



 Message: 27
 Date: Sat, 17 Oct 2009 13:50:10 -0700 (PDT)
 From: kathie kathryn.lord2...@gmail.com
 Subject: [R]  optimization problem with constraints...
 To: r-help@r-project.org
 Message-ID: 25941686.p...@talk.nabble.com
 Content-Type: text/plain; charset=us-ascii


 Dear R users,
 I need some advises on how to use R to optimize a nonlinear function with
 the following constraints.

 f(x1,x2,x3,x4,x5,x6)
 s.t
 0 < x1 < 1
 0 < x2 < 1
 0 < x1+x2 < 1
 -inf < x3 < inf
 -inf < x4 < inf
 0 < x5 < inf
 0 < x6 < inf

 Is there any built-in function or something for these constraint??

 Any suggestion will be greatly appreciated.

 Regards,

 Kathryn Lord -- View this message in context: 
http://www.nabble.com/optimization-problem-with-constraints...-tp25941686p25941686.html 
Sent from the R help mailing list archive at Nabble.com.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] optimization problem with constraints...

2009-10-17 Thread kathie

Dear R users, 

I need some advises on how to use R to optimize a nonlinear function with
the following constraints.

f(x1,x2,x3,x4,x5,x6) 

s.t 

0 < x1 < 1
0 < x2 < 1
0 < x1+x2 < 1
-inf < x3 < inf
-inf < x4 < inf
0 < x5 < inf
0 < x6 < inf

Is there any built-in function or something for these constraints?

Any suggestion will be greatly appreciated.

Regards,

Kathryn Lord 
-- 
View this message in context: 
http://www.nabble.com/optimization-problem-with-constraints...-tp25941686p25941686.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2008-12-02 Thread Mike Prager
Hans W. Borchers [EMAIL PROTECTED] wrote:

 Why not use one of the global optimizers in R, for instance 'DEoptim', and
 then apply optim() to find the last six decimals? I am relatively sure that
 the Differential Evolution operator has a better chance to come near a
 global optimum than a loop over optim(), though 'DEoptim' may be a bit slow
 (only for quite large numbers of parameters).
 
Thanks for the reference. I will see if 'DEoptim' might be
useful in future problems.

HWB asked, why not use 'DEoptim' rather than a loop? Perhaps
that's a rhetorical question, but I'll answer it anyway, in the
context of the specific problem I am solving. (1) I did not know
that 'DEoptim' existed. (2) After starting a problem with 'nls',
I changed its structure slightly, which meant a change to
'optim'. Because the two functions have totally different
syntaxes, it was necessary to rewrite the entire script and its
supporting functions. Adding a loop was much simpler than
looking for yet *another* optimizer in R. (3) In the current
problem, perhaps 97 of 100 runs of 'optim' come to the same
solution (the best one found). That suggests that this is not a
terribly difficult problem and that there is little to be gained
by employing a different approach.

SOMEONE once posted about an R function that masked the syntax
differences among (at least some) R optimizers. That surely
would lower the barrier to switching among them. I've lost that
post, and my search has not turned it up. If that poster is
reading this, would you please respond with the information?

ALSO, is anyone aware of any document comparing the various
optimizers available in R (even in core R)?  What are the
different intended applications, and when would each be
preferred? There is some helpful material in MASS 4, but I am
hoping for something more recent and detailed.


-- 
Mike Prager, NOAA, Beaufort, NC
* Opinions expressed are personal and not represented otherwise.
* Any use of tradenames does not constitute a NOAA endorsement.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem (optim vs. nlminb)

2008-12-02 Thread Mike Prager
In case anyone is still reading this thread, I want to add this:
In a current problem (a data-shy five-parameter nonlinear
optimization), I found nlminb markedly more reliable than
optim with method L-BFGS-B. In reviewing the fit I made, I
found that optim only came close to its own minimum in about
13 of 120 trials (same data, different starting values). I
previously said 97, but I was clearly looking at the wrong data!
In contrast, nlminb came to that best answer in about 92
trials out of 120.

The original poster might consider nlminb instead of optim.
Because nonlinear optimization is sensitive to starting values,
I would still advise solving the problem a number of times to
see if a clear minimum solution emerges.
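
The switch is mostly a matter of argument names; a minimal sketch with a toy
objective (hypothetical) in place of the real one:

f  <- function(p) sum((p - c(1, 2, 3))^2)      # placeholder objective
p0 <- c(-0.1, -0.1, 0.01)
optim(p0, f, method = "L-BFGS-B")$par          # optim returns $par and $value
nlminb(start = p0, objective = f)$par          # nlminb returns $par and $objective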

-- 
Mike Prager, NOAA, Beaufort, NC
* Opinions expressed are personal and not represented otherwise.
* Any use of tradenames does not constitute a NOAA endorsement.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2008-12-01 Thread Mike Prager
tedzzx [EMAIL PROTECTED] wrote:

 
 If I want to find the global minimum, how should I change my code?

I sometimes use optim() within a loop, with random starting
values for each iteration of the loop. You can save the
objective function value each time and pick the best solution.
Last time I did that, I ran it 100 times.

That procedure does not guarantee finding the global minimum.
However, it does make it *more likely* to find the global minimum
*within the range of your starting values*.

Often, I make a boxplot of the various results. If they don't
show a strong mode, there is a data or model problem that needs
to be addressed. For example, the solution may be poorly defined
by the data, or the model may be specified with confounded
parameters.
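
A minimal sketch of that multistart loop, with a toy objective (hypothetical) that
has several local minima so the boxplot has something to show:

f <- function(p) sum(p^2) + 2 * sin(5 * p[1])            # placeholder objective
runs <- replicate(100, {
  p0  <- runif(3, min = -2, max = 2)                     # random starting values
  fit <- optim(p0, f)
  c(value = fit$value, fit$par)
})
best <- runs[, which.min(runs["value", ])]               # keep the best solution found
best
boxplot(runs["value", ], main = "objective value over 100 random starts")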

-- 
Mike Prager, NOAA, Beaufort, NC
* Opinions expressed are personal and not represented otherwise.
* Any use of tradenames does not constitute a NOAA endorsement.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2008-12-01 Thread Hans W. Borchers

Why not use one of the global optimizers in R, for instance 'DEoptim', and
then apply optim() to find the last six decimals? I am relatively sure that
the Differential Evolution operator has a better chance to come near a
global optimum than a loop over optim(), though 'DEoptim' may be a bit slow
(only for quite large numbers of parameters).
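
A minimal sketch of that two-stage idea, assuming the DEoptim package is installed
and using a toy objective (hypothetical) in place of the poster's:

library(DEoptim)
f <- function(p) sum(p^2) + 2 * sin(5 * p[1])             # placeholder objective
rough  <- DEoptim(f, lower = rep(-2, 3), upper = rep(2, 3),
                  control = DEoptim.control(trace = FALSE))
polish <- optim(rough$optim$bestmem, f, method = "BFGS")  # refine the last decimals
polish$par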

Regards,  Hans Werner



Mike Prager wrote:
 
 tedzzx [EMAIL PROTECTED] wrote:
 
 
 If I want to find out the globle minia, how shoul I change my code?
 
 I sometimes use optim() within a loop, with random starting
 values for each iteration of the loop. You can save the
 objective function value each time and pick the best solution.
 Last time I did that, I ran it 100 times.
 
 That procedure does not guarantee finding the global minimum.
  However, it does make it *more likely* to find the global minimum
 *within the range of your starting values*.
 
 Often, I make a boxplot of the various results. If they don't
 show a strong mode, there is a data or model problem that needs
 to be addressed. For example, the solution may be poorly defined
 by the data, or the model may be specified with confounded
 parameters.
 
 -- 
 Mike Prager, NOAA, Beaufort, NC
 * Opinions expressed are personal and not represented otherwise.
 * Any use of tradenames does not constitute a NOAA endorsement.
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 
 

-- 
View this message in context: 
http://www.nabble.com/optimization-problem-tp20730032p20784256.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optimization problem

2008-11-29 Thread Ben Bolker
tedzzx zengzhenxing at gmail.com writes:

 
 
 Hi, all
 
 I am facing an optimization problem. I am using the function optim(par,fun),
 but I find that every time I give different original guess parameter, I can
 get different result. For example
 I have a data frame named data:
 head(data)
price s x t
 1 1678.0 12817 11200 0.1495902
 2 1675.5 12817 11200 0.1495902
 3 1678.0 12817 11200 0.1495902
 4 1688.0 12817 11200 0.1495902
 5 1677.0 12817 11200 0.1495902
 6 1678.5 12817 11200 0.1495902
 …….
 …….
 …….
   
  Can you post the whole data set somewhere (on
the web if it's big, here if it's, say, < 100 lines)?
That way your problem would be reproducible ...

  cheers
Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] optimization problem

2008-11-28 Thread tedzzx

Hi, all

I am facing an optimization problem. I am using the function optim(par,fun),
but I find that every time I give a different initial guess for the parameters, I 
get a different result. For example
I have a data frame named data:
head(data)
   price s x t
1 1678.0 12817 11200 0.1495902
2 1675.5 12817 11200 0.1495902
3 1678.0 12817 11200 0.1495902
4 1688.0 12817 11200 0.1495902
5 1677.0 12817 11200 0.1495902
6 1678.5 12817 11200 0.1495902
…….
…….
…….

f <- function(p, ...){
v=exp(p[1]+p[2]*(x/s)+p[3]*(x/s)^2)
d1=(log(s/x)+(v^2)*t/2)/(v*sqrt(t))
d2=(log(s/x)-(v^2)*t/2)/(v*sqrt(t))
sum((price-(s*pnorm(d1)-x*pnorm(d2)))^2)
}
p=c(-0.1,-0.1,0.01)
optim(par=p, f) # use the default algorithm 

$par
[1] -1.2669459  0.4840307 -0.6607008

$value
[1] 14534.56

$counts
function gradient 
 154   NA 

$convergence
[1] 0

If I try a different initial guess, I get a different result:
 p=c(-1,-0.1,0.5)
 optim(par=p,f)
$par
[1] -0.7784273 -0.4776970 -0.1877029

$value
[1] 14292.19

$counts
function gradient 
  76   NA 

$convergence
[1] 0

$message
NULL

Other initial estimates also show me different results.

Why?

Thanks.




Re: [R] optimization problem

2008-11-28 Thread axionator
Hi,
I guess your function has several local minima, and depending on where
you start, i.e. on your initial values, you end up in a different minimum.
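
A tiny illustration of that effect (a toy objective, not the poster's function):

## Toy objective with two local minima: optim() converges to whichever
## basin of attraction the starting value lies in.
g <- function(p) (p^2 - 1)^2 + 0.3 * p   # minima near p = +0.96 and p = -1.04
optim( 0.9, g, method = "BFGS")$par      # ends up near +0.96
optim(-0.9, g, method = "BFGS")$par      # ends up near -1.04 (the global minimum)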

HTH
Armin



Re: [R] optimization problem

2008-11-28 Thread John C Nash

tedzzx asked about apparent multiple optima. See below.

Users should be aware that optim() does local optimization. The default 
Nelder-Mead approach is fairly robust at finding such a local minimum, though 
it may halt if it is on a flat area of the loss function surface. I would 
recommend trying one of the BFGS codes (they use somewhat different approaches) 
and look at the gradient information. With only 3 parameters, these should work 
fine. There is also another package (I forget the name -- someone?) that does 
full Newton with Hessian computed. That may be worth using to get more complete 
information about your problem.
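
For instance, that advice might be followed along these lines (a sketch only;
f is the objective from the original post, p0 some starting vector, and the
numDeriv package is just one convenient way to get a numerical gradient):

## Try a quasi-Newton method and inspect the gradient at the reported solution.
library(numDeriv)                           # numerical derivatives
fit_nm   <- optim(p0, f)                    # default Nelder-Mead
fit_bfgs <- optim(p0, f, method = "BFGS")   # quasi-Newton (BFGS)
c(NelderMead = fit_nm$value, BFGS = fit_bfgs$value)
grad(f, fit_bfgs$par)   # should be near zero at a genuine local minimum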

tedzzx: If you send me the data off-list (and maybe include the function again, to save me digging it up), I'll try to provide more information. 


John Nash






Date: Thu, 27 Nov 2008 23:30:56 -0800 (PST)
From: tedzzx [EMAIL PROTECTED]
Subject: [R]  optimization problem
To: r-help@r-project.org
Message-ID: [EMAIL PROTECTED]
Content-Type: text/plain; charset=UTF-8



 I am facing an optimization problem. I am using the function optim(par,fun),
 but I find that every time I give different original guess parameter, I can
 get different result. For example
 I have a data frame named data:
 head(data)
price s x t
 1 1678.0 12817 11200 0.1495902
 2 1675.5 12817 11200 0.1495902
 3 1678.0 12817 11200 0.1495902
 4 1688.0 12817 11200 0.1495902
 5 1677.0 12817 11200 0.1495902
 6 1678.5 12817 11200 0.1495902
 …….
 f <- function(p,...){
v=exp(p[1]+p[2]*(x/s)+p[3]*(x/s)^2)
d1=(log(s/x)+(v^2)*t/2)/(v*sqrt(t))
d2=(log(s/x)-(v^2)*t/2)/(v*sqrt(t))
sum((price-(s*pnorm(d1)-x*pnorm(d2)))^2)
 }
 p=c(-0.1,-0.1,0.01)
 optim(par=p,f) # use the default algorithm



 I have other different original estimes, It also show me different result.

 Why?

Thanks.



Re: [R] optimization problem

2008-11-28 Thread tedzzx

If I want to find the global minimum, how should I change my code?
Thanks a lot

Armin Meier wrote:
 
 Hi,
 I guess your function has several local minima and depending on where
 you start, i.e. your initial variables, you get into another mimimum.
 
 HTH
 Armin
 


[R] optimization problem

2008-11-24 Thread Jiang Peng

 Dear list,
  hi !
  I am an R beginner and I have a function to optimize.

  alpha = argmin{ f(x,alpha) }

  I want alpha to be in [0,1].  Is there any function that can work?

  I use nlm() but I can't fix the domain of alpha.

  thanks in advance
___

Jiang Peng, Ph.D. Candidate
Department of Mathematics 
Antai college of Economics and Management
Shanghai Jiao Tong University
Address: Room 127, #3 Building, Xuhui Campus
E-mail: jp021 at sjtu.edu.cn
 jp021 at 126.com
MSN: jp021 at hotmail.com
iChat: mathfrog at mac.com
Mobile: +86-138 168 780 95



Re: [R] optimization problem

2008-11-24 Thread Jagat.K.Sheth
On the help page for nlm (type ?nlm) check out the 'See Also' section.
It mentions other functions such as 'optim' and 'nlminb' which can do
constrained optimization.
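
For a single parameter restricted to [0,1], a box-constrained call might look
roughly like this (a sketch; f and x stand for the poster's function and data,
and 0.5 is an arbitrary starting value):

## Box-constrained minimisation of f over alpha in [0, 1].
fit1 <- optim(0.5, function(alpha) f(x, alpha),
              method = "L-BFGS-B", lower = 0, upper = 1)
fit2 <- nlminb(0.5, function(alpha) f(x, alpha), lower = 0, upper = 1)
c(optim = fit1$par, nlminb = fit2$par)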

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Jiang Peng
Sent: Monday, November 24, 2008 5:28 AM
To: r-help@r-project.org
Subject: [R] optimization problem

  Dear list,
   hi !
   I am a R beginner and I have  a function to optimize .

   alpha = argmin{ f(x,alpha) }

   I want alpha to be in [0,1].  Is there any function that can work?

   I use nlm() but i can't fix the domain of alpha.

   thanks in advance
___

Jiang Peng, Ph.D. Candidate
Department of Mathematics 
Antai college of Economics and Management Shanghai Jiao Tong University
Address: Room 127, #3 Building, Xuhui Campus
E-mail: jp021 at sjtu.edu.cn
  jp021 at 126.com
MSN: jp021 at hotmail.com
iChat: mathfrog at mac.com
Mobile: +86-138 168 780 95



Re: [R] optimization problem

2008-11-24 Thread Ravi Varadhan
Is this the actual problem that you are trying to optimize, i.e. optimize a
function with respect to a scalar unknown parameter?

If so, just use optimize and specify the search interval for the algorithm
as [0,1].  
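
Something along these lines (a sketch; f and x are the poster's function and data):

## One-dimensional minimisation of f over alpha in [0, 1].
opt <- optimize(function(alpha) f(x, alpha), interval = c(0, 1))
opt$minimum     # the minimising value of alpha
opt$objective   # the value of f there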

Ravi.   



---

Ravi Varadhan, Ph.D.

Assistant Professor, The Center on Aging and Health

Division of Geriatric Medicine and Gerontology 

Johns Hopkins University

Ph: (410) 502-2619

Fax: (410) 614-9625

Email: [EMAIL PROTECTED]

Webpage:  http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html

 





-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Jiang Peng
Sent: Monday, November 24, 2008 6:28 AM
To: r-help@r-project.org
Subject: [R] optimization problem

  Dear list,
   hi !
   I am a R beginner and I have  a function to optimize .

   alpha = argmin{ f(x,alpha) }

   I want alpha to be in [0,1].  Is there any function that can work?

   I use nlm() but i can't fix the domain of alpha.

   thanks in advance
___

Jiang Peng, Ph.D. Candidate
Department of Mathematics 
Antai college of Economics and Management Shanghai Jiao Tong University
Address: Room 127, #3 Building, Xuhui Campus
E-mail: jp021 at sjtu.edu.cn
  jp021 at 126.com
MSN: jp021 at hotmail.com
iChat: mathfrog at mac.com
Mobile: +86-138 168 780 95



[R] Optimization problem with constraints

2008-06-02 Thread Matthias Graser
I'm trying to do an optimization of the following function

Zwischenwert <- function (x)
{
  lambda <- x[1];
  mu     <- x[2];
  gammal <- x[3];
  mud    <- x[4];
  gammad <- x[5];
  Mittelwert <- 0;
  for(t in 0:(T-1))
  {
    for(i in 0:(n-1))
    {
      for(j in i:(n-1))
      {
        Mittelwert <- Mittelwert + (
          phi[i+1,j+1,t+1]*((j-i)*log(lambda)-log(factorial(j-i))-lambda));
      };
      if(t>0){Mittelwert <- Mittelwert +
        (phisum[i+1,t+1]*(-0.5*log(2*pi)-0.5*log(t*gammal+i*gammad)-((Y[t+1]-i*mud-t*mu-0.5*t*gammal+lambda*exp(mud+0.5*gammad)-t*lambda)^2
        )/(2*(t*gammal+i*gammad))));};
    };
  };
  Mittelwert   # return the accumulated value (presumably what optim() is meant to use)
};


and it's gradient

ZWGrad <- function(x)
{
  lambda <- x[1];
  mu     <- x[2];
  gammal <- x[3];
  mud    <- x[4];
  gammad <- x[5];
  dlambda <- 0;
  dmu     <- 0;
  dgamma  <- 0;
  dmud    <- 0;
  dgammad <- 0;
  for (t in 0:(T-1))
  {
    for (i in 0:(n-1))
    {
      for (j in i:(n-1))
      {
        dlambda <- dlambda - phi[i+1,j+1,t+1]*(-1+(j-i)/lambda);
      };
    };
  };
  for (t in 1:(T-1))
  {
    for (i in 0:(n-1))
    {
      dlambda <- dlambda + phisum[i+1,t+1]*(t-t*lambda*exp(mud+0.5*gammad))*(Y[t+1]-i*mud-mu*t-0.5*gammal*t+lambda*t*exp(mud+0.5*gammad)-lambda*t)/(2*(gammal*t+gammad*i));
      dmu <- dmu + phisum[i+1,t+1]*t*(Y[t+1]-i*mud-t*mu-0.5*gammal*t+lambda*t*exp(mud+0.5*gammad)-lambda*t)/(gammal*t+gammad*i);
      dgamma <- dgamma + phisum[i+1,t+1]*(
        t*(0.5*i*gammad+Y[t+1]-i*mud-t*mu+lambda*t*exp(mud+0.5*gammad)-lambda*t)^2/(2*(gammal*t+gammad*i)^2)-t/(2*(gammal*t+gammad*i))-0.125*t);
      dmud <- dmud + phisum[i+1,t+1]*(i-lambda*t*exp(mud+0.5*gammad))*(Y[t+1]-i*mud-t*mu-0.5*gammal*t+lambda*t*exp(mud+0.5*gammad)-lambda*t)/(gammal*t+gammad*i);
      dgammad <- dgammad + phisum[i+1,t+1]*(Y[t+1]-i*mud-t*mu-0.5*gammal*t+lambda*t*exp(mud+0.5*gammad)-lambda*t)*(i*(Y[t+1]-i*mud-t*mu-0.5*gammal*t-lambda*t)-lambda*t*exp(mud+0.5*gammad)*(gammal*t+gammad*i-i))/(2*(gammal*t+gammad*i)^2);
    };
  };
  GradZW <- c(dlambda,dmu,dgamma,dmud,dgammad);   # gradient in the same order as x
  GradZW
};


lambda, gammal and gammad need to be larger than 0, so I've tried different
constrained methods such as L-BFGS-B via optim and constrOptim, but both seem to
have problems with my functions. The standard optim method works, but sometimes
it returns negative values for lambda, gammal and gammad.
The phi and phisum arrays are known at runtime; n and T can be pretty large, so
maybe R simply can't handle it.
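
For reference, the kind of box-constrained call being described would look
roughly like this (a sketch; the starting values and the small lower bounds
are placeholders, and phi, phisum, Y, n and T must already exist in the
workspace):

## L-BFGS-B attempt: keep lambda, gammal and gammad (elements 1, 3 and 5
## of the parameter vector) strictly positive via lower bounds; the other
## two parameters stay unconstrained.
start <- c(1, 0, 1, 0, 1)                     # illustrative starting values
lower <- c(1e-8, -Inf, 1e-8, -Inf, 1e-8)      # positivity for lambda, gammal, gammad
fit <- optim(start, Zwischenwert, gr = ZWGrad,
             method = "L-BFGS-B", lower = lower)
## (if Zwischenwert is to be maximised rather than minimised,
##  add control = list(fnscale = -1) to the optim() call)
fit$par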


Many thanks
Matthias


--



[R] Optimization problem, to minimize the length(rle(B)$lengths) for all the rows and columns

2008-05-12 Thread Ng Stanley
Hi,

How can I order the rows and columns of a matrix A to generate B, in order
to minimize length(rle(B)$lengths) for all the rows and columns of B?

 set.seed(5)
 a <- matrix(rnorm(200), nrow=20)
 a[a <= 0] <- 0    # threshold to a 0/1 matrix
 a[a > 0]  <- 1
 a
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
 [1,]    0    1    1    0    1    0    1    0    0     1
 [2,]    1    1    0    1    0    1    1    0    1     0
 [3,]    0    1    0    0    1    1    0    1    1     1
 [4,]    1    1    1    1    0    1    0    1    1     1
 [5,]    1    1    0    0    1    0    1    0    0     0
 [6,]    0    0    1    0    0    1    1    0    0     0
 [7,]    0    1    0    0    1    0    0    1    0     0
 [8,]    0    1    0    0    0    0    1    0    0     0
 [9,]    0    0    0    0    0    0    0    0    0     1
[10,]    1    0    0    0    0    1    0    0    0     0
[11,]    1    1    1    0    1    0    1    1    0     1
[12,]    0    1    1    0    0    1    0    1    1     1
[13,]    0    1    1    1    1    1    0    0    0     0
[14,]    0    1    0    1    1    1    0    1    0     1
[15,]    0    1    0    1    1    0    1    0    0     0
[16,]    0    1    0    0    1    1    1    1    1     1
[17,]    0    0    1    1    0    0    1    0    1     1
[18,]    0    0    0    1    0    1    0    1    0     1
[19,]    1    0    0    1    0    1    1    0    1     0
[20,]    0    0    1    0    0    0    1    1    0     1
