[R] Excel Price function in R

2015-09-27 Thread Amelia Marsh via R-help
Dear Forum,

I am trying to find the price of a bond in R. I have written the code in line 
with the Excel PRICE formula. Whenever the residual maturity is less than a 
year, my R output tallies with the Excel PRICE formula; however, the moment the 
residual maturity exceeds one year, the R output differs from the Excel PRICE 
function. I have tried to find the reason but am not able to figure it out.

Please guide me. Here is my code along with illustrative examples -

(I am copying this code from Notepad++. Please forgive any inconvenience 
caused.)


# MY code

add.months = function(date, n) {
  # Shift 'date' by n months, clamping to the last day of the target month
  nC <- seq(date, by = paste(n, "months"), length = 2)[2]        # naive shift
  fD <- as.Date(strftime(as.Date(date), format = '%Y-%m-01'))    # first of current month
  C  <- seq(fD, by = paste(n + 1, "months"), length = 2)[2] - 1  # last day of target month
  if (nC > C) return(C)
  return(nC)
}
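For readers unfamiliar with the helper, a quick illustration of the end-of-month clamping (my own example, not from the original post; assumes the add.months() defined above):

```r
# A naive month shift of Jan 31 would normalize to Mar 3 (Feb 31 -> Mar 3);
# add.months() instead clamps to the last day of the target month:
add.months(as.Date("2015-01-31"), 1)   # "2015-02-28"
add.months(as.Date("2015-01-15"), 1)   # "2015-02-15"
```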

# 

date.diff = function(end, start, basis = 1) {
  # Day count between two dates: actual days, or 30/360 for basis 0 and 4
  if (basis != 0 && basis != 4)
    return(as.numeric(end - start))
  e <- as.POSIXlt(end)
  s <- as.POSIXlt(start)
  d <- (360 * (e$year - s$year)) +
       ( 30 * (e$mon  - s$mon )) +
       (min(30, e$mday) - min(30, s$mday))
  return(d)
}
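A small check of the two day-count conventions (my own example; assumes the date.diff() defined above):

```r
# 30/360 treats every month as 30 days; basis 1 counts actual days
# (Jan 31 to Mar 31, 2015 spans a 28-day February):
date.diff(as.Date("2015-03-31"), as.Date("2015-01-31"), basis = 0)  # 60
date.diff(as.Date("2015-03-31"), as.Date("2015-01-31"), basis = 1)  # 59
```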

# 


excel.price = function(settlement, maturity, coupon, yield, redemption,
                       frequency, basis = 1)
{
  # Count coupons remaining and locate the coupon dates around settlement
  cashflows   <- 0
  last.coupon <- maturity
  while (last.coupon > settlement) {
    last.coupon <- add.months(last.coupon, -12/frequency)
    cashflows   <- cashflows + 1
  }
  next.coupon <- add.months(last.coupon, 12/frequency)

  valueA   <- date.diff(settlement,  last.coupon, basis)  # accrued days
  valueE   <- date.diff(next.coupon, last.coupon, basis)  # days in coupon period
  valueDSC <- date.diff(next.coupon, settlement,  basis)  # days to next coupon

  if (cashflows == 0) {
    stop('number of coupons payable cannot be zero')
  } else if (cashflows == 1) {
    valueDSR <- valueE - valueA
    T1 <- 100 * coupon / frequency + redemption
    T2 <- (yield/frequency * valueDSR/valueE) + 1
    T3 <- 100 * coupon / frequency * valueA / valueE
    result <- (T1 / T2) - T3
    return(result)
  } else {
    expr1 <- 1 + (yield/frequency)
    expr2 <- valueDSC / valueE
    expr3 <- coupon / frequency
    result <- redemption / (expr1 ^ (cashflows - 1 + expr2))
    for (k in 1:cashflows) {
      result <- result + (100 * expr3 / (expr1 ^ (k - 1 + expr2)))
    }
    result <- result - (100 * expr3 * valueA / valueE)
    return(result)
  }
}


# 


(ep1 = excel.price(settlement = as.Date(c("09/15/24"), "%m/%y/%d"), maturity = 
as.Date(c("11/15/4"), "%m/%y/%d"), coupon = 0.065, yield = 0.0590417, 
redemption = 100, frequency = 2, basis = 1))

(ep2 = excel.price(settlement = as.Date(c("09/15/24"), "%m/%y/%d"), maturity = 
as.Date(c("7/16/22"), "%m/%y/%d"), coupon = 0.0725, yield = 0.0969747125, 
redemption = 100, frequency = 2, basis = 1))

(ep3 = excel.price(settlement = as.Date(c("09/15/24"), "%m/%y/%d"), maturity = 
as.Date(c("11/16/30"), "%m/%y/%d"), coupon = 0.08, yield = 0.0969747125, 
redemption = 100, frequency = 2, basis = 1))

# 
...


# OUTPUT

ep1 = 100.0494
Excel output = 100.0494


ep2 = 98.0815
Excel output = 98.08149


ep3 = 98.12432
Excel output = 98.122795


While ep1 and ep2 match the Excel PRICE function values, ep3, whose residual 
maturity exceeds one year, doesn't tally with the Excel PRICE function.



Kindly advise.

With regards

Amelia

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Rattle installation

2015-09-27 Thread Giorgio Garziano
This is what I observe for rattle 3.5.0 on Windows:

pack <- available.packages()

pack["rattle","Depends"]
[1] "R (>= 2.13.0), RGtk2"


pack["rattle", "Suggests"]



[1] "pmml (>= 1.2.13), bitops, colorspace, ada, amap, arules,\narulesViz, 
biclust, cairoDevice, cba, corrplot, descr, doBy,\ndplyr, e1071, ellipse, 
fBasics, foreign, fpc, gdata, ggdendro,\nggplot2, gplots, graph, grid, gtools, 
gWidgetsRGtk2, hmeasure,\nHmisc, kernlab, Matrix, methods, mice, nnet, 
odfWeave, party,\nplaywith, plyr, psych, randomForest, RBGL, 
RColorBrewer,\nreadxl, reshape, rggobi, RGtk2Extras, ROCR, RODBC, 
rpart,\nrpart.plot, SnowballC, stringr, survival, timeDate, tm,\nverification, 
wskm, XML, pkgDepTools, Rgraphviz"


pack["rattle", "Imports"]

[1] NA

In general, package installation via RStudio is straightforward, as it takes 
care of dependencies.
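For installation outside RStudio, one hedged sketch (assumes a CRAN mirror is already configured; suggested packages can take a while to download):

```r
# Pull in rattle together with its Depends, Imports and Suggests lists,
# rather than only the hard dependencies that install.packages() uses by default:
install.packages("rattle", dependencies = c("Depends", "Imports", "Suggests"))
```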

--
Giorgio Garziano



Re: [R] Variable Class "numeric" instead recognized by dplyr as a 'factor'

2015-09-27 Thread peter dalgaard

> On 27 Sep 2015, at 22:12 , Bert Gunter  wrote:
> 
>> 
>> Due to missing data, R originally classified each X and Y variable as a 
>> ‘factor’, subsequently changed to ‘numeric’ via ‘as.numeric’ command.
> 
> No.
> a) missing data will not cause numeric data to become factor. There's
> something wrong in the data from the beginning (as Thierry said)

Well, if you forget to tell R what the input code for missing is (na.strings if 
you use read.table), then that is de facto what happens: The whole column gets 
interpreted as character and subsequently converted to a factor. The fix is to 
_remember_ to tell R what missing value codes are being used.
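A small illustration of that failure mode (hypothetical file contents; note that in R >= 4.0 the column would come in as character rather than factor, since stringsAsFactors now defaults to FALSE):

```r
tf <- tempfile()
writeLines(c("x", "1.5", ".", "2.0"), tf)   # "." is the missing-value code

str(read.table(tf, header = TRUE))                    # x mis-read as factor/character
str(read.table(tf, header = TRUE, na.strings = "."))  # x numeric, with NA
```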

> 
> b) If f is numeric data that is a factor, as.numeric(f) is almost
> certainly **not** the corrrect way to change it to numeric.

Amen... as.numeric(as.character(f)) if you must, but the proper fix is usually 
the above.

-pd

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd@cbs.dk  Priv: pda...@gmail.com


Re: [R] Variable Class "numeric" instead recognized by dplyr as a 'factor'

2015-09-27 Thread Thierry Onkelinx
I doubt that dplyr is the problem. Have a look at the output of
str(CSUdata2); the problem is probably in there.

Sending a reproducible example of the problem makes it easier for us to
help you. Note that this list doesn't accept HTML mail. I suggest that you
read the posting guide carefully.

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium

To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to say
what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of data.
~ John Tukey

2015-09-27 9:58 GMT+02:00 :

> [quoted text of the original message snipped; see the full post below]


Re: [R] Variable Class "numeric" instead recognized by dplyr as a 'factor'

2015-09-27 Thread Bert Gunter
I believe you need to spend some time with an R tutorial, as I don't
believe you understand what factors are and how they should be used.
"Dummy variables" are also almost certainly unnecessary and usually
undesirable, as well.

A few comments below may help.

Cheers,
Bert


Bert Gunter

"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
   -- Clifford Stoll


On Sun, Sep 27, 2015 at 12:58 AM,   wrote:
> Hi--I’m new to R.  For a dissertation, my panel data is for 48 Sub-Saharan 
> countries (cross-sectional index=’i’) over 55 years 1960-2014 (time-series 
> index=’t’).  The variables read into R from a text file are levels data.  The 
> 2SLS regression due to reverse causality will be based on change in the 
> levels data, so will need to difference the data grouped by cross-sectional 
> index ‘i’.
>
>
> There are nearly 50 total variables, but the model essentially will regress 
> the differenced Yit ~ X1it+X2it+X3it+X4it+X5it+X6it, with a dummy variable 
> attached to each of the change-X(s).
>
>
> Due to missing data, R originally classified each X and Y variable as a 
> ‘factor’, subsequently changed to ‘numeric’ via ‘as.numeric’ command.

No.
a) missing data will not cause numeric data to become factor. There's
something wrong in the data from the beginning (as Thierry said)

b) If f is numeric data that is a factor, as.numeric(f) is almost
certainly **not** the corrrect way to change it to numeric. You will
get garbage, viz.:

> f <- runif(5)
> f
[1] 0.42568762 0.03105132 0.46606135 0.35251240 0.57303571
> as.numeric(factor(f))
[1] 3 1 4 2 5




>
>
> However, when I write the following command for dplr solely to difference Yit 
> (=Yit-Yi[t-1]) mutated to new variable dYit, I receive error messages to the 
> effect that Yit and each of the X variables are ‘factors’.
>
>
>
>
>>library (dplr)
>
>>dt = CSUdata2 %>% group_by (i) %>% (dYit=Yit-lag(Yit))
>
>
>
> ‘CSUdata2’ is the object in which the tab-delimited text file dataset is 
> stored.
>
>
> Questions:
>
>
>  Any idea why dplyr reads the variables as ‘factors’?  A class(*) command per 
> variable shows R to know each Y and X as ‘numeric’.
>
>
> Is the command to difference Yit done correctly?  I plan to use the same 
> command for each variable requiring change until I understand the commands 
> better.

Almost certainly not. See ?diff
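A hedged sketch of what the differencing step presumably intends, with toy data invented here (note the package is dplyr, not "dplr", and the mutate() verb is missing from the posted pipeline):

```r
library(dplyr)

# Invented toy panel: two cross-sectional units, three periods each
toy <- data.frame(i   = rep(c("A", "B"), each = 3),
                  Yit = c(1, 3, 6, 2, 5, 9))

# First-difference Yit within each unit; lag() is group-aware here
dt <- toy %>% group_by(i) %>% mutate(dYit = Yit - lag(Yit))
dt$dYit   # NA 2 3 NA 3 4 -- the first observation per group is NA
```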




Re: [R] Variable Class "numeric" instead recognized by dplyr as a 'factor'

2015-09-27 Thread Bert Gunter
Yes, but I think of numeric data with non-numeric values (e.g. "." for
missing) as character, not numeric.  Missing to me means either empty
or with the missing value code specified as you describe. Ergo my
comment. Your clarification is nevertheless appropriate.

Cheers,
Bert
Bert Gunter

"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
   -- Clifford Stoll


On Sun, Sep 27, 2015 at 1:29 PM, peter dalgaard  wrote:
> [quoted text snipped; see peter dalgaard's reply above]


Re: [R] Excel Price function in R

2015-09-27 Thread peter dalgaard
Given that this requires knowledge of both bond theory and Excel, plus a fair 
amount of effort to understand your code, you are likely to be _so_ on your 
own...

However, I'll venture a guess that it has something to do with whether coupons 
should be discounted until payout or until maturity.  

There are some fairly straightforward numerical experiments that you could 
perform to get a handle on what is different in Excel: Graph the price as a 
function of maturity; do you see an abrupt change or does your curve and 
Excel's diverge in a smoothish fashion? If the latter, what is the order of 
magnitude of the divergence? Can you relate it to some of the parameters of 
your model? What happens if you go beyond 2 years to maturity? 3? 4? Etc.
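One self-contained way to set up such an experiment, using a plain textbook discounting formula rather than the excel.price() from the original post, so the shape of the price-vs-maturity curve can be checked independently (all names here are illustrative):

```r
# Dirty price of a bond observed exactly on a coupon date, with n coupons left:
bond.price <- function(n, coupon, yield, freq = 2, redemption = 100) {
  k <- 1:n
  sum(100 * coupon / freq / (1 + yield / freq)^k) +   # discounted coupons
    redemption / (1 + yield / freq)^n                  # discounted redemption
}

# Coupon rate below yield, so prices sit below par and fall with maturity
prices <- sapply(1:20, bond.price, coupon = 0.08, yield = 0.0969747125)
plot(1:20, prices, type = "b",
     xlab = "coupons remaining", ylab = "price")  # look for kinks or divergence
```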

-pd

> On 27 Sep 2015, at 20:19 , Amelia Marsh via R-help  
> wrote:
> [quoted text of the original message snipped; see the original post above]

-- 

[R] Error finding clusters for new data with FlexMix

2015-09-27 Thread Luiz Alberto Lima
Hello,

I am trying to find the clusters for new data using FlexMix. I have the
following code:

library(flexmix)

x <- c(0.605, 0.523, 0.677, 0.101, 0.687, 0.586, 0.517, 0.592, 0.653,
0.617)
y <- c(0.222, 0.741, 0.182, 0.162, 0.192, 0.254, 0.745, 0.669, 0.198,
0.214)
test <- c(0.720, 0.168, 0.520, 0.134, 0.558)

model <- flexmix(y ~ x, data = data.frame(x = x, y = y), k=2)
pred <- predict(model, newdata=data.frame(x = test))
clusters_train <- clusters(model)
clusters_test <- clusters(model, newdata=data.frame(x = test))

When I execute the last line of the code, I receive the following error
message:

> clusters_test <- clusters(model, newdata=data.frame(x = test))
Error in model.frame.default(model@terms, data = data, na.action = NULL,  :
  variable lengths differ (found for 'x')

What am I doing wrong?

Thanks!



[R] Variable Class "numeric" instead recognized by dplyr as a 'factor'

2015-09-27 Thread james.vordtriede
Hi--I’m new to R.  For a dissertation, my panel data is for 48 Sub-Saharan 
countries (cross-sectional index=’i’) over 55 years 1960-2014 (time-series 
index=’t’).  The variables read into R from a text file are levels data.  The 
2SLS regression due to reverse causality will be based on change in the levels 
data, so will need to difference the data grouped by cross-sectional index ‘i’. 
 


There are nearly 50 total variables, but the model essentially will regress the 
differenced Yit ~ X1it+X2it+X3it+X4it+X5it+X6it, with a dummy variable attached 
to each of the change-X(s).


Due to missing data, R originally classified each X and Y variable as a 
‘factor’, subsequently changed to ‘numeric’ via ‘as.numeric’ command.  


However, when I write the following command for dplr solely to difference Yit 
(=Yit-Yi[t-1]) mutated to new variable dYit, I receive error messages to the 
effect that Yit and each of the X variables are ‘factors’.




>library (dplr)

>dt = CSUdata2 %>% group_by (i) %>% (dYit=Yit-lag(Yit))



‘CSUdata2’ is the object in which the tab-delimited text file dataset is 
stored.  


Questions:


 Any idea why dplyr reads the variables as ‘factors’?  A class(*) command per 
variable shows R to know each Y and X as ‘numeric’.


Is the command to difference Yit done correctly?  I plan to use the same 
command for each variable requiring change until I understand the commands 
better.



Thank you.

Sent from Windows Mail

[R] How to find out if two cells in a dataframe belong to the same pre-specified factor-level

2015-09-27 Thread trichter

Dear list,
I really couldn't find a better way to describe my question, so please
bear with me.


To illustrate my problem: I have a matrix of ecological distances
(m1) and one of genetic distances (m2) for a number of biological
species. I have merged both matrices and want to plot the two distances
against each other, as illustrated in this example:


library(reshape)
library(ggplot2)
library(dplyr)

dist1 <- matrix(runif(16),4,4)
dist2 <- matrix(runif(16),4,4)
rownames(dist1) <- colnames(dist1) <- paste0("A",1:4)
rownames(dist2) <- colnames(dist2) <- paste0("A",1:4)

m1 <- melt(dist1)
m2 <- melt(dist2)

final <- full_join(m1,m2, by=c("Var1","Var2"))
ggplot(final, aes(value.x,value.y)) + geom_point()

Here is the twist:
The biological species belong to certain groups, which are given in  
the dataframe `species`, for example:


species <- data.frame(spcs = as.character(paste0("A", 1:4)),
                      grps = as.factor(c(rep("cat", 2), rep("dog", 2))))

I want to check whether an x,y pair in final (as in `final$Var1`,
`final$Var2`) belongs to the same group of species (here "cat" or
"dog"), and then want to colour all groups specifically in the
x,y-scatterplot.

Thus, I need an R translation for:

final$group <- if (final$Var1 and final$Var2) belong to the same group
  as specified in species, then assign the species group here,
  else do nothing or assign NA


so I can proceed with

ggplot(final, aes(value.x,value.y, col=group)) + geom_point()

So, in the example, the pairs A1-A1, A1-A2, A2-A1 and A2-A2 should be
identified as "both cats", and hence should get the factor level "cat".
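A minimal sketch of one way to do the lookup (assuming the `species` and `final` objects above; `group` is the new column being asked for):

```r
# Map each ID in the pair to its group, then keep the group label only
# when both ends of the pair agree; otherwise leave NA:
g1 <- species$grps[match(final$Var1, species$spcs)]
g2 <- species$grps[match(final$Var2, species$spcs)]
final$group <- ifelse(g1 == g2, as.character(g1), NA)
```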


Thank you very much!


Tim



[R] FlexBayes installation from R-Forge Problem R 3.2.2

2015-09-27 Thread Davidwkatz
I tried to install FlexBayes like this:

install.packages("FlexBayes", repos="http://R-Forge.R.project.org") but got
errors:

Here's the transcript in R:

R version 3.2.2 (2015-08-14) -- "Fire Safety"
Copyright (C) 2015 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> install.packages("FlexBayes", repos="http://R-Forge.R.project.org")
Installing package into ‘C:/Users/dkatz/R/win-library/3.2’
(as ‘lib’ is unspecified)
Error: Line starting '
--
http://r.789695.n4.nabble.com/FlexBayes-installation-from-R-Forge-Problem-R-3-2-2-tp4712861.html
Sent from the R help mailing list archive at Nabble.com.


Re: [R] Line in hist count

2015-09-27 Thread bgnumis bgnum
Hi, all

I have discovered that abline(h=dataf, col="red") can add the line I want
to this plot:

fhist<-hist(f,plot=FALSE)
par(mar=c(6,0,6,6))
barplot(fhist$counts/ sum(fhist$counts),axes=FALSE,
space=0,horiz=TRUE,col="lightgray")
grid()
title("Marginal Distribution CDS vs. Ibex",font=4)
abline(h=dataf,col="red")

The thing is:

How can I display the associated fhist$counts / sum(fhist$counts) value for
the last value of f?
2015-09-26 23:31 GMT+02:00 bgnumis bgnum :

> Hi all,
>
> Several time ago I used to work with R, now I´m returning to study and
> work and searching old file I see that I used this code:
>
>
> gfhist<-hist(gf,plot=FALSE)
>
> par(mar=c(6,0,6,6))
>
> barplot(gfhist$counts,axes=FALSE, space=0,horiz=TRUE,col="lightgray")
>
> grid()
>
> title("Marginal Distribution Lagged",font=4)
>
> The thing is I would line to plot a bar (horizontal and thing bar that
> will be placed on the last gf data but on the barplot
>
> ¿Do you think is it possible? gf is a matrix.
>
>
>


Re: [R] Appropriate specification of random effects structure for EEG/ERP data: including Channels or not?

2015-09-27 Thread Phillip Alday
You might also want to take a look at the recent paper from the Federmeier 
group, especially the supplementary materials. There are a few technical 
inaccuracies (ANOVA is a special case of hierarchical modelling, not the other 
way around), but they discuss some of the issues involved. And relevant for 
your work: they model channel as a grouping variable in the random-effects 
structure.

Payne, B. R., Lee, C.-L., and Federmeier, K. D. (2015). Revisiting the 
incremental effects of context on word processing: Evidence from single-word 
event-related brain potentials. Psychophysiology.

http://dx.doi.org/10./psyp.12515

Best,
Phillip

> On 24 Sep 2015, at 22:42, Phillip Alday  wrote:
> 
> There is actually a fair amount of ERP literature using mixed-effects
> modelling, though you may have to branch out from the traditional
> psycholinguistics journals a bit (even just more "neurolinguistics" or
> language studies published in "psychology" would get you more!). But
> just in the traditional psycholinguistics journals, there is a wealth of
> literature, see for example the 2008 special issue on mixed models of
> the Journal of Memory and Language.
> 
> I would NOT encode the channels/ROIs/other topographic measures as
> random effects (grouping variables). If you think about the traditional
> ANOVA analysis of ERPs, you'll recall that ROI or some other topographic
> measure (laterality, saggitality) are included in the main effects and
> interactions. As a rule of thumb, this corresponds to a fixed effect in
> random effects models. More specifically, you generally care about
> whether the particular levels of the topographic measure (i.e. you care
> if an ERP component is located left-anterior or what not) and this is
> what fixed effects test. Random effects are more useful when you only
> care about the variance introduced by a particular term but not the
> specific levels (e.g. participants or items -- we don't care about a
> particular participant, but we do care about how much variance there is
> between participants, i.e. how the population of participants looks). 
> 
> Or, another thought: You may have seen ANOVA by-subjects and by-items,
> but I bet you've never seen an ANOVA by-channels. ANOVA "implicitly"
> collapses the channels within ROIs and you can do the same with mixed
> models. (That's an awkward statement technically, but it should help
> with the intuition.)
> 
> There is an another, related important point -- "nuisance parameters"
> aren't necessarily random effects. So even if you're not interested in
> the per-electrode distribution of the ERP component, that doesn't mean
> those should automatically be random effects. It *might* make sense to
> add a channel (as in per-electrode) random effect, if you care to model
> the variation within a given ROI (as you have done), but I haven't seen
> that yet. It is somewhat rare to include a per-channel fixed effect,
> just because you lose a lot of information that way and introduce more
> parameters into the model, but you could include a more fine-grained
> notion of saggital / lateral location based on e.g. the 10-20 system and
> make that into an ordered factor. (Or you could be extreme and even use
> the spherical coordinates that the 10-20 is based on and have continuous
> measures of electrode placement!) The big problem with including
> "channel" as a random-effect grouping variable is that the channels
> would have a very complicated covariance structure (because adjacent
> electrodes are very highly correlated with each other) and I'm not sure
> how to model this in a straightforward way with lme4.
> 
> More generally, in considering your random effects structure, you should
> look at Barr et al (2013, "Random effects structure for confirmatory
> hypothesis testing: Keep it maximal") and the recent reply by Bates et
> al (arXiv, "Parsimonious Mixed Models"). You should read up on the GLMM
> FAQ on testing random effects -- there are different opinions on this
> and not all think that testing them via likelihood-ratio tests makes
> sense.
> 
> That wasn't my most coherent response, but maybe it's still useful. And
> for questions like this on mixed models, do check out the R Special
> Interest Group on Mixed Models. :-)
> 
> Best,
> Phillip
> 
> On Thu, 2015-09-24 at 12:00 +0200, r-help-requ...@r-project.org wrote:
>> Message: 4
>> Date: Wed, 23 Sep 2015 12:46:46 +0200
>> From: Paolo Canal 
>> To: r-help@r-project.org
>> Subject: [R] Appropriate specification of random effects structure for
>>EEG/ERP data: including Channels or not?
>> Message-ID: <56028316.2050...@iusspavia.it>
>> Content-Type: text/plain; charset="UTF-8"
>> 
>> Dear r-help list,
>> 
>> I work with EEG/ERP data and this is the first time I am using LMM to 
>> analyze my data (using lme4).
>> The experimental design is a 2X2: one manipulated factor is
>> agreement, 
>> the other is noun (agreement 

[R] Theme white bands blue and grey or other color

2015-09-27 Thread bgnumis bgnum
Hi all,

I want to plot two bands in quantmod, but with the white theme:

chartSeries(Fond, theme = "white",
            TA = c(addBBands(50, 2), addBBands(100, 2)))

The thing is, if I use this theme object "t" the chart plots with a black
theme, but then the two bands can be shown. Is it possible to plot the two
different bands with the white theme?

t=chartTheme()
t$BBands$fill="#ff33"
reChart(theme="t")
t$BBands$col=c('red','blue','green')
t$BBands$col='blue'
reChart(theme="t")
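One likely culprit, as a hedged guess: reChart(theme = "t") passes the
*string* "t" (a theme name quantmod does not know) rather than the theme
object itself. A sketch of the intended calls (assumes quantmod is loaded
and the Fond object exists; the fill colour is a hypothetical placeholder):

```r
t <- chartTheme("white")                  # start from the white theme
t$BBands$col  <- c("red", "blue", "green")
t$BBands$fill <- "lightblue"              # hypothetical fill colour
# pass the theme object itself, not the string "t"
chartSeries(Fond, theme = t,
            TA = c(addBBands(50, 2), addBBands(100, 2)))
```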

Many Thanks in advance


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] Truncation of character fields in a data frame

2015-09-27 Thread Luigi Marongiu
Dear all,
I am reading a txt file into the R environment to create a data frame;
however, I have noticed that some entries have a truncated version of a
field. For instance I get "Astro" instead of "Astro 1-Astro 1" and
"Sapo" for "Sapo #1-Sapo_1" and "Sapo #2-Sapo_2", but I also get
"Adeno 40/41 EH-Adeno_40-41_EH", so the problem is not in the spaces
between the words. The txt file is a simple tab-delimited file
generated from Excel, which I read with:
bad.data<-read.table(
"test_df.txt",
header=TRUE,
row.names=1,
dec = ".",
sep="\t",
stringsAsFactors = FALSE,
fill = TRUE
)

[the fill = TRUE argument was introduced because in the real case I got
an error about a missing line.]

I can recreate this file as follows:
sample <- c(rep("p.001", 48), rep("p.547", 48))
target <- c("Adeno 1-Adeno 1","Adeno 40/41 EH-AIQJCT3","Astro
1-Astro 1","Sapo 1-Sapo 1","Sapo 2-Sapo 2","Enterovirus
1-Enterovirus 1","Parechovirus-Parechovirus","HEV 1-HEV 1",
"IC PDV control-AIRSA0B","Rotavirus cam-Rotavirus cam",
"18S-Hs9901_s1","Noro gp II-Noro gp II","Noro gp 1-Noro gp
1","Noro gp 1 mod33-Noro gp 1 mod33","C difficile
GDH-AIS086J","C difficile Tox B-C difficile Tox B","VTX
1-AIT97CR","BT control Man-AIVI5IZ","E. coli vtx 2-E. coli vtx
2","Campy spp-AIWR3O7","Salmonella ttr-AIX01VF","Crypto
CP2-AIY9Z1N","Green Fluorescent Protein-AI0IX7V","Adeno
2-Adeno 2","Adeno 40_41 Oly-AI1RWD3","Astro 2 Liu-AI20UKB",
"Giardia lambia 1-AI39SQJ","Rotavirus Liu-Rotavirus Liu 2",
"Enterovirus Bruges-Enterovirus 2 Br","HAV 1-Hepatitis A 1",
"HEV 2-AI5IQWR","MS2 control-AI6RO2Z","Rotarix NSP2-AI70M87",
  "CMV br-CMV br","IC Rnase P-AI89LFF","Salmonella hil
A-Salmonella hil A","Shigella ipa H-AIAA0K8","Enteroagg E.
coli-AIBJYRG","Campy jejuni-AICSWXO","Campy coli-AID1U3W",
"Yersinia enterocolitica-AIFAS94","Bacterial 16S-Bacterial 16S",
 "Aeromonas hydrophilia-Aeromonas hydrophilia","V
cholerae-AIGJRGC","Dientamoeba fragilis-AIHSPMK","Entamoeba
histolytica-AII1NSS","Crypto 2 J-AIKALY0","Giardia lambia
rev-AILJJ48","Adeno #1-Adeno_1","Adeno 40/41
EH-Adeno_40-41_EH","Astro #1-Astro_1","Sapo #1-Sapo_1",
"Sapo #2-Sapo_2","Enterovirus #1-Enterovirus_1",
"Parechovirus-Parechovirus","HEV #1-HEV_1","C coli jejuni
Liu-C_coli_jejuni_Li","Rotavirus cam-Rotavirus_cam","IC 18s-IC
18s","Noro gp II-Noro_gp_II","Noro gp 1-Noro_gp_1","Noro
gp 1 mod33-Noro_gp_1_mod33","C difficile GDH-C-difficile_GDH",
"C difficile Tox B-C_difficile_T_B","E. coli vtx 1-E_coli_vtx_1",
  "BT control Man-BT_control_Man","E. coli vtx 2-E_coli_vtx_2",
"Campy spp NEW-Campy_spp_NEW","Salmonella ttr-Salmonella_ttr",
"Cryptosporidium spp CP2-Cryptos_spp_CP2","C jejuni
#2-C_jejuni_2","Adeno #2-Adeno_2","Adeno 40/41
Oly-Adeno_40-41_Oly","Astro Liu #2-Astro_Liu_2","Giardia
lambia #1-Giardia_lambia_1","Rotavirus Liu #2-Rotavirus_Liu_2",
"Enterovirus #2 Br-Enterovirus_2_Br","Hepatitis A
#1-Hepatitis_A_1","HEV #2-HEV_2","MS2 control-MS2_control",
"Rotarix NSP2 Bris-Rotarix_NSP2_Bri","CMV br-CMV_br","Rnase P
control-Rnase_P_control","Salmonella hil A-Salmonella_hil_A",
"Shigella ipa H-Shigella_ipa_H","Enteroagg E.
coli-Enteroagg_E_coli","V parahaemolyticus-V_p_haemolyticus",
"Campy coli-Campy_coli","Yersinia
enterocolitica-Y_enterocolitica","Bacterial 16S-Bacterial_16S",
"Aeromonas hydrophilia-Aero_hydrophilia","Vibrio
cholerae-Vibrio_cholerae","Dientamoeba fragilis-Dien_fragilis",
"Entamoeba histolytica-Enta_histolytica","Cryptosporidium spp #2
J-Crypto_spp_2_J","Giardia lambia #2 rev-Giardia_lambia_r")
ct <- c(NA, NA, NA, NA, NA, NA, NA, NA, NA,
        NA, 18.793, NA, NA, NA, NA, NA, NA, 33.302,
        NA, 32.388, NA, NA, NA, NA, NA, NA, NA, NA,
        NA, NA, NA, 31.398, NA, NA, NA, NA, NA,
        NA, NA, NA, NA, 8.115, NA, NA, NA, NA, NA,
        NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
        NA, 21.161, NA, NA, NA, NA, NA, NA, 31.302,
        NA, 29.785, NA, NA, NA, NA, NA, NA, NA,
        NA, NA, NA, NA, 31.212, 42.967, NA, 33.503,
        NA, NA, NA, NA, NA, NA, 9.584, NA, NA, NA,
        NA, NA, NA)

good.data <- data.frame(sample, target, ct, stringsAsFactors = FALSE)

and the structure of these objects is the same:
> str(good.data)
'data.frame':96 obs. of  3 variables:
 $ sample: chr  "p.001" "p.001" "p.001" "p.001" ...
 $ target: chr  "Adeno 1-Adeno 1" "Adeno 40/41 EH-AIQJCT3" "Astro
1-Astro 1" "Sapo 1-Sapo 1" ...
 $ ct: num  NA NA NA NA NA NA NA NA NA NA ...
> str(bad.data)
'data.frame':96 obs. of  3 variables:
 $ Sample: chr  "p.001" "p.001" "p.001" "p.001" ...
 $ Target: 

Re: [R] Truncation of character fields in a data frame

2015-09-27 Thread Duncan Murdoch
On 27/09/2015 7:56 AM, Luigi Marongiu wrote:
> Dear all,
> I am reading a txt file into the R environment to create a data frame,
> however I have notice that some entries have a truncated version of a
> field, so for instance I get "Astro" instead of "Astro 1-Astro 1" and
> "Sapo" for "Sapo #1-Sapo_1" and "Sapo #2-Sapo_2", but I also get
> "Adeno 40/41 EH-Adeno_40-41_EH" so the problem is not in the spaces
> between the words. The txt file is a simple tab delimited file
> generated from excel which I read with:
> 
> bad.data<-read.table(
> "test_df.txt",
> header=TRUE,
> row.names=1,
> dec = ".",
> sep="\t",
> stringsAsFactors = FALSE,
> fill = TRUE
> )
> 
> [the fill = TRUE was introduced because in the real case I got an
> error of a missing line.]

See the "comment.char" argument to read.table.  By default the "#"
character marks a comment, as in R code.

Duncan Murdoch
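A minimal base-R demonstration of the effect (the file contents are made
up for illustration):

```r
tf <- tempfile(fileext = ".txt")
writeLines(c("Sample\tTarget", "p.001\tSapo #1-Sapo_1"), tf)

# Default comment.char = "#" truncates the field at the "#"
bad <- read.table(tf, header = TRUE, sep = "\t", stringsAsFactors = FALSE)

# Disabling the comment character keeps the full string
good <- read.table(tf, header = TRUE, sep = "\t", stringsAsFactors = FALSE,
                   comment.char = "")

bad$Target   # truncated at the "#"
good$Target  # "Sapo #1-Sapo_1"
```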


Re: [R] Special characters in regular expressions

2015-09-27 Thread Patrick Connolly
On Thu, 24-Sep-2015 at 12:38PM +0200, peter dalgaard wrote:

|> 
|> On 24 Sep 2015, at 12:05 , Thierry Onkelinx  wrote:
|> 
|> > gsub("[A|K]\\|", "", x)
|> 

|> That'll probably do it, but what was the point of the | in [A|K] ??
|> I don't think it does what I think you think it does...

|> Somewhat safer, maybe:
|> 
|> gsub("\\|[AK]\\|","\\|", x)
|> 
|> (avoids surprises from, say, "LBAM 5|A|15A|3h")

Thanks for that suggestion.  Very simple now.
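For the archive, the suggested pattern applied to the original example
vector:

```r
dd <- c("LBAM 5|A|15C|3h", "LBAM 5|K|15C|2h")

# match a literal "|A|" or "|K|" and collapse it to a single "|"
gsub("\\|[AK]\\|", "|", dd)
# [1] "LBAM 5|15C|3h" "LBAM 5|15C|2h"
```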

|> 
|> -pd
|> 
|> > [snip]
|> > 2015-09-24 11:52 GMT+02:00 Patrick Connolly :
|> > 
|> >> I need to change a vector dd that looks like this:
|> >> c("LBAM 5|A|15C|3h", "LBAM 5|K|15C|2h")
|> >> 
|> >> into this:
|> >> c("LBAM 5|15C|3h", "LBAM 5|15C|2h")
|> >> 
|> >> It's not very imaginative, but I could use a complicated nesting of
|> >> gsub() as so:
|> >> 
|> >> gsub("-", "\\|", gsub("K-", "", gsub("A-", "", gsub("\\|", "-", dd
|> >> 
|> >> Or I could make it a bit more readable by using interim objects,
|> >> 
|> >> But I'd prefer to use a single regular expression that can detect "A|"
|> >> *and* "K|" without collateral damage from the impact of special
|> >> characters and regular characters.
|> >> 
|> 
|> -- 
|> Peter Dalgaard, Professor,
|> Center for Statistics, Copenhagen Business School
|> Solbjerg Plads 3, 2000 Frederiksberg, Denmark
|> Phone: (+45)38153501
|> Office: A 4.23
|> Email: pd@cbs.dk  Priv: pda...@gmail.com
|> 

-- 
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.   
   ___Patrick Connolly   
 {~._.~}   Great minds discuss ideas
 _( Y )_ Average minds discuss events 
(:_~*~_:)  Small minds discuss people  
 (_)-(_)  . Eleanor Roosevelt
  
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.
