Re: [R] Writing data onto xlsx file without cell formatting

2016-07-10 Thread Ismail SEZEN
I think, this is what you are looking for:

http://stackoverflow.com/questions/11228942/write-from-r-into-template-in-excel-while-preserving-formatting
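For the archive: the approach in that answer can be sketched with the xlsx package by loading the existing workbook and setting cell values one at a time, which leaves each cell's predefined style untouched. The file name and target cell below are hypothetical.

```r
library(xlsx)  # assumes the rJava-based 'xlsx' package is installed

wb    <- loadWorkbook("template.xlsx")   # hypothetical existing workbook
sheet <- getSheets(wb)[[1]]
cells <- getCells(getRows(sheet))

# Cells are indexed as "row.column"; writing a value this way does not
# replace the style already attached to the cell.
setCellValue(cells[["5.6"]], 42)
saveWorkbook(wb, "template.xlsx")
```

For a whole data frame, loop over its rows and columns with the same setCellValue() call.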
 


> On 11 Jul 2016, at 03:43, Christofer Bogaso  
> wrote:
> 
> Hi again,
> 
> I am trying to write a data frame to an existing Excel file (xlsx),
> starting at row 5, column 6 of the first sheet. I was following a
> previous answer, available here:
> 
> http://stackoverflow.com/questions/32632137/using-write-xlsx-in-r-how-to-write-in-a-specific-row-or-column-in-excel-file
> 
> However, the trouble is that it modifies/removes the formatting of all
> the affected cells. I have predefined formatting in the cells where the
> data are to be pasted, and I don't want that formatting modified or
> removed.
> 
> Any idea whether I need to pass an additional argument?
> 
> Appreciate your valuable feedback.
> 
> Thanks,
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Statistical Test

2016-07-10 Thread Jim Lemon
Hi Julia,
You seem to be looking for a test for trend in proportions in the
first question. Have a look at this page:

http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/R/R6_CategoricalDataAnalysis/R6_CategoricalDataAnalysis6.html

The second question may require GLMs using experimental condition as a
predictor and proportion of each type of error as the response. Are
the groups balanced?
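The page linked above covers the Cochran-Armitage test for trend in proportions, which base R exposes as prop.trend.test(); an illustrative call with invented counts (not Julia's data):

```r
# Hypothetical counts: errors of one type out of all responses,
# across three ordered conditions.
errors <- c(18, 12, 7)
trials <- c(40, 40, 40)

# Chi-squared test for a linear trend in the three proportions.
prop.trend.test(errors, trials)
```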

Jim


On Sun, Jul 10, 2016 at 10:34 PM, Julia Edeleva
 wrote:
> Dear R-community,
>
> Thanks for replying to my previous post. I would need some more help,
> though.
>
> I am performing statistical analysis on children's accuracy rates as a
> dependent variable and two predictor variables with two levels each (syntax
> - subject vs object; internal NP position - pre vs post).
>
> As an outcome of my study, children committed 3 types of errors. I want to
> compare whether children committed significantly more errors of one type as
> compared to the other two types, i.e. test the scale *error 1 > error 2 >
> error 3 (">" is "more than").*  Which statistical test is most appropriate?
>
> Furthermore, I want to know whether one particular type of error is more
> common in one experimental condition than in the other, i.e. test
> whether *error
> 1 in condition 1 is more common than error 1 in condition 2*.
>
> Thanks a lot
>
> Julia Edeleva
>
> *Compare different types of errors in children's performance. Statistical
> Test? - ResearchGate*. Available from:
> https://www.researchgate.net/post/Compare_different_types_of_errors_in_chidrens_performance_Statistical_Test
> [accessed Jul 10, 2016].
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help- Converting the ODE equation to fuzzy logic in R

2016-07-10 Thread Jeff Newmiller
The only reason I can imagine for such a "need" is that you have been assigned 
homework and there is a no-homework policy on this list. That said, Google came 
up with at least one hit when I looked. 

You really ought to read the Posting Guide before posting again.
-- 
Sent from my phone. Please excuse my brevity.

On July 10, 2016 5:57:27 PM PDT, mohammad alsharaiah  
wrote:
>Hi all,
>
>I have a few ordinary differential equations (ODEs) and I need to
>transform them into fuzzy logic using any package that works with the R
>language, or any R code that converts these equations. The fuzzy-logic
>version must produce the same or approximately the same results as the
>ODEs.
>
>
>Thanks for your help.
>
>
>*Mohammad*
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] Course in Alice Springs: Data exploration, regression, GLM & GAM

2016-07-10 Thread Highland Statistics Ltd
We would like to announce the following statistics course:

Course: Data exploration, regression, GLM & GAM with introduction to R
Where:  Charles Darwin University, Alice Springs, Australia
When:   1-5 August 2016

Course website: http://www.highstat.com/statscourse.htm
Course flyer: 
http://highstat.com/Courses/Flyers/Flyer2016_08AliceSprings_RGG.pdf


Kind regards,

Alain Zuur


-- 
Dr. Alain F. Zuur

First author of:
1. Beginner's Guide to GAMM with R (2014).
2. Beginner's Guide to GLM and GLMM with R (2013).
3. Beginner's Guide to GAM with R (2012).
4. Zero Inflated Models and GLMM with R (2012).
5. A Beginner's Guide to R (2009).
6. Mixed effects models and extensions in ecology with R (2009).
7. Analysing Ecological Data (2007).

Highland Statistics Ltd.
9 St Clair Wynd
UK - AB41 6DZ Newburgh
Tel:   0044 1358 788177
Email:highs...@highstat.com
URL:www.highstat.com


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Help- Converting the ODE equation to fuzzy logic in R

2016-07-10 Thread mohammad alsharaiah
Hi all,

I have a few ordinary differential equations (ODEs) and I need to
transform them into fuzzy logic using any package that works with the R
language, or any R code that converts these equations. The fuzzy-logic
version must produce the same or approximately the same results as the
ODEs.


Thanks for your help.


*Mohammad*

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] Writing data onto xlsx file without cell formatting

2016-07-10 Thread Christofer Bogaso
Hi again,

I am trying to write a data frame to an existing Excel file (xlsx),
starting at row 5, column 6 of the first sheet. I was following a
previous answer, available here:

http://stackoverflow.com/questions/32632137/using-write-xlsx-in-r-how-to-write-in-a-specific-row-or-column-in-excel-file

However, the trouble is that it modifies/removes the formatting of all
the affected cells. I have predefined formatting in the cells where the
data are to be pasted, and I don't want that formatting modified or
removed.

Any idea whether I need to pass an additional argument?

Appreciate your valuable feedback.

Thanks,

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R-es] R-help-es Digest, Vol 89, Issue 9

2016-07-10 Thread Patricio Fuenmayor
Jose Luis,
I recommend that you use the data.table package.


On 10 July 2016 at 4:53,  wrote:

> Send R-help-es mailing list submissions to
> r-help-es@r-project.org
>
> To subscribe or unsubscribe via the web, visit
> https://stat.ethz.ch/mailman/listinfo/r-help-es
>
> or, via email, send a message with the text "help" in the subject
> or in the body to:
> r-help-es-requ...@r-project.org
>
> You can reach the person managing the list at:
> r-help-es-ow...@r-project.org
>
> If you reply to content of this message, please edit the subject
> line so that it is more specific than "Re: Contents of R-help-es
> digest...". Also, please include in your reply only those parts of
> the message you are responding to.
>
> Today's topics:
>
>1. Re: Neural network, complicated categories (Javier Marcuzzi)
>2. Re: Neural network, complicated categories (Carlos Ortega)
>3. Table join (jose luis)
>
>
> -- Forwarded message --
> From: Javier Marcuzzi 
> To: "r-help-es@r-project.org" 
> Cc:
> Date: Sat, 9 Jul 2016 16:58:24 -0300
> Subject: Re: [R-es] Neural network, complicated categories
>
> Dear all,
>
> Attached is a very simple comma-separated text file as an example; the
> following code should explain the problem. If you run it, I think it
> will become clear.
>
> x <- read.csv("~/R/neuronal/x.csv", header=FALSE, sep=";")
>
> V1Binario <- model.matrix(~ factor(x$V1) - 1)
> # -1 drops the intercept, leaving x$V1 as-is; these are the ones with
> # nothing (nothing, door, gate)
> V1Binario
>
> V2Binario <- model.matrix(~ factor(x$V2) - 1)
> V3Binario <- model.matrix(~ factor(x$V3) - 1)
> V4Binario <- model.matrix(~ factor(x$V4) - 1)
> V5Binario <- model.matrix(~ factor(x$V5) - 1)
> V6Binario <- model.matrix(~ factor(x$V6) - 1)
>
> x <- cbind(x, V1Binario)
> x <- cbind(x, V2Binario)
> x <- cbind(x, V3Binario)
> x <- cbind(x, V4Binario)
> x <- cbind(x, V5Binario)
> x <- cbind(x, V6Binario)
>
> nn <- neuralnet(V6Binario ~ V1Binario + V2Binario + V3Binario +
> V4Binario + V5Binario, x, hidden = 2, rep = 5)
> # of course, this does not work,
> # because if I look at the data with
> x
> # I can see that the number of "columns" grows for each factor
> # converted to binary
> # Any ideas?
>
> Javier Rubén Marcuzzi
>
>
>
> *From: *Javier Marcuzzi 
> *Sent: *Thursday, 7 July 2016 10:51
> *To: *r-help-es@r-project.org
> *Subject: *Neural network, complicated categories
>
>
>
> Dear all,
>
> I am asking about neural networks. There are several articles such as
> the following (the last one currently has an error), but my question
> goes in a somewhat different direction.
>
> http://www.r-bloggers.com/build-your-own-neural-network-classifier-in-r/
>
> http://www.r-bloggers.com/classification-using-neural-net-in-r/
>
> Basically, one can compute a value, for example turn 2.4 degrees to the
> right, then 1 degree to the left, and in that way drive a car, where the
> exact value does not matter because it can always be corrected (many
> updates produce the result).
>
> In other prediction cases, since neural networks only handle numbers,
> the encoding (normalisation) for categories can be (0,0,0), (0,1,0),
> (1,0,0), (1,1,1), where the non-normalised meaning is: nothing, roof,
> room, pool, ..., objects of a house.
>
> The neural network does not produce 0,0,1 as a result; it might be 0,
> 0.9, 0.98.
>
> I can tell R that since 0.9 and 0.98 are close to 0 and 1, they count as
> 0 and 1, mapping the result to 0,0,1, which denotes a category (the name
> of a house object).
>
> Up to there everything is fine; I can predict the category.
>
> But what happens if those categories are the presence of house objects
> used to classify the house?
>
> I mean: roof, room is an ordinary house.
>
> Another one, roof, room, pool, is a large house.
>
> But another user enters only room and pool (assuming there is a roof),
> and that is also a large house.
>
> In the first case I have two triples (0,0,0 roof and 0,0,1 room).
>
> In the second case, three triples, because there are three objects.
>
> In the third, only two triples, assuming the existence of a roof in a
> house.
>
> In an example like this with three house objects, I could train it
> without problems, because there are some 9 possible combinations of
> objects.
>
> But if the number of objects is so high that I cannot enter all the
> possible combinations, how can I write the model in R? Is it possible?
> With neural networks I can determine which letter something is
> (character recognition: an x/y pattern with presence or absence of
> colour, finding groups of neighbouring painted pixels), but not so many
> presences or 
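One common workaround for the neuralnet() error in Javier's code is to expand the factors with a single model.matrix() call and then build the formula from the resulting column names, since neuralnet() accepts column names but not matrix objects in its formula. This is a hypothetical sketch: it assumes the neuralnet package is installed, that x is the data frame read above, and that the V6 indicator columns are the desired response.

```r
library(neuralnet)  # assumed installed

# One indicator (0/1) column per factor level, no intercept.
m <- model.matrix(~ . - 1, data = x)
colnames(m) <- make.names(colnames(m))   # syntactically valid names
dat <- as.data.frame(m)

# Indicator columns derived from V6 form the multi-output response;
# everything else is a predictor.
resp <- grep("^V6", colnames(dat), value = TRUE)
pred <- setdiff(colnames(dat), resp)
f <- as.formula(paste(paste(resp, collapse = " + "), "~",
                      paste(pred, collapse = " + ")))

nn <- neuralnet(f, data = dat, hidden = 2, rep = 5)
```

Note that with `~ . - 1` only the first factor gets a full set of indicators; later factors are treatment-coded, so inspect colnames(m) before trusting the encoding.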

Re: [R] How to make the "apply" faster

2016-07-10 Thread Debasish Pai Mazumder
Thanks for your response. It is faster than before but still very slow.
Any other suggestions?
-Deb


On Sun, Jul 10, 2016 at 2:13 PM, William Dunlap  wrote:

> There is no need to test that a logical equals TRUE: 'logicalVector==TRUE'
> is the
> same as just 'logicalVector'.
>
> There is no need to convert logical vectors to numeric, since rle() works
> on both
> types.
>
> There is no need to use length(subset(x, logicalVector)) to count how many
> elements
> in logicalVector are TRUE, just use sum(logicalVector).
>
> There is no need to make a variable, 'ans', then immediately return it.
>
> Hence your
>
> b[b == TRUE] = 1
> y <- rle(b)
> ans <- length(subset(y$lengths[y$values == 1], y$lengths[y$values ==
> 1] >= 2))
> return(ans)
>
> could be replaced by
>
> y <- rle(b)
> sum(y$lengths[y$values] >= 2)
>
> This gives some speedup, mainly for long vectors, but I find it more
> understandable.
> E.g., if f1 is your original function and f2 has the above replacement I
> get:
>   > d <- -sin(1:1+sqrt(1:4))
>   > system.time(for(i in 1:1)f1(d,.3))
>  user  system elapsed
>  5.19    0.00    5.19
>   > system.time(for(i in 1:1)f2(d,.3))
>  user  system elapsed
>  3.65    0.00    3.65
>   > c(f1(d,.3), f2(d,.3))
>   [1] 1492 1492
>   > length(d)
>   [1] 1
>
> If it were my function, I would also get rid of the part that deals with
> the threshold
> and direction of the inequality and tell the user to use f(data <= 0.3)
> instead of
> f(data, .3, "below").  I would also make the spell length an argument
> instead of
> fixing it at 2.  E.g.
>
>> f3 <- function (condition, spellLength = 2)
>{
>stopifnot(is.logical(condition), !anyNA(condition))
>y <- rle(condition)
>sum(y$lengths[y$values] >= spellLength)
>}
>> f3( d >= .3 )
>[1] 1492
>
>
>
> Bill Dunlap
> TIBCO Software
> wdunlap tibco.com
>
> On Sun, Jul 10, 2016 at 11:58 AM, Debasish Pai Mazumder  > wrote:
>
>> Hi Everyone,
>> Thanks for your help. It works. I have a similar problem when
>> calculating the number of spells.
>> I am also calculating spells (definition: a period of two or more days
>> where x exceeds 70) in a similar way:
>>
>> new = apply(x, c(1,2,4), FUN = function(y) { fun.spell.deb(y, 70) })
>>
>> where fun.spell.deb.R:
>>
>> ## Calculate spell duration
>> fun.spell.deb <- function(data, threshold = 1,
>>                           direction = c("above", "below")) {
>>   # coln <- grep(weather, names(data))
>>   # var <- data[,8]
>>   if (missing(direction)) { direction <- "above" }
>>   if (direction == "below") { b <- (data <= threshold) }
>>   else { b <- (data >= threshold) }
>>   b[b == TRUE] = 1
>>   y <- rle(b)
>>   ans <- length(subset(y$lengths[y$values == 1],
>>                        y$lengths[y$values == 1] >= 2))
>>   return(ans)
>> }
>>
>> Do you have any idea how to make the "apply" faster here?
>>
>> -Deb
>>
>>
>> On Sat, Jul 9, 2016 at 3:46 PM, Charles C. Berry 
>> wrote:
>>
>> > On Sat, 9 Jul 2016, Debasish Pai Mazumder wrote:
>> >
>> > I have 4-dimension array x(lat,lon,time,var)
>> >>
>> >> I am using "apply" to calculate over time
>> >> new = apply(x,c(1,2,4),FUN=function(y) {length(which(y>=70))})
>> >>
>> >> This is very slow. Is there any way to make it faster?
>> >>
>> >
>> > If dim(x)[3] << prod(dim(x)[-3]),
>> >
>> > new <-  Reduce("+",lapply(1:dim(x)[3],function(z) x[,,z,]>=70))
>> >
>> > will be faster.
>> >
>> > However, if you can follow Peter Langfelder's suggestion to use rowSums,
>> > that would be best. Even using rowSums(aperm(x, c(1,2,4,3)) >= 70,
>> > dims = 3) and paying the price of aperm() might be better.
>> >
>> > Chuck
>> >
>>
>> [[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to make the "apply" faster

2016-07-10 Thread William Dunlap via R-help
There is no need to test that a logical equals TRUE: 'logicalVector==TRUE'
is the
same as just 'logicalVector'.

There is no need to convert logical vectors to numeric, since rle() works
on both
types.

There is no need to use length(subset(x, logicalVector)) to count how many
elements
in logicalVector are TRUE, just use sum(logicalVector).

There is no need to make a variable, 'ans', then immediately return it.

Hence your

b[b == TRUE] = 1
y <- rle(b)
ans <- length(subset(y$lengths[y$values == 1], y$lengths[y$values == 1]
>= 2))
return(ans)

could be replaced by

y <- rle(b)
sum(y$lengths[y$values] >= 2)

This gives some speedup, mainly for long vectors, but I find it more
understandable.
E.g., if f1 is your original function and f2 has the above replacement I
get:
  > d <- -sin(1:1+sqrt(1:4))
  > system.time(for(i in 1:1)f1(d,.3))
 user  system elapsed
 5.19    0.00    5.19
  > system.time(for(i in 1:1)f2(d,.3))
 user  system elapsed
 3.65    0.00    3.65
  > c(f1(d,.3), f2(d,.3))
  [1] 1492 1492
  > length(d)
  [1] 1

If it were my function, I would also get rid of the part that deals with
the threshold
and direction of the inequality and tell the user to use f(data <= 0.3)
instead of
f(data, .3, "below").  I would also make the spell length an argument
instead of
fixing it at 2.  E.g.

   > f3 <- function (condition, spellLength = 2)
   {
   stopifnot(is.logical(condition), !anyNA(condition))
   y <- rle(condition)
   sum(y$lengths[y$values] >= spellLength)
   }
   > f3( d >= .3 )
   [1] 1492
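The rle()-based counting in f3 can be checked by hand on a small vector:

```r
# f3 as defined above: count runs ("spells") of length >= spellLength
# in which the condition holds.
f3 <- function(condition, spellLength = 2) {
  stopifnot(is.logical(condition), !anyNA(condition))
  y <- rle(condition)
  sum(y$lengths[y$values] >= spellLength)
}

d <- c(1, 5, 6, 2, 7, 8, 9, 3)
f3(d >= 5)  # qualifying runs: (5,6) and (7,8,9), so the answer is 2
```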



Bill Dunlap
TIBCO Software
wdunlap tibco.com

On Sun, Jul 10, 2016 at 11:58 AM, Debasish Pai Mazumder 
wrote:

> Hi Everyone,
> Thanks for your help. It works. I have a similar problem when
> calculating the number of spells.
> I am also calculating spells (definition: a period of two or more days
> where x exceeds 70) in a similar way:
>
> new = apply(x, c(1,2,4), FUN = function(y) { fun.spell.deb(y, 70) })
>
> where fun.spell.deb.R:
>
> ## Calculate spell duration
> fun.spell.deb <- function(data, threshold = 1,
>                           direction = c("above", "below")) {
>   # coln <- grep(weather, names(data))
>   # var <- data[,8]
>   if (missing(direction)) { direction <- "above" }
>   if (direction == "below") { b <- (data <= threshold) }
>   else { b <- (data >= threshold) }
>   b[b == TRUE] = 1
>   y <- rle(b)
>   ans <- length(subset(y$lengths[y$values == 1],
>                        y$lengths[y$values == 1] >= 2))
>   return(ans)
> }
>
> Do you have any idea how to make the "apply" faster here?
>
> -Deb
>
>
> On Sat, Jul 9, 2016 at 3:46 PM, Charles C. Berry  wrote:
>
> > On Sat, 9 Jul 2016, Debasish Pai Mazumder wrote:
> >
> > I have 4-dimension array x(lat,lon,time,var)
> >>
> >> I am using "apply" to calculate over time
> >> new = apply(x,c(1,2,4),FUN=function(y) {length(which(y>=70))})
> >>
> >> This is very slow. Is there any way to make it faster?
> >>
> >
> > If dim(x)[3] << prod(dim(x)[-3]),
> >
> > new <-  Reduce("+",lapply(1:dim(x)[3],function(z) x[,,z,]>=70))
> >
> > will be faster.
> >
> > However, if you can follow Peter Langfelder's suggestion to use rowSums,
> > that would be best. Even using rowSums(aperm(x, c(1,2,4,3)) >= 70, dims = 3) and
> > paying the price of aperm() might be better.
> >
> > Chuck
> >
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] about smwrgraphs package

2016-07-10 Thread lily li
Has anyone used smwrGraphs package? I have some problems and think it may
be better to discuss if you have been using it. Thanks very much.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to make the "apply" faster

2016-07-10 Thread Debasish Pai Mazumder
Hi Everyone,
Thanks for your help. It works. I have a similar problem when
calculating the number of spells.
I am also calculating spells (definition: a period of two or more days
where x exceeds 70) in a similar way:

new = apply(x, c(1,2,4), FUN = function(y) { fun.spell.deb(y, 70) })

where fun.spell.deb.R:

## Calculate spell duration
fun.spell.deb <- function(data, threshold = 1,
                          direction = c("above", "below")) {
  # coln <- grep(weather, names(data))
  # var <- data[,8]
  if (missing(direction)) { direction <- "above" }
  if (direction == "below") { b <- (data <= threshold) }
  else { b <- (data >= threshold) }
  b[b == TRUE] = 1
  y <- rle(b)
  ans <- length(subset(y$lengths[y$values == 1],
                       y$lengths[y$values == 1] >= 2))
  return(ans)
}

Do you have any idea how to make the "apply" faster here?

-Deb


On Sat, Jul 9, 2016 at 3:46 PM, Charles C. Berry  wrote:

> On Sat, 9 Jul 2016, Debasish Pai Mazumder wrote:
>
> I have 4-dimension array x(lat,lon,time,var)
>>
>> I am using "apply" to calculate over time
>> new = apply(x,c(1,2,4),FUN=function(y) {length(which(y>=70))})
>>
>> This is very slow. Is there any way to make it faster?
>>
>
> If dim(x)[3] << prod(dim(x)[-3]),
>
> new <-  Reduce("+",lapply(1:dim(x)[3],function(z) x[,,z,]>=70))
>
> will be faster.
>
> However, if you can follow Peter Langfelder's suggestion to use rowSums,
> that would be best. Even using rowSums(aperm(x, c(1,2,4,3)) >= 70, dims = 3) and
> paying the price of aperm() might be better.
>
> Chuck
>
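Chuck's Reduce() alternative can be checked against the original apply() call on a toy array (the dimensions below are made up):

```r
set.seed(1)
# Toy 4-d array: lat x lon x time x var.
x <- array(sample(60:80, 2 * 3 * 5 * 2, replace = TRUE),
           dim = c(2, 3, 5, 2))

a1 <- apply(x, c(1, 2, 4), function(y) sum(y >= 70))  # original approach
a2 <- Reduce(`+`, lapply(1:dim(x)[3], function(z) x[, , z, ] >= 70))

all.equal(a1, a2, check.attributes = FALSE)  # TRUE: identical counts
```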

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] dependent p.values in R

2016-07-10 Thread Michael Friendly

Hello Fernando,

First, ask yourself what Gösta Ekman would have said if you had asked
him this question. He would have asked, "Does it make any difference to
your conclusion?" He might also have asked, "Did you do a visual test?
Did you plot your data as a QQ plot or a density plot?"

If the test doesn't make a difference to your conclusions, it is a waste
of your time (and ours) to worry about how to cite a 'combined p-value'
(if such an animal exists), presumably to more decimal places than is
worth worrying about.

If the test *does* make a difference about normality, then ask yourself
whether the degree of non-normality impedes your substantive
conclusions.

HTH,
Michael
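The visual checks suggested above take a few lines of base graphics:

```r
set.seed(42)
x <- rnorm(200)   # hypothetical sample in place of the real data

op <- par(mfrow = c(1, 2))      # two panels side by side
qqnorm(x); qqline(x)            # points hugging the line suggest normality
plot(density(x), main = "Density of x")
par(op)
```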

On 7/10/16 3:39 AM, Fernando Marmolejo Ramos wrote:

hi marc

say i have a vector with some x number of observations

x = c(23, 56, 123, . )

and i want to know how normal it is

as there are many normality tests, i want to combine their p.values

so, suppose i use shapiro.wilk, anderson darling and jarque bera and each will 
give a pvalue

i could simply average those p-values but to my knowledge that approach
is biased

so i thought, in the same way there is a method to combine independent pvalues 
(e.g. stouffer method); is there a way to combine dependent pvalues?

best

f


Fernando Marmolejo-Ramos
Postdoctoral Fellow
Gösta Ekman Laboratory
Department of Psychology
Stockholm University
Frescati Hagväg 9A, Stockholm 114 19
Sweden

ph = +46 08-16 46 07
website = http://sites.google.com/site/fernandomarmolejoramos/




From: Marc Girondot 
Sent: Sunday, 10 July 2016 8:25 AM
To: r-help@r-project.org; Fernando Marmolejo Ramos
Subject: Re: [R] dependent p.values

Le 09/07/2016 à 17:17, Fernando Marmolejo Ramos a écrit :

hi all


does any one know a method to combine dependent p.values?



First, this is a stats question and not an R question, so you would have
a better chance asking it on a StackExchange forum.
Second, your question is difficult to answer without context: why are
the p-values dependent? Do they come from the same dataset? Or are they
linked by an external source? For both of these situations, combining
dependent p-values seems strange to me.
When you ask your question on StackExchange, please be more precise.
Sincerely,
Marc Girondot

--
__
Marc Girondot, Pr

Laboratoire Ecologie, Systématique et Evolution
Equipe de Conservation des Populations et des Communautés
CNRS, AgroParisTech et Université Paris-Sud 11 , UMR 8079
Bâtiment 362
91405 Orsay Cedex, France

Tel:  33 1 (0)1.69.15.72.30   Fax: 33 1 (0)1.69.15.73.53
e-mail: marc.giron...@u-psud.fr
Web: http://www.ese.u-psud.fr/epc/conservation/Marc.html
Skype: girondot




__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] dependent p.values in R

2016-07-10 Thread Ben Bolker
Fernando Marmolejo Ramos  psychology.su.se>
writes:

> 
> hi marc
> 
> say i have a vector with some x number of observations
> 
> x = c(23, 56, 123, . )
> 
> and i want to know how normal it is
> 
> as there are many normality tests, i want to combine their p.values
> 
> so, suppose i use shapiro.wilk, anderson darling and
>jarque bera and each will give a pvalue
> 
> i could simply average those p-values but to my knowledge 
>that approach is biased
> 
> so i thought, in the same way there is a method to combine
>independent pvalues (e.g. stouffer method); is
> there a way to combine dependent pvalues?
> 
> best
> 
> f
> 

  Yikes.  There is extensive discussion, e.g. at
http://tinyurl.com/normtests , that suggests that much of the
time (if not always) formal statistical hypothesis tests for
normality are misguided.  Combining p-values from different tests
feels like compounding the issue.  In any case, I would definitely
say that this is a question for CrossValidated
(http://stats.stackexchange.com), rather than r-help ...

  Ben Bolker
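For reference, the Stouffer method Fernando mentions is only a few lines of base R for *independent* p-values; a dependent-p-value variant would additionally need an estimate of the correlation between the test statistics, which is the hard part:

```r
# Stouffer's method: combine independent one-sided p-values via z-scores.
stouffer <- function(p) {
  z <- qnorm(p, lower.tail = FALSE)               # p-value -> z-score
  pnorm(sum(z) / sqrt(length(p)), lower.tail = FALSE)
}

stouffer(c(0.5, 0.5, 0.5))  # middling p-values combine to 0.5
```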

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Statistical Test

2016-07-10 Thread Julia Edeleva
Dear R-community,

Thanks for replying to my previous post. I would need some more help,
though.

I am performing statistical analysis on children's accuracy rates as a
dependent variable and two predictor variables with two levels each (syntax
- subject vs object; internal NP position - pre vs post).

As an outcome of my study, children committed 3 types of errors. I want to
compare whether children committed significantly more errors of one type as
compared to the other two types, i.e. test the scale *error 1 > error 2 >
error 3 (">" is "more than").*  Which statistical test is most appropriate?

Furthermore, I want to know whether one particular type of error is more
common in one experimental condition than in the other, i.e. test
whether *error
1 in condition 1 is more common than error 1 in condition 2*.

Thanks a lot

Julia Edeleva

*Compare different types of errors in children's performance. Statistical
Test? - ResearchGate*. Available from:
https://www.researchgate.net/post/Compare_different_types_of_errors_in_chidrens_performance_Statistical_Test
[accessed Jul 10, 2016].

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] dependent p.values

2016-07-10 Thread Marc Girondot

Le 09/07/2016 à 17:17, Fernando Marmolejo Ramos a écrit :

hi all


does any one know a method to combine dependent p.values?


First, this is a stats question and not an R question, so you would have
a better chance asking it on a StackExchange forum.
Second, your question is difficult to answer without context: why are
the p-values dependent? Do they come from the same dataset? Or are they
linked by an external source? For both of these situations, combining
dependent p-values seems strange to me.

When you ask your question on StackExchange, please be more precise.
Sincerely,
Marc Girondot

--
__
Marc Girondot, Pr

Laboratoire Ecologie, Systématique et Evolution
Equipe de Conservation des Populations et des Communautés
CNRS, AgroParisTech et Université Paris-Sud 11 , UMR 8079
Bâtiment 362
91405 Orsay Cedex, France

Tel:  33 1 (0)1.69.15.72.30   Fax: 33 1 (0)1.69.15.73.53
e-mail: marc.giron...@u-psud.fr
Web: http://www.ese.u-psud.fr/epc/conservation/Marc.html
Skype: girondot

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] column name changes

2016-07-10 Thread Jim Lemon
Hi Kristi,
The period is there for a reason. If you keep the space and then try to
extract that column, like this:
x<-data.frame(a=1:3,b=2:4,c=3:5)
> names(x)[3]<-"dif of AB"
> x
 a b dif of AB
1 1 2 3
2 2 3 4
3 3 4 5
> x$dif of AB
Error: unexpected symbol in "x$dif of"
> x$'dif of AB'
[1] 3 4 5

you will have to quote the column name every time.

Jim


On Sun, Jul 10, 2016 at 3:34 PM, Kristi Glover
 wrote:
> Hi R user,
> I wanted to change a column name to a new one, but it comes back with
> "." where there was a space. Is there any way to keep my format, with
> the space?
> Here what I found
>
>
> Images<-stack(imageA,imageB,imageC)
> names(Images)[3]<-c("dif of AB")
> head(Images)
> It gives the name of column 3 as "dif.of.AB", but I wanted "dif of AB".
>
> I don't want "." substituted for the spaces.
>
>
> Any suggestions?
>
> Thanks
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] [FORGED] column name changes

2016-07-10 Thread Rolf Turner

On 10/07/16 17:34, Kristi Glover wrote:

Hi R user,
I wanted to change a column name to a new one, but it comes back with "." where
there was a space. Is there any way to keep my format with spaces?
Here is what I found:


Images<-stack(imageA,imageB,imageC)
names(Images)[3]<-c("dif of AB")
head(Images)
It gives column 3 the name "dif.of.AB", but I wanted it to be "dif of AB"

I don't want the "." in place of the spaces.


Any suggestions?



(1) Forget about what you "don't want" and leave the dots be. Spaces in 
variable/column names are an abomination, tolerated only by the great 
unwashed (i.e. users of Windoze).


(2) See fortune(37).

(3) It doesn't happen to me:

set.seed(42)
Images <- data.frame(x=rnorm(10),y=rnorm(10),z=rnorm(10))
names(Images)[3] <- "dif of AB"
names(Images)

[1] "x" "y" "dif of AB"


There may be some setting that enforces "syntactically valid" names, but 
I see no such setting associated with names().  (There *is* such a 
setting associated with data.frame() --- are you telling the truth about 
how you formed the new names of "Images"?)
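The data.frame() setting alluded to above can be sketched like this (check.names is a documented argument of data.frame(); and if Images was actually built with raster::stack() rather than data.frame(), that class may well sanitize layer names on assignment, which would explain the dots):

```r
# data.frame() runs names through make.names() unless check.names = FALSE
d1 <- data.frame("dif of AB" = 3:5)                       # name becomes "dif.of.AB"
d2 <- data.frame("dif of AB" = 3:5, check.names = FALSE)  # name stays "dif of AB"
names(d1)
names(d2)
```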


cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



Re: [R] dependent p.values in R

2016-07-10 Thread Fernando Marmolejo Ramos
hi marc

say i have a vector with some x number of observations

x = c(23, 56, 123, ...)

and i want to know how normal it is

as there are many normality tests, i want to combine their p.values

so, suppose i use Shapiro-Wilk, Anderson-Darling and Jarque-Bera and each will
give a p-value

i could simply average those p-values but to my knowledge that approach is
biased

so i thought, in the same way there is a method to combine independent p-values
(e.g. Stouffer's method); is there a way to combine dependent p-values?
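As a point of reference, the Stouffer method mentioned above for *independent* p-values fits in a few lines (a minimal sketch; the function name stouffer is mine, and note it does not adjust for dependence, which is exactly the open question here):

```r
# Stouffer's Z-method for combining independent one-sided p-values
stouffer <- function(p) {
  z <- qnorm(p, lower.tail = FALSE)            # convert each p-value to a z-score
  pnorm(sum(z) / sqrt(length(z)),              # sum of z's is N(0, k) under the null
        lower.tail = FALSE)                    # back to a combined p-value
}
stouffer(c(0.04, 0.10, 0.07))
```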

best

f


Fernando Marmolejo-Ramos
Postdoctoral Fellow
Gösta Ekman Laboratory
Department of Psychology
Stockholm University
Frescati Hagväg 9A, Stockholm 114 19
Sweden

ph = +46 08-16 46 07
website = http://sites.google.com/site/fernandomarmolejoramos/




From: Marc Girondot 
Sent: Sunday, 10 July 2016 8:25 AM
To: r-help@r-project.org; Fernando Marmolejo Ramos
Subject: Re: [R] dependent p.values

On 09/07/2016 at 17:17, Fernando Marmolejo Ramos wrote:
> hi all
>
>
> does any one know a method to combine dependent p.values?
>
>
First, this is a stats question rather than an R question, so you would have a
better chance asking it on the StackExchange forums.
Second, your question is difficult to answer without context: why are the
p-values dependent? Do they come from the same dataset? Or are they linked by
an external source? In both of these situations, combining dependent p-values
seems strange to me.
When you ask your question on StackExchange, please be more precise.
Sincerely,
Marc Girondot

--
__
Marc Girondot, Pr

Laboratoire Ecologie, Systématique et Evolution
Equipe de Conservation des Populations et des Communautés
CNRS, AgroParisTech et Université Paris-Sud 11 , UMR 8079
Bâtiment 362
91405 Orsay Cedex, France

Tel:  33 1 (0)1.69.15.72.30   Fax: 33 1 (0)1.69.15.73.53
e-mail: marc.giron...@u-psud.fr
Web: http://www.ese.u-psud.fr/epc/conservation/Marc.html
Skype: girondot




Re: [R] Reading a large directory of compressed zips into a data frame

2016-07-10 Thread David Winsemius

> On Jul 9, 2016, at 10:59 PM, Giles Bischoff  wrote:
> 
> Hello R Programmers!
> I was wondering if y'all could help me. I'm trying to read data from a
> directory containing 332 compressed zips, each with about 1000 lines (or
> more) of data, into a data frame. I have set the working directory to the
> folder containing the zips. I figured that, when I used the
> dir() function, I could do something like d1 <- read.csv(dir()[1:332]) to
> read all the data and then find the mean of, say, "columnA" in that data
> table using something like mean(d1$columnA). So far, though, this has not
> worked. Any ideas?

This is highly likely to be one of the homework problems in one of Peng's Johns 
Hopkins online data-management courses. Questions from that course have been 
posted many times on StackOverflow and many of them have been answered there as 
well. R-help, however, has a no-homework policy.


> Sincerely,
> Giles
> 
>   [[alternative HTML version deleted]]
> 

David Winsemius
Alameda, CA, USA



[R] Reading a large directory of compressed zips into a data frame

2016-07-10 Thread Giles Bischoff
Hello R Programmers!
I was wondering if y'all could help me. I'm trying to read data from a
directory containing 332 compressed zips, each with about 1000 lines (or
more) of data, into a data frame. I have set the working directory to the
folder containing the zips. I figured that, when I used the
dir() function, I could do something like d1 <- read.csv(dir()[1:332]) to
read all the data and then find the mean of, say, "columnA" in that data
table using something like mean(d1$columnA). So far, though, this has not
worked. Any ideas?
Sincerely,
Giles
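For what it's worth, a likely reason the attempt above fails is that read.csv() expects a single file path, not a vector of 332 of them. A minimal sketch of the read-then-bind pattern (assuming the files are plain CSVs despite the .zip extension -- a genuine zip archive would need unz() first -- and "columnA" is just the column name used in the question):

```r
# read.csv() reads one file at a time: read each file, then bind the rows together
files <- dir(pattern = "\\.csv$")              # adjust the pattern to match the files
d1 <- do.call(rbind, lapply(files, read.csv)) # one data frame per file, row-bound
mean(d1$columnA, na.rm = TRUE)                # mean of the column of interest
```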

