Hey, all!
I've got a report that uses datatable from DT to create an rmarkdown html that
looks great as an html but when I try to print it, to a printer, or to a pdf
the colors I've assigned to cells are not displaying. I'm using chrome and I've
clicked on the Background graphics button there,
tle and you will
see that edgeR is available. You will still have to learn a little about
edgeR analysis, so reading the vignette will be very helpful.
Also, for the comparisons you want to do, statistical help is
recommended.
Matthew
On 8/22/21 2:13 PM, Anas Jamshed wrote:
nging df.)
df_final <- full_join(df, df1, by = c("Sample", "Plot"))
Matthew
On 6/29/21 7:15 PM, Jim Lemon wrote:
External Email - Use Caution
Hi Esthi,
Have you tried something like:
df2<-merge(df,df1,by.x="Sample",by.y="Plot",all.y=TRUE)
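Jim's merge() line can be sketched end-to-end with toy data (the thread's real df/df1 contents are unknown, so these columns are made up):

```r
# Toy stand-ins for the thread's df and df1 (real contents unknown)
df  <- data.frame(Sample = c("s1", "s2", "s3"), x = 1:3)
df1 <- data.frame(Plot   = c("s2", "s3", "s4"), y = 4:6)

# all.y = TRUE keeps every row of df1; df rows without a match get NA
m <- merge(df, df1, by.x = "Sample", by.y = "Plot", all.y = TRUE)
m
#   Sample  x y
# 1     s2  2 4
# 2     s3  3 5
# 3     s4 NA 6
```

dplyr's full_join(df, df1, by = c("Sample" = "Plot")) is the analogous outer join that keeps unmatched rows from both sides.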
Thi
By the way, I thought I had checked my e-mail before sending it, but my
last e-mail had an unfortunate typo: an 'I' that originally belonged
to the beginning of a deleted sentence.
Matthew
On 11/17/20 1:54 AM, Matthew McCormack wrote:
No
e explored. A once-in-a-while dive into a practical application of
statistics that has current interest can be fun and enlightening for
those interested.
Matthew
On 11/16/20 9:01 PM, Abby Spurdle wrote:
I've come to the conclusion this whole
both agree with Benford's Law. However, he uses the last digit
and not the first. A word of caution before you click on that link: he
uses Excel !
Matthew
On 11/13/20 9:59 PM, Rolf Turner wrote:
On Thu, 12 Nov 2020 01:23:06 +0100
Martin Møller Skarbiniks P
Benford Analysis for Data Validation and Forensic Analytics
Provides tools that make it easier to validate data using Benford's Law.
https://www.rdocumentation.org/packages/benford.analysis/versions/0.1.5
Matthew
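The core of a first-digit Benford check is small enough to sketch in base R (made-up lognormal data; the benford.analysis package linked above wraps this up with formal tests and plots):

```r
# Expected first-digit frequencies under Benford's Law: log10(1 + 1/d)
benford_expected <- log10(1 + 1 / (1:9))

set.seed(1)
amounts <- rlnorm(5000, meanlog = 5)   # made-up positive amounts

# Leading digit of each value
first <- floor(amounts / 10^floor(log10(amounts)))

observed <- tabulate(first, nbins = 9) / length(first)
round(cbind(digit = 1:9, benford = benford_expected, observed = observed), 3)
```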
On 11/9/20 9:23 AM, Alexandra Thorn wrote:
om/statistical-anomalies-in-biden-votes-analyses-indicate_3570518.html?utm_source=newsnoe&utm_medium=email&utm_campaign=breaking-2020-11-08-5
Matthew
On 11/8/20 11:25 PM, Bert Gunter wrote:
>
> NYT had interactive maps that reported votes by c
Excel and then inspect
the logFC and p-values for the top 1250 genes.
Matthew
On 8/1/20 1:13 PM, Jeff Newmiller wrote:
>
> https://www.bioconductor.org/help/
>
> On August 1, 2020 4:01:08 AM PDT, Anas Jamshed
> wrote:
>> I choo
ate it with the column name ATG. David's result
complements Jim's, and both end up being very helpful to me.
Thanks again to both of you for your time and help.
Matthew
On 5/2/2019 8:40 PM, Jim Lemon wrote:
>
> Hi again,
>
at a time from each
entry in the character matrix, but have not got anything near working yet.
Matthew
On 4/30/2019 6:29 PM, David L Carlson wrote
> If you read the data frame with read.csv() or one of the other read()
> functions, use the as.is=TRUE argument to prevent conversion to factors
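A quick, self-contained sketch of that read.csv() behaviour, round-tripping through a temporary file:

```r
# Write a small CSV, then read it back without factor conversion
tf <- tempfile(fileext = ".csv")
write.csv(data.frame(id = c("AT1G01010", "AT2G85403"), n = 1:2),
          tf, row.names = FALSE)

d <- read.csv(tf, as.is = TRUE)   # or stringsAsFactors = FALSE
class(d$id)
# [1] "character"
```

Since R 4.0.0, stringsAsFactors defaults to FALSE, so this mainly matters on older versions.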
ith the greatest number of hits. The full joins
I can do with dplyr, but getting up to that point seems rather difficult.
This would get me what my ultimate goal would be; each Regulator is a
column name (152 columns) and a given row has either NA or the same hit.
This seems ve
header of
column 2 and AT2G85403 is row 1 of column 2, etc.
I have tried playing around with strsplit(TF2list[2:2]) and
strsplit(as.character(TF2list[2:2])), but I am getting nowhere.
Matthew
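Two things are probably tripping up the strsplit() attempts above: with a data frame, TF2list[2:2] is still a one-column data frame (TF2list[[2]] extracts the column itself), and strsplit() returns a list. A sketch with made-up entries, since the real format of TF2list is not shown:

```r
# Made-up stand-in for the second column of TF2list
col2 <- c("AT1G01010 AT1G01020", "AT2G85403")

parts <- strsplit(as.character(col2), " ")  # one list element per input string
parts[[1]]
# [1] "AT1G01010" "AT1G01020"

unlist(parts)   # flatten to a single character vector
```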
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE
You are not late to the party. And you solved it!
Thank you very much. You just made my PhD a little closer to reality!
Matt
*Matthew R. Snyder*
*~*
PhD Candidate
University Fellow
University of Toledo
Computational biologist, ecologist, and bioinformatician
Sponsored Guest Researcher at NOAA PMEL, Seattle, WA.
I tried this too:
xyplot(mpg ~ wt | cyl, data=mtcars,
# groups = carb,
subscripts = TRUE,
col = as.factor(mtcars$gear),
pch = as.factor(mtcars$carb)
)
Same problem...
unique in the whole plot. But when you add cyl as a factor. Those two
points are only unique within their respective panels, and not across the
whole plot.
Matt
for single plots in
ggplot. Maybe I should contact the authors of lattice and see if this is
something they can help me with or if they would like to add this as a
feature in the future...
Matt
ilter rule match"
Thanks,
Matt
, Jim, for the work you put into this.
Matthew
On 3/21/2019 11:01 PM, Jim Lemon wrote:
Hi Matthew,
Remember, keep it on the list so that people know the status of the request.
I couldn't get this to work with the "_source_info_" variable.
the same
row.
# I could rename the colname to 'AGI' so that I can join by 'AGI',
# but then I would lose the name of the list.
# In the final dataframe, I want to know the name of the original list
# the column was made from.
Matthew
me across that specific message before, and am not exactly sure how to
interpret its meaning. What exactly is this error message trying to tell
me? Any suggestions or insights are appreciated!
Thank you all,
Matthew Campbell
> library (ElemStatLearn)
> library(kknn)
> data(zip.train)
Hi Katherina.
Good point you make. What makes your IT department happy with the use of
RStudio Server? What are the safe packages?
Can I trust your answer? :)
John.
On 9 Aug 2018 10:38, "Fritsch, Katharina (NNL) via R-help" <
r-help@r-project.org> wrote:
> Hiya,
> I work in a very security co
So there is probably a command that resets the capture variables as I call
them. No doubt someone will write what it is.
On 9 Aug 2018 10:36, "john matthew" wrote:
> Hi Marc.
> For question 1.
> I know in Perl that regular expressions when captured can be saved if not
> o
Hi Marc.
For question 1.
I know in Perl that regular expressions when captured can be saved if not
overwritten. \\1 is the capture variable in your R examples.
So the 2nd regular expression does not match but \\1 still has 1980
captured from the previous expression, hence the result.
Maybe if you
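John's point about \\1 can be shown concretely: in R the backreference exists only inside the one sub()/gsub() call, and a non-matching pattern simply returns the input unchanged, which can look like a stale Perl-style capture:

```r
x <- "1980-05-17"

# \\1 refers to the first capture group of this same pattern
sub("^(\\d{4}).*", "\\1", x)
# [1] "1980"

# No match: sub() returns the input untouched; nothing was "captured"
sub("^(foo)", "\\1bar", x)
# [1] "1980-05-17"
```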
Hello Laurence.
Taking a pragmatic approach.
If the data is so valuable and secret but also needs some analysis in R,
here are suggested steps to minimise security risks.
1. Plan the analysis up front: exactly what you want, and the outcomes.
2. Take a laptop with Internet, install R and all p
ner.
>
> ?maintainer
>
> Cheers,
> Bert
>
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along and
> sticking things into it."
> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>
> On
Hello all,
I am using the samplesize package (n.ttest function) to calculate
number of samples per group power analysis (t-tests with unequal
variance).
I can break this n.ttest function from the samplesize package,
depending on the standard deviations I input.
This works very well.
n.ttest(sd1
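For reference, a call along these lines (argument names as I recall them from ?n.ttest; check the help page, since the interface may differ across samplesize versions):

```r
library(samplesize)

# Two-sample t-test sample size, unequal variances (Welch), unbalanced groups
n.ttest(power = 0.8, alpha = 0.05, mean.diff = 0.8,
        sd1 = 0.83, sd2 = 2.65, k = 0.7 / 0.3,
        design = "unpaired", fraction = "unbalanced",
        variance = "unequal")
```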
ble example (check it with package reprex) and try again here, or
> ask one of the maintainers of that package.
> --
> Sent from my phone. Please excuse my brevity.
>
> On October 2, 2017 8:56:46 AM PDT, Matthew Keller
> wrote:
> >Hi all,
> >
> >I used to use fwri
loaded via a namespace (and not attached):
[1] tools_3.2.0 chron_2.3-47 tcltk_3.2.0
--
Matthew C Keller
Asst. Professor of Psychology
University of Colorado at Boulder
www.matthewckeller.com
Hi all,
I noticed that the scaled Schoenfeld residuals produced by
residuals.coxph(fit, type="scaledsch") were different from those returned
by cox.zph for a model where robust standard errors have been estimated.
Looking at the source code for both functions suggests this is because
residuals.cox
Hi Rolf,
Thanks for the warning. I think because my initial efforts used the
assign function, that Jim provided his solution using it.
Any suggestions for how it could be done without assign() ?
Matthew
On 7/11/2016 6:31 PM, Rolf Turner wrote:
On 12/07/16 10:13, Matthew wrote:
Hi
Hi Jim,
Wow ! And it does exactly what I was looking for. Thank you very much.
That assign function is pretty nice. I should become more familiar with it.
Matthew
On 7/11/2016 5:59 PM, Jim Lemon wrote:
Hi Matthew,
This question is a bit mysterious as we don't know what the object
ame of 265 observations of 2666 variables, if this data
structure makes things easier.
My initial attempts are not working. Starting with a test data structure
that is a little simpler I have tried:
for (i in 1:4)
{ ATG <- tTargTFS[1, i]
assign(cat(ATG)
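On the assign() question: a named list usually does the same job without polluting the workspace, and assign(cat(ATG), ...) fails anyway because cat() prints its argument and returns NULL. A sketch with a made-up tTargTFS (the real object is a 265 x 2666 structure):

```r
# Made-up stand-in for tTargTFS
tTargTFS <- data.frame(a = 1:2, b = 3:4, c = 5:6, d = 7:8)

out <- list()
for (i in seq_along(tTargTFS)) {
  nm <- names(tTargTFS)[i]
  out[[nm]] <- tTargTFS[[i]]   # store under the name instead of assign(nm, ...)
}
out[["b"]]
# [1] 3 4
```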
Thank you very much, Dan.
These work great. Two more great answers to my question.
Matthew
On 5/24/2016 4:15 PM, Nordlund, Dan (DSHS/RDA) wrote:
You have several options.
1. You could use the aggregate function. If your data frame is called DF, you
could do something like
with(DF
Thanks, Tom. I was making a mistake looking at your example and that's
what my problem was.
Cool answer, works great. Thank you very much.
Matthew
On 5/24/2016 4:23 PM, Tom Wright wrote:
> Don't see that as being a big problem. If your data grows then dplyr
> supports connect
> x <- data.frame(Length=c(321,350,340,180,198),
> ID=c(rep('A234',3),'B123','B225') )
> x %>% group_by(ID) %>% summarise(m=mean(Length))
>
>
>
> On Tue, May 24, 2016 at 3:46 PM, Matthew
> <mailto:mccorm...@molbio.mgh
report the mean to get this:
Length Identifier
337 A234
180 B123
198 B225
Matthew
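With the thread's data, the aggregate() route mentioned above reproduces exactly that table:

```r
x <- data.frame(Length = c(321, 350, 340, 180, 198),
                ID = c(rep("A234", 3), "B123", "B225"))

aggregate(Length ~ ID, data = x, FUN = mean)
#     ID Length
# 1 A234    337
# 2 B123    180
# 3 B225    198
```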
d <- cbind(rep(index[,2], ind_len), ind)
> #
> # new indices
> new.ind <- cbind(rep(index[,1], ind_len), ind)
> #
> # create the new matrix
> result <- matrix(NA_integer_, max(index[,1]), max(index[,4]))
> #
> # fill the new matrix
> result[new.ind] <- old.mat
or loop would be prohibitively
slow.
I may resort to unix tools and use a shell script, but wanted to first see
if this is doable in R in a fast way.
Thanks in advance!
Matt
don't understand
> exactly what you want, but does sum work? If there is more than one record
> for a given set of factors the sum is the sum of the counts. If only one
> record, then the sum is the same as the original number.
>
> On Tue, Sep 1, 2015 at 10:00 AM, Matthew Pick
Hi,
As I need R to speak to Bloomberg (which only runs on Windows), I'm
running Windows 7 via VMware Fusion on my Mac.
I think I am having permission problems, as I cannot use install.packages,
and cannot change .libPaths via either a .Rprofile or Rprofile.site.
I've posted more detail in this sup
do with them.
Matthew
On 7/9/2015 5:31 PM, Greg Snow wrote:
If you want you script to wait until you have a value entered then you
can use the tkwait.variable or tkwait.window commands to make the
script wait before continuing (or you can bind the code to a button so
that you enter the value
Wow ! Very nice. Thank you very much, John. This is very helpful and
just what I need.
Yes, I can see that I should have paid attention to tcltk before going
to tcltk2.
Matthew
On 7/8/2015 8:37 PM, John Fox wrote:
Dear Matthew,
For file selection, see ?tcltk::tk_choose.files or ?tcltk
would like a
way for the user to input that information as the script ran.
Matthew McCormack
m = 0) :
argument is not numeric or logical: returning NA
I get that rock[2,] is itself a data.frame of mode list, but why the
inconsistency between functions? How can you figure this out from, e.g.,
?mean
?sum
Thanks in advance,
Matt
plit columns.
You may have to do some 'cleaning' of individual cells, such as removing
leading and/or trailing spaces. A lot of this can be done with the ASAP
Utilities 'Text' pull-down menu.
Matthew
On 1/21/2015 3:31 PM, Dr Polanski wrote:
Hi all!
Sorry to bother you, I am tr
get if I just type at the command line:
/usr/local/R-3.1.1/bin/R.
Matthew
go.
Matthew
On 8/13/2014 7:40 PM, William Dunlap wrote:
> Previously you asked
>> A second question: is this the best way to make a list
>> of data frames without having to manually type c(dataframe1, dataframe2,
>> ...) ?
> If you use 'c' there you
something like: dplyr::rbind_all(list of data frames), but
when I try dplyr::rbind_all(lsDataFrame(ls())), I get the error: object
at index 1 not a data.frame. So, I am going to have to learn some more
about lists in R before proceeding.
Thank you for your help and code.
Matthew
On 8
t
various times of the day, so I want to automate the process so it does
not need anyone to manually sit at the computer and type the list of
data frames.
Matthew
On 8/13/2014 3:06 PM, jim holtman wrote:
> Here is a function that I use that might give you the res
. Which objects are data frames ? How to then make a list of
these data frames.
A second question: is this the best way to make a list of data
frames without having to manually type c(dataframe1, dataframe2, ...) ?
Matthew
ackground noise' in the
measurement process, but it is somewhat arbitrary.
Matthew
On 7/28/2014 2:43 AM, PIKAL Petr wrote:
Hi
I like to use logical values directly in computations if possible.
yourData[,10] <- yourData[,9]/(yourData[,8]+(yourData[,8]==0))
Logical values are automagi
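Petr's trick in miniature (toy numbers; column 8 is the divisor, column 9 the numerator):

```r
yourData <- data.frame(matrix(0, nrow = 3, ncol = 10))
yourData[, 8] <- c(2, 0, 4)    # divisor, with one zero
yourData[, 9] <- c(10, 5, 8)   # numerator

# (yourData[,8] == 0) is 1 exactly where the divisor is 0, so the
# denominator becomes 1 there and the row keeps its numerator unchanged
yourData[, 10] <- yourData[, 9] / (yourData[, 8] + (yourData[, 8] == 0))
yourData[, 10]
# [1] 5 5 2
```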
rom Perl to Python or Java etc., but it
seems like R programming works differently.
Matthew
On 7/25/2014 12:06 AM, Peter Alspach wrote:
Tena koe Matthew
" Column 10 contains the result of the value in column 9 divided by the value in
column 8. If the value in column 8==0, then the division
On 7/24/2014 8:52 PM, Sarah Goslee wrote:
> Hi,
>
> Your description isn't clear:
>
> On Thursday, July 24, 2014, Matthew <mailto:mccorm...@molbio.mgh.harvard.edu>> wrote:
>
> I am coming from the perspective of Excel and VBA scripts, but I
>
me started ?
Matthew
[1] 0
> > z[2]==0.15
> [1] TRUE
>
> Peter
>
> On Thu, Jul 3, 2014 at 11:28 AM, Matthew Keller
> wrote:
> > Hi all,
> >
> > A bit stumped here.
> >
> > z <- seq(.05,.85,by=.1)
> > z==.05 #good
> > [1] TRUE FALSE FALSE FALSE
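This is the floating-point comparison trap from R FAQ 7.31: seq() accumulates binary rounding error, so == against a decimal literal is unreliable. Comparing with a tolerance is the robust route:

```r
z <- seq(.05, .85, by = .1)

# Robust alternatives to z == .45
which(abs(z - .45) < 1e-8)
# [1] 5
isTRUE(all.equal(z[5], .45))
# [1] TRUE
```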
darwin9.8.0"
$status
[1] ""
$major
[1] "2"
$minor
[1] "13.1"
$year
[1] "2011"
$month
[1] "07"
$day
[1] "08"
$`svn rev`
[1] "56322"
$language
[1] "R"
$version.string
[1] "R version 2.13.1 (2011-07-08)"
Dear all,
I am working through a problem at the moment and have got stuck. I have
searched around on the help list for assistance but could not find anything -
but apologies if I have missed something. A dummy example of my problem is
below. I will continue to work on it, but any help would be
the scale file.
On Fri, Feb 21, 2014 at 1:50 PM, Matthew Wood wrote:
> I have a trained SVM that I want to export with write.svm and
> eventually use in libSVM. Some of my features are factors. Standard
> libSVM only works with features that are doubles, so I need to figure
> ou
I have a trained SVM that I want to export with write.svm and
eventually use in libSVM. Some of my features are factors. Standard
libSVM only works with features that are doubles, so I need to figure
out how my features should be represented and used.
How does e1071 treat factors in an SVM? For fe
Hi, great, that was easy. I feel like a bit of a fool for not figuring this
out.
TO LOAD ALL SAVED MODELS AT ONCE:
library(biomod2)
# change directory to where you stored your original models (My Documents
# is the default if you did not specify), then go into the models folder
*# TO LOAD ALL SAVED MODELS
I have been struggling with this same problem. I always have to re-run.
PLEASE HELP!!
I have however figured out the whole data-format issue & am now able to
save grid files for use in other GIS programs after they are re-exported.
On Thursday, August 15, 2013 1:32:31 AM UTC-7, Jenny Williams w
>
> I think it is a problem with your directory setting & changing your
> directory.
When you make your enviro stack you set your directory to:
setwd("V:/BIOCLIM")
Then when you import your species coordinates and presence/absence status
you change your directory to:
setwd("C:/Users/Linds
Completing the reverse engineering effort is the principal barrier to fully
incorporating the sas7bdat file format. Of course, SAS may change the
format specification at any time, and without our knowledge. The sas7bdat
package is a repository for the results of our (myself, Clint Cummins, and
seve
Any insight on issues leading to the following error modes would be
appreciated.
#Version_1 CALL
alphaDivOTU <- ggplot(data=alphaDivOTU_pt1to5, aes(y = Num.OTUs,x =
Patient,fill = Timepoint)) +
geom_bar(position = position_dodge) +
theme(text = element_text(family = 'Helvetica-Narrow',
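One likely culprit in the call above (an assumption; the actual error text isn't shown): position = position_dodge passes the function object itself, where ggplot2 expects either the string "dodge" or a call, position_dodge(). A minimal sketch on built-in data:

```r
library(ggplot2)

p <- ggplot(mtcars, aes(x = factor(cyl), fill = factor(am))) +
  geom_bar(position = position_dodge())   # note the (), unlike the call above
```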
Awesome! Thanks for the fix Dennis, and thanks for clearing up aes() too.
It makes sense now.
Cheers,
MVS
=
Matthew Van Scoyoc
Graduate Research Assistant, Ecology
Wildland Resources Department <http://www.cnr.usu.edu/wild/> & Ecology
Center <http://www.usu.edu/ecology/>
Qu
No dice. I still get the "10" legend element.
Thanks for the quick reply.
Cheers,
MVS
=====
Matthew Van Scoyoc
Graduate Research Assistant, Ecology
Wildland Resources Department <http://www.cnr.usu.edu/wild/> & Ecology
Center <http://www.usu.edu/ecology/>
Quinney Co
ank(),
>legend.key = element_rect(color = "white")
>)
I have been messing around with
> theme(..., legend.key.size = unit(1, "cm"))
but I keep getting the error "could not find function unit". I'm not sure
why, isn't unit supposed t
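unit() lives in the grid package, which is not attached by default, hence "could not find function unit". Attaching grid (or namespacing the call) is the usual fix:

```r
library(grid)      # provides unit()
library(ggplot2)

p <- ggplot(mtcars, aes(wt, mpg)) +
  geom_point() +
  theme(legend.key.size = unit(1, "cm"))   # or grid::unit(1, "cm")
```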
Hello everybody,
I have to save a 100-iteration computation to a file every 5 iterations
until the end.
I first create a vector A of 100 elements for the 100 iterations, and I want
to update A every 5 iterations.
I use "save" but it doesn't work.
Does someone have an idea? I need help.
Cheers.
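One way to sketch this (the real computation is unknown, so i^2 stands in): checkpoint A with save() whenever the iteration counter hits a multiple of 5, overwriting the same file each time.

```r
A <- numeric(100)
checkpoint <- file.path(tempdir(), "checkpoint.RData")

for (i in 1:100) {
  A[i] <- i^2                      # stand-in for the real computation
  if (i %% 5 == 0) {
    save(A, file = checkpoint)     # load(checkpoint) later restores A
  }
}
```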
-
you have tried the maintainer first but
didn't hear from them; i.e., r-help isn't for support about packages.
I don't follow r-help, so please continue to cc me if you reply.
Matthew
On 25/09/13 00:47, Jonathan Dushoff wrote:
I got bitten badly when a variable I created for the
:39 AM, Uwe Ligges wrote:
On 02.06.2013 05:01, Matthew Fagan wrote:
Hi all,
I am attempting to do Regularized Discriminant Analysis (RDA) on a large
dataset, and I want to extract the RDA discriminant score matrix. But
the predict function in the "klaR" package, unlike the pre
Hi all,
I am attempting to do Regularized Discriminant Analysis (RDA) on a large
dataset, and I want to extract the RDA discriminant score matrix. But
the predict function in the "klaR" package, unlike the predict function
for LDA in the "MASS" package, doesn't seem to give me an option to
Thanks a lot, that is all I want. If someone is interested, see the code
below:
panel.3d.contour <-
function(x, y, z, rot.mat, distance,
nlevels = 20, zlim.scaled, ...) # the three dots pass the remaining
# arguments through with their default values
{
add.line <- trel
and haven't found a resolution (and
note that a similar question was asked sometime in 2011 without an answer).
Does anyone have any thoughts? Thank you in advance.
--
Matthew D. Venesky, Ph.D.
Postdoctoral Research Associate,
Department of Integrative Biology,
The University of South
is is OK, since this can change if the implementation
> does. Which is the whole point, of course.
>
> -- Bert
>
>
>
> On Mon, Apr 1, 2013 at 12:16 PM, Matthew Lundberg
> wrote:
> >
> > When used as an index, the factor is implicitly converted to integer. In
>
, Apr 1, 2013 at 1:49 PM, Peter Ehlers wrote:
> On 2013-04-01 10:48, Matthew Lundberg wrote:
>
>> These two seem to be at odds. Is this the case?
>>
>> From help(factor) - section Warning:
>>>
>>
>> To transform a factor f to approximately its origin
These two seem to be at odds. Is this the case?
From help(factor) - section Warning:
To transform a factor f to approximately its original numeric values,
as.numeric(levels(f))[f] is recommended and slightly more efficient than
as.numeric(as.character(f)).
From the language definition - secti
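The Warning section's recipe, concretely:

```r
f <- factor(c(10, 20, 20, 30))

as.numeric(f)             # the internal integer codes, not the values
# [1] 1 2 2 3
as.numeric(levels(f))[f]  # the original numeric values
# [1] 10 20 20 30
```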
t into the Windows Font directory was not necessary. I'm
including the solution in case anyone else has this problem.
Many thanks Brian Diggs! I just had the same problem and that fixed it.
Matthew
Janesh,
This might help get you started:
http://biostatmatt.com/archives/78
(apologies for linking to my own blog)
Regards,
Matt
--
Message: 51
Date: Wed, 6 Feb 2013 18:50:43 -0600
From: Janesh Devkota
To: r-help@r-project.org
Subject: [R] low pass filter analysi
Hi,
I have both R and R64 installed on Mac OSX 10.8 Mountain Lion (64-bit).
When I run the command
sessionInfo()
from within Rscript, I get:
R version 2.15.2 (2012-10-26)
Platform: i386-apple-darwin9.8.0/i386 (32-bit)
Is there a way to make Rscript point at the R64 rather than R (32-bit)?
Th
I am trying to exclude integer values from a small data frame 1, d1 that
have matching hits in data frame 2, d2 (Very big) which involves matching
those hits first. I am trying to use sqldf on the df's in the following
fashion:
df1:
V1
12675
14753
16222
18765
df2: head(df2)
V1 V2
13
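Since df2's contents are truncated above, a made-up df2 stands in here; the exclusion itself is a NOT IN, which base R expresses with %in%:

```r
df1 <- data.frame(V1 = c(12675, 14753, 16222, 18765))
df2 <- data.frame(V1 = c(14753, 18765, 99999))  # made-up stand-in

# Keep df1 rows whose V1 never appears in df2
df1[!(df1$V1 %in% df2$V1), , drop = FALSE]
#      V1
# 1 12675
# 3 16222

# sqldf equivalent:
# sqldf("SELECT * FROM df1 WHERE V1 NOT IN (SELECT V1 FROM df2)")
```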
Hi all:
I have two data sets. Set A includes a long list of hits in a single
column, say:
m$V1
10
15
36
37
38
44
45
57
61
62
69 ...and so on
Set B includes just a few key ranges set up by way of a minimum in column X
and a maximum in column Y. Say,
n$X n$Y
30 38 # range from 30 to 38
52 6
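A direct way to flag Set A hits falling in any Set B range (Set B's second row is truncated above, so its maximum here is made up):

```r
hits <- c(10, 15, 36, 37, 38, 44, 45, 57, 61, 62, 69)   # Set A
rng  <- data.frame(X = c(30, 52), Y = c(38, 65))         # Set B; 65 is made up

# TRUE where a hit lies inside at least one [X, Y] range
in_range <- sapply(hits, function(h) any(h >= rng$X & h <= rng$Y))
hits[in_range]
# [1] 36 37 38 57 61 62
```

For large inputs, findInterval() or data.table's foverlaps() scale better than this hits-by-ranges loop.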
Hello,
I recently began using R and the lme4 package to carry out linear mixed
effects analyses.
I am interested in the effects of variables 'prime','time', and 'mood'
on 'reaction_time' while taking into account the random effect
'subjects.' I've read through documentation on lme4 and came
I am uncertain about how to acknowledge the fact that $ can do partial
matching in the space of about 30 characters. One option is this:
x[["name"]] column named "name"
x$name same as above (almost always)
Is that better or worse than ignoring this issue, or is there an even
bett
I made an update/reboot of Tom Short's classic and public domain "R
Reference Card". His is from late 2004 and I've found myself giving it to
new R users with additional notes about packages.
If anyone knows how to reach Tom, that would be great. I am titling this
reboot "Short R Reference", in
Dear R help,
I am trying to cluster my data according to "group" in a data frame such as
the following:
df=data.frame(group=rep(c("a","b","c","d"),10), replicate(100,rnorm(40)))
I'm not sure how to tell hclust() that I want to cluster according to the
group variable. For example:
dfclust=hc
my first question is whether or not there is a better
way to return all of the values over which "optimize" evaluates my
function. The second question is if I do use my solution to the first
question, how can I get the "vals" object returned from the child
process?
Thanks anyon
son I ask is that I'm
looking at the possibility of establishing an R grid if one doesn't
already exist, and if one does, then I'm looking for interfaces,
protocols, and guidelines for adding an R node.
--
Matthew K. Hettinger, Enterprise Architect and Systemist
Ma
s,
but again, maybe I'm wrong. My computer programmer writes in C++, so
if you have ideas in C++, that works too.
Any help would be much appreciated... Thanks!
Matt
Dear R help,
Does no one have an idea of where I might find information that could help
me with this problem? I apologize for re-posting - I have half a suspicion
that my original message did not make it through.
I hope you all had a good weekend and look forward to your reply,
MO
On Fri, Jul
Dear R help list,
I have done a lot of searching but have not been able to find an answer to
my problem. I apologize in advance if this has been asked before.
I am applying a mixed model to my data using lmer. I will use sample data
to illustrate my question:
>library(lme4)
>library(arm)
>data
AKJ,
Please see this recent answer :
http://r.789695.n4.nabble.com/data-table-vs-plyr-reg-output-tp4634518p4634865.html
Matthew
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-convert-list-of-matrix-raster-extract-o-p-to-data-table-with-additional-colums-polygon-Id
I'd
suggest the data.table tag on Stack Overflow (which I subscribe to) :
http://stackoverflow.com/questions/tagged/data.table
Btw, I recently presented at LondonR. Here's a link to the slides :
http://datatable.r-forge.r-project.org/LondonR_2012.pdf
Matthew
--
View this messag
Thank you for your patience. I assure you I will get better with the
appropriate etiquette, and hopefully eventually contribute.
On 13 June 2012 16:18, David Winsemius wrote:
>
> On Jun 13, 2012, at 10:09 AM, Matthew Johnson wrote:
>
>> my sessioninfo was as follows:
>
methods for "zoo" objects do not work if the index entries in
'order.by' are not unique
  mnz[, 1] mnz[, -1]
1    97.90      408
2    97.89      208
So is this a bug in XTS?
thanks for your patience
mj
On 13 June 2012 15:53, David Winsemius wrote:
>
> On Jun 13, 2012, at 9:38 AM, Matthe
mand could be made to be:
px_ym1 count vol_ym1
1 97.90 11 408
2 97.89 10 208
where we have the price traded, the number of trades (a count of
px_ym1 / mn[,1]), and the sum of vol_ym1 (mn[,2]).
thanks and best regards
matt johnson
On 13 June 2012 15:06, David Winsemius wrote:
>
Dear R-help,
I have an xts data set that i have subset by date.
now it contains a date-time-stamp, and two columns (price and volume
traded): my objective is to create tables of volume traded at a price - and
i've been successfully using aggregate to do so in interactive use.
say the data looks
Thanks. I think I understand: the difference is that the first command
converts my 'searched-for' date to a number and matches it, but the second
does not?
On 13 June 2012 12:58, Joshua Ulrich wrote:
> On Tue, Jun 12, 2012 at 9:48 PM, Matthew Johnson
> wrote:
> > Dear R