FWIW I use them quite frequently, but not for the purpose of storing
heterogeneous data... rather for holding complex objects of the same class.
On September 14, 2021 10:25:54 PM PDT, Avi Gross via R-help
wrote:
>My apologies. My reply was to Andrew, not Gregg.
>
>Enough damage for one night.
My apologies. My reply was to Andrew, not Gregg.
Enough damage for one night. Here's hoping we finally understood a question
that could have been better phrased. List columns are not normally considered
common data structures, but quite possibly will become more common as time goes
on and the tools to
You are correct, Gregg, I am aware of that trick of asking that something not
be evaluated in certain ways.
And you can indeed use base R to play with contents of beta as defined above.
Here is a sort of incremental demo:
> sapply(mydf$beta, is.numeric)
[1] FALSE TRUE TRUE FALSE
>
I'd like to point out that base R can handle a list as a data frame column;
it's just that you have to make the list of class "AsIs". So in your example
temp <- list("Hello", 1, 1.1, "bye")
data.frame(alpha = 1:4, beta = I(temp))
means that column "beta" will still be a list.
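A minimal sketch of that, using the same `temp` example as above (the data
frame name `mydf` is assumed from the demo earlier in the thread):

```r
# I() marks the list as class "AsIs", so data.frame() keeps it as a
# single list column instead of trying to convert it
temp <- list("Hello", 1, 1.1, "bye")
mydf <- data.frame(alpha = 1:4, beta = I(temp))

str(mydf$beta)    # still a list of 4 heterogeneous elements
class(mydf$beta)  # "AsIs"
```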
On Wed, Sep 15,
Calling something a data.frame does not make it a data.frame.
The abbreviated object shown below is a list of singletons. If it is a column
in a larger object that is a data.frame, then it is a list column which is
valid but can be ticklish to handle within base R but less so in the tidyverse.
You cannot apply vectorized operators to list columns... you have to use a map
function like sapply or purrr::map_lgl to obtain a logical vector by running
the function once for each list element:
sapply( VPN_Sheet1$HVA, is.numeric )
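As a sketch of the pattern on toy data (the names here are illustrative, not
the poster's actual sheet):

```r
# a small data frame with a list column, as in the examples above
temp <- list("Hello", 1, 1.1, "bye")
mydf <- data.frame(alpha = 1:4, beta = I(temp))

# sapply runs is.numeric once per list element, giving a logical vector...
keep <- !sapply(mydf$beta, is.numeric)
# ...which can then index out the rows whose beta entry is numeric
mydf[keep, ]
```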
On September 14, 2021 8:38:35 PM PDT, Gregg Powell
wrote:
names(x) <- c("some names")
if different from
`names<-`(x, value = c("some names"))
because the second piece of code does not ever call `<-`. The first piece
of code is (approximately) equivalent to
`*tmp*` <- x
`*tmp*` <- `names<-`(`*tmp*`, value = c("some names"))
x <- `*tmp*`
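A small demonstration of that equivalence (variable names are illustrative):

```r
x <- 1:2
# the usual replacement-function syntax
names(x) <- c("a", "b")

# the same result, calling the replacement function `names<-` directly,
# without ever assigning to names(y) via `<-`
y <- 1:2
y <- `names<-`(y, value = c("a", "b"))

identical(x, y)  # TRUE
```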
Another
'is.numeric' is a function that returns whether its input is a numeric
vector. It looks like what you want to do is
VPN_Sheet1 <- VPN_Sheet1[!vapply(VPN_Sheet1$HVA, "is.numeric", NA), ]
instead of
VPN_Sheet1 <- VPN_Sheet1[!is.numeric(VPN_Sheet1$HVA), ]
I hope this helps, and see ?vapply if
On Wed, 15 Sep 2021 02:01:53 +
Gregg Powell via R-help wrote:
> > Stuck on this problem - How does one remove all rows in a dataframe
> > that have a numeric in the first (or any) column?
> >
> > Seems straightforward - but I'm having trouble.
> >
> I've attempted to use:
Here is the output:
> str(VPN_Sheet1$HVA)
List of 2174
$ : chr "Email: f...@fff.com"
$ : num 1
$ : chr "Eloisa Libas"
$ : chr "Percival Esquejo"
$ : chr "Louchelle Singh"
$ : num 2
$ : chr "Charisse Anne Tabarno, RN"
$ : chr "Sol Amor Mucoy"
$ : chr "Josan Moira Paler"
$ : num 3
An atomic column of data by design has exactly one mode, so if _any_ values are
non-numeric then the entire column will be non-numeric. What does
str(VPN_Sheet1$HVA)
tell you? It is likely either a factor or character data.
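For instance, mixing types when building an atomic vector silently coerces
everything to a single mode:

```r
# one number among character strings: the whole vector becomes character
v <- c("Eloisa Libas", 1, "Josan Moira Paler")
is.numeric(v)  # FALSE -- the 1 is now the string "1"
class(v)       # "character"
```

So is.numeric() on such a column is FALSE for the whole column, never
row-by-row.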
On September 14, 2021 7:01:53 PM PDT, Gregg Powell via R-help
wrote:
> Stuck on this problem - How does one remove all rows in a dataframe that have
> a numeric in the first (or any) column?
>
> Seems straightforward - but I'm having trouble.
>
I've attempted to use:
VPN_Sheet1 <- VPN_Sheet1[!is.numeric(VPN_Sheet1$HVA),]
and
VPN_Sheet1 <-
This should be posted on r-sig-mixed-models, not here. But you should
realize that "equivalent analysis" presumes knowledge of what ASReml
does, so that perhaps the best target of your query is the package
maintainer, not a list concerned with other methods.
Bert Gunter
"The trouble with having
Rich,
You have helped us understand, and at this point suppose we are now sure about
the way missing info is supplied. What you show is not the same as the CSV
sample earlier, but assume you know that "Eqp" is the one and only way they
signaled bad data.
One choice is to fix the original data
On Tue, 14 Sep 2021, Bert Gunter wrote:
**Don't do this.** You will make errors. Use fit-for-purpose tools.
That's what R is for. Also, be careful **how** you "download", as that
already may bake in problems.
Bert,
I haven't had downloading errors when saving displayed files.
The problem with the
Inline.
On Tue, Sep 14, 2021 at 10:42 AM Rich Shepard wrote:
>
> On Tue, 14 Sep 2021, Eric Berger wrote:
>
> > My suggestion was not 'to make a difference'. It was to determine whether
> > the NAs or NaNs appear before the dplyr commands. You confirmed that they
> > do. There are 2321 NAs in
On Tue, 14 Sep 2021, Bert Gunter wrote:
Input problems of this sort are often caused by stray or extra characters
(commas, dashes, etc.) in the input files, which then can trigger
automatic conversion to character. Excel files are somewhat notorious for
this.
Bert,
Large volume of missing
Greetings R Community
The ASReml-R package will analyze data from experiments with
pseudoreplications.
Dealing with Pseudo-Replication in Linear Mixed Models
https://www.vsni.co.uk/case-studies/dealing-with-pseudo-replication-in-linear-mixed-models
Will the ‘lme4’ package return an equivalent
On Tue, 14 Sep 2021, Eric Berger wrote:
My suggestion was not 'to make a difference'. It was to determine whether
the NAs or NaNs appear before the dplyr commands. You confirmed that they
do. There are 2321 NAs in vel. Bert suggested some ways that an NA might
appear.
Eric,
Yes, you're all
Hi Rich,
My suggestion was not 'to make a difference'. It was to determine
whether the NAs or NaNs appear before the dplyr commands. You
confirmed that they do. There are 2321 NAs in vel. Bert suggested some
ways that an NA might appear.
Best,
Eric
On Tue, Sep 14, 2021 at 6:42 PM Rich Shepard
Dear List members,
I wrote some code to split long names in format.ftable. I hope it will
be useful to others as well.
Ideally, this code should be implemented natively in R. In the second part of
this mail I will provide a concept of how to actually implement the code
in R. This may be
On Tue, 14 Sep 2021, Bert Gunter wrote:
Input problems of this sort are often caused by stray or extra characters
(commas, dashes, etc.) in the input files, which then can trigger
automatic conversion to character. Excel files are somewhat notorious for
this.
Bert,
Yes, I'm going to closely
Input problems of this sort are often caused by stray or extra
characters (commas, dashes, etc.) in the input files, which then can
trigger automatic conversion to character. Excel files are somewhat
notorious for this.
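A sketch of how one stray token does this on input (the column name and the
"Eqp" marker are taken from this thread; the data are made up):

```r
# a CSV where one cell of the numeric column holds the marker "Eqp"
csv <- "year,fps\n2016,1.74\n2016,Eqp\n2016,1.76\n"
d <- read.csv(text = csv)

class(d$fps)  # "character" (a factor in R < 4.0): one bad token coerced it
# recover the numbers; the bad token becomes NA (as.numeric warns about it)
d$fps <- suppressWarnings(as.numeric(d$fps))
sum(is.na(d$fps))  # one row flagged as missing
```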
A couple of comments, and then I'll quit, as others should have
greater
Rich,
I have to wonder about how your data was placed in the CSV file based on
what you report.
functions like read.table() (which is called by read.csv()) ultimately make
guesses about what number of columns to expect and what the contents are
likely to be. They may just examine the first N
Rich,
I reproduced your problem after re-arranging the code the mailer mangled. I
tried variations, like not using pipes or changing what it is grouped by, and
they all show your results on the abbreviated data with the error:
`summarise()` has grouped output by 'year'. You can override using
On Tue, 14 Sep 2021, Bert Gunter wrote:
Remove all your as.integer() and as.double() coercions. They are
unnecessary (unless you are preparing input for C code; also, all R
non-integers are double precision) and may be the source of your
problems.
Bert,
Are all columns but the fps factors?
On Tue, 14 Sep 2021, Bert Gunter wrote:
Remove all your as.integer() and as.double() coercions. They are
unnecessary (unless you are preparing input for C code; also, all R
non-integers are double precision) and may be the source of your problems.
Bert,
When I remove coercions the script
On Tue, 14 Sep 2021, Eric Berger wrote:
Before you create vel_by_month you can check vel for NAs and NaNs by
sum(is.na(vel))
sum(unlist(lapply(vel,is.nan)))
Eric,
There should not be any missing values in the data file. Regardless, I added
those lines to the script and it made no
Remove all your as.integer() and as.double() coercions. They are
unnecessary (unless you are preparing input for C code; also, all R
non-integers are double precision) and may be the source of your
problems.
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and
Before you create vel_by_month you can check vel for NAs and NaNs by
sum(is.na(vel))
sum(unlist(lapply(vel,is.nan)))
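Why both checks: is.na() counts NaN as missing too, while is.nan() only
catches NaN, so the two sums can differ. A tiny illustration:

```r
x <- c(1, NA, NaN, 4)
sum(is.na(x))               # 2 -- NA and NaN both count as missing
sum(is.nan(x))              # 1 -- only the NaN
sum(is.na(x) & !is.nan(x))  # 1 -- the "true" NAs only
```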
HTH,
Eric
On Tue, Sep 14, 2021 at 6:21 PM Rich Shepard
wrote:
> The data file begins this way:
> year,month,day,hour,min,fps
> 2016,03,03,12,00,1.74
> 2016,03,03,12,10,1.75
>
The data file begins this way:
year,month,day,hour,min,fps
2016,03,03,12,00,1.74
2016,03,03,12,10,1.75
2016,03,03,12,20,1.76
2016,03,03,12,30,1.81
2016,03,03,12,40,1.79
2016,03,03,12,50,1.75
2016,03,03,13,00,1.78
2016,03,03,13,10,1.81
The script to process it:
library('tidyverse')
vel <-
Hello Nevil,
you could test something like:
# the matrix
m <- matrix(1:1000, ncol = 10)
m <- t(m)
# extract data (now by columns instead of rows)
idcol <- sample(seq(100), 100, TRUE)
for (i in 1:100) {  # repeated only to get a rough timing
  m2 <- m[, idcol]
}
m2 <- t(m2)  # transpose back
It may be faster, although I did not benchmark it.
There
OK thanks, I thought it probably was, but always worth asking. The
multiplication of the columns of M2 by V2 is as intended - not matrix
multiplication.
On Tue, 14 Sept 2021 at 17:49, Jeff Newmiller
wrote:
> That is about as fast as it can be done. However you may be able to avoid
> doing it
That is about as fast as it can be done. However you may be able to avoid doing
it at all if you fold V2 into a matrix instead. Did you mean to use matrix
multiplication in your calculation of M3?
On September 13, 2021 11:48:48 PM PDT, nevil amos wrote:
>Hi is there a faster way to "extract"
Thank you Adam!
I'm a bit surprised that an extra package is needed for this, but why not!
Best,
Ivan
--
Dr. Ivan Calandra
Imaging lab
RGZM - MONREPOS Archaeological Research Centre
Schloss Monrepos
56567 Neuwied, Germany
+49 (0) 2631 9772-243
https://www.researchgate.net/profile/Ivan_Calandra
Hi, is there a faster way to "extract" rows of a matrix many times to form a
longer matrix based on a vector of row indices than M[ V, ]?
I need to "expand" (rather than subset) a matrix M of 10-100,000 rows x
~50 columns to produce a matrix with a greater number (10^6-10^8) of rows
using a vector
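The indexing idiom in question, on a toy matrix: each element of V picks a row
of M, and indices may repeat, so the result can be longer than M itself:

```r
M <- matrix(1:6, nrow = 3)   # a 3 x 2 matrix
V <- c(1, 1, 2, 3, 3, 3)     # row indices, with repeats
M2 <- M[V, ]                 # 6 x 2: rows of M copied once per entry of V
nrow(M2)  # 6
```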