Plus, current date-times correspond to a very large number of seconds since
1970-01-01, so the relative error on second-scale differences is bigger than
you might think:
> as.numeric(Sys.time())
[1] 1441874878
and since relative representation errors are of the order of 1e-16, the
corresponding absolute errors are of the order of 1e-7 seconds.
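To make the magnitude concrete, here is a short sketch of the arithmetic, using .Machine$double.eps as the unit roundoff:

```r
t <- as.numeric(Sys.time())   # on the order of 1.4e9 seconds in September 2015
t * .Machine$double.eps       # absolute spacing between representable times, roughly 3e-7 s
```

So two times less than a few hundred nanoseconds apart can compare as equal once stored as doubles.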
Can anyone suggest a way of counting how frequently sets of values occur in a
data frame? Like table() only with sets.
So for a dataset:
V1, V2, V3
1, 2, 1
1, 3, 2
1, 2, 1
1, 1, 1
The output would be something like:
1,2,1: 2
1,3,2: 1
1,1,1: 1
Thank you,
Thomas Chesney
On 10/09/2015 9:11 AM, Thomas Chesney wrote:
> Can anyone suggest a way of counting how frequently sets of values occur in
> a data frame? Like table() only with sets.
Do you want 1,2,1 to be the same as 1,1,2, or different? What about
1,2,2? For sets, those are all the same, but for most
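If order really should be ignored, one minimal sketch (reusing the example data from the question) is to sort within each row before tabulating, so that 1,2,1 and 1,1,2 collapse to the same key:

```r
Data <- data.frame(V1 = c(1, 1, 1, 1), V2 = c(2, 3, 2, 1), V3 = c(1, 2, 1, 1))
# sort each row so the count treats rows as multisets rather than tuples
table(apply(Data, 1, function(r) paste(sort(r), collapse = ",")))
```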
Have a look at the dplyr package
library(dplyr)
n <- 1000
data_frame(
  V1 = sample(0:1, n, replace = TRUE),
  V2 = sample(0:1, n, replace = TRUE),
  V3 = sample(0:1, n, replace = TRUE)
) %>%
  group_by(V1, V2, V3) %>%
  summarise(Freq = n())  # one row per combination with its count
ir. Thierry Onkelinx
Instituut voor natuur- en
Dear Thomas,
How about this?
> table(apply(Data, 1, paste, collapse=","))
1,1,1 1,2,1 1,3,2 
    1     2     1 
I hope this helps,
John
> -----Original Message-----
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Thomas
> Chesney
> Sent: September 10, 2015 9:11 AM
> To:
Hi,
I have a data.frame x1 in which a variable A needs to be split into
element 1 and element 2, where the separator is ":". Sometimes there can be
three elements in A, but I do not need the third element.
Since R does not have a SCAN function as in SAS (C=scan(A,1,":");
D=scan(A,2,":")),
I am using a
df <- data.frame( V1= 1, V2= c( 2, 3, 2, 1), V3= c( 1, 2, 1, 1))
dfO <- df[ do.call( order, df), ]
dfOD <- duplicated( dfO)
dfODTrigger <- ! c( dfOD[-1], FALSE)
dfOCounts <- diff( c( 0, which( dfODTrigger)))
cbind( dfO[ dfODTrigger, ], dfOCounts)
  V1 V2 V3 dfOCounts
4  1  1  1         1
3  1  2  1         2
2  1  3  2         1
try this:
> x <- read.table(text = "A B
+ 1:29439275 0.46773514
+ 5:85928892 0.81283052
+ 10:128341232 0.09332543
+ 1:106024283:ID 0.36307805
+ 3:62707519 0.42657952
+ 2:80464120 0.89125094", header = TRUE, as.is = TRUE)
>
> temp <- strsplit(x$A, ":")
> x$C <- sapply(temp, '[[', 1)
Just going to throw this idea out there in case it's something that anyone
wants to pursue: if I have an R script and I'm hitting some unexpected
behavior, there should be some way to automatically remove extraneous objects
and manipulations that never touch the line that I'm trying to reproduce.
...
Alternatively, you can avoid the looping (i.e. sapply) altogether by:
do.call(rbind,strsplit(x[[1]],":"))[,-3]
[,1] [,2]
[1,] "1" "29439275"
[2,] "5" "85928892"
[3,] "10" "128341232"
[4,] "1" "106024283"
[5,] "3" "62707519"
[6,] "2" "80464120"
These can then be added to the
On Fri, 11 Sep 2015, Rolf Turner wrote:
On 11/09/15 11:57, David Winsemius wrote:
The urge to imitate other statistical packages that rely on a profusion
of dummies should be resisted. R regression functions can handle
factor variables
Fortune? :-)
Nice! Should I include the
Dear all,
I have 3-hourly temperature data from 1970-2010 for 122 cities in the US. I
would like to bin this data by city-year-week. My idea is if the
temperature for a particular city in a given week falls within a given
range (-17.78 & -12.22), (-12.22 & -6.67), ... (37.78 & 43.33), then the
On Sep 10, 2015, at 3:28 PM, Shouro Dasgupta wrote:
> Dear all,
>
> I have 3-hourly temperature data from 1970-2010 for 122 cities in the US. I
> would like to bin this data by city-year-week. My idea is if the
> temperature for a particular city in a given week falls within a given
> range
On 11/09/15 11:57, David Winsemius wrote:
The urge to imitate other statistical packages that rely on a profusion
of dummies should be resisted. R regression functions can handle
factor variables
Fortune? :-)
cheers,
Rolf
--
Technical Editor ANZJS
Department of Statistics
University
1. Posting in HTML largely negates your ability to provide data
through dput(). Follow the posting guide and post in PLAIN TEXT only,
please.
2. See ?cut . I think this will at least get you started.
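A minimal sketch of the cut() approach, assuming the bin edges in the original post are 10 °F steps converted to Celsius (the sample temperatures below are hypothetical):

```r
temps <- c(-15.0, -3.2, 25.4, 40.1)             # hypothetical weekly temperatures, in Celsius
breaks <- (seq(0, 110, by = 10) - 32) * 5 / 9   # -17.78, -12.22, ..., 43.33
table(cut(temps, breaks = breaks))              # counts per temperature bin
```

From there the counts can be grouped by city, year, and week with the usual aggregation tools.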
Cheers,
Bert
Bert Gunter
"Data is not information. Information is not knowledge. And knowledge
Could you try this, and then not use factor(age) elsewhere?
sv1 <- update( sv1 , age = factor( age ) )
If that doesn't work, is it possible for you to share a reproducible
example? Thanks.
On Thu, Sep 10, 2015 at 4:51 PM, Emanuele Mazzola wrote:
> Hello,
>
> I’m
?match
as in:
> y <- lk_up[match(x,lk_up[,"key"]),"val"]
> y
[1] "1" "1" "1" "1" "15000" "15000" "2"
[8] "2" "2" "2"
Bert
Bert Gunter
"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
-- Clifford Stoll
On 11/09/15 12:25, Achim Zeileis wrote:
On Fri, 11 Sep 2015, Rolf Turner wrote:
On 11/09/15 11:57, David Winsemius wrote:
The urge to imitate other statistical packages that rely on a profusion
of dummies should be resisted. R regression functions can handle
factor variables
Fortune?
On Sep 10, 2015, at 5:25 PM, Achim Zeileis wrote:
> On Fri, 11 Sep 2015, Rolf Turner wrote:
>
>> On 11/09/15 11:57, David Winsemius wrote:
>>
>>
>>
>>> The urge to imitate other statistical packages that rely on a profusion
>>> of dummies should be resisted. R regression functions can handle
I apologize in advance: I must be overlooking something quite simple,
but I'm failing to make progress.
Suppose I have a "lookup table":
Browse[2]> dput(lk_up)
structure(c("1.1", "1.9", "1.4", "1.5", "1.15", "1", "1",
"15000", "2", "25000"), .Dim = c(5L, 2L), .Dimnames = list(
This is most likely the "stringi" dependency, which is new.
Follow the links from the CRAN page for "stringi" and you may find some
guidance.
I initially had the same problem with my Mageia install, but it's sorted
now.
On 3 September 2015 at 22:53, Jeff Trefftzs wrote:
>
Hello,
I’m having a weird issue with the function “svychisq” in package “survey”,
which would be very helpful for me in this case.
I’m tabulating age categories (a factor variable subdivided into 4 categories:
[18,25), [25, 45), [45,65), [65, 85) ) with respect to ethnicity/race (another
Dear group,
How do I calculate the Kendall tau distance, as described in the Kendall tau
distance article at Wikipedia?
https://en.wikipedia.org/wiki/Kendall_tau_distance
Thanks in advance,
Ragia
__
R-help@r-project.org mailing list -- To