Thanks everyone and any/all reading this. I think I got my answer. And, no, I
suspect I did not need to provide a very specific example, at least not yet.
The answer is that my experiment was not vectorized, while dplyr verbs
like mutate() do their work implicitly in a vectorized way.
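The difference is easy to see in base R alone: any() collapses all of its arguments into a single logical, while the element-wise operator `|` stays vectorized, which is what mutate() and friends rely on. A minimal sketch:

```r
a <- c(TRUE, FALSE, FALSE)
b <- c(FALSE, FALSE, TRUE)

any(a, b)   # collapses everything into one value: TRUE
a | b       # element-wise, keeps length 3: TRUE FALSE TRUE
```

Inside mutate() or filter(), the second form is the one you want; the first silently recycles a single value across all rows.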
Hi Avi,
As Duncan already mentioned, a reproducible example would be helpful to
assist you better. Having said that, I think you misunderstand how
`dplyr::filter` works: it performs row-wise filtering, so the filtering
expression should return a logical vector of the same length as the
number of rows.
On 12/04/2024 3:52 p.m., avi.e.gr...@gmail.com wrote:
Base R has generic functions called any() and all() that I am having trouble
using.
It works fine when I play with it in a base R context as in:
all(any(TRUE, TRUE), any(TRUE, FALSE))
[1] TRUE
all(any(TRUE, TRUE), any(FALSE, FALSE))
Base R has generic functions called any() and all() that I am having trouble
using.
It works fine when I play with it in a base R context as in:
> all(any(TRUE, TRUE), any(TRUE, FALSE))
[1] TRUE
> all(any(TRUE, TRUE), any(FALSE, FALSE))
[1] FALSE
But in a tidyverse/dplyr environment, it
Thanks a lot both Duncan and Ivan,
I will keep that example in mind, Duncan, great!
Best regards,
Iago
From: Duncan Murdoch
Sent: Friday, 12 April 2024 15:36
To: Iago Giné Vázquez; r-help@r-project.org
Subject: Re: [R] Debugging functions defined
On 12/04/2024 8:15 a.m., Iago Giné Vázquez wrote:
Hi all, I am trying to debug an error of a function g defined and used inside
another function f of a package.
So I have
f <- function(whatever){
...
g <- function(whatever2){
...
}
...
}
If I wanted to debug something
On Fri, 12 Apr 2024 12:53:02 +
Iago Giné Vázquez wrote:
> How should I call trace() if f was a function?
Let the tracer be quote(debug(g)) and use as.list(body(f)) to determine
where it should be injected:
f <- function() {
message('exists("g") so far is ', exists('g'))
g <- function() {
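Ivan's as.list(body(f)) plus trace(..., at = ...) approach can be sketched non-interactively; here message() stands in for the debug(g) you would use in a live session, and the at = 3 position is specific to this toy body:

```r
f <- function() {
  g <- function(x) x + 1   # g only exists once f() runs
  g(41)
}

as.list(body(f))
# [[1]] is `{`, [[2]] is the assignment to g, [[3]] is the call g(41),
# so injecting the tracer at position 3 runs after g has been created
trace(f, tracer = quote(message("g exists: ", exists("g"))), at = 3)
f()          # the tracer should report that g exists; f() still returns 42
untrace(f)
```

In an interactive session you would use tracer = quote(debug(g)) instead, so stepping continues inside g.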
Thank you Ivan, your example solves my issue this time through
debug(environment(Adder$add)$add)
Just for the future, you say
Moreover, `g` doesn't exist at all until f() is evaluated and reaches
this point. If `f` was a function, it would be possible to trace() it,
inserting a
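The environment(obj$method)$method idiom works because the function you reach via `$` can live in (or be wrapped by) a closure environment; a toy sketch with a plain closure (Adder here is illustrative, not a real ggproto object):

```r
make_adder <- function() {
  total <- 0
  add <- function(x) { total <<- total + x; total }
  list(add = add)
}
Adder <- make_adder()

# the function also lives in its own enclosing environment, so it can be
# reached (and hence debug()ed) through environment():
identical(environment(Adder$add)$add, Adder$add)  # TRUE
# debug(environment(Adder$add)$add)  # uncomment in an interactive session
```

For ggproto the extra indirection matters because `$` can return a wrapper rather than the stored method itself.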
On Fri, 12 Apr 2024 12:15:07 +
Iago Giné Vázquez wrote:
> f <- function(whatever){
>...
>g <- function(whatever2){
> ...
>}
>...
> }
>
> If I wanted to debug something directly inside f I would do
> debug(f). But this does not go inside g code. On the other hand,
>
To be precise, in the case I am looking at this time, f is not a function, but
f <- ggplot2::ggproto(...)
So debug(f) produces
Error in debug(f) : argument must be a function
Iago
From: R-help on behalf of Iago Giné Vázquez
Sent: Friday, 12 April 2024
Hi all, I am trying to debug an error of a function g defined and used inside
another function f of a package.
So I have
f <- function(whatever){
...
g <- function(whatever2){
...
}
...
}
If I wanted to debug something directly inside f I would do debug(f). But this
does not
On 11/04/2024 12:58 p.m., Iris Simmons wrote:
Hi Duncan,
I only know about sub() and gsub().
There is no way to have pattern be a regular expression and replacement
be a fixed string.
Backslash is the only special character in replacement. If you need a
reference, see this file:
On 11/04/2024 12:57 p.m., Dave Dixon wrote:
Backslashes in regex expressions in R are maddening, but they make sense.
R string handling interprets your replacement string "\\" as just one
backslash. Your string is received by gsub as "\" - that is, just the
control backslash, NOT the character
Hi Duncan,
I only know about sub() and gsub().
There is no way to have pattern be a regular expression and replacement be
a fixed string.
Backslash is the only special character in replacement. If you need a
reference, see this file:
Backslashes in regex expressions in R are maddening, but they make sense.
R string handling interprets your replacement string "\\" as just one
backslash. Your string is received by gsub as "\" - that is, just the
control backslash, NOT the character backslash. gsub is expecting to see
\0,
I noticed this issue in stringr::str_replace, but it also affects sub()
in base R.
If the pattern in a call to one of these needs to be a regular
expression, then backslashes in the replacement text are treated specially.
For example,
gsub("a|b", "\\", "abcdef")
gives "cdef", not
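Since there is no fixed = TRUE for the replacement, a common workaround is to escape the replacement's backslashes yourself before handing it to gsub() (a sketch; the variable names are illustrative):

```r
fixed_replacement <- "\\"   # the intended replacement: one literal backslash

# double every backslash so gsub()'s replacement parser halves them back
escaped <- gsub("\\\\", "\\\\\\\\", fixed_replacement)

gsub("a|b", escaped, "abcdef")
cat(gsub("a|b", escaped, "abcdef"), "\n")   # \\cdef: one real backslash per match
```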
Whatever you had as HTML was deleted. If that was data we did not get it.
1) manipulate wpi_def_nic2004 and wpi_def_nic2008 first so that the data are
compatible, then join them.
2) The full_join statement should explicitly state the columns to join by.
Using by=NULL joins by all the columns
I'm working on a Quarto template with ESS that runs R code chunks.
When using RStudio, if there are parameters set in the yaml header, the
default values are available in code chunks (for testing the template).
However in ESS the `params$...` variables are not available. How would I
create
I've tidied up the code for showing R plots as SVG in an Emacs
buffer;
it is now on melpa:
https://melpa.org/#/essgd
so package.el can grab it easily enough.
Best wishes,
Stephen
On Wed, Mar 27 2024, Stephen J. Eglen wrote:
I've got it now working locally so that the aspect ratio is
Hello!
I was trying to splice two wholesale price index deflators series which
have different base years. One of them is called wpi_def_nic2004 (from 2005
to 2012), and the other is called wpi_def_nic2008 (from 2012 to 2019). I am
trying to create a single series such that the base year prices are
Hi R enthusiasts,
I am happy to announce a new package available on CRAN: fastTS
(https://cran.r-project.org/web/packages/fastTS/). fastTS is especially useful
for large time series with exogenous features and/or complex seasonality (i.e.
with multiple modes), allowing for possibly
Dear all;
Thank you for your reply.
David has explained an interesting method.
David I have DEM file of the region and I have extracted the xyz data from
that.
Also I can extract bathymetry data as xyz file.
I have calculated the storage (volume) of reservoir at the current
elevation.
But the
Dave,
Your method works for you and seems to be a one-time fix of a corrupted data
file so please accept what I write not as a criticism but explaining my
alternate reasoning which I suspect may work faster in some situations.
Here is my understanding of what you are doing:
You have a file in
That's basically what I did
1. Get text lines using readLines
2. use tryCatch to parse each line using read.csv(text=...)
3. in the catch, use gregexpr to find any quotes not adjacent to a comma
(gregexpr("[^,]\"[^,]", ...))
4. escape any quotes found by adding a second quote (using str_sub from
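A sketch of that repair loop on a toy input (the regex and the quote-doubling are illustrative, not Dave's exact code):

```r
lines <- c('1,"ok",2',
           '2,"say "hi" now",3')   # stray quotes not adjacent to a comma

needs_fix <- grepl('[^,]"[^,]', lines)
# escape each stray quote by doubling it, the usual CSV convention
lines[needs_fix] <- gsub('([^,])"([^,])', '\\1""\\2', lines[needs_fix])

read.csv(text = lines, header = FALSE)   # now parses cleanly
```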
It sounds like the discussion is now on how to clean your data, with a twist.
You want to clean it before you can properly read it in using standard methods.
Some of those standard methods already do quite a bit as they parse the data
such as looking ahead to determine the data type for a
At 06:47 on 08/04/2024, Dave Dixon wrote:
Greetings,
I have a csv file of 76 fields and about 4 million records. I know that
some of the records have errors - unmatched quotes, specifically.
Reading the file with readLines and parsing the lines with read.csv(text
= ...) is really slow. I
Hi,
you are unfortunately right. Executing
x <- sample(c(1,2,NA), 26, replace=TRUE)
y <- sample(c(1,2,NA), 26, replace=TRUE)
o <- order(x, y, decreasing = c(T,F), na.last=c(F,T))
cbind(x[o], y[o])
shows that the second entry of na.last is ignored without warning.
Thanks Sigbert
Am 10.04.24
On Wed, 10 Apr 2024 09:33:19 +0200
Sigbert Klinke wrote:
> decreasing=c(F,F,F)
This is only documented to work with method = 'radix':
>> For the ‘"radix"’ method, this can be a vector of length equal to
>> the number of arguments in ‘...’ and the elements are recycled as
>> necessary. For the
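With method = "radix" the vectorized form works as documented; a small check (toy data):

```r
x <- c(2, 1, 2)
y <- c(1, 2, 3)

# sort by x descending, breaking ties by y ascending
order(x, y, decreasing = c(TRUE, FALSE), method = "radix")   # 1 3 2
# the other methods only accept a single logical for decreasing
```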
Hi,
when I execute
order(letters, LETTERS, 1:26)
then everything is fine. But if I execute
order(letters, LETTERS, 1:26, na.last=c(T,T,T), decreasing=c(F,F,F))
I get the error message
Error in method != "radix" && !is.na(na.last) :
'length = 3' in coercion to 'logical(1)'
Shouldn't both
I answer to myself: I have to generate my signal up to f1 = Fs/2 in
order to have a flat response over the entire frequency range.
Le 09/04/2024 à 19:48, laurentRhelp a écrit :
Dear RHelp-list,
I generate a swept sine signal using the signal library. I can see
using the spec.pgram command
On Tue, 9 Apr 2024, Ivan Krylov wrote:
That's fine, R will run straight from the build directory. It has to do
so in order to compile the vignettes.
Ivan,
That's good to know. Thanks.
But let's skip this step. Here's reshape.tex from R-4.3.3:
https://0x0.st/XidU.tex/reshape.tex
(Feel free
On Tue, 9 Apr 2024, Ivan Krylov wrote:
At this point in the build, R already exists, is quite operable and
even has all the recommended packages installed. The build system then
uses this freshly compiled R to run Sweave on the vignettes. Let me
break the build in a similar manner and see what
Dear RHelp-list,
I generate a swept sine signal using the signal library. I can see
using the spec.pgram command that the spectrum of this signal is white.
But when I am calculating the cumulative periodogram using the command
cpgram the process doesn't stay inside the confidence band (cf.
Water engineer here. The standard approach is to 1) get the storage vs.
elevation data from the designers of the reservoir or, barring that, 2)
get the bathymetry data from USBR or state DWR, or, if available, get
the DEM data from USGS if the survey was done before the reservoir was
built or
Thanks for the suggestion, Ivan.
The issue has been overcome with a simple change of the code to the form
(is.null(A) || any(is.na(A)))
following advice from Peter Dalgaard.
However, I have kept a note of the R Inferno reference, for future problems.
Best wishes,
Adelchi
> On 9 Apr 2024, at
> On 9 Apr 2024, at 14:54, peter dalgaard wrote:
>
> Hi, Adelchi,
>
> Depends on what you want help with...
>
> The proximate cause would seem to be that the code ought to have "is.null(A)
> || any(is.NA(A))", which I presume you could fairly easily fix for yourself
> in the package
So, you know how to get volume for given water level.
For the reverse problem, you get in trouble because of the nonlinearity
inherent in the dependence of surface area on the level.
I don't think there is a simple solution to this, save for mapping out the
volume as a function of water
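That volume-as-a-function-of-level mapping can be sketched from gridded DEM elevations: sum the water depth over the cells, then invert numerically. The cell elevations and cell area below are toy values, not the poster's data:

```r
# storage (m^3) at a given water level, from cell elevations z (m)
volume_at <- function(level, z, cell_area) {
  sum(pmax(level - z, 0)) * cell_area
}

z <- c(1230, 1232, 1235, 1240)      # toy DEM cells
volume_at(1240, z, cell_area = 1)   # 10 + 8 + 5 + 0 = 23 m^3

# invert: find the level that holds a target volume
level_for <- function(target, z, cell_area)
  uniroot(function(l) volume_at(l, z, cell_area) - target,
          lower = min(z), upper = 1267)$root
level_for(23, z, cell_area = 1)     # ~1240
```

Because volume is monotone in level, uniroot() is enough; doubling the volume just means doubling the target.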
Hi, Adelchi,
Depends on what you want help with...
The proximate cause would seem to be that the code ought to have "is.null(A) ||
any(is.NA(A))", which I presume you could fairly easily fix for yourself in the
package sources or even locally in an R session. Vector-valued logicals in flow
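Since R 4.3.0, && and || signal an error rather than silently using only the first element when an operand has length greater than one; any() is the vectorized fix. A sketch:

```r
A <- c(NA, 1, 2)

# is.na(A) has length 3, so `is.null(A) || is.na(A)` errors in R >= 4.3;
# collapsing with any() restores a length-1 condition:
is.null(A) || any(is.na(A))   # TRUE
```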
On Tue, 9 Apr 2024 09:55:06 +0200
gernop...@gmx.net wrote:
> If I only move away /usr/local/lib/libomp.dylib, I can still install
> it. So it seems that also here the internal libomp.dylib from R is
> used. Just the bundled omp files at /usr/local/include (omp-tools.h,
> omp.h, ompt.h) seem to be
On Tue, 9 Apr 2024 12:04:26 +0200
Adelchi Azzalini wrote:
> res <- CEoptim(sumsqrs, f.arg = list(xt), continuous = list(mean =
> c(0, 0, 0), sd = rep(1,3), conMat = A, conVec = b), discrete =
> list(categories = c(298L, 298L), smoothProb = 0.5),N = 1, rho
> = 0.001)
>
> Error in
In the attempt to explore the usage of package CEoptim, I have run the code
listed at the end of this message. This code is nothing but the one associated
with example 5.7 in the main reference of the package, available at
https://www.jstatsoft.org/article/view/v076i08
and is included in the
Sorry, I have to correct this. If I only move away
/usr/local/lib/libomp.dylib, I can still install it. So it seems that also here
the internal libomp.dylib from R is used. Just the bundled omp files at
/usr/local/include (omp-tools.h, omp.h, ompt.h) seem to be used. So maybe this
is caused
Sorry for the late reply; your mail ended up in my spam and I've only just
seen it.
> Does the behaviour change if you temporarily move away
> /usr/local/lib/libomp.dylib?
It does not change the behavior after loading (or attaching) data.table using
"library(data.table)". It is still
Try reading the lines in (readLines), count the number of both types of
quotes in each line. Find out which are not even and investigate.
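Counting quotes per line can be done with gregexpr(); a sketch checking double quotes only (toy lines):

```r
lines <- c('1,"ok",2',
           '2,"broken,3')   # second line has an odd number of quotes

n_quotes <- lengths(regmatches(lines, gregexpr('"', lines, fixed = TRUE)))
which(n_quotes %% 2 == 1)   # line numbers to investigate: 2
```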
On Mon, Apr 8, 2024, 15:24 Dave Dixon wrote:
> I solved the mystery, but not the problem. The problem is that there's
> an unclosed quote somewhere in those
I find QSV very helpful.
el
On 08/04/2024 22:21, Dave Dixon wrote:
> I solved the mystery, but not the problem. The problem is that
> there's an unclosed quote somewhere in those 5 additional records I'm
> trying to access. So read.csv is reading million-character fields.
> It's slow at that.
Right, I meant to add header=FALSE. And, it looks now like the next line
is the one with the unclosed quote, so read.csv is trying to read
million-character headers!
On 4/8/24 12:42, Ivan Krylov wrote:
On Sun, 7 Apr 2024 23:47:52 -0600
Dave Dixon wrote:
> second_records <-
Good suggestion - I'll look into data.table.
On 4/8/24 12:14, CALUM POLWART wrote:
> data.table's fread is also fast. Not sure about error handling. But I
> can merge 300 csvs with a total of 0.5m lines and 50 columns in a
> couple of minutes versus a lifetime with read.csv or readr::read_csv
>
Thanks, yeah, I think scan is more promising. I'll check it out.
On 4/8/24 11:49, Bert Gunter wrote:
> No idea, but have you tried using ?scan to read those next 5 rows? It
> might give you a better idea of the pathologies that are causing
> problems. For example, an unmatched quote might
I solved the mystery, but not the problem. The problem is that there's
an unclosed quote somewhere in those 5 additional records I'm trying to
access. So read.csv is reading million-character fields. It's slow at
that. That mystery solved.
However, the problem persists: how to fix what is
On Mon, 8 Apr 2024, Ivan Krylov wrote:
A Web search suggests that texi2dvi may output this message by mistake
when the TeX installation is subject to a different problem:
https://web.archive.org/web/20191006123002/https://lists.gnu.org/r/bug-texinfo/2016-10/msg00036.html
Ivan,
That thread is
At 19:42 on 08/04/2024, Ivan Krylov via R-help wrote:
On Sun, 7 Apr 2024 23:47:52 -0600
Dave Dixon wrote:
> second_records <- read.csv(file_name, skip = 2459465, nrows = 5)
It may or may not be important that read.csv defaults to header =
TRUE. Having skipped 2459465 lines, it may
On Mon, 8 Apr 2024, Ivan Krylov wrote:
Questions about building R do get asked here and R-devel. Since you're
compiling a released version of R and we don't have an R-SIG-Slackware
mailing list, R-help sounds like the right place.
Ivan,
Okay:
What are the last lines of the build log,
On Sun, 7 Apr 2024 23:47:52 -0600
Dave Dixon wrote:
> > second_records <- read.csv(file_name, skip = 2459465, nrows = 5)
It may or may not be important that read.csv defaults to header =
TRUE. Having skipped 2459465 lines, it may attempt to parse the next
one as a header, so the second call
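The fix is to pass header = FALSE explicitly when resuming mid-file; a toy illustration:

```r
tmp <- tempfile(fileext = ".csv")
writeLines(c("a,b", "1,2", "3,4", "5,6"), tmp)

read.csv(tmp, skip = 2, nrows = 2)                  # "3,4" is eaten as a header
read.csv(tmp, skip = 2, nrows = 2, header = FALSE)  # both data rows survive
```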
I've been building R versions for years with no issues. Now I'm trying to
build R-4.3.3 on Slackware64-15.0 (fully patched) with TeXLive2024 (fully
patched) installed. The error occurs building a vignette.
Is this mail list the appropriate place to ask for help or should I post the
request on
data.table's fread is also fast. Not sure about error handling. But I can
merge 300 csvs with a total of 0.5m lines and 50 columns in a couple of
minutes versus a lifetime with read.csv or readr::read_csv
On Mon, 8 Apr 2024, 16:19 Stevie Pederson,
wrote:
> Hi Dave,
>
> That's rather
No idea, but have you tried using ?scan to read those next 5 rows? It might
give you a better idea of the pathologies that are causing problems. For
example, an unmatched quote might result in some huge number of characters
trying to be read into a single element of a character variable. As your
I appreciate the compliment from Ivan and still share the puzzlement at the
empty return.
What is the policy for changing something that is wrong? There is a trade-off
between breaking old code that worked around a problem and breaking new code
written by people who make reasonable
Dear R-help,
Hope this email finds you well. My name is Ziyan. I am a graduate student in
Zhejiang University. My subject research involves ks.test in stats-package
{stats}. Based on the code, I have two main questions. Could you provide me
some more information?
I download different
Hi Dave,
That's rather frustrating. I've found vroom (from the package vroom) to be
helpful with large files like this.
Does the following give you any better luck?
vroom(file_name, delim = ",", skip = 2459465, n_max = 5)
Of course, when you know you've got errors & the files are big like that
Greetings,
I have a csv file of 76 fields and about 4 million records. I know that
some of the records have errors - unmatched quotes, specifically.
Reading the file with readLines and parsing the lines with read.csv(text
= ...) is really slow. I know that the first 2459465 records are good.
On Mon, 8 Apr 2024 10:29:53 +0200
gernophil--- via R-help wrote:
> I have some weird issue with using multithreaded data.table in macOS
> and I am trying to figure out, if it’s connected to my libomp.dylib.
> I started using libomp as stated here:
> https://mac.r-project.org/openmp/
Does the
Hey everyone,
I have some weird issue with using multithreaded data.table in macOS and I am
trying to figure out, if it’s connected to my libomp.dylib. I started using
libomp as stated here: https://mac.r-project.org/openmp/
Everything worked fine till beginning of this year, but all of a
Dear all;
Many thanks for your replies. This was not homework. I apologize.
Let me explain more.
There is a dam constructed in a valley with the highest elevation of 1255
m. The area of its reservoir can be calculated by drawing a polygon around
the water and it is known.
I have the Digital
John,
Your reaction was what my original reaction was, until I realized I had to
find out what a DEM file is and that it contains enough of the kind of
depth-dimension data you describe, albeit with what may be a very irregular
cross-section, to calculate areas and thence volumes.
If I read it
With respect to duplicated.data.frame taking account of row names to return
all the rows as unique: thinking about this some more, I can see that making
sense in isolation, but it's at odds with the usual behaviour of duplicated for
other classes, e.g. primitive vectors, where it doesn't take
Chris, since it does indeed look like homework, albeit a deeper look
suggests it may not be, I think we can safely answer the question:
>Is there any way to write codes to do this in R?
The answer is YES.
And before you ask, it can be done in Python, Java, C++, Javascript, BASIC,
FORTRAN and
Aside from the fact that the original question might well be a class exercise
(or homework), the question is unanswerable given the data given by the
original poster. One needs to know the dimensions of the reservoir, above and
below the current waterline. Are the sides, above and below the
At 13:27 on 07/04/2024, javad bayat wrote:
Dear all;
I have a question about the water level of a reservoir, when the volume
changed or doubled.
There is a DEM file with the highest elevation 1267 m. The lowest elevation
is 1230 m. The current volume of the reservoir is 7,000,000 m3 at 1240
Homework?
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
On April 7, 2024 8:27:18 AM EDT, javad bayat wrote:
>Dear all;
>I have a question about the water level of a reservoir, when the volume
>changed or doubled.
>There is a DEM file with the highest elevation 1267 m.
Dear all;
I have a question about the water level of a reservoir, when the volume
changed or doubled.
There is a DEM file with the highest elevation 1267 m. The lowest elevation
is 1230 m. The current volume of the reservoir is 7,000,000 m3 at 1240 m.
Now I want to know what would be the water
On Fri, 5 Apr 2024 16:08:13 +
Jorgen Harmse wrote:
> if duplicated really treated a row name as part of the row then
> any(duplicated(data.frame(…))) would always be FALSE. My expectation
> is that if key1 is a subset of key2 then all(duplicated(df[key1]) >=
> duplicated(df[key2])) should
Hello R users,
When I apply k-prototypes,
clustering <- kproto(x = data, k = 3, verbose = TRUE, lambda = 2)
The clustering object is not a data frame,
and summary(clustering) gives me the means for the numeric variables,
but I would like to obtain the standard deviation. One way could
be
(I do not know how to make Outlook send plain text, so I avoid apostrophes.)
For what it is worth, I agree with Mark Webster. The discussion by Ivan Krylov
is interesting, but if duplicated really treated a row name as part of the row
then any(duplicated(data.frame(…))) would always be FALSE.
Hello Ivan, thanks for this.
> Part of the problem is that it's not obvious what a
> zero-column but non-zero-row data.frame should mean.
>
> On the one hand, your database relation use case is entirely valid. On
> the other hand, if data.frames are considered to be tables of data with
>
Hello Mark,
On Fri, 5 Apr 2024 03:58:36 + (UTC)
Mark Webster via R-help wrote:
> I found what looks to me like an odd edge case for duplicated(),
> unique() etc. on data frames with zero columns, due to duplicated()
> returning a zero-length vector for them, regardless of the number of
>
Hello,
I found what looks to me like an odd edge case for duplicated(), unique() etc.
on data frames with zero columns, due to duplicated() returning a zero-length
vector for them, regardless of the number of rows:
df <- data.frame(a = 1:5)
df$a <- NULL
nrow(df) # 5 (row count preserved by
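Run on its own, the example shows duplicated() ignoring the row count once the last column is gone:

```r
df <- data.frame(a = 1:5)
df$a <- NULL            # zero columns; nrow(df) is still 5

length(duplicated(df))  # 0, not 5

# contrast with a one-column data frame, where the lengths agree:
duplicated(data.frame(a = c(1, 1)))   # FALSE TRUE
```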
Note:
> levels(factor(c(0,0,1))) ## just gives you the levels attribute
[1] "0" "1"
> as.character(factor(c(0,0,1))) ## gives you the level of each value in
the vector
[1] "0" "0" "1"
Does that answer your question or have I misunderstood.
Cheers,
Bert
On Tue, Apr 2, 2024 at 12:00 AM Kimmo
Using levels rather than length might cause problems. 2024, 1, 1, 0, 0 will
have a different number of levels than 2024, 3, 8, 0, 0, and I cannot assume
that the two trailing zeros are zero for all records. The code can be
simplified if you can assume more. It might require more work if I have
Already did...
On Tue, Apr 2, 2024 at 10:45 AM Eric Berger wrote:
>
> According to https://cran.r-project.org/web/packages/genoPlotR/index.html
> the maintainer of genoPlotR is
>
> Lionel Guy
>
> Send your question also to him.
>
> On Tue, Apr 2, 2024 at 11:27 AM Luigi Marongiu
> wrote:
> >
>
According to https://cran.r-project.org/web/packages/genoPlotR/index.html
the maintainer of genoPlotR is
Lionel Guy
Send your question also to him.
On Tue, Apr 2, 2024 at 11:27 AM Luigi Marongiu wrote:
>
> I would like to use your genoPlotR package
> (doi:10.1093/bioinformatics/btq413) to
I would like to use your genoPlotR package
(doi:10.1093/bioinformatics/btq413) to compare the genomes of two
isolates of E. coli K-12 that I have. One is a K-12 that was in my
lab's fridge; the other is a derivative of K-12 bought some time ago,
HB101.
I tried to use genoPlotR, but I could not
Hi,
why would this simple procedure not work?
--- snip ---
mydf <- data.frame(id_station = 1234, string_data = c(2024, 12, 1, 0, 0),
rainfall_value= 55)
mydf$string_data <- as.factor(mydf$string_data)
values<-as.integer(levels(mydf$string_data))
for (i in 1:length(values)) {
My rbook project may be of interest to other ESS users.
It is a project I wrote in Go (golang) that mirrors
the R session (text commands, text outputs, and plots)
to a web browser.
This makes it easy for me to work on a remote linux
box but see all the graphics (without
having to run a local X
That did it. Thanks,
Kevin
On Sun, Mar 31, 2024, 7:17 AM Stephen J. Eglen wrote:
>
>
> hi Kevin, Rodney,
>
> I think the variable and setting you are looking for is:
>
> (setq ess-startup-directory 'default-directory)
>
> Best wishes,
>
> Stephen
>
> On Sun, Mar 31 2024, Kevin
hi Kevin, Rodney,
I think the variable and setting you are looking for is:
(setq ess-startup-directory 'default-directory)
Best wishes,
Stephen
On Sun, Mar 31 2024, Kevin Coombes via ESS-help wrote:
I don't know where this "feature" lives in the code. But it
might help find
it to
I don't know where this "feature" lives in the code. But it might help find
it to know that it does the same thing in any git project or subversion
project, even if it isn't an R package.
Hopefully, someone can track it down, since I view it as a bug I want to
remove, and not a feature.
Best,
Oops…
More details…
(ess-version)
"ess-version: 24.01.1 [] (loaded from
/usr/local/emacs/29.2/share/emacs/site-lisp/ess/lisp/)"
emacs-version
"29.2"
When I have (setq ess-ask-for-ess-directory t)
then it prompts me: but the prompted default
is for the root of the R package rather than
Hi ESS-help:
When I am working on an R script that is not part of an R package,
then ess-set-working-directory does what would be expected, i.e.,
on a launch of R it does setwd() to the buffer's default-directory.
However, when I am in, say, the demo directory of an R package,
then it doesn't.
Hello,
I am interested in running generalized estimating equation models in R.
Currently there are two main packages for doing so in R, geepack and gee. I
understand that even though one can obtain similar to almost identical
results using either of the two, that there are differences between the
This is great!
Many thanks to all for helping to further resolve the problem.
Best wishes
Ogbos
On Fri, Mar 29, 2024 at 6:39 AM Rui Barradas wrote:
> At 01:43 on 29/03/2024, Ogbos Okike wrote:
> > Dear Rui,
> > Thanks again for resolving this. I have already started using the version
> >
At 01:43 on 29/03/2024, Ogbos Okike wrote:
Dear Rui,
Thanks again for resolving this. I have already started using the version
that works for me.
But to clarify the second part, please let me paste what I did and the
error message:
set.seed(2024)
data <- data.frame(
+Date =
I would guess your version of R is earlier than 4.1, when the built-in pipe was
introduced to the language
On March 28, 2024 6:43:05 PM PDT, Ogbos Okike wrote:
>Dear Rui,
>Thanks again for resolving this. I have already started using the version
>that works for me.
>
>But to clarify the second
Dear Deepayan,
Thanks for your kind response.
Regards
Ogbos
On Thu, Mar 28, 2024 at 3:40 AM Deepayan Sarkar
wrote:
> For more complicated examples, the (relatively new) array2DF()
> function is also useful:
>
> > with(data, tapply(count, Date, mean)) |> array2DF()
>         Var1 Value
> 1
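A self-contained version of the idiom (requires R >= 4.3 for array2DF(); the data frame here is made up for illustration):

```r
data <- data.frame(
  Date  = rep(c("2024-03-01", "2024-03-02"), each = 2),
  count = c(1, 3, 2, 4)
)

# per-Date means come back as a named array; array2DF() flattens it
with(data, tapply(count, Date, mean)) |> array2DF()
#         Var1 Value
# 1 2024-03-01     2
# 2 2024-03-02     3
```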
Dear Rui,
Thanks again for resolving this. I have already started using the version
that works for me.
But to clarify the second part, please let me paste what I did and the
error message:
> set.seed(2024)
> data <- data.frame(
+Date = sample(seq(Sys.Date() - 5, Sys.Date(), by = "1
Here are some pieces of working code. I assume you want the second one or the
third one that is functionally the same but all in one statement. I do not
understand why it is a factor, but I will assume that there is a current and
future reason for that. This means I cannot alter the string_data
On 28/03/2024 7:48 a.m., Stefano Sofia wrote:
as.factor(2024, 12, 1, 0, 0)
That doesn't work. You need to put the numbers in a single vector as
Fabio did, or you'll see this:
Error in as.factor(2024, 12, 1, 0, 0) : unused arguments (12, 1, 0, 0)
Duncan Murdoch
Sorry for my hurry.
The correct reproducible code is different from the initial one. The correct
example is
mydf <- data.frame(id_station = 1234, string_data = as.factor(2024, 12, 1, 0,
0), rainfall_value= 55)
In this case mydf$string_data is a factor, but of length 1 (and not 5 like in
Thank you Fabio.
So easy and straightforward!
Stefano
Stefano Sofia PhD
Civil Protection - Marche Region - Italy
Meteo Section
Snow Section
Via del Colle Ameno 5
60126 Torrette di Ancona, Ancona (AN)
Uff: +39 071 806 7743
Hi Stefano,
maybe something like this can help you?
myfactor <- as.factor(c(2024, 2, 1, 0, 0))
# Convert factor values to integers
first_element <- as.integer(as.character(myfactor)[1])
second_element <- as.integer(as.character(myfactor)[2])
third_element <- as.integer(as.character(myfactor)[3])
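The per-element extraction above can be collapsed into one vectorized step:

```r
myfactor <- as.factor(c(2024, 2, 1, 0, 0))

# go through character first; as.integer() directly on a factor
# would return the internal level codes, not the original numbers
values <- as.integer(as.character(myfactor))
values      # 2024 2 1 0 0
values[1]   # 2024
```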
Dear R-list users,
forgive me for this silly question, I did my best to find a solution with no
success.
Suppose I have a factor type like
myfactor <- as.factor(2024, 2, 1, 0, 0)
There are no characters (and therefore strsplit, for example, does not work).
I need to store separately the 1st,