t a DEM file was and that contains enough of the kind of
>> >>> depth-dimension data you describe albeit what may be a very irregular
>> cross
>> >>> section to calculate for areas and thence volumes.
>> >>>
>> >>> If I read it correc
In a word: Yes.
We discussed this about 2w ago. Basically, the lm() fits a local Linear
Probability Model and the coef to "score" gives you the direction of the effect.
In the same thread it was discussed (well, readable between the lines, maybe)
that if you change the lm() to a Gaussian glm()
Colleagues,
The code for prop.trend.test is given by:
function (x, n, score = seq_along(x))
{
method <- "Chi-squared Test for Trend in Proportions"
dname <- paste(deparse1(substitute(x)), "out of",
deparse1(substitute(n)),
",\n using scores:", paste(score, collapse = " "))
I want to apply interpolation functions from one data.table to each row
of another data.table.
interp.dt <- data.table(scen = rep(c("a", "b"), c(3, 3)), term = c(1,
20, 60, 1, 32, 72), shock = c(10, 20, 30, 9, 12, 32))
interp.fn <- function(df, x) with(df, approx(term, shock, xout = x)$y)
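One way to finish the job, sketched in base R with a hypothetical target table (the same per-group logic can be written with data.table's by-group syntax):

```r
# interpolation table and function from the question
interp.dt <- data.frame(scen = rep(c("a", "b"), c(3, 3)),
                        term = c(1, 20, 60, 1, 32, 72),
                        shock = c(10, 20, 30, 9, 12, 32))
interp.fn <- function(df, x) with(df, approx(term, shock, xout = x)$y)

# hypothetical target rows, one scenario and term each
target <- data.frame(scen = c("a", "a", "b"), term = c(20, 40, 52))

# interpolate each row against the rows of interp.dt for its scenario
target$shock <- unname(mapply(
  function(s, t) interp.fn(interp.dt[interp.dt$scen == s, ], t),
  target$scen, target$term))
target$shock  # 20 25 22
```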
You may also find the package "lutz" to be of interest, although that may
be overkill for your needs.
(found by an internet search).
Cheers,
Bert
On Thu, Aug 17, 2023 at 1:31 PM Dennis Fisher wrote:
> R 4.3.1
> OS X
>
> Colleagues
>
> Is there a simple way to determine the timezone offset for
To: r-help@r-project.org
Subject: [R] Timezone question
R 4.3.1
OS X
Colleagues
Is there a simple way to determine the timezone offset for my present location?
For example, during standard time in the US, the offset from GMT is 8 hours in
California.
Dennis
Dennis Fisher MD
P < (The "P Less Than" Company)
Phone / Fax: 1-866-PLessThan
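For what it's worth, base R can report the current UTC offset directly; a minimal sketch using the "%z" conversion specifier (the numeric offset of the local time zone):

```r
# numeric offset of the local time zone from UTC at this moment,
# e.g. "-0800" during Pacific Standard Time
format(Sys.time(), "%z")

# the time zone name itself (may be NA on some platforms)
Sys.timezone()
```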
I was just replying to ask which bit you consider the indicator.
But I see Boris has provided a Chat GPT solution.
Running it hopefully shows you how to change colours on various parts.
On Fri, 21 Jul 2023, 22:43 Jeff Newmiller, wrote:
> plotly is _not_ associated with posit. I think you
Since it is 2023, I asked that question to ChatGPT-4 and got the following
response.
-
The `plotly` library in R uses the `gauge` argument inside the `plot_ly`
function to specify the properties of the gauge plot. You can change the
indicator color of the arc (also
plotly is _not_ associated with posit. I think you are unlikely to find
expertise with plotly in their forums. You might find help at stackoverflow.com.
On July 21, 2023 1:40:49 PM PDT, Bert Gunter wrote:
>As you apparently haven't received any responses yet, I'll try to
>suggest something
As you apparently haven't received any responses yet, I'll try to
suggest something useful. However, I have absolutely zero experience
with plotly, so this is just from general principles and reading the
plot_ly Help file, which says for the "..." arguments:
"Arguments (i.e., attributes) passed
Colleagues
Here is my reproducible code
plot_ly(
domain = list(x = c(0, 1), y = c(0, 1)),
value = 2874,
title = list(text = "Generic"),
type = "indicator",
mode = "gauge+number+delta",
delta = list(reference = 4800),
gauge = list(
axis = list(range = list(NULL, 5000)),
steps
Colleagues
Consider:
smokers <- c( 83, 90, 129, 70 )
patients <- c( 86, 93, 136, 82 )
prop.test(smokers, patients)
4-sample test for equality of proportions
without continuity correction
data: smokers out of patients
X-squared = 12.6, df = 3, p-value = 0.005585
alternative hypothesis:
Probably not here. Better here: https://bioconductor.org/help/
-- Bert
On Thu, Nov 17, 2022 at 6:43 AM Li, Aiguo (NIH/NCI) [E] via R-help <
r-help@r-project.org> wrote:
> Dear all,
>
> I need to extract peptides from a long list of indels of mouse for
> neoantigen analysis. Does anyone know a
Dear all,
I need to extract peptides from a long list of mouse indels for neoantigen
analysis. Does anyone know a tool that will do this?
Thanks,
Anna
[[alternative HTML version deleted]]
__
R-help@r-project.org mailing list -- To
I hit the wrong button, unfortunately, so others beside Naresh and
Deepayan can safely ignore my "coda".
On Fri, Aug 12, 2022 at 2:29 PM Bert Gunter wrote:
>
> As a private coda -- as it is unlikely to be of general interest --
> note that it is easy to do this without resorting to the layering
This is the solution I was looking for. Thanks to Deepayan and Bert for
sticking with me.
Naresh
Sent from my iPhone
On Aug 12, 2022, at 8:02 AM, Deepayan Sarkar wrote:
On Thu, Aug 11, 2022 at 9:03 PM Naresh Gurbuxani
mailto:naresh_gurbux...@hotmail.com>> wrote:
Bert,
Thanks for
Deepayan,
Thanks for providing a solution. While this is close to my goal, I want one
more change. The line type (lty) should be the same for long and short. The
line type should only change according to “name” group. So the graph will
have two line types (not four as in your
y rename columns if necessary.
Cheers
Petr
> -Original Message-
> From: R-help On Behalf Of Richard O'Keefe
> Sent: Thursday, June 23, 2022 2:29 AM
> To: Thomas Subia
> Cc: r-help@r-project.org
> Subject: Re: [R] Dplyr question
>
> Why do you want to use dplyr?
ple. And there are way more packages out there
>that most of us are not even aware exist!
>
>
>-Original Message-
>From: Bert Gunter
>To: Rui Barradas
>Cc: r-help@r-project.org ; Thomas Subia
>
>Sent: Tue, Jun 21, 2022 2:25 pm
>Subject: Re: [R] Dplyr questio
Hello,
Right, intuitive is (very) relative. I was thinking of base function
stats::reshape. Its main difficulty is, imho, to reshape to both wide
and long formats. Compared to it, tidyr::pivot_* are (much?) easier to
understand.
Here is a stats::reshape solution.
df_long <- reshape(
Beware of being too specific about how you want something solved... not just
here, but in all contexts. Your question is like "how do I slice this apple
with this potholder"... dplyr actually doesn't do that, and you can benefit
from learning how to do things in general, not just in your
21, 2022 12:23 PM
To: r-help@r-project.org
Subject: [R] Dplyr question
[External Email]
Colleagues:
The header of my data set is:
Time_stamp P1A0B0D P190-90D
Jun-10 10:34  -0.000208  -0.000195
Jun-10 10:51  -0.000228  -0.000188
Jun-10 11:02  -0.000234  -0.000204
Jun-10 11
Thank you for the reference to "Spatial Predictive Modeling with R".
I look forward to reading it.
Start with a good book like "Applied Spatial
Data Analysis with R". If you want to do spatial
data analysis, then you are going to need measurements
at lots of different places in space.
On Thu, 24 Mar 2022 at 23:14, Hasliza Rusmili
wrote:
> Thank you very much. I will ask the question there.
Thank you very much. I will ask the question there.
Siti Hasliza
On Mon, 21 Mar 2022, 03:28 Bert Gunter, wrote:
> You should post this on the r-sig-geo list rather than here:
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
> That's where expertise on spatial data analysis is likely to
Hello,
Sorry, typo. It's rowSums(y), not x.
x[rowSums(y) > 0L, ]
Rui Barradas
Às 20:30 de 18/02/2022, Rui Barradas escreveu:
Hello,
Use ?rowSums and compare its result to 0. You want the sums greater than
zero.
x <- "
id g
1 1 21
2 3 52
3 2 43
4 4 94
5 5 35"
y <- "
id g
1 1 1
x[apply(y,MAR=1,sum) > 0,]
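Either version can be checked against the data from the original post (quoted below):

```r
# data from the original post
x <- data.frame(id = c(1, 3, 2, 4, 5), g = c(21, 52, 43, 94, 35))
y <- data.frame(id = c(1, 0, 0, 1, 1), g = c(1, 0, 1, 0, 0))

x[rowSums(y) > 0L, ]                 # keeps rows 1, 3, 4, 5
x[apply(y, MARGIN = 1, sum) > 0, ]   # same result, computed row by row
```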
On Fri, Feb 18, 2022 at 10:24 PM Li, Aiguo (NIH/NCI) [E] via R-help <
r-help@r-project.org> wrote:
> I have two data frames as below:
> > x
> id g
> 1 1 21
> 2 3 52
> 3 2 43
> 4 4 94
> 5 5 35
>
> > y
> id g
> 1 1 1
> 2 0 0
> 3 0 1
> 4 1 0
> 5 1 0
>
>
Hello List,
I use ggplot to draw a stacked bar chart, but I get an error message. Please
see below:
> ggplot(s8_plot, aes(fill=GTresult, y=cases, x=gc_label) +
+ geom_bar(position="stack", stat="identity"))
Error: Mapping should be created with `aes()` or `aes_()`.
GTresult and gc_label are
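The error comes from a misplaced parenthesis: the geom_bar() call ended up inside aes(). A corrected sketch, with hypothetical stand-in data since s8_plot itself is not shown:

```r
library(ggplot2)

# hypothetical stand-in for s8_plot
s8_plot <- data.frame(gc_label = c("g1", "g1", "g2"),
                      GTresult = c("A", "B", "A"),
                      cases    = c(3, 5, 2))

# note: aes() closes before geom_bar() is added with +
p <- ggplot(s8_plot, aes(fill = GTresult, y = cases, x = gc_label)) +
  geom_bar(position = "stack", stat = "identity")
```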
Definitely doable and you are on the right path and maybe even close.
The error message you got showed your query as having the wrong info after
the 'FROM'
keyword
' SELECT * FROM c("BIODBX.MECCUNIQUE2", "BIODBX.QDATA_HTML_DUMMY",
"BIODBX.SET_ITEMS", "BIODBX.SET_NAMES", "dbo.sysdiagrams",
Hi Eric,
Thank you for spending time to help me with this.
Here is the thing: I was asked to manage a SQL Server for my group. The
server has many schemas and tables (>200). I use ODBC to connect to the server
and get the schema name + table name into a data frame.
For each schema + table on
Not all advice received on the Internet is safe.
https://xkcd.com/327
https://db.rstudio.com/best-practices/run-queries-safely
It is not that much more difficult to do it right.
On July 2, 2021 12:05:43 PM PDT, Eric Berger wrote:
>Modify the summ() function to start like this
>
>summ <-
Hard for me to tell without more details but it looks like the following
has several bugs
for (i in dbtable$Tot_table)
{
Tabname <- as.character(sqldf(sprintf("SELECT Tot_table FROM dbtable",
i)))
summ(Tabname)
}
Your sprintf() statement seems to use 'i' but actually does not.
You probably
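To spell out the sprintf() point: without a format specifier such as %s in the format string, the extra argument is never used. The working pattern looks like this (for trusted table names only; see the run-queries-safely link elsewhere in the thread):

```r
tabname <- "BIODBX.MECCUNIQUE2"   # one of the table names from the thread
query <- sprintf("SELECT * FROM %s", tabname)
query  # "SELECT * FROM BIODBX.MECCUNIQUE2"
```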
Hello Eric,
Following your suggestion, I modified the code as:
summ <- function(Tabname){
query <- sprintf(" SELECT * FROM %s",Tabname)
res <- dbGetQuery(con, query)
view(dfSummary(res), file =
"W:/project/_Joe.B/MSSQL/try/summarytools.Tabname.html")
rm(res)
}
for (i in
Modify the summ() function to start like this
summ <- function(Tabname){
query <- sprintf(" SELECT * FROM %s",Tabname)
res <- dbGetQuery(con, query)
etc
HTH,
Eric
On Fri, Jul 2, 2021 at 9:39 PM Kai Yang via R-help
wrote:
> Hello List,
>
> The previous post look massy. I repost my
Hello List,
The previous post looked messy, so I am reposting my question. Sorry.
I need to generate summary report for many tables (>200 tables). For each
table, I can use the script to generate report:
res <- dbGetQuery(con, "SELECT * FROM BIODBX.MECCUNIQUE2")
view(dfSummary(res), file =
Hello List,
I need to generate summary report for many tables (>200 tables). For
each table, I can use the script to generate report:
res <- dbGetQuery(con, "SELECT * FROM BIODBX.MECCUNIQUE2")
view(dfSummary(res), file =
"W:/project/_Joe.B/MSSQL/try/summarytools.BIODBX.MECCUNIQUE2.html")
rm(res)
FWIW:
I think Jim makes an excellent point -- regex's really aren't the right
tool for this sort of thing (imho); matching is.
Note also that if one is willing to live with a logical response (better,
again imho), then the ifelse() can of course be dispensed with:
> CRC$MMR.gene<-CRC$gene.all
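Bert's truncated line presumably continued with a logical test; a sketch of the ifelse()-free version, using the example data from the %in% reply nearby:

```r
match_strings <- c("MLH1", "MSH2")
CRC <- data.frame(gene.all = c("MLH1", "MSL1", "MSH2", "MCC3"))

# a logical column directly -- no ifelse() needed
CRC$MMR.gene <- CRC$gene.all %in% match_strings
CRC$MMR.gene  # TRUE FALSE TRUE FALSE
```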
Hi Kai,
You may find %in% easier than grep when multiple matches are needed:
match_strings<-c("MLH1","MSH2")
CRC<-data.frame(gene.all=c("MLH1","MSL1","MSH2","MCC3"))
CRC$MMR.gene<-ifelse(CRC$gene.all %in% match_strings,"Yes","No")
Composing your match strings before applying %in% may be more
Hi,
A quick clarification:
The regular expression is a single quoted character vector, not a
character vector on either side of the | operator:
"MLH1|MSH2"
not:
"MLH1"|"MSH2"
The | is treated as a special character within the regular expression.
See ?regex.
grep(), when value = FALSE,
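A quick demonstration of the single-pattern form, using example gene names from this thread:

```r
x <- c("MLH1", "MSL1", "MSH2", "MCC3")

# one pattern string; | is the regex alternation operator
grepl("MLH1|MSH2", x)                      # TRUE FALSE TRUE FALSE
ifelse(grepl("MLH1|MSH2", x), "Yes", "No") # "Yes" "No" "Yes" "No"
```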
Hi Rui,
Thank you for your suggestion, but when I try the solution, I get the message
below:
Error in "MLH1" | "MSH2" : operations are possible only for numeric, logical
or complex types
Does it mean grepl cannot work on a character field?
Thanks,
Kai
On Thursday, May 27, 2021, 01:37:58 AM
Hello,
ifelse needs a logical condition, not the value. Try grepl.
CRC$MMR.gene <- ifelse(grepl("MLH1"|"MSH2",CRC$gene.all), "Yes", "No")
Hope this helps,
Rui Barradas
Às 05:29 de 27/05/21, Kai Yang via R-help escreveu:
Hi List,
I wrote the code to create a new variable:
Post in plain text
Use grepl
On May 26, 2021 9:29:10 PM PDT, Kai Yang via R-help
wrote:
>Hi List,
>I wrote the code to create a new variable:
>CRC$MMR.gene<-ifelse(grep("MLH1"|"MSH2",CRC$gene.all,value=T),"Yes","No")
>
>
>I need to create MMR.gene column in CRC data frame, if gene.all column
Hi List,
I wrote the code to create a new variable:
CRC$MMR.gene<-ifelse(grep("MLH1"|"MSH2",CRC$gene.all,value=T),"Yes","No")
I need to create an MMR.gene column in the CRC data frame: if the gene.all
column contains MLH1 or MSH2, then MMR.gene = "Yes"; if not, MMR.gene = "No".
But the code doesn't work for
Checked R-Sig-Mac, which I should have done before posting, then
leaving this alone on R-help. Seems to be a solved problem:
*
The Mac R GUI: "R-GUI-7903-4.0-high-sierra-Debug" works fine.
Those warnings disappeared with this R GUI (with a 16-inch MacBook Pro
2019, Big Sur 11.0.1 operating
The current version of Big Sur is 11.1, with 11.2 in public beta. So
this may have been fixed. Maedeh, are you able to check?
On Sun, Jan 17, 2021 at 4:10 PM Maedeh Kamali wrote:
>
> Dear Gregory Coast,
>
> Thanks for your reply.
> I searched so much regarding how to fix this problem.
Dear Gregory Coats,
Thanks for your reply.
I searched a lot regarding how to fix this problem. Unfortunately, it
stems from Big Sur 11.0.1, and we should wait for its next version, in which
the problem has been fixed.
Best,
Maedeh Kamali
On Sat, 16 Jan 2021, 08:18 Gregory Coats, wrote:
> I
I reported this behavior on Thu Jan 7, 2021.
You did nothing wrong.
No fix has been issued.
This evening, I upgraded from R 4.0.2 to the Duke University R 4.0.3 for Apple
Mac. Now all I can get from R 4.0.3 is this red error message (that means
nothing to me). Is there an easy fix? Greg
Dear Sir/Madam,
After installing R version 4.0.3 and launching the R Console for the
first time, the warning message below appeared:
2021-01-15 11:52:28.749 R[21525:2855847] Warning: Expected min height of view:
() to be less than or equal to 30
but got a height of 32.00. This error
Please disregard my previous post. My understanding is correct, and the
behavior is **AS DOCUMENTED**.
I failed to read the docs carefully. Mea Culpa.
Best,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka
I would appreciate any help in correcting my misunderstanding of the
following:
> substitute(quote(x+a), env = list(a=5))
quote(x + 5) ## as expected
> substitute(quote(x+a), env = list2env(list(a=5)))
quote(x + 5) ## as expected
> ### BUT
> .GlobalEnv$a
[1] 5
> substitute(quote(x+a), env =
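As Bert's follow-up above notes, the behaviour is as documented in ?substitute: an ordinary variable is replaced by its value unless env is .GlobalEnv, in which case the symbol is left unchanged. A sketch:

```r
a <- 5

substitute(quote(x + a), env = list(a = 5))
# quote(x + 5)

substitute(quote(x + a), env = .GlobalEnv)
# quote(x + a) -- .GlobalEnv is special-cased: symbols are left unchanged
```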
This is well described in the manual.
Sounds like homework...
el
On 15/10/2020 12:39, Nico Gutierrez wrote:
> Hi All,
>
> Trying to get familiar with dplyr so I have a basic question:
>
> How to summarise sum(Values) per species, maintaining Code column (each
> species has a Code):
>
>
Hi All,
Trying to get familiar with dplyr so I have a basic question:
How to summarise sum(Values) per species, maintaining Code column (each species
has a Code):
Species Values Code
1Acanthocybium solandri33
Hello,
This question looks like a homework question. The posting guide says that
"Basic statistics and classroom homework: R-help is not intended for these."
I would take a look at
help('cut') # pay attention to argument labels
help('ifelse')
help('findInterval')
Hope this helps,
Rui
This list has a no-homework policy. I assume that your teaching material has
examples that are sufficiently similar so that you should be able to modify
them.
-pd
> On 8 Oct 2020, at 10:10 , Xavier Garcia via R-help
> wrote:
>
> I'm solving the following problem: Create a variable (column)
I'm solving the following problem: Create a variable (column) in the “wf”
dataframe named “Zone” that takes value of “tropic” if Latitude is less
than or equal to 30, or “non-tropic” for Latitude greater than 30. Show you
Zone variable. Latitude is a column of my dataframe. I don't know the
concatenating them together in one step seems to be the most efficient way.
One probably could design such a function, but the time spent on a function
performing the task only once is probably bigger than performing 250*3 reads.
I see inefficiency in writing each column into a separate text file and
copying it back to the Excel file.
Cheers
Petr
-Original Message-
From: Upton
t; To: PIKAL Petr ; Thomas Subia
> Cc: r-help@r-project.org
> Subject: RE: [R] readxl question
>
> From your example, it appears you are reading in the same excel file for
> each function to get a value. I would look at creating a function that
> extracts what you need from each
Center for Data Farming
SEED Center website: https://harvest.nps.edu
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of PIKAL Petr
Sent: Wednesday, August 26, 2020 3:50 AM
To: Thomas Subia
Cc: r-help@r-project.org
Subject: Re: [R] readxl question
NPS
now "result.xls" is directly readable with Excel
Cheers
Petr
>
> -Original Message-
> From: R-help On Behalf Of Thomas Subia via
> R-help
> Sent: Saturday, August 22, 2020 6:25 AM
> To: r-help@r-project.org
> Subject: [R] readxl question
>
> Collea
Colleagues,
I have 250 Excel files in a directory. Each of those files has the same
layout. The problem is that the data in each Excel data is not in
rectangular form. I've been using readxl to extract the data which I need.
Each of my metrics are stored in a particular cell. For each metric,
Rasmus,
thank you,
I am an elderly Gynecologist, dabbling a little, ie exactly the
clientele for which the tidyverse "thingy" was developed :-)-O.
In addition I like readable code so I later understand what I was trying
to do :-)-O
el
On 2020-08-21 16:15 , Rasmus Liland wrote:
> On
On 2020-08-21 13:45 +0200, Dr Eberhard Lisse wrote:
|
| Eric, Rasmus,
|
| thank you very much,
|
|ALLPAP %>%
|group_by(Provider) %>%
|mutate( minDt=min(CollectionDate),
|maxDt=max(CollectionDate)) %>%
|summarize(
Using mutate followed by summarise in this case is completely unnecessary.
a <- ( lDf
%>% dplyr::group_by( Provider )
%>% dplyr::summarise( u = min( CollectionDate )
, v = max( CollectionDate )
)
)
On August 21, 2020 2:41:26 AM
Eric, Rasmus,
thank you very much,
ALLPAP %>%
group_by(Provider) %>%
mutate( minDt=min(CollectionDate),
maxDt=max(CollectionDate)) %>%
summarize( minDt = min(minDt),
maxDt = max(maxDt),
Hi Eberhard,
Here is one possibility using dplyr.
library(dplyr)
set.seed(3)
## set up some fake data
dtV <- as.Date("2020-08-01") + 0:4
x <- sample(dtV,20,repl=TRUE)
provider <- sample(LETTERS[1:3],20,repl=TRUE)
lDf <- data.frame(Provider=provider,CollectionDate=x,stringsAsFactors=FALSE)
##
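For comparison, the same per-provider min/max can be computed in base R with Eric's fake data (a sketch):

```r
set.seed(3)
## set up the same fake data as in Eric's dplyr example
dtV <- as.Date("2020-08-01") + 0:4
x <- sample(dtV, 20, replace = TRUE)
provider <- sample(LETTERS[1:3], 20, replace = TRUE)
lDf <- data.frame(Provider = provider, CollectionDate = x)

# per-provider minimum and maximum collection dates, base R only
mins <- aggregate(CollectionDate ~ Provider, data = lDf, FUN = min)
maxs <- aggregate(CollectionDate ~ Provider, data = lDf, FUN = max)
merge(mins, maxs, by = "Provider", suffixes = c(".min", ".max"))
```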
On 2020-08-21 09:03 +0200, Dr Eberhard Lisse wrote:
> Hi,
>
> I have a small test sample with lab
> reports (PAP smears) from a number of
> different providers. These have
> Collection Dates and the relevant
> columns glimpse() something like
> this:
>
> $ Provider"Dr C", "Dr D",
Hi,
I have a small test sample with lab reports (PAP smears) from a number
of different providers. These have Collection Dates and the relevant
columns glimpse() something like this:
$ Provider"Dr C", "Dr D", "Dr C", "Dr D"
$ CollectionDate "2016-11-03", "2016-11-02", "2016-11-03",
Dear Zixuan,
On 2020-07-26 07:36 -0700, Jeff Newmiller wrote:
> On July 26, 2020 7:33:32 AM PDT, Zixuan Qi wrote:
> > Hi,
> >
> > I encounter a problem in R. My program is as follows.
> > lower <- c(-Inf, -Inf, -Inf, -Inf, 0, 0, 0, -1, -1, -1)
> > upper <- c(Inf, Inf, Inf, Inf, Inf, Inf, Inf,
For this and the nlminb posting, a reproducible example would be useful.
The optimx package (I am maintainer) would make your life easier in that it
wraps nlminb and optim() and other solvers, so you can use a consistent call.
Also you can compare several methods with opm(), but do NOT use this
Hi,
I use the function nlminb to maximize a function and got convergence with the
message false-convergence. I know the reason may be the gradient ∇f(x)
may be computed incorrectly, the other stopping tolerances may be too tight, or
either f or ∇f may be discontinuous near the
This is not reproducible.
[1]
http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example
[2] http://adv-r.had.co.nz/Reproducibility.html
[3] https://cran.r-project.org/web/packages/reprex/index.html (read the
vignette)
On July 26, 2020 7:33:32 AM PDT, Zixuan Qi
Hi,
I encounter a problem in R. My program is as follows.
lower <- c(-Inf,-Inf,-Inf,-Inf,0,0,0,-1,-1,-1)
upper <- c(Inf,Inf,Inf,Inf,Inf,Inf,Inf,1,1,1)
out <-
optim(parm,logLik,method='L-BFGS-B',lower=lower,upper=upper,hessian=hessian)
As you can see, I have restricted parameter[5], parameter[6]
Hi Ravi,
that's an interesting claim about N-M. Can you provide any reading matter to
support it?
Cheers,
Andrew
--
Andrew Robinson
Director, CEBRA and Professor of Biosecurity,
School/s of BioSciences and Mathematics & Statistics
University of Melbourne, VIC 3010 Australia
Tel: (+61) 0403 138
Hi John,
I wonder if you can suggest some reading material on that topic? A cursory
search of the net doesn't uncover anything obvious.
Andrew
--
Andrew Robinson
Director, CEBRA and Professor of Biosecurity,
School/s of BioSciences and Mathematics & Statistics
University of Melbourne, VIC
I agree with John that SANN should be removed from optim.
More importantly, the default choice of optimizer in optim should be changed
from "Nelder-Mead" to "BFGS." Nelder-Mead is a bad choice for the most
commonly encountered optimization problems in statistics. I really do not see
a good
SANN is almost NEVER the tool to use.
I've given up trying to get it removed from optim(), and will soon give up
on telling folk not to use it.
JN
On 2020-07-22 3:06 a.m., Zixuan Qi wrote:
> Hi,
>
> I encounter a problem. I use optim() function in R to estimate likelihood
> function and the
Simulated annealing is a probabilistic method and will do things like that. You
should probably read an introduction to the method, e.g. the Wikipedia page.
Not too unlikely, you really want to use one of the other methods in optim()
(or better still optimr from the optimx package).
(I take
Hi,
I encounter a problem. I use optim() function in R to estimate likelihood
function and the method is SANN in the optim function.
out <-
optim(parm,logLik,method='SANN',hessian=T,control=list(maxit=500))
However, I find that each time I run the program, I will get different
values of
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)

From: R-help on behalf of Adelchi Azzalini
Sent: Monday, June 1, 2020 3:17 PM
To: Michael Dewey
Cc: r-help@r-project.org
Subject: Re: [R] a question of etiquette
> On 1 Jun 2020, at 19:37, Michael Dewey wrote:
>
> You might get better answers on the list dedicated to package development
> r-pkg-devel
This is a good suggestion. Thanks, Michael.
Some initial search of that list did not lead to any indication,
but I will have a second look.
Best
You might get better answers on the list dedicated to package
development r-pkg-devel
This may have already been discussed there so a quick look at the
archive might also help you.
On 01/06/2020 17:34, Adelchi Azzalini wrote:
The new version of a package which I maintain will include a new
The new version of a package which I maintain will include a new function which
I have ported to R from Matlab.
The documentation of this R function indicates the authors of the original
Matlab code, reference to their paper, URL of the source code.
Question: is this adequate, or should I
Dear Ista (and Phillip),
Ista, that's the exact same advice I gave Phillip over a week ago:
https://stat.ethz.ch/pipermail/r-help/2020-March/465994.html
Phillip, it doesn't make sense to post the same question under
different subject headings. While I'm convinced you're making a
sincere effort
This is not a reproducible recipe. I am positive this is an operator error...
not keeping current working directory in the same place, or failing to close
the file when done.
On March 23, 2020 3:33:00 PM PDT, Phillip Heinrich wrote:
>Can someone out there run the following code from the book
Hi Phillip,
On Mon, Mar 23, 2020 at 6:33 PM Phillip Heinrich wrote:
>
> Can someone out there run the following code from the book Analyzing Baseball
> Data with R – Chapter 7 page 164?
>
> library(tidyverse)
> db <- src_sqlite("data/pitchrx.sqlite", create = TRUE)
>
> Over the past two
Can someone out there run the following code from the book Analyzing Baseball
Data with R – Chapter 7 page 164?
library(tidyverse)
db <- src_sqlite("data/pitchrx.sqlite", create = TRUE)
Over the past two weeks this code has run correctly twice but I have gotten the
following error dozens
8 pass pass
5 2 10 pass pass
6 3 19 fail fail
7 3 13 pass fail
It would be easier for us if you used dput() to share your data, but thanks
for the minimal example!
Chris
- Original Message -
On Sat, 21 Mar 2020 20:01:30 -0700
Thomas Subia via R-help wrote:
> Serial_test is a pass, when all of the Meas_test are pass for a given
> serial. Else Serial_test is a fail.
Use by/tapply in base R or dplyr::group_by if you prefer tidyverse
packages.
--
Best regards,
Ivan
Colleagues,
Here is my dataset.
Serial Measurement Meas_test Serial_test
1 17 fail fail
1 16 pass fail
2 12 pass pass
2 8 pass pass
2 10 pass
Hello,
If groups are factors, pass the level you want to annotate.
This works, note the 'x' value:
ggplot(iris, aes(Species, Petal.Length)) +
geom_boxplot() +
annotate(geom = "text", x = "versicolor", y = 6, label = "16 u")
Hope this helps,
Rui Barradas
Às 20:26 de 19/02/20, Thomas
Since factor levels (groups) are coded by integers, you can use 1, 2, 3
etc. as your x values. If you want to annotate in between you can simply
pick values in between 1, 2, 3, etc.
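Putting the numeric-position idea into the iris example from above (a sketch; x = 1.5 lands between the first two boxes):

```r
library(ggplot2)

# factor levels are positioned at 1, 2, 3, ... on a discrete axis,
# so a non-integer x places the annotation between groups
p <- ggplot(iris, aes(Species, Petal.Length)) +
  geom_boxplot() +
  annotate("text", x = 1.5, y = 6, label = "16 u")
```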
On Wed, Feb 19, 2020, 13:26 Thomas Subia, wrote:
> Colleagues,
>
> To add an annotation using ggplot, I've used
Colleagues,
To add an annotation using ggplot, I've used
annotate("text",x=17,y=2130,label="16 u").
However, this does not work when trying to annotate box plots by groups since
groups are factors.
Any advice would be appreciated.
Thomas Subia
ASQ CQE
IMG Companies
225 Mountain Vista
Hi Thomas,
Perhaps this is what you are seeking:
my_read_excel<-function(filename) {
serials<-read_excel(filename,sheet="Flow Data",range=("c6"))
flow.data<-read_excel(filename,sheet="Flow Data",range=("c22:c70"))
dates<-read_excel(filename,sheet="Flow Data",range=("h14"))
Colleagues,
I am using readxl to extract a serial number and its associated data using the
following code.
library(readxl)
files <- list.files(pattern="*.xls", full.names = FALSE)
serials <- lapply(files, read_excel, sheet="Flow Data", range=("c6"))
flow.datum <- lapply(files, read_excel,
On Thu, 5 Dec 2019 15:39:56 +
Thomas Subia wrote:
> date <- lapply(files, read_excel, sheet="Sheet1", range=("B5"))
> date_df <- as.data.frame(date)
> trans_date <-t(date_df)
> mydates <- list(trans_date)
This feels a bit excessive for what looks like a one-dimensional string
vector. Why is