If there are any other ways to do this, please let me know.
Thomas Subia
On Thursday, September 7, 2023 at 10:31:27 AM PDT, Rui Barradas wrote:
At 14:23 on 07/09/2023, Thomas Subia via R-help wrote:
Colleagues
Consider
smokers <- c( 83, 90, 129, 70 )
patients <- c( 86, 93, 136, 82 )
prop.trend.test(smokers, patients)
Output:
Chi-squared Test for Trend in Proportions
data: smokers out of patients,
using scores: 1 2 3 4
X-squared = 8.2249, df = 1, p-value = 0.004132
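The test above uses the default scores 1:4 (seq_along(x), as the output shows); when the dose or time points behind the groups are not evenly spaced, the score argument of prop.trend.test() accepts custom scores. A small sketch, where c(1, 2, 4, 8) is a hypothetical spacing:

```r
smokers  <- c(83, 90, 129, 70)
patients <- c(86, 93, 136, 82)

# Default scores are seq_along(smokers), i.e. 1:4 (matches the output above)
default_fit <- prop.trend.test(smokers, patients)

# Hypothetical unevenly spaced scores, e.g. doses of 1, 2, 4 and 8 units
scored_fit <- prop.trend.test(smokers, patients, score = c(1, 2, 4, 8))
scored_fit$p.value  # test for a linear trend along the supplied scores
```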
Colleagues,
Your suggestions are elegant and greatly appreciated.
Thomas Subia
On Friday, August 11, 2023 at 11:08:42 PM PDT, Berwin A Turlach wrote:
G'day Thomas,
On Sat, 12 Aug 2023 04:17:42 +0000 (UTC)
Thomas Subia via R-help wrote:
Colleagues,
Here is my reproducible code for a graph using geom_smooth
set.seed(55)
library(tibble)
scatter_data <- tibble(x_var = runif(100, min = 0, max = 25),
                       y_var = log2(x_var) + rnorm(100))
library(ggplot2)
library(cowplot)
ggplot(scatter_data, aes(x = x_var, y = y_var)) +
  geom_point() +
  geom_smooth()
not sure why this occurs. Changing the statement to 56 results in the
> gauge reading 60. I'm not sure what needs to be changed in the script or the
> environment to stop rounding.
> On Jul 22, 2023, at 10:43, Boris Steipe wrote:
>
> What do you mean "Rounded"?
> What do you expect, what do you
Colleagues,
Thanks for the update.
My colleagues at work have run this script, but the value shown in the
resulting graph is rounded. How can one turn this annoying feature off?
I've googled this but to no avail.
Colleagues
Here is my reproducible code
library(plotly)
plot_ly(
  domain = list(x = c(0, 1), y = c(0, 1)),
  value = 2874,
  title = list(text = "Generic"),
  type = "indicator",
  mode = "gauge+number+delta",
  delta = list(reference = 4800),
  gauge = list(
    axis = list(range = list(NULL, 5000)),
    steps
Colleagues
Consider:
smokers <- c( 83, 90, 129, 70 )
patients <- c( 86, 93, 136, 82 )
prop.test(smokers, patients)
4-sample test for equality of proportions
without continuity correction
data: smokers out of patients
X-squared = 12.6, df = 3, p-value = 0.005585
alternative hypothesis: two
Colleagues,
Thanks for the help!
Root cause of the problem was not defining z and x as factors! Now I know
better.
All the best,
Thomas Subia
On Monday, June 5, 2023 at 08:45:39 PM PDT, Richard M. Heiberger
wrote:
This works.
> d$zz <- factor(d$z, levels=c("low","med","high"))
> d$xx
Colleagues,
I am trying to create a 3D barplot using the following script
d <- read.table(text=' x y z
t1 5 high
t1 2 low
t1 4 med
t2 8 high
t2 1 low
t2 3 med
t3 50 high
t3 12 med
t3 35 low', header=TRUE)
library(latticeExtra)
cloud(y~x+z, d, panel.3d.clou
Colleague,
smokers <- c( 83, 90, 129, 70 )
patients <- c( 86, 93, 136, 82 )
pairwise.prop.test(smokers, patients)
# Output
Pairwise comparisons using Pairwise comparison of proportions
data: smokers out of patients
1 2 3
2 1.000 - -
3 1.
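pairwise.prop.test() adjusts its p-values for multiple comparisons (Holm's method by default); the p.adjust.method argument makes that choice explicit. A sketch with the same data:

```r
smokers  <- c(83, 90, 129, 70)
patients <- c(86, 93, 136, 82)

# Holm is the default; "bonferroni" and "none" are other built-in options.
# suppressWarnings() because small failure counts can trigger the usual
# "chi-squared approximation may be incorrect" warning.
res <- suppressWarnings(
  pairwise.prop.test(smokers, patients, p.adjust.method = "holm")
)
res$p.value  # 3 x 3 lower-triangular matrix: groups 2-4 vs groups 1-3
```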
Brinkley,
I am using RStudio with
R version 4.2.0 (2022-04-22 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19045)
I cannot reproduce your error messages.
That being said, you might want to look at:
https://github.com/rstudio/rstudio/issues/2214
https://
for matrix",axes=FALSE)
> axis(1,at=seq(0.5,5.5,by=1),labels=LETTERS[1:6])
> axis(2,at=seq(0.5,5.5,by=1),labels=rev(LETTERS[1:6]))
> color.legend(0,-1.3,2.5,-0.7,c("NA","NS","<0.05","<0.01"),
> rect.col=c(NA,"red","orang
Colleagues,
The RVAideMemoire package has a pairwise variance test which one can use to
identify variance differences between group levels.
Using the example from this package,
pairwise.var.test(InsectSprays$count,InsectSprays$spray), we get this output:
Pairwise comparisons using F tests
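The same post hoc idea can be sketched in base R without the package: run var.test() on every pair of spray levels and adjust the p-values afterwards. This is a sketch of the approach, not RVAideMemoire's exact implementation; the Bonferroni adjustment below is just one choice:

```r
# Base-R sketch of pairwise variance F tests on InsectSprays
data(InsectSprays)
lv <- levels(InsectSprays$spray)
pairs <- combn(lv, 2)  # all 15 level pairs
pvals <- apply(pairs, 2, function(p) {
  a <- InsectSprays$count[InsectSprays$spray == p[1]]
  b <- InsectSprays$count[InsectSprays$spray == p[2]]
  var.test(a, b)$p.value  # F test for equality of two variances
})
names(pvals) <- paste(pairs[1, ], pairs[2, ], sep = "-")
round(p.adjust(pvals, method = "bonferroni"), 4)
```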
The length of time it takes to learn R is dependent on what you want to use R
for.
Let's assume you want to use R to perform basic statistical analyses on your
own.
IMHO, the best book for self-study for this is Andy Field's book, Discovering
Statistics Using R. It's the best book because it gi
Colleagues,
I attempted to copy data from the clipboard and use rcompanion's
transformTukey command in an attempt to normalize the dataset.
data = read.delim("clipboard")
head(data)
Flatness
17e-04
21e-03
38e-04
45e-04
55e-04
65e-04
All data are greater than 0.
Data se
I was wondering if this is a good alternative method to split a data column
into distinct groups.
Let's say I want my first group to have 4 elements selected randomly
mydata <- LETTERS[1:11]
random_grp <- sample(mydata,4,replace=FALSE)
Now random_grp is:
> random_grp
[1] "H" "E" "A" "D"
# How's
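sample() without replacement guarantees the selected elements are distinct, so the complementary group falls out of setdiff(). A sketch (the seed is added here for reproducibility; it is not in the original):

```r
mydata <- LETTERS[1:11]
set.seed(1)  # not in the original; added so the split is reproducible
random_grp <- sample(mydata, 4, replace = FALSE)
other_grp  <- setdiff(mydata, random_grp)  # the 7 letters not drawn

length(other_grp)                  # 7
intersect(random_grp, other_grp)   # character(0): the groups are disjoint
```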
Colleagues,
I've been using uniroot to identify a root of an equation.
As a check, I always verify that calculated root.
This is where I need some help.
Consider the following script
fun <- function(x) {x^x -23}
# Clearly the root lies somewhere between 2.75 and 3.00
uniroot(fun, lower = 2.7
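A complete call with the verification step might look like this (the bracket c(2.75, 3) comes from the comment above; tol is tightened so fun() evaluated at the root is essentially zero, since uniroot's default tolerance leaves a visible residual):

```r
fun <- function(x) x^x - 23

# Bracket from the comment above: fun(2.75) < 0 and fun(3) = 4 > 0
res <- uniroot(fun, lower = 2.75, upper = 3, tol = 1e-10)
res$root       # about 2.92
fun(res$root)  # check the root: essentially 0
```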
Colleagues,
Here is my code which plots sin(x) vs x, for angles between 0 and 180
degrees.
library(ggplot2)
library(REdaS)
copdat <- data.frame(degrees = c(0,45,90,135,180))
copdat$radians <- deg2rad(copdat$degrees)
copdat$sin_x <- sin(copdat$radians)
ggplot(copdat,aes(x=degrees,y=sin_x))+
geom_point(size =
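If you'd rather drop the REdaS dependency, deg2rad() is just multiplication by pi/180, which base R handles directly (note that sin of 180 degrees comes back as about 1.2e-16 rather than exactly 0, due to floating point):

```r
degrees <- c(0, 45, 90, 135, 180)
radians <- degrees * pi / 180  # what deg2rad() computes
sin_x   <- sin(radians)
round(sin_x, 6)  # 0 0.707107 1 0.707107 0
```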
Ferri,
Radar Charts are often used to compare two or more items or groups on various
features or characteristics. However, as the number of groups increases, the
user has a harder time making comparisons between groups. As the number of
groups increases, the number of spokes of the radar chart in
Colleagues,
I have 250 Excel files in a directory. Each of those files has the same
layout. The problem is that the data in each Excel data is not in
rectangular form. I've been using readxl to extract the data which I need.
Each of my metrics are stored in a particular cell. For each metric, I
Colleagues,
Here is my dataset.
Serial  Measurement  Meas_test  Serial_test
1       17           fail       fail
1       16           pass       fail
2       12           pass       pass
2       8            pass       pass
2       10           pass
Colleagues,
I've got several text files which contain data for each metric I need to report
on. One text file contains the serial number data. Another has customer and work
order number. Another has test data. All text files have the same number of
rows but all have different numbers of columns.
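Since the files have the same number of rows, one base-R approach is to read each file and column-bind the results. The file names below are hypothetical, and the same idea is demonstrated on in-memory stand-in data frames:

```r
# Hypothetical file names; assumes identical row order across files:
# tabs <- lapply(c("serials.txt", "orders.txt", "tests.txt"),
#                read.table, header = TRUE)
# combined <- do.call(cbind, tabs)

# The same idea on in-memory stand-ins:
serials <- data.frame(serial = c("S1", "S2", "S3"))
orders  <- data.frame(customer = c("A", "B", "C"), wo = c(10, 11, 12))
tests   <- data.frame(result = c("pass", "fail", "pass"))
combined <- do.call(cbind, list(serials, orders, tests))
dim(combined)  # 3 rows, 4 columns
```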
the page
> multiple times... again scrambling your option to read it digitally. Tools
> like "pdftools" can sometimes work when the program that generated the file
> does so in a simple and extraction-friendly way... but there are no
> guarantees, and your description sugges
Colleagues,
I can extract specific data from lines in a pdf using:
library(pdftools)
txt <- pdf_text("10619.pdf")
write.table(txt,file="mydata.txt")
con <- file('mydata.txt')
open(con)
serial <- read.table(con, skip = 5, nrow = 1)  # Extract [3]
flatness <- read.table(con, sk
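The write.table()/read.table() round trip can be skipped: pdf_text() returns one string per page, so splitting on newlines gives directly indexable lines. Shown here on a stand-in string, since the PDF itself isn't available:

```r
# Stand-in for pdf_text("10619.pdf")[1]: one string with embedded newlines
page  <- "header\nreport\ndate\nmodel\nrev\nSERIAL 10619\nFLATNESS 0.0017\n"
lines <- strsplit(page, "\n")[[1]]

lines[6]                          # the line that skip = 5, nrow = 1 would read
strsplit(lines[6], " +")[[1]][2]  # "10619"
```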
I'm not sure whether the pwr package does that.
On Thursday, July 4, 2019, 4:31:44 PM PDT, John wrote:
On Tue, 2 Jul 2019 22:23:18 +0000 (UTC)
Thomas Subia via R-help wrote:
> Colleagues,
> Can anyone suggest a package or code which might help me calculate
> the minimum samp
Colleagues,
Can anyone suggest a package or code which might help me calculate the minimum
sample size required to estimate the population variance? I can do this in
Minitab but I'd rather do this in R.
Thomas Subia
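For anyone searching the archive: one textbook approach, based on the chi-square confidence interval for a variance, needs only base R. The function below (a sketch, not Minitab's algorithm) finds the smallest n whose two-sided CI for sigma^2 has an upper-to-lower limit ratio of at most `ratio`; the 2.0 target is an arbitrary example:

```r
# Smallest n such that the 100(1 - alpha)% CI for sigma^2,
#   ((n-1)s^2 / qchisq(1 - alpha/2, n-1), (n-1)s^2 / qchisq(alpha/2, n-1)),
# has upper/lower limit ratio <= `ratio`; s^2 cancels out of the ratio.
n_for_var_ci <- function(ratio, alpha = 0.05) {
  for (n in 3:10000) {
    lo <- (n - 1) / qchisq(1 - alpha / 2, df = n - 1)
    hi <- (n - 1) / qchisq(alpha / 2, df = n - 1)
    if (hi / lo <= ratio) return(n)
  }
  NA_integer_
}

n_for_var_ci(2.0)  # somewhere in the mid-60s for a 2:1 CI limit ratio
```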
Colleagues,
When using Levene's test, I can identify whether there are any differences in
variance between factor levels. This is straightforward.
Is there a way to do a post hoc test to identify variance differences between
factor levels? This is not so straightforward.
All the best
Thomas Subia
From previous posting:
"This is my function:
wilcox.test(A,B, data = data, paired = FALSE)
It gives me high p value, though the median of A column is 6900 and B
column is 3500.
Why it gives p value high if there is a difference in the median?"
Let's examine your choice to use the Wilcoxon tes
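With just the six values shown per column in the post below (the "and so on" tails aren't available), the rank sums are nearly balanced, which is exactly why the p-value comes out large; wilcox.test() compares the rank distributions of the two samples, not their medians directly:

```r
# The six values listed per column in the original post
A <- c(16.38, -31, -16.77, 127, -57, 23.44)
B <- c(-12, -59.23, -44, 34.23, 55.5, -12.12)

median(A)  # -0.195
median(B)  # -12.06
wilcox.test(A, B, paired = FALSE)$p.value  # large (> 0.5): the ranks interleave
```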
Javid wrote:
"I have two sets of data in Excel:
A column (16.38, -31, -16.77, 127, -57, 23.44 and so on)
B column (-12, -59.23, -44, 34.23, 55.5, -12.12 and so on)
I run the wilcox test as :
wilcox.test(A , B, data = mydata, paired = FALSE)
I got always the p value very high, like 0.60
Even I
Colleagues,
I have a workbook which has 3 worksheets
I need to extract data from two specific cells from one of those worksheets.
I can use read_excel to do this for one file.
data<-read_excel("C:/Desktop/Excel_raw_data/0020-49785 8768.xls",
sheet="Flow Data",range=("b
Hello all,
Zeki(?) reported:
> ggplot(data = mtcars, aes(x= wt, y= mpg)) + geom_line()
> Error: Found object is not a stat.
Using R v3.4.62 and RStudio, I'm unable to reproduce this error.
All the best,
Thomas Subia
Colleagues,
age_days <- difftime(date_vals$Date, date_vals$DOM, units = "days")
date_vals$age_yrs <- age_days / 365.242
I'm trying to calculate the number of years between DOM and Date.
The output reads
DOM Date age_yrs
1 2005-04-04 2015-05-13 10.10563 days
How doe
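The stray "days" label appears because difftime objects keep their units through division; convert with as.numeric() first. A check with the dates from the output above:

```r
DOM  <- as.Date("2005-04-04")
Date <- as.Date("2015-05-13")

age_days <- difftime(Date, DOM, units = "days")
age_yrs  <- as.numeric(age_days) / 365.242  # plain number, no "days" label
age_yrs  # 10.10563, matching the value above, without the unit
```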
Thanks for writing this great piece of code.
x = rnorm(100)
boxplot(x) # you shouldn't see any outliers here, although sometimes you will
# let's add some outliers intentionally
x = c(21, 20, 25, x) # now 21, 20 and 25 are outliers
myboxplot <- boxplot(x) # now you should see your three
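To pull those outliers out programmatically without drawing anything, boxplot.stats() applies the same 1.5 × IQR rule that boxplot() uses; a quick check under the same setup (the seed is added here and is not in the original):

```r
set.seed(42)  # not in the original; added for reproducibility
x <- c(21, 20, 25, rnorm(100))  # three planted outliers on N(0, 1) noise
out <- boxplot.stats(x)$out     # values beyond 1.5 * IQR from the hinges
all(c(21, 20, 25) %in% out)     # TRUE: the planted points are flagged
```

The drawn version stores the same vector in myboxplot$out.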