Probably better posted to the r-sig-geo list, where you are more
likely to find the relevant expertise.
Perhaps see also https://cran.r-project.org/web/views/Spatial.html,
depending on what you plan to do with your maps.
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip)
I know nothing about the packages in question, but do you know what
"collinear" means and how/why it can mess up model fitting? If no,
that may be the problem. See also "overfitting."
Bert Gunter
Rolf:
This behavior has nothing to do with S3 methods:
> f <- function(x,y,...) xyplot(y~x, col = "red",...)
> x <- 1:5; y <- runif(5)
> z <- f(x,y)
> z$call
xyplot(x, y)
I have **not** studied the lattice code, but assuming that it is using
match.call() to get the call it returns, it looks
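The point can be sketched without lattice at all. A hypothetical wrapper (made up for illustration) shows that match.call() records the call as the user typed it, not the arguments as finally assembled:

```r
## Hypothetical sketch (no lattice needed): match.call() inside a
## function records the call as typed, not the evaluated arguments.
g <- function(x, y, ...) match.call()
g(1:5, runif(5))
## returns the unevaluated call: g(x = 1:5, y = runif(5))
## Any defaults or extra arguments supplied *inside* the wrapper
## (like col = "red" above) never appear in this recorded call.
```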
" I have attached a .zip with some sample data and a list of R
terminal commands..."
Maybe to Ek, but not to the list. The server strips most attachments.
Bert Gunter
You might wish to post this on r-sig-db if you do not get a
satisfactory reply here.
Also, have you checked the Databases task view on:
https://cran.r-project.org/ ?
Bert Gunter
1. Wrong list. You should post on r-sig-geo instead.
2. As you use ggplot, why aren't you using ggmap()?
Bert Gunter
?tryCatch
See also:
https://www.r-bloggers.com/error-handling-in-r/
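A minimal sketch of the ?tryCatch idea (the function and inputs here are made up for illustration):

```r
## Convert an error or warning into a fallback value instead of stopping.
safe_log <- function(x) {
  tryCatch(log(x),
           error   = function(e) NA_real_,  # e.g. non-numeric input
           warning = function(w) NaN)       # e.g. log of a negative number
}
safe_log(-1)   # NaN: log(-1) signals a warning, caught by the handler
safe_log("a")  # NA: log("a") signals an error, caught by the handler
```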
Bert Gunter
On Tue, Mar 24, 2020 at 5
Untested in the absence of example data, but I think
combined <- do.call(rbind, lapply(ls2972, function(x)get(x)[[2]]))
should do it.
Bert Gunter
about such matters. I find
it readable, however. Others may not.
Bert Gunter
On Tue, Mar 17, 2020 at
?str
Bert Gunter
On Tue, Mar 17, 2020 at 12:19 PM Sparks, John wrote:
> Hi R-Helpers,
>
> I
... and in addition see ?file.choose for an interactive way to choose files
that may be easier and/or help you to diagnose your problems.
Bert Gunter
be that your methodology is suspect).
Bert Gunter
On Sun, Mar 15, 2020 at 9:11 PM Yuan Chun Ding wrote:
>
Read its documentation yourself and unless you have good reason not to,
always cc the list (which I have done here).
Bert Gunter
Here's a novel idea:
Do a google search on "multiprecision computing package R" for an answer.
Bert Gunter
Inline.
Bert Gunter
On Sat, Mar 14, 2020 at 10:36 AM Juergen Hedderich
wrote:
> Dear R-help list mem
ut of thin air. In other words, a fantasy. But don't take my word
for it -- knowledgeable people at a stats site would be far more reliable.
Bert Gunter
I believe this would be better posted on r-sig-Geo .
Bert Gunter
On Mon, Mar 9, 2020 at 3:16 PM Miluji Sb
Unless there is a good reason not to, always cc the list in your responses.
I am not your free private consultant.
Because you use your own function, your result is not reproducible, so
it is unlikely you can get useful help.
Bert Gunter
... but this is the wrong list for such questions.
Post on r-package-devel if you need further help.
Bert Gunter
I believe that a much better place to ask this is on the Bioconductor
Support Site:
https://support.bioconductor.org/
Bert Gunter
Your question is way off topic here -- this list is for R programming
questions, not statistical consulting. You might wish to try
stats.stackexchange.com for the latter.
Bert Gunter
, as it appears that mixed effects models might be useful for
you. Another specific statistical help site is stats.stackexchange.com .
However, I think you would do much better to consult with a local
statistical expert, as your statistical background seems minimal. But
that's just my opinion.
Bert Gunter
I don't use dplyr, but it's trivial with gsub, assuming I understand
correctly:
> x <- "a b\t c\n e"
> cat(x)
a b c
e
> gsub("[[:space:]]", "",x)
[1] "abce"
See ?regex for details (of course!), especially the section on character
classes.
Wrong list.
Post here:
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
in **plain text** not html.
Bert Gunter
Oh ... and don't cross post.
Bert Gunter
On Fri, Feb 21, 2020 at 9:08 AM Bert Gunter wrote:
> T
??
Isn't it resample(), not resamples()?
From what package?
What package is bwplot from? lattice:::bwplot has no "metric" argument.
Bert Gunter
Ummm... rather vague, and I certainly have no clue. But if you haven't
already done so, have a look here:
https://cran.r-project.org/web/views/
And of course, you should always try googling, e.g. on "supply chain
optimization using R" , etc.
Bert Gunter
Wrong list -- statistical methods questions are generally off topic
here anyway.
Post here instead:
https://www.bioconductor.org/help/
Bert Gunter
What do you think these two statements do?
df <- data2
df <- data.frame(var=c("Young_Control", "Young_Treated", "CHG_methylation"))
I suggest you stop what you're doing and spend time with an R tutorial or
two before proceeding.
Bert Gunter
he "lubridate"
package has all sorts of this kind of functionality built in, and you may
prefer using that as your interface to date handling.
Bert Gunter
Thanks Deepayan. Yes that is both the correct diagnosis and the "obvious"
solution I was looking for. And now I don't have to embarrass myself by
showing my "clumsy" solution.
Bert
Bert Gunter
hance to find a more sensible approach, if you care to try.
Bert
Bert Gunter
On Wed, Feb 12, 2020 at 9:19 PM D
"red"))),
xlab="log2 (variance)",
plot.points=FALSE,
auto.key=TRUE)
**warning**: in the absence of data, untested.
Bert Gunter
Yes. Most attachments are stripped by the server.
Bert Gunter
On Fri, Feb 7, 2020 at 5:34 PM John Kane w
ctor or go through an R
tutorial or two to learn about factors, which may be part of the issue
here. R *generally* obtains whatever "label" info it needs from the object
being tabled -- see ?tabulate, ?table etc. -- if that's what you're doing.
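A small illustration of that point -- table() pulls its labels from the factor levels of the object being tabled:

```r
## table() takes its labels from the object itself, e.g. factor levels:
f <- factor(c("low", "high", "low"), levels = c("low", "high"))
table(f)
## f
##  low high
##    2    1
```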
Bert Gunter
Oh -- I see you did. Sorry. But cross-posting is discouraged.
Bert Gunter
On Wed, Feb 5, 2020 at 10:38
Shouldn't you be posting this on the r-sig-geo list, not here?
Bert Gunter
On Wed, Feb 5, 2020 at 10:38
Questions on specialized packages (the ROSE package presumably) may not get
answered on this list. If that turns out to be the case, you may wish to
contact the package maintainer: Nicola Lunardon .
Bert Gunter
Try posting this at the RStudio Help site, as R Markdown is part of the
ecosystem they have created and support.
Bert Gunter
that, but it's obviously wrong. The argument to
> FUN is an element of seq_len(10), it's not the full dataset. Try
>
> result<- lapply(seq_len(10), FUN = function(i){
> dat <- dat2[, sample.int(4)]
> print(colnames(dat))
> } )
>
> Duncan Murdoch
>
>
or use an explicit for() loop to populate a list or vector with your
results.
Again, if I have misunderstood what you want to do, then clarify, and
perhaps someone else will help.
-- Bert
Bert Gunter
If you just want to permute columns of a matrix,
?sample
> sample.int(10)
[1] 9 2 10 8 4 6 3 1 5 7
and you can just use this as an index into the columns of your matrix,
presumably within a loop of some sort.
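For instance (an untested sketch, in the same spirit): the permutation from sample.int() can be used directly as a column index.

```r
## Permute the columns of a matrix by indexing with a random permutation:
m <- matrix(1:12, nrow = 3)   # a 3 x 4 example matrix
m[, sample.int(ncol(m))]      # same columns, in random order
```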
If I have misunderstood, just ignore.
Cheers,
Bert
On Tue, Feb 4, 2020
Inline.
Bert Gunter
On Tue, Feb 4, 2020 at 9:12 AM Ana Marija
wrote:
> Hello,
>
> I
Time to study some tutorials and do your own work, don't you think? There
are many good tutorials on the web.
Bert Gunter
As this apparently involves genomic data, I would suggest that you ask this
on the BioConductor site:
https://www.bioconductor.org/help/
especially if you don't get effective help here.
Bert Gunter
On Tue, Jan 28, 2020 at 9:29 AM Ana Marija
wrote:
> Hello,
>
> I tried doing:
>
Second.
Bert
On Sun, Jan 26, 2020 at 2:20 PM Rolf Turner wrote:
>
> On 27/01/20 11:06 am, Jim Lemon wrote:
>
> > Hi Puja,
> > Three things:
>
>
>
> > 3) You should probably change the subject line of your message to
> > "Would anyone care to do my work for me?"
>
> Fortune nomination!!! :-)
Google is your friend!
https://stackoverflow.com/questions/26643852/ggplot-plots-in-scripts-do-not-display-in-rstudio
Bert Gunter
Just a note: There is no such thing as "a best fitting curve" that must
pass through all the points.
You may wish to consult a statistician or spend time with references to
clarify your intent.
Bert Gunter
Use ?file.choose to choose a file interactively and avoid typing paths:
read.table(file.choose(), header = TRUE, etc)
will open a finder window to navigate to and click on the file you want.
-- Bert
Bert Gunter
Trickier, but shorter:
> lapply(u,'[',1)
$a
[1] 1
$b
[1] "a"
Bert Gunter
On Fri, Jan 17, 20
That's fine, but do note that the which() function is wholly unnecessary in
your last line as R allows logical indexing. Perhaps another topic you need
to study.
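To illustrate the point about logical indexing:

```r
## which() is redundant for subsetting: a logical vector indexes directly.
x <- c(5, 2, 9, 1)
x[which(x > 3)]  # works, but the which() call is unnecessary
x[x > 3]         # same result: 5 9
```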
-- Bert
On Mon, Jan 13, 2020 at 10:56 PM ani jaya wrote:
> Dear Jeff and Bert,
>
> Thank you very much for your correction and
Inline.
Bert Gunter
On Mon, Jan 13, 2020 at 8:54 PM ani jaya wrote:
> Good morning R-Help,
>
> I have a dataframe with 7 columns and 1+ rows. I want to subset/extract
> those data frame with specific date (not in order). Here the head of my
> data frame:
>
> head(m
already found out, there are folks here who may help also.
Bert Gunter
On Fri, Jan 10, 2020 at 10:32 AM Phi
... and even more generally, is often misleading. ;-)
(search "problems with R^2" or similar for why).
Bert Gunter
You may get a useful answer here, but you are much more likely to at the
Bioconductor website that specializes in this sort of thing.
https://bioconductor.org/help/
Cheers,
Bert
On Thu, Jan 2, 2020 at 12:16 PM Yuan Chun Ding wrote:
> Hi R users,
> Does any one know a R package for genetic
7" "380" "509" "853" "964" "217" "574" "480" "769"
> Qnm
[1] "509" "770" "920" "287" "259" "889" "157" "574" "480" &quo
?list.files and ?regexp
Warning: following obviously untested:
Gfiles <- list.files("G:", pattern = ".*10806\\.xls$")
should then give you a vector of the names of the files you want to
feed to read.xls() or whatever function the favored package provides
for reading Excel files these days.
, date, and active time start and end, several
feeding time start/stop entries ( I have no idea how bats behave)).
Until you can explicitly explain how your data can generate such
information, I think it will be difficult or impossible to help you.
Cheers,
Bert
Bert Gunter
(say 1000 at a time)
and then combining them. Or have you already tried this? If you do wish to
do this, wait to give experts a chance to tell you that my suggestion is
completely useless before you attempt it.
3. I'll let someone else resolve your dates problem, as I have never used
lubridate.
Bert
... & x<1, '<1', format(round(x,2), big.mark=",",
scientific=FALSE) )
> x
[1] "-2.00" "-1.75" "-1.50" "-1.25" "-1.00" "-0.75" "-0.50" "-0.25" " 0.00"
[10] "<1" "<1
median(x),IQR = IQR(x)))
d$w: a
median IQR
0.5469662 0.4548506
--
d$w: b
median IQR
0.6860975 0.3456893
-- Bert
Bert Gunter
The "digest" package might be what you're looking for: messages --> numerics
Or perhaps you want to create a hash in R. Search on "hashing in R" or
similar for info on this.
See ?file.info for obtaining file info, including date/time info.
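A hedged sketch of the digest suggestion (assumes the CRAN "digest" package is installed; the inputs are made up):

```r
## Map an arbitrary R object to a fixed-length hash string.
library(digest)
digest("some message")         # default md5 hash, returned as a string
digest(iris, algo = "sha256")  # works on whole objects, too
```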
Bert Gunter
project.org/web/packages/reprex/index.html (read the
> vignette)
>
>
> On December 22, 2019 3:44:19 PM PST, Bert Gunter
> wrote:
> >I think this is the wrong list. Shiny is an RStudio app and is not part
> >of
> >R. You need to post on the RStudio site.
> >
> >
>
?ave
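A tiny sketch of what ?ave describes -- per-group summaries expanded back to the length of the input (the data frame here is made up):

```r
## Group-wise summaries aligned with the original rows:
df <- data.frame(g = c("a", "a", "b"), x = c(1, 3, 10))
ave(df$x, df$g)             # group means, one per row: 2 2 10
ave(df$x, df$g, FUN = max)  # any summary function: 3 3 10
```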
Bert Gunter
On Mon, Dec 23, 2019 at 6:57 AM Medic wrote:
> I have
> mydata$var
>
> and
I think this is the wrong list. Shiny is an RStudio app and is not part of
R. You need to post on the RStudio site.
Bert Gunter
x y1 y2 y3
1 1 11 21 NA
2 2 12 NA 0.5
3 3 13 22 0.6
## note the change to "all = TRUE" in the merge() call
Cheers,
Bert
On Fri, Dec 20, 2019 at 9:37 AM Bert Gunter wrote:
> ?merge ## note the all.x option
> Example:
> > a <- data.frame(x = 1:3, y1 = 11:13)
> &
?merge ## note the all.x option
Example:
> a <- data.frame(x = 1:3, y1 = 11:13)
> b <- data.frame(x = c(1,3), y2 = 21:22)
> merge(a,b, all.x = TRUE)
x y1 y2
1 1 11 21
2 2 12 NA
3 3 13 22
Bert Gunter
No.
You can install both versions side by side, use the newer version and
xx package as needed, use the older version and your essential package
as needed, and shift results in .Rdata files -- or even as text files of
data -- between them as necessary.
This is obviously a nightmare and may
Did you even make an attempt to do this? -- or would you like us do all
your work for you?
If you made an attempt, show us your code and errors.
If not, we usually expect you to try on your own first.
If you have no idea where to start, perhaps you need to spend some more
time with tutorials to
"But the important point is:
If you know the structure of the data you want to
parse, then it is best to tell R (or any other language)
this structure explicitly. "
Fortune nomination!
-- Bert
Thu, Dec 19, 2019, 2:49 AM Enrico Schumann wrote:
>
> Quoting Eric Berger :
>
> > Martin
If n = N, then this is unnecessarily complicated.
sample(mydata$Temperature)
is all you need (see ?sample).
If n < N, then the "trick" is not done.
sample(mydata$Temperature, n)
is what is wanted.
Bert
On Wed, Dec 18, 2019 at 12:54 PM Jim Lemon wrote:
> Hi Medic,
>
re are books written on the "analytics" (both statistical and graphical)
of multidimensional contingency tables and categorical data that you may
wish to consult some to get some more specific ideas.
Bert Gunter
See ?try which links you to ?tryCatch for the preferred approach.
Alternatively: if(inherits(e, "try-error")) ## should work and
satisfy CRAN
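Both alternatives in one sketch (error message invented for illustration):

```r
## The inherits() class test on a try() result:
res <- try(stop("boom"), silent = TRUE)
if (inherits(res, "try-error")) res <- NA  # the CRAN-safe test

## Preferred: tryCatch() handles the error where it happens.
res2 <- tryCatch(stop("boom"), error = function(e) NA)
```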
-- Bert
Bert Gunter
In addition, as John's included output shows, only 1 parameter, the
intercept, is fit. As he also said, the sd is estimated from the residual
deviance -- it is not a model parameter.
Suggest you spend some time with a glm tutorial/text.
Bert
On Mon, Dec 9, 2019 at 7:17 AM Marc Girondot via
Basically no clue, but note that C[, 1:length(C[ ,1])] looks suspect -- you
are selecting the number of columns equal to the number of rows in C. Is
that really what you want to do?
Bert
On Tue, Dec 3, 2019, 2:36 PM Ana Marija wrote:
> Hello,
>
> I am trying to run this software:
>
>
Of course! Use regexec() and regmatches()
>
regmatches(dat$varx,regexec("(^[[:digit:]]{1,3})([[:alpha:]]{1,2})([[:digit:]]{1,5}$)",dat$varx))
[[1]]
[1] "9F209" "9" "F" "209"
[[2]]
character(0)
[[3]]
[1] "2F250" "2" "F" "250"
[[4]]
character(0)
[[5]]
character(0)
[[6]]
Use regular expressions.
See ?regexp and ?grep
Using your example:
> grep("^[[:digit:]]{1,3}[[:alpha:]]{1,2}[[:digit:]]{1,5}$",dat$varx,value
= TRUE)
[1] "9F209" "2F250" "121FL50"
Cheers,
Bert
Bert Gunter
for a discussion of
the many issues Greenland raises. I am open to private replies, but I am
not an authority and my opinions aren't worth much. (See tagline below).
Cheers,
Bert
Bert Gunter
Statistical questions are generally off topic on this list. Try
stats.stackexchange.com instead.
But FWIW, I recommend that you work with someone with expertise in time
series analysis, as your efforts to shake and bake your own methodology
seem rather unwise to me.
Cheers,
Bert
Bert Gunter
dat2 <- within(dat2,
{
d4 <- d1 ## d1. 0 when d1 == 0
d4[!d4]<- d2[!d4]
d4[!d4]<- d3[!d4]
})
> dat2
ID d1 d2 d3 d4
1 A 0 25 35 25
2 B 12 22 0 12
3 C 0 0 31 31
4 E 10 20 30 10
5 F 0 0 0 0
Cheers,
Bert
Bert Gunter
> g[1]
> $a
> [1] 2
>
>
>
>
> Mik
>
>
> > I do not see this behavior.
> >
> > f <- function(x){
> >## a comment
> > 2
> > }
> >
> > g <- list(a =2, fun =f)
> >
> >> g
> > $a
> >
I do not see this behavior.
f <- function(x){
## a comment
2
}
g <- list(a =2, fun =f)
> g
$a
[1] 2
$fun
function(x){
## a comment
2
}
> g[[2]]
function(x){
## a comment
2
}
> g[2]
$fun
function(x){
## a comment
2
}
Cheers,
Bert
Bert Gunter
cially for long used and extensively exercised core functionality.
Cheers,
Bert
Bert Gunter
On Sun, Nov 24,
I think you are more likely to get a helpful answer if you give a minimal
example of what your lines look like. I certainly don't have a clue, though
maybe someone else will.
Cheers,
Bert
On Wed, Nov 20, 2019 at 12:21 PM Thomas Subia via R-help <
r-help@r-project.org> wrote:
> Thanks all for
rather, system.time() of course.
Bert Gunter
On Wed, Nov 20, 2019 at 11:19 AM Bert Gunter wrote:
> Do
Don't do this (the timing code you showed, not Eric's suggestions).
Do this:
sys.time( {
code of interest
})
(or use the microbenchmark package functionality)
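With the typo corrected per the follow-up note (system.time(), not sys.time()), the pattern looks like this (the timed expression is made up):

```r
## system.time() evaluates the braced expression and reports timings:
system.time({
  x <- replicate(100, mean(rnorm(1e4)))
})
## prints three numbers: user, system, and elapsed seconds
```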
Bert Gunter
1. The Help page for getVarCov says that it supports lme and gls models
only. So not nonlinear models. As to why it does not, that is off topic
here, as that is a statistical question.
2. As this is longstanding, it almost certainly has good reason for being
there, so I would seriously doubt it.
Please use search facilities before posting such questions here.
I searched on "How to save google map in R" at rseek.org and got many
relevant hits.
If you have *already* done this, you should tell us why what you got did
not suffice.
Cheers,
Bert
Bert Gunter
Ha! -- A bug! "Corrected" version inline below:
Bert Gunter
On Thu, Nov 14, 2019 at 8:10 PM Bert Gunter wrote:
> Brute force approach, possibly inefficient:
>
> 1. You have a vector of file names. Sort them in the appropriate (time)
> order. These names are also the
ching, you would suggest a few key words to look at?
>
> Sorry for the HTML thing, this is my first post. I'll do better next times.
>
> Thanks,
> Nathan
>
>
>
> On Fri, Nov 15, 2019 at 11:34 AM Bert Gunter
> wrote:
>
>> So you've made no attempt at all to do t
So you've made no attempt at all to do this for yourself?!
That suggests to me that you need to spend time with some R tutorials.
Also, please post in plain text on this plain text list. HTML can get
mangled, as it may have here.
-- Bert
Obvious advice:
DON'T DO THIS!
Bert Gunter
On Thu, Nov 14, 2019 at 10:50 AM Ana Marija
wrote:
> H
My recommendation is:
Post on the BioConductor site, not here.
Bert Gunter
On Thu, Nov 14, 2019 at 9:2
Typo: "... from 5.5 million..."
Bert
On Tue, Nov 12, 2019 at 3:11 PM Bert Gunter wrote:
> IMO, this thread has now gone totally off the rails and totally off topic
> -- it is clearly *not* about R programming and totally about statistics.
>
> I believe Ana Marija would
IMO, this thread has now gone totally off the rails and totally off topic
-- it is clearly *not* about R programming and totally about statistics.
I believe Ana Marija would do better to get local statistical help or post
on a statistics or genomics list (stats.stackexchange.com is one such)
As this is O/T I'll keep it offlist.
Inline:
On Tue, Nov 12, 2019 at 12:00 PM Jim Lemon wrote:
> I thought about this and did a little study of GWAS and the use of
> p-values to assess significant associations. As Ana's plot begins at
> values of about 0.001, this seems to imply that almost
Correction:
df <- data.frame(a = 1:3, b = letters[c(1,1,2)], d = LETTERS[c(1,1,2)])
df[!duplicated(df[,2:3]), ] ## Note the ! sign
Bert Gunter
Sorry, but you ask basic questions. You really need to spend some more
time with an R tutorial or two. This list is not meant to replace your
own learning efforts.
You also do not seem to be reading the docs carefully. Under ?unique, it
links ?duplicated and tells you that it gives indices of
As I said previously, you probably should contact the maintainer. But a
(wild??) guess might be that robust/resistant procedures -- which is what
you are using -- can downweight data values to 0 weight, effectively
removing them. This might lead to effectively empty cells in a cross
tabulation that
This might be impossible to answer without all the data. You might wish to
contact the package maintainer for a question like this.
Cheers,
Bert
On Wed, Nov 6, 2019 at 1:58 PM greg holly wrote:
> I got the following error message after running t2way(y ~ Grup*Time, data
> = cp)
> Error in