On Oct 25, 2012, at 10:32 PM, Santini Silvana wrote:
Dear R users,
I have used the following function (in blue)
No, we do not do in blue here. This is a monochrome mailing list.
aiming to find the linear regression between MOE and XLA and nesting my data
by Species. I have obtained the
Hi users,
Is it possible to check the significance of peaks in a power
spectrum in R?
Thanks and regards
nuncio
--
Nuncio.M
Scientist
National Center for Antarctic and Ocean research
Head land Sada
Vasco da Gamma
Goa-403804
ph off 91 832 2525636
ph: cell 91 9890357423
Dear all. Apologies if I am asking a stupid question, but I have been unable to
find a solution so far.
I would like to run a logistic regression in which individual data points are
assigned different weights (related to my confidence in their validity). These
individual observations are
On Fri, Oct 26, 2012 at 4:23 AM, stats12 ska...@gmail.com wrote:
Dear R users,
I need to run 1000 simulations to find maximum likelihood estimates. I
print my output as a vector. However, it is taking too long. I am running 50
simulations at a time and it is taking me 30 minutes. Once I
Thank you very much for this. Yes the ggplot does look prettier. Just a few
additional questions though.
The data in the sample data frame are already the means of the 5 samples
measured for each time T. Does this mean that I need to
1) calculate the means and standard deviations separately per
Dear all,
to make our life easier, I give you the following dataset (just this
vector) and code! It gives me two arrays of different dimensions: the first
is a 2x2x1 array and the second is a 2x2x2 array (please, feel free to
correct me on the way I define the arrays. This is the first time I am
hello,
I have some data that looks similar to this (only not as nice as this):
Y <- c(abs(rnorm(100, 0.10, .1)), seq(.10, 1.0, .3)+rnorm(1, 0, .5),
seq(0.8, 4.0, .31)+rnorm(1, 0, .5)
, seq(3.9, .20, -.2)+rnorm(1, 0, .5), abs(rnorm(100, 0.13, .1)), seq(.10,
1.2, .35)+rnorm(1, 0, .5)
, seq(0.7,
Hello,
If you just want to write the matrices to a csv file, then instead of
write.csv you can use write.table with the options append=TRUE, sep=",", and
col.names=FALSE.
If you want to merge (rbind) them, you can use code similar to
pattern1 <- "(^var)[[:digit:]]+(.*$)"
pattern2 <- "(^var)_n_(.*$)"
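A minimal sketch of the append route described above (file and object names are hypothetical stand-ins for the real matrices):

```r
# Hypothetical list of small matrices standing in for the real data
unlink("matrices.csv")  # start fresh so repeated runs don't accumulate
L <- list(matrix(1:9, nrow = 3), matrix(10:18, nrow = 3))

# Append each matrix to one csv; col.names=FALSE avoids repeating headers
for (m in L) {
  write.table(m, "matrices.csv", append = TRUE, sep = ",",
              row.names = FALSE, col.names = FALSE)
}

# Or rbind them first and write once
write.csv(do.call(rbind, L), "stacked.csv", row.names = FALSE)
```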
Close - but it's evaluating on 'first date' AND 'last date' - I'll be
considering groups defined by 'first diagnosis' and groups defined by 'last
diagnosis' completely separately, so I need it to run considering the first
date (to produce e.g. INCLUDE.FIRST), then on a separate run to consider
Hi
I am not sure where I got it from, but this one gives some more info than just the size:
ls.objects <-
function (pos = 1, pattern, order.by)
{
napply <- function(names, fn) sapply(names, function(x) fn(get(x,
pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <-
Sounds like a Markov-chain problem.
There are a number of packages that deal with this, including multi-state
Markov models, hidden Markov models, and Bayesian Monte Carlo chains (msm; HMM
and HiddenMarkov; MCMCpack, respectively).
But you won't get much help from us as to which model to use,
On 10/26/2012 09:17 AM, Rlotus wrote:
I have code in R. I also need demo code of my output and demo code of my code. I
don't know what a demo is, and how to create it... can you tell me how to do
that, please? Thank you.
Hi Rlotus,
The demo code for a package is usually selected from the examples of
the
Hi dear three helpers,
Thanks a lot! Your solutions worked great. Again I learned a lot.
Tagmarie
Am 25.10.2012 18:36, schrieb Felipe Carrillo:
Another option using plyr,
library(plyr)
myframe <- data.frame(ID=c("Ernie", "Ernie", "Ernie", "Bert", "Bert",
"Bert"), Timestamp=c("24.09.2012 09:00",
Use Rprof to profile your code to determine where the time is being spent. This
might tell you where to concentrate your effort.
Sent from my iPad
On Oct 25, 2012, at 23:23, stats12 ska...@gmail.com wrote:
Dear R users,
I need to run 1000 simulations to find maximum likelihood estimates. I
data1 <- data.matrix(newdata) # transforming the factor into different values
data.use <- data1[, -c(1,2,3)] # leaving the value matrix
data.dist <- dist(data.use)
data.hclust <- hclust(data.dist) # complete linkage is used
# if I plot the following one, I have too large a data set (rows = 9980), can not
Macy
The data in the sample data frame are already the means of
the 5 samples measured for each time T. Does this mean that I need to
1) calculate the means and standard deviations separately per
variable per time,
2) compile those results in a new data frame, then
3) use the ggplot code
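If the starting point is the raw replicates rather than precomputed means, the three steps above might look like this sketch (data and column names are assumptions):

```r
library(ggplot2)

# Hypothetical raw data: 5 replicate measurements per time point
set.seed(1)
raw <- data.frame(T = rep(1:4, each = 5), value = rnorm(20, mean = 10))

# 1) means and standard deviations per time
summ <- aggregate(value ~ T, data = raw,
                  FUN = function(x) c(mean = mean(x), sd = sd(x)))
# 2) flatten the matrix column into value.mean / value.sd columns
summ <- do.call(data.frame, summ)

# 3) plot means with +/- 1 sd error bars
p <- ggplot(summ, aes(T, value.mean)) +
  geom_point() +
  geom_errorbar(aes(ymin = value.mean - value.sd,
                    ymax = value.mean + value.sd), width = 0.1)
```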
Hi
is there an automatic way that long distances between points are not
connected? I have something like
plot(x, y, type="o", ...)
atx <- seq(as.Date("2009-04-01"), as.Date("2011-04-01"), "month")
axis.Date(1, at=atx, labels=format(atx, "%b\n%Y"), padj=0.5)
but I do not want lines between points whose
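One common answer, sketched under the assumption that "long" means more than 45 days: insert an NA between points that are too far apart, since lines are not drawn through NA.

```r
# Hypothetical irregular series with one long gap in the middle
x <- as.Date(c("2009-04-01", "2009-05-01", "2009-06-01",
               "2010-06-01", "2010-07-01"))
y <- c(1, 2, 1.5, 3, 2.5)

# Insert NA after the first gap wider than 45 days; type="o" does not
# draw a line segment through NA, so the series breaks there
gap <- which(diff(x) > 45)[1]
if (!is.na(gap)) {
  x <- append(x, as.Date(NA), after = gap)
  y <- append(y, NA, after = gap)
}
plot(x, y, type = "o")
```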
On 10/26/2012 04:32 PM, Santini Silvana wrote:
Dear R users,
I have used the following function (in blue) aiming to find the linear
regression between MOE and XLA and nesting my data by Species. I have obtained
the following results (in green).
model4 <- lme(MOE ~ XLA, random = ~ XLA | Species,
You have likely failed to install the required package or load the right
library for the function you are trying to use.
R 2.13 doesn't have an errbar function natively either, so there must have been
such a package present in your R 2.13 installation. I have functions by that
name in Hmisc
The build system has rolled up R-2.15.2.tar.gz (codename Trick or Treat) at
9:00 this morning. This is a maintenance release; see the list below for
details.
You can get it from
http://cran.r-project.org/src/base/R-2/R-2.15.2.tar.gz
or wait for it to be mirrored at a CRAN site nearer to you.
Hi
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
project.org] On Behalf Of Christof Kluß
Sent: Friday, October 26, 2012 1:42 PM
To: r-h...@stat.math.ethz.ch
Subject: [R] connect points in charts
Hi
is there a automatic way that long distances
Hi Stuart,
I guess, this should do it.
fun1H <- function(dat){
res1Head <- data.frame(flag=tapply(dat[,2], dat[,1], FUN=function(x)
head(duplicated(x) | duplicated(x, fromLast=TRUE), 1)))
Using the data you provided, a combination of slope and height comes
close:
X <- seq(Y)
high <- Y > 0.6
upslope <- c(FALSE, diff(Y) > 0)
section <- rep(1, length(Y))
section[upslope==TRUE & high==TRUE] <- 2
section[upslope==FALSE & high==TRUE] <- 3
plot(X, Y, col=section)
Or you could base the slope on
Christof,
You could use single linkage clustering to separate the dates into
different groups if they are more than 14 days apart. Below is a simple
example, where x represents day.
x <- sort(sample(1:500, 100))
y <- rnorm(100)
cluster <- hclust(dist(x), method="single")
group <- cutree(cluster,
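The example above is cut off at cutree(); assuming the 14-day threshold is used as the cut height, the completed sketch would be:

```r
set.seed(1)
x <- sort(sample(1:500, 100))  # hypothetical days
y <- rnorm(100)

# Single linkage: a new group starts wherever the nearest neighbour
# is more than 14 days away, so cut the tree at height 14
cluster <- hclust(dist(x), method = "single")
group <- cutree(cluster, h = 14)
plot(x, y, col = group)
```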
The lrm function in the rms package will do this.
David Schoeman wrote
Dear all. Apologies if I am asking a stupid question, but I have been
unable to find a solution so far.
I would like to run a logistic regression in which individual data points
are assigned different weights (related
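A minimal sketch of weighted logistic regression with hypothetical data; base R's glm() also accepts per-observation weights, alongside the rms::lrm() route suggested above:

```r
# Hypothetical data: binary outcome y, predictor x, confidence weights w
set.seed(1)
d <- data.frame(y = rbinom(50, 1, 0.5), x = rnorm(50),
                w = runif(50, 0.5, 1))

# glm() takes case weights; expect a warning about "non-integer
# #successes", which is harmless when the weights encode confidence
fit <- glm(y ~ x, data = d, family = binomial, weights = w)
coef(fit)

# The rms equivalent would be: rms::lrm(y ~ x, data = d, weights = w)
```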
On Thursday, 25 October 2012 at 15:02 +0530, Purna chander wrote:
Dear All,
My main objective was to compute the distance of 10 vectors from a
set having 900 other vectors. I've a file named seq_vec containing
10 records and 256 columns.
While computing, the memory was not
Hello,
I am using the polar.plot function from plotrix.
Is there a way to change the width of the grid lines?
grid.lwd doesn't work
Thanks
Martin
--
View this message in context:
http://r.789695.n4.nabble.com/Grid-Width-in-polar-plot-tp4647547.html
Sent from the R help mailing list
Dennis,
This works well and is exactly what I wanted for these matrices. Thank you
very much, however, when I would try to export the resulting list it just
is bound by columns. Now this isn't so bad for 2 matrices but I hope to
apply this where there are many matrices and scrolling down a file
Many thanks!
Stuart
-Original Message-
From: arun [mailto:smartpink...@yahoo.com]
Sent: 26 October 2012 13:54
To: Stuart Leask
Cc: R help; Petr PIKAL
Subject: Re: [R] How to pick columns from a ragged array?
Hi Stuart,
I guess, this should do it.
fun1H <- function(dat){
res1Head <-
Hi Thomas,
thanks for the comment. I had a similar idea, so I got rid of the rounding
(these are laboratory measurement based data, that's why I had rounded to
only 2 decimal places, but I also tried with 4 and got the same). I will try
to get rid of the many 0s with random noise, hopefully it will
Dear useRs,
we have just released a new version (1.2-0) of the colorspace package:
http://CRAN.R-project.org/package=colorspace
In addition to the infrastructure for transforming colors between
different color spaces (RGB, HSV, HCL, and various others) and support for
different types of color
Hi,
I have to delete the first row from my csv file but I don't want to read the
whole data into R memory (the size is around 10GB). Actually, I want to use
the LAF package, but it reads the data without a header.
Can anybody help me to resolve this problem?
-
Bharat Warule
Pune
Hi, my name is Ellen. I want to ask you about R code.
I got a code for extracting a pixel value, but I can't run it.
It says: Error in is.data.frame(x) : object 'lena' not found
Here is the original full code:
library(pixmap)
lena <- read.pnm("oldlennablur.pgm")
Adding a small random value (0.0001 to 0.0009) to all values helped to solve the
problem.
Thank You everyone, who helped.
Dear Berend and Thomas,
thank you for suggesting the lsei function. I found that the tlsce {BCE}
function also works very well:
library(BCE)
tlsce(A=bmat,B=target)
The limSolve package has an 'xsample' function for generating uncertainty
values via Monte-Carlo simulation, however it only works
Thanks MCOM,
This is really helpful for me.
-
Bharat Warule
Pune
Hi,
Thank you for your reply. I updated my post with the code. Also, about
posting from Nabble, since I am a new user I didn't know about that problem.
If I post to the mailing list ( r-help@r-project.org), would it get rid of
that problem?
output1 <- vector("numeric", length(1:r))
Dear R users,
I hope you all are doing great. Is there a way to automatically upgrade my R
2.15.0 to the new R 2.15.2 release instead of having to reinstall the new
release?
Any information regarding this would be greatly appreciated.
Best regards,
Paul
Thanks Jean,
Your 1st solution was one I've tried, w/o great success. The 2nd works a
lot better (on real, and more messy, data), especially when I set span to a
much smaller number than 1/10.
Thank you greatly.
-Chuck
Jean V Adams wrote
Using the data you provided, a combination of slope
Thank you. I tried Rprof and looks like aggregate function I am using is one
of the functions that takes most of the time. What is the difference between
self time and total time?
$by.total
total.time total.pct self.time self.pct
f 925.92 99.98
number <- c(0,1,3,4,5,6,8)
rsidp <- function(x){
i=0
{y <- sample(number, x, replace=TRUE)}
return(y)
}
plot(o, xlim=c(0,20), ylim(0,20), type="n", xlab="Sample size", ylab="Sample
variance")
for (i in seq(1:20)){
retVal=rsidp(i)
var(rsidpVector)
points(i,
I'm looking to create a correlation matrix, but I have already obtained the
correlations, which are stored in a vector. (Basically, I'm running a
simulation which requires a correlation matrix, but I am simulating the
various correlations.)
My aim is to create a function that can take the vector,
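One way to sketch this, assuming the vector holds the upper-triangle correlations in column-major order (R's default for upper.tri()):

```r
# Build a symmetric correlation matrix from a vector of correlations
vec2cor <- function(r, p) {
  R <- diag(p)                           # 1s on the diagonal
  R[upper.tri(R)] <- r                   # fill upper triangle, column-major
  R[lower.tri(R)] <- t(R)[lower.tri(R)]  # mirror into the lower triangle
  R
}
vec2cor(c(0.3, 0.5, 0.2), p = 3)
```

For simulation use, it is worth checking that the result is positive definite (e.g. that all eigen(R)$values are positive), since an arbitrary vector of correlations need not produce a valid correlation matrix.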
thank you so much for helping
__
R-help@r-project.org mailing list
When estimating values, each determined similarly and obtained by algebraic
operations, in some cases they are rounded to 0 decimal places and
in other cases to 2 or 3 decimal places. What is happening?
Thank you.
Dear R-users,
is there a better way to construct the matrix below without using any
for-loop or model.matrix? Preferably with some matrix algebra?
Thanks in advance,
Carlo Giovanni Camarda
## dimensions
m <- 3
n <- 4
mn <- m*n
k <- m+n-1
## with a for-loop
X <- matrix(0, mn, k)
for(i in 1:n){
For example
a=12344.567
a
[1] 12344.57
b=234.567
b
[1] 234.567
a=234235423.56
a
[1] 1.11e+20
b=111.898
b
[1] 112
Hello,
Just run your code and every time there's an error correct it.
number <- c(0,1,3,4,5,6,8)
rsidp <- function(x){
y <- sample(number, x, replace=TRUE)
y # don't need return()
}
plot(0, xlim=c(0,20), ylim=c(0,20), type="n", xlab="Sample size",
ylab="Sample variance")
for (i in seq(1:20)){
Hi,
I have a set of measurements that are made on a daily basis over many
years. I would like to produce a *non-parametric* smooth of these data to
estimate the seasonal cycle - to achieve this, I have been using the cyclic
cubic splines from the mgcv package. This works superbly in most
Hi all and thank you for your time.
I would like to delete rows from this matrix I call "var" if the character
in "Ref_Allele" is equal to the character in "Var_Allele". I have attached a
before and after, to help my poor explanation. If someone could provide me
with some code, or some guidance I
Hello,
First, two notes:
1. 'var' is a really bad name for a variable, it already is an R function.
2. Your matrix seems more like a data.frame. The difference is important
because data.frames by default coerce character strings to factors. I
have tried to make the code work if this is the
On Oct 26, 2012, at 2:45 AM, Bharat Warule wrote:
Hi,
I have to delete the first row from my csv file but I don't want to read the
whole data into R memory (the size is around 10GB). Actually, I want to use
the LAF package, but it reads the data without a header.
?read.csv
There is a 'skip' parameter and an
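The header can also be dropped by streaming the file through connections so that only one chunk is in memory at a time. A sketch (file names are hypothetical; a tiny stand-in file is created first so it runs):

```r
# Tiny stand-in for the real 10GB file
writeLines(c("h1,h2", "1,2", "3,4"), "big.csv")

con_in  <- file("big.csv", open = "r")
con_out <- file("big_noheader.csv", open = "w")
invisible(readLines(con_in, n = 1))      # read and discard the header row
while (length(chunk <- readLines(con_in, n = 100000)) > 0)
  writeLines(chunk, con_out)             # copy the rest, chunk by chunk
close(con_in)
close(con_out)
```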
Hi list,
Is there a way to use sqlQuery function where there is a sql file (ie.
sample.sql)? I just want to mention that in my sql file there are some
comment lines (starting with --). This means that if I paste all the lines
in the sql file, I'll come up with a long string that most part of it is
On Oct 26, 2012, at 6:52 AM, F_Smithers wrote:
I'm looking to create a correlation matrix, but I have already obtained the
correlations, which are stored in a vector. (Basically, I'm running a
simulation which requires a correlation matrix, but I am simulating the
various correlations.)
I think f0 (from your code) and f1 give identical results:
f0 <- function (m = 3, n = 4)
{
stopifnot(m > 0, n > 0)
mn <- m * n
k <- m + n - 1
X <- matrix(0, mn, k)
for (i in 1:n) {
wr <- 1:m + (i - 1) * m
wc <- rev(1:m + (i - 1))
where <- cbind(wr, wc)
All -
I'm new to SQL and the RODBC package. I've read the documentation
associated with the RODBC package, but I'm still having problems with
my SQL statements; I think my syntax, particularly with respect to my
WHERE statement, is off but I can't find any documentation as to why.
When I run a
Putting back the context:
( We are not looking at this with Nabble.)
When estimating values, each determined similarly and obtained by algebraic
operations, in some cases they are rounded to 0 decimal places and
in other cases to 2 or 3 decimal places. What is happening?
Thank
On Oct 26, 2012, at 11:02 AM, Steven Ranney steven.ran...@gmail.com wrote:
All -
I'm new to SQL and the RODBC package. I've read the documentation
associated with the RODBC package, but I'm still having problems with
my SQL statements; I think my syntax, particularly with respect to my
Hello everyone,
I'm trying to parse a very large XML file using SAX with the XML package
(i.e., mainly the xmlEventParsing function). This function takes as an
argument a list of other functions (handlers) that will be called to handle
particular xml nodes.
If when I use Rprof(), all the handler
Hello again,
I have another question related to parsing a very large xml file with SAX:
what kind of data structure should I favor? Unlike using DOM function that
can return lists of relevant nodes and let me use various versions of
'apply', the SAX parsing returns me one thing at a time.
I
The code you posted was not runnable. 'r' and at least 'Zi' were missing.
The 'total time' is the amount of the elapsed time that it was
sampling with the given function. The self time is how much time
was actually spent in that function.
From your data, one of the big hitter is the factor
On 26-10-2012, at 12:50, Richard James wrote:
Dear Berend and Thomas,
thank you for suggesting the lsei function. I found that the tlsce {BCE}
function also works very well:
library(BCE)
tlsce(A=bmat,B=target)
The limSolve package has an 'xsample' function for generating uncertainty
You can get even better improvement using the 'data.table' package:
require(data.table)
system.time({
+ dt <- data.table(value = x, z = z)
+ r3 <- dt[
+ , list(sum = sum(value))
+ , keyby = z
+ ]
+ })
user system elapsed
0.14 0.00 0.14
On
On 2012-10-26 08:58, David Winsemius wrote:
On Oct 26, 2012, at 6:52 AM, F_Smithers wrote:
I'm looking to create a correlation matrix, but I have already obtained the
correlations, which are stored in a vector. (Basically, I'm running a
simulation which requires a correlation matrix, but I am
Hi,
The Biostrings package (Bioconductor) contains a function (g.test(),
not exported) that does a G-test. It was originally written by Peter
Hurd. See:
https://stat.ethz.ch/pipermail/r-sig-ecology/2008-July/000275.html
Peter Hurd's implementation of g.test() looks very much like the
On 26-10-2012, at 12:50, Richard James wrote:
Dear Berend and Thomas,
thank you for suggesting the lsei function. I found that the tlsce {BCE}
function also works very well:
library(BCE)
tlsce(A=bmat,B=target)
The limSolve package has an 'xsample' function for generating uncertainty
On 26.10.2012 15:28, PaulJr wrote:
Dear R users,
I hope you all are doing great. Is there a way to automatically upgrade my R
2.15.0 to the new R 2.15.2 release instead of having to reinstall the new
release?
No, you have to install it.
Uwe Ligges
Any information regarding this would
Hi Paul,
If your question is whether there is a way for you to replace your
current R 2.15.0 with new R 2.15.2 without having to re-install all
the packages you installed under R 2.15.0, maybe there is a way
to do this (even though it's probably not recommended), and a lot of
people on this
Hi All,
I know in R there is function named 'step', which does the stepwise regression
and choose the model by AIC. However, if I want to choose a model per this
logic:
1. Run a full model (linear regression, f = lm(y ~., data = ZZZ), for
example)
2. Pick up the variable with
Why taking this off-list? Don't you want people on the list to tell you
how they do this on Windows? I don't use Windows sorry, so I can't help
you. One note though is that, as far as CRAN packages are concerned,
re-installing them on Windows should be relatively fast (as long as you
have fast
Dear useRs,
I have vectors of about 27 descriptors, each having 703 elements. What I want
to do is the following: 1. I want to do regression analysis of these 27 vectors
individually, against a dependent vector, say B, having the same number of
elements. 2. I would like to know the best 10 regression
I'm wondering if anyone has written a script to automate this job for
Windows. I mainly use Ubuntu, so I do not care much since it is easy
under Debian/Ubuntu, but I remember the boring process of updating R
under Windows: uninstall the current version, go to CRAN, download,
install, check a few
Dear All,
I am given some data to analyze. The data is in the form of a Stata
database (.dta file).
What is the best way to import it into an R dataframe?
Is there any particular caveat I should be aware of?
Many thanks
Lorenzo
Install the ares library first. Then type import.data("the directory where
you have saved the data", "dta").
On Fri, Oct 26, 2012 at 11:10 PM, Lorenzo Isella
lorenzo.ise...@gmail.comwrote:
Dear All,
I am given some data to analyze. The data is in the form of a Stata
database (.dta file).
What is the
Hui Du Hui.Du at dataventures.com writes:
I know in R there is function named 'step', which does the
stepwise regression and choose the model by AIC.
However, if I want to choose a model per this logic:
1. Run a full model (linear regression, f = lm(y ~., data = ZZZ),
for
Hi,
Not sure how you want the results to look like in .csv file.
If L is the list of matrices, you can also use ?sink()
sink("L.csv")
L
sink()
# L.csv output:
$matrix1
     var1 var2 var3
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9
$matrix2
     var4 var5 var6
[1,]    4
I'd look into the data.table package.
Cheers,
RMW
On Oct 26, 2012, at 6:00 PM, Frederic Fournier frederic.bioi...@gmail.com
wrote:
Hello again,
I have another question related to parsing a very large xml file with SAX:
what kind of data structure should I favor? Unlike using DOM function
Hi All,
I'm trying to create two side-by-side contour plots with one legend by
modifying the code found here:
http://wiki.cbr.washington.edu/qerm/sites/qerm/images/b/bb/Example_4-panel_v1a.R
I've been able to set up the other two functions called in the code, but I
can't find reference to
It's never actually used in the sample code you linked to, so you may
not even need it. Your best bet is to try to track down the author of
that page; it's not part of any of the normal R packages.
Sarah
On Fri, Oct 26, 2012 at 5:22 PM, K Simmons kasim...@gmail.com wrote:
Hi All,
I'm trying
Hi,
Maybe this helps.
set.seed(8)
mat1 <- matrix(sample(150, 90, replace=FALSE), ncol=9, nrow=10)
dat1 <- data.frame(mat1)
set.seed(10)
B <- sample(150:190, 10, replace=FALSE)
res1 <- lapply(dat1, function(x) lm(B ~ as.matrix(x)))
# or
res1 <- lapply(dat1, function(x) lm(B ~ x))
res1Summary <- lapply(res1, summary)
# to get
library(ROCR)
n <- 1000
fitglm <- function(iteration, intercept, sigma, tau, beta){
x <- rnorm(n, 0, sigma)
ystar <- intercept + beta*x
z <- rbinom(n, 1, plogis(ystar))
xerr <- x + rnorm(n, 0, tau)
model <- glm(z ~ xerr, family=binomial(logit))
int <- coef(model)[1]
slope <- coef(model)[2] # when add error
Thank you all for your suggestions. It really helped me a lot.
Best,
Bibek
On Thu, Oct 25, 2012 at 4:44 AM, arun smartpink...@yahoo.com wrote:
Hi Petr,
Thanks for sharing the function. True, more efficient than cut.
dat1$cat <- findInterval(dat1$V1, 1:6, rightmost.closed = T, all.inside = T)
That would be very implementation-specific, and ODBC is generic in its own way.
No, you must run one query at a time in general, and deal with the results
using the procedural language.
Keep in mind that you have to pick a back-end database to work with, and for
creating the database you may
Hello,
Using the same example, at the end, add the following lines to have the
models ordered by AIC.
aic <- lapply(res2, AIC)
idx <- order(unlist(aic))
lapply(list1[idx], names)
And if there are more than 10 models and you want the 10 best:
best10 <- idx[1:10]
lapply(list1[best10], names)
Hello,
There might be another problem with some of the numbers in the example.
R uses C's double numeric type, which means the limit is around
16 decimal digits of precision. One of the numbers has 23, so there will
always be a loss of accuracy.
a <- 234235423.56
b <-
CENTERS.csv (attached): http://r.789695.n4.nabble.com/file/n4647606/CENTERS.csv
Hello all,
I'm trying to run SPACECAP. A couple of days ago I ran it with a centers
file with 300 GPS points, now I'm trying to run it with 2250, but I get this
error:
Error in NN[i, 1:length(od)] <- od : subscript out of
Hey there,
I was wondering if someone could tell me if there's a package or command
that allows me to compute a GINI coefficient using a vector of weights.
Also the coefficient should be bias corrected.
Diego Rojas
[[alternative HTML version deleted]]
Thanks Rui for all your help and I really appreciate you taking the time to
explain everything!
Dear Sir/Madam,
I am an MSc student of Biostatistics. For my thesis, I used R to simulate
data. Surprisingly, when I ran the code below in R14, the output was a
vector full of FALSEs.
Would you please check and let me know why this happened?
ti=seq(1,10,by=0.1)
ti==2.9
I'm looking forward to
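This is the floating-point comparison issue from R FAQ 7.31: values produced by seq() carry tiny binary rounding error, so exact equality fails and a tolerance should be used instead. A sketch:

```r
ti <- seq(1, 10, by = 0.1)
any(ti == 2.9)               # FALSE: 1 + 19*0.1 is not exactly the double 2.9
any(abs(ti - 2.9) < 1e-9)    # TRUE: compare with a tolerance instead
which(abs(ti - 2.9) < 1e-9)  # 20
```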
I am regularly running into a problem where I can't seem to figure out how
to maintain the correct data order when selecting data out of a dataframe. The
code below shows an example of trying to pull data from a dataframe using
ordered zip codes. My problem is returning the pulled data in the correct
I am trying to read a csv file with Chinese language text in it. The file
should look like this:
userid,jobid,Title,companyid,industryids1
82497,1160,互联网产品经理,12
96429,658,企划经理(商业公司),24
14471,95,产品运营经理,25,6
14471,1708,产品营销高级经理,727,2
14471,1558,产品总监,611,4
14471,1777,产品总监,743,1
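Declaring the file's encoding when reading usually keeps the Chinese text intact. A sketch, assuming the file is UTF-8 (file name hypothetical; a small stand-in file is written first so it runs):

```r
# Write a small UTF-8 stand-in file so the sketch is self-contained
con <- file("jobs.csv", open = "w", encoding = "UTF-8")
writeLines(c("userid,Title", "82497,互联网产品经理"), con)
close(con)

# fileEncoding tells read.csv how the bytes on disk are encoded
dat <- read.csv("jobs.csv", fileEncoding = "UTF-8",
                stringsAsFactors = FALSE)
dat$Title
```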
Hi all-
Thank you for reading my post. Please bear in mind that I'm very much a
newbie with R! My question is this:
I'm trying to use rollapply() on an irregular time series, so I can't simply
use the width parameter (I don't think). Rather than the last 5 entries, I'd
like to rollapply on the last 6
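For an irregular series, one simple alternative to a fixed-width rollapply() is to index by date directly (all names and the 6-day window here are hypothetical):

```r
set.seed(1)
dates  <- as.Date("2012-01-01") + sort(sample(0:60, 20))
values <- rnorm(20)

# Mean of all observations in the 6 days up to and including each date;
# each window contains at least the current observation
roll6 <- sapply(seq_along(dates), function(i) {
  in_window <- dates > dates[i] - 6 & dates <= dates[i]
  mean(values[in_window])
})
```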
Hi everyone,
I have carried out a multiple imputation in R using Amelia II and have
created 5 multiply imputed datasets. The purpose of my research is to fit a
Poisson Model to the data to estimate numbers of hospital admissions.
Now that I have 5 completed datasets and I have to pool all the 5
Sorry I forgot to include the R list.
Best regards,
Paul
El 26/10/2012 14:47, Hervé Pagès hpa...@fhcrc.org escribió:
Why taking this off-list? Don't you want people on the list to tell you
how they do this on Windows? I don't use Windows sorry, so I can't help
you. One note though is that,
Hi all,
I am a very recent user of R. Mine is probably a basic question. I
would like to know how to create a certain set of points with
geographic coordinates between two endpoints. Please explain how to
do this using R. Which packages should be used for doing this?
Thanks
That solution works very well.
The only issue is that 'rnorm' occasionally generates negative values which
aren't logical in this situation.
Is there a way to set a lower limit of zero?
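Two common sketches for keeping simulated values non-negative; note that both change the distribution (clamping creates a point mass at zero, rejection gives a truncated normal), so which is appropriate depends on the model:

```r
set.seed(1)

# 1) clamp: floor negative draws at zero
x <- pmax(rnorm(100, mean = 0.5, sd = 0.5), 0)

# 2) reject and redraw until non-negative (simple rejection sampling)
draw_pos <- function(n, mean, sd) {
  x <- rnorm(n, mean, sd)
  while (any(neg <- x < 0)) x[neg] <- rnorm(sum(neg), mean, sd)
  x
}
y <- draw_pos(100, mean = 0.5, sd = 0.5)
```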
Thank you very much for saving my time. I ran 500 simulations in 20 min using
sapply function. I'll try data.table method for the rest of my
simulations to get the results even faster. Thanks a lot again!
jholtman wrote
You can get even better improvement using the 'data.table' package:
Hi,
You can try either one of these:
dat1 <- read.table(text="
Ref_Pos Ref_Allele Var_Allele Var_Freq
1 A A 100
2 T G 50
3 G G
On 2012-10-26 13:00, eliza botto wrote:
Dear useRs,
I have vectors of about 27 descriptors, each having 703 elements. What I want
to do is the following: 1. I want to do regression analysis of these 27 vectors
individually, against a dependent vector, say B, having the same number of
elements. 2.
Hi,
I'm working on the code below; however, every time I run it, when it gets
to OpenBUGS I keep getting the error message: array index is greater than
array upper bound for hab.
Any help would be greatly appreciated,
Suzie
Codes:
ungulate <- read.csv(file.choose()) # ungulate
You seem to be quite lost. You should execute one statement at a time, and
troubleshoot from there. I suspect that your attempts to get your file in the
right place went wrong, or the file is the wrong type of file for that function
(regardless of the name you gave it).
As for giving you an