Is there any package to balance unbalanced panel data?
--
Akhil Dua
Mo:+91-7827-662-202
Consultant
National Institute of Public Finance and Policy
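No answer appears in this digest, so here is a base-R sketch (mine, not from the thread) of one way to balance a panel by hand: expand to the full id x time grid and left-merge, so the missing cells become NA rows. The plm package also offers make.pbalanced() if plm is in use; the toy data below are invented.

```r
# Toy unbalanced panel: id 2 is missing year 2001
panel <- data.frame(id   = c(1, 1, 2),
                    year = c(2000, 2001, 2000),
                    y    = c(5, 6, 7))

# Full id x year grid, then left-merge the observed rows onto it
grid <- expand.grid(id = unique(panel$id), year = unique(panel$year))
balanced <- merge(grid, panel, all.x = TRUE)
balanced   # 4 rows; the (2, 2001) cell has y = NA
```

Whether NA-filling is the right notion of "balancing" depends on the estimator; dropping to the shared time periods is the other common choice.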
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
Hi,
I tried to read your data from the image:
OPENCUT<- read.table("OpenCut.dat",header=TRUE,sep="\t")
OPENCUT
FC LC SR DM
1 400030.34 1323.5 0 400
2 12680.13 2.5 0 180
3 472272.75 2004.7 3 300
4 332978.03 1301.3 106 180
5 98654.20 295.0 0 180
6 68142.05 259.9
Dear all,
How can I calculate (at runtime) the maximum width and height I can use so that
the content in the graphics window does not appear truncated? On Windows I can avoid
it (using windows(width=my.width, height=my.height, rescale="fixed")) because
the value "fixed" for rescale makes the scroll bar
You can also do this:
set.seed(28)
t1<-data.frame(id=rep(1:3,rep(3,3)),dt=rep(1:3,rep(9,3)),var=c('num1','num2','norm'),value=rnorm(27))
t2<-t1[t1$var=="norm",]
t3<-t1[t1$var!="norm",]
df1<-merge(t2,t3,by=c("id","dt"))
df1$Norm<-df1$value.y/df1$value.x
df2<-df1[,c(1:2,5,7)]
colnames(df2)[3] <- "var"
Hi,
Try this:
set.seed(28)
t1<-data.frame(id=rep(1:3,rep(3,3)),dt=rep(1:3,rep(9,3)),var=c('num1','num2','norm'),value=rnorm(27))
head(t1)
# id dt var value
#1 1 1 num1 -1.90215722
#2 1 1 num2 -0.06429479
#3 1 1 norm -1.33116707
#4 2 1 num1 -1.81999167
#5 2 1 num2 0.16266969
#6
On Apr 8, 2013, at 12:28 PM, yanboulanger wrote:
Hi folks,
I have some problems with plots (any) saved from R (saved from the
menu). It
seems that text (either plot titles or axes) is sometimes not
"concatenated"
in a full "vector" (Illustrator-speaking). I mean, sometimes, a
given title
Hi
I would like to normalize my data by one of the variables in long format.
My data is like this:
> t1<-data.frame(id=rep(1:3,rep(3,3)),dt=rep(1:3,rep(9,3)),var=c('num1','num2','norm'),value=rnorm(27))
> t1
id dt var value
1 1 1 num1 -1.83276256
2 1 1 num2 1.57034303
3 1 1 no
On Apr 8, 2013, at 11:54 AM, Yuan, Rebecca wrote:
Hello all,
I would like to attach some results from R to an existing pdf file,
can I do that through R?
Paul Murrel has provided tools and documentation to do exactly this.
Do a search with his name.
--
David Winsemius, MD
Alameda, CA,
I am trying to solve an integral in R. However, I am getting an error when
I am trying to solve for that integral.
The equation that I am trying to solve is as follows:
$$ C_m = \frac{|x|\,e^{2x}}{\pi^{1/2}}\int_0^t t^{-3/2}e^{-x^2/t - t}\,dt $$
The code tha
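The poster's code is cut off, so here is my own minimal sketch (not the original code; Cm and t.upper are invented names) of evaluating that integral numerically with base R's integrate():

```r
# C_m = |x| e^{2x} / pi^{1/2} * integral_0^t u^{-3/2} exp(-x^2/u - u) du
Cm <- function(x, t.upper) {
  integrand <- function(u) u^(-1.5) * exp(-x^2 / u - u)
  I <- integrate(integrand, lower = 0, upper = t.upper)$value
  abs(x) * exp(2 * x) / sqrt(pi) * I
}
Cm(1, 5)
```

The integrand vanishes fast enough at u = 0 (the exp(-x^2/u) factor dominates the u^(-3/2) singularity for x != 0) for integrate() to cope. As a sanity check, the known closed form of the full integral gives C_m = e^(2x - 2|x|), i.e. 1 for x > 0 as t grows large.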
Dear R Users,
I am trying to solve a tridiagonal matrix in R. I am wondering if there is
an inbuilt R function or package to solve that. I tried looking on google
but couldn't find something that would help directly. Any help is highly
appreciated.
Thanks.
Janesh
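Not an answer from the thread, but a base-R sketch of the Thomas algorithm for tridiagonal systems (thomas_solve and its argument names are invented; base solve() on the full matrix also works for moderate sizes):

```r
# Thomas algorithm: O(n) solver for a tridiagonal system
#   lo: sub-diagonal (length n-1), di: main diagonal (length n),
#   up: super-diagonal (length n-1), rhs: right-hand side (length n)
thomas_solve <- function(lo, di, up, rhs) {
  n <- length(di)
  cp <- numeric(n - 1)
  dp <- numeric(n)
  cp[1] <- up[1] / di[1]
  dp[1] <- rhs[1] / di[1]
  for (i in 2:n) {                                  # forward sweep
    denom <- di[i] - lo[i - 1] * cp[i - 1]
    if (i < n) cp[i] <- up[i] / denom
    dp[i] <- (rhs[i] - lo[i - 1] * dp[i - 1]) / denom
  }
  x <- numeric(n)
  x[n] <- dp[n]
  for (i in (n - 1):1) x[i] <- dp[i] - cp[i] * x[i + 1]   # back-substitution
  x
}
```

It runs in O(n) time and memory, versus O(n^3) time and O(n^2) memory for a dense solve(), which is what makes it worthwhile for large n.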
On 8 Apr 2013, at 23:21, Andy Cooper wrote:
> So, no one has direct experience running irlba on a data matrix as large as
> 500,000 x 1,000 or larger?
I haven't used irlba in production code, but ran a few benchmarks on much
smaller matrices. My impression was (also from the documentation, I
On Mon, Apr 8, 2013 at 3:54 PM, Harry Mamaysky wrote:
> Can someone explain why this happens when one of the list elements is named
> 'all'?
>
>> zz <- list( zoo(1:10,1:10), zoo(101:110,1:10), zoo(201:210,1:10) )
>> names(zz)<-c('test','bar','foo')
>> do.call(cbind,zz)
>test bar foo
> 1
Because 'all' is the name of one of the arguments to cbind.zoo:
R> args(cbind.zoo)
function (..., all = TRUE, fill = NA, suffixes = NULL, drop = FALSE)
NULL
do.call constructs a call somewhat like:
R> cbind(test=zz$test, all=zz$all, foo=zz$foo)
The same thing would happen for list elements named
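The capture is easy to demonstrate with a toy function of the same shape (f and args here are mine, not from zoo):

```r
# A function with dots followed by a named formal, like cbind.zoo
f <- function(..., all = TRUE) list(dots = names(list(...)), all = all)

args <- list(test = 1, all = 2, foo = 3)
res <- do.call(f, args)
res$all    # 2: the 'all' element was matched to the formal, not passed in ...
res$dots   # "test" "foo"
```

Renaming the offending element before the call avoids the capture.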
Dear Berend,
thanks to everyone who responded. I should point out that the data matrix is not
sparse. Yes, the irlba package seemed of interest; however, annoyingly, they
don't specify in the package how large "large" is, and when you read the
paper on which it is based, it deals with rather smal
Can someone explain why this happens when one of the list elements is named
'all'?
> zz <- list( zoo(1:10,1:10), zoo(101:110,1:10), zoo(201:210,1:10) )
> names(zz)<-c('test','bar','foo')
> do.call(cbind,zz)
test bar foo
1 1 101 201
2 2 102 202
3 3 103 203
4 4 104 204
5 5 1
Dear Rees Morrison,
Re:
(...)
>
> What additional code would create a table output, with the function name in
> the left column, sorted alphabetically within a pattern, and the pattern of
> the function in the column to the right. Users could then sort by those
> patterns, rename some to su
Hi,
Try ?unique()
You haven't provided reproducible data or information about the package. So,
this may or may not work.
dat1<- data.frame(idvar=c(rep(1,2),2,4),col2=c(7,7,8,9))
dat1
# idvar col2
#1 1 7
#2 1 7
#3 2 8
#4 4 9
unique(dat1)
# idvar col2
#1 1 7
Hi folks,
I have some problems with plots (any) saved from R (saved from the menu). It
seems that text (either plot titles or axes) is sometimes not "concatenated"
in a full "vector" (Illustrator-speaking). I mean, sometimes, a given title
is broken in several different chunks even though in R, it
On 08-04-2013, at 16:44, Andy Cooper wrote:
>
>
> Dear All,
>
> I need to perform a SVD on a very large data matrix, of dimension ~ 500,000 x
> 1,000 , and I am looking
> for an efficient algorithm that can perform an approximate (partial) SVD to
> extract on the order of the top 50
> right
Hi Andy,
On Mon, Apr 8, 2013 at 7:44 AM, Andy Cooper
wrote:
>
>
>
> Dear All,
>
> I need to perform a SVD on a very large data matrix, of dimension ~
500,000 x 1,000 , and I am looking
> for an efficient algorithm that can perform an approximate (partial) SVD
to extract on the order of the top 50
Check the mada package for a bivariate approach to sensitivity/specificity.
Check the metafor package if trying only to do a meta analysis of a
proportion.
Nathan
On 4/8/13 12:51 PM, "array chip" wrote:
>Hi all, I am new to meta-analysis. Is there any special package that can
>calculate "s
On 13-04-08 2:54 PM, Yuan, Rebecca wrote:
Hello all,
I would like to attach some results from R to an existing pdf file, can I do
that through R?
R doesn't have built-in tools that can edit pdf files, but there are
external tools (e.g. pdftk) that you can call from R. The animations
packag
Hello!
Does anyone know how to apply bagging for SVM? ( for example)
I am using the adabag package to execute bagging, but this method, "bagging",
works with classification trees. I would like to apply bagging to other
classifiers such as SVM, RNA or KNN. Has anyone done it?
Thanks!!
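adabag is indeed tree-only, but bagging is straightforward to hand-roll around any classifier. This base-R sketch is mine (bag, fit_fun, pred_fun are invented names); e1071::svm or any other fit/predict pair slots in the same way:

```r
# Generic bagging: B bootstrap fits of an arbitrary classifier, majority vote
bag <- function(x, y, newx, B, fit_fun, pred_fun) {
  votes <- replicate(B, {
    idx <- sample(nrow(x), replace = TRUE)          # bootstrap resample
    fit <- fit_fun(x[idx, , drop = FALSE], y[idx])
    as.character(pred_fun(fit, newx))
  })
  apply(votes, 1, function(v) names(which.max(table(v))))
}

# Toy plug-in classifier: nearest class mean on the first column
fit_mean  <- function(x, y) tapply(x[, 1], y, mean)
pred_mean <- function(fit, newx)
  names(fit)[apply(abs(outer(newx[, 1], fit, "-")), 1, which.min)]

set.seed(1)
x <- matrix(c(rnorm(20, 0), rnorm(20, 5)), ncol = 1)
y <- rep(c("a", "b"), each = 20)
preds <- bag(x, y, newx = x, B = 25, fit_fun = fit_mean, pred_fun = pred_mean)
```

The vote is over class labels, so any classifier that returns labels (rather than probabilities) can be plugged in unchanged.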
Dear all,
Can I "import" the content of a pdf (or jpg) and put it in the graphics window?
Thanks.
Eva
On 13-04-08 2:29 PM, Jannis wrote:
Thanks for your reply, Duncan. I had hoped for an automatic way that does not
require manually loading the packages ... Perhaps this time that is not
the case.
That doesn't make sense. How could checkUsage possibly know what
packages you plan to attach if you
No answer, but first obvious question: Is the matrix sparse?
Next obvious question: what's your RAM, OS, etc.?
(Reply to list, as I can't help further).
-- Bert
On Mon, Apr 8, 2013 at 7:44 AM, Andy Cooper wrote:
>
>
> Dear All,
>
> I need to perform a SVD on a very large data matrix, of dimensio
Hi Prof Brian / Uwe,
I understand your ideas, but the problem is as follows:
It works fine on a Windows machine (20-inch screen) if I use
windows(width=8.27, height=11.69, rescale="fixed"), because it shows a scroll bar
(vertical) and you can see all the content. Moreover, the pdf generated is
Hello all,
I would like to attach some results from R to an existing pdf file, can I do
that through R?
Thanks,
Rebecca
Hi all, I am new to meta-analysis. Is there any special package that can
calculate "summarized" sensitivity with 95% confidence interval for a
diagnostic test, based on sensitivities from several individual studies?
Thanks for any suggestions.
John
Hello,
I am trying to run the following Random Effects panel model in R, but I am
getting an error message. I am just thinking, maybe the random.method I
chose is not working?
re1=plm(gdp ~ fossil+renewables+labour+gfcf, model = "random", data =
new.frame,index = c("id"),random.method = "swar")
E
Hi
I am using prediction.strength with k-medoids algorithms. There are simple
examples like
prediction.strength(iriss,2,3,M=3,method="pam")
I wrote my code like
prediction.strength(data,2,6,M=10,clustermethod=pamkCBI,DIST,krange=2:6,diss=TRUE,usepam=TRUE)
because I am using the dissimilarit
Dear All,
I need to perform a SVD on a very large data matrix, of dimension ~ 500,000 x
1,000 , and I am looking
for an efficient algorithm that can perform an approximate (partial) SVD to
extract on the order of the top 50
right and left singular vectors.
Would be very grateful for any advic
Hi all,
Have any of you installed R on a Tablet, and if so, which Tablet? My
family wants to gift me a Tablet, but I suppose R cannot be installed on a
Tablet. And the little "free" time I have, I spend mainly trying to "improve" in R.
Can you tell me, if there are any, which Tablets R can be installed on?
On Mon, Apr 08 2013, PIKAL Petr wrote:
Thanks for responding.
> without data we can provide just basic help.
> fit<-lm(Time~I(1/Requests))
> shall give you a hyperbolic fit.
> You can test if your data follow this assumption by
> plot(1/Requests, Time)
> which shall form a straight line.
>
>
Dear All,
I would greatly appreciate if someone was so kind and share with me or us
an R-package OR R-script example that calculate Eta correlation coefficient
between a nominal (independent) and an interval (dependent) variable in R
in a similar way as SPSS calculates it through crosstabulation t
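For the point estimate, no package is needed; here is a base-R sketch of the correlation ratio (my own code, not from the thread). eta is defined as sqrt(SS_between / SS_total), and eta^2 equals the R^2 of a one-way ANOVA, which, assuming SPSS uses the usual crosstabs definition, is what it reports:

```r
# Correlation ratio (eta): nominal g explaining interval y
# eta = sqrt(SS_between / SS_total)
eta <- function(y, g) {
  g <- as.factor(g)
  grand <- mean(y)
  ss_total <- sum((y - grand)^2)
  ss_between <- sum(tapply(y, g, function(v) length(v) * (mean(v) - grand)^2))
  sqrt(ss_between / ss_total)
}
```

A quick cross-check: eta(y, g)^2 should match summary(lm(y ~ g))$r.squared.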
I wonder if anyone knows of a package enabling the Blom transformation (for data
that are not normal) in R.
Thank you!
Anselmo
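For what it's worth, the standard Blom (1958) normal-scores formula is one line of base R; this sketch (mine, not a package) applies qnorm to the adjusted ranks:

```r
# Blom normal scores: qnorm((rank - 3/8) / (n + 1/4))
blom <- function(x) qnorm((rank(x) - 3/8) / (length(x) + 1/4))

set.seed(1)
x <- rexp(200)            # skewed, clearly non-normal
z <- blom(x)
shapiro.test(z)$p.value   # should be large: the scores are near-normal
```

Ties in x would need a tie-breaking rule for rank() (see its ties.method argument); for continuous data the default is fine.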
On 08/04/2013 2:12 PM, Jannis wrote:
Dear list members,
I frequently program small scripts and wrap them into functions to be
able to check them with checkUsage. In case these functions (loaded via
source or copy pasted to the R console) use functions from other
packages, I get this error:
no
Thanks for your reply, Duncan. I had hoped for an automatic way that does not
require manually loading the packages ... Perhaps this time that is not
the case.
Cheers
Jannis
On 08.04.2013 20:25, Duncan Murdoch wrote:
On 08/04/2013 2:12 PM, Jannis wrote:
Dear list members,
I frequently progr
Dear list members,
I try to use checkUsage from codetools to test my scripts and functions
for errors. The scripts warn me correctly in case I use an unbound
variable, but fail to do so in case I bind a value to it after its first
usage in the script. Both cases, however, would result in th
Hi,
Just to add:
load:
library(plyr) #forgot
if you are using ddply()
#you can directly use subset() in aggregate()
aggregate(capital~firm,data=subset(Grunfeld,year%in% 1951:1954),mean)
# firm capital
#1 1 1660.4500
#2 2 519.9000
#3 3 771.6500
#4 4 313.7750
#5 5 748
Dear list members,
I frequently program small scripts and wrap them into functions to be
able to check them with checkUsage. In case these functions (loaded via
source or copy pasted to the R console) use functions from other
packages, I get this error:
no visible global function definitio
Hi,
1)
Grunfeld1951<-Grunfeld[Grunfeld$year==1951,]
Grunfeld1951
# firm year inv value capital
#17 1 1951 755.90 4833.00 1207.70
#37 2 1951 588.20 2289.50 342.10
#57 3 1951 135.20 1819.40 671.30
#77 4 1951 160.62 809.00 203.50
#97 5 1951 80.30 327.30 683.90
#117
The data frame looks okay.
str(dat1)
#'data.frame': 2 obs. of 3 variables:
# $ X : num 0.1 0.2
# $ Y1: int 3 2
# $ Y2: int 2 1
dat1<- structure(list(X = c(0.1, 0.2), Y1 = c(3L, 2L), Y2 = c(2L, 1L
)), .Names = c("X", "Y1", "Y2"), class = "data.frame", row.names = c(NA,
-2L))
as.table(dat1)
Thanks Jeff, and William.
@William: your example is terrific, very clear. I love it, thanks!
Cheers,
Mike
On Sun, Apr 7, 2013 at 7:52 PM, William Dunlap wrote:
> > I am aware of the apply() functions,
> > but they are wrapper function of for loops, so they are slower.
>
> While this sounds ri
> I would like to use predict.lm to obtain a set of predicted
> values based on a regression model I estimated.
>
Do you want predictions - which will always be the same - or randomly
distributed data that is based on the model prediction plus random error?
If you are starting with a table rather than a data frame, try this
> # Convert Arun's data.frame, dat1, to a table, dat2
> dat2 <- as.table(as.matrix(data.frame(dat1[,2:3], row.names=dat1[,1])))
> # convert dat2 to form requested
> dat3 <- data.frame(dat2)
> dat4 <- dat3[rep(1:nrow(dat3), dat3$Freq), ]
I didn't fully understand the requirement.
This version puts the first col in the first panel, the second col in the
second panel,
and if there are more levels to a, and more colors, then the nth color in
the nth panel.
bwplot(b ~ x | a, data=DF,
panel=function(..., col) {
panel.
Student middlebury.edu> writes:
>
> Hey,
>
> So I have a scatter plot and I am trying to plot a curve to fit the data
> based on a Holling Type III functional response. My function is this:
>
It's hard to see how 'size=DBH' could make sense; 'size' is
an overdispersion parameter ... I guess it
Hi Torvon,
In R you can specify rownames and colnames for a matrix, which in
essence become what you describe as the first row and first column,
except they are not actually rows or columns (so that the matrix is
still numeric). qgraph will extract labels from these rownames. You
can also use the la
In that case specify the colors you want, for example
## install.packages("HH") ## if necessary
library(HH)
bwplot(b~x|a,data=DF, panel=panel.bwplot.intermediate.hh,
col = c("darkorange1","limegreen","limegreen"))
On Sun, Apr 7, 2013 at 9:38 PM, Richard M. Heiberger wrote:
> I recom
It works, thanks!
Concha
Try
print(bwplot(b~x|a,data=DF,col=c("black","black"),
panel=function(x,...) {
pnl = panel.number()
if (pnl ==1) panel.bwplot(x,fill="darkorange1",...)
else panel.bwplot(x,fill="limegreen",...)
We aim to visualize a 17*17 correlation matrix with the package *qgraph*,
consisting of 16 variables.
Without variable names in the input file, that works perfectly
R> qgraph(data)
but we'd like variable names instead of numbers for variables.
In a correlation matrix, the first row and the firs
There are a number of different ways to do this, so it would have been
helpful if you had set the context with a stripped down example. That
said, here are some pointers:
> x1 <- NULL
> x1 <- c(x1,3)
> x1
[1] 3
> x1 <- c(x1,4)
> x1
[1] 3 4
So you see that you can add elements to the end of a vect
# here's an example data frame
n <- 10
mydf <- data.frame(present=rnorm(n),
answer=sample(1:4, n, replace=TRUE),
p.num=sample(1:18, n, replace=TRUE),
session=sample(1:2, n, replace=TRUE),
count=sample(1:8, n, replace=TRUE),
type=sample(1:3, n, replace=TRUE))
# define a new variable, combo5, that
Graham,
# use dput() to share your data in a way that's easy for R-help readers to
use
mydf <- structure(list(dn = c(4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L,
4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L), obs = c(1L, 1L, 1L, 1L,
1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 4L),
choice = c(0L,
Taking the Grunfeld data, which is built-in in R, for example,
(1) How can I construct a dataset (or data frame) that consists of the data
of all firms in 1951?
(2) How can I calculate the average capital of each firm over the period
1951-1954?
What I can imagine is to categorize the data by firm, a
Try
print(bwplot(b~x|a, data=DF, col=c("black","black"),
  panel=function(x,...) {
    pnl <- panel.number()
    if (pnl == 1) panel.bwplot(x, fill="darkorange1", ...)
    else panel.bwplot(x, fill="limegreen", ...)
  }
))
On Thu, Apr 4, 2013 at 7:34 AM, ivo welch wrote:
> I wonder whether there is a complete list of all R commands (incl the
> standard packages) somewhere, preferably each with its one-liner AND
> categorization(s). the one-liner can be generated from the documentation.
>
Try the 'sos' package. Not
Hi
another option is to use na.locf from zoo, based on assumption that each zero
has to be replaced with previous nonzero value.
dat[dat==0]<-NA
library(zoo)
dat$state<-na.locf(dat$state)
dat$country<-na.locf(dat$country)
> dat
  val state country
1 -0.543116777672352 TN In
Subject line says it all really.
I would be pleased to receive feedback on what I have left out which
I should have put in, and what I have put in which I should have left out.
Michael Dewey
i...@aghmed.fsnet.co.uk
http://www.aghmed.fsnet.co.uk/home.html
Hi,
Not sure if you have only one "country" or not.
Try this:
dat<- data.frame(val,state,country,stringsAsFactors=FALSE)
dat$country[dat$country==0]<-dat$country[1]
#or
#dat$country[dat$country==0]<- dat$country[dat$country!=0]
res<-do.call(rbind,lapply(split(dat,cumsum(grepl("[A-Za-z]",dat$
Hi,
Try this:
dat1<-read.table(text="
X Y1 Y2
0.1 3 2
0.2 2 1
",sep="",header=TRUE)
res<-do.call(rbind,lapply(split(dat1,seq_len(nrow(dat1))),function(x)
{Y=rep(colnames(x)[-1],x[-1]); X=rep(x[,1],length(Y));
data.frame(X,Y,stringsAsFactors=FALSE)}))
row.names(res) <- 1:nrow(res)
Hello all,
I have data in the form of a table:
X    Y1   Y2
0.1   3    2
0.2   2    1
And I would like to transform in the form:
X Y
0.1 Y1
0.1 Y1
0.1 Y1
0.1 Y2
0.1 Y2
0.2 Y1
0.2 Y1
0.2 Y2
Any ideas how?
Thanks in advance,
IOanna
Dear all,
I would like to use predict.lm to obtain a set of predicted values based on
a regression model I estimated.
When I apply predict.lm to two vectors that have the same values, the
predicted values will be identical. I know that my regression model is not
perfect and I would like to take a
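What the poster seems to want is simulation from the fitted model rather than prediction. A base-R sketch with invented toy data, adding draws from the estimated residual distribution on top of predict.lm (simulate(fit) does something similar for the original design points):

```r
set.seed(42)
d <- data.frame(x = runif(50))
d$y <- 2 + 3 * d$x + rnorm(50, sd = 0.5)
fit <- lm(y ~ x, data = d)

newd  <- data.frame(x = c(0.3, 0.3))    # two identical inputs
pred  <- predict(fit, newdata = newd)   # identical fitted values
noisy <- pred + rnorm(length(pred), sd = summary(fit)$sigma)
pred    # the same value twice
noisy   # differs: fitted value plus simulated residual error
```

Note this treats the coefficient estimates as fixed; a fuller simulation would also draw from their sampling distribution.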
You misunderstood me; I referred to having the same color for the boxes
on the left and a different color for all the boxes on the right! Can you help me?
Concha
> I recommend the panel function from the HH package
>
> ## install.packages("HH") ## if necessary
> library(HH)
> bwplot(b~x|a,da
Hi
without data we can provide just basic help.
fit<-lm(Time~I(1/Requests))
shall give you a hyperbolic fit.
You can test if your data follow this assumption by
plot(1/Requests, Time)
which shall form a straight line.
Anyway, when you want to provide data, use
dput(your.data) and copy console ou
Hi,
I am new to R, and I suspect I am missing something simple.
I have a data set of performance data that correlates request
rate to response times
http://pastebin.com/Xhg0RaUp
There is some jitter in the data, but mostly it looks like a hockey
stick curve. It does
Dear Listmembers
Thank you for your help in resolving the duplicate row.names error. The
solution, ensuring that the choice set is completely balanced, was very
straightforward; once that was achieved the program worked fine.
Kind Regards
Graham
On 08/04/2013 07:54, Eva Prieto Castro wrote:
Hi Uwe,
Thanks. At this point I need to resolve one doubt:
If I use windows(width=8.27, height=11.69, rescale="fixed"), it shows the
content in the graphics window at real size, and a vertical scroll bar appears. How can I get that
on Mac and Linux? I mean, ho
Hi Uwe,
Thanks. At this point I need to resolve one doubt:
If I use windows(width=8.27, height=11.69, rescale="fixed"), it shows the
content in the graphics window at real size, and a vertical scroll bar appears. How can
I get that on Mac and Linux? I mean, how can I get it with quartz() and X11()?
Moreove
Hello
Following some standard textbooks on ARMA(1,1)-GARCH(1,1) (e.g. Ruey
Tsay's Analysis of Financial Time Series), I try to write an R program
to estimate the key parameters of an ARMA(1,1)-GARCH(1,1) model for
Intel's stock returns. For some random reason, I cannot decipher what
is wrong with
On Sun, Apr 7, 2013 at 11:48 PM, S Ellison wrote:
>
Another piece of software that can help you is TeamViewer. Install it on Linux as
well as on Windows and you can connect effortlessly between the two.
The woods are lovely, dark and deep
But I have promises to keep
And miles before I go to sleep
And miles