Hi there,
I used the rlm() function to do a robust estimation based on
M-estimates.
Obviously, you only get the estimate, standard error and t-value from
this rlm() function.
So, how can I tell whether a coefficient is statistically significant without
a p-value?
Thanks
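One common way to get approximate p-values for rlm() coefficients is to compare the reported t-values against a t distribution with the residual degrees of freedom. A minimal sketch, using the built-in stackloss data as a stand-in for the original model (note that p-values for M-estimates are only approximate):

```r
library(MASS)  # provides rlm(); stackloss is a built-in base R dataset

fit  <- rlm(stack.loss ~ ., data = stackloss)
ctab <- summary(fit)$coefficients                 # Value, Std. Error, t value
df   <- length(fit$residuals) - length(coef(fit)) # residual degrees of freedom
p    <- 2 * pt(abs(ctab[, "t value"]), df = df, lower.tail = FALSE)
cbind(ctab, "p value" = p)                        # attach approximate p-values
```

The same idea works for any rlm() fit: the t-values are already in the summary, so only the reference distribution has to be supplied by hand.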
Given my acknowledged statistical ignorance, I tried to find a *solution*
in this forum...
And this is not primarily a statistical issue; it is an issue about the
Hausman test in the R environment.
I cannot imagine that no one in this forum has ever done a Hausman test on OLS
regressions.
I read in
Thanks for your answer, John!
Having read Wooldridge, Verbeek and Hausman himself, I tried to figure
out how this whole Hausman test works.
I tried to figure out whether endogeneity exists in my particular case. So I did
this:
Y ~ X + Z + Rest + error term  # this is the original regression
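For reference, a regression-based Durbin-Wu-Hausman check along these lines can be sketched with simulated data (all variable names and the data-generating process below are invented for illustration; z plays the role of an instrument for the suspect regressor x):

```r
# Durbin-Wu-Hausman via the control-function approach, on simulated data
set.seed(1)
n <- 200
z <- rnorm(n)                 # instrument (correlated with x, not with u)
u <- rnorm(n)                 # unobserved confounder causing endogeneity
x <- z + u + rnorm(n)         # suspect regressor, endogenous by construction
y <- 1 + 2 * x + u + rnorm(n) # outcome

first <- lm(x ~ z)                 # first stage: project x on the instrument
aug   <- lm(y ~ x + resid(first))  # augmented regression with first-stage residuals
summary(aug)$coefficients          # significant residual term => endogeneity
```

If the coefficient on resid(first) is significant, OLS and IV estimates differ and endogeneity is indicated; for panel data, the plm package's phtest() implements the classic fixed- vs. random-effects Hausman test directly.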
Hi there,
I am really new to statistics in R, and to statistics itself as well.
My situation: I ran a lot of OLS regressions with different independent
variables (using the lm() function).
After having done that, I know there is endogeneity due to omitted
variables. (or perhaps due to any other
Hi there,
I tried it many times but couldn't get it to work.
I just want to export the summary of an OLS regression (lm() function) into a
CSV file, including the call formula, coefficients, R-squared,
adjusted R-squared and F-statistic.
I know I can export:
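One way to do this is to assemble the pieces of summary() into a data frame by hand and write that out with write.csv(); a sketch, using mtcars as a stand-in for the original data:

```r
# Collect formula, coefficients, R-squared, adjusted R-squared and F-statistic
fit <- lm(mpg ~ wt + hp, data = mtcars)  # stand-in for the original regression
s   <- summary(fit)

out <- data.frame(
  term  = c("formula", rownames(coef(s)), "r.squared", "adj.r.squared", "f.statistic"),
  value = c(deparse(formula(fit)),       # the call formula as text
            coef(s)[, "Estimate"],       # one row per coefficient
            s$r.squared, s$adj.r.squared,
            s$fstatistic[["value"]]),
  stringsAsFactors = FALSE
)
write.csv(out, "lm_summary.csv", row.names = FALSE)
```

The value column ends up as character because it mixes the formula text with numbers; if that matters, the statistics can be written to a separate file instead.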
Hi,
I have a dataset called data. There is one column called ac_name. Some
names in this column appear very often, some less.
What I want is to filter this dataset with the following condition:
Exclude the names, which appear more than five times. (example: House A
appears 8 times == exclude it;
Thanks for the first reply.
Unfortunately, my list of different ac_names is pretty long (about 1,000
different names). Is there a way to sort them, count the occurrences of each
name, and exclude the rows whose name exceeds a particular limit?
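A base-R sketch of exactly this, with toy data invented for illustration: table() counts each name, sort() orders the counts, and %in% keeps only rows whose name stays under the limit:

```r
# Toy data: "House A" appears 8 times and should be excluded
data <- data.frame(
  ac_name = c(rep("House A", 8), rep("House B", 3), "House C"),
  stringsAsFactors = FALSE
)

counts   <- sort(table(data$ac_name), decreasing = TRUE) # count and sort names
keep     <- names(counts)[counts <= 5]                   # names appearing at most 5 times
filtered <- data[data$ac_name %in% keep, , drop = FALSE] # drop the frequent names
```

This scales fine to 1,000 distinct names, since table() does the counting in one pass.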
Hi there,
I tried this code from this homepage:
http://myowelt.blogspot.de/2008/04/beautiful-correlation-tables-in-r.html
corstarsl <- function(x){
  require(Hmisc)
  x <- as.matrix(x)
  R <- rcorr(x)$r
  p <- rcorr(x)$P
  ## define
Hi there,
I'm sorry for the bad subject line. Couldn't describe it better...
In my dataset called dataSet I want to create a new column called
deal_category which depends on another column called trans_value.
In column trans_value I have values in USDm. Now what I want to do is to
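Assuming the goal is to bin trans_value into labelled categories, cut() is the usual tool; the breakpoints, labels and sample values below are invented, since the original thresholds were cut off:

```r
# Bin a numeric column into categories with cut(); values in USDm are made up
dataSet <- data.frame(trans_value = c(5, 50, 500, 5000))

dataSet$deal_category <- cut(dataSet$trans_value,
                             breaks = c(0, 100, 1000, Inf),   # (0,100], (100,1000], (1000,Inf]
                             labels = c("small", "mid", "large"))
dataSet
```

cut() returns a factor, so the new column can be used directly for grouping or tabulation.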
Hi,
the first command brings the numbers into R directly:
testdata <- c(0.2006160108532920, 0.1321167173880490, 0.0563941428921262,
0.0264198664609803, 0.0200581303857603, -0.2971754213679500,
-0.2353086361784190, 0.0667195538296534, 0.1755852636926560)
mean(testdata)
[1] 0.0161584
I created a Microsoft Excel spreadsheet. As you said, I only get the
numbers as displayed. I just solved the problem by showing 25 decimal places
in Excel and then exporting the data to a CSV file.
Is there a better way to solve this?
Regards,
Felix
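For what it's worth, R itself stores full double precision even though the console prints only about seven significant digits by default; and reading the .xlsx directly (e.g. with the readxl package, if available) would avoid Excel's display rounding altogether. A quick check with the numbers from above:

```r
testdata <- c(0.2006160108532920, 0.1321167173880490, 0.0563941428921262,
              0.0264198664609803, 0.0200581303857603, -0.2971754213679500,
              -0.2353086361784190, 0.0667195538296534, 0.1755852636926560)

mean(testdata)                      # default printing shows ~7 significant digits
print(mean(testdata), digits = 16)  # the stored value carries full double precision
```

So the rounding seen earlier was only in how the result was displayed, not in what R computed.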
Hi all,
I recently tried to calculate the mean and the median for just one column.
In this column I have numbers with some empty cells due to missing data.
So how can I calculate the mean using only the filled cells?
I tried:
mean(dataSet2$ac_60d_4d_after_ann[!is.na(dataSet2$ac_60d_4d_after_ann)],
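For reference, the idiomatic way is the na.rm = TRUE argument, which mean() and median() both accept; a sketch with toy data (the column name in the question would replace x):

```r
# na.rm = TRUE tells mean()/median() to skip missing values
x <- c(1.5, NA, 2.5, NA, 4.0)

mean(x, na.rm = TRUE)    # mean over the three non-missing values
median(x, na.rm = TRUE)  # same for the median
```

This also avoids the subtle bug of subsetting one data frame with an is.na() mask computed from a different one.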
I imported the whole dataset with read.csv2() and it works fine. (The 2
variant is correct for German files with comma decimal separators ;) )
I already checked the numbers, and I also tried to calculate the mean of a
range of numbers where no NA is present. (as mentioned in my last post
above).
I'm sorry!
Now I tried it again with just 10 numbers (just random numbers), and Excel
gives a different output than R.
Here are the numbers I used:
0,2006160108532920
0,1321167173880490
0,0563941428921262
0,0264198664609803
0,0200581303857603
-0,2971754213679500
-0,2353086361784190
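Those values use the German comma as decimal separator, which R will not parse as numeric by default; besides read.csv2() (or dec = "," in read.csv), the strings can be converted directly. A sketch with three of the values above:

```r
# Convert German-formatted number strings (comma decimal separator) to numeric
raw <- c("0,2006160108532920", "0,1321167173880490", "-0,2971754213679500")
num <- as.numeric(gsub(",", ".", raw, fixed = TRUE))  # swap comma for period, then parse
num
```

If the comma-formatted values are silently parsed as text, any mean computed from them will disagree with Excel, which is one likely source of the discrepancy.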