[R] How do I use R to build a dictionary of proper nouns?

2017-05-04 Thread θ



[Attachments: 1.patents.PNG, 2.corpus_patent text.PNG, 3ontology_proper nouns keywords.PNG]
Hi:

I want to do patent text mining in R.
I need to use the proper nouns from a domain ontology to build a
dictionary, and then use that dictionary to analyze my corpus of patent
files. I want to count the proper nouns and get the frequency with which
each appears in each file.

I have now preprocessed the corpus and extracted the proper nouns from
the domain ontology, but I have no idea how to build a proper-noun
dictionary and use it to analyze my corpus.

The attachments are my texts, the corpus preprocessing, and the proper
nouns.

Thanks.
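A minimal sketch of one way to do this in base R (the documents and dictionary terms below are invented stand-ins for the real patent corpus and ontology proper nouns):

```r
# Hypothetical dictionary of ontology proper nouns
dictionary <- c("fuel cell", "electrode", "catalyst")

# Hypothetical preprocessed documents (one string per patent file)
docs <- c(doc1 = "the fuel cell electrode uses a catalyst",
          doc2 = "electrode design for the fuel cell fuel cell")

# Count non-overlapping occurrences of one term in one text
count_term <- function(term, txt) {
  hits <- gregexpr(term, txt, fixed = TRUE)[[1]]
  sum(hits > 0)  # gregexpr() returns -1 when there is no match
}

# Term-by-document frequency matrix
freq <- sapply(docs, function(txt) sapply(dictionary, count_term, txt = txt))
freq
```

For a larger corpus, the tm package's DocumentTermMatrix supports a `dictionary` entry in its `control` list that restricts counting to a fixed vocabulary, though multi-word proper nouns would then need a custom tokenizer.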


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] How to run a linear mixed model for an experiment design with species nested in a random block experiment

2017-05-04 Thread Faming Wang
Dear all,

   I have conducted an N and P field-addition experiment in a tropical
forest using a random block design: briefly, we had four plots in each
block (Control, +N, +P, and +NP) and five blocks located randomly in the
forest, giving 20 plots in total, with four treatments and five replicate
blocks. In each plot we selected five plant species (some plots contain
only 3 or 4 species) and measured leaf variables such as N concentration.
We want to know the effect of N and P addition, as well as the
species-level (inter-species) variability of leaf N, so we used linear
mixed-effects models for our statistical analysis; the sample code is
listed below. Can anybody take a look at this script and help me figure
out how to analyze the species effect using LME?

 Thanks!


R script attached

leaf N concentration

library(nlme)

### FIRST, WE TEST WHETHER NESTING SPECIES AS A RANDOM EFFECT IMPROVES
### THE FULL MODEL

lmeleafN1 <- lme(fixed = N ~ Naddition*Paddition*Species, random = ~1|Block,
                 data = NPdata, method = "ML", na.action = na.exclude)
lmeleafN1a <- lme(fixed = N ~ Naddition*Paddition*Species,
                  random = ~1|Block/Species, data = NPdata, method = "ML",
                  na.action = na.exclude)
anova(lmeleafN1, lmeleafN1a)

### NESTING SPECIES WITHIN BLOCK DOESN'T IMPROVE THE MODEL, SO WE CAN
### USE THE MODEL WITH THE SIMPLER RANDOM EFFECT

lmeleafN2 <- lme(fixed = N ~ Naddition*Paddition + Species, random = ~1|Block,
                 data = NPdata, method = "ML", na.action = na.exclude)
lmeleafN3 <- lme(fixed = N ~ Naddition + Paddition*Species, random = ~1|Block,
                 data = NPdata, method = "ML", na.action = na.exclude)
lmeleafN4 <- lme(fixed = N ~ Naddition + Paddition + Species, random = ~1|Block,
                 data = NPdata, method = "ML", na.action = na.exclude)

AIC(lmeleafN1, lmeleafN2, lmeleafN3, lmeleafN4)

# THE FULL MODEL CLEARLY HAS THE LOWEST AIC
# CHECK AGAINST THE NULL MODEL
lmeleafN0 <- lme(fixed = N ~ 1, random = ~1|Block, data = NPdata,
                 method = "ML", na.action = na.exclude)

anova(lmeleafN1, lmeleafN0)

## AND CHECK THE MODEL FIT WITH DIAGNOSTIC PLOTS

par(mfrow = c(2, 2))
plot(resid(lmeleafN1) ~ fitted(lmeleafN1))
abline(h = 0, lty = 2)
hist(resid(lmeleafN1))
qqnorm(resid(lmeleafN1))
qqline(resid(lmeleafN1))
anova(lmeleafN1)
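One direct way to test the species contribution (not shown in the script above) is a likelihood-ratio comparison of ML fits with and without the Species terms. A runnable sketch with simulated stand-in data (hypothetical values; the real NPdata would replace them):

```r
library(nlme)

# Hypothetical stand-in for NPdata so the comparison runs as-is:
# 5 blocks x 2 N levels x 2 P levels x 5 species
set.seed(1)
NPdata <- expand.grid(Block = factor(1:5), Naddition = c(0, 1),
                      Paddition = c(0, 1), Species = factor(letters[1:5]))
NPdata$N <- with(NPdata, rnorm(nrow(NPdata), 20 + 2 * Naddition + Paddition, 1))

# Full model vs. a model with all Species terms dropped: the likelihood
# ratio test gauges the overall species effect.
full  <- lme(N ~ Naddition * Paddition * Species, random = ~ 1 | Block,
             data = NPdata, method = "ML")
noSpp <- lme(N ~ Naddition * Paddition, random = ~ 1 | Block,
             data = NPdata, method = "ML")
anova(full, noSpp)

# Term-by-term conditional F-tests on the final model, refit with REML:
anova(update(full, method = "REML"))
```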



-- 




Sincerely

Faming Wang

Associate Scientist
Deputy Director of Xiaoliang Research Station,
South China Botanical Garden, Chinese Academy of Sciences
Xingke Road 723, Guangzhou, China. 519650
Email: wan...@scbg.ac.cn
Tel/Fax:0086-20-37252905


[R] Non-Linear Regression Help

2017-05-04 Thread Zachary Shadomy
I am having some errors come up in the first section of my code. I have no
issue plotting the points. Is there an easier method for creating a
non-linear regression using C*(x+a)^n? The .txt file is named
stage_discharge, with the two variables being stage and discharge.
The data is a relatively small file, listed below:

stage discharge
6.53 2592.05
6.32 559.5782
5.96 484.2151
4.99 494.7527
3.66 456.0778
0.51 291.13





> power.nls <- nls(discharge ~ C*(stage + a)^n,
    data = stage_discharge, start = list(C = 4, a = 0, n = 1))
> C <- coef(power.nls)["C"]
> a <- coef(power.nls)["a"]
> n <- coef(power.nls)["n"]
> plot(stage_discharge$stage, stage_discharge$discharge, pch = 17, cex = 1.25,
    ylab = 'Discharge (cfs)', xlab = 'Stage (ft)', font.lab = 2,
    main = 'Boone Creek\nStage-Discharge Curve')
> curve(C*(x + a)^n, add = TRUE, col = "red")
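For reference, a self-contained sketch of the same fit on synthetic data generated from the power model (values invented here; the real gauging records would replace them). Two points worth noting: with `data =` supplied, the formula should use bare column names (the original code also had a typo, `stage_dischargee`), and `nls` is sensitive to starting values:

```r
set.seed(42)
# Synthetic stage-discharge records following C*(x + a)^n plus noise
stage_discharge <- data.frame(stage = seq(0.5, 7, length.out = 30))
stage_discharge$discharge <- 100 * (stage_discharge$stage + 0.5)^1.6 *
  exp(rnorm(30, 0, 0.02))

# Bare column names inside the formula; data= resolves them
power.nls <- nls(discharge ~ C * (stage + a)^n, data = stage_discharge,
                 start = list(C = 50, a = 1, n = 1))
cf <- coef(power.nls)

plot(discharge ~ stage, data = stage_discharge, pch = 17,
     xlab = "Stage (ft)", ylab = "Discharge (cfs)")
curve(cf["C"] * (x + cf["a"])^cf["n"], add = TRUE, col = "red")
```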



[R] I cannot run R.EXE or RSCRIPT.EXE

2017-05-04 Thread Dominik Szewczyk
I cannot run R.EXE or RSCRIPT.EXE. It produces this error:


'C:\Program' is not recognized as an internal or external command,
operable program or batch file.


I have attempted to put quotes around the full path, including the executable,
and also to run the executable from within the path itself, with the same
result. The only way I can do this is to move the installation from C:\Program
Files\R\R-3.4.0 to C:\R\..


Is this a limitation of the program itself?


-Dom



[R] lmomRFA: update "regfit" object

2017-05-04 Thread Douglas Hultstrand
Hello,

I am creating error bounds based on simulated data across multiple data
durations. I was wondering if there is a way to update an object of class
"regfit" from the lmomRFA package? The reason is consistency across
durations.

Example below:

library(lmom); library(lmomRFA)

data <- c(0.42, 0.13, 0.59, 0.12, 0.78, 0.17, 0.3, 0.41, 0.28, 0.79)  # random data

reg <- regsamlmu(data)          # calc l-moments

org_gev <- regfit(reg, "gev")   # original gev fit

# UPDATE "org_gev" values (below) and save as "update_gev"
# xi = 0.65
# alpha = 0.51
# k = -0.023
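A hedged sketch of one way this might be done (component names are assumptions, not checked against lmomRFA internals; `str(org_gev)` will show what the object actually contains): if the fitted parameters sit in a `$para` component and the stored quantile function `$qfunc` closes over them, one could copy the object and overwrite both.

```r
# Continuing from the code above; $para and $qfunc are assumed component
# names -- verify them with str(org_gev) for your lmomRFA version.
new_para <- c(xi = 0.65, alpha = 0.51, k = -0.023)

update_gev <- org_gev
update_gev$para <- new_para
# If $qfunc is a closure over the old parameters, rebuild it too so
# quantile estimates reflect the imposed values:
update_gev$qfunc <- function(f) lmom::quagev(f, new_para)
```

Overwriting only `$para` may not propagate to quantile calculations, hence the second assignment; treat both lines as a sketch to verify against `str(org_gev)`.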


Thank you for the help,
Doug




-- 
-
Douglas M. Hultstrand, MS
Senior Hydrometeorologist
Applied Weather Associates
Monument, Colorado
mobile: 720-771-5840
www.appliedweatherassociates.com
dhultstr...@appliedweatherassociates.com
-




Re: [R] 3D equal-spaced closed mesh

2017-05-04 Thread Bert Gunter
The r-sig-geo list and the corresponding CRAN task view *might* be a better
place to post and/or look.

Bert


On May 4, 2017 12:51 PM, "Eric Krantz"  wrote:

> I have sonar image which is basically a point cloud (x, y, z) of an
> irregular underground cone-shaped structure. I am trying to create either
> (1) a closed 3D mesh with "equally spaced" mesh grid, or (2) a smooth
> closed surface, for instance by calculating connected localized regression
> surfaces. My Z (vertical) component is equally spaced, but X and Y consist
> of 128 points around the perimeter, regardless of diameter, which makes the
> grid size small where the diameter is small, and large where it is large.
> I'm looking for relatively equal grids, so I could use regression or
> interpolation to add points where needed, or conversely remove them, hence
> the idea of a localized surface regression. I've tried plot3D package, RGL,
> and a couple other packages including alphashape3d. I have not tried plotly
> or rms. So much time invested I thought I would ask for advice, if anyone
> knows a package or technique or can clue which direction to go. Thanks,



[R] 3D equal-spaced closed mesh

2017-05-04 Thread Eric Krantz
I have a sonar image which is basically a point cloud (x, y, z) of an irregular
underground cone-shaped structure. I am trying to create either (1) a closed 3D
mesh with an "equally spaced" mesh grid, or (2) a smooth closed surface, for
instance by calculating connected localized regression surfaces. My Z
(vertical) component is equally spaced, but X and Y consist of 128 points
around the perimeter regardless of diameter, which makes the grid size small
where the diameter is small and large where it is large. I'm looking for
relatively equal grids, so I could use regression or interpolation to add
points where needed, or conversely remove them, hence the idea of a localized
surface regression. I've tried the plot3D package, rgl, and a couple of other
packages, including alphashape3d. I have not tried plotly or rms. With so much
time invested, I thought I would ask for advice, in case anyone knows a package
or technique, or can give a clue which direction to go. Thanks.
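One re-gridding idea for the perimeter problem, sketched on a toy slice (an ellipse stands in for one sonar ring; the function name and counts are invented): re-parameterize each horizontal ring by angle about its centroid and resample at equal angular steps with stats::approx(). Equal angle is not exactly equal arc length, but it evens out the point spacing considerably:

```r
# Toy ring: 128 perimeter points of an ellipse
theta <- seq(0, 2 * pi, length.out = 129)[-129]
slice <- data.frame(x = 3 * cos(theta), y = sin(theta))

resample_ring <- function(x, y, n_out = 64) {
  cx <- mean(x); cy <- mean(y)
  ang <- atan2(y - cy, x - cx)         # angle of each point about centroid
  o <- order(ang)                      # sort points by angle
  # Repeat the first point shifted by 2*pi so interpolation wraps around
  a  <- c(ang[o], ang[o][1] + 2 * pi)
  xs <- c(x[o], x[o][1]); ys <- c(y[o], y[o][1])
  new_a <- seq(a[1], a[1] + 2 * pi, length.out = n_out + 1)[-(n_out + 1)]
  data.frame(x = approx(a, xs, xout = new_a)$y,
             y = approx(a, ys, xout = new_a)$y)
}

ring64 <- resample_ring(slice$x, slice$y)  # 64 roughly even points
```

Applying this slice by slice gives rings whose point counts can be chosen per diameter (more points for wide rings, fewer for narrow ones), which should make the subsequent meshing grid far more uniform.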



Re: [R] How to extract values after using metabin from the package meta?

2017-05-04 Thread Michael Dewey
Try using str(the_name_of_your_object) and see if you get any clues as 
to where it is putting them. Sorry I cannot help further but I do not 
use meta myself.


On 04/05/2017 15:29, jan Pierre wrote:

Hello,

I’m trying to do a meta-analysis with R. I tried to use the function
metabin from the package meta :


data <- data.frame(matrix(rnorm(13 * 8, 25), nrow = 13, ncol = 8))  # 13 centres
centres <- c("SVP","NANTES","STRASBOURG","GRENOBLE","ANGERS","TOULON","MARSEILLE","COLMAR","BORDEAUX","RENNES","VALENCE","CAEN","NANCY")
rownames(data) <- centres
colnames(data) <-
c("case_exposed","witness_exposed","case_nonexposed","witness_nonexposed","exposed","nonexposed","case","witness")
metabin(case_exposed, case, witness_exposed, witness,
studlab = centres,
   data = data, sm = "OR")

where data is a data frame with the number of case_exposed, case,
witness_exposed, and witness for each centre.

After using metabin I obtain the pooled results (output not shown here).

How can I extract the values of OR and 95%-CI in the fixed effect model and
the random effects model? I want to put these data in another array.

I tried to use summary, but it doesn’t change anything.

Thanks for your help.




--
Michael
http://www.dewey.myzen.co.uk/home.html


Re: [R] Sparse (dgCMatrix) Matrix row-wise normalization

2017-05-04 Thread Murat Tasan
Thanks, Stefan, I'll take a look!

Also, I figured out another solution (~15 minutes after posting :-/):

```
row_normalized_P <- Matrix::Diagonal(x = 1 / sqrt(Matrix::rowSums(P^2))) %*% P
```

Cheers,

-m

On Thu, May 4, 2017 at 12:23 PM, Stefan Evert 
wrote:

>
> > On 4 May 2017, at 20:13, Murat Tasan  wrote:
> >
> > The only semi-efficient method I've found around this is to `apply`
> across
> > rows (more accurately through blocks of rows coerced into dense
> > sub-matrices of P), but I'd like to try to remove the looping logic from
> my
> > codebase if I can, and I'm wondering if perhaps there's a built-in in the
> > Matrix package (that I'm just not aware of) that helps with this
> particular
> > type of computation.
>
> The "wordspace" package has an efficient C-level implementation for this
> purpose:
>
> P.norm <- normalize.rows(P)
>
> which is a short-hand for
>
> P.norm <- scaleMargins(P, rows=1 / rowNorms(P, method="euclidean"))
>
> Best,
> Stefan



Re: [R] Sparse (dgCMatrix) Matrix row-wise normalization

2017-05-04 Thread Stefan Evert

> On 4 May 2017, at 20:13, Murat Tasan  wrote:
> 
> The only semi-efficient method I've found around this is to `apply` across
> rows (more accurately through blocks of rows coerced into dense
> sub-matrices of P), but I'd like to try to remove the looping logic from my
> codebase if I can, and I'm wondering if perhaps there's a built-in in the
> Matrix package (that I'm just not aware of) that helps with this particular
> type of computation.

The "wordspace" package has an efficient C-level implementation for this 
purpose:

P.norm <- normalize.rows(P)

which is a short-hand for

P.norm <- scaleMargins(P, rows=1 / rowNorms(P, method="euclidean"))

Best,
Stefan


[R] Sparse (dgCMatrix) Matrix row-wise normalization

2017-05-04 Thread Murat Tasan
Hi all ---

I have a large sparse matrix, call it P:
```
 > str(P)
 Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
   ..@ i   : int [1:7868093] 4221 6098 8780 10313 11102 14243 20570
22145 24468 24977 ...
   ..@ p   : int [1:7357] 0 0 269 388 692 2434 3662 4179 4205 4256 ...
   ..@ Dim : int [1:2] 1303967 7356
   ..@ Dimnames:List of 2
   .. ..$ : NULL
   .. ..$ : NULL
   ..@ x   : num [1:7868093] 1 1 1 1 1 1 1 1 1 1 ...
   ..@ factors : list()
```

I'd like to row-normalize (say, with the L2 norm)... the straightforward
approach would be something like:
```
> row_normalized_P <- P / sqrt(rowSums(P^2))
```

But this causes a memory allocation error, since it appears the `rowSums`
result is being recycled (appropriately) into a _dense_ matrix with
dimensions equal to `dim(P)`.
Given that P is known to be sparse (or at the very least is stored in
sparse format), does anyone know of a non-iterative approach to achieve the
desired `row_normalized_P` shown above?
(I.e. the resulting matrix will be just as sparse as P itself... and I'd
like to avoid ever allocating a dense matrix (apart from the rowSums
vector) during the normalization steps.)

The only semi-efficient method I've found around this is to `apply` across
rows (more accurately through blocks of rows coerced into dense
sub-matrices of P), but I'd like to try to remove the looping logic from my
codebase if I can, and I'm wondering if perhaps there's a built-in in the
Matrix package (that I'm just not aware of) that helps with this particular
type of computation.
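For what it's worth, here is one fully sparse approach on a toy matrix (invented stand-in for P): left-multiply by a sparse diagonal of reciprocal row norms, so no dense intermediate is ever formed. It assumes no all-zero rows (those would produce 1/0 = Inf entries in the diagonal):

```r
library(Matrix)  # ships with R

# Toy sparse matrix standing in for P
P <- sparseMatrix(i = c(1, 1, 2, 3), j = c(1, 3, 2, 3),
                  x = c(3, 4, 5, 2), dims = c(3, 3))

# Sparse diagonal of 1 / L2 row norms, then a sparse-sparse product
inv_norm <- 1 / sqrt(rowSums(P^2))
row_normalized_P <- Diagonal(x = inv_norm) %*% P

rowSums(row_normalized_P^2)  # each row now has unit L2 norm
```

The only dense object allocated is the length-nrow(P) vector of norms; both the diagonal and the product stay in sparse storage.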

Cheers and thanks for any help!

-murat



Re: [R] Non standard Beta Distribution

2017-05-04 Thread David Winsemius

> On May 3, 2017, at 11:43 PM, Collins Ochieng Onyanga  
> wrote:
> 
> Hi, 
> 
> I would like to fit a non standard beta distribution with the two scale 
> parameters and lower and upper boundaries to data like  the one shown without 
> normalizing it.
> [1] 37.50 46.79 48.30 46.04 43.40 39.25 38.49 49.51 40.38 36.98 40.00
> [12] 38.49 37.74 47.92 44.53 44.91 44.91 40.00 41.51 47.92 36.98 43.40
> [23] 42.26 41.89 38.87 43.02 39.25 40.38 42.64 36.98 44.15 44.91 43.40
> [34] 49.81 38.87 40.00 52.45 53.13 47.92 52.45 44.91 29.54 27.13 35.60
> 
> I have tried using the following code;
> 
> fitdist((Z1-r)/(t-r) , "beta", method = "mme",lower=c(0,0))
> 
> but with this I am normalizing the data to be in the interval (0,1) .

So what's wrong with using that approach? If you try to re-invent the wheel,
you will lose efficiency, since dbeta, qbeta and pbeta are all coded in C. Is
the back-transformation difficult?

The help page for fitdistrplus::fitdist has a worked example of defining a
three-member dpq-distribution family. Admittedly the mathematical expression
for the more general distribution presented in the NIST document is mildly
complex, but this now appears to be a request to satisfy a homework
assignment. I never took a math-stats course, but this task doesn't appear
particularly difficult, only tedious. And the Posting Guide says R-help is not
for homework. That rule would probably be relaxed if you showed greater effort
at creating a three-member set of gbeta distribution functions, but I haven't
seen that level of effort yet.

-- 
David.
> 
> 
> Thanks.
> 
> On 4 May 2017 at 03:27, David Winsemius  wrote:
> 
> > On May 3, 2017, at 3:55 PM, Collins Ochieng Onyanga  
> > wrote:
> >
> > On 4 May 2017 at 01:00, Collins Ochieng Onyanga  wrote:
> >
> >> Hi,
> >>
> >> I am trying to fit fit a non standard Beta distribution to a data set  but
> >> so far I have not succeeded. Can anyone help me with a code in R that can
> >> do this.
> >>
> >> Thanks.
> >
> To Collins Ochieng Onyanga;
> 
> 

David Winsemius
Alameda, CA, USA



Re: [R] adding counter to df by group

2017-05-04 Thread Davide Piffer
Thanks David! I tried this but it didn't work. I got a bunch of warning
messages:
1: In `[<-.factor`(`*tmp*`, i, value = 1:52) :

On 4 May 2017 at 02:23, David Winsemius  wrote:
>
>> On May 3, 2017, at 11:24 AM, Davide Piffer  wrote:
>>
>
> You should look at this result more closely. Its length is not the same 
> length as the number of rows of the target of the attempted assignment.
>
>> unlist(miniblock_cong)
>
> You might try:
>
> red_congruent$miniblock <- ave( red_congruent$subject_nr, 
> red_congruent$subject_nr, FUN=seq_along)
>
> `ave` is very useful for delivering vectors with length equal to nrow of a 
> dataframe. Do remember to name the FUN parameter (although I still usually 
> forget).
>
> --
>
> David Winsemius
> Alameda, CA, USA
>
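A minimal illustration of the `ave` suggestion above, on hypothetical toy data:

```r
# Toy data: three trials for subject 7, two for subject 9
red_congruent <- data.frame(subject_nr = c(7, 7, 7, 9, 9))

# seq_along() restarts the counter within each subject_nr group
red_congruent$miniblock <- ave(red_congruent$subject_nr,
                               red_congruent$subject_nr, FUN = seq_along)

red_congruent$miniblock  # 1 2 3 1 2
```

The `[<-.factor` warnings above suggest the target column may have been a factor; `ave` on the numeric grouping column sidesteps that assignment problem entirely.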



[R] How to extract values after using metabin from the package meta?

2017-05-04 Thread jan Pierre
Hello,

I’m trying to do a meta-analysis with R. I tried to use the function
metabin from the package meta :


data <- data.frame(matrix(rnorm(13 * 8, 25), nrow = 13, ncol = 8))  # 13 centres
centres <- c("SVP","NANTES","STRASBOURG","GRENOBLE","ANGERS","TOULON","MARSEILLE","COLMAR","BORDEAUX","RENNES","VALENCE","CAEN","NANCY")
rownames(data) <- centres
colnames(data) <-
c("case_exposed","witness_exposed","case_nonexposed","witness_nonexposed","exposed","nonexposed","case","witness")
metabin(case_exposed, case, witness_exposed, witness,
studlab = centres,
   data = data, sm = "OR")

where data is a data frame with the number of case_exposed, case,
witness_exposed, and witness for each centre.

After using metabin I obtain the pooled results (output not shown here).

How can I extract the values of OR and 95%-CI in the fixed effect model and
the random effects model? I want to put these data in another array.

I tried to use summary, but it doesn’t change anything.

Thanks for your help.
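A hedged sketch of the extraction (the component names are an assumption based on the meta package of that era; `str(m)` will confirm them for your installed version, and newer releases have renamed some components): with `sm = "OR"` the pooled estimates are stored on the log-odds scale, so an `exp()` recovers the ORs. Toy counts are invented here so the sketch runs on its own:

```r
library(meta)

# Hypothetical 2x2 counts for three centres
dd <- data.frame(ev.e = c(10, 8, 15), n.e = c(100, 90, 120),
                 ev.c = c(5, 12, 9),  n.c = c(100, 95, 110))
m <- metabin(ev.e, n.e, ev.c, n.c, data = dd,
             studlab = c("A", "B", "C"), sm = "OR")

# Back-transform the stored log-ORs into an array-friendly form
fixed_or  <- exp(c(OR = m$TE.fixed,  lower = m$lower.fixed,  upper = m$upper.fixed))
random_or <- exp(c(OR = m$TE.random, lower = m$lower.random, upper = m$upper.random))
rbind(fixed = fixed_or, random = random_or)
```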


Re: [R] Non-standard Beta distribution

2017-05-04 Thread David L Carlson
You could try installing package ExtDist and using distribution Beta_ab in that 
package.

-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352

-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Collins Ochieng 
Onyanga
Sent: Thursday, May 4, 2017 2:12 AM
To: r-help@r-project.org
Subject: [R] Non-standard Beta distribution

Hi,

I would like to fit a non standard beta distribution with the two scale
parameters and lower and upper boundaries to data like  the one shown
without normalizing it.

 [1] 37.50 46.79 48.30 46.04 43.40 39.25 38.49 49.51 40.38 36.98 40.00
[12] 38.49 37.74 47.92 44.53 44.91 44.91 40.00 41.51 47.92 36.98 43.40
[23] 42.26 41.89 38.87 43.02 39.25 40.38 42.64 36.98 44.15 44.91 43.40
[34] 49.81 38.87 40.00 52.45 53.13 47.92 52.45 44.91 29.54 27.13 35.60


I have tried using the following code;

fitdist((Z1-r)/(t-r) , "beta", method = "mme",lower=c(0,0))

but with this I am normalizing the data to be in the interval (0,1) .


Thanks.
--

-- 

*AIMS-Tanzania*

*DISCLAIMER*: The contents of this email and any attachm...{{dropped:9}}



Re: [R] Perfect prediction of AR1 series using package dlm, posted on stack exchange

2017-05-04 Thread peter dalgaard
I am not an expert on dlm, but it seems to me that you are getting perfect 
_filtering_ not _prediction_. If you cast an AR model as a state space model, 
there is no measurement error on the state values, hence the conditional 
distribution of theta_t given y_t is just the point value of y_t...

-pd

> On 4 May 2017, at 12:05 , Ashim Kapoor  wrote:
> 
> Dear all,
> 
> I have made a dlm model,where I am getting a perfect prediction.
> 
> Here is a link to the output:
> 
> http://pasteboard.co/9IxVQwjm6.png
> 
> The query and code is on:
> 
> https://stats.stackexchange.com/questions/276449/perfect-prediction-in-case-of-a-univariate-ar1-model-using-dlm
> 
> Can someone here be kind enough to answer my query?
> 
> Best Regards,
> Ashim
> 

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: pd@cbs.dk  Priv: pda...@gmail.com



[R] Perfect prediction of AR1 series using package dlm, posted on stack exchange

2017-05-04 Thread Ashim Kapoor
Dear all,

I have made a dlm model,where I am getting a perfect prediction.

Here is a link to the output:

http://pasteboard.co/9IxVQwjm6.png

The query and code is on:

https://stats.stackexchange.com/questions/276449/perfect-prediction-in-case-of-a-univariate-ar1-model-using-dlm

Can someone here be kind enough to answer my query?

Best Regards,
Ashim



[R] Non-standard Beta distribution

2017-05-04 Thread Collins Ochieng Onyanga
Hi,

I would like to fit a non standard beta distribution with the two scale
parameters and lower and upper boundaries to data like  the one shown
without normalizing it.

 [1] 37.50 46.79 48.30 46.04 43.40 39.25 38.49 49.51 40.38 36.98 40.00
[12] 38.49 37.74 47.92 44.53 44.91 44.91 40.00 41.51 47.92 36.98 43.40
[23] 42.26 41.89 38.87 43.02 39.25 40.38 42.64 36.98 44.15 44.91 43.40
[34] 49.81 38.87 40.00 52.45 53.13 47.92 52.45 44.91 29.54 27.13 35.60


I have tried using the following code;

fitdist((Z1-r)/(t-r) , "beta", method = "mme",lower=c(0,0))

but with this I am normalizing the data to be in the interval (0,1) .


Thanks.
--
