Re: [R] Rmpi; sample code not running, the slaves won't execute commands

2011-02-09 Thread Chris Carleton
Hi Patrick,

Thanks for taking the time to respond. I've since worked it all out
and my Rmpi functions are working well now. One quirk remains with the
slaves and the print() function, though: the slaves execute the commands,
but the output is never printed to the terminal - the same goes for error
reporting from functions called by the slaves. I'm not sure where the
blockage is, but I've
just worked around it by passing anything I want printed from the
slaves back to the master and then having the master print them. For
all I know, it's normal that the slaves operate in this way, but it's
only been a minor inconvenience for me in any event.
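The workaround looks roughly like the following sketch (it assumes an installed Rmpi package and a working MPI runtime; the slave count is illustrative). Rather than calling print() on the slaves, where the output is swallowed, each slave returns its message and mpi.remote.exec() collects the return values on the master, which prints them:

```r
# Sketch of the workaround: slaves return values instead of printing them.
library(Rmpi)

mpi.spawn.Rslaves(nslaves = 2)

# The expression runs on each slave; its value is sent back to the master.
msgs <- mpi.remote.exec(paste("slave", mpi.comm.rank(), "says hello"))

# The master, whose stdout is connected to the terminal, does the printing.
print(msgs)

mpi.close.Rslaves()
mpi.quit()
```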

Chris

On 9 February 2011 03:36, Patrick Connolly  wrote:
> On Tue, 01-Feb-2011 at 11:01AM -0500, Chris Carleton wrote:
>
> [...]
>
> |>
> |> My output is as follows;
> |>
>
> |> > source("./test_Rmpi.R")
>
> How was test_Rmpi.R created?  Was it using sink()?
>
> |>      3 slaves are spawned successfully. 0 failed.
> |> master (rank 0, comm 1) of size 4 is running on: minanha
> |> slave1 (rank 1, comm 1) of size 4 is running on: minanha
> |> slave2 (rank 2, comm 1) of size 4 is running on: minanha
> |> slave3 (rank 3, comm 1) of size 4 is running on: minanha
> |>
>
> [...]
>
> |> noticing that problem, I tried to execute the 'hello world' program
> |> above and, as you can see, the slaves are spawned, but they won't
> |> print the text in the mpi.remote.exec() function. Any ideas?
>
> Are you saying that there was no response at all from the other three
> lines?  They work fine for me.  I suspect that you have something not
> quite finished with your MPI setup.  I doubt the problem is with Rmpi
> itself, but then of course, there's not much to work on.
>
> HTH
>
>
>
> --
> ~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.
>   ___    Patrick Connolly
>  {~._.~}                   Great minds discuss ideas
>  _( Y )_                 Average minds discuss events
> (:_~*~_:)                  Small minds discuss people
>  (_)-(_)                              . Eleanor Roosevelt
>
> ~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Rmpi; sample code not running, the slaves won't execute commands

2011-02-01 Thread Chris Carleton
Hi All,

I'm trying to parallelize some code using Rmpi and I've started with a
sample 'hello world' program that's available at
http://math.acadiau.ca/ACMMaC/Rmpi/sample.html. The code is as
follows;

# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
    library("Rmpi")
}
# Spawn as many slaves as possible
mpi.spawn.Rslaves(nslaves=3)
# In case R exits unexpectedly, have it automatically clean up
# resources taken up by Rmpi (slaves, memory, etc...)
.Last <- function(){
    if (is.loaded("mpi_initialize")){
        if (mpi.comm.size(1) > 0){
            print("Please use mpi.close.Rslaves() to close slaves.")
            mpi.close.Rslaves()
        }
        print("Please use mpi.quit() to quit R")
        .Call("mpi_finalize")
    }
}
# Tell all slaves to return a message identifying themselves
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))
# Tell all slaves to close down, and exit the program
mpi.close.Rslaves()
mpi.quit()

My output is as follows;

> source("./test_Rmpi.R")
3 slaves are spawned successfully. 0 failed.
master (rank 0, comm 1) of size 4 is running on: minanha
slave1 (rank 1, comm 1) of size 4 is running on: minanha
slave2 (rank 2, comm 1) of size 4 is running on: minanha
slave3 (rank 3, comm 1) of size 4 is running on: minanha

While trying to parallelize one of my other functions, I found that
the master seems to be sending the messages and executing its portion
of the program, but the slaves are not responding. However, the slaves
do seem to send the initial message to the master that they are ready
to receive a job, which prompts the master to send the job. So,
noticing that problem, I tried to execute the 'hello world' program
above and, as you can see, the slaves are spawned, but they won't
print the text in the mpi.remote.exec() function. Any ideas?

Chris



[R] npRmpi memory error

2010-12-06 Thread Chris Carleton
Hi List,

I'm using npRmpi to run some density equality tests and place the
output into a matrix. I've put together some crude functions for the
purpose, but I'm receiving the following error when npdeneqtest()
reached the bootstrap;

FATAL ERROR: Memory allocation failure (type DBL_VECTOR). Program terminated.

I'm running Ubuntu Lucid 64bit with 4 cores and 12GB of memory. My R
install is also 64bit.

platform   x86_64-pc-linux-gnu
arch   x86_64
os linux-gnu
system x86_64, linux-gnu
status
major  2
minor  12.0
year   2010
month  10
day15
svn rev53317
language   R
version.string R version 2.12.0 (2010-10-15)

Here's the home-brew function I'm using;

npdenseqtestMatrix <- function(df1_row, df2_col, var_names, ...) {
    dim_nms_row <- unique(df1_row[, 'cat'])
    dim_nms_col <- unique(df2_col[, 'cat'])
    tn_pval <- c()
    col_bw <- list()
    for (i in dim_nms_col) {
        col_bw[[which(dim_nms_col == i)]] <-
            npudensbw(df2_col[which(df2_col[, 'cat'] == i), var_names], ...)
    }
    for (i in dim_nms_row) {
        df1_row_bw <-
            npudensbw(df1_row[which(df1_row[, 'cat'] == i), var_names], ...)
        for (j in dim_nms_col) {
            tn_pval <- c(tn_pval,
                npdeneqtest(df1_row[which(df1_row[, 'cat'] == i), var_names],
                            df2_col[which(df2_col[, 'cat'] == j), var_names],
                            bw.x = df1_row_bw,
                            bw.y = col_bw[[which(dim_nms_col == j)]],
                            ...)$Tn.P)
        }
    }
    print(tn_pval)

    return(matrix(tn_pval, length(dim_nms_row), length(dim_nms_col),
                  dimnames = list(dim_nms_row, dim_nms_col), byrow = TRUE))
}

Keep in mind that 'cat' is just a shorthand for Category in this case
and it's acting as a database key (largely because the data come from
GRASS). I think that my memory allocation should be limited only by the
hardware since I'm using 64-bit installs. I have no
experience with coding anything for parallel processing. It may also
be relevant to point out that I commonly use screen and work remotely,
but I've tried with and without screen and the result is the same
error. I can't really provide the data here because my dataset is
quite large (though I don't think large enough to fill 12GB of
memory). Any thoughts would be appreciated,
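One way to reduce peak allocation is sketched below. It assumes (worth verifying with ?npdeneqtest) that npdeneqtest() accepts a boot.num argument controlling the number of bootstrap replications; the idea is to shrink the bootstrap and free each test object before the next cell of the matrix is computed:

```r
# Sketch only: fewer bootstrap replications plus explicit cleanup between
# cells, so each (row, column) test runs with minimal live data.
# Assumptions: np is installed; boot.num is the bootstrap-size argument.
library(np)

run_cell <- function(x, y, bw.x, bw.y) {
    res <- npdeneqtest(x, y, bw.x = bw.x, bw.y = bw.y,
                       boot.num = 99)  # assumed argument; default is larger
    p <- res$Tn.P
    rm(res)
    gc()  # release the test object's memory before the next cell
    p
}
```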

Chris



Re: [R] efficient conversion of matrix column rows to list elements

2010-11-17 Thread Chris Carleton
Thanks for the suggestion. The solution below is much better than my
round-about way.

combn(outcomes, 2, list )

I can't do much about the speed of combn() so I wanted to trim the fat
wherever else I could.
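For completeness, the one-liner from Chuck's reply does the matrix-to-list conversion directly:

```r
outcomes <- 1:4

# combn() applies FUN = list to each combination, so the result is one flat
# list whose elements are the pair vectors (the would-be matrix columns).
pairs <- combn(outcomes, 2, list)
length(pairs)  # 6 == choose(4, 2)
pairs[[1]]     # [1] 1 2
```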

C

On 17 November 2010 15:10, Charles C. Berry  wrote:
>
> On Wed, 17 Nov 2010, Chris Carleton wrote:
>
>> Hi List,
>>
>> I'm hoping to get opinions for enhancing the efficiency of the following
>> code designed to take a vector of probabilities (outcomes) and calculate a
>> union of the probability space. As part of the union calculation, combn()
>> must be used, which returns a matrix, and the parallelized version of
>> lapply() provided in the multicore package requires a list. I've found that
>> parallelization is very necessary for vectors of outcomes greater in length
>> than about 10 or 15 elements, which is why I need to make use of multicore
>> (and, therefore, convert the combn() matrix to a list). It would speed the
>> process up if there was a more direct way to convert the columns of combn()
>> to elements of a single list.
>
>
> I think you are mistaken.
>
> Is this what Rprof() tells you?
>
> On my system, combn() is the culprit
>
>> Rprof()
>> outcomes <- 1:25
>> nada <- replicate(200, {apply(combn(outcomes,2),2,column2list);NULL})
>> Rprof(NULL)
>> summaryRprof()
>
> $by.self
>          self.time self.pct total.time total.pct
> "combn"        0.64    61.54       0.70     67.31
> "apply"        0.20    19.23       1.04    100.00
> "FUN"          0.10     9.62       1.04    100.00
> "!="           0.04     3.85       0.04      3.85
> "<"            0.02     1.92       0.02      1.92
> "-"            0.02     1.92       0.02      1.92
> "is.null"      0.02     1.92       0.02      1.92
>
>
> And it hardly takes any time at that!
>
>
> HTH,
>
> Chuck
>
> p.s. Isn't
>
>        as.data.frame( combn( outcomes, 2 ) )
> or
>        combn(outcomes, 2, list )
>
> good enough?
>
>
> Any constructive suggestions will be greatly
>>
>> appreciated. Thanks for your consideration,
>>
>> C
>>
>> code:
>> 
>> unionIndependant <- function(outcomes) {
>>   intsctn <- c()
>>   column2list <- function(x){list(x)}
>>   pb <-
>> ProgressBar(max=length(outcomes),stepLength=1,newlineWhenDone=TRUE)
>>   for (i in 2:length(outcomes)){
>>       increase(pb)
>>       outcomes_ <- apply(combn(outcomes,i),2,column2list)
>>       for (j in 1:length(outcomes_)){outcomes_[[j]] <-
>> outcomes_[[j]][[1]]}
>>       outcomes_container <- mclapply(outcomes_,prod,mc.cores=3)
>>       intsctn[i] <- sum(unlist(outcomes_container))
>>   }
>>   intsctn <- intsctn[-1]
>>   return(sum(outcomes) - sum(intsctn[which(which((intsctn %in% intsctn))
>> %% 2 == 1)]) + sum(intsctn[which(which((intsctn %in% intsctn)) %% 2 == 0)])
>> + ((-1)^length(intsctn) * prod(outcomes)))
>> }
>> 
>> PS This code has been tested on vectors of up to length(outcomes) == 25 and
>> it should be noted that ProgressBar() requires the R.utils package.
>>
>>        [[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> Charles C. Berry                            Dept of Family/Preventive Medicine
> cbe...@tajo.ucsd.edu                        UC San Diego
> http://famprevmed.ucsd.edu/faculty/cberry/  La Jolla, San Diego 92093-0901
>
>
>



[R] efficient conversion of matrix column rows to list elements

2010-11-17 Thread Chris Carleton
Hi List,

I'm hoping to get opinions for enhancing the efficiency of the following
code designed to take a vector of probabilities (outcomes) and calculate a
union of the probability space. As part of the union calculation, combn()
must be used, which returns a matrix, and the parallelized version of
lapply() provided in the multicore package requires a list. I've found that
parallelization becomes necessary for outcome vectors longer than about 10
or 15 elements, which is why I need to use multicore (and, therefore,
convert the combn() matrix to a list). It would speed the process up if
there were a more direct way to convert the columns of combn() into
elements of a single list. Any constructive suggestions will be greatly
appreciated. Thanks for your consideration,

C

code:

unionIndependant <- function(outcomes) {
    intsctn <- c()
    column2list <- function(x) { list(x) }
    pb <- ProgressBar(max = length(outcomes), stepLength = 1,
                      newlineWhenDone = TRUE)
    for (i in 2:length(outcomes)) {
        increase(pb)
        outcomes_ <- apply(combn(outcomes, i), 2, column2list)
        for (j in 1:length(outcomes_)) {
            outcomes_[[j]] <- outcomes_[[j]][[1]]
        }
        outcomes_container <- mclapply(outcomes_, prod, mc.cores = 3)
        intsctn[i] <- sum(unlist(outcomes_container))
    }
    intsctn <- intsctn[-1]
    return(sum(outcomes) -
           sum(intsctn[which(which((intsctn %in% intsctn)) %% 2 == 1)]) +
           sum(intsctn[which(which((intsctn %in% intsctn)) %% 2 == 0)]) +
           ((-1)^length(intsctn) * prod(outcomes)))
}

PS This code has been tested on vectors of up to length(outcomes) == 25 and
it should be noted that ProgressBar() requires the R.utils package.
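As an aside, for mutually independent events the entire inclusion-exclusion sum collapses to a closed form, so the combinatorial loop can be avoided altogether. A minimal sketch (hypothetical function name, assuming `outcomes` holds the probabilities of independent events, as the code above does when it multiplies probabilities for intersections):

```r
# P(A1 or ... or An) for independent events: the complement of the union is
# the intersection of the complements, so the probability is 1 - prod(1 - p).
unionIndependentDirect <- function(p) {
    stopifnot(all(p >= 0 & p <= 1))
    1 - prod(1 - p)
}

unionIndependentDirect(c(0.3, 0.4))  # 0.58 == 0.3 + 0.4 - 0.3*0.4
```

This is O(n) rather than exponential in the number of events, so no parallelization is needed at all.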



Re: [R] indexing lists

2010-11-15 Thread Chris Carleton
Thanks for the suggestions, but 'cat' is not causing name space conflicts
for me and since I'm not packaging the code for anyone else to use, I'm less
than concerned about potential conflicts. I did type that too quickly, and I
have resolved my problem using a workaround that does not involve finding
the names of top level list objects based on comparisons, but to clarify for
anyone who is interested;

adataframe[ i, c('cat','result') ] <- c( i, result)

allows me to assign the variable 'i' to the df column 'cat' and 'result' to
the df column 'result' simultaneously for the ith entry (of course where 'i'
is the row number). It just happens to be the case that 'i' is both the db
key and the row name in the df, but since other portions of my code are
shuffling the list I'm using to eventually fill the df it makes more sense
for me to keep track of everything with the db key ('cat') than to try and
use list indexes to keep track of where data is going. The solution I'm
using is like this;

for (i in npu_pdf_bw) {
    prob <- fitted(npudist(bw = i[[2]], bwmethod = "normal-reference",
                           edat = origin[cols_select]))
    print(prob)
    pdf_pred[i[[1]], c('cat', 'probability')] <- c(i[[1]], prob)
}

where npu_pdf_bw is a list of lists that each contains the 'cat' db key and
an associated 'np' bandwidth object. 'pdf_pred' is a dataframe that holds
the values and allows me to search for the result ('prob') on the basis of
the 'cat' column thus maintaining my database integrity. It's perfectly
acceptable for you to choose not to offer support on a volunteer help
mailing list and I'm certain that support offered, when bitter or scathing,
isn't appreciated in any event.
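A tiny self-contained illustration of that keyed-assignment pattern (toy keys and probabilities, not the real GRASS data):

```r
# Pre-size the data frame, then fill row i's 'cat' and 'probability'
# columns in a single assignment, as in the workaround above.
pdf_pred <- data.frame(cat = rep(NA_real_, 2), probability = rep(NA_real_, 2))

npu_toy <- list(list(1, 0.25), list(2, 0.75))  # each item: list(cat_key, prob)
for (i in npu_toy) {
    pdf_pred[i[[1]], c("cat", "probability")] <- c(i[[1]], i[[2]])
}

# Results can now be looked up by the db key rather than by list position.
pdf_pred$probability[pdf_pred$cat == 2]  # 0.75
```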

C

On 15 November 2010 17:59, David Winsemius  wrote:

>
> On Nov 15, 2010, at 5:07 PM, Chris Carleton wrote:
>
>  Thanks for the suggestions. The issue for me is that the top level index
>> is
>> also like a database key so it might be a bit annoying to coerce it to
>> char() so that I can reference it with a $ and then I would have to still
>> be
>> able to find out what the name was automatically. I've got a function
>> right
>> now that iterates through a list of values (db keys called cat values in
>> this case) that returns an object from another function that will be used
>> in
>> yet another function. So, I have to store the objects and then pass each
>> one
>> in turn to a function while keeping track of what cat value that object is
>> associated with so that it can be stored in relation to the cat value in a
>> dataframe. Essentially like this...
>>
>> for i in cat {
>>
>
> Please do not use "cat" as an object name for the same reason as not to use
> "c" or "data" as object names.
>
>
>  object <- somefunction()
>> alist[[ i ]] <- object
>> }
>>
>> for i in cat {
>> result <- somefunction(object[[ i ]])
>> adataframe[[ i, c('cat','result') ]] <- c( i, result)
>>
>
> I don't think the [[ operator take two arguments. Perhaps you meant to use
> the "[" operator. Even then cannot tell what "cat" is supposed to be and
> quoteing cat would prevent it from being evaluated.
>
>
>  }
>>
>  I'm paranoid about losing track of which cat value is associated with
>> which
>> result and that's why I'm looking for a way to ensure that the output is
>> stored correctly. The whole thing is going to be automated. Any
>> suggestions
>> would definitely be appreciated. I've tried just creating a list of lists
>> to
>> keep track of things so that list[[1]][[1]] is the cat value and
>> list[[1]][[2]] is the associated object, but now I'm having trouble
>> passing
>> the object to the next function. This might take some time for me to work
>> out.
>>
>
> It appears you are too busy to make a working example, so I am too busy to
> do it for you.
>
> ?names  # for naming  and access to names of list elements
>
> --
> David.
>
>
>
>
>>
>> Thanks,
>>
>> Chris
>>
>>
>> On 15 November 2010 16:38, Joshua Wiley  wrote:
>>
>>  Hi Chris,
>>>
>>> Does this do what you're after?  It just compares each element of a
>>> (i.e., a[[1]] and a[[2]]) to c(1, 2) and determines if they are
>>> identical or not.
>>>
>>> which(sapply(a, identical, y = c(1, 2)))
>>>
>>> There were too many 1s floating around 

Re: [R] indexing lists

2010-11-15 Thread Chris Carleton
Thanks for the suggestions. The issue for me is that the top level index is
also like a database key so it might be a bit annoying to coerce it to
char() so that I can reference it with a $ and then I would have to still be
able to find out what the name was automatically. I've got a function right
now that iterates through a list of values (db keys called cat values in
this case) that returns an object from another function that will be used in
yet another function. So, I have to store the objects and then pass each one
in turn to a function while keeping track of what cat value that object is
associated with so that it can be stored in relation to the cat value in a
dataframe. Essentially like this...

for i in cat {
object <- somefunction()
alist[[ i ]] <- object
}

for i in cat {
result <- somefunction(object[[ i ]])
adataframe[[ i, c('cat','result') ]] <- c( i, result)
}

I'm paranoid about losing track of which cat value is associated with which
result and that's why I'm looking for a way to ensure that the output is
stored correctly. The whole thing is going to be automated. Any suggestions
would definitely be appreciated. I've tried just creating a list of lists to
keep track of things so that list[[1]][[1]] is the cat value and
list[[1]][[2]] is the associated object, but now I'm having trouble passing
the object to the next function. This might take some time for me to work
out.


Thanks,

Chris


On 15 November 2010 16:38, Joshua Wiley  wrote:

> Hi Chris,
>
> Does this do what you're after?  It just compares each element of a
> (i.e., a[[1]] and a[[2]]) to c(1, 2) and determines if they are
> identical or not.
>
> which(sapply(a, identical, y = c(1, 2)))
>
> There were too many 1s floating around for me to figure out if you
> wanted to find elements of a that matched the entire vector or
> subelements of a that matched elements of the vector (if that makes
> any sense).
>
> HTH,
>
> Josh
>
> On Mon, Nov 15, 2010 at 1:24 PM, Chris Carleton
>  wrote:
> > Hi List,
> >
> > I'm trying to work out how to use which(), or another function, to find
> the
> > top-level index of a list item based on a condition. An example will
> clarify
> > my question.
> >
> > a <- list(c(1,2),c(3,4))
> > a
> > [[1]]
> > [1] 1 2
> >
> > [[2]]
> > [1] 3 4
> >
> > I want to find the top level index of c(1,2), which should return 1
> since;
> >
> > a[[1]]
> > [1] 1 2
> >
> > I can't seem to work out the syntax. I've tried;
> >
> > which(a == c(1,2))
> >
> > and an error about coercing to double is returned. I can find the index
> of
> > elements of a particular item by
> >
> > which(a[[1]]==c(1,2)) or which(a[[1]]==1) etc that return [1] 1 2 and [1]
> 1
> > respectively as they should. Any thoughts?
> >
> > C
> >
> >[[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
>
>
> --
> Joshua Wiley
> Ph.D. Student, Health Psychology
> University of California, Los Angeles
> http://www.joshuawiley.com/
>
>



[R] Fwd: indexing lists

2010-11-15 Thread Chris Carleton
Sorry folks, I keep forgetting to switch to my r-help email to send the
replies so they get unintentionally sent to a moderator (particularly sorry
for that moderators...)

C

-- Forwarded message --
From: Chris Carleton 
Date: 15 November 2010 17:07
Subject: Re: [R] indexing lists
To: Joshua Wiley 
Cc: r-help@r-project.org


Thanks for the suggestions. The issue for me is that the top level index is
also like a database key so it might be a bit annoying to coerce it to
char() so that I can reference it with a $ and then I would have to still be
able to find out what the name was automatically. I've got a function right
now that iterates through a list of values (db keys called cat values in
this case) that returns an object from another function that will be used in
yet another function. So, I have to store the objects and then pass each one
in turn to a function while keeping track of what cat value that object is
associated with so that it can be stored in relation to the cat value in a
dataframe. Essentially like this...

for i in cat {
object <- somefunction()
alist[[ i ]] <- object
}

for i in cat {
result <- somefunction(object[[ i ]])
adataframe[[ i, c('cat','result') ]] <- c( i, result)
}

I'm paranoid about losing track of which cat value is associated with which
result and that's why I'm looking for a way to ensure that the output is
stored correctly. The whole thing is going to be automated. Any suggestions
would definitely be appreciated. I've tried just creating a list of lists to
keep track of things so that list[[1]][[1]] is the cat value and
list[[1]][[2]] is the associated object, but now I'm having trouble passing
the object to the next function. This might take some time for me to work
out.


Thanks,

Chris



On 15 November 2010 16:38, Joshua Wiley  wrote:

> Hi Chris,
>
> Does this do what you're after?  It just compares each element of a
> (i.e., a[[1]] and a[[2]]) to c(1, 2) and determines if they are
> identical or not.
>
> which(sapply(a, identical, y = c(1, 2)))
>
> There were too many 1s floating around for me to figure out if you
> wanted to find elements of a that matched the entire vector or
> subelements of a that matched elements of the vector (if that makes
> any sense).
>
> HTH,
>
> Josh
>
> On Mon, Nov 15, 2010 at 1:24 PM, Chris Carleton
>  wrote:
> > Hi List,
> >
> > I'm trying to work out how to use which(), or another function, to find
> the
> > top-level index of a list item based on a condition. An example will
> clarify
> > my question.
> >
> > a <- list(c(1,2),c(3,4))
> > a
> > [[1]]
> > [1] 1 2
> >
> > [[2]]
> > [1] 3 4
> >
> > I want to find the top level index of c(1,2), which should return 1
> since;
> >
> > a[[1]]
> > [1] 1 2
> >
> > I can't seem to work out the syntax. I've tried;
> >
> > which(a == c(1,2))
> >
> > and an error about coercing to double is returned. I can find the index
> of
> > elements of a particular item by
> >
> > which(a[[1]]==c(1,2)) or which(a[[1]]==1) etc that return [1] 1 2 and [1]
> 1
> > respectively as they should. Any thoughts?
> >
> > C
> >
> >[[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
>
>
> --
> Joshua Wiley
> Ph.D. Student, Health Psychology
> University of California, Los Angeles
> http://www.joshuawiley.com/
>
>



[R] indexing lists

2010-11-15 Thread Chris Carleton
Hi List,

I'm trying to work out how to use which(), or another function, to find the
top-level index of a list item based on a condition. An example will clarify
my question.

a <- list(c(1,2),c(3,4))
a
[[1]]
[1] 1 2

[[2]]
[1] 3 4

I want to find the top level index of c(1,2), which should return 1 since;

a[[1]]
[1] 1 2

I can't seem to work out the syntax. I've tried;

which(a == c(1,2))

and an error about coercing to double is returned. I can find the index of
elements of a particular item by

which(a[[1]]==c(1,2)) or which(a[[1]]==1) etc that return [1] 1 2 and [1] 1
respectively as they should. Any thoughts?

C
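For reference, the approach suggested in the replies elsewhere in this archive boils down to building a logical vector with sapply() and identical(), then passing it to which():

```r
a <- list(c(1, 2), c(3, 4))

# which(a == c(1, 2)) fails because a list cannot be compared with ==.
# identical() compares each top-level element against the whole target
# vector; which() then returns the matching top-level index.
idx <- which(sapply(a, identical, y = c(1, 2)))
idx  # 1
```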



Re: [R] R package 'np' problems

2010-11-15 Thread Chris Carleton
Hi Peter and List,

I realized the err of my ways here. Thanks for the response; I appreciate
the help. The struggles of self-taught statistics and maths continue!

Chris

On 15 November 2010 04:34, P Ehlers  wrote:

> Chris Carleton wrote:
>
>> Hi List,
>>
>> I'm trying to get a density estimate for a point of interest from an
>> npudens
>> object created for a sample of points. I'm working with 4 variables in
>> total
>> (3 continuous and 1 unordered discrete - the discrete variable is the
>> character column in training.csv). When I try to evaluate the density for
>> a
>> point that was not used in the training dataset, and when I extract the
>> fitted values from the npudens object itself, I'm getting values that are
>> much greater than 1 in some cases, which, if I understand correctly,
>> shouldn't be possible considering a pdf estimate can only be between 0 and
>> 1. I think I must be doing something wrong, but I can't see it. Attached
>> I've included the training data (training.csv) and the point of interest
>> (origin.csv); below I've included the code I'm using and the results I'm
>> getting. I also don't understand why, when trying to evaluate the npudens
>> object at one point, I'm receiving the same set of fitted values from the
>> npudens object with the predict() function. It should be noted that I'm
>> indexing the dataframe of training data in order to get samples of the df
>> for density estimation (the samples are from different geographic
>> locations
>> measured on the same set of variables; hence my use of sub-setting by [i]
>> and removing columns from the df before running the density estimation).
>> Moreover, in the example I'm providing here, the point of interest does
>> happen to come from the training dataset, but I'm receiving the same
>> results
>> when I compare the point of interest to samples of which it is not a part
>> (density estimates that are either extremely small, which is acceptable,
>> or
>> much greater than one, which doesn't seem right to me). Any thoughts would
>> be greatly appreciated,
>>
>> Chris
>>
>>
> I haven't looked at this in any detail, but why do say that pdf values
> cannot exceed 1? That's certainly not true in general.
>
>  -Peter Ehlers
>
>
>  fitted(npudens(tdat=training_df[training_cols_select][training_df$cat ==
>>>
>> i,]))
>>
>> [1] 7.762187e+18 9.385532e+18 6.514318e+18 7.583486e+18 6.283017e+18
>>  [6] 6.167344e+18 9.820551e+18 7.952821e+18 7.882741e+18 1.744266e+19
>> [11] 6.653258e+18 8.704722e+18 8.631365e+18 1.876052e+19 1.995445e+19
>> [16] 2.323802e+19 1.203780e+19 8.493055e+18 8.485279e+18 1.722033e+19
>> [21] 2.227207e+19 2.177740e+19 2.168679e+19 9.329572e+18 9.380505e+18
>> [26] 1.023311e+19 2.109676e+19 7.903112e+18 7.935457e+18 8.91e+18
>> [31] 8.899827e+18 6.265440e+18 6.204720e+18 6.276559e+18 6.218002e+18
>>
>>  npu_dens <-
>>> npudens(tdat=training_df[training_cols_select][training_df$cat
>>>
>> == i,])
>>
>>> summary(npu_dens)
>>>
>>
>> Density Data: 35 training points, in 4 variable(s)
>>  aster_srtm_aspect aster_srtm_dem_filled aster_srtm_slope
>> Bandwidth(s):  29.22422  2.500559e-24 3.111467
>>  class_unsup_pc_iso
>> Bandwidth(s):  0.2304616
>>
>> Bandwidth Type: Fixed
>> Log Likelihood: 1531.598
>>
>> Continuous Kernel Type: Second-Order Gaussian
>> No. Continuous Vars.: 3
>>
>> Unordered Categorical Kernel Type: Aitchison and Aitken
>> No. Unordered Categorical Vars.: 1
>>
>>  predict(npu_dens,newdata=origin[training_cols_select]))
>>>
>>
>> [1] 7.762187e+18 9.385532e+18 6.514318e+18 7.583486e+18 6.283017e+18
>>  [6] 6.167344e+18 9.820551e+18 7.952821e+18 7.882741e+18 1.744266e+19
>> [11] 6.653258e+18 8.704722e+18 8.631365e+18 1.876052e+19 1.995445e+19
>> [16] 2.323802e+19 1.203780e+19 8.493055e+18 8.485279e+18 1.722033e+19
>> [21] 2.227207e+19 2.177740e+19 2.168679e+19 9.329572e+18 9.380505e+18
>> [26] 1.023311e+19 2.109676e+19 7.903112e+18 7.935457e+18 8.91e+18
>> [31] 8.899827e+18 6.265440e+18 6.204720e+18 6.276559e+18 6.218002e+18
>>
>
>
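Peter's point above is easy to check in base R: a density can exceed 1 wherever the distribution is tightly concentrated, and the near-zero bandwidth reported for aster_srtm_dem_filled (2.5e-24) is exactly the kind of concentration that yields fitted values around 1e19:

```r
# A density value is not a probability. For a normal kernel the peak height
# is 1/(sd * sqrt(2*pi)), which grows without bound as sd shrinks.
dnorm(0, mean = 0, sd = 1)     # ~0.399
dnorm(0, mean = 0, sd = 0.01)  # ~39.9, far above 1

# What must equal 1 is the integral of the density, not its values:
integrate(dnorm, -1, 1, sd = 0.01)$value  # ~1
```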



[R] Correction on last post

2010-11-14 Thread Chris Carleton
Hi again List,

By 'discrete' variable in the last email, I meant 'categorical'. Also, the
data I sent is one of the samples of the main data frame, which I mentioned
in a round-about way, but I thought that it might be confusing after reading
the email again. Thanks again for any help,

Chris



[R] R package 'np' problems

2010-11-14 Thread Chris Carleton
Hi List,

I'm trying to get a density estimate for a point of interest from an npudens
object created for a sample of points. I'm working with 4 variables in total
(3 continuous and 1 unordered discrete - the discrete variable is the
character column in training.csv). When I try to evaluate the density for a
point that was not used in the training dataset, and when I extract the
fitted values from the npudens object itself, I get values much greater than
1 in some cases, which, if I understand correctly, shouldn't be possible,
considering a pdf estimate can only be between 0 and 1. I think I must be
doing something wrong, but I can't see it.

Attached I've included the training data (training.csv) and the point of
interest (origin.csv); below is the code I'm using and the results I'm
getting. I also don't understand why, when trying to evaluate the npudens
object at one point, I receive the same set of fitted values from the
npudens object with the predict() function.

It should be noted that I'm indexing the dataframe of training data in order
to get samples of the df for density estimation (the samples are from
different geographic locations measured on the same set of variables; hence
my use of sub-setting by [i] and removing columns from the df before running
the density estimation). Moreover, in the example I'm providing here, the
point of interest does happen to come from the training dataset, but I get
the same results when I compare the point of interest to samples of which it
is not a part (density estimates that are either extremely small, which is
acceptable, or much greater than 1, which doesn't seem right to me). Any
thoughts would be greatly appreciated,

Chris

> fitted(npudens(tdat=training_df[training_cols_select][training_df$cat == i,]))

[1] 7.762187e+18 9.385532e+18 6.514318e+18 7.583486e+18 6.283017e+18
 [6] 6.167344e+18 9.820551e+18 7.952821e+18 7.882741e+18 1.744266e+19
[11] 6.653258e+18 8.704722e+18 8.631365e+18 1.876052e+19 1.995445e+19
[16] 2.323802e+19 1.203780e+19 8.493055e+18 8.485279e+18 1.722033e+19
[21] 2.227207e+19 2.177740e+19 2.168679e+19 9.329572e+18 9.380505e+18
[26] 1.023311e+19 2.109676e+19 7.903112e+18 7.935457e+18 8.91e+18
[31] 8.899827e+18 6.265440e+18 6.204720e+18 6.276559e+18 6.218002e+18

> npu_dens <- npudens(tdat=training_df[training_cols_select][training_df$cat == i,])
> summary(npu_dens)

Density Data: 35 training points, in 4 variable(s)
              aster_srtm_aspect aster_srtm_dem_filled aster_srtm_slope
Bandwidth(s):          29.22422          2.500559e-24         3.111467
              class_unsup_pc_iso
Bandwidth(s):          0.2304616

Bandwidth Type: Fixed
Log Likelihood: 1531.598

Continuous Kernel Type: Second-Order Gaussian
No. Continuous Vars.: 3

Unordered Categorical Kernel Type: Aitchison and Aitken
No. Unordered Categorical Vars.: 1

> predict(npu_dens, newdata=origin[training_cols_select])

[1] 7.762187e+18 9.385532e+18 6.514318e+18 7.583486e+18 6.283017e+18
 [6] 6.167344e+18 9.820551e+18 7.952821e+18 7.882741e+18 1.744266e+19
[11] 6.653258e+18 8.704722e+18 8.631365e+18 1.876052e+19 1.995445e+19
[16] 2.323802e+19 1.203780e+19 8.493055e+18 8.485279e+18 1.722033e+19
[21] 2.227207e+19 2.177740e+19 2.168679e+19 9.329572e+18 9.380505e+18
[26] 1.023311e+19 2.109676e+19 7.903112e+18 7.935457e+18 8.91e+18
[31] 8.899827e+18 6.265440e+18 6.204720e+18 6.276559e+18 6.218002e+18
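A note on the magnitudes above: a continuous density can legitimately exceed 1, but values on the order of 1e+19 almost always signal a degenerate bandwidth - the summary reports 2.500559e-24 for aster_srtm_dem_filled, so that variable contributes an enormous spike. Below is a minimal, hedged sketch of evaluating an npudens estimate at a new point via the edat argument (the data and variable names are hypothetical stand-ins, assuming the np package):

```r
## Hypothetical stand-in data; the real columns come from training.csv.
library(np)
set.seed(1)
train  <- data.frame(elev = rnorm(35, 2000, 5), slope = rnorm(35, 10, 3))
origin <- data.frame(elev = 2001, slope = 11)

bw  <- npudensbw(~ elev + slope, data = train)   # data-driven bandwidths
fit <- npudens(bws = bw)

## To evaluate at a new point, pass it through edat; without edat,
## fitted() reports the density at the 35 training points themselves,
## which is why the same 35 values kept coming back.
f_origin <- fitted(npudens(bws = bw, tdat = train, edat = origin))
```

Rescaling the continuous variables (e.g., with scale()) before bandwidth selection often prevents the near-zero bandwidth in the first place.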


[R] package np, convolution functions

2010-11-10 Thread Chris Carleton
Hello List,

I'm trying to find a convenient way of performing a weighted convolution on
multiple n-dimensional kernel density estimates (n-d pdfs) produced by the
np package function npudens()/npudist(). Are there any functions in R that
will take a list of functions and a list of weights and convolve the set of
weighted functions? Thanks,

Chris
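I'm not aware of a single base function that convolves a list of weighted density estimates, but one hedged approach is to evaluate each estimated pdf on a common grid and reduce the list pairwise with convolve(). In this sketch dnorm() stands in for npudens()/npudist() evaluations, and the weights are made up:

```r
## Evaluate each pdf on a shared grid, then convolve the weighted grids.
grid <- seq(-20, 20, length.out = 1024)
dx   <- diff(grid)[1]

pdfs    <- list(dnorm(grid, 0, 1), dnorm(grid, 2, 0.5))  # stand-ins
weights <- c(0.3, 0.7)

weighted <- Map(`*`, weights, pdfs)

## convolve(x, rev(y), type = "open") is R's discrete convolution;
## multiplying by dx approximates the continuous integral.
conv <- Reduce(function(f, g) convolve(f, rev(g), type = "open") * dx,
               weighted)
```

(If a weighted *mixture* rather than a true convolution is intended, Reduce(`+`, weighted) on the shared grid suffices.)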




Re: [R] Time Series Data

2009-11-27 Thread chris carleton

I'm not sure what you mean by 'multiplying by 10 and rounding'. I've tried to
re-organize the data and then create a ts object analogous to the various
sunspot datasets provided with R, but the matrix I generate by binning the
data into centuries (which would make the frequency in stl() 10) is read by
plot() and stl() as a multivariate time series dataset (stl() requires that
the data be univariate). For those who haven't read the stl() docs, the
frequency is the number of observations per unit of time: quarterly economic
data have frequency 4 (4 observations over a year), and decadal sunspot
numbers have frequency 0.1 (0.1 observations per year). I would have thought
that binning the data into centuries (now ten observations per unit of time)
would be sufficient, but I don't know why stl() interprets the matrix as a
multivariate dataset when a similar matrix (sunspots - measured each month
for a frequency of 12; although from what I can tell the sunspots data in R
is not a ts object anyhow) encounters no such difficulty. Any suggestions
would be appreciated,

Chris
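For what it's worth, the multivariate complaint usually comes from handing stl() a matrix; keeping the binned series as a plain vector and declaring 10 observations per period satisfies stl()'s frequency requirement. A hedged sketch with stand-in data:

```r
## Stand-in for the decadal sunspot series: 100 decades = 10 "centuries".
set.seed(1)
decadal <- sin(2 * pi * (1:100) / 10) + rnorm(100, sd = 0.2)

## A vector (not a matrix) with frequency = 10: one period per century.
x   <- ts(decadal, frequency = 10)
dec <- stl(x, s.window = "periodic")   # seasonal + trend + remainder
```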

> CC: r-help@r-project.org
> From: dwinsem...@comcast.net
> To: w_chris_carle...@hotmail.com
> Subject: Re: [R] Time Series Data
> Date: Fri, 27 Nov 2009 10:10:36 -0500
> 
> 
> On Nov 27, 2009, at 9:55 AM, chris carleton wrote:
> 
> >
> > Hi All,
> >
> > I'm trying to analyze some time series data and I have run into  
> > difficulty. I have decadal sun spot data and I want to separate the  
> > very regular periodic function from the trend and noise. I looked  
> > into using stl(), but the frequency of the time series data must be  
> > greater than 1 for stl(). My data covers a 1000 year interval from  
> > 9095 BP to 8095 BP and the frequency is, therefore, 0.1 (because the  
> > data are decadal). I've tried changing the frequency, but the only  
> > frequency that creates a plot of the time series data which matches  
> > the raw data is 0.1. Is there anything I can do to the data that  
> > will make it more amenable to stl(), or is there another package  
> > that I could use for decomposing the signal that does not require  
> > that the frequency of the time series to be greater than 1? Thanks,
> 
> I do not understand why you are not multiplying by 10 and rounding.
> 
> -- 
> David
> >
> 
> 
> David Winsemius, MD
> Heritage Laboratories
> West Hartford, CT
> 
  



[R] Time Series Data

2009-11-27 Thread chris carleton

Hi All,

I'm trying to analyze some time series data and I have run into difficulty. I 
have decadal sun spot data and I want to separate the very regular periodic 
function from the trend and noise. I looked into using stl(), but the frequency 
of the time series data must be greater than 1 for stl(). My data covers a 1000 
year interval from 9095 BP to 8095 BP and the frequency is, therefore, 0.1 
(because the data are decadal). I've tried changing the frequency, but the only 
frequency that creates a plot of the time series data which matches the raw 
data is 0.1. Is there anything I can do to the data that will make it more 
amenable to stl(), or is there another package that I could use for decomposing 
the signal that does not require that the frequency of the time series to be 
greater than 1? Thanks,

Chris
  



[R] Poly()

2009-11-16 Thread chris carleton

Hi All,

I was hoping someone could save me the trouble of reading through source code 
and answer a quick question of mine regarding poly(). Does the poly() function 
use a classical orthogonal polynomial series to fit polynomial models, or does 
poly() generate a unique series of orthogonal polynomials based on the method 
described in Kennedy and Gentle (1980:342-347)? I understand that the 
aforementioned reference is the source for the recursion used in 
predict.poly(), but I was a bit confused by the documentation regarding how the 
orthogonal polynomial series was constructed in the first place. If any of the 
developers are reading - it would be handy to have the orthogonal polynomial 
series available as an output of poly(), in addition to the coefficients, if 
that function is generating a unique series based on the dataset used for the 
approximation. Thanks to everyone for your time,

Chris
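To the first question, to the best of my understanding: poly() builds an orthonormal basis specific to the supplied x (via a QR decomposition of the raw power basis), not a classical series, and the recursion constants are in fact already returned with the result - they live in the "coefs" attribute that predict.poly() consumes. A small illustration:

```r
x <- 1:10
P <- poly(x, 3)          # data-dependent orthonormal basis, not a classical one

round(crossprod(P), 10)  # identity matrix: the columns are orthonormal
attr(P, "coefs")         # alpha and norm2 driving the recursion in predict.poly()
```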
  



[R] More polyfit problems

2009-10-17 Thread chris carleton

Hi Everyone,

I'm continuing to run into trouble with polyfit. I'm using a fitting function 
of the form:

fit <- lm(y ~ poly(x,degree,raw=TRUE))

and I have found that in some cases a polynomial of a certain degree can't be 
fit - the coefficient won't be calculated - because of a singularity. If I use 
orthogonal polynomials I can fit a polynomial of any degree, but I don't get 
the proper coefficients. I'm having trouble solving this partly because I 
likely don't understand enough about polynomial fitting - and for that I have 
to apologize, since I know this isn't supposed to be a forum for questions 
about mathematical process but rather about the functionality of R. 
Nevertheless, can anyone provide guidance on this point? Is there a way to use 
orthogonal polynomials (the part of the fitting process that I don't quite 
understand) and still find the polynomial coefficients? Alternatively, what 
does one do about singularities using raw polynomials? It seems odd to me that 
the orthogonal polynomials can be used to fit the data (even visually, once 
plotted), and accurate predicted values can be derived from the function using 
predict(), yet the coefficients are unknown. Any help is appreciated,

Chris

output below:
---
> archit[,1]

[1] 1.8 1.3 2.0 2.1 1.9 1.9 1.3 1.9 2.8

> x

[1] 8752 8610 8554 8496 8482 8462 8438 8418 8384

> archi_rooms <- lm(archit[,1] ~ poly(x,4))
> summary(archi_rooms)

Call:

lm(formula = archit[, 1] ~ poly(x, 4))


Residuals:
        1         2         3         4         5         6         7         8         9 
-0.001790  0.042008 -0.112756  0.083040  0.005183  0.175733 -0.330450  0.137128  0.001905 


Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.88889    0.07077  26.691 1.17e-05 ***
poly(x, 4)1 -0.47520    0.21230  -2.238   0.0888 .  
poly(x, 4)2  0.50116    0.21230   2.361   0.0776 .  
poly(x, 4)3 -0.20830    0.21230  -0.981   0.3821    
poly(x, 4)4  0.94246    0.21230   4.439   0.0113 *  

---

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 


Residual standard error: 0.2123 on 4 degrees of freedom

Multiple R-squared: 0.8865,    Adjusted R-squared: 0.7731 

F-statistic: 7.813 on 4 and 4 DF,  p-value: 0.03570 


> archi_rooms <- lm(archit[,1] ~ poly(x,4,raw=TRUE))
> summary(archi_rooms)


Call:

lm(formula = archit[, 1] ~ poly(x, 4, raw = TRUE))


Residuals:
       1        2        3        4        5        6        7        8        9 
 0.04665 -0.39399  0.34009  0.37112  0.13015  0.05111 -0.67957 -0.22202  0.35647 


Coefficients: (1 not defined because of singularities)
                          Estimate Std. Error t value Pr(>|t|)
(Intercept)              4.205e+04  9.107e+04   0.462    0.664
poly(x, 4, raw = TRUE)1 -1.461e+01  3.191e+01  -0.458    0.666
poly(x, 4, raw = TRUE)2  1.693e-03  3.726e-03   0.454    0.669
poly(x, 4, raw = TRUE)3 -6.536e-08  1.450e-07  -0.451    0.671
poly(x, 4, raw = TRUE)4         NA         NA      NA       NA


Residual standard error: 0.4623 on 5 degrees of freedom

Multiple R-squared: 0.3275,    Adjusted R-squared: -0.076 

F-statistic: 0.8116 on 3 and 5 DF,  p-value: 0.5397
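The singularity here is numeric rather than mathematical: with x around 8500, the raw quartic column reaches roughly 5e15 and the model matrix becomes computationally rank-deficient. Centering x before fitting is a standard remedy that keeps interpretable raw coefficients (in powers of x - mean(x)); a hedged sketch using the data above:

```r
x <- c(8752, 8610, 8554, 8496, 8482, 8462, 8438, 8418, 8384)
y <- c(1.8, 1.3, 2.0, 2.1, 1.9, 1.9, 1.3, 1.9, 2.8)

## Raw powers of centered x stay modest in magnitude, so the model
## matrix is well conditioned and all five coefficients are estimable.
z   <- x - mean(x)
fit <- lm(y ~ poly(z, 4, raw = TRUE))
coef(fit)   # no NA from a singularity; note these are powers of z, not x
```

If coefficients in powers of x are specifically needed, expanding the powers of (x - mean(x)) algebraically recovers them.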

  



Re: [R] Polynomial Fitting

2009-09-29 Thread chris carleton

Thanks a ton Rolf,

I was using the coefficients as given from summary(fit), which have been 
rounded. When I used coef(fit) as you've done, the poly function is the same 
now in Octave and returned the R predicted values as expected. Your time is 
appreciated,

C

> CC: r-help@r-project.org
> From: r.tur...@auckland.ac.nz
> Subject: Re: [R] Polynomial Fitting
> Date: Wed, 30 Sep 2009 08:30:15 +1300
> To: w_chris_carle...@hotmail.com
> 
> 
> On 30/09/2009, at 5:34 AM, chris carleton wrote:
> 
> >
> > Thanks for the response. I'm sorry I didn't provide the code or  
> > data example earlier. I was using the polynomial fitting technique  
> > of this form;
> >
> > test <- lm(x[,34] ~ I(x[,1]) + I(x[,1]^2) + I(x[,1]^3))
> >
> > for the original fitting operation. I also tried to use;
> >
> > lm(y ~ poly(x,3,raw=TRUE))
> >
> > with the same results for the polynomial coefficients in both  
> > cases. If my understanding is correct, both of the methods above  
> > produce the coefficients of a polynomial based on the data in 'y'  
> > as that data varies over 'x'. Therefore, I would assume that the  
> > function of the polynomial should always produce the same results  
> > as the predict() function in R produces. However, here are the raw  
> > data for anyone that has the time to help me out.
> >
> > y:
> > [1] 9097 9074 9043 8978 8912 8847 8814 8786 8752 8722 8686 8657  
> > 8610 8604 8554
> > [16] 8546 8496 8482 8479 8462 8460 8438 8428 8418 8384
> >
> > x:
> >  [1] 17.50    NA 20.59 21.43 17.78 21.89    NA 22.86    NA  6.10    NA  5.37
> > [13]  3.80    NA  6.80    NA    NA    NA  5.80    NA    NA    NA    NA    NA
> > [25]    NA
> >
> > I think that R lm() just ignores the NA values, but I've also tried  
> > this by first eliminating NAs and the corresponding x values from  
> > the data before fitting the poly and the result was the same  
> > coefficients. Thanks very much to anyone who is willing to provide  
> > information.
> 
> What's the problem?
> 
> fit <- lm(y~x+I(x^2)+I(x^3))
> ccc <- coef(fit)
> X <- cbind(1,x,x^2,x^3)
> print(fitted(fit))
> chk <- X%*%ccc
> print(chk[!is.na(chk)])
> print(range(fitted(fit)-chk[!is.na(chk)]))
> [1] 0.00e+00 5.456968e-12
> 
> The answers are the same.
> 
>   cheers,
> 
>   Rolf Turner
> 
> ##
> Attention: 
> This e-mail message is privileged and confidential. If you are not the 
> intended recipient please delete the message and notify the sender. 
> Any views or opinions presented are solely those of the author.
> 
> This e-mail has been scanned and cleared by MailMarshal 
> www.marshalsoftware.com
> ##
  



Re: [R] Polynomial Fitting

2009-09-29 Thread chris carleton

Thanks for the response. I'm sorry I didn't provide the code or data example 
earlier. I was using a polynomial fitting technique of this form:

test <- lm(x[,34] ~ I(x[,1]) + I(x[,1]^2) + I(x[,1]^3))

for the original fitting operation. I also tried to use:

lm(y ~ poly(x,3,raw=TRUE))

with the same results for the polynomial coefficients in both cases. If my 
understanding is correct, both of the methods above produce the coefficients of 
a polynomial based on the data in 'y' as that data varies over 'x'. Therefore, 
I would assume that the function of the polynomial should always produce the 
same results as the predict() function in R produces. However, here are the raw 
data for anyone that has the time to help me out.

y:
[1] 9097 9074 9043 8978 8912 8847 8814 8786 8752 8722 8686 8657 8610 8604 8554
[16] 8546 8496 8482 8479 8462 8460 8438 8428 8418 8384

x:
 [1] 17.50    NA 20.59 21.43 17.78 21.89    NA 22.86    NA  6.10    NA  5.37
[13]  3.80    NA  6.80    NA    NA    NA  5.80    NA    NA    NA    NA    NA
[25]    NA

I think that R lm() just ignores the NA values, but I've also tried this by 
first eliminating NAs and the corresponding x values from the data before 
fitting the poly and the result was the same coefficients. Thanks very much to 
anyone who is willing to provide information.

Chris Carleton

> CC: r-help@r-project.org
> From: r.tur...@auckland.ac.nz
> Subject: Re: [R] Polynomial Fitting
> Date: Tue, 29 Sep 2009 13:30:07 +1300
> To: w_chris_carle...@hotmail.com
> 
> 
> On 29/09/2009, at 10:52 AM, chris carleton wrote:
> 
> >
> > Hello All,
> >
> >  This might seem elementary to everyone, but please bear with me. I've
> >  just spent some time fitting poly functions to time series data in R
> >  using lm() and predict(). I want to analyze the functions once I've
> >  fit them to the various data I'm studying. However, after pulling the
> >  first function into Octave (just by plotting the polynomial function
> >  using fplot() over the same x interval as my original data) I was
> >  surprised to see that the scale and y values were vastly different
> >  than the ones I have in R. The basic shape of the polynomial over the
> >  same interval looks similar in both Octave and R, but the y values  
> > are
> >  all different. When I compute the y values using the polynomial
> >  function by hand, the y values from the Octave plot are returned and
> >  not the y values predicted by predict() in R. Can someone explain to
> >  me why the values for a function would be different in R? Thanks,
> >  Chris Carleton
> 
> Presumably because you were using poly() with the argument "raw" left
> equal to its default, i.e. FALSE.
> 
>   cheers,
> 
>   Rolf Turner
> 
> P. S.  The posting guide asks for reproducible examples.
> 
>   R. T.
> 
>   



[R] Polynomial Fitting

2009-09-28 Thread chris carleton

Hello All,

This might seem elementary to everyone, but please bear with me. I've just 
spent some time fitting poly functions to time series data in R using lm() 
and predict(). I want to analyze the functions once I've fit them to the 
various data I'm studying. However, after pulling the first function into 
Octave (just by plotting the polynomial function using fplot() over the same 
x interval as my original data) I was surprised to see that the scale and y 
values were vastly different than the ones I have in R. The basic shape of 
the polynomial over the same interval looks similar in both Octave and R, 
but the y values are all different. When I compute the y values using the 
polynomial function by hand, the y values from the Octave plot are returned 
and not the y values predicted by predict() in R. Can someone explain to me 
why the values for a function would be different in R? Thanks,

Chris Carleton
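A hedged sketch (made-up data) of the fix described in the replies above: fit with raw = TRUE so the coefficients are in the ordinary power basis, and take them from coef(fit) rather than the rounded summary() table before moving to Octave:

```r
set.seed(1)
x <- 1:20
y <- 0.5 + 0.2 * x - 0.01 * x^2 + rnorm(20, sd = 0.1)

fit <- lm(y ~ poly(x, 3, raw = TRUE))
cc  <- unname(coef(fit))   # full precision, unlike the rounded summary() display

## In Octave, coefficients go highest-degree first:
##   polyval(fliplr(cc), x)   % matches predict(fit) from R
```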
 
  

  
