[R] Saving Lattice Plots with RInside and Rcpp

2014-06-23 Thread Manoj G
Hi,

I am trying to build an R application in C++ using RInside. I want to
save the plots as images in a specified directory using code such as:

png(filename = "filename", width = 600, height = 400)
xyplot(data ~ year | segment, data = dataset, layout = c(1,3),
   type = c("l", "p"), ylab = "Y Label", xlab = "X Label",
   main = "Title of the Plot")
dev.off()

It creates a PNG file in the specified directory when run directly from R,
but using C++ calls through RInside I was not able to reproduce the same
result. (*I could reproduce all base-graphics plots using C++ calls; the
problem is only with lattice and ggplot2 plots.*)

I tried the following code as well:

myplot <- xyplot(data ~ year | segment, data = dataset, layout = c(1,3),
 type = c("l", "p"), ylab = "Y Label", xlab = "X Label",
 main = "Title of the Plot")
trellis.device(device = "png", filename = "filename")
print(myplot)
dev.off()

The PNG file is created without any problem if I run the above code in R.
From the C++ calls, however, a PNG file is created containing only an empty
panel with the title and x-y labels, not the complete plot.

I'm using the function R.parseEval() to make the C++ calls into R.

How can I get lattice and ggplot2 plots to render properly?
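For reference, the standard pattern (which the second attempt above already follows) is to print() the lattice object explicitly before closing the device: lattice and ggplot2 functions only return plot objects, and outside top-level interactive evaluation nothing is drawn unless the object is printed. A minimal self-contained sketch with toy data (the file name is a placeholder):

```r
## Sketch only -- toy data; the essential part is the explicit print().
library(lattice)
dataset <- data.frame(year    = rep(2001:2010, 3),
                      data    = rnorm(30),
                      segment = rep(c("A", "B", "C"), each = 10))
png(filename = "myplot.png", width = 600, height = 400)  # placeholder path
p <- xyplot(data ~ year | segment, data = dataset, layout = c(1, 3),
            type = c("l", "p"), ylab = "Y Label", xlab = "X Label",
            main = "Title of the Plot")
print(p)   # lattice draws nothing at non-top-level unless print()ed
dev.off()  # flush and close the device before the process exits
```

When the same lines are sent through parseEval(), both the print() and the dev.off() need to be evaluated before the embedded R session ends, otherwise the file may be left incomplete.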


Thanks,

Manoj G


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help plsr function

2014-06-23 Thread annie Zhang
Hi All,

I want to produce scores from X using the $projection matrix of a plsr
fit. When I predict, I cannot match the predicted scores against the
scores computed as x %*% projection.

Below is a very simple example,

library(pls)  # plsr() comes from the pls package
set.seed(seed=1)
y <- c(1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0)
x <- matrix(runif(200),nrow=20)
data <- data.frame(cbind(y,x))
colnames(data) <-
c("out","x1","x2","x3","x4","x5","x6","x7","x8","x9","x10")
data.cpls <- plsr(out~x1+x2+x3+x4+x5+x6+x7+x8+x9+x10,2,data=data)

x.new <- matrix(runif(50),nrow=5)
x.new.centered <- x.new
for (i in 1:ncol(x.new)) {
 x.mean <- mean(x.new[,i])
 x.new.centered[,i] <- (x.new[,i]-x.mean)
}
## the predicted scores from the model
(pred <- predict(data.cpls,n.comp=1:2,newdata=x.new,type="score"))
## the predicted scores using x%*%projection
cbind(x.new.centered%*%data.cpls$projection[,1],x.new.centered%*%data.cpls$projection[,2])

Can someone please tell me why the two predicted scores don't match?
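One thing worth checking (a guess, not verified against the pls sources): the code above centres x.new with its own column means, whereas predict() presumably centres new data with the column means of the *training* X. If the fit stores those means (e.g. in a component such as $Xmeans, which you can confirm with str(data.cpls)), the comparison would look like:

```r
## Sketch only -- assumes an existing plsr fit `data.cpls` and new data
## `x.new` as in the post, and that the training column means are stored
## in data.cpls$Xmeans (an assumption to verify).
x.new.centered <- sweep(x.new, 2, data.cpls$Xmeans)  # centre with TRAINING means
cbind(x.new.centered %*% data.cpls$projection[, 1],
      x.new.centered %*% data.cpls$projection[, 2])
```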

Thanks,
Annie



Re: [R] Custom sampling method in R XXXX

2014-06-23 Thread Daniel Nordlund
Something like this could work:


x <- 0.250
new_sample <- function(xx) {
  j <- c(0.000, 0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, 1.000)
  probs <- c(0.02307692, 0.20769231, 0.53846154, 0.20769231, 0.02307692)
  jj <- c(0, 0, j, 1, 1)  # pad both ends so every value of j has 2 neighbours
  ndx <- which(j == xx)   # position of xx in j
  # jj[ndx:(ndx + 4)] is the 5-value window centred on xx; near the
  # boundaries the duplicated end values collapse the tail probabilities
  # onto the boundary value
  sample(jj[ndx:(ndx + 4)], size = 1, prob = probs)
}
new_sample(x)
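A quick deterministic check of the boundary case (x = 0): aggregating the window's sampling weights by value should reproduce the collapsed distribution Dan described for the boundary.

```r
## Boundary-case check: for x = 0 the padded window is c(0, 0, 0, 0.125, 0.25),
## so the weights on the duplicated zeros should sum to ~0.769.
j <- c(0.000, 0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, 1.000)
probs <- c(0.02307692, 0.20769231, 0.53846154, 0.20769231, 0.02307692)
jj <- c(0, 0, j, 1, 1)
window <- jj[1:5]            # window for x = 0
tapply(probs, window, sum)   # total weight per distinct value
##        0    0.125     0.25
## 0.769231 0.207692 0.023077   (approximately)
```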



Daniel Nordlund
Bothell, WA USA
 

> -----Original Message-----
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Dan Abner
> Sent: Monday, June 23, 2014 3:19 PM
> To: Greg Snow
> Cc: r-help@r-project.org
> Subject: Re: [R] Custom sampling method in R 
> 
> Hi Greg,
> 
> Thanks, this makes sense. I can envision the call to the sample fn
> like you are describing. Any ideas on how to construct the vector? I
> am still unclear about that.
> 
> Thanks,
> 
> Dan
> 
> On Mon, Jun 23, 2014 at 5:26 PM, Greg Snow <538...@gmail.com> wrote:
> > The sample function can be used to sample discrete values with
> > designated probabilities.  I would just construct your list of 5
> > values based on the selected value (duplicating end values if needed,
> > so a choice of x=0 would be the vector c(0,0,0, 0.125, 0.25) ), then
> > sample from this vector with the probabilities that you specify.
> >
> > On Mon, Jun 23, 2014 at 3:11 PM, Dan Abner 
> wrote:
> >>  Hi all,
> >>
> >> I have the following situation and a good efficient way to perform
> >> this operation in R has not come to me. Any suggestions/input are
> >> welcome.
> >>
> >> I have a user-defined parameter (let's call it x) whose value is
> >> selected from a set of possible values (j). Once the user selects one
> >> of the values of j for x, then I need to map a probability
> >> distribution to the values of j such that the middle probability of
> >> .5385 (see probs below) is associated with the value of x and the tail
> >> probabilities are assigned to the 2 values below x and 2 values above
> >> x in j. Therefore, in the example below:
> >>
> >>
> >> x<-.250
> >> j<-c(0.000,0.125,0.250,0.375,0.500,0.625,0.750,0.875,1.000)
> >> probs<-c(0.02307692,0.20769231,0.53846154,0.20769231,0.02307692)
> >>
> >> probabilities would be assigned to the values of j as such:
> >>
> >> value  probability
> >> 0      0.023077
> >> 0.125  0.207692
> >> 0.25   0.538462
> >> 0.375  0.207692
> >> 0.5    0.023077
> >>
> >> And then 1 value of j is selected based on the associated probability.
> >> Any ideas on an efficient way to do this?
> >>
> >> An added dimension of complexity is when the value of x is selected
> >> near the parameter boundary of j. If x = 0, then the easiest thing I
> >> can think of is to assign probabilities as:
> >>
> >> value  probability
> >> 0      0.76923077
> >> 0.125  0.207692
> >> 0.25   0.023077
> >>
> >> However, I am open to other possibilities.
> >>
> >> Any assistance is appreciated.
> >>
> >> Thanks,
> >>
> >> Dan
> >>
> >
> >
> >
> > --
> > Gregory (Greg) L. Snow Ph.D.
> > 538...@gmail.com
> 



Re: [R] Delaunay Graph, once again

2014-06-23 Thread Rolf Turner


On 24/06/14 06:24, Raphael Päbst wrote:


I have the same suspicion. Unfortunately I am not a Matlab user either
and only have an adjacency matrix created by the corresponding Matlab
functions to compare with my own output. So, even though I don't see
any use for collinear triples either, is there a way to get them with
deldir as well?


Uh, no. fortune(228) springs to mind here.

cheers,

Rolf



On 6/21/14, Rolf Turner  wrote:


You are almost surely being bitten by the issue about which you
previously made inquiries to me, concerning the delaunayn() function
from the "geometry" package.  (NOTE: ***PACKAGE***!!!  Not "library".  A
library is a *collection* of packages.) The delaunayn()
function also uses the qhull algorithm.  By default it treats some
triples of *collinear* points as triangles.  This is probably causing
the discrepancy between the results.

Ergo it would seem best to use deldir() unless you *really want*
collinear triples to be considered as triangles --- and I can't see why
you would.






Re: [R] Using RPostgreSQL

2014-06-23 Thread Anirudh Kondaveeti
Just to add to the previous message: the ad_data_new table has fewer than
10 columns.

Anirudh Kondaveeti 
Senior Data Scientist
Pivotal (EMC)




On Mon, Jun 23, 2014 at 4:15 PM, Anirudh Kondaveeti <
anirudh.kondave...@gmail.com> wrote:

> Hi All,
>
> I am using RPostgreSQL to pull data from Greenplum database. I have
> successfully connected to the database and am using the following command
> to retrieve the first 100 records from a table by the name "ad_data_new"
>
> s <- dbSendQuery(con, "select * from ad_data_new limit 100")
>
> This is taking more than 15 minutes to execute. The original table has
> ~100,000 rows. At the current rate, it wouldn't be feasible to pull the
> data into R using the above command.
>
> Any suggestions on improving the performance? Or do you experience the
> same performance as well?
>
>
>



[R] Using RPostgreSQL

2014-06-23 Thread Anirudh Kondaveeti
Hi All,

I am using RPostgreSQL to pull data from Greenplum database. I have
successfully connected to the database and am using the following command
to retrieve the first 100 records from a table by the name "ad_data_new"

s <- dbSendQuery(con, "select * from ad_data_new limit 100")

This is taking more than 15 minutes to execute. The original table has
~100,000 rows. At the current rate, it wouldn't be feasible to pull the
data into R using the above command.

Any suggestions on improving the performance? Or do you experience the
same performance as well?



Re: [R] Custom sampling method in R XXXX

2014-06-23 Thread Dan Abner
Hi Greg,

Thanks, this makes sense. I can envision the call to the sample fn
like you are describing. Any ideas on how to construct the vector? I
am still unclear about that.

Thanks,

Dan

On Mon, Jun 23, 2014 at 5:26 PM, Greg Snow <538...@gmail.com> wrote:
> The sample function can be used to sample discrete values with
> designated probabilities.  I would just construct your list of 5
> values based on the selected value (duplicating end values if needed,
> so a choice of x=0 would be the vector c(0,0,0, 0.125, 0.25) ), then
> sample from this vector with the probabilities that you specify.
>
> On Mon, Jun 23, 2014 at 3:11 PM, Dan Abner  wrote:
>>  Hi all,
>>
>> I have the following situation and a good efficient way to perform
>> this operation in R has not come to me. Any suggestions/input are
>> welcome.
>>
>> I have a user-defined parameter (let's call it x) whose value is
>> selected from a set of possible values (j). Once the user selects one
>> of the values of j for x, then I need to map a probability
>> distribution to the values of j such that the middle probability of
>> .5385 (see probs below) is associated with the value of x and the tail
>> probabilities are assigned to the 2 values below x and 2 values above
>> x in j. Therefore, in the example below:
>>
>>
>> x<-.250
>> j<-c(0.000,0.125,0.250,0.375,0.500,0.625,0.750,0.875,1.000)
>> probs<-c(0.02307692,0.20769231,0.53846154,0.20769231,0.02307692)
>>
>> probabilities would be assigned to the values of j as such:
>>
>> value  probability
>> 0      0.023077
>> 0.125  0.207692
>> 0.25   0.538462
>> 0.375  0.207692
>> 0.5    0.023077
>>
>> And then 1 value of j is selected based on the associated probability.
>> Any ideas on an efficient way to do this?
>>
>> An added dimension of complexity is when the value of x is selected
>> near the parameter boundary of j. If x = 0, then the easiest thing I
>> can think of is to assign probabilities as:
>>
>> value  probability
>> 0      0.76923077
>> 0.125  0.207692
>> 0.25   0.023077
>>
>> However, I am open to other possibilities.
>>
>> Any assistance is appreciated.
>>
>> Thanks,
>>
>> Dan
>>
>
>
>
> --
> Gregory (Greg) L. Snow Ph.D.
> 538...@gmail.com



[R] c() with POSIXlt objects and their timezone is lost

2014-06-23 Thread Marc Girondot
When two POSIXlt objects are combined with c(), they lose their tzone
attribute, even if both have the same one.

I don't know if it is a feature, but I don't like it!

Marc

> es <- strptime("2010-02-03 10:20:30", format="%Y-%m-%d %H:%M:%S", tz="UTC")

> es
[1] "2010-02-03 10:20:30 UTC"
> attributes(es)
$names
[1] "sec"   "min"   "hour"  "mday"  "mon"   "year"  "wday"  "yday" "isdst"

$class
[1] "POSIXlt" "POSIXt"

$tzone
[1] "UTC"

> c(es, es)
[1] "2010-02-03 11:20:30 CET" "2010-02-03 11:20:30 CET"
> attributes(c(es, es))
$names
 [1] "sec"    "min"    "hour"   "mday"   "mon"    "year"   "wday"
 [8] "yday"   "isdst"  "zone"   "gmtoff"


$class
[1] "POSIXlt" "POSIXt"

$tzone
[1] "" "CET"  "CEST"
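One workaround sketch (hedged: this sidesteps c.POSIXlt entirely rather than fixing it): concatenate on the numeric seconds scale and rebuild a POSIXct with the desired timezone using the base constructor .POSIXct().

```r
## Sketch only: combine on the numeric (seconds-since-epoch) scale, then
## rebuild a POSIXct with an explicit timezone instead of relying on c()
## to preserve the tzone attribute.
es <- strptime("2010-02-03 10:20:30", format = "%Y-%m-%d %H:%M:%S", tz = "UTC")
res <- .POSIXct(c(unclass(as.POSIXct(es)), unclass(as.POSIXct(es))), tz = "UTC")
res
## [1] "2010-02-03 10:20:30 UTC" "2010-02-03 10:20:30 UTC"
```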



Re: [R] sparse factor and findAssocs

2014-06-23 Thread Sarah Goslee
Hi,

On Mon, Jun 23, 2014 at 9:11 AM, ashwin ittoo  wrote:
> Hello
> I have been using R for some text pre-processing. I have 2 questions
> concerning the tm package:
> 1) The function removeSparseTerms takes as parameters a matrix and a
> sparse factor. Can anyone please tell me how the sparse factor is
> calculated? I have tried playing around with different values and then
> inspecting the matrix, but I could not grasp the maths behind it.

The help says percentage, although since sparse can range from 0 to 1
this is likely proportion instead. But you could always look at the
source yourself if you want to know for certain.
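The proportion interpretation can be illustrated without tm at all. The rule below (keep a term when the proportion of documents missing it is strictly below `sparse`) is my reading of what removeSparseTerms() does; the exact boundary handling is an assumption to verify against the tm source.

```r
## Toy sketch, NOT using tm: a plain document-term matrix with rows =
## documents and columns = terms.
dtm <- matrix(c(1, 0, 0,
                2, 1, 0,
                3, 1, 0,
                1, 0, 1),
              nrow = 4, byrow = TRUE,
              dimnames = list(NULL, c("alpha", "beta", "gamma")))
sparse <- 0.6
absent <- colMeans(dtm == 0)  # proportion of docs NOT containing each term
absent
## alpha  beta gamma
##  0.00  0.50  0.75
dtm[, absent < sparse, drop = FALSE]  # hypothesised rule: keeps alpha, beta
```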

>
> 2) Similarly, the function findAssocs() takes as parameters a matrix, a
> term and an association threshold; e.g. findAssocs(mat, "test", .5) will
> return all the tokens in the matrix mat (created from a corpus) that have
> an association strength of 0.5 with the term "test". Can anyone please
> tell me what association metric is being used, e.g. chi-squared or mutual
> information? The documentation, help.search("findAssocs"), does not say
> anything. I read on a web page (which I cannot retrieve now) that
> findAssocs is a *generic* function, but this is still very vague.

The help says correlation, and the vignette "Introduction to the tm
Package" confirms that. Again, you could check the source, or you
could contact the package maintainer, which is the appropriate thing
to do for questions of this sort.

Sarah

--
Sarah Goslee
http://www.functionaldiversity.org



Re: [R] Custom sampling method in R XXXX

2014-06-23 Thread Greg Snow
The sample function can be used to sample discrete values with
designated probabilities.  I would just construct your list of 5
values based on the selected value (duplicating end values if needed,
so a choice of x=0 would be the vector c(0,0,0, 0.125, 0.25) ), then
sample from this vector with the probabilities that you specify.

On Mon, Jun 23, 2014 at 3:11 PM, Dan Abner  wrote:
>  Hi all,
>
> I have the following situation and a good efficient way to perform
> this operation in R has not come to me. Any suggestions/input are
> welcome.
>
> I have a user-defined parameter (let's call it x) whose value is
> selected from a set of possible values (j). Once the user selects one
> of the values of j for x, then I need to map a probability
> distribution to the values of j such that the middle probability of
> .5385 (see probs below) is associated with the value of x and the tail
> probabilities are assigned to the 2 values below x and 2 values above
> x in j. Therefore, in the example below:
>
>
> x<-.250
> j<-c(0.000,0.125,0.250,0.375,0.500,0.625,0.750,0.875,1.000)
> probs<-c(0.02307692,0.20769231,0.53846154,0.20769231,0.02307692)
>
> probabilities would be assigned to the values of j as such:
>
> value  probability
> 0      0.023077
> 0.125  0.207692
> 0.25   0.538462
> 0.375  0.207692
> 0.5    0.023077
>
> And then 1 value of j is selected based on the associated probability.
> Any ideas on an efficient way to do this?
>
> An added dimension of complexity is when the value of x is selected
> near the parameter boundary of j. If x = 0, then the easiest thing I
> can think of is to assign probabilities as:
>
> value  probability
> 0      0.76923077
> 0.125  0.207692
> 0.25   0.023077
>
> However, I am open to other possibilities.
>
> Any assistance is appreciated.
>
> Thanks,
>
> Dan
>



-- 
Gregory (Greg) L. Snow Ph.D.
538...@gmail.com



[R] Custom sampling method in R XXXX

2014-06-23 Thread Dan Abner
 Hi all,

I have the following situation and a good efficient way to perform
this operation in R has not come to me. Any suggestions/input are
welcome.

I have a user-defined parameter (let's call it x) whose value is
selected from a set of possible values (j). Once the user selects one
of the values of j for x, then I need to map a probability
distribution to the values of j such that the middle probability of
.5385 (see probs below) is associated with the value of x and the tail
probabilities are assigned to the 2 values below x and 2 values above
x in j. Therefore, in the example below:


x<-.250
j<-c(0.000,0.125,0.250,0.375,0.500,0.625,0.750,0.875,1.000)
probs<-c(0.02307692,0.20769231,0.53846154,0.20769231,0.02307692)

probabilities would be assigned to the values of j as such:

value  probability
0      0.023077
0.125  0.207692
0.25   0.538462
0.375  0.207692
0.5    0.023077

And then 1 value of j is selected based on the associated probability.
Any ideas on an efficient way to do this?

An added dimension of complexity is when the value of x is selected
near the parameter boundary of j. If x = 0, then the easiest thing I
can think of is to assign probabilities as:

value  probability
0      0.76923077
0.125  0.207692
0.25   0.023077

However, I am open to other possibilities.

Any assistance is appreciated.

Thanks,

Dan



[R] Problem with "nlm" function to minimize the negative log likelihood

2014-06-23 Thread Ferra Xu
Hi all
I have a problem using the "nlm" function to minimize the negative log
likelihood of a function in R. The problem is that it gives me the same
estimated values for all the parameters, except one of them, in each
iteration. Does anyone have any idea what may cause this?

Thank you



[R] sparse factor and findAssocs

2014-06-23 Thread ashwin ittoo
Hello
I have been using R for some text pre-processing. I have 2 questions
concerning the tm package:
1) The function removeSparseTerms takes as parameters a matrix and a
sparse factor. Can anyone please tell me how the sparse factor is
calculated? I have tried playing around with different values and then
inspecting the matrix, but I could not grasp the maths behind it.


2) Similarly, the function findAssocs() takes as parameters a matrix, a
term and an association threshold; e.g. findAssocs(mat, "test", .5) will
return all the tokens in the matrix mat (created from a corpus) that have
an association strength of 0.5 with the term "test". Can anyone please
tell me what association metric is being used, e.g. chi-squared or mutual
information? The documentation, help.search("findAssocs"), does not say
anything. I read on a web page (which I cannot retrieve now) that
findAssocs is a *generic* function, but this is still very vague.

kind regards
ashwin



[R] Dead link in the help page of as.Date()

2014-06-23 Thread Christofer Bogaso
Hi,

I was reading the help page for as.Date() function for some reason,
and noticed a Matlab link:

http://www.mathworks.com/help/techdoc/matlab_prog/bspgcx2-1.html

It looks like this link is dead, so maybe it would be better to put in a
correct link or remove it altogether.

Thanks and regards,



Re: [R] Reading CSV file every time when R starts

2014-06-23 Thread MacQueen, Don
However, be careful, because .Rprofile runs before any saved .RData file
is loaded. If the object you created in .Rprofile is later saved when you
quit R, then the next time you start R the saved version will replace the
version created in .Rprofile (unless you start R with R --no-restore).
This means that if you create the object in .Rprofile, later change it,
and then save it when you quit R, the next time you start R you will have
the modified version, not the original version.

See ?Startup for this information (?Startup also answers your original
question).

-Don

-- 
Don MacQueen

Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94550
925-423-1062





On 6/23/14 10:51 AM, "Christofer Bogaso" 
wrote:

>Hi Greg,
>
>Thanks for your prompt reply. I have added 'library(utils)' in my
>Rprofile file and now it is working fine.
>
>Thanks and regards,
>
>On Mon, Jun 23, 2014 at 11:34 PM, Greg Snow <538...@gmail.com> wrote:
>> The .Rprofile file is processed before all the standard packages are
>> loaded, that is why you are seeing the error.  If you instead run the
>> command as utils::read.csv or use library or require to manually load
>> the utils package before calling read.csv then everything should work
>> for you.
>>
>> On Mon, Jun 23, 2014 at 11:42 AM, Christofer Bogaso
>>  wrote:
>>> Hi again,
>>>
>>> I am trying to build a procedure such that whenever R starts, it will
>>> read some CSV file.
>>>
>>> Therefore I put a call such as 'read.csv(...)' in my Rprofile
>>> file. However, if I put this function there, then I get an error
>>> saying 'Error: could not find function "read.csv"' whenever R starts.
>>>
>>> So is it the case that the read.csv() function will not work if it is
>>> put to run from the Rprofile file?
>>>
>>> I got a similar error with the flush.console() function as well.
>>>
>>> Can someone tell me how I can achieve that? My goal is to read some
>>> CSV file on startup of R.
>>>
>>> Thanks for your pointer.
>>>
>>
>>
>>
>> --
>> Gregory (Greg) L. Snow Ph.D.
>> 538...@gmail.com
>



Re: [R] Delaunay Graph, once again

2014-06-23 Thread Raphael Päbst
I have the same suspicion. Unfortunately I am not a Matlab user either
and only have an adjacency matrix created by the corresponding Matlab
functions to compare with my own output. So, even though I don't see
any use for collinear triples either, is there a way to get them with
deldir as well?

All the best!

Raphael

On 6/21/14, Rolf Turner  wrote:
>
> You are almost surely being bitten by the issue about which you
> previously made inquiries to me, concerning the delaunayn() function
> from the "geometry" package.  (NOTE: ***PACKAGE***!!!  Not "library".  A
> library is a *collection* of packages.) The delaunayn()
> function also uses the qhull algorithm.  By default it treats some
> triples of *collinear* points as triangles.  This is probably causing
> the discrepancy between the results.
>
> Ergo it would seem best to use deldir() unless you *really want*
> collinear triples to be considered as triangles --- and I can't see why
> you would.
>
> Your code for calculating the adjacency list is correct. (Except for the
> fact that you put in unnecessary semi-colons --- this is R, not C ---
> and for the fact that a comment runs over into the next line, causing an
> error to be thrown when one tries to copy and paste your code.)
>
> Try a toy example:
>
> require(deldir)
> set.seed(42)
> x <- runif(6)
> y <- runif(6)
> dxy <- deldir(x,y)
> ind <- dxy$dirsgs[,5:6]
> adj <- matrix(0, length(x), length(y))
> for (i in 1:nrow(ind)){
> adj[ind[i,1], ind[i,2]] <- 1
> adj[ind[i,2], ind[i,1]] <- 1
> }
> adj
>      [,1] [,2] [,3] [,4] [,5] [,6]
> [1,]    0    1    0    1    0    1
> [2,]    1    0    1    1    1    0
> [3,]    0    1    0    0    1    1
> [4,]    1    1    0    0    1    1
> [5,]    0    1    1    1    0    1
> [6,]    1    0    1    1    1    0
> ind
>    ind1 ind2
> 1     2    1
> 2     3    2
> 3     4    1
> 4     4    2
> 5     5    2
> 6     5    3
> 7     5    4
> 8     6    1
> 9     6    3
> 10    6    4
> 11    6    5
>
> This looks right.
>
> I'm not a Matlab user so I can't check what Matlab would give, but I'm
> pretty sure the results would be the same in this toy example where
> there are no collinear triples.
>
> cheers,
>
> Rolf Turner
>
> On 21/06/14 03:17, Raphael Päbst wrote:
>> Hello again,
>> After playing around with my current problem for some time, I have
>> once again returned to Delaunay Graphs and after banging my head
>> against the problem for some time I fear that I can't see the issue
>> clearly anymore and want to ask for some outside comments, to maybe
>> shake my thoughts loose again.
>>
>> So, let me explain my project and then my problem:
>>   I have a set of points, given as x- and y-coordinates and want to
>> create the adjacency matrix of the corresponding Delaunay Graph. I
>> have already removed duplicates in my set of points and to calculate
>> the matrix, I have hit upon the following procedure, which might not
>> be the most efficient, but I'm aiming for correct results and
>> simplicity first, before I can think about efficiency.
>>
>> # x: a vector of length n, containing the x-coordinates of my n points
>> # y: the same for the y-coordinates
>> library(deldir)
>> del <- deldir(x, y) # calculating the tesselation
>> dels <- del$delsgs[,5:6] # giving me a data.frame with 2 columns,
>> containing the indices of points connected by an edge in the Delaunay
>> Graph
>>
>> adj <- matrix(0, length(x), length(y)) # creating the empty adjacency
>> matrix
>>
>> for (i in 1:nrow(dels)){ # going through the rows of dels
>> adj[dels[i,1], dels[i,2]] <- 1; # since the points in a row of dels
>> are connected, the matrix at that point is set to 1
>> adj[dels[i,2], dels[i,1]] <- 1; # and this makes it symmetric
>> }
>>
>> Now as I see it this should give me the adjacency matrix, right?
>>
>> So let's come to my problem: The program I'm writing is a translation
>> of some Matlab-Code and so the results of both versions should be the
>> same, if at all possible. My adjacency matrix created in the above
>> mentioned manner however is not the same as the one created with
>> Matlab. The Matlab-Version uses algorithms based on Qhull, while I use
>> deldir as a library, could this account for the difference or have I
>> fundamentally misunderstood some part of the whole plan?
>>
>> I hope the information is helpful and would very much appreciate any
>> pointers and thoughts on the matter.
>
>



Re: [R] Reading CSV file every time when R starts

2014-06-23 Thread Christofer Bogaso
Hi Greg,

Thanks for your prompt reply. I have added 'library(utils)' in my
Rprofile file and now it is working fine.

Thanks and regards,

On Mon, Jun 23, 2014 at 11:34 PM, Greg Snow <538...@gmail.com> wrote:
> The .Rprofile file is processed before all the standard packages are
> loaded, that is why you are seeing the error.  If you instead run the
> command as utils::read.csv or use library or require to manually load
> the utils package before calling read.csv then everything should work
> for you.
>
> On Mon, Jun 23, 2014 at 11:42 AM, Christofer Bogaso
>  wrote:
>> Hi again,
>>
>> I am trying to build a procedure such that whenever R starts, it will
>> read some CSV file.
>>
>> Therefore I put a call such as 'read.csv(...)' in my Rprofile
>> file. However, if I put this function there, then I get an error
>> saying 'Error: could not find function "read.csv"' whenever R starts.
>>
>> So is it the case that the read.csv() function will not work if it is
>> put to run from the Rprofile file?
>>
>> I got a similar error with the flush.console() function as well.
>>
>> Can someone tell me how I can achieve that? My goal is to read some
>> CSV file on startup of R.
>>
>> Thanks for your pointer.
>>
>
>
>
> --
> Gregory (Greg) L. Snow Ph.D.
> 538...@gmail.com



Re: [R] Reading CSV file every time when R starts

2014-06-23 Thread Greg Snow
The .Rprofile file is processed before all the standard packages are
loaded, that is why you are seeing the error.  If you instead run the
command as utils::read.csv or use library or require to manually load
the utils package before calling read.csv then everything should work
for you.
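A minimal sketch of such an .Rprofile fragment (the file name "mydata.csv" and the object name "mydata" are placeholders):

```r
## Hypothetical ~/.Rprofile fragment -- "mydata.csv" is a placeholder.
## .Rprofile is sourced before the standard packages attach, so call
## read.csv with its namespace qualified (or run library(utils) first).
local({
  f <- "mydata.csv"
  if (file.exists(f)) {
    # assign into the global environment so the object survives local()
    assign("mydata", utils::read.csv(f), envir = globalenv())
  }
})
```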

On Mon, Jun 23, 2014 at 11:42 AM, Christofer Bogaso
 wrote:
> Hi again,
>
> I am trying to build a procedure such that whenever R starts, it will
> read some CSV file.
>
> Therefore I put a call such as 'read.csv(...)' in my Rprofile
> file. However, if I put this function there, then I get an error
> saying 'Error: could not find function "read.csv"' whenever R starts.
>
> So is it the case that the read.csv() function will not work if it is
> put to run from the Rprofile file?
>
> I got a similar error with the flush.console() function as well.
>
> Can someone tell me how I can achieve that? My goal is to read some
> CSV file on startup of R.
>
> Thanks for your pointer.
>



-- 
Gregory (Greg) L. Snow Ph.D.
538...@gmail.com



[R] Reading CSV file every time when R starts

2014-06-23 Thread Christofer Bogaso
Hi again,

I am trying to build a procedure such that whenever R starts, it will
read a CSV file.

Therefore I put a line like 'read.csv(...)' in my Rprofile file.
However, with that call there, I get the error
'Error: could not find function "read.csv"' whenever R starts.

So is it the case that read.csv() will not work when run from the
Rprofile file?

I got a similar error with the flush.console() function as well.

Can someone tell me how I can achieve this? My goal is to read a CSV
file on startup of R.

Thanks for your pointer.



Re: [R] Help partimat()

2014-06-23 Thread daniel_stahl
I am a bit late for this discussion but hope that I might still get an
answer. I tried to follow your advice and had a look at "drawparti", but
could not find the relevant section on how to plot specific pairs of
variables.

I am trying to plot only a few of the combinations, but I always receive
an error message. For example, here I tried to plot variable 2 against
all the other variables (columns 3 to 10); [,1] is my grouping variable.

 drawparti(mydata[,1] ~ mydata[,2], mydata[,3:10], data = mydata, method = 
"rda", gamma=0,lamda=1)

Error in rda.formula(grouping ~ x + y, data = cbind.data.frame(grouping = 
grouping,  : 
  formal argument "data" matched by multiple actual arguments


Best wishes, Daniel


On Friday, March 29, 2013 3:13:58 PM UTC, Uwe Ligges wrote:
>
>
>
> On 29.03.2013 15:59, Antelmo Aguilar wrote: 
> > Hello David, 
> > 
> > Thank you for letting me know that the partimat() function calls that
> > function.  I am kind of new to R, so I do not know exactly how to
> > describe the structure.  If I understand correctly, what I essentially
> > need to do is pass all the different data sets into one partimat() call,
> > and partimat() will then create the different plots; the way the
> > different data sets get passed in is by describing a structure for the
> > different data and passing that into partimat().  Is my thinking
> > correct?  Could I also be directed to a website that shows how to
> > describe such a data structure, or could someone be so generous as to
> > tell me how to do this?  I would greatly appreciate it, and thank you
> > for the help.
>
>
> partimat is intended to plot several plots, one for each combination of
> explanatory variables in a classification problem.
>
> If you want to generate such plots separately and/or combine them in
> another way, see the help page. It says "See Also: for much more fine
> tuning see drawparti". The latter function allows you to generate a
> single plot that can then be arranged with others by the user.
>
> Best, 
> Uwe Ligges 
>
>
>
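For what it's worth, Daniel's error most likely comes from passing `data = mydata` on top of the explanatory vectors: drawparti() builds a data frame internally, so the extra `data` argument collides with it. A sketch of the per-pair calls (the data frame below is made up for illustration, and I am assuming gamma and lambda are forwarded to rda through `...`):

```r
library(klaR)

## made-up stand-in for 'mydata': column 1 is the grouping factor
set.seed(1)
mydata <- data.frame(grp = factor(rep(1:3, each = 20)),
                     matrix(rnorm(60 * 9), ncol = 9))

## one partition plot per pair: pass vectors, not a formula plus data =
for (j in 3:10) {
  drawparti(grouping = mydata[, 1], x = mydata[, 2], y = mydata[, j],
            method = "rda", gamma = 0, lambda = 1)
}
```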


Re: [R] lattice : superpose symbols with a great many points

2014-06-23 Thread Laurent Rhelp

Le 23/06/2014 01:03, Dennis Murphy a écrit :

Hi:

There are times when lattice is easier than ggplot2 and times when the
converse is true. This is a case where ggplot2 is easier to code once
you understand what it is doing. I'm going to try to reproduce your
lattice graph in ggplot2 as follows:

library(ggplot2)

# Set up the rows of data.sim where points should be plotted
pidx <- seq(1, nrow(data.sim), by = 100)

# Default appearance, plotting lines and points; the shapes are
# specified in scale_shape_manual()
p <- ggplot(data.sim, aes(x = time, y = value, colour = essai, shape = essai)) +
 geom_vline(xintercept = 0, linetype = "dotted") +
 geom_line() +
 geom_point(data = data.sim[pidx, ]) +
 scale_shape_manual(values = 0:4)
p  # render it in the graphics window

# Change some background elements to more closely resemble the lattice
# plot:
#  * change the colors of ticks and tick labels
#  * remove grid lines
#  * put a bounding box on the graphics panel

p + theme_bw() +
theme(panel.grid.major = element_blank(),
  panel.grid.minor = element_blank(),
  axis.text = element_text(color = "black"),
  axis.ticks = element_line(color = "black"),
  panel.border = element_rect(color = "black"))



After doing this, I realized your lattice code could be simplified
with the latticeExtra package:

library(latticeExtra)

# Set up the rows of data.sim where points should be plotted
pidx <- seq(1, nrow(data.sim), by = 100)

# Plot the lines
p1 <- xyplot(value ~ time, data = data.sim, groups = essai, type = "l",
  col = 1:5)

# Plot the points - notice the input data frame
p2 <- xyplot(value ~ time, data = data.sim[pidx, ], groups = essai,
  type = "p", pch = 1:5, col = 1:5)

# Plot the vertical line
p3 <- xyplot(value ~ time, data = data.sim,
  panel = function(x, y, ...)
 panel.abline(v = 0, lty = "dotted", col = "black")
 )
p1 + p2 + p3

This is a consequence of the layering principles added to latticeExtra
a while back, which are designed to emulate the layered grammar of
graphics (ggplot2). Basically, each plot is a different piece, and you
can see that each trellis object is similar to the plot layers added
by geom_*() in ggplot2. This works largely because the scaling is the
same in all three graphs. One advantage of ggplot2 is that it produces
pretty decent legends by default. You have to work harder to achieve
that in lattice, particularly when you divide plots into layers like
this. (I think you'd need to add the same key = ... code to each plot,
but I didn't try it.) If the legend is irrelevant to you, then this
should be fine.

HTH,
Dennis

On Sun, Jun 22, 2014 at 2:09 PM, Laurent Rhelp  wrote:

Le 20/06/2014 21:50, Christoph Scherber a écrit :


Dear Laurent

for numeric x variables, you could try jitter:

xyplot(y~jitter(x,0.5))

Cheers
Christoph


Am 20.06.2014 21:45, schrieb Laurent Rhelp:

Hi,

   I like to use with xyplot (package lattice) the groups argument and
superpose.symbol to compare several curves. But, when there are a great many
points, the symbols are very close and the graph becomes unreadable. Would
there be an argument  or a tip not to draw all the symbols, for example the
symbols could be drawn only every x points, x given as an argument ?

Thanks
Best regards
Laurent



---
Ce courrier électronique ne contient aucun virus ou logiciel malveillant
parce que la protection avast! Antivirus est active.





That is a very good idea! The symbols become readable, but it is not
quite an answer to the issue.

I finally wrote the code below, which works, but I think there is
certainly something simpler (shorter)!

Thanks

--o<->o
##
## 1. mock data
##
n <- 1000   # length of each curve (the value was garbled in the archive)
time <- seq(-10*pi,10*pi,length=n)
essai <- c("essai1","essai2","essai3","essai4","essai5")

ll <- list()
for( i in 1:5){
ll[[i]] <-data.frame(time=time,
 value=sin(time+(pi/i)),
 essai=essai[i])
}
data.sim <- do.call("rbind",ll)
##
## 2. lattice initialisation for the colors and the symbols
##
para.liste <- trellis.par.get()
superpose.symbol <- para.liste$superpose.symbol
superpose.symbol$pch <- seq(1,5)
superpose.symbol$col <- seq(1,5)
##
## 3. lattice code
##
xyplot(value ~ time,
data=data.sim,
nr=100,
groups=essai,


Re: [R] Cox regression model for matched data with replacement

2014-06-23 Thread Therneau, Terry M., Ph.D.



On 06/23/2014 05:00 AM, r-help-requ...@r-project.org wrote:

My problem was how to build a Cox model for the matched data (1:n) with
replacement. Usually, we can use stratified Cox regression model when the
data were matched without replacement. However, if the data were matched
with replacement, due to the re-use of subjects, we should give a weight
for each pair, then how to incorporate this weight into a Cox model. I also
checked the "clogit" function in survival package, it seems suitable to the
logistic model for the matched data with replacement, rather than Cox
model. Because it sets the time to a constant. Anyone can give me some
suggestions?


You don't need to worry.  In an ordinary Cox model, controls are "reused" at
each death time; very early on this caused some concern, but the theory has
been worked out in multiple ways.  In your example a random sample is taken
from each of the potential control sets, which preserves the underlying
martingale theory.
  If you are still concerned, then add " + cluster(id)" to the model
statement to force a GEE-type variance estimate, where id is a variable
that identifies individual subjects.


Terry Therneau
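Terry's suggestion translated into code; a sketch only, where the variable names (time, status, exposure, pair.id, id, w) are assumptions about the matched data set:

```r
library(survival)

## stratify on the matched set; cluster(id) gives the robust (GEE-type)
## variance for controls that were sampled with replacement
fit <- coxph(Surv(time, status) ~ exposure + strata(pair.id) + cluster(id),
             data = matched.data, weights = w)
summary(fit)   # the "robust se" column reflects the clustering
```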



Re: [R] Fwd; Trellis devices and pointize

2014-06-23 Thread Patrick Connolly
Thanks Deepayan,

I'd never have figured that out for myself.
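For anyone landing on this thread later, the setting Deepayan points to can be changed right after opening the device; a sketch:

```r
library(lattice)

trellis.device(device = pdf, file = "Singers.pdf",
               height = 160/25.4, width = 160/25.4)
## lattice takes its base sizes from the trellis settings,
## not from the device pointsize
trellis.par.set(fontsize = list(text = 28, points = 14))
print(bwplot(voice.part ~ height, data = singer))
dev.off()
```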


On Sat, 21-Jun-2014 at 08:47PM +0530, Deepayan Sarkar wrote:

|> On Sat, Jun 21, 2014 at 1:00 PM, Patrick Connolly
|>  wrote:
|> > Hello Deepayan,
|> >
|> > The question below I asked on the Rhelp list.  For the first time in
|> > my experience, what Brian Ripley says seems not to be the case.  No
|> > one else made any comment, hence this direction to the main Lattice
|> > man.
|> >
|> > It might, indeed, be more of a grid issue which you would understand
|> > better than the rest of us.  Please direct me to any documentation
|> > that covers the question.  My searches have been unsuccessful.
|> 
|> Basically, the settings are taken from
|> 
|> > trellis.par.get("fontsize")
|> $text
|> [1] 12
|> 
|> $points
|> [1] 8
|> 
|> I'll give a more detailed response on r-help.
|> 
|> -Deepayan
|> 
|> > Thank you.
|> >
|> >
|> > - Forwarded message from Patrick Connolly  -
|> >
|> > Date: Thu, 12 Jun 2014 21:01:20 +1200
|> > From: Patrick Connolly 
|> > To: Prof Brian Ripley 
|> > Cc: R-help 
|> > Subject: Re: [R] Trellis devices and pointize
|> > User-Agent: Mutt/1.5.21 (2010-09-15)
|> >
|> > On Mon, 09-Jun-2014 at 08:33AM +0100, Prof Brian Ripley wrote:
|> >
|> > |> The issue here is not trellis.device.
|> > |>
|> > |> You are using lattice plots (without mentioning lattice), which are
|> > |> based on package 'grid' and so using the grid sub-system of a
|> > |> device. That sub-system does not use the 'pointsize' of the device
|> > |> as its initial font size.  So you need to use
|> > |>
|> > |> grid::gpar(fontsize = 28)
|> > |>
|> > |> If I call that after opening the device I get what I guess you expected.
|> >
|> > I don't see it making any difference at all.  At one time I supposed
|> > that some clever inner workings rescaled everything to something more
|> > sensible for a plotting region that size, but that is not the case.
|> >
|> >>   trellis.device(device = pdf, file = "Singers.pdf", height = 160/25.4,
|> > +  width = 160/25.4)
|> >
|> >> grid::get.gpar()$fontsize
|> > [1] 12
|> >> grid::gpar(fontsize = 8)
|> > $fontsize
|> > [1] 8
|> >
|> >> grid::get.gpar()$fontsize
|> > [1] 12
|> >>
|> >
|> > Evidently I missed something somewhere.  Grid graphics is sometimes a
|> > bit too subtle for me.  I know gpar() doesn't work exactly analogously
|> > to the way par() works.  I can't find any examples in Paul's "R
|> > Graphics" book or Deepayan's "Lattice" book (but that might just be
|> > lack of searching skills).
|> >
|> > Then I tried putting it in the call to print.trellis:
|> >
|> >   print(pik, plot.args = grid::gpar(fontsize = 8))
|> >
|> > and in the bwplot() call similar to the way I'd done in panel functions
|> > with an argument called gp but I get no error message or any
|> > difference in my resulting plot.
|> >
|> > I know I can fiddle with trellis.par.get() and trellis.par.set() but
|> > that's a bit long-winded when it's so simple to do in base graphics.
|> >
|> > TA
|> >
|> > |>
|> > |>
|> > |> On 09/06/2014 07:54, Patrick Connolly wrote:
|> > |> >How is the pointsize set in trellis.devices?
|> > |> >
|> > |> >From my reading of the trellis.device help file, I understood that the
|> > |> >pointsize arg would be referenced to the call to the pdf function.
|> > |> >
|> > |> >So I set up a trellis pdf device as so:
|> > |> >
|> > |> >   trellis.device(device = pdf, file = "Singers.pdf", height = 160/25.4,
|> > |> >  width = 160/25.4, pointsize = 28)
|> > |> >
|> > |> >A base R graphics plot works as I'd expected.
|> > |> >
|> > |> >   plot(1:10, 50:59) # silly plot with huge plotting characters and letters
|> > |> >
|> > |> >However, pointsize is ignored in trellis plots;
|> > |> >
|> > |> >   pik <- bwplot(voice.part ~ height, data = singer)# pointsize ignored
|> > |> >   print(pik)
|> > |> >   dev.off()
|> > |> >
|> > |> >There are many trellis cex-type settings, but FWIU they're all
|> > |> >relative to the default size.  My question is: How do I set that
|> > |> >default?
|> > |> >
|> > |> >
|> > |> >R version 3.0.2 (2013-09-25)
|> > |> >Platform: i686-pc-linux-gnu (32-bit)
|> > |> >
|> > |> >locale:
|> > |> >  [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C
|> > |> >  [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
|> > |> >  [5] LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8
|> > |> >  [7] LC_PAPER=en_US.UTF-8   LC_NAME=C
|> > |> >  [9] LC_ADDRESS=C   LC_TELEPHONE=C
|> > |> >[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
|> > |> >
|> > |> >attached base packages:
|> > |> >[1] grDevices utils stats graphics  methods   base
|> > |> >
|> > |> >other attached packages:
|> > |> >[1] RColorBrewer_1.0-5 lattice_0.20-24
|> > |> >
|> > |> >loaded via a namespace (and not attached):
|> > |> >[1] grid_3.0.2  plyr_1.8tools_3.0.2
|> > |> >
|> > |> >I've tried with R-3.1.0 on another machine so I don't think the
|> > |> >problem is with an old version.
|> > 

Re: [R] fortran package crashes

2014-06-23 Thread Prof Brian Ripley

On 23/06/2014 10:06, Karl May wrote:

Hi all,

I have written a small Fortran routine, to be attached to R for private use,
that reads matrices written to binary files by a Fortran program (I could
not get "readBin" to read them). Unfortunately, when using the package, R
crashes every now and then, but not always, with a segmentation fault. When
restarting R, it very often happens that the same read command,
parameterised as before, is successful. Since the routine compiles without
errors when installing it into R, I have no idea where to search further
for the bug, and it would be great if you could have a look at the source
code below.


See the 'Writing R Extensions' manual, 
http://cran.r-project.org/doc/manuals/r-patched/R-exts.html#Debugging


Such things are usually caused by either array overruns or use of 
uninitialized memory.
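Two concrete candidates for that diagnosis in this wrapper (my reading, not stated in the thread): `ISStat <- c()` makes `as.integer(ISStat)` a zero-length vector, so the Fortran side writes past it; and the dummy arguments are declared with 64-bit integer kinds while .Fortran passes R's default 32-bit integers. Note also that `Package =` is not the `PACKAGE =` argument of .Fortran, so it is passed to the Fortran routine as an extra argument. A sketch of a corrected wrapper:

```r
readfortran <- function(filename, nrows, ncols) {
  dat <- matrix(0, nrow = nrows, ncol = ncols)
  .Fortran("readfortran",
           csfilename = as.character(filename),
           ISNCol = as.integer(ncols),  # Fortran side must use default
           ISNRow = as.integer(nrows),  # (32-bit) INTEGER to match these
           RMOut = dat,
           ISStat = integer(1),         # length 1, not integer(0)
           PACKAGE = "readfortran")     # exact spelling matters here
}
```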





Thank you very much.


readfortran <- function(filename,nrows,ncols){
 dat <- matrix(0,ncol=ncols,nrow=nrows)
 ISStat <- c()
 out <- .Fortran("readfortran",
 csfilename=as.character(filename),
 ISNCol=as.integer(ncols),
 ISNRow=as.integer(nrows),
 RMOut=dat,
 ISStat=as.integer(ISStat),
 Package="readfortran")
 return(out)
}

Subroutine readfortran(csfilename,ISNcol,ISNrow,RMOut,ISStat)
   Implicit None
   Integer, Parameter :: IkXL=Selected_Int_Kind(12)
   Integer, Parameter :: IkS=Selected_Int_Kind(2)
   Integer(IkS), Parameter :: RkDbl=Selected_Real_Kind(15,100)
   Character(len=100), intent(inout) :: CSFileName
   Integer(IKXL), intent(inout) ::ISNCol, ISNrow
   Real(rkdbl), Intent(inout), Dimension(ISNRow,ISNCol) :: RMOut
   Character(len=400) :: CSErr
   Integer(ikXL), intent(inout) :: ISStat
   Integer(IkXL) :: c1, c2
   open(unit=200,file=Trim(AdjustL(CSFilename)),status="old",action="&
&read",form="unformatted",iostat=isstat,iomsg=cserr)
   If(ISSTat==0) Then
 Do c1=1,ISNRow
   read(200,iostat=isstat,iomsg=Cserr) (RMOut(c1,c2),c2=1,ISNCol)
   If(ISStat/=0) Then
 write(*,*) "Reading error"//Trim(AdjustL(CSErr))
 exit
   End If
 End Do
   Else
 write(*,*) "Opening error "//Trim(AdjustL(CSErr))
   End If
   If(ISSTat==0) Then
  write(*,*) "Reading successful"
   End If
   close(unit=200,status="keep")
End Subroutine readfortran

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Please do.  This is not reproducible, you have not told us your platform,
and this is the wrong list.


--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



[R] fortran package crashes

2014-06-23 Thread Karl May
Hi all,

I have written a small Fortran routine, to be attached to R for private use,
that reads matrices written to binary files by a Fortran program (I could
not get "readBin" to read them). Unfortunately, when using the package, R
crashes every now and then, but not always, with a segmentation fault. When
restarting R, it very often happens that the same read command,
parameterised as before, is successful. Since the routine compiles without
errors when installing it into R, I have no idea where to search further
for the bug, and it would be great if you could have a look at the source
code below.

Thank you very much.


readfortran <- function(filename,nrows,ncols){
dat <- matrix(0,ncol=ncols,nrow=nrows)
ISStat <- c()
out <- .Fortran("readfortran",
csfilename=as.character(filename),
ISNCol=as.integer(ncols),
ISNRow=as.integer(nrows),
RMOut=dat,
ISStat=as.integer(ISStat),
Package="readfortran")
return(out)
}

Subroutine readfortran(csfilename,ISNcol,ISNrow,RMOut,ISStat)
  Implicit None
  Integer, Parameter :: IkXL=Selected_Int_Kind(12)
  Integer, Parameter :: IkS=Selected_Int_Kind(2)
  Integer(IkS), Parameter :: RkDbl=Selected_Real_Kind(15,100)
  Character(len=100), intent(inout) :: CSFileName
  Integer(IKXL), intent(inout) ::ISNCol, ISNrow
  Real(rkdbl), Intent(inout), Dimension(ISNRow,ISNCol) :: RMOut
  Character(len=400) :: CSErr
  Integer(ikXL), intent(inout) :: ISStat
  Integer(IkXL) :: c1, c2
  open(unit=200,file=Trim(AdjustL(CSFilename)),status="old",action="&
   &read",form="unformatted",iostat=isstat,iomsg=cserr)
  If(ISSTat==0) Then
Do c1=1,ISNRow
  read(200,iostat=isstat,iomsg=Cserr) (RMOut(c1,c2),c2=1,ISNCol)
  If(ISStat/=0) Then
write(*,*) "Reading error"//Trim(AdjustL(CSErr))
exit
  End If
End Do
  Else
write(*,*) "Opening error "//Trim(AdjustL(CSErr))
  End If
  If(ISSTat==0) Then
write(*,*) "Reading successful"
  End If
  close(unit=200,status="keep")
End Subroutine readfortran
