Re: [Bioc-devel] COHCAP GitHub Update

2017-09-04 Thread Sean Davis
Hi, Charles. See replies inline. On Mon, Sep 4, 2017 at 3:30 PM, Charles Warden wrote: > Hi Sean, > > I'm admittedly on a different computer now, but I still get an error > message (although it is different from the last one): > > git clone

Re: [Rd] Suggestion: Create On-Disk Dataframes; SparkR

2017-09-04 Thread frederik
What's wrong with SparkR? I never heard of either Spark or SparkR. For on-disk dataframes there is a package called 'ff'. I looked into using it; it works well, but there are some drawbacks in the implementation. I think that it should be possible to mmap an object from disk and use it as a
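
For context, a minimal sketch of what an on-disk data frame looks like with 'ff'; the column names and sizes here are illustrative, not taken from the thread:

    library(ff)
    # as.ffdf() copies the data into memory-mapped files on disk,
    # so the full table never has to sit in RAM at once
    fdf <- as.ffdf(data.frame(x = rnorm(1e5), y = seq_len(1e5)))
    dim(fdf)     # 100000 rows, 2 columns
    fdf[1:3, ]   # rows are read back from disk on demand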

Re: [Bioc-devel] COHCAP GitHub Update

2017-09-04 Thread Sean Davis
Hi, Charles. The command is slightly off. Note that the first '/' should be a ':'. This worked for me: git clone g...@git.bioconductor.org:packages/COHCAP.git Cloning into 'COHCAP'... remote: Counting objects: 485, done. remote: Compressing objects: 100% (479/479), done. remote: Total 485 (delta 329),

Re: [Bioc-devel] COHCAP GitHub Update

2017-09-04 Thread Charles Warden
Hi Nitesh, Thanks. Unfortunately, I can't connect to the Bioconductor repository that way: cwarden$ git clone g...@git.bioconductor.org/packages/COHCAP.git fatal: repository 'g...@git.bioconductor.org/packages/COHCAP.git

Re: [Bioc-devel] git transition: unrelated histories

2017-09-04 Thread Turaga, Nitesh
Make sure you do not have duplicate commits when you merge unrelated histories. Thanks Nitesh Get Outlook for Android From: Fabian Müller Sent: Monday, September 4, 2017 9:27:14 AM To: Turaga, Nitesh Cc:

Re: [Bioc-devel] git transition: unrelated histories

2017-09-04 Thread Fabian Müller
Hi Nitesh, I think I solved it: another “--allow-unrelated-histories” for the merge from the additional branch did the trick. Best, Fabian > On 1. Sep 2017, at 20:57, Turaga, Nitesh > wrote: > > Hi Fabian, > > Could you please try the merge for the other

Re: [Rd] Suggestion: Create On-Disk Dataframes

2017-09-04 Thread Dirk Eddelbuettel
On 4 September 2017 at 11:35, Suzen, Mehmet wrote: | It is not needed. There is a large community of developers using SparkR. | https://spark.apache.org/docs/latest/sparkr.html | It does exactly what you want. I hope you are not going to mail a SparkR commercial to this list every day. As the

Re: [Rd] readLines() segfaults on large file & question on how to work around

2017-09-04 Thread Tomas Kalibera
As of R-devel 72925 one gets a proper error message instead of the crash. Tomas On 09/04/2017 08:46 AM, rh...@eoos.dds.nl wrote: Although the problem can apparently be avoided in this case, readLines causing a segfault still seems unwanted behaviour to me. I can replicate this with the

Re: [Rd] Suggestion: Create On-Disk Dataframes

2017-09-04 Thread Suzen, Mehmet
It is not needed. There is a large community of developers using SparkR. https://spark.apache.org/docs/latest/sparkr.html It does exactly what you want. On 3 September 2017 at 20:38, Juan Telleria wrote: > Dear R Developers, > > I would like to suggest the creation of a new
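
For readers who have not tried it, a hedged sketch of the SparkR workflow being recommended, assuming a local Spark installation:

    library(SparkR)
    sparkR.session()                    # start a local Spark session
    df <- as.DataFrame(faithful)        # a SparkDataFrame managed by Spark, not held in R's memory
    head(filter(df, df$waiting < 50))   # computed by Spark; only the result comes back to R
    sparkR.session.stop()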

[Rd] Suggestion: Create On-Disk Dataframes

2017-09-04 Thread Juan Telleria
Dear R Developers, I would like to suggest the creation of a new S4 object class for On-Disk data.frames which do not fit in RAM, which could be called disk.data.frame(). It could be based on RSQLite, for example (by translating R syntax to SQL syntax), and the syntax and way of
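
A rough sketch of the idea using DBI and RSQLite; the proposed disk.data.frame() class itself is hypothetical, so this only shows the kind of translation such a class might perform:

    library(DBI)
    library(RSQLite)
    con <- dbConnect(SQLite(), "mtcars.sqlite")            # the table lives on disk
    dbWriteTable(con, "mtcars", mtcars, overwrite = TRUE)
    # a '[' or subset() method on the class might generate a query like this:
    dbGetQuery(con, "SELECT mpg, cyl FROM mtcars WHERE hp > 150")
    dbDisconnect(con)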

Re: [Rd] readLines() segfaults on large file & question on how to work around

2017-09-04 Thread rhelp
Although the problem can apparently be avoided in this case, readLines causing a segfault still seems unwanted behaviour to me. I can replicate this with the example below (sessionInfo is further down): # Generate an example file l <- paste0(sample(c(letters, LETTERS), 1E6, replace = TRUE),
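
Since the preview cuts off, here is a hedged reconstruction of that kind of example, assuming the crash comes from a single line exceeding R's 2^31 - 1 character limit; the sizes are illustrative, and note the script writes a multi-gigabyte file:

    # one million random letters, pasted into a single string
    l <- paste0(sample(c(letters, LETTERS), 1E6, replace = TRUE), collapse = "")
    con <- file("test.txt", open = "w")
    for (i in seq_len(3000)) writeLines(l, con, sep = "")  # one ~3e9-character line
    close(con)
    x <- readLines("test.txt")  # segfaulted before R-devel 72925; now a proper error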