Re: [R] How to protect two jobs running on the same directory at the same time not to corrupt each other results:
On Thursday, 8 February 2007 21:33, Aldi Kraja wrote:
> Is there any other solution better than creating separate directories
> in R? I am thinking if there is any option in R to create a unique id
> which has its own unique .Rdata, although in the same directory?

You could try using tempfile() in combination with basename(). From the help page on tempfile():

  The names are very likely to be unique among calls to 'tempfile' in an
  R session and across simultaneous R sessions. The filenames are
  guaranteed not to be currently in use.

So this is my suggestion for the generation of a useful, unique id in your working dir:

  prefix <- paste("var_", var, sep = "")
  while (file.exists(id <- basename(tempfile(pattern = prefix)))) {}
  save(result, file = paste(id, ".Rdata", sep = ""))

Hope this helps.

Michel Lang

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[R] Permutation problem
Dear R-Users,

I need a matrix containing line by line all possible permutations with length 'i' of the integers 1:N, given the restriction that the integers in each row have to be in ascending order. For example, N = 5 and length i = 3 should result in a matrix like this:

  1 2 3
  1 2 4
  1 2 5
  1 3 4
  1 3 5
  1 4 5
  2 3 4
  2 3 5
  2 4 5
  3 4 5

I'm grateful for any advice on how to proceed,

Michel Lang
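A follow-up sketch (not part of the original thread): rows in ascending order make these combinations rather than permutations, and base R's combn() enumerates the i-element combinations of 1:N in exactly the lexicographic order shown above. combn() returns one combination per column, so a transpose yields the requested row-wise matrix:

```r
N <- 5
i <- 3

# combn(N, i) enumerates the i-element combinations of 1:N as columns;
# t() flips them into one combination per row, already sorted ascending.
m <- t(combn(N, i))

print(m)        # 10 x 3 matrix: 1 2 3, 1 2 4, ..., 3 4 5
print(nrow(m))  # choose(5, 3) = 10 rows
```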
Re: [R] RODBC sqlQuery insert slow
> I am trying to insert a lot of data into a table using Windows R
> (2.3.1) and a MySQL database via RODBC. First I read a file with
> read.csv and then form SQL insert statements for each row and execute
> the insert query one row at a time. See the loop below. This turns out
> to be very slow. Can anyone please suggest a way to speed it up?

A few weeks ago I had to solve a similar task. Inserting each row turned out to be horribly slow due to paste() and the data.frame indexing. The estimated runtime would have been over 3 weeks, so I used MySQL's LOAD DATA INFILE syntax to speed things up. You must have FILE_PRIV = 'Y' set in the mysql.user table to use this small hack, and I'm not sure that this works remotely. It is also assumed that your df has valid column names.

  tmp_filename <- tempfile()
  write.table(df, tmp_filename, na = "\\N", row.names = FALSE,
              col.names = FALSE, quote = FALSE, sep = "\t")
  query <- paste("LOAD DATA LOCAL INFILE '", tmp_filename,
                 "' INTO TABLE ", your_table,
                 " (", toString(names(df)), ");", sep = "")
  sqlQuery(channel, query)
  unlink(tmp_filename)

The total runtime for the LOAD DATA INFILE queries was something below 5 minutes, inserting 3e+06 rows with 200 columns.

Michel Lang