Hey Guys
I am wondering if I can execute/run Python functions through the
multiprocessing package on a grid/cluster rather than on the same local
machine. It would help me create hundreds of jobs to which the same function
has to be applied and farm them out to our local cluster through DRMAA. I am
not sure if
IBM). I realize this could be
more suited to HDFS, but I wanted to know if people have implemented
something similar on a normal Linux-based NFS.
-Abhi
On Mon, Mar 26, 2012 at 6:44 PM, Steve Howell showel...@yahoo.com wrote:
On Mar 26, 3:56 pm, Abhishek Pratap abhishek@gmail.com wrote:
Hi Guys
] concurrent file reading using python
To: tu...@python.org
Abhishek Pratap wrote:
Hi Guys
I want to utilize the power of the cores on my server and read big files
(> 50 GB) simultaneously by seeking to N locations.
Yes, you have many cores on the server. But how many hard drives is
each file on? If all
Hey Guys
Pushing this one again just in case it was missed last night.
Best,
-Abhi
On Mon, Oct 31, 2011 at 10:31 PM, Abhishek Pratap abhishek@gmail.com wrote:
Hey Guys
I should mention I am relatively new to the language. Could you please let me
know based on your experience which module
Hey Guys
I should mention I am relatively new to the language. Could you please let me
know, based on your experience, which module could help me farm out jobs
to our existing clusters (we use SGE here) using Python.
Ideally I would like to do the following.
1. Submit #N jobs to cluster
2.
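One lightweight way to do this, assuming SGE's qsub is on the PATH (names, paths, and the script here are illustrative, not from the original post), is to build the submission command for each of the N jobs in Python and hand them to subprocess:

```python
def build_qsub_commands(script, n_jobs, outdir="logs"):
    # Build one hypothetical `qsub` command per job. The job names and
    # log paths are illustrative; adapt them to the local SGE setup.
    cmds = []
    for i in range(1, n_jobs + 1):
        cmds.append(["qsub", "-N", f"job{i}",
                     "-o", f"{outdir}/job{i}.out",
                     "-e", f"{outdir}/job{i}.err",
                     script, str(i)])
    return cmds

if __name__ == "__main__":
    for cmd in build_qsub_commands("run_step.sh", 3):
        print(" ".join(cmd))
        # To actually submit on a cluster node:
        # subprocess.run(cmd, check=True)
```

For large batches, SGE array jobs (`qsub -t 1-N`) do the same thing with a single submission, which is usually kinder to the scheduler.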
AM, Roy Smith r...@panix.com wrote:
In article
c6cbd486-7e5e-4d26-93b9-088d48a25...@g9g2000yqb.googlegroups.com,
aspineux aspin...@gmail.com wrote:
On Sep 9, 12:49 am, Abhishek Pratap abhishek@gmail.com wrote:
1. My input file is 10 GB.
2. I want to open 10 file handles each handling 1
Hi Guys
My experience with Python is 2 days and I am looking for a slick way
to use multi-threading to process a file. Here is what I would like to
do, which is somewhat similar to MapReduce in concept.
# test case
1. My input file is 10 GB.
2. I want to open 10 file handles each handling 1 GB