Hi, 

I have a huge ASCII file (40 GB, around 100M lines). I read this file
using:

import os
import numpy as np

f = np.loadtxt(os.path.join(dir, myfile), delimiter=None, skiprows=0)

x = f[:, 1]
y = f[:, 2]
z = f[:, 3]
id = f[:, 0]

I will need the x, y, z, and id arrays later for interpolation. The problem
is that reading the file takes around 80 minutes, while the interpolation
itself only takes about 15 minutes.

I was wondering if there is a more efficient way to read the file that
would reduce this time?
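
For what it's worth, I was thinking of trying pandas.read_csv, which I've
read parses large text files much faster than np.loadtxt; a rough sketch of
what I had in mind (assuming whitespace-delimited columns and no header row):

import pandas as pd

# pandas' C parser is reportedly much faster than np.loadtxt on big text files;
# sep=r"\s+" splits on any whitespace, header=None since the file has no header
df = pd.read_csv(os.path.join(dir, myfile), sep=r"\s+", header=None)
f = df.to_numpy()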

I have the same problem when writing the output using np.savetxt. 
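
For the output side, I also thought about writing NumPy's binary .npy format
with np.save instead of np.savetxt, and caching the parsed input the same way
so later runs skip the text parsing entirely; a minimal sketch (hypothetical
cache filename, assuming the whole array fits in memory):

np.save('mydata_cache.npy', f)   # one-time conversion after the slow text read
f = np.load('mydata_cache.npy')  # later runs: binary load, far faster than loadtxt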

Thank you in advance for your help,