I'd like to add a way for loadtxt to infer a dtype from the data it reads in.
Record arrays/structured arrays are the best thing ever, and ideally I'd like to read a csv-style file into a structured array in one easy step. loadtxt almost does this: if I know the number and type of the fields before reading the file, I can specify the dtype keyword and loadtxt will give me a structured array. But having to know the dtype before you read a file (including the required string lengths!) is a real pain.

It would be great if you could tell loadtxt to read a file into a structured array and guess the dtype for each field. I've made some changes to lib/io.py that do this by adding a 'names' keyword to loadtxt. If a list of field names is given, loadtxt reads the file data into a structured array, trying int, float, and str types for each column and keeping whichever is suitable for all the data in that column.

Does this sound like a good approach?

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
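For anyone who wants a concrete picture of the inference scheme described above, here is a rough standalone sketch (not the actual lib/io.py patch; the function and parameter names are mine). It tries int, then float, for every value in a column, and falls back to a fixed-width string dtype sized to the longest value:

```python
import io
import numpy as np

def guess_column_dtype(values):
    # Try int, then float; fall back to a fixed-width string
    # dtype wide enough for every value in the column.
    for typ in (int, float):
        try:
            for v in values:
                typ(v)
            return np.dtype(typ)
        except ValueError:
            pass
    return np.dtype('S%d' % max(len(v) for v in values))

def loadtxt_guessed(f, names, delimiter=','):
    # Read delimited text into a structured array, inferring
    # each field's dtype from the data in its column.
    rows = [line.strip().split(delimiter) for line in f if line.strip()]
    columns = list(zip(*rows))
    dt = np.dtype([(name, guess_column_dtype(col))
                   for name, col in zip(names, columns)])
    # Convert each token to its field's scalar type before
    # building the structured array.
    converted = [tuple(v if dt[i].kind == 'S' else dt[i].type(v)
                       for i, v in enumerate(row))
                 for row in rows]
    return np.array(converted, dtype=dt)

data = io.StringIO("1,2.5,abc\n2,3.5,de\n")
arr = loadtxt_guessed(data, ['x', 'y', 'label'])
```

This two-pass approach (scan the column, then convert) is the simple version; the real patch may interleave things differently, but the promotion order int -> float -> str is the same.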