Hi everyone,

I have a rather interesting problem that I would like to solve with awk or bash scripting, but if all else fails I will switch to Python.
Here is the idea: I have a set of files, each with 30 columns, coming from a set of 10 data loggers. Each file represents the output from a single data logger. The ordering of the columns is consistent, and maps to a soil-pit id and moisture-probe id. I need to create a look-up table to index ids to column numbers. Then, looping over the dimensions of the look-up table would allow me to process each file line by line, column by column. Do bash or awk support this type of data structure?

This is how I would envision it in something like Python / PHP:

# a multi-dimensional hash for each datalogger:
# referencing the sensors assigned to a pit
# and the column number in which the sensor values exist in the output file
datalogger_1[
    pit_1[
        sensor_1 => 4
        sensor_2 => 5
        sensor_3 => 6
        sensor_4 => 7
    ],
    pit_2[
        sensor_1 => 8
        sensor_2 => 9
        sensor_3 => 10
        sensor_4 => 11
        sensor_5 => 12
    ],
    ...
]

# the logic of the program would be:
iterate over the pits in the datalogger hash
    iterate over the sensors in each pit hash
        look up the column number for each sensor
        do cool stuff
    end
end

Any ideas? Or should I just stick with Python for this?

Cheers,

--
Dylan Beaudette
Soils and Biogeochemistry Graduate Group
University of California at Davis
530.754.7341

_______________________________________________
vox-tech mailing list
vox-tech@lists.lugod.org
http://lists.lugod.org/mailman/listinfo/vox-tech
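[Editorial note: the look-up table sketched above maps fairly directly onto awk's simulated multi-dimensional arrays, where the indices are joined into a single key with the SUBSEP character. The following is a minimal sketch, not from the original message; the pit/sensor names and column numbers are taken from the hypothetical layout above.]

```shell
#!/bin/sh
# Sketch only: awk simulates a multi-dimensional array by joining the
# indices into one key, e.g. col["pit_1", "sensor_1"] is really
# col["pit_1" SUBSEP "sensor_1"].
awk 'BEGIN {
    # look-up table for one data logger: (pit, sensor) -> column number
    col["pit_1", "sensor_1"] = 4
    col["pit_1", "sensor_2"] = 5
    col["pit_2", "sensor_1"] = 8
    col["pit_2", "sensor_2"] = 9

    # iterate over every (pit, sensor) pair; split() on SUBSEP
    # recovers the two original indices
    for (key in col) {
        split(key, idx, SUBSEP)
        printf "%s / %s is in column %d\n", idx[1], idx[2], col[key]
        # on real data lines you would read field $col[key] here
        # and "do cool stuff" with it
    }
}' | sort
```

Note that the iteration order of `for (key in col)` is unspecified, hence the `sort`. Bash 4+ offers a similar composite-key trick with associative arrays (`declare -A col; col[pit_1,sensor_1]=4`), though awk is usually the more natural fit for column-oriented files.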