[SQL] sum(time) problem
Hi all, I've got the following table structure:

sport=# \d polar
          Table "polar"
 Column |           Type           | Modifiers
--------+--------------------------+-----------
 ts     | timestamp with time zone | not null
 time   | time without time zone   |
 sport  | integer                  | default 1
 kcal   | integer                  |
Primary key: polar_pkey

sport=# select * from polar limit 3;
           ts           |   time   | sport | kcal
------------------------+----------+-------+------
 2001-05-17 19:47:31+02 | 00:28:25 |     1 |  388
 2001-05-18 17:08:11+02 | 00:42:36 |     1 |  595
 2001-05-19 13:41:43+02 | 00:51:39 |     1 |  737
(3 rows)

I'd like to have the total amount of time:

sport=# select sum(time) from polar where extract(year from ts)=2001;
ERROR:  Unable to select an aggregate function sum(time without time zone)

It seems I've chosen the wrong type. Or is there another solution?

Oliver

--
VECERNIK Datenerfassungssysteme
A-2560 Hernstein, Hofkogelgasse 17
Tel.: +43 2633 47530, Fax: DW 50
http://members.aon.at/vecernik

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
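One likely workaround, assuming the column really stores durations rather than clock times, is to cast the values to interval, for which sum() is defined:

```sql
-- Treat each time value as a duration and sum the resulting intervals.
SELECT sum(time::interval) AS total_time
FROM polar
WHERE extract(year FROM ts) = 2001;
```

If the column only ever holds elapsed times, declaring it as interval in the first place would avoid the cast entirely.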
[SQL] plpgsql: debugging
Hi,

Searching Google I found a thread from July 2001 concerning the facilities for debugging plpgsql functions. The actual answer was: it should be improved.

What is the best way to debug a plpgsql function?

Oliver
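Lacking a real debugger, the common technique is to sprinkle RAISE NOTICE statements through the function and watch the client or server log. A minimal sketch (add_one is a hypothetical function, written in the quoted-body style PostgreSQL 7.2 requires, with doubled single quotes inside the body):

```sql
CREATE FUNCTION add_one(integer) RETURNS integer AS '
DECLARE
    v ALIAS FOR $1;
BEGIN
    -- RAISE NOTICE prints its message to the client and the server log;
    -- % is substituted with the following argument.
    RAISE NOTICE ''add_one called with %'', v;
    RETURN v + 1;
END;
' LANGUAGE 'plpgsql';
```

Each NOTICE line then shows how far execution got and what the variables contained at that point.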
[SQL] import error
Hi all! I'm trying to import a text file from Suunto Dive Manager. I've got the following table structure:

CREATE TABLE dives (
    id int, dive_number int, dive_date date, time_of_day time,
    series int, dc_dive_number int, dive_time int, surface_interval int,
    max_depth float, mean_depth float, dc_type int, dc_serial_number int,
    dc_personal_data text, dc_sample_rate int, dc_altitude_mode int,
    dc_personal_mode int, solution_time_adj int, modified int,
    location text, site text, weather text, water_visibility text,
    air_temp int, water_temp int, water_temp_at_end int,
    partner text, dive_master text, boat_name text, cylinder_desc text,
    cylinder_size float, cylinder_units_code int,
    cylinder_work_pressure float, cylinder_start_pressure float,
    cylinder_end_pressure int, sac_rate float, sac_units int,
    user_field_1 text, user_field_2 text, user_field_3 text,
    user_field_4 text, user_field_5 text,
    weight int, oxygen_percent int, olf_percent int, otu_flag int
);

Data is structured as follows:

1,1,"07.08.2001","11:35",1,1,2220,0,6.0,0.0,12,0,"",20,0,0,0,1,"Sarigerme","Asche Bucht","wolkenlos",">20m",35,0,30,"","Mehmet Semerkant","","",0.0,1,0.,0.,0,0.,0,"","","","","",12,0,0,0

My import gives the following (garbled) error:

sport=# copy dives from '/var/n/tmp/ov.CSV' delimiters ',';
": can't parse "ne 1, pg_atoi: error in "0

My first thought was that the date and time were wrong (because of the "0 in the message), but even removing them for a test results in the same error. How can I find out which field exactly causes this error? My PostgreSQL version is (on Debian/Woody):

sport=# select version();
                           version
---------------------------------------------------------------
 PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.4
(1 row)

Regards,
Oliver
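One way to narrow down the offending field (a sketch; dives_raw is a hypothetical staging table) is to COPY into a twin table whose columns are all text, so the load cannot fail on parsing, and then probe each column for values that won't convert to the target type:

```sql
-- All-text twin of dives, so COPY accepts any field content.
CREATE TABLE dives_raw (
    id text, dive_number text, dive_date text, time_of_day text
    -- ... remaining columns, all declared as text ...
);

COPY dives_raw FROM '/var/n/tmp/ov.CSV' DELIMITERS ',';

-- Find rows whose id is not a plain integer:
SELECT * FROM dives_raw WHERE id !~ '^[0-9]+$';
```

Repeating the regex check per integer column points directly at the field (and row) that pg_atoi is choking on.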
Re: CR/LF conversion (was: [SQL] import error)
Oliver Vecernik schrieb:
> Hi all! I'm trying to import a text file from Suunto Dive Manager.
> I've got the following table structure:

Arghh ... it's always the same problem with CR/LF conversions. After changing the line endings to just LFs, everything worked like a charm. Is there an elegant way to handle this automatically?

Oliver
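The conversion can be scripted so it happens before every import; a minimal sketch from the shell (file names are hypothetical):

```shell
# Delete every CR (\r) byte, turning DOS CR/LF line endings into Unix LF.
tr -d '\r' < ov.CSV > ov-unix.CSV
```

Running this as a fixed preprocessing step removes the need to remember the conversion by hand.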
[SQL] CSV import
Hi again! After investigating a little bit further, my CSV import couldn't work for the following reasons:

1. CSV files are delimited with CR/LF
2. text fields are surrounded by double quotes

Is there a direct way to import such files into PostgreSQL? I would like to have something like MySQL provides:

LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name.txt'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [FIELDS
        [TERMINATED BY '\t']
        [[OPTIONALLY] ENCLOSED BY '']
        [ESCAPED BY '\\']
    ]
    [LINES TERMINATED BY '\n']
    [IGNORE number LINES]
    [(col_name,...)]

Has anybody written such a function already?

Regards,
Oliver
Re: [SQL] CSV import
Chad Thompson schrieb:
> Unix EOL is LF not CR. Is this the only difference between a dos and
> unix text file?

Yes, but to be more precise:

dos:  CR + LF
unix: LF
mac:  CR

Oliver
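All three conventions can be normalized to Unix LF in a couple of lines; a sketch in Python (to_unix is a hypothetical helper; the order of the replacements matters, since CR+LF must be collapsed before lone CRs):

```python
def to_unix(data):
    # Replace DOS CR+LF first, then any remaining Mac-style lone CR.
    return data.replace('\r\n', '\n').replace('\r', '\n')
```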
Re: [SQL] Converting clarion
Nasair Junior da Silva schrieb:
> Dear friends, i'm looking for some program that converts clarion
> database files to sql instructions. Someone can help-me ?

Try Google with: clarion sql convert

My first result is:
http://www.clarionmag.com/cmag/v1/v1n4convertingtosql.html

HTH
Oliver
Re: [SQL] CSV import
Oliver Vecernik schrieb:
> Hi again! After investigating a little bit further, my CSV import
> couldn't work for the following reasons:
>
> 1. CSV files are delimited with CR/LF
> 2. text fields are surrounded by double quotes
>
> Is there a direct way to import such files into PostgreSQL?

The answer seems to be no. But after googling a bit I found a wonderful Python module called csv at:

http://www.object-craft.com.au/projects/csv/

A minimal script called 'csv2tab.py' for conversion to a tab-delimited file could be:

#!/usr/bin/env python
import csv
import sys

def convert(file):
    try:
        f = open(file, 'r')
    except IOError:
        # Catch only the open failure; parse errors should stay visible.
        print 'Error opening file!'
        return
    p = csv.parser()
    for line in f.readlines():
        print '\t'.join(p.parse(line))

if __name__ == '__main__':
    convert(sys.argv[1])

Regards,
Oliver