[SQL] SQL copy from csv with explicit field ordering

2001-03-02 Thread Terry Fielder

I guess this is really a SQL question:

I have a csv that I want to import, but its columns are in a different
order than the table's.

I have tried putting the column names in the first row, but then the
copy command fails on the first field with an integer data type (i.e. it
is seeing the cells in the first row as data, not header info).

ggtest=> copy ap_trans from '/var/dbase/ap_trans.csv' using delimiters
',';
ERROR:  pg_atoi: error in "id": can't parse "id"


The help indicates:
ggtest=> \h copy
Command: copy
Description: copy data to and from a table
Syntax:
COPY [BINARY] class_name [WITH OIDS]
TO|FROM filename|STDIN|STDOUT [USING DELIMITERS 'delim'];


I have tried WITH OIDS, but with the same results.

Is there some way that I can either treat the first line of the CSV as
header names,

OR

explicitly define my import field ordering from within the copy
command?

Thanks

Terry Fielder
[EMAIL PROTECTED]
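
For reference, later PostgreSQL releases handle both halves of this: COPY
accepts an explicit column list (matched, in order, to the columns in the
file) and CSV mode has a HEADER option that skips the first line.  A
minimal sketch; the column names after id are made up for illustration:

COPY ap_trans (id, trans_date, amount)   -- listed in the CSV's column order
FROM '/var/dbase/ap_trans.csv'
WITH (FORMAT csv, HEADER true);          -- treat the first line as a header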





Re: [SQL] query

2005-03-17 Thread Terry Fielder




now() returns a timestamp.

Cast it to a date and then you can subtract days. e.g.

select now()::date -1
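
Applied to the query quoted below, that gives roughly:

select *
from table1
where created_date >= now()::date - 1;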

Terry

Chandan_Kumaraiah wrote:

Hi,

In Oracle we write sysdate-1.  For example, we write a query
(select * from table1 where created_date >= sysdate-1).  What is its
equivalent in Postgres?

Chandan


-- 
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085




Re: [SQL] NULL in IN clause

2005-10-19 Thread Terry Fielder



Havasvölgyi Ottó wrote:

Hi,

I have just run this command on 8.0.4 :

SELECT 'foo' WHERE 0 NOT IN (NULL, 1);


0 <> NULL yields NULL, not true (indeed nothing compares equal to NULL,
not even NULL itself).

0 <> 1 yields true.

NOT IN requires every comparison to be true, so the expression
0 NOT IN (NULL, 1) evaluates to NULL (unknown), never to true.

Therefore no rows returned.  Ever.
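
A quick way to see this from psql (standard SQL three-valued logic, not
anything specific to 8.0.4):

SELECT 'foo' WHERE 0 NOT IN (1, 2);        -- one row: the condition is true
SELECT 'foo' WHERE 0 NOT IN (NULL, 1);     -- zero rows: the condition is NULL
SELECT (0 NOT IN (NULL, 1)) IS NULL;       -- true, i.e. unknown rather than false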

Terry


And it resulted in zero rows.
Without the NULL it is OK.
Is this a bug, or does the standard have such a rule?

Best Regards,
Otto






--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Re: [SQL] casting character varying to integer - order by numeric

2005-10-19 Thread Terry Fielder

Check out the function to_number()

In particular, here's an example.
If a field named section is text containing numbers:

ORDER BY to_number(t.section, '999999')

If the field can also contain non-numerals such as 3a, 3b, and you want
3a to show first, then do this:

ORDER BY to_number(t.section, '999999'), t.section

And if the field section can actually START with an alpha, then to
prevent to_number from failing, do this:

ORDER BY to_number(textcat('0', t.section), '999999'), t.section
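
Applied to the query in the question below, that would look something
like this (an untested sketch):

SELECT username, last_name
FROM eg_member
ORDER BY to_number(textcat('0', username), '999999'), username;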

Terry

Bryce W Nesbitt wrote:

How can I force a character field to sort as a numeric field?
I've got something like this:

Postgres=> SELECT username,last_name
 FROM eg_member ORDER BY username;
 username | last_name
----------+-----------
 0120     | Foley
 1        | Sullivan
 10       | Guest
 11       | User
(5 rows)

(I can't change the field type).  I tried:

 SELECT username,last_name
 FROM eg_member ORDER BY username::integer;

But postgres 7 rejects this with "ERROR:  cannot cast type character 
varying to integer".  Is there a way to force numeric sort order?  I 
tried a variety of functions, such as to_char() and convert() without 
any luck.  Thanks for your insight!






--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Re: [SQL] migrating numeric to serial from MSSQL to postgres

2006-10-15 Thread Terry Fielder

I believe:
IDENTITY(1, 1) in M$SQL means an auto-incrementing column (seed 1,
increment 1), which is roughly what serial/bigserial gives you in Postgres.

numeric(18, 0) means a numeric field with 18 digits and zero decimal
places.  bigint goes up to 19 digits, so bigserial covers that range;
otherwise keep it as numeric(18, 0) in Postgres and attach a sequence.


There may be a way to get MSSQL to produce a SQL-compliant dump, which 
would make a migration to Postgres much easier if your schema is large.  
Without a SQL-compliant dump, you have a lot of cleaning up and tweaking 
of the dump to make it readable by Postgres (but that's what I have done 
the few times in the past I have had to do this, fortunately not for many 
statements :)
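
For a single identity column the translation can be as small as this
sketch (the table name is made up, and I am assuming Id is also the
primary key):

CREATE TABLE some_table (
    id bigserial PRIMARY KEY   -- replaces [Id] [numeric](18, 0) IDENTITY (1, 1)
    -- ... remaining columns ...
);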


Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Kenneth Gonsalves wrote:

hi,

I am migrating a database from MSSQL to postgres. How would I migrate this:

[Id] [numeric](18, 0) IDENTITY (1, 1)

--
regards
kg
http://lawgon.livejournal.com
http://nrcfosshelpline.in/web/





Re: [SQL] Getting pk of the most recent row, in a group by

2007-08-13 Thread Terry Fielder

Do you have a table of coupon types?

Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Bryce Nesbitt wrote:

I've got a table of "coupons" which have an expiration date.  For each
type of coupon, I'd like to get the primary key of the coupon which will
expire first. 


# create table coupon
(
coupon_id serial primary key,
type varchar(255),
expires date
);
insert into coupon values(DEFAULT,'free','2007-01-01');
insert into coupon values(DEFAULT,'free','2007-01-01');
insert into coupon values(DEFAULT,'free','2007-06-01');
insert into coupon values(DEFAULT,'free','2007-06-01');
insert into coupon values(DEFAULT,'50%','2008-06-01');
insert into coupon values(DEFAULT,'50%','2008-06-02');
insert into coupon values(DEFAULT,'50%','2008-06-03');

The desired query would look like:

# select coupon_id,type,expires from coupon where type='free' order by
expires limit 1;
 coupon_id | type |  expires  
-----------+------+------------
         1 | free | 2007-01-01


But be grouped by type:

# select type,min(expires),count(*) from coupon group by type;
 type |    min     | count
------+------------+---------
 free | 2007-01-01 | 4; pk=1
 50%  | 2008-06-01 | 3; pk=5

In the second example, is it possible to get the primary key of the row
with the minimum expires time?
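
For the record, one common PostgreSQL way to get exactly that (not
covered in the reply above) is DISTINCT ON, which keeps the first row of
each type in the given sort order:

SELECT DISTINCT ON (type) coupon_id, type, expires
FROM coupon
ORDER BY type, expires;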

  




Re: [SQL] Authentification failed

2007-08-14 Thread Terry Fielder
Connecting as root may be ill advised  (doing stuff as root is a bad 
idea unless one HAS to).


All that notwithstanding, you need to set up the correct permissions to 
allow the connections you want in pg_hba.conf, which usually lives at 
/var/lib/pgsql/data/pg_hba.conf
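
The file takes one rule per line; a minimal sketch of what such entries
can look like (the md5 method and the address here are assumptions,
adjust them to your setup):

# TYPE   DATABASE   USER   ADDRESS        METHOD
local    all        all                   md5
host     all        all    127.0.0.1/32   md5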


Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Bryce Nesbitt wrote:

Judith wrote:
  

Hello everybody!!

I'm trying in SUSE to connect to a postgres db and this is the error:

Ident Authentification failed for user <>

I already created the user with createuser root, but the error
persists.  I would appreciate some help, thanks in advance 


...or just cheat and get in as the postgres superuser:
psql -U postgres
which on SUSE has no password.

  


Re: [SQL] outer join issues

2008-02-06 Thread Terry Fielder



Tom Hart wrote:
Let me preface this by saying hello SQL list, and I'm an idiot. My SQL 
knowledge is advanced to the point of being able to use a WHERE clause 
basically, so I appreciate your business. Now on to my issue


I have 3 tables I'm trying to use in this query: loan, share and draft 
(for those of you not familiar with credit unions, share and draft are 
savings and checking accounts). What I'm trying to do is get a list of 
all loans that were charged off (ln_chgoff_dt > 0), and any share and 
draft accounts that have the same account number. My query looks 
something like this


SELECT ln_acct_num, ln_num, ln_chrgoff_dt, ln_chrgoff_amt, sh_balance, 
sh_stat_cd, df_balance, df_stat_cd

FROM loan
LEFT OUTER JOIN share ON loan.ln_acct_num = share.sh_acct_num
LEFT OUTER JOIN draft ON loan.ln_acct_num = draft.df_acct_num
WHERE
 ln_chrgoff_dt > 0
 AND loan.dataset = 0
 AND share.dataset = 0
 AND draft.dataset = 0
;
try

AND (share.dataset = 0 OR share.dataset IS NULL)

AND (draft.dataset = 0 OR draft.dataset IS NULL)

because when the left join finds no matching row, the dataset field will 
be NULL, which is not = 0 and hence would fail the AND clause in your 
version
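
Putting that together, the whole query would read roughly:

SELECT ln_acct_num, ln_num, ln_chrgoff_dt, ln_chrgoff_amt, sh_balance,
  sh_stat_cd, df_balance, df_stat_cd
FROM loan
LEFT OUTER JOIN share ON loan.ln_acct_num = share.sh_acct_num
LEFT OUTER JOIN draft ON loan.ln_acct_num = draft.df_acct_num
WHERE
 ln_chrgoff_dt > 0
 AND loan.dataset = 0
 AND (share.dataset = 0 OR share.dataset IS NULL)
 AND (draft.dataset = 0 OR draft.dataset IS NULL)
;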


Terry



Now the query
SELECT * FROM loan WHERE ln_chrgoff_dt > 0 AND loan.dataset = 0
returns 139 rows. Shouldn't the first query return at least that many? 
My understanding is that a LEFT OUTER JOIN will not drop any records 
that are only found in the first table, regardless of whether they 
match records on the second or third table. I end up with 14 results 
with the first query. I know I'm doing something wrong, but I'm not 
sure what. Anybody have a helpful kick in the right direction for me?


Thanks in advance.





Re: [SQL] I need some magical advice

2009-01-29 Thread Terry Fielder

The trick is to do a "GROUP BY" on your identifier (name)
and then use a HAVING clause to see if the count is more than 1.

NOTE: You likely need a query with subqueries that use GROUP BY, since 
you want to skip SOME of the records (i.e. one per group if that group 
does not have a status 1 record) but not others (update all the status 0 
rows in the group if the group has a status 1 record).


Hopefully that's enough of a hint, but if not when I get a moment I can 
spell it out in more detail.


NOTE: I recommend running a SELECT first, rather than an UPDATE, so you 
can see what WOULD be updated and verify your query is going to do what 
you want before you clobber data.
(or use a transaction, but if it's a live database you don't want an 
open transaction sitting around locking users out)
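
For what it is worth, here is a sketch of the kind of statement being
hinted at, written as a SELECT first as recommended (the table name dups
is an assumption; the column names come from the example below).  Turning
it into the update is then a matter of UPDATE dups d SET status_id = -1
with the same WHERE clause:

SELECT d.*
FROM dups d
WHERE d.status_id = 0                          -- only untouched rows are candidates
  AND EXISTS (SELECT 1
              FROM dups x
              WHERE x.name = d.name            -- same name group
                AND (x.status_id <> 0          -- the group has a touched row, or
                     OR x.c_date < d.c_date)); -- d is not the oldest row in its group
-- (ties on c_date in an all-zero group would leave more than one survivor)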


Terry


Terry Fielder
te...@greatgulfhomes.com
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Andreas wrote:

Hi,

I'd like to update some records in a table.
Those have a status_id and among other columns a varchar with a name 
and a create_date.

The status_id is 0 if nothing has been done with this record yet.

For some reason I've got duplicate entries, which I now want to flag 
with -1 so that they can be filtered out without actually deleting them, 
since there are other tables referencing them.


Within every group of rows that share the same name, all rows with 
status_id = 0 should get status_id set to -1.


The tricky bit is:
How can I ensure that 1 row of every group survives, even when all of 
them have status_id = 0?

Sometimes 2 rows of a group are touched, so both have to stay.


e.g.
c_date, status_id, name
2008/01/01,   0,   A --> -1
2008/01/02,   1,   A --> do nothing
2008/01/03,   0,   A --> -1

2008/01/01,   0,   B --> do nothing (single entry)

2008/01/01,   0,   C --> do nothing (oldest 0 survives)
2008/01/02,   0,   C --> -1

2008/01/01,   1,   D --> do nothing
2008/01/02,   1,   D --> do nothing




