Re: [SQL] The length of the sql query

2006-08-24 Thread Emi Lu

Hello,

It appears that in MySQL 3.23 the limit is 16 MB.  In 4.0 and later, it is 1 GB:

http://dev.mysql.com/doc/refman/4.1/en/packet-too-large.html


Could someone tell me where I can find the PostgreSQL documentation 
about the maximum query length, please?

Thanks a lot!


Hello,

Just curious to know whether PostgreSQL has any length constraint on 
the WHERE part, such as:

Query =
[
select col1, col2, ... coln
from table1, table2, ...
where
constraint1 AND constraint2 AND ... AND constraintN
]

Is there any length limit for the query string, such as 500M, 1G, etc.? 
Or can the query be as long as needed?


Thanks a lot!




Re: [SQL] The length of the sql query

2006-08-24 Thread Tom Lane
Emi Lu <[EMAIL PROTECTED]> writes:
> Could someone tell me where I can find the PostgreSQL documentation
> about the maximum query length, please?

The theoretical limit is 1Gb (because palloc won't allow creation of a
longer string than that).  The practical limit is probably a great deal
less, especially if you don't have a 64-bit machine with gobs of memory,
because the query text itself is hardly the major consumer of memory
for any real-world query.

Your question is really entirely meaningless when you haven't specified
exactly what sort of query you're thinking of or what kind of platform
you intend to try to run it on.

The 1Gb upper limit for MySQL is a pretty academic number too, for
exactly the same reasons.  Have you tried putting a 1Gb query string
into MySQL?

regards, tom lane
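
For a rough sense of the sizes involved, it is easy to exercise large
strings server-side; a minimal sketch (the 10 MB figure is arbitrary):

-- Build and measure a 10 MB string; any single value, like the query
-- text itself, falls under the same 1 GB palloc ceiling.
SELECT octet_length(repeat('x', 10 * 1024 * 1024));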



[SQL] Importing data from csv

2006-08-24 Thread Sumeet
Hi Folks,

Sorry if this is a duplicate post; I've been trying to find a solution 
for importing data into Postgres from a CSV file.  The problem is, I 
have a database with columns that contain newline characters (Mac and 
Unix).  When I export these tables to CSV format, there are some line 
breaks (mixed Unix and Mac) in the data, which break the COPY procedure.

I also tried using the script posted in one of the previous posts:

#! /usr/bin/perl
$inquotes = 0;
while (<>) {
    # Chop the crlf
    chop($_);
    chop($_);

    # This first bit goes through and replaces
    # all the commas that are not in quotes with tildes
    for ($i = 0; $i < length($_); $i++) {
        $char = substr($_, $i, 1);
        if ($char eq '"') {
            $inquotes = not($inquotes);
        } else {
            if ((!$inquotes) && ($char eq ",")) {
                substr($_, $i, 1) = "~";
            }
        }
    }
    # This replaces any quotes
    s/"//g;
    print "$_\n";
}

cat data_file | perl scriptname.pl > outputfile.dat

and when I run the COPY command I get messages like "data missing for 
column xyz".  Any possible hints?

--
Thanks,
Sumeet
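
A note worth adding here: PostgreSQL 8.0 and later can parse quoted CSV
directly, and COPY in CSV mode accepts embedded newlines as long as the
exporting side quotes such fields; a minimal sketch, with table and file
names assumed:

-- Server-side load (the file must be readable by the server process):
COPY mytable FROM '/path/to/data.csv' WITH CSV;
-- Or client-side from psql:
\copy mytable FROM 'data.csv' WITH CSV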


[SQL] double precision vs. numeric

2006-08-24 Thread Aarni Ruuhimäki
Hello,

Not sure if this is the right list to ask ...

I vaguely remember having seen a message 

' ... type double precision ... will be deprecated / unsupported in future 
releases ... '

or something like that somewhere. (?)

I have quite a few older dbs with tables that have one or more fields of 
type double precision, and they have so far upgraded OK since 7.0.x.  (I 
now use numeric with appropriate precision and scale.)

Is there something to worry about when upgrading next time?  Should I 
start changing these to numeric, perhaps?

Running 8.0.2 at the moment.

Best regards to all,

Aarni
-- 
Aarni Ruuhimäki

**Kmail**
**Fedora Core Linux**



Re: [SQL] double precision vs. numeric

2006-08-24 Thread Peter Eisentraut
Aarni Ruuhimäki wrote:
> ' ... type double precision ... will be deprecated / unsupported in
> future releases ... '

That is completely false.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



[SQL] Function to retrieve the Id column

2006-08-24 Thread Ezequias Rodrigues da Rocha
Hi list,
 
I have an Id field that is used throughout my database, but my most 
often used parameter is a column called "number" that exists on more 
than 5000 magnetic cards.  My question is:

How could I make a function to retrieve this Id just by passing the 
number as a parameter, like:

getCardId(number: varchar)

Best regards ...
Ezequias


Re: [SQL] double precision vs. numeric

2006-08-24 Thread Michael Fuhr
On Thu, Aug 24, 2006 at 06:51:33PM +0200, Peter Eisentraut wrote:
> Aarni Ruuhimäki wrote:
> > ' ... type double precision ... will be deprecated / unsupported in
> > future releases ... '
> 
> That is completely false.

Presumably because double precision is standard SQL and has been
so for a long time.  Perhaps Aarni is thinking about the money type,
which the documentation does say is deprecated.

-- 
Michael Fuhr



Re: [SQL] double precision vs. numeric

2006-08-24 Thread Tom Lane
Aarni Ruuhimäki <[EMAIL PROTECTED]> writes:
> I vaguely remember having seen a message 
> ' ... type double precision ... will be deprecated / unsupported in
> future releases ... '
> or something like that somewhere. (?)

Perhaps you are thinking of type "money"?  DOUBLE PRECISION is in the
SQL standard, it's certainly not going anywhere.

regards, tom lane



Re: [SQL] Function to retrieve the Id column

2006-08-24 Thread Michael Fuhr
On Thu, Aug 24, 2006 at 02:07:14PM -0300, Ezequias Rodrigues da Rocha wrote:
> I have an Id field that is used throughout my database, but my most
> often used parameter is a column called "number" that exists on more
> than 5000 magnetic cards.  My question is:
> 
> How could I make a function to retrieve this Id just by passing the
> number as a parameter, like:
> 
> getCardId(number: varchar)

Are you looking for something like this?

CREATE FUNCTION getCardId(varchar) RETURNS integer AS $$
  SELECT id FROM table_name WHERE number = $1;
$$ LANGUAGE sql STABLE STRICT;

Here's an example of how you'd call this function:

SELECT getCardId('123456');

See "Server Programming" in the documentation for more information
about how to write functions.

http://www.postgresql.org/docs/8.1/interactive/server-programming.html
http://www.postgresql.org/docs/8.1/interactive/xfunc-sql.html
http://www.postgresql.org/docs/8.1/interactive/plpgsql.html

-- 
Michael Fuhr



Re: [SQL] Deleting Functions

2006-08-24 Thread Scott Petersen
Thanks to those who responded.  It works fine and will last a long time.  
The following script does NOT require any editing.  It simply executes 
and works.


\pset format unaligned
\pset fieldsep ''
\pset footer
\t
\o drops.sql
select 'drop function ' || nspname || '.' || proname || '('
       || pg_catalog.oidvectortypes(p.proargtypes) || ') CASCADE;'
from pg_proc p join pg_namespace b on (p.pronamespace = b.oid)
where nspname = 'public';

\o
\i drops.sql

Scott.


Jim Buttafuoco wrote:

Scott,

I use the following query with psql's \o option.  Change the schema name 
from public to whatever.  I am sure you could put this into a plpgsql 
function using EXECUTE as well.


Jim

\o drops.sql
select 'drop function ' || nspname || '.' || proname || '('
       || pg_catalog.oidvectortypes(p.proargtypes) || ');'
from pg_proc p join pg_namespace b on (p.pronamespace = b.oid)
where nspname = 'public';
\o
\! vi drops.sql
\i drops.sql





[SQL] Performance Problem with sub-select using array

2006-08-24 Thread Travis Whitton
Hello all, I'm running the following query on about 6,000 records worth 
of data, and it takes about 8 seconds to complete.  Can anyone provide 
any suggestions to improve performance?  I have an index on two columns 
in the transacts table (program_id, customer_id).  If I specify a number 
for customers.id in the sub-select, query time is reduced to about 2 
seconds, which still seems like a million years for only 6,000 records, 
but I'm guessing that the sub-select can't resolve the id since it's 
done before the outer query, so it scans the entire record set for every 
row?  Transacts is a many-to-many table for customers and programs.  I 
know this query doesn't even reference any columns from programs; 
however, I dynamically insert WHERE clauses to constrain the result set.

SELECT DISTINCT customers.id,
       first_name, last_name, address1, contact_city, contact_state,
       primary_phone, email,
       ARRAY(SELECT programs.program_name
             FROM transacts, programs
             WHERE customer_id = customers.id
               AND programs.id = transacts.program_id
               AND submit_status = 'success') AS partners
FROM customers, transacts, programs
WHERE transacts.customer_id = customers.id
  AND transacts.program_id = programs.id
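
The usual first diagnostic here is EXPLAIN ANALYZE, which reports
per-node timings and loop counts; a SubPlan whose loop count equals the
number of outer rows would confirm the per-row re-execution suspected
above.  A minimal sketch, trimmed to the suspect part (this assumes
submit_status lives on transacts):

EXPLAIN ANALYZE
SELECT DISTINCT customers.id,
       ARRAY(SELECT p.program_name
             FROM transacts t, programs p
             WHERE t.customer_id = customers.id
               AND p.id = t.program_id
               AND t.submit_status = 'success') AS partners
FROM customers, transacts
WHERE transacts.customer_id = customers.id;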


[SQL] Lock Problem

2006-08-24 Thread André José Guergolet



Hello all, I'm using Postgres 8.0 and my product creates 7 connections 
to the server.

One of my connections does an UPDATE on a simple table with 360 rows, 
and I get many table locks.  How can I get more scalability?

Thanks, and sorry about my English.


RES: [SQL] Lock Problem

2006-08-24 Thread André José Guergolet
Sorry; I have a table with 360 rows in which I control the state of 
machines on the network:

Ip            State    StateDate
172.20.0.39   Running  2006-08-23 00:00:00
172.20.0.59   Running  2006-08-23 00:00:00
172.20.0.72   Running  2006-08-23 00:00:00
172.20.0.84   Running  2006-08-23 00:00:00
172.20.0.35   Running  2006-08-23 00:00:00
172.20.0.17   Running  2006-08-23 00:00:00
172.20.0.28   Running  2006-08-23 00:00:00
172.20.0.39   Running  2006-08-23 00:00:00
172.20.0.14   Running  2006-08-23 00:00:00
172.20.0.33   Running  2006-08-23 00:00:00
172.20.0.19   Running  2006-08-23 00:00:00

My system checks via this table whether my script is running on each 
machine; the table has 360 rows and gets 50-100 updates per minute on 
the STATE and STATEDATE columns.
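
In SQL terms the write pattern described is roughly the following (table
and column names from the post; the exact statement is an assumption):

-- Each check-in rewrites one machine's row; under MVCC every such
-- UPDATE leaves a dead row version behind, which is why vacuum
-- frequency matters so much for this table.
UPDATE machinestates
   SET state = 'Running', statedate = now()
 WHERE ip = '172.20.0.39';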

I list these states on a webpage, which refreshes the list every 10 
seconds.  My page executes only "select * from machinestates".

If I stop the updates, my page never gets stuck on the SELECT.

I read about "dirty read" transactions; is that the way to go?
Is there another solution?

Thanks,
André Guergolet






Re: RES: [SQL] Lock Problem

2006-08-24 Thread Scott Marlowe
On Thu, 2006-08-24 at 16:12, André José Guergolet wrote:
> Sorry; I have a table with 360 rows in which I control the state of
> machines on the network:
> 
> Ip            State    StateDate
> 172.20.0.39   Running  2006-08-23 00:00:00
> 172.20.0.59   Running  2006-08-23 00:00:00
> 172.20.0.72   Running  2006-08-23 00:00:00
> 172.20.0.84   Running  2006-08-23 00:00:00
> 172.20.0.35   Running  2006-08-23 00:00:00
> 172.20.0.17   Running  2006-08-23 00:00:00
> 172.20.0.28   Running  2006-08-23 00:00:00
> 172.20.0.39   Running  2006-08-23 00:00:00
> 172.20.0.14   Running  2006-08-23 00:00:00
> 172.20.0.33   Running  2006-08-23 00:00:00
> 172.20.0.19   Running  2006-08-23 00:00:00
> 
> My system checks via this table whether my script is running on each
> machine; the table has 360 rows and gets 50-100 updates per minute on
> the STATE and STATEDATE columns.
> 
> I list these states on a webpage, which refreshes the list every 10
> seconds.  My page executes only "select * from machinestates".
> 
> If I stop the updates, my page never gets stuck on the SELECT.
> 
> I read about "dirty read" transactions; is that the way to go?
> Is there another solution?

I'm guessing you've got a different problem.  Generally speaking, in an
MVCC database like PostgreSQL, readers don't block writers, and writers
don't block readers.

We need more info on how you're doing this.  SQL queries for the updates
etc...

Are you vacuuming the database often enough?  Is this table suffering
from bloat?



Re: [SQL] Lock Problem

2006-08-24 Thread Andrew Sullivan
On Thu, Aug 24, 2006 at 05:27:58PM -0300, André José Guergolet wrote:
> Hello all, I'm using Postgres 8.0 and my product creates 7
> connections to the server.
> 
> One of my connections does an UPDATE on a simple table with 360
> rows, and I get many table locks.  How can I get more scalability?

It's pretty hard to tell, given what you're telling us.  The UPDATE
will take a write-blocking lock on each of the rows for the duration
of the transaction.  You shouldn't have any table locks, unless
you're not telling us something.  You should post more detail.  Why
do you think you have table locks, to begin with?

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The plural of anecdote is not data.
--Roger Brinner
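
A direct way to answer that last question is to look at pg_locks while
the UPDATE is in flight; a minimal sketch (the join to pg_class resolves
lock targets to table names):

-- List relation-level locks: which table, which mode, granted or waiting.
SELECT c.relname, l.mode, l.granted, l.pid
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation;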



Re: [SQL] Importing data from csv

2006-08-24 Thread Phillip Smith
I recently did this by parsing the data through a VB program that 
appended a "\" in front of any Char(10) and/or Char(13) characters, 
which tells Postgres to accept the next character as a literal part of 
the column value, I believe - must do, because it worked!  I also 
quoted the whole column as part of the VB prog...

Worked for me, but I'm not sure of the exact science behind it, so 
someone else might be able to be of some more detailed help.

Cheers,
-p

 



Re: [SQL] Importing data from csv

2006-08-24 Thread Scot P. Floess




A newline in CSV parlance denotes the end of a record... unless that 
newline is contained within quotes...

-- 
Scot P. Floess
27 Lake Royale
Louisburg, NC  27549

252-478-8087 (Home)
919-754-4592 (Work)

Chief Architect JPlate  http://sourceforge.net/projects/jplate
Chief Architect JavaPIM http://sourceforge.net/projects/javapim





Re: [SQL] Importing data from csv

2006-08-24 Thread Scot P. Floess




And if it's contained within quotes... it's considered part of the field.

-- 
Scot P. Floess
27 Lake Royale
Louisburg, NC  27549

252-478-8087 (Home)
919-754-4592 (Work)

Chief Architect JPlate  http://sourceforge.net/projects/jplate
Chief Architect JavaPIM http://sourceforge.net/projects/javapim





Re: [SQL] Importing data from csv

2006-08-24 Thread Phillip Smith









There you go: it was the quotes that did it, not the back-slashes.  I 
knew someone else would shed some better light!  :)

Cheers,
-p

 






Re: [SQL] Importing data from csv

2006-08-24 Thread Scot P. Floess




Well, being that there isn't an RFC for CSV... other than "de facto" 
definitions... I am pretty sure that is widely agreed upon ;)

-- 
Scot P. Floess
27 Lake Royale
Louisburg, NC  27549

252-478-8087 (Home)
919-754-4592 (Work)

Chief Architect JPlate  http://sourceforge.net/projects/jplate
Chief Architect JavaPIM http://sourceforge.net/projects/javapim





Re: [SQL] RES: Lock Problem

2006-08-24 Thread Christopher Browne
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (André José 
Guergolet) would write:
> Sorry; I have a table with 360 rows in which I control the state of
> machines on the network:
>
> Ip            State    StateDate
> 172.20.0.39   Running  2006-08-23 00:00:00
> 172.20.0.59   Running  2006-08-23 00:00:00
> 172.20.0.72   Running  2006-08-23 00:00:00
> 172.20.0.84   Running  2006-08-23 00:00:00
> 172.20.0.35   Running  2006-08-23 00:00:00
> 172.20.0.17   Running  2006-08-23 00:00:00
> 172.20.0.28   Running  2006-08-23 00:00:00
> 172.20.0.39   Running  2006-08-23 00:00:00
> 172.20.0.14   Running  2006-08-23 00:00:00
> 172.20.0.33   Running  2006-08-23 00:00:00
> 172.20.0.19   Running  2006-08-23 00:00:00
>
> My system checks via this table whether my script is running on each
> machine; the table has 360 rows and gets 50-100 updates per minute on
> the STATE and STATEDATE columns.

Question: How often are you vacuuming this table?

If, as you indicate, you're updating about 1/4 of the table each
minute, you should probably VACUUM the table about once a minute.

If you only VACUUM it once an hour or once a day, those 360 tuples
will be spread across 200,000 pages, and need a VACUUM FULL and a
REINDEX to draw the table back down to a decent size.
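
Concretely, that one-time recovery looks like this (table name from the
thread):

-- Rewrite the bloated table and rebuild its indexes once...
VACUUM FULL machinestates;
REINDEX TABLE machinestates;
-- ...then keep dead row versions in check with frequent plain VACUUMs:
VACUUM machinestates;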

You can see how badly the table has grown by running the SQL:
  VACUUM VERBOSE machinestates;

This will list various statistics; generally, if you have many more
pages than tuples, there's probably a problem with how often you're
vacuuming...

> I list these states on a webpage, which refreshes the list every 10
> seconds.  My page executes only "select * from machinestates".
>
> If I stop the updates, my page never gets stuck on the SELECT.

I don't understand what you mean by that.

In PostgreSQL, readers don't block writers and writers don't block
readers (unless you expressly ask for them to do so).

> I read about "dirty read" transactions; is that the way to go?
> Is there another solution?

I'm not sure you have successfully described either the observed
phenomenon or the nature of the problem; I know I don't yet quite
understand what seems wrong.

Sometimes language gets in the way; that seems possible here.  I also
think you're trying to solve the problem before understanding what it
is.

Please try to explain further what phenomenon you are observing; as
details emerge, hopefully someone will recognize what is going on.
Trying to fix it isn't the right thing to do at this point; just try
to explain what you see happening.  Getting to the point where someone
recognizes what is happening is really the goal.
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "gmail.com")
http://linuxdatabases.info/info/lsf.html
"Just because the code is intended to cause flaming death is no reason
to get sloppy and leave off  the casts." - Tim Smith, regarding sample
(F0 0F C7 C8) Pentium Death code on comp.os.linux.advocacy



Re: RES: [SQL] Lock Problem

2006-08-24 Thread Tom Lane
Scott Marlowe <[EMAIL PROTECTED]> writes:
> On Thu, 2006-08-24 at 16:12, André José Guergolet wrote:
>> My system checks via this table whether my script is running on each
>> machine; the table has 360 rows and gets 50-100 updates per minute on
>> the STATE and STATEDATE columns.

> Are you vacuuming the database often enough?  Is this table suffering
> from bloat?

That's my bet.  With numbers like those, that table has to get vacuumed
every few minutes to keep performance from going into the tank.

regards, tom lane



Re: [SQL] double precision vs. numeric

2006-08-24 Thread Aarni Ruuhimäki
On Thursday 24 August 2006 20:29, Tom Lane wrote:
> Aarni Ruuhimäki <[EMAIL PROTECTED]> writes:
> > I vaguely remember having seen a message
> > ' ... type double precision ... will be deprecated / unsupported in
> > future releases ... '
> > or something like that somewhere. (?)
>
> Perhaps you are thinking of type "money"?  DOUBLE PRECISION is in the
> SQL standard, it's certainly not going anywhere.
>
>regards, tom lane

Well, I have used it for money-like sums and prices, but I have never 
used the actual "money" data type.

So, false alarm. 

Thank you guys !

Aarni
-- 
Aarni Ruuhimäki

**Kmail**
**Fedora Core Linux**




Re: [SQL] Importing data from csv

2006-08-24 Thread Michael Fuhr
On Thu, Aug 24, 2006 at 08:19:58PM -0400, Scot P. Floess wrote:
> Well, being that there isn't an RFC for CSV... other than "de facto"
> definitions... I am pretty sure that is widely agreed upon ;)

RFC 4180
Common Format and MIME Type for Comma-Separated Values (CSV) Files
ftp://ftp.rfc-editor.org/in-notes/rfc4180.txt

"While there are various specifications and implementations for the
CSV format (for ex. [4], [5], [6] and [7]), there is no formal
specification in existence, which allows for a wide variety of
interpretations of CSV files.  This section documents the format
that seems to be followed by most implementations:"

-- 
Michael Fuhr
