Re: [GENERAL] Using C# to create stored procedures

2007-04-03 Thread Magnus Hagander
Guy Rouillier wrote:
 Andrus wrote:
 Last change for this project was 3 years ago.
 So I think that it is dead.

 I'm writing an application in C#.
 I expected that I could write stored procedures in C# as well, using
 something like mod_mono in Apache.

 So it seems that the most reasonable way is to learn the dreaded PL/pgSQL
 language and write stored procedures in it.
 
 Who dreads PL/pgSQL?  It's a pretty easy language to learn. 

It's fairly easy and fairly good as long as you're doing db kind of
things. There are other things that are harder to do - which is why we
support a wide range of languages like perl, tcl, python, php, java etc.

Speaking of which - you might want to look into PL/Java, given that Java
syntax is fairly similar to C#. But if what you do is suitable for
pl/pgsql, it's probably a better idea to use that one.
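
For readers who haven't seen it, a minimal PL/pgSQL function might look
something like this (a sketch only; the function, table and column names
are invented for illustration):

CREATE OR REPLACE FUNCTION account_balance(p_account integer)
RETURNS numeric AS $$
DECLARE
    v_balance numeric;
BEGIN
    -- sum all entries for one account; return 0 if there are none
    SELECT sum(amount) INTO v_balance
      FROM account_entries
     WHERE account_id = p_account;
    RETURN coalesce(v_balance, 0);
END;
$$ LANGUAGE plpgsql;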


 Granted,
 not as easy as one you already know, but I'm not aware of any RDBMS that
 supports C# as a stored procedure language.

Both MS SQL Server and IBM DB2 (on Windows) support .NET stored
procedures written in C#, VB, or any other .NET-hosted language. There may
be others that do as well, but those are the two I know of.

//Magnus



[GENERAL] sql schema advice sought

2007-04-03 Thread Jonathan Vanasco

I'm redoing an SQL schema, and looking for some input.

First I had 2 tables :
Table_A
id
name
a
b
c
Table_B
id
name
x
y
z

As the project grew, so did the functionality.

Table_A_Comments
id
id_refd references Table_A(id)
timestamp
text
Table_B_Comments
id
id_refd references Table_B(id)
timestamp
text

well, it just grew again

Table_C
id
name
m
n
o
Table_C_Comments
id
id_refd references Table_C(id)
timestamp
text

Now:
Table_A , Table_B , and Table_C are all quite different.
But:
	Table_A_Comments, Table_B_Comments, and Table_C_Comments are
essentially the same -- except that they fkey on different tables.


I could keep 3 separate tables for comments, but I'd really like to
consolidate them in the db -- it'll be easier to reference the data
in the webapps that query it.


My problem is that I can't figure out a way to do this cleanly
while retaining integrity.


When dealing with this in the past, I used a GUID table:
Table_ABC_guid
guid , child_type [ A , B, C ] , child_id
and then added a guid column onto each table that FKEYs it.

	On instantiation of a new row in A, B, or C, I would create a GUID
record and then update the row with it. Generic tables would reference the
guid, not the real table.
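
Roughly, as DDL, the GUID approach sketched above might look like this
(a sketch only; names and types are assumptions):

-- one row here per row in Table_A/B/C
CREATE TABLE Table_ABC_guid (
    guid        serial PRIMARY KEY,
    child_type  char(1) NOT NULL CHECK (child_type IN ('A', 'B', 'C')),
    child_id    integer NOT NULL
);

-- each real table carries a guid column pointing back at it
ALTER TABLE Table_A ADD COLUMN guid integer REFERENCES Table_ABC_guid(guid);

-- generic tables (comments etc.) then reference the guid instead of A/B/C
CREATE TABLE Comments (
    id    serial PRIMARY KEY,
    guid  integer NOT NULL REFERENCES Table_ABC_guid(guid),
    ts    timestamp NOT NULL DEFAULT now(),
    body  text
);

Note that child_id itself cannot be a real foreign key to Table_A/B/C,
which is exactly the integrity gap mentioned below.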


I can't help but feel that's still a dirty hack, and there's a better
way.  That didn't solve my integrity problems, it just shifted them
into a more manageable place.


Anyone have a suggestion ?







Re: [GENERAL] Using C# to create stored procedures

2007-04-03 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Both MS SQL Server and IBM DB2 (on windows) supports .net stored
 procedures in C#, VB, or any other .net hosted language.

Awhile back I read an article claiming that .NET could only host one
language, or at least only languages that differed merely in trivial
syntactic details --- its execution engine isn't flexible enough for
anything truly interesting.  Haven't looked into that for myself
though ... any comments?

regards, tom lane



Re: [GENERAL] Using C# to create stored procedures

2007-04-03 Thread Harald Armin Massa

Tom,

Awhile back I read an article claiming that .NET could only host one

language, or at least only languages that differed merely in trivial
syntactic details --- its execution engine isn't flexible enough for
anything truly interesting.



Jim Hugunin (creator of Jython, which is Python on the Java Virtual Machine)
thought similarly:

"I wanted to understand how Microsoft could have screwed up so badly that
the CLR was a worse platform for dynamic languages than the JVM.  My plan
was to take a couple of weeks to build a prototype implementation of Python
on the CLR and then to use that work to write a short pithy article called,
'Why the CLR is a terrible platform for dynamic languages'."

He tried it, wrote IronPython, and was hired by Microsoft...

http://blogs.msdn.com/hugunin/archive/2006/09/05/741605.aspx

So there is proof that .NET is usable for more than one language. (Not that
I want to embrace that platform.)

Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
fx 01212-5-13695179
-
Python: the only language with more web frameworks than keywords.


Re: [GENERAL] Using C# to create stored procedures

2007-04-03 Thread Magnus Hagander
On Tue, Apr 03, 2007 at 04:00:17AM -0400, Tom Lane wrote:
 Magnus Hagander [EMAIL PROTECTED] writes:
  Both MS SQL Server and IBM DB2 (on windows) supports .net stored
  procedures in C#, VB, or any other .net hosted language.
 
 Awhile back I read an article claiming that .NET could only host one
 language, or at least only languages that differed merely in trivial
 syntactic details --- its execution engine isn't flexible enough for
 anything truly interesting.  Haven't looked into that for myself
 though ... any comments?

It can certainly host different languages - there are (to my knowledge)
implementations of C#, Visual Basic, JScript, Java, Python, Cobol and
others. These languages are certainly pretty different. The whole thing is
designed with C# as the *primary* language, so there are definitely parts of
that which leaked through into requirements for other languages. But it's
doable.

That said, they'll always need *some* changes, and the framework is the
framework regardless of the language (which makes code in cobol.net look
really freakish. Then again, most COBOL code looks freakish to me).

Haven't tried that one myself, but I can certainly tell that the VB.Net
code is sufficiently VB-ish to make it very hard to read/use for someone
who hates VB. But the API calls are the same, so it's at least *possible* to
read it.

//Magnus




[GENERAL] PGSQL with high number of database rows?

2007-04-03 Thread Tim Perrett
Hey all

I am possibly looking to use PostgreSQL in a project I am working on for a very
large client. The upshot of this is that the throughput of data will be pretty
massive, around 20,000 new rows in one of the tables per day. We also have to
keep this data online for a set period, so after 5 or 6 weeks it could have
nearly a million rows.

Are there any implications with possibly doing this? Will PG handle it? Are
there real-world systems using PG that have a massive amount of data in them?

All the best, thanks for any advice up front

Tim



Re: [GENERAL] PGSQL with high number of database rows?

2007-04-03 Thread Magnus Hagander
On Tue, Apr 03, 2007 at 09:28:28AM +0100, Tim Perrett wrote:
 Hey all
 
 I am possibly looking to use PSGSQL in a project I am working on for a very
 large client. The upshot of this is the throughput of data will be pretty
 massive, around 20,000 new rows in one of the tables per day. We also have to
 keep this data online for a set period so after 5 or 6 weeks it could have
 nearly a million rows.
 
 Are there any implications with possibly doing this? will PG handle it? Are
 there realworld systems using PG that have a massive amount of data in them?

This is in no way massive for pg. Many millions of rows is not a problem at
all, given that you have proper schema and indexing, and run on reasonable
hardware (hint: it might be a bit slow on your laptop). 20,000 rows / day
is still no more than about 14 / minute, which is a very light load for a
server grade machine to deal with without any problem at all.

//Magnus




Re: [GENERAL] PGSQL with high number of database rows?

2007-04-03 Thread Dave Page
Tim Perrett wrote:
 Hey all
 
 I am possibly looking to use PSGSQL in a project I am working on for a very
 large client. The upshot of this is the throughput of data will be pretty
 massive, around 20,000 new rows in one of the tables per day. We also have to
 keep this data online for a set period so after 5 or 6 weeks it could have
 nearly a million rows.
 
 Are there any implications with possibly doing this? will PG handle it? Are
 there realworld systems using PG that have a massive amount of data in them?

In all honesty that's really not that big. There are systems out there
with database sizes in the multiple terabyte range with billions of rows.

A few million shouldn't cause you any issues, unless they're
exceptionally wide.

Regards, Dave.



Re: [GENERAL] PGSQL with high number of database rows?

2007-04-03 Thread Albe Laurenz
 I am possibly looking to use PostgreSQL in a project I am working on for a very
 large client. The upshot of this is that the throughput of data will be pretty
 massive, around 20,000 new rows in one of the tables per day.
 We also have to keep this data online for a set period, so after 5 or 6 weeks
 it could have nearly a million rows.
 
 Are there any implications with possibly doing this? will PG 
 handle it?

What do you mean, massive? A mere million rows?
I don't think that a small database like this will be a worry.
Try to avoid unnecessary table scans by using indexes!
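
For example (a hypothetical sketch; the table and column names are
invented), an index on the timestamp column used to select recent rows
keeps such queries off a full table scan:

-- hypothetical table big_table(created timestamp, ...)
CREATE INDEX big_table_created_idx ON big_table (created);

-- this can now use the index instead of scanning the whole table
SELECT count(*) FROM big_table WHERE created > now() - interval '7 days';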

Yours,
Laurenz Albe



Re: [GENERAL] Webappication and PostgreSQL login roles

2007-04-03 Thread Thorsten Kraus

No idea??

Thorsten Kraus schrieb:

Hi,

I designed a Java web application. The persistence layer is a 
PostgreSQL database. The application needs user authentication.
I think it's a good choice to implement this authentication mechanism 
via PostgreSQL login roles. So I can create several database login 
roles and set the database permissions on these login roles. This is my 
first project with the postgres database, so I don't know how I can 
validate a login from the website. Is there a best practice to do this, 
or does PostgreSQL offer a stored procedure like 
'authenticateUser(String username, String password)'?


Thanks for your help.

Bye,
Thorsten



Re: [GENERAL] Webappication and PostgreSQL login roles

2007-04-03 Thread Alban Hertroys
Thorsten Kraus wrote:
 No idea??

You'd need an authenticated user to call that stored procedure in the
first place. It is kind of a chicken-and-egg problem.

Usually people create a user for the webapp. This user makes the first
connection to the database.
After that you probably could define a security-definer procedure that
handles further authentication (to an actual schema, for example).

I have to admit I have never done this myself; but this is what I recall
from previous discussions on similar topics.
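
A rough sketch of that idea (purely illustrative; the app_users table, the
webapp_role role and the column names are assumptions, not something from
this thread):

-- The webapp connects as one low-privilege role and calls this
-- SECURITY DEFINER function to check application-level credentials.
CREATE TABLE app_users (
    username      text PRIMARY KEY,
    password_md5  text NOT NULL
);

CREATE OR REPLACE FUNCTION authenticate_user(p_user text, p_pass text)
RETURNS boolean AS $$
    SELECT EXISTS (
        SELECT 1 FROM app_users
         WHERE username = p_user
           AND password_md5 = md5(p_pass)
    );
$$ LANGUAGE sql STABLE SECURITY DEFINER;

-- only the function (running as its owner) can read the credentials
REVOKE ALL ON app_users FROM PUBLIC;
GRANT EXECUTE ON FUNCTION authenticate_user(text, text) TO webapp_role;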

 Thorsten Kraus schrieb:
 Hi,

 I designed a Java web application. The persistence layer is a
 PostgreSQL database. The application needs user authentication.
 I think it's a good choice to implement this authentication mechanism
 via PostgreSQL login roles. So I can create several database login
 roles and set the database permissions to this login roles. This is my
 first project with the postgres database, so I don't know how I can
 validate a login from the website. Is there a best practice to do this
 or does PostgreSQL offers a stored procedure like
 'authenticateUser(String username, String password)'?

 Thanks for your help.

 Bye,
 Thorsten


-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Webappication and PostgreSQL login roles

2007-04-03 Thread Lutz Broedel

Thorsten Kraus wrote:

Hi,

I designed a Java web application. The persistence layer is a PostgreSQL 
database. The application needs user authentication.
I think it's a good choice to implement this authentication mechanism 
via PostgreSQL login roles. So I can create several database login roles 
and set the database permissions to this login roles. This is my first 
project with the postgres database, so I don't know how I can validate a 
login from the website. Is there a best practice to do this or does 
PostgreSQL offers a stored procedure like 'authenticateUser(String 
username, String password)'?


Thanks for your help.

Bye,
Thorsten



Can you not use the username/password as part of the DSN?

Regards,
Lutz Broedel

--
Lutz Broedel

Leibniz University of Hannover
Institute for Water Quality & Waste Management / ISAH
Division: Water Resources Management

Am Kleinen Felde 30
D - 30167 Hannover, Germany
phone +49 (0)511 762 5984
fax  +49 (0)511 762 19 413
[EMAIL PROTECTED]

To verify the digital signature, you need to load the following certificate:
https://pki.pca.dfn.de/uh-ca/pub/cacert/rootcert.crt




Re: [GENERAL] ECPG: inserting NULL values when using prepared statements

2007-04-03 Thread Michael Meskes
On Mon, Apr 02, 2007 at 11:53:50AM -0500, Anders Nilsson wrote:
 The situation:
A loop that inserts thousands of values into a table.
In hopes of optimizing the bunches of inserts, I prepared

Sorry, but that won't work. ECPG only simulates statement preparation.

a statement like the follows:
 
insert into some_table_name ( value, id, date ) values ( ?, ?, ? )
 
then executing the prepared statement identifier numerous times 
 using already declared host variables.
 
 The problem:
 Some of these values are null. Is there a way to specify a null
 value without having to rewrite the statement with the explicit NULL
 replacing the ? ?

Yes, use an indicator when instantiating the statement.

 (If this works) if I were to use indicator variables when 
 inserting, what would the syntax be? So far, indicator variables work 
 great when fetching, though I just can't seem to get it right when 
 inserting / updating. Or, if there is another method to specify NULL 
 values, that would be great as well.

Just an example from the regression suite:

/* use indicator in insert */
exec sql insert into test (id, str, val) values ( 2, 'Hi there', :intvar 
:nullind);

So this is essentially the same as with fetch.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!



[GENERAL] PSQL - Slow on Windows 2000 or Windows 2003

2007-04-03 Thread Wilton Wonrath
Hello,



Does anyone know why it's so slow to return a backup on Windows 2000 or
Windows 2003 for archives bigger than 80 MB?

I do the same thing using other Windows versions or Linux, and it's far
faster than this. What could it be?

I'm using PostgreSQL 8.1 or a lower version.



Regards,



Wilton Ruffato Wonrath

[EMAIL PROTECTED]

São Paulo - Brazil



__
Fale com seus amigos  de graça com o novo Yahoo! Messenger 
http://br.messenger.yahoo.com/ 

Re: [GENERAL] plpy prepare problem

2007-04-03 Thread Adrian Klaver
On Sunday 01 April 2007 9:09 am, jlowery wrote:
 I'm having a bit of a problem getting plpython's prepare to work
 properly:

 CREATE OR REPLACE FUNCTION batch_item_reversal(b batch_item)
   RETURNS varchar AS
 $BODY$

if b['reversal_flag'] == 'Y':
    sql = plpy.prepare("""
        SELECT batch_item_number
        FROM batch_item
        WHERE patient_ssn=$1 AND
        patient_dob=$1 AND
        claim_number=$1 AND
        batch_item_number != $1""",
        ["varchar", "date", "varchar", "varchar"])
    refs = plpy.execute(sql, [
        b['patient_ssn'],
        b['patient_dob'],
        b['claim_number'],
        b['batch_item_number']])

You need to have unique numbers for the variables. 
patient_ssn=$1
patient_dob=$2
etc


    refs2 = plpy.execute("""
        SELECT batch_item_number
        FROM batch_item
        WHERE patient_ssn='%s' AND
        patient_dob='%s' AND
        claim_number='%s' AND
        batch_item_number != '%s'"""
        % (b['patient_ssn'],
           b['patient_dob'],
           b['claim_number'],
           b['batch_item_number']))

    if refs:
        return refs[0]["batch_item_number"]
    else:
        return "ERROR"
else:
    return None

 $BODY$
   LANGUAGE 'plpythonu' VOLATILE;


 Here, refs2 returns the proper data, but refs always returns nothing.
 I have a feeling it has something to do with the type list; I tried
 making them all "text" but to no avail.



-- 
Adrian Klaver
[EMAIL PROTECTED]



Re: [GENERAL] Webappication and PostgreSQL login roles

2007-04-03 Thread Thorsten Kraus

Hi,

thanks for your answer. I can't use the username/password in my DSN 
because I don't connect directly via JDBC to the database. I use 
Hibernate for all database actions. The username and password have to be 
stored in the Hibernate configuration file...


Bye,
Thorsten


Lutz Broedel schrieb:


Can you not use the username/password as part of the DSN?

Regards,
Lutz Broedel






Re: [GENERAL] Webappication and PostgreSQL login roles

2007-04-03 Thread Bill Moran
In response to Thorsten Kraus [EMAIL PROTECTED]:

 Hi,
 
 thanks for your answer. I cant use the username/password in my DSN 
 because I don't connect directly via JDBC to the database. I use 
 hibernate for all database actions. The username and password has to be 
 stored in the hibernate configuration file...

I can't help but wonder what other poor programming practices hibernate
encourages ...

 Lutz Broedel schrieb:
 
  Can you not use the username/password as part of the DSN?
 
  Regards,
  Lutz Broedel
 
 
 


-- 
Bill Moran
http://www.potentialtech.com



Re: [GENERAL] SQLConnect failure

2007-04-03 Thread Andrei Kovalevski

[EMAIL PROTECTED] wrote:
  

 Original Message 
Subject: Re: [GENERAL] SQLConnect failure
From: Bill Moran [EMAIL PROTECTED]
Date: Mon, April 02, 2007 2:54 pm
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org

In response to [EMAIL PROTECTED]:



We have code that has been using MSDE/SQL Server successfully for years,
and are adding Postgres support.  Doing a SQLConnect to connect to a
local Postgres server works fine, but if we try to connect to a remote
system, the SQLConnect fails, and we get an error code that seems to
indicate "The value specified for the argument UserName or the value
specified for the argument Authentication violated restrictions defined
by the data source."


We can connect via pgadmin to the remote system, so we believe all the
little .conf files should be correct, but can't get in
programmatically.  Any pointers on where to look?  
  

The logs on the PostgreSQL server would be a good place to start.

This sounds suspiciously like a pg_hba.conf misconfig.  You might want
to verify its correctness.




Thanks guys.  I can connect to the remote server via pgadmin on a
different machine, so I'm pretty sure that the .conf files are correct
(that took awhile, but there are very good diagnostic messages when
they are wrong).  When I set the hba, the encryption is set to MD5 -
does that need to be set somewhere on the client side?
  


   What version of the PostgreSQL ODBC driver are you using?







Re: [GENERAL] sql schema advice sought

2007-04-03 Thread Jaime Silvela

I have a similar situation. Here's what I do.

I have a stand-alone comment table:
   Comments
   id
   timestamp
   text

Then I have individual product tables to tie a table to a comment:
   Table_A_Comment
   id
   id_ref_a references tableA
   id_comment references Comments

The Table_*_Comment tables can be unified into one, of course:
   Table_Comment
   id
  id_comment references Comments
   id_ref_a references tableA
  id_ref_b references tableB
  id_ref_c references tableC

In my view, the advantage is that you keep concepts separate: the 
structure of comments does not depend on the tables it comments on.
Also, the product tables give you more flexibility if, say, you decide 
a comment can apply to more than one object.
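
Spelled out as DDL, the above might look roughly like this (a sketch with
assumed names and types):

CREATE TABLE Comments (
    id    serial PRIMARY KEY,
    ts    timestamp NOT NULL DEFAULT now(),
    body  text
);

-- one small link table per commented-on table keeps real foreign keys
CREATE TABLE Table_A_Comment (
    id          serial PRIMARY KEY,
    id_ref_a    integer NOT NULL REFERENCES Table_A(id),
    id_comment  integer NOT NULL REFERENCES Comments(id)
);
-- Table_B_Comment and Table_C_Comment are analogous

Each link table keeps a genuine foreign key to its parent, which is the
integrity that the single GUID table gives up.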


Jonathan Vanasco wrote:

I'm redoing a sql schema , and looking for some input

First I had 2 tables :
Table_A
id
name
a
b
c
Table_B
id
name
x
y
z

as the project grew,  so did functionality.

Table_A_Comments
id
id_refd references Table_A(id)
timestamp
text
Table_B_Comments
id
id_refd references Table_B(id)
timestamp
text

well, it just grew again

Table_C
id
name
m
n
o
Table_C_Comments
id
id_refd references Table_C(id)
timestamp
text

Now:
Table_A , Table_B , and Table_C are all quite different.
But:
Table_A_Comments ,  Table_B_Comments , Table_C_Comments are 
essentially the same -- except that they fkey on different tables.


I could keep 3 sep. tables for comments, but I'd really like to 
consolidate them in the db -- it'll be easier to reference the data in 
the webapps that query it .


My problem is that I can't figure out a way to do this cleanly , while 
retain integrity.


When dealing with this In the past, I used a GUID table
Table_ABC_guid
guid , child_type [ A , B, C ] , child_id
and then add a guid column onto each table that FKEYS it.

On instantiation of a new row in A, B, C  I would create a GUID 
record and then update the row with it.  general tables would ref the 
guid, not the real table.


I can't help but feel thats still a dirty hack, and there's a better 
way.  That didn't solve my integrity problems, it just shifted them 
into a more manageable place.


Anyone have a suggestion ?












Re: [GENERAL] inserting multiple values in version 8.1.5

2007-04-03 Thread [EMAIL PROTECTED]

I need to do like 1000 inserts periodically from a web app. Is it better to
do 1000 inserts or 1 insert with all 1000 rows? Is using the COPY command
faster than inserts?
thanks

On 4/2/07, Chris [EMAIL PROTECTED] wrote:


[EMAIL PROTECTED] wrote:
 Hi
 I am trying to insert multiple values into a table like this.
 INSERT INTO tab_name (col1, col2) VALUES (val1, val2), (val3, val4)

 This works in postgres version 8.2.1

 My production server runs in 8.1.5. It gives me
 ERROR:  syntax error at or near "," at character 35

That came in at v8.2.

You can't use it in 8.1.5.

--
Postgresql & php tutorials
http://www.designmagick.com/



Re: [GENERAL] inserting multiple values in version 8.1.5

2007-04-03 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


 I am trying to insert multiple values into a table like this.
 INSERT INTO tab_name (col1, col2) VALUES (val1, val2), (val3, val4)
...
 My production server runs in 8.1.5.
...
 What to do?

Upgrade to 8.2. :)

Seriously, you should upgrade to 8.1.8.

You can add multiple rows in one statement like this:

INSERT INTO tab_name (col1,col2)
SELECT val1, val2
UNION ALL
SELECT val3, val4;

- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200704031025
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iD8DBQFGEmRGvJuQZxSWSsgRA+dyAJ9buRgJdNfSK4pOWZQT+/bxZ27yEgCeO6AJ
sWpYA1cMbjHIziROLwrXwrM=
=Oeqk
-END PGP SIGNATURE-





Re: [GENERAL] inserting multiple values in version 8.1.5

2007-04-03 Thread A. Kretschmer
On Tue, 03 Apr 2007, at 07:19:15 -0700, [EMAIL PROTECTED] wrote:
 I need to do like 1000 inserts periodically from a web app. Is it better to do
 1000 inserts or 1 insert with the all 1000 rows? Is using copy command faster
 than inserts?

You can do the massive inserts within one transaction, but COPY is much
faster than many INSERTs. The multi-row INSERT is a new feature since
8.2. I prefer COPY.
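
As a rough illustration of the two options (table and column names as in
the earlier example; the data values are placeholders):

-- Option 1: many INSERTs, but inside a single transaction
BEGIN;
INSERT INTO tab_name (col1, col2) VALUES ('val1', 'val2');
INSERT INTO tab_name (col1, col2) VALUES ('val3', 'val4');
-- ... and so on ...
COMMIT;

-- Option 2: one COPY, usually much faster for bulk loads
COPY tab_name (col1, col2) FROM STDIN;
val1	val2
val3	val4
\.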


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



[GENERAL] BitmapScan mishaps

2007-04-03 Thread Listmail


Hello everyone !

I have this query :

annonces=> EXPLAIN ANALYZE SELECT * FROM annonces AS a WHERE
detect_time > CURRENT_TIMESTAMP - '7 DAY'::INTERVAL
AND detect_time >= '2006-10-30 16:17:45.064793'
AND vente
AND surface IS NOT NULL AND price IS NOT NULL
AND type_id IN
(1,12,24,17,18,19,20,33,35,50,51,7,52,4,13,41,14,16,26,28,43,53,15,29,30,31,45,32,34,46,47,6)
AND (a.zipcode IN (69001,69002,69003,69004,69005,69006,69007,69008,69009)
OR a.city_id IN (27595)
OR a.coords &&
'(45.74101689082,4.8371263505564),(45.75898310918,4.8628736494436)'::BOX);
                                                            QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on annonces a  (cost=1657.06..7145.98 rows=1177 width=691) (actual time=118.342..118.854 rows=194 loops=1)
   Recheck Cond: (((vente AND (zipcode = ANY ('{69001,69002,69003,69004,69005,69006,69007,69008,69009}'::integer[]))) OR (vente AND (city_id = 27595)) OR (coords && '(45.75898310918,4.8628736494436),(45.74101689082,4.8371263505564)'::box)) AND (detect_time > (now() - '7 days'::interval)) AND (detect_time >= '2006-10-30 16:17:45.064793'::timestamp without time zone))
   Filter: (vente AND (surface IS NOT NULL) AND (price IS NOT NULL) AND (type_id = ANY ('{1,12,24,17,18,19,20,33,35,50,51,7,52,4,13,41,14,16,26,28,43,53,15,29,30,31,45,32,34,46,47,6}'::integer[])))
   ->  BitmapAnd  (cost=1657.06..1657.06 rows=2465 width=0) (actual time=118.294..118.294 rows=0 loops=1)
         ->  BitmapOr  (cost=133.83..133.83 rows=4368 width=0) (actual time=2.903..2.903 rows=0 loops=1)
               ->  Bitmap Index Scan on annonces_zip  (cost=0.00..56.54 rows=1825 width=0) (actual time=0.599..0.599 rows=1580 loops=1)
                     Index Cond: ((vente = true) AND (zipcode = ANY ('{69001,69002,69003,69004,69005,69006,69007,69008,69009}'::integer[])))
               ->  Bitmap Index Scan on annonces_city  (cost=0.00..43.30 rows=1904 width=0) (actual time=0.464..0.464 rows=1575 loops=1)
                     Index Cond: ((vente = true) AND (city_id = 27595))
               ->  Bitmap Index Scan on annonces_coords  (cost=0.00..33.10 rows=640 width=0) (actual time=1.837..1.837 rows=2166 loops=1)
                     Index Cond: (coords && '(45.75898310918,4.8628736494436),(45.74101689082,4.8371263505564)'::box)
         ->  Bitmap Index Scan on annonces_date  (cost=0.00..1522.68 rows=72241 width=0) (actual time=114.930..114.930 rows=68022 loops=1)
               Index Cond: ((detect_time > (now() - '7 days'::interval)) AND (detect_time >= '2006-10-30 16:17:45.064793'::timestamp without time zone))
 Total runtime: 119.000 ms
(14 lignes)

The interesting part is :

Bitmap Index Scan on annonces_date  (cost=0.00..1522.68 rows=72241  
width=0) (actual time=114.930..114.930 rows=68022 loops=1)


It bitmapscans about half the table...
	I realized this index was actually useless for all my queries, so I  
dropped it, and behold :


                                                            QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on annonces a  (cost=133.83..7583.77 rows=1176 width=691) (actual time=5.483..18.731 rows=194 loops=1)
   Recheck Cond: ((vente AND (zipcode = ANY ('{69001,69002,69003,69004,69005,69006,69007,69008,69009}'::integer[]))) OR (vente AND (city_id = 27595)) OR (coords && '(45.75898310918,4.8628736494436),(45.74101689082,4.8371263505564)'::box))
   Filter: ((detect_time > (now() - '7 days'::interval)) AND (detect_time >= '2006-10-30 16:17:45.064793'::timestamp without time zone) AND vente AND (surface IS NOT NULL) AND (price IS NOT NULL) AND (type_id = ANY ('{1,12,24,17,18,19,20,33,35,50,51,7,52,4,13,41,14,16,26,28,43,53,15,29,30,31,45,32,34,46,47,6}'::integer[])))
   ->  BitmapOr  (cost=133.83..133.83 rows=4368 width=0) (actual time=2.648..2.648 rows=0 loops=1)
         ->  Bitmap Index Scan on annonces_zip  (cost=0.00..56.54 rows=1825 width=0) (actual time=0.505..0.505 rows=1580 loops=1)
               Index Cond: ((vente = true) AND (zipcode = ANY

[GENERAL] PSQL - Slow on Windows 2000 or Windows 2003

2007-04-03 Thread Wilton

Hello,

Does anyone know why it's so slow to return a backup on Windows 2000 or
Windows 2003 for archives bigger than 80 MB?

I do the same thing using other Windows versions or Linux, and it's far
faster than this. What could it be?

I'm using PostgreSQL 8.1 or a lower version.

Regards,

Wilton Ruffato Wonrath
[EMAIL PROTECTED]
São Paulo - Brazil



Re: [GENERAL] Webappication and PostgreSQL login roles

2007-04-03 Thread Ben Trewern
You could originally connect to the database as some kind of power user. 
Check the password against the pg_shadow view (you would need to md5 your 
password somehow) and then do a SET SESSION AUTHORIZATION (or SET ROLE) to 
change your permissions.  Not sure how secure this would be but it's the way 
I would try.
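
A sketch of that check (assuming the role's password was stored with the
normal md5 method, i.e. pg_shadow.passwd holds 'md5' || md5(password ||
username); the role name and password below are placeholders):

-- does the supplied password match the stored hash for this role?
SELECT passwd = 'md5' || md5('secret_password' || 'some_user') AS password_ok
  FROM pg_shadow
 WHERE usename = 'some_user';

-- if it matches, switch the session to that role
SET SESSION AUTHORIZATION some_user;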

Regards,

Ben
Thorsten Kraus [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Hi,

 thanks for your answer. I cant use the username/password in my DSN because 
 I don't connect directly via JDBC to the database. I use hibernate for all 
 database actions. The username and password has to be stored in the 
 hibernate configuration file...

 Bye,
 Thorsten


 Lutz Broedel schrieb:

 Can you not use the username/password as part of the DSN?

 Regards,
 Lutz Broedel



 





Re: [GENERAL] documentation generator for pgsql

2007-04-03 Thread jezemine

 There are a few others.

 http://freshmeat.net/projects/postgresql_autodoc
 http://dbmstools.sourceforge.net/
 http://sqlfairy.sourceforge.net/

 are some of the ones with explicit postgresql support I've played
 with in the past. I've had some luck using the ODBC or
 JDBC based ones too.

thanks for the links - good to know.  they are ok, but sqlspec is much
more comprehensive.

for example, compare these:
http://www.rbt.ca/autodoc/autodocexample.html
http://dbmstools.sourceforge.net/samples/xml2doc/schema-jcr-frames/index-jcr-postgres8.html

to this:
http://www.elsasoft.org/chm/dellstore.zip (chm)
http://www.elsasoft.org/localhost.dellstore (IE)
http://www.elsasoft.org/tabular/localhost.dellstore (non-IE, eg,
firefox)

not to mention that sqlspec supports every other DBMS of
consequence. ;)

Jesse





[GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Steve Gerhardt

I've been working for the past few weeks on porting a closed source
BitTorrent tracker to use PostgreSQL instead of MySQL for storing
statistical data, but I've run in to a rather large snag. The tracker in
question buffers its updates to the database, then makes them all at
once, sending anywhere from 1-3 MiB of query data. With MySQL, this is
accomplished using the INSERT INTO...ON DUPLICATE KEY UPDATE query,
which seems to handle the insert/update very quickly; generally it only
takes about a second for the entire set of new data to be merged.

The problem I am encountering is that when I attempt to duplicate this
functionality in Postgres, it is terrifically slow to a point of utter
unusability. The tracker currently handles around 10,000-40,000 client
updates per minute, which translates roughly to the same number of rows
in the database. Part of the issue is that some of those rows cannot be
updated because they do not yet exist in the database, but there is
likely around a 100:1 ratio on updates to inserts.

After consulting with some of the folks on the PostgreSQL IRC channel on
freenode.net, I was left with this idea to try:

-
BEGIN

CREATE TEMP TABLE temp_p2 ON COMMIT DROP AS (SELECT tid, uid, uploaded,
downloaded, remaining, avg_up, avg_down, active, timespent, ip, port,
peer_id, blocked FROM peers2 WHERE FALSE)

COPY temp_p2 FROM STDIN WITH CSV QUOTE AS 
 the data is sent by the tracker using PQputCopyData 

UPDATE peers2 AS p SET uploaded = p.uploaded + t.uploaded, downloaded =
p.downloaded + t.downloaded, remaining = t.remaining, avg_up = t.avg_up,
avg_down = t.avg_down, active = t.active, timespent = p.timespent +
t.timespent, ip = t.ip, port = t.port, blocked = t.blocked, timestamp =
CURRENT_TIMESTAMP FROM temp_p2 AS t WHERE (p.uid = t.uid AND p.tid = t.tid)

INSERT INTO peers2 (tid, uid, uploaded, downloaded, remaining, avg_up,
avg_down, active, timespent, ip, port, peer_id, blocked) SELECT t.* FROM
temp_p2 AS t LEFT JOIN peers2 USING (uid, tid) WHERE peers2.uid IS NULL
AND peers2.tid IS NULL

COMMIT
-

Initial attempts showed the UPDATE query was incredibly slow. After
sitting down at the psql command line, I managed to get the query plan
for it after much waiting.

# EXPLAIN ANALYZE UPDATE peers2...etc etc
QUERY PLAN
-
 Merge Join  (cost=262518.76..271950.65 rows=14933 width=153) (actual time=8477.422..9216.893 rows=26917 loops=1)
   Merge Cond: ((p.tid = t.tid) AND (p.uid = t.uid))
   ->  Sort  (cost=177898.12..180004.09 rows=842387 width=65) (actual time=7803.248..8073.817 rows=109732 loops=1)
         Sort Key: p.tid, p.uid
         ->  Seq Scan on peers2 p  (cost=0.00..25885.87 rows=842387 width=65) (actual time=0.043..4510.771 rows=647686 loops=1)
   ->  Sort  (cost=84620.64..85546.64 rows=370400 width=96) (actual time=641.438..761.893 rows=55393 loops=1)
         Sort Key: t.tid, t.uid
         ->  Seq Scan on temp_p2 t  (cost=0.00..2.00 rows=370400 width=96) (actual time=0.093..275.110 rows=55393 loops=1)
 Total runtime: 192569.492 ms
(9 rows)

(Apologies if the formatting got ruined by my e-mail client.)

Essentially, it looks like what it's doing is sorting both tables on the
WHERE clause, then finding which positions correspond between the two.
The problem is that, as can be seen, peers2 has 600,000+ rows, so
sequential scanning and sorting it is a rather non-trivial operation. As
a sidenote, there is a unique index set up for peers2.uid and
peers2.tid, so any lookups should be fully indexed.

After this method seemed to fail miserably, I took another approach and
wrote a stored procedure, which should in theory accomplish much the
same thing. I assumed this would be faster because it would iterate over
the temp_p2 table sequentially, and do a simple index lookup + update to
the peers2 table on each step, without any sorting or craziness
required. For this to work, the tracker needs to automatically
categorize client updates into needs UPDATE or needs INSERT buffers,
which would be handled separately. The inserts are lightning quick and
are not an issue, but the updates, as illustrated below, are not very good.

Here is the first version of the stored procedure:

CREATE OR REPLACE FUNCTION tracker_update() RETURNS integer AS $PROC$
DECLARE
  rec temp_p2%ROWTYPE;
BEGIN
  FOR rec IN SELECT * FROM temp_p2 LOOP
UPDATE peers2 SET uploaded = uploaded + rec.uploaded,
   downloaded = downloaded + rec.downloaded,
   remaining = rec.remaining,
   avg_up = rec.avg_up,
   avg_down = rec.avg_down,
   active = rec.active,
   timespent = timespent + rec.timespent,
   ip = rec.ip,
   port = rec.port,
   peer_id = rec.peer_id,
   blocked = rec.blocked,
   timestamp = CURRENT_TIMESTAMP
   WHERE uid = rec.uid AND tid = rec.tid;
  END 

Re: [GENERAL] sql schema advice sought

2007-04-03 Thread Jonathan Vanasco


On Apr 3, 2007, at 9:56 AM, Jaime Silvela wrote:


I have a similar situation. Here's what I do.

I have a stand-alone comment table:
   Comments
   id
   timestamp
   text

Then I have individual product tables to tie a table to a comment:
   Table_A_Comment
   id
   id_ref_a references tableA
   id_comment references Comments


That's perfect, and simple.

The unified table is too dirty :)  I've done stuff like that in the
past, and was always upset with it.





// Jonathan Vanasco

| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -  
- - - - - - - - - - - - - - - -

| FindMeOn.com - The cure for Multiple Web Personality Disorder
| Web Identity Management and 3D Social Networking
| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -  
- - - - - - - - - - - - - - - -

| RoadSound.com - Tools For Bands, Stuff For Fans
| Collaborative Online Management And Syndication Tools
| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -  
- - - - - - - - - - - - - - - -






Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Scott Marlowe
On Mon, 2007-04-02 at 22:24, Steve Gerhardt wrote:
 I've been working for the past few weeks on porting a closed source
 BitTorrent tracker to use PostgreSQL instead of MySQL for storing
 statistical data, but I've run in to a rather large snag. The tracker in
 question buffers its updates to the database, then makes them all at
 once, sending anywhere from 1-3 MiB of query data. With MySQL, this is
 accomplished using the INSERT INTO...ON DUPLICATE KEY UPDATE query,
 which seems to handle the insert/update very quickly; generally it only
 takes about a second for the entire set of new data to be merged.
 
 The problem I am encountering is that when I attempt to duplicate this
 functionality in Postgres, it is terrifically slow to a point of utter
 unusability. The tracker currently handles around 10,000-40,000 client
 updates per minute, which translates roughly to the same number of rows
 in the database. Part of the issue is that some of those rows cannot be
 updated because they do not yet exist in the database, but there is
 likely around a 100:1 ratio on updates to inserts.
 
 After consulting with some of the folks on the PostgreSQL IRC channel on
 freenode.net, I was left with this idea to try:

I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.

Can you step back and work on how the app writes out data, so that it
opens a persistent connection, and then sends in the updates one at a
time, committing every couple of seconds while doing so?



Re: [GENERAL] Webappication and PostgreSQL login roles

2007-04-03 Thread Thorsten Kraus
This would be a possible way. Now the question is which MD5 algorithm 
implementation PostgreSQL uses...


Bye,
Thorsten

Ben Trewern schrieb:
You could originally connect to the database as some kind of power user. 
Check the password against the pg_shadow view (you would need to md5 your 
password somehow) and then do a SET SESSION AUTHORIZATION (or SET ROLE) to 
change your permissions.  Not sure how secure this would be but it's the way 
I would try.


Regards,

Ben
Thorsten Kraus [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
  

Hi,

thanks for your answer. I cant use the username/password in my DSN because 
I don't connect directly via JDBC to the database. I use hibernate for all 
database actions. The username and password has to be stored in the 
hibernate configuration file...


Bye,
Thorsten


Lutz Broedel schrieb:


Can you not use the username/password as part of the DSN?

Regards,
Lutz Broedel

  








  




Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Jonathan Vanasco


On Apr 3, 2007, at 11:44 AM, Scott Marlowe wrote:


I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.

Can you step back and work on how the app writes out data, so that it
opens a persistent connection, and then sends in the updates one at a
time, committing every couple of seconds while doing so?


I'd look into indexing the tables your update requires in such a way  
that you're not doing so many  sequential scans.


I have a system that does many updates on a quickly growing db - 5M  
rows last week, 25M this week.


Even simple updates could take forever, because of poor indexing in  
relation to fields addressed in the 'where' on the update and foreign  
keys.

With some proper updating, the system is super fast again.

So I'd look into creating new indexes and trying to shift the seq
scans into more time-efficient index scans.
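
For instance (hypothetical table and column names; the point is simply to
match the index to the columns the statement filters on):

-- an UPDATE ... WHERE customer_id = ... AND created > ... can then use
-- an index scan instead of a sequential scan
CREATE INDEX orders_customer_created_idx ON orders (customer_id, created);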






[GENERAL] COPY FROM - how to identify results?

2007-04-03 Thread Jaime Silvela
I've written a web application where users can upload spreadsheets, 
instead of having to key in forms. The spreadsheets get parsed and 
INSERTed into a table, and the INSERT adds an identifier so 
that I can always trace back what a particular row in the table 
corresponds to.
I'd like to use COPY ... FROM to achieve the same thing, but a stopping 
point is that I don't see how to add the new spreadsheet with a 
particular identifier.


I'd like to be able to do something like
COPY mytable (field-1, ..  field-n, id = my_id) FROM file; or
COPY mytable FROM file WITH id = my_id;

A very messy solution would be to create a temp table with a special 
name, COPY to it, then INSERT from it to the permanent table. However, I 
don't want a solution of that type.


I assume many people have this same problem. Any elegant solutions here?

Thanks
Jaime






Re: [GENERAL] BitmapScan mishaps

2007-04-03 Thread Tom Lane
Listmail [EMAIL PROTECTED] writes:
   It bitmapscans about half the table...

Which PG version is this exactly?  We've fooled with the
choose_bitmap_and heuristics quite a bit ...

regards, tom lane



Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Listmail



I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.

Can you step back and work on how the app writes out data, so that it
opens a persistent connection, and then sends in the updates one at a
time, committing every couple of seconds while doing so?


	No, he can't, I also coded a bittorrent tracker of the same kind, and the  
problem is :


- each user has N torrents active
- torrent client does N tracker requests
- tracker only does 1 UPDATE to update the user's stats

	So if you do instantaneous updates you multiply your query load by N (on  
average between 6 and 8).


Besides, these kinds of trackers face several problems :

	- they are accessed by clients which have near saturated connections  
since they're leeching illegal prOn like crazy

- therefore these HTTP connections are very slow
- therefore you have a hell of a lot of concurrent connections.

	Therefore using a threaded server for this kind of load is asking for  
trouble.
	All decent torrent trackers are designed like lighttpd : select() /  
poll() or other variants, and no threads.

No threads means, database queries are something long and to be avoided.
	Hosting providers will delete your account if they see a php torrent  
tracker on it, and for good reason.


600 hits/s = 600 connections = 600 apache and PG processes = you cry.

	Anyway my tracker was in Python with a select/poll asynchronous HTTP model.
It handled 200 HTTP requests per second using 10% CPU on a Core 2. I guess
that's pretty decent.


	(I do NOT work on it anymore, DO NOT ASK for sources, it is illegal now  
in my country to code trackers so I have completely dropped the project,  
but I guess helping a fellow living in a free country is OK)


Back to databases.

You complain that postgres is slow for your application.
	Yes, it is a lot slower than MyISAM *on this application* (but try InnoDB
and cry).
	But PG is a real database. It is simply not the right tool for your
application.

You have to choose between in-place updates and transactions.

(besides, your website is locked while MySQL does your big UPDATE).

Here is how you can do it :

Your problem is that you put the peers in the database.
Ask yourself why ?

You need seeders / leechers count for each torrent ?
	- Two INTEGERs in your torrents table, updated in batch by the tracker  
every hour.


	You need to have all peers saved somewhere so that you may exit and  
restart your tracker ?

- code your tracker in python and be able to reload running code
- or just save it when you exit
	- or don't save it, it's not like it's your bank accounting data, who  
cares
	- the FBI will be happy to have all that data when they seize your  
server (see: piratebay laughs as all data was in RAM and police had to  
unplug the server to seize it.)


So, DO NOT put the peers in the database.
	IF you put the peers info in the database you get one UPDATE per user per  
torrent.

If you only update the user stats you only get one UPDATE per user.
	And the tracker never inserts users and torrents (hopefully) so you only  
get UPDATES to users and to torrents tables, never inserts.


Now you need to display realtime info on the user's and torrents pages.
	This is easily done : your tracker is an HTTP server, so it can serve data
via HTTP (PHP-serialized, JSON, whatever) that is inserted via AJAX or PHP
in your webpages.


	From my stats my tracker needs about 100 microseconds to serve a HTTP web  
page with the peer counts for a torrent.


	So, you don't need Postgres for your tracker ! Use it for your website  
instead...








Re: [GENERAL] BitmapScan mishaps

2007-04-03 Thread Listmail

On Tue, 03 Apr 2007 19:23:31 +0200, Tom Lane [EMAIL PROTECTED] wrote:


Listmail [EMAIL PROTECTED] writes:

It bitmapscans about half the table...


Which PG version is this exactly?  We've fooled with the
choose_bitmap_and heuristics quite a bit ...

regards, tom lane


Version is 8.2.3.





Re: [GENERAL] COPY FROM - how to identify results?

2007-04-03 Thread brian

Jaime Silvela wrote:
I've written a web application where users can upload spreadsheets, 
instead of having to key in forms. The spreadsheets get parsed and 
INSERTED into a table, and with the INSERT gets added an identifier so 
that I can always trace back what a particular row in the table 
corresponds to.
I'd like to use COPY - FROM to achieve the same thing, but a stopping 
point is that I don't see how to add the new spreadsheet with a 
particular identifier.


I'd like to be able to do something like
COPY mytable (field-1, ..  field-n, id = my_id) FROM file; or
COPY mytable FROM file WITH id = my_id;

A very messy solution would be to create a temp table with a special 
name, COPY to it, then INSERT from it to the permanent table. However, I 
don't want a solution of that type.




I may have completely misunderstood you, but I'd think that copying the 
data directly from an uploaded file would be more than a little 
insecure. But then, you also mentioned that you parse the uploaded file. 
I don't understand how these two statements can be compatible.


Do you mean that you'd like to load the data into a table, then retrieve 
the sequence ID? Presumably, if your application is really parsing the 
data first, one could simply do an INSERT and then grab the last 
inserted ID. Look at nextval() & currval().


http://www.postgresql.org/docs/7.3/static/functions-sequence.html
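
A sketch of that approach, with invented table and sequence names:

-- record the upload itself, then tag every row with its id
INSERT INTO uploads (filename) VALUES ('report_q1.csv');

-- currval() returns the id just generated by this session
INSERT INTO mytable (field_1, field_n, upload_id)
VALUES ('a', 'b', currval('uploads_id_seq'));

This still goes through INSERT rather than COPY, which is the point of the
suggestion above.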

brian



Re: [GENERAL] UPDATE on two large datasets is very slow

2007-04-03 Thread Tom Lane
Steve Gerhardt [EMAIL PROTECTED] writes:
 # EXPLAIN ANALYZE UPDATE peers2...etc etc
 QUERY PLAN
 -
  Merge Join  (cost=262518.76..271950.65 rows=14933 width=153) (actual time=8477.422..9216.893 rows=26917 loops=1)
    Merge Cond: ((p.tid = t.tid) AND (p.uid = t.uid))
    ->  Sort  (cost=177898.12..180004.09 rows=842387 width=65) (actual time=7803.248..8073.817 rows=109732 loops=1)
          Sort Key: p.tid, p.uid
          ->  Seq Scan on peers2 p  (cost=0.00..25885.87 rows=842387 width=65) (actual time=0.043..4510.771 rows=647686 loops=1)
    ->  Sort  (cost=84620.64..85546.64 rows=370400 width=96) (actual time=641.438..761.893 rows=55393 loops=1)
          Sort Key: t.tid, t.uid
          ->  Seq Scan on temp_p2 t  (cost=0.00..2.00 rows=370400 width=96) (actual time=0.093..275.110 rows=55393 loops=1)
  Total runtime: 192569.492 ms
 (9 rows)

 Essentially, it looks like what it's doing is sorting both tables on the
 WHERE clause, then finding which positions correspond between the two.

You're focusing on the wrong thing --- there's nothing wrong with the plan.
It's only taking 9 seconds to perform the merge join.  The other 183
seconds are going somewhere else; you need to find out where.

One thing that came to mind was triggers, which would be shown in the
EXPLAIN results if you are using a sufficiently recent version of PG
(but you didn't say what you're using) ... however if this is a straight
port of MySQL code it's pretty unlikely to have either custom triggers
or foreign keys, so that is most likely the wrong guess.  It may just be
that it takes that long to update 26917 rows, which would suggest a
configuration problem to me.

 Anyway, I admit I haven't done a great deal of configuration file tuning
 for the Postgres setup,

shared_buffers, wal_buffers, and checkpoint_segments seem like things
you might need to increase.

Another problem with this approach is that it's not going to take long
before the table is bloated beyond belief, if it's not vacuumed
regularly.  Do you have autovacuum turned on?

Does the tracker tend to send a lot of null updates (no real change to
the rows)?  If so it'd be worth complicating the query to check for
no-change and avoid the update for unchanged rows.
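
A sketch of what that extra check could look like on the UPDATE from the
original post (which columns are worth testing depends on what actually
changes between tracker updates):

UPDATE peers2 AS p
   SET uploaded = p.uploaded + t.uploaded,
       downloaded = p.downloaded + t.downloaded,
       remaining = t.remaining,
       avg_up = t.avg_up,
       avg_down = t.avg_down,
       active = t.active,
       timespent = p.timespent + t.timespent,
       ip = t.ip,
       port = t.port,
       blocked = t.blocked,
       timestamp = CURRENT_TIMESTAMP
  FROM temp_p2 AS t
 WHERE p.uid = t.uid AND p.tid = t.tid
   -- skip rows where nothing would actually change
   AND (t.uploaded <> 0 OR t.downloaded <> 0 OR t.timespent <> 0
        OR p.remaining IS DISTINCT FROM t.remaining
        OR p.active IS DISTINCT FROM t.active
        OR p.ip IS DISTINCT FROM t.ip
        OR p.port IS DISTINCT FROM t.port
        OR p.blocked IS DISTINCT FROM t.blocked);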

regards, tom lane



Re: [GENERAL] Using C# to create stored procedures

2007-04-03 Thread Martin Gainty

David--

Mono is .NET on SUSE.

Here's the main site. Beware, this is rather complicated to install and
configure, but once you do, you can run the .NET Framework as a SUSE
binary image and then allow the GAC to pull in assemblies.

This link will get you started:
http://www.mono-project.com/VMware_Image

I don't believe I'm saying this, but Perl *might possibly be* an easier
development environment for interfacing to stored procedures--

Then again, if you have 3 or 4 servers in your LR and have time to install
and configure, this might be a worthwhile weekend project.

I know of a SUSE engineer who worked at Novell developing this, so give me a
shout if you REALLY get stuck.


Martin

__
Disclaimer and confidentiality note
Everything in this e-mail and any attachments relates to the official 
business of Sender. This transmission is of a confidential nature and Sender 
does not endorse distribution to any party other than intended recipient. 
Sender does not necessarily endorse content contained within this 
transmission.







From: David Fetter [EMAIL PROTECTED]
To: Andrus [EMAIL PROTECTED]
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Using C# to create stored procedures
Date: Mon, 2 Apr 2007 10:43:24 -0700


On Fri, Mar 30, 2007 at 12:19:44PM +0300, Andrus wrote:
 Any idea how to write server-side stored procedures in C#  for
 PostgreSQL database ?

There's an old project called PL/Mono
http://gborg.postgresql.org/project/plmono/projdisplay.php, but as
far as I know it's unmaintained.  You might want to try to contact the
author. :)

Cheers,
David.

 In windows .NET 2 framework should be used and in Linuc/Mac/Windows
 MONO should be used for this.

 How to install MONO engine as server-side language to PostgreSQL ?

 How to call .NET dlls from PostgreSQL stored procedure ?


 Andrus.


 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

--
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!
Consider donating to PostgreSQL: http://www.postgresql.org/about/donate

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


_
Exercise your brain! Try Flexicon. 
http://games.msn.com/en/flexicon/default.htm?icid=flexicon_hmemailtaglineapril07



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PGSQL with high number of database rows?

2007-04-03 Thread Listmail


Are there any implications with possibly doing this? will PG handle it?  
Are there realworld systems using PG that have a massive amount of data  
in them?


It's not how much data you have, it's how you query it.

	You can have a table with 1000 rows and be dead slow if said rows are big  
TEXT data and you seq-scan it in its entirety on every webpage hit your  
server gets...
	You can have a terabyte table with billions of rows, and be fast if you  
know what you're doing and have proper indexes.


	Learning all this is very interesting. MySQL always seemed hostile to me,  
but postgres is friendly, has helpful error messages, the docs are great,  
and the developer team is really nice.


	The size of your data has no importance (unless your disk is full), but  
the size of your working set does.


	So, if you intend on querying your data for a website, for instance,  
where the user searches data using forms, you will need to index it  
properly so you only need to explore small sections of your data set in  
order to be fast.
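
	For example (a hypothetical sketch, not the poster's schema): if the search form filters on zipcode and a price range, an index on those columns lets the planner touch only the matching slice of the table:

    CREATE INDEX items_zip_price_idx ON items (zipcode, price);
    EXPLAIN ANALYZE
    SELECT * FROM items WHERE zipcode = 69001 AND price < 200000;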


	If you intend to scan entire tables to generate reports or statistics,  
you will be more interested in knowing if the size of your RAM is larger  
or smaller than your data set, and about your disk throughput.


So, what is your application ?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Webappication and PostgreSQL login roles

2007-04-03 Thread Listmail



I designed a Java web application. The persistence layer is a
PostgreSQL database. The application needs user authentication.
I think it's a good choice to implement this authentication mechanism
via PostgreSQL login roles. So I can create several database login
roles and set the database permissions to this login roles. This is my
first project with the postgres database, so I don't know how I can
validate a login from the website. Is there a best practice to do this
or does PostgreSQL offers a stored procedure like
'authenticateUser(String username, String password)'?


	Keep in mind that this might interact badly with very desirable features  
like :


- persistent connections
	(opening a postgres connection takes a lot longer than a simple SELECT,  
so if you must reopen connections all the time your performance will suck)


- connection pooling
(what happens when a user gets the admin's connection out of the pool ?)

	Since you use an object-relational mapper I believe it is better, and  
more flexible to have your objects handle their own operations.
	On a very basic level your objects can have a .isReadOnly() method which  
is checked in your application before any writing takes place, for  
instance.
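
	One common alternative (a sketch only, not part of the original reply): keep application users in an ordinary table and validate logins with a parameterized query, so the connection pool can keep using a single database role:

    CREATE TABLE app_user (
        username text PRIMARY KEY,
        pw_hash  text NOT NULL   -- store a hash, never the clear-text password
    );
    -- executed by the application with $1 = username, $2 = password
    SELECT 1 FROM app_user WHERE username = $1 AND pw_hash = md5($2 || username);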


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] COPY FROM - how to identify results?

2007-04-03 Thread Jaime Silvela

Brian, that's not what I meant.
Parsing of the uploaded file is just for the purpose of extracting the 
components of each spreadsheet row and constructing the INSERTs.
Actually, whenever I copy from a file, either using COPY or with a 
custom importer, I put the data into a staging table, so that I can 
pre-process before writing to the main table. But why would COPYing from 
a file be so insecure?


nextval() and sequences are not what I'm looking for. I want to assign 
the same id to all the rows imported from the same file. Let's say user 
A is working on portfolio_id 3, and decides to upload a spreadsheet with 
new values. I want to be able to import the spreadsheet into the staging 
table, and assign a portfolio_id of 3 to all its entries.
Of course, I can't just UPDATE the staging table to have portfolio_id = 
3, because user B might also be uploading a sheet for portfolio_id = 9.


Any ideas on this?

Thanks
Jaime

brian wrote:

Jaime Silvela wrote:
I've written a web application where users can upload spreadsheets, 
instead of having to key in forms. The spreadsheets get parsed and 
INSERTED into a table, and with the INSERT gets added an identifier 
so that I can always trace back what a particular row in the table 
corresponds to.
I'd like to use COPY - FROM to achieve the same thing, but a stopping 
point is that I don't see how to add the new spreadsheet with a 
particular identifier.


I'd like to be able to do something like
COPY mytable (field-1, ..  field-n, id = my_id) FROM file; or
COPY mytable FROM file WITH id = my_id;

A very messy solution would be to create a temp table with a special 
name, COPY to it, then INSERT from it to the permanent table. 
However, I don't want a solution of that type.




I may have completely misunderstood you, but i'd think that copying 
the data directly from an uploaded file would be more than a little 
insecure. But then, you also mentioned that you parse the uploaded 
file. I don't understand how these two statements can be compatible.


Do you mean that you'd like to load the data into a table, then 
retrieve the sequence ID? Presumably, if your application is really 
parsing the data first, one could simply do an INSERT and then grab 
the last inserted ID. Look at nextval()  currval().


http://www.postgresql.org/docs/7.3/static/functions-sequence.html

brian

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] BitmapScan mishaps

2007-04-03 Thread Tom Lane
Listmail [EMAIL PROTECTED] writes:
 On Tue, 03 Apr 2007 19:23:31 +0200, Tom Lane [EMAIL PROTECTED] wrote:
 Listmail [EMAIL PROTECTED] writes:
 It bitmapscans about half the table...
 
 Which PG version is this exactly?  We've fooled with the
 choose_bitmap_and heuristics quite a bit ...

   Version is 8.2.3.

Hmmm [ studies query a bit more... ]  I think the reason why that index
is so expensive to use is exposed here:

Index Cond: ((detect_time > (now() - '7 days'::interval))
 AND (detect_time >= '2006-10-30 16:17:45.064793'::timestamp without time zone))

Evidently detect_time is timestamp without time zone, but you're
comparing it to an expression that is timestamp with time zone
(ie CURRENT_TIMESTAMP).  That's an enormously expensive operator
compared to straight comparisons of two timestamps of the same ilk,
because it does some expensive stuff to convert across time zones.
And you're applying it to a whole lot of index rows.

If you change the query to use LOCALTIMESTAMP to avoid the type
conversion, how do the two plans compare?
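
For instance, a sketch of the rewrite (keeping both sides timestamp without time zone):

    -- before: the timestamptz expression forces a per-row conversion
    SELECT count(*) FROM annonces WHERE detect_time > now() - interval '7 days';
    -- after: both sides are timestamp without time zone
    SELECT count(*) FROM annonces WHERE detect_time > LOCALTIMESTAMP - interval '7 days';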

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] COPY FROM - how to identify results?

2007-04-03 Thread brian

Jaime Silvela wrote:

Brian, that's not what I meant.
Parsing of the uploaded file is just for the purpose of extracting the 
components of each spreadsheet row and constructing the INSERTs.
Actually, whenever I copy from a file, either using COPY or with a 
custom importer, I put the data into a staging table, so that I can 
pre-process before writing to the main table. But why would COPYing from 
a file be so insecure?




I was under the impression that you were copying indiscriminately from 
an uploaded CSV file (spreadsheet being ambiguous). Obviously, that 
would be a Bad Thing to rely upon.


nextval() and sequences are not what I'm looking for. I want to assign 
the same id to all the rows imported from the same file. Let's say user 
A is working on portfolio_id 3, and decides to upload a spreadsheet with 
new values. I want to be able to import the spreadsheet into the staging 
table, and assign a portfolio_id of 3 to all its entries.
Of course, I can't just UPDATE the staging table to have portfolio_id = 
3, because user B might also be uploading a sheet for portfolio_id = 9.




Seems like you need to adjust your schema to use a pivot table:

CREATE TABLE portfolio (
    id SERIAL PRIMARY KEY,
    ...
);

CREATE TABLE portfolio_entries (
    portfolio_id INT4 NOT NULL,
    ...
    CONSTRAINT fk_portfolio_entries FOREIGN KEY (portfolio_id)
        REFERENCES portfolio
        ON DELETE CASCADE
);

Then you should be able to insert directly into the second table a row 
for each entry (for want of a better word) that corresponds to a 
particular portfolio.


brian

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] BitmapScan mishaps

2007-04-03 Thread Listmail



Hmmm [ studies query a bit more... ]  I think the reason why that index
is so expensive to use is exposed here:

   Index Cond: ((detect_time > (now() - '7 days'::interval)) AND (detect_time >= '2006-10-30 16:17:45.064793'::timestamp without time zone))


Evidently detect_time is timestamp without time zone, but you're
comparing it to an expression that is timestamp with time zone
(ie CURRENT_TIMESTAMP).  That's an enormously expensive operator
compared to straight comparisons of two timestamps of the same ilk,
because it does some expensive stuff to convert across time zones.
And you're applying it to a whole lot of index rows.

If you change the query to use LOCALTIMESTAMP to avoid the type
conversion, how do the two plans compare?

regards, tom lane


	OK, I recreated the index, and... you were right. Actually, it was my  
query that sucked.


   Index Cond: ((detect_time > (now() - '7 days'::interval)) AND (detect_time >= '2006-10-30 16:17:45.064793'::timestamp without time zone))


	Is it greatest() which returns the highest of two values? (like max() but  
not an aggregate)

Anyway, I fixed this in the code that generates the query, it's cleaner.
So now, I just put a constant timestamp.
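
	(The non-aggregate function is GREATEST(); a sketch of collapsing the two lower bounds into a single one:)

    SELECT * FROM annonces
    WHERE detect_time >= GREATEST('2006-10-30 16:17:45'::timestamp,
                                  LOCALTIMESTAMP - interval '7 days');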

	Then we have this interesting side effect. Simply changing the timestamp  
value induces a different plan, and the one which returns more rows is  
actually faster !


annonces=> EXPLAIN ANALYZE SELECT * FROM annonces AS a
WHERE
detect_time >= '2007-03-27 20:46:29.187131+02'
AND vente
AND surface IS NOT NULL AND price IS NOT NULL
AND type_id IN
(1,12,24,17,18,19,20,33,35,50,51,7,52,4,13,41,14,16,26,28,43,53,15,29,30,31,45,32,34,46,47,6)
AND (a.zipcode IN (69001,69002,69003,69004,69005,69006,69007,69008,69009)
OR a.city_id IN (27595)
OR a.coords &&
'(45.74101689082,4.8371263505564),(45.75898310918,4.8628736494436)'::BOX) ;
                                        QUERY PLAN
------------------------------------------------------------------------------------------
 Bitmap Heap Scan on annonces a  (cost=1422.91..6758.82 rows=1130 width=691) (actual time=27.007..27.542 rows=194 loops=1)
   Recheck Cond: (((vente AND (zipcode = ANY ('{69001,69002,69003,69004,69005,69006,69007,69008,69009}'::integer[]))) OR (vente AND (city_id = 27595)) OR (coords && '(45.75898310918,4.8628736494436),(45.74101689082,4.8371263505564)'::box)) AND (detect_time >= '2007-03-27 20:46:29.187131'::timestamp without time zone))
   Filter: (vente AND (surface IS NOT NULL) AND (price IS NOT NULL) AND (type_id = ANY ('{1,12,24,17,18,19,20,33,35,50,51,7,52,4,13,41,14,16,26,28,43,53,15,29,30,31,45,32,34,46,47,6}'::integer[])))
   ->  BitmapAnd  (cost=1422.91..1422.91 rows=2367 width=0) (actual time=26.960..26.960 rows=0 loops=1)
         ->  BitmapOr  (cost=133.80..133.80 rows=4368 width=0) (actual time=2.764..2.764 rows=0 loops=1)
               ->  Bitmap Index Scan on annonces_zip  (cost=0.00..56.54 rows=1825 width=0) (actual time=0.503..0.503 rows=1580 loops=1)
                     Index Cond: ((vente = true) AND (zipcode = ANY ('{69001,69002,69003,69004,69005,69006,69007,69008,69009}'::integer[])))
               ->  Bitmap Index Scan on annonces_city  (cost=0.00..43.30 rows=1904 width=0) (actual time=0.457..0.457 rows=1575 loops=1)
                     Index Cond: ((vente = true) AND (city_id = 27595))
               ->  Bitmap Index Scan on annonces_coords  (cost=0.00..33.10 rows=640 width=0) (actual time=1.802..1.802 rows=2166 loops=1)
                     Index Cond: (coords && '(45.75898310918,4.8628736494436),(45.74101689082,4.8371263505564)'::box)
         ->  Bitmap Index Scan on annonces_timestamp  (cost=0.00..1288.58 rows=69375 width=0) (actual time=23.906..23.906 rows=68022 loops=1)
               Index Cond: (detect_time >= '2007-03-27 20:46:29.187131'::timestamp without time zone)
 Total runtime: 27.669 ms
(14 lignes)

annonces=> EXPLAIN ANALYZE SELECT * FROM annonces AS a WHERE
detect_time >= '2006-03-27 20:46:29.187131+02'
AND vente
AND surface IS NOT NULL AND price IS NOT NULL
AND type_id IN
(1,12,24,17,18,19,20,33,35,50,51,7,52,4,13,41,14,16,26,28,43,53,15,29,30,31,45,32,34,46,47,6)
AND (a.zipcode IN (69001,69002,69003,69004,69005,69006,69007,69008,69009)
OR a.city_id IN (27595)
OR a.coords &&
'(45.74101689082,4.8371263505564),(45.75898310918,4.8628736494436)'::BOX) ;
   

Re: [GENERAL] COPY FROM - how to identify results?

2007-04-03 Thread Jaime Silvela
That's sort of what I have already, and my problem is that the 
portfolio_id field does not exist in the CSV files. I'd like to be able 
to assign a portfolio_id, for the current file's entries. Another person 
in the list suggested dynamically adding a column with the portfolio_id 
to the file, and that of course would work, but is kinda messy.


The problem with the solution you suggest is that when doing COPY, I'll 
get a complaint because of trying to populate an entry with a null value 
for portfolio_id.
Some sort of automatic population of the portfolio_id field wouldn't 
work either, since many different users and processes could be inserting 
data into the staging table simultaneously.




Seems like you need to adjust your schema to use a pivot table:

CREATE TABLE portfolio (

id SERIAL PRIMARY KEY,
...

CREATE TABLE portfolio_entries (
portfolio_id INT4 NOT NULL,

...

CONSTRAINT fk_portfolio_entries FOREIGN KEY (portfolio_id)
REFERENCES portfolio
ON DELETE CASCADE

Then you should be able to insert directly into the second table a row 
for each entry (for want of a better word) that corresponds to a 
particular portfolio.


brian

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster





---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] speeding up a query

2007-04-03 Thread Marcus Engene

Hi,

I'm on 8.0.10 and there is a query I cannot quite get adequately fast.
Should it take 2.5s to sort these 442 rows? Are my settings bad? Is
my query stupid?

Would appreciate any tips.

Best regards,
Marcus


apa= explain analyze
apa-  select
apa-  ai.objectid as ai_objectid
apa-  from
apa-  apa_item ai
apa-  where
apa-  idxfti @@ to_tsquery('default', 'KCA0304')  AND
apa-  ai.status = 30
apa-  ORDER BY ai.calc_rating desc
apa-  LIMIT 1000;

 Limit  (cost=54.40..54.43 rows=12 width=8) (actual time=2650.254..2651.093 rows=442 loops=1)
   ->  Sort  (cost=54.40..54.43 rows=12 width=8) (actual time=2650.251..2650.515 rows=442 loops=1)
         Sort Key: calc_rating
         ->  Index Scan using apa_item_fts on apa_item ai  (cost=0.00..54.18 rows=12 width=8) (actual time=61.261..2649.045 rows=442 loops=1)
               Index Cond: (idxfti @@ '''kca0304'''::tsquery)
               Filter: (status = 30)
 Total runtime: 2651.659 ms
(7 rows)

apa= explain analyze
apa-  select
apa-  ai.objectid as ai_objectid
apa-  from
apa-  apa_item ai
apa-  where
apa-  idxfti @@ to_tsquery('default', 'KCA0304')  AND
apa-  ai.status = 30
apa-  LIMIT 1000;

 Limit  (cost=0.00..54.18 rows=12 width=4) (actual time=0.186..18.628 rows=442 loops=1)
   ->  Index Scan using apa_item_fts on apa_item ai  (cost=0.00..54.18 rows=12 width=4) (actual time=0.183..17.999 rows=442 loops=1)
         Index Cond: (idxfti @@ '''kca0304'''::tsquery)
         Filter: (status = 30)
 Total runtime: 19.062 ms
(5 rows)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Nice, web-based SNMP-Frontend for pgsnmpd?

2007-04-03 Thread Florian G. Pflug

Hi

I've just stumbled across pgsnmpd. It works quite well,
though I haven't yet found a web-based monitoring
software that works well with pgsnmpd. The problem is
that pgsnmpd exports a bunch of values _per_ database.

(The output of snmpwalk looks something like
PGSQL-MIB::pgsqlDbDatabase.1.1.3 = STRING: postgres
PGSQL-MIB::pgsqlDbDatabase.1.1.4 = STRING: template0
PGSQL-MIB::pgsqlDbDatabase.1.1.5 = STRING: template1
PGSQL-MIB::pgsqlDbDatabase.1.2.3 = STRING: postgres
PGSQL-MIB::pgsqlDbDatabase.1.2.4 = STRING: postgres
PGSQL-MIB::pgsqlDbDatabase.1.2.5 = STRING: postgres
PGSQL-MIB::pgsqlDbDatabase.1.3.3 = STRING: UTF8
PGSQL-MIB::pgsqlDbDatabase.1.3.4 = STRING: UTF8
PGSQL-MIB::pgsqlDbDatabase.1.3.5 = STRING: UTF8
PGSQL-MIB::pgsqlDbDatabase.1.4.3 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.4.4 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.4.5 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.5.3 = INTEGER: 21263
PGSQL-MIB::pgsqlDbDatabase.1.5.4 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.5.5 = INTEGER: 17043
PGSQL-MIB::pgsqlDbDatabase.1.6.3 = INTEGER: 976
PGSQL-MIB::pgsqlDbDatabase.1.6.4 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.6.5 = INTEGER: 4
PGSQL-MIB::pgsqlDbDatabase.1.7.3 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.7.4 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.7.5 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.8.3 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.8.4 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.8.5 = INTEGER: 0
PGSQL-MIB::pgsqlDbDatabase.1.9.3 = STRING: 3720 kB
PGSQL-MIB::pgsqlDbDatabase.1.9.4 = STRING: 3760 kB
PGSQL-MIB::pgsqlDbDatabase.1.9.5 = STRING: 3720 kB
)

Most SNMP monitoring tools (like netmrg) allow you to
define graphs for custom oids - but they don't allow me
to say "Create a graph for every oid that matches a
certain pattern". Therefore, I'd need to manually create
one graph per database, which is tiresome...

So - does anyone know a good webapplication that does
snmp graphs?

greetings, Florian Pflug

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] assistance needed for autovacuum on the windows version of 8.2.3

2007-04-03 Thread Richard Broersma Jr
Can anyone see why autovacuum or autoanalyze are not working?


proj02u20411=# select version();
  version

 PostgreSQL 8.2.3 on i686-pc-mingw32, 
 compiled by GCC gcc.exe (GCC) 3.4.2 (mingw-special)
(1 row)

proj02u20411=# explain analyze select * from dev.tag;
  QUERY PLAN
--
 Seq Scan on tag
  (cost=0.00..810.35 rows=18835 width=166)
  (actual time=510.270..584.418 rows=2696 loops=1)
 Total runtime: 588.207 ms
(2 rows)

proj02u20411=# select * 
   from   pg_stat_all_tables
   where  schemaname = 'dev'
   andrelname= 'tag';
-[ RECORD 1 ]+--
relid| 16800
schemaname   | dev
relname  | tag
seq_scan | 255
seq_tup_read | 416689
idx_scan | 4123
idx_tup_fetch| 82080
n_tup_ins| 2585
n_tup_upd| 10
n_tup_del| 0
last_vacuum  |
last_autovacuum  |
last_analyze |
last_autoanalyze | 2007-01-17 17:57:54.33-08


#---
# RUNTIME STATISTICS
#---

# - Query/Index Statistics Collector -

stats_start_collector = on  # needed for block or row stats
# (change requires restart)
stats_row_level = on

#---
# AUTOVACUUM PARAMETERS
#---

autovacuum = on # enable autovacuum subprocess?
# 'on' requires stats_start_collector
# and stats_row_level to also be on
autovacuum_naptime = 1min   # time between autovacuum runs
autovacuum_vacuum_threshold = 100   # min # of tuple updates before
# vacuum
autovacuum_analyze_threshold = 25   # min # of tuple updates before 
# analyze
autovacuum_vacuum_scale_factor = 0.002  # fraction of rel size before 
# vacuum
autovacuum_analyze_scale_factor = 0.001 # fraction of rel size before 
# analyze
autovacuum_freeze_max_age = 2   # maximum XID age before forced vacuum
# (change requires restart)
autovacuum_vacuum_cost_delay = -1   # default vacuum cost delay for 
# autovacuum, -1 means use 
# vacuum_cost_delay
autovacuum_vacuum_cost_limit = -1   # default vacuum cost limit for 
# autovacuum, -1 means use
# vacuum_cost_limit


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] assistance needed for autovacuum on the windows version of 8.2.3

2007-04-03 Thread Alvaro Herrera
Richard Broersma Jr wrote:
 Can anyone see why autovacuum or autoanalyze are not working?

Known bug, fixed in the 8.2.4-to-be code.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] assistance needed for autovacuum on the windows version of 8.2.3

2007-04-03 Thread Richard Broersma Jr

--- Alvaro Herrera [EMAIL PROTECTED] wrote:
 Known bug, fixed in the 8.2.4-to-be code.

Okay.  Thanks for the information.

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PGSQL with high number of database rows?

2007-04-03 Thread Harvey, Allan AC
Tim,
 massive, around 20,000 new rows in one of the tables per day. 
As an example...
I'm doing about 4000 inserts spread across about 1800 tables per minute.
Pisses it in with fsync off and the PC ( IBM x3650 1 CPU, 1 Gig memory ) on a 
UPS.

Allan



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Nice, web-based SNMP-Frontend for pgsnmpd?

2007-04-03 Thread Philip Hallstrom

I've just stumbled across pgsnmpd. It works quite well,
though I haven't yet found a web-based monitoring
software that works well with pgsnmpd. The problem is
that pgsnmpd exportsa bunch of values _per_ database.

(The output of snmpwalk looks something like
PGSQL-MIB::pgsqlDbDatabase.1.1.3 = STRING: postgres
PGSQL-MIB::pgsqlDbDatabase.1.1.4 = STRING: template0



)

Most SNMP monitoring tools (like netmrg) allow you to
define graphs for custom oid - but they don't allow me
to say Create a graph for every oid that matches a
certain pattern. Therefor, I'd need to manually create
one graph per database, which is tiresome...

So - does anyone know a good webapplication that does
snmp graphs?


cacti should let you do it.  I've not done it myself, but when you tell it 
to graph disk space it lets you pick from all the partitions available. 
I just hooked it up with the litespeed web server and it does the same thing, 
letting me graph each of the configured virtual hosts.  In both cases the 
scripts/templates were built by someone else, so I can't tell you how 
to do it, but it can be done.


-philip

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Using MS Access front-end with PG

2007-04-03 Thread Paul Lambert
I've got an MS Access front end reporting system that has previously 
used MS SQL server which I am moving to Postgres.


The front end has several hundred if not thousand inbuilt/hard-coded 
queries, most of which aren't working for the following reasons:
1.) Access uses double quotes ("") as text qualifiers, PG uses single 
quotes ('').
2.) The Like function in SQL Server is case insensitive, PG it is case 
sensitive. The ilike function is not recognised by Access and it tries 
to turn that into a string, making my test (like ilike 'blah')


Has anyone had any experience with moving an access program from SQL 
server to PG?


Is there any way to change the text qualifier in PG or the case sensitivity?

TIA,
P.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Nice, web-based SNMP-Frontend for pgsnmpd?

2007-04-03 Thread Florian G. Pflug

Philip Hallstrom wrote:

I've just stumbled across pgsnmpd. It works quite well,
though I haven't yet found a web-based monitoring
software that works well with pgsnmpd. The problem is
that pgsnmpd exportsa bunch of values _per_ database.

(The output of snmpwalk looks something like
PGSQL-MIB::pgsqlDbDatabase.1.1.3 = STRING: postgres
PGSQL-MIB::pgsqlDbDatabase.1.1.4 = STRING: template0



)

Most SNMP monitoring tools (like netmrg) allow you to
define graphs for custom oid - but they don't allow me
to say Create a graph for every oid that matches a
certain pattern. Therefor, I'd need to manually create
one graph per database, which is tiresome...

So - does anyone know a good webapplication that does
snmp graphs?


cacti should let you do it.  I've not done it myself, but when you tell 
it to graph disk space it let's you pick from all the partitions 
available. I just hooked it up with litespeed web server and it does the 
same thing letting me graph each of the configured virtual hosts.  In 
both cases the scripts/templates were built by someone else so I don't 
can't tell you how to do it, but it can be done.


I've stumbled over this myself minutes before I received your mail ;-)
So far it looks good (certainly much better than netmrg), and it seems
as if it at least supports what I want semi-automatically.

Still, I'd prefer a solution where new databases show up
automatically - I'll see if I can get cacti to do that somehow.

Thanks for the tip!

greetings, Florian Pflug


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Using MS Access front-end with PG

2007-04-03 Thread Tom Lane
Paul Lambert [EMAIL PROTECTED] writes:
 Is there any way to change the text qualifier in PG

No.  I suppose you could hack the Postgres lexer but you'd break
pretty much absolutely everything other than your Access code.

 or the case sensitivity?

That could be attacked in a few ways, depending on whether you want
all text comparisons to be case-insensitive or only some (and if so
which some).  But it sounds like MS SQL's backward standards for
strings vs identifiers has got you nicely locked in, as intended :-(
so there may be no point in discussing further.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Paul Lambert



Tom Lane wrote:

Paul Lambert [EMAIL PROTECTED] writes:

Is there any way to change the text qualifier in PG


No.  I suppose you could hack the Postgres lexer but you'd break
pretty much absolutely everything other than your Access code.


or the case sensitivity?


That could be attacked in a few ways, depending on whether you want
all text comparisons to be case-insensitive or only some (and if so
which some).  But it sounds like MS SQL's backward standards for
strings vs identifiers has got you nicely locked in, as intended :-(
so there may be no point in discussing further.


I don't have any case sensitive data - so if sensitivity could be
completely disabled by a parameter somewhere, that would be nice.



regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster





--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Finding Queries that have been done on a DB

2007-04-03 Thread 4wheels

Hello all!
This is my first post!  I am interested in finding out what queries have
been made against a particular database in postgres. The version of Postgres
is 8.0 running on Mandrake 10.  The queries are made by client computers
over the network.  What steps must I take to accomplish such a task?  Is
this even at all a feasible goal?

Much Thanks

-- 
View this message in context: 
http://www.nabble.com/Finding-Queries-that-have-been-done-on-a-DB-tf3523303.html#a9829374
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Using MS Access front-end with PG

2007-04-03 Thread Joshua D. Drake

Paul Lambert wrote:
I've got an MS Access front end reporting system that has previously 
used MS SQL server which I am moving to Postgres.


The front end has several hundred if not thousand inbuilt/hard-coded 
queries, most of which aren't working for the following reasons:
1.) Access uses double quotes () as text qualifiers, PG uses single 
quotes. ('')
2.) The Like function in SQL Server is case insensitive, PG it is case 
sensitive. The ilike function is not recognised by Access and it tries 
to turn that into a string, making my test (like ilike 'blah')


Has anyone had any experience with moving an access program from SQL 
server to PG?


Is there any way to change the text qualifier in PG or the case 
sensitivity?


I would suggest pushing things like this to a pass through query.

Joshua D. Drake



TIA,
P.




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Joshua D. Drake

Paul Lambert wrote:



Tom Lane wrote:

Paul Lambert [EMAIL PROTECTED] writes:

Is there any way to change the text qualifier in PG


No.  I suppose you could hack the Postgres lexer but you'd break
pretty much absolutely everything other than your Access code.


or the case sensitivity?


That could be attacked in a few ways, depending on whether you want
all text comparisons to be case-insensitive or only some (and if so
which some).  But it sounds like MS SQL's backward standards for
strings vs identifiers has got you nicely locked in, as intended :-(
so there may be no point in discussing further.


I don't have any case sensitive data - so if sensitivity could be
completely disabled by a parameter somewhere, that would be nice.


You could preface all your queries with something like:

select * from foo where lower(bar) = lower('qualifer');

But that seems a bit silly.

Joshua D. Drake






regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster








--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] COPY FROM - how to identify results?

2007-04-03 Thread John D. Burger
nextval() and sequences are not what I'm looking for. I want to  
assign the same id to all the rows imported from the same file.  
Let's say user A is working on portfolio_id 3, and decides to  
upload a spreadsheet with new values. I want to be able to import  
the spreadsheet into the staging table, and assign a portfolio_id  
of 3 to all its entries.
Of course, I can't just UPDATE the staging table to have  
portfolio_id = 3, because user B might also be uploading a sheet  
for portfolio_id = 9.


The first thing to occur to me is to make the staging table TEMP, so  
every session gets its own copy.  But the second thing is, do you really  
need a portfolio_id column in the staging table?  After you get the  
data massaged correctly into the staging table, perhaps you could  
load it into the main table thusly:


  insert into main_table (portfolio_id, other_columns ...)
select 3, other_columns ... from staging_table;

where 3 is the portfolio_id you want to assign to all the data you're  
currently loading.  This may not work exactly for your situation, but  
does some variant make sense?


- John Burger
  MITRE



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Finding Queries that have been done on a DB

2007-04-03 Thread Chris

4wheels wrote:

Hello all!
This is my first post!  I am interested in finding out what queries have
been made against a particular database in postgres. The version of Postgres
is 8.0 running on Mandrake 10.  The queries are made by client computers
over the network.  What steps must I take to accomplish such a task?  Is
this even at all a feasible goal?


Yep it is.

http://www.postgresql.org/docs/current/static/runtime-config-logging.html

If you set 'log_statement = all' in your postgresql.conf file and look 
at the other params you'll get what you need.


That will log all statements to the db, not just to a particular database.

Though you could use 'log_line_prefix' to put in the database name and 
then a grep of the log will get you what you want :)
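
A sketch of the relevant postgresql.conf lines (the prefix format is just an example):

    log_statement = 'all'          # log every statement
    log_line_prefix = '%d %u '     # database name and user at the start of each line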


--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Using MS Access front-end with PG

2007-04-03 Thread Edward Macnaghten

Paul Lambert wrote:
I've got an MS Access front end reporting system that has previously 
used MS SQL server which I am moving to Postgres.


Are you using PassThrough queries?  It is not clear


The front end has several hundred if not thousand inbuilt/hard-coded 
queries, most of which aren't working for the following reasons:
1.) Access uses double quotes () as text qualifiers, PG uses single 
quotes. ('')


What Access uses should not make any difference.  In pass-through 
queries MS-SQL uses single quotes, same as Postgres, and for attached-table 
queries all this gets translated at the Access-to-ODBC layer. 


2.) The Like function in SQL Server is case insensitive, PG it is case 
sensitive. The ilike function is not recognised by Access and it tries 
to turn that into a string, making my test (like ilike 'blah')


The only way ilike can be passed from ACCESS to Postgres is 
through pass-through queries.  This is probably not what you want 
though.  I do not actually know how MS-Access translates the Like 
operator at the ACCESS->ODBC layer (probably just converts the search 
string to use % and _ instead of * and ?).  I do not know if it is possible to 
switch off case sensitivity in Postgres though.



Has anyone had any experience with moving an access program from SQL 
server to PG?


Yes, but some time ago.  I did not have the case sensitivity problem as 
I knew that was a non-standard feature and did not rely on it while 
developing the MS-SQL solution though (just call me smartypants :-)).  I 
still had to change a few things though (I cannot remember what, sorry).


Eddy

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Edward Macnaghten

Joshua D. Drake wrote:

You could preface all your queries with something like:

select * from foo where lower(bar) = lower('qualifer');

But that seems a bit silly.


And also it would prevent the optimizer from using any indexes on 
bar.  Not a good idea.


Eddy


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] COPY FROM - how to identify results?

2007-04-03 Thread Klint Gore
On Tue, 03 Apr 2007 12:45:54 -0400, Jaime Silvela [EMAIL PROTECTED] wrote:
 I'd like to be able to do something like
 COPY mytable (field-1, ..  field-n, id = my_id) FROM file;

How do you get my_id?  Can you get it in a trigger?  Triggers still fire
with copy so if you can get a trigger to fill in the id column you can
copy with just the field names.
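
A rough sketch of that trigger approach (all names hypothetical); the id to apply could be read from a one-row temp table filled just before the COPY:

    CREATE OR REPLACE FUNCTION staging_fill_portfolio() RETURNS trigger AS $$
    BEGIN
        -- current_upload is a one-row temp table created by the uploading session
        SELECT portfolio_id INTO NEW.portfolio_id FROM current_upload;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER staging_before_insert BEFORE INSERT ON staging
        FOR EACH ROW EXECUTE PROCEDURE staging_fill_portfolio();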

klint.

+---+-+
: Klint Gore: Non rhyming:
: EMail   : [EMAIL PROTECTED]   :  slang - the:
: Snail   : A.B.R.I.:  possibilities  :
: Mail  University of New England   :  are useless   :
:   Armidale NSW 2351 Australia : L.J.J.  :
: Fax : +61 2 6772 5376 : :
+---+-+

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Klint Gore
On Tue, 03 Apr 2007 18:24:00 -0700, Joshua D. Drake [EMAIL PROTECTED] wrote:
 Paul Lambert wrote:
  Tom Lane wrote:
  Paul Lambert [EMAIL PROTECTED] writes:
  or the case sensitivity?
 
  That could be attacked in a few ways, depending on whether you want
  all text comparisons to be case-insensitive or only some (and if so
  which some).  But it sounds like MS SQL's backward standards for
  strings vs identifiers has got you nicely locked in, as intended :-(
  so there may be no point in discussing further.
  
  I don't have any case sensitive data - so if sensitivity could be
  completely disabled by a parameter somewhere, that would be nice.
 
 You could preface all your queries with something like:
 
 select * from foo where lower(bar) = lower('qualifer');
 
 But that seems a bit silly.

Is there any way to create operators to point "like" to "ilike"?  There
doesn't seem to be a like or ilike in pg_operator (not in 7.4 anyway).

klint.

+---+-+
: Klint Gore: Non rhyming:
: EMail   : [EMAIL PROTECTED]   :  slang - the:
: Snail   : A.B.R.I.:  possibilities  :
: Mail  University of New England   :  are useless   :
:   Armidale NSW 2351 Australia : L.J.J.  :
: Fax : +61 2 6772 5376 : :
+---+-+

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Paul Lambert

Joshua D. Drake wrote:



You could preface all your queries with something like:

select * from foo where lower(bar) = lower('qualifer');

But that seems a bit silly.

Joshua D. Drake




I'm trying to avoid having to alter all of my queries, per the OP I've 
got several hundred if not thousands of them and if I have to change 
them all to put lower() around all the text, that is a lot of time.


If I have to do that I will, I'm just curious if there was an ability to 
tell pg to not be case sensitive when doing lookups.


Judging by the responses so far, there is not... so I'll get to work :)

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Using MS Access front-end with PG

2007-04-03 Thread Paul Lambert

Paul Lambert wrote:
I've got an MS Access front end reporting system that has previously 
used MS SQL server which I am moving to Postgres.


The front end has several hundred if not thousand inbuilt/hard-coded 
queries, most of which aren't working for the following reasons:
1.) Access uses double quotes () as text qualifiers, PG uses single 
quotes. ('')


Ignore point one in my op, it wasn't the double quotes causing the 
problem and was a quick and easy fix.


Thanks,
P.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Tom Lane
Paul Lambert [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 That could be attacked in a few ways, depending on whether you want
 all text comparisons to be case-insensitive or only some (and if so
 which some).

 I don't have any case sensitive data - so if sensitivity could be
 completely disabled by a parameter somewhere, that would be nice.

If you are certain of that, the best way would be to initdb in a
case-insensitive locale setting.  My locale-fu is insufficient to
tell you exactly how to create a case-insensitive locale if you
haven't got one already, but I believe it is possible.  One note
is to be sure that the locale uses the character encoding you want
to use.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Joshua D. Drake

Edward Macnaghten wrote:

Joshua D. Drake wrote:

You could preface all your queries with something like:

select * from foo where lower(bar) = lower('qualifer');

But that seems a bit silly.


And also it would prevent the optimizer from using any indexes on 
bar.  Not a good idea.


You could use a functional index to solve that.

CREATE INDEX lower_bar_idx on foo(lower(bar));



Eddy


---(end of broadcast)---
TIP 6: explain analyze is your friend




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] speeding up a query

2007-04-03 Thread Tom Lane
Marcus Engene [EMAIL PROTECTED] writes:
 Should it take 2.5s to sort these 442 rows?

  Limit  (cost=54.40..54.43 rows=12 width=8) (actual 
 time=2650.254..2651.093 rows=442 loops=1)
-  Sort  (cost=54.40..54.43 rows=12 width=8) (actual 
 time=2650.251..2650.515 rows=442 loops=1)
  Sort Key: calc_rating
  -  Index Scan using apa_item_fts on apa_item ai  
 (cost=0.00..54.18 rows=12 width=8) (actual time=61.261..2649.045 
 rows=442 loops=1)
Index Cond: (idxfti @@ '''kca0304'''::tsquery)
Filter: (status = 30)
  Total runtime: 2651.659 ms

It's not the sort that's taking 2.5s --- the sort looks to be taking
about a millisec and a half.  The indexscan is eating the other 2649
msec.  The question that seems to be interesting is what's the
difference between the contexts of your two queries, because they
sure look like the indexscans were the same.  Maybe the second one
is merely benefiting from the first one having already sucked all the
data into cache?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] speeding up a query

2007-04-03 Thread Marcus Engene

Hi again,

I was thinking, in my slow query it seems the sorting is the villain. 
Doing a simple qsort test I notice that:

[EMAIL PROTECTED] /cygdrive/c/pond/dev/tt
$ time ./a.exe 430

real0m0.051s
user0m0.030s
sys 0m0.000s

[EMAIL PROTECTED] /cygdrive/c/pond/dev/tt
$ time ./a.exe 43

real0m0.238s
user0m0.218s
sys 0m0.015s

[EMAIL PROTECTED] /cygdrive/c/pond/dev/tt
$ time ./a.exe 430

real0m2.594s
user0m2.061s
sys 0m0.108s

 From this very unfair test I see that my machine is able to sort 
4.3 million entries in the same time my pg takes to sort 430.


And I cannot stop wondering if there is some generic sorting routine 
that is incredibly slow. Would it be possible, in the situations 
where ORDER BY is on a simple datatype of one column, to do a special 
sort, like the qsort example at the end of this mail? Is this already 
addressed in later versions?


If no, why? and if yes, where in the pg code do I look?

Best regards,
Marcus


#include <stdio.h>
#include <stdlib.h>

typedef struct {
   int val;
   void   *pek;
} QSORTSTRUCT_INT_S;

int sortstruct_int_compare(void const *a, void const *b)
{
   return ( ((QSORTSTRUCT_INT_S *)a)->val - ((QSORTSTRUCT_INT_S *)b)->val );
}

int main (int argc, char **argv)
{
   int nbr = 0;
   int i = 0;
   QSORTSTRUCT_INT_S *sort_arr = 0;
   if (1 == argc) {
       printf("forgot amount argument\n");
       exit(1);
   }
   nbr = atoi (argv[1]);
   if (0 == (sort_arr = malloc (sizeof(QSORTSTRUCT_INT_S) * nbr))) {
       printf("cannot alloc\n");
       exit(1);
   }
   srand(123);
   for (i=0; i<nbr; i++) {
       sort_arr[i].val = rand();
   }
   qsort(sort_arr, nbr, sizeof(QSORTSTRUCT_INT_S), sortstruct_int_compare);
   return 0;
}


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] speeding up a query

2007-04-03 Thread Marcus Engene

Tom Lane skrev:

Marcus Engene [EMAIL PROTECTED] writes:
  

Should it take 2.5s to sort these 442 rows?



  
 Limit  (cost=54.40..54.43 rows=12 width=8) (actual 
time=2650.254..2651.093 rows=442 loops=1)
   -  Sort  (cost=54.40..54.43 rows=12 width=8) (actual 
time=2650.251..2650.515 rows=442 loops=1)

 Sort Key: calc_rating
 -  Index Scan using apa_item_fts on apa_item ai  
(cost=0.00..54.18 rows=12 width=8) (actual time=61.261..2649.045 
rows=442 loops=1)

   Index Cond: (idxfti @@ '''kca0304'''::tsquery)
   Filter: (status = 30)
 Total runtime: 2651.659 ms



It's not the sort that's taking 2.5s --- the sort looks to be taking
about a millisec and a half.  The indexscan is eating the other 2649
msec.  The question that seems to be interesting is what's the
difference between the contexts of your two queries, because they
sure look like the indexscans were the same.  Maybe the second one
is merely benefiting from the first one having already sucked all the
data into cache?

regards, tom lane
  

Yes indeed, you are completely right! Both queries take about the same when
run one after the other. And I just made a fool of myself with an optimizing
idea I had...

Sorry for the noise and thanks for your answer!

Best regards,
Marcus


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Tom Lane
Klint Gore [EMAIL PROTECTED] writes:
 Is there any way to create operators to point like to ilike?  There
 doesn't seem to be a like or ilike in pg_operator (not in 7.4 anyway).

Actually it's the other way 'round: if you look into gram.y you'll see
that LIKE is expanded as the operator ~~ and ILIKE as the operator ~~*
... so one of the alternatives I was thinking of offering to Paul was
to rename those two operators to swap 'em.  However I'm afraid that
that would break the planner, which has some hardwired assumptions
about the behavior of those two operator OIDs.  Maybe we should change
the planner to look a level deeper and see what functions the operators
refer to.
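
For illustration, the operator equivalences (a quick sketch):

    SELECT 'Foo' LIKE  'f%', 'Foo' ~~  'f%';   -- both false: case sensitive
    SELECT 'Foo' ILIKE 'f%', 'Foo' ~~* 'f%';   -- both true:  case insensitive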

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Using MS Access front-end with PG

2007-04-03 Thread Richard Broersma Jr
 2.) The Like function in SQL Server is case insensitive, PG it is case 
 sensitive. The ilike function is not recognised by Access and it tries 
 to turn that into a string, making my test (like ilike 'blah')
 
 Has anyone had any experience with moving an access program from SQL 
 server to PG?
 
 Is there any way to change the text qualifier in PG or the case sensitivity?

I wonder if this would be a good feature to request from the ODBC developers by 
adding a parameter
to the drivers to use ilike instead of like.

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Using MS Access front-end with PG]

2007-04-03 Thread Oleg Bartunov

Paul,

we have a contrib module, mchar, which does what you need. We developed it
when porting a very popular Russian accounting package from MS SQL.
It's available from http://v8.1c.ru/overview/postgres_patches_notes.htm
(the page is in Russian). I don't remember the licensing details, though.


Oleg

On Wed, 4 Apr 2007, Paul Lambert wrote:


Joshua D. Drake wrote:



You could preface all your queries with something like:

select * from foo where lower(bar) = lower('qualifer');

But that seems a bit silly.

Joshua D. Drake




I'm trying to avoid having to alter all of my queries, per the OP I've got 
several hundred if not thousands of them and if I have to change them all to 
put lower() around all the text, that is a lot of time.


If I have to do that I will, I'm just curious if there was an ability to tell 
pg to not be case sensitive when doing lookups.


Judging by the responses so far, there is not... so I'll get to work :)




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 6: explain analyze is your friend