Re: [GENERAL] XEON familiy 5000, 5100 or 5300?

2007-01-14 Thread Philippe Lang
Shane wrote:

> No - a *core* is another CPU; basically you will have 2 or 4 CPUs in
> the one physical package.
> 
> HT creates 2 virtual CPUs sharing the same CPU resources, but the
> cores are separate CPUs in themselves.
> 
> The quad-core will only benefit you more if you have more users
> running queries at the same time. Each core can run a query at the
> same time without slowing the others down (allowing for disk
> access/FSB limits).

Jose wrote:

> PostgreSQL handles each connection in a dedicated process, so you
> won't get better performance for a single connection by adding more
> CPUs (I mean, beyond the benefit of having the postmaster and the
> specific connection running in separate CPUs). This means that a
> query will not be resolved by more than one CPU. What you will get is
> better performance for multiple connections.  

Shane, Jose,

Thanks for your answers. In my "very-low-concurrency scenario", I guess
then that multiple cores won't really help, as I suspected.

I think I had better take (for the same price) a ...

  Dual-Core Intel Xeon 5060, 3.2 GHz, 4MB

... instead of a ...

  Quad-Core Intel Xeon 5310, 1.6 GHz, 4MB

With my CPU-bound query, it will perform better.



But what about Hyperthreading then? Can it spread two threads
across two different cores? I guess the answer is no...


Philippe

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Backup the part of postgres database

2007-01-14 Thread Joshua D. Drake
roopa perumalraja wrote:
> Hi Adam,
>
>   Thanks a lot for your immediate reply. Can you please tell me how to use 
> the pg_dump command to back up specific tables? Thanks a lot in advance.

pg_dump --help

>
>   Regards
>   Roopa
>   
> Adam Rich <[EMAIL PROTECTED]> wrote:
>   Roopa,
>   You can use the pg_dump command to back up specific tables, but you won't 
>   be able to restrict the backup to specific rows.  (You can always create a 
>   separate table just for backups, and dump just that table).
>
>
> 
>   -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of roopa 
> perumalraja
> Sent: Sunday, January 14, 2007 8:39 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Backup the part of postgres database 
> 
> 
>   Hi all,
>
>   Is it possible to back up only part of my database for example from tables 
> which have data from April to July 2006, if so what will be the command for 
> that.
>
>   Thanks a lot in advance
>   Roopa
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] Backup the part of postgres database

2007-01-14 Thread roopa perumalraja

Hi Adam,
   
  Thanks a lot for your immediate reply. Can you please tell me how to use the 
pg_dump command to back up specific tables? Thanks a lot in advance.
   
  Regards
  Roopa
  
Adam Rich <[EMAIL PROTECTED]> wrote:
  Roopa,
  You can use the pg_dump command to back up specific tables, but you won't be 
able to restrict the backup to specific rows.  (You can always create a 
separate table just for backups, and dump just that table).
   
   

  -Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of roopa perumalraja
Sent: Sunday, January 14, 2007 8:39 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Backup the part of postgres database 


  Hi all,
   
  Is it possible to back up only part of my database, for example from tables 
which have data from April to July 2006? If so, what would be the command for 
that?
   
  Thanks a lot in advance
  Roopa


Re: [GENERAL] Backup the part of postgres database

2007-01-14 Thread Adam Rich
Roopa,
You can use the pg_dump command to back up specific tables, but you won't
be able to restrict the backup to specific rows.  (You can always create a
separate table just for backups, and dump just that table).
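For example (a sketch; the database name, table names, and date column here
are placeholders):

```shell
# Dump a single table:
pg_dump -t orders mydb > orders.sql

# For a row subset, materialize the rows into their own table first,
# then dump just that table:
psql -d mydb -c "CREATE TABLE orders_backup AS SELECT * FROM orders WHERE order_date BETWEEN '2006-04-01' AND '2006-07-31'"
pg_dump -t orders_backup mydb > orders_backup.sql
```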
 
 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of roopa
perumalraja
Sent: Sunday, January 14, 2007 8:39 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Backup the part of postgres database 


Hi all,
 
Is it possible to back up only part of my database, for example from
tables which have data from April to July 2006? If so, what would be the
command for that?
 
Thanks a lot in advance
Roopa






[GENERAL] Backup the part of postgres database

2007-01-14 Thread roopa perumalraja
Hi all,
   
  Is it possible to back up only part of my database, for example from tables 
which have data from April to July 2006? If so, what would be the command for 
that?
   
  Thanks a lot in advance
  Roopa

 

[GENERAL] Avoiding empty queries in tsearch

2007-01-14 Thread Doug Cole

I am having trouble with to_tsquery when the query is all stop words.
Rather than return everything as a match, it returns nothing with the
notice:

NOTICE:  Query contains only stopword(s) or doesn't contain lexem(s), ignored

What is the best way to check for this? I was hoping to be able to
check whether the resulting tsquery was empty or null, but haven't found a
way.  Any advice?
Thanks for your help,
Doug
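One possible check (a sketch, assuming the tsearch2 contrib module, whose
numnode() function counts the lexemes and operators in a tsquery):

```sql
-- A query consisting only of stop words parses to an empty tsquery,
-- for which numnode() returns 0; the application can then skip the
-- full-text restriction (i.e. match everything) instead of applying it.
SELECT numnode(to_tsquery('default', 'the')) = 0 AS empty_query;
```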



[GENERAL] PQexec does not return.

2007-01-14 Thread Steve Martin

Hi All,

We have an intermittent problem where PQexec does not seem to return 
even though the server seems to have sent the results.


From the gdb output, the SQL statement can be seen, and from the log, 
the result can be seen to have been sent.  Both processes are running on the 
same machine.  Version of Postgres and other info:


  1. Postgres version: 8.1.4
  2. Machine: HP rx4640
  3. OS: HP-UX 11.23
  4. Memory: 8 GB



Has anyone seen this type of problem before? Can it be caused by a TCP/IP 
communication failure?



From gdb:

   % /opt/langtools/bin/gdb /path/name -p 3587   
   HP gdb 5.2 for HP Itanium (32 or 64 bit) and target HP-UX 11.2x.

   Copyright 1986 - 2001 Free Software Foundation, Inc.
   Hewlett-Packard Wildebeest 5.2 (based on GDB) is covered by the
   GNU General Public License. Type "show copying" to see the conditions to
   change it and/or distribute copies. Type "show warranty" for
   warranty/support.
   ..
   Attaching to program: /path/name, process 3587

   warning: No unwind information found.
Skipping this library /usr/lib/hpux32/libcl.so.1.

   0x6000c0342810:0 in _poll_sys+0x30 () from /usr/lib/hpux32/libc.so.1
   (gdb) bt
   #0  0x6000c0342810:0 in _poll_sys+0x30 () from
   /usr/lib/hpux32/libc.so.1
   #1  0x6000c03553e0:0 in poll+0x160 () from /usr/lib/hpux32/libc.so.1
   #2  0x6000cefa75b0:0 in pqSocketCheck+0xb00 ()
  from /usr/local/pgsql/lib/libpq.so.4
   #3  0x6000cefa77c0:0 in pqWaitTimed+0x40 ()
  from /usr/local/pgsql/lib/libpq.so.4
   #4  0x6000cefa7890:0 in pqWait+0x40 () from
   /usr/local/pgsql/lib/libpq.so.4
   #5  0x6000cefa53b0:0 in PQgetResult+0x180 ()
  from /usr/local/pgsql/lib/libpq.so.4
   #6  0x6000cefa56f0:0 in PQexecFinish+0x40 ()
  from /usr/local/pgsql/lib/libpq.so.4
   #7  0x6000c1c83870:0 in DBIFPostgreSelect::getRecord
   (this=0x40114840)
   at DBIFPostgre.C:1688
   #8  0x6000c1c73d20:0 in DBIFPostgreSelect::selectNext
   (this=0x40114840,
   [EMAIL PROTECTED]) at DBIFPostgre.C:1902
   #9  0x6000c1c64b90:0 in DBIFSelect::selectNext (this=0x7fff5240,
   [EMAIL PROTECTED]) at DBIF.C:2704
   #10 0x404ff00:0 in TableStatus::checkDeferredTR (this=0x403104c0)
   at DbSContTable.C:1468
   #11 0x405b430:0 in TableStatusManager::checkDeferredTR (this=0x400bde90)
   at DbSContTable.C:3146
   #12 0x4068960:0 in Controller::go (this=0x7fffc320) at
   DbSController.C:1950
   #13 0x406b1b0:0 in main (argc=1, argv=0x7fffed74) at DbSContMain.C:137
   (gdb) q
   The program is running.  Quit anyway (and detach it)? (y or n) y
   Detaching from program: /path/name, process 3587

   % /opt/langtools/bin/gdb /path/name -p 3587
   HP gdb 5.2 for HP Itanium (32 or 64 bit) and target HP-UX 11.2x.
   Copyright 1986 - 2001 Free Software Foundation, Inc.
   Hewlett-Packard Wildebeest 5.2 (based on GDB) is covered by the
   GNU General Public License. Type "show copying" to see the conditions to
   change it and/or distribute copies. Type "show warranty" for
   warranty/support.
   ..
   Attaching to program: /path/name, process 3587

   warning: No unwind information found.
Skipping this library /usr/lib/hpux32/libcl.so.1.

   0x6000c0342810:0 in _poll_sys+0x30 () from /usr/lib/hpux32/libc.so.1
   (gdb) up
   #1  0x6000c03553e0:0 in poll+0x160 () from /usr/lib/hpux32/libc.so.1
   (gdb) up
   #2  0x6000cefa75b0:0 in pqSocketCheck+0xb00 ()
  from /usr/local/pgsql/lib/libpq.so.4
   (gdb) up
   #3  0x6000cefa77c0:0 in pqWaitTimed+0x40 ()
  from /usr/local/pgsql/lib/libpq.so.4
   (gdb) up
   #4  0x6000cefa7890:0 in pqWait+0x40 () from
   /usr/local/pgsql/lib/libpq.so.4
   (gdb) up
   #5  0x6000cefa53b0:0 in PQgetResult+0x180 ()
  from /usr/local/pgsql/lib/libpq.so.4
   (gdb) up
   #6  0x6000cefa56f0:0 in PQexecFinish+0x40 ()
  from /usr/local/pgsql/lib/libpq.so.4
   (gdb) up
   #7  0x6000c1c83870:0 in DBIFPostgreSelect::getRecord
   (this=0x40114840)
   at DBIFPostgre.C:1688
   1688myResultExecPrepare =
   PQexec(myConnection->conn, seleStmt);
   (gdb) p seleStmt
   $1 = "SELECT * FROM T_AM_TERM_BILLING WHERE (TR_STATUS = 'DEFERRED')
   AND ((DOWNLOAD_DATE < 20070108) OR ((DOWNLOAD_DATE = 20070108) AND
   (DOWNLOAD_TIME < 203744)))", '\000' 
   (gdb) q
   The program is running.  Quit anyway (and detach it)? (y or n) y
   Detaching from program: /path/name, process 3587


From postgres log

   2007-01-08 20:37:44.839 NZDT [EMAIL PROTECTED]>LOG:  statement: select
   pg_get_indexdef(indexrelid) from pg_index where ind
   relid = ( select oid from pg_class where relname =
   't_am_registration_db')
   2007-01-08 20:37:44.840 NZDT [EMAIL PROTECTED]>LOG:  duration: 0.347 ms
   2007-01-08 20:37:44.840 NZDT [EMAIL PROTECTED]>LOG:  statement: SELECT
   * FROM T_AM_REGISTRATION_DB WHERE (TR_STATUS = 'DEF
   ERRED') AND ((DOWNLOAD_DATE < 20070108) OR ((DOWNLOAD_DATE =
   20070108) AND (DOWNLOAD_TIME < 203744)))
   2007-01-08 20:37:4

Re: [GENERAL] substr negative indexes

2007-01-14 Thread Tom Lane
Guy Rouillier <[EMAIL PROTECTED]> writes:
> Does the spec leave implementation of indexes on the substr() function 
> less than one undefined?

SQL99 defines the result of

  <character substring function> ::=
  SUBSTRING ( <character value expression> FROM <start position>
  [ FOR <string length> ] )

as

a) Let C be the value of the <character value expression>, let
  LC be the length of C, and let S be the value of the <start position>.

b) If <string length> is specified, then let L be the value of
  <string length> and let E be S+L. Otherwise, let E be the
  larger of LC + 1 and S.

c) If either C, S, or L is the null value, then the result of
  the <character substring function> is the null value.

d) If E is less than S, then an exception condition is raised:
  data exception - substring error.

e) Case:

  i) If S is greater than LC or if E is less than 1 (one), then
 the result of the <character substring function> is a
 zero-length string.

 ii) Otherwise,

 1) Let S1 be the larger of S and 1 (one). Let E1 be the
   smaller of E and LC+1. Let L1 be E1-S1.

 2) The result of the <character substring function> is
   a character string containing the L1 characters of C
   starting at character number S1 in the same order that
   the characters appear in C.
and unless I'm mistaken, our behavior conforms to the spec and Oracle's
doesn't.
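Working rules (b) and (e) through PG's behavior for start positions below
one, with LC = 7 for the string 'abcdefg':

```sql
-- S=0, L=4 => E=4; S1=max(S,1)=1, E1=min(E,LC+1)=4, L1=3
SELECT substring('abcdefg' FROM 0 FOR 4);   -- 'abc'
-- S=-1, L=4 => E=3; S1=1, E1=3, L1=2
SELECT substring('abcdefg' FROM -1 FOR 4);  -- 'ab'
```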

Strictly speaking, the spec doesn't define the behavior of "SUBSTR" at
all, only "SUBSTRING" with this weird FROM/FOR argument syntax.  But
PG treats SUBSTR(x,y,z), SUBSTRING(x,y,z) and SUBSTRING(x FROM y FOR z)
all the same.  Possibly Oracle conforms to spec for SUBSTRING but
their SUBSTR acts differently?

regards, tom lane



Re: Antw: Re: [GENERAL] Problems with unique restrictions

2007-01-14 Thread korryd
> "Marcel Gsteiger" <[EMAIL PROTECTED]> writes:
> > Sometimes I wish there was something like a debugger for PL/PGSQL with
> > breakpoints, single step, variable watching...
> 
> Take a look at http://pgfoundry.org/projects/edb-debugger/
> ... it's beta but without users it won't get better.
> 
> The front page says the plpgsql hooks needed will be in 8.3 but
> I think that's a mistake; they got into 8.2 didn't they?  Korry?


Yes - the hooks for the debugger are in 8.2.  That means that we can
publish the PL/pgSQL debugger at any time (well, as soon as I have the
code all cleaned up - still removing some obsolete remnants of the
pre-plugin architecture).

-- Korry


--
  Korry Douglas[EMAIL PROTECTED]
  EnterpriseDB  http://www.enterprisedb.com


Re: Antw: Re: [GENERAL] Problems with unique restrictions

2007-01-14 Thread Tom Lane
"Marcel Gsteiger" <[EMAIL PROTECTED]> writes:
> Sometimes I wish there was something like a debugger for PL/PGSQL with
> breakpoints, single step, variable watching...

Take a look at http://pgfoundry.org/projects/edb-debugger/
... it's beta but without users it won't get better.

The front page says the plpgsql hooks needed will be in 8.3 but
I think that's a mistake; they got into 8.2 didn't they?  Korry?

regards, tom lane



Re: [GENERAL] Autovacuum Improvements

2007-01-14 Thread Pavan Deolasee

Simon Riggs wrote:

On Fri, 2006-12-29 at 20:25 -0300, Alvaro Herrera wrote:

Christopher Browne wrote:


Seems to me that you could get ~80% of the way by having the simplest
"2 queue" implementation, where tables with size < some threshold get
thrown at the "little table" queue, and tables above that size go to
the "big table" queue.

That should keep any small tables from getting "vacuum-starved."




This is exactly what I am trying: a two-process autovacuum and a GUC to
separate out the small tables.

In this case, one process takes up vacuuming of the small tables and
other process vacuuming of the remaining tables as well as Xid
avoidance related vacuuming. The goal is to avoid starvation of small
tables when a large table is being vacuumed (which may take
several hours) without adding too much complexity to the code.



Some feedback from initial testing is that 2 queues probably aren't
enough. If you have tables with 100s of blocks and tables with millions
of blocks, the tables in the mid-range still lose out. So I'm thinking
of a design with 3 queues based upon size ranges, plus the idea that
when a queue is empty it will scan for tables slightly above/below its
normal range. That way we wouldn't need to specify the cut-offs with a
difficult-to-understand new set of GUC parameters, define them exactly,
and then have them be wrong when databases grow.

The largest queue would be the one reserved for Xid wraparound
avoidance. No table would be eligible for more than one queue at a time,
though it might change between queues as it grows.

Alvaro, have you completed your design?

Pavan, what are your thoughts?



IMO 2-queue is a good step forward, but in the long term we may need to go
for a multiprocess autovacuum where the number and tasks of the processes
are demand-based and/or user-configurable.

Another idea is to vacuum the tables in round-robin fashion,
where the quantum could be either "time" or "number of blocks". The
autovacuum process would vacuum 'x' blocks of one table and then
schedule the next table in the queue. This would avoid starvation of
small tables, though the cost of index cleanup might go up because of
increased IO. Any thoughts on this approach?

Thanks,
Pavan







[GENERAL] substr negative indexes

2007-01-14 Thread Guy Rouillier
Does the spec leave the behavior of the substr() function with indexes 
less than one undefined?  By mistake, I had a substr() invocation with 
an initial index of zero (it would be nice if all the computer languages of 
the world could agree on a single definition).  Oracle silently treats 
this the same as 1 (one), while I just learned that PG treats indexes 
less than 1 as a sliding window off the left of the string.  Oracle 
defines negative indexes to mean "from the end of the string".


So, given a string "abcdefg":

Oracle substr('abcdefg', 1, 4) = 'abcd'
Oracle substr('abcdefg', 0, 4) = 'abcd'
Oracle substr('abcdefg', -1, 4) = 'd'
PG substr('abcdefg', 1, 4) = 'abcd'
PG substr('abcdefg', 0, 4) = 'abc'
PG substr('abcdefg', -1, 4) = 'ab'

--
Guy Rouillier



Re: [GENERAL] Export to shape file

2007-01-14 Thread Michael Fuhr
On Wed, Jan 10, 2007 at 10:43:46PM +0100, Martijn van Oosterhout wrote:
> On Tue, Jan 09, 2007 at 10:39:05PM -0800, kmohan wrote:
> > I am trying to export my spatial data from Postgres to shape using the
> > pgsql2shp command.
> > When I keyed in the syntax pgsql2shp -f pl dcmms plss
> > it showed fe_sendauth: no password supplied.
> > Please can someone help me on how to export it to a shape file.
> 
> I can't find any docs on that program quickly, but it's evidently
> looking for a password. You could:

pgsql2shp is part of PostGIS.  Running pgsql2shp with no arguments
gives a help message; see also "Using the Dumper" in the PostGIS
documentation.

http://postgis.refractions.net/docs/ch04.html

> - Find a way to give it the password or

You can use the -P option for this, although providing a password
on the command line has security implications and should usually
be avoided.

pgsql2shp -P yourpassword -f pl dcmms plss

> - Setup your system so it does not need a password.

See the "Client Authentication" and "libpq - C Library" chapters
in the PostgreSQL documentation to learn more about authentication
(adjust the following links for whatever version of PostgreSQL
you're running).

http://www.postgresql.org/docs/8.2/interactive/client-authentication.html
http://www.postgresql.org/docs/8.2/interactive/libpq.html
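As a sketch of the no-password route, libpq can read the password from a
~/.pgpass file instead of the command line (hostname, user, and password
below are placeholders):

```shell
# ~/.pgpass format: hostname:port:database:username:password
echo 'localhost:5432:dcmms:youruser:yourpassword' >> ~/.pgpass
chmod 600 ~/.pgpass    # libpq ignores the file unless permissions are strict
pgsql2shp -f pl dcmms plss
```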

-- 
Michael Fuhr



Re: [GENERAL] Remove diacritical marks in SQL

2007-01-14 Thread Michael Fuhr
On Fri, Jan 12, 2007 at 10:58:36PM +0100, Martijn van Oosterhout wrote:
> On Fri, Jan 12, 2007 at 10:16:22PM +0100, Jiří Němec wrote:
> > I would like to remove diacritical marks from a string in a SQL query.
> > I tried to convert a UTF8 string to ASCII but it doesn't work for me.
> > 
> > SELECT convert('ěščřžýáíé','UTF8','SQL_ASCII')
> 
> I don't think postgres has any stuff builtin for that, but other
> languages (like perl) have modules to do this kind of thing. The method
> is to decompose the string to normal form D, strip the diacritics, and
> recompose what's left.

A technique that's been posted before might work:

SELECT to_ascii(convert('ěščřžýáíé', 'LATIN2'), 'LATIN2');
 to_ascii  
---
 escrzyaie
(1 row)

to_ascii() supports only LATIN1, LATIN2, LATIN9, and WIN1250 so you
have to convert to one of those encodings first.

As Martijn suggested, you could use Perl.  Here's an example with
Text::Unaccent (you'll need to use encoding names that iconv
recognizes):

CREATE FUNCTION unaccent(charset text, string text) RETURNS text AS $$
use Text::Unaccent;
return unac_string($_[0], $_[1]);
$$ LANGUAGE plperlu IMMUTABLE STRICT;

SELECT unaccent('UTF-8', 'ěščřžýáíé');
 unaccent  
---
 escrzyaie
(1 row)

Here's an example that uses Unicode::Normalize to strip non-spacing
and enclosing marks:

CREATE FUNCTION unaccent(string text) RETURNS text AS $$
use Unicode::Normalize;
my $nfd_string = NFD($_[0]);
$nfd_string =~ s/[\p{Mn}\p{Me}]//g;
return NFC($nfd_string);
$$ LANGUAGE plperlu IMMUTABLE STRICT;

SELECT unaccent('ěščřžýáíé');
 unaccent  
---
 escrzyaie
(1 row)

-- 
Michael Fuhr



Re: [GENERAL] XEON familiy 5000, 5100 or 5300?

2007-01-14 Thread Juan Jose Comellas
PostgreSQL handles each connection in a dedicated process, so you won't get 
better performance for a single connection by adding more CPUs (I mean, 
beyond the benefit of having the postmaster and the specific connection 
running in separate CPUs). This means that a query will not be resolved by 
more than one CPU. What you will get is better performance for multiple 
connections.


On Sat January 13 2007 05:43, Philippe Lang wrote:
> Hi,
>
> I'm about to buy a few new servers, and I'm trying to determine if I
> should buy XEON family 5000, 5100 or 5300 processors.
>
> For about the same price, I can have:
>
> 2 Dual-Core Intel Xeon 5060, 3.2 GHz, 4MB
> 2 Dual-Core Intel Xeon 5130, 2.0 GHz, 4MB
> 2 Quad-Core Intel Xeon 5310, 1.6 GHz, 4MB
>
> I have a few queries that take about 4 minutes each to complete on a
> single Pentium 4, and all of them are CPU-bound, with the whole database
> in RAM. With the new system, I expect a performance boost, of course!
>
> If I'm not wrong, a single postgresql sql query cannot be spread over
> two processors, but can it be spread over multiple cores? If that's
> true, does that mean the best CPU would be the last one, although the
> clock is lower that the one of the other processors?
>
> Thanks for the infos,
>
> Cheers,
>
> ---
> Philippe Lang
> Attik System
>

-- 
Juan Jose Comellas
([EMAIL PROTECTED])



Re: [GENERAL] FK Constraint on index not PK

2007-01-14 Thread Stéphane Schildknecht
Tom Lane wrote:
> Stéphane Schildknecht <[EMAIL PROTECTED]> writes:
>   
>> My goal is to migrate to 8.2.1. definitely. But as you said it, I do not
>> want to recreate unwanted index when migrating. I want to drop them BEFORE.
>> But, I can't just do a "drop index" command. It fails.
>> 
>
> Right, because the FK constraints by chance seized on those indexes as
> being the matching ones for them to depend on.
>
> What you want to do is (1) update the relevant pg_depend entries to
> reference the desired PK indexes instead of the undesired ones; then
> (2) drop the undesired indexes.
>
> I don't have a script to do (1) but it should be relatively
> straightforward: in the rows with objid = OID of FK constraint
> and refobjid = OID of unwanted index, update refobjid to be the
> OID of the wanted index.  (To be truly correct, make sure that
> classid and refclassid are the right values; but the odds of a
> false match are probably pretty low.)
>
> Needless to say, test and debug your process for this in a scratch
> database ... and when you do it on the production DB, start with
> BEGIN so you can roll back if you realize you blew it.
>
>   regards, tom lane
>   
Hi Tom,

Thank you very much for this answer. I'll try that tomorrow morning.

regards,

SAS
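A sketch of Tom's step (1), with hypothetical OIDs and index name; as he
says, test this on a scratch database first:

```sql
BEGIN;
-- Hypothetical values: 12345 = OID of the FK constraint,
-- 67890 = OID of the unwanted index, 54321 = OID of the wanted PK index.
UPDATE pg_depend
   SET refobjid = 54321
 WHERE objid = 12345
   AND refobjid = 67890
   AND classid = 'pg_constraint'::regclass
   AND refclassid = 'pg_class'::regclass;
DROP INDEX unwanted_index;
COMMIT;
```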



Re: [GENERAL] Problems with unique restrictions

2007-01-14 Thread Shoaib Mir

You can give EnterpriseDB PL Debugger a try, details for its usage can be
found at --> http://www.enterprisedb.com/documentation/debugger.html

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/14/07, Marcel Gsteiger <[EMAIL PROTECTED]> wrote:


Thanks for responding. Meanwhile I found out that it was my own fault. A
newly installed insert trigger fired unexpectedly and caused the error. Now
I'm redesigning my functions to make them smaller so that errors can be
found more easily. Sometimes I wish there was something like a debugger for
PL/pgSQL with breakpoints, single stepping, variable watching...
Anyway, PostgreSQL is the best backend I ever have worked with.

Regards
--Marcel
>>> Tom Lane <[EMAIL PROTECTED]> 13.01.2007 >>>
"Marcel Gsteiger" <[EMAIL PROTECTED]> writes:
> Now since I upgraded to 8.2 I have problems inserting data into tables
that have unique indexes. Ugly enough, I get the message 'duplicate key
violates unique constraint' when inserting the very first record into a
table. This happens every time the new tuple references another tuple
that has been inserted just before this one in the same transaction.

> Putting a "SET CONSTRAINTS ALL DEFERRED" in my procedure does not help.

> To me it looks that something with referential integrity checking goes
wrong, but in this case the error message would be misleading.

RI would not have anything to do with a duplicate-key error.

Do you have any SERIAL-type columns in these tables?  My first thought
is of a sequence that hasn't been updated to be above the existing ID
values.  It's fairly easy to get into such a state if you do anything
but a plain vanilla dump-all-and-reload-all update process ...

regards, tom lane





Antw: Re: [GENERAL] Problems with unique restrictions

2007-01-14 Thread Marcel Gsteiger
Thanks for responding. Meanwhile I found out that it was my own fault. A newly 
installed insert trigger fired unexpectedly and caused the error. Now I'm 
redesigning my functions to make them smaller so that errors can be found 
more easily. Sometimes I wish there was something like a debugger for 
PL/pgSQL with breakpoints, single stepping, variable watching...
Anyway, PostgreSQL is the best backend I ever have worked with.

Regards
--Marcel
>>> Tom Lane <[EMAIL PROTECTED]> 13.01.2007 >>>
"Marcel Gsteiger" <[EMAIL PROTECTED]> writes:
> Now since I upgraded to 8.2 I have problems inserting data into tables that 
> have unique indexes. Ugly enough, I get the message 'duplicate key violates 
> unique constraint' when inserting the very first record into a table. This 
> happens every time the new tuple references another tuple that has been 
> inserted just before this one in the same transaction.

> Putting a "SET CONSTRAINTS ALL DEFERRED" in my procedure does not help.

> To me it looks that something with referential integrity checking goes wrong, 
> but in this case the error message would be misleading.

RI would not have anything to do with a duplicate-key error.

Do you have any SERIAL-type columns in these tables?  My first thought
is of a sequence that hasn't been updated to be above the existing ID
values.  It's fairly easy to get into such a state if you do anything
but a plain vanilla dump-all-and-reload-all update process ...

regards, tom lane
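If an out-of-sync sequence turns out to be the cause, the usual fix looks
like this (table, column, and sequence names hypothetical):

```sql
-- Move the sequence past the largest existing id so the next
-- nextval() no longer collides with already-loaded rows.
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));
```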




Re: [GENERAL] like query backslash

2007-01-14 Thread Shoaib Mir

This is how versions below 8.2 handled backslashes, but with 8.2 you
can now control the behavior using a setting in the postgresql.conf file:
standard_conforming_strings (boolean)

This controls whether ordinary string literals ('...') treat backslashes
literally, as specified in the SQL standard. The default is currently off,
causing PostgreSQL to have its historical behavior of treating backslashes
as escape characters. The default will change to on in a future release to
improve compatibility with the standard. Applications may check this
parameter to determine how string literals will be processed. The presence
of this parameter can also be taken as an indication that the escape string
syntax (E'...') is supported. Escape string syntax should be used if an
application desires backslashes to be treated as escape characters.
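With the historical (off) setting the doubling happens twice, once in the
string-literal parser and once in LIKE's own escape handling, so each literal
backslash in the stored data takes four backslashes in the source (a sketch
using Sim's table):

```sql
-- Stored value: \\server\dir\fname.txt  (UNC path, two leading backslashes)
-- The literal parser halves the backslashes, then LIKE halves them again:
-- 8 source backslashes match the leading \\, 4 match each single \.
SELECT * FROM filetable
 WHERE filename LIKE '\\\\\\\\server\\\\dir%';
```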
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)


On 1/14/07, Sim Zacks <[EMAIL PROTECTED]> wrote:


select version()
"PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC
i686-pc-linux-gnu-gcc (GCC) 3.3.5  (Gentoo
Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)"

I have a table with UNC filename values (windows), such as
\\server\dir\fname.txt
I am running a like query to find all files on a specific server\dir.

I would expect to put 2 backslashes into my query for each one in the text
in order to escape it,
however it is making me put in 4 backslashes for each backslash in the
text.

To retrieve the above example, I needed to query:
select * from filetable where filename like '\\\\\\\\server\\\\dir%'

Is this a bug or is there a reason for this?

Sim




[GENERAL] like query backslash

2007-01-14 Thread Sim Zacks

select version()
"PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC i686-pc-linux-gnu-gcc (GCC) 3.3.5  (Gentoo 
Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)"


I have a table with UNC filename values (windows), such as 
\\server\dir\fname.txt
I am running a like query to find all files on a specific server\dir.

I would expect to put 2 backslashes into my query for each one in the text in order to escape it, 
however it is making me put in 4 backslashes for each backslash in the text.


To retrieve the above example, I needed to query:
select * from filetable where filename like '\\\\\\\\server\\\\dir%'

Is this a bug or is there a reason for this?

Sim
