[GENERAL] ALTER USER ..... PASSWORD ....

2006-06-06 Thread Rafal Pietrak
Just wondering,

psql client tool logs issued commands into ~/.psql_history, which is
VERY useful. I grep the file extensively.

But when it comes to command like "ALTER/CREATE USER ... PASSWORD" I'd
rather have it NOT logged.

This is not a major issue, since there are workarounds - temporarily
switching the logging off, etc. Still, one has to keep thinking of that,
and I have occasionally forgotten; and there may be admins who don't
really care.

The implementation should be fairly simple for the psql author (I guess :),
but I myself am not in a position to know if it really is. The problem looks
simple: psql should filter 'sensitive' SQL commands out before
putting them into the log. But the implementation is not so trivial,
since it must take care not to filter commands like "ALTER TABLE 'user' add
column password text" and other such look-alikes, which should actually
be logged.
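Such a filter could be sketched roughly as follows (a hypothetical illustration, not actual psql behavior; the regex and the `should_log` name are my own):

```python
import re

# Hypothetical filter: commands whose history entries may contain a
# plaintext password.  Matches ALTER/CREATE USER|ROLE ... PASSWORD,
# but not e.g. ALTER TABLE ... ADD COLUMN password, because only the
# USER/ROLE forms take a password literal.
SENSITIVE = re.compile(
    r"\b(ALTER|CREATE)\s+(USER|ROLE)\b.*\bPASSWORD\b",
    re.IGNORECASE | re.DOTALL,
)

def should_log(command: str) -> bool:
    """Return False for commands that should stay out of ~/.psql_history."""
    return SENSITIVE.search(command) is None
```

Under this sketch, `should_log("ALTER USER joe PASSWORD 'x'")` is False while the ALTER TABLE example above is still logged.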

Comments?

-- 
-R

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] How to use index in case insensitive substring search

2006-06-06 Thread Andrus
How to force postgres to use index for the following query (I can change the 
query to an equivalent one if required)

select nimi from klient where lower(nimi) like 'test%'

Currently it does NOT use index:

create index nimib2 on firma1.klient(lower(nimi) bpchar_pattern_ops);
explain analyze select nimi from firma1.klient where lower(nimi) like 
'mokter%'

"Seq Scan on klient  (cost=0.00..9.79 rows=1 width=74) (actual 
time=0.740..0.761 rows=1 loops=1)"
"  Filter: (lower((nimi)::text) ~~ 'mokter%'::text)"
"Total runtime: 0.877 ms"

Postgres 8.1, UTF8 encoding, on Windows XP

Note.

Query

explain analyze select nimi from firma1.klient where nimi like 'Mokter%'

for same data uses index:

"Index Scan using nimib on klient  (cost=0.00..5.80 rows=1 width=74) (actual 
time=9.402..9.427 rows=1 loops=1)"
"  Index Cond: ((nimi ~>=~ 'Mokter'::bpchar) AND (nimi ~<~ 
'Moktes'::bpchar))"
"  Filter: (nimi ~~ 'Mokter%'::text)"
"Total runtime: 9.615 ms"

Andrus. 





Re: [GENERAL] How to use index in case insensitive substring search

2006-06-06 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2006-06-06 11:58:26 +0300:
> How to force postgres to use index for the following query (I can change the 
> query to an equivalent one if required)
> 
> select nimi from klient where lower(nimi) like 'test%'

do you have an index on klient (lower(nimi))?

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991



Re: [GENERAL] How to use index in case insensitive substring search

2006-06-06 Thread Andrus
>do you have an index on klient (lower(nimi))?

Yes. As I wrote in my first message, I created the index explicitly for this 
test sample:

create index nimib2 on firma1.klient(lower(nimi) bpchar_pattern_ops);

Andrus. 





Re: [GENERAL] How to use index in case insensitive substring search

2006-06-06 Thread Hakan Kocaman
Hi Andrus,

how about:
create index nimib2 on firma1.klient(lower(nimi) varchar_pattern_ops);
 ^^^
> explain analyze select nimi from firma1.klient where lower(nimi) like 
> 'mokter%'
> 
> "Total runtime: 0.877 ms"
> explain analyze select nimi from firma1.klient where nimi 
> like 'Mokter%'
> 
> for same data uses index:
> "Total runtime: 9.615 ms"

Hmm...Index-use seems to slow down the query.
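What a *_pattern_ops index buys (visible in the ~>=~ / ~<~ conditions of Andrus's second EXPLAIN) is that a prefix LIKE becomes a range scan over the sorted index. A rough Python sketch of the idea, with a sorted list standing in for the btree and ignoring locale and multi-byte details:

```python
from bisect import bisect_left

# A sorted list stands in for the btree index on lower(nimi).
index = sorted(["ants", "mokter ag", "mokterson", "peeter", "test oy"])

def prefix_bounds(prefix):
    # The planner's rewrite: LIKE 'mokter%' becomes
    #   lower(nimi) >= 'mokter' AND lower(nimi) < 'moktes'
    # (bump the last character to get the upper bound).
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, upper

def like_prefix(idx, prefix):
    # Two binary searches instead of scanning every row.
    lo, hi = prefix_bounds(prefix)
    return idx[bisect_left(idx, lo):bisect_left(idx, hi)]
```

This rewrite is only valid when the index sorts byte-wise, which is what the pattern_ops operator classes guarantee regardless of locale.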

Best regards

Hakan Kocaman
Software-Development

digame.de GmbH
Richard-Byrd-Str. 4-8
50829 Köln

Tel.: +49 (0) 221 59 68 88 31
Fax: +49 (0) 221 59 68 88 98
Email: [EMAIL PROTECTED]



> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Andrus
> Sent: Tuesday, June 06, 2006 10:58 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] How to use index in case insensitive 
> substring search
> 
> 
> How to force postgres to use index for the following query (I 
> can change the 
> query to an equivalent one if required)
> 
> select nimi from klient where lower(nimi) like 'test%'
> 
> Currently it does NOT use index:
> 
> create index nimib2 on firma1.klient(lower(nimi) bpchar_pattern_ops);
> 
> Andrus. 
> 
> 
> 



Re: [GENERAL] Best open source tool for database design / ERDs?

2006-06-06 Thread dananrg
tedia2sql for DIA seems pretty strange, in that it seems to require you
to use UML diagrams, rather than ER diagrams, to forward-engineer a
database - i.e. output DDL statements.

Am I misreading what tedia2sql does, or does one really have to
repurpose UML diagrams to get DDL statements created?

I was a little disappointed with the ER diagram functions of DIA. Looks
like you can only add a few properties at most, given the way it is
structured - having circles radiating out from the entity. There's only
so much screen space... and there's no way to specify an ANSI SQL data
type, from what I recall.

Is there a commercial tool that's less than $200 which kicks the *ss of
any of these open source solutions? I'd rather use something free, but
not if it doesn't do what I want it to do easily.

I like the Open Office apps like Writer and Calc. Calc in particular
does everything Excel does - at least everything I've ever used Excel
for.

Seems like the open source data modeling tools aren't feature-rich
quite yet.

Dana




Re: [GENERAL] Best open source tool for database design / ERDs?

2006-06-06 Thread Kenneth Downs

[EMAIL PROTECTED] wrote:


Seems like the open source data modeling tools aren't feature-rich
quite yet.
 

Disclaimer: this is probably *not* what you want, but I will throw it 
out for completeness.


We have a non-graphical tool that builds databases out of text files 
that resemble CSS, such as:


table customers {
  column customer { primary_key: Y; }
...
}

Very likely we share the same purpose as you do, to capture data 
structure outside of DDL.   But going further, we designed with nothing 
less in mind than to capture the entire system requirements in the 
database spec, including derived columns and other automations, and 
including security as well.  A generator builds the DDL.


Going one better, the generator also works as a diff engine, so when you 
make changes it generates the DDL to change the structure and also the 
trigger code to enforce the rules.


The twist is that we found that, once the file format was complete and the 
features were in, we really didn't need the GUI.  We may put one in 
someday, but jedit is our GUI now :)


So if you are at all willing to consider non-GUI tools that aim for the 
same purpose, you may wish to check it out:


http://docs.secdat.com




Re: [GENERAL] Performance difference between char and int2 columns

2006-06-06 Thread Alban Hertroys

Yavuz Kavus wrote:

hi everybody.



The problem is that:
both of my tables have 10 records.
When I run this query on the first table:
   select * from tb_section_phones;
   it takes 0.02 sec.
When I run this query on the second table:
   select * from tb_section_phones_temp;
   it takes 0.13 sec.

That's 6.5 times slower.

Why such a performance difference?
Should I prefer int as the column type?


Did you ANALYZE your tables?
What version of Postgres is this?

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] php 5.1.4 w/ PostgreSQL 8.1.4

2006-06-06 Thread [EMAIL PROTECTED]

Maybe you should check that you're linking against the libraries you
think you are. For example, run ldd over both apache and the pgsql
module and check they have similar ideas as to which libraries they're
using.

Make sure you don't have multiple copies of postgres installed, or
multiple copies of any libraries it might use...

Hope this helps,


Thanks everybody for the help. I lost 2-3 days trying and went back to 
8.0.8 in order to be able to work, but I'll run the suggested tests the 
following weekend.


Regards,
Iv.




Re: [GENERAL] Best open source tool for database design / ERDs?

2006-06-06 Thread Bjørn T Johansen
Have you tried Druid (http://druid.sourceforge.net/index.html) ?

It does everything a good ERD designer does, and it's free...


BTJ

On 5 Jun 2006 16:54:56 -0700
[EMAIL PROTECTED] wrote:

> tedia2sql for DIA seems pretty strange, in that it seems to require you
> to use UML diagrams, rather than ER diagrams, to forward-engineer a
> database - i.e. output DDL statements.
> 
> Am I misreading what tedia2sql does, or does one really have to
> repurpose UML diagrams to get DDL statements created?
> 
> I was a little disappointed with the ER Diagram functions of DIA. Looks
> like you can only add a few properties at most given the way it is
> structured - having circles radiating out from the Entity. There's only
> so much screen space... then there's no way to specify a SQL ANSI data
> type from what I recall.
> 
> Is there a commercial tool that's less than $200 which kicks the *ss of
> any of these open source solutions? I'd rather use something free, but
> not if it doesn't do what I want it to do easily.
> 
> I like the Open Office apps like Writer and Calc. Calc in particular
> does everything Excel does - at least everything I've ever used Excel
> for.
> 
> Seems like the open source data modeling tools aren't feature-rich
> quite yet.
> 
> Dana
> 
> 




Re: [GENERAL] New version of DBD::Pg for 8.1.4?

2006-06-06 Thread Greg Sabino Mullane



> Will there need to be a new version of DBD::Pg released for 8.1.4?

No, DBD::Pg escapes everything with double quotes instead of a backslash,
and uses its own escaping routines, not the libpq ones. Therefore, the
existing version (latest is 1.49) should work just fine against recent
versions of Postgres. However, a new version is being actively written that
will have better handling of utf-8 content in general, and as part of its
development we'll be doing heavy testing to make absolutely sure that
DBD::Pg is not affected by the recent discoveries. Watch this list or
monitor www.planetpostresql.org for updates.

- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200606060704
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





[GENERAL] high %system time

2006-06-06 Thread Jacob Coby
We recently upgraded from php 4.3.10 to 5.1.2, and the %system time has 
skyrocketed:


Cpu(s): 42.8% us, 43.6% sy,  0.0% ni, 11.3% id,  2.2% wa,  0.2% hi,  0.0% si
Mem:   8312844k total,  7566168k used,   746676k free,22356k buffers
Swap:  2040244k total,  520k used,  2039724k free,  6920384k cached

it used to be around 5% to 10%.  The server is a quad-xeon dell pe 6650 
running CentOS 4.2 with 8G of RAM running pg 8.1.1.


How can I determine what is causing such high system load?  it seems to 
have immediately jumped with the php upgrade.


thanks,
--
-Jacob



Re: [GENERAL] high %system time

2006-06-06 Thread Bill Moran
On Tue, 06 Jun 2006 09:19:10 -0400
Jacob Coby <[EMAIL PROTECTED]> wrote:

> We recently upgraded from php 4.3.10 to 5.1.2, and the %system time has 
> skyrocketed:
> 
> Cpu(s): 42.8% us, 43.6% sy,  0.0% ni, 11.3% id,  2.2% wa,  0.2% hi,  0.0% si
> Mem:   8312844k total,  7566168k used,   746676k free,22356k buffers
> Swap:  2040244k total,  520k used,  2039724k free,  6920384k cached
> 
> it used to be around 5% to 10%.  The server is a quad-xeon dell pe 6650 
> running CentOS 4.2 with 8G of RAM running pg 8.1.1.
> 
> How can I determine what is causing such high system load?  it seems to 
> have immediately jumped with the php upgrade.

This sounds more like a PHP question than a PostgreSQL question.

However, we had a similar problem recently, and I ran ktrace (on FreeBSD)
to track down the system calls.  2 things jumped out:
1) pg_connect() creates a LOT of system time compared to pg_pconnect()
2) require_once() and include_once() are truly evil.

HTH.

-- 
Bill Moran
Collaborative Fusion Inc.



Re: [GENERAL] high %system time

2006-06-06 Thread Jacob Coby

Bill Moran wrote:

On Tue, 06 Jun 2006 09:19:10 -0400
Jacob Coby <[EMAIL PROTECTED]> wrote:

We recently upgraded from php 4.3.10 to 5.1.2, and the %system time has 
skyrocketed:


Cpu(s): 42.8% us, 43.6% sy,  0.0% ni, 11.3% id,  2.2% wa,  0.2% hi,  0.0% si
Mem:   8312844k total,  7566168k used,   746676k free,22356k buffers
Swap:  2040244k total,  520k used,  2039724k free,  6920384k cached

it used to be around 5% to 10%.  The server is a quad-xeon dell pe 6650 
running CentOS 4.2 with 8G of RAM running pg 8.1.1.


How can I determine what is causing such high system load?  it seems to 
have immediately jumped with the php upgrade.


This sounds more like a PHP question than a PostgreSQL question.


I know.  I'm looking for advice on what would affect system time.  It's 
actually not related so much to the php upgrade as to a major code change 
we did.  What baffles me is that the db abstraction layer didn't change.




However, we had a similar problem recently, and I ran ktrace (on FreeBSD)
to track down the system calls.  2 things jumped out:
1) pg_connect() creates a LOT of system time compared to pg_pconnect()


We use pg_pconnect().  We also reuse connections to avoid the overhead 
of RESET ALL (php issues a RESET ALL on persistent connections, which 
can take up to 0.1s)



2) require_once() and include_once() are truly evil.


Can you elaborate a bit on this?  Privately if you want since it's off 
topic for this list.


Thanks.
--
Jacob Coby




Re: [GENERAL] high %system time

2006-06-06 Thread Bill Moran
On Tue, 06 Jun 2006 09:37:16 -0400
Jacob Coby <[EMAIL PROTECTED]> wrote:

> Bill Moran wrote:
>
> > 2) require_once() and include_once() are truly evil.
> 
> Can you elaborate a bit on this?  Privately if you want since it's off 
> topic for this list.

They're evil.  They are a bad idea gone horribly awry.

Here are some specific reason why they should never be used and should
be removed from the language:
1) They encourage sloppy coding.  If hackers are using *_once() it means
   they don't know their inclusion hierarchy.
2) They incur lots of system time by stat()ing lots of directories and
   files for everything you include.
3) If you have the same file included multiple times, it still runs the
   stat()s _every_time_, even if the file has already been included and
   doesn't need to be included again.  I believe this is necessary for
   security purposes, but it's a LOT of overhead.

After determining that require_once() was requiring significant amounts
of CPU for our application, we tasked a single developer to organize and
replace them all with require().  It took him an hour or two.  The result
was about a 2x performance boost, with significant reduction in system
time.  It was a big enough jump that the bottleneck relocated to another
location and I haven't been back to optimizing PHP since.

In conclusion, (require|include)_once() are evil.  The only time they
should be used is when the application is so small that it doesn't need
them.

-- 
Bill Moran
Collaborative Fusion Inc.



Re: [GENERAL] 7.3.3, Fedora Core 5, test geometry and test horology

2006-06-06 Thread Tom Lane
David Fetter <[EMAIL PROTECTED]> writes:
> On Mon, Jun 05, 2006 at 07:03:59PM -0300, Daniel Henrique Alves Lima wrote:
>> Yes, i know, we are using a jurassic version of postgresql and we
>> should try to upgrade first, but that is the next step. Now we've to
>> make sure that 7.3.3 will work with FC5.

> If you're going with something in the (as you point out, Jurassic)
> 7.3.x series, you need to make sure that 7.3.15 is running, as all
> previous versions have known security and/or data loss bugs.  

Yup.  The particular errors you mention are harmless (eg, the 7.3
regression tests are unaware of the recent changes in US DST laws)
but it'd be foolish not to be using the latest 7.3.x release.

regards, tom lane



Re: [GENERAL] How to use index in case insensitive substring search

2006-06-06 Thread Andrus
> how about:
> create index nimib2 on firma1.klient(lower(nimi) varchar_pattern_ops);
> ^^^

Hakan, thank you.
Excellent.
It works.

Andrus. 





Re: [GENERAL] ALTER USER ..... PASSWORD ....

2006-06-06 Thread Tom Lane
Rafal Pietrak <[EMAIL PROTECTED]> writes:
> psql client tool logs issued commands into ~/.psql_history, which is
> VERY useful. I grep the file extensively.

> But when it comes to command like "ALTER/CREATE USER ... PASSWORD" I'd
> rather have it NOT logged.

The history file is only readable by yourself, so I see no problem.
Personally I *don't* want psql editorializing on what it saves there.

regards, tom lane



[GENERAL] Backwards index scan

2006-06-06 Thread Carlos Oliva

Are there any configurations/flags that we should re-set for
the database (v 7.4.x) in order to enable a backwards scan on an index? 
We are trying to query a table in descending order.  We added an index
that we were hoping would be scanned backwards but EXPLAIN never indicates that
the optimizer will carry out a backwards scan on the index that we added to the
table.  EXPLAIN indicates that the optimizer will always use a sequential
scan if we order the query in descending order.

 

OUR TESTS

We are conducting a simple test to assess if the optimizer
ever uses the index.  The table has several columns and the select statement
is as follows: select * from ord0007 order by prtnbr, ordschdte desc.  The
index that we added is "ord0007_k" btree (prtnbr, ordschdte).  Prtnbr
is numeric(10,0) not null, and ordschdte is date.

We find that the optimizer uses the index for the query if
we set enable_sort to off and the query uses ordschdte in ascending order as
follows: select * from ord0007 order by prtnbr, ordschdte.  For this query,
EXPLAIN returns the following output:

                                  QUERY PLAN
--------------------------------------------------------------------------------
 Index Scan using ord0007_k on ord0007  (cost=0.00..426.03 rows=232 width=1816)
(1 row)

However the optimizer uses a sequential scan if we order by
a descending ordschdte as follows: select * from ord0007 order by prtnbr,
ordschdte desc.  For this query, whether we set enable_sort to on or
off, EXPLAIN returns the following output:

                           QUERY PLAN
----------------------------------------------------------------
 Sort  (cost=10155.44..10156.02 rows=232 width=1816)
   Sort Key: prtnbr, ordschdte
   ->  Seq Scan on ord0007  (cost=0.00..146.32 rows=232 width=1816)
(3 rows)








Re: [GENERAL] Backwards index scan

2006-06-06 Thread Alan Hodgson
On June 6, 2006 07:59 am, "Carlos Oliva" <[EMAIL PROTECTED]> wrote:
> We are conducting a simple test to assess if the optimizer ever uses the
> index.  The table has several columns and the select statement is as
> follows: select * from ord0007 order by prtnbr, ordschdte desc.  The
> index that we added is "ord0007_k" btree (prtnbr, ordschdte).  Prtnbr is
> numeric(10,0) not null, and ordschdte is date.

You have to "order by prtnbr desc, ordschdte desc" to have the index used 
the way you want.  You can re-order in an outer query if you need to.
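The reason mixed directions can't use the index can be sketched with plain tuples (my own illustration, dates simplified to integers): a btree on (prtnbr, ordschdte) can be read forwards (asc, asc) or backwards (desc, desc), but no single scan direction yields (asc, desc):

```python
# (prtnbr, ordschdte) pairs; dates flattened to ints for simplicity
rows = [(1, 20060501), (2, 20060415), (1, 20060601), (2, 20060606)]

# The btree stores entries in (prtnbr ASC, ordschdte ASC) order ...
index_order = sorted(rows)

# ... so a backwards scan of it reads (prtnbr DESC, ordschdte DESC).
backwards = list(reversed(index_order))
want_desc_desc = sorted(rows, key=lambda r: (-r[0], -r[1]))

# ORDER BY prtnbr, ordschdte DESC is a different permutation entirely,
# which is why the planner falls back to an explicit sort.
want_asc_desc = sorted(rows, key=lambda r: (r[0], -r[1]))
```

Ordering both keys DESC and, if needed, re-ordering in an outer query lets the backwards scan serve the query directly.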

-- 
Alan



Re: [GENERAL] ALTER USER ..... PASSWORD ....

2006-06-06 Thread Rafal Pietrak
The point is, it hangs around: in backup 'tapes', to begin with.

And if it's the postmaster account's history, there may be lots of other
people's passwords to find. (So I generally disable the postmaster's
psql_history, but that's a nuisance.)

But, as I said, it's a sort of nuisance, not really an issue.

Obviously, psql is not the place for any extensive command filtering. But
this touches security, and I would be willing to make an exception here.

Still, that's just my 2c.

Regards,

-R

On Tue, 2006-06-06 at 10:07 -0400, Tom Lane wrote:
> Rafal Pietrak <[EMAIL PROTECTED]> writes:
> > psql client tool logs issued commands into ~/.psql_history, which is
> > VERY useful. I grep the file extensively.
> 
> > But when it comes to command like "ALTER/CREATE USER ... PASSWORD" I'd
> > rather have it NOT logged.
> 
> The history file is only readable by yourself, so I see no problem.
> Personally I *don't* want psql editorializing on what it saves there.
> 
>   regards, tom lane
> 
-- 
-R



Re: [GENERAL] Backwards index scan

2006-06-06 Thread Carlos Oliva
Thanks for your response Alan.  This indeed corrects the problem, as long as
we configure the database with enable_seqscan=false.

Perhaps you can help me with a side effect of using this index: rows with
null dates seem to fall off the index.  When ordschdte is null, the
query fails to return the rows for which ordschdte is null.  We had to
resort to a second query that uses a sequential scan to retrieve the rows
that have a null ordschdte.  Is there any kind of index that we can create
that would allow us to order by ordschdte and which would retrieve rows with
null dates?

Thanks in advance for your response.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alan Hodgson
Sent: Tuesday, June 06, 2006 11:05 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Backwards index scan

On June 6, 2006 07:59 am, "Carlos Oliva" <[EMAIL PROTECTED]> wrote:
> We are conducting a simple test to assess if the optimizer ever uses the
> index.  The table has several columns and the select statement is as
> follows: select * from ord0007 order by prtnbr, ordschdte desc.  The
> index that we added is "ord0007_k" btree (prtnbr, ordschdte).  Prtnbr is
> numeric(10,0) not null, and ordschdte is date.

You have to "order by prtnbr desc, ordschdte desc" to have the index used 
the way you want.  You can re-order in an outer query if you need to.

-- 
Alan






Re: [GENERAL] Backwards index scan

2006-06-06 Thread John Sidney-Woollett
I don't think that null values are indexed - you'll probably need to 
coalesce your null data values to some sentinel value if you want them 
indexed.

You can coalesce those values back to null when you retrieve the data 
from the query.


John

Carlos Oliva wrote:

Thanks for your response Alan.  This indeed corrects the problem, as long as
we configure the database with enable_seqscan=false.

Perhaps you can help me with a side effect of using this index: rows with
null dates seem to fall off the index.  When ordschdte is null, the
query fails to return the rows for which ordschdte is null.  We had to
resort to a second query that uses a sequential scan to retrieve the rows
that have a null ordschdte.  Is there any kind of index that we can create
that would allow us to order by ordschdte and which would retrieve rows with
null dates?

Thanks in advance for your response.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alan Hodgson
Sent: Tuesday, June 06, 2006 11:05 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Backwards index scan

On June 6, 2006 07:59 am, "Carlos Oliva" <[EMAIL PROTECTED]> wrote:


We are conducting a simple test to assess if the optimizer ever uses the
index.  The table has several columns and the select statement is as
follows: select * from ord0007 order by prtnbr, ordschdte desc.  The
index that we added is "ord0007_k" btree (prtnbr, ordschdte).  Prtnbr is
numeric(10,0) not null, and ordschdte is date.



You have to "order by prtnbr desc, ordschdte desc" to have the index used 
the way you want.  You can re-order in an outer query if you need to.






[GENERAL] Trigger function to audit any kind of table

2006-06-06 Thread Sergio Duran
Hello,

I would like to know if it is possible to create a trigger function which
does something like

CREATE OR REPLACE FUNCTION table_audit() RETURNS TRIGGER AS $table_audit$
BEGIN
  INSERT INTO audit SELECT TG_OP, current_timestamp, current_user, OLD, NEW;
  RETURN NEW;
END $table_audit$ LANGUAGE plpgsql;

Can I create a table with those two RECORD pseudo-types or something
compatible?

I'm using pgsql 8.1

Thanks.


Re: [GENERAL] Backwards index scan

2006-06-06 Thread Andrew Sullivan
On Tue, Jun 06, 2006 at 12:27:33PM -0400, Carlos Oliva wrote:
> Thanks for your response Alan.  This indeed corrects the problem as long as
> we configure the database to enable_seqscan=false.

If you have to do that, something is still wrong.  Do you have
accurate statistics?  Is the planner mistaken about something?

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
--Alexander Hamilton



Re: [GENERAL] COLLATE

2006-06-06 Thread Nikolay Samokhvalov

On 6/5/06, Martijn van Oosterhout wrote:

> Yeah, I was working on it but got stuck on the planner/optimiser
> changes. In the mean time the tree drifted and lack of interest, which
> gets us where we are now...

Very bad news :-(
I was looking forward to this feature... So many troubles in my
projects would have vanished...

What is 'lack of interest'? Interest from the community, or from major
developers, or your personal interest?



Re: [GENERAL] Backwards index scan

2006-06-06 Thread Greg Stark

John Sidney-Woollett <[EMAIL PROTECTED]> writes:

> I don't think that null values are indexed - you'll probably need to coalesce
> your null data value to some value if you want it indexed.

That is most definitely not true for Postgres. NULL values are included in the
index.

However NULLs sort as greater than all values in Postgres. So when you sort
descending they'll appear *first*. If you sort ascending they'll appear last.

If you have any clauses like 'WHERE col > foo' then it will not be true for
NULL values of col regardless of what foo is and those records will be
dropped. This is true regardless of whether there's an index.
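These rules can be emulated in a few lines (a sketch of the semantics only, not of Postgres internals; it assumes the non-null values are mutually comparable):

```python
# Emulate Postgres ordering: NULLs sort as greater than all values.
def pg_key(v):
    return (v is None, v)   # (False, value) < (True, None)

dates = ["2006-01-01", None, "2006-06-06", None, "2005-12-31"]

asc = sorted(dates, key=pg_key)                  # NULLs appear last
desc = sorted(dates, key=pg_key, reverse=True)   # NULLs appear first

# WHERE col > '2006-01-01' is never true for NULL, index or no index:
filtered = [d for d in dates if d is not None and d > "2006-01-01"]
```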


-- 
greg




Re: [GENERAL] [pgsql-advocacy] Me And My Database

2006-06-06 Thread Jim C. Nasby
Moving to -general.

On Fri, Jun 02, 2006 at 10:45:14PM +0200, Leif B. Kristensen wrote:
> Don't know if it's relevant here, but I've recently launched the third 
> article in a series on my transition from shrink-wrapped genealogy 
> software on Windows to my own custom PostgreSQL/PHP application running 
> on Linux. The new article is almost all about PostgreSQL, along with 
> some rantings about commercial genealogy software and a description on 
> my "structured document" (or WYSIWYG) PHP interface. Along with this, 
> I've created a blog about the project. 
> 
> I wonder if anyone here would want to review the stuff and maybe post 
> some comments. It's all at , and the 
> latest article is of course the 
> . The second article in the 
> series  was my preliminary 
> PostgreSQL data definition. A lot of it is outdated, but it might be 
> interesting for someone as well.
> 
> Eventually I'll publish the full code, but it's still rather immature.

From the bottom of that page:

SELECT * FROM sources INTO src WHERE source_id = $1;

SELECT * is generally something to avoid. You end up shoving around more
data than needed. Granted, in this case it's only getting shoved down
into plpgsql, but it's still extra work for the server.

Also, the commentary about how MySQL is faster isn't very clear. Are you
using MySQL as some kind of result cache? When you get to running actual
concurrent access on the website, you could well find yourself very
disappointed with the performance of MyISAM and its table-level
locking. There are probably also some gains to be had in the PostgreSQL
performance.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] [pgsql-advocacy] Me And My Database

2006-06-06 Thread Leif B. Kristensen
On Wednesday 7. June 2006 00:10, Jim C. Nasby wrote:
>Moving to -general.
>From the bottom of that page:
>
>SELECT * FROM sources INTO src WHERE source_id = $1;
>
>SELECT * is generally something to avoid. You end up shoving around
> more data than needed. Granted, in this case it's only getting shoved
> down into plpgsql, but it's still extra work for the server.

I know that. But the table is only four columns wide, and all the 
columns matter in this query. Eventually I'll remove such things 
as "SELECT * FROM ..." which really is a bad habit.

>Also, the commentary about how MySQL is faster isn't very clear. Are
> you using MySQL as some kind of result cache? When you get to running
> actual concurrent access on the website, you could well find yourself
> very disappointed with the performance of MyISAM and its table-level
> locking. There are probably also some gains to be had on the PostgreSQL
> side.

I may have been a little unclear here. My production database is 
PostgreSQL, as it quite clearly is the better choice of the two, in 
particular wrt data integrity. My Web presentation software is quite a 
different matter. It's running at a web hotel that only offers 
MySQL as a database. I find MySQL with MyISAM quite sufficient for 
that use, as its only purpose is to serve up simple selects quickly.

The reason why the generation of e.g. the family sheet is faster in the 
MySQL web context than in my production environment is that I'm really 
comparing apples and potatoes here. The Web database has a much flatter 
and denormalized structure, due to the fact that there's no editing. 
The entire Web database is repopulated from scratch every time I do an 
update.
-- 
Leif Biberg Kristensen | Registered Linux User #338009
http://solumslekt.org/ | Cruising with Gentoo/KDE

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Data about rate of downloads

2006-06-06 Thread Ilir Gashi

Hello,

My name is Ilir Gashi and I am a PhD student at the Centre for Software 
Reliability, City University, London, UK. I was wondering if anyone can 
help me with the following question:


- Are the numbers of downloads of a particular release of the PostgreSQL 
server recorded anywhere centrally, and if these numbers are known (at least 
approximately), can they be shared?


We are interested in doing some research on how much the classical Software 
Reliability Growth models can be used for reliability predictions of a 
server, based on the bug reports for that server and the download rates. 
This work would form part of my PhD thesis and, since we are an 
academic institution, all the work and findings would be made available 
freely.


Apologies if I have posted this question to the wrong list, in which case 
I would be very grateful for suggestions on where I should post it 
instead.


Thanks in advance,

Best regards,

Ilir

P.S. Just to clarify, we are purely interested in the numbers, not the 
identities of the 'downloaders'.

--


Ilir Gashi 
PhD Student 
Centre for Software Reliability 
City University 
Northampton Square, London EC1V 0HB

Tel: +44 20 7040 0273
Fax: +44 20 7040 8585
email: [EMAIL PROTECTED]
website: http://www.csr.city.ac.uk/csr_city/staff/gashi/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Data about rate of downloads

2006-06-06 Thread Rodrigo Gonzalez
I don't know the answer to your question, but I think that you forgot 
something: most Linux distributions have PostgreSQL included. So 
(I don't know the number) some people that use it didn't download it from 
the PostgreSQL site or mirrors...


Ilir Gashi wrote:

> [original question quoted in full; snipped]



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Plpgsql Multidimensional array

2006-06-06 Thread Rodrigo Carvalhaes

Hi!

I need to write a plpgsql function, and the only approach I can think of 
uses a multidimensional array, BUT I know that multidimensional arrays are 
not supported in plpgsql, so I want to get some ideas from this list.


Our system has a proprietary way of doing its "ENCODING". For example, 
if a product description is "Paper No. 4" it is stored as "Papel 
N\\012\\015 4" in the database, i.e. changing "o" to "\\012" and "." 
to "\\015". Of course it makes more than just these substitutions; 
there are about 80 items in all.


As I have to access this data externally (read) and insert data directly 
into the database via another application, I have to write a function 
that can do the conversion both ways:


DATABASE(ENCODED) => DECODE => DISPLAY
INPUT DATA => CODE => STORE ON THE DATABASE

So, my idea was:
1. Write a plpgsql function with two arguments: a varchar (the string to 
encode or decode) and a bool that says whether to encode or decode;
2. Build a multidimensional array with all the substitution pairs, e.g. 
{'o', '\\012'}, {'.', '\\015'};
3. Depending on the bool argument, loop over the pairs and use the 
replace function to encode or decode the string.
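
The loop in step 3 amounts to one substitution table applied in either
direction. In rough Python terms (the pairs below are placeholders; the
real table has ~80 entries):

```python
# Hypothetical substitution table: decoded character -> encoded sequence.
# These two pairs are placeholders for the real ~80-item list.
SUBSTITUTIONS = {
    "o": r"\012",
    ".": r"\015",
}

def arruma_memo(text: str, decode: bool) -> str:
    """Apply the substitution table in one direction: decode=True turns
    encoded sequences back into display characters, decode=False encodes."""
    for plain, coded in SUBSTITUTIONS.items():
        if decode:
            text = text.replace(coded, plain)
        else:
            text = text.replace(plain, coded)
    return text

print(arruma_memo("Paper No. 4", decode=False))  # Paper N\012\015 4
print(arruma_memo(r"Paper N\012\015 4", decode=True))  # Paper No. 4
```

Note this round-trips only as long as the encoded sequences never occur in
the plain text, which the proprietary scheme presumably guarantees.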


It's not possible to put these ENCODINGs in a table, because I am not 
allowed to create tables in this database; I can only use functions!


Something like this:

CREATE OR REPLACE FUNCTION arruma_memo("varchar", bool)
  RETURNS "varchar" AS
$BODY$
DECLARE
    old_string ALIAS FOR $1;
    tipo_conversao ALIAS FOR $2;
    new_string varchar := old_string;
    -- Two parallel arrays instead of a multidimensional one:
    -- varr_codigo[i] is the encoded form of varr_texto[i].
    -- (Values here are placeholders; the real list has ~80 pairs.)
    varr_codigo varchar[] := ARRAY[$$\012$$, $$\015$$, $$[[341$$];
    varr_texto  varchar[] := ARRAY[$$o$$, $$.$$, $$á$$];
BEGIN
    IF tipo_conversao THEN
        -- decode: database encoding -> display text
        FOR i IN 1 .. array_upper(varr_codigo, 1) LOOP
            new_string := replace(new_string, varr_codigo[i], varr_texto[i]);
        END LOOP;
    ELSE
        -- encode: input text -> database encoding
        FOR i IN 1 .. array_upper(varr_codigo, 1) LOOP
            new_string := replace(new_string, varr_texto[i], varr_codigo[i]);
        END LOOP;
    END IF;
    RETURN new_string;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;

--

[]'s

Rodrigo Carvalhaes



--
This message was checked by the antivirus system and
is believed to be free of danger.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Data about rate of downloads

2006-06-06 Thread Robert Treat
On Tuesday 06 June 2006 19:29, Ilir Gashi wrote:
> Hello,
>
> My name is Ilir Gashi and I am a PhD student at the Centre for Software
> Reliability, City University, London, UK. I was wondering if anyone can
> help me with the following question:
>
> - Are the numbers of downloads of a particular release of the PostgreSQL
> server recorded anywhere centrally, and if (at least approximatly) these
> numbers are known, can they be shared?
>

We don't really track it thoroughly... looking at some numbers, we've had ~ 
60,000 downloads of 8.1 since 8.1.4 was released... but that's only people who 
have gone through our main website... a lot of people get updates through 
their OS vendor (ports, apt-get, yum, etc...) and a fair number also go directly 
to an ftp server or website.  

> We are interested on doing some research on how much the classical Software
> Reliability Growth models can be used for Reliability predictions of a
> server based on the bug reports for that server and the download rates.
> This work would form some part of my PhD thesis and, since we are an
> academic insitution, all the work and findings would be made available
> freely.
>

I doubt they could for open source projects, unless those projects had very 
tight control over their distribution (which is kind of self-defeating).  
Smaller projects on places like pgFoundry might be able to track this (Slony 
comes to mind), but for us it seems doubtful. 

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [pgsql-advocacy] Me And My Database

2006-06-06 Thread Robert Treat
On Tuesday 06 June 2006 18:44, Leif B. Kristensen wrote:
> On Wednesday 7. June 2006 00:10, Jim C. Nasby wrote:
> >Moving to -general.
> >From the bottom of that page:
> >
> >SELECT * FROM sources INTO src WHERE source_id = $1;
> >
> >SELECT * is generally something to avoid. You end up shoving around
> > more data than needed. Granted, in this case it's only getting shoved
> > down into plpgsql, but it's still extra work for the server.
>
> I know that. But the table is only four columns wide, and all the
> columns matter in this query. Eventually I'll remove such things
> as "SELECT * FROM ..." which really is a bad habit.
>
> >Also, the commentary about how MySQL is faster isn't very clear. Are
> > you using MySQL as some kind of result cache? When you get to running
> > actual concurrent access on the website, you could well find yourself
> > very disappointed with the performance of MyISAM and its table-level
> > locking. There are probably also some gains to be had on the PostgreSQL
> > side.
>
> I may have been a little unclear here. My production database is
> PostgreSQL, as it quite clearly is the better choice of the two, in
> particular wrt data integrity. My Web presentation software is quite a
> different matter. It's running at a web hotel that's only offering
> MySQL for a database. I find MySQL with MyISAM quite sufficient for
> that use, as its only purpose is to serve up simple selects quickly.
>

I'd think sqlite would be even faster, though it sounds like that might not be 
an option for you. 

> The reason why the generation of eg. the family sheet is faster in the
> MySQL web context than in my production environment, is that I'm really
> comparing apples and potatoes here. The Web database has a much flatter
> and denormalized structure, due to the fact that there's no editing.
> The entire Web database is repopulated from scratch every time I do an
> update.

If you're going through this kind of step now, why not just generate the whole 
site from the pg database as html pages and then push those out to the 
client? That way you eliminate any dbms overhead and reduce load on your 
webservers (and eliminate the need for a 2nd db schema).
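
A minimal sketch of that "bake it to static HTML" idea (the names, file
layout, and in-memory row source are all invented; a real version would
pull the rows from the pg database):

```python
import pathlib
import tempfile

# Stand-in for rows pulled from the production PostgreSQL database.
people = [
    {"id": 1, "name": "Ola Nordmann", "born": "1823"},
    {"id": 2, "name": "Kari Nordmann", "born": "1825"},
]

def bake_site(rows, out_dir: pathlib.Path) -> int:
    """Render one static HTML page per row; returns the page count."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for row in rows:
        page = out_dir / f"person_{row['id']}.html"
        page.write_text(
            f"<html><body><h1>{row['name']}</h1>"
            f"<p>Born {row['born']}</p></body></html>",
            encoding="utf-8",
        )
    return len(rows)

out = pathlib.Path(tempfile.mkdtemp())
count = bake_site(people, out)
print(count)  # 2
```

The baked pages can then be rsynced to the web hotel, with no database at
all on the serving side.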

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] psql: krb5_sendauth: Bad application version was sent

2006-06-06 Thread Eric Montague

Thanks for the response. You were right about setting the env variable;
that cleared up the 'bad appl version' error. Subsequently, I got it to
the point where the connection attempt just hung, and hitting Ctrl-C at
the psql prompt caused a "stop batch process?" msg. I stopped there.

We are going to run PG on a linux machine, so running everything on
windows has become a moot point.

I was successful in configuring PG on ubuntu linux to authenticate users on
windows using kerberos and Active Directory. Pavel's directions in the
user-commented PG docs were spot on.

If I get time, I might do some more work trying to get an all-windows
kerberos configuration working.

Thanks again.

Magnus Hagander wrote:

I'm trying to setup Kerberos authentication with PG on Windows 2000.

I have installed the MIT Kerberos Windows dlls into the PG bin directory
- replacing the krb5_32.dll and comerr32.dll from the PG install.
I did this because the PG install did not have the krbcc32.dll, which is
needed to find the credentials cache on Windows.


The name of the service is 'POSTGRES', as is the name of the 
user who starts the service.


I've created the krb5.keytab, and set its location in 
postgresql.conf, and set krb_srvname = 'POSTGRES'

krb_server_hostname = 'host.domain.com' (the host machine)

I've set pg_hba.conf to use krb5 for my username.

When I try to connect with psql, I get the following error in the cmd
window:
'krb5_sendauth: Bad application version was sent (via sendauth)'
and the following msg in the pg_log
'authentication LOG:  Kerberos recvauth returned error -1765328179'




Not sure if I've heard of anybody successfully running a pg *server* on
windows with kerberos, but I run several pg servers on linux with clients
on windows, using kerberos and AD.

This error message looks a lot like the ones you'd get if you have an
incorrect service name on the client. Did you change the name on the
client as well, or just in postgresql.conf? (Your steps seem to
indicate only postgresql.conf.) The easiest way is to set the
environment variable PGKRBSRVNAME=POSTGRES.
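
For a libpq-based client launched from a script, that would look roughly
like the following sketch (it assumes psql is on the PATH and a reachable
server, so the actual connection line is left commented out):

```python
import os
# import subprocess  # uncomment to actually launch psql

# libpq reads PGKRBSRVNAME from the environment, so set it before
# starting any client process that should use the POSTGRES service name.
os.environ["PGKRBSRVNAME"] = "POSTGRES"

# subprocess.run(["psql", "-h", "host.domain.com", "-U", "someuser"])
print(os.environ["PGKRBSRVNAME"])  # POSTGRES
```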

//Magnus

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org

  


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings