Re: [GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Tom Lane
"Lim Berger" <[EMAIL PROTECTED]> writes:
>> Thanks Tom. But on a newly minted table, sure, the performance would
>> be great. My table now has about 3 million rows (both in MySQL and
>> PG).

Well, INSERT speed is really not very dependent on table size (else I'd
have inserted a few zillion rows before timing it).  Personally I rather
like Tony's theory that you've gotten confused about what the timing
units are.

> Btw, the query is as simple as:

> INSERT INTO cachedstats
> (id, prof_name, notes, inform_prof_on_change)
> VALUES
> (3190087, 'Lim Berger', '.text of about 1000 chars', 'Y');

These are the sorts of details that you can't leave out if you want
an intelligent response.

> I am testing through PHP's microtime() function.

Well, that brings up a whole host of questions, starting with whether
the PHP DB adapter you're using is any good.  Can you replicate similar
timings if you feed the same query into plain psql?
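
For instance, a quick check along those lines (an editorial sketch; the
id value is made up so it won't collide with an existing row):

\timing
INSERT INTO cachedstats (id, prof_name, notes, inform_prof_on_change)
VALUES (9999999, 'Lim Berger', 'text of about 1000 chars', 'Y');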

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Lim Berger
On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
> On 8/14/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > "Lim Berger" <[EMAIL PROTECTED]> writes:
> > > I have a table in MySQL with three compound indexes. I have only three
> > > columns from this table also in PostgreSQL, which serves as a cache of
> > > sorts for fast queries, and this table has only ONE main index on the
> > > primary key!
> >
> > > INSERTing into MySQL takes 0.0001 seconds per insert query.
> > > INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
> >
> > You had better provide some details, because that's completely out of
> > line, assuming that by "insert query" you mean insert one row.  For a
> > comparison point, I get this on a rather old and slow machine:
> >
> > regression=# create table t1 (f1 int, f2 int, f3 int,
> > regression(# unique(f1,f2),
> > regression(# unique(f2,f3),
> > regression(# unique(f1,f3));
> > NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f1_key" for 
> > table "t1"
> > NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f2_key" for 
> > table "t1"
> > NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f1_key1" for 
> > table "t1"
> > CREATE TABLE
> > regression=# \timing
> > Timing is on.
> > regression=# insert into t1 values(1,2,3);
> > INSERT 0 1
> > Time: 9.048 ms
> > regression=# insert into t1 values(1,7,4);
> > INSERT 0 1
> > Time: 4.357 ms
> > regression=# insert into t1 values(11,7,5);
> > INSERT 0 1
> > Time: 3.998 ms
> > regression=#
>
>
>
> Thanks Tom. But on a newly minted table, sure, the performance would
> be great. My table now has about 3 million rows (both in MySQL and
> PG).
>
> Here's the table definition:
>
>
>                          Table "public.cachedstats"
>         Column         |         Type          |          Modifiers
> -----------------------+-----------------------+------------------------------
>  id                    | bigint                | not null
>  prof_name             | character varying(20) | not null
>  notes                 | text                  | not null
>  inform_prof_on_change | character(1)          | not null default 'N'::bpchar
> Indexes:
>     "cachedstats_pkey" PRIMARY KEY, btree (id)
>     "idx_cachedstats_unique_prof_name" UNIQUE, btree (prof_name)
>
>
> How can I test the bottleneck?
>



Btw, the query is as simple as:

INSERT INTO cachedstats
(id, prof_name, notes, inform_prof_on_change)
VALUES
(3190087, 'Lim Berger', '.text of about 1000 chars', 'Y');

I am testing through PHP's microtime() function. The query is issued
through the pg_query() function of PHP. I know there could be some latency
coming in from the PHP PG functions' overhead, but not such an order
of magnitude different from "mysqli_query"!  (I hope -- because this
is quite a common real-world situation, I would think.)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Lim Berger
On 8/14/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
> > I have a table in MySQL with three compound indexes. I have only three
> > columns from this table also in PostgreSQL, which serves as a cache of
> > sorts for fast queries, and this table has only ONE main index on the
> > primary key!
>
> > INSERTing into MySQL takes 0.0001 seconds per insert query.
> > INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
>
> You had better provide some details, because that's completely out of
> line, assuming that by "insert query" you mean insert one row.  For a
> comparison point, I get this on a rather old and slow machine:
>
> regression=# create table t1 (f1 int, f2 int, f3 int,
> regression(# unique(f1,f2),
> regression(# unique(f2,f3),
> regression(# unique(f1,f3));
> NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f1_key" for 
> table "t1"
> NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f2_key" for 
> table "t1"
> NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f1_key1" for 
> table "t1"
> CREATE TABLE
> regression=# \timing
> Timing is on.
> regression=# insert into t1 values(1,2,3);
> INSERT 0 1
> Time: 9.048 ms
> regression=# insert into t1 values(1,7,4);
> INSERT 0 1
> Time: 4.357 ms
> regression=# insert into t1 values(11,7,5);
> INSERT 0 1
> Time: 3.998 ms
> regression=#



Thanks Tom. But on a newly minted table, sure, the performance would
be great. My table now has about 3 million rows (both in MySQL and
PG).

Here's the table definition:


 Table "public.cachedstats"
Column | Type  |  Modifiers
---+---+--
 id| bigint| not null
 prof_name | character varying(20) | not null
 notes | text  | not null
 inform_prof_on_change | character(1)  | not null default 'N'::bpchar

Indexes:
"cachedstats_pkey" PRIMARY KEY, btree (id)
"idx_cachedstats_unique_prof_name" UNIQUE, btree (alias)


How can I test the bottleneck?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Tony Caduto

Lim Berger wrote:
> On 8/14/07, Andrej Ricnik-Bay <[EMAIL PROTECTED]> wrote:
>> On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
>>> INSERTing into MySQL takes 0.0001 seconds per insert query.
>>> INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
>>>
>>> What can I do to improve this performance? What could be going wrong
>>> to elicit such poor insertion performance from Postgresql?

Lim,

Are you sure you are not confusing seconds vs. milliseconds from one
query tool to the other?


What tools are you using to show the query times?

As far as I know, the psql command line tool shows milliseconds, not seconds.

You should probably post some more info like the actual insert query 
used and the table.


I have never seen an insert take that long, even a big one, unless you
have some network latency to the PGSQL host and it's causing the time to
be messed up on the client.

Might also help to let us know exactly how you are timing this stuff.  
Are you connecting remotely via PSQL or are you connecting via SSH and 
running psql or mysql that way?



Later,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for PostgreSQL and MySQL

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Tom Lane
"Lim Berger" <[EMAIL PROTECTED]> writes:
> I have a table in MySQL with three compound indexes. I have only three
> columns from this table also in PostgreSQL, which serves as a cache of
> sorts for fast queries, and this table has only ONE main index on the
> primary key!

> INSERTing into MySQL takes 0.0001 seconds per insert query.
> INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.

You had better provide some details, because that's completely out of
line, assuming that by "insert query" you mean insert one row.  For a
comparison point, I get this on a rather old and slow machine:

regression=# create table t1 (f1 int, f2 int, f3 int,
regression(# unique(f1,f2),
regression(# unique(f2,f3),
regression(# unique(f1,f3));
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f1_key" for table 
"t1"
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f2_key" for table 
"t1"
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "t1_f1_key1" for 
table "t1"
CREATE TABLE
regression=# \timing 
Timing is on.
regression=# insert into t1 values(1,2,3);
INSERT 0 1
Time: 9.048 ms
regression=# insert into t1 values(1,7,4);
INSERT 0 1
Time: 4.357 ms
regression=# insert into t1 values(11,7,5);
INSERT 0 1
Time: 3.998 ms
regression=# 

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Tom Lane
"Lim Berger" <[EMAIL PROTECTED]> writes:
> I think I have located the problem. It is in "/etc/profile" where some
> ulimits are added. This is the offending text, I think:


> #* cPanel Added Limit Protections -- BEGIN

> #unlimit so we can run the whoami
> ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 100 -v unlimited 2>/dev/null

> LIMITUSER=$USER
> if [ -e "/usr/bin/whoami" ]; then
>     LIMITUSER=`/usr/bin/whoami`
> fi
> if [ "$LIMITUSER" != "root" ]; then
>     ulimit -n 100 -u 20 -m 20 -d 20 -s 8192 -c 20 -v 20 2>/dev/null
> else
>     ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 100 -v unlimited 2>/dev/null
> fi
> #* cPanel Added Limit Protections -- END

> I am not much of a shell scripter so I am afraid of breaking this,

My advice: remove the whole block that you've quoted.  After that, find
out what "cPanel" is, and get rid of that entire piece of brain-damage.
There might be some merit to restrictions as draconian as the above on
an overloaded multi-user machine, but there is no call for anything to
install restrictions like that behind the back of the machine's admin.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Lim Berger
On 8/14/07, Andrej Ricnik-Bay <[EMAIL PROTECTED]> wrote:
> On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
>
> > INSERTing into MySQL takes 0.0001 seconds per insert query.
> > INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
> >
> > What can I do to improve this performance? What could be going wrong
> > to elicit such poor insertion performance from Postgresql?
> MySQL might not be writing the data straight out
> to disk ... just a guess.
>



The MySQL table is MyISAM, yes, so no transaction support. I would
like PgSQL to do the same. These are not a batch of queries, so I
cannot bundle them inside a transaction. These are individual
submissions from the web.

To make PG behave in the above manner, I have the following in my conf:

commit_delay = 0
fsync = on
wal_buffers=64
checkpoint_segments=64
checkpoint_timeout=900

Am I missing something? (I may well be). Would explicitly issuing a
"COMMIT" command help at all? Should I do the following:

BEGIN TRANSACTION;
INSERT INTO...;
COMMIT;

Would this be faster?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Index not being used

2007-08-13 Thread Greg Smith

On Mon, 13 Aug 2007, Scott Marlowe wrote:


We can look at how big your shared_buffers are, your work_mem, and a
few others in postgresql.conf.


That's going to be sort_mem, not work_mem, with 7.4.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Andrej Ricnik-Bay
On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:

> INSERTing into MySQL takes 0.0001 seconds per insert query.
> INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
>
> What can I do to improve this performance? What could be going wrong
> to elicit such poor insertion performance from Postgresql?
MySQL might not be writing the data straight out
to disk ... just a guess.


> Thanks.
Cheers,
Andrej

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Tony Caduto

Lim Berger wrote:


INSERTing into MySQL takes 0.0001 seconds per insert query.
INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.

What can I do to improve this performance? What could be going wrong
to elicit such poor insertion performance from Postgresql?

Thanks.

---(end of broadcast)---
TIP 6: explain analyze is your friend

  



The first thing I would ask is what type of DB engine you are using
while doing the inserts on MySQL.
The reason MySQL is doing the inserts faster is that it does not have
transaction support if you are using anything other than InnoDB.


With that said, you can increase your insert performance by simply using
a transaction and committing every 1000 rows or so.


If you do this you will see a huge performance increase.
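
A minimal sketch of that pattern, using the table from this thread
(the id values and strings are illustrative):

BEGIN;
INSERT INTO cachedstats (id, prof_name, notes, inform_prof_on_change)
    VALUES (1, 'name 1', 'notes 1', 'N');
-- ... more single-row INSERTs, up to ~1000 ...
INSERT INTO cachedstats (id, prof_name, notes, inform_prof_on_change)
    VALUES (1000, 'name 1000', 'notes 1000', 'N');
COMMIT;  -- one WAL flush for the whole batch instead of one per row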

hope that helps.

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of Lightning Admin for PostgreSQL and MySQL



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Copy command and duplicate items (Support Replace?)

2007-08-13 Thread Ow Mun Heng
Hi,

Writing a script to pull data from SQL Server into a flat file (or just
pipe it directly into PG using Perl DBI).

Just wondering if the COPY command is able to do a replace if there is
existing data in the DB already. (This is usually the case for updates
to specific rows, where a timestamp indicates they have been changed,
etc.)

In MySQL, the mysqlimport utility has a --replace option which will
replace the data in the event of a duplicate.

Does PG support this?
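
(COPY itself has no --replace equivalent; a common workaround, sketched
here with illustrative names -- target table "target", key column "id" --
is to COPY into a staging table and merge by hand:)

BEGIN;
CREATE TEMP TABLE staging (LIKE target);
COPY staging FROM '/path/to/flatfile.txt';
DELETE FROM target WHERE id IN (SELECT id FROM staging);
INSERT INTO target SELECT * FROM staging;
COMMIT;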



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
On 8/14/07, Sander Steffann <[EMAIL PROTECTED]> wrote:
> Hi Lim,
>
> >> It might also be in /etc/security/limits.conf.
> >
> > Thanks. I see these two lines in that file:
> >
> > postgres        soft    nofile          8192
> > postgres        hard    nofile          8192
> >
> > How should I change these values? I am not sure how this reflects the
> > "ulimit" options.
>
> Those are limits on the allowed number of open files (ulimit -n). I think
> 8192 should be enough for PostgreSQL. The problems you had were related to
> other settings, so if only the "nofile" setting is changed, your strange
> ulimits do not come from here :-)


I think I have located the problem. It is in "/etc/profile" where some
ulimits are added. This is the offending text, I think:


#* cPanel Added Limit Protections -- BEGIN

#unlimit so we can run the whoami
ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 100 -v unlimited 2>/dev/null

LIMITUSER=$USER
if [ -e "/usr/bin/whoami" ]; then
    LIMITUSER=`/usr/bin/whoami`
fi
if [ "$LIMITUSER" != "root" ]; then
    ulimit -n 100 -u 20 -m 20 -d 20 -s 8192 -c 20 -v 20 2>/dev/null
else
    ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 100 -v unlimited 2>/dev/null
fi
#* cPanel Added Limit Protections -- END



I am not much of a shell scripter so I am afraid of breaking this, but
how can I change the line "if [ "$LIMITUSER" != "root" ];" to include
the postgres user as well? Can I do something like:

   if [ "$LIMITUSER" != "root" and "$LIMITUSER" != "postgres" ];

Would appreciate any thoughts!

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Postgresql INSERT speed (how to improve performance)?

2007-08-13 Thread Lim Berger
Hi,

I've googled and yahooed and most of the performance tweaks suggested
cover SELECT speed, some cover COPY speed with things like turning
fsync off and such. But I still have not found how to improve regular
INSERT speed on Postgresql.

I have a table in MySQL with three compound indexes. I have only three
columns from this table also in PostgreSQL, which serves as a cache of
sorts for fast queries, and this table has only ONE main index on the
primary key!

INSERTing into MySQL takes 0.0001 seconds per insert query.
INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.

What can I do to improve this performance? What could be going wrong
to elicit such poor insertion performance from Postgresql?

Thanks.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Index not being used

2007-08-13 Thread Scott Marlowe
Oh yeah, go read this:

http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm

Note that you shouldn't set your shared buffers quite as high as in
that guide, since you're running 7.4, which isn't quite as good at
using shared_buffers.
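
For reference, a rough 7.4-style starting point (these numbers are
illustrative, not taken from that guide; in 7.4, shared_buffers and
effective_cache_size are counted in 8 kB pages and sort_mem in kB):

shared_buffers = 10000          # about 80 MB, in 8 kB pages
sort_mem = 8192                 # per-sort memory, in kB
effective_cache_size = 100000   # about 800 MB, in 8 kB pages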

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Index not being used

2007-08-13 Thread Scott Marlowe
Oh, and you can use the sledge hammer of tuning by using the

set enable_xxx = off

settings for the planner.  It's not a normal way to tune most queries,
but it certainly can let you know if the problem is using the index or
not.

psql mydb
\timing
select count(*) from table where field > 12345;
set enable_seqscan=off;
select count(*) from table where field > 12345;

and compare them.  Run each a few times, since the cache will affect
the performance.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Index not being used

2007-08-13 Thread Scott Marlowe
On 8/13/07, Ralph Smith <[EMAIL PROTECTED]> wrote:
>
>
>
> On Aug 13, 2007, at 4:39 PM, Scott Marlowe wrote:
>
> On 8/13/07, Ralph Smith <[EMAIL PROTECTED]> wrote:
> I'm confused.  Shouldn't this index be used?
> (It's running on v7.4.7)
>
> airburst=> \d stats2
>  Table "public.stats2"
>   Column   | Type  | Modifiers
> ---+---+---
>  lab   | character varying(30) |
>  name  | character varying(50) |
>  status| character varying(40) |
>  eventtime | integer   |
>  username  | character varying(30) |
>  pkey  | character varying(60) |
> Indexes:
> "stats2_etime_index" btree (eventtime)
>
> airburst=> \d stats2_etime_index
> Index "public.stats2_etime_index"
>   Column   |  Type
> ---+-
>  eventtime | integer
> btree, for table "public.stats2"
>
> airburst=> explain select count(*) from stats2 where eventtime > 1167638400
> ;
>   QUERY PLAN
> ---
>  Aggregate  (cost=185247.97..185247.97 rows=1 width=0)
>->  Seq Scan on stats2  (cost=0.00..179622.45 rows=2250205 width=0)
>  Filter: (eventtime > 1167638400)
> (3 rows)
>
>
> ==========================================================
> That really depends.  how many rows are actually returned?  If it's
> 2250205 like the query planner thinks, and that's a fair chunk of the
> table, then no, it shouldn't use an index, a seq scan will be faster.
> What does explain analyze select ... say?
> ==========================================================
> Somewhere between 40,000 and 48,000 rows returned the index kicks in.
> Out of a table of 7 million rows, that's a fairly common count I have to
> work with.
> It's the amount of activity since August 2nd, this year; NOT that long ago.
>
> Any suggestions on speeding up these queries, other than using more and more
> tables, thus ruling out the reasonable use of command-line queries?
>
> Ultimately we'll move to some datawarehousing solution, but that's not a
> 'tomorrow' kind of thing...

Hmmm.  Not sure you answered my question about the explain analyze output.

Note that select count(*) from table is NEVER going to be fast in
PostgreSQL compared to MySQL with MyISAM tables, just because of the
difference in design.  The same kind of problem exists for MySQL with
InnoDB tables.  It's a FAQ; you can find lots of info on it by
searching this forum or reading the FAQ.  Basically it's a question of
visibility.  No matter whether or not the data are in the index, the
db has to look at the table to see if the row is visible.
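
(The usual cheap substitute, when an estimate is good enough, is the
planner's stored row count -- an editorial sketch; it is only as fresh
as the last VACUUM or ANALYZE:)

SELECT reltuples::bigint AS estimated_rows
  FROM pg_class
 WHERE relname = 'stats2';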

Oh good night, I just noticed you're running 7.4.7.  Two things.  1:
IMMEDIATELY UPDATE to 7.4.17 or whatever the latest 7.4 series is.
That's easy, you don't have to dump and restore the db for that.  2:
Start planning to upgrade to 8.2.4 now.  7.4 is getting old fast, and
I've found a lot of queries that run much faster (factors faster) in
8.2.4 than they did in 7.4.

That said, that probably won't help on this query a lot.  select
count(*) is expensive.

We can look at how big your shared_buffers are, your work_mem, and a
few others in postgresql.conf.  Make sure your stats are up to date,
and you might have to increase stats target for that field in your
database (see alter table alter column...)
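
For example, against the table in this thread (100 is an arbitrary bump
from the old default of 10):

ALTER TABLE stats2 ALTER COLUMN eventtime SET STATISTICS 100;
ANALYZE stats2;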

Let us know what explain analyze says.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Index not being used

2007-08-13 Thread Ralph Smith


On Aug 13, 2007, at 4:39 PM, Scott Marlowe wrote:

> On 8/13/07, Ralph Smith <[EMAIL PROTECTED]> wrote:
>> I'm confused.  Shouldn't this index be used?
>> (It's running on v7.4.7)
>>
>> airburst=> \d stats2
>>              Table "public.stats2"
>>   Column   |         Type          | Modifiers
>> -----------+-----------------------+-----------
>>  lab       | character varying(30) |
>>  name      | character varying(50) |
>>  status    | character varying(40) |
>>  eventtime | integer               |
>>  username  | character varying(30) |
>>  pkey      | character varying(60) |
>> Indexes:
>>     "stats2_etime_index" btree (eventtime)
>>
>> airburst=> \d stats2_etime_index
>> Index "public.stats2_etime_index"
>>   Column   |  Type
>> -----------+---------
>>  eventtime | integer
>> btree, for table "public.stats2"
>>
>> airburst=> explain select count(*) from stats2 where eventtime > 1167638400;
>>                               QUERY PLAN
>> ----------------------------------------------------------------------
>>  Aggregate  (cost=185247.97..185247.97 rows=1 width=0)
>>    ->  Seq Scan on stats2  (cost=0.00..179622.45 rows=2250205 width=0)
>>          Filter: (eventtime > 1167638400)
>> (3 rows)
>
> That really depends.  how many rows are actually returned?  If it's
> 2250205 like the query planner thinks, and that's a fair chunk of the
> table, then no, it shouldn't use an index, a seq scan will be faster.
> What does explain analyze select ... say?

Somewhere between 40,000 and 48,000 rows returned the index kicks in.
Out of a table of 7 million rows, that's a fairly common count I have
to work with.
It's the amount of activity since August 2nd, this year; NOT that
long ago.

Any suggestions on speeding up these queries, other than using more
and more tables, thus ruling out the reasonable use of command-line
queries?

Ultimately we'll move to some datawarehousing solution, but that's
not a 'tomorrow' kind of thing...

Thanks again all,

Ralph

Re: [GENERAL] Index not being used

2007-08-13 Thread Scott Marlowe
On 8/13/07, Ralph Smith <[EMAIL PROTECTED]> wrote:
> I'm confused.  Shouldn't this index be used?
> (It's running on v7.4.7)
>
> airburst=> \d stats2
>  Table "public.stats2"
>   Column   | Type  | Modifiers
> ---+---+---
>  lab   | character varying(30) |
>  name  | character varying(50) |
>  status| character varying(40) |
>  eventtime | integer   |
>  username  | character varying(30) |
>  pkey  | character varying(60) |
> Indexes:
> "stats2_etime_index" btree (eventtime)
>
> airburst=> \d stats2_etime_index
> Index "public.stats2_etime_index"
>   Column   |  Type
> ---+-
>  eventtime | integer
> btree, for table "public.stats2"
>
> airburst=> explain select count(*) from stats2 where eventtime > 1167638400
> ;
>   QUERY PLAN
> ---
>  Aggregate  (cost=185247.97..185247.97 rows=1 width=0)
>->  Seq Scan on stats2  (cost=0.00..179622.45 rows=2250205 width=0)
>  Filter: (eventtime > 1167638400)
> (3 rows)

That really depends.  how many rows are actually returned?  If it's
2250205 like the query planner thinks, and that's a fair chunk of the
table, then no, it shouldn't use an index, a seq scan will be faster.
What does explain analyze select ... say?
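
That is, for the query above:

explain analyze select count(*) from stats2 where eventtime > 1167638400;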

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Index not being used

2007-08-13 Thread Ralph Smith

I'm confused.  Shouldn't this index be used?
(It's running on v7.4.7)

airburst=> \d stats2
             Table "public.stats2"
  Column   |         Type          | Modifiers
-----------+-----------------------+-----------
 lab       | character varying(30) |
 name      | character varying(50) |
 status    | character varying(40) |
 eventtime | integer               |
 username  | character varying(30) |
 pkey      | character varying(60) |
Indexes:
    "stats2_etime_index" btree (eventtime)

airburst=> \d stats2_etime_index
Index "public.stats2_etime_index"
  Column   |  Type
-----------+---------
 eventtime | integer
btree, for table "public.stats2"

airburst=> explain select count(*) from stats2 where eventtime > 1167638400;
                              QUERY PLAN
----------------------------------------------------------------------
 Aggregate  (cost=185247.97..185247.97 rows=1 width=0)
   ->  Seq Scan on stats2  (cost=0.00..179622.45 rows=2250205 width=0)
         Filter: (eventtime > 1167638400)
(3 rows)

Thanks,
Ralph

Re: [GENERAL] Reordering columns, will this ever be simple?

2007-08-13 Thread Peter Childs
On 13/08/07, novnov <[EMAIL PROTECTED]> wrote:
>
>
> I would like to make a request for this feature to be added to postgres.
>
> Postgres is a really great database. I'm still very much a novice at using
> postgres but in general, it's been a very good experience and I plan to
> use
> it as often as I can. The community is very helpful.
>
> My projects tend to be more seat of the pants than your average enterprise
> database. Adding the ability to reorder columns in a simple manner would
> be
> a very nice addition. I think such a feature would add to the 'curb
> appeal'
> of postgres...make it more user friendly, more flexible.
>
> I don't know anything about the internals of databases, how they actually
> work, but from my 'ignorance is bliss' perspective, it seems odd that one
> can alter field names at will in postgres but reordering columns is such a
> big deal. Maybe internally columns are id'd via index position, thus order
> matters, and the name is a relatively light-duty property? Maybe too hard to
> explain it in simple terms.



Yes, that's it, or near enough. It was not until quite recently that you
could even drop a column.

Even now dropping a column is not a very good idea: it leaves the column
around inside the database, marked as deleted, data still intact (if I
remember correctly), so you can even get the data back by messing with
the database internals (not a very good idea; I believe an update on
pg_attribute will do the trick :)). Hence if you do more than drop the odd
column here or there, you may be better off dumping and reloading anyway!
Furthermore, if you then add another column, the already dropped column
does not get recycled.

I also remember some 7.2 drivers not coping correctly with 7.3 due to
dropped columns. Don't quote me on that; I just remember having some
problems about 3 years ago with Qt!

Hmm, this is beginning to sound like a leak.

Peter Childs


Decibel! wrote:
> >
> > On Tue, Aug 07, 2007 at 02:28:20PM -0500, Tony Caduto wrote:
> >> Gregory Stark wrote:
> >> >"novnov" <[EMAIL PROTECTED]> writes:
> >> >
> >> >
> >> >>Is there any plan to add such a capability to postgres?
> >> >>
> >> >
> >> >It's been talked about. I wouldn't be surprised to see it in 8.4 but nobody's
> >> >said they'll be doing it yet and there are a lot of other more exciting ideas
> >> >too.
> >> >
> >> >
> >> From an admin tool developer's perspective, the ability to reorder columns
> >> without manually copying to a new table and all that is pretty exciting
> >> :-)
> >
> > Patches welcome. :)
> >
> > BTW, this is much more likely to happen if we divorce presentation order
> > from actual storage order, something that there is some desire to do
> > because it would allow the engine to automagically store things in the
> > optimal ordering from an alignment standpoint.
> > --
> > Decibel!, aka Jim Nasby[EMAIL PROTECTED]
> > EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
> >
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/Reordering-columns%2C-will-this-ever-be-simple--tf4231761.html#a12129772
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
>


Re: [GENERAL] Running a stored procedure via pgagent, need an examp le

2007-08-13 Thread Dave Page


> --- Original Message ---
> From: novnov <[EMAIL PROTECTED]>
> To: pgsql-general@postgresql.org
> Sent: 13/08/07, 17:36:12
> Subject: [GENERAL] Running a stored procedure via pgagent, need an example
> 
> Can someone give me a simple example of the way in which I might be able to
> call a stored procedure or query using pgagent? I have never scripted a
> postgres .sql etc to run via cron or pgagent and an example of the syntax
> would be helpful. In pgagent terms, the Step is set up as SQL, and it's the
> definition that I would like an example of. I am guessing that one would
> save the routine off to a .sql file, then call via \i? 

Just enter the SQL to execute in the textbox on the step dialog, e.g.

SELECT my_proc();

Regards, Dave

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] is there a psql equivilent of fsck?

2007-08-13 Thread Chris Browne
[EMAIL PROTECTED] (Ben) writes:
> We recently installed and populated a new postgres 7.3 server, which
> was quickly abused with a good 12 hours of 115-degree heat. Now, we
> see ~1000 rows missing from a single table, and given our application,
> a delete of those rows seems a very remote possibility. Is there some
> database analogy to fsck I can run?
>
> FWIW the hardware raid claims everything is just fine.

What we tend to use when we run into such situations is:
  "VACUUM VERBOSE ANALYZE;"

This walks through all tables and indices in the database, and seeks
to clean them up.  If the disk has been mussed up, this will tend to
terminate with suitably scary looking error messages.

FYI, is there some particular reason why you went with PostgreSQL 7.3?
That's almost five years old, which is like (hmm... (* 5 6)) thirty
years old in "Internet Years."  That's, like, way, way, way obsolete.

We haven't been quick about jumping onto fresh new releases - we only
got to 8.1 this year, when 8.2 is now *last* year's flavour.
Nonetheless, we got rid of our last 7.3 instance several years ago...
-- 
output = ("cbbrowne" "@" "linuxdatabases.info")
http://cbbrowne.com/info/x.html
Rules of the Evil Overlord  #105. "I will design all doomsday machines
myself. If I must hire a mad  scientist to assist me, I will make sure
that he is sufficiently twisted to never regret his evil ways and seek
to undo the damage he's caused." 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Sander Steffann

Hi Lim,


>> It might also be in /etc/security/limits.conf.
>
> Thanks. I see these two lines in that file:
>
> postgres        soft    nofile          8192
> postgres        hard    nofile          8192
>
> How should I change these values? I am not sure how this reflects the
> "ulimit" options.


Those are limits on the allowed number of open files (ulimit -n). I think
8192 should be enough for PostgreSQL. The problems you had were related to
other settings, so if only the "nofile" setting is changed, your strange
ulimits do not come from here :-)


- Sander



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] is there a psql equivilent of fsck?

2007-08-13 Thread Ben

On Mon, 13 Aug 2007, Michael Glaesemann wrote:

> On Aug 13, 2007, at 12:50 , Ben wrote:
>
>> We recently installed and populated a new postgres 7.3 server,
>
> Why would you deploy a new server with 7.3? Current release is 8.2. The 7.3
> branch is no longer even updated.

Because our product uses a decrepit RH7.3 image as a base (no, really) and
we're focusing our efforts on upgrading the whole base, instead of
spending almost as much effort on just moving to PG7.4.

But back to my question?



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Sander Steffann

Hi Lim,


"Lim Berger" <[EMAIL PROTECTED]> writes:

Wow, you are right! The "su - postgres" showed up with wildly
different values! Most notably, the "max user processes" is only 20!!
Whereas in the regular user stuff it was above 14000. Would you know
how to change this in a CentOS Linux machine? Where can I find the
startup settings for postgresql?


Yipes, that's pretty bogus.  The most likely culprit would be a .profile
or .bashrc script belonging to the postgres user --- poke around in its
home directory.


It might also be in /etc/security/limits.conf.

Good luck,
Sander



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
On 8/14/07, Sander Steffann <[EMAIL PROTECTED]> wrote:
> Hi Lim,
>
> > "Lim Berger" <[EMAIL PROTECTED]> writes:
> >> Wow, you are right! The "su - postgres" showed up with wildly
> >> different values! Most notably, the "max user processes" is only 20!!
> >> Whereas in the regular user stuff it was above 14000. Would you know
> >> how to change this in a CentOS Linux machine? Where can I find the
> >> startup settings for postgresql?
> >
> > Yipes, that's pretty bogus.  The most likely culprit would be a .profile
> > or .bashrc script belonging to the postgres user --- poke around in its
> > home directory.
>
> It might also be in /etc/security/limits.conf.


Thanks. I see these two lines in that file:


postgres        soft    nofile          8192
postgres        hard    nofile          8192

How should I change these values? I am not sure how this reflects the
"ulimit" options.

Thanks!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] is there a psql equivilent of fsck?

2007-08-13 Thread Joshua D. Drake

Ben wrote:
> We recently installed and populated a new postgres 7.3 server, which was
> quickly abused with a good 12 hours of 115-degree heat. Now, we see
> ~1000 rows missing from a single table, and given our application, a
> delete of those rows seems a very remote possibility. Is there some
> database analogy to fsck I can run?
> 
> FWIW the hardware raid claims everything is just fine.

If you are missing 1000 rows, you deleted your data, you had
transaction wraparound, or your hardware is toast. If your hardware is
toast, there would be a lot of other issues cropping up.

Why are you installing a "new" postgres 7.3 server? It isn't even
officially supported anymore.

Joshua D. Drake

> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>   choose an index scan if your joining column's datatypes do not
>   match
> 


--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] is there a psql equivilent of fsck?

2007-08-13 Thread Michael Glaesemann


On Aug 13, 2007, at 12:50 , Ben wrote:

> We recently installed and populated a new postgres 7.3 server,

Why would you deploy a new server with 7.3? Current release is 8.2.
The 7.3 branch is no longer even updated.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] is there a psql equivilent of fsck?

2007-08-13 Thread Ben
We recently installed and populated a new postgres 7.3 server, which was 
quickly abused with a good 12 hours of 115-degree heat. Now, we see ~1000 
rows missing from a single table, and given our application, a delete of 
those rows seems a very remote possibility. Is there some database analogy 
to fsck I can run?


FWIW the hardware raid claims everything is just fine.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PITR for postgresql-7.3

2007-08-13 Thread Tom Lane
Mary Ellen Fitzpatrick <[EMAIL PROTECTED]> writes:
> I am trying to run pg_dump on the database with the corrupt table, and 
> try to restore the database.  I also tried to vacuumdb the database and 
> get the same error.
> I get the following error. 

> pg_dump   
> pg_dump: query to obtain list of data types failed: PANIC:  read of clog 
> file 100, offset 81920 failed: Success
> server closed the connection unexpectedly

That looks pretty bad --- it implies that there's corruption in the
pg_type system catalog, not only in your user table.  Which in turn
suggests that it's not a localized problem but might affect many
different data blocks.  I'm afraid that database is probably toast :-(.
An expert might be able to salvage something from it, but only with
great effort, and there'd be no guarantees of getting complete or
consistent data.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Tom Lane
"Lim Berger" <[EMAIL PROTECTED]> writes:
> Wow, you are right! The "su - postgres" showed up with wildly
> different values! Most notably, the "max user processes" is only 20!!
> Whereas in the regular user stuff it was above 14000. Would you know
> how to change this in a CentOS Linux machine? Where can I find the
> startup settings for postgresql?

Yipes, that's pretty bogus.  The most likely culprit would be a .profile
or .bashrc script belonging to the postgres user --- poke around in its
home directory.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PITR for postgresql-7.3

2007-08-13 Thread Mary Ellen Fitzpatrick
I am trying to run pg_dump on the database with the corrupt table, and 
try to restore the database.  I also tried to vacuumdb the database and 
get the same error.
I get the following error. 

pg_dump   
pg_dump: query to obtain list of data types failed: PANIC:  read of clog 
file 100, offset 81920 failed: Success

server closed the connection unexpectedly
   This probably means the server terminated abnormally
   before or while processing the request.

Do I have any options for getting around this error?

Thanks
Mary Ellen

Scott Marlowe wrote:
> On 8/10/07, Mary Ellen Fitzpatrick <[EMAIL PROTECTED]> wrote:
>> Hi,
>>
>> We are running postgresql-7.3.3 and we had a hardware controller and
>> disk failure on the system.  And of course the database does not appear
>> to be backed up anywhere.
>
> Ouch, I feel your pain.
>
>> I was reading about PITR and was wondering if that is applicable to my
>> version.  We do have pg_xlog files and I am wondering if there is any way
>> to recover the data.
>
> Sadly, no.  First off you need a backup of the database from some
> point in the past, then you need to ship its wal logs off to some
> other machine for storage.  Then you need to be running a version of
> pgsql that supports replaying those logs, which 7.3 definitely
> doesn't.
>
>> The error we get when trying to access a very important table is
>>
>> "Invalid page header in block 51 of vuser"
>
> Sounds like a bad page.  Is this access going through an index?  If
> so, then you might be able to access parts of the table with a where
> clause and get most of the database out one select statement at a
> time.
>
>> Any help would be appreciated.
>> (upgrades and backups lessons learned)
>
> Just a quick point on the upgrades: there are also updates, which are
> even more important.  7.3 is up to 7.3.19 now, and I'm sure there are
> some very nasty bugs in a version as old as 7.3.3.
>
> It might be worth backing up the db from the file system level and
> trying to upgrade your install to 7.3.19 and seeing if that fixes the
> problems.  Probably won't, but worth a try.
>
> Also, I don't recall if 7.3 had the zero damaged pages option, but if
> it does, you can use that to get the db more or less up and running
> again.
>
> Good luck, and report back on your progress to see if there's anything
> else we can do to help.


--
Thanks
Mary Ellen


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
> On 8/14/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> > Lim Berger wrote:
> >
> > > Thanks. I did "su postgres" and ran the ulimit command again. All
> > > values are the same, except for "open files" which is double in the
> > > case of this user (instead of 4096, it is 8192). Not sure what I can
> > > gather from that?
> >
> > Try "su - postgres" instead (which will run the user start scripts and
> > may modify the ulimits for that user), but note that the ulimit can also
> > be changed in the script that starts the Postgres process on system
> > boot.
>
>
>
> Wow, you are right! The "su - postgres" showed up with wildly
> different values! Most notably, the "max user processes" is only 20!!
> Whereas in the regular user stuff it was above 14000. Would you know
> how to change this in a CentOS Linux machine? Where can I find the
> startup settings for postgresql? Full values below:
>
>
>
> ~ > su - postgres
> -bash-3.00$ ulimit -a
> core file size          (blocks, -c) 20
> data seg size           (kbytes, -d) 20
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 1024
> max locked memory       (kbytes, -l) 32
> max memory size         (kbytes, -m) 20
> open files                      (-n) 100
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 20
> virtual memory          (kbytes, -v) 20
> file locks                      (-x) unlimited
> -bash-3.00$
>




I tried doing "ulimit -u 9000", for instance, as the postgres user, but it
tells me:

-bash-3.00$ ulimit -u 9000
-bash: ulimit: max user processes: cannot modify limit: Operation not permitted

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
On 8/14/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Lim Berger wrote:
>
> > Thanks. I did "su postgres" and ran the ulimit command again. All
> > values are the same, except for "open files" which is double in the
> > case of this user (instead of 4096, it is 8192). Not sure what I can
> > gather from that?
>
> Try "su - postgres" instead (which will run the user start scripts and
> may modify the ulimits for that user), but note that the ulimit can also
> be changed in the script that starts the Postgres process on system
> boot.



Wow, you are right! The "su - postgres" showed up with wildly
different values! Most notably, the "max user processes" is only 20!!
Whereas in the regular user stuff it was above 14000. Would you know
how to change this in a CentOS Linux machine? Where can I find the
startup settings for postgresql? Full values below:



~ > su - postgres
-bash-3.00$ ulimit -a
core file size          (blocks, -c) 20
data seg size           (kbytes, -d) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) 20
open files                      (-n) 100
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 20
virtual memory          (kbytes, -v) 20
file locks                      (-x) unlimited
-bash-3.00$

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Reordering columns, will this ever be simple?

2007-08-13 Thread novnov

I would like to make a request for this feature to be added to postgres. 

Postgres is a really great database. I'm still very much a novice at using
postgres but in general, it's been a very good experience and I plan to use
it as often as I can. The community is very helpful.

My projects tend to be more seat of the pants than your average enterprise
database. Adding the ability to reorder columns in a simple manner would be
a very nice addition. I think such a feature would add to the 'curb appeal'
of postgres...make it more user friendly, more flexible.

I don't know anything about the internals of databases, how they actually
work, but from my 'ignorance is bliss' perspective, it seems odd that one
can alter field names at will in postgres but reordering columns is such a
big deal. Maybe internally columns are id'd via index position, thus order
matters, and the name is a relatively light-duty property? Maybe too hard to
explain it in simple terms.


Decibel! wrote:
> 
> On Tue, Aug 07, 2007 at 02:28:20PM -0500, Tony Caduto wrote:
>> Gregory Stark wrote:
>> >"novnov" <[EMAIL PROTECTED]> writes:
>> >
>> >  
>> >>Is there any plan to add such a capability to postgres? 
>> >>
>> >
>> >It's been talked about. I wouldn't be surprised to see it in 8.4 but 
>> >nobody's
>> >said they'll be doing it yet and there are a lot of other more exciting 
>> >ideas
>> >too.
>> >
>> >  
>> From an admin tool developer's perspective, the ability to reorder columns
>> without manually copying to a new table and all that is pretty exciting
>> :-)
> 
> Patches welcome. :)
> 
> BTW, this is much more likely to happen if we divorce presentation order
> from actual storage order, something that there is some desire to do
> because it would allow the engine to automagically store things in the
> optimal ordering from an alignment standpoint.
> -- 
> Decibel!, aka Jim Nasby[EMAIL PROTECTED]
> EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
> 
>  
> 

-- 
View this message in context: 
http://www.nabble.com/Reordering-columns%2C-will-this-ever-be-simple--tf4231761.html#a12129772
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Persistent connections in PHP

2007-08-13 Thread Pavel Stehule
> 6:  The reason for connection pooling is primarily twofold.  One is
> to allow very fast connections to your database when doing lots of
> small things where connection time will cost too much.  The other is
> to prevent your database from having lots of stale / idle connections
> that cause it to waste memory and to be slower since each backend
> needs to communicate with every other backend some amount of data some
> times.  pconnect takes care of the first problem, but exacerbates the
> second.
>

another advantage ... longer life of cached execution plans in PL/pgSQL procedures.
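
(A small illustration: the SELECT inside a PL/pgSQL function like the one
below is planned on its first call in each connection and the plan is then
cached for the life of that connection, so a long-lived connection keeps it
warm. The "items" table here is hypothetical:)

CREATE OR REPLACE FUNCTION get_price(item_id integer) RETURNS numeric AS $$
BEGIN
    -- planned on first execution in a session, then reused
    RETURN (SELECT price FROM items WHERE id = item_id);
END;
$$ LANGUAGE plpgsql;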

Regards
Pavel Stehule

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Alvaro Herrera
Lim Berger wrote:

> Thanks. I did "su postgres" and ran the ulimit command again. All
> values are the same, except for "open files" which is double in the
> case of this user (instead of 4096, it is 8192). Not sure what I can
> gather from that?

Try "su - postgres" instead (which will run the user start scripts and
may modify the ulimits for that user), but note that the ulimit can also
be changed in the script that starts the Postgres process on system
boot.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Running a stored procedure via pgagent, need an example

2007-08-13 Thread novnov

Can someone give me a simple example of the way in which I might be able to
call a stored procedure or query using pgagent? I have never scripted a
postgres .sql etc to run via cron or pgagent and an example of the syntax
would be helpful. In pgagent terms, the Step is set up as SQL, and it's the
definition that I would like an example of. I am guessing that one would
save the routine off to a .sql file, then call via \i? The pgagent docs are
sparse, no examples given, and there are not many posts on this subject that
I can find.
-- 
View this message in context: 
http://www.nabble.com/Running-a-stored-procedure-via-pgagent%2C-need-an-example-tf4262256.html#a12129548
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] can i use an array as a table (in the from clause)

2007-08-13 Thread Richard Broersma Jr
--- Michael Glaesemann <[EMAIL PROTECTED]> wrote:
> > Is there some way that I can treat a two dimensional array as a table
> > that can be referenced in the from clause?
> 
> I know of no way off hand, and if so, not easily. This is a pretty  
> clear sign that you shouldn't be using arrays in this context and  
> should rethink your schema.

The closest thing to an array (though it is actually a derived table) is
achieved using the VALUES predicate.

  SELECT col1, avg( col2 )
    FROM ( VALUES ( 1, 1 ), ( 1, 2 ),
                  ( 2, 3 ), ( 2, 4 )) AS ValuesTable( col1, col2 )
   WHERE col2 < 5
   GROUP BY col1
   ORDER BY col1;

 col1 |  avg
------+--------
    1 | 1.5000
    2 | 3.5000
(2 rows)

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Persistent connections in PHP

2007-08-13 Thread Scott Marlowe
On 8/13/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On 8/13/07, Josh Trutwin <[EMAIL PROTECTED]> wrote:
> > On Mon, 13 Aug 2007 09:44:26 -0500
> > Erik Jones <[EMAIL PROTECTED]> wrote:
> >
> > > I'll agree with Scott on this one.  (Not that I can recall
> > > specifically ever disagreeing with him before...).  Unless you
> > > know all of the potential caveats associated with php's persisent
> > > postgres connections and have a use case that fits them, don't use
> > > them.  If you need something to pool connections, look at pgpool.
> >
> > Could you elaborate a little on the problems with using php's persistent
> > connections?
> >
> > Personally I use ADODB php abstraction library (adodb.sf.net) for my
> > database stuff and I think there's a way to enable persistent
> > connections though I just use the default connection.
> >
> > I've heard before that php's persistent connections are to be
> > avoided, was just curious as to why though?
>
> OK, there are a few things that gather together to make php's
> persistent connections a problem.
>
> 1:  Each apache / php process maintains its own connections, not
> sharing with others.  So it's NOT connection pooling, but people tend
> to think it is.
> 2:  Each unique connection creates another persistent connection for
> an apache/php child process.  If you routinely connect to multiple
> servers / databases or as > 1 user, then each one of those
> combinations that is unique makes another persistent connection.
> 3:  There's no facility in PHP to clean an old connection out and make
> sure it's in some kind of consistent state when you get it.  It's in
> exactly the same state it was when the previous php script finished
> with it.  Half completed transactions, partial sql statements,
> sequence functions like currval() may have values that don't apply to
> you.
> 4:  pg_close can't close a persistent connection.  Once it's open, it
> stays open until the child process is harvested.
> 5:  Apache, by default, is configured for 150 child processes.
> Postgresql, and many other databases for that matter, are configured
> for 100 or less.  Even if apache only opens one connection to one
> database with one user account, it will eventually try to open the
> 101st connection to postgresql and fail.  So, the default
> configuration of apache / postgresql for number of connections is
> unsafe for pconnect.
> 6:  The reason for connection pooling is primarily twofold.  One is
> to allow very fast connections to your database when doing lots of
> small things where connection time will cost too much.  The other is
> to prevent your database from having lots of stale / idle connections
> that cause it to waste memory and to be slower since each backend
> needs to communicate with every other backend some amount of data some
> times.  pconnect takes care of the first problem, but exacerbates the
> second.
>
> P.S. Don't think I'm dogging PHP, 'cause I'm not.  I use it all the
> time, and it's really great for simple small scripts that need to be
> done NOW and need to be lightweight.  I even use pconnect a bit.  But
> my machine is set for 50 or fewer apache children and 150 postgresql
> connects, and I only use pconnect on small, lightweight things that
> need to zoom.  Everything else gets regular old connect.

Oh, one other thing that contributes to the problem is that the
php.ini file has an entry for max persistent connections.  Sadly, most
people think this is max persistent connections for apache / php as a
whole.  It's not.  It's for each apache / php child process.  This
number should generally be set to 1, 2 at the absolute most when using
persistent connections.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Persistent connections in PHP

2007-08-13 Thread Scott Marlowe
On 8/13/07, Josh Trutwin <[EMAIL PROTECTED]> wrote:
> On Mon, 13 Aug 2007 09:44:26 -0500
> Erik Jones <[EMAIL PROTECTED]> wrote:
>
> > I'll agree with Scott on this one.  (Not that I can recall
> > specifically ever disagreeing with him before...).  Unless you
> > know all of the potential caveats associated with php's persisent
> > postgres connections and have a use case that fits them, don't use
> > them.  If you need something to pool connections, look at pgpool.
>
> Could you elaborate a little on the problems with using php's persistent
> connections?
>
> Personally I use ADODB php abstraction library (adodb.sf.net) for my
> database stuff and I think there's a way to enable persistent
> connections though I just use the default connection.
>
> I've heard before that php's persistent connections are to be
> avoided, was just curious as to why though?

OK, there are a few things that gather together to make php's
persistent connections a problem.

1:  Each apache / php process maintains its own connections, not
sharing with others.  So it's NOT connection pooling, but people tend
to think it is.
2:  Each unique connection creates another persistent connection for
an apache/php child process.  If you routinely connect to multiple
servers / databases or as > 1 user, then each one of those
combinations that is unique makes another persistent connection.
3:  There's no facility in PHP to clean an old connection out and make
sure it's in some kind of consistent state when you get it.  It's in
exactly the same state it was when the previous php script finished
with it.  Half-completed transactions, partial sql statements, and
sequence functions like currval() may have values that don't apply to
you (see the sketch after this list).
4:  pg_close can't close a persistent connection.  Once it's open, it
stays open until the child process is harvested.
5:  Apache, by default, is configured for 150 child processes.
Postgresql, and many other databases for that matter, are configured
for 100 or less.  Even if apache only opens one connection to one
database with one user account, it will eventually try to open the
101st connection to postgresql and fail.  So, the default
configuration of apache / postgresql for number of connections is
unsafe for pconnect.
6:  The reason for connection pooling is primarily twofold.  One is
to allow very fast connections to your database when doing lots of
small things where connection time will cost too much.  The other is
to prevent your database from having lots of stale / idle connections
that cause it to waste memory and to be slower, since each backend
sometimes needs to communicate some amount of data with every other
backend.  pconnect takes care of the first problem, but exacerbates
the second.
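
A minimal sketch of point 3's stale-state problem (the table and
sequence names here are hypothetical):

-- Script A runs this on a pconnect connection and exits without
-- COMMIT or ROLLBACK:
BEGIN;
INSERT INTO accounts (id) VALUES (nextval('accounts_id_seq'));

-- Script B later receives the SAME backend.  It is now inside script
-- A's open transaction, and A's session state is still visible:
SELECT currval('accounts_id_seq');  -- A's value, not anything B did
COMMIT;                             -- quietly commits A's half-finished work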

P.S. Don't think I'm dogging PHP, because I'm not.  I use it all the
time, and it's really great for simple small scripts that need to be
done NOW and need to be lightweight.  I even use pconnect a bit.  But
my machine is set for 50 or fewer apache children and 150 postgresql
connects, and I only use pconnect on small, lightweight things that
need to zoom.  Everything else gets regular old connect.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] [PROPOSAL] DML value format

2007-08-13 Thread Michael Glaesemann


On Aug 13, 2007, at 11:21 , Osvaldo Rosario Kussama wrote:


> Dollar-Quoted String Constants?
> http://www.postgresql.org/docs/8.2/interactive/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS
>
> INSERT INTO persons VALUES ($$Harry$$, $$O'Callaghan$$);


Do not interpolate values into SQL literals, regardless of which
literal syntax you're using. Use bind parameters.
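
At the SQL level the same idea can be sketched with a server-side
prepared statement; client APIs expose the equivalent through their
parameter-binding calls (the persons table is the one from the example
above):

PREPARE ins_person(text, text) AS
    INSERT INTO persons VALUES ($1, $2);

-- The apostrophe travels as data, not as part of the statement text
-- (in psql the value is still typed as a literal; a client API would
-- pass it out-of-band):
EXECUTE ins_person('Harry', 'O''Callaghan');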



> Perhaps use the quote_literal() function?
> http://www.postgresql.org/docs/8.2/interactive/functions-string.html


If you're writing a database adapter, maybe. Otherwise, use bind
parameters. They're easier and you shouldn't have to worry about
always remembering to use quote_literal.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] [PROPOSAL] DML value format

2007-08-13 Thread Osvaldo Rosario Kussama

Alejandro Torras wrote:

Hi,

Is there some way to put values in an INSERT statement
without taking care of apostrophes?

For example:
INSERT INTO persons VALUES ('Harry', 'O'Callaghan');
                                       ^^^

I think some kind of length marker could be used
to help the parsing of the value.

For example:
INSERT INTO persons VALUES ('Harry', @11:O'Callaghan);

I think this approach could help reduce SQL
injection.

Regards,
A. Torras.





Dollar-Quoted String Constants?
http://www.postgresql.org/docs/8.2/interactive/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS

INSERT INTO persons VALUES ($$Harry$$, $$O'Callaghan$$);

Perhaps use the quote_literal() function?
http://www.postgresql.org/docs/8.2/interactive/functions-string.html
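
If you do end up building SQL strings -- say, inside a PL/pgSQL
function -- a minimal sketch might look like this (the function and
table names are hypothetical):

CREATE OR REPLACE FUNCTION add_person(first_name text, last_name text)
RETURNS void AS $$
BEGIN
    -- quote_literal() safely doubles any embedded apostrophes:
    EXECUTE 'INSERT INTO persons VALUES ('
            || quote_literal(first_name) || ', '
            || quote_literal(last_name)  || ')';
END;
$$ LANGUAGE plpgsql;

SELECT add_person('Harry', 'O''Callaghan');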

Osvaldo

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] can i use an array as a table (in the from clause)

2007-08-13 Thread Michael Glaesemann


On Aug 13, 2007, at 9:25 , Coarr, Matt wrote:


Is there some way that I can treat a two dimensional array as a table
that can be referenced in the from clause?


I know of no way offhand, and if so, not easily. This is a pretty
clear sign that you shouldn't be using arrays in this context and
should rethink your schema.
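
That said, one workaround that sometimes gets used, with all of the
caveats above, is to generate the array subscripts with
generate_series() in a subquery; a sketch, assuming a fixed, known
second dimension and hypothetical names:

CREATE TABLE matrices (id int, vals int[][]);
INSERT INTO matrices VALUES (1, ARRAY[[1, 2], [3, 4]]);

-- Expand the first dimension into rows; the columns of the second
-- dimension still have to be listed explicitly:
SELECT id, i AS row_no, vals[i][1] AS c1, vals[i][2] AS c2
FROM (SELECT id, vals,
             generate_series(array_lower(vals, 1),
                             array_upper(vals, 1)) AS i
      FROM matrices) AS s;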


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] can i use an array as a table (in the from clause)

2007-08-13 Thread Coarr, Matt
Is there some way that I can treat a two dimensional array as a table
that can be referenced in the from clause?
 
Thanks,
Matt

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] non superuser creating flat files

2007-08-13 Thread Scott Marlowe
On 8/13/07, Erik Jones <[EMAIL PROTECTED]> wrote:
>
> On Aug 13, 2007, at 9:50 AM, Tom Lane wrote:
>
> > "Terri Reid" <[EMAIL PROTECTED]> writes:
> >> I have data that is being updated in a table that I need to export
> >> to a flat
> >> file via a database trigger on insert or update. The user
> >> performing the
> >> update will not be a superuser. I've tried to use COPY TO, but
> >> that doesn't
> >> work for non-superusers.
> >
> > It will if invoked in a trigger function that's marked SECURITY
> > DEFINER
> > and owned by a superuser.
>
> Also/Or, make sure that the correct filesystem write permissions
> are set on the file/directory to which you're writing for whichever
> user the trigger function executes as.

The trigger will always write as the user the postgres server is
running under (typically the postgres user).  You will need to make
sure that that account can write to the file and that whoever else
needs to read it can read it.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Unable to connect to PostgreSQL server via PHP

2007-08-13 Thread John Coulthard





From: Tom Lane <[EMAIL PROTECTED]>
To: "John Coulthard" <[EMAIL PROTECTED]>
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Unable to connect to PostgreSQL server via PHP
Date: Mon, 13 Aug 2007 10:09:15 -0400


"John Coulthard" <[EMAIL PROTECTED]> writes:
> That's not my problem, though; this is "could not connect to server:
> Permission denied".  If it's denying permission I must have the
> permissions set wrong, but where do I start looking for them?

"Permission denied" is a pretty strange error for a TCP connect failure,
as that is not a filesystem operation.

Are you able to connect with psql, or some other non-php client?
Use "psql -h localhost" to make sure it tries a TCP connection not
a Unix-socket connection.



Thanks. You mean like this?  This connects without an error.

[EMAIL PROTECTED] john]# su webuser
[EMAIL PROTECTED] john]$ psql -h localhost lumbribase
Welcome to psql 8.0.8, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
  \h for help with SQL commands
  \? for help with psql commands
  \g or terminate with semicolon to execute query
  \q to quit

lumbribase=>




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
On 8/13/07, Bill Moran <[EMAIL PROTECTED]> wrote:
> In response to "Lim Berger" <[EMAIL PROTECTED]>:
>
> > On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > > "Lim Berger" <[EMAIL PROTECTED]> writes:
> > > > ERROR:  out of memory
> > > > DETAIL:  Failed on request of size 67108860.
> > >
> > > Apparently, this number:
> > >
> > > > maintenance_work_mem = 64MB
> > >
> > > is more than your system can actually support.  Which is a bit odd for
> > > any modern-day machine.  I suspect the postmaster is being started with
> > > an unduly small ulimit.
> > >
> > > regards, tom lane
> >
> > Thanks Tom. Where can I check the "ulimit"? Is it in the config?
> >
> > I did a "ulimit -a"  (found the command through Google, on an archive
> > posting by in fact you! -- http://snipr.com/pg_ulimit ) and got the
> > following output:
> >
> >
> > ~ > ulimit -a
> > core file size          (blocks, -c) 100
> > data seg size           (kbytes, -d) unlimited
> > file size               (blocks, -f) unlimited
> > pending signals                 (-i) 1024
> > max locked memory       (kbytes, -l) 32
> > max memory size         (kbytes, -m) unlimited
> > open files                      (-n) 4096
> > pipe size            (512 bytes, -p) 8
> > POSIX message queues     (bytes, -q) 819200
> > stack size              (kbytes, -s) 8192
> > cpu time               (seconds, -t) unlimited
> > max user processes              (-u) 14335
> > virtual memory          (kbytes, -v) unlimited
> > file locks                      (-x) unlimited
> >
> >
> > Any idea how to configure this? That is how that thread on an archived
> > discussion ends too -- the poster did not seem to get any response to
> > his question about how to tweak this.
>
> Make sure you run the command as the same user that PG runs as (usually
> "postgres", but sometimes "pgsql").  ulimits can differ from one user to
> another.
>


Thanks. I did "su postgres" and ran the ulimit command again. All
values are the same, except for "open files" which is double in the
case of this user (instead of 4096, it is 8192). Not sure what I can
gather from that?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Persistent connections in PHP

2007-08-13 Thread Dimitri Fontaine
On Monday 13 August 2007, Erik Jones wrote:
> If you need something to pool connections, look at pgpool.

Or better yet, pgbouncer. At least for my values of better :)
  https://developer.skype.com/SkypeGarage/DbProjects/PgBouncer
  http://pgfoundry.org/projects/pgbouncer/

Hope this helps,
-- 
dim




Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Bill Moran
In response to "Lim Berger" <[EMAIL PROTECTED]>:

> On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > "Lim Berger" <[EMAIL PROTECTED]> writes:
> > > ERROR:  out of memory
> > > DETAIL:  Failed on request of size 67108860.
> >
> > Apparently, this number:
> >
> > > maintenance_work_mem = 64MB
> >
> > is more than your system can actually support.  Which is a bit odd for
> > any modern-day machine.  I suspect the postmaster is being started with
> > an unduly small ulimit.
> >
> > regards, tom lane
> 
> Thanks Tom. Where can I check the "ulimit"? Is it in the config?
> 
> I did a "ulimit -a"  (found the command through Google, on an archive
> posting by in fact you! -- http://snipr.com/pg_ulimit ) and got the
> following output:
> 
> 
> ~ > ulimit -a
> core file size          (blocks, -c) 100
> data seg size           (kbytes, -d) unlimited
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 1024
> max locked memory       (kbytes, -l) 32
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 4096
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 14335
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 
> 
> Any idea how to configure this? That is how that thread on an archived
> discussion ends too -- the poster did not seem to get any response to
> his question about how to tweak this.

Make sure you run the command as the same user that PG runs as (usually
"postgres", but sometimes "pgsql").  ulimits can differ from one user to
another.

How to change the limits differs from one OS to another, and (maybe) even
from distro to distro.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
> > ERROR:  out of memory
> > DETAIL:  Failed on request of size 67108860.
>
> Apparently, this number:
>
> > maintenance_work_mem = 64MB
>
> is more than your system can actually support.  Which is a bit odd for
> any modern-day machine.  I suspect the postmaster is being started with
> an unduly small ulimit.
>
> regards, tom lane



Thanks Tom. Where can I check the "ulimit"? Is it in the config?

I did a "ulimit -a"  (found the command through Google, on an archive
posting by in fact you! -- http://snipr.com/pg_ulimit ) and got the
following output:


~ > ulimit -a
core file size          (blocks, -c) 100
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14335
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


Any idea how to configure this? That is how that thread on an archived
discussion ends too -- the poster did not seem to get any response to
his question about how to tweak this.

Many thanks!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
My responses below yours. Thanks so much for bearing with me..



On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
>
> > On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> >> "Lim Berger" <[EMAIL PROTECTED]> writes:
> >>
> >> > Hi
> >> >
> >> > I am getting the following error while running queries such as "vacuum
> >> > analyze TABLE", even on small tables with a piddly 35,000 rows!
> >> >
> >> > The error message:
> >> > --
> >> > ERROR:  out of memory
> >> > DETAIL:  Failed on request of size 67108860.
> >> > --
> >> >
> >> > My postgresql.conf is below. I am on a Dual Core server with 4GB of
> >> > RAM, which runs MySQL as well (key_buffer for which is at around
> >> > 800M).
> >>
> >> What version of Postgres is this?
> >
> > =# select version();
> >                                version
> > ----------------------------------------------------------------------
> >  PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC)
> > 3.4.6 20060404 (Red Hat 3.4.6-3)
> > (1 row)
>
> Hm, this is quite odd.
>



> Could you give more information? You're getting this
> on lots of different tables?


Yes, all of them. The example I gave is the smallest table with about
35,000 rows.

Btw, before anything else, the tweaking comments here --
http://www.powerpostgresql.com/PerfList/ -- suggest that for
autovacuum I need to recompile and have settings like "-D -v 400 -V
0.4 -a 100 -A 0.3".

Where do I do this? I don't have any such setting in my
postgresql.conf! How should I find out if autovacuum is set up and
functional, and at what points it enters and vacuums up?


> Could you give more examples? And do you get it on anything
> other than vacuum analyze?


More examples of "vacuum analyze" on other tables? Every table gives
the same error. ANALYZE alone works ok. SELECTing works ok. Multiple
UPDATEing in a transaction block works ok. So does INSERT. What other
examples could I furnish? Please help me help you help me :)


> What does the schema look like?
>


You mean the entire table definitions? They are below. It's a simple
schema, because I am sharing the workload between MySQL and PGSQL for
now, slowly switching to PGSQL.


              List of relations
 Schema |        Name         |   Type   |  Owner
--------+---------------------+----------+----------
 public | program             | table    | MYUSERID
 public | program_id_seq      | sequence | MYUSERID
 public | program_subscribers | table    | MYUSERID
 public | mini                | table    | MYUSERID
 public | users               | table    | MYUSERID
(5 rows)


PROGRAM table has 35,000 rows
PROGRAM_SUBSCRIBERS has 10,000
MINI has about 3 million
USERS has about 200.

On this small DB, I am not sure why there is a memory problem. Just one
thing -- the "MINI" table has **huge** concurrent usage, about 10,000
accesses per minute, and it has only three columns, so it is a bit of a
caching table.

It does have an index though, which is used in our queries very simply
and efficiently, because this concurrent use is all with an "=" query
on the indexed column. Very small, fast queries.

In MySQL, this used to be very fast due to their "query cache", but
there is no equivalent inside PGSQL (beyond the usual filesystem
cache), so I am not sure if PGSQL is holding up to the concurrent
usage.

To test this ignorant hypothesis of whether PGSQL was buckling under
huge concurrent pressure, I restarted the postgresql process, and the
memory problem is still there, so I doubt it is caused by the heavy
concurrent access.

PGSQL seems to return the results very fast as well.



> Do you have any hash indexes? (there was a bug fixed
> in 8.2.4 with them)


Nope. Very simple tables actually, all with one BTREE index each. The
complex stuff is still in MySQL and totally separate from this. There
are five tables in MYSQL with compound indexes on 2 to 5 columns. But
that is beside the point for now.


> Do you have anything else unusual like tsearch2 or
> custom C modules loaded?


Not that I know of, unless they are included by default. How can I
check? If they are not smooshed in, then no, I don't have them
installed.


> Has anything unusual happened to this machine such as a
> server crash or power failure?


Hmm, not really. But I did reboot it last week when it buckled under a
MYSQL REPAIR TABLE issue. I wonder how that could be related to PGSQL
though? Same machine and all? It wasn't a very dramatic crash or
anything.



> Is anything else failing? Can you run a good memory tester like
> memtest86? Could you check your dmesg log to see if there are any system
> problems?



Wow, this is all a bit technical for me. I went to the memtest86 site,
and downloaded their binary. Now I will try to do the untarring and
make/makeinstall stuff, but their site is very sparse on info.

But here is my memory check info from the server:

---
MemTotal:  414

Re: [GENERAL] non superuser creating flat files

2007-08-13 Thread Erik Jones


On Aug 13, 2007, at 9:50 AM, Tom Lane wrote:

> "Terri Reid" <[EMAIL PROTECTED]> writes:
> > I have data that is being updated in a table that I need to export
> > to a flat file via a database trigger on insert or update. The user
> > performing the update will not be a superuser. I've tried to use
> > COPY TO, but that doesn't work for non-superusers.
>
> It will if invoked in a trigger function that's marked SECURITY DEFINER
> and owned by a superuser.

Also/Or, make sure that the correct filesystem write permissions
are set on the file/directory to which you're writing for whichever
user the trigger function executes as.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Persistent connections in PHP

2007-08-13 Thread Josh Trutwin
On Mon, 13 Aug 2007 09:44:26 -0500
Erik Jones <[EMAIL PROTECTED]> wrote:

> I'll agree with Scott on this one.  (Not that I can recall  
> specifically ever disagreeing with him before...).  Unless you
> know all of the potential caveats associated with php's persistent
> postgres connections and have a use case that fits them, don't use
> them.  If you need something to pool connections, look at pgpool.

Could you elaborate a little on the problems with using php's persistent
connections?  

Personally I use the ADODB php abstraction library (adodb.sf.net) for my
database stuff, and I think there's a way to enable persistent
connections, though I just use the default connection.

I've heard before that php's persistent connections are to be
avoided; I was just curious as to why.

Thanks!

Josh

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] non superuser creating flat files

2007-08-13 Thread Tom Lane
"Terri Reid" <[EMAIL PROTECTED]> writes:
> I have data that is being updated in a table that I need to export to a flat
> file via a database trigger on insert or update. The user performing the
> update will not be a superuser. I've tried to use COPY TO, but that doesn't
> work for non-superusers.

It will if invoked in a trigger function that's marked SECURITY DEFINER
and owned by a superuser.
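
A minimal sketch of that arrangement (the function, trigger, table,
and output path are all hypothetical; COPY writes with the server's
own filesystem permissions):

CREATE FUNCTION export_flat_file() RETURNS trigger AS $$
BEGIN
    -- Runs with the privileges of the (superuser) function owner:
    COPY mytable TO '/tmp/mytable_export.txt';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER mytable_export
    AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE export_flat_file();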

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Persistent connections in PHP

2007-08-13 Thread Erik Jones


On Aug 13, 2007, at 9:35 AM, Scott Marlowe wrote:


> On 8/13/07, Naz Gassiep <[EMAIL PROTECTED]> wrote:
> > Hi,
> > Does the connection pooling feature of PHP cause the persistent
> > connections to keep the properties between accesses? E.g., if a user
> > takes a connection, sets a timezone to it using SET TIMEZONE, will the
> > next user who happens to take this connection get it in that same state,
> > or will it be reset to a blank or starting state as though it had been
> > opened? Also, what about temp tables? Will they be present to the second
> > user if the first user set some up?
>
> Except for a few special uses, persistent connections in php are more
> of a misfeature.  They are NOT pooling, in the standard sense.  Most
> of the time they cause more problems than they solve.

I'll agree with Scott on this one.  (Not that I can recall
specifically ever disagreeing with him before...).  Unless you know
all of the potential caveats associated with php's persistent postgres
connections and have a use case that fits them, don't use them.  If
you need something to pool connections, look at pgpool.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] tsearch2: plainto_tsquery() with OR?

2007-08-13 Thread Ron Mayer
Tom Lane wrote:
> Oleg Bartunov <[EMAIL PROTECTED]> writes:
>> On Wed, 8 Aug 2007, cluster wrote:
>>> Does anyone know where I can request an OR-version of plainto_tsquery()?
> 
>> plainto_tsquery expects plain text, use to_tsquery for boolean operators.
> 
> Are either of these definitions really right?  If I type "foo bar baz"
> into Google, for instance, it seems to produce some sort of weighted
> result, neither a strict AND nor a strict OR.  Google didn't get where
> they are by misjudging what the simplest search behavior should be like.


For what it's worth, Google states [1]

   Automatic "and" queries

   By default, Google only returns pages that include
   all of your search terms. There is no need to
   include "and" between terms. Keep in mind that
   the order in which the terms are typed will affect
   the search results. To restrict a search further,
   just include more terms. For example, to plan a
   vacation to Hawaii, simply type vacation hawaii.

and also describes "OPERATOR EXAMPLE...vacation hawaii"
as "FINDS PAGES CONTAINING...the words vacation and Hawaii".

If I'm not mistaken, it sounds the same as what tsearch describes.
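
For reference, the boolean form looks like this under the 8.2-era
tsearch2 contrib module (a sketch, assuming tsearch2 is installed and
a default configuration is selected):

-- AND-ish, from plain text:
SELECT to_tsvector('vacation hawaii') @@ plainto_tsquery('vacation hawaii');

-- Explicit OR, using the boolean query syntax:
SELECT to_tsvector('vacation hawaii') @@ to_tsquery('vacation | hawaii');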


[1] http://www.google.com/intl/en/help/basics.html#and
[2] http://www.google.com/intl/en/help/cheatsheet.html


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Selecting rows where timestamp between two timestamps

2007-08-13 Thread Jeff Lanzarotta
Well that was easy enough... Thanks!

Michael Fuhr <[EMAIL PROTECTED]> wrote:
On Mon, Aug 13, 2007 at 07:16:30AM -0700, Jeff Lanzarotta wrote:
> select * from foobar where ts between now() and now() - interval '5 days'
> 
> btw, the column ts is defined as:
> 
> ts timestamp with time zone NOT NULL DEFAULT now()
> 
> No rows are returned, but I know there are at least 100 rows that should be 
> returned...

Put the lower value first or use BETWEEN SYMMETRIC:

select * from foobar where ts between now() - interval '5 days' and now()
select * from foobar where ts between symmetric now() and now() - interval '5 days'

-- 
Michael Fuhr



Re: [GENERAL] Selecting rows where timestamp between two timestamps

2007-08-13 Thread Michael Fuhr
On Mon, Aug 13, 2007 at 07:16:30AM -0700, Jeff Lanzarotta wrote:
> select * from foobar where ts between now() and now() - interval '5 days'
> 
> btw, the column ts is defined as:
> 
> ts timestamp with time zone NOT NULL DEFAULT now()
> 
> No rows are returned, but I know there are at least 100 rows that should be 
> returned...

Put the lower value first or use BETWEEN SYMMETRIC:

select * from foobar where ts between now() - interval '5 days' and now()
select * from foobar where ts between symmetric now() and now() - interval '5 days'

-- 
Michael Fuhr

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Persistent connections in PHP

2007-08-13 Thread Scott Marlowe
On 8/13/07, Naz Gassiep <[EMAIL PROTECTED]> wrote:
> Hi,
> Does the connection pooling feature of PHP cause the persistent
> connections to keep the properties between accesses? E.g., if a user
> takes a connection, sets a timezone to it using SET TIMEZONE, will the
> next user who happens to take this connection get it in that same state,
> or will it be reset to a blank or starting state as though it had been
> opened? Also, what about temp tables? Will they be present to the second
> user if the first user set some up?

Except for a few special uses, persistent connections in php are more
of a misfeature.  They are NOT pooling, in the standard sense.  Most
of the time they cause more problems than they solve.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] copy command - date

2007-08-13 Thread Scott Marlowe
On 8/12/07, novice <[EMAIL PROTECTED]> wrote:
> I resolved it by doing this - is there another more efficient method?
> And yes, the text file I am working with doesn't have any TABs
>
> 5162   OK   SM 06/12/04 06:12
>
> substr("data", 30, 2)||'-'||substr("data", 27,
> 2)||'-20'||substr("data", 24, 2)||substr("data", 32, 6) as
> inspection_date

I didn't have to do anything special, just copied it in:

create table g (ts timestamp);
set datestyle=ISO, MDY;
copy g (ts) from stdin;
06/12/04 12:00:00
\.
select * from g;
         ts
---------------------
 2004-06-12 12:00:00
delete from g;
set datestyle=ISO, DMY;
copy g (ts) from stdin;
06/12/04 12:00:00
\.
select * from g;
         ts
---------------------
 2004-12-06 12:00:00

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] non superuser creating flat files

2007-08-13 Thread Terri Reid
I have data that is being updated in a table that I need to export to a flat
file via a database trigger on insert or update. The user performing the
update will not be a superuser. I've tried to use COPY TO, but that doesn't
work for non-superusers. Is there some other functionality that can write
out flat files? I'm looking for something similar to Oracle's UTL_FILE
functionality, as ideally I'd want to append the data to an existing file
and be able to manipulate the data before I wrote it out.

 

Thanks

Terri

 

Terri Reid

BCT Software Solutions 

Mail: [EMAIL PROTECTED]

Tel: 01925 732359

 

 

 

 



[GENERAL] Selecting rows where timestamp between two timestamps

2007-08-13 Thread Jeff Lanzarotta
Hello,

I am trying to get this query to work with no luck...


select * from foobar where ts between now() and now() - interval '5 days'


btw, the column ts is defined as:

ts timestamp with time zone NOT NULL DEFAULT now()

No rows are returned, but I know there are at least 100 rows that should be 
returned...

Any ideas?

Thanks.


-Jeff

Re: [GENERAL] TimestampTZ

2007-08-13 Thread Michael Glaesemann


On Aug 13, 2007, at 0:35 , Naz Gassiep wrote:




> > As clearly stated in the documentation
> >
> > http://www.postgresql.org/docs/8.2/interactive/datatype-datetime.html#DATATYPE-TIMEZONES
>
> Perhaps I'm thick, but I don't find that particular page to be
> clear on this at all.

Had you read the documentation before you posted? Did you read the
part that was quoted to you by both myself and Paul Lambert? If so,
why didn't you mention this in your first post? While the page has a
lot of information on it, the part that deals with time zones on
input is pretty clear and plainly stated. What do you find confusing
about it? If you can explain how part of the documentation is not
clear, then perhaps the documentation can be improved so others won't
have the same problem you did.

Had you experimented as Hubert Depesz Lubaczewski recommended? If so,
what did you find confusing? If not, why not?

I pose these questions because I know you are not new to PostgreSQL,
nor to posting on these mailing lists, and should be familiar with
these techniques of exploring PostgreSQL and asking questions. Or do
you consider it easier to have others do this work for you?

And most importantly, have you found the answer to your question?


And most importantly, have you found the answer to your question?

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Problem Designing Index

2007-08-13 Thread Gregory Stark
"Alan J Batsford" <[EMAIL PROTECTED]> writes:

> Thanks for the help, after your email I went to capture some analyze output
> for you and when I did I figured to bump up the statistics on the two
> columns of interest from 100 to 1000. Now all statements return close to
> instantly.

Note that 1000 can take quite a lot of space in the statistics table. Make
sure it's vacuumed regularly and check that this isn't slowing down planning
of simple queries excessively.

Look at the explain analyze and check that the estimates are reasonably
accurate. They may have just flipped from being wildly inaccurate on the wrong
side of the decision point to wildly inaccurate but on the right side of the
decision point.

> Is this analyze tool something I need to run periodically to keep
> performance up? If so, how often should I run it?

Yes. autovacuum likes to do it whenever 10% of the table has been updated, but
your mileage will vary considerably depending on how much your updates or
other DML affect the distribution the queries depend on.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Unable to connect to PostgreSQL server via PHP

2007-08-13 Thread Tom Lane
"John Coulthard" <[EMAIL PROTECTED]> writes:
> That's not my problem, though; this is "could not connect to server:
> Permission denied".  If it's denying permission I must have the permissions
> set wrong, but where do I start looking for them?

"Permission denied" is a pretty strange error for a TCP connect failure,
as that is not a filesystem operation.

Are you able to connect with psql, or some other non-php client?
Use "psql -h localhost" to make sure it tries a TCP connection not
a Unix-socket connection.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Tom Lane
"Lim Berger" <[EMAIL PROTECTED]> writes:
> ERROR:  out of memory
> DETAIL:  Failed on request of size 67108860.

Apparently, this number:

> maintenance_work_mem = 64MB

is more than your system can actually support.  Which is a bit odd for
any modern-day machine.  I suspect the postmaster is being started with
an unduly small ulimit.
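
A quick way to test that diagnosis (a sketch; the table name is
hypothetical) is to lower the setting for a single session and retry:

SET maintenance_work_mem = '16MB';  -- session-local override
VACUUM ANALYZE mytable;             -- succeeds if the 64MB request was the problem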

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Problem Designing Index

2007-08-13 Thread Alan J Batsford
[EMAIL PROTECTED] wrote on 08/13/2007 08:36:23 AM:

> While it's difficult to be sure, I'm guessing you have either a hardware
> problem, or a tuning problem -- but I don't think your indexes are a
> problem.
>
> Keep in mind that once PostgreSQL has determined which rows to return, it
> has to actually read all those rows off disk and send them to the client
> application.  In my opinion, 8 seconds to read in over 100,000 rows isn't
> unreasonable (especially if those rows are wide).
>
> If 8 seconds is an unacceptable time, then you're liable to need hardware to
> fix it: more RAM to cache those rows, or faster disks or both.
>
> However, this is just speculation.  You didn't provide analyze output, table
> schema, hardware details, or configuration information ... so it's entirely
> possible that there is something else wrong.  I'm just making an educated
> guess.

Thanks for the help, after your email I went to capture some analyze output
for you and when I did I figured to bump up the statistics on the two
columns of interest from 100 to 1000. Now all statements return close to
instantly.
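
For anyone reading along in the archives, that per-column statistics
bump looks roughly like this (a sketch; the table name is illustrative,
the column names are from the query earlier in the thread):

ALTER TABLE mytable ALTER COLUMN prod_num SET STATISTICS 1000;
ALTER TABLE mytable ALTER COLUMN transaction_timestamp SET STATISTICS 1000;
ANALYZE mytable;  -- the new target only takes effect at the next analyze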

I originally thought it was the index because I could make an index that
yielded great performance for each type of select I was doing, but never
for all of them at once. To answer your question about hardware, the CPU
is a Xeon with 3GB of RAM. I am unsure of the exact speed of the HDD, but
I'm certain it's high performance.

Is this analyze tool something I need to run periodically to keep
performance up? If so, how often should I run it?


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Unable to connect to PostgreSQL server via PHP

2007-08-13 Thread John Coulthard





From: "Lim Berger" <[EMAIL PROTECTED]>
To: "John Coulthard" <[EMAIL PROTECTED]>
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Unable to connect to PostgreSQL server via PHP
Date: Mon, 13 Aug 2007 18:51:37 +0800

On 8/13/07, John Coulthard <[EMAIL PROTECTED]> wrote:
> The part of the php code for the connection is
>
>
> $dbconn=pg_connect( "dbname=lumbribase host=localhost port=5432
> user=postgres password=$PG_PASS" );
> if ( ! $dbconn ) {
> echo "Error connecting to the database ! " ;
> printf("%s", pg_errormessage( $dbconn ) );
> exit(); }
>
> This code works on zeldia
> http://zeldia.cap.ed.ac.uk/Lumbribase/search_id.php
> but not on xyala http://xyala.cap.ed.ac.uk/Lumbribase/search_id.php
>
> So it's something to do with the way xyala is set up.  I'm sure I'm
> missing something obvious but what..?
>
> It's not a firewall issue because it persists when the iptables are off.
>
> What does "could not connect to server: Permission denied" mean?  Have I
> done something as root or as me that I should have done as
> user postgres?



Hi John,

Had the same issue. Try the following steps:

1. PG_HBA.CONF

This is an important file. Mine (on Linux CentOS 4) is located at
"/var/lib/pgsql/data/pg_hba.conf". Make sure it looks like the
following.

local   all   all                                 md5
host    all   all   127.0.0.1   255.255.255.255   md5


2. POSTGRESQL.CONF


listen_addresses = 'localhost,*'
#port = 5432
# ... other settings ...


3. PHP CODE


$link = pg_connect("host=localhost dbname=MYDB user=MYUSER password=MYPASS");


--
That is all you need. Don't specify anything else in the connection
string. Let me know how it goes.

LB

---(end of broadcast)---
TIP 6: explain analyze is your friend



Thanks for the suggestions but they don't solve the problem.  I do notice
that if I set listen_addresses='localhost,*' then I get the following when I
start postgres...


bash-3.00$ ps -ef | grep postgres
root      9669  8757  0 13:34 pts/5    00:00:00 su postgres
postgres  9670  9669  0 13:34 pts/5    00:00:00 bash
postgres  9673  9670  0 13:34 pts/5    00:00:00 ps -ef
postgres  9674  9670  0 13:34 pts/5    00:00:00 grep postgres
bash-3.00$ /usr/bin/pg_ctl -D /var/lib/pgsql/data start
postmaster starting
bash-3.00$ LOG:  could not bind IPv4 socket: Address already in use
HINT:  Is another postmaster already running on port 5432? If not, wait a
few seconds and retry.


bash-3.00$

It does start, though.  But if I just have localhost or * as the
listen_addresses it starts without error.


That's not my problem, though; this is "could not connect to server:
Permission denied".  If it's denying permission I must have the permissions
set wrong, but where do I start looking for them?  All subdirectories in
/var/lib/pgsql are owned by postgres and seem to have the same permissions
on both systems.


Cheers




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Gregory Stark
"Lim Berger" <[EMAIL PROTECTED]> writes:

> On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
>> "Lim Berger" <[EMAIL PROTECTED]> writes:
>>
>> > Hi
>> >
>> > I am getting the following error while running queries such as "vacuum
>> > analyze TABLE", even on small tables with a piddly 35,000 rows!
>> >
>> > The error message:
>> > --
>> > ERROR:  out of memory
>> > DETAIL:  Failed on request of size 67108860.
>> > --
>> >
> >> > My postgresql.conf is below. I am on a Dual Core server with 4GB of
>> > RAM, which runs MySQL as well (key_buffer for which is at around
>> > 800M).
>>
>> What version of Postgres is this?
>
> =# select version();
>                                version
> ----------------------------------------------------------------------
>  PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC)
> 3.4.6 20060404 (Red Hat 3.4.6-3)
> (1 row)

Hm, this is quite odd.

Could you give more information? You're getting this on lots of different
tables? Could you give more examples? And do you get it on anything other than
vacuum analyze? What does the schema look like?

Do you have any hash indexes? (there was a bug fixed in 8.2.4 with them)
Do you have anything else unusual like tsearch2 or custom C modules loaded?

Has anything unusual happened to this machine such as a server crash or power
failure? Is anything else failing? Can you run a good memory tester like
memtest86? Could you check your dmesg log to see if there are any system
problems?

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Problem Designing Index

2007-08-13 Thread Bill Moran
In response to Alan J Batsford <[EMAIL PROTECTED]>:
> 
> Hello,
> 
> I'm doing some select statements on my table that look like:
> SELECT * FROM table WHERE prod_num = '1234567' AND transaction_timestamp >
> '2007-07-18 21:29:57' OR prod_num > '1234567' ORDER BY prod_num ASC,
> transaction_timestamp ASC LIMIT 1;
> 
> I've added two indices one for prod_num and another transaction_timestamp.
> This table has 151,000 rows and the above statement returns in less than a
> millisecond. If I change the above statement from '>' to '<' it takes 8
> seconds to complete. Prod_num '1234567' is towards the end of the 151k
> rows. If I use a prod_num like '123' towards the front the problem is
> reversed with '>' and '<'.
> 
> I tried adding a third index that uses both prod_num and
> transaction_timestamp. The average performance at each end of the data for
> both '>' and '<' improved but the problem wasn't resolved. Selects at the
> end of the data with '>' conditions (Like the original statement) then
> become broken and take 500 ms to finish, which is unacceptable for the
> application.
> 
> I did analyze on the table with no effect.
> 
> Is it possible to design an index that can account for all the scenarios?
> Thanks for any help you can provide.

While it's difficult to be sure, I'm guessing you have either a hardware
problem, or a tuning problem -- but I don't think your indexes are a problem.

Keep in mind that once PostgreSQL has determined which rows to return, it
has to actually read all those rows off disk and send them to the client
application.  In my opinion, 8 seconds to read in over 100,000 rows isn't
unreasonable (especially if those rows are wide).

If 8 seconds is an unacceptable time, then you're liable to need hardware to
fix it: more RAM to cache those rows, or faster disks or both.

However, this is just speculation.  You didn't provide analyze output, table
schema, hardware details, or configuration information ... so it's entirely
possible that there is something else wrong.  I'm just making an educated
guess.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
>
> > Hi
> >
> > I am getting the following error while running queries such as "vacuum
> > analyze TABLE", even on small tables with a piddly 35,000 rows!
> >
> > The error message:
> > --
> > ERROR:  out of memory
> > DETAIL:  Failed on request of size 67108860.
> > --
> >
> > > My postgresql.conf is below. I am on a Dual Core server with 4GB of
> > RAM, which runs MySQL as well (key_buffer for which is at around
> > 800M).
>
> What version of Postgres is this?
>



=# select version();
                               version
----------------------------------------------------------------------
 PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC)
3.4.6 20060404 (Red Hat 3.4.6-3)
(1 row)


Thanks for any tips!

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] UDFs

2007-08-13 Thread Martijn van Oosterhout
On Mon, Aug 13, 2007 at 07:36:41PM +, jf wrote:
> > usually it's in: <prefix>/share/postgresql/contrib/pgcrypto.sql
> > in the database you want to use pgcrypto functions, you simply run this
> > sql (as superuser), and that's all.
> 
> theory# pwd
> /home/jf/postgresql-8.2.4
> theory# cd share
> bash: cd: share: No such file or directory

He means the INSTALLPREFIX, which is usually /usr or /usr/local. Your
problem is that you haven't built it yet (run make), which will generate
the actual module; you then need to install it to get the scripts into
the right locations.

If you can, I'd suggest installing binary versions of the contrib
modules (it's called postgresql-8.2-contrib in Debian, for example).
That saves you having to worry about sources, paths, compilation, etc.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] UDFs

2007-08-13 Thread jf
> share of *installed* system. if you compiled with --prefix=/usr/local,
> then it would be /usr/local/share/postgresql/...

Ah, you have to forgive me, I'm in the states and it's quite late ;]

> of course - you don't need all of them (on the other hand - i strongly
> suggest that you get some familiarity with them, as there are some real gems).

Well as I continue working on my backend and find that I need some other
functionality, I'll check there first ;]

Thanks again.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] UDFs

2007-08-13 Thread hubert depesz lubaczewski
On Mon, Aug 13, 2007 at 07:36:41PM +, jf wrote:
> > usually it's in: <prefix>/share/postgresql/contrib/pgcrypto.sql
> > in the database you want to use pgcrypto functions, you simply run this
> > sql (as superuser), and that's all.
> theory# pwd
> /home/jf/postgresql-8.2.4
> theory# cd share
> bash: cd: share: No such file or directory
> theory# pwd
> /home/jf/postgresql-8.2.4
> theory# ls share
> ls: cannot access share: No such file or directory

share of *installed* system. if you compiled with --prefix=/usr/local,
then it would be /usr/local/share/postgresql/...

> theory# find ./ -name pgcrypto.sql
> theory# cd contrib/pgcrypto/
> theory# ls *.sql

if you didn't compile it - it is not built.
simply:
cd contrib/pgcrypto
make
make install
and you should be fine.

i usually compile postgresql in this way:
cd postgre*
./configure -- some options
make
make install
cd contrib
make
make install

in this way i have all contrib modules compiled.

of course - you don't need all of them (on the other hand - i strongly
suggest that you get some familiarity with them, as there are some real gems).

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] UDFs

2007-08-13 Thread jf
> 1. as for installing contrib - usually when you install postgresql from
> prebuilt binary packages, there is also contrib package. for example on
> ubuntu it is postgresql-contrib-8.2, so simple: apt-get install
> postgresql-contrib-8.2 will install it.

I actually built from source, and just didn't know that crypto wasn't
included until I was already up and running and had gotten far enough into
things that I didn't want to restart.

> as for installation in sql - for every contrib module you get sql
> script. it is usually called <module>.sql (pgcrypto.sql).
>
> usually it's in: <prefix>/share/postgresql/contrib/pgcrypto.sql
> in the database you want to use pgcrypto functions, you simply run this
> sql (as superuser), and that's all.

theory# pwd
/home/jf/postgresql-8.2.4
theory# cd share
bash: cd: share: No such file or directory
theory# pwd
/home/jf/postgresql-8.2.4
theory# ls share
ls: cannot access share: No such file or directory
theory# find ./ -name pgcrypto.sql
theory# cd contrib/pgcrypto/
theory# ls *.sql
uninstall_pgcrypto.sql
theory# ls
Makefile crypt-blowfish.c  expected   imath.h  mbuf.h
pgcrypto.c   pgp-cfb.c   pgp-info.c  pgp-pgsql.c
pgp-s2k.c   px-crypt.h  random.c  sha1.c  sql
README.pgcrypto  crypt-des.c   fortuna.c  internal-sha2.c  md5.c
pgcrypto.h   pgp-compress.c  pgp-mpi-internal.c  pgp-pubdec.c  pgp.c
px-hmac.c   rijndael.csha1.h  uninstall_pgcrypto.sql
blf.ccrypt-gensalt.c   fortuna.h  internal.c   md5.h
pgcrypto.sql.in  pgp-decrypt.c   pgp-mpi-openssl.c   pgp-pubenc.c  pgp.h
px.crijndael.hsha2.c
blf.hcrypt-md5.c   imath.cmbuf.c   openssl.c
pgp-armor.c  pgp-encrypt.c   pgp-mpi.c   pgp-pubkey.c
px-crypt.c  px.hrijndael.tbl  sha2.h


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Problem Designing Index

2007-08-13 Thread Alan J Batsford


Hello,

I'm doing some select statements on my table that look like:
SELECT * FROM table WHERE prod_num = '1234567' AND transaction_timestamp >
'2007-07-18 21:29:57' OR prod_num > '1234567' ORDER BY prod_num ASC,
transaction_timestamp ASC LIMIT 1;

I've added two indices one for prod_num and another transaction_timestamp.
This table has 151,000 rows and the above statement returns in less than a
millisecond. If I change the above statement from '>' to '<' it takes 8
seconds to complete. Prod_num '1234567' is towards the end of the 151k
rows. If I use a prod_num like '123' towards the front the problem is
reversed with '>' and '<'.

I tried adding a third index that uses both prod_num and
transaction_timestamp. The average performance at each end of the data for
both '>' and '<' improved but the problem wasn't resolved. Selects at the
end of the data with '>' conditions (Like the original statement) then
become broken and take 500 ms to finish, which is unacceptable for the
application.
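
For concreteness, the third (composite) index described above would
look like the first statement below; and on 8.2 or later a row-value
comparison expresses the same condition in a form such an index can
serve in one scan (the table and index names are illustrative):

CREATE INDEX mytable_prod_ts_idx
    ON mytable (prod_num, transaction_timestamp);

-- Equivalent to: prod_num = X AND ts > T OR prod_num > X
SELECT * FROM mytable
WHERE (prod_num, transaction_timestamp) > ('1234567', '2007-07-18 21:29:57')
ORDER BY prod_num ASC, transaction_timestamp ASC
LIMIT 1;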

I did analyze on the table with no effect.

Is it possible to design an index that can account for all the scenarios?
Thanks for any help you can provide.

-Alan


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] "Out of memory" errors..

2007-08-13 Thread Gregory Stark
"Lim Berger" <[EMAIL PROTECTED]> writes:

> Hi
>
> I am getting the following error while running queries such as "vacuum
> analyze TABLE", even on small tables with a piddly 35,000 rows!
>
> The error message:
> --
> ERROR:  out of memory
> DETAIL:  Failed on request of size 67108860.
> --
>
> My postgresql.conf is below. I am on a Dual Core server with 4GB of
> RAM, which runs MySQL as well (key_buffer for which is at around
> 800M). 

What version of Postgres is this?


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] UDFs

2007-08-13 Thread hubert depesz lubaczewski
On Mon, Aug 13, 2007 at 07:22:26PM +, jf wrote:
> understood, I appreciate the suggestion. In addition I couldn't find any
> documentation that told me how to install the functions in pgcrypto (do I
> need to CREATE FUNCTION for every function in there?), the README mentions
> a .sql file that's supposed to get you up and going, but the only .sql file
> I found in the directory was an uninstall .sql file, am I expected to grok
> it and do the inverse for every step? et cetera.
> 

1. as for installing contrib - usually when you install postgresql from
prebuilt binary packages, there is also contrib package. for example on
ubuntu it is postgresql-contrib-8.2, so simple: apt-get install
postgresql-contrib-8.2 will install it.

as for installation in sql - for every contrib module you get sql
script. it is usually called <module>.sql (pgcrypto.sql).

usually it's in: <prefix>/share/postgresql/contrib/pgcrypto.sql

in the database you want to use pgcrypto functions, you simply run this
sql (as superuser), and that's all.
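
Concretely, from psql that loading step plus a quick smoke test might
look like this (a sketch; the exact install path depends on your
--prefix, as noted elsewhere in the thread):

-- connected to the target database as a superuser:
\i /usr/local/share/postgresql/contrib/pgcrypto.sql

-- quick check that the functions are in place:
SELECT encode(digest('hello', 'sha1'), 'hex');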

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] UDFs

2007-08-13 Thread jf
>
> your function is also not compiled by default. but pgcrypto is at the
> very least available by default (in sources, or in precompiled
> packages).

Yes I understand, and trust me I'm typically not a 'not made in my house'
type, but google for pgcrypto - you don't find much out there other than
'look in contrib/', and honestly I wasn't about to sit down and recompile
everything to get a bunch of functionality I really didn't need, so I (in
hindsight incorrectly) decided it would be quicker just to write a UDF.

Where the incorrectly part is that I've been battling for a couple hours
trying to figure out why, even though I had changed the UDF to use BYTEA
primitives instead of TEXT, it was still using TEXTs, and I just learned
something new: CREATE OR REPLACE FUNCTION [...] doesn't replace the
function if you change the types; it makes sense, I've just been trying
to figure out what is going on.
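
What was happening, as a minimal sketch (the function names are
hypothetical): CREATE OR REPLACE only replaces a function with the same
argument types, so changing the types quietly creates an overload instead:

CREATE OR REPLACE FUNCTION my_len(text) RETURNS int AS
    $$ SELECT length($1) $$ LANGUAGE sql;

-- Same name, different argument type: this does NOT replace the first
-- function; it creates a second, overloaded one alongside it:
CREATE OR REPLACE FUNCTION my_len(bytea) RETURNS int AS
    $$ SELECT octet_length($1) $$ LANGUAGE sql;

-- The old signature is still there and has to be dropped explicitly:
DROP FUNCTION my_len(text);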

> i don't know about online documentation, but in installed pgcrypto you
> have a nice readme with all functions described.

Expecting people who may or may not have the source sitting around
anymore to download it just to look at the documentation, to even
determine whether they want to use that code, is not a very effective
method of marketing, imho of course.

This holds especially true when the online manual for the DB itself is
lacking in many places (i.e. can palloc() fail? does palloc0() zero out
the memory returned? et cetera). I had to dig through the header files
just to find the various types and such, and even then it's still not
entirely clear what exactly the header for the variable-length types is.
In one header file it gets typedef'd to a structure defined right above
it that contains a length field and an array of a single character (does
that serve as a pointer to the beginning of the data?), but then the
VARDATA()/VARSIZE()/et cetera macros appear to work on an entirely
different structure, and there is no immediate correlation between the
two structures in the headers.

It's a bit sad because the documentation for pgcrypto is actually quite a
bit better than for the UDFs, and it wasn't until after I had made the
decision that the UDFs would be easier and fought with it for some time
that I realized this.


> i'm not opposing the idea. i was just checking if you do know about
> it :)

understood, I appreciate the suggestion. In addition I couldn't find any
documentation that told me how to install the functions in pgcrypto (do I
need to CREATE FUNCTION for every function in there?). The README mentions
a .sql file that's supposed to get you up and going, but the only .sql file
I found in the directory was an uninstall .sql file - am I expected to grok
it and do the inverse for every step? et cetera.

I don't mean to come off as a complainer, but the documentation for the DB
overall seems to be lacking, and that was my main reason for opting out,
even if now that all is said and done it would've been easier to use
pgcrypto.

> best regards,
>
> depesz


cheers,

jf

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] UDFs

2007-08-13 Thread hubert depesz lubaczewski
On Mon, Aug 13, 2007 at 05:07:50PM +, jf wrote:
> my understanding was that pgcrypto was not compiled by default?

your function is also not compiled by default. but pgcrypto is at the
very least available by default (in sources, or in precompiled
packages).

> Furthermore, finding next to no documentation about it online and deciding

i don't know about online documentation, but in installed pgcrypto you
have a nice readme with all functions described.

> I only needed one function instead of an entire crypto API i decided it
> would make the most sense to just code the 10 lines to do it myself.

i'm not opposing the idea. i was just checking if you do know about
it :)

best regards,

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] LIKE conditions in PGSQL very, very slow!

2007-08-13 Thread Alban Hertroys
.ep wrote:
> Hi,
> 
> I'm moving from the mysql camp and quite liking things like functions
> and such, but a lot of my functionality depends on queries such as
> 
>SELECT id, name, start_date
>FROM customer
>WHERE name LIKE 'eri%';
> 
> These kinds of queries are super fast in MySQL because "eri%" type
> conditions also use the index. Is this not the case with PG?
> 
> Here's the EXPLAIN output:
> 
> 
> CUSTDB=# explain select id,name,start_date from customer where name
> like 'eri%';
>                          QUERY PLAN
> --------------------------------------------------------------
>  Seq Scan on customer  (cost=0.00..86032.18 rows=1 width=111)
>    Filter: ((name)::text ~~ 'eri%'::text)
> (2 rows)

I think there's either no index on customer.name or you didn't analyze
the table, so PG has outdated statistics on its contents (probably
stating the table is still empty) and thinks a sequential scan will be
faster. You probably want to become acquainted with autovacuum.

Another possibility is that most of your customers names start with
'eri', in which case a seq scan is actually faster... In that case you
should probably do something about your customer base ;)
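
One more thing worth checking, since the EXPLAIN shows a varchar
column: in a non-C locale a plain btree index cannot serve a LIKE
'eri%' prefix condition at all, but an index built with the pattern
operator class can (a sketch, assuming the column is varchar):

CREATE INDEX customer_name_like_idx
    ON customer (name varchar_pattern_ops);
ANALYZE customer;
-- EXPLAIN of the query above should now show an index scan for
-- WHERE name LIKE 'eri%' (given selective enough statistics).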

Regards,

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Unable to connect to PostgreSQL server via PHP

2007-08-13 Thread John Coulthard

The part of the php code for the connection is


$dbconn=pg_connect( "dbname=lumbribase host=localhost port=5432 
user=postgres password=$PG_PASS" );

if ( ! $dbconn ) {
   echo "Error connecting to the database ! " ;
   printf("%s", pg_errormessage( $dbconn ) );
   exit(); }

This code works on zeldia 
http://zeldia.cap.ed.ac.uk/Lumbribase/search_id.php

but not on xyala http://xyala.cap.ed.ac.uk/Lumbribase/search_id.php

So it's something to do with the way xyala is set up.  I'm sure I'm missing
something obvious, but what?


It's not a firewall issue, because it persists when iptables is off.

What does "could not connect to server: Permission denied" mean?  Have I 
done some something as root or me that I should have done as user postgres?


Thanks




From: Julio Cesar Sánchez González <[EMAIL PROTECTED]>
To: John Coulthard <[EMAIL PROTECTED]>
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Unable to connect to PostgreSQL server via PHP
Date: Fri, 10 Aug 2007 22:50:47 -0500

On Thu, 09-08-2007 at 14:51 +, John Coulthard wrote:
> Hi
>
> I'm trying to set up a new webserver running php and pgsql.  PHP was
> connecting to postgres but I needed to install the php-gd module and now I
> get the error...
>
> "PHP Warning:  pg_connect() [> href='function.pg-connect'>function.pg-connect]: Unable to connect 
to
> PostgreSQL server: could not connect to server: Permission denied\n\tIs 
the

> server running on host "localhost" and accepting\n\tTCP/IP
> connections on port 5432?"
>
>  and I'm at a loss can anyone tell me why it's not connecting?
>
> Thanks
>
> The bits I know are...
> http://xyala.cap.ed.ac.uk/php_info.php says PHP is configured for pgsql
>
> [EMAIL PROTECTED] telnet localhost 5432
> Trying 127.0.0.1...
> Connected to localhost.localdomain (127.0.0.1).
> Escape character is '^]'.
> Connection closed by foreign host.
> [EMAIL PROTECTED]
>
>
> [EMAIL PROTECTED] less /var/lib/pgsql/data/pg_hba.conf
> # TYPE  DATABASE    USER    CIDR-ADDRESS      METHOD
>
> # "local" is for Unix domain socket connections only
> #local   all         all                           ident sameuser
> local    all         all                           trust
> # IPv4 local connections:
> #host    all         all         127.0.0.1/32      ident sameuser
> host     all         all         127.0.0.1/32      trust
> # IPv6 local connections:
> #host    all         all         ::1/128           ident sameuser
> host     all         all         ::1/128           trust
>
> [EMAIL PROTECTED] grep 'listen' /var/lib/pgsql/data/postgresql.conf
> # "pg_ctl reload". Some settings, such as listen_address, require
> #listen_addresses = 'localhost' # what IP interface(s) to listen on;
> listen_addresses = '*'
> [EMAIL PROTECTED]
>
> [EMAIL PROTECTED] less /etc/php.d/pgsql.ini
> ; Enable pgsql extension module
> extension=pgsql.so
>
> the server I'm going to replace is running the same versions of PHP and
> postgres http://zeldia.cap.ed.ac.uk/php_info.php
> The /etc/php.ini files on the two machines are the same and the
> /var/lib/pgsql/data/postgresql.conf files are only different because I've
> set listen_addresses = '*' on the new server (xyala) to see if I can make it
> work.
>


Please show your PHP source code so we can help you.


--
Regards,

Julio Cesar Sánchez González
www.sistemasyconectividad.com.mx
blog: http://darkavngr.blogspot.com

---
Now I have become Death, the destroyer of worlds.
I am Death, who takes away everything, the source of things to come.






---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] "Out of memory" errors..

2007-08-13 Thread Lim Berger
Hi

I am getting the following error while running queries such as "vacuum
analyze TABLE", even on small tables with a piddly 35,000 rows!

The error message:
--
ERROR:  out of memory
DETAIL:  Failed on request of size 67108860.
--

My postgresql.conf is below. I am on a dual-core server with 4GB of
RAM, which also runs MySQL (whose key_buffer is around 800M), so I have
allocated shared_buffers for PostgreSQL with that in mind. The server
also runs Apache and other services, but I have never had any problem
running MySQL's vacuum equivalent, "REPAIR TABLE".

Thanks in advance for any inputs!



--POSTGRESQL.CONF---
#--- Some tuning ~
#--- http://www.opennms.org/index.php/Performance_tuning
max_connections = 250
shared_buffers = 21000
effective_cache_size = 21000
max_fsm_relations = 1500
max_fsm_pages = 8
sort_mem = 16348
work_mem = 16348
vacuum_mem = 16348
temp_buffers = 4096
authentication_timeout = 10s
ssl = off

autovacuum = on
vacuum_cost_delay = 50
stats_start_collector = on
stats_row_level = on

#--- For COPY performance
wal_buffers=64
checkpoint_segments=64
checkpoint_timeout=900
fsync = on
maintenance_work_mem = 64MB
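
Incidentally, the failed request of 67108860 bytes is almost exactly the
64MB maintenance_work_mem set above, which is the chunk VACUUM grabs up
front for its dead-tuple list. Easy to confirm from psql (pg_size_pretty
is available in 8.1 and later):

    SHOW maintenance_work_mem;                 -- 64MB
    SELECT pg_size_pretty(67108860::bigint);   -- reports "64 MB"

So it seems the backend cannot get even that single allocation from the
OS, rather than the setting itself being unreasonably large.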

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Unable to connect to PostgreSQL server via PHP

2007-08-13 Thread Lim Berger
On 8/13/07, John Coulthard <[EMAIL PROTECTED]> wrote:
> The part of the php code for the connection is
>
>
> $dbconn=pg_connect( "dbname=lumbribase host=localhost port=5432
> user=postgres password=$PG_PASS" );
> if ( ! $dbconn ) {
> echo "Error connecting to the database ! " ;
> printf("%s", pg_errormessage( $dbconn ) );
> exit(); }
>
> This code works on zeldia
> http://zeldia.cap.ed.ac.uk/Lumbribase/search_id.php
> but not on xyala http://xyala.cap.ed.ac.uk/Lumbribase/search_id.php
>
> So it's something to do with the way xyala is set up.  I'm sure I'm missing
> something obvious, but what?
>
> It's not a firewall issue, because it persists when iptables is off.
>
> What does "could not connect to server: Permission denied" mean?  Have I
> done something as root (or as myself) that I should have done as
> user postgres?



Hi John,

Had the same issue. Try the following steps:

1. PG_HBA.CONF

This is an important file. Mine (on Linux CentOS 4) is located at
"/var/lib/pgsql/data/pg_hba.conf". Make sure it looks like the
following.

local   all         all                                md5
host    all         all         127.0.0.1   255.255.255.255    md5


2. POSTGRESQL.CONF


listen_addresses = 'localhost,*'
#port = 5432
# ... other settings unchanged


3. PHP CODE


$link = pg_connect("host=localhost dbname=MYDB user=MYUSER password=MYPASS");

--
That is all you need. Don't specify anything else in the connection
string. Let me know how it goes.
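
As a sanity check independent of PHP (assuming the psql client is
installed on xyala), try the same TCP connection from a shell:

    psql -h localhost -p 5432 -U postgres -d lumbribase -c 'SELECT version();'

If that succeeds while the PHP page still fails, whatever is denying the
connection is specific to the web server's environment rather than to
PostgreSQL itself.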

LB

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] UDFs

2007-08-13 Thread jf

My understanding was that pgcrypto was not compiled by default?
Furthermore, having found next to no documentation about it online, and
needing only one function rather than an entire crypto API, I decided it
would make the most sense to just code the ten lines to do it myself.


On Mon, 13 Aug 2007, hubert depesz lubaczewski wrote:

> Date: Mon, 13 Aug 2007 11:29:58 +0200
> From: hubert depesz lubaczewski <[EMAIL PROTECTED]>
> To: jf <[EMAIL PROTECTED]>
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] UDFs
>
> On Mon, Aug 13, 2007 at 03:17:36PM +, jf wrote:
> > Trying to implement some simple digest routines via UDFs and for whatever
> > reason I get: ERROR:  invalid memory alloc request size 4294967293 on
> > PG_RETURN_TEXT_P(); any ideas what the issue is exactly?
>
> just checking - you do realize that it's reinventing the wheel?
> i mean - the code already exists, and is bundled in standard postgresql
> sources?
> if not - use pgcrypto module from contrib/
>
> depesz
>
>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] UDFs

2007-08-13 Thread hubert depesz lubaczewski
On Mon, Aug 13, 2007 at 03:17:36PM +, jf wrote:
> Trying to implement some simple digest routines via UDFs and for whatever
> reason I get: ERROR:  invalid memory alloc request size 4294967293 on
> PG_RETURN_TEXT_P(); any ideas what the issue is exactly?

just checking - you do realize that it's reinventing the wheel?
i mean - the code already exists, and is bundled in standard postgresql
sources?
if not - use pgcrypto module from contrib/

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - a blog for you (and my CV)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] UDFs

2007-08-13 Thread jf

Yup, an uninitialized variable makes sense; that appears to have worked,
I appreciate it. Regarding the checks for NULL, I wasn't sure whether
they were necessary, so I opted for the safe route. Thanks for clearing
that up for me.

The output from the function is binary; does it matter if I use text or
bytea? Finally, how do I cast in SQL? When I try to pass the output to
encode(), it says it can't find a function taking those types.
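
My guess at the combination that would match the built-ins (redeclaring
the function to return bytea, which seems the natural type for binary
data):

    -- declaration sketch; object file and symbol as in my original post
    CREATE OR REPLACE FUNCTION sha512(text) RETURNS bytea
        AS 'dp_sha512.so', 'dp_sha512' LANGUAGE C STRICT;

    -- encode(bytea, text) then renders the digest as hex
    SELECT encode(sha512('jf'), 'hex');

i.e. there is no encode(text, text), which would explain why the call
could not be matched while the function returned text.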

Thanks a lot!

On Mon, 13 Aug 2007, Martijn van Oosterhout wrote:

> Date: Mon, 13 Aug 2007 10:42:52 +0200
> From: Martijn van Oosterhout <[EMAIL PROTECTED]>
> To: jf <[EMAIL PROTECTED]>
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] UDFs
>
> On Mon, Aug 13, 2007 at 03:17:36PM +, jf wrote:
> > Hi.
> >
> > Trying to implement some simple digest routines via UDFs and for whatever
> > reason I get: ERROR:  invalid memory alloc request size 4294967293 on
> > PG_RETURN_TEXT_P(); any ideas what the issue is exactly?
>
> A few points about your code:
> - The tests against NULL are useless: palloc never returns NULL, and
> the argument to the function won't be NULL either (it's declared
> STRICT).
> - Your palloc should ask for more: the header may be more than one byte
> (VAR_HEADER_LEN or some such would be more appropriate).
> - Finally, what is probably the actual problem: at no point did you
> assign a length to the hash variable, i.e. VARLEN(hash)=foo.
>
> Hope this helps,
>

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] UDFs

2007-08-13 Thread Martijn van Oosterhout
On Mon, Aug 13, 2007 at 03:17:36PM +, jf wrote:
> Hi.
> 
> Trying to implement some simple digest routines via UDFs and for whatever
> reason I get: ERROR:  invalid memory alloc request size 4294967293 on
> PG_RETURN_TEXT_P(); any ideas what the issue is exactly?

A few points about your code:
- The tests against NULL are useless: palloc never returns NULL, and
the argument to the function won't be NULL either (it's declared
STRICT).
- Your palloc should ask for more: the header may be more than one byte
(VAR_HEADER_LEN or some such would be more appropriate).
- Finally, what is probably the actual problem: at no point did you
assign a length to the hash variable, i.e. VARLEN(hash)=foo.

Hope this helps,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] UDFs

2007-08-13 Thread Pavel Stehule
>
> hash = (text *)palloc(hlen+1);

??? palloc(hlen + VARHDRSZ)

>
> memset(VARDATA(hash), 0, hlen);
> SHA512(VARDATA(plain), hlen, VARDATA(hash));


++ VARATT_SIZEP(hash) = VARHDRSZ + hlen;

> PG_RETURN_TEXT_P(hash);
> }
>

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] UDFs

2007-08-13 Thread jf
Hi.

Trying to implement some simple digest routines via UDFs and for whatever
reason I get: ERROR:  invalid memory alloc request size 4294967293 on
PG_RETURN_TEXT_P(); any ideas what the issue is exactly?

The code is verified as working when pulled out of a UDF and put into a
normal C program. Thanks.

dp_blag=# CREATE OR REPLACE FUNCTION sha512(text) RETURNS text AS
'dp_sha512.so','dp_sha512' LANGUAGE C STRICT;
CREATE FUNCTION
dp_blag=# SELECT sha512('jf');
ERROR:  invalid memory alloc request size 4294967293

PG_FUNCTION_INFO_V1(dp_sha512);

Datum
dp_sha512(PG_FUNCTION_ARGS)
{
text *  hash;
text *  plain;
int32   hlen;

plain = (text *)PG_GETARG_TEXT_P(0);

if (NULL == plain) {
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
errmsg("invalid argument to function"))
);
PG_RETURN_NULL();
}

hlen = SHA512_DIGEST_LENGTH;

hash = (text *)palloc(hlen+1);

if (NULL == hash) {
ereport(ERROR,
(errcode(ERRCODE_INTERNAL_ERROR),
errmsg("palloc() failed"))
);
PG_RETURN_NULL();
}

memset(VARDATA(hash), 0, hlen);
SHA512(VARDATA(plain), hlen, VARDATA(hash));
PG_RETURN_TEXT_P(hash);
}

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq