Re: [GENERAL] out of memory during query execution

2005-12-20 Thread DANTE ALEXANDRA

Hello,

Thank you for your answer.
The person in charge of building PostgreSQL 8.1.0 did a 32-bit 
build and used the cc_r compiler.
This person did not succeed in building PostgreSQL 8.1.0 with gcc in 64-bit 
mode. Unfortunately, I don't have the errors or the logs of the 64-bit 
build, so I can't tell you what error occurs.


The build was done in 32-bit mode, with the cc_r compiler.
To build PostgreSQL, an RPM was made, and the .spec file contained the 
following instructions :

export OBJECT_MODE=32
./configure CC=/usr/vac/bin/cc_r CFLAGS="-O2 -qmaxmem=-1 -qsrcmsg -qlargepage" \
    --enable-thread-safety --without-readline --prefix=%{buildroot}%{prefix}
gmake -j 4
unset OBJECT_MODE

Do you think that my out of memory problems are due to the 32-bit 
build ?

Do you think that I must build PostgreSQL in 64-bit mode to solve this error ?

Thank you for your help.
Regards,
Alexandra DANTE


Chris Browne wrote:


[EMAIL PROTECTED] (DANTE ALEXANDRA) writes:
 


I am a PostgreSQL newbie. I work with the 8.1.0 release on AIX 5.3,
with 300GB of data.
Some of the queries launched on this database finish with an *out of
memory* error. The queries which have failed contain a lot of joins (between
6 tables), sub-selects and aggregates. For these queries, the log file
contains :
psql:Q9.sql:40: ERROR:  out of memory
DETAIL:  Failed on request of size 148.

On the server used, I have 3GB of memory and 1 CPU.
The settings specified in postgresql.conf are :
# - Memory -
shared_buffers = 12288
#temp_buffers = 1000
#max_prepared_transactions = 5
work_mem = 65536
maintenance_work_mem = 262144
max_stack_depth = 24574

Are any of these values wrong?
Is the out of memory error due to insufficient available memory ?
Has someone ever seen this problem ?
   



We have seen this problem...

It's *probably* related to the memory model you're using.

I have thus far evaded *fully* understanding the details (and hope
that can persist!), but here are some of the things to consider:

- By default, AIX really prefers to build 32 bit binaries

- The sorta-hacks that IBM put in place on library segmentation (and
this stuff is quite ghastly) mean that any backend will likely have
quite a bit less than 2GB of even theoretically-available memory space.

The problem is probably that the memory model is throttling you to
*WAY* less than 2GB of memory.

You may want to try a 64 bit build.  With GCC, this requires something
like the following ./configure incantation...

 CC="gcc -maix64" LDFLAGS="-Wl,-bbigtoc" ./configure
 




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] out of memory during query execution

2005-12-20 Thread DANTE ALEXANDRA

Hello,

The postmaster is launched by the user pg_810, who is not the root user.
When I run the ulimit -a command, I get :
$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000

You will find below the explain plan of one of the queries which 
finished with out of memory. This query contains aggregates and a 
sub-select with 6 joins :
                                     QUERY PLAN
--------------------------------------------------------------------------------
 GroupAggregate  (cost=103283274.03..103283274.07 rows=1 width=76)
   ->  Sort  (cost=103283274.03..103283274.04 rows=1 width=76)
         Sort Key: nation.n_name, date_part('year'::text, (orders.o_orderdate)::timestamp without time zone)
         ->  Nested Loop  (cost=2447049.00..103283274.02 rows=1 width=76)
               Join Filter: ("outer".s_nationkey = "inner".n_nationkey)
               ->  Nested Loop  (cost=2447049.00..103283272.45 rows=1 width=55)
                     ->  Nested Loop  (cost=2447049.00..103283267.25 rows=1 width=59)
                           ->  Hash Join  (cost=2447049.00..103256685.03 rows=4800 width=80)
                                 Hash Cond: ("outer".l_suppkey = "inner".s_suppkey)
                                 ->  Hash Join  (cost=2311445.00..102985544.04 rows=2880228 width=64)
                                       Hash Cond: ("outer".l_partkey = "inner".p_partkey)
                                       ->  Seq Scan on lineitem  (cost=0.00..69142803.64 rows=1800142464 width=56)
                                       ->  Hash  (cost=2311205.00..2311205.00 rows=96000 width=8)
                                             ->  Seq Scan on part  (cost=0.00..2311205.00 rows=96000 width=8)
                                                   Filter: ((p_name)::text ~~ '%green%'::text)
                                 ->  Hash  (cost=110525.00..110525.00 rows=300 width=16)
                                       ->  Seq Scan on supplier  (cost=0.00..110525.00 rows=300 width=16)
                           ->  Index Scan using i_ps_partkey_suppkey on partsupp  (cost=0.00..5.52 rows=1 width=27)
                                 Index Cond: ((partsupp.ps_partkey = "outer".l_partkey) AND (partsupp.ps_suppkey = "outer".l_suppkey))
                     ->  Index Scan using i_o_orderkey on orders  (cost=0.00..5.19 rows=1 width=12)
                           Index Cond: (orders.o_orderkey = "outer".l_orderkey)
               ->  Seq Scan on nation  (cost=0.00..1.25 rows=25 width=37)
(22 rows)

Regards,
Alexandra DANTE

Tom Lane wrote:


DANTE ALEXANDRA [EMAIL PROTECTED] writes:
 

I am a PostGreSQL newbie. I work with the 8.1.0 release on AIX 5.3, with 
300GB of datas.
Some of the queries launched on this database finish with an *out of 
memory*. The queries which have failed contain a lot of join (between 6 
tables), sub-select and aggregate. For these queries, the log file 
contains :

psql:Q9.sql:40: ERROR:  out of memory
DETAIL:  Failed on request of size 148.
   



Hmm ... what ulimit settings are you running the postmaster under?
Could we see the EXPLAIN plans for some of the failing queries?

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq

 




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] One DB not backed up by pg_dumpall

2005-12-20 Thread Francisco Reyes

Michael Fuhr writes:


On Sun, Dec 18, 2005 at 11:29:13PM -0500, Francisco Reyes wrote:

Any reason why a database would not get dumped by pg_dumpall?


Is there a way to check the successful completion of pg_dumpall?
Losing 3 databases is not an experience I want to repeat.

Perhaps it returns a value on failure?
I just checked the man page and did not see any reference in that regard.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] out of memory during query execution

2005-12-20 Thread Martijn van Oosterhout
On Tue, Dec 20, 2005 at 01:35:03PM +0100, DANTE ALEXANDRA wrote:
 You will find below the explain plan of one of the queries which has 
 finished with out of memory. This query contains aggregate and a 
 sub-select with 6 joins :

1. Firstly, it could be the Hash node. Does the estimated number of
matches in part (96000 rows) match reality?

2. Secondly, looks like lineitem could use an index on partkey. Maybe it
could then use a more efficient join?

Do you have indexes on the relevant columns?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgpI6ceMBEuD0.pgp
Description: PGP signature


[GENERAL] About Blobs in postgresSQL

2005-12-20 Thread kishore mukati
Hi,

I am facing a problem when I am trying to store images and files in postgreSQL, using a bytea datatype field in a table and setBinaryStream(). It shows me an exception: "inputstream is supported". Please reply if anybody has a solution to this problem.

thanks,
kishore mukati

[GENERAL] Could not read - Permission denied

2005-12-20 Thread Henrique Engelmann
Hello,

We're running 8.0.4 in a Windows XP box and some of the applications are not working... Searching pg_log we've found the following messages:

2005-12-17 07:26:54 ERROR: could not read block 3 of relation 1663/17253/104561: Permission denied
2005-12-17 09:26:11 ERROR: could not read block 3 of relation 1663/17253/104561: Permission denied

What can be causing this error?

Thanks!
Engelmann

[GENERAL] deduce sequence name from table and column

2005-12-20 Thread Louis-David Mitterrand
Hi,

Is there a way (from DBI) to deduce a sequence name from the table and
column it is attached to?

For instance:

    Column    |  Type   |                                Modifiers
 -------------+---------+-------------------------------------------------------------------------
  id_fonction | integer | not null default nextval(('fonction_id_fonction_seq'::text)::regclass)

I'd like to be able to programmatically find fonction_id_fonction_seq
so that I can then call nextval() on it.

Thanks,

-- 
Only half the people in the world are above average intelligence.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] deduce sequence name from table and column

2005-12-20 Thread Tom Lane
Louis-David Mitterrand [EMAIL PROTECTED] writes:
 Is there a way (from DBI) to deduce a sequence name from the table and
 column it is attached to?

Since 8.0 you can use pg_get_serial_sequence(), see
http://www.postgresql.org/docs/8.0/static/functions-info.html

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] out of memory during query execution

2005-12-20 Thread Seneca Cunningham
DANTE ALEXANDRA wrote:
 The person in charge of building PostgreSQL 8.1.0 did a 32-bit
 build and used the cc_r compiler.
 This person did not succeed in building PostgreSQL 8.1.0 with gcc in 64-bit
 mode. Unfortunately, I don't have the errors or the logs of the 64-bit
 build, and I can't tell you what error occurs.

Too bad, I may have been able to determine what had happened with the
gcc build.

 The build done was realized in 32 bits, with the cc_r compiler.
 To build POstGreSQL, a rpm was done and the .spec file contained the
 following instructions :
 export OBJECT_MODE=32
 ./configure CC=/usr/vac/bin/cc_r CFLAGS=-O2 -qmaxmem=-1 -qsrcmsg
 -qlargepage --enable-thread-safety
 --without-readline --prefix=%{buildroot}%{prefix}
 gmake -j 4
 unset OBJECT_MODE
 
 Do you think that my problems of out of memory are due to the 32 bits
 build ?
 Do you think that I must build PostGreSQL wih 64 bits to solve this error ?

It is quite likely that the out of memory errors are due to your use of
the default 32-bit memory model.  In that model, a single 256MB memory
segment contains your heap, stack, thread stacks, and other per-process,
non-shared-library data.  Switching to 64-bit would stop the errors if
this is true.  It is also possible to adjust the amount of space
available to a 32-bit process' heap with the -bmaxdata linker option,
but the largest heap size that I would consider safe with 32-bit is 2GB
and comes with the cost of reducing the amount of shared memory
available to the process.

Setting OBJECT_MODE to 64 before the ./configure and gmake should result
in a 64-bit build, but I don't have a copy of IBM's compiler to test
with.  I would be interested in seeing the errors output by the 64-bit
gcc build if another build is attempted.
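As a sketch, the two routes Seneca describes might look like this in a build script (these flags are illustrative, assuming the same cc_r toolchain as the original .spec; they may need adjusting for your environment):

```
# Route 1: full 64-bit build (OBJECT_MODE=64 switches the AIX tools to 64-bit)
export OBJECT_MODE=64
./configure CC=/usr/vac/bin/cc_r --enable-thread-safety --without-readline
gmake
unset OBJECT_MODE

# Route 2: stay 32-bit, but give the process heap 1GB at link time
# (at the cost of shared memory available to the process)
./configure CC=/usr/vac/bin/cc_r LDFLAGS="-bmaxdata:0x40000000" --enable-thread-safety
gmake
```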

-- 
Seneca Cunningham
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] deduce sequence name from table and column

2005-12-20 Thread Louis-David Mitterrand
On Tue, Dec 20, 2005 at 10:31:46AM -0500, Tom Lane wrote:
 Louis-David Mitterrand [EMAIL PROTECTED] writes:
  Is there a way (from DBI) to deduce a sequence name from the table and
  column it is attached to?
 
 Since 8.0 you can use pg_get_serial_sequence(), see
 http://www.postgresql.org/docs/8.0/static/functions-info.html

Just what I needed,
Thanks!

-- 
  -= this .sig is not funny =-

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] One DB not backed up by pg_dumpall

2005-12-20 Thread Jaime Casanova
On 12/19/05, Francisco Reyes [EMAIL PROTECTED] wrote:
 Michael Fuhr writes:

  On Sun, Dec 18, 2005 at 11:29:13PM -0500, Francisco Reyes wrote:
  Any reason why a database would not get dumped by pg_dumpall?
  Always run pg_dumpall as the superuser.
 
  As the operating system superuser or as a database superuser?
  There's a difference.

 As the database superuser.

  Is this a new procedure that has never worked, or is it an old
  procedure with a new problem?

 Old procedure with a new problem.

  What are the exact commands you're using to dump and restore?  Have
  you examined the output and the server's logs for errors and warnings?


 The nightly script is:
 #!/bin/csh
 setenv PGUSER pgsql
 setenv PGPASSWORD password
 /usr/local/bin/pg_dumpall |/usr/bin/bzip2 -c file

 Where file is
 /vol1/backs/pgsql/dump_all.sql.bz2


 This procedure has been running for a while.
 Last night before upgrading from 8.0.x to 8.1 I ran the script, then
 proceeded to upgrade. So far from what I can tell only lost that one
 database. :-(


- do you still have the server where these databases existed?
- what version of pgsql is this?

pg_dumpall ignores all databases with datallowconn = false; maybe that is the case?

 However it deeply worries me. I will need to find out if the script above is the
 problem or something else. In the coming days I will keep an eye on the dump. This
 one db I lost was bad to lose, but not critical (a personal wiki); however it
 would have been horrible if I had lost other databases.


--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] About Blobs in postgresSQL

2005-12-20 Thread Leonel Nunez

kishore mukati wrote:


Hi
  I am facing a proble when I am trying to store images, filse in
postgreSQL.by using Bytea datatype field in table
using  setBinaryStream().
it show me exception
 inputstream is supported.
 
please send me if anybody have solution of this problem.
 
thanks

kishore mukati
 


__
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com




No problems  here
this is what I used as reference :

http://jdbc.postgresql.org/documentation/80/binary-data.html


Leonel


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Could not read - Permission denied

2005-12-20 Thread Tom Lane
Henrique Engelmann [EMAIL PROTECTED] writes:
   We're running 8.0.4 in a Windows XP box and some of the applications are 
 not working... 
   
   Searching pg_log we've found the following messages:
   2005-12-17 07:26:54 ERROR:  could not read block 3 of relation 
 1663/17253/104561: Permission denied
 2005-12-17 09:26:11 ERROR:  could not read block 3 of relation 
 1663/17253/104561: Permission denied

   What can be causing this error?

We've heard of misbehaving antivirus products causing this sort of
problem on Windows.  What do you have installed on that machine?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] One DB not backed up by pg_dumpall

2005-12-20 Thread Francisco Reyes

Jaime Casanova writes:


- you still have the server where these databases exists?


No. I lost 3 databases.


- what version of pgsql, is this?


It was 8.0.4
I was upgrading to 8.1.
I checked the nightly jobs had been running, then ran a manual one and 
proceeded to do the upgrade.
 

pg_dumpall ignores all databases with datallowconn = false; maybe that is the case?


The original database is gone so can't check that.

Do you know if there is a way to find out if pg_dumpall had problems?
Later today I plan to do a mini test: run pg_dumpall as a user with rights 
to only some tables, and see if the program returns an error or returns a 
value upon failure... so I can modify my script.


It would be helpful if the docs/man page were updated to indicate any info 
about what pg_dumpall does in case of failures.


I am also planning on writing one or more scripts to check the pg_dumpall 
file. My DBs are small enough that I can run a check on them (i.e. count how 
many DBs were backed up and compare to how many \connect lines the dump file has). 
Hopefully I will make them semi-generic so others can re-use them too.
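The check Francisco describes — comparing an expected database list against the \connect lines pg_dumpall writes before each per-database dump — can be sketched in a few lines of Python (a hypothetical helper, not part of any official tooling):

```python
def count_connects(dump_text):
    """Count the \\connect lines pg_dumpall emits before each database's dump."""
    return sum(1 for line in dump_text.splitlines()
               if line.startswith("\\connect "))

def backup_looks_complete(dump_text, expected_dbs):
    """True when every expected database shows up as a \\connect in the dump."""
    return count_connects(dump_text) == len(expected_dbs)

# Example: a dump that covers only two of three expected databases.
dump = "\\connect template1\n-- ...\n\\connect wiki\n-- ...\n"
print(backup_looks_complete(dump, ["template1", "wiki", "sales"]))  # False
```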


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] is this a bug or I am blind?

2005-12-20 Thread Karsten Hilbert
On Mon, Dec 19, 2005 at 08:37:26PM +0100, Martijn van Oosterhout wrote:

 I don't know which locales are affected. It just can't be that
 widespread because we're not getting similar reports for 99% of the
 locales out there.
Not getting reports doesn't mean the problem is rare.
Perhaps people moved to another database. Perhaps people
decided to solve their problem in client code. Perhaps 50%
of all potential locales haven't been put to use with
PostgreSQL such that the problem would have shown up.

It's definitely worth doing something about. Had I the
skills, I would help with it myself.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] One DB not backed up by pg_dumpall

2005-12-20 Thread Jaime Casanova
On 12/20/05, Francisco Reyes [EMAIL PROTECTED] wrote:
 Jaime Casanova writes:

  - you still have the server where these databases exists?

 No. I lost 3 databases.

  - what version of pgsql, is this?

 It was 8.0.4
 I was upgrading to 8.1.
 I checked the nightly jobs had been running, then ran a manual one and
 proceeded to do the upgrade.


mmm... so since you lost the database, we can't check the
problem... too bad :(

  pg_dumpall ignores all databases with datallowconn = false; maybe that is the 
  case?

 The original database is gone so can't check that.

 Do you know if there is a way to find out if pg_dumpall had problems?
 Later today I plan to do a mini test.. run pg_dumpall as a user with rights
 to only some tables and see if the program returns an error or if returns a
 value upon failure... so I can modify my script.


I haven't tried it, but it seems that it exits...

/*
 * Dump contents of databases.
 */
static void
dumpDatabases(PGconn *conn)
{
    PGresult   *res;
    int         i;

    if (server_version >= 70100)
        res = executeQuery(conn, "SELECT datname FROM pg_database WHERE "
                           "datallowconn ORDER BY 1");
    else
        res = executeQuery(conn, "SELECT datname FROM pg_database ORDER "
                           "BY 1");

    for (i = 0; i < PQntuples(res); i++)
    {
        int         ret;

        char       *dbname = PQgetvalue(res, i, 0);

        if (verbose)
            fprintf(stderr, _("%s: dumping database \"%s\"...\n"),
                    progname, dbname);

        printf("\\connect %s\n\n", fmtId(dbname));
        ret = runPgDump(dbname);
        if (ret != 0)
        {
            fprintf(stderr, _("%s: pg_dump failed on database \"%s\", "
                              "exiting\n"), progname, dbname);
            exit(1);
        }
    }

    PQclear(res);
}

 It would be helpfull if the docs/man page were updated to indicate any info
 about what pg_dumpall does in case of failures.

 I am also planning on writing one or more scripts to check the pg_dumpall
 file. My DBs are small enough that I can run a check on them (ie count how
 many DBs were backed up, compare to how many \connect the dump file has).
 Hopefully will make them semi-generic so others can re-use them too.
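Given that a pg_dump failure makes pg_dumpall exit(1), a calling script only needs to check the exit status. A minimal Python sketch of that pattern, using true and false as stand-ins for a real pg_dumpall pipeline:

```python
import subprocess

def run_backup(cmd):
    """Run a backup command and report whether it exited cleanly."""
    result = subprocess.run(cmd, capture_output=True)
    if result.returncode != 0:
        # A real script would log result.stderr and alert an operator here.
        return False
    return True

# Stand-ins for something like ["pg_dumpall"] piped through bzip2:
print(run_backup(["true"]))   # True: exit status 0
print(run_backup(["false"]))  # False: non-zero exit status
```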



--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] out of memory during query execution

2005-12-20 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 On Tue, Dec 20, 2005 at 01:35:03PM +0100, DANTE ALEXANDRA wrote:
 You will find below the explain plan of one of the queries which has
 finished with out of memory. This query contains aggregate and a
 sub-select with 6 joins :

 1. Firstly, it could be the Hash node. Does the estimated number of
 matches in part (96000 rows) match reality?

Actually, the hash on supplier (300 rows) looks like a bigger
risk.  But if this is 8.1 then there is code in there to spill oversize
hash tables to disk, so I don't understand where the memory is going.

The out of memory failure should have provoked a MemoryContextStats
report in the postmaster log.  Are there a bunch of lines like
"%s: %ld total in %ld blocks; %ld free (%ld chunks); %ld used"
and if so could we see 'em?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] is this a bug or I am blind?

2005-12-20 Thread Tom Lane
Karsten Hilbert [EMAIL PROTECTED] writes:
 On Mon, Dec 19, 2005 at 08:37:26PM +0100, Martijn van Oosterhout wrote:
 I don't know which locales are affected. It just can't be that
 widespread because we're not getting similar reports for 99% of the
 locales out there.

 Not getting reports doesn't mean the problem is rare.

I'm not sure that we can say we're not getting reports, either.  We've
seen *plenty* of reports of strange comparison misbehavior.  Up to now
I've always written them off as pilot error (ie, incompatible locale and
encoding selections) but now I suspect some of them were due to this
issue.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] out of memory during query execution

2005-12-20 Thread Kevin Murphy
I'm certainly not an AIX expert, but I remember my 32-bit AIX programs 
being limited to 256MB of heap by default.  When I linked, I think I had 
to ask for more maximum data page space using something like:


-bmaxdata:0x40000000

(which asks for 1GB, I believe)

-Kevin Murphy


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] out of memory during query execution

2005-12-20 Thread DANTE ALEXANDRA

Hello,

The part table contains 6000 rows, so I think that the estimated number of 
matches in part (96000 rows) could match reality.


Currently, the lineitem table contains only one index :
TPCH=# \d lineitem
              Table "public.lineitem"
     Column      |         Type          | Modifiers
-----------------+-----------------------+-----------
 l_orderkey      | bigint                | not null
 l_partkey       | bigint                | not null
 l_suppkey       | bigint                | not null
 l_linenumber    | bigint                | not null
 l_quantity      | numeric               |
 l_extendedprice | numeric               |
 l_discount      | numeric               |
 l_tax           | numeric               | not null
 l_returnflag    | character(1)          |
 l_linestatus    | character(1)          |
 l_shipdate      | date                  |
 l_commitdate    | date                  |
 l_receiptdate   | date                  |
 l_shipinstruct  | character(25)         |
 l_shipmode      | character(10)         |
 l_comment       | character varying(44) |
Indexes:
    "i_l_orderkey" btree (l_orderkey), tablespace "tb_index"
Tablespace: "tb_lit"

I think I will try to optimize PostgreSQL in a second step, by creating 
appropriate indexes.

I don't think that this index is on a relevant column for this query.

Regards,
Alexandra DANTE

Martijn van Oosterhout wrote:


On Tue, Dec 20, 2005 at 01:35:03PM +0100, DANTE ALEXANDRA wrote:
 

You will find below the explain plan of one of the queries which has 
finished with out of memory. This query contains aggregate and a 
sub-select with 6 joins :
   



1. Firstly, it could be the Hash node. Does the estimated number of
matches in part (96000 rows) match reality?

2. Secondly, looks like lineitem could use an index on partkey. Maybe it
could then use a more efficient join?

Do you have indexes on the relevent columns?

Have a nice day,
 




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] out of memory during query execution

2005-12-20 Thread Tom Lane
Kevin Murphy [EMAIL PROTECTED] writes:
 I'm certainly not an AIX expert, but I remember my 32-bit AIX programs 
 being limited to 256MB of heap by default.

Hmm ... if that's the case then it'd probably explain the problem.
Alexandra had work_mem set to 64MB, so the two hashes and sort would
think they could use 3/4ths of the available heap; given that there
are other needs and our management of memory-use limitations is fairly
sloppy, that could easily translate into running out.

So the answer is either to increase the available heap or reduce
work_mem to a smaller fraction of it.
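Concretely, with a 256MB default heap, a postgresql.conf along these lines would keep three concurrent 64MB sort/hash operations from exhausting it (the exact values are illustrative, not a recommendation; 8.1 takes these settings in kB):

```
# postgresql.conf -- keep per-operation memory well under the 256MB heap
work_mem = 16384                # 16MB instead of 64MB
maintenance_work_mem = 65536    # 64MB instead of 256MB
```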

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] tsearch2: index in different table?

2005-12-20 Thread Mag Gam
Is it possible to store an index in a different table? Instead of
keeping the index column in the same table, I have a separate table for
storing all indexes. I am trying to set up something like this:

UPDATE indexTable SET index1=to_tsvector('default',coalesce(Table1.Data,''));

This is currently not working, but something along these lines. Is this
possible? Is tsearch2 restricted to only one table?

TIA

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] recursive function

2005-12-20 Thread Klein Balázs

hi,

I have worked for a while with SQLServer2000, but I am new to Postgres (and also new to using mailing lists), so please bear with me if I am slower to pick these up.

I am trying to write a function in plpgsql that returns a group of people.

The group is defined by a set of conditions. These conditions are stored in a separate table so that it is configurable by the user.

So far so good, but here comes the trick: one of the conditions could be that people are already members of a parent group, so I need to call this function recursively.

At the top level, groupid = parentgroupid.

So what I am trying to do is basically:

Function dyn_group (groupheaderid)

If groupid = parentgroupid then
    return all people
Else
    Select from dyngroup(groupparentid)
    INNER JOIN people WHERE all sorts of conditions
Endif

So what it should do is look up whether a group has a parent; if yes, look up whether that one has a parent, and so on, until we get a groupid = parentgroupid, where it returns all people. That result is used in the INNER JOIN to return a recordset, that recordset is used in the next INNER JOIN, and so on, until the original function returns the recordset containing the people who are members of all these groups.

In practice, however, this doesn't seem to work, as the function never returns.

Should this work in theory? Is this the recommended approach?

Thanks for the help.

SWK
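The recursion described above is sound as long as every chain of parent links terminates at a group whose groupid equals its parentgroupid; one way a function like this "never returns" is a cycle in the parent links. A small Python sketch of the same walk (the group table and its structure are made up for illustration, not taken from the poster's schema):

```python
# Hypothetical group table: groupid -> (parentgroupid, members).
# Group 1 is top-level because its groupid equals its parentgroupid.
groups = {
    1: (1, {"ann", "bob", "eve"}),
    2: (1, {"ann", "bob"}),
    3: (2, {"ann"}),
}

def dyn_group(groupid, seen=None):
    """Resolve a group's members by walking up the parent chain."""
    seen = set() if seen is None else seen
    if groupid in seen:
        raise ValueError("cycle in parent links -- recursion would never return")
    seen.add(groupid)
    parent, members = groups[groupid]
    if parent == groupid:
        return members           # top level: return all people in it
    # Otherwise intersect with the parent's members (the INNER JOIN step).
    return members & dyn_group(parent, seen)

print(sorted(dyn_group(3)))  # ['ann']
```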

Re: [GENERAL] recursive function

2005-12-20 Thread Jim C. Nasby
On Tue, Dec 20, 2005 at 06:58:41PM +0100, Klein Balázs wrote:
 In practice however this doesn't seem to work as the function never returns.
 
 Should this work in theory? Is this the recommended approach?

I can't think of any reason why it wouldn't work; have you tried adding
RAISE statements to see what's actually happening?

Also, you should look at contrib/ltree; it might be a better way to do
what you're trying to do. Celko's book SQL For Smarties also has some
different ways to store hierarchies; unfortunately it's not handy or I
could give you some google search terms, but the book's worth buying
anyway. Apparently he's also got a book that's dedicated to hierarchies
and graphs, but I don't know how good it is.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Syntax Error Inserting From STDIN?

2005-12-20 Thread Hunter's Lists
I am trying to run the following:

COPY departments (id, issue_id, title, description, feature_type) FROM
stdin;
23  4   Local Buzz  Things to do, people to see, places to go.  aspen

I get back:

COPY departments (id, issue_id, title, description, feature_type) FROM
stdin;

23  4   Local Buzz  Things to do, people to see, places to go.  aspen

ERROR:  syntax error at or near "23" at character 80

What am I doing wrong? I am on 8.1.1...

The table is defined as:

    Column    |          Type          |    Modifiers
--------------+------------------------+------------------
 id           | integer                | not null default
 issue_id     | integer                |
 title        | character varying(255) |
 description  | character varying(255) |
 feature_type | character varying(255) |

Thx.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Syntax Error Inserting From STDIN?

2005-12-20 Thread Scott Marlowe
On Tue, 2005-12-20 at 13:30, Hunter's Lists wrote:
 I am trying to run the following:
 
 COPY departments (id, issue_id, title, description, feature_type) FROM
 stdin;
 23  4   Local BuzzThings to do, people to see, places to go.  aspen
 
 I get back:
 
 COPY departments (id, issue_id, title, description, feature_type) FROM
 stdin;
 
 23  4   Local BuzzThings to do, people to see, places to go.  aspen
 
 ERROR:  syntax error at or near 23 at character 80

Seeing that character 80 I'm gonna guess this is a CR/LF issue.  I.e.
pgsql on your machine is seeing the 23 as being on the same line as the
copy departments statement.
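COPY's text format is strict about whitespace: by default, columns must be separated by exactly one tab, and rows by the platform's newline, so data reflowed with spaces or carried over with CR/LF endings produces exactly this kind of syntax error. A small Python sketch of the cleanup involved (the field values here are just the row from the thread; how the original row was mangled is an assumption):

```python
def to_copy_row(fields):
    """Join column values with single tabs, as COPY's text format expects."""
    return "\t".join(fields) + "\n"

row = to_copy_row(["23", "4", "Local Buzz",
                   "Things to do, people to see, places to go.", "aspen"])
print(repr(row))  # tab-delimited, newline-terminated
```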

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Syntax Error Inserting From STDIN?

2005-12-20 Thread Hunter's Lists
Interesting.

How would I go about solving that?

I inserted an extra line between the two, no dice.

 From: Scott Marlowe [EMAIL PROTECTED]
 Date: Tue, 20 Dec 2005 13:53:37 -0600
 To: Hunter's Lists [EMAIL PROTECTED]
 Cc: PostgreSQL pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Syntax Error Inserting From STDIN?
 
 On Tue, 2005-12-20 at 13:30, Hunter's Lists wrote:
 I am trying to run the following:
 
 COPY departments (id, issue_id, title, description, feature_type) FROM
 stdin;
 23  4   Local BuzzThings to do, people to see, places to go.  aspen
 
 I get back:
 
 COPY departments (id, issue_id, title, description, feature_type) FROM
 stdin;
 
 23  4   Local BuzzThings to do, people to see, places to go.  aspen
 
 ERROR:  syntax error at or near 23 at character 80
 
 Seeing that character 80 I'm gonna guess this is a CR/LF issue.  I.e.
 pgsql on your machine is seeing the 23 as being on the same line as the
 copy departments statement.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Syntax Error Inserting From STDIN?

2005-12-20 Thread Tom Lane
Hunter's Lists [EMAIL PROTECTED] writes:
 I get back:

 COPY departments (id, issue_id, title, description, feature_type) FROM
 stdin;

 23  4   Local BuzzThings to do, people to see, places to go.  aspen

 ERROR:  syntax error at or near 23 at character 80

How are you feeding this into Postgres exactly?  Scott's right that it
looks like the connection isn't getting switched into COPY mode at the
right instant, but you haven't given us any hint about why that might
be happening.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Syntax Error Inserting From STDIN?

2005-12-20 Thread Scott Marlowe
quick answer, try a different editor.

Are you editing in the same environment as the database is in, or are
you editing on windows and feeding the copy data in on another platform?

On Tue, 2005-12-20 at 14:40, Hunter's Lists wrote:
 Interesting.
 
 How would I go about solving that?
 
 I inserted an extra line between the two, no dice.
 
  From: Scott Marlowe [EMAIL PROTECTED]
  Date: Tue, 20 Dec 2005 13:53:37 -0600
  To: Hunter's Lists [EMAIL PROTECTED]
  Cc: PostgreSQL pgsql-general@postgresql.org
  Subject: Re: [GENERAL] Syntax Error Inserting From STDIN?
  
  On Tue, 2005-12-20 at 13:30, Hunter's Lists wrote:
  I am trying to run the following:
  
  COPY departments (id, issue_id, title, description, feature_type) FROM
  stdin;
  23  4   Local Buzz  Things to do, people to see, places to go.  aspen
  
  I get back:
  
  COPY departments (id, issue_id, title, description, feature_type) FROM
  stdin;
  
  23  4   Local Buzz  Things to do, people to see, places to go.  aspen
  
  ERROR:  syntax error at or near "23" at character 80
  
  Seeing that character 80 I'm gonna guess this is a CR/LF issue.  I.e.
  pgsql on your machine is seeing the 23 as being on the same line as the
  copy departments statement.
 
 



Re: [GENERAL] Syntax Error Inserting From STDIN?

2005-12-20 Thread Hunter's Lists
Everything was on OS X.

Looks like it was a problem with spaces vs. tabs. Anyway, I went through and
fixed all the lines and everything went in.
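The spaces-vs-tabs confusion can be caught mechanically before loading: COPY's text format splits each line on literal tab characters, so a line whose fields are separated by spaces parses as a single field. A small Python sketch (the helper is hypothetical; the field list mirrors the thread's departments table):

```python
# Sketch: verify each COPY data line has the expected number of
# tab-separated fields before loading it.

EXPECTED_FIELDS = 5  # id, issue_id, title, description, feature_type

def check_copy_line(line: str, expected: int = EXPECTED_FIELDS) -> bool:
    """COPY's text format splits on literal tabs, not runs of spaces."""
    return len(line.rstrip("\n").split("\t")) == expected

good = "23\t4\tLocal Buzz\tThings to do, people to see, places to go.\taspen"
bad = "23  4   Local Buzz  Things to do  aspen"  # space-separated: one field
print(check_copy_line(good), check_copy_line(bad))  # True False
```

An editor configured to expand tabs into spaces will silently corrupt COPY data this way, which is why switching editors (or disabling tab expansion) fixed it here.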

We had a strange problem restoring a 8.0.4 dump to a 8.1.1 server and this
was the last of the data that had to be re-imported.

 From: Scott Marlowe [EMAIL PROTECTED]
 Date: Tue, 20 Dec 2005 16:41:32 -0600
 To: Hunter's Lists [EMAIL PROTECTED]
 Cc: PostgreSQL pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Syntax Error Inserting From STDIN?
 
 quick answer, try a different editor.
 
 Are you editing in the same environment as the database is in, or are
 you editing on windows and feeding the copy data in on another platform?
 
 On Tue, 2005-12-20 at 14:40, Hunter's Lists wrote:
 Interesting.
 
 How would I go about solving that?
 
 I inserted an extra line between the two, no dice.
 
 From: Scott Marlowe [EMAIL PROTECTED]
 Date: Tue, 20 Dec 2005 13:53:37 -0600
 To: Hunter's Lists [EMAIL PROTECTED]
 Cc: PostgreSQL pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Syntax Error Inserting From STDIN?
 
 On Tue, 2005-12-20 at 13:30, Hunter's Lists wrote:
 I am trying to run the following:
 
 COPY departments (id, issue_id, title, description, feature_type) FROM
 stdin;
 23  4   Local Buzz  Things to do, people to see, places to go.  aspen
 
 I get back:
 
 COPY departments (id, issue_id, title, description, feature_type) FROM
 stdin;
 
 23  4   Local Buzz  Things to do, people to see, places to go.  aspen
 
 ERROR:  syntax error at or near "23" at character 80
 
 Seeing that character 80 I'm gonna guess this is a CR/LF issue.  I.e.
 pgsql on your machine is seeing the 23 as being on the same line as the
 copy departments statement.
 
 
 





[GENERAL] Newbie Question: FAQ for database optimization?

2005-12-20 Thread Alexander Scholz

Hi,

is there a newbie's FAQ / book / link on how to optimize databases with 
PostgreSQL?


Background: Customer has the Windows* (sorry <g>) Postgres 8.1.0 
standard installation out of the box. A table has 2.5 million records. No 
indexes are defined; the primary key (sequence) does exist. In pgAdmin, 
select count(*) takes over 30 seconds, and an update affecting 70,000 
records takes minutes... I am sure PostgreSQL could do better; we just 
need to tune the database. (I hope so, at least!)


What action and/or reading can you recommend? (We quickly need some 
'wow' effects to keep the customer happy <sigh>.)


Thanx,

Alexander.

*) sorry, I don't have server's hardware spec. available right now, but 
the MSSQL2005 instance on it does the same things in a few seconds... ;-)




[GENERAL] Questions about server.

2005-12-20 Thread max chin
Dear Sir,

My name is Chia. I'm a Malaysian student still pursuing my computer 
course, currently working at an IT company for industry training. I have 
some questions about servers.

1.) What I know is that when too many users access a database at the same 
time, the database server slows down. My question is: how can the database 
server be made to process requests faster even when a lot of users access 
the database at the same time? What kind of technology is needed?

2.) In my company, the database server and web server run on one machine. 
My question is: how do I split them so that each server runs on its own 
machine? Can you show me the process of implementation?

3.) How do I automatically back up the database from the host machine to 
another machine every hour, and every time the database is updated?

4.) Sometimes the IIS web server is unavailable and the web pages can't 
be displayed for clients. Can you tell me the possible reasons and the 
methods to overcome the problem?

Thank you. I would appreciate an answer as fast as possible, because I 
need these answers urgently. I'm grateful for your reliable answers and 
your guidance. Thanks again.

With regards,
Chia JH

Re: [GENERAL] About Blobs in postgresSQL

2005-12-20 Thread kishore mukati
Hi Leonel,

At first, thanks for the help.

I tried the link you sent me, but now I am getting a new error, below:

this is exception in MyMailTracker.main() java.sql.SQLException: ERROR: 
column "attachment" is of type bytea but expression is of type integer

One more thing: my PostgreSQL version is 7.4, so it must support 
setBinaryStream() and setBytes() as per the PostgreSQL specification. 
I also tried with setBytes(), but it didn't work.

thanks
regards
kishore mukati

Leonel Nunez [EMAIL PROTECTED] wrote:

 kishore mukati wrote:
 Hi, I am facing a problem when I am trying to store images and files 
 in PostgreSQL, using a bytea datatype field in a table and 
 setBinaryStream(). It shows me the exception "inputstream is supported". 
 Please send me a solution if anybody has one.
 thanks, kishore mukati

 No problems here. This is what I used as a reference:
 http://jdbc.postgresql.org/documentation/80/binary-data.html
 Leonel

Re: [GENERAL] About Blobs in postgresSQL

2005-12-20 Thread Leonel Nunez

kishore mukati wrote:


Hi Leonel
 
 at first thanks for help

I try with that link you send to me
but now I am getting a new error
below

this is exception in MyMailTracker.main() java.sql.SQLException: ERROR: 
column "attachment" is of type bytea but expression is of type integer
 
and one more thing my postgreSQL version is 7.4 so it must support

setBinaryStream() ;
and
setBytes()
as per the postgreSQL specification
I also tried with
setBytes()
but it didn't work.
 
thanks

regards
kishore mukati






I've tested on a postgresql 7.4.8 and it works fine.

The error says that you are trying to store an integer in a bytea 
field, which means there's a mismatch between the setBinaryStream and the 
prepareStatement. Check that the parameter position in the 
prepareStatement matches the one in ps.setBinaryStream.




leonel
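Leonel's point about matching the placeholder position to the binary column can be illustrated outside JDBC. This sketch uses Python's stdlib sqlite3 as a stand-in (table and column names are invented for the demo); the idea is the same: the parameter you bind the bytes to must line up with the BLOB/bytea column, or the driver sends the wrong type:

```python
import sqlite3

# Illustration (sqlite3 stands in for JDBC/PostgreSQL): binary data must
# be bound to the placeholder that corresponds to the BLOB column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mails (id INTEGER, attachment BLOB)")

payload = b"\x89PNG fake image bytes"
# Correct: parameter 2 lines up with the attachment column; binding the
# integer there instead would store the wrong type in the BLOB column.
conn.execute("INSERT INTO mails (id, attachment) VALUES (?, ?)", (1, payload))

stored = conn.execute("SELECT attachment FROM mails WHERE id = 1").fetchone()[0]
print(stored == payload)  # True
```

In the JDBC case, the exception quoted above is exactly what a swapped parameter index produces: the integer ends up targeted at the bytea column.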


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Newbie Question: FAQ for database optimization?

2005-12-20 Thread A. Kretschmer
On 20.12.2005 at 22:21:54 +0100, Alexander Scholz wrote the following:
 Hi,
 
 is there a newbie's FAQ / book / link for howto optimize databases with 
 PostgreSQL?

07:12 <rtfm_please> For information about tuning
07:12 <rtfm_please> see http://www.powerpostgresql.com
07:12 <rtfm_please> or http://www.powerpostgresql.com/PerfList
07:12 <rtfm_please> or http://www.varlena.com/varlena/GeneralBits/116.php



 
 Background: Customer has the Windows* (sorry <g>) Postgres 8.1.0 standard 
 installation out of the box. A table has 2.5 million records. No indexes 
 defined, primary key (sequence) does exist. In pgAdmin select count(*) 

bad & ugly


 What action and/or reading can you recommend? (We quickly need some 'wow' 
 effects to keep the customer happy <sigh>.)

Create suitable indexes.

07:14 <akretschmer> ??index
07:14 <rtfm_please> For information about index
07:14 <rtfm_please> see 
http://www.postgresql.org/docs/current/static/indexes-expressional.html
07:14 <rtfm_please> or 
http://www.postgresql.org/docs/current/static/indexes-partial.html
07:14 <rtfm_please> or 
http://www.postgresql.org/docs/current/static/indexes.html


Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47212,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 
