Re: [GENERAL] tsearch2 for alphabetic character strings codes

2005-09-24 Thread Oleg Bartunov

Ron,

Probably you need to write a custom parser; tsearch2 supports
different parsers.

Oleg
On Fri, 23 Sep 2005, Ron Mayer wrote:



I'm looking for a way to search for strings within
documents, very much as tsearch2 does, but my strings
are codes rather than natural-language words, so I'm having
a tough time making the current tsearch2 configurations work with them.

For example, using tsearch to search for codes like
 '31.03(e)(2)(A)'
in a set of documents is tricky because tsearch seems
to treat most of the punctuation as word separators.

 fli=# select
 fli-#  to_tsvector('default','31.03(e)(2)(A)'),
 fli-#  to_tsvector('simple','31.03(e)(2)(A)');

       to_tsvector       |         to_tsvector
 -----------------------+-----------------------------
  '2':3 'e':2 '31.03':1 | '2':3 'a':4 'e':2 '31.03':1
 (1 row)


I see that tsearch2 allows different configurations
that apparently differ in how they parse strings.

I guess what I'm looking for is a configuration
that's even simpler than 'simple': one that only breaks
up strings on whitespace and doesn't use any natural-language
dictionaries. I was hoping I could download
or define such a configuration, but didn't see any
obvious documentation on how to set one up.

Does this sound like a good approach (and if so, could
someone please point me in the right direction), or
are there other things I should be looking at?

  Ron




Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Re: [GENERAL] tsearch2 for alphabetic character strings codes

2005-09-24 Thread Andrew J. Kopciuch
On Saturday 24 September 2005 00:09, Oleg Bartunov wrote:
 Ron,

 Probably you need to write a custom parser; tsearch2 supports
 different parsers.


To expand somewhat on what Oleg mentioned, you can find a HOWTO on writing a
custom parser here:

http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/HOWTO-parser-tsearch2.html

This example might be exactly what you are looking for; I did not look into it
too closely myself, but it appears to just split on whitespace.
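
For the very simplest whitespace-only case there may even be a shortcut that
needs no custom parser at all: the raw tsvector input syntax splits only on
whitespace and applies no dictionaries, so punctuation-heavy codes survive as
single lexemes. A minimal sketch, assuming tsearch2's tsvector type accepts
the same input as later releases:

  SELECT '31.03(e)(2)(A) 42.17(b)'::tsvector;
  -- expected, roughly: '31.03(e)(2)(A)' '42.17(b)'
  -- Note: a matching tsquery lexeme would have to be quoted, because
  -- parentheses are grouping operators in tsquery syntax.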

There is lots of documentation, examples, help, and other goodies for tsearch2 
here:

http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/


HTH,


Andy



[GENERAL] Slow connection to the database

2005-09-24 Thread Poul Møller Hansen

I have a problem with slow connects to a database server.

In a little Java application I am starting 100 threads, each one making
a database connection. The program is running on my laptop.

When connecting to one server (AMD Opteron 2 GHz with Solaris,
pgsql version 8.0.3), the 100 connections take 455 seconds!

Compared to an old PC (AMD K6-II 500 MHz with FreeBSD and pgsql version
8.0.3), the 100 connections take 6 seconds.


When running the client program on the Opteron itself, the 100 connections
take only 1 second. In general the network speed to/from the server is OK,
and they are on the same LAN.
Once connected, insert and select speeds are fine.

Does anyone have an idea where to look for the problem?


Regards, Poul




[GENERAL] Problems creating view

2005-09-24 Thread Martín Marqués
I'm having trouble creating a view here. In particular, each time I try to
create a view, I get a FATAL error and the database server restarts.

Here are the logs:

LOG:  statement: CREATE VIEW biblioteca.permisosoperadorprocesos AS
SELECT uag.admin, ugp.proceso
FROM unionadmingrupo uag, uniongrupoproceso ugp
WHERE uag.grupo = ugp.grupo;
LOG:  server process (PID 12716) was terminated by signal 10
LOG:  terminating any other active server processes
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted at 2005-09-24 09:06:08 ART
LOG:  checkpoint record is at 0/EBE6C90
LOG:  redo record is at 0/EBE6C90; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 316713; next OID: 781400
LOG:  database system was not properly shut down; automatic recovery is
in progress
LOG:  connection received: host=[local] port=
FATAL:  the database system is starting up
LOG:  redo starts at 0/EBE6CD0
LOG:  unexpected pageaddr 0/8C0 in log file 0, segment
14, offset 12582912
LOG:  redo done at 0/EBFF350
LOG:  database system is ready


I can't create any VIEW at all. I'm using the latest Debian testing PostgreSQL
(8.0.3):

[EMAIL PROTECTED]:/space/home/martin$ dpkg -l postgresql-8.0
Desired=Unknown/Install/Remove/Purge/Hold

ii postgresql-8.0  8.0.3-15 object-relational SQL database, version 8.0 server

I don't know whether to report a bug to the Debian folks. I upgraded
PostgreSQL just yesterday.

-- 
 09:04:54 up 7 days, 14:22,  2 users,  load average: 1.12, 1.68, 1.35
-
Lic. Martín Marqués |   SELECT 'mmarques' || 
Centro de Telemática|   '@' || 'unl.edu.ar';
Universidad Nacional|   DBA, Programador, 
del Litoral |   Administrador
-



Re: [GENERAL] Slow connection to the database

2005-09-24 Thread Douglas McNaught
Poul Møller Hansen [EMAIL PROTECTED] writes:

 I have a problem with slow connects to a database server.

 In a little Java application I am starting 100 threads, each one making
 a database connection. The program is running on my laptop.

 When connecting to one server (AMD Opteron 2 GHz with Solaris,
 pgsql version 8.0.3), the 100 connections take 455 seconds!

Is it possible that you've got DNS timeouts happening or something
like that?  Though I don't think PG does reverse DNS lookups,
especially not by default. 
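
One cheap way to check that particular theory from psql: log_hostname is the
setting that makes the server do a reverse DNS lookup for each connection it
logs, and it defaults to off.

  SHOW log_hostname;
  SHOW log_connections;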

Watching the connection process with 'tcpdump' to see where the stall
happens might be informative.

-Doug



[GENERAL] Extraordinarily slow!!

2005-09-24 Thread Justin R. Smith
I'm running PostgreSQL 8.0.3 under Gentoo Linux and find that all actions
take at least a MINUTE (even dropdb and createdb). During this time, the
postgres account is locked up to the extent that one cannot even su to
it. (The su succeeds when the db operation completes.)


I assume something is misconfigured and Postgres is waiting for 
something with each action I perform and then timing out. Unfortunately, 
the system is unusable in this form...


It's also interesting that the actions eventually complete properly --- 
they just take an incredibly long time. I ran the same version of 
postgres under FreeBSD and it was lightning fast.


No error messages appear in any of the logs --- just the normal messages 
one expects.


Any suggestions?



Re: [GENERAL] Problems creating view

2005-09-24 Thread Tom Lane
Martín Marqués martin@bugs.unl.edu.ar writes:
 I can't create any VIEW at all. I'm using the latest Debian testing
 PostgreSQL (8.0.3):

Maybe you got a corrupt download?

regards, tom lane



Re: [GENERAL] Backend crash with user defined aggregate

2005-09-24 Thread Tom Lane
Ron Mayer [EMAIL PROTECTED] writes:
 Fails the same way in each case.  It was doing a HashAggregate last time; but
 I just reproduced it now with a GroupAggregate.

 fli=# explain analyze select ext_doc_id,strcat_agg(nam)||+strcat_agg(val) 
 from facet_raw group by ext_doc_id;
 FATAL:  terminating connection due to administrator command

Hmm.  Now that I look at it again, that message implies that something
sent SIGTERM to the backend.  You need to figure out where that signal
is coming from and why.  I think I have heard of platforms on which
SIGTERM is generated if a process exceeds a preset run-time limit;
is it possible that you've started your postmaster with finite limits
on process runtime?  What is the platform exactly, anyway?

regards, tom lane



Re: [GENERAL] Extraordinarily slow!!

2005-09-24 Thread Tom Lane
Justin R. Smith [EMAIL PROTECTED] writes:
 I'm running PostgreSQL 8.0.3 under Gentoo Linux and find that all actions
 take at least a MINUTE (even dropdb and createdb). During this time, the
 postgres account is locked up to the extent that one cannot even su to
 it. (The su succeeds when the db operation completes.)

Bizarre.

 Any suggestions?

Don't use Gentoo?  Seriously, that distribution is completely
unsupportable because no one else has the faintest idea what you are
really running.  You've probably got some system component that is
either broken in itself or incompatible with some other component,
but there's no way to tell which.

regards, tom lane



Re: [GENERAL] Extraordinarily slow!!

2005-09-24 Thread Joshua D. Drake




Bizarre.


Question:

Does the problem happen when you just type psql?
What if you pass the host option, e.g. psql -h 127.0.0.1?
What type of auth are you running?


Don't use Gentoo?  Seriously, that distribution is completely
unsupportable because no one else has the faintest idea what you are
really running.  You've probably got some system component that is
either broken in itself or incompatible with some other component,
but there's no way to tell which.


I would have to agree with Tom here. Gentoo makes a great
development machine and a fun toy, but if you don't want
unknowns in your environment, Gentoo will easily introduce
them.

You really should look at something like Fedora Core or Ubuntu
if you want FREE. If you don't mind paying, then SuSE is a good choice.
If you want the RHEL look and feel but don't want to pay, look at CentOS.

Sincerely,

Joshua D. Drake







regards, tom lane





--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/




Re: [GENERAL] Extraordinarily slow!!

2005-09-24 Thread Tony Caduto

Tom Lane wrote:


Justin R. Smith [EMAIL PROTECTED] writes:
 

I'm running PostgreSQL 8.0.3 under Gentoo Linux and find that all actions
take at least a MINUTE (even dropdb and createdb). During this time, the
postgres account is locked up to the extent that one cannot even su to
it. (The su succeeds when the db operation completes.)
   



Bizarre.

 


Any suggestions?
   



Don't use Gentoo?  Seriously, that distribution is completely
unsupportable because no one else has the faintest idea what you are
really running.  You've probably got some system component that is
either broken in itself or incompatible with some other component,
but there's no way to tell which.

regards, tom lane



 

I have been running PostgreSQL on Gentoo for several years and it works
great, better than RHEL.
You must have seriously messed up your install somehow.  I suggest you
install on a clean system.
Did you fool around with the kernel scheduler by chance?  Right out of
the box Gentoo works very well.


I highly recommend Gentoo; the trick for a server install is to use just
the base system and not install unneeded desktop managers, etc.  I have
current uptimes of 250+ days on all my Gentoo boxes, and we import and
update 350,000 records every day for a large financial application that
uses 8.0.3 as the backend.


You did increase the kernel's shared memory, right, and properly set up
your postgresql.conf file for performance?

Are you using the autovacuum daemon?
There are a few guides available on how to set up your postgresql.conf
file for performance; just do a search on Google.
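
Just to make the postgresql.conf point concrete, a couple of illustrative
8.0-era starting values (the numbers here are made up and depend entirely on
how much RAM the box has):

  shared_buffers = 10000          # 8 kB pages in 8.0, so roughly 80 MB;
                                  # raise the kernel's shmmax to match
  effective_cache_size = 50000    # planner hint only, roughly 400 MB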


Tony Caduto
AM Software Design
Home of PG Lightning Admin (PGLA)
http://www.amsoftwaredesign.com




Re: [GENERAL] Extraordinarily slow!!

2005-09-24 Thread Peter Fein
Tom Lane wrote:
 Don't use Gentoo?  Seriously, that distribution is completely
 unsupportable because no one else has the faintest idea what you are
 really running.  You've probably got some system component that is
 either broken in itself or incompatible with some other component,
 but there's no way to tell which.

I'm not looking to start a distro war, but we're successfully running
postgres on Gentoo on a handful of machines, both Intel and AMD, without
trouble.  I suppose compiling by hand from source is equally
unsupportable.

As for not knowing what's running, it's fairly trivial to deduce the
compile time options based on USE flags.  Heck, you've even got full
compile/install logs if you want them...

Just saying. ;)

-- 
Peter Fein [EMAIL PROTECTED] 773-575-0694

Basically, if you're not a utopianist, you're a schmuck. -J. Feldman



Re: [GENERAL] Slow connection to the database

2005-09-24 Thread Poul Møller Hansen


Is it possible that you've got DNS timeouts happening or something
like that?  Though I don't think PG does reverse DNS lookups,
especially not by default. 


Watching the connection process with 'tcpdump' to see where the stall
happens might be informative.

-Doug


DNS was also my first thought, but the connection isn't being logged, so why
do a reverse lookup? And with my client's IP address in /etc/hosts the result
is the same.


I have now also tried on a Sun UltraSPARC running Solaris 10
and an Intel EM64T running FreeBSD 6-BETA4, with the same discouraging
result. On FreeBSD I have tried both compiling by hand and installing from ports.


All the servers where it isn't working properly have in common that they
run PgSQL 8.0.3 compiled with gcc 3.4.4 on 64-bit CPUs.

It works well on an old K6-II running FreeBSD 4.11, with PgSQL 8.0.3
compiled with gcc 2.95.4.



Poul





Re: [GENERAL] Slow connection to the database

2005-09-24 Thread Tom Lane
Poul Møller Hansen [EMAIL PROTECTED] writes:
 Is it possible that you've got DNS timeouts happening or something
 like that?  Though I don't think PG does reverse DNS lookups,
 especially not by default. 

 DNS was also my first thought, but the connection isn't being logged, so why
 do a reverse lookup? And with my client's IP address in /etc/hosts the
 result is the same.

How do you know the delay isn't in the client's lookup of the server?

I concur with Doug's suggestion that you need to trace the connection
process to find out what's slow.  Personally I'd try strace (or local
equivalent) first; the odds are that you can spot the issue at the
kernel-call level.  You'd probably need to trace both the client and
server sides of the operation.

regards, tom lane



Re: [GENERAL] Backend crash with user defined aggregate

2005-09-24 Thread Ron Mayer

Tom Lane wrote:


Hmm.  Now that I look at it again, that message implies that something
sent SIGTERM to the backend.  You need to figure out where that signal
is coming from and why.  ...
is it possible that you've started your postmaster with finite limits
on process runtime?  What is the platform exactly, anyway?


Thanks, that's something for me to look into Monday.  I don't think
there was a process runtime limit, but there might have been a memory limit.
The platform was SuSE 9.1 on x86, but with so many updates and upgrades
from so many sources (rpm, apt, rcd, yast, tar.gz) that it's hard to
tell what it really is anymore.  Monday I'll try to reproduce it on a
cleaner system and/or look for the SIGTERM with strace or something.



Re: [GENERAL] How many insert + update should one transaction

2005-09-24 Thread Jim C. Nasby
Just remember the first rule of performance tuning: don't.

Unless you *know* having the dead rows will be an issue, you will almost
certainly be best off going the simple, straightforward route.

On Fri, Sep 23, 2005 at 10:49:00AM -0500, Scott Marlowe wrote:
 On Fri, 2005-09-23 at 05:51, Yonatan Ben-Nes wrote:
  Hi all,
  
  Every few days I need to DELETE all of the content of a few tables and
  INSERT new data into them.
  The amount of new data is about 5 million rows, and each row gets about 3
  queries (INSERT + UPDATE).
  Now, because I need the old data to be displayed until all of the new data
  is available, I'm doing the whole process of deleting the old content and
  inserting the new in one transaction.
  Should I divide the insertion so I'll insert the new data into a temporary
  table, and commit the transaction every 100, 1000, 1 whatever queries?
  Or maybe it doesn't matter to the server what the size of the transaction
  is, and it's OK to handle such a process in one transaction?
 
 The only possible issue would be one of capacity, and possibly having a
 lot of dead tuples laying about.
 
 If you have 5 million rows, and you update every one, then you now have
 5 million live and 5 million dead tuples in your database.  A Vacuum
 full will take quite a while.
 
 If your fsm is set large enough, then as long as you vacuum (regular,
 non-full vacuum) between these transactions, the 5 million dead
 tuples should get reused.  However, the performance of your database
 for selects and such will be like that of a 10 million row database.
 
 Given that you NEED to have all 10 million tuples in the database at the
 same time, the use of a temp / holding table would allow you to truncate
 the main table, move everything into the main table, and then drop /
 truncate the temp / holding table.
 
 If you truncate the main table, then initiate another transaction to
 move the data into it, it shouldn't be so bloated, but the down side is
 you'll have a period of time when it appears empty to users.
 
 So, the real question is whether or not you can afford to have an empty
 table at some point in the process.
 
 If you can't, then either method (running the whole transaction against
 the one table or using the temp / holding table) are equivalent.  If you
 can, there should be a noticeable gain from the method of truncating the
 main table outside the update transaction.
 
 If you need that table to always have the old or new tuples (i.e. never
 be empty) and you can afford the very lengthy vacuum full on the 5
 million dead rows, then that method will give you the best select
 performance the rest of the day.
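
Spelled out, the holding-table variant described above looks something like
this sketch (main_table, holding_table, and the COPY path are hypothetical
names; CREATE TABLE ... LIKE needs 7.4 or later):

  -- Load the new data outside the critical window.
  CREATE TABLE holding_table (LIKE main_table);
  COPY holding_table FROM '/path/to/new_data';  -- or client-side \copy / INSERTs

  -- The swap: truncating avoids the 5 million dead tuples, at the cost of
  -- a window in which main_table looks empty to readers.
  TRUNCATE main_table;
  INSERT INTO main_table SELECT * FROM holding_table;

  DROP TABLE holding_table;
  ANALYZE main_table;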
 
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



[GENERAL] What is an 'unused item pointer'

2005-09-24 Thread Jim C. Nasby
I can't seem to find info about this in the docs, or on the web.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] What is an 'unused item pointer'

2005-09-24 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 I can't seem to find info about this in the docs, or on the web.

http://developer.postgresql.org/docs/postgres/storage-page-layout.html

Currently, when a tuple is reclaimed by VACUUM, we just mark its item
pointer as unused (and hence recyclable).  I think it might be safe to
decrease pd_lower if there are unused pointers at the end of the page's
pointer array, but we don't currently do that.  In any case there could
still be unused pointers embedded within the array.

regards, tom lane



Re: [GENERAL] What is an 'unused item pointer'

2005-09-24 Thread Jim C. Nasby
On Sat, Sep 24, 2005 at 07:19:10PM -0400, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  I can't seem to find info about this in the docs, or on the web.
 
 http://developer.postgresql.org/docs/postgres/storage-page-layout.html
 
 Currently, when a tuple is reclaimed by VACUUM, we just mark its item
 pointer as unused (and hence recyclable).  I think it might be safe to
 decrease pd_lower if there are unused pointers at the end of the page's
 pointer array, but we don't currently do that.  In any case there could
 still be unused pointers embedded within the array.

I suspect that if the free space in pages isn't collapsed during vacuum
then it probably doesn't matter, but I don't know if that's the case. If
it is collapsed then my intuition is that tables that are fairly narrow
would benefit from adjusting pd_lower because it wouldn't take too many
item pointers to equate to a tuple. This would probably be even more
pronounced if the tuples were generally small but could vary to a larger
size.

Sounds like a good newbie TODO?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



[GENERAL] createdb problem

2005-09-24 Thread Sterpu Victor

I use PostgreSQL 7.4.8.
I need to delete and recreate a database very often, because I do some
tests in it.

After a few months the createdb command became very slow.

How can I make it go fast again?
Thank you.




Re: [GENERAL] Extraordinarily slow!!

2005-09-24 Thread Tony Caduto

Joshua D. Drake wrote:





Bizarre.


Question:

Does the problem happen when you just type psql?
What if you pass the host option, e.g. psql -h 127.0.0.1?
What type of auth are you running?


Don't use Gentoo?  Seriously, that distribution is completely
unsupportable because no one else has the faintest idea what you are
really running.  You've probably got some system component that is
either broken in itself or incompatible with some other component,
but there's no way to tell which.


I would have to agree with Tom here. Gentoo makes a great
development machine and a fun toy, but if you don't want
unknowns in your environment, Gentoo will easily introduce
them.

You really should look at something like Fedora Core or Ubuntu
if you want FREE. If you don't mind paying, then SuSE is a good choice.
If you want the RHEL look and feel but don't want to pay, look at CentOS.

Sincerely,

Joshua D. Drake


I guess I would agree for a Linux newbie, but the only unknowns are
those that the user would introduce.
Like I said before, Gentoo makes an extremely stable and powerful OS, but
it must be set up properly, and in my experience, if you follow the
install docs and don't mess around with things like the kernel scheduler
and whatnot, PostgreSQL runs extremely well and is super easy to update
with Portage's emerge.


Later,

Tony




Re: [GENERAL] PQtrace doesn't work

2005-09-24 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian pgman@candle.pha.pa.us writes:
  Daniel Verite wrote:
  So it looks like the problem would be DLL-related?
 
  Is there a problem with a DLL writing to a file descriptor opened by
  application code?  I would think not, but perhaps.
 
 Hmm  malloc/free are broken in exactly that way in Windows DLLs,
 maybe stdio has the same kind of issue?  I'd think this would be pretty
 well known if true, though.

Ah, I have found the cause of the crash, and added documentation about
the cause:

On Win32, if libpq and the application are
compiled with different flags, this function call will crash the
application because the internal representation of the FILE
pointers differ.

While such a mismatch is a problem on all platforms, it is more common
on Win32 where the FILE structure changes for debug, for example.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] Extraordinarily slow!!

2005-09-24 Thread Tony Caduto

You really should look at something like Fedora Core or Ubuntu
if you want FREE. If you don't mind paying, then SuSE is a good choice.
If you want the RHEL look and feel but don't want to pay, look at CentOS.

Sincerely,

Joshua D. Drake







Just as an FYI, you don't have to pay for SuSE anymore, unless you want
support:

http://www.opensuse.org

The 10.0 release will be next month.

--
Tony Caduto
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql 8.x 





Re: [GENERAL] createdb problem

2005-09-24 Thread Jim C. Nasby
My guess is you need to vacuum. Try a vacuumdb -a.

On Sun, Sep 25, 2005 at 03:46:38AM +0300, Sterpu Victor wrote:
 I use PostgreSQL 7.4.8.
 I need to delete and recreate a database very often, because I do some
 tests in it.
 After a few months the createdb command became very slow.
 
 How can I make it go fast again?
 Thank you.
 
 
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] createdb problem

2005-09-24 Thread Tom Lane
Sterpu Victor [EMAIL PROTECTED] writes:
 I use PostgreSQL 7.4.8.
 I need to delete and recreate a database very often, because I do some
 tests in it.
 After a few months the createdb command became very slow.
 How can I make it go fast again?

The only explanation that comes to mind is that you haven't ever
vacuumed pg_database ... at this point you probably need a VACUUM
FULL, and maybe REINDEX too, to get it back down to a reasonable
size.
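
Concretely, that cleanup would look something like the following, run as a
superuser (and on 7.4 reindexing a shared system catalog such as pg_database
may additionally require starting the server in standalone mode):

  VACUUM FULL VERBOSE pg_database;
  REINDEX TABLE pg_database;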

regards, tom lane



Re: [GENERAL] What is an 'unused item pointer'

2005-09-24 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 On Sat, Sep 24, 2005 at 07:19:10PM -0400, Tom Lane wrote:
 Currently, when a tuple is reclaimed by VACUUM, we just mark its item
 pointer as unused (and hence recyclable).  I think it might be safe to
 decrease pd_lower if there are unused pointers at the end of the page's
 pointer array, but we don't currently do that.

 Sounds like a good newbie TODO?

Uh, no, because the $64 question is whether it actually *is* safe, or
perhaps would be safe with more locking than we do now.  I'm not sure of
the answer myself, and would have zero confidence in a newbie's answer.

Decreasing pd_lower would definitely be a win if we can do it free or
cheaply.  If it requires significant additional locking overhead, then
maybe not.

regards, tom lane
