Re: [GENERAL] MS SQL - PostgreSQL

2005-09-14 Thread Tino Wildenhain
On Wednesday, 2005-09-14 at 13:02 +0700, Irfan Syukur wrote:
 Dear Tino,
 
 Thanks for your answer.
 
 Can PostGreSQL handle dynamic 'stored procedure'

whatever that means? :)
Postgres stored functions are not fully equivalent to
MSSQL stored procedures. Instead you use them just like
regular database functions or even tables.
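A minimal sketch of what such a stored function can look like (this
assumes the plpgsql language has already been installed in the
database with CREATE LANGUAGE plpgsql):

```sql
-- A trivial PL/pgSQL function; call it like any other function.
CREATE FUNCTION add_one(integer) RETURNS integer AS '
BEGIN
    RETURN $1 + 1;
END;
' LANGUAGE plpgsql;

SELECT add_one(41);  -- returns 42
```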

 Can you give an example of writing 'stored procedure' in PostgreSQL ??

Did you have a look into the documentation?

http://www.postgresql.org/docs/8.0/static/

and specifically:

http://www.postgresql.org/docs/8.0/static/xplang.html

try it - the postgres documentation is really good!

btw, this would be worth reading (not only for you ;))
http://www.netmeister.org/news/learn2quote.html

HTH
Tino Wildenhain

PS: your mail server currently seems to have a problem; I got an
error.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] CREATE TEMP TABLE AS SELECT/ GET DIAGNOSTICS ROW_COUNT again

2005-09-14 Thread Ilja Golshtein
Hello!

According to Tom's message
http://archives.postgresql.org/pgsql-general/2005-06/msg00476.php
the 8.1 beta is supposed to return the number of rows affected by
CREATE ... AS SELECT or SELECT ... INTO.

The patch is in place, though PQcmdTuples returns nothing. That is not
a big surprise, considering PQcmdTuples only handles INSERT, DELETE,
FETCH and MOVE. The real pain is that cmdStatus does not contain the
number of affected rows.

Any suggestions? Is it possible to calculate number of rows
in newly created table without explicit SELECT COUNT(*)?
I don't use PL/pgSQL.

Thanks a lot.

-- 
Best regards
Ilja Golshtein

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] how to list rules?

2005-09-14 Thread Oleg

Dear all,
I am new to PostgreSQL. I am experimenting with rules. There are ways to
create and drop rules, but I did not find a way to list all existing
rules. How do I view the rules that I have?

Thanks a lot in advance
Oleg


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Block Size and various FS settings

2005-09-14 Thread Michael Ben-Nes

After a week of testing I decided to go with JFS as the FS for Postgres.

I'm not an expert benchmarker, so I hope I chose the right parameters
for bonnie.



Anyway, here are the results of bonnie++ and pgbench:


http://www.canaan.co.il/users/miki/stats/stats.html


Cheers


Michael Ben-Nes wrote:


Hi Everyone


I'm setting up a machine that will host postgres 8.0.3.

The main tables will be:

Product - around 5 million records.

keywords - around 80 million records.


The machine that will host it is an IBM x345 with two Xeons, 3GB RAM,
a ServeRAID 6i and six 15K HDs.


The OS (CentOS 4.1) will be installed on RAID 1 (2 HDs).

The PG data will be initialized on RAID 10 (4 HDs).


Should I stick to the default postgres 8192-byte block size?

If so, should the same 8192 be applied to the ReiserFS block size?

And what about the RAID hardware stripe size?


I read on the net that 8k for PG, 8k for ReiserFS and a RAID stripe of
64k would be best.


http://forums.devshed.com/archive/t-33101/Any-RAID-striping-recommendations 




Is it safe to use noatime, nodirtime and notail on the partition holding the PG data?



--
--
Canaan Surfing Ltd.
Internet Service Providers
Ben-Nes Michael - Manager
Tel: 972-4-6991122
Cel: 972-52-8555757
Fax: 972-4-6990098
http://www.canaan.net.il
--


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Upgrade instructions -7.4.8/8.0.3 incomplete regarding tsearch2

2005-09-14 Thread Markus Wollny
Hello!

I am currently trying to import a 8.0.1 database dump into a 8.0.3
installation. I, too, have at first stumbled over the tsearch2-issue
which is explained here:
http://www.postgresql.org/docs/8.0/static/release-7-4-8.html (I should
make it a rule to thoroughly read the upgrade notes first :) )

So I followed the instructions and executed the recommended procedure in
every database of my 8.0.1 cluster:

BEGIN;
UPDATE pg_proc SET proargtypes[0] = 'internal'::regtype
WHERE oid IN (
'dex_init(text)'::regprocedure,
'snb_en_init(text)'::regprocedure,
'snb_ru_init(text)'::regprocedure,
'spell_init(text)'::regprocedure,
'syn_init(text)'::regprocedure
);
-- The command should report having updated 5 rows;
-- if not, rollback and investigate instead of committing!
COMMIT;

Unfortunately, this is indeed not sufficient if one has configured
support for one or more additional languages that are not configured
by default (i.e. anything but English and Russian, e.g. German).

In my case, I have got a function dinit_de which is declared thus:

-- Name: dinit_de(text); Type: FUNCTION; Schema: public; Owner: postgres
--

CREATE FUNCTION dinit_de(text) RETURNS internal
AS '$libdir/dict_de', 'dinit_de'
LANGUAGE c;

ALTER FUNCTION public.dinit_de(text) OWNER TO postgres;

So when restoring the dump from 8.0.1 to 8.0.3, I receive this error:

ERROR:  unsafe use of INTERNAL pseudo-type
DETAIL:  A function returning internal must have at least one
internal argument.

In order to be able to restore the dump, the function declaration must
be altered to match the new declarations of the other tsearch2
functions:

CREATE FUNCTION dinit_de(internal) RETURNS internal
AS '$libdir/tsearch2', 'dinit_de'
LANGUAGE c;

So the recommended procedure for upgrading the databases in my
particular case should be

BEGIN;
UPDATE pg_proc SET proargtypes[0] = 'internal'::regtype
WHERE oid IN (
'dex_init(text)'::regprocedure,
'snb_en_init(text)'::regprocedure,
'snb_ru_init(text)'::regprocedure,
'dinit_de(text)'::regprocedure,
'spell_init(text)'::regprocedure,
'syn_init(text)'::regprocedure
);
-- The command should report having updated 6 rows;
-- if not, rollback and investigate instead of committing!
COMMIT;

I recommend that anyone who wishes to upgrade their tsearch2-databases
examine their function declarations for any declaration which is of the
same type (returning internal without having at least one internal
argument); this certainly applies to the function-declarations for the
initialization of any tsearch2-dictionaries, which originally expect
text as input and are supposed to return internal. These declarations
must be included in the pre-upgrade-procedure, or else there will be
errors on restoring the dump. I don't know how this will affect the
restored database, or whether the issue can somehow be resolved after
the dump has been restored without prior correction of the
declarations; maybe someone else can shed some light on that. In any
case, it is probably safer to have a restore process that does not
throw such errors.

Kind regards

   Markus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Unsubcribe

2005-09-14 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-09-13 18:28:07 -0700:
 Please...help me..
 How to unsubcribe

Look at the headers of any message posted to the list.

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] XlogFlus error message

2005-09-14 Thread philip johnson

I've a problem with my postgresql database
Version : 7.2.4

Os: Mandrake Linux release 8.2 Kernel 2.4.20

I get this kind of error:
ERROR:  XLogFlush: request 14D/F77EFE94 is not satisfied --- flushed only to
14D/F75CD1B0


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] oracle's first_value function for postgres?

2005-09-14 Thread Stephen Frost
* Tom Lane ([EMAIL PROTECTED]) wrote:
 Stephen Frost [EMAIL PROTECTED] writes:
  * Ben ([EMAIL PROTECTED]) wrote:
  Sweet, indeed it is. DISTINCT ON seems to be a postgresism, but I can
  live with that.
 
  Uhhh, I was pretty sure it was standard SQL...
 
 Nope, definitely a Postgres-ism.

Huh.  Guess I suck then.  I actually do this a fair bit, is there any
way to do it in standard SQL?  It's terribly useful...
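For illustration, here are both forms against a hypothetical orders
table (DISTINCT ON is the Postgres-ism; the correlated subquery is one
standard-SQL approximation, though it can return several rows per
customer when dates tie):

```sql
-- Postgres-specific: first order per customer, by date
SELECT DISTINCT ON (customer_id) customer_id, order_date, total
  FROM orders
 ORDER BY customer_id, order_date;

-- Standard SQL approximation with a correlated subquery
SELECT o.customer_id, o.order_date, o.total
  FROM orders o
 WHERE o.order_date = (SELECT min(o2.order_date)
                         FROM orders o2
                        WHERE o2.customer_id = o.customer_id);
```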

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] how to list rules?

2005-09-14 Thread Michael Fuhr
On Wed, Sep 14, 2005 at 10:05:15AM +0200, Oleg wrote:
 I am new to PostgreSQL. I am experementing with rules. There is a way to 
 create and drop rules but I did not find a way to list all existing 
 rules. How do I view rules that I have?

In psql \d tablename shows a table's rules; otherwise you could
query the pg_rules system view.

http://www.postgresql.org/docs/8.0/interactive/view-pg-rules.html
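For example, to list every user-defined rule in the database:

```sql
-- pg_rules is a system view over the rewrite-rule catalog
SELECT schemaname, tablename, rulename, definition
  FROM pg_rules;
```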

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] buffer manager

2005-09-14 Thread ravi chotalia

hi
I'm working on the typical problem of comparing buffer manager
policies (MRU/LRU/FIFO), working on postgresql 8.0.3. I read the old
versions and the beta version and found 8.0.3 uses 2Q, while the newer
beta version uses a simple LRU, but with a global buffer lock, removing
the vacuum process from the freelist. Since I'm working on 8.0.3 and
want to change my code as little as possible, is there any version
where a simple LRU is used together with 8.0.3's vacuum and
BufferStrategyCDB stuff?

regards
ravi



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Backup and Restore mechanism in Postgres

2005-09-14 Thread vinita bansal

Hi,

I have a 4-processor AMD Opteron machine with 32 GB RAM and ~400GB HDD,
and a 40GB database. I need to take a backup of this database and
restore it to some other location (say some test environment). I am
currently using the pg_dump and pg_restore utilities to get this done,
which takes 4-5 hrs for a dump and 8-9 hrs for a restore respectively.
I am using the custom format for dumps.
I want this time reduced as much as possible, since 8-9 hrs for a
restore is definitely not desirable and I might have to do it 3-4 times
a day.
I have already tuned postgres to set maintenance_work_mem = 100 (I
guess other settings do not affect db backup and restore).
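For reference, the baseline procedure described above amounts to
something like the following (database and file names are
placeholders):

```shell
# custom-format dump of the source database
pg_dump -Fc -f mydb.dump mydb

# create the target database and restore into it
createdb testdb
pg_restore -d testdb mydb.dump
```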


One thing I tried was to take a backup of the complete PGDATA directory
and untar it to a separate location. The problem comes with
tablespaces, which will then point to the same location as the place
from where the dump was taken. I tried modifying the tablespaces of the
restored db to point them to some other location, but that seems to
change them for the original db as well. So this won't work.


Note: I need to take into account tablespaces that can be distributed
across different partitions. I cannot modify the tablespaces in the
database from which a dump is generated to set them to the default
tablespace, copy the PGDATA dir and work from there. (As a requirement,
the original db must not be modified in any way.)


Is there some other way to get fast db backups and restores?
The major bottleneck here is CPU usage, which hits 100% on one of the
processors. Is there a way to distribute dump and restore across all
the processors?


I also looked at incremental backups, but could not find anything on
how to restore them to some other location. Everywhere, backup is
discussed in combination with recovery, not restore.


Regards,
Vinita Bansal




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] index and ilke question

2005-09-14 Thread CARLADATA, mailing list

SHOW ALL lists the values of all run-time parameters.

 LC_COLLATE String sort order
 LC_CTYPE Character classification (What is a letter? Its upper-case 
equivalent?)

 LC_MESSAGES Language of messages
 LC_MONETARY Formatting of currency amounts
 LC_NUMERIC Formatting of numbers
 LC_TIME Formatting of dates and times


If LC_COLLATE and LC_CTYPE are set to C, you just need a plain index on
the text field.

If not C, then you can create an index with a special operator class
(see section 11.6 of the documentation).
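A sketch of both steps, assuming a text column named lastname on a
customers table as in the original question:

```sql
-- 1. Check the server's locale settings
SHOW lc_collate;
SHOW lc_ctype;

-- 2. In a non-C locale, index with the pattern-matching operator class
--    so that LIKE/ILIKE prefix queries can use the index
CREATE INDEX customers_lastname_pattern_idx
    ON customers (lastname text_pattern_ops);
```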



- Original Message - 
From: Joost Kraaijeveld [EMAIL PROTECTED]

To: Pgsql-General pgsql-general@postgresql.org
Sent: Sunday, September 11, 2005 10:49 AM
Subject: [GENERAL] index and ilke question



Hi,

I want to use the following query:

select * from customers where lastname ilike 'jansen%'

Explain says it uses a sequential scan on customers while there is an
index on lastname (and 'jansen%' matches 1800 entries in a table of
370,000 customers, so an index scan seems more logical?).

The docs say: "However, if your server does not use the C locale you
will need to create the index with a special operator class to support
indexing of pattern-matching queries."

This seems to be the case as it does not use the index.

Two questions:

1. How can I check if my (PostgreSQL or Linux?) server uses the C
locale ?

2. And if it does not use the C locale, is the following the correct
syntax for an index, assuming that lastname is of type text:

CREATE INDEX test_index ON prototype.customers (lastname
text_pattern_ops);

(I tried this, but it did not change anything, so I assume that either
my assumptions about when to use an index, as described above, or my
syntax are wrong.)

TIA

--
Groeten,

Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: [EMAIL PROTECTED]
web: www.askesis.nl



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings








Re: [GENERAL] buffer manager

2005-09-14 Thread Alvaro Herrera
On Tue, Sep 13, 2005 at 04:38:27AM +, ravi chotalia wrote:
 hi
 i'm working on typical problem of comparing buffer manager's policies 
 (MRU/LRU/FIFO) and working on postgresql 8-03 . I read old versiona and 
 beta version and found 8-03 uses LRU 2q and the newer beta version is using 
 simple LRU but this time global buffer lock is used removing vacum process 
 in freelist. since i'm working on 8-03 and want to change my code as less 
 as possible , is there any version where simple LRU is used with 8-0-3's 
 vacum and BufferStrategyCDB stuff.

IIRC Neil Conway posted a patch to make 8.0.2 use LRU instead of ARC
when the whole patent issue arose.  You could search for that in the
archives.  OTOH, the current devel code uses an approach completely
different from both.

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
What difference does it make to the dead, the orphans, and those who have
lost their homes, whether the mad destruction was carried out under the name
of totalitarianism or the holy name of liberty and democracy? (Gandhi)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] C-JDBC experiences

2005-09-14 Thread Arjan van Krimpen





To introduce myself, my name is Arjan van Krimpen and I currently work
for Stichting Kennisnet in the Netherlands. At the moment we're in the
starting phase of a project for which we are considering C-JDBC. The
aim of the project is to implement a new central authentication (and in
some cases authorisation) service; the main reason for considering
C-JDBC is to improve reliability and, if necessary, performance
(master/servant setup). We're using postgres 8.1 as the database
server.

I am looking for experiences of other projects with this combination,
and/or with the use of C-JDBC to improve
performance/stability/reliability in large(r) scale projects,
especially projects that made it into production.

Thanks for your attention; I hope to hear from you.

Regards,

Arjan van Krimpen

DISCLAIMER:


This message (with attachments) is given in good faith. Kennisnet cannot assume any responsibility for the accuracy or reliability of the information contained in this message (with attachments), nor shall the information be construed as constituting any obligation on the part of Kennisnet. The information contained in this message (with attachments) may be confidential or privileged and is only intended for the use of the named addressee. If you are not the intended recipient, you are requested by Kennisnet to delete this message (with attachments) without opening it and you are notified by Kennisnet that any disclosure, copying or distribution of the information contained in this message (with attachments) is strictly prohibited and unlawful.





Re: [GENERAL] XlogFlus error message

2005-09-14 Thread Tom Lane
philip johnson [EMAIL PROTECTED] writes:
 I've a problem with my postgresql database
 Version : 7.2.4

That's an awfully old version, with a lot of known problems.  You should
at least be running 7.2.8, and preferably thinking about an update to a
newer release series.

 I get this king of error
 ERROR:  XLogFlush: request 14D/F77EFE94 is not satisfied --- flushed only to
 14D/F75CD1B0

Is the error reproducible after restarting the postmaster?  If so it may
be something similar to what was reported here:
http://archives.postgresql.org/pgsql-bugs/2005-09/msg00056.php
but we haven't been able to track down the original cause :-(

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Ranking Results

2005-09-14 Thread Thomas O'Connell
On Sep 13, 2005, at 3:07 PM, Craig wrote:

 Hi

 I have a query that is grouping by 3 fields and returning a sum of
 another field. I would also like a further field to be returned that
 shows a ranking for the records based on the sum field. This should
 really only be a numbering of the rows returned, since I will do the
 ordering in an ORDER BY clause.

 A simplistic example:

 SELECT f1, f2, f3, sum(f4) FROM TableA GROUP BY f1, f2, f3 ORDER BY f2,f1,f3

 Now I would like to add another field that simply returns
 1,2,3,4,5,etc. for each row that is returned.

 Can this be done?

 Thanks
 Craig

If you're talking about row numbering rather than ranking, you can try
a temporary sequence:

http://archives.postgresql.org/pgsql-general/2005-02/msg01412.php

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)
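Applied to Craig's example, the temporary-sequence trick looks roughly
like this (in practice the numbering follows the subquery's ORDER BY,
though that ordering is not formally guaranteed):

```sql
CREATE TEMPORARY SEQUENCE rank_seq;

-- number the grouped rows in the order the subquery emits them
SELECT nextval('rank_seq') AS rank, f1, f2, f3, total
  FROM (SELECT f1, f2, f3, sum(f4) AS total
          FROM TableA
         GROUP BY f1, f2, f3
         ORDER BY f2, f1, f3) AS sub;
```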

Re: [GENERAL] XlogFlus error message

2005-09-14 Thread philip johnson
I know.
I'm working on an upgrade to postgresql 8.0.3.
The big deal is migrating OpenFTS with tsearch v1 to the new version.

I've tried to use pg_resetxlog, but without success. Then I tried to
reindex the database. Same problem.

Now I have come back to a normal state with a backup of the database
from this morning and have launched my synchronization with the master
database.

I think the upgrade will be done next week.

Thanks

 -Original Message-
 From: Tom Lane [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, September 14, 2005 16:29
 To: philip johnson
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] XlogFlus error message
 
 philip johnson [EMAIL PROTECTED] writes:
  I've a problem with my postgresql database
  Version : 7.2.4
 
 That's an awfully old version, with a lot of known problems.  You should
 at least be running 7.2.8, and preferably thinking about an update to a
 newer release series.
 
  I get this king of error
  ERROR:  XLogFlush: request 14D/F77EFE94 is not satisfied --- flushed
 only to
  14D/F75CD1B0
 
 Is the error reproducible after restarting the postmaster?  If so it may
 be something similar to what was reported here:
 http://archives.postgresql.org/pgsql-bugs/2005-09/msg00056.php
 but we haven't been able to track down the original cause :-(
 
   regards, tom lane


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Speed problems

2005-09-14 Thread Scott Marlowe
On Tue, 2005-09-13 at 21:01, Warren Bell wrote:
 On Tue, 2005-09-13 at 13:20, Warren Bell wrote:
  I am having problems with performance. I think this is a simple question
 and
  I am in the right place, if not, please redirect me.
 
  I have a table with 36 fields that slows down quite a bit after some
 light
  use. There are only 5 clients connected to this DB and they are doing
 mostly
  table has had no more than 10,000 records and is being accesessd at the
 rate
  of once per 5 seconds. It will slow down quite a bit. It will take 10
  seconds to do a `SELECT * FROM` query. I delete all records except one
  perform a VACUUM and this will not speed it up. I drop the table and
  recreate it and insert one record and it speeds right back up takeing
 only
  100 ms to do the query.
 
 This sounds like classic table / index bloat.
 
 Are you updating all 10,000 rows every 5 seconds?  Good lord, that's a
 lot of updates.  If so, then do a vacuum immediately after the update
 (or a delete), or change the system so it doesn't update every row every
 time.
 
 Next time, try a vacuum full instead of a drop and recreate and see if
 that helps.
 
 
  I am fairly new to Postgres. What do I need to do to keep this table from
  slowing down?
 
 Vacuum this table more often.  You might want to look at using the
 autovacuum daemon to do this for you.
 
 You might want to post a little more info on what, exactly, you're doing
 to see if we can spot any obvious problems.
 
 
 I have three indexes on this table. One index is a 1 column, one index is a
 5 column multi and one is a 2 column multi. I have run EXPLAIN ANALYZE on
 all of my queries and they seem to be taking advantage of these indexes.
 
 Would three indexes of this sort be considered index bloat?

No, index bloat is a different problem.  In the days of yore, postgresql
had a tendency to grow its indexes over time without reclaiming lost
space in them, which led to bloated indexes (back in the day, I once
had a 100k table with an 80 meg index after a while...  Now that is
bloat)

Today, index bloat is generally not a problem, as vacuum can reclaim
much more space in an index than it once could.  I'm guessing you're
suffering from a bloating of tables and indexes caused by not vacuuming
enough.  Use a vacuum full once to clear up the bloated tables and
indexes, and then regularly scheduled plain vacuums to keep them at a
reasonable size.
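In command form, the advice above is roughly (table name is a
placeholder):

```sql
-- one-time cleanup of an already-bloated table; takes an exclusive lock
VACUUM FULL yourtable;

-- routine maintenance, run frequently (e.g. from cron or autovacuum)
VACUUM ANALYZE yourtable;
```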

 I am updating no more than 200 records at a time. Here are some examples of
 my queries:
 
 UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false
 
 UPDATE table SET (several columns = something) WHERE char_col_1 = 'blah' AND
 int4_col_1 = 11
 
 UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false AND
 boolean_col_3 = false AND  boolean_col_4 = false AND  boolean_col_5 = false
 AND  boolean_col_6 = false

OK.  But how many are you updating between regular vacuums?  That's the
real issue.  If your regular vacuums aren't frequent enough, postgresql
starts lengthening the tables instead of reusing the space in them that
was freed by the last updates / deletes.

Keep in mind, that in postgresql, all updates are really insert / delete
pairs, as far as storage is concerned.  So, updates create dead tuples
just like deletes would.

 Is my use of indexes correct?

Seems good to me.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Block Size and various FS settings

2005-09-14 Thread Marco Colombo
On Wed, 2005-09-14 at 11:25 +0300, Michael Ben-Nes wrote:
 After a week of testing i decided to go with JFS as the FS for Postgres.
 
 im not an expert benchmarker so i hope i initiated the right parameters 
 in bonnie.
 
 
 Any way here are the results of bonnie++  pgbench:
 
 
 http://www.canaan.co.il/users/miki/stats/stats.html
 
 
 Cheers

Have you tried data=journal / data=ordered / data=writeback mount
options for ext3? If so, did they make any difference?

.TM.
-- 
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] ERROR: cursor 'xxx' does not exist

2005-09-14 Thread [EMAIL PROTECTED]
  I had a problem with a program that loads about 2 MB of information
from the DB with a data transfer rate of about 80K. I don't know why it
tops out at 80K, but it is not hardware. I still have this problem.
 Then I set the Use Declare/Fetch option in the ODBC driver. This
option took the amount of information to transfer from 2MB down to
about 100K. But I have a BIG problem: this option activates the use of
cursors and the use of commit and rollback. The problem is that I get
the message 'ERROR: cursor xxx does not exist' when it closes the
cursor (I see this in the CommLog of the ODBC driver), but the cursor
was declared previously. The real problem is 'ERROR:  current
transaction is aborted, commands ignored until end of transaction
block', caused by the cursor problem. Since I set Use Declare/Fetch,
the ODBC driver uses transactions (BEGIN, COMMIT, etc.).
 I do not want to use transactions. Can I use cursors and 'disable'
transactions? I'm using postgresql 8.0 windows native.

Nuno Goncalves


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] TSearch2 snowball version error

2005-09-14 Thread William Leite Araújo
 Hi,

 I'm trying to compile a new Brazilian Portuguese dictionary for the TSearch2 contrib module, but got these errors:
portuguese_stem.c: In function `r_prelude':
portuguese_stem.c:481: error: void value not ignored as it ought to be
portuguese_stem.c:487: error: void value not ignored as it ought to be
portuguese_stem.c: In function `r_postlude':
portuguese_stem.c:610: error: void value not ignored as it ought to be
portuguese_stem.c:616: error: void value not ignored as it ought to be
portuguese_stem.c: In function `r_standard_suffix':
portuguese_stem.c:662: error: void value not ignored as it ought to be
portuguese_stem.c:672: error: void value not ignored as it ought to be
portuguese_stem.c:682: error: void value not ignored as it ought to be
portuguese_stem.c:692: error: void value not ignored as it ought to be
portuguese_stem.c:702: error: void value not ignored as it ought to be
portuguese_stem.c:715: error: void value not ignored as it ought to be
portuguese_stem.c:729: error: void value not ignored as it ought to be
portuguese_stem.c:744: error: void value not ignored as it ought to be
portuguese_stem.c:760: error: void value not ignored as it ought to be
portuguese_stem.c:775: error: void value not ignored as it ought to be
portuguese_stem.c:791: error: void value not ignored as it ought to be
portuguese_stem.c:806: error: void value not ignored as it ought to be
portuguese_stem.c:818: error: void value not ignored as it ought to be
portuguese_stem.c:832: error: void value not ignored as it ought to be
portuguese_stem.c: In function `r_verb_suffix':
portuguese_stem.c:856: error: void value not ignored as it ought to be
portuguese_stem.c: In function `r_residual_suffix':
portuguese_stem.c:880: error: void value not ignored as it ought to be
portuguese_stem.c: In function `r_residual_form':
portuguese_stem.c:902: error: void value not ignored as it ought to be
portuguese_stem.c:929: error: void value not ignored as it ought to be
portuguese_stem.c:935: error: void value not ignored as it ought to be
portuguese_stem.c: In function `portuguese_ISO_8859_1_stem':
portuguese_stem.c:993: error: void value not ignored as it ought to be
make: ** [portuguese_stem.o] Erro 1

 This happens after changing the included file header.h; the old ../runtime/header.h is invalid.
 I think it is a version problem, because the return type of the
functions slice_from_s and slice_del is int in the current snowball
Portuguese files, but in the include files of PostgreSQL versions 8.0.2
and 8.0.3 the return type is void.
 Please help me.
-- 
William Leite Araújo


Re: [GENERAL] Speed problems

2005-09-14 Thread Warren Bell


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Scott Marlowe
 Sent: Wednesday, September 14, 2005 8:24 AM
 To: Warren Bell
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Speed problems


 On Tue, 2005-09-13 at 21:01, Warren Bell wrote:
  On Tue, 2005-09-13 at 13:20, Warren Bell wrote:
   I am having problems with performance. I think this is a
 simple question
  and
   I am in the right place, if not, please redirect me.
  
   I have a table with 36 fields that slows down quite a bit after some
  light
   use. There are only 5 clients connected to this DB and they are doing
  mostly
   table has had no more than 10,000 records and is being
 accesessd at the
  rate
   of once per 5 seconds. It will slow down quite a bit. It will take 10
   seconds to do a `SELECT * FROM` query. I delete all records
 except one
   perform a VACUUM and this will not speed it up. I drop the table and
   recreate it and insert one record and it speeds right back up takeing
  only
   100 ms to do the query.
  
  This sounds like classic table / index bloat.
  
  Are you updating all 10,000 rows every 5 seconds?  Good lord, that's a
  lot of updates.  If so, then do a vacuum immediately after the update
  (or a delete), or change the system so it doesn't update every
 row every
  time.
  
  Next time, try a vacuum full instead of a drop and recreate and see if
  that helps.
  
  
   I am fairly new to Postgres. What do I need to do to keep
 this table from
   slowing down?
  
  Vacuum this table more often.  You might want to look at using the
  autovacuum daemon to do this for you.
  
  You might want to post a little more info on what, exactly,
 you're doing
  to see if we can spot any obvious problems.
  
 
  I have three indexes on this table. One index is a 1 column,
 one index is a
  5 column multi and one is a 2 column multi. I have run EXPLAIN
 ANALYZE on
  all of my queries and they seem to be taking advantage of these indexes.
 
  Would three indexes of this sort be considered index bloat?

 No, index bloat is a different problem.  In the days of yore, postgresql
 had a tendency to grow its indexes over time without reclaiming lost
 space in them, which lead to bloated indexes (back in the day, I once
 had a 100k table with an 80 meg index after a while...  Now that is
 bloat)

 Today, index bloat is generally not a problem, as vacuum can reclaim
 much more space in an index than it once could.  I'm guessing you're
 suffering from a bloating of tables and indexes caused by not vacuuming
 enough.  Use a vacuum full once to clear up the bloated tables and
 indexes, and then regularly scheduled plain vacuums to keep them at a
 reasonable size.

  I am updating no more than 200 records at a time. Here are some
  examples of my queries:
 
  UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false
 
  UPDATE table SET (several columns = something) WHERE char_col_1 = 'blah'
  AND int4_col_1 = 11
 
  UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false AND
  boolean_col_3 = false AND boolean_col_4 = false AND boolean_col_5 = false
  AND boolean_col_6 = false

 OK.  But how many are you updating between regular vacuums?  That's the
 real issue.  If your regular vacuums aren't often enough, postgresql
 starts lengthening the tables instead of reusing the space in them that
 was freed by the last updates / deletes.

 Keep in mind, that in postgresql, all updates are really insert / delete
 pairs, as far as storage is concerned.  So, updates create dead tuples
 just like deletes would.

  Is my use of indexes correct?

 Seems good to me.


 ---(end of broadcast)---
 TIP 6: explain analyze is your friend


I have installed pg_autovacuum and also did a VACUUM FULL on the tables.
Speed has improved quite a bit.

Are there any set rules on what the pg_autovacuum -v and -V arguments should
be set to?

I went with the defaults

Thanks for your help,



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] buffer manager

2005-09-14 Thread Neil Conway

Alvaro Herrera wrote:

IIRC Neil Conway posted a patch to make 8.0.2 use LRU instead of ARC,
when the whole patent issue arose.


http://archives.postgresql.org/pgsql-patches/2005-01/msg00253.php

-Neil

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] importing shape files nightmare!

2005-09-14 Thread christomec
I have two machines, suse and gentoo

suse:

select version();
PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5
20050117 (prerelease) (SUSE Linux)

select postgis_full_version();
POSTGIS=1.0.3 PROJ=Rel. 4.4.8, 3 May 2004 USE_STATS DBPROC=0.3.0
RELPROC=0.3.0

gentoo:

select version();
PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC
i686-pc-linux-gnu-gcc (GCC) 3.3.6 (Gentoo 3.3.6, ssp-3.3.6-1.0, pie-8.7.8)

select postgis_full_version();
POSTGIS=1.1.0CVS GEOS=2.1.4 PROJ=Rel. 4.4.9, 29 Oct 2004 USE_STATS
DBPROC=0.3.1 RELPROC=0.3.1

running:

shp2pgsql -c streets.shp streets | psql -d dbname

suse, no problem!

gentoo: ERROR:  unterminated quoted string at or near
'0105000100010...

trying:

shp2pgsql -D streets.shp streets dbname  streets.sql

suse, no problem!

gentoo:
Shapefile type: Arc
Postgis type: MULTILINESTRING[2]
Segmentation fault

finally:

Trying to import streets.sql generated on the suse machine into postgresql
running on the gentoo machine also fails:

psql:streets.sql:1350: invalid command \N

What is it with the gentoo installation? What should I upgrade or
downgrade to remedy these problems I'm having?

Please help, it is really important that I get this done and I'm really
stuck trying to solve it.

Kind Regards,
Christo Du Preez

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Howto create a plperlu function as user (not superuser)??

2005-09-14 Thread Brent Wood


Hi,

I have a PostGIS enabled Postgres database. For institutional (un)reasons
I'm the database owner in all but reality.

I'm writing a plperl function to get the lat & lon coords from a geometry
and execute a system call to get the depth (or altitude) at a location
from a global datamodel.

So I and other users can do something like:
update site set depth = depth(todeg(site_geom));

(where site_geom is a point geometry. The todeg function is plpgsql to
convert the point to lat long coords from any other projections, to match
the coordinate system of the global grid. It works fine.)

My problem is that the language needs to be plperlu (the unsecured
implementation of plperl) to be allowed to execute the system call to get
the depth at the specified location. To work, the plperlu function must be
created by the superuser, who I assume is postgres.


That is not me. Sigh. Is there any way I can develop (with the various
create or replace function iterations this will probably require) this
plperlu function as a non-superuser?



Thanks,

  Brent Wood


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Replication

2005-09-14 Thread Andrew Rawnsley



On 9/13/05 2:45 PM, Scott Marlowe [EMAIL PROTECTED] wrote:

 On Tue, 2005-09-13 at 10:45, Russ Brown wrote:
 Simon Riggs wrote:
 Barry,
 
 You can use PITR to archive transaction logs to a second server that is
 kept in standby mode.
 
 This will cope with any number of tables and cope with dynamic changes
 to tables.
 
 This is fairly straightforward and very low overhead.
 Set archive_command to a program that transfers xlog files to second
 server.
 Then set restore_command on the second server to a program that loops
 until the next file is available.
 Switchover time is low.
 
 
 Apologies for going slighly off topic, but isn't this basically how
 MySQL does replication? I ask because one of the arguments against
 moving to PostgreSQL in my organisation is 'no replication out of the
 box'. But if the above is true it seems that all that is required are a
 couple of scripts to handle log transfer and you have a form of
 replication out of the box right there.
 
 Or am I missing something?
 
 I don't know, but someone in your organization seems to be.
 
 Let me present it as a simple devil's choice, which would you rather
 have, proven replication, that works, but requires you to setup a
 secondary bit of software / system scripts (like rsync) but is tested
 and proven to work, or, an out of the box solution that is untested,
  unreliable, and possibly unsafe for your data?
 

When I was putting together a fairly complex log-shipping solution in Oracle
(sorry for the O word...), I was presented with that exact choice: use
Oracle's built-in log shipping/recovery mechanism, or design an 'in rsync we
trust' system of scripts. I chose the scripts, and it's worked without a burp
for a looong time now. Easy to test, easy to debug, predictable, small
simple parts. It's really not that hard. Keep track of disk space, and make
sure to check the size of the destination file when you move something
around and not just its existence. Not much else to it.
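The wiring Simon sketched earlier in the thread amounts to two configuration
lines plus a small wait loop. As a sketch only - hostnames and paths below
are invented:

```shell
# primary's postgresql.conf -- ship each finished WAL segment to the standby:
#   archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'
#
# standby's recovery.conf -- block until the requested segment arrives:
#   restore_command = '/usr/local/bin/wait_for_wal %f %p'
#
# where wait_for_wal is roughly:
#   while [ ! -f /var/lib/pgsql/wal_archive/$1 ]; do sleep 1; done
#   cp /var/lib/pgsql/wal_archive/$1 $2
```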

 Chosing a database because it has out of the box replication without
 paying attention to how it is implemented, how well it works, and what
 are the ways it can break is a recipe for (data) disaster.
 

We're getting back to the oft-repeated mantra here - replication is hard.
Anyone saying it can be effortless doesn't understand the complexity of the
problem. 

 I've tested slony, and I know that for what we use it for, it's a good
 fit and it works well.  I've tested MySQL's replication, and it simply
 can't do what I need from a replication system.  It can't be setup on
 the fly on a live system with no down time, and it has reliability
 issues that make it a poor choice for a 24/7 enterprise replication
 system.
 
 That said, it's a great system for content management replication, where
 downtime is fine while setting up replication.
 
 But I wouldn't choose either because it was easier to implement.  Being
 easy to implement is just sauce on the turkey.  I need the meat to be
 good or the sauce doesn't matter.
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] how to supress Notice messages or decrease verbosity level

2005-09-14 Thread Keve Nagy
Keve Nagy wrote:
 I get NOTICE messages messing up the screen, telling me the obvious
 thing that indexes had to be created for the UNIQUE or PRIMARY KEY
 constraint.

If anybody else is interested, the solution is:

SET LOCAL client_min_messages TO 'warning';

This will suppress all NOTICE messages for the transaction, and then
return to the original state which is client_min_messages = 'notice' by
default.
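A minimal illustration (the table here is invented):

```sql
BEGIN;
SET LOCAL client_min_messages TO 'warning';
-- no NOTICE about the implicit primary-key index is printed now:
CREATE TABLE example (id integer PRIMARY KEY);
COMMIT;
-- after COMMIT, client_min_messages reverts to 'notice'
```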

YES I love answering my own posts!
:-)

Regards,
Keve

-- 
If you need to reply directly:
keve(at)mail(dot)poliod(dot)hu

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Pushing result set to table on different machine?

2005-09-14 Thread Brent Wood


On Tue, 13 Sep 2005, Jerry LeVan wrote:

 Hi,
 I recently added a linux/windows laptop to our home network.

 I have started Postgresql (8.0.3) on the laptop and have
 replicated my main database on the laptop. (pg_dump,
 ftp and pg-undump).

 I would like to keep them reasonably in sync. Slony is
 overkill and I think the mirror contrib is possibly
 overkill also.

 I have installed the dblink package and can easily pull
 data from my main server back to  the laptop.

 Is there a elegant way to push the data from the main
 server to the laptop?


I have not tried this with Postgres, but have done similar things with
other databases and related packages.

In the Postgres case, a script on the server which remotely runs a command
on the laptop (ssh/rexec/rsh as you prefer) could run a

copy from table (on the server) | copy to table from stdin (remotely on
the laptop)

Something to empty the tables first might help. Any command can be set up
to run on the laptop but be invoked from the server: data from a server-run
command can, as above, be piped as input to a command run by the laptop
(but started from/by the server).
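One way to express that pipe concretely, run from the server (host name,
database name and table name are all invented for illustration):

```shell
# empty the laptop's copy first, then stream the server's rows across ssh
ssh laptop "psql -d mydb -c 'TRUNCATE sometable'"
psql -d mydb -c "COPY sometable TO STDOUT" \
  | ssh laptop "psql -d mydb -c 'COPY sometable FROM STDIN'"
```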

I don't know that I'd recommend it, but you may be able to rsynch the
database directory.

Set up the script & run it on the server whenever you want.


Brent Wood

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Partial dates

2005-09-14 Thread Brent Wood


On Tue, 13 Sep 2005, Joshua D. Drake wrote:

 
  ERROR:  date/time field value out of range: 1997-10-00
  HINT:  Perhaps you need a different datestyle setting.
  CONTEXT:  COPY Entry, line 1, column EntryDate: 1997-10-00

 Well the easy solution is to just make the date a text type but that is
 the wrong answer. The right answer is to fix the data set.
 MySQL should never have allowed you do insert those dates in the first
 place. I know that doesn't help you much but at some point
 you are going to have to sanitize the data anyway.


Hmmm... given that our real-world data (currently in a commercial RDBMS,
but I have hopes :-) often has dates where we only have a month and year,
is there any way a part of a timestamp can be null? I guess this also has
indexing issues. Maybe some form of GiST would work.

Sanitizing is one thing, inventing data to fit an incomplete value into a
date datatype is not good practice.

It would need some arbitrary standard to apply date/time arithmetic &
queries. For example, if we wanted all values for 1987, a record from an
unknown day in March 1987 would be in the result set. If we wanted all
values from March 1987, similarly. All records since 13 March 1987 and the
arbitrary rule would come into play. Probably excluded because we couldn't
explicitly prove it should be included in the result set. Like other nulls
get treated.

In case anyone is interested, right now we store year, month & day and
have a timestamp field where the entire field is null if any one part is
unknown.

Are there any better ways in Postgres?



Brent Wood

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Partial dates

2005-09-14 Thread Alvaro Herrera
On Wed, Sep 14, 2005 at 03:49:29PM +1200, Brent Wood wrote:

 In case anyone is interested, right now we store year, month & day and
 have a timestamp field where the entire field is null if any one part is
 unknown.
 
 Are there any better ways in Postgres?

You can create a new type, which is a privilege you don't have in some
other systems.  Or you can separate the column into three columns and
leave some of them NULL as appropriate.
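The three-column layout mentioned above could be as simple as this sketch
(all names invented):

```sql
CREATE TABLE sighting (
    yr  integer,                               -- NULL if the year is unknown
    mon integer CHECK (mon BETWEEN 1 AND 12),  -- NULL if the month is unknown
    dy  integer CHECK (dy BETWEEN 1 AND 31),   -- NULL if the day is unknown
    exact_dt date                              -- set only when all three are known
);

-- "everything from March 1987", partially dated records included:
SELECT * FROM sighting WHERE yr = 1987 AND mon = 3;
```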

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
Right now the sectors on the hard disk run clockwise, but I heard a rumor that
you can squeeze 0.2% more throughput by running them counterclockwise.
It's worth the effort. Recommended.  (Gerry Pourwelle)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Speed problems

2005-09-14 Thread John Fabiani
On Wednesday 14 September 2005 08:23, Scott Marlowe wrote:

 OK.  But how many are you updating between regular vacuums?  That's the
 real issue.  If your regular vacuums aren't often enough, postgresql
 starts lengthening the tables instead of reusing the space in them that
 was freed by the last updates / deletes.

 Keep in mind, that in postgresql, all updates are really insert / delete
 pairs, as far as storage is concerned.  So, updates create dead tuples
 just like deletes would.

  Is my use of indexes correct?

 Seems good to me.

Ok, but this does not seem to be a lot of records.  Even if the user updated 
500 times a day, (500 * 200) will only add 100,000 dead rows.  I would not 
expect that performance would suffer adding 100,000 per day for at least a 
week.  Even if the number was double (in case I mis-read the user's prior 
emails), 200,000 per day, or over 1,000,000 by the end of the week, would 
not account for the slow down?  Or am I misreading?
John

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Howto create a plperlu function as user (not superuser)??

2005-09-14 Thread Tom Lane
Brent Wood [EMAIL PROTECTED] writes:
 That is not me. Sigh. Is there any way I can develop (with the various
 create or replace function iterations this wiil probably require) this
 plperlu function as a non superuser?

If you could, it would be a security hole, which we would fix with
utmost alacrity.  Untrusted-language functions may only be created
by superusers.

Can you compartmentalize the depth-accessing function as a small
plperlu function, and do all the interesting stuff in plain plperl
atop that?
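Concretely, the split Tom suggests might look like the sketch below: a
superuser installs only the tiny untrusted wrapper, and everything else
stays in a trusted language that a non-superuser can create and iterate
on. The external `gridquery` program and all names are invented here:

```sql
-- Created once by the superuser (plperlu: allowed to run external programs):
CREATE OR REPLACE FUNCTION depth_lookup(lat float8, lon float8)
RETURNS float8 AS $$
    my ($lat, $lon) = @_;
    my $d = `gridquery --lat $lat --lon $lon`;  # the only privileged bit
    chomp $d;
    return $d;
$$ LANGUAGE plperlu;

-- Freely redefined by a non-superuser, in a trusted language:
CREATE OR REPLACE FUNCTION depth(g geometry) RETURNS float8 AS $$
    SELECT depth_lookup(y(todeg($1)), x(todeg($1)));
$$ LANGUAGE sql;
```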

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] MS SQL - PostgreSQL

2005-09-14 Thread Tino Wildenhain
Am Donnerstag, den 15.09.2005, 07:43 +0700 schrieb Irfan Syukur:
 Dear Tino, 
 
 In MS SQL, I can execute a stored procedure whose name I do not know yet.
 
   A simple example:
CREATE   procedure  dbo.sp_run_batch (@as_spname varchar(20)) with 
 recompile as
 
declare @li_retstat  smallint, 
   @li_status   numeric(1,0), @ls_mesg   varchar(60)
 
exec @li_retstat = @as_spname @as_mesg = @ls_mesg output --(@as_spname = 
 the name of stored procedure)
select @li_retstat, @ls_mesg
return 
GO
 
 Can Postgres do that, how ?

Well, not that I know of. But in your example your function has a name.
So you simply:

SELECT dbo.sp_run_batch('something for spname');

I'd suggest you just play with it a bit - with an eye on the
docs.
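For the dynamic-dispatch part specifically, PL/pgSQL's EXECUTE can call a
function whose name arrives as a string. A rough sketch (names invented;
note that spname must come from a trusted source, since it is interpolated
directly into SQL):

```sql
CREATE OR REPLACE FUNCTION run_batch(spname text) RETURNS text AS $$
DECLARE
    r record;
BEGIN
    -- build and run the call dynamically, returning its result
    FOR r IN EXECUTE 'SELECT ' || spname || '() AS res' LOOP
        RETURN r.res;
    END LOOP;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- SELECT run_batch('some_maintenance_func');
```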

 
 in MS SQL, there are datetime and timestamp data type, what data type should 
 I use in Postgres

date, timestamp, timestamptz, ... whatever suits your application :)

 btw, this would be worth reading (not only for you ;))
 http://www.netmeister.org/news/learn2quote.html
 

Again ;)

HTH
Tino


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] MS SQL - PostgreSQL

2005-09-14 Thread Tino Wildenhain
Am Donnerstag, den 15.09.2005, 07:43 +0700 schrieb Irfan Syukur:


my Postfix reports:

 [EMAIL PROTECTED]: mail for bri.co.id loops back to myself

This is weird. Your mail is apparently seriously broken. Please
have that fixed.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster