Re: [GENERAL] tsearch2 in multilingual database?

2007-07-05 Thread Hannes Dorbath

On 04.07.2007 11:20, Joshua N Pritikin wrote:
Sometime in the future, I anticipate storing other languages in addition 
to English in my database to be indexed with tsearch2. set_curcfg() 
seems to be per-session. Will I need to call set_curcfg() every time I 
switch languages?


You *should* create a TSearch config for each language, at least if 
you'd like to use stemming, compound words etc. When you have a 
multilingual project, I'm sure you have a `languages' table already 
somewhere. Just extend that table with a tsearch_config field, so you 
know which config to use for which language. And yes, you have to set 
it every time.
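
A minimal sketch of that approach, assuming tsearch2 is installed (the table and column names here are illustrative, not from an actual schema):

```sql
-- Hypothetical `languages' table carrying the tsearch2 config name
CREATE TABLE languages (
    id             serial PRIMARY KEY,
    name           text NOT NULL,   -- e.g. 'English', 'German'
    tsearch_config text NOT NULL    -- e.g. 'default', 'german'
);

-- At the start of a session (or before each search), select the
-- config that matches the language being searched:
SELECT set_curcfg((SELECT tsearch_config
                   FROM languages
                   WHERE name = 'German'));
```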



--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Problem with autovacuum and pg_autovacuum

2007-07-05 Thread Pavan Deolasee

On 7/5/07, Andreas 'ads' Scherbaum <[EMAIL PROTECTED]> wrote:



Hello,

On Wed, 4 Jul 2007 18:04:35 -0400 Alvaro Herrera wrote:

> Most likely it is worried about XID wraparound, and those are precisely
> the tables that urgently need to be vacuumed because they haven't been
> vacuumed in a long time.

No, autovacuum is doing this with every run. Besides this, the database has
only some 10k changes per day. The wraparound was my first idea, but I
don't see a reason why this should happen with every autovacuum run.



Did you check the freeze_max_age values in the pg_autovacuum table? A very
small value can trigger XID-wraparound-related VACUUMs in every run.
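
For reference, a query along these lines should show any per-table overrides (this assumes the contrib pg_autovacuum catalog of the 8.1/8.2 era, where -1 means "use the system default"):

```sql
-- List tables whose freeze_max_age has been overridden
SELECT vacrelid::regclass AS table_name,
       freeze_min_age,
       freeze_max_age
FROM pg_autovacuum
WHERE freeze_max_age <> -1;
```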

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com


[GENERAL] Polymorphic delete help needed

2007-07-05 Thread Perry Smith
I am doing a project using Ruby on Rails with PostgreSQL as the
database.  I have not seen the term polymorphic used with databases
except with Rails, so I will quickly describe it.


Instead of holding just an id as a foreign key, the record holds a
"type" field, which is a string, and an id.  The string is the name of
the table to which the id applies.  (That is slightly simplified.)


The first problem this creates is that it is hard to enforce a constraint
on the name/id pair.  I thought about writing a function that would
take the pair and search the appropriate table: if it found a match,
it would return true, and if not, false.  I have not
done that because the string used to "name" the table has been
modified to look like a class name.  So a foreign key pointing to
the table happy_people would have "HappyPeople" in the string (and
not "happy_people").  Transforming the string is not an impossible
task, but I just have not attacked it yet, for a couple of reasons.
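
A hedged sketch of such a validation function, assuming PostgreSQL 8.1+ and the usual Rails convention that a class name like "HappyPeople" maps to the table happy_people (the function name, the `id' column, and the backslash escaping are assumptions for illustration):

```sql
-- Returns true if the row named by a (class name, id) pair exists.
-- Assumes every referenced table has an integer `id' column.
CREATE OR REPLACE FUNCTION ref_exists(class_name text, ref_id integer)
RETURNS boolean AS $$
DECLARE
    tbl text;
    n   integer;
BEGIN
    -- CamelCase -> snake_case (escaping assumes pre-standard_conforming_strings rules)
    tbl := lower(regexp_replace(class_name, '([a-z])([A-Z])', '\\1_\\2', 'g'));
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(tbl)
         || ' WHERE id = ' || ref_id INTO n;
    RETURN n > 0;
END;
$$ LANGUAGE plpgsql;
```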


One reason is that I can put this check into Rails much more easily.  I
don't know which would be faster to execute, or whether it would make any
significant difference.


But I have a much bigger problem, one that I cannot really
visualize how to solve properly: how do I do deletes?


To back up a step, I have a table called relationships which has two
polymorphic foreign keys in it, called parent and child.  Then I have a
set of tables such as people, companies, addresses, etc.
Collectively, I call these items.


The relationships are the glue that points between items, like a person
and a company, for example.


Now, suppose I want to delete a person.  That implies that some  
relationships pointing to that person are no longer valid.  If I  
remove those, that could imply that there are other items that now  
have no relationships pointing to them.  How should I delete the  
item, extra relationships, and extra items and still make this update  
so that if something fails in the middle, it will get properly rolled  
back?
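
One way to sketch the delete, with everything wrapped in a single transaction so a mid-way failure rolls the whole thing back. The column names (parent_type/parent_id etc.) are guesses at the Rails convention, not taken from the poster's schema:

```sql
BEGIN;

-- 1. Drop relationships that point at the person being deleted.
DELETE FROM relationships
 WHERE (parent_type = 'Person' AND parent_id = 123)
    OR (child_type  = 'Person' AND child_id  = 123);

-- 2. Drop the person itself.
DELETE FROM people WHERE id = 123;

-- 3. Optionally drop items left with no relationships at all,
--    repeated per item table (addresses shown here).
DELETE FROM addresses
 WHERE id NOT IN (SELECT child_id FROM relationships
                  WHERE child_type = 'Address')
   AND id NOT IN (SELECT parent_id FROM relationships
                  WHERE parent_type = 'Address');

COMMIT;
```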


Thank you for your help,
Perry Smith ( [EMAIL PROTECTED] )
Ease Software, Inc. ( http://www.easesoftware.com )

Low cost SATA Disk Systems for IBMs p5, pSeries, and RS/6000 AIX systems




Re: [GENERAL] Localization trouble

2007-07-05 Thread Chris Travers

Tom Lane wrote:

Chris Travers <[EMAIL PROTECTED]> writes:
  

Is there a way to accept localized numbers as input?
i.e. '1,39'::numeric?



See to_number().
  


Thanks!  I somehow missed that function in the docs.

Best Wishes,
Chris Travers



Re: [GENERAL] Localization trouble

2007-07-05 Thread Tom Lane
Chris Travers <[EMAIL PROTECTED]> writes:
> Is there a way to accept localized numbers as input?
> i.e. '1,39'::numeric?

See to_number().
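
For example (the exact locale name depends on what the operating system provides):

```sql
SET lc_numeric TO 'de_DE.UTF-8';    -- a locale using ',' as decimal mark
SELECT to_number('1,39', '9D99');   -- D is the locale's decimal point
-- returns the numeric value 1.39
```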

regards, tom lane



Re: [GENERAL] Localization trouble

2007-07-05 Thread Tom Lane
Michael Glaesemann <[EMAIL PROTECTED]> writes:
> I think display depends on whether or not you configured Postgres  
> with or without --enable-nls.

No, to_char understands numeric locales regardless of enable-nls.
There is no provision for locale-dependent output from a plain
numeric column; you must use to_char.

regards, tom lane



Re: [GENERAL] Localization trouble

2007-07-05 Thread Chris Travers



Michael Fuhr wrote:



I think you'll need to use to_char():

test=> set lc_numeric to 'es_ES.UTF-8';
SET
test=> select to_char(1.234, '9D999');
 to_char
---------
  1,234
(1 row)

The file src/backend/utils/adt/pg_locale.c in the PostgreSQL source
code has comments about how various LC_* settings are used in the
backend.

  

Is there a way to accept localized numbers as input?
i.e. '1,39'::numeric?



Re: [GENERAL] Localization trouble

2007-07-05 Thread Michael Glaesemann


On Jul 5, 2007, at 19:47 , Michael Glaesemann wrote:

I don't believe you'll see numbers *as numbers* displayed with the  
formatting you desire unless you somehow tell your client (e.g.,  
psql) which locale you want to use. I haven't figured out how to do  
this yet, though.


I think display depends on whether you configured Postgres with
--enable-nls. I never have, so I can't really test this, but I suspect
that display would change with the different lc_* settings as well.
For example, setting lc_messages doesn't do anything on my machine
(built without --enable-nls).


Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] Localization trouble

2007-07-05 Thread Michael Fuhr
On Thu, Jul 05, 2007 at 05:10:57PM -0700, Chris Travers wrote:
> I am trying to find a way to select the number format at runtime for 
> textual representation of numbers.  I am currently running 8.1.4 built 
> from source on Fedora Linux core 5.
> 
> I have been trying to use set lc_numeric = various country codes (for 
> example es_EC), but I am not able to get the format to change from 1.00 
> to 1,00. 

I think you'll need to use to_char():

test=> set lc_numeric to 'es_ES.UTF-8';
SET
test=> select to_char(1.234, '9D999');
 to_char
---------
  1,234
(1 row)

The file src/backend/utils/adt/pg_locale.c in the PostgreSQL source
code has comments about how various LC_* settings are used in the
backend.

-- 
Michael Fuhr



Re: [GENERAL] Localization trouble

2007-07-05 Thread Michael Glaesemann


On Jul 5, 2007, at 19:10 , Chris Travers wrote:

I have been trying to use set lc_numeric = various country codes  
(for example es_EC), but I am not able to get the format to change  
from 1.00 to 1,00.

Any hints as to what I could be doing wrong?


Does this correspond to what you're seeing?

test=# CREATE TABLE lc_examples (a_money money NOT NULL, a_numeric numeric NOT NULL);
CREATE TABLE
test=# INSERT INTO lc_examples (a_money, a_numeric) VALUES ('1.32', -1.32);
INSERT 0 1
test=# CREATE VIEW lc_examples_view AS
SELECT a_money
     , a_numeric
     , to_char(a_numeric, '999D99S') AS a_formatted_numeric
FROM lc_examples;
CREATE VIEW
test=# SELECT * FROM lc_examples_view;
 a_money | a_numeric | a_formatted_numeric
---------+-----------+---------------------
   $1.32 |     -1.32 |   1.32-
(1 row)

test=# SHOW lc_monetary;
 lc_monetary
-------------
 C
(1 row)

test=# SHOW lc_numeric;
 lc_numeric
------------
 C
(1 row)

test=# SELECT * FROM lc_examples_view;
 a_money | a_numeric | a_formatted_numeric
---------+-----------+---------------------
   $1.32 |     -1.32 |   1.32-
(1 row)

test=# SET lc_monetary TO 'es_ES';
SET
test=# SET lc_numeric TO 'es_ES';
SET
test=# SELECT * FROM lc_examples_view;
 a_money | a_numeric | a_formatted_numeric
---------+-----------+---------------------
  Eu1,32 |     -1.32 |   1,32-
(1 row)

I don't believe you'll see numbers *as numbers* displayed with the  
formatting you desire unless you somehow tell your client (e.g.,  
psql) which locale you want to use. I haven't figured out how to do  
this yet, though.


Michael Glaesemann
grzm seespotcode net





[GENERAL] Localization trouble

2007-07-05 Thread Chris Travers

Hi all;

I am trying to find a way to select the number format at runtime for 
textual representation of numbers.  I am currently running 8.1.4 built 
from source on Fedora Linux core 5.


I have been trying to use set lc_numeric = various country codes (for 
example es_EC), but I am not able to get the format to change from 1.00 
to 1,00. 


Any hints as to what I could be doing wrong?


Best Wishes,
Chris Travers



Re: [GENERAL] Nested Transactions in PL/pgSQL

2007-07-05 Thread Alvaro Herrera
John DeSoi wrote:
> 
> On Jul 5, 2007, at 1:34 PM, Nykolyn, Andrew wrote:
> 
> >Is it possible to nest transactions within a stored procedure?  I
> >have a stored procedure that calls many other stored procedures, and
> >what happens is that after a certain amount of time the server runs
> >out of shared memory.  I know I can increase the PostgreSQL shared
> >memory.  However, that would be a temporary fix; I know it will
> >eventually run out again as more data is processed.  The right way
> >is to issue a savepoint or commit at various places
> >in my long stored procedure.  I want to believe that there is a way
> >to issue commits within a stored procedure since PostgreSQL now
> >supports nested transactions.
> 
> PL/pgSQL functions implicitly run within a transaction, so I don't  
> think you can issue BEGIN/COMMIT/ROLLBACK. But save points should be  
> OK. Any reason that won't work for your case?

It can't be done directly.  The only way to have a subtransaction in a
PL/pgSQL function is to use an EXCEPTION clause.

You can nest it -- for example have one BEGIN/EXCEPTION/END block inside
another.  You can do it serially as well, along the lines of

BEGIN
BEGIN
do stuff
EXCEPTION WHEN ...
catch it
END
BEGIN
do more stuff
EXCEPTION WHEN ...
same
END
END

Note that BEGIN here delimits a block; it has no relationship at all
with BEGIN in SQL which starts a transaction.
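
A short PL/pgSQL sketch of the serial form (the table names are placeholders):

```sql
CREATE OR REPLACE FUNCTION two_independent_steps() RETURNS void AS $$
BEGIN
    BEGIN
        INSERT INTO step_one_log VALUES (1);   -- runs in its own subtransaction
    EXCEPTION WHEN others THEN
        RAISE NOTICE 'step 1 failed; its work is rolled back';
    END;

    BEGIN
        INSERT INTO step_two_log VALUES (2);   -- independent subtransaction
    EXCEPTION WHEN others THEN
        RAISE NOTICE 'step 2 failed; step 1 is unaffected';
    END;
END;
$$ LANGUAGE plpgsql;
```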

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Nested Transactions in PL/pgSQL

2007-07-05 Thread John DeSoi


On Jul 5, 2007, at 1:34 PM, Nykolyn, Andrew wrote:

Is it possible to nest transactions within a stored procedure?  I
have a stored procedure that calls many other stored procedures, and
what happens is that after a certain amount of time the server runs
out of shared memory.  I know I can increase the PostgreSQL shared
memory.  However, that would be a temporary fix; I know it will
eventually run out again as more data is processed.  The right way
is to issue a savepoint or commit at various places
in my long stored procedure.  I want to believe that there is a way
to issue commits within a stored procedure since PostgreSQL now
supports nested transactions.


PL/pgSQL functions implicitly run within a transaction, so I don't  
think you can issue BEGIN/COMMIT/ROLLBACK. But save points should be  
OK. Any reason that won't work for your case?




John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL




Re: [GENERAL] [pgsql-general] In memory tables/databases

2007-07-05 Thread PFC


Why not have a table type that writes no WAL and is truncated whenever  
postgres starts? Such a table could then be put in a ramdisk tablespace  
and there would be no transaction atomicity repercussions. Is there  
something I'm missing?


Is this not in the TODO (if not already scheduled for the next version)?
Check ALTER TABLE SET PERSISTENCE ...



Re: [GENERAL] simple query question to use with DBI selectall_hashref

2007-07-05 Thread Martijn van Oosterhout
On Thu, Jul 05, 2007 at 01:39:31PM -0400, Kenji Morishige wrote:
> I would like to create a query that returns a column with an integer
> 1 through (row_count) to use as the index while used in conjunction with
> DBI's selectall_hashref($sql,$key) function.  In the past I'd usually just
> write a wrapper around selectrow_hashref and put all those results in an
> array. I don't know why DBI doesn't include a function like this.

I think DBI already does what you want, from the docs:

   You may often want to fetch an array of rows where each row
   is stored as a hash. That can be done simply using:

 my $emps = $dbh->selectall_arrayref(
 "SELECT ename FROM emp ORDER BY ename",
 { Slice => {} }
 );

It's then an array rather than a hash, but that's better, right?

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Working with dates

2007-07-05 Thread Ranieri Mazili

 Original Message  
Subject: [GENERAL] Working with dates
From: Ranieri Mazili <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org, [EMAIL PROTECTED]
Date: 5/7/2007 16:00

Hello,

I need to do the following select:

Number of days of the month - weekends - holidays

So this query will return the number of days that people can work.

Note that I have the holidays in one table; it's below:

CREATE TABLE holidays
(
 id_holidays serial NOT NULL,
 dt_holiday date,
 holiday_description character varying(60),
 input_date timestamp without time zone NOT NULL,
 CONSTRAINT holidays_pkey PRIMARY KEY (id_holidays)
)

I have no idea how to do it.

If someone could help, I would appreciate it.

Thanks




I can.

select count(A.data)
from (select (date_trunc('month',
                         (select production_date from production
                          order by production_date desc limit 1)::date
             )::date + x * '1 day'::interval)::date as data
      from generate_series(0, 31) x) A
where extract(month from A.data) =
      extract(month from (select production_date from production
                          order by production_date desc limit 1)::date)
  and extract(dow from A.data) <> 0
  and extract(dow from A.data) <> 6
  and A.data not in (select dt_holiday from holidays);
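
Roughly the same idea, written against the current month rather than the latest production_date, may be easier to read (an untested variant offered as a sketch):

```sql
SELECT count(*)
FROM (SELECT (date_trunc('month', current_date)::date
              + x * '1 day'::interval)::date AS d
      FROM generate_series(0, 31) x) AS days
WHERE extract(month from d) = extract(month from current_date)
  AND extract(dow from d) NOT IN (0, 6)            -- Sunday, Saturday
  AND d NOT IN (SELECT dt_holiday FROM holidays);
```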




[GENERAL] Working with dates

2007-07-05 Thread Ranieri Mazili

Hello,

I need to do the following select:

Number of days of the month - weekends - holidays

So this query will return the number of days that people can work.

Note that I have the holidays in one table; it's below:

CREATE TABLE holidays
(
 id_holidays serial NOT NULL,
 dt_holiday date,
 holiday_description character varying(60),
 input_date timestamp without time zone NOT NULL,
 CONSTRAINT holidays_pkey PRIMARY KEY (id_holidays)
)

I have no idea how to do it.

If someone could help, I would appreciate it.

Thanks



Re: [GENERAL] Way to determine index bloat stats?

2007-07-05 Thread Tom Lane
Jeff Amiel <[EMAIL PROTECTED]> writes:
> Is there some magic way of determining the number of bytes in an index 
> 'row' and then comparing the size on disk of the index file?

In recent releases contrib/pgstattuple/ has a function for obtaining
stats about an index.
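
In 8.2 that is the pgstatindex() function; after loading the contrib module's SQL script, something like this should work (the index name is illustrative, and the exact output columns can differ between releases):

```sql
-- Leaf density and fragmentation hint at how sparsely the pages are filled
SELECT index_size, leaf_pages, avg_leaf_density, leaf_fragmentation
FROM pgstatindex('my_table_pkey');
```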

regards, tom lane



Re: [GENERAL] [pgsql-general] In memory tables/databases

2007-07-05 Thread Joshua D. Drake

A.M. wrote:


On Jul 5, 2007, at 13:20 , Andrew Sullivan wrote:


On Sun, Jul 01, 2007 at 11:11:30PM +0200, Alexander Todorov wrote:

The question was is there something else that exists in PostgreSQL and
will do the same job.


Why not have a table type that writes no WAL and is truncated whenever 
postgres starts? Such a table could then be put in a ramdisk tablespace 
and there would be no transaction atomicity repercussions. Is there 
something I'm missing?


Claiming that postgresql is simply the wrong tool is silly, especially 
since it is so close to having the desired behavior.


Use pg_memcache.

Sincerely,

Joshua D. Drake

P.S. I agree with you, but you are barking up a very tall tree and you 
don't have a chainsaw. The fact is, global temp tables that could be 
assigned a static amount of memory to use that would recycle based on 
some parameter would be infinitely useful, but likely won't get anywhere.






Cheers,
M





--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




[GENERAL] Nested Transactions in PL/pgSQL

2007-07-05 Thread Nykolyn, Andrew
Is it possible to nest transactions within a stored procedure?  I have a
stored procedure that calls many other stored procedures, and what
happens is that after a certain amount of time the server runs out of
shared memory.  I know I can increase the PostgreSQL shared memory.
However, that would be a temporary fix; I know it will eventually run
out again as more data is processed.  The right way is to
issue a savepoint or commit at various places in my long stored
procedure.  I want to believe that there is a way to issue commits
within a stored procedure since PostgreSQL now supports nested
transactions.

Any help or work around would be appreciated.  Thanks.

Andy Nykolyn
Northrop Grumman 


[GENERAL] simple query question to use with DBI selectall_hashref

2007-07-05 Thread Kenji Morishige
I would like to create a query that returns a column with an integer
1 through (row_count) to use as the index while used in conjunction with
DBI's selectall_hashref($sql,$key) function.  In the past I'd usually just
write a wrapper around selectrow_hashref and put all those results in an
array. I don't know why DBI doesn't include a function like this.

If I could extract a pseudo-column from the query itself with the result
order number, I could just use that as the key in selectall_hashref and sort
the results by key.

Is there a built-in function that returns the current row #?

select foo from bar;
row#;foo
;-
1;foodata1 
2;foodata2 
3;foodata3 
4;foodata4 

so I can pump it into:
my $hashref = $dbh->selectall_hashref($query);

print $hashref->{1}; # prints 'foodata1' etc

Or am I missing something and there is a way better way to do this...?
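
Two sketches of a row-number pseudo-column on the SQL side: releases from 8.4 on provide the row_number() window function; before that, a temporary sequence is a common workaround (table and column names as in the example above):

```sql
-- PostgreSQL 8.4 and later: window function
SELECT row_number() OVER (ORDER BY foo) AS rownum, foo
FROM bar;

-- Earlier releases: number rows with a temporary sequence
CREATE TEMP SEQUENCE rownum_seq;
SELECT nextval('rownum_seq') AS rownum, foo
FROM (SELECT foo FROM bar ORDER BY foo) AS sorted;
```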

Sincerely,
Kenji



Re: [GENERAL] [pgsql-general] In memory tables/databases

2007-07-05 Thread A.M.


On Jul 5, 2007, at 13:20 , Andrew Sullivan wrote:


On Sun, Jul 01, 2007 at 11:11:30PM +0200, Alexander Todorov wrote:
The question was is there something else that exists in PostgreSQL and
will do the same job.


Why re-invent the wheel, and make it square?  But also, if you don't
care whether you keep your data, why on earth are you putting it in
an RDBMS?  Is it because all your pre-built tools already speak SQL?
If you're really after performance, I'm not convinced a SQL-speaking
RDBMS (delivered by MySQL or Postgres or anyone else) is what you
actually need.


Look, there are plenty of scenarios where data one does not care
about is linked (in a relational fashion) to data one does care
about. One common example is a web session: if your database fails,
then the sessions are really irrelevant in the future. Another
example is a live queue or a snapshot of other data (materialized views?).


As long as the database is running, then the data is useful. Such a  
table can contain foreign keys but no triggers and may not have  
references to it from "non-temp" tables.


Why not have a table type that writes no WAL and is truncated  
whenever postgres starts? Such a table could then be put in a ramdisk  
tablespace and there would be no transaction atomicity repercussions.  
Is there something I'm missing?
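
For what it's worth, stock PostgreSQL will let you create a tablespace on a ramdisk today, but it is not safe: after a reboot the catalogs still point at files that vanished with the ramdisk. The sketch below (path and table names invented) only illustrates the behavior being asked for, not a supported setup:

```sql
-- DANGER: contents of /mnt/ramdisk vanish on reboot, but the catalog
-- entries for these objects do not -- hence the feature request above.
CREATE TABLESPACE ramdisk LOCATION '/mnt/ramdisk/pgdata';

CREATE TABLE web_sessions (
    id   text PRIMARY KEY,
    data text
) TABLESPACE ramdisk;
```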


Claiming that postgresql is simply the wrong tool is silly,  
especially since it is so close to having the desired behavior.


Cheers,
M



[GENERAL] Way to determine index bloat stats?

2007-07-05 Thread Jeff Amiel
Got a bit of a rude surprise when I rebuilt a Slony subscriber node from
scratch and noticed the indexes for some tables using significantly less
space than on their master-node counterpart.
In one case, it was 2 gigabytes versus 9 gigabytes for the same table's
indexes on the master node.  I'm assuming that because of the specific
indexed fields and the various 'states' that they go through, we
ended up with many sparsely populated pages of index data.

While I am not too concerned (it took 3 years to get this 'fragmented', plus
the ability to do concurrent index creation in 8.2 may help), I am
interested in determining a way to see how 'sparsely' populated the
index file pages are at any particular point in time.


Is there some magic way of determining the number of bytes in an index
'row' and then comparing that to the size on disk of the index file?
Also... would another index type (hash?) be better suited for some
of these indexes (they are all currently btree), and would that
reduce the bloat?
(I wish I had tracked which specific indexes were 'bloated' so I could
analyze the usage pattern of the fields used.)


Any help would be appreciated.









Re: [GENERAL] [pgsql-general] In memory tables/databases

2007-07-05 Thread Andrew Sullivan
On Sun, Jul 01, 2007 at 11:11:30PM +0200, Alexander Todorov wrote:
> The question was is there something else that exists in PostgreSQL and
> will do the same job.

Why re-invent the wheel, and make it square?  But also, if you don't
care whether you keep your data, why on earth are you putting it in
an RDBMS?  Is it because all your pre-built tools already speak SQL? 
If you're really after performance, I'm not convinced a SQL-speaking
RDBMS (delivered by MySQL or Postgres or anyone else) is what you
actually need.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The whole tendency of modern prose is away from concreteness.
--George Orwell



Re: [GENERAL] Design Tool

2007-07-05 Thread Lee Keel
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Gabriele
> Sent: Tuesday, July 03, 2007 2:43 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Design Tool
> 
> I need a design tool to design my database.
> 
> Back in the past I used DBDesigner (free), but it is best suited to MySQL
> and I now use PostgreSQL and SQLite.
> 
> The tool should have most of the following:
> 
> Ability to create a rather complex and arbitrarily big schema and
> produce automatically the SQL code for PostgreSQL and for SQLite.
> 
> Some degree of automation with "logical to physical" procedures
> (auto-create entity for n:n relationships, foreign keys automatically added
> and other time-saving stuff).
> 
> Ability to produce documentation in XML/HTML using the schema and
> designer's comments on tables and properties (best would be an html
> documentation or an xml schema easily convertible in an html
> documentation with some other tool i don't know).
> 
> A few tools of purely graphical purpose (colored boxes/panels, labels)
> and an easy way to arrange tables, show or hide additional
> information, organize relationships (move their names, possibly hide
> them), and the ability to export the whole thing as an image.
> 
> "Free" or "not so costly" license. If i use postgresql is also to save
> money, as you might expect. A one hundred dollars software might be my
> solution, a one thousand dollars is probably not.
> 
> Ability to generate CRUD stored procedures automatically may be
> appreciated, but I believe it's more than I need.
> 
> 

We are currently using DeZign for Databases.  This is not a freeware tool,
but it is VERY useful.  I am not sure how much "not so costly" is, but I find
it very reasonable for the time it saves me.  It supports Postgres and MANY
other database formats.  You can find more information at
http://www.datanamic.com/dezign/index.html

Regards,
Lee Keel



Re: [GENERAL] [general] [general] perpetual dump/restore problem

2007-07-05 Thread wu_zhong_min
You are right.  I have now corrected that and
I believe that should fix the problem.

>
>AFAICS it would not be issuing that unless you'd made template0
>connectable on t




Re: [GENERAL] [general] perpetual dump/restore problem

2007-07-05 Thread Tom Lane
[EMAIL PROTECTED] writes:
> time?  Why won't pg_dump include a line to accept connection for
> template0 just before it writes "\connect template0" and then remove
> the permission after it is done with template0?

AFAICS it would not be issuing that unless you'd made template0
connectable on the source database.  That is not a good thing;
undo it.
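
The usual way to undo it (as a superuser) is to flip the flag back in pg_database:

```sql
-- Make template0 non-connectable again
UPDATE pg_database
SET datallowconn = false
WHERE datname = 'template0';
```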

regards, tom lane



Re: [GENERAL] Vacuum issue

2007-07-05 Thread Michael Fuhr
On Thu, Jul 05, 2007 at 01:17:13PM +0100, E.J. Moloney wrote:
> I have a database with a table that adds 3 records a day; I am
> deleting records older than 2 days.
> I am vacuuming it once a day.  I am having an issue that the disk usage
> is continually rising, i.e. the space being flagged as free by the vacuum
> process isn't being reused.

Are you sure this table is responsible for the disk space increase?
Might the growth be elsewhere?  Does this table receive a lot of
updates?  Have you done a database-wide VACUUM VERBOSE and examined
the free space map info at the end to see if you need to adjust
your FSM settings?

Have you checked whether the table's indexes are what's growing?
Certain usage patterns can cause indexes to grow even if they're
being vacuumed regularly; you might need to reindex periodically.
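
Concretely, the checks suggested above look something like this (the index name is a placeholder):

```sql
-- Database-wide vacuum; the final lines of the output compare the
-- pages needing free-space tracking against the max_fsm_pages setting.
VACUUM VERBOSE;

-- Rebuild a bloated index (takes an exclusive lock on its table):
REINDEX INDEX my_bloated_index;
```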

> Please find below a copy of the cron command being used.

Have you checked the vacuumdb output to confirm that it's running
successfully?

>  su - $PGUSER -c "$COMMAND --analyze -v " >>$PGLOG 2>&1

This command is vacuuming only one database, probably "postgres"
(but check the output to be sure).  Is that where your tables are?

> I am running Postgre 8.4 on a Susse 10.1

PostgreSQL (not "Postgre") 8.4 doesn't exist; do you mean 8.2.4?

-- 
Michael Fuhr



Re: [GENERAL] [general] perpetual dump/restore problem

2007-07-05 Thread wu_zhong_min
> OK. If you're not in a hurry, and can wait a few months 8.3 will be out.

I am moving the database to a new physical server and it makes sense to move to 
the latest version now.

>>for template0 all the
>> time?  Why won't pg_dump include a line to accept connection for
>> template0 just before it writes "\connect template0" and then remove
>> the permission after it is done with template0?

> Might be a bug, but first; did you dump using 8.2's pg_dumpall?

Yes.  I tried both 8.1 and 8.2 pg_dumpall many times.  I think that, in one
try with 8.1, it did not emit "\connect template0".





Re: [GENERAL] How to get fast ~ operator using C locale

2007-07-05 Thread Martijn van Oosterhout
On Thu, Jul 05, 2007 at 02:11:24PM +0200, Artur Rataj wrote:
> But I want the server to use the C locale. I set the locale to C and
> run postmaster. I supposed it uses C locale, but -d 5 shows in log:

The locale is decided when you run initdb. Once the cluster has been
created you can't change it.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] How to get fast ~ operator using C locale

2007-07-05 Thread Merlin Moncure

On 7/5/07, Artur Rataj <[EMAIL PROTECTED]> wrote:

But I want the server to use the C locale. I set the locale to C and
run postmaster. I supposed it uses C locale, but -d 5 shows in log:


The locale for indexing is set during initdb:
initdb --no-locale

The ~ operator does not consider using a btree, regardless of locale, unless
your right-hand term is a constant, or you can reduce it to a
(functional) boolean index... so ISTM you are looking in the wrong
place anyway. Try tsearch2?

merlin



[GENERAL] Vacuum issue

2007-07-05 Thread E.J. Moloney
I have a database with a table that adds 3 records a day; I am
deleting records older than 2 days.
I am vacuuming it once a day.  I am having an issue that the disk usage
is continually rising, i.e. the space being flagged as free by the vacuum
process isn't being reused.


Please find below a copy of the cron command being used.
0 0 * * * /data/newsam/scripts/samdatabasemaint.sh >/dev/null 2>&1


Relevent section from script

COMMAND="$prefix/bin/vacuumdb"

# What to use to shut down the postmaster

set -e

# Only start if we can find the postmaster.
test -x $DAEMON || exit 0

# Parse command line parameters.
 su - $PGUSER -c "$COMMAND --analyze -v " >>$PGLOG 2>&1

exit 0
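
One thing worth checking (a hedged sketch; the table name is hypothetical): whether the free space map is large enough to remember the pages VACUUM frees. On 8.x, `VACUUM VERBOSE` prints the FSM figures at the end:

```sql
VACUUM VERBOSE mytable;
-- The trailing INFO lines report how many page slots the free space map
-- contains versus how many are needed; if "needed" exceeds max_fsm_pages
-- in postgresql.conf, freed space is forgotten and cannot be reused,
-- and the table keeps growing.
```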


I originally had autovacuum active, but that ran into the same issue.

I am running PostgreSQL 8.4 on SuSE 10.1.

Thanks in advance for any help

E.J. Moloney




Re: [GENERAL] How to get fast ~ operator using C locale

2007-07-05 Thread Artur Rataj

But I want the server to use the C locale. I set the locale to C and
run postmaster. I supposed it uses C locale, but -d 5 shows in log:

DEBUG:  /usr/bin/postmaster: PostmasterMain: initial environ dump:
.
.
.

DEBUG:  LC_COLLATE=pl_PL.UTF-8
DEBUG:  LC_CTYPE=pl_PL.UTF-8
DEBUG:  LC_MESSAGES=pl_PL.UTF-8
DEBUG:  LC_MONETARY=C
DEBUG:  LC_NUMERIC=C
DEBUG:  LC_TIME=C

So, does it mean that the server does not really use C locale?


On 7/5/07, Dimitri Fontaine <[EMAIL PROTECTED]> wrote:

Hi,

Le jeudi 05 juillet 2007, Artur Rataj a écrit:
> Hello, I want to have pg use fast indexing for ~ operator. Is setting
> C locale for this is still necessary? If yes, is it enough to
> initdb/createdb with C locale? If it is not enough, why setting locale
> to C and starting postmaster still gives other locale, for example,
> pl_PL, in
> postmaster log "initial environ dump"?

It seems to me this page of the fine manual could match:

http://www.postgresql.org/docs/8.2/static/indexes-opclass.html

The operator classes text_pattern_ops, varchar_pattern_ops,
bpchar_pattern_ops, and name_pattern_ops support B-tree indexes on the types
text, varchar, char, and name, respectively. The difference from the default
operator classes is that the values are compared strictly character by
character rather than according to the locale-specific collation rules. This
makes these operator classes suitable for use by queries involving pattern
matching expressions (LIKE or POSIX regular expressions) when the server does
not use the standard "C" locale. As an example, you might index a varchar
column like this:
CREATE INDEX test_index ON test_table (col varchar_pattern_ops);

Regards,
--
dim






Re: [GENERAL] How to get fast ~ operator using C locale

2007-07-05 Thread Dimitri Fontaine
Hi,

Le jeudi 05 juillet 2007, Artur Rataj a écrit :
> Hello, I want to have pg use fast indexing for ~ operator. Is setting
> C locale for this is still necessary? If yes, is it enough to
> initdb/createdb with C locale? If it is not enough, why setting locale
> to C and starting postmaster still gives other locale, for example,
> pl_PL, in
> postmaster log "initial environ dump"?

It seems to me this page of the fine manual could match:

http://www.postgresql.org/docs/8.2/static/indexes-opclass.html

The operator classes text_pattern_ops, varchar_pattern_ops, 
bpchar_pattern_ops, and name_pattern_ops support B-tree indexes on the types 
text, varchar, char, and name, respectively. The difference from the default 
operator classes is that the values are compared strictly character by 
character rather than according to the locale-specific collation rules. This 
makes these operator classes suitable for use by queries involving pattern 
matching expressions (LIKE or POSIX regular expressions) when the server does 
not use the standard "C" locale. As an example, you might index a varchar 
column like this:
CREATE INDEX test_index ON test_table (col varchar_pattern_ops);
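
With such an index in place, a left-anchored pattern can be satisfied by a btree scan even under a non-C locale (a sketch, reusing the manual's hypothetical table and column):

```sql
-- A constant, left-anchored pattern can use test_index:
EXPLAIN SELECT * FROM test_table WHERE col LIKE 'foo%';

-- Equivalent left-anchored regex form:
EXPLAIN SELECT * FROM test_table WHERE col ~ '^foo';
```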

Regards,
-- 
dim




[GENERAL] How to get fast ~ operator using C locale

2007-07-05 Thread Artur Rataj

Hello, I want to have pg use fast indexing for ~ operator. Is setting
C locale for this is still necessary? If yes, is it enough to
initdb/createdb with C locale? If it is not enough, why setting locale
to C and starting postmaster still gives other locale, for example,
pl_PL, in
postmaster log "initial environ dump"?

Best regards,
Artur



Re: [GENERAL] Large Project, Integration with existing Oracle and MS SQL

2007-07-05 Thread Patrick Carroll

I would use Java, and possibly OLEDB/ADO/ADO.Net from the public side, as the
consumers of this info will want it nice and easy. I don't want to spend
hours developing a bespoke interface, but rather put the onus on the consumers
of the service to get what they need.

Thanks for your comments, much appreciated.

On 7/5/07, Richard Huxton <[EMAIL PROTECTED]> wrote:


Patrick Carroll wrote:
> There has got to be some persistence, there will be a lot of tables and
> metadata and it may have to handle validation requirements for other
apps
> doing secure file transfer and a bespoke secure http proxy and it's
> going to
> be a speculative buffer against protocol based worms crossing into the
> production environment.

OK, in that case I would have:
- one PG database for each Oracle/MSSQL database being transferred
- two users for each database being transferred, one "inside", one
"outside"
- each user is only allowed to connect to that database from a pre-set
IP address (via pg_hba.conf)
- lock down the permissions for each user appropriately

If you want to go the procedure/function route to access each DB you can
define your functions as SECURITY DEFINER for operations that need
"superuser" permissions.

I'd use Perl to hook it all together, but you mentioned jdbc, so
presumably you'll be going for Java.

--
   Richard Huxton
   Archonet Ltd



Re: [GENERAL] Large Project, Integration with existing Oracle and MS SQL

2007-07-05 Thread Richard Huxton

Patrick Carroll wrote:

There has got to be some persistence, there will be a lot of tables and
metadata and it may have to handle validation requirements for other apps
doing secure file transfer and a bespoke secure http proxy and it's 
going to

be a speculative buffer against protocol based worms crossing into the
production environment.


OK, in that case I would have:
- one PG database for each Oracle/MSSQL database being transferred
- two users for each database being transferred, one "inside", one "outside"
- each user is only allowed to connect to that database from a pre-set 
IP address (via pg_hba.conf)

- lock down the permissions for each user appropriately

If you want to go the procedure/function route to access each DB you can 
define your functions as SECURITY DEFINER for operations that need "superuser" 
permissions.
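
A minimal sketch of the locked-down setup (role, table, and address names are hypothetical, not from this thread):

```sql
-- One low-privilege role per side of the firewall DB:
CREATE ROLE xfer_inside  LOGIN;
CREATE ROLE xfer_outside LOGIN;

-- Each side only gets the rights its direction of transfer needs:
GRANT SELECT, INSERT ON transfer_queue TO xfer_inside;
GRANT SELECT, DELETE ON transfer_queue TO xfer_outside;
```

with matching pg_hba.conf lines pinning each role to its pre-set client
IP, e.g. `host firewalldb xfer_inside 10.0.0.5/32 md5`.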


I'd use Perl to hook it all together, but you mentioned jdbc, so 
presumably you'll be going for Java.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Large Project, Integration with existing Oracle and MS SQL

2007-07-05 Thread Patrick Carroll

There has got to be some persistence, there will be a lot of tables and
metadata and it may have to handle validation requirements for other apps
doing secure file transfer and a bespoke secure http proxy and it's going to
be a speculative buffer against protocol based worms crossing into the
production environment.

On 7/5/07, Richard Huxton <[EMAIL PROTECTED]> wrote:


Patrick Carroll wrote:
> I am architecting a solution for an interface between a highly secure
> production environment and a corporate network which involves transfer
of
> records from Oracle and SQL Server through an intermediary "firewall
DB", a
> Postgres Instance, to SQL Server/ Oracle. I anticipate that there will
> either be direct database links or jdbc connections and stored
> procedures to
> pass data.
>
> Does anybody have a view on likely issues I may have in practice, should
I
> really be looking at existing commercial technologies or is PostgreSQL
the
> right technology?

I'm not sure what PostgreSQL is doing for you here, unless you need some
sort of "buffer" to cope with network bandwidth problems.

Why not just have a secured application sitting in the dmz/on firewall
and connect to both sides transferring for you?

--
   Richard Huxton
   Archonet Ltd



Re: [GENERAL] Large Project, Integration with existing Oracle and MS SQL

2007-07-05 Thread Richard Huxton

Patrick Carroll wrote:

I am architecting a solution for an interface between a highly secure
production environment and a corporate network which involves transfer of
records from Oracle and SQL Server through an intermediary "firewall DB", a
Postgres Instance, to SQL Server/ Oracle. I anticipate that there will
either be direct database links or jdbc connections and stored 
procedures to

pass data.

Does anybody have a view on likely issues I may have in practice, should I
really be looking at existing commercial technologies or is PostgreSQL the
right technology?


I'm not sure what PostgreSQL is doing for you here, unless you need some 
sort of "buffer" to cope with network bandwidth problems.


Why not just have a secured application sitting in the dmz/on firewall 
and connect to both sides transferring for you?


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] perpetual dump/restore problem

2007-07-05 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Dear List,

As many of you know every time one wants to upgrade to a major server
version, as I want to do now, there is a need for pg_dumpall and
restore.  This is a huge undertaking and requires lots of testing and
planning.  I do hope that in the future this requirement will be
dropped.


There is currently some discussion about an upgrade tool, but there has 
been discussion before. It's not a simple problem, and it won't reduce 
the need for testing, just the dump/restore.



I am now upgrading from 8.1.9 to 8.2.4 and pg_dumpall gives a dump
file which has a command "\connect template0"


OK. If you're not in a hurry, and can wait a few months 8.3 will be out.


Normally template0 does not accept connections, but I found a way to
make that happen in the documentation on the postgresql website.  But is
it generally safe to have template0 accept connections all the
time?  Why won't pg_dumpall include a line to accept connections for
template0 just before it writes "\connect template0", and then remove
the permission after it is done with template0?


Might be a bug, but first; did you dump using 8.2's pg_dumpall?

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Reasonable way to backup cluster Windows

2007-07-05 Thread Hannes Dorbath

On 04.07.2007 17:14, Andrus wrote:
How to back up the whole cluster so that a huge temporary file containing all 
data in uncompressed form is not created in the Windows temporary directory 
(equivalent to pg_dumpall -c | gzip > backup in Linux)?


A pipe in Windows creates a huge temporary file.


pg_dumpall has a lot of disadvantages compared to pg_dump -Fc; I don't see 
why one would want that.


What I'd recommend is to use pg_dumpall -g and pg_dump -Fc on each DB. 
Then get a copy of rdiff-backup for Windows to create nice incremental 
backups. Wrap those 3 things in a cmd script and use the task scheduler 
to run it at a given interval.



--
Regards,
Hannes Dorbath
