Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scot Kreienkamp
Scott,

Would the "alter user postgres set statement_timeout=0;" be a permanent
change?  I ask because our application is using that for its login to
the database.  (No lectures please, I inherited the system that way.  I
already read the riot act to our developers about that.)  If so I'll
have to set it back after the vacuum is done.

FYI, when I inherited the system it was doing nightly vacuum fulls.  It
was that way for several months.  If that causes bloated indexes, then
that's fairly likely a problem I have.  Sounds like I should quit
running vacuum fulls altogether except maybe once or twice per year.  

I'll try adding a reindex for tonight's vacuum run.

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com

-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com] 
Sent: Monday, January 05, 2009 11:35 AM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum problems

On Mon, Jan 5, 2009 at 7:46 AM, Scot Kreienkamp wrote:
> Hi everyone...
>
>
>
> I have a database that is currently about 25 gigs on my primary DB server
> running Postgres 8.2.9, and two others that are less than 1 gig apiece.  The
> DB server is a quad proc, quad core, 64 gigs of memory, 5 drive RAID5 array,
> so it has plenty of horsepower.  Until about three weeks ago I was running a
> nightly vacuum analyze and a vacuum full analyze once per week.

Did you have a compelling reason for running vacuum full?  It's
generally discouraged unless you've got a usage pattern that demands
it.  If you are running vacuum full you likely have bloated indexes,
so you might need to reindex the db as well.

> This is what I was running for the vacuum full command:
>
> vacuumdb -a -e -f -z -v  -U postgres
>
>
>
> The nightly vacuums have been working flawlessly, but about three weeks ago
> the vacuum full started failing.  It was taking about 5-10 minutes normally,
> but all of a sudden it started hitting the command timeout that I have set,
> which is at 60 minutes.

Since I assume vacuum is running under the superuser account you can try
this:

alter user postgres set statement_timeout=0;

To give it all the time it needs to finish.


>  I thought that it may be a corrupt table or a large
> amount of content had been deleted from a database, so I built a script to
> loop through each database and run a vacuum full analyze on each table
> individually thinking I would find my problem table.  The script finished in
> 5 minutes!

It might be that the previous vacuum full cleaned up enough stuff that
the next one ran faster.  But again, vacuum full is usually a bad idea
as regular maintenance.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding Arabic dictionary for TSearch2.. to_tsvector('arabic'...) doesn't work..

2009-01-05 Thread Sam Mason
On Mon, Jan 05, 2009 at 02:49:51PM +0100, Mohamed wrote:
> I have just made the switch from MySql to PostgreSql to be able to take
> advantage of TSearch2 and I just arrived to the mailing list :) I am
> creating a website in two languages (english, arabic) and would like to have
> dictionaries for both for my search. I noticed that arabic wasn't included
> by default. How can I add it / make it work?

Not sure about adding different dictionaries; but there was a discussion
about multiple languages in the same relation a month ago:

  http://archives.postgresql.org/pgsql-general/2008-11/msg00340.php

if you don't get any more pointers, the following page documents PG's
support of full text searching:

  http://www.postgresql.org/docs/current/static/textsearch.html


  Sam



[GENERAL] Insert into partition table without logic statement

2009-01-05 Thread Jolles, Peter M (GE Infra, Energy)
I have a table that has multiple partitions (1000+) and I want a trigger
or rule to be able to write data automatically to each partition. One of
the values I am inserting is the table partition name, but I'm not sure
how to use it as a variable in a trigger. All trigger examples have a
series of IF statements, and I don't want to have 1000 IF statements
evaluated on every insert, not to mention adding new statements when I
have new partitions created.

Is something like the following possible? In the current form (below) I
get an INSERT INTO ERROR. I'm relatively new to databases, completely
new to Postgres, so I'm not sure if I'm headed in the right direction or
not. I'm running 8.3.5 if it makes a difference.


CREATE OR REPLACE FUNCTION mytable_insert_trigger()
RETURNS trigger AS
$BODY$
BEGIN
insert into NEW.PartitionNameVariable values (NEW.*);
return null;
END;
$BODY$
LANGUAGE 'plpgsql'
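For later readers: PL/pgSQL cannot use a NEW field directly as a table name, but it can build the INSERT as a string and run it with EXECUTE. A hedged, untested sketch of that approach (the columns partition_name and some_value are hypothetical; note that EXECUTE ... USING only arrives in 8.4, so on 8.3 the row's values have to be spliced into the string):

```sql
CREATE OR REPLACE FUNCTION mytable_insert_trigger()
RETURNS trigger AS
$BODY$
BEGIN
    -- quote_ident() protects the identifier; on 8.3 the values must be
    -- escaped with quote_literal() and concatenated into the statement.
    EXECUTE 'INSERT INTO ' || quote_ident(NEW.partition_name)
         || ' VALUES (' || quote_literal(NEW.partition_name)
         || ', ' || quote_literal(NEW.some_value) || ')';
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER mytable_before_insert
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE mytable_insert_trigger();
```

One caveat: quote_literal(NULL) yields NULL, which nulls out the whole concatenated string, so nullable columns need COALESCE handling; dynamic SQL like this is also slower per row than a plain IF/ELSIF chain.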


Thanks,
Peter 



Re: [GENERAL] Adding Arabic dictionary for TSearch2.. to_tsvector('arabic'...) doesn't work..

2009-01-05 Thread Oleg Bartunov

On Mon, 5 Jan 2009, Mohamed wrote:


> Hi!
> I have just made the switch from MySql to PostgreSql to be able to take
> advantage of TSearch2 and I just arrived to the mailing list :) I am
> creating a website in two languages (english, arabic) and would like to have
> dictionaries for both for my search. I noticed that arabic wasn't included
> by default. How can I add it / make it work?


Read the documentation:
http://www.postgresql.org/docs/8.3/static/textsearch.html
What dictionaries do you have for arabic?



> I am currently saving the english and arabic text in the same relation. So I
> guess I could create two indexes over the two dictionaries or should I split
> them ?


If arabic and english characters do not overlap, you can use one index.
For example, for english/russian text we use one index.

Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



[GENERAL] Vacuum problems

2009-01-05 Thread Scot Kreienkamp
Hi everyone...

 

I have a database that is currently about 25 gigs on my primary DB
server running Postgres 8.2.9, and two others that are less than 1 gig
apiece.  The DB server is a quad proc, quad core, 64 gigs of memory, 5
drive RAID5 array, so it has plenty of horsepower.  Until about three
weeks ago I was running a nightly vacuum analyze and a vacuum full
analyze once per week.  

 

This is what I was running for the vacuum full command:

vacuumdb -a -e -f -z -v  -U postgres

 

The nightly vacuums have been working flawlessly, but about three weeks
ago the vacuum full started failing.  It was taking about 5-10 minutes
normally, but all of a sudden it started hitting the command timeout
that I have set, which is at 60 minutes.  I thought that it may be a
corrupt table or a large amount of content had been deleted from a
database, so I built a script to loop through each database and run a
vacuum full analyze on each table individually thinking I would find my
problem table.  The script finished in 5 minutes!  

 

This is what I'm running on each table now in my script:

vacuumdb -d $DB -t $TABLE -e -f -z -v  -U postgres

 

As I understand it, the "vacuumdb -a" command basically does the same
thing as my script.  So why is it timing out while my script finishes in
5 minutes or less?  Is the "vacuumdb -a" command doing something that
I'm not?  

 

Now that the holiday season is past I will be upgrading to the latest
8.2 release as soon as possible, but I need to get this figured out
first unless it's a bug that the upgrade fixes.

 

I'd appreciate any advice or explanations you guys can send my way.  

 

 

Thanks,

 

Scot Kreienkamp

La-Z-Boy Inc.

skre...@la-z-boy.com

734-242-1444 ext 6379

 



Re: [GENERAL] Vacuum problems

2009-01-05 Thread Grzegorz Jaśkiewicz
btw, is that on windows ? (looking at la-z-boy.com, it uses .net).



[GENERAL] Mailing list ?

2009-01-05 Thread Mohamed
Mailing list ?


Re: [GENERAL] does postgres have the same limitation as MySQL?

2009-01-05 Thread Chris Browne
tekion  writes:
> I know that MySQL can only use one index at a time for a query.  Does
> Postgres have this same limitation? For example, the following query:
>
> select uid, count(uid) from A, B where A.uid = B.uid and date between
> ... and ...
>
> MySQL will either use the index on uid or the one on date (both uid and
> date are indexed).  Could Postgres use multiple indexes in a query?

Preface: I assume that the date column is on table B, and that both A
and B have indexes on their respective "uid" columns.

There are two perspectives on this, both of which involve using
multiple indices :-).

1.  A sensible query plan for this would be quite likely to involve
using the index on B(date) to find the relevant entries in table B,
and then join against relevant entries in A via the index on A(uid).

If one or the other table is small enough, or if the indexes otherwise
don't seem useful, then the query planner may choose *not* to use such
indices, but if the tables are reasonably large, then you are very
likely to find PostgreSQL using multiple indices for this query.

2.  Since v8.1, PostgreSQL has been able to do "bitmap index scans,"
which allows composing multiple indexes together.

It wouldn't likely be relevant to this query, but here are the
relevant release notes from v8.1:

   Allow index scans to use an intermediate in-memory bitmap (Tom)

 In previous releases, only a single index could be used to do
 lookups on a table. With this feature, if a query has WHERE
 tab.col1 = 4 and tab.col2 = 9, and there is no multicolumn index
 on col1 and col2, but there is an index on col1 and another on
 col2, it is possible to search both indexes and combine the
 results in memory, then do heap fetches for only the rows
 matching both the col1 and col2 restrictions. This is very useful
 in environments that have a lot of unstructured queries where it
 is impossible to create indexes that match all possible access
 conditions. Bitmap scans are useful even with a single index, as
 they reduce the amount of random access needed; a bitmap index
 scan is efficient for retrieving fairly large fractions of the
 complete table, whereas plain index scans are not.
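A minimal sketch of the scenario the release note describes (hypothetical table; whether the planner actually picks a BitmapAnd depends on row counts and statistics):

```sql
CREATE TABLE tab (col1 integer, col2 integer);
CREATE INDEX tab_col1_idx ON tab (col1);
CREATE INDEX tab_col2_idx ON tab (col2);

-- With enough rows, EXPLAIN can show both indexes combined in memory:
EXPLAIN SELECT * FROM tab WHERE col1 = 4 AND col2 = 9;
-- Bitmap Heap Scan on tab
--   ->  BitmapAnd
--         ->  Bitmap Index Scan on tab_col1_idx
--         ->  Bitmap Index Scan on tab_col2_idx
```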
-- 
(format nil "~...@~s" "cbbrowne" "linuxfinances.info")
http://linuxfinances.info/info/sgml.html
The only problem
with Haiku is that you just
get started and then



Re: [GENERAL] Vacuum problems

2009-01-05 Thread Tom Lane
"Scot Kreienkamp"  writes:
> The nightly vacuums have been working flawlessly, but about three weeks
> ago the vacuum full started failing.  It was taking about 5-10 minutes
> normally, but all of a sudden it started hitting the command timeout
> that I have set, which is at 60 minutes.  I thought that it may be a
> corrupt table or a large amount of content had been deleted from a
> database, so I built a script to loop through each database and run a
> vacuum full analyze on each table individually thinking I would find my
> problem table.  The script finished in 5 minutes!  

Maybe the vacuum was blocking waiting for someone's lock on a table?
What else is running in parallel with your nightly vacuum script?

regards, tom lane



Re: [GENERAL] Timestamp shift when importing data

2009-01-05 Thread Jolles, Peter M (GE Infra, Energy)
On Saturday, January 03, 2009 6:27 PM, David T Wilson wrote:
 
> Those are the dates of daylight savings time kicking in- 
> which happens, not coincidentally, at 2am.
> 
> What's the type of the field you're trying to import into, 
> and how are you doing the import?

That makes a lot more sense now, although I'm not sure why it is only
happening in the spring and not in the fall. The original data field is
a MS Access "General Date". In Postgres it is stored as a timestamp with
timezone.

To do the import, I tried using an Access append query. I've also tried
to use the Access export function.

Reading up on Windows XP handling of DST, it appears that it is
unreliable for pre-2007 time shifts, which would explain why it isn't
happening with more recent data. Is there any way to ignore DST in an
import/export transaction?

Thanks,
Peter



Re: [GENERAL] Mailing list ?

2009-01-05 Thread Scott Marlowe
Yes.  Yes it is.

On Mon, Jan 5, 2009 at 6:43 AM, Mohamed  wrote:
> Mailing list ?



Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scot Kreienkamp
Nothing.  I ran a query to see what other queries were running and what
other locks existed at the time.  No queries running, no locks.  Nothing
running at that time of night except dump, which is finished before the
vacuum job starts.

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Monday, January 05, 2009 1:05 PM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum problems 

"Scot Kreienkamp"  writes:
> The nightly vacuums have been working flawlessly, but about three weeks
> ago the vacuum full started failing.  It was taking about 5-10 minutes
> normally, but all of a sudden it started hitting the command timeout
> that I have set, which is at 60 minutes.  I thought that it may be a
> corrupt table or a large amount of content had been deleted from a
> database, so I built a script to loop through each database and run a
> vacuum full analyze on each table individually thinking I would find my
> problem table.  The script finished in 5 minutes!

Maybe the vacuum was blocking waiting for someone's lock on a table?
What else is running in parallel with your nightly vacuum script?

regards, tom lane



Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scott Marlowe
On Mon, Jan 5, 2009 at 10:15 AM, Scot Kreienkamp  wrote:
> Scott,
>
> Would the "alter user postgres set statement_timeout=0;" be a permanent
> change?  I ask because our application is using that for its login to
> the database.  (No lectures please, I inherited the system that way.  I
> already read the riot act to our developers about that.)  If so I'll
> have to set it back after the vacuum is done.

Then you can just issue a "set statement_timeout=0" before you run
vacuum / vacuum full.
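To make the distinction concrete (a sketch; statement_timeout accepts unit suffixes on 8.2 and later):

```sql
-- Session-local: affects only the current connection, vanishes on disconnect.
SET statement_timeout = 0;
VACUUM ANALYZE;

-- Per-role default: persists for every future login of that role,
-- which is why the ALTER USER form has to be undone afterwards:
ALTER USER postgres SET statement_timeout = 0;
-- ...and later, drop the per-role override again:
ALTER USER postgres RESET statement_timeout;
```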

The update versus insert ratio isn't as important as how many rows are
updated out of the total between each run of vacuum analyze.  Vacuum
full is definitely NOT a regular, recommended practice.  I don't think
the docs really say it is.  But a few other people have seemed to get
the same idea from the docs, so there must be some gray area I'm not
seeing when I read them.  Given the usage pattern you described
earlier, I'd say vacuum full is definitely NOT called for, but regular
vacuum should be plenty.

The best thing to do is to examine how many dead tuples you've got to
keep track of, and if that number keeps rising then figure out if fsm
pages needs to be bumped up, and / or autovacuum needs more aggressive
settings.  Note that autovacuum is kind of handcuffed on pg versions
before 8.3 because it was single threaded, and one really big table
could throw it behind, with other more frequently updated tables getting
bloated while the vacuum thread runs against that one large table.

Use vacuum verbose to get an idea of how many dead tuples there are in
the database, and see if they rise to a plateau, or just keep rising.
For most usage patterns with autovacuum enabled, you'll see a steady
rise to about 10-20% dead tuples, then it should level off.
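For example (output shape abbreviated from memory; the exact wording varies by version):

```sql
-- Per-table dead tuple counts appear in the VACUUM VERBOSE output:
VACUUM VERBOSE mytable;
-- INFO:  "mytable": found 12345 removable, 67890 nonremovable row versions ...
-- A database-wide verbose run (vacuumdb -v) additionally reports free space
-- map usage at the end, which shows whether max_fsm_pages needs raising.
```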

> FYI, when I inherited the system it was doing nightly vacuum fulls.  It
> was that way for several months.  If that causes bloated indexes, then
> that's fairly likely a problem I have.  Sounds like I should quit
> running vacuum fulls altogether except maybe once or twice per year.

A lot of times a PostgreSQL install doing nightly fulls is a sign of
someone who started out with an old version that only supported full
vacuum and applied the knowledge they gained there to a newer version
which likely doesn't need it.

If you do find one table that really needs full vacuums because of its
usage pattern, it's best to cron up a single vacuum (regular) to run
more often on it, or make autovacuum more aggressive, or, failing those
two, to make a regular nightly vacuum full / cluster / reindex for
that one relation.

Usually cluster is a better choice, as it doesn't bloat indexes and
puts the table into index order (on the index you clustered on).
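On the 8.2 line the syntax names the index first (hypothetical names; note that CLUSTER takes an exclusive lock and rewrites the whole table):

```sql
-- Rewrite my_table in the physical order of my_index,
-- rebuilding all of its indexes in the process.
CLUSTER my_index ON my_table;
ANALYZE my_table;  -- refresh statistics after the rewrite
```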



Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scott Marlowe
Another strategy to look at is to make autovacuum more aggressive by
putting entries for that one table into the pg_autovacuum table.  I
know that table exists in 8.3 but I don't remember if it exists in
older versions or how it works there.  But if you have that available
you can make that one relation more aggressively autovacuumed without
affecting the other relations' vacuum scheduling.  On 8.3 you can set
individual cost delays and all the other settings you'd need.  If 8.2
doesn't support this, look at migrating for that functionality alone.
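A hedged sketch for 8.3's pg_autovacuum catalog (the column list is quoted from memory, so check `\d pg_autovacuum` on your server; -1 means "fall back to the global setting"):

```sql
INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit,
     freeze_min_age, freeze_max_age)
VALUES
    ('my_big_table'::regclass, true,
     500, 0.05,        -- vacuum after 500 rows + 5% of the table change
     250, 0.02,        -- analyze more eagerly as well
     10, -1,           -- small cost delay, default cost limit
     -1, -1);          -- default freeze ages
```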



[GENERAL] Adding Arabic dictionary for TSearch2.. to_tsvector('arabic'...) doesn't work..

2009-01-05 Thread Mohamed
Hi!
I have just made the switch from MySql to PostgreSql to be able to take
advantage of TSearch2 and I just arrived to the mailing list :) I am
creating a website in two languages (english, arabic) and would like to have
dictionaries for both for my search. I noticed that arabic wasn't included
by default. How can I add it / make it work?

I am currently saving the english and arabic text in the same relation. So I
guess I could create two indexes over the two dictionaries or should I split
them ?

/ Moe


Re: [GENERAL] does postgres have the same limitation as MySQL?

2009-01-05 Thread Jonah H. Harris
On Mon, Jan 5, 2009 at 12:05 PM, Chris Browne  wrote:

> tekion  writes:
> > I know that MySQL can only use one index at a time for a query.  Does
> > Postgres have this same limitation? For example, the following query:


This is false for many cases since MySQL 5.0.

http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html

-- 
Jonah H. Harris, Senior DBA
myYearbook.com


Re: [GENERAL] Adding Arabic dictionary for TSearch2.. to_tsvector('arabic'...) doesn't work..

2009-01-05 Thread Mohamed
There was an error and messages I have written didn't end up here, so I
will repost a couple... :)
---

Thank you Oleg. I am reading that guide. It's a little too much at one time;
I am getting a little confused. I don't have any dictionary yet, but I just
found a hunspell dictionary for Arabic:

http://qa.debian.org/developer.php?login=msameer%40debian.org

Now I see Sam wrote too... :)

I hope you guys will still be around to help me set it up when I have
finished my reading? I have seen earlier posts from you Oleg about
dictionaries, so I suppose you know them pretty well. (Not to mention the big
pdf file you have written!)
I am working on my development laptop now, and I'd prefer not to have to
repeat all the work later on. Is there a way of avoiding this? To only have
to do these builds once, or create some kind of batch file and run it on
the production server later (I am afraid I won't remember what I did in
order to repeat it).


Btw, do you know if it is possible to combine Tsearch with Hibernate (HQL),
or will I just have to do it all in SQL?

Is it the more dictionaries I use the better, or should I just choose and
use only one to build my lexemes and stopwords (etc.)?


Oleg :



> We usually use {ispell, stemmer} dictionary stack. if you don't have
> stemmer for arabic, just use simple dictionary, so if ispell dict doesn't
> recognize word, it will be recognized by simple dict and that word will be
> indexed.
>


What do you mean by a simple dictionary? Does that come with Postgres? Is
it possible to do the same with {hunspell, stemmer(simple?)}?
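For what it's worth, 'simple' is a built-in dictionary that lowercases words and accepts everything. On 8.3, a hunspell file set can be loaded through the ispell template and stacked in front of it, roughly like this (hedged sketch; the file names, configuration name, and token types are assumptions):

```sql
-- Assumes arabic.dict, arabic.affix and arabic.stop are installed
-- under $SHAREDIR/tsearch_data/.
CREATE TEXT SEARCH DICTIONARY arabic_hunspell (
    TEMPLATE  = ispell,
    DictFile  = arabic,
    AffFile   = arabic,
    StopWords = arabic
);

CREATE TEXT SEARCH CONFIGURATION arabic ( COPY = pg_catalog.simple );

-- Try the hunspell dictionary first; anything it doesn't recognize
-- falls through to 'simple' and still gets indexed.
ALTER TEXT SEARCH CONFIGURATION arabic
    ALTER MAPPING FOR word
    WITH arabic_hunspell, simple;
```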


/ Moe


Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scot Kreienkamp
Nope, on RHEL5.  This is the backend of an application that's not available or 
visible to the public.  

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com
734-242-1444 ext 6379
-Original Message-
From: Grzegorz Jaśkiewicz [mailto:gryz...@gmail.com] 
Sent: Monday, January 05, 2009 11:19 AM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum problems

btw, is that on windows ? (looking at la-z-boy.com, it uses .net).



Re: [GENERAL] Timestamp shift when importing data

2009-01-05 Thread Adrian Klaver
On Monday 05 January 2009 5:29:19 am Jolles, Peter M (GE Infra, Energy) wrote:
> On Saturday, January 03, 2009 6:27 PM, David T Wilson wrote:
> > Those are the dates of daylight savings time kicking in-
> > which happens, not coincidentally, at 2am.
> >
> > What's the type of the field you're trying to import into,
> > and how are you doing the import?
>
> That makes a lot more sense now, although I'm not sure why it is only
> happening in the spring and not in the fall. The original data field is
> a MS Access "General Date". In Postgres it is stored as a timestamp with
> timezone.
>
> To do the import, I tried using an Access append query. I've also tried
> to use the Access export function.
>
> Reading up on Windows XP handling of DST, it appears that it is
> unreliable for pre-2007 time shifts, which would explain why it isn't
> happening with more recent data. Is there any way to ignore DST in an
> import/export transaction?

One way would be to create a field of type "timestamp without time zone" and
import your timestamp data into that field.
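A minimal sketch of that suggestion (hypothetical names):

```sql
CREATE TABLE readings (
    taken_at timestamp without time zone,  -- stored as imported, no DST math
    value    numeric
);

-- If zone-aware values are needed later, attach a zone explicitly:
-- SELECT taken_at AT TIME ZONE 'America/New_York' FROM readings;
```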

>
> Thanks,
> Peter



-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scott Marlowe
On Mon, Jan 5, 2009 at 7:46 AM, Scot Kreienkamp  wrote:
> Hi everyone…
>
>
>
> I have a database that is currently about 25 gigs on my primary DB server
> running Postgres 8.2.9, and two others that are less than 1 gig apiece.  The
> DB server is a quad proc, quad core, 64 gigs of memory, 5 drive RAID5 array,
> so it has plenty of horsepower.  Until about three weeks ago I was running a
> nightly vacuum analyze and a vacuum full analyze once per week.

Did you have a compelling reason for running vacuum full?  It's
generally discouraged unless you've got a usage pattern that demands
it.  If you are running vacuum full you likely have bloated indexes,
so you might need to reindex the db as well.

> This is what I was running for the vacuum full command:
>
> vacuumdb -a -e -f -z -v  -U postgres
>
>
>
> The nightly vacuums have been working flawlessly, but about three weeks ago
> the vacuum full started failing.  It was taking about 5-10 minutes normally,
> but all of a sudden it started hitting the command timeout that I have set,
> which is at 60 minutes.

Since I assume vacuum is running under the superuser account you can try this:

alter user postgres set statement_timeout=0;

To give it all the time it needs to finish.


>  I thought that it may be a corrupt table or a large
> amount of content had been deleted from a database, so I built a script to
> loop through each database and run a vacuum full analyze on each table
> individually thinking I would find my problem table.  The script finished in
> 5 minutes!

It might be that the previous vacuum full cleaned up enough stuff that
the next one ran faster.  But again, vacuum full is usually a bad idea
as regular maintenance.



[GENERAL] General subselect question

2009-01-05 Thread Tim Hart
In the general case, is a union inside a subselect less performant
than a top-level union?


I have a query that looks something like this:

select ...
from   table1,
       table2,
       table3
where ...
union
select ...
from   table1,
       table2,
       table3 ref1,
       table3 ref2
where ...

I realized today that I could convert the query to

select ...
from   table1,
       table2,
       ( select ...
         from   table3
         where ...
         union
         select ...
         from   table3 ref1,
                table3 ref2
         where ... )
where ...

For my specific case, I'll use EXPLAIN to determine the performance
difference. I'll weigh the performance benefits against the maintenance
benefits (less duplication of code in the second case) and
decide (assuming I'm still motivated on Friday ;) ).


But in the general case, do I have to pay special attention to
unions in subselects?


Tim Hart
615-713-9956 :cell
timothyjhart :Y!
tjh...@me.com :AIM



Re: [GENERAL] Vacuum problems

2009-01-05 Thread Grzegorz Jaśkiewicz
Well, upgrade to 8.2.11 - even though the change log doesn't say anything
about vacuum there.
Secondly, I'd turn up logging verbosity and see if postgresql
actually complains about anything there; otherwise it is pretty much
a blind guess.



Re: [GENERAL] Per-user schemas with inherited skeleton.

2009-01-05 Thread alvarezp
On Mon, 2009-01-05 at 13:05 +0900, Craig Ringer wrote:
> alvar...@alvarezp.ods.org wrote:
>
> > On user account creation, the schema gets created and the interface
> > tries to do a "CREATE TABLE my_relation () INHERITS
> > (_skel.my_relation);" as the new role, but PostgreSQL returns the error
> > "must be owner of relation my_relations". I am assuming it refers to
> > _skel.my_relation.
>
> > The following questions arose during the preparation of this test:
> >
> > Why is ownership needed on the parent table?
>
> I don't know for sure, but I'd say it's a security issue. Granting a
> user ownership of a table that inherits another table gives them, as the
> owner of the child table, the ability to (via the child table) INSERT
> into the parent table, as well as DELETE/UPDATE/SELECT rows they've
> previously inserted themselves.

I see. So, in other words, inserting a record into a table they don't have
INSERT privileges on would be just a matter of inheriting that table.

For the tables given in the _skel schema in this model, this would be the
way to go. The _skel tables are empty, and have foreign keys to other
secured tables that prevent two users from having the same value in the
primary keys.

It is also possible to list the records from all users with a simple
SELECT directly from the parent table in the _skel schema.

> You can also have the owner of the parent table CREATE the child
> table
> with the inheritance relationship, then ALTER TABLE ... OWNER TO  to
> give ownership of the child table away. You can wrap this within a
> SECURITY DEFINER  Pl/PgSQL function if you want the eventual owning user
> to be able to do it. eg, assuming for the sake of the example that
> you're on a machine with `trust' authentication set to the db:

This is what bewildered me. So if parent and child tables CAN have
different owners, this is perfect for the model. Users can extend their
use of the database with an app of their own connecting with their own
role and permissions, it is still secure (or so it looks), and
administration is relatively simple.

> \c - super
> CREATE SCHEMA super;
> CREATE TABLE super.parent (x integer);
>
> -- insert a dummy row, too. No end user should ever be able to see it,
> -- and it's just here to demonstrate that one user shouldn't be able
> << snipped rest of SQL demonstration >>

Thank you for this detailed example. It proves to be secure enough.

CHECK constraints or FOREIGN KEY constraints to secured tables are present
so the users can't fill up the tables with dummy rows to perform a DoS.
This may or may not be ultimately desired, though.
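For later readers, the create-then-give-away approach Craig describes can be wrapped roughly like this (hedged sketch, assuming the schema name doubles as the role name; SECURITY DEFINER means the function runs with the parent owner's rights):

```sql
CREATE OR REPLACE FUNCTION make_user_relation(username text)
RETURNS void AS $$
BEGIN
    -- Runs as the _skel owner, so inheriting is allowed...
    EXECUTE 'CREATE TABLE ' || quote_ident(username) || '.my_relation ()'
         || ' INHERITS (_skel.my_relation)';
    -- ...then hand the child table over to the end user.
    EXECUTE 'ALTER TABLE ' || quote_ident(username) || '.my_relation'
         || ' OWNER TO ' || quote_ident(username);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```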

> I guess it might be handy for a new right to be created, called say
> 'INHERIT'. This would let a table owner delegate the right to inherit
> from the table to other users. Such a right does not presently exist.
> There might be good reasons for it that I don't know about, or it might
> simply be that nobody has wanted it - or at least, wanted it enough to
> bother implementing it.
>
> Do you? To me, it seems pretty easy to just create the table with the
> same ownership as the parent then ALTER TABLE ... OWNER TO it away.

What *would* be even nicer is for PostgreSQL to expose the owner of the
row as a special column (say, "__owner_user") in a SELECT statement to
the parent table.

An INHERITS privilege is a very nice proposal for extension, though
through the use of SECURITY DEFINER we can achieve the same result and
it is good enough for this model. SECURITY DEFINER is comparable to
having a SUID-root program in a Unix system, with everything it implies.

Also, if this privilege existed, PostgreSQL would need to provide the
owner of the parent table a way to use the aforementioned special
__owner_user column as part of primary and unique keys.

This would be a good step on the way to having a native way of setting up
row-level security based on user ownership (a la a virtual private
database with some shared data).

Octavio.





Re: [GENERAL] Description of transaction model for indexes

2009-01-05 Thread Erik Jones

On Jan 4, 2009, at 3:09 AM, Gerhard Wiesinger wrote:


Hello!

The transaction model is discussed in several areas:
http://www.packtpub.com/article/transaction-model-of-postgresql
The POSTGRES Data Model (1987)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8578
The Design of POSTGRES (1986)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.1422

However: is there a paper available describing how indexes are handled
inside the transaction (e.g. visibility (not handled), concurrency, etc.)?

E.g. What happens on SELECT/INSERT/UPDATE/DELETE?

Some aspects can be found in:
http://www.pgcon.org/2008/schedule/attachments/54_PGCon2008-HOT.ppt


I don't know of a paper anywhere, but index tuples contain pointers
to the data tuples, which are followed to determine transactional
visibility.  All HOT changes here is that there may be more than one
pointer to follow in the event of a HOT-updated tuple.
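As an illustration (not from the thread): the contrib/pageinspect module can show the on-page tuple chain that this pointer-following traverses. This is only a sketch; the table, data, and session below are made up, and pageinspect must be installed.

```sql
-- Sketch, assuming contrib/pageinspect is installed.
CREATE TABLE t (id int PRIMARY KEY, val text);
INSERT INTO t VALUES (1, 'a');
UPDATE t SET val = 'b';  -- touches no indexed column, so HOT can apply

-- Show each heap line pointer with its xmin/xmax and t_ctid. After a
-- HOT update, the old tuple's t_ctid points at the new version on the
-- same page -- the extra hop described above.
SELECT lp, t_xmin, t_xmax, t_ctid
FROM heap_page_items(get_raw_page('t', 0));
```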


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k








Re: [GENERAL] lesslog "incorrect resource manager data checksum."

2009-01-05 Thread Koichi Suzuki

Sorry to have taken so long to handle your problem.

I tried to compress and decompress your WAL segments and found that they
were created in a 32bit environment, which means they must be
compressed/decompressed using a 32bit binary.


PostgreSQL's WAL segments from a 32bit environment are binary-incompatible
with the 64bit version, so if you created WAL segments in a 32bit
environment, you must compress them with the 32bit version of
pg_compresslog and restore them with the 32bit version of pg_decompresslog.


I've tested your WAL segment using the 32bit and 64bit pglesslog and found
that the 64bit version worked perfectly.


Please try this in a 64bit environment.

Sorry for taking this long.   I'd like to note this in the pglesslog
materials.


Good luck;


Jean-Christophe Arnu wrote:

Koichi-san,

full_page_writes is set to on by default in the global user
configuration file (i.e. postgresql.conf). Querying the server showed
that it is:

 show full_page_writes ;
 full_page_writes
------------------
 on

I use the RPM version provided on pgfoundry.

The main problem is located on the pg_compresslog side (first): it is unable
to compress xlogs.
I am sending you (in private) a bunch of uncompressed logs that are known to
reproduce the pg_compresslog problem.


Once pg_compresslog works, I plan to test the "other side" of the solution
with pg_decompresslog :)


Thanks for your reply!

2008/11/20 Koichi Suzuki:


Hi,

Sorry I was out and late to find your report.

Lesslog replaces each full page write with a corresponding incremental log
record to decrease the size, so you should turn full_page_writes "on" for
lesslog to work.

Did you specify "pg_decompresslog" as the restore command?  If so, I don't
see anything missing so far.   pg_decompresslog must be used in place
of "cat".
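For reference, a minimal sketch of how the two commands slot into the configuration, based only on the usage described in this thread; the archive path is illustrative and the exact argument order should be checked against the pglesslog documentation:

```
# postgresql.conf on the primary (path is an assumption for illustration)
full_page_writes = on
archive_command = 'pg_compresslog %p /mnt/archive/%f'

# recovery.conf on the restoring server:
# pg_decompresslog is used in place of "cat"
restore_command = 'pg_decompresslog /mnt/archive/%f %p'
```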

If possible, could you try to share your archive log (a part may help)
and let me know how you built lesslog?  (Did you use rpm?)   They will
help.   I've tested lesslog against pgbench and DBT-2 so I believe the
coverage is reasonable.

Sorry for the inconvenience and thanks a lot for your help.

-
Koichi Suzuki

2008/11/18 Jean-Christophe Arnu:
 > Hi,
 >
 > I'm trying to use lesslog 1.1 (out of NTT rpm available on
pgfoundry) on my
 > pg_xlog files out of a 8.3.3 postgresql server.
 > I always get messages such as "incorrect resource manager data
checksum."
 >
 > This message is issued (as I've read the sources) when there's
CRC mismatch
 > in a record.
 >
 > Do you have any idea of what's going wrong (i.e. why my CRC are not
 > matching)? Does this deals with full_page_writes to be set to on
(by default
 > in GUC)  which is commented in my GUC (thus set to on) ? Is there
something
 > missing?
 >
 > Note : I'm transferring my xlog to another node with rsync. The
log I get (a
 > bunch of) for my test purpose are the ones on the remote node,
they should
 > be exactly the same as the one on the server.
 >
 > Thanks in advance for any idea!
 >
 > --
 > Jean-Christophe Arnu
 >




--
Jean-Christophe Arnu



--
Koichi Suzuki



[GENERAL] Replication on windows

2009-01-05 Thread Tuan Hoang Anh
Is there any PostgreSQL replication solution that supports Windows? (Not
Slony, because I want merge replication.)

Please help me, because I must work with PostgreSQL on Windows :-(

Thanks in advance.
Sorry for my English.

Tuan Hoang Anh





Re: [GENERAL] Per-user schemas with inherited skeleton.

2009-01-05 Thread Craig Ringer
Octavio Alvarez wrote:

> CHECK constraints or FOREIGN keys to secured tables are present so the
> users don't fill up the tables with dummy rows to perform a DoS. This
> can or can not be ultimately desired, though.

Beware here. IIRC PostgreSQL's inheritance support has some major
limitations when it comes to enforcing constraints on a parent table on
data inserted/updated via a child table. See the documentation for details.

> What *would* be even nicer is for PostgreSQL to expose the owner of the
> row as a special column (say, "__owner_user") in a SELECT statement to
> the parent table.

That would be interesting, though I don't think "owner user" would be
the way to do it. If anything could be obtained without significant
cost, it'd probably be the relid of the leaf table from which that
particular record came. You could get an owning role name etc from that
relatively easily, of course.

That seems like something that'd have a performance hit, though (as each
record would be wider by one relid) and, like the oid column, might
confuse apps that don't expect extra columns not present in the table
definition to be returned.
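
Note that PostgreSQL does already expose something close to this: the
hidden tableoid system column gives the OID of the table each row
physically came from, and it can be joined against the catalogs to
recover an owning role per row. A sketch (parent_table is a placeholder,
not a name from this thread):

```sql
-- tableoid is a system column, so this costs no extra storage and is
-- only returned when explicitly selected.
SELECT p.*,
       c.relname AS source_table,
       r.rolname AS owner
FROM   parent_table p
JOIN   pg_class c ON c.oid = p.tableoid
JOIN   pg_roles r ON r.oid = c.relowner;
```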

You can get vaguely the same effect yourself in any case, though
unfortunately not with any sort of guaranteed enforcement. Add an ON
INSERT ... FOR EACH ROW trigger to the child table that sets the value
of a "username" field to the value of the "user" SQL variable, ignoring
whatever the user may have supplied for that field. Since you're giving
your users ownership of the tables they can always just drop or replace
that trigger, so it's not a security measure or suitable for accounting,
but it's good enough for informational reporting.
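A minimal sketch of such a trigger (the table, column, and function names are illustrative, not from the thread):

```sql
CREATE FUNCTION stamp_username() RETURNS trigger AS $$
BEGIN
    -- Overwrite whatever the client supplied with the session user.
    NEW.username := current_user;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stamp_username_trg
    BEFORE INSERT ON child_table
    FOR EACH ROW
    EXECUTE PROCEDURE stamp_username();
```

As noted above, a table owner can drop or replace this trigger, so it is informational rather than a security control.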

> An INHERITS privilege is a very nice proposed extension, though through
> the use of SECURITY DEFINER we can achieve the same result, and that is
> good enough for this model. SECURITY DEFINER is comparable to having a
> SUID-root program on a Unix system, with everything that implies.

Yep .. which is why the example I posted didn't use it. If it can be
avoided it should be.

Note that I'm VERY far from an expert in Pg in general and Pg security
in particular, so of course you need to evaluate all this in detail
yourself.

> Also, if this privilege existed, PostgreSQL would need to provide the
> owner of the parent table a way to use the aforementioned special
> __owner_user column as part of primary and unique keys.
> 
> This would be a good step on the way to having a native means of setting
> up row-level security based on user ownership (a la virtual private
> database with some shared data).

I'm not entirely sure I see where this would be useful. I'd be
interested in some examples. As I mentioned before, though,

--
Craig Ringer



[GENERAL] ERROR: failed to find conversion function from "unknown" to text

2009-01-05 Thread Gurjeet Singh
Q1: select '' union all select ''
Q2: select '' union all select * from (select '' ) as s

version: PostgreSQL 8.3.1, compiled by Visual C++ build 1400

Hi All,

Q1 works just fine, but Q2 fails with:

ERROR:  failed to find conversion function from "unknown" to text

Q2 is a generalization of a huge query we are facing, which we cannot
modify. I don't think this is a 'removed-casts' problem generally faced in
8.3, but I may be wrong. Will adding some cast resolve this?

Best regards,
-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] [HACKERS] ERROR: failed to find conversion function from "unknown" to text

2009-01-05 Thread Pavel Stehule
Hello

2009/1/6 Gurjeet Singh :
> Q1: select '' union all select ''
> Q2: select '' union all select * from (select '' ) as s
>
> version: PostgreSQL 8.3.1, compiled by Visual C++ build 1400
>
> Hi All,
>
> Q1 works just fine, but Q2 fails with:
>
> ERROR:  failed to find conversion function from "unknown" to text
>
> Q2 is a generalization of a huge query we are facing, which we cannot
> modify. I don't think this is a 'removed-casts' problem generally faced in
> 8.3, but I may be wrong. Will adding some cast resolve this?

yes

postgres=#  select '' union all select * from (select ''::text ) as s;
 ?column?
----------


(2 rows)

regards
Pavel Stehule

>
> Best regards,
> --
> gurjeet[.sin...@enterprisedb.com
> singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
>
> EnterpriseDB  http://www.enterprisedb.com
>
> Mail sent from my BlackLaptop device
>



Re: [GENERAL] [HACKERS] ERROR: failed to find conversion function from "unknown" to text

2009-01-05 Thread Gurjeet Singh
As I mentioned, we cannot change the query, so adding casts to the query is
not an option. I was looking for something external to the query, like a
CREATE CAST command that'd resolve the issue.

Best regards,

On Tue, Jan 6, 2009 at 12:00 PM, Pavel Stehule wrote:

> Hello
>
> 2009/1/6 Gurjeet Singh :
> > Q1: select '' union all select ''
> > Q2: select '' union all select * from (select '' ) as s
> >
> > version: PostgreSQL 8.3.1, compiled by Visual C++ build 1400
> >
> > Hi All,
> >
> > Q1 works just fine, but Q2 fails with:
> >
> > ERROR:  failed to find conversion function from "unknown" to text
> >
> > Q2 is a generalization of a huge query we are facing, which we cannot
> > modify. I don't think this is a 'removed-casts' problem generally faced
> in
> > 8.3, but I may be wrong. Will adding some cast resolve this?
>
> yes
>
> postgres=#  select '' union all select * from (select ''::text ) as s;
>  ?column?
> --
>
>
> (2 rows)
>
> regards
> Pavel Stehule
>
> >
> > Best regards,
> > --
> > gurjeet[.sin...@enterprisedb.com
> > singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
> >
> > EnterpriseDB  http://www.enterprisedb.com
> >
> > Mail sent from my BlackLaptop device
> >
>



-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] [HACKERS] ERROR: failed to find conversion function from "unknown" to text

2009-01-05 Thread Pavel Stehule
2009/1/6 Gurjeet Singh :
> As I mentioned, we cannot change the query, so adding casts to the query is
> not an option. I was looking for something external to the query, like a
> CREATE CAST command that'd resolve the issue.

I am sorry, I was blind - I tested casting on 8.3.0 and it doesn't work
(but I have an old 8.3):
postgres=# create function unknown2text(unknown) returns text as
$$select $1::text$$ language sql;
CREATE FUNCTION
postgres=# create cast(unknown as text) with function
unknown2text(unknown) as implicit;
CREATE CAST
postgres=# select '' union all select * from (select '' ) as s;
ERROR:  stack depth limit exceeded
HINT:  Increase the configuration parameter "max_stack_depth", after
ensuring the platform's stack depth limit is adequate.
CONTEXT:  SQL function "unknown2text" during startup
SQL function "unknown2text" statement 1
SQL function "unknown2text" statement 1
SQL function "unknown2text" statement 1
SQL function "unknown2text" statement 1
SQL function "unknown2text" statement 1

It works on 8.4:

postgres=# create cast (unknown as text) with inout as implicit;
CREATE CAST
postgres=# select '' union all select * from (select '' ) as s;
 ?column?
----------


(2 rows)

regards
Pavel Stehule


>
> Best regards,


>
> On Tue, Jan 6, 2009 at 12:00 PM, Pavel Stehule 
> wrote:
>>
>> Hello
>>
>> 2009/1/6 Gurjeet Singh :
>> > Q1: select '' union all select ''
>> > Q2: select '' union all select * from (select '' ) as s
>> >
>> > version: PostgreSQL 8.3.1, compiled by Visual C++ build 1400
>> >
>> > Hi All,
>> >
>> > Q1 works just fine, but Q2 fails with:
>> >
>> > ERROR:  failed to find conversion function from "unknown" to text
>> >
>> > Q2 is a generalization of a huge query we are facing, which we
>> > cannot
>> > modify. I don't think this is a 'removed-casts' problem generally faced
>> > in
>> > 8.3, but I may be wrong. Will adding some cast resolve this?
>>
>> yes
>>
>> postgres=#  select '' union all select * from (select ''::text ) as s;
>>  ?column?
>> --
>>
>>
>> (2 rows)
>>
>> regards
>> Pavel Stehule
>>
>> >
>> > Best regards,
>> > --
>> > gurjeet[.sin...@enterprisedb.com
>> > singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
>> >
>> > EnterpriseDB  http://www.enterprisedb.com
>> >
>> > Mail sent from my BlackLaptop device
>> >
>
>
>
> --
> gurjeet[.sin...@enterprisedb.com
> singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
>
> EnterpriseDB  http://www.enterprisedb.com
>
> Mail sent from my BlackLaptop device
>
