Re: [GENERAL] Reordering columns, will this ever be simple?

2007-08-07 Thread novnov

Right, rel theory is fine, but I work with databases all day long, and in the
various lists that show the fields in a table, placing them in some kind of
order other than chronological by creation time is very useful and efficient.
It may not matter to the database engine, but it matters to the human engine,
in this case. Views don't help solve the basic issue. It's not a fatal issue,
just one that would help streamline postgres, IMO.



Josh Trutwin wrote:
> 
> On Tue, 7 Aug 2007 11:07:39 -0700 (PDT)
> novnov <[EMAIL PROTECTED]> wrote:
> 
> 
> 
>> Is there any plan to add such a capability to postgres? Is there a
>> deep-seated reason why reordering columns can't be handled without
>> doing a root canal on the database?
> 
> Probably because the theory behind the relational model (at least as
> far as I understand it) is that column and row ordering is irrelevant.
> 
> From http://en.wikipedia.org/wiki/Relational_model :
> 
> "The relational model requires there to be no significance to any
> ordering of the attributes of a relation."
> 
> Just as, when you do a SELECT *, the order of the rows returned
> could change at any time unless you specify an ORDER BY clause.
> 
> That said, I imagine most people like being able to "re-order" the
> columns that a SELECT * produces.
> 
>> If there already is some simple way to reorder columns, please let
>> me know.
> 
> CREATE VIEW my_favorite_order AS
>   SELECT col2, col1, col5, col4, etc 
> FROM base_table;
> 
> SELECT * FROM my_favorite_order;
> 
> Only drawback is that you have to re-build the view if the column
> names in the base table change.
> 
> Josh
> 
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Reordering-columns%2C-will-this-ever-be-simple--tf4231761.html#a12044467
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Take your postgresSql on the road, and live to tell of it.

2007-08-07 Thread Ben
You can group schemas with views, and it guarantees nobody will accidentally
overwrite somebody else's stuff. Merging two schemas with identical
table structure should also be quite trivial. Of course, if you have a lot
of users, this might not work so well.


On Tue, 7 Aug 2007, Owen Hartnett wrote:


At 2:15 PM -0700 8/7/07, Ben wrote:
How many users do you have? Have you considered giving each user a schema 
in which to make their changes? It sounds like you don't really have a 
multi-master replication issue, which makes things easier.


Maybe I'm not understanding the strategy, but I don't see what this buys me, 
as I have to end up with a single database schema that has incorporated all 
the changes.  If I can "record" all the SQL a user does from the checkpoint 
on, then I can "psql <" it in to the main database.  Once I've combined their 
data into the database that sits on the server, I don't need their database 
copies anymore.


-Owen







Re: [GENERAL] Intelligent Database in postgreSQL

2007-08-07 Thread Decibel!
On Mon, Aug 06, 2007 at 07:37:09PM -0700, Rich Shepard wrote:
> On Mon, 6 Aug 2007, Rodrigo Faccioli wrote:
> 
> >I'm a new PostgreSQL user and I have a question: is it possible to add
> >intelligent-system functions like neural networks or fuzzy logic within
> >the PostgreSQL DBMS?
> 
>   Inherently, no. But, there is a fellow at a Bulgarian university that is
> developing a fuzzy SQL addon. I've not looked at it in a while because we
> don't need that capability right now.
> 
>   I've never looked for a hook to a NN so I cannot comment on that. Neural
> networks, of course, are totally different from fuzzy logic, and address
> different problems. They are not interchangeable.

See also http://pgfoundry.org/projects/qbe/ (note that it has *nothing*
to do with QBE as termed by Microsoft and some other tools).
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] finding out vacuum completion %, and vacuum VS vacuum full

2007-08-07 Thread Decibel!
On Tue, Aug 07, 2007 at 08:40:47AM -0700, Steve Atkins wrote:
> If you have adequate disk space free (enough to hold another
> copy of the new table) and the table has an index on it, then
> CLUSTER the table.

Be advised that there are some MVCC issues with CLUSTER in current
versions, but normally you'd only run into them in a serializable
transaction. If you're not using that you're probably fine, but remember
that pg_dump and pg_dumpall use serializable transactions.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] create table liek view

2007-08-07 Thread Ragnar
On Tue, 2007-08-07 at 15:23 +0200, Filip Rembiałkowski wrote:
> [EMAIL PROTECTED] create view v as select 1::int;
> CREATE VIEW
> [EMAIL PROTECTED] select * from v;
>  int4
> --
> 1
> (1 row)
> [EMAIL PROTECTED] create table t ( like v);
> ERROR:  inherited relation "v" is not a table
> 
> 
> 
> Why? Is there any logical reason for this? A view is just a table with
> a rule ON SELECT...

i do not know about the reason this is not allowed, but you can:

create table t as select * from v limit 0;



gnari
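A minimal sketch of gnari's workaround (the column names here are made up for
illustration); note that LIMIT 0 copies only the column names and types, not
any defaults, constraints, or indexes:

```sql
-- Create a view, then clone its column layout into a real table.
CREATE VIEW v AS SELECT 1::int AS n, 'x'::text AS label;

-- LIMIT 0 yields zero rows, so only the column definitions carry over.
CREATE TABLE t AS SELECT * FROM v LIMIT 0;
```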





Re: [GENERAL] array_to_set functions

2007-08-07 Thread Decibel!
On Sun, Aug 05, 2007 at 08:18:08PM +0530, Merlin Moncure wrote:
> On 8/3/07, Guy Fraser <[EMAIL PROTECTED]> wrote:
> > On Wed, 2007-08-01 at 07:14 +0530, Merlin Moncure wrote:
> > > On 8/1/07, Decibel! <[EMAIL PROTECTED]> wrote:
> > > > David Fetter and I just came up with these, perhaps others will find
> > > > them useful:
> > > >
> > > > CREATE OR REPLACE FUNCTION array_to_set(anyarray, int) RETURNS SETOF 
> > > > anyelement LANGUAGE SQL AS $$
> > > > SELECT $1[i] from generate_series(array_lower($1, $2), 
> > > > array_upper($1, $2)) i
> > > > $$;
> > > > CREATE OR REPLACE FUNCTION array_to_set(anyarray) RETURNS SETOF 
> > > > anyelement LANGUAGE SQL AS $$
> > > > SELECT array_to_set($1, 1)
> > > > $$;
> > >
> > > very nice, although IMO there is a strong justification for these
> > > functions to be in core and written in C for efficiency (along with
> > > array_accum, which I have hand burn from copying and pasting out of
> > > the documentation).
> > >
> > > merlin
> > >
> > Excellent timing guys. :^)
> >
> > I was trying to build a function to list the items of an array, but
> > ran into problems and was going to post what I had been working on.
> >
> > Your functions work great.
> >
> > In case you don't have the function to generate an array from a set
> > here is one I have been using :
> >
> >
> > CREATE AGGREGATE array_accum (
> > BASETYPE = anyelement,
> > SFUNC = array_append,
> > STYPE = anyarray,
> > INITCOND = '{}'
> > );
> 
> I think that's what just about everyone uses.  Unfortunately the
> reverse of the function (array_to_set above) AFAIK does not map
> directly to the C array API.

Oh, cool, hadn't thought about using an aggregate to do this. That's
probably faster than what I came up with.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Tablespace

2007-08-07 Thread Decibel!
On Sun, Aug 05, 2007 at 04:03:10AM -0700, RPK wrote:
> 
> When PGSQL is installed, it creates two default tablespaces, postgres and

There is no postgres tablespace, only a database. The other default
tablespace is pg_global.

> pg_default. When a user creates a new database he is prompted to select a
> tablespace from the above two. Do I need to create a different tablespace
> before creating a database in order to prevent playing with these default

No, it's perfectly fine to store stuff in pg_default. It's what probably
99% of installs do.

> tablespaces? While backing up the database, with a new tablespace, is new
> tablespace also backed up and automatically created when the backup is
> restored on another machine?

Tablespaces are "global" or cluster-wide objects. As such, pg_dump will
not do anything with them. You'd need to use pg_dumpall.

There's a whole different set of concerns with tablespaces and PITR.

Unless you have a real need for tablespaces, just stick with the
defaults.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Take your postgresSql on the road, and live to tell of it.

2007-08-07 Thread Owen Hartnett

At 5:13 PM -0500 8/7/07, Scott Marlowe wrote:

On 8/7/07, Owen Hartnett <[EMAIL PROTECTED]> wrote:


 Here's what I want to do:

 Checkpoint the database in whatever way is appropriate.

 Make copies of the database on several laptops for use in the field
 (in automobiles) to do database changes.  Record all the changes made
 since the checkpoint as the user makes them.

 Periodically take all the changes back into the office, take the
 changes made out in the field and apply them to the main database.

 Repeat the process.

 Notes:

 1) Unless a user makes a mistake, there should be no changes to the
 same records by multiple users.  (i.e. any concurrency violations
 should be registered as an exception.)

 2) I'd prefer it to just record the sql commands executed by the
 database as text, then use psql < myFieldcommands to update the
 database.  This will also help me isolate any concurrency exceptions,
 and I'd like to wrap the whole update in a transaction, so I can roll
 the whole thing back if it does detect concurrency problems anywhere
 in the process (then I can edit out the offending lines).

 3) There's no particular rush to update the database - I don't need
 this real-time.

 4) Users might make their checkpoint at a different time from other users.


Given that each person is likely to be operating only on their
own data set, I'd use an integer range for each person.  Make an int
field in each table, and give each user a 1,000,000 id range to play
in, or something like that.  You can even set it up so that the app
uses sequences that start at the user's first id, set to NO CYCLE so
they stop when they reach the end instead of bumping into the next
person's range.

Heck, go with bigint and give each person a 1,000,000,000 range.  Then
you could still handle 9,223,372,035 or so users before you'd run out
of sequences for each.

Heck, you could even write a system of update functions that checked
the userid against their numeric range and only updated the data if it
was in their range.  Send it to a coworker for approval if it's not.
I'm having a few too mad scientist moments right about now.  Got to
get back to my data mining project...


This would probably work, but it seems like overkill...I'll have to 
think about it some more...


-Owen



Re: [GENERAL] Take your postgresSql on the road, and live to tell of it.

2007-08-07 Thread Owen Hartnett

At 2:15 PM -0700 8/7/07, Ben wrote:
How many users do you have? Have you considered giving each user a 
schema in which to make their changes? It sounds like you don't 
really have a multi-master replication issue, which makes things 
easier.


Maybe I'm not understanding the strategy, but I don't see what this 
buys me, as I have to end up with a single database schema that has 
incorporated all the changes.  If I can "record" all the SQL a user 
does from the checkpoint on, then I can "psql <" it in to the main 
database.  Once I've combined their data into the database that sits 
on the server, I don't need their database copies anymore.


-Owen




Re: [GENERAL] Take your postgresSql on the road, and live to tell of it.

2007-08-07 Thread Scott Marlowe
On 8/7/07, Owen Hartnett <[EMAIL PROTECTED]> wrote:
>
> Here's what I want to do:
>
> Checkpoint the database in whatever way is appropriate.
>
> Make copies of the database on several laptops for use in the field
> (in automobiles) to do database changes.  Record all the changes made
> since the checkpoint as the user makes them.
>
> Periodically take all the changes back into the office, take the
> changes made out in the field and apply them to the main database.
>
> Repeat the process.
>
> Notes:
>
> 1) Unless a user makes a mistake, there should be no changes to the
> same records by multiple users.  (i.e. any concurrency violations
> should be registered as an exception.)
>
> 2) I'd prefer it to just record the sql commands executed by the
> database as text, then use psql < myFieldcommands to update the
> database.  This will also help me isolate any concurrency exceptions,
> and I'd like to wrap the whole update in a transaction, so I can roll
> the whole thing back if it does detect concurrency problems anywhere
> in the process (then I can edit out the offending lines).
>
> 3) There's no particular rush to update the database - I don't need
> this real-time.
>
> 4) Users might make their checkpoint at a different time from other users.

Given that each person is likely to be operating only on their
own data set, I'd use an integer range for each person.  Make an int
field in each table, and give each user a 1,000,000 id range to play
in, or something like that.  You can even set it up so that the app
uses sequences that start at the user's first id, set to NO CYCLE so
they stop when they reach the end instead of bumping into the next
person's range.

Heck, go with bigint and give each person a 1,000,000,000 range.  Then
you could still handle 9,223,372,035 or so users before you'd run out
of sequences for each.

Heck, you could even write a system of update functions that checked
the userid against their numeric range and only updated the data if it
was in their range.  Send it to a coworker for approval if it's not.
I'm having a few too mad scientist moments right about now.  Got to
get back to my data mining project...
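A sketch of the per-user sequence idea (the sequence name, table name, and
range boundaries below are made up):

```sql
-- Give user 3 the id range [3,000,000 .. 3,999,999].
-- NO CYCLE makes nextval() raise an error at MAXVALUE instead of
-- wrapping into the next user's range.
CREATE SEQUENCE user3_id_seq
    START WITH 3000000
    MINVALUE 3000000
    MAXVALUE 3999999
    NO CYCLE;

CREATE TABLE field_notes (
    id   bigint PRIMARY KEY DEFAULT nextval('user3_id_seq'),
    note text
);
```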



Re: [GENERAL] track row / field usage

2007-08-07 Thread brian

John Smith wrote:

guys,
i want to track just how many times a particular row, field or
contained value gets queried / fetched? i guess a trigger like so
would do the trick:
on query, update value in column
even a log and then parsing it would help. but is there a cooler way-
some built-in stats counter? just trying to see which rows, fields or
values are most sought after.
cheers, jzs



Set the log_statement line in postgresql.conf to "all".

brian
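For reference, that is a one-line change (reload the server afterwards, e.g.
with pg_ctl reload). Note this logs every statement; it won't count per-row
or per-column fetches directly, so you'd still have to parse the log:

```ini
# postgresql.conf
log_statement = 'all'   # log every SQL statement the server executes
```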



Re: [GENERAL] List tables in load order

2007-08-07 Thread Kristo Kaiv


On 02.08.2007, at 4:16, Gregory Williamson wrote:

I am not sure if this is the appropriate list -- please point me at the
correct one if not.

I'm trying to create a procedure that would let me retrieve a list of
tables and views in a database that will be used to control the order in
which lookup data is created/loaded. So, much simplified, if table C
references table B, which in turn references table A, we want the output
to list tables A, B, and C in that order.

I'm sure that this exists -- the pg_dump command must use some similar
algorithm to decide in which order to load tables, but I can't seem to
puzzle this out.

Can anyone provide me with some clues, appropriate RTFM references, etc.?

Apologies for any duplicate postings -- had issues with my sign up.

This is currently one of the main topics on the skytools list. Check the
archives:

http://pgfoundry.org/pipermail/skytools-users/2007-July/74.html
http://pgfoundry.org/pipermail/skytools-users/2007-August/thread.html
AFAIK they don't have it quite ready yet but shouldn't take long...

Kristo Kaiv
http://kaiv.wordpress.com (PostgreSQL blog)




Re: [GENERAL] Take your postgresSql on the road, and live to tell of it.

2007-08-07 Thread Ben
How many users do you have? Have you considered giving each user a schema 
in which to make their changes? It sounds like you don't really have a 
multi-master replication issue, which makes things easier.


On Tue, 7 Aug 2007, Owen Hartnett wrote:



Here's what I want to do:

Checkpoint the database in whatever way is appropriate.

Make copies of the database on several laptops for use in the field (in 
automobiles) to do database changes.  Record all the changes made since the 
checkpoint as the user makes them.


Periodically take all the changes back into the office, take the changes made 
out in the field and apply them to the main database.


Repeat the process.

Notes:

1) Unless a user makes a mistake, there should be no changes to the same 
records by multiple users.  (i.e. any concurrency violations should be 
registered as an exception.)


2) I'd prefer it to just record the sql commands executed by the database as 
text, then use psql < myFieldcommands to update the database.  This will also 
help me isolate any concurrency exceptions, and I'd like to wrap the whole 
update in a transaction, so I can roll the whole thing back if it does detect 
concurrency problems anywhere in the process (then I can edit out the 
offending lines).


3) There's no particular rush to update the database - I don't need this 
real-time.


4) Users might make their checkpoint at a different time from other users.

Since I'm relatively new to Postgres, (and I apologize if this has come up 
before), I'm hoping some respondents will provide me with the correct 
strategy.


-Owen






Re: [GENERAL] import content of XLS file into PostgreSQL

2007-08-07 Thread Julio Cesar Sánchez González
 

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf of Alain Roger
Sent: Monday, August 6, 2007, 05:37 a.m.
To: pgsql-general@postgresql.org
Subject: [GENERAL] import content of XLS file into PostgreSQL

 

Hi,

I would like to know what I should do to import the content (not all
columns) of an XLS file into pgsql.
Is there something special to do?

thanks a lot,

-- 
Alain
 
Windows XP SP2
PostgreSQL 8.1.4
Apache 2.2.4
PHP 5.2.1 

 

 

It should be possible with a “CSV” file. For an example, see this post on my blog:
http://darkavngr.blogspot.com/2007/06/importar-datos-externos-nuestra-base-de.html

 

 

Regards,

 

Julio Cesar Sánchez González

 



[GENERAL] Take your postgresSql on the road, and live to tell of it.

2007-08-07 Thread Owen Hartnett


Here's what I want to do:

Checkpoint the database in whatever way is appropriate.

Make copies of the database on several laptops for use in the field 
(in automobiles) to do database changes.  Record all the changes made 
since the checkpoint as the user makes them.


Periodically take all the changes back into the office, take the 
changes made out in the field and apply them to the main database.


Repeat the process.

Notes:

1) Unless a user makes a mistake, there should be no changes to the 
same records by multiple users.  (i.e. any concurrency violations 
should be registered as an exception.)


2) I'd prefer it to just record the sql commands executed by the 
database as text, then use psql < myFieldcommands to update the 
database.  This will also help me isolate any concurrency exceptions, 
and I'd like to wrap the whole update in a transaction, so I can roll 
the whole thing back if it does detect concurrency problems anywhere 
in the process (then I can edit out the offending lines).


3) There's no particular rush to update the database - I don't need 
this real-time.


4) Users might make their checkpoint at a different time from other users.

Since I'm relatively new to Postgres, (and I apologize if this has 
come up before), I'm hoping some respondents will provide me with the 
correct strategy.


-Owen



[GENERAL] track row / field usage

2007-08-07 Thread John Smith
guys,
i want to track just how many times a particular row, field or
contained value gets queried / fetched? i guess a trigger like so
would do the trick:
on query, update value in column
even a log and then parsing it would help. but is there a cooler way-
some built-in stats counter? just trying to see which rows, fields or
values are most sought after.
cheers, jzs



Re: [GENERAL] Reordering columns, will this ever be simple?

2007-08-07 Thread Tony Caduto

Gregory Stark wrote:

"novnov" <[EMAIL PROTECTED]> writes:

  
Is there any plan to add such a capability to postgres? 



It's been talked about. I wouldn't be surprised to see it in 8.4 but nobody's
said they'll be doing it yet and there are a lot of other more exciting ideas
too.

  
From an admin tool developer's perspective, the ability to reorder columns 
without manually copying to a new table and all that is pretty exciting :-)


Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com




Re: [GENERAL] Reordering columns, will this ever be simple?

2007-08-07 Thread Gregory Stark
"novnov" <[EMAIL PROTECTED]> writes:

> Is there any plan to add such a capability to postgres? 

It's been talked about. I wouldn't be surprised to see it in 8.4 but nobody's
said they'll be doing it yet and there are a lot of other more exciting ideas
too.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [GENERAL] finding out vacuum completion %, and vacuum VS vacuum full

2007-08-07 Thread Gregory Stark
"Steve Atkins" <[EMAIL PROTECTED]> writes:

> On Aug 7, 2007, at 1:17 AM, Sergei Shelukhin wrote:
>
>> Or any way to optimize it besides the obvious (maintenace_work_mem &
>> max_fsm_pages increases and no workload)?
>> Can someone please help with this one?

What does the output of "vacuum verbose" say?

> If you have adequate disk space free (enough to hold another
> copy of the new table) and the table has an index on it, then
> CLUSTER the table.

Or you can use ALTER TABLE to change the type of a column which forces the
whole table to be rewritten.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [GENERAL] Reordering columns, will this ever be simple?

2007-08-07 Thread Josh Trutwin
On Tue, 7 Aug 2007 11:07:39 -0700 (PDT)
novnov <[EMAIL PROTECTED]> wrote:



> Is there any plan to add such a capability to postgres? Is there a
> deep-seated reason why reordering columns can't be handled without
> doing a root canal on the database?

Probably because the theory behind the relational model (at least as
far as I understand it) is that column and row ordering is irrelevant.

From http://en.wikipedia.org/wiki/Relational_model :

"The relational model requires there to be no significance to any
ordering of the attributes of a relation."

Just as, when you do a SELECT *, the order of the rows returned
could change at any time unless you specify an ORDER BY clause.

That said, I imagine most people like being able to "re-order" the
columns that a SELECT * produces.

> If there already is some simple way to reorder columns, please let
> me know.

CREATE VIEW my_favorite_order AS
  SELECT col2, col1, col5, col4, etc 
FROM base_table;

SELECT * FROM my_favorite_order;

Only drawback is that you have to re-build the view if the column
names in the base table change.

Josh



Re: [GENERAL] Auto Starting +/or Shutdown on OS X

2007-08-07 Thread Michael Glaesemann


On Jul 31, 2007, at 18:53 , Ralph Smith wrote:

Apologies for the huge post, but it's got everything relevant that  
I can think of.

See below.


Ralph,

Did you ever get this sorted? I don't have any new ideas, but was  
wondering if you had figured out a solution.


Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] [SQL] Restricting columns by users

2007-08-07 Thread Andrej Ricnik-Bay
On 8/8/07, Ranieri Mazili <[EMAIL PROTECTED]> wrote:

> Does something like this exist for PostgreSQL?
I thought that's what views are for 
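A minimal sketch of that view-based approach (table, column, and role names
here are hypothetical); note that column-level privileges on the table itself
only arrived in PostgreSQL 8.4, so on 8.x-era servers a view is the usual
workaround:

```sql
-- Hide the salary column from the reporting role by exposing
-- only the permitted columns through a view.
CREATE VIEW employees_public AS
    SELECT id, name, department FROM employees;

REVOKE ALL ON employees FROM reporting_role;
GRANT SELECT ON employees_public TO reporting_role;
```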


> Thanks
Cheers,
Andrej

-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



Re: [GENERAL] [SQL] Using function like where clause

2007-08-07 Thread Ragnar
On Mon, 2007-08-06 at 16:44 -0300, Ranieri Mazili wrote:
> 1) Can I use a function that will return a string in a where clause like 
> below?
> 
> select *
> from table
> where my_function_making_where()
>   and another_field = 'another_think'

you could have your function return a boolean instead of a string


> 2) Can I use a function that will return a string to return the list of 
> columns that I want to show like below?
> 
> select my_function_making_list_of_columns()
> from table
> where field_test = 'mydatum'

no

gnari
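A sketch of gnari's suggestion, reusing the names from the question (the
function body and the table's columns are hypothetical; the column is passed
in as an argument, and $1 is used because 8.x SQL function bodies don't
support named parameters):

```sql
-- A boolean-returning function can sit directly in the WHERE clause.
CREATE FUNCTION my_function_making_where(text)
RETURNS boolean
LANGUAGE sql STABLE AS $$
    SELECT $1 = 'active'
$$;

SELECT *
FROM my_table
WHERE my_function_making_where(status)
  AND another_field = 'another_think';
```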





[GENERAL] Reordering columns, will this ever be simple?

2007-08-07 Thread novnov

One of the few problems I have with postgres is that one is stuck with
columns ordered by time of creation, unless one resorts to dropping the
table and rebuilding from ddl. If you have data and a bunch of constraints,
it's not simple. It's not a critical capability either, but reordering
columns in the other databases I work with is very simple. Schemas do evolve
and I like to keep things in order, naturally.

Is there any plan to add such a capability to postgres? Is there a deep-seated
reason why reordering columns can't be handled without doing a root canal on
the database?

If there already is some simple way to reorder columns, please let me know.
-- 
View this message in context: 
http://www.nabble.com/Reordering-columns%2C-will-this-ever-be-simple--tf4231761.html#a12039408
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] truncate transaction log

2007-08-07 Thread Simon Riggs
On Sun, 2007-08-05 at 03:45 -0700, Sergei Shelukhin wrote:

> Is there any way to truncate WAL log in postgres?
> We want to use a full-backup strategy where we stop the server and copy
> the data directory; however, the WAL log is taking dozens of gigabytes of
> space.
> Is there any way to remove it while keeping the database operational/
> restore-able by copying it back?

Reduce the setting of checkpoint_segments to something more realistic.
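For reference, the setting lives in postgresql.conf; the value shown below is
just the 8.x default, not a recommendation for any particular workload:

```ini
# postgresql.conf
checkpoint_segments = 3   # each WAL segment is 16 MB; the size of pg_xlog
                          # grows roughly in proportion to this setting
```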

-- 
  Simon Riggs
  EnterpriseDB  http://www.enterprisedb.com




Re: [GENERAL] finding reusable ids

2007-08-07 Thread Michael Glaesemann
[Please don't top post as it makes the discussion more difficult to follow.]


On Aug 7, 2007, at 9:05 , Kenji Morishige wrote:



On Tue, Aug 07, 2007 at 12:23:00PM +0200, Nis Jørgensen wrote:


This doesn't solve your problem, but might simplify the query to find a
new id - something like this (untested):

SELECT min(q.co_id) + 1
FROM (
    SELECT (co_id + 1) AS co_id FROM checked_out
    EXCEPT
    SELECT co_id FROM checked_out
) q;


I don't believe this is concurrency safe, even if wrapped in a
transaction. Two concurrent transactions could end up calculating the
same next co_id. This could be caught by having appropriate
constraints on checked_out and retrying on error.


Actually, I already have a resource table that stores the uid of the
item in question.  The checkout table does double duty as a history
mechanism and a check-out mechanism.  I think you are on the right
track; I should separate these two tasks and possibly create another
table.  The actual design is a bit more complicated, as we actually
don't have a checked-in flag, but a start and finish time where users
can actually store multiple overlapping records.


I agree that you should probably tweak the schema a bit. Also, as you
want to reuse your checkout ids, you're actually considering them a
separate resource, so you might consider putting them in a separate
table. Here's what I came up with:

As an aside, I wouldn't call them checkout_ids (even though I did :)),
as id often connotes a unique identifier for something (like your
uids), and you're reusing them. I might call them checkout_reference
or checkout_number or something.


CREATE TABLE checkout_ids
(
    checkout_id INTEGER PRIMARY KEY
    , is_checked_out BOOLEAN NOT NULL
        DEFAULT FALSE
    , UNIQUE (checkout_id, is_checked_out)
);

-- populate the table with the values you'll use
INSERT INTO checkout_ids (checkout_id)
SELECT generate_series(1, 99);

CREATE TABLE checkouts
(
    checkout_id INTEGER PRIMARY KEY
    , is_checked_out BOOLEAN NOT NULL
        CHECK (is_checked_out)
        DEFAULT TRUE
    , FOREIGN KEY (checkout_id, is_checked_out)
        REFERENCES checkout_ids (checkout_id, is_checked_out)
    , uid INTEGER NOT NULL -- with some fk
);

-- Of course, you can add the checkout start and end dates/timestamps
-- to this table: they're independent of managing the checkout_id
-- resource.
-- I've added is_checked_out to this table to ensure that all checkouts
-- (checkout_id) have is_checked_out set to true (via the CHECK
-- constraint). This could also be done with a trigger.

-- And a couple quick functions to handle the process of checking in
-- and checking out. The SELECT ... FOR UPDATE in checkout should
-- ensure that concurrent transactions aren't grabbing the same
-- checkout_id.

CREATE FUNCTION checkout (p_uid INTEGER)
RETURNS INTEGER -- checkout_id
LANGUAGE plpgsql AS $body$
DECLARE
    v_checkout_id INTEGER;
BEGIN
    SELECT INTO v_checkout_id
        checkout_id
    FROM checkout_ids
    WHERE NOT is_checked_out
    LIMIT 1
    FOR UPDATE;

    UPDATE checkout_ids
    SET is_checked_out = TRUE
    WHERE checkout_id = v_checkout_id;

    INSERT INTO checkouts (checkout_id, uid)
    VALUES (v_checkout_id, p_uid);

    RETURN v_checkout_id;
END;
$body$;

CREATE FUNCTION checkin (p_checkout_id INTEGER)
RETURNS VOID
LANGUAGE plpgsql AS $body$
BEGIN
    DELETE FROM checkouts
    WHERE checkout_id = p_checkout_id;

    UPDATE checkout_ids
    SET is_checked_out = FALSE
    WHERE checkout_id = p_checkout_id;

    RETURN;
END;
$body$;
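A brief usage sketch of the two functions above (the uid and checkout_id
values are made up):

```sql
SELECT checkout(42);  -- allocates and returns a free checkout_id for uid 42
SELECT checkin(7);    -- releases checkout_id 7 when the item comes back
```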

Hope this helps.

Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] clustering failover... ala Oracle Parallel server

2007-08-07 Thread Alexander Staubo
On 8/4/07, hanasaki <[EMAIL PROTECTED]> wrote:
> clustering failover... ala Oracle Parallel server

Note that OPS is now called RAC (see http://orafaq.com/faq/what_is_rac_ops).

> How can the server be setup in a cluster for load-balancing and failover
> like perhaps OPS?

As I understand it, RAC implements a kind of multi-master replication
by sharing database files across multiple machines in conjunction with
a distributed lock manager.

I believe the closest you get to this scheme with PostgreSQL is
PGCluster-II (see
http://www.pgcon.org/2007/schedule/events/6.en.html), which implements
a shared-disk replication system. However, I believe the project is
still under development.

There are other ways to implement load-balancing and failover, using
tools such as Slony, pgpool and Skype's londiste. There have been a
dozen threads this year on this topic; I suggest you scan the mailing
list archives.

> How does the Postgres solution compare to an Oracle? MSSQL? MySQL solution?

Does MySQL have anything resembling RAC?

Alexander.



Re: [GENERAL] pg_dump of only the structure from a client such as ruby

2007-08-07 Thread Mason Hale
If you haven't seen it already, there is a rails plugin that adds
support for foreign-key dependencies (among other things) to the
migration domain specific language (DSL):

http://www.redhillonrails.org/#foreign_key_migrations

Another useful plug-in is "Transactional Migrations" which
automatically wraps ActiveRecord migrations inside a transaction, if
your database supports transactional DDL, as Postgres does. This
neatly avoids messy half-completed database migrations and the need to
clean them up by hand.
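As a quick illustration of transactional DDL (a sketch with an invented table name, not tied to the plugin above): in Postgres a failed or aborted migration leaves nothing behind.

BEGIN;
CREATE TABLE widgets (id integer PRIMARY KEY);
ALTER TABLE widgets ADD COLUMN name text;
-- suppose a later step of the migration fails here:
ROLLBACK;
-- widgets no longer exists; there is no half-applied schema to clean up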


On 8/6/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote:
> On Aug 6, 2007, at 13:17 , Perry Smith wrote:
>
> > I'm using config.active_record.schema_format = :sql.  I like the
> > idea of constraints in the db -- I know that goes against a lot of
> > rails ideas.
>
> I think most who choose Postgres come to the same conclusion. Though
> do take care not to confuse Rails as a whole with ActiveRecord in
> particular.
>
> > The :ruby choice does not dump and load constraints (at least not
> > in the released version -- I have not verified this on edge Rails
> > yet).
>
> I doubt it does. DHH's take on "application database" (as much-
> discussed elsewhere) wouldn't make such developments a priority, if
> they'd even be considered for ActiveRecord.
>
> > The pg_dump and psql load have one shortcoming.  I really do not
> > like warning messages.  If I have a language loaded in the
> > database, the psql load of the database produces two warnings
> > (because I'm not loading it as postgres -- just a regular user with
> > createdb privilege.)
> >
> > I might be drifting off the original subject but, what I did to
> > solve this was to hook up the create_database and drop_database and
> > I have it understand the template parameter.  So, now in
> > database.yml, I have a template database (like foo_template) and
> > foo_test is copied from foo_template -- that avoids the error
> > messages and creates a test database with whatever I need in it in
> > one step.
>
> I've considered using a similar technique for testing. There was
> discussion on rails-core a few weeks ago about various migration/
> testing related issues, IIRC. Haven't gotten around to it yet as my
> rake tasks and the roles I use have pretty much taken care of the
> issue for me.
>
> > One thing I thought about over the past evening is that I could
> > just beef up the :ruby schema dump and load to understand the
> > couple of things I need for it to understand: constraints and
> > functions.  But, I'm not sure what kind of quagmire I'm walking into.
>
> Definitely not worth the effort. The :ruby schema dump is there only
> to support the migration SQL DSL. In my opinion, if you're using SQL
> not supported by the DSL, there's little reason to use it at all.
> Most likely the SQL will not be entirely portable anyway (leaving
> aside the point of whether or not that should even be a design goal)
> so why take the time to learn another microlanguage? :) It doesn't
> take much to have requirements beyond what the migration DSL
> provides, as you've already discovered, and to extend the DSL in a
> portable way would be quite an endeavor. Quagmire is a good word for it.
>
> Michael Glaesemann
> grzm seespotcode net
>
>
>



Re: [GENERAL] How do I connect postgres table structures and view structures to an existing svn repository?

2007-08-07 Thread Vivek Khera


On Aug 1, 2007, at 10:56 AM, Richard Huxton wrote:

You could write a small cron-script that dumped the schema once
every 5 minutes so it could be picked up by svn.


I think most people have a separate collection of schema-creation/update
scripts that they keep under version control. All changes are then made
by running these.





You would have to do it via polling, since schema changes cause no
events to be generated (i.e., you can't attach a trigger to a schema
change).


But the *right* way is to make schema change scripts as "delta"  
files, add them to your repo, test them on your staging environment,  
then apply them to your production environment.  That way you can  
reconstruct your DB at any time and *know* it will work.
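A "delta" file in this style is typically just a numbered SQL script checked into the repo; a minimal sketch (the file name, table, and version-tracking scheme are all invented):

-- deltas/0042_add_invoices.sql
BEGIN;
CREATE TABLE invoices (
    id    integer PRIMARY KEY,
    total numeric(10,2) NOT NULL
);
-- assumes you keep a schema_version bookkeeping table
INSERT INTO schema_version (version) VALUES (42);
COMMIT;

Applying the files in order against a fresh database then reconstructs the schema.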





Re: [GENERAL] truncate transaction log

2007-08-07 Thread Scott Marlowe
On 8/5/07, Sergei Shelukhin <[EMAIL PROTECTED]> wrote:
> Hi.
>
> Is there any way to truncate WAL log in postgres?
> We want to use a full-backup strategy where we stop the server and copy
> the data directory; however, the WAL is taking dozens of gigabytes of
> space.
> Is there any way to remove it while keeping the database operational/
> restore-able by copying it back?

This sounds unusually high.  What non-default settings are there in
your postgresql.conf file?  Are you using PITR or something like that?



Re: [GENERAL] finding out vacuum completion %, and vacuum VS vacuum full

2007-08-07 Thread Steve Atkins


On Aug 7, 2007, at 1:17 AM, Sergei Shelukhin wrote:


Ok, here's the update: after ~30 hours we killed VACUUM FULL and
ran plain VACUUM on the tables we freed.
However, VACUUM hasn't freed any space at all 0_o
We want to launch VACUUM FULL on a per-table basis, but we can't have any
more downtime right now, so we will launch it at night today.

The original question still stands, is there any way to diagnose
vacuum full time-to-run?


It could easily take many days. VACUUM FULL is painfully slow.
Dropping indexes and suchlike can make it faster, but it's still
painfully slow.


Or any way to optimize it besides the obvious (maintenance_work_mem &
max_fsm_pages increases and no workload)?
Can someone please help with this one?


VACUUM FULL is about the worst thing you can do in this case.

If you have adequate disk space free (enough to hold another
copy of the new table) and the table has an index on it, then
CLUSTER the table.

If not, dump and restore the table.
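For reference, the CLUSTER approach looks like this (table and index names are hypothetical); it rewrites the table in index order, reclaiming dead space in one pass, and takes an exclusive lock while it runs:

CLUSTER bigtable_pkey ON bigtable;
ANALYZE bigtable;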

Cheers,
  Steve



[GENERAL] Unsubscribe

2007-08-07 Thread Matt Starr


From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of Merlin Moncure [EMAIL 
PROTECTED]
Sent: Wednesday, August 01, 2007 11:14 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Move database from Solaris to Windows

On 8/2/07, Barry C Dowell <[EMAIL PROTECTED]> wrote:
> Ok, if you can forgive the possible stupid answer and help pull me along a
> bit more, in answer to this:
>
> >> Yes, dump/restore is pretty much the standard to move dbs across
> architectures so we'll need more to work with.  One thing to check, did you
> make sure that your dump was in the same encoding as the database you
> created on your Windows server?
>
> How do I know what encoding was used on one database (on one OS) versus the
> other?

psql -l

lists the encoding.  postgresql on windows iirc defaults to sql_ascii
which is actually the most forgiving but not a great choice.  linux
defaults to utf-8.

can you post the exact text of the error?

merlin



[GENERAL] clustering failover... ala Oracle Parallel server

2007-08-07 Thread hanasaki
clustering failover... ala Oracle Parallel server

How can the server be setup in a cluster for load-balancing and failover
like perhaps OPS?

How does the Postgres solution compare to an Oracle? MSSQL? MySQL solution?

Thanks!



Re: [GENERAL] finding out vacuum completion %, and vacuum VS vacuum full

2007-08-07 Thread Sergei Shelukhin
Ok, here's the update: after ~30 hours we killed VACUUM FULL and
ran plain VACUUM on the tables we freed.
However, VACUUM hasn't freed any space at all 0_o
We want to launch VACUUM FULL on a per-table basis, but we can't have any
more downtime right now, so we will launch it at night today.

The original question still stands: is there any way to estimate
VACUUM FULL's time-to-run?
Or any way to optimize it besides the obvious (maintenance_work_mem &
max_fsm_pages increases and no workload)?
Can someone please help with this one?

I wonder why people only try to help with simple questions, or when
I flame 0_o





Re: [GENERAL] List tables in reverse dependancy order

2007-08-07 Thread Gregory Williamson
Thanks ... I was afraid it would be as messy as it is; unfortunately Oracle seems
to have a way to gather at least some of this in one (ugly) SQL command and I 
was hoping for some equivalent trick.

Greg W.
(apologies for top-posting -- limited mail reader)

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Sat 8/4/2007 9:51 AM
To: Gregory Williamson
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] List tables in reverse dependancy order 
 
"Gregory Williamson" <[EMAIL PROTECTED]> writes:
> I'm trying to create a procedure that would let me retrieve a list of
> tables and views in a database that will be used to control the order in
> which lookup data is created/loaded. So, much simplified, if table C
> references table B, which in turn references table A, we want output to
> list table A, B and C in that order.

> I'm sure that this exists -- the pg_dump command must use some similar
> algorithm to decide in which order to load tables, but I can't seem to
> puzzle this out.

pg_dump expends a fair amount of code on this problem; if you want to
handle the general case with circular references and so on, it's not
simple.  You could do worse than to run "pg_dump -s" and postprocess
its output.

If you are only interested in simpler cases then you might be able to
find a simpler solution.  For instance if you are only worried about
foreign-key linkages then looking into pg_constraint is much the
easiest way to find out about those.
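For the foreign-key-only case, something along these lines (an untested sketch; it ignores circular references) pulls the dependency edges out of pg_constraint, after which a topological sort can be done client-side:

SELECT conrelid::regclass  AS referencing_table,
       confrelid::regclass AS referenced_table
FROM pg_constraint
WHERE contype = 'f';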

regards, tom lane



[GENERAL] finding out vacuum completion %, and vacuum VS vacuum full

2007-08-07 Thread Sergei Shelukhin
Hi. We have archived and removed the majority of data from a database; the
main impact was on 4 tables, which lost several million rows (3
tables) and several dozen million rows (one table).

Naturally we decided to execute VACUUM FULL on the database to reclaim
all the space; it has been running for 22 hours already.
Can VACUUM get stuck and run forever e.g. should we cancel it?
Is there any way to diagnose the amount of time remaining for it to
run?
Should we rather cancel it, and run a normal VACUUM? 8.2 manual said
you should use FULL when the majority of data is deleted but I'm
concerned about the time it takes :)
Also, would increasing maintenance_work_mem while the query is running
help? It's currently 2 GB out of 4 GB RAM; the postgres process supposedly
running the vacuum is slowly eating up memory, but it's not even at 30%
yet...




[GENERAL] Using function like where clause

2007-08-07 Thread Ranieri Mazili

Hello,

I have 2 questions.
1) Can I use a function that will return a string in a WHERE clause, like
below?


select *
from table
where my_function_making_where()
 and another_field = 'another_thing'

2) Can I use a function that returns a string to produce the list of
columns that I want to show, like below?


select my_function_making_list_of_columns()
from table
where field_test = 'mydatum'

Thanks
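For what it's worth on question 1: a function can appear directly in a WHERE clause if it returns boolean; a function returning a string of SQL text (for either question) would have to be spliced in with dynamic SQL (EXECUTE inside a PL/pgSQL function). A sketch of the boolean form, with invented names:

CREATE FUNCTION is_interesting(text) RETURNS boolean AS $$
    SELECT $1 = 'mydatum';
$$ LANGUAGE sql;

SELECT *
FROM my_table
WHERE is_interesting(field_test)
  AND another_field = 'another_thing';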



[GENERAL] List tables in load order

2007-08-07 Thread Gregory Williamson
I am not sure if this is the appropriate list -- please point me at the correct 
one if not.

I'm trying to create a procedure that would let me retrieve a list of tables 
and views in a database that will be used to control the order in which lookup 
data is created/loaded. So, much simplified, if table C references table B, which 
in turn references table A, we want output to list table A, B and C in that 
order.

I'm sure that this exists -- the pg_dump command must use some similar 
algorithm to decide in which order to load tables, but I can't seem to puzzle 
this out.

Can anyone provide me with some clues, appropriate RTFM references, etc. ?

Apologies for any duplicate postings -- had issues with my sign up.

TIA,

Greg Williamson
Senior DBA
GlobeXplorer LLC, a DigitalGlobe company

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

(My corporate masters made me say this.)


[GENERAL] truncate transaction log

2007-08-07 Thread Sergei Shelukhin
Hi.

Is there any way to truncate WAL log in postgres?
We want to use a full-backup strategy where we stop the server and copy
the data directory; however, the WAL is taking dozens of gigabytes of
space.
Is there any way to remove it while keeping the database operational/
restore-able by copying it back?




Re: [GENERAL] how to detect the backup database every day

2007-08-07 Thread brian

Richard Huxton wrote:

[EMAIL PROTECTED] wrote:


Hello all

I use a crontab job to back up and restore my database every midnight every
day (using pg_dump and restore...)

My cron job ran well for a while. However, today I discovered that my
database had not been restored for one month.

I also fixed the problem, as there was corruption in the script.

My question is how I can detect whether the backup or restore process is
corrupted. I do not want my backup database to be one month old  -:(



Well, if the script failed with an error, cron should have sent you an 
email (or the user the script runs as, anyway).




To expand on that, have a look at your crontab and ensure that the call 
to your script does not end with ">/dev/null 2>&1". That will cause the 
script to run silently, regardless of any errors. If you want it to be 
silent yet have errors emailed to you, change it to ">/dev/null" 
(without the quotes).


If the emails have been sent but this is an account on a remote server 
you might want to investigate having the emails for that remote account 
sent to your regular email address.


brian



Re: [GENERAL] how to detect the backup database every day

2007-08-07 Thread Scott Marlowe
On 8/7/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hello all
>
> I use a crontab job to backup and restore my database every midnight every
> day (using pg_dump and restore...)

Backup from one machine, restore to another, right?

> My cron job ran well for a while. However, today I discovered that my
> database had not been restored for one month.

> I also fixed the problem, as there was corruption in the script.
>
> My question is how I can detect whether the backup or restore process is
> corrupted. I do not want my backup database to be one month old  -:(

You can detect whether backup failed from a bash script like so:

#!/bin/bash
if ( pg_dump dbname ); then
echo "good";
else
echo "bad";
fi;

Same thing for pg_restore or psql

> If you have any experience about this please help...

Some.  There are a lot of angles you can approach this from.  You can
have a simple cronjob that runs every day that checks the size / age
of the latest backup and sends an alarm if it's smaller than the last
one, or isn't there, etc...  You can use the find command to look for
files that are less than x seconds / minutes / hours / days old.



Re: [GENERAL] finding reusable ids

2007-08-07 Thread Kenji Morishige
Actually, I already have a resource table that stores the uid of the item in
question.  The checkout table does double duty as a history mechanism and a
check-out mechanism.  I think you are on the right track: I should separate
these two tasks and possibly create another table.  The actual design is a
bit more complicated, as we actually don't have a checked-in flag, but a
start and finish time where users can actually store multiple overlapping
records.

Kenji

On Tue, Aug 07, 2007 at 12:23:00PM +0200, Nis Jørgensen wrote:
> Kenji Morishige skrev:
> > I have a table that creates "check-out" records that stores information when
> > a particular resource is being utilized.  I want to maintain a friendly
> > shortened ID so people can reference these check outs.
> > 
> > At any given time, there should not be more than 99 or so check-outs, so
> > as the check-outs get checked in, the old IDs would become available.  What
> > is the best method to query for these reusable IDs that would not be
> > assigned to checked out items?  It seems that it would be quite inefficient
> > to look at the entire table to see which ids exist, then increment
> > accordingly.  For some reason, I feel that there would be something already
> > available to solve this.
> > 
> > example set:
> > 
> > uid  co-id  checked-in?
> > 1    1      n
> > 2    2      n
> > 3    3      y
> > 4    4      n
> > 5    3      n
> > 
> > obviously, this is a small sample set, but the id 3 can be reused, so I'd
> > like to reuse it without using an external tracking mechanism.  My table has
> > 1,000,000+ records.
> 
> Do you need the co-id once the item is checked in? If not, I would split
> this into two tables:
> 
> resources
> uid 
> 1
> 2
> 3
> 4
> 5
> 
> checked_out
> uid co_id
> 1   1
> 2   2
> 4   4
> 5   3
> 
> Where the existence of the row in the second table doubles as the
> checked-in flag.
> 
> This doesn't solve your problem, but might simplify the query to find a
> new id - something like this (untested):
> 
> SELECT min(q.co_id)
> FROM (
> SELECT (co_id + 1) as co_id FROM checked_out
> EXCEPT
> SELECT co_id FROM checked_out
> ) q;
> 
> (you need a special case when the table is empty)
> 
> The same method can of course be used with your original table layout.
> 
> Nis
> 
> 



Re: [GENERAL] Removing a schema

2007-08-07 Thread Tom Lane
Naz Gassiep <[EMAIL PROTECTED]> writes:
> I'm trying to remove a schema and move all the tables to another schema. 
> I've manually run alter table on every table to move them, however all 
> the foreign keys still reference the old schema,

What?  It works fine for me:

regression=# create schema s1;
CREATE SCHEMA
regression=# create table s1.t1(f1 int primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "t1_pkey" for 
table "t1"
CREATE TABLE
regression=# create table s1.t2(f1 int references s1.t1);
CREATE TABLE
regression=# create schema s2;
CREATE SCHEMA
regression=# alter table s1.t1 set schema s2;
ALTER TABLE
regression=# \d s1.t2
      Table "s1.t2"
 Column |  Type   | Modifiers
--------+---------+-----------
 f1     | integer |
Foreign-key constraints:
    "t2_f1_fkey" FOREIGN KEY (f1) REFERENCES s2.t1(f1)
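Since the original complaint was having too many tables to move by hand, one untested trick is to let the catalogs generate the commands and paste the output back into psql (schema names as in the example above):

SELECT 'ALTER TABLE s1.' || quote_ident(tablename) || ' SET SCHEMA s2;'
FROM pg_tables
WHERE schemaname = 's1';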


regards, tom lane



[GENERAL] Restricting columns by users

2007-08-07 Thread Ranieri Mazili

Hello,
I'm looking for a method to restrict access to columns by user in
PostgreSQL. Searching Google, I found what I want, but for MySQL; below
I reproduce the paragraph that shows exactly what I want:


"MySQL can also restrict access on the table level and even on the 
column level. What this means is that a user can have zero privileges on 
a database, but can have all privileges on a table in that database. 
Alternatively, a user can have zero privileges on a database, restricted 
privileges on a table, and all privileges on any particular column on 
that table. This is done by inserting records into the TABLE_PRIV table 
and the COLUMN_PRIV table."


Does something like this exist for PostgreSQL?

Thanks
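There is no direct column-level GRANT in PostgreSQL of this vintage; the usual workaround is a view exposing only the permitted columns, with privileges granted on the view instead of the table (all names below are invented):

CREATE VIEW employees_public AS
    SELECT id, name, department  -- but not salary
    FROM employees;

REVOKE ALL ON employees FROM app_user;
GRANT SELECT ON employees_public TO app_user;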



[GENERAL] create table like view

2007-08-07 Thread Filip Rembiałkowski
[EMAIL PROTECTED] create view v as select 1::int;
CREATE VIEW
[EMAIL PROTECTED] select * from v;
 int4
------
    1
(1 row)
[EMAIL PROTECTED] create table t ( like v);
ERROR:  inherited relation "v" is not a table



Why? Is there any logical reason for this? A view is just a table with
some rule ON SELECT...
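As an untested workaround, CREATE TABLE AS can copy a view's column names and types even though LIKE refuses to (constraints and defaults are not copied, but a view has none anyway):

CREATE TABLE t AS
    SELECT * FROM v LIMIT 0;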



-- 
Filip Rembiałkowski



[GENERAL] Having trouble building 64-bit pgsql 7.4.17 on HPUX ia64

2007-08-07 Thread Rajaram J
Hi

I'm having trouble building 64-bit pgsql 7.4.17 on the latest release of HP-UX 
11.23 on ia64.

Here's my compiler:
cc: HP C/aC++ B3910B A.06.15 [May 16 2007]

Here's my error:

/opt/ansic/bin/cc -Ae +O2 -L../../src/port -Wl,+nodefaultrpath  -L/usr/lib 
-L/opt/openssl/lib -Wl,+b -Wl,/opt/rajaram/postgresql/lib -Wl,-z  -Wl,-E 
access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o 
commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o 
nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o 
regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o 
-lc -lssl -lcrypto -lgen -ldld -lnsl -ldl -lm -lkrb5 -lk5crypto -lcom_err 
-lpgport -o postgres
ld: Unsatisfied symbol "tas" in file access/SUBSYS.o
1 errors.
gmake[2]: *** [postgres] Error 1
gmake[2]: *** Deleting file `postgres'
gmake[2]: Leaving directory `/postgresql/postgresql-7.4.17/src/backend'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/postgresql/postgresql-7.4.17/src'
gmake: *** [all] Error 2


Could anyone please advise on how to make 7.4.17 build here?


Regards

Rajaram J

Re: [GENERAL] how to detect the backup database every day

2007-08-07 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Hello all

I use a crontab job to back up and restore my database every midnight every
day (using pg_dump and restore...)

My cron job ran well for a while. However, today I discovered that my
database had not been restored for one month.

I also fixed the problem, as there was corruption in the script.

My question is how I can detect whether the backup or restore process is
corrupted. I do not want my backup database to be one month old  -:(


Well, if the script failed with an error, cron should have sent you an 
email (or the user the script runs as, anyway).


If you didn't have an error, you could run a separate cron that checks 
whether there are recent backup files and emails you if not (find ... 
-mtime -1).


Or, if you want to make sure the restore has worked check the database 
for a row with a recent timestamp.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] backend process terminates

2007-08-07 Thread Geoffrey

Tom Lane wrote:

Geoffrey Myers <[EMAIL PROTECTED]> writes:

The output from the gdb batch process may be found here:
http://www.serioustechnology.com/gdbbatch.txt


gdb isn't telling you the whole truth, evidently --- how'd control get
from line 781 to 912 with nothing in between?  Recompiling the backend
with -O0 or at most -O1 would be a good idea to get a more trustworthy
gdb trace.


As previously noted, we are building some third party code into the 
backend.  We don't have the source code, so it's difficult to know what 
might be going on there.


I don't know all the idiosyncrasies of how this works, so bear with me 
on this.  The developer at the vendor indicated that he's narrowed down 
the problem to a set of wrapper routines in their code.  They are named 
OpenFile(), CloseFile() and ReadFile();  He inquired as to whether there 
might be routines in the Postgresql code with the same names that might 
be causing a conflict.  Sure enough, I searched the Postgresql source 
code and found routines with the same names.  I don't see how this could 
pose a problem though, as it is my understanding that the compiler will 
properly address this issue.


Anyone think this might be a problem?

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] Removing a schema

2007-08-07 Thread Nis Jørgensen
Naz Gassiep skrev:
> I'm trying to remove a schema and move all the tables to another schema.
> I've manually run alter table on every table to move them, however all
> the foreign keys still reference the old schema, and there are too many
> to do by hand.
> 
> Is there an easy way to update one of the system catalogs to do this? I
> want to change every reference of that schema to point to the new one,
> and then drop the schema, but not drop any data.

I have done something like this

pg_dump old_schema in text format
create new schema
modify dump to set default schema to the new one
import dump with psql
drop old schema

Nis




Re: [GENERAL] finding reusable ids

2007-08-07 Thread Nis Jørgensen
Kenji Morishige skrev:
> I have a table that creates "check-out" records that stores information when
> a particular resource is being utilized.  I want to maintain a friendly
> shortened ID so people can reference these check outs.
> 
> At any given time, there should not be more than 99 or so check-outs, so
> as the check-outs get checked in, the old IDs would become available.  What
> is the best method to query for these reusable IDs that would not be
> assigned to checked out items?  It seems that it would be quite inefficient
> to look at the entire table to see which ids exist, then increment
> accordingly.  For some reason, I feel that there would be something already
> available to solve this.
> 
> example set:
> 
> uid  co-id  checked-in?
> 1    1      n
> 2    2      n
> 3    3      y
> 4    4      n
> 5    3      n
> 
> obviously, this is a small sample set, but the id 3 can be reused, so I'd
> like to reuse it without using an external tracking mechanism.  My table has
> 1,000,000+ records.

Do you need the co-id once the item is checked in? If not, I would split
this into two tables:

resources
uid 
1
2
3
4
5

checked_out
uid co_id
1   1
2   2
4   4
5   3

Where the existence of the row in the second table doubles as the
checked-in flag.

This doesn't solve your problem, but might simplify the query to find a
new id - something like this (untested):

SELECT min(q.co_id)
FROM (
SELECT (co_id + 1) as co_id FROM checked_out
EXCEPT
SELECT co_id FROM checked_out
) q;

(you need a special case when the table is empty)

The same method can of course be used with your original table layout.

Nis




[GENERAL] tsearch2: plainto_tsquery() with OR?

2007-08-07 Thread cluster

I have some questions related to tsearch2:

1) Is
   ...WHERE rank(myTsVector, myTsQuery) > 0 ...
just as fast as
   ...WHERE myTsVector @@ myTsQuery...
?


2) I will use plainto_tsquery() to parse search keys entered by a
website user into a tsquery. However, if some of the entered keywords
do not exist in the searched tsvectors (but others do), I would still
like the search result to be "true".
plainto_tsquery() glues the keywords together with "&". I am looking
for a version of plainto_tsquery() that glues the keywords with "|"
(the OR operator). That way, not ALL keywords are required to exist in
the tsvector in order for the row to be returned.
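One commonly suggested workaround (an untested sketch) is to let plainto_tsquery() do the parsing and stemming, then rewrite the operators via a text round-trip:

SELECT to_tsquery(replace(plainto_tsquery('fat cats rats')::text, '&', '|'));
-- intended result: a tsquery of the form 'fat' | 'cat' | 'rat'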

