Re: [GENERAL] Is it possible to findout actual owner of table?

2010-03-08 Thread John R Pierce

dipti shah wrote:

What is middleware?


An application server that implements all the business logic.  Your user 
software calls the application server to do things, and it in turn 
accesses the database.  User-written software is not allowed to 
connect directly to the databases at all.   A classic example of a 
middleware platform is Apache Tomcat.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it possible to findout actual owner of table?

2010-03-08 Thread dipti shah
What is middleware?

Is it similar to a stored procedure? That is what I have been doing. I have
revoked all permissions on the mydb schema from public and have SECURITY
DEFINER enabled for the stored procedure to allow creating/dropping/altering
tables.

Thanks.
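For readers following along, a minimal sketch of the SECURITY DEFINER pattern described here (the schema, function, and audit-table names are illustrative, not from the original setup):

```sql
-- Hypothetical wrapper that creates a table on behalf of a user.
-- SECURITY DEFINER makes the body run with the privileges of the
-- function's owner (e.g. a superuser), even though callers only
-- have USAGE on the schema.
CREATE OR REPLACE FUNCTION mydb.create_user_table(tablename text)
RETURNS void AS $$
BEGIN
    EXECUTE 'CREATE TABLE mydb.' || quote_ident(tablename)
         || ' (id serial PRIMARY KEY)';
    -- Record who actually invoked the procedure: session_user still
    -- reports the calling role inside a SECURITY DEFINER function.
    INSERT INTO mydb.audit_log (action, requested_by, at)
    VALUES ('CREATE TABLE ' || tablename, session_user, now());
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```

Note that quote_ident() is what keeps the dynamically built DDL safe from SQL injection through the table name.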

On Tue, Mar 9, 2010 at 12:19 PM, John R Pierce  wrote:

> dipti shah wrote:
>
>> I don't want users to create/drop/alter anything directly. They have to
>> use the stored procedure for everything. The stored procedure creates logging
>> tables and stores much other auditing information, so it is mandatory to
>> prevent users from running any direct commands.
>>
>>
>
> Maybe you should use middleware instead and not let your users connect
> directly to SQL at all.
>
> So, for any DDL changes, you'd call the middleware server; it would
> authenticate the user, decide what's allowed, maintain your audit trail,
> logging, etc.  Ditto, any operations that require database queries, etc.,
> would all be done by this middleware.
>


Re: [GENERAL] kernel version impact on PostgreSQL performance

2010-03-08 Thread John R Pierce

Albe Laurenz wrote:
"the PostgreSQL performance atop the EXT3 file-system has fallen off a 
cliff"


really a pretty stupid statement from them, as the performance went 
right back to where it was a few versions earlier.







Re: [GENERAL] kernel version impact on PostgreSQL performance

2010-03-08 Thread Greg Smith

Albe Laurenz wrote:

Maybe that question is dumb, but why should a change in ext4 have an
impact on a figure that was generated with ext3? To quote the link:

"the PostgreSQL performance atop the EXT3 file-system has fallen off a cliff"
  


What I'm guessing is that after finding the root problem, because it's 
more obvious when combined with ext4, someone has now put the fix for it 
in the right place for 2.6.33 so that it also impacts ext3 writes.  That 
would be more good news from a reliability perspective, and bad news 
from an advocacy one, because it looks really bad on benchmarks if the 
writer doesn't understand how database commits to disk work--which seems 
to be the case at Phoronix.  They really are not running the right sort of 
PostgreSQL benchmarks at all over there.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] [NOVICE] How to read oracle table's data from postgre

2010-03-08 Thread Greg Smith

Scott Marlowe wrote:

Anyone know if this is still active / developed?
http://wiki.postgresql.org/wiki/SqlMedConnectionManager
  


That's still the right long-term approach for a lot of these problems, 
so while activity on it happens in bursts, I don't think it will ever go 
completely inactive.  I expect SQL/MED plans for the next release may 
get fleshed out at this year's developer's meeting in a couple of 
months; we should have a better idea who might be working on this by the 
end of May.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] Is it possible to findout actual owner of table?

2010-03-08 Thread John R Pierce

dipti shah wrote:
I don't want users to create/drop/alter anything directly. They have 
to use the stored procedure for everything. The stored procedure creates 
logging tables and stores much other auditing information, so it is 
mandatory to prevent users from running any direct commands.
 


Maybe you should use middleware instead and not let your users connect 
directly to SQL at all.


So, for any DDL changes, you'd call the middleware server; it would 
authenticate the user, decide what's allowed, maintain your audit trail, 
logging, etc.  Ditto, any operations that require database queries, etc., 
would all be done by this middleware.








Re: [GENERAL] Is it possible to findout actual owner of table?

2010-03-08 Thread dipti shah
I don't want users to create/drop/alter anything directly. They have to use
the stored procedure for everything. The stored procedure creates logging tables
and stores much other auditing information, so it is mandatory to prevent
users from running any direct commands.

Thanks,
Dipti

On Tue, Mar 9, 2010 at 11:54 AM, Tom Lane  wrote:

>  dipti shah  writes:
> > I have created a stored procedure that allows particular users to create
> > tables. I want to prevent users from dropping tables owned by someone else,
> > and hence I am making the owner of each table created by this stored
> > procedure the superuser (postgres), so that no one will be allowed to
> > drop/alter the table. I want to allow the table to be dropped/altered by
> > the actual user who ran the stored procedure to create it. Could anyone
> > please suggest how to proceed with this?
>
> It sounds like you are going to a great deal of trouble to reinvent the
> standard behavior.  Why not forget the stored procedure and just let
> users create and drop their own tables?
>
>regards, tom lane
>


[GENERAL] How to save existing permissions on schema and later on restore it?

2010-03-08 Thread dipti shah
Hi,

Could anyone please suggest how to save the existing permissions on a
schema and later restore them? By default, I have revoked all the
permissions on the mydb schema, but there are some users who have all
permissions on it. I have one stored procedure which temporarily
grants ALL permissions on the mydb schema and later revokes them, granting
just USAGE. The problem arises when a user who has full permissions on the
schema executes the stored procedure: their original schema permissions
are altered by it. I want to prevent this.
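One way to snapshot a schema's current ACL before a procedure rewrites it (a sketch; the role name 'someuser' is a placeholder):

```sql
-- The raw ACL of the schema, straight from the catalog.  Each aclitem
-- looks like "rolename=UC/grantor"; save this before revoking, then
-- re-issue the matching GRANTs afterwards.
SELECT nspname, nspacl
FROM pg_namespace
WHERE nspname = 'mydb';

-- Or test individual privileges per role:
SELECT has_schema_privilege('someuser', 'mydb', 'USAGE')  AS has_usage,
       has_schema_privilege('someuser', 'mydb', 'CREATE') AS has_create;
```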

Thanks,
Dipti


Re: [GENERAL] Is it possible to findout actual owner of table?

2010-03-08 Thread Tom Lane
dipti shah  writes:
> I have created the stored procedure that allows particular users to create
> the table. I want to prevent users to drop the tables owned by someone esle
> and hence, I am making owner of each table created by this stored procedure
> to super user(postgres) so that no one will be allowed to drop/alter table.
> I want to allow to drop/alter the table by actuall user who ran the stored
> procedure to create the table. Could anyone please suggest me how to proceed
> with this?

It sounds like you are going to a great deal of trouble to reinvent the
standard behavior.  Why not forget the stored procedure and just let
users create and drop their own tables?

regards, tom lane



[GENERAL] Is it possible to findout actual owner of table?

2010-03-08 Thread dipti shah
Hi,

I have created a stored procedure that allows particular users to create
tables. I want to prevent users from dropping tables owned by someone else,
and hence I am making the owner of each table created by this stored procedure
the superuser (postgres), so that no one will be allowed to drop/alter the
table. I want to allow the table to be dropped/altered by the actual user who
ran the stored procedure to create it. Could anyone please suggest how to
proceed with this?
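For reference, the owner Postgres has recorded for a table can be read straight from the catalogs (a sketch; 'mytable' is a placeholder):

```sql
-- Via the convenience view:
SELECT tablename, tableowner
FROM pg_tables
WHERE tablename = 'mytable';

-- Or directly from pg_class:
SELECT relname, pg_get_userbyid(relowner) AS owner
FROM pg_class
WHERE relname = 'mytable';
```

Of course, once every table is reassigned to postgres this only ever shows postgres, which is why the actual creator has to be recorded separately (e.g. in an audit table filled in by the stored procedure).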

Thanks,
Dipti


Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread Sam Carleton
I would like to thank both John and Scott for the help.  It is very clear to
me that PostgreSQL isn't the ideal solution for my current model.  The
conversation has gotten me thinking of ways the model could be modified to
work with PostgreSQL (and other client/server RDBMSs).  Thus I will return to
the drawing board just to double-check whether there might be a better
model.

Thank you all, this has been truly valuable!

Sam


Re: [GENERAL] autovacuum question

2010-03-08 Thread Tom Lane
"Scot Kreienkamp"  writes:
>> Why not just add an 'analyze' as the last step of the restore job?

> Due to the amount of time it takes.  The disks are slow enough to make a
> database-wide analyze painful, since I would have to repeat it every 1-2
> hours, i.e., every reload time.

You claimed that before.  It didn't make any sense then and it doesn't
now.  There is no way that an analyze is expensive compared to a
database reload.

Maybe what you need to be doing is rethinking the strategy that involves
reloading every couple of hours...

regards, tom lane



Re: [GENERAL] autovacuum question

2010-03-08 Thread Scot Kreienkamp
 

On Mon, Mar 8, 2010 at 5:13 PM, Scot Kreienkamp wrote:

Hi everyone,

I have a database that is constantly getting reloaded several times per
day from production backups and is used for reporting purposes.  The
problem I'm having with it is that the database seems to be much slower
than the others I have that are more static.  I suspect that is due to
the lack of query planner statistics (analyze) having been done after
each restore, which is that way due to the amount of time it takes to
accomplish.

First, I'm hoping someone here can validate my theory.  

 

  It would seem likely that this could be the problem...

 

Cool Thanks.  Glad I'm on the right track.

 

Second, if that's true, is there any way to trigger an auto-analyze on a
table automatically the first time a query touches that table?

(I ask because there is no way to shrink the amount of time a database-wide
analyze would take into the window I have to do it in.  The expectations
may be a bit unrealistic here, I know.)

 

Why not just add an 'analyze' as the last step of the restore job?

 

Due to the amount of time it takes.  The disks are slow enough to make a
database-wide analyze painful, since I would have to repeat it every 1-2
hours, i.e., every reload time.

 

Third, what parameters can I set to make analyze go as fast as possible,
knowing that the disks are slow on it because of the hardware?  (Can't do
anything about that either, FYI.)  Obviously the more memory the better, and
setting maintenance work memory higher also.  Doing a vacuum is kind of
pointless because it gets reloaded every 2 hours, so all I really need is
an analyze --I think--.

 

Sounds like you've done what you can.  How long does an analyze take?

 

Last I tried it, it took 15 minutes on a 30 gig database while it was
being used.  

 

I'm on 8.3.7.  Since 8.3.10 is due out I'll probably upgrade to it after
it's shaken out a bit.

 

  Why not move up to 8.4? 

 

Because I'm constrained by our application developers who don't have the
time to vet our app against 8.4 yet.  I've been pushing for it for the
last 2 months.  

 

--Scott M

 

Thanks for your help.  Any suggestions are welcome.

Scot Kreienkamp
skre...@la-z-boy.com




 



Re: [GENERAL] autovacuum question

2010-03-08 Thread Scott Mead
On Mon, Mar 8, 2010 at 5:13 PM, Scot Kreienkamp wrote:

> Hi everyone,
>
> I have a database that is constantly getting reloaded several times per
> day from production backups and is used for reporting purposes.  The
> problem I'm having with it is that the database seems to be much slower
> than the others I have that are more static.  I suspect that is due to
> the lack of query planner statistics (analyze) having been done after
> each restore, which is that way due to the amount of time it takes to
> accomplish.
>
> First, I'm hoping someone here can validate my theory.


  It would seem likely that this could be the problem...


> Second, if
> that's true, is there any way to trigger an auto-analyze on a table
> automatically the first time a query touches that table?

(I ask because
> there is no way to shrink the amount of time a database-wide analyze
> would take into the window I have to do it in.  The expectations may be
> a bit unrealistic here, I know.)


Why not just add an 'analyze' as the last step of the restore job?



> Third, what parameters can I set to
> make analyze go as fast as possible, knowing that the disks are slow on
> it because of the hardware?  (Can't do anything about that either, FYI)
> Obviously more memory the better, and setting maintenance work memory
> higher also.  Doing a vacuum is kind of pointless because it gets
> reloaded every 2 hours, so all I really need is an analyze --I think--.


Sounds like you've done what you can.  How long does an analyze take?


>
>
> I'm on 8.3.7.  Since 8.3.10 is due out I'll probably upgrade to it after
> it's shaken out a bit.
>

  Why not move up to 8.4?

--Scott M


> Thanks for your help.  Any suggestions are welcome.
>
> Scot Kreienkamp
> skre...@la-z-boy.com
>
>
>
>


Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread Scott Mead
On Mon, Mar 8, 2010 at 4:58 PM, Sam Carleton wrote:

> On Mon, Mar 8, 2010 at 4:07 PM, John R Pierce  wrote:
>
>>
>> and what happens if someone copies your directory without shutting down
>> the instance?
>>
>
> Well, that is an issue right now, the current SQLite DB is locked by the
> app and cannot currently be copied while the app is running.  Or at least I
> don't think it can be copied safely, which is why there is a backup function
> that will simply create a copy of the DB file where it is told to go,
> something I have yet to implement as of now.
>
>
>> Sounds like trouble in the making to me.  Methinks you're better off
>> sticking with a file-based database and not trying to use a server-based
>> one.
>
>
> I hear you, but I am not willing to throw in the towel just yet...
>  Generally speaking, is there a lot of metadata that would need to be
> exported?  As I think about this, I am thinking I would have to read in ALL
> the system tables and find all the different parts that make up all the
> different aspects of all the stuff in the tablespace, like the tables,
> columns, views, stored procs, etc.  It isn't a matter of just dumping one or
> two tables, but a matter of combining lots of tables and exporting lots of
> very specific pieces of those tables.  Correct?
>


  It's not just the catalogs; you'd have to grab the pg_clog files that were
appropriate, and you would need to be 100% sure that you froze the xmins in
the table before you tried the 'detach'.  The PG engine doesn't have the
notion of a 'transportable table[space]' internally.  If you can live with
just doing a pg_dump and pg_restore to your next server, then Postgres will
work for you.  Instead of tablespaces, you could keep everything in schemas
in the database, then just dump/restore all objects in that schema.

  If you're not able to make a change like that, however, you'll probably be
better off with a file-based DB.
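The schema-per-"document" alternative suggested above would look roughly like this (a sketch; the database and schema names are placeholders):

```shell
# Dump just one schema, in custom format...
pg_dump -Fc -n document_schema mydb -f document_schema.dump

# ...and restore it into another database or server later.
pg_restore -d otherdb document_schema.dump
```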

--Scott

>
> Sam
>


Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread Tom Lane
Sam Carleton  writes:
> One of the cases I need to be concerned with is the case where the
> application is shut down, the user goes in and deletes the tablespace folder
> by way of deleting the parent folder. How will PostgreSQL handle such a
> situation?

It won't like it.  If your users are likely to do such a thing, Postgres
is not the database for you.

regards, tom lane



Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread Sam Carleton
On Mon, Mar 8, 2010 at 5:46 PM, John R Pierce  wrote:

>
>> I hear you, but I am not willing to throw in the towel just yet...
>>  Generally speaking, is there a lot of metadata that would need to be
>> exported?  As I think about this, I am thinking I would have to read in ALL
>> the system tables and find all the different parts that make up all the
>> different aspects of all the stuff in the tablespace, like the tables,
>> columns, views, stored procs, etc.  It isn't a matter of just dumping one or
>> two tables, but a matter of combining lots of tables and exporting lots of
>> very specific pieces of those tables.  Correct?
>>
>
> how about...
>   pg_dump -Fc dbname >filename.dump
>

One of the cases I need to be concerned with is the case where the
application is shut down, the user goes in and deletes the tablespace folder
by way of deleting the parent folder. How will PostgreSQL handle such a
situation?

Sam


Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread John R Pierce

Sam Carleton wrote:
On Mon, Mar 8, 2010 at 4:07 PM, John R Pierce wrote:



and what happens if someone copies your directory without shutting
down the instance?


Well, that is an issue right now, the current SQLite DB is locked by 
the app and cannot currently be copied while the app is running.  Or 
at least I don't think it can be copied safely, which is why there is 
a backup function that will simply create a copy of the DB file where 
it is told to go, something I have yet to implement as of now.
 


Sounds like trouble in the making to me.  Methinks you're better
off sticking with a file-based database and not trying to use a
server-based one.


I hear you, but I am not willing to throw in the towel just yet... 
 Generally speaking, is there a lot of metadata that would need to be 
exported?  As I think about this, I am thinking I would have to read 
in ALL the system tables and find all the different parts that make up 
all the different aspects of all the stuff in the tablespace, like the 
tables, columns, views, stored procs, etc.  It isn't a matter of just 
dumping one or two tables, but a matter of combining lots of tables and 
exporting lots of very specific pieces of those tables.  Correct?


how about...
   pg_dump -Fc dbname >filename.dump
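For completeness, a custom-format (-Fc) dump like the one above is loaded back with pg_restore rather than psql (a sketch; dbname is a placeholder):

```shell
pg_restore -d dbname filename.dump
```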





[GENERAL] autovacuum question

2010-03-08 Thread Scot Kreienkamp
Hi everyone,

I have a database that is constantly getting reloaded several times per
day from production backups and is used for reporting purposes.  The
problem I'm having with it is that the database seems to be much slower
than the others I have that are more static.  I suspect that is due to
the lack of query planner statistics (analyze) having been done after
each restore, which is that way due to the amount of time it takes to
accomplish.  

First, I'm hoping someone here can validate my theory.  Second, if
that's true, is there any way to trigger an auto-analyze on a table
automatically the first time a query touches that table?  (I ask because
there is no way to shrink the amount of time a database-wide analyze
would take into the window I have to do it in.  The expectations may be
a bit unrealistic here, I know.)  Third, what parameters can I set to
make analyze go as fast as possible, knowing that the disks are slow on
it because of the hardware?  (Can't do anything about that either, FYI)
Obviously more memory the better, and setting maintenance work memory
higher also.  Doing a vacuum is kind of pointless because it gets
reloaded every 2 hours, so all I really need is an analyze --I think--.
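The two levers hinted at above can be sketched as follows (table names are placeholders, not from the thread):

```sql
-- Give the maintenance pass more memory for this session.
SET maintenance_work_mem = '512MB';

-- Analyze only the large tables the reports actually touch,
-- instead of the whole database.
ANALYZE big_report_table;
ANALYZE other_hot_table;
```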


I'm on 8.3.7.  Since 8.3.10 is due out I'll probably upgrade to it after
it's shaken out a bit.  

Thanks for your help.  Any suggestions are welcome.  

Scot Kreienkamp
skre...@la-z-boy.com





Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread Sam Carleton
On Mon, Mar 8, 2010 at 4:07 PM, John R Pierce  wrote:

>
> and what happens if someone copies your directory without shutting down the
> instance?
>

Well, that is an issue right now, the current SQLite DB is locked by the app
and cannot currently be copied while the app is running.  Or at least I
don't think it can be copied safely, which is why there is a backup function
that will simply create a copy of the DB file where it is told to go,
something I have yet to implement as of now.


> Sounds like trouble in the making to me.  Methinks you're better off
> sticking with a file-based database and not trying to use a server-based
> one.


I hear you, but I am not willing to throw in the towel just yet...
 Generally speaking, is there a lot of metadata that would need to be
exported?  As I think about this, I am thinking I would have to read in ALL
the system tables and find all the different parts that make up all the
different aspects of all the stuff in the tablespace, like the tables,
columns, views, stored procs, etc.  It isn't a matter of just dumping one or
two tables, but a matter of combining lots of tables and exporting lots of
very specific pieces of those tables.  Correct?

Sam


Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread Sam Carleton
On Mon, Mar 8, 2010 at 3:52 PM, John R Pierce  wrote:

>
> I don't think PostgreSQL is going to work for you if thats a requirement.
> A tablespace doesn't contain the metadata for the items in it, thats stored
> in the main cluster space (pg_catalog, etc)
>


Would it be difficult to export the metadata as part of the shutdown process
of the service?  Or am I simply reaching a bit too much?  (For the record,
my application exists because I run into this type of "things don't work
that way" and then I find some interesting solutions.  But maybe this
time I am simply better off with Firebird.)

Sam Carleton
Developer of Photo Parata 


Re: [GENERAL] DROP column: documentation unclear

2010-03-08 Thread Martijn van Oosterhout
On Mon, Mar 08, 2010 at 05:09:14PM +0100, Adrian von Bidder wrote:
> Hi,
> 
> The documentation about DROP COLUMN is a bit unclear:
> 
> | The DROP COLUMN form does not physically remove the column, but simply
> | makes it invisible to SQL operations. Subsequent insert and update
> | operations in the table will store a null value for the column. Thus,
> | dropping a column is quick but it will not immediately reduce the on-disk
> | size of your table, as the space occupied by the dropped column is not
> | reclaimed. The space will be reclaimed over time as existing rows are
> | updated.
> 
> "Subsequent ... will store a null value" would imply that deleted columns 
> will still take up some space, while "the space will be reclaimed ..." would 
> suggest that new rows (inserts, or updates in MVCC) don't have the deleted 
> column anymore - I'm not quite sure how to interpret this.  What is pg 
> doing?

What you're missing is that in Postgres NULLs are stored as a bit in
the tuple header, and there are no data bytes. So in a sense NULLs take no
space (well, one bit), which means both statements are true.
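The "invisible but still present" state of a dropped column is easy to see in the catalogs (a sketch):

```sql
CREATE TABLE t (a int, b int);
ALTER TABLE t DROP COLUMN b;

-- The dropped column is still physically in the row layout,
-- just flagged as dropped:
SELECT attname, attisdropped
FROM pg_attribute
WHERE attrelid = 't'::regclass AND attnum > 0;
```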

Have a nice day,

-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread John R Pierce

Sam Carleton wrote:



On Mon, Mar 8, 2010 at 3:52 PM, John R Pierce wrote:



I don't think PostgreSQL is going to work for you if that's a
requirement.   A tablespace doesn't contain the metadata for the
items in it; that's stored in the main cluster space (pg_catalog, etc.).


 
Would it be difficult to export the metadata as part of the shutdown 
process of the service?  Or am I simply reaching a bit too much?  (For 
the record, my application exists because I run into this type of 
"things don't work that way" and then I find some interesting 
solutions.  But maybe this time I am simply better off with 
Firebird.)


and what happens if someone copies your directory without shutting down 
the instance?


Sounds like trouble in the making to me.  Methinks you're better off 
sticking with a file-based database and not trying to use a server-based 
one.








Re: [GENERAL] managing tablespaces like files?

2010-03-08 Thread John R Pierce

Sam Carleton wrote:
The server of my client/server program is currently using SQLite, an 
embedded file-based SQL engine.  I am looking for a client/server-based 
RDBMS and have narrowed it down to either PostgreSQL or Firebird. 
 I would prefer to use PostgreSQL, but the management of the physical 
files is concerning me.  

A very key element of the program I am developing is the ability for 
the user to pick up one folder that contains all the data for that 
"document" and copy it somewhere else, then allow the program to read 
it from that other location.  It is equally important that when the 
database server initially starts, if one of the "tablespaces" was 
deleted by the user, the database be none the wiser.  

This is, of course, not a problem with Firebird since it is file 
based.  What I could do is simply disconnect from the tablespace each 
and every time the server program shuts down and reconnect to that 
tablespace each time the server starts up, pointing to a new 
tablespace if need be.  So how would one:


1: "disconnect" from a tablespace, in such a way that neither the 
tablespace nor its contents are changed.

2: "connect" to an existing tablespace.


I don't think PostgreSQL is going to work for you if that's a 
requirement.   A tablespace doesn't contain the metadata for the items 
in it; that's stored in the main cluster space (pg_catalog, etc.).
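The split described here is visible in the catalogs: a tablespace directory holds only data files, while pg_class in the cluster-wide catalog records which relations live in it (a sketch; 'myspace' is a placeholder tablespace name):

```sql
SELECT c.relname, t.spcname
FROM pg_class c
JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE t.spcname = 'myspace';
```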











[GENERAL] managing tablespaces like files?

2010-03-08 Thread Sam Carleton
The server of my client/server program is currently using SQLite, an
embedded file-based SQL engine.  I am looking for a client/server-based
RDBMS and have narrowed it down to either PostgreSQL or Firebird.  I would
prefer to use PostgreSQL, but the management of the physical files is
concerning me.

A very key element of the program I am developing is the ability for the
user to pick up one folder that contains all the data for that "document" and
copy it somewhere else, then allow the program to read it from that other
location.  It is equally important that when the database server initially
starts, if one of the "tablespaces" was deleted by the user, the database be
none the wiser.

This is, of course, not a problem with Firebird since it is file based.
 What I could do is simply disconnect from the tablespace each and every
time the server program shuts down and reconnect to that tablespace each
time the server starts up, pointing to a new tablespace if need be.  So how
would one:

1: "disconnect" from a tablespace, in such a way that neither the tablespace
nor its contents are changed.
2: "connect" to an existing tablespace.

Sam


[GENERAL] DROP column: documentation unclear

2010-03-08 Thread Adrian von Bidder
Hi,

The documentation about DROP COLUMN is a bit unclear:

| The DROP COLUMN form does not physically remove the column, but simply
| makes it invisible to SQL operations. Subsequent insert and update
| operations in the table will store a null value for the column. Thus,
| dropping a column is quick but it will not immediately reduce the on-disk
| size of your table, as the space occupied by the dropped column is not
| reclaimed. The space will be reclaimed over time as existing rows are
| updated.

"Subsequent ... will store a null value" would imply that deleted columns 
will still take up some space, while "the space will be reclaimed ..." would 
suggest that new rows (inserts, or updates in MVCC) don't have the deleted 
column anymore - I'm not quite sure how to interpret this.  What is pg 
doing?

cheers
-- vbi


-- 
featured product: PostgreSQL - http://postgresql.org




Re: [GENERAL] Enterprise DB's windows package and upgrading windows.

2010-03-08 Thread Dave Page
On Mon, Mar 8, 2010 at 7:35 PM, John Moran  wrote:
> Hello,
>
> I'd like to upgrade my Windows box from Windows Vista to Windows 7. It
> runs the EnterpriseDB Postgres package, as well as being the master
> of a Slony-I cluster. Is this likely to work without incident? Is
> there anything I should be aware of?

It's certainly not one of the many scenarios we test, but there
shouldn't be any problems that I can think of. We do test and support
both Vista and Windows 7; there are no special installation steps
that differ on either, and the software is the same.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
PG East Conference: http://www.enterprisedb.com/community/nav-pg-east-2010.do



[GENERAL] Enterprise DB's windows package and upgrading windows.

2010-03-08 Thread John Moran
Hello,

I'd like to upgrade my Windows box from Windows Vista to Windows 7. It
runs the EnterpriseDB Postgres package, as well as being the master
of a Slony-I cluster. Is this likely to work without incident? Is
there anything I should be aware of?

Regards,
John Moran



Re: [GENERAL] 8.3.10 Changes

2010-03-08 Thread Tom Lane
Brad Nicholson  writes:
> Could someone please point me towards the changes for 8.3.10 that was
> mentioned on -announce this morning?

The release notes haven't been prepared yet.  You can browse the CVS
commit logs if you want the raw data.

> Also, any idea when this is going to be released?

Monday 3/15.

regards, tom lane



[GENERAL] ERROR: Package compat-postgresql-libs-4-1PGDG.rhel4.i386.rpm is not signed

2010-03-08 Thread Padmanabhan G
Hi,

I have an error while installing PostgreSQL 8.3 on CentOS 4.

I did the following...

# rpm -ivh pgdg-centos-8.3-6.noarch.rpm
#
# vi /etc/yum.repos.d/CentOS-Base.repo

Add "exclude=postgresql*" to both the [base] and [update] sections.


#yum install postgresql-server
Dependencies Resolved

=
 Package Arch   Version  RepositorySize
=
Installing:
 postgresql-server   i386   8.3.9-1PGDG.rhel4  pgdg834.4
M
Installing for dependencies:
 compat-postgresql-libs  i386   4-1PGDG.rhel4pgdg83 56 k
 postgresql  i386   8.3.9-1PGDG.rhel4  pgdg831.7
M
Updating for dependencies:
 perl-DBD-Pg i386   1.49-2.el4s1 centosplus112 k
 postgresql-libs i386   8.3.9-1PGDG.rhel4  pgdg83190
k
 postgresql-odbc i386   08.03.0400-1PGDG.rhel4
pgdg83229 k
 postgresql-python   i386   8.1.15-1.el4s1.1  centosplus 63
k
 postgresql-tcl  i386   8.1.15-1.el4s1.1  centosplus 79
k

Transaction Summary
=
Install  3 Package(s)
Update   5 Package(s)
Remove   0 Package(s)
Total download size: 6.9 M
Is this ok [y/N]: y
Downloading Packages:


Package compat-postgresql-libs-4-1PGDG.rhel4.i386.rpm is not signed




Best Regards,
G. Padmanabhan


-- 
என்றும் அன்புடன் உங்கள்

பத்மநாபன்


[GENERAL] 2 questions to ask

2010-03-08 Thread chaoyong wang

Hi,
I'm using VS2005 to debug PG, and I have 2 questions to ask:

1. I want to add a function to contrib/xml2/xpath.c. I changed xpath.c,
pgxml.sql, and pgxml.sql.in at the same time, then built and installed as the
documentation said. When I execute "psql test < pgxml.sql", all functions are
created successfully except the one I added:

   ERROR:  could not find function "xml_value_index" in file "C:\Program Files\PostgreSQL\8.3\lib/pgxml.dll"

I rebuilt the pgxml project and replaced C:\Program
Files\PostgreSQL\8.3\lib\pgxml.dll with the newly produced
.\Debug\pgxml\pgxml.dll, but the error remains the same.

2. When I execute a simple SELECT query to see how it works, the concept of a
"Portal" really puzzles me; maybe it's too abstract. Could you please explain
it more specifically?

Thanks for reading, and looking forward to your reply.

Best regards,
Catcher Wang

[GENERAL] 8.3.10 Changes

2010-03-08 Thread Brad Nicholson
Could someone please point me towards the changes for 8.3.10 that was
mentioned on -announce this morning?

Also, any idea when this is going to be released?
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Entering a character code in a query

2010-03-08 Thread John Gage

I would like to use the following query:

SELECT english || '\n' || english || '\x2028' || french AS output FROM  
vocab_words_translated;


where \x2028 is the hexadecimal code for a soft carriage return.

However, this does not work.

Can anyone help with this problem?

Thanking you,

John

Pertinent codes:

2028	LINE SEPARATOR * may be used to represent this semantic unambiguously

U+2028: decimal 8232, hex 0x2028, octal \20050, binary 10000000101000,
UTF-8: 0xe2 0x80 0xa8
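A sketch of one way this can be made to work (assuming a UTF8 database): backslash escapes such as \n are only interpreted inside E'...' strings, and in recent PostgreSQL versions chr() interprets its argument as a Unicode code point when the server encoding is UTF8, so U+2028 is chr(8232):

```sql
-- E'...' makes \n an actual newline; chr(8232) is U+2028 (LINE SEPARATOR)
-- in a UTF8 database. Table and column names are taken from the query above.
SELECT english || E'\n' || english || chr(8232) || french AS output
FROM vocab_words_translated;
```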

Re: [GENERAL] obsessive-compulsive vacuum behavior

2010-03-08 Thread Tom Lane
Ben Chobot  writes:
> ...and so on. It's been running for an hour or so now, when it appears it
> shouldn't take 10 minutes. This seems pretty weird to me; has anybody else
> seen this behavior? I'm not even sure what details I could report which would
> help figure out what's going on.

You're apparently trying to vacuum a really large table with a whole lot
of dead rows using a tiny maintenance_work_mem setting (16MB, looks
like).  Every time the list of dead tuples fills up, it has to perform a
pass over the table and indexes to get rid of those tuples.  Then it
goes back to scanning for more dead tuples.

If it's still going, I'd suggest cancelling it and starting again with a
more realistic maintenance_work_mem, maybe something in the several
hundred MB range, depending on how much memory is free in your server.
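For example (the value and table name below are placeholders; size the setting to the machine's free memory):

```sql
-- Session-local override; does not touch postgresql.conf.
SET maintenance_work_mem = '512MB';
VACUUM VERBOSE mytable;  -- 'mytable' stands in for the large table
```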

After it's done you probably ought to look into your autovacuum
settings.  You shouldn't have gotten into a state with so many dead
tuples in the first place, I would think, if autovac were functioning
properly.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL RPM sets for 9.0 Alpha4 released

2010-03-08 Thread Devrim GÜNDÜZ

PostgreSQL RPM Building Project released RPM sets for 4th Alpha of the
upcoming 9.0 release.

Please note that these packages are **not** production ready. 

They are built for Fedora 7,8,11,12 and RHEL/CentOS 4,5.

These packages *do* require a dump/reload, even from the third alpha
packages, because of catversion updates.

We need more people to discover any bugs and test new features in 9.0
development version.

Here is the list of the new features in 9.0 development version:

http://developer.postgresql.org/pgdocs/postgres/release-9-0.html

This RPM set has a new experimental feature that has been requested for a
long time: multiple-version installation of RPMs. We also need
volunteers to test this new RPM feature. This new release has many
RPM-related fixes over the last alpha.

As usual, please find detailed info about RPMs from:

http://yum.pgrpms.org/howtoyum.php

This is the new RPM repository URL. You will need to update repo RPMs 
as written in the link above.

Also, these packages are distributed over the PostgreSQL FTP repository,
too:

ftp://ftp.postgresql.org/pub/binary/v9.0alpha4

A mini howto about the 9.0 alpha release + RPMs is here:

http://yum.pgrpms.org/news-9.0alpha-packages-ready-for-testing.php

Please report any packaging related errors to me. If you find any
PostgreSQL 9.0 bugs, please post them to pgsql-b...@postgresql.org or
fill out this form: 

http://www.postgresql.org/support/submitbug

SRPMs will follow in a day or two.

Sincerely,
-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz


signature.asc
Description: This is a digitally signed message part


Re: [GENERAL] Libpq: copy file to bytea column

2010-03-08 Thread seiliki
> What I can't understand is why PQputCopyData() encodes incoming character 
> string, which was exported by "COPY ... TO '/tmp/t'" SQL command under psql 
> prompt, for bytea columns and refuse to accept or misinterpret zero value.

As far as PQputCopyData() is concerned, the server and/or client does not have
to decode the input character string for a bytea column, does it? My
understanding is that all the client or server needs to do to import data into
a bytea column is un-escape every \nnn (or \x000) escape sequence back to
binary bytes and unconditionally store the result in the target bytea column.

Encoding/decoding is required only by TEXT and (VAR)CHAR columns, isn't it?

Errors like the following ones should not occur to bytea columns, should they?

ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the 
encoding expected by the server, which is controlled by "client_encoding".
CONTEXT:  COPY tablein, line 1: "abcd\011\000ABCDEFGH"
STATEMENT:  COPY tablein FROM STDIN

Regards,

CN

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Libpq: copy file to bytea column

2010-03-08 Thread seiliki
> > The data types of tableout.c1 and tablein.c1 are both bytea. 
> > I first export tableout.c1 to a file:
> > 
> > db1=# COPY (SELECT c1 FROM tableout LIMIT 1) TO '/tmp/t';
> > 
> > Then I try to import the file to another table.
> > 
> > This works without flaw:
> > 
> > db1=# COPY tablein FROM '/tmp/t';
> > 
> > However, I get the following errors from log when calling 
> > libpq functions PQputCopyData() and PQputCopyEnd().
> > 
> > 2010-03-06 20:47:42 CST ERROR:  invalid byte sequence for 
> > encoding "UTF8": 0x00
> > 2010-03-06 20:47:42 CST HINT:  This error can also happen if 
> > the byte sequence does not match the encoding expected by the 
> > server, which is controlled by "client_encoding".
> > 2010-03-06 20:47:42 CST CONTEXT:  COPY in, line 1: 
> > "TPF0\011Report\000\003Tag\002\365\016ExportFromPage\002\000\014ExportToPage\002\000\006Values\016..."
> > 2010-03-06 20:47:42 CST STATEMENT:  COPY in FROM STDIN
> 
> The difference here is that your first examples are COPY TO/FROM
> files on the database server, while your failing C code uses
> COPY FROM STDIN.
> 
> You will not meet encoding problems with the first because
> the file /tmp/t is created in the server encoding, so if you read
> it later with a COPY FROM, it will automatically be in the encoding
> that COPY FROM  expects.
> 
> When you use COPY FROM STDIN, you send data from the client to the
> server, which is something quite different. Data sent from the client
> are expected to be in the client encoding, not in the server encoding.
> 
> You can use the SQL statement "SET client_encoding = ..." to set the
> client encoding to the encoding of your data, or you can use the
> PGCLIENTENCODING environment variable to set it automatically.
> 
> The error message you are getting indicates that you client_encoding
> is currently UTF8.
> By the way, the line quoted in your error message does not look like
> valid text at all - it contains "zero" bytes, and preceding each
> string there seems to be a byte indicating the length of the string
> that follows.

Changing the client encoding cannot make PQputCopyData() work, either. The
problem seems to be caused by the fact that PQputCopyData() does not accept a
zero byte ('\0' in C), which I think cannot be encoded to any character
regardless of which client encoding is used.

Yes, the data I am trying to import is binary data. That is why I use bytea to 
store it.

What I can't understand is why PQputCopyData() re-encodes an incoming
character string (one that was exported by a "COPY ... TO '/tmp/t'" SQL
command at the psql prompt) destined for a bytea column, and refuses to
accept, or misinterprets, a zero value.

Below is more or less self-contained code that also yields the same problem.

#include <stdio.h>
#include <libpq-fe.h>

#define BYTEA "abcd\\011\\000ABCDEFGHIJKLMNO"

int main()
{
  PGconn *PgConnect;
  PGresult *result;

  PgConnect = PQconnectdb("host=localhost port=5432 dbname=db1 "
                          "user=postgres password=passwd");
  if (PQstatus(PgConnect) != CONNECTION_OK)
    puts("Fail to connect.");
  else {
    result = PQexec(PgConnect, "COPY tablein FROM STDIN");
    if (PQresultStatus(result) == PGRES_COPY_IN) {
      if (PQputCopyData(PgConnect, BYTEA, 20) == 1) {
        if (PQputCopyEnd(PgConnect, NULL) == 1)
          puts("Done");
        else
          puts(PQerrorMessage(PgConnect));
      }
      else
        puts(PQerrorMessage(PgConnect));
    }
    PQfinish(PgConnect);
  }
  return 0;
}
--Errors from the code:---
ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the 
encoding expected by the server, which is controlled by "client_encoding".
CONTEXT:  COPY tablein, line 1: "abcd\011\000ABCDEFGH"
STATEMENT:  COPY tablein FROM STDIN


Another issue I cannot understand is that both PQputCopyData() and
PQputCopyEnd() always return 1 even though the copy operation actually fails.
As a result, the above code always shows "Done" even when an error occurs.
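For what it's worth, this is documented libpq behavior rather than a bug: a return of 1 from PQputCopyData()/PQputCopyEnd() only means the data was queued for sending. The server's verdict on the COPY arrives afterwards and has to be fetched with PQgetResult(). A sketch, reusing the variable names from the code above:

```c
/* After PQputCopyEnd() returns 1, fetch the COPY's real outcome;
   without this step, "Done" is printed before the server has answered. */
result = PQgetResult(PgConnect);
if (PQresultStatus(result) == PGRES_COMMAND_OK)
    puts("COPY really succeeded");
else
    printf("COPY failed: %s", PQerrorMessage(PgConnect));
PQclear(result);
```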

Regards,
CN

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] XML Index again

2010-03-08 Thread Chris Roffler
Yup, you are right; however, I am trying to run benchmarks with the two
solutions.

The XML solution will give us more flexibility in the future, just in case
we do not have attribute/value lists :)


On Mon, Mar 8, 2010 at 1:33 PM, Alban Hertroys <
dal...@solfertje.student.utwente.nl> wrote:

> On 8 Mar 2010, at 11:39, Chris Roffler wrote:
>
> > Alban
> >
> > Thanks for your help, your suggestion worked.
> >
> > I need another xpath expression to find any Attribute with  Name =""
> under the Attributes node. (not just if it is in a specific position)
> > see   query below.
> > How do I create an index for this xpath expression ?
>
> Come to think of it... You seem to be storing an attribute/value list in
> the database in XML. Is there any reason not to store the attributes and
> their values as a normal table? Your approach seems a bit over-complicated
> and you seem to have some trouble getting your own mind around it.
>
> Alban Hertroys
>
> --
> If you can't see the forest for the trees,
> cut the trees and you'll see there is no forest.
>
>
>
>
>


Re: [GENERAL] ERROR: invalid byte sequence for encoding "UTF8": 0x93 Error

2010-03-08 Thread Michał Pawlikowski
On Mon, Mar 8, 2010 at 9:17 AM, Albe Laurenz  wrote:
> Mary Y Wang wrote:
>> I got the following error and not sure how to fix it.
>> "psql:/tmp/030610dumpfile.txt:4369: ERROR:  invalid byte sequence for 
>> encoding "UTF8": 0x93

look here:
http://www.depesz.com/index.php/2010/03/07/error-invalid-byte-sequence-for-encoding/
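For reference, 0x93 is a curly opening quote in Windows-1252, so one common fix (assuming that really is the dump's source encoding; check where the file was produced) is to convert the dump before loading it:

```shell
# 0x93/0x94 are curly quotes in Windows-1252 (an assumption about the
# dump's real encoding). Demonstrate the conversion on a sample byte
# sequence, then apply the same command to the real dump.
printf 'say \223hello\224\n' > /tmp/win1252_sample.txt
iconv -f WINDOWS-1252 -t UTF-8 /tmp/win1252_sample.txt
# For the real file (paths/db name are placeholders):
#   iconv -f WINDOWS-1252 -t UTF-8 /tmp/030610dumpfile.txt | psql mydb
# Alternatively, tell the server the file's encoding:
#   SET client_encoding = 'WIN1252';
```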

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] XML Index again

2010-03-08 Thread Alban Hertroys
On 8 Mar 2010, at 13:23, Chris Roffler wrote:

> Alban
> 
> thanks for your response. I understand what you are saying .
> 
> >Your previous query wasn't about attributes in any specific position - it 
> >returned documents that contained >more than zero attributes matching a 
> >given name. What are you trying to do this time?
> 
> And that is exactly my problem, only if the first attribute in the Attributes 
> is a match the query returns successful.
> Please see my example below 

Ah yes it would, as you only index the first attribute. I guess you'll have to 
give indexing the list of attributes a try as I outlined in a previous message.

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] XML Index again

2010-03-08 Thread Chris Roffler
Alban

thanks for your response. I understand what you are saying .

>Your previous query wasn't about attributes in any specific position - it
returned documents that contained >more than zero attributes matching a
given name. What are you trying to do this time?

And that is exactly my problem: the query only succeeds if the first
attribute under the Attributes node is a match.
Please see my example below

Thanks
Chris


CREATE INDEX xml_index
  ON time_series
  USING btree
  ((
  (xpath('/AttributeList/Attributes/Attribute/Name/text()',
external_attributes))[1]::text));


SELECT id FROM time_series t1 WHERE EXISTS (
SELECT 1
  FROM time_series t2
 WHERE (xpath('/AttributeList/Attributes/Attribute/Name/text()',
external_attributes))[1]::text = 'x'
   AND t2.id = t1.id
 );

When I run the select it uses the index.

The first query is run with 'x' = 'Attribute102021', and this works like a
champ.
The second query is run with 'x' = 'Attribute202021' and returns null (this
is the second attribute).


 XML example




Attribute102021
1
2010-03-05T05:44:36.796-05:00
Chris


Attribute202021
2
2010-03-05T05:44:36.796-05:00
Maya




On Mon, Mar 8, 2010 at 1:27 PM, Alban Hertroys <
dal...@solfertje.student.utwente.nl> wrote:

> On 8 Mar 2010, at 11:39, Chris Roffler wrote:
>
> > Alban
> >
> > Thanks for your help, your suggestion worked.
> >
> > I need another xpath expression to find any Attribute with  Name =""
> under the Attributes node. (not just if it is in a specific position)
> > see   query below.
>
> Your previous query wasn't about attributes in any specific position - it
> returned documents that contained more than zero attributes matching a given
> name. What are you trying to do this time?
>
> > How do I create an index for this xpath expression ?
>
> You don't need to create another index (although one w/o the
> external_attributes column would probably be more convenient); the index you
> have contains those names already.
>
> Just make sure you use the same expression you used to create the index to
> match the part in your xml and compare it to the text you're looking for.
>
> If you want to use indexes on your xml, then you'll need to stop putting
> the variable parts of your queries inside your xpath expressions - you make
> them unindexable that way. So move those [Name='']'s out of your xpath
> expressions. Instead have the expressions result in the names so that you
> can compare them to the names stored in your index(es).
>
> It won't be as fast as looking for those names using xpath in an xml
> document, as every attribute name is a candidate for comparison now, but at
> least it's indexable.
>
> Alternatively you could try to build an index from the names contained in
> each xml document. Something like:
>
> CREATE INDEX xml_attribute_names
>ON time_series
>  USING btree (xpath('/AttributeList/Attributes/Attribute/Name/text()'));
>
> This stores the array of all attribute names in an index. You can query for
> existence of specific attributes using the ANY operator on the resulting
> array (see archives and docs for how to do that).
>
> I believe (I've never needed to use arrays) the syntax is:
>
> SELECT * FROM time_series WHERE '' = ANY
> (xpath('/AttributeList/Attributes/Attribute/text()'));
>
> It'll probably be faster than the previous solution, but use more disk
> space and memory.
>
> >  SELECT * FROM time_series
> > WHERE
> array_upper((xpath('/AttributeList/Attributes/Attribute[Name="x"]',
> external_attributes)),1) > 0
> >
> >
> > On Sun, Mar 7, 2010 at 1:11 PM, Alban Hertroys <
> dal...@solfertje.student.utwente.nl> wrote:
> ...
> > You seem to want to test for the existence of nodes with a specific name,
> maybe this is what you're looking for?:
> >
> > SELECT id FROM time_series t1 WHERE EXISTS (
> >SELECT 1
> >  FROM time_series t2
> > WHERE (xpath('/AttributeList/Attributes/Attribute/Name/text()',
> external_attributes))[1]::text = ('Attribute122021', external_attributes)
> >   AND t2.id = t1.id
> > );
>
> For clarity, if you would have an index on just that xpath expression -
> without the external_attributes column - this query would look simpler:
>
> SELECT id FROM time_series t1 WHERE EXISTS (
>   SELECT 1
> FROM time_series t2
> WHERE
> xpath('/AttributeList/Attributes/Attribute/Name/text()')[1]::text =
> 'Attribute122021'
>   AND t2.id = t1.id
> );
>
> Alban Hertroys
>
> --
> If you can't see the forest for the trees,
> cut the trees and you'll see there is no forest.
>
>
>
>
>


Re: [GENERAL] How to find out if row was modified by EXECUTE UPDATE ...

2010-03-08 Thread Pavel Stehule
Hello

2010/3/8 A B :
> Hello.
> Doesn't this work in plpgsql functions?
>
> EXECUTE 'UPDATE mytable set ... ...'
> IF FOUND THEN
>       do stuff
> END IF;
>

it doesn't work - you have to use GET DIAGNOSTICS statement

GET DIAGNOSTICS integer_var = ROW_COUNT;
http://www.postgresql.org/docs/8.2/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS
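A minimal sketch of the pattern inside a plpgsql function (the table name and SET list are placeholders):

```sql
CREATE OR REPLACE FUNCTION update_and_check() RETURNS boolean AS $$
DECLARE
    rows_affected integer;
BEGIN
    -- FOUND is not set by EXECUTE; use GET DIAGNOSTICS instead.
    EXECUTE 'UPDATE mytable SET val = val + 1 WHERE id = 1';
    GET DIAGNOSTICS rows_affected = ROW_COUNT;
    IF rows_affected > 0 THEN
        -- do stuff
        RETURN true;
    END IF;
    RETURN false;
END;
$$ LANGUAGE plpgsql;
```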

regards
Pavel Stehule


> It seems it always evaluate to false in the if statement, Isn't  found
> used to see if an update has modified rows?
>
> Are there any alternatives to selecting the row and see if the values
> are set to see if the update worked?
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to find out if row was modified by EXECUTE UPDATE ...

2010-03-08 Thread A B
Hello.
Doesn't this work in plpgsql functions?

EXECUTE 'UPDATE mytable set ... ...'
IF FOUND THEN
   do stuff
END IF;

It seems it always evaluates to false in the IF statement. Isn't FOUND
used to see if an update has modified rows?

Are there any alternatives to selecting the row and checking whether the
values were set to see if the update worked?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] XML Index again

2010-03-08 Thread Alban Hertroys
On 8 Mar 2010, at 11:39, Chris Roffler wrote:

> Alban
> 
> Thanks for your help, your suggestion worked.
> 
> I need another xpath expression to find any Attribute with  Name ="" 
> under the Attributes node. (not just if it is in a specific position)
> see   query below.
> How do I create an index for this xpath expression ?

Come to think of it... You seem to be storing an attribute/value list in the 
database in XML. Is there any reason not to store the attributes and their 
values as a normal table? Your approach seems a bit over-complicated and you 
seem to have some trouble getting your own mind around it.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] XML Index again

2010-03-08 Thread Alban Hertroys
On 8 Mar 2010, at 11:39, Chris Roffler wrote:

> Alban
> 
> Thanks for your help, your suggestion worked.
> 
> I need another xpath expression to find any Attribute with  Name ="" 
> under the Attributes node. (not just if it is in a specific position)
> see   query below.

Your previous query wasn't about attributes in any specific position - it 
returned documents that contained more than zero attributes matching a given 
name. What are you trying to do this time?

> How do I create an index for this xpath expression ?

You don't need to create another index (although one w/o the 
external_attributes column would probably be more convenient); the index you 
have contains those names already.

Just make sure you use the same expression you used to create the index to 
match the part in your xml and compare it to the text you're looking for.

If you want to use indexes on your xml, then you'll need to stop putting the 
variable parts of your queries inside your xpath expressions - you make them 
unindexable that way. So move those [Name='']'s out of your xpath 
expressions. Instead have the expressions result in the names so that you can 
compare them to the names stored in your index(es).

It won't be as fast as looking for those names using xpath in an xml document, 
as every attribute name is a candidate for comparison now, but at least it's 
indexable.

Alternatively you could try to build an index from the names contained in each 
xml document. Something like:

CREATE INDEX xml_attribute_names
ON time_series
 USING btree (xpath('/AttributeList/Attributes/Attribute/Name/text()',
external_attributes));

This stores the array of all attribute names in an index. You can query for 
existence of specific attributes using the ANY operator on the resulting array 
(see archives and docs for how to do that).

I believe (I've never needed to use arrays) the syntax is:

SELECT * FROM time_series WHERE '' = ANY 
(xpath('/AttributeList/Attributes/Attribute/text()', external_attributes));

It'll probably be faster than the previous solution, but use more disk space 
and memory.

>  SELECT * FROM time_series 
> WHERE 
> array_upper((xpath('/AttributeList/Attributes/Attribute[Name="x"]',   
> external_attributes)),1) > 0  
> 
> 
> On Sun, Mar 7, 2010 at 1:11 PM, Alban Hertroys 
>  wrote:
...
> You seem to want to test for the existence of nodes with a specific name, 
> maybe this is what you're looking for?:
> 
> SELECT id FROM time_series t1 WHERE EXISTS (
>SELECT 1
>  FROM time_series t2
> WHERE (xpath('/AttributeList/Attributes/Attribute/Name/text()', 
> external_attributes))[1]::text = ('Attribute122021', external_attributes)
>   AND t2.id = t1.id
> );

For clarity, if you would have an index on just that xpath expression - without 
the external_attributes column - this query would look simpler:

SELECT id FROM time_series t1 WHERE EXISTS (
   SELECT 1
     FROM time_series t2
    WHERE (xpath('/AttributeList/Attributes/Attribute/Name/text()',
external_attributes))[1]::text = 'Attribute122021'
      AND t2.id = t1.id
);

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] XML Index again

2010-03-08 Thread Chris Roffler
Alban

Thanks for your help, your suggestion worked.

I need another xpath expression to find any Attribute with  Name =""
under the Attributes node. (not just if it is in a specific position)
see   query below.
How do I create an index for this xpath expression ?

Thanks
Chris


 SELECT * FROM time_series
WHERE
array_upper((xpath('/AttributeList/Attributes/Attribute[Name="x"]',
external_attributes)),1) > 0


On Sun, Mar 7, 2010 at 1:11 PM, Alban Hertroys <
dal...@solfertje.student.utwente.nl> wrote:

> On 7 Mar 2010, at 11:02, Chris Roffler wrote:
>
> > I still have some problems with my xml index
> >
> > CREATE INDEX xml_index
> >   ON time_series
> >   USING btree
> >   ((
> >   (xpath('/AttributeList/Attributes/Attribute/Name/text()',
> external_attributes))[1]::text));
> >
> > When I run the following query the index is not used :
> >
> > select id from time_series where
> > array_upper(
> > (xpath('/AttributeList/Attributes/Attribute[Name="Attribute122021"]',
> external_attributes))
> > , 1) > 0
> >
> > Any Idea on how to configure the index ?
>
> There are a couple of cases where Postgres won't use your index, but in
> this case it's quite clearly because you're asking for (quite) a different
> expression than the one you indexed.
>
> You seem to want to test for the existence of nodes with a specific name,
> maybe this is what you're looking for?:
>
> SELECT id FROM time_series t1 WHERE EXISTS (
>SELECT 1
>  FROM time_series t2
> WHERE (xpath('/AttributeList/Attributes/Attribute/Name/text()',
> external_attributes))[1]::text = ('Attribute122021', external_attributes)
>   AND t2.id = t1.id
> );
>
> It's just a guess at what you're trying to do, so I may very well have
> gotten it wrong. The important part is that you need to use the expression
> you indexed in your where clause, or the database has no idea you mean
> something similar as to what you indexed.
>
> Alban Hertroys
>
> --
> If you can't see the forest for the trees,
> cut the trees and you'll see there is no forest.
>
>
>
>
>


Re: [GENERAL] Failed to run initdb: 128

2010-03-08 Thread Magnus Hagander
2010/3/8 Richard Huxton :
> On 05/03/10 20:05, Niranjan Maturi (nmaturi) wrote:
>>
>> Hi
>>
>> Thanks for the detailed explanation with the locales. I am trying to get
>> approvals to use a later version in 8.2. A colleague also suggested that
>> I install 8.3 version and uninstall it to clean up the machine. Then I
>> can install 8.2. I will try this as well.
>>
>> I am attaching the zipped version of the whole log file.
>
> OK - everything in there from [16:02:11:419] onwards (line 11630+) is just 
> uninstalling.
>
> The last few actions appear to be:
> Action 16:01:37: DoCreateUser. Creating user account...
> Action 16:01:38: SetPermissions. Setting filesystem permissions...
>
> This one appears to fail:
> Action 16:01:39: RunInitdb. Initializing database cluster (this may take a 
> minute or two)...
> MSI (s) (44:24) [16:01:39:449]: Executing op: 
> CustomActionSchedule(Action=RunInitdb,ActionType=3073,Source=BinaryData,target=runini...@4,CustomActionData=1033;e:\pgsql\;e:\pgsql\data\;e:\pgsql\share\;1502;;C;SQL_ASCII;postgres;postgres;GEAMTEST6V;cupmuser;V%JG:
> MSI (s) (44:24) [16:01:39:465]: Creating MSIHANDLE (3) of type 790536 for
> thread 3620
> MSI (s) (44:20) [16:01:39:465]: Invoking remote custom action. DLL: 
> C:\WINDOWS\Installer\MSI21.tmp, Entrypoint: runini...@4
> MSI (s) (44!C0) [16:01:39:668]: Creating MSIHANDLE (4) of type 790531 for 
> thread 448
> MSI (c) (C4:08) [16:01:39:668]: Note: 1: 2205 2:  3: Error
> MSI (c) (C4:08) [16:01:39:668]: Note: 1: 2228 2:  3: Error 4: SELECT 
> `Message` FROM `Error` WHERE `Error` = 2867
> DEBUG: Error 2867:  The error dialog property is not set
> The installer has encountered an unexpected error installing this package. 
> This may indicate a problem with this package. The error code is 2867. The 
> arguments are: , ,
> MSI (c) (C4:08) [16:01:39:668]: Font created.  Charset: Req=0, Ret=0, Font: 
> Req=MS Shell Dlg, Ret=MS Shell Dlg
> ...
> DEBUG: Error 2888:  Executing the TextStyle view failed
> The installer has encountered an unexpected error installing this package. 
> This may indicate a problem with this package. The error code is 2888. The 
> arguments are: TextStyle, ,
> Failed to run initdb: 128!
> Please see the logfile in 'e:\pgsql\tmp\initdb.log'.
> ...
>
>
> It might be that the file permissions aren't actually working. If the 
> "postgres" user still exists, can you check it has permission to write to 
> "e:\pgsql\data" and also to access "e:\pgsql" and "e:\" too (doesn't need 
> write access).

I wonder if it may have something to do with a semicolon in the
password. IIRC we fixed that by rejecting semicolons, but
I don't recall in which version we did that.

So please give it a try without having a semicolon in the password.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Failed to run initdb: 128

2010-03-08 Thread Richard Huxton

On 05/03/10 20:05, Niranjan Maturi (nmaturi) wrote:

Hi

Thanks for the detailed explanation with the locales. I am trying to get
approvals to use a later version in 8.2. A colleague also suggested that
I install 8.3 version and uninstall it to clean up the machine. Then I
can install 8.2. I will try this as well.

I am attaching the zipped version of the whole log file.


OK - everything in there from [16:02:11:419] onwards (line 11630+) is 
just uninstalling.


The last few actions appear to be:
Action 16:01:37: DoCreateUser. Creating user account...
Action 16:01:38: SetPermissions. Setting filesystem permissions...

This one appears to fail:
Action 16:01:39: RunInitdb. Initializing database cluster (this may take 
a minute or two)...
MSI (s) (44:24) [16:01:39:449]: Executing op: 
CustomActionSchedule(Action=RunInitdb,ActionType=3073,Source=BinaryData,target=runini...@4,CustomActionData=1033;e:\pgsql\;e:\pgsql\data\;e:\pgsql\share\;1502;;C;SQL_ASCII;postgres;postgres;GEAMTEST6V;cupmuser;V%JG:
MSI (s) (44:24) [16:01:39:465]: Creating MSIHANDLE (3) of type 790536
for thread 3620
MSI (s) (44:20) [16:01:39:465]: Invoking remote custom action. DLL: 
C:\WINDOWS\Installer\MSI21.tmp, Entrypoint: runini...@4
MSI (s) (44!C0) [16:01:39:668]: Creating MSIHANDLE (4) of type 790531 
for thread 448

MSI (c) (C4:08) [16:01:39:668]: Note: 1: 2205 2:  3: Error
MSI (c) (C4:08) [16:01:39:668]: Note: 1: 2228 2:  3: Error 4: SELECT 
`Message` FROM `Error` WHERE `Error` = 2867

DEBUG: Error 2867:  The error dialog property is not set
The installer has encountered an unexpected error installing this 
package. This may indicate a problem with this package. The error code 
is 2867. The arguments are: , ,
MSI (c) (C4:08) [16:01:39:668]: Font created.  Charset: Req=0, Ret=0, 
Font: Req=MS Shell Dlg, Ret=MS Shell Dlg

...
DEBUG: Error 2888:  Executing the TextStyle view failed
The installer has encountered an unexpected error installing this 
package. This may indicate a problem with this package. The error code 
is 2888. The arguments are: TextStyle, ,

Failed to run initdb: 128!
Please see the logfile in 'e:\pgsql\tmp\initdb.log'.
...


It might be that the file permissions aren't actually working. If the 
"postgres" user still exists, can you check it has permission to write 
to "e:\pgsql\data" and also to access "e:\pgsql" and "e:\" too (it 
doesn't need write access).


--
  Richard Huxton
  Archonet Ltd



Fwd: [GENERAL] postgresql 8.2 startup script

2010-03-08 Thread Alban Hertroys
We solved this off-list.

Begin forwarded message:

> From: Aleksandar Sosic 
> Date: 7 March 2010 23:39:27 GMT+01:00
> To: Alban Hertroys 
> Subject: Re: [GENERAL] postgresql 8.2 startup script
> 
> On Sun, Mar 7, 2010 at 11:31 PM, Alban Hertroys
>  wrote:
>> [...]
>> That seems to indicate it isn't in rc.conf.
>> Does it start or log anything if you use /usr/local/etc/rc.d/postgresql 
>> onestart ?
>> If so, there's something wrong with your rc.conf as that's the way to 
>> circumvent that file.
> 
> OK, you're right... my rc.conf contained an error. Now it's all fine
> and working...
> Sorry for bothering you...
> 
> Thanks for Your help!
> Sorry again...
> 
> Regards!
> -- 
> Aleksandar Sosic
> email: alex.sosic(at)gmailcom

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.






[GENERAL] 2 questions when using vs2005 to debug PG

2010-03-08 Thread chaoyong wang

Hi,

I'm using vs2005 to debug PG, and I have 2 questions to ask:

1. I want to add a function to contrib/xml2/xpath.c. I changed xpath.c,
pgxml.sql and pgxml.sql.in at the same time, then built and installed as
the document said. When I execute "psql test < pgxml.sql", all functions
are created successfully except the one I added:

   ERROR:  could not find function "xml_value_index" in file "C:\Program
   Files\PostgreSQL\8.3\lib/pgxml.dll"

I rebuilt the pgxml project and replaced
C:\Program Files\PostgreSQL\8.3\lib\pgxml.dll with the newly produced
.\Debug\pgxml\pgxml.dll, but the error remains the same.

2. When I execute a simple select query to see how it works, the concept
"Portal" really puzzled me; maybe it's too abstract. Could you please
explain it more specifically?

Thanks for reading, and looking forward to your reply.

Best Regards
Catcher Wang
   

Re: [GENERAL] Foreign key behavior different in a function and outside

2010-03-08 Thread Richard Huxton

On 05/03/10 18:12, Mridula Mahadevan wrote:

Richard, to answer your questions: I have a live application that is
running on PostgreSQL. We are seeing this issue on certain
installations and not on others, so the code is no different in each
setup. I also added the trigger to table B, and then the foreign key
error is thrown in all setups. But without the trigger, a delete on
table B from within a function assumes cascade delete even when one
is not specified - again, only in some cases. I can send you the entire
procedure if it helps (the one I have below only has the relevant
parts).


But does the test code you sent show this problem on:
1. all installations
2. some installations
3. none of the installations
4. Don't know - haven't tried the test code

If the test code shows the problem then we know it's something basic in 
your PostgreSQL installations. If it doesn't then it's something in the 
setup of the databases.


I don't think the problem has anything to do with the code of the 
function. You have checked that the code in your functions makes sense 
and looked at it on servers where it works and on servers where it 
doesn't. If the problem was there, I'm sure you'd have seen it.



--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] FSM and VM file

2010-03-08 Thread Richard Huxton

On 05/03/10 15:14, akp geek wrote:

Hi All -

 I have turned on autovacuum on the slave, and for some
reason the db is getting bloated. The master size is only 1GB and the
slave is at 9.2GB now. I did CLUSTER on a couple of tables also. Did anyone
run into this situation? Can you please help?


OK - so autovacuum is now running on the slave, but your database keeps 
growing anyway.


So - either:

1. You have a long-running transaction blocking the vacuum.
2. Your autovacuum isn't running often enough.

The first step with #1 is to check for idle transactions. Look in the 
system view: pg_stat_activity

  SELECT * FROM pg_stat_activity;
Is there anything there that started a long time ago (probably marked 
IDLE IN TRANSACTION)?
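
A query along these lines should surface the oldest open transactions (these are the 8.4 column names - procpid and current_query were renamed in later releases):

```sql
-- List sessions whose transaction has been open the longest.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;
```

Anything showing "<IDLE> in transaction" with an old xact_start is a prime suspect.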



For #2, I would start with seeing what tables are affected, then check 
your configuration settings. It might be that autovacuum_naptime is too 
large for your workload, or (auto)vacuum_cost_delay is too high.


http://www.postgresql.org/docs/8.4/static/routine-vacuuming.html#AUTOVACUUM

http://developer.postgresql.org/pgdocs/postgres/runtime-config-autovacuum.html#RUNTIME-CONFIG-AUTOVACUUM
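
For instance, these postgresql.conf settings are the usual knobs here (the values are illustrative, not recommendations - tune against your own workload):

```
autovacuum_naptime = 15s               # default 1min; how often workers wake up
autovacuum_vacuum_cost_delay = 10ms    # default 20ms in 8.4; lower = more aggressive
```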


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Avoiding duplicates (or at least marking them as such) in a "cumulative" transaction table.

2010-03-08 Thread Allan Kamau
On Mon, Mar 8, 2010 at 10:16 AM, Scott Marlowe  wrote:
> On Sun, Mar 7, 2010 at 11:31 PM, Allan Kamau  wrote:
>> On Mon, Mar 8, 2010 at 5:49 AM, Scott Marlowe  
>> wrote:
>>> On Sun, Mar 7, 2010 at 1:45 AM, Allan Kamau  wrote:
 Hi,
 I am looking for an efficient and effective solution to eliminate
 duplicates in a continuously updated "cumulative" transaction table
 (no deletions are envisioned as all non-redundant records are
 important). Below is my situation.
>>>
>>> Is there a reason you can't use a unique index and detect failed
>>> inserts and reject them?
>>>
>>
>> I think it would have been possible make use of a unique index as you
>> have suggested, and silently trap the uniqueness violation.
>>
>> But in my case (as pointed out in my previous lengthy mail) I am
>> inserting multiple records at once, which implicitly means a single
>> transaction. I think in this scenario a violation of uniqueness by
>> even a single record will lead to all the other records (in this
>> batch) being rejected as well.
>
> There are a whole bunch of approaches to this.  You're basically bulk
> loading data into a table that already has data in it.  I'd put
> everything into a temp table then insert into main where not exists in
> temp table.  And I'd still have that unique index on my pk in the main
> table.  First maintain uniqueness, then figure out how to load data
> into it.  insert into where not exists is quite fast, and it's all one
> big transaction that either goes or doesn't, so there's no cleaning up
> broken / extra records later.
>
> Are you looking at upserting these inputs?  Or is the original row good 
> enough?
>
>>
>> Is there perhaps a way to only single out the unique constraint
>> violating record(s) without having to perform individual record
>> inserts, I am following the example found here
>> "http://www.postgresql.org/docs/8.4/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING".
>>
>> Allan.
>>
>
>
>
> --
> When fascism comes to America, it will be intolerance sold as diversity.
>

The original row is good enough; the new one should not replace a similar
but already existing record. I will now evaluate the following for
performance:

1) INSERT INTO main_table(...) SELECT a... FROM mytmp_table a WHERE
a.supposedly_unique_field NOT IN (SELECT b.supposedly_unique_field
FROM main_table b);

2) INSERT INTO main_table(...) SELECT a... FROM mytmp_table a LEFT JOIN
main_table b ON b.supposedly_unique_field = a.supposedly_unique_field
WHERE b.id IS NULL;
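
Scott's staging-table suggestion above could also be sketched with NOT EXISTS, roughly like this (table and column names are the hypothetical ones from this thread):

```sql
BEGIN;

-- Stage the incoming batch; the temp table is dropped at commit.
CREATE TEMP TABLE batch (LIKE main_table) ON COMMIT DROP;
COPY batch FROM STDIN;

-- Insert only rows whose key is not already present in the main table.
INSERT INTO main_table
SELECT b.*
FROM batch b
WHERE NOT EXISTS (
    SELECT 1 FROM main_table m
    WHERE m.supposedly_unique_field = b.supposedly_unique_field
);

COMMIT;
```

Without a unique index, two such transactions running concurrently can still both insert the same key, so this does not remove the race on its own.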

But since more than one client may still be running such an insert query
against this table at any given time, a transaction that completes after
other transactions have already inserted data into the main table may
attempt to insert duplicate records (which would trigger an error if a
unique constraint were in effect on the "main" table). I am therefore
opting not to have a unique constraint on this field, but simply an
index. Then I make use of a boolean (initially NULL) field in the main
table which I update appropriately after all the data-updating clients
have run (a cleanup step).
The update to this "is_redundant" field could be via:

UPDATE main_table b SET is_redundant = TRUE
FROM (SELECT min(id) AS id___min, a.supposedly_unique_field
      FROM main_table a
      GROUP BY a.supposedly_unique_field
      HAVING count(*) > 1) a
WHERE b.supposedly_unique_field = a.supposedly_unique_field
  AND b.id > a.id___min;


Allan.



Re: [GENERAL] Transaction wraparound problem with database postgres

2010-03-08 Thread Scott Marlowe
On Sun, Mar 7, 2010 at 6:06 PM, Markus Wollny  wrote:
> Hi!
>
> After going several months without such incidents, we now got bit by the same 
> problem again. We have since upgraded the hardware we ran the database 
> cluster on and currently use version 8.3.7. The general outline of the 
> problem hasn't changed much though

I'm not entirely sure exactly what the problem was, and now
is again. Could you fill me in on that?



Re: [GENERAL] ERROR: invalid byte sequence for encoding "UTF8": 0x93 Error

2010-03-08 Thread Albe Laurenz
Mary Y Wang wrote:
> I got the following error and not sure how to fix it.
> "psql:/tmp/030610dumpfile.txt:4369: ERROR:  invalid byte sequence for 
> encoding "UTF8": 0x93
> HINT:  This error can also happen if the byte sequence does 
> not match the encoding expected by the server, which is 
> controlled by "client_encoding"."
> 
> Any ideas?
> I'm running on Postgres 8.3.8.

You must either change the client encoding to the encoding of the
data you are feeding the client, or you must change the data to
match the client encoding.

Client encoding can be changed with the SQL statement
"SET client_encoding = ..." or by setting the PGCLIENTENCODING
environment variable.
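
If, for example, the dump came from a Windows machine, 0x93 is the left "smart quote" in Windows-1252 - in that case (an assumption worth verifying against your actual data) this before loading the file should work:

```sql
SET client_encoding = 'WIN1252';
```

You can restore the previous setting afterwards with RESET client_encoding.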

Yours,
Laurenz Albe



Re: [GENERAL] Transaction wraparound problem with database postgres

2010-03-08 Thread Markus Wollny
Hi! 

> From: Scott Marlowe [mailto:scott.marl...@gmail.com] 

> Do your logs show any kind of error when vacuuming about 
> "only owner can vacuum" a table or anything?

I grepped through the logs from the last four days and, no, there were
none such errors whatsoever. Last vacuum analyze run returned the
following:

INFO:  free space map contains 1974573 pages in 9980 relations
DETAIL:  A total of 2043408 page slots are in use (including overhead).
2043408 page slots are required to track all free space.
Current limits are:  2100000 page slots, 10000 relations, using 13376 kB.

I have since increased these limits by 50% as we've come quite close to
what was configured. But as they hadn't been reached yet anyway, I
don't think we had any apparent problem with the running of
vacuum as such.
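
For reference, the limits in question are set in postgresql.conf (8.3-era parameters; both were removed in 8.4 when the free space map moved to disk). The values here are illustrative only:

```
max_fsm_pages = 3200000      # page slots the free space map can track
max_fsm_relations = 15000    # relations the free space map can track
```

Both require a server restart to take effect.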

Kind regards

   Markus


Computec Media AG
Sitz der Gesellschaft und Registergericht: Furth (HRB 8818)
Vorstandsmitglieder: Albrecht Hengstenberg (Vorsitzender) und Rainer Rosenbusch
Vorsitzender des Aufsichtsrates: Jurg Marquard 
Umsatzsteuer-Identifikationsnummer: DE 812 575 276





Re: [GENERAL] kernel version impact on PostgreSQL performance

2010-03-08 Thread Albe Laurenz
Greg Smith wrote:
> Cyril Scetbon wrote:
> > Does anyone know what can be the differences between linux kernels 
> > 2.6.29 and 2.6.30 that can cause this big difference (TPS x 7 !)
> > 
> http://www.phoronix.com/scan.php?page=article&item=linux_2624_2633&num=2
> 
> Discussed in detail at 
> http://archives.postgresql.org/message-id/4b512d0d.4030...@2nd
> quadrant.com
> 
> The short version is that ext3 combined with regular hard drives has 
> never been safe for database use by default, it was cheating on writes 
> in a way that left it possible for corruption to occur after a crash.  
> The change in ext4 that caused the performance drop is from the kernel 
> developers finally eliminating the source for that cheat.  The result 
> should be reliable operation by default, which unfortunately happens to 
> be much slower operation by default too.

Maybe that question is dumb, but why should a change in ext4 have an
impact on a figure that was generated with ext3? To quote the link:

"the PostgreSQL performance atop the EXT3 file-system has fallen off a cliff"

Yours,
Laurenz Albe



Re: [GENERAL] Libpq: copy file to bytea column

2010-03-08 Thread Albe Laurenz
seiliki wrote:
> The data types of tableout.c1 and tablein.c1 are both bytea. 
> I first export tableout.c1 to a file:
> 
> db1=# COPY (SELECT c1 FROM tableout LIMIT 1) TO '/tmp/t';
> 
> Then I try to import the file to another table.
> 
> This works without flaw:
> 
> db1=# COPY tablein FROM '/tmp/t';
> 
> However, I get the following errors from log when calling 
> libpq functions PQputCopyData() and PQputCopyEnd().
> 
> 2010-03-06 20:47:42 CST ERROR:  invalid byte sequence for 
> encoding "UTF8": 0x00
> 2010-03-06 20:47:42 CST HINT:  This error can also happen if 
> the byte sequence does not match the encoding expected by the 
> server, which is controlled by "client_encoding".
> 2010-03-06 20:47:42 CST CONTEXT:  COPY in, line 1: 
> "TPF0\011Report\000\003Tag\002\365\016ExportFromPage\002\000\014ExportToPage\002\000\006Values\016..."
> 2010-03-06 20:47:42 CST STATEMENT:  COPY in FROM STDIN
> 
> 
> Helps will be much appreciated!

The difference here is that your first examples are COPY TO/FROM
files on the database server, while your failing C code uses
COPY FROM STDIN.

You will not meet encoding problems with the first because
the file /tmp/t is created in the server encoding, so if you read
it later with a COPY FROM, it will automatically be in the encoding
that COPY FROM  expects.

When you use COPY FROM STDIN, you send data from the client to the
server, which is something quite different. Data sent from the client
are expected to be in the client encoding, not in the server encoding.

You can use the SQL statement "SET client_encoding = ..." to set the
client encoding to the encoding of your data, or you can use the
PGCLIENTENCODING environment variable to set it automatically.

The error message you are getting indicates that your client_encoding
is currently UTF8.
By the way, the line quoted in your error message does not look like
valid text at all - it contains "zero" bytes, and preceding each
string there seems to be a byte indicating the length of the string
that follows.

Yours,
Laurenz Albe



Re: [GENERAL] [NOVICE] How to read oracle table's data from postgre

2010-03-08 Thread Scott Marlowe
2010/3/7 Devrim GÜNDÜZ :
> On Mon, 2010-03-08 at 10:40 +0530, venkatra...@tcs.com wrote:
>>
>> I want to read oracle database table data from postgre. (like in
>> oracle we  can use database links, is there something like this
>> available in postgre).
>
> (It is not postgre, it is PostgreSQL or Postgres.)
>
> You can try dbi-link:
>
> http://pgfoundry.org/projects/dbi-link/

Anyone know if this is still active / developed?

http://wiki.postgresql.org/wiki/SqlMedConnectionManager



Re: [GENERAL] obsessive-compulsive vacuum behavior

2010-03-08 Thread Scott Marlowe
On Mon, Mar 8, 2010 at 12:26 AM, Scott Marlowe  wrote:
> On Sun, Mar 7, 2010 at 11:58 PM, Ben Chobot  wrote:
>> I've got an 8.4.2 database where it appears that vacuum keeps redoing the 
>> same table and indexes, never thinking it's finished:
>>
>> auditor=# VACUUM analyze VERBOSE repair_queue ;
>> INFO:  vacuuming "public.repair_queue"
>> INFO:  scanned index "repair_queue_pkey" to remove 2795932 row versions
>> DETAIL:  CPU 14.98s/15.29u sec elapsed 312.50 sec.
>> INFO:  scanned index "repair_queue_auditor" to remove 2795932 row versions
>> DETAIL:  CPU 0.74s/0.50u sec elapsed 10.49 sec.
>> INFO:  scanned index "repair_queue_sort" to remove 2795932 row versions
>> DETAIL:  CPU 2.99s/1.58u sec elapsed 45.14 sec.
>> INFO:  scanned index "repair_queue_sort3" to remove 2795932 row versions
>> DETAIL:  CPU 0.89s/0.48u sec elapsed 10.99 sec.
>> INFO:  "repair_queue": removed 2795932 row versions in 43199 pages
>> DETAIL:  CPU 1.04s/0.39u sec elapsed 17.93 sec.
>> INFO:  scanned index "repair_queue_pkey" to remove 2795938 row versions
>> DETAIL:  CPU 14.71s/15.06u sec elapsed 362.37 sec.
>> INFO:  scanned index "repair_queue_auditor" to remove 2795938 row versions
>> DETAIL:  CPU 0.62s/0.45u sec elapsed 14.36 sec.
>> INFO:  scanned index "repair_queue_sort" to remove 2795938 row versions
>> DETAIL:  CPU 2.97s/1.65u sec elapsed 56.94 sec.
>> INFO:  scanned index "repair_queue_sort3" to remove 2795938 row versions
>> DETAIL:  CPU 0.82s/0.44u sec elapsed 10.54 sec.
>> INFO:  "repair_queue": removed 2795938 row versions in 41055 pages
>> DETAIL:  CPU 0.75s/0.34u sec elapsed 7.59 sec.
>> INFO:  scanned index "repair_queue_pkey" to remove 2795959 row versions
>> DETAIL:  CPU 14.20s/14.56u sec elapsed 539.17 sec.
>> INFO:  scanned index "repair_queue_auditor" to remove 2795959 row versions
>> DETAIL:  CPU 0.75s/0.48u sec elapsed 13.76 sec.
>> INFO:  scanned index "repair_queue_sort" to remove 2795959 row versions
>> DETAIL:  CPU 3.07s/1.65u sec elapsed 44.29 sec.
>> INFO:  scanned index "repair_queue_sort3" to remove 2795959 row versions
>> DETAIL:  CPU 0.78s/0.44u sec elapsed 12.52 sec.
>> INFO:  "repair_queue": removed 2795959 row versions in 41004 pages
>> DETAIL:  CPU 0.88s/0.42u sec elapsed 12.49 sec.
>>
>>
>> ...and so on. It's been running for an hour or so now, when it appears it 
>> shouldn't take 10 minutes. This seems pretty weird to me. Has anybody 
>> else seen this behavior? I'm not even sure what details I could report which 
>> would help figure out what's going on.
>
> Those are all different relations, and it's reclaiming a good number
> of rows and pages.   41004 pages is ~320 Megs.  Even if the rows are
> small it's gonna be 100 Megs or so per index reclaimed.  Seems like
> vacuum is doing its job.  Is it running often enough to prevent
> bloating?

OK, they're not ALL different relations, but only one seems to repeat
much and that's the _sort one.



Re: [GENERAL] obsessive-compulsive vacuum behavior

2010-03-08 Thread Scott Marlowe
On Sun, Mar 7, 2010 at 11:58 PM, Ben Chobot  wrote:
> I've got an 8.4.2 database where it appears that vacuum keeps redoing the 
> same table and indexes, never thinking it's finished:
>
> auditor=# VACUUM analyze VERBOSE repair_queue ;
> INFO:  vacuuming "public.repair_queue"
> INFO:  scanned index "repair_queue_pkey" to remove 2795932 row versions
> DETAIL:  CPU 14.98s/15.29u sec elapsed 312.50 sec.
> INFO:  scanned index "repair_queue_auditor" to remove 2795932 row versions
> DETAIL:  CPU 0.74s/0.50u sec elapsed 10.49 sec.
> INFO:  scanned index "repair_queue_sort" to remove 2795932 row versions
> DETAIL:  CPU 2.99s/1.58u sec elapsed 45.14 sec.
> INFO:  scanned index "repair_queue_sort3" to remove 2795932 row versions
> DETAIL:  CPU 0.89s/0.48u sec elapsed 10.99 sec.
> INFO:  "repair_queue": removed 2795932 row versions in 43199 pages
> DETAIL:  CPU 1.04s/0.39u sec elapsed 17.93 sec.
> INFO:  scanned index "repair_queue_pkey" to remove 2795938 row versions
> DETAIL:  CPU 14.71s/15.06u sec elapsed 362.37 sec.
> INFO:  scanned index "repair_queue_auditor" to remove 2795938 row versions
> DETAIL:  CPU 0.62s/0.45u sec elapsed 14.36 sec.
> INFO:  scanned index "repair_queue_sort" to remove 2795938 row versions
> DETAIL:  CPU 2.97s/1.65u sec elapsed 56.94 sec.
> INFO:  scanned index "repair_queue_sort3" to remove 2795938 row versions
> DETAIL:  CPU 0.82s/0.44u sec elapsed 10.54 sec.
> INFO:  "repair_queue": removed 2795938 row versions in 41055 pages
> DETAIL:  CPU 0.75s/0.34u sec elapsed 7.59 sec.
> INFO:  scanned index "repair_queue_pkey" to remove 2795959 row versions
> DETAIL:  CPU 14.20s/14.56u sec elapsed 539.17 sec.
> INFO:  scanned index "repair_queue_auditor" to remove 2795959 row versions
> DETAIL:  CPU 0.75s/0.48u sec elapsed 13.76 sec.
> INFO:  scanned index "repair_queue_sort" to remove 2795959 row versions
> DETAIL:  CPU 3.07s/1.65u sec elapsed 44.29 sec.
> INFO:  scanned index "repair_queue_sort3" to remove 2795959 row versions
> DETAIL:  CPU 0.78s/0.44u sec elapsed 12.52 sec.
> INFO:  "repair_queue": removed 2795959 row versions in 41004 pages
> DETAIL:  CPU 0.88s/0.42u sec elapsed 12.49 sec.
>
>
> ...and so on. It's been running for an hour or so now, when it appears it 
> shouldn't take 10 minutes. This seems pretty weird to me. Has anybody else 
> seen this behavior? I'm not even sure what details I could report which would 
> help figure out what's going on.

Those are all different relations, and it's reclaiming a good number
of rows and pages.   41004 pages is ~320 Megs.  Even if the rows are
small it's gonna be 100 Megs or so per index reclaimed.  Seems like
vacuum is doing its job.  Is it running often enough to prevent
bloating?
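
One way to watch for bloat is to snapshot the size of the table and its indexes over time (8.4 functions; the relation-name pattern is assumed from this thread):

```sql
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relname LIKE 'repair_queue%'
ORDER BY pg_relation_size(oid) DESC;
```

If the sizes keep growing while the live row count stays flat, vacuum isn't keeping up.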
