Re: [GENERAL] postgres.app OS X psql character encoding (utf-8) problem

2014-01-09 Thread Chris Hiestand

On Jan 9, 2014, at 8:56 PM, Tom Lane  wrote:

> Uh-huh.  You need to complain to whoever packages Postgres93.app, then.

This is done: https://github.com/PostgresApp/PostgresApp/issues/165

Thanks Tom, I appreciate your time.

-Chris


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres.app OS X psql character encoding (utf-8) problem

2014-01-09 Thread Tom Lane
Chris Hiestand  writes:
> postgres93.app is supplied with its own version of libedit.

> user@hostname:~ $ otool -L `which psql`
> /Applications/Postgres93.app/Contents/MacOS/bin/psql:
>   /Applications/Postgres93.app/Contents/MacOS/lib/libpq.5.dylib 
> (compatibility version 5.0.0, current version 5.6.0)
>   /Applications/Postgres93.app/Contents/MacOS/lib/libssl.1.0.0.dylib 
> (compatibility version 1.0.0, current version 1.0.0)
>   /Applications/Postgres93.app/Contents/MacOS/lib/libedit.0.dylib 
> (compatibility version 1.0.0, current version 1.45.0)
>   /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
> version 169.3.0)

Uh-huh.  You need to complain to whoever packages Postgres93.app, then.

The impression I have about libedit is that it's necessary to grab a
snapshot of upstream on a good day ;-)

regards, tom lane




Re: [GENERAL] SQL State XX000 : XML namespace issue

2014-01-09 Thread David Johnston
Panneerselvam Posangu wrote
> Hi,
> When we run a SQL statement in Postgres 9.2 we get an error.
> Error : Could not register XML namespace with name "" and URI "" SQL State
> XX000
> In the SQL statement we use the xpath function. Any reason why this is
> happening?
> Thanks, Panneer

No.  Try providing more detail.  Especially the actual statement you are
running.  And, better, try to supply a minimal and self-contained query that
exhibits the same behavior.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/SQL-State-XX000-XML-namespace-issue-tp5786103p5786222.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Looking for settings/configuration for FASTEST reindex on idle system.

2014-01-09 Thread Sergey Konoplev
On Thu, Jan 9, 2014 at 2:03 PM, Jeff Amiel  wrote:
> I have a maintenance window coming up and am using pg_upgrade to upgrade from 
> 9.2.X to 9.3.X.
> As part of the window, I’d like to ‘cluster’ each table by its primary key.  
> After doing so, I see amazing performance improvements (probably mostly 
> because of index bloat - but possibly due to table fragmentation)

[...]

> fsync (set to off)
> setting wal_level to minimal (to avoid wal logging of cluster activity)
> bumping up maintenance_work_mem (but I’ve also seen/read that uber-high 
> values cause disk-based sorts which ultimately slow things down)
> Tweaking checkpoint settings (although with wal_level set to minimal - I 
> don’t think it comes into play)
>
> any good suggestions for lighting a fire under this process?

Another idea is to drop all indexes except ones you are going to
cluster tables by, cluster the tables and then restore the dropped
indexes. Use xargs with -P to perform things in parallel, for example

cat indexdefs.sql | xargs -d '\n' -I {} -P 5 psql -e dbname -c '{}'

will create the indexes from indexdefs.sql (assuming one index
definition per line), 5 at a time in parallel. You can also cluster
your tables in a similar way, several tables in parallel.
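For concreteness, a rough sketch of that drop-then-recreate workflow. The database name, schema, and the primary-key filter are assumptions for illustration; review whatever it generates before running anything:

```shell
# 1. Save the definitions of all non-primary-key indexes, one per line.
#    (Rough filter: by default a PK's index shares the constraint's name.)
psql -At dbname -c \
  "SELECT indexdef || ';' FROM pg_indexes
   WHERE schemaname = 'public'
     AND indexname NOT IN (SELECT conname FROM pg_constraint
                           WHERE contype = 'p')" > indexdefs.sql

# 2. Drop those indexes, run the CLUSTER pass, then...

# 3. ...rebuild 5 indexes at a time in parallel.
xargs -d '\n' -I {} -P 5 psql -e dbname -c '{}' < indexdefs.sql
```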

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [GENERAL] wal archive peak during pg_dump

2014-01-09 Thread Michael Paquier
On Thu, Jan 9, 2014 at 7:42 PM, Willy-Bas Loos  wrote:
> Hi,
>
> I've set up hot standby slaves for a couple of clusters.
> The WAL is cleaned up after use; I don't use it as a backup (yet).
> It seems that the amount of WAL peaks shortly after midnight, when pg_dump
> is running.
>
> It doesn't seem logical to me that pg_dump should generate WAL, but I
> haven't been able to find a different explanation so far.
> So to make sure, I want to ask you people: can it be that running pg_dump
> creates a lot of WAL?
pg_dump does not modify any data, so by itself it basically does not
generate any WAL. But HOT pruning could, and with data checksums enabled
even a plain SELECT can change some tuple hint bits, which indeed
generates WAL. This reminds me of this thread:
http://www.postgresql.org/message-id/e1t9g6s-0007c4...@wrigleys.postgresql.org
It would be interesting to look at the generated WAL with xlogdump.
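Before reaching for xlogdump, one low-tech way to confirm the sheer volume is to note the WAL position around the dump window. The function names below are the 9.2/9.3 ones (renamed to *_lsn in later releases), and the location literal is hypothetical:

```sql
-- Note the current WAL position before starting pg_dump:
SELECT pg_current_xlog_location();   -- suppose it returns '2/AB123456'

-- ... run pg_dump ...

-- Afterwards, compute how many WAL bytes were written in between:
SELECT pg_xlog_location_diff(pg_current_xlog_location(), '2/AB123456');
```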

Regards,
-- 
Michael




Re: [GENERAL] Looking for settings/configuration for FASTEST reindex on idle system.

2014-01-09 Thread Jeff Amiel




On Thursday, January 9, 2014 4:03 PM, Jeff Amiel  
wrote:
I am familiar with pg_reorg and its sibling pg_repack - but they call the base 
postgresql reindex functions underneath - and I have learned by using ‘verbose’ 
that the actual clustering of the table is quick - it’s the reindexing that is 
slow (it does each reindex sequentially instead of concurrently).


I'm second-guessing this - I'm not sure I REALLY know how long the table 
clustering takes versus the reindexing.





Re: [GENERAL] pg_restore - selective restore use cases. HINT use DROP CASCADE

2014-01-09 Thread Adrian Klaver

On 01/09/2014 01:51 PM, Day, David wrote:

Adrian,

Thank you for your response.

I would note that the original dump archive created by pg_dump included all 
schemas and that I only intend
to restore a schema from it that is self contained, or a group of related 
tables from it.


I just tried that here and succeeded. I did a pg_dump and then restored 
only the public schema, which in this database is self contained. I did 
get the HINT because I used the -c switch and it tried to drop the 
public schema while there were existing objects dependent on it. The 
restore threw the HINT and a subsequent ERROR over trying to CREATE 
SCHEMA public where it already existed, but it completed the restore.




I acknowledge the dangers inherent in selective restoration; it just seems
that a couple of additional options (disable constraints, drop cascade) to
pg_restore would improve this utility for users who have put some thought
into laying out the database design and the failure cases from which they
would like to recover.

To have pg_restore's selective restoration options (-n, -t) fail simply
because there are foreign keys amongst the tables within that schema seems
like too much protection, or protection that I would at least like to have
the option to override.


We will probably need to see more detail on why that failed in your case 
because I did not see that in mine. Another way to influence the outcome 
is to use the -l and -L options to pg_restore. -l returns the -Fc dump 
file table of contents(TOC) as a list. You can redirect that to a file 
and in that file comment out(using ;) items and rearrange the order of 
the TOC to suit your needs. Then you use pg_restore with the -L  option 
to feed it the edited TOC.
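For illustration, that -l/-L workflow might look like this (the archive and database names are assumed):

```shell
# Dump the archive's table of contents to an editable list.
pg_restore -l archive_file > toc.list

# Edit toc.list: comment out unwanted items with a leading ';'
# and reorder lines as needed.

# Restore only (and in the order of) the remaining TOC entries.
pg_restore -L toc.list -d my_db archive_file
```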


http://www.postgresql.org/docs/9.3/interactive/app-pgrestore.html



It may well be that I could shoot myself in the foot, but I'd still like to own 
the firearm :+)


Regards


Dave Day





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] returning json object with subset of keys

2014-01-09 Thread Merlin Moncure
On Thu, Jan 9, 2014 at 1:42 AM, Raphael Bauduin  wrote:
> On Wed, Jan 8, 2014 at 4:05 PM, Merlin Moncure  wrote:
>>
>> On Wed, Jan 8, 2014 at 6:37 AM, Raphael Bauduin  wrote:
>> > Hi
>> >
>> > I'm using the json functionalities of postgresql 9.3.
>> > I have a query calling json_populate_recordset like this:
>> >   json_populate_recordset(null::product, event->'products')
>> > but it returns an error:
>> > ERROR:  cannot call json_populate_recordset on a nested object
>> >
>> > There is indeed one key in event->'products' giving access to an array
>> > of
>> > objects.
>> >
>> > Is there a way to specify which keys to keep from the object? I haven't
>> > found it in the docs.
>> >
>> > Here is pseudo code of what I'd like to do:
>> >   json_populate_recordset(null::product, event->'products' WITH ONLY
>> > KEYS
>> > {'f1','f2'})
>>
>> unfortunately, not without manipulating the json.  this is basically a
>> somewhat crippling limitation of the json_populate functions -- they
>> can't handle anything but flat tuples.  so you have to do something
>> highly circuitous.
>>
>> problem (one record):
>> postgres=# create table foo(a text, b text);
>> postgres=# select json_populate_record(null::foo, '{"a": "abc", "b":
>> "def", "c": [1,2,3]}'::json);
>> ERROR:  cannot call json_populate_record on a nested object
>>
>> nasty solution:
>> postgres=# with data as (select '{"a": "abc", "b": "def", "c":
>> [1,2,3]}'::json as j)
>> select json_populate_record(null::foo, row_to_json(q)) from
>> (
>>   select j->'a' as a, j->'b' as b from data
>> ) q;
>>  json_populate_record
>> --
>>  (abc,def)
>>
>> with some extra manipulations you can do a record set. basically, you
>> need to get the json 'right' first (or that can be done on the
>> client).
>
> ok, thanks for your reply.
> Is this considered to be added in the future to the json functions
> available? I could use it frequently I think.

At some point the 'populate' functions will be amended so that you
will be able to deserialize to a complex row object.  For 9.4, if jsonb
makes the cut (which is dicey), that may provide another route -- I'd
have to double check that though.
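To spell out the "extra manipulations" for the record-set case, here is a hedged sketch using json_array_elements() (present in 9.3) to unnest the array, project only the wanted keys, and re-serialize. The sample array and the foo(a text, b text) table are made up:

```sql
-- Assumes: CREATE TABLE foo(a text, b text);
WITH data AS (
  SELECT '[{"a":"abc","b":"def","c":[1,2,3]},
           {"a":"ghi","b":"jkl","c":[4,5]}]'::json AS j
)
SELECT json_populate_record(null::foo, row_to_json(q))
FROM (
  -- Keep only the flat keys, dropping the nested "c" array.
  SELECT e->'a' AS a, e->'b' AS b
  FROM data, json_array_elements(j) AS e
) q;
```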

merlin




[GENERAL] Looking for settings/configuration for FASTEST reindex on idle system.

2014-01-09 Thread Jeff Amiel
I have a maintenance window coming up and am using pg_upgrade to upgrade from 
9.2.X to 9.3.X.
As part of the window, I’d like to ‘cluster’ each table by its primary key.  
After doing so, I see amazing performance improvements (probably mostly because 
of index bloat - but possibly due to table fragmentation)

That being said, I have a single table that is blowing my window - 
at 140 million rows (28 gig in size with 75 gig worth of indexes), this bad boy 
is my white whale. There are 10 indexes (not including the primary key).  Yes - 
10 is a lot - but I’ve been monitoring their use (most are single column or 
partial indexes) and all are used.

That being said, I’ve been reading and experimenting in trying to get a cluster 
of this table (which re-indexes all 10/11 indexes) to complete in a reasonable 
amount of time.

There are lots of settings and ranges to chose from and while my experiments 
continue, I was looking to get some input.  Lowest I have gotten for clustering 
this table is just under 6 hours.  

I am familiar with pg_reorg and its sibling pg_repack - but they call the base 
postgresql reindex functions underneath - and I have learned by using ‘verbose’ 
that the actual clustering of the table is quick - it’s the reindexing that is 
slow (it does each reindex sequentially instead of concurrently).

PostgreSQL 9.3.2 on x86_64-pc-solaris2.11, compiled by gcc (GCC) 4.5.2, 64-bit
500 gig of ram
2.7GHz processors (48 cores)
Shared buffers set to 120gig
maintenance_work_mem set to 1gig
work_mem set to 500 meg

Things I have read/seen/been told to tweak…

fsync (set to off)
setting wal_level to minimal (to avoid wal logging of cluster activity)
bumping up maintenance_work_mem (but I’ve also seen/read that uber-high values 
cause disk-based sorts which ultimately slow things down)
Tweaking checkpoint settings (although with wal_level set to minimal - I don’t 
think it comes into play)

any good suggestions for lighting a fire under this process?

If worst comes to worst, I can vacuum full the table and reindex each index 
concurrently -   but it won’t give me the benefit of having the tuples ordered 
by their oft-grouped primary key.
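For what it's worth, one possible sketch for the concurrent-rebuild half of that plan on 9.3 (which has no REINDEX CONCURRENTLY): generate CREATE INDEX CONCURRENTLY statements from pg_indexes under new names, run them, then swap. The '_new' suffix, schema, and table name are assumptions, and the textual replace is fragile if an index's name reappears inside its own definition, so treat this as a starting point only:

```sql
-- Emit one CREATE INDEX CONCURRENTLY ... statement per existing index.
SELECT replace(indexdef,
               ' INDEX ' || indexname,
               ' INDEX CONCURRENTLY ' || indexname || '_new') || ';'
FROM pg_indexes
WHERE schemaname = 'public' AND tablename = 'mytable';

-- Run the generated statements, then for each index:
--   DROP INDEX <old_name>;
--   ALTER INDEX <old_name>_new RENAME TO <old_name>;
```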




Re: [GENERAL] pg_restore - selective restore use cases. HINT use DROP CASCADE

2014-01-09 Thread Day, David
Adrian,

Thank you for your response.

I would note that the original dump archive created by pg_dump included all 
schemas and that I only intend
to restore a schema from it that is self contained, or a group of related 
tables from it.

I acknowledge the dangers inherent in selective restoration; it just seems
that a couple of additional options (disable constraints, drop cascade) to
pg_restore would improve this utility for users who have put some thought
into laying out the database design and the failure cases from which they
would like to recover.

To have pg_restore's selective restoration options (-n, -t) fail simply
because there are foreign keys amongst the tables within that schema seems
like too much protection, or protection that I would at least like to have
the option to override.

It may well be that I could shoot myself in the foot, but I'd still like to own 
the firearm :+)


Regards


Dave Day

 

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Thursday, January 09, 2014 3:47 PM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore - selective restore use cases. HINT use DROP 
CASCADE

On 01/09/2014 08:51 AM, Day, David wrote:
> I have needs to do selective schema or table restorations and the 
> pg_restore
>
> utility seems to have hooks for this, yet seems deficient for this 
> type of problem.
>
> It appears that I have to develop a custom script to do what I think 
> would
>
> be a standard kind of restoration.
>
> ---
>
> Given:  Postgres 9.3 and I have dumped my database via:
>
> pg_dump -Fc -U   my_db -f archive_file
>
> I have no problem doing a total restoration
>
> e.g.
>
> pg_restore -c -U  -d my_db  archive_file
>
> Assuming I have not had a catastrophic error but merely wish to 
> address schemas
>
> or tables used for decision making that have been altered by users 
> into sub-optimal
>
> condition, then  if I attempt to do selective restorations of a schema 
> or tables in the database:
>
> ( e.g.   pg_restore -c  -n   -U   -d my_db
> archive_file )
>
> I encounter restoration problems over dependencies with suggestions/Hints:
>
> " HINT:  Use DROP ... CASCADE to drop the dependent objects too."

Well for what it is worth, that is documented.

http://www.postgresql.org/docs/9.3/interactive/app-pgdump.html

"Note: When -n is specified, pg_dump makes no attempt to dump any other 
database objects that the selected schema(s) might depend upon. 
Therefore, there is no guarantee that the results of a specific-schema dump can 
be successfully restored by themselves into a clean database."

>
> So it appears that I now have to develop a custom script to do what I
> think would
>
> be a standard kind of restoration.

Well I think that is the issue: it is not really standard, or at 
least not easy.  The possible combinations of schemas, dependencies across 
schemas, and ownership across objects make this complicated to do on a 
selective basis. Furthermore, when you use -n you are asking for 
a specific schema, so it sort of breaks the contract to drag in objects 
from other schemas.

>
> I would think that there ought to be some additional options to
> pg_restore.
>
> i.e  an  option  that turns a DROP into a DROP with CASCADE and or
> DISABLES constraint checking while the schema/table is being restored.

So you want to start dropping objects outside the schema you are 
restoring, seems like a foot gun to me.

>
> In addition I would think that with "-a", the data-only option, there ought
> to be an assistive option that allows for the table to be
> truncated/cleaned so that

Probably because it is relatively easy to roll your own solution to this.

>
> the generated COPY commands do not append to the table resulting in
> PRIMARY KEY violations.
>
> In any event I have not found a straight forward way of using pg_restore
> to do selective restorations or have found some ways of doing certain
> tables but had to remove Foreign Keys to make it work which seems like a
> poor bargain. I think I know how to customize the output to do the task,
> it just seems that pg_restore should be
>
> able to do this without my additional efforts.

In the foreseeable future that is how you will need to handle it. Where 
the choice is either to roll your own script or to use one of the existing 
migration solutions, e.g. Alembic, Sqitch, etc.


>
> I am hopeful that there might be some instructive thoughts on selective
> restorations that have not occurred to me.
>
> ( options that I have played with:  -n -t --section=data -a -c
> --disable-triggers -1 )
>
> Regards
>
> Dave Day
>


-- 
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] postgres.app OS X psql character encoding (utf-8) problem

2014-01-09 Thread Chris Hiestand

On Jan 9, 2014, at 1:09 PM, Tom Lane  wrote:

> What this sounds like is that the readline or libedit library doesn't
> understand multibyte characters properly.  psql itself doesn't have
> anything to do with the display of un-entered lines, but relies on
> one of those libraries to manage input editing.
> 
> The default situation on OS X is generally that psql gets linked against
> the Apple-supplied libedit, which goes so far as to masquerade as
> readline.  Check "otool -L /path/to/psql"; if you see a reference to
> /usr/lib/libedit.3.dylib, or to /usr/lib/libreadline.dylib
> (which is really just a symlink to the former), then that's what you've
> got.  While I'm generally a fan and user of Apple stuff, their version
> of libedit is just abysmal; we've seen random crashes, complete failure
> of tab completion, and other bugs in successive OS X releases.  I also
> find specific references to multibyte input being busted in other
> distros' versions of libedit, eg
> http://www.postgresql.org/message-id/4d5b2c5a.8090...@catalyst.net.nz
> so it may not be all Apple's fault; but they definitely have a track
> record of shipping broken versions of libedit.

Thanks Tom Lane. It's useful to know that one of these handles unentered text.

postgres93.app is supplied with its own version of libedit.

user@hostname:~ $ otool -L `which psql`
/Applications/Postgres93.app/Contents/MacOS/bin/psql:
/Applications/Postgres93.app/Contents/MacOS/lib/libpq.5.dylib 
(compatibility version 5.0.0, current version 5.6.0)
/Applications/Postgres93.app/Contents/MacOS/lib/libssl.1.0.0.dylib 
(compatibility version 1.0.0, current version 1.0.0)
/Applications/Postgres93.app/Contents/MacOS/lib/libedit.0.dylib 
(compatibility version 1.0.0, current version 1.45.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
version 169.3.0)

Do you think this is an indication that the app bundle may have been compiled with 
a bad version of libedit? I can file a bug report along those lines.

Thanks,
Chris



Re: [GENERAL] Add custom properties to a column's definition (pg_attribute)

2014-01-09 Thread David Johnston
Brooke Beacham wrote
> (without having to replicate/maintain a table of columns separately from
> the system catalog)

Just create the friggin' table and wrap whatever logic you want in a view
(or functions) so that you at least get usable results/defaults for any
columns you haven't added.  

Any solution you pick has the risk of becoming out-of-sync so tossing out
the best solution does more harm than good.

In terms of "group" columns you might just consider creating two tables that
have a one-to-one relationship to each other (or three, the base table and
then group A and group B tables).

A lot depends on the why and structure of your model, but there is no
standard facility for custom column metadata, and though the comment
facility can be used for this I much prefer to leave that for human
consumption and have something more structured and easily customizable for
metadata.
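As a minimal sketch of that side-table-plus-view idea (all names here are invented):

```sql
-- Side table holding the custom per-column metadata.
CREATE TABLE col_meta (
    table_name  text NOT NULL,
    column_name text NOT NULL,
    col_group   text,                 -- e.g. 'A' or 'B'
    PRIMARY KEY (table_name, column_name)
);

-- The view exposes every real column, with NULL metadata for any column
-- not yet registered, so drift never silently hides columns.
CREATE VIEW columns_with_meta AS
SELECT c.table_name, c.column_name, m.col_group
FROM information_schema.columns c
LEFT JOIN col_meta m
       ON m.table_name  = c.table_name
      AND m.column_name = c.column_name
WHERE c.table_schema = 'public';
```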
 
David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Add-custom-properties-to-a-column-s-definition-pg-attribute-tp5786158p5786174.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] postgres.app OS X psql character encoding (utf-8) problem

2014-01-09 Thread Tom Lane
Chris Hiestand  writes:
> If I enter a unicode character in the psql cli, such as:
> user=# select 'ö';

> But before hitting enter, use the keyboard "left" button to move the cursor 
> across all the way to the left edge, and then back all the way to the right, 
> the output gets distorted and looks like this:
> user=#select 'ö';;

What this sounds like is that the readline or libedit library doesn't
understand multibyte characters properly.  psql itself doesn't have
anything to do with the display of un-entered lines, but relies on
one of those libraries to manage input editing.

The default situation on OS X is generally that psql gets linked against
the Apple-supplied libedit, which goes so far as to masquerade as
readline.  Check "otool -L /path/to/psql"; if you see a reference to
/usr/lib/libedit.3.dylib, or to /usr/lib/libreadline.dylib
(which is really just a symlink to the former), then that's what you've
got.  While I'm generally a fan and user of Apple stuff, their version
of libedit is just abysmal; we've seen random crashes, complete failure
of tab completion, and other bugs in successive OS X releases.  I also
find specific references to multibyte input being busted in other
distros' versions of libedit, eg
http://www.postgresql.org/message-id/4d5b2c5a.8090...@catalyst.net.nz
so it may not be all Apple's fault; but they definitely have a track
record of shipping broken versions of libedit.

I'd strongly recommend installing the GNU readline library and rebuilding
psql against that.
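In case it helps anyone following along, such a rebuild might look roughly like this from a PostgreSQL source tree; the include/library paths are illustrative (e.g. a Homebrew-installed readline) and will differ per machine:

```shell
# Point configure at a real GNU readline instead of Apple's libedit.
./configure --with-includes=/usr/local/opt/readline/include \
            --with-libraries=/usr/local/opt/readline/lib

# Rebuild and install just psql.
make -C src/bin/psql && make -C src/bin/psql install
```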

regards, tom lane




[GENERAL] postgres.app OS X psql character encoding (utf-8) problem

2014-01-09 Thread Chris Hiestand
Hi everyone,

I am running Postgres93.app (v 9.3.1). I'm new to Postgres, but experienced 
with MySQL and *nix. I'm having a problem with the command line client "psql" 
while connecting to postgres.app running on the localhost on OS X. It's 
behaving as if somewhere a character encoding is set incorrectly, but as far as 
I can tell, 'UTF-8' or 'UTF8' is correctly set everywhere.

If I enter a unicode character in the psql cli, such as:
user=# select 'ö';

But before hitting enter, use the keyboard "left" button to move the cursor 
across all the way to the left edge, and then back all the way to the right, 
the output gets distorted and looks like this:
user=#select 'ö';;

If I do hit enter - the command works fine, so the problem is on the 
display/client side.

Here's why I think all my encodings are set correctly:

user@hostname:~ $ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=

user@hostname:~ $ psql
psql (9.3.1)
Type "help" for help.

user=# \encoding
UTF8

user=# \l user
 List of databases
  Name  | Owner  | Encoding |   Collate   |Ctype| Access privileges 
++--+-+-+---
 user   | user   | UTF8 | en_US.UTF-8 | en_US.UTF-8 | 



Also if, from my OS X terminal, I ssh into a linux box and run 'psql', the 
problem does not reproduce. So I'm fairly certain my terminal is not the 
problem.

Can anyone reproduce the problem, or have any advice about something I've 
missed?

Thanks,
Chris



Re: [GENERAL] pg_restore - selective restore use cases. HINT use DROP CASCADE

2014-01-09 Thread Adrian Klaver

On 01/09/2014 08:51 AM, Day, David wrote:

I have needs to do selective schema or table restorations and the pg_restore

utility seems to have hooks for this, yet seems deficient for this type
of problem.

It appears that I have to develop a custom script to do what I think would

be a standard kind of restoration.

---

Given:  Postgres 9.3 and I have dumped my database via:

pg_dump -Fc -U   my_db -f archive_file

I have no problem doing a total restoration

e.g.

pg_restore -c -U  -d my_db  archive_file

Assuming I have not had a catastrophic error but merely wish to address
schemas

or tables used for decision making that have been altered by users into
sub-optimal

condition, then  if I attempt to do selective restorations of a schema
or tables in the database:

( e.g.   pg_restore -c  -n   -U   -d my_db
archive_file )

I encounter restoration problems over dependencies with suggestions/Hints:

“ HINT:  Use DROP ... CASCADE to drop the dependent objects too.”


Well for what it is worth, that is documented.

http://www.postgresql.org/docs/9.3/interactive/app-pgdump.html

"Note: When -n is specified, pg_dump makes no attempt to dump any other 
database objects that the selected schema(s) might depend upon. 
Therefore, there is no guarantee that the results of a specific-schema 
dump can be successfully restored by themselves into a clean database."




So it appears that I now have to develop a custom script to do what I
think would

be a standard kind of restoration.


Well I think that is the issue: it is not really standard, or at 
least not easy.  The possible combinations of schemas, dependencies across 
schemas, and ownership across objects make this complicated to do on a 
selective basis. Furthermore, when you use -n you are asking for 
a specific schema, so it sort of breaks the contract to drag in objects 
from other schemas.




I would think that there ought to be some additional options to
pg_restore.

i.e  an  option  that turns a DROP into a DROP with CASCADE and or
DISABLES constraint checking while the schema/table is being restored.


So you want to start dropping objects outside the schema you are 
restoring, seems like a foot gun to me.




In addition I would think that with “-a”, the data-only option, there ought
to be an assistive option that allows for the table to be
truncated/cleaned so that


Probably because it is relatively easy to roll your own solution to this.



the generated COPY commands do not append to the table resulting in
PRIMARY KEY violations.

In any event I have not found a straight forward way of using pg_restore
to do selective restorations or have found some ways of doing certain
tables but had to remove Foreign Keys to make it work which seems like a
poor bargain. I think I know how to customize the output to do the task,
it just seems that pg_restore should be

able to do this without my additional efforts.


In the foreseeable future that is how you will need to handle it. Where 
the choice is either to roll your own script or to use one of the existing 
migration solutions, e.g. Alembic, Sqitch, etc.





I am hopeful that there might be some instructive thoughts on selective
restorations that have not occurred to me.

( options that I have played with:  -n -t --section=data -a -c
--disable-triggers -1 )

Regards

Dave Day




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Add custom properties to a column's definition (pg_attribute)

2014-01-09 Thread Raymond O'Donnell
On 09/01/2014 20:16, Raymond O'Donnell wrote:
> On 09/01/2014 20:08, Brooke Beacham wrote:
>> Can anyone suggest a way to associate a set of custom properties
>> (key/value pairs) with a column's definition (ie pg_attribute)?
>>
>> (without having to replicate/maintain a table of columns separately from
>> the system catalog)
>>
>> For example:
>> * I have a table with 50 columns
>> * I'd like to 'flag' 30 of those columns as group - 'A' and the other 20
>> as group - 'B'
>>
>>
>> So I'd like a way to add custom metadata (k/v pairs) to a column's
>> definition (not the actual rows of data).
> 
> You could use the comment facility -
> 
>COMMENT ON COLUMN tablename.columnname IS IS '';

Whoops, sorry, an extra "is" got in there should have been:

   COMMENT ON COLUMN tablename.columnname IS '';

Ray.



-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




Re: [GENERAL] Add custom properties to a column's definition (pg_attribute)

2014-01-09 Thread Raymond O'Donnell
On 09/01/2014 20:08, Brooke Beacham wrote:
> Can anyone suggest a way to associate a set of custom properties
> (key/value pairs) with a column's definition (ie pg_attribute)?
> 
> (without having to replicate/maintain a table of columns separately from
> the system catalog)
> 
> For example:
> * I have a table with 50 columns
> * I'd like to 'flag' 30 of those columns as group - 'A' and the other 20
> as group - 'B'
> 
> 
> So I'd like a way to add custom metadata (k/v pairs) to a column's
> definition (not the actual rows of data).

You could use the comment facility -

   COMMENT ON COLUMN tablename.columnname IS IS '';

- to store some JSON or the like, though you'll have to parse it
client-side.
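For example (table and column names made up), the JSON round-trip might look like:

```sql
-- Store structured metadata in the column comment...
COMMENT ON COLUMN mytable.mycol IS '{"group": "A"}';

-- ...and read it back per column with col_description().
SELECT a.attname,
       col_description(a.attrelid, a.attnum) AS meta_json
FROM pg_attribute a
WHERE a.attrelid = 'mytable'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped;
```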

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




[GENERAL] Add custom properties to a column's definition (pg_attribute)

2014-01-09 Thread Brooke Beacham
Can anyone suggest a way to associate a set of custom properties (key/value
pairs) with a column's definition (ie pg_attribute)?

(without having to replicate/maintain a table of columns separately from
the system catalog)

For example:
* I have a table with 50 columns
* I'd like to 'flag' 30 of those columns as group - 'A' and the other 20 as
group - 'B'


So I'd like a way to add custom metadata (k/v pairs) to a column's
definition (not the actual rows of data).


Re: [GENERAL] Index space growing even after cleanup via autovacuum in Postgres 9.2

2014-01-09 Thread Tom Lane
Tirthankar Barari  writes:
> However, we are noticing that after autovacuum, our disk space 
> consumption is still increasing and the increase is in the index size 
> (by querying pg_total_relation_size("mytable") and 
> pg_indexes_size("mytable")).

> In Postgres 9.2.2, doesn't autovacuum cleanup dead indexes and reuse 
> that space too? So, why do we see this not happening?

You've not provided enough info to speak in more than generalities, but:
whether index space is reclaimable depends a whole lot on your usage
patterns.  There's provision to recycle pages that've become completely
empty, but as long as there's at least one key left on a leaf page,
it won't be recycled.  So for instance if you have a time-ordered index
and you delete all but one out of every hundred entries in time sequence,
you're not going to get that space back short of a REINDEX.
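
As an illustration, one way to watch for this and then reclaim the space (a sketch only; note that plain REINDEX takes an exclusive lock on the table):

```sql
-- Track how much of the footprint is index space.
SELECT pg_size_pretty(pg_total_relation_size('mytable')) AS total_size,
       pg_size_pretty(pg_indexes_size('mytable'))        AS index_size;

-- If the indexes keep growing despite vacuum, rebuild them during
-- a maintenance window (this blocks access to the table while it runs).
REINDEX TABLE mytable;
```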

regards, tom lane




[GENERAL] Index space growing even after cleanup via autovacuum in Postgres 9.2

2014-01-09 Thread Tirthankar Barari

Hi,

We have a table where we insert about 10 million rows everyday. We keep 
14 day's worth of entries (i.e. 140 mil). A scheduled task wakes up 
every day and deletes all entries past the 14 day window (i.e. deletes 
entries from the 15th day in the past).


We have autovacuum set to trigger when 1% of rows are deleted and that 
is running fine consistently.


However, we are noticing that after autovacuum, our disk space 
consumption is still increasing and the increase is in the index size 
(by querying pg_total_relation_size("mytable") and 
pg_indexes_size("mytable")).


In Postgres 9.2.2, doesn't autovacuum cleanup dead indexes and reuse 
that space too? So, why do we see this not happening?


Thanks

- tirthankar





Re: [GENERAL] How to know server status variable in postgresql?

2014-01-09 Thread ambilalmca
oh thanks @Sameer Kumar

Thanks & Regards,
A.Mohamed Bilal


On Thu, Jan 9, 2014 at 12:50 PM, Sameer Kumar [via PostgreSQL] <
ml-node+s1045698n5786022...@n5.nabble.com> wrote:

>
>
>
> On Wed, Jan 8, 2014 at 7:22 PM, ambilalmca <[hidden email]> wrote:
>
>> I want to collect,
>>
>> *Connections. *
>>
>> Current connections. The number of currently open connections.
>> Connections executing requests. The number of currently open connections
>> that are executing requests.
>> Idle connections. The number of currently idle connections.
>> Max connections. The maximum number of concurrent connections to the
>> database server.
>> Used connections. Connections used as a percentage of maximum connections.
>>
>> *Buffers. *
>>
>>
>> Shared buffers size. Current size of shared buffers.
>> Disk cache size. Current size of disk cache buffer.
>> Sort buffer size. Current size of sort buffer.
>> Work buffer size. Current size of working requests buffer.
>> Temp buffer size. Current size of temporary buffer.
>>
>> *IO Requests.*
>>
>>
>> Blocks read. Number of blocks directly read on disk.
>> For optimal performance this value should be the smallest possible. If the
>> database has to execute too many disk accesses, performance will suffer.
>> Index blocks read. Number of index blocks directly read on disk.
>> % Index blocks read. Percentage of index blocks directly read on disk.
>> Sequence blocks read. Number of sequence blocks directly read on disk.
>>
>> *Cache*
>>
>> Blocks read. Number of cached blocks read.
>> Index blocks read. Number of cached index blocks read.
>> % Index blocks read. Percentage of cached index blocks read.
>> For optimal performance, this value must be as large as possible. If an
>> insufficient number of index blocks are declared in the table, it could
>> negatively impact the database server performance.
>> Sequence blocks read. Number of cached sequence blocks read.
>>
>> *Index*.
>>
>>
>> Index read. Number of reads initiated by an index.
>> Indexed rows read. Number of rows read by indexed requests.
>> Indexed rows fetched. Number of live rows fetched by indexed requests.
>>
>> *Command Rates.*
>>
>>
>> Rows read. Number of rows read.
>> Rows fetched. Number of rows fetched.
>> Inserted rows. Number of rows inserted.
>> Updated rows. Number of rows updated.
>> Deleted rows. Number of rows deleted.
>> Committed transactions. Number of committed transactions.
>> This value should be relatively stable, indicating that there are no
>> performance-reducing load peaks. If applications do not commit often
>> enough,
>> it will lead to an overload on the database server.
>> Rolled back transactions. Number of transactions rolled back.
>> % Rolledback transactions. Percentage of transactions rolled back.
>>
>> *Locks. *
>>
>>
>> Locks waiting. Number of locks waiting.
>> Locks held. Number of locks held.
>> Process holding locks. Number of processes holding locks.
>>
>> How to collect these details using queries? For now I have found queries
>> for connections, but I don't know about the others. Please help me. @Sameer Kumar
>>
>>
>>
>>
> Are you building your own scripts for monitoring the database?
> There are open source plug-ins available for that. Anyway, take a look at
> this documentation:
>
> http://www.postgresql.org/docs/9.2/static/monitoring-stats.html
>
>
> You will find most of the things you have asked for (I guess all of it).
> If you are not able to find something then you can post here.
>
>
>
>> --
>> View this message in context:
>> http://postgresql.1045698.n5.nabble.com/How-to-know-server-status-variable-in-postgresql-tp5785677p5785833.html
>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>>
>>
>> --
>> Sent via pgsql-general mailing list ([hidden email])
>>
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
>




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/How-to-know-server-status-variable-in-postgresql-tp57
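
For reference, several of the counters listed above map directly onto the statistics views; a sketch using 9.2 column names:

```sql
-- Current, executing and idle connections.
SELECT count(*)                                          AS current,
       sum(CASE WHEN state = 'active' THEN 1 ELSE 0 END) AS executing,
       sum(CASE WHEN state = 'idle'   THEN 1 ELSE 0 END) AS idle
FROM   pg_stat_activity;

-- Configured maximum.
SHOW max_connections;

-- Commits, rollbacks and block I/O (disk reads vs. cache hits) per database.
SELECT datname, xact_commit, xact_rollback, blks_read, blks_hit
FROM   pg_stat_database;

-- Locks held vs. waiting.
SELECT granted, count(*) FROM pg_locks GROUP BY granted;
```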

Re: [GENERAL] Last inserted row id with complex PK

2014-01-09 Thread Nelson Green
On Wed, Jan 8, 2014 at 5:39 PM, Alban Hertroys  wrote:

> On 08 Jan 2014, at 16:54, Nelson Green  wrote:
>
> > I have a projects log table with a three column PK, project_num,
> person_num, and sequence, where each new entry for a project/person
> combination increments the sequence, which is not an auto incrementing
> sequence. Is there any way to retrieve the last entry to the table? For
> instance, if the last entry for person 427 on project 15 was sequence
> number 125, is that information available to me anywhere?
>
> I think the answer rather depends on what you mean with this last inserted
> row and for what purpose you need it.
>
> If you want that information right after it’s been inserted (for example
> because you need that information in an FK relation), you can use
> INSERT..RETURNING to return the values of the relevant PK fields.
>

Hi Alban,

That is almost exactly what I am trying to do (create a FK relation). I am
sorry
I was so vague, that was not intentional. I just have so many disparate
things
going on at once and I just wasn't thinking my question through very
clearly.

I am scripting the input of dummy data, where I have a list of projects and
users associated with those projects. My script chooses one pairing at
"random"
and inserts a dummy log record. I wanted to be able to look at the entries
in
the order they were inserted in to verify my sequence trigger is working as
intended. The default timestamp gave me that ability, but I think your
suggestion of INSERT ... RETURNING would have been a bit more in line with
my
original thought, where the script was doing an insert to the project table
and
then logging the PK of that last insert to a temp table. The PK of the temp
table is an auto incrementing sequence, but I wasn't sure how to get the PK
of
the log table from the previous INSERT. Since this is a test system I was
able
to add the timestamp and then just perform a query, sorting by that. But,
the
timestamp solution altered the model, the temp table did not.
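
A minimal sketch of that flow (column names are hypothetical; the sequence value assigned by the trigger is what RETURNING hands back):

```sql
-- Insert a log row and capture its three-column PK in the temp table,
-- all in one statement (data-modifying CTEs need 9.1 or later).
WITH new_row AS (
    INSERT INTO projects_log (project_num, person_num, entry)
    VALUES (15, 427, 'dummy log record')
    RETURNING project_num, person_num, sequence
)
INSERT INTO inserted_keys (project_num, person_num, sequence)
SELECT project_num, person_num, sequence FROM new_row;
```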

Regards,
Nelson


>
> If that’s not what you’re after, then what is it that determines which
> record is the “last one”? If you can’t identify such a record from your
> data while you need that information, then something is missing from your
> model.
>
> If you’re planning to add such information to your model based on your
> current data, you might be able to get somewhat reliable results looking at
> the transaction xid’s that the records were created in. There are a number
> of pitfalls to that approach though, the most obvious one being transaction
> xid wraparound. Of course, multiple inserts from the same transaction would
> (originally) have the same xid, so you wouldn’t be able to determine which
> one of those would be the latest (unless they’re for the same
> person/project, obviously).
> Such information could then be used to add a field with, for example, an
> incrementing sequence.
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>


[GENERAL] SQL State XX000 : XML namespace issue

2014-01-09 Thread Panneerselvam Posangu
Hi,
When we run a SQL statement in Postgres 9.2 we get an error.
Error : Could not register XML namespace with name "" and URI "" SQL State XX000
In the SQL statement we use the xpath function. Any reason why this is happening?
Thanks, Panneer

[GENERAL] pg_restore - selective restore use cases. HINT use DROP CASCADE

2014-01-09 Thread Day, David
I have needs to do selective schema or table restorations and the pg_restore
utility seems to have hooks for this, yet seems deficient for this type of 
problem.
It appears that I have to develop a custom script to do what I think would
be a standard kind of restorations.
---

Given:  Postgres 9.3 and I have dumped my database via:

pg_dump -Fc -U <username> my_db -f archive_file

I have no problem doing a total restoration
e.g.
pg_restore -c -U <username> -d my_db archive_file

Assuming I have not had a catastrophic error but merely wish to address schemas
or tables used for decision making that have been altered by users into 
sub-optimal
condition, then  if I attempt to do selective restorations of a schema or 
tables in the database:
( e.g.   pg_restore -c -n <schema> -U <username> -d my_db archive_file )
I encounter restoration problems over dependencies with suggestions/Hints:
" HINT:  Use DROP ... CASCADE to drop the dependent objects too."

So it appears that I now have to develop a custom script to do what I think 
would
be a standard kind of restoration.

I would think that there ought to be some additional options to pg_restore,
i.e. an option that turns a DROP into a DROP ... CASCADE, and/or one that DISABLES
constraint checking while the schema/table is being restored.

In addition I would think that with "-a", the data-only option, there ought to be 
an assistive option that allows for the table to be truncated/cleaned so that
the generated COPY commands do not append to the table resulting in PRIMARY KEY 
violations.

In any event I have not found a straightforward way of using pg_restore to do 
selective restorations or have found some ways of doing certain tables but had 
to remove Foreign Keys to make it work which seems like a poor bargain. I think 
I know how to customize the output to do the task, it just seems that 
pg_restore should be
able to do this without my additional efforts.

I am hopeful that there might be some instructive thoughts on selective 
restorations that have not occurred to me.

( options that I have played with:  -n -t -section=data -a -c -disable-triggers 
-1 )
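
One partial workaround I know of (it does not solve the DROP ... CASCADE issue, but it gives finer-grained selection than -n/-t): edit the archive's table of contents and restore from that. A sketch:

```shell
# List the archive's table of contents.
pg_restore -l archive_file > toc.list

# Edit toc.list: comment out (with ';') every entry you do NOT want,
# e.g. keep only the "TABLE DATA" line for the table of interest.

# Restore only the entries left uncommented.
pg_restore -L toc.list -d my_db archive_file
```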


Regards


Dave Day



Re: [GENERAL] getting domain information from query results

2014-01-09 Thread Tom Lane
Marco Baringer  writes:
> it seems, if my understanding of the protocol is correct, that the oid
> of the underlying type, text in this case, is returned and not the oid
> of the domain.

That's correct.  This was an intentional decision long ago, and we're
unlikely to reconsider now for fear of breaking applications.  (IIRC,
the argument was that client-side code was most likely to be concerned
with matters like the textual representation of the value, and so getting
told about domains would just complicate life.)

> i tried creating a new type (which have solved my problem with the
> "wrong" oid getting returned with the query results), but then realized
> i'd have to implement all of the associated operators for my new type as
> well (they'd just be cut 'n pastes of the equivalent operators for
> varchar, but that's still a bit more work than i'd like to do at this
> point).

If you'd dug a bit more deeply, you'd have noticed that varchar doesn't
*have* any operators; it relies on text's operators via an implicit cast
to text.  You could likely do the same for a custom string type.  This
answer doesn't scale to non-string data types of course, but it might
be good enough for your situation.

regards, tom lane




[GENERAL] getting domain information from query results

2014-01-09 Thread Marco Baringer

hi,

i have this schema:

  create domain almost_a_string AS text;
  create table object ( name_like_thing almost_a_string );

and i'm trying to go from the results, using postgresql's
frontend/backend protocol, of this query:

  select name_like_thing from object;

to the domain of the column name_like_thing, almost_a_string.

it seems, if my understanding of the protocol is correct, that the oid
of the underlying type, text in this case, is returned and not the oid
of the domain. i could get at the domain with an extra query, since the
originating table and column of each field in the result row is
included, but i'd really rather not have to make that extra query. is there
something obvious that i'm missing?

finally, i don't think i'm the first person to want to look at things
this way; both psql's \d command and the function pg_typeof
return almost_a_string (the domain) and not text (the type).



if it helps, here's what i'm trying to do: i would like a type, or
something like a type that my app, which talks to postgresql over the
frontend/backend protocol, can use to distinguish the values from one
column from the values of another, even though both columns are indexed,
queried and operated on (as far as =, <>, ilike, etc. are concerned)
just like text values. this isn't a new type, it's basically a column of
type text plus an enum column with a "class name" or "type" (or
something equally app specific) which i can no longer, given the age and
size of the schema, and how this ORM currently works, add in.

i tried creating a new type (which would have solved my problem with the
"wrong" oid getting returned with the query results), but then realized
i'd have to implement all of the associated operators for my new type as
well (they'd just be cut 'n pastes of the equivalent operators for
varchar, but that's still a bit more work than i'd like to do at this
point).

thanks for the help,
-- 
-marco




Re: [GENERAL] argument of CASE/WHEN must not return a set

2014-01-09 Thread George Weaver
- Original Message - 
From: David Johnston



>The condition (WHEN) in a case cannot be a set.  You have to make the
>expression always resolve to a single row/value.

>I'd suggest creating a regexp_matches_single(...) function that calls
>regexp_matches(...) in a sub-select so that no matches results in null. 
>You
>then need to decide how you want to handle multiple matches. This function
>will return a single text[] and so can be used in places where you want 
>your
>match to only and always return a single result (i.e. non-global behavior).

Thanks David,

I found that if the whole expression is made a sub-select it works:

development=# SELECT CASE
development-# WHEN (SELECT LENGTH(ARRAY_TO_STRING( 
REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
development(# , ',')
development(# ) = LENGTH('12-70510')
development(#  )
development-#THEN ARRAY_TO_STRING( 
REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
development(#  , ',')
development-# ELSE ''
development-# END AS "12-70510"
development-# ;
 12-70510
--

(1 row)

Cheers,
George

>Note a recent patch was applied yesterday to resolve an ancient 
>undiscovered
>bug related to this kind of query as well. Using the above 
>function/behavior
>will let you avoid that bug as well.

>David J.



Re: [GENERAL] argument of CASE/WHEN must not return a set

2014-01-09 Thread David Johnston
Sameer Kumar wrote
> On Thu, Jan 9, 2014 at 1:26 AM, George Weaver <gweaver@> wrote:
> 
>> ARRAY_TO_STRING(  REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
>> development(#  , ',')
> 
> 
> I guess this part of your statement will return 1,2, which is a set
> 
> Can you try below:
> SELECT CASE
>  WHEN LENGTH(ARRAY_TO_STRING(
>  REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
>  , ',')
>  )
>   = LENGTH('12-70510')
> 
>  THEN cast(ARRAY_TO_STRING(
>  REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
>   , ',') as varchar(100))
> 
>  ELSE ''
>  END AS "12-70510";
> 
> But anyways, I think the best way to do it is the way you have already
> figured (check the plan for both statements once you have sorted out the
> error)
> 
> 
> Best Regards,
> *Sameer Kumar | Database Consultant*
> 
> *ASHNIK PTE. LTD.*101 Cecil Street, #11-11 Tong Eng Building, Singapore
> 069533
> M : *+65 8110 0350* T: +65 6438 3504 | www.ashnik.com
> www.facebook.com/ashnikbiz | www.twitter.com/ashnikbiz
> 
> [image: email patch]
> 
> This email may contain confidential, privileged or copyright material and
> is solely for the use of the intended recipient(s).
> 
> 

The condition (WHEN) in a case cannot be a set.  You have to make the
expression always resolve to a single row/value.

I'd suggest creating a regexp_matches_single(...) function that calls
regexp_matches(...) in a sub-select so that no matches results in null.  You
then need to decide how you want to handle multiple matches. This function
will return a single text[] and so can be used in places where you want your
match to only and always return a single result (i.e. non-global behavior).
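
A minimal sketch of such a wrapper (the name and the LIMIT 1 multiple-match policy are my own choices):

```sql
-- A scalar subquery over the set-returning function yields NULL when
-- there is no match; LIMIT 1 keeps the first match instead of erroring
-- out when the 'g' flag produces several.
CREATE FUNCTION regexp_matches_single(str text, pat text)
RETURNS text[] AS $$
    SELECT (SELECT regexp_matches(str, pat) LIMIT 1)
$$ LANGUAGE sql IMMUTABLE;
```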

Note a recent patch was applied yesterday to resolve an ancient undiscovered
bug related to this kind of query as well. Using the above function/behavior
will let you avoid that bug as well.

David J.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/argument-of-CASE-WHEN-must-not-return-a-set-tp5785874p5786085.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] argument of CASE/WHEN must not return a set

2014-01-09 Thread Sameer Kumar
On Thu, Jan 9, 2014 at 6:05 PM, Sameer Kumar wrote:

>
> On Thu, Jan 9, 2014 at 1:26 AM, George Weaver  wrote:
>
>> ARRAY_TO_STRING(  REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
>> development(#  , ',')
>
>
> I guess this part of your statement will return 1,2, which is a set
>
> Can you try below:
> SELECT CASE
>  WHEN LENGTH(ARRAY_TO_STRING(
>  REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
>  , ',')
>  )
>   = LENGTH('12-70510')
>
>  THEN cast(ARRAY_TO_STRING(
>  REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
>   , ',') as varchar(100))
>
>  ELSE ''
>  END AS "12-70510";
>
> But anyways, I think the best way to do it is the way you have already
> figured (check the plan for both statements once you have sorted out the
> error)
>
>
>
Sorry about this response. The issue seems to be different.



>  Best Regards,
> *Sameer Kumar | Database Consultant*
>
> *ASHNIK PTE. LTD.*101 Cecil Street, #11-11 Tong Eng Building, Singapore
> 069533
> M : *+65 8110 0350 <%2B65%208110%200350>* T: +65 6438 3504 |
> www.ashnik.com
> www.facebook.com/ashnikbiz | www.twitter.com/ashnikbiz
>
> [image: email patch]
>
> This email may contain confidential, privileged or copyright material and
> is solely for the use of the intended recipient(s).
>

Re: [GENERAL] getting domain information from query results

2014-01-09 Thread Karsten Hilbert
On Thu, Jan 09, 2014 at 12:10:34PM +0100, Marco Baringer wrote:

> assuming i have this schema:
> 
>   create domain almost_a_string AS text;
>   create table object ( name_like_thing almost_a_string );
> 
> and i'm trying to go from the results, using postgresql's
> frontend/backend protocol, of this query:
> 
>   select name_like_thing from object;
> 
> to the domain of the column name_like_thing, almost_a_string.
> 
> it seems, if my understanding of the protocol is correct, that the oid
> of the underlying type, text in this case, is returned and not the oid
> of the domain. i could get at the domain with an extra query, since the
originating table/view and column of each field in the result row is
included, but i'd really rather not have to make that extra query. is there
> something obvious that i'm missing?
> 
> finally, i don't think i'm the first person to want to look at things
> this way; both psql's \d command and the function pg_typeof return
> almost_a_string (the domain) and not text (the type).

Maybe this

http://svana.org/~kleptog/pgsql/taggedtypes.html

gives some ideas.

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




[GENERAL] getting domain information from query results

2014-01-09 Thread Marco Baringer

i am trying to access a value's (a particular column of a row of query
results) domain information; but all i seem to be able to get at, using
the frontend/backend protocol, is the underlying type.

assuming i have this schema:

  create domain almost_a_string AS text;
  create table object ( name_like_thing almost_a_string );

and i'm trying to go from the results, using postgresql's
frontend/backend protocol, of this query:

  select name_like_thing from object;

to the domain of the column name_like_thing, almost_a_string.

it seems, if my understanding of the protocol is correct, that the oid
of the underlying type, text in this case, is returned and not the oid
of the domain. i could get at the domain with an extra query, since the
originating table/view and column of each field in the result row is
included, but i'd really rather not have to make that extra query. is there
something obvious that i'm missing?

finally, i don't think i'm the first person to want to look at things
this way; both psql's \d command and the function pg_typeof return
almost_a_string (the domain) and not text (the type).



if it helps, here's what i'm trying to do: i would like a type, or
something like a type that my app, which talks to postgresql over the
frontend/backend protocol, can use to distinguish the values from one
column from the values of another, even though both columns are indexed,
queried and operated on (as far as =, <>, ilike, etc. are concerned)
just like text values. this isn't a new type, it's basically a column of
type text plus a marker with a "class name" or "type" (or something
equally app specific) which i can no longer, given the age and size of
the schema, and how this ORM currently works, add in. domains really
seem like the perfect solution to my problem, except for their not being
handled by the network protocol.

i tried creating a new type (which would have solved my problem with the
"wrong" oid getting returned with the query results), but then realized
i'd have to implement all of the associated operators and functions for
my new type as well (they'd just be cut 'n pastes of the equivalent
operators for varchar, but that's still a bit more work than i'd like to
do at this point).

thanks for the help,
-- 
-marco




[GENERAL] wal archive peak during pg_dump

2014-01-09 Thread Willy-Bas Loos
please excuse my forgetting [GENERAL] in the subject of the other mail.

On Thu, Jan 9, 2014 at 11:42 AM, Willy-Bas Loos  wrote:

> Hi,
>
> I've set up hot standby slaves for a couple of clusters.
> The wal is cleaned up after use, i don't use it as a backup (yet).
> It seems that the amount of wal peaks shortly after midnight, when pg_dump
> is running.
>
> It doesn't seem logical to me that pg_dump should generate wal, but i
> haven't been able to find a different explanation so far.
> So to make sure, i want to ask you people: can it be that running pg_dump
> creates a lot of wal?
>
> Cheers,
>
> WBL
>
> --
> "Quality comes from focus and clarity of purpose" -- Mark Shuttleworth
>



-- 
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth


[GENERAL] wal archive peak during pg_dump

2014-01-09 Thread Willy-Bas Loos
Hi,

I've set up hot standby slaves for a couple of clusters.
The wal is cleaned up after use, i don't use it as a backup (yet).
It seems that the amount of wal peaks shortly after midnight, when pg_dump
is running.

It doesn't seem logical to me that pg_dump should generate wal, but i
haven't been able to find a different explanation so far.
So to make sure, i want to ask you people: can it be that running pg_dump
creates a lot of wal?
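
One way to check is to sample the WAL insert position just before and just after the dump window (a sketch; these are the pre-9.6 function names, and the LSNs below are placeholders):

```sql
-- Before the dump starts:
SELECT pg_current_xlog_location();

-- After it finishes:
SELECT pg_current_xlog_location();

-- Bytes of WAL generated between the two sampled positions:
SELECT pg_xlog_location_diff('16/B374D848', '16/B0000000');
```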

Cheers,

WBL

-- 
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth


Re: [GENERAL] Last inserted row id with complex PK

2014-01-09 Thread Francisco Olarte
Hi Nelson:

On Wed, Jan 8, 2014 at 7:14 PM, Nelson Green  wrote:
> My apologies, I was not completely clear. I will not know any of the columns
> in advance. The most recent insert is the result of user input from a web
> form, so I won't know what project or what user generated the last insert.
> That was why I wondered if that information was stored anywhere in the
> system.

You mean someone already inserted a (project, user, sequence) tuple
and you need it? So the problem of calculating the last sequence for a
given ( p, u ) combination has been solved by the inserting web form
and you just need to communicate the result to another process ?

It seems what you need is an auxiliary log table ( serial + (P,U,S) )
which could be managed by a trigger, or made the web form do the
modifications using a procedure. Anyway, a lot more detail will be
needed to solve this.
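
A rough sketch of the trigger-maintained log table (every name here is hypothetical):

```sql
CREATE TABLE projlog_audit (
    id          serial PRIMARY KEY,          -- insertion order
    project_num integer     NOT NULL,
    person_num  integer     NOT NULL,
    seq         integer     NOT NULL,
    inserted_at timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION log_projlog_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO projlog_audit (project_num, person_num, seq)
    VALUES (NEW.project_num, NEW.person_num, NEW.sequence);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER projlog_audit_trg
    AFTER INSERT ON projects_log
    FOR EACH ROW EXECUTE PROCEDURE log_projlog_insert();
```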

Francisco Olarte.




Re: [GENERAL] argument of CASE/WHEN must not return a set

2014-01-09 Thread Sameer Kumar
On Thu, Jan 9, 2014 at 1:26 AM, George Weaver  wrote:

> ARRAY_TO_STRING(  REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
> development(#  , ',')


I guess this part of your statement will return 1,2, which is a set

Can you try below:
SELECT CASE
 WHEN LENGTH(ARRAY_TO_STRING(
 REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
 , ',')
 )
  = LENGTH('12-70510')

 THEN cast(ARRAY_TO_STRING(
 REGEXP_MATCHES('12-70510','^[0-9,0-9.0-9]+')
  , ',') as varchar(100))

 ELSE ''
 END AS "12-70510";

But anyways, I think the best way to do it is the way you have already
figured (check the plan for both statements once you have sorted out the
error)


Best Regards,
*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*101 Cecil Street, #11-11 Tong Eng Building, Singapore
069533
M : *+65 8110 0350* T: +65 6438 3504 | www.ashnik.com
www.facebook.com/ashnikbiz | www.twitter.com/ashnikbiz

[image: email patch]

This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).