Re: [GENERAL] pg_stat_statements -- Historical Query

2017-08-10 Thread Julien Rouhaud
On Thu, Aug 10, 2017 at 3:00 PM, Melvin Davidson  wrote:
>
> If you are interested in historical stats, you would probably fare a lot
> better with pgBadger. It is free and highly customizable.  In addition to
> SQL call rates at different times, it provides analysis of the most used
> queries, slowest queries, etc.

So does PoWA, but in real time.

>
> https://sourceforge.net/projects/pgbadger/
>

FYI, pgBadger is no longer available on SourceForge (see the
downloadable file); you should now download it from
https://github.com/dalibo/pgbadger/releases




Re: [GENERAL] pg_stat_statements -- Historical Query

2017-08-10 Thread Julien Rouhaud
On Thu, Aug 10, 2017 at 6:41 AM, Michael Paquier
 wrote:
> On Thu, Aug 10, 2017 at 6:23 AM, anand086  wrote:
>> I was looking for a way to maintain historical query details in Postgres to
>> answer questions like
>>
>> What was the SQL call rate between time X and Y?
>> Did the execution count for the query increase between time X and Y?
>> In the past 10 minutes, which queries were run in the db?
>>
>> and a few others like this.
>>
>> What would be best way to do it? Any thoughts?
>
> pg_stat_statements has a function to reset the information that the view
> pg_stat_statements holds. You could periodically copy the data of
> pg_stat_statements and then invoke pg_stat_statements_reset to put
> everything back to zero. Then you would just need to do your analysis
> work based on the data copied into your custom table.

You can also use the powa-archivist extension, which handles the
aggregation, data retention and so on with a bgworker:
https://github.com/dalibo/powa-archivist.
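For the copy-and-reset approach described above, a minimal sketch (the
snapshot table and its columns are illustrative, not an established schema):

    -- once, to create the history table:
    CREATE TABLE pgss_snapshot AS
        SELECT now() AS snap_time, * FROM pg_stat_statements
        WITH NO DATA;

    -- then periodically, e.g. from cron:
    INSERT INTO pgss_snapshot
        SELECT now(), * FROM pg_stat_statements;
    SELECT pg_stat_statements_reset();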




Re: [GENERAL] Ora2Pg-Database migration report

2017-05-31 Thread Julien Rouhaud
On 31/05/2017 16:36, PAWAN SHARMA wrote:
> Hi All,
> 
> What is the formula to calculate estimated cost in Ora2Pg-Database
> migration report?

You can find some information in the official documentation:
http://ora2pg.darold.net/documentation.html#migration_assessment_method

If you want more details, you can either look at the code or open an
issue on the repo (https://github.com/darold/ora2pg/).

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Difficult while acquiring LWLocks

2017-05-03 Thread Julien Rouhaud
On 03/05/2017 15:01, hariprasath nallasamy wrote:
> 
> AFAIK yes this is the correct way to use multiple lwlocks.
> 
> 
> Thanks.!
> 
> Just curious, Is there any other way to do this.? 

Probably not, short of asking for 10 different tranches :)

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Difficult while acquiring LWLocks

2017-05-03 Thread Julien Rouhaud
On 03/05/2017 13:08, hariprasath nallasamy wrote:
> Hi all
> There is a use case where I want some 10 lightweight locks, and
> since 9.6 the LWLock APIs (LWLockAssign) have changed a bit, so I am
> confused.
> 
> The only reference I could find was pg_stat_statements :(
> 
> The GetNamedLWLockTranche method returns the base address of the
> specified tranche.
> 
> From _PG_init:
> 
>     RequestNamedLWLockTranche("Some_10_LWLocks", 10);
> 
> To get the locks that were requested from _PG_init:
> 
>     LWLockPadded *lwLockPadded = GetNamedLWLockTranche("Some_10_LWLocks");
>     LWLock *lock = &(lwLockPadded[index in 0 to 9]).lock;
> 
> Is the above code snippet valid for requesting some 10 LWLocks?

AFAIK yes this is the correct way to use multiple lwlocks.
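Putting the two snippets together, a minimal sketch of the 9.6 named-tranche
usage (assuming the extension is loaded via shared_preload_libraries; the
tranche name comes from the snippet above):

    void
    _PG_init(void)
    {
        /* must run while shared_preload_libraries is being processed */
        RequestNamedLWLockTranche("Some_10_LWLocks", 10);
    }

    /* later, typically fetched once from the shmem startup hook: */
    LWLockPadded *base = GetNamedLWLockTranche("Some_10_LWLocks");

    /* lock i (0..9) is then used as: */
    LWLockAcquire(&base[i].lock, LW_EXCLUSIVE);
    /* ... critical section ... */
    LWLockRelease(&base[i].lock);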

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] too many LWLocks

2017-03-08 Thread Julien Rouhaud
On Wed, Mar 08, 2017 at 03:34:56PM +0530, hariprasath nallasamy wrote:
> Hi all
> I am building an extension using a shared memory hash table, and for
> locking the hash table I am using LWLocks. When I try to run some 1k
> queries one after the other, each query acquires one LWLock, but on
> executing the 200th query I get the error "ERROR:  too many LWLocks
> taken".
> 
> But in each query I acquire and release that lock, so the lock should be
> released after the query executes. Why am I getting this error?
> 
> Is this because held_lwlocks in lwlock.c is fixed at 200?
> Or am I missing something here?

The most likely reason is that you have some code path in your extension where
you don't release the LWLock.  Without access to the code there isn't much more
we can do to help you, I'm afraid.  You could also try a postgres build with
LWLOCK_STATS defined.
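For illustration, a hedged sketch of the kind of leak meant here (the names
are hypothetical); held_lwlocks in lwlock.c is indeed a fixed array of
MAX_SIMUL_LWLOCKS (200) entries, so leaking one lock per query overflows it
around the 200th query:

    /* buggy: the early return leaves the lock held */
    LWLockAcquire(lock, LW_SHARED);
    entry = hash_search(my_hash, &key, HASH_FIND, NULL);
    if (entry == NULL)
        return NULL;            /* BUG: LWLockRelease() is never reached */
    result = entry->data;
    LWLockRelease(lock);
    return result;

    /* fixed: release the lock on every code path */
    LWLockAcquire(lock, LW_SHARED);
    entry = hash_search(my_hash, &key, HASH_FIND, NULL);
    result = entry ? entry->data : NULL;
    LWLockRelease(lock);
    return result;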

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] ShmemAlloc maximum size

2017-02-23 Thread Julien Rouhaud
On Thu, Feb 23, 2017 at 08:17:54PM +0530, hari.prasath wrote:
> I am trying to get some 15MB of shared memory using ShmemAlloc.
> 
> I cross-checked all my kernel variables set at the OS level and also in
> postgresql.conf.
> 
> I can't get the required memory (15MB), but if I try with some small
> amount (1MB), it works fine.
> 
> Am I missing something?

Assuming you're talking about some C extension, you have to request the
amount of shmem you'll need in _PG_init(), with RequestAddinShmemSpace().

You can see how it's done in pg_stat_statements.c (_PG_init() and
pgss_shmem_startup()).

Requesting a small amount probably works because some shared memory is
allocated at server start but not entirely used, so small allocations can
succeed even without an explicit request.
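A minimal sketch of that pattern, mirroring pg_stat_statements.c (the hook
function and segment name below are illustrative):

    #include "postgres.h"
    #include "storage/ipc.h"
    #include "storage/lwlock.h"
    #include "storage/shmem.h"

    #define MY_SHMEM_SIZE (15 * 1024 * 1024)

    static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
    static void *my_shmem = NULL;
    static void my_shmem_startup(void);

    void
    _PG_init(void)
    {
        /* reserve the 15MB while shared_preload_libraries is processed */
        RequestAddinShmemSpace(MY_SHMEM_SIZE);

        prev_shmem_startup_hook = shmem_startup_hook;
        shmem_startup_hook = my_shmem_startup;
    }

    static void
    my_shmem_startup(void)
    {
        bool found;

        if (prev_shmem_startup_hook)
            prev_shmem_startup_hook();

        LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
        my_shmem = ShmemInitStruct("my_extension", MY_SHMEM_SIZE, &found);
        LWLockRelease(AddinShmemInitLock);
    }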

Regards.

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] ora2pg - Java Message Service (JMS) Type

2016-12-12 Thread Julien Rouhaud
On Mon, Dec 12, 2016 at 04:15:59PM +, Joanna Xu wrote:
> On Fri, Dec 11, 2016 12:43 PM, Julien Rouhaud wrote:
> >> I forwarded your mail to the author, since he's not on this mailing-list.
> >> This issue should be fixed with commit
> >> https://github.com/darold/ora2pg/commit/2c22d377e9f90fa6fa83cfe1fbead9eb3396a0f3.
> 
> >> If this bugfix is not enough or if you find other issues, could you report
> >> them directly on github (https://github.com/darold/ora2pg/issues)?
> Hi Julien,
> Thanks for looking into the issue.
> 
> I tried updating "Ora2Pg.pm" with the change suggested in
> https://github.com/darold/ora2pg/commit/2c22d377e9f90fa6fa83cfe1fbead9eb3396a0f3
> (adding "|TYPE" in the following line) but encountered the same issue
> 'ERROR:  type "aq$_jms_text_message" does not exist' while importing the
> content of the output.sql into the PostgreSQL database.
> 
> [kll0199:/u01/app/oracle/ora2pg-17.6/lib] egrep "has_limitation" Ora2Pg.pm | 
> egrep -v "has_limitation ="
> 
> if (!$self->{is_mysql} && !$has_limitation && ($arr_type[$i] 
> =~ /TABLE|SEQUENCE|VIEW|TRIGGER|TYPE/)) {
> 
> At this point, I guess the issue occurs for that particular type
> "aq$_jms_text_message".  So the key question would be: does a Postgres
> database support the "aq$_jms_text_message" type?  If postgres does not
> support this type, it will error out.  I would need confirmation on this,
> please.
> 

Unfortunately I have no idea what the "aq$_jms_text_message" type is, but I
don't see any reason why it couldn't be migrated to PostgreSQL.  It really
seems that ora2pg fails to migrate this type for some reason, so you should
open an issue on https://github.com/darold/ora2pg/issues.  The author will
have a better answer than me on your problem.

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] ora2pg - Java Message Service (JMS) Type

2016-12-11 Thread Julien Rouhaud
On Fri, Dec 09, 2016 at 05:22:07PM +, Joanna Xu wrote:
> Hi All,
> 
> We plan to use the ora2pg tool to migrate Oracle to PostgreSQL.  During
> testing, while importing the content of the output.sql into the PostgreSQL
> database, it terminated with the error 'ERROR:  type "aq$_jms_text_message"
> does not exist' for the "user_data" column with "AQ$_JMS_TEXT_MESSAGE" as
> its type.  In this case, if Java Message Service (JMS) types are not
> supported by Postgres, ora2pg would not be the right tool to use for the
> migration, but I am not certain whether that is true.  Can someone please
> advise?
> 
> psql -d wsp -U staging -W < /database/postgres/outputSTAGING.sql
> Password for user staging:
> SET
> CREATE TABLE
> :
> CREATE TABLE
> ALTER TABLE
> CREATE INDEX
> ERROR:  type "aq$_jms_text_message" does not exist
> LINE 29:  user_data AQ$_JMS_TEXT_MESSAGE,
> 

Hello,

I forwarded your mail to the author, since he's not on this mailing-list.  This
issue should be fixed with commit
https://github.com/darold/ora2pg/commit/2c22d377e9f90fa6fa83cfe1fbead9eb3396a0f3.

If this bugfix is not enough or if you find other issues, could you report them
directly on github (https://github.com/darold/ora2pg/issues)?

Thanks!

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] How to config start/stop scripts in a different data dir for CentOS7/systemctl/PG9.6

2016-12-08 Thread Julien Rouhaud
On Thu, Dec 08, 2016 at 05:16:11PM -0300, Edilmar LISTAS wrote:
> I decided to use the pg_ctl to start/stop, I didn't find a way to
> configure systemctl script.

I didn't notice your first mail.

You can find all the needed information in this blog post:
https://people.planetpostgresql.org/devrim/index.php?/archives/82-Running-more-than-one-9.4-9.3-instance-in-parallel-on-RHEL-7.html
(This link is available in the "Yum Howto" section of yum.postgresql.org.)

Last time I tried, it worked as intended.
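For reference, the approach described there boils down to a small wrapper
unit (a sketch only; the unit name and paths are the PGDG packaging
defaults, and the linked post is the authoritative recipe):

    # /etc/systemd/system/postgresql-9.6-second.service
    .include /lib/systemd/system/postgresql-9.6.service
    [Service]
    Environment=PGDATA=/path/to/your/other/data/dir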

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Migrating data from DB2 zOS to PostgreSQL

2016-12-05 Thread Julien Rouhaud
On Tue, Dec 06, 2016 at 11:06:12AM +0530, Swapnil Vaze wrote:
> Hello,
> 

Hello

> We need some help on how we can migrate data from DB2 zOS database to
> postgres database.
> 
> Are there any utilities present? Any thoughts how we should approach?

You can use this utility: https://github.com/dalibo/db2topg

The README should provide all the needed information.

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] No select privileges when not connecting from login postgres

2016-12-03 Thread Julien Rouhaud
On Wed, Nov 30, 2016 at 08:48:41PM -0800, Joseph Brenner wrote:
> I'm trying to get a new build of 9.6.1 working on a machine
> running Debian stable (jessie) and I'm seeing some odd
> behavior where things work correctly if I run psql when
> logged in as postgres, but if I run it as user 'doom' (my
> usual login), I don't seem to have any select privileges.
> Even this fails silently:
> 
>   select 'world' as hello;
> 
> But if run logged in as postgres, all is well:
> 
>   sudo su - postgres
>   /usr/local/pgsql/bin/psql --dbname=doom --username=doom
>   doom=# select 'world' as hello;
>    hello
>   -------
>    world
>   (1 row)
> [...]
> I'm running out of ideas for things to check.  Any suggestions?
> 

Any unusual errors in the logs?  Or maybe a "\o /somefile" in your
~doom/.psqlrc?

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Julien Rouhaud
On 13/11/2016 15:26, Aaron Lewis wrote:
> Hi Oleg,
> 
> Can you elaborate on the title column? I don't get it.
> 

>>> create table mytable(hash char(40), title varchar(500));
>>> create index name_fts on mytable using gin(to_tsvector('english',
>>> 'title'));

You created an index on the constant string 'title', not on the title column,
so the index is useless.

Drop the existing index and create this one instead:

create index name_fts on mytable using gin(to_tsvector('english', title));

> On Sun, Nov 13, 2016 at 10:10 PM, Oleg Bartunov <obartu...@gmail.com> wrote:
>>
>>
>> On Sun, Nov 13, 2016 at 2:50 PM, Aaron Lewis <the.warl0ck.1...@gmail.com>
>> wrote:
>>>
>>> I have a simple table, and a gin index,
>>>
>>> create table mytable(hash char(40), title varchar(500));
>>> create index name_fts on mytable using gin(to_tsvector('english',
>>> 'title'));
>>
>>
>>
>> ^
>>
>>>
>>> create unique index md5_uniq_idx on mytable(hash);
>>>
>>> When I execute a query with tsquery, the GIN index was not in use:
>>>
>>> test=# explain analyze select * from mytable where
>>> to_tsvector('english', title) @@ 'abc | def'::tsquery limit 10;
>>>                                  QUERY PLAN
>>> ----------------------------------------------------------------------
>>>  Limit  (cost=0.00..277.35 rows=10 width=83) (actual
>>> time=0.111..75.549 rows=10 loops=1)
>>>->  Seq Scan on mytable  (cost=0.00..381187.45 rows=13744 width=83)
>>> (actual time=0.110..75.546 rows=10 loops=1)
>>>  Filter: (to_tsvector('english'::regconfig, (title)::text) @@
>>> '''abc'' | ''def'''::tsquery)
>>>  Rows Removed by Filter: 10221
>>>  Planning time: 0.176 ms
>>>  Execution time: 75.564 ms
>>> (6 rows)
>>>
>>> Any ideas?
>>> 

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] forcing a table (parent in inheritance) to remain empty

2016-10-11 Thread Julien Rouhaud
On 11/10/2016 19:04, Rémi Cura wrote:
> This solution is very nice.
> Sadly the check is inherited by the children
> (I only want the parent to be empty, not the children).
> 
> It seems the element that are not inherited are
> 
>   * Indexes
>   * Unique constraints
>   * Primary Keys
>   * Foreign keys
>   * Rules and Triggers 
> 

You can specify NO INHERIT on the check constraint; that should solve
your issue.
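For instance, a minimal sketch (assuming the parent table is simply named
"parent"):

    ALTER TABLE parent
        ADD CONSTRAINT parent_must_be_empty CHECK (false) NO INHERIT;

Inserts into the parent then fail the constraint, while the children, which
do not inherit it, are unaffected.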

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Query generator

2016-10-02 Thread Julien Rouhaud
On 29/09/2016 23:23, Vinicius Segalin wrote:
> 2016-09-29 16:32 GMT-03:00 Julien Rouhaud <julien.rouh...@dalibo.com>:
> You should try sqlsmith (https://github.com/anse1/sqlsmith), which works
> very well.
> 
> 
> I had found this one before, but all I could get was queries using
> "standard" tables, like the ones from pg_catalog and information_schema.
> It didn't generate queries from the tables I've created. Was I doing
> something wrong?
> 

I don't think so, unless you specified an empty database with the
--target option. I never had this issue myself, so I can only advise you
to open an issue on the project repository if you still have this problem.
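For reference, an invocation sketch (--target takes a libpq connection
string; the connection parameters here are illustrative):

    sqlsmith --target="host=localhost dbname=yourdb"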

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Query generator

2016-09-29 Thread Julien Rouhaud
On 29/09/2016 21:27, Vinicius Segalin wrote:
> Hi everyone,
> 

Hello,

> Does anyone know a random query generator for Postgres? Something that
> gets my schema and, based on that, generates hundreds of different
> queries with join, group by, etc.
> 

You should try sqlsmith (https://github.com/anse1/sqlsmith), which works
very well.

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Why are no NEGATORS defined in the standard operators

2016-06-11 Thread Julien Rouhaud
Hello

On 11/06/2016 10:00, Daniel Migowski wrote:
> Hello,
> 
> while trying to bake my own fixedpoint datatype I noticed the definition
> for =(int8,int8) to be
> 
> CREATE OPERATOR = (
>   PROCEDURE = int8eq,
>   LEFTARG = int8,
>   RIGHTARG = int8,
>   COMMUTATOR = =,
>   RESTRICT = eqsel,
>   JOIN = eqjoinsel,
>   HASHES,
>   MERGES);
> 
> in PostgreSQL 9.5, but I wonder why there is no NEGATOR clause defined
> here? According to the docs it should help to add
> 
>   NEGATOR = <>
> 
> in query optimization. Is there some reason for it? Or is it a bug in
> pgAdmin III that the negator is not shown?

I guess this is a bug in pgAdmin3, because the negator is defined in the
catalog:

# select o1.oprname, o1.oprcode, o1.oprnegate, o2.oprname as negate from
pg_operator o1 join pg_operator o2 on o1.oprnegate = o2.oid where
o1.oprname = '=' and o1.oprleft = 'int8'::regtype and o1.oprright =
'int8'::regtype;

 oprname │ oprcode │ oprnegate │ negate
─────────┼─────────┼───────────┼────────
 =       │ int8eq  │       411 │ <>
(1 row)



-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] plugin dev, oid to pointer map

2016-05-29 Thread Julien Rouhaud
On 29/05/2016 22:10, Attila Soki wrote:
> Hi all,
> 

Hello,

> I am about to begin with postgresql plugin development.
> Currently I'm trying to become somewhat familiar with the postgresql sources.

> 
> Without going too deep into details about the plugin, I want to use
> many Oid-to-pointer relations.
> The pointer is a pointer to my own struct (allocated with palloc).
> There will be approx. 1000 unique oid/pointer pairs.
> 
> Basically, what I want is to be able to get the pointer to my struct by Oid.
> 
> Is there a suitable hashmap or key-value storage solution in the pg code?
> If so, please point me to the right part of the source.
> 

Yes, there's a hashtable implementation; see dynahash.c.

If you want to use it in shared memory in your extension, you can look
at the pg_stat_statements extension (look for pgss_hash) for an example.
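For a backend-local Oid-to-pointer map, a minimal sketch with dynahash (all
names here are illustrative):

    #include "postgres.h"
    #include "utils/hsearch.h"
    #include "utils/memutils.h"

    typedef struct MyEntry
    {
        Oid   key;          /* hash key; must be the first field */
        void *data;         /* pointer to your palloc'd struct */
    } MyEntry;

    static HTAB *my_map = NULL;

    static void
    my_map_init(void)
    {
        HASHCTL ctl;

        MemSet(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(MyEntry);
        ctl.hcxt = TopMemoryContext;  /* keep it for the backend's lifetime */

        my_map = hash_create("my oid map", 1024, &ctl,
                             HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
    }

    /* insert or replace */
    static void
    my_map_set(Oid oid, void *ptr)
    {
        bool     found;
        MyEntry *entry = hash_search(my_map, &oid, HASH_ENTER, &found);

        entry->data = ptr;
    }

    /* lookup, NULL if absent */
    static void *
    my_map_get(Oid oid)
    {
        MyEntry *entry = hash_search(my_map, &oid, HASH_FIND, NULL);

        return entry ? entry->data : NULL;
    }

(HASH_BLOBS is available in 9.5+; on older releases a hash function for the
key must be supplied instead.)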

Regards.


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] index question

2016-05-02 Thread Julien Rouhaud
On 02/05/2016 23:02, drum.lu...@gmail.com wrote:
> 
> Generically speaking, if the total of idx_scan + idx_tup_read +
> idx_tup_fetch is 0, then it is an _indication_ that those indexes
> should be dropped.
> You should also consider how long those indexes have existed and how
> often queries are executed.
> 
> A good practice would be to save the SQL to recreate the indexes
> before you drop any. That way, if you notice a degradation in
> performance, you can just rebuild them.
> You can use the following query to do that, but you might want to
> edit it and add the CONCURRENTLY option.
> 
> SELECT pg_get_indexdef(idx.indexrelid) || ';'
>   FROM pg_stat_all_indexes i
>   JOIN pg_class c ON (c.oid = i.relid)
>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>   JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
>  WHERE NOT idx.indisprimary
>AND NOT idx.indisunique
>AND i.relname NOT LIKE 'pg_%'
>AND i.idx_scan + idx_tup_read + idx_tup_fetch = 0
>ORDER BY n.nspname,
>   i.relname;
> 
> The following query generates the drop statements.
> 
> SELECT 'DROP INDEX CONCURRENTLY IF EXISTS "' ||
> quote_ident(n.nspname) || '"' || '.' || '"' ||
> quote_ident(i.indexrelname) || '"' ||';'
>   FROM pg_stat_all_indexes i
>   JOIN pg_class c ON (c.oid = i.relid)
>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>   JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
>  WHERE NOT idx.indisprimary
>AND i.relname NOT LIKE 'pg_%'
>AND i.idx_scan + idx_tup_read + idx_tup_fetch = 0
>ORDER BY i.indexrelname;
> 
> 
> I would not place any concern on the size of the index. That is just
> what is needed to keep track of all associated rows.
> Once you drop the indexes you determine are not needed, you will
> gain back the space that they use up.
> 
> Please stay in touch and let me know how it goes.
> 
> 
> 
> I will. Thanks for the help/tips!
> 

Be careful: this query discards indexes used for primary keys, but at
least unique (indisunique) and exclusion constraint (indisexclusion)
indexes should also be excluded, and probably also indexes used to
cluster tables (indisclustered).

You should also check for how long the idx_scan and other counters have
been accumulating before dropping any index. Check
pg_stat_get_db_stat_reset_time(oid), with the oid of the related
database(s).
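For instance (a sketch combining both remarks; the extra predicates go into
the queries quoted above):

    -- when were this database's statistics last reset?
    SELECT datname, pg_stat_get_db_stat_reset_time(oid) AS stats_reset
      FROM pg_database;

    -- extra conditions for the DROP-generating query:
    --   AND NOT idx.indisunique
    --   AND NOT idx.indisexclusion
    --   AND NOT idx.indisclustered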

> 
> 
> Cheers
> Lucas


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Confused by the behavior of pg_basebackup with replication slot

2016-03-19 Thread Julien Rouhaud
On 19/03/2016 15:58, Julien Rouhaud wrote:
> Hello,
> 
> On 19/03/2016 15:41, Yi, Yi wrote:
>> Hello, 
>>
>> I had an issue with the behavior of the pg_basebackup command. I was
>> previously convinced that the pg_basebackup command always made a binary
>> copy of the database cluster files of the postgres master. However, I
>> recently noticed that pg_basebackup did not copy the replication slot
>> object of the master, whereas the copy-command-based backup did copy the
>> replication slot object. Is this difference by design?
>>
> 
> Yes.
> 
>> Considering the difference mentioned above, I'm wondering whether there is
>> anything else that pg_basebackup would NOT copy from the master?
>> In other words, what are the no-copy rules of pg_basebackup?
>>
> 
> The full list is documented here:
> http://www.postgresql.org/docs/current/static/protocol-replication.html
> 

Sorry I sent the mail a little too fast. The list is at the end of the page.

> Regards.
> 
>> Any help will be greatly appreciated. Thanks.
>>
>> Best Regards.
>>
> 
> 


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] bloated postgres data folder, clean up

2016-03-03 Thread Julien Rouhaud
> To be extra sure, you should use the oid2name program to check that
> the useless files are really useless.
> 
> 
> For this:
> 
>  * Output the list of potentially useless files with COPY, for instance:
> 
>        COPY ( SELECT file_name
>               FROM find_useless_postgres_file('your_database_name')
>        ) TO 'path_to_your_database_folder/potential_useless.txt'
> 
>    Now you've got a file with a list of potentially erroneous files.
> 
>  * Then use oid2name:
> 
>        $ su postgres
>        $ cd path_to_your_database_folder
>        $ while read i; do oid2name -f "$i" -i -S -q -d \
>              your_database_name; done < potential_useless.txt
> 
>    Nothing should show, meaning that none of the potentially erroneous
>    files have been recognized by oid2name!
> 
>    If you feel unconvinced, you can manually try oid2name on some of the
>    potentially erroneous files, to be extra sure.
>    It should not find anything.
> 
>  * Now delete all the files in `potential_useless.txt`.
> 
>    It could be wiser not to delete the files but to rename them
>    (for instance, adding `.potentially_useless` as a suffix),
>    so if anything breaks, you have an easy way to revert everything.
> 
> Anyway, use extra, extra caution if you delete.
> Short of a backup, there would be no easy way to correct a mistake.
> 
> Cheers,
> 
> Rémi-C
> 
> 2016-03-02 15:38 GMT+01:00 Johnny Morano <johnny.mor...@payon.com>:
> 
> Hi Remi!
> 
>  
> 
> This SQL function you have provided seems to return all valid
> files, is that correct? In my case, it returned all my ‘base/’
> files. Is that normal?
> 
> If yes, maybe you should rename the function to
> ‘find_useful_postgres_files’ ;-)
> 
>  
> 
> Could you explain in steps how to use this function to make a
> cleanup of bloated data? (like in an example with commands and
> example output, if possible of course)
> 
>  
> 
> Thanks!
> 
> From: pgsql-general-ow...@postgresql.org On Behalf Of Rémi Cura
> Sent: Wednesday, 2 March 2016 14:58
> To: Alvaro Herrera
> Cc: PostgreSQL General
> Subject: Re: [GENERAL] bloated postgres data folder, clean up
> 
> Would gladly do it,
> but still this "wiki cooloff" stuff,
> can't create a page
> 
> Cheers,
> Rémi-C
> 
> 
> 2016-02-29 20:44 GMT+01:00 Alvaro Herrera <alvhe...@2ndquadrant.com>:
> 
> Rémi Cura wrote:
> > Hey dear list,
> > after a fex years of experiments and crash,
> > I ended up with a grossly bloated postgres folder.
> > I had about 8 Go of useless files.
> 
> Would you add a new page to the wiki with this?
> 
> https://wiki.postgresql.org/wiki/Category:Administrative_Snippets
> 
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
> 


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] RLS on catalog tables would be helpful

2016-03-02 Thread Julien Rouhaud
On 02/03/2016 20:56, Joe Conway wrote:
> On 03/02/2016 11:53 AM, Joshua D. Drake wrote:
>> On 03/02/2016 11:37 AM, Joe Conway wrote:
>>
>>> http://www.postgresql.org/message-id/flat/CA+Tgmoa=4vti1hb1hta0+qbzlojkpjbd5dkvw3zmp-kdwje...@mail.gmail.com#CA+Tgmoa=4vti1hb1hta0+qbzlojkpjbd5dkvw3zmp-kdwje...@mail.gmail.com
>>>
>>> It would be good for you to add your thoughts on your use case and
>>> specific functionality you would require to that thread.
>>
>> And how would one do that? (Not trying to be difficult, I really have no
>> idea how to join a thread that I have no email archive for).
> 
> I thought there was once a link somewhere on the mail archives to get a
> specific email resent, but for the life of me I cannot find it today :-/
> 

It's only available in majordomo AFAIK. For instance:
https://lists.postgresql.org/mj/mj_wwwusr?list=pgsql-hackers=on=archive-get-part=201602/753

Once you log in, you'll find the "Mail this message to..." link at the
bottom of the page.

> However, if you view the raw message (there is a link for that on the
> archives), save it locally, and then open it in your email client, you
> can then hit "reply-all".
> 
> HTH,
> 
> Joe
> 


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Creating table with data from a join

2015-07-14 Thread Julien Rouhaud
On 14/07/2015 11:12, Igor Stassiy wrote:
> Hello,
> 
> I am benchmarking different ways of putting data into a table on table
> creation:
> 
> 1. INSERT INTO c SELECT * FROM a JOIN b ON a.id = b.id;
> 2. CREATE TABLE c AS SELECT * FROM a JOIN b ON a.id = b.id;
> 3. psql -c "COPY (SELECT * FROM a JOIN b ON a.id = b.id) TO STDOUT" |
>    parallel --block 128M --jobs 4 --pipe psql -c "COPY c FROM STDIN";
> 
> (the parallel command is available as part of the parallel deb package in
> Ubuntu for example; it splits the stdin by newline character and feeds
> it to the corresponding command)
> 
> Both tables a and b have ~16M records and one of the columns in a is
> geometry (ranging from several KB in size to several MB). Columns in b
> are mostly integers.
> 
> The machine that I am running these commands on has the following
> parameters:
> 
> default_statistics_target = 50 # pgtune wizard 2012-06-06
> maintenance_work_mem = 1GB # pgtune wizard 2012-06-06
> constraint_exclusion = on # pgtune wizard 2012-06-06
> checkpoint_completion_target = 0.9 # pgtune wizard 2012-06-06
> effective_cache_size = 48GB # pgtune wizard 2012-06-06
> work_mem = 80MB # pgtune wizard 2012-06-06
> wal_buffers = 8MB # pgtune wizard 2012-06-06
> checkpoint_segments = 16 # pgtune wizard 2012-06-06
> shared_buffers = 16GB # pgtune wizard 2012-06-06
> max_connections = 400 # pgtune wizard 2012-06-06
> 
> One would expect the 3rd option to be faster than 1 and 2; however, 2
> outperforms both by a large margin (sometimes x2). This is especially
> surprising taking into account that COPY doesn't acquire a global lock
> on the table, only a RowExclusiveLock (according to
> http://www.postgresql.org/message-id/10611.1014867...@sss.pgh.pa.us)

What is your wal_level value? I think this is because of an optimisation
happening with wal_level = minimal:

"In minimal level, WAL-logging of some bulk operations can be safely
skipped, which can make those operations much faster"

see http://www.postgresql.org/docs/current/static/runtime-config-wal.html

> So is option 2 a winner by design? Could you please suggest other
> alternatives to try (if there are any)? And what might be the reason
> that 3 is not outperforming the other 2?
> 
> Thank you,
> Igor


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Creating table with data from a join

2015-07-14 Thread Julien Rouhaud

On 14/07/2015 18:21, Igor Stassiy wrote:
> Julien, I have the following setting for WAL level: #wal_level =
> minimal (which defaults to minimal anyway)
> 
> [...]


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Creating table with data from a join

2015-07-14 Thread Julien Rouhaud
On 14/07/2015 18:21, Igor Stassiy wrote:
> Julien, I have the following setting for WAL level: #wal_level = minimal
> (which defaults to minimal anyway)

Sorry, I sent my mail too early :/

So, option #2 is the winner by design. You didn't say anything about your
needs, so it's hard to help you much more.

If you don't mind losing the data in this table if your server crashes,
you can try option #3 with an unlogged table.


> On Tue, Jul 14, 2015 at 6:19 PM Julien Rouhaud
> <julien.rouh...@dalibo.com> wrote:
> 
> [...]

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Creating table with data from a join

2015-07-14 Thread Julien Rouhaud
On 14/07/2015 18:50, Igor Stassiy wrote:
> Julien, I would gladly provide more information, I am just not sure what
> to add.

Well, was your concern about why option #2 is the quickest, or is the
runtime of option #2 still too slow for you?

> I would be willing to risk things like corruption or data loss during
> this import, but the server has to be up and running before and after
> the import, if it is successful (so I can't take it down, change some
> parameters and start it up again).

Check http://www.postgresql.org/docs/current/static/sql-createtable.html
(the UNLOGGED part) to see if an unlogged table is suitable for you.
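For instance, a sketch of option #3 with an unlogged target (note that
ALTER TABLE ... SET LOGGED only exists in 9.5+; before that the table has
to stay unlogged):

    CREATE UNLOGGED TABLE c AS
        SELECT * FROM a JOIN b ON a.id = b.id
        WITH NO DATA;

    -- ... bulk load with the COPY/parallel pipeline from option #3 ...

    ALTER TABLE c SET LOGGED;   -- rewrites the table, WAL-logging it once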

 
> On Tue, Jul 14, 2015 at 6:37 PM Julien Rouhaud
> <julien.rouh...@dalibo.com> wrote:
> 
> [...]



Re: [GENERAL] Oracle to PostgreSQL Migration - Need Information

2015-07-09 Thread Julien Rouhaud

On 08/07/2015 22:25, CS DBA wrote:
 
 
 On 07/08/2015 02:20 PM, John R Pierce wrote:
 On 7/8/2015 1:16 PM, dinesh kumar wrote:
 We recently did a similar migration for one of our customers.
 We used all open-source tools to achieve this migration process.
 
 We used the Pentaho Data Integration tool for the online DB migration,
 which took minimal downtime with a CDC (Change Data Capture) approach.
 Also, we used the Ora2Pg tool to migrate the DB objects with some
 manual syntax modifications.
 
 
 that's the easy part.
 
 now what about the massive code base of pl/sql and triggers he
 mentioned?
 
 
 
 Have you considered using ora2pg? http://ora2pg.darold.net/
 
 We've done several client migrations with it, quite successfully
 
 
 

ora2pg is definitely the tool you need.  It can give you an estimated
total migration time (in man-days), with a call like:
ora2pg -c ora2pg.conf -t SHOW_REPORT --dump_as_html --estimate_cost

You can check a sample report here: http://ora2pg.darold.net/report.html

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Why does the range type's upper function behave inconsistently?

2015-07-05 Thread Julien Rouhaud

Hello,

On 05/07/2015 19:13, Dane Foster wrote:
 I don't understand the inconsistent behavior of the range types'
 upper function in regard to inclusive ranges.
 
 For example(s):
 1. SELECT upper(int4range(1, 4, '[]')) = 4;   -- FALSE
 2. SELECT upper(int8range(1, 4, '[]')) = 4;   -- FALSE
 3. SELECT upper(numrange(1, 4, '[]')) = 4;    -- TRUE
 4. SELECT upper(tstzrange('2015-07-01 00:00:00', now(), '[]')) = now();
    -- TRUE
 5. SELECT upper(daterange('2015-01-01', current_date, '[]')) = current_date;
    -- FALSE
 
 #1 & #2 are FALSE because upper returns 5 instead of 4; and #5 is
 FALSE because upper returns current_date + interval '1 day'. I
 don't understand the logic behind why it would return the inclusive
 upper bound value for some ranges and not others. If anyone can
 shed some light on this behavior, it would be greatly appreciated.
 
 One of things I originally tried to use upper for was CHECK
 constraints. That was until I wrote some unit tests and realized
 that upper doesn't consistently work the way I expected. Of course
 my assumptions are probably wrong so that's why I'm asking for
 clarification.
 

Because for discrete range types, the canonical form is used, which is
[). Check
http://www.postgresql.org/docs/current/static/rangetypes.html and the
discrete range types paragraph.
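For instance:

    SELECT upper(int4range(1, 4, '[]'));  -- 5: canonicalized to [1,5)
    SELECT upper(numrange(1, 4, '[]'));   -- 4: continuous, stays [1,4]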

Regards.

 Regards,
 
 Dane


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] Is there any way to measure disk activity for each query?

2015-06-19 Thread Julien Rouhaud
On 19/06/2015 01:07, Jeff Janes wrote:
 On Thu, Jun 18, 2015 at 3:05 PM, Oleg Serov sero...@gmail.com wrote:
 
 Hello!
 
 I'm wondering, if there any way to measure how much disk-io were
 generated by a query?
 
 
 For an individual query execution, you can explain it with 
 
 explain (analyze, buffers) select ...
 
 It will report on the pages hit in the buffer cache versus the pages
 read.  However, for pages which were hit in the OS filesystem cache,
 those will be reported as if they were read from disk.  There is no way
 (that I know of) to distinguish at the statement level true disk io from
 OS caching.  The best way may be to turn track_io_timing on, then you
 can see how much time it spent waiting on pages.  If not much time was
 spent, then it must be coming from the OS cache.
 
 If you enable pg_stat_statements extension, you can get the same data
 summed over all natural calls of the same query string.  'Natural'
 meaning executions from applications, not just queries manually
 decorated with 'explain (analyze,buffers)'.  This too is best used in
 conjunction with track_io_timing.
 
 I've been thinking about making individual statements which
 exceed log_min_duration_statement log their track_io_timing numbers and
 their rusage numbers into the server log, rather than just their
 wall-time durations as it does now.  I'm not sure how that idea is going
 to work out yet, though.  Anyway, it wouldn't be until version 9.6 at
 minimum.
 
 Also, for temp file, see log_temp_files config parameter.
 

Also, if you need the current disk activity for a query, you can use tools
like pg_activity or pg_view to monitor it.

And if you are using postgres 9.4 or later, you can also use the
pg_stat_statements and pg_stat_kcache extensions to get actual disk reads
and writes for all normalized queries.
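For instance, a sketch of how to spot the most read-intensive normalized
queries with pg_stat_statements (blk_read_time and blk_write_time stay at
zero unless track_io_timing is on):

    SELECT query, calls, shared_blks_read, blk_read_time, blk_write_time
      FROM pg_stat_statements
     ORDER BY blk_read_time DESC
     LIMIT 10;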

Regards.

 Cheers,
 
 Jeff


-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] POWA tool

2014-08-20 Thread Julien Rouhaud
On Wed, Aug 20, 2014 at 6:46 PM, Thomas Kellerer spam_ea...@gmx.net wrote:

 Ramesh T wrote on 20.08.2014 at 17:41:
  Hello,
 
   when I ran the following query:
 
   postgres=# SELECT * FROM pg_stat_statements;
   ERROR:  relation "pg_stat_statements" does not exist
   LINE 1: SELECT * FROM pg_stat_statements;
 
  I need to install POWA.. I got powa.zip

 No, you need to install the extension pg_stat_statements:

 http://www.postgresql.org/docs/current/static/pgstatstatements.html
 http://www.postgresql.org/docs/current/static/sql-createextension.html



You also need the extensions plpgsql (which should already be installed)
and btree_gist.

When installing the powa extension (CREATE EXTENSION powa;), postgres will
warn you of any missing dependency.

You can also refer to the installation documentation (
https://github.com/dalibo/powa/blob/master/README.md).
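The overall installation then looks roughly like this (a sketch from memory;
the README above is the authoritative reference):

    # postgresql.conf (requires a restart):
    shared_preload_libraries = 'pg_stat_statements,powa'

    -- then, in the database POWA should work in:
    CREATE EXTENSION pg_stat_statements;
    CREATE EXTENSION btree_gist;
    CREATE EXTENSION powa;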

Regards.
--
Julien Rouhaud
http://www.dalibo.com


Re: [GENERAL] pg_database_size differs from df -s

2012-06-06 Thread Julien Rouhaud
On Wed, Jun 6, 2012 at 6:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Frank Lanitz fr...@frank.uvena.de writes:
  Am 06.06.2012 17:49, schrieb Tom Lane:
  For me, pg_database_size gives numbers that match up fairly well with
  what du says.  I would not expect an exact match, since du probably
  knows about filesystem overhead (such as metadata) whereas
  pg_database_size does not.  Something's fishy if it's off by any large
  factor, though.  Perhaps you have some tables in a nondefault
  tablespace, where du isn't seeing them?

  Nope. Its a pretty much clean database without any fancy stuff.

 Peculiar.  If you want to put some time into it, you could try comparing
 sizes table-by-table to see if you can isolate where the discrepancy is.


Perhaps with the adminpack contrib module you can easily find where the
discrepancy comes from, by comparing the sizes reported by pg_table_size
and pg_stat_file?


 The only reason I can think of for du to report a size smaller than the
 nominal file length (which is which the pg_xxx_size functions look at)
 is if the file contains unallocated holes.  That really shouldn't ever
 happen with PG tables though.

regards, tom lane




Re: [GENERAL] [ADMIN] pg_dump : no tables were found.

2012-03-06 Thread Julien Rouhaud
On Tue, Mar 6, 2012 at 7:22 AM, Piyush Lenka lenka.piy...@gmail.com wrote:

 Hi,

 I m trying to take backup of data of a particular table using pg_dump.
 I used double quotes for table name but output is :
 pg_dump : no tables were found.

 Command used:
 pg_dump -h localhost -p 5432 -U postgres -W -F p -a -t 'TestTable' -f
 DbBackup/BackupTableActions.sql TestDataBase

 This problem only shows up when there is an upper-case character in my
 table name.
 Please help.

 Thanks & Regards
 Piyush


Hi,
You can try -t '"TestTable"' or -t \"TestTable\", so that the mixed-case
name is passed to the server double-quoted.


Re: [GENERAL] Postgres Account Inherit Question

2011-10-14 Thread Julien Rouhaud
On Fri, Oct 14, 2011 at 6:38 PM, Carlos Mennens carlos.menn...@gmail.com wrote:

 I've configured my 'pg_hba.conf' file to look as follows:

 # local is for Unix domain socket connections only
 local   all all   md5
 # IPv4 local connections:
 hostall all 127.0.0.1/32  md5
 hostall all 192.168.0.0/24md5

 Now I've reloaded / restarted the PostgreSQL daemon however for some
 reason when I use the 'postgres' user locally, it never prompts for a
 password in 'psql'. I've altered the role to NOINHERIT

 postgres=# ALTER ROLE postgres NOINHERIT;
 ALTER ROLE

 Any other role locally requires a password to even list the database
 using 'psql -l' command except the 'postgres' role. Is this normal
 behavior or am I missing something here? How can I force the postgres
 account to be prompted for a password when communicating to the
 database server locally?


Hi,
Did you check for a .pgpass file?


Re: [GENERAL] Confused About pg_* Tables

2011-10-14 Thread Julien Rouhaud
On Fri, Oct 14, 2011 at 6:06 PM, Carlos Mennens carlos.menn...@gmail.com wrote:

 I'm confused about how I'm able to access the following pg_* tables
 regardless of connected database. I thought these tables were hidden
 or stored in the 'postgres' database but I'm still able to access this
 data regardless of which database I'm connected to:

 Code:

 zoo=# SELECT * FROM pg_user;
  usename  | usesysid | usecreatedb | usesuper | usecatupd | userepl | passwd | valuntil | useconfig
 ----------+----------+-------------+----------+-----------+---------+--------+----------+-----------
  postgres |       10 | t           | t        | t         | t       |        |          |
  carlos   |    16384 | t           | t        | t         | t       |        |          |
 (2 rows)

 When I use my tab key in 'psql' after the 'FROM' statement, I'm
 presented with a ton of what I presume to be tables however when I
 check for tables in the 'postgres' database, I get nothing. I'm
 confused...

 Code:

 psql (9.1.1, server 9.1.1)
 You are now connected to database postgres.
 postgres=# \d
 No relations found.


These objects are in the schema pg_catalog, not public.


Re: [GENERAL] Confused About pg_* Tables

2011-10-14 Thread Julien Rouhaud
On Fri, Oct 14, 2011 at 6:57 PM, Julien Rouhaud rjuju...@gmail.com wrote:

 On Fri, Oct 14, 2011 at 6:06 PM, Carlos Mennens wrote:
 
 [...]


I forgot, type \dS to show system objects.


[GENERAL] Using constraint exclusion with 2 floats

2011-10-12 Thread Julien Rouhaud
Hi everyone,
Is there an easy way (that I maybe missed) to use constraint exclusion with
2 floats?
Must I find an extension like the temporal extension, which creates a new
type from 2 timestamps, or can I use the && operator with 2 fields?

Thank you


Re: [GENERAL] Using constraint exclusion with 2 floats

2011-10-12 Thread Julien Rouhaud
Thank you for your answer.

I'm sorry, I really didn't explain my problem well :/

For example, if I have a table test like this:

CREATE TABLE test (min real not null, max real not null, "desc" character
varying not null);

and I want an exclusion constraint to make sure the min/max ranges don't
overlap,

I can't write: ALTER TABLE test ADD CONSTRAINT test_exclude EXCLUDE USING
btree ((min,max) WITH &&)

I saw the temporal extension gives a new type PERIOD and has operators like
&&, but only for timestamps, so I'm wondering if I must code something
similar or if there is an easier way.
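(Range types, added in PostgreSQL 9.2 after this thread, make this directly
expressible; a sketch:)

    ALTER TABLE test ADD CONSTRAINT test_exclude
        EXCLUDE USING gist
            ((numrange(min::numeric, max::numeric, '[]')) WITH &&);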



On Wed, Oct 12, 2011 at 11:23 PM, Simon Riggs si...@2ndquadrant.com wrote:

 On Wed, Oct 12, 2011 at 10:16 AM, Julien Rouhaud rjuju...@gmail.com
 wrote:
   Hi everyone,
   Is there an easy way (that I maybe missed) to use constraint exclusion
   with 2 floats?
   Must I find an extension like the temporal extension, which creates a
   new type from 2 timestamps, or can I use the && operator with 2 fields?

 There's nothing in constraint exclusion that depends upon specific
 datatypes.

 Let us know if you find a problem with floats.

 --
  Simon Riggs   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Are file system level differential/incremental backups possible?

2011-10-12 Thread Julien Rouhaud
As there's one file for each object, a single update in each would make you
copy the whole file again. I've heard there are tools that make differential
copies of parts of a file, but I don't know if they're really efficient.

Anyway, a better way for you would be to do a regular base backup (with
pg_start_backup, a file copy and pg_stop_backup) and then use the WAL
archive_command to keep the xlogs between 2 full backups.
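A sketch of that sequence (the backup label and archive path are
illustrative):

    -- around the weekly file-system copy:
    SELECT pg_start_backup('weekly_full');
    -- ... file-level copy of the whole data directory ...
    SELECT pg_stop_backup();

    # postgresql.conf, to keep the WALs between 2 full backups:
    archive_mode = on
    archive_command = 'cp %p /backup/wal/%f'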

On Wed, Oct 12, 2011 at 11:30 PM, Bob Hatfield bobhatfi...@gmail.com wrote:

 Is it possible to do a full file system level backup of the data
 directory, say once a week, and differentials or incrementals daily?

 I'm wondering if there are files that would normally be removed that a
 restore: Full then diff/inc would not remove and perhaps
 corrupt/confuse things.

 Process:
 Saturday: Full backup (reset archive bits) of data dir with database
 shutdown
 Sunday: Differential (don't reset archive bits) of data dir with
 database shutdown
 Monday: Differential (don't reset archive bits) of data dir with
 database shutdown
 Wednesday: Restore to test server using Saturday's Full and Monday's
 Differential.

 Obviously this works for regular files/file systems; however, I'm not
 sure this is a good method with postgresql as the resulting data dir
 *may* (?) contain extra files (or other issues)?

 Note: our database is 850GB (Windows 2008 R2 pg version 8.3.12)




Re: [GENERAL] Are file system level differential/incremental backups possible?

2011-10-12 Thread Julien Rouhaud
On Thu, Oct 13, 2011 at 12:04 AM, Bob Hatfield bobhatfi...@gmail.com wrote:

  Anyway, a better way for you would be to do a regular backup (with
 pg_start_backup, copy and pg_stop_backup) and then use wal archive_command
 to keep the xlogs between 2 full backups.

 Thanks Julien.  Can pg_start/stop_backup() be used for regular full
 file system backups?   All of the documentation I've read only refers
 to using those for warm standby/wal shipping methods.


Yes, and it's the only way to do a file-level backup without stopping the
server. Be careful: the command can take a while, as it forces a checkpoint
(see the doc for more details).

It's used for warm standby to create the initial copy of the server; the
WALs generated afterwards are then replayed to continue restoring it.