Re: [GENERAL] pg_dump and quoted identifiers

2016-12-14 Thread Adrian Klaver

On 12/13/2016 11:18 PM, Thomas Kellerer wrote:

Tom Lane schrieb am 13.12.2016 um 19:35:

These cases work for me.  Maybe your shell is doing something weird
with the quotes?



Hmm, that's the default bash from CentOS 6 (don't know the exact version)


I'm using bash from current RHEL6, should be the same.

I'm suspicious that you're not actually typing plain-ASCII single and
double quotes, but some fancy curly quote character.


Definitely not. I typed this manually on the command line using Putty


So you are reaching the Bash shell via Putty on a Windows machine, correct?

So have you tried the answer from the SO question?:

"\"Statuses\""



Bash version is "GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)"








--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Install doesnt set SuperUser PW

2016-12-13 Thread Adrian Klaver

On 12/13/2016 02:46 PM, Jovi Federici wrote:

OMG it worked! I just threw in a PW I had used for something else and it
worked! Must have been your good vibes :-)


Best guess, at some point in the past you did an install where you supplied 
a password and the installer stored it somewhere, so it did not need 
it when you reinstalled.




Inline image 1

On Tue, Dec 13, 2016 at 5:44 PM, Jovi Federici <jovi.feder...@gmail.com> wrote:

I'm going to try the new installer at bigsql, thanks!

Did the Postgres server actually get installed? - I think so. I have
Postgres in Programs folder

Do you have a postgres OS user? - I don't know. I assume the
postgres user is created upon install. I have not done anything
(create user) after install.

Can you connect to the server in any manner? - This is as far as I get:

Inline image 1


On Tue, Dec 13, 2016 at 5:35 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 12/13/2016 02:08 PM, Jovi Federici wrote:

Just re-ran installer in Administrator mode with no difference.
Correction on previous statement: the installer does ask for paths but
does not ask for a PW.


Dept of Late thoughts:

Did the Postgres server actually get installed?

Do you have a postgres OS user?

Can you connect to the server in any manner?



-Jovi



    --
Adrian Klaver
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>




--
    Jovi




--
Jovi



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Install doesnt set SuperUser PW

2016-12-13 Thread Adrian Klaver

On 12/13/2016 02:44 PM, Jovi Federici wrote:

I'm going to try the new installer at bigsql, thanks!


I would uninstall the EDB version first.



Did the Postgres server actually get installed? - I think so. I have
Postgres in Programs folder

Do you have a postgres OS user? - I don't know. I assume the postgres
user is created upon install. I have not done anything (create user)
after install.

Can you connect to the server in any manner? - This is as far as I get:

Inline image 1



So what happens if you just hit Enter without supplying a password?

Or if you do?:

psql -U postgres



On Tue, Dec 13, 2016 at 5:35 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 12/13/2016 02:08 PM, Jovi Federici wrote:

Just re-ran installer in Administrator mode with no difference.
Correction on previous statement: the installer does ask for paths but
does not ask for a PW.


Dept of Late thoughts:

Did the Postgres server actually get installed?

Do you have a postgres OS user?

Can you connect to the server in any manner?



-Jovi



--
Adrian Klaver
adrian.kla...@aklaver.com




--
Jovi



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Install doesnt set SuperUser PW

2016-12-13 Thread Adrian Klaver

On 12/13/2016 02:08 PM, Jovi Federici wrote:

Just re-ran installer in Administrator mode with no difference.
Correction on previous statement: the installer does ask for paths but
does not ask for a PW.


Dept of Late thoughts:

Did the Postgres server actually get installed?

Do you have a postgres OS user?

Can you connect to the server in any manner?



-Jovi




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Install doesnt set SuperUser PW

2016-12-13 Thread Adrian Klaver

On 12/13/2016 02:08 PM, Jovi Federici wrote:

Just re-ran installer in Administrator mode with no difference.
Correction on previous statement: the installer does ask for paths but
does not ask for a PW.


I have no idea at this point. I do not run Windows anymore, so I have 
nothing to test against. My suggestion would be to try here:


http://forums.enterprisedb.com/forums/show/9.page


Or if you are willing to try something different:

https://www.bigsql.org/postgresql/installers.jsp



-Jovi



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Install doesnt set SuperUser PW

2016-12-13 Thread Adrian Klaver

On 12/13/2016 01:49 PM, Jovi Federici wrote:

Hi Adrian, I did the graphical install as described in

https://www.enterprisedb.com/docs/en/9.6/instguide/PostgreSQL_Installation_Guide.1.08.html#

Except I was not asked to input anything. I didn't input paths, I didn't
input a PW.

It just ran and finished and that was it.


Did you do this part?:

"Note that in some versions of Windows, you can invoke the installer 
with Administrator privileges by right clicking on the installer icon 
and selecting Run as Administrator from the context menu."




BTW, I had installed same previously then ran the uninstall.exe and
re-installed because I forgot if I had entered a PW the first time but I
really don't think I ever did.

Thanks for your help!

Jovi




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CRM where pg is a first class citizen?

2016-12-13 Thread Adrian Klaver

On 12/13/2016 12:36 PM, George Weaver wrote:

I've never used it but what about:

https://developer.sugarcrm.com/2012/08/03/like-postgresql-and-want-to-use-it-with-sugarcrm-check-out-a-new-community-project/


This killed the community (Open Source) edition going forward:

https://community.sugarcrm.com/thread/18434




Cheers,
George

On 13/12/2016 2:24 PM, Joshua D. Drake wrote:

On 12/13/2016 12:19 PM, John R Pierce wrote:

On 12/13/2016 10:19 AM, Joshua D. Drake wrote:

I was looking for a open source CRM, PHP or python based, with a large
community where Postgresql is a first class citizen.

I'd prefer ease of use over features.


DjangoCMS
Drupal 7+


CRM (Customer Relationship Manager) != CMS (Content Management System).


Correct, sorry, even after I read your email I had to read it twice.
Go Acronyms!

Tryton is an ERP that has CRM functionality. Although it is a bit of a
beast.

JD




--
john r pierce, recycling bits in santa cruz









--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Install doesnt set SuperUser PW

2016-12-13 Thread Adrian Klaver

On 12/13/2016 12:46 PM, Jovi Federici wrote:

Hi Postgres,

My installation of Postgres 9.6 did not include any setting of a
SuperUser password so I'm at a loss.


How did you determine this?



Do I have a bad installation?

I got it here:

http://www.enterprisedb.com/postgresql-961-installers-win64

I'm running Windows 7 in 64 bit VM on a Mac Pro.

I'm new to this stuff ok? Please go easy on me :-)


So did you do the following?:

https://www.enterprisedb.com/docs/en/9.6/instguide/PostgreSQL_Installation_Guide.1.08.html#

If so, around step 4 you created the password.



Thanks!
--
Jovi



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] btree gist indices, null and open-ended tsranges

2016-12-12 Thread Adrian Klaver
On 12/11/2016 11:34 PM, Chris Withers wrote:
> On 01/12/2016 12:12, Francisco Olarte wrote:
>> On Thu, Dec 1, 2016 at 12:56 PM, Chris Withers
>> <ch...@simplistix.co.uk> wrote:
>>> So, first observation: if I make room nullable, the exclude
>>> constraint does
>>> not apply for rows that have a room of null. I guess that's to be
>>> expected,
>>> right?
>>
>> I would expect it, given:
>>
>> n=> select null=null, null<>null, not (null=null);
>>  ?column? | ?column? | ?column?
>> --+--+--
>>   |  |
>> (1 row)
>>
>> Those are nulls,
> 
> Yes, it's a shame psql has the same repr for null and empty-string ;-)

test=# select NULL;
 ?column?
----------

(1 row)

test=# \pset null 'NULL'
Null display is "NULL".
test=# select NULL;
 ?column?
----------
 NULL
(1 row)

> 
>> n=> select (null=null) is null, (null<>null) is null, (not
>> (null=null)) is null;
>>  ?column? | ?column? | ?column?
>> --+--+--
>>  t| t| t
>> (1 row)
>>
>> I.e., the same happens with a nullable unique column, you can have one
>> of each not null values and as many nulls as you want.
>>
>> SQL null is a strange beast.
> 
> Sure, I think that was the answer I was expecting but not hoping for...
> 
> However, my "next question" was the one I was really hoping for help with:
> 
> Working with the exclude constraint example from
> https://www.postgresql.org/docs/current/static/rangetypes.html:
> 
> CREATE EXTENSION btree_gist;
> CREATE TABLE room_reservation (
> room text,
> during tsrange,
> EXCLUDE USING GIST (room WITH =, during WITH &&)
> );
> 
> Next question: if lots of rows have open-ended periods
> (eg: [, 2010-01-01 15:00) or [2010-01-01 14:00,)), how does that affect
> the performance of the btree gist index backing the exclude constraint?
> 
> Tom Lane made a comment on here but never followed up with a definitive
> answer. Can anyone else help?
> 
> cheers,
> 
> Chris
> 
> 
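
To tie this back to the NULL discussion above: the exclusion check uses
room WITH =, and NULL = NULL evaluates to NULL rather than true, so two
reservations with a NULL room never conflict. A sketch against the
room_reservation table quoted above:

INSERT INTO room_reservation VALUES
    (NULL, '[2010-01-01 14:00, 2010-01-01 15:00)');
INSERT INTO room_reservation VALUES
    (NULL, '[2010-01-01 14:30, 2010-01-01 15:30)');
-- the second insert succeeds despite the overlap, because the room
-- comparison returns NULL and the constraint treats that as no conflict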


-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Importing SQLite database

2016-12-10 Thread Adrian Klaver

On 12/10/2016 06:56 PM, Igor Korot wrote:

Hi,

On Sat, Dec 10, 2016 at 2:50 PM, John R Pierce <pie...@hogranch.com> wrote:

On 12/10/2016 11:32 AM, Igor Korot wrote:


Looking
athttps://www.postgresql.org/docs/9.5/static/datatype-numeric.html,
I don't see a 'boolean' as supported data type.



thats because boolean isn't a number, its a true/false value.

https://www.postgresql.org/docs/current/static/datatype-boolean.html


OK, I changed all 0s to "FALSE".
The script did run but then failed silently (without error). No COMMIT
was executed.


Sounds like you did:

BEGIN;

Your commands


and did not do a COMMIT; before exiting the session.
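
In other words, the script needs to end with an explicit COMMIT; a sketch
using a statement from your dump:

BEGIN;
INSERT INTO players VALUES(1, 'Player1', 'D');
-- ... rest of the script ...
COMMIT;  -- without this, all of the above is rolled back when the session exits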



Is there any way to find which statement failed?

I can attach the script for you guys to review - I just don't know if
this ML OKs the attachment.

Thank you for a suggestion.



--
john r pierce, recycling bits in santa cruz




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] huge table occupation after updates

2016-12-10 Thread Adrian Klaver

On 12/10/2016 10:15 AM, Tom DalPozzo wrote:



2016-12-10 18:30 GMT+01:00 Francisco Olarte <fola...@peoplecall.com>:

A couple of things first.

1.- This list encourages inline replying, editing the text, and frowns
upon top posting.

2.- Your HTML formatting with so a small size makes it harder for me (
and I can assume some others ) to properly read your messages.

If you want to discourage people replying to you, keep doing the two
above.

On Sat, Dec 10, 2016 at 3:25 PM, Tom DalPozzo <t.dalpo...@gmail.com> wrote:
> you're right, VACUUM FULL  recovered the space, completely.

Well, it always does. ;-)

> So, at this point I'm worried about my needs.
> I cannot issue vacuum full as I read it locks the table.

Well, first hint of your needs. Bear in mind vacuum fulls can be very
fast on small tables ( i.e., if you have the 1.5Mb table, and do 2000
updates and then a vacuum full that will be very fast, time it ).

> In my DB, I (would) need to have a table with one bigint id field +
> 10 bytea fields, 100 bytes long each (more or less, not fixed).
> 5/10000 rows maximum, but let's say 5000.
> As traffic I can suppose 10000 updates per row per day (spread over
> groups of hours); each update involving two of those fields, randomly.
> Also rows are chosen randomly (in my test I used a block of 2000 just
> to try one possibility).
> So, it's a total of 50 million updates per day, hence (50 million * 100
> bytes * 2 fields updated) 10 GB net per day.

Not at all. That's the volume of updated data, you must multiply by
the ROW size, not just the changed size, in your case 50M * 1100 ( to
have some wiggle room ), 55Gbytes.

But this is the UPPER BOUND you asked for. Not the real one.

> I'm afraid it's not possible, according to my results.

It certainly is. You can set a very aggressive autovacuum schedule for
the table, or even better, you may vacuum AFTER each hourly update.
This will mark dead tuples for reuse. It will not be as fast, but it
can certainly be fast enough.

And, if you only update once an hour, you may try other tricks ( like
copy to a temp table, truncate the original and insert the temp table
in the original, although I fear this will lock the table too, but it
will be a very short time, your readers may well tolerate it. )

Yours seem a special app with special need, try a few, measure, it is
certainly possible.

Francisco Olarte.


Hi, I think you're right. I was surprised by the huge size of the
tables in my tests, but I had not considered the vacuum properly.
My test had really heavy activity, so perhaps the autovacuum didn't have
time to make the rows reusable.
Also, issuing a plain VACUUM command does nothing visible at once; the
effect only shows later, when inserting new rows no longer grows the table.
I will try again as you suggest.


To make more sense of this I would suggest reading the following 
sections of the manual:


https://www.postgresql.org/docs/9.5/static/routine-vacuuming.html

https://www.postgresql.org/docs/9.5/static/mvcc.html


There is a lot of ground covered in the above, more than can be digested 
in one pass but it will help provide some context for the 
answers/suggestions provided in this thread.
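
As a concrete illustration of the vacuum advice quoted above (a sketch
only; the settings and table name are illustrative, not a recommendation):

-- make autovacuum look at this table after far fewer dead rows than the default
ALTER TABLE mytable SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold = 1000
);

-- or run a plain (non-FULL) VACUUM right after each hourly batch; it marks
-- dead tuples reusable without taking the exclusive lock VACUUM FULL needs
VACUUM mytable;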



Thank you very much
Pupillo





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] huge table occupation after updates

2016-12-10 Thread Adrian Klaver

On 12/10/2016 09:30 AM, Francisco Olarte wrote:

A couple of things first.

1.- This list encourages inline replying, editing the text, and frowns
upon top posting.

2.- Your HTML formatting with so a small size makes it harder for me (
and I can assume some others ) to properly read your messages.


+1. I either had to Ctrl-+ or put the 'readers' on :)



If you want to discourage people replying to you, keep doing the two above.

On Sat, Dec 10, 2016 at 3:25 PM, Tom DalPozzo <t.dalpo...@gmail.com> wrote:

you're right, VACUUM FULL  recovered the space, completely.


Well, it always does. ;-)


So, at this point I'm worried about my needs.
I cannot issue vacuum full as I read it locks the table.


Well, first hint of your needs. Bear in mind vacuum fulls can be very
fast on small tables ( i.e., if you have the 1.5Mb table, and do 2000
updates and then a vacuum full that will be very fast, time it ).


In my DB, I (would) need to have a table with one bigint id field + 10 bytea
fields, 100 bytes long each (more or less, not fixed).
5/10000 rows maximum, but let's say 5000.
As traffic I can suppose 10000 updates per row per day (spread over groups
of hours); each update involving two of those fields, randomly.
Also rows are chosen randomly (in my test I used a block of 2000 just to try
one possibility).
So, it's a total of 50 million updates per day, hence (50 million * 100
bytes * 2 fields updated) 10 GB net per day.


Not at all. That's the volume of updated data, you must multiply by
the ROW size, not just the changed size, in your case 50M * 1100 ( to
have some wiggle room ), 55Gbytes.

But this is the UPPER BOUND you asked for. Not the real one.


I'm afraid it's not possible, according to my results.


It certainly is. You can set a very aggressive autovacuum schedule for
the table, or even better, you may vacuum AFTER each hourly update.
This will mark dead tuples for reuse. It will not be as fast, but it
can certainly be fast enough.

And, if you only update once an hour, you may try other tricks ( like
copy to a temp table, truncate the original and insert the temp table
in the original, although I fear this will lock the table too, but it
will be a very short time, your readers may well tolerate it. )

Yours seem a special app with special need, try a few, measure, it is
certainly possible.

Francisco Olarte.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT slow immediately after many update or delete+insert, except using WHERE .. IN

2016-12-10 Thread Adrian Klaver

On 12/10/2016 04:21 AM, Tom DalPozzo wrote:

Hi,
my release is 9.5.4.
I took a look over it. I guessed that counting could be slow because it
needs to read everything, and also that it can take advantage of an
index. But I don't understand why the delay comes after the updates for a


Best guess, autovacuum kicked in and marked a bunch of rows as no longer 
in play and thereby reduced the number of rows that needed to be counted.



certain time and why WHERE..IN is much faster (ok, it's an index, but
I'm reading all the rows).


So per the second link have you tried something like:

SELECT COUNT(*) FROM Table WHERE id > 0;
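
To see whether that form can use the index, check the plan; a sketch,
assuming the mixed-case names from your test are quoted identifiers:

EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM "Table" WHERE "Id" > 0;
-- an Index Only Scan node here can skip heap pages that are all-visible,
-- which is what makes the WHERE variant cheaper than a bare COUNT(*)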


Regards
Pupillo


2016-12-09 17:16 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:

On 12/09/2016 08:03 AM, Tom DalPozzo wrote:
> Hi,
> I did two tests:
> TEST 1
> 1 I created a table ("Table") with two fields, one ("Id") is a bigint
> and the other ("Data") is a bytea. Also created  an index on Id.
> 2 Populated the table with 10000 rows, in which the bigint is
> incremental and bytea is 1000 bytes long.
> 3 Executed SELECT COUNT(*) FROM Table;.   It was very fast, almost
> immediate.
> 4 Updated 2000 of those rows for 1000 times. Each time using BEGIN; 2000
> UPDATEs to bytea field (no length changed); COMMIT;   <--- It
> reached around 10000 rows updated/sec.
> 5 Immediately after that, executed SELECT COUNT(*). It took nearly 2
> seconds.
> 6 After 1 minute,  executed SELECT COUNT(*). It was immediate again.
>
> TEST 2
> I dropped the table and redid the whole test1 from the beginning but
> using DELETE.. IN (...) + INSERT VALUES (...),(...),...;  instead of
> UPDATE  at point 4.
>  I noticed that:
> - Point 4 took half of the time used through UPDATE (hence now 20000
> rows/sec).
> - The slowness of SELECT COUNT(*)  remained much more than 1 min. (5
> mins?) After that it was fast again.
>
>
> BUT, in both tests, if I substitute point 5 with:
> SELECT * FROM Table WHERE Id IN (0,1,2,... all the numbers up to
);
> then it's almost immediate even if executed immediately after point 4
>
> 

What version of Postgres?

See:


https://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F


In particular:

https://wiki.postgresql.org/wiki/Slow_Counting

> Now the questions:
> I'd like to know the reason of the delay at point 5, in particular in
> the 2nd test and why it is faster when using WHERE..IN .
>
> Also, should I be concerned about the delay at point 5? I mean, my DB
> will receive around 20 million updates (or delete+insert) per day.
> Will this delay rise more and more along the months/years?
>
    >
> Regards
> Pupillo
>
>
>
>
>
>
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Windows installation - could not connect to server: Connection refused (0x0000274D/10061)

2016-12-09 Thread Adrian Klaver

On 12/09/2016 12:43 AM, thomas.deboben@rohde-schwarz.com wrote:

Hello,

I need some help on an installation issue with PostgreSQL 9.5.0.1
I have written a wix bootstrapper to install postgresql for our
application.
This has worked for earlier versions, but now we sometimes get trouble


Which version did it work on?


with the installer as some sql scripts can't be executed during install.
From the logs I see an error 2 returned from psql.exe

This error is found in the bitrock_installer.log from your original
installer
---
[14:58:49] Installing the adminpack module in the postgres database...
Executing C:\Program Files\PostgreSQL\9.5\bin\psql.exe -c "CREATE
EXTENSION adminpack" -d postgres
Script exit code: 2
---

and as well in our log where we try to launch some more sql scripts to
create a user, the database and the tables.
---
2016.08.02 14:59:07, Info, LaunchProcess(), Launch 'C:\Program
Files\PostgreSQL\9.5\bin\psql.exe' with ' -U postgres --file "C:\Program
Files\PostgreSQL\9.5\Contest\CreateContestUser.sql"'.
2016.08.02 14:59:09, Info, LaunchProcess(), Standard output: ''
2016.08.02 14:59:09, Info, LaunchProcess(), Standard error: 'psql:
could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
'




So far I wasn't able to reproduce this error myself, but I know of
about 5 occurrences within the last month.
As I'm not able to reproduce I would appreciate any help from you to
find out what causes this error.


The error is pretty straightforward: the server cannot be found at 
localhost on port 5432, so:


1) Do you know that the server is actually up and running at the point 
you run the scripts?


2) Is the server listening on port 5432?

3) Is Windows properly mapping localhost?
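
A quick way for the bootstrapper to check points 1 and 2 before running the
scripts is pg_isready, which ships in the same bin directory as psql (a
sketch, using the install path from your log):

"C:\Program Files\PostgreSQL\9.5\bin\pg_isready.exe" -h localhost -p 5432

Exit code 0 means the server is accepting connections, 1 means it is
rejecting them (e.g. still starting up), and 2 means no response.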


All installations do run on Windows 7 Pro x64.

Best regards,
   Thomas Deboben



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT slow immediately after many update or delete+insert, except using WHERE .. IN

2016-12-09 Thread Adrian Klaver
On 12/09/2016 08:03 AM, Tom DalPozzo wrote:
> Hi,
> I did two tests:
> TEST 1
> 1 I created a table ("Table") with two fields, one ("Id") is a bigint
> and the other ("Data") is a bytea. Also created  an index on Id.
> 2 Populated the table with 10000 rows, in which the bigint is
> incremental and bytea is 1000 bytes long.
> 3 Executed SELECT COUNT(*) FROM Table;.   It was very fast, almost
> immediate.
> 4 Updated 2000 of those rows for 1000 times. Each time using BEGIN; 2000
> UPDATEs to bytea field (no length changed); COMMIT;   <--- It
> reached around 10000 rows updated/sec.
> 5 Immediately after that, executed SELECT COUNT(*). It took nearly 2
> seconds.
> 6 After 1 minute,  executed SELECT COUNT(*). It was immediate again.
> 
> TEST 2
> I dropped the table and redid the whole test1 from the beginning but
> using DELETE.. IN (...) + INSERT VALUES (...),(...),...;  instead of
> UPDATE  at point 4.
>  I noticed that:
> - Point 4 took half of the time used through UPDATE (hence now 20000
> rows/sec).
> - The slowness of SELECT COUNT(*)  remained much more than 1 min. (5
> mins?) After that it was fast again.
> 
> 
> BUT, in both tests, if I substitute point 5 with: 
> SELECT * FROM Table WHERE Id IN (0,1,2,... all the numbers up to 9999);
> then it's almost immediate even if executed immediately after point 4
> 
> 

What version of Postgres?

See:

https://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F

In particular:

https://wiki.postgresql.org/wiki/Slow_Counting

> Now the questions:
> I'd like to know the reason of the delay at point 5, in particular in
> the 2nd test and why it is faster when using WHERE..IN . 
> 
> Also, should I be concerned about the delay at point 5? I mean, my DB
> will receive around 20 million updates (or delete+insert) per day.
> Will this delay rise more and more along the months/years? 
> 
> 
> Regards
> Pupillo
> 
> 
> 
> 
> 
> 
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unexplained statistics reset? Help tracking it down.

2016-12-09 Thread Adrian Klaver

On 12/09/2016 06:19 AM, Bill Moran wrote:


I've been seeing some unexplained behavior where the statistics in a Postgres
database reset with no explanation as to why. This is concerning because it
results in terrible query plans until someone manually runs analyze, and that
it negatively impacts autovacuum.

This is Postgres 9.5.4 on Ubuntu 14 (yes, I know there's a patch I need to
apply -- it's on my TODO list, but I don't see any mention of fixing unexplained
stats resets in that version, so I'm not assuming that will fix it)

My first thought is that some random user was calling pg_stat_reset() without
realizing what they were doing. However, I have full query logging enabled on
this system, and the logs don't show this happening. (Yes, I've also checked
for someone disabling query logging for their connection before doing 
myseterious
things).

Before I start theorizing that this might be a bug in Postgres, does anyone have
any suggestions on what other ways the stats could be reset that I need to check
on? Has anyone else experienced this to lend credence to the possibility that 
it's


https://www.postgresql.org/docs/9.5/static/monitoring-stats.html

" When the server shuts down cleanly, a permanent copy of the statistics 
data is stored in the pg_stat subdirectory, so that statistics can be 
retained across server restarts. When recovery is performed at server 
start (e.g. after immediate shutdown, server crash, and point-in-time 
recovery), all statistics counters are reset."
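
If that is what happened, it should be visible in the stats themselves; a
sketch (not something from the thread) comparing the reset time against the
server start time:

SELECT datname, stats_reset FROM pg_stat_database;
SELECT pg_postmaster_start_time();
-- a stats_reset at or just after the start time points to crash recovery
-- rather than to a user calling the reset function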





a bug? I have no clue how to reproduce it, as the occurrence is rare and still
seems random.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:47 PM, Igor Korot wrote:

Hi, guys,





This is the result of running "SQL shell":

[code]
Last login: Thu Dec  8 19:46:41 on ttys001
Igors-MacBook-Air:~ igorkorot$ /Library/PostgreSQL/9.1/scripts/runpsql.sh; exit
Server [localhost]:
Database [postgres]:
Port [5432]:
Username [postgres]:
Password for user postgres:
psql (9.1.24)
Type "help" for help.

postgres=#
[/code]

And this is the result of running "psql" command in Terminal:

[code]
Igors-MacBook-Air:dbhandler igorkorot$ psql -U postgres -d postgres
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
[/code]

Any idea why I can't connect?


Because you are trying a local socket connection and psql cannot find 
the socket. Not sure where the Apple install would create it, so cannot 
help there. However if you replicate the connection that worked with SQL 
shell you should be able to connect. So:


psql -U postgres -d postgres -h localhost

FYI, not having the -h tells psql to connect using a socket.



Also, is PostgreSQL set initially to use "Transaction-mode"? I.e.
every SQL command should end up with COMMIT?


That is more of a client side attribute. For psql the default is to 
enclose every command in a transaction, so no explicit BEGIN/COMMIT is 
needed:


https://www.postgresql.org/docs/9.5/static/app-psql.html
"AUTOCOMMIT

When on (the default), each SQL command is automatically committed 
upon successful completion. To postpone commit in this mode, you must 
enter a BEGIN or START TRANSACTION SQL command. When off or unset, SQL 
commands are not committed until you explicitly issue COMMIT or END. The 
autocommit-off mode works by issuing an implicit BEGIN for you, just 
before any command that is not already in a transaction block and is not 
itself a BEGIN or other transaction-control command, nor a command that 
cannot be executed inside a transaction block (such as VACUUM).


Note: In autocommit-off mode, you must explicitly abandon any 
failed transaction by entering ABORT or ROLLBACK. Also keep in mind that 
if you exit the session without committing, your work will be lost.


Note: The autocommit-on mode is PostgreSQL's traditional 
behavior, but autocommit-off is closer to the SQL spec. If you prefer 
autocommit-off, you might wish to set it in the system-wide psqlrc file 
or your ~/.psqlrc file.


"



Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 05:14 PM, Metare Solve wrote:

I'm happy with postgres. It came recommended by a girl whose skills I
totally respect. When I told my old boss I was going to use it, she
redirected me to MySQL because that's what our CRM used. I've always
looked back at postgres, though, and for some reason like it,
conceptually, better.


Something I did not mention before, you can use different languages 
inside Postgres:


https://www.postgresql.org/docs/9.5/static/xplang.html

https://www.postgresql.org/docs/9.5/static/external-pl.html

In the end what you choose to work with is probably determined as much 
by what makes sense to you as anything else.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:47 PM, Metare Solve wrote:

I'll check out the django girls, thanks. There's also a group of women
coders in DC who hold a lot of trainings and events. I'll see if they
have a level I python coming up.

You made me think about something about the GUI's: In my old job I used
alteryx because we were working with datasets from 20M-250M. When I got
laid off, I lost access to alteryx (unless I want to get a free trial).
I asked a guy who's pretty familiar with my skillset what I should be
using, R? He said to look for an open source gui/alteryx alternative. I
found one called KNIME that I like a lot.


R is very much a viable language for statistical analysis. FYI, Python 
has bindings to R; in other words, you can use Python to work with R. 
Though if you go that route I would suggest Pandas:


http://pandas.pydata.org/




But, what I'm gathering is, you think this is a crutch too. Will Python
enable me to do the same things that I do with that kind of big data
processing program? Should I be aiming for that as the ultimate rather
than "mastering" KNIME (or whatever, just looking to design my
curriculum). At my old job when I asked the same guy about SQL and how
some of the coding bits we did in the tool configuration looked like
SQL, he said alteryx is SQL on crack.

I need SOMETHING to use for analysis for the tests I'm going to have to
take when I job hunt so I'm exploring KNIME right now and doing the data
manipulation into what I'd do in access for analysis. I know, I need
stats too. You were educated as a biologist? I was educated as a
pianist, writer, and historian, lol. I have a lot to learn.


There is nothing wrong with GUI's per se, it is just that you often are 
constrained by whatever tools the developers provide. They are good way 
to start though, especially if they give you the ability to see what 
they are passing to the datasource. Then you can learn as you work.




However, we're veering away from postgres, which may not be appropriate
on the forum. I'd like to continue the conversation, though.


Hey, if gets you using Postgres all is good.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:39 PM, John R Pierce wrote:

On 12/8/2016 4:19 PM, Metare Solve wrote:

- The project is to build my Tableau skills, for one. There are a lot
of (work from home) jobs out there that call for Tableau skills and I
know the best way to learn something is to do it. I think using the
SQL database as a source will enable me to learn a lot of things in
one stroke.


I don't know anything about Tableau, but I don't see anything on their
fluff-heavy website mentioning postgresql, rather, it all seems


http://onlinehelp.tableau.com/current/pro/desktop/en-us/help.htm#examples_postgresql.html


cloud-centric and fluffy, they mention Microsoft SQL Server, Hadoop,
Oracle, 'web sources', CSV files, and others, but no mention of
Postgresql.   and, wow.  $1000 for a single seat license of the desktop
'personal' version, $2000 for the 'professional' version ?  yikes.


I have taken a glance at:

http://docs.redash.io/en/latest/

which at least is free and Open Source.


mostly it seems to be a tool that extracts data from pre-existing data
sources and crunches and presents it in 'executive dashboard' sort of
formats, it doesn't actually use the source database as anything other
than a place to fetch from.

pgadmin is just a database administration tool, its one way of managing
databases, creating and modifying table structures, etc, but its not
really suitable for production data entry and stuff, for that you'd
normally have application software, or use ETL tools to bulk load data
from other sources, etc.






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:19 PM, Metare Solve wrote:

Sorry, I got on so many lists yesterday. I'm really not that dense.

I have absolutely no language programming skills and it is very very
frustrating. I can HTML and that's it. I desperately want to develop the
skills but whenever I try on my own, I hit the same wall as I do with
SQL. I'm just bad at the deep dives into code and really need some
honest help. I kind of had this kind of help at my old job but I got
laid off on 11/30 and I don't think it's appropriate to ask them anymore.

SQL experience: Only MS Access. I know it sucks, but it's all my boss
"let" me use (it didn't occur to me for a while to pursue it on my own
after she told me "no" for whatever odd reason she had). But! I can use
that GUI like a champ. I used Access for all of my data crunching, my
biggest record set was 600k so I didn't need anything more powerful. The
reason I think I can learn SQL with just a bit of guidance is I know the
concepts behind "where" and "group bys" and "unique," etc, but I miss a
; and get an error and then get frustrated.


GUI's only get you so far. At some point you will need to dive deeper to 
get what you want. I am mostly a self-taught programmer (biologist by 
training) so I understand the hill you are facing. The language I use is 
Python, mainly because to me it made sense. For you it might be a good 
choice as it is quite prevalent in the data analysis world. There are a 
lot of places to turn to to learn how to use it. My suggestion for 
seeing if it is something you can use would be to start here:


https://tutorial.djangogirls.org/en/

Go through at least the Introduction to Python part. The rest has to do 
with Django, Web framework built using Python.


If you want an immersion into programming the events that Django Girls 
put on are a good place to start:


https://djangogirls.org/events/

I have seen it action at my local Linux Fest and it is impressive.





Purpose of the project:

Eh, a bunch of things.

- The project is to build my Tableau skills, for one. There are a lot of
(work from home) jobs out there that call for Tableau skills and I know
the best way to learn something is to do it. I think using the SQL
database as a source will enable me to learn a lot of things in one stroke.

- The project will also give me a product (eventually multiple) to embed
on a website when I start looking for "clients," or what have you. Just
starting a portfolio.

- I have two projects, one of them is to analyze crime rates around the
moon phases. Just a question my dad once posed as a science project that
I blew off. Now seems kind of interesting to me to pursue. Will give me
date experience, mapping if I want it, can go down to the precinct
level, etc. The other is some data I've been collecting for about 15
months on a pixel dragon game I play. I want to build a dashboard to
manage my lair/income/value, etc. That is definitely where the SQL
database comes in. I think the moon one is just data blending.

- To give me intellectual stimulation because I am a nerd.

Just a note, I'm a female gen x with a master's degree in library and
information science. I took a database design class in grad school and
it was the biggest waste of money and time. lol, I wish I'd had these
project ideas then, I could have gone to the teacher with them.

Meta





On Thu, Dec 8, 2016 at 6:59 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 12/08/2016 03:46 PM, Metare Solve wrote:

Hi,

I'm looking for an online mentor, someone I can chat/email and ask
questions of. Maybe "meet up" and have a direct chat. I've been trying
to get into SQL for years and I just... have... problems. It's very
frustrating and at this point in my career is going to keep me from
getting a new job. (I was just laid off on 11/30.)

I have a project in mind: I want to build an SQL db to house some
datasets and sit a tableau dashboard/storyboard on top. A data
engineer
at my old job said that would be his preferred method, and I
think it
would be a useful project to learn SQL.

I know this is the pgadmin support and not a postgres, but I'm
technically going to be using pgadmin so it's tangentially
related ;)


Actually this is the Postgres mailing list.

FYI, the pgAdmin list is here:

https://www.postgresql.org/list/pgadmin-support/

Though pgAdmin is often discussed on this list(pgsql-general).


Any input, even on the project?



What language(s) do you plan to program in?

What studying on SQL have you done?

What will the purpose of the project?



thanks,
meta



--
    Adrian Klaver
adrian.kla...@aklaver.com

Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 03:46 PM, Metare Solve wrote:

Hi,

I'm looking for an online mentor, someone I can chat/email and ask
questions of. Maybe "meet up" and have a direct chat. I've been trying
to get into SQL for years and I just... have... problems. It's very
frustrating and at this point in my career is going to keep me from
getting a new job. (I was just laid off on 11/30.)

I have a project in mind: I want to build an SQL db to house some
datasets and sit a tableau dashboard/storyboard on top. A data engineer
at my old job said that would be his preferred method, and I think it
would be a useful project to learn SQL.

I know this is the pgadmin support and not a postgres, but I'm
technically going to be using pgadmin so it's tangentially related ;)


Actually this is the Postgres mailing list.

FYI, the pgAdmin list is here:

https://www.postgresql.org/list/pgadmin-support/

Though pgAdmin is often discussed on this list(pgsql-general).



Any input, even on the project?



What language(s) do you plan to program in?

What studying on SQL have you done?

What will the purpose of the project?




thanks,
meta



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PDF files: to store in database or not

2016-12-08 Thread Adrian Klaver

On 12/08/2016 07:16 AM, Rich Shepard wrote:

On Thu, 8 Dec 2016, John DeSoi wrote:


I have been storing PDFs in Postgres for several years without any
problems. Documents range in size from a few pages to 100+ pages. I'm
using a bytea column, not large objects. I store the documents in a
separate database from the rest of the application data in order to make
it easy to exclude in database dumps or backup in some other way. I'm
currently managing about 600,000 documents.


John,

  This is really good information. Rather than using a separate database I
think that storing all PDFs in a separate table makes sense for my
application. Backup practices will be the domain of those using the
application (which I've decided to open-source and give away because I'm not
in the software business). A simple join to the appropriate data table will
make them available.

  Not having used the bytea data type before I'll read how to work with it.


http://initd.org/psycopg/docs/usage.html?highlight=binary#adapt-binary
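
The separate-table-plus-join layout described above might look like this (a
sketch; the table and column names are hypothetical):

CREATE TABLE documents (
    doc_id    serial PRIMARY KEY,
    record_id integer,  -- points at the appropriate data table
    pdf       bytea
);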



Thanks very much for your insights,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Adrian Klaver

On 12/08/2016 07:02 AM, Igor Korot wrote:

Adrian,





Anything else you see?
Does Postgres uses single quotes for literal values or double quotes?


Single:

https://www.postgresql.org/docs/9.5/static/sql-syntax-lexical.html
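
Concretely, using a statement from your dump (a sketch):

INSERT INTO leagues VALUES(1, 'Test1', 260.00);  -- single quotes: string literal
SELECT "name" FROM leagues;                      -- double quotes: identifier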


I'm talking about
INSERT INTO... VALUES(); statements.

SQLite does use double quotes, but I already cleaned it using single ones.


That may be going away:

http://sqlite.org/lang_keywords.html

"For resilience when confronted with historical SQL statements, SQLite 
will sometimes bend the quoting rules above:


If a keyword in single quotes (ex: 'key' or 'glob') is used in a 
context where an identifier is allowed but where a string literal is not 
allowed, then the token is understood to be an identifier instead of a 
string literal.


If a keyword in double quotes (ex: "key" or "glob") is used in a 
context where it cannot be resolved to an identifier but where a string 
literal is allowed, then the token is understood to be a string literal 
instead of an identifier.


Programmers are cautioned not to use the two exceptions described in the 
previous bullets. We emphasize that they exist only so that old and 
ill-formed SQL statements will run correctly. Future versions of SQLite 
might raise errors instead of accepting the malformed statements covered 
by the exceptions above."




Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Adrian Klaver

On 12/08/2016 06:54 AM, Igor Korot wrote:

Adrian,

On Thu, Dec 8, 2016 at 9:47 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 12/08/2016 04:54 AM, Igor Korot wrote:


Hi, ALL,
I have a text file which I got from exporting the SQLite database.

The file contains an SQL statement which will generate the database.



Is there a CREATE DATABASE statement in the file you are referencing?


Well, there is no CREATE DATABASE in SQLite.
But I created a database by hand, so no issue here.



Otherwise you will have to create the database first and then load the file
into it.



Excerpt from this file:

[code]
CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
VARCHAR(100),balance DOUBLE(10,2));
CREATE TABLE players(playerid INTEGER AUTOINCREMENT PRIMARY KEY,
player_name VARCHAR(60), player_position CHAR(1));
CREATE TABLE player_draft(id INTEGER, playerid INTEGER,
drafted_position CHAR(1), FOREIGN KEY id REFERENCE leagues(id),
FOREIGN KEY playerid REFERENCE players(playerid));
INSERT INTO leagues VALUES(1, 'Test1', 260.00);
INSERT INTO leagues VALUES(2, 'Test2', 260.00);
INSERT INTO players VALUES(1, 'Player1', 'D');
INSERT INTO players VALUES(2, ''Player2', 'F');
[/code]

My question would be:
Is there a command in Postgres which will open this file and execute all
those
commands one-by-one in a transaction?



Yes there is assuming the [code][/code] tags are for the email only.


Yes, "code" tags are for e-mail only.



The issue is that Postgres will not understand AUTOINCREMENT. In Postgres to
get the same behavior you would do:

id SERIAL PRIMARY KEY


I'm not very familiar with Postgres, so let me ask you - is
autoincrementing behavior set
by default for the primary key?
Like if I want to insert a record in the table and I omit this
column, will it get the last inserted value in this table plus 1?


No that is a Sqlite thing:
http://sqlite.org/autoinc.html


If you want to replicate in Postgres you will need to use the SERIAL type:

https://www.postgresql.org/docs/9.5/static/datatype-numeric.html#DATATYPE-SERIAL

along with PRIMARY KEY so:

id SERIAL PRIMARY KEY

This sets up a SEQUENCE:

https://www.postgresql.org/docs/9.5/static/sql-createsequence.html

for the id column, which supplies an incrementing, but not necessarily 
gap-free sequence of numbers.
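
Applied to the leagues table from your file, the Postgres version might look
like this (a sketch; DOUBLE(10,2) has no direct Postgres equivalent, so
numeric(10,2) is used to keep the declared precision):

CREATE TABLE leagues(
    id      serial PRIMARY KEY,  -- replaces INTEGER AUTOINCREMENT PRIMARY KEY
    name    varchar(100),
    balance numeric(10,2)
);

-- omitting id lets the sequence supply the next value:
INSERT INTO leagues(name, balance) VALUES('Test1', 260.00);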






If you clean up the file you can do, assuming you created a database called
some_database:

psql -d some_database -U some_user -f your_file



The file also contains line "BEGIN TRANSACTION" as the first line and
"COMMIT" as last.
Is the syntax the same for Postgres or is it different?


It is the same.

See below for list of Postgres commands:

https://www.postgresql.org/docs/9.5/static/sql-commands.html



Thank you.



If you do not want to do the manual clean-up, then William's suggestion looks
interesting.




Or I will have to do a manual table creation, then split this file and
use "LOAD..."
command to load the data into the tables?

Hopefully I made myself clear.
Let me know if not and I will try to clarify further.

Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:54 AM, Igor Korot wrote:

Hi, ALL,
I have a text file which I got from exporting the SQLite database.

The file contains an SQL statement which will generate the database.


Is there a CREATE DATABASE statement in the file you are referencing?

Otherwise you will have to create the database first and then load the 
file into it.




Excerpt from this file:

[code]
CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
VARCHAR(100),balance DOUBLE(10,2));
CREATE TABLE players(playerid INTEGER AUTOINCREMENT PRIMARY KEY,
player_name VARCHAR(60), player_position CHAR(1));
CREATE TABLE player_draft(id INTEGER, playerid INTEGER,
drafted_position CHAR(1), FOREIGN KEY id REFERENCE leagues(id),
FOREIGN KEY playerid REFERENCE players(playerid));
INSERT INTO leagues VALUES(1, 'Test1', 260.00);
INSERT INTO leagues VALUES(2, 'Test2', 260.00);
INSERT INTO players VALUES(1, 'Player1', 'D');
INSERT INTO players VALUES(2, ''Player2', 'F');
[/code]

My question would be:
Is there a command in Postgres which will open this file and execute all those
commands one-by-one in a transaction?


Yes there is assuming the [code][/code] tags are for the email only.

The issue is that Postgres will not understand AUTOINCREMENT. In 
Postgres to get the same behavior you would do:


id SERIAL PRIMARY KEY

If you clean up the file you can do, assuming you created a database 
called some_database:


psql -d some_database -U some_user -f your_file


If you do not want to do the manual clean-up, then William's suggestion 
looks interesting.




Or I will have to do a manual table creation, then split this file and
use "LOAD..."
command to load the data into the tables?

Hopefully I made myself clear.
Let me know if not and I will try to clarify further.

Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problems Importing table to pgadmin

2016-12-07 Thread Adrian Klaver

On 12/07/2016 05:19 PM, metaresolve wrote:

Uh, yeah, it was a SELECT * from cc_20161207;

I know, it was dumb. I didn't realize it would break it or at least run for
a while. I tend to do things in small steps, run a query, check my results,
then tweak.

You're right, I wouldn't want to be viewing those million. so I guess I
could just be doing the top 10000 rows to get a sense, then be doing my
aggregate group bys/etc to get my final numbers? That's how I hacked Access
to be my data analysis program until I got to Alteryx. But I was also never
using files of 1M then.



FYI pgAdmin, if I remember correctly, has a setting that limits the 
maximum number of rows that it will fetch at a time.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problems Importing table to pgadmin

2016-12-07 Thread Adrian Klaver

On 12/07/2016 04:54 PM, metaresolve wrote:

Choking: I get the "Waiting for the query execution to complete" circling
around for a while. I tried shutting it down and trying again but it's still
freezing on the execution. But if the TB are accurate, I wonder why it's
slowing on this? Any thoughts?


Also, what is the query you are executing?



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problems Importing table to pgadmin

2016-12-07 Thread Adrian Klaver

On 12/07/2016 04:54 PM, metaresolve wrote:

Choking: I get the "Waiting for the query execution to complete" circling
around for a while. I tried shutting it down and trying again but it's still
freezing on the execution. But if the TB are accurate, I wonder why it's
slowing on this? Any thoughts?


As Tom said pgAdmin is a GUI. If you are indeed trying to pull 1 million 
rows into it, then it will be slow. Think 1 million rows x number of 
fields per row and a GUI widget for each cell. It takes a lot of time to 
build that. Try the Postgres command line client, psql, and see what happens.
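
From psql, fetching a small sample and letting the server do the counting
avoids building a million GUI cells (a sketch, using the table name from
your earlier message):

SELECT * FROM cc_20161207 LIMIT 100;   -- eyeball a sample
SELECT count(*) FROM cc_20161207;      -- let the server do the aggregation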






--
View this message in context: 
http://postgresql.nabble.com/Problems-Importing-table-to-pgadmin-tp5933807p5933826.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problems Importing table to pgadmin

2016-12-07 Thread Adrian Klaver

On 12/07/2016 04:02 PM, metaresolve wrote:

That's a little beyond me. Let me back up a sec and maybe you guys can
help.

I used to use Access to do my data crunching, matching, and cleaning at my
old job. I worked with a max of 600k records so Access could handle it. I
know, lame, but it's what I knew.

I was using Alteryx the past 8 months on files of 20M and higher. But, I got
laid off, so I don't have access to that software anymore.
My thought was to use PostgreSQL as a kind of more advanced Access that I
could use to crunch numbers similarly. However, my file has 1.1M records
and pgadmin seems to be choking on it.


Define choking?



How many records and relational tables can pgadmin/postgres actually handle?


https://www.postgresql.org/about/

So you have plenty of head room.


I'm anticipating tests for my upcoming job hunts and I need something that
gives me query capability. I figured since I used to do it in Access I could
do it in SQL.

(btw, the reason I picked postgreSQL is my friend from my old job uses it
and likes it. i thought it would be easier to get help from her.)

Any input?



--
View this message in context: 
http://postgresql.nabble.com/Problems-Importing-table-to-pgadmin-tp5933807p5933818.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-05 Thread Adrian Klaver

On 12/05/2016 05:13 PM, Joseph Brenner wrote:



I just went around temporarily undoing things I did while
straightening up, and I find there's one thing I can do that
consistently breaks things: removing my new ~/.psqlrc file.
In fact, it appears that I need to have a file that exists and
contains this line:

   \pset pager off


So what does:

env | grep PAGER

show?




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Adrian Klaver

On 12/04/2016 12:01 PM, Rich Shepard wrote:

On Sun, 4 Dec 2016, Adrian Klaver wrote:


The important question that I forgot to ask before is: how do you plan
to deploy this?



Or are you looking to deploy a front end/database combination to each
client?


  This way. I've no interest or expertise in maintaining a web site (my ISP
does this for mine). I would provide some guidance on installing necessary
software, but it's theirs to use as they choose.


Well that sort of negates the benefits of a Web front end. You will now be 
responsible for setting up a database server, a Web server and the 
Django code that ties them together on each client.


What is this application going to do?



Rich





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Adrian Klaver

On 12/04/2016 09:59 AM, Rich Shepard wrote:

On Sun, 4 Dec 2016, Adrian Klaver wrote:


That is why I ended up learning Django, I could not find a GUI that was
cross platform and had the capabilities I needed. Others might have
suggestions. It would help though if you could list what you want in a
GUI
platform:


  OK. I spent a lot of time over the past few years looking for a simpler
solution and it just doesn't exist.

  I'll learn django and take it step-by-step.


The important question that I forgot to ask before is how do you plan to 
deploy this:


Are you looking at having clients remotely access a site/database on 
your machine?


Or are you looking to deploy a front end/database combination to each 
client?




Thanks,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Adrian Klaver

On 12/04/2016 08:47 AM, Rich Shepard wrote:

On Sun, 4 Dec 2016, Adrian Klaver wrote:


Base is basically a GUI over the database.


Adrian,

  That's what I assumed it to be.


That being said my experience with Base is that it is not up to the task. It
worked for me early on, but development on Base lagged relative to the
other modules in LO. My recent attempts have been less successful. Exact
examples of why will have to wait on me dredging up the memories.


  I'm very open to suggestions on what to use for the UI. I'm not a
professional coder and learning SQLAlchemy or Django takes too much time
away from my business.

  My clients all run Windows, about which I know nothing. Is there a GUI for
postgres that works on all platforms and is suitable for someone like me?


That is why I ended up learning Django, I could not find a GUI that was 
cross platform and had the capabilities I needed. Others might have 
suggestions. It would help though if you could list what you want in a 
GUI platform:


1) Form/report designer?

2) Widgets beyond the basics?

3) Programming language bindings?

4) Anything else you can think of.




Thanks,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Adrian Klaver

On 12/04/2016 07:41 AM, Rich Shepard wrote:

   I have postgres tables (with data) for a specific application but have
not found the time to learn django to make it an application that my
clients
can use. It occurs to me that the most parsimonious approach is to use
LibreOffice's Base for the UI with postgres as the dbms-engine. While I
work
in only linux (and unixODBC is installed), my clients all use various
flavors of Windows, but the three critical software applications (LO,
PostgreSQL, and ODBC) are available for Microsoft, too.

   I've scanned the Base portion of the LO User Guide and it looks to be a
practical solution to quickly providing clients with working database
applications.

   As this is completely new territory for me I'd like to learn from
those who've done this before. As examples: what can/should I do as stored
procedures using PL/pgSQL? Can queries be created and tested using psql
before being imported into Base?


Base is basically a GUI over the database. You can still do whatever you 
want in Postgres via psql or any other client for that matter. That 
being said my experience with Base is that it is not up to the task. It 
worked for me early on, but development on Base lagged relative to the 
other modules in LO. My recent attempts have been less successful. Exact 
examples of why will have to wait on me dredging up the memories.




   All tips and gotcha's for a new user are certainly welcome.

TIA,

Rich







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-04 Thread Adrian Klaver

On 12/03/2016 09:38 PM, Joseph Brenner wrote:

So is the 9.4 instance the production/live database?


Essentially, but it's not heavily used: this is me messing around on a dev box.


So what happens when you specify the port in your psql connection, eg:
/usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5432
/usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5433
/usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5434


With /usr/local/pgsql/bin/psql, only "-p 5433" connects, the
other two complain like so:

  psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5434"?



Alright how about?:

/usr/bin/psql --dbname=doom --username=doom -p 5434

My suspicion is that when you did the source build you got some cross 
contamination of libraries.


If it were me I would get rid of the instance that you built from 
source, assuming that there is nothing important on it.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-03 Thread Adrian Klaver

[ and later... ]


The only thing unusual about the steps that I followed was I built
with port 5433 (rather than 5432) as the default,


This is not as simple as it might look; the default port is actually
wired into libpq.so, not psql itself.  And on most brands of Linuxen,
it's not that easy to get a program to link to a non-default copy of
a shared library if there's a copy in /usr/lib.  However, if you were
connecting to the wrong port number, I'd still not expect that it
just dies without saying anything.

Hmm ... a different take on that is that maybe psql is crashing
because it's linking to an ABI-incompatible libpq.  You should try
"ldd" on the psql executable and see if it's resolving the libpq
dependency to the copy you intended.

regards, tom lane
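
For the record, that check is a one-liner; a sketch, assuming the source
build lives under /usr/local/pgsql:

  ldd /usr/local/pgsql/bin/psql | grep libpq
  # expected: libpq.so.5 => /usr/local/pgsql/lib/libpq.so.5
  # suspect:  libpq.so.5 resolving to a system path, i.e. the distro copy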






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No select privileges when not connecting from login postgres

2016-12-03 Thread Adrian Klaver

On 12/03/2016 02:55 PM, Joseph Brenner wrote:

The version in the Debian stable repos right now is 9.4, and I saw an
issue with it I wanted to check against the latest version, so I did a
build of it from a tarball.


Save yourself some trouble and use the PGDG repos:

https://www.postgresql.org/download/linux/debian/

PostgreSQL Apt Repository



(Admittedly, there's no particular reason I need to be running stable,
and I was just wondering the other day why I wasn't using testing or
unstable).

The build steps I used are relatively conventional (following the
directions in the INSTALL file): this is how I ended up with an
installation in /usr/local.

The only thing unusual about the steps that I followed was I built
with port 5433 (rather than 5432) as the default, to make it easier to
run it in parallel with the system version, and I'm trying to use the
"with-perl" build option to link to a fresh build of perl:


In your psql commands you do not show connecting to port 5433



sudo su
export LD_LIBRARY_PATH='/home/doom/End/Sys/Perl5/perl-5.24.0'
ldconfig
./configure --with-pgport=5433 --with-perl --with-openssl

make
make install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
mkdir /var/lib/postgresql-9.6.1/
chown postgres /var/lib/postgresql-9.6.1/
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data

/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >
/var/lib/postgresql-9.6.1/logfile 2>&1 &


On Sat, Dec 3, 2016 at 7:23 AM, rob stone <floripa...@gmail.com> wrote:


On Wed, 2016-11-30 at 20:48 -0800, Joseph Brenner wrote:

I'm trying to get a new build of 9.6.1 working on a machine
running Debian stable (jessie) and I'm seeing some odd
behavior where things work correctly if I run psql when
logged in as postgres, but if I run it as user 'doom' (my
usual login), I don't seem to have any select privileges.
Even this fails silently:

  select 'world' as hello;

But if run logged in as postgres, all is well:

  sudo su - postgres
  /usr/local/pgsql/bin/psql --dbname=doom --username=doom
  doom=#   select 'world' as hello;
select 'world' as hello;
hello
   ---
world
   (1 row)




I run Debian testing (stretch/sid).

For years now, the Postgres binaries are stored in:-

/usr/lib/postgresql//bin

and are root:root owned.

E.g.:-

/usr/lib/postgresql/9.6/bin/psql appears as:-

-rwxr-xr-x 1 root root  580864 Nov  2 21:04 psql

So, it is executable by anybody, which is not an issue with me.

The reason behind this path convention is to enable you to run
different major versions of Postgres on the same computer.

I don't know how you installed Postgres and all I can suggest for
starters is to physically remove it and re-install from the Debian
packages.

HTH,
Rob






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select works only when connected from login postgres

2016-12-03 Thread Adrian Klaver

On 12/03/2016 12:08 PM, Joseph Brenner wrote:

I'm trying to get a new build of 9.6.1 working on Debian
stable and I'm seeing some odd behavior where things work
correctly if I run psql when logged in as user 'postgres',
but if I'm logged-in as user 'doom' (my usual login), I don't
seem to have any select privileges.  Even this fails
silently:

  select 'world' as hello;

But if run logged in as 'postgres', all is well:

  sudo su - postgres
  /usr/local/pgsql/bin/psql --dbname=doom --username=doom
  doom=#   select 'world' as hello;
select 'world' as hello;
hello
   ---
world

Note that I'm talking about the unix logins, in both cases
the postgresql username/role is 'doom' (which has Superuser
privileges and is the owner of the 'doom' database).

I've got my pg_hba.conf setup to use "trust" for all users:

# TYPE  DATABASEUSERADDRESS METHOD
local   all all trust
hostall all 127.0.0.1/32trust

Looking at how the program files are installed, I see they're all
owned by 'root' with group 'staff':

  ls -la /usr/local/pgsql/bin/psql
  -rwxr-xr-x 1 root staff 516824 Nov 26 23:20 /usr/local/pgsql/bin/psql

On speculation, I added doom to the staff group and reloaded
pg, but that didn't help.  I see that the data files are all
owned by postgres, but I don't think that's unusual (or else
I'd try adding 'doom' to the 'postgres' group, and adding g+rwx
privs to the data files):

  drwx-- 1 postgres postgres42 Nov 26 16:14 base

I'm running out of ideas for things to check.  Any suggestions?


Some more info, for completeness sake:

\du
   List of roles
 Role name | Attributes
 | Member of
---++---
 doom  | Superuser, Create role, Create DB  | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

\l
  List of databases
   Name|  Owner   | Encoding |   Collate   |Ctype|
Access privileges
---+--+--+-+-+---
 doom  | doom | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
   ...




Did you receive the previous suggestions?


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No select privileges when not connecting from login postgres

2016-12-03 Thread Adrian Klaver

On 11/30/2016 08:48 PM, Joseph Brenner wrote:

I'm trying to get a new build of 9.6.1 working on a machine
running Debian stable (jessie) and I'm seeing some odd
behavior where things work correctly if I run psql when
logged in as postgres, but if I run it as user 'doom' (my
usual login), I don't seem to have any select privileges.
Even this fails silently:

  select 'world' as hello;

But if run logged in as postgres, all is well:

  sudo su - postgres
  /usr/local/pgsql/bin/psql --dbname=doom --username=doom
  doom=#   select 'world' as hello;
select 'world' as hello;
hello
   ---
world
   (1 row)

Note that I'm talking about the unix logins, in both cases
the postgresql username/role is 'doom' (which has Superuser
privileges and is the owner of the 'doom' database).

Looking at how the program files are installed, they're all
owned by 'root' with group 'staff':

  ls -la /usr/local/pgsql/bin/psql
  -rwxr-xr-x 1 root staff 516824 Nov 26 23:20 /usr/local/pgsql/bin/psql

So I added doom to the staff group and reloaded pg, but that
didn't help either.  The files in the data tree are all
owned by postgres, but I don't think that's unusual:

  drwx-- 1 postgres postgres42 Nov 26 16:14 base

I'm running out of ideas for things to check.  Any suggestions?


When you are logged in as OS user 'doom', the case that is not working, 
what does:


psql -V

show?

Is the psql connection the exact same as the one you show for the 
postgres OS user case?











--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

2016-12-02 Thread Adrian Klaver

On 12/02/2016 09:40 AM, Tom DalPozzo wrote:

Hi,
I've two tables, t1 and t2, both with one bigint id indexed field and
one 256 char data field; t1 has always got 1 row, while t2 is
increasing as explained in the following.

My libpq client continuously updates one row in t1 (every time
targeting a different row) and inserts a new row in t2. All this in
blocks of 1000 update-inserts per commit, in order to get better performance.
wal_sync_method is fsync, fsync is on; my conf file is attached.
I've a 3.8ghz laptop with evo SSD.

Performance is  measured every two executed blocks and related to these
blocks.

Over the first few minutes performance is around 10Krow/s; then it slowly
drops, over the next few minutes, to 4Krow/s, then it slowly rises again
and so on, like a wave.
I don't understand this behaviour. Is it normal? What does it depend on?


Have you looked at the Postgres log entries that cover these episodes?

Is there anything of interest there?



Also, when I stop the client I see the SSD light still heavily working.
It would last quite a while unless I stop the postgresql server, in which
case it suddenly stops. If I restart the server it remains off.
I'm wondering if this is normal. I'd like to be sure that my data are safe
once committed.

Regards
Pupillo

P.S.: I put this question in general questions as my concern is not
whether the performance is high.

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using UPDATE ... RETURNING in a custom SQL function, which RETURNS integer

2016-12-02 Thread Adrian Klaver

On 12/02/2016 04:23 AM, Alexander Farber wrote:

Thank you, Rob -

On Fri, Dec 2, 2016 at 11:12 AM, Rob Sargent <robjsarg...@gmail.com
<mailto:robjsarg...@gmail.com>> wrote:


> On Dec 2, 2016, at 2:52 AM, Alexander Farber
<alexander.far...@gmail.com <mailto:alexander.far...@gmail.com>> wrote:
>
> CREATE OR REPLACE FUNCTION words_unban_user(
> in_uid integer)
> RETURNS integer AS
> $func$
> UPDATE words_users SET
> banned_until = null,
> banned_reason = null
> WHERE uid = in_uid
> RETURNING uid;  -- returns the user to be notified
>
> $func$ LANGUAGE sql;
>
> words=> SELECT uid FROM words_unban_user(1);
> ERROR:  column "uid" does not exist
> LINE 1: SELECT uid FROM words_unban_user(1);
>^
>

select words_unban_user(1) as uid;
Your function returns an int not a table.


this has worked well.

However if I rewrite the same function as "language plpgsql" - then
suddenly both ways of calling work:

CREATE OR REPLACE FUNCTION words_unban_user(
in_uid integer,
OUT out_uid integer)
RETURNS integer AS
$func$
BEGIN
UPDATE words_users SET
banned_until = null,
banned_reason = null
WHERE uid = in_uid
RETURNING uid into out_uid;
END
$func$ LANGUAGE plpgsql;

words=> select out_uid AS uid from words_unban_user(1);
 uid
-
   1
(1 row)

words=> select words_unban_user(1) AS uid;
 uid
-
   1
(1 row)

I am curious, why is it so...


In the SQL function you are just using RETURNING to pop out the uid, 
which is not actually assigned to any output variable name but just 
returned as an integer.


In the plpgsql case you actually assign uid to an output variable name.

FYI, you can have OUT in SQL functions also:

https://www.postgresql.org/docs/9.5/static/xfunc-sql.html#XFUNC-OUTPUT-PARAMETERS
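
For example, a minimal sketch of the same function as a plain SQL function
with an OUT parameter (untested, following the docs linked above):

CREATE OR REPLACE FUNCTION words_unban_user(
    in_uid integer,
    OUT out_uid integer)
AS $func$
    UPDATE words_users SET
        banned_until = null,
        banned_reason = null
    WHERE uid = in_uid
    RETURNING uid;  -- the returned column is mapped onto out_uid
$func$ LANGUAGE sql;

With that definition both call styles shown above should work for the SQL
version as well.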



Regards
Alex








--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL ODBC driver for OSX 10.8

2016-12-01 Thread Adrian Klaver

On 12/01/2016 04:21 PM, Igor Korot wrote:

Hi, guys,
I downloaded the latest sources, but the configure failed.
I have OSX 10.8 here.

Which version of the driver is compatible?


That is going to need more information:

1) What version of Postgres are you trying to connect to?

2) Do you have dev packages installed on your machine?
The immediate problem being:
configure: error: libpq library version >= 9.2 is required
So what do you have on the machine in the way of Postgres now?

3) How did you get Postgres on the machine?




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL ODBC driver for OSX 10.8

2016-12-01 Thread Adrian Klaver

On 12/01/2016 08:41 AM, Igor Korot wrote:

Adrian,

On Thu, Dec 1, 2016 at 11:37 AM, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:

On 12/01/2016 08:01 AM, Igor Korot wrote:


 Hi, ALL,
This is my first post here.
I have actually 2 questions which are kind of related.

1. Is there an OSX version of the ODBC PostgreSQL driver?
1a. If there is none - are there instructions on how to build and
install it?



https://odbc.postgresql.org/docs/unix-compilation.html


I presume there is no OSX distribution file (.dmg), right?
Which means I'm forced to build it myself...


Should have added previously, there is a separate mailing list for psqlodbc:

https://www.postgresql.org/list/pgsql-odbc/

That would probably be a good place to ask whether having a project dmg 
version is possible.







2. Is PostgreSQL ODBC driver works with iODBC?



See above.


Thank you.





Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL ODBC driver for OSX 10.8

2016-12-01 Thread Adrian Klaver

On 12/01/2016 08:41 AM, Igor Korot wrote:

Adrian,

On Thu, Dec 1, 2016 at 11:37 AM, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:

On 12/01/2016 08:01 AM, Igor Korot wrote:


 Hi, ALL,
This is my first post here.
I have actually 2 questions which are kind of related.

1. Is there an OSX version of the ODBC PostgreSQL driver?
1a. If there is none - are there instructions on how to build and
install it?



https://odbc.postgresql.org/docs/unix-compilation.html


I presume there is no OSX distribution file (.dmg), right?


All the project files I know of are found at the link below and they do 
not include a *.dmg.


https://www.postgresql.org/ftp/odbc/versions/

A dmg is going to need to come from someone else. That triggered a 
thought. You might want to take a look at:


https://www.bigsql.org/postgresql/installers.jsp


Which means I'm forced to build it myself...




2. Is PostgreSQL ODBC driver works with iODBC?



See above.


Thank you.





Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL ODBC driver for OSX 10.8

2016-12-01 Thread Adrian Klaver

On 12/01/2016 08:01 AM, Igor Korot wrote:

 Hi, ALL,
This is my first post here.
I have actually 2 questions which are kind of related.

1. Is there an OSX version of the ODBC PostgreSQL driver?
1a. If there is none - are there instructions on how to build and install it?


https://odbc.postgresql.org/docs/unix-compilation.html


2. Is PostgreSQL ODBC driver works with iODBC?


See above.



Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to migrate from PGSQL 9.3 to 9.6

2016-11-30 Thread Adrian Klaver

On 11/30/2016 10:27 AM, Sinclair, Ian D (Ian) wrote:

The actual upgrade will be that I have an existing server running 9.3 on
RHEL 6.2. We’ll have to do a backup of the database, then deploy a new
OVA to get to RHEL 7 with PG 9.6, then restore the database. Are there
any specific steps that I’ll have to execute in the scripts that will
restore the database to get the 9.3 data to work with 9.6?


Dump the 9.3 database using the pg_dump(all) from 9.6.

Go here:

https://www.postgresql.org/docs/9.6/static/release.html

and read the notes for 9.4, 9.5, 9.6:

https://www.postgresql.org/docs/9.6/static/release-9-4.html
https://www.postgresql.org/docs/9.6/static/release-9-5.html
https://www.postgresql.org/docs/9.6/static/release-9-6.html

In particular the section:

E.8.2. Migration to Version X.X
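
The dump/restore itself is then something like this sketch (the binary
path assumes the PGDG packages; adjust to your layout):

/usr/pgsql-9.6/bin/pg_dumpall -h old-host -p 5432 -U postgres -f cluster.sql
/usr/pgsql-9.6/bin/psql -U postgres -f cluster.sql postgres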





Since all my users will be getting to 9.6 from a new system deployment,
not by any sort of partial upgrade, do I need to worry about trying to
get a correct version of the setup script with upgrade handling? (Other
than whatever I need for the answer above.)



Thanks







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index size

2016-11-30 Thread Adrian Klaver

On 11/30/2016 03:38 AM, Samuel Williams wrote:

Is there any reason why, for the same data set and same indexes, the
data in Postgres would be significantly larger than in
InnoDB/MariaDB?



Hard to say without the table/index definitions and some indication of 
what the data is.


What version of Postgres?

Also how are you measuring the size of the data set?

Have you looked at/tried the functions here?:

https://www.postgresql.org/docs/9.5/static/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT
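
For instance (the table name is a placeholder):

SELECT pg_size_pretty(pg_total_relation_size('mytable'));  -- heap + indexes + toast
SELECT pg_size_pretty(pg_relation_size('mytable'));        -- heap only
SELECT pg_size_pretty(pg_indexes_size('mytable'));         -- all indexes on the table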





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index size

2016-11-29 Thread Adrian Klaver

On 11/29/2016 03:30 PM, Samuel Williams wrote:

I'd like to understand a bit more about indexes in PG.


https://www.postgresql.org/docs/9.5/static/indexam.html

"An index is effectively a mapping from some data key values to tuple 
identifiers, or TIDs, of row versions (tuples) in the index's parent 
table. A TID consists of a block number and an item number within that 
block (see Section 63.6). This is sufficient information to fetch a 
particular row version from the table. ..."
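
You can look at those TIDs directly, for example:

SELECT ctid, * FROM some_table LIMIT 3;
-- a ctid of (0,1) means block number 0, item number 1 within that block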




When I have a row in a table, and an index, say, for a single column,
does that duplicate the entire row on disk? Or is there some kind of
id lookup involved?

Thanks.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-29 Thread Adrian Klaver

On 11/29/2016 01:15 AM, Thomas Güttler wrote:



Am 28.11.2016 um 16:01 schrieb Adrian Klaver:

On 11/28/2016 06:28 AM, Thomas Güttler wrote:

Hi,

PostgreSQL is rock solid and one of the most reliable parts of our
toolchain.

   Thank you

Up to now, we don't store files in PostgreSQL.

I was told, that you must not do this  But this was 20 years ago.


I have 2.3TBytes of files. File count is 17M

Up to now we use rsync (via rsnapshot) to backup our data.

But it takes longer and longer for rsync to detect
the changes. Rsync checks many files. But daily only
very few files really change. More than 99.9% don't.


Are you rsyncing over all the files at one time?


Yes, we rsync every night.


Or do break it down into segments over the day?


No, up to now it is one rsync run.


Unless everything is in a single directory, it would seem you could 
break this down into smaller jobs that are spread over the day.
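
As a sketch, with one cron entry per segment (paths are hypothetical):

# 02:00 - first segment
rsync -a /srv/files/seg1/ backup:/srv/backup/seg1/
# 08:00 - second segment
rsync -a /srv/files/seg2/ backup:/srv/backup/seg2/
# ... and so on, so no single run has to scan all 17M files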





The closest I remember is Bacula:

http://blog.bacula.org/documentation/documentation/

It uses a hybrid solution where the files are stored on a file server
and data about the files is stored in a database.
Postgres is one of the database backends it can work with.


I heard of Bacula, but I was not aware of the fact that it can use
postgres for the metadata.



I have the hope, that it would be easier to backup only the files which
changed.


Backup to where and how?
Are you thinking of using replication?


No, replication is not the current issue. Plain old backup is my current
issue.

Backup where and how? ... That's what this question is about :-)




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-29 Thread Adrian Klaver

On 11/29/2016 01:50 AM, Thomas Güttler wrote:



Am 29.11.2016 um 01:52 schrieb Mike Sofen:

From: Thomas Güttler   Sent: Monday, November 28, 2016 6:28 AM

...I have 2.3TBytes of files. File count is 17M

Since we already store our structured data in postgres, I think about
storing the files in PostgreSQL, too.

Is it feasible to store file in PostgreSQL?

---

I am doing something similar, but in reverse.  The legacy mysql
databases I’m converting into a modern Postgres data
model have very large genomic strings stored in 3 separate columns.
Out of the 25 TB of legacy data storage (in 800
dbs across 4 servers, about 22b rows), those 3 columns consume 90% of
the total space, and they are just used for
reference, never used in searches or calculations.  They range from 1k
to several MB.



Since I am collapsing all 800 dbs into a single PG db, being very
smart about storage was critical.  Since we’re also
migrating everything to AWS, we’re placing those 3 strings (per row)
into a single json document and storing the
document in S3 bins, with the pointer to the file being the globally
unique PK for the row…super simple.  The app tier
knows to fetch the data from the db and large string json from the S3
bins.  The retrieval time is surprisingly fast,
this is all real time web app stuff.



This is a model that could work for anyone dealing with large objects
(text or binary).  The nice part is, the original
25TB of data storage drops to 5TB – a much more manageable number,
allowing for significant growth, which is on the horizon.


Thank you Mike for your feedback.

Yes, I think I will drop my idea. Encoding binary (the file content) to
text and decoding to binary again makes no sense. I was not aware that
this is needed.

I guess I will use some key-to-blob store like s3. AFAIK there are open
source s3 implementations available.


Just be aware that doing deltas over file changes, like rsync, while 
possible is more convoluted and time/resource consuming with something 
like s3.




Thank you all for your feeback!

 Regards, Thomas







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Adrian Klaver

On 11/28/2016 07:44 AM, Melvin Davidson wrote:







*To clarify, you cannot dump the pg_catalog schema. It is the main
control of how all other objects are


Actually you can. I would not have thought so, but I tried Achilleas's 
suggestion and it worked. Example:


--
-- Name: pg_aggregate; Type: TABLE; Schema: pg_catalog; Owner: postgres
--

CREATE TABLE pg_aggregate (
aggfnoid regproc NOT NULL,
aggkind "char" NOT NULL,
aggnumdirectargs smallint NOT NULL,
aggtransfn regproc NOT NULL,
aggfinalfn regproc NOT NULL,
aggmtransfn regproc NOT NULL,
aggminvtransfn regproc NOT NULL,
aggmfinalfn regproc NOT NULL,
aggfinalextra boolean NOT NULL,
aggmfinalextra boolean NOT NULL,
aggsortop oid NOT NULL,
aggtranstype oid NOT NULL,
aggtransspace integer NOT NULL,
aggmtranstype oid NOT NULL,
aggmtransspace integer NOT NULL,
agginitval text,
aggminitval text
);

COPY pg_aggregate (aggfnoid 


and you do get errors:

pg_dump: WARNING: typtype of data type "any" appears to be invalid
pg_dump: WARNING: typtype of data type "anyarray" appears to be invalid
pg_dump: WARNING: typtype of data type "anyelement" appears to be invalid
pg_dump: WARNING: typtype of data type "anyenum" appears to be invalid
pg_dump: WARNING: typtype of data type "anynonarray" appears to be invalid
pg_dump: WARNING: typtype of data type "anyrange" appears to be invalid
pg_dump: WARNING: typtype of data type "cstring" appears to be invalid



Still not sure why you would want to, but you can.



stored in the cluster. There is no point in dumping it, and all its
tables and views are already clearly
documented.
https://www.postgresql.org/docs/9.6/static/catalogs.html

pg_catalog itself is generated with the initdb command when a new
postgresql cluster is generated.
https://www.postgresql.org/docs/9.6/static/creating-cluster.html
https://www.postgresql.org/docs/9.6/static/app-initdb.html*

--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-28 Thread Adrian Klaver

On 11/28/2016 06:28 AM, Thomas Güttler wrote:

Hi,

PostgreSQL is rock solid and one of the most reliable parts of our
toolchain.

   Thank you

Up to now, we don't store files in PostgreSQL.

I was told, that you must not do this  But this was 20 years ago.


I have 2.3TBytes of files. File count is 17M

Up to now we use rsync (via rsnapshot) to backup our data.

But it takes longer and longer for rsync to detect
the changes. Rsync checks many files. But daily only
very few files really change. More than 99.9% don't.


Are you rsyncing over all the files at one time?

Or do break it down into segments over the day?



Since we already store our structured data in postgres, I think
about storing the files in PostgreSQL, too.

What is the current state of the art?


I don't know.



Is it feasible to store file in PostgreSQL?


Yes, you can store a file in Postgres. Still I am not sure that stuffing 
17M files into Postgres is going to perform any better than dealing with 
them on the file system. In fact in Postgres they would still be on 
the file system but with an extra layer above them.
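
For reference, a minimal sketch of the in-database variant (bytea column;
pg_read_binary_file is server-side and superuser-only here, so this is an
illustration, not a recommendation):

CREATE TABLE file_store (
    path    text PRIMARY KEY,
    content bytea NOT NULL
);
-- read a file the server can see and store it inline
INSERT INTO file_store
SELECT 'report.pdf', pg_read_binary_file('report.pdf');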




Are there already projects which use PostgreSQL as storage backend?


The closest I remember is Bacula:

http://blog.bacula.org/documentation/documentation/

It uses a hybrid solution where the files are stored on a file server 
and data about the files is stored in a database. Postgres is one of the 
database backends it can work with.




I have the hope, that it would be easier to backup only the files which
changed.


Backup to where and how?
Are you thinking of using replication?



Regards,
   Thomas Güttler


Related question at rsnapshot mailing list:
https://sourceforge.net/p/rsnapshot/mailman/rsnapshot-discuss/thread/57a1a2f3.5090...@thomas-guettler.de/







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump system catalog

2016-11-28 Thread Adrian Klaver

On 11/28/2016 03:40 AM, Juliano wrote:

I did not know that system catalogs could be saved as a schema, thanks
Achilleas.

So, I would like to get the data and the schema of the system catalogs.
How can I include the data in a dump file with system schemas? (Does the
parameter -a make sense together with the parameter --schema?)


No.

Use Achilleas's command and see what happens.
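
Presumably something along these lines (the exact command is not quoted in
this thread, so treat this as a sketch):

pg_dump -U postgres --schema=pg_catalog -f catalog.sql yourdb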





Regards,
Juliano



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread Adrian Klaver

On 11/26/2016 11:33 AM, twoflower wrote:

Ah, it didn't occur to me to try hitting ENTER. Still, this would be
fine for manually running the script, but as I am restarting the server
as a part of SaltStack config, I need pg_ctl to terminate without me
intervening.

The solution with the -l argument is fine, I think. Even if I use it,
the server then logs its output into the file I specified in
postgresql.conf (which I would not expect, by the way).


You are seeing the two stages of logging. The first stage, that you are 
capturing with -l, is the early logging before the logging configuration 
you set up in postgresql.conf kicks in. This is why you see this:


-2016-11-26 11:16:40.737 PST-0 LOG:  redirecting log output to logging 
collector process
-2016-11-26 11:16:40.737 PST-0 HINT:  Future log output will appear in 
directory "pg_log".


After the above the rest of the logging goes to whatever you set up in 
postgresql.conf, assuming it was set up. Otherwise it will go to the 
stdout of the terminal.
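
The second stage is controlled by the usual postgresql.conf parameters,
for example:

logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

With the collector on, the file you pass to -l only ever receives those
few startup lines.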


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread Adrian Klaver

On 11/26/2016 11:08 AM, twoflower wrote:

Yes, I am using that, thank you. But just by themselves these settings
do not make pg_ctl terminate.


You should see something like:

postgres@panda:~> /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ 
restart

waiting for server to shut down. done
server stopped
server starting
postgres@panda:~> -2016-11-26 11:16:40.737 PST-0 LOG:  redirecting log 
output to logging collector process
-2016-11-26 11:16:40.737 PST-0 HINT:  Future log output will appear in 
directory "pg_log".


If you hit Enter you get:

postgres@panda:~>

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread Adrian Klaver
On 11/26/2016 07:47 AM, twoflower wrote:
> Adrian Klaver-4 wrote
>> You also specify a log file to pg_ctl  by using -l:
>>
>> https://www.postgresql.org/docs/9.5/static/app-pg-ctl.html
> 
> This did the trick, thank you!

Great. Still I would suggest using the settings here:

https://www.postgresql.org/docs/9.5/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHERE

Then you will get logging however you start Postgres.




-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread Adrian Klaver

On 11/26/2016 07:15 AM, twoflower wrote:

I am restarting the server using the following:

su postgres -c "/usr/lib/postgresql/9.6/bin/pg_ctl -D /var/lib/postgresql/9.6/main 
-o '-c config_file=/etc/postgresql/9.6/main/postgresql.conf' restart"

The server is restarted properly, but the the command never finishes.
After the restart, it displays the server's logfile. Is this intended?


Do have logging set up in postgresql.conf to go to a file?

Even if you do have logging set to go to a file you will probably see a 
couple of lines telling you it is redirecting to the file.


You can also specify a log file to pg_ctl by using -l:

https://www.postgresql.org/docs/9.5/static/app-pg-ctl.html


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreDB stores table name is lower case

2016-11-25 Thread Adrian Klaver
On 11/23/2016 11:52 PM,  wrote:
> Hello,
>
> I'm a Postgres fan.
> Now I have a problem: the table name is stored in lower case,
> but I want to change it into upper case. Is there a simple method,
> such as modifying a parameter?

https://www.postgresql.org/docs/9.5/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS

"Quoting an identifier also makes it case-sensitive, whereas unquoted names are 
always folded to lower case. For example, the identifiers FOO, foo, and "foo" 
are considered the same by PostgreSQL, but "Foo" and "FOO" are different from 
these three and each other. (The folding of unquoted names to lower case in 
PostgreSQL is incompatible with the SQL standard, which says that unquoted 
names should be folded to upper case. Thus, foo should be equivalent to "FOO" 
not "foo" according to the standard. If you want to write portable applications 
you are advised to always quote a particular name or never quote it.)"

There is no parameter to set. If you want upper case names then you need to 
quote them:

test[5432]=# create table FOLD_LOWERCASE ();
CREATE TABLE
test[5432]=# create table "QUOTE_UPPERCASE" ();
CREATE TABLE

test[5432]=# \d fold_lowercase 
Table "public.fold_lowercase"
 Column | Type | Modifiers 
+--+---

test[5432]=# \d FOLD_LOWERCASE
Table "public.fold_lowercase"
 Column | Type | Modifiers
+--+---

test[5432]=# \d QUOTE_UPPERCASE
Did not find any relation named "QUOTE_UPPERCASE".

test[5432]=# \d quote_uppercase
Did not find any relation named "quote_uppercase".

test[5432]=# \d "QUOTE_UPPERCASE"
Table "public.QUOTE_UPPERCASE"
 Column | Type | Modifiers
+--+---

The above also shows what happens when you quote: you are committed 
to that case.

If you still want to do this then:

test[5432]=# ALTER table fold_lowercase rename to "MAKE_UPPERCASE";
ALTER TABLE

test[5432]=# \d "MAKE_UPPERCASE"
Table "public.MAKE_UPPERCASE"
 Column | Type | Modifiers 
+--+---





>   
> Thank you!
> 
>   
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_am access in simple transaction?

2016-11-25 Thread Adrian Klaver

On 11/25/2016 07:04 AM, pinker wrote:

Adrian Klaver-4 wrote

I can not replicate using 50 clients instead of 2000. I suspect it either
has to do with the extreme number of clients or it is an artifact
from some other process.


And I have replicated it with 50 clients as well... lsof output:

51 data/base/13328/2601

command: watch 'lsof -e /run/user/1001/gvfs +D data|awk "{print
\$NF}"|sort|uniq -c|sort -nr'

Maybe our versions of PostgreSQL differ? I use "PostgreSQL 9.5.4 on
x86_64-redhat-linux-gnu, compiled by gcc (GCC) 6.1.1 20160621 (Red Hat
6.1.1-3), 64-bit"


No, the versions are the same. It is a PEBKAC issue: I was logged in as 
the wrong user. Running your watch command (minus the -e part, which my 
version of lsof does not understand) while the Bash script is running 
gets the same results. A little digging found that pg_am is used in psql by 
describe.c and tab-complete.c:


https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src/bin/psql;h=6b1147ce68192ca381ff4b8221b28fc904176190;hb=HEAD

So are other system catalogs, though, so I am not sure why this one in particular is showing up.






--
View this message in context: 
http://postgresql.nabble.com/pg-am-access-in-simple-transaction-tp5931974p5931991.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-25 Thread Adrian Klaver

On 11/24/2016 02:14 PM, azhwkd wrote:


Adrian Klaver <adrian.kla...@aklaver.com
<mailto:adrian.kla...@aklaver.com>> wrote on Thu., 24 Nov. 2016 at
22:34:

On 11/24/2016 01:23 PM, azhwkd wrote:
> It should not be possible because a group does not return to the
> update pool before the update has finished.

So what is this 'update pool' and what is driving/using it?

In other words how is the determination of the parameters done?

To be more specific, the implication is that a group id can be reused so
what determines that?


The application is written in go. Every group ID has its own go routine
and the routine is blocked until the SQL statement returns.
The update process starts with a check to an external API endpoint and
if there is new data available the routine is downloading it, parsing it
and inserting the data into 2 tables. Once that is done, the routine
continues to execute the statement in question using the data it
inserted before for the calculation. Only once this finishes will the
routine start over again.



> I watched the queries in a postgres client and there was no
overlap I could see.

Was this a visual inspection or did you dump the results of the various
query/parameter combinations into tables and do an SQL comparison?


I inspected it visually and also dumped all variables into a file
directly from the application.



> I don't really know what to make from this behavior, sometimes when I
> start the application a few updates go through and eventually it will
> lock up completely and sometimes it locks up immediately - always with

Is there a common thread with regard to the parameters in use when
things lock up?


Do you mean if it always locks on the same parameters? If so then it
does not, sadly



Yes, that would have been too easy. I'm out of ideas for the moment. Rob 
Stone's post looks promising though.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_am access in simple transaction?

2016-11-25 Thread Adrian Klaver

On 11/25/2016 04:40 AM, pinker wrote:

Hi,
I'm doing simple tests with lsof on data catalog with bash script:

#!/bin/bash
for i in {0..2000}
do
psql -U postgres -c '*BEGIN; select pg_sleep(30); COMMIT*'&
done

and i measure number of open files and what files are affected by specific
command.
Lsof has shown me that the only file that was open during this test was:
data/base/13328/2601, which is pg_catalog table pg_am.

Why postgres opens a table that stores information about relation (index)
access methods, while none index evidently is in use?


I can not replicate using 50 clients instead of 2000. I suspect it either 
has to do with the extreme number of clients or it is an artifact 
from some other process.











--
View this message in context: 
http://postgresql.nabble.com/pg-am-access-in-simple-transaction-tp5931974.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-24 Thread Adrian Klaver

On 11/24/2016 01:23 PM, azhwkd wrote:

It should not be possible because a group does not return to the
update pool before the update has finished.


So what is this 'update pool' and what is driving/using it?

In other words how is the determination of the parameters done?

To be more specific, the implication is that a group id can be reused so 
what determines that?



I watched the queries in a postgres client and there was no overlap I could see.


Was this a visual inspection or did you dump the results of the various 
query/parameter combinations into tables and do an SQL comparison?



I don't really know what to make from this behavior, sometimes when I
start the application a few updates go through and eventually it will
lock up completely and sometimes it locks up immediately - always with


Is there a common thread with regard to the parameters in use when 
things lock up?



heap_hot_search_buffer using ~20% of all CPU time on the system.

2016-11-24 19:14 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:

On 11/23/2016 10:41 PM, azhwkd wrote:


The group ID is part of the primary key of the group_history table. My
understanding is that two INSERTs with different group IDs should not
collide in this case, or am I wrong in thinking this?






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] Re: [SPAM] Re: [GENERAL] Best practices to manage custom statistics

2016-11-24 Thread Adrian Klaver

On 11/24/2016 09:59 AM, Moreno Andreo wrote:

Hi Adrian,
First of all: now I've seen that not all fields touched by WHERE




Aah, I get it now. You are refreshing forms every 60 seconds over 350
users each with their own database. Actually worse than that, as there
is user overlap over databases(up to 10 per user), so the same
database can be hit multiple times at a given refresh. Seems physics
is at work here as you have already pointed out. Namely fetching all
that data at regular intervals taxes the bandwith as well as the
CPU/storage.

Exactly. At the moment the bottleneck is I/O (running on a VM over a
RAID-5 with 15kRPM SAS), in the next weeks I'm up to prepare a test
server with SSDs ("attached SSD" on Google Cloud Platform), that's
pretty much expensive, but data sheets point out a very good IOPS rate
(rising as size, and price, rises). CPU is not a problem (almost never
over 70-80%, average is about 40% over a 4 core server)


High levels solutions that come to mind to spread the load out:

1) Replication to spread data across multiple machines.
Or just split the databases over multiple non-replicated Postgres
instances on separate machines

Already in place, but only for owner's database. Other databases can't
be "downloaded" essentially for privacy matters.


Not sure I follow: a user can see their database and up to 9 other users' 
databases. I am not seeing how replication would be any less 'private' than 
that, especially if the databases are replicated to machines the company 
owns.




2) Caching results from each individual database so subsequent calls
for the information do not touch the database. You already touched on
this with your counter table.

Yes, and I'm all ears for suggestions on what can be done and whether there
are hidden caveats...


The usual I would guess:

Storage for the cached data.
Maintaining the cache.
"There are only two hard things in Computer Science: cache invalidation 
and naming things."


-- Phil Karlton


I heard that stored procedures (and triggers, I suppose) are faster to
execute than the same "operation" coming from outside because the
backend does not have to translate it, and in this case can be executed only
when needed


Well that, and that less data has to flow back and forth across the 
network. In your case you said bandwidth is less of an issue than 
disk I/O on the server. In-database procedures are still going to 
require I/O on the server.




3) Spreading out the refresh interval. Not sure if the 60 second
interval is synced across users. Still maybe setting different refresh
intervals and/or changing it on the fly when load increases. A back-off
equation, so to speak.

I'm afraid that if we have a moment when load is extremely high (I
have seen top measuring 45 (5-minute basis) on a 4-core machine; it
was even hard to type in the PuTTY console!!), we won't have any upgrade.


Is there any predictability (even a rough one) about when the load spikes 
occur? Something you could use to spread the load out.



Yes, in these conditions we're still not having upgrades, but my goal is
to find a way to reduce overall query load to have a good (V-)hardware
design so I can have better performance with lower cost.
"Pushing" data would be the best way... but I need to have a look to
LISTEN/NOTIFY, and what it means with Npgsql and JDBC (one thing I
didn't specify is that between user and database we have a web server,
in LAN with PgSQL server, hosting web services for the user to interact


?
http://www.memcached.org/
http://redis.io/
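
The moving parts of LISTEN/NOTIFY are small; a sketch (channel and payload
names are made up):

-- session A, e.g. a pooled connection the web server holds open
LISTEN counters_changed;

-- session B, e.g. fired from a trigger after a write
SELECT pg_notify('counters_changed', 'db42');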


with database. Maybe web server can "proxy" connections (keep them open)
and the listen/notify method can be achieved (yes, it's a bit of hard
work to rewrite all connection policies, but if it's necessary...)

I think that a combination of the last two should be the best.
Tell me what you think.


I think there are a lot of moving parts to this and more pop up with 
each post:) I would not even know how to start to compose a useful 
answer to what is a complex problem that also looks to be in the process 
of fairly major hardware changes. All I can suggest is that you create a 
test setup and start doing some incremental changes, using some of the 
suggestions already provided, with tests to measure whether the changes 
actually are a benefit.








Except when they are wrong:) Still been there.

Don't tell me :-)






Thanks in advance and sorry for the long message (but I had to
explain
such a complex thing)
Moreno.-












--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-24 Thread Adrian Klaver

On 11/23/2016 10:41 PM, azhwkd wrote:

The group ID is part of the primary key of the group_history table. My
understanding is that two INSERTs with different group IDs should not
collide in this case, or am I wrong in thinking this?


After fresh coffee:

In your first post you had this:
"(In the application this
query is run up to 10 times in parallel with different parameters)"

where the parameters look to be a timestamp and a group id.

Is it possible that the combination of parameters over the 10 different 
queries is not actually selecting rows in each result set that are 
unique over (group,id,sub_category,"date","hour") between result sets?
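
One way to test that suspicion: collect the rows all 10 queries would
insert into one staging table (staged_rows here is hypothetical) and look
for primary-key collisions:

SELECT "group", id, sub_category, "date", "hour", count(*)
FROM staged_rows
GROUP BY "group", id, sub_category, "date", "hour"
HAVING count(*) > 1;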






The table definition for group_history is the following:

CREATE TABLE public.group_history (
group int4 NOT NULL,
id int4 NOT NULL,
sub_category int4 NOT NULL,
"date" date NOT NULL,
aa int8 NOT NULL,
ab int8 NOT NULL,
bb int8 NOT NULL,
ba int8 NOT NULL,
quantity int8 NOT NULL,
"hour" int2 NOT NULL,
CONSTRAINT group_history_pk PRIMARY KEY (group,id,sub_category,"date","hour")
)
WITH (
OIDS=FALSE
);

Kind regards,
Sebastian





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-24 Thread Adrian Klaver

On 11/23/2016 10:41 PM, azhwkd wrote:

The group ID is part of the primary key of the group_history table. My
understanding is that two INSERTs with different group IDs should not
collide in this case, or am I wrong in thinking this?

The table definition for group_history is the following:

CREATE TABLE public.group_history (
group int4 NOT NULL,
id int4 NOT NULL,
sub_category int4 NOT NULL,
"date" date NOT NULL,
aa int8 NOT NULL,
ab int8 NOT NULL,
bb int8 NOT NULL,
ba int8 NOT NULL,
quantity int8 NOT NULL,
"hour" int2 NOT NULL,
CONSTRAINT group_history_pk PRIMARY KEY (group,id,sub_category,"date","hour")
)
WITH (
OIDS=FALSE
);


Me slaps head

Windy night last night + lack of sleep + insufficient caffeine = Not 
seeing the table definition before hitting Send on previous post.




Kind regards,
Sebastian

2016-11-24 0:28 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:

On 11/23/2016 01:52 PM, azhwkd wrote:


Greetings!

The parallel calls should not be working on the same row. Each query
services a different group ID on its own and there is no overlap.



Except the INSERT query in the trigger function is working on dates not
group ids.



Kind regards,
Sebastian


Tom Lane <t...@sss.pgh.pa.us <mailto:t...@sss.pgh.pa.us>> wrote on Wed.,
23 Nov. 2016 at 17:47:

azh...@gmail.com <mailto:azh...@gmail.com> writes:
> I have a query which if run alone usually completes in about 300ms.
> When run in my application this query constantly locks up and bogs
> down all connections of the connection pool (In the application this
> query is run up to 10 times in parallel with different parameters).
> What's really weird is that I can re-run one of the hung queries
from
> the command line while it's hung and it will complete as expected
> while the hung queries continue to use 100% CPU time.

Judging from the EXPLAIN timing, most of the work is in the trigger,
which leads me to wonder if the parallel calls are likely to be
fighting
over inserting/updating the same row in the group_history partition
tables.  Or are you certain that they should be hitting different
rows?

    regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-24 Thread Adrian Klaver

On 11/23/2016 10:41 PM, azhwkd wrote:

The group ID is part of the primary key of the group_history table. My
understanding is that two INSERTs with different group IDs should not
collide in this case, or am I wrong in thinking this?


The suspicion is that this:

insert into group_history ("group", id, sub_category, "date", aa, ab,
bb, ba, quantity, "hour") 

is not the problem. That this (from the trigger function) is:

execute 'INSERT INTO public.' || quote_ident(_tablename) || ' VALUES ($1.*)
on conflict ("group", id, sub_category, "when", "hour") do




where _tablename is:

_tablename := 'group_history_'||_startdate;


It would be nice to see the schema definitions for group_history and at 
least one of the group_history_startdate tables.




The table definition for group_history is the following:

CREATE TABLE public.group_history (
group int4 NOT NULL,
id int4 NOT NULL,
sub_category int4 NOT NULL,
"date" date NOT NULL,
aa int8 NOT NULL,
ab int8 NOT NULL,
bb int8 NOT NULL,
ba int8 NOT NULL,
quantity int8 NOT NULL,
"hour" int2 NOT NULL,
CONSTRAINT group_history_pk PRIMARY KEY (group,id,sub_category,"date","hour")
)
WITH (
OIDS=FALSE
);

Kind regards,
Sebastian





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-24 Thread Adrian Klaver

On 11/23/2016 10:26 PM, azhwkd wrote:

I'm sorry. I worded this quite poorly. I meant to say that there were
no log lines added to the postgres logfile at the time.
I hope these are the settings you were referring to (I did not change
anything in the config files vs. the default)

log_destination = 'stderr'
logging_collector = off
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_file_mode = 0600
log_truncate_on_rotation = off
log_rotation_age = 1d
log_rotation_size = 10MB
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
event_source = 'PostgreSQL'
client_min_messages = notice
log_min_messages = warning
log_min_error_statement = error
log_min_duration_statement = -1
debug_print_parse = off
debug_print_rewritten = off
debug_print_plan = off
debug_pretty_print = on
log_checkpoints = off
log_connections = off
log_disconnections = off
log_duration = off
log_error_verbosity = default
log_hostname = off
log_line_prefix = '%t '
log_lock_waits = off
log_statement = 'none'
log_temp_files = -1
log_timezone = 'UTC'


While troubleshooting this I would enable log_connections and 
log_disconnections and change log_statement to 'mod'. This will help you 
see when and what is happening when you do the concurrent queries. FYI, 
you need to reload the server for the changes to take effect.
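
As a concrete illustration (a minimal sketch, assuming 9.4+ where ALTER 
SYSTEM is available; you can of course edit postgresql.conf instead):

ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
ALTER SYSTEM SET log_statement = 'mod';
SELECT pg_reload_conf();  -- reload the config without a restart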




I'm sorry, it seems like I copied the trigger definition from the
wrong table (item_history is a test table I created yesterday while
trying things which is exactly the same as group_history just a
different name).
The trigger on the group_history table is exactly the same though,
except for the table name.

Kind regards,
Sebastian




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extension compatibility between postgresql minor version

2016-11-24 Thread Adrian Klaver

On 11/24/2016 09:01 AM, David Richer wrote:

I am not an expert but I am assuming that the native part of the extension 
(pg_freespacemap.so) is compiled with postgresql headers and libs.


The part that confused me is the last sentence, where you ask about using 
a 9.3.14 extension with a 9.3.15 server, which is the opposite of 
everything you said previously, in particular the prior sentence.
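
For what it is worth, a quick way to confirm which minor version the 
running server actually is, before and after installing the contrib 
package:

SELECT version();
SHOW server_version;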




On 2016-11-24, 11:55 AM, "Adrian Klaver" <adrian.kla...@aklaver.com> wrote:

On 11/24/2016 07:40 AM, David Richer wrote:
> Hi guys,
>
>
>
> I want to check my production server for the free space map issue.
> https://wiki.postgresql.org/wiki/Free_Space_Map_Problems
>
> I am on Centos 6 currently running 9.3.14 using pgdg repository.
>
>
>
> I need to install postgresql93-contrib to get the pg_freespacemap
> extension. Only postgresql93-contrib-9.3.15 package is available in pgdg
> repository. It will force me to upgrade my postgresql to 9.3.15 which
> will restart my production postgresql server which I am not yet ready to
> do, I want to know if we are affected by the free space map issue first.
>
>
>
> My question: Would it be safe to install postgresql93-contrib-9.3.15 and
> not upgrade my postgresql server? In other words, would an extension
> compiled with 9.3.14 work with a postgresql 9.3.15 server?

I was following you until I got to the last sentence. Is that what you
really meant to say?

>
>
>
> Thanks,
    >
> Davud
>


--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extension compatibility between postgresql minor version

2016-11-24 Thread Adrian Klaver

On 11/24/2016 07:40 AM, David Richer wrote:

Hi guys,



I want to check my production server for the free space map issue.
https://wiki.postgresql.org/wiki/Free_Space_Map_Problems

I am on Centos 6 currently running 9.3.14 using pgdg repository.



I need to install postgresql93-contrib to get the pg_freespacemap
extension. Only postgresql93-contrib-9.3.15 package is available in pgdg
repository. It will force me to upgrade my postgresql to 9.3.15 which
will restart my production postgresql server which I am not yet ready to
do, I want to know if we are affected by the free space map issue first.



My question: Would it be safe to install postgresql93-contrib-9.3.15 and
not upgrade my postgresql server? In other words, would an extension
compiled with 9.3.14 work with a postgresql 9.3.15 server?


I was following you until I got to the last sentence. Is that what you 
really meant to say?






Thanks,

Davud




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-23 Thread Adrian Klaver

On 11/23/2016 01:52 PM, azhwkd wrote:

Greetings!

The parallel calls should not be working on the same row. Each query
services a different group ID on its own and there is no overlap.


Except the INSERT query in the trigger function is working on dates not 
group ids.




Kind regards,
Sebastian


Tom Lane <t...@sss.pgh.pa.us <mailto:t...@sss.pgh.pa.us>> schrieb am Mi.,
23. Nov. 2016 um 17:47 Uhr:

azh...@gmail.com <mailto:azh...@gmail.com> writes:
> I have a query which if run alone usually completes in about 300ms.
> When run in my application this query constantly locks up and bogs
> down all connections of the connection pool (In the application this
> query is run up to 10 times in parallel with different parameters).
> What's really weird is that I can re-run one of the hung queries from
> the command line while it's hung and it will complete as expected
> while the hung queries continue to use 100% CPU time.

Judging from the EXPLAIN timing, most of the work is in the trigger,
which leads me to wonder if the parallel calls are likely to be fighting
over inserting/updating the same row in the group_history partition
tables.  Or are you certain that they should be hitting different rows?

        regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-23 Thread Adrian Klaver

On 11/23/2016 01:47 PM, azhwkd wrote:

Greetings!

Yes I had a look at the logfiles but there is not a single logfile
generated when I'm reproducing this issue.


We are talking about the Postgres logfile, correct?

I have to believe a log file exists, but maybe no entries at that time.

What are your logging settings?


Concerning locks, I used the queries
at https://wiki.postgresql.org/wiki/Lock_Monitoring and they came up empty.
The group_history table and its sub-tables do not have any foreign keys
as they only contain calculated data and the source data is removed
after some time.
The trigger which calls the partition function below is defined as follows:

create trigger group_history_trigger
before insert on item_history
for each row execute procedure group_history_partition_function();


This trigger is on the table item_history, not group_history. Is that 
correct?


If so how does the item_history table fit into this?



Kind regards,
Sebastian



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] Re: [GENERAL] Best practices to manage custom statistics

2016-11-23 Thread Adrian Klaver

On 11/23/2016 05:24 AM, Moreno Andreo wrote:

Il 15/11/2016 18:19, Adrian Klaver ha scritto:

On 11/15/2016 07:39 AM, Moreno Andreo wrote:

Sorry for the late reply... I'm in some quite rough days

Il 08/11/2016 21:28, Adrian Klaver ha scritto:

On 11/08/2016 12:13 PM, Moreno Andreo wrote:

[...]

In your experience, would this approach help me lower server load?
Are there any other approaches I can try?






So does the user need only their data, or do they need the other users'
data also?

they are also able to download their group mates' data (stats,
numbers), if they want. Numbers can be confusing because every user can
have more than one workplace.






How can they be working with 'old' data? The queries you are running
are compiling stats on data that exists at the time they are run, and at
any point in time between stats runs the user is working with current
data regardless of what the last stats say.

Since we are on ADO.NET (with Npgsql) and we don't keep connections open
(query, fill a structure and disconnect), in the time slice between two
updates they will have both data and stats that are not "real" in that
moment...
I'll try to make an example
You and I are teammates, each working at his own place (several miles
away). You read data from my database and get both stats and work data.
Suddenly I make a change. This change won't be visible to you until you
refresh the data (that used to be pressing the "Update" button; now it's
automated every 60 secs).
Hope it's clearer now...



Aah, I get it now. You are refreshing forms every 60 seconds over 350 
users, each with their own database. Actually worse than that, as there is 
user overlap over databases (up to 10 per user), so the same database can 
be hit multiple times at a given refresh. Seems physics is at work here, 
as you have already pointed out. Namely, fetching all that data at 
regular intervals taxes the bandwidth as well as the CPU/storage.


High levels solutions that come to mind to spread the load out:

1) Replication to spread data across multiple machines.
Or just split the databases over multiple non-replicated Postgres 
instances on separate machines


2) Caching results from each individual database so subsequent calls for 
the information do not touch the database. You already touched on this 
with your counter table; a sketch of this idea follows below.


3) Spreading out the refresh interval. Not sure if the 60 second 
interval is synced across users. Still, maybe setting different refresh 
intervals and/or changing them on the fly when load increases. A back-off 
equation, so to speak.
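
To sketch option 2 (assuming 9.5+ for ON CONFLICT; all names here are 
hypothetical, and some_table stands in for the real 4-join query):

CREATE TABLE stats_cache (
    user_id   int PRIMARY KEY,
    row_count bigint NOT NULL,
    refreshed timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION refresh_stats_cache(p_user int) RETURNS void AS $$
BEGIN
    INSERT INTO stats_cache (user_id, row_count)
    SELECT p_user, count(*)
    FROM some_table              -- stand-in for the real 4-join query
    WHERE user_id = p_user
    ON CONFLICT (user_id) DO UPDATE
        SET row_count = EXCLUDED.row_count,
            refreshed = now();
END;
$$ LANGUAGE plpgsql;

Clients would then read one cached row instead of re-running the joins on 
every refresh:

SELECT row_count, refreshed FROM stats_cache WHERE user_id = 42;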





Except when they are wrong:) Still been there.

Don't tell me :-)






Thanks in advance and sorry for the long message (but I had to explain
such a complex thing)
Moreno.-




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-23 Thread Adrian Klaver
Planning time: 0.475 ms
Trigger group_history_trigger: time=1442.561 calls=6139
Execution time: 1834.119 ms


group_history_trigger:

CREATE OR REPLACE FUNCTION public.group_history_partition_function()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
declare
    _new_date timestamptz;
    _tablename text;
    _startdate text;
begin
    -- Takes the current inbound "when" value and determines when
    -- midnight is for the given date
    _new_date := date_trunc('day', new."when");
    _startdate := to_char(_new_date, 'YYYY_MM_DD');
    _tablename := 'group_history_'||_startdate;

    -- Insert the current record into the correct partition
    execute 'INSERT INTO public.' || quote_ident(_tablename) || ' VALUES ($1.*)
        on conflict ("group", id, sub_category, "when", "hour") do
        update set aa = excluded.aa,
                   ab = excluded.ab,
                   bb = excluded.bb,
                   ba = excluded.ba,
                   quantity = excluded.quantity' using new;
    return null;
end
$function$
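
For the dynamic INSERT ... ON CONFLICT above to succeed, each daily child 
table needs a unique constraint matching the conflict target. A 
hypothetical child (the OP's actual child definitions were not posted; 
note also that the conflict target references "when" while group_history 
as posted has "date", echoing the trigger copy/paste mix-up discussed in 
this thread):

CREATE TABLE public.group_history_2016_11_23 ()
    INHERITS (public.group_history);

ALTER TABLE public.group_history_2016_11_23
    ADD PRIMARY KEY ("group", id, sub_category, "date", "hour");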

Has anyone experienced this before or even has a solution for this?
I'd be very grateful.

Kind regards
Sebastian





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] min/max_wal_size

2016-11-22 Thread Adrian Klaver

On 11/22/2016 12:51 PM, Torsten Förtsch wrote:

Hi,

I am a bit confused about min_wal_size and max_wal_size. Previously,
there was this formula to estimate the max number of WAL segment files
in pg_xlog/
(https://www.postgresql.org/docs/9.4/static/wal-configuration.html):

  (2 + checkpoint_completion_target) * checkpoint_segments + 1 or
checkpoint_segments + wal_keep_segments + 1

I don't exactly know what the operation "or" means. Before writing this


'Or' distinguishes between the case where wal_keep_segments is the 
default of 0 and the case where you set it to some value > 0. In the 
second case you are forcing Postgres to keep segments it would not by 
default keep.
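
To make the arithmetic concrete with the old 9.4 defaults 
(checkpoint_segments = 3, checkpoint_completion_target = 0.5, 
wal_keep_segments = 0):

  (2 + 0.5) * 3 + 1 = 8.5, i.e. roughly 9 segments of 16MB each

whereas with wal_keep_segments = 150 the second expression dominates:

  3 + 150 + 1 = 154 segments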



email I always thought of wal_keep_segments as a parameter that
configures how many segments to keep that would otherwise be deleted and
checkpoint_segments as the number of WAL files the database is allowed
to work with within a checkpoint_timeout interval.

The formula above makes more or less sense. The database is allowed to
write one set of WAL files during the checkpoint interval. While
performing the checkpoint it needs the previous set of WAL files. I
don't know where that checkpoint_completion_target comes in. But I trust


See the paragraph above the one with the equation for how 
checkpoint_completion_target applies.



the wisdom of the author of the documentation.

Now, I have a database with very low write activity. Archive_command is
called about once per hour to archive one segment. When the database was
moved to PG 9.5, it was initially configured with insanely high settings
for max_wal_size, min_wal_size and wal_keep_segments. I reset
min/max_wal_size to the default settings of 80MB and 1GB and reduced
wal_keep_segments to 150.

I am seeing in pg_xlog the WAL segments from

-rw--- 1 postgres postgres 16777216 Nov 17 04:01
pg_xlog/0001000400F9
...
-rw--- 1 postgres postgres 16777216 Nov 22 20:00
pg_xlog/00010005008E
-rw--- 1 postgres postgres 16777216 Nov 22 20:19
pg_xlog/00010005008F
-rw--- 1 postgres postgres 16777216 Nov 15 07:50
pg_xlog/000100050090
...
-rw--- 1 postgres postgres 16777216 Nov 15 07:52
pg_xlog/000100060017

As you can see, the files from 1/4/F9 to 1/5/8E are old. That are 150
files which matches exactly wal_keep_segments. If I understand
correctly, the file 1/5/8F is currently written. Further, the files from
1/5/90 to 1/6/17 seem to be old WAL files that have been renamed to be
reused in the future. Their count is 136.

Why does a database that generates a little more than 1 WAL file per
hour and has a checkpoint_timeout of 30 minutes with a
completion_target=0.7 need so many of them? The default value for
min_wal_size is 80MB which amounts to 5 segments. That should be totally
enough for this database.

Is this because of the previously insanely high setting (min=1GB,
max=9GB)? Should I expect this value to drop in a week's time? Or is
there anything that I am not aware of?


Are you talking about the recycled files?



Thanks,
Torsten




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database migration to RDS issues permissions

2016-11-22 Thread Adrian Klaver

On 11/21/2016 03:34 PM, Fran ... wrote:

Hi Adrian,


I followed you link and I had again errors:


What was the command you used?




/pg_restore: [archiver (db)] Error from TOC entry 4368; 2606 151317 FK
CONSTRAINT type_id_3940becf ownersuser/
/pg_restore: [archiver (db)] could not execute query: ERROR:  constraint
"type_id_3940becf" of relation "store" does not exist/
/Command was: ALTER TABLE ONLY public.store DROP CONSTRAINT
type_id_3940becf;/


Can't DROP what does not exist. The end result is the same anyway. You 
can avoid this type of error with --if-exists.
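
For example (illustrative only; host and file names are taken from your 
earlier commands):

pg_restore --clean --if-exists -d database -h hostname -U postgres backup_db.dump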

/pg_restore: [archiver (db)] Error from TOC entry 4273; 1259 1179680
INDEX profile_id owneruser/
/pg_restore: [archiver (db)] could not execute query: ERROR:  index
"profile_id" does not exist/
/Command was: DROP INDEX public.profile_id;/


See above.


/pg_restore: [archiver (db)] Error from TOC entry 4751; 0 0 COMMENT
EXTENSION plpgsql /
/pg_restore: [archiver (db)] could not execute query: ERROR:  must be
owner of extension plpgsql/
/Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural
language';/


Not adding a COMMENT, not necessarily fatal. Best guess plpgsql is 
actually installed, have you checked?
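
For instance:

SELECT extname, extversion FROM pg_extension WHERE extname = 'plpgsql';

(or just \dx in psql)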




/pg_restore: [archiver (db)] Error from TOC entry 4756; 0 0 USER MAPPING
USER MAPPING dwhuser SERVER pg_rest postgres/
/pg_restore: [archiver (db)] could not execute query: ERROR:  role
"user" does not exist/
/Command was: CREATE USER MAPPING FOR user SERVER pg_rest OPTIONS (/
/password 'X',/
/"user" 'user'/
/);/


This is probably because you could not import the global roles from your 
original database.




Regards.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partial update on an postgres upsert violates constraint

2016-11-21 Thread Adrian Klaver

On 11/21/2016 02:32 AM, Andreas Terrius wrote:

Is there any way to check whether the row already exists before checking
constraints ? I still want it to fail if it turns out to be a new row
(which would violate the not null constraint), but updates the row if it
already exists.

Since if that is not possible, I would need to do a query to determine
whether the row exists in the database which kinda eliminates the use of
upsert. (in this case, partial upsert).


Before UPSERT appeared in 9.5, folks came up with alternate methods 
of doing this. I would suggest searching on:


postgres upsert cte


You might be able to modify the examples to get what you want.
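
As a rough sketch of that pre-9.5 pattern against the jobs table from 
earlier in the thread (not race-free under concurrency, which is part of 
why ON CONFLICT was added; the placeholder values are mine):

WITH new_vals (id, phone_number) AS (
    VALUES (1, '12345')
), updated AS (
    UPDATE jobs j
       SET phone_number = COALESCE(n.phone_number, j.phone_number)
      FROM new_vals n
     WHERE j.id = n.id
 RETURNING j.id
)
INSERT INTO jobs (id, employee_name, address, phone_number)
SELECT n.id, 'placeholder name', 'placeholder address', n.phone_number
  FROM new_vals n
 WHERE NOT EXISTS (SELECT 1 FROM updated u WHERE u.id = n.id);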





On Sun, Nov 20, 2016 at 3:57 AM, Tom Lane <t...@sss.pgh.pa.us
<mailto:t...@sss.pgh.pa.us>> wrote:

Adrian Klaver <adrian.kla...@aklaver.com
<mailto:adrian.kla...@aklaver.com>> writes:
> ... So looks like constraints are checked before you get to the ON
CONFLICT section.

Right.  ON CONFLICT is a means for dealing with duplicate-key errors in
the specified (or inferred) unique index.  It is *not* an all-purpose
error catcher.  In the case at hand, the given INSERT request fails due
to not-null constraints that are unrelated to what the ON CONFLICT
clause
tests for.

    regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql 9.5 and Shell scripts/variables vs. C programming/defining a value to be used

2016-11-21 Thread Adrian Klaver

On 11/21/2016 06:56 AM, Poul Kristensen wrote:

Hi!

Hopefully this is this list.

A shell variabel is  defined like this

var1= value

used like ${var1}.

How is the equal defened in the Postgresql C programming?


You probably need to be more specific about what aspect of using C in 
Postgres you are talking about. Assuming  'ECPG - Embedded SQL in C' for 
now:


https://www.postgresql.org/docs/9.5/static/ecpg-variables.html
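
A minimal ECPG sketch (the table name is hypothetical): the host variable 
is declared in a declare section and referenced with a leading colon, 
roughly the counterpart of ${var1} in the shell:

EXEC SQL BEGIN DECLARE SECTION;
int var1 = 42;
EXEC SQL END DECLARE SECTION;

EXEC SQL INSERT INTO some_table (some_col) VALUES (:var1);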



Thanks.

Poul

BTW: I have not received my ordered book yet!


There is one online:):

https://www.postgresql.org/docs/9.5/static/index.html







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to introspect autovacuum analyze settings

2016-11-21 Thread Adrian Klaver

On 11/21/2016 05:44 AM, Benedikt Grundmann wrote:

Hello all,

I have a quick question.  I feel like somewhere in section 23.1.6 there
should be the answer but I couldn't find it yet.  Namely how can I query
the database for total number of tuples inserted, updated, or deleted
since the last ANALYZE?  pg_stat_user_tables.n_tup_{ins,upd,del,hot_upd}
seems to not reset after an analyze[1].  But clearly the database has
that knowledge somewhere persistent because otherwise how could
autovacuum do its thing.


Did you see this?:
https://www.postgresql.org/docs/9.5/static/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW

"When using the statistics to monitor collected data, it is important to 
realize that the information does not update instantaneously. Each 
individual server process transmits new statistical counts to the 
collector just before going idle; so a query or transaction still in 
progress does not affect the displayed totals. Also, the collector 
itself emits a new report at most once per PGSTAT_STAT_INTERVAL 
milliseconds (500 ms unless altered while building the server). So the 
displayed information lags behind actual activity. However, 
current-query information collected by track_activities is always 
up-to-date."


Still I do see changes:

test[5432]=# select * from pg_stat_user_tables where relname='ts_tsz_test';
-[ RECORD 1 ]---+--
relid   | 1140187
schemaname  | public
relname | ts_tsz_test
seq_scan| 66
seq_tup_read| 249
idx_scan|
idx_tup_fetch   |
n_tup_ins   | 32
n_tup_upd   | 0
n_tup_del   | 0
n_tup_hot_upd   | 0
n_live_tup  | 6
n_dead_tup  | 0
n_mod_since_analyze | 0
last_vacuum |
last_autovacuum |
last_analyze| 2016-11-21 06:48:38.500307-08
last_autoanalyze|
vacuum_count| 0
autovacuum_count| 0
analyze_count   | 5
autoanalyze_count   | 0


test[5432]=# insert into ts_tsz_test values ('11/21/2016', '11/21/2016');
INSERT 0 1
test[5432]=# select now();
-[ RECORD 1 ]--
now | 2016-11-21 06:49:19.957626-08

test[5432]=# analyze ts_tsz_test ;
ANALYZE
test[5432]=# select * from pg_stat_user_tables where relname='ts_tsz_test';
-[ RECORD 1 ]---+--
relid   | 1140187
schemaname  | public
relname | ts_tsz_test
seq_scan| 66
seq_tup_read| 249
idx_scan|
idx_tup_fetch   |
n_tup_ins   | 33
n_tup_upd   | 0
n_tup_del   | 0
n_tup_hot_upd   | 0
n_live_tup  | 7
n_dead_tup  | 0
n_mod_since_analyze | 0
last_vacuum |
last_autovacuum |
last_analyze| 2016-11-21 06:49:22.577586-08
last_autoanalyze|
vacuum_count| 0
autovacuum_count| 0
analyze_count   | 6
autoanalyze_count   | 0



So are you sure you are looking at the correct database and/or tables?
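
Note also the n_mod_since_analyze column in the output above; it was 
added in 9.4 and counts row modifications since the table was last 
analyzed, which sounds close to what you are after, e.g.:

SELECT relname, n_mod_since_analyze
FROM pg_stat_user_tables
ORDER BY n_mod_since_analyze DESC;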





Rationale for the question.  I have the strong suspicion that for some
of our bigger tables autovacuum *analyze *is not hitting frequently
enough (even so we already much more aggressive settings than the
default).  So I want to set some custom settings for those tables.  But
rather than doing it manually for the one table I found I would much
rather write a query (ideally one taking any existing per table settings
into account) that tells me likely candidates for tweaking.  But so far
I fail to even get the relevant data.

Cheers,

Bene

[1] At least it didn't when I just run analyze on a table explicitly.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: R: [GENERAL] Autovacuum and frequent pg_bulkload

2016-11-20 Thread Adrian Klaver

On 11/20/2016 07:20 AM, Job wrote:

Hi Adrian,
Thank you for your fast reply!


FYI, Postgres 8.4 is over two years past EOL.

Yes, I am aware; we will migrate next year, but for application compatibility 
reasons we have to remain here for the moment.



What are the steps in the load/delete cycle?


We need to load, with pg_bulkload, log data for reporting.
We load about one million rows every hour and twice a day we generate 
aggregated reports.
Then we delete old rows (we cannot use truncate because we only delete rows 
older than 24 hours).
I think, but I can be wrong, that pg_bulkload does not reuse free space marked 
previously by the delete.


Can you be more specific?

In other words, show the actual commands you use.

Also how are you defining free space, space the OS sees or space 
Postgres sees?


How are you determining the free space?
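
For instance, the dead-tuple counters show how much space Postgres itself 
considers reusable:

SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;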



Only VACUUM FULL empties the free space, but the table becomes locked.


VACUUM FULL returns space to the OS, if that is what you mean?



Thank you
Francesco




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum and frequent pg_bulkload

2016-11-20 Thread Adrian Klaver

On 11/20/2016 04:51 AM, Job wrote:

Hello,

I use PostgreSQL 8.4.22 and twice a day I import logs with pg_bulkload and 
delete old data. I move some millions of records in a day.


FYI, Postgres 8.4 is over two years past EOL.



I noticed that autovacuum alone seems unable to free unused space.


Do you run ANALYZE at any point in your procedure?

What are the steps in the load/delete cycle?


I need a periodic VACUUM FULL but this operation takes many hours.

Do you think it is due to pg_bulkload not being able to "see" free-marked 
space to use when loading new data?

Thank you,
/F




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database migration to RDS issues permissions

2016-11-19 Thread Adrian Klaver
On 11/19/2016 11:12 AM, Fran ... wrote:
> Hi,
> 
> 
> You were right and I have tried to grant that role to user and I get
> following errors..
> 
> 
> /pg_restore: [archiver (db)] Error while PROCESSING TOC:/
> /pg_restore: [archiver (db)] Error from TOC entry 4335; 2606 151422 FK
> CONSTRAINT worker_id_refs_id_6fd8ce95 owneruser/
> /pg_restore: [archiver (db)] could not execute query: ERROR:  relation
> "public.worker_status" does not exist/
> /Command was: ALTER TABLE ONLY public.worker_status DROP CONSTRAINT
> worker_id_refs_id_6fd8ce95;/
> 
> /pg_restore: [archiver (db)] Error from TOC entry 4079; 1259 151046
> INDEX id_e owneruser/
> /pg_restore: [archiver (db)] could not execute query: ERROR:  index
> "id_e" does not exist/
> /Command was: DROP INDEX public.id_e;/
> 
> I run the dump from origin with postgres user.
> 

Alright, I bit the bullet and set up a test Postgres RDS instance. The only way 
I 
could get anything to load was to follow the instructions here:

http://dba.stackexchange.com/questions/66372/moving-a-postgres-database-from-standalone-local-db-to-amazon-rds

and that was only after finding a database that did not have plpythonu 
installed, as that extension cannot be installed on RDS.

So I ended up with:

pg_restore -C -d test -h testdb.xxx.rds.amazonaws.com -p 5432 -U 
rds_user --no-owner --no-privileges b_app.out 

I think I will stick with my policy of not using RDS.

-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database migration to RDS issues permissions

2016-11-19 Thread Adrian Klaver
On 11/19/2016 11:12 AM, Fran ... wrote:
> Hi,
> 
> 
> You were right and I have tried to grant that role to user and I get
> following errors..

GRANT what role to what user?

> 
> 
> /pg_restore: [archiver (db)] Error while PROCESSING TOC:/
> /pg_restore: [archiver (db)] Error from TOC entry 4335; 2606 151422 FK
> CONSTRAINT worker_id_refs_id_6fd8ce95 owneruser/
> /pg_restore: [archiver (db)] could not execute query: ERROR:  relation
> "public.worker_status" does not exist/
> /Command was: ALTER TABLE ONLY public.worker_status DROP CONSTRAINT
> worker_id_refs_id_6fd8ce95;/
> 
> /pg_restore: [archiver (db)] Error from TOC entry 4079; 1259 151046
> INDEX id_e owneruser/
> /pg_restore: [archiver (db)] could not execute query: ERROR:  index
> "id_e" does not exist/
> /Command was: DROP INDEX public.id_e;/
> 
> I run the dump from origin with postgres user.
> 

Pretty sure the issue is less where it is coming from than where it is going. 
RDS has constraints on what a user can do. As I said before, I do not use it, 
so I cannot be of much help other than to point you at the docs:

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts

Using the rds_superuser Role


It seems you need to use the above role to do your restore.

-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partial update on an postgres upsert violates constraint

2016-11-19 Thread Adrian Klaver
On 11/19/2016 11:33 AM, Kim Rose Carlsen wrote:
>> AFAIK, EXCLUDED is only available in a trigger function:
> 
>>
>> https://www.postgresql.org/docs/9.5/static/trigger-definition.html
>>
>> You are using EXCLUDED in a regular function so it would not be found.
>> 
>> Can you also show the failure for your alternate method?
> 
> From the manual
> https://www.postgresql.org/docs/9.5/static/sql-insert.html
> 
> "
> conflict_action
> conflict_action specifies an alternative ON CONFLICT action. It can be
> either DO NOTHING, or a DO UPDATE clause specifying the exact details of
> the UPDATE action to be performed in case of a conflict. The SET and
> WHERE clauses in ON CONFLICT DO UPDATE have access to the existing row
> using the table's name (or an alias), and to rows proposed for insertion
> using the special excluded table. SELECT privilege is required on any
> column in the target table where corresponding excluded columns are read.
> "
> 


Oops, my mistake. I should have spent more time on the examples.

Changing the function to:

CREATE OR REPLACE FUNCTION public.upsert_job(job jsonb)
 RETURNS void
 LANGUAGE plpgsql
 SECURITY DEFINER
AS $function$
BEGIN
INSERT INTO jobs AS origin VALUES(
(job->>'id')::INTEGER,
COALESCE(job->>'employee_name'::TEXT, 'test_name'),
COALESCE(job->>'address'::TEXT, 'test_address'),
job->>'phone_number'::TEXT
) ON CONFLICT (id) DO UPDATE SET
employee_name = COALESCE(EXCLUDED.employee_name, origin.employee_name),
address = COALESCE(EXCLUDED.address, origin.address),
phone_number = COALESCE(EXCLUDED.phone_number, origin.phone_number);
END;
$function$
;

makes it work. So looks like constraints are checked before you get to the ON 
CONFLICT section.


-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database migration to RDS issues permissions

2016-11-19 Thread Adrian Klaver

On 11/19/2016 09:33 AM, Fran ... wrote:

Hi,


I run "pg_dumpall" command and there are the permissions por the user:


/CREATE ROLE dlapuser;/
/ALTER ROLE dlapuser WITH *SUPERUSER* INHERIT NOCREATEROLE NOCREATEDB
LOGIN NOREPLICATION PASSWORD 'md5XXafac';/

I think I would solve the problem granting "superuser" permission but
this is not possible in RDS.


I don't use RDS, but from what I gather the above is not strictly true:

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html




I also saw the first errors and they are weird.



Indications that you are not running the restore as a user with 
sufficient privileges.


Is the database you are dumping from an RDS instance or a regular 
Postgres database?




@Adrian Klaver <mailto:adrian.kla...@aklaver.com> what others
permissions do you suggest?

Origin and target are the same version of course. PostgreSQL 9.4.1

Thanks in advance.

--------
*De:* Adrian Klaver <adrian.kla...@aklaver.com>
*Enviado:* sábado, 19 de noviembre de 2016 18:24
*Para:* Fran ...; pgsql-general@postgresql.org
*Asunto:* Re: [GENERAL] Database migration to RDS issues permissions

On 11/19/2016 07:21 AM, Fran ... wrote:

Hi Adrian,


these are some of them:


/pg_restore: [archiver (db)] Error from TOC entry 4997; 0 0 SEQUENCE SET
account_id_seq owneruser/
/pg_restore: [archiver (db)] could not execute query: ERROR:  permission
denied for sequence account_id_seq/
/Command was: SELECT pg_catalog.setval('account_id_seq', 26738, true);/

/pg_restore: [archiver (db)] Error from TOC entry 4548; 0 106491 TABLE
DATA account owneruser/
/pg_restore: [archiver (db)] could not execute query: ERROR:  permission
denied for relation account/
/Command was: COPY account (id, user_id, test, picture, status) FROM
stdin;/

/pg_restore: [archiver (db)] Error from TOC entry 3763; 2604 1179420
DEFAULT id owneruser/
/pg_restore: [archiver (db)] could not execute query: ERROR:  must be
owner of relation trix_venue/
/Command was: ALTER TABLE ONLY venue ALTER COLUMN id SET DEFAULT
eval('venue_id_seq'::regclass);/

Thanks in advance.


In addition to what Tom said:

create database database;
grant all privileges on database to ownerdatabase;

is probably not doing what you think it is or want.

A GRANT on a database only grants connect privileges and the ability to
create schemas in the database. It does not allow creating of objects
within the schema. For more details see:

https://www.postgresql.org/docs/9.5/static/sql-grant.html












--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database migration to RDS issues permissions

2016-11-19 Thread Adrian Klaver

On 11/19/2016 07:21 AM, Fran ... wrote:

Hi Adrian,


these are some of them:


/pg_restore: [archiver (db)] Error from TOC entry 4997; 0 0 SEQUENCE SET
account_id_seq owneruser/
/pg_restore: [archiver (db)] could not execute query: ERROR:  permission
denied for sequence account_id_seq/
/Command was: SELECT pg_catalog.setval('account_id_seq', 26738, true);/

/pg_restore: [archiver (db)] Error from TOC entry 4548; 0 106491 TABLE
DATA account owneruser/
/pg_restore: [archiver (db)] could not execute query: ERROR:  permission
denied for relation account/
/Command was: COPY account (id, user_id, test, picture, status) FROM
stdin;/

/pg_restore: [archiver (db)] Error from TOC entry 3763; 2604 1179420
DEFAULT id owneruser/
/pg_restore: [archiver (db)] could not execute query: ERROR:  must be
owner of relation trix_venue/
/Command was: ALTER TABLE ONLY venue ALTER COLUMN id SET DEFAULT
eval('venue_id_seq'::regclass);/

Thanks in advance.


In addition to what Tom said:

create database database;
grant all privileges on database to ownerdatabase;

is probably not doing what you think it is or want.

A GRANT on a database only grants connect privileges and the ability to 
create schemas in the database. It does not allow creating of objects 
within the schema. For more details see:


https://www.postgresql.org/docs/9.5/static/sql-grant.html
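
A sketch of what is usually intended instead, reusing the names from the 
quoted commands:

ALTER DATABASE database OWNER TO ownerdatabase;
GRANT ALL ON SCHEMA public TO ownerdatabase;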









--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partial update on an postgres upsert violates constraint

2016-11-19 Thread Adrian Klaver

On 11/17/2016 10:13 PM, Andreas Terrius wrote:

Hi,
Basically I wanted to do a partial update inside pg (9.5), but it seems
that a partial update fails when not all constraints are fulfilled
(such as the not null constraint).

Below are the sql queries I used,

CREATE TABLE jobs (
    id integer PRIMARY KEY,
    employee_name TEXT NOT NULL,
    address TEXT NOT NULL,
    phone_number TEXT
);

CREATE OR REPLACE FUNCTION upsert_job(job JSONB) RETURNS VOID AS $$
BEGIN
INSERT INTO jobs AS origin VALUES (
    (job->>'id')::INTEGER,
    job->>'employee_name'::TEXT,
    job->>'address'::TEXT,
    job->>'phone_number'::TEXT
) ON CONFLICT (id) DO UPDATE SET
    employee_name = COALESCE(EXCLUDED.employee_name, origin.employee_name),
    address = COALESCE(EXCLUDED.address, origin.address),
    phone_number = COALESCE(EXCLUDED.phone_number, origin.phone_number);
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER;

-- Full insert (OK)
SELECT upsert_job('{"id" : 1, "employee_name" : "AAA", "address" : "City, x street no.y", "phone_number" : "123456789"}'::jsonb);

-- Partial update that fulfills constraint (OK)
SELECT upsert_job('{"id" : 1, "employee_name" : "BBB", "address" : "City, x street no.y"}'::jsonb);

-- Partial update that doesn't fulfill constraint (FAILS)
SELECT upsert_job('{"id" : 1, "phone_number" : "12345"}'::jsonb);
-- ERROR: null value in column "employee_name" violates not-null constraint
-- DETAIL: Failing row contains (1, null, null, 12345).

I also tried explicitly stating the columns that I wanted to insert, and
it also fails. How do I go about doing this?


AFAIK, EXCLUDED is only available in a trigger function:

https://www.postgresql.org/docs/9.5/static/trigger-definition.html

You are using EXCLUDED in a regular function so it would not be found.

Can you also show the failure for your alternate method?



Thank you



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database migration to RDS issues permissions

2016-11-19 Thread Adrian Klaver

On 11/19/2016 05:21 AM, Fran ... wrote:

Hi,


I have to migrate a production database to RDS. This is the size and info:


 database  | owneruser | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
=Tc/owneruser+| 32 GB   | pg_default |


Origin database:

1 database

1 owneruser with superuser permission


Backup archived size is 2G and it takes less than a minute.


I am trying to run the following steps in a DEV environment and I am
having problems with destination permissions.


1º-Creating RDS instance

Done and I can connect to.


2º-Making backup:

pg_dump -F c database > backup_db.dump

3º-Creating user,database and grant permissions in RDS.
Create database database;
CREATE USER owneruser WITH PASSWORD 'owneruser';
create database database;
grant all privileges on database to ownerdatabase;

4º-Restoring backup
pg_restore -d database -h hostname -U postgres -F c -f log_file.log
backup.db.dump

While the restore is running, it prints a lot of permission errors.


The permission errors are?

A sampling will suffice for now.



It's been hard to find how-tos and documentation about the right permissions.

Regards.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_class (system) table increasing size.

2016-11-17 Thread Adrian Klaver

On 11/16/2016 07:08 PM, dhaval jaiswal wrote:



Because you are creating (specific) objects.


I have gone through the link; how would I figure out which
specific object is causing this? Can you please elaborate more here.


We do not have much temporary table usage.


Since the size is bigger (5 GB) to maintain, does pg_class require
maintenance as well?


Should have added to my previous post. What does:

select * from pg_stat_sys_tables where relname = 'pg_class';

show?




It seems it's affecting performance.






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_class (system) table increasing size.

2016-11-17 Thread Adrian Klaver

On 11/16/2016 07:08 PM, dhaval jaiswal wrote:



Because you are creating (specific) objects.


I have gone through the link; how would I figure out which
specific object is causing this? Can you please elaborate more here.


It is not any one object, it is the total of the objects (relations) as 
defined here:


https://www.postgresql.org/docs/9.4/static/catalog-pg-class.html

48.11. pg_class

relkind 	char 	  	r = ordinary table, i = index, S = sequence, v = view, 
m = materialized view, c = composite type, t = TOAST table, f = foreign 
table


So if you do something like:

select relkind, relname from pg_class order by relkind, relname;

you should see what the entries in the table are, by their type. Might give 
you a clue as to what is causing the growth.
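
Or, as a summarized variant that may be easier to scan with a pg_class 
this large:

SELECT relkind, count(*)
FROM pg_class
GROUP BY relkind
ORDER BY count(*) DESC;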





We do not have much temporary table usage.


Since the size is bigger (5 GB) to maintain, does it require


The size you are referring to is the database size, the table size or 
something else?




maintenance as well for pg_class?


It seems it's affecting performance.



*From:* David G. Johnston <david.g.johns...@gmail.com>
*Sent:* Thursday, November 17, 2016 8:13 AM
*To:* dhaval jaiswal
*Cc:* pgsql-general@postgresql.org
*Subject:* Re: [GENERAL] pg_class (system) table increasing size.

On Wed, Nov 16, 2016 at 7:30 PM, dhaval jaiswal <dhava...@hotmail.com
<mailto:dhava...@hotmail.com>>wrote:

PostgreSQL 9.4.0

Are you generalizing here, or are you really running a 2+ year old patch version?

Why pg_class table is getting bigger in size.

Because you are creating (specific) objects.

See: https://www.postgresql.org/docs/9.6/static/catalog-pg-class.html

How to stop increasing it.

Stop creating (those specific) objects.

Does it affect the performance.

It can - depends greatly on scale.

Note, frequent usage of temporary tables is a common cause for this kind
of behavior.

David J.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming replication failover/failback

2016-11-16 Thread Adrian Klaver

On 11/16/2016 04:51 PM, Israel Brewster wrote:

I've been playing around with streaming replication, and discovered that
the following series of steps *appears* to work without complaint:

- Start with master on server A, slave on server B, replicating via
streaming replication with replication slots.
- Shut down master on A
- Promote slave on B to master
- Create recovery.conf on A pointing to B
- Start (as slave) on A, streaming from B

After those steps, A comes up as a streaming replica of B, and works as
expected. In my testing I can go back and forth between the two servers
all day using the above steps.

My understanding from my initial research, however, is that this
shouldn't be possible - I should need to perform a new basebackup from B
to A after promoting B to master before I can restart A as a slave. Is
the observed behavior then just a "lucky fluke" that I shouldn't rely


You don't say how active the database is, but I am going to say it is not 
active enough for the WAL files on B to go out of scope for A in the 
time it takes you to do the switchover.



on? Or is it expected behavior and my understanding about the need for a
new basebackup is simply off? Does the new pg_rewind feature of 9.5
change things? If so, how?

Thanks for your time!
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---








--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Syncing Data of data type BLOB into Postgres- Bytea

2016-11-15 Thread Adrian Klaver

On 11/15/2016 05:55 PM, John DeSoi wrote:



On Nov 10, 2016, at 12:55 AM, Cynthia Hombakazi Ngejane 
<hombakazi.ngej...@gmail.com> wrote:

I have two databases, SQLite and Postgres. SQLite is my local database; in it I 
am saving fingerprint templates that get captured on site (offline) and my 
column is of type BLOB. Now I want to sync these templates into Postgres (to 
the server), so I created a column of type bytea, but Postgres is refusing to 
take the BLOB; it says there was a syntax error. I am using a Qt C++ 5.7 
application to do the syncing.


It is going to be hard to help without seeing the exact code you are using to 
insert and the exact error message you are seeing.


Part of the issue is that BLOB in SQLite does not actually mean anything:

http://sqlite.org/datatype3.html

"(Historical note: The "BLOB" type affinity used to be called "NONE". 
But that term was easy to confuse with "no affinity" and so it was 
renamed.) "




"A column with affinity BLOB does not prefer one storage class over 
another and no attempt is made to coerce data from one storage class 
into another."


So you can stick anything you want in there, which actually is true of 
any SQLite 'type'. So the OP is going from inserting into 
something (SQLite) that does no input validation to one (Postgres) that does.
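
For reference, on the Postgres side bytea literals use hex format, though 
from a driver it is safer to bind the blob as a parameter rather than 
splicing it into the SQL string (table and column names hypothetical):

INSERT INTO fingerprints (template) VALUES ('\x0102abcd'::bytea);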




John DeSoi, Ph.D.






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-15 Thread Adrian Klaver

On 11/15/2016 05:18 PM, Rich Shepard wrote:

On Tue, 15 Nov 2016, Adrian Klaver wrote:


Rich is using Slackware and I am pretty sure it marches to a different
drummer.


Adrian,

  And a different band. Pat likes MariaDB which replaced MySQL so
PostgreSQL
is not part of the base distribution. But, SlackBuilds.org provides a build
script that's worked for me since some early version 7 of postgres.


To add to my previous post: if you do decide to follow the pg_upgrade 
procedure in the README, do a pg_dump of the 9.5 data just before you run 
pg_upgrade and store it away in a safe place. The first time through a 
new process does not always end well:)




Regards,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-15 Thread Adrian Klaver

On 11/15/2016 05:18 PM, Rich Shepard wrote:

On Tue, 15 Nov 2016, Adrian Klaver wrote:


Rich is using Slackware and I am pretty sure it marches to a different
drummer.


Adrian,

  And a different band. Pat likes MariaDB which replaced MySQL so
PostgreSQL
is not part of the base distribution. But, SlackBuilds.org provides a build
script that's worked for me since some early version 7 of postgres.


Aah, so the answer is here:

https://slackbuilds.org/slackbuilds/14.2/system/postgresql/README.SBo

"A few hints for PostgreSQL 9.5 -> 9.6 upgrade:

"



Regards,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-15 Thread Adrian Klaver

On 11/15/2016 05:06 PM, Rich Shepard wrote:

On Tue, 15 Nov 2016, Adrian Klaver wrote:


Assuming the bindirs are in your $PATH:

aklaver@panda:~> whereis -f pg_ctl


Adrian,

  In my case:

$ whereis -f pg_ctl
pg_ctl: /bin/pg_ctl /usr/bin/pg_ctl /usr/X11R6/bin/pg_ctl
/usr/bin/X11/pg_ctl /usr/X11/bin/pg_ctl /usr/man/man1/pg_ctl.1
/usr/man/man1/pg_ctl.1.gz /usr/share/man/man1/pg_ctl.1
/usr/share/man/man1/pg_ctl.1.gz /usr/X11/man/man1/pg_ctl.1
/usr/X11/man/man1/pg_ctl.1.gz


So what do you get with

/usr/bin/pg_ctl --version

and given John's suggestion:

ls -al /usr/bin/pg_ctl




Even if only one is in the $PATH:



you can usually figure out where the other is.


  I suppose if I knew which version is in each of those directories it
would
make life easier.

  Currently, only 9.5.4 is running; 9.6.1 is installed and initiated but
not
running. When I invoke 'psql -l' I see the clusters installed in 9.5.4.


So how are you going to start the 9.6 instance?



  Will use pg_dumpall and pg_restore ... tomorrow morning.

Thanks very much,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-15 Thread Adrian Klaver

On 11/15/2016 05:01 PM, John R Pierce wrote:

On 11/15/2016 4:55 PM, Adrian Klaver wrote:

where do I look for the two bindirs?


Assuming the bindirs are in your $PATH:

aklaver@panda:~> whereis -f pg_ctl
pg_ctl: /usr/local/pgsql/bin/pg_ctl /usr/local/pgsql94/bin/pg_ctl

Even if only one is in the $PATH:

aklaver@panda:~> whereis -f pg_ctl
pg_ctl: /usr/local/pgsql/bin/pg_ctl

you can usually figure out where the other is.


the RHEL style distributions don't put the actual bin dirs in the path;
rather, they put symlinks to the common user utilities in /usr/bin,
managed via a system known as 'alternatives'.


Rich is using Slackware and I am pretty sure it marches to a different 
drummer.


http://www.slackware.com/faq/do_faq.php?faq=installation#2




--
john r pierce, recycling bits in santa cruz




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dumping table contents in a sensible order

2016-11-15 Thread Adrian Klaver

On 11/15/2016 02:56 PM, Chris Withers wrote:

Hi All,

I have a database that I want to dump three tables from, for use in
development. They form a subset of the data, so I was dumping like this:

pg_dump thedatabase --inserts -t table_one -t depends_on_table_one -t
depends_on_previous_two > dump.sql

However, when I try to load this using the following:

psql thedatabase_dev < dump.sql

I get the following:

SET
ERROR:  unrecognized configuration parameter "lock_timeout"
SET
SET
SET
SET
ERROR:  unrecognized configuration parameter "row_security"
SET
SET
SET
ERROR:  relation "table_one" already exists
ALTER TABLE
ERROR:  relation "depends_on_previous_two" already exists
ALTER TABLE
ERROR:  relation "depends_on_previous_two_id_seq" already exists


Use the -c switch as previously suggested.


ALTER TABLE
ALTER SEQUENCE
ALTER TABLE
INSERT 0 1
...
INSERT 0 1
ERROR:  insert or update on table "table_one" violates foreign key
constraint "table_one_parent_id_fkey"
DETAIL:  Key (parent_id)=(xxx) is not present in table "table_one".

So, the problem appears to be that table_one is self-referential by way
of a parent_id field.

How can I either:

- dump the table in an insertable order?


Don't use --inserts; instead let the data be entered via COPY (the 
default), which does it in a single transaction.



- have the load only apply the foreign key constraint at the end of each
table import?


See previous suggestion.
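
One more hedged option, if you must keep --inserts: defer the FK check to 
commit, assuming the constraint is (re)declared DEFERRABLE, e.g.:

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
\i dump.sql
COMMIT;

(pg_dump emits FKs as NOT DEFERRABLE by default, so this only helps if 
the constraint allows it.)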



cheers,

Chris





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-15 Thread Adrian Klaver

On 11/15/2016 02:37 PM, Rich Shepard wrote:

  All my previous version upgrades were performed by running pg_dumpall in
the older version followed by running 'psql -f ...' to install the dumped
.sql file, usually because the upgrade jumped several versions. Now I'd
like
to try the available postgres commands.

  The older version is installed in /var/lib/pgsql/9.5/data and I just
initiated the new version in /var/lib/pgsql/9.6/data. The earlier
version is
currently running.

  Is pg_upgrade the recommended way to upgrade from one minor version to
the
next? The 9.5 manual recommends this approach for _major_ upgrades (e.g.,
8.4.7 to 9.6.1), but not for _minor_ upgrades (e.g., 9.0.1 to 9.0.4).
That's
a first digit upgrade and a third digit upgrade. Since 9.5.4 to 9.6.1 is a
second digit upgrade I suppose it's semi-major, but in which upgrade camp
does it belong?

  The command is:

pg_upgrade -b oldbindir -B newbindir -d olddatadir -D newdatadir

and I don't know where to find -b and -B. On my Slackware-14.1 server I
have
/usr/bin/postgres and assume it is for the 9.5 release since that's running
and the 9.6 release is initiated but not invoked. The data directories are
easy but where do I look for the two bindirs?


Assuming the bindirs are in your $PATH:

aklaver@panda:~> whereis -f pg_ctl
pg_ctl: /usr/local/pgsql/bin/pg_ctl /usr/local/pgsql94/bin/pg_ctl

Even if only one is in the $PATH:

aklaver@panda:~> whereis -f pg_ctl
pg_ctl: /usr/local/pgsql/bin/pg_ctl

you can usually figure out where the other is.



TIA,

Rich






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best practices to manage custom statistics

2016-11-15 Thread Adrian Klaver

On 11/15/2016 07:39 AM, Moreno Andreo wrote:

Sorry for the late reply... I'm in some quite rough days

Il 08/11/2016 21:28, Adrian Klaver ha scritto:

On 11/08/2016 12:13 PM, Moreno Andreo wrote:

[...]

In your experience, would this approach help me lower server load?
Are there any other approaches I can try?


Instead of pushing why not pull.

Excuse me Adrian, but I can't get what you mean by not pushing but pulling.
We are now pulling data from clients for about everything we need...
what I'd like to do is either
- the database pushes updates when needed,


Pushes updates of what, the statistics you are talking about or other data?


or
- the client pulls data from the database, but querying a reduced dataset
(just one row with all the values for that user, not thousands of rows)


This confuses me, given your OP:

" 350 databases, 350 users, every user connects to his own database and
his teammates' (max 10 in total) so each user can connect to max 10
databases at a time"

"This is obtained, at the moment, with a select count(*) from . (that
involves 4 joins on 4 tables) to be run run every 20 secs from each
client connected to the cluster (ATM about 650 clients configured, about
200 concurrent) to each database it has rights to connect."

So does the user need only their data, or do they need the other users' 
data also?



In other words do the users really check/need the statistics every 20
secs?

Ideally, I need a real value to be read when that value changes. But on
Earth I'm happy with a consistent value (If it should be 800 and it
reads 799 it's not an earthquake) at least on a regular basis. This
means that if there's no activity, we will be uselessly polling the
database, so here's why I thought about "pushing" data from backend to
client, that would be the nearest to ideal solution.

Given that you say exact is not important over the course of day, why
not create a mechanism for the user to poll the database when they
need the information.


This is what we did in the past. The result was that users _did not_
update values (clicking an "update" button) and made disasters working
with "old" data (they forgot to do it, they didn't want to do it because


How can they be working with 'old' data? The queries you are running are 
compiling stats on data that exists at the time they are run, and at any 
point in time between stats runs the user is working with current data 
regardless of what the last stats say.



"it's another click, I waste my time", and so many, even stupid,
excuses... but they're the customers, they pay, and here we say that
"customer is always right")


Except when they are wrong:) Still been there.



So we changed: now we check for values and for data (not every 20 but
every 60 seconds... I just checked the right value). I need something that's
lighter for the DB backend, at least for the values procedure. If we had
only one database, I think the queries and datasets would stay in
cache, so response times would be faster. With more than 350 databases,
that's not possible (or we would have to grow RAM to very big values...)

I've also thought about using LISTEN/NOTIFY to send value updates to
client only when needed, but with NPgSQL I read that we need to keep an
open connection, and that's not a good idea AFAIK.
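
For completeness, the server-side half of that idea is tiny (channel name 
and payload are hypothetical):

LISTEN stats_changed;                     -- client side, on a connection held open
NOTIFY stats_changed, 'user_42';          -- ad hoc, or from a plpgsql trigger:
-- PERFORM pg_notify('stats_changed', NEW.user_id::text);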

Thanks
Moreno





If more details are needed, just ask.

Thanks in advance and sorry for the long message (but I had to explain
such a complex thing)
Moreno.-














--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Adrian Klaver

On 11/15/2016 08:09 AM, Arnaud L. wrote:

Le 15/11/2016 à 16:44, Tom Lane a écrit :

You'd have to provide a lot more detail before anyone could tell if there
was a fixable bug here, but I rather doubt it.  There are at least two
ways this scenario might lose:

1. There are additional objects in the target database that have
dependencies on ones that are in the dump.  In that case there is
no ordering of dropping the objects in the dump that will succeed.


The target database was restored from the dump into an empty database.
So what I do is:
dropdb -U postgres -h localhost db1
createdb -U postgres -h localhost db1
psql -U postgres -h localhost -c "CREATE EXTENSION postgis" db1
pg_dump -h pgsql1 -U postgres -b -Fc -E UTF8 -N public -N osm -N osm2 -f
"D:\temp.dump" db1
pg_restore -U postgres -h localhost -j 2 -d db1 "D:\temp.dump"

That works, no error, no warning.

Then
pg_restore -U postgres -h localhost --clean -d db1 "D:\temp.dump"
fails.

So can we exclude additional objects in the target database in this case?



2. There are dependency chains passing through objects that weren't
dumped (ie, if A depends on B which depends on C, and you omit B
from the dump, it might still be the case that A can't be restored
before C).


Can I trust what pgadmin says about objects dependent on a schema ?


Believe that only shows objects that are declared for that schema. 
It does not show internal relationships of the objects to other objects 
outside their schema. In other words, a function that is public.some_fnc 
but inside the function body operates on objects in another schema. Or a 
table in one schema that has a FK to a table in another schema, and so on.



It says that public schema's dependent objects are only it's own
operators, functions, etc. (i.e., what's in the postgis extension), and
the same for the other two schemas.
They don't show any dependent objects outside themselves.



If you think neither of those cases apply, please provide a
self-contained
test case.


That's not going to be easy. I'll try to trim down a pg_dump -s to see
how I can reproduce this.

--
Arnaud







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

