Re: [GENERAL] pg on Debian servers

2017-11-11 Thread Jan Claeys
On Sat, 2017-11-11 at 14:23 +, Mark Morgan Lloyd wrote:
> I think that the "preventing upgrades" route is the one to follow,
> since inhibiting the restart would obviously present a risk that
> something loaded dynamically could get out of step. As an at least
> temporary hack I've disabled unattended updates using
> 
> # systemctl disable unattended-upgrades.service

Unattended-upgrades is configurable and allows whitelisting package
origins, as well as blacklisting packages so that they never get
upgraded automatically (you can still upgrade them manually, of
course).

See /etc/apt/apt.conf.d/50unattended-upgrades (the default version of
that file includes documentation as comments).

Also see the unattended-upgrade(8) manpage, and the on/off switch in
/etc/apt/apt.conf.d/20auto-upgrades.
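
For example, a sketch of a blacklist entry (the entries are regular
expressions matched against package names; "postgresql-" here is just an
illustration, not the exact Debian package set):

// in /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Package-Blacklist {
    "postgresql-";
};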


-- 
Jan Claeys





Re: [GENERAL] a JOIN to a VIEW seems slow

2017-10-04 Thread Jan de Visser
On Monday, October 2, 2017 2:32:34 AM EDT Frank Millman wrote:
> From: Frank Millman
> Sent: Friday, September 22, 2017 7:34 AM
> To: pgsql-general@postgresql.org
> Subject: Re: a JOIN to a VIEW seems slow
> 
> On Fri, Sep 22, 2017 at 7:34 AM, Frank Millman wrote:
> >  On Thu, Sep 21, 2017 at 3:54 PM, Merlin Moncure wrote:
> >  > Something is not adding up here.  Can you EXPLAIN ANALYZE the 26 minute
> >  > query?
> >  > 
> > Here it is -
> > 
> > https://explain.depesz.com/s/cwm
> 
> Just checking – is this under investigation, or is this thread considered
> closed?

That's not how it works. This is a community list; if somebody finds your 
problem interesting you will get feedback, but there is no guarantee you will 
get any.

One suggestion is to provide as many details as possible (server version info, 
table definitions, full SQL statements, explain results, etc). I haven't looked 
back in the thread to see if you actually did this, but that's often the reason 
for no/little feedback.

Another thing is to make life easier on the other list members by adhering to 
community conventions, i.e. text-only (no HTML) emails, reply trimming, and 
bottom posting.






Re: [GENERAL] Insertion of large xml files into PostgreSQL 10beta1

2017-06-28 Thread Jan de Visser
On Sunday, June 25, 2017 11:02:41 PM EDT Alain Toussaint wrote:
> > Narrowing down the entire file to a small problem region and posting a
> > self-contained example,
> 
> The url here contain the set of xml records from a publication I
> worked on many years ago:
> 
> https://www.ncbi.nlm.nih.gov/pubmed/21833294?report=xml&format=text
> 
> The particularly problematic region of the xml content is this:
> 
> 
> 
> Neuroreport. 2000 Sep
> 11;11(13):2969-72 11006976
> 
> 
> J Neurosci. 2005 May
> 25;25(21):5148-58 15917455
> 
> 
> Neuroimage. 2003 Dec;20(4):1944-54
> 14683700
> 
> 
> There are more of these types of comments in any given citation.
> 
> > or at least providing the error messages and
> > content, might help elicit good responses.
> 
> here it is:
> 
> ERROR: syntax error at or near "44"
> LINE 1: 44(1):37-43

This string does not appear in your link above. Please match your example data 
with your error message.

> 
> the command I used is this one:
> 
> echo "INSERT INTO samples (xmldata) VALUES $(cat
> /srv/pgsql/pubmed/medline17n0001.xml)" | /usr/bin/psql medline
> 1>/dev/null 2>error.log

I'm going to go out on a limb and assume the problem is unescaped quote 
characters, or some other escape/quote problem. Hard to say though without 
having the right data to look at.
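
If it is a quoting problem, one way around it is to let psql do the quoting
instead of the shell. A sketch, assuming the file fits comfortably in memory:

\set content `cat /srv/pgsql/pubmed/medline17n0001.xml`
INSERT INTO samples (xmldata) VALUES (:'content');

The :'content' interpolation single-quotes the value and escapes any quote
characters embedded in it.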







[GENERAL] LEFT JOIN, entry can not be referenced

2017-06-26 Thread Jan Danielsson
Hello,

   I'm trying to use LEFT JOINs in a manner which I imagine is pretty
archetypal.  In short: I have a table called worklogs which has a few
columns that can't be NULL, and a few columns which may reference other
tables or will be NULL.  If the optional columns are not NULL, I want to
use their ids to fetch names/titles from those other tables.

   I use the following query to gather a list of the work log rows (this
query is much simpler than it looks at first glance; it just has a lot
of columns):

SELECT wl.ts, wa.name, wl.user_id, u.name, wl.doc_id, d.doc_id,
       wl.docrev_id, dr.docrev, wl.file_id, f.fname, wl.issue
FROM worklogs AS wl, workactions AS wa, users AS u
LEFT JOIN documents AS d ON wl.doc_id=d.id
LEFT JOIN docrevs AS dr ON wl.docrev_id=dr.id
LEFT JOIN files AS f ON wl.file_id=f.id
WHERE wl.action_id=wa.id AND wl.user_id=u.id
ORDER BY wl.ts DESC;

   When I run this I get the error:


ERROR:  invalid reference to FROM-clause entry for table "wl"
LINE 3: LEFT JOIN documents AS d ON wl.doc_id=d.id
^
HINT:  There is an entry for table "wl", but it cannot be referenced
from this part of the query.


   This is at the outer bounds of my SQL knowledge; I understand what
the error and hint are saying (wl isn't valid in the context of the
JOIN), but I'm not sure how to remedy that / how to rephrase the query.
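
   I gather the usual remedy is to stop mixing the comma-separated FROM
list with JOIN syntax and to write every join explicitly, so that wl stays
in scope for the LEFT JOIN conditions. An untested sketch (same tables and
columns as above):

SELECT wl.ts, wa.name, wl.user_id, u.name, wl.doc_id, d.doc_id,
       wl.docrev_id, dr.docrev, wl.file_id, f.fname, wl.issue
FROM worklogs AS wl
JOIN workactions AS wa ON wl.action_id=wa.id
JOIN users AS u ON wl.user_id=u.id
LEFT JOIN documents AS d ON wl.doc_id=d.id
LEFT JOIN docrevs AS dr ON wl.docrev_id=dr.id
LEFT JOIN files AS f ON wl.file_id=f.id
ORDER BY wl.ts DESC;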


   ("Dear Diary"-moment: I've had queries in the past which work in
postgresql which I couldn't run in sqlite, but this is the first time I
can recall where a query works in sqlite but not in postgresql).

-- 
Kind regards,
Jan Danielsson





Re: [GENERAL] Extract from text id other table - PG 9.1

2017-06-07 Thread Jan de Visser
On Tuesday, June 6, 2017 10:32:16 PM EDT Patrick B wrote:
> Hi guys,
> 
> I've got tableA with 3 columns.
> 
> id(serial) | type(character varying(256)) | string(character varying(256))
> 
> I have the type/string value stored in another table, and from that I would
> like to get the id.
> 
> Example:
> http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=15c571caa36876f00a0a2eaace703a
> 2b
> 
> How can I extract, from that tablea.type_m column, the tableb.id value?

SELECT b.id FROM tableb b, tablea a
  WHERE (b.type || '/' || b.string = a.type_m) AND (a.id = 2);





Re: [GENERAL] Upgrading postgresql minor version

2017-05-10 Thread Jan de Visser
On Wednesday, May 10, 2017 9:08:16 AM EDT Ron Ben wrote:
> 

Ron, 

You need to figure out how you can make your email client send something other 
than base64-encoded HTML with right-aligned text. Your messages are so hard 
for me to parse that I just ignore them, and I assume there are other people 
who do the same.

Attached a screenshot of one of your messages.





Re: [GENERAL] PostgreSQL Required Monitoring

2017-04-28 Thread Jan Keirse
On Fri, Apr 28, 2017 at 4:39 PM, Andrew Kerber 
wrote:

> I am a fairly experienced Oracle DBA, and we are starting to move into
> the PostgreSQL world.  I would expect the standard monitoring items are
> required for mission-critical Postgres apps, i.e., disk space, WAL log space,
> log monitoring, process counts, software running, connection available on
> the correct port, CPU usage.
>
> Are there additional PostgreSQL specific items that need to be monitored?
> if so, what items?
>
>
In addition to what has already been mentioned, I also monitor pg_locks:
not just counts, but also mode, duration, and non-granted locks.
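
A minimal sketch of the kind of check I mean (pg_locks exposes a mode
column and a granted flag):

SELECT mode, count(*)
FROM pg_locks
WHERE NOT granted
GROUP BY mode;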



Re: [GENERAL] Postgres connection Monitor

2017-04-27 Thread Jan de Visser
On Thu, Apr 27, 2017 at 6:18 AM, basti  wrote:
> Hello, we have a Postgres database server, and around it there are 8
> clients getting data from it.
>
> All servers are in the same datacenter, I think (traceroute shows only 4-5 hops).
>
> Now we plan to move the server to another hoster/datacenter.
>
> I have done simple tests with ping and traceroute.
> Is there a way to monitor "traffic times" in Postgres?
> How long would a query take to be answered?

You could write a tiny little C program which connects and sends a
PQping(), and measure that.
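
A zero-code approximation (not the PQping route, just psql's built-in
client-side timing) would be:

\timing on
SELECT 1;

The reported time includes the network round trip, so running it from both
datacenters gives a rough feel for the difference.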




Re: [GENERAL] UDP buffer drops / statistics collector

2017-04-20 Thread Jan de Visser
On Thursday, April 20, 2017 3:38:42 AM EDT Tim Kane wrote:
> The pgss_query_texts.stat still wants to live in the default pg_stat_tmp
> directory, whether by design or not.. but that's a non-issue for me now.

A 30-second investigation of the source seems to indicate that that directory 
is hardcoded (i.e. not configurable). But I may be missing some of the 
intricacies of the GUC mechanisms.





Re: [GENERAL] Build PostGIS In Windows Using --with-gdalconfig=FILE

2017-04-11 Thread Jan de Visser
On Tuesday, April 11, 2017 5:43:29 AM EDT Osahon Oduware wrote:
> Hi,
> 
> I am using Windows 7 OS and I have installed a GDAL with support for MrSID
> format. I want to build my PostGIS with the --with-gdalconfig=FILE option to
> point to this new GDAL.
> 
> How do I accomplish this in a Windows OS?

This is a mailing list for general pgsql use questions. While some people may 
have postgis knowledge, I think you'll have more success on the postgis users' 
list:

https://lists.osgeo.org/mailman/listinfo/postgis-users

Hope this helps.




Re: [GENERAL] Fwd: Query parameter types not recognized

2017-02-11 Thread Jan de Visser
On Friday, February 10, 2017 6:46:08 PM EST David G. Johnston wrote:
> In short - this is the wrong list (pgsql-j...@postgresql.org is the
> appropriate one; or the official GitHub repo) and you need to provide some
> working self-contained examples showing exactly what you are doing.​​
> 
> On Fri, Feb 10, 2017 at 8:17 AM, Roberto Balarezo 
> 
> wrote:
> [snip]

One thing to note is that JDBC PreparedStatement objects are not abstractions 
of pgsql prepared statements; the driver performs parameter interpolation and 
sends a standard text query to the server. At least, this was how it was many 
moons ago when I last hacked on the driver. So it's a case of different 
concepts using the same name.

Not quite sure how that impacts your analysis.





Re: [GENERAL] Slow index scan - Pgsql 9.2

2017-01-09 Thread Jan de Visser
>
> Hi guys,
>
> I've got the following Query:
>
> WITH query_p AS (
>     SELECT CAST(6667176 AS BIGINT) AS client_id
> ),
> clients AS (
>     SELECT client.id, client.job_share_mode
>     FROM customers AS client
>     WHERE (client.clientid = (SELECT qp.client_id FROM query_p AS qp))
>       AND NOT client.is_demo
>       AND NOT client.deleted
> )
> SELECT qp.client_id,
>        (SELECT COUNT(0) FROM customers AS c
>         WHERE (c.clientid = qp.client_id) AND NOT c.deleted) AS client_count
> FROM query_p AS qp
>
>
> Explain Analyze:
>
> CTE Scan on "query_p" "qp"  (cost=0.01..1060.57 rows=1 width=8) (actual
> time=4065.244..4065.246 rows=1 loops=1)
>
>   CTE query_p
>
> ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.003
> rows=1 loops=1)
>
>   SubPlan 2
>
> ->  Aggregate  (cost=1060.53..1060.54 rows=1 width=0) (actual
> time=4065.229..4065.229 rows=1 loops=1)
>
>   ->  Index Scan using "clientid_customers" on "customers" "c"
>  (cost=0.00..1059.01 rows=607 width=0) (actual time=9.105..4063.728
> rows=2513 loops=1)
>
> Index Cond: ("clientid" = "qp"."client_id")
>
> Filter: (NOT "deleted")
>
> Rows Removed by Filter: 1068
>
> Total runtime: 4075.753 ms
>
>
>
> Why is a search for "client_id" so slow?
>

I would think because of the NOT "deleted" clause. Which is interesting,
because that's a column which you conveniently didn't include in the
definition below.


>
>
> Table customers:
>
>   Table "public.customers"
>
>    Column   |          Type           |                         Modifiers
> ------------+-------------------------+------------------------------------------------------------
>  id         | bigint                  | not null default "nextval"('"customers_seq"'::"regclass")
>  clientid   | bigint                  | not null default 0
>  name_first | character varying(80)   | default ''::character varying
>  name_last  | character varying(80)   | default ''::character varying
>  company    | character varying(255)  | default ''::character varying
>
> Index clientid_customers:
> 
> CREATE INDEX clientid_customers ON customers ("clientid");
>
>
>
> Thanks!
>
> Patrick
>


Re: [GENERAL] postgres pg_bulkload c filter function in c programming

2016-12-29 Thread Jan de Visser
On Thursday, December 29, 2016 5:10:08 AM EST rajmhn wrote:
> Gurus,
> 
> Reading the data from a file and loading it using a pg_bulkload C filter. As
> per the documentation, a C filter is much faster than a SQL filter.
> 
> I'm new to C. I've gone through this documentation, but it's not clear how
> to start: https://www.postgresql.org/docs/current/static/xfunc-c.html
> 
> Can someone kindly guide me in creating the C code, so that it can be called
> in a Postgres function?
> 

It seems to me it would be much easier to load the data into a temporary 
table, and from there transform into the form you need it to be. If you're not 
experienced in C (and probably even if you are) you can import *a lot* of data 
in the time it would take you to develop that custom filter. 

Besides, your requirements don't appear to be that difficult.
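
A minimal sketch of that staging approach, with hypothetical table and
column names, and assuming a '|'-delimited input file:

CREATE TEMP TABLE staging (raw text);
\copy staging FROM '/srv/data/input.dat'
INSERT INTO target_table (col_a, col_b)
SELECT split_part(raw, '|', 1),
       split_part(raw, '|', 2)::int
FROM staging;

All the transformation logic then lives in plain SQL instead of a custom C
filter.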





Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-28 Thread Jan de Visser
On Wednesday, December 28, 2016 6:02:51 AM EST Mike Sofen wrote:
> the natural lashup of plpgsql to postgres (I liked Alban’s term,
> “impedance”), is a key aspect.

Not to deprive Alban of any of his credit, but the term "impedance mismatch" 
is at least 25 years old; as far as I know it was coined to describe the 
problems arising from attempting to shoehorn an OO model onto a relational 
database.

And despite the smart people in academia warning us about that mismatch in the 
early 90s, we bravely soldiered on (I'm taking my full share of the blame here) 
and 10-15 years later came up with abominations like Hibernate...

History lesson over, carry on...






Re: [GENERAL] pg_dump and quoted identifiers

2016-12-15 Thread Jan de Visser
On Thursday, December 15, 2016 5:15:44 PM EST Thomas Kellerer wrote:
> Tom Lane schrieb am 15.12.2016 um 16:20:
> >> Still doesn't work:
> >>   -bash-4.1$ pg_dump -d postgres -t "\"Statuses\""
> >>   pg_dump: no matching tables were found
> > 
> > Hmm.  It might shed some light if you put "echo" in front of that
> > to see what gets printed:
> > 
> > $ echo pg_dump -d postgres -t "\"Statuses\""
> > pg_dump -d postgres -t "Statuses"
> 
[snip]

Crazy guess: Is pg_dump shadowed by a (maybe distro-provided) script that eats 
your quotes?
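
Something like this would show every pg_dump on the PATH and what the first
hit actually is:

$ type -a pg_dump
$ file $(which pg_dump)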






Re: [GENERAL] Query regarding deadlock

2016-11-24 Thread Jan de Visser

On 2016-11-24 9:06 PM, Yogesh Sharma wrote:


> Dear John,
> 
> Thanks for your support.
> I mean to say, the REINDEX is called hourly and the insert query executes
> every minute. So there might be a race condition where these queries run
> at the same time.

Why do you need to run REINDEX every hour? That sounds like a stopgap 
solution for another problem.

> Is there a solution where we can add some check before the REINDEX
> operation is performed? Is that possible?


Try to find out why the cron script is there in the first place. Then go 
from there - eliminate it, or do it only in down periods.




Regards,
Yogesh

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Friday, November 25, 2016 10:55 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Query regarding deadlock

On 11/24/2016 5:44 PM, Yogesh Sharma wrote:

> I cannot change the calling of REINDEX and the insert query sequence,
> because they are executed automatically through some cron script.

any cron scripts are your own doing, so this statement makes no sense at all.








Re: [GENERAL] import_bytea function

2016-10-08 Thread Jan de Visser



On 2016-10-08 2:36 AM, Stephen Davies wrote:

On 07/10/16 19:24, Thomas Kellerer wrote:

Stephen Davies schrieb am 07.10.2016 um 10:46:
You can store the contents of a file in a bytea using plain JDBC no 
lo_import() required


String sql = "insert into images (id, image_data) values (?,?)";
Connection con = ...;  // obtain a connection
File uploaded = new File("...");
InputStream in = new FileInputStream(uploaded);
PreparedStatement pstmt = con.prepareStatement(sql);
pstmt.setInt(1, 42);
pstmt.setBinaryStream(2, in, (int)uploaded.length());  // parameter index 2 = image_data
pstmt.executeUpdate();

This *only* works with bytea column, not with "large objects".

In production code you obviously need to close all resources and 
handle errors.

I left that out for simplicity.


That looks reasonable but I need to update rather than insert, and my similar 
code with sql="update part set pic=? where id=3" did not work.


That *will* work (using that myself for updates as well).

What exactly is your problem? What was the error/exception?






I tried the prepared statement approach again and this time it worked.
No idea what I did wrong last time.

However, my display code still does not work.


You need to stream the data. Working from memory here, and it's been a 
long time, but it's something like


PreparedStatement ps = conn.prepareStatement(
    "SELECT bytea_column FROM foo WHERE bar = ?");
ps.setInt(1, 42);
ResultSet rs = ps.executeQuery();
if (rs.next()) {
    InputStream is = rs.getBinaryStream(1);   // stream the bytea value
    byte[] bytes = new byte[1024];
    int n;
    while ((n = is.read(bytes)) > 0) {
        System.out.write(bytes, 0, n);
    }
}

Something like this; error handling and resource cleanup omitted.




Cheers and thanks,
Stephen








Re: [GENERAL] Change domain type - Postgres 9.2

2016-09-26 Thread Jan de Visser

On 2016-09-26 1:15 AM, Gavin Flower wrote:


On 26/09/16 17:58, Patrick B wrote:

Hi guys,

I've got this domain:

CREATE DOMAIN public.a_city
  AS character varying(80)
  COLLATE pg_catalog."default";


And I need to increase the type from character varying(80) to 
character varying(255).


How can I do that? didn't find info about it. I'm using Postgres 9.2

Thanks!
Patrick


Why not simply use the 'text' data type?

To change the data type on a column you can use:
ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE 
collation ] [ USING expression ]


see:
https://www.postgresql.org/docs/9.2/static/sql-altertable.html


Note that 9.5 is the latest version of pg, with 9.6 being released 
very soon!



Cheers,
Gavin



So I guess the answer to the question is:
- Find all occurrences of a_city
- Change the type of those columns to text (or varchar(80))
- Drop the domain
- Recreate it with the proper definition. I agree with Gavin that text is a 
better choice. Experience has taught me that server-side size constraints 
are more trouble than they're worth, and that size constraints are better 
handled on the client side.
- Change the type of the columns back to the domain.
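
A sketch of those steps, assuming (hypothetically) that a single column
city on a table addresses uses the domain:

ALTER TABLE addresses ALTER COLUMN city TYPE text;
DROP DOMAIN public.a_city;
CREATE DOMAIN public.a_city AS text;
ALTER TABLE addresses ALTER COLUMN city TYPE public.a_city;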







Re: [GENERAL] WAL directory size calculation

2016-07-28 Thread Jan Wieck
On Thu, Jul 28, 2016 at 9:54 AM, Andreas Kretschmer  wrote:

> Without Replication 1 GB would be fine, even with replication. But it must
> be reliable!
>
>
The required size of WAL depends on your intended checkpoint_timeout versus
the amount of WAL generated by data turnover. A rather small 40GB database,
churning TPC-C style transactions at a rate of 1,000 TPS, can easily generate
60MB of WAL per second (if configured wrong). To keep the WAL size at or
below 1GB would require a checkpoint to complete every 17 seconds. In this
case, max_wal_size=1GB is a very wrong config option.

One problem here is that the more frequently checkpoints occur, the more full
page writes will be required, which drives up the amount of WAL, requiring
checkpoints even more frequently when max_wal_size is the limiting factor.
This is a classic "down spiral" scenario.

At 1,000 TPS, the above benchmark levels out (after about 1-2 hours) around
60-64GB of WAL space used (with max_wal_size = 96GB and
checkpoint_timeout = 20min). The total amount of WAL actually produced goes
down significantly (due to reduced full page writes), and the transaction
response time improves on average as well as in stddev. The whole DB looks
more like it is cruising than fighting.

This example isn't a big database (40-80GB) or anything exotic. Just a
write heavy OLTP
load.
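
For reference, on a 9.5-or-later server the settings from that example
could be applied like this (a sketch; both settings take effect on reload):

ALTER SYSTEM SET max_wal_size = '96GB';
ALTER SYSTEM SET checkpoint_timeout = '20min';
SELECT pg_reload_conf();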


Regards, Jan







> Andreas
>
>
>



-- 
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info


Re: [GENERAL] Slony error please help

2016-07-17 Thread Jan Wieck
On Sun, Jul 17, 2016 at 12:47 AM, Jan Wieck  wrote:

>
> The only thing I can imagine would be that there is another slony cluster
> (or
> remnants of it) hanging around in the 9.4 installation, possibly in
> another database.
>
>
That does reproduce the problem. I ran the new doc/pgbench-tutorial through
steps
01, 02 and 03 with a 9.4/2.2.2 installation. Upgraded to 9.4/2.2.5 but left
out the
UPDATE FUNCTIONS for node 3. I could have created a fourth database and just
run INIT CLUSTER against that.

I then installed 9.5/2.2.5 in parallel and the pg_upgrade run looks like
this:

(venv)[postgres@db1 pgbench-tutorial]$ pg_upgrade -b /var/lib/pgsql/test_94/bin \
    -B /var/lib/pgsql/test_95/bin -d /opt/pgsql/test_94 -D /opt/pgsql/test_95 \
    -p 54394 -P 54395 -c

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for presence of required libraries                 fatal

Your installation references loadable libraries that are missing from the
new installation.  You can add these libraries to the new installation,
or remove the functions using them from the old installation.  A list of
problem libraries is in the file:
    loadable_libraries.txt

Failure, exiting

(venv)[postgres@db1 pgbench-tutorial]$ cat loadable_libraries.txt
Could not load library "$libdir/slony1_funcs.2.2.2"
ERROR:  could not access file "$libdir/slony1_funcs.2.2.2": No such file
or directory
>
If I drop the offending database or run UPDATE FUNCTIONS in it, pg_upgrade
is happy.


Regards, Jan

-- 
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info


Re: [GENERAL] Slony error please help

2016-07-16 Thread Jan Wieck
On Thu, Jun 16, 2016 at 3:36 PM, avi Singh 
wrote:

> /usr/pgsql-9.5/bin/pg_upgrade -b /usr/pgsql-9.4/bin/ -B
> /usr/pgsql-9.5/bin/ -d /var/lib/pgsql/cmates1/data/ -D
> /var/lib/pgsql/cmates1/data/9.5/ -p 5432 -P 5433 -c
>
>
> it throws this error
>
> Could not load library "$libdir/slony1_funcs.2.2.2"
> ERROR:  could not access file "$libdir/slony1_funcs.2.2.2": No such file
> or directory
>
>
Using git clones of everything on CentOS 7.1 I cannot reproduce this
problem.
Even the actual pg_upgrade without -c works, the cluster starts up and
replicates
just fine. I went through installing a 9.4 with 2.2.2, then upgrading to
2.2.4 (via
UPDATE FUNCTIONS) and last upgrading to 9.5 with pg_upgrade.

The only thing I can imagine would be that there is another slony cluster
(or
remnants of it) hanging around in the 9.4 installation, possibly in another
database.

Can you provide a list of all databases and all schemas in them from the
9.4 install?


Regards, Jan







>
>
> Not sure why it is still looking for slony1_funcs.2.2.2 even though the
> version is upgraded to 2.2.4 and it is running fine. I do see
> slony1_funcs.2.2.4.so files in the lib directory, which is how it should be
> since I have upgraded it to 2.2.4.
>
>
> Any suggestions?
>
>
> Thanks
> Avi
>
>


-- 
Jan Wieck
Senior Postgres Architect


Re: [GENERAL] regarding schema only migration from sqlserver to postgres with runmtk.sh

2016-06-16 Thread Jan de Visser

P L E A S E   D O N ' T   T O P  P O S T



On Thursday, June 16, 2016 9:00:48 PM EDT Durgamahesh Manne wrote:
> As per the above conversation, I checked the connectivity details and it
> does connect to the database (even with the hostname I mentioned):
>   psql --host 192.168.168.201 --user postgres -d raghu
> Password for user postgres:
> psql.bin (9.4.8, server 9.4.4)
> Type "help" for help.
> 
> No entry for terminal type "xterm";
> using dumb terminal settings.
> raghu=#
> 
> On Thu, Jun 16, 2016 at 8:51 PM, Jan de Visser  wrote:
> > Please don't top-post.
> > 
> > On Thursday, June 16, 2016 8:30:53 PM EDT Durgamahesh Manne wrote:
> > > hi
> > > sir
> > > as per above discussion same error repeated even mentioned ip address of
> > > hostname
> > > 
> > > ./runMTK.sh -sourcedbtype sqlserver -targetSchema public -schemaOnly
> > > -allTables dbo
> > > 
> > > 
> > > TARGET_DB_URL=jdbc:postgresql://192.168.168.201:5432/raghu
> > > TARGET_DB_USER=postgres
> > > TARGET_DB_PASSWORD=*
> > > 
> > > 

- Just to make sure: TARGET_DB_PASSWORD is not actually set to '*', right?
- Are you sure that you need 'jdbc:' in the URL?


Also, your tone and communication style are not really helping your cause. 
Your quoting style is chaotic and you're very adversarial. I understand you're 
probably under pressure to get this fixed, but yelling at the volunteers that 
help you here is not helping.




Re: [GENERAL] regarding schema only migration from sqlserver to postgres with runmtk.sh

2016-06-16 Thread Jan de Visser
Please don't top-post.

On Thursday, June 16, 2016 8:30:53 PM EDT Durgamahesh Manne wrote:
> hi
> sir
> as per above discussion same error repeated even mentioned ip address of
> hostname
> 
> ./runMTK.sh -sourcedbtype sqlserver -targetSchema public -schemaOnly
> -allTables dbo
> 
> 
> TARGET_DB_URL=jdbc:postgresql://192.168.168.201:5432/raghu
> TARGET_DB_USER=postgres
> TARGET_DB_PASSWORD=*
> 
> 
> 
> Connecting with source SQL Server database server...
> Connected to Microsoft SQL Server, version '10.50.1600'
> Connecting with target EnterpriseDB database server...
> MTK-10045: The URL specified for the "target" database is invalid.
> Check the connectivity credentials.
> Stack Trace:
> com.edb.MTKException: MTK-10045: The URL specified for the "target"
> database is invalid.
> Check the connectivity credentials.
> 

So have you tried the connectivity details? What does

$ psql --host 192.168.168.201 --user postgres -d raghu

give you?

I have a sneaking suspicion you don't have TCP/IP access to the database 
configured in pg_hba.conf.
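
If so, pg_hba.conf would need a line along these lines (network and auth
method here are hypothetical), followed by a reload:

host    raghu    postgres    192.168.168.0/24    md5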








Re: [GENERAL] Changelog version from 8.1.2 to 9.3.6

2016-06-15 Thread Jan de Visser
Please don't top post.

On Thursday, June 16, 2016 3:42:53 AM EDT Yogesh Sharma wrote:
> Dear David sir/All,
> 
> Thanks for your help.
> Just wanted to confirm few things.
> 
> (9.3.5, 9.2.9, 9.1.14, 9.0.18, 8.4.22) Fix REASSIGN OWNED to not fail for text
> search objects (Álvaro Herrera)
> 
> As per my understanding from what you told me, the fix was done in 9.3.5 and
> then back-patched to all the remaining versions mentioned above. My concern
> is why it was back-patched to these particular versions only. Also, will
> this fix be available only in these particular versions and above 9.3.5,
> or is it available in all the versions above 8.4.22?

It's available in 9.3.x with x >= 5, 9.2.x with x >= 9, 9.1.x with x >= 14, 
9.0.x with x >= 18, and 8.4.x with x >= 22. And in 9.4.x and 9.5.x for all x.

> This is not about this particular patch; I want to understand the multiple
> version numbers which are mentioned before several patches at the given link:
> 
> https://bucardo.org/postgres_all_versions.html
> 
> Please let me know what yours thought onto this.
> 
> 
> Regards,
> Yogesh
> 





Re: [GENERAL] Moving from PHP to Java: A result was returned when none was expected.

2016-06-15 Thread Jan de Visser
On Wednesday, June 15, 2016 9:07:10 PM EDT Alexander Farber wrote:
> I only understand a quarter of what you guys are writing,
> but to me the JDBC driver throwing SQLException
> "A result was returned when none was expected"
> when my stored function is declared as "void" with
> 
> CREATE OR REPLACE FUNCTION words_skip_game(
> IN in_uid integer,
> IN in_gid integer)
> RETURNS void AS
> $func$
> BEGIN
> 
> 
> is a strange decision. Why throw the exception, what's the benefit?
> 
> Even if PostgreSQL does not have stored procedures (???),
> why does the JDBC driver not work around that fact?

In addition to what Adrian and David had to say, I'll reiterate what I said 
upthread: use PreparedStatement.executeQuery instead of 
PreparedStatement.executeUpdate, and ignore the result. You are executing a 
SELECT after all, and 'void' is a result, albeit a not very informative one.

And if you only understand a quarter of what we are writing, you may want to 
read up on both JDBC and pgsql. Porting from PHP to Java involves more than 
mechanically replacing statements. Understanding the technologies is important.





Re: [GENERAL] Moving from PHP to Java: A result was returned when none was expected.

2016-06-15 Thread Jan de Visser
On Wednesday, June 15, 2016 10:43:13 AM EDT David G. Johnston wrote:
> On Wed, Jun 15, 2016 at 10:30 AM, Jan de Visser  wrote:
> > Point is that you're doing a SELECT. A SELECT returns a result, which can
> > be
> > empty. I would use executeQuery and ignore the result.
> > 
> > There is a bit of a mismatch between the JDBC stored procedure model and
> > the
> > pgsql function model, because pgsql doesn't have true stored procedures.
> 
> Can you point to docs, JDBC and/or PG, that describe what it means to
> "RETURN void"?
> 
> At a high-level SQL returns SETs and the empty set is a valid SET.  I take
> it from your comment that JDBC considers the empty set "a result", whose
> record count is zero.

Hrm...

jan=# create or replace function foo() returns void as $$
begin
raise notice 'foo() called';
end
$$ language plpgsql;
CREATE FUNCTION
jan=# select foo();
NOTICE:  foo() called
 foo 
-----
 
(1 row)

So there's a row. Don't know what that row would contain, and how it would map 
to JDBC.
 




Re: [GENERAL] Moving from PHP to Java: A result was returned when none was expected.

2016-06-15 Thread Jan de Visser
On Wednesday, June 15, 2016 10:43:13 AM EDT David G. Johnston wrote:
> On Wed, Jun 15, 2016 at 10:30 AM, Jan de Visser  wrote:
> > Point is that you're doing a SELECT. A SELECT returns a result, which can
> > be
> > empty. I would use executeQuery and ignore the result.
> > 
> > There is a bit of a mismatch between the JDBC stored procedure model and
> > the
> > pgsql function model, because pgsql doesn't have true stored procedures.
> 
> Can you point to docs, JDBC and/or PG, that describe what it means to
> "RETURN void"?
> 
> At a high-level SQL returns SETs and the empty set is a valid SET.  I take
> it from your comment that JDBC considers the empty set "a result", whose
> record count is zero.

That's what I assume.







Re: [GENERAL] Moving from PHP to Java: A result was returned when none was expected.

2016-06-15 Thread Jan de Visser
On Wednesday, June 15, 2016 4:21:47 PM EDT Alexander Farber wrote:
> Hello Jan,
> 
> On Wed, Jun 15, 2016 at 4:17 PM, Jan de Visser  wrote:
> > On Wednesday, June 15, 2016 3:56:07 PM EDT Alexander Farber wrote:
> > > Now I am trying to call the same function through JDBC driver
> > > 9.4.1208.jre7:
> > > 
> > > private static final String SQL_SKIP_GAME = "SELECT words_skip_game(?, ?)";
> > > 
> > > try (PreparedStatement st = mDatabase.prepareStatement(SQL_SKIP_GAME)) {
> > >     st.setInt(1, mUid);
> > >     st.setInt(2, gid);
> > >     st.executeUpdate();
> > > }
> > > 
> > > and sadly get the SQLException "A result was returned when none was
> > > expected.".
> > 
> > Looking at the 9.4 documentation, I see something completely different:
> > 
> > https://jdbc.postgresql.org/documentation/94/callproc.html
> 
> your doc talks about calling stored functions which return SETOF or cursor.

Example 6.1 doesn't.

What I was trying to indicate is that the page you referred to has undergone, 
um, significant changes over the years.

> 
> But my function is returning VOID, so according to
> https://jdbc.postgresql.org/documentation/94/update.html
> I was thinking I should call executeUpdate()?
> 
> Regards
> Alex

Point is that you're doing a SELECT. A SELECT returns a result, which can be 
empty. I would use executeQuery and ignore the result.

There is a bit of a mismatch between the JDBC stored procedure model and the 
pgsql function model, because pgsql doesn't have true stored procedures.






Re: [GENERAL] Moving from PHP to Java: A result was returned when none was expected.

2016-06-15 Thread Jan de Visser
On Wednesday, June 15, 2016 3:56:07 PM EDT Alexander Farber wrote:
> Now I am trying to call the same function through JDBC driver 9.4.1208.jre7:
> 
> private static final String SQL_SKIP_GAME = "SELECT words_skip_game(?, ?)";
> 
> try (PreparedStatement st = mDatabase.prepareStatement(SQL_SKIP_GAME)) {
>     st.setInt(1, mUid);
>     st.setInt(2, gid);
>     st.executeUpdate();
> }
> 
> and sadly get the SQLException "A result was returned when none was
> expected.".
> 
> Shouldn't I call executeUpdate() method here - according to the doc
> https://www.postgresql.org/docs/7.4/static/jdbc-callproc.html  ?

You are looking at the 7.4 documentation. That page is so old it can apply for 
a driver's license in some jurisdictions.

Looking at the 9.4 documentation, I see something completely different:

https://jdbc.postgresql.org/documentation/94/callproc.html







Re: [GENERAL] postgresql embedded mode

2016-05-23 Thread Jan de Visser
On Monday, May 23, 2016 1:17:13 PM EDT Albe Laurenz wrote:
> David G. Johnston wrote:
> 
> > On Mon, May 23, 2016 at 6:54 AM, aluka raju 
> > wrote:
> 
> >> As given in the FAQ's that postgresql cannot be embedded
> >> https://wiki.postgresql.org/wiki/FAQ#Can_PostgreSQL_be_embedded.3F .
> >> 
> >> Is there any possibility to make it embedded? Till now PostgreSQL has not
> >> done this embedded mode, and I want to work on how it can be embedded
> >> and contribute. Please help me how to start, or suggest an idea.
> > 
> > 
> > ​Step 1 - Fork the project...
> > 
> > Changing the fundamental architecture of the system is not something I
> > would expect a project with this long a history to attempt to do
> > directly.  While the people working on the core product are welcome to
> > spend their time however they like, I don't imagine any of them would be
> > willing to commit code to core pertaining to this capability.  Most, if
> > not all, of them likely don't believe it is even a good idea generally.
> 
> 
> There was this patch by Tom Lane in 2012:
> http://www.postgresql.org/message-id/12511.1346631...@sss.pgh.pa.us
> 
> This can be used as something like an embedded database.
> Nothing more happened with this patch, though.
> 
> Yours,
> Laurenz Albe

I'm thinking that the recent work on parallel workers is going to make an 
embedded server hard to impossible, depending how you define "embedded". 
Individual workers are still separate O/S processes, so if the requirement is 
that everything must be in-process, that's not going to work. That's of course 
fixable by setting the number of bg workers to 0, but even if you do that 
there's the issue of autovac for example.




Re: [GENERAL] postgresql-9.5.3 compilation on Solaris SPARC

2016-05-20 Thread Jan de Visser
On Friday, May 20, 2016 3:14:54 PM EDT Venkata Balaji N wrote:
> Hi,
> 
> "make" command is generating the following error while compiling
> postgresql-9.5.3 on Solaris SPARC.
> 
> I tried compiling 9.2 and 9.3, works fine. This is only happening on 9.5.

... snip ...

> 
> Regards,
> Venkata B N
> 
> Fujitsu Australia

I would suggest reporting this on pgsql-hackers. I know some work was done on 
the atomics over the last little while.




Re: [GENERAL] Ascii Elephant for text based protocols - Final

2016-05-17 Thread Jan de Visser
On Tuesday, May 17, 2016 9:41:51 AM EDT Bryan Nuse wrote:
> On 05/17/2016 08:25 AM, Victor Yegorov wrote:
> > I had a bit of fun with this SQL version and came up with this query:
> > 
> > WITH src(s) AS (
> > 
> >   VALUES
> > 
> > ('729472967293732174412176b12173b17111752171927491b1744171b174112171814172
> > 11718141734172b191721191724173b1714171912175b17221b1912174b1412178b1217151
> > 22a172a1b2317d91a172a17f71b1a1912177') ), str AS (
> > 
> >   SELECT string_agg(repeat(translate(substr(s, p, 1), '123456789ab',
> > 
> > '(/>)<+ o_|\'), ('x'||lpad(substr(s, p+1, 1), 8, '0'))::bit(32)::int),
> > '') line
> > 
> > FROM src, generate_series(1, 182, 2) p
> > 
> > )
> > SELECT substr(line, p, 21) slon FROM str, generate_series(1, 189, 21) p;
> 
> I wonder if --under some fonts especially-- that elephant's eyes look a
> little... aggressive?
> Perhaps the following is too cute, however:
> 
>   __  ___
>/)/  \/   \
>   ( / ___\)
>\(/ p)  ( g)   )
> \_  (_  )   \ ) _/
>   \  /\_/\)/
>\/ 
> _|  |
> \|_/

Some people have WAY too much time on their hands.






Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Jan de Visser
On Wed, May 11, 2016 at 10:03 PM, Lucas Possamai 
wrote:

>
>>>
>> Try redoing the query with a CTE, as below:
>>
>> WITH ja_jobs as
>>   (SELECT DISTINCT title
>>  FROM ja_jobs
>> WHERE clientid = 31239  AND time_job > 1457826264
>>   )
>> SELECT title
>>   FROM ja_jobs
>>  WHERE title ILIKE 'RYAN WER%'
>>  ORDER BY title
>>  LIMIT 10;
>>
>
> hmm.. still slow =(
>
>
and it's not hitting the index (I had to change the clientid because the
previous one was in cache):
>
> Limit  (cost=93790.08..93790.09 rows=1 width=218) (actual
>> time=284.293..284.308 rows=5 loops=1)
>>   Buffers: shared hit=42284
>>   CTE ja_jobs
>> ->  HashAggregate  (cost=93774.31..93779.16 rows=485 width=20)
>> (actual time=207.235..228.141 rows=16320 loops=1)
>>   Buffers: shared hit=42284
>>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=882.98..93697.86
>> rows=30578 width=20) (actual time=21.942..133.380 rows=48472 loops=1)
>> Recheck Cond: (("clientid" = 14635) AND ("time_job" >
>> 1436731799))
>> Buffers: shared hit=42284
>> ->  Bitmap Index Scan on "ix_jobs_client_times"
>>  (cost=0.00..875.34 rows=30578 width=0) (actual time=12.389..12.389
>> rows=48472 loops=1)
>>   Index Cond: (("clientid" = 14635) AND ("time_job" >
>> 1436731799))
>>   Buffers: shared hit=243
>>   ->  Sort  (cost=10.92..10.93 rows=1 width=218) (actual
>> time=284.289..284.293 rows=5 loops=1)
>> Sort Key: "ja_jobs"."title"
>> Sort Method: quicksort  Memory: 25kB
>> Buffers: shared hit=42284
>> ->  CTE Scan on "ja_jobs"  (cost=0.00..10.91 rows=1 width=218)
>> (actual time=236.248..284.263 rows=5 loops=1)
>>   Filter: (("title")::"text" ~~* '%To Electrical%'::"text")
>>   Rows Removed by Filter: 16315
>>   Buffers: shared hit=42284
>> Total runtime: 287.633 ms
>
>
>

I think a GIN index can't be used for sorting.
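
To illustrate (hypothetical index names; pg_trgm is a separate extension):
a trigram GIN index can serve the ILIKE filter itself, but its matches come
back in no particular order, so the ORDER BY title still needs a sort. Only
a plain btree could avoid that:

CREATE EXTENSION pg_trgm;
CREATE INDEX ix_jobs_title_trgm ON ja_jobs USING gin (title gin_trgm_ops);
CREATE INDEX ix_jobs_title ON ja_jobs (title);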


Re: [GENERAL] Vacuum full of parent without partitions possible?

2016-05-04 Thread Jan Keirse
On Tue, May 3, 2016 at 3:22 PM, Tom Lane  wrote:

> Jan Keirse  writes:
> > I have a table that used to contain all data.
> > Because it grew too big, I added a partition trigger a long time ago, and
> > since then all new data was added to small partitions. By now all data in
> > the original parent table has become obsolete and was deleted, however
> the
> > disk space cannot be reclaimed without a vacuum full. The problem is, a
> > vacuum full of only the parent table should be instantaneous since it
> > contains no rows, but because the vacuum full triggers a vacuum of all
> > partitions too,
>
> No, a VACUUM on a single table processes only that table.
>
> I'm inclined to think your actual problem is that VACUUM FULL wants
> an exclusive lock and can't get one because of other traffic on the
> table.  Plain VACUUM doesn't need an exclusive lock ... unless it's
> trying to truncate the relation, which in this case it presumably would
> be.  Maybe your conclusion that you needed a VACUUM FULL was based
> on observing that VACUUM didn't reduce disk consumption; but if the
> table is empty, that would only be because it couldn't get exclusive
> lock.
>
> I'd suggest waiting for a low-traffic time of day and then doing a
> plain VACUUM.  Or alternatively, if you're sure the table is empty
> and will stay that way, you could just cut to the chase and TRUNCATE
> it.  But none of these alternatives are going to reclaim any disk
> space without taking an exclusive lock on the table, because they
> simply cannot truncate the file while other queries are scanning it.
>

OK, thanks for clearing that up. It is indeed impossible to ever take an
exclusive lock on the table during normal operations (there are continuous
selects and inserts into the table, which are redirected to the partitions
by a before-insert trigger; the data is all machine output and the load is
constant 24x7).
I'll leave the table as is for the time being and do a vacuum the next time
there is a need for scheduled down time.
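
If the parent is guaranteed to stay empty, the TRUNCATE route Tom mentions
could also be restricted to it. A sketch with a hypothetical table name
(ONLY leaves the partitions untouched, and the exclusive lock is held only
for an instant once acquired):

TRUNCATE ONLY parent_table;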



[GENERAL] Vacuum full of parent without partitions possible?

2016-05-03 Thread Jan Keirse
Hello,

I have a table that used to contain all data.
Because it grew too big, I added a partition trigger a long time ago, and
since then all new data was added to small partitions. By now all data in
the original parent table has become obsolete and was deleted; however, the
disk space cannot be reclaimed without a vacuum full. The problem is, a
vacuum full of only the parent table should be instantaneous since it
contains no rows, but because the vacuum full triggers a vacuum of all
partitions too, it's not possible to do this while the application is
running (a vacuum full of all partitions easily takes over an hour during
which the table is locked.)
Is there some workaround for this?

Thanks,

Jan



Re: [GENERAL] Add relcreated (timestamp) column to pg_class catalog to record the time an object was created

2016-04-21 Thread Jan de Visser
On Thursday, April 21, 2016 12:55:00 PM EDT Melvin Davidson wrote:
> WHY am I being vilified for making a simple request? How is it that
> developers proceed with other enhancements, yet so much negative attention
> is being given to my request because of unjustified fear that something bad
> will happen?

The open source development model is "scratch your itch". Stuff gets developed 
because people have problems and solve them. Apparently nobody has a 
sufficiently large itch to both scratch it and submit the result to the 
project.

The main difference between a project like this and a commercial product is 
that here there are no product managers defining roadmaps and writing 
requirements, but people bring solutions to problems *they* face or find 
interesting. So, if this is a thing you really care about: do a git checkout 
and start hacking. "Patches welcome".

And you're not being vilified. Your communication style is a tad, um, 
abrasive, and sometimes hard to deal with.





Re: [GENERAL] $foo $bar is BAD

2016-04-15 Thread Jan de Visser
On Friday, April 15, 2016 8:13:56 PM EDT Melvin Davidson wrote:
> Thanks for the sympathetic feedback John. I understand people are
> reluctant to change. It was just my intent to enlighten others as to the
> true background behind it.
> So if it's not about to change, then I'll just have to cry $boo $hoo, $oye
> $vey. :)

Reading this:

  http://programmers.stackexchange.com/a/80609

it appears to me that the true origin is not what you claim it to be, but pre-
war MIT pranksters.

And besides: language evolves. Words change meaning over time. Nobody typing 
'foo = 2*bar' thinks about the WW2 acronym, and everybody reading it immediately 
understands it, again without referring to that acronym.

The horse is dead, Jim. Stop beating it.






Re: [GENERAL] Crypt change in 9.4.5

2016-03-19 Thread Jan de Visser
On Friday, March 18, 2016 1:18:01 PM EDT ando...@aule.net wrote:
> Hi,
> 
> After upgrading to PostgreSQL 9.4.6, our test system gave error messages
> like:
> 
> ERROR: invalid salt
> 
> The cause of these errors is statements like:
> 
> WHERE password = crypt('secret', 'secret')
> 
> After reverting to Postgres 9.4.4 the test system worked properly again.
> 
> This might be related to a security fix in 9.4.5:
> 
> ---
> Fix contrib/pgcrypto to detect and report too-short crypt() salts (Josh
> Kupershmidt)
> Certain invalid salt arguments crashed the server or disclosed a few bytes
> of server memory. We have not ruled out the viability of attacks that
> arrange for presence of confidential information in the disclosed bytes, but
> they seem unlikely. (CVE-2015-5288)
> ---
> 
> The "crypt" call is hardcoded in legacy code that hasn't been recompiled in
> years. Are there ways to keep the old code running against a newer Postgres
> version?

You could get the source of 9.4.6 from git, back out the commit for that fix, 
and compile.







Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-08 Thread Jan de Visser
On March 8, 2016 03:00:14 PM da...@andl.org wrote:
> At the moment I have two pressing problems.
> 
> One is the start-up phase: getting the Andl runtime stoked up, load its
> catalog, set up its execution environment (including its own type system),
> ready for business. That process in Postgres seems to be undocumented, but I
> think I have it sorted (barring memory lifetime issues down the track).
> 
> The other is type conversions: incoming and outgoing. That is undocumented
> too, and that's disappointing. Anyone writing functions or a language
> handler will really need this. I'm finding it hard to pick a good path right
> now.
> 
> The third would be queries, but that doesn't look too hard. SPI is quite
> well documented.
> 

OK, still not quite sure what your architecture is. Is it


(1)

  +---+
  | Client system |
  +---+
  |
  v
  +---+
  |  andl |
  +---+
  |
  v (SQL)
  +---+
  | pgsql |
  +---+

Or (2)

  +---+
  | Client system |
  +---+
  |
  v (SQL)
  +---++--+
  |  driver/pgsql | -> |   andl   |
  +---++--+
  

In case (1), you're writing a driver: you abstract out the actual datastore 
from your client program using andl. In case (2) you have a procedural 
language handler where your client program still consciously connects to a 
pgsql database, and within that database some/all data processing is delegated 
to andl.

The reason I'm asking is to set terminology. I've browsed some of your 
website, and I'm still not clear which of the two options you're after. It 
could even be both I think. So let's set some parameters.

Depending on where that goes, you should get pgsql-hackers involved. 





Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-07 Thread Jan de Visser
On March 8, 2016 11:35:00 AM da...@andl.org wrote:
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
> 
> this stuff you're loading from the database once, that's just data about
> your language plugin's configuration, or is it user data, or what? [dmb>]
> It's the catalog for Andl. It contains defined functions, types, persistent
> scalar (non table) data values and links to tables.
> 
> if its just a few global settings, you should consider using custom
> settings variables, rather than database tables.   for instance, pljava has
> a setting, pljava.libjvm_location='/usr/lib/jvm/java-1.8.0/lib/libjvm.so'
> or whatever which it uses to find the Java native calls interface
> library... [dmb>] Andl has something similar, but that problem is already
> solved.

You're being pretty oblique about what it is you're trying to achieve.

To go back to one of your earlier emails: the hardest problem in computing 
isn't cache invalidation. It is clearly explaining what the problem at hand 
is.




Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-07 Thread Jan de Visser
On March 8, 2016 12:18:08 AM da...@andl.org wrote:
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
> 
> 
> 
> Yes, I was aware of GD and SD. My question is about what facilities Postgres
> provides for implementing such a thing. Where is the proper place for the
> root of the SD/GD? What does an implementation use to determine that two
> calls belong to the same session?
> 
> the process ID is unique for each active session.   of course, the OS can
> recycle a PID when a process/connection terminates
> 
> [dmb>] Thanks for the idea, but I’m wary of using PID for that purpose.
> 
> [dmb>] In the Python implementation the GD appears to just be stored as a
> simple variable at file scope in the DLL. Would I be right in saying that
> the language handler DLL is loaded exactly once for each session (when the
> language is first used)? If so, then any unique identifier allocated in
> PG_init (such as a GUID or timestamp or counter) would seem to serve the
> purpose. I just wondered if there was something clever I hadn’t found out
> about yet.
> 

One thing that's probably key here is that pgsql isn't multi-threaded. 
Individual connections are handled by forked backends, which share a shared-
memory cache that's not accessible by SQL-land code (which includes language 
handlers). So I think your problem goes away once you realize that all the 
data you have is tied to a single connection anyway.

You cannot use multi-threaded code (which touches the database) in language 
handlers or other "plug-in" code.

Also, trying to outsmart the db engine's cache by building your own is usually 
an exercise in futility and often counter-productive. I speak from experience 
:-P





Re: [GENERAL] CoC [Final v2]

2016-01-26 Thread Jan Danielsson
On 24/01/16 18:30, Joshua D. Drake wrote:
[---]
> This is something that I brought up in protest because I believe that it
> is crucial to the growth of this community.

   Do you have any evidence to support this belief?  (Without referring
to an anonymous invisible mass, a single case or unverifiable anecdotal
evidence).

   Without any data/evidence either way I'd wager that the
implementation of a CoC will have exactly zero effect on developers
coming to or going from the project.  If gaining developers is your
motivator for pushing through a CoC, I for one believe it's a waste of
time and energy.

   I don't buy the idea that there's a huge cache of talent waiting in
the dark for open source projects to suddenly implement a CoC, at which
point they'll jump out and suddenly start contributing code.  Though
whatever anecdotal evidence I could produce to support that claim would
be as worthless as anyone else's, so:

   Surely considering the huge number of projects which have adopted
various forms of CoC's over the past months/years there are good numbers
to show if they have a positive effect on contributions?

   I'd be happy to be proven wrong, but I suspect you'll find zero
correlation between implementation of CoC's and number of contributions
and/or contributors.


   A wider question to the other participants in this discussion:  Is it
generally an accepted view that the growth of the community (in some
sense) is contingent on the implementation of a CoC?

   /Jan





Re: [GENERAL] Charging for PostgreSQL

2016-01-06 Thread Jan de Visser

[offtopic alert]

On 2016-01-06 12:46 PM, James Keener wrote:
How does one "start a new thread"? 
'New Message' in your favourite email client. 
'pgsql-general@postgresql.org' in the 'To' box.



I wasn't aware that changing the subject wouldn't be enough. I tried :/


Check the raw source of the message I replied to. There are 'In-Reply-To' 
and 'References' headers email clients use to thread messages.


GMail attempts to be smart by threading messages by subject. This is not 
only contrary to the spec, but potentially just as inconvenient, if you 
have messages with the same subject. I've had gmail thread messages from 
years apart because the subject was something like 'Hello'.
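
For reference, the headers look something like this (illustrative values); a 
spec-compliant client threads purely on these IDs, not on the subject:

Message-ID: <new-message@example.org>
In-Reply-To: <parent-message@example.org>
References: <thread-root@example.org> <parent-message@example.org>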



Jim


jan



On January 6, 2016 12:17:54 PM EST, "Stéphane Schildknecht" 
 wrote:


On 06/01/2016 16:54, James Keener wrote:

As Melvin mentioned, this belongs in a new thread. 



And as such, it would have been really kind to actually start a new one.

(...)


--
Sent from my Android device with K-9 Mail. Please excuse my brevity. 




Re: [GENERAL] Old source code needed

2015-11-27 Thread Jan de Visser

On 2015-11-27 9:44 AM, Adrian Klaver wrote:

You realize how old 9.0 is, right? And you understand how nonsensical
your question is? How can you guarantee that code compiles properly on a
compiler which is released years after you write the code?


To me nonsensical would be trying to compile newer code using an old 
compiler. What the OP did seemed to be in the realm of possibility as 
I would think backwards compatibility kicks in. 


Well, maybe. But there's no way to *guarantee* it will work. And 
apparently it doesn't.


It's not unheard of - I have some C code which I developed over the last 
year or so on gcc 4.9. When I first attempted to compile it on 5.2 it 
failed horribly. And there's nothing really unusual in what I'm doing - I 
didn't even use C11, just C99.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Old source code needed

2015-11-27 Thread Jan de Visser



On 2015-11-27 4:31 AM, NTPT wrote:
Thanx for help,  I grab the source code  that match old cluster fs 
backup.


However: Should it run fine compiled  with recent gcc 4.9.3 ?


You realize how old 9.0 is, right? And you understand how nonsensical 
your question is? How can you guarantee that code compiles properly on a 
compiler which is released years after you write the code?




while compiling with this gcc, I got a lot of strange errors like

ERROR:  could not identify an ordering operator for type name at 
character 3336

HINT:  Use an explicit ordering operator or modify the query.

with \dt or \list  or other  commands and select.


However, compiling it with gcc 3.4.6, everything works.

Is it intended (expected) behavior or a compiler bug? (Being on 
Gentoo, compiler bugs scare me a lot.)


thanx



jan


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Ubuntu installed postgresql password failure

2015-09-14 Thread Jan de Visser
On September 14, 2015 07:22:58 AM Martín Marqués 
wrote:
> I would recommend using psql's \password meta-
command instead of ALTER
> USER to change a password, as to avoid having the 
password stamped in
> the logs.

You learn something new every day :-)
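
For reference, the flow looks like this (illustrative session; psql hashes the 
password client-side before sending the ALTER ROLE, so nothing readable hits 
the server log):

postgres=# \password postgres
Enter new password: 
Enter it again: 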

jan


Re: [GENERAL] Ubuntu installed postgresql password failure

2015-09-13 Thread Jan de Visser
On September 13, 2015 03:57:51 PM Dale Seaburg wrote:
> I have reached my wit's end.  lol  I installed postgresql (9.3.9) and
> pgadmin3 with the Ubuntu Software Center.  When using pgadmin or psql, I
> get an error "password authentication failed for user "postgres" ".
> Ubuntu (14.04.3).
> 
> I created a postgres user BEFORE installing postgresql/pgadmin toolset.
> The password I used for the postgres user will not work with the
> installed postgresql tools.
> 
> What in the world am I doing wrong, and more importantly, how do I
> determine the password used for postgres during the postgresql install?
> AND, why should it be different from the postgres user I created.

Try

jan@bison:~$ sudo -u postgres -s
postgres@bison:~$ psql
psql (9.4.4)
Type "help" for help.

postgres=# ALTER USER postgres PASSWORD 'postgres';
ALTER ROLE
postgres=# \q
postgres@bison:~$ exit
jan@bison:~$ 

> 
> Any help will be greatly appreciated.
> 
> Dale



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL conf parameter setting

2015-08-18 Thread Jan Keirse
On Tue, Aug 18, 2015 at 1:00 PM, Jimit Amin  wrote:
> Can I know normal, aggressive, best parameter settings for PostgreSQL.conf
> file for particular hardware.
>
> Like Linux x86_64 , 8 GB Ram
> , Linux x86_64 , 126 GB Ram

There's no generic answer because it depends on what you're doing with
the database, but this may offer a starting point:
http://pgtune.leopard.in.ua/

-- 


 DISCLAIMER 

http://www.tvh.com/glob/en/email-disclaimer

"This message is delivered to all addressees subject to the conditions
set forth in the attached disclaimer, which is an integral part of this
message."


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transaction ID Wraparound Monitoring

2015-08-04 Thread Jan Keirse
On Tue, Aug 4, 2015 at 6:24 AM, William Dunn  wrote:
> Hello Jan,
>
> I think your calculation is slightly off because per the docs when
> PostgreSQL comes within 1 million of the age at which an actual wraparound
> occurs it will go into the safety shutdown mode. Thus the calculation should
> be ((2^32)-1)/2-100 rather than just ((2^32)-1)/2 as I think you are
> using.
>
> When I first started building out my group's PostgreSQL monitoring solution
> I too found the wording of transaction freeze to be a bit difficult to
> understand. For my team's internal documentation I have summarized it as
> follows, I hope it might be more clear:
>
> ...normal XIDs are compared using modulo-2^32 arithmetic, which means that
> ~(2^32-1)/2 transactions appear in the future and ~(2^32-1)/2 transactions
> appear in the past.
>
> This [Transaction ID freeze] behavior of autovacuum is primarily dependent
> on the settings autovacuum_freeze_table_age and autovacuum_freeze_max_age,
> which are set as database defaults but can also be specified on a per table
> basis (as storage parameters in CREATE TABLE or ALTER TABLE)
>
> When a table's oldest transaction reaches autovacuum_freeze_table_age, the
> next autovacuum that is performed on that table will be a vacuum freeze
>
> PostgreSQL implicitly caps autovacuum_freeze_table_age at
> 0.95*autovacuum_freeze_max_age.
>
> When a table reaches autovacuum_freeze_max_age PostgreSQL will force an
> autovacuum freeze on that table, even if the table would not otherwise be
> autovacuumed or autovacuum is disabled.
>
> PostgreSQL implicitly caps autovacuum_freeze_max_age at 2 billion
> (2000000000)
>
> The actual age at which a wraparound occurs is ((2^32)/2)-1. When a PostgreSQL
> database comes within 1 million of this age (2^32/2-1-1000000) the database
> will go into the safety shutdown mode" and no longer accept commands,
> including the vacuum commands, and your only recovery option is to stop the
> server and use a single-user backend (where shutdown mode is not enforced)
> to execute VACUUM. This should, obviously, be avoided at all costs.
>
> References:
>
> http://www.PostgreSQL.org/docs/current/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
> http://www.PostgreSQL.org/docs/current/static/runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE
> http://www.PostgreSQL.org/docs/current/static/runtime-config-autovacuum.html#GUC-AUTOVACUUM-FREEZE-MAX-AGE
>
>
> Based on the above explanation we consider the following to be the most
> correct check for how close you are to an actual wraparound freeze:
>
> CAST (age(relfrozenxid) AS real) / CAST(trunc(((2^32)/2)-1-1000000) AS real)
> AS perc_until_wraparound_server_freeze
>
>
> (Note that we do this at the table level rather than the database level like
> you did, though, so that we have the information we need to tune the
> settings for individual tables.)

Thanks for the correction regarding the 1,000,000 safety margin! I
chose to monitor only the total value to limit the number of
extra metrics in the monitoring database. In case the value increases
we'll execute the queries to find out which table(s) are causing the
problem interactively.



> However it is better to set autovacuum max freeze age well below that value
> and monitor that instead. Autovacuum should always do a vacuum freeze for a
> table that has exceeded max freeze age, and if you are monitoring for that
> you should avoid a wrap around freeze:
>
> CAST (age(relfrozenxid) AS real) / CAST ((least(autovacuum_freeze_max_age,
> 2000000000)) AS real) AS perc_until_freeze_max_age
>
>
> And ensure that value does not exceed 100%. Though it is important to note
> that max freeze age can be set on a per table basis, so to get the true
> autovacuum_freeze_max_age of a table (or the real max of the database) you
> would need to check the reloptions field of pg_class for that table and only
> if there is no value specified for '%autovacuum_freeze_table_age%' use
> current_setting('autovacuum_freeze_max_age')

I'll see to add this one to the monitoring too. Thanks for your clarifications!
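
For the archives, a rough per-table sketch combining both checks (illustrative 
only - it ignores per-table autovacuum_freeze_max_age reloptions, as noted 
above):

SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid)::real
           / (2^31 - 1 - 1000000)::real AS perc_until_wraparound_freeze,
       age(c.relfrozenxid)::real
           / least(current_setting('autovacuum_freeze_max_age')::real,
                   2000000000) AS perc_until_freeze_max_age
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY 2 DESC
LIMIT 10;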

-- 


 DISCLAIMER 

http://www.tvh.com/glob/en/email-disclaimer

"This message is delivered to all addressees subject to the conditions
set forth in the attached disclaimer, which is an integral part of this
message."


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transaction ID Wraparound Monitoring

2015-07-30 Thread Jan Keirse
On Thu, Jul 30, 2015 at 2:56 PM, Adrian Klaver
 wrote:
> On 07/30/2015 02:55 AM, Jan Keirse wrote:
>>
>> Hello,
>>
>> we have some very write heavy databases and I have our monitoring
>> system watch the transaction age of my databases to be alerted before
>> we get into problems in case autovacuum can't keep up to avoid
>> transaction ID wraparound.
>>
>> The query I am executing is this:
>> SELECT max(age(pg_database.datfrozenxid)) / 2147483648.0 * 100.0 AS
>> "Percentage of transaction ID's used" FROM pg_database;
>>
>> My belief was that if this reaches 100 the database will stop
>> accepting writes and one must vacuum. I have set alerts on 50 and 90,
>> the result is around 9 so my belief was autovacuum is working fine
>> for my workload.
>> I often see autovacuum kicking in to prevent XID wraparound, I
>> thought that was just to be on the safe side and vacuum well before
>> it's too late.
>>
>> However today I saw this post:
>>
>> http://blog.getsentry.com/2015/07/23/transaction-id-wraparound-in-postgres.html
>>
>> The following line has me worried:
>> ... that database is going to reach a situation where the XID counter
>> has reached its maximum value. The absolute peak is something around 2
>> billion, but it can be far lower than that in some situations...
>>
>> Could someone shed some light on this? Is my query insufficient? Can
>> the transaction wraparound freeze problem indeed occur earlier? And
>> if so, could someone suggest a better query to monitor?
>
>
> I would look at:
>
> http://www.postgresql.org/docs/9.4/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
>
> Which includes some query examples.

Yes, I have seen that documentation and it is because of it that I
believed that my queries were ok, but now I think I may be
misinterpreting or misunderstanding the documentation and have to look
at more information, like autovacuum_multixact_freeze_max_age?

-- 


 DISCLAIMER 

http://www.tvh.com/glob/en/email-disclaimer

"This message is delivered to all addressees subject to the conditions
set forth in the attached disclaimer, which is an integral part of this
message."


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Transaction ID Wraparound Monitoring

2015-07-30 Thread Jan Keirse
Hello,

we have some very write heavy databases and I have our monitoring
system watch the transaction age of my databases to be alerted before
we get into problems in case autovacuum can't keep up to avoid
transaction ID wraparound.

The query I am executing is this:
SELECT max(age(pg_database.datfrozenxid)) / 2147483648.0 * 100.0 AS
"Percentage of transaction ID's used" FROM pg_database;

My belief was that if this reaches 100 the database will stop
accepting writes and one must vacuum. I have set alerts on 50 and 90,
the result is around 9 so my belief was autovacuum is working fine
for my workload.
I often see autovacuum kicking in to prevent XID wraparound, I
thought that was just to be on the safe side and vacuum well before
it's too late.

However today I saw this post:
http://blog.getsentry.com/2015/07/23/transaction-id-wraparound-in-postgres.html

The following line has me worried:
... that database is going to reach a situation where the XID counter
has reached its maximum value. The absolute peak is something around 2
billion, but it can be far lower than that in some situations...

Could someone shed some light on this? Is my query insufficient? Can
the transaction wraparound freeze problem indeed occur earlier? And
if so, could someone suggest a better query to monitor?

Kind Regards,

Jan Keirse

-- 


 DISCLAIMER 

http://www.tvh.com/glob/en/email-disclaimer

"This message is delivered to all addressees subject to the conditions
set forth in the attached disclaimer, which is an integral part of this
message."


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgbench tps drop from 5000 to 37 going from localhost to a server 13ms away

2015-07-24 Thread Jan Lentfer


>> On 24.07.2015 at 18:59, Chris Withers wrote:
>> 
>> Hi all,
>> 
>> I've been doing some lightweight load testing with
>> “pgbench -c8 -j8 -T10”
>> 
>> When run locally on the postgres server I've testing, this gives around 
>> 5000tps
>> 
>> When I do it from a server that has a 13ms ping latency, it drops to 37tps.
>> 
>> This is using the default pgbench script, is it to be expected?
>> If so, why?
>> 


> On 24.07.2015 at 20:06, Jan Lentfer wrote:
> 
> That seems to be a large drop. On the other hand, 13 ms is also a very 
> large network latency. On a LAN you're usually in the sub-ms area. So going from 
> e.g. 0.2 ms to 13 ms is a 65-fold increase in latency. What is the network topology like?
> 

Sorry for top posting my first response. Always happens when I am on the iPad.
I just checked on my home setup. Ping latency on a GBit crossover connection is 
around 0.3 ms, while pinging localhost is around 0.05 ms. You are at 13 ms. So 
that is a 260-fold increase in latency, which is in about the same range as the 
slowdown you see with pgbench. Of course with pgbench the actual payload comes 
into account on top.
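
Back-of-the-envelope, assuming one round trip per statement: the default 
pgbench script runs seven commands per transaction (BEGIN, three UPDATEs, a 
SELECT, an INSERT, END), so 7 x 13 ms = ~91 ms per transaction, i.e. roughly 
11 tps per client, or ~88 tps across your 8 clients - the same order of 
magnitude as the 37 tps you measured.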

Jan

Re: [GENERAL] pgbench tps drop from 5000 to 37 going from localhost to a server 13ms away

2015-07-24 Thread Jan Lentfer
That seems to be a large drop. On the other hand, 13 ms is also a very 
large network latency. On a LAN you're usually in the sub-ms area. So going from 
e.g. 0.2 ms to 13 ms is a 65-fold increase in latency. What is the network topology like?

Jan

Sent from my iPad

> On 24.07.2015 at 18:59, Chris Withers wrote:
> 
> Hi all,
> 
> I've been doing some lightweight load testing with
> “pgbench -c8 -j8 -T10”
> 
> When run locally on the postgres server I've testing, this gives around 
> 5000tps
> 
> When I do it from a server that has a 13ms ping latency, it drops to 37tps.
> 
> This is using the default pgbench script, is it to be expected?
> If so, why?
> 
> cheers,
> 
> Chris
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-06 Thread Jan de Visser
On July 6, 2015 06:43:53 AM c.bu...@posteo.jp wrote:
> On 2015-07-05 15:13 Jan de Visser  wrote:
> > You could set up a whole new server with a different $PGDATA on a
> > different port.
> 
> I (and the user) don't want to setup anything - that is the point.

Well, you don't have to set up anything. You do an initdb in a different 
directory, which will write a .conf file there; you then massage that file to 
include a different port. You'll use the same binaries as the standard pgsql 
install, but in a different environment.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-05 Thread Jan de Visser
On July 5, 2015 08:58:17 PM c.bu...@posteo.jp wrote:
> On 2015-07-05 15:11 Charles Clavadetscher 
> 
> wrote:
> > I am not really an expert, but from your description I guess that you
> > assume an existing PostgreSQL installation on your customers' server.
> 
> The application is a simple open-source one using a local PostgreSQL
> database. The customer is just any user out there.
> I guess the PostgreSQL instance itself is in most cases a fresh/virgin
> install without any configuration done by the user.
> 
> > I would not like to install applications that change settings in
> > pg_hba.conf
> 
> I know that this is a bad solution. It is just a workaround for my
> development environment. I just explained those modifications here to
> show how bad my workaround is and how little I know about PostgreSQL.
> 
> I read unspecific things about a "configuration file" for the
> application that make it possible to get access to PostgreSQL without
> having root-access to it. But I don't know details about it.
> What could this be?
> 
> Is it possible for the user to install a PostgreSQL-using application
> (including a fresh install and default-configured PostgreSQL) without
> modifying the PostgreSQL-configuration?

You could set up a whole new server with a different $PGDATA on a different 
port.

What I'm wondering though is what made you decide to use pgsql for your 
project? It seems to me that something like sqlite would be better suited for 
your requirements.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Method

2015-07-03 Thread Jan Lentfer

On 2015-07-03 13:00, howardn...@selestial.com wrote:

On 03/07/2015 11:39, Guillaume Lelarge wrote:


> In that case is there any recommendation for how often to make 
base backups in relation to the size of the cluster and the size of 
the WAL?

>

Nope, not really. That depends on a lot of things. Our customers 
usually do one per day.



Excuse my ignorance... Is the base backup, in general, faster than 
pg_dump?


It is a different approach. With the base backup you are actually 
backing up files from the filesystem ($PGDATA directory), whereas with 
pg_dump you're saving the SQL commands to reload and rebuild the database.
"Usually" a file based backup will be faster, both on backup and 
restore, but it is - as mentioned - a different approach and it might 
also not serve all your purposes.


That is why I do weekly base backups (plus WAL Archiving) and use 
pg_dump in a parallel way to do "logical" backups every night.


Regards,

Jan


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Ubuntu 14.04 LTS install problem

2015-07-01 Thread Jan de Visser
On July 1, 2015 02:51:37 PM Urs G. Berner wrote:
> On 01.07.15 at 14:08, Jan de Visser wrote:
> > On July 1, 2015 07:39:59 AM Urs Berner wrote:
> .
> .
> 
> > I would uninstall the stock (Ubuntu) version before installing the pgdg
> > version.
> 
> There is no postgresql at all installed - nor can I install the stock
> nor the pgdg version

Strange. The machine I'm typing this on (which has Ubuntu 15.04) had no pgsql 
installed:

jan@bison:~$ uname -a
Linux bison 3.19.0-22-generic #22-Ubuntu SMP Tue Jun 16 17:15:15 UTC 2015 
x86_64 x86_64 x86_64 GNU/Linux
jan@bison:~$ sudo apt install postgresql
[sudo] password for jan: 
Reading package lists... Done
Building dependency tree   
Reading state information... Done
The following extra packages will be installed:
  postgresql-9.4 postgresql-client-9.4 postgresql-client-common postgresql-
common
Suggested packages:
  postgresql-doc oidentd ident-server locales-all postgresql-doc-9.4
The following NEW packages will be installed:
  postgresql postgresql-9.4 postgresql-client-9.4 postgresql-client-common 
postgresql-common
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 3,859 kB of archives.
After this operation, 15.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://ca.archive.ubuntu.com/ubuntu/ vivid/main postgresql-client-common 
all 166bzr2 [26.9 kB]
Get:2 http://ca.archive.ubuntu.com/ubuntu/ vivid-updates/main postgresql-
client-9.4 amd64 9.4.4-0ubuntu0.15.04 
etc etc.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Ubuntu 14.04 LTS install problem

2015-07-01 Thread Jan de Visser
On July 1, 2015 07:39:59 AM Urs Berner wrote:
> I installed Ubuntu 14.04 LTS
> apg-get update ... apt-get upgrade ...
> 
> and looked at
> www.postgresql.org/download/linux/ubuntu
> then added apt repository /etc/apt/sources.list.d/pgdg.list
> 
> When I try
> 
> apt-get install postgresql-9.4
> 
> error:
>  depends on postgresql-common (>= 142~) what should not get installed
> ... you have defect packages ...
> 
> any ideas?
> 
> Urs

I would uninstall the stock (Ubuntu) version before installing the pgdg 
version.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] use null or 0 in foreign key column, to mean "no value"?

2015-06-26 Thread Jan de Visser
On June 26, 2015 11:59:05 PM Robert Nikander wrote:
> Hi,
> 
> (Maybe my subject line should be: `is not distinct from` and indexes.)
> 
> In Postgres 9.4, I’ve got a table of ‘items’ that references a table
> ‘colors’.  Not all items have colors, so I created a nullable column in
> items like:
> 
>   color_id bigint references colors
> 
> There is also an index on color_id:
> 
>   create index on items (color_id);
> 
> I thought this was the right way to do it, but now I’m not so sure... In
> application code, prepared statements want to say: `select * from items
> where color_id = ?` and that `?` might be a int or null, so that doesn’t
> work.  I used `is not distinct from` instead of =, which has the right
> meaning, but now I notice it doesn’t use the index for queries that replace
> `=` with `is not distinct from`, and queries run much slower.  Using
> `explain` confirms: it’s doing sequential scans where `=` was using index.

I test for NULL in my application code and emit '... WHERE foo = ?' if the 
value is not NULL and '... WHERE foo IS NULL' otherwise. ISTR that that 
actually uses indexes.
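
In other words, something like this (sketch; '?'/$1 being the driver's 
placeholder):

SELECT * FROM items WHERE color_id = $1;    -- parameter is non-NULL
SELECT * FROM items WHERE color_id IS NULL; -- parameter is NULL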

> 
> So… is this bad DB design to use null to mean that an item has no color?
> Should I instead put a special row in `colors`, maybe with id = 0, to
> represent the “no color” value?  Or is there some way to make an index work
> with nulls and `is not distinct from`?
> 
> thank you,
> Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_start_backup does not actually allow for consistent, file-level backup

2015-06-08 Thread Jan Lentfer

On 2015-06-08 14:45, otheus uibk wrote:

The manual and in this mailing list, the claim is made that
consistent, file-level backups may be made by bracketing the 
file-copy

operation with the postgresql pg_start_backup and pg_stop_backup
operations.  Many people including myself have found that in some
circumstances, using "tar" to copy these files will result in an 
error

if one of the data files changes during the tar operation. The
responses to those queries on this mailing list are unsatisfactory
("everything is fine, trust us").


[...]


I decided to test this claim that these messages are "perfectly
harmless" and "can be ignored":

1. I executed pg_start_backup() on server
2. Ran md5sum recursively through PGs data directories
3. waited a split second
4. Ran md5sum recursively through PGs data directories as in step 2
5. Compared output from #2 and #4

As you can see below, there were non-zero changes made to these
files. 

< a1a571bfd1e4a98b20245edbdfce6d9a  /var/lib/pgsql/data/base/41514/809275
---
> 21de5b864019c96c55e81a38fa1c9ccf  /var/lib/pgsql/data/base/41514/809275
1783c1783
< 8eb4a578ecb56667e1698174f89c462c  /var/lib/pgsql/data/base/41514/809280
---
> b4c7b4ef30dda9543181465f53a85d72  /var/lib/pgsql/data/base/41514/809280

Such changes occurred EVEN WHEN TAR DID NOT WARN of changed files.
Further, when step 3 involved an actual backup, involving minutes, 
not

milliseconds, dozens of differences to files in data/base/... are
reported. To be clear, I excluded from consideration all files in
pg_xlog, pg_clog, pg_subtrans, pg_stat_tmp.

If these files are changing during the pg_start_backup() and
pg_stop_backup, then exactly what is their purpose? Might they be
changing during the tar, as tar thinks? How may an operator be 
assured

the snapshot is consistent (unless one stops the databases)?  Will
the redo logs restore the files to a consistent state, no matter when
these files are changed? I find it hard to believe that would be the
case.

This test was performed using Postgresql 9.1.8. A scan of the
CHANGELOG since then indicates that if this is a bug, it has not been
reported as fixed.



Still everything is fine here. You need to understand that in between 
pg_start_ and pg_stop_backup Postgres continues to operate as usual - 
so files in the $PGDATA directory WILL change. That's why it is necessary to 
also keep all the WAL segments that were created between _start and 
_stop to actually recover to a consistent state. When you recover from a 
full (file-based) backup the WAL files will be applied, too (that is why 
you need a recovery.conf and a restore_command).


You should possibly re-read 
http://www.postgresql.org/docs/9.4/static/continuous-archiving.html#BACKUP-PITR-RECOVERY 
especially 24.3.3 and 24.3.4.
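
In SQL terms the bracket is just this (a sketch against the 9.1-era API):

SELECT pg_start_backup('weekly', true);  -- true = checkpoint immediately
-- file-level copy of $PGDATA happens here; changed files are expected
SELECT pg_stop_backup();  -- completes the backup and waits for WAL archiving

plus, on restore, a recovery.conf whose restore_command fetches the archived 
WAL.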


hth

Jan





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database designpattern - product feature

2015-06-03 Thread Jan de Visser
On June 3, 2015 02:04:28 PM Roxanne Reid-Bennett wrote:
> I think you should evaluate your unease with having to update the database
> on release (potentially many times) carefully for what it is and why you
> have it.  [I'm not saying it is invalid - just know why you have it] 
> Because no matter how well you design your system - databases evolve. 
> Manage that. 

Having been guilty of designing an EAV system before, I think I know his 
hesitation is due to the fact that some databases (*cough* Oracle *cough*) 
don't allow DDL in transactions (or at least back when I worked with Oracle it 
didn't), making dynamic table creation a bit of a problem, especially when you 
run in a container which gives you little flexibility in your tx handling 
(i.e. you get it all the time, or never). This used to be a problem in many 
iterations of J2EE containers. Also, lots of DBAs get skittish when they hear 
about applications doing DDL. And again, many of the technological roadblocks 
are fixed by now, but the hesitation remains.

Combine this with the fact that you want users to be able to create new 
products, which should be built up out of existing and/or newly defined 
attributes, you quickly end up with something EAV like. Because you don't want 
your product management people coming crying to your DBAs to have a new table 
for a new product defined. You want the product management people to point-
and-click their way through a nice GUI.

So this is what lead me to that EAV design for this exact problem: we started 
off with Oracle as the database and a temperamental J2EE container, found out 
we couldn't do DDL (or at least DDL was hard), and by the time we were on less 
brain dead containers and databases the "damage" was done and there was no 
going back.

But in my defense I will say that mine was one of the prettiest EAV systems 
ever built. In my opinion at least :-)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem when temp_tablespace get full?

2015-06-03 Thread Jan Lentfer

On 2015-06-03 19:00, Daniel Begin wrote:

Sorry, my question might not have been clear…

I set myself the temp_tablespace to that location but did not expect
the drive could get full;

Multiple factors may have caused the drive to turn off (not
necessarily postgresql);

So, if that temp_tablespace gets full, how will postgresql
react to / manage the situation?


Queries running out of space in pgsql_tmp will just cancel (and 
rollback).
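
If you are on 9.2 or later you can also cap this per session before the disk 
fills up (a sketch; superuser-only setting):

SET temp_file_limit = '20GB';  -- the query errors out instead of filling the drive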


Jan


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] advocating LTS release and feature-train release cycles

2015-06-02 Thread Jan de Visser
On June 2, 2015 03:16:53 PM Zenaan Harkness wrote:
> On 6/2/15, Jan de Visser  wrote:
> > On June 1, 2015 11:11:37 PM Arthur Silva wrote:
> >> In my opinion a twice a year schedule would be good.
> >> The LTS would be every 2 or 4 releases. Keeping 2 LTS versions supported
> >> at
> >> all moments.
> >> 
> >> Maybe this should be reposted to the hackers list?
> > 
> > Pretty sure this would be shot down pretty quick. At this point it seems
> > more
> > likely to me that the time between releases will be longer rather than
> > shorter.
> 
> Really, that sounds like an excellent way to test such an alternative
> - if pg development went to what every other major libre project does,
> we would not have a proper comparison of the outcome for the
> alternative (lengthening the release cycle, rather than shortening).
> 
> I know how I think it'll pan out - but personal opions matter little
> here, only what the dev's choose.
> 
> Whatever the outcome, this will be a great experiment in the long run,
> providing a data point we would be quite unlikely to have otherwise!

I was overly short. What I should have done is direct you to pgsql-hackers, 
where release schedules have been extensively discussed recently. Reading 
those threads will give you an idea about what the thinking process of the 
people responsible for releasing pgsql is. And whether or not their thinking 
lines up with other projects is not really relevant in my opinion - all 
projects are different, not in the least because the people running them are 
different.


Re: [GENERAL] advocating LTS release and feature-train release cycles

2015-06-01 Thread Jan de Visser
On June 1, 2015 11:11:37 PM Arthur Silva wrote:
> In my opinion a twice a year schedule would be good.
> The LTS would be every 2 or 4 releases. Keeping 2 LTS versions supported at
> all moments.
> 
> Maybe this should be reposted to the hackers list?

Pretty sure this would be shot down pretty quick. At this point it seems more 
likely to me that the time between releases will be longer rather than 
shorter. 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Jan Lentfer


On 28 May 2015 18:25:42 CEST, Ravi Krishna wrote:
>> Have you set up constraints on the partitions? The planner needs to
>know
>> what is in the child tables so it can avoid scanning them.
>
>Yes. each child table is defined as follows
>
>CREATE TABLE TSTESTING.ACCOUNT_PART1
>
> ( CHECK (ACCOUNT_ROW_INST BETWEEN 1001 and 271660))
>
> INHERITS (TSTESTING.ACCOUNT);
>
>ALTER TABLE TSTESTING.ACCOUNT_PART1 ADD CONSTRAINT ACCOUNT_PART1_PKEY
>PRIMARY KEY (ACCOUNT_ROW_INST);
>
>Perhaps I was not clear. The planner is excluding partitions which can
>not contain the rows looked up in the WHERE clause. However it is
>still scanning the parent table.
>
>Aggregate (cost=8.45..8.46 rows=1 width=0)
>-> Append (cost=0.00..8.44 rows=2 width=0)
>-> Seq Scan on account (cost=0.00..0.00 rows=1 width=0)
>Filter: (account_row_inst = 101)
>-> Index Only Scan using account_part1_pkey on account_part1
>(cost=0.42..8.44 rows=1 width=0)
>Index Cond: (account_row_inst = 101)
>(6 rows)

You can have a look at pg_partman. It makes setting up partitioning quite easy 
and provides a tool to easily move existing data from parent to child tables.
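
For comparison, the hand-rolled redirect trigger described above typically 
looks like this (a sketch covering just the one partition shown; the real 
trigger would branch over all of them):

CREATE FUNCTION tstesting.account_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.account_row_inst BETWEEN 1001 AND 271660 THEN
        INSERT INTO tstesting.account_part1 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for account_row_inst %',
            NEW.account_row_inst;
    END IF;
    RETURN NULL;  -- keep the row out of the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_insert_trg
    BEFORE INSERT ON tstesting.account
    FOR EACH ROW EXECUTE PROCEDURE tstesting.account_insert();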

Jan

Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Jan Lentfer


On 28 May 2015 17:15:22 CEST, Ravi Krishna wrote:
>I am testing partitioning of a large table. I am INHERITING child
>tables.
>It is using a range
>partitioning based on a sequence col, which also acts as the primary
>key. For inserts I am using a trigger which will redirect insert to
>the right table based on the value of the primary key.
>
>Based on my testing, I see that the insert speed is less than 10%
>different than a non partitioned table. I am using  SET
>constraint_exclusion = on and I checked that via ANALYZE that the
>planner does not consider non qualifying child tables.
>
>yet, selects and updates based on the primary key show anywhere from
>40 to 200% slowdown compared to the non-partitioned table. One thing I notice
>is that, even with partition pruning, the planner scans the base table
>and the table matching the condition. Is that the additional overhead?
>
>I am attaching below the output of analyze.
>
>===
>On a non partitioned table
>
>explain select count(*) from tstesting.account where account_row_inst =
>101 ;
>Aggregate (cost=8.16..8.17 rows=1 width=0)
>-> Index Only Scan using account_pkey on account (cost=0.14..8.16
>rows=1 width=0)
>Index Cond: (account_row_inst = 101)
>(3 rows)
>
>
>With partition pruning:
>
>Aggregate (cost=8.45..8.46 rows=1 width=0)
>-> Append (cost=0.00..8.44 rows=2 width=0)
>-> Seq Scan on account (cost=0.00..0.00 rows=1 width=0)
>Filter: (account_row_inst = 101)
>-> Index Only Scan using account_part1_pkey on account_part1
>(cost=0.42..8.44 rows=1 width=0)
>Index Cond: (account_row_inst = 101)
>(6 rows)
>
>On a partitioned table, with no partition pruning.
>
>explain analyze select count(*) from tstesting.account where
>account_row_inst = 101 ;
>Aggregate (cost=29.77..29.78 rows=1 width=0) (actual time=0.032..0.032
>rows=1 loops=1)
>-> Append (cost=0.00..29.76 rows=5 width=0) (actual time=0.029..0.029
>rows=0 loops=1)
>-> Seq Scan on account (cost=0.00..0.00 rows=1 width=0) (actual
>time=0.000..0.000 rows=0 loops=1)
>Filter: (account_row_inst = 101)
>-> Index Only Scan using account_part1_pkey on account_part1
>(cost=0.42..4.44 rows=1 width=0) (actual time=0.008..0.008 rows=0
>loops=1)
>Index Cond: (account_row_inst = 101)
>Heap Fetches: 0
>-> Index Only Scan using account_part2_pkey on account_part2
>(cost=0.42..8.44 rows=1 width=0) (actual time=0.007..0.007 rows=0
>loops=1)
>Index Cond: (account_row_inst = 101)
>Heap Fetches: 0
>-> Index Only Scan using account_part3_pkey on account_part3
>(cost=0.42..8.44 rows=1 width=0) (actual time=0.007..0.007 rows=0
>loops=1)
>Index Cond: (account_row_inst = 101)
>Heap Fetches: 0
>-> Index Only Scan using account_part4_pkey on account_part4
>(cost=0.42..8.44 rows=1 width=0) (actual time=0.006..0.006 rows=0
>loops=1)
>Index Cond: (account_row_inst = 101)
>Heap Fetches: 0
>Planning time: 0.635 ms
>Execution time: 0.137 ms
>(18 rows)
>
>
>-- 
>Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>To make changes to your subscription:
>http://www.postgresql.org/mailpref/pgsql-general

Have you set up constraints on the partitions? The planner needs to know what 
is in the child tables so it can avoid scanning them.

Jan


This message was sent from my Android mobile phone with K-9 Mail.

Re: [GENERAL] Replacing uuid-ossp with uuid-freebsd

2015-05-25 Thread Jan de Visser
On May 25, 2015 04:17:32 PM Piotr Gasidło wrote:
> test_uuid=# drop extension "uuid-ossp";
> ERROR:  cannot drop extension uuid-ossp because other 
objects depend on it
> DETAIL:  default for table test column id depends on function
> uuid_generate_v4() HINT:  Use DROP ... CASCADE to drop the 
dependent
> objects too.

Wouldn't a simple ALTER TABLE to change/drop the default of the 
id column do?
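
Something along these lines (a sketch, assuming the replacement extension 
exposes the same uuid_generate_v4() name):

ALTER TABLE test ALTER COLUMN id DROP DEFAULT;
DROP EXTENSION "uuid-ossp";
CREATE EXTENSION "uuid-freebsd";
ALTER TABLE test ALTER COLUMN id SET DEFAULT uuid_generate_v4();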


Re: [GENERAL] PG and undo logging

2015-05-23 Thread Jan de Visser
On May 23, 2015 01:48:11 PM David G. Johnston wrote:
> On Sat, May 23, 2015 at 1:34 PM, Ravi Krishna 
> 
> wrote:
> > Is it true that PG does not log undo information, only redo. If true,
> > then how does it bring a database back to consistent state during
> > crash recovery. Just curious.
> 
> ​What does "undo" mean?
> 
> David J.​

Methinks rolling back the changes that transactions which got interrupted by 
the crash scribbled onto the data file. It's an Oracleism where not-consolidated 
data is kept in undo- and redo datafiles.

While I roughly understand Postgres' MVCC I don't feel qualified to answer 
Ravi's question :-)


Re: [GENERAL] date with month and year

2015-05-21 Thread Jan de Visser
On May 21, 2015 11:56:52 AM Steve Crawford wrote:
> The article does also display a couple attitudes that I feel are especially
> rampant in the web-development community. The first is that web developers
> shouldn't become educated about the capabilities of a database but rather
> use the database as a dumb data-store and redo everything themselves (often
> this includes an utter failure to use the data-integrity capabilities of
> the database).

Having been at the receiving end of web developer rants many times, the reason 
more often than not is that the database does the checking after the fact, i.e. 
after the user has spent the time providing the data. Web developers need to know 
what's allowed when they throw up the page. And frameworks offer little or no 
help in retrieving these validation rules. So the web developer is almost 
forced to roll his own.




Re: [GENERAL] Allowing postgresql to accept 0xff syntax for data types that it makes sense for?

2015-05-21 Thread Jan de Visser
On May 21, 2015 06:04:37 PM Brown, Joseph E. 
wrote:
> Unsubscribe pgsql-general

This doesn't work. See the footer of the posts to the 
mailing list.




Re: [GENERAL] plpgsql functions organisation

2015-05-04 Thread Jan de Visser
On May 4, 2015 02:32:14 PM Yves Dorfsman wrote:
> > As for performance concerns, in 99% of cases code maintainability is going
> > to be way more important than performance microoptimization. If you're
> > *that* concerned about performance than plpgsql probably isn't the right
> > answer anyway.
> Isn't one of the advantage of running on the server to avoid data round
> trip?
> 
> What would you recommend for better performance?

You can run python or perl on the server. That's what Jim meant with 'If 
you're *that* concerned about performance than plpgsql probably isn't the 
right answer anyway.'

Additionally: Many moons ago I did extensive and aggressive performance 
analysis on a system that did many recursive queries. This was before CTEs 
(WITH statements) and we settled on recursive plpgsql functions. The queries 
in the functions were trivial, but nevertheless the bottleneck was in the 
query and data processing, and never in the surrounding infrastructure.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Working with Array of Composite Type

2015-03-28 Thread Jan de Visser
On March 28, 2015 06:18:49 PM Alex Magnum wrote:
> Hello,
> I am struggling with finding the right way to deal with arrays of composite
> types. Bellow is an example of the general setup where I defined an image
> type to describe the image properties. A user can have mulitple images
> stored.

The canonical answer is that in almost all cases where you think you want an 
array of composites, you *really* want a table join:

i.e. turn your image *type* into an image *table* with the user_id as a 
foreign key.

CREATE TABLE users (
  user_id    serial PRIMARY KEY
);

CREATE TABLE image (
  id         smallint,
  user_id    int references users (user_id),
  caption    text,
  is_primary boolean,
  is_private boolean
);
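
Querying then becomes a plain join, e.g. to fetch each user's primary image 
(sketch):

SELECT u.user_id, i.caption
FROM users u
JOIN image i ON i.user_id = u.user_id
WHERE i.is_primary;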




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Building JSON objects

2015-03-27 Thread Jan de Visser
On March 27, 2015 11:38:42 AM David G. Johnston wrote:
> On Fri, Mar 27, 2015 at 11:31 AM, Jan de Visser  wrote:
> > On March 27, 2015 01:12:52 PM Eli Murray wrote:
> > > ERROR:  syntax error at or near "json_build_object"
> > > LINE 1: insert into json(data) json_build_object(SELECT DISTINCT dep...
> > 
> > You may want to review the syntax of the INSERT command, i.e. this doesn't
> > have anything to do with JSON:
> > 
> > INSERT INTO foo ( bar, baz, quux, froz ) VALUES (NULL, 12, 'box', now())
> 
> ​or..
> 
> INSERT INTO json(data) SELECT json_build_object(SELECT DISTINCT dep...)
> 
> ​David J.

Serves me right for bringing the snark without properly reading the OP :-)



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Building JSON objects

2015-03-27 Thread Jan de Visser
On March 27, 2015 01:12:52 PM Eli Murray wrote:
> ERROR:  syntax error at or near "json_build_object"
> LINE 1: insert into json(data) json_build_object(SELECT DISTINCT dep...

You may want to review the syntax of the INSERT command, i.e. this doesn't 
have anything to do with JSON:

INSERT INTO foo ( bar, baz, quux, froz ) VALUES (NULL, 12, 'box', now())
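
Also note that json_build_object() takes alternating key/value arguments, so 
the combined form would be something like this (hypothetical table and column 
names):

INSERT INTO json (data)
SELECT json_build_object('department', department)
FROM (SELECT DISTINCT department FROM employees) d;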


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum query

2015-03-26 Thread Jan de Visser
On March 25, 2015 09:31:24 PM David G. Johnston wrote:
> On Wed, Mar 25, 2015 at 8:58 PM, Mitu Verma  wrote:
> >  Correcting the subject
> 
> ​And this is why it is considered good form to do "compose new message"
> instead of replying to an existing one.  Injecting your new topic into an
> existing unrelated mail thread is mildly annoying.

Wildly off-topic, but I'm blaming Google. Their thread detection logic is so 
good that people actually don't know anymore how it was supposed to work, back 
in the bad old days when threads were managed by In-Reply-To headers.

> 
> David J.
> ​



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL-related legal question

2015-03-11 Thread Jan de Visser
On March 12, 2015 06:43:40 AM Gavin Flower wrote:
> Bill cannot comment, but it might be along the lines of assigning all 
> intellectual property rights, or something of that ilk. In that case, it 
> might give the company ownership of stuff he may have contributed (or 
> intends to contribute) to PostgreSQL in some way – which could lead to 
> legal complications affecting PostgreSQL adversely, which would be 
> expensive and an unnecessary distraction.

I used to work for a company that did exactly that - you had to sign a 
contract that claimed copyright of all your work, even work done outside of 
work hours, to the company. They did however tell you beforehand that if you 
were an established contributor to an open-source project they could make 
exceptions for that, but you had to go through legal.

But the upshot was that if you wrote an iPhone app in 15 minutes, the company 
would own that, technically.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance slowing down when doing same UPDATE many times

2015-03-09 Thread Jan Strube
Hi,

 
does no one have an idea?

It may be a rare case to run the same UPDATE a thousand times. But I'm really 
interested in why this does not happen when doing DIFFERENT updates. And, of 
course, in whether something could be done on the database side to prevent this 
behavior in case some application developer makes the same “mistake” again.

 
Thanks

Jan

 
 
From: Jan Strube 
Sent: Tuesday, February 10, 2015 12:03 PM
To: 'pgsql-general@postgresql.org'
Subject: Performance slowing down when doing same UPDATE many times

 
Hi,

 
we recently found a bug in one of our applications which was doing exactly the 
same UPDATE operation a few thousand times inside a transaction. This caused 
the UPDATEs to become slower and slower from some milliseconds to some seconds. 
We already fixed the application but I am wondering if this might be a 
PostgreSQL bug, too.

 
Here is a simple test case that performs and benchmarks 100,000 UPDATEs 
(benchmarking only every 10,000th to reduce output):

 
BEGIN;

CREATE TEMP TABLE test (id integer PRIMARY KEY, flag boolean DEFAULT false);

INSERT INTO test (id) SELECT generate_series(1, 100000);

 
DO $$

DECLARE

  s timestamp;

  e timestamp;

BEGIN

  FOR i IN 1..100000 LOOP

    SELECT clock_timestamp() INTO s;

    UPDATE test SET flag = true WHERE id = 12345;

    SELECT clock_timestamp() INTO e;

 
    IF i%10000 = 0 THEN

  RAISE NOTICE '%', e-s;

    END IF;

  END LOOP;

END $$;

ROLLBACK;

 
The output looks like this:

 
NOTICE:  00:00:00.000525

NOTICE:  00:00:00.000992

NOTICE:  00:00:00.001404

NOTICE:  00:00:00.001936

NOTICE:  00:00:00.002374

NOTICE:  00:00:00.002925

NOTICE:  00:00:00.003525

NOTICE:  00:00:00.004015

NOTICE:  00:00:00.00453

NOTICE:  00:00:00.004976

 
The problem only occurs inside a transaction and if the same dataset is 
updated. I'm using PostgreSQL 9.1.15.

 
Jan

 


Re: [GENERAL] Application written in pure pgsql, good idea?

2015-03-01 Thread Jan de Visser
On March 1, 2015 09:45:24 AM inspector morse wrote:
> This is just for fun/research, I don't need a web framework because PHP is
> actually picking up the incoming requests and sending it to a pgsql stored
> function. The pgsql will concatenate all the html that is required for the
> page and send it back to PHP to write out to the response.
> 
> My main concern is, since there will be a lot of concatenation in pgsql to
> generate the HTML, would it affect performance?

Again, performance is the least of your concerns. Building HTML from raw 
strings is, except in the smallest of toys, an exercise in frustration. And in 
those toys performance won't be an issue anyway.

Save yourself a bunch of aggravation and let PHP do the HTML for you. It's 
good at it. pl/pgsql isn't.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Application written in pure pgsql, good idea?

2015-02-28 Thread Jan de Visser
On February 28, 2015 03:39:06 PM inspector morse wrote:
> Is it a good idea to write a simple application (consisting of just data
> entry interfaces) in pure pgsql?
> 
> Basically, each page would have a stored function in postgresql that
> is called by php+apache (the http get/post values would be passed into
> postgres as an array).
> 
> The plpgsql would render HTML and return it back to the front end for display.
> 
> Sample:
> create function render_user_login_page(out v_html varchar)
> returns varchar
> as
> $$
> begin
>  v_html := v_html || 'User ID: <input type="text" />';
> end;
> $$
> 
> 
> Would there be any performance issues with doing this?

Don't know about the performance aspects, but just thinking about it you're 
making your db server responsible for a lot of cruft that can easily be 
outsourced - the HTML rendering. Which, besides being a potential performance 
pitfall, will probably end up being a terrible maintenance nightmare.

What's the problem with letting PHP do what it's good at, i.e. rendering 
templatized HTML, and let the DB do what it's good at - data processing? The 
idea of sending stuff over straight to the DB sounds sane, but instead of doing 
that terrible string concat stuff you're thinking of just send back some 
structured data which you then render in PHP?




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Performance slowing down when doing same UPDATE many times

2015-02-10 Thread Jan Strube
Hi,

 
we recently found a bug in one of our applications which was doing exactly the 
same UPDATE operation a few thousand times inside a transaction. This caused 
the UPDATEs to become slower and slower from some milliseconds to some seconds. 
We already fixed the application but I am wondering if this might be a 
PostgreSQL bug, too.

 
Here is a simple test case that performs and benchmarks 100,000 UPDATEs 
(benchmarking only every 10,000th to reduce output):

 
BEGIN;

CREATE TEMP TABLE test (id integer PRIMARY KEY, flag boolean DEFAULT false);

INSERT INTO test (id) SELECT generate_series(1, 100000);

 
DO $$

DECLARE

  s timestamp;

  e timestamp;

BEGIN

  FOR i IN 1..100000 LOOP

    SELECT clock_timestamp() INTO s;

    UPDATE test SET flag = true WHERE id = 12345;

    SELECT clock_timestamp() INTO e;

 
    IF i%10000 = 0 THEN

  RAISE NOTICE '%', e-s;

    END IF;

  END LOOP;

END $$;

ROLLBACK;

 
The output looks like this:

 
NOTICE:  00:00:00.000525

NOTICE:  00:00:00.000992

NOTICE:  00:00:00.001404

NOTICE:  00:00:00.001936

NOTICE:  00:00:00.002374

NOTICE:  00:00:00.002925

NOTICE:  00:00:00.003525

NOTICE:  00:00:00.004015

NOTICE:  00:00:00.00453

NOTICE:  00:00:00.004976

 
The problem only occurs inside a transaction and if the same dataset is 
updated. I'm using PostgreSQL 9.1.15.

 
Jan

 


Re: [GENERAL] Monitoring query plan cache

2014-12-21 Thread Jan de Visser
On December 21, 2014 04:08:43 PM Andomar wrote:
> It is not always easy to tell the query type (function, prepared or 
> ad-hoc.) We use Python in mod_wsgi with psycopg2. The code shows ad-hoc 
> SQL, but who knows what the many layers between Python and the database do.

psycopg2 sends the SQL you feed it straight to the DB. If you don't feed it a 
PREPARE statement [1] it'll be an ad-hoc query; the value placeholders will be 
interpolated prior to statement submission by psycopg2. 
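
For completeness, a server-side prepared statement looks like this 
(illustrative names):

PREPARE fetch_user (integer) AS
    SELECT * FROM users WHERE user_id = $1;
EXECUTE fetch_user(42);
DEALLOCATE fetch_user;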


[1] http://www.postgresql.org/docs/9.2/interactive/sql-prepare.html




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PROBLEM Service Alert: hostname/check_postgres_old_transaction is CRITICAL **

2014-09-19 Thread Jan-Pieter Cornet
On 2014-9-19 20:33 , Adarsh Sharma wrote:
> It returns approx *311 MB* data to the client servers.
> 
> root > netstat -p | grep 45355
> tcp0 1531648 localhost:5499  localhost:48848 
> ESTABLISHED 45355/postgres
> 
> root > strace -p 45355
-- Remained stuck for *2-3* hours
> read(306, 
> "\30\0kB\3\0\0\0\0\0\377\5\0\0\26\0\0\0\2\0\0\0\30\0+\264\3\0\0\0\0\0"..., 
> 8192) = 8192
> sendto(10, "4174\0\0\0\0041535D\0\0\0\30\0\2\0\0\0\006248710\0\0\0"..., 8192, 
> 0, NULL, 0
> 
> Then after 2-3 hours it got completed automatically with below stacktrace :
> 
> # strace -p 45355
> Process 45355 attached - interrupt to quit
> sendto(10, "4174\0\0\0\0041535D\0\0\0\30\0\2\0\0\0\006248710\0\0\0"..., 8192, 
> 0, NULL, 0
> 
> ) = -1 ECONNRESET (Connection reset by peer)

I'd guess that the receiving program crashes. It might not be able to stomach 
the 311 MB that's being sent to it, maybe it's expanding, and swapping, until 
it's killed by the OOM killer?

Doesn't look like a postgres problem to me. postgres is stuck trying to send 
data... try stracing the client to see what it does? (use "lsof -i :48848" or 
whatever the port number of the remote is to find the pid).

-- 
Jan-Pieter Cornet 
"Any sufficiently advanced incompetence is indistinguishable from malice."
- Grey's Law





[GENERAL] corruption in system tables (9.1.13)

2014-09-19 Thread Jan-Pieter Cornet
Hi,

One of our postgres database clusters suddenly developed a corruption in the 
system tables. I tried to debug this as best I could, but haven't found a root 
cause yet. I'm mainly seeking more pointers to attack this.

The error message that suddenly appeared for a lot of queries (but not all 
queries), is:

ERROR:  missing chunk number 0 for toast value 4132453 in pg_toast_2619

The database was running 9.1.13 at the time (upgraded to 9.1.14 now, but the 
release notes for 9.1.14 do not mention this sort of corruption; we do not use 
GiST indices). We cannot upgrade to 9.3.x at the moment because of dependencies 
on other components (it's high on the list of priorities, though).

I googled this error and found that sometimes judicious use of "reindex" and 
"vacuum full" might solve this, but it didn't.

The corrupt database was renamed and put aside for study. We also saved the 
pg_xlog files from around the time the corruption occurred. Unfortunately, we 
do not have pg_basebackup-like backups (yet - also high on the wishlist now). 
We restored to a fresh database from pg_dump made 30 minutes before the 
corruption occurred, and were able to replay the missing transactions using 
detailed logging from the connecting system. We also made a pg_basebackup-copy 
of the system including the faulty database.

There are no IO errors, so this does not look like disk corruption. Also, this 
machine uses streaming replication to replicate to a hot standby slave, and the 
slave is corrupt in the exact same way (producing the exact same errors). That 
to me is more proof that this is not caused by a faulty disk, since database 
changes are (as far as I know) stored in the base/ directories, and at the same 
time streamed to streaming replication slaves (and written to pg_xlog), right?

This system has been in production since July, and has been running fine ever 
since. It could still be a hardware problem (memory corruption?), but if it is, 
it's quite rare.

As an example, this is what it looks like to try to query the broken database:
sim_stuk=# \d
   List of relations
 Schema |Name|   Type   | Owner 
++--+---
 public | _dbversioning  | table| admin
 public | _dbversioning_id_seq   | sequence | admin
 public | sim_instance   | table| admin
 public | sim_instance_id_seq| sequence | admin
[... works fine ...]

sim_stuk=# \d sim_instance
ERROR:  missing chunk number 0 for toast value 4132453 in pg_toast_2619
sim_stuk=# select * from sim_instance limit 1;
ERROR:  missing chunk number 0 for toast value 4132461 in pg_toast_2619
sim_stuk=# select * from pg_attribute limit 1;
ERROR:  missing chunk number 0 for toast value 4132453 in pg_toast_2619

There are some tables we can still query, though; among these is a table 
containing "status" fields and timestamps, so we could learn approximately how 
many changes were missing from the backup that we restored.

Any suggestions on how to proceed? Thanks!
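
(For anyone else who lands here: toast table names encode the OID of the 
owning table, and 2619 is pg_statistic on stock installs. A sketch of 
checking that, plus the often-suggested repair when the corruption is 
confined to planner statistics — must be run as superuser:

    SELECT 2619::regclass;     -- resolves to pg_statistic
    DELETE FROM pg_statistic;  -- planner statistics only
    ANALYZE;                   -- rebuilds them
)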

-- 
Jan-Pieter Cornet 
"Any sufficiently advanced incompetence is indistinguishable from malice."
- Grey's Law





Re: [GENERAL] Arduino SQL Connector

2014-04-18 Thread Jan Wieck

On 04/18/14 10:31, Steve Spence wrote:

Not a thing in that document about the Arduino. Just how to install
Postgres on a Raspberry Pi. My Postgres is on a hosted server at a ISP.


You intend to have thousands of Arduino devices, incapable of encryption 
or any other means of securing IP connections, directly connect to a 
database that is hosted on a publicly accessible VPS?


Maybe it is just me, but to me that design has DISASTER written in bold, 
red, 120pt font all over it.



Good luck with that,
Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Arduino SQL Connector

2014-04-17 Thread Jan Wieck

On 04/18/14 00:27, Steve Spence wrote:

On 4/18/2014 12:21 AM, John R Pierce wrote:


personal opinion:

I don't think a terminal device like a PC or an embedded system should
be talking directly to SQL at all.   instead, they should be talking
to an application server which implements the "business logic", and
THAT talks to the database.





When all we need to do is log sensor values, there's no business logic
needed. It's scientific data collection, and direct to SQL works very
nicely. It's fast and clean.


In that case you should be sending messages to a message bus or queue.

The bus/queue receiver will then push the data into the database or 
whatever downstream system.
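
A minimal sketch of such a gateway, assuming an MQTT broker as the queue 
(broker host, topic, table and credentials are all made up; a real relay 
should validate each payload before splicing it into SQL):

    # relay MQTT sensor readings into Postgres, one INSERT per message
    mosquitto_sub -h broker.example.com -t 'sensors/+/temp' |
    while read -r value; do
        psql -h dbhost -U logger -d sensors \
             -c "INSERT INTO readings (value) VALUES ('$value');"
    done

The Arduinos then only ever speak to the gateway, and only the gateway 
holds database credentials.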



Jan


--
Jan Wieck
Senior Software Engineer
http://slony.info


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Arduino SQL Connector

2014-04-17 Thread Jan Wieck

On 04/17/14 10:49, Bruce Momjian wrote:

On Thu, Apr 17, 2014 at 10:44:36AM -0400, David Rysdam wrote:

Bruce Momjian  writes:
> On Thu, Apr 17, 2014 at 09:39:55AM -0400, Steve Spence wrote:
>> So, who wants to work on this with me? I'm a fair arduino programmer,
>> but know nothing about postgres.
>
> I would look at the MySQL one as a first step to see how that was done.
> You are basically going to need to duplicate libpq, which is a major
> undertaking.

Maybe I'm being naive, but isn't libpq already being compiled for ARM by
Debian? As long as it fits, you should be good. If it doesn't, you'll
need to strip some stuff out.


Oh, can you run Debian ARM code on Arduino?  If so, Postgres's libpq
could be used directly, though it is probably too big, as you mentioned.
The MySQL driver is C++, which surprised me.


No, to do that you'd need something like a Beaglebone, which is ARM 
Cortex A8 based and runs Linux anyway.


http://www.ti.com/tool/beaglebk?DCMP=PPC_Google_TI&k_clickid=63c22498-5f5d-3789-4b41-0dabd35d

I don't think porting the whole libpq over to an Arduino would be a good 
move. For practical purposes a small subset of functionality through 
some gateway service would probably be a better approach.


Note that I am not an Arduino user/developer. I'm more familiar with the 
Microchip PICs.



73 de WI3CK

--
Jan Wieck
Senior Software Engineer
http://slony.info


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WAL Replication Server + repmgr + Slony

2014-04-12 Thread Jan Wieck

On 03/20/14 10:37, basti wrote:

Hello,

at the moment we use a Postgres (9.1) Master and a Hot-Standby with WAL
Replication.

We also use Slony to copy some (not all) tables to a frontend server.
Is it possible to use repmgr to switch between master and hot standby
without losing the Slony functionality?

When I use repmgr and switch between master and slave, what about
IP addresses and hostnames? Do I need a virtual IP that I can switch
from one server to the other?


Unless your standby moves backwards in time (async replication and 
failover losing transactions), all that is needed should be to issue 
STORE PATH commands with the new IP/hostname to the Slony replica(s).
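
For example, a sketch in slonik — cluster name, node ids and conninfo 
strings are all placeholders:

    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=app host=new-master user=slony';
    node 2 admin conninfo = 'dbname=app host=frontend user=slony';
    # re-point node 2's path to node 1 at its new address
    store path (server = 1, client = 2,
                conninfo = 'dbname=app host=new-master user=slony');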



Regards,
Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cancelling of autovacuums considered harmful

2014-04-12 Thread Jan Wieck

On 02/27/14 10:43, Scott Marlowe wrote:

On Wed, Feb 26, 2014 at 5:40 PM, Steve Crawford
 wrote:

On 02/26/2014 08:56 AM, Alvaro Herrera wrote:


...

No matter how heavily updated, regular activity should not cause
autovacuum kills.  Only heavier operations would do that (say ALTER
TABLE, etc).



"Considered harmful" got my attention. What, if any, known harm is caused?

We have many errors of this type but in our case most are due to batch
processes that have a vacuum embedded at appropriate points in the string of
commands in order to avoid excessive bloat and to ensure the tables are
analyzed for the following steps. Occasionally the autovacuum triggers
before the manual but gets canceled.

Any harm?


We have some rather large tables that have never been autovacuumed. At
first I was thinking it was due to pgsql cancelling them due to load
etc. But if it's slony getting in the way then cancelling them is
still harmful, it's just not postgres' fault.


Slony (even the very old 1.2) does not cancel anything explicitly.
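
One way to see whether those tables are actually being cancelled or simply 
never picked — a sketch:

    SELECT relname, last_autovacuum, autovacuum_count
      FROM pg_stat_user_tables
     ORDER BY last_autovacuum NULLS FIRST;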


Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux vs FreeBSD

2014-04-10 Thread Jan Wieck

On 04/10/14 17:25, Christofer C. Bell wrote:

I'm not wanting to get after anyone here, but I want it on the record
that I am not the source of the above quote discouraging the use of
Ubuntu in a server role.  That would be Bruce Momjian. While Bruce is
entitled to his opinion, it's not one I agree with and I don't want a
Google search years from now to tie my name to that viewpoint.


Who (in their right mind) would ever think of anything but BSD in a 
server role?





Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] .pgpass being ignored

2013-06-24 Thread Jan Wieck
On 06/24/13 10:24, Rebecca Clarke wrote:
> I could be wrong, but shouldn't the owner of .pgpass be postgres?

The owner of ~/.pgpass is whoever owns ~ (the home directory of that user).

And ~/.pgpass must have permissions 0600 in order for libpq to actually
use it.
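
For example (a sketch using the entry from the original post; the * in the 
database field matches any database, which also covers tools like createdb 
that first connect to the postgres maintenance database):

    $ chmod 0600 ~/.pgpass
    $ cat ~/.pgpass
    192.168.1.4:5432:*:postgres:thisIsTheCorrectPassword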


Jan


> 
> 
> On Mon, Jun 24, 2013 at 3:17 PM, Ziggy Skalski <zskal...@afilias.info> wrote:
> 
> On 13-06-21 06:19 PM, Stephen Rasku wrote:
> 
> I am trying to write a script that will create and populate a
> database.  I don't want to enter a password every time so I want to
> use a .pgpass file.  It has the correct permissions:
> 
>  $ ls -l $PGPASSFILE
>  -rw-------  1 Stephen  staff  43 21 Jun 14:48
> /Users/Stephen/.pgpass
> 
> However, when I call createdb, it fails:
> 
>  $ createdb -h 192.168.1.4 -U postgres --no-password JobSearch
>  createdb: could not connect to database postgres:
> fe_sendauth: no
> password supplied
> 
> This is the contents of my .pgpass file:
> 
>
>  192.168.1.4:5432:DatabaseName:postgres:thisIsTheCorrectPassword
> 
> If I omit the --no-password option it will prompt me for a password
> and the command will succeed.  I am using 9.0.10 from MacPorts.
> 
> What am I doing wrong?
> 
> ...Stephen
> 
> 
> 
> Hi,
> 
> Just going from a personal experience, have you tried to open the
> .pgpass file in vi and made sure there's no trailing spaces in your
> pgpass entry?  That bit me once before :)
> 
> Ziggy
> 
> 
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 
> 


-- 
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Perl function leading to out of memory error

2013-02-19 Thread Jan Strube

Hi,

we have a Java daemon that's repeatedly calling a Perl function inside 
our database (version 9.1.8). The function is called about 200 times per 
second. While the Java program is running you can watch the memory usage 
of the postmaster grow continuously until, after a few hours, we get an 
out of memory error from Postgres. In the log you see a lot of 
"ExprContext..." messages. When the daemon is killed the memory is 
freed. The daemon is using a single database connection during its runtime.


You can see the function definition here: 
http://www.deriva.de/tmp/get_comment.txt


Does anyone have an idea what could be wrong here, or how I can find out 
where the memory is spent?
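
(For the record, the growth can at least be watched from outside — a sketch; 
12345 is a placeholder for the backend pid, which SELECT pg_backend_pid() 
on the daemon's connection will report:

    while sleep 10; do ps -o rss= -p 12345; done
)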


Thanks a lot,
Jan


P.S.: Here's the log:

<2013-02-18 16:51:26 CET - idms_export> CONTEXT: PL/Perl function 
"get_comment"
TopMemoryContext: 1272045600 total in 657683 blocks; 419880 free (42 
chunks); 1271625720 used
TopTransactionContext: 8192 total in 1 blocks; 7304 free (0 chunks); 888 
used

ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used

...About 290.000 more of the same lines...

ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
Prepared Queries: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
RI compare cache: 8192 total in 1 blocks; 3744 free (0 chunks); 4448 used
RI query cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
PL/pgSQL function context: 57344 total in 3 blocks; 40176 free (3 
chunks); 17168 used

PL/Perl queries: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
PL/Perl procedures: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
PL/Perl interpreters: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 
used
PL/pgSQL function context: 57344 total in 3 blocks; 30072 free (4 
chunks); 27272 used
PL/pgSQL function context: 24576 total in 2 blocks; 17976 free (6 
chunks); 6600 used

CFuncHash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
Rendezvous variable hash: 8192 total in 1 blocks; 1680 free (0 chunks); 
6512 used
PLpgSQL function cache: 24520 total in 2 blocks; 3744 free (0 chunks); 
20776 used

TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 
12688 used
Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks); 
12688 used

MessageContext: 8192 total in 1 blocks; 6976 free (0 chunks); 1216 used
Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 
used
smgr relation table: 24576 total in 2 blocks; 5696 free (4 chunks); 
18880 used
TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 
32 used

Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
PortalMemory: 8192 total in 1 blocks; 7888 free (1 chunks); 304 used
PortalHeapMemory: 1024 total in 1 blocks; 552 free (0 chunks); 472 used
ExecutorState: 32864 total in 3 blocks; 7688 free (0 chunks); 25176 used
HashTableContext: 8192 total in 1 blocks; 8096 free (1 chunks); 96 used
HashBatchContext: 51372080 total in 16 blocks; 336 free (5 chunks); 
51371744 used

ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Relcache by OID: 24576 total in 2 blocks; 12832 free (3 chunks); 11744 used
CacheMemoryContext: 1377288 total in 24 blocks; 94352 free (3 chunks); 
1282936 used
unnamed prepared statement: 122880 total in 4 blocks; 39504 free (5 
chunks); 83376 used

cached_comments_pkey: 2048 total in 1 blocks; 728 free (0 chunks); 1320 used
Z_BASE_COUNTRY_STATUS: 2048 total in 1 blocks; 776 free (0 chunks); 1272 
used
Z_BASE_COUNTRY_NSIN_ID_COUNTRY: 2048 total in 1 blocks; 176 free (0 
chunks); 1872 used

Z_BASE_COUNTRY_NSIN: 2048 total in 1 blocks; 776 free (0 chunks); 1272 used
Z_BASE_COUNTRY_ISIN: 2048 total in 1 blocks; 776 free (0 chunks); 1272 used
Z_BASE_COUNTRY_ID_COUNTRY: 2048 total in 1 blocks; 776 free (0 chunks); 
1272 used

Z_BASE_COUNTRY_pkey: 2048 total in 1 blocks; 664 free (0 chunks); 1384 used
CachedPlan: 15360 total in 4 blocks; 5272 free (0 chunks); 10088 used
CachedPlanSource: 7168 total in 3 blocks; 3928 free (2 chunks); 3240 used
SPI Plan: 1024 total in 1 blocks; 808 free (0 chunks); 216 used
Z_UL_ID_NOTATION: 2048 total in 1 blocks; 728 free (0 chunks); 1320 used
Z_UL_pkey: 2048 total in 1 blocks; 664 free (0 chunks); 1384 used
BASKET_ISIN_UNIQUE: 4096 total in 2 blocks; 1792 

Re: [GENERAL] Query becomes slow when written as view

2013-02-15 Thread Jan Strube



is your function stable/immutable, and if so is it decorated as such.


No, it's volatile.

Well, that's your problem.  The planner won't push down the IN clause
past the volatile function for fear of changing the query's side-effects.

I'd question whether it's sane to have a view with volatile functions in
it at all.  It certainly won't act much like the normal understanding of
a view ...


I see, thanks for the explanation.
In this case, the side effect is desired. The view should always return 
a COMMENT, either directly from one of the tables or generated by the 
function, which stores the COMMENT in cached_comments for the next select.
Is there perhaps a best practice for doing a thing like that? Of course we 
could declare the original function stable and have it call another volatile 
function to store the data, as noted in the docs. But that would be 
cheating...


Regards,
Jan



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query becomes slow when written as view

2013-02-14 Thread Jan Strube



is your function stable/immutable, and if so is it decorated as such.

merlin


No, it's volatile.

Jan


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Query becomes slow when written as view

2013-02-14 Thread Jan Strube

Hi,

I have the following query which runs reasonably fast under PostgreSQL 
9.1.8:


SELECT
b."ISIN",
CASE
WHEN b."COMMENT" IS NOT NULL THEN b."COMMENT"
WHEN cc."ISIN" IS NOT NULL THEN cc.comment
ELSE get_comment(b."ISIN")
END AS "COMMENT"
FROM dtng."Z_BASE" b
LEFT JOIN dtng.cached_comments cc on b."ISIN" = cc."ISIN" AND 
cc.cache_time >= b._last_modified

WHERE b."ISIN" IN (SELECT "ISIN" from dtng."Z_BASE" LIMIT 1)

Here is the query plan:

Nested Loop Left Join  (cost=0.08..16.65 rows=1 width=1053)
  Join Filter: (cc.cache_time >= b._last_modified)
  ->  Nested Loop  (cost=0.08..8.67 rows=1 width=644)
        ->  HashAggregate  (cost=0.08..0.09 rows=1 width=13)
              ->  Subquery Scan on "ANY_subquery"  (cost=0.00..0.08 rows=1 width=13)
                    ->  Limit  (cost=0.00..0.07 rows=1 width=13)
                          ->  Seq Scan on "Z_BASE"  (cost=0.00..106515.68 rows=1637368 width=13)
        ->  Index Scan using "Z_BASE_pkey" on "Z_BASE" b  (cost=0.00..8.57 rows=1 width=644)
              Index Cond: (("ISIN")::bpchar = ("ANY_subquery"."ISIN")::bpchar)
  ->  Index Scan using cached_comments_pkey on cached_comments cc  (cost=0.00..7.71 rows=1 width=425)
        Index Cond: ((b."ISIN")::bpchar = ("ISIN")::bpchar)

When I try to put this into a view, it becomes extremely slow:

CREATE VIEW export_comments AS
SELECT
b."ISIN",
CASE
WHEN b."COMMENT" IS NOT NULL THEN b."COMMENT"
WHEN cc."ISIN" IS NOT NULL THEN cc.comment
ELSE get_comment(b."ISIN")
END AS "COMMENT"
FROM dtng."Z_BASE" b
LEFT JOIN dtng.cached_comments cc on b."ISIN" = cc."ISIN" AND 
cc.cache_time >= b._last_modified


SELECT *
FROM export_comments
WHERE "ISIN" IN (SELECT "ISIN" FROM dtng."Z_BASE" LIMIT 1)

The query plan now is:

Hash Join  (cost=79926.52..906644.87 rows=818684 width=45)
  Hash Cond: ((b."ISIN")::bpchar = ("ANY_subquery"."ISIN")::bpchar)
  ->  Hash Left Join  (cost=79926.42..884049.08 rows=1637368 width=1053)
        Hash Cond: ((b."ISIN")::bpchar = (cc."ISIN")::bpchar)
        Join Filter: (cc.cache_time >= b._last_modified)
        ->  Seq Scan on "Z_BASE" b  (cost=0.00..106515.68 rows=1637368 width=644)
        ->  Hash  (cost=74620.41..74620.41 rows=77841 width=425)
              ->  Seq Scan on cached_comments cc  (cost=0.00..74620.41 rows=77841 width=425)
  ->  Hash  (cost=0.09..0.09 rows=1 width=13)
        ->  HashAggregate  (cost=0.08..0.09 rows=1 width=13)
              ->  Subquery Scan on "ANY_subquery"  (cost=0.00..0.08 rows=1 width=13)
                    ->  Limit  (cost=0.00..0.07 rows=1 width=13)
                          ->  Seq Scan on "Z_BASE"  (cost=0.00..106515.68 rows=1637368 width=13)

By the way I get the same behaviour and query plan when I try this:

SELECT *
FROM (
-- above view definition
) x
WHERE x."ISIN" IN (SELECT "ISIN" FROM dtng."Z_BASE" LIMIT 1)

We already found out that the problem is the Perl function "get_comment", 
which is very expensive. In the first case the function is called at 
most once, but in the second case it is called many times. I believe 
this is because of the hash join, which causes the view to fetch 
everything from dtng."Z_BASE" first?
The question is how to avoid this. We tried to raise the function's cost 
from 100 to 1000, but that did not help. (Because of the architecture 
of the software that uses this query, we have the constraint that the 
structure of the final WHERE clause (WHERE "ISIN" IN (...)) must not be 
altered.)
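
(For reference, the cost change was done with a statement along these 
lines — the argument type here is an assumption:

    ALTER FUNCTION get_comment(character) COST 1000;

A higher declared cost can make the planner avoid unnecessary calls, but 
as seen above it did not change the join strategy here.)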


Thanks a lot for any idea,
Jan



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prevent out of memory errors by reducing work_mem?

2013-01-27 Thread Jan Strube

Hi,

you are right.
We were running 9.1.4 and after upgrading to 9.1.7 the error disappeared.

Thanks a lot,
Jan Strube



I'm getting an out of memory error running the following query over 6
tables (the *BASE* tables have over 1 million rows each) on Postgresql
9.1. The machine has 4GB RAM:

It looks to me like you're suffering an executor memory leak that's
probably unrelated to the hash joins as such.  The leak is in the
ExecutorState context:


ExecutorState: 3442985408 total in 412394 blocks; 5173848 free (16
chunks); 3437811560 used

while the subsidiary HashXYZ contexts don't look like they're going
beyond what they've been told to.

So the first question is 9.1.what?  We've fixed execution-time memory
leaks as recently as 9.1.7.

If you're on 9.1.7, or if after updating you can still reproduce the
problem, please see if you can create a self-contained test case.
My guess is it would have to do with the specific data types and
operators being used in the query, but not so much with the specific
data, so you probably could create a test case that just uses tables
filled with generated random data.

regards, tom lane




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Prevent out of memory errors by reducing work_mem?

2013-01-25 Thread Jan Strube

Hi,

I'm getting an out of memory error running the following query over 6 
tables (the *BASE* tables have over 1 million rows each) on PostgreSQL 
9.1. The machine has 4GB RAM:


SELECT *
FROM dtng."Z_BASE" zb
LEFT JOIN dtng."Z_BASE_COUNTRY" zbc ON zb."ISIN" = zbc."ISIN"
LEFT JOIN dtng."PRODUCT_TYPES" pt ON zb."ID_PRODUCT_TYPE" = 
pt."ID_PRODUCT_TYPE"
JOIN voola.export_product_groups epg ON pt."ID_PRODUCT_GROUP" = 
epg.id_product_group

FULL OUTER JOIN warrants."W_BASE" wb ON zb."ISIN" = wb."ISIN"
LEFT JOIN warrants."W_BASE_COUNTRY" wbc ON wb."ISIN" = wbc."ISIN"
WHERE coalesce(zbc."ID_COUNTRY", wbc."ID_COUNTRY") = 'DE'

This is the query plan:

Hash Right Join  (cost=1498106.22..2102918.77 rows=7487 width=2708)
  Hash Cond: ((wbc."ISIN")::bpchar = (wb."ISIN")::bpchar)
  Filter: ((COALESCE(zbc."ID_COUNTRY", wbc."ID_COUNTRY"))::bpchar = 'DE'::bpchar)
  ->  Seq Scan on "W_BASE_COUNTRY" wbc  (cost=0.00..45668.41 rows=1497341 width=160)
  ->  Hash  (cost=1015864.28..1015864.28 rows=1474955 width=2548)
        ->  Hash Full Join  (cost=420009.31..1015864.28 rows=1474955 width=2548)
              Hash Cond: ((zb."ISIN")::bpchar = (wb."ISIN")::bpchar)
              ->  Hash Right Join  (cost=266400.82..518612.27 rows=505517 width=2341)
                    Hash Cond: ((zbc."ISIN")::bpchar = (zb."ISIN")::bpchar)
                    ->  Seq Scan on "Z_BASE_COUNTRY" zbc  (cost=0.00..47831.60 rows=1614860 width=106)
                    ->  Hash  (cost=120372.86..120372.86 rows=505517 width=2235)
                          ->  Hash Join  (cost=662.44..120372.86 rows=505517 width=2235)
                                Hash Cond: (zb."ID_PRODUCT_TYPE" = pt."ID_PRODUCT_TYPE")
                                ->  Seq Scan on "Z_BASE" zb  (cost=0.00..106484.75 rows=1634275 width=1377)
                                ->  Hash  (cost=631.12..631.12 rows=2506 width=858)
                                      ->  Hash Join  (cost=2.64..631.12 rows=2506 width=858)
                                            Hash Cond: (pt."ID_PRODUCT_GROUP" = (epg.id_product_group)::bpchar)
                                            ->  Seq Scan on "PRODUCT_TYPES" pt  (cost=0.00..573.03 rows=8103 width=853)
                                            ->  Hash  (cost=1.73..1.73 rows=73 width=5)
                                                  ->  Seq Scan on export_product_groups epg  (cost=0.00..1.73 rows=73 width=5)
              ->  Hash  (cost=93399.55..93399.55 rows=1474955 width=207)
                    ->  Seq Scan on "W_BASE" wb  (cost=0.00..93399.55 rows=1474955 width=207)

I tried reducing work_mem from 8MB to 64kB to force usage of temporary 
files for the hash joins instead of working memory, as described in 
chapter 18.4.1 of the documentation. But that didn't help.
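
(Set per session, so the rest of the cluster keeps its normal value — a 
sketch:

    SET work_mem = '64kB';
    SHOW work_mem;
)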


I know that the query is poorly written and have already rewritten it to use 
less memory. My question is if/how it is possible to prevent the out of 
memory error at the price of speed (disk usage). I already searched the 
internet for an answer but without luck...


Thanks a lot,
Jan

P.S.: Here is the Postgres log:

TopMemoryContext: 2181968 total in 13 blocks; 9888 free (12 chunks); 
2172080 used
TopTransactionContext: 8192 total in 1 blocks; 7112 free (0 chunks); 
1080 used
Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 
12688 used
Record information cache: 24576 total in 2 blocks; 15984 free (5 
chunks); 8592 used

TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks); 
12688 used

Prepared Queries: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
MessageContext: 8192 total in 1 blocks; 6976 free (0 chunks); 1216 used
Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 
used
smgr relation table: 24576 total in 2 blocks; 5696 free (4 chunks); 
18880 used
TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 
32 used

Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
PortalHoldContext: 24576 total in 2 blocks; 15888 free (4 chunks); 8688 used
PortalHeapMemory: 534144 total in 68 blocks; 6656 free (14 chunks); 
527488 used
ExecutorState: 3442985408 total in 412394 blocks; 5173848 free (16 
chunks); 3437811560 used

HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 16384 total in 1 blocks; 2304 free (0 chunks); 14080 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 262144 total in 5 blocks; 74976 free (3 chunks); 
187168 used

HashTableContext: 8192 total in 1 blocks; 6080 free (0 chunks); 2112 used
HashBatchContext: 4194304 total in 9 blocks; 79984 free (11 chunks); 
4114320 used

HashTableContext: 8192 total in 1 blocks; 7104 free (0 chunks); 1088 used
HashBatchContext: 8421424 total in 12 blocks; 2017200 free (13 chunks); 
6404224 used
HashTableContext: 67166304 total in 5 blocks; 57248 free (15 chunks); 
67109056 used
HashBatchContext: 234881024 total in 37 blocks; 259488

Re: [GENERAL] How to startup the database server?

2012-12-19 Thread Jan Kesten
Hi Jason,

sounds a bit weird. Can you check which user the database files belong
to? An 'ls -lah /var/lib/pgsql' should do the job. Then switch from the
root user to the postgres user and try to start the cluster manually with
the command line below (taken from your message):

/usr/bin/postgres -D /var/lib/pgsql/data

That should start your instance in the foreground, so you will see any
errors and messages during startup.

One more thing: is SELinux enabled?
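
(A quick way to check — a sketch, assuming the standard SELinux tooling:

    # getenforce
    Enforcing

If it reports Enforcing, look for denials in /var/log/audit/audit.log.)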

Hope that helps :-)
Jan


On 19.12.2012 16:34, Jason Ma wrote:
> Hi,
> The ps output is from after the server start; I don't know why I can't see
> any process after starting the server. And of course I used root to init the
> db, 'cause we have to run this command in CentOS, which needs the
> privilege of root:
> 
> service postgresql start
> 
> Regards,
> Jason
> 
> 
> 2012/12/19 Adrian Klaver <adrian.kla...@gmail.com>
> 
> On 12/19/2012 07:07 AM, Jason Ma wrote:
> 
> Thanks, adrian,  but I have check the pgstartup.log, I got the
> following
> messages:
> .
> creating information schema ... ok
> vacuuming database template1 ... ok
> copying template1 to template0 ... ok
> copying template1 to postgres ... ok
> 
> Success. You can now start the database server using:
> 
>  /usr/bin/postgres -D /var/lib/pgsql/data
> or
>  /usr/bin/pg_ctl -D /var/lib/pgsql/data -l logfile start
> 
> runuser: cannot set groups: Operation not permitted
> 
> 
> Seems the database cluster was initialized. Sort of concerned by the
> runuser error. What user did you run the initdb as?
> 
> 
> 
> I tried this:
> 
> -bash-4.1$ /usr/bin/pg_ctl -D /var/lib/pgsql/data -l logfile start
> server starting
> 
> 
> Nothing in the logfile?
> 
> 
> -bash-4.1$ ps -ef |grep postgres
> root  2904  2585  0 22:59 pts/1    00:00:00 su - postgres
> postgres  2905  2904  0 22:59 pts/1    00:00:00 -bash
> postgres  2946  2905  7 23:00 pts/1    00:00:00 ps -ef
> postgres  2947  2905  0 23:00 pts/1    00:00:00 grep postgres
> 
> It reported that I have started the server but  that doesn't work, I
> think there must be somebody has the same problem with me. Any
> suggestions.
> 
> 
> I am not seeing the server running in the above. You might want to
> retry the ps with 'post' as the grep expression.
> 
> 
> Regards,
> Jason
> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@gmail.com
> 
> 
> 
> 
> -- 
> Best wishes,
> 
> Jason Ma



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] large database

2012-12-11 Thread Jan Kesten
Hi all,

> I would very much appreciate a copy or a link to these slides!

here they are:

http://www.scribd.com/mobile/doc/61186429

Have fun!



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] large database

2012-12-11 Thread Jan Kesten
Hi Mihai.

> We are now at the point where the csv files are all created and amount
> to some 300 GB of data.

> I would like to get some advice on the best deployment option.

First - and maybe best - advice: Do some testing on your own and plan
some time for this.

> First, the project has been started using MySQL. Is it worth switching
> to Postgres and if so, which version should I use?

When switching to PostgreSQL I would recommend using the latest stable
version. But your project is already running on MySQL - are there issues
you expect to solve by switching to another database system? If not:
why switch?

> Second, where should I deploy it? The cloud or a dedicated box?

Given 1TB of storage, the x-large instance and 1 provisioned IOPS
would mean about 2000 USD for a 100% utilized instance on Amazon. This is
not really ultra-cheap ;-) For two months of that you can get a
dedicated server with eight drives, buy two extra SSDs and have full
control over a Dell server. But things get much cheaper if real IOPS are
not needed at such a high rate.

Also, when using a cloud infrastructure while needing your data on a local
system, keep network latency in mind.

We have several huge PostgreSQL databases running and have used
OpenIndiana with ZFS and SSDs for data storage for quite a while now, and
it works perfectly.

There are some slides from Sun/Oracle about ZFS, ZIL, SSDs and PostgreSQL
performance (I can look for them if needed).
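
(Dedicating SSDs to the ZIL is a one-liner on such a setup — a sketch with
made-up pool and device names:

    zpool add tank log mirror c5t0d0 c5t1d0
)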

> Alternatively I looked at a Dell server with 32 GB of RAM and some
> really good hard drives. But such a box does not come cheap and I don't
> want to keep the pieces if it doesn't cut it

Just a hint: Do not simply look at Dell's prices - phone them and get a
quote. I was surprised (but do not buy SSDs there).

Think about how your data is structured and how it will be queried after it
is imported into the database, to see where your bottlenecks are.

Cheers,
Jan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

