Re: [GENERAL] Re: PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Craig Ringer
On 01/29/2014 08:03 PM, Florian Weimer wrote:
> On 01/29/2014 09:07 AM, Craig Ringer wrote:
> 
>> A challenge I've found when approaching this from the ORM side has been
>> getting people to care. The sentiment has tended to be along the lines
>> of: No other DBMS does this or requires this, why do we have to jump
>> through hoops just to make PostgreSQL happy?
> 
> Is this true?  Can you use other JDBC drivers (except SQLite) to insert
> Java Strings into NUMERIC columns

That'll require direct JDBC tests, which I don't presently have time to
set up for the DBs of interest (downloading demo versions or finding
ones I can mess with, etc).

The following tests, based on simple SQL expressions that use CASTs to force
interpretation of values as character-typed, should nonetheless be
informative.

> and Java ints into text columns?

I wasn't ever suggesting that; that's why I'm being quite specific about
referring to implicit casts *FROM* text. Not tested.



In the following tests the client interface used is JDBC, but it's used
to execute statements directly, not with bind params. So it's more like
running the statement in psql or equivalent; for that reason, CASTs are
used to force explicit types.
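
For reference, the gist of each fiddle is something like the following
(table, column, and connection details are illustrative placeholders, not
copied from the fiddles); the only interesting part is whether the INSERT is
accepted or rejected:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CastProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point the URL at the server under test.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "test", "test");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE cast_probe (n numeric, ts timestamp)");
            // The literals are wrapped in explicit CASTs to a character type, so the
            // server must decide whether character-typed input is acceptable for
            // numeric/timestamp columns. PostgreSQL rejects this INSERT; the
            // permissive servers listed below accept it.
            st.executeUpdate("INSERT INTO cast_probe VALUES "
                + "(CAST('42.5' AS varchar), CAST('2014-01-29 10:00:00' AS varchar))");
        }
    }
}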


Oracle: Behaves much like PostgreSQL for its own custom types, and
SQL/XML, but permissive for NUMERIC and TIMESTAMP (appears to fit the
JDBC spec):

http://sqlfiddle.com/#!4/cc065/5
http://sqlfiddle.com/#!4/cc065/6
http://sqlfiddle.com/#!4/1adc6/4



MS SQL Server 2012: Permissive

http://sqlfiddle.com/#!6/57662/3


PostgreSQL: Strict

http://sqlfiddle.com/#!15/596f78/2
http://sqlfiddle.com/#!15/596f78/3
http://sqlfiddle.com/#!15/596f78/4
http://sqlfiddle.com/#!15/596f78/6


MySQL 5.5: Permissive (surprise!)

http://sqlfiddle.com/#!2/89152



I don't really care about dredging up DB2, Firebird, etc.

So of the set tested, we're the strictest, Oracle is next-strictest and
looks like it's as strict as it can be while remaining JDBC compliant. I
was surprised to see that it won't accept character literal input for
its XmlType and UriType.

Other servers are more permissive about inputs.

> What about using types on the PostgreSQL side which match the
> application types?

So, unless your language has a native json type, or standard json
library that the PostgreSQL client driver can rely on being used, you
shouldn't be able to use json in PostgreSQL?

Not convinced.

> In any case, you *can* use strings everywhere if you use the
> stringtype=unspecified connection parameter:
> 
> 

Yes, as I mentioned upthread, this is true for JDBC with the caveat that
it'll cause problems with function overload resolution.
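
For anyone following along, a minimal sketch of what that looks like
(connection details are placeholders, and the json column is assumed):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class StringTypeUnspecified {
    public static void main(String[] args) throws Exception {
        // stringtype=unspecified makes PgJDBC send setString() parameters with an
        // unspecified type, so the server infers json from the target column instead
        // of rejecting a varchar-typed parameter. The flip side is that every string
        // parameter becomes "unknown", which is where the function overload
        // resolution problems come from.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb?stringtype=unspecified",
                "test", "test");
             PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO docs (body) VALUES (?)")) {  // docs.body is a json column
            ps.setString(1, "{\"title\": \"hello\"}");
            ps.executeUpdate();
        }
    }
}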

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Re: PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Kevin Grittner
Florian Weimer  wrote:

> Can you use other JDBC drivers (except SQLite) to insert
> Java Strings into NUMERIC columns and Java ints into text
> columns?

The JDBC specification can be downloaded from here:

http://download.oracle.com/otndocs/jcp/jdbc-4_1-mrel-spec/index.html

Check out appendix B.  According to the charts there, any of these
Java types (from table B-5):

  String
  java.math.BigDecimal
  Boolean
  Byte
  Short
  Integer
  Long
  Float
  Double

should be assignable using setObject and setNull to these JDBC
target types:

  TINYINT
  SMALLINT
  INTEGER
  BIGINT
  REAL
  FLOAT
  DOUBLE
  DECIMAL
  NUMERIC
  BIT
  BOOLEAN
  CHAR
  VARCHAR
  LONGVARCHAR

In addition (from the same table) String should be assignable to:

  BINARY
  VARBINARY
  LONGVARBINARY
  DATE
  TIME
  TIMESTAMP

So, it's at least partly a question of whether we want to conform
to the JDBC specification.
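
Concretely, per table B-5 a spec-conforming driver is expected to accept
something like this (table and connection details are made up for
illustration); whether PgJDBC should is exactly the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Types;

public class SetObjectPerSpec {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and table.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "test", "test");
             PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO prices (amount) VALUES (?)")) {
            // Table B-5: a Java String should be assignable to a NUMERIC target
            // type via setObject with an explicit target SQL type.
            ps.setObject(1, "19.99", Types.NUMERIC);
            ps.executeUpdate();
        }
    }
}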

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] concurrent SELECT blocking ALTER?

2014-01-29 Thread Jeff Janes
On Wed, Jan 29, 2014 at 3:09 PM, Neil Harkins  wrote:

> Note the number of exclusive locks in my first message, it is equal to the
> number of threads (20). Also, the ALTER was not running then, apologies
> if that was not clear.


Not all locks are on tables.  Each transaction holds an Exclusive lock on
its own virtual transaction id, and that is what you are seeing there.
Looking at the pg_locks table without looking at the nature of the locks is
generally not very useful.
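
Including locktype in the query makes that visible; roughly (connection
details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LockTypes {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/pbs_production", "postgres", "postgres");
             Statement st = conn.createStatement();
             // Grouping by locktype as well shows the ExclusiveLocks are on
             // virtualxid (each backend's own virtual transaction id), while the
             // tables and indexes only carry AccessShareLocks from the SELECTs.
             ResultSet rs = st.executeQuery(
                "SELECT locktype, mode, count(*) FROM pg_locks "
                + "WHERE pid <> pg_backend_pid() GROUP BY 1, 2 ORDER BY 1, 2")) {
            while (rs.next()) {
                System.out.printf("%-15s %-18s %d%n",
                    rs.getString(1), rs.getString(2), rs.getLong(3));
            }
        }
    }
}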

Cheers,

Jeff


Re: [GENERAL] concurrent SELECT blocking ALTER?

2014-01-29 Thread Neil Harkins
Note the number of exclusive locks in my first message, it is equal to the
number of threads (20). Also, the ALTER was not running then, apologies if
that was not clear.

On Wednesday, January 29, 2014, Thomas Kellerer  wrote:

> Neil Harkins wrote on 29.01.2014 23:37:
>
>> I totally understand DDL taking exclusive locks, the problem here seems
>> to be that the *SELECTs*
>> are taking out exclusive locks, locking out the ALTER, which feels like a
>> bug.
>>
>
> The SELECT is not holding an exclusive lock, it's holding a *shared* lock,
> but the ALTER is _requesting_ an exclusive lock and that can only be
> granted once all shared (or otherwise incompatible) locks are released.


Re: [GENERAL] concurrent SELECT blocking ALTER?

2014-01-29 Thread Thomas Kellerer

Neil Harkins wrote on 29.01.2014 23:37:

I totally understand DDL taking exclusive locks, the problem here seems to be 
that the *SELECTs*
are taking out exclusive locks, locking out the ALTER, which feels like a bug.


The SELECT is not holding an exclusive lock, it's holding a *shared* lock, but
the ALTER is _requesting_ an exclusive lock and that can only be granted once
all shared (or otherwise incompatible) locks are released.







Re: [GENERAL] concurrent SELECT blocking ALTER?

2014-01-29 Thread Neil Harkins
I totally understand DDL taking exclusive locks, the problem here seems to
be that the *SELECTs* are taking out exclusive locks, locking out the
ALTER, which feels like a bug.

On Wednesday, January 29, 2014, Peter Eisentraut  wrote:

> On 1/29/14, 4:59 PM, Neil Harkins wrote:
> > Why are those exclusive locks present?
> > Can't the database rely on mvcc for those reads
> > without locking? The autocommit should be
> > increasing the xid used for the reads, so the
> > ALTER should be able to slip in-between?
>
> One would think so, but it's more complicated.  There is a long thread
> on pgsql-hackers spreading over many months that discusses the
> intricacies of reducing the strength of the locks taken by DDL commands.
>  This is being addressed, but at the moment most DDL commands take
> exclusive locks.
>
>


Re: [GENERAL] concurrent SELECT blocking ALTER?

2014-01-29 Thread Peter Eisentraut
On 1/29/14, 4:59 PM, Neil Harkins wrote:
> Why are those exclusive locks present?
> Can't the database rely on mvcc for those reads
> without locking? The autocommit should be
> increasing the xid used for the reads, so the
> ALTER should be able to slip in-between?

One would think so, but it's more complicated.  There is a long thread
on pgsql-hackers spreading over many months that discusses the
intricacies of reducing the strength of the locks taken by DDL commands.
 This is being addressed, but at the moment most DDL commands take
exclusive locks.





[GENERAL] concurrent SELECT blocking ALTER?

2014-01-29 Thread Neil Harkins
Hi all, I'm curious if anyone can explain, or suggest some debugging for,
some odd locking behavior I am able to reproduce on pg 9.2.1.

I start a test program with 20 threads (autocommit=1), all
executing the same SELECT query with a LEFT OUTER JOIN,
1~2 per second. Then I execute an ALTER to drop a column
(one not referenced by the query, neither explicitly nor via a *)
from the table referenced in the OUTER JOIN.

The ALTER blocks for many minutes:

2014-01-28 02:39:48.781
GMT,"postgres","pbs_production",49521,"[local]",52e713cb.c171,7,"ALTER
TABLE waiting",2014-01-28 02:19:55
GMT,18/59,241951078,LOG,0,"process 49521 acquired
AccessExclusiveLock on relation 16637 of database 16409 after
932916.917 ms",,"alter table refunds drop column
external_refund_id;",,,"psql"

When I stop the test program doing the SELECTs,
the ALTER gets unblocked and completes very quickly
(the table contains 20k rows):

2014-01-28 02:39:48.803
GMT,"postgres","pbs_production",49521,"[local]",52e713cb.c171,8,"ALTER
TABLE",2014-01-28 02:19:55 GMT,18/0,0,LOG,0,"duration: 932939.482
ms  statement: alter table refunds drop column
external_refund_id;","psql"

When the test program is running, I see:

# SELECT mode, COUNT(*) FROM pg_locks
WHERE pid != pg_backend_pid() GROUP BY 1;
      mode       | count
-----------------+-------
 ExclusiveLock   |    20
 AccessShareLock |   440
(2 rows)

Why are those exclusive locks present?
Can't the database rely on mvcc for those reads
without locking? The autocommit should be
increasing the xid used for the reads, so the
ALTER should be able to slip in-between?

thanks in advance for any input,
-neil




Re: [GENERAL] Continuous archiving and restore questions

2014-01-29 Thread Pedro Salgueiro
On Wed, Jan 29, 2014 at 5:26 PM, Jeff Janes  wrote:

> On Wed, Jan 29, 2014 at 6:15 AM, Pedro Salgueiro <
> pedro.salgue...@cortex-intelligence.com> wrote:
>
>> Hi,
>>
>> In the past couple of days I have been trying Continuous Archiving and
>> Point-in-Time Recovery (PITR) and I have some doubts.
>>
>> I successfully configured postgresql to perform the archive of the wal
>> files, using the following properties in postgresql.conf
>>
>> archive_mode = on
>>> wal_level = archive
>>> archive_command = 'cp %p /opt/postgres-wal-backups/wal-files/%f'
>>> max_wal_senders = 3
>>
>>
>> To perform the base backup, I am using the pg_basebackup tool:
>>
>> pg_basebackup --format tar --xlog -D - | gzip >
>>> ${BASE_BACKUP_FOLDER}/base_backup.tar.gz
>>
>>
>> After making a base backup, I made some changes on the database,
>> including creating new tables and adding data to them. Then I moved the
>> data folder to a safe place, restored the base backup, created the
>> recovery.conf file, copied the WAL files that were unarchived back to the
>> restored data folder, and restarted postgresql.
>>
>> I used the following recovery.conf file:
>>
>> restore_command = 'cp /opt/postgres-wal-backups/wal-files/%f %p'
>>> archive_cleanup_command = 'pg_archivecleanup
>>>  /opt/postgres-wal-backups/wal-files %r'
>>
>>
> Why are you cleaning up the archive?
>

The idea was to remove WAL files that are no longer needed, i.e. WAL files
that are already covered by the base backup. Anyway, that was not the problem,
as I tested the same procedure without the archive_cleanup_command.

Pedro

>
>
>>
>> The restore procedure worked like a charm, and all data was recovered.
>>
>> Then I created some more tables and added more data. Then made the same
>> restore procedure as before, using the same base backup. Apparently the
>> restore was successful and without errors, but the newly created data was
>> not restored, only the one which was created before the first restore.
>>
>
> If your previous use of archive_cleanup_command deleted files that the new
> recovery would have needed, then the recovery would have to end at the
> first missing file.
>
> Cheers,
>
> Jeff
>


Re: [GENERAL] Continuous archiving and restore questions

2014-01-29 Thread Jeff Janes
On Wed, Jan 29, 2014 at 6:15 AM, Pedro Salgueiro <
pedro.salgue...@cortex-intelligence.com> wrote:

> Hi,
>
> In the past couple of days I have been trying Continuous Archiving and
> Point-in-Time Recovery (PITR) and I have some doubts.
>
> I successfully configured postgresql to perform the archive of the wal
> files, using the following properties in postgresql.conf
>
> archive_mode = on
>> wal_level = archive
>> archive_command = 'cp %p /opt/postgres-wal-backups/wal-files/%f'
>> max_wal_senders = 3
>
>
> To perform the base backup, I am using the pg_basebackup tool:
>
> pg_basebackup --format tar --xlog -D - | gzip >
>> ${BASE_BACKUP_FOLDER}/base_backup.tar.gz
>
>
> After making a base backup, I made some changes on the database, including
> creating new tables and adding data to them. Then I moved the data folder
> to a safe place, restored the base backup, created the recovery.conf file,
> copied the WAL files that were unarchived back to the restored data folder,
> and restarted postgresql.
>
> I used the following recovery.conf file:
>
> restore_command = 'cp /opt/postgres-wal-backups/wal-files/%f %p'
>> archive_cleanup_command = 'pg_archivecleanup
>>  /opt/postgres-wal-backups/wal-files %r'
>
>
Why are you cleaning up the archive?


>
> The restore procedure worked like a charm, and all data was recovered.
>
> Then I created some more tables and added more data. Then made the same
> restore procedure as before, using the same base backup. Apparently the
> restore was successful and without errors, but the newly created data was
> not restored, only the one which was created before the first restore.
>

If your previous use of archive_cleanup_command deleted files that the new
recovery would have needed, then the recovery would have to end at the
first missing file.

Cheers,

Jeff


Re: [GENERAL] Continuous archiving and restore questions

2014-01-29 Thread Pedro Salgueiro
Found the problem.

When a restore is made, a new timeline is created.
The problem is that on a subsequent restore, the default behavior is to
recover along the timeline that was current when the base backup was taken,
thus ignoring the new timeline and, with it, the changes made after the first
restore.

The solution is to include the following line in recovery.conf so that
recovery always follows the latest available timeline:

recovery_target_timeline = 'latest'


Best regards,
Pedro


On Wed, Jan 29, 2014 at 2:15 PM, Pedro Salgueiro <
pedro.salgue...@cortex-intelligence.com> wrote:

> Hi,
>
> In the past couple of days I have been trying Continuous Archiving and
> Point-in-Time Recovery (PITR) and I have some doubts.
>
> I successfully configured postgresql to perform the archive of the wal
> files, using the following properties in postgresql.conf
>
> archive_mode = on
>> wal_level = archive
>> archive_command = 'cp %p /opt/postgres-wal-backups/wal-files/%f'
>> max_wal_senders = 3
>
>
> To perform the base backup, I am using the pg_basebackup tool:
>
> pg_basebackup --format tar --xlog -D - | gzip >
>> ${BASE_BACKUP_FOLDER}/base_backup.tar.gz
>
>
> After making a base backup, I made some changes on the database, including
> creating new tables and adding data to them. Then I moved the data folder
> to a safe place, restored the base backup, created the recovery.conf file,
> copied the WAL files that were unarchived back to the restored data folder,
> and restarted postgresql.
>
> I used the following recovery.conf file:
>
> restore_command = 'cp /opt/postgres-wal-backups/wal-files/%f %p'
>> archive_cleanup_command = 'pg_archivecleanup
>>  /opt/postgres-wal-backups/wal-files %r'
>
>
> The restore procedure worked like a charm, and all data was recovered.
>
> Then I created some more tables and added more data. Then made the same
> restore procedure as before, using the same base backup. Apparently the
> restore was successful and without errors, but the newly created data was
> not restored, only the one which was created before the first restore.
>
> Everything that was made after the first restore was lost.
>
> Then I tried to make a fresh base backup, make some changes on the
> database, and then, issue the restore procedure just as before, but using
> the new base backup. This time, the changes made after the base backup were
> restored successfully.
>
> It seems that after a restore is made, I need to make a fresh base backup
> in order to be able to make future restores. Is this correct, or am I doing
> something wrong?
>
> Thank you,
> Pedro Salgueiro
>
>
>
>
>
>
>
>


Re: [GENERAL] PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Adrian Klaver

On 01/29/2014 08:29 AM, Tom Lane wrote:

Craig Ringer  writes:

On 01/29/2014 02:01 PM, Tom Lane wrote:





I wish ORMs would go away sometimes too, and I recognise that there are
certain kinds of broken and stupid that it makes no sense to cater to. I
just don't think this is one of them - this problem is universal, I
can't think of an ORM that *doesn't* have it, and it's created by
PostgreSQL, not the ORMs.


Uh, no, it's created by ORMs that haven't heard of type extensibility.
The reason they don't have this problem with other databases is exactly
because those other databases don't have type extensibility.


Agreed. An ORM that has tackled this issue is SQLAlchemy. It has the
concept of database dialects and uses that, hand in hand with the extensible
sqlalchemy.types, to deal with database-specific types.





regards, tom lane





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Tom Lane
Craig Ringer  writes:
> On 01/29/2014 02:01 PM, Tom Lane wrote:
>> Sorry, that is *just* as dangerous as implicit casts to text were.
>> It would bite a different set of queries, but not any less painfully.

> I'd be interested in some examples of that. It's clear that implicit
> casts *to* text were unsafe, but what makes implicit casts *from* text
> *to* validated types with specific structures just as bad?

The problem with an implicit cast is that it might or might not happen,
either way being contrary to the user's nearly-subconscious expectation.
If the cast isn't applied, then validation that might have been done
by the destination type is moot.

In the cases where current policy deems an implicit cast to be safe,
it doesn't matter a whole lot because the semantics of most operators
are about the same for either the source or destination type; even
if the user doesn't understand exactly which operator will be used for
"2 < 2.5", he's unlikely to be surprised by the results.  This happy
state of affairs doesn't hold for casts between text and some random
other type though.

Or in short, the risk factor here is that the user might write his
query assuming that an implicit cast from text would get applied,
but it doesn't and the operation proceeds using textual semantics.
This'd affect a different set of queries from the cases with
implicit casts to text, but the outcome is rather similar.
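
A trivial illustration of how far apart the semantics can be, with nothing
assumed beyond the unknown literals resolving as text:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TextSemantics {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "test", "test");
             Statement st = conn.createStatement();
             // The quoted literals default to text, so '10' > '9' compares
             // character by character and comes out false; 10 > 9 is true.
             ResultSet rs = st.executeQuery("SELECT '10' > '9', 10 > 9")) {
            rs.next();
            System.out.println(rs.getBoolean(1) + " " + rs.getBoolean(2));  // prints: false true
        }
    }
}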

Prior to 7.3, Postgres did actually allow a bunch of implicit
coercions from text, and they caused their share of problems.

> This is a painful issue for a significant group; you can find them on
> Stack Overflow, Rails and Django and Hibernate discussion boards, Java
> user groups, etc. They've been taught not to care about the DB and write
> "database-agnostic" code, but they're seeing Pg features that're so
> useful that they'd like to bend that and start using some Pg features.
> Only to find they can't do it without throwing away everything they have.

Well, that's the end result of being "database agnostic", when it's
defined in the terms these libraries have traditionally used, which
really is "we're going to pick a tiny subset of SQL and code strictly
to that".

I think the burden is on them to figure out how their abstractions
should be generalized.  What you're suggesting is that we poke a
big hole in our type system to let them avoid doing the first round
of necessary fixes --- but most likely, there are still things they'd
need to change even if we did so, leaving us with a hole in our type
system and not much to show for it.

> I wish ORMs would go away sometimes too, and I recognise that there are
> certain kinds of broken and stupid that it makes no sense to cater to. I
> just don't think this is one of them - this problem is universal, I
> can't think of an ORM that *doesn't* have it, and it's created by
> PostgreSQL, not the ORMs.

Uh, no, it's created by ORMs that haven't heard of type extensibility.
The reason they don't have this problem with other databases is exactly
because those other databases don't have type extensibility.

regards, tom lane




Re: [GENERAL] PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Andrew Sullivan
On Wed, Jan 29, 2014 at 03:05:23PM +0800, Craig Ringer wrote:
> This is a painful issue for a significant group

Yes.

> They've been taught not to care about the DB and write
> "database-agnostic" code, but they're seeing Pg features that're so
> useful that they'd like to bend that and start using some Pg features.
> Only to find they can't do it without throwing away everything they have.

Also yes.  This is important.  The _whole point_ of ORMs is that
they're hiding the underlying details of the database implementation;
in practice, this turns out to be lowest common denominator where
"lowest" is pretty low.  "Database-agnostic" code is, by definition,
not going to use database-specific features.

The idea that you can have a database-agnostic ORM that can use all
the clever features of the underlying database system is just
confused.  You can't have this both ways, and other strongly-typed
database systems don't do a whole lot better with this (often, you
will discover that the automatically-generated schemas these ORMs
produce use type text everywhere, for exactly this reason).  People
who insist that this ought to be possible in the general case are
saying, "I want a pony."

What you might do, however, is generate a bunch of CREATE CAST
statements for the implicit casts from text you want.  This is
dangerous for all the reasons Tom noted, but it might actually be good
enough for the ORM cases you're worried about.  If you think that,
maybe the right answer is to start up a project like
"ORM-cast-footgun" or whatever and create the relevant casts in a
generalized way.
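
For the record, the kind of statement such a project would emit looks like
this (a sketch only; it carries exactly the dangers Tom described, and
creating casts between built-in types generally requires superuser):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class OrmCastFootgun {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; connect as a role allowed to create
        // casts between these types (typically a superuser).
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "postgres", "postgres");
             Statement st = conn.createStatement()) {
            // WITH INOUT routes the cast through the text I/O functions, and
            // AS IMPLICIT is the footgun: varchar values will now be silently
            // coerced to json wherever the parser wants one.
            st.execute("CREATE CAST (varchar AS json) WITH INOUT AS IMPLICIT");
        }
    }
}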

> I guess this comes down to whether the goal is to be like Haskell -
> pure, perfect, and used by nobody for anything real - or a pragmatic
> tool for real world productive use.

That's a fun false dichotomy, but Postgres is a general purpose tool
and therefore needs to ship by default with the safest general purpose
behaviour.  Unlike many other systems, however, Postgres actually
gives you the power to adjust its behaviour according to your use
case, and there's no reason not to use that when appropriate.  

Best regards,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca




Re: [GENERAL] PostgreSQL 9.2.4 using large amount of memory

2014-01-29 Thread Bhushan Pathak
Is there any way to set the maximum memory a postgres connection can use,
either in postgres or in Linux [CentOS 5.6 64-bit]?

Thanks
Bhushan


On Wed, Jan 29, 2014 at 8:39 PM, Bhushan Pathak
wrote:

> Bruce -
> Will go through the blog posts. Thanks for the info.
>
> Raghav -
> I have not executed analyze after the upgrade.
>
> -- Bhushan
>
>
>
>
>
> On Wed, Jan 29, 2014 at 7:49 AM, Raghavendra <
> raghavendra@enterprisedb.com> wrote:
>
>> Thanks
>>
>> On 27 Jan 2014 22:35, "Bhushan Pathak" 
>> wrote:
>> >
>> > Hello,
>> >
>> > We have recently shifted to postgresql version 9.2.4 from 9.1.3. After
>> the migration, we observed that some of our delete queries on single table
>> [which have triggers, which in turn call other functions] have started
>> consuming large amounts of memory.
>> >
>> > In 9.1.3, this usage was up to 25 MB with the same load on the same
>> server. With 9.2.4 it has jumped up to ~580 MB. We are monitoring the RES
>> column from top output to get the memory usage.
>> >
>> > Our migration method from 9.1.3 to 9.2.4 was take a dump, un-install
>> 9.1.3, install 9.2.4 & restore the dump.
>> >
>> > I also went through the thread -
>> >
>> http://postgresql.1045698.n5.nabble.com/Memory-usage-after-upgrade-to-9-2-4-td5752733.html
>> >
>> > In the thread in the end it is mentioned that there was some data
>> corruption & points to 9.1.6 release notes. I went through the release
>> notes & only thing of note that I found was the re-indexing or performing
>> vacuum operation in case of in-place upgrade, which is not the case for me.
>> >
>> > Any help/pointers in debugging would be helpful.
>> >
>> > Thanks
>> > Bhushan
>> >
>>
>> Just wanted to know: after the upgrade, did you perform ANALYZE on the
>> database as part of the process? I agree this might not relate to the
>> question, but I am curious because this issue arose even after a proper
>> upgrade method.
>>
>> --Raghav
>>
>
>


Re: [GENERAL] Re: PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Adrian Klaver

On 01/29/2014 12:07 AM, Craig Ringer wrote:

On 01/29/2014 02:36 PM, David Johnston wrote:


The "correct" solution would seem to be for ORMs to "bind unknown" against
the input but only when the supplied string is meant to be a representation
of a PostgreSQL type as opposed to being actual string data. The ORM is free
to provide the necessary API to distinguish between the two and the caller
has to know then database to call the proper method (i.e., not setString if
your data intended for anything besides a text/varchar column).


I certainly agree that that's the ideal, and it's closer to achievable
than any other fix to these sorts of systems.

A challenge I've found when approaching this from the ORM side has been
getting people to care. The sentiment has tended to be along the lines
of: No other DBMS does this or requires this, why do we have to jump
through hoops just to make PostgreSQL happy?

I'm not claiming that's a good reason for inaction.

I think there's more hope of getting ORM systems to differentiate
between "unknown-typed literal" and "concrete text-typed literal" than
actually implementing proper support for Pg's numerous useful types. Not
much more hope, but some.

Look at the example that started this thread, though. The stack is:

PostgreSQL
   PgJDBC
 Java JDBC API
   EBean ORM
 Play! Framework

and *every level* needs to have a clue about this or a way to pass the
information through transparently.


And therein lies the problem: in the pursuit of simplicity, application
developers have embraced complicated stacks. A change at any point in 
the above stack has the potential to unravel the whole system. So it is 
not obvious to me that 'fixing' one end of the stack is going to solve 
the problem as a whole.




Now think about Hibernate, Sequel ORM, Django ORM, ActiveRecord,
EclipseLink, OpenJPA, DataMapper, Entity Framework, EBean, TopLink,
iBatis/MyBatis, Doctrine, Propel, CakePHP, Zend, SQLAlchemy, 

Wouldn't it be nice if we could find a solution to this user pain point
in one place?


See above, I do not think that is possible.





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] PostgreSQL 9.2.4 using large amount of memory

2014-01-29 Thread Bhushan Pathak
Bruce -
Will go through the blog posts. Thanks for the info.

Raghav -
I have not executed analyze after the upgrade.

-- Bhushan





On Wed, Jan 29, 2014 at 7:49 AM, Raghavendra <
raghavendra@enterprisedb.com> wrote:

> Thanks
>
> On 27 Jan 2014 22:35, "Bhushan Pathak"  wrote:
> >
> > Hello,
> >
> > We have recently shifted to postgresql version 9.2.4 from 9.1.3. After
> the migration, we observed that some of our delete queries on single table
> [which have triggers, which in turn call other functions] have started
> consuming large amounts of memory.
> >
> > In 9.1.3, this usage was up to 25 MB with the same load on the same
> server. With 9.2.4 it has jumped up to ~580 MB. We are monitoring the RES
> column from top output to get the memory usage.
> >
> > Our migration method from 9.1.3 to 9.2.4 was take a dump, un-install
> 9.1.3, install 9.2.4 & restore the dump.
> >
> > I also went through the thread -
> >
> http://postgresql.1045698.n5.nabble.com/Memory-usage-after-upgrade-to-9-2-4-td5752733.html
> >
> > In the thread in the end it is mentioned that there was some data
> corruption & points to 9.1.6 release notes. I went through the release
> notes & only thing of note that I found was the re-indexing or performing
> vacuum operation in case of in-place upgrade, which is not the case for me.
> >
> > Any help/pointers in debugging would be helpful.
> >
> > Thanks
> > Bhushan
> >
>
> Just wanted to know: after the upgrade, did you perform ANALYZE on the database
> as part of the process? I agree this might not relate to the question, but I am
> curious because this issue arose even after a proper upgrade method.
>
> --Raghav
>


Re: [GENERAL] Re: PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Merlin Moncure
On Wed, Jan 29, 2014 at 2:07 AM, Craig Ringer  wrote:
> On 01/29/2014 02:36 PM, David Johnston wrote:
>>
>> The "correct" solution would seem to be for ORMs to "bind unknown" against
>> the input but only when the supplied string is meant to be a representation
>> of a PostgreSQL type as opposed to being actual string data. The ORM is free
>> to provide the necessary API to distinguish between the two and the caller
> has to know the database to call the proper method (i.e., not setString if
> your data is intended for anything besides a text/varchar column).
>
> I certainly agree that that's the ideal, and it's closer to achievable
> than any other fix to these sorts of systems.
>
> A challenge I've found when approaching this from the ORM side has been
> getting people to care. The sentiment has tended to be along the lines
> of: No other DBMS does this or requires this, why do we have to jump
> through hoops just to make PostgreSQL happy?
>
> I'm not claiming that's a good reason for inaction.
>
> I think there's more hope of getting ORM systems to differentiate
> between "unknown-typed literal" and "concrete text-typed literal" than
> actually implementing proper support for Pg's numerous useful types. Not
> much more hope, but some.
>
> Look at the example that started this thread, though. The stack is:
>
> PostgreSQL
>   PgJDBC
> Java JDBC API
>   EBean ORM
> Play! Framework
>
> and *every level* needs to have a clue about this or a way to pass the
> information through transparently.
>
> Now think about Hibernate, Sequel ORM, Django ORM, ActiveRecord,
> EclipseLink, OpenJPA, DataMapper, Entity Framework, EBean, TopLink,
> iBatis/MyBatis, Doctrine, Propel, CakePHP, Zend, SQLAlchemy, 
>
> Wouldn't it be nice if we could find a solution to this user pain point
> in one place?

Hi Merlin!

The solution is obvious: "fix the ORM, or stop using it".  Don't even
get me started on hibernate -- it reserves (or at least did for a very
long time) the colon character to itself in an inescapable fashion and
does lots of other stupid things that are annoying in the extreme.

If you use a library that writes your SQL for you, you're just going
to have to limit your database features to what the ORM supports.
IMNSHO, any technology that hides the SQL statement from the programmer
or hacks it up in some unpreventable way should be avoided.  It's not
the database's job to work around them.  To those of you stuck in ORM
limbo, my advice would be to stick to basic types.  I would also
advise keeping as much business logic in the database as possible to
make the inevitable porting effort into a more intelligently designed
application stack easier.

merlin




[GENERAL] Replicating SQL 2000 to PostgreSQL 9.x

2014-01-29 Thread Edson Richter
I'm trying to set up SQL 2000 replication to PostgreSQL 9.x, and have followed
these steps:


1) set up the publisher and distributor in SQL 2000
2) set up the article (12 tables)
3) set up the linked server to PostgreSQL and executed some queries to
test (with success)
4) created the 12 tables in PostgreSQL using correct data types (I've
used Microsoft's guide to Oracle compatibility, which seems logical to me)

5) set up the Push Subscriber without creating schema/tables

After a few seconds, it fails, and the only error I have is quite vague
(a generic "An access violation occurred" error).
I've tried to configure the PostgreSQL ODBC driver for logging (without
success: no logs were generated).


Has anyone successfully set up SQL 2000 to PostgreSQL replication
who could share some thoughts?


Thanks,

Edson




[GENERAL] Continuous archiving and restore questions

2014-01-29 Thread Pedro Salgueiro
Hi,

In the past couple of days I have been trying Continuous Archiving and
Point-in-Time Recovery (PITR) and I have some doubts.

I successfully configured postgresql to perform the archive of the wal
files, using the following properties in postgresql.conf

archive_mode = on
> wal_level = archive
> archive_command = 'cp %p /opt/postgres-wal-backups/wal-files/%f'
> max_wal_senders = 3


To perform the base backup, I am using the pg_basebackup tool:

pg_basebackup --format tar --xlog -D - | gzip >
> ${BASE_BACKUP_FOLDER}/base_backup.tar.gz


After making a base backup, I made some changes on the database, including
creating new tables and adding data to them. Then I moved the data folder
to a safe place, restored the base backup, created the recovery.conf file,
copied the WAL files that were unarchived back to the restored data folder,
and restarted postgresql.

I used the following recovery.conf file:

restore_command = 'cp /opt/postgres-wal-backups/wal-files/%f %p'
> archive_cleanup_command = 'pg_archivecleanup
>  /opt/postgres-wal-backups/wal-files %r'


The restore procedure worked like a charm, and all data was recovered.

Then I created some more tables and added more data. Then I performed the same
restore procedure as before, using the same base backup. Apparently the
restore was successful and without errors, but the newly created data was
not restored, only the data created before the first restore.

Everything that was made after the first restore was lost.

Then I tried to make a fresh base backup, make some changes on the
database, and then, issue the restore procedure just as before, but using
the new base backup. This time, the changes made after the base backup were
restored successfully.

It seems that after a restore is made, I need to make a fresh base backup
in order to be able to make future restores. Is this correct, or am I doing
something wrong?

Thank you,
Pedro Salgueiro


Re: [GENERAL] Re: PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Florian Weimer

On 01/29/2014 09:07 AM, Craig Ringer wrote:


A challenge I've found when approaching this from the ORM side has been
getting people to care. The sentiment has tended to be along the lines
of: No other DBMS does this or requires this, why do we have to jump
through hoops just to make PostgreSQL happy?


Is this true?  Can you use other JDBC drivers (except SQLite) to insert 
Java Strings into NUMERIC columns and Java ints into text columns?



Look at the example that started this thread, though. The stack is:

PostgreSQL
   PgJDBC
 Java JDBC API
   EBean ORM
 Play! Framework

and *every level* needs to have a clue about this or a way to pass the
information through transparently.

Now think about Hibernate, Sequel ORM, Django ORM, ActiveRecord,
EclipseLink, OpenJPA, DataMapper, Entity Framework, EBean, TopLink,
iBatis/MyBatis, Doctrine, Propel, CakePHP, Zend, SQLAlchemy, 

Wouldn't it be nice if we could find a solution to this user pain point
in one place?


What about using types on the PostgreSQL side which match the 
application types?


In any case, you *can* use strings everywhere if you use the
stringtype=unspecified connection parameter:




--
Florian Weimer / Red Hat Product Security Team




Re: [GENERAL] Re: PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-29 Thread Craig Ringer
On 01/29/2014 02:36 PM, David Johnston wrote:
> 
> The "correct" solution would seem to be for ORMs to "bind unknown" against
> the input but only when the supplied string is meant to be a representation
> of a PostgreSQL type as opposed to being actual string data. The ORM is free
> to provide the necessary API to distinguish between the two and the caller
> has to know the database to call the proper method (i.e., not setString if
> your data is intended for anything besides a text/varchar column).

I certainly agree that that's the ideal, and it's closer to achievable
than any other fix to these sorts of systems.

A challenge I've found when approaching this from the ORM side has been
getting people to care. The sentiment has tended to be along the lines
of: No other DBMS does this or requires this, why do we have to jump
through hoops just to make PostgreSQL happy?

I'm not claiming that's a good reason for inaction.

I think there's more hope of getting ORM systems to differentiate
between "unknown-typed literal" and "concrete text-typed literal" than
actually implementing proper support for Pg's numerous useful types. Not
much more hope, but some.

Look at the example that started this thread, though. The stack is:

PostgreSQL
  PgJDBC
Java JDBC API
  EBean ORM
Play! Framework

and *every level* needs to have a clue about this or a way to pass the
information through transparently.

Now think about Hibernate, Sequel ORM, Django ORM, ActiveRecord,
EclipseLink, OpenJPA, DataMapper, Entity Framework, EBean, TopLink,
iBatis/MyBatis, Doctrine, Propel, CakePHP, Zend, SQLAlchemy, 

Wouldn't it be nice if we could find a solution to this user pain point
in one place?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

