Re: [GENERAL] Procedural Languages

2012-05-31 Thread Joshua Tolley
On Fri, Jun 01, 2012 at 08:13:28AM +1200, Mike Toews wrote:
> On 1 June 2012 02:36, John Townsend  wrote:
> > There are at least 10 procedural languages available for PostgreSQL. The one
> > that comes with the installation is PL/pgSQL.
> 
> The count looks closer to 18
> http://en.wikipedia.org/wiki/PostgreSQL#Procedural_languages
> but I doubt some get much use (PL/LOLCODE anyone?).

I use PL/LOLCODE... in its test suite, which runs on the rare occasion I make
changes to the language.

For whatever it's worth, PL/LOLCODE wasn't written as a language people would
use in production, but rather as a teaching tool. It taught me rather nicely,
too. :)

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com


pgpc2KfpMpMGx.pgp
Description: PGP signature


Re: [GENERAL] Debug Contrib/cube code

2011-05-14 Thread Joshua Tolley
On Fri, May 13, 2011 at 09:39:58PM +0530, Nick Raj wrote:
> Hi,
> I am able to debug postgres by --enable-debug but breakpoint in cube is not
> working. Can you more elaborate on your second point?
> Even i also tried to change makefile of contrib/cube by -g(debug) but it
> gives error.
> Any idea is welcome.
> 
> ps: sorry for late reply

I don't have any good answers beyond what was already presented here:

http://archives.postgresql.org/pgsql-hackers/2011-05/msg00784.php

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com


signature.asc
Description: Digital signature


Re: [GENERAL] Debug Contrib/cube code

2011-05-11 Thread Joshua Tolley
On Fri, May 06, 2011 at 10:43:23AM +0530, Nick Raj wrote:
> Hi,
> I am using postgresql-8.4.6. I want to debug the contrib/cube code. Can we
> able to debug that cube code?  Because there is no .configure  file to
> enable debug. Is there any way to change the makefile to enable debug?

If your postgres build uses --enable-debug, cube should use it as well. It
determines this either through pg_config, or by referring to makefiles in the
postgres source directory contrib/cube lives in.
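If for some reason it doesn't pick that up, a heavier-handed sketch (all paths here are assumptions for your system) is to rebuild with explicit debug flags:

```
# From a source tree configured with --enable-debug (paths assumed):
cd /path/to/postgresql-8.4.6
./configure --enable-debug CFLAGS="-O0 -g"
make && make install

# contrib/cube inherits the same flags from the top-level makefiles:
cd contrib/cube
make clean && make && make install
```

Building with -O0 as well as -g keeps gdb from stepping through optimized-away code, which is usually what makes breakpoints appear not to work.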

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Bidirectional replication

2011-05-05 Thread Joshua Tolley
On Thu, May 05, 2011 at 08:13:55PM -0700, John R Pierce wrote:
> On 05/05/11 8:05 PM, Joshua Tolley wrote:
>> Actually, Bucardo doesn't do statement replication. It, like Slony for
>> instance, replicates data, not SQL statements. And as you pointed out, it 
>> does
>> do bidirectional replication in a way that's sufficient for some use cases.
>
> does it use triggers for replication, similar to Slony, then?   
> obviously, it can't be doing WAL level replication or it wouldn't be  
> able to do any sort of master-master.

Exactly. It doesn't work quite like Slony under the hood, of course, but it is
trigger based. One notable difference between Bucardo and
Slony is that whereas Slony's triggers store the entire row data in a separate
log table when something changes, Bucardo stores only the primary key. As a
result, Bucardo doesn't apply each transaction individually to the replica
databases, but rather applies the combined effect of all transactions that took
place on the source since the last synchronization. For whatever that's worth.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Bidirectional replication

2011-05-05 Thread Joshua Tolley
On Thu, May 05, 2011 at 03:07:20PM +0530, tushar nehete wrote:
> Warning: prerequisite DBD:Pg 2.0 not found. We have 1.49.
> Warning: prerequisite ExtUtils::MakeMaker 6.32 not found. We have 6.30.

You need to install DBD::Pg, version 2.0 or greater. You also need to install
ExtUtils::MakeMaker version 6.32 or greater. These are both Perl packages,
available several different ways. Sometimes your operating system will
provide sufficiently recent versions through its own packaging system (e.g.
"yum install perl-DBD-Pg"); the more difficult way is to get it through CPAN,
per instructions here: http://www.cpan.org/modules/INSTALL.html
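As a concrete sketch (the package names are assumptions and vary by distro):

```
# OS packages (RHEL/CentOS-style names assumed):
yum install perl-DBD-Pg perl-ExtUtils-MakeMaker

# Or via CPAN, which builds the latest versions from source:
cpan DBD::Pg ExtUtils::MakeMaker
```

Afterward you can confirm the versions Bucardo will see with
`perl -MDBD::Pg -e 'print $DBD::Pg::VERSION'`.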

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Bidirectional replication

2011-05-05 Thread Joshua Tolley
On Mon, May 02, 2011 at 11:31:28PM -0700, John R Pierce wrote:
> AFAIK, the only postgres replication systems that even pretend to  
> support master-master are things like Bucardo that do the replication at  
> the SQL layer, by sending all update/insert/delete commands to both  
> servers, and under certain sequences of concurrent queries, you could  
> end up with different results on the two servers.

Actually, Bucardo doesn't do statement replication. It, like Slony for
instance, replicates data, not SQL statements. And as you pointed out, it does
do bidirectional replication in a way that's sufficient for some use cases. 

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Cross-schema view issue/question

2011-04-23 Thread Joshua Tolley
On Thu, Apr 14, 2011 at 07:33:17PM -0700, Bosco Rama wrote:
> 1) a function that removes/creates the views, etc in the 'xyz' schema
>that gets called as part of the replacement process for schema 'abc'
>
> 2) replacing the views, etc. with functions that return tables or
>the results of dynamic queries.
> 
> 3) have the user create the views, etc. as 'temp' items in their
>session each time.  Though this may still conflict with the
>replacement since there will still be a tight coupling between
>the temp objects and the 'abc' schema and the replacement occurs
>regardless of the current state of user connections.

#2 will screw up query planning substantially; I'd steer clear of it. The
other two options are essentially deciding whether you or your user will
recreate the xyz objects each time you replace abc. That's kind of a religious
issue, and depends on things like how stable your user expects the objects in
xyz to be.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] How to retrieve values of 'array' field using C library (libpq)

2011-02-09 Thread Joshua Tolley
On Wed, Feb 09, 2011 at 02:53:37PM +0300, Dmitriy Igrishin wrote:
> > My array is the array of integer values. So maybe there is some native
> > method to retrieve them (as in the 'themoney' example)?
> >
> Not in libpq. Probably, in libpqtypes.

I ended up writing one for pgsnmpd, which you're welcome to use. I can't
guarantee that writing my own was the best way to do it, or that it's free of
bugs. See pg_array.c and pg_array.h at
http://git.postgresql.org/gitweb?p=pgsnmpd.git;a=summary

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




[GENERAL] Compellent SAN, anyone?

2011-02-09 Thread Joshua Tolley
I'm hunting opinions on Compellent (http://www.compellent.com) storage devices
and postgres. Comments, anyone? Experiences? Case studies? Success or horror
stories? Thanks in advance.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Automatic database monitoring tool for PostgreSQL ... new project

2011-01-31 Thread Joshua Tolley
On Mon, Jan 31, 2011 at 07:28:36PM +0100, Michael Justin wrote:
> While I am still new to PostgreSQL and have real world experience with  
> DB2 InterBase and Firebird only I am interested to write a similar tool  
> for PostgreSQL.
>
> Maybe there are articles or discussions which help to get an  
> introduction to the practical usage of PostgreSQL Statistics Views, like  
> typical queries used to identify potential problems? Just to give an  
> example: on InterBase and Firebird, long running transactions can cause  
> performance problems, and a typical system monitoring task is to find  
> the clients or processes with the oldest transactions (older than x  
> minutes).

There are several such tools already. That's not to say that writing another
is a bad idea, just that there are lots you could look at for inspiration. One
of the most widely-used is check_postgres, found here:

http://bucardo.org/wiki/Check_postgres

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] plpyhton

2010-11-29 Thread Joshua Tolley
On Sat, Nov 27, 2010 at 04:38:27PM +0530, c k wrote:
> How can we implement procedure cache? Can you please give details regarding
> this?
> It will be helpful for me.

This is something the PL implementation needs to do for you. I finally looked
at the plpython code, and found that it does indeed do this for you. PLPython
caches a procedure's input and return types, and a compiled representation of
the procedure's source. 

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] plpyhton

2010-11-26 Thread Joshua Tolley
On Fri, Nov 26, 2010 at 05:28:52PM +0530, c k wrote:
> Thanks for your reply.
> But if a database has 100+ connections then isn't loading any such
> interpreter consumes more memory and requires more CPU? Does all PL
> languages behave in the same fashion?

If there are lots of connections, and each calls a plpython function (for
example), then each will load a python interpreter, and certainly that could
add up to serious memory usage. I can't speak for *every* PL; C functions
don't load any special interpreter, for instance, and I don't think there's
anything special you have to load to run SQL functions, beyond what gets
loaded anyway.

If you have problems with hundreds of connections using too much memory when
each loads an interpreter, you ought to consider getting more memory, using a
connection pooler, changing how you do things, or some combination of the
above.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] diagram tools?

2010-11-24 Thread Joshua Tolley
On Wed, Nov 24, 2010 at 10:23:15AM -0800, DM wrote:
> There are many of them, I use SchemaSpy java based - easy to generate.

Here are several other possibilities, which I've taken from the helpful
pg_docbot that lives on Freenode in #postgresql. For whatever it's worth, I
don't know that I've used any of these except the first link; it has proven
itself fairly helpful.

http://www.rbt.ca/autodoc/
http://druid.sf.net/
http://www.sqlmanager.net/
http://www.casestudio.com/ 
http://www.hardgeus.com/projects/pgdesigner/
http://www.thekompany.com/products/dataarchitect/
http://uml.sourceforge.net/index.php
http://schemaspy.sourceforge.net/
http://pgfoundry.org/projects/autograph/
http://archives.postgresql.org/pgsql-general/2008-05/msg00918.php
http://mogwai.sourceforge.net/?Welcome:ERDesigner_NG
http://www.dbvis.com/products/dbvis/
http://www.modelsphere.org/
http://www.sqlpower.ca/page/architect

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] plpyhton

2010-11-24 Thread Joshua Tolley
On Wed, Nov 24, 2010 at 11:56:16AM +0530, c k wrote:
> Hello,
> Does calling a pl/python function from each database connection load the
> python interpreter each time? what are the effects of using pl/python
> function in a environment where no. of concurrent connections are more and
> each user calls a pl/python function?
> 
> Please give the details about how pl/python functions are executed.
> Thanks and regards,
> 
> CPK

I don't know plpython terribly well, but for most PLs, calling them once in a
session loads any interpreter they require. That interpreter remains loaded
for the duration of the session. So each individual connection will load its
own interpreter, once, at the time of the first function call requiring that
interpreter. Most widely used languages also cache various bits of important
information about the functions you run, the first time you run them in a
session, to avoid needing to look up or calculate that information again when
you run the function next time.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] How to see what SQL queries are associated with pg_locks?

2010-10-04 Thread Joshua Tolley
On Fri, Oct 01, 2010 at 11:27:11AM -0700, Aleksey Tsalolikhin wrote:
> How to see what SQL queries are associated with pg_locks, please?

Look at the pid column in pg_locks. It will match the procpid column in
pg_stat_activity.
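For instance, a join along those lines (using the 8.x-era column names; pg_stat_activity's procpid and current_query were renamed in later releases):

```sql
SELECT a.procpid, a.current_query, l.locktype, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;  -- e.g. show only lock requests still waiting
```

Drop the WHERE clause to see granted locks as well.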

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Foreign keys and permissions oddity

2010-08-07 Thread Joshua Tolley
On Sat, Aug 07, 2010 at 08:34:12AM +0100, Richard Huxton wrote:
> On 07/08/10 01:13, Joshua Tolley wrote:
>> Is there some justification for this behavior that I should know already? It
>> seemed awfully strange when some folks here stumbled on it:
> [snip]
>> The key point seems to be that the owner of the referenced table has no
>> permissions on the table, although the referencing user does.
>
> Presumably the underlying trigger functions are executing as the owner  
> of the table. This would make sense in the (more common) case that you  
> want to reference a table you don't necessarily have full read access  
> for (e.g. member-id vs the whole row including address/phone).

Yeah, that appears to be what's happening, based on the code. It's certainly
confusing to look at, and I'm not sure it couldn't be described as a bug. I'll
continue to ponder that.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




[GENERAL] Foreign keys and permissions oddity

2010-08-06 Thread Joshua Tolley
Is there some justification for this behavior that I should know already? It
seemed awfully strange when some folks here stumbled on it:

$ create user a;
$ create user b;
$ commit;
$ \c - a
$ create table foo (id integer primary key);
$ revoke all on foo from a;
$ grant all on foo to b;
$ commit;
$ \c - b
$ create table bar (foo_id integer references foo (id));
$ commit;
$ insert into bar values (1);
ERROR:  permission denied for relation foo
CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."foo" x WHERE "id" 
OPERATOR(pg_catalog.=) $1 FOR SHARE OF x"

The key point seems to be that the owner of the referenced table has no
permissions on the table, although the referencing user does.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Partitioning into thousands of tables?

2010-08-06 Thread Joshua Tolley
On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>Is there any significant performance problem associated with partitioning
>a table into 2500 sub-tables?  I realise a table scan would be horrendous,
>but what if all accesses specified the partitioning criteria "sid".  Such
>a scheme would be the simplest to maintain (I think) with the best
>localisation of writes.

I seem to remember some discussion on pgsql-hackers recently about the number
of partitions and its effect on performance, especially planning time.
Unfortunately I can't find it right now, but in general the conclusion was
it's bad to have lots of partitions, where "lots" is probably 100 or more.
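For reference, the inheritance-based partitioning scheme of that era looks roughly like this ("sid" is from the question; the table and column names around it are assumptions):

```sql
CREATE TABLE measurements (sid integer NOT NULL, payload text);
CREATE TABLE measurements_sid_1 (CHECK (sid = 1)) INHERITS (measurements);
CREATE TABLE measurements_sid_2 (CHECK (sid = 2)) INHERITS (measurements);
-- ...and so on, one child per sid value.

-- With constraint exclusion enabled, a query that specifies sid
-- lets the planner skip the irrelevant children:
SET constraint_exclusion = on;
SELECT * FROM measurements WHERE sid = 1;
```

The planning-time cost mentioned above comes from the planner having to examine every child's CHECK constraint, which is why thousands of partitions gets expensive even when only one is ultimately scanned.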

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] column depends on sequence

2010-08-05 Thread Joshua Tolley
On Thu, Aug 05, 2010 at 06:23:54PM +0600, AI Rumman wrote:
>I need to write a query to find the primary key column_names that depend
>on sequences.
>Any help please.

On the assumption that the sort of dependence you're looking for is one where
a column has a default value of nextval('some_sequence'), this worked for me:

select
attrelid::regclass || '.' || attname,
adsrc
from
pg_attrdef ad
join pg_attribute pa
on (ad.adrelid = pa.attrelid and pa.attnum = ad.adnum)
where
adsrc ~* 'nextval';

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] PostgreSQL and distributed transactions

2010-07-31 Thread Joshua Tolley
On Thu, Jul 29, 2010 at 11:42:14AM +0200, Dimitri Fontaine wrote:
> Jayadevan M  writes:
> > But the initial setup for the client is done by 'Admin' and in that 
> > work-flow, we need distributed transactions. The transaction will start 
> > from the 'Admin" server, do some inserts on the 'Client' server and then 
> > either rollback or commit on both the servers. Is it possible to do this 
> > with PostgreSQL? Any helpful links?
> 
> See about 2 Phase Commit, but you will need a transaction manager I guess :
> 
>   http://www.postgresql.org/docs/current/static/sql-prepare-transaction.html
> 

If I may be allowed to toot my own horn, may I suggest this as an example:
http://blog.endpoint.com/2010/07/distributed-transactions-and-two-phase.html

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Weird trouble with select

2010-06-29 Thread Joshua Tolley
On Tue, Jun 29, 2010 at 08:44:35AM -0500, erobles wrote:
> Hi!
>
> I have  3  pc's:
> pc 'A'  with postgres 7.2 running
> pc 'B' and 'C'  connected to  'A'

You should upgrade as quickly as possible. 7.2 is ancient. Even 8.1 will hit
end-of-life soon.

> if i made  the next query on 'B' or 'C' :  "select * from detail with   
> id=72";  takes too long time to  return the data, near of 10 minutes  
> more or less.
> if i made  the next query on 'B' or 'C' :  "select * from detail with   
> id=71",  or whatever value in id , the data are returned immediatly.
>
> if i made  the next query on 'A'  (postgres server):  "select * from  
> detail with  id=72", the data are returned immediatly.

I wonder if you're just hitting caching effects, and the different client
hosts are confusing the issue. If you issue the query from one machine
multiple times, does it reliably take forever the first time, and go quickly
afterward?
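One way to check, assuming psql is available on the client hosts (table and column names are taken from the report; note the query needs WHERE, not "with"):

```sql
\timing on
SELECT * FROM detail WHERE id = 72;  -- first run: possibly cold cache
SELECT * FROM detail WHERE id = 72;  -- second run: fast if caching is the cause
```

If the second run is consistently fast from the same host, caching is the likely explanation; if every run from B and C is slow while A is fast, look at the network path instead.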

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] No PL/PHP ? Any reason?

2010-06-29 Thread Joshua Tolley
On Wed, Jun 23, 2010 at 05:17:13PM -0400, John DeSoi wrote:
> Any chance that the Parrot runtime could be used for PHP and other
> languages? I read that some folks are working on PL/Parrot. I'd really like
> to have PHP and Lisp for PL languages :).

Some folks are definitely working on it. The idea is that any language running
on Parrot will be usable with PL/Parrot. Whether that pans out in real life is
an open question, probably, but it's part of the plan.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] pgpool

2010-06-29 Thread Joshua Tolley
On Tue, Jun 22, 2010 at 07:07:33AM -0400, Geoffrey wrote:
> A couple of questions regarding pgpool:
>
> Is there a problem with using multiple connection pools for the same  
> database?  Point being, we might want to give a higher number of  
> connections to one group of users then another.  I can see doing this by  
> having separate connection pools.  The higher priority users have a  
> larger connection pool.

That's entirely reasonable.

> Is there a problem with using connection pooling and traditional  
> connections to connect to the same database?

Nope.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] IMMUTABLE columns in tables?

2010-06-16 Thread Joshua Tolley
On Tue, Jun 15, 2010 at 11:54:47AM -0400, Chris Browne wrote:
> I'd like that feature, and I don't think it takes too much arguing to
> get to the point that a declarative "IMMUTABLE" control is rather less
> opaque than someone saying "oh, you could just create a trigger
> running PL/LOLCODE to do that!"


PL/LOLCODE doesn't support running triggers, 'cuz no one has written that bit
yet.


--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Enforcing password standards

2010-06-14 Thread Joshua Tolley
On Fri, Jun 11, 2010 at 10:40:29AM -0700, DM wrote:
>Thanks everyone,
>I will wait for Postgres 9.0 to implement this feature then. Thanks

The contrib module supports enforcement of only some of the things you've
listed you want. For other items on your list (notably renewal), you're better
off integrating with some external authentication provider, as has been
suggested elsewhere in this thread.

--
Josh

>Thanks
>Deepak
>On Fri, Jun 11, 2010 at 10:30 AM, Joshua Tolley 
>wrote:
> 
>  On Thu, Jun 10, 2010 at 06:01:24PM -0700, DM wrote:
>  >How to force postgres users to follow password standards and
>  renewal
>  >policies?
>  >Thanks
>  >Deepak
> 
>  9.0 will ship with a contrib module called "passwordcheck" which will
>  enforce
>  some of these things, FWIW.
>  --
>  Joshua Tolley / eggyknap
>  End Point Corporation
>  http://www.endpoint.com




Re: [GENERAL] Enforcing password standards

2010-06-11 Thread Joshua Tolley
On Thu, Jun 10, 2010 at 06:01:24PM -0700, DM wrote:
>How to force postgres users to follow password standards and renewal
>policies?
>Thanks
>Deepak

9.0 will ship with a contrib module called "passwordcheck" which will enforce
some of these things, FWIW.
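For reference, enabling it once 9.0 is out should be a matter of a postgresql.conf setting along these lines (a sketch; check the contrib documentation for the module's actual options):

```
shared_preload_libraries = 'passwordcheck'
```

After a restart, the module rejects passwords that fail its checks (e.g. ones that are too short or match the username) at the time they're set.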

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] How to show the current schema or search path in the psql PROMP

2010-06-11 Thread Joshua Tolley
On Wed, Jun 09, 2010 at 05:52:49PM +0900, Schwaighofer Clemens wrote:
> Hi,
> 
> I am trying to figure out how I can show the current search_path, or
> better the first search_path entry (the active schema) in the PROMPT
> variable for psql.
> 
> Is there any way to do that? I couldn't find anything useful ...

5432 j...@josh# SHOW search_path;
  search_path
----------------
 "$user",public
(1 row)
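psql has no prompt escape for the search_path itself; a rough workaround (it won't update automatically if you change the path mid-session, and the database name here is an assumption) is to stuff it into a psql variable and reference that from the prompt:

```sql
-- Show the first schema actually in effect:
SELECT current_schema();

-- Capture the path once via backtick command substitution,
-- then expand the variable in the prompt with %:name::
\set schemapath `psql -Atc "SHOW search_path" mydb`
\set PROMPT1 '%:schemapath: %n@%/%# '
```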

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Connection's limit in SCO OpenServer 5.0.7 and pg 8.3.11 (no more than 94 connections)

2010-06-07 Thread Joshua Tolley
On Fri, Jun 04, 2010 at 10:51:27AM -0500, erobles wrote:
> Hello! I have postgres running on SCO OpenServer 5.0.7
>
> and I've noticed that only accepts up to 94 connections, no more ...
> I modified the values in postgresql.conf
> max_connections from  100 to 128
> shared_buffers  from 24 to 48  MB
>
> postgres is started  with:
> su - postgres -c "/usr/local/pgsql83/bin/postmaster -i -D  
> /usr/local/pgsql83/data"

What does it say when it refuses connections above 94? Also, when you connect
and type "show max_connections;", what gets returned? It might be you're
editing the wrong copy of postgresql.conf, or not restarting PostgreSQL after
editing it.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] postgres authentication against Windows Domain

2010-06-02 Thread Joshua Tolley
On Tue, Jun 01, 2010 at 11:56:19AM -0600, u235sentinel wrote:
>Is there is a way to connect postgres to authenticate against a windows
>domain without recompiling and using gssapi.  Ldap perhaps?
> 
>Thanks!

This page describes, among other things, LDAP authentication:
http://www.postgresql.org/docs/8.4/interactive/auth-methods.html

You might also refer to this PGCon 2010 talk:
http://www.fosslc.org/drupal/content/secure-postgresql-deployment

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Statement Pooling

2010-06-02 Thread Joshua Tolley
On Wed, Jun 02, 2010 at 10:10:24AM +0200, Janning wrote:
> But I remember a post of tom lane who said that prepared statements get 
> replaned since 8.4 if underlying statistics change. But I really can't find 
> it. 
> Maybe I do not remember correctly.

I saw a similar post yesterday, and think I was wrong in my original
assertion that they wouldn't be replanned. So now I'm thinking don't bother
manually refreshing your prepared statements, because postgresql will do it
for you :)

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Statement Pooling

2010-06-01 Thread Joshua Tolley
On Fri, May 28, 2010 at 10:09:22PM +0200, Janning wrote:
> On Thursday 27 May 2010 16:56:28 you wrote:
> > > Additionally c3p0 can cache java instances of
> > > "java.sql.PreparedStatement" which means it is caching the java object.
> > > So when using
> > > c3p0.maxStatementsPerConnection  =   100 it caches at most 100 different
> > > objects. It saves time on creating objects, but this has nothing to do
> > > with the postgresql database and its prepared statements.
> > >
> > > Right?
> >
> > That's the idea.
> 
> Really? Today I think c3p0 statement pooling is not only about saving time 
> in object creation, but about letting postgresql reuse an already prepared 
> statement.

Well, it was probably too strong to say it "has nothing to do with" your
database. Anyway, it sounds like you've got it right.



> > > I remember postgresql 8.4 is replanning prepared statements when
> > > statistics change occur, but I didn't find it in the release notes. It is
> > > just saying "Invalidate cached plans when referenced schemas, functions,
> > > operators, or operator classes are modified". Does PG replans prepared
> > > statements from time to time if underlying data statistics change?
> >
> > I don't think so, though I may be wrong. The change you refer to replans
> > such things when the actual objects change, such as when you remove a
> > column or something that would make the plan fail to execute.
> 
> Ok, so i should be aware of this. if the execution plan postgresql is caching 
> gets old, i can run into trouble. 
> 
> So i should close my connections in the pool from time to time to force 
> replanning of my prepared statements.

I'm not sure this is a great idea, but it would depend on your application.
With caching and pooling and such, a prepared statement might live a really
long time, but your database's statistics might take even longer to change
sufficiently to suggest one plan is better than another. Explicitly replanning
sounds like it's probably more work than it's worth, unless you see specific
plan problems.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Statement Pooling

2010-05-27 Thread Joshua Tolley
On Tue, May 25, 2010 at 05:28:10PM +0200, Janning wrote:
> Our hibernate stack uses prepared statements. Postgresql is caching the 
> execution plan. Next time the same statement is used, postgresql reuses the 
> execution plan. This saves time planning statements inside DB.

It only uses the cached plan if you prepare the statement and run that
prepared statement. Running "SELECT foo FROM bar" twice in a row without any
preparing will result in the query being parsed, planned, and executed twice.
On the other hand, doing something like this:

p = conn.prepareStatement("SELECT foo FROM bar");

...and then repeatedly executing p, parsing and planning for the query would
occur only once, at the time of the prepareStatement call.
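At the SQL level, what the driver arranges corresponds roughly to the following (table and column names are from the example above, with an assumed id column):

```sql
PREPARE get_foo (integer) AS
    SELECT foo FROM bar WHERE id = $1;  -- parsed and planned once
EXECUTE get_foo(1);                     -- reuses the stored plan
EXECUTE get_foo(2);
DEALLOCATE get_foo;                     -- otherwise lives until session end
```

The JDBC driver manages these server-side prepared statements for you; the point is just that the plan is tied to the PREPARE, not to each EXECUTE.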

> Additionally c3p0 can cache java instances of "java.sql.PreparedStatement" 
> which means it is caching the java object. So when using   
> c3p0.maxStatementsPerConnection  =   100 it caches at most 100 different  
> objects. It saves time on creating objects, but this has nothing to do with 
> the postgresql database and its prepared statements.
> 
> Right?

That's the idea.

> As we use about 100 different statements I would set 
>   c3p0.maxStatementsPerConnection  =   100
> 
> Is this reasonable? Is there a real benefit activating it? 

Answering that question for your situation really requires benchmarking with
and without statement caching turned on. Your best bet is probably to set it
to a value that seems decent, and revisit it if you find a performance
bottleneck you need to resolve which looks like it's related to statement
caching.

> I remember postgresql 8.4 is replanning prepared statements when statistics 
> change occur, but I didn't find it in the release notes. It is just saying  
> "Invalidate cached plans when referenced schemas, functions, operators, or 
> operator classes are modified". Does PG replans prepared statements from time 
> to time if underlying data statistics change?

I don't think so, though I may be wrong. The change you refer to replans such
things when the actual objects change, such as when you remove a column or
something that would make the plan fail to execute.


--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Connection lost

2010-05-19 Thread Joshua Tolley
On Wed, May 19, 2010 at 1:46 PM, Hernan Danielan
 wrote:
> Yes, the most interesting thing is that the log says
> unexpected EOF on client connection or  could not send data to client:
> Broken pipe

That log mostly just says something disconnected uncleanly, but
doesn't say PostgreSQL closed the connection on purpose. Any JDBC
driver people reading this, with ideas?

--
Joshua Tolley  /  eggyknap
End Point Corporation

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection lost

2010-05-18 Thread Joshua Tolley
On Tue, May 18, 2010 at 1:18 PM, Hernan Danielan
 wrote:
> Hello! I am using postgress 8.4. I am trying to save to my DB a Large Binary
> Object, in localhost, let's say 1.4MB. I read that LargeObjectAPI should be
> used.
> I have a problem that sometimes i can store the file and some others i get
> an exception of
>>>org.postgresql.util.PSQLException: An I/O error occured while sending to
>>> the backend.
>>>java.net.SocketException: Socket closed

Do the PostgreSQL logs include any useful information?

--
Joshua Tolley  /  eggyknap
End Point Corporation



Re: [GENERAL] Table Column Retrieval

2010-02-22 Thread Joshua Tolley
On Mon, Feb 22, 2010 at 07:23:09PM -0700, dmp wrote:
>
> Recently I read that one of the distinctions between a standard database  
> and
> a columnar one, which led to an increase in its efficiency, was and I  
> quote:
>
> "Only relevant columns are retrieved (A row-wise database would pull
> all columns and typically discard 80-95% of them)"
>
> Is this true of PostgreSQL? That even though my query does not call for a
> column it is still pulled from the table row(s). I know that my client via
> the JDBC does not contain the data in the ResultSet for the column, because
> of the packet monitoring I have done on queries.

PostgreSQL doesn't use columnar storage. Data are read from the disk in pages,
and those pages contain not only the columns you're interested in but all the
other columns in the table as well. The parts of the table you're not
interested in aren't returned as part of the query, and thus don't show up in
your result set, but they do get read from disk.

The disadvantage of a columnar system is that when you read multiple columns,
you have to piece together the rows of the table using columns read from
various parts of the disk, effectively identical to doing a bunch of joins.
For some workloads the columnar storage is a win, and for some workloads,
row-based storage is the best bet.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com


signature.asc
Description: Digital signature


Re: [GENERAL] query a table from one database to another

2010-01-28 Thread Joshua Tolley
On Thu, Jan 28, 2010 at 11:36:55AM +0600, AI Rumman wrote:
>I am getting the error:
> 
>
>LINE 1: select dblink_connect('dbname=postgres');
>   ^
>HINT:  No function matches the given name and argument types. You might
>need to add explicit type casts.
>
> 
>Please tell me why? I am using Postgresql 8.3

I'm guessing that this is because you're trying to use dblink without having
installed it. I don't know how you've installed PostgreSQL; on my Ubuntu
system there's a package called postgresql-contrib I would need to install
first.

Once you have the package, you need to get dblink into your database. Like
most contrib modules, dblink comes with a SQL script you need to run to create
the necessary functions and other objects. In this case it's called
dblink.sql. Find that file -- in my case it's part of the postgresql-contrib
package -- and execute it in your database to create that function. 
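
For example, something like this should work from psql once the contrib package
is installed (the path below is only an illustration; where dblink.sql actually
lives depends on your distribution):

```sql
-- Load dblink's functions into the current database
-- (adjust the path to wherever your packaging puts contrib scripts)
\i /usr/share/postgresql/8.3/contrib/dblink.sql

-- If it worked, this should now succeed instead of raising
-- "No function matches the given name and argument types"
SELECT dblink_connect('dbname=postgres');
SELECT dblink_disconnect();
```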

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] query a table from one database to another

2010-01-27 Thread Joshua Tolley
On Wed, Jan 27, 2010 at 02:52:58PM +0600, AI Rumman wrote:
>I am using Postgresql 8.3
> 
>I have two databases:
>db1
>db2
> 
>db1 holds a table tab1.
> 
>Is it possible to get the value of the above tab1 in db2 database? 
> 
>Regards

You'll need something like the dblink contrib module to get one database to
talk to another.
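
To illustrate, once dblink is installed in db2, a query against db1's table
might look roughly like this (the column list is invented; it has to match
tab1's actual definition):

```sql
SELECT *
FROM dblink('dbname=db1',
            'SELECT id, name FROM tab1')
     AS t(id integer, name text);
```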

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] replication from multiple "master" servers to a single read-only slave

2010-01-12 Thread Joshua Tolley
On Mon, Jan 11, 2010 at 04:18:30PM -0800, Omar Mehmood wrote:
> (but I want to avoid using DB triggers).



> I will check out Bucardo.

Bucardo uses triggers just like Slony does. That said, it seems strange that
you'd want to avoid them. Is there any particular reason you want to avoid
them?

Bucardo should handle the disconnection problems you described just fine.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] DataBase Problem

2009-12-30 Thread Joshua Tolley
On Wed, Dec 30, 2009 at 09:49:52AM +0800, Premila Devi wrote:
>I am having problem as :
> 
> 
> 
>Caused by: org.springframework.transaction.TransactionSystemException:
>Could not roll back Hibernate transaction; nested exception is
>org.hibernate.TransactionException: JDBC rollback failed
  ...snip...
>Caused by: org.hibernate.TransactionException: JDBC rollback failed
  ...snip...
>Caused by: java.sql.SQLException: Couldn't perform the operation rollback:
>You can't perform any operations on this connection. It has been
>automatically closed by Proxool for some reason (see logs).

Without some idea of why the connection was "automatically closed by Proxool"
there's likely little we can do to help. If PostgreSQL is to blame, there's
probably useful information about the problem in your PostgreSQL logs. The
logging documentation might be useful to you here.

http://www.postgresql.org/docs/current/static/runtime-config-logging.html

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Comparing 2 databases

2009-12-29 Thread Joshua Tolley
On Tue, Dec 29, 2009 at 03:21:18PM -0500, akp geek wrote:
>thanks for the repsonse. I appreciate it. are there any limitations on
>using this one?  Means that we have to the same user on both databases and
>same passwords.
> 
>I have used the command following way
> 
>  check_postgres.pl --action=same_schema -H 172.  -p 1550  
>--db=myProdDB  --dbuser=prodUser  --dbpass=prodPwd  --dbhost2=172.x
>--db=testDB  --dbuser=testUser  --dbpass=testPwd  --verbose >
>difference.txt
> 
>what happend was , it complained about the password, then I tried
>replacing the testPwd with prodPwd, then it started executing. but it
>prompted for password for testuser. that's where I got confused

You might try a pgpass file[1] and skip providing the passwords on the command
line. 

>One question I have is, is there an option to specify schema also

Check the docs under BASIC FILTERING[2].  You can tell it to ignore objects
with certain names, or to include only those objects with the given names.

[1] http://www.postgresql.org/docs/current/interactive/libpq-pgpass.html
[2] http://bucardo.org/check_postgres/check_postgres.pl.html#basic_filtering

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Comparing 2 databases

2009-12-29 Thread Joshua Tolley
On Tue, Dec 29, 2009 at 11:43:58AM -0500, akp geek wrote:
>Hi all -
> 
>  I have postgres running on 2 servers. one production and one
>testing. What would be the best way to compare the 2 database, so find out
>the differences? Can you please advice?
> 
>regards

That depends on what you mean by "compare". check_postgres[1] has a schema
comparison action you can use.

[1] http://bucardo.org/wiki/Check_postgres

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] replication dbs

2009-12-16 Thread Joshua Tolley
Bucardo is released under the BSD license (see http://bucardo.org for more
details). It's actively developed, and we at End Point have found it very
useful and reliable. But we wrote it :)

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com

On Tue, Dec 15, 2009 at 03:05:34PM -0700, Gauthier, Dave wrote:
>These links are very helpful.  Thanks !
> 
>At first glance, Bucardo loks like a good one.  Is it open (free)?  Can
>anyone speak about reliability, robustness, etc... ?
> 
> 
> 
>Thanks Again.
> 
> 
> 
> 
> 
>--
> 
>From: Greg Smith [mailto:g...@2ndquadrant.com]
>Sent: Tuesday, December 15, 2009 3:31 PM
>To: Gauthier, Dave
>Cc: pgsql-general@postgresql.org
>Subject: Re: [GENERAL] replication dbs
> 
> 
> 
>Gauthier, Dave wrote:
> 
> 
> 
>I have questions regarding replication db tech for PG.  Is this the right
>forumn?
> 
>Sure.  You can find some slightly rough documentation for help get you
>started at
>http://www.postgresql.org/docs/current/interactive/high-availability.html
>and
>
> http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling
>, with more details about some of the projects/products that add features
>in this area at http://wiki.postgresql.org/wiki/Clustering
> 
>  --
> 
>  Greg Smith2ndQuadrant   Baltimore, MD
> 
>  PostgreSQL Training, Services and Support
> 
>  g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] createlang error(s)

2009-11-04 Thread Joshua Tolley
On Wed, Nov 04, 2009 at 12:12:50AM -0500, Tom Lane wrote:
> John Burski  writes:
> > I got my postgresql build from the from a Novell SuSE Linux repository
> > (mirror.susestudio.com).  
> 
> > Here's the result of a run of ldd against the plperl.so file:
> 
> > linux-gate.so.1 =>  (0xe000)
> > libperl.so => not found
> 
> File a bug with the SUSE packager.  "Medium broil" would be about right.
> Don't they test their packages?
> 
> > I copied the "libperl.so" file from its original location to the
> > "/lib" directory and tried the "createlang" command again - SUCCESS!
> 
> OK as a short-term workaround, but that's no excuse for it not being
> right to start with ...
> 
>   regards, tom lane

I've had this problem, too, though I hate to admit that I haven't actually
filed a bug with SUSE already. I was fairly far removed from the actual
installation in that case, if it's any excuse ;) Anyway, I can confirm you're
not the only one having this problem.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] tar error while running basebackup

2009-10-13 Thread Joshua Tolley
On Tue, Oct 13, 2009 at 12:40:37PM +0200, Andre Brandt wrote:
> tar: ./base/208106/209062: File shrank by 262144 bytes; padding with zeros
> tar: ./base/208106/210576: file changed as we read it
> tar: ./base/208106/210577: file changed as we read it
> tar: ./base/208106/210431: file changed as we read it
>

This is entirely normal.

> How can this happen? I always thought, that, when in backup mode,  
> nothing is able to change the database - so the database files shouldn't  
> change. Can autovaccumdb cause the changes?

pg_start_backup() doesn't tell the database to stop writing changes to disk;
it essentially just says "perform a checkpoint", which means all changes as of
that instant are written to the base data files. That ensures that you start
your base backup in a consistent state. When you recover it, replaying the WAL
files will fix any weirdness in your base backup, and you'll get a working
database, current up to the last WAL file you recovered.
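
A minimal version of that dance looks like the following; the label is a
placeholder, and the copy itself runs from the shell, not psql:

```sql
-- 1. Tell the server to checkpoint and mark the backup start:
SELECT pg_start_backup('nightly_base_backup');

-- 2. From the shell, copy the data directory, e.g.
--      tar czf /backups/base.tar.gz -C $PGDATA .
--    Warnings about files changing mid-read are expected and harmless.

-- 3. Mark the end of the backup so the needed WAL range is known:
SELECT pg_stop_backup();
```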

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Looking for way to replicate master db to multiple mobile databases

2009-09-24 Thread Joshua Tolley
On Thu, Sep 24, 2009 at 05:09:26PM +0200, Cédric Villemain wrote:
> > Bucardo is a good choice for this usage model because it was
> > originally designed to work over a lossy network connections.
> 
> yes, but isn't bucardo designed to 2 nodes only ?

Bucardo's multi-master replication works only between two hosts, unless
perhaps you can assure that only certain primary keys will be updated on
certain hosts, or do some other trickery. Syncing from one master to multiple
slaves is straightforward, if all you need is master->slave.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] PG connections going to 'waiting'

2009-09-06 Thread Joshua Tolley
On Sat, Sep 05, 2009 at 07:41:42PM -0400, Chris Barnes wrote:
>Is the any way to look at the statistics on the name of table, length and
>type over a period of time?

Only if you've captured them. Various system information views provide those
statistics (or the server's best guesses about those statistics) for the
moment they're queried, but when those values are updated, the old values are
lost.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] PG connections going to 'waiting'

2009-09-05 Thread Joshua Tolley
On Sat, Sep 05, 2009 at 01:08:30PM -0400, Alan McKay wrote:
> > pg_locks?  Somebody taking exclusive lock on a widely-used table might
> > explain that.
> 
> OK, in theory we could do the following, no?
> 
> Use our PITR logs to restore a tertiary system to the point when we
> were having the problem (we have a pretty wide 2 or 3 hour window to
> hit), then query the pg_locks table on that system?

I'm guessing this wouldn't work. pg_locks isn't a table, it's a view, based on
the pg_lock_status function. I don't know how that function works, but I'm
guessing it gathers lock data from shared memory or some such, and would only
contain the same values it did at the time of the crash if you had the same
queries and autovacuum processes currently in flight. Autovacuum presumably
didn't contribute to the locking in meaningful ways, so if you have logs of
the queries you were running at the time, you might be able to get the same or
a similar condition by executing those queries against the restored system.
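
If you do manage to reproduce the workload, something along these lines shows
which backends are stuck waiting and what they're running (procpid and
current_query are the 8.x column names; later releases renamed them):

```sql
-- Ungranted locks, with the query each waiting backend is executing
SELECT l.pid, l.locktype, l.relation::regclass AS relation,
       l.mode, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;
```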

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Viable alternatives to SQL?

2009-08-27 Thread Joshua Tolley
On Thu, Aug 27, 2009 at 06:43:51AM -0700, Kelly Jones wrote:
> Are there other viable ways to query data? I read a little on
> "Business System 12" (BS12), Tutorial D, and even something called
> T-SQL (I think), but they all seem theoretical and not fully
> implemented.
> 
> I want a query language that non-techies can use easily, but also
> supports arbitrarily complex queries. Does such a language exist?

I've never heard of the first two, as far as I can recall. T-SQL, however, is
quite common (http://en.wikipedia.org/wiki/Transact-SQL); although I've never
used it, I seriously doubt it will make things any more user-friendly.

Several business intelligence applications try to allow users to generate
their own queries by providing a query builder system of some sort. These are
far from simple drop-in systems, in general. For instance, I'm using Pentaho
(http://www.pentaho.com) in a project, to allow users to write their own
reports. I've had to create a sort of schema "metadata" thing, which tells
Pentaho what my database looks like, how various tables relate to each other,
etc., and creates a set of objects my users will immediately understand, along
with some method to turn those objects into database queries. Pentaho allows
the users to drag and drop those objects into a report, creating filters and
sorting constraints, different groupings, etc., in a fairly user-friendly way.
When they run the report, Pentaho translates their input into SQL.

Anyway, you might try using a reporting package rather than trying to write
your own query interface and provide for user-friendly error reporting, decent
security constraints, etc.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Schema diff tool?

2009-08-21 Thread Joshua Tolley
On Fri, Aug 21, 2009 at 11:43:49AM -0700, David Kerr wrote:
> Is there a default/standard (free) schema diff tool that's in use in the  
> community?

check_postgres.pl will compare schemas and report on results.
http://bucardo.org/check_postgres/

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Postgres and multiple updates in one statement

2009-07-27 Thread Joshua Tolley
On Mon, Jul 27, 2009 at 07:31:37AM -0600, Scott Marlowe wrote:
> On Mon, Jul 27, 2009 at 2:10 AM, Jamie
> Lawrence-Jenner wrote:
> > Do 1 pass to retrieve the 5 primary keys, then update all rows in parallel
> 
> I would do 5 passes.  Better to have one update statement to reduce bloat.

You could possibly use UNION or UNION ALL to consolidate your 5 passes into
one pass. You could also possibly use UPDATE FROM to avoid having to return
the primary keys at all, and get the whole thing done in one query.
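
As a sketch (table and column names invented), the UPDATE ... FROM version
might look like:

```sql
-- One statement instead of a SELECT pass plus five separate UPDATEs
UPDATE orders o
SET status = v.new_status
FROM (VALUES (101, 'shipped'),
             (102, 'cancelled'),
             (103, 'shipped')) AS v(order_id, new_status)
WHERE o.order_id = v.order_id;
```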

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] comparing NEW and OLD (any good this way?)

2009-07-23 Thread Joshua Tolley
On Thu, Jul 23, 2009 at 01:40:45PM +0200, Willy-Bas Loos wrote:
> Is there anything to say against this, or can i go ahead and recommend
> this to everyone who wants to check if anything changed before doing
> what their update triggers do?

Perhaps 8.4's suppress_redundant_updates_trigger() could be helpful in this
case:

http://www.postgresql.org/docs/8.4/interactive/functions-trigger.html
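
Installing it is a one-liner (table name invented); the docs suggest naming it
so it sorts last, since triggers on a table fire in alphabetical order:

```sql
CREATE TRIGGER z_min_update
BEFORE UPDATE ON mytable
FOR EACH ROW
EXECUTE PROCEDURE suppress_redundant_updates_trigger();
```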

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




[GENERAL] Happy Birthday...

2009-07-08 Thread Joshua Tolley
Since no one else seems to have sent it yet, I'll do the obligatory annual
Happy Birthday email. Happy Birthday, PostgreSQL. You're a teenager now. The
pimples will go away, in time.

- Josh / eggyknap




Re: [GENERAL] Free OLAP software for Postgres databas

2009-06-29 Thread Joshua Tolley
On Mon, Jun 29, 2009 at 09:02:30AM +0300, Sim Zacks wrote:
> 
> > The first is easy; simply configure data sources pointed at the PostgreSQL
> > database. Note that the JDBC drivers that ship with Pentaho are, in my
> > experience, of inconsistent version, and you might want to update them to 
> > the
> > latest available for your PostgreSQL and Java versions.
> >   
> In my experience you don't want to upgrade the jars that come with
> Pentaho. If it is not the versions that they come with the chances are
> that things will not work. The JDBC driver might be upgradeable, I
> believe we downloaded that by ourselves.

We've not had problems upgrading the JDBC drivers (or indeed installing them,
as IIRC, some components just haven't had a PostgreSQL driver in the default
installation). Your Mileage May Vary. We haven't touched any other libraries
that ship with Pentaho, which I tend to think is a wise decision on our part
:)

> Also we had a much better experience with Jasper (free version) then
> Pentaho (free version).

When we tried to decide between the two, we took both out for brief test
drives. They seemed almost functionally equivalent, and there were no major
advantages or deficiencies we found to indicate one over the other. But that's
a decision made with regard only to those features we cared about.

- Josh / eggyknap




Re: [GENERAL] Free OLAP software for Postgres databas

2009-06-27 Thread Joshua Tolley
On Sat, Jun 27, 2009 at 02:04:13PM -0700, Michal Szymanski wrote:
> 
> > I've used Pentaho with pgsql and it worked pretty well.  I'm pretty
> > sure jasper as well can work with pgsql.
> 
> Where can  I find information how to install Pentaho on Postgres? Do
> you use free edition?

We use both the free and enterprise versions, and both work with PostgreSQL.
Note that there are two different things you might be talking about, here:

1) PostgreSQL contains your data warehouse, or pieces thereof.
2) PostgreSQL contains the Pentaho repository

The first is easy; simply configure data sources pointed at the PostgreSQL
database. Note that the JDBC drivers that ship with Pentaho are, in my
experience, of inconsistent version, and you might want to update them to the
latest available for your PostgreSQL and Java versions.

The second is more difficult, but also very possible. There are PostgreSQL
scripts in Pentaho's biserver/data/postgresql directory. These create the
necessary databases. Though they're poorly written, IMO, they'll give you a
good idea what you really need to create. The hardest bit is editing all the
configuration files for the various bits of the Java stack (spring, hibernate,
acegi, pentaho...). It's best to find instructions for using MySQL as the
repository, and substitute "PostgreSQL" for "MySQL". One possible such
reference is
http://www.prashantraju.com/pentaho/guides/biserver-2.0-final/biserver2.0-local-final.pdf

- Josh / eggyknap




Re: [GENERAL] A question about inheritance and sequence

2009-06-24 Thread Joshua Tolley
On Wed, Jun 24, 2009 at 01:43:20PM +0200, Marko Pahić wrote:
>Hello,
>I have two databases, and I want the same structure, and if I change the
>structure of one database it changes the structure of the other. In
>documentation I saw that the syntax goes something like this: CREATE TABLE
>"User" () INHERITS database2."User"; But it's not working like this, and I
>couldn't find the example. Can you please write the correct example?

If they're two separate databases, you can't have objects in one inherit from
objects in another. Two separate databases know nothing of each other. If, on
the other hand, you have separate schemas within the same database, objects in
one schema can inherit from objects in another. The partitioning documentation
shows an example of the basic syntax; if the tables involved are in different
schemas, simply write schemaname.tablename instead of just tablename.

>The other problem I have if with sequence. I have this table structure:
>CREATE TABLE "Notes" (
>userkey character(40) NOT NULL,
>noteid SERIAL NOT NULL,
>note text,
>PRIMARY KEY (userkey, noteid),
>FOREIGN KEY (userkey) REFERENCES "User"(userkey) ON UPDATE CASCADE ON
>DELETE CASCADE
>);
> 
>How can I make the noteid go from 1 to max for each user? Because if I add
>a row it goes +1 for all users? How would I do that the note id would go
>+1 for each userkey. Do I have to make nested queries and where would I do
>it?

You could theoretically make a sequence for each user, but that would likely
be so painful to manage that you'd quickly give it up. Note, by the way, that
having a table name that you're required to double-quote all the time is also
often found to be a big pain. The other way to do it involves issuing a query
each time you add a new Note to find out what the user's last noteid was, and
add one to that. That sounds both slow and painful, though perhaps not as
painful as maintaining one sequence per user.
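
The query-per-insert approach would look something like this; note that two
concurrent sessions could compute the same noteid, in which case the primary
key rejects one and the application has to retry:

```sql
INSERT INTO "Notes" (userkey, noteid, note)
SELECT 'abc123',                       -- the user's key (placeholder)
       COALESCE(MAX(noteid), 0) + 1,   -- next id for this user only
       'the note text'
FROM "Notes"
WHERE userkey = 'abc123';
```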

- Josh / eggyknap




Re: [GENERAL] Using results from DELETE ... RETURNING

2009-06-05 Thread Joshua Tolley
On Mon, Jun 01, 2009 at 04:21:27PM +0100, Shak wrote:
> When RETURNING from a DELETE statement (or similar), how do you access the
> results being returned?
>
> Something like:
>
> SELECT COUNT(*) FROM (DELETE FROM a RETURNING *) ;
>
> sounds reasonable but results in a syntax error. I am able to return single
> results into a variable or record, but not more than one result.

You can't. It's on the TODO list (http://wiki.postgresql.org/wiki/Todo). Look
for "Allow INSERT/UPDATE ... RETURNING" and "Allow INSERT ... DELETE ... 
RETURNING"

- Josh / eggyknap




Re: [GENERAL] cursores

2009-06-03 Thread Joshua Tolley
On Wed, Jun 03, 2009 at 09:36:18AM -0400, Esneiker wrote:
>I'm trying to build a cursor for postgres but I have some problems. Can
>anybody send me an example of cursor in postgres?

What about these examples?

http://www.postgresql.org/docs/8.3/interactive/plpgsql-cursors.html

- Josh / eggyknap




Re: [GENERAL] Postgres registry access using java

2009-05-27 Thread Joshua Tolley
On Wed, May 27, 2009 at 01:21:40PM +0530, Anirban Pal wrote:
>Can any body tell me how to fetch directory path for lib or data directory
>for postgres installation in windows using registry. Solution using Java
>language will be highly solicited.

Some PL/Java or PL/J function might do what you want, provided Java gives you a
means to access the registry. But what's wrong with "SHOW data_directory;" or
"SELECT setting FROM pg_settings WHERE name = 'data_directory';"?

- Josh / eggyknap




Re: [GENERAL] using explain to get query expected time

2009-05-25 Thread Joshua Tolley
On Mon, May 25, 2009 at 12:10:21AM -0700, Scara Maccai wrote:
> is there any chance to get the "Planner Cost Constants" right enough to get a 
> "good" estimate in seconds of how long a query is supposed to run?
> The "rowcount" estimates are always good (there is no skew data at all in the 
> db, values are pretty much "plain" distributed)

The most straightforward way I can think of would be to make sure your planner
constants (seq_page_cost, etc.) reflect time in meaningful units, which would
be very hard, and probably wouldn't work for reasons I don't know. Anyway, the
quick answer is no, 'cause it's really hard.

- Josh / eggyknap




Re: [GENERAL] Column oriented pgsql

2009-05-08 Thread Joshua Tolley
On Fri, May 08, 2009 at 11:25:30AM -0700, John R Pierce wrote:
> Mag Gam wrote:
>> Is it possible to tweak (easily) Postgresql so the storage is column
>> oriented versus row-oriented? We would like to increase read
>> optimization on our data which is about 2TB.
>>
>>   
>
> you read your tables by column, rather than by row??   
>
> SQL queries are inherently row oriented, the fundamental unit of storage  
> is a 'tuple', which is a representation of a row of a table.

http://en.wikipedia.org/wiki/Column_oriented_database

This has come up on the lists from time to time; the short answer is it's
really hard.

- Josh / eggyknap 




Re: [GENERAL] bizgres

2009-05-05 Thread Joshua Tolley
On Tue, May 05, 2009 at 10:04:15AM +, Glyn Astill wrote:
> I'm looking at building an olap reporting environment and I came across this
> project on pgfoundry. However it was last updated over 3 years ago, am I
> correct in assuming that this probably isn't something I should be looking at?

I've been working with Pentaho (which uses Mondrian for OLAP) on top of
PostgreSQL 8.3. We've not yet put it in production, so there's a lot that
hasn't been heavily stressed, in particular the size of the data set, but it
seems functional.

- Josh / eggyknap




Re: [GENERAL] delete cascade not working

2009-04-25 Thread Joshua Tolley
On Sat, Apr 25, 2009 at 04:21:06PM -0400, Kenneth Tilton wrote:
> But I am "inheriting" from the parent, so everything about the parent  
> should be true for me unless overridden. I mean in the usual sense of  
> inherit I know from OO.

This is from the current docs
(http://www.postgresql.org/docs/current/static/ddl-inherit.html):

"All check constraints and not-null constraints on a parent table are
automatically inherited by its children. Other types of constraints
(unique, primary key, and foreign key constraints) are not inherited."

Although it's easy to imagine otherwise, really inheritance structures
are made up of individual tables, and for the most part behave no
differently from any other tables. Indexes, constraints, etc. on the
parent table do not apply to data in child tables. The documentation
snippet above really means that when a new child table is created, check
constraints and not null constraints are automatically created to
correspond with those on the parent. Don't let yourself start thinking
inheritance in PostgreSQL behaves too similarly to inheritance in, say,
most programming languages. Inheritance works nicely for partitioning;
most people who try to apply table inheritance to the same sorts of
scenarios where they'd apply inheritance in a programming language come
to find that it's more of a pain than it's worth.
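
A quick illustration of what does and doesn't come along (made-up tables):

```sql
CREATE TABLE parent (
    id  integer PRIMARY KEY,
    val text NOT NULL CHECK (val <> '')
);

CREATE TABLE child () INHERITS (parent);

-- child got the NOT NULL and CHECK constraints automatically,
-- but not the primary key; if you want one, add it yourself:
ALTER TABLE child ADD PRIMARY KEY (id);
```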

- Josh / eggyknap




Re: [GENERAL] delete cascade not working

2009-04-25 Thread Joshua Tolley
On Sat, Apr 25, 2009 at 04:01:20PM -0400, Kenneth Tilton wrote:
> Given constraint:
>
> ALTER TABLE provider_input.common
>   ADD CONSTRAINT common_pin_file_load_sid_fkey FOREIGN KEY  
> (pin_file_load_sid)
>   REFERENCES provider_input.file_load (sid) MATCH FULL
>   ON UPDATE NO ACTION ON DELETE CASCADE;



> I do not actually instantiate common, I have a table that inherits from  
> that. I will play around now to see if that is an issue, thought I'd  
> send up a flare here at the same time.

First guess: the foreign key constraint needs to exist on the child
table; having the key on the parent means nothing for the child table.

- Josh / eggyknap




Re: [GENERAL] postgreSQL & amazon ec2 cloud

2009-04-20 Thread Joshua Tolley
On Mon, Apr 20, 2009 at 02:14:00PM +0200, Stefano Nichele wrote:
> Do you think that it could useful mounting two different EBS to handle  
> data and pg_xlog ?

Testing I've participated in suggests that it helps to split pg_xlog
elsewhere. Your mileage may vary.

- Josh / eggyknap


signature.asc
Description: Digital signature


Re: [GENERAL] How to get the PID associated with a Perl DBI dbh?

2009-03-12 Thread Joshua Tolley
On Thu, Mar 12, 2009 at 03:13:13PM +0100, A. Kretschmer wrote:
> In response to Kynn Jones :
> > Is there a reliable way to find out the (Unix) PID associated with a 
> > database
> > handle generated by Perl DBI's database connection?
> 
> You can ask the pg_stat_activity - View, column procpid.

The query "select pg_backend_pid()" issued through that handle will also
return the PID.

- Josh / eggyknap




Re: [GENERAL] Sync 2 tables in 2 databases

2009-03-11 Thread Joshua Tolley
On Wed, Mar 11, 2009 at 12:26:35PM -0700, SHARMILA JOTHIRAJAH wrote:
> It is just 1 way synchronization... replication with slony sounds pretty 
> good... ill try that out
> Thanks

There are options other than Slony, each with their pros and cons. Some
that come to mind include Bucardo[1], Londiste[2], and Golconde[3].

- Josh

[1] http://bucardo.org/
[2] http://pgfoundry.org/projects/skytools/
[3] http://code.google.com/p/golconde/




Re: [GENERAL] idle users

2009-03-05 Thread Joshua Tolley
On Thu, Mar 05, 2009 at 04:50:09AM -0800, paulo matadr wrote:
>I need to procedure to kill users in idle,anybody have this made?
>thanks

See pg_cancel_backend.
http://www.postgresql.org/docs/8.3/static/functions-admin.html
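
In 8.3, something along these lines hits every backend that has been idle for
over an hour (the interval is a placeholder). Keep in mind pg_cancel_backend
only cancels a running query; to actually close a genuinely idle session you
need pg_terminate_backend (added in 8.4) or an OS-level signal:

```sql
SELECT procpid, pg_cancel_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
  AND now() - query_start > interval '1 hour';
```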

- Josh / eggyknap




Re: [GENERAL] postgreSQL & amazon ec2 cloud

2009-03-03 Thread Joshua Tolley
On Tue, Mar 03, 2009 at 10:25:17AM -0500, Tom Lane wrote:
> Adrian Klaver  writes:
> > Nothing. I have created a Postgres instance on an EC2 virtual machine with 
> > attached EBS(Elastic Block Storage). I only got as far as creating in it 
> > and 
> > verifying it would run, no benchmarking. EC2 instances have storage as part 
> > of 
> > the instance but it is temporary and goes away when the instance is shut 
> > down. 
> > For a database you want EBS as it is a virtual harddrive that persists. 
> > Should 
> > an EC2 instance go down, you just reattach the EBS drive on reboot.
> 
> ... I wonder whether you have any guarantees about database consistency
> in that situation?  PG has some pretty strong requirements about fsync
> behavior etc, and I'd not want to take it on faith that a cloud
> environment will meet those requirements.
> 
> Performance would be an interesting question too.
> 
>   regards, tom lane

There's a place called Engine Yard offering Ruby on Rails hosting with
PostgreSQL on Amazon EC2.

- Josh / eggyknap




Re: [GENERAL] Two-phase commmit, plpgsql and plproxy

2009-02-12 Thread Joshua Tolley
On Thu, Feb 12, 2009 at 02:17:03AM +0300, Igor Katson wrote:
>>
>> PostgreSQL does not provide a transaction manager.
>>
>> When you are dealing with multiple databases, the transaction manager
>> needs to make decisions like "this transaction failed on one node,
>> therefore we need to roll all the other transactions back".
>>
>> I think you are basically trying to make plproxy into the transaction
>> manager. You might get some better suggestions from people who know
>> plproxy well.
>>   
> Thanks, Jeff. Googling smth like "postgresql transaction manager" does  
> not give any nice result. It seems, that the one just does not exist.  
> Hope, plproxy developers will answer smth. considering this problem.

There are other transaction managers available (mainly Java-based,
AFAIK). A transaction manager generally tries to interface with all
kinds of transaction-aware services (databases, transactional messaging
systems, etc.), because someone coordinating transactions with 2PC
usually needs to coordinate several different kinds of service; a
PostgreSQL-specific manager would be of very limited applicability,
especially compared to the work required to get its behavior provably
correct. Often a middleware application (such as an application server)
will provide a transaction manager; standalone ones exist as well. I've
heard good things about, but have never used, Bitronix, for example.
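For what it's worth, the core protocol a transaction manager implements is easy to sketch. The following is a toy illustration only -- the `Participant` class and its `can_commit` vote flag are invented for the example; a real manager talks to resources through XA or PostgreSQL's `PREPARE TRANSACTION`, and must log its decisions durably to survive crashes:

```python
# Toy two-phase-commit coordinator. Participant stands in for a
# transaction-aware resource (database, message queue, ...); can_commit
# simulates its phase-1 vote.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: vote on whether this resource can commit.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare, collecting the votes.
    votes = [p.prepare() for p in participants]
    # Phase 2: commit everywhere only if every vote was "yes";
    # a single "no" rolls back all participants.
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"
```

The interesting part is the all-or-nothing decision after phase 1: once any participant votes "no", the coordinator is obligated to roll everyone back, which is exactly the behavior the thread is asking plproxy to provide.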

- Josh / eggyknap




Re: [GENERAL] how to "group" several records with same timestamp into one line?

2008-11-14 Thread Joshua Tolley
On Thu, Nov 13, 2008 at 06:39:47PM -0800, Eus wrote:
> Isn't that something like this is better handled at the application level 
> instead of the DB level?
> 
> IOW, isn't that the cost of doing the query above far more expensive than 
> doing a little coding at the application level?

That's something you'll have to test to really find out, but keep in
mind the database has several advantages over the application in the
general case. First, the database has indexes (presumably), and the
application would just have a raw dataset. The dataset might be
delivered in sorted order if you query it right, which might obviate
that advantage. Or it might not. Second, the database has all the data
locally. If your application lives on some other server, the data
returned by your initial query needs to be streamed across a network,
and converted to data types your application language understands. This
makes a particularly noticeable difference when the result data set is
much smaller, due to filtering in the query, than the original tables.

Other considerations depend on your definition of "expensive". Getting
the programming right to do in application code what you could do in one
query might be a fair bit of work. There has been quite a lot of effort
spent on the database to make sure it performs correctly and
efficiently; probably more so than has been spent on the code you're
writing to replace the query. That said, the database is designed to
support generic operations, and your code might exploit some
characteristic of the data the database can't know about to achieve
better results. In general, it's probably best to let the database
handle data, and let your application handle "business logic" that
transcends the data. Interpretations of the above vary from person to
person :)

All that said, my first guess is that you're better off writing the
query than doing the processing in the application.
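For reference, the query-side version might look something like this -- `readings(ts, val)` is a made-up table for the example, and `array_agg` is built in as of PostgreSQL 8.4 (earlier versions need a custom aggregate such as the `array_accum` example in the docs):

```sql
-- One output row per timestamp, with that timestamp's values
-- gathered into an array.
SELECT ts, array_agg(val) AS vals
FROM readings
GROUP BY ts
ORDER BY ts;
```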

- Josh / eggyknap




Re: [GENERAL] Error in starting Postgresql...

2008-11-05 Thread Joshua Tolley
On Wed, Nov 05, 2008 at 08:34:55PM +0530, Shashank Mujumdar wrote:
> After installing dspace when I tried to connect it to the postgresql
> server it gave the following error.
> 
> psql -U postgres -h localhost  ( command used )
> 
> the error was
> 
> psql: could not connect to server: Connection refused (0X274D/10061)
>   Is the server running on host "localhost" and accepting
> TCP/IP connections on port 5432?
> 
> I searched for a solution on the net and I found that server might not
> be running or postgresql might not have been configured properly.
> As far as I know I've configured the postgresql properly..But I'm
> unable to find out how to start the pgsql server !!!
> Please help me out...
> 

Look for pg_ctl.
http://www.postgresql.org/docs/current/static/app-pg-ctl.html
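Something along these lines, assuming a typical source install (the paths are examples; point -D at your actual data directory):

```shell
# Start the server, logging to a file.
pg_ctl -D /usr/local/pgsql/data -l /usr/local/pgsql/logfile start

# Check whether the server is up.
pg_ctl -D /usr/local/pgsql/data status
```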

- Josh / eggyknap




Re: [Fwd: Re: [GENERAL] GEQO randomness?]

2008-11-05 Thread Joshua Tolley
On Fri, Oct 31, 2008 at 06:57:25PM -0400, Eric Schwarzenbach wrote:
> 
> My problem with GEQO using a random number generator is that
> non-deterministic behavior is really hard to debug, and problems can go
> undiagnosed for ages. Frankly I would rather something fail all the
> time, than it work most of the time and fail just now and then. Never
> getting a good plan for a query would be an improvement because I would
> immediately be aware there's a problem and be forced to something about
> it, as opposed to maybe realizing there is going to *sometimes* be a
> problem.

Randomness is an integral part of all kinds of algorithms designed to
solve really hard problems in reasonable time, like GEQO. Such
algorithms show up all the time in data modeling and optimization
problems.
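That said, if reproducibility matters for debugging a particular query, you can take GEQO out of the picture for a session (the threshold value below is just an example):

```sql
SET geqo = off;           -- disable GEQO; use the exhaustive planner
-- or raise the join count at which GEQO kicks in:
SET geqo_threshold = 18;
```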

- Josh / eggyknap




Re: [GENERAL] Equivalent for AUTOINCREMENT?

2008-11-05 Thread Joshua Tolley
On Sat, Nov 01, 2008 at 02:24:37PM +0100, Michelle Konzack wrote:
> Du I need to create a SEQUENCE for each table or do I need only  ONE  of
> if and can use it independant on differnt tables?

If you just create a bunch of tables with SERIAL or BIGSERIAL columns,
one sequence will be created for each column. But you can make a set of
such columns share a single sequence if you want. SERIAL and BIGSERIAL
are really just "syntactic sugar": they create a sequence and set the
column's default value to the next value in that sequence, like this:

jtolley=# create table a (id serial);
NOTICE:  CREATE TABLE will create implicit sequence "a_id_seq" for
serial column "a.id"
CREATE TABLE
jtolley=# \d a
                      Table "public.a"
 Column |  Type   |                   Modifiers
--------+---------+------------------------------------------------
 id     | integer | not null default nextval('a_id_seq'::regclass)

If I need a new table or column using the same sequence, I just do this:

jtolley=# create table b (q integer not null default
nextval('a_id_seq'));
CREATE TABLE
jtolley=# \d b
                      Table "public.b"
 Column |  Type   |                   Modifiers
--------+---------+------------------------------------------------
 q      | integer | not null default nextval('a_id_seq'::regclass)

- Josh / eggyknap




Re: [GENERAL] UPDATE and Indexes and Performance

2008-10-15 Thread Joshua Tolley
On Wed, Oct 15, 2008 at 10:42 AM, Bill Thoen <[EMAIL PROTECTED]> wrote:
> Does PG (8.1) ever use existing indexes when executing an UPDATE?
>
> I've got some tables with millions of records and whenever I update a column
> that involves most or all the records the EXPLAIN command seems to indicate
> that it isn't using the pre-existing indexes. This result in a slow update,
> which is further slowed by the presence of indexes. So when doing a large
> update should I just drop the indexes first, or is there some good reason to
> keep them?
>
> Thanks,
> - Bill Thoen

You have to hit the disk twice if you use an index -- once to read the
index and once to read the table itself. If the planner guesses that
an operation involves most of the records in a table, it will skip the
indexes and just seqscan, because that's faster. Note too that every
updated row must also be written into each of the table's indexes, so
for an update that touches most of a big table, dropping the indexes
first and recreating them afterward is often a win.

- Josh / eggyknap

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drupal and PostgreSQL - performance issues?

2008-10-13 Thread Joshua Tolley
On Mon, Oct 13, 2008 at 1:02 AM, Ivan Sergio Borgonovo
<[EMAIL PROTECTED]> wrote:

> Anyway I don't find myself comfortable with replies in these 2 lines
> of reasoning:
> 1) default configuration of PostgreSQL generally doesn't perform well
> 2) PostgreSQL may be slower but mySQL may trash your data.
>
> I think these answers don't make a good service to PostgreSQL.
>
> 1) still leave the problem there and doesn't give any good reason
> why Postgresql comes with a doggy default configuration on most
> hardware. It still doesn't explain why I've to work more tuning
> PostgreSQL to achieve similar performances of other DB when other DB
> don't require tuning.

This is a useful question, but there are reasonable answers to it. The
key underlying principle is that it's impossible to know what will
work well in a given situation until that situation is tested. That's
why benchmarks from someone else's box are often mostly useless on
your box, except for predicting generalities and then only when they
agree with other people's benchmarks. PostgreSQL ships with a very
conservative default configuration because (among other things,
perhaps) 1) it's a configuration that's very unlikely to fail
miserably for most situations, and 2) it's assumed that if server
performance matters, someone will spend time tuning things. The fact
that database X performs better than PostgreSQL out of the box is
fairly irrelevant; if performance matters, you won't use the defaults,
you'll find better ones that work for you.
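As a concrete illustration, tuning usually starts with a handful of postgresql.conf settings. The values below are arbitrary examples, not recommendations; the right numbers depend entirely on your hardware and workload:

```
# postgresql.conf -- illustrative values only; tune for your system.
shared_buffers = 512MB         # often a fraction of RAM on a dedicated server
effective_cache_size = 2GB     # planner hint: OS cache + PostgreSQL cache
work_mem = 16MB                # per sort/hash operation, not per connection
checkpoint_segments = 16       # spread out checkpoint I/O under write load
```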

> Making performance comparable without expert tuning will a) stop
> most too easy critics about PostgreSQL performances b) give
> developers much more feedback on PostgreSQL performance in "nearer
> to optimal" setup.

Most of the complaints about PostgreSQL being really slow come from
people who either 1) use PostgreSQL assuming it's MySQL, and therefore
don't do things the way a real DBA would, or 2) simply repeat myths
they've heard about PostgreSQL performance with no experience to back
them up. While it would be nice to win over such people, PostgreSQL
developers tend to worry more about pleasing the people who really know
what they're doing. (The apparent philosophical contradiction between my
statements above and the fact that I'm writing something as inane as
PL/LOLCODE doesn't cause me much lost sleep -- yet)

> If it is easy to write a tool that will help you to tune PostgreSQL,
> it seems it would be something that will really help PostgreSQL
> diffusion and improvements. If it is *complicated* to tune
> PostgreSQL so that it's performance can be *comparable* (I didn't
> write optimal) with other DB we have a problem.

It's not easy to write such a tool; the lists discuss one every few
months, and invariably conclude it's harder than just teaching DBAs to
do it (or letting those who need help pay those who can provide it to
tune for them).

As to whether it's a problem that it's a complex thing to tune, sure
it would be nice if it were easier, and efforts are made along those
lines all the time (cf. GUC simplification efforts for a contemporary
example). But databases are complex things, and any tool that makes
them overly simple is only glossing over the important details.

> Then other people added in the equation connection pooling as a MUST
> to compare MySQL and PostgreSQL performances.
> This makes the investment to have PostgreSQL in place of mySQL even
> higher for many, or at least it is going to puzzle most.

Anyone familiar with high-performance applications is familiar with
connection pooling.

> Or maybe... it is false that PostgreSQL doesn't have comparable
> performance to other DB with default configuration and repeating
> over and over the same answer that you've to tune PostgreSQL to get
> comparable performance doesn't play a good service to PostgreSQL.

Why not? It's the truth, and there are good reasons for it. See above.

> 2) I never saw a "trashing data benchmark" comparing reliability of
> PostgreSQL to MySQL. If what I need is a fast DB I'd chose mySQL...
> I think this could still not be the best decision to take based on
> *real situation*.

If you've got an important application (for some definition of
"important"), your considerations in choosing underlying software are
more complex than "is it the fastest option". Horror stories about
MySQL doing strange things to data, because of poor integrity
constraints, ISAM tables, or other problems are fairly common (among
PostgreSQL users, at least :) But I will also admit I have none of my
own; my particular experience in life has, thankfully, prevented me
from much MySQL exposure.

> Do we really have to trade integrity for speed?

Yes. Sanity checks take time.

>  Is MyISAM really much
> faster in read only operations?

Yes. See above.

> What I get with that kind of answer is:
> an admission: - PostgreSQL is slow

People aren't saying that. They're saying it works better when someone
who knows what they're doing tunes it.

Re: [GENERAL] Run postgresql engine in readonly mode?

2008-10-11 Thread Joshua Tolley
On Thu, Oct 9, 2008 at 2:37 AM, Galland Gregoire <[EMAIL PROTECTED]> wrote:
> Hi all!
>
> I would like to run all my databases in a readonly mode just for a few
> hours (migration plan).
>
> Is it a way to tell the postgresql engine to run in readonly?
>
> Sincerly
>
> G.Galland
>

This comes up periodically on lists. There's not really a way. You can
set transactions as read only (see docs for SET TRANSACTION) but
that's probably not feasible, as you'd have to change your
application. You might create triggers preventing INSERTs and UPDATEs,
but a good bet is just to back up beforehand and warn your users.
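To illustrate the SET TRANSACTION route (and its session-level cousin) -- note that default_transaction_read_only only changes the default, so a client that explicitly starts a read-write transaction can still write:

```sql
-- Per transaction:
BEGIN;
SET TRANSACTION READ ONLY;
-- ... reads only; INSERT/UPDATE/DELETE will fail here ...
COMMIT;

-- Per session (or per database/user via ALTER DATABASE / ALTER ROLE):
SET default_transaction_read_only = on;
```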

- Josh / eggyknap
