Re: [GENERAL] [HACKERS] PLJava for Postgres 9.2.

2013-05-17 Thread Paul Hammond
Thx.

Yes, I'm aware PL/Java is a third-party library; I'm just surprised the same party 
hasn't built binaries for 9.2, given they seem to have been built all the way up to 9.1.

My question was primarily about obtaining the pgxs.mk file, which is part of the 
PostgreSQL project.

Paul



 From: Andrew Dunstan and...@dunslane.net
To: Paul Hammond hammpau...@yahoo.com 
Cc: pgsql-hack...@postgresql.org pgsql-hack...@postgresql.org; 
pgsql-general@postgresql.org pgsql-general@postgresql.org 
Sent: Friday, 17 May 2013, 0:03
Subject: Re: [GENERAL] [HACKERS] PLJava for Postgres 9.2.
 


On 05/16/2013 05:59 PM, Paul Hammond wrote:
 Hi all,

 I've downloaded PLJava, the latest version, which doesn't seem to have 
 a binary distribution at all for 9.2, so I'm trying to build it from 
 the source for Postgres 9.2. I have the DB itself installed on Windows 
 7 64 bit as a binary install. I've had to do a fair bit of hacking 
 with the makefiles on cygwin to get PLJava to build, but I have 
 succeeded in compiling the Java and JNI code, the pljava_all and 
 deploy_all targets effectively.


Cygwin is not a recommended build platform for native Windows builds. 
See the docs for the recommended ways to build Postgres.



 But I'm coming unstuck at the next target where it's doing the target 
 c_all. It's trying to find the following makefile in the Postgres dist:

 my postgres installation dir/lib/pgxs/src/makefiles/pgxs.mk: No such 
 file or directory

 What do I need to do to obtain the required files, and does anybody 
 know why, given that Postgres 9.2 has been out for some time and 9.3 is in 
 beta, no prebuilt binary PL/Javas exist for 9.2?


Because nobody has built them?


FYI, PL/Java is not maintained by the PostgreSQL project.


cheers

andrew


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] PLJava for Postgres 9.2.

2013-05-17 Thread John R Pierce

On 5/16/2013 11:05 PM, Paul Hammond wrote:
I had downloaded a prepackaged binary from here; I hadn't built it at 
all, I was only trying to build PL/Java. I presume you mean building 
Postgres from the source in Git will be required to get the pgxs 
sources installed that I require.




Cygwin isn't a tested or supported environment.   The Windows version is 
built and tested with Visual C++, so you'll be on your own for the 
makefiles and stuff.



--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] About replacing PostgreSQL instance

2013-05-17 Thread chiru r
Hi Oscar Calderon,

Simply replacing the instance is not a good approach for a major version change
(from PG 9.1 to 9.3).

Yes, your approach is correct: it should be an upgrade, and since it is production
you need to take some extra care.

On Thu, May 16, 2013 at 11:49 PM, Oscar Calderon 
ocalde...@solucionesaplicativas.com wrote:

 Hi to all, I want to ask for a piece of advice. The company where I work
 provides PostgreSQL maintenance services to another company. They currently
 have PostgreSQL 9.1.1 installed and want to move to 9.3 when it comes out.
 Because of the difference in versions, because it was installed by compiling
 from source, and because the 9.1.1 installation is in a different directory
 than the default, they decided to replace the 9.1.1 version with 9.3 (not an
 in-place upgrade, but a replacement).

 Currently, they only have one production database of 2.2 GB with some
 procedures and triggers. So, my plan to carry out this installation is the
 following:


1. Install PostgreSQL 9.3 from the PostgreSQL repository (yum.postgresql.org)
on a different port, to avoid interrupting the production PostgreSQL instance.
2. Tune the database parameters in postgresql.conf, create the same rules in
pg_hba.conf as in the production instance, configure logging and so on.
3. At the end of the operations day, create a backup of the production
database and then restore it into the new instance.

 It seems you are shutting down the application here. If not, please shut down
the application, or keep the production databases in read-only mode on PG 9.1,
in order to protect your database from user write queries and maintain
consistency at the time of the upgrade.

The command below will protect your PG 9.1 production database:
postgres=# ALTER DATABASE PRDB SET default_transaction_read_only TO on;
ALTER DATABASE
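
Once the upgrade is verified, remember to flip this back on the old cluster (or
simply retire it); a small companion command, assuming the same database name:

postgres=# ALTER DATABASE PRDB RESET default_transaction_read_only;
ALTER DATABASE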

A. Take the global dump (pg_dumpall) of PG 9.1 and restore it in PG 9.3.
B. Take a pg_dump of the required production databases from PG 9.1 and restore
them in PG 9.3.


1. Test the new instance with the PHP applications that use it and verify
that all is in order.
2. Stop the old instance and change its port to another port, then change the
port of the new instance to 5432, in order to avoid changing the network
configuration, permissions and so on.

 But this is really the first time that I have done this, so I don't know if
 I'm missing something or if there's something wrong with what I'm planning to
 do. I would appreciate it very much if you could guide me on exactly what
 steps I have to take and what to consider during this process.

 Regards.

 ***
 Oscar Calderon
 Analista de Sistemas
 Soluciones Aplicativas S.A. de C.V.
 www.solucionesaplicativas.com
 Cel. (503) 7741 7850



Re: [GENERAL] PLJava for Postgres 9.2.

2013-05-17 Thread Paul Hammond
Thx John.

I had downloaded a prepackaged binary from here; I hadn't built it at all, I 
was only trying to build PL/Java. I presume you mean building Postgres from the 
source in Git will be required to get the pgxs sources installed that I require.

Paul



 From: John R Pierce pie...@hogranch.com
To: pgsql-general@postgresql.org 
Sent: Thursday, 16 May 2013, 23:41
Subject: Re: [GENERAL] PLJava for Postgres 9.2.
 


On 5/16/2013 2:59 PM, Paul Hammond wrote:

I've downloaded PLJava, the latest version, which doesn't seem to have a binary 
distribution at all for 9.2, so I'm trying to build it from the source for 
Postgres 9.2. I have the DB itself installed on Windows 7 64 bit as a binary 
install. I've had to do a fair bit of hacking with the makefiles on cygwin to 
get PLJava to build, but I have succeeded in compiling the Java and JNI code, 
the pljava_all and deploy_all targets effectively.


But I'm coming unstuck at the next target where it's doing the target c_all. 
It's trying to find the following makefile in the Postgres dist:


my postgres installation dir/lib/pgxs/src/makefiles/pgxs.mk: No such file or 
directory


What do I need to do to obtain the required files, and does anybody know why, 
given that Postgres 9.2 has been out for some time and 9.3 is in beta, no prebuilt 
binary PL/Javas exist for 9.2?


Did you build your 9.2 with Cygwin?   The standard Windows binary
distributions are built with Visual C, and mixing compilers is likely
going to fail.

Anyway, if that pgxs stuff is missing, then your Postgres server
was built without the pgxs option; I believe that's a ./configure
option (--with-pgxs, or something like that?)




-- 
john r pierce  37N 122W
somewhere on the middle of the left coast

Re: [GENERAL] [HACKERS] PLJava for Postgres 9.2.

2013-05-17 Thread John R Pierce

On 5/16/2013 10:51 PM, Paul Hammond wrote:
My question was primarily about obtaining the pgxs.mk file, which is part 
of the PostgreSQL project.


You need the whole PGXS subsystem, which the Windows version doesn't 
appear to be built with.   That's the system for building compatible 
extensions.


ahhh.   read this. 
http://wiki.postgresql.org/wiki/Building_and_Installing_PostgreSQL_Extension_Modules




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] problem with lost connection while running long PL/R query

2013-05-17 Thread David M. Kaplan

Hi Joe,

Thanks for responding as you would clearly be the expert on this sort of 
problem.  My current function does page through data using a cursor 
precisely to avoid out of memory problems, which is why I am somewhat 
surprised and stumped as to how this can be happening.  It does return 
all the data at once, but one call to the function would seem to work, 
so I can't see why 4 wouldn't.  I am currently planning to do some test 
runs using memory.profile() to see if each successive call to the PL/R 
function is somehow accumulating memory usage somewhere.  Perhaps I am 
not properly closing a query or something like that?


I am attaching my code.  Perhaps you will have some insight.  To give 
you a basic idea of what I am trying to do, I have separately developed 
a classification model for the state of a system based on the data in 
the postgresql table.  I want to apply that model to each line of the 
table.  I loop over the cursor and predict the state for batches of 
10,000 lines at a time.


Thanks again for the help.

Cheers,
David



On 05/16/2013 11:40 PM, Joe Conway wrote:


On 05/16/2013 08:40 AM, Tom Lane wrote:

David M. Kaplan david.kap...@ird.fr writes:

Thanks for the help.  You have definitely identified the problem,
but I am still looking for a solution that works for me.  I tried
setting vm.overcommit_memory=2, but this just made the query
crash quicker than before, though without killing the entire
connection to the database.  I imagine that this means that I
really am trying to use more memory than the system can handle?
I am wondering if there is a way to tell postgresql to flush a
set of table lines out to disk so that the memory they are using
can be liberated.

Assuming you don't have work_mem set to something unreasonably
large, it seems likely that the excessive memory consumption is
inside your PL/R function, and not the fault of Postgres per se.
You might try asking in some R-related forums about how to reduce
the code's memory usage.

The two classic approaches to this with PL/R are either to create a
custom aggregate with the PL/R function as the final function (i.e. work on one
group at a time) or to use the SPI cursor functionality within the PL/R
function and page through your data with a cursor. Not all forms of analysis
lend themselves to these approaches, but perhaps yours does.
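
For the first approach, a minimal sketch (untested here, and the function and
aggregate names are just placeholders): accumulate each group into an array with
a plain SQL aggregate and hand the whole group to a PL/R final function.

CREATE OR REPLACE FUNCTION r_median_final(vals float8[])
RETURNS float8 AS $$
  # PL/R exposes the argument under its parameter name
  median(vals)
$$ LANGUAGE plr;

CREATE AGGREGATE r_median (float8) (
  sfunc     = array_append,  -- built-in: appends each row's value to the state array
  stype     = float8[],
  initcond  = '{}',
  finalfunc = r_median_final
);

-- one R call per group instead of one call over the whole table:
-- SELECT some_group, r_median(some_value) FROM some_table GROUP BY some_group;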

Ultimately I would like to implement a form of R data.frame that does
the paging with a cursor transparently for you, but I have not been
able to find the time so far.

Joe


-- 
Joe Conway

credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting,  24x7 Support


--
**
David M. Kaplan
Charge de Recherche 1

Institut de Recherche pour le Developpement
Centre de Recherche Halieutique Mediterraneenne et Tropicale
av. Jean Monnet
B.P. 171
34203 Sete cedex
France

Phone: +33 (0)4 99 57 32 27
Fax: +33 (0)4 99 57 32 95

http://www.umr-eme.org/team/dkaplan/
http://www.amped.ird.fr/
**

CREATE TYPE predict_classification_model_type
   AS (clean_pt_id int, class varchar(20));

--- NOTE: Must be superuser to create PLR functions
CREATE OR REPLACE FUNCTION predict_classification_model
   (filename varchar(256), modelname varchar(256),
schemaname varchar(256), tablename varchar(256),
	wherecondition varchar(256) )
RETURNS SETOF predict_classification_model_type AS 
$BODY$

pg.thrownotice('Starting predict_classification_model')

mp=capture.output(memory.profile())
for (m in mp) pg.thrownotice(m)

tablename <- pg.quoteident(tablename)
if (nchar(schemaname) > 0) {
  schemaname <- pg.quoteident(schemaname)
  tablename <- paste(schemaname, tablename, sep=".")
}

if (nchar(wherecondition) == 0) {
   wherecondition = 'TRUE'
}
wherecondition = pg.quoteliteral(wherecondition)

# Load in file with model.  This can be fairly large.
load(filename)
themodel <- get(modelname)

# Count number of lines in table
s <- paste('SELECT count(*) FROM', tablename, 'WHERE', wherecondition)
q <-

Re: [GENERAL] problem with lost connection while running long PL/R query

2013-05-17 Thread David M. Kaplan

Hi,

Of course today after a reboot to update to the newest kernel, 
everything works without crashing...


I imagine that yesterday the problem was that I had forgotten that I had 
a Windows virtual machine running on the server that was eating a good 
piece of memory.  Still, using a cursor to page through the data should 
have avoided major memory difficulties...


As a footnote, I can confirm that PL/R was not eating up memory little 
by little (see the results of memory.profile() at the beginning and end of each 
function invocation below).


Thanks,
David

Run 1:

NOTICE:  Starting predict_classification_model
NOTICE: NULL  symbolpairlist closure environment 
promise

NOTICE:17847  1737684308 11296273
NOTICE: language special builtinchar logical 
integer

NOTICE:46996 18918399387 7351   17229
NOTICE:   double complex   character ... anylist
NOTICE: 2190   1   42810   1 0   13456
NOTICE:   expressionbytecode externalptr weakref raw  S4
NOTICE:3   10246 823 198 1991061

NOTICE:  Ending predict_classification_model
NOTICE: NULL  symbolpairlist closure environment 
promise

NOTICE:18062  1813544360 11296273
NOTICE: language special builtinchar logical 
integer

NOTICE:50616 1891839 2085318 7496   17972
NOTICE:   double complex   character ... anylist
NOTICE: 2393   1   43459   1 0   13929
NOTICE:   expressionbytecode externalptr weakref raw  S4
NOTICE:3   10683 814 187 1881061

Run 2:

NOTICE:  Starting predict_classification_model
NOTICE: NULL  symbolpairlist closure environment 
promise

NOTICE:18062  1809364360 11286271
NOTICE: language special builtinchar logical 
integer

NOTICE:50536 18918399684 7496   17878
NOTICE:   double complex   character ... anylist
NOTICE: 2258   1   43312   1 0   13899
NOTICE:   expressionbytecode externalptr weakref raw  S4
NOTICE:3   10683 813 188 1891061

NOTICE:  Ending predict_classification_model
NOTICE: NULL  symbolpairlist closure environment 
promise

NOTICE:18062  1813564360 11296273
NOTICE: language special builtinchar logical 
integer

NOTICE:50617 1891839 2085317 7496   17972
NOTICE:   double complex   character ... anylist
NOTICE: 2393   1   43459   1 0   13929
NOTICE:   expressionbytecode externalptr weakref raw  S4
NOTICE:3   10683 814 187 1881061

Run 3:

NOTICE:  Starting predict_classification_model
NOTICE: NULL  symbolpairlist closure environment 
promise

NOTICE:18062  1809364360 11286271
NOTICE: language special builtinchar logical 
integer

NOTICE:50536 18918399684 7496   17878
NOTICE:   double complex   character ... anylist
NOTICE: 2258   1   43312   1 0   13899
NOTICE:   expressionbytecode externalptr weakref raw  S4
NOTICE:3   10683 813 188 1891061

NOTICE:  Ending predict_classification_model
NOTICE: NULL  symbolpairlist closure environment 
promise

NOTICE:18062  1813564360 11296273
NOTICE: language special builtinchar logical 
integer

NOTICE:50617 1891839 2085319 7496   17972
NOTICE:   double complex   character ... anylist
NOTICE: 2393   1   43459   1 0   13929
NOTICE:   expressionbytecode externalptr weakref raw  S4
NOTICE:3   10683 814 187 1881061



On 05/16/2013 11:40 PM, Joe Conway wrote:


On 05/16/2013 08:40 AM, Tom Lane wrote:

David M. Kaplan david.kap...@ird.fr writes:

Thanks for the help.  You have definitely identified the problem,
but I am still looking for a solution that works for me.  I tried
setting vm.overcommit_memory=2, but this just made the query
crash quicker than before, though without killing the entire
connection to the database.  I imagine that this means that I
really am trying to use more memory 

Re: [GENERAL] FATAL: database a/system_data does not exist

2013-05-17 Thread sumita
I found the configuration XML file which was causing the problem. By changing
the database name, the problem was solved. Since there was also a
check-valid-connection-sql entry, it was happening at a particular frequency.

Thanks all!



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/FATAL-database-a-system-data-does-not-exist-tp5754839p5755937.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] [HACKERS] PLJava for Postgres 9.2.

2013-05-17 Thread Cédric Villemain
 Yes, am aware PLJava is a 3rd party lib, just surprised the same party
 hasn't built them given they seem to be built all the way to 9.1.
 
 My question was primarily about obtaining the pgxs.mk file, which is part of
 the PostgreSQL project.

With Linux you do something like this for PL/Java:

$ make PG_CONFIG=/usr/pgsql-9.2/bin/pg_config \
JAVA_HOME=/usr/java/default

pg_config is used to find pgxs.mk (the real command is: «pg_config --pgxs»).

 
 Paul
 
 
 
  From: Andrew Dunstan and...@dunslane.net
 To: Paul Hammond hammpau...@yahoo.com
 Cc: pgsql-hack...@postgresql.org pgsql-hack...@postgresql.org;
 pgsql-general@postgresql.org pgsql-general@postgresql.org Sent:
 Friday, 17 May 2013, 0:03
 Subject: Re: [GENERAL] [HACKERS] PLJava for Postgres 9.2.
 
 On 05/16/2013 05:59 PM, Paul Hammond wrote:
  Hi all,
  
  I've downloaded PLJava, the latest version, which doesn't seem to have
  a binary distribution at all for 9.2, so I'm trying to build it from
  the source for Postgres 9.2. I have the DB itself installed on Windows
  7 64 bit as a binary install. I've had to do a fair bit of hacking
  with the makefiles on cygwin to get PLJava to build, but I have
  succeeded in compiling the Java and JNI code, the pljava_all and
  deploy_all targets effectively.
 
 Cygwin is not a recommended build platform for native Windows builds.
 See the docs for the recommended ways to build Postgres.
 
  But I'm coming unstuck at the next target where it's doing the target
  c_all. It's trying to find the following makefile in the Postgres dist:
  
  my postgres installation dir/lib/pgxs/src/makefiles/pgxs.mk: No such
  file or directory
  
  What do I need to do to obtain the required files, and does anybody
  know why, given that Postgres 9.2 has been out for some time and 9.3 is in
  beta, no prebuilt binary PL/Javas exist for 9.2?
 
 Because nobody has built them?
 
 
 FYI, PL/Java is not maintained by the PostgreSQL project.
 
 
 cheers
 
 andrew

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation




[GENERAL] Best practice on inherited tables

2013-05-17 Thread Frank Lanitz
Hi folks,

I'm looking for a nice way to model this scenario:
I've got a lot of locations of several special types. For example, I've
got workplaces, places like real laboratories, and virtual places like
maybe a parcel service. For each of the different types I need to store
some common attributes as well as some special ones. Having OOP in mind,
I came to the idea of inherited tables, so I've created something like
this (just a minimal example):

CREATE TABLE locations(
id SERIAL PRIMARY KEY,
name varchar(50)
);
CREATE TABLE workplaces(
workers integer
) INHERITS (locations);

But now I got stuck on the primary key thing. As described in the
documentation, it is not supported. So now I'm looking for the best way
to also have the unique constraint from locations on the workplaces table,
so that e.g. I can do something like this:

INSERT INTO workplaces (name, workers) VALUES ('My Place', 5);

and get both the auto-increment and the uniqueness. I was thinking of
creating a number of triggers to do this for me, but I wonder whether
there might be a better way.

Cheers,
Frank

BTW: Using Postgres 9.2 and up




Re: [GENERAL] Best practice on inherited tables

2013-05-17 Thread Chris Travers
Just our experience in LedgerSMB

On Fri, May 17, 2013 at 5:46 AM, Frank Lanitz fr...@frank.uvena.de wrote:

 Hi folkes,

 I'm looking for a nice way to build this scenario:
 I've got a lot of locations with some special types. For example I've
 got workplaces, places like real laboratories and virtual places like
 maybe parcel service. For each of the different types I need to store
 some common attributes as well as some special ones. Having OOP in mind
 I came to the point of inherit tables. so I've create something like
 that (just a minimal example):

 CREATE TABLE locations(
 id SERIAL PRIMARY KEY,
 name varchar(50)
 );
 CREATE TABLE workplaces(
 workers integer
 ) INHERITS (locations);


 But now I got stuck with the primary key thing. As described in the
 documentation it is not supported. And now I'm looking for the best way
 on having at table workplaces also the unique constraint from locations
 etc. so e.g. I can do something like that:



What we usually do in LedgerSMB is to add an additional qualifying field
(in your case, maybe call it location_class_id).  This identifies the
subtype and we can use it to guarantee uniqueness without resorting to
various tricks.  Fkeys are still a problem but a more manageable one.   You
can either use constraint triggers for that or fkey against a child table
only where that is appropriate.

In essence I would do something like (pseudocode, untested, etc):

CREATE TABLE location_class (
    id serial not null unique,
    label text primary key
);

CREATE TABLE location (
    id serial not null,
    location_class_id int references location_class(id),
    name text not null,
    primary key (id, location_class_id),
    check (location_class_id = 1) no inherit
);

CREATE TABLE workplace (
    workers int not null,
    check (workers > 0),
    check (location_class_id = 2) no inherit,
    primary key (id, location_class_id)
) INHERITS (location);

That gives you a unique identifier across the tree.  If you want to do away
with location_class, you could make your primary key into (id, tableoid)
instead but that seems too hackish to me.

Now this doesn't solve the fkey problem but it does give you uniqueness.
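
As a rough illustration of the child-table fkey option mentioned above (again
pseudocode-ish, and the referencing table is invented for the example): because
workplace declares its own primary key (id, location_class_id), another table can
reference it directly, even though the inheritance tree as a whole cannot be an
fkey target.

CREATE TABLE workplace_equipment (
    id serial primary key,
    workplace_id int not null,
    location_class_id int not null default 2,
    label text not null,
    foreign key (workplace_id, location_class_id)
        references workplace (id, location_class_id)
);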

Best Wishes,
Chris Travers


[GENERAL] Comunication protocol

2013-05-17 Thread Karel Riveron Escobar
Hi everyone:


I have a question. I think it's simple to answer, but I don't know anything 
about the topic. I want to know what the communication protocol is between a 
PostgreSQL database server and an application server like Apache. I need to know 
because I'm designing a simple deployment diagram and I just need this to finish it.


Thanks in advance.


Regards, Karel Riverón
Student Scientific Council
Informatics Science University

http://www.uci.cu

attachment: Diagrama de despliegue.svg


Re: [GENERAL] Comunication protocol

2013-05-17 Thread Tom Lane
Karel Riveron Escobar kesco...@estudiantes.uci.cu writes:
 I have a question. I think it's so simple to answer but I don't know anything 
 about that. I want to know what is the comunication protocol among PostgreSQL 
 database server and an application server like Apache. I have to know that 
 because I'm designing a simple deployment diagram and I just need it for 
 finish.

The protocol the PG server understands is specified here:

http://www.postgresql.org/docs/9.2/static/protocol.html

regards, tom lane




Re: [GENERAL] Comunication protocol

2013-05-17 Thread Achilleas Mantzios
That would be postgresql:5432/TCP

On Παρ 17 Μαΐ 2013 10:54:02 Karel Riveron Escobar wrote:

Hi everyone:


I have a question. I think it's so simple to answer but I don't know anything 
about that. I want to know what is the comunication protocol among PostgreSQL 
database server and an application server like Apache. I have to know that 
because I'm designing a simple deployment diagram and I just need it for finish.


Thanks in advance.


Regards, Karel Riverón
Student Scientific Council
Informatics Science University


 




-
Achilleas Mantzios
IT DEV
IT DEPT
Dynacom Tankers Mgmt

Re: [GENERAL] LONG delete with LOTS of FK's

2013-05-17 Thread Larry Rosenman

On 2013-05-16 18:35, David Kerr wrote:


- I'll take a look tomorrow, but we WERE seeing Seq Scans against
- multi-million row tables, so I suspect Tom is right on with the
- replanning that's in 9.2 fixing it, and I'm in the process of
- validating that.

That seems likely, although you could try enable_seqscan=false as well.

Dave

The 9.2 upgrade DOES fix my issue.

Thanks again, Tom and everyone.


--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893




Re: [GENERAL] Comunication protocol

2013-05-17 Thread Karel Riveron Escobar
Thanks, Tom, for your fast response. I'm reading it right now and I found 
something, but I didn't understand it very well. The documentation says that the 
protocol we're talking about is supported over the TCP/IP family of protocols, but 
it doesn't say exactly which protocol it is. I want to know something more 
specific.

Thanks again.

Regards, Karel Riverón
Student Scientific Council
Informatics Science University
- Original Message -

| From: Tom Lane t...@sss.pgh.pa.us
| To: Karel Riveron Escobar kesco...@estudiantes.uci.cu
| Cc: pgsql-general@postgresql.org
| Sent: Friday, May 17, 2013 10:56:35 AM
| Subject: Re: [GENERAL] Comunication protocol

| Karel Riveron Escobar kesco...@estudiantes.uci.cu writes:
|  I have a question. I think it's so simple to answer but I don't
|  know anything about that. I want to know what is the comunication
|  protocol among PostgreSQL database server and an application
|  server like Apache. I have to know that because I'm designing a
|  simple deployment diagram and I just need it for finish.

| The protocol the PG server understands is specified here:

| http://www.postgresql.org/docs/9.2/static/protocol.html

| regards, tom lane
| http://www.uci.cu

http://www.uci.cu



Re: [GENERAL] Comunication protocol

2013-05-17 Thread Karel Riveron Escobar
Thanks Achilleas, I think the same, but I'm not sure.

Saludos, Karel Riverón
Consejo Científico Estudiantil
Universidad de las Ciencias Informáticas

- Original Message -

| From: Achilleas Mantzios ach...@matrix.gatewaynet.com
| To: pgsql-general@postgresql.org
| Cc: Karel Riveron Escobar kesco...@estudiantes.uci.cu
| Sent: Friday, May 17, 2013 11:01:16 AM
| Subject: Re: [GENERAL] Comunication protocol

| That would be postgresql:5432/TCP

| On Παρ 17 Μαΐ 2013 10:54:02 Karel Riveron Escobar wrote:

| Hi everyone:

| I have a question. I think it's so simple to answer but I don't know
| anything about that. I want to know what is the comunication
| protocol among PostgreSQL database server and an application server
| like Apache. I have to know that because I'm designing a simple
| deployment diagram and I just need it for finish.

| Thanks in advance.

| Regards, Karel Riverón
| Student Scientific Council
| Informatics Science University

| -
| Achilleas Mantzios
| IT DEV
| IT DEPT
| Dynacom Tankers Mgmt

http://www.uci.cu



Re: [GENERAL] Comunication protocol

2013-05-17 Thread Oscar Calderon
Maybe you can use software like Wireshark on your network and trigger a
connection between your PostgreSQL server and your Apache server; when you
look at the traffic captured by Wireshark you can see more specific details
about the protocol.

Regards.


***
Oscar Calderon
Analista de Sistemas
Soluciones Aplicativas S.A. de C.V.
www.solucionesaplicativas.com
Cel. (503) 7741 7850


2013/5/17 Karel Riveron Escobar kesco...@estudiantes.uci.cu

 Thanks Achilleas, I think the same, but I'm not sure.

 Saludos, Karel Riverón
 Consejo Científico Estudiantil
 Universidad de las Ciencias Informáticas



 --

 *From: *Achilleas Mantzios ach...@matrix.gatewaynet.com
 *To: *pgsql-general@postgresql.org
 *Cc: *Karel Riveron Escobar kesco...@estudiantes.uci.cu
 *Sent: *Friday, May 17, 2013 11:01:16 AM

 *Subject: *Re: [GENERAL] Comunication protocol

 That would be postgresql:5432/TCP



 On Παρ 17 Μαΐ 2013 10:54:02 Karel Riveron Escobar wrote:

 Hi everyone:


 I have a question. I think it's so simple to answer but I don't know
 anything about that. I want to know what is the comunication protocol among
 PostgreSQL database server and an application server like Apache. I have to
 know that because I'm designing a simple deployment diagram and I just need
 it for finish.


 Thanks in advance.

 Regards, Karel Riverón

 Student Scientific Council

 Informatics Science University





 -

 Achilleas Mantzios

 IT DEV

 IT DEPT

 Dynacom Tankers Mgmt


 http://www.uci.cu/



   http://www.uci.cu/




Re: [GENERAL] Comunication protocol

2013-05-17 Thread Karel Riveron Escobar
Thanks, Oscar. I'm going to try your suggestion, but I have a problem: I have the 
database server and the Apache server on the same PC, because the system I'm 
building is in the development phase. Do you think that is a problem for Wireshark?

Regards, Karel Riverón
Student Scientific Council
Informatics Science University
- Original Message -

| From: Oscar Calderon ocalde...@solucionesaplicativas.com
| To: Karel Riveron Escobar kesco...@estudiantes.uci.cu
| Sent: Friday, May 17, 2013 11:23:01 AM
| Subject: Re: [GENERAL] Comunication protocol

| Maybe you can use a software like Wireshark in your network and
| provoke a connection between your PostgreSQL server and Apache
| server, and when you see the results of traffic monitoring on
| Wireshark you can see more specific details about the protocol.

| Regards.

| ***
| Oscar Calderon
| Analista de Sistemas
| Soluciones Aplicativas S.A. de C.V.
| www.solucionesaplicativas.com
| Cel. (503) 7741 7850

| 2013/5/17 Karel Riveron Escobar  kesco...@estudiantes.uci.cu 

| | Thanks Achilleas, I think the same, but I'm not sure.
| |
| | Saludos, Karel Riverón
| | Consejo Científico Estudiantil
| | Universidad de las Ciencias Informáticas
| |
| | | From: Achilleas Mantzios  ach...@matrix.gatewaynet.com 
| | | To: pgsql-general@postgresql.org
| | | Cc: Karel Riveron Escobar  kesco...@estudiantes.uci.cu 
| | | Sent: Friday, May 17, 2013 11:01:16 AM
| | | Subject: Re: [GENERAL] Comunication protocol
| | |
| | | That would be postgresql:5432/TCP
| | |
| | | On Παρ 17 Μαΐ 2013 10:54:02 Karel Riveron Escobar wrote:
| | |
| | | Hi everyone:
| | |
| | | I have a question. I think it's so simple to answer but I don't
| | | know anything about that. I want to know what is the comunication
| | | protocol among PostgreSQL database server and an application
| | | server like Apache. I have to know that because I'm designing a
| | | simple deployment diagram and I just need it for finish.
| | |
| | | Thanks in advance.
| | |
| | | Regards, Karel Riverón
| | | Student Scientific Council
| | | Informatics Science University
| | |
| | | -
| | | Achilleas Mantzios
| | | IT DEV
| | | IT DEPT
| | | Dynacom Tankers Mgmt

http://www.uci.cu



Re: [GENERAL] Comunication protocol

2013-05-17 Thread Amit Langote
On Sat, May 18, 2013 at 1:03 AM, Karel Riveron Escobar
kesco...@estudiantes.uci.cu wrote:

 Thanks Oscar. I'm going to try your suggestion but I get a problem. I get the 
 database server and apache server in the same PC because the system what I'm 
 building is in development phase. Do you think that is a problem for 
 wireshark?



From the documentation link that Tom provided you might have read that
the frontend/backend protocol, which the database server and the clients
connecting to it use, is a message-based protocol in which the
messages are exchanged over either a TCP/IP connection or UNIX domain
sockets. If you configure your client to use the loopback interface
(127.0.0.1) to connect to the database, you would be looking at a
TCP/IP connection, so it is probably manageable with Wireshark. Still, I
would suggest reading the protocol description in the documentation so
that you understand the distinct phases of a connection and the subsequent
operation over the established connection.
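
One quick way to see which transport a session actually ended up on (just a
convenience check from the SQL level, not part of the wire protocol itself):
these built-in functions return NULL when the connection came in over a UNIX
domain socket rather than TCP/IP.

SELECT inet_server_addr() AS server_ip,
       inet_server_port() AS server_port,
       inet_client_addr() AS client_ip;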


--
Amit Langote




[GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Jeff Amiel
On most nested loops that I do explain/explain analyze on, the row estimation 
for the nested-loop itself is a product of the inner nodes of the nested loop.
However in this case, I am stumped!

explain 
select  era.child_entity  from entity_rel era  join user_entity ue on 
ue.entity_id = era.parent_entity and ue.user_id=12345

Nested Loop  (cost=0.00..2903.37 rows=29107 width=4)
  ->  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 rows=2 width=4)
        Index Cond: (user_id = 10954)
  ->  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 rows=317 width=8)
        Index Cond: (parent_entity = ue.entity_id)


How can the estimated number of rows for the nested loop node EXCEED the 
product of the 2 row estimates of the tables being joined?
Not only does it exceed it - but it is orders of magnitude greater.  

Am I missing something obvious here?  I can see the nested loop row estimate 
being LESS, but certainly not more.



PostgreSQL 9.2.4 on x86_64-pc-solaris2.10, compiled by gcc (GCC) 3.4.3 
(csl-sol210-3_4-branch+sol_rpath), 64-bit





Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Tom Lane
Jeff Amiel becauseimj...@yahoo.com writes:
 How can the estimated number of rows for the nested loop node EXCEED the 
 product of the 2 row estimates of the tables being joined?
 Not only does it exceed it - but it is orders of magnitude greater.

Can you provide a self-contained test case that does this?

regards, tom lane




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Amit Langote
On Sat, May 18, 2013 at 1:25 AM, Jeff Amiel becauseimj...@yahoo.com wrote:
 On most nested loops that I do explain/explain analyze on, the row estimation 
 for the nested-loop itself is a product of the inner nodes of the nested loop.
 However in this case, I am stumped!

 explain
 select  era.child_entity  from entity_rel era  join user_entity ue on 
 ue.entity_id = era.parent_entity and ue.user_id=12345

 Nested Loop  (cost=0.00..2903.37 rows=29107 width=4)
   -  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 
 rows=2 width=4)
 Index Cond: (user_id = 10954)
   -  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 
 rows=317 width=8)
 Index Cond: (parent_entity = ue.entity_id)


 How can the estimated number of rows for the nested loop node EXCEED the 
 product of the 2 row estimates of the tables being joined?
 Not only does it exceed it - but it is orders of magnitude greater.

 Am I missing something obvious here?  I an see the nested loop row estimate 
 being LESS but certainly not more.


Can you also post the output of EXPLAIN ANALYZE for your query?




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Jeff Amiel




 Can you provide a self-contained test case that does this?

That response scares me.
:)
I can try - Every other table set (small, easy to experiment with)  returns 
results as expected - 
Is the implication that this looks 'unusual'?




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Jeff Amiel




- Original Message -
From: Amit Langote amitlangot...@gmail.com
To: Jeff Amiel becauseimj...@yahoo.com
Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
Sent: Friday, May 17, 2013 11:37 AM
Subject: Re: [GENERAL] Why does row estimation on nested loop make no sense to 
me

On Sat, May 18, 2013 at 1:25 AM, Jeff Amiel becauseimj...@yahoo.com wrote:
 On most nested loops that I do explain/explain analyze on, the row estimation 
 for the nested-loop itself is a product of the inner nodes of the nested loop.
 However in this case, I am stumped!

 explain
 select  era.child_entity  from entity_rel era  join user_entity ue on 
 ue.entity_id = era.parent_entity and ue.user_id=12345

 Nested Loop  (cost=0.00..2903.37 rows=29107 width=4)
   -  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 
rows=2 width=4)
         Index Cond: (user_id = 10954)
   -  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 
rows=317 width=8)
         Index Cond: (parent_entity = ue.entity_id)


 How can the estimated number of rows for the nested loop node EXCEED the 
 product of the 2 row estimates of the tables being joined?
 Not only does it exceed it - but it is orders of magnitude greater.

 Am I missing something obvious here?  I an see the nested loop row estimate 
 being LESS but certainly not more.


 Can you also post the output of explain analyze your-query?

I'm not worried about performance (per se) but about the row estimation issue, which 
propagates up as part of a bigger query.  But here ya go:

explain analyze
select  era.child_entity  from entity_rel era  join user_entity ue on 
ue.entity_id = era.parent_entity and ue.user_id=12345

Nested Loop  (cost=0.00..2903.37 rows=29107 width=4) (actual time=0.028..0.274 rows=201 loops=1)
  ->  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 rows=2 width=4) (actual time=0.011..0.012 rows=1 loops=1)
        Index Cond: (user_id = 12345)
        Heap Fetches: 1
  ->  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 rows=317 width=8) (actual time=0.013..0.164 rows=201 loops=1)
        Index Cond: (parent_entity = ue.entity_id)
Total runtime: 0.361 ms




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Amit Langote
On Sat, May 18, 2013 at 1:47 AM, Jeff Amiel becauseimj...@yahoo.com wrote:




 - Original Message -
 From: Amit Langote amitlangot...@gmail.com
 To: Jeff Amiel becauseimj...@yahoo.com
 Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
 Sent: Friday, May 17, 2013 11:37 AM
 Subject: Re: [GENERAL] Why does row estimation on nested loop make no sense 
 to me

 On Sat, May 18, 2013 at 1:25 AM, Jeff Amiel becauseimj...@yahoo.com wrote:
 On most nested loops that I do explain/explain analyze on, the row 
 estimation for the nested-loop itself is a product of the inner nodes of the 
 nested loop.
 However in this case, I am stumped!

 explain
 select  era.child_entity  from entity_rel era  join user_entity ue on 
 ue.entity_id = era.parent_entity and ue.user_id=12345

 Nested Loop  (cost=0.00..2903.37 rows=29107 width=4)
   -  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 
 rows=2 width=4)
 Index Cond: (user_id = 10954)
   -  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 
 rows=317 width=8)
 Index Cond: (parent_entity = ue.entity_id)


 How can the estimated number of rows for the nested loop node EXCEED the 
 product of the 2 row estimates of the tables being joined?
 Not only does it exceed it - but it is orders of magnitude greater.

 Am I missing something obvious here?  I an see the nested loop row estimate 
 being LESS but certainly not more.


 Can you also post the output of explain analyze your-query?

 I'm not worried about performance (per se) but the row estimation issue which 
 propagates up as part of a bigger query.  But here ya go:

 explain analyze
 select  era.child_entity  from entity_rel era  join user_entity ue on 
 ue.entity_id = era.parent_entity and ue.user_id=12345

 Nested Loop  (cost=0.00..2903.37 rows=29107 width=4) (actual 
 time=0.028..0.274 rows=201 loops=1)
   -  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 
 rows=2 width=4) (actual time=0.011..0.012 rows=1 loops=1)
 Index Cond: (user_id = 12345)
 Heap Fetches: 1
   -  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 
 rows=317 width=8) (actual time=0.013..0.164 rows=201 loops=1)
 Index Cond: (parent_entity = ue.entity_id)
 Total runtime: 0.361 ms

Have you tried ANALYZE (it's probably a case of insufficient/outdated
statistics at the planner's disposal), or perhaps considered changing
default_statistics_target?
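
Something along these lines, using the table and column names from your plan
(the numbers are only illustrative):

ANALYZE user_entity;
ANALYZE entity_rel;

-- or raise the sample size just for the join column instead of globally
ALTER TABLE entity_rel ALTER COLUMN parent_entity SET STATISTICS 1000;
ANALYZE entity_rel;

-- or bump the global knob for the session before re-running ANALYZE
SET default_statistics_target = 500;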


--
Amit Langote





Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Jeff Amiel




 explain analyze
 select  era.child_entity  from entity_rel era  join user_entity ue on 
 ue.entity_id = era.parent_entity and ue.user_id=12345

 Nested Loop  (cost=0.00..2903.37 rows=29107 width=4) (actual 
 time=0.028..0.274 rows=201 loops=1)
   -  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 
rows=2 width=4) (actual time=0.011..0.012 rows=1 loops=1)
         Index Cond: (user_id = 12345)
         Heap Fetches: 1
   -  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 
rows=317 width=8) (actual time=0.013..0.164 rows=201 loops=1)
         Index Cond: (parent_entity = ue.entity_id)
 Total runtime: 0.361 ms

Have you tried analyze (it's probably a case of insufficient/outdated
statistics to planner's disposal) or probably consider changing
default_statistics_target?


Again - my question revolves not around whether or not I am getting good or 
bad estimates - my question is about the fact that the nested-loop row 
estimate does not appear to be derived from the nodes below it - it is off by 
orders of magnitude.  I've never seen this before.
That aside, yes - I did analyze and tweak the stats target during experimentation - 
no change.




Re: [GENERAL] Best practice on inherited tables

2013-05-17 Thread Alfonso Afonso
Hi Frank

Although you are thinking in OOP terms, SQL is itself a definition model that 
you should not ignore and, IMHO, you should try to follow normalization principles.

You can build a robust and normalized schema (a table primarylocation, a table 
secondlocation that has an idprimarylocation, etc.) and later build your OOP 
software by translating this to the proper classes (in Java you could use a 
DAO/POJO class or Hibernate/JPA, for example).

With this solution you can obtain all the benefits of the RDBMS as well as 
robust OOP software :)
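
For example, a rough, untested sketch of that kind of layout, reusing the columns
from your example (the names are just for illustration):

CREATE TABLE location (
    id   serial PRIMARY KEY,
    name varchar(50) NOT NULL
);

CREATE TABLE workplace (
    location_id integer PRIMARY KEY REFERENCES location(id),
    workers     integer NOT NULL
);

-- inserting a workplace is then two steps (or one data-modifying CTE):
WITH new_loc AS (
    INSERT INTO location (name) VALUES ('My Place') RETURNING id
)
INSERT INTO workplace (location_id, workers)
SELECT id, 5 FROM new_loc;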

Best regards

El 17/05/2013, a las 13:46, Frank Lanitz fr...@frank.uvena.de escribió:

 Hi folkes,
 
 I'm looking for a nice way to build this scenario:
 I've got a lot of locations with some special types. For example I've
 got workplaces, places like real laboratories and virtual places like
 maybe parcel service. For each of the different types I need to store
 some common attributes as well as some special ones. Having OOP in mind
 I came to the point of inherit tables. so I've create something like
 that (just a minimal example):
 
 CREATE TABLE locations(
   id SERIAL PRIMARY KEY,
   name varchar(50)
 );
 CREATE TABLE workplaces(
   workers integer
 ) INHERITS (locations);
 
 But now I got stuck with the primary key thing. As described in the
 documentation it is not supported. And now I'm looking for the best way
 on having at table workplaces also the unique constraint from locations
 etc. so e.g. I can do something like that:
 
 INSERT INTO workplaces (name, workers) VALUES ('My Place', 5);
 
 having the incrementation and the uniqueness. I was thinking off
 creating a number of triggers doing this for me but wondering whether
 there might be a better way.
 
 Cheers,
 Frank
 
 BTW: Using Postgres 9.2 and up
 
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

Alfonso Afonso
(personal)









Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Amit Langote
 explain analyze
 select  era.child_entity  from entity_rel era  join user_entity ue on 
 ue.entity_id = era.parent_entity and ue.user_id=12345

 Nested Loop  (cost=0.00..2903.37 rows=29107 width=4) (actual 
 time=0.028..0.274 rows=201 loops=1)
   -  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 
 rows=2 width=4) (actual time=0.011..0.012 rows=1 loops=1)
 Index Cond: (user_id = 12345)
 Heap Fetches: 1
   -  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 
 rows=317 width=8) (actual time=0.013..0.164 rows=201 loops=1)
 Index Cond: (parent_entity = ue.entity_id)
 Total runtime: 0.361 ms


I noticed that the explain output in your first mail shows Index Cond:
(user_id = 10954) whereas your query says ue.user_id=12345. Is something
going on with that? Your explain analyze does show the same value at
both places, though, with the row estimate being 29107 in both cases, which,
well, looks awful and quite unexpected, though there seem to have been
similar observations before.


Have you tried analyze (it's probably a case of insufficient/outdated
statistics to planner's disposal) or probably consider changing
default_statistics_target?


 Again - my question revolves not around the whether or not I am getting good 
 or bad estimates - my question is related to the fact that the nested-loop 
 row estimation does not appear to be derived from the nodes below it - it is 
 off by orders of magnitude.  I've never seen this before.
 That aside, yes - I did analyze and tweak stats target during experimentation 
 - no change.

Did you also check SELECT count(*) on both relations, and do the numbers
look related?
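
For instance (a rough check, using the table and column names from the plan
above), it is worth comparing the actual counts with what the planner has in
pg_stats for the join column, since the inner-scan estimate of 317 rows per
outer row comes roughly from the table size divided by that column's n_distinct:

SELECT count(*) FROM user_entity WHERE user_id = 12345;
SELECT count(*) FROM entity_rel;

SELECT attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'entity_rel' AND attname = 'parent_entity';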

--
Amit Langote




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-17 Thread Jeff Amiel




- Original Message -
From: Amit Langote amitlangot...@gmail.com
To: Jeff Amiel becauseimj...@yahoo.com
Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
Sent: Friday, May 17, 2013 2:21 PM
Subject: Re: [GENERAL] Why does row estimation on nested loop make no sense to 
me

 explain analyze
 select  era.child_entity  from entity_rel era  join user_entity ue on 
 ue.entity_id = era.parent_entity and ue.user_id=12345

 Nested Loop  (cost=0.00..2903.37 rows=29107 width=4) (actual 
 time=0.028..0.274 rows=201 loops=1)
   -  Index Only Scan using entity_pk on user_entity ue  (cost=0.00..62.68 
rows=2 width=4) (actual time=0.011..0.012 rows=1 loops=1)
         Index Cond: (user_id = 12345)
         Heap Fetches: 1
   -  Index Scan using rel_parent on entity_rel era  (cost=0.00..1261.85 
rows=317 width=8) (actual time=0.013..0.164 rows=201 loops=1)
         Index Cond: (parent_entity = ue.entity_id)
 Total runtime: 0.361 ms


I noticed when the explain output in your first mail shows Index Cond:
(user_id = 10954) whereas your query says: ue.user_id=12345. Something
with that? Although, your explain analyze does show the same values at
both places with the row estimate being 29107 in both cases, which,
well, looks awful and quite unexpected though there seem to have been
similar observations before

That was a weak attempt at hiding 'real' data - I had intended to change them all to 
12345.
:)

Did you also check select count(*) on both the relations and found
related numbers?

Nothing related (that I could find) in the row counts - one table has 20 
million rows or so and the other 65K.




Re: [GENERAL] Comunication protocol

2013-05-17 Thread John R Pierce

On 5/17/2013 8:15 AM, Karel Riveron Escobar wrote:
Thanks Tom for your response so fast. I'm reading right now and I 
found something but I didn't understood very well. In the 
documentation says that the protocol we're talking about is supported 
over TCP/IP family protocols, but doesn't says what exactly protocol 
is it? I want to know something more specific.




It's the PostgreSQL protocol, transported over TCP, usually on port 5432.



--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Tuning read ahead continued...

2013-05-17 Thread Ramsey Gurley
On May 16, 2013, at 5:56 PM, Ramsey Gurley wrote:

 Hi All,
 I tried bumping my read ahead up to 4096. Instead of having faster reads, it
 seems it actually slowed things down. In fact, most of the tuning suggestions
 I've tried have made little to no difference in the results I get from bonnie++.

I've run more tests with bonnie++. I'm beginning to wonder if there's something
wrong with my system or my setup. In every test I have run, Seq Reads is faster
with read ahead set to 256. If I increase read ahead to 4096 as suggested in
PostgreSQL 9.0 High Performance, I get slower reads and slower writes.

Other settings I've made as suggested by the book:

/dev/sdb1 / ext3 noatime,errors=remount-ro 0 1
vm.swappiness=0
vm.overcommit_memory=2
echo 2 > /proc/sys/vm/dirty_ratio
echo 1 > /proc/sys/vm/dirty_background_ratio

Here is 4096 read ahead:

Version 1.03e       --Sequential Output-- --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
498088-db1.s 96280M           130123  24 103634  15          277467  14 652.4  1
498088-db1.smarthealth.com,96280M,,,130123,24,103634,15,,,277467,14,652.4,1,

And here is the default 256 read ahead:

Version 1.03e       --Sequential Output-- --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
498088-db1.s 96280M           160881  28 104868  17          286109  17 591.9  0
498088-db1.smarthealth.com,96280M,,,160881,28,104868,17,,,286109,17,591.9,0,

I also made some zcav plots. They are very flat on 256, which seems to indicate
some limiting factor, but they also appear to be consistently *higher* than the
4096 values after about 70GB. Does this look familiar to anyone?

Re: [GENERAL] LONG delete with LOTS of FK's

2013-05-17 Thread Gavin Flower

On 18/05/13 03:06, Larry Rosenman wrote:

On 2013-05-16 18:35, David Kerr wrote:


- I'll take a look tomorrow, but we WERE seeing Seq Scans against
- multi-million row tables, so I suspect Tom is right on with the
- replanning that's in 9.2 fixing it, and I'm in the process of
- validating that.

That seems likely, although you could try enable_seqscan=false as well.

Dave

The 9.2 upgrade DOES fix my issue.

Thanks again, Tom and everyone.



Did you also try enable_seqscan=false as well?

Cheers,
Gavin


Re: [GENERAL] LONG delete with LOTS of FK's

2013-05-17 Thread Larry Rosenman
Gavin Flower gavinflo...@archidevsys.co.nz wrote:
On 18/05/13 03:06, Larry Rosenman wrote:
 On 2013-05-16 18:35, David Kerr wrote:

 - I'll take a look tomorrow, but we WERE seeing Seq Scans against
 - multi-million row tables, so I suspect Tom is right on with the
 - replanning that's in 9.2 fixing it, and I'm in the process of
 - validating that.

 That seems likely, although you could try enable_seqscan=false as well.

 Dave
 The 9.2 upgrade DOES fix my issue.

 Thanks again, Tom and everyone.


Did you also try enable_seqscan=false as well?

Cheers,
Gavin

No.  Same set up as 9.1
-- 
Sent from Kaiten Mail. Please excuse my brevity.