Re: [GENERAL] Using relations in the SELECT part

2017-03-30 Thread Giuseppe Sacco
Hello Tom,

On Wed, 29/03/2017 at 09.59 -0400, Tom Lane wrote:
[...]
> > Furthermore, let's assume postgres does a cartesian product, if I
> > add a
> > new relation as third element, does it create 4x3 product?
> 
> You've hit on the reason why it's semantically ugly: it's not very
> clear what to do with multiple SRFs in one targetlist.  LATERAL,
> together with the ROWS FROM construct, allows clear specification
> of both of the useful behaviors (cartesian product and eval-set-
> returning-functions-in-lockstep).  The multiple-SRFs-in-targetlist
> behavior that we inherited from Berkeley is just a mess, as it
> effectively runs the SRFs until reaching the least common multiple of
> their periods.  We're changing that for v10 though.  You might find
> this commit informative (at least the commit message and
> documentation
> changes):
> 
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=69f4b9c85

Thanks for the detailed answer. I'll study LATERAL joins further and
change my query.
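For reference, a sketch of the LATERAL rewrite (using the `t` table from my original message), plus the ROWS FROM form that evaluates several set-returning functions in lockstep rather than as a product:

```sql
-- One row per (key, plate) pair, with the SRF moved into FROM.
SELECT t.key, s.plate
FROM t,
     LATERAL unnest(regexp_split_to_array(t.plates, E'\\s+')) AS s(plate);

-- ROWS FROM runs the two SRFs in lockstep, padding the shorter one
-- with NULLs instead of producing a cartesian product.
SELECT t.key, r.plate, r.n
FROM t,
     ROWS FROM (unnest(regexp_split_to_array(t.plates, E'\\s+')),
                unnest('{1,2}'::int[])) AS r(plate, n);
```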

BTW, the commit you pointed out was very instructive for me.

Thank you,
Giuseppe



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Using relations in the SELECT part

2017-03-29 Thread Giuseppe Sacco
Hello,
I am writing to this list since I wrote a query that I cannot really
understand. So, thanks to anyone who will light my darkness :-)

I have a table with two columns: the first one is a key, the second one
is a space-separated list of car plates. What I need to extract is a
result set with two columns: the first one should always be the key, the
second one a single plate. If a record contains many plates, the result
set should contain one row per plate.

Example:

postgres=# create temporary table t
 (key varchar primary key, plates varchar);
postgres=# insert into t values ('00','AA888BB CC777DD GG333JJ'),
 ('11','ZZ888KK');
INSERT 0 2
postgres=# select * from t;
 key |         plates          
-----+-------------------------
 00  | AA888BB CC777DD GG333JJ
 11  | ZZ888KK
(2 rows)


This is what I would like to extract from t:
 key |  plate  
-----+---------
 00  | AA888BB
 00  | CC777DD
 00  | GG333JJ
 11  | ZZ888KK

the solution I found is:

postgres=# select key,
 unnest(regexp_split_to_array(plates, E'\\s+')) AS plate from t;
 key |  plate  
-----+---------
 00  | AA888BB
 00  | CC777DD
 00  | GG333JJ
 11  | ZZ888KK
(4 rows)


What did I write? The first operation converts the original space-separated
list into an array; the second one turns that array into a relation
containing many records.


Early questions:

1. Why may I put a relation in the SELECT part (instead of the FROM)?
When I studied SQL, I was told to put all relations in FROM, and to put
in the SELECT part only the columns, or expressions over columns, for
formatting the output.

2. Why does PostgreSQL create a cartesian product using a first element
(a single column, "key") and a second element (a relation, "plate")?

3. How does PostgreSQL define the second element? It is not "static",
since it depends on the first element: it depends on the current record.
For every "key" there is a different "plate" result set.


Furthermore, assuming postgres does build a cartesian product: if I add a
new relation as a third element, does it create a 4x3 product? Let's see:

postgres=# select key,
 unnest(regexp_split_to_array(plates, E'\\s+')) AS plate1,
 unnest(regexp_split_to_array(plates, E'\\s+')) AS plate2 from t;
 key | plate1  | plate2  
-----+---------+---------
 00  | AA888BB | AA888BB
 00  | CC777DD | CC777DD
 00  | GG333JJ | GG333JJ
 11  | ZZ888KK | ZZ888KK
(4 rows)

4. What happened? May this be somehow related to IMMUTABLE functions?
Is unnest an immutable function? And, in any case, why is this not
a cartesian product?

Let's try in a different way, with a different array:

postgres=# select key,
 unnest(regexp_split_to_array(plates, E'\\s+')) AS plate1, 
 unnest('{1,2}'::int[]) AS array2 from t;
 key | plate1  | array2 
-----+---------+--------
 00  | AA888BB |      1
 00  | CC777DD |      2
 00  | GG333JJ |      1
 00  | AA888BB |      2
 00  | CC777DD |      1
 00  | GG333JJ |      2
 11  | ZZ888KK |      1
 11  | ZZ888KK |      2
(8 rows)

This time it is a cartesian product. Why does PostgreSQL act differently?

Thank you,
Giuseppe




Re: [GENERAL] How to setup Active Directory users in Postgres 9.3.11

2016-03-10 Thread Giuseppe Sacco
Hello Wyatt,

On Wed, 09/03/2016 at 14.35 +, Wyatt Sanford wrote:
> I have recently been tasked with installing Postgres 9.3.11 on a
> Windows 2012 R2 server on Active Directory and restoring backups from
> a Linux server running Postgres 9.3.11.  I have all of the databases
> restored to the windows server and now I need to set up access for
> users on Active Directory.  I’ve read a few things online, but did
> not find any good examples.  I know that you have to add login roles
[...]
> ports.  Can anyone give me some examples of the entries I need to add
> to the pg_hba.conf file or point me to some examples on the web.

I found the documentation on the web site quite good. These are two
examples I use every day for authenticating postgres users to a remote
AD:

host neos all 127.0.0.1/32   ldap ldapserver=ipaddress ldapbasedn="OU=xxx,DC=yyy,DC=local" ldapbinddn="CN=uuu,OU=xxx,DC=yyy,DC=local" ldapbindpasswd=password ldapsearchattribute=sAMAccountName
host neos all 10.42.112.0/24 ldap ldapserver=ipaddress ldapprefix="cn=" ldapsuffix=", ou=Users, ou=, dc=yyy, dc=local"

The first one uses a special account for connecting and looking up the
sAMAccountName before checking credentials; the second one binds directly
with the specified credentials.

Please note that, besides importing all databases, you should also
import the "globals", which contain all the role definitions.

More info, for postgres 9.3, on the web site
http://www.postgresql.org/docs/9.3/interactive/auth-methods.html#AUTH-LDAP

More info about moving globals
http://www.postgresql.org/docs/9.3/static/app-pg-dumpall.html

Please note that it is PostgreSQL that connects to AD, not the other way
around.

Bye,
Giuseppe




Re: [GENERAL]

2015-11-13 Thread Giuseppe Sacco
Hello,

On Fri, 13/11/2015 at 13.38 +0800, Alex Luya wrote:
> Hello,
>        I created a new database by 
>        create database icare;
>        then quit off psql and  run:
>         pg_restore --clean --create --exit-on-error --dbname=icare
> icare-test.tar
>          it complains:
>           pg_restore: [archiver (db)] Error while PROCESSING TOC:
>           pg_restore: [archiver (db)] Error from TOC entry 21; 2615
> 80924 SCHEMA icare icare
>           pg_restore: [archiver (db)] could not execute query: ERROR:
>  permission denied for database icare
>            Command was: CREATE SCHEMA icare;

From what I understand, it means that the PostgreSQL user that is
restoring the dump cannot create a schema in the "icare" database. So, is
that user the same one that issued the "create database" earlier? If it's
not, then you should grant all required privileges to that user.
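For instance, something along these lines (a sketch: "restore_user" is a hypothetical name for the role running pg_restore):

```sql
-- Run as a superuser or as the owner of the database.
GRANT ALL PRIVILEGES ON DATABASE icare TO restore_user;
-- or simply make that role the owner:
ALTER DATABASE icare OWNER TO restore_user;
```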

Bye,
Giuseppe




Re: [GENERAL] Postgres architecture for multiple instances

2015-02-23 Thread Giuseppe Sacco
On Sun, 22/02/2015 at 14.53 -0500, Tom Lane wrote:
[...]
> That's because the above claim is nonsense.  pg_largeobject is not shared
> across databases of a cluster.
>
> You could well have collisions against large objects in the same database,
> though, if you're adding more large objects to an existing database and
> expecting to preserve their OIDs.

The problem is that when you use large objects, you have a table that
contains the OIDs of the large objects, and you need to keep the same
table-to-LOB link when moving the database. So, when you export the
database using pg_dump, it creates an SQL script that restores the
database using the same OIDs. If you run that script on an arbitrary
cluster, some of those OIDs may already be in use, and the import
process fails.

Basically, you cannot rely on pg_dump for moving databases that use large
objects, because there is no guarantee that your import will succeed.

I normally import such dumps into new clusters, and there it works.

Bye,
Giuseppe





Re: [GENERAL] Postgres architecture for multiple instances

2015-02-22 Thread Giuseppe Sacco
On Sat, 21/02/2015 at 16.01 -0600, Samuel Smith wrote:
> Howdy,
> I am looking for advice on migrating to postgres from another database
> system.
[...]

People have already written you some comments; here are two more.

DB2 instances run as different OS users, so if you need the same
approach for security reasons, you will need to create different
postgresql clusters.

Another important point is about large objects, if you happen to use
them: their OID is not just unique to the database, but to the whole
cluster. This means that when you move a database from a production
system to a test cluster, you may get errors when the same OID already
exists in the target cluster (even if it is used in a different
database).
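To anticipate such collisions, the target can be inspected first (a sketch; the pg_largeobject_metadata catalog is available from 9.0 on):

```sql
-- Large-object OIDs already taken in the current database; compare these
-- with the OIDs preserved in the dump before restoring.
SELECT oid FROM pg_catalog.pg_largeobject_metadata ORDER BY oid;
```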

Bye,
Giuseppe





[GENERAL] Slow delete when many foreign tables are defined

2014-12-01 Thread Giuseppe Sacco
Hello,
I have a main table and a lot of detail tables that reference the
main one.

Every time I delete a record from the main table, a check is done on
every detail table that contains a foreign key toward the main table.

This is a simplified schema:

create table main (
type varchar,
serial numeric,
description varchar not null,
constraint mainpk primary key (type,serial));

create table details1 (
type varchar check (type = '1'),
serial numeric,
details1 varchar not null,
constraint details1pk primary key (type,serial),
constraint details1fk foreign key (type,serial) references
main(type,serial));

create table details2 (
type varchar check (type = '2'),
serial numeric,
details2 varchar not null,
constraint details2pk primary key (type,serial),
constraint details2fk foreign key (type,serial) references
main(type,serial));

and suppose I have about 50-100 of these detail tables, and about a
thousand records in each detail table. All detail tables use a different
value for the type column.

Now, when I delete a record, I delete it from a detail table and then
from the main table.

When I delete from the main table, PostgreSQL checks for references from
all detail tables, while I would expect it to check only the detail table
whose type check constraint matches the deleted row.

insert into main values ('1',1,'desc');
insert into main values ('2',1,'desc');
insert into details1 values ('1',1,'desc');
insert into details2 values ('2',1,'desc');

begin;
delete from details2;
explain analyze delete from main where type = '2';

                                          QUERY PLAN                                          
----------------------------------------------------------------------------------------------
 Delete on main  (cost=4.17..11.28 rows=3 width=6) (actual time=0.015..0.015 rows=0 loops=1)
   ->  Bitmap Heap Scan on main  (cost=4.17..11.28 rows=3 width=6) (actual time=0.011..0.011 rows=1 loops=1)
         Recheck Cond: ((type)::text = '2'::text)
         Heap Blocks: exact=1
         ->  Bitmap Index Scan on mainpk  (cost=0.00..4.17 rows=3 width=0) (actual time=0.007..0.007 rows=1 loops=1)
               Index Cond: ((type)::text = '2'::text)
 Planning time: 0.035 ms
 Trigger for constraint details1fk: time=0.107 calls=1
 Trigger for constraint details2fk: time=0.197 calls=1
 Execution time: 0.331 ms

As you may see, the delete operation fires trigger details1fk even though
data in table details1 cannot be affected by this delete.

You may imagine what happens with about 50 detail tables...

Is there any way to make it work faster?

Thank you very much,
Giuseppe





Re: [GENERAL] Slow delete when many foreign tables are defined

2014-12-01 Thread Giuseppe Sacco
On Mon, 01/12/2014 at 09.49 -0600, Andy Colson wrote:
> On 12/1/2014 9:23 AM, Giuseppe Sacco wrote:
>> Hello,
>> I have a main table and a lot of details tables that reference the
>> main one.
>>
>> Every time I delete a record from the main table, a check is done on
>> every details table that contain a foreign key toward main table.
[...]
> I can think of two options:
>
> 1) Don't use 50 different detail tables.  A single detail table with the
> type column will work much faster.  Is there a good reason to break them
> out?  (# rows is not a good reason, btw).

Basically we do have a lot of different attributes in each detail table.
Let's say we use 20-30 specific columns in each of them, so why should we
waste disk space and CPU cycles handling all these columns in one table?
If I understand correctly, you are suggesting adding about 25*50 columns
to the main table and setting values for only 25 of them.

Moreover, our ORM would probably go crazy :-)

> 2) Try inheritance.  I have no idea if it'll help, but I thought I'd
> read someplace where the planner knew a little more about what types of
> rows go into which tables.

This would probably help, but we are sticking to ANSI SQL to make it
easy to port our application to other DBMSs.

Bye,
Giuseppe





Re: [GENERAL] Large object rights management

2012-02-15 Thread Giuseppe Sacco
Is there anyone interested in this subject?

On Sun, 05/02/2012 at 23.30 +0100, Giuseppe Sacco wrote:
> Hi all,
> I wrote an application that stores a large quantity of files in the
> database as large binary objects. There are around 50 tables (all in one
> schema) and only one table hosts all these large objects. Every user
> connects to the database using his own user, so all users are members of
> the same group role used for granting permissions to every application user.
>
> My problem is about the large object permissions introduced in postgresql
> 9.0, since my application permits everyone to insert/update/delete
> any large object. Now, since 9.0, deleting a large object is only
> possible for the user that owns it.
>
> I know that 9.0 also introduced an option for reverting this behaviour
> to what it was in 8.4, but I wonder if there is any other way of sharing
> and deleting large objects in 9.0.
>
> Thanks,
> Giuseppe





[GENERAL] Large object rights management

2012-02-05 Thread Giuseppe Sacco
Hi all,
I wrote an application that stores a large quantity of files in the
database as large binary objects. There are around 50 tables (all in one
schema) and only one table hosts all these large objects. Every user
connects to the database using his own user, so all users are members of
the same group role used for granting permissions to every application user.

My problem is about the large object permissions introduced in postgresql
9.0, since my application permits everyone to insert/update/delete
any large object. Now, since 9.0, deleting a large object is only
possible for the user that owns it.

I know that 9.0 also introduced an option for reverting this behaviour
to what it was in 8.4, but I wonder if there is any other way of sharing
and deleting large objects in 9.0.

Thanks,
Giuseppe




[GENERAL] error permission denied for relation on postgresql 9.0.6 during CREATE TABLE

2012-01-27 Thread Giuseppe Sacco
Hi,
I get this error while executing a CREATE TABLE statement.
This is my CREATE statement:

CREATE TABLE agenzia.BarcodeByDocumentInfo (
docId VARCHAR(17) NOT NULL,
defaultOp VARCHAR(10) NOT NULL DEFAULT 'Append',
CONSTRAINT BcByDocInfo_pk PRIMARY KEY (docId),
CONSTRAINT BcByDoc_defOp_ck
 CHECK ( defaultOp = 'Append' OR defaultOp = 'Overwrite' ),
CONSTRAINT BcByDoc_docId_fk FOREIGN KEY(docId)
 REFERENCES agenzia.Documents(docId)
);

When I execute it on PostgreSQL 9.0.6 I get these messages:

NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "bcbydocinfo_pk"
for table "barcodebydocumentinfo"
ERROR:  permission denied for relation documents

So, if I understand the error message correctly, this is about a missing
permission on table "documents", which is only used in my CREATE TABLE
statement in a FOREIGN KEY constraint.

This is table documents:

neos=> \d agenzia.documents
              Table "agenzia.documents"
     Column     |         Type          |            Modifiers             
----------------+-----------------------+----------------------------------
 docid          | character varying(17) | not null
 description    | character varying(45) | 
 protid         | character varying(50) | 
 iscommondata   | character(5)          | not null default 'FALSE'::bpchar
 tobecrypted    | character(5)          | not null default 'FALSE'::bpchar
 islistofvalues | character(5)          | not null default 'FALSE'::bpchar
 isfulltext     | character(5)          | not null default 'FALSE'::bpchar
Indexes:
[...]
Check constraints:
[...]
Foreign-key constraints:
[...]
Referenced by:
[...]

I am the owner of table documents:

neos=> \dt agenzia.documents
        List of relations
 Schema  |   Name    | Type  | Owner 
---------+-----------+-------+-------
 agenzia | documents | table | neos
(1 row)

I read the documentation about PostgreSQL 9.0, and it seems the error
message is about the "x" (REFERENCES) permission. As you may see, "x" is
among my permissions:

neos=> \dp agenzia.documents
                            Access privileges
 Schema  |   Name    | Type  |  Access privileges   | Column access privileges 
---------+-----------+-------+----------------------+--------------------------
 agenzia | documents | table | neos=arwdDxt/neos   +| 
         |           |       | agenzia_r=arwdt/neos | 
(1 row)

Do you have any suggestions about this problem?

I thank you very much,
Giuseppe





Re: [GENERAL] error permission denied for relation on postgresql 9.0.6 during CREATE TABLE

2012-01-27 Thread Giuseppe Sacco
On Fri, 27/01/2012 at 08.54 -0800, Adrian Klaver wrote:
> On Friday, January 27, 2012 8:25:56 am Giuseppe Sacco wrote:
[...]
>> I am owner of table documents:
>>
>> neos=> \dt agenzia.documents
>>         List of relations
>>  Schema  |   Name    | Type  | Owner 
>> ---------+-----------+-------+-------
>>  agenzia | documents | table | neos
>> (1 row)
>>
>> I read the documentation about postgresql 9.0 and it seems the error
>> message is about permission "x". As you may see "x" is among my
>> permissions:
>
> The "x" (REFERENCES) permission needs to be on both tables for the owner of the
> referenced table (neos).

Well, I am the owner of the referenced table. I cannot grant anything on
the barcodebydocumentinfo table, since it is the one I am trying to CREATE.

Thanks,
Giuseppe




Re: [GENERAL] error permission denied for relation on postgresql 9.0.6 during CREATE TABLE

2012-01-27 Thread Giuseppe Sacco
On Fri, 27/01/2012 at 12.38 -0500, Tom Lane wrote:
>> NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
>> "bcbydocinfo_pk" for table "barcodebydocumentinfo"
>> ERROR:  permission denied for relation documents
>
> This example works for me.  Are you sure you are executing the CREATE
> TABLE command as user neos?

Until ten minutes ago I was sure about it, but I was wrong. I was
writing to the list about it when I read your message.

Sorry for the noise.

Thanks,
Giuseppe




Re: [GENERAL] SSL certificates issue

2011-08-23 Thread Giuseppe Sacco
On Mon, 22/08/2011 at 09.37 -0400, Tom Lane wrote:
> Asia <asia123...@op.pl> writes:
>> Now the issue is then when using libpq it was enough to have only root
>> certificate in server's root.crt and it worked fine.
>> But when I tried using the same with JDBC it turned out that I need to put
>> whole chain (2 certs) of Intermediate CA 1 in server's root.crt.
[...]
> In the JDBC case you'd need to put all those certs into the client's
> keystore, which I'm afraid I don't know the details of doing.  Possibly
> somebody on pgsql-jdbc could help you with that.

You should import the CA certificate into your JRE CA certstore with
these commands:

cd $JAVA_HOME/jre/lib/security
keytool -import -trustcacerts -alias $YOURCAALIAS \
-file $YOURCACERTFILE -keystore cacerts

I usually store the whole chain from the primary CA in both the client
and server certificates.

Bye,
Giuseppe




Re: [GENERAL] Executing more than one function.

2011-08-23 Thread Giuseppe Sacco
On Tue, 23/08/2011 at 16.30 +0100, f vf wrote:
[...]

> it takes more time than if I execute one function at the time and sum
> the execution times of each one:
>
> BEGIN;
> SELECT functionX();
> COMMIT;
 
You should probably account for the time of the COMMIT operation. In one
case you commit only once, while in the other case you commit three times.
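You can see the difference from psql (a sketch; the functionX names are the placeholders from the message above):

```sql
\timing on
-- one transaction, one COMMIT:
BEGIN; SELECT function1(); SELECT function2(); SELECT function3(); COMMIT;
-- three transactions, three COMMITs (each COMMIT pays its own sync cost):
BEGIN; SELECT function1(); COMMIT;
BEGIN; SELECT function2(); COMMIT;
BEGIN; SELECT function3(); COMMIT;
```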

Bye,
Giuseppe





Re: [GENERAL] About permissions on large objects

2011-07-15 Thread Giuseppe Sacco
Hi Howard,

On Wed, 13/07/2011 at 23.30 +0100, Howard Cole wrote:
> Hi Giuseppe,
>
> Perhaps you can create a trigger that monitors for the insertion of an
> oid and then grant permissions. No idea if this can be done, but if it
> can it will save you lots of repeated grants.
[...]

Thanks for your tip. I already created a trigger on all my tables, as
this one:

CREATE OR REPLACE FUNCTION grant_large_object() RETURNS trigger AS '
BEGIN
execute ''GRANT SELECT,UPDATE ON LARGE OBJECT '' || NEW.IMAGE || '' TO agenzia_r'';
RETURN NEW;
END;'
LANGUAGE 'plpgsql';

CREATE TRIGGER grant_large_object
AFTER INSERT OR UPDATE ON agenzia.imagebydocument
FOR EACH ROW EXECUTE PROCEDURE grant_large_object();

And it seems to be working. I would still like to know if there is any
way to query the ACL metadata, maybe from table
pg_catalog.pg_largeobject_metadata, in order to collect information about
the rights granted on large objects.
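For instance, something like this might list the large objects whose ACL does not yet mention the group role (a sketch against the 9.0 catalog; lomacl is NULL when no explicit grant has ever been made):

```sql
-- Large objects that still need a GRANT for agenzia_r: either no ACL at
-- all, or an ACL that does not mention the role.
SELECT oid
FROM pg_catalog.pg_largeobject_metadata
WHERE lomacl IS NULL
   OR NOT lomacl::text LIKE '%agenzia_r%';
```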

Bye,
Giuseppe




[GENERAL] About permissions on large objects

2011-07-13 Thread Giuseppe Sacco
Hi all,
I moved a few clusters from 8.4 to 9.0 since I needed the new way of
authenticating against LDAP (or, in my case, AD). Now, I found that the
new database version introduced permissions on large objects, so my
application, which shares large objects across a group, requires a bit
of change.

While the application code will be changed in order to grant rights on
large objects too, I would like to know if there is any way of listing
the current rights, i.e., of finding all large objects that still need
their permissions changed.

Currently I cannot distinguish which large objects have already been
granted, so I grant permissions on all large objects. This is quite time
consuming, about 5 minutes, and needs to be executed a few times per hour.

This is what I do now:

do $$
declare r record;
begin
for r in select distinct loid from pg_catalog.pg_largeobject loop
   execute 'GRANT SELECT,UPDATE ON LARGE OBJECT ' || r.loid || ' TO agenzia_r';
end loop;
end$$;

Is there a better/faster way?

Thanks,
Giuseppe




Re: [GENERAL] About permissions on large objects

2011-07-13 Thread Giuseppe Sacco
Hi Howard,

On Wed, 13/07/2011 at 13.18 +0100, Howard Cole wrote:
[...]
> As an interim solution, you could set the large object compatibility:
>
> www.postgresql.org/docs/9.0/interactive/runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES

thanks for pointing out this option. I already evaluated it and decided
to keep 9.0 with the new large object permissions, since I think they
are a good thing.

Is there any other possibility?

Thanks to all,
Giuseppe

