[GENERAL] Do not understand high estimates of index scan vs seq scan

2013-06-20 Thread Antonio Goméz Soto
Hi all,

I do not understand why postgreSQL estimates an index scan only half as fast as 
a seq scan:

system=# explain select * from queuelog;
                            QUERY PLAN
-------------------------------------------------------------------
 Seq Scan on queuelog  (cost=0.00..20530.29 rows=610929 width=148)
(1 row)

system=# explain select * from queuelog where queuelog.start_time >=
'2013-05-20 8:30' and queuelog.start_time <= '2013-06-21 17:0';
                                      QUERY PLAN
-------------------------------------------------------------------
 Index Scan using queuelog_start_time on queuelog  (cost=0.00..13393.18 rows=316090 width=148)
   Index Cond: ((start_time >= '2013-05-20 08:30:00+02'::timestamp with time zone) AND (start_time <= '2013-06-21 17:00:00+02'::timestamp with time zone))
(2 rows)


Is that solely because it needs to compare each index value to a fixed date/time?
I would assume the index to be much smaller than the actual data, or is the
estimate based only on the number of rows?
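Part of the answer is that the planner's cost is driven mostly by the number of heap pages it expects to touch, not by the size of the index: an index scan over half the rows still visits most table pages, and in random order. One way to see how the planner costs the rejected alternative (a sketch, not run against this table) is to toggle the planner switches and re-run EXPLAIN:

```sql
-- Sketch: make the planner cost the other plan for the same query.
-- These switches only bias the cost model; they do not change results.
SET enable_indexscan = off;
SET enable_bitmapscan = off;
EXPLAIN SELECT * FROM queuelog
 WHERE start_time >= '2013-05-20 8:30'
   AND start_time <= '2013-06-21 17:0';

SET enable_indexscan = on;
SET enable_bitmapscan = on;
SET enable_seqscan = off;
EXPLAIN SELECT * FROM queuelog
 WHERE start_time >= '2013-05-20 8:30'
   AND start_time <= '2013-06-21 17:0';

RESET enable_seqscan;
```

Comparing the two cost totals shows how much of the index plan's estimate comes from random heap fetches (weighted by random_page_cost, default 4) rather than from reading the index itself.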


Thanks,
Antonio


PS: here's the queuelog definition:

   Table public.queuelog
  Column  |   Type   |   Modifiers
--+--+---
 id   | integer  | not null default 
nextval('queuelog_id_seq'::regclass)
 created  | timestamp with time zone | not null default now()
 lastupdate   | timestamp with time zone | not null default now()
 start_time   | timestamp with time zone | not null default now()
 sessionid| character varying(50)| not null default ''::character 
varying
 call_seq | integer  | not null default 1
 queue| integer  | not null default 1
 dial | character varying(24)| not null default ''::character 
varying
 agent| integer  | not null default 1
 agents   | integer  | not null default 0
 agents_logged_in | integer  | not null default 0
 agents_avail | integer  | not null default 0
 queue_pos| integer  | not null default 1
 waittime | numeric  | not null default (0)::numeric
 ringtime | numeric  | not null default (0)::numeric
 talktime | numeric  | not null default (0)::numeric
 cause| integer  | not null default 16
 from_function| character varying(24)|
 from_lookupid| integer  | not null default 1
 to_function  | character varying(24)|
 to_lookupid  | integer  | not null default 1
 maxcallers   | integer  | not null default 0
Indexes:
queuelog_pkey PRIMARY KEY, btree (id)
queuelog_start_time btree (start_time)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Slow query and using wrong index, how to fix? Probably naive question..

2013-05-22 Thread Antonio Goméz Soto
Hi,

I am using postgresql 8.1 (CentOS5). I have the following table:

system # \d history
   Table public.history
  Column  |   Type   |  Modifiers
--+--+--
 id   | integer  | not null default 
nextval('history_id_seq'::regclass)
 created  | timestamp with time zone |
 creator  | integer  | not null default 1
 contact  | integer  | not null default 1
 type | character varying| not null default ''::character varying
 lookup   | text |
 lookupid | integer  | not null default 1
 value| text |
Indexes:
history_pkey PRIMARY KEY, btree (id)
history_created_index btree (created)
history_creator_index btree (creator)
history_lookup_lookupid_creator_index btree (lookup, lookupid, creator)
history_lookup_lookupid_index btree (lookup, lookupid)
Foreign-key constraints:
history_contact_constraint FOREIGN KEY (contact) REFERENCES contact(id)
history_creator_constraint FOREIGN KEY (creator) REFERENCES contact(id)

system # explain select history.id, history.created, creator, contact, 
history.type, lookup, lookupid, value from history  where (lookup = 'phone' and 
lookupid = '672') or creator = '790' order by history.creator desc limit 1000;
   QUERY PLAN

 Limit  (cost=0.00..132041.59 rows=1000 width=58)
   ->  Index Scan Backward using history_creator_index on history  (cost=0.00..11746815.97 rows=88963 width=58)
         Filter: (((lookup = 'phone'::text) AND (lookupid = 672)) OR (creator = 790))
(3 rows)

This table contains 2 million rows, the query takes 800 seconds on SSD HD.

I think - probably naively - that the query should use the
history_lookup_lookupid_creator_index.

Why doesn't it, and how can I speed up the query?

Thanks,
Antonio.






Re: [GENERAL] Slow query and using wrong index, how to fix? Probably naive question..

2013-05-22 Thread Antonio Goméz Soto
Leif,

of course. This performs much better (far below one second).

Thanks!
Antonio


On 22-05-13 11:28, Leif Gunnar Erlandsen wrote:
 You might want to try with UNION and then sort the result of this query.
 
 The index history_lookup_lookupid_creator_index won't be used when you have
 an OR in your WHERE clause.
 
  select history.id, history.created, creator, contact, history.type, lookup, 
 lookupid, value 
 from history  
 where (lookup = 'phone' and lookupid = '672') 
 union
  select history.id, history.created, creator, contact, history.type, lookup, 
 lookupid, value 
 from history  
 where  creator = '790'
 
 Leif Gunnar Erlandsen
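To keep the original ORDER BY … LIMIT semantics, the UNION above can be wrapped in a derived table — a sketch, untested on 8.1 (note that UNION without ALL also removes duplicate rows, which matches what the OR returned):

```sql
-- Sketch: reapply the ordering and limit around Leif's UNION.
SELECT * FROM (
    SELECT history.id, history.created, creator, contact, history.type,
           lookup, lookupid, value
      FROM history
     WHERE lookup = 'phone' AND lookupid = '672'
    UNION
    SELECT history.id, history.created, creator, contact, history.type,
           lookup, lookupid, value
      FROM history
     WHERE creator = '790'
) AS u
ORDER BY creator DESC
LIMIT 1000;
```

Each arm can now use its matching index (history_lookup_lookupid_index and history_creator_index respectively), and only the small combined result gets sorted.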
 
 
 
 





[GENERAL] Very simple select, using index for ordering, but not for selecting. How to make it faster?

2013-05-22 Thread Antonio Goméz Soto
Hi,

I have the following simple query on a simple table:

system=# select * from history where lookup = 'trunk' and lookupid = '248' 
order by created desc limit 1000;


system=# \d history
   Table public.history
  Column  |   Type   |  Modifiers
--+--+--
 id   | integer  | not null default 
nextval('history_id_seq'::regclass)
 created  | timestamp with time zone |
 creator  | integer  | not null default 1
 contact  | integer  | not null default 1
 type | character varying| not null default ''::character varying
 lookup   | text |
 lookupid | integer  | not null default 1
 value| text |
Indexes:
history_pkey PRIMARY KEY, btree (id)
history_created_index btree (created)
history_creator_index btree (creator)
history_lookup_lookupid_creator_index btree (lookup, lookupid, creator)
history_lookup_lookupid_index btree (lookup, lookupid)
Foreign-key constraints:
history_contact_constraint FOREIGN KEY (contact) REFERENCES contact(id)
history_creator_constraint FOREIGN KEY (creator) REFERENCES contact(id)

system=# explain select * from history where lookup = 'trunk' and lookupid = 
'248' order by created desc limit 1000; 
  QUERY PLAN
-
 Limit  (cost=0.00..14799.28 rows=1000 width=58)
   ->  Index Scan Backward using history_created_index on history  (cost=0.00..12201987.90 rows=824499 width=58)
         Filter: ((lookup = 'trunk'::text) AND (lookupid = 248))
(3 rows)


Why doesn't it use the index that matches the WHERE clause
(history_lookup_lookupid_index)? How can I change the query to make it faster?
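For the record, a common fix here (my suggestion, not from the thread) is an index whose leading columns match the equality conditions and whose last column matches the ORDER BY; the planner can then read the top 1000 rows straight off the index, already sorted, instead of walking history_created_index backward and filtering millions of rows:

```sql
-- Hypothetical index: equality columns first, sort column last.
CREATE INDEX history_lookup_lookupid_created_index
    ON history (lookup, lookupid, created);

EXPLAIN SELECT * FROM history
 WHERE lookup = 'trunk' AND lookupid = '248'
 ORDER BY created DESC
 LIMIT 1000;
```

If the planner cooperates, the plan should become an Index Scan Backward on the new index with no separate Filter step.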

Thanks,
Antonio





Re: [GENERAL] 2 machines, same database, same query, 10 times slower?

2012-05-09 Thread Antonio Goméz Soto
Tomas,

running vacuum full cdr brought down the query time to 4 seconds..

Thanks a lot,
Antonio


On 08-05-12 21:15, Tomas Vondra wrote:
 On 8.5.2012 20:32, Antonio Goméz Soto wrote:
 Do this on both machines to verify that

SELECT relpages, reltuples FROM pg_class WHERE relname = 'cdr';
 Slow machine:

  relpages | reltuples
 --+---
400566 |982321

 Fast machine:

  relpages |  reltuples
 --+-
 62076 | 1.48375e+06


 
 Yup, that's probably the culprit. The slow machine has to read 400,000
 pages while the fast one just 62,000. Each page is 8kB, so this is 3GB
 vs. 500MB difference.
 
 T.
 




[GENERAL] 2 machines, same database, same query, 10 times slower?

2012-05-08 Thread Antonio Goméz Soto
Hi,

I am running PostgreSQL 8.1, on CentOS 5. I have two machines, same hardware, 
with the same database layout,
they have different data, and the same query run 10 times as slow on one 
machine compared to the other.

I have run ANALYZE on both machines, here are the query plans for both 
machines. Can someone explain
this to me? I have trouble reading EXPLAIN output...

Query:

explain select company.name as cname, call_flags, bill_duration as 
call_duration,
   date_part('epoch',start_time) as start_time,
   src_name, src_type, src_num, src_id,
   dial_name, dial_type, dial_num, dial_id,
   ans_name, ans_type, ans_num, ans_id,
   sessionid
  from cdr, company, phoneline, contact
  where (src_id = contact.id or dial_id = contact.id or ans_id = contact.id)
and contact.id = '2' and phoneline.function='contact' and
phoneline.lookupid = contact.id and phoneline.status != 'deleted' and
(src_company=company.id or dial_company=company.id) and company.id > 2
  order by start_time DESC
  limit 10;

This is the query plan on machine #1 (query takes 2 seconds) :

 Limit  (cost=106128.33..106128.36 rows=10 width=160)
   ->  Sort  (cost=106128.33..106166.98 rows=15458 width=160)
         Sort Key: date_part('epoch'::text, cdr.start_time)
         ->  Nested Loop  (cost=49.38..104275.65 rows=15458 width=160)
               ->  Nested Loop  (cost=2.10..103880.57 rows=7729 width=164)
                     Join Filter: (("outer".src_company = "inner".id) OR ("outer".dial_company = "inner".id))
                     ->  Nested Loop  (cost=0.00..103054.09 rows=6595 width=163)
                           Join Filter: (("inner".src_id = "outer".id) OR ("inner".dial_id = "outer".id) OR ("inner".ans_id = "outer".id))
                           ->  Index Scan using contact_pkey on contact  (cost=0.00..5.94 rows=1 width=4)
                                 Index Cond: (id = 2)
                           ->  Seq Scan on cdr  (cost=0.00..77039.87 rows=1486187 width=159)
                     ->  Materialize  (cost=2.10..2.16 rows=5 width=13)
                           ->  Seq Scan on company  (cost=0.00..2.10 rows=5 width=13)
                                 Filter: (id > 2)
               ->  Materialize  (cost=47.28..47.30 rows=2 width=4)
                     ->  Seq Scan on phoneline  (cost=0.00..47.28 rows=2 width=4)
                           Filter: (((function)::text = 'contact'::text) AND ((status)::text <> 'deleted'::text) AND (lookupid = 2))
(17 rows)

This is the query plan on machine two (query takes 38 seconds):

 Limit  (cost=424555.76..424555.79 rows=10 width=170)
   ->  Sort  (cost=424555.76..424574.34 rows=7432 width=170)
         Sort Key: date_part('epoch'::text, cdr.start_time)
         ->  Nested Loop  (cost=422353.60..424077.90 rows=7432 width=170)
               ->  Nested Loop  (cost=422064.10..423621.19 rows=3716 width=174)
                     Join Filter: (("inner".src_company = "outer".id) OR ("inner".dial_company = "outer".id))
                     ->  Bitmap Heap Scan on company  (cost=2.09..49.23 rows=26 width=21)
                           Recheck Cond: (id > 2)
                           ->  Bitmap Index Scan on company_pkey  (cost=0.00..2.09 rows=26 width=0)
                                 Index Cond: (id > 2)
                     ->  Materialize  (cost=422062.01..422085.24 rows=2323 width=165)
                           ->  Nested Loop  (cost=0.00..422059.69 rows=2323 width=165)
                                 Join Filter: (("inner".src_id = "outer".id) OR ("inner".dial_id = "outer".id) OR ("inner".ans_id = "outer".id))
                                 ->  Index Scan using contact_pkey on contact  (cost=0.00..6.01 rows=1 width=4)
                                       Index Cond: (id = 2)
                                 ->  Seq Scan on cdr  (cost=0.00..408379.70 rows=781370 width=161)
               ->  Materialize  (cost=289.50..289.52 rows=2 width=4)
                     ->  Seq Scan on phoneline  (cost=0.00..289.50 rows=2 width=4)
                           Filter: (((function)::text = 'contact'::text) AND ((status)::text <> 'deleted'::text) AND (lookupid = 2))
(19 rows)

Thanks,
Antonio



Re: [GENERAL] 2 machines, same database, same query, 10 times slower?

2012-05-08 Thread Antonio Goméz Soto
Hi Tomas,

thanks for responding.

On 08-05-12 17:34, Tomas Vondra wrote:
 Hi,
 
 On 8 May 2012, 16:48, Antonio Goméz Soto wrote:
 Hi,

 I am running PostgreSQL 8.1, on CentOS 5. I have two machines, same
 hardware, with the same database layout,
 they have different data, and the same query run 10 times as slow on one
 machine compared to the other.
 
 First of all, to analyze runtime differences it's important to provide
 EXPLAIN ANALYZE output, not just EXPLAIN. Re-run the queries and use
 explain.depesz.com to post the output.
 

Allright, thanks, didn't know that. Reran the queries, and they are posted here:

The slow one: http://explain.depesz.com/s/2Si

The fast one: http://explain.depesz.com/s/c9m3

 Second, what do you mean 'different data'? If there is different amount of
 data, it may be perfectly expected that the query runs much slower on the
 machine with more data. For example the plans contain this:
 
 A: Seq Scan on cdr  (cost=0.00..77039.87 rows=1486187 width=159)
 B: Seq Scan on cdr (cost=0.00..408379.70 rows=781370 width=161)
 
 That suggests that the second database contains about 1/2 the rows.
 

That is true.

 The seq scan nodes reveal another interesting fact - while the expected
 row count is about 50% in the second plan, the estimated cost is about 5x
 higher (both compared to the first plan).
 
 The important thing here is that most of the cost estimate comes from the
 number of pages, therefore I suppose the cdr occupies about 5x the space
 in the second case, although it's much more 'sparse'.
 
 Do this on both machines to verify that
 
SELECT relpages, reltuples FROM pg_class WHERE relname = 'cdr';

Slow machine:

 relpages | reltuples
--+---
   400566 |982321

Fast machine:

 relpages |  reltuples
--+-
62076 | 1.48375e+06


 
 That might happen for example by deleting a lot of rows recently (without
 running VACUUM FULL after) or by not running autovacuum at all. Which is
 quite likely, because it was introduced in 8.1 and was off by default.
 

Autovacuum is running on both machines and does not report errors. But
I did not run a vacuum full. There currently are users on the machine,
so I can try that later tonight.
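Before the maintenance window, the bloat can be gauged roughly from the planner statistics already shown above — a sketch, using the fast machine's ~24 rows per 8kB page as a baseline:

```sql
-- Pages actually on disk vs. a crude estimate of pages the live rows
-- would need (baseline rows-per-page assumed from the healthy table).
SELECT relpages,
       reltuples,
       relpages * 8192 / (1024 * 1024) AS size_mb,
       round(reltuples / 24)           AS pages_needed_approx
  FROM pg_class
 WHERE relname = 'cdr';

-- VACUUM FULL rewrites the table and gives space back to the OS, but
-- holds an exclusive lock on cdr for the whole run:
VACUUM FULL VERBOSE cdr;
ANALYZE cdr;
```

Waiting until the users are off the machine is the right call: VACUUM FULL blocks all reads and writes on the table while it runs.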

 BTW if you care about performance, you should upgrade to a more recent
 version (preferably 9.x) because 8.1 is not supported for several years
 IIRC and there were many improvements since then.
 

I would like to, but I am bound to distribution-supplied software versions.

Thanks a lot for helping,
Antonio

 Tomas
 




Re: [GENERAL] What is *wrong* with this query???

2011-11-05 Thread Antonio Goméz Soto

Steve,

On 05-11-11 05:51, Steve Murphy wrote:

I give! I'm flummoxed!

Here is what I have, 3 tables:

schedule:

company  building  status
3        x         active
4        x         active
5        x         active
3        x         active
3        x         active
3        x         active

In the end, I want to replace the building ids above. They start out with the
non-informative value of '1';

company:

id  name  status
3   x     active
4   y     active
5   z     active

building:

id  company  name  status
1   3        A     active
2   3        B     active
3   3        C     active
4   4        D     active
5   4        E     active
6   4        F     active
7   5        G     active
8   5        H     active
9   5        I     active

So, every company has 3 schedules. Of the 3, I'd like to select the *one* with
the lowest id.




I think you mean every company has three buildings...



I'm using postgresql 8.1. Yes, ancient, but I have no choice in this affair.

select schedule.id as sched_id, bld.id as bid
from
schedule
left join company on schedule.company = company.id
left join (select * from building where building.company = company.id order by
id limit 1) as bld
where
schedule.status = 'active' and company.status = 'active' and bld.status =
'active';

I get a syntax error on the "where".

ERROR: syntax error at or near "where" at character ….
LINE 6: where
        ^

If I leave out the where clause entirely, that's an error also:

ERROR: syntax error at or near ";" at character ….
LINE 5: … as bld ;
                 ^ it's right under the semicolon

So, it's expecting ***SOMETHING*** after the "as bld", but it sure isn't going
to tell me what.

What am I missing?




I think you actually want to do this:

update schedule set building = (select id from building where company = 
schedule.company order by id limit 1);
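For completeness, the SELECT form of the same idea, useful for eyeballing the result before running the UPDATE — a sketch against the tables as posted, untested on 8.1:

```sql
-- Correlated subquery in the select list: lowest building id per company.
-- This needs no LATERAL (which only appeared in 9.3) and sidesteps both
-- problems in the posted query: the LEFT JOIN with no ON clause (the
-- immediate syntax error) and the subquery in FROM referencing company.id,
-- which a FROM-list subquery is not allowed to do.
SELECT schedule.id AS sched_id,
       (SELECT b.id
          FROM building b
         WHERE b.company = schedule.company
         ORDER BY b.id
         LIMIT 1) AS bid
  FROM schedule
  JOIN company ON schedule.company = company.id
 WHERE schedule.status = 'active'
   AND company.status = 'active';
```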

Best,
Antonio.





Re: [GENERAL] FREE hosting platforms with PostgreSQL, Java SDK, Tomcat, ecc.?

2011-08-06 Thread Antonio Goméz Soto

Well,

I am from the hosting business, and can assure you that what you are looking for
does not exist. This configuration requires specialists on the provider side, who
are expensive. They need to eat too.

And history teaches that even if it did exist, you should not put anything
meaningful on it, because they would surely go out of business soon.

Antonio.


On 06-08-11 10:02, Fernando Pianegiani wrote:

Hello,

do you know any FREE hosting platforms where PostgreSQL, Java SDK, Tomcat (or 
other web servers) can be already found installed or where they can be 
installed from scratch? In possible, it would be better if the PostgreSQL be 
directly accessible by my
servlet, without any web service/PHP script in the middle.

Thank you very much in advance.

Kind regards.

Fernando Pianegiani





Re: [GENERAL] Libpq is very slow on windows but fast on linux.

2010-11-09 Thread Antonio Goméz Soto

On 10-11-10 08:22, Alban Hertroys wrote:

On 10 Nov 2010, at 5:19, Rob Brown-Bayliss wrote:


Further testing shows it is windows networking causing the issue.
Copying files to and from the server is 5 to 6 times slower on a
Windows client compared to the Linux client.

The issue is not specific to libpq.


Do both machines have similar hardware? If the Linux machine has a proper NIC
(intel, for example) while the Windows machine has a poor NIC (Realtek!), then
of course you would get differences in performance.




Not that much. In the past I have seen differences in performance between Intel
and Realtek, 20-30% yes, but never by a factor of 5 or 6.

Antonio


I'm not saying Microsoft's network stack is particularly good, mind you, just
that it isn't necessarily the main cause.

 One of the problems Windows is facing is that, while supporting a lot of 
hardware is a good thing in general, it also
 supports all the crap hardware, crap drivers and crap ACPI implementations.


Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.










[GENERAL] How to grant a user read-only access to a database?

2010-03-02 Thread Antonio Goméz Soto

Hi,

I tried this:

names=# grant select on database names to spice;
ERROR:  invalid privilege type SELECT for database

The documentation seems to imply I need to grant SELECT
to each table separately. That's a lot of work, and what if
new tables are created?

Thanks,
Antonio



Re: [GENERAL] How to grant a user read-only access to a database?

2010-03-02 Thread Antonio Goméz Soto

On 02-03-10 13:00, Thom Brown wrote:

On 2 March 2010 11:46, Nilesh Govindarajan <li...@itech7.com> wrote:

On Tue, Mar 2, 2010 at 4:57 PM, Thom Brown <thombr...@gmail.com> wrote:


On 2 March 2010 11:12, Antonio Goméz Soto <antonio.gomez.s...@gmail.com>
wrote:

Hi,

I tried this:

names=# grant select on database names to spice;
ERROR:  invalid privilege type SELECT for database

The documentation seems to imply I need to grant SELECT
to each table separately. That's a lot of work, and what if
new tables are created?

Thanks,
Antonio




The privileges you can grant on a database are only related to the
creation of tables and connecting to that database.

You could create a role which has SELECT-only access, apply that role
to all your tables, and assign users (other roles) as members of that
role.

Regards

Thom



How do I create that? I'm also interested in this, as I need it for backing
up my databases.

--


Okay, here's an example:

CREATE ROLE readonly; -- This user won't be able to do anything by
default, not even log in

GRANT SELECT on table_a TO readonly;
GRANT SELECT on table_b TO readonly;
GRANT SELECT on table_c TO readonly;

CREATE ROLE testuser WITH LOGIN; -- At this point we haven't assigned
this user to any group

SET ROLE testuser;
SELECT * FROM table_a;

We get:
ERROR:  permission denied for relation table_a

SET ROLE postgres;

DROP ROLE testuser;
CREATE ROLE testuser WITH LOGIN IN ROLE readonly;

SET ROLE testuser;
SELECT * FROM table_a;

This would then return the results from table_a

Regards

Thom


But I still need to define access to each table separately?

Thanks,
Antonio.



Re: [GENERAL] How to grant a user read-only access to a database?

2010-03-02 Thread Antonio Goméz Soto

On 02-03-10 16:14, Raymond O'Donnell wrote:

On 02/03/2010 14:56, Thom Brown wrote:


But I still need to define access to each table separately?

Thanks,
Antonio.



As far as I'm aware.  It's only in the upcoming version 9.0 that you
can do things like:

GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

Other folk on here may have some alternative suggestions though.


I think people have in the past posted queries that extract the table
names from the system catalogues and then grant privileges on them;
it might be worthwhile having a trawl through the archives.

Ray.



Ok, will do. Thanks.

Antonio
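One shape those archived queries take — a sketch, reusing the readonly role from Thom's example above:

```sql
-- Generate one GRANT per table in schema public; feed the output back
-- into psql, e.g.:  psql -At -f grants.sql names | psql names
SELECT 'GRANT SELECT ON ' || quote_ident(schemaname) || '.'
       || quote_ident(tablename) || ' TO readonly;'
  FROM pg_tables
 WHERE schemaname = 'public';
```

This still has to be re-run whenever new tables appear; only 9.0's GRANT SELECT ON ALL TABLES IN SCHEMA (plus ALTER DEFAULT PRIVILEGES for future tables) removes that chore.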




[GENERAL] define transaction within pg/psql. Necessary?

2010-02-18 Thread Antonio Goméz Soto

Hello,

if I define a PL/pgSQL function and call it outside a transaction,
does it create one for itself? Or should I add BEGIN and COMMIT statements
within the function?
the function?

Thanks,
Antonio.



Re: [GENERAL] define transaction within pg/psql. Necessary?

2010-02-18 Thread Antonio Goméz Soto

On 18-02-10 11:07, Richard Huxton wrote:

On 18/02/10 10:02, Antonio Goméz Soto wrote:


if I define a pg/pgsql function, and I call that outside a transaction,
does it create one for itself? Or should I add BEGIN and COMMIT
statements within
the function?


You can't call a function outside a transaction. Every statement in
PostgreSQL is inside a transaction, either one you define yourself, or
an implicit one that just lasts for the duration of one statement.




Clear answer. Thanks.

Antonio



[GENERAL] Using table name in column for joining

2010-01-05 Thread Antonio Goméz Soto

Hello,

I have a column in a table that contains the name of another table,
and the id in that table.

I would like to use this in a join statement. Is that possible?

example:

create table animal (
 id serial,
 table_name varchar(8) default 'monkey' CHECK (table_name IN ('monkey', 
'elephant')),
 table_id integer
);

create table monkey (
  id serial,
  name varchar(20)
);

create table elephant (
  id serial,
  name varchar(20)
);

and I want to do:

   select animal.name from animal left join animal.table_name on 
animaltablename.id = animal.table_id;

or something like that.

I know it's probably considered bad SQL, but what else can I do?
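A join can't take its target table from a column, but with only two possible targets the lookup can be spelled out once per table — a sketch against the definitions above (note that animal itself has no name column, so the name must come from the joined tables):

```sql
-- One LEFT JOIN per candidate table; the discriminator in each join
-- condition keeps a join from matching when it shouldn't.
SELECT a.id,
       CASE a.table_name
            WHEN 'monkey'   THEN m.name
            WHEN 'elephant' THEN e.name
       END AS name
  FROM animal a
  LEFT JOIN monkey   m ON a.table_name = 'monkey'   AND m.id = a.table_id
  LEFT JOIN elephant e ON a.table_name = 'elephant' AND e.id = a.table_id;
```

The obvious drawback is that every new animal table means editing the query, which is one reason the inheritance and single-table suggestions below keep coming up.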

Thanks,
Antonio




Re: [GENERAL] Using table name in column for joining

2010-01-05 Thread Antonio Goméz Soto

On 05-01-10 18:00, sabrina miller wrote:

You can also have inheritance from animal, in such case you can have
partitioned different animals in different tables with their special
attributes in it.

then you will have:


create table animal(
   id serial,
   name varchar(20),
   age integer
   );

create table elephant (
    some_attr_that_only_have_elephants varchar(20)
) INHERITS (animal);



This is PostgreSQL-specific, isn't it?
I'm a bit afraid to use brand-specific constructs.

Antonio



create table monkey (
    some_attr_that_only_have_monkey varchar(20)
) INHERITS (animal);

insert into elephant (name, age, some_attr_that_only_have_elephants)
values ('sophie',15,'lorem');

insert into monkey (name, age, some_attr_that_only_have_monkey)
values ('lory',3,'impsu');

You can look for diferents animals like this:

select name from animal;
select name from elephant;
select name from monkey;

I hope it helps.


2010/1/5 Grzegorz Jaśkiewicz <gryz...@gmail.com>

On Tue, Jan 5, 2010 at 3:30 PM, Antonio Goméz Soto
<antonio.gomez.s...@gmail.com> wrote:
  Hello,
 
  I have a column in a table that contains the name of another table,
  and the id in that table.
 
  I would like to use this in a join statement. Is that possible?

not possible I'm afraid.

But have a look at the concept of inheritance for something that might
suit you. Or just store everything in one table, but add a new id to
each row. That is what the other half of the world uses.

Also, when writing queries like that, consider using aliases for table
name, for instance:

select a.id from animals a;

makes life easier.

hth

--
GJ








Re: [GENERAL] alter table performance

2009-12-20 Thread Antonio Goméz Soto

On 19-12-09 22:20, Jaime Casanova wrote:

are you planning to run this many times? what is wrong with making
this manually?
doesn't seem like something to make automatic...

but if you insist on plpgsql you can execute select version() into
some_text_var and act accordingly

   
No, this is done in an automatic software update procedure across
hundreds of machines which run different PostgreSQL versions.

Thanks, I'll give this a try.

Antonio.





Re: [GENERAL] alter table performance

2009-12-19 Thread Antonio Goméz Soto

Hi,

is there a way in sql to dynamically test for version 7.3, so I can run the

  alter table add column
  update table set column = .. where column IS NULL;
  alter table alter column set not null

on 7.3, and on newer versions:

  alter table add column ... not null default '';

Maybe I can create a PL/pgSQL function that does this, and remove it
afterwards.

or is there a better way?
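A sketch of that temporary-function route, untested on a real 7.3 box. It deliberately avoids dollar-quoting (which only appeared in 8.0), hence the doubled single quotes; on very old versions the DDL statements may need to be wrapped in EXECUTE instead:

```sql
CREATE FUNCTION upgrade_person() RETURNS integer AS '
BEGIN
    IF version() LIKE ''PostgreSQL 7.3%'' THEN
        ALTER TABLE person ADD COLUMN address varchar(64);
        UPDATE person SET address = '''' WHERE address IS NULL;
        ALTER TABLE person ALTER COLUMN address SET NOT NULL;
        ALTER TABLE person ALTER COLUMN address SET DEFAULT '''';
    ELSE
        -- 8.0 and later accept the one-statement form.
        ALTER TABLE person ADD COLUMN address varchar(64)
            NOT NULL DEFAULT '''';
    END IF;
    RETURN 1;
END;
' LANGUAGE plpgsql;

SELECT upgrade_person();
DROP FUNCTION upgrade_person();
```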

Thanks
Antonio

On 17-12-09 23:46, Tom Lane wrote:

Thomas Kellerer <spam_ea...@gmx.net> writes:
   

What's wrong with:
alter table person add column address varchar(64) not null default '';
 

This:

regression=# alter table person add column address varchar(64) not null default 
'';
ERROR:  Adding columns with defaults is not implemented.
 Add the column, then use ALTER TABLE SET DEFAULT.
regression=# \q

This is just one of many many things that could be improved by getting
off of 7.3.  In general, complaining about performance (or features)
of a seven-year-old, long since EOL'd release is not a productive use of
anybody's time.

regards, tom lane
   





[GENERAL] alter table performance

2009-12-17 Thread Antonio Goméz Soto

Hi,

I am regularly altering tables, adding columns setting default values etc.
This very often takes a very long time and is very disk intensive, and this
gets pretty annoying.

Things are hampered by the fact that some of our servers run PG 7.3

Suppose I have a table and I want to add a non NULL column with a default value.
What I normally do is:

alter table person add column address varchar(64);
update person set address = '' where address IS NULL;
alter table person alter column address set not NULL;
alter table person alter column address set default '';

When the table contains millions of records this takes forever.

Am I doing something wrong? Do other people have the same problems?
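Part of why this is slow: under MVCC the UPDATE writes a complete new copy of every row, so the table briefly doubles in size, and the two ALTERs then scan it again. On 8.0 and later the whole change is one statement (a single rewrite); on 7.3 the backfill can at least be sliced so VACUUM can recycle space between batches — a sketch, assuming a reasonably dense integer id:

```sql
-- On 8.0 and later: a single pass.
-- ALTER TABLE person ADD COLUMN address varchar(64) NOT NULL DEFAULT '';

-- On 7.3: batch the backfill so the table does not double in size.
ALTER TABLE person ADD COLUMN address varchar(64);

UPDATE person SET address = '' WHERE id <=  500000 AND address IS NULL;
VACUUM person;
UPDATE person SET address = '' WHERE id <= 1000000 AND address IS NULL;
VACUUM person;
-- ...repeat until the maximum id is covered...

ALTER TABLE person ALTER COLUMN address SET NOT NULL;
ALTER TABLE person ALTER COLUMN address SET DEFAULT '';
```

The batching trades one long exclusive burst of I/O for several shorter ones; the ADD COLUMN itself (without a default) is nearly instant on every version.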

Thanks,
Antonio





Re: [GENERAL] alter table performance

2009-12-17 Thread Antonio Goméz Soto

On 17-12-09 23:46, Tom Lane wrote:

This is just one of many many things that could be improved by getting
off of 7.3.  In general, complaining about performance (or features)
of a seven-year-old, long since EOL'd release is not a productive use of
anybody's time.
   


I'm sorry, didn't mean to.

I was just checking if I did it the right way, or if it was supposed to 
be that slow.


I know we should upgrade the client machines, but in some cases we just 
can't.


Anyway, thanks for giving me the answer.

Antonio

