Re: [GENERAL] Optimize Query

2016-02-14 Thread Alban Hertroys
> On 14 Feb 2016, at 20:40, drum.lu...@gmail.com wrote: > > Hi Alban! Sorry.. that was my mistake Okay, first advice on that query: Trim it down to something that people can wrap their minds around. You have a silly amount of code repetition in there, much of which doesn't even seem to serve

Re: [GENERAL] Optimize Query

2016-02-14 Thread drum.lu...@gmail.com
Hi Alban! Sorry.. that was my mistake Original Query: SELECT concat(company, ' ', customer_name_first, ' ', customer_name_last) AS customer, sum(revenue) AS revenue, sum(i.quantity) AS quantity, sum(i.cost) AS cost FROM ( SELECT account.id, job.customerid,

Re: [GENERAL] Optimize Query

2016-02-14 Thread Alban Hertroys
> On 13 Feb 2016, at 11:21, drum.lu...@gmail.com wrote: > > Anyone can help with that please? > > Cheers What would help is: 1. to post an actual query that you need optimised and 2. an explain analyze of that query. What you posted in your original message was some kind of query-template with
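Alban's two asks — the actual query and its EXPLAIN ANALYZE output — are the standard starting point for any optimisation request. As a rough, runnable stand-in (SQLite's EXPLAIN QUERY PLAN instead of PostgreSQL's EXPLAIN ANALYZE, and a minimal `account` table guessed from the query fragments in this thread), this sketch shows what "post the plan" means in practice:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX account_name_idx ON account (name)")

# SQLite's EXPLAIN QUERY PLAN is the rough analogue of PostgreSQL's
# EXPLAIN ANALYZE: it shows whether the planner uses the index or
# falls back to a full scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM account WHERE name = 'alice'"
).fetchall()
for row in plan:
    print(row)
```

In PostgreSQL itself the equivalent step is running `EXPLAIN (ANALYZE, BUFFERS) <your query>` in psql and posting that output along with the query.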

Re: [GENERAL] Optimize Query

2016-02-13 Thread drum.lu...@gmail.com
Anyone can help with that please? Cheers On Thursday, 11 February 2016, drum.lu...@gmail.com wrote: > oh ok! > > thanks > > > > Lucas Possamai > > - kinghost.co.nz > > - DigitalOcean

Re: [GENERAL] Optimize Query

2016-02-10 Thread John R Pierce
On 2/10/2016 6:38 PM, drum.lu...@gmail.com wrote: Sorry but I was unable to see the *n_quote_status = 0* I'm unable to see this variable anywhere in your two original queries, the SQL one, and the other ?? abbreviated thing, nor did you give any table definitions, so I'm not even sure what yo

Re: [GENERAL] Optimize Query

2016-02-10 Thread drum.lu...@gmail.com
> > > FYI, it is always helpful (and polite) to state the PostgreSQL VERSION and > O/S for archive documentation purposes! > That's right. My mistake... I'm using PostgreSQL 9.2. > > Note that various postgresql.conf options, system memory & hardware also > play a factor here, in addition to cur

Re: [GENERAL] Optimize Query

2016-02-10 Thread Melvin Davidson
On Wed, Feb 10, 2016 at 8:25 PM, drum.lu...@gmail.com wrote: > Hi all, > > I've got a slow query and I'm trying to make it faster. > > *New Query:* > > SELECT concat(client.company, ' ', client.name_first, ' ', >> client.name_last) AS customer, >>sum(COALESCE(bill_item.unit_price, billabl

[GENERAL] Optimize Query

2016-02-10 Thread drum.lu...@gmail.com
Hi all, I've got a slow query and I'm trying to make it faster. *New Query:* SELECT concat(client.company, ' ', client.name_first, ' ', > client.name_last) AS customer, >sum(COALESCE(bill_item.unit_price, billable.unit_price, 0) * > bill_item.quantity) AS revenue, >sum(bill_item.

Re: [GENERAL] Optimize query for listing un-read messages

2014-05-03 Thread Andreas Joseph Krogh
On Saturday 03 May 2014 at 23:21:21, Alban Hertroys <haram...@gmail.com> wrote: On 03 May 2014, at 12:45, Andreas Joseph Krogh wrote: > Do you really need to query message_property twice? I would think this would give the same results: > > SELECT >     m.id

Re: [GENERAL] Optimize query for listing un-read messages

2014-05-03 Thread Alban Hertroys
On 03 May 2014, at 12:45, Andreas Joseph Krogh wrote: > Do you really need to query message_property twice? I would think this would > give the same results: > > SELECT > m.id AS message_id, > prop.person_id, > coalesce(prop.is_read, FALSE) AS is_read, >
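The suggestion quoted here is to fetch the read flag with one LEFT JOIN instead of touching message_property twice: a missing property row simply coalesces to FALSE (un-read). A minimal sketch of that shape, assuming simplified table definitions and with SQLite standing in for PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE message (id INTEGER PRIMARY KEY, subject TEXT);
CREATE TABLE message_property (
    message_id INTEGER REFERENCES message(id),
    person_id  INTEGER,
    is_read    BOOLEAN
);
INSERT INTO message VALUES (1, 'hello'), (2, 'world'), (3, 'again');
INSERT INTO message_property VALUES (1, 1, 1);  -- person 1 read message 1
""")

# One LEFT JOIN replaces two scans of message_property: messages with
# no property row for this person come back with is_read = NULL,
# which COALESCE turns into "un-read".
unread = conn.execute("""
    SELECT m.id, m.subject
    FROM message m
    LEFT JOIN message_property prop
           ON prop.message_id = m.id AND prop.person_id = 1
    WHERE COALESCE(prop.is_read, 0) = 0
    ORDER BY m.id
""").fetchall()
print(unread)  # [(2, 'world'), (3, 'again')]
```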

Re: [GENERAL] Optimize query for listing un-read messages

2014-05-03 Thread Andreas Joseph Krogh
On Saturday 03 May 2014 at 11:51:08, Alban Hertroys <haram...@gmail.com> wrote: On 01 May 2014, at 13:06, Andreas Joseph Krogh wrote: > I have the following query to list all un-read messages for person with id=1: >  > SELECT >     m.id                          AS message_id, >

Re: [GENERAL] Optimize query for listing un-read messages

2014-05-03 Thread Alban Hertroys
On 01 May 2014, at 13:06, Andreas Joseph Krogh wrote: > I have the following query to list all un-read messages for person with id=1: > > SELECT > m.id AS message_id, > prop.person_id, > coalesce(prop.is_read, FALSE) AS is_read, > m.subject > FROM messag

[GENERAL] Optimize query for listing un-read messages

2014-05-02 Thread Andreas Joseph Krogh
Hi all,   I have a schema where I have lots of messages and some users who might have read some of them. When a message is read by a user I create an entry in a table message_property holding the property (is_read) for that user.   The schema is as follows:   drop table if exists message_property;

Re: [GENERAL] optimize query?

2013-01-30 Thread hamann . w
Bob Futrelle wrote: If looking for the variants with a single suffixed character is all you'll ever need to do: Do a single pass on the large table, after creating a field, 'trunc' that contains a truncated version of the item, e.g., adding XY423 to trunc for each entry of the form XY423A, or XY4
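The fix proposed here is a one-time pass over the big table that stores the truncated code ('XY423' for 'XY423A') in its own indexed column, turning the fuzzy match into a plain, indexable equality join. A sketch under assumed rules (a suffix is a single trailing letter following a digit; SQLite stands in for PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab1 (code TEXT);   -- big table: XY423A, XY423, ...
CREATE TABLE tab2 (code TEXT);   -- small table: XY423, ABC44, ...
INSERT INTO tab1 VALUES ('XY423'), ('XY423A'), ('ABC44B'), ('QQ1');
INSERT INTO tab2 VALUES ('XY423'), ('ABC44');
""")

# Single pass: materialise the truncated code (drop one trailing
# letter if it follows a digit) and index it, so the join is a plain
# equality instead of a LIKE scan.
conn.executescript("""
ALTER TABLE tab1 ADD COLUMN trunc TEXT;
UPDATE tab1 SET trunc = CASE
    WHEN substr(code, -1) BETWEEN 'A' AND 'Z'
         AND substr(code, -2, 1) BETWEEN '0' AND '9'
    THEN substr(code, 1, length(code) - 1)
    ELSE code
END;
CREATE INDEX tab1_trunc_idx ON tab1 (trunc);
""")

matches = conn.execute("""
    SELECT tab1.code, tab2.code
    FROM tab1 JOIN tab2 ON tab1.trunc = tab2.code
    ORDER BY tab1.code
""").fetchall()
print(matches)
```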

[GENERAL] optimize query?

2013-01-30 Thread hamann . w
Hi, I am trying to match items from 2 tables based on a common string. One is a big table which has one column with entries like XY123, ABC44, etc. The table has an index on that column. The second table is, typically, much smaller select from tab1, tab2 where tab1.code = tab2.code; This wor

[GENERAL] Optimize query

2004-04-05 Thread Michal Hlavac
Hello, I have one main table and 2 many-to-many relation tables and 2 one-to-many relation table. Main table has 150 000 rows and many-to-many tables have about 300 000 rows. I am creating web portal. In my query I must call distinct with many-to-many relation. But this query is too slow for web a
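A common rewrite for "DISTINCT over a many-to-many join" is an EXISTS subquery, which never produces the duplicate rows in the first place. A hedged sketch with hypothetical `item`/`item_tag` tables — the thread does not show the real schema — and SQLite standing in for PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE item_tag (item_id INTEGER, tag_id INTEGER);
INSERT INTO item VALUES (1, 'a'), (2, 'b'), (3, 'c');
-- item 1 matches tag 7 twice through the m:n table, item 2 once
INSERT INTO item_tag VALUES (1, 7), (1, 7), (2, 7);
""")

# EXISTS stops at the first matching relation row, so no DISTINCT is
# needed to collapse the fan-out of the many-to-many join.
rows = conn.execute("""
    SELECT i.id, i.title
    FROM item i
    WHERE EXISTS (SELECT 1 FROM item_tag t
                  WHERE t.item_id = i.id AND t.tag_id = 7)
    ORDER BY i.id
""").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
```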

Re: [GENERAL] Optimize query: time of "single * IN(many)" > time of "many *

2004-01-09 Thread Paul Janssen
Tom Lane writes: Paul Janssen writes: Can anyone help me out with the following situation: (a) a single query with 550 id's in the IN-clause resulting in 800+ seconds; (b) 550 queries with a single id in the IN-clause resulting in an overall time of <60 seconds; The table consists of 950.0
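One classic workaround for a huge `IN (...)` list is to load the ids into a (temporary) table and join against it, giving the planner one indexed join instead of hundreds of interpolated literals. A sketch with hypothetical table and column names (the thread does not show the schema; SQLite stands in for PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (account_id INTEGER, score INTEGER)")
conn.executemany("INSERT INTO answers VALUES (?, ?)",
                 [(i % 100, i) for i in range(1000)])
conn.execute("CREATE INDEX answers_account_idx ON answers (account_id)")

wanted = [3, 17, 42]

# Load the id list into a temp table and join, instead of building
# one giant "account_id IN (3, 17, 42, ...)" string.
conn.execute("CREATE TEMP TABLE wanted_ids (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted_ids VALUES (?)", [(w,) for w in wanted])

total = conn.execute("""
    SELECT count(*)
    FROM answers a JOIN wanted_ids w ON w.id = a.account_id
""").fetchone()[0]
print(total)  # 3 ids x 10 rows each = 30
```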

Re: [GENERAL] Optimize query: time of "single * IN(many)" > time of "many * IN(single)"

2004-01-08 Thread Tom Lane
"Paul Janssen" <[EMAIL PROTECTED]> writes: > Can anyone help me out with the following situation: >(a) a single query with 550 id's in the IN-clause resulting in 800+ > seconds; >(b) 550 queries with a single id in the IN-clause resulting in an overall > time of <60 seconds; > The table

Re: [GENERAL] Optimize query: time of "single * IN(many)" > time

2004-01-08 Thread Alvaro Herrera
On Thu, Jan 08, 2004 at 06:28:14AM -0500, Dave Smith wrote: > Firstly you should always provide an explain from your query before > posting to this list. You mean "while posting", because he can't possibly provide the explain before having the means to do so, can he? :-) > I think the problem is