Re: [GENERAL] Seeking performance advice: Index for "recent entries"

2012-05-08 Thread Chris Angelico
On Wed, May 9, 2012 at 2:52 PM, Tom Lane  wrote:
> Chris Angelico  writes:
>> I have a table with a timestamptz column for the "effective date/time"
>> of the row, and need to have some queries that look only for those
>> entries for which that is in the future or VERY recently - which will
>> be a small minority of rows. I'm looking at something like:
>
>> CREATE INDEX on tablename (effective) where effective>timestamptz
>> 'now'-interval '21 days'
>
> I think this falls under the rubric of "premature optimization is the
> root of all evil".  Just use a plain index on the timestamptz column
> and be happy.  Searches that only look at the extremal values of a
> column work perfectly well with a full index, because they only need to
> examine a small range of the index.

The index is actually on two columns, an account ID followed by the
effective date - I need to look up whether any particular account has
recent entries. Does that make any difference?

Thanks for the advice. I don't have enough data yet to be able to
measure these things, and it's good to solve problems before they
become critical!

ChrisA

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Seeking performance advice: Index for "recent entries"

2012-05-08 Thread Tom Lane
Chris Angelico  writes:
> I have a table with a timestamptz column for the "effective date/time"
> of the row, and need to have some queries that look only for those
> entries for which that is in the future or VERY recently - which will
> be a small minority of rows. I'm looking at something like:

> CREATE INDEX on tablename (effective) where effective>timestamptz
> 'now'-interval '21 days'

I think this falls under the rubric of "premature optimization is the
root of all evil".  Just use a plain index on the timestamptz column
and be happy.  Searches that only look at the extremal values of a
column work perfectly well with a full index, because they only need to
examine a small range of the index.
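
Concretely, Tom's advice amounts to something like the following sketch (the table and column names `larcombe.ledger`, `lid`, and `effective` are taken from the original post further down the thread; the account ID `42` is made up):

```sql
-- Plain two-column index, no predicate: nothing to trim,
-- and no IMMUTABLE restriction to worry about.
CREATE INDEX ledger_lid_effective ON larcombe.ledger (lid, effective);

-- "Does this account have recent or future entries?" only probes a
-- narrow range at the right-hand edge of the index for that lid:
SELECT EXISTS (
    SELECT 1
    FROM larcombe.ledger
    WHERE lid = 42
      AND effective > now() - interval '21 days'
);
```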

> Is there a way around this? Also, how would I go about pruning the
> index, preferably in such a way that the old index can be used?

And that is exactly the reason why a partial index of this sort isn't a
win --- you'd be expending many extra cycles to keep it trimmed.

regards, tom lane



[GENERAL] Seeking performance advice: Index for "recent entries"

2012-05-08 Thread Chris Angelico
I have a table with a timestamptz column for the "effective date/time"
of the row, and need to have some queries that look only for those
entries for which that is in the future or VERY recently - which will
be a small minority of rows. I'm looking at something like:

CREATE INDEX on tablename (effective) where effective>timestamptz
'now'-interval '21 days'

with a possible REINDEX or ALTER INDEX or even DROP INDEX/CREATE INDEX
periodically to "prune" the index. However, Postgres complains:

ERROR:  functions in index predicate must be marked IMMUTABLE

Without the arithmetic, it works fine:

CREATE INDEX ledgernotyet on larcombe.ledger (lid,effective) where
effective>timestamptz 'now'

Is there a way around this? Also, how would I go about pruning the
index, preferably in such a way that the old index can be used?
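
One common workaround (a sketch, not something suggested in this thread) is to embed a literal timestamp in the predicate, since a literal is trivially immutable, and rebuild the index periodically. Two caveats: the planner can only use such a partial index when the query's own cutoff constant falls inside the indexed range, and `DO` blocks plus `format()` require PostgreSQL 9.0/9.1 or later, so on older servers the statement would be assembled client-side. The index names here are hypothetical:

```sql
-- Hypothetical maintenance job: recreate the partial index with a
-- fresh literal cutoff, then swap it in under the old name.
DO $$
BEGIN
    EXECUTE format(
        'CREATE INDEX ledger_recent_new ON larcombe.ledger (lid, effective)
         WHERE effective > %L::timestamptz',
        now() - interval '21 days');
END
$$;
DROP INDEX IF EXISTS ledger_recent;
ALTER INDEX ledger_recent_new RENAME TO ledger_recent;
```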

Thanks!

Chris Angelico



Re: [GENERAL] COPY column order

2012-05-08 Thread MD33
The HEADER option is not as useful as I hoped, but it sounds good with column
names.
Thanks a lot!

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/COPY-column-order-tp5690950p5696058.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] relpages sudden increase

2012-05-08 Thread Tomas Vondra
On 8.5.2012 19:27, Richard Harley wrote:
> I currently do nightly database dumps on a ton of small dbs that are
> increasing around 2-3mb per day. Suddenly, in a recent backup file, one
> db in particular jumped from 55mb to 122mb overnight.

Well, I wouldn't call that 'massive size', but in my experience such
sudden changes in size are caused by one of these two things:

(a) modification patterns + slow vacuuming

(b) batch updates (e.g. update of a column in the whole table)

Given that this is a one-time issue, I'd guess it's (b). Were you doing
any such updates or something like that?

> I did some investigation -
> 
> One table increased from 8mb to 31mb during a 24hr period. The table is
> just text so this is highly unusual given that the number of rows DIDN'T
> increase any more than normal.

What do you mean by 'number of rows'? Is that the number of live rows, i.e.
the number you get from

  SELECT COUNT(*) FROM ...

or the number you get from pg_class as reltuples?
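
For reference, the two counts (and the on-disk sizes) can be compared like this; `sometable` is a placeholder, and the `pg_*_size` functions are available from 8.1 onwards:

```sql
-- Exact live-row count (scans the whole table):
SELECT count(*) FROM sometable;

-- Planner's estimate and on-disk footprint, from the catalogs:
SELECT relpages, reltuples,
       pg_size_pretty(pg_relation_size('sometable'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('sometable')) AS with_toast_and_indexes
FROM pg_class
WHERE relname = 'sometable';
```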

> pg_toast increased from 8mb to 27mb during the same period.
> 
> The relpages for the table in question increased from 164 to 1088 during
> the 24hr period. On the live db, the relpages is back to 164 but the
> size of the table remains massive.

Hmmm, I wonder how the number of pages could drop, because that does not
happen unless you run VACUUM FULL / CLUSTER or similar commands, and those
are not normally run regularly.

Also, how could the table size remain massive when the number of pages
dropped to 164? Did you mean a different table or the whole database?

> Using Centos5/postgres 8.1.21 at the moment. Planning to upgrade over
> summer but not had issues like this before on 8.1.
> 
> What gives?!

Good idea. 8.1 has been unsupported for a long time, and there have been
many improvements since then.

Tomas



Re: [GENERAL] 2 machines, same database, same query, 10 times slower?

2012-05-08 Thread Tomas Vondra
On 8.5.2012 20:32, Antonio Goméz Soto wrote:
>> > Do this on both machines to verify that
>> > 
>> >SELECT relpages, reltuples FROM pg_class WHERE relname = 'cdr';
> Slow machine:
> 
>  relpages | reltuples
> --+---
>400566 |982321
> 
> Fast machine:
> 
>  relpages |  reltuples
> --+-
> 62076 | 1.48375e+06
> 
> 

Yup, that's probably the culprit. The slow machine has to read about 400000
pages while the fast one reads just 62000. Each page is 8kB, so this is a
3GB vs. 500MB difference.
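
The arithmetic can be checked directly (a quick verification, assuming the default 8kB block size):

```sql
SELECT pg_size_pretty(400566::bigint * 8192) AS slow_cdr,
       pg_size_pretty( 62076::bigint * 8192) AS fast_cdr;
-- roughly 3.1 GB vs 485 MB
```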

T.



Re: [GENERAL] 2 machines, same database, same query, 10 times slower?

2012-05-08 Thread Pavel Stehule
2012/5/8 Antonio Goméz Soto :
> Hi Tomas,
>
> thanks for responding.
>
> Op 08-05-12 17:34, Tomas Vondra schreef:
>> Hi,
>>
>> On 8 Květen 2012, 16:48, Antonio Goméz Soto wrote:
>>> Hi,
>>>
>>> I am running PostgreSQL 8.1, on CentOS 5. I have two machines, same
>>> hardware, with the same database layout,
>>> they have different data, and the same query run 10 times as slow on one
>>> machine compared to the other.
>>
>> First of all, to analyze runtime differences it's important to provide
>> EXPLAIN ANALYZE output, not just EXPLAIN. Re-run the queries and use
>> explain.depesz.com to post the output.
>>
>
> Allright, thanks, didn't know that. Reran the queries, and they are posted 
> here:
>
> The slow one: http://explain.depesz.com/s/2Si
>
> The fast one: http://explain.depesz.com/s/c9m3
>

The cdr table on the "slow machine" probably needs a VACUUM FULL.
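
A sketch of that maintenance pass (assumptions: VACUUM FULL takes an exclusive lock, so run it off-hours; on pre-9.0 releases such as 8.1 it can also bloat indexes, hence the REINDEX afterwards):

```sql
VACUUM FULL VERBOSE cdr;   -- compacts the table; blocks all access meanwhile
REINDEX TABLE cdr;         -- pre-9.0 VACUUM FULL tends to bloat indexes

-- Confirm the page count came down:
SELECT relpages, reltuples FROM pg_class WHERE relname = 'cdr';
```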

Regards

Pavel

>> Second, what do you mean 'different data'? If there is different amount of
>> data, it may be perfectly expected that the query runs much slower on the
>> machine with more data. For example the plans contain this:
>>
>> A: Seq Scan on cdr  (cost=0.00..77039.87 rows=1486187 width=159)
>> B: Seq Scan on cdr (cost=0.00..408379.70 rows=781370 width=161)
>>
>> That suggests that the second database contains about 1/2 the rows.
>>
>
> That is true.
>
>> The seq scan nodes reveal another interesting fact - while the expected
>> row count is about 50% in the second plan, the estimated cost is about 5x
>> higher (both compared to the first plan).
>>
>> The important thing here is that most of the cost estimate comes from the
>> number of pages, therefore I suppose the cdr occupies about 5x the space
>> in the second case, although it's much more 'sparse'.
>>
>> Do this on both machines to verify that
>>
>>    SELECT relpages, reltuples FROM pg_class WHERE relname = 'cdr';
>
> Slow machine:
>
>  relpages | reltuples
> --+---
>   400566 |    982321
>
> Fast machine:
>
>  relpages |  reltuples
> --+-
>    62076 | 1.48375e+06
>
>
>>
>> That might happen for example by deleting a lot of rows recently (without
>> running VACUUM FULL after) or by not running autovacuum at all. Which is
>> quite likely, because it was introduced in 8.1 and was off by default.
>>
>
> Autovacuum is running on both machines and does not report errors. But
> I did not run a vacuum full. There currently are users on the machine,
> so I can try that later tonight.
>
>> BTW if you care about performance, you should upgrade to a more recent
>> version (preferably 9.x) because 8.1 is not supported for several years
>> IIRC and there were many improvements since then.
>>
>
> I would like to, but I am bound to distribution-supplied software versions.
>
> Thanks a lot for helping,
> Antonio
>
>> Tomas
>>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] 2 machines, same database, same query, 10 times slower?

2012-05-08 Thread Antonio Goméz Soto
Hi Tomas,

thanks for responding.

Op 08-05-12 17:34, Tomas Vondra schreef:
> Hi,
> 
> On 8 Květen 2012, 16:48, Antonio Goméz Soto wrote:
>> Hi,
>>
>> I am running PostgreSQL 8.1, on CentOS 5. I have two machines, same
>> hardware, with the same database layout,
>> they have different data, and the same query run 10 times as slow on one
>> machine compared to the other.
> 
> First of all, to analyze runtime differences it's important to provide
> EXPLAIN ANALYZE output, not just EXPLAIN. Re-run the queries and use
> explain.depesz.com to post the output.
> 

Alright, thanks, I didn't know that. I reran the queries, and they are posted here:

The slow one: http://explain.depesz.com/s/2Si

The fast one: http://explain.depesz.com/s/c9m3

> Second, what do you mean 'different data'? If there is different amount of
> data, it may be perfectly expected that the query runs much slower on the
> machine with more data. For example the plans contain this:
> 
> A: Seq Scan on cdr  (cost=0.00..77039.87 rows=1486187 width=159)
> B: Seq Scan on cdr (cost=0.00..408379.70 rows=781370 width=161)
> 
> That suggests that the second database contains about 1/2 the rows.
> 

That is true.

> The seq scan nodes reveal another interesting fact - while the expected
> row count is about 50% in the second plan, the estimated cost is about 5x
> higher (both compared to the first plan).
> 
> The important thing here is that most of the cost estimate comes from the
> number of pages, therefore I suppose the cdr occupies about 5x the space
> in the second case, although it's much more 'sparse'.
> 
> Do this on both machines to verify that
> 
>SELECT relpages, reltuples FROM pg_class WHERE relname = 'cdr';

Slow machine:

 relpages | reltuples
--+---
   400566 |982321

Fast machine:

 relpages |  reltuples
--+-
62076 | 1.48375e+06


> 
> That might happen for example by deleting a lot of rows recently (without
> running VACUUM FULL after) or by not running autovacuum at all. Which is
> quite likely, because it was introduced in 8.1 and was off by default.
> 

Autovacuum is running on both machines and does not report errors. But
I did not run a vacuum full. There currently are users on the machine,
so I can try that later tonight.

> BTW if you care about performance, you should upgrade to a more recent
> version (preferably 9.x) because 8.1 is not supported for several years
> IIRC and there were many improvements since then.
> 

I would like to, but I am bound to distribution-supplied software versions.

Thanks a lot for helping,
Antonio

> Tomas
> 




[GENERAL] relpages sudden increase

2012-05-08 Thread Richard Harley
I currently do nightly database dumps on a ton of small dbs that are 
increasing around 2-3mb per day. Suddenly, in a recent backup file, one 
db in particular jumped from 55mb to 122mb overnight.


I did some investigation -

One table increased from 8mb to 31mb during a 24hr period. The table is 
just text so this is highly unusual given that the number of rows DIDN'T 
increase any more than normal.


pg_toast increased from 8mb to 27mb during the same period.

The relpages for the table in question increased from 164 to 1088 during 
the 24hr period. On the live db, the relpages is back to 164 but the 
size of the table remains massive.


Using Centos5/postgres 8.1.21 at the moment. Planning to upgrade over 
summer but not had issues like this before on 8.1.


What gives?!

Thanks
Rich


Re: [GENERAL] .pgpass not working

2012-05-08 Thread Rebecca Clarke
Hi Ben

I had saved the .pgpass file in my home directory, /home/user/.pgpass, which
works when I'm logged in as that user. However, in order to use Slony, I
had to be logged in as the postgres user.
I installed strace, ran my pg_dump test, and found that it actually looks
for the .pgpass file in /var/lib/postgresql (which I assume is the
postgres user's home directory, as this is where I start when I log in).
I made a copy of the .pgpass file in that location, and it worked!

Many thanks.

Rebecca

On Fri, May 4, 2012 at 6:35 PM, Ben Chobot  wrote:

> On May 4, 2012, at 9:30 AM, Rebecca Clarke wrote:
>
> I do not want to touch the pg_hba.conf so I have generated the .pgpass
> file.
> The permissions are set to 600, and I have correctly entered the details
> into .pgpass; there are no leading spaces.
>
> myhostname:myport:*:postgres:mypassword
>
> However I am still prompted for a password.
> I have tested pg_dump as well and it prompts also.
>
> Does anyone have any suggestions on what may be the culprit. Is there
> somewhere I need to specify to tell the system to look into the .pgpass
> file?
>
>
> Where is the .pgpass file? If it's not in ~/.pgpass or doesn't have the
> right ownership (your permissions are good) then it won't be used. If it's
> in a different location, you might need to make use of the PGPASSFILE
> environment variable.
>
> If you really get stuck, you can always strace psql or pg_dump and see if
> it has problems opening your .pgpass file.
>


Re: [GENERAL] FATAL: lock file "postmaster.pid" already exists

2012-05-08 Thread deepak
On Tue, May 8, 2012 at 3:09 AM, Alban Hertroys  wrote:

> On 8 May 2012, at 24:34, deepak wrote:
>
> > Hi,
> >
> > On Windows 2008, sometimes the server fails to start due to an existing
> "postmaster.pid' file.
> >
> > I tried rebooting a few times and even force shutting down the server,
> and it started up fine.
> > It seems to be a race-condition of sorts in the code that detects
> whether the process with PID
> > in the file is running or not.
>
> No, it means that postgres wasn't shut down properly when Windows shut
> down. Removing the pid-file is one of the last things the shut-down
> procedure does. The file is used to prevent 2 instances of the same server
> running on the same data-directory.
>
> If it's a race-condition, it's probably one in Microsoft's shutdown code.
> I've seen similar problems with Outlook mailboxes on a network directory;
> Windows unmounts the remote file-systems before Outlook finished updating
> its files under that mount point, so Outlook throws an error message and
> Windows doesn't shut down because of that.
>
> I don't suppose that pid-file is on a remote file-system?
>
No, it's local.


> > Does any one have this same problem?  Any way to fix it besides removing
> the PID file
> > manually each time the server complains about this?
>
>
> You could probably script removal of the pid file if its creation date is
> before the time the system started booting up.
>
>
Thanks. It looks like the code already overwrites an old pid file
if no other process is using it (if I understand the code correctly, it
just echoes a byte onto a pipe to detect this).

Still, I can't see under what conditions this occurs. I have seen it
happen a couple of times, but I don't know how to reproduce the
problem predictably.


--
Deepak


Re: [GENERAL] connect local pgAdmin III to remote postgres server

2012-05-08 Thread Raymond O'Donnell
On 08/05/2012 12:21, gdhia wrote:

> 
> postgresql.conf has the changed value:
> #listen_addresses = '*'

You need to uncomment this line for it to take effect. Remove the '#'
from the start of the line, and then restart (or maybe just reload? -
not sure) the server.
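
For reference, the relevant configuration would look something like this (a sketch: the CIDR range is an assumption for the Windows client's network, and listen_addresses takes effect only after a full restart). A pg_hba.conf entry for the remote host is typically needed as well:

```ini
# postgresql.conf -- note: no leading '#'
listen_addresses = '*'          # requires a server restart to take effect

# pg_hba.conf -- allow the remote client (adjust address and auth method)
# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   192.168.0.0/24  md5
```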

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



[GENERAL] connect local pgAdmin III to remote postgres server

2012-05-08 Thread gdhia
Hi all,

I have a Debian server on which is installed Postgresql, with a database.

I have installed pgAdmin III on my computer (Win 7) and I try to communicate
with postgres on my server to display it.

on my server, using this code tells me the DB installed:

postgres=# select datname from pg_database;
  datname  
---
 template1
 template0
 postgres
(3 rows)

I checked on my server that postgres is listening on port 5432:
tcp        0      0 127.0.0.1:5432      0.0.0.0:*       LISTEN      4998/postgres

the firewall does not block incoming connections:
ACCEPT tcp  --  anywhere anywheretcp
dpt:postgresql

the service is running:
8.3 main  5432 online postgres /var/lib/postgresql/8.3/main  
/var/log/postgresql/postgresql-8.3-main.log

postgresql.conf has the changed value:
#listen_addresses = '*'

and despite all this, pgAdmin III gives me:
could not connect to server: Connection refused (0x274D/10061) Is the
server running on host "xx.xx.xx.xx" and accepting TCP/IP connections on
port 5432?

Could anyone help me connect the postgresql server installed on my Debian
server via pgAdmin III installed on my pc?

Thanks in advance

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/connect-local-pgAdmin-III-to-remote-postgres-server-tp5693266.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] 2 machines, same database, same query, 10 times slower?

2012-05-08 Thread Tomas Vondra
Hi,

On 8 Květen 2012, 16:48, Antonio Goméz Soto wrote:
> Hi,
>
> I am running PostgreSQL 8.1, on CentOS 5. I have two machines, same
> hardware, with the same database layout,
> they have different data, and the same query run 10 times as slow on one
> machine compared to the other.

First of all, to analyze runtime differences it's important to provide
EXPLAIN ANALYZE output, not just EXPLAIN. Re-run the queries and use
explain.depesz.com to post the output.

Second, what do you mean 'different data'? If there is different amount of
data, it may be perfectly expected that the query runs much slower on the
machine with more data. For example the plans contain this:

A: Seq Scan on cdr  (cost=0.00..77039.87 rows=1486187 width=159)
B: Seq Scan on cdr (cost=0.00..408379.70 rows=781370 width=161)

That suggests that the second database contains about 1/2 the rows.

The seq scan nodes reveal another interesting fact - while the expected
row count is about 50% in the second plan, the estimated cost is about 5x
higher (both compared to the first plan).

The important thing here is that most of the cost estimate comes from the
number of pages, therefore I suppose the cdr occupies about 5x the space
in the second case, although it's much more 'sparse'.

Do this on both machines to verify that

   SELECT relpages, reltuples FROM pg_class WHERE relname = 'cdr';

That might happen for example by deleting a lot of rows recently (without
running VACUUM FULL after) or by not running autovacuum at all. Which is
quite likely, because it was introduced in 8.1 and was off by default.

BTW if you care about performance, you should upgrade to a more recent
version (preferably 9.x) because 8.1 is not supported for several years
IIRC and there were many improvements since then.

Tomas




Re: [GENERAL] Use LISTEN/NOTIFY between different databases

2012-05-08 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


> Yes, but it is a workaround. The native way is "communication" at the
> DB-backend level.
> It would be a very good feature: the possibility to set a destination DB
> in the NOTIFY command as an optional argument.
>
> Do the developers read this list? :)

Yes, but I find this a very unlikely feature, for a few reasons. First, 
there is a completely acceptable "workaround", as you call it. Second, you 
are the first person in my recollection to ask for this, so there is 
obviously not a high demand. Third, anything that goes cross-database 
is subject to lots of very careful care and handling, and I doubt adding 
that complexity is worth the small benefit gained.

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201205081059
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAk+pNOwACgkQvJuQZxSWSsjtdACggxjMfNxJxdfiY2ElxrWyx4E7
E/sAoLnDvepVy6QqhVicLf67kmSB6IqV
=oy5d
-END PGP SIGNATURE-





[GENERAL] 2 machines, same database, same query, 10 times slower?

2012-05-08 Thread Antonio Goméz Soto
Hi,

I am running PostgreSQL 8.1 on CentOS 5. I have two machines, same hardware,
with the same database layout; they have different data, and the same query
runs 10 times as slow on one machine as on the other.

I have run ANALYZE on both machines, here are the query plans for both 
machines. Can someone explain
this to me? I have trouble reading EXPLAIN output...

Query:

explain select company.name as cname, call_flags, bill_duration as 
call_duration,
   date_part('epoch',start_time) as start_time,
   src_name, src_type, src_num, src_id,
   dial_name, dial_type, dial_num, dial_id,
   ans_name, ans_type, ans_num, ans_id,
   sessionid
  from cdr, company, phoneline, contact
  where (src_id = contact.id or dial_id = contact.id or ans_id = contact.id)
and contact.id = '2' and phoneline.function='contact' and
phoneline.lookupid = contact.id and phoneline.status != 'deleted' and
(src_company=company.id or dial_company=company.id) and company.id > 2
  order by start_time DESC
  limit 10;

This is the query plan on machine #1 (query takes 2 seconds) :

 Limit  (cost=106128.33..106128.36 rows=10 width=160)
   ->  Sort  (cost=106128.33..106166.98 rows=15458 width=160)
 Sort Key: date_part('epoch'::text, cdr.start_time)
 ->  Nested Loop  (cost=49.38..104275.65 rows=15458 width=160)
   ->  Nested Loop  (cost=2.10..103880.57 rows=7729 width=164)
 Join Filter: (("outer".src_company = "inner".id) OR 
("outer".dial_company = "inner".id))
 ->  Nested Loop  (cost=0.00..103054.09 rows=6595 width=163)
   Join Filter: (("inner".src_id = "outer".id) OR 
("inner".dial_id = "outer".id) OR ("inner".ans_id = "outer".id))
   ->  Index Scan using contact_pkey on contact  
(cost=0.00..5.94 rows=1 width=4)
 Index Cond: (id = 2)
   ->  Seq Scan on cdr  (cost=0.00..77039.87 
rows=1486187 width=159)
 ->  Materialize  (cost=2.10..2.16 rows=5 width=13)
   ->  Seq Scan on company  (cost=0.00..2.10 rows=5 
width=13)
 Filter: (id > 2)
   ->  Materialize  (cost=47.28..47.30 rows=2 width=4)
 ->  Seq Scan on phoneline  (cost=0.00..47.28 rows=2 
width=4)
   Filter: ((("function")::text = 'contact'::text) AND 
((status)::text <> 'deleted'::text) AND (lookupid = 2))
(17 rows)

This is the query plan on machine two (query takes 38 seconds):

 Limit  (cost=424555.76..424555.79 rows=10 width=170)
   ->  Sort  (cost=424555.76..424574.34 rows=7432 width=170)
 Sort Key: date_part('epoch'::text, cdr.start_time)
 ->  Nested Loop  (cost=422353.60..424077.90 rows=7432 width=170)
   ->  Nested Loop  (cost=422064.10..423621.19 rows=3716 width=174)
 Join Filter: (("inner".src_company = "outer".id) OR 
("inner".dial_company = "outer".id))
 ->  Bitmap Heap Scan on company  (cost=2.09..49.23 rows=26 
width=21)
   Recheck Cond: (id > 2)
   ->  Bitmap Index Scan on company_pkey  
(cost=0.00..2.09 rows=26 width=0)
 Index Cond: (id > 2)
 ->  Materialize  (cost=422062.01..422085.24 rows=2323 
width=165)
   ->  Nested Loop  (cost=0.00..422059.69 rows=2323 
width=165)
 Join Filter: (("inner".src_id = "outer".id) OR 
("inner".dial_id = "outer".id) OR ("inner".ans_id = "outer".id))
 ->  Index Scan using contact_pkey on contact  
(cost=0.00..6.01 rows=1 width=4)
   Index Cond: (id = 2)
 ->  Seq Scan on cdr  (cost=0.00..408379.70 
rows=781370 width=161)
   ->  Materialize  (cost=289.50..289.52 rows=2 width=4)
 ->  Seq Scan on phoneline  (cost=0.00..289.50 rows=2 
width=4)
   Filter: ((("function")::text = 'contact'::text) AND 
((status)::text <> 'deleted'::text) AND (lookupid = 2))
(19 rows)

Thanks,
Antonio



Re: [GENERAL] errors on restoring postgresql binary dump to glusterfs

2012-05-08 Thread Liang Ma
Thank you, Magnus, for all the input. If I get any comments from the
gluster community, I will update here.

Liang

On Mon, May 7, 2012 at 3:27 PM, Magnus Hagander  wrote:
> On Mon, May 7, 2012 at 7:34 PM, Liang Ma  wrote:
>> On Mon, May 7, 2012 at 12:54 PM, Magnus Hagander  wrote:
>>> On Mon, May 7, 2012 at 5:02 PM, Liang Ma  wrote:
 On Fri, May 4, 2012 at 3:58 AM, Magnus Hagander  
 wrote:
> On Mon, Apr 30, 2012 at 8:34 PM, Liang Ma  wrote:
>> Hi There,
>>
>> While trying to restore a ~700GM binary dump by command
>>
>> pg_restore -d dbdata < sampledbdata-20120327.pgdump
>>
>> I encountered following errors repeatedly
>>
>> pg_restore: [archiver (db)] Error from TOC entry 2882463; 2613
>> 10267347 BLOB 10267347 sdmcleod
>> pg_restore: [archiver (db)] could not execute query: ERROR:
>> unexpected data beyond EOF in block 500 of relation base/16386/11743
>> HINT:  This has been seen to occur with buggy kernels; consider
>> updating your system.
>
> Note the message right here...
>
> There may be further indications in the server log about what's wrong.
>

 The server's logs in message file were clean.
>>>
>>> Then your logging is incorrectly configured, because it should *at
>>> least* have the same message as the one that showed up in the client.
>>>
>>
>> Oh, yes, the same error messages were logged in the postgresql log
>> file but no further information. I thought you implied that there may
>> be some indication in server's system logs, which I couldn't find any.
>
> Well, there might be, I wasn't sure :-) I guess there wasn't.
>
>
>> The server runs Ubuntu server 10.04 LTS with postgresql upgraded to
>> version 9.1.3-1~lucid. The postgresql data directory is located in a
>> glusterfs mounted directory to a replicated volume vol-2
>
> I assume you don't have more than one node actually *accessing* the
> data directory at the same time, right?
>

 Yes, you are right. I just set up this glusterfs and postgresql server
 with two nodes for testing purpose. There was no other gluster
 filesystem access activity at the time I tried to restore the
 postgresql dump. Do you know if postgresql recommends any other
 cluster filesystem, or it may not like cluster filesystem at all?
>>>
>>>
>>> Did you have PostgreSQL started on both nodes? That is *not*
>>> supported. If PostgreSQL only runs on one node at a time it should in
>>> theory work, provided the cluster filesystem provides all the services
>>> that a normal filesystem does, such as respecting fsync.
>>>
>>
>> Postgresql are installed in both nodes, but only one node's postgresql
>> data directory points to glusterfs filesystem. Another one's data
>> directory is in its default location in the local ext4 filesystem.
>> This is the one I used to prove the dump file can be restored without
>> any problem when glusterfs is not involved.
>
> OK. That should in theory be safe. Having two active nodes against the
> filesystem is never safe.
>
>
>> According to its introduction and document, glusterfs is supposed to
>> appear as a normal filesystem when being mounted, although I don't
>> know how well it respects things like fsync.
>
> It certainly looks like it's failing at some point. So yeah, I'm
> pretty sure you need to get in touch with the glusterfs folks -
> hopefully you get a response from them soon.
>
> --
>  Magnus Hagander
>  Me: http://www.hagander.net/
>  Work: http://www.redpill-linpro.com/



[GENERAL] Picksplit warning

2012-05-08 Thread Oleg Mürk
Hello,

Our postgresql logs are getting filled with warnings:
  LOG: picksplit method for column COLUMN_IDX of index INDEX_NAME
doesn't support secondary split
We are using gist indexes on integer, timestamp, and Postgis geometry.

Is there a way to work around this problem?

Thank You,
Oleg Mürk



Re: [GENERAL] lowercase on columnname using view

2012-05-08 Thread Guillaume Lelarge
On Tue, 2012-05-08 at 04:00 -0700, Chrishelring wrote:
> Hi all,
> 
> had some help the other day, but now I'm kinda stuck again. :/
> 
> I have a table ("virksomhedsdata") with the following columns:
> 
> "MI_STYLE" character varying(254),
>  "MI_PRINX" integer NOT NULL DEFAULT
> nextval('rk_ois."virksomhedsdata_MI_PRINX_seq"'::regclass),
>  "SP_GEOMETRY" geometry,
> 
> I would like to make a view so that the columnnames are presented in
> lowercase. I thought that the following would work:
> 
> CREATE OR REPLACE VIEW rk_ois.virksomhedsdata AS 
>  SELECT  virksomhedsdata.MI_STYLE AS mi_style, virksomhedsdata.MI_PRINX as
> mi_prinx, virksomhedsdata.SP_GEOMETRY AS sp_geometry
>FROM rk_ois.virksomhedsdata;
> 
> But it fails saying that column virksomhedsdata.mi_style does not exist.
> 
> What am I doing wrong here?
> 

You should double-quote the columns of the table. Something like this:

CREATE OR REPLACE VIEW rk_ois.virksomhedsdata AS 
  SELECT  virksomhedsdata."MI_STYLE" AS mi_style, ...


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com




Re: [GENERAL] lowercase on columnname using view

2012-05-08 Thread Szymon Guz
On 8 May 2012 13:00, Chrishelring  wrote:

> Hi all,
>
> had some help the other day, but now I'm kinda stuck again. :/
>
> I have a table ("virksomhedsdata") with the following columns:
>
> "MI_STYLE" character varying(254),
>  "MI_PRINX" integer NOT NULL DEFAULT
> nextval('rk_ois."virksomhedsdata_MI_PRINX_seq"'::regclass),
>  "SP_GEOMETRY" geometry,
>
> I would like to make a view so that the columnnames are presented in
> lowercase. I thought that the following would work:
>
> CREATE OR REPLACE VIEW rk_ois.virksomhedsdata AS
>  SELECT  virksomhedsdata.MI_STYLE AS mi_style, virksomhedsdata.MI_PRINX as
> mi_prinx, virksomhedsdata.SP_GEOMETRY AS sp_geometry
>   FROM rk_ois.virksomhedsdata;
>
> But it fails saying that column virksomhedsdata.mi_style does not exist.
>
> What am I doing wrong here?
>
> thanks!
>
> Christian
>
>
If you created the columns with quoted names like "MI_PRINX", then you need to
refer to them as "MI_PRINX", not MI_PRINX, because an unquoted identifier is
folded to lowercase in the query.

So the proper query should look like this:

CREATE OR REPLACE VIEW rk_ois.virksomhedsdata AS
 SELECT
virksomhedsdata."MI_STYLE" AS mi_style,
virksomhedsdata."MI_PRINX" as mi_prinx,
virksomhedsdata."SP_GEOMETRY" AS sp_geometry
 FROM rk_ois.virksomhedsdata;
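The folding rule described above can be illustrated outside the database. This is a simplified sketch only (the real parser also handles doubled quotes inside quoted identifiers, `U&` syntax, and so on), and `fold_identifier` is an invented helper name, not a PostgreSQL API:

```python
def fold_identifier(ident):
    """Simplified sketch of PostgreSQL's identifier handling:
    a double-quoted identifier keeps its exact case; an unquoted
    identifier is folded to lowercase."""
    if len(ident) >= 2 and ident[0] == '"' and ident[-1] == '"':
        return ident[1:-1]          # quoted: case preserved, quotes stripped
    return ident.lower()            # unquoted: folded to lowercase

# Writing MI_STYLE unquoted makes the server look up mi_style, which
# is not the actual column "MI_STYLE" -- hence "column does not exist".
print(fold_identifier('MI_STYLE'))    # -> mi_style
print(fold_identifier('"MI_STYLE"'))  # -> MI_STYLE
```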


- szymon


[GENERAL] lowercase on columnname using view

2012-05-08 Thread Chrishelring
Hi all,

had some help the other day, but now I'm kinda stuck again. :/

I have a table ("virksomhedsdata") with the following columns:

"MI_STYLE" character varying(254),
 "MI_PRINX" integer NOT NULL DEFAULT
nextval('rk_ois."virksomhedsdata_MI_PRINX_seq"'::regclass),
 "SP_GEOMETRY" geometry,

I would like to make a view so that the columnnames are presented in
lowercase. I thought that the following would work:

CREATE OR REPLACE VIEW rk_ois.virksomhedsdata AS 
 SELECT  virksomhedsdata.MI_STYLE AS mi_style, virksomhedsdata.MI_PRINX as
mi_prinx, virksomhedsdata.SP_GEOMETRY AS sp_geometry
   FROM rk_ois.virksomhedsdata;

But it fails saying that column virksomhedsdata.mi_style does not exist.

What am I doing wrong here?

thanks!

Christian



Re: [GENERAL] FATAL: lock file "postmaster.pid" already exists

2012-05-08 Thread Alban Hertroys
On 8 May 2012, at 24:34, deepak wrote:

> Hi,
> 
> On Windows 2008, sometimes the server fails to start due to an existing 
> "postmaster.pid" file.
> 
> I tried rebooting a few times and even force shutting down the server, and it 
> started up fine.
> It seems to be a race-condition of sorts in the code that detects whether the 
> process with PID 
> in the file is running or not.

No, it means that postgres wasn't shut down properly when Windows shut down. 
Removing the pid-file is one of the last things the shut-down procedure does. 
The file is used to prevent two instances of the same server from running on 
the same data directory.
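The single-instance guard a pid file provides can be sketched like this. It is an illustration only, not PostgreSQL's actual implementation, and `acquire_pid_file` is an invented name:

```python
import os

def acquire_pid_file(path):
    """Illustrative sketch of a pid-file lock: O_CREAT | O_EXCL makes
    the create-if-absent step atomic, so a second server pointed at
    the same data directory fails here instead of starting."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # pid file already present: a running (or crashed) instance
    with os.fdopen(fd, 'w') as f:
        f.write(str(os.getpid()))
    return True
```

The flip side of the atomic-create approach is exactly the problem in this thread: if the process dies without removing the file, the next start finds it and refuses to run.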

If it's a race condition, it's probably one in Microsoft's shutdown code. I've 
seen similar problems with Outlook mailboxes on a network directory; Windows 
unmounts the remote file-systems before Outlook has finished updating its files 
under that mount point, so Outlook throws an error message and Windows doesn't 
shut down because of that.

I don't suppose that pid-file is on a remote file-system?

> Does any one have this same problem?  Any way to fix it besides removing the 
> PID file
> manually each time the server complains about this?


You could probably script removal of the pid file if its creation date is 
before the time the system started booting up.
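A sketch of that check, assuming the boot timestamp is obtained elsewhere (it is platform-specific, e.g. from the event log on Windows); `pid_file_is_stale` is an invented helper, not part of any PostgreSQL tooling:

```python
import os

def pid_file_is_stale(pid_path, boot_time):
    """Sketch: treat postmaster.pid as stale when it was last written
    before the current boot, i.e. it survived an unclean shutdown.
    boot_time is a Unix timestamp supplied by the caller."""
    try:
        mtime = os.path.getmtime(pid_path)
    except OSError:
        return False  # no pid file, nothing to clean up
    return mtime < boot_time
```

A startup script could call this before launching the server and delete the file only when it returns True, so a genuinely running instance is never unlocked by mistake.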

Alban Hertroys

--
The scale of a problem often equals the size of an ego.


