[GENERAL] ERROR: unrecognized object class: 1262 with 9.2.2 but not with 9.2.1

2012-12-09 Thread Thomas Kellerer

I recently upgraded from 9.2.1 to 9.2.2 and my unit tests for my application 
are failing when cleaning up my test database.

I am using the statement:

   drop owned by wbjunit cascade;

at the end of a test suite to get rid of everything that was created during the 
tests.

Now since the upgrade to 9.2.2 it fails with the error message:

   ERROR: unrecognized object class: 1262

I could easily drop and re-create the database, but I'm a) curious how this 
situation could have happened and b) wondering whether there is a way to fix a 
problem like this without re-creating the database (in case it had not been a 
unit test DB).

Is this a bug in 9.2.2 or did I do something wrong?

A vacuumdb -f -v wbjunit shows the following message:

INFO:  vacuuming pg_catalog.pg_depend
INFO:  pg_depend: found 2 removable, 6219 nonremovable row versions in 47 
pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU 0.00s/0.01u sec elapsed 0.10 sec.

The "found 2 removable" part does not change, even if I re-run the statement 
several times.


My environment:
  PostgreSQL 9.2.2, compiled by Visual C++ build 1600, 32-bit
  Windows XP SP2

Regards
Thomas





[GENERAL] Installation Issue on Ubuntu under Xen

2012-12-09 Thread P. Broennimann
Hi there

I have a VM running under XEN XCP. The VM is Ubuntu server 12.04.1/64
headless.

The VM is completely fresh & clean and works fine. Then I install
PostgreSQL...

   sudo apt-get install python-software-properties

   sudo add-apt-repository ppa:pitti/postgresql
   sudo apt-get update
   sudo apt-get install postgresql-9.2


...which works well. But when I reboot I see/get the following error:


   * Starting load fallback graphics devices  [fail]


PostgreSQL seems to work fine, but does anyone know what this error is all
about? I never saw such a thing under my old Ubuntu 10.04 VM!?

Thanks & cheers,
Peter


[GENERAL] Re: ERROR: unrecognized object class: 1262 with 9.2.2 but not with 9.2.1

2012-12-09 Thread Thomas Kellerer

Thomas Kellerer wrote on 09.12.2012 11:36:

I recently upgraded from 9.2.1 to 9.2.2 and my unit tests for my application 
are failing when cleaning up my test database.

I am using the statement:

drop owned by wbjunit cascade;

at the end of a test suite to get rid of everything that was created during the 
tests.

Now since the upgrade to 9.2.2 it fails with the error message:

   ERROR: unrecognized object class: 1262



As this can be reproduced with a clean new installation, I filed a bug report: 
#7748

Regards
Thomas







Re: [GENERAL] Installation Issue on Ubuntu under Xen

2012-12-09 Thread Adrian Klaver

On 12/09/2012 02:40 AM, P. Broennimann wrote:

Hi there

I have a VM running under XEN XCP. The VM is Ubuntu server 12.04.1/64
headless.

The VM is completely fresh & clean and works fine. Then I install
PostgreSQL...

sudo apt-get install python-software-properties

sudo add-apt-repository ppa:pitti/postgresql
sudo apt-get update
sudo apt-get install postgresql-9.2


...which works well. But when I reboot I see/get the following error:


* Starting load fallback graphics devices  [fail]


PostgreSQL seems to work fine, but does anyone know what this error is all
about? I never saw such a thing under my old Ubuntu 10.04 VM!?


It seems Ubuntu boot behavior changed in 11.04+. My best guess is that it is just 
telling you what you already know: the VM does not have a video driver.


See here for more detail:
https://wiki.ubuntu.com/BootGraphicsArchitecture





Thanks & cheers,
Peter




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] When is archive_cleanup called?

2012-12-09 Thread Vincent Veyron
On Friday, November 30, 2012 at 09:44 -0500, François Beausoleil wrote:

 I'm using 9.1.5 on Ubuntu 11.10, in a streaming replication scenario. On my 
 slave, recovery.conf states:
 
 standby_mode = on
 restore_command = '/usr/local/omnipitr/bin/omnipitr-restore -D 
 /var/lib/postgresql/9.1/main/ --source gzip=/data/dbanalytics-wal/ 
 --remove-unneeded --temp-dir /var/tmp/omnipitr -l 
 /var/log/omnipitr/restore-^Y-^m-^d.log --streaming-replication --verbose 
 --error-pgcontroldata hang %f %p'
 archive_cleanup_command = '/usr/local/omnipitr/bin/omnipitr-cleanup --verbose 
 --log /var/log/omnipitr/cleanup-^Y-^m-^d.log --archive 
 gzip=/data/dbanalytics-wal/ %r'
 primary_conninfo = 'host=master port=5432 user=dbrepl password=password'
 
 
 I ran out of disk space on the slave, because the archived WAL records were 
 not removed. The documentation for archive_cleanup_command states[1]:
 

Hi, 

I have no personal experience on the matter, but I saw this recent post:

http://archives.postgresql.org/pgsql-general/2012-12/msg00129.php


which seems related?








-- 
Vincent Veyron
http://marica.fr
Software for legal departments





Re: [GENERAL] Statistics mismatch between n_live_tup and actual row count

2012-12-09 Thread tim_wilson
It seems that we are currently running 8.4.3 on the server where we are
encountering the problem.

We will upgrade to 8.4.9 and then come back with a test case if we still
see the issue.

Thanks again for your help.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Statistics-mismatch-between-n-live-tup-and-actual-row-count-tp5059317p5735835.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] When is archive_cleanup called?

2012-12-09 Thread Jeff Janes
On Fri, Nov 30, 2012 at 6:44 AM, François Beausoleil
franc...@seevibes.com wrote:

 How come no new restart points were achieved? I had 4008 WAL archives on my 
 slave. I expected them to be removed as streaming replication progressed. Are 
 restart points prevented while long queries are running?


They can be prevented, depending on what settings you have for
max_standby_*_delay.
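
(For reference, those settings can be checked directly on the standby; a minimal
sketch - the parameter names are the standard ones, and -1 means queries may
delay replay indefinitely, which also holds back restartpoints:)

   -- run on the standby
   show max_standby_streaming_delay;
   show max_standby_archive_delay;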

Cheers,

Jeff




Re: [GENERAL] Query and index ... unexpected result need advice.

2012-12-09 Thread Jeff Janes
On Sat, Dec 8, 2012 at 5:54 AM, Condor con...@stz-bg.com wrote:

 I am interested to know where my mistake is, or whether something is wrong
 with the server, which I doubt. Here is my current query with explain:
 (I changed the names to XXX YYY ZZZ because the original names are written in CP1251
 and most people on the list can't read them)

 db=# explain analyze SELECT *,COALESCE(firstname,'') || ' ' ||
 COALESCE(middlename,'') || ' ' || COALESCE(lastname, '') AS name FROM
 clients_tbl WHERE firstname = 'XXX' AND middlename || lastname LIKE
 '%YYY%ZZZ%';

What is the meaning/purpose of the middlename || lastname LIKE '%YYY%ZZZ%' ?

At least in my culture, that doesn't seem like a sensible thing to do.
 Is it trying to compensate for some known dirtiness in the data that
has not yet been cleaned up?

In any event, in order to benefit from an index on that query, you
would need to create an index on the concatenated columns, not on the
individual columns.

create index on clients_tbl ((middlename||lastname) text_pattern_ops);

But that still won't work because your pattern starts with a wildcard,
and that type of pattern cannot benefit from btree indexes.
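
(For the leading-wildcard case, a trigram index is one possible workaround; this
is only a rough sketch and assumes the pg_trgm contrib extension is available:)

   create extension if not exists pg_trgm;
   -- a GIN trigram index on the same expression can serve LIKE patterns
   -- that start with a wildcard, which a btree cannot
   create index clients_tbl_mid_last_trgm_idx
       on clients_tbl using gin ((middlename || lastname) gin_trgm_ops);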


...

 I dropped both indexes and created a new one:

 create index clients_tbl_firstname_idx on clients_tbl using btree (firstname
 COLLATE bg_BG text_pattern_ops);

I don't understand why that is legal.  I would think that
text_pattern_ops implies something that contradicts COLLATE bg_BG.
In any event, the inclusion of both of those seems to prevent the
index from being used for equality, while the inclusion of just one or
the other property does not.  (That is why the query got slower.)

Since firstname is used as equality in your example, there is no
reason to change this index to text_pattern_ops in order to support
your example.
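
(Concretely - a minimal sketch, with a made-up index name: a plain btree on
firstname is enough for the firstname = 'XXX' predicate, and the leading-wildcard
LIKE on middlename || lastname cannot use a btree no matter which opclass is used.)

   create index clients_tbl_firstname_plain_idx on clients_tbl (firstname);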

Cheers,

Jeff




Re: [GENERAL] ERROR: unrecognized object class: 1262 with 9.2.2 but not with 9.2.1

2012-12-09 Thread Jeff Janes
On Sun, Dec 9, 2012 at 2:36 AM, Thomas Kellerer spam_ea...@gmx.net wrote:
 I recently upgraded from 9.2.1 to 9.2.2 and my unit tests for my application
 are failing when cleaning up my test database.

 I am using the statement:

drop owned by wbjunit cascade;

 at the end of a test suite to get rid of everything that was created during
 the tests.

 Now since the upgrade to 9.2.2 it fails with the error message:

ERROR: unrecognized object class: 1262


This was introduced into 9.2.2 by da24920ab8ea6b226321038
 REASSIGN OWNED: consider grants on tablespaces, too

And presumably introduced into the latest releases of other versions
by the analogous commits into those branches.
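
(A rough diagnostic sketch, not from the thread: object class 1262 is the
pg_database catalog, and DROP OWNED works from the shared-dependency catalog,
so something like this should list the entries it trips over for the role -
presumably a database-level grant, deptype 'a':)

   select d.classid::regclass as object_class, d.objid, d.deptype
     from pg_shdepend d
     join pg_authid a on a.oid = d.refobjid
    where d.refclassid = 'pg_authid'::regclass
      and a.rolname = 'wbjunit'
      and d.classid = 'pg_database'::regclass;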

Cheers,

Jeff




Re: [GENERAL] query performance, thought it was timestamps, maybe just table size?

2012-12-09 Thread Jeff Janes
On Mon, Dec 3, 2012 at 5:56 AM, Henry Drexler alonup...@gmail.com wrote:
 On Sun, Dec 2, 2012 at 12:44 AM, Jeff Janes jeff.ja...@gmail.com wrote:

 Could you do it for the recursive
 SQL (the one inside the function) like you had previously done for the
 regular explain?

 Cheers,

 Jeff


 Here they are:

 for the 65 million row table:
 Index Scan using ctn_source on massive  (cost=0.00..189.38 rows=1 width=28)
 (actual time=85.802..85.806 rows=1 loops=1)
   Index Cond: (ctn = 1302050134::bigint)
   Filter: (dateof <@ '[2012-07-03 14:00:00,2012-07-10 14:00:00]'::tsrange)
   Buffers: shared read=6
 Total runtime: 85.891 ms

If you execute it repeatedly (so that the data is in buffers the next
time) does it then get faster?

 for the 30 million row table:
 Index Scan using ctn_dateof on massive  (cost=0.00..80.24 rows=1 width=24)
 (actual time=0.018..0.020 rows=1 loops=1)
   Index Cond: (ctn = 1302050134::bigint)
   Filter: (dateof <@ '[2012-07-03,2012-07-11)'::daterange)
   Buffers: shared hit=5
 Total runtime: 0.046 ms

The obvious difference is that this one finds all 5 buffers it needs already
cached in shared buffers, while the first one had to read them in.  So this
supports the idea that your data has simply grown too large for your RAM.
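
(If you want to watch that directly, re-running something like this - an
approximation of the query behind the plans above - twice in a row should show
the shared read= counts turning into hit= counts once the pages are cached:)

   explain (analyze, buffers)
   select * from massive
    where ctn = 1302050134
      and dateof <@ '[2012-07-03 14:00:00,2012-07-10 14:00:00]'::tsrange;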

Cheers,

Jeff




Re: [GENERAL] Problem with aborting entire transactions on error

2012-12-09 Thread David Johnston
On Dec 9, 2012, at 22:20, Zbigniew zbigniew2...@gmail.com wrote:

 Hello,
 
 As I read while googling the web, many people have complained about this
 before. Couldn't it be made optional (even with default ON)?
 I understand that there are situations when it is a must - for
 example, when the rest of the queries rely on the result of the first ones -
 but there are numerous situations when just skipping a faulty query
 is all we need.
 
 A simple - but very common - example: I wanted to perform a really large
 number of inserts - using a transaction, to make it faster - while being
 sure that duplicate entries would be skipped.

And what happens if one of those inserts errors out for reasons other than a 
duplicate?

 Of course, this job will
 be done best by the server itself, which is keeping an eye on the primary
 key of the table. Unfortunately: not during a transaction! Any dupe
 will trash thousands of other (proper) entries immediately.
 
 Why is this? My guess is there is some kind of logic in the code, like this:
 
  if { no error during query } {
      do it
  } else {
      withdraw this one
      rollback entire transaction
  }
 
 Therefore my request - and, as I saw, of many others - would be just
 to introduce a little change:
 
  if { no error during query } {
      do it
  } else {
      withdraw this one
      if { ROLLBACK_ON_ERROR } {
          rollback entire transaction
      }
  }
 
 (if there's no ROLLBACK_ON_ERROR - it should carry on with the
 remaining queries)
 
 Is it really so problematic to introduce such a code change, allowing
 users to control this behaviour?

Since the current belief is that such behavior is unwise, no one is willing to give 
their time to doing so. I'm not sure whether that means that if you supplied 
such a patch it would be rejected.  Since there is a correct way to solve the 
duplicates scenario (see below) without savepoints, you may wish to supply 
another example if you want to try to convince people.

 Yes, I read about using
 savepoints - but I think we agree it's just a cumbersome workaround,
 not a real solution like my proposal. All we need is either a
 variable to set or a command that would allow modifying the present
 functionality in the way described above.
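
(For reference, the savepoint workaround being discussed looks roughly like this;
the items table and its primary key on id are hypothetical, only to illustrate
the pattern:)

   begin;
   savepoint sp;
   insert into items (id, payload) values (1, 'a');
   release savepoint sp;
   savepoint sp;
   insert into items (id, payload) values (1, 'b');  -- duplicate key -> error
   rollback to savepoint sp;  -- only this insert is undone, the transaction survives
   -- remaining inserts continue here
   commit;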

The true solution is to insert into a staging table that allows duplicates (but 
ideally contains other constraints that you do care about) and then de-dupe and 
insert into your final table.
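
A minimal sketch of that approach (table and column names are hypothetical, and
the COPY is just one way to load the staging table):

   -- hypothetical target: items(id primary key, payload)
   create temp table staging_items (like items including defaults);

   copy staging_items from '/tmp/items.csv' with (format csv);

   insert into items (id, payload)
   select distinct on (s.id) s.id, s.payload
     from staging_items s
    where not exists (select 1 from items i where i.id = s.id)
    order by s.id;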

 -- 
 regards,
 Zbigniew
 

This may be an undesirable instance of the database not allowing you to shoot 
yourself in the foot, but since ignoring errors is bad practice, the motivation to 
allow it is small.  You should always be able to import the data without errors into 
an explicitly defined table and then write queries to convert between the input 
format and the final format - explicitly making and coding the necessary 
translation decisions and procedures.  That way you always know that your 
import routine is working as expected, rather than guessing whether it was 
the known error condition or an unknown condition that caused a record to go 
missing.

David J.





Re: [GENERAL] Query and index ... unexpected result need advice.

2012-12-09 Thread Condor

On 2012-12-10 00:31, Jeff Janes wrote:

On Sat, Dec 8, 2012 at 5:54 AM, Condor con...@stz-bg.com wrote:


I am interested to know where my mistake is, or whether something is wrong
with the server, which I doubt. Here is my current query with explain:
(I changed the names to XXX YYY ZZZ because the original names are written in CP1251
and most people on the list can't read them)

db=# explain analyze SELECT *,COALESCE(firstname,'') || ' ' ||
COALESCE(middlename,'') || ' ' || COALESCE(lastname, '') AS name FROM
clients_tbl WHERE firstname = 'XXX' AND middlename || lastname LIKE
'%YYY%ZZZ%';


What is the meaning/purpose of the middlename || lastname LIKE '%YYY%ZZZ%'?

At least in my culture, that doesn't seem like a sensible thing to do.
Is it trying to compensate for some known dirtiness in the data that
has not yet been cleaned up?

In any event, in order to benefit from an index on that query, you
would need to create an index on the concatenated columns, not on the
individual columns.

create index on clients_tbl ((middlename||lastname) text_pattern_ops);

But that still won't work because your pattern starts with a wildcard,
and that type of pattern cannot benefit from btree indexes.


...


The point is that the server should first satisfy the equality condition and 
then move on to the remaining condition. I can use it as in the example above, 
or with a query like the one below:

SELECT *,COALESCE(firstname,'') || ' ' || COALESCE(middlename,'') || ' ' || 
COALESCE(lastname, '') AS name FROM clients_tbl WHERE firstname = 'XXX' 
AND middlename || lastname LIKE '%ZZZ%';

In this case I don't know whether ZZZ is actually in the middle name or in the 
last name, because that is the input. It can also be:

SELECT *,COALESCE(firstname,'') || ' ' || COALESCE(middlename,'') || ' ' || 
COALESCE(lastname, '') AS name FROM clients_tbl WHERE firstname = 'XXX' 
AND middlename || lastname LIKE '%Y%ZZZ%';

where the first part of the middle name is only Y, not the full middle name YYY.

And it works fine.




I dropped both indexes and created a new one:

create index clients_tbl_firstname_idx on clients_tbl using btree 
(firstname COLLATE bg_BG text_pattern_ops);


I don't understand why that is legal.  I would think that
text_pattern_ops implies something that contradicts COLLATE bg_BG.
In any event, the inclusion of both of those seems to prevent the
index from being used for equality, while the inclusion of just one or
the other property does not.  (That is why the query got slower.)



I was thinking that when I add COLLATE bg_BG text_pattern_ops it would help 
the indexer understand that the data there is in a specific encoding, and that 
it would speed up the LIKE clause. When I make an index like:

create index on clients_tbl (middlename text_pattern_ops);
or
create index on clients_tbl (firstname text_pattern_ops);

there is no different result ... still 35 ms, but I expected it to drop from 35 
to 20 or 10 ms :)




Since firstname is used as equality in your example, there is no
reason to change this index to text_pattern_ops in order to support
your example.



I understand that, but if I need to do a LIKE on firstname, what is the 
solution?

Create two indexes, one with text_pattern_ops and the other without it?
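
(For what it's worth, that two-index approach is a common answer; a minimal 
sketch, with made-up index names - the default-opclass index covers ordinary 
comparisons and collation-aware sorting, the text_pattern_ops one covers 
LIKE 'prefix%' searches, and plain equality can use either. A leading-wildcard 
pattern such as '%YYY%' still cannot use a btree at all.)

   create index clients_tbl_firstname_plain_idx on clients_tbl (firstname);
   create index clients_tbl_firstname_tpo_idx   on clients_tbl (firstname text_pattern_ops);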


Cheers,

Jeff



Regards,
H.S.




[GENERAL] Re: ERROR: unrecognized object class: 1262 with 9.2.2 but not with 9.2.1

2012-12-09 Thread Thomas Kellerer

Jeff Janes, 09.12.2012 23:41:

On Sun, Dec 9, 2012 at 2:36 AM, Thomas Kellerer spam_ea...@gmx.net wrote:

I recently upgraded from 9.2.1 to 9.2.2 and my unit tests for my application
are failing when cleaning up my test database.

I am using the statement:

drop owned by wbjunit cascade;

at the end of a test suite to get rid of everything that was created during
the tests.

Now since the upgrade to 9.2.2 it fails with the error message:

ERROR: unrecognized object class: 1262



This was introduced into 9.2.2 by da24920ab8ea6b226321038
  REASSIGN OWNED: consider grants on tablespaces, too

And presumably introduced into the latest releases of other versions
by the analogous commits into those branches.



Thanks for the feedback.

Regards
Thomas







Re: [GENERAL] SSDs - SandForce or not?

2012-12-09 Thread Greg Smith

On 11/14/12 2:11 AM, Toby Corkindale wrote:

So on the face of it, I think the Sandforce-based drives are probably a
winner here, so I should look at the Intel 520s for evaluation, and
whatever the enterprise equivalent are for production.


As far as I know the 520 series drives fail the requirements outlined at 
http://wiki.postgresql.org/wiki/Reliable_Writes and you can expect 
occasional data corruption after a crash when using them.  As such, any 
performance results you get back are fake.  You can't trust that the same 
results will come back from the drives that do handle writes 
correctly.  I'm not aware of any SSD with one of these compressing 
Sandforce controllers that's on the market right now that does this 
correctly; they're all broken for database use.  The quick rule of thumb 
is that if the manufacturer doesn't brag about the capacitors on the 
drive, it doesn't have any and isn't reliable for PostgreSQL.


The safe Intel SSD models state very clearly in the specifications how 
they write data in case of a crash.  The data sheet for the 320 series 
drives, for example, says "To reduce potential data loss, the Intel® SSD 
320 Series also detects and protects from unexpected system power loss 
by saving all cached data in the process of being written before 
shutting down."  The other models I've deployed and know are safe are the 
710 series, which are the same basic drive but with different 
quality flash and tuning for longevity.  See 
http://blog.2ndquadrant.com/intel_ssds_lifetime_and_the_32/ for details. 
 The 710 series drives are quite a bit more expensive than Intel's 
other models.


Intel's recently released DC S3700 drives also look to have the right 
battery backup system to be reliable for PostgreSQL.  Those are expected 
to be significantly cheaper than the 710 models, while having the same 
reliability characteristics.  I haven't been able to get one yet though, 
so I don't really know for sure how well they perform.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com




[GENERAL] Re: ERROR: unrecognized object class: 1262 with 9.2.2 but not with 9.2.1

2012-12-09 Thread Thomas Kellerer

Jeff Janes, 09.12.2012 23:41:


I recently upgraded from 9.2.1 to 9.2.2 and my unit tests for my application
are failing when cleaning up my test database.

I am using the statement:

drop owned by wbjunit cascade;

at the end of a test suite to get rid of everything that was created during
the tests.

Now since the upgrade to 9.2.2 it fails with the error message:

ERROR: unrecognized object class: 1262



This was introduced into 9.2.2 by da24920ab8ea6b226321038
  REASSIGN OWNED: consider grants on tablespaces, too

And presumably introduced into the latest releases of other versions
by the analogous commits into those branches.



Does anybody know if a 9.2.1 Windows (32-bit and 64-bit) ZIP distribution can 
still be downloaded somewhere?
I checked the EnterpriseDB download page, but they only offer the current 
version.

I didn't keep a copy around, as the minor updates usually go really smoothly ;)


Regards
Thomas



