Re: [ADMIN] [GENERAL] How to recover a postgres installation from files

2010-12-01 Thread Craig Ringer
On 02/12/10 01:11, Anibal David Acosta wrote:

> So, I don't know how to recover the database from the data directory of a
> windows postgres 8.1 installation.

You'll need PostgreSQL 8.1 for Windows. This appears to have been
removed from the FTP site for the well-intentioned reason that, on
Windows, it's pretty scarily buggy and it's unwise to use anything prior
to 8.2 on Windows, but people keep on trying to install ancient versions
for new deployments for some bizarre reason.

Unfortunately, this doesn't consider the needs of anybody who has an
existing 8.1 database. The 8.1 win32 installers clearly used to be
hosted by the project, as:

  http://www.postgresql.org/about/news.422

links to:

  http://www.postgresql.org/ftp/binary/v8.1.0/win32

which no longer exists.


Does anyone have a copy of the win32 installer for such an ancient
version kicking around? If so, IMO the most recent 8.1 really needs to
be on postgresql.org, even if it's within a password-protected folder
that forces you to type "I don't care if this eats all my data" as the
password before downloading...

I've CC'd Magnus Hagander as he's listed as the installer maintainer for
the old msi installers on http://pgfoundry.org/projects/pginstaller .



I've found a copy of some version of the installer here:

  http://www.postgresql.at/download/

... but who knows which version it is, if it's configured the same, and
whether or not it's safe. If you want to give it a go you'll need to
download both postgresql-8.1-int.msi and postgresql-8.1.msi and put them
in the same directory before running postgresql-8.1.msi.


-- 
System & Network Administrator
POST Newspapers

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] [GENERAL] How to recover a postgres installation from files

2010-12-02 Thread Craig Ringer
On 02/12/10 01:11, Anibal David Acosta wrote:

> So, I only have the postgres directory on my hands. I decide to install the
> same database version in another computer and replace DATA directory, but I
> notice that 8.1 (windows binary) is not available for download.
> 
> So, I don't know how to recover the database from the data directory of a
> windows postgres 8.1 installation.

First and most important: only work on a COPY of the recovered files.
Keep an original, untouched copy somewhere read-only and safe.

My other mail should provide you with some info on how to get 8.1 .
Hopefully Magnus or someone else will provide a better source of 8.1
installer binaries, but if not then the ones I linked to may be a viable
option.


BTW, 8.1 was EOL'd in 2007:

http://www.postgresql.org/about/news.865

Once you get your data accessible, do a full dump IMMEDIATELY; you need
to upgrade to a supported version of PostgreSQL before resuming using
the database. 8.1 is not only old, but pretty unsafe on Windows. If your
organization has had it around this long after EOL, you need to look at
your software management policies.
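
As a rough sketch, once the old 8.1 server is running again it'd look
something like the following. The paths, ports and database name are all
made up (and written Unix-style), so adjust them for your Windows install:

  # dump with the NEWEST pg_dump you can get, pointed at the old 8.1 server
  /opt/pgsql-9.0/bin/pg_dump -h localhost -p 5432 -U postgres -Fc mydb > mydb.dump
  # then create and load the database on a current server, here on port 5433
  /opt/pgsql-9.0/bin/createdb -h localhost -p 5433 -U postgres mydb
  /opt/pgsql-9.0/bin/pg_restore -h localhost -p 5433 -U postgres -d mydb mydb.dump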


Not that I can throw stones - I have:

  Windows NT 4 [1996, final eol in 2004]
  Sybase SQL Anywhere 5.5 [1995, eol 2002]
  SCO OpenServer 5.0.5 [1995, eol 2004, company no longer exists*]
  Plain English 4GL [?, eol 1983, company gone by 1985**]
  Microsoft Office Word 2000
  Mac OS 9.2.2 (PowerPC G4) with an Apple Desktop Bus hardware dongle
  Windows 98 (running in a VM; runs a win16 app with 16-bit ODBC)

here among other scary dinosaurs I can't yet get rid of for one reason
or another.

* There's someone wearing their name and their skin, but it's not really
SCO (the Santa Cruz Operation). The SCO Group are peddling the old
versions of the OS with minimal engineering support and knowledge. About
the only interesting thing they've done is repacked OpenServer 6 into a
canned VM image with a couple of drivers, because it was a bit
temperamental to install on many VM systems.

** This runs in the Microsoft Xenix kernel personality on the SCO
OpenServer box. SCO 5.0.5 is the *newest* OS it'll run on. And no, I
don't have source code or I would've ported it to something civilized
long ago. The app it runs relies too much on freaky bugs and quirks in
the interpreter for a reimplementation to be viable; I know, I tried. A
rewrite of the app Plain English runs is in progress...

-- 
System & Network Administrator
POST Newspapers

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgresql 9.0.1 installation error

2011-08-08 Thread Craig Ringer

On 8/08/2011 7:11 PM, Venkat Balaji wrote:


We had installed PostgreSQL9.0.1 for which i had executed the below
commands -

./configure --with-openssl --prefix=/opt/Postgresql9.0.1/
make install
installed successfully


OK, you're building Pg from source rather than using packages provided 
by EDB or your operating system. Fair enough, albeit somewhat unusual 
for many production setups.



When i said "./psql -p 5445", I am getting the below error


The most likely cause is that the dynamic linker is using a different 
library than what Pg was compiled against for libssl, libpq, or 
something like that.


You have neglected to mention the platform you're using or any of the 
other important data, so it's hard to give any more detailed 
instructions about how to track the issue down. It'd help if you 
mentioned what OS you were using, whether you uninstalled/deleted any 
prior versions of Pg, what your LD_LIBRARY_PATH is, what `ldd' reports 
when run on psql, etc.
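
As a starting point, the usual first checks run from the install's bin
directory look something like this; nothing here is specific to your setup:

  echo "$LD_LIBRARY_PATH"      # anything unexpected on the library search path?
  ldd ./psql                   # what the runtime linker actually resolves
  ldd ./psql | grep -E 'libpq|libssl|libcrypto'   # the usual suspects for mismatches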



[postgres@warehouse1 bin]$ ./psql -p 5445
LOG:  unexpected EOF on client connection
Segmentation fault


Most likely you have linkage problems, but if not it'd help if you'd get 
a backtrace of that client crash. Assuming you're on a machine with gdb:


gdb --args ./psql -p 5445
(gdb) run
 blah blah
 terminated with SIGSEGV (Segmentation fault)
(gdb) bt



2. The same version of PostgreSQL was installed in the same server in a
different location and is working fine without any issues -
3. Postgres process is running fine. When i am connecting from a
different PG installer bin directory,
 I am able to connect.


It's *EXTREMELY* likely to be an issue where you're linking to a 
different libpq possibly compiled with different options, then.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Fwd: Suspected Postgres Datacorruption

2011-08-09 Thread Craig Ringer

On 4/08/2011 1:14 AM, Sumeet Jauhar wrote:


1 . There was a system crash due to a hardware failure .



A ) Isn’t Postgres database resilient enough to handle hardware system
failure ? or it sometime results in a corrupt index for its tables ? I


You should *always* be able to pull the plug out of a box running Pg at 
any point and have it come up just fine. If you can't, it's a bug. They 
do turn up, but quite rarely and usually relating to odd corner cases in 
newly introduced features.


If you've done something like turn off fsync, of course, you've just 
told PostgreSQL "I don't care about my data, don't bother keeping it 
crash safe" and you get garbage if there's a system crash. But that's 
your choice to enable if your use case doesn't require data durability. 
You haven't done that, so this isn't the cause of your issue.


The only other known case (in a current version) where index corruption 
is expected after a crash is if you are using hash indexes. Hash indexes 
are NOT CRASH SAFE, as per the documentation, and WILL need to be 
reindexed after a crash. Don't use them unless you really know you need 
them (you don't).
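
If you really are using hash indexes and have just had a crash, the fix is
simply a rebuild. The index and table names here are invented:

  psql -d mydb -c "REINDEX INDEX my_hash_idx;"
  # or rebuild every index on the affected table in one go
  psql -d mydb -c "REINDEX TABLE mytable;"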


Of course, if you're using 7.4.2 or something ancient, you're missing a 
lot of bug fixes, and some of them DID relate to data durability issues.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgresql 9.0.1 installation error

2011-08-09 Thread Craig Ringer

On 9/08/2011 2:31 PM, Venkat Balaji wrote:

Hi Craig,

Thank you very much for your reply !

Here is my reply -

1. We are using RHEL with the below version
Linux version 2.6.18-194.26.1.el5
(mockbu...@x86-004.build.bos.redhat.com) (gcc version 4.1.2
20080704 (Red Hat 4.1.2-48)) #1 SMP Fri Oct 29 14:21:16 EDT 2010

2. LD_LIBRARY_PATH is not set


OK. In that case, please show the output of:

ldd ./psql

run in exactly the same environment as you'd run ./psql and get a crash.




Program received signal SIGSEGV, Segmentation fault.
0x2aac2ecc in resetPQExpBuffer () from
/opt/Postgres9.0.1/lib/libpq.so.5


OK, you haven't built with debugging. Try rebuilding with --enable-debug 
to configure, then re-running the gdb command. When you get a crash and 
the above message is printed out, you'll get a prompt like:


(gdb)

Type "bt" at that prompt and press enter. The resulting stack trace 
shows more information about what led up to the crash.
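
In other words, roughly this, re-using the configure flags from your
original build plus --enable-debug:

  ./configure --with-openssl --prefix=/opt/Postgresql9.0.1/ --enable-debug
  make && make install
  # then reproduce the crash under gdb and grab the backtrace
  gdb --args ./psql -p 5445
  (gdb) run
  (gdb) bt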


Debugging memory related crashes in C isn't simple. Just because it 
crashes in libpq doesn't mean the fault is in libpq. It might be, or it 
might be that some other library is corrupting memory that causes libpq 
to crash later, corrupting the stack, returning a bad pointer from a 
function call, library headers not matching linked library sizes so 
returned struct sizes/offsets are wrong, and lots, lots more. It's not 
simple.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgres process

2011-08-09 Thread Craig Ringer

On 9/08/2011 11:26 AM, iliya g wrote:

Hi guys,

I need some help discovering the processes that run on an instance. I
know that there are many processes running on one instance and they are
all called postgres, but I was hoping you can help me find a way to
uniquely identify each one of them. I want to discover what the process
name is and what it does. Let me know if you have any suggestions.


On operating systems that support it, postgresql changes the process 
name to tell you what it's doing.


On other operating systems, use the pg_stat_activity view. See the manual.
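
For example, something like the following. The column names here
(procpid, current_query) are correct for 8.x/9.0/9.1; check \d
pg_stat_activity on your version:

  ps aux | grep postgres    # on Linux etc. the process titles show what each backend is doing
  psql -c "SELECT procpid, usename, datname, current_query FROM pg_stat_activity;"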

If you still need more information, please read this so you provide 
enough detail to get a proper answer before following up:


  http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

... since you didn't even mention what operating system you were running 
on in your question!


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Character set equivalent for AL32UTF8

2011-08-09 Thread Craig Ringer

On 2/08/2011 8:52 PM, RBharathi wrote:

Hi,
We plan to migrate data from Oracle 11g with characterset AL32UTF8 to a 
Postgres db.

What is the equivalent character set to use in Postgres? We see only the UTF-8 
option.


What's AL32UTF8 ? That's not a standard charset name or widely 
recognised charset. Is it some Oracle specific feature? If so, what 
makes it different to UTF-8 and why do you need it?


Documentation link? References?

A 30-second Google search turned up this:

http://decipherinfosys.wordpress.com/2007/01/28/difference-between-utf8-and-al32utf8-character-sets-in-oracle/

"As far as these two character sets go in Oracle,  the only difference 
between AL32UTF8 and UTF8 character sets is that AL32UTF8 stores 
characters beyond U+FFFF as four bytes (exactly as Unicode defines 
UTF-8). Oracle’s “UTF8” stores these characters as a sequence of two 
UTF-16 surrogate characters encoded using UTF-8 (or six bytes per 
character).  Besides this storage difference, another difference is 
better support for supplementary characters in AL32UTF8 character set."



Is this what you're talking about? If so, what's the concern? Have you 
checked to see if PostgreSQL's behavior fits your needs?



--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] postgresql server crash on windows 7 when using plpython

2011-08-09 Thread Craig Ringer

On 9/08/2011 10:54 PM, c k wrote:


Normal python functions returning text etc. are working but when containing

import sys
from uuid import getnode as get_mac
mac = get_mac()
return mac

fails. What will be the reason?


You still haven't supplied the error message you get when you run this.

In the absence of better information, my guess would be that python's 
uuid module uses ossp-uuid, same as PostgreSQL does, but a different 
version or one compiled differently. The PostgreSQL `bin' dir with the 
postgresql version of the DLL will be in the path before the Python one, 
so Python is calling into a different version of the DLL than it expects 
and probably crashing as a result.


That would be consistent with it working from the python command line.

To work around this, I'd either use the ossp-uuid functions via the SPI 
rather than using the Python UUID module, or I'd remove the OSSP-UUID 
dll from the postgresql directory. You can certainly try that to see if 
it helps.
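
For example, assuming the uuid-ossp contrib module is installed in your
database (it's an optional component of the Windows installer),
uuid_generate_v1() embeds the host's MAC address and may well cover the
same need:

  psql -d mydb -c "SELECT uuid_generate_v1();"

From inside a PL/Python function you'd call it with plpy.execute() rather
than importing the uuid module.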


This is one of the joys you get with Windows software not being managed 
by a central packaging system. Everyone bundles their own versions of 
all the dependencies, leaving messes like this where two DLLs with the 
same name aren't quite compatible. Yay!


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgresql 9.0.1 installation error

2011-08-10 Thread Craig Ringer

On 10/08/2011 3:43 PM, Venkat Balaji wrote:


 libpq.so.5 => /opt/Postgres9.0.1/lib/libpq.so.5


OK, so it looks like it's runtime linking to the correct libpq, assuming 
that the copy in /opt/Postgres9.0.1/ is the one for ... well .. 9.0.1 . 
That was my first suspicion about what might be wrong, that it was being 
runtime linked to a different libpq from a different build.



Program received signal SIGSEGV, Segmentation fault.
0x2aac3b8c in resetPQExpBuffer () from
/opt/Postgres9.0.1/lib/libpq.so.5
(gdb) bt\
#0  0x2aac3b8c in resetPQExpBuffer () from
/opt/Postgres9.0.1/lib/libpq.so.5
#1  0x2aabd610 in pqGets_internal () from
/opt/Postgres9.0.1/lib/libpq.so.5
#2  0x00658d70 in ?? ()
#3  0x00658d70 in ?? ()
#4  0x00659188 in ?? ()
#5  0x in ?? ()



While I'm no expert in the interpretation of wonky stack traces, I 
strongly suspect you have a corrupt stack here. Those addresses don't 
look right, and frame 5 is flat-out impossible AFAIK.


It's possible that the lack of debug symbols for other libraries and/or 
the fact that you didn't rebuild with --enable-debug is the cause, but I 
doubt it.


The trouble with stack corruption is finding the culprit. It's likely to 
be something specific to your system, like a custom-built library you 
installed in /usr/local that's being used for headers (includes) but not 
for the library, or where the library in /usr/local is being used for 
compile-time linking then a copy in /usr/lib is being used for runtime 
linking. Stuff like that is often the cause... but it can also be a 
genuine bug in libpq, psql, or one of the libraries that's only being 
triggered under some circumstance particular to your system or build.
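
A crude way to go hunting for stray copies, nothing clever, just seeing
what's lying around and what actually gets loaded:

  find / -xdev -name 'libpq.so*'     2>/dev/null
  find / -xdev -name 'libssl.so*'    2>/dev/null
  find / -xdev -name 'libcrypto.so*' 2>/dev/null
  ldd ./psql | grep -E 'libpq|libssl|libcrypto'   # what the runtime linker actually picks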


The first thing to do is see if you can make this bug happen on another 
system that's configured as close to your current one as possible. If 
you can't, figure out what's different about them. If you can still 
reproduce it on a clean RHEL system, send the configure command, 
installed package list and any other information required to reproduce 
the bug to this mailing list.


Most likely you'll find that you can't reproduce it on a clean RHEL 
system and will find something like a dodgy old copy of a library 
somewhere, an installer that's overwritten a library in /usr/lib64 with 
its own incompatible version, or something fun like that.


Unfortunately this kind of thing can be very hard to debug remotely. It 
*might* help if you upload your config.log from your compile somewhere 
and send a link to this mailing list (do not attach the file!), but it 
might not land up telling me anything. Similarly, rebuilding Pg with 
--enable-debug, running "make install" and re-testing might get a better 
backtrace - or might be similarly useless.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: FW: [ADMIN] Character set equivalent for AL32UTF8

2011-08-10 Thread Craig Ringer

On 10/08/2011 4:07 PM, Mridul Mathew wrote:


Does PostgreSQL make a distinction within Unicode in a similar fashion?


No.


We have not tested our Oracle al32utf8 databases on PostgreSQL, but
while creating databases in PostgreSQL, we see UTF8 as an option, but
not al32.


al32utf8 is Oracle specific and doesn't seem to be defined anywhere else.

What _application_ _level_ impact does this have for you? What changes 
do your apps expect to see in their use of or communication with the 
database?


I strongly suggest that you _test_ this in Pg and see.

--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgres service starts and then stops immediatly

2011-08-11 Thread Craig Ringer

On 11/08/2011 6:00 PM, Sofer, Yuval wrote:


Hi,

-We are using Postgres 8.3.7 on Windows

-We cannot start the Postgres service. The windows OS error message is:

"The PostgreSQL Server 8.3 service on Local Computer started then 
stopped.

Some services stop automatically if they have no work to do, for example,
the Performance Logs and Alerts service."

-No logs in the postgres log file

-When using pg_ctl, the postgres.exe is starting OK



This sounds like permissions. Check and make sure that the "postgres" 
user has the rights to write to the data directory, including the log files.


Has anything been changed lately? Restored/deleted/updated? System users 
modified? Domain policy changed?


Check the Event Viewer for more information about the failure, see if 
there's anything there.


What Windows version are you using?

Why PostgreSQL 8.3.7 when the latest release is 8.3.15? You're missing a 
lot of bug fixes.


--
Craig Ringer


Re: [ADMIN] replication from Oracle to PostgreSQL?

2011-08-11 Thread Craig Ringer

On 11/08/2011 10:57 PM, CS DBA wrote:



On Thu, 2011-08-11 at 08:41 -0600, CS DBA wrote:

Anyone know of tools / options that will allow Oracle to PostgreSQL
replication? or at least a real time feed or dblink?

EnterpriseDB's Postgres Plus Advanced Server has a realtime replication
solution bundled.

Regards,


That was the first thing I tried to push 'em towards... they shut it
down cause it's not free (which is weird since they're not considering a
real TCO, but they are the customer).


Using Oracle ... and objecting to "not free"?

*boggle*

There are lots of ETL tools available, and there's always the 
roll-your-own queue-based trigger replication system option. Of course, 
both options would probably cost more than buying EDB's already built 
and tested version...


--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] replication from Oracle to PostgreSQL?

2011-08-12 Thread Craig Ringer

On 12/08/2011 10:40 PM, CS DBA wrote:


There are lots of ETL tools available, and there's always the
roll-your-own queue-based trigger replication system option. Of
course, both options would probably cost more than buying EDB's
already built and tested version...


interesting idea, any specific ETL tools you could recommend?


The main open source ones I hear about are Talend 
(http://uk.talend.com/index.php) and Pentaho (http://www.pentaho.com/). 
 These are the only two I hear of people on this list using, but that 
doesn't mean others don't see use too.


Others found in a quick search include CloverETL 
(http://www.cloveretl.com/) and Apatar (http://apatar.com/). I'm sure 
there are tons more, but who knows what they're like.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Unable to login with phppgadmin on a remote host

2011-08-17 Thread Craig Ringer

On 18/08/2011 7:20 AM, Tom Browder wrote:

I have installed postgresql 9.0.4 from source on a Ubuntu 10.04 LTS
64-bit remote host.

I have installed phppgadmin, Apache2, and other required programs and
libraries via the Ubuntu package manager.

I have successfully created the user postgres, executed initdb
successfully, and can execute psql to connect to template1, all while
logged in via ssh to the remote host.

I have set postgresql to listen on all.

I have these lines in my pg_hba.conf file:

host  all myuser/32 md5
host  all postgres/32 md5

However, I cannot successfully login with phppgadmin on the remote host.


Given the config you showed, that *should* work. Did you restart apache 
after altering your phppgadmin config?


It's usually best to have phppgadmin (or whatever) connect to 127.0.0.1 
for localhost, rather than the public IP address anyway. I'd recommend 
letting phppgadmin connect to localhost (127.0.0.1/32) and setting that 
to md5 auth in pg_hba.conf .


Most web-based database apps don't work well with "ident" authentication 
because they're all running under the apache or www-data user, so you'll 
need to add a line for the database(s) and user(s) of interest that 
specifies md5 auth. For example, if your admin app uses the "postgres" 
user and you want it to access all databases:


host    all    postgres    127.0.0.1/32    md5

(or the "local" clause instead if your app uses a unix socket).


Personally I wish Pg would permit the client to use md5 auth when ident 
fails - support something like "ident_or_md5" as an authmode. That'd 
solve a lot of usability issues for new admins around configuring auth.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: R: Re: [ADMIN] adminpack installation problem

2011-08-17 Thread Craig Ringer

On 18/08/2011 12:02 AM, specialisedtools wrote:

Hi, I am building a site at the moment using the standard SQL system, I have
just read through your posts and before I get too far need to know if it
would be more beneficial to change the database accross?

The online system is as below: Any help is most appreciated!

http://www.specialisedtools.co.uk


What's the "standard SQL" system?

What problem are you having that you want to solve?

Try explaining in more detail. What are you trying to achieve? What is 
the current state of your effort? What have you already tried?


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Question on Postgres Index internals

2011-08-18 Thread Craig Ringer

On 19/08/2011 5:46 AM, A J wrote:

How does Postgres detect corruption of index data files ?

If their structure doesn't make sense, they're corrupt. It doesn't 
actively seek or detect corruption, because it shouldn't happen, and 
cannot be reliably discovered if it has.

What is the exact mechanism by which it knows that the index is corrupt ?

Numerous different ways, depending on the nature of the damage and the 
type of index. Often it won't know the index is damaged at all.

Can it happen that part of the index is corrupt but Postgres does not 
realize because those specific rows are not accessed (but other rows 
from that table are accessed)?


Yes, that is possible. It is also possible that a damaged index could 
cause issues like multiple instances of a value that's supposed to be 
unique, or a foreign key reference to a non-existent tuple.


Index corruption should not happen. If you are facing index corruption, 
either you have a hardware problem or you've hit a bug. The vast 
majority of cases are hardware faults.


Perhaps it'd help if you'd supply a little more detail about the 
background. Why you're asking this, and why you (presumably) suspect you 
have index corruption.


--
Craig  Ringer


Re: [ADMIN] Question on Postgres Index internals

2011-08-25 Thread Craig Ringer

On 21/08/2011 11:07 PM, A J wrote:
So I assume there is no redundant information stored such as checksum 
to validate the files against the stored checksums.


Nope, there are no checksums.

It's the storage subsystem's job to make sure it doesn't mangle data on 
disk and reads the data as it was written. It's the OS's job to make 
sure nobody else overwrites or messes with that data while it's in RAM 
cache or on disk.


Index corruption means you've hit a hardware fault, OS bug, or possibly 
a PostgreSQL bug.


Only some kinds of corruption could be detected by application-level 
checksums. If the OS or storage subsystem just failed to write a page, 
Pg wouldn't be able to tell, for example. Checksums _would_ detect split 
page writes, single-bit faults, corrupted disk sectors and the like and 
could potentially be useful.


I don't have an active issue. Just trying to understand in detail how 
Postgres behaves under index corruption.


Like most forms of fault behaviour, it is undefined in the "C standard" 
sense of the meaning, though perhaps not quite as willing to just eat 
data ;-)


--
Craig Ringer


Re: [ADMIN] What is postgresql status? I'm not sure

2011-08-28 Thread Craig Ringer

On 25/08/2011 2:09 AM, Mcleod, John wrote:


Hello all,

Don't shoot me, I'm just the fella sweeping up after a departure.

I'm supporting a web site with mapping features using Mapserver and 
PostgreSql 7.5 for windows.


Recently, my mapping features went down.

Error messages that display are the following...

Warning: pg_query(): Query failed: ERROR: xlog flush request 
0/34D85078 is not satisfied --- flushed only to 0/34CD4518 CONTEXT: 
writing block 0 of relation 394198/412173 in 
C:\ms4w\Apache\htdocs\davison\mapping\gis1\viewparcel_nolink.php on 
line 10




Sounds like a disk fault or disk-full issue.



It looks to me that the query failed based on a hard drive issue.  
Well, over the weekend we swapped out the hard drive for new.  The 
site comes back fine but I'm still receiving the error.

If you have actual data corruption as a result of the disk failure, you 
may have to restore from a backup. Theoretically data corruption 
shouldn't happen if a HDD just starts failing writes, but in reality it 
can happen due to possible resulting file-system corruption, OS issues, 
or the disk actually returning corrupted blocks on reads.



C:\ms4w\apps\pgsql75win\data>del postmaster.pid

Could Not Find C:\ms4w\apps\pgsql75win\data\postmaster.pid



Eek, who/what is doing that?!?

Don't delete the postmaster pid file. If you do so without making 
EXTREMELY sure there are no backends still running, you risk severe 
database corruption if you then start a new postmaster.


LOG:  database system was shut down at 2011-08-24 17:30:14 Eastern 
Standard Time


LOG:  checkpoint record is at 0/34CD4078

LOG:  redo record is at 0/34CD4078; undo record is at 0/0; shutdown TRUE

LOG:  next transaction ID: 1198832; next OID: 1772830

LOG:  database system is ready

I'm not sure if the database is up or not.



Well, it's running.


I don't even see postgresql or mapserver as Windows services.



How did you start it? If you didn't start it with "net start" or using 
the "services.msc" MMC snap-in, it won't start as a service. It looks 
like you used some kind of batch file to start it under your own user 
account instead, but you haven't shown the full command line(s) involved.


--
Craig Ringer


Re: [ADMIN] PostgreSQL 8.2 installation problem

2011-08-29 Thread Craig Ringer

On 29/08/2011 4:08 PM, Lukasz Brodziak wrote:


Hello,

When I try to install PostgreSQL 8.2.4 on Windows 7 HP 64-bit

8.2.4 (old) on Win7 x64? Why?

At least install the latest 8.2.x point-release, 8.2.21 . Here are all 
the fixes you are missing by installing 8.2.4 instead:


http://www.postgresql.org/docs/8.2/interactive/release.html

I wouldn't particularly recommend installing such an old version on 
64-bit Windows, either. It should work, but it won't be as well tested 
and neither will the installer. Use a recent version if possible, or 
just use 32-bit Windows.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] create schedule job

2011-08-30 Thread Craig Ringer

On 30/08/2011 9:29 PM, Karuna Karpe wrote:

Hello,

I created one job using pgAgent in postgres studio.  In that job I 
want to back up the whole database every day.  But when I check the backup 
location, there is no backup file created.


Write permissions for PgAgent user to target location?

SELinux involvement?



Also, you seem to be doing both a pg_dumpall that backs up all the 
databases, and individual pg_dump runs for each database. Typically 
you'd do one or the other. If using individual database pg_dump, you 
usually use pg_dumpall with the "--globals-only" option to capture user 
definitions etc.
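
A minimal sketch of that usual split, re-using the variable names from
your script (the database names are made up):

  "$PGBIN/pg_dumpall" -h "$PGHOST" -U "$PGUSER" --globals-only > "$BACKUPDIR/globals.sql"
  for db in db1 db2; do
      "$PGBIN/pg_dump" -h "$PGHOST" -U "$PGUSER" -Fc "$db" > "$BACKUPDIR/$db.dump"
  done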




I recommend that you add error handling to your script. First, add

set -e -u

to the top, so the script treats undefined variables as errors and so 
that it aborts when commands fail. Then add error handling, eg:


$PGBIN/pg_dumpall -h $PGHOST -U $PGUSER | gzip \
    > $BACKUPDIR/fullbackup-$themonth.sql.gz


becomes (eg)

if ! "$PGBIN/pg_dumpall" -h "$PGHOST" -U "$PGUSER" | gzip > 
"$BACKUPDIR/fullbackup-$themonth.sql.gz" ; then

  echo >2 "Failed to dump global data from database. Backup FAILED."
  exit 1
fi

Then make *SURE* that cron or pgagent or whatever check the exit status 
of your script and email you the contents of stderr when it fails. Do 
this by temporarily adding:


echo 2>"Manually aborting"
exit 1

to the top of your script and making sure that you get an error email 
containing "Manually aborting" when it is there.


Once that's done, you'll know that if something makes the backup fail, 
you will be notified and the message will contain the error output from 
the failed command.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Best procedure for restricted access

2011-09-12 Thread Craig Ringer

On 12/09/2011 9:16 PM, Rainer Leo wrote:

Hello,

I have been asked to configure a database role
to be used for ODBC access.

So far I have done this:

CREATE ROLE odbc_user LOGIN
   ENCRYPTED PASSWORD 'bar'
   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE;

REVOKE ALL ON DATABASE foo FROM odbc_user;
REVOKE CREATE ON SCHEMA public FROM public;
GRANT SELECT ON v_sales TO odbc_user;

When I try:

foo->  SELECT * FROM customers;
access is denied as expected

foo->\d baz
I see table definitions.


You'd have to mess with permissions on the pg_catalog tables and the 
INFORMATION_SCHEMA views. This may have unexpected side-effects or cause 
some clients that expect to be able to use those schema to get metadata 
to cease functioning correctly.


I don't think denying access to table definitions is part of the 
security model's goals at the moment; it's about limiting access to 
_data_ not DDL or definitions. You'll note that function sources are 
also available via pg_catalog, though it seems to be reasonably safe 
(from what I hear, having not tested it) to change permissions to deny 
access to those.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_upgrade --link

2011-09-13 Thread Craig Ringer

On 09/14/2011 03:38 AM, MirrorX wrote:

hello to all,

i have tried pg_upgrade to upgrade from 8.4 to 9.0. the operation was
completed succesfully. i only have one question:

i did the procedure twice. once without the 'link (-k) option' and one with
it. obviously the attempt with the link option was much faster. but, what
does link mean here? symbolic link? i saw that after the procedure the old
'data' folder was there too, and i could delete it, so i guess the answer is
not symbolic link. so what does that link mean? thx in advance


http://en.wikipedia.org/wiki/Hard_link
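
You can see it for yourself; the paths and file names below are made up,
so pick any relation file that exists in both clusters:

  ls -li /path/to/8.4/data/base/16384/16385 /path/to/9.0/data/base/16384/16385
  # identical inode numbers in the first column mean both paths point at one
  # physical file, which is why -k is fast and copies nothing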

--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Problem restoring a dump

2011-09-13 Thread Craig Ringer

On 09/13/2011 08:47 PM, Johann Spies wrote:

Thanks Giulio and Gabriele,


 as Giulio pointed out, it seems like the destination database is in
  LATIN1 encoding, rather than UTF8. Could you please confirm this?

That was the case.  I deleted one of the databases and recreated it as
a UTF-8 encoded database and the import went well.

Question:  Can I change the encoding system of an existing database?  If
so, how?

You can re-encode a dump (see pg_dump's -E flag) then reload it into a 
new database with the new encoding. This will only work if the source 
database contains only characters that exist in the target encoding.


You can't change the encoding of a database in-place.
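
Roughly, with made-up database names:

  pg_dump -E UTF8 olddb > olddb_utf8.sql
  createdb -E UTF8 -T template0 newdb
  psql -d newdb -f olddb_utf8.sql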

--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] How to change query planner configuration paramerters

2011-09-18 Thread Craig Ringer

On 18/09/2011 5:51 PM, Melaka Gunasekara wrote:

Merge Full Join (cost=174.40..193.69 rows=1159 width=286)


Can you suggest why the merge join is being suggested when I have 
turned it off ?




AFAIK SETting a join type to "off" really just increases the cost 
estimate so high that the planner will avoid using it where it has any 
alternative. In this case, it doesn't seem to think it has any other way 
to execute the query, or it thinks that any other way will be so 
incredibly, insanely slow that the merge join is still better.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] constraint triggers

2011-09-28 Thread Craig Ringer

On 09/28/2011 08:54 PM, Maria L. Wilson wrote:


UPDATE dataset
SET gracount = gracount + 1
WHERE dataset.inv_id = NEW.inv_id;


That'll serialize all transactions that touch the same inv_id, so only 
one may run at once. The second and subsequent ones will block waiting 
for an update lock on the `dataset' tuple for `inv_id'.


When you think about it that's necessary to prevent a wrong result when 
transaction A then B run this statement, transaction B commits, and 
transaction A rolls back. What's the correct answer?


To fix this you'll need to change how you maintain your `dataset' table. 
Exactly how depends on your requirements.


You can trade read performance off against write performance by 
INSERTing new rows instead of UPDATEing them, so you do a:


  SELECT count(inv_id) FROM dataset WHERE dataset.inv_id = ?

instead of a

  SELECT gracount FROM dataset WHERE dataset.inv_id = ?

to retrieve your count.

You can have both fast reads and fast writes if you accept potentially 
out-of-date `gracount' data, maintaining `gracount' as a materialized 
view that you refresh using LISTEN/NOTIFY *after* a transaction commits. 
It's possible for it to be a little out of date, but writers no longer 
interfere with each other and readers no longer have to re-do the 
counting/aggregation work.


You can live with serializing writes like you currently do in exchange 
for the greater read performance of maintaining counts. This may not be 
so bad once you understand what's happening and can keep transactions 
that trigger this problem short, preventing them from blocking others 
while they do unrelated work.


In the end, this looks like a simple problem but it isn't when you 
consider the possibility of transactions rolling back.



Our first try to solve this problem has been to convert these triggers
into a constraint trigger which allows for DEFERRABLE INITIALLY DEFERRED
flags. This, we are finding, is forcing the trigger function to run
after the triggering transaction is completed. We believe this will fix
our locking problem and hopefully speed up our inserts again.


That should help, but it's a form of trading timeliness off against 
performance. Queries within the same transaction won't see the updated 
`dataset' values, so if you're relying on them for correct operation 
later in the transaction you could have issues.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Some Postgresql Concepts

2011-10-07 Thread Craig Ringer

On 10/07/2011 02:12 PM, Karuna Karpe wrote:


1. Flashback fundamental


Flashback? I think you might be confusing this:

  http://oracle.com/

with this:

  http://postgresql.org/


2. Compression, encryption and decryption


Check out TOAST and pgcrypto in the *MANUAL*.

  http://lmgtfy.com/?q=postgresql+manual

(The above too grumpy? Google is a key skill as a DBA, and asking people 
to do your basic research for you for free and hand the answers to you 
on a platter is a bit much.)



3. DB Link and Heterogeneous Concept


  "Hetrogenous Concept"?

  dblink is, again, something best learned about using the documentation
  and Google.


4. Storage Management


Details?

What do you want to know? "Storage management" is kind of a broad 
category. You probably want to read the manual first, then look into 
things like tablespaces, TOAST, vacuum and autovacuum, etc.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] password problems

2011-10-07 Thread Craig Ringer

On 10/07/2011 10:14 PM, Alexander James Spence [axs] wrote:


We have updated our server to 9.1; the previous version used an external
file for authentication (unix crypt)


PostgreSQL's basic auth scheme hasn't changed in a long time, so it's 
pretty unlikely that the previous version inherently used an external 
file for auth. Is it possible the previous version was _configured_ to, 
via pg_hba.conf ?


If not: Which previous version, exactly? On what OS and (if linux) 
distro? What was the external file called, where was it and what is its 
format? Are you talking about /etc/passwd or something else?


Is it possible you were authenticating via OS usernames and passwords 
before, by using the pluggable authentication modules (PAM) subsystem in 
your OS via pg_hba.conf?



Can I continue to use this file if so how is it referenced in pg_hba.conf?


Alas, my psychic powers are insufficient to answer this question. 
Perhaps if you said something about what the file is and where it is...


See the manual:

  http://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html

for more info on pg_hba.conf, in particular the auth-method section and 
the elaboration of it given in this manual page, which the pg_hba.conf 
page links to:


  http://www.postgresql.org/docs/current/static/auth-methods.html




If I want to continue to use unix crypt passwords how do I get them into
the database. I have seen references to crypt and gen_salt but cannot
get these to work.


Do you refer to pgcrypto? (Tip: If you refer to something, link to it so 
others know for certain what you're talking about).


  http://www.postgresql.org/docs/current/interactive/pgcrypto.html

If so: that's for application-level crypto work, when your app wants to 
do crypto in the database. For database user authentication you probably 
want ALTER USER:


  http://www.postgresql.org/docs/current/interactive/sql-alteruser.html

As far as I know, PostgreSQL's default built-in user database uses a 
salted md5 hash for user passwords. This will prevent you from copying 
existing `crypt'ed passwords over directly into PostgreSQL's internal 
user list, as it's a different hash algorithm. You should still be able 
to use other auth methods like PAM to use them, though.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Dumping data using pg_dump after chrooting to a different partition

2011-10-24 Thread Craig Ringer
On 25/10/11 11:01, Krishnamurthy Radhakrishnan wrote:
> Thanks Craig.
> 
> After configuring to accept TCP connections on port 5432, I tried to
> specify the hostname as shown below and that didn't help. Is there
> anything else that needs to be configured?
> pg_dump -h bldr-ccm36.cisco.com -p 5432 -a -U postgres
> pg_dump: [archiver (db)] connection to database "postgres" failed: could
> not connect to server: Connection refused
> Is the server running on host "bldr-ccm36.cisco.com" and accepting
> TCP/IP connections on port 5432?

Use "localhost" or "127.0.0.1" if it's on the same machine to simplify
things. If you try to connect to your host's public IP but
postgresql.conf has listen_addresses='127.0.0.1' or
listen_addresses='localhost' then you won't be able to connect because
Pg isn't listening on your public IP, only your loopback IP. A chroot
won't affect tcp/ip, so it's still localhost when you're chrooted into
another FS.

Also, you may have firewall rules in place that prevent the connection,
check for that.
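
So, roughly, with a made-up database name:

  pg_dump -h 127.0.0.1 -p 5432 -U postgres -a mydb > mydb_data.sql
  # and double-check what the server is actually listening on:
  grep listen_addresses /path/to/data/postgresql.conf
  netstat -ltn | grep 5432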

--
Craig Ringer

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Error in intalling PostgreSQL 9.1

2011-10-30 Thread Craig Ringer
On 29/10/11 19:07, Jelena Gorelikova wrote:
> 
> During the install of a new version of PostgreSQL 9.1 on a Windows 7
> (64bit) machine I got the following error:
> 
> A fatal error occurred whilst loading database modules. Problem running
> post-install step. Installation may not complete correctly.
> 
> What should I do in this case?


http://wiki.postgresql.org/wiki/Troubleshooting_Installation

--
Craig Ringer

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] How to deal with corrupted database?

2011-11-09 Thread Craig Ringer

On 11/09/2011 07:02 PM, Ruslan A. Bondar wrote:

Hello all.

This is a first time I receive such an issue.
My database was corrupted some way.

Before you do ANYTHING else, make a copy of your database somewhere 
safe. See:


http://wiki.postgresql.org/wiki/Corruption

When I'm trying to access the database via psql I receive:

root@udb:/etc/bacula# psql -U bacula
psql: FATAL:  could not read block 0 in file "base/16387/86057840": read only 0 
of 8192 bytes


So I want to drop it, and recover from backup. But when I'm trying to drop the 
database I see:

postgres=# drop database bacula;
ERROR:  could not access status of transaction 15892843
DETAIL:  Could not open file "pg_subtrans/00F2": No such file or directory.


Is there any way to recover the database to working state, or drop it?

*ONLY* once you've made a full backup copy, you may be able to set 
zero_damaged_pages to get a usable dump.
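
Something like this, run as a superuser and ONLY after that file-level
copy is safely stored elsewhere, because zeroed pages are gone for good:

  PGOPTIONS='-c zero_damaged_pages=on' pg_dump -U postgres bacula > bacula_rescue.sql

It may still fall over if the damage is in a system catalog, but it's the
usual first thing to try.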


Do you know what caused this? The missing files suggest it was probably 
file system corruption - was there a disk failure? fsck run with errors? 
Unexpected reboot on a RAID controller with a dead backup battery?


--
Craig Ringer


Re: [ADMIN] How to deal with corrupted database?

2011-11-09 Thread Craig Ringer
On 09/11/11 21:37, Ruslan A. Bondar wrote:

> This database isn't mission critical, so if you want - I can
> experiment on this.

Brilliant! It's rare for people to be able to investigate issues like
this, most of the time they just have to get up and running ASAP and
often destroy data required to usefully investigate in the process.

> First issue was some kind of deadlock (concurrent insert and concurrent 
> delete on a table). I saw them while reindexing the database. 
> Also mesages like this were in dmesg:

> [3681001.529385]  [] ? __mutex_lock_common+0xe8/0x13b
> [3681001.529401]  [] ? __mutex_lock_slowpath+0xf/0x11
> [3681001.529416]  [] ? mutex_lock+0x17/0x24
> [3681001.529429]  [] ? mutex_lock+0x17/0x24
> [3681001.529444]  [] ? generic_file_llseek+0x17/0x44
> [3681001.529458]  [] ? generic_file_llseek+0x0/0x44
> [3681001.529473]  [] ? vfs_llseek+0x30/0x34
> [3681001.529487]  [] ? sys_llseek+0x3a/0x7a
> [3681001.529501]  [] ? syscall_call+0x7/0xb

OK, so it was hung waiting for a lock within the VFS layer. That's
rather odd at best.

Your system details are:

OS: Linux Debian Sid
postgres version: 9.0.4
uname: Linux udb 2.6.32-5-xen-686 #1 SMP Tue Oct 19 17:09:04 UTC 2010
i686 GNU/Linux

so you're running a Xen kernel (guest? Or host? I'm assuming guest) on a
very fresh kernel on a testing version of Debian. I won't be entirely
shocked if this is a kernel issue.

I don't see any obvious deadlock reports on a search for

  "vfs_llseek" OR "generic_file_llseek" deadlock

but

  "vfs_llseek" OR "generic_file_llseek" mutex_lock

finds:

  https://bugzilla.redhat.com/show_bug.cgi?id=716991 (unrelated?)

and more interestingly:

  http://postgresql.1045698.n5.nabble.com/Load-Increase-td4269457.html

... which unfortunately doesn't give any OS/kernel info, but is another
recent report.

There have been some recent changes in file system locking:

  http://lwn.net/Articles/448038/

so I'd appreciate it if you could pass this report on to the kernel
folks involved in case they want to investigate further.



> So I've stopped software caused these inserts and deletes, but reindexing 
> shows same warnings. I've restarted postgresql server.

How did you restart PostgreSQL?

If there were backends hung in the vfs, did they eventually terminate by
themselves? If not, did you terminate them yourself? How? With a signal
(kill)? Which signal? Some other way?

> Postgres restarted successfully, but the database became unaccessible. 
> Filesystem is clean. File base/16387/86057840 exists but is zero length. File 
> pg_subtrans/00F2 does not exists.

Hm, ok. I'm a bit suspicious of the deadlock in the kernel. It isn't
necessarily a kernel issue, but given the system you're running on I
won't be too surprised if it is either. There's a fairly good chance the
trigger for this was a kernel issue munching your data.

Are you able to reproduce this issue with another instance of PostgreSQL
running with a freshly created database cluster (initdb) ?
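
For example, a throwaway cluster on a spare port (paths and port made up):

  initdb -D /tmp/pgtest
  pg_ctl -D /tmp/pgtest -o '-p 5433' -l /tmp/pgtest/postmaster.log start
  # then point the same insert/delete workload at port 5433 and watch for hangs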

> 2011-11-09 16:25:04 MSK FATAL:  xlog flush request 171/1B1374E0 is not 
> satisfied --- flushed only to 171/19C26010
> 2011-11-09 16:25:04 MSK CONTEXT:  writing block 0 of relation 
> base/16385/86064815_vm
> 2011-11-09 16:25:04 MSK LOG:  startup process (PID 3570) exited with exit 
> code 1
> 2011-11-09 16:25:04 MSK LOG:  aborting startup due to startup process failure

I don't know enough about Pg's guts to suggest how to proceed from here.
Maybe a pg_resetxlog might get you up and running (albeit with potential
data damage) but I'm not sure.

--
Craig Ringer

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] database not using indexes

2011-11-09 Thread Craig Ringer
On 09/11/11 23:58, Ruslan A. Bondar wrote:
> 
> Why have you decided it isn't using indexes?
> If index exists - postgres will use it.

Actually that's not necessarily the case.

PostgreSQL will only use an index if (a) the index can be used for that
kind of query and (b) using the index will be faster than doing a
sequential scan.

If a query requires all the data in a table, PostgreSQL is quite likely
to do a sequential scan of the table, because it'll need to read every
block anyway. Reading just the table (without reading the index) in
order is much faster than reading the index then doing semi-random reads
of the table.

Additionally, not all indexes can be used for all operations. For
example, a LIKE query with a prefix wildcard eg "%FRED" cannot use a
btree index, so any btree index on the searched field will be ignored.
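
EXPLAIN will show you which choice the planner actually made; the table
and columns here are invented:

  psql -d mydb -c "EXPLAIN SELECT * FROM big_table WHERE id = 42;"            # likely an index scan
  psql -d mydb -c "EXPLAIN SELECT * FROM big_table;"                          # likely a seq scan
  psql -d mydb -c "EXPLAIN SELECT * FROM big_table WHERE name LIKE '%FRED';"  # a btree index can't help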

--
Craig Ringer

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] database not using indexes

2011-11-09 Thread Craig Ringer
On 10/11/11 01:06, Silvio Brandani wrote:
> Ok,
> 
> the problem was  on a big table on query like this:
> 
> select outmessage0_.out_msg_id as out1_0_
> from edi.out_messages outmessage0_, edi.transaction_set_partners
> transactio1_
> where outmessage0_.transaction_set_partner=transactio1_.trn_set_prtn_id
> and outmessage0_.status_id='TOSND'
> and transactio1_.legacy_sender_id='ALL'
> and transactio1_.legacy_receiver_id='4542'
> and outmessage0_.transaction_set_id='INTERNAL_USE'
> order by outmessage0_.out_msg_id


Please read:

  http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

and as linked to there, read:

  http://wiki.postgresql.org/wiki/SlowQueryQuestions

then try again including all the information you are asked for. In
particular, you have not included EXPLAIN ANALYZE output or any
information about your PostgreSQL version. Failure to provide that
information makes it difficult or impossible to answer your question,
wasting your time and everyone else's.

We don't link to that document just for fun. It's information that is
important to answer questions properly.

--
Craig Ringer

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] error log, tablespace, wal files

2011-11-10 Thread Craig Ringer

On 11/10/2011 02:29 PM, Karuna Karpe wrote:


  1) what is tablespace in postgres?


It's a way of putting some tables, indexes, etc into a different part of 
your system's storage. Sometimes you want to do this to put a really big 
table on slower, cheaper storage. Sometimes you might want to put 
indexes on really fast but expensive or small and limited storage. It's 
also useful for splitting up tables that're often accessed concurrently 
so they're on different disk arrays and don't compete with each other 
for I/O. There are lots of uses.



  2) one more issue is that, I try to rename or delete some file in
/opt/PostgresPlus/9.0/data/base/16428/ directory.


Don't do that!

Do not mess with anything in the PostgreSQL data directory unless you 
know *EXACTLY* what you are doing.



 when I restart the
postgres server, it starts normally.  But I think this is wrong.  I
renamed or deleted a file in the above directory, which means my
database (having OID 16428) is corrupted.  So how can my postgres server
start normally??


Because it hasn't needed to access that file yet. When it does, it'll 
report errors for attempts to access that file but will otherwise 
continue to function normally.


PostgreSQL doesn't try to verify the whole database on startup. Imagine 
how long that'd take!



I want to see the log file for that database
corruption. Can I see it?? and where???
and please explain to me what's going on in the above case??


You need to figure out what you deleted by looking in pg_catalog, then 
attempt to access it.
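
For instance, if the file you renamed or deleted was base/16428/NNNNN, then
in the database with OID 16428 (the file name is left as a placeholder here,
use the real number):

  psql -d the_affected_db -c "SELECT relname, relkind FROM pg_class WHERE relfilenode = NNNNN;"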


It'd be nice if there were a built-in tool or utility to verify a 
database catalog against the on-disk contents, but it's never supposed 
to be necessary. The need only arises when someone does something silly 
or when a file system goes bad - and in the latter case, who says it's 
the catalog that's correct?


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] lost tablespace

2011-11-10 Thread Craig Ringer

It'd help to know what Pg version you're running. Comments inline below.

On 11/10/2011 07:01 PM, Thorsten Strufe wrote:


sorry, noob here. So we've just started using postgre and right at the
start we had a hardware problem with a hard drive failing. We can still
start postgre but a tablespace for a db that we don't even need anymore
is lost. Now we would think that shouldn't be a problem, we could simply
drop that db. postgre, however, doesn't allow us to, but tells us to
vacuume the db first. Starting postgre singleuser and trying to run
vacuum still gives us the same trouble.


It'd be nice if PostgreSQL would `DROP DATABASE' successfully when there 
were missing files, just raising WARNINGs for those files. I'm not too 
shocked that it's not implemented though, as the files backing a 
database should never just go away, and if they do the database is 
considered severely damaged at best.



Now a naive question: is there any (sane) way to simply get postgre to
entirely forget about the broken db - and that's it (dumping the other
db and loading it back might not be the best solution, since it's rather
large..)?


No sane way I know of. A less-than-sane way is to mess with pg_catalog 
to get rid of the database, but that's unsafe at very best. I'd just 
dump the other databases, then re-initdb and reload them; that's the 
safest way by far.
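
Roughly, with made-up names and paths, and do stop the server before
moving the old data directory aside:

  pg_dumpall --globals-only > globals.sql
  pg_dump -Fc gooddb > gooddb.dump
  # stop the server, move the old data directory out of the way, then:
  initdb -D /path/to/new/data
  # start the server on the new data directory, then:
  psql -f globals.sql postgres
  pg_restore --create -d postgres gooddb.dump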



2011-11-10 11:05:10 CET WARNING:  database with OID 17546 must be
vacuumed within 99 transactions


Now that's interesting. I wonder why it thinks the database is at risk 
of transaction ID wraparound? It could genuinely be, or it might be 
confused because the files backing the database are missing.


Anyone have an opinion on whether it should be possible to drop a 
database that's hit or nearly hit transaction ID wraparound w/o a vacuum first? I 
don't see any point vacuuming just to drop it, but this may be too much 
of a corner case to justify the testing a special case like this would 
require.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] How and when are encoding DLLs used on Windows?

2011-11-16 Thread Craig Ringer

On 11/17/2011 12:04 AM, David Schnur wrote:

I bundle Postgres (8.3.15) with another product as a back-end database.
  On Windows, the default build includes a bunch of what appear to be
codec libraries, with names like,
utf8_and_cyrillic.dll, ascii_and_mic.dll, etc.  But using Microsoft's
dependency walker tool, I see no references to any of these in
libpq.dll, psql, postgres or initdb.


They're loaded by LoadLibrary calls. You'll see that the names do appear 
in the server executable as strings.


The PL libraries and contrib libraries are similar, in that you won't 
see direct dependencies but they'll still get used via LoadLibrary if 
your database calls for them.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] silent installer

2011-11-22 Thread Craig Ringer

On 11/23/2011 01:06 AM, Skusa, Andre wrote:


Hello postgres admins,

is there any silent installer for current postgres releases (>= 9.x) 
available? Or what should I consider if I plan to include postgres in 
the installer of my own software package?



The installer has command line options for non-interactive / silent 
installation. Try using the --help, -h or /? flags (don't remember 
which) on the installer.


Consider *asking* the user if they want Pg installed or if they'd like 
to provide their own database settings instead. If you do silent 
install, please consider what'll happen if the user already has the same 
version of PostgreSQL installed. You'll still need to provide them with 
a way to tell your application which database settings to use, since 
your installer won't overwrite an already-installed version of Pg.


Alternately, you can use the PostgreSQL .zip package, creating a user 
account and service account yourself. That has the advantage of letting 
you create your own package that's independent of any regular PostgreSQL 
installation so you can set your own user account, service account, 
password, etc. The downside is that it's a lot more work to get right.


--
Craig Ringer


Re: [ADMIN] Point in time recovery

2011-11-23 Thread Craig Ringer

On 11/23/2011 08:39 PM, Karuna Karpe wrote:

Hello,

 Can any one please explain me how to use PITR?

Are you struggling with one particular issue with PITR setup where the 
documentation isn't helping? If so, please explain in more detail, 
making sure to include all the appropriate information about your setup.


See: http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

If this is a more general question about how to use PITR and you haven't 
tried it yet, start with the manual here:


http://www.postgresql.org/docs/current/static/continuous-archiving.html
http://www.postgresql.org/docs/current/static/warm-standby.html

--
Craig Ringer


Re: [ADMIN] Sequence "self" change - possible?

2011-11-28 Thread Craig Ringer

On 11/28/2011 10:04 PM, Lukasz Brodziak wrote:

Hello,

Is it possible for a sequence to decrement its value by itself? Or is it
possible to see whether it was changed by someone? The thing is that on
one of our client's DBs the sequence rolled back to the state from 8 months ago.


It shouldn't generally be possible, no. It'd have to be changed by a 
call to nextval(), setval() or the like, or a direct UPDATE to the 
underlying sequence pseudo-table.


Could someone have tried to restore a backup from 8 months ago? That'd 
result in a setval(...) call when pg_restore or psql (depending on 
backup format) tried to set the sequence position.


In your case, though, your next post mentions possible database 
corruption, so I'm wondering if this is related. If something's been 
scribbling over the database files, someone's half-restored an old 
file-level backup, or something similarly horrible then anything could 
happen.


--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Database is in recovery mode.

2011-11-28 Thread Craig Ringer

On 11/29/2011 01:11 AM, Lukasz Brodziak wrote:

Hello,

I have a problem with a client's database. When I log into the application 
and work on the data everything seems fine, but when I try to dump 
the database the connection closes. I have located the faulty table and 
when I try to cluster it I get FATAL: database is in recovery mode. It 
seems that there are some corrupted rows in the table but I can't find 
a way to repair the table and therefore the whole DB.
PG is version 8.2 (in 2 weeks we are migrating to 9.0 and we 
are preparing our clients' DBs for the migration) running on Windows.

Any help will be useful with this issue.


First, before you do ANYTHING else, shut the database down and take a 
file-level copy of the entire datadir. See:


http://wiki.postgresql.org/wiki/Corruption

Once you've done that, you can attempt a repair. If the data in the 
problem table isn't too important you can just drop the table. If it's 
more important you could try enabling zero_damaged_pages (after READING 
THE DOCUMENTATION THAT WARNS YOU THAT THIS CAUSES PERMANENT DATA LOSS 
http://www.postgresql.org/docs/current/interactive/runtime-config-developer.html) 
and do a "SELECT * FROM thetable" to see if Pg can fix it. Running 
"REINDEX thetable" might also help if the problem is with an index, but 
since a COPY fails I rather doubt it's index related in your case.
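
A minimal sketch of that session, run against the copy only ('thetable' 
stands in for your damaged table):

  -- only ever on a copy of the data directory, never the original
  SET zero_damaged_pages = on;    -- superuser only; damaged pages are zeroed, their rows are lost
  SELECT count(*) FROM thetable;  -- force a full scan so every page gets read
  REINDEX TABLE thetable;         -- rebuild the table's indexes afterwards
  SET zero_damaged_pages = off;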


If zero_damaged_pages doesn't help or if you can't afford to risk losing 
*any* possibly recoverable data, you should consider hiring a consultant 
who knows PostgreSQL's innards and the table format well. See: 
http://www.postgresql.org/support/professional_support/ .


--
Craig Ringer


Re: [ADMIN] Repeatable crash in pg_dump (with -d2 info)

2011-11-28 Thread Craig Ringer

On 11/29/2011 06:20 AM, David Schnur wrote:

I probably can't get a stack trace


Getting a usable stack trace on Windows isn't actually too hard. See:


http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Windows

That's focused on debugging the `postgres.exe' backend, but all the 
setup applies just as well to pg_dump. You then start pg_dump via 
windbg.exe or Visual Studio Express and, when it crashes, follow the 
instructions given in the wiki to produce the backtrace.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] transaction error handling

2011-11-30 Thread Craig Ringer

On 11/30/2011 09:19 PM, Nicholson, Brad (Toronto, ON, CA) wrote:
This functionality is something that Postgres can do today. We expose 
the ability to do this with explicit savepoints. The difference is 
that Oracle allows you to set it on a per transaction basis (I 
believe) and it will behave this way for all statements in the 
transaction, whereas we need to do it explicitly. Looking through the 
archives there does seem to be a performance problem on commit in 
Postgres if you issue a lot of savepoints (there were discussions of a 
fix but I am not sure the status of this).

Savepoint performance has had several improvements over time. Back in 
8.1 when I got started developing against Pg seriously I was having 
*MASSIVE* performance issues with PL/PgSQL exception blocks (which use 
savepoints) in loops; these days it's perfect.


To make automatic savepoints viable, Pg would need to be able to 
completely forget a savepoint once it's been released, so there's no 
ongoing cost. That way a transaction would only need two savepoints at 
any particular point in time. My understanding is that it's not there 
yet; AFAIK released savepoints still have a non-trivial cost that would 
add up if someone was using automatic savepoints in (say) a 10,000 
INSERT transaction.
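
For comparison, the explicit form available today looks roughly like this 
(table name made up); automatic savepoints would effectively wrap every 
statement the same way:

  BEGIN;
  INSERT INTO t VALUES (1);
  SAVEPOINT s;
  INSERT INTO t VALUES (1);     -- suppose this one fails, e.g. a unique violation
  ROLLBACK TO SAVEPOINT s;      -- discard just the failed statement
  RELEASE SAVEPOINT s;
  INSERT INTO t VALUES (2);     -- and the transaction carries on
  COMMIT;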


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] bad block problem

2011-12-07 Thread Craig Ringer

On 12/08/2011 08:20 AM, Walter Hurry wrote:

On Wed, 07 Dec 2011 22:20:30 +, jkells wrote:


I am relying on identifying and correcting a bad block.


Well, good luck with that. Most of the time you can't. Just check your
disk, replace it if necessary, restore from your backup and roll forward.

Oh, you can't do that, since you didn't bother to back up. Never mind.


Unless you're using synchronous replication to clone *every* transaction 
on commit to a spare machine, you'll still lose transactions on a 
failure no matter how good your backups are.


Even if the OP was doing nightly dumps, they'd be entirely justified in 
wanting to try to get a more recent dump on failure.


If they're not backing up at all, yes, that was dumb, but they know that 
now. Asking for help isn't unreasonable, and this isn't a stupid "just 
google it" question. They've made an effort, posted useful info and log 
output, etc. Please don't be too hard on them.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] bad block problem

2011-12-07 Thread Craig Ringer

On 12/08/2011 07:41 AM, Kevin Grittner wrote:


That sounds like your storage system is failing, quite independently
from PostgreSQL.  Copy the entire data directory tree to some other
medium immediately, and preserve this copy.  If you hit bad blocks,
retry if possible.


If you find files you can't copy in their entirety, try using dd_rescue 
to copy it with a hole for the bad block. dd_rescue is an _incredibly_ 
useful tool for this, as it'll do bad-block-tolerant copies quickly and 
efficiently.


Once you have a complete copy of your datadir, stop working on the 
faulty machine. Make your first copy read-only. Duplicate the copy and 
work on the duplicate when trying to restore. I'd start with enabling 
zero_damaged_pages to see if you can get a dump that way.


Do **NOT** enable zero_damaged_pages on the original. Do it on the 
duplicate of the copied data.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Resume Incomplete Pg-restore

2011-12-14 Thread Craig Ringer

On 12/15/2011 02:52 AM, P R wrote:
Is there any way to resume an incomplete Pg-restore from a custom 
Pgdump file?


Not reliably.

If you know it failed or stopped at a particular defined point, you 
might be able to selectively restore table-by-table from there, but that 
won't work if you have any circular references.


--
Craig Ringer


Re: [ADMIN] User password encryption using a stronger hashing function?

2011-12-22 Thread Craig Ringer

On 22/12/2011 5:41 PM, lst_ho...@kwsoft.de wrote:

Zitat von "Liu, Jianli (Jianli)" :

No one has responded to my questions. I am wondering if anyone can 
point me to where in postgresql source code I can further look into 
the issue and explore the possibility of this change?  Thanks.


I was already looking for the same topic but could not find anything 
useful. Maybe no one is using the built-in password encryption, but 
does it at application level with transparent data access?


Maybe one of the developers can explain why the choices for built-in 
password encryption are so limited.


I'm not a Pg dev, but I won't be surprised if the v3 protocol doesn't 
support other hash functions. I haven't looked into it. If you'd like to 
know more I suggest you search for things like "md5", "sha1", "sha256", 
"stronger hash" etc on the pogsql-hackers mailing list archives.


The main downside of md5 as a password hash function is that the result 
is small and quick to compute by the standards of today's hash functions, so 
rainbow tables are easier to build and can have better coverage. Brute-force 
cracking is also faster, but there's not as much difference there.


If you have a need for stronger hashing functions you might want to 
contact one of the consultants who does contract work on PostgreSQL 
development and find out what'd be involved in funding the development 
of the feature. Think about why you need it first, though; what 
threat(s) are you trying to protect from?


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] (new thread) could not rename temporary statistics file "pg_stat_tmp/pgstat.tmp" to "pg_stat_tmp/pgstat.stat": No such file or directory

2012-06-07 Thread Craig Ringer

Hi.

I have this problem:

I have PostgreSQL 9.1.3 and last night it crashed.

This was the first error after an autovacuum (the night before last):

<2012-06-06 00:59:07 MDT814 4fceffbb.32e >LOG:  autovacuum: found 
orphan temp table "(null)"."tmpmuestadistica" in database "dbRX"
<2012-06-06 01:05:26 MDT1854 4fc7d1eb.73e >LOG:  could not rename 
temporary statistics file "pg_stat_tmp/pgstat.tmp" to 
"pg_stat_tmp/pgstat.stat": No such file or directory
<2012-06-06 01:05:28 MDT1383 4fcf0136.567 >ERROR:  tuple 
concurrently updated
<2012-06-06 01:05:28 MDT1383 4fcf0136.567 >CONTEXT:  automatic 
vacuum of table "global.pg_catalog.pg_attrdef"
<2012-06-06 01:06:09 MDT1851 4fc7d1eb.73b >ERROR:  xlog flush 
request 4/E29EE490 is not satisfied --- flushed only to 3/13527A10
<2012-06-06 01:06:09 MDT1851 4fc7d1eb.73b >CONTEXT:  writing block 
0 of relation base/311360/12244_vm
<2012-06-06 01:06:10 MDT1851 4fc7d1eb.73b >ERROR:  xlog flush 
request 4/E29EE490 is not satisfied --- flushed only to 3/13527A10
<2012-06-06 01:06:10 MDT1851 4fc7d1eb.73b >CONTEXT:  writing block 
0 of relation base/311360/12244_vm
<2012-06-06 01:06:10 MDT1851 4fc7d1eb.73b >WARNING:  could not 
write block 0 of base/311360/12244_vm
<2012-06-06 01:06:10 MDT1851 4fc7d1eb.73b >DETAIL:  Multiple 
failures --- write error might be permanent.



Last night it was terminated by signal 6.

<2012-06-07 01:36:44 MDT2509 4fd05a0c.9cd >LOG:  startup process 
(PID 2525) was terminated by signal 6: Aborted
<2012-06-07 01:36:44 MDT2509 4fd05a0c.9cd >LOG:  aborting startup 
due to startup process failure
<2012-06-07 01:37:37 MDT2680 4fd05a41.a78 >LOG:  database system 
shutdown was interrupted; last known up at 2012-06-07 01:29:40 MDT
<2012-06-07 01:37:37 MDT2680 4fd05a41.a78 >LOG:  could not open 
file "pg_xlog/000100030013" (log file 3, segment 19): No 
such file or directory
<2012-06-07 01:37:37 MDT2680 4fd05a41.a78 >LOG:  invalid primary 
checkpoint record


And the only option was pg_resetxlog.

After this a lot of queries showed me this error:
<2012-06-07 09:24:22 MDT 1306 4fd0c7a6.51a >ERROR: missing chunk 
number 0 for toast value 393330 in pg_toast_2619
<2012-06-07 09:24:31 MDT 1306 4fd0c7a6.51a >ERROR: missing chunk 
number 0 for toast value 393332 in pg_toast_2619


I lost some databases.

I restarted the cluster again with initdb and then I restored the 
databases that I could back up (for the others I restored an old backup).


no space or permissions problem. No filesystem or disk error.

Can you help me to know what happened?



Did you take a copy of the PostgreSQL data directory and error logs 
before you tried to fix the problem, as per the advice here:


http://wiki.postgresql.org/wiki/Corruption

If you did, it might be possible to tell what happened. If you didn't 
then you've probably destroyed the evidence needed to determine what 
went wrong (and maybe recover some lost data).


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_basebackup blocking all queries with horrible performance

2012-06-07 Thread Craig Ringer

On 06/08/2012 09:01 AM, Lonni J Friedman wrote:

On Thu, Jun 7, 2012 at 5:07 PM, Jerry Sievers  wrote:

You might try stopping pg_basebackup in place with SIGSTOP and check
if problem goes away.  SIGCONT and you should  start having
sluggishness again.

If verified, then any sort of throttling mechanism should work.


I'm certain that the problem is triggered only when pg_basebackup is
running.  Its very predictable, and goes away as soon as pg_basebackup
finishes running.  What do you mean by a throttling mechanism?


Sure, it only happens when pg_basebackup is running. But if you *pause* 
pg_basebackup, so it's still running but not currently doing work, does 
the problem go away? Does it come back when you unpause pg_basebackup? 
That's what Jerry was telling you to try.


If the problem goes away when you pause pg_basebackup and comes back 
when you unpause it, it's probably a system load problem.


If it doesn't go away, it's more likely to be a locking issue or 
something _other_ than simple load.


SIGSTOP ("kill -STOP") pauses a process, and SIGCONT ("kill -CONT") 
resumes it, so on Linux you can use these to try and find out. When you 
SIGSTOP pg_basebackup then the postgres backend associated with it 
should block shortly afterwards as its buffers fill up and it can't send 
more data, so the load should come off the server.


A "throttling mechanism" refers to anything that limits the rate or 
speed of a thing. In this case, what you want to do if your problem is 
system overload is to limit the speed at which pg_basebackup does its 
work so other things can still get work done. In other words you want to 
throttle it. Typical throttling mechanisms include the "ionice" and 
"renice" commands to change I/O and CPU priority, respectively.


Note that you may need to change the priority of the *backend* that 
pg_basebackup is using, not necessarily the pg_basebackup command 
itself. I haven't done enough with Pg's replication to know how that 
works, so someone else will have to fill that bit in.
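
If memory serves, the walsender doing the base backup shows up in 
pg_stat_replication (state 'backup'), which would at least get you the 
backend PID to renice/ionice. An untested sketch, using the 9.1-era column 
names:

  SELECT procpid, usename, application_name, state
  FROM pg_stat_replication;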


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] How to setup PostgreSQL using Windows Authentication?

2012-06-12 Thread Craig Ringer

On 06/12/2012 10:00 AM, Peter Cheung wrote:


Hi,

I'm new to PostgreSQL.   I installed PostgreSQL on a Windows Server 
2008 R2 server.   I have created a database and a user in Windows 
Active Directory.  How can I configure that user to access that database?



By default, Windows users and PostgreSQL users are completely separate.

Use PgAdmin-III to create the user, or a "CREATE USER" command in psql. See

http://www.postgresql.org/docs/9.1/static/user-manag.html

It is also possible to use SSPI authentication with PostgreSQL, so 
PostgreSQL authenticates users against Active Directory. I haven't used 
it myself. The user must still be created in PostgreSQL, SSPI just takes 
care of authenticating them using their Windows credentials. See:


http://www.postgresql.org/docs/9.1/static/auth-methods.html
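
As a rough sketch (role and database names are made up), the database-side 
setup would be something like:

  CREATE USER "someuser";    -- must match the user name SSPI reports for the Windows account
  GRANT CONNECT ON DATABASE somedb TO "someuser";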

--
Craig Ringer


Re: [ADMIN] Merging two databases

2012-06-12 Thread Craig Ringer

On 06/12/2012 06:23 PM, Lukasz Brodziak wrote:

Hi there,

Is there a way of merging two databases in postgresql in the way of
differential replication? The thing is that we have two DBs with same
schemas and we want to merge the data within them into one single
replica. Are there any tools/ways of doing this?

I think you'll want Bucardo or Slony-I for this. You certainly can't do 
it with the built-in replication.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] How to setup PostgreSQL using Windows Authentication?

2012-06-12 Thread Craig Ringer

On 06/13/2012 02:37 AM, Peter Cheung wrote:


According to 
http://www.postgresql.org/docs/9.1/static/auth-methods.html, SSPI only 
works when GSSAPI is available and GSSAPI support has to be enabled 
when PostgreSQL is built.   Does it mean that I need to uninstall 
PostgreSQL and reinstall it with GSSAPI support?  I used the One click 
installer downloaded from http://www.postgresql.org/download/windows 
to install PostgreSQL and I'm not sure how to include GSSAPI support.




While I haven't used it myself, I'd be amazed if the one-click 
installer's version of Pg wasn't built with GSSAPI and SSPI support. You 
shouldn't have to do anything.


--
Craig Ringer


Re: [ADMIN] How to setup PostgreSQL using Windows Authentication?

2012-06-17 Thread Craig Ringer

On 06/16/2012 08:36 PM, Christian Ullrich wrote:

* Peter Cheung wrote:

 I’m new to PostgreSQL.   I installed PostgreSQL on a Windows Server 
2008

R2 server.   I have created a database and a user in Windows Active
Directory.  How can I configure that user to access that database?


The one-click installer (assuming you used that) left you with 
PostgreSQL running under a local account named "postgres". First, you 
have to change that, because SSPI requires that the service uses a 
domain account:


That's a great explanation. I didn't see anything equivalent in the docs 
- am I just blind?


If not documented anywhere I'd like to add that to the wiki.



1. Create a user account in your domain.
2. Change the ownership of the data directory and everything within it
   to the new account, and grant it full control.
3. Change the service log on credentials so the service uses your
   domain account.
4. Start the service to see if everything works. Try logging on as
   before, create a database, drop some tables, call pg_switch_xlog().
   If you can log on at all, just about anything that goes wrong later
   indicates missing permissions on the data files.

Now, you have to tell Active Directory that your service account is 
running the database. For that, you add a Service Principal Name to 
your service account. You can do that with a command line tool named 
setspn.exe, of which I cannot remember the command line. You can also 
just change the attribute (servicePrincipalName) directly using either 
the "Users and Computers" MMC, or whatever 2008R2's replacement for 
that is, or ADSIedit. Anyway, your new SPN is


POSTGRES/fully.qualified.host.name

In my experience (which may be incomplete), you also have to make sure 
that all your clients use the full host name, because otherwise they 
may not get service tickets. Adding a second SPN with just the host 
name without the domain may help with that, but using the full name is 
better anyway.


The last step is to allow SSPI logon to the database. For that, you 
need to create some login roles that have the same name as your domain 
users, and an entry in pg_hba.conf with authentication method "sspi". 
Remember that only the first entry in pg_hba.conf that matches 
database, client address, and claimed user name is used.


--
Christian






--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] CEST ERROR: cursor "ogrpglayerreaderxxxxxxxxx" does not exist

2012-06-18 Thread Craig Ringer

On 06/18/2012 04:12 PM, francescobocca...@libero.it wrote:

Hi all,
i found a lots of ERROR into my postgresql log file as follow:

2012-06-10 23:37:17 CEST ERROR:  cursor "ogrpglayerreader073a8d30" does not
exist
2012-06-10 23:37:17 CEST STATEMENT:  CLOSE OGRPGLayerReader073A8D30
2012-06-10 23:37:23 CEST ERROR:  cursor "ogrpglayerreader073a8e78" does not
exist


You're running an application against PostgreSQL that's trying to close 
a cursor that it never created, that is already closed, or that only 
exists on a different connection.
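
You can reproduce the same error by hand in psql (cursor name made up):

  BEGIN;
  DECLARE mycur CURSOR FOR SELECT 1;
  CLOSE mycur;
  CLOSE mycur;   -- ERROR:  cursor "mycur" does not exist
  ROLLBACK;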


It's an error in your app, not Pg. Pg is just telling you about the error.

--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Issue in save and retreive file in postgres

2012-06-20 Thread Craig Ringer

On 06/15/2012 09:16 PM, Vishwas Dwivedi wrote:

Greetings,

I am a software programmer and  working on a .net web 
application(ASP.NET <http://ASP.NET> 4.0, c# ).
I am new to postgres and for my current project I am using postgres as 
database.



In this I need functionality to upload a pdf file to the database and then 
retrieve it again from the database.
I have a bytea column in my table to store the binary file data, and I am 
using a byte array to convert the data.


When I try to retrieve this pdf file from the database, it's not working.


Error messages?

What happens instead of what you expect to?

After the save, if you examine the data in the database using psql or 
PgAdmin-III, what do you see?


What PostgreSQL version are you using?

--
Craig Ringer


Re: [ADMIN] about shared_buffer and kernel.shmmax

2012-06-20 Thread Craig Ringer
On 06/21/2012 11:38 AM, wangqi wrote:
> Hi everybody,
> What is the maximum limit for shared_buffers?
> I set shared_buffers to more than 250GB and I got an error at Pg startup.
>


http://www.postgresql.org/docs/9.1/static/kernel-resources.html#SYSVIPC

http://www.postgresql.org/docs/9.1/static/runtime-config-resource.html

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

--
Craig Ringer


Re: [ADMIN] terminating autovacuum process due to administrator command

2012-06-24 Thread Craig Ringer

On 06/23/2012 10:14 AM, Radovan Jablonovsky wrote:

Could you please help with this peculiar problem?


Not without more information than that - at the very minimum, your 
version of PostgreSQL.


http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

You also need to provide some info about the load the server was on, 
about what messages (if any) appear in the logs during the period when 
it was refusing connections and leading up to it, the error message with 
which it was refusing connections, etc.


--
Craig Ringer


Re: [ADMIN] starting postgres with an empty px_xlog folder

2012-06-24 Thread Craig Ringer

On 06/24/2012 03:45 AM, Kevin Grittner wrote:

As previously stated, make sure you understand how this happened, so
you can make sure it doesn't happen again.  The contents of the
pg_xlog directory are an integral part of your database cluster.


People not backing up pg_xlog, deleting its contents, etc happens often 
enough that I wonder if it should have a prominent 00_README or 
00_WARNING_DONT_DELETE file created by initdb - or simply be renamed to 
something scarier like "base_txrecords".


The annoyance of a readme is that it'd need translation.

People are used to logs being disposable. Anyone who's been responsible 
for a database should ideally know better than to assume that 
*transaction* logs are disposable, but everyone has to learn sometime, 
and not everybody does so by reading TFM (unfortunately).


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] terminating autovacuum process due to administrator command

2012-06-25 Thread Craig Ringer

On 06/25/2012 11:00 PM, Radovan Jablonovsky wrote:

Thanks for response,

How were the connections refused (error message)?
2012-06-13 13:45:38.809 MDT [25172]: [1-1] FATAL:  remaining 
connection slots are reserved for non-replication superuser connections
2012-06-13 13:45:38.889 MDT [25173]: [1-1] FATAL:  remaining 
connection slots are reserved for non-replication superuser connections
2012-06-13 13:45:38.895 MDT [25174]: [1-1] FATAL:  remaining 
connection slots are reserved for non-replication superuser connections


Could it be that there are already max_connections 
sessions? max_connections was set to 600; when the issue occurred the db 
server had 85 connections and the server was under medium load.


I can't explain why it was topping out at 85 when it was set to 600, but 
- I strongly recommend that you switch to a connection pool and lower 
your max_connections considerably. You'll likely get much better 
performance with max connections at 20 or less - the usual rule of thumb 
is "num_hdds + num_cpus"  but in reality you need to benchmark and tune 
to work out what's best.


pgbouncer is a good light-weight pooling option.

--
Craig Ringer


Re: [ADMIN] Delay in completion of aggregation inserts when run in a single commit - PG 9.1.2

2012-07-04 Thread Craig Ringer

On 07/05/2012 02:30 AM, Akash Kodibail wrote:
-Staging table population happens in expected time. Anywhere from 
10-15 minutes for each process.


-Aggregation process almost never completes which is a set of 15 
insert queries. Entire process happens in a single commit (population 
of staging tables and aggregation).



When you mean "never completes" - can you be more specific?

Is the query not progressing - say, waiting on a lock?

Is it doing work, but performing very poorly?

Does it terminate with a failure? If so, error message?

If the query just keeps running but you're not sure what it is or isn't 
doing, you should examine pg_stat_activity and pg_locks. From 
pg_stat_activity you can get the backend process ID; examine that 
backend using system tools (ps, top, etc) to see if it's using CPU and 
whether it's doing disk I/O. From pg_locks you can get an idea of 
whether a query is stuck waiting on a lock.
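
As a starting point, something along these lines (9.1 column names, i.e. 
procpid / current_query) shows what each backend is doing and which lock 
requests haven't been granted yet:

  SELECT procpid, waiting, now() - query_start AS runtime, current_query
  FROM pg_stat_activity
  WHERE current_query <> '<IDLE>'
  ORDER BY query_start;

  SELECT pid, locktype, relation::regclass AS relation, mode, granted
  FROM pg_locks
  WHERE NOT granted;
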
Could you be kind enough to explain how differently queries run when 
in a “single commit” as opposed to running each of them individually?

There shouldn't be tons of difference. The first queries continue to 
hold locks while the others run, but since you're in the same 
transaction it won't affect subsequent queries, only concurrent ones.


If you have other concurrent work going on it's possible you're running 
into locking issues that you don't encounter when you release locks with 
individual commits.


Also, some resources can only be released when a transaction commits. If 
you're doing something like using huge numbers of savepoints or PL/PgSQL 
BEGIN ... EXCEPT blocks it's possible you're running into resource 
issues. The SAVEPOINT code has improved massively since 8.3 when I had 
similar issues, but savepoints still aren't free.


Start by looking at the locking situation, and by determining whether 
your query is making slow forward progress or is simply waiting.


Postgresql.conf remains untouched, except for log writing which has 
been made to “all”.


That's going to be VERY sub-optimal if you're bulk-loading data. You'll 
need to push checkpoint_segments right up. Check your logs; you're 
probably seeing warnings about checkpoints being too frequent. Hell, my 
checkpoint_segments is at 50 on my *laptop* - albeit a very fast 
SSD-equipped laptop that does lots of bulk data unit testing.


You'll want to push effective_cache_size up to 4 or 5 GB given you have 
three instances. Hard to say re shared_buffers, I have little experience 
with running multiple instances on one machine.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Delay in completion of aggregation inserts when run in a single commit - PG 9.1.2

2012-07-05 Thread Craig Ringer

On 07/05/2012 04:00 PM, Albe Laurenz wrote:

Akash Kodibail wrote:

Problem:

[...]

-  Aggregation process almost never completes which is a set of 15

insert queries. Entire

process happens in a single commit (population of staging tables and

aggregation).

-  When I run these 15 queries individually, it happens in no time.

Could you be kind enough to explain how differently queries run when

in a "single commit" as opposed

to running each of them individually?

Apart from minor differences (current_timestamp, ...) the
main difference is that there is no COMMIT after each query.
COMMIT is expensive since it requires a write to hard disk.

Try to get the execution plans of the queries in both cases
and see if you can spot a difference.


Actually, that's a thought. Autovacuum can't see uncommitted work, and 
thus can't analyze it. Maybe they need an explicit ANALYZE or two after 
a bulk insert or update during their data load.
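
Something as simple as this, inside the same transaction right after the 
staging tables are populated (table name is illustrative), might be all it 
takes:

  ANALYZE staging_table;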


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Re: Delay in completion of aggregation inserts when run in a single commit - PG 9.1.2

2012-07-05 Thread Craig Ringer

On 07/06/2012 01:06 PM, Akash wrote:

@@Craig

I could only wait for 6 hours. I cancelled the process after that.

No locks on any of the tables when running.

That's a very ambiguous statement. I'm assuming you are saying "our code 
does not take any explicit locks on those tables using LOCK TABLE or 
SELECT ... FOR SHARE / FOR UPDATE" - because there /most certainly are/ 
locks on those tables whenever you're running a query against them. Even 
a simple SELECT takes out a lock to prevent the table from being dropped 
while the query runs.


There could be no locks taken by transactions other than the transaction 
doing the run, though.



Yes, It was progressing. Of the 15 tables, when I cancelled after 6 hours,
3rd table was getting populated (when checked in pg_stat_activity).

OK. If you turn auto_explain on and use it to get a plan for the slow 
queries, do those plans differ from the plans produced when running the 
same queries standalone?
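
For reference, turning it on for just the session that runs the batch looks 
roughly like this; the plans then show up in the server log:

  LOAD 'auto_explain';                    -- superuser, per-session
  SET auto_explain.log_min_duration = 0;  -- log a plan for every statement
  SET auto_explain.log_analyze = on;      -- include actual row counts and timings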


Did you examine iostat? Did you look at what work the postgres process 
was doing to see if it was busy with cpu or I/O (iowait) or if it was 
idle waiting for something else to happen?



4)autovacuum = off, we are doing daily vacuum analyze on all tables.

No!

Unless your database has /no/ write activity other than these batch 
loads, you should have autovacuum on. The more frequently autovacuum 
runs the better a job it will do of preventing bloat, maintaining table 
stats, etc.


If you have tables that are only ever modified by a bulk-load script 
that then immediately vacuums them, set the autovac parameters for that 
table so it's excluded; don't turn autovacuum off entirely.
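
That per-table tweak looks like this (table name made up); it keeps 
autovacuum away from just that table and its TOAST table:

  ALTER TABLE bulk_loaded_table SET (
    autovacuum_enabled = false,
    toast.autovacuum_enabled = false
  );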




5)shared_buffers = 32MB

That's tiny and will severely constrain Pg's resources.

I notice you didn't set effective_cache_size either.

After these changes the process is moving forward, but somehow I feel that it
is only a matter of time before I reach the next choking point, since I am
seeing that process completion is taking a little longer each time I run
it.

Well, your tables and indexes are probably bloating horribly because 
autovacuum is turned off, so that's not surprising.


If you drop and recreate, or TRUNCATE, the tables between load runs you 
might be OK with autovac off for those tables, but what you're 
describing makes me think otherwise.


--
Craig Ringer


Re: [ADMIN] Error: could not send data to client: Connection reset by peer

2012-07-09 Thread Craig Ringer

On 07/09/2012 09:26 PM, Eduardo Sá dos Reis wrote:

Hi,
In my Postgres log the message is occurring:
1 4ffad9f4.67db 2012-07-09 10:17:40 BRT LOG: connection received: 
host=10.20.0.1 port=2368
2 4ffad9f4.67db 2012-07-09 10:17:40 BRT LOG: connection authorized: 
user=usu_cad database=prd_cad
3 4ffad9f4.67db 2012-07-09 10:17:42 BRT LOG: *could not send data to 
client: Connection reset by peer*
4 4ffad9f4.67db 2012-07-09 10:17:42 BRT LOG: *unexpected EOF on client 
connection*

Users can not run the system.


If this is a Windows system with a 3rd party software firewall (Windows 
Firewall is OK, anything else isn't) then remove or disable the firewall 
and see if the problem goes away.


Otherwise, more info please. PostgreSQL version, OS and version, client 
program, whether the client program is using psqlODBC, pgJDBC, libpq 
directly, etc.


--
Craig Ringer


Re: [ADMIN] Postgres Database got down

2012-07-19 Thread Craig Ringer

On 07/20/2012 12:19 AM, Umer Asghar wrote:
 After that I rebooted the machine, and it's been 3 hrs now and 
it's running fine, but I'm not sure why it went down or whether it will go 
down again soon.


Can somebody advise what I should look into?

[snip]

2012-07-19 18:05:08 PKT LOG:  server process (PID 1836) exited with 
exit code 128




Maybe this?

http://support.microsoft.com/kb/974509

Thank you for including your logs without having to be asked. It's 
usually helpful if you also provide your PostgreSQL version and 
architecture (x86 or x64) - but in this case I don't think it matters as 
it seems to be a Windows issue.


--
Craig Ringer


Re: [ADMIN] db size growing out of control when using clustered Jackrabbit

2012-07-23 Thread Craig Ringer

On 07/24/2012 05:13 AM, Gary Webster wrote:

Hello. I'm hoping someone has seen this before.

We are trying to use Postgres Plus v9.1.3 as the Persistence Manager 
in Jackrabbit (Apache JCR) clustering 
(http://wiki.apache.org/jackrabbit/Clustering).
Whenever the JCR is under load, the ws_bundle TOAST table in the 
repository  schema, grows out of control !


Has somebody disabled autovacuum or set it to barely run at all?

Try setting autovacuum to very aggressively vacuum the problem table(s).
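
As an illustrative starting point only - the numbers need tuning to your 
workload - per-table settings along these lines make autovacuum visit it far 
more often:

  ALTER TABLE ws_bundle SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay = 0,
    toast.autovacuum_vacuum_scale_factor = 0.01
  );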

--
Craig Ringer


Re: [ADMIN] problems with access into system catalogs

2012-07-26 Thread Craig Ringer
First, thank-you for an excellent complete question with versions, 
EXPLAIN ANALYZE, and exact messages.


My reply is interleaved below.

On 07/26/2012 03:44 PM, Thomas Markus wrote:

I have 2 systems with CentOS 5.5 (2.6.18) x86_64, postgresql-9.0.6 64bit

both systems contains the same content. But one system make troubles. 
some system tables (eg pg_catalog.pg_class or pg_attribute) contain 
much dead rows and all simple query take much time on one system. the 
other one is fast.


[snip]


other:
INFO:  "pg_class": found 0 removable, 1547667 nonremovable row 
versions in 31587 out of 31587 pages

DETAIL:  1545530 dead row versions cannot be removed yet.

a "select count(*) from pg_catalog.pg_class" returns only 2137



OK, so you have lots of bloat in the system catalogs. A few things to 
check for:


- Do you have any uncommitted two phase transactions?  Run:
  SELECT * from pg_prepared_xacts ;

- Do you have any long-lived 'IDLE IN TRANSACTION' connections? Try:
  SELECT * FROM pg_stat_activity WHERE current_query = '<IDLE> in 
transaction' AND xact_start < current_timestamp - '1 minute'::interval;


Either of those can prevent vacuum from cleaning things up.

Do you very frequently create and drop tables, indexes, etc? Say, using 
a database unit testing framework?


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] "Data import from Mysql to MS Sql Server"

2012-07-26 Thread Craig Ringer

On 07/26/2012 03:46 PM, Rajat Sangal wrote:

Hello Everyone,

I want to import Data from MySql to MS Sql Server 2008 using vb.net.
So please help me. Please give me a reply as soon as possible.



You seem to be in the wrong place. This is a mailing list for 
administration of the PostgreSQL database server.


You could try asking your question on dba.stackexchange.com but you 
would need to write a lot more detail or they will just delete your 
question. You really *must* read this:


http://dba.stackexchange.com/faq#howtoask

before asking for help there. The more effort you put into your 
question, the more likely you will get useful help.


--
Craig Ringer


Re: [ADMIN] problems with access into system catalogs

2012-07-26 Thread Craig Ringer

On 07/26/2012 04:39 PM, Thomas Markus wrote:

Hi,

see below

Am 26.07.2012 10:25, schrieb Craig Ringer:

- Do you have any uncommitted two phase transactions?  Run:
  SELECT * from pg_prepared_xacts ;

hm yes, i stopped all applications this morning but this query shows:
 transaction | gid  | 
prepared| owner  | database
-+--+---++--- 

49737548 | 
131075_MS03ZjAwMDAwMTpjZmVlOjRlZDg3MTk2OjY4NGJlN2I=_N2YwMDAwMDE6Y2ZlZTo0ZWQ4NzE5Njo2ODRiZTdm 
| 2012-01-05 07:49:30.78583+01  | xxx | db1
49737549 | 
131075_MS03ZjAwMDAwMTpjZmVlOjRlZDg3MTk2OjY4NGJlN2I=_N2YwMDAwMDE6Y2ZlZTo0ZWQ4NzE5Njo2ODRiZTg0 
| 2012-01-05 07:49:30.789382+01 | xxx | db2


system time is valid (Thu Jul 26 10:38:12 CEST 2012). so may 1st is 
really old

Should I restart the instance?


Nope, and it wouldn't help anyway. Prepared but uncommitted two phase 
transactions are a permanent and persistent part of the database. They 
only go away when a COMMIT PREPARED  or ROLLBACK PREPARED is issued. See:


http://www.postgresql.org/docs/9.1/static/sql-prepare-transaction.html

I cannot advise you on what to do without knowing what created those 
transactions and why.
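
For reference, once it is clear that a stale prepared transaction should be 
abandoned, it's removed with ROLLBACK PREPARED (or finished with COMMIT 
PREPARED) using the gid column from pg_prepared_xacts, e.g. (placeholder gid):

  ROLLBACK PREPARED 'gid-value-from-pg_prepared_xacts';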


Do you very frequently create and drop tables, indexes, etc? Say, 
using a database unit testing framework?

no, its a live system with normal olap access


Weird, then I don't know how the catalogs would get so fat.

I don't think temporary tables create writes to the catalog heap, but I 
can't think what else it'd be.


--
Craig Ringer


Re: [ADMIN] Timeout error on pgstat

2012-08-01 Thread Craig Ringer

On 08/02/2012 04:27 AM, Anibal David Acosta wrote:


I have a lot (maybe 1 every 10 seconds) of this error WARNING:  pgstat 
wait timeout


A quick search suggests this can be due to excessive I/O. However, this 
thread:


http://postgresql.1045698.n5.nabble.com/pgstat-wait-timeout-td5078125.html

sounds very similar to your issue. I'm wondering if there's a bug 
lurking in there somewhere.



pg_stat_activity shows an autovacuum process on a very heavily used 
table that runs for about 1 hour and then this vacuum is cancelled 
(according to the log).

Was there any context to the `cancelling autovacuum task' message?


 I have Postgres 9.0.3 on a Windows 2008 R2 server that has been running for 
about 1 year under the same conditions, but this error started occurring 
about 1 week ago.




The current 9.0 release is 9.0.8, so you're missing a bunch of bug fixes.

http://www.postgresql.org/docs/current/static/release-9-0-8.html

Consider updating. You don't need to do a dump and reload or use 
pg_upgrade, since it's only a minor version update. Stop the DB, install 
the new binaries, start the DB.


However, I don't see any fixes related to the stats writer in the 
relnotes from the 9.0 series.


--
Craig Ringer


Re: [ADMIN] need help to write a function in postgresql

2012-08-03 Thread Craig Ringer

On 08/03/2012 04:37 PM, Laszlo Nagy wrote:
It is also better because with a view, you can also do " name is null 
". But you cannot do that with a function (unless you write 
unnecessary circumstantial code.)


While I agree with you on the view - among other things, it lets the 
query optimiser push conditions down into the view query - there is a 
reasonable answer to the problem of comparing to NULL. Just use IS 
DISTINCT FROM, eg:


   WHERE name IS DISTINCT FROM _test_variable

"IS DISTINCT FROM" is an equality comparison that treats null as a 
comparable value like any other, so "NULL IS DISTINCT FROM NULL" is 
false. Very handy.
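
A quick way to see the difference from psql:

  SELECT NULL = NULL,                      -- null
         NULL IS DISTINCT FROM NULL,       -- false
         NULL IS NOT DISTINCT FROM NULL;   -- true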


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] How to make data changes inside trigger function visible to the top level statement ?

2012-08-06 Thread Craig Ringer

On 08/06/2012 06:50 PM, Haifeng Liu wrote:

Hello,

I am doing table partitioning; all is ok except that after executing an 'insert' 
sql statement I can't get the affected rows, it is always 0. After searching the 
documentation, I found that row changes inside a trigger function are not visible to 
the top level statement.

Partitioning a table using a trigger function to redirect insertion to the correct 
partition is the recommended way, and affected-rows testing is also widely used. 
pgAdmin tests the affected rows too, so when I type a new row and click the save 
button, it seems to fail but actually succeeds.

How can I make the row changes inside the trigger function visible to the top 
level statement?

You can't. If you need an affected row count, you need to insert 
directly into the target partition, thus bypassing the trigger.
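
In other words (table and column names are made up), instead of inserting 
into the trigger-routed parent table you insert into the child partition 
directly, and then you get a real row count back:

  INSERT INTO measurements_2012_08 (device_id, reading, recorded_at)
  VALUES (42, 17.3, now());
  -- reports INSERT 0 1, unlike the same insert against the parent table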


It's one of those things on the "wouldn't it be nice to fix" list in 
PostgreSQL's partitioning support.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Data corruption after SAN snapshot

2012-08-07 Thread Craig Ringer

On 08/08/2012 06:23 AM, Terry Schmitt wrote:


Anyone have a solid method to test if fdatasync is working correctly or
thoughts on troubleshooting this?


Try diskchecker.pl

https://gist.github.com/3177656

The other obvious step is that you've changed three things, so start 
isolation testing.


- Test Postgres Plus Advanced Server 8.4, which you knew worked, on your 
new file system and OS.


- Test PP9.1 on your new OS but with ext3, which you knew worked

- Test PP9.1 on your new OS but with ext4, which should work if ext3 did

- Test PP9.1 on a copy of your *old* OS with the old file system setup.

- Test mainline PostgreSQL 9.1 on your new setup to see if it's PP specific.

Since each test sounds moderately time consuming, you'll probably need 
to find a way to automate. I'd first see if I could reproduce the 
problem when running PgBench against the same setup that's currently 
failing, and if that reproduces the fault you can use PgBench with the 
other tests.


--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Data corruption after SAN snapshot

2012-08-07 Thread Craig Ringer

On 08/08/2012 09:39 AM, Stephen Frost wrote:

Terry,

* Terry Schmitt (tschm...@schmittworks.com) wrote:

So far, executing pg_dumpall
seems to be fairly reliable for finding the corrupt objects after my
initial data load, but unfortunately much of the corruption has been with
indexes which pgdump will not expose.

Shouldn't be too hard to write a script that'll do a query against each
table using an ORDER BY that matches each index, at least for 'simple'
indexes, which'll typically cause an in-order index traversal.

I'd really like a "VERIFY" command for PostgreSQL, though a proper one 
isn't really possible without block checksums.


I'm currently working on a virtual plug pull tool that uses VMs to 
simulate abrupt crashes of the machine PostgreSQL is running on. One of 
the bigger challenges is that Pg doesn't offer any reliable way to 
detect even simple corruption.


Maybe a pg_read_relation(oid) that simply reads all blocks in an index 
or table would help. It could live in the `adminpack' module ( 
http://www.postgresql.org/docs/9.1/static/adminpack.html) or 
`pageinspect' module ( 
http://www.postgresql.org/docs/9.1/static/pageinspect.html).



It turns out I can use the pageinspect functions to do a rough kind of 
verify, but it's pretty slow and inconvenient. Eg:


   WITH pages(page) AS (
SELECT get_raw_page('tablename'::text, pageno)
FROM generate_series(0, (SELECT relpages FROM pg_class WHERE
   relname = 'tablename')-1) AS pageno
   )
   SELECT page_header(page), heap_page_items(page) FROM pages;


takes 90ms when a 'SELECT * FROM tablename' takes 6.2ms. On a bigger 
table, the query takes 3939.912ms vs 125.135ms for a table scan.


Of course, pageinspect is mostly interesting for indexes, where I'd do:

   create or replace function scan_index(indexname text) returns setof
   record as $$
   SELECT page_header(get_raw_page($1, 0));
   WITH pagenumbers(pageno) AS (
SELECT generate_series(1, (SELECT relpages FROM pg_class WHERE
   relname = $1)-1)
   )
   SELECT bt_page_items($1, pageno) FROM pagenumbers;
   $$ language sql volatile;

   SELECT scan_index('some_idx');


... but that's getting really quite slow and still hasn't touched the 
free space map or visibility map.


Of course, these checks prove nothing about subtle corruption or 
incorrect contents, they only make sure Pg can read them and they look 
vaguely sane. It doesn't do any kind of consistency checking between 
index and table.


--
Craig Ringer


Re: [ADMIN] Data migration to sql server 2008

2012-08-07 Thread Craig Ringer

On 08/07/2012 07:06 PM, karthi keyan wrote:

For an interlinked application, I need to *migrate data into 
Sql server 2008*.


The simplest way is usually to connect with psql and export CSV data with:

\copy (SELECT ) to '/path/to/file.csv' csv

or for a whole table:

\copy tablename to '/path/to/file.csv' csv

For usage of COPY see:

\h COPY

and http://www.postgresql.org/docs/current/static/sql-copy.html


... then load that CSV into MS SQL Server 2008 like you would anything else.


A more complicated but more powerful way is to use an ETL tool like 
Pentaho or Talend to connect to both databases and merge/convert data.


--
Craig Ringer


Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17

2012-08-08 Thread Craig Ringer

On 08/09/2012 04:24 AM, Kevin Grittner wrote:

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
  

Can we please please PLEASE link to that as a comment above max_connections?

Last time this came up nobody was happy with wording of a comment so 
nothing got done. It's a real usability wart - causing real-world 
performance and reliability problems - that people unwittingly raise 
max_connections to absurd levels because they get no warnings, hints or 
guidance of any sort.


--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Data migration to sql server 2008

2012-08-09 Thread Craig Ringer

[Replying via mailing list]

On 08/09/2012 08:24 PM, karthi keyan wrote:

Thank you for your reply.
I have established the same via the sql server *linked server method*; it is 
working fine.

But right now I am facing the following problem.

-- Query i used in sql
 Select * from openquery(Postgre,' 
dblink_connect(''mycon'',''hostaddr=localhost; dbname=explore 
;user=openerp; password=xplore)')



--- the response

Cannot process the object " 
dblink_connect('mycon','hostaddr=localhost; dbname=explore 
;user=openerp; password=xplore)". The OLE DB provider "MSDASQL" for 
linked server "Postgre" indicates that either the object has no 
columns or the current user does not have permissions on that object.


Please help me fix it.


You haven't really shown enough information. In particular, where does 
this 'Postgre' variable/table/whatever come from? What is it?


You should look at the PostgreSQL server logs to see if PostgreSQL is 
sending an error when you attempt the connection and if so, what it is. 
Consider also setting log_statement = 'all' in postgresql.conf so you 
can see exactly what queries this "openquery" tool runs against PostgreSQL.


Please reply via the mailing list ("reply all") in future.

--
Craig Ringer


Re: [ADMIN] Question about ALTER TABLE DROP CONSTRAINT on 9.1 -- psql crashes

2012-08-09 Thread Craig Ringer

On 08/10/2012 10:06 AM, Tom Lane wrote:

Greg Williamson  writes:

Every minute on the minute there is a cronjob that kills IDLE IN CONNECTION 
procs older than 3 seconds ... possibly that process killed the process I was 
running, but to me the user it seemed as if the COMMIT came before the 
disconnect.

That sure sounds like the source of your problem.  It might be sane if
it killed only processes that *had been idle* for at least three
seconds, but I'm not sure there is any easy way to determine that ...

wouldn't:

select * from pg_stat_activity
where current_query = '<IDLE> in transaction'
AND query_start < current_timestamp - INTERVAL '3 seconds';

do it?

--
Craig Ringer

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Question about ALTER TABLE DROP CONSTRAINT on 9.1 -- psql crashes

2012-08-09 Thread Craig Ringer

On 08/10/2012 12:20 PM, Tom Lane wrote:

Craig Ringer  writes:

On 08/10/2012 10:06 AM, Tom Lane wrote:

That sure sounds like the source of your problem.  It might be sane if
it killed only processes that *had been idle* for at least three
seconds, but I'm not sure there is any easy way to determine that ...



wouldn't:



select * from pg_stat_activity
where current_query = '<IDLE> in transaction'
AND query_start < current_timestamp - INTERVAL '3 seconds';



do it?


No, that would find sessions that were idle and whose last command
started at least 3 seconds ago.  But it might have completed only
microseconds ago.  The symptoms Greg is describing are consistent
with this kill script running during a short interval between his
index-build command and his COMMIT.


Eh. For some reason I thought query_start was updated whenever 
current_query was set, so it'd be updated when the session went idle. 
Nonsensical in retrospect.


Does this approach look sane? Note idlers and then reap them if they 
haven't started new queries next pass:


CREATE UNLOGGED TABLE reap (
  procpid integer,
  query_start timestamptz,
  added timestamptz
);

-- Run every 20s or whatever to note idlers and when we noticed them
insert into reap (procpid,query_start,added)
select procpid, query_start, current_timestamp
from pg_stat_activity where current_query = '<IDLE> in transaction';

-- Clean out old entries and kill idlers
WITH r AS (
  DELETE FROM reap r2
  WHERE r2.added + INTERVAL '10 seconds' < current_timestamp
  RETURNING *
)
SELECT r.procpid, pg_terminate_backend(r.procpid)
FROM r INNER JOIN pg_stat_activity s
  ON (r.procpid = s.procpid AND r.query_start = s.query_start )
GROUP BY r.procpid;

ie "where we've noted more than 1 minute ago that a pid was idle, and 
query_start hasn't changed so no new query has been run, kill the backend".


Of course, there's race where the backend could be starting a new query 
even as you kill it for idleness, but that's true in any 
idle-in-transaction control scheme (other than having a true transaction 
timeout feature *within* the backend).


--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Data migration to sql server 2008

2012-08-09 Thread Craig Ringer

On 08/10/2012 12:35 PM, karthi keyan wrote:



Actually my need is to import data from PostgreSQL, and the same 
process into sql server.


I'm guessing you didn't see what I wrote in reply to your message, where 
I said:


You haven't really shown enough information. In particular, where does 
this 'Postgre' variable/table/whatever come from? What is it?


You should look at the PostgreSQL server logs to see if PostgreSQL is 
sending an error when you attempt the connection and if so, what it is. 
Consider also setting log_statement = 'all' in postgresql.conf so you 
can see exactly what queries this "openquery" tool runs against PostgreSQL.



By the way, the easiest way to do this sort of thing is usually to

COPY tablename TO '/some/file.csv' CSV

from PostgreSQL, then load that file into MS SQL server. That's trivial 
to script with vbscript or whatever you prefer, using psql to get the 
data out of PostgreSQL.


--
Craig Ringer


Re: [ADMIN] Installing PostgreSQL Yum Repository

2012-08-18 Thread Craig Ringer

On 08/19/2012 01:25 AM, Jonathan Pool wrote:

I am trying to follow the instructions on 
http://www.postgresql.org/download/linux/redhat/ for installing the PostgreSQL 
Yum Repository (on RHEL 5).

So I downloaded pgdg-redhat-9.1-4.noarch.rpm into a directory from 
http://yum.postgresql.org/9.1/redhat/rhel-5-x86_64/pgdg-redhat-9.1-4.noarch.rpm.

Then I tried to install it with:

rpm -i 
http://yum.postgresql.org/9.1/redhat/rhel-5-x86_64/pgdg-redhat-9.1-4.noarch.rpm


First, why download it, then run rpm on the remote URL instead of the 
local file?


But anyway:

yum install pgdg-redhat-9.1-4.noarch.rpm

with the local file, or the URL; yum doesn't care.

--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Log-Shipping Standby Server: USE_FLOAT8_BYVAL compatibility error

2012-09-04 Thread Craig Ringer

On 09/04/2012 06:04 PM, Mathias Breuninger wrote:

Hi,

I configure PostgreSQL 9.1.5 on two servers: one with FreeBSD-9.0/powerpc64 and 
the other with FreeBSD-9.0/powerpc.

The 64 bits server is the master and I tried to configure the 32 bits one as a 
standby server.


That isn't a supported configuration. The standby needs to be the same 
architecture as the master and needs to use the same compilation options.


I'm surprised the error message wasn't more useful in this case.


I can recompile PostgreSQL on the 64bit architecture without this configure 
option, but I wonder if there will be a performance drawback.


It still won't work. If you want to replicate from a 64-bit machine to a 
32-bit machine you will need to build a 32-bit version of PostgreSQL on 
the 64-bit machine.


--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Windows Services and Postgresql 9.1.3

2012-09-16 Thread Craig Ringer

On 09/12/2012 04:48 AM, Vincent Dautremont wrote:


For example, after the server had rebooted, I would stop the service
from services.msc
Windows then reports it as stopped but I can still see that the database
is running,  and can use it.


Check in the process monitor (or, better, with Process Explorer from 
Sysinternals) to see what user the `postgres.exe` processes are running as.



When I would start the service again, I would end up with a timeout
waiting for the service to be started. When in fact the database was
always running.

Anyone had that bug/problem before ?


Nope. Sounds completely crazy. More info needed, though.

--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgres Cache usage

2012-09-19 Thread Craig Ringer

On 09/20/2012 12:36 AM, A J wrote:

Hi,
I have a read heavy application. I would want it to read from memory as
database latency has to be in low milliseconds.
The database is not too big in size and can be fully contained in memory.

With Postgres, if I cache all the tables (by pre-emptive querying such
as select * from tables); is it assured that all subsequent queries that
have involved where clauses made up of several indexed fields will still
hit the cache (unless data is refreshed by writes) ?

Example if my first query is select * from table1. Then if my second
query is "select * from table1 where (field1 between v1 and v2) and
(field2 between v3 and v4)"; would the second query read from the cache ?


The second query will most likely find the heap tuples already cached, but a 
plain "select * from table1" is a sequential scan, so it won't have pulled 
the index(es) into RAM and there will be no cached copies of those to use.


What you want to do is "pre-warm" the caches. Search for "postgresql 
prewarm". See, eg:


http://archives.postgresql.org/message-id/ca+tgmobrrrxco+t6gcqrw_djw+uf9zedwf9bejnu+rb5teb...@mail.gmail.com
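
(For what it's worth, later PostgreSQL releases ship a pg_prewarm extension
that does this directly. A minimal sketch, with the index name purely as an
assumption:)

  CREATE EXTENSION pg_prewarm;
  SELECT pg_prewarm('table1');             -- pull the heap into shared_buffers
  SELECT pg_prewarm('table1_field1_idx');  -- and likewise for an index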

--
Craig Ringer



--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Backup and Restore from 8.3.0 to 9.1.3

2012-09-20 Thread Craig Ringer

On 09/21/2012 01:01 AM, Kasia Tuszynska wrote:

Hi Everybody,

I am experimenting with backups and restores….

I am running into something curious and would appreciate any suggestions.

Backing up from:

Postgres 8.3.0

Windows 2003 sp1 server (32bit)

-Took a compressed binary backup of a single db (the default option in
pgAdminIII, right click backup)

-Took a plain backup of the same single db, (plain option in the right
click backup gui in pgAdminIII)


Did you back up using the old version of PgAdmin-III with the old 
pg_dump? If so, problems restoring into PostgreSQL 9.1 are likely.


If possible, dump the old database using the *new* pg_dump. If you're 
using the PgAdmin-III GUI, connecting to the old DB from the new PgAdmin 
on the new computer should do the trick.
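
Something along these lines, run from the new machine, should also work
(install paths, host names and the database name are assumptions):

  "C:\Program Files\PostgreSQL\9.1\bin\pg_dump.exe" -h oldserver -U postgres -Fc -f mydb.backup mydb
  "C:\Program Files\PostgreSQL\9.1\bin\createdb.exe" -U postgres mydb
  "C:\Program Files\PostgreSQL\9.1\bin\pg_restore.exe" -U postgres -d mydb mydb.backup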


--
Craig Ringer



--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Windows Services and Postgresql 9.1.3

2012-09-22 Thread Craig Ringer

On 09/22/2012 06:25 AM, Walter Hurry wrote:

On Fri, 21 Sep 2012 10:33:25 -0600, Scott Marlowe wrote:


On Tue, Sep 11, 2012 at 2:48 PM, Vincent Dautremont
 wrote:

Hi,
I'm using Postgresql 9.1.3 x64 on Windows 2008,
I'm doing reliability tests and effects and consequences of a server
power failure.


If you're looking to test for reliability and bugginess, I'd highly
suggest testing against the latest minor version of 9.1.5.


If reliability is of interest, why is OP using Windows?


FWIW, my Win2k8 server has been rock solid since installation, with only 
*extremely* infrequent reboots for patches, since few of the issues patched 
are network-exploitable.


These days Windows is a rather solid platform. It still weirds me out, 
but that's how it is. I've had more stability issues on my Linux servers 
lately (though that's significantly to do with some of the cheap and 
nasty hardware they're on).


--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Pg 8.3.x on RHEL 6.3?

2012-10-10 Thread Craig Ringer

On 10/11/2012 07:12 AM, Walter Hurry wrote:

OK, I'll do your work for you. $200/hr.


This was a polite and reasonable question; there is no need to be rude.

--
Craig Ringer



--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Pg 8.3.x on RHEL 6.3?

2012-10-10 Thread Craig Ringer

On 10/11/2012 05:15 AM, Rosser Schwarz wrote:

In preparation for that effort, I've been asked to avail myself of the
collective experience and wisdom of the wider community to see if
anyone has seen anything unexpected, or any kind of misbehavior in
running 8.3 on RHEL 6.3.


I've not noticed any reports of issues specific to RHEL 6 on any version 
of Pg here, and would be surprised if any issues were to occur.


Consider retaining your old server for a little while, running as a warm 
standby with log shipping. In the exceedingly unlikely case that you do 
run into issues you can fail back to the old server.
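
A minimal log-shipping sketch for an 8.3-to-8.3 pair using contrib's
pg_standby, assuming both servers run matching 8.3.x builds for the same
architecture (host names and paths are assumptions):

  # postgresql.conf on the new primary
  archive_mode = on
  archive_command = 'rsync -a %p oldserver:/var/lib/pgsql/wal_archive/%f'

  # recovery.conf on the old server, after restoring a base backup of the new one
  restore_command = 'pg_standby /var/lib/pgsql/wal_archive %f %p'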


Migrating from 8.3 to 9.1 or 9.2 isn't a big hurdle, since you're 
already past the implicit-casts-from-text stuff. About the only thing 
really likely to bite you is the bytea_output change to hex, and that's 
a trivial postgresql.conf change if you want to preserve the old 
behaviour while fixing code.
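
If you want the old escape format back while the application code is being
fixed, that's a single line in postgresql.conf on the new server:

  bytea_output = 'escape'    # 9.0 and later default to 'hex'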


I can understand not wanting to undertake it at the same time as a 
hardware change, but you should probably schedule it for sooner rather 
than later and start testing on the new version once you're on the 
updated OS & HW.


--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] 9.2 won't load C-language function

2012-10-10 Thread Craig Ringer

On 10/11/2012 02:22 AM, Craig James wrote:

I have a C-language function I've been using on 8.4 for a long time.  On
9.2 it won't load:

test=# set search_path = public;
SET
test=# CREATE OR REPLACE FUNCTION chmoogle_session_id() RETURNS integer
AS '/usr/local/pgsql/lib/libchmoogle.so', 'chmoogle_session_id'
LANGUAGE c VOLATILE;
ERROR:  could not load library "/usr/local/pgsql/lib/libchmoogle.so":
libopenbabel.so.4: cannot open shared object file: No such file or directory


What does `ldd /usr/local/pgsql/lib/libchmoogle.so` say? What about when 
run under the PostgreSQL user account?


You may need to set LD_LIBRARY_PATH in the Pg startup script, or modify 
ld.so.conf, or re-build your extension with rpath linking enabled.
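
A few minimal sketches of those options (the openbabel library path and the
object file name are assumptions):

  # option 1: in the PostgreSQL init/startup script, before the postmaster starts
  export LD_LIBRARY_PATH=/usr/local/lib/openbabel:$LD_LIBRARY_PATH

  # option 2: register the directory with the dynamic linker system-wide
  echo '/usr/local/lib/openbabel' > /etc/ld.so.conf.d/openbabel.conf
  ldconfig

  # option 3: rebuild the extension with the path baked in as an rpath
  gcc -shared -o libchmoogle.so chmoogle.o -Wl,-rpath,/usr/local/lib/openbabel -lopenbabel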


As for why it worked with 8.4 - I expect you either altered 
LD_LIBRARY_PATH in the 8.4 startup scripts, or maybe the 8.4 version of 
the extension was built with -Wl,-rpath so the paths to required 
libraries were embedded into the extension library.


--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Migration from Postgresql 9 to Oracle 10g

2012-10-18 Thread Craig Ringer

On 10/19/2012 01:46 AM, Grant Allen wrote:

On 18/10/2012 02:13, Shams Khan wrote:

Hi Experts,

I am in the process of migration from PG 9 to oracle. All of the
database objects has been migrated apart from functions. The main
issue I am facing with function code where DML statements are used in
the code. On the other side, Oracle doesn't support DML's in functions
or procedures.


Er, what?  What exactly do you mean "Oracle doesn't support DML's in ... 
procedures"?


I'm assuming the intention was to write "DDL".

--
Craig Ringer


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Database in psql

2012-10-21 Thread Craig Ringer
On 10/20/2012 08:18 AM, teyking2003 wrote:
> for the upgrade of psql version, do we need to reconfigure all again?

The manual explains the details:

  http://www.postgresql.org/docs/current/static/upgrading.html

You should also read the release notes for the major versions (8.3, 8.4,
9.0, 9.1 and 9.2) between your version and the target for your upgrade.
Read the compatibility notes.

You will need to be particularly careful of:

- The move to standard_conforming_strings
- The change of bytea_output
- The removal of implicit casts to text
- Upgrading contrib modules to extensions

All of these can affect applications, so you will need to do some application testing.
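
For the upgrade itself, a rough dump-and-reload sketch using the *new*
version's tools (install paths and ports are assumptions; pg_upgrade may also
be an option, depending on your versions):

  # dump the old cluster with the new version's pg_dumpall
  /usr/pgsql-9.2/bin/pg_dumpall -p 5432 > everything.sql

  # restore into the freshly initdb'd new cluster, here listening on port 5433
  /usr/pgsql-9.2/bin/psql -p 5433 -d postgres -f everything.sql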

--
Craig Ringer


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] [GENERAL] Streaming Replication Server Crash

2012-10-22 Thread Craig Ringer
On 10/19/2012 04:40 PM, raghu ram wrote:
> Hi All,
> 
> We have configured Streaming Replication b/w Primary and Standby server
> and Pgpool-II load balancing module diverting
> SELECT statements to  Standby server. As per our observations, Standby
> server crashed during peak hours on today and error message as follows

> 2012-10-19 12:26:46 IST [1338]: [18-1] user=,db= LOG:  server process
> (PID 15565) was terminated by signal 10
> 
> 2012-10-19 12:26:46 IST [1338]: [19-1] user=,db= LOG:  terminating any
> other active server processes

That's odd. SIGUSR1 (signal 10) shouldn't terminate PostgreSQL.

Was the server intentionally sent SIGUSR1 by an admin? Do you know what
triggered the signal?

Are you running any procedural languages other than PL/PgSQL, or any
custom C extensions? Anything that might have unwittingly cleared the
signal handler for SIGUSR1?

--
Craig Ringer


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] [GENERAL] Streaming Replication Server Crash

2012-10-22 Thread Craig Ringer
On 10/22/2012 08:52 PM, Tom Lane wrote:
> Craig Ringer  writes:
>> On 10/19/2012 04:40 PM, raghu ram wrote:
>>> 2012-10-19 12:26:46 IST [1338]: [18-1] user=,db= LOG:  server process
>>> (PID 15565) was terminated by signal 10
> 
>> That's odd. SIGUSR1 (signal 10) shouldn't terminate PostgreSQL.
> 
>> Was the server intentionally sent SIGUSR1 by an admin? Do you know what
>> triggered the signal?
> 
> SIGUSR1 is used for all sorts of internal cross-process signaling
> purposes.  There's no need to hypothesize any external force sending
> it; if somebody had broken a PG process's signal handling setup for
> SIGUSR1, a crash of this sort could be expected in short order.
> 
> But having said that, are we sure 10 is SIGUSR1 on the OP's platform?
> AFAIK, that signal number is not at all compatible across different
> flavors of Unix.  (I see SIGUSR1 is 30 on OS X for instance.)

Gah. I incorrectly thought that POSIX specified signal *numbers*, not
just names. That does not appear to actually be the case. Thanks.

A bit of searching suggests that on Solaris/SunOS, signal 10 is SIGBUS:

http://www.s-gms.ms.edus.si/cgi-bin/man-cgi?signal+3HEAD
http://docs.oracle.com/cd/E23824_01/html/821-1464/signal-3head.html

... which tends to suggest an entirely different interpretation than
"someone broke a signal hander":

https://blogs.oracle.com/peteh/entry/sigbus_versus_sigsegv_according_to

such as:

- Bad mmap()ed read
- alignment error
- hardware fault

so it's not immensely different to a segfault in that it can be caused
by errors in hardware, OS, or applications.

Raghu, did PostgreSQL dump a core file? If it didn't, you might want to
enable core dumps in future. If it did dump a core, attaching a debugger
to the core file might tell you where it crashed, possibly offering some
more information to diagnose the issue. I'm not familiar enough with
Solaris to offer detailed advice on that, especially as you haven't
mentioned your Solaris version, how you installed Pg, etc. This may be
of some use:


http://stackoverflow.com/questions/6403803/how-to-get-backtrace-function-line-number-on-solaris
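
In general terms, a minimal sketch (binary and core file paths are
assumptions; gdb may need to be installed, and Solaris's own mdb can also
read core files):

  # allow core dumps in the environment that starts the postmaster
  ulimit -c unlimited

  # after a crash, load the core into a debugger and get a backtrace
  gdb /usr/local/pgsql/bin/postgres /path/to/core
  (gdb) bt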

--
Craig Ringer


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Streaming Replication Server Crash

2012-10-22 Thread Craig Ringer
On 10/23/2012 01:03 PM, Craig Ringer wrote:
> http://stackoverflow.com/questions/6403803/how-to-get-backtrace-function-line-number-on-solaris

Actually, that link doesn't apply to this problem; it's for getting a
stack trace programmatically.

Try:

http://publib.boulder.ibm.com/httpserv/ihsdiag/get_backtrace.html

http://www.princeton.edu/~unix/Solaris/troubleshoot/adb.html

Most of the good links I could find were on blogs.sun.com, which Oracle
have helpfully redirected to www.oracle.com - where the pages don't
actually exist.

--
Craig Ringer


Re: [ADMIN] Error on pg_dumpall

2012-10-22 Thread Craig Ringer
On 10/22/2012 03:24 AM, Terry Khatri wrote:
> Hi
> 
> I am getting the following error msg when I do a pg_dumpall
> 
> -
> 
> TK@OraLinux /usr/local/pgsql/backups
> $ pg_dumpall -U sns84 > completebackup20121020,sql
> pg_dump: schema with OID 74363 does not exist
> pg_dumpall: pg_dump failed on database "bihistory", exiting

That's not good.

First, try per-database dumps. Work out which database has the problem.
Do a:

pg_dumpall -U sns84  --globals-only > globals.sql

then for each database:

pg_dump -U sns84 -f $dbname.sql $dbname

(scripting it if desired), until you see which DB fails to dump. Dump
all the DBs you can successfully dump before proceeding.
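
A minimal sketch of that loop in bash (everything apart from the role name is
an assumption):

  for dbname in $(psql -U sns84 -At -d postgres -c \
      "SELECT datname FROM pg_database WHERE datallowconn"); do
    echo "dumping $dbname"
    pg_dump -U sns84 -f "$dbname.sql" "$dbname" || echo "FAILED: $dbname"
  done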

Then, in the problem DB, try:

REINDEX TABLE pg_catalog.pg_namespace;

Does that help?

What's the output of:

SELECT oid, nspname FROM pg_catalog.pg_namespace;

in the problem DB?

What about \dn in psql?

Is all your data for the problem DB in the "public" schema? If so, can
you do a dump of just that schema?

  pg_dump -U sns84 -n public -f $dbname.dump $dbname

--
Craig Ringer





-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] [GENERAL] Streaming Replication Server Crash

2012-10-22 Thread Craig Ringer
On 10/23/2012 01:20 PM, Tom Lane wrote:
>
> This isn't the first time I've wondered exactly which signal was meant
> in a postmaster child-crash report.  Seems like it might be worth
> expending some code on a symbolic translation, instead of just printing
> the number.  That'd be easy enough (for common signal names) on Unix,
> but has anyone got a suggestion how we might do something useful on
> Windows?

Here's a typical Windows exception:


2012-10-04 14:29:08 CEST LOG:  server process (PID 1416) was terminated
by exception 0xC005

2012-10-04 14:29:08 CEST HINT:  See C include file "ntstatus.h" for a
description of the hexadecimal value.


These codes can be translated with FormatMessage:

 
http://msdn.microsoft.com/en-us/library/windows/desktop/ms679351(v=vs.85).aspx
  http://support.microsoft.com/kb/259693

FormatMessage may not be safe to perform in the context of a munged heap
or some other failure conditions, so you probably don't want to do it
from a crash handler. It is safe for the postmaster to do it based on
the exception code it gets from the dying backend, though.

I'd say the best option is for the postmaster to print the
FormatMessage(
FORMAT_MESSAGE_ALLOCATE_BUFFER|FORMAT_MESSAGE_FROM_SYSTEM|FORMAT_MESSAGE_FROM_HMODULE,
...) output when it sees the exception code from the dying backend.

RtlNtStatusToDosError may also be of interest:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms680600(v=vs.85).aspx
... but it's in Winternl.h so it's not guaranteed to exist / be
compatible between versions and can only be accessed via runtime dynamic
linking. Not ideal.

--
Craig Ringer


Re: [ADMIN] Detecting DB corruption

2012-10-31 Thread Craig Ringer
On 11/01/2012 08:01 AM, Raj Gandhi wrote:
> 
> I'm looking for ways to detect DB index and any other type of corruption
> in DB.  It looks like there is no tool to verify if Postgres DB is
> corrupted or not.

There is no database verifier tool. One would be quite nice to have for
testing and development purposes, though I question whether corruption
should be a concern in production. If you face the realistic risk of
database corruption, you need to urgently address the problems in your
setup that make that possible.

I wrote a bit about that a while ago:

http://blog.ringerc.id.au/2012/10/avoiding-postgresql-database-corruption.html

Seriously, if you're facing DB corruption then something is already
horribly wrong with your setup.

PostgreSQL isn't like MySQL with MyISAM, where corruption is routine and just
part of life. In PostgreSQL it's a sign of a very bad problem, one you should
diagnose and fix, not paper over. Do you expect ext3 file system corruption
routinely? No? PostgreSQL should be the same.

> I would like to detect some of the following DB corruptions:
>   - corruption in DB indexes

A query that scans the whole index (say, to sort on it) should generally
find damaged pages in indexes. "Corruption" can cover many different
things, though, and some damage would not be detected by simply using
the index.
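
A minimal sketch of forcing such a scan, with table and column names as
assumptions (check with EXPLAIN that an index scan really is chosen):

  SET enable_seqscan = off;
  SELECT * FROM some_table ORDER BY indexed_column;  -- walks the whole index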

>   - detect duplicate primary keys in a table (noticed in couple of
> instances where duplciates keys were found becuase of  corrupted indexes)

A REINDEX will find that. Alternately, you might be able to formulate
queries that ignore the indexes and do duplicate searches by grouping by
the primary key with `enable_indexscan = off`, `enable_indexonlyscan =
off`, etc etc.
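
A minimal sketch of that kind of duplicate check, with hypothetical table and
column names (enable_indexonlyscan is omitted because it only exists in 9.2
and later, and you're on 8.3):

  SET enable_indexscan = off;
  SET enable_bitmapscan = off;
  SELECT pk_col, count(*)
  FROM some_table
  GROUP BY pk_col
  HAVING count(*) > 1;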

>   - any page or block is corrupted

I'd want to use the `pageinspect' extension to scan the table manually.
Create some artificially damaged blocks in a just-for-testing table and
make sure that doing so actually finds them.

> Using Postgres 8.3.18 on Linux. Database has around 100 tables with
> average rows in a table are 500.

Well, you're on an old version, but not one with any known serious
issues AFAIK.

--
Craig Ringer


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Detecting DB corruption

2012-10-31 Thread Craig Ringer
On 11/01/2012 01:10 PM, Scott Ribe wrote:
> On Oct 31, 2012, at 8:50 PM, Craig Ringer wrote:
>
>> Seriously, if you're facing DB corruption then something is already
>> horribly wrong with your setup.
> True, but. In a past life, complaints from the db (it was a db that stored a 
> checksum with every block) were the very first symptom when something went 
> horribly wrong with the hardware. (Partial short between wires of an internal 
> SCSI cable; eventually we determined that about every 1MB, 1 bit would get 
> flipped between the controller & disk.)
>
> So, if there were an official db verifier tool for PG, I for one would have 
> it run periodically.
If there were a way to reliably detect corruption, so would I. As things
stand there are no block checksums, so if a bit gets flipped in some
random `text` field you're never going to know, corruption-checker or
no. Some forms of random corruption - like bad blocks on disks causing
I/O errors, zeroed blocks, truncated files, etc - will become apparent
with general checking, but others won't be detectable unless you know
what the expected vs actual data is.

If page checksumming or any other reliable method of detecting possible
incipient corruption were available I'd quite likely want to use it for
much the same reason you outlined. For that matter, if there were a
general "sanity check my tables and indexes" tool I'd probably use that
too. However, no such tool exists - and in a good setup, none should be
needed. I'd want to use one anyway purely out of paranoia.

--
Craig Ringer


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Fwd: Errors on pg_dumpall

2012-11-02 Thread Craig Ringer
On 11/01/2012 10:05 PM, Terry Khatri wrote:
> Somebody PLEASE HELP ! is Tome Lane around !
>

http://www.postgresql.org/support/professional_support/


Re: [ADMIN] unlooged tables

2012-11-06 Thread Craig Ringer
On 11/07/2012 05:09 AM, Igor Neyman wrote:
>
> Hi,
>
>  
>
> Where in pg_catalog I can find, if the table was created as "unlogged"
> (or not)?
>
> As in: create unlogged table t1(c1 int);
>
> I can't seem to find this info in pg_catalog tables/views.
>
>  
>
> Tried psql with --E to describe (\d) unlogged table, but it was not
> helpful.
>

SELECT relpersistence FROM pg_class WHERE relname = 't1';

'u' is unlogged, 'p' is persistent, 't' is temporary.

See http://www.postgresql.org/docs/current/static/catalog-pg-class.html

--
Craig Ringer


Re: [ADMIN] bumping all sequence ids in a schema

2012-11-07 Thread Craig Ringer
On 11/08/2012 04:42 AM, Mike Broers wrote:
> I would like to bump all sequences in a schema by a specified
> increment.  Is there a stored proc or some method that is recommended?
> Currently I have sql that generates scripts to do this, but it seems
> to be an inelegant approach and before I rework it from the ground up
> I want to see if anyone has already done this kind of work or thinks
> its a wasteful pursuit for some other reason I'm overlooking.

I'd use a PL/PgSQL `DO` block, myself.

DO
$$
DECLARE
    seqname text;
    nspname text;
    seqval  bigint;
BEGIN
    -- loop over every sequence in the target schema
    FOR nspname, seqname IN
        SELECT n.nspname, c.relname
        FROM pg_class c
        INNER JOIN pg_namespace n ON (c.relnamespace = n.oid)
        WHERE c.relkind = 'S' AND n.nspname = 'public'
    LOOP
        -- read the sequence's current value ...
        EXECUTE format('SELECT last_value FROM %I.%I', nspname, seqname)
            INTO seqval;
        -- ... and set it to that value plus the desired increment
        PERFORM setval(
            quote_ident(nspname) || '.' || quote_ident(seqname), seqval + 0);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

Change "+ 0" to whatever your increment is.

--
Craig Ringer



Re: [ADMIN] No databases visible on pg 8.3 server

2012-11-11 Thread Craig Ringer
On 11/11/2012 08:27 AM, Pat Heuvel wrote:
>
> 2012-11-05 23:03:07 EST LOG:  could not open file
> "pg_xlog/00010285002E" (log file 645, segment 46): No such
> file or directory
If you haven't already, before you do ANYTHING ELSE, take a file-system
level copy of the PostgreSQL database while the server is stopped. See
http://wiki.postgresql.org/wiki/Corruption .
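
For example (data directory and destination paths are assumptions):

  # with the server stopped, copy the entire data directory somewhere safe
  pg_ctl -D /var/lib/pgsql/data stop -m fast
  cp -a /var/lib/pgsql/data /backup/pgdata-before-any-recovery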

--
Craig Ringer

