Re: [GENERAL] Allowing postgresql to accept 0xff syntax for data types that it makes sense for?

2015-05-22 Thread Dennis Jenkins
On Fri, May 22, 2015 at 10:02 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Alban Hertroys haram...@gmail.com writes:
  On 22 May 2015 at 04:46, Bill Moran wmo...@potentialtech.com wrote:
  With all that being said, if I were to build a patch, would it be likely
  to be accepted into core?


How feasible would it be to write a network proxy, like pgbouncer, to
handle converting the values on the fly, so that you need to change
neither your original code base (with respect to this issue) nor
PostgreSQL itself?


Re: [GENERAL] PG-9.3.6, unable to drop role because some objects depend on it

2015-04-13 Thread Dennis Jenkins
Apologies for the typo of your name in my last post, Tom.

On Mon, Apr 13, 2015 at 12:16 PM, Dennis Jenkins 
dennis.jenkins...@gmail.com wrote:

 Doh.  I found my answer.  Tome posted it years ago..
 http://www.postgresql.org/message-id/18994.1325874...@sss.pgh.pa.us

 I have to connect to the offending database and try to drop role again
 to get the list of actual objects.  One database cannot query the catalog
 of another.

 On Mon, Apr 13, 2015 at 12:05 PM, Dennis Jenkins 
 dennis.jenkins...@gmail.com wrote:

 I am attempting to remove a role from Postgresql-9.3.6.  I've already
 reassigned ownership for the role's tables, functions, sequences, types,
 views, etc...  However, I am still unable to remove the role.  Postgresql
 reports that 8 objects in the database 'postgres' depend on this role.

 How do I locate the database objects that depend on the 'DEADUSER' so
 that I can remedy the situation?

 PS - I've tried "REASSIGN OWNED BY DEADUSER TO pgsql;" with no success.


 (env vars set for PGDATABASE and PGUSER, target role name changed to
 protect the guilty)

 # psql -c "drop role DEADUSER;"
 ERROR:  role "DEADUSER" cannot be dropped because some objects depend on it
 DETAIL:  8 objects in database postgres

 # pg_dump --schema-only postgres | grep -i DEADUSER
 ## No results





Re: [GENERAL] PG-9.3.6, unable to drop role because some objects depend on it

2015-04-13 Thread Dennis Jenkins
Doh.  I found my answer.  Tome posted it years ago..
http://www.postgresql.org/message-id/18994.1325874...@sss.pgh.pa.us

I have to connect to the offending database and try to drop role again to
get the list of actual objects.  One database cannot query the catalog of
another.
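
A minimal sketch of the round trip (the pg_shdepend query is my own
addition for finding the culprits by catalog, not something from Tom's post):

    -- connect to the database named in the DETAIL line, then retry:
    \c postgres
    DROP ROLE DEADUSER;   -- the error DETAIL now lists the individual objects

    -- or inspect the shared-dependency catalog directly:
    SELECT classid::regclass, objid
      FROM pg_shdepend
     WHERE refobjid = (SELECT oid FROM pg_roles WHERE rolname = 'DEADUSER');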

On Mon, Apr 13, 2015 at 12:05 PM, Dennis Jenkins 
dennis.jenkins...@gmail.com wrote:

 I am attempting to remove a role from Postgresql-9.3.6.  I've already
 reassigned ownership for the role's tables, functions, sequences, types,
 views, etc...  However, I am still unable to remove the role.  Postgresql
 reports that 8 objects in the database 'postgres' depend on this role.

 How do I locate the database objects that depend on the 'DEADUSER' so that
 I can remedy the situation?

 PS - I've tried "REASSIGN OWNED BY DEADUSER TO pgsql;" with no success.


 (env vars set for PGDATABASE and PGUSER, target role name changed to
 protect the guilty)

 # psql -c "drop role DEADUSER;"
 ERROR:  role "DEADUSER" cannot be dropped because some objects depend on it
 DETAIL:  8 objects in database postgres

 # pg_dump --schema-only postgres | grep -i DEADUSER
 ## No results




[GENERAL] PG-9.3.6, unable to drop role because some objects depend on it

2015-04-13 Thread Dennis Jenkins
I am attempting to remove a role from Postgresql-9.3.6.  I've already
reassigned ownership for the role's tables, functions, sequences, types,
views, etc...  However, I am still unable to remove the role.  Postgresql
reports that 8 objects in the database 'postgres' depend on this role.

How do I locate the database objects that depend on the 'DEADUSER' so that
I can remedy the situation?

PS - I've tried "REASSIGN OWNED BY DEADUSER TO pgsql;" with no success.


(env vars set for PGDATABASE and PGUSER, target role name changed to
protect the guilty)

# psql -c "drop role DEADUSER;"
ERROR:  role "DEADUSER" cannot be dropped because some objects depend on it
DETAIL:  8 objects in database postgres

# pg_dump --schema-only postgres | grep -i DEADUSER
## No results


Re: [GENERAL] Petition: Treat #!... shebangs as comments

2014-07-18 Thread Dennis Jenkins
On Fri, Jul 18, 2014 at 10:16 AM, Andrew Pennebaker 
andrew.penneba...@gmail.com wrote:

 Could we please have the PostgreSQL lexer treat #!... on the first line of
 a file as a comment? This would enable .psql scripts to be run with
 dot-slash notation preferred by many unix users:

 ./script.psql

 While still allowing the traditional (and Windows compatible) style:

 psql -f script.psql


+1


Re: [GENERAL] puzzling perl DBI vs psql problem

2014-03-13 Thread Dennis Jenkins
1)
Run both psql and perl under strace and search the output for the
sockets each one connects to.
e.g.,  strace -o /tmp/psql.log psql -Upgsql -dmydatabase -c "select version();"


2)
Add a query into your perl script to perform the following SQL and print
the results:

  select current_database();
  select current_schema();
  select inet_server_addr();
  select current_user;
(and others, see
http://www.postgresql.org/docs/9.3/static/functions-info.html for more
functions)
Do the same from your psql.  Compare the output.  Are you 110% sure that
you are connecting to the same database, as the same user, and using the
same schema?
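
A minimal sketch of step 2 in perl, assuming DBD::Pg and placeholder
connection parameters:

    use strict;
    use warnings;
    use DBI;

    # connection string is an example only; match it to your real script
    my $dbh = DBI->connect('dbi:Pg:dbname=mydatabase;host=localhost;port=5432',
                           'pgsql', '', { RaiseError => 1 });
    for my $sql ('select current_database()',
                 'select current_schema()',
                 'select inet_server_addr()',
                 'select current_user') {
        my ($val) = $dbh->selectrow_array($sql);
        printf "%-30s => %s\n", $sql, defined($val) ? $val : 'NULL';
    }
    $dbh->disconnect;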




On Thu, Mar 13, 2014 at 3:44 PM, Susan Cassidy 
susan.cass...@decisionsciencescorp.com wrote:

 No, I don't have 2 instances running.  I default the port on the psql
 command line, and the perl program is using 5432, as normal.

 Now, I'm discovering that syslog is no longer logging anything.  I bounced
 it, but to no avail.

 Susan


 On Thu, Mar 13, 2014 at 1:34 PM, Rodrigo Gonzalez 
 rjgonzale.li...@gmail.com wrote:

 On Thu, 13 Mar 2014 13:28:38 -0700
 Susan Cassidy susan.cass...@decisionsciencescorp.com wrote:

  No, it is connecting to localhost, which is the same system I am
  running psql on.
 
  Susan
 
 Well, if one query is logged and the other one is not, it means that they
 are running against different servers (as far as I understand
 logging)

 Maybe psql is connecting using one socket and perl using another one?
 maybe you have 2 instances running?





Re: [GENERAL] soft lockup in kernel

2013-07-15 Thread Dennis Jenkins
Stuart,

   I'm simply curious - did you resolve your issue?  What NAS
(vendor/model/config) are you using?


On Fri, Jul 5, 2013 at 11:31 AM, Dennis Jenkins dennis.jenkins...@gmail.com
 wrote:


 On Fri, Jul 5, 2013 at 8:58 AM, Stuart Ford stuart.f...@glide.uk.com wrote:

 On Fri, Jul 5, 2013 at 7:00 AM, Dennis Jenkins wrote:



 No. iSCSI traffic between the VMWare hosts and the SAN uses completely

 separate NICs and different switches to the production LAN.
 I've had a look at the task activity in vCenter and found these two events



 at almost the same time as the kernel messages. In both cases the start
 time (the first time below) is 5-6 seconds after the kernel message, and
 I've seen that the clock on the Postgres VM and the VCenter server, at
 least, are in sync (it may not, of course, be the VCenter server's clock
 that these logs get the time from).

 Remove snapshot
 GLIBMPDB001_replica
 Completed
 GLIDE\Svc_vcenter
 05/07/2013 11:58:41
 05/07/2013 11:58:41
 05/07/2013 11:59:03

 Remove snapshot
 GLIBMPDB001_replica
 Completed
 GLIDE\Svc_vcenter
 05/07/2013 10:11:10
 05/07/2013 10:11:10
 05/07/2013 10:11:23


 I would not blame Veeam.

 I suspect that when a snapshot is deleted, all iSCSI activity either
 halts or slows SIGNIFICANTLY.  This depends on your NAS.

 I've seen an Oracle 7320 ZFS Storage appliance, misconfigured to use
 RAID-Z2 (raid6) to store terabytes of essentially random-access data, pause
 for minutes when deleting a snapshot containing a few dozen gigabytes
 (the snapshot-deletion kernel threads get IO priority over nfsd file
 IO).  This caused enough latency to VMWare (over NFS) that VMWare gave up
 on the IO and returned a generic SCSI error to the guests.  Linux guests
 will semi-panic and remount their file-systems read-only.  FreeBSD will
 just freak out, panic and reboot.  The flaw here was using the wrong raid
 type (since replaced with triple-parity raid-10, which is working great).

 What NAS are you using?

 How busy are its disks when deleting a snapshot?

 What is the RAID type under the hood?




Re: [GENERAL] soft lockup in kernel

2013-07-05 Thread Dennis Jenkins
On Fri, Jul 5, 2013 at 7:00 AM, Stuart Ford stuart.f...@glide.uk.com wrote:

 Dear community

 Twice today our PG 9.1 server has caused a soft lockup, with a kernel
 message like this:

 [1813775.496127] BUG: soft lockup - CPU#3 stuck for 73s! [postgres:18723]

 Full dmesg output - http://pastebin.com/YdWSmNUp

 The incidents were approximately two hours apart and the server was
 momentarily unavailable before coming back again, with no restore actions
 required - it just carried on. It's never done this before, I've checked
 in the logs. The server is about 3 weeks old and runs Debian 7 under
 VMWare.

 Does anybody know what could cause this, and, if so, is it something to be
 concerned about and what can be done to stop it?


Before I looked at your pastebin, I was going to ask "What kind of storage
are the VMDKs on?  If they are on NFS, iSCSI or FC, could the NAS/SAN be
experiencing a problem?"

But I see in the stack trace that the kernel thread hung in
vmxnet3_xmit_frame (sending an Ethernet frame on your virtual NIC).

Describe your vmware network topology.

Do you share the same VLAN for guest traffic with NFS or iSCSI used by
vmware for storing VMDKs?

Are there any errors recorded on the Ethernet switch connected to your
VMWare servers?

What path does a packet take to get from a postgresql server process in
your VM to a client?

What version of VMWare are you running?

If you are managing it with vCenter, are there any alarms or events in the
VMWare logs?


Re: [GENERAL] soft lockup in kernel

2013-07-05 Thread Dennis Jenkins
On Fri, Jul 5, 2013 at 8:58 AM, Stuart Ford stuart.f...@glide.uk.com wrote:

 On Fri, Jul 5, 2013 at 7:00 AM, Dennis Jenkins wrote:



 No. iSCSI traffic between the VMWare hosts and the SAN uses completely
 separate NICs and different switches to the production LAN.
 I've had a look at the task activity in vCenter and found these two events



 at almost the same time as the kernel messages. In both cases the start
 time (the first time below) is 5-6 seconds after the kernel message, and
 I've seen that the clock on the Postgres VM and the VCenter server, at
 least, are in sync (it may not, of course, be the VCenter server's clock
 that these logs get the time from).

 Remove snapshot
 GLIBMPDB001_replica
 Completed
 GLIDE\Svc_vcenter
 05/07/2013 11:58:41
 05/07/2013 11:58:41
 05/07/2013 11:59:03

 Remove snapshot
 GLIBMPDB001_replica
 Completed
 GLIDE\Svc_vcenter
 05/07/2013 10:11:10
 05/07/2013 10:11:10
 05/07/2013 10:11:23


I would not blame Veeam.

I suspect that when a snapshot is deleted, all iSCSI activity either
halts or slows SIGNIFICANTLY.  This depends on your NAS.

I've seen an Oracle 7320 ZFS Storage appliance, misconfigured to use
RAID-Z2 (raid6) to store terabytes of essentially random-access data, pause
for minutes when deleting a snapshot containing a few dozen gigabytes
(the snapshot-deletion kernel threads get IO priority over nfsd file
IO).  This caused enough latency to VMWare (over NFS) that VMWare gave up
on the IO and returned a generic SCSI error to the guests.  Linux guests
will semi-panic and remount their file-systems read-only.  FreeBSD will
just freak out, panic and reboot.  The flaw here was using the wrong raid
type (since replaced with triple-parity raid-10, which is working great).

What NAS are you using?

How busy are its disks when deleting a snapshot?

What is the RAID type under the hood?


Re: [GENERAL] database error xx000?

2012-04-26 Thread dennis jenkins
On Thu, Apr 26, 2012 at 1:59 PM, Kenneth Tilton ktil...@mcna.net wrote:

 On Thu, Apr 26, 2012 at 2:24 PM, Tom Lane t...@sss.pgh.pa.us wrote:


 Can you produce a self-contained test case?


 I doubt it. Every test iteration I run includes a lot of redefining of
 functions and triggers all over the map, and it works fine thru dozens of
 iterations. The wheels seem to come off after a serious refactoring.
 Nothing is needed to clear the problem except (it seems) close any
 connections to the DB, so I guess some optimization does not work in the
 face of sufficient dynamic redefinition.



Suggestion: create a program which attempts to stress-test Postgresql by
doing similar things.  Even if a single test case does not reliably
reproduce the issue, a stress-test might trigger the condition after some
time.
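
For example (just a sketch; the script name and iteration count are
invented):

    for i in $(seq 1 100000); do
        psql -dmydatabase -f redefine_functions_and_query.sql >/dev/null || break
    done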


Re: [GENERAL] pg_upgrade + streaming replication ?

2012-03-20 Thread dennis jenkins
On Tue, Mar 20, 2012 at 8:27 PM, Jeff Davis pg...@j-davis.com wrote:

 On Tue, 2012-03-20 at 22:21 +0100, Henk Bronk wrote:
  actually rsync works fine on file level and is good for manual syncing.
  it really checks the files with the stat command, so a bit change will
 trigger the copy
  in practice you need to keep an eye on the completeness of the rsync action.

 Rsync still needs to examine the entire file. It has no information to
 know that the file is the same on master and slave.

 We could try to give it the appropriate information on which it can make
 that assumption -- e.g. keep the timestamps the same so that rsync
 assumes the contents are the same. But that seems fragile and I don't
 see a good way of doing it, anyway.

 We need a way to take a base backup of just the catalogs, essentially,
 and leave the user data intact. Probably quite a few details to sort out
 though.

 Regards,
Jeff Davis


rsync can take a file that contains a list of files to sync.  Is there
a convenient way to generate such a control file?
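
For the consumption side, rsync's --files-from option takes exactly such a
list (a sketch only; the paths and host are invented, and generating the
list itself remains the open question):

    rsync -av --files-from=/tmp/files-to-sync.txt \
        /var/lib/postgresql/9.1/main/ slave:/var/lib/postgresql/9.1/main/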


Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-17 Thread dennis jenkins
On Fri, Mar 16, 2012 at 2:20 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
 On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
 atsaloli.t...@gmail.com wrote:

 Our database is about 200 GB - over a WAN link, last time it took 8
 hours to do a full sync, I expect it'll be
 more like 9 or 10 hours this time.


Aleksey, a suggestion:  The vast majority of the postgresql wire
protocol compresses well.  If your WAN link is not already compressed,
construct a compressed SSH tunnel for the postgresql TCP port in the
WAN link.  I've done this when rebuilding a 300GB database (via slony)
over a bandwidth-limited (2MB/s) VPN link and it cut the replication
resync time down significantly.
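
The tunnel amounts to something like this (a sketch; host names and ports
are invented):

    # compressed tunnel: local port 6543 forwards to postgres on the far side
    ssh -N -C -L 6543:localhost:5432 replication@dr-site.example.com &

    # point the slony subscriber (or psql) at the tunnel endpoint
    psql -h localhost -p 6543 -U pgsql -d mydatabase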



Re: [GENERAL] Indexing MS/Open Office and PDF documents

2012-03-15 Thread dennis jenkins
On Thu, Mar 15, 2012 at 4:12 PM, Jeff Davis pg...@j-davis.com wrote:
 On Fri, 2012-03-16 at 01:57 +0530, alexander.bager...@cognizant.com
 wrote:
 Hi,

 We are looking to use Postgres 9 for the document storing and would
 like to take advantage of the full text search capabilities. We have
 hard time identifying MS/Open Office and PDF parsers to index stored
 documents and make them available for text searching. Any advice would
 be appreciated.

 The first step is to find a library that can parse such documents, or
 convert them to a format that can be parsed.

I don't know about MS-Office document parsing, but PoDoFo (a PDF
parsing library) can strip text from PDFs.  Every now and then someone
posts to the podofo mailing list with questions about extracting
text for indexing in an FTS-capable database.  PoDoFo
has excellent developer support.  The maintainer is quick to accept
patches, verify bugs, add features, etc...   Disclaimer: I'm not a PDF
nor PoDoFo expert.  I can't help you accomplish what you want.



Re: [GENERAL] Synchronous replication + Fusion-io = waste of money OR significant performance boost? (compared to normal SATA-based SSD-disks)?

2012-03-08 Thread dennis jenkins
I've also looked at the Fusion-IO products.  They are not standard
flash drives.  They don't appear as SATA devices.  They contain an
FPGA that maps the flash directly onto the PCI bus.  The kernel-mode
driver blits data to/from them via DMA, not via a SATA or SAS link
(which would limit transfer rates to 6Gb/s).

But I don't have any in hand to test with yet... :(  Still, the
kool-aide looks tasty :)

On Thu, Mar 8, 2012 at 8:52 AM, Merlin Moncure mmonc...@gmail.com wrote:
 On Wed, Mar 7, 2012 at 2:54 AM, Joel Jacobson j...@trustly.com wrote:
 My company is in the process of migrating to a new pair of servers, running 
 9.1.

 The database performs monetary transactions; we require
 synchronous_commit on for all transactions.

 Fusion-io is being considered, but will it give any significant
 performance gain compared to normal SATA-based SSD-disks, due to the
 fact we must replicate synchronously?

 To make it more complicated, what about SLC vs MLC (for synchronous
 replication)?

 Assume optimal conditions, both servers have less than a meter between
 each other, with the best possible network link between them providing
 the lowest latency possible, maxed out RAM, maxed out CPUs, etc.

 I've already asked this question to one of the core members, but the
 answer was basically you will have to test, I was therefore hoping
 someone in the community already had some test results to avoid
 wasting money.

 Thank you for any advice!

 flash, just like hard drives, has some odd physical characteristics
 that impose some performance constraints, especially when writing, and
 double especially when MLC flash is used.  modern flash drives employ
 non volatile buffers to work around these constraints that work pretty
 well *most* of the time.  since MLC is much cheaper improvements in
 flash controller technology are basically pushing SLC out of the
 market except in high end applications.

 if you need zero latency storage all the time and are willing to spend
 the extra bucks, then pci-e  based SLC is definitely worth looking at
 (you'll have another product to evaluate soon when the intel 720
 ramsdale hits the market).  a decent MLC drive might work for you
 though, i'd suggest testing there first and upgrading to the expensive
 proprietary stuff if and only if you really need it.

 my experience with flash and postgres is that even with low-mid range
 drives like the intel 320 it's quite a challenge to make postgres be
 i/o bound.

 merlin



Re: [GENERAL] Call for Google Summer of Code (GSoC) 2012: Project ideas?

2012-03-08 Thread dennis jenkins
 Now I have the burden to look for a cool project... Any ideas?

 -Stefan


 How about one of:

 1) on disk page level compression (maybe with LZF or snappy) (maybe not page
 level, any level really)

 I know toast compresses, but I believe it's only one row.  page level would
 compress better because there is more data, and it would also decrease the
 amount of IO, so it might speed up disk access.

 2) better partitioning support.  Something much more automatic.

 3) take a nice big table, have it inserted/updated a few times a second.
  Then make select * from bigtable where indexed_field = 'somevalue'; work
 10 times faster than it does today.


 I think there is also a wish list on the wiki somewhere.

 -Andy


Ability to dynamically resize the shared-memory segment without taking
postgresql down :)



Re: [GENERAL] SQL Normalization Thought

2012-03-07 Thread dennis jenkins
On Wed, Mar 7, 2012 at 6:34 AM, David Johnston pol...@yahoo.com wrote:
 I know there is currently work ongoing regarding normalizing SQL statements 
 for logging purposes but has anyone considered giving us the ability to name 
 our statements.

 SELECT ...
 FROM ...
 WHERE ...
 NAME AS 'Name to track by'

 If a query lacks a name the algorithm generated normalized form would be used 
 instead but otherwise all queries with the same name would be treated as 
 being the same for statistics purposes.

 I'm sure there is more to it but the idea of letting the user name their 
 queries, and thus have something to actually link the logs and the source 
 code directly, has merit and at the least provides a workaround to an 
 algorithmic routine.


You could place a 'C style' comment at the beginning of the statement.  Ex:

/* MagicQuery-001 */ select awesome_stuff from coolness_table where user_id=?;



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread dennis jenkins
On Tue, Mar 6, 2012 at 10:11 AM, Thom Brown t...@linux.com wrote:
 On 6 March 2012 16:04, Adrian Klaver adrian.kla...@gmail.com wrote:
 The postmaster.pid is located outside the data directory, but points back to 
 the
 data directory.   Not sure where on Debian, though at a guess somewhere in /var.
 Any way search for postmaster.pid.

 I'm not sure, because if I use a new data directory, initdb it and
 start the service, the postmaster.pid appears in it, and not as a
 symbolic link.

 I did a search for postmaster.pid in the whole of /var and it only
 shows up /var/lib/postgresql/9.1/main/postmaster.pid

 --
 Thom

I know that I'm late to the party, but a small suggestion: Run
initdb with strace (truss on Solaris) and examine the syscalls
made.  It should show you, conclusively, what files are being
opened, unlinked, etc...

Example:

strace -o /tmp/x initdb -D /tmp/data-1
grep -E '^(open|unlink)' /tmp/x



Re: [GENERAL] accumulating handles problem on machine running postgresql

2012-02-29 Thread dennis jenkins
On Wed, Feb 29, 2012 at 8:54 AM, Adam Bruss abr...@awrcorp.com wrote:
 I ran process explorer and looked at the handles for the System process. The 
 vast majority of the handles are of type Key. I can find them in the 
 registry. I took two at random from process explorer and exported the 
 registry branch for them below.

 ## EXAMPLE  1: ##

 Key Name:          
 HKEY_CLASSES_ROOT\CLSID\{9F074EE2-E6E9-4d8a-A047-EB5B5C3C55DA}
 Class Name:        NO CLASS
 Last Write Time:   2/28/2012 - 1:26 AM
 Value 0
  Name:            NO NAME
  Type:            REG_SZ
  Data:            HwTextInsertion Class


 Key Name:          
 HKEY_CLASSES_ROOT\CLSID\{9F074EE2-E6E9-4d8a-A047-EB5B5C3C55DA}\InprocServer32
 Class Name:        NO CLASS
 Last Write Time:   2/29/2012 - 4:05 AM
 Value 0
  Name:            NO NAME
  Type:            REG_EXPAND_SZ
  Data:            %CommonProgramFiles%\microsoft shared\ink\tiptsf.dll

 Value 1
  Name:            ThreadingModel
  Type:            REG_SZ
  Data:            Apartment

Seems like your web server is leaking registry keys used when loading
COM objects.  The sample that you posted is for the "Tablet PC Input
Panel Text Services Framework" [1].  However, I find it strange that
a) IIS needs this and b) it would leak it.

Are you able to obtain a large statistical sample of the leaked
registry keys?  2 out of 130,000 seems like a small sample.

Try the command line handle.exe tool [2].  It can dump to a text
file that you can then analyze with perl, python, grep, etc... or your
own eyeballs. :)  See if the handle list is dominated by a specific
set of registry keys.

[1] http://systemexplorer.net/filereviews.php?fid=515344
[2] http://technet.microsoft.com/en-us/sysinternals/bb896655
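
From memory (so double-check the flags against the handle.exe
documentation), the invocation is roughly:

    rem dump every handle to a text file for grepping
    handle.exe -a > handles.txt
    rem print a per-type summary count
    handle.exe -s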



Re: [GENERAL] accumulating handles problem on machine running postgresql

2012-02-28 Thread dennis jenkins
On Tue, Feb 28, 2012 at 8:48 AM, Adam Bruss abr...@awrcorp.com wrote:
 The handles persist through restarting the postgresql service and restarting 
 the IIS server. The handles are accumulating on the System process. I think 
 the handles are created when the web service is accessed but that would mean 
 the IIS worker processes would have responsibility and they don't seem to. 
 Recycling the worker processes in IIS does nothing. And the worker processes 
 have their own process w3wp.exe which never accumulate handles. It's probably 
 not a postgresql thing or other people would be seeing it.


Use process explorer from sysinternals / microsoft (google for it)
to see what these handles are for (pipes, files, events, mutants,
desktops, winstations (ok, probably not those), etc...).



Re: [GENERAL] Postgresql 9.0.6 backends pruning process environment?

2012-02-15 Thread dennis jenkins
On Wed, Feb 15, 2012 at 9:18 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Wed, Feb 15, 2012 at 06:40, dennis jenkins
 dennis.jenkins...@gmail.com wrote:
 I recently updated my Gentoo Linux development system from postgresql
 9.0.4 to 9.0.6-r1 (9.0.6 plus some Gentoo specific patches).  One of
 my 'C' language functions (been using it for years) stopped working
 because the backend no longer had access to the PGDATA environment
 variable.  A snippet of code is included below.

 I suggest you look at the version history of the gentoo packaging and
 scripts instead. My guess is that something was changed there.

 Yes.  A PG backend will not remove a PGDATA envar, but *it does not
 set it either*.  This sounds to me like a change in the startup script.

 You can look at the configuration variable data_directory, or use the
 C symbol DataDir which is exported from the backend.

 Quite --- at the C level, looking at DataDir is the right thing, and
 looking at PGDATA could be misleading even if it exists --- consider
 the possibility that we took the data_directory setting from the
 command line or postgresql.conf.

                        regards, tom lane

Magnus, Tom,

Thank you very much.  My code now uses 'DataDir' (export in
server/miscadmin.h) and it works fine.
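
For the archives, the fixed-up check now looks roughly like this (a sketch
of the relevant fragment only):

    #include "postgres.h"
    #include "miscadmin.h"      /* exports DataDir */
    #include <sys/statvfs.h>

    /* inside the function body: statvfs() against the server's data dir */
    struct statvfs vfs;

    if (-1 == statvfs(DataDir, &vfs))
    {
        elog(ERROR, "statvfs(\"%s\") failed: %m", DataDir);
        PG_RETURN_NULL();
    }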



[GENERAL] Postgresql 9.0.6 backends pruning process environment?

2012-02-14 Thread dennis jenkins
djenkins@ostara ~/code/capybara $ psql -U$someuser -dpostgres -c "select version();"
 version
--
 PostgreSQL 9.0.6 on x86_64-pc-linux-gnu, compiled by GCC
x86_64-pc-linux-gnu-gcc (Gentoo 4.5.3-r1 p1.0, pie-0.4.5) 4.5.3,
64-bit
(1 row)

djenkins@ostara ~/code/capybara $ uname -a
Linux ostara 3.1.6-gentoo #1 SMP PREEMPT Mon Jan 9 22:43:24 CST 2012
x86_64 Intel(R) Core(TM) i5 CPU 760 @ 2.80GHz GenuineIntel GNU/Linux



I recently updated my Gentoo Linux development system from postgresql
9.0.4 to 9.0.6-r1 (9.0.6 plus some Gentoo specific patches).  One of
my 'C' language functions (been using it for years) stopped working
because the backend no longer had access to the PGDATA environment
variable.  A snippet of code is included below.

My routine gathers some data about the file system that the base
data directory resides on (ignoring table-spaces that could move data
to other file systems).  The existing postgresql server admin
functions are not sufficient to accomplish my goal:

1) pg_database_size does not give me all of the info that I'm after.
2) pg_relation_filepath only returns the path relative to PGDATA
(e.g. "base/n/m"), not what I'm after (an absolute path under
"/var/lib/postgresql/9.0", which may vary from system to system).

Development on 8.4.4 through 9.0.4 worked fine.  getenv("PGDATA")
returned a valid pathname in a shared-object C function when run by
the back end.

9.0.6 (and 9.0.6-r1) backends appear to have no environment variables
set at all.

Gentoo's portage no longer has an ebuild for 9.0.4, so I reverted to
9.0.5.  My function resumed working again.

I then tried Gentoo's portage for postgresql-9.0.6 (no -r1) and it
failed the same way (the env var is not available to the forked backend).

For each postgresql version test, I recompiled and re-installed my
function (a '.so' file).

I skimmed the errata for Postgresql-9.0.6 and could not find anything
relevant.  (http://www.postgresql.org/docs/9.0/static/release-9-0-6.html)

I tried digging around in a mirrored source repository
(https://github.com/postgres/postgres/tree/master/src), but didn't
make much headway.

Thank you for your time and thoughts.


Questions:

1) Is the envvar present, and somehow my code or development system is faulty?

2) Were the envvars of the backends purposefully removed in version 9.0.6?

3) Is there a formal way to get the location of the pg data dir from
a C language routine?

4) It seems that the cwd (/proc/self/cwd sym link) would give me
what I need, but can this behavior be relied on for future versions of
Postgresql on Linux?


ostara ~ # ls -l /proc/2384/cwd
lrwxrwxrwx 1 postgres postgres 0 Feb 14 23:38 /proc/2384/cwd ->
/var/lib/postgresql/9.0/data

ostara ~ # cat /proc/2384/environ

ostara ~ # equery l '*postgresql*'
 * Searching for *postgresql* ...
[IP-] [  ] app-admin/eselect-postgresql-1.0.10:0
[IP-] [  ] dev-db/postgresql-base-9.0.6:9.0
[IP-] [  ] dev-db/postgresql-base-9.1.2:9.1
[IP-] [  ] dev-db/postgresql-server-9.0.6:9.0
[IP-] [  ] dev-db/postgresql-server-9.1.2-r2:9.1




Datum   backend_disk_stats (PG_FUNCTION_ARGS)
{
    char            *pgdatadir = NULL;
    struct statvfs   vfs;
    TupleDesc        tupdesc = NULL;    /* Custom PG data type "disk_stats" */
    AttInMetadata   *attinmeta = NULL;  /* Used for accessing composite type members by name. */
    Datum            result;
    HeapTuple        tuple;
    char           **values = NULL;
    int              i = 0;
    u_int64_t        nUsed = 0;

    if (NULL == (pgdatadir = getenv("PGDATA")))
    {
        elog(ERROR, "getenv(\"PGDATA\") failed.");
        PG_RETURN_NULL();
    }

    if (-1 == statvfs(pgdatadir, &vfs))
    {
        elog(ERROR, "statvfs() failed.");
        PG_RETURN_NULL();
    }



Re: [GENERAL] Solaris 10u9, PG 8.4.6, 'c' lang function, fails on 1 of 5 servers

2011-10-07 Thread dennis jenkins
On Thu, Sep 29, 2011 at 12:08 PM, dennis jenkins 
dennis.jenkins...@gmail.com wrote:


 (root@failed: /db) #  psql -Upgsql -dmy_db -c "create or replace function
 parse_micr(text) returns micr_struct
  as '/db/pgsql_micr_parser_64.so', 'pgsql_micr_parser' language c volatile
 cost 1;"
 ERROR:  could not load library "/db/pgsql_micr_parser_64.so": ld.so.1:
 postgres: fatal: /db/pgsql_micr_parser_64.so: Permission denied

 stat("/db/pgsql_micr_parser_64.so", 0xFD7FFFDF8F50) = 0
 resolvepath("/db/pgsql_micr_parser_64.so", "/db/pgsql_micr_parser_64.so",
 1023) = 27
 open("/db/pgsql_micr_parser_64.so", O_RDONLY)   = 22
 mmap(0x0001, 32768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_ALIGN, 22, 0)
 Err#13 EACCES
 close(22)   = 0



Problem solved.

First, some more background.  These 5 postgresql servers are local zones
running in Solaris (like a Linux container).  The database itself is
inside "/db".  The database user (pgsql) has no problems reading any file
there, including the micr_parser.so file.

"/db" is not just a directory under "/".  It is a separate file system
mounted within the zone (via a loopback mount).  The actual file-system is a
ZFS sub-filesystem on a dedicated zpool on the (shared) servers.  On the
four servers where the "create function" worked, "/db" was mounted with
options "nodevices".

However, on the server where it failed, "/db" was mounted with
"nodevices,noexec".  This was causing mmap() to fail when it requested
PROT_EXEC access.

My inspiration for solving this riddle was that I copied the .so to a local
directory under "/" that was not under "/db".  "create function" then
succeeded.  The little light bulb over my head turned on and I began
checking filesystem mount flags.

So, for anyone who finds this posting via a search engine years from now...
if "create function" fails with "permission denied", also check that the
filesystem holding the .so file is not mounted with exec disabled.
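
Checking is quick.  Solaris shown below; the Linux equivalent is my
assumption:

    # Solaris: list mounted filesystems with their options
    mount -v | grep /db

    # Linux equivalent:
    grep ' /db ' /proc/mounts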

I've not tested this behavior on Linux.  I humbly suggest putting a note in
the postgresql documentation about the impact of FS mount flags on 'C'
language functions.


[GENERAL] Solaris 10u9, PG 8.4.6, 'c' lang function, fails on 1 of 5 servers

2011-09-29 Thread dennis jenkins
Hello Postgresql Community Members,

I am stumped trying to install a few 'c' language functions
on a particular Solaris server (64-bit, amd cpu arch (not sparc)).  I
actually
have 5 Postgresql servers, and the .so loads fine into 4 of them, but
refuses to load into the 5th.  I've quintuple checked the file
permissions, build of the .so, gcc versions, PostgreSQL versions,
etc...  I've had a colleague double check my work.  We're both stumped.
Details to follow.

All servers are running Solaris 10u9 on 64-bit hardware inside
Solaris zones.  Two of the servers are X4270's, 144GB ram, 24 Intel
CPU cores.  These two servers run the 4 working Solaris zones that
are able to load the function implemented in the .so files.  Postgresql
version 8.4.6, compiled from source (not a binary package).

The server that is misbehaving is an X4600, 128 GB ram, 16 AMD CPU
cores, but otherwise identical: Solaris 10u9, 64-bit OS, Postgresql
8.4.6.  All 5 systems use the stock gcc that ships with Solaris (v3.4.3,
its old, I know).

The permissions on the files and Postgresql directories.  First
a working server, then the server that is not working as expected.

(root@working: /db) # ls -ld /db /db/*.so
drwx--  11 pgsqlroot  23 Sep 27 10:39 /db
-rwxr-xr-x   1 root root   57440 Sep 27 10:39
/db/pgsql_micr_parser_64.so

(root@working: /db) # psql -Upgsql -dpostgres -c "select version();"
 PostgreSQL 8.4.6 on x86_64-pc-solaris2.11, compiled by GCC gcc (GCC) 3.4.3
(csl-sol210-3_4-20050802), 64-bit

(root@working: /db) # file /opt/local/x64/postgresql-8.4.6/bin/postgres
/opt/local/x64/postgresql-8.4.6/bin/postgres:   ELF 64-bit LSB executable
AMD64 Version 1 [SSE], dynamically linked, not stripped

(root@working: /db) # psql -Upgsql -dmy_db -c "create or replace function
parse_micr(text) returns micr_struct
  as '/db/pgsql_micr_parser_64.so', 'pgsql_micr_parser' language c volatile
cost 1;"
CREATE FUNCTION

(root@working: /db) # psql -Upgsql -dmy_db -t -c "select transit from
parse_micr(':=: 45800=100');"
 =



(root@failed: /db) # ls -ld /db /db/*.so
drwx--  11 pgsqlroot  24 Sep 29 11:16 /db
-rwxr-xr-x   1 root root   57440 Sep 29 09:46
/db/pgsql_micr_parser_64.so

(root@failed: /db) # psql -Upgsql -dpostgres -c "select version();"
 PostgreSQL 8.4.6 on x86_64-pc-solaris2.11, compiled by GCC gcc (GCC) 3.4.3
(csl-sol210-3_4-20050802), 64-bit

(root@failed: /db) # file /opt/local/x64/postgresql-8.4.6/bin/postgres
/opt/local/x64/postgresql-8.4.6/bin/postgres:   ELF 64-bit LSB executable
AMD64 Version 1 [SSE], dynamically linked, not stripped

(root@failed: /db) #  psql -Upgsql -dmy_db -c "create or replace function
parse_micr(text) returns micr_struct
 as '/db/pgsql_micr_parser_64.so', 'pgsql_micr_parser' language c volatile
cost 1;"
ERROR:  could not load library "/db/pgsql_micr_parser_64.so": ld.so.1:
postgres: fatal: /db/pgsql_micr_parser_64.so: Permission denied



  Ok.  Well, the file permissions are correct, so what gives?  Next
step is to trace the backend process as it attempts to load the .so.
So I connect to the failed server via pgAdmin and run "select getpid();".
I then run "truss -p PID" from my shell, and in pgAdmin, execute the
SQL to create the function.  This is the result of the system trace:

(root@failed: /db) # truss -p 10369
recv(9, 0x0097C103, 5, 0)   (sleeping...)
recv(9, 170301\0  , 5, 0) = 5
recv(9,  TBEE5 n J\0 VF6E4DDCF84.., 32, 0)= 32
recv(9, 170301\0B0, 5, 0) = 5
recv(9, AAD5A5 L97B0CEA5A9F0CD89.., 176, 0)   = 176
stat("/db/pgsql_micr_parser_64.so", 0xFD7FFFDF9520) = 0
stat("/db/pgsql_micr_parser_64.so", 0xFD7FFFDF9530) = 0
stat("/db/pgsql_micr_parser_64.so", 0xFD7FFFDF8F50) = 0
resolvepath("/db/pgsql_micr_parser_64.so", "/db/pgsql_micr_parser_64.so",
1023) = 27
open("/db/pgsql_micr_parser_64.so", O_RDONLY)   = 22
mmap(0x0001, 32768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_ALIGN, 22, 0)
Err#13 EACCES
close(22)   = 0
setcontext(0xFD7FFFDF9050)
setcontext(0xFD7FFFDF9BB0)


We can see that the backend is able to open the .so file for
reading, but the mmap fails.  From the Solaris man page on mmap:

ERRORS
 The mmap() function will fail if:

 EACCES  The fildes file descriptor is not  open  for
 read,  regardless  of  the protection specified;
 or fildes is not open  for  write  and
 PROT_WRITE  was  specified  for a MAP_SHARED
 type mapping.


My analysis:

1) The file descriptor (#22) is open for O_RDONLY.
2) PROT_WRITE and MAP_SHARED are not specified, so write access is not
relevant.


Things that I tried, unsuccessfully:

1) I recompiled the .so on the target system (X4600, AMD chips) just
   in case it is somehow different from the .so that got built on the
   working system (X4270, Intel chips).

2) Tested with a different .so (I have 

Re: [GENERAL] Detecting memory leaks with libpq?

2011-07-19 Thread dennis jenkins
On Tue, Jul 19, 2011 at 5:41 AM, Antonio Vieiro anto...@antonioshome.net wrote:

 Hi all,

 I'm building a small C application that uses libpq and I was wondering
 if there's an easy way to detect memory leaks in my code.

 I think I'm calling PQclear and friends correctly, but I'd like to
 double-check it. I was wondering if there's a function in libpq to
  check memory usage/internals, or something like that.


Wrap your main logic in a loop that runs it 100,000 or more times.  However,
the process itself should never exit (eg, only ever exist as one pid).  As
the process runs, monitor it with top, htop (really nice util for Linux),
vmstat, etc...  Does the memory usage go up and up, generally linearly with
time?

Run the same process under Electric Fence [1] or valgrind [2].

[1] http://linux.die.net/man/3/efence  (not for finding memory leaks per se,
but useful for finding memory mis-usage.)
[2] http://valgrind.org/
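
A minimal sketch of such a loop (the connection string is a placeholder):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=mydatabase user=pgsql");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        for (long i = 0; i < 100000; i++)
        {
            PGresult *res = PQexec(conn, "select version()");
            PQclear(res);       /* omit this and memory grows linearly */
        }

        PQfinish(conn);
        return 0;
    }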


Re: [GENERAL] rationale behind quotes for camel case?

2011-06-28 Thread dennis jenkins
On Tue, Jun 28, 2011 at 8:43 AM, Merlin Moncure mmonc...@gmail.com wrote:


 The rebuttal to the above points is that the problem with not quoting
 is that your identifiers are folded to lower case on the server which
 can make them difficult to read in psql, pgadmin, etc. when outputted.
  This is true and I consequently use underscores to break up words in
 my personal style (order_number, not OrderNumber), avoiding that
 problem.  Some people don't like it, but it is the easiest to type,
 the most tool friendly, and most regular.

 merlin


(wandering slightly off-topic)

Any suggestions on how to name tables when table names contain both
multi-word nouns and mutli-table many-many mappings?

Example:  Suppose that I have a table called "foo" and another table called
"barBiz" (or "bar_biz" if you prefer).  Further, both of these tables have a
serial primary key.  Now I want to create a third table that represents a
many-to-many relationship between "foo" and "barBiz".  So far I have been
keeping compound-noun table names in camel case, but mapping tables separate
the base table names with underscores.  Thus the table name would be
"foo_barBiz".

However, I find the above distasteful, for many of the reasons that Merlin
and others have outlined.  Yet naming the table "foo_bar_biz" seems
ambiguous to me, as does using just lower-case "foo_barbiz" / "barbiz".
These examples are contrived.  The real table names are normal English words
with subjective meaning.

I'd like to ask the pgsql community for suggestions on how they name tables.

Thank you all for your time.


Re: [GENERAL] Extract (Recover) data from a cluster built on a different architecture (ARM).

2011-04-25 Thread dennis jenkins
On Sun, Apr 24, 2011 at 4:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Phil Couling coul...@gmail.com writes:
  I'm looking for a way to extract the data from a PostgreSQL 8.3.14
  database (cluster) that was built using an an ARM/Linux server.  The
  ...
  Are there any tools for recovering data from a database built with a
  different architecture or is my data toast unless I can lay my hands
  on an ARM box?

 You might possibly be able to do it with a different big-endian machine
 such as PPC (it would also have to match on alignment rules and
 floating-point format, but I think you'd probably squeak by on those).
 Cedric's suggestion of an emulator is probably a surer answer though.

regards, tom lane

QEMU can emulate an ARM system.  It won't be really fast, but it might work
for you.  Disclaimer: I've never used QEMU to emulate an ARM system, but the
docs say that you can.

http://wiki.qemu.org/Main_Page
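
The invocation is roughly this, assuming one of the stock Debian armel
images for the emulated versatilepb board (the file names are invented):

    qemu-system-arm -M versatilepb -m 256 \
        -kernel vmlinuz-versatile -initrd initrd.img-versatile \
        -hda debian_armel.qcow2 -append "root=/dev/sda1"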


Re: [GENERAL] Weird problems with C extension and bytea as input type

2011-03-23 Thread dennis jenkins
On Wed, Mar 23, 2011 at 5:08 AM, Adrian Schreyer ams...@cam.ac.uk wrote:

 you are right, it returns a char *.

 The prototype:

 char *function(bytea *b);

 The actual C++ function looks roughly like this

 extern C
 char *function(bytea *b)
 {
   string ism;
   [...]
   return ism.c_str();
 }



Don't do that.  You are returning a pointer to a deallocated buffer
(previously held by a local variable).  c_str() is just a const
pointer to a buffer held inside ism.  When ism goes out of scope,
that buffer is freed.

Either return std::string, or strdup() the string and have the
caller free that (but use the postgresql alloc pool function to
handle the strdup; I don't recall that function's name off the top of
my head).
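
The palloc-based copy is pstrdup(); a sketch of the corrected function:

    extern "C"
    char *function(bytea *b)
    {
        std::string ism;
        /* [...] build ism as before ... */

        /* pstrdup() copies into the backend's current memory context,
           so the buffer survives after ism goes out of scope */
        return pstrdup(ism.c_str());
    }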



Re: [GENERAL] finding bogus UTF-8

2011-02-10 Thread dennis jenkins
I'm working on a project to convert a large database form SQL_ASCII to
UTF-8.  I am using this procedure:

1) pg_dump the SQL_ASCII database to an SQL text file.
2) Run through a small (efficient) C program that logs each line that
contains ANY unclean ASCII text.
3) Parse that log with a small perl program (hashes are easier in perl
than C) to produce a report, and emit some SQL.
4) Construct SQL update statements to repair the original data.
5) Repeat at step #1 until the database is clean.
6) pg_dump (SQL_ASCII) -> pg_restore into the new -EUTF8 database.
7) Profit!

If you are interested, I can email to you the C and Perl source.

It runs like this:

# time pg_restore /db-dumps/some_ascii_pgdump.bin | ./ascii-tester |
./bad-ascii-report.pl > unclean-ascii.rpt

real11m11.804s
user18m2.579s
sys 2m25.803s

# grep ^-- unclean-ascii.rpt
-- some_table 4051021
-- other_table 16

^^^ Numbers are count of rows that need cleaning.
Entire rpt file contains SQL comments ("--") and SQL select statements
of the form:
select * from table where primary_key in (1, 2, 3, 4, ...);

The perl script contains a hash that maps table names to primary key
column IDs (to pick up when parsing the raw SQL restore COPY
script).  I will need to purge my secret schema stuff from it before
sharing it with anyone.

My solution is probably not perfect, and probably not optimal, but it
is working great so far.  I'm almost done cleaning up my database and
hope to attempt a real UTF8 restore in the near future.


On Thu, Feb 10, 2011 at 1:02 PM, Scott Ribe scott_r...@elevated-dev.com wrote:
 I know that I have at least one instance of a varchar that is not valid 
 UTF-8, imported from a source with errors (AMA CPT files, actually) before 
 PG's checking was as stringent as it is today. Can anybody suggest a query to 
 find such values?


 --
 Scott Ribe
 scott_r...@elevated-dev.com
 http://www.elevated-dev.com/
 (303) 722-0567 voice







Re: [GENERAL] finding bogus UTF-8

2011-02-10 Thread dennis jenkins
On Thu, Feb 10, 2011 at 1:02 PM, Scott Ribe scott_r...@elevated-dev.com wrote:
 I know that I have at least one instance of a varchar that is not valid 
 UTF-8, imported from a source with errors (AMA CPT files, actually) before 
 PG's checking was as stringent as it is today. Can anybody suggest a query to 
 find such values?

If you know which table and column the data is in, you can also do
something like this:

(I typed this up without checking the syntax of it.  The basic idea is
to cast the column as bytea, encode with the 'escape' method, then
grep for back-slashes).

select * from bad_table where encode(bad_column::bytea, 'escape') ~ '\\\\';



Re: [GENERAL] finding bogus UTF-8

2011-02-10 Thread dennis jenkins

 If you are interested, I can email to you the C and Perl source.

 It runs like this:

 # time pg_restore /db-dumps/some_ascii_pgdump.bin | ./ascii-tester |
 ./bad-ascii-report.pl > unclean-ascii.rpt

http://www.ecoligames.com/~djenkins/pgsql/

Disclaimer: I offer NO warranty.  Use at your own risk.  Code does
minimal error checking (its a hack / tool for manual use, not reliable
production use).

C code compiles cleanly with gcc.  Perl code uses no libraries (just a
STDIN -> STDOUT processor).  This code should run damn near anywhere.

The code will stay on my web server until I forget about it and re-org
stuff in a few weeks, so grab it while you can.



Re: [GENERAL] Postgresql-8.4.6, 64bit, Solaris 10u9, dtrace

2011-01-31 Thread dennis jenkins
On Fri, Jan 28, 2011 at 1:19 PM, dennis jenkins
dennis.jenkins...@gmail.com wrote:
 Hello Everyone,

    My goal is to install a 64-bit build of the latest Postgresql 8.4
 (not ready for 9.0 yet) onto a Solaris 10u9 server (Intel chips,
 X4270), with dtrace support.

 .../no-dtrace-postgresql, untarred the build with dtrace.  I then
 fired up postgresql using my SMF script and it worked GREAT!

   Awesome I thought... Until I tried to use the same dtrace script
 provided in section 26.4.3 Using Probes on the postgresql.org web
 site (http://www.postgresql.org/docs/8.4/static/dynamic-trace.html).


It occurred to me that since I was building and executing
postgresql in a local zone, I might not be able to use dtrace from
the local zone.  So I tried from the global zone.  I made a small bit
of progress.  I can list the dtrace providers, but unfortunately,
postgresql is not listed among them.

(root@server: ~) # dtrace -l | wc -l
   51584

(root@server: ~) # dtrace -l | grep postgresql | wc -l
   0

(root@server: ~) # pgrep postgres
4754
4757
4760
4759
4758

(root@server: ~) # ./exp-1.d 4754
dtrace: failed to compile script ./exp-1.d: line 3: probe description
postgresql4754:::transaction-start does not match any probes

(root@server: ~) # ./exp-1.d 4757
dtrace: failed to compile script ./exp-1.d: line 3: probe description
postgresql4757:::transaction-start does not match any probes



[GENERAL] Postgresql-8.4.6, 64bit, Solaris 10u9, dtrace

2011-01-28 Thread dennis jenkins
Hello Everyone,

My goal is to install a 64-bit build of the latest Postgresql 8.4
(not ready for 9.0 yet) onto a Solaris 10u9 server (Intel chips,
X4270), with dtrace support.  Postgresql compiles just fine when
configured with --disable-dtrace.  Attempting to compile when
configured with --enable-dtrace fails.  Specifically, the dtrace
compiler utility fails to compile src/backend/utils/probes.d:

$ ./configure CC=gcc -m64 MAKE=gmake DTRACEFLAGS='-64'
--prefix=/opt/local/x64/postgresql-8.4.6 --enable-dtrace
--with-openssl --with-pam --enable-thread-safety --enable-debug
--with-libraries=/opt/local/x64/openssl-1.0.0c/lib:/opt/local/x64/readline-6.1/lib
--with-includes=/opt/local/x64/openssl-1.0.0c/include:/opt/local/x64/readline-6.1/include

$ gmake
(lots of output omitted)
dtrace: failed to link script utils/probes.d: an error was encountered
while processing access/transam/clog.o
gmake[2]: *** [utils/probes.o] Error 1

  Google reveals very little about this error.  Others reported
problems compiling Postgresql-8.2 with dtrace on Solaris 10u3.  A fix
was promised for 10u4.  I can only surmise that the fix failed.  The
reported issue was that dtrace was unable to create probes to
statically defined functions.

  Someone suggested building postgresql on Solaris Express (what will
become Solaris 11), and then installing the binaries on Solaris 10.
So I tried that.  I built Postgresql without dtrace and installed it
on Solaris 10.  I built a virtual machine with Oracle Solaris Express,
installed the gcc compiler and a few other tools.  I then built and
installed readline, openssl, postgresql and slony following my
build instructions, except that I did --enable-dtrace on postgresql.
 It built and installed just fine.  So I tarred up
/opt/local/x64/postgresql-8.4.6 and copied the tarball to the
Solaris-10 server, renamed the existing install to
.../no-dtrace-postgresql, untarred the build with dtrace.  I then
fired up postgresql using my SMF script and it worked GREAT!

   Awesome I thought... Until I tried to use the same dtrace script
provided in section 26.4.3 Using Probes on the postgresql.org web
site (http://www.postgresql.org/docs/8.4/static/dynamic-trace.html).
I got this error:

# ./script.d 4759
dtrace: failed to compile script ./exp-1.d: line 3: probe description
postgresql4759:::transaction-start does not match any probes

  So I checked to see if there were ANY dtrace providers listed...

# dtrace -l
   ID   PROVIDERMODULE  FUNCTION NAME

  None.  dtrace -l on Solaris-11 lists 66,218 of them.

root@solaris-11:~# dtrace -l | wc -l
66218


  My questions are as follows:

1) Has anyone else successfully built Postgresql (v8 or v9) on
Solaris-10 with dtrace support?  If so, what did I do wrong?

2) Has anyone else successfully used dtrace on Postgresql (any build)
on SOlaris-10?

3) Any idea why dtrace itself is borked on Solaris-10?



Re: [GENERAL] UUID column as pimrary key?

2011-01-05 Thread dennis jenkins
On Wed, Jan 5, 2011 at 1:03 PM, Bill Moran wmo...@potentialtech.com wrote:

 But the point (that you are trying to sidestep) is that the UUID namespace
 is finite, so therefore you WILL hit a problem with conflicts at some point.
 Just because that point is larger than most people have to concern themselves
 with isn't an invalidation.

The UUID itself is 128 bits.  Some of those bits are pre-determined.
I don't recall exactly, but a random version-4 UUID has 122 bits of
randomness.

How many would one have to store in a database before a collision
would even be a concern?  Such a database would be freaking huge.
Probably far larger than anything that anyone has.

Lets say (I'm pulling numbers out of my ass here), that you wanted to
store 2^100 rows in a table.  Each row would have a UUID and some
other meaningful data.  Maybe a short string or something.  I don't
recall what the postgresql row overhead is (~20 bytes?), but lets say
that each row in your magic table of death required 64 bytes.  A table
with 2^100 rows would then require 64 * 2^100 = 2^106, or about
8 x 10^31, bytes.  How on Earth would you store that much data?  And
why would you ever need to?

I postulate that UUID collisions in Postgresql, using a good source
for UUID generation, is unlikely to have collisions for any reasonable
database.
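
For scale, the usual birthday bound (assuming 122 random bits, as in a
version-4 UUID) puts the collision probability for n keys at roughly
n^2 / 2^123.  Even n = 2^40 (about a trillion rows) gives
2^80 / 2^123 = 2^-43, i.e. about one chance in nine trillion.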

Food for thought:
http://blogs.sun.com/dcb/entry/zfs_boils_the_ocean_consumes

ps- If my math is off, I apologize.  Its been a long day...



Re: [GENERAL] Compression on SSL links?

2010-08-14 Thread dennis jenkins
On Fri, Aug 13, 2010 at 10:56 AM, Joshua D. Drake j...@commandprompt.com 
wrote:
 On Fri, 2010-08-13 at 11:38 -0400, Tom Lane wrote:

 I don't really see that.  The case where it's sensible to use
 compression on the connection is where you're pushing data across
 a WAN.  That's also pretty much exactly the situation where it's
 sensible to use encryption.  I guess there'd be a use case for
 compression-without-encryption if you had a trustworthy private WAN,
 but who's got one of those?

 So I'm much more in favor of doing something like this than doing it
 directly in our own protocol.

 Well as a company who implemented compression for postgresql on the wire
 long ago :D. I can say it is actually useful as a whole. Even on private
 networks.


We use a SSH tunnel with compression over a 2MB/s link (IPSEC VPN)
from a production site to a DR site.  We run slony over it and it
works quite well.  From what I can tell, we average about 95%
compression.  This effectively gives us a 40MB/s link for replication.

We created a Solaris SMF service to maintain the SSH tunnel and a
restricted shell on the remote end.  SSH connects with a private key.
SMF for slony has required dependencies on the local postgresql and
the SSH tunnel, so it all starts up and shutdowns down in the proper
sequence.

The SSH client created one forward port tunnel and one reverse port
tunnel, both on localhost.  To abuse the tunnel one would have to
already be on the database server(s), so it seems (famous last words?)
fairly secure.
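
The client end of that amounts to something like this (a sketch; the key
path, ports, and host are invented):

    ssh -N -C -i /path/to/private_key \
        -L 15432:localhost:5432 \
        -R 25432:localhost:5432 \
        replication@dr-site.example.com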

I think that adding compression to the SSL layer in Postgresql's wire
protocol would be great.  We could eliminate our kludgey tunnel (aka,
yet another failure point).

Just my $0.02...



Re: [GENERAL] Need Some Recent Information on the Differences between Postgres and MySql

2010-06-25 Thread dennis jenkins
On Fri, Jun 25, 2010 at 6:58 AM, Scott Marlowe scott.marl...@gmail.com wrote:


 Biggest difference between MySQL and PostgreSQL? The developers.



I like that...  It has a nice ring to it.


Re: [GENERAL] Connection lost

2010-05-20 Thread dennis jenkins
On Thu, May 20, 2010 at 7:49 AM, Hernan Danielan
hernandanie...@gmail.com wrote:

 The server and the client are running in the same machine. That is why a
 network connection problem is almost impossible. I do not know if I am doing
 something wrong; my code is in my first message. It is a strange thing,
 because with files less than 500KB it works great. However, with a 1.4MB file
 it fails most of the time. Does JDBC have a timeout or something? Does
 anybody know another connector that I could use in order to test it?

 I am really desperate because I have been with this problem for several
 days now.

 Thank in advance,
 Hernan


If you can duplicate the problem on demand, then I would try this:

1)  Disable SSL (so that TCPDUMP capture files are readable).

2)  I don't remember if TCPDUMP can sniff traffic on the lo (loopback)
adapter on Linux, but I'm pretty sure that you can't sniff loopback on
Windows (wireshark) or Solaris (with either tcpdump or snoop; the kernel
won't support it).  If you are using a unix socket, then configure your
client to use an IP bound to a NIC on the server.

3)  Sniff the traffic between client and server.  If using TCPDUMP, use the
options -s 0 (capture all bytes of the packet) and -n (don't do DNS
lookups).  If using snoop on Solaris, the options are different.  (Example
commands for steps 3-5 follow after this list.)

4)  Run your client using strace (Linux) or truss (BSD and Solaris) to
record all syscalls.  You probably want to use the -o option (strace) to
log to a file (instead of STDOUT).

5)  If you can connect to the backend and then pause, attach strace or
truss to the PID of the backend (use the -p pid argument).

6)  Let it run until it crashes.

7)  Compare the network and syscall traces with each other.  The network
trace should show you who hung up (who sent the TCP RST or TCP FIN packet),
and whether or not it was preceded by a delay.  The syscall trace will show
you what the kernel thought your process(es) were doing.  Your answer is
hopefully in there somewhere.  However, it is entirely possible that the
client hung up on the server of its own volition and neither trace will show
you why. :(  But you might get lucky too.
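
For concreteness, the capture and trace commands might look like this (a
sketch; the interface, address, PID, and client class name are all
hypothetical):

    # step 3: capture whole packets, no DNS lookups, write to a file
    tcpdump -i eth0 -s 0 -n -w client.pcap host 192.0.2.10 and port 5432

    # step 4: trace the client's syscalls (with timestamps) to a file
    strace -o client.trace -f -tt java HernanClient

    # step 5: attach to an already-running backend
    strace -o backend.trace -f -tt -p 12345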


Re: [GENERAL] Reliability of Windows versions 8.3 or 8.4

2010-05-12 Thread dennis jenkins
On Wed, May 12, 2010 at 11:33 AM, Richard Broersma 
richard.broer...@gmail.com wrote:

 On Wed, May 12, 2010 at 9:18 AM, Justin Graf jus...@magwerks.com wrote:

  I would use a plain text file, something like XML.  Given this is for
  industrial use, 10 years is a good number for warranty and support, but
  this stuff will hang around years later; think 20 to 30 years.  How
  many people understand FLAT ISAM tables from the 1980's today, let alone
  have tools to read/modify the records.
 
  I suggest storing the records in manner that is human readable

 These are all good points.  There is one concern that I do have, this
 information will be used to audit the billing system.  Is there any
 concern for loss of data if a file rewrite is interrupted by a power
 failure?   When using postgres there are some protections provided to
 reduce this kind of data loss.

 However, I do agree that tabular/xml data would stand the test of time.


I would suggest sqlite (specifically, version 3).  It is well tested to
survive system crashes.  It is an embedded database engine (it runs in the
same address space as your process; you use it just like any other C
library).  If your program is not running, then neither is it, and the
database file(s) are not open on the disk.  It also works great with perl
and can even be used from a CMD script (by spawning its shell,
'sqlite.exe').  I use sqlite in a variety of projects where Postgresql
would be overkill and I've been very happy with it.

http://www.sqlite.org/sqlite.html
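
To give a flavor of the C API (a minimal sketch with a hypothetical file
and table name; error handling trimmed):

    #include <sqlite3.h>

    /* append one audit record; returns 0 on success */
    int log_audit_event(void)
    {
        sqlite3 *db;
        if (sqlite3_open("billing_audit.db", &db) != SQLITE_OK)
            return 1;

        /* Every statement is its own journaled transaction, so a power
           failure leaves either the old or the new state on disk,
           never a torn file. */
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS audit (ts TEXT, event TEXT)",
                     NULL, NULL, NULL);
        sqlite3_exec(db, "INSERT INTO audit VALUES (datetime('now'), 'meter read')",
                     NULL, NULL, NULL);

        sqlite3_close(db);
        return 0;
    }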


Re: [GENERAL] log_statement and syslog severity

2010-03-10 Thread dennis jenkins
On Wed, Mar 10, 2010 at 10:55 AM, Ben Chobot be...@silentmedia.com wrote:

 On Mar 10, 2010, at 12:15 AM, Stuart Bishop wrote:

  syslog doesn't give you easily machine readable output. I'm not sure how
 syslog implementations handle high load (our sysadmins won't use it, so I
 haven't investigated this further).

 Have  you looked into syslog-ng? I believe it does just this sort of thing
 for you.


We use syslog-ng for sorting our syslogs into different log files.  It works
well enough.

What annoys me is that postgresql will split a SQL statement across several
syslog events.  I know that syslog itself has a maximum message size and
that splitting is required for really long SQL.  However, I wanted to log
each SQL statement as a single event.  Syslog-ng can direct output to a
pipe.  Putting these together, I had begun working on a system where our
syslog host (which collects logs from lots of other, unrelated systems)
would route the postgresql logs to STDIN of a perl program that would
reassemble the individual SQL statements and write them out in a different
format.  I never completed the project (I just got busy with more important
things).  However, I thought that this approach was feasible.
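
The syslog-ng side of that is only a few lines (a sketch; the script path
and source name are hypothetical, and the reassembly script is the part I
never finished):

    destination d_pgsql { program("/usr/local/bin/pg-sql-reassemble.pl"); };
    filter f_pgsql { program("postgres"); };
    log { source(s_local); filter(f_pgsql); destination(d_pgsql); };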

That being said, I would love it if Postgresql had a pluggable logging
mechanism.  Maybe it could load a .so/.dll that would handle logging.  That
.so/.dll would export functions like these:

void    pg_log_init (/* some params related to the database instance */);
void    pg_log_done (void);  /* called on database shutdown */

void    pg_log_sql  (int runtime, int rows, int status, const char *sql,
                     const char *user);

status would be some code indicating whether the SQL statement was
successful or not.

The above is just off the top of my head.  I've done no real research on
whether it would be sufficient or correct.

Just an idea...


Re: [GENERAL] hardware information

2009-09-15 Thread dennis jenkins
On Tue, Sep 15, 2009 at 3:03 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Tue, Sep 15, 2009 at 1:49 PM, John R Pierce pie...@hogranch.com wrote:
 std pik wrote:

 Hello all..
 I'm using PostgreSQL 8.3..
 How can I get information about the hardware utilization:
       - CPU usage.
       - Disk space.
       - Memory allocation.



 what operating system are you on?   If its Linux or some flavor of Unix, I'd
 use a combination of ps(1), df(1)/du(1), and top(1)

 Pretty much also anything in sysstat package

htop is really nice too:  http://htop.sourceforge.net/
(disclaimer - I did not write it)



[GENERAL] How do I use tsvector_update_trigger to index non-character columns?

2009-09-03 Thread dennis jenkins
Hello.
 I have several tables in a database that I want to use with full
text searching.  Some of the fields in some of these tables are not
character types, but for the purposes of searching, I'd like to
include their character representation in the tsvector.
Unfortunately, I cannot make this work without duplicating the column
(keeping the original as an integer plus a duplicate as 'text', with a
trigger to regenerate the duplicate on insert/update).  I would prefer
a cleaner approach.

I've reduced the problem to a small amount of SQL (included below)
that illustrates my problem.  Thank you kindly for any assistance.

capybara=# select version();
                                version
------------------------------------------------------------------------
 PostgreSQL 8.3.7 on i686-pc-linux-gnu, compiled by GCC
 i686-pc-linux-gnu-gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.1)
(1 row)


-- Attempt #1:


capybara=# drop table if exists abc;
DROP TABLE

capybara=# create table abc (
abc_id serial not null,
client_num integer not null,
abc_name text not null,
tsv tsvector,
constraint abc_pkey primary key (abc_id)
) with (oids=false);
NOTICE:  CREATE TABLE will create implicit sequence "abc_abc_id_seq"
for serial column "abc.abc_id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"abc_pkey" for table "abc"
CREATE TABLE

capybara=# CREATE TRIGGER abc_tsv_update BEFORE INSERT OR UPDATE ON
abc FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('tsv',
'pg_catalog.english', 'client_num', 'abc_name');
CREATE TRIGGER

capybara=# insert into abc (client_num, abc_name) values (2751, 'bob');
ERROR:  column "client_num" is not of character type


--- Attempt #2
-- same table, different trigger function:


capybara=# CREATE TRIGGER abc_tsv_update BEFORE INSERT OR UPDATE ON
abc FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('tsv',
'pg_catalog.english', 'cast (client_num as text)', 'abc_name');
CREATE TRIGGER

capybara=# insert into abc (client_num, abc_name) values (2751, 'bob');
ERROR:  column "cast (client_num as text)" does not exist
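
One workaround (a sketch, untested, written against the table above) is to
skip tsvector_update_trigger entirely and use a small trigger function that
performs the cast itself:

    CREATE FUNCTION abc_tsv_update() RETURNS trigger AS $$
    BEGIN
        NEW.tsv := to_tsvector('pg_catalog.english',
                               coalesce(NEW.client_num::text, '') || ' ' ||
                               coalesce(NEW.abc_name, ''));
        RETURN NEW;
    END
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER abc_tsv_update BEFORE INSERT OR UPDATE ON abc
        FOR EACH ROW EXECUTE PROCEDURE abc_tsv_update();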



Re: [GENERAL] function that resolves IP addresses

2005-10-31 Thread Dennis Jenkins


--- Joshua D. Drake [EMAIL PROTECTED] wrote:
It is not fully debugged, but this is what I wrote a
few months ago for sh*ts and grins.

/* djenkins, 2005-7-22

Implements poor-man's reverse DNS lookup tool for use in
Postgresql SQL functions.

CREATE FUNCTION reverse_dns_lookup(text) RETURNS text
    AS 'dns_tools.so', 'reverse_dns_lookup'
    LANGUAGE C STRICT;

CREATE FUNCTION forward_dns_lookup(text) RETURNS text
    AS 'dns_tools.so', 'forward_dns_lookup'
    LANGUAGE C STRICT;
*/

#include "postgres.h"
#include <string.h>
#include "fmgr.h"
#include <stdlib.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <syslog.h>

PG_FUNCTION_INFO_V1(forward_dns_lookup);

Datum   forward_dns_lookup(PG_FUNCTION_ARGS)
{
    text           *t = PG_GETARG_TEXT_P(0);
    struct hostent *he = NULL;
    int             ret_len = 0;
    text           *ret_text = NULL;
    char           *in_str = VARDATA(t);
    int             in_len = VARSIZE(t) - VARHDRSZ;
    char            temp[256];

    if (!in_str || (in_len > sizeof(temp) - 1))
    {
        PG_RETURN_NULL();
    }

    strncpy(temp, in_str, in_len);
    temp[in_len] = 0;
    he = gethostbyname(temp);
    if (!he)
    {
        PG_RETURN_NULL();
    }

    strncpy(temp, inet_ntoa(*((struct in_addr *) he->h_addr)), sizeof(temp));
    ret_len = strlen(temp);

//  syslog(LOG_DEBUG, "'%s'[%d] => '%s'[%d]\n", in_str, strlen(in_str), temp, ret_len);

    ret_text = (text *) palloc(ret_len + VARHDRSZ);
    VARATT_SIZEP(ret_text) = ret_len + VARHDRSZ;
    memcpy(VARDATA(ret_text), temp, ret_len);

    PG_RETURN_TEXT_P(ret_text);
}

PG_FUNCTION_INFO_V1(reverse_dns_lookup);

Datum   reverse_dns_lookup(PG_FUNCTION_ARGS)
{
    text           *t = PG_GETARG_TEXT_P(0);
    struct in_addr  in;
    struct hostent *he = NULL;
    unsigned long  *l = (unsigned long *) ((void *) &in);
    int             ret_len = 0;
    text           *ret_text = NULL;
    char           *in_str = VARDATA(t);
    int             in_len = VARSIZE(t) - VARHDRSZ;
    char            temp[16];

    if (!in_str || (in_len > sizeof(temp) - 1))
    {
        PG_RETURN_NULL();
    }

    memcpy(temp, in_str, in_len);
    temp[in_len] = 0;

    // First, convert the string to an IPV4 'long'
    memset(&in, 0, sizeof(in));
    if (!inet_aton(temp, &in))
    {
//      syslog(LOG_DEBUG, "inet_aton('%s'[%d]) failed: %d {%08lx}", in_str, strlen(in_str), errno, *l);
        PG_RETURN_NULL();
    }

    he = gethostbyaddr((char *) l, 4, AF_INET);
    if (!he)
    {
//      syslog(LOG_DEBUG, "gethostbyaddr('%s') failed: %d", in_str, errno);
        PG_RETURN_NULL();
    }

    // return string is in 'he->h_name'
    ret_len = strlen(he->h_name);
    ret_text = palloc(ret_len + VARHDRSZ);
    VARATT_SIZEP(ret_text) = ret_len + VARHDRSZ;
    memcpy(VARDATA(ret_text), he->h_name, ret_len);

    PG_RETURN_TEXT_P(ret_text);
}
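
Once built and installed, usage from psql is simple (a sketch; the host
name and address here are placeholders):

    SELECT forward_dns_lookup('www.example.com');  -- e.g. '192.0.2.1'
    SELECT reverse_dns_lookup('192.0.2.1');        -- a hostname, or NULL

Both functions return NULL when the lookup fails.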



Dennis Jenkins



Re: [GENERAL] Function written in C, hangs on one machine and not another...

2005-10-28 Thread Dennis Jenkins


--- Douglas McNaught [EMAIL PROTECTED] wrote:

 CG [EMAIL PROTECTED] writes:
 
  Does this give you any more insight into an
 alternate method of getting this
  thing done?
 
 I would fork(), set up file descriptors
 appropriately, then have the
 child call the Adobe library and the parent feed the
 data to it.
 Once the document is loaded in the child, do
 whatever processing you
 need to, then pass the results back to the parent
 via stdout or a
 temporary file. 
 
 Ugly, but probably the most robust way to do it. 
 Make sure you don't
 call any PG-internal functions in the child process,
 as that will
 confuse things badly.  
 

Is it safe for the postgres engine to fork()?  Would
the child need to close down anything immediately in
its main() to avoid corrupting the parent?
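
For reference, the pattern Douglas describes might look roughly like this
(a sketch only; 'run_adobe_toolkit' is a hypothetical stand-in for the
Adobe library call, and error handling is omitted):

    #include <unistd.h>
    #include <sys/wait.h>

    extern void run_adobe_toolkit(int in_fd, int out_fd);  /* hypothetical */

    void process_document(const char *data, size_t len)
    {
        int   to_child[2], from_child[2];
        pid_t pid;

        pipe(to_child);
        pipe(from_child);

        pid = fork();
        if (pid == 0)
        {
            /* Child: read the document from to_child[0], write results
               to from_child[1].  Call no PG-internal functions here. */
            close(to_child[1]);
            close(from_child[0]);
            run_adobe_toolkit(to_child[0], from_child[1]);
            _exit(0);
        }

        /* Parent: feed the data, then collect the results. */
        close(to_child[0]);
        close(from_child[1]);
        write(to_child[1], data, len);
        close(to_child[1]);
        /* ... read results from from_child[0] ... */
        waitpid(pid, NULL, 0);
    }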


Dennis Jenkins



Re: [GENERAL] Function written in C, hangs on one machine and not another...

2005-10-28 Thread Dennis Jenkins
--- CG [EMAIL PROTECTED] wrote:
 
 There's no other way to load data into the toolkit!
 (Can you /feel/ the
 insanity?)
 
 Does this give you any more insight into an
 alternate method of getting this
 thing done?
 

Write a completely separate process to handle your
FDF stuff.  Have this new process expose a
communications channel (message queues, sockets,
shared memory, etc...).  Write your PostgreSQL 'C'
function to use this channel.

You'll get almost complete separation and the ability
to debug each piece independently of the other.  You can
write stubs for both ends: a fake server for testing
the PostgreSQL part, and a fake client for testing
the daemon that you wrote.


Dennis Jenkins



[GENERAL] SPI_prepare, SPI_execute_plan do not return rows when using parameters

2005-10-05 Thread Dennis Jenkins
.  If I remove the parametrization, I get back the
row that I was looking for.


Portions of my Makefile (using FreeBSD 'make', not
GNU's 'gmake'):

INCDIR != pg_config --includedir-server
CFLAGS = -fpic -ggdb -Wall -O2 -I$(INCDIR) -I/usr/local/include

parse_micr.so: parse_micr.o
	gcc -shared -o $@ parse_micr.o

Dennis Jenkins



Re: [GENERAL] SPI_prepare, SPI_execute_plan do not return rows when using parameters

2005-10-05 Thread Dennis Jenkins


--- Tom Lane [EMAIL PROTECTED] wrote:

 Dennis Jenkins [EMAIL PROTECTED]
 writes:
  My problem is that a query that should be
 returning a
  row is returning zero rows when I use a
 parametrized
  query.
 
 You're passing the wrong parameter value, and
 probably not declaring it
 to be the right type either.  CHAROID is not the
 type you think it is
 (BPCHAROID is what you want), and CStringGetDatum
 is not the way to
 convert a C string into a char(N) datum.  The most
 bulletproof way
 to do the latter is to use DirectFunctionCall3 to
 invoke bpcharin().
 

Thank you.  I was able to make the code work.  I
changed CHAROID to BPCHAROID and the bind[0] line
now reads:

bind[0] = DirectFunctionCall3(bpcharin, (Datum) transit,
                              0 /* unused */, 9 + sizeof(VARHDRSZ));
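
In context, the full call sequence comes out roughly like this (a sketch
with a hypothetical table and column; 'transit' holds the C string being
matched against a char(9) column):

    Oid   argtypes[1] = { BPCHAROID };
    Datum bind[1];
    void *plan;

    bind[0] = DirectFunctionCall3(bpcharin, CStringGetDatum(transit),
                                  ObjectIdGetDatum(InvalidOid),
                                  Int32GetDatum(9 + VARHDRSZ));

    plan = SPI_prepare("SELECT * FROM checks WHERE transit = $1",
                       1, argtypes);
    SPI_execute_plan(plan, bind, NULL, true, 0);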


However, I must admit that I could find no usable
documentation for either DirectFunctionCall3 or
bpcharin online.  I had to extract the source code
of the engine and read
src/backend/utils/adt/varchar.c to learn what
arguments to pass to DFC3.

If I create a sample stored procedure that uses the
above functions and submit it to the documentation
people, would they want it?  Would they include it in
the docs?  Who are these magic people and where do I
find them?

Thank you very much for your time.  Our code is now
working, so I'm a happy camper.


Dennis Jenkins



Re: [GENERAL] Problem creating a database in a script; 'template1 is being accessed'

2005-09-09 Thread Dennis Jenkins


--- Madison Kelly [EMAIL PROTECTED] wrote:

 Hi all,
 
  createdb: database creation failed: ERROR:  source
 database "template1"
 is being accessed by other users


I get that error every time anyone is connected
to the database using pgAdmin.  We have to get
everyone to disconnect before we can create a
database.

Everyone out of the pool!  Adult swim!
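
A quick way to see who is still in the pool (a sketch, using the
pg_stat_activity column names from that era):

    SELECT procpid, usename, current_query
      FROM pg_stat_activity
     WHERE datname = 'template1';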


Dennis Jenkins
