Re: [HACKERS] Warm-Standby using WAL archiving / Separate

2006-07-11 Thread Andrew Rawnsley

Just having a standby mode that survived shutdown/startup would be a nice
start...

I also do the blocking-restore-command technique, which although workable,
has a bit of a house-of-cards feel to it sometimes.



On 7/10/06 5:40 PM, Florian G. Pflug [EMAIL PROTECTED] wrote:

 Merlin Moncure wrote:
 On 7/10/06, Florian G. Pflug [EMAIL PROTECTED] wrote:
 This method seems to work, but it is neither particularly fool-proof nor
 administrator-friendly. It's not possible, e.g., to reboot the slave
 without postgres aborting the recovery, and therefore processing all WALs
 generated since the last backup all over again.
 
 Monitoring this system is hard too, since there is no easy way to
 detect errors while restoring a particular WAL.
 
 what I would really like to see is to have the postmaster start up in
 a special read only mode where it could auto-restore wal files placed
 there by an external process but not generate any of its own.  This
 would be a step towards a pitr based simple replication method.
 
 I didn't dare to ask for being able to actually _access_ a wal-shipping
 based slave (in read-only mode) - from how I interpret the code, it's
 a _long_ way to get that working. So I figured a stand-alone executable
 that just recovers _one_ archived WAL would at least remove the
 administrative burden that my current solution brings. And it would be
 easy to monitor the Y



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] streamlined standby procedure

2006-02-08 Thread Andrew Rawnsley



On 2/7/06 1:19 PM, Tom Lane [EMAIL PROTECTED] wrote:

 Andrew Rawnsley [EMAIL PROTECTED] writes:
 IMHO the #1 priority in the current PITR/WAL shipping system is to make the
 standby able to tolerate being shut down and restarted, i.e. actually having
 a true standby mode and not the current method of doing it only on startup.
 
 How is shutting down the standby a good idea?  Seems like that will
 block the master too --- or at least result in WAL log files piling up
 rapidly.  If the standby goes off-line, abandoning it and starting from
 a fresh base backup when you are ready to restart it seems like the most
 likely recovery path.  For sure I don't see this as the #1 priority.
 
 regards, tom lane

I wasn't suggesting this in the context of Csaba's auto-ship plan (and, to
be clear, not #1 in the context of entire database development - just
PITR).

For one, sometimes you have no choice about the standby being shut down, but
most of the time you can plan for that. As for Csaba's question of why I
would want to create a copy of a standby, it's the easiest way to create
development and testing snapshots at standby locations, and for making
paranoid operations people confident that your standby procedures are
working. I do it with my Oracle (pardon the 'O' word) installations all the
time, and I despise being able to do something with Oracle that I can't with
PG.

I ship WAL logs around in batches independent of the archive command to
several locations. Either I:

A) let the logs 'pile up' on the standby (crap has to pile up somewhere),
and apply them should the standby be needed (could take some time should the
'pile' be large). The only way here to keep the recovery time short is to
re-image the database frequently and ship it around. Not nice with big
databases.

B) Do the blocking recover command to continually apply the logs as they get
moved around. While this can generate good clever points, it's a rig.
Fragile.

To me the question isn't 'How is shutting down the standby a good idea?',
it's 'How is shutting down the standby not a bad idea?'. Different points of
view, I suppose - in my situation the standby going offline is not a
catastrophic event like the primary would be; it's even a useful thing. If
there was some rman-style thing like people have suggested to auto-ship logs
around, then yeah, dealing with an offline standby could be a tricky thing
(but would need some solution anyway). But hell, Slony and Mammoth can
tolerate it, I just would like log shipping to handle it also.

Maybe it isn't the #1 priority, but it's something I view as a limitation,
and not just a lacking feature. It's something I can't control. As I
originally mentioned, the customizable archive/restore feature is great,
superior to dealing with it in Oracle. But the standby mode makes the Oracle
setup more bulletproof.



-- 

Andrew Rawnsley
Chief Technology Officer
Investor Analytics, LLC
(740) 587-0114
http://www.investoranalytics.com






Re: [HACKERS] streamlined standby procedure

2006-02-07 Thread Andrew Rawnsley

IMHO the #1 priority in the current PITR/WAL shipping system is to make the
standby able to tolerate being shut down and restarted, i.e. actually having
a true standby mode and not the current method of doing it only on startup.

While it is a trivial thing to fool postgres into staying in startup/restore
mode with a restore_command that blocks until more files are available, if
the machine needs to be shut down for whatever reason you have to go back to
the last image and replay to the present, which isn't always convenient. Nor
are you able to shut down the standby, copy it to a second instance to use
for testing/development/whatever, and restart the standby.

(Just to be clear - I _really_ like the flexibility of the customizable
archive and restore structure with PITR in PG, but the lack of a standby
mode always reminds me of whacking my forehead at 3am on the too-low doorway
into my son's bedroom...)
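For the record, a minimal sketch of that blocking restore_command trick. This is a hypothetical helper of my own; the stop-file convention and all paths are illustrative, not anything PostgreSQL provides:

```python
#!/usr/bin/env python
"""Sketch of a blocking restore_command for a warm standby.

Hypothetical usage in the recovery configuration (names illustrative only):
    restore_command = 'wait_restore.py /archive/%f %p'
"""
import os
import shutil
import sys
import time

def wait_and_restore(archive_path, target_path, stop_file="/tmp/pg_stop_standby"):
    # Block until the archived WAL segment appears; postgres stays in
    # recovery for as long as this command keeps it waiting.
    while not os.path.exists(archive_path):
        if os.path.exists(stop_file):
            # Operator wants recovery to end: fail the restore, so
            # postgres finishes recovery and opens for connections.
            return 1
        time.sleep(1)
    shutil.copy(archive_path, target_path)
    return 0

if __name__ == "__main__" and len(sys.argv) >= 3:
    sys.exit(wait_and_restore(sys.argv[1], sys.argv[2]))
```

The house-of-cards feel comes from exactly this: a reboot kills the waiting restore_command, recovery aborts, and you are back to replaying everything from the last base backup.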


On 2/7/06 10:11 AM, Csaba Nagy [EMAIL PROTECTED] wrote:

 Hi all,
 
 I decided to start implementing a streamlined WAL shipping based standby
 building procedure. My aim is fairly simple: to be able to build a
 standby as automated as possible.
 
 The ultimate simplicity would be for me:
  - install postgres on the standby machine;
  - create a directory for the database files, containing
 postgresql.conf and pg_hba.conf, and a standby.conf file;
  - start up the postmaster with a --build-standby option;
 
 All the rest should be done automatically by postgres.
 
 The procedure should be something similar to the one available today if
 you do it manually. The main difference would be that the standby
 postmaster should connect to the primary server, and get all table data
 and WAL record stream through normal database connections...
 
 To facilitate this process, I thought about why not expose the WAL files
 through a system view ? Something along the lines of:
 
 pg_wal (
   name text,
   walrecords blob,
   iscurrent boolean
 )
 
 Then anybody interested in the WAL record stream could easily find out
 which is the current WAL record, and get any of the existing WAL records
 by streaming the blob. Closed WAL files would be streamed completely,
 and the current WAL file could be streamed in realtime as it is
 created... this would facilitate an always as up to date as possible
 standby, as it could get the WAL records in real time.
 
 To make it possible to reliably get closed WAL records, a WAL
 subscription system could be created, where a subscriber (the standby)
 could signal which is the oldest WAL file it did not get yet. The
 primary machine would keep all the WAL files extending back to the
 oldest subscribed one. Then each time the subscriber finishes processing
 a WAL file, it can signal its interest in the next one. This could be
 implemented by a table like:
 
 pg_wal_subscription (
   subscriber text,
   name text
 )
 
 The subscribers would insert a record in this table, and update it to
 the next WAL file after they processed one. The subscriber names should
 be unique across subscribers, this should be managed by the admin who
 sets up the subscribers. When the subscriber is not interested anymore,
 it can delete its subscription record. That could be done by the DBA
 too if things go haywire...
 
 To build a standby based on log shipping it is necessary to copy over
 all the database files too. That could also be done by exposing them
 through some view, which in turn might take advantage of knowledge of
 the table structure to compress the data to be transferred. The main
 idea is to do all transfers through normal DB connections, so the only
 configuration to be done is to point the standby to the master
 machine...
 
 So, all this said, I'm not too familiar with either C programming or the
 postgres sources, but I'm willing to learn. And the project as a whole
 seems a bit too much to do it in one piece, so my first aim is to expose
 the WAL records in a system view.
 
 I would really appreciate any comments you have...
 
 Thanks,
 Csaba.
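The subscription protocol Csaba proposes above can be modeled in a few lines. This is only a thought experiment: pg_wal_subscription does not exist in PostgreSQL, so an in-memory SQLite table stands in for the primary's catalog:

```python
import sqlite3

# Stand-in for the proposed pg_wal_subscription table (hypothetical --
# nothing like it exists in PostgreSQL; SQLite models the semantics).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pg_wal_subscription (subscriber TEXT PRIMARY KEY,"
           " name TEXT)")

def subscribe(subscriber, oldest_wal):
    # A standby registers the oldest WAL file it has not yet received.
    db.execute("INSERT INTO pg_wal_subscription VALUES (?, ?)",
               (subscriber, oldest_wal))

def advance(subscriber, next_wal):
    # After processing a WAL file, the standby updates its record to the
    # next file it is interested in.
    db.execute("UPDATE pg_wal_subscription SET name = ? WHERE subscriber = ?",
               (next_wal, subscriber))

def unsubscribe(subscriber):
    # No longer interested: delete the subscription record.
    db.execute("DELETE FROM pg_wal_subscription WHERE subscriber = ?",
               (subscriber,))

def oldest_needed():
    # The primary must keep every WAL file back to the oldest subscribed
    # one; WAL file names are fixed-width hex, so MIN() sorts correctly.
    return db.execute("SELECT MIN(name) FROM pg_wal_subscription").fetchone()[0]
```

With several standbys subscribed, oldest_needed() tells the primary which segments it may not recycle yet; as the laggard advances or unsubscribes, the retention horizon moves forward.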
 
 
 






Re: [HACKERS] Problem with PITR recovery

2005-04-20 Thread Andrew Rawnsley
It is also recommended when creating new standby control files, when Oracle
can't automatically expand the data file capacity on a standby like it does
with a live database. Nothing like seeing the 'Didn't restore from
sufficiently old backup' message when Oracle is confused (which seems to be
most of the time) about what transactions have been applied where.

This, of course, doesn't matter for postgresql. Thank the gods.
On Apr 20, 2005, at 3:28 AM, Klaus Naumann wrote:
Hi Simon,
Actually, me too. Never saw the need for the Oracle command myself.
It actually has. If you want to move your redo logs to a new disk, you
create a new redo log file and then issue a ALTER SYSTEM SWITCH 
LOGFILE;
to switch to the new logfile. Then you can remove the old one
(speaking just of one file for simplification).
Waiting on that event could take ages.

Strictly speaking, this doesn't concern postgresql (yet). But if, in the
future, we support user-defined (= changing these parameters while the
db is running) redo log locations, sizes and count, we need a function
to switch the logfile manually. Which I think the pg_stop_backup()
hack is not suitable for.




Andrew Rawnsley
Chief Technology Officer
Investor Analytics, LLC
(740) 587-0114
http://www.investoranalytics.com


Re: [HACKERS] Call for port reports

2004-12-07 Thread Andrew Rawnsley
smallmouth:~/tmp ronz$ uname -a
Darwin smallmouth.local 7.5.0 Darwin Kernel Version 7.5.0: Thu Aug 5 19:26:16 PDT 2004; root:xnu/xnu-517.7.21.obj~3/RELEASE_PPC Power Macintosh powerpc

(or OS X 10.3.5)
./configure --prefix=/Users/ronz/tmp/pgsql8 --enable-thread-safety --with-tcl --with-perl --with-python --with-krb5 --with-pam --with-openssl --with-libs=/sw/lib --with-includes=/sw/include

all 96 tests passed
[local]:template1=# select version();
                                                          version
---------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 8.0.0rc1 on powerpc-apple-darwin7.5.0, compiled by GCC gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1495)


Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] Time off

2004-10-19 Thread Andrew Rawnsley
On Oct 19, 2004, at 2:05 PM, Joshua D. Drake wrote:
There comes the time in every hackers life when he discovers that 
even unsuccessfully chasing girls can be more fun than debugging 
kernel modules or interface libraries. Some get over that phase 
without greater collateral damage, some become successful in the 
chasing, some then get caught by the upgrade policies of this quite 
different kind of hard- and software, and some even go that far that 
they experiment with its replication features ... and believe me, it 
takes a lot of time to get those replicas running :-)
You're telling me and we are not even legally allowed to use them as 
slaves ;)

It's also an unusual replication scheme in that, more often than not, 
the slaves control the masters.


Sincerely,
Joshua D. Drake

Jan
Regards,
Andreas

--
Command Prompt, Inc., home of PostgreSQL Replication, and plPHP.
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL


Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] version upgrade

2004-09-01 Thread Andrew Rawnsley
On Aug 31, 2004, at 11:35 PM, Jan Wieck wrote:
On 8/31/2004 9:38 PM, Andrew Rawnsley wrote:
On Aug 31, 2004, at 6:23 PM, Marc G. Fournier wrote:
On Tue, 31 Aug 2004, Josh Berkus wrote:
Andrew,
If I were loony enough to want to make an attempt at a version updater
(i.e. migrate a 7.4 database to 8.0 without an initdb), any suggestions
on where to poke first? Does a catalog/list of system catalog changes
exist anywhere? Any really gross problems immediately present themselves?
Is dusting off pg_upgrade a good place to start, or is that a dead end?
Join the Slony project? Seriously, this is one of the uses of slony. All
you'd need would be a script that would:

I thought of this quite a bit when I was working over eRServer a while back.
It's _better_ than a dump and restore, since you can keep the master up
while the 'upgrade' is happening. But Mark is right - it can be quite
problematic from an equivalent resource point of view. An in-place system
(even a faux setup like pg_upgrade) would be easier to deal with in many
situations.
There is something that you will not (or only under severe risk) get
with an in-place upgrade system: the ability to downgrade in case your
QA missed a few gotchas. The application might not instantly eat the
data, but it might start to sputter and hobble here and there.

With the Slony system, you not only switch over to the new version, but
you keep the old system as a slave. That means that if you discover 4
hours after the upgrade that the new version bails out with errors on a
lot of queries from the application, you have the chance to switch back
to the old version without having lost a single committed transaction.


What, you don't like living out on the edge? :)
Doing an upgrade via replication is a great way to do it, if you have  
the resources available to do so, no argument there.

Jan
In the end, using a replication system OR a working pg_upgrade is still
a pretty creaky workaround. Having to do either tends to lob about 15
pounds of nails into the gears when trying to develop a business case
about upgrading (doesn't necessarily stop it dead, but does get
everyone's attention...). The day when a dump/restore is not necessary
is the day all of us are hoping for.
1) Install PG 8.0 to an alternate directory;
2) Start 8.0;
3) install Slony on both instances (the 7.4 and the 8.0);
4) make 7.4 the master and start replicating
5) when 8.0 is caught up, stop 7.4 and promote it to Master
6) turn off Slony.
Slony is not an upgrade utility, and falls short in one big case ..  
literally .. a very large database with limited cash resources to  
duplicate it (as far as hardware is concerned).  In small shops, or  
those with 'free budget', Slony is perfect ... but if you are in an  
organization where getting money is like pulling teeth, picking up a  
new server just to do an upgrade can prove to be difficult ...
In many cases the mere idea of doing an upgrade proves to be difficult,  
before you even get to what upgrade procedure to use or whether you  
need hardware or not. Add in either of those two issues and people  
start to quiver and shake.



Marc G. Fournier   Hub.Org Networking Services  
(http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ:  
7615664



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#=================================================== [EMAIL PROTECTED] #


Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] version upgrade

2004-08-31 Thread Andrew Rawnsley
On Aug 31, 2004, at 6:23 PM, Marc G. Fournier wrote:
On Tue, 31 Aug 2004, Josh Berkus wrote:
Andrew,
If I were loony enough to want to make an attempt at a version updater
(i.e. migrate a 7.4 database to 8.0 without an initdb), any suggestions
on where to poke first? Does a catalog/list of system catalog changes
exist anywhere? Any really gross problems immediately present themselves?
Is dusting off pg_upgrade a good place to start, or is that a dead end?
Join the Slony project? Seriously, this is one of the uses of slony. All
you'd need would be a script that would:

I thought of this quite a bit when I was working over eRServer a while
back.

It's _better_ than a dump and restore, since you can keep the master up
while the 'upgrade' is happening. But Mark is right - it can be quite
problematic from an equivalent resource point of view. An in-place
system (even a faux setup like pg_upgrade) would be easier to deal with
in many situations.

In the end, using a replication system OR a working pg_upgrade is still
a pretty creaky workaround. Having to do either tends to lob about 15
pounds of nails into the gears when trying to develop a business case
about upgrading (doesn't necessarily stop it dead, but does get
everyone's attention...). The day when a dump/restore is not necessary
is the day all of us are hoping for.


1) Install PG 8.0 to an alternate directory;
2) Start 8.0;
3) install Slony on both instances (the 7.4 and the 8.0);
4) make 7.4 the master and start replicating
5) when 8.0 is caught up, stop 7.4 and promote it to Master
6) turn off Slony.
Slony is not an upgrade utility, and falls short in one big case .. 
literally .. a very large database with limited cash resources to 
duplicate it (as far as hardware is concerned).  In small shops, or 
those with 'free budget', Slony is perfect ... but if you are in an 
organization where getting money is like pulling teeth, picking up a 
new server just to do an upgrade can prove to be difficult ...


Marc G. Fournier   Hub.Org Networking Services 
(http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 
7615664



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


[HACKERS] version upgrade

2004-08-30 Thread Andrew Rawnsley
If I were loony enough to want to make an attempt at a version updater
(i.e. migrate a 7.4 database to 8.0 without an initdb), any suggestions
on where to poke first? Does a catalog/list of system catalog changes
exist anywhere? Any really gross problems immediately present themselves?
Is dusting off pg_upgrade a good place to start, or is that a dead end?

Is there any chance of getting something to work at all?
(I figure I made enough of a stink about it last year about this time,
I should at least make the attempt...or have one of my minions do it...)


Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] Too-many-files errors on OS X

2004-02-23 Thread Andrew Rawnsley
On Slackware 8.1:
[EMAIL PROTECTED]:~/src$ ./eatallfds libm.so libtcl.so libjpeg.so
dup() failed: Too many open files
Was able to use 1021 file descriptors
dup() failed: Too many open files
Was able to use 1021 file descriptors after opening 3 shared libs
On OpenBSD 3.1:
grayling# ./eatallfds libcrypto.so.10.0 libkrb5.so.13.0 libncurses.so.9.0
dup() failed: Too many open files
Was able to use 125 file descriptors
dup() failed: Too many open files
Was able to use 125 file descriptors after opening 3 shared libs
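Tom's attached C program isn't reproduced in the archive. A rough Python equivalent of the same measurement, dup() until EMFILE and again after loading a few shared libraries, might look like the following. The library names and the lowered rlimit are my own assumptions, and the resource module makes this Unix-only:

```python
import ctypes
import ctypes.util
import os
import resource
import tempfile

def count_usable_fds():
    """dup() one fd until the open-file limit bites, then clean up."""
    f = tempfile.TemporaryFile()
    fds = []
    try:
        while True:
            fds.append(os.dup(f.fileno()))
    except OSError:  # EMFILE, "Too many open files"
        pass
    finally:
        for fd in fds:
            os.close(fd)
        f.close()
    return len(fds)

# Lower the soft limit so the experiment stays quick; the original C
# program simply ran against whatever limit was in effect.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft, 256), hard))

before = count_usable_fds()

# Load a few shared libraries the way dlopen() would (names illustrative).
for name in ("m", "c"):
    path = ctypes.util.find_library(name)
    if path:
        ctypes.CDLL(path)

after = count_usable_fds()
print(before, after)
```

On the platforms reported in this thread the two counts come out equal, i.e. mapped shared libraries do not eat slots out of the descriptor limit.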



On Feb 22, 2004, at 10:41 PM, Tom Lane wrote:

Kevin Brown [EMAIL PROTECTED] writes:
Tom Lane wrote:
Hmm.  This may be OS-specific.  The shlibs certainly show up in the
output of lsof in every variant I've checked, but do they count 
against
your open-file limit?

It seems not, for both shared libraries that are linked in at startup
time by the dynamic linker and shared libraries that are explicitly
opened via dlopen().
It would certainly make life a lot easier if we could assume that 
dlopen
doesn't reduce your open-files limit.

Attached is the test program I used.
Can folks please try this on other platforms?

			regards, tom lane




Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] Recursive queries?

2004-02-04 Thread Andrew Rawnsley
I haven't had any problems with it so far, although I haven't really 
stressed it yet.  I was going to make this very plea...

I agree that the syntax can probably be improved, but it's familiar to 
those of us unfortunate enough to have used (or still have to use)
Oracle. I imagine that bringing it more in line with any standard would 
be what people would prefer.

On Feb 4, 2004, at 5:28 AM, Hans-Jürgen Schönig wrote:

Christopher Kings-Lynne wrote:
There is a website somewhere where a guy posts his patch he is 
maintaining that does it.  I'll try to find it...
Found it.  Check it out:
http://gppl.terminal.ru/index.eng.html
Patch is current for 7.4, Oracle syntax.
Chris


I had a look at the patch.
It is still in development but it seems to work nicely - at least I 
have been able to get the same results with Oracle.

I will try it with a lot of data this afternoon so that we can compare 
Oracle vs. Pg performance. I expect horrible results ;).

Does this patch have a serious chance to make it into Pg some day?
I think Oracle's syntax is not perfect but is easy to handle and many 
people are used to it. In people's mind recursive queries = CONNECT BY 
and many people (like me) miss it sadly.

If this patch has a serious chance I'd like to do some investigation 
and some real-world data testing.

	Regards,

		Hans

--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/2952/30706 or +43/664/233 90 75
www.cybertec.at, www.postgresql.at, kernel.cybertec.at



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] pljava revisited

2003-12-10 Thread Andrew Rawnsley
On Dec 10, 2003, at 11:23 AM, Andrew Dunstan wrote:

Thomas Hallgren wrote:

Hi,
I'm working on a new pl/java prototype that I hope will become production
quality some time in the future. Before my project gets too far, I'd like
to gather some input from other users. I've taken a slightly different
approach than what seems to be the case for other attempts that I've
managed to dig up. Here are some highlights of my approach:

1. A new Java VM is spawned for each connection. I know that this will
give a performance hit when a new connection is created. The alternative,
however, implies that all calls become inter-process calls, which I think
is a much worse scenario. Especially since most modern environments today
have some kind of connection pooling. Another reason is that the
connections represent sessions, and those sessions get a very natural
isolation using separate VMs. A third reason is that the current
connection would become unavailable in a remote process (see #5).

Maybe on-demand might be better - if the particular backend doesn't 
need it why incur the overhead?

I think a JVM per connection is going to add too much overhead, even if
it's on-demand. Some platforms handle multiple JVMs better than others,
but still. 25 or so individual JVMs is going to be a mess, in terms of
resource consumption.

Start time/connect time will be an issue. Saying 'people use pools', 
while generally accurate, kind of sweeps the problem
under the carpet instead of the dust bin.

2. There's no actual Java code in the body of a function. Simply a
reference to a static method. My reasoning is that when writing (and
debugging) Java, you want to use your favorite IDE. Mixing Java with SQL
just gets messy.



Perhaps an example or two might help me understand better how this 
would work.

3. As opposed to Tcl, Python, and Perl, which for obvious reasons use
strings, my pl/java will use native types wherever possible. A flag can
be added to the function definition if real objects are preferred
instead of primitives (motivated by the fact that the primitives cannot
reflect NULL values).

4. The code is actually written using JNI and C++ but without any
templates, no -style object references, no operator overloads, external
class libraries etc. I use C++ simply to get better quality, readability
and structure on the code.

Other pl* (perl, python, tcl) languages have vanilla C glue code. 
Might be better to stick to this. If you aren't using advanced C++ 
features that shouldn't be too hard - well structured C can be just as 
readable as well structured C++. At the very lowest level, about the 
only things C++ buys you are the ability to declare variables in 
arbitrary places, and // style comments.

Agreed. Given that the rest of the code base is C, I would imagine that
the Powers that Be would frown a bit on merging C++ code in, and
relegate it to contrib for eternity...

Not knocking the idea, mind you - I think it would be great if it can 
be pulled off. Was thinking about it myself as a way to learn more
of the backend code and scrape the thick layer of rust off of my C 
skills. Would like to see where you are with it.


5. I plan to write a JDBC layer using JNI on top of the SPI calls to 
enable
JDBC functionality on the current connection. Some things will be 
limited
(begin/commit etc. will not be possible to do here for instance).

Again. examples would help me understand better.

Is there a web page for your project?

cheers

andrew




Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] pljava revisited

2003-12-10 Thread Andrew Rawnsley
On Dec 10, 2003, at 1:51 PM, Andrew Dunstan wrote:

Thomas Hallgren wrote:

The JVM will be started on-demand.
Although I realize that one JVM per connection will consume a fair 
amount of
resources, I still think it is the best solution. The description of 
this
system must of course make it very clear that this is what happens and
ultimately provide the means of tuning the JVM's as much as possible.

I advocate this solution because I think that the people that have the
primary interest in a pl/java will be those who write enterprise systems
using Java. J2EE systems are always equipped with connection pools.

Yes, but as was pointed out even if I use connection pooling I would 
rather not have, say, 25 JVMs loaded if I can help it.

It's also a bit of a solution by circumstance, rather than a solution by
design.

But, I'm of course open for other alternatives. Let's say that there's a
JVM with a thread-pool that the Postgres sessions will connect to using
some kind of RPC. This implies that each call will have an overhead of
at least 2 OS context switches. Compared to in-process calls, this will
severely cripple the performance. How do you suggest that we circumvent
this problem?

My comments here are pretty off the cuff. You've thought about this far 
more than I have.




Context switches are not likely to be more expensive than loading an
extra JVM, I suspect. Depending on your OS/hw they can be incredibly
cheap, in fact.

Another problem is that we will immediately lose the ability to use the
current connection provided by the SPI interfaces. We can of course
establish a back-channel to the original process, but that will incur
even more performance hits. A third alternative is to establish brand
new connections in the remote JVM. The problem then is to propagate the
transaction context correctly. Albeit solvable, the performance using
distributed transactions will be much worse than in-process. How do we
solve this?

We are theorising ahead of data, somewhat. My suggestion would be to 
continue in the direction you are going, and later, when you can, 
stress test it. Ideally, if you then need to move to a shared JVM this 
would be transparent to upper levels of the code.

Agreed - sounds like you've done a fair amount of ground work. I at 
least am interested in where you're going with it.


C++ or C is not a big issue. I might rewrite it into pure C. The main
reason for C++ is to be able to use objects with virtual methods. I know
how to do that in C too but I don't quite agree that it's just as clean
:-)

Maybe not, but it's what is used in the core Pg distribution. Go with 
the flow :-)

cheers

andrew




Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] Is there going to be a port to Solaris 9 x86 in the near future???

2003-11-18 Thread Andrew Rawnsley
I think they are actually trying to pull it out of the dumpster, whether
from desperation or marketing acumen no one knows. I think they've gone
back to the 'if we can get them hooked on a dual opteron box, we can
sell them some massive E1' or whatever.



On Nov 18, 2003, at 11:32 AM, Christopher Browne wrote:

In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] 
(Christoper Smiga) transmitted:
Does anyone know if there is going to be a port to Solaris 9 x86 in
the near future. What is the decision to develop on this platform
since Sun is pushing Solaris x86 harder than ever.
If you're running Solaris on x86, then you're free to try PostgreSQL
out there.  It works quite well on SPARC; it is not evident that/why
it _wouldn't_ work on the x86 version.
On the other hand, the impression that I got was that the pushing
taking place with Solaris x86 was more of the "into the dumpster" sort
than "pushing hard to customers."  I thought their new strategy
involved Linux on x86...
--
(reverse (concatenate 'string gro.gultn @ enworbbc))
http://cbbrowne.com/info/spreadsheets.html
Rules of the Evil Overlord  #220. Whatever my one vulnerability is, I
will fake a  different one. For example, ordering  all mirrors removed
from the palace, screaming and flinching whenever someone accidentally
holds up a mirror, etc. In the climax when the hero whips out a mirror
and thrusts it at my face,  my reaction will be ``Hmm...I think I need
a shave.''  http://www.eviloverlord.com/





Re: [HACKERS] [GENERAL] Proposal for a cascaded master-slave replication system

2003-11-11 Thread Andrew Rawnsley
On Nov 11, 2003, at 12:11 PM, Joe Conway wrote:

Jan Wieck wrote:
http://developer.postgresql.org/~wieck/slony1.html
Very interesting read. Nice work!
Ditto.  I'll read it a bit closer later,  but after a quick read it 
seems quite complete and well thought out. I especially like
that sequences are being dealt with.

Thanks for putting the effort in, and making it a community project.


We want to build this system as a community project. The plan was from
the beginning to release the product under the BSD license, and we think
it is best to start it as such and to ask for suggestions already during
the design phase.
I couldn't quite tell from the design doc -- do you intend to support 
conditional replication at a row level?

I'm also curious, with cascaded replication, how do you handle the 
case where a second level slave has a transaction failure for some 
reason, i.e.:

          M
        /   \
      Sa     Sb
     /  \   /  \
    Sc  Sd Se  Sf
What happens if data is successfully replicated to Sa, Sb, Sc, and Sd, 
and then an exception/rollback occurs on Se?

Joe
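The failure case asked about above can be reasoned through with a toy simulation (this sketches the question, not Slony's actual design; all names are invented): a rollback on Se should leave only Se and its subtree behind, while M, Sa, Sb, Sc, and Sd stay consistent, and Se's provider Sb retains the event so Se can re-apply it once it recovers.

```python
# Toy cascaded-replication simulation. Each node applies an event and
# cascades it to its subscribers; a failing node rolls back locally,
# leaving only itself (and its subtree) behind. Its provider still holds
# the applied event and can re-send it later.

class Node:
    def __init__(self, name, fail_next=False):
        self.name = name
        self.applied = []
        self.fail_next = fail_next
        self.children = []

    def apply(self, event):
        if self.fail_next:
            self.fail_next = False
            return False            # transaction rolled back on this node
        self.applied.append(event)
        for child in self.children:
            child.apply(event)      # cascade to our subscribers
        return True

m = Node("M")
sa, sb = Node("Sa"), Node("Sb")
sc, sd = Node("Sc"), Node("Sd")
se, sf = Node("Se", fail_next=True), Node("Sf")
m.children = [sa, sb]
sa.children = [sc, sd]
sb.children = [se, sf]

m.apply("event-1")

lagging = [n.name for n in (m, sa, sb, sc, sd, se, sf)
           if "event-1" not in n.applied]
print(lagging)      # ['Se'] - only the failing node is behind

se.apply("event-1") # Sb still holds event-1; Se catches up on retry
```

The interesting design question this raises is how long a provider must retain applied events for lagging subscribers, and whether a persistently failing slave eventually has to be unsubscribed and resynced from scratch.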




Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] Hacking PostgreSQL to work in Mac OS X 10.3 (Panther

2003-11-04 Thread Andrew Rawnsley
Just built RC1 today on Panther, no problems.

On Nov 4, 2003, at 5:06 PM, Jeff Hoffmann wrote:

Tom Lane wrote:
[EMAIL PROTECTED] writes:
After spending a few hours of trying to get PostgreSQL 7.3.4 to build
from source (tar.gz) on a Panther (release, not beta) system,
Try 7.4RC1 instead.  Apple made some incompatible changes in their
compiler in Panther.
I was going to recommend the same thing.  I compiled a 7.4 beta out of 
the box without a hitch, so I'd assume the RC would be fine as well.

--

Jeff Hoffmann
PropertyKey.com



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [HACKERS] Single-file DBs WAS: Need concrete Why Postgres

2003-08-22 Thread Andrew Rawnsley
On Friday, August 22, 2003, at 12:07 PM, Josh Berkus wrote:

Single-file databases also introduce a number of problems:

1) The database file is extremely vulnerable to corruption, and if 
corruption
occurs it is usually not localized but destroys the entire database 
due to
corruption of the internal file structure.  Recovery of raw data out 
of a
damaged single-file database inevitably requires specialized tools if 
it is
possible at all.

snip

Having fallen victim to Oracle crapping in its own nest and doing this 
exact thing, and having to drop some stupid amount of $$ to Oracle for 
them to use their specialized tool to try to recover data (which they 
really didn't do much of), I concur with this statement.

Boy, was that a lousy experience.

--
Josh Berkus
Aglio Database Solutions
San Francisco


