Re: [GENERAL] Win32 Backend Crash - pre-existing shared memory block is still in use

2010-08-18 Thread Magnus Hagander
On Wed, Aug 18, 2010 at 04:59, Jeremy Palmer  wrote:
>
> I'm getting infrequent backend crashes on a Windows instance of PostgreSQL. 
> The error I get is in the log below. It seems to relate to the shared memory 
> each time. Does anyone have any ideas what the problem is here, or what 
> additional things I can do to get more information out the next time the 
> backend crashes?
>
> I'm running PostgreSQL 8.4.4 on Windows Server 2003 Standard.

Are there any running postgres.exe processes still present on the
machine? (Other than the postmaster itself, that is)


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] When is newly inserted data visible to another connection?

2010-08-18 Thread fka...@googlemail.com
Hi all,

Assuming nothing else is wrong with our test case, we noticed
the following:

We ran a test with two connections to the database from
different computers.  After the first client (the writer) had
inserted new data into a fairly simple table, it told a
second client (via TCP) that the data was ready; however,
this second client (the reader) did not see the data in the
database immediately. We had to delay the reading client by
2 to 5 seconds before it saw all the data in the table.

So, firstly: is that a possible scenario from PostgreSQL's
point of view (and do we have to address it here)?

Secondly: If yes, is there a way to determine when newly
inserted data is visible to other clients?

Remark: Even if transactional writes keep the inserted
rows together, so that they become visible either as a whole
or not at all -- this would not answer the question of
*when* they become visible to other clients.

Thank You
 Felix





Re: [GENERAL] When is newly inserted data visible to another connection?

2010-08-18 Thread Yeb Havinga

fka...@googlemail.com wrote:

Hi all,

Assuming nothing else is wrong with our test case, we noticed
the following:

We ran a test with two connections to the database from
different computers.  After the first client (the writer) had
inserted new data into a fairly simple table, it told a
second client (via TCP) that the data was ready; however,
this second client (the reader) did not see the data in the
database immediately. We had to delay the reading client by
2 to 5 seconds before it saw all the data in the table.
  
Essential information is missing. Did the first client COMMIT before 
signalling client 2? Also, you might find the information at 
http://developer.postgresql.org/pgdocs/postgres/transaction-iso.html 
interesting, since it specifies how to control the behaviour of concurrent 
transactions looking at each other's data.

Secondly: If yes, is there a way to determine when newly
inserted data is visible to other clients?
  
Not before it is committed. To which clients the just-committed data is 
visible depends on the transaction isolation level (see the link above).


regards,
Yeb Havinga




Re: [GENERAL] Win32 Backend Crash - pre-existing shared memory block is still in use

2010-08-18 Thread Jeremy Palmer
No they all got killed off.

-Original Message-
From: Magnus Hagander [mailto:mag...@hagander.net] 
Sent: Wednesday, August 18, 2010 8:06 PM
To: Jeremy Palmer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Win32 Backend Crash - pre-existing shared memory block is 
still in use

On Wed, Aug 18, 2010 at 04:59, Jeremy Palmer  wrote:
>
> I'm getting infrequent backend crashes on a Windows instance of PostgreSQL. 
> The error I get is in the log below. It seems to relate to the shared memory 
> each time. Does anyone have any ideas what the problem is here, or what 
> additional things I can do to get more information out the next time the 
> backend crashes?
>
> I'm running PostgreSQL 8.4.4 on Windows Server 2003 Standard.

Are there any running postgres.exe processes still present on the
machine? (Other than the postmaster itself, that is)


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] When is newly inserted data visible to another connection?

2010-08-18 Thread fka...@googlemail.com
Yeb Havinga:

> fka...@googlemail.com wrote:

[...]

> > We ran a test with two connections to the database
> > from different computers.  After the first client (the
> > writer) had inserted new data into a fairly simple
> > table, it told a second client (via TCP) that the data
> > was ready; however, this second client (the reader) did
> > not see the data in the database immediately. We had to
> > delay the reading client by 2 to 5 seconds before it
> > saw all the data in the table.

> Essential information is missing. Did the first client
> COMMIT before toggling client 2?

Yes, of course, the commit was done before signalling client
2. I would like to mention that the table itself is simple
but contains a bytea column, and some of the inserted
rows contain several MB of binary data, which usually takes a
while.  But, yes, we trigger client 2 only *after* the
commit was done and returned successfully (using v8.2.4 on
win32 via libpq).

> Also you might find the information from
> http://developer.postgresql.org/pgdocs/postgres/transaction-iso.html
> interesting, since that specifies how to control behaviour
> of concurrent transactions looking at each others data.

Thank you for the interesting link. I think, though, that
this does not address the question of why there is a delay
between the point in time A when client 1 has successfully
committed and the point in time B when client 2 can see all
the new rows! Even under pure serialization it should be
possible for client 2 to start reading immediately *after*
client 1 has completely committed, shouldn't it? FYI: we are
using the default setup for transaction isolation.


> > Secondly: If yes, is there a way to determine when newly
> > inserted data is visible to other clients?
> >   
> Not before it is committed. To which clients the just
> committed data is visible depends on the transaction
> isolation level (see link above).

Hm, I do not get it -- AFAIK the article you mentioned deals
with the question of what a concurrent transaction can see of
another one that is pending/not committed. But that is not
the case here: the first transaction has already committed.

To sum up our question:

If client 1 has committed some rows, when is the moment that
client 2 can see/read all that data? Do we have to allow for
a gap, and if so, how do we determine it?

Thank You!
 Felix





[GENERAL] pgsql-general@postgresql.org

2010-08-18 Thread Sergey Sergeev
subscribe-set pgsql-general digest



Re: [GENERAL] When is newly inserted data visible to another connection?

2010-08-18 Thread Scott Ribe
On Aug 18, 2010, at 6:57 AM, fka...@googlemail.com wrote:

> Even in pure serialization it should be possible
> that client 2 can immediately start reading *after* client 1
> has completely committed, shouldn't it?

Unless client 2 had previously started a transaction and is reading from that.
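Scott's point can be demonstrated directly in psql. A minimal sketch, assuming a table "t" that both clients can see (session labels and values are illustrative, not from the original test case):

```sql
-- Session 2 (the reader) opens a transaction first; in 8.x, SERIALIZABLE
-- pins the snapshot at the first query in the transaction:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM t;      -- snapshot is taken here

-- Session 1 (the writer), meanwhile:
INSERT INTO t VALUES (42);
COMMIT;

-- Session 2 again: the committed row is invisible to the old snapshot.
SELECT count(*) FROM t;      -- same count as before
COMMIT;
SELECT count(*) FROM t;      -- now includes the new row
```

Under READ COMMITTED (the default), each statement takes a fresh snapshot, so a row committed by another session is visible to the very next query.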

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] Windows 2003 server installation issue

2010-08-18 Thread Igor Neyman
 

> -Original Message-
> From: Vikram Patil [mailto:vpa...@actuate.com] 
> Sent: Tuesday, August 17, 2010 10:13 PM
> To: j...@commandprompt.com
> Cc: pgsql-general@postgresql.org
> Subject: Re: Windows 2003 server installation issue
> 
> Joshua,
> 
> Thanks for the reply. But I tried 8.4.4 and it still doesn't 
> work with a local administrator account on Windows 2003. I 
> don't want to create an additional "postgres" user to start the 
> service on Windows; rather, I want to use the currently logged-in 
> user. The user can be an administrator or a non-administrator. 
> 
> I am getting an "error code 5" in the event viewer when I try to 
> use administrator as the service user. Before attempting that I 
> used initdb to create the data directory with the same user.
> 
> Thanks & Regards,
> Vikram
>  
> 

That can't be done.
The PG documentation clearly states that the Windows account used to run
the Postgres service SHOULD NOT be a member of the Administrators group.

Regards,
Igor Neyman



Re: [GENERAL] When is newly inserted data visible to another connection?

2010-08-18 Thread Tom Lane
"fka...@googlemail.com"  writes:
> Thank you for the interesting link. I think, though, that
> this does not address the question why there is a delay
> between the point in time A that client 1 has successfully
> committed and the point in time B when client 2 can see all
> new rows!

There is no such delay.  Either you forgot to commit in client 1,
or client 2 had a query snapshot that was taken before client 1
committed.

regards, tom lane



[GENERAL] Warm Standby and resetting the primary as a standby

2010-08-18 Thread Derrick Rice
I've been reading up on the documentation for WAL shipping and warm standby
configuration. One concern that I have (a common one, I'm sure) is that it
seems that after bringing a standby server up as primary, other standby
servers (including the original primary) need to be rebased before they can
read the new primary's WALs in continuous recovery mode.

It seems that the cause of this is a change to the leading digit of the WAL
files:

http://archives.postgresql.org/pgsql-general/2010-03/msg00985.php
http://archives.postgresql.org/pgsql-admin/2009-08/msg00179.php

I was hoping that someone would shed some light on this situation with a
technical explanation.  It's not clear to me why the WAL files are
incompatible or why the digit increases. What does that first digit mean to
PostgreSQL?  Is it possible to have the restore_command ignore the leading
digit?

I expected the WAL files to be compatible.  If I start two servers from the
same "disk image" and then they get the same exact changes recorded in WAL,
why should the next created WAL differ depending on which server creates
it?  I imagine these two servers to have identical new versions of a "disk
image" after consuming the exact same WALs (one generated them, the other
read them).

I'm surprised that this question doesn't come up more often, or that there's
no explanation in the docs about why it's necessary to rebase a primary that
went down gracefully (e.g. for planned maintenance).

Thanks

Derrick


Re: [GENERAL] [PERFORM] Are Indices automatically generated for primary keys?

2010-08-18 Thread Clemens Eisserer
Hi Kevin,

> Sorry; I didn't mean to be harsh.
I also overreacted, sorry about that.

Indeed the documentation is well done, as is the software itself =)

Thanks, Clemens

> Sometimes people coming from some other products aren't used to that
> -- I was just trying to point you in the direction of being able to
> find things in the future, to save you trouble and delay.
>
>> I looked at the "Indexes and ORDER BY" which doesn't mention it,
>> or I've overlook it.
>> Doesn't make a difference anyway.
>
> Well, it very much does make a difference, because when someone
> makes the effort to find something in our documentation, and in
> spite of their best efforts they can't, we tend to consider that a
> bug in the documentation.  I'll take a look at the page you
> mentioned and see if I can work in a suitable reference.  I'm sure
> you can see, though, why the *main* place it was documented was the
> statement which is generally used to create a primary key.
>
> Thanks for responding with the info on where you looked.
>
> -Kevin
>



Re: [GENERAL] Accessing a database via AJAX scripts

2010-08-18 Thread Vick Khera
On Sun, Aug 8, 2010 at 5:08 PM, Glen Eustace  wrote:
> Thanks Tom, that is pretty much the conclusion I came to.  I think I need to
> close the db connection prior to the fork and then re-open in the new child.
>

Yes, you pretty much have to do this.  I usually do the close
immediately after fork in the child, but set the DBI attribute
InactiveDestroy on the handle so it doesn't destroy the parent's
handle.



Re: [GENERAL] Win32 Backend Crash - pre-existing shared memory block is still in use

2010-08-18 Thread Alvaro Herrera
Excerpts from Jeremy Palmer's message of mar ago 17 22:59:08 -0400 2010:
> 
> I'm getting infrequent backend crashes on a Windows instance of PostgreSQL. 
> The error I get is in the log below. It seems to relate to the shared memory 
> each time. Does anyone have any ideas what the problem is here, or what 
> additional things I can do to get more information out the next time the 
> backend crashes?
> 
> I'm running PostgreSQL 8.4.4 on Windows Server 2003 Standard.


The first entries are unrelated.  

> 2010-08-17 16:57:22 NZSTLOG:  unexpected EOF on client connection

This is a memory dump and could be unrelated (or maybe not).

> TopMemoryContext: 268428304 total in 26 blocks; 5528 free (22 chunks); 
> 268422776 used
>   Local Buffer Lookup Table: 2088960 total in 8 blocks; 501416 free (22 
> chunks); 1587544 used
[snip some more]


The real problem is here:

> 2010-08-17 19:19:32 NZSTLOG:  server process (PID 6684) exited with exit code 
> 128
> 2010-08-17 19:19:32 NZSTLOG:  terminating any other active server processes

The rest is just noise about other processes being killed by postgres to
reset memory to a known-good state.

What you need to investigate is what caused that process to exit with
code 128.

PS: it'd be a good idea to append some whitespace to log_line_prefix, to
ensure that timezone NZST is separate from the "LOG" tag.
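For example, a minimal postgresql.conf fragment (using the %t timestamp escape, which produced the "NZST" zone abbreviation in the log above):

```
# postgresql.conf -- note the trailing space inside the quotes, so the
# timezone abbreviation is separated from the "LOG:" tag:
log_line_prefix = '%t '
```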

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] pg 9.0, streaming replication, fail over and fail back strategies

2010-08-18 Thread Vick Khera
On Mon, Aug 9, 2010 at 6:10 PM, Kyle R. Burton  wrote:
> Is there any way to get PostgreSQL to bind to a new ip address and
> interface without actually shutting it down?  If it could, would I
> need to break all the current (read only) client connections to get
> them to reconnect and have the ability to write?  (am I confused about
> this?)

What if you make the virtual IP instead be a shared IP using the CARP
protocol, and have only one host be the master at any given time?



Re: [GENERAL] Partitioning into thousands of tables?

2010-08-18 Thread Vick Khera
On Fri, Aug 6, 2010 at 1:10 AM, Data Growth Pty Ltd
 wrote:
> I have a table of around 200 million rows, occupying around 50G of disk.  It
> is slow to write, so I would like to partition it better.
>

How big do you expect your data to get?  I have two tables partitioned
into 100 subtables using a modulo operator on the PK integer ID
column.  This keeps the row counts for each partition in the 5-million
range, which postgres handles extremely well.  When I do a mass
update/select that causes all partitions to be scanned, it is very
fast at skipping over partitions based on a quick index lookup.
Nothing really gets hammered.
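A rough sketch of such a layout, using the 8.x inheritance-based partitioning style (table and column names are illustrative, not the actual schema described above):

```sql
-- Parent table plus modulo-keyed children; the CHECK constraints let the
-- planner (with constraint_exclusion enabled) skip irrelevant partitions.
CREATE TABLE events (id integer NOT NULL, payload text);

CREATE TABLE events_p00 (CHECK (id % 100 = 0)) INHERITS (events);
CREATE TABLE events_p01 (CHECK (id % 100 = 1)) INHERITS (events);
-- ... up through events_p99, each with its own index:
CREATE INDEX events_p00_id_idx ON events_p00 (id);
CREATE INDEX events_p01_id_idx ON events_p01 (id);

-- Including the partitioning expression in the WHERE clause lets the
-- planner prove which child table holds the row:
SELECT * FROM events WHERE id = 1234567 AND id % 100 = 67;
```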



Re: [GENERAL] MySQL versus Postgres

2010-08-18 Thread Vick Khera
On Thu, Aug 5, 2010 at 10:41 PM, John Gage  wrote:
> P. P. S. You want to get to grandma's house.  You want to drive a car.  You
> want to learn to drive the car quickly.

You're driving to grandma's house because she needs cataract surgery.
You don't want to pay the surgeon, you just want to do it yourself.
But you don't want to read a book. You just want to take the knife and
dive in.

If you just want to play with toys, then play with toys.  If you want
to get professional results, be a professional or hire one.



Re: [GENERAL] Win32 Backend Crash - pre-existing shared memory block is still in use

2010-08-18 Thread Tom Lane
Alvaro Herrera  writes:
> This is a memory dump and could be unrelated (or maybe not).

>> TopMemoryContext: 268428304 total in 26 blocks; 5528 free (22 chunks); 
>> 268422776 used

That's an unreasonably large amount of stuff in TopMemoryContext :-(.
I wonder what caused that?  It's not clear that'd have led to the crash
though.

regards, tom lane



Re: [GENERAL] MySQL versus Postgres

2010-08-18 Thread Peter C. Lai
Well in that sense, Oracle does cling to some old designs that suck for 
most people's use-cases these days; most notably arbitrary-length indexable
text fields. In most Oracle-related applications you are stuck with
either an indexable nvarchar(4096) or an unindexable CLOB field (which
also requires a byte-stream cursor-based access implementation; one
cannot just "SELECT clob_field FROM table" to get a field dump).

The main problem with mysql is that it has historically prioritized speed 
and ease of use over data integrity, which a professional DBA 
would/should/ought to refuse to compromise over. The DBA and her DBMS is 
supposed to be your organization's last line of defense when it comes to 
data integrity.

There are far too many cases involving mysql where you end up with
data in, garbage out, and any DBA worth their salt should be cognizant 
of that reality.

On 2010-08-18 12:00:15PM -0400, Vick Khera wrote:
> On Thu, Aug 5, 2010 at 10:41 PM, John Gage  wrote:
> > P. P. S. You want to get to grandma's house.  You want to drive a car.  You
> > want to learn to drive the car quickly.
> 
> You're driving to grandma's house because she needs cataract surgery.
> You don't want to pay the surgeon, you just want to do it yourself.
> But you don't want to read a book. You just want to take the knife and
> dive in.
> 
> If you just want to play with toys, then play with toys.  If you want
> to get professional results, be a professional or hire one.
> 

-- 
===
Peter C. Lai | Bard College at Simon's Rock
Systems Administrator| 84 Alford Rd.
Information Technology Svcs. | Gt. Barrington, MA 01230 USA
peter AT simons-rock.edu | (413) 528-7428
===




Re: [GENERAL] Partitioning into thousands of tables?

2010-08-18 Thread Scott Marlowe
On Fri, Aug 6, 2010 at 8:08 AM, Joshua Tolley  wrote:
> On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>>    Is there any significant performance problem associated with partitioning
>>    a table into 2500 sub-tables?  I realise a table scan would be horrendous,
>>    but what if all accesses specified the partitioning criteria "sid".  Such
>>    a scheme would be the simplest to maintain (I think) with the best
>>    localisation of writes.
>
> I seem to remember some discussion on pgsql-hackers recently about the number
> of partitions and its effect on performance, especially planning time.
> Unfortunately I can't find it right now, but in general the conclusion was
> it's bad to have lots of partitions, where "lots" is probably 100 or more.

When it comes to planning time, it's a trade off.  If you have a
reporting database that routinely runs queries that take 30 seconds to
30 minutes, an extra 10 seconds planning is no big deal.  If you need
to have your queries run in sub-second times, then an extra 10 seconds
is a very big deal.

We partition our stats data at work by day, and keep it around for
years.  So, we have 600 to 1000 partitions there.  But any query we
run takes minutes to run, so a little extra planning time is no big
deal there.



Re: [GENERAL] pg 9.0, streaming replication, fail over and fail back strategies

2010-08-18 Thread Merlin Moncure
On Mon, Aug 9, 2010 at 6:10 PM, Kyle R. Burton  wrote:
> Hello,
>
> I'm new to the list and not even sure if this is the right place to be
> posting this...
>
> I've worked through the documentation for postgres 9.0 (beta2) and
> have successfully set up a master and hot slave configured with
> streaming replication (and xlog shipping).  That configuration seems
> to be correctly updating the slave and the slave accepts read queries
> and shows up to date table data (based on testing by hand with some
> DDL and insert queries).
>
> Now that I have that successfully configured, I have manually
> performed a fail over by stopping the master, moving a virtual IP
> address from the master to the slave, and touched the trigger file on
> the slave.  This worked as expected and the former slave promoted
> itself to being a full read/write master.
>
> I went through the process of failing back manually by dumping the
> database on the slave, restoring it on the master, moving the VIP back
> and renaming the recovery.done back to recovery.conf.  This took some
> time and required several steps, but was also successful.
>
> After I had moved the VIP from the master to the slave, I had to
> restart (not just reload) the postgres daemon to get it to start
> listening on the new ip address (it was previously listening to
> another IP [10.x.x.y] on the same NIC [eth0]).  I have the
> listen_addresses configured to listen on both an internal (10.x.x.y)
> address as well as the vip (10.x.x.z), but the interface on the slave
> did not have this ip address at the time Postgres was started (so I'm
> not all that surprised it didn't bind to that address on becoming the
> master).
>
> Is there any way to get PostgreSQL to bind to a new ip address and
> interface without actually shutting it down?  If it could, would I
> need to break all the current (read only) client connections to get
> them to reconnect and have the ability to write?  (am I confused about
> this?)

hm. I wonder if you could implement a solution around pgbouncer to do this...

merlin



[GENERAL] mod_perl and PostgreSQL 8.3.3 causing "message type 0x44 arrived from server while idle"

2010-08-18 Thread Ogden
Hello all,

We upgraded our application servers to Apache 2.2.16 and upgraded our (hand-
built) Perl to 5.10.1, mod_perl (for Catalyst) and the modules (such as DBI, 
DBD::Pg) through CPAN. Our PostgreSQL server has not changed at all and is 
running 8.3.3. 

Since the upgrade, we are noticing the following messages in our Apache logs:

message type 0x54 arrived from server while idle
message type 0x44 arrived from server while idle
message type 0x43 arrived from server while idle

We have never had this issue before, and from the little information I found 
on the topic, it seems to be thread-related. However, what is the culprit 
here? Apache? mod_perl?

Software versions:  perl, v5.10.1 (*) built for x86_64-linux-thread-multi and 
PostgreSQL 8.3.3. 

Any help is appreciated. 

Thank you

Ogden


[GENERAL] Massively Parallel transactioning?

2010-08-18 Thread Benjamin Smith
Is there a way to update a number of databases hosted on a single server 
without opening a separate psql connection to each database? 

We have a cluster of servers hosting an application on Postgres. Right now, we 
have dozens of databases per server, enough that we're starting to have 
problems with our update process. 

When we release updates, we have a migrate script within our update process 
that runs all the database schema updates for all our clients. The way that it 
works is to open a transaction on all the databases concurrently, run the 
commands in sequence on the databases within the transactions, and then commit 
them all (or rollback if there was a problem) 

This way we can be sure that either all the databases are in synch, or that we 
need to rollback the program patch/update. 

So far, it's been a dream, but now, as we continue to grow, we're starting to 
reach connection limits per server. Short of raising the number of 
simultaneous connections, is there a way to run all the transactions for a 
single server for all databases within it on a single (or small number) of 
connections? 

I've tried the following: 

# ATTEMPT 1
$psql -U postgres template1 -h server1; 
template1=# begin transaction; create table testtable (name varchar); 
BEGIN
CREATE TABLE
\c somedatabase; ri
psql (8.4.4, server 8.4.0)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
You are now connected to database "somedatabase".
somedatabase=# rollback; 
NOTICE:  there is no transaction in progress
ROLLBACK
somedatabase=# \c template1; 
template1=# rollback; 
NOTICE:  there is no transaction in progress
ROLLBACK
template1=# 

# ATTEMPT 2
$psql -U postgres template1 -h server1; 
template1=# alter table somedatabase.testtable add address varchar; 
ERROR:  cross-database references are not implemented: 
"somedatabase.public.students"
template1=#

Is there a better way?
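One approach that would keep the all-or-nothing property without holding dozens of sessions open is two-phase commit, which PostgreSQL supports natively (it requires max_prepared_transactions > 0 in postgresql.conf). A hedged sketch, run against each database in turn; the transaction identifier is made up:

```sql
-- In each database, apply the schema change and prepare (but don't commit):
BEGIN;
ALTER TABLE testtable ADD COLUMN address varchar;
PREPARE TRANSACTION 'schema_update_42';   -- survives disconnect

-- Once every database has prepared successfully, reconnect to each and:
COMMIT PREPARED 'schema_update_42';

-- If any database failed to prepare, run this in the others instead:
ROLLBACK PREPARED 'schema_update_42';
```

Note that prepared transactions hold their locks until resolved, so the window between PREPARE and COMMIT PREPARED should be kept short.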





Re: [GENERAL] Clustering, parallelised operating system, super-computing

2010-08-18 Thread Benjamin Smith
On Thursday, May 13, 2010 11:51:08 pm Brian Modra wrote:
> Maybe the best way to solve this is not to do automatic distribution
> of the data, but rather to provide tools for implementing distributed
> references and joins.

Here's my vote! I'd *LOVE* it if I could do a simple cross-database join 
(without the ugliness of dblink), it would be just awesome. Two beers for 
cross-database foreign keys... 

We already do use dblink extensively with a wrapper. for various reporting 
functions. Since the cross-database queries are in the minority, it does 
function as load balancing, even if the cross-joined queries aren't so 
balanced. 





Re: [GENERAL] Massively Parallel transactioning?

2010-08-18 Thread Steve Atkins

On Aug 18, 2010, at 4:32 PM, Benjamin Smith wrote:

> Is there a way to update a number of databases hosted on a single server 
> without opening a separate psql connection to each database? 
> 
> We have a cluster of servers hosting an application on Postgres. Right now, 
> we 
> have dozens of databases per server, enough that we're starting to have 
> problems with our update process. 
> 

Any reason you're using multiple databases, rather than multiple schemas?

> Is there a better way?

Schemas, probably. Though lazy schema updates done by the application might 
help too.

Cheers,
  Steve




Re: [GENERAL] Massively Parallel transactioning?

2010-08-18 Thread ChronicDB Community Team
Hello Benjamin,

On Wed, 2010-08-18 at 16:32 -0700, Benjamin Smith wrote:
> Is there a way to update a number of databases hosted on a single server 
> without opening a separate psql connection to each database? 

I believe you are more interested in atomically applying an update to all
databases than in the specific problem of separate psql connections to each
database. Is that correct?

> This way we can be sure that either all the databases are in synch, or that 
> we 
> need to rollback the program patch/update. 

> simultaneous connections, is there a way to run all the transactions for a 
> single server for all databases within it on a single (or small number) of 
> connections? 

It would be easy to extend the ChronicDB live database schema update
system to support an atomic schema change across a multitude of
databases.

> Is there a better way?

ChronicDB replicates a database to use a new schema in the background.
When the schema change completes and few incremental changes remain,
active transactions are temporarily paused and then rerouted to the new
schema. This currently works for a schema change of a single database,
but the parallel update you are requesting could be supported.




Re: [GENERAL] Massively Parallel transactioning?

2010-08-18 Thread Joshua D. Drake
On Wed, 2010-08-18 at 16:32 -0700, Benjamin Smith wrote:

> So far, it's been a dream, but now, as we continue to grow, we're starting to 
> reach connection limits per server. Short of raising the number of 
> simultaneous connections, is there a way to run all the transactions for a 
> single server for all databases within it on a single (or small number) of 
> connections? 

Well, if you are just using it for updates to the schema etc., you
should only need to launch a single connection to each database to make
those changes.

Joshua D. Drake





Re: [GENERAL] heap-only tuples, and constraints

2010-08-18 Thread Jeff Davis
On Sat, 2010-08-14 at 00:29 -0700, maxxe...@gmail.com wrote:
> Does this statement (UPDATE user SET email='newChangedVal',
> password='existingVal') requires updating an index on user.password?
> Or more generally, if an UPDATE includes an explicit but unchanged
> value for an index column, does postgres need to also update the index
> entries? Or does HOT apply only when indexed column is missing or not
> explicit (i.e. password=password)?

It does a binary comparison of the old/new values (for indexed columns),
and if they are identical, it allows a HOT update.

If a data type has two representations for the same value, that may mean
it does a regular update when it could have done a HOT update. In other
words, it doesn't call a datatype-specific equality function.

> Along the same line of thought as above, if password is a foreign key
> column and if an UPDATE includes an explicit but unchanged value for
> this fk column, does postgres need to check constraint satisfiability?

No, it does not need to do the check.

In this case, however, it does appear that it uses a datatype-specific
equality function. So, even if you have a strange datatype where two
equal values can have different representations, it will still avoid the
check.
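One way to see the difference between the two comparisons outside PostgreSQL: a
byte-wise check can report "different" for values an equality operator considers
equal. This is a purely illustrative Python sketch (not PostgreSQL code), using
IEEE 754 signed zeros as a stand-in for "same value, different representation":

```python
import struct

def binary_identical(old, new):
    # Byte-wise comparison, analogous to the check used for HOT eligibility.
    return struct.pack("d", old) == struct.pack("d", new)

def datatype_equal(old, new):
    # Datatype equality, analogous to the comparison on the FK-check path.
    return old == new

old_val, new_val = 0.0, -0.0                # equal values, different bit patterns
print(binary_identical(old_val, new_val))   # False -> would not qualify for HOT
print(datatype_equal(old_val, new_val))     # True  -> FK re-check can be skipped
```

The asymmetry is deliberate in the sketch: the byte-wise check is cheap and
conservative (false negatives merely forgo an optimization), while the FK path
can afford the equality call because skipping the check must be safe.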

Regards,
Jeff Davis




Re: [GENERAL] Win32 Backend Crash - pre-existing shared memory block is still in use

2010-08-18 Thread Jeremy Palmer
Could it be that I have too much memory allocated for PostgreSQL? My resource 
settings are:

shared_buffers = 94952
temp_buffers = 1GB
work_mem = 19339
maintenance_work_mem = 191845
max_stack_depth = 2MB

I'm running on a server with 3.7GB of RAM.
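For what it's worth, in 8.4 bare numbers in postgresql.conf are interpreted in
each parameter's base unit (8 kB pages for shared_buffers, kB for work_mem and
maintenance_work_mem), so the settings above translate roughly as in this
back-of-the-envelope sketch (my own arithmetic under that unit assumption, not
server output):

```python
# Rough translation of the postgresql.conf values above into MB.
# Assumption: bare numbers use each parameter's 8.4 base unit
# (8 kB pages for shared_buffers, kB for work_mem/maintenance_work_mem).
PAGE_KB = 8

shared_buffers_mb = 94952 * PAGE_KB / 1024   # ~742 MB, allocated once, shared
work_mem_mb = 19339 / 1024                   # ~19 MB per sort/hash operation
maintenance_work_mem_mb = 191845 / 1024      # ~187 MB per maintenance operation
temp_buffers_mb = 1024                       # 1 GB per session using temp tables

print(f"shared_buffers       ~ {shared_buffers_mb:.0f} MB")
print(f"work_mem             ~ {work_mem_mb:.0f} MB")
print(f"maintenance_work_mem ~ {maintenance_work_mem_mb:.0f} MB")
print(f"temp_buffers         = {temp_buffers_mb} MB")
```

Note that work_mem and temp_buffers are per-backend limits (work_mem even per
sort/hash node), so the worst case grows with the connection count; a session
only touches temp_buffers if it uses temporary tables, but each such session may
hold up to that amount.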

I will adjust the logging level and wait for another crash. Should I set the 
level to 'debug5'? The cluster is for development purposes, so I don't mind the 
overhead.

Cheers,
Jeremy

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Thursday, August 19, 2010 4:07 AM
To: Alvaro Herrera
Cc: Jeremy Palmer; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Win32 Backend Crash - pre-existing shared memory block is 
still in use 

Alvaro Herrera  writes:
> This is a memory dump and could be unrelated (or maybe not).

>> TopMemoryContext: 268428304 total in 26 blocks; 5528 free (22 chunks); 
>> 268422776 used

That's an unreasonably large amount of stuff in TopMemoryContext :-(.
I wonder what caused that?  It's not clear that'd have led to the crash
though.

regards, tom lane



Re: [GENERAL] Massively Parallel transactioning?

2010-08-18 Thread Adrian von Bidder
Heyho!

On Thursday 19 August 2010 01.32:06 Benjamin Smith wrote:
> This way we can be sure that either all the databases are in synch, or
> that we  need to rollback the program patch/update.

I guess this might be more a hack than a solution: do the updates in batches 
and use 2PC (two-phase commit): connect to a batch of databases, but instead of 
COMMIT, issue PREPARE TRANSACTION.  Prepared transactions like this persist 
across connections, so you can come back later and COMMIT PREPARED or ROLLBACK 
PREPARED each one.

Note that such prepared transactions hold locks and use resources (not sure how 
many) until they are finally committed or rolled back, so you'll want to make 
sure they don't stick around too long.
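The batched two-phase-commit idea can be sketched as a small coordinator loop.
This is a hypothetical Python outline against an imaginary connection API (the
prepare/commit_prepared/rollback_prepared methods stand in for PostgreSQL's
PREPARE TRANSACTION, COMMIT PREPARED, and ROLLBACK PREPARED statements); it only
illustrates the control flow: prepare everywhere first, and commit only if every
prepare succeeded.

```python
class TwoPhaseError(Exception):
    pass

def apply_everywhere(conns, gid, apply_patch):
    """Hypothetical 2PC coordinator: all databases commit, or none do."""
    prepared = []
    try:
        for conn in conns:
            apply_patch(conn)            # run the schema updates in a transaction
            conn.prepare(gid)            # e.g. PREPARE TRANSACTION 'gid'
            prepared.append(conn)
    except Exception as exc:
        # Some database failed to patch or prepare: undo everything prepared.
        for conn in prepared:
            conn.rollback_prepared(gid)  # e.g. ROLLBACK PREPARED 'gid'
        raise TwoPhaseError(f"patch failed, rolled back: {exc}") from exc
    for conn in prepared:
        conn.commit_prepared(gid)        # e.g. COMMIT PREPARED 'gid'
```

In a real deployment, a crash between the two phases leaves transactions sitting
in pg_prepared_xacts holding locks, so the coordinator must record the global id
somewhere durable and finish the commit (or roll back) on restart.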

cheers
-- vbi


