-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Vick Khera
Here are two (not necessarily mutually exclusive) options for you:
1) use triggers/rules on the master table and do all your inserts
directed to it and have
-Original Message-
From: Greg Smith [mailto:gsm...@gregsmith.com]
On Wed, 21 Oct 2009, Scott Marlowe wrote:
Actually, later versions of the Linux md driver have a direct RAID-10 level built in.
I haven't used it. Not sure how it would look in /proc/mdstat either.
I think I actively block memory of
Sorry guys, I know this is very off-track for this list, but google hasn't
been of much help. This is my raid array on which my PG data resides.
I have a 4 disk Raid10 array running on linux MD raid.
sda / sdb / sdc / sdd
One fine day, 2 of the drives just suddenly decided to die on me. (sda and
right sibling's left-link doesn't match: block 121425 links to 124561
instead of expected 121828 in index
Oct 20 22:21:29 hmweb5 postgres[8795]: [3-2] d_trh_trr_water_eval_pkey
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
htop is really nice too. http://htop.sourceforge.net/
(disclaimer - I did not write it)
I like atop better
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to
Is there any way in which a trigger can return the # of rows affected by the
insert / delete ?
Master
→ slave_1
→ slave_2
Trigger is against master which will, based on the conditions re-direct the
data into the relevant slave_X partitions.
I think this post basically is what I am seeing.
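The redirect-into-partitions idea above can be sketched as a PL/pgSQL trigger. This is a minimal sketch only; the table names and the NEW.code test are assumptions, not from the thread. Returning NULL suppresses the insert into the master itself, which is also why the top-level INSERT reports 0 rows affected:

```sql
-- Sketch: redirect inserts on the master into slave_1 / slave_2
CREATE OR REPLACE FUNCTION master_insert_redirect() RETURNS trigger AS $$
BEGIN
    IF NEW.code LIKE 'A%' THEN          -- assumed routing condition
        INSERT INTO slave_1 VALUES (NEW.*);
    ELSE
        INSERT INTO slave_2 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- skip the row on the master; the redirected insert stands
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER master_redirect_trg
    BEFORE INSERT ON master
    FOR EACH ROW EXECUTE PROCEDURE master_insert_redirect();
```

Inside a function, the count of redirected rows can still be recovered with GET DIAGNOSTICS n = ROW_COUNT right after the inner INSERT.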
I've got a largeish table which, according to pg_size_pretty, has an on-disk
size of ~22GB table size and 12GB index size, approx 55 million rows.
When I truncate the table (I've moved the data somewhere else), I see that
I only gain back ~7GB in filesystem space.
What gives?
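To see where the space actually sits before and after the TRUNCATE, the size functions can be broken out per component (the table name here is a placeholder):

```sql
-- pg_relation_size = heap only; pg_total_relation_size adds indexes + TOAST
SELECT pg_size_pretty(pg_relation_size('my_big_table'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('my_big_table')) AS with_indexes_and_toast;
```

If the numbers don't account for the filesystem change, bloat in other relations (or their TOAST tables) is the usual suspect.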
I was playing around with dbi-link, hoping to get it connected to a teradata
database. However, before I dive into that, I figured that I might as well
try it out first on a PG Database (on another server)
So, dbi-link installed fine.
I did a select on a 30GB table and it froze the
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Ow Mun Heng wrote:
I was playing around with dbi-link, hoping to get it connected to a
teradata
database. However, before I dive into that, I figured that I might as
well
try it out first on a PG Database (on another
Hi All,
Does anyone here have a Teradata box? Are you able to connect to it from within
PostgreSQL?
I would like to pull 1 or 2 tables from the box (sync) and was wondering if
there's any way to do that w/o using dbi-link.
I actually am trying dbi-link but it seems as though it doesn't support
Hi Guys,
Searching the net didn't give me much clues as to how to convert a Decimal
number into BINARY.
Eg:
I have a datatype in the DB which needs to be converted.
DEC = 192
BINARY = 11000000
DEC = 197
BINARY = 11000101
Which I then need to break down into pairs to do calculations on
11 :
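Postgres can do the decimal-to-binary conversion directly with a cast to bit(n), and substring() can then peel off the pairs; a minimal sketch:

```sql
SELECT 192::bit(8);                                 -- 11000000
SELECT 197::bit(8);                                 -- 11000101

-- break the 8-bit string into pairs for further calculation:
SELECT substring(197::bit(8)::text FROM 1 FOR 2),   -- '11'
       substring(197::bit(8)::text FROM 3 FOR 2),   -- '00'
       substring(197::bit(8)::text FROM 5 FOR 2),   -- '01'
       substring(197::bit(8)::text FROM 7 FOR 2);   -- '01'
```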
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Christophe Pettus
On Aug 11, 2009, at 12:19 AM, Jan Verheyden wrote:
I was looking into how it's possible to alert via mail when some
conditions are true in a
-Original Message-
From: Tommy Gildseth [mailto:tommy.gilds...@usit.uio.no]
Ow Mun Heng wrote:
I'm starting to use DBLink / DBI-Link and one of the bad things is that
the password is out in the clear.
What can I do to prevent it from being such? How do I protect it from
'innocent
-Original Message-
From: Magnus Hagander [mailto:mag...@hagander.net]
On Wed, Aug 12, 2009 at 09:30, Ow Mun Heng <ow.mun.h...@wdc.com> wrote:
From: Tommy Gildseth [mailto:tommy.gilds...@usit.uio.no]
Ow Mun Heng wrote:
I'm starting to use DBLink / DBI-Link and one of the bad things
-Original Message-
From: Magnus Hagander [mailto:mag...@hagander.net]
No, we're talking about operating system user here, not postgres user.
So the owner of the database object is irrelevant - only the user that
the backend process is executing as.
Got it.. Thanks for the tip.
--
I'm starting to use DBLink / DBI-Link and one of the bad things is that
the password is out in the clear.
What can I do to prevent it from being such? How do I protect it from
'innocent' users?
While doing
# VACUUM VERBOSE ANALYZE d_trr_dfh;
INFO: vacuuming xmms.d_trr_dfh
ERROR: xlog flush request 21F/9F57DF88 is not satisfied --- flushed only to
21F/924CE76C
CONTEXT: writing block 2919652 of relation 17461/17462/17668
I see this in the logs
user= CONTEXT: writing block
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Ow Mun Heng <ow.mun.h...@wdc.com> writes:
As background, this is a new box mirrored from a separate box via rsync.
I've basically copied/rsynced the entire PostgreSQL server and data files
over to create a mirror copy. After
I think I'm doing this wrongly. Before I go out and re-invent the wheel, I
thought I'd just check w/ the list. (I previously got the idea from IRC)
Table Master
-- Table Child1
-- Table Child2
-- Table Child3
Table Update -- Table to update come key items from source table.
The Master table
-Original Message-
From: gsst...@gmail.com [mailto:gsst...@gmail.com] On Behalf Of Greg Stark
On Tue, Jul 21, 2009 at 11:25 PM, Ow Mun Heng <ow.mun.h...@wdc.com> wrote:
RETURN NULL;
From the docs:
It can return NULL to skip the operation for the current row.
--
From: Sim Zacks [mailto:s...@compulab.co.il]
-Original Message-
From: gsst...@gmail.com [mailto:gsst...@gmail.com] On Behalf Of Greg Stark
On Tue, Jul 21, 2009 at 11:25 PM, Ow Mun Heng <ow.mun.h...@wdc.com> wrote:
RETURN NULL;
Just make your trigger return NEW and it won't
What can I do about this plan?
HashAggregate (cost=8035443.21..8035445.17 rows=157 width=24)
-> Nested Loop (cost=37680.95..7890528.72 rows=28982898 width=24)
suspect
Join Filter: ((a.test_run_start_date_time >= date.start_time) AND
(a.test_run_start_date_time <= date.end_time))
On Wed, 2009-06-03 at 01:28 -0600, Scott Marlowe wrote:
On Wed, Jun 3, 2009 at 12:32 AM, Ow Mun Heng <ow.mun.h...@wdc.com> wrote:
HashAggregate (cost=8035443.21..8035445.17 rows=157 width=24)
-> Nested Loop (cost=37680.95..7890528.72 rows=28982898 width=24)
suspect
Join Filter
Is there a method to do this without traversing the whole 20GB table?
What about manipulating the pg_attribute table and changing atttypid, just
like we can manipulate atttypmod to change from varchar(4) to varchar(8)?
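For the varchar(n) widening case only, the typmod stored for varchar is n + 4, so the catalog hack looks like the following. This is unsupported catalog surgery and a sketch only (names are placeholders); it should be tested on a copy first, and changing atttypid to a different type is not safe the same way, since the on-disk representation may differ:

```sql
-- UNSUPPORTED: widen varchar(4) -> varchar(8) with no table rewrite
UPDATE pg_attribute
   SET atttypmod = 8 + 4          -- typmod = declared length + VARHDRSZ (4)
 WHERE attrelid = 'my_20gb_table'::regclass
   AND attname  = 'my_col';
```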
Thanks
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
On Tue, May 12, 2009 at 01:23:14PM +0800, Ow Mun Heng wrote:
| sum of count | sum_of_count_squared | qty | qty 100 | qty 500 |
I'm thinking of lumping them into 1 column via an array instead
Hi,
Currently doing some level of aggregate tables for some data. These
data will be used for slice/dice activity and we want to be able to
play/manipulate the data such that I can get means and stddev data.
Eg: For each Original Column eg:
population_in_town : (I get derivatives)
- mean # of
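If each aggregate row keeps n, the sum of x and the sum of x squared, the mean and sample stddev can be recovered later without re-reading the raw rows. Column names follow the thread's sketch; the table name is an assumption:

```sql
SELECT sum_of_count / qty::float8                  AS mean,
       sqrt( (sum_of_count_squared
              - sum_of_count^2 / qty::float8)
             / (qty - 1) )                         AS stddev_sample
FROM   my_aggregate_table
WHERE  qty > 1;
```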
I was trying to clean up my database and after 2 days of vacuum full on
a 20GB table, I gave up and used pg_reorg which is seriously fast.
However, now I'm not sure if it is the cause of my unstable DB (8.2.13)
I can connect, initiate a simple query, and the DB will go down.
The logs say:
The
On Behalf Of Asko Oja
Hello
We use plProxy (RUN ON ALL) to run queries in parallel.
We split our database into 16 shards and distributed it over 4 servers.
So now we are running queries on 16 cpu's in parallel :)
Wow... query time improved how many-fold? Any idea?
I don't think I understand how PG implements full-text searching, or if my
search actually needs to use full-text search.
Basically, I have a btree index on a SERIAL_NUMBER which looks something like
ABC12345, AAA123434 or AAB131441.
I would like to have search on the specific text of the SERIAL_NUMBER
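For left-anchored serial-number lookups, full-text search isn't needed; a plain btree handles LIKE 'ABC%' as long as the index is built with the pattern opclass (required under en_US.UTF-8 and other non-C locales). Names below are placeholders:

```sql
CREATE INDEX idx_serial_prefix
    ON my_table (serial_number varchar_pattern_ops);

-- left-anchored patterns can now use the index:
SELECT * FROM my_table WHERE serial_number LIKE 'ABC%';
```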
On Mon, 2009-01-12 at 14:42 +0800, Phoenix Kiula wrote:
ALTER COLUMN TYPE is intended for cases where actual transformation of
the data is involved. Obviously varchar(20) to varchar(35) doesn't
really require any per-row effort, but there's no operation in the
system that handles that
Quoted from
http://www.nabble.com/Group-Sets-For-Replication-w--Partition-d19369646.html
Basically, what you would need to do, weekly, is to run a slonik
execute script script where the SQL script consists of something
similar to what's in tests/testpartition/gen_ddl_sql.sh:
- create the
for the noise.
Ps: I can't count how many times I've banged my head against the wall on
these sorts of things, and when I decide to send the email to the list to ask
for help, then I solve it. :-)
-Original Message-
From: Ow Mun Heng
Sent: Saturday, October 18, 2008 4:07 PM
To: 'pgsql-general
Hi,
I'm (more than a) bit confused as to the difference between TEXT and
varchar data-types.
AFAIK, varchar will have a max char limit of, if I'm not mistaken, ~65k? But
for TEXT, it's more like a BLOB and there's supposed to be no limit?
Anyhow, searching the archives (in my mail client - no
Hi, appreciate it if someone can help shed some light on what I may be doing
wrong.
I know there are caveats on using constraint exclusion to reduce the # of
partitions scanned.
pg:8.2.9
create table test (
code varchar,
dummy_col1 int,
dummy_col2 int
)
create table test_experimental_code (
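The DDL above is cut off; presumably each child carries a CHECK constraint on code, along these lines (a sketch with assumed constraint values). The usual caveat is that only comparisons against constants can be excluded:

```sql
CREATE TABLE test_experimental_code (
    CHECK (code = 'EXP')          -- assumed partitioning value
) INHERITS (test);

SET constraint_exclusion = on;

-- excluded: constant comparison
SELECT * FROM test WHERE code = 'EXP';

-- NOT excluded: the planner can't see through subselects, joins or
-- stable functions, so every partition gets scanned
SELECT * FROM test WHERE code = (SELECT 'EXP');
```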
Hi,
I'm playing around with triggers to implement partitioning.
I hit an error which I don't know the cause of, and I don't have internet here
at work to look it up.
ERROR: stack depth limit exceeded
I see that this is one of the options in postgresql.conf but I don't
know exactly
On Fri, 2008-09-05 at 10:35 +0200, Magnus Hagander wrote:
Ow Mun Heng wrote:
Hi,
I'm playing around with triggers to implement partitioning.
I hit an error which I don't know the cause of, and I don't have internet here
at work to look it up.
ERROR: stack depth limit
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Friday, September 05, 2008 11:22 PM
To: Magnus Hagander
Cc: Ow Mun Heng; pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_stack_depth Exceeded
Magnus Hagander [EMAIL PROTECTED] writes:
Ow Mun Heng wrote:
Am I
Hi,
I frequently query PG for between 10k - 65k rows of data and was
wondering if I should be considering usage of cursors.
I'm not too well versed with its purpose, but based on the docs, it is
supposed to be more efficient and also gives the impression of
responsiveness.
Currently, when I do
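A minimal cursor round-trip looks like this; rows arrive in batches, so the client can start showing results before the full 10k-65k set is materialized (table and filter are placeholders):

```sql
BEGIN;
DECLARE big_cur CURSOR FOR
    SELECT * FROM my_table WHERE code = 'X';
FETCH 1000 FROM big_cur;   -- repeat until a fetch returns no rows
CLOSE big_cur;
COMMIT;
```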
On Tue, 2008-09-02 at 22:56 -0400, Robert Treat wrote:
On Tuesday 02 September 2008 17:21:12 Asko Oja wrote:
On Tue, Sep 2, 2008 at 2:09 AM, Michael Nolan [EMAIL PROTECTED] wrote:
Oracle handles connecting to multiple databases (even on multiple/remote
computers) fairly seamlessly, PG
I posed this question to the Slony List as well, but no response yet.
I'll post it here as well, to elicit some responses, as there's a larger
community of people using PG+Slony who may also be on slony-list.
The question that I have is, I'm trying to determine if there's a
possibility that I
On Tue, 2008-08-26 at 00:58 -0400, Tom Lane wrote:
Ow Mun Heng [EMAIL PROTECTED] writes:
On Mon, 2008-08-25 at 10:18 -0400, Tom Lane wrote:
I suppose code_id is varchar or some such?
Yep
After some more investigation on the usefulness of the partial indexes,
I found that it really
On Mon, 2008-08-25 at 10:18 -0400, Tom Lane wrote:
Ow Mun Heng [EMAIL PROTECTED] writes:
On Mon, 2008-08-25 at 12:23 +0800, Ow Mun Heng wrote:
CREATE INDEX idx_d_trh_code_id_partial
ON xmms.d_trh_table
USING btree
(code_id) where code_id not in ('P000','000') and code_id is not null
CREATE INDEX idx_d_trh_code_id_partial
ON xmms.d_trh_table
USING btree
(code_id) where code_id not in ('P000','000') and code_id is not null;
ERROR: functions in index predicate must be marked IMMUTABLE
Just trying something new.
I want to create partial indexes on code_id which are not
On Mon, 2008-08-25 at 12:23 +0800, Ow Mun Heng wrote:
CREATE INDEX idx_d_trh_code_id_partial
ON xmms.d_trh_table
USING btree
(code_id) where code_id not in ('P000','000') and code_id is not null;
ERROR: functions in index predicate must be marked IMMUTABLE
Just trying something new
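One workaround with the same meaning as the rejected predicate is to spell the NOT IN out as a conjunction of simple comparisons, which the index machinery accepts (a sketch; whether this avoids the error on 8.2 should be verified):

```sql
CREATE INDEX idx_d_trh_code_id_partial
    ON xmms.d_trh_table USING btree (code_id)
 WHERE code_id <> 'P000'
   AND code_id <> '000'
   AND code_id IS NOT NULL;
```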
Hi,
I want to find out if there's a method to change this
select to_char('1 day 09:18:42.37996'::interval,'HH24:MI:SS')
to something like
24+9 (hours) = 33:18:42 instead of returning it as 09:18:42
I've not found a way to do this (yet)
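to_char() wraps at 24 hours because the day field is kept separate; computing the hour figure from the interval's epoch gives the 33:18:42 form (a sketch):

```sql
SELECT floor(extract(epoch FROM i) / 3600)::int::text
       || ':' || to_char(i, 'MI:SS') AS total_hms
FROM  (SELECT '1 day 09:18:42.37996'::interval AS i) s;
-- total_hms = '33:18:42'
```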
On Thu, 2008-08-21 at 11:53 +0200, Tomasz Ostrowski wrote:
On 2008-08-21 11:09, Ow Mun Heng wrote:
I want to find out if there's a method to change this
select to_char('1 day 09:18:42.37996'::interval,'HH24:MI:SS')
to something like
24+9(hours) = 33:18:42 instead of returning It as 09
On Tue, 2008-08-19 at 02:28 -0400, David Wilson wrote:
On Fri, Aug 15, 2008 at 11:42 PM, Amber [EMAIL PROTECTED] wrote:
Dear all:
We are currently considering using PostgreSQL to host a read only
warehouse,
we would like to get some experiences, best practices and performance metrics
-Original Message-
From: johnf [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] schema name in SQL statement.
Date: Tue, 19 Aug 2008 22:25:14 -0700
On Tuesday 19 August 2008 10:06:55 pm Scott Marlowe wrote:
On Tue, Aug 19, 2008 at 10:53 PM, johnf [EMAIL
-Original Message-
From: Scott Marlowe [EMAIL PROTECTED]
If you throw enough drives on a quality RAID controller at it you can
get very good throughput. If you're looking at read only / read
mostly, then RAID5 or 6 might be a better choice than RAID-10. But
RAID 10 is my default choice
On Mon, 2008-08-18 at 11:01 -0400, justin wrote:
Ow Mun Heng wrote:
-Original Message-
From: Scott Marlowe [EMAIL PROTECTED]
If you're looking at read only / read
mostly, then RAID5 or 6 might be a better choice than RAID-10. But
RAID 10 is my default choice unless
On Tue, 2008-08-12 at 08:38 -0700, Lennin Caro wrote:
you can use a cron job
I have my cron setup to do database wide vacuums each night and it
usually takes ~between 4-6 hours on ~200G DB size.
On days where there is huge activity, it can drag on for like 15+ hours.
I've recently dropped
On Mon, 2008-07-07 at 02:10 -0400, Lew wrote:
Ow Mun Heng wrote:
I want to change a column type from varchar(4) to varchar(5) or should I
just use text instead.
The choice of TEXT for the column would seem to be supported in the PG
manual,
which stresses that TEXT and VARCHAR are quite
I'm going to alter a bunch a tables columns's data type and I'm being
forced to drop a view which depends on the the colum.
eg: ALTER TABLE xs.d_trh ALTER m_dcm TYPE character varying;
ERROR: cannot alter type of a column used by a view or rule
DETAIL: rule _RETURN on view v_hpp depends on
Are there any quick hacks to do this? There's around 20-30 million
rows of data.
I want to change a column type from varchar(4) to varchar(5) or should I
just use text instead.
On Wed, 2008-06-25 at 14:58 +1000, Klint Gore wrote:
Ow Mun Heng wrote:
explain select * from d_trr where revision like '^B2.%.SX'
--where ast_revision = 'B2.M.SX'
Seq Scan on d_trr (cost=0.00..2268460.98 rows=1 width=16)
Filter: ((revision)::text ~~ '^B2.%.SX'::text)
show
On Wed, 2008-06-25 at 17:00 +1000, Klint Gore wrote:
Ow Mun Heng wrote:
On Wed, 2008-06-25 at 14:58 +1000, Klint Gore wrote:
Ow Mun Heng wrote:
explain select * from d_trr where revision like '^B2.%.SX'
--where ast_revision = 'B2.M.SX'
Seq Scan on d_trr (cost=0.00
explain select * from d_trr where revision like '^B2.%.SX'
--where ast_revision = 'B2.M.SX'
Seq Scan on d_trr (cost=0.00..2268460.98 rows=1 width=16)
Filter: ((revision)::text ~~ '^B2.%.SX'::text)
show lc_collate;
en_US.UTF-8
Is it that this is handled by tsearch2? Or I need to do the
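The pattern above mixes regex anchors into LIKE, which only understands % and _ (^ and . are literal there). Either switch to the regex match operator, or restate it as a left-anchored LIKE (a sketch; the trailing $ assumes the pattern should match the whole value):

```sql
-- regex: ^ anchors at the start, \. is a literal dot, .* is any run
SELECT * FROM d_trr WHERE revision ~ '^B2\\..*\\.SX$';

-- or LIKE, where . is literal and % is the wildcard; being left-anchored,
-- this form can use a pattern-ops btree index
SELECT * FROM d_trr WHERE revision LIKE 'B2.%.SX';
```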
This question, I think is directed at Devrim, but if anyone else can
answer it would be great as well.
I saw from the site that states that slony1 packages are available.
However, I can't find it from the yum archives. This is for Centos 5.
Does anyone know?
muchos gracias.
query is something like this
Select *
from v_test
where acode Like 'PC%'
and rev = '0Q'
and hcm = '1'
and mcm = 'K'
where acode, rev, hcm, mcm are all indexes.
Currently this query is only using the rev and mcm for the bitmapAND.
it then does a
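Rather than relying on a BitmapAnd of several single-column indexes, one multicolumn index with the equality columns first and the prefix-searched column last usually serves this query; a sketch against the view's base table (name assumed):

```sql
-- equality columns first; acode last so LIKE 'PC%' can still use the
-- index (pattern opclass needed under a non-C locale)
CREATE INDEX idx_rev_hcm_mcm_acode
    ON base_table (rev, hcm, mcm, acode varchar_pattern_ops);
```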
On Fri, 2008-03-14 at 00:50 -0600, Scott Marlowe wrote:
On Fri, Mar 14, 2008 at 12:28 AM, Ow Mun Heng [EMAIL PROTECTED] wrote:
query is something like this
Select *
from v_test
where acode Like 'PC%'
and rev = '0Q'
and hcm = '1
On Fri, 2008-03-14 at 07:53 +0100, A. Kretschmer wrote:
am Fri, dem 14.03.2008, um 14:28:15 +0800 mailte Ow Mun Heng folgendes:
query is something like this
Select *
from v_test
where acode Like 'PC%'
and rev = '0Q'
and hcm = '1'
and mcm = 'K
On Fri, 2008-03-14 at 08:26 +0100, A. Kretschmer wrote:
am Fri, dem 14.03.2008, um 15:06:56 +0800 mailte Ow Mun Heng folgendes:
On Fri, 2008-03-14 at 07:53 +0100, A. Kretschmer wrote:
am Fri, dem 14.03.2008, um 14:28:15 +0800 mailte Ow Mun Heng folgendes:
query is something like
Hi,
I finally figured out how come (I think) my analyzing of some specific
tables is taking so freaking long. 12 million rows, ~11GB table.
I had some of the columns with the statistics target set to 1000. (This was
previously because I was trying to optimise some things to make things
faster.) When
On Wed, 2008-03-12 at 21:33 -0500, Adam Rich wrote:
select c.relname, a.attname, attstattarget
from pg_catalog.pg_attribute a, pg_catalog.pg_class c,
pg_catalog.pg_namespace n
where a.attrelid = c.oid and c.relnamespace=n.oid
and n.nspname = 'public' and a.attnum > 0
Funny, that does not
On Wed, 2008-03-12 at 21:40 -0700, Scott Marlowe wrote:
On Wed, Mar 12, 2008 at 8:45 PM, Ow Mun Heng [EMAIL PROTECTED] wrote:
On Wed, 2008-03-12 at 21:33 -0500, Adam Rich wrote:
select c.relname, a.attname, attstattarget
from pg_catalog.pg_attribute a, pg_catalog.pg_class c
On Wed, 2008-01-30 at 09:14 +0100, Eugenio Tacchini wrote:
Hello,
I'm writing a function in PL/pgSQL and I would like to know if there
is a method to get the number of records in a result set, after a
select query, without executing the same query using COUNT(*).
not sure what exactly you
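Inside PL/pgSQL there's no need to re-run the query with COUNT(*); GET DIAGNOSTICS reads the row count of the previous statement. A sketch with placeholder names:

```sql
CREATE OR REPLACE FUNCTION matching_rows() RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    PERFORM 1 FROM my_table WHERE flag;   -- run the query, discard the rows
    GET DIAGNOSTICS n = ROW_COUNT;        -- rows the last statement produced
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```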
On Wed, 2008-01-30 at 20:47 +0900, Jason Topaz wrote:
I don't disagree with your point that it's not robust with examples of
exactly how a particular problem can be solved. But I think there are
enough, and more importantly, I don't think problem-solving is an
important focus for a manual
On Wed, 2008-01-30 at 20:14 -0600, Josh Trutwin wrote:
On Wed, 30 Jan 2008 13:20:58 -0500
Tom Hart [EMAIL PROTECTED] wrote:
I have 4 years of mySQL experience (I know, I'm sorry)
Why is this something to apologize for? I used to use MySQL for
everything and now use PostgreSQL for the
This is OT for this list and I don't have access to I-net (only email)
and I'm not subscribed to the Slony list.
I need to add a few additional columns to an existing replicated
set/table. I know that I can't just add the columns normally but have to
go through slonik's EXECUTE SCRIPT
On Tue, 2008-01-29 at 19:16 +, Dave Page wrote:
On Jan 29, 2008 6:16 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
I try to be reasonable (no laughing people :)).
Oh it's hard, so very, very hard!
But seriously, I've ranted on this some time ago( and you can tell that
I'm about to
On Mon, 2008-01-28 at 22:17 +, Jeremy Harris wrote:
We have one problematic table, which has a steady stream of entries
and a weekly mass-delete of ancient history. The bloat query from
Greg Sabino Mullane (thanks to Greg Smith for pointing it out) returns:
schemaname | tablename |
On Mon, 2008-01-28 at 20:57 -0500, Greg Smith wrote:
On Tue, 29 Jan 2008, Ow Mun Heng wrote:
Can you let me know what is the sql used to generate such a nice summary
of the tables?
Might as well dupe the old text; this went out to the performance list:
Greg Sabino Mullane released
On Fri, 2008-01-18 at 14:57 -0500, Chris Browne wrote:
[EMAIL PROTECTED] (Ow Mun Heng) writes:
Just wondering if my 'Perceived' feeling that since implementing slony
for master/slave replication of select tables, my master database
performance is getting slower.
I'm constantly seeing
I'm currently seeing more and more problems with vacuum as the DB size
gets bigger and bigger. (~220GB+)
Bear in mind that I'm working on a fairly big DB with unfairly sized
hardware (Celeron 1.7G, 2x500G Raid1 dbspace1, 1x500Gb dbspace2, 1x80G
system, 768MB Ram, 2G Swap on dspace2)
IO is main
Just wondering if my 'Perceived' feeling that since implementing slony
for master/slave replication of select tables, my master database
performance is getting slower.
I'm constantly seeing a very high amount of IO wait. ~40-80 according to
vmstat 1
and according to atop. (hdb/hdc = raid1
On Wed, 2008-01-09 at 08:41 +0200, Sim Zacks wrote:
Another way of doing this, without dblink, is using an unsecured language
(plpython, for example) is to connect to the sql server using odbc and then
putting the data into your postgresql.
I use perl DBI to connect to both PG and MSSQL.
On Tue, 2008-01-08 at 23:37 -0700, Guido Neitzer wrote:
On 08.01.2008, at 23:20, Joshua D. Drake wrote:
Like, I have a situation where I need multi-master just for
availability. Two small servers are good enough for that. But
unfortunately with PostgreSQL the whole setup is a major pain
On Tue, 2008-01-08 at 23:16 -0800, Joshua D. Drake wrote:
Ow Mun Heng wrote:
On Wed, 2008-01-09 at 08:41 +0200, Sim Zacks wrote:
Another way of doing this, without dblink, is using an unsecured language
(plpython, for example) is to connect to the sql server using odbc
On Tue, 2008-01-08 at 23:05 -0800, Joshua D. Drake wrote:
Sim Zacks wrote:
The reason companies go with the closed source, expensive solutions is
because they are better products.
Sometimes, sometimes not. It depends on your needs.
This is total FUD. Everything has a place. And
On Wed, 2008-01-09 at 00:21 -0700, Guido Neitzer wrote:
On 09.01.2008, at 00:08, Joshua D. Drake wrote:
Great! I was just trying to show you that there was a JDBC layer
available for multi-mastering with PostgreSQL.
When I find some time, I might dig a bit deeper in the Sequoia stuff
On Wed, 2008-01-09 at 00:24 -0700, Guido Neitzer wrote:
On 09.01.2008, at 00:14, Ow Mun Heng wrote:
Like, I have a situation where I need multi-master just for
availability. Two small servers are good enough for that. But
unfortunately with PostgreSQL the whole setup is a major pain
On Mon, 2008-01-07 at 13:13 -0600, Scott Marlowe wrote:
On Jan 7, 2008 12:57 PM, Devrim GÜNDÜZ [EMAIL PROTECTED] wrote:
I want to announce PostgreSQL RPM Buildfarm today.
This is very very very cool! Thanks you guys.
Ultra Even. No more mucking about for RPMS and finally a ptop in RPM
On Mon, 2008-01-07 at 21:54 -0800, Devrim GÜNDÜZ wrote:
Hi,
On Tue, 2008-01-08 at 13:34 +0800, Ow Mun Heng wrote:
finally a ptop in RPM form.
I packaged it last month, but did not announce it to public.:)
Hmm.. can I find it in the same location as the pgpool rpms?
Devrim, can you
On Fri, 2007-11-30 at 09:33 -0500, Andrew Sullivan wrote:
On Fri, Nov 30, 2007 at 01:22:31PM -, Greg Sabino Mullane wrote:
or a scapegoat. Please don't perpetuate this urban myth. No companies are
suing Oracle and Microsoft because of their products, and companies have
no expectation
I'm just getting my feet wet with slony, and during the setup of the slave
I did a dump and restore of the master DB to the slave DB.
However, during the startup of slony I noticed that it issues a truncate
command to the (to-be) replicated table. Hence, this means that there's
no such need for me
On Thu, 2008-01-03 at 19:17 -0500, Geoffrey wrote:
Ow Mun Heng wrote:
However during the startup of slony, I noticed that it issues a truncate
command to the (to be) replicated table. Hence, this means that there's
no such need for me to do a dump/restore in the 1st place
On Mon, 2007-11-26 at 12:39 -0500, Chris Browne wrote:
[EMAIL PROTECTED] (Jeff Larsen) writes:
Unfortunately, the only way to make things deterministic (or to get
from near real time to *GUARANTEED* real time) is to jump to
synchronous replication, which is not much different from 2PC (Two
On Wed, 2007-11-21 at 15:33 -0500, Andrew Sullivan wrote:
On Wed, Nov 21, 2007 at 12:20:51PM -0800, Garber, Mikhail wrote:
In the high-availabilty situation with a warm standby, is it possible (or
planned) to be able to make standby readable?
Yes, but it won't happen for 8.3. It's a
Hi,
is there a simple way for me to back up all the functions which I've
written for a server?
Thanks
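Two options, depending on version: pg_dump -s dumps every function along with the rest of the schema, and on 8.4 or later the definitions can be pulled straight from the catalogs (the schema name here is an assumption):

```sql
-- needs pg_get_functiondef(), available from PostgreSQL 8.4 on
SELECT pg_get_functiondef(p.oid) || ';'
FROM   pg_proc p
JOIN   pg_namespace n ON n.oid = p.pronamespace
WHERE  n.nspname = 'public';
```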
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
On Tue, 2007-12-18 at 20:12 -0700, Gregory Williamson wrote:
Kevin H. wrote on Tue 12/18/2007 7:26 PM
This is aimed at everyone in this community who contributes to the
Postgres project, but especially at the core folks who continually
make this community great through energy, time, money,
On Sun, 2007-12-16 at 16:11 -0800, Joshua D. Drake wrote:
On Sun, 16 Dec 2007 17:55:55 -0600
Scott Marlowe [EMAIL PROTECTED] wrote:
On Dec 14, 2007 1:33 AM, Ow Mun Heng [EMAIL PROTECTED] wrote:
I kept looking at the io columns and didn't even think of the swap
partition. It's true
On Fri, 2007-12-14 at 09:35 +0100, Harald Armin Massa wrote:
Ow Mun Heng,
The current issue which prompted me to do such housekeeping is due to
long database-wide vacuum time. (It went from 2 hours to 4 hours to 7 hours.)
If vacuum takes too long
I'm using centos 5 as the OS so, there's no fancy dtrace to look at
which processes is causing my disks to thrash.
I have 4 disks in the box. (all ide, 7200rpm)
1 OS disk [hda]
2 raided (1) disks [hdb/hdc]
1 pg_xlog disk [hdd] (also used as an alternate tablespace for
temp/in-transit files)
I'm starting to perform some basic housekeeping to try to trim some big
tables (~200 million rows - ~50GB+indexes) into separate partitions (via
inheritance).
The current issue which prompted me to do such housekeeping is due to
long database wide vacuum time. (it went from 2 hours to 4 hours to
On Fri, 2007-12-14 at 01:54 -0500, Tom Lane wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
there are a few things that I can think of that can can cause postgres
to cause i/o on a drive other than the data drive:
* logging (eliminate this by moving logs temporarily)
I'll have to try this