On Fri, Jul 7, 2017 at 10:12 AM, Shaun Thomas
wrote:
> I don't think the downstream dependencies will let that work without
> rebuilding them as well. The drop fails (without a cascade), and the other
> views and matviews that are built off of this all simply point to x_old.
Wow, ouch. Yeah, I'd neglected to consider dependent objects. Your
only "
>
>> If you _can't_ do that due to cloud restrictions, you'd actually be
>> better off doing an atomic swap.
>>
>> CREATE MATERIALIZED VIEW y AS ...;
>>
>> BEGIN;
>> ALTER MATERIALIZED VIEW x RENAME TO x_old;
>> ALTER MATERIALIZED VIEW y RENAME TO x;
>> DROP MATERIALIZED VIEW x_old;
>> COMMIT;
On Thu, Jul 6, 2017 at 11:25 AM, Shaun Thomas
wrote:
> I'm curious if I'm overlooking other possible architectures or tools
> that might make this simpler to manage.
One of the issues with materialized views is that they are based on
views... For a concurrent update, it essentially performs a looped
merge, which can be pretty ugly. That's the price
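For context on that merge behavior: a concurrent refresh requires a unique index on the materialized view, and it diffs the new result set against the old contents rather than rewriting the relation. A minimal sketch, with invented table and column names:

```sql
-- Hypothetical matview; the unique index is mandatory for CONCURRENTLY.
CREATE MATERIALIZED VIEW mv_totals AS
    SELECT account_id, sum(amount) AS total
    FROM ledger
    GROUP BY account_id;

CREATE UNIQUE INDEX mv_totals_pk ON mv_totals (account_id);

-- Readers are not blocked; internally Postgres computes the delta and
-- applies it row by row, which is the "looped merge" described above.
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_totals;
```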
On 5/28/15 9:31 AM, Ravi Krishna wrote:
explain select count(*) from tstesting.account where account_row_inst = 101 ;
Aggregate  (cost=8.16..8.17 rows=1 width=0)
  ->  Index Only Scan using account_pkey on account  (cost=0.14..8.16 rows=1 width=0)
        Index Cond: (account_row_inst = 101)
EXPLAIN only sh
On Thu, May 28, 2015 at 10:31 AM, Ravi Krishna wrote:
> I am testing partitioning of a large table. I am doing a range
Sorry, I forgot to clarify. I am using INHERITS for partitioning, with
check constraints built for range partitioning.
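That inheritance-based setup typically looks like the following sketch; the `account` table name echoes the EXPLAIN above, but the column ranges are invented:

```sql
-- Parent table plus child tables carrying CHECK constraints per range.
CREATE TABLE account (account_row_inst int, payload text);

CREATE TABLE account_0 (
    CHECK (account_row_inst >= 0 AND account_row_inst < 1000)
) INHERITS (account);

CREATE TABLE account_1 (
    CHECK (account_row_inst >= 1000 AND account_row_inst < 2000)
) INHERITS (account);

-- With constraint_exclusion enabled, a query containing a matching
-- literal, e.g. WHERE account_row_inst = 101, scans only account_0.
```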
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
On Sat, Jun 9, 2012 at 7:58 PM, Kevin Kempter
wrote:
> Hi All;
>
> We have a client that has a table where large blobs (bytea) are stored. the
> table has a key column that is numbers (like 112362) but unfortunately it's
> a varchar column so the blobs are accessed via queries like:
>
> select * f
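If I'm reading the problem right, the usual pitfalls with numeric keys stored in varchar are unquoted literals and per-row casts on the column. A hedged sketch, with invented names:

```sql
-- Comparing the varchar column against a string literal keeps the
-- btree index usable:
SELECT * FROM blob_store WHERE key_col = '112362';

-- Writing key_col = 112362 either errors or forces a cast, depending
-- on the server version, defeating the index. If callers insist on
-- integers, an expression index is one workaround (assumes every
-- stored key really is numeric):
CREATE INDEX blob_store_key_int ON blob_store ((key_col::bigint));
SELECT * FROM blob_store WHERE key_col::bigint = 112362;
```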
Thanks for pointing me to that article. I totally forgot that the postgres wiki
existed.
Updating is not an option at the moment, but we'll probably do so in the
future. Until then I can
live with the workaround.
Kind regards,
Marc
On 5 March 2012, 16:11, Marc Schablewski wrote:
> We have an optimizer problem regarding partitioned tables on 8.4.11.
...
> gdw=> explain select min( emg_id ) from edifactmsgpart;
> QUERY PLAN
>
On 13-01-2012 17:05, Josh Berkus wrote:
On 1/13/12 2:44 AM, alexandre - aldeia digital wrote:
Also, (2) only really works if you're going to obsolesce (remove)
archive records after a certain period of time. Otherwise the
sub-partitioning hurts performance.
Are there any moves to include
On 1/13/12 2:44 AM, alexandre - aldeia digital wrote:
>>
>> Also, (2) only really works if you're going to obsolesce (remove)
>> archive records after a certain period of time. Otherwise the
>> sub-partitioning hurts performance.
>>
>
> Are there any moves to include the "easy" table partitioning
Also, (2) only really works if you're going to obsolesce (remove)
archive records after a certain period of time. Otherwise the
sub-partitioning hurts performance.
Are there any moves to include the "easy" table partitioning in the 9.2
version?
Mike,
> Is it practical to partition on the status column and, eg, use triggers to
> move a row between the two partitions when status is updated? Any
> surprises to watch for, given the status column is actually NULL for active
> data and contains a value when archived?
When I've done this befo
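One possible shape for that trigger, hedged heavily since the poster's schema isn't shown (all names invented; NULL status means active, per the question):

```sql
CREATE OR REPLACE FUNCTION move_to_archive() RETURNS trigger AS $$
BEGIN
    -- Row transitions from active (status IS NULL) to archived.
    IF OLD.status IS NULL AND NEW.status IS NOT NULL THEN
        DELETE FROM widgets_active WHERE id = OLD.id;
        INSERT INTO widgets_archived VALUES (NEW.*);
        RETURN NULL;  -- cancel the original UPDATE on the active partition
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER widgets_archive_trg
    BEFORE UPDATE ON widgets_active
    FOR EACH ROW EXECUTE PROCEDURE move_to_archive();
```

One surprise to watch for: the row visibly disappears from the active partition mid-transaction, so concurrent readers and RETURNING clauses may not see what the application expects.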
Mike Blackwell wrote:
> We have a set of large tables. One of the columns is a status indicator
> (active / archived). The queries against these tables almost always include
> the status, so partitioning against that seems to makes sense from a logical
> standpoint, especially given most of the
> -Original Message-
> From: Ben [mailto:midfi...@gmail.com]
> Sent: Friday, October 29, 2010 12:16 PM
> To: Igor Neyman
> Cc: pgsql-performance@postgresql.org
> Subject: Re: partitioning question 1
>
> On Oct 29, 2010, at 7:38 AM, Igor Neyman wrote:
>
> >> is my intuition completely
On Oct 29, 2010, at 7:38 AM, Igor Neyman wrote:
>> is my intuition completely off on this?
>>
>> best regards, ben
>>
>
> If your SELECT retrieves substantial amount of records, table scan could
> be more efficient than index access.
>
> Now, if while retrieving large amount of records "WHERE
> -Original Message-
> From: Ben [mailto:midfi...@gmail.com]
> Sent: Thursday, October 28, 2010 12:37 PM
> To: pgsql-performance@postgresql.org
> Subject: partitioning question 1
>
> hello --
>
> my last email was apparently too long to respond to so i'll
> split it up into shorter piec
whoops, didn't see the i=9 (linebreak! linebreak!)
nonetheless that is a static constant constraint on the column i, and i was
asking if constraint exclusions would work for dynamic constraints (like those
derived from a table joined against.) so for example the bar table has only
0-9 in its h
On Thu, 2010-10-28 at 12:59 -0700, Ben wrote:
> On Oct 28, 2010, at 12:44 PM, Joshua D. Drake wrote:
> >
> > My tests show you are incorrect:
> >
> >
> > part_test=# explain analyze select * from foo join bar using (i) where
> > i=9;
> >QUERY
>
On Thu, 2010-10-28 at 12:59 -0700, Ben wrote:
> explain analyze select * from foo join bar using (i);
vs
explain analyze select * from foo join bar using (i) where i=9;
On Oct 28, 2010, at 12:44 PM, Joshua D. Drake wrote:
>
> My tests show you are incorrect:
>
>
> part_test=# explain analyze select * from foo join bar using (i) where
> i=9;
>QUERY
> PLAN
> -
On Thu, 2010-10-28 at 12:25 -0700, Ben wrote:
> i think we are talking about two different things here: the constraints on
> the table, and the where-clause constraints in a query which may or may not
> trigger constraint exclusion. i understand that table constraints have to be
> constants --
On Oct 28, 2010, at 11:50 AM, Joshua D. Drake wrote:
>>> Yes the constraints have to be static. Not sure about the operator
>>> question honestly.
>>
>> this seems to severely restrict their usefulness -- our queries are data
>> warehouse analytical -type queries, so the constraints are usually
On Thu, 2010-10-28 at 11:44 -0700, Ben wrote:
> > Yes the constraints have to be static. Not sure about the operator
> > question honestly.
>
> this seems to severely restrict their usefulness -- our queries are data
> warehouse analytical -type queries, so the constraints are usually
> data-d
thanks for the prompt response. some comments / questions below :
On Oct 28, 2010, at 10:31 AM, Joshua D. Drake wrote:
>> ...constraint exclusion is able to eliminate table partitions. the I/O
>> advantages of having queries target small subtables are the same as the I/O
>> advantages of clust
On Thu, 2010-10-28 at 09:36 -0700, Ben wrote:
> hello --
>
> my last email was apparently too long to respond to so i'll split it up into
> shorter pieces. my first question :
>
> my understanding of how range partitioning and constraint exclusion works
> leads me to believe that it does not b
On Wednesday 09 September 2009 07:56:53 Kenneth Cox wrote:
> In case you aren't comfortable running unreleased planner patches from
> pgsql-hackers, a workaround was discussed on this list recently:
>
> http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php
>
> On Wed, 09 Sep 2009 06
In case you aren't comfortable running unreleased planner patches from
pgsql-hackers, a workaround was discussed on this list recently:
http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php
On Wed, 09 Sep 2009 06:05:22 -0400, Heikki Linnakangas
wrote:
Kevin Kempter wrote:
> Hi all. I have a large table (>2 billion rows) that's partitioned by date based
> on an epoch int value. We're running a select max(id) where id is the PK. I
> have a PK index on each of the partitions, no indexes at all on the base
> table.
>
> If I hit a partition table
If I can't find an answer in the next day or so I'll crack open OpenJPA and
disable that check. It's a very simple, if ugly, hack.
--Nik
On 5/12/08, Neil Peter Braggio <[EMAIL PROTECTED]> wrote:
>
> I have the same problem in PG 8.2
>
> To resolve this issue I had to create a new table with the
I have the same problem in PG 8.2
To resolve this issue I had to create a new table with the same
structure as the partitioned table, with a trigger for insert and
update. All the operations the application has to do are directed to
this new table.
When a new record is inserted in the new table
Tore Lukashaugen wrote:
Hi,
I am new to postgres having worked with Oracle in the past. I am interested
in understanding Postgres's table partition functionality better.
Specifically, I have a third party application running against my postgres
database, but the database is becoming rather la
Abu Mushayeed wrote:
I have partitioned a table based on period (e.g., cdate >=
'2007-01-01'::date and cdate <= '2007-03-31'::date).
Now, I am issuing a query like cdate >= CURRENT_DATE - 1 and cdate <=
CURRENT_DATE, and it scans all the partitions. But if I do cdate >=
'2007-01-01'::date and cdate <=
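The underlying issue: constraint exclusion is decided at plan time, and CURRENT_DATE is a stable function rather than a constant, so the planner cannot prove which partitions are irrelevant. Interpolating a literal date (in the application, or on newer releases via dynamic SQL) restores pruning. A sketch with invented names:

```sql
-- Scans every partition: the planner can't fold CURRENT_DATE into a
-- constant it can check against each partition's CHECK constraint.
SELECT count(*) FROM events WHERE cdate >= CURRENT_DATE - 1;

-- Prunes: the literal is compared against the constraints at plan
-- time. Build the literal in the client, or use dynamic SQL:
DO $$
BEGIN
    EXECUTE format(
        'SELECT count(*) FROM events WHERE cdate >= %L::date',
        CURRENT_DATE - 1);
END;
$$;
```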
> Can anybody help me out
>
> I just wanted to know what will be the configuration settings for
> partitioning table so as to make inserts faster on the partitioned tables.
Well, that depends. Many questions are important here. Will you insert
into several partitions or only to a single one? Do yo
On Thu, 11 Jan 2007 08:18:39 -0600, "Adam Rich" <[EMAIL PROTECTED]> wrote:
>
> Subpartitions are just a way to break (parent) partitions up into
> smaller pieces. Those of course can be moved to other disks
> just like the main partitions.
Ah, didn't know that (i just wondered why i need a s
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bernd
Helmle
Sent: Thursday, January 11, 2007 6:51 AM
To: Scott Marlowe
Cc: Jim C. Nasby; Jeremy Haile; pgsql-performance@postgresql.org
Subject: Re: [PERF
Well - whether or not MySQL's implementation of partitioning has some
deficiency, it sure is a lot easier to set up than PostgreSQL. And I
don't think there is any technical reason that setting up partitioning
on Postgres couldn't be very easy and still be robust.
On Thu, 11 Jan 2007 13:59:20 +01
> On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
> >> Take a look at the set of partitioning functions I wrote shortly after
> >> the 8.1 release:
> >>
> >> http://www.studenter.hb.se/~arch/files/part_functions.sql
> >>
> >> You could probably work something out using those funct
On Wed, 10 Jan 2007 15:30:16 -0600, Scott Marlowe <[EMAIL PROTECTED]> wrote:
[...]
>
> And I don't think the mysql partition supports tablespaces either.
>
MySQL supports distributing partitions over multiple disks via the SUBPARTITION
clause [1].
I leave it to you, whether their syntax is
On Wed, 2007-01-10 at 15:15, Jeremy Haile wrote:
> You can do list partitioning in MySQL:
> http://dev.mysql.com/doc/refman/5.1/en/partitioning-list.html
>
> My comment was not meant as a criticism of PostgreSQL's current state -
> I'm glad that it has partitioning. I'm simply wondering if there
You can do list partitioning in MySQL:
http://dev.mysql.com/doc/refman/5.1/en/partitioning-list.html
My comment was not meant as a criticism of PostgreSQL's current state -
I'm glad that it has partitioning. I'm simply wondering if there are
any plans of adopting a more user-friendly syntax in th
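For comparison, the list-partitioning syntax on the linked page looks roughly like the following (abbreviated and with invented names, so treat it as a sketch rather than a verbatim quote from the manual):

```sql
-- MySQL 5.1 list partitioning: rows are routed by explicit value sets.
CREATE TABLE employees (id INT, store_id INT)
PARTITION BY LIST (store_id) (
    PARTITION p_east VALUES IN (1, 3, 5),
    PARTITION p_west VALUES IN (2, 4, 6)
);
```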
On Wed, 2007-01-10 at 15:09, Jim C. Nasby wrote:
> On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:
> > This seems so much more intuitive and simpler than what is required to
> > set it up in PostgreSQL. Does PostgreSQL's approach to table
> > partitioning have any advantage over MySQ
On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:
> This seems so much more intuitive and simpler than what is required to
> set it up in PostgreSQL. Does PostgreSQL's approach to table
> partitioning have any advantage over MySQL? Is a "nicer" syntax planned
> for Postgres?
The focu
I really wish that PostgreSQL supported a "nice" partitioning syntax
like MySQL has.
Here is an example:
CREATE TABLE tr (id INT, name VARCHAR(50), purchased DATE)
PARTITION BY RANGE( YEAR(purchased) ) (
PARTITION p0 VALUES LESS THAN (1990),
PARTITION p1 VALUES LESS THAN (1995),
PARTITIO
On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
Take a look at the set of partitioning functions I wrote shortly after
the 8.1 release:
http://www.studenter.hb.se/~arch/files/part_functions.sql
You could probably work something out using those functions (as-is, or
as inspirati
BTW, someone coming up with a set of functions to handle partitioning
for the general 'partition by time' case would make a GREAT project on
pgFoundry.
On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
> Take a look at the set of partitioning functions I wrote shortly after
> the 8
Take a look at the set of partitioning functions I wrote shortly after
the 8.1 release:
http://www.studenter.hb.se/~arch/files/part_functions.sql
You could probably work something out using those functions (as-is, or
as inspiration) together with pgAgent
(http://www.pgadmin.org/docs/1.4/pgagent.h
Tom Lane <[EMAIL PROTECTED]> writes:
> It's usually better to use partitioning rules that have something to
> do with the WHERE-clauses you'd be using anyway. For instance, try
> to partition on ranges.
I agree and tried to create new partitioned tables. But now I ran into
some other performance
Martin Lesser <[EMAIL PROTECTED]> writes:
> I try to partition a large table (~120 million rows) into 50 smaller
> tables but using the IMO immutable %-function constraint exclusion
> does not work as expected:
The constraint exclusion mechanism is not as bright as you think.
There are some very lim
On Sun, Mar 19, 2006 at 01:31:42PM +0100, Antoine wrote:
> Hi,
> Is there any work on the cards for implementing other partitioning
> strategies? I see mysql 5.1 will have support for hashes and stuff but
> didn't see anything in the todos for postgres.
You'd have to provide a pretty convincing ar
On Tue, 2006-02-07 at 18:59 -0600, Jim C. Nasby wrote:
> I'm honestly somewhat surprised someone hasn't run into this problem
> with partitioning yet; or maybe everyone who needs to do long
> transactions just shoves those off to slony slaves...
All DDL takes locks, on all DBMS.
Best Regards, Si
On Tue, Feb 07, 2006 at 10:09:02PM +, Simon Riggs wrote:
> On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:
>
> > > > 1- long running report is running on view
> > > > 2- continuous inserters into view into a table via a rule
> > > > 3- truncate or rule change occur
> Subject: Re: [PERFORM] partitioning and locking problems
>
> On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:
>
> > > > 1- long running report is running on view
> > > > 2- continuous inserters into view into a table
> via a rule
At 05:09 PM 2/7/2006, Simon Riggs wrote:
I'd be disinclined to using the locking system as a scheduling tool.
I agree with Simon. Using the locking system for scheduling feels
like a form of Programming by Side Effect.
Ron
---(end of broadcast)---
On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:
> > > 1- long running report is running on view
> > > 2- continuous inserters into view into a table via a rule
> > > 3- truncate or rule change occurs, taking an exclusive lock.
> > > Must wait for #1 to finish.
> > > 4- new reports and
On Wed, Feb 01, 2006 at 10:20:21AM -0500, Tom Lane wrote:
> "Marc Morin" <[EMAIL PROTECTED]> writes:
> > Do you mean it would be impossible to change the code so that existing
> > selects continue to use the pre-truncated table until they commit?
>
> Yes, because that table won't exist any more (as in the file's been
> unlinked) once the TRUNCATE commits.
, 2006 7:44 AM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> Hi, Marc,
>
> Marc Morin wrote:
>
> > 1- long running report is running on view
> > 2- continuous inserters into v
Hi, Marc,
Marc Morin wrote:
> 1- long running report is running on view
> 2- continuous inserters into view into a table via a rule
> 3- truncate or rule change occurs, taking an exclusive lock.
> Must wait for #1 to finish.
> 4- new reports and inserters must now wait for
"Marc Morin" <[EMAIL PROTECTED]> writes:
> Do you mean it would be impossible to change the code so that existing
> selects continue to use the pre-truncated table until they commit?
Yes, because that table won't exist any more (as in the file's been
unlinked) once the TRUNCATE commits.
> The upd
From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, February 01, 2006 12:50 AM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> "Marc Morin" <[EMAIL PROTECTED]> writes:
> > Would l
Marc Morin wrote:
Under both these circumstances (truncate and create / replace rule) the
locking behaviour of these commands can cause locking problems for us.
The scenario is best illustrated as a series of steps:
1- long running report is running on view
2- continuous inser
"Marc Morin" <[EMAIL PROTECTED]> writes:
> Would like to understand the implications of changing postgres'
> code/locking for rule changes and truncate to not require locking out
> select statements?
It won't work...
regards, tom lane
Yes, that's how I solved it... and I totally agree that it's hard for the
planner to guess what to do on the partitions. But maybe there should be
something in the docs explaining the limitations ...
I'm only asking for the biggest 100 ids from the table, so I thought maybe the
planner would ta
I just saw that there is no where clause in the query, that you had
fed to explain plan.
you need to include a where clause based on id_machine column to see the effect.
On 12/13/05, Pandurangan R S <[EMAIL PROTECTED]> wrote:
> Did you set constraint_exclusion = true in postgresql.conf file?
>
> O
Did you set constraint_exclusion = true in postgresql.conf file?
On 12/13/05, Marc Cousin <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I've been working on trying to partition a big table (I've never partitioned a
> table in any other database till now).
> Everything went ok, except one query that didn't
If you make the assertion that you are transferring equal or less
session data between your session server (let's say an RDBMS) and the
app server than you are between the app server and the client, an out
of band 100Mb network for session information is plenty of bandwidth.
So if you count on a m
Ross,
> Memcached is a PG memory store, I gather,
Nope. It's a hyperfast resident-in-memory hash that allows you to stash stuff
like user session information and even materialized query set results.
Thanks to SeanC, we even have a plugin, pgmemcached.
> but...what is squid, lighttpd?
> anyt
On 12 May 2005, at 18:33, Josh Berkus wrote:
People,
In general I think your point is valid. Just remember that it probably
also matters how you count page views. Because technically images are a
separate page (and this thread did discuss serving up images). So if
there are 20 graphics on a sp
People,
> In general I think your point is valid. Just remember that it probably
> also matters how you count page views. Because technically images are a
> separate page (and this thread did discuss serving up images). So if
> there are 20 graphics on a specific page, that is 20 server hits just
100 hits a second = 8,640,000 hits a day. I work on a site which does >
100 million dynamic pages a day. In comparison Yahoo probably does >
100,000,000,000 (100 billion) views a day
if I am interpreting Alexa's charts correctly. Which is about
1,150,000 a second.
Read the help on Alexa
Alex Turner wrote:
Ok - my common sense alarm is going off here...
There are only 6.446 billion people worldwide. 100 Billion page views
would require every person in the world to view 18 pages of yahoo
every day. Not very likely.
http://www.internetworldstats.com/stats.htm
suggests that there ar
Ok - my common sense alarm is going off here...
There are only 6.446 billion people worldwide. 100 Billion page views
would require every person in the world to view 18 pages of yahoo
every day. Not very likely.
http://www.internetworldstats.com/stats.htm
suggests that there are around 1 billio
On 12 May 2005, at 15:08, Alex Turner wrote:
Having local sessions is unnecessary, and here is my logic:
Generally most people have less than 100Mb of bandwidth to the
internet.
If you make the assertion that you are transferring equal or less
session data between your session server (let's say an
Having local sessions is unnecessary, and here is my logic:
Generally most people have less than 100Mb of bandwidth to the internet.
If you make the assertion that you are transferring equal or less
session data between your session server (let's say an RDBMS) and the
app server than you are between
> machines. Which has its own set of issues entirely. I am not entirely
> sure that memcached actually does serialize data when it's committed into
> memcached either, although I could be wrong, I have not looked at the
I think it does, ie. it's a simple mapping of [string key] => [string
value].
On 11 May 2005, at 23:35, PFC wrote:
However, memcached (and for us, pg_memcached) is an excellent way to
improve horizontal scalability by taking disposable data (like session
information) out of the database and putting it in protected RAM.
So, what is the advantage of such a system ve
However, memcached (and for us, pg_memcached) is an excellent way to
improve horizontal scalability by taking disposable data (like session
information) out of the database and putting it in protected RAM.
So, what is the advantage of such a system versus, say, a "sticky
sessions" system wh
On Wed, May 11, 2005 at 08:57:57AM +0100, David Roussel wrote:
> For an interesting look at scalability, clustering, caching, etc for a
> large site have a look at how livejournal did it.
> http://www.danga.com/words/2004_lisa/lisa04.pdf
>
> They have 2.6 Million active users, posting 200 new blog
On Wed, 2005-05-11 at 17:13 +0800, Christopher Kings-Lynne wrote:
> > Alex Stapleton wrote
> > Be more helpful, and less arrogant please.
>
> Simon told you all the reasons clearly and politely.
Thanks Chris for your comments.
PostgreSQL can always do with one more developer and my sole intent
Alex Stapleton <[EMAIL PROTECTED]> writes:
> Acceptable Answers to 'So, when/is PG meant to be getting a decent
> partitioning system?':
...
> 3. You're welcome to take a stab at it, I expect the community would
> support your efforts as well.
As long as we're being curt all around, this one'
David,
> It's interesting that the solution livejournal have arrived at is quite
> similar in ways to the way google is set up.
Yes, although again, they're using memcached as pseudo-clustering software,
and as a result are limited to what fits in RAM (RAM on 27 machines, but it's
still RAM).
Mischa Sandberg <[EMAIL PROTECTED]> writes:
> So, simplicity dictates something like:
> table pg_remote(schemaname text, connectby text, remoteschema text)
Previous discussion of this sort of thing concluded that we wanted to
follow the SQL-MED standard.
regards, tom lane
Acceptable Answers to 'So, when/is PG meant to be getting a decent
partitioning system?':
1. Person X is working on it I believe.
2. It's on the list, but nobody has done anything about it yet
3. You're welcome to take a stab at it, I expect the community would
support your efforts a
On 11 May 2005, at 09:50, Alex Stapleton wrote:
On 11 May 2005, at 08:57, David Roussel wrote:
For an interesting look at scalability, clustering, caching, etc for a
large site have a look at how livejournal did it.
http://www.danga.com/words/2004_lisa/lisa04.pdf
I have implemented similar syst
On 11 May 2005, at 08:57, David Roussel wrote:
For an interesting look at scalability, clustering, caching, etc for a
large site have a look at how livejournal did it.
http://www.danga.com/words/2004_lisa/lisa04.pdf
I have implemented similar systems in the past, it's a pretty good
technique, unf
On 11 May 2005, at 08:16, Simon Riggs wrote:
On Tue, 2005-05-10 at 11:03 +0100, Alex Stapleton wrote:
So, when/is PG meant to be getting a decent partitioning system?
ISTM that your question seems to confuse where code comes from. Without
meaning to pick on you, or reply rudely, I'd like to explo
For an interesting look at scalability, clustering, caching, etc for a
large site have a look at how livejournal did it.
http://www.danga.com/words/2004_lisa/lisa04.pdf
They have 2.6 Million active users, posting 200 new blog entries per
minute, plus many comments and countless page views.
Althou
Josh Berkus wrote:
The other problem, as I was told it at OSCON, was that these were not
high-availability clusters; it's impossible to add a server to an existing
cluster
Yeah, that's a pretty significant problem.
a server going down is liable to take the whole cluster down.
That's news to me. D
On Tue, 2005-05-10 at 11:03 +0100, Alex Stapleton wrote:
> So, when/is PG meant to be getting a decent partitioning system?
ISTM that your question seems to confuse where code comes from. Without
meaning to pick on you, or reply rudely, I'd like to explore that
question. Perhaps it should be a F
Neil,
> Sure, but that hardly makes it not "usable". Considering the price of
> RAM these days, having enough RAM to hold the database (distributed over
> the entire cluster) is perfectly acceptable for quite a few people.
The other problem, as I was told it at OSCON, was that these were not
hig
Joshua D. Drake wrote:
Neil Conway wrote:
Oh? What's wrong with MySQL's clustering implementation?
Ram only tables :)
Sure, but that hardly makes it not "usable". Considering the price of
RAM these days, having enough RAM to hold the database (distributed over
the entire cluster) is perfectly acc
On Tue, May 10, 2005 at 08:02:50 -0700,
Adam Haberlach <[EMAIL PROTECTED]> wrote:
>
>
> With all the Opteron v. Xeon around here, and talk of $30,000 machines,
> perhaps it would be worth exploring the option of buying 10 cheapass
> machines for $300 each. At the moment, that $300 buys you, fr
Neil Conway wrote:
Josh Berkus wrote:
Don't hold your breath. MySQL, to judge by their first "clustering"
implementation, has a *long* way to go before they have anything usable.
Oh? What's wrong with MySQL's clustering implementation?
Ram only tables :)
-Neil
Josh Berkus wrote:
Don't hold your breath. MySQL, to judge by their first "clustering"
implementation, has a *long* way to go before they have anything usable.
Oh? What's wrong with MySQL's clustering implementation?
-Neil
Quoting "Jim C. Nasby" <[EMAIL PROTECTED]>:
> To the best of my knowledge no such work has been done. There is a
> project (who's name escapes me) that lets you run queries against a
> remote postgresql server from a postgresql connection to a different
> server, which could serve as the basis for
Quoting Christopher Kings-Lynne <[EMAIL PROTECTED]>:
>
> >>*laff*
> >>Yeah, like they've been working on views for the last 5 years, and
> >>still haven't released them :D :D :D
> >
> > ?
> > http://dev.mysql.com/doc/mysql/en/create-view.html
> > ...for MySQL 5.0.1+ ?
>
> Give me a call when i
*laff*
Yeah, like they've been working on views for the last 5 years, and
still haven't released them :D :D :D
?
http://dev.mysql.com/doc/mysql/en/create-view.html
...for MySQL 5.0.1+ ?
Give me a call when it's RELEASED.
Chris
Mischa Sandberg wrote:
Quoting Christopher Kings-Lynne <[EMAIL PROTECTED]>:
This is why I mention partitioning. It solves this issue by storing
different data sets on different machines under the same schema.
These separate chunks of the table can then be replicated as well for
data redundancy a
Quoting Christopher Kings-Lynne <[EMAIL PROTECTED]>:
> > This is why I mention partitioning. It solves this issue by storing
> > different data sets on different machines under the same schema.
> > These separate chunks of the table can then be replicated as well for
> > data redundancy and so o