On 25 Oct, 17:36, [EMAIL PROTECTED] wrote:
The design is based on access patterns, i.e. one partition represents a
group of data along a discrete axis, so partitions are perfect for
modeling that. Only the last partition will be used in normal cases. The
previous partitions only need
Adrian Klaver wrote:
I might be missing the point, but couldn't you do a Copy to a single table
instead of multiple inserts and avoid the index overhead.
Are you saying, have one large table with indexes and do a COPY to it or
are you saying a one small empty table and do a COPY to it?
-- Original message --
From: Thomas Finneid [EMAIL PROTECTED]
Adrian Klaver wrote:
I might be missing the point, but couldn't you do a Copy to a single table
instead of multiple inserts and avoid the index overhead.
Are you saying, have one large
Adrian Klaver wrote:
I'm thinking do a COPY to one large table. If the cost of indexing is relatively fixed as
you indicated in your previous post then you reduce the indexing overhead to each
COPY operation instead of each insert.
No, what I meant was that creating an index on a table
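A minimal sketch of the COPY approach being discussed; the table name, columns and file path are illustrative, not from the thread:

```sql
-- Load a whole batch with a single COPY instead of row-by-row INSERTs,
-- paying the index-maintenance overhead once per batch rather than per row.
COPY readings (id, ts, value)
    FROM '/tmp/batch_0001.csv' WITH CSV;
```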
On Friday 26 October 2007 8:56 am, [EMAIL PROTECTED] wrote:
Serious engineering does not imply perfect engineering; I have analyzed it
and made my tradeoffs. What you are forgetting here is that you clearly
don't understand the entire solution, so I will try to explain it again. And
if you
Gregory Stark wrote:
Tom's point is that if you have 55k tables then just *finding* the newest
child table is fairly expensive. You're accessing a not insignificant-sized
index and table of tables. And the situation is worse when you consider the
number of columns all those tables have, all
Scott Marlowe wrote:
It may well be that one big table and partial indexes would do what
you want. Did you explore partial indexes against one big table?
That can be quite handy.
Hmm, interesting, I suppose it could work. Thanks for the suggestion,
I'll keep it in mind.
regards
thomas
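Scott's partial-index idea could be sketched like this against a single big table; the table, column names and predicate are assumptions for illustration:

```sql
-- One large table, with a partial index covering only the recent slice
-- that queries normally touch, keeping the index small and cheap to update.
CREATE INDEX idx_readings_recent
    ON readings (ts, value)
    WHERE ts >= '2007-10-01';
```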
Jorge Godoy wrote:
On Thursday 25 October 2007 13:36:09, [EMAIL PROTECTED] wrote:
Regarding dumps and restore; the system will always be offline during
those operations and it will be so for several days, because a new project
might start at another location in the world, so the travelling
On Fri, Oct 26, 2007 at 08:26:13AM +0200, Thomas Finneid wrote:
Scott Marlowe wrote:
It may well be that one big table and partial indexes would do what
you want. Did you explore partial indexes against one big table?
That can be quite handy.
Hmm, interesting, I suppose it could work.
On Fri, Oct 26, 2007 at 08:25:09AM +0200, Thomas Finneid wrote:
Gregory Stark wrote:
Tom's point is that if you have 55k tables then just *finding* the newest
child table is fairly expensive. You're accessing a not insignificant-sized
index and table of tables. And the situation is worse when
Sam Mason [EMAIL PROTECTED] writes:
On Fri, Oct 26, 2007 at 08:25:09AM +0200, Thomas Finneid wrote:
Gregory Stark wrote:
Tom's point is that if you have 55k tables then just *finding* the newest
child table is fairly expensive. You're accessing a not insignificant-sized
index and table of
On Fri, Oct 26, 2007 at 02:21:39PM +0100, Gregory Stark wrote:
Sam Mason [EMAIL PROTECTED] writes:
I think the lookup that is being referred to is the fact that if you've
got 55k (plus) files in a directory then the filesystem still has to
perform a search in the directory to locate the
Sam Mason [EMAIL PROTECTED] writes:
On Fri, Oct 26, 2007 at 08:25:09AM +0200, Thomas Finneid wrote:
Gregory Stark wrote:
Tom's point is that if you have 55k tables then just *finding* the
newest
child table is fairly expensive. You're accessing a not
insignificant-sized
index and table of
Hi
I am volume testing a db model that consists of partitioned tables. The
db has been running for a week and a half now and has built up to contain
approx 55000 partition tables of 18000 rows each. The root table therefore
contains about 1 billion rows. When I try to do a select count(*) of the
[EMAIL PROTECTED] writes:
ERROR: out of memory
DETAIL: Failed on request of size 130.
Does anybody have any suggestion as to which parameter I should tune to
give it more memory to be able to perform queries on the root table?
This indicates that malloc() failed which means the
Hi
I have tried to answer to the best of my knowledge, but it's running on
Solaris 10, and I am not that familiar with Solaris (Go Linux!!! :)
any more memory. Either you have a very low memory ulimit (look at ulimit -a
in the same session as Postgres) or your machine is really low on memory.
[EMAIL PROTECTED] writes:
max_connections = 1000
Do you actually have anywhere near this number of processes? What is your
setting for work_mem? Keep in mind every process could use as much as work_mem
and actually it's possible to use that much several times over.
Also, what is your
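The worst case being described can be put into rough numbers; all figures below are illustrative, not taken from the poster's configuration:

```sql
-- 1000 backends, each potentially using work_mem several times over
-- (e.g. one allocation per sort/hash node in a plan):
-- 1000 connections x 4 MB work_mem x 3 nodes = 12000 MB of potential demand.
SELECT 1000 * 4 * 3 AS worst_case_mb;
```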
[EMAIL PROTECTED] wrote:
are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.
The file is 20M, these are the last lines: (the first line continues
until ff_26000)
idx_attributes_g1_seq_1_ff_4_value7: 1024 total in
Subject: Re: [GENERAL] select count() out of memory
Hi
I have tried to answer to the best of my knowledge, but it's running on
Solaris 10, and I am not that familiar with Solaris (Go Linux!!! :)
any more memory. Either you have a very low memory ulimit (look at ulimit -a
in the same
Good morning
Please display these memory settings from your postgresql.conf file
sort_mem
shared_buffers
I have shown the entire configuration. If it's not in the configuration
shown, I have changed its value.
I have used the configuration example provided by Sun regarding running
postgres on
[EMAIL PROTECTED] wrote:
are a dump of Postgres's current memory allocations and could be
useful in
showing if there's a memory leak causing this.
The file is 20M, these are the last lines: (the first line continues
until ff_26000)
idx_attributes_g1_seq_1_ff_4_value7: 1024 total in
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
are a dump of Postgres's current memory allocations and could be
useful in
showing if there's a memory leak causing this.
The file is 20M, these are the last lines: (the first line continues
until ff_26000)
I have shown the entire configuration. If it's not in the configuration
shown, I have changed its value.
I meant to say I haven't changed its value
thomas
Alvaro Herrera [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] wrote:
I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.
Clearly it's not.
You couldn't have tested it too much --- even planning a query over so
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
are a dump of Postgres's current memory allocations and could be
useful in
showing if there's a memory leak causing this.
The file is 20M, these are the last lines: (the first line continues
until ff_26000)
Alvaro Herrera [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] wrote:
I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.
Clearly it's not.
You couldn't have tested it too much --- even planning a query over
On 10/25/07, Tom Lane [EMAIL PROTECTED] wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] wrote:
I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.
Clearly it's not.
You couldn't have
[EMAIL PROTECTED] wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
The partitioning facility is designed for partition counts in the tens,
or maybe hundreds at the most.
Maybe, but it works even on 55000 partitions as long as the operations are
done against a partition and not the root
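The distinction being made here: a query that names a single child table touches one relation, while a query on the root must expand into every child. A sketch with hypothetical table names:

```sql
-- Cheap: the planner opens exactly one relation.
SELECT count(*) FROM readings_p54999;

-- Expensive with 55000 children: the planner must expand the root into
-- every partition before it can plan anything.
SELECT count(*) FROM readings;
```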
On 10/25/07, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] wrote:
I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.
Clearly it's not.
You
On Oct 25, 2007, at 9:36 AM, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
are a dump of Postgres's current memory allocations and could be
useful in
showing if there's a memory leak causing this.
The file is 20M, these are the last lines: (the first line
On 10/25/07, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Hi
I am volume testing a db model that consists of partitioned tables. The
db has been running for a week and a half now and has built up to contain
approx 55000 partition tables of 18000 rows each. The root table therefore
contains
[EMAIL PROTECTED] wrote:
It will work on a million partitions and more, provided you do
operations on single partitions.
That's good enough for me, that's exactly what I want. I just used the
select count() on the root to get a feeling of how many rows it was in
total. And then I thought that the
The db worked fine until it reached perhaps 30-40 thousand partitions.
It depends on how you have the partitions set up and how you're
accessing them. Are all of these partitions under the same parent
table? If so, then trying to run a SELECT COUNT(*) against the parent
table is simply
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
It will work on a million partitions and more, provided you do
operations on single partitions.
That's good enough for me, that's exactly what I want.
In that case, why use partitions at all? They are simple independent
tables.
--
Alvaro
On Oct 25, 2007, at 10:36 AM, [EMAIL PROTECTED] wrote:
The db worked fine until it reached perhaps 30-40 thousand
partitions.
It depends on how you have the partitions set up and how you're
accessing them. Are all of these partitions under the same parent
table? If so, then trying to run a
Are you selecting directly from the child table, or from the parent
table with constraint_exclusion turned on?
the problem was when selecting from the parent table, but selecting from
child tables is no problem. As stated in other replies, I only wanted to
know how many rows were in the table
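For completeness, the parent-table route being asked about could look like this; the table and column names are hypothetical:

```sql
-- With constraint_exclusion on, the planner can prune children whose
-- CHECK constraints rule them out of the WHERE clause; but it still has
-- to examine every child's constraints first, which is costly at 55k tables.
SET constraint_exclusion = on;
SELECT count(*) FROM readings WHERE ts_id = 54999;
```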
So, out of curiosity, I asked my Oracle DBA friend if she'd ever heard
of anyone having 60,000 or so partitions in a table, and she looked at
me like I had a third eye in my forehead and said in her sweet voice
Well, that would certainly be an edge case. She sounded like she
was worried
Scott Marlowe wrote:
So, out of curiosity, I asked my Oracle DBA friend if she'd ever heard
of anyone having 60,000 or so partitions in a table, and she looked at
me like I had a third eye in my forehead and said in her sweet voice
Well, that would certainly be an edge case. She sounded
[EMAIL PROTECTED] wrote:
That's good enough for me, that's exactly what I want.
In that case, why use partitions at all? They are simple independent
tables.
For two reasons:
- the data logically belongs together.
- because it's more practical to create tables as children of a parent table
than
Excellent, it sounds like you should be fine then. One thing to
note: if you want to get an idea of how many rows you have in your
partitions, you can run a SUM aggregate on reltuples in pg_class for
all of your partitions. The more recent the last ANALYZE for each
table, the more accurate
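The pg_class trick suggested above could be written like this; the parent table name is an assumption:

```sql
-- Approximate total row count from the planner's statistics instead of
-- scanning 55k partitions; accuracy depends on how recent each ANALYZE is.
SELECT sum(c.reltuples)::bigint AS approx_rows
FROM pg_class c
JOIN pg_inherits i ON i.inhrelid = c.oid
JOIN pg_class p ON p.oid = i.inhparent
WHERE p.relname = 'readings';
```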
[EMAIL PROTECTED] writes:
In that case, why use partitions at all? They are simple independent
tables.
For two reasons:
- the data logically belongs together.
- because it's more practical to create tables as children of a parent table
than as independent tables.
- changes to the table is
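The "child of a parent table" convenience being described is table inheritance; a sketch with illustrative names:

```sql
-- A new partition picks up the parent's column definitions automatically,
-- and a later ALTER TABLE on the parent propagates to all children.
CREATE TABLE readings_p55001 (
    CHECK (ts_id = 55001)
) INHERITS (readings);
```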
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
That's good enough for me, that's exactly what I want.
In that case, why use partitions at all? They are simple independent
tables.
For two reasons:
- the data logically belongs together.
- because it's more practical to create tables
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
That's good enough for me, that's exactly what I want.
In that case, why use partitions at all? They are simple independent
tables.
For two reasons:
- the data logically belongs together.
- because it's more practical to create tables
[EMAIL PROTECTED] writes:
In that case, why use partitions at all? They are simple independent
tables.
For two reasons:
- the data logically belongs together.
- because it's more practical to create tables as children of a parent table
than as independent tables.
- changes to the table
On Oct 25, 2007, at 11:16 AM, Alvaro Herrera wrote:
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
That's good enough for me, that's exactly what I want.
In that case, why use partitions at all? They are simple independent
tables.
For two reasons,
- the data logically belongs
this is my config
checkpoint_segments = 96
effective_cache_size = 128000
shared_buffers = 43
max_fsm_pages = 208000
max_fsm_relations = 1
max_connections = 1000
autovacuum = off # enable autovacuum subprocess?
fsync = on #
[EMAIL PROTECTED] writes:
In other words, you really should have only one table; they aren't
independent. What you need to do is dial down your ideas of how many
partitions are reasonable to have.
Yes, but no. Each partition represents a chunk of information on a
discrete timeline. So there
Alvaro Herrera wrote:
...
You can use CREATE TABLE LIKE, which copies the definition but does not
set the inheritance.
Well, sort of.
Unless I'm using it incorrectly it only copies the basic column
definitions and, as optionally specified, defaults and some of the
constraints.
Primary key
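The behaviour Steve describes could be exercised like this; table names are illustrative, and the INCLUDING options shown are those available in the 8.2-era syntax:

```sql
-- Copies column definitions, defaults and CHECK constraints, but sets up
-- no inheritance link to the source table, and does not copy indexes or
-- the primary key.
CREATE TABLE readings_standalone
    (LIKE readings INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
```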
On 10/25/07, Steve Crawford [EMAIL PROTECTED] wrote:
Alvaro Herrera wrote:
...
You can use CREATE TABLE LIKE, which copies the definition but does not
set the inheritance.
Well, sort of.
Unless I'm using it incorrectly it only copies the basic column
definitions and, as optionally
Tom Lane wrote:
You are making a common beginner error, which is to suppose that N
little tables are better than one big one. They are not.
Well that depends on how you define better. For my purposes, it is better.
What you're
effectively doing is replacing the upper levels of a big
On 10/25/07, Thomas Finneid [EMAIL PROTECTED] wrote:
Tom Lane wrote:
You are making a common beginner error, which is to suppose that N
little tables are better than one big one. They are not.
Well that depends on how you define better. For my purposes, it is better.
What you're
Thomas Finneid [EMAIL PROTECTED] writes:
What you're
effectively doing is replacing the upper levels of a big table's indexes
with lookups in the system catalogs, which in point of fact is a
terrible tradeoff from a performance standpoint.
Only if you assume I use all data in all tables
On Thursday 25 October 2007 13:36:09, [EMAIL PROTECTED] wrote:
Regarding dumps and restore; the system will always be offline during
those operations and it will be so for several days, because a new project
might start at another location in the world, so the travelling there
takes time.