On Fri, May 25, 2012 at 4:58 AM, Greg Spiegelberg <gspiegelb...@gmail.com> wrote:

> On Sun, May 13, 2012 at 10:01 AM, Craig James <cja...@emolecules.com> wrote:
>
>>
>> On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <tiv...@gmail.com> wrote:
>>
>>>
>>> Sequences, AFAIK, are counted as relations. A large list of relations
>>> may slow down various system utilities like vacuuming (or may not,
>>> depending on queries and indexes on pg_class).
>>>
>>
>> Not "may slow down."  Change that to "will slow down and possibly
>> corrupt" your system.
>>
>> In my experience (PG 8.4.x), the system can handle somewhere in the
>> neighborhood of 100,000 relations pretty well.  Somewhere over 1,000,000
>> relations, the system becomes unusable.  It's not that it stops working --
>> day-to-day operations such as querying your tables and running your
>> applications continue to work.  But system operations that have to scan
>> for table information seem to freeze (maybe they run out of memory, or
>> maybe they hit an O(N^2) operation and simply never complete).
>>
>
> Glad I found this thread.
>
> Is this 1M-relation mark for the whole database cluster, or just for a
> single database within the cluster?
>

I don't know.  When I discovered this, our system had only a few dozen
databases, and I never conducted any experiments.  We had to write our own
version of pg_dump to get the data out of the damaged system, and then
reload from scratch.  And it's not a "hard" number.  Even at a million
relations, things work ... they just bog down dramatically.  By the time I
got to 5 million relations (a rogue script was creating 50,000 tables per
day and not cleaning up), the system was effectively unusable.
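
For anyone who wants to check where their own system stands, here's a
rough sketch against the system catalog (pg_class is per-database, so
you'd have to run it in each database; nothing here beyond the standard
catalog columns):

    -- Count relations by kind: 'r' = table, 'i' = index, 'S' = sequence,
    -- 'v' = view, 't' = TOAST table.
    SELECT relkind, count(*)
      FROM pg_class
     GROUP BY relkind
     ORDER BY count(*) DESC;

Watching that total over time would presumably catch a runaway script
like ours long before the catalogs bog down.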

Craig



> Thanks,
> -Greg
>
>
