On 07/20/2011 02:04 PM, Steve Crawford wrote:
On 07/20/2011 12:58 PM, A J wrote:
I understand that 'cluster' performs the role of defrag ...
As with everything, the answer is "it depends". For a "typical" workload where
the rows updated by a single query are one or a few rows, the automatic
vacuum ...
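To get a rough feel for whether a table has accumulated dead rows or drifted out
of index order, the statistics views can help; a minimal sketch, with placeholder
table and column names:

    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'my_table';             -- dead-tuple count and last autovacuum run

    SELECT tablename, attname, correlation  -- correlation near +/-1: rows still roughly in index order
    FROM pg_stats
    WHERE tablename = 'my_table' AND attname = 'my_indexed_column';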
I understand that 'cluster' performs the role of defrag (along with rewriting
in index order) in Postgres.
How frequently does one have to run cluster? Any thumb-rules or experience?
How do I find if my table is fragmented enough to need a cluster?
We are yet to use Postgres in production, ...
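For completeness, a one-off CLUSTER run looks roughly like this (table and index
names are placeholders); note that CLUSTER takes an exclusive lock and rewrites
the whole table, so it is usually scheduled off-hours:

    CLUSTER my_table USING my_table_pkey;   -- rewrite the table in the order of the given index
    ANALYZE my_table;                       -- refresh planner statistics after the rewrite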
PostgreSQL has to accumulate all the rows of a query before returning the
result set to the client. It is probably spooling those several 400-450 MB
docs, plus all the other attributes, to a temporary file prior to sending the
results back. If you have just three documents stored in the database ...
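One way to avoid materializing the whole result set at once is to read it through
a cursor; a minimal sketch, using the doc_table name from this thread and an
arbitrary fetch size:

    BEGIN;
    DECLARE doc_cur CURSOR FOR SELECT * FROM doc_table;
    FETCH 1 FROM doc_cur;    -- pull one row (one document) at a time
    -- ... repeat FETCH until no rows are returned ...
    CLOSE doc_cur;
    COMMIT;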
He could use something like this to get a rough size breakdown:

    SELECT count(*),
           CASE WHEN length(doc_data) < 50000000  THEN '<=50 MB'
                WHEN length(doc_data) < 100000000 THEN '<=100 MB'
                ELSE '>100 MB' END
    FROM doc_table
    GROUP BY 2;

and then, based on the above, run finer queries to find the large rows.
However, I don't ...
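A finer follow-up along those lines could list the offending rows directly (the
id column and the 100 MB cutoff are assumptions):

    SELECT id, pg_size_pretty(length(doc_data)::bigint) AS doc_size
    FROM doc_table
    WHERE length(doc_data) > 100000000    -- only rows above ~100 MB
    ORDER BY length(doc_data) DESC;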
jtke...@verizon.net, 05.07.2011 18:44:
A while ago some developers inserted several records with a document (stored
in doc_Data) that was around 400-450 MB each. Now when you do a select *
(all) from this table you get a hang and the system becomes unresponsive.
What application/program is ...
You could take a backup of this table first. Then search for those documents
with UltraEdit and remove them.
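If the goal is simply to get the oversized rows out of the table, a plain-SQL
alternative could be to copy them aside and then delete them (the backup table
name and the 100 MB threshold are only examples):

    CREATE TABLE doc_table_oversized AS
      SELECT * FROM doc_table WHERE length(doc_data) > 100000000;  -- keep a copy of the big rows

    DELETE FROM doc_table WHERE length(doc_data) > 100000000;      -- then remove them from the live table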
2011/7/5, jtke...@verizon.net :
> I am having a hang condition every time I try to retrieve a large
> record's (bytea) data from a table.
> The OS is a 5.11 snv_134 i86pc i386 i86pc Solaris with 4GB memory
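If a single document is too large to pull in one piece, it can also be read in
slices with substring() on the bytea column; a sketch, assuming a hypothetical
id key column:

    SELECT substring(doc_data FROM 1 FOR 10485760)  -- first 10 MB slice of the document
    FROM doc_table
    WHERE id = 42;                                  -- id is a hypothetical primary key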
2011/7/20 Tom Lane :
> Ken Caruso writes:
>> On Sun, Jul 17, 2011 at 3:04 AM, Cédric Villemain <cedric.villemain.deb...@gmail.com> wrote:
Block number 12125253 is bigger than any block we can find in
base/2651908/652397108.1
>
>>> Should the table size be in the 100GB range or 2-3 GB
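For reference, with the default 8 kB block size a block number of 12125253
corresponds to an offset of roughly 12125253 * 8192 bytes, i.e. about 99 GB, so
such a block could only exist in a table in the 100 GB range, never in a 2-3 GB
one. A quick way to check the actual on-disk size (table name is a placeholder):

    SELECT pg_relation_filepath('your_table'),
           pg_size_pretty(pg_relation_size('your_table'));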