On Jan 23, 2008 1:57 PM, Guy Rouillier <[EMAIL PROTECTED]> wrote:
> Scott Marlowe wrote:
> > I assume you're talking about solid state drives? They have their
> > uses, but for most use cases, having plenty of RAM in your server will
> > be a better way to spend your money. For certain high throughput,
> > relatively small databases (i.e. transactional work) the SSD can be quite …
On Wed, 23 Jan 2008, Guy Rouillier wrote:
Flash has a limited number of writes before it becomes unreliable. On
good quality consumer grade, that's about 300,000 writes, while on
industrial grade it's about 10 times that.
The main advance that's made SSD practical given the write cycle
limit…
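For a sense of scale, a back-of-envelope calculation (numbers purely illustrative: a hypothetical 32 GB drive rated at 300,000 cycles with ideal wear leveling, under 50 GB of writes per day):

```sql
-- Total writable volume = capacity * rated cycles; divide by daily
-- write volume and by 365 to get years of endurance.
SELECT (32.0 * 300000 / 50) / 365 AS years_of_endurance;
-- ~526 years under these (idealized) assumptions; in practice, hot
-- small regions and poor wear leveling are what burn cells out early.
```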
On Wed, 23 Jan 2008, Tory M Blue wrote:
I have hundreds of thousands of updates and inserts a day. But what I'm
seeing is my server appears to "deallocate" memory (for lack of a
better term) and performance goes to heck: slow response, and a
sub-second query takes anywhere from 6-40 seconds to complete…
Guy Rouillier wrote:
Scott Marlowe wrote:
I assume you're talking about solid state drives? […]
On Jan 23, 2008, at 2:57 PM, Guy Rouillier wrote:
Scott Marlowe wrote:
I assume you're talking about solid state drives? […]
Guy Rouillier wrote:
Scott Marlowe wrote:
I assume you're talking about solid state drives? […]
Vivek Khera <[EMAIL PROTECTED]> writes:
> On Jan 23, 2008, at 1:29 PM, Thomas Lozza wrote:
>> We have an installation of Postgres 8.1.2 (32bit on Solaris 9) with
>> ...
> it sounds to me like your autovacuum is not running frequently enough.
Yeah. The default autovac settings in 8.1 are extremely …
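Those 8.1 defaults can be tightened in postgresql.conf. A sketch, not a recommendation — the values below are illustrative, so check the 8.1 documentation against your workload:

```
# postgresql.conf -- note autovacuum is OFF by default in 8.1
autovacuum = on
stats_start_collector = on             # required by autovacuum in 8.1
stats_row_level = on                   # required by autovacuum in 8.1
autovacuum_naptime = 60                # seconds between runs (8.1 default)
autovacuum_vacuum_threshold = 500      # 8.1 default is 1000
autovacuum_vacuum_scale_factor = 0.2   # 8.1 default is 0.4; lower = more eager
```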
Scott Marlowe wrote:
I assume you're talking about solid state drives? […]
On Jan 23, 2008, at 1:29 PM, Thomas Lozza wrote:
We have an installation of Postgres 8.1.2 (32bit on Solaris 9) with a DB
size of about 250GB on disk. The DB is subject to a fair amount of
inserts, deletes and updates per day.
Running VACUUM VERBOSE tells me that I should allocate around 20M pages
to FSM (max_fsm_pages)…
Josh, what about the rest of your system? What operating system? Your
hardware setup: drives? RAIDs? What indices do you have set up for
these queries? There are other reasons that could cause bad query
performance.
On Jan 22, 2008 11:11 PM, Joshua Fielek <[EMAIL PROTECTED]> wrote:
>
> Hey folks…
On Jan 23, 2008 8:01 AM, mike long <[EMAIL PROTECTED]> wrote:
> Scott,
>
> What are your thoughts on using one of those big RAM appliances for
> storing a Postgres database?
I assume you're talking about solid state drives? They have their
uses, but for most use cases, having plenty of RAM in your server will
be a better way to spend your money. …
Hi,
We have an installation of Postgres 8.1.2 (32bit on Solaris 9) with a DB
size of about 250GB on disk. The DB is subject to a fair amount of
inserts, deletes and updates per day.
Running VACUUM VERBOSE tells me that I should allocate around 20M pages
to FSM (max_fsm_pages)! This looks like a rea…
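For scale: in 8.1 each max_fsm_pages slot costs roughly six bytes of shared memory, so the 20M pages VACUUM is asking for works out to something on the order of 120 MB. A rough postgresql.conf sketch (verify the per-slot cost against the 8.1 docs):

```
# postgresql.conf (8.1) -- sized to VACUUM VERBOSE's suggestion
max_fsm_pages = 20000000      # ~6 bytes/slot => roughly 120 MB of shared memory
max_fsm_relations = 1000      # 8.1 default; raise if you have more relations
```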
I'm not sure what is going on, but I'm looking for some advice.
I'm running multiple Postgres servers in a slon relationship. I have
hundreds of thousands of updates and inserts a day. But what I'm seeing
is my server appears to "deallocate" memory (for lack of a better
term) and performance…
"Guillaume Smet" <[EMAIL PROTECTED]> writes:
> It doesn't look like an EXPLAIN ANALYZE output. Can you provide a real
> one (you should have a second set of numbers with EXPLAIN ANALYZE)?
Also, could we see the pg_stats rows for the columns being joined?
regards, tom lane
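A sketch of the sort of query that pulls those rows; the table and column names below are taken from the thread and may not match the real schema:

```sql
-- Per-column planner statistics for the joined/sorted columns
SELECT tablename, attname, null_frac, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename IN ('t1', 't2')   -- placeholder table names
  AND attname = 'time_stamp';     -- the column being sorted on
```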
In response to Joshua Fielek <[EMAIL PROTECTED]>:
>
> Hey folks --
>
> For starters, I am fairly new to database tuning and I'm still learning
> the ropes. I understand the concepts but I'm still learning the real
> world impact of some of the configuration options for postgres.
>
> We have an …
Dmitry,
On Jan 23, 2008 2:48 PM, Dmitry Potapov <[EMAIL PROTECTED]> wrote:
> EXPLAIN ANALYZE SELECT * FROM t1t2_view ORDER BY time_stamp ASC LIMIT 100:
>
> Limit (cost=13403340.40..13403340.40 rows=1 width=152)
It doesn't look like an EXPLAIN ANALYZE output. Can you provide a real
one (you should have a second set of numbers with EXPLAIN ANALYZE)?
I've got two huge tables with a one-to-many relationship on a complex
key. There's also a view, which JOINs the tables, and the planner
chooses a suboptimal plan on SELECTs from this view.
The db schema is declared as (from now on, I skip the insignificant
columns for the sake of simplicity):
CREATE TABLE …
Hi Tom,
On May 9, 2007 6:40 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> To return to your original comment: if you're trying to model a
> situation with a fully cached database, I think it's sensible
> to set random_page_cost = seq_page_cost = 0.1 or so.
Is it still valid for 8.3 or is there any re…
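Tom's suggestion above translates to something like the following; the database name is a placeholder, and 0.1 is his suggested value for a fully cached database, not a general default:

```sql
-- Per-session: tell the planner pages are (almost) free to fetch,
-- so index scans stop being penalized on a fully cached database.
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;

-- Or persist it for one database ('mydb' is hypothetical):
ALTER DATABASE mydb SET random_page_cost = 0.1;
```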
On Jan 23, 2008 3:02 AM, Guillaume Smet <[EMAIL PROTECTED]> wrote:
> I'll post my results tomorrow morning.
It works perfectly well:
cityvox_prod=# CREATE OR REPLACE FUNCTION
getTypesLieuFromTheme(codeTheme text) returns text[] AS
$f$
SELECT ARRAY(SELECT codetylieu::text FROM rubtylieu WHERE codet…
"Luiz K. Matsumura" writes:
> If we run the commands "vacuum full analyze"
If you're using the cost-based vacuum delay, don't forget that it
will probably take a long time; you may want to deactivate it locally
before running VACUUM FULL, in case the locked table is mandatory
for your running application.
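That advice amounts to something like the sketch below; the table name is a placeholder:

```sql
-- Disable cost-based vacuum delay for this session so the blocking
-- VACUUM FULL finishes as fast as the I/O subsystem allows.
SET vacuum_cost_delay = 0;
VACUUM FULL ANALYZE some_table;   -- 'some_table' is hypothetical
```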
20 matches