o set autovacuum_max_workers sky-high =).
We have seen situations where, with thousands of tables, autovacuum can't
vacuum all the tables that need it: it simply vacuums some of the most
heavily modified tables and never reaches the others. Only manual vacuum
helps in this situation. Combined with the wraparound issue it can be a
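For reference, a sketch of the per-table tuning this points at, with a hypothetical table name and illustrative values: lower the scale factor on the busiest tables so autovacuum revisits them sooner, and check which tables it is falling behind on.

```sql
-- Hypothetical table name; values are illustrative, not recommendations.
ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- trigger after ~1% of rows change
    autovacuum_vacuum_cost_delay   = 0      -- let this worker run at full speed
);

-- Which tables is autovacuum falling behind on?
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;
```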
All advice very much appreciated, thanks
--
*Rowan Seymour* | +260 964153686
Hello! What do you mean by
"Server is an Amazon RDS instance with default settings and Postgres
9.3.10, with one other database in the instance."
Is PG running with the default config or something else?
Is it still on the defaults from the compiled version? If so, you should
definitely do some tuning on it.
Looking at the plan, I saw a lot of disk reads. That can be linked to the
small shared memory dedicated to PG, exactly as Tom said.
Can you share the pg config, or raise the shared_buffers parameter, for example?
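As a sketch, the usual starting points look like this (values illustrative, for a machine with ~16 GB RAM; ALTER SYSTEM is 9.4+, so on 9.3 these would go in postgresql.conf, and on RDS through a parameter group):

```sql
-- Illustrative values; adjust to the machine's actual RAM and workload.
ALTER SYSTEM SET shared_buffers = '4GB';         -- often ~25% of RAM to start
ALTER SYSTEM SET effective_cache_size = '12GB';  -- planner's estimate of OS cache
ALTER SYSTEM SET work_mem = '32MB';              -- per sort/hash operation
-- shared_buffers requires a restart; the others take effect on reload.
```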
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
-> Seq Scan on public.dim_cliente (cost=0.00..618.90
rows=16890 width=86) (actual time=0.005..13.736 rows=16890 loops=1)
Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,
dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente
-> Hash (cost=18.90..18.90 rows=590 width=59) (actual
time=0.715..0.715 rows=590 loops=1)
Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor
Buckets: 1024 Batches: 1 Memory Usage: 56kB
-> Seq Scan on public.dim_vendedor (cost=0.00..18.90 rows=590
width=59) (actual time=0.024..0.405 rows=590 loops=1)
Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor
Total runtime: 37249.268 ms
(25 rows)
Is there anything I can do to solve this problem? Is it a bug or a
config problem?
Here the link with a dump of the tables
https://drive.google.com/file/d/0Bwupj61i9BtWZ1NiVXltaWc0dnM/view?usp=sharing
I appreciate your help
Hello!
What is your Postgres version?
Do you have correct statistics on these tables?
Please show your execution plans with buffers, i.e. EXPLAIN
(ANALYZE, BUFFERS) ...
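For example, using a table from the plan above (the query itself is hypothetical):

```sql
-- BUFFERS adds shared-buffer hit/read counts to every plan node,
-- which shows how much of the runtime is going to disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT tipocliente, count(*)
FROM dim_cliente
GROUP BY tipocliente;
```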
--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
as I know.
Thanks in Advance.
Hello Javier!
Our tests show that PG 9.4 scales well up to 60 Intel cores, e.g.
pgbench -S with the DB on tmpfs gave us 700,000 tps. Beyond 60 cores, s_lock
dominates CPU usage. 9.5 scales way better.
--
Alex Ignatov
Postgres Professional: http
itions can be checked in the index.
regards, tom lane
Hello Bertrand once again!
What's your status? Did the plan change after deploying the three-field
index?
--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On 27.10.2015 14:10, Bertrand Paquet wrote:
Yes, I have run VACUUM ANALYZE, no effect.
Bertrand
2015-10-27 12:08 GMT+01:00 Alex Ignatov <mailto:a.igna...@postgrespro.ru>>:
On 27.10.2015 12:35, Bertrand Paquet wrote:
Hi all,
We have a slow query. After analy
| | in_progress +
| | | error +
| | | sent_to_proxy
(3 rows)
# select count(*) from external_sync_messages;
 count
--------
 992912
(1 row)
Hello, Bertrand!
Maybe statistics on external_sync_
On Sat, Jul 25, 2015 at 8:50 AM, Craig James wrote:
> The canonical advice here is to avoid more connections than you have CPUs,
> and to use something like pg_pooler to achieve that under heavy load.
>
> We are considering using the Apache mod_perl "fast-CGI" system and perl's
> Apache::DBI modu
the problem.
I'm running PostgreSQL 9.3.5, and have work_mem set to 32MB.
Is there any way I can work around this problem, other than to experiment with
disabling enable_hashjoin for this query/database?
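One way to run that experiment without touching the global config is to scope the setting to a single transaction (a sketch):

```sql
BEGIN;
SET LOCAL enable_hashjoin = off;  -- reverts automatically at COMMIT/ROLLBACK
-- EXPLAIN ANALYZE the problem query here and compare the plans
ROLLBACK;
```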
Alex
Hash Right Join (cost=609908622207072.24..149222936608255392.00
rows=6928136791749
> Thank you, Kevin -- this is helpful.
Thank you David, too.
> But it still leaves questions for me.
Still...
Alex Goncharov wrote:
>>> How do I decide, before starting a COPY data load, whether such a load
>>> protection ("complexity") makes sense ("i
Thank you, Kevin -- this is helpful.
But it still leaves questions for me.
Kevin Grittner wrote:
> Alex Goncharov wrote:
> > The whole thing is aborted then, and the good 99 records are not
> > making it into the target table.
>
> Right. This is one reason people often
ay has not come yet?
There have to be some limits on the space and/or row counts taken up by the
new, uncommitted data while the COPY operation is still in progress.
What are they?
Say, I am COPYing 100 TB of data and the bad records are close to the
end of the feed -- how will this all error out?
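COPY is atomic: all rows commit or none do, and the uncommitted rows occupy table and WAL space until then. The common workaround is to pre-split the feed and load it in batches, so a bad record near the end aborts only its own batch. A psql sketch, with hypothetical file and table names:

```sql
-- Each \copy runs as its own transaction (outside an explicit BEGIN),
-- so a failure rolls back only that chunk, not the whole load.
\copy target_table FROM 'feed_part_0001.csv' WITH (FORMAT csv)
\copy target_table FROM 'feed_part_0002.csv' WITH (FORMAT csv)
-- ...and so on, one command per pre-split chunk
```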
T
index definition
CREATE INDEX views_visit_id_visit_buoy_index ON views USING btree
(visit_id, visit_buoy)
On Tue, Jan 29, 2013 at 1:35 PM, Merlin Moncure wrote:
> On Tue, Jan 29, 2013 at 12:59 PM, Alex Vinnik
> wrote:
> >
> >
> >
> > On Tue, Jan 29, 2013 at
On Tue, Jan 29, 2013 at 2:06 PM, Jeff Janes wrote:
> > Sort Key: visits.id, views.id
> > Sort Method: external sort Disk: 4248kB
>
> What query are you running? The query you originally showed us should
> not be doing this sort in the first place.
>
> Cheers,
>
> Jeff
>
Here is the query
On Tue, Jan 29, 2013 at 11:39 AM, Ben Chobot wrote:
> On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:
>
> random_page_cost=1 might be not what you really want.
>> it would mean that random reads are as fast as sequential reads, which
>> probably is true only for SSD
>&
3733.045 ms
On Mon, Jan 28, 2013 at 8:31 PM, Merlin Moncure wrote:
> On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik wrote:
> > It sure turned out that default settings are not a good fit. Setting
> > random_page_cost to 1.0 made the query run in 2.6 seconds, and I clearly
> see
>
On Mon, Jan 28, 2013 at 6:55 PM, Filip Rembiałkowski wrote:
>
> On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik wrote:
>
>> It sure turned out that default settings are not a good fit.
>>
>
> do you know pgtune?
> it's a good tool for starters, if you want a fast
reated_at < '2012-12-16
00:00:00'::timestamp without time zone))
Total runtime: 33698.000 ms
Basically PG is going through all views again and not using "Index Scan
using views_visit_id_index on views". Looks like setting work_mem confuses the
planner somehow. Any idea what can b
Guys, thanks a lot for your input. It is very valuable for us. We plan to
set up a separate dev server similar to the production one, copy all data
there, and try your suggestions, as we really don't want to experiment on the
production server. I also noticed that IOPS jumps to 100% when running this
query. So it is
ows. So it must be something different. Any reason why it happens and how
to fix it?
Postgres 9.2
Ubuntu 12.04.1 LTS
shared_buffers = 4GB; the rest of the settings are the defaults
Thanks
-Alex
On Wed, May 25, 2011 at 10:59, Reuven M. Lerner wrote:
> Hi, everyone. I'm working on a project that's using PostgreSQL 8.3, that
> requires me to translate strings of octal digits into strings of characters
> -- so '141142143' should become 'abc', although the database column
> containing this d
On Wed, May 25, 2011 at 12:45, Reuven M. Lerner wrote:
> Hi, Alex. You wrote:
>> I think select E'\XXX' is what you are looking for (per the fine
>> manual:
>> http://www.postgresql.org/docs/current/static/datatype-binary.html)
>
> I didn't think that
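For what it's worth, on a modern PostgreSQL (WITH ORDINALITY needs 9.4+) the translation can be done in plain SQL, assuming fixed three-digit octal groups; a sketch, not from the thread:

```sql
-- Split '141142143' into 3-char groups, convert each octal group to a
-- code point by hand (d1*64 + d2*8 + d3), and glue the characters together.
SELECT string_agg(
         chr(substr(m[1], 1, 1)::int * 64 +
             substr(m[1], 2, 1)::int * 8 +
             substr(m[1], 3, 1)::int),
         '' ORDER BY ord) AS decoded
FROM regexp_matches('141142143', '(...)', 'g') WITH ORDINALITY AS t(m, ord);
-- 141 -> 97 ('a'), 142 -> 98 ('b'), 143 -> 99 ('c'), so this yields 'abc'.
```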
on of using libpq, though. ODBC takes care
of a lot of difficult details for you, and libpq's higher performance
may turn out to be a loss for you, in your specific situation.
-- Alex -- alex-goncha...@comcast.net --
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresq
.
-- Alex -- alex-goncha...@comcast.net --
On Thu, Nov 11, 2010 at 06:41, Marc Mamin wrote:
> There are a few places in our data flow where we have to wait for index
> creation before being able to distribute the process on multiple threads
> again.
Would CREATE INDEX CONCURRENTLY help here?
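That is, assuming the builds happen while other sessions keep writing (index and table names hypothetical):

```sql
-- Builds the index without taking a write-blocking lock on the table.
-- Caveats: cannot run inside a transaction block, takes longer than a
-- plain CREATE INDEX, and leaves an INVALID index behind if it fails.
CREATE INDEX CONCURRENTLY idx_measurements_loaded_at
    ON measurements (loaded_at);
```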
On Wed, Oct 27, 2010 at 21:08, Divakar Singh wrote:
> So another question pops up: What method in PostgreSQL does the stored proc
> use when I issue multiple insert (for loop for 100 thousand records) in the
> stored proc?
It uses prepared statements (unless you are using execute). There is
also
On Wed, Oct 27, 2010 at 08:00, Divakar Singh wrote:
> I am attaching my code below.
> Is any optimization possible in this?
> Do prepared statements help in cutting down the insert time to half for this
> kind of inserts?
In half? Not for me. Optimization possible? Sure, using the code you
paste
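For reference, the SQL-level equivalent of what a prepared insert does, with hypothetical names: parsing and planning happen once, then each EXECUTE reuses the plan.

```sql
PREPARE ins (int, text) AS
    INSERT INTO items (id, name) VALUES ($1, $2);

EXECUTE ins (1, 'first');
EXECUTE ins (2, 'second');
-- ...repeat for each row; wrapping the EXECUTEs in one transaction
-- avoids a commit (and an fsync) per row.

DEALLOCATE ins;
```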
On Wed, Oct 13, 2010 at 02:38, Neil Whelchel wrote:
> And the cache helps...
> So, we are right back to within 10ms of where we started after INSERTing the
> data, but it took a VACUUM FULL to accomplish this (by making the table fit in
> RAM).
> This is a big problem on a production machine as t
On Wed, Oct 13, 2010 at 07:49, Tom Lane wrote:
> Neil Whelchel writes:
> I concur with Mark's question about whether your UPDATE pushed the table
> size across the limit of what would fit in RAM.
Yeah, you said you have ~2GB of ram, just counting the bytes and the
number of rows (not including
Thu, Aug 13, 2009 at 1:29 PM, Alvaro
Herrera wrote:
> Alex wrote:
>> The writer process seems to be using inordinate amounts of memory:
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
>> COMMAND
>> 11088 postgres 13 -2 3217m 2.9g 2.9g S 0 3
The writer process seems to be using inordinate amounts of memory:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
11088 postgres  13  -2 3217m 2.9g 2.9g S    0 38.7  0:10.46 postgres:
writer process
20190 postgres  13  -2 3219m  71m  68m S    0  0.9  0:52.48 postgres:
cribq
I am using Postgres with Rails. Each Rails application "thread" is
actually a separate process (mongrel) with its own connection.
Normally, the db connection processes (?) look something like this in
top:
15772 postgres  15   0  229m  13m  12m S    0  0.8  0:00.09 postgres:
db db [local]
idle
Thanks. That's very helpful. I'll take your suggestions and see if
things improve.
>
> How is the index sl_city_etc defined?
>
Index "public.sl_city_etc"
    Column    |            Type
--------------+-----------------------------
 city         | text
 listing_type | text
 post_time    | timestamp without time zone
 bedrooms     | integer
 region       | text
 geo_lat      |
esults are satisfactory or at least comparable,
| like-
|
| select distinct so_no, serial_no from observation_all;
| in postgres it takes - 1404.238 ms
| in gp it takes - 1217.283 ms
No surprise here: the data is picked by multiple segment hosts and
never sorted on the master.
-- Alex -- a
data distributed and served by separate segment hosts? By how
many? Is the network connectivity not a factor? What happens with
the times if you don't sort your result set?
-- Alex -- alex-goncha...@comcast.net --
Below is a query that takes 16 seconds on the first run. I am having
generally poor performance for queries in uncached areas of the data
and often mediocre (500ms-2s+) performance generally, although
sometimes it's very fast. All the queries are pretty similar and use
the indexes this way.
I'v
Forgot to add:
postg...@ec2-75-101-128-4:~$ psql --version
psql (PostgreSQL) 8.3.5
er. They have a host of other
features; click on the Services tab on their site to find out more.
Alex
On Mon, Sep 8, 2008 at 8:19 PM, Rainer Mager <[EMAIL PROTECTED]> wrote:
> 1. Move some of the databases to the new drive. If this is a good idea, is
> there a way to do this without a dump/restore? I'd prefer to move the folder
> if possible since that would be much faster.
What like tablespaces?
performance was terrible. I
never had the time or resources to reconfigure the SAN to test a more
suitable spindle setup since the SAN was in heavy production use for
file archiving.
Alex
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
could use the SATA space in the mean time.
Would that speed things up, and if so, where in the drive mappings
should that partition go?
Thank you for your help. I'm mostly interested in if I can speed
these things up from 5-6 days to < 1 day, otherwise I need to look at
optimizin
ll at once.
But many large queries are getting Out of Memory errors.
Alex
On Nov 29, 2007, at 2:15 PM, Richard Huxton wrote:
Alex Hochberger wrote:
Does anyone have any white papers or basic guides for a large RAM
server?
We are consolidating two databases to enable better data-mining
that
of writes.
Our simultaneous queries are small, and currently run
acceptably. It's the big imports, data-mining pulls, and system
manipulation where we routinely wait days on the query that we are
looking to speed up.
Thanks,
Alex
On 7/11/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 7/11/07, A. Kretschmer <[EMAIL PROTECTED]> wrote:
> am Wed, dem 11.07.2007, um 14:52:01 -0400 mailte Alex Deucher folgendes:
> > >Okay, i got a really different plan, but i expected _NOT_ a
> > >performanc
On 7/11/07, A. Kretschmer <[EMAIL PROTECTED]> wrote:
am Wed, dem 11.07.2007, um 14:52:01 -0400 mailte Alex Deucher folgendes:
> >Okay, i got a really different plan, but i expected _NOT_ a
> >performance-boost like this. I expected the opposite.
> >
> >
> &
confused
about this...
your results are getting cached. try two queries in a row with the same plan.
Alex
On 4/9/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 4/9/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
> On 4/6/07, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> > On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:
> > > One more anomaly between 7.4 and
On 4/9/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 4/6/07, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:
> > One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded
> > onto 8.2, both have loc
On 4/6/07, Michael Fuhr <[EMAIL PROTECTED]> wrote:
On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:
> One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded
> onto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans
> for the first query whi
8 and SQL_ASCII; same numbers and plans. Any ideas how to
improve this?
Thanks,
Alex
postgres 7.4
EXPLAIN ANALYZE select pnum, event_pid, code_name, code_description,
code_mcam, event_date, effective_date, ref_country,
ref_country_legal_code, corresponding_pnum, withdrawal_date,
payment_date, extension
On 4/5/07, Xiaoning Ding <[EMAIL PROTECTED]> wrote:
Alex Deucher wrote:
> On 4/5/07, Erik Jones <[EMAIL PROTECTED]> wrote:
>>
>> On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:
>>
>> Erik Jones wrote:
>> On Apr 5, 2007, at 12:09 PM, Xiaoning
rsory search of the archives and it seems
that others have looked at this before so you'll probably want to start
there if your up to it.
Linux used to have (still does?) a RAW interface which might also be
useful. I think the original code was contributed by oracle so they
could support d
power and
> cooling for them.
> 14 HD's in one case are going to have a serious transient load on
> system start up and (especially with those SAS HDs) can generate a
> great deal of heat.
I went w/ Fujitsu. Fortunately these servers are hosted in a very
well ventilated
Ok, well, I dropped the DB and reloaded it and now all seems to be
fine and performing well. I'm not sure what was going on before.
Thanks for everyone's help!
Alex
On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 4/3/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
On 4/3/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Alex Deucher" <[EMAIL PROTECTED]> writes:
> Turning off bitmapscan ends up doing a sequential scan. Turning off
> both bitmapscan and seqscan results in a bitmap heap scan. It doesn't
> seem to want to use the
On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
> On 4/3/07, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> > On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
> >
(('{AB6698130,AB7076908,AB649
On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 4/3/07, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
>
(('{AB6698130,AB7076908,AB6499382,AB643,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,
On 4/3/07, Merlin Moncure <[EMAIL PROTECTED]> wrote:
On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
(('{AB6698130,AB7076908,AB6499382,AB643,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB
On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 4/3/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Alex Deucher" <[EMAIL PROTECTED]> writes:
> > and here are the query plans referenced in my last email (apologies if
> > you get these twice, th
On 4/3/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Alex Deucher" <[EMAIL PROTECTED]> writes:
> and here are the query plans referenced in my last email (apologies if
> you get these twice, they didn't seem to go through the first time,
> perhaps due to size?). I
and here are the query plans referenced in my last email (apologies if
you get these twice, they didn't seem to go through the first time,
perhaps due to size?). I cut out the longer ones.
Thanks,
Alex
postgres 7.4
EXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,
c11
y ideas what might be going wrong? I suppose the
next thing to try is 7.4 on the new servers, but I'd really like to
stick to the 8.x series if possible.
I've included some sample query plans below.
Thanks,
Alex
On 3/6/07, Ron <[EMAIL PROTECTED]> wrote:
At 10:25 AM 3/6/2007, Alex Deucher wrote:
>On 3/5/07, Guido Neitzer <[EMAIL PROTECTED]> wrote:
>>On 05.03.2007, at 19:56, Alex Deucher wrote:
>>
>> > Yes, I started setting that up this afternoon. I'm goin
On 3/5/07, Guido Neitzer <[EMAIL PROTECTED]> wrote:
On 05.03.2007, at 19:56, Alex Deucher wrote:
> Yes, I started setting that up this afternoon. I'm going to test that
> tomorrow and post the results.
Good - that may or may not give some insight in the actual
bottleneck. Y
On 3/5/07, Guido Neitzer <[EMAIL PROTECTED]> wrote:
On 01.03.2007, at 13:40, Alex Deucher wrote:
> I read several places that the SAN might be to blame, but
> testing with bonnie and dd indicates that the SAN is actually almost
> twice as fast as the scsi discs in the old su
res config file, but
performance remains the same. Any ideas?
Thanks,
Alex
On 3/2/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Fri, 2 Mar 2007, Guido Neitzer wrote:
> On 02.03.2007, at 14:20, Alex Deucher wrote:
>
>> Ah OK. I see what you are saying; thank you for clarifying. Yes,
>> the SAN is configured for maximum capacity; it has large
On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
At 02:43 PM 3/2/2007, Alex Deucher wrote:
>On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
>>
>>...and I still think looking closely at the actual physical layout of
>>the tables in the SAN is likely to be worth it.
>
>H
On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
At 11:03 AM 3/2/2007, Alex Deucher wrote:
>On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
>
>>May I suggest that it is possible that your schema, queries, etc were
>>all optimized for pg 7.x running on the old HW?
>>(
On 3/2/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Alex Deucher" <[EMAIL PROTECTED]> writes:
> Anyway, new numbers after the analyze.
> Unfortunately, they are improved, but still not great:
Why are the index names different between the old and new servers?
Is t
On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
At 10:16 AM 3/2/2007, Alex Deucher wrote:
>On 3/2/07, Florian Weimer <[EMAIL PROTECTED]> wrote:
>>* Alex Deucher:
>>
>> > I have noticed a strange performance regression and I'm at a loss as
>> > to what&
On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
> here are some examples. Analyze is still running on the new db, I'll
> post results when that is done. Mostly what our apps do is prepared
> row selects from different tables:
>
On 3/2/07, Florian Weimer <[EMAIL PROTECTED]> wrote:
* Alex Deucher:
> I have noticed a strange performance regression and I'm at a loss as
> to what's happening. We have a fairly large database (~16 GB).
Sorry for asking, but is this a typo? Do you mean 16 *TB* instea
On 3/1/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
Alex Deucher wrote:
> On 3/1/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
>> \
>> >> Is the SAN being shared between the database servers and other
>> >> servers? Maybe
>> >&g
On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
> On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
>> On Thu, 1 Mar 2007, Alex Deucher wrote:
>>
>> >> >> Postgresql might be choosing a bad plan becaus
ding occassional poor performance on
>> the new
>> one.
>>
>
> The direct attached SCSI discs on the old database server were getting
> 45MBps, not the SAN. The SAN got 62/145Mbps, which is not as bad.
How many spindles you got in that SAN?
105 IIRC.
Alex
On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
>> >> Postgresql might be choosing a bad plan because your
>> effective_cache_size
>> >> is
>> >> way off (it's the default now right?). Also, what was
On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Thu, 1 Mar 2007, Joshua D. Drake wrote:
> Alex Deucher wrote:
>> Hello,
>>
>> I have noticed a strange performance regression and I'm at a loss as
>> to what's happening. We have a fairly large datab
On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
> On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
>> On Thu, 1 Mar 2007, Alex Deucher wrote:
>>
>> >> Vacuum? Analayze? default_statistics_target? How many sha
On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
>> Vacuum? Analayze? default_statistics_target? How many shared_buffers?
>> effective_cache_size? work_mem?
>>
>
> I'm running the autovacuum process on the 8.1 server.
On 3/1/07, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Thu, 1 Mar 2007, Joshua D. Drake wrote:
> Alex Deucher wrote:
>> Hello,
>>
>> I have noticed a strange performance regression and I'm at a loss as
>> to what's happening. We have a fairly large datab
On 3/1/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
Alex Deucher wrote:
> Hello,
>
> I have noticed a strange performance regression and I'm at a loss as
> to what's happening. We have a fairly large database (~16 GB). The
> original postgres 7.4 was running
res config file, but
performance remains the same. Any ideas?
Thanks,
Alex
n such a config. You
can also look towards Bizgres which allegedly eliminates some of these
problems, and is cheaper than most BigRDBMS products.
Alex.
On 12/28/06, Guy Rouillier <[EMAIL PROTECTED]> wrote:
I don't want to violate any license agreement by discussing performance,
so I'
speed.
Alex.
On 12/5/06, Alexandru Coseru <[EMAIL PROTECTED]> wrote:
Hello..
Thanks for the advice.
Actually , i'm waiting for the clovertown to show up on the market...
Regards
Alex
- Original Message -
From: "Sven Geisler" <[EMAIL PROTECTED]>
To: &quo
The problem I see with software raid is the issue of a battery backed unit:
If the computer loses power, then the 'cache' which is held in system
memory, goes away, and fubars your RAID.
Alex
On 12/5/06, Michael Stone <[EMAIL PROTECTED]> wrote:
On Tue, Dec 05, 2006 at 01:21:
suck worse, that doesn't bring us to a _good_ card).
Alex.
On 12/4/06, Greg Smith <[EMAIL PROTECTED]> wrote:
On Mon, 4 Dec 2006, Alex Turner wrote:
> People recommend LSI MegaRAID controllers on here regularly, but I have
> found that they do not work that well. I have bonnie++
http://en.wikipedia.org/wiki/RAID_controller
Alex
On 12/4/06, Michael Stone <[EMAIL PROTECTED]> wrote:
On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:
>This discussion I think is important, as I think it would be useful for
this
>list to have a list of RAID cards th
eca, 3Ware/AMCC, LSI).
Thanks,
Alex
On 12/4/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Mon, 2006-12-04 at 01:17, Alex Turner wrote:
> People recommend LSI MegaRAID controllers on here regularly, but I
> have found that they do not work that well. I have bonnie++ numbers
>
for the output: http://www.infoconinc.com/test/bonnie++.html (the first line
is a six drive RAID 10 on a 3ware 9500S, the next three are all RAID 1s on
LSI MegaRAID controllers, verified by lspci).
Alex.
On 12/4/06, Greg Smith <[EMAIL PROTECTED]> wrote:
On Thu, 30 Nov 2006, Carlos H. R
Does anyone have any performance experience with the Dell Perc 5i
controllers in RAID 10/RAID 5?
Thanks,
Alex
On 23 Oct 2006, at 22:59, Jim C. Nasby wrote:
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw 100x improvement in speed, so I suspect
that
if you want performance perl isn't the wa
On 3 Oct 2006, at 16:04, Merlin Moncure wrote:
On 10/3/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
Some very helpful people had asked that I post the troublesome
code that was
generated by my import program.
I installed a SQL log feature in my import program. I have
posted samples of the
8
(1 row)

trend=# select count(*) from property;
 count
-------
 59913
(1 row)

trend=#
The database was just re-indexed, no changes beyond this insert were made in
that time, and result_entry has recently been vacuumed. Any insight would be
greatly appreciated.
Alex
On 9/22/06, Alex Turner <
ahh good point
Thanks

On 9/22/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Alex Turner" <[EMAIL PROTECTED]> writes:
> How come the query statistics showed that 229066 blocks were read given
> that all the blocks in all the tables put together only total 122968?
You forgot