bm\mbn wrote:
> Hi Everyone
>
> The machine is an IBM x345 with a ServeRAID 6i (128MB cache) and 6 SCSI 15k
> disks.
>
> 2 disks are in RAID1 and hold the OS, SWAP & pg_xlog
> 4 disks are in RAID10 and hold the Cluster itself.
>
> the DB will have two major tables, one with 10 million rows and one with
> 100
[EMAIL PROTECTED] wrote:
> I have a database of hundreds of millions of web links (between sites)
> in Postgres. For each link, we record the url, the referer, and the
> most recent date the link exists. I'm having some serious performance
> issues when it comes to writing new data into the datab
Alvaro Herrera wrote:
> On Wed, Sep 14, 2005 at 11:28:43AM -0700, Jeffrey W. Baker wrote:
>
>>On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:
>>
>>> I'm getting a new server for our database, and I have a quick question
>>>about RAID controllers with a battery backed cache. I understand
Kevin wrote:
> Arnau wrote:
>
>> Hi all,
>>
>> >
>> > COPY FROM a file with all the ID's to delete, into a temporary
>> table, and do a joined delete to your main table (thus, only one query).
>>
>>
>> I already did this, but I have no idea how to do this join;
>> could you give me
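(A minimal sketch of the joined-delete idiom being asked about — the table and file names here are hypothetical; on PostgreSQL 8.1 and later the DELETE can also be written as an explicit join with DELETE ... USING:)

CREATE TEMP TABLE ids_to_delete (id integer);
COPY ids_to_delete FROM '/tmp/ids_to_delete.txt';
-- portable form, works on any version:
DELETE FROM main_table WHERE id IN (SELECT id FROM ids_to_delete);
-- 8.1 and later, as an explicit join:
-- DELETE FROM main_table m USING ids_to_delete d WHERE m.id = d.id;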
Guido Neitzer wrote:
> Hi.
>
> I have an interesting problem with the JDBC drivers. When I use a
> select like this:
>
> "SELECT t0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz,
> t0.vorname FROM public.dga_dienstleister t0 WHERE t0.plz like
> ?::varchar(256) ESCAPE '|'" withBindings: 1:"5
Martin Nickel wrote:
> Chris,
> Would you say that 3 pages is a good maximum for a Postgres install?
> We're running 8.0.3 on 64-bit SUSE on a dual Opteron box with 4G and have
> shared_buffers set at 12. I've moved it up and down (it was 16
> when I got here) without any measurable pe
gokulnathbabu manoharan wrote:
> Hi all,
>
> I'd like to know the caching policies of PostgreSQL.
> What parameter in postgresql.conf affects the
> cache size used by PostgreSQL? As far as I have
> searched my knowledge of the parameters are
In general, you don't. The OS handles caching bas
Jeremiah Jahn wrote:
> On Sun, 2005-08-21 at 16:13 -0400, Ron wrote:
>
>>At 10:54 AM 8/21/2005, Jeremiah Jahn wrote:
>>
...
>>So you have 2 controllers each with 2 external slots? But you are
>>currently only using 1 controller and only one external slot on that
>>controller?
>
>
> Sorry, no. I
Jeremiah Jahn wrote:
On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:
Ron wrote:
At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:
Well, since you can get a read of the RAID at 150MB/s, that means that
it is actual I/O speed. It may not be cached in RAM. Perhaps you could
try the same
Ron wrote:
At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:
On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:
> Jeremiah Jahn wrote:
> > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
> >
>
> > it's cached alright. I'm getting a read rate of
[wrapped dstat-style sample, reflowed: cpu 21 usr / 4 sys / 75 idl; mem 1334M used, 10M buff, 4596M cach, 25M free; swap 2048M]
On Fri, 2005-08-19 at 16:07 -0500, John A Meinel wrote:
Jeremiah Jahn wrote:
Rebuild in progress with just ext3 on the raid array...will see if this
helps the acc
Jeremiah Jahn wrote:
> Rebuild in progress with just ext3 on the raid array...will see if this
> helps the access times. If it doesn't I'll mess with the stripe size. I
> have REINDEXED, CLUSTERED, tablespaced and cached with 'cat table/index
>
>>/dev/null' none of this seems to have helped, or eve
Jeremiah Jahn wrote:
> On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
...
>>
>>Well, in general, 3ms for a single lookup seems really long. Maybe your
>>index is bloated by not vacuuming often enough. Do you tend to get a
Ron wrote:
> At 01:18 PM 8/19/2005, John A Meinel wrote:
>
>> Jeremiah Jahn wrote:
>> > Sorry about the formatting.
>> >
>> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>> >
>> >>Jeremiah Jahn wrote:
>> >>
Jeremiah Jahn wrote:
> Sorry about the formatting.
>
> On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
>>
...
>>The expensive parts are the 4915 lookups into the litigant_details (each
>>one takes approx 4ms for a total of ~20s).
>>And then you do it again
Diego de Lima wrote:
> Hi list,
>
> I'm using Pg 8.0.3 on Linux FC2.
>
> This question may have a very simple answer (I hope), but I'm having
> lots of trouble solving it, and I couldn't find any other post about it
> or anything in the pg docs.
>
> I have some very complex select statements
Qingqing Zhou wrote:
> "Alvaro Herrera" <[EMAIL PROTECTED]> writes
>
>>Interesting; do they use an overwriting storage manager like Oracle, or
>>a non-overwriting one like Postgres?
>>
>
>
> They call this MVCC "RLV (row level versioning)". I think they use rollback
> segments like Oracle (a.k.a "ver
Jeremiah Jahn wrote:
> I just put together a system with 6GB of ram on a 14 disk raid 10 array.
> When I run my usual big painful queries, I get very little to no
> memory usage. My production box (raid 5, 4GB ram) hovers at 3.9GB used
> most of the time. The new devel box sits at around 250MB.
>
Manfred Koizar wrote:
> On Tue, 19 Jul 2005 11:51:51 +0100, Richard Huxton
> wrote:
>
>>You could get away with one query if you converted them to left-joins:
>>INSERT INTO ...
>>SELECT * FROM upload LEFT JOIN ... WHERE f3 IS NULL
>>UNION
>>SELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL
>
>
>
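(Spelled out, the left-join suggestion looks something like this — missing_rows, ref3/ref4, and the join columns are hypothetical stand-ins for the elided parts:)

INSERT INTO missing_rows
SELECT u.* FROM upload u LEFT JOIN ref3 r3 ON u.f3 = r3.f3 WHERE r3.f3 IS NULL
UNION
SELECT u.* FROM upload u LEFT JOIN ref4 r4 ON u.f4 = r4.f4 WHERE r4.f4 IS NULL;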
Mahesh Shinde wrote:
> Hi
...
> To connect To the PostgreSQL I am using **PostgreSQL Win32 ODBC and
> OLEDB client drivers 1.0.0.2**
>
...
> Since in the database I have one major table, the Debtor table, which
> is the master table, having around 55 lakh (5.5 million) records. I have set debtorId as
>
Tom Lane wrote:
> John A Meinel <[EMAIL PROTECTED]> writes:
>
>>Alvaro Herrera wrote:
>>
>>>I've been asked this a couple of times and I don't know the answer: what
>>>happens if you give XLog a single drive (unmirrored single spindle), and
Anjan Dave wrote:
> Yes, that's true, though, I am a bit confused because the Clariion array
> document I am reading talks about how the write cache can eliminate the
> RAID5 Write Penalty for sequential and large IOs...resulting in better
> sequential write performance than RAID10.
>
> anjan
>
To
Anjan Dave wrote:
> Yes, that's true, though, I am a bit confused because the Clariion array
> document I am reading talks about how the write cache can eliminate the
> RAID5 Write Penalty for sequential and large IOs...resulting in better
> sequential write performance than RAID10.
>
> anjan
>
We
Alvaro Herrera wrote:
> On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:
>
>
>>However, you are absolutely correct in that it's *relative* advice, not
>>absolute advice. If, for example, you're using a $100,000 EMC SAN as your
>>storage you'll probably be better off giving it everythi
Ulrich Wisser wrote:
> Hello,
>
> one of our services is click counting for online advertising. We do
> this by importing Apache log files every five minutes. This results in a
> lot of insert and delete statements. At the same time our customers
> shall be able to do online reporting.
What are
Alvaro Herrera wrote:
> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
>
...
>
> This is a known problem, solved in 8.1. A workaround for previous
> releases is to defer FK checks until commit:
So I don't know exactly what the fix was, but I just teste
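(The statements themselves are cut off above; the standard mechanism is to declare the constraint DEFERRABLE and then defer it inside the transaction — a sketch with hypothetical table names, not the exact SQL from the thread:)

ALTER TABLE child ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent (id)
  DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
-- inserts here; the FK is now checked once at COMMIT instead of per statement
COMMIT;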
Jeffrey Tenny wrote:
> I have a largely table-append-only application where most transactions
> are read-intensive and many are read-only. The transactions may span
> many tables, and in some cases might need to pull 70 MB of data out of a
> couple of the larger tables.
>
>
> In 7.3, I don't seem
Alvaro Herrera wrote:
> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
>
>>I'm having an odd case where my system is locking such that if I insert
>>into a table during a transaction, if I start a new connection and
>>transaction, it blocks while trying
I'm having an odd case where my system is locking such that if I insert
into a table during a transaction, if I start a new connection and
transaction, it blocks while trying to do a similar insert until the
first transaction is committed or rolled back.
The schema is rather complex (currently 157
Dan Harris wrote:
> I have a web page for my customers that shows them counts of records and
> some min/max date ranges in each table of a database, as this is how we
> bill them for service. They can log in and check the counts at any
> time. I'd like for the counts to be as fresh as possible
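(One standard way to get cheap, always-fresh counts without COUNT(*) scans is a trigger-maintained tally table — a sketch with hypothetical names; note the counter row becomes an update hot spot under heavy concurrent writes:)

CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);

CREATE FUNCTION bump_row_count() RETURNS trigger AS '
BEGIN
  IF TG_OP = ''INSERT'' THEN
    UPDATE row_counts SET n = n + 1 WHERE table_name = TG_RELNAME;
  ELSE
    UPDATE row_counts SET n = n - 1 WHERE table_name = TG_RELNAME;
  END IF;
  RETURN NULL;  -- return value is ignored for AFTER triggers
END' LANGUAGE plpgsql;

CREATE TRIGGER billing_count AFTER INSERT OR DELETE ON billed_table
  FOR EACH ROW EXECUTE PROCEDURE bump_row_count();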
Dan Harris wrote:
On Aug 9, 2005, at 3:51 PM, John A Meinel wrote:
Dan Harris wrote:
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID1 is on top
Dan Harris wrote:
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single point of failure even if
RAID1 is on top of RAID0.
Well, you tell me if I stated incorrectly. There are two ra
Patrick Hatcher wrote:
> [Reposted from General section with updated information]
> Pg 7.4.5
>
> I'm running an update statement on about 12 million records using the
> following query:
>
> Update table_A
> set F1 = b.new_data
> from table_B b
> where b.keyfield = table_A.keyfield
>
> both keyfield
Matthew Schumacher wrote:
> John A Meinel wrote:
>
>
>>Surely this isn't what you have. You have *no* loop here, and you have
>>stuff like:
>> AND
>>(bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);
>>
>>I'm guessing this isn'
Matthew Schumacher wrote:
> Matthew Schumacher wrote:
>
>>Tom Lane wrote:
>>
>>
>>
>>>I don't really see why you think that this path is going to lead to
>>>better performance than where you were before. Manipulation of the
>>>temp table is never going to be free, and IN (sub-select) is always
>>>
Tom Lane wrote:
> Matthew Schumacher <[EMAIL PROTECTED]> writes:
>
>> for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
>> LOOP
>>   _token := intokenary[i];
>>   INSERT INTO bayes_token_tmp VALUES (_token);
>> END LOOP;
>
>
>> UPDATE
>>   bayes_token
>> SET
>>   spam_count
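(An aside: on PostgreSQL 8.4 and later — not the versions discussed in this thread — unnest() collapses that whole loop into one set-based statement:)

INSERT INTO bayes_token_tmp SELECT unnest(intokenary);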
Matthew Schumacher wrote:
> Okay,
>
> Here is the status of the SA updates and a question:
>
> Michael got SA changed to pass an array of tokens to the proc so right
> there we gained a ton of performance due to connections and transactions
> being grouped into one per email instead of one per toke
Lane Van Ingen wrote:
> I have in my possession some performance tuning documents authored by Bruce
> Momjian, Josh Berkus, and others. They give good information on utilities to
> use (like ipcs, sar, vmstat, etc) to evaluate disk, memory, etc. performance
> on Unix-based systems.
>
> Problem is,
Karim Nassar wrote:
> I ran into a situation today maintaining someone else's code where the
> sum time running 2 queries seems to be faster than 1. The original code
> was split into two queries. I thought about joining them, but
> considering the intelligence of my predecessor, I wanted to test i
Matthew Nuzum wrote:
On 7/26/05, Dan Harris <[EMAIL PROTECTED]> wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is alread
Alex Turner wrote:
Also seems pretty silly to put it on a regular SATA connection, when
all that can manage is 150MB/sec. If you made it connect directly
to 66MHz/64-bit PCI then it could actually _use_ the speed of the RAM, not
to mention PCI-X.
Alex Turner
NetEconomist
Well, the whole point
Luke Lonergan wrote:
Yup - interesting and very niche product - it seems like its only obvious
application is for the Postgresql WAL problem :-)
Well, you could do it for any journaled system (XFS, JFS, ext3, reiserfs).
But yes, it seems specifically designed for a battery backed journal.
Th
Dan Harris wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is already there to prevent multiple
entries. Currently
I saw a review of a relatively inexpensive RAM disk over at
anandtech.com, the Gigabyte i-RAM
http://www.anandtech.com/storage/showdoc.aspx?i=2480
Basically, it is a PCI card, which takes standard DDR RAM, and has a
SATA port on it, so that to the system, it looks like a normal SATA drive.
Th
Tomeh, Husam wrote:
>
> Nothing was running except the job. The server did not look stressed out
> looking at top and vmstat. We have seen slower query performance when
performing load tests, so I ran the re-index on all application indexes
> and then issue a full vacuum. I ran the same thing on
Chris Isaacson wrote:
> I need COPY via libpqxx to insert millions of rows into two tables. One
> table has roughly half as many rows and requires half the storage. In
> production, the largest table will grow by ~30M rows/day. To test the
> COPY performance I split my transactions into 10,000 r
Dirk Lutzebäck wrote:
> Richard Huxton wrote:
>
>> Dirk Lutzebäck wrote:
>>
>>> Hi,
>>>
>>> I do not understand the following explain output (pgsql 8.0.3):
>>>
>>> explain analyze
>>> select b.e from b, d
>>> where b.r=516081780 and b.c=513652057 and b.e=d.e;
>>>
>>>
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the
load on the server is only about 21%.
I upped the sort_mem to 8192 (kB), and shared_buffers and
effective_cache_size to 6553
Dan Harris wrote:
>
> On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
>
>>
>> My biggest question is why the planner thinks the Nested Loop would be
>> so expensive.
>> Have you tuned any of the parameters? It seems like something is out of
>> whack.
Dan Harris wrote:
>
> On Jul 14, 2005, at 7:15 PM, John A Meinel wrote:
>
>>
>>
>> Is the distribution of your rows uneven? Meaning do you have more rows
>> with a later id than an earlier one?
>>
>
> There are definitely some id's that will have m
Alison Winters wrote:
> Hi,
>
>
>>>Our application requires a number of processes to select and update rows
>>>from a very small (<10 rows) Postgres table on a regular and frequent
>>>basis. These processes often run for weeks at a time, but over the
>>>space of a few days we find that updates sta
Tom Lane wrote:
> John A Meinel <[EMAIL PROTECTED]> writes:
>
>>What I don't understand is that the planner is actually estimating that
>>joining against the new table is going to *increase* the number of
>>returned rows.
>
>
> It evidently think
Dan Harris wrote:
>
> On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
>
>>
>>
>> You might try giving it a little bit more freedom with:
>>
>> EXPLAIN ANALYZE
>> SELECT recordtext FROM eventactivity, k_r, k_b
>> WHERE eventactivity.incidentid
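(The WHERE clause is truncated by the archive; judging from the subquery form quoted further down in this digest, the flattened query presumably continued roughly as follows — a reconstruction, not the verbatim original:)

EXPLAIN ANALYZE
SELECT recordtext FROM eventactivity, k_r, k_b
WHERE eventactivity.incidentid = k_r.incidentid
  AND eventactivity.incidentid = k_b.incidentid
  AND k_r.id = 94
  AND k_b.id = 107;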
Dan Harris wrote:
>
> On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
...
Did you try doing this to see how good the planner's selectivity
estimates are?
>> Well, postgres is estimating around 500 rows each, is that way off? Try
>> just doing:
>> EXPLAIN ANALYZE SE
Dan Harris wrote:
> I'm trying to improve the speed of this query:
>
> explain select recordtext from eventactivity inner join ( select
> incidentid from k_r where id = 94 ) a using ( incidentid ) inner join (
> select incidentid from k_b where id = 107 ) b using ( incidentid );
You might try giv
Dan Harris wrote:
Gurus,
> even the explain never
finishes when I try that.
Just a short bit. If "EXPLAIN SELECT" doesn't return, there seems to be
a very serious problem, because I think EXPLAIN doesn't actually run the
query; it just runs the query planner. And the query planner shouldn'
jobapply wrote:
> The 2 queries are almost the same, but ORDER BY x||t is FASTER than ORDER BY x..
>
> How can that be possible?
>
> Btw: x and x||t sort in the same order
>
> phoeniks=> explain analyze SELECT * FROM test WHERE i<20 ORDER BY x || t;
> Q
Chris Travers wrote:
> John A Meinel wrote:
>
>> jobapply wrote:
>>
>>
>>> The 2 queries are almost the same, but ORDER BY x||t is FASTER than ORDER
>>> BY x..
>>>
>>> How can that be possible?
>>>
>>> Btw: x and x||t are
Dario Pudlo wrote:
> (first of all, sorry for my English)
> Hi.
>- Does "left join" restrict the order in which the planner must join
> tables? I've read about join, but I'm not sure about left join...
>- If so: Can I avoid this behavior? I mean, make the planner resolve the
> query, using
Niccolo Rigacci wrote:
>Hi to all,
>
>I have a performance problem with the following query:
>
>BEGIN;
>DECLARE mycursor BINARY CURSOR FOR
>  SELECT
>    toponimo,
>    wpt
>  FROM wpt_comuni_view
>  WHERE (
>    wpt &&
>    setSRID('BOX3D(4.83 36, 20.16 47.5)'::BOX3
Emil Briggs wrote:
I just mentioned random_page_cost, but you should also tune
effective_cache_size, since that is effectively most of your RAM. It
depends what else is going on in the system, but setting it as high as
say 12-14GB is probably reasonable if it is a dedicated machine. With
random_
Emil Briggs wrote:
I'm working with an application where the database is entirely resident in RAM
(the server is a quad opteron with 16GBytes of memory). It's a web
application and handles a high volume of queries. The planner seems to be
generating poor plans for some of our queries which I ca
Sam Mason wrote:
Hi,
I've just been referred here after a conversation on IRC and everybody
seemed to think I've stumbled upon some strangeness.
The planner (in PG version 8.0.2) is choosing what it thinks is a more
expensive plan. I've got a table of animals (about 3M rows) and their
movements
Merlin Moncure wrote:
On Tue, Jun 28, 2005 at 12:02:09 -0400,
Merlin Moncure <[EMAIL PROTECTED]> wrote:
Confirmed. Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago with no response
and
had given up on it. I think your solution
Merlin Moncure wrote:
John Meinel wrote:
See my follow up post, which enables an index scan. On my system with
90k rows, it takes no apparent time.
(0.000ms)
John
=:->
Confirmed. Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago wi
Michael Stone wrote:
Is it possible to tweak the size of a block that postgres tries to read
when doing a sequential scan? It looks like it reads in fairly small
blocks, and I'd expect a fairly significant boost in i/o performance
when doing a large (multi-gig) sequential scan if larger blocks w
John A Meinel wrote:
Merlin Moncure wrote:
I need a fast way (sql only preferred) to solve the following problem:
I need the smallest integer that is greater than zero that is not in the
column of a table. In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a
Merlin Moncure wrote:
Not so bad. Try something like this:
SELECT min(id+1) as id_new FROM table
WHERE (id+1) NOT IN (SELECT id FROM table);
Now, this probably requires a sequential scan, but I'm not sure how
you can get around that.
Maybe if you got trickier and did some ordering and
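(A follow-up in this thread mentions a variant that enables an index scan; one standard shape is an anti-join against the shifted ids — a sketch with a hypothetical table name, and, like the NOT IN form above, it assumes id 1 itself is present:)

SELECT min(t1.id + 1) AS id_new
FROM tbl t1
LEFT JOIN tbl t2 ON t2.id = t1.id + 1
WHERE t2.id IS NULL;
-- with an index on id, both sides can be walked via the index
-- instead of a full sequential scan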
Merlin Moncure wrote:
I need a fast way (sql only preferred) to solve the following problem:
I need the smallest integer that is greater than zero that is not in the
column of a table. In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.
I'
Tobias Brox wrote:
[EMAIL PROTECTED] - Tue at 08:33:58PM +0200]
I use FreeBSD 4.11 with PostGreSQL 7.3.8.
(...)
database=> explain select date_trunc('hour', time),count(*) as total from
test where p1=53 and time > now() - interval '24 hours' group by
date_trunc order by date_trunc ;
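(If it is not already in place, a two-column index matching the WHERE clause is the usual first step for this query shape — the index name is hypothetical:)

CREATE INDEX test_p1_time_idx ON test (p1, time);
-- lets the planner answer "p1 = 53 AND time > ..." with one index range scan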
Yves Vindevogel wrote:
Hi again all,
My queries are now optimised. They all use the indexes like they should.
However, there's still a slight problem when I issue the "offset" clause.
We have a table that contains 600,000 records.
We display them 25 at a time on the web page.
So, when I want the last p
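(The usual workaround for slow deep OFFSETs is keyset pagination: remember the sort key of the last row displayed and filter on it, rather than making the server count past all the skipped rows — a sketch with hypothetical names:)

-- first page
SELECT * FROM records ORDER BY id LIMIT 25;
-- next page: the client passes back the last id it was shown
SELECT * FROM records WHERE id > :last_seen_id ORDER BY id LIMIT 25;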
Puddle wrote:
Hello, I'm a Sun Solaris sys admin for a start-up
company. I've got the UNIX background, but now I'm
having to learn PostgreSQL to support it on our
servers :)
Server Background:
Solaris 10 x86
PostgreSQL 8.0.3
Dell PowerEdge 2650 w/4gb ram.
This is running JBoss/Apache as well
Yves Vindevogel wrote:
I only add records, and most of the values are "random"
Except the columns for dates,
I doubt that you would need to recreate indexes. That really only needs
to be done in pathological cases, most of which have been fixed in the
latest postgres.
If you are only ins
Yves Vindevogel wrote:
And, after let's say a week, would that index still be optimal, or
would it be a good idea to drop it over the weekend and recreate it?
It depends a little bit on the postgres version you are using. If you
are only ever adding to the table, and you are not updating it or
d
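(If a periodic rebuild did prove worthwhile, it does not require a drop-and-recreate; REINDEX does it in one statement — the index and table names here are hypothetical:)

REINDEX INDEX tblprintjobs_loginuser_idx;
-- or rebuild every index on the table at once:
REINDEX TABLE tblprintjobs;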
Yves Vindevogel wrote:
Hi,
I have another question regarding indexes.
I have a table with a lot of indexes on it. Those are needed to
perform my searches.
Once a day, a bunch of records is inserted in my table.
Say my table has 1,000,000 records and I add 10,000 records (1% new)
What would
Jone C wrote:
On second thought... Does a VACUUM FULL help? If so, you might want to
increase your FSM settings.
Thank you for the reply, and sorry for the delay; I was on holiday.
I tried that; it had no effect. I benchmarked twice before, performed
VACUUM FULL on the table in question post-inserts, the
Yves Vindevogel wrote:
rvponp=# explain analyze select * from tblPrintjobs order by
loginuser, desceventdate, desceventtime ;
QUERY PLAN
Sort (cost=345699.06..347256.
Amit V Shah wrote:
After I sent out this email, I found this article from google
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
Looks like we can control when the views refresh... I am still kind of
confused, and would appreciate help!
The create/drop table doe
Yves Vindevogel wrote:
Hi,
I have a very simple query on a big table. When I issue a "limit"
and/or "offset" clause, the query is not using the index.
Can anyone explain this to me?
You didn't give enough information. What does your index look like that
you are expecting it to use?
Generally
Veikko Mäkinen wrote:
Hey,
How does Postgres (8.0.x) buffer changes to a database within a
transaction? I need to insert/update more than a thousand rows (maybe
even more than 1 rows, ~100 bytes/row) in a table but the changes
must not be visible to other users/transactions before every
Alex Stapleton wrote:
Oh, we are running 7.4.2 btw. And our random_page_cost = 1
Which is only correct if your entire db fits into memory. Also, try
updating to a later 7.4 version if at all possible.
On 13 Jun 2005, at 14:02, Alex Stapleton wrote:
We have two indexes like so
l1_historica
Kevin Grittner wrote:
It sure would be nice if the optimizer would consider that it had the
leeway to add any column which was restricted to a single value to any
point in the ORDER BY clause. Without that, the application programmer
has to know what indexes are on the table, rather than being
Clark Slater wrote:
> hmm, I'm baffled. I simplified the query
> and it is still taking forever...
>
>
> test
> -
> id| integer
> partnumber| character varying(32)
> productlistid | integer
> typeid| integer
>
>
> Indexes:
> "test_p
Clark Slater wrote:
> Hi-
>
> Would someone please enlighten me as
> to why I'm not seeing a faster execution
> time on the simple scenario below?
>
> there are 412,485 rows in the table and the
> query matches on 132,528 rows, taking
> almost a minute to execute. vacuum
> analyze was just run.
Neil Conway wrote:
> Tom Arthurs wrote:
>
>> Yes, shared buffers in postgres are not used for caching
>
>
> Shared buffers in Postgres _are_ used for caching, they just form a
> secondary cache on top of the kernel's IO cache. Postgres does IO
> through the filesystem, which is then cached by th
Marty Scholes wrote:
>> Has anyone ran Postgres with software RAID or LVM on a production box?
>> What have been your experience?
>
> Yes, we have run for a couple years Pg with software LVM (mirroring)
> against two hardware RAID5 arrays. We host a production Sun box that
> runs 24/7.
>
> My ex
Michael Stone wrote:
> On Mon, Jun 06, 2005 at 10:08:23AM -0500, John A Meinel wrote:
>
>> I don't know if you can do it, but it would be nice to see this be 1
>> RAID1 for OS, 1 RAID10 for pg_xlog,
>
>
> That's probably overkill--it's a relative
Casey Allen Shobe wrote:
> On Wednesday 01 June 2005 20:19, Casey Allen Shobe wrote:
>
...
> Long-term, whenever we hit the I/O limit again, it looks like we really don't
> have much of a solution except to throw more hardware (mainly lots of disks
> in RAID0's) at the problem. :( Fortunately,
Tom Lane wrote:
...
Now that I think about it, you were (if I understood your layout
correctly) proposing to put the xlog on your system's root disk.
This is probably a bad idea for performance, because there will always
be other traffic to the root disk. What you are really trying to
accomplis
Martin Fandel wrote:
Hi @ all,
I'm trying to tune my PostgreSQL DB, but I don't know if the values
are set right.
I use the following environment for the postgres-db:
# Hardware
cpu: 2x P4 3Ghz
ram: 1024MB DDR 266Mhz
partitions:
/dev/sda3 23G 9,6G 13G 44%
Jocelyn Turcotte wrote:
Hi all
I don't know if this is normal, but if so I would like to know why and
how I could do it another way other than using unions.
The only thing that *might* work is if you used an index on both keys.
So if you did:
CREATE INDEX rt_edge_start_end_node ON rt_edge(s
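(The statement is cut off above; from the index name it presumably read roughly as follows — the column names are an assumption:)

CREATE INDEX rt_edge_start_end_node ON rt_edge (start_node, end_node);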
SpaceBallOne wrote:
What version of postgres?
8.0.2 ... but I think I've seen this before on 7.3 ...
There are a few possibilities. If you are having a lot of updates to the
table, you can get index bloat. And vacuum doesn't fix indexes. You have
to "REINDEX" to do that. Though REINDEX has
SpaceBallOne wrote:
Wondering if someone could explain a peculiarity for me:
We have a database which takes 1000ms to perform a certain query on.
If I pg_dump that database then create a new database (e.g. "tempdb")
and upload the dump file (thus making a duplicate) then the same query
only tak
Michael Stone wrote:
On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:
Pretty much. There has been discussion about allowing index-only
access to "frozen" tables, i.e. archive partitions. But it all sort
of hinges on someone implementing it and testing
Is there any way to
David Parker wrote:
> I just got a question from one our QA guys who is configuring a RAID 10
> disk that is destined to hold a postgresql database. The disk
> configuration procedure is asking him if he wants to optimize for
> sequential or random access. My first thought is that random is what we
Greg Stark wrote:
Sebastian Hennebrueder <[EMAIL PROTECTED]> writes:
User-Agent: Mozilla Thunderbird 1.0 (Windows/20041206)
...
"Nested Loop (cost=1349.13..1435.29 rows=1 width=2541) (actual
time=1640.000..3687.000 rows=62 loops=1)"
" Join Filter: ("inner".fid = "outer".faufgaben_id)"
" -> Ind
Sebastian Hennebrueder wrote:
Hello,
I could not find any recommendations for the level of SET STATISTICS and
what a specific level actually means.
What is the difference between 1, 50 and 100? What is recommended for a
table or column?
Default I believe is 10. The higher the number, the more s
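(For reference, the statistics target is set per column and takes effect at the next ANALYZE — table and column names here are hypothetical:)

ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE some_table;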
Alex Turner wrote:
Ok - my common sense alarm is going off here...
There are only 6.446 billion people worldwide. 100 Billion page views
would require every person in the world to view 18 pages of yahoo
every day. Not very likely.
http://www.internetworldstats.com/stats.htm
suggests that there ar
Marc Mamin wrote:
Hello,
I'm not an expert, but I'll give some suggestions.
I'd like to tune Postgres for large data import (using Copy from).
I believe that COPY FROM a file is supposed to be faster than COPY FROM
STDIN, but the file must be accessible to the backend process. If you can
do it, you should think a