On Thu, 21 Jun 2007, Scott Marlowe wrote:
Generally I agree; however, how about a first switch for beginner /
intermediate / advanced?
You're describing a perfectly reasonable approach for a second-generation
tool in this area. I think it would be very helpful for the user community
to get
On Thu, 21 Jun 2007, Scott Marlowe wrote:
And if they've gone to the trouble of implementing RAID-6, they're
usually at least halfway decent controllers.
Unfortunately the existence of the RAID-6-capable Adaptec 2820SA proves
this isn't always the case.
--
* Greg Smith [EMAIL PROTECTED] htt
On Jun 21, 2007, at 7:30 PM, Toru SHIMOGAKI wrote:
Tom Lane wrote:
Dan Gorman <[EMAIL PROTECTED]> writes:
All of our databases are on NetApp storage and I have been looking
at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume
replica) for backing up our databases. The pro
Toru SHIMOGAKI wrote:
> Tom Lane wrote:
> - When we take a PITR base backup with a hardware-level snapshot operation
> (not filesystem level), which a lot of storage vendors provide, the backup data
> can be corrupted as Dan said. During recovery we can't even read it,
> especially if meta-da
Tom Lane wrote:
> Dan Gorman <[EMAIL PROTECTED]> writes:
>>All of our databases are on NetApp storage and I have been looking
>> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume
>> replica) for backing up our databases. The problem is because there
>> is no write-suspe
Rainer Bauer <[EMAIL PROTECTED]> writes:
> Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and
> examining the log I found what I suspected: the performance is directly
> related to the ping time to the server since fetching one tuple requires a
> round trip to the server.
Hm, bu
Dan Gorman <[EMAIL PROTECTED]> writes:
>All of our databases are on NetApp storage and I have been looking
> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume
> replica) for backing up our databases. The problem is because there
> is no write-suspend or even a 'hot backu
Hi -
I'm looking at ways to do clean PITR backups. Currently we're
pg_dumping our data, which in some cases is about 100GB compressed.
Needless to say it's slow and IO intensive on both the host and the
backup server.
All of our databases are on NetApp storage and I have been looking
a
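(As a hedged sketch, not part of the thread: the usual way to make such a
snapshot recoverable, assuming WAL archiving via archive_command is already
configured and using a made-up label, is to bracket it with the online-backup
functions.)

  SELECT pg_start_backup('netapp_snap');  -- forces a checkpoint and marks the backup start
  -- take the NetApp snapshot of the data volume here
  SELECT pg_stop_backup();                -- marks the backup end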
Scott Marlowe wrote:
Andrew Sullivan wrote:
On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
I checked the disk picture - this is a RAID disk array with 6 drives,
with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
hard to get more/faster disk than that.
W
Rainer Bauer wrote:
Hello Dimitri,
but did you try to execute your query directly from 'psql' ?...
munnin=>\timing
munnin=>select * from "tblItem";
(50 rows)
Time: 391,000 ms
Why I'm asking: it seems to me your case is probably just network-latency
dependent, and what I noticed dur
Andrew Sullivan wrote:
On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
I checked the disk picture - this is a RAID disk array with 6 drives,
with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
hard to get more/faster disk than that.
What kind of RAID? It
Let's stay optimistic - at least now you know the main source of your problem! :))
Let's see now with CURSOR...
Firstly try this:
munnin=>\timing
munnin=>\set FETCH_COUNT 1;
munnin=>select * from "tblItem";
what's the time you see here? (I think your application is working in
this manner)
Now, c
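(An illustrative psql session, assuming the same database and table as the
quoted test; the value 100 is arbitrary. With FETCH_COUNT set, psql runs the
query through a cursor, so FETCH_COUNT 1 means one server round trip per row,
roughly what the ODBC driver is doing, while a larger value batches the fetches.)

  munnin=>\timing
  munnin=>\set FETCH_COUNT 1
  munnin=>select * from "tblItem";   -- one FETCH, i.e. one round trip, per row
  munnin=>\set FETCH_COUNT 100
  munnin=>select * from "tblItem";   -- rows now come back 100 per round trip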
Francisco Reyes wrote:
Scott Marlowe writes:
and a bit more resilient to drive failure, RAID-5 can give you a lot
of storage and very good read performance, so it works well for
reporting /
New controllers now also have RAID 6, which from the few reports I
have seen seems to have a good co
Hello Dimitri,
>but did you try to execute your query directly from 'psql' ?...
munnin=>\timing
munnin=>select * from "tblItem";
(50 rows)
Time: 391,000 ms
>Why I'm asking: it seems to me your case is probably just network-latency
>dependent, and what I noticed during last benchmarks with PostgreS
Hi Rainer,
but did you try to execute your query directly from 'psql' ?...
Why I'm asking: it seems to me your case is probably just network-latency
dependent, and what I noticed during the last benchmarks with PostgreSQL is
that the SELECT query becomes very traffic-hungry if you are using a CURSOR.
Program 'psq
I wrote:
>Hello Harald,
>
>>I do not have a solution, but I can confirm the problem :)
>
>At least that rules out any misconfiguration issues :-(
I did a quick test with my application and enabled the ODBC logging.
Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and
examining t
Chris Browne <[EMAIL PROTECTED]> writes:
> I once ran into the situation where Slony-I generated a query that
> made the parser blow out (some sort of memory problem / running out of
> stack space somewhere thing); it was just short of 640K long, and so
> we figured that evidently it was wrong to c
Thanks everyone. It appears that we had hacked the 502.pgsql script for our
8.1 build to disable the daily vacuum. I was not aware of this when
building and upgrading to 8.2.
So it looks like for the past two weeks, that 36 hour db-wide vacuum has
been running every 24 hours. Good for it for b
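(A hedged aside, not from the thread: on 8.2 an overlapping db-wide VACUUM can
be spotted from pg_stat_activity, assuming stats_command_string is enabled.)

  SELECT procpid, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query LIKE 'VACUUM%';   -- shows any VACUUM still running and since when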
Thanks everyone for your responses. I don't think it's realistic to change
our application infrastructure to use COPY from a stream at this point.
It's good to know that multi-row-VALUES is good up into the thousands of
rows (depending on various things, of course). That's a good enough answer
f
In response to "Steven Flatt" <[EMAIL PROTECTED]>:
> On 6/21/07, Francisco Reyes <[EMAIL PROTECTED]> wrote:
> >
> > Are you on FreeBSD by any chance?
> >
> > I think the FreeBSD port by default installs a script that does a daily
> > vacuum.
>
>
> Yes, FreeBSD. Do you know what script that is?
[EMAIL PROTECTED] (Tom Lane) writes:
> PS: for the record, there is a hard limit at 1GB of query text, owing
> to restrictions built into palloc. But I think you'd hit other
> memory limits or performance bottlenecks before that one.
It would be much funnier to set a hard limit of 640K of query t
On Thu, 21 Jun 2007, Steven Flatt wrote:
On 6/21/07, Francisco Reyes <[EMAIL PROTECTED]> wrote:
Are you on FreeBSD by any chance?
I think the FreeBSD port by default installs a script that does a daily
vacuum.
Yes, FreeBSD. Do you know what script that is? And it does a db-wide
VACUUM AN
On 6/21/07, Francisco Reyes <[EMAIL PROTECTED]> wrote:
Are you on FreeBSD by any chance?
I think the FreeBSD port by default installs a script that does a daily
vacuum.
Yes, FreeBSD. Do you know what script that is? And it does a db-wide
VACUUM ANALYZE every day?! That is certainly not ne
Steven Flatt writes:
Can someone explain what is going on here? I can't quite figure it out
based on the docs.
Are you on FreeBSD by any chance?
I think the FreeBSD port by default installs a script that does a daily
vacuum. If using another OS, perhaps you want to see if you used some sor
Steven Flatt <[EMAIL PROTECTED]> schrieb:
> For example, on a toy table with two columns, I noticed about a 20% increase
> when bulking together 1000 tuples in one INSERT statement as opposed to doing
> 1000 individual INSERTS. Would this be the same for 1? 10? Does it
> depend on the wid
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> It looks like Postgres does not enforce a limit on the length of an SQL
> string. Great. However is there some point at which a query string becomes
> ridiculously too long and affects performance?
Yes, but it'll depend a whole lot on context; I'd sug
Steven Flatt wrote:
It looks like Postgres does not enforce a limit on the length of an SQL
string. Great. However is there some point at which a query string
becomes
ridiculously too long and affects performance? Here's my particular case:
consider an INSERT statement where you're using the
Steven Flatt wrote:
The bad thing, which I don't totally understand from reading the docs, is
that another db-wide vacuum kicked off exactly 24 hours after the first
db-wide vacuum kicked off, before the first one had finished. (Note that
these vacuums seem to go through the tables alphabeticall
I can't seem to find a definitive answer to this.
It looks like Postgres does not enforce a limit on the length of an SQL
string. Great. However is there some point at which a query string becomes
ridiculously too long and affects performance? Here's my particular case:
consider an INSERT stat
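(A hedged sketch of the comparison being asked about; the table and values are
made up.)

  -- one statement per row: per-statement parse/plan/network overhead each time
  INSERT INTO toy (id, val) VALUES (1, 'a');
  INSERT INTO toy (id, val) VALUES (2, 'b');
  -- versus one multi-row statement (supported since 8.2):
  INSERT INTO toy (id, val) VALUES
      (1, 'a'),
      (2, 'b'),
      (3, 'c');   -- ... extend to a few thousand rows per statement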
Hello Harald,
>I do not have a solution, but I can confirm the problem :)
At least that rules out any misconfiguration issues :-(
>I did not find a solution so far; and for bulk data transfers I now
>programmed a workaround.
But that is surely based on some component installed on the server, is
Hello Tom,
>I seem to recall that we've seen similar reports before, always
>involving Windows :-(. Check whether you have any nonstandard
>components hooking into the network stack on that machine.
I just repeated the test by booting into "Safe Mode with Network Support", but
the results are th
On Thu, Jun 21, 2007 at 07:53:54PM +0300, Sabin Coanda wrote:
> Reading different references, I understand there is no need to vacuum a
> table where just insert actions perform.
That's false. First, you must vacuum at least once every 2 billion
transactions. Second, if a table is INSERTed to,
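(As an illustration, not from the message: the distance to XID wraparound can
be checked per database with a catalog query, which is one way to see why even
insert-only tables must be vacuumed eventually.)

  SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
  ORDER BY xid_age DESC;   -- must be vacuumed before this nears ~2 billion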
On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
> I checked the disk picture - this is a RAID disk array with 6 drives,
> with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
> hard to get more/faster disk than that.
What kind of RAID? It's _easy_ to get faster disk
We recently upgraded a very large database (~550 GB) from 8.1.4 to 8.2.4 via
a pg_dump and pg_restore. (Note that the restore took several days.) We
had accepted the default settings:
vacuum_freeze_min_age = 100 million
autovacuum_freeze_max_age = 200 million
Due to our very high transaction r
Hi there,
Reading different references, I understand there is no need to vacuum a
table where only insert actions are performed. So I'm surprised to see a table
with just historical data, which is vacuumed by the nightly cron with a
simple VACUUM VERBOSE on about 1/3 of the indexes.
Take a look
Greg,
I have a PostgreSQL database that runs on a dedicated server. The
server has 24Gig of memory. What would be the max size I would ever
want to set the shared_buffers to if I were to rely on the OS for
disk caching? It seems that no matter how big your dedicated
server is there
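(For orientation only, a sketch assuming nothing about the box beyond its 24GB
of RAM: the two settings being traded off can be checked from psql; the
comments are illustrative, not a recommendation.)

  SHOW shared_buffers;         -- cache PostgreSQL manages itself; changing it needs a restart
  SHOW effective_cache_size;   -- planner's estimate of OS cache; often set to most of the remaining RAM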
Scott Marlowe wrote:
Karl Wright wrote:
Scott Marlowe wrote:
Karl Wright wrote:
Shaun Thomas wrote:
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
I am afraid that I did answer this. My largest tables
are the ones continually being updated. The smaller
ones are updated only i
On Thu, Jun 21, 2007 at 03:14:48AM -0400, Greg Smith wrote:
> "The Oracle Way" presumes that you've got such a massive development staff
> that you can solve these problems better yourself than the community at
> large, and then support that solution on every platform.
Not that Greg is sugges
Karl Wright wrote:
Scott Marlowe wrote:
Karl Wright wrote:
Shaun Thomas wrote:
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
I am afraid that I did answer this. My largest tables
are the ones continually being updated. The smaller
ones are updated only infrequently.
You
Rainer Bauer <[EMAIL PROTECTED]> writes:
> one of my customers installed Postgres on a public server to access the data
> from several places. The problem is that it takes _ages_ to transfer data from
> the database to the client app. At first I suspected a problem with the ODBC
> driver and my ap
Hi,
Andrew Sullivan wrote:
This isn't quite true. Slony-II was originally conceived by Jan as
an attempt to implement some of the Postgres-R ideas.
Oh, right, thanks for that correction.
Part of the problem, as near as I could tell, was that we had no
group communication protocol that would
Greg Smith wrote:
On Tue, 19 Jun 2007, Josh Berkus wrote:
I don't think the "mostly reads / mostly writes" question covers
anything,
nor is it likely to produce accurate answers. Instead, we need to
ask the
users to characterize what type of application they are running:
T1) Please character
Tom Lane wrote:
There's another reason for not setting shared_buffers huge, beyond the
good ones Greg listed: the kernel may or may not consider a large
shared-memory segment as potentially swappable.
Another is that on Windows, shared memory access is more expensive and
various people have
Hello Rainer,
The database computer is connected via a 2MBit SDL connection. I myself have
a 768/128 KBit ADSL connection and pinging the server takes 150ms on average.
I do not have a solution, but I can confirm the problem :)
One PostgreSQL-Installation: Server 8.1 and 8.2 on Windows in th
> ""Campbell, Lance"" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> Now I am at the difficult part, what parameters to calculate and how to
> calculate them. Everything below has to do with PostgreSQL version 8.2:
>
>
> The parameters I would think we should calculate are:
> ma
Greg Smith <[EMAIL PROTECTED]> writes:
> On Wed, 20 Jun 2007, Campbell, Lance wrote:
>> If everything I said is correct then I agree "Why have
>> effective_cache_size?" Why not just go down the approach that Oracle
>> has taken and require people to rely more on shared_buffers and the
>> genera
Hello all,
one of my customers installed Postgres on a public server to access the data
from several places. The problem is that it takes _ages_ to transfer data from
the database to the client app. At first I suspected a problem with the ODBC
driver and my application, but using pgAdminIII 1.6.3
Scott Marlowe writes:
and a bit more resilient to drive failure, RAID-5 can give you a lot of
storage and very good read performance, so it works well for reporting /
New controllers now also have RAID 6, which from the few reports I have seen
seems to have a good compromise of performance a
On Wed, 20 Jun 2007, Campbell, Lance wrote:
If everything I said is correct then I agree "Why have
effective_cache_size?" Why not just go down the approach that Oracle
has taken and require people to rely more on shared_buffers and the
general memory driven approach? Why rely on the disk cac