Mark Kirkwood wrote:
Kevin Brown wrote:
I'll just start by warning that I'm new-ish to PostgreSQL.
I'm running 8.1 installed from source on a Debian Sarge server. I
have a simple query that I believe I've placed the indexes correctly
for, and I still end up with a seq scan. It makes sense,
On Wed, 14 Dec 2005, Charles Sprickman wrote:
[big snip]
The list server seems to be regurgitating old stuff, and in doing so it
reminded me to thank everyone for their input. I was kind of waiting to
see if anyone who was very pro-NAS/SAN was going to pipe up, but it looks
like most people
> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Wed 12/14/2005 9:36 PM
> To: Gregory S. Williamson
> Cc: pgsql-performance@postgresql.org; PostGIS Users Discussion
> Subject: Re: [PERFORM] [postgis-users] Is my query planner failing me,
> or vice versa?
> "G
Hello all,
It seems that I'm starting to outgrow our current Postgres setup. We've
been running a handful of machines as standalone db servers. This is all
in a colocation environment, so everything is stuffed into 1U Supermicro
boxes. Our standard build looks like this:
Supermicro 1U w/S
On Wed, 2005-12-14 at 17:47 -0500, Tom Lane wrote:
> That plan looks perfectly fine to me. You could try forcing some other
> choices by fooling with the planner enable switches (eg set
> enable_seqscan = off) but I doubt you'll find much improvement. There
> are too many rows being pulled from o
Hi,
I have a java.util.List of values (1) which I want to use in the WHERE
clause of a simple SELECT statement. Iterating over the list and using a
prepared statement for each value is quite slow. Is there a more
efficient way to execute such a query?
Thanks for any help.
Johannes
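One common answer to this question is to send the whole list in a single query with an IN (...) clause, binding one placeholder per value, instead of executing the statement once per element. The sketch below assumes a hypothetical table `items` with an integer column `id` and a JDBC `Connection` named `conn`; those names are illustrative, not from the original post.

```java
import java.util.List;

public class InClauseQuery {
    // Build "SELECT * FROM items WHERE id IN (?, ?, ?)" for n values.
    static String buildInQuery(int n) {
        StringBuilder sql = new StringBuilder("SELECT * FROM items WHERE id IN (");
        for (int i = 0; i < n; i++) {
            if (i > 0) sql.append(", ");
            sql.append('?');
        }
        return sql.append(')').toString();
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(3, 7, 42);
        String sql = buildInQuery(values.size());
        System.out.println(sql); // SELECT * FROM items WHERE id IN (?, ?, ?)
        // With a live connection (sketch):
        //   try (PreparedStatement ps = conn.prepareStatement(sql)) {
        //       for (int i = 0; i < values.size(); i++)
        //           ps.setInt(i + 1, values.get(i));   // JDBC parameters are 1-based
        //       try (ResultSet rs = ps.executeQuery()) { /* ... */ }
        //   }
    }
}
```

This trades one round-trip for a query whose shape depends on the list size; for very large lists a temporary table plus a join is usually the better option.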
I have a few small functions which I need to write. They will be hopefully
quick running but will happen on almost every delete, insert and update on
my database (for audit purposes).
I know I should be writing these in C but that's a bit beyond me. I was
going to try PL/Python or PL/Perl or
On Tue, 20 Dec 2005, Juan Casero wrote:
Date: Tue, 20 Dec 2005 19:50:47 -0500
From: Juan Casero <[EMAIL PROTECTED]>
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] What's the best hardver for PostgreSQL 8.1?
Can you elaborate on the reasons the opteron is better than the Xeon when it
comes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One of our
tables is about 13 million rows. I had a number of queries against this
table that used inner joins on 5 or 6 tables including the 13 mill
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> On Mon, Dec 19, 2005 at 03:47:35PM -0500, Greg Stark wrote:
>> Increase your work_mem (or sort_mem in older postgres versions), you can do
>> this for the server as a whole or just for this one session and set it back
>> after this one query. You can inc
On Dec 20, 2005, at 1:27 PM, Antal Attila wrote:
The budget is about $30,000-$40,000.
Like Jim said, without more specifics it is hard to give more
specific recommendations, but I'm architecting something like this
for my current app which needs ~100GB disk space. I made room to
On Tue, Dec 20, 2005 at 07:27:15PM +0100, Antal Attila wrote:
> We have complex databases with some 100 million rows (2-3 million new
How much space does that equate to?
> records per month). Our current servers are responding slowly
> these days, so we have to buy new hardware for d
On Tue, Dec 20, 2005 at 01:26:00PM +0000, David Roussel wrote:
> Note that you can do the tarring, zipping, copying and untarring
> concurrently. I can't remember the exact netcat command line options,
> but it goes something like this
>
> Box1:
> tar czvf - myfiles/* | netcat myserver:12345
>
On Mon, Dec 19, 2005 at 07:20:56PM -0800, David Lang wrote:
> for persistant storage you can replicate from your ram-based system to a
> disk-based system, and as long as your replication messages hit disk
> quickly you can allow the disk-based version to lag behind in its updates
> during your
Sun hardware is a 4 CPU (8 cores) v40z, Dell is 6850 Quad XEON (8
cores), both have 16GB RAM, and 2 internal drives, one drive has OS +
data and second drive has pg_xlog.
RedHat AS4.0 U2 64-bit on both servers, PG8.1, 64bit RPMs.
Thanks,
Anjan
-----Original Message-----
From: Juan Casero [mail
On Mon, Dec 19, 2005 at 03:47:35PM -0500, Greg Stark wrote:
> Increase your work_mem (or sort_mem in older postgres versions), you can do
> this for the server as a whole or just for this one session and set it back
> after this one query. You can increase it up until it starts causing swapping
> a
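The advice quoted above (raise work_mem for just one session, then put it back) can be sketched from a JDBC client. This is a minimal sketch with assumptions: a `Connection` named `conn` is available, the value 65536 (KB, i.e. 64MB) is an arbitrary example, and the SELECT is a placeholder for the expensive sort/query.

```java
import java.util.List;

public class SessionWorkMem {
    // The statements a session would send, in order. work_mem is sort_mem
    // in versions before 8.0.
    static List<String> sessionPlan(String workMemKb) {
        return List.of(
            "SET work_mem = " + workMemKb,  // raise the limit for this session only
            "SELECT ... ORDER BY ...",      // the expensive query (placeholder)
            "RESET work_mem"                // back to the server-wide default
        );
    }

    public static void main(String[] args) {
        // With a live connection this would be:
        //   try (Statement st = conn.createStatement()) {
        //       for (String sql : sessionPlan("65536")) st.execute(sql);
        //   }
        sessionPlan("65536").forEach(System.out::println);
    }
}
```

A session-level SET never touches other backends, which is why it is safe to experiment this way before changing postgresql.conf.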
On Sun, Dec 18, 2005 at 11:35:15AM -0500, Juan Casero wrote:
> Can anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc
> T1 processor and architecture on Solaris 10? I have a custom-built retail
> sales reporting system that I developed using PostgreSQL 7.4.8 and PHP on a Fedora
Pe
On Tue, Dec 20, 2005 at 12:20:55PM -0500, Jignesh K. Shah wrote:
> Is pgbench the workload that you prefer? (It already has issues with
> pg_xlog so my guess is it probably won't scale much)
> If you have other workload informations let me know.
From what the user described, dbt3 would probably
On Mon, Dec 19, 2005 at 11:10:50AM -0800, James Klo wrote:
> Yes, I've considered partitioning as a long term change. I was thinking
> about this for other reasons - mainly performance. If I go the
> partitioning route, would I need to even perform archival?
No. The idea is that you have your t
On Sat, Dec 17, 2005 at 07:31:40AM -0500, Jaime Casanova wrote:
> > > Yeah it would - an implementation I have seen that I like is where the
> > > developer can supply the *entire* execution plan with a query. This is
> > > complex enough to make casual use unlikely :-), but provides the ability
>
Hi!
What do you suggest for the following problem?
We have complex databases with some 100 million rows (2-3 million new
records per month). Our current servers are responding slowly
these days, so we have to buy new hardware for the database server. Some
weeks ago we started to work with Po
But yes, all LWPs (processes and threads) are switched across virtual
CPUs. There is intelligence built into Solaris to understand which
strands are executing on which cores, and it will balance out the cores
too, so if there are only 8 threads running they will essentially run on
separate cores
Jignesh,
Juan says the following below:
"I figured the number of cores on the T1000/2000 processors would be
utilized by the forked copies of the postgresql server. From the comments
I have seen so far it does not look like this is the case."
I think this needs to be refuted. Doesn't Solaris s
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> That surprises me too --- can you provide details on the test case so
>> other people can reproduce it? AFAIR the only performance difference
>> between SERIALIZABLE and READ COMMITTED is the frequency with which
>> t
Hi, Tom,
Tom Lane wrote:
>>Some time ago, I had some tests with large bulk insertions, and it
>>turned out that SERIALIZABLE seemed to be 30% faster, which surprised us.
>
> That surprises me too --- can you provide details on the test case so
> other people can reproduce it? AFAIR the only per
Tom Lane <[EMAIL PROTECTED]> writes:
> That surprises me too --- can you provide details on the test case so
> other people can reproduce it? AFAIR the only performance difference
> between SERIALIZABLE and READ COMMITTED is the frequency with which
> transaction status snapshots are taken; your
Markus Schaber <[EMAIL PROTECTED]> writes:
> Some time ago, I had some tests with large bulk insertions, and it
> turned out that SERIALIZABLE seemed to be 30% faster, which surprised us.
That surprises me too --- can you provide details on the test case so
other people can reproduce it? AFAIR th
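Since the surprising 30% figure came from a bulk-insert test, the client-side difference between the two runs is just the isolation level selected before the run. A minimal JDBC sketch (assuming a `Connection` named `conn` to the test database; the benchmark body itself is elided):

```java
import java.sql.Connection;

public class IsolationDemo {
    // The two java.sql.Connection isolation constants being compared.
    static int[] levels() {
        return new int[] {
            Connection.TRANSACTION_SERIALIZABLE,
            Connection.TRANSACTION_READ_COMMITTED
        };
    }

    public static void main(String[] args) {
        // With a live connection (sketch):
        //   conn.setAutoCommit(false);
        //   conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        //   ... run the bulk inserts, commit, time the run ...
        //   conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        //   ... repeat with the same data and compare ...
        for (int level : levels()) System.out.println(level);
    }
}
```

Everything else (data, batch size, autocommit setting) should be held constant so the isolation level is the only variable.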
Oleg Bartunov writes:
> I'm still wondering about the very poor performance of my server. Moving
> pgdata to RAID6 helped - about 600 tps. Then, I moved pg_xlog to separate
> disk and got strange error messages
> [EMAIL PROTECTED]:~$ time pgbench -c 10 -t 3000 pgbench
> starting vacuum...end.
> Client
On Tue, 20 Dec 2005, Alan Stange wrote:
David Lang wrote:
On Tue, 20 Dec 2005, Alan Stange wrote:
Jignesh K. Shah wrote:
I guess it depends on what you term as your metric for measurement.
If it is just one query execution time .. It may not be the best on
UltraSPARC T1.
But if you have mor
On Tue, 20 Dec 2005, Tom Lane wrote:
Oleg Bartunov writes:
I see very low performance and high context switches on our
dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)
with 8Gb of RAM, running 8.1_STABLE. Any tips here ?
[EMAIL PROTECTED]:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbenc
David Lang wrote:
On Tue, 20 Dec 2005, Alan Stange wrote:
Jignesh K. Shah wrote:
I guess it depends on what you term as your metric for measurement.
If it is just one query execution time .. It may not be the best on
UltraSPARC T1.
But if you have more than 8 complex queries running simultane
On Tue, 20 Dec 2005, Alan Stange wrote:
Jignesh K. Shah wrote:
I guess it depends on what you term as your metric for measurement.
If it is just one query execution time .. It may not be the best on
UltraSPARC T1.
But if you have more than 8 complex queries running simultaneously,
UltraSPARC
Jignesh K. Shah wrote:
I guess it depends on what you term as your metric for measurement.
If it is just one query execution time .. It may not be the best on
UltraSPARC T1.
But if you have more than 8 complex queries running simultaneously,
UltraSPARC T1 can do comparatively well prov
The only issue is to ensure that a query near a boundary between two
adjacent areas behaves as though there was no partitioning. To do this, I'm
looking into using 8.1's new 2PC to allow me to selectively copy data
inserted near a boundary into the adjacent neighbouring databases, so that
Oleg Bartunov writes:
> I see very low performance and high context switches on our
> dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)
> with 8Gb of RAM, running 8.1_STABLE. Any tips here ?
> [EMAIL PROTECTED]:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c 10
> -t 3000 pgbench
>
David Lang wrote:
ext3 has an option to make searching directories faster (htree), but
enabling it kills performance when you create files. And this doesn't
help with large files.
The ReiserFS white paper talks about the data structure he uses to
store directories (some kind of tree),
On 12/20/05, Nörder-Tuitje, Marcus <[EMAIL PROTECTED]> wrote:
> MVCC blocks reading processes when data is modified.
That is incorrect. The main difference between 2PL and MVCC is that
readers are never blocked under MVCC.
greetings,
Nicolas
--
Nicolas Barbier
http://www.gnu.org/philosophy/no-w
Hello, Andreas,
Andreas Seltenreich wrote:
>>Btw, there's another setting that might make a difference:
>>Having ACID-Level SERIALIZABLE or READ COMMITTED?
>
> Well, if nonrepeatable or phantom reads would pose a problem because
> of those occasional writes, you wouldn't be considering autocommi
Markus Schaber writes:
> As I said, there usually are no writing transactions on the same database.
>
> Btw, there's another setting that might make a difference:
>
> Having ACID-Level SERIALIZABLE or READ COMMITTED?
Well, if nonrepeatable or phantom reads would pose a problem because
of those occ
Markus Schaber schrieb:
Hello,
We have a database containing PostGIS MAP data, it is accessed mainly
via JDBC. There are multiple simultaneous read-only connections taken
from the JBoss connection pooling, and there usually are no active
writers. We use connection.setReadOnly(true).
Now my ques
Hi, Marcus,
Nörder-Tuitje wrote:
> afaik, this should be completely negligible.
>
> starting a transaction implies write access. if there is none, You do
> not need to think about transactions, because there are none.
Hmm, I always thought that the transaction will be opened at the first
statem
Mmmm, good question.
MVCC blocks reading processes when data is modified. Using autocommit implies
that each modification statement is an atomic operation.
On a massive read-only table, where no data is altered, MVCC shouldn't have any
effect (but this is only an assumption), basing on
http://en
Nörder-Tuitje wrote:
|> We have a database containing PostGIS MAP data, it is accessed
|> mainly via JDBC. There are multiple simultaneous read-only
|> connections taken from the JBoss connection pooling, and there
|> usually are no active writer
afaik, this should be completely negligible.
starting a transaction implies write access. if there is none, you do not need
to think about transactions, because there are none.
postgres needs to schedule the writing transactions with the reading ones,
anyway.
But I am not that performance pro
Hello,
We have a database containing PostGIS MAP data, it is accessed mainly
via JDBC. There are multiple simultaneous read-only connections taken
from the JBoss connection pooling, and there usually are no active
writers. We use connection.setReadOnly(true).
Now my question is what is best perfo