On Tue, Oct 30, 2012 at 4:11 PM, Greg Williamson wrote:
> Jeff / Catalin --
>
> Jeff Janes wrote:
>
>>On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob wrote:
>>
>>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine
>>> as Postgres. Django connects via TCP/IP to pgbouncer (it does
Jeff / Catalin --
Jeff Janes wrote:
>On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob wrote:
>
>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine
>> as Postgres. Django connects via TCP/IP to pgbouncer (it does one
>> connection and one transaction per request) and pgbouncer k
On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob wrote:
> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine
> as Postgres. Django connects via TCP/IP to pgbouncer (it does one
> connection and one transaction per request) and pgbouncer keeps
> connections open to Postgres via Uni
On Thu, Jul 19, 2012 at 11:07 AM, Jon Nelson wrote:
> Recently I found myself wondering what was taking a particular query so long.
> I immediately assumed it was a lack of I/O, because lack of I/O is a
> thorn in my side.
> Nope, the I/O was boring. CPU? Well, the process was using 100% of the
>
Shaun Thomas wrote:
> On 10/30/2012 06:55 AM, Kevin Grittner wrote:
>> Is there a good transaction-based connection pooler in Python?
>> You're better off with a good pool built in to the client
>> application than with a good pool running as a separate process
>> between the client and the databa
Cesar,
> On this server I have only one database with 312GB of data. The database
> had run fine during 4 months, but from two months ago, during high work
> load periods, the server is collapsed by "%sys" type load.
Hmmm. Have you updated Linux any time recently? I'm wondering if this
is a Pos
> 1) Make all types the same
> 2) If you are using some narrow type for big_table (say, int2) to save
> space, you can force narrowing conversion, e.g. "b.key1=ds.key1::int2". Note
> that if ds.key1 has any values that don't fit into int2, you will have
> problems. And of course, use your type used
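As a concrete sketch of option 2, using the table and column names from Vincenzo's query elsewhere in this digest (this assumes big_table's keys are int2 and data_sequences_table's are numeric; the cast fails if any ds.key1 or ds.key2 value does not fit in int2):

-- Sketch: cast the numeric side down to the narrow type so both sides of
-- each join condition compare as int2 and an existing index on big_table's
-- key columns can be used. "xx" is the placeholder from the original post.
select a, b, c, d
from big_table b
join data_sequences_table ds
  on b.key1 = ds.key1::int2
 and b.key2 = ds.key2::int2
where ds.import_id = xx;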
AndyG wrote:
> A marginal improvement.
>
> http://explain.depesz.com/s/y63
That's what I thought.
Increasing the statistics for test_result.id_recipe_version
had no effect?
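(For reference, raising the per-column statistics target suggested above would look like the sketch below; the target of 1000 is only an example value.)

-- Sketch: raise the statistics target for the join column, then re-analyze
-- so the planner actually sees the more detailed statistics.
alter table test_result alter column id_recipe_version set statistics 1000;
analyze test_result;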
> I am going to normalize the table some more before partitioning.
How do you think that partitioning will help?
Yours,
On Tue, Oct 30, 2012 at 2:24 AM, vignesh wrote:
> Hi,
> When I start my Postgres, I am getting this error.
You may want to ask on the pgsql-general mailing list [1]. This list
is just for Postgres performance questions. While, technically,
failing to start outright could be considered a pe
A marginal improvement.
http://explain.depesz.com/s/y63
I am going to normalize the table some more before partitioning.
On 10/30/2012 06:55 AM, Kevin Grittner wrote:
> Is there a good transaction-based connection pooler in Python?
> You're better off with a good pool built in to the client application
> than with a good pool running as a separate process between the
> client and the database, IMO.
Could you explain thi
On 10/30/2012 07:15 AM, Vincenzo Melandri wrote:
> Merge Join  (cost=2604203.98..2774528.51 rows=129904 width=20)
>   Merge Cond: ((((big_table.key1)::numeric) = data_sequences_table.key1)
>     AND (((big_table.key2)::numeric) = data_sequences_table.key2))
>   ->  Sort  (cost=2602495.47..2635975.81 row
Hi Vincenzo,
On Tue, 30 Oct 2012 13:15:10 +0100, Vincenzo Melandri wrote:
> I have indexes on both the key on the big table and the import_id on
> the sequence table.
Forgive my quick answer, but it might be that the data you are
retrieving is scattered throughout the whole table, and the index
On Tue, Oct 30, 2012 at 02:16:57PM +0100, Albe Laurenz wrote:
> k...@rice.edu wrote:
> >>> If you do not have good random io performance log replay is nearly
> >>> unbearable.
> >>>
> >>> also, what io scheduler are you using? if it is cfq change that to
> >>> deadline or noop.
> >>> that can make
k...@rice.edu wrote:
>>> If you do not have good random io performance log replay is nearly
>>> unbearable.
>>>
>>> also, what io scheduler are you using? if it is cfq change that to
>>> deadline or noop.
>>> that can make a huge difference.
>>
>> We use the noop scheduler.
>> As I said, an identic
Heikki Linnakangas wrote:
>> Why does WAL replay read much more than it writes?
>> I thought that pretty much every block read during WAL
>> replay would also get dirtied and hence written out.
>
> Not necessarily. If a block is modified and written out of the buffer
> cache before next checkpoint
On Tue, Oct 30, 2012 at 09:50:44AM +0100, Albe Laurenz wrote:
> >> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz wrote:
> >>> I am configuring streaming replication with hot standby
> >>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
> >>> PostgreSQL was compiled from source.
>
Hello there,
I have a PostgreSQL 8.3.18 server running on CentOS 6.2 (2.6.32-220.7.1) with
these specs:
2x CPU AMD Opteron 6282
128GB RAM
RAID 10 (12 HD 15k rpm, 1GB cache) with data
RAID 10 (4 HD 15k rpm, 1GB cache) with xlog
RAID 1 (15k rpm, 1GB cache shared with xlog) with system
On this server I hav
Hi all
I have a problem with a data import procedure that involves the following query:
select a,b,c,d
from big_table b
join data_sequences_table ds
on b.key1 = ds.key1 and b.key2 = ds.key2
where ds.import_id=xx
The "big table" has something like 10.000.000 records ore more
(depending on
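A sketch of how to capture the actual run-time plan for that query: the BUFFERS option needs PostgreSQL 9.0 or later, plain EXPLAIN ANALYZE works the same way on older servers, and 42 merely stands in for a real import_id:

-- Sketch: run the import query under EXPLAIN ANALYZE to see where the
-- time goes (sort, merge join, sequential scan, ...).
explain (analyze, buffers)
select a, b, c, d
from big_table b
join data_sequences_table ds
  on b.key1 = ds.key1 and b.key2 = ds.key2
where ds.import_id = 42;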
Catalin Iacob wrote:
> Hardware:
> Virtual machine running on top of VMWare
> 4 cores, Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
> 4GB of RAM
You should carefully test transaction-based pools limited to around 8
DB connections. Experiment with different size limits.
http://wiki.postgresql.org/wiki/Nu
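To see how close the application actually gets to a limit like that, counting backends is a quick check (a sketch; on 9.2 and later pg_stat_activity also has a state column that separates active from idle connections):

-- Sketch: count current backends and compare against the pool size limit.
select count(*) as connections from pg_stat_activity;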
> I have an SQL file (its size is 1 GB).
> When I execute it, I get a "String of 987098801 bytes is too long for
> encoding conversion" error.
> Please give me a solution.
You hit PostgreSQL's internal memory allocation limit. IMO, there's no
way to avoid the error except yo
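If the file is essentially one gigantic INSERT, one common workaround is to load the rows with COPY, so that no single statement has to go through encoding conversion as a ~1 GB string (a sketch; the table name and file path below are made up):

-- Hypothetical sketch: bulk-load from a server-side file with COPY
-- instead of executing one enormous INSERT statement.
copy my_table from 'C:/load/my_table.csv' with csv;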
On 30.10.2012 10:50, Albe Laurenz wrote:
> Why does WAL replay read much more than it writes?
> I thought that pretty much every block read during WAL
> replay would also get dirtied and hence written out.
Not necessarily. If a block is modified and written out of the buffer
cache before next checkp
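For what it's worth, some of the settings that influence how often replay has to evict and later re-read the same blocks can be inspected like this (a sketch; checkpoint_segments exists only up to 9.4):

-- Sketch: show cache and checkpoint settings relevant to WAL replay I/O.
select name, setting, unit
from pg_settings
where name in ('shared_buffers', 'checkpoint_segments',
               'checkpoint_timeout', 'full_page_writes');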
Thanks very much Laurenz.
I'll put your suggestions into motion right away and let you know the
results.
Albe Laurenz *EXTERN* wrote
> BTW, you seem to have an awful lot of indexes defined, some
> of which seem redundant.
I am in the process of pruning unused/useless indexes on this database -
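A common way to find pruning candidates is to list indexes that have had no scans since the statistics were last reset (a sketch; an unscanned index can still be required, for example to back a unique constraint, so verify each one before dropping it):

-- Sketch: user indexes with zero scans, largest first.
select schemaname, relname as table_name, indexrelname as index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size,
       idx_scan
from pg_stat_user_indexes
where idx_scan = 0
order by pg_relation_size(indexrelid) desc;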
Hi,
When I start my Postgres, I am getting this error. I have
8.4 and 9.1 installed.
It was working fine yesterday but not now.
service postgresql restart
* Restarting PostgreSQL 8.4 database server
* The PostgreSQL server failed to start. Please check the log output.
If I see the log, it s
hi
I have an SQL file (its size is 1 GB).
When I execute it, I get a "String of 987098801 bytes is too long for
encoding conversion" error.
Please give me a solution.
I have XP 64-bit with 8 GB RAM, shared_buffer 1GB, check point = 34.
with thanks
mahavir
Sean Woolcock wrote:
> I have a large (3 million row) table called "tape" that represents files,
> which I join to a small (100 row) table called "filesystem" that represents
> filesystems. I have a web interface that allows you to sort by a number of
> fields in the tape table and
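One pattern that often helps this kind of sorted, paginated listing is an index matching the sort order, so the first page can be read straight off the index instead of sorting all 3 million rows (a sketch with made-up names; the real join keys and sort column are not shown in the quoted message):

-- Hypothetical: index on the column the web interface sorts by.
create index tape_sort_col_idx on tape (sort_col);

-- First page of results; filesystem_id and sort_col are assumed names.
select t.*, f.*
from tape t
join filesystem f on f.id = t.filesystem_id
order by t.sort_col
limit 50;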
>> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz wrote:
>>> I am configuring streaming replication with hot standby
>>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
>>> PostgreSQL was compiled from source.
>>>
>>> It works fine, except that starting the standby took for ever:
>>
Andy wrote:
> I have been pulling my hair out over the last few days trying to get
> any useful performance out of the following painfully slow query.
> The query is JPA created, I've just cleaned the aliases to make it
> more readable.
> Using 'distinct' or 'group by' delivers about the same result
Hello all,
I have been pulling my hair out over the last few days trying to get any useful performance out of the following
painfully slow query.
The query is JPA created, I've just cleaned the aliases to make it more
readable.
Using 'distinct' or 'group by' delivers about the same results, but