On Thu, 28 May 2009, Flavio Henrique Araque Gurgel wrote:
It is 2.6.24. We had to apply the kswapd patch as well. It's especially
important if you see your system CPU % going as high as 99% in top and
losing control of the machine. I have read that 2.6.28 had
this patch accepted in mainstream
On Thu, May 28, 2009 at 7:04 PM, Fabrix wrote:
>> I would ask for your kernel version. uname -a please?
>
> sure, and thanks for your answer, Flavio...
>
> uname -a
> Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64
> x86_64 x86_64 GNU/Linux
>
> cat /etc/redhat-release
> Red
I would ask for your kernel version. uname -a please?
> sure, and thanks for your answer, Flavio...
>
> uname -a
> Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64
> x86_64 x86_64 GNU/Linux
>
> cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 5
2009/5/28 Flavio Henrique Araque Gurgel
> - "Scott Marlowe" escreveu:
> > On Thu, May 28, 2009 at 12:50 PM, Fabrix wrote:
> > >
> > > Hi.
> > >
> > > Has anyone had experience with bad PostgreSQL performance on a server
> > > with many processors?
>
> I had.
>
> > > but I have exper
- "Scott Marlowe" escreveu:
> On Thu, May 28, 2009 at 12:50 PM, Fabrix wrote:
> >
> > Hi.
> >
> > Has anyone had experience with bad PostgreSQL performance on a server
> > with many processors?
I had.
> > but I have experienced problems with another server that has 8 CPUs quad
- "Alan McKay" escreveu:
> Hmmm. Anyone out there have the Continuent solution working with PostgreSQL?
> If so, what release? We're at 8.3 right now.
I have tested Sequoia 2.10.10 with a high-transaction-rate database on good
servers with plenty of memory. Since that's an OLTP system, the
Hi,
We have a query with a left join. Oddly, if we run this query without
the left join, it runs slower than with the left join.
-query with the left join:
EXPLAIN ANALYZE
SELECT
artifact.id AS id,
artifact.priority AS priority,
item.title AS title,
item.name AS name,
field_value2.value AS s
2009/5/28 Scott Mead
> On Thu, May 28, 2009 at 4:53 PM, Fabrix wrote:
>
>>
>>
>>>
>>> Wow, that's some serious context-switching right there - 300k context
>>> switches a second mean that the processors are spending a lot of their
>>> time fighting for CPU time instead of doing any real work.
>>
Thanks Scott
2009/5/28 Scott Marlowe
> On Thu, May 28, 2009 at 12:50 PM, Fabrix wrote:
> >
> > Hi.
> >
> > Has anyone had experience with bad PostgreSQL performance on a server
> > with many processors?
>
> Seems to depend on the processors and chipset a fair bit.
>
> > I have a server
On Thu, May 28, 2009 at 2:53 PM, Fabrix wrote:
> Yes, I have max_connections = 5000.
> I can lower it, but I need at least 3500 connections.
Whoa, that's a lot. Can you look into connection pooling of some sort?
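A connection pooler in front of Postgres is the usual answer here. As a sketch only (database name, pool size, and limits are assumptions, not values from this thread), a pgbouncer configuration that funnels thousands of client connections into a small set of real backends might look like:

```ini
; pgbouncer.ini -- hypothetical values for illustration
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
pool_mode = transaction   ; server connections are shared per transaction
max_client_conn = 5000    ; clients may still open their 3500+ connections
default_pool_size = 32    ; but only ~32 real backends per database/user
```

The application then connects to port 6432 instead of 5432; the 3500 client sockets no longer translate into 3500 scheduler-fighting backend processes.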
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make ch
On Thu, May 28, 2009 at 4:53 PM, Fabrix wrote:
>
>
>>
>> Wow, that's some serious context-switching right there - 300k context
>> switches a second mean that the processors are spending a lot of their
>> time fighting for CPU time instead of doing any real work.
>
>
There is a bug in the quad c
Thanks David...
2009/5/28 David Rees
> On Thu, May 28, 2009 at 11:50 AM, Fabrix wrote:
> > Monitoring (nmon, htop, vmstat) shows that everything is fine (memory, HD,
> > eth, etc.) except that the processors regularly climb to 100%.
>
> What kind of load are you putting the server under when this ha
On Thu, May 28, 2009 at 12:50 PM, Fabrix wrote:
>
> Hi.
>
> Has anyone had experience with bad PostgreSQL performance on a server
> with many processors?
Seems to depend on the processors and chipset a fair bit.
> I have a server with 4 dual-core CPUs that gives me very good performance,
On Thu, May 28, 2009 at 11:50 AM, Fabrix wrote:
> Monitoring (nmon, htop, vmstat) shows that everything is fine (memory, HD,
> eth, etc.) except that the processors regularly climb to 100%.
What kind of load are you putting the server under when this happens?
> I can see that the processes are waiting
I also forgot to note that I had no problems setting up replication via
londiste (skytools). The cronjob that creates the partition each week for
me also adds the table to the replication set. As simple as:
londiste.py londiste.ini provider add 'public.Data__WI'
londiste.py londiste.i
I currently have a database doing something very similar. I set up partition
tables with predictable names based on the data's timestamp week number,
e.g. (Data__WI).
I have a trigger on the parent partition table to redirect data to the
correct partition (tablename := 'Data_' || to_char('$NEW
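The trigger quoted above is cut off in the archive. A minimal sketch of the same idea, assuming PostgreSQL 8.4+ (for EXECUTE ... USING) and assuming a timestamp column named ts, could look like:

```sql
-- Hypothetical sketch: route inserts on the parent table into a weekly
-- partition named Data_<ISO year>_<ISO week>.
CREATE OR REPLACE FUNCTION data_partition_redirect() RETURNS trigger AS $$
DECLARE
    tablename text := 'Data_' || to_char(NEW.ts, 'IYYY_IW');
BEGIN
    -- Build the INSERT dynamically so the target partition is chosen per row.
    EXECUTE 'INSERT INTO ' || quote_ident(tablename)
            || ' SELECT ($1).*' USING NEW;
    RETURN NULL;  -- the row went to the partition, not the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER data_redirect BEFORE INSERT ON "Data"
    FOR EACH ROW EXECUTE PROCEDURE data_partition_redirect();
```

Returning NULL from the BEFORE trigger suppresses the insert into the parent, so each row lives only in its weekly partition.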
HI.
Someone had some experience of bad performance with postgres in some server
with many processors?
I have a server with 4 dual-core CPUs that gives me very good performance,
but I have experienced problems with another server that has 8 quad-core
CPUs (32 cores). The second one only gives me
On Thu, May 28, 2009 at 05:24:33PM +0200, Ivan Voras wrote:
> 2009/5/28 Kenneth Marshall :
>
> >
> > One big benefit of partitioning is that you can prune old data with
> > minimal impact to the running system. Doing a large bulk delete would
> > be extremely I/O impacting without partition support.
Hmmm. Anyone out there have the Continuent solution working with PostgreSQL?
If so, what release? We're at 8.3 right now.
thanks,
-Alan
p.s. I'm continuing the cross-post because that is the way I started
this thread. Future threads will not be cross-posted.
On Thu, May 28, 2009 at 9:34 AM, S
It depends on how soon you need to access the data after it's
created. The way I do it in my systems: I get data from 8 points, a bit
less than you, but I dump it to CSV and import it on the database host
(a separate server).
Now, you could go to BDB or whatever, but that's not the solution.
So,
On Thu, May 28, 2009 at 5:06 PM, Ivan Voras wrote:
>> If you require precise data with the ability to filter, aggregate and
>> correlate over multiple dimensions, something like Hadoop -- or one of
>> the Hadoop-based column database implementations, such as HBase or
>> Hypertable -- might be a be
2009/5/28 Kenneth Marshall :
>
> One big benefit of partitioning is that you can prune old data with
> minimal impact to the running system. Doing a large bulk delete would
> be extremely I/O impacting without partition support. We use this for
> a DB log system and it allows us to simply truncate a
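The pruning benefit described above can be sketched in two statements (table names are hypothetical, not from the thread):

```sql
-- Bulk delete: scans the table, writes a dead tuple for every row,
-- bloats indexes, and competes for I/O with the running system.
DELETE FROM "Data" WHERE ts < now() - interval '90 days';

-- With weekly partitions, retiring old data is nearly instantaneous:
TRUNCATE "Data_2009_08";        -- empty the partition, keep its structure
-- or remove the partition entirely:
DROP TABLE "Data_2009_08";
```

TRUNCATE and DROP just unlink the partition's files, so the cost is independent of how many rows are being discarded.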
2009/5/28 Alexander Staubo :
> On Thu, May 28, 2009 at 2:54 PM, Ivan Voras wrote:
>> The volume of sensor data is potentially huge, on the order of 500,000
>> updates per hour. Each sensor reading is a few numeric(15,5) numbers.
>
> The size of that dataset, combined with the apparent simplicity of
> your s
On Thu, May 28, 2009 at 04:55:34PM +0200, Ivan Voras wrote:
> 2009/5/28 Heikki Linnakangas :
> > Ivan Voras wrote:
> >>
> >> I need to store data about sensor readings. There is a known (but
> >> configurable) number of sensors which can send update data at any time.
> >> The "current" state needs
2009/5/28 Nikolas Everett :
> Option 1 is somewhere between 2 and 3 times more work for the database
> than option 2.
Yes, for writes.
> Do you need every sensor update to hit the database? In a situation like
We can't miss an update - they can be delayed but they all need to be recorded.
2009/5/28 Heikki Linnakangas :
> Ivan Voras wrote:
>>
>> I need to store data about sensor readings. There is a known (but
>> configurable) number of sensors which can send update data at any time.
>> The "current" state needs to be kept, as do all historical records.
>> I'm trying to decide betw
On Thu, May 28, 2009 at 2:54 PM, Ivan Voras wrote:
> The volume of sensor data is potentially huge, on the order of 500,000
> updates per hour. Each sensor reading is a few numeric(15,5) numbers.
The size of that dataset, combined with the apparent simplicity of
your schema and the apparent requirement for
Option 1 is somewhere between 2 and 3 times more work for the database
than option 2.
Do you need every sensor update to hit the database? In a situation like
this I'd be tempted to keep the current values in the application itself and
then sweep them all into the database periodically. If
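The periodic-sweep idea amounts to trading one INSERT per sensor update for one statement per sweep. A sketch (table and column names are assumptions):

```sql
-- Accumulate readings in the application, then flush them in a single
-- multi-row INSERT instead of 100+ individual statements per second:
INSERT INTO sensor_history (sensor_id, read_at, value)
VALUES (1, '2009-05-28 14:00:00', 20.51234),
       (2, '2009-05-28 14:00:00', 19.00042),
       (3, '2009-05-28 14:00:00', 21.73311);
-- For larger sweeps, COPY sensor_history FROM STDIN is cheaper still.
```

The trade-off is the window between sweeps: readings held only in the application are lost if it crashes, which matters if no update may be missed.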
Ivan Voras wrote:
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
The "current" state needs to be kept, as do all historical records.
I'm trying to decide between these two designs:
1) create a table for
Hi,
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
The "current" state needs to be kept, as do all historical records.
I'm trying to decide between these two designs:
1) create a table for "current" data,
30 matches