postgres 8.4.4 on openSUSE 11.3 (2.6.36rc7, x86_64).
I was watching a fairly large query run and observed that the disk
light went out. I checked 'top' and postgres was using 100% CPU so I
strace'd the running process.
This is what I saw:
lseek(87, 0, SEEK_END) = 585531392
lseek(
On 2010-10-14 21:56, Robert Haas wrote:
On Thu, Oct 14, 2010 at 12:22 AM, mark wrote:
Could this be an interesting test use of https://www.fossexperts.com/ ?
'Community' driven proposal - multiple people / orgs agree to pay various
portions? Maybe with multiple funders a reasonable target fund amount could
be reached.
On 10/14/2010 4:10 PM, Jon Nelson wrote:
The first thing I'd do is think real hard about whether you really
really want 'numeric' instead of boolean, smallint, or integer. The
second thing is that none of your indices (which specify a whole bunch
of fields, by the way) have only just emailok, e
Just my take on this.
The first thing I'd do is think real hard about whether you really
really want 'numeric' instead of boolean, smallint, or integer. The
second thing is that none of your indices (which specify a whole bunch
of fields, by the way) have only just emailok, emailbounced, or only
emailok | numeric(2,0)|
Note that NUMERIC is meant for
- really large numbers with lots of digits
- or controlled precision and rounding (ie, order total isn't
99. $)
Accordingly, NUMERIC is a lot slower in all operations, and uses a lot
more space.
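The two suggestions above can be sketched as follows; the table and column definitions are hypothetical, reconstructed from the names mentioned in this thread:

```sql
-- 1. Swap the NUMERIC(2,0) flag for a cheap fixed-width type:
ALTER TABLE members
    ALTER COLUMN emailok TYPE smallint USING emailok::smallint;

-- 2. Give the flag its own partial index instead of burying it in a
--    wide multi-column index; the partial index only stores the rows
--    the query actually wants, so it stays small:
CREATE INDEX members_emailok_idx
    ON members (emailok)
    WHERE emailok = 1;
```

A partial index like this is only usable when the query's WHERE clause implies the index predicate, but for a flag column that is usually exactly the access pattern.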
2010/10/14 Tony Capobianco :
> We are in the process of testing migration of our oracle data warehouse
> over to postgres. A potential showstopper are full table scans on our
> members table. We can't function on postgres effectively unless index
> scans are employed. I'm thinking I don't have something set correctly
> in my postgres
On 10/14/10 21:43, Tony Capobianco wrote:
We have 4 quad-core processors and 32GB of RAM. The below query uses
the members_sorted_idx_001 index in oracle, but in postgres, the
optimizer chooses a sequential scan.
explain analyze create table tmp_srcmem_emws1
as
select emailaddress, websiteid
On Thu, Oct 14, 2010 at 12:43 PM, Tony Capobianco
wrote:
> We have 4 quad-core processors and 32GB of RAM. The below query uses
> the members_sorted_idx_001 index in oracle, but in postgres, the
> optimizer chooses a sequential scan.
>
> explain analyze create table tmp_srcmem_emws1
> as
> select emailaddress, websiteid
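One way to see whether the planner is merely preferring the sequential scan on cost grounds (rather than being unable to use the index at all) is to disable seq scans for a single session and compare the plans. A sketch, assuming the members table from the thread and an illustrative predicate:

```sql
-- Session-local experiment only; never disable seqscan globally.
SET enable_seqscan = off;

EXPLAIN ANALYZE
SELECT emailaddress, websiteid
FROM   members
WHERE  emailok = 1;   -- hypothetical predicate for illustration

RESET enable_seqscan;
```

If the forced index plan comes out with a higher estimated (and actual) cost, the seq scan was the right choice: a query that reads most of a table is genuinely cheaper as a sequential scan.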
On Thu, Oct 14, 2010 at 12:22 AM, mark wrote:
> Could this be an interesting test use of https://www.fossexperts.com/ ?
>
> 'Community' driven proposal - multiple people / orgs agree to pay various
> portions? Maybe with multiple funders a reasonable target fund amount could
> be reached.
>
> Just throwing around ideas here.
We are in the process of testing migration of our oracle data warehouse
over to postgres. A potential showstopper are full table scans on our
members table. We can't function on postgres effectively unless index
scans are employed. I'm thinking I don't have something set correctly
in my postgres
2010/10/13 Ron Mayer :
> Kevin Grittner wrote:
>>
>> ...Sybase named caches...segment off portions of the memory for
>> specific caches... bind specific database
>> objects (tables and indexes) to specific caches. ...
>>
>> When I posted to the list about it, the response was that LRU
>> eviction w
On 2010-10-14 06:22, mark wrote:
Could this be an interesting test use of https://www.fossexperts.com/ ?
'Community' driven proposal - multiple people / orgs agree to pay various
portions? Maybe with multiple funders a reasonable target fund amount could
be reached.
I might convince my boss
Merlin Moncure writes:
> On Wed, Oct 13, 2010 at 10:14 AM, Tom Lane wrote:
>> It's possible that at some point we'll try to introduce plan caching
>> for non-inlined SQL functions.
> hm, I think the search_path/function plan issue would have to be dealt
> with before doing this --
Yeah, perhaps
On Wed, Oct 13, 2010 at 10:14 AM, Tom Lane wrote:
> It's possible that at some point we'll try to introduce plan caching
> for non-inlined SQL functions.
hm, I think the search_path/function plan issue would have to be dealt
with before doing this -- a while back IIRC you suggested function
plans
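The search_path hazard being discussed can be illustrated directly: an unqualified table reference inside a SQL function resolves against whatever search_path is active at call time, so a plan cached under one path would silently keep pointing at the wrong table under another. A sketch with hypothetical schemas:

```sql
CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.t (val int);  INSERT INTO a.t VALUES (1);
CREATE TABLE b.t (val int);  INSERT INTO b.t VALUES (2);

CREATE FUNCTION get_val() RETURNS int AS
  'SELECT val FROM t'   -- unqualified: resolves via search_path
LANGUAGE sql;

SET search_path = a, public;
SELECT get_val();   -- reads a.t

SET search_path = b, public;
SELECT get_val();   -- reads b.t; a plan cached under the old path
                    -- would have kept reading a.t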
On Wed, Oct 13, 2010 at 10:35 PM, Mladen Gogala
wrote:
> You will see that for most of the columns, the length of the histogram array
> corresponds to the value of the default_statistics_target parameter. For
> those that are smaller, the size is the total number of values in the column
> in the sample.
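The histogram sizes being described can be inspected directly in the pg_stats view; a sketch, assuming the members table from this thread:

```sql
-- Columns with many distinct values get a histogram whose length tracks
-- default_statistics_target; low-cardinality columns instead have all
-- their values listed in most_common_vals.
SELECT attname,
       array_length(histogram_bounds, 1)  AS histogram_size,
       array_length(most_common_vals, 1)  AS mcv_size
FROM   pg_stats
WHERE  tablename = 'members';
```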
Could this be an interesting test use of https://www.fossexperts.com/ ?
'Community' driven proposal - multiple people / orgs agree to pay various
portions? Maybe with multiple funders a reasonable target fund amount could
be reached.
Just throwing around ideas here.
Mark
On Wed, Oct 13, 2010 at 1:59 PM, Jesper Krogh wrote:
> On 2010-10-13 15:28, Robert Haas wrote:
>>
>> On Wed, Oct 13, 2010 at 6:16 AM, Neil Whelchel
>> wrote:
>>
>>>
>>> I might go as far as to rattle the cage of the developers to see if it
>>> makes
>>> any sense to add some column oriented storage
Ants Aasma wrote:
I hit an issue with window aggregate costing while experimenting with
providing a count of the full match along side a limited result set.
Seems that the window aggregate node doesn't take into account that it
has to consume the whole input before outputting the first row. When
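The pattern in question — a limited result set carrying a count of the full match — looks like the sketch below (table and column names hypothetical). The count(*) OVER () window aggregate has to consume every qualifying row before the first output row can be emitted, which is the cost the planner was not accounting for:

```sql
-- full_match_count is the total number of rows passing the WHERE
-- clause, attached to each row, even though LIMIT keeps only ten.
SELECT  id,
        score,
        count(*) OVER () AS full_match_count
FROM    results
WHERE   score > 0.5
ORDER BY score DESC
LIMIT   10;
```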
On 13/10/10 23:16, Neil Whelchel wrote:
The good ol' brute force approach! I knew I'd see this one sooner or later.
Though I was not sure if I was going to see the 16TB of RAM suggestion first.
Seriously though, as the title of this thread suggests, everything is
relative. Sure count(*) and e
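When an exact count isn't required, the planner's own row estimate is a near-free alternative to a full count(*) scan; a sketch, using a hypothetical table name:

```sql
-- reltuples is maintained by VACUUM and ANALYZE, so it is an
-- estimate that lags recent activity, not an exact count.
SELECT reltuples::bigint AS estimated_rows
FROM   pg_class
WHERE  relname = 'members';
```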
On 13/10/10 21:44, Mladen Gogala wrote:
On 10/13/2010 3:19 AM, Mark Kirkwood wrote:
I think that major effect you are seeing here is that the UPDATE has
made the table twice as big on disk (even after VACUUM etc), and it has
gone from fitting in ram to not fitting in ram - so cannot be
effectively cached.
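Whether the UPDATE really doubled the on-disk footprint (old row versions are kept until VACUUM, and the file is not shrunk afterward) can be checked directly, assuming a hypothetical table name:

```sql
-- Heap alone vs. heap plus indexes and TOAST:
SELECT pg_size_pretty(pg_relation_size('members'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('members')) AS total_size;
```

Comparing these numbers before and after the UPDATE (and again after VACUUM) shows whether the table has simply outgrown RAM.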
Ants Aasma writes:
> Seems that the window aggregate node doesn't take into account that it
> has to consume the whole input before outputting the first row.
Well, the reason it doesn't assume that is it's not true ;-). In this
particular case it's true, but more generally you only have to read
On Wednesday 13 October 2010 06:27:34 you wrote:
> On Wed, Oct 13, 2010 at 4:38 AM, Neil Whelchel
wrote:
> > There seems to be allot of discussion about VACUUM FULL, and its
> > problems. The overall buzz seems to be that VACUUM FULL is a bad idea (I
> > could be wrong here). It has been some time
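On 8.4, the usual alternative to VACUUM FULL for reclaiming space is to rewrite the table with CLUSTER, which builds a compact copy and fresh indexes instead of moving tuples in place; a sketch, assuming a hypothetical index name:

```sql
-- Takes an exclusive lock for the duration, but avoids the index
-- bloat that 8.4-era VACUUM FULL causes.
CLUSTER members USING members_pkey;
ANALYZE members;
```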