Re: [PERFORM] Block at a time ...

2010-03-17 Thread Pierre C

I was thinking in something like that, except that the factor I'd use
would be something like 50% or 100% of current size, capped at (say) 1  
GB.


Using fallocate() ?




Re: [PERFORM] GiST index performance

2010-03-17 Thread Yeb Havinga

Yeb Havinga wrote:

Matthew Wakeling wrote:

Matthew Wakeling wrote:
A second quite distinct issue is the general performance of GiST indexes, 
which is also mentioned in the old thread linked from Open Items. For 
that, we have a test case at 
http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php for 
btree_gist indexes. I have a similar example with the bioseg GiST index. 
I have completely reimplemented the same algorithms in Java for algorithm 
investigation and instrumentation purposes, and it runs about a hundred 
times faster than in Postgres. I think this is a problem, and I'm willing 
to do some investigation to try and solve it.
I have not made any progress on this issue. I think Oleg and Teodor 
would be better placed working it out. All I can say is that I 
implemented the exact same indexing algorithm in Java, and it 
performed 100 times faster than Postgres. Now, Postgres has to do a 
lot of additional work, like mapping the index onto disc, locking 
pages, and abstracting to plugin user functions, so I would expect 
some difference - I'm not sure 100 times is reasonable though. I 
tried to do some profiling, but couldn't see any one section of code 
that was taking too much time. Not sure what I can further do.

Hello Matthew and list,

A lot of time is spent in the gistget.c code, with a lot of 
FunctionCall5's to the GiST consistent function, which is out of sight 
for gprof.
Something different but related, since it also concerns GiST: we noticed 
before that GiST indexes that use a compressed form for index entries 
suffer from repeated compress calls on query operands (see 
http://archives.postgresql.org/pgsql-hackers/2009-05/msg00078.php).


The btree_gist int4 compress function calls the generic 
gbt_num_compress, which does a palloc. Maybe this palloc is also hit a 
lot when scanning the index, because the constants that are queried with 
are repeatedly compressed and palloc'd.
I looked in the code a bit more - only the index nodes are compressed at 
index creation; the consistent function does not compress queries, so no 
pallocs there. However, when running Matthew's example from 
http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php 
with the GiST index, the coverage shows in gistget.c: 100 palloc0's of 
the gistsearchstack at line 152 and 2010982 palloc's, also of the 
gistsearchstack, at line 342. Two pfrees are also hit a lot: line 195, 
1010926 times for a stack entry, and line 293, 200056 times. My $0.02 is 
that the pain is here. My knowledge of gistget and the other sources in 
access/gist is zero, but wouldn't it be possible to determine the 
maximum needed size of the stack, allocate it at once, and use a 
push/pop kind of API?
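
As a rough sketch of that idea (this is not actual gistget.c code; the 
struct and function names here are invented for illustration), a search 
stack that is allocated in one chunk and grown geometrically, with a 
push/pop interface, could look something like this in C:

#include <stdlib.h>

typedef struct SearchStack
{
    void  **items;     /* pointers to pending search entries */
    int     nitems;    /* current depth */
    int     capacity;  /* allocated slots */
} SearchStack;

static void
stack_init(SearchStack *st, int initial_capacity)
{
    st->items = malloc(initial_capacity * sizeof(void *));
    st->nitems = 0;
    st->capacity = initial_capacity;
}

static void
stack_push(SearchStack *st, void *entry)
{
    if (st->nitems == st->capacity)
    {
        /* grow in one step instead of allocating per entry */
        st->capacity *= 2;
        st->items = realloc(st->items, st->capacity * sizeof(void *));
    }
    st->items[st->nitems++] = entry;
}

static void *
stack_pop(SearchStack *st)
{
    return (st->nitems > 0) ? st->items[--st->nitems] : NULL;
}

The individual entries would still have to be allocated somewhere, but 
the per-visit allocate/free pair for the stack bookkeeping itself would 
go away (error handling omitted above).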


regards,
Yeb Havinga


Re: [PERFORM] Block at a time ...

2010-03-17 Thread Greg Stark
On Wed, Mar 17, 2010 at 7:32 AM, Pierre C  wrote:
>> I was thinking in something like that, except that the factor I'd use
>> would be something like 50% or 100% of current size, capped at (say) 1 GB.

This turns out to be a bad idea. One of the first things Oracle DBAs
are told to do is change this default setting to allocate some
reasonably large fixed size rather than scaling upwards.

This might be mostly due to Oracle's extent-based space management but
I'm not so sure. Recall that the filesystem is probably doing some
rounding itself. If you allocate 120kB it's probably allocating 128kB
itself anyways. Having two layers rounding up will result in odd
behaviour.

In any case I was planning on doing this a while back. Then I ran some
experiments and couldn't actually demonstrate any problem. ext2 seems
to do a perfectly reasonable job of avoiding this problem. All the
files were mostly large contiguous blocks after running some tests --
IIRC running pgbench.


> Using fallocate() ?

I think we need posix_fallocate().

-- 
greg



Re: [PERFORM] Building multiple indexes concurrently

2010-03-17 Thread Greg Smith

Rob Wultsch wrote:

Are there any particular performance optimizations that would be in
play in such a scenario?
  


You'd want to increase maintenance_work_mem significantly, just for the 
sessions that are running these.  Something like this:

SET maintenance_work_mem = '1GB';

I don't know if that's a huge or tiny number relative to total RAM in 
your server, but you get the idea.


Also, you should have a larger-than-default value for 
checkpoint_segments in advance of this.  That one you can't set per session, 
but you can adjust the value in postgresql.conf and request a 
configuration reload--you don't need to disrupt server operation by 
restarting to do it.  This will work for that:


pg_ctl reload
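
Putting those two settings together, each of the two concurrent sessions 
would then run something like the following (table and index names are 
made up for illustration):

-- session 1
SET maintenance_work_mem = '1GB';
CREATE INDEX orders_customer_idx ON orders (customer_id);

-- session 2, started at about the same time
SET maintenance_work_mem = '1GB';
CREATE INDEX orders_created_idx ON orders (created_at);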



At a minimum I assume that if both of the commands were started at
about the same time they would each scan the table in the same
direction and whichever creation was slower would benefit from most of
the table data it needed being prepopulated in shared buffers. Is this
the case?
  


This might be optimistic; whether it will be the case depends a lot on 
how large your shared_buffers and OS buffer cache are relative to the 
table involved.  To pick an extreme example to demonstrate what I mean, 
if shared_buffers is the common default of <32MB, your table is 1TB, and 
you have a giant disk array that reads fast, it's not very likely that 
the second scan is going to find anything of interest left behind by the 
first one.  You could try and make some rough estimates of how long it 
will take to fill your RAM with table data at the expected I/O rate and 
guess how likely overlap is.
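
To put rough, purely illustrative numbers on that:  if the array sustains 
500MB/s and the server has 64GB of RAM, the cache fills with table data 
in about two minutes, while a full scan of the 1TB table takes over half 
an hour--so for most of the second scan there would be little overlap 
left to exploit.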


There's a trade-off here:  by building two at once, the data you need to 
rebuild the indexes is more likely to be in RAM when you need it, but the 
resulting indexes are likely to end up interleaved on disk as they are 
written out.  If you're doing a lot of index scans, the increased seek 
penalties for that may ultimately make you regret having combined the 
two.  It's really impossible to predict which approach is going to be 
better long term without gathering so much data that you might as well 
benchmark it on a test system instead, if you can.  I am not a big fan 
of presuming one can predict performance instead of measuring it for 
complicated cases.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Devrim GÜNDÜZ
On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote:
> We've enjoyed our FusionIO drives very much. They can do 100k iops
> without breaking a sweat.

Yeah, performance is excellent. I bet we could get more, but CPU was the
bottleneck in our test, since it was just a demo server :(
-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Brad Nicholson
On Wed, 2010-03-17 at 14:30 +0200, Devrim GÜNDÜZ wrote:
> On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote:
> > We've enjoyed our FusionIO drives very much. They can do 100k iops
> > without breaking a sweat.
> 
> Yeah, performance is excellent. I bet we could get more, but CPU was
> bottleneck in our test, since it was just a demo server :(

Did you test the drive in all three modes?  If so, what sort of
differences did you see?

I've been hearing bad things from some folks about the quality of the
FusionIO drives from a durability standpoint. I'm unsure if this is
vendor-specific bias or not, but considering the source (which is not
vendor specific), I don't think so. 

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.





Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Brad Nicholson
On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote:
> On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:
> 
> > I've been hearing bad things from some folks about the quality of the
> > FusionIO drives from a durability standpoint.
> 
> Can you be more specific about that? Durability over what time frame? How 
> many devices in the sample set? How did FusionIO deal with the issue?

I didn't get any specifics - as we are looking at other products.  It
did center around how FusionIO did wear-leveling though. 
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.





Re: [PERFORM] Block at a time ...

2010-03-17 Thread Tom Lane
Greg Stark  writes:
> I think we need posix_fallocate().

The problem with posix_fallocate (other than questionable portability)
is that it doesn't appear to guarantee anything at all about what is in
the space it allocates.  Worst case, we might find valid-looking
Postgres data there (eg, because a block was recycled from some recently
dropped table).  If we have to write something anyway to zero the space,
what's the point?
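
For reference, the call itself is simple--something like the sketch 
below, where the file name and sizes are invented--but per the above, 
the server would still have to fill the new range with zeroed pages 
before treating those blocks as valid:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    int   fd = open("base/16384/16385.1", O_RDWR);  /* hypothetical segment */
    off_t current = lseek(fd, 0, SEEK_END);         /* current physical size */
    off_t extend = 64 * 1024 * 1024;                /* grow by 64MB in one call */

    /*
     * Reserves (ideally contiguous) space without writing it; what the
     * standard guarantees about the contents of that space is exactly the
     * question raised above, so the pages would still be written with
     * zeroes before use.
     */
    int err = posix_fallocate(fd, current, extend);
    if (err != 0)
        fprintf(stderr, "posix_fallocate failed: %d\n", err);

    close(fd);
    return 0;
}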

regards, tom lane



Re: [PERFORM] Building multiple indexes concurrently

2010-03-17 Thread Tom Lane
Greg Smith  writes:
> Rob Wultsch wrote:
>> At a minimum I assume that if both of the commands were started at
>> about the same time they would each scan the table in the same
>> direction and whichever creation was slower would benefit from most of
>> the table data it needed being prepopulated in shared buffers. Is this
>> the case?

> This might be optimistic;

No, it's not optimistic in the least, at least not since we implemented
synchronized seqscans (in 8.3 or thereabouts).

regards, tom lane



Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Brad Nicholson
On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, 
> which wear levels across 100GB of actual installed capacity. 
> http://community.fusionio.com/forums/p/34/258.aspx#258
> 

20% of overall capacity free for levelling doesn't strike me as a lot.
Some of the Enterprise grade stuff we are looking into (like TMS RamSan)
leaves 40% (with much larger overall capacity).

Also, running that drive at 80GB is the "Maximum Capacity" mode, which
decreases the write performance.

> Max drive performance would be about 41TB/day, which coincidently works out 
> very close to the 3 year warranty they have on the devices.
> 

To counter that:

http://www.tomshardware.com/reviews/fusioinio-iodrive-flash,2140-2.html

"Fusion-io’s wear leveling algorithm is based on a cycle of 5 TB
write/erase volume per day, resulting in 24 years run time for the 80 GB
model, 48 years for the 160 GB version and 16 years for the MLC-based
320 GB type. However, since 5 TB could be written or erased rather
quickly given the performance level, we recommend not relying on these
approximations too much."


> FusionIO's claim _seems_ credible. I'd love to see some evidence to the 
> contrary.

Vendor claims always seem credible.  The key is to separate the
marketing hype from the actual details.

Again, I'm just passing along what I heard - which was from a
vendor-neutral, major storage consulting firm that decided to stop
recommending these drives to clients.  Make of that what you will.

As an aside, some folks in our Systems Engineering department here did
do some testing of FusionIO, and they found that the helper daemons were
inefficient and placed a fair amount of load on the server.  That might
be something to watch out for, for those that are testing them.

> 
> On Mar 17, 2010, at 9:18 AM, Brad Nicholson wrote:
> 
> > On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote:
> >> On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:
> >> 
> >>> I've been hearing bad things from some folks about the quality of the
> >>> FusionIO drives from a durability standpoint.
> >> 
> >> Can you be more specific about that? Durability over what time frame? How 
> >> many devices in the sample set? How did FusionIO deal with the issue?
> > 
> > I didn't get any specifics - as we are looking at other products.  It
> > did center around how FusionIO did wear-leveling though. 
> > -- 
> > Brad Nicholson  416-673-4106
> > Database Administrator, Afilias Canada Corp.
> > 
> > 
> 
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.





Re: [PERFORM] Building multiple indexes concurrently

2010-03-17 Thread Rob Wultsch
On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane  wrote:
> Greg Smith  writes:
>> Rob Wultsch wrote:
>>> At a minimum I assume that if both of the commands were started at
>>> about the same time they would each scan the table in the same
>>> direction and whichever creation was slower would benefit from most of
>>> the table data it needed being prepopulated in shared buffers. Is this
>>> the case?
>
>> This might be optimistic;
>
> No, it's not optimistic in the least, at least not since we implemented
> synchronized seqscans (in 8.3 or thereabouts).
>
>                        regards, tom lane
>

Where can I find details about this in the documentation?

-- 
Rob Wultsch
wult...@gmail.com



Fwd: [PERFORM] shared_buffers advice

2010-03-17 Thread VJK
See below:


On Wed, Mar 10, 2010 at 9:28 PM, Paul McGarry  wrote:

> Hi there,
>
> I'm after a little bit of advice on the shared_buffers setting (I have
> read the various docs on/linked from the performance tuning wiki page,
> some very helpful stuff there so thanks to those people).
>
> I am setting up a 64bit Linux server running Postgresql 8.3, the
> server has 64gigs of memory and Postgres is the only major application
> running on it. (This server is to go alongside some existing 8.3
> servers, we will look at 8.4/9 migration later)
>
> I'm basically wondering how the postgresql cache (ie shared_buffers)
> and the OS page_cache interact. The general advice seems to be to
> assign 1/4 of RAM to shared buffers.
>
> I don't have a good knowledge of the internals but I'm wondering if
> this will effectively mean that roughly the same amount of RAM being
> used for the OS page cache will be used for redundantly caching
> something the Postgres is caching as well?
>
> IE when Postgres reads something from disk it will go into both the OS
> page cache and the Postgresql shared_buffers and the OS page cache
> copy is unlikely to be useful for anything.
>
> If that is the case what are the downsides to having less overlap
> between the caches, IE heavily favouring one or the other, such as
> allocating shared_buffers to a much larger percentage (such as 90-95%
> of expected 'free' memory).
>

Pg apparently does not have an option of using direct I/O for reads, which
some other databases do (the O_DIRECT mode).  Therefore, double-buffering
with read operations seems unavoidable.  Counterintuitively, it may be a
good idea to just rely on OS buffering and keep shared_buffers rather small,
say, 512MB.
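
A hypothetical postgresql.conf starting point for that approach on a 64GB 
box (both values are only illustrative and would need to be validated by 
testing):

shared_buffers = 512MB          # keep the PostgreSQL buffer cache small
effective_cache_size = 48GB     # hint the planner about the OS page cache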

VJ




> Paul
>
>


Re: [PERFORM] Block at a time ...

2010-03-17 Thread Dave Crooke
Greg - with Oracle, I always do fixed 2GB dbf's for portability, and
preallocate the whole file in advance. However, the situation is a bit
different in that Oracle will put blocks from multiple tables and indexes in
a DBF if you don't tell it differently.

Tom - I'm not sure exactly what Oracle does, but it literally writes the whole
extent before using it ... I think they are just doing the literal
equivalent of *dd if=/dev/zero* ... it takes several seconds to prep a 2GB
file on decent storage.

Cheers
Dave

On Wed, Mar 17, 2010 at 9:27 AM, Tom Lane  wrote:

> Greg Stark  writes:
> > I think we need posix_fallocate().
>
> The problem with posix_fallocate (other than questionable portability)
> is that it doesn't appear to guarantee anything at all about what is in
> the space it allocates.  Worst case, we might find valid-looking
> Postgres data there (eg, because a block was recycled from some recently
> dropped table).  If we have to write something anyway to zero the space,
> what's the point?
>
>regards, tom lane
>


Re: [PERFORM] Block at a time ...

2010-03-17 Thread Craig James

On 3/17/10 2:52 AM, Greg Stark wrote:

On Wed, Mar 17, 2010 at 7:32 AM, Pierre C  wrote:

I was thinking in something like that, except that the factor I'd use
would be something like 50% or 100% of current size, capped at (say) 1 GB.


This turns out to be a bad idea. One of the first thing Oracle DBAs
are told to do is change this default setting to allocate some
reasonably large fixed size rather than scaling upwards.

This might be mostly due to Oracle's extent-based space management but
I'm not so sure. Recall that the filesystem is probably doing some
rounding itself. If you allocate 120kB it's probably allocating 128kB
itself anyways. Having two layers rounding up will result in odd
behaviour.

In any case I was planning on doing this a while back. Then I ran some
experiments and couldn't actually demonstrate any problem. ext2 seems
to do a perfectly reasonable job of avoiding this problem. All the
files were mostly large contiguous blocks after running some tests --
IIRC running pgbench.


This is one of the more-or-less solved problems in Unix/Linux.  Ext* file systems have a 
"reserve" usually of 10% of the disk space that nobody except root can use.  
It's not for root, it's because with 10% of the disk free, you can almost always do a 
decent job of allocating contiguous blocks and get good performance.  Unless Postgres has 
some weird problem that Linux has never seen before (and that wouldn't be 
unprecedented...), there's probably no need to fool with file-allocation strategies.

Craig



Re: [PERFORM] Block at a time ...

2010-03-17 Thread Bob Lunney
Greg is correct, as usual.  Geometric growth of files is A Bad Thing in an  
Oracle DBA's world, since you can unexpectedly (automatically?) run out of file 
system space when the database determines it needs x% more extents than last 
time.

The concept of contiguous extents, however, has some merit, particularly when 
restoring databases.  Prior to parallel restore, a table's files were created 
and extended in roughly contiguous allocations, presuming there was no other 
activity on your database disks.  (You do dedicate disks, don't you?)  When 
using 8-way parallel restore against a six-disk RAID 10 group I found that 
table and index scan performance dropped by about 10x.  I/O performance was 
restored by either clustering the tables one at a time, or by dropping and 
restoring them one at a time.  The only reason I can come up with for this 
behavior is file fragmentation and increased seek times.

If PostgreSQL had a mechanism to pre-allocate files prior to restoring the 
database that might mitigate the problem.  

Then if we could only get parallel index operations ...

Bob Lunney

--- On Wed, 3/17/10, Greg Stark  wrote:

> From: Greg Stark 
> Subject: Re: [PERFORM] Block at a time ...
> To: "Pierre C" 
> Cc: "Alvaro Herrera" , "Dave Crooke" 
> , pgsql-performance@postgresql.org
> Date: Wednesday, March 17, 2010, 5:52 AM
> On Wed, Mar 17, 2010 at 7:32 AM,
> Pierre C 
> wrote:
> >> I was thinking in something like that, except that
> the factor I'd use
> >> would be something like 50% or 100% of current
> size, capped at (say) 1 GB.
> 
> This turns out to be a bad idea. One of the first thing
> Oracle DBAs
> are told to do is change this default setting to allocate
> some
> reasonably large fixed size rather than scaling upwards.
> 
> This might be mostly due to Oracle's extent-based space
> management but
> I'm not so sure. Recall that the filesystem is probably
> doing some
> rounding itself. If you allocate 120kB it's probably
> allocating 128kB
> itself anyways. Having two layers rounding up will result
> in odd
> behaviour.
> 
> In any case I was planning on doing this a while back. Then
> I ran some
> experiments and couldn't actually demonstrate any problem.
> ext2 seems
> to do a perfectly reasonable job of avoiding this problem.
> All the
> files were mostly large contiguous blocks after running
> some tests --
> IIRC running pgbench.
> 
> 
> > Using fallocate() ?
> 
> I think we need posix_fallocate().
> 
> -- 
> greg
> 
> 


  



Re: [PERFORM] Building multiple indexes concurrently

2010-03-17 Thread Greg Smith

Rob Wultsch wrote:

On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane  wrote:
  

No, it's not optimistic in the least, at least not since we implemented
synchronized seqscans (in 8.3 or thereabouts).


Where can I find details about this in the documentation?
  


It's a behind-the-scenes optimization, so it's not really documented on 
the user side very well as far as I know; it's easy to forget it's even 
there, as I did this morning.  
http://j-davis.com/postgresql/syncscan/syncscan.pdf is a presentation 
covering it, and http://j-davis.com/postgresql/83v82_scans.html is also 
helpful.


While my pessimism on this part may have been overwrought, note the 
message interleaved on the list today with this discussion from Bob 
Lunney discussing the other issue I brought up:  "When using 8-way 
parallel restore against a six-disk RAID 10 group I found that table and 
index scan performance dropped by about 10x.  I/O performance was 
restored by either clustering the tables one at a time, or by dropping 
and restoring them one at a time.  The only reason I can come up with 
for this behavior is file fragmentation and increased seek times."  Now, 
Bob's situation may very well involve a heavy dose of table 
fragmentation from multiple active loading processes rather than index 
fragmentation, but this class of problem is common when trying to do too 
many things at the same time.  I'd hate to see you chase a short-term 
optimization (reduced total index build time) at the expense of long-term 
overhead (resulting indexes are not as efficient to scan).


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Justin Pitts

On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:

> On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
>> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, 
>> which wear levels across 100GB of actual installed capacity. 
>> http://community.fusionio.com/forums/p/34/258.aspx#258
>> 
> 
> 20% of overall capacity free for levelling doesn't strike me as a lot.

I don't have any idea how to judge what amount would be right.

> Some of the Enterprise grade stuff we are looking into (like TMS RamSan)
> leaves 40% (with much larger overall capacity).
> 
> Also, running that drive at 80GB is the "Maximum Capacity" mode, which
> decreases the write performance.

Very fair. In my favor, my proposed use case is probably at half capacity or 
less. I am getting the impression that partitioning/formatting the drive for 
the intended usage, and not the max capacity, is the way to go. Capacity isn't 
an issue with this workload. I cannot fit enough drives into these servers to 
get a tenth of the IOPS that even Tom's Hardware documents the ioDrive is 
capable of at reduced performance levels.

>> Max drive performance would be about 41TB/day, which coincidently works out 
>> very close to the 3 year warranty they have on the devices.
>> 
> 
> To counter that:
> 
> http://www.tomshardware.com/reviews/fusioinio-iodrive-flash,2140-2.html
> 
> "Fusion-io’s wear leveling algorithm is based on a cycle of 5 TB
> write/erase volume per day, resulting in 24 years run time for the 80 GB
> model, 48 years for the 160 GB version and 16 years for the MLC-based
> 320 GB type. However, since 5 TB could be written or erased rather
> quickly given the performance level, we recommend not relying on these
> approximations too much."
> 

I'm not sure if that is a counter or a supporting claim :) 

> 
>> FusionIO's claim _seems_ credible. I'd love to see some evidence to the 
>> contrary.
> 
> Vendor claims always seem credible.  The key is to separate the
> marketing hype from the actual details.

I'm hoping to get my hands on a sample in the next few weeks. 

> 
> Again, I'm just passing along what I heard - which was from a
> vendor-neutral, major storage consulting firm that decided to stop
> recommending these drives to clients.  Make of that what you will.
> 
> As an aside, some folks in our Systems Engineering department here did
> do some testing of FusionIO, and they found that the helper daemons were
> inefficient and placed a fair amount of load on the server.  That might
> be something to watch of for for those that are testing them.
> 

That is a wonderful little nugget of knowledge that I shall put on my test plan.




Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Justin Pitts
On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:

> I've been hearing bad things from some folks about the quality of the
> FusionIO drives from a durability standpoint.

Can you be more specific about that? Durability over what time frame? How many 
devices in the sample set? How did FusionIO deal with the issue?


Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Justin Pitts
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which 
wear levels across 100GB of actual installed capacity. 

http://community.fusionio.com/forums/p/34/258.aspx#258

Max drive performance would be about 41TB/day, which coincidentally works out 
very close to the 3 year warranty they have on the devices.

FusionIO's claim _seems_ credible. I'd love to see some evidence to the 
contrary.


On Mar 17, 2010, at 9:18 AM, Brad Nicholson wrote:

> On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote:
>> On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:
>> 
>>> I've been hearing bad things from some folks about the quality of the
>>> FusionIO drives from a durability standpoint.
>> 
>> Can you be more specific about that? Durability over what time frame? How 
>> many devices in the sample set? How did FusionIO deal with the issue?
> 
> I didn't get any specifics - as we are looking at other products.  It
> did center around how FusionIO did wear-leveling though. 
> -- 
> Brad Nicholson  416-673-4106
> Database Administrator, Afilias Canada Corp.
> 
> 




Re: [PERFORM] Building multiple indexes concurrently

2010-03-17 Thread Andres Freund
On Wednesday 17 March 2010 19:44:56 Greg Smith wrote:
> Rob Wultsch wrote:
> > On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane  wrote:
> >> No, it's not optimistic in the least, at least not since we implemented
> >> synchronized seqscans (in 8.3 or thereabouts).
> > 
> > Where can I find details about this in the documentation?
> 
> It's a behind the scenes optimization so it's not really documented on
> the user side very well as far as I know; easy to forget it's even there
> as I did this morning.
> http://j-davis.com/postgresql/syncscan/syncscan.pdf is a presentation
> covering it, and http://j-davis.com/postgresql/83v82_scans.html is also
> helpful.
> 
> While my pessimism on this part may have been overwrought, note the
> message interleaved on the list today with this discussion from Bob
> Lunney discussing the other issue I brought up:  "When using 8-way
> parallel restore against a six-disk RAID 10 group I found that table and
> index scan performance dropped by about 10x.  I/O performance was
> restored by either clustering the tables one at a time, or by dropping
> and restoring them one at a time.  The only reason I can come up with
> for this behavior is file fragmentation and increased seek times."  Now,
> Bob's situation may very well involve a heavy dose of table
> fragmentation from multiple active loading processes rather than index
> fragmentation, but this class of problem is common when trying to do too
> many things at the same time.  I'd hate to see you chase a short-term
> optimization (reduce total index built time) at the expense of long-term
> overhead (resulting indexes are not as efficient to scan).
I find it much easier to believe such issues exist on tables, in 
contrast to indexes. The likelihood of getting sequential accesses on an 
index is small enough on a big table to make it unlikely to matter much.

What's your theory for why it would matter much?

Andres



Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Kenny Gorman
Greg,

Did you ever contact them and get your hands on one?

We eventually did see long SSD rebuild times on server crash as well.  But data 
came back uncorrupted per my blog post.  This is a good case for Slony Slaves.  
Anyone in a high TX low downtime environment would have already engineered 
around needing to wait for rebuild/recover times anyway.  So it's not a deal 
killer in my view.

-kg

On Mar 8, 2010, at 12:50 PM, Greg Smith wrote:

> Ben Chobot wrote:
>> We've enjoyed our FusionIO drives very much. They can do 100k iops without 
>> breaking a sweat. Just make sure you shut them down cleanly - it can up to 
>> 30 minutes per card to recover from a crash/plug pull test.   
> 
> Yeah...I got into an argument with Kenny Gorman over my concerns with how 
> they were handling durability issues on his blog, the reading I did about 
> them never left me satisfied Fusion was being completely straight with 
> everyone about this area:  http://www.kennygorman.com/wordpress/?p=398
> 
> If it takes 30 minutes to recover, but it does recover, I guess that's better 
> than I feared was the case with them.  Thanks for reporting the plug pull 
> tests--I don't trust any report from anyone about new storage hardware that 
> doesn't include that little detail as part of the testing.  You're just 
> asking to have your data get lost without that basic due diligence, and I'm 
> sure not going to even buy eval hardware from a vendor that appears evasive 
> about it.  There's a reason I don't personally own any SSD hardware yet.
> 
> -- 
> Greg Smith  2ndQuadrant US  Baltimore, MD
> PostgreSQL Training, Services and Support
> g...@2ndquadrant.com   www.2ndQuadrant.us
> 
> 




Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Brad Nicholson
On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote:
> On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:
> 
> > On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
> >> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, 
> >> which wear levels across 100GB of actual installed capacity. 
> >> http://community.fusionio.com/forums/p/34/258.aspx#258
> >> 
> > 
> > 20% of overall capacity free for levelling doesn't strike me as a lot.
> 
> I don't have any idea how to judge what amount would be right.
> 
> > Some of the Enterprise grade stuff we are looking into (like TMS RamSan)
> > leaves 40% (with much larger overall capacity).
> > 
> > Also, running that drive at 80GB is the "Maximum Capacity" mode, which
> > decreases the write performance.
> 
> Very fair. In my favor, my proposed use case is probably at half capacity or 
> less. I am getting the impression that partitioning/formatting the drive for 
> the intended usage, and not the max capacity, is the way to go. Capacity 
> isn't an issue with this workload. I cannot fit enough drives into these 
> servers to get a tenth of the IOPS that even Tom's documents the ioDrive is 
> capable of at reduced performance levels.


The actual media is only good for a very limited number of write cycles.  The 
way that the drives get around this and stay reliable is to 
constantly write to different areas.  The more you have free, the less you have 
to re-use, and the longer the lifespan.

This is done by the drives' wear-levelling algorithms, not by using
partitioning utilities, btw.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.





Re: [PERFORM] Testing FusionIO

2010-03-17 Thread david

On Wed, 17 Mar 2010, Brad Nicholson wrote:


On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote:

On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:


On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:

FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which 
wear levels across 100GB of actual installed capacity.
http://community.fusionio.com/forums/p/34/258.aspx#258



20% of overall capacity free for levelling doesn't strike me as a lot.


I don't have any idea how to judge what amount would be right.


Some of the Enterprise grade stuff we are looking into (like TMS RamSan)
leaves 40% (with much larger overall capacity).

Also, running that drive at 80GB is the "Maximum Capacity" mode, which
decreases the write performance.


Very fair. In my favor, my proposed use case is probably at half capacity or 
less. I am getting the impression that partitioning/formatting the drive for 
the intended usage, and not the max capacity, is the way to go. Capacity isn't 
an issue with this workload. I cannot fit enough drives into these servers to 
get a tenth of the IOPS that even Tom's documents the ioDrive is capable of at 
reduced performance levels.



The actual media is only good for a very limited number of write cycles.  The 
way that the drives get around to be reliable is to
constantly write to different areas.  The more you have free, the less you have 
to re-use, the longer the lifespan.

This is done by the drives wear levelling algorithms, not by using
partitioning utilities btw.


True, but if the drive is partitioned so that parts of it are never 
written to by the OS, the drive knows that those parts don't contain data 
and so can treat them as unallocated.

Once the OS writes to a part of the drive, unless the OS issues a TRIM 
command the drive can't know that the data there is worthless and can be 
ignored; it has to try and preserve that data, which makes doing the wear 
leveling harder and slower.


David Lang



Re: [PERFORM] Building multiple indexes concurrently

2010-03-17 Thread Alvaro Herrera
Andres Freund escribió:

> I find it way much easier to believe such issues exist on a tables in 
> constrast to indexes. The likelihood to get sequential accesses on an index 
> is 
> small enough on a big table to make it unlikely to matter much.

Vacuum walks indexes sequentially, for one.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PERFORM] Building multiple indexes concurrently

2010-03-17 Thread Greg Smith

Alvaro Herrera wrote:

Andres Freund escribió:

  
I find it way much easier to believe such issues exist on a tables in 
constrast to indexes. The likelihood to get sequential accesses on an index is 
small enough on a big table to make it unlikely to matter much.



Vacuum walks indexes sequentially, for one.
  


That and index-based range scans were the main two use-cases I was 
concerned would be degraded by interleaving index builds, compared with 
doing them in succession.  I work often with time-oriented apps that 
have heavy "give me every record between <time X> and <time Y>" components 
to them, and good sequential index performance can be an important 
requirement for that kind of application.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




[PERFORM] Forcing index scan on query produces 16x faster

2010-03-17 Thread Christian Brink
I am running into a problem with a particular query. The execution plan 
cost shows that the Seq Scan is a better bet (cost=54020.49..54020.55) 
over the forced index 'enable_seqscan =  false' 
(cost=1589703.87..1589703.93). But when I run the query both ways I get 
a vastly different result (below). It appears not to want to bracket the 
salesitems off of the 'id' foreign_key unless I force it.


Is there a way to rewrite or hint the planner to get me the better plan 
without resorting to 'enable_seqscan' manipulation (or am I missing 
something)?


postream=> select version();
 version
-
 PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC 
i386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)



postream=> SET enable_seqscan = false;
SET
postream=> EXPLAIN ANALYZE
postream-> SELECT si.group1_id as name, sum(si.qty) as count, 
sum(si.amt) as amt

postream->   FROM salesitems si, sales s, sysstrings
postream->  WHERE si.id = s.id
postream->AND si.group1_id != ''
postream->AND si.group1_id IS NOT NULL
postream->AND NOT si.void
postream->AND NOT s.void
postream->AND NOT s.suspended
postream->AND s.tranzdate >= (cast('2010-02-15' as date) + 
cast(sysstrings.data as time))
postream->AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + 
cast(sysstrings.data as time))

postream->AND sysstrings.id='net/Console/Employee/Day End Time'
postream->  GROUP BY name;

 QUERY PLAN

 HashAggregate  (cost=1589703.87..1589703.93 rows=13 width=35) (actual 
time=33.414..33.442 rows=12 loops=1)
   ->  Nested Loop  (cost=0.01..1588978.22 rows=96753 width=35) (actual 
time=0.284..22.115 rows=894 loops=1)
 ->  Nested Loop  (cost=0.01..2394.31 rows=22530 width=4) 
(actual time=0.207..4.671 rows=225 loops=1)
   ->  Index Scan using sysstrings_pkey on sysstrings  
(cost=0.00..5.78 rows=1 width=175) (actual time=0.073..0.078 rows=1 loops=1)
 Index Cond: (id = 'net/Console/Employee/Day End 
Time'::text)
   ->  Index Scan using sales_tranzdate_index on sales s  
(cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 
rows=225 loops=1)
 Index Cond: ((s.tranzdate >= ('2010-02-15'::date + 
("outer".data)::time without time zone)) AND (s.tranzdate < 
('2010-02-16'::date + ("outer".data)::time without time zone)))

 Filter: ((NOT void) AND (NOT suspended))
 ->  Index Scan using salesitems_pkey on salesitems si  
(cost=0.00..70.05 rows=30 width=39) (actual time=0.026..0.052 rows=4 
loops=225)

   Index Cond: (si.id = "outer".id)
   Filter: ((group1_id <> ''::text) AND (group1_id IS NOT 
NULL) AND (NOT void))

 Total runtime: 33.734 ms
(12 rows)

postream=> SET enable_seqscan = true;
SET
postream=> EXPLAIN ANALYZE
postream-> SELECT si.group1_id as name, sum(si.qty) as count, 
sum(si.amt) as amt

postream->   FROM salesitems si, sales s, sysstrings
postream->  WHERE si.id = s.id
postream->AND si.group1_id != ''
postream->AND si.group1_id IS NOT NULL
postream->AND NOT si.void
postream->AND NOT s.void
postream->AND NOT s.suspended
postream->AND s.tranzdate >= (cast('2010-02-15' as date) + 
cast(sysstrings.data as time))
postream->AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + 
cast(sysstrings.data as time))

postream->AND sysstrings.id='net/Console/Employee/Day End Time'
postream->  GROUP BY name;

QUERY PLAN
--
 HashAggregate  (cost=54020.49..54020.55 rows=13 width=35) (actual 
time=5564.929..5564.957 rows=12 loops=1)
   ->  Hash Join  (cost=2539.63..53294.84 rows=96753 width=35) (actual 
time=5502.324..5556.262 rows=894 loops=1)

 Hash Cond: ("outer".id = "inner".id)
 ->  Seq Scan on salesitems si  (cost=0.00..30576.60 
rows=885215 width=39) (actual time=0.089..3099.453 rows=901249 loops=1)
   Filter: ((group1_id <> ''::text) AND (group1_id IS NOT 
NULL) AND (NOT void))
 ->  Hash  (cost=2394.31..2394.31 rows=22530 width=4) (actual 
time=3.329..3.329 rows=0 loops=1)
   ->  Nested Loop  (cost=0.01..2394.31 rows=22530 width=4) 
(actual time=0.217..2.749 rows=225 loops=1)
 ->  Index Scan using sysstrings_pkey on 
syss

Re: [PERFORM] Forcing index scan on query produces 16x faster

2010-03-17 Thread David Wilson
On Wed, Mar 17, 2010 at 5:25 PM, Christian Brink wrote:

>
>   ->  Index Scan using sales_tranzdate_index on sales s
>  (cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 rows=225
> loops=1)
>

Have you tried increasing the statistics on that table (and then analyzing)?
The estimates for that index scan are off by a factor of 100, which may
indicate why the planner is trying so hard to avoid a nestloop there.
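
For example, assuming the misestimate comes from sales.tranzdate (the
statistics target here is just a starting point to experiment with):

ALTER TABLE sales ALTER COLUMN tranzdate SET STATISTICS 200;
ANALYZE sales;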

-- 
- David T. Wilson
david.t.wil...@gmail.com


Re: [PERFORM] Forcing index scan on query produces 16x faster

2010-03-17 Thread Tom Lane
Christian Brink  writes:
> Is there a way to rewrite or hint the planner to get me the better plan 
> without resorting to 'enable_seqscan' manipulation (or am I missing 
> something)?

I think your problem is here:

>   PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC 
> i386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)

Recent versions are significantly smarter about grouping operations
than that dinosaur.

(Even if you must stay on 8.0.x, you should at least be running
8.0.something-recent; what you have is full of security and data-loss
risks.  8.0.24 was released this week.)

regards, tom lane



Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Ben Chobot
On Mar 17, 2010, at 7:41 AM, Brad Nicholson wrote:

> As an aside, some folks in our Systems Engineering department here did
> do some testing of FusionIO, and they found that the helper daemons were
> inefficient and placed a fair amount of load on the server.  That might
> be something to watch of for for those that are testing them.

As another anecdote, we have 4 of the 160GB cards in a 24-core Istanbul server. 
I don't know how efficient the helper daemons are, but they do take up about 
half of one core's cycles, regardless of how busy the box actually is. So that 
sounds "bad" until you take into account how much that one core costs, and 
compare it to how much it would cost to have the same amount of IOPs in a 
different form. 


Re: [PERFORM] Forcing index scan on query produces 16x faster

2010-03-17 Thread Eger, Patrick
I'm running 8.4.2 and have noticed a similar heavy preference for
sequential scans and hash joins over index scans and nested loops.  Our
database can basically fit in cache 100%, so this may not be
applicable to your situation, but the following params seemed to help
us:

seq_page_cost = 1.0
random_page_cost = 1.01
cpu_tuple_cost = 0.0001
cpu_index_tuple_cost = 0.5
cpu_operator_cost = 0.25
effective_cache_size = 1000MB
shared_buffers = 1000MB


Might I suggest the Postgres developers reconsider these defaults for
9.0 release, or perhaps provide a few sets of tuning params for
different workloads in the default install/docs? The cpu_*_cost in
particular seem to be way off afaict. I may be dead wrong though, fwiw
=)
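
For what it's worth, all of these except shared_buffers can be tried per
session before committing anything to postgresql.conf, along the lines
of:

SET random_page_cost = 1.01;
SET cpu_tuple_cost = 0.0001;
SET cpu_index_tuple_cost = 0.5;
SET cpu_operator_cost = 0.25;
SET effective_cache_size = '1000MB';
EXPLAIN ANALYZE SELECT ...;   -- re-check the problem query with trial settings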

-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Christian
Brink
Sent: Wednesday, March 17, 2010 2:26 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Forcing index scan on query produces 16x faster

I am running into a problem with a particular query. The execution plan 
cost shows that the Seq Scan is a better bet (cost=54020.49..54020.55) 
over the forced index 'enable_seqscan =  false' 
(cost=1589703.87..1589703.93). But when I run the query both ways I get 
a vastly different result (below). It appears not to want to bracket the

salesitems off of the 'id' foreign_key unless I force it.

Is there a way to rewrite or hint the planner to get me the better plan 
without resorting to 'enable_seqscan' manipulation (or am I missing 
something)?

postream=> select version();
  version

-
  PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC 
i386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)


postream=> SET enable_seqscan = false;
SET
postream=> EXPLAIN ANALYZE
postream-> SELECT si.group1_id as name, sum(si.qty) as count, 
sum(si.amt) as amt
postream->   FROM salesitems si, sales s, sysstrings
postream->  WHERE si.id = s.id
postream->AND si.group1_id != ''
postream->AND si.group1_id IS NOT NULL
postream->AND NOT si.void
postream->AND NOT s.void
postream->AND NOT s.suspended
postream->AND s.tranzdate >= (cast('2010-02-15' as date) + 
cast(sysstrings.data as time))
postream->AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + 
cast(sysstrings.data as time))
postream->AND sysstrings.id='net/Console/Employee/Day End Time'
postream->  GROUP BY name;
 
QUERY PLAN



  HashAggregate  (cost=1589703.87..1589703.93 rows=13 width=35) (actual 
time=33.414..33.442 rows=12 loops=1)
->  Nested Loop  (cost=0.01..1588978.22 rows=96753 width=35) (actual

time=0.284..22.115 rows=894 loops=1)
  ->  Nested Loop  (cost=0.01..2394.31 rows=22530 width=4) 
(actual time=0.207..4.671 rows=225 loops=1)
->  Index Scan using sysstrings_pkey on sysstrings  
(cost=0.00..5.78 rows=1 width=175) (actual time=0.073..0.078 rows=1
loops=1)
  Index Cond: (id = 'net/Console/Employee/Day End 
Time'::text)
->  Index Scan using sales_tranzdate_index on sales s  
(cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 
rows=225 loops=1)
  Index Cond: ((s.tranzdate >= ('2010-02-15'::date +

("outer".data)::time without time zone)) AND (s.tranzdate < 
('2010-02-16'::date + ("outer".data)::time without time zone)))
  Filter: ((NOT void) AND (NOT suspended))
  ->  Index Scan using salesitems_pkey on salesitems si  
(cost=0.00..70.05 rows=30 width=39) (actual time=0.026..0.052 rows=4 
loops=225)
Index Cond: (si.id = "outer".id)
Filter: ((group1_id <> ''::text) AND (group1_id IS NOT 
NULL) AND (NOT void))
  Total runtime: 33.734 ms
(12 rows)

postream=> SET enable_seqscan = true;
SET
postream=> EXPLAIN ANALYZE
postream-> SELECT si.group1_id as name, sum(si.qty) as count, 
sum(si.amt) as amt
postream->   FROM salesitems si, sales s, sysstrings
postream->  WHERE si.id = s.id
postream->AND si.group1_id != ''
postream->AND si.group1_id IS NOT NULL
postream->AND NOT si.void
postream->AND NOT s.void
postream->AND NOT s.suspended
postream->AND s.tranzdate >= (cast('2010-02-15' as date) + 
cast(sysstrings.data as time))
postream->AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + 
cast(sysstrings.data as time))
postream->AND sysstrings.id='net/Console/Employee/Day End Time'
postream->  GROUP BY name;
 
QUERY PLAN

-