[PERFORM] Any advantage to integer vs stored date w. timestamp

2007-03-06 Thread Zoolin Lin
Hi,

I have a database with a huge amount of data, so I'm trying to make it as fast 
as possible and minimize space.

One thing I've done is join on a prepopulated date lookup table to prevent a 
bunch of rows with duplicate date columns. Without this I'd have about 2,500 
rows per hour with the exact same date w. timestamp in them.

My question is: with Postgres, do I really gain anything by this, or should I 
just use the date w. timestamp column on the primary table and ditch the join 
on the date_id table?

Primary table is all integers like:

date_id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num8
----------------------------------------------------------------
primary key is on the date_id through num6 columns

date_id lookup table:

This table is prepopulated with the date values that will be used.

date_id | date w timestamp

1 | 2007-2-15 Midnight
2 | 2007-2-15 1 am
3 | 2007-2-15 2 am  (etc., 24 rows for each day)


Each day 60k records are added to a monthly table structured as above, about 
2500 per hour.
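
For reference, a rough sketch of the two tables as I have them (table and 
column names are simplified for the example; the monthly table name here is 
just a placeholder):

-- rough sketch of the current layout (names simplified)
CREATE TABLE date_id_lookup (
    date_id  integer PRIMARY KEY,
    ts       timestamp with time zone NOT NULL
);

CREATE TABLE data_2007_03 (
    date_id  integer NOT NULL REFERENCES date_id_lookup (date_id),
    num1 integer NOT NULL, num2 integer NOT NULL, num3 integer NOT NULL,
    num4 integer NOT NULL, num5 integer NOT NULL, num6 integer NOT NULL,
    num7 integer NOT NULL, num8 integer NOT NULL,
    PRIMARY KEY (date_id, num1, num2, num3, num4, num5, num6)
);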

Thank you for your advice.

 

Re: [PERFORM] Any advantage to integer vs stored date w. timestamp

2007-03-07 Thread Zoolin Lin
thanks for your reply

> Primary table is all integers like:
> 
> date_id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num8
> ----------------------------------------------------------------
> primary key is on the date_id through num6 columns

>>What types are num1->8?

They are all integer


> date_id | date w timestamp
> 1 | 2007-2-15 Midnight
> 2 | 2007-2-15 1 am
> 3 | 2007-2-15 2 am  (etc., 24 rows for each day)

>> If you only want things accurate to an hour, you could lose the join and
>> just store it as an int: 2007021500, 2007021501 etc.

Hmm, yes, I could. I think with the amount of data in the db, though, it 
behooves me to use one of the date types, even if via a lookup table.

So I guess I'm just not sure if I'm really gaining anything by using an 
integer date_id column and doing a join on a date lookup table, vs. just 
making it a date w. timestamp column and having duplicate dates in that column.

I would imagine that internally the date w. timestamp is stored as perhaps a 
time_t type plus some timezone information. I don't know if it takes that much 
more space, or whether there's a significant performance penalty in using it.

2,500 rows per hour, with duplicate date columns, seems like it could add up 
though.

thanks

Richard Huxton wrote:
Zoolin Lin wrote:
> Hi,
> 
> I have a database with a huge amount of data, so I'm trying to make it
> as fast as possible and minimize space.
> 
> One thing I've done is join on a prepopulated date lookup table to
> prevent a bunch of rows with duplicate date columns. Without this I'd
> have about 2,500 rows per hour with the exact same date w. timestamp
> in them.
> 
> My question is: with Postgres, do I really gain anything by this, or
> should I just use the date w. timestamp column on the primary table
> and ditch the join on the date_id table?
> 
> Primary table is all integers like:
> 
> date_id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num8
> ----------------------------------------------------------------
> primary key is on the date_id through num6 columns

What types are num1->8?

> date_id lookup table:
> 
> This table is prepopulated with the date values that will be used.
> 
> date_id | date w timestamp
> 1 | 2007-2-15 Midnight
> 2 | 2007-2-15 1 am
> 3 | 2007-2-15 2 am  (etc., 24 rows for each day)

If you only want things accurate to an hour, you could lose the join and
just store it as an int: 2007021500, 2007021501 etc.

That should see you good to year 2100 or so.
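
Something like this would do the conversion each way (untested sketch):

-- timestamp -> int of the form YYYYMMDDHH
SELECT to_char(now(), 'YYYYMMDDHH24')::int;        -- e.g. 2007030714
-- and back again, if you ever need the real timestamp
SELECT to_timestamp('2007030714', 'YYYYMMDDHH24');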

-- 
   Richard Huxton
   Archonet Ltd


Re: [PERFORM] Any advantage to integer vs stored date w. timestamp

2007-03-07 Thread Zoolin Lin
Thank you for the reply
 
>> Primary table is all integers like:
>> 
>> date_id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num8
>> ----------------------------------------------------------------
>> primary key is on the date_id through num6 columns
> 
>>> What types are num1->8?
> 
> They are all integer

>>Hmm - not sure if you'd get any better packing if you could make some
>>int2 and put them next to each other. Need to test.

Thanks. I found virtually nothing on the int2 column type beyond a brief 
mention here:
http://www.postgresql.org/docs/8.2/interactive/datatype-numeric.html#DATATYPE-INT

Could I prevail on you to expand on packing with int2 a bit more, or point me 
in the right direction for documentation?

If there's some way I can pack multiple columns into one to save space, yet 
still effectively query on them, even if it's a lot slower, that would be great.

My current scheme, though as normalized and summarized as I can make it, really 
chews up a ton of space. It might even be chewing up more than the data files 
I'm summarizing, I assume due to the indexing.
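
(I've been eyeballing the sizes with something like the following, where 
'my_table' is just a placeholder for the monthly table name.)

-- table size alone vs. table plus its indexes and toast
SELECT pg_size_pretty(pg_relation_size('my_table'))       AS table_only,
       pg_size_pretty(pg_total_relation_size('my_table')) AS with_indexes;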

Regarding saving disk space, I saw someone mention doing a custom build and 
changing

TOAST_TUPLE_THRESHOLD/TOAST_TUPLE_TARGET

so that data is compressed sooner. It seems like that might be a viable option 
as well.

http://www.thescripts.com/forum/thread422854.html

> 2,500 rows per hour, with duplicate date columns, seems like it could
> add up though.

>>>Well, let's see 2500*24*365 = 21,900,000 * 4 bytes extra = 83MB 
>>>additional storage over a year. Not sure it's worth worrying about.

Ah yes, probably better to make it a date w. timestamp column then.

Z






Richard Huxton wrote:
Zoolin Lin wrote:
> thanks for your reply
> 
>> Primary table is all integers like:
>> 
>> date_id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num8
>> ----------------------------------------------------------------
>> primary key is on the date_id through num6 columns
> 
>>> What types are num1->8?
> 
> They are all integer

Hmm - not sure if you'd get any better packing if you could make some
int2 and put them next to each other. Need to test.

>> date_id | date w timestamp
>> 1 | 2007-2-15 Midnight
>> 2 | 2007-2-15 1 am
>> 3 | 2007-2-15 2 am  (etc., 24 rows for each day)
> 
>>> If you only want things accurate to an hour, you could lose the
>>> join and just store it as an int: 2007021500, 2007021501 etc.
> 
> Hmm, yes, I could. I think with the amount of data in the db, though, it
> behooves me to use one of the date types, even if via a lookup table.

You can always create it as a custom ZLDate type. All it really needs to 
be is an int with a few casts.
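
A real base type needs C input/output functions, but just to sketch the idea 
with a domain plus a couple of helper functions (untested, names invented):

-- very rough sketch: an int-backed "date id" with conversions each way
CREATE DOMAIN zldate AS integer;

CREATE FUNCTION zldate_to_ts(zldate) RETURNS timestamp AS
  $$ SELECT to_timestamp($1::text, 'YYYYMMDDHH24')::timestamp $$
  LANGUAGE sql IMMUTABLE;

CREATE FUNCTION ts_to_zldate(timestamp) RETURNS zldate AS
  $$ SELECT to_char($1, 'YYYYMMDDHH24')::integer::zldate $$
  LANGUAGE sql IMMUTABLE;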

> So I guess I'm just not sure if I'm really gaining anything by using
> an integer  date id column and doing a join on a date lookup table,
> vs just making it a date w. timestamp column and having duplicate
> dates in that column.
> 
> I would imagine internally that the date w. timestamp is stored as
> perhaps a time_t type  plus some timezone information. I don't know
> if it takes that much more space, or there's a significant
> performance penalty in using it

It's a double or int64 I believe, so allow 8 bytes instead of 4 for your 
int.
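
You can check the on-disk sizes with pg_column_size, something like:

SELECT pg_column_size(now()) AS ts_bytes,      -- 8
       pg_column_size(1::int4) AS int_bytes;   -- 4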

> 2,500 rows per hour, with duplicate date columns, seems like it could
> add up though.

Well, let's see 2500*24*365 = 21,900,000 * 4 bytes extra = 83MB 
additional storage over a year. Not sure it's worth worrying about.

-- 
   Richard Huxton
   Archonet Ltd


 

Re: [PERFORM] Any advantage to integer vs stored date w. timestamp

2007-03-09 Thread Zoolin Lin
thanks for your reply

>>I think with packing he was referring to simply having more values in 
>>the same disk space by using int2 instead of int4. (half the storage space)

I see, yes. The values I'm dealing with are a bit too large to do that, but 
it's a good technique; were they smaller I would use it.

It looks like if I do CREATE TYPE with variable length, I can make a type 
that's potentially toastable. I could decrease the toast threshold and 
recompile.

I'm not sure if that's very practical; I know next to nothing about using 
CREATE TYPE. But if I can essentially make a toastable integer column type 
that's indexable and doesn't have an insane performance penalty, that would 
be great.

Looks like my daily data is about 25 MB before insert (e.g. via COPY table TO 
'somefile';). After insert, and doing VACUUM FULL and REINDEX, it's at about 
75 MB.

If I gzip-compress that 25 MB file it's only 6.3 MB, so I'd think if I could 
make a toastable type it would benefit.

Need to look into it now, I may be completely off my rocker.

Thank you

Shane Ambler <[EMAIL PROTECTED]> wrote:
Zoolin Lin wrote:
> Thank you for the reply
>  
>>> Primary table is all integers like:
>>>
>>> date_id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num8
>>> ----------------------------------------------------------------
>>> primary key is on the date_id through num6 columns
>>>> What types are num1->8?
>> They are all integer
> 
>>> Hmm - not sure if you'd get any better packing if you could make some
>>> int2 and put them next to each other. Need to test.
> 
> Thanks. I found virtually nothing on the int2 column type beyond a brief 
> mention here:
> http://www.postgresql.org/docs/8.2/interactive/datatype-numeric.html#DATATYPE-INT
> 
> Could I prevail on you to expand on packing with int2 a bit more, or point me 
> in the right direction for documentation?

int4 is the internal name for integer (4 bytes)
int2 is the internal name for smallint (2 bytes)

Try

SELECT format_type(oid, NULL) AS friendly, typname AS internal,
       typlen AS length FROM pg_type WHERE typlen > 0;

to see them all (a negative typlen means variable size, usually an array, 
bytea, etc.).

I think with packing he was referring to simply having more values in 
the same disk space by using int2 instead of int4. (half the storage space)

> 
> If there's some way I can pack multiple columns into one to save space, yet 
> still effectively query on them, even if it's a lot slower, that would be 
> great.

Depending on the size of the data you need to store, you may be able to get 
some benefit from "packing" multiple values into one column. But I'm not 
sure if you need to go that far. What range of numbers do you need to 
store? If you don't need the full int4 range of values then try a 
smaller data type. If int2 is sufficient then just change the columns 
from integer to int2 and cut your storage in half. Easy gain.
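
For example, something like the following (the table and column names are just 
placeholders):

-- halve the storage of one column by switching integer -> smallint
ALTER TABLE my_table ALTER COLUMN num1 TYPE int2;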

The "packing" theory would fall under general programming algorithms not 
postgres specific.

Basically, let's say you have 4 values that are in the range of 1-254 (1 
byte each); you can do something like
col1 = ((val1<<0) | (val2<<8) | (val3<<16) | (val4<<24))

This will put the four values into one 4 byte int.

So searching would be something like
WHERE (col1 & ((255<<0) | (255<<8))) = ((val1<<0) | (val2<<8))
if you needed to search on more than one value (here val1 and val2) at a time.

Guess you can see what your queries will be looking like.

(Actually I'm not certain I got that 100% correct)

That's a simple example that should give you the general idea. In 
practice you would only get gains if you have unusually sized values, so 
if you had value ranges from 0 to 1023 (10 bits each) then you could 
pack 3 values into an int4 instead of using 3 int2 cols (that's 32 bits 
for the int4 against 48 bits for the 3 int2 cols), and you would use <<10 
and <<20 in the above example.

You may find it easier to define a function or two to automate this 
instead of repeating it for each query. But with disks and RAM as cheap 
as they are these days this sort of packing is getting rarer (except 
maybe in embedded systems with limited resources).
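
Something along these lines, say, for the 10-bit case (untested sketch, 
function names invented):

-- pack three 0-1023 values into one int4, and pull one back out
CREATE FUNCTION pack3(a int, b int, c int) RETURNS int AS
  $$ SELECT ($1 & 1023) | (($2 & 1023) << 10) | (($3 & 1023) << 20) $$
  LANGUAGE sql IMMUTABLE;

CREATE FUNCTION unpack3(packed int, pos int) RETURNS int AS
  $$ SELECT ($1 >> (10 * $2)) & 1023 $$   -- pos is 0, 1 or 2
  LANGUAGE sql IMMUTABLE;

-- e.g.  SELECT unpack3(pack3(5, 700, 42), 1);   -- returns 700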

> My current scheme, though as normalized and summarized as I can make it, 
> really chews up a ton of space. It might even be chewing up more than the 
> data files i'm summarizing, I assume due to the indexing.
> 


-- 

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz



 