All the above are the exact same point in time, merely expressed
relative to each location. Note that given a timestamp with time zone
and a zone, PostgreSQL returns a timestamp without time zone (you know
the zone since you specified it).
Yes, I know the zone. But I don't know the offset
So you took two distinct points in time, threw away some critical
information, and are surprised that they are now equal?
Well, I did not want to throw away any information. The actual
representation could be something like:
2012-11-04 01:30:00-08 in Europe/Budapest, Winter time
and
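The ambiguity being discussed can be made concrete in a few lines of Python. A minimal sketch (note: the -08/-07 offsets in the example above match America/Los_Angeles rather than Europe/Budapest, so that zone is assumed here; the point is that 2012-11-04 01:30 occurred twice, and the zone name alone does not pin down the offset):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# 2012-11-04 01:30 happened twice in America/Los_Angeles: once at
# UTC-07 (DST) and once at UTC-08 (standard time).  The zone alone
# does not determine the offset; the PEP 495 "fold" flag does.
tz = ZoneInfo("America/Los_Angeles")
first = datetime(2012, 11, 4, 1, 30, tzinfo=tz)           # fold=0: first pass, DST
second = datetime(2012, 11, 4, 1, 30, fold=1, tzinfo=tz)  # fold=1: second pass, standard

print(first.utcoffset() == timedelta(hours=-7))   # True
print(second.utcoffset() == timedelta(hours=-8))  # True
```

Dropping the offset (or the fold flag) collapses these two distinct instants into the same local timestamp, which is exactly the information loss described above.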
Hello,
Under FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell
PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)
* ZFS is journaled, and it is more independent of the hardware. So if
the computer fails, I can move the ZFS array to a different server.
I wonder whether UFS has better performance. Or can you suggest
another filesystem? This is just for the PGDATA directory.
Relying on physically moving a disk isn't a good backup/recovery
strategy. Disks are the least reliable single component in a modern
computer. You should figure out the
On 24.07.2012 14:51, Laszlo Nagy wrote:
* UFS is not journaled.
There is journal support for UFS as far as I know. Please have a look
at the gjournal manpage.
Yes, but gjournal works on disk devices. I would have to rely on the
hardware card for RAID. When the card goes wrong I won't be able
Are you running a lot of full table updates?
If you mean updates which are applied to every or almost every row of
the table - yes, that happens with two rather small tables of at most
10,000 rows. But neither of them is touched by the query with this big
performance difference.
I'm not an expert, but
I suspect that, since the matched hid values probably aren't sequential,
many of those ~500 product_price_offer_history rows will be far apart
on disk.
OMG I was a fool! I'll CLUSTER on a different index and it will be fast,
I'm sure.
Thanks!
L
--
Sent via pgsql-performance mailing list
Given two tables:
CREATE TABLE product_price_history
(
    hid bigint NOT NULL,
    hdate timestamp without time zone NOT NULL,
    id bigint NOT NULL,
    product_id bigint NOT NULL,
    -- more columns here
    CONSTRAINT pk_product_price_history PRIMARY KEY (hid)
);
CREATE INDEX
This query:
select p.id,p.producer_id,visa.variation_item_id, vi.qtyavail
from variation_item_sellingsite_asin visa
inner join product p on p.id = visa.product_id
inner join variation_item vi on vi.id =
Hi All,
I'm working on a client program that iterates over master-detail
relationships in a loop chain.
Pseudo code:
for row_1 in table_1:
    table_2 = get_details(row_1, table2)
    for row_2 in table_2:
        row_3 = get_details(row_2, table3)
etc.
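One way to avoid issuing a get_details() query per row (the classic "N+1" pattern the loop above produces) is to fetch each detail table once and group its rows by foreign key in memory. A sketch with hypothetical in-memory stand-ins for the three tables (the column names t1_id/t2_id are assumptions, not from the original schema):

```python
from collections import defaultdict

# Hypothetical in-memory stand-ins for the three tables in the loop above.
table_1 = [{"id": 1}, {"id": 2}]
table_2 = [{"id": 10, "t1_id": 1}, {"id": 11, "t1_id": 2}]
table_3 = [{"id": 100, "t2_id": 10}, {"id": 101, "t2_id": 11}]

# Fetch each detail table once, then group its rows by foreign key,
# instead of one round trip per master row.
t2_by_parent = defaultdict(list)
for r in table_2:
    t2_by_parent[r["t1_id"]].append(r)
t3_by_parent = defaultdict(list)
for r in table_3:
    t3_by_parent[r["t2_id"]].append(r)

# The nested iteration is now pure in-memory dictionary lookups.
triples = [
    (r1["id"], r2["id"], r3["id"])
    for r1 in table_1
    for r2 in t2_by_parent[r1["id"]]
    for r3 in t3_by_parent[r2["id"]]
]
print(triples)  # [(1, 10, 100), (2, 11, 101)]
```

The same effect can usually be had server-side with a single join, but the grouping trick applies when the client must iterate row by row.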
This will be a simple question to answer. :-) There is a single table:
select count(*) from product_price_history -- 12982555 rows
This table has exactly one index and one primary key constraint:
CREATE INDEX idx_product_price_history_id_hdate
ON product_price_history
USING btree
(id,
On 2011-01-12 14:42, Florian Weimer wrote:
* Laszlo Nagy:
This query:
select hid from product_price_history where id=35547581
Returns 759 rows in 8837 msec! How can this be that slow???
If most records are on different heap pages, processing this query
requires many seeks. 11ms per seek
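The arithmetic supports the seek explanation: dividing the observed runtime by the row count lands almost exactly on one random-seek latency per row. A quick check:

```python
rows = 759          # rows returned by the query
runtime_ms = 8837   # observed runtime in milliseconds

# ~11.6 ms per row: roughly one random seek per row on a spinning
# disk, which is why the query is seek-bound rather than CPU-bound.
print(round(runtime_ms / rows, 1))  # 11.6
```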
On 2011-01-12 15:36, Kevin Grittner wrote:
Laszlo Nagy <gand...@shopzeus.com> wrote:
shared_mem = 6GB
work_mem = 512MB
total system memory=24GB
In addition to the good advice from Ken, I suggest that you set
effective_cache_size (if you haven't already). Add whatever the OS
shows as RAM used
A change has been written to the WAL and fsync()'d, so Pg knows it's hit
disk. It can now safely apply the change to the tables themselves, and
does so, calling fsync() to tell the drive containing the tables to
commit those changes to disk.
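The write-then-fsync handshake described above can be sketched in a few lines. This is not PostgreSQL code, just the syscall sequence it relies on (the record content is hypothetical); whether the drive actually honours the flush is exactly the "drive lies" problem discussed next:

```python
import os
import tempfile

# Minimal sketch of the durability step: append a record, then fsync()
# so the kernel asks the drive to commit it to stable storage before
# we proceed.  fsync() returning is the signal Pg trusts.
fd, path = tempfile.mkstemp(prefix="wal-")
try:
    os.write(fd, b"begin;update ...;commit\n")  # hypothetical WAL record
    os.fsync(fd)  # returns only after the OS has pushed it toward the drive
finally:
    os.close(fd)
    os.unlink(path)
```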
The drive lies, returning success for the fsync when
- Pg doesn't know the erase block sizes or positions. It can't group
writes up by erase block except by hoping that, within a given file,
writing in page order will get the blocks to the disk in roughly
erase-block order. So your write caching isn't going to do anywhere near
as good a job as
Heikki Linnakangas wrote:
Laszlo Nagy wrote:
* I need at least 32GB disk space. So DRAM based SSD is not a real
option. I would have to buy 8x4GB memory, costs a fortune. And
then it would still not have redundancy.
At 32GB database size, I'd seriously consider just
Robert Haas wrote:
2009/11/14 Laszlo Nagy <gand...@shopzeus.com>:
32GB is for one table only. This server runs other applications, and you
need to leave space for sort memory, shared buffers etc. Buying 128GB memory
would solve the problem, maybe... but it is too expensive. And it is not
safe
* I could buy two X25-E drives and have 32GB disk space, and some
redundancy. This would cost about $1600, not counting the RAID
controller. It is on the edge.
This was the solution I went with (4 drives in a raid 10 actually).
Not a cheap solution, but the performance is
Hello,
I'm about to buy SSD drive(s) for a database. For decision making, I
used this tech report:
http://techreport.com/articles.x/16255/9
http://techreport.com/articles.x/16255/10
Here are my concerns:
* I need at least 32GB disk space. So DRAM based SSD is not a real
option. I
Note that some RAID controllers (3Ware in particular) refuse to
recognize MLC drives; they act as if the OCZ Vertex series does not
exist when connected.
I don't know what they're looking for (perhaps some indication that
actual rotation is happening?) but this is a potential
Hi All,
We have a bigger table with several million rows. The number of index scans is
high, the number of seq reads is low. This table is often joined with
others... so we want to buy a new SSD drive, create a tablespace on it
and put this big table on it. Random read speed on SSD is identical to
seq
Robert Haas wrote:
2009/11/9 Laszlo Nagy <gand...@shopzeus.com>:
We have a bigger table with several million rows. The number of index scans is
high, the number of seq reads is low. This table is often joined with
others... so we want to buy a new SSD drive, create a tablespace on it
and put this big
Tom Lane wrote:
Laszlo Nagy <gand...@shopzeus.com> writes:
On an 8-processor system, my stats collector is always at 100% CPU.
What platform? What Postgres version?
regards, tom lane
8.3.5 on FreeBSD 7.0 amd64
What version of Postgres are you using?
8.3.5 on FreeBSD amd64
On an 8-processor system, my stats collector is always at 100% CPU.
Meanwhile disk I/O is very low. We have many databases, and they are
accessed frequently. Sometimes there are big table updates, but most
of the time only simple queries are run against the databases, returning
a few records
Hi All,
I have these indexes on a table:
CREATE INDEX uidx_product_partno_producer_id
ON product
USING btree
(partno, producer_id);
CREATE INDEX idx_product_partno
ON product
USING btree
(partno);
Can I safely delete the second one? Will postgresql use
(partno,producer_id) when it
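A composite btree is sorted by its leading column first, so a lookup on partno alone can still range-scan the (partno, producer_id) index. A toy sketch of that idea with a sorted list and bisect (plain Python, not PostgreSQL internals; the partno values are made up):

```python
import bisect

# A btree on (partno, producer_id), modelled as a sorted list of tuples.
index = sorted([("P100", 7), ("P100", 3), ("P200", 1), ("P300", 9)])

def lookup_partno(partno):
    # A partno-only lookup is a range scan over the leading column:
    # everything between ("P100",) and ("P100", +inf) has that partno.
    lo = bisect.bisect_left(index, (partno,))
    hi = bisect.bisect_right(index, (partno, float("inf")))
    return index[lo:hi]

print(lookup_partno("P100"))  # [('P100', 3), ('P100', 7)]
```

This is why the single-column (partno) index is usually redundant, although the composite index is somewhat larger and marginally slower to scan.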
We have several tables where the index size is much bigger than the
table size.
Example:
select count(*) from product_price -- 2234244
Table size: 400 MB
Index size: 600 MB
After executing reindex table product_price, index size reduced to 269MB.
I believe this affects performance.
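For scale, the numbers above mean more than half the index was bloat:

```python
before_mb, after_mb = 600, 269
bloat = (before_mb - after_mb) / before_mb
print(f"{bloat:.0%}")  # 55% of the index was reclaimable dead space
```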
My other idea was that there are so many indexes on this table, maybe
the update is slow because of the indexes?
Updating indexes is certainly very far from being free. How many is
many?
Number of indexes = 15.
3 indexes are on text-type columns, 500MB in size each.
Other are on
In 8.3 the HOT feature may help if the columns being updated are not
indexed ... what version of PostgreSQL is this again ? (Forgive my
lack of memory -- the last few days I've forgotten a lot, heh heh.)
8.3.5.
The column that was being updated is part of one small index only.
Any chances
SQL:
update product set sz_category_id=null where am_style_kw1 is not null
and sz_category_id is not null
query plan:
Seq Scan on product (cost=0.00..647053.30 rows=580224 width=1609)
Filter: ((am_style_kw1 IS NOT NULL) AND (sz_category_id IS NOT NULL))
Information on the table:
row
Laszlo Nagy wrote:
SQL:
update product set sz_category_id=null where am_style_kw1 is not null
and sz_category_id is not null
Hmm, this query:
select count(*) from product where am_style_kw1 is not null and
sz_category_id is not null and sz_category_id <> 4809
runs in 10 seconds. The update
If the table has some sort of FK relations it might be being slowed by
the need to check whether a row about to be deleted has any children.
If you look at my SQL, there is only one column to be updated. That
column has no foreign key constraint. (It should have, but we did not
want to add that
I just tested the same on a test machine. It only has one processor 1GB
memory, and one SATA disk. The same select count(*) was 58 seconds. I
started the same UPDATE with EXPLAIN ANALYZE. It has been running for
1000 seconds now. I'm now 100% sure that the problem is with the database,
because this
and see if its output changes when you start to trace it.
%cat test.c
#include <stdio.h>
#include <unistd.h>

int main() {
    while (1) {
        sleep(5);
        printf("ppid = %d\n", getppid());
    }
    return 0;
}
%gcc -o test test.c
%./test
ppid = 47653
ppid = 47653
ppid = 47653 # Started truss -p 48864 here!
ppid = 49073
Posted to the wrong list by mistake. Sorry.
Hi again,
Should I use gjournal on FreeBSD 7? Or just soft updates?
Here is my opinion: I suspect that gjournal would be much slower than
soft updates. Also gjournal is relatively new code, not very well
tested. But gjournal is better when the system crashes. Although I have
heard that
I'm about to buy a new server. It will be a Xeon system with two
processors (4 cores per processor) and 16GB RAM. Two RAID extenders
will be attached to an Intel s5000 series motherboard, providing 12
SAS/Serial ATA connectors.
The server will run FreeBSD 7.0, PostgreSQL 8, apache, PHP, mail
going to the same drives. This turns your fast sequential I/O into
random I/O with the accompanying 10x or more performance decrease.
Unless you have a good RAID controller with battery-backed-up cache.
All right. :-) This is what I'll have:
Boxed Intel Server Board S5000PSLROMB with
Hello,
I need to install a new server for postgresql 8.3. It will run two
databases, web server and some background programs. We already have a
server but it is becoming slow and we would like to have something that
is faster. It is a cost sensitive application, and I would like to get
your
I guess you mean postgresql 8.3.1? :-)
Yep. Sorry.
Question 3. FreeBSD 7.0 can use the ZFS file system. I suspect that UFS
2 + soft updates will be better, but I'm not sure. Which is better?
I'd stick with ufs2 atm. There are some issues with zfs which probably
have been ironed
I have had extremely bad performance historically with onboard SATA chipsets
on Linux. The one exception has been with the Intel based chipsets (not the
CPU, the I/O chipset).
This board has an Intel chipset. I cannot remember the exact type, but
it was not in the low-end category.
dmesg
Hello,
I have a big table called products. Table size: 1123MB. Toast table
size: 32MB. Indexes size: 380MB.
I try to do a query like this:
select id,name from products where name like '%Mug%';
Yes, I know that tsearch2 is better for this, but please read on. The
above query gives this
Luke Lonergan wrote:
Lazlo,
Meanwhile, iostat 5 gives something like this:
tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
This is your problem. Do
Heikki Linnakangas wrote:
Is there any other columns besides id and name in the table? How big
is products.txt compared to the heap file?
Yes, many other columns. The products.txt is only 59MB, similar in size
to the index (66MB).
Another question: I have a btree index on
Tom Lane wrote:
Only if the index is capable of disgorging the original value of the
indexed column, a fact not in evidence in general (counterexample:
polygons indexed by their bounding boxes in an r-tree). But yeah,
it's interesting to think about applying filters at the index fetch
step for
Tom Lane wrote:
Why is that showing 85+ percent *system* CPU time?? I could believe a
lot of idle CPU if the query is I/O bound, or a lot of user time if PG
was being a hog about doing the ~~ comparisons (not too unlikely BTW).
I'm sorry, this was really confusing. I don't know what it was -
Craig A. James wrote:
There IS a bug for SATA disk drives in some versions of the Linux
kernel. On a lark I ran some of the I/O tests in this thread, and
much to my surprise discovered my write speed was 6 MB/sec ... ouch!
On an identical machine, different kernel, the write speed was 54