[HACKERS] Query regarding postgres lock contention - Followup

2011-10-05 Thread Hamza Bin Sohail

In addition to the previous post,

My Postgres version is 8.3.7.

>Hi there,
>
>Just to let you know, I'm not a database expert by any means.
>I have configured dbt-2 with Postgres and created a database with 4000
>warehouses, 150 customers, etc. The database size is over 8 GB. I am aware
>that lock contention can be checked with lockstat (and with pg_locks?), but
>I wanted to know if someone can tell me how much contention there would be
>for this database on a 16-core system vs. a 4-core system. I just need a
>rough idea.
>
>Any response would be very helpful.
>
>Thanks
>
>~Hamza

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Query regarding postgres lock contention

2011-10-05 Thread Hamza Bin Sohail
Hi there,

Just to let you know, I'm not a database expert by any means.
I have configured dbt-2 with Postgres and created a database with 4000
warehouses, 150 customers, etc. The database size is over 8 GB. I am aware
that lock contention can be checked with lockstat (and with pg_locks?), but
I wanted to know if someone can tell me how much contention there would be
for this database on a 16-core system vs. a 4-core system. I just need a
rough idea.

Any response would be very helpful.

Thanks

~Hamza
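One way to get a rough picture from the pg_locks view mentioned above is to count waiting vs. granted lock requests; a sketch (on 8.3, rows with granted = false are backends waiting on a lock):

```sql
-- Rough contention signal: how many lock requests are waiting vs. granted.
-- A persistently large count of granted = false rows suggests real contention.
SELECT mode, granted, count(*) AS n
FROM pg_locks
GROUP BY mode, granted
ORDER BY n DESC;
```

Sampling this periodically while the dbt-2 run is in progress gives a crude contention profile without any OS-level tracing.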



Re: [HACKERS] would hw acceleration help postgres (databases in general) ?

2010-12-10 Thread Hamza Bin Sohail

Thanks a lot for all the replies. Very helpful; really appreciate it.

- Original Message - 
From: "Jeff Janes" 

To: "Hamza Bin Sohail" 
Cc: 
Sent: Friday, December 10, 2010 7:18 PM
Subject: Re: [HACKERS] would hw acceleration help postgres (databases in general)?



On Fri, Dec 10, 2010 at 3:09 PM, Hamza Bin Sohail
wrote:

> Hello hackers,
>
> I think I'm at the right place to ask this question.
>
> Based on your experience and the fact that you have written the Postgres
> code, can you tell what a rough break-down - in your opinion - is for the
> time the database spends just "fetching and writing" stuff to memory vs.
> the actual computation?

The database is a general purpose tool.  Pick a bottleneck you wish to have,
and probably someone uses it in a way that causes that bottleneck to occur.

> The reason I ask this is because of late there has been a push to put
> reconfigurable hardware on processor cores. What this means is that
> database writers can possibly identify the compute-intensive portions of
> the code, write hardware accelerators and/or custom instructions, and
> offload computation to these hardware accelerators, which they would have
> programmed onto the FPGA.

When people don't use prepared statements, parsing can become a bottleneck.

If Bison's yyparse could be put on an FPGA in a transparent way, then
anyone using Bison, including PG, might benefit.

That's just one example, of course.

Cheers,

Jeff
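To make the prepared-statement point above concrete, a minimal sketch (the table and column names are hypothetical, in the spirit of a dbt-2-style schema):

```sql
-- Parse and plan once; each subsequent EXECUTE skips the parser entirely.
PREPARE find_customer (int) AS
    SELECT * FROM customer WHERE c_id = $1;

EXECUTE find_customer (42);
EXECUTE find_customer (99);

DEALLOCATE find_customer;
```

With plain queries, every statement goes through yyparse; with PREPARE/EXECUTE the parse cost is paid once per session, which is why parsing only shows up as a bottleneck for workloads that don't use prepared statements.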






[HACKERS] would hw acceleration help postgres (databases in general) ?

2010-12-10 Thread Hamza Bin Sohail

Hello hackers,

I think I'm at the right place to ask this question.

Based on your experience and the fact that you have written the Postgres code,
can you tell what a rough break-down - in your opinion - is for the time the
database spends just "fetching and writing" stuff to memory vs. the actual
computation? The reason I ask this is because of late there has been a push to
put reconfigurable hardware on processor cores. What this means is that
database writers can possibly identify the compute-intensive portions of the
code, write hardware accelerators and/or custom instructions, and offload
computation to these hardware accelerators, which they would have programmed
onto the FPGA.

There is not much utility in doing this if there aren't considerable
compute-intensive operations in the database (which I would be surprised to
find true). I would suspect joins, complex queries, etc. may be very
compute-intensive. Please correct me if I'm wrong. Moreover, if you were told
that you have reconfigurable hardware which can perform pretty complex
computations 10x faster than the base processor, would you think about
synthesizing it directly on an FPGA and using it?

I'd be more than glad to hear your guesstimates.

Thanks a lot!


Hamza



[HACKERS] large page query

2010-11-29 Thread Hamza Bin Sohail
Hi,

I posted this email on the other Postgres lists but did not get a reply, so as
a last resort I came here. I hope somebody can help.

I am looking into the impact of large page sizes on the performance of
commercial workloads, e.g. databases, web servers, virtual machines, etc. I was
wondering whether Postgres administrators configure the Postgres DBMS with
large page support for shared memory regions, specifically on the Solaris 9
and 10 OSes. My understanding is that since large pages (4 MB) are suitable
for applications allocating large shared memory regions (databases, for
instance), Postgres would most definitely use the large page support. Is it
functionality placed into Postgres by the developers, or does the
administrator have to configure the database to use it?

So in a nutshell, the questions are:

1) Does Postgres use large page support? On Solaris 10 and the UltraSPARC III
processor, a large page is 4 MB. It significantly reduces the page table size
of the application, and a 1000-entry TLB can cover the entire 4 GB of memory.
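The TLB-coverage arithmetic behind point 1 can be checked quickly (assuming UltraSPARC III's 8 KB base page size):

```python
KB, MB, GB = 2**10, 2**20, 2**30

mem = 4 * GB            # memory to be covered by the TLB
base_page = 8 * KB      # UltraSPARC III base page size
large_page = 4 * MB     # Solaris large page size

entries_base = mem // base_page    # TLB entries needed with base pages
entries_large = mem // large_page  # TLB entries needed with large pages

print(entries_base)   # 524288 -- far beyond any realistic TLB
print(entries_large)  # 1024   -- roughly a 1000-entry TLB covers all 4 GB
```

So with 4 MB pages a ~1000-entry TLB really does cover 4 GB, while 8 KB pages would need over half a million entries.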

2) On Solaris 9 and 10, does Postgres rely on the MPSS support provided by the
operating system, relegating to the OS the job of figuring out what to allocate
as a large page and what not to, and when? Or have the Postgres developers
written it judiciously, so that Postgres itself knows what to allocate as a
large page and what not to? The reason I ask is that, for a JVM, Solaris 10
allocates large pages for the heap memory by default; no runtime parameters
are needed when one runs the JVM, and the OS is smart enough to figure this
out (probably by looking at which application is running).

3) In light of all this, do we know the performance difference between Postgres
configured without large pages and Postgres configured with large pages?


Your replies are highly appreciated.


Hamza