Re: [ADMIN] run httpd and postgresql on different machines ?

2005-01-18 Thread Christian Fowler
One machine to run httpd + php and a separate machine for postgres is very 
common and highly recommended.

For safety, I would recommend using a private network (192.168.0.*) to 
connect the two machines, so your DB machine does not even have a public 
IP address. Ideally, use gigabit between the two machines.

Be sure you configure your pg_hba.conf properly too.
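A sketch of what that pg_hba.conf entry might look like on the DB machine (database, user, and addresses here are illustrative, and the two-column address/mask layout is the 7.x-era syntax, which varies a bit across releases):

```conf
# pg_hba.conf on the DB machine: allow only the web server's private
# address, with password auth. All names and addresses are illustrative.
# TYPE  DATABASE  USER     IP-ADDRESS    IP-MASK          METHOD
host    mydb      webuser  192.168.0.10  255.255.255.255  md5
```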
On Tue, 18 Jan 2005, Chuming Chen wrote:
Hi, all,
I want to set up a web site using apache httpd, php and postgresql. From the
performance point of view, which architecture is better? 1) Run httpd and
postgresql on the same machine; 2) Run postgresql on a separate machine. My
concern is that the machine I am going to run httpd on has limited storage,
and I expect the postgresql database to grow once I set it up.

Any suggestions and comments will be highly appreciated.
--
Chuming Chen
System Administrator
NHLBI Proteomics Center
Medical University of South Carolina
135 Cannon Street, Suite 303
Charleston SC 29425
Tel: 843-792-1555 (O)
Fax: 843-876-1126

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
[ \ /
[ >X<   Christian Fowler  | spider AT viovio.com
[ / \   http://www.viovio.com | http://www.tikipro.org


[ADMIN] Assimilation of these "versus" and hardware threads

2005-01-11 Thread Christian Fowler
All of these recent threads about the fastest hardware and "who's better
than whom" have inspired me to create a new website:

http://www.dbtuning.org
I snipped a few bits from recent posts to get some pages started - hope
the innocent don't mind. It's a bit postgres-biased at the moment, since,
well, so am I (though FireBird is now mounting a strong showing...). The
site uses a wiki, so anyone interested can make contributions. We are all
short on time, so I would love any help. I haven't entered any hardware
info yet.

I'll also take a minute to plug a postgres-savvy open-source project used
for this site - http://www.tikipro.org - It's a very flexible web
framework with a very powerful and extensible CMS engine. It just hit
Alpha 4, and we hope to go beta very soon. If you have feedback (or bugs),
please send me a note. (And of course dbtuning is running on postgres ;-)

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] help with - psql: FATAL: Password authentication failed

2004-12-28 Thread Christian Fowler
Sounds like you don't have
tcpip_socket = true
in /var/lib/pgsql/data/postgresql.conf
(it is false by default).
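For the 7.x series that is a one-line change plus a restart; the setting below is the 7.4-era name (it was replaced by listen_addresses in 8.0):

```conf
# /var/lib/pgsql/data/postgresql.conf
tcpip_socket = true    # accept TCP/IP connections; false by default in 7.x
```

After editing, restart the server (e.g. with pg_ctl restart, or your distribution's service script) for the change to take effect.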
On Tue, 28 Dec 2004, Sukhdeep Johar wrote:
Great!!
OK, psql works now.
But pg_connect() still fails.
pg_connect() gets executed when I try to access index.php (the homepage
for gforge).
Execution stalls on reaching pg_connect().
Can you give some pointers on this? I have tried all the docs I could :(
-Sukhdeep
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] upgrading postgresql

2004-12-15 Thread Christian Fowler
2 bits of wisdom:
1. As mentioned below, always try this upgrade on a test / offline /
sandbox / development server first.

-- Clean out the $PGDATA directory (keep a copy of your pg_hba.conf and
postgresql.conf for reference)
2. A simple 'mv data/ data73/' after pg shutdown keeps your 7.3 DB around
in case something goes terribly wrong (I have had some dumps that would
*not* re-import due to illegal characters). If you need to get back
online ASAP, just slip the 7.3 rpms back in, and 'mv data73 data'
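The rename dance can be sketched as a shell session; the paths below are stand-ins (a temp directory plays the role of /var/lib/pgsql), so the real commands against your $PGDATA are left to you:

```shell
# Rehearse the keep-the-old-cluster-around upgrade dance in a scratch dir.
PGBASE=$(mktemp -d)                 # stand-in for /var/lib/pgsql
mkdir "$PGBASE/data"                # pretend this is the live 7.3 cluster
touch "$PGBASE/data/PG_VERSION"

# 1. After 'pg_ctl stop', set the 7.3 cluster aside instead of deleting it:
mv "$PGBASE/data" "$PGBASE/data73"

# 2. Install the 7.4 rpms, initdb a fresh data/, and restore the dump here...
mkdir "$PGBASE/data"

# 3. Emergency rollback if the restore goes badly: drop the failed cluster
#    and slip the old one (and the 7.3 rpms) back in:
rm -rf "$PGBASE/data"
mv "$PGBASE/data73" "$PGBASE/data"
```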

I'd recommend a test run of the major upgrade procedures you're planning
on doing on a backup machine in case you run into some problems.  Use of
a replication engine, like Slony as well as pgpool can allow you to
seamlessly upgrade your installation while it stays online.  I'd
practice this too on a backup system before going after the live
servers.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [ADMIN] joins INNER, OUTER, LEFT, RIGHT, FULL, ...

2004-11-23 Thread Christian Fowler
On Tue, 23 Nov 2004, Roderick A. Anderson wrote:
Jaime Casanova wrote:
http://www.postgresql.org/docs/7.4/static/queries-table-expressions.html#QUERIES-JOIN
Been there, done that.  Bought several tee-shirts.
What do you mean by *complex joins*?
SELECT first, last, username || '@' || dom.domain as emailaddress
FROM cust_main cm
 LEFT OUTER JOIN cust_pkg   cp  ON ( cm.custnum = cp.custnum )
 LEFT OUTER JOIN cust_svc   cs  ON ( cp.pkgnum  = cs.pkgnum )
 LEFT OUTER JOIN svc_email   em ON ( cs.svcnum  = em.svcnum )
 LEFT OUTER JOIN svc_domain dom ON ( em.domain  = dom.svcnum )
WHERE zip = '9'
  AND username || '@' || dom.domain != '@'
Which worked until I added one more table with a one-to-one relation to 
cust_main.custnum.  Then I got several tuples (2+) for each row above.
try:
SELECT first, last, username || '@' || dom.domain as emailaddress
FROM cust_main cm
 INNER JOIN one_more om ON ( om.custnum = cm.custnum )
 LEFT OUTER JOIN cust_pkg   cp  ON ( cm.custnum = cp.custnum )
 LEFT OUTER JOIN cust_svc   cs  ON ( cp.pkgnum  = cs.pkgnum )
 LEFT OUTER JOIN svc_email   em ON ( cs.svcnum  = em.svcnum )
 LEFT OUTER JOIN svc_domain dom ON ( em.domain  = dom.svcnum )
WHERE zip = '9'
  AND username || '@' || dom.domain != '@'
or
SELECT first, last, username || '@' || dom.domain as emailaddress
FROM one_more om, cust_main cm
 LEFT OUTER JOIN cust_pkg   cp  ON ( cm.custnum = cp.custnum )
 LEFT OUTER JOIN cust_svc   cs  ON ( cp.pkgnum  = cs.pkgnum )
 LEFT OUTER JOIN svc_email   em ON ( cs.svcnum  = em.svcnum )
 LEFT OUTER JOIN svc_domain dom ON ( em.domain  = dom.svcnum )
WHERE zip = '9'
  AND om.custnum=cm.custnum
  AND username || '@' || dom.domain != '@'
The explicit join syntax was freaky for me too at first, but after
several years I prefer it now, since you can easily control your left
outer joins.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] psql: FATAL: user "root" does not exist createdb:

2004-11-16 Thread Christian Fowler
execute these as the postgres user:
su - postgres
One option is to create a super-user with something like:
postgres$ createuser -d -a -P jeoff
then:
jeoff$ createdb ibmadb
then do administrative things with that user. I would advise *NOT* using
root. If this is a tightly controlled (non-shared) machine, you could make
a super-user for your normal unix login (which hopefully is not root).
Ideally you'll only need root to start the postgres service.


On Tue, 16 Nov 2004, Jeoffrey L. Palacio wrote:

Hi to all, I'm Jeoffrey, and I'm a newbie to postgreSQL. My problem is
that when I execute the command createdb I always get:
[EMAIL PROTECTED] root]# createdb ibmadb
psql: FATAL:  user "root" does not exist
createdb: database creation failed
Then I tried to create the user root with the command createuser, and I
always get this message:
[EMAIL PROTECTED] root]# createuser
Enter name of user to add: root
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) n
psql: FATAL:  user "root" does not exist
createuser: creation of user "root" failed
My postgresql version is 7.3.4 and my OS is Fedora Core 1, x86_64
architecture.
What is wrong? Please help.
Thanks,
Jeoffrey
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] evil characters #bfef cause dump failure

2004-11-16 Thread Christian Fowler
I strongly agree with this. I have always been uncomfortable selecting
"UNICODE", never quite sure whether it meant the UTF8, UTF16, or UTF32
encoding.

SQL_8BIT or SQL_RAW make much more sense than SQL_ASCII given that Tom 
said this is a lack of encoding. I fear I might have high-bits chopped off 
or something.

However, back to my problem... if a #bfef character is shoved into a
VARCHAR, one's dump is hosed. If I went to various websites and entered
this, I could cause a lot of pain. I believe I noticed some characters
(like newline and tab) are converted to <80> or similar. Could/should
this be extended to more character ranges - particularly high-byte chars
for people with the SQL_ASCII (lack of) encoding?
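One hedged way to spot such bytes before they bite is to run the dump through iconv, which exits non-zero on invalid UTF-8 input (the file here is a throwaway sample; point it at your real dump instead):

```shell
# Build a tiny sample file containing the raw 0xbf 0xef bytes, then ask
# iconv to validate it as UTF-8; a non-zero exit means bad bytes exist.
dump=$(mktemp)
printf 'good line\n'          >  "$dump"
printf 'bad \277\357 line\n'  >> "$dump"   # raw 0xbf 0xef: not valid UTF-8

if iconv -f UTF-8 -t UTF-8 "$dump" > /dev/null 2>&1; then
  result=clean
else
  result=invalid
fi
echo "$result"
```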

On Tue, 16 Nov 2004, Markus Bertheau wrote:
This is, by the way, a reason why this encoding should be renamed to
SQL_8BIT (or something along these lines) and UNICODE to UTF-8.
--
Markus Bertheau <[EMAIL PROTECTED]>
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [ADMIN] evil characters #bfef cause dump failure

2004-11-15 Thread Christian Fowler
db=# SHOW server_encoding;
 server_encoding
-----------------
 SQL_ASCII
(1 row)
Whoa! Yikes, I bet this has a lot to do with it? I really wanted to keep
everything UNICODE end-to-end. I must have forgotten --encoding on my
initdb? Anything I can do at this point?


On Mon, 15 Nov 2004, Tom Lane wrote:
Christian Fowler <[EMAIL PROTECTED]> writes:
[shell]$ env
PGCLIENTENCODING=UNICODE
LANG=en_US.UTF-8

db=# \encoding
UNICODE
I was more concerned about the database encoding, which the above
doesn't prove.  Try "SHOW server_encoding"
regards, tom lane
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] evil characters #bfef cause dump failure

2004-11-15 Thread Christian Fowler
[shell]$ env
PGCLIENTENCODING=UNICODE
LANG=en_US.UTF-8
db=# \encoding
UNICODE
On Mon, 15 Nov 2004, Tom Lane wrote:
Christian Fowler <[EMAIL PROTECTED]> writes:
After much wrestling, it appears the hex byte sequence #bfef in a VARCHAR
column causes a truncated COPY line to be written (and thus the *entire*
COPY block fails).
What database encoding and locale are you using?
regards, tom lane
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[ADMIN] evil characters #bfef cause dump failure

2004-11-15 Thread Christian Fowler
I have been trying to track down why my 7.4.5 database won't reimport
its own dump ( http://archives.postgresql.org/pgsql-admin/2004-10/msg00213.php )

After much wrestling, it appears the hex byte sequence #bfef in a VARCHAR 
column causes a truncated COPY line to be written (and thus the *entire* 
COPY block fails). Exporting as inserts did not fix the problem either.

Any thoughts on why this might be so, or how it can be avoided? Evil
thought of the day: if someone were to go around and paste this
multi-byte character into various websites' html forms, it could cause a
lot of trouble.

Also, the behavior of the restore / psql import completing the COPY
fields from the *following* line seems not good. It would be nice if the
missing columns could just be written as NULLs. 6 bad rows make a 6 gig
dump worthless. Or perhaps an option to import each copied row in its own
transaction, so 5+ million copied rows don't fail for 6 bogus ones. Perhaps a
--this_is_an_emergency_so_please_do_everything_you_can_to_restore_as_much_as_possible
option.
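Until something like that exists, a rough pre-flight check is to count tab-separated fields per COPY line and flag short rows before the restore dies. A sketch, where the 4-column count and sample data are made up; adjust the count to your table:

```shell
# Flag COPY rows whose field count doesn't match the table (here: 4 cols).
copydata=$(mktemp)
printf 'a\tb\tc\td\n'  >  "$copydata"   # complete row
printf 'e\tf\n'        >> "$copydata"   # truncated row (kills a restore)
printf 'g\th\ti\tj\n'  >> "$copydata"   # complete row

bad=$(awk -F'\t' 'NF != 4 { print NR }' "$copydata")
echo "short rows at line(s): $bad"
```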

If any of the core dev's want some small debug dumps I created, I'd be 
happy to pass them on.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [ADMIN] pg_autovacuum is not working

2004-10-29 Thread Christian Fowler
Hi Ivan,
I had the same experience as you. I swear I did all the things required.
Eventually it started working after enough postmaster restarts and
fiddling with the files. I'm sorry I can't be more helpful, because I was
never quite sure why it finally decided to start working...

Keep fiddling and it should hopefully start up.
On Fri, 29 Oct 2004, Ivan Dimitrov wrote:
Hi list
This is my first post here, and I've searched google but couldn't find the
solution.
I'm trying to run pg_autovacuum on all my databases. All my Debian sarge
machines use version 7.4.5-3
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] large dumps won't restore

2004-10-20 Thread Christian Fowler
Joshua - 
Well, I ran a --inserts dump as you suggested, and it died umpteen
millions of rows (and many hours) in, at:

INSERT 59235383 1
invalid command \033',
cannot allocate memory for output buffer
Weiping,
db=# \encoding
UNICODE
The last four columns in the table are:
 full_name_nd  | character varying(200)  |
 mod_date  | timestamp without time zone |
 pc_char   | character varying(4)|
 dim_char  | character varying(16)   |
[EMAIL PROTECTED] root]# env |grep LANG
LANG=en_US.UTF-8
On Wed, 20 Oct 2004, Weiping wrote:
Christian Fowler wrote:
I'm running a large database ( data dir is ~9gigs ) on Fedora Core 2 with 
7.4.5-PGDG rpm's

When using the -Fc dump method + pg_restore, I get:
-bash-2.05b$ pg_restore -Fc -d foo -L backup.list 
/tmp/02\:43-postgresql_database-foo-backup
pg_restore: ERROR:  invalid input syntax for type timestamp: "52.24"
CONTEXT:  COPY foo_data, line 42529, column mod_date: "52.24"
pg_restore: [archiver (db)] error returned by PQendcopy

What's the column type before the column "mod_date"?
Is it a text type? If so, have you used any kind of encoding for your
database? Which one? And what's your PGCLIENTENCODING setting?

Seems like an encoding problem I've met before, but not sure.
regards
Laser
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] NIC to NIC connection

2004-10-19 Thread Christian Fowler
There have been many comments about this already, Kent. My $.02:
The most common practice I am aware of is to install 2 NICs in each
appserver - one to your load balancer, and one to your private network
(192.168.*) where your database server sits. In fact, ideally your
database machine has no publicly addressable NIC at all. I have
personally dealt with such a setup in several installs, some doing
millions of page views per *day*, and it has always been very reliable,
secure, and fast.

Use gigabit everywhere on your 192.168 "database" network. If you are
concerned about bandwidth, set up MRTG ( http://www.mrtg.org ) and look
at the traffic for yourself.

Getting postgres to use this setup should be a piece of cake. Just make
sure your settings in pg_hba.conf are right.
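A sketch of those settings for a 7.x-era install (addresses, names, and the subnet are illustrative; tcpip_socket/virtual_host became listen_addresses in 8.0):

```conf
# postgresql.conf on the DB box: listen only on the private NIC
tcpip_socket = true
virtual_host = '192.168.0.2'      # the DB server's private address

# pg_hba.conf: allow only the appservers' private subnet
# TYPE  DATABASE  USER  IP-ADDRESS   IP-MASK        METHOD
host    all       all   192.168.0.0  255.255.255.0  md5
```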

Good luck.
On Tue, 19 Oct 2004, Kent Anderson wrote:
We are upgrading our servers and have run into an interesting situation.  It
has been proposed that we have a direct connection from the web servers to
the postgres server via extra NICs. Has anyone done this before and how big
a project would it be to alter ASP and Java applications to make use of such
a connection?
Before we even waste time installing the NIC's I would like a sense of how
hard it is to get postgres to use that kind of a connection vs over the
Internet. We are looking to increase communication speed between the web
servers and database server as much as possible.
Thanks
Kent Anderson
[ \ /
[ >X<   Christian Fowler  | [EMAIL PROTECTED]
[ / \   http://www.viovio.com | http://www.tikipro.org
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[ADMIN] large dumps won't restore

2004-10-18 Thread Christian Fowler
I'm running a large database ( data dir is ~9gigs ) on Fedora Core 2 with 
7.4.5-PGDG rpm's

When using the -Fc dump method + pg_restore, I get:
-bash-2.05b$ pg_restore -Fc -d foo -L backup.list 
/tmp/02\:43-postgresql_database-foo-backup
pg_restore: ERROR:  invalid input syntax for type timestamp: "52.24"
CONTEXT:  COPY foo_data, line 42529, column mod_date: "52.24"
pg_restore: [archiver (db)] error returned by PQendcopy

When using pg_dump + psql:
I get a similar error. In one table that has about 5.4 million rows, the
dump has several incomplete rows in the large block of COPY data. It
seems to fill out the row with data from the *next* line.

ERROR:  invalid input syntax for type timestamp: "4"
CONTEXT:  COPY foo_data, line 169371, column mod_date: "4"
Going to 169371 lines after the start of the copy, there is indeed a
short row (short by two fields), and two fields into the *next* line is
indeed a "4".
Both methods seem to have the same problem - incomplete COPY row data.
Any ideas?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] set AUTOCOMMIT OFF in psqlrc not having affect

2004-10-17 Thread Christian Fowler
Thanks Tom, it appears that was the issue. Bruce, it would appear it was
case-sensitive at some point? Glad to hear it's now insensitive.


On Sun, 17 Oct 2004, Bruce Momjian wrote:
FYI, it is not case-sensitive in current CVS.
Tom Lane wrote:
Christian Fowler <[EMAIL PROTECTED]> writes:
I have \set AUTOCOMMIT OFF in my .psqlrc and it seems psql seems to not
have any effect.
I think the value is case-sensitive.  Try
\set AUTOCOMMIT off
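So on a 7.4-era psql, the .psqlrc line needs the lowercase value; a minimal sketch of the relevant line:

```text
\set AUTOCOMMIT off
```

The rest of the .psqlrc (prompt, \timing, etc.) can stay as it was; only the case of the value matters here.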

---(end of broadcast)---
TIP 8: explain analyze is your friend


[ADMIN] set AUTOCOMMIT OFF in psqlrc not having affect

2004-10-17 Thread Christian Fowler
I have \set AUTOCOMMIT OFF in my .psqlrc and it seems to have no effect.
Am I missing something? I've been googling and googling and am a bit
lost at this point.

[EMAIL PROTECTED] host]$ cat ~/.psqlrc
\set AUTOCOMMIT OFF
\echo 'AUTOCOMMIT is' :AUTOCOMMIT
\set PROMPT1 'host.%/%R%# '
\timing
*timing statements were trimmed below*
[EMAIL PROTECTED] host]$ psql
Welcome to psql 7.4.5, the PostgreSQL interactive terminal.
Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help on internal slash commands
   \g or terminate with semicolon to execute query
   \q to quit
AUTOCOMMIT is OFF
Timing is on.
host.domain=# commit;
WARNING:  there is no transaction in progress
COMMIT
host.domain=# update foo set foo_type_id = 3 where foo_id=20234;
UPDATE 1
host.domain=# commit;
WARNING:  there is no transaction in progress
COMMIT
host.domain=# update foo set foo_type_id = 3 where foo_id=20234;
UPDATE 1
host.domain=# rollback;
WARNING:  there is no transaction in progress
ROLLBACK
host.domain=# begin;
BEGIN
host.domain=# update foo set foo_type_id = 3 where foo_id=20234;
UPDATE 1
host.domain=# rollback;
ROLLBACK
host.domain=# \q
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [ADMIN] PostgreSQL Tool?

2004-10-14 Thread Christian Fowler

http://www.thekompany.com/products/dataarchitect/ is an awesome tool.
The "Reverse Engineer" ability will build the model from an existing
database.

On Thu, 14 Oct 2004, Sídar LC wrote:
Where can I find a tool to print graphical PostgreSQL schemas?
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [ADMIN] Does PostgreSQL Stores its database in multiple disks?

2004-10-01 Thread Christian Fowler

Igor,
I would recommend you investigate LVM:
http://www.tldp.org/HOWTO/LVM-HOWTO/
This lets you string multiple physical disks into a single volume, and
expand the volume later. I am unsure of the postgres consequences of
this for as much data as you have, so I will leave it to others to
comment.

However, my recommendation: for a modest investment in a third 120GB
drive and a RAID card, you could do RAID 5 with 3 disks and get 240GB of
storage.
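For the manual route mentioned in the quoted question, the classic trick was to move a big table's underlying file to the second disk and leave a symlink behind (with postgres stopped). The sketch below rehearses it in scratch directories; all paths and the relfilenode are made up:

```shell
# Rehearse the move-and-symlink split in two scratch dirs that stand in
# for the two disks. On a real system: stop postgres first, and find the
# table's file via its relfilenode under $PGDATA/base/<db-oid>/.
disk1=$(mktemp -d)    # stand-in for the disk holding $PGDATA
disk2=$(mktemp -d)    # stand-in for the second 120GB disk

mkdir -p "$disk1/base/16384"
echo data > "$disk1/base/16384/16937"            # pretend big-table file

mv "$disk1/base/16384/16937" "$disk2/16937"      # move file to new disk
ln -s "$disk2/16937" "$disk1/base/16384/16937"   # leave symlink behind
```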

On Fri, 1 Oct 2004, Igor Maciel Macaubas wrote:
Hi Guys,
I have a PostgreSQL server, running 7.4.2, that will store a really
large amount of data (200GB) being migrated from an Oracle database. I
have a machine with two 120GB Ultra ATA IDE disks, and I'd like to know
if PostgreSQL can split it over both disks (which gives me 240GB of
storage). Does it do this automatically? Or will I have to split it
manually, creating symbolic links on my file system (ext3)? Has anyone
ever stored a database (see, it's ONE database only, not a cluster) on
multiple disks?

What about PgSQL 8? Will it include this feature?
Thanks!
Regards,
Igor
--
[EMAIL PROTECTED]
[ \ /
[ >X<   [EMAIL PROTECTED]   |   http://www.viovio.com/
[ / \
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] PLEASE GOD HELP US!

2004-10-01 Thread Christian Fowler

I would like to also add that I've been a sysadmin for quite some time, and
I've been designing databases for quite some time as well. I'm no idiot, I
just can't find the bottleneck here (if one does in fact exist). So in light
of this, please send me some suggestions I can work with.
Here are three concise suggestions:
1. You say your issue is only under load; then I can probably guarantee
your issue is available connections:

max_connections = 50

That number is way too small for a site with decent traffic. Make it
something like 500. How mindful are you about opening connections per
page view?
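In postgresql.conf terms that's a one-line change plus a restart (the 500 is the ballpark from above, not a universal constant; each connection costs memory, so watch your shared_buffers sizing too):

```conf
# postgresql.conf
max_connections = 500    # was 50; size to your worst-case concurrent load
```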

2. You have a query taking 9 seconds. Run that query by hand in the
shell to find out why. Rework the query, and add or recreate indices as
necessary. Using conditionals in ORDER BY is a bad idea - do it in the
WHERE clause, per my previous email.

3. Leave your attitude at the door, or just leave. Making comments like
"I take vast offense to the suggestion that my query / database design
is at fault here" and "I must admit that I expected much more from this
list than I've received so far"
makes it clear that you have a lot more room for growth as a developer.
If you stop biting the hands that help you, you will learn a lot more.

One last thing... even with ALL of the data intact (and yes, we DID do
testing... we just didn't have enough ppl to test the production server
load)
Another mistake showing your lack of experience. Use apache bench (the
ab command) to simulate load.

The only information I can give at the moment about the number of queries
per second is this: there is an average of 60 users online at any given
time, and the average number of queries per page load is 12, and they are
refreshing and clicking etc quite a bit... so I'd say about 120 queries per
second or so... (VERY rough estimate)
And you only have 50 max_connections for postgres? What are you thinking? 
Of course every apache process is waiting for a connection. Also, do you 
even have enough apache workers?

Your attitude sucks. Your problems are right under your nose, and 
you are too cocky to learn your tools. I imagine you are getting hammered 
by your co-workers to get things fixed. I will tell you empathy will 
always get you much farther than animosity.

[ \ /
[ >X<   [EMAIL PROTECTED]   |   http://www.steelsun.com/
[ / \
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [ADMIN] PLEASE GOD HELP US!

2004-10-01 Thread Christian Fowler
Hi Shane,
As many others have alluded to, performance like this is almost always
attributable to your queries not using an index. Be it Oracle, MySQL,
or postgres, I have seen this problem pop up often.

Also, could you tell us what language you are using, and if you are using 
a DB abstraction layer?

On to the particulars:
# WEBSITE #
   # SAMPLE DUMP OF COMMON PAGE-SPECIFIC QUERIES
   8 Queries Totaling 10.7413 Seconds
Since one query is taking 90% of the time, it is clearly the first
culprit:

   SQL:  SELECT * FROM thread_listing AS t ORDER BY t.status=5
DESC,t.lastreply desc LIMIT 25 OFFSET 0
   Num Rows:25
   Affected Rows:0
   Exec Time:  9.1602659225464
Your SQL here seems atypical to me. I would write it
as:

SELECT * FROM thread_listing AS t WHERE t.status=5 ORDER BY t.lastreply 
desc LIMIT 25 OFFSET 0;

Run that from a psql shell, and see if that speed things up. If not, run:
db=> EXPLAIN ANALYSE SELECT * FROM thread_listing AS t WHERE t.status=5 
ORDER BY t.lastreply desc LIMIT 25 OFFSET 0;

and
db=> \d thread_listing
And send it to the list. You are in good shape, I think, and porting
won't be necessary. I've used many DBs and postgres is my favorite by
far. I'd say you've made a good choice ;-)

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly