Re: [PERFORM] Wierd issues
Yes, I did run ANALYZE. Here is the get_pwd function:

-- Function: public.get_pwd(varchar, varchar, varchar, varchar)

-- DROP FUNCTION public.get_pwd(varchar, varchar, varchar, varchar);

CREATE OR REPLACE FUNCTION public.get_pwd(varchar, varchar, varchar, varchar)
  RETURNS varchar AS '
DECLARE
    p_username    ALIAS for $1;
    p_server      ALIAS for $2;
    p_service     ALIAS for $3;
    p_pwd_type    ALIAS for $4;
    l_resource_id integer;
    l_server_id   integer;
    l_service_id  integer;
    l_allow_deny  char(1);
    l_user_id     integer;
    l_account_id  integer;
    l_passwd      varchar(40);
begin
    -- get server identifier
    select id into l_server_id
      from servers s
     where address = p_server;

    if NOT FOUND then
        -- try to get default server
        select id into l_server_id
          from servers s
         where address = \'default\';
    end if;

    if l_server_id isnull then
        return NULL;
    end if;

    -- get service identifier
    select id into l_service_id
      from services s
     where radius_service = p_service;

    if l_service_id isnull then
        return NULL;
    end if;

    -- get resource identifier (server/service combination)
    select id into l_resource_id
      from resources r
     where service_id = l_service_id
       and server_id = l_server_id;

    -- could not find resource via server_id, now look via server\'s group if any
    if l_resource_id isnull then
        select id into l_resource_id
          from resources r
         where service_id = l_service_id
           and server_group_id = (select server_group_id
                                    from servers
                                   where id = l_server_id);
    end if;

    -- could not determine resource user wants to access, so deny by returning NULL passwd
    if l_resource_id isnull then
        return NULL;
    end if;

    -- at this point we have a valid resource_id
    -- determine if valid username
    select u.id, u.account_id
      into l_user_id, l_account_id
      from users u, accounts a
     where u.username = upper(p_username)   -- always uppercase in DB
       and u.del_id = 0
       and u.status = \'A\'
       and a.status = \'A\'
       and u.account_id = a.id;

    -- if active user not found then return NULL for passwd
    if l_user_id isnull then
        return null;
    end if;

    -- user specific control
    select allow_deny into l_allow_deny
      from users_acl
     where resource_id = l_resource_id
       and user_id = l_user_id;

    if l_allow_deny = \'D\' then
        return NULL;
    elsif l_allow_deny isnull then
        -- no user-specific control
        select max(allow_deny)   -- \'D\' is > \'A\' hence deny takes precedence if conflict across groups
          into l_allow_deny
          from users_acl
         where resource_id = l_resource_id
           and user_group_id in (select user_group_id
                                   from user_group_assignments
                                  where user_id = l_user_id);
    elsif l_allow_deny = \'A\' then
        -- do nothing;
        -- get and return passwd below
    end if;

    if l_allow_deny isnull or l_allow_deny = \'D\' then
        return NULL;
    elsif l_allow_deny = \'A\' then
        select password into l_passwd
          from user_pwds
         where password_type = upper(p_pwd_type)
           and user_id = l_user_id;
        return l_passwd;
    else
        return null;
    end if;
end;
' LANGUAGE 'plpgsql' VOLATILE;

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Friday, April 09, 2004 8:02 AM
To: Andrew Matthews
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Wierd issues

"Andrew Matthews" <[EMAIL PROTECTED]> writes:
> [ PG 7.3.4 much slower than 7.2.1 ]
>
> Both have same databases, Both have had vacume full ran on them.

You did ANALYZE too, right?
The bulk of the time is evidently going into the seqscan on users in each case:

> ->  Seq Scan on users u  (cost=0.00..1938.51 rows=71283 width=4) (actual time=0.81..30119.58 rows=70809 loops=1)

> ->  Seq Scan on users u  (cost=0.00..1888.85 rows=71548 width=4) (actual time=18.38..2277152.51 rows=71028 loops=1)
>       Filter: (get_pwd(username, '127.0.0.1'::character varying, '101'::character varying, 'MD5'::character varying) IS NOT NULL)

I have to suspect that the inefficiency is inside this get_pwd() function, but you didn't tell us anything about that...

			regards, tom lane
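The slow plan above calls get_pwd() once for every row of users (roughly 71,000 rows), so a per-call cost of only ~30 ms would by itself account for the ~2,277 seconds of that scan. A minimal sketch of how one might confirm that, using a hypothetical username 'SOMEUSER' (these queries are illustrative, not from the thread):

    -- Time a single call of the function in isolation:
    EXPLAIN ANALYZE
    SELECT get_pwd('SOMEUSER', '127.0.0.1', '101', 'MD5');

    -- Rough arithmetic from the slow plan: ~71,000 calls at ~32 ms per call
    SELECT 71000 * 0.032 AS estimated_seconds;   -- about 2272, close to the 2277 s observed

Each get_pwd() call itself runs half a dozen SELECTs (servers, services, resources, users/accounts, users_acl, user_group_assignments, user_pwds), so a missing index on any of those lookups is multiplied across every row of users scanned by the filter.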
Re: [PERFORM] good pc but bad performance,why?
On Wednesday 07 April 2004 16:59, Andrew McMillan wrote:
> One thing I recommend is to use ext2 (or almost anything but ext3).
> There is no real need (or benefit) from having the database on a
> journalled filesystem - the journalling is only trying to give similar
> sorts of guarantees to what the fsync in PostgreSQL is doing.

That is not a correct assumption. A journalling file system ensures file system consistency, even at the cost of losing some data. PostgreSQL cannot guarantee recovery if the WAL logs are corrupt. Some months back, a case was reported where ext2 corrupted the WAL and the database. A backup is the only solution then.

Journalling file systems are usually very close to ext2 in performance, and often a lot better. With ext2, you are buying a huge risk. Unless there are good reasons, I would not put a database on ext2, and performance isn't one of them.

Shridhar
[PERFORM] Performance data for OpenFTS?
Hey,

Has anyone done performance tests for OpenFTS on a really large database? I was speaking at PerlMongers and somebody asked.

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco
Re: [PERFORM] Upgrading question (recycled transaction log)
"Patrick Hatcher" <[EMAIL PROTECTED]> writes:
> Doing an upgrade from 7.3.6 to 7.4.2 and I keep seeing the recycled
> transaction log about every 2 mins. For future upgrades, is there
> something that can be set so that I don't have as many recycles?

Increasing checkpoint_segments ... but you seem to have that pretty high already.

			regards, tom lane
[PERFORM] Upgrading question (recycled transaction log)
Doing an upgrade from 7.3.6 to 7.4.2 and I keep seeing the recycled transaction log about every 2 mins. For future upgrades, is there something that can be set so that I don't have as many recycles? It seems to slow down the importing of data.

Here are my current settings:

#---------------------------------------------------------------------------
# WRITE AHEAD LOG
#---------------------------------------------------------------------------

# - Settings -

#fsync = true                   # turns forced synchronization on or off
#wal_sync_method = fsync        # the default varies across platforms:
                                # fsync, fdatasync, open_sync, or open_datasync
wal_buffers = 32                # min 4, 8KB each

# - Checkpoints -

checkpoint_segments = 30        # in logfile segments, min 1, 16MB each
checkpoint_timeout = 600        # range 30-3600, in seconds
#checkpoint_warning = 30        # 0 is off, in seconds
#commit_delay = 0               # range 0-100000, in microseconds
#commit_siblings = 5            # range 1-1000

TIA
Patrick Hatcher
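If the goal is simply to have checkpoints (and the "recycled transaction log file" messages that accompany them) occur less often during a large import, the knobs are checkpoint_segments and checkpoint_timeout. A sketch with illustrative values only, not taken from the thread and subject to Tom Lane's caveat in the reply above that 30 segments is already fairly high:

    # Illustrative values for a bulk-load window: more WAL disk space is used
    # and crash recovery takes longer, in exchange for fewer checkpoints.
    checkpoint_segments = 64        # was 30; each segment is 16MB
    checkpoint_timeout = 1800       # was 600; up to 30 minutes between timed checkpoints
    checkpoint_warning = 300        # warn in the log if checkpoints come closer than 5 minutes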
Re: [PERFORM] plan problem
Ken Geis <[EMAIL PROTECTED]> writes:
> Does anyone think that the planner issue has merit to address? Can
> someone help me figure out what code I would look at?

The planner doesn't currently attempt to "drill down" into a sub-select-in-FROM to find statistics about the variables emitted by the sub-select. So it's just falling back to a default estimate of the number of distinct values coming out of the sub-select.

The "drilling down" part is not hard; the difficulty comes from trying to figure out whether and how the stats from the underlying column would need to be adjusted for the behavior of the sub-select itself. As an example, the result of (SELECT DISTINCT foo FROM bar) would usually have much different stats from the raw bar.foo column. In your example, the LIMIT clause potentially affects the stats by reducing the number of distinct values.

Now in most situations where the sub-select wouldn't change the stats, there's no issue anyway, because the planner will flatten the sub-select into the main query. So we really have to figure out the adjustment part before we can think about doing much here.

			regards, tom lane
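A sketch of the flattening distinction Tom describes, reusing his hypothetical table bar and column foo (these queries are illustrative, not from the thread):

    -- Flattened: a trivial sub-select is pulled up into the outer query,
    -- so the planner can use the statistics gathered for bar.foo directly.
    SELECT foo, count(*)
      FROM (SELECT foo FROM bar) AS sub
     GROUP BY foo;

    -- Not flattened: DISTINCT (or a LIMIT, as in Ken's query) forces the
    -- sub-select to be planned separately, and the planner falls back to a
    -- default guess for the number of distinct foo values it emits.
    SELECT foo, count(*)
      FROM (SELECT DISTINCT foo FROM bar) AS sub
     GROUP BY foo;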
Re: [PERFORM] [ADMIN] Raw devices vs. Filesystems
Grega,

> Well, as I said, that's why I was asking - I'm willing to give it a go
> if nobody can prove me wrong. :)

Why not? If you have time?

> I thought you knew - OCFS, OCFS-Tools and OCFSv2 have not only been
> open-source for quite a while now - they're released under the GPL.

Keen! Wonder if we can make them regret it.

Seriously, if Oracle opened this stuff, it's probably because they used some GPL components in it. It also probably means that it won't work for anything but Oracle ...

> I don't know what that means to you (probably nothing good, as PostgreSQL
> is released under the BSD license),

Well, it just means that we can't ship OCFS with PostgreSQL.

> The question does spring up though, that Steve raised in another post -
> just for the record, what POSIX semantics can a postmaster live without in
> a filesystem?

You might want to ask that question again on Hackers. I don't know the answer, myself.

--
Josh Berkus
Aglio Database Solutions
San Francisco
Re: [PERFORM] Wierd issues
"Andrew Matthews" <[EMAIL PROTECTED]> writes:
> [ PG 7.3.4 much slower than 7.2.1 ]
>
> Both have same databases, Both have had vacume full ran on them.

You did ANALYZE too, right?

The bulk of the time is evidently going into the seqscan on users in each case:

> ->  Seq Scan on users u  (cost=0.00..1938.51 rows=71283 width=4)
>     (actual time=0.81..30119.58 rows=70809 loops=1)

> ->  Seq Scan on users u  (cost=0.00..1888.85 rows=71548 width=4)
>     (actual time=18.38..2277152.51 rows=71028 loops=1)
>       Filter: (get_pwd(username, '127.0.0.1'::character varying,
>       '101'::character varying, 'MD5'::character varying) IS NOT NULL)

I have to suspect that the inefficiency is inside this get_pwd() function, but you didn't tell us anything about that...

			regards, tom lane