[GENERAL] Permission template

2008-03-15 Thread Alban Hertroys

Hi all,

Reading up on the recent (and recurring) discussion about granting
privileges on multiple tables, I suddenly wondered whether the CREATE
TABLE LIKE approach could be used to copy privileges to a to-be-created
table. From the documentation it doesn't look like it, but wouldn't
that be neat? It's pretty much like using a database as a template, but
for tables instead of databases.


Here's an example of what I mean:
CREATE ROLE normal_user;

-- Create our template table
CREATE TABLE user_template ();
REVOKE ALL ON user_template FROM normal_user;
GRANT SELECT ON user_template TO normal_user;

-- Use our template table to create other tables with the same privileges

CREATE TABLE accounts (
    account_id  serial  PRIMARY KEY,
    name        text    NOT NULL,
    LIKE user_template INCLUDING PRIVILEGES
);

etc.

After which accounts would have only SELECT privileges for normal_user.
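
You could then verify the result with the standard privilege functions
(these exist today; only the LIKE ... INCLUDING PRIVILEGES clause above
is the hypothetical part):

SELECT has_table_privilege('normal_user', 'accounts', 'SELECT');  -- true
SELECT has_table_privilege('normal_user', 'accounts', 'INSERT');  -- false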

Of course with this approach you'd still have to alter privileges for  
each table if you change your mind on who can access what, but it  
should make setting up privileges in a consistent way easier. Some  
way to copy privileges from another object might be a nice addition  
to this... Comments?


Regards,

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] Percent-encoding conversion to binary, %C2%A9 = ©

2008-03-15 Thread Martijn van Oosterhout
On Thu, Mar 13, 2008 at 03:49:37PM +0100, Hans Schou wrote:
> Hi
> 
> I have a little trouble with the chr() function.
> 
> I have a string like this:
> "Copyright+%C2%A9+1856+Na%C3%AFve+retros%C2%AE"
> which should be converted to binary string like:
> "Copyright © 1856 Naïve retros®"
> 
> Is there an easy way to do this conversion?

Looks like you have UTF-8 text encoded with percent signs. Perhaps the
right approach is to convert the incoming text into a bytea and then
use convert() to turn it into a string.
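
Something along these lines might work as a self-contained decoder (a
sketch only; the function name is made up, and convert_from()/convert_to()
are the 8.3-and-later spellings of the conversion functions):

CREATE OR REPLACE FUNCTION percent_decode(input text) RETURNS text AS $$
DECLARE
    result bytea := '';
    i int := 1;
    c text;
BEGIN
    WHILE i <= length(input) LOOP
        c := substr(input, i, 1);
        IF c = '%' THEN
            -- the two hex digits after '%' encode one raw byte
            result := result || decode(substr(input, i + 1, 2), 'hex');
            i := i + 3;
        ELSIF c = '+' THEN
            -- '+' stands for a space in form encoding
            result := result || ' '::bytea;
            i := i + 1;
        ELSE
            result := result || convert_to(c, 'UTF8');
            i := i + 1;
        END IF;
    END LOOP;
    -- reinterpret the accumulated raw bytes as UTF-8 text
    RETURN convert_from(result, 'UTF8');
END;
$$ LANGUAGE plpgsql IMMUTABLE;

SELECT percent_decode('Copyright+%C2%A9+1856+Na%C3%AFve+retros%C2%AE');
-- Copyright © 1856 Naïve retros®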

Hope this helps,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] postgre vs MySQL

2008-03-15 Thread Ron Mayer

Greg Smith wrote:
> On Fri, 14 Mar 2008, Andrej Ricnik-Bay wrote:
> > A silly question in this context:  If we know of a company that does
> > use PostgreSQL but doesn't list it anywhere ... can we take the
> > liberty to publicise this somewhere anyway?


I notice Oracle (and Sleepycat before them) had a lot of fun
pointing out when Microsoft uses BDB.

http://www.oracle.com/technology/oramag/oracle/07-jan/o17opensource.html
   You'll find Oracle Berkeley DB "under the hood" in everything
   from Motorola cell phones, Microsoft/Groove's collaboration suite

and it seems unlikely Microsoft gave them their blessing.

> Bad idea.  There are companies who consider being listed as a user of a
> product a sort of recommendation of that technology, and accordingly


Other reasons a company might get offended by this:

 * They might consider it a trade secret and a competitive advantage
   over competitors; and internally enjoy giggling when they see
   their competitors sign deals with expensive databases.

 * They might have a close business partnership with Microsoft
   or Oracle that could be strained if they support other databases.

I suspect my employer would not like it announced for both reasons.


> they will get really annoyed...asked to be removed from the list of
> those using PostgreSQL.  ... PostgreSQL inside, it's best not to
> publish the results unless you like to collect cease & desist letters.


While I agree companies are likely to get annoyed, just like fast-food
companies do when you say how much trans fat their products contain,
I'm rather curious what such a cease & desist letter would say.




Re: [GENERAL] Trouble with Savepoints in postgres

2008-03-15 Thread sam
On Mar 12, 3:31 pm, sam <[EMAIL PROTECTED]> wrote:
> On Mar 12, 8:11 am, [EMAIL PROTECTED] (Alvaro Herrera) wrote:
>
>
>
> > Please always ensure that the list is copied on replies (use "Reply to
> > all") so that other people can help you.
>
> > sam escribió:
>
> > > On Mar 11, 5:39 pm, [EMAIL PROTECTED] (Alvaro Herrera) wrote:
> > > > sam escribió:
>
> > > > > I am not able to understand if this is a version problem or if the way I am
> > > > > using savepoints is wrong. Please advise.
>
> > > > It is.  You cannot use savepoints in PL/pgSQL functions (or any function
> > > > for that matter).  You can use EXCEPTION clauses instead.
> > > Then can you please tell me how savepoints can be used... The
> > > program I am working on is throwing a 'LIMIT EXCEEDED' error, so I am
> > > trying to commit data so that a total rollback does not occur, like
> > > committing data after every 1000 transactions. I figured that
> > > savepoints would be the solution.
>
> > No, savepoints will not help you there.  No matter what you do, you
> > cannot commit in the middle of a function.
>
> > What's the limit being exceeded?  Perhaps you can solve your problem
> > some other way.
>
> > --
> > Alvaro Herrera  Developer,http://www.PostgreSQL.org/
> > "Escucha y olvidarás; ve y recordarás; haz y entenderás" (Confucio)
>
>
> Also can you please tell me why I get the error... ERROR: SPI_execute_plan
> failed executing query "PREPARE TRANSACTION 'foo'":
> SPI_ERROR_TRANSACTION
>
> I get this error when I also use COMMIT or ROLLBACK. Does this mean a
> patch is missing?
>
> Thanks
> Sam

OK, I realised that the only way data can be committed within a
procedure is to use nested BEGIN..END.
For example:

BEGIN

 statement1

 BEGIN
statement2
 END
END

So if statement2 fails, data is rolled back only to the inner
BEGIN. In other words, statement1's changes are retained.


In my case I use a for loop and update data row by row:

BEGIN
   FOR every record in CURSOR
  UPDATE DATA for the row
   END FOR
END

Since I have large amounts of data, if any error occurred the entire
transaction was rolled back.

The solution for this would be:

BEGIN
   FOR every record in CURSOR
  UPDATE()
   END FOR
END

FUNCTION UPDATE ()
BEGIN
 UPDATE statement

 EXCEPTION
END


When one record fails, only that record's data is rolled back; the rest
of the data is saved. The EXCEPTION has to be caught.
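
Here is a concrete sketch of that pattern in PL/pgSQL (the table and
column names are made up, and I renamed the inner function since UPDATE
is a keyword):

CREATE OR REPLACE FUNCTION update_one(p_id integer) RETURNS void AS $$
BEGIN
    UPDATE accounts SET balance = balance * 1.01 WHERE id = p_id;
EXCEPTION
    WHEN OTHERS THEN
        -- only this row's work is rolled back; the caller's loop goes on
        RAISE NOTICE 'row % skipped: %', p_id, SQLERRM;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION update_all() RETURNS void AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT id FROM accounts LOOP
        PERFORM update_one(r.id);
    END LOOP;
END;
$$ LANGUAGE plpgsql;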

Hope this helps anyone else facing similar issues.

Sam



Re: [GENERAL] Trouble with Savepoints in postgres

2008-03-15 Thread Tom Lane
sam <[EMAIL PROTECTED]> writes:
> Ok i realised that the only way data can be committed within a
> procedure is to use nested BEGIN..END.

Nothing that you said in this message is correct.  You can't "commit"
anything within a function, and bare BEGIN/END don't do anything at
all except create a syntactic grouping.

BEGIN/EXCEPT/END can indeed be used to limit the scope of errors,
but I don't think the way you've described it is a helpful way
to think about it, even with that correction.

regards, tom lane



Re: [GENERAL] Trouble with Savepoints in postgres

2008-03-15 Thread Craig Ringer

sam wrote:


FUNCTION UPDATE ()
BEGIN
 UPDATE statement

 EXCEPTION
END


when one record fails data only for that is rolled back the rest of
the data is saved. EXCEPTION has to be caught.

  
As I recently found out, too many savepoints really kill PostgreSQL's 
performance in a transaction. A function inserting/updating, say, 
100,000 records will perform OK, but statements run in the same 
transaction after the function completes will be very slow.


So ... if you find that after your big update statement performance is 
terrible, you might need to explicitly check the conditions that might 
result in an exception and skip those records, thus avoiding the 
EXCEPTION block.
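
For example, a sketch of pre-checking instead of trapping, assuming the
exceptions were foreign-key violations (the schema here is made up):

CREATE OR REPLACE FUNCTION update_all_checked() RETURNS void AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT id, new_parent FROM child LOOP
        -- test the condition that would otherwise raise an error, and
        -- skip the row instead of paying for a subtransaction per row
        IF EXISTS (SELECT 1 FROM parent WHERE id = r.new_parent) THEN
            UPDATE child SET parent_id = r.new_parent WHERE id = r.id;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;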


--
Craig Ringer



Re: [GENERAL] Reindex does not finish 8.2.6

2008-03-15 Thread Clodoaldo
2008/3/13, Alvaro Herrera <[EMAIL PROTECTED]>:
> Clodoaldo escribió:
>
> > 2008/3/13, Alvaro Herrera <[EMAIL PROTECTED]>:
>  > > Clodoaldo escribió:
>  > >
>  > >
>  > >  > Now what is happening is that reindex does not finish even with a
>  > >  > small 6,500 rows table and after a reboot. In top there is no CPU or
>  > >  > memory usage by postmaster and vmstat shows no disk activity.
>  > >
>  > > Hmm, are you vacuuming the system catalogs appropriately?
>  >
>  > If a simple vacuum instead of a vacuum full is appropriate, then yes,
>  > the db is vacuumed every three hours after each bulk insert/delete.
>
>
> As superuser?  Take a look at whether relations are skipped during
>  vacuum.
>
>  Try vacuuming pg_class, pg_index, pg_attribute manually and see if that
>  makes the problem go away.  If it doesn't, then my guess is that it's
>  time to see what the hanging process is doing -- try an strace on it, or
>  attaching it with gdb and getting a backtrace.  (I hope your binaries
>  have debug symbols).
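
For reference, a minimal sketch of the manual catalog vacuum suggested
above (run as a superuser):

VACUUM pg_class;
VACUUM pg_index;
VACUUM pg_attribute;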

This is strace started before the reindex table command.

postmaster process:
# strace -f -p 2263
Process 2263 attached - interrupt to quit
select(6, [3 4 5], NULL, NULL, {35, 749000}) = 1 (in [5], left {23, 734000})
rt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS
RTMIN RT_1], NULL, 8) = 0
accept(5, {sa_family=AF_FILE, [EMAIL PROTECTED], [2]) = 8
getsockname(8, {sa_family=AF_FILE, path="/tmp/.s.PGSQL.5432"}, [21]) = 0
clone(Process 16014 attached (waiting for parent)
Process 16014 resumed (parent 2263 ready)
child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x2aac72f0) = 16014
[pid  2263] close(8)= 0
[pid  2263] rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
[pid  2263] select(6, [3 4 5], NULL, NULL, {60, 0} 
[pid 16014] close(3)= 0
[pid 16014] close(4)= 0
[pid 16014] close(5)= 0
[pid 16014] close(6)= 0
[pid 16014] setsid()= 16014
[pid 16014] rt_sigaction(SIGTERM, {0x581700, [],
SA_RESTORER|SA_RESTART, 0x3e97630f30}, {0x55b5b0, [],
SA_RESTORER|SA_RESTART, 0x3e97630f30}, 8) = 0
[pid 16014] rt_sigaction(SIGQUIT, {0x581700, [],
SA_RESTORER|SA_RESTART, 0x3e97630f30}, {0x55b5b0, [],
SA_RESTORER|SA_RESTART, 0x3e97630f30}, 8) = 0
[pid 16014] rt_sigaction(SIGALRM, {0x581700, [], SA_RESTORER,
0x3e97630f30}, {SIG_IGN}, 8) = 0
[pid 16014] rt_sigprocmask(SIG_SETMASK, ~[QUIT ILL TRAP ABRT BUS FPE
SEGV ALRM TERM CONT SYS RTMIN RT_1], NULL, 8) = 0
[pid 16014] write(2, "\0\0_\0\216>\0\0tLOG:  0: connection"..., 104) = 104
[pid 16014] setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={60,
0}}, NULL) = 0
[pid 16014] recvfrom(8, "\0\0\0\20\4\322\26.\0\0>\2054\344\367W",
8192, 0, NULL, NULL) = 16
[pid 16014] kill(16005, SIGINT) = 0
[pid 16014] kill(4294951291, SIGINT)= 0
[pid 16014] exit_group(0)   = ?
Process 16014 detached
<... select resumed> )  = ? ERESTARTNOHAND (To be restarted)
--- SIGCHLD (Child exited) @ 0 (0) ---
rt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS
RTMIN RT_1], NULL, 8) = 0
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG, NULL) = 16014
wait4(-1, 0x7fff77f82844, WNOHANG, NULL) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigreturn(0x2)   = -1 EINTR (Interrupted system call)
rt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS
RTMIN RT_1], NULL, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(6, [3 4 5], NULL, NULL, {60, 0} 
Process 2263 detached

Regards, Clodoaldo Pinto Neto



[GENERAL] Loging of postgres requests

2008-03-15 Thread Dave Potts



I am using a third-party front end to generate Postgres requests; it's
reporting an error with the database.

Is there any way of logging which SQL requests the application is
actually sending to Postgres?  I need to know if the error is being
created by the application generating invalid SQL or if there is a
problem with the design of the database tables.


Dave.




Re: [GENERAL] Loging of postgres requests

2008-03-15 Thread Adrian Klaver
On Saturday 15 March 2008 2:29 pm, Dave Potts wrote:
> I am using a third-party front end to generate Postgres requests; it's
> reporting an error with the database.
>
> Is there any way of logging which SQL requests the application is
> actually sending to Postgres?  I need to know if the error is being
> created by the application generating invalid SQL or if there is a
> problem with the design of the database tables.
>
> Dave.
See:
http://www.postgresql.org/docs/8.2/interactive/runtime-config-logging.html
In particular you will want to set log_statement to 'all'  to see the SQL.
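
A minimal sketch of that change in postgresql.conf (followed by a
reload, e.g. "pg_ctl reload"):

log_statement = 'all'   # log every statement the server receives
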
-- 
Adrian Klaver
[EMAIL PROTECTED]



[GENERAL] how to make this database / query faster

2008-03-15 Thread mark
Hi,
I use Postgres v8.3 on a dual quad-core Intel Xeon [EMAIL PROTECTED],
Fedora Core 8 x86_64, with 32GB RAM.

Settings I changed in postgresql.conf:

shared_buffers = 1000MB # min 128kB or max_connections*16kB
effective_cache_size = 4000MB


I have a users table (structure is attached) with around 2 million rows,
and I'm adding around 10k-30k rows every day.
id is the primary key, and I have an index on session_key.

I iterate through the users table like this:

select * from users where session_key is not Null order by id offset OFFSET
limit 300


I want to go through the whole table... it gets really slow, greater
than 5 minutes, when the OFFSET is over 500,000.

What is the best way to iterate through the whole table?  Should I
increase the limit?
Thanks a lot!

CREATE TABLE users
(
  id serial NOT NULL,
  username text,
  first_name text,
  last_name text,
  email text,
  "password" text,
  last_login timestamp without time zone,
  profilepic text,
  ip text,
  dob timestamp without time zone,
  created timestamp without time zone DEFAULT now(),
  rawpassword character varying(128),
  rating integer,
  zip text,
  hash text,
  gender character(1),
  groups text,
  aim text,
  yahoo text,
  skype text,
  hotmail text,
  vanity text,
  number_comments integer DEFAULT 0,
  number_friends integer DEFAULT 0,
  number_posts integer DEFAULT 0,
  number_polls integer DEFAULT 0,
  city text,
  site text,
  number_pictures bigint DEFAULT 0,
  email_subscribe boolean DEFAULT true,
  number_userpics integer DEFAULT 0,
  htmlcodes text,
  pageviews integer DEFAULT 1,
  number_uservideos integer DEFAULT 0,
  number_useraudios integer DEFAULT 0,
  number_usermessages integer DEFAULT 0,
  number_usermessages_sent integer DEFAULT 0,
  myrand double precision NOT NULL DEFAULT random(),
  number_activities integer DEFAULT 0,
  number_pages integer DEFAULT 0,
  uid integer NOT NULL,
  number_invites integer DEFAULT 0,
  number_notifications integer DEFAULT 0,
  number_emailnotifications integer DEFAULT 0,
  last_invitation timestamp without time zone,
  last_emailnotification timestamp without time zone,
  total_number_invites integer DEFAULT 0,
  total_number_emailnotifications integer DEFAULT 0,
  number_cards_sent integer DEFAULT 0,
  number_cards_received integer,
  session_key text,
  number_cards_tosign integer DEFAULT 0,
  number_pendingnotifications integer DEFAULT 0,
  last_profilefbml timestamp without time zone,
  "name" text,
  email_bday_alert boolean DEFAULT true,
  last_email_bday_alert timestamp without time zone,
  last_bday_notified text,
  bday_alert_14 boolean DEFAULT false,
  bday_alert_3 boolean DEFAULT true,
  bday_alert_7 boolean DEFAULT false,
  CONSTRAINT users_pkey PRIMARY KEY (id),
  CONSTRAINT unique_uid UNIQUE (uid),
  CONSTRAINT uniquemail UNIQUE (email),
  CONSTRAINT uniquuser UNIQUE (username)
)
WITH (OIDS=FALSE);
ALTER TABLE users OWNER TO postgres;

-- Index: idx_session_key

-- DROP INDEX idx_session_key;

CREATE INDEX idx_session_key
  ON users
  USING btree
  (session_key);

-- Index: idx_uid

-- DROP INDEX idx_uid;

CREATE INDEX idx_uid
  ON users
  USING btree
  (uid);

-- Index: idx_username

-- DROP INDEX idx_username;

CREATE INDEX idx_username
  ON users
  USING btree
  (username);

-- Index: last_login

-- DROP INDEX last_login;

CREATE INDEX last_login
  ON users
  USING btree
  (last_login);

-- Index: myrand_users

-- DROP INDEX myrand_users;

CREATE INDEX myrand_users
  ON users
  USING btree
  (myrand, id);

-- Index: pageviews_idx

-- DROP INDEX pageviews_idx;

CREATE INDEX pageviews_idx
  ON users
  USING btree
  (pageviews);

-- Index: user_sex_idx

-- DROP INDEX user_sex_idx;

CREATE INDEX user_sex_idx
  ON users
  USING btree
  (gender);





Re: [GENERAL] Loging of postgres requests

2008-03-15 Thread Terry Fielder
You can turn up the verbosity of the Postgres logger to log all SQL
statements.  Look in postgresql.conf.

In particular, you can set Postgres to log statements that take over x
milliseconds to execute.  If you set log_min_duration_statement to 0,
it will log ALL statements, which also gives you what you want if you
want to see all SQL statements.
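
In postgresql.conf that would be:

log_min_duration_statement = 0   # 0 logs every statement with its
                                 # duration; -1 disables; N logs only
                                 # statements running over N ms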


Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Dave Potts wrote:



> I am using a third-party front end to generate Postgres requests; it's
> reporting an error with the database.
>
> Is there any way of logging which SQL requests the application is
> actually sending to Postgres?  I need to know if the error is being
> created by the application generating invalid SQL or if there is a
> problem with the design of the database tables.
>
> Dave.



  


Re: [GENERAL] how to make this database / query faster

2008-03-15 Thread Richard Broersma
On Sat, Mar 15, 2008 at 4:21 PM, mark <[EMAIL PROTECTED]> wrote:

> select * from users where session_key is not Null order by id offset
> OFFSET limit 300

OFFSET actually scans past all of the records that you specify in the
"OFFSET", so the bigger your offset, the longer the scan will be.

One solution is to retain the last ID from the previous scan:

SELECT *
  FROM Users
 WHERE session_key IS NOT NULL
AND id > your_last_id
  LIMIT 300;


Re: [GENERAL] how to make this database / query faster

2008-03-15 Thread mark
On Sat, Mar 15, 2008 at 4:37 PM, Richard Broersma <
[EMAIL PROTECTED]> wrote:

> On Sat, Mar 15, 2008 at 4:21 PM, mark <[EMAIL PROTECTED]> wrote:
>
>
> > select * from users where session_key is not Null order by id offset
> > OFFSET limit 300
> >
> One solution is to retain the last ID from the previous scan:
>
> SELECT *
>   FROM Users
>  WHERE session_key IS NOT NULL
> AND id > your_last_id
>   LIMIT 300;
>

Will this ensure that no row is repeated when I iterate over the table?
What are the rows ordered by?
Thanks


Re: [GENERAL] how to make this database / query faster

2008-03-15 Thread Richard Broersma
On Sat, Mar 15, 2008 at 4:41 PM, mark <[EMAIL PROTECTED]> wrote:

> On Sat, Mar 15, 2008 at 4:37 PM, Richard Broersma <
> [EMAIL PROTECTED]> wrote:
>
> >  On Sat, Mar 15, 2008 at 4:21 PM, mark <[EMAIL PROTECTED]> wrote:
> >
> >
> > > select * from users where session_key is not Null order by id offset
> > > OFFSET limit 300
> >
> > One solution is to retain the last ID from the previous scan:
> >
> > SELECT *
> >   FROM Users
> >  WHERE session_key IS NOT NULL
> > AND id > your_last_id
> >   LIMIT 300;
> >
>
> Will this ensure that no row is repeated when I iterate over the table?
> What are the rows ordered by?
> Thanks
>

Ya, sorry I forgot to include the order by.

 SELECT *
  FROM Users
 WHERE session_key IS NOT NULL
AND id > your_last_id
 ORDER BY id
  LIMIT 300;

Yes, there will not be any repeated rows, since you are requesting an
ordered set whose IDs are greater than the last set.


Re: [GENERAL] how to make this database / query faster

2008-03-15 Thread brian

Richard Broersma wrote:
> On Sat, Mar 15, 2008 at 4:41 PM, mark <[EMAIL PROTECTED]> wrote:
>
> > On Sat, Mar 15, 2008 at 4:37 PM, Richard Broersma <
> > [EMAIL PROTECTED]> wrote:
> >
> > > On Sat, Mar 15, 2008 at 4:21 PM, mark <[EMAIL PROTECTED]> wrote:
> > >
> > > > select * from users where session_key is not Null order by id offset
> > > > OFFSET limit 300
> > >
> > > One solution is to retain the last ID from the previous scan:
> > >
> > > SELECT *
> > >   FROM Users
> > >  WHERE session_key IS NOT NULL
> > > AND id > your_last_id
> > >   LIMIT 300;
> >
> > Will this ensure that no row is repeated when I iterate over the table?
> > What are the rows ordered by?
> > Thanks
>
> Ya, sorry I forgot to include the order by.
>
>  SELECT *
>   FROM Users
>  WHERE session_key IS NOT NULL
> AND id > your_last_id
>  ORDER BY id
>   LIMIT 300;
>
> Yes, there will not be any repeated rows, since you are requesting an
> ordered set whose IDs are greater than the last set.



As there's an index on id, would it be faster to transpose the WHERE
conditions?

WHERE id > your_last_id
AND session_key IS NOT NULL

I can't remember if the order of WHERE conditions is significant.



Re: [GENERAL] how to make this database / query faster

2008-03-15 Thread mark
On Sat, Mar 15, 2008 at 5:04 PM, brian <[EMAIL PROTECTED]> wrote:

> Richard Broersma wrote:
> > On Sat, Mar 15, 2008 at 4:41 PM, mark <[EMAIL PROTECTED]> wrote:
> >
> >> On Sat, Mar 15, 2008 at 4:37 PM, Richard Broersma <
> >> [EMAIL PROTECTED]> wrote:
> >>
> >>> On Sat, Mar 15, 2008 at 4:21 PM, mark <[EMAIL PROTECTED]> wrote:
> >>>
> >>>> select * from users where session_key is not Null order by id offset
> >>>> OFFSET limit 300
> >>>
> >>> One solution is to retain the last ID from the previous scan:
> >>>
> >>> SELECT *
> >>>   FROM Users
> >>>  WHERE session_key IS NOT NULL
> >>> AND id > your_last_id
> >>>   LIMIT 300;
> >>
> >> Will this ensure that no row is repeated when I iterate over the table?
> >> What are the rows ordered by?
> >> Thanks
> >
> > Ya, sorry I forgot to include the order by.
> >
> >  SELECT *
> >   FROM Users
> >  WHERE session_key IS NOT NULL
> > AND id > your_last_id
> >  ORDER BY id
> >   LIMIT 300;
> >
> > Yes, there will not be any repeated rows, since you are requesting an
> > ordered set whose IDs are greater than the last set.
> >
>
> As there's an index on id, would it be faster to transpose the WHERE
> conditions?
>
> WHERE id > your_last_id
> AND session_key IS NOT NULL
>
> I can't remember if the order of WHERE conditions is significant.

Brian,
I have an index on session_key also,
but I don't have an index combining both. Should I do that?


Re: [GENERAL] how to make this database / query faster

2008-03-15 Thread brian

mark wrote:
> On Sat, Mar 15, 2008 at 5:04 PM, brian <[EMAIL PROTECTED]> wrote:
>
> > As there's an index on id, would it be faster to transpose the WHERE
> > conditions?
> >
> > WHERE id > your_last_id
> > AND session_key IS NOT NULL
> >
> > I can't remember if the order of WHERE conditions is significant.
>
> Brian,
> I have an index on session_key also,


Yeah, I should have asked, "If there's just the index on id ..."

> but I don't have an index combining both. Should I do that?

I'll defer to someone with more insight.
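
One thing worth testing, though (an assumption on my part, not something
settled in this thread): since the query only ever wants rows with a
non-null session_key, a partial index on id restricted to those rows
serves both conditions at once:

CREATE INDEX users_id_with_session_idx ON users (id)
  WHERE session_key IS NOT NULL;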



Re: [GENERAL] Loging of postgres requests

2008-03-15 Thread Dave Potts

Thanks everybody, I have isolated the issue.

Dave.



Terry Fielder wrote:
> You can turn up the verbosity of the Postgres logger to log all SQL
> statements.  Look in postgresql.conf.
>
> In particular, you can set Postgres to log statements that take over x
> milliseconds to execute.  If you set log_min_duration_statement to 0,
> it will log ALL statements, which also gives you what you want if you
> want to see all SQL statements.
>
> Terry


> Dave Potts wrote:
> > I am using a third-party front end to generate Postgres requests; it's
> > reporting an error with the database.
> >
> > Is there any way of logging which SQL requests the application is
> > actually sending to Postgres?  I need to know if the error is being
> > created by the application generating invalid SQL or if there is a
> > problem with the design of the database tables.
> >
> > Dave.



