[SQL] SQL timestamp to date cast
Hi, I have the following query:

SELECT DISTINCT(nummer)
FROM user_action
WHERE datetime::date='11/11/2004' AND id_action IN (5,6,9)

Now, datetime has the type timestamp. How can I make an index, or write this query differently, so that it runs faster? It really takes some time sometimes, usually about 3-4 seconds. user_action has about 300k rows and is growing by roughly 5-10k a day.

explain analyze SELECT DISTINCT(nummer) FROM user_action WHERE datetime::date='11/11/2004' AND id_action IN (5,6,9)

Unique (cost=18141.71..18143.72 rows=45 width=4) (actual time=418.122..418.340 rows=85 loops=1)
  -> Sort (cost=18141.71..18142.72 rows=402 width=4) (actual time=418.119..418.194 rows=192 loops=1)
       Sort Key: nummer
       -> Seq Scan on user_action (cost=0.00..18124.33 rows=402 width=4) (actual time=366.240..417.890 rows=192 loops=1)
            Filter: (((datetime)::date = '2004-11-11'::date) AND ((id_action = 5) OR (id_action = 6) OR (id_action = 9)))
Total runtime: 418.419 ms

Best regards, Andy.
Re: [SQL] SQL timestamp to date cast
On Thu, Jan 20, 2005 at 10:52:50AM +0200, Andrei Bintintan wrote:
> SELECT DISTINCT(nummer)
> FROM user_action
> WHERE datetime::date='11/11/2004' AND id_action IN (5,6,9)
>
> Now, datetime has the type timestamp. How can I make an index or write
> different this query so that it runs faster?

You could create an index on datetime and rewrite your queries:

CREATE INDEX user_action_datetime_idx ON user_action (datetime);

SELECT DISTINCT(nummer)
FROM user_action
WHERE datetime >= '11/11/2004'
  AND datetime < '11/12/2004'
  AND id_action IN (5,6,9);

Another possibility would be to create a functional index on datetime:

CREATE INDEX user_action_date_idx ON user_action (date(datetime));

SELECT DISTINCT(nummer)
FROM user_action
WHERE date(datetime) = '11/11/2004'
  AND id_action IN (5,6,9);

-- Michael Fuhr http://www.fuhr.org/~mfuhr/

---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[SQL] Problem on Geometric functions
Hello, I have a table that has an attribute 'identifier', declared as integer, and an attribute 'plane_coord' defined as 'point':

Table "reference.coord_test"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 node_name   | integer |
 plane_coord | point   |

I would like to find all points from the table that are within a square. Is this possible? So far I have only found a check operator that tells whether a specified point is contained in or on a figure:

point '(1,1)' @ box '((0,0),(2,2))'

(on http://www.postgresql.org/docs/7.4/interactive/functions-geometry.html). I am looking for something like:

SELECT point(x,y) FROM reference.coord_test WHERE point(x,y) @ box '((0,0), (2,2))';

Has anyone experience with that? Regards, Silke

By the way, I am using PostgreSQL 7.4.1
Re: [SQL] SQL timestamp to date cast
> Another possibility would be to create a functional index on datetime:
>
> CREATE INDEX user_action_date_idx ON user_action (date(datetime));

GREAT!!! I thought it was possible but I didn't know how to make such indexes. :)) Thank you!!!

Andy.

- Original Message -
From: "Michael Fuhr" <[EMAIL PROTECTED]>
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Sent: Thursday, January 20, 2005 11:33 AM
Subject: Re: [SQL] SQL timestamp to date cast
Re: [SQL] SQL timestamp to date cast
On Thu, Jan 20, 2005 at 11:41:41AM +0200, Andrei Bintintan wrote:
> > Another possibility would be to create a functional index on datetime:
> >
> > CREATE INDEX user_action_date_idx ON user_action (date(datetime));
>
> GREAT!!! I thought it was possible but I didn't know how to make such
> indexes. :))

See the "Indexes on Expressions" section in the "Indexes" chapter of the documentation.

-- Michael Fuhr http://www.fuhr.org/~mfuhr/
Re: [SQL] Problem on Geometric functions
On Thu, Jan 20, 2005 at 10:39:47AM +0100, Silke Trissl wrote:
> Table "reference.coord_test"
>    Column    |  Type   | Modifiers
> -------------+---------+-----------
>  node_name   | integer |
>  plane_coord | point   |
>
> I would like to find all points from the table that are within a square.

Try this:

SELECT plane_coord
FROM reference.coord_test
WHERE plane_coord @ box '((0,0), (2,2))';

-- Michael Fuhr http://www.fuhr.org/~mfuhr/
[SQL] OID's
Hi, I have a question about OIDs. How (and where) can I get all OIDs of a PostgreSQL installation? In other words, where are OIDs stored? Are they stored in a special table? I would like to use some SQL queries over all the OIDs. Is this possible? I couldn't find the answer in the FAQ (4.16 What is an OID? What is a TID?).

-- Best regards, Mihail mailto:[EMAIL PROTECTED]
[SQL] OFFSET impact on Performance???
Hi to all,

I have the following 2 examples. Now, regarding the offset: if it is small (10) or big (>5), what is the impact on the performance of the query? I noticed that if I return more data (columns), or if I make more joins, then the query runs even slower when the OFFSET is bigger. How can I improve the performance of this?

Best regards, Andy.

explain analyze
SELECT o.id
FROM report r
INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6
ORDER BY 1 LIMIT 10 OFFSET 10

Limit (cost=44.37..88.75 rows=10 width=4) (actual time=0.160..0.275 rows=10 loops=1)
  -> Merge Join (cost=0.00..182150.17 rows=41049 width=4) (actual time=0.041..0.260 rows=20 loops=1)
       Merge Cond: ("outer".id_order = "inner".id)
       -> Index Scan using report_id_order_idx on report r (cost=0.00..157550.90 rows=42862 width=4) (actual time=0.018..0.075 rows=20 loops=1)
       -> Index Scan using orders_pkey on orders o (cost=0.00..24127.04 rows=42501 width=4) (actual time=0.013..0.078 rows=20 loops=1)
            Filter: (id_status = 6)
Total runtime: 0.373 ms

explain analyze
SELECT o.id
FROM report r
INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6
ORDER BY 1 LIMIT 10 OFFSET 100

Limit (cost=31216.85..31216.85 rows=1 width=4) (actual time=1168.152..1168.152 rows=0 loops=1)
  -> Sort (cost=31114.23..31216.85 rows=41049 width=4) (actual time=1121.769..1152.246 rows=42693 loops=1)
       Sort Key: o.id
       -> Hash Join (cost=2329.99..27684.03 rows=41049 width=4) (actual time=441.879..925.498 rows=42693 loops=1)
            Hash Cond: ("outer".id_order = "inner".id)
            -> Seq Scan on report r (cost=0.00..23860.62 rows=42862 width=4) (actual time=38.634..366.035 rows=42864 loops=1)
            -> Hash (cost=2077.74..2077.74 rows=42501 width=4) (actual time=140.200..140.200 rows=0 loops=1)
                 -> Seq Scan on orders o (cost=0.00..2077.74 rows=42501 width=4) (actual time=0.059..96.890 rows=42693 loops=1)
                      Filter: (id_status = 6)
Total runtime: 1170.586 ms
Re: [SQL] OFFSET impact on Performance???
Andrei Bintintan wrote:
> I have the following 2 examples. Now, regarding on the offset if it is
> small(10) or big(>5) what is the impact on the performance of the query??
> I noticed that if I return more data's(columns) or if I make more joins
> then the query runs even slower if the OFFSET is bigger. How can I
> somehow improve the performance on this?

There's really only one way to do an offset of 1000 and that's to fetch 1000 rows and then some and discard the first 1000. If you're using this to provide "pages" of results, could you use a cursor?

-- Richard Huxton Archonet Ltd
Re: [SQL] OFFSET impact on Performance???
> If you're using this to provide "pages" of results, could you use a
> cursor?

What do you mean by that? Cursor? Yes, I'm using this to provide "pages", but if I jump to the last pages it goes very slow.

Andy.

- Original Message -
From: "Richard Huxton"
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Sent: Thursday, January 20, 2005 2:10 PM
Subject: Re: [SQL] OFFSET impact on Performance???
Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136
I am enclosing a text file; if this is not the correct manner, let me know the best way. It's not a lot of lines.

ServerVersion: 07.03.0200 PostgreSQL 7.4.6 on i386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2

ERROR: row is too big: size 9856, maximum size 8136

Joel Fradkin
Wazagua, LLC
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-753-7111 ext 305
[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
© 2004. WAZAGUA, LLC. All rights reserved. WAZAGUA, LLC
This email message is for the use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and delete and destroy all copies of the original message, including attachments.

-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 19, 2005 10:01 PM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136

On Wed, Jan 19, 2005 at 03:50:30PM -0500, Joel Fradkin wrote:
> I get ERROR: row is too big: size 9856, maximum size 8136 when inserting a
> view?

Could you post the smallest possible self-contained example that demonstrates this behavior? What version of PostgreSQL are you using?
-- Michael Fuhr http://www.fuhr.org/~mfuhr/

CREATE OR REPLACE VIEW thenetworkcallbackview AS
SELECT i.clientnum, i.incidentid, incidentnum, incidenttypeid, incidentdate,
       subjectid, subjecttypeid, locationid,
       suspectedactivity.value AS description,
       situation, policenotified, injuries, injurednum, injuriesdescription,
       propertydamage, damagelevelid, incidentreport, i.transfered,
       i.transftypeid, i.transfdate, i.transfbyid, i.transfnum, lastmodified,
       createdbyid, i.isdeleted, workflowstatus, cashloss, merchloss, totloss,
       markettypeid, typeofweaponid, alarmid, cameraid, escalated, transcount,
       orgcalldate_time, anoncaller,
       callernamefirst AS firstname, callernamelast AS lastname,
       callernamefirst || ' ' || callernamelast AS callernamelast,
       callertype, callertitle, callerphone, callerext, callerbesttimetocall,
       calleremail, clientname, location, dba, address1, address2, city,
       state, zip, country, phone, ext, abuselocation,
       casesource.value AS hoiwincdknown,
       supportdocsavailable, next24hours, nextwhen, nextwhere,
       howhotlineknown, interviewernotes, clientinstructions,
       CASE WHEN i.statusid IS NULL THEN 'none'
            WHEN i.statusid = 1 THEN 'open'
            WHEN i.statusid = 2 THEN 'closed'
       END AS status,
       i.assignedto, cb.callbackdate,
       substring(cb.callbacknotes, 1, 20) AS callbacknotes
FROM tblincident i
INNER JOIN tblincidentcallback cb
        ON i.incidentnum = cb.incidentid AND i.clientnum = cb.clientnum
LEFT OUTER JOIN tblcasesource casesource
        ON i.inccasesourceid = casesource.id
       AND i.clientnum = casesource.clientnum
LEFT OUTER JOIN tblsuspectedactivity suspectedactivity
        ON i.incsuspectedactivityid = suspectedactivity.id
       AND i.clientnum = suspectedactivity.clientnum;
[SQL] pg_user relationship does not exist
Not sure this is the correct place to ask, but when I log in with pgAdmin III I get that message. Any ideas how to fix it? Do I need to redo my database or something?

Joel Fradkin
Wazagua, LLC
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-753-7111 ext 305
[EMAIL PROTECTED]
www.wazagua.com
Re: [SQL] pg_user relationship does not exist
Joel Fradkin wrote on Jan 20, 2005:
> Not sure this is the correct place to ask, but when I log in with pgadminIII
> I get that message.

Maybe do extensive logging (show queries, etc.; see postgresql.conf), and then examine your log to see what pgadminIII is asking for.

> Any ideas how to fix? Do I need to redo my data base or something?

--
-Achilleus
Re: [SQL] OFFSET impact on Performance???
Andrei Bintintan wrote:
>> If you're using this to provide "pages" of results, could you use a
>> cursor?
>
> What do you mean by that? Cursor? Yes I'm using this to provide "pages",
> but If I jump to the last pages it goes very slow.

DECLARE mycursor CURSOR FOR SELECT * FROM ...
FETCH FORWARD 10 IN mycursor;
CLOSE mycursor;

Repeated FETCHes would let you step through your results. That won't work if you have a web-app making repeated connections. If you've got a web-application then you'll probably want to insert the results into a cache table for later use.

-- Richard Huxton Archonet Ltd
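[Editor's note: a plain cursor (without WITH HOLD) only exists inside the transaction that declared it, so the fragment above would in practice be wrapped in BEGIN/COMMIT. A minimal sketch, with hypothetical table and column names:]

```sql
BEGIN;

-- Declare a cursor over the ordered result set.
DECLARE page_cur CURSOR FOR
    SELECT id, nummer
    FROM user_action
    ORDER BY id;

FETCH FORWARD 10 FROM page_cur;  -- page 1
FETCH FORWARD 10 FROM page_cur;  -- page 2
MOVE FORWARD 50 IN page_cur;     -- skip ahead without returning rows

CLOSE page_cur;
COMMIT;
```

WITH HOLD would let the cursor outlive the transaction at the cost of materializing the result, but the cursor still dies with the connection, which is why this does not help a web app that reconnects per request.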
Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136
On Thu, Jan 20, 2005 at 08:56:12AM -0500, Joel Fradkin wrote:
> I am enclosing a text file if this is not the correct manner let me know
> whats best way its not a lot of lines.

The file you attached contains a view definition but it doesn't show the underlying tables, nor the statement that resulted in the error. By "self-contained example" I mean enough statements that somebody could copy them into an empty database and reproduce the problem.

The error "row is too big: size 9856, maximum size 8136" gives a clue at what's wrong but I'm not sure what circumstances cause it, because the TOAST mechanism allows rows to be larger than a page. This is just a guess, but maybe the error means that the non-TOASTABLE data is exceeding the page size.

-- Michael Fuhr http://www.fuhr.org/~mfuhr/
[SQL] automatic table locking on too many locked records?
Hi all (and Michael).

An MS-SQL-experienced developer warned me that on MS SQL Server a whole table gets locked if a certain percentage or number of records are locked due to an update, and then for a short while nothing gets through. Does similar behaviour exist in PostgreSQL? Do tables get locked if too many records are being updated? Or something else?
Re: [SQL] OID's
On Thu, Jan 20, 2005 at 03:45:58PM +0500, Mihail Nasedkin wrote:
> How (where) I can get all OID's of the PostgeSQL installation?
> In other words where OID's is stored? Is it stored in special table?

See the "System Columns" section in the "Data Definition" chapter of the PostgreSQL documentation. Tables that store objects with OIDs should have an oid column; you could query pg_attribute to find out what tables those are.

> I would like use some SQL queries with the all OID's.

To what end? Are you aware that PostgreSQL allows tables to be created without OIDs? What problem are you trying to solve?

-- Michael Fuhr http://www.fuhr.org/~mfuhr/
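[Editor's note: since pg_attribute records system columns with negative attnum values, the catalog lookup suggested above could be sketched like this (untested; checking pg_class.relhasoids directly is an alternative):]

```sql
-- List ordinary tables that still have the system "oid" column.
SELECT c.relname
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE a.attname = 'oid'
  AND a.attnum < 0      -- system columns have negative attnum
  AND c.relkind = 'r'   -- ordinary tables only
ORDER BY c.relname;
```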
Re: [SQL] automatic table locking on too many locked records?
KÖPFERL Robert wrote:
> An MS-SQL experienced developer warned me that on MS-SQLsvr a whole table
> gets locked if a certain percentage or amount of records are locked due
> to an update. And then shortly nothing goes.
>
> Does there exist a similar behaviour on pgSQL? Get tables locked if too
> many records are beein updated?

PostgreSQL's MVCC system means updates generally don't lock at all. For more info, Google for MVCC and check the manual for Transaction Isolation Levels.

-- Richard Huxton Archonet Ltd
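[Editor's note: to make the MVCC point concrete, here is a hedged two-session sketch (table and column names are made up) showing that an in-progress UPDATE does not block readers:]

```sql
-- Session A: start a long-running update.
BEGIN;
UPDATE accounts SET balance = balance * 1.01;
-- (transaction still open; the rows now have new, uncommitted versions)

-- Session B, concurrently: this SELECT is not blocked; it simply
-- sees the last committed versions of the rows.
SELECT sum(balance) FROM accounts;

-- Session A:
COMMIT;  -- only now do other transactions see the updated values
```

Writers do block other writers on the same rows, but there is no threshold at which row locks escalate to a table lock, which is the MS SQL Server behaviour the question describes.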
Re: [PERFORM] [SQL] OFFSET impact on Performance???
Alex Turner wrote:
> I am also very interested in this very question. Is there any way to
> declare a persistent cursor that remains open between pg sessions?

Not sure how this would work. What do you do with multiple connections? Only one can access the cursor, so which should it be?

> This would be better than a temp table because you would not have to do
> the initial select and insert into a fresh table and incur those IO
> costs, which are often very heavy, and the reason why one would want to
> use a cursor.

I'm pretty sure two things mean there's less difference than you might expect:
1. Temp tables don't fsync
2. A cursor will spill to disk beyond a certain size

-- Richard Huxton Archonet Ltd
Re: [SQL] OFFSET impact on Performance???
"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
> > If you're using this to provide "pages" of results, could you use a cursor?
> What do you mean by that? Cursor?
>
> Yes I'm using this to provide "pages", but If I jump to the last pages it
> goes very slow.

The best way to do pages is not to use offset or cursors but to use an index. This only works if you can enumerate all the sort orders the application might be using and can have an index on each of them. To do this the query would look something like:

SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50

Then you take note of the last value used on a given page and if the user selects "next" you pass that as the starting point for the next page.

This query takes the same amount of time no matter how many records are in the table and no matter what page of the result set the user is on. It should actually be instantaneous even if the user is on the hundredth page of millions of records, because it uses an index both for finding the right point to start and for the ordering.

It also has the advantage that it works even if the list of items changes as the user navigates. If you use OFFSET and someone inserts a record in the table then the "next" page will overlap the current page. Worse, if someone deletes a record then "next" will skip a record.

The disadvantages of this are a) it's hard (but not impossible) to go backwards. And b) it's impossible to give the user a list of pages and let them skip around willy-nilly.

(If this is for a web page then I specifically don't recommend cursors. They would mean you'd have to have some complex session management system that guarantees the user will always come to the same postgres session, with some garbage collection if the user disappears. And it means the URL is only good for a limited amount of time. If they bookmark it, it'll break if they come back the next day.)
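[Editor's note: for the "going backwards" case mentioned above, one workable (if slightly clumsy) pattern is to reverse both the comparison and the sort, then flip the page back into ascending order. Sticking with the placeholder names tab and col from the example:]

```sql
-- Next page: rows after the last value on the current page.
SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50;

-- Previous page: rows before the first value on the current page,
-- fetched in descending order and re-sorted ascending for display.
SELECT *
FROM (SELECT * FROM tab
      WHERE col < ?
      ORDER BY col DESC
      LIMIT 50) AS prev_page
ORDER BY col;
```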
-- greg
Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136
Sorry, that was the statement that caused the error. I was creating a view that exists on the MSSQL side; it actually joins a few tables. I can put in a create statement for all the tables used, then create the view and re-send the txt file with those.

I am reloading the Linux box from scratch at the moment, but as soon as I get back up (probably tomorrow, as it takes overnight to load the data from the MSSQL server) I will email all the pertinent information. I am re-loading to hopefully get rid of the pg_user error I was getting (I went to su postgres and created my database that way, after creating a postgres user as root). My friend said not to create any users, just start the database up (Fedora Core 3) and use pgAdmin to create the database. I was following a how-to-convert guide I got off the archives, so I must have messed something up.

Again, thank you for the information. If it is non-TOAST (sorry, not sure what that means; I am guessing something like "not part of a text field's data") field sizes adding up to more than 8k, is there some way to produce the data set, or is this a limit of Postgres in general? If I cannot have all the data needed in a recordset I might have to re-think using Postgres. Is this a limit of MySQL also? I hate to think I have to consider staying on MSSQL, as it is not in our budget.

Joel Fradkin
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-753-7111 ext 305
[EMAIL PROTECTED]
www.wazagua.com
-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 20, 2005 11:33 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136
Re: [SQL] OFFSET impact on Performance???
Greg Stark wrote:
> The best way to do pages is not to use offset or cursors but to use an
> index. To do this the query would look something like:
>
> SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
>
> Then you take note of the last value used on a given page and if the user
> selects "next" you pass that as the starting point for the next page.

Greg's is the most efficient, but you need to make sure you have a suitable key available in the output of your select. Also, since you are repeating the query you could get different results as people insert/delete rows. This might or might not be what you want.

A similar solution is to partition by date/alphabet or similar, then page those results. That can reduce your resultset to a manageable size.

-- Richard Huxton Archonet Ltd
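[Editor's note: one way to get the "suitable key" mentioned above when the sort column is not unique is to add a unique tiebreaker (say, the primary key) to both the ORDER BY and the WHERE clause. A sketch with hypothetical names, spelling out the row comparison explicitly:]

```sql
-- The two ? values for col and the ? for id come from the final
-- row of the current page.
SELECT *
FROM tab
WHERE col > ?                   -- rows strictly after the last col value...
   OR (col = ? AND id > ?)      -- ...plus ties, broken by the unique id
ORDER BY col, id
LIMIT 50;
```

Without the tiebreaker, rows sharing the boundary value of col could be skipped or repeated between pages.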
Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136
Joel Fradkin wrote:
> I am re-loading to hopefully get rid of the pg_user error I was getting (I
> went to su postgres and created my data base that way after creating a
> postgres user as root). My friend said to not create any users just start
> the data base up (Fedora core 3) and use pgadmin to create the database.

There are probably RPMs for 8.0 available by now. Might be worth going to that from the start, rather than upgrading later.

> If it is non TOAST field sizes adding up to more than 8k is there some way
> to produce the data set, or is this a limit of Postgres in general?

Well, that's a lot of non-text columns to breach 8kB. It is a definite limit, but you shouldn't see it until you have hundreds of columns. If you can post the table definitions along with the view definition, that should let people see if they can reproduce the problem.

-- Richard Huxton Archonet Ltd
Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136
"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> Sorry, that was the statement that caused the error.

Hmm. The error is associated with trying to store an oversized row. And CREATE VIEW doesn't store any rows ... except into system catalogs. So the only theory I can think of is that the pg_rewrite row for the view is exceeding 8K. Which can't happen, because no matter how complicated the view definition rule is, the tuple toaster should have sprung into action and pushed the rule text out-of-line.

Could we see the results of

select * from pg_class where relname = 'pg_rewrite';

select attname, atttypid::regtype, attstorage from pg_attribute
where attrelid = 'pg_rewrite'::regclass and attnum > 0;

7.4 should certainly be configured to have a toast table for pg_rewrite, but maybe something went wrong during initdb on your installation.

regards, tom lane
Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136
It could very well be an install issue; I was getting errors trying to see template1. I am in the process of re-installing Linux and, if I still have the error, I will let you know what I get from the select you asked me to run. I appreciate everyone's help.

If anyone has an interest in the .NET utility I wrote to pull the tables' schema and data, let me know. I used SQL-DMO to have the script text available and then converted it to Postgres syntax. I automated the creation and move of the data, including the text fields (it runs a little slow, as it does a read and a write at the table-row level, but this seemed the best way to get the text fields moved over). For the views and procedures, I am afraid I will have to use the list-all-views syntax and convert by hand, as stuff like LEFT and DATEDIFF would be difficult to auto-convert. I did create a left and a right function, but could see a performance hit for each use of the function and feel it will be better to just convert the SQL (the hit was only milliseconds on the first call, I guess from the prepare part, but it still might as well be as fast as possible).

Joel Fradkin
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-753-7111 ext 305
[EMAIL PROTECTED]
www.wazagua.com
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 20, 2005 3:38 PM
To: Joel Fradkin
Cc: 'Michael Fuhr'; pgsql-sql@postgresql.org
Subject: Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136
[SQL] still having pg_user error
Well, I re-installed Fedora 3, updated my PostgreSQL, and started the service (no command-line stuff). When I connected using pgAdmin I still got the pg_user error.

I am brand new to both Linux and PostgreSQL and am a bit frustrated, as I have spent the entire day getting nowhere. Is there a step-by-step instruction for using PostgreSQL on Fedora 3 anywhere? What I did seemed simple enough: using the graphical interface I added the PostgreSQL components and put in CD 3, then I used the up2date thing, then I started the service. Obviously I am missing an important step (I realize I have to edit the pg_hba.conf file, but it was not there until after I started the service). I stopped the service, added a host line, and edited postgresql.conf to allow TCP/IP sockets.

I hate to not get further tonight, as it takes all night to move the data and I wanted to be moving views tomorrow, but I guess I will just have to try again tomorrow.

Joel Fradkin
Wazagua, Inc.
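For reference, the two edits described above typically look something like this on a 7.4-era installation. This is a sketch under assumptions: the subnet and authentication method in the host line are invented for illustration and need to match your own network.

```
# pg_hba.conf -- allow a client machine on the local network to connect
# (the subnet and auth method here are illustrative assumptions)
host    all    all    192.168.0.0    255.255.255.0    md5

# postgresql.conf -- PostgreSQL 7.4 used this setting to accept TCP/IP
tcpip_socket = true
```

After editing either file, the service has to be restarted (or at least reloaded) for the changes to take effect.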
Re: [SQL] still having pg_user error
"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> Well I re-installed Fedora 3 and updated my postgres and started the service
> (no command line stuff).
> When I connected using pgadmin I still got the pg_user error.

Fedora 3? You should've mentioned that before. I'll bet you've got SELinux enabled in enforcement mode. SELinux interferes with initdb, and I believe a missing pg_user view is one of the symptoms.

This is fixed in the very latest postgresql and selinux-policy-targeted RPMs (you need both), but the path of least resistance is often to dial SELinux down to permissive mode (sudo /usr/sbin/setenforce 0) for long enough to do the initdb.

If you're running SELinux in strict rather than targeted policy, good luck ... I honestly haven't tried that one at all ...

regards, tom lane
Re: [SQL] OID's
Hello, Michael. Thank you for the answer.

January 20, 2005, 21:48:30:

MF> On Thu, Jan 20, 2005 at 03:45:58PM +0500, Mihail Nasedkin wrote:
>> How (where) can I get all OIDs of the PostgreSQL installation?
>> In other words, where are OIDs stored? Are they stored in a special table?

MF> See the "System Columns" section in the "Data Definition" chapter
MF> of the PostgreSQL documentation. Tables that store objects with
MF> OIDs should have an oid column; you could query pg_attribute to
MF> find out what tables those are.

I have already read the "System Columns" section of the PostgreSQL documentation. In the table pg_catalog.pg_attribute, the column attrelid contains only system OIDs, not the OIDs of the rows of user tables. But I would like to use the OIDs of all rows of all my tables.

I tried to use rules on the INSERT action of my tables to store the last inserted oid, but at the moment the row is inserted into the table, the OID value is inaccessible (unknown).

>> I would like to use some SQL queries with all the OIDs.

MF> To what end? Are you aware that PostgreSQL allows tables to be
MF> created without OIDs?

Yes, of course, but in my case I create tables with OIDs and then want to use the OIDs of all rows of all the tables as one column in some query. I think the system of OIDs is very useful for applications!

MF> What problem are you trying to solve?

For example, I want to fetch all rows of several tables in one query by means of LEFT JOIN, without using the UNION operator.

--
Mihail Nasedkin    mailto:[EMAIL PROTECTED]
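To illustrate the idea under discussion (a sketch added for this thread; the table and column names are invented), a table created WITH OIDS exposes a hidden oid system column that can be selected and joined on like any ordinary column, as long as it is named explicitly:

```sql
-- Hypothetical tables created with OIDs (pre-8.1 style syntax)
CREATE TABLE customers (name text) WITH OIDS;
CREATE TABLE orders (customer_oid oid, amount numeric) WITH OIDS;

INSERT INTO customers (name) VALUES ('alice');

-- Capture a row's oid by selecting it explicitly; it is not
-- returned by SELECT *
INSERT INTO orders (customer_oid, amount)
    SELECT oid, 10.00 FROM customers WHERE name = 'alice';

-- Joining two tables on the system oid column via LEFT JOIN
SELECT c.oid AS customer_oid, c.name, o.amount
FROM customers c
LEFT JOIN orders o ON o.customer_oid = c.oid;
```

Note that oids are only unique within limits (the counter is cluster-wide and can wrap), which is one reason explicit serial or sequence-based keys are generally preferred over relying on the oid column.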