[SQL] Unexplained SQL behavior
I am debugging a performance problem with a view. I have narrowed down the problem to when I added columns to my view. In the examples below I have a view with 10 columns. I run an explain plan and it uses the primary key of the driver table (enrollment table) as is to be expected and executes in 1.86 msec. When I added another column (and it is a null value casted to a timestamp - LASTACCESSED) the execution plan now uses a totally different plan and now runs for 8622.16 msec. Anyone have any ideas or an explanation that may help me understand?

--- thanks

drop view lv2 \g

create view lv2(
  LEARNERID, ENROLLMENTID, GUID, EXPIREDDATE, FIRSTACCESSDATE,
  DISPLAYORDER, ASSESSMENTSSTARTED, COMPLETE, EXPIRED, TITLE
) AS
select distinct
  enr.learnerid,
  enr.enrollmentid,
  lov.ccguid,
  enr.enddate,
  null::timestamp without time zone,
  lov.displayorder,
  'false'::character varying,
  'F'::character varying,
  case when enr.endDate::timestamp without time zone > now()::timestamp without time zone
       then 'false' else 'true' end as "expired",
  lov.displaytitlemeid
from enrollment enr, learningObjectView lov
where enr.productid = lov.productId
  and not exists (select null
                    from learnerAttendance la, learnerLaunchableAttendance lla
                   where enr.enrollmentID = la.enrollmentID
                     and la.learnerAttendanceID = lla.learnerAttendanceID
                     and lla.launchableGUID = lov.ccGuid) \g

explain analyze select * from lv2 where enrollmentid = 21462 \g

NOTICE: QUERY PLAN:

Subquery Scan lv2 (cost=264.90..264.98 rows=1 width=77) (actual time=1.53..1.61 rows=12 loops=1)
  -> Unique (cost=264.90..264.98 rows=1 width=77) (actual time=1.52..1.55 rows=12 loops=1)
    -> Sort (cost=264.90..264.90 rows=3 width=77) (actual time=1.52..1.53 rows=12 loops=1)
      -> Nested Loop (cost=0.00..264.87 rows=3 width=77) (actual time=0.57..1.20 rows=12 loops=1)
        -> Index Scan using enrollment_pk on enrollment enr (cost=0.00..3.02 rows=1 width=20) (actual time=0.21..0.21 rows=1 loops=1)
        -> Index Scan using i_learningobjectview_productid on learningobjectview lov (cost=0.00..25.51 rows=6 width=57) (actual time=0.19..0.30 rows=13 loops=1)
      SubPlan
        -> Nested Loop (cost=0.00..36.69 rows=1 width=8) (actual time=0.04..0.04 rows=0 loops=13)
          -> Index Scan using i_learnerlaunchattend_lguid on learnerlaunchableattendance lla (cost=0.00..20.19 rows=5 width=4) (actual time=0.02..0.02 rows=0 loops=13)
          -> Index Scan using learnerattendance_pk on learnerattendance la (cost=0.00..3.02 rows=1 width=4) (actual time=0.16..0.16 rows=1 loops=1)
Total runtime: 1.86 msec

NOW I ADDED ANOTHER NULL column (LASTACCESSED field)

drop view lv2 \g

create view lv2(
  LEARNERID, ENROLLMENTID, GUID, EXPIREDDATE, FIRSTACCESSDATE, LASTACCESSED,
  DISPLAYORDER, ASSESSMENTSSTARTED, COMPLETE, EXPIRED, TITLE
) AS
select distinct
  enr.learnerid,
  enr.enrollmentid,
  lov.ccguid,
  enr.enddate,
  null::timestamp without time zone,
  null::timestamp without time zone,
  lov.displayorder,
  'false'::character varying,
  'F'::character varying,
  case when enr.endDate::timestamp without time zone > now()::timestamp without time zone
       then 'false' else 'true' end as "expired",
  lov.displaytitlemeid
from enrollment enr, learningObjectView lov
where enr.productid = lov.productId
  and not exists (select null
                    from learnerAttendance la, learnerLaunchableAttendance lla
                   where enr.enrollmentID = la.enrollmentID
                     and la.learnerAttendanceID = lla.learnerAttendanceID
                     and lla.launchableGUID = lov.ccGuid) \g

explain analyze select * from lv2 where enrollmentid = 21462 \g

NOTICE: QUERY PLAN:

Subquery Scan lv2 (cost=1968402.36..1969071.62 rows=2677 width=77) (actual time=8273.16..8516.75 rows=12 loops=1)
  -> Unique (cost=1968402.36..1969071.62 rows=2677 width=77) (actual time=8273.08..8458.33 rows=63048 loops=1)
    -> Sort (cost=1968402.36..1968402.36 rows=26770 width=77) (actual time=8273.07..8343.14 rows=63048 loops=1)
      -> Merge Join (cost=0.00..1965714.31 rows=26770 width=77) (actual time=1.28..7226.99 rows=63048 loops=1)
        -> Index Scan using i_enrollment_product on enrollment enr (cost=0.00..608.96 rows=8746 width=20) (actual time=0.17..54.26 rows=8746 loops=1)
        -> Index Scan using i_learningobjectview_productid on learningobjectview lov (cost=0.00..207.57 rows=3278 width=57) (actual time=0.15..333.79 rows=67003 loops=1)
      SubPlan
        -> Nested Loop (cost=0.00..36.69 rows=1 width=8) (actual time=0.09..0.09 rows=0 loops=64942)
          -> Index Scan using i_learnerlaunchattend_lguid on learnerlaunchableattendance lla (cost=0.00..20.19 rows=5 width=4) (actual time=0.01..0.03 rows=5 loops=64942)
[SQL] union optimization in views
We are attempting to move a couple of systems from Oracle to Postgres but cannot do so without application rewrites due to the current use of views with UNIONs and the criticality of the performance of these views. I was wondering if a decision has been made on the optimization of the UNION clause in views. There are many documents in the SQL archive showing that the "push down" is not occurring and thus the use of UNIONs in views is limited to cases where the data set is small or performance is not a consideration. I also looked through the TODO list and didn't see anything (of course I could have missed references).

thanks - Joe

snip of an article from the SQL archives:

CREATE VIEW two_tables AS
  SELECT t1.id, t1.name, t1.abbreviation, t1.juris_id FROM t1
  UNION ALL
  SELECT t2.id, t2.name, NULL, t2.juris_id FROM t2;

This works fine as a view, since I have made the id's unique between the two tables (using a sequence). However, as t1 has 100,000 records, it is vitally important that queries against this view use an index. As it is a Union view, though, they ignore any indexes:

> It's probably not pushing the login='asdadad' condition down into the
> queries in the view so it's possibly doing a full union all followed
> by the condition (given that it's estimating a larger number of rows
> returned). I think there was some question about whether it was safe
> to do that optimization (ie, is
>   select * from (a union [all] b) where condition
> always the same as
>   select * from a where condition union [all]
>   select * from b where condition
> )
>
> There wasn't any final determination --- it's still an open issue
> whether there are any limitations the planner would have to consider
> when trying to push down conditions into UNIONs. Offhand it seems to
> me that the change is always safe when dealing with UNION ALL, but I'm
> not quite convinced about UNION. And what of INTERSECT and EXCEPT?
>
> Another interesting question is whether there are cases where the
> planner could legally push down the condition, but should not because
> it would end up with a slower plan. I can't think of any examples
> offhand, but that doesn't mean there aren't any.
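To make the push-down concrete, this is the rewrite being discussed, applied by hand to the two_tables view above (the constant is made up for illustration):

-- given:  SELECT * FROM two_tables WHERE juris_id = 42;
-- the pushed-down form the planner would ideally produce is:
SELECT t1.id, t1.name, t1.abbreviation, t1.juris_id FROM t1 WHERE t1.juris_id = 42
UNION ALL
SELECT t2.id, t2.name, NULL, t2.juris_id FROM t2 WHERE t2.juris_id = 42;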
Re: [SQL] Question about One to Many relationships
Todd Kennedy wrote:

> They haven't responded to me as of yet. There should be a band associated with each album -- this is handled in code, but other than that this is the only relational db way I can think of to do it.

But if a band can have songs in many albums and an album can have songs from multiple bands, it's a many-to-many relationship, NOT one-to-many. Short of the full track design suggested by PFC, you'd normally implement a many-to-many table as follows:

CREATE TABLE bands_on_album (
    band_id   integer REFERENCES band (id),
    album_id  integer REFERENCES albums (id),
    PRIMARY KEY (band_id, album_id)
)

This of course precludes the same band being listed twice in a given album. If you do need that info, then you're really asking for "tracks".

Joe

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
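A quick sketch of how that link table gets used (the id values and the query are made up for illustration):

-- one row per band appearing on an album
INSERT INTO bands_on_album (band_id, album_id) VALUES (1, 10);
INSERT INTO bands_on_album (band_id, album_id) VALUES (2, 10);  -- a second band on the same album

-- all albums a given band appears on
SELECT a.*
  FROM albums a
  JOIN bands_on_album boa ON boa.album_id = a.id
 WHERE boa.band_id = 1;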
Re: [SQL] MySQL DB to PostgresSQL DB
Jose Apablaza wrote:

> - Is posible to export MySQL DB to PostgreSQL DB?
> - Does PostgreSQL DB has tools to import MySQL DB?
> - If is posible, How do I need to build the DB in MySQL?, in order to have success in the exportation.
> - Do someone did it before?, exporting MySQL DB to PostgreSQL DB?
> - What kind of risk do we can to have in this process?
> - How long can take this process?

Yes, it's doable, but it's not as straightforward as

mysqldump -someflags mydbname | psql -someotherflags pgdbname

I suggest you start by checking the resources/articles in the MySQL section of http://www.postgresql.org/docs/techdocs.3 (which I'm glad to say is much better organized and comprehensive than when I had to do it). FWIW, I converted using CSV as the intermediate format, and the dates and timestamps were the trickiest, which required the use of "staging" tables (with a textual representation of the columns). An interesting side effect was discovering data inconsistencies in the MySQL database, since as part of the conversion I implemented foreign key constraints under PostgreSQL (which were missing in the former).

Joe

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
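A bare-bones sketch of the staging-table approach described above (table names, columns, and the file path are invented; the cleanup actually needed depends on what the MySQL export contains):

-- load the raw CSV into an all-text staging table
CREATE TABLE orders_staging (order_id text, placed_on text, amount text);
COPY orders_staging FROM '/tmp/orders.csv' WITH CSV;

-- convert and validate while moving rows into the real table
INSERT INTO orders (order_id, placed_on, amount)
SELECT order_id::integer,
       placed_on::timestamp,
       amount::numeric
  FROM orders_staging;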
[SQL] Repetitive code
Hi,

This is prompted by the previous thread on "SQL Technique Question". I have the following query, extracted from a PHP script, where $dt is a date provided to the script.

SELECT created, topic_id, 0, 0, 0, 0
  FROM topic
 WHERE created >= $dt AND page_type IN (1, 2)
UNION
SELECT updated, topic_id, 1, 0, 0, 0
  FROM topic
 WHERE date_trunc('day', updated) != created AND updated >= $dt AND page_type IN (1, 2)
UNION
SELECT e.created, subject_id, 0, 1, entry_id, subject_type
  FROM entry e, topic
 WHERE subject_id = topic_id AND e.created >= $dt AND page_type IN (1, 2)
UNION
SELECT e.created, actor_id, 0, 1, entry_id, actor_type
  FROM entry e, topic
 WHERE actor_id = topic_id AND e.created >= $dt AND page_type IN (1, 2)
UNION
SELECT e.updated, subject_id, 1, 1, entry_id, subject_type
  FROM entry e, topic
 WHERE date_trunc('day', e.updated) != e.created AND subject_id = topic_id AND e.updated >= $dt AND page_type IN (1, 2)
UNION
SELECT e.updated, actor_id, 1, 1, entry_id, actor_type
  FROM entry e, topic
 WHERE date_trunc('day', e.updated) != e.created AND actor_id = topic_id AND e.updated >= $dt AND page_type IN (1, 2)
UNION
SELECT e.created, e.topic_id, 0, 1, entry_id, rel_type
  FROM topic_entry e, topic t
 WHERE e.topic_id = t.topic_id AND e.created >= $dt AND page_type IN (1, 2)
UNION
SELECT e.updated, e.topic_id, 1, 1, entry_id, rel_type
  FROM topic_entry e, topic t
 WHERE e.topic_id = t.topic_id AND date_trunc('day', e.updated) != e.created AND e.updated >= $dt AND page_type IN (1, 2);

As you can see, there's quite a bit of repetitive code, so the previous thread got me to thinking about simplifying it, perhaps through a view, perhaps through the use of CASE statements, particularly since I'm about to add at least one other table to the mix.

As background, each table has a 'created' date column and an 'updated' timestamp column and the purpose of the various selects is to find the rows that were created or updated since the given $dt date. The third expression in each select list is an indicator of NEW (0) or CHANGED (1). The fourth item is a code for row type (topic=0, entry=1, but a new code is coming).

I've been trying to figure out if simplifying into a view (one or more) is indeed possible. One factoring out that I can see is the "topics of interest" restriction (i.e., the join of each secondary table back to topic to get only topics whose page_types are 1 or 2). Another redundancy is the "date_trunc('day', updated) != created" which is there to avoid selecting "changed" records when they're actually new. However, although creating these views may simplify the subqueries it doesn't seem there is a way to avoid the eight-way UNION, or is there?

TIA

Joe

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Repetitive code
Aaron Bono wrote:

> Each of your queries has the filter xxx >= $dt where the xxx is the first column in each select. You could simplify the query by turning the unioned selects into a sub-query and then putting the $dt filter in the outer query.

It would probably have to be two subqueries unless I can find a way to merge the differences between new and changed rows.

> I don't know if this will cause performance problems though. If PostgreSQL completes the inner query before filtering by your $dt you may be better off leaving the $dt filters where they are.

The query is only run a few times a week so performance is largely not a concern. I'm trying to simplify it to make adding tables less cumbersome (as a separate effort, the schema may be modified to normalize it, e.g., topic joins to entry via subject_id and actor_id and a subject and actor can also appear in topic_entry's topic_id).

> I know Oracle has materialized views. Does PostgreSQL also have materialized views? If so, you could get great performance from your views AND simplify your SQL.

AFAIK PostgreSQL does not support materialized views but it's interesting that you mention that because in essence the query is used to materialize a view, i.e., it's part of an INSERT / SELECT into a table which is then joined back to the other tables to construct a web page as well as an RSS feed.

Joe

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Repetitive code
Aaron Bono wrote:

> I haven't stared at your query as long as you have so I may have missed something, but it looks like in all the selects you are combining, the first column in the select is the column you filter on. So the outer query doesn't have to know whether it is a new or changed row:
>
> SELECT *
> FROM (
>     SELECT created as my_date, topic_id, 0, 0, 0, 0
>       FROM topic
>      WHERE page_type IN (1, 2)
>     UNION
>     [snip]
>     SELECT e.updated as my_date, e.topic_id, 1, 1, entry_id, rel_type
>       FROM topic_entry e, topic t
>      WHERE e.topic_id = t.topic_id
>        AND date_trunc('day', e.updated) != e.created
>        AND page_type IN (1, 2)
> ) my_union
> where my_union.my_date >= $dt

Thanks Aaron. That does look like a great solution, overlooked since I'm not that familiar with SELECTs in the FROM clause. It may even make it possible to discard the interim table and do the web page/RSS feed directly from the view.

> I would almost be tempted to create a view for each small query and name them something meaningful and then another view that does the union. It would make the queries easier to understand at least (self documented).

That sounds like a good idea too because schema changes would be somewhat insulated by the layered views.

Best regards,

Joe

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
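A rough sketch of that layered-view arrangement (view and column names are invented; only two of the eight branches are shown):

CREATE VIEW new_topics (my_date, topic_id, changed, row_type, entry_id, rel_type) AS
  SELECT created, topic_id, 0, 0, 0, 0
    FROM topic
   WHERE page_type IN (1, 2);

CREATE VIEW changed_topics (my_date, topic_id, changed, row_type, entry_id, rel_type) AS
  SELECT updated, topic_id, 1, 0, 0, 0
    FROM topic
   WHERE date_trunc('day', updated) != created
     AND page_type IN (1, 2);

-- ... one small view per remaining branch ...

CREATE VIEW recent_activity AS
  SELECT * FROM new_topics
  UNION
  SELECT * FROM changed_topics;
  -- UNION the other views here as well

-- the final query then becomes simply:
-- SELECT * FROM recent_activity WHERE my_date >= $dt;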
Re: [SQL] avg(interval)
Tom Lane wrote: "Jeremiah Elliott" <[EMAIL PROTECTED]> writes: however if i don't average them here is what i get: "7 days 22:24:50.62311";"*2420" "9 days 22:21:02.683393";"*2420" "23:21:35.458459";"*2420" "4 days 22:47:41.749756";"*2420" "3 days 06:05:59.456947";"*2420" which should average to just over nine days - Uh ... how do you arrive at that conclusion? I haven't done the math, but by eyeball an average of four-something days doesn't look out of line for those values. It seems he's calculating (7 + 9 + 23 + 4 + 3) / 5 ... Joe ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] avg(interval)
Erik Jones wrote:

> Well, the query is working ok numerically, but should the answer really be reported as 4 days and 33 hours?

Well, the original poster didn't provide the table schema or PG version, but on PG 8.0.3, both with intervals and with differences between timestamps, the query appears to work OK:

test=> select * from x;
        t
------------------
 7 days 22:24:00
 9 days 22:21:00
 23:21:00
 4 days 22:47:00
 3 days 06:05:00
(5 rows)

test=> select avg(t) from x;
       avg
------------------
 5 days 09:47:36
(1 row)

test=> select * from x2;
          t
---------------------
 2006-06-07 22:24:00
 2006-06-09 22:21:00
 2006-05-31 23:21:00
 2006-06-04 22:47:00
 2006-06-03 06:05:00
(5 rows)

test=> select avg(t - '2006-5-31 0:0'::timestamp) from x2;
       avg
------------------
 5 days 09:47:36
(1 row)

Joe

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: Fwd: [SQL] Start up question about triggers
Andrew Sullivan wrote:

> On Tue, Jun 27, 2006 at 11:16:17AM +0300, Forums @ Existanze wrote:
>> I looked into slony, I have a question though, how would I go about controlling slony via a jdbc driver? See, this whole problem has arisen because for some reason my client wants to keep two separate databases in two separate locations with the same data. So he would call this partial backup function via a GUI client we provide, and with this information he would go to the same GUI client in this other location and import this partial backup.
>
> It is totally normal to want to keep two databases in two locations: that's a matter of safety. Slony does it automatically, as long as the daemon is running. No need to control it.

But with file-based log shipping (see http://linuxfinances.info/info/logshipping.html) one could write a Java app to control when the updates are applied.

Joe

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Data Entry and Query forms
Aaron Bono wrote: Are there any web based management tools for PostgreSQL (like Mysql PHP Admin except for PostgreSQL)? I thought I saw a post sometime back about one but don't remember the name. Yes, that's phpPgAdmin (http://phppgadmin.com). Joe ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] ERROR: SELECT query has no destination for result data
Ezequias Rodrigues da Rocha wrote:

> Hi list, I have a function like this:
>
> Create OR REPLACE Function base.inserirPontos(char(1), varchar(255), numeric(12,2), int8, int8, int8)
> returns int4 as $$
> declare
>     Operacao alias for $1;
>     Numero_nota alias for $2;
>     Valor_nota alias for $3;
>     PontoVenda_Emissor alias for $4;
>     Cardpass alias for $5;
>     Cx_Id alias for $6;
> begin
>     -- Validating the parameters passed to the function
>     if Operacao <> 'C' then
>         return 1;
>     else
>         select count(id) as numRegistros
>           from base.emissor_ponto_venda
>          where id = PontoVenda_Emissor;

You haven't declared numRegistros.

Joe

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
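A minimal sketch of the fix, assuming numRegistros is meant to hold the count: declare the variable and use SELECT ... INTO instead of a bare SELECT:

declare
    ...
    numRegistros integer;
begin
    ...
    select count(id) into numRegistros
      from base.emissor_ponto_venda
     where id = PontoVenda_Emissor;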
Re: [SQL] Evaluation of if conditions
Daniel CAUNE wrote:

> Hi,
>
> How does the IF statement evaluate conditions? Does it evaluate conditions following their declaration order from left to right? In case of or-conditions, does the IF statement stop evaluating conditions whenever a first or-condition is true?

Please see section 4.2.12, "Expression Evaluation Rules", in the manual:

  The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order.

There are more examples there too.

Joe

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Is this a bug? Deleting a column deletes the constraint.
On Thu, 2006-10-12 at 01:25 -0400, Tom Lane wrote:
> It does seem like this is wrong, in view of SQL92's statement about
> ALTER TABLE DROP COLUMN:
>
>   4) If RESTRICT is specified, then C shall not be referenced in
>      the <query expression> of any view descriptor or in the
>      <search condition> of any constraint descriptor other than a
>      table constraint descriptor that contains references to no other
>      column and that is included in the table descriptor of T.
>
> IOW we should only allow unique constraints to be auto-dropped if
> they reference just the one single column.

Ick. I didn't realize before that you can also drop all columns, leaving a table without *any* columns. Is that a SQL92 "feature"?

Joe

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Case Preservation disregarding case sensitivity?
Hi Beau,

On Fri, 2006-10-27 at 16:23 -0700, beau hargis wrote:
> I am hoping that there is an easy way to obtain case-preservation with
> case-insensitivity, or at the very least, case-preservation and complete
> case-sensitivity, or case-preservation and a consistant case-conversion
> strategy.
>
> The case of the column names need to be preserved because that is the way the
> schema is designed and most importantly (VERY, VERY IMPORTANT), column names
> are used in apps as hash values, or as named references which are case
> sensitive and as such need to be delivered to the client in exactly in the
> manner specified at the time of table creation.
>
> Again, I am looking for a way (magic, patches, whiskey, etc) that will give me
> case-preservation with EITHER case-sensitivity OR case-insensitivity, but not
> both as I am seeing.
>
> Thanks in advance. I am hoping to find a solution to this so I can actually
> convert one of our databases to use Postgres. And I can say that little
> issues like this are precisely why Postgres was never used in this
> organization before, even though several of the other database developers
> like the features, stability and performance of Postgres.

I went through the same issue in my conversion from MySQL to Postgres and (since I had a small application) I ended up changing all my table and column names from the "UserProfile" style to user_profile. I'm afraid, however, that it's MySQL that is the odd man out. I haven't researched this completely but I believe PG follows either the FIPS-127 or SQL-92 standard with respect to what are called "delimited identifiers". Basically, this says if you want case sensitivity in identifier names, you have to use double quotes wherever you refer to the identifier. Without the double quotes, the SQL implementor can either use UPPERCASE (as I believe Oracle and DB2 do) or lowercase (as PG does) when it displays those identifiers.

Joe

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
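A small illustration of the delimited-identifier rule described above (a hypothetical table):

CREATE TABLE "UserProfile" ("UserId" integer);  -- case is preserved, but from now on...
SELECT "UserId" FROM "UserProfile";             -- ...you must quote it everywhere
SELECT userid FROM userprofile;                 -- ERROR: relation "userprofile" does not exist

CREATE TABLE user_profile (user_id integer);    -- unquoted names are folded to lower case
SELECT USER_ID FROM User_Profile;               -- works: any spelling folds to user_profile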
Re: [SQL] show privileges
On Mon, 2006-11-06 at 17:01 -0600, Aaron Bono wrote: > On 11/2/06, Rares Vernica <[EMAIL PROTECTED]> wrote: > Hi, > > How can I view the privileges that an user or a role has? > > Or what is the equivalent of "show privileges" from MySQL? > > > select * from pg_user; > > Hey guys, this comes up every so often. Could some kind of syntax be > added, at least to the psql tool, to get this kind of information. It > would be really handy for administrators. Also, if it were on the > documentation under the GRANT privileges section that would help > immensely. I always have to hunt this down when I need it. You mean something like \du at the psql prompt? Joe ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Between and miliseconds (timestamps)
Hi Ezequias,

On Fri, 2006-11-10 at 16:34 -0200, Ezequias Rodrigues da Rocha wrote:
> Ok thank you very much, but the suggestion (SQL: when field = '2006-09-06'::date) does not retrieve any row. It does not retrieve any error too, but the suggestion does not applied to my requisition.

I think you want to cast the field, not the constant, e.g.,

testdb=> select * from t2 where date(tm) = '2006-9-6';
            tm
---------------------------
 2006-09-06 00:00:01-04
 2006-09-06 23:59:59.99-04
(2 rows)

Joe

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] Question about time
On Thu, 2006-11-16 at 10:14 -0600, Judith wrote: > Hi everyone I'm doing a query like this: > > SELECT CAST(fecha_hora_factura as time) FROM nota_venta > > and I get: > > 14:16:52.824395 > > the field is timestamp type... I just want the HOUR:MINUTE:SECOND > > the question is how I drop the millisecond?? Take a look at date_trunc() under Date/Time Functions and Operators. Joe ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
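For instance, assuming fecha_hora_factura is indeed a timestamp column, something along these lines should do it (a sketch, not tested against the poster's table):

SELECT CAST(date_trunc('second', fecha_hora_factura) AS time) FROM nota_venta;
-- 14:16:52.824395 becomes 14:16:52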
Re: [SQL] Numbers
On Fri, 2006-11-24 at 11:07 -0200, Ezequias Rodrigues da Rocha wrote: > Does anybody have numbers of PostgreSQL in action ? > > Numbers like the biggest insert in mileseconds, the larger database etc ? First, you may want to post this in the GENERAL or in the PERFORMANCE lists since this isn't really about SQL. Second, you may want to look at the case studies page: http://www.postgresql.org/about/casestudies/. Third, the companies like EnterpriseDB and Pervasive may have some of what you're looking for since they have to measure themselves against the competition. And remember to take any numbers with a large grain of salt, YMMV, etc. Joe ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [SQL] Need to subtract values between previous and current row
On Fri, 2006-12-15 at 18:27 +0600, Partha Guha Roy wrote:
> I have a table that contains data like this:
>
> ID    ATIME (MM/dd/)
> ====================
> 1     10/12/2006
> 2     10/14/2006
> 3     10/18/2006
> 4     10/22/2006
> 5     10/30/2006
>
> Now I need a query that will subtract atime of row 1 from row 2, row2 from row3 and so on...
>
> Can anyone please help?

How about something like this:

select x.id, x.atime, x.atime - y.atime as diff
  from yourtable x, yourtable y
 where x.id = y.id + 1;

Joe

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Log, Logs and more Logs
On Wed, 2007-01-31 at 10:12 -0200, Ezequias Rodrigues da Rocha wrote: > I restart the server but the pg_log folder doesn't appears. > > My log session is like this on postgresql.conf: > > # These are only used if redirect_stderr is on: Did you look at the setting of redirect_stderr, and the various logging settings just above it? Joe ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
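For reference, the stderr-logging settings in postgresql.conf typically look something like this (a sketch; the directory and filename pattern shown are the usual defaults, adjust as needed):

redirect_stderr = on                               # capture stderr into log files
log_directory = 'pg_log'                           # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'    # one file per rotation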
Re: [SQL] Open a Transaction
Hi Ezequias,

On Thu, 2007-02-08 at 14:50 -0200, Ezequias Rodrigues da Rocha wrote:
> Hi list,
>
> Could someone tell me what is wrong on this statement ?
>
> Start Transaction

The above should read begin;

> delete from base.something
> where
> id in(
> 41503,
> 41504,
> 41505,
> 41506,
> 41507,
> 41508,
> 41509,
> 41510,
> 41511,
> 41512,
> 41513,
> 41514,
> 41515,
> 41516,
> 41517,
> 41518,
> 41519,
> 41520,
> 41521,
> 41522,
> 41523,
> 41524,
> 41525,
> 41526,
> 41527,
> 41528,

Remove the extra comma.

> )
> end;

Remove 'end'.

> Commit;

Joe

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
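Putting those three corrections together, the statement would read (id list abbreviated here):

begin;

delete from base.something
 where id in (
    41503,
    41504,
    -- ... the remaining ids ...
    41528
 );

commit;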
Re: [SQL] huge disparities in =/IN/BETWEEN performance
Hi Tom, On Thu, 2007-02-08 at 22:50 -0500, Tom Lane wrote: > There's a datatype abstraction issue involved: what does it take to > prove that "x >= 10 AND x <= 10" is equivalent to "x = 10"? This > requires a nontrivial amount of knowledge about the operators involved. > We could probably do it for operators appearing in a btree operator > class, but as Alvaro says, it'd be cycles wasted for non-dumb queries. Are you saying the planner is datatype-agnostic and can't tell that x is, say, as in the example above, an INTEGER and therefore cannot transform one expression into another? What about "x = 10 AND x < 5"? Can't it reduce that to FALSE? Joe ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] huge disparities in =/IN/BETWEEN performance
Hi Tom,

On Thu, 2007-02-08 at 23:24 -0500, Tom Lane wrote:
> Certainly, but your other concerns don't follow from that. The issue at
> hand here is whether it's worth expending cycles on every query to try
> to detect a situation that only holds for a few.

Those were George's concerns, but I was interested in knowing whether the planner transforms a query in any way to avoid, let's say, useless execution. George didn't provide the inside of his view, but it's possible that my earlier example could be rephrased as follows:

create view v_foo as
  select * from tab where x < 5;

select * from v_foo where x = 10;

Presumably the planner has to transform the SELECT into

select * from tab where x < 5 and x = 10;

or something analogous. This is a simple example, but with complicated views or even joins and aggregates, the "useless execution" may not be that obvious to the "naked eye" but it would be to a boolean logic analyzer.

As to whether these query instances represent few or are typical is arguable, and will depend on the type of application, level of knowledge among users, and what kind of interfaces are used to enter or generate the queries.

Joe

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] COPY FROM query.
Hi Paul,

On Mon, 2007-02-12 at 08:19 +0900, Paul Lambert wrote:
> I'm attempting to copy from a table into a file using a select query
> inside the copy.
>
> The following is my command:
>
> COPY (SELECT DISTINCT ON (dealer_id,appraisal_id) * FROM
> appraisals_temp) TO 'c:\autodrs_appraisal_new.txt' WITH DELIMITER AS '^'
> CSV HEADER;
>
> I get the following returned:
>
> WARNING: nonstandard use of escape in a string literal
> LINE 1: ...ealer_id,appraisal_id) * FROM appraisals_temp) TO 'C:\autodr...
>                                            ^
> HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
>
> ERROR: relative path not allowed for COPY to file
> SQL state: 42602
>
> (The caret character is pointing to the M in FROM)

I believe that on Windows you need to use double backslashes, i.e., 'c:\\autodrs_appraisal_new.txt', although the regular slash may also work, i.e., 'c:/autodrs_appraisal_new.txt'.

Joe

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
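For instance, either of these variants should get past the escape warning (a sketch based on the statement above; not tested on Windows):

-- forward slashes work for Windows paths:
COPY (SELECT DISTINCT ON (dealer_id, appraisal_id) * FROM appraisals_temp)
  TO 'c:/autodrs_appraisal_new.txt' WITH DELIMITER AS '^' CSV HEADER;

-- or keep backslashes and double them:
COPY (SELECT DISTINCT ON (dealer_id, appraisal_id) * FROM appraisals_temp)
  TO E'c:\\autodrs_appraisal_new.txt' WITH DELIMITER AS '^' CSV HEADER;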
Re: [SQL] simple web search
Hello Louis-David, On Fri, 2007-02-23 at 17:27 +0100, Louis-David Mitterrand wrote: > I'm considering implementing a search box on my review web site > http://lesculturelles.net and am looking for a simple way to match > entered words against several columns on related tables: show.show_name, > story.title, person.firtname, person.lastname, etc. > > What is the most elegant way to build a single query to match search > words with multiple columns? You may want to take a look at contrib/tsearch2. Joe ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] how do I to generate a sequence Range or Set of integer constants
On Fri, 2007-02-23 at 19:25 +0100, Stefan Becker wrote: > dear SQL friends, > > What I want to do might be done differantly. Right now I can't > think of another solution other than a select statement > > I would like to create a sequence range of integer constants. Join > this sequence against a ID Range in a database and look for missing > Id's. > > Another application for this would be to simply populate a database with > say 1000.. Records > > Now: Is there a syntax that allows for the following. > > create table XX (id int); > insert into XX (select xx from "1 to 1000" of integers) > > or... > > select IntSeq.MissingValues, x.UniqIntId,x.A,x.B,x.C, > from MyDataTable x > left outer join > ( > select MissingValues from "1 to 1000" of integers > ) IntSeq on MissingValues=x.UniqIntId > > > I'm hoping that someone has done this and might be able to > point to some function or methode to do this Maybe something like this will help: SELECT id FROM generate_series(1, (SELECT last_value FROM id_seq)) AS s(id) EXCEPT SELECT UniqIntId FROM MyDataTable ORDER BY id; The id_seq is the sequence on your ID column, assuming it has one, or you can replace the (SELECT ... FROM id_seq) by 1000. Joe ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
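For the other use case mentioned (simply populating a table with, say, 1000 rows), generate_series can be used directly; a minimal sketch:

create table XX (id int);
insert into XX (id) select s from generate_series(1, 1000) as s;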
Re: [SQL] pg_dump inquiry
On Thu, 2007-03-01 at 12:30 -0300, Osvaldo Rosario Kussama wrote: > http://www.postgresql.org/docs/8.2/interactive/app-pgdump.html > "Multiple tables can be selected by writing multiple -t switches. Also, > the table parameter is interpreted as a pattern according to the same > rules used by psql's \d commands (see Patterns), so multiple tables can > also be selected by writing wildcard characters in the pattern." But note that this is a new feature in 8.2. In 8.1 and earlier, multiple -t switches will only get you the last one specified. Joe ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] Statistics
On Fri, 2007-03-09 at 14:22 -0300, Ezequias Rodrigues da Rocha wrote: > Does someone have statistcs from PostgreSQL ? Numbers from the list, > performance statistics. I must argue with another person the idea of > do not put Oracle in our organization. > > We are quite well with postgresql and I have no plans to change my > plataform. Allow me to quote from my response to your previous message back in November: On Fri, 2006-11-24 at 11:07 -0200, Ezequias Rodrigues da Rocha wrote: > Does anybody have numbers of PostgreSQL in action ? > > Numbers like the biggest insert in mileseconds, the larger database etc ? First, you may want to post this in the GENERAL or in the PERFORMANCE lists since this isn't really about SQL. Second, you may want to look at the case studies page: http://www.postgresql.org/about/casestudies/. Third, the companies like EnterpriseDB and Pervasive may have some of what you're looking for since they have to measure themselves against the competition. And remember to take any numbers with a large grain of salt, YMMV, etc. Joe ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Dummy question
Hi Ezequias, On Thu, 2007-03-22 at 16:43 -0300, Ezequias R. da Rocha wrote: > I must use a select * from some table but i must do a join and it must > be in the were clause. > > I can't put select * from tabel1 as tb1, table2 as tb2 > where tb2.id = 2 > and tb1.fk_tb2ID = tb2.id Try select tb1.* from tabel1 as tb1, table2 as tb2 where tb2.id = 2 and tb1.fk_tb2ID = tb2.id; Joe ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] exception handling in postgres plpgsql
Hi, On Tue, 2007-04-03 at 15:35 -0700, Karthikeyan Sundaram wrote: > exception >when NO_DATA_FOUND >then > return 100; > end; > $$ > language 'plpgsql'; > > When I compile, I am getting an error message > ERROR: unrecognized exception condition "no_data_found" > CONTEXT: compile of PL/pgSQL function "audio_format_func" near line > 15 The constant is no_data. See http://www.postgresql.org/docs/8.2/static/errcodes-appendix.html Joe ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Urgent help in bit_string data type
Hi skarthi,

On Wed, 2007-04-11 at 13:30 -0700, Karthikeyan Sundaram wrote:
> insert into test_a values (to_char(1,'9'));
>
> ERROR: column "b" is of type bit but expression is of type text
> HINT: You will need to rewrite or cast the expression.

As suggested by the error, you should use a cast, e.g.,

insert into test_a values (9::bit(3));

This will result in binary '001' being inserted, because you need 4 bits to represent decimal 9.

Joe

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [ADMIN] [SQL] Urgent help in bit_string data type
Hi skarthi,

On Wed, 2007-04-11 at 15:01 -0700, Karthikeyan Sundaram wrote:
> create table test_a (b bit(3));
>
> create view test_vw (b1, b2, b3)
> as select
>    to_number(substring(b,1,1)::int,'9') as b1,
>    to_number(substring(b,2,1)::int,'9') as b2,
>    to_number(substring(b,3,1)::int,'9') as b3
> from test_a;
>
> create or replace rule test_a_ins as on insert to test_vw
> do instead
> insert into test_a (b) values (COALESCE(new.b1::bit,'1')||
>    COALESCE(new.b2::bit,'0')||COALESCE(new.b3::bit,'0')::bit);
>
> ERROR: cannot cast type numeric to bit
>
> How will I resolve this?

*My* question is why are you doing such convoluted conversions, from bit string to text, then to int, etc.? It seems to me like you want to manipulate bits and if that's the case, you should be using the bit string operators, as someone pointed out a couple of days ago. In case you haven't looked at them, please see: http://www.postgresql.org/docs/8.2/static/functions-bitstring.html

Joe

---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [ADMIN] [SQL] Urgent help in bit_string data type
Hi skarthi,

On Wed, 2007-04-11 at 16:29 -0700, Karthikeyan Sundaram wrote:
> The reason why I am asking is, we are building an interface layer
> where all our users will have a view. They shouldn't know anything
> about how and where the data is stored in the table. They can be seen
> only by the portal which will use view.
>
> That's the reason.

I can understand using views to hide data from users, but that was not what I was asking about. It seems that you still have not read the page that we referenced. Consider the following:

test=> create table test_a (b bit(3));
CREATE TABLE
test=> insert into test_a values (b'001');
INSERT 0 1
test=> insert into test_a values (b'010');
INSERT 0 1
test=> insert into test_a values (b'101');
INSERT 0 1
test=> select * from test_a;
  b
-----
 001
 010
 101
(3 rows)

test=> create or replace view test_vw as
test->   select b::bit(1) as b2, (b<<1)::bit(1) as b1,
test->          (b<<2)::bit(1) as b0 from test_a;
CREATE VIEW
test=> select * from test_vw;
 b2 | b1 | b0
----+----+----
 0  | 0  | 1
 0  | 1  | 0
 1  | 0  | 1
(3 rows)

The view above gives the same results as your original view, but only uses bit manipulations (and the only counterintuitive part is ::bit(1) gives you the MSB). Your view has to convert a bit string to text (or maybe bytea) for the substring function, then it has to convert the text to int because of your explicit cast, and finally it has to convert back to text for the to_number function. The result of to_number is numeric and you're trying to cast it to bit, which is what the ERROR was telling you can't do.

Joe

---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [SQL] A long-running transaction
Hi Andrew, On Fri, 2007-04-13 at 07:49 -0400, Andrew Sullivan wrote: > Because in the general case, you need the intermediate rows. The > problem is that you'd have to write a special piece of code to catch > the case where nobody else can see the row that you're about to > expire, and that test isn't free. Therefore, you do it the same way > any other row gets expired. Just curious: is there a way to defeat MVCC? i.e., if you can lock the database exclusively, there won't be any readers or writers? Joe ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] We all are looped on Internet: request + transport = invariant
Hi Richard, On Fri, 2007-04-20 at 13:50 +0100, Richard Huxton wrote: > OK, but I'm still not seeing how this avoids me having to use > PHP/Perl/etc anyway. I mean, I'm going to need some application logic at > some point, in which case who cares whether the RDBMS has this specific > layout as a format. I can see how it might be useful as a library, but > then there are libraries that provide plenty of XML formatting. But if TML catches on so that "even a caveman can do it" *, i.e., query a database from just a browser, then you and I (and most everyone on this list) will be out of a job and PHP/Perl/etc. will be relegated to the dustbin of programming languages. [Sorry, couldn't resist :-) ] Joe * see http://en.wikipedia.org/wiki/Geico ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] We all are looped on Internet: request + transport = invariant
Hi Richard, On Fri, 2007-04-20 at 12:14 +0100, Richard Huxton wrote: > I'm not sure that anyone is clear why you just don't write this as a > simple php/perl/ruby/whatever script? There are libraries that will > output XML for you in most of these I'd suspect, and if not it's just an > afternoon's work. > > I mean, your application is written in *some* language, isn't it? It appears that Dmitry wants that an application, written in whatever language, will issue an SQL query (or maybe a TML "query" such as "a.b.c") and get back the XML. He's not concerned about loss of datatype information because from his perspective the application will just "present" the XML and similarly he's not concerned about DDL features because TML is a "transport" mechanism. I hope I haven't totally misrepresented Dmitry's words, but it seems he wants to create a universal "database to web" interface language, so that you and I won't have to deal with the pesky complications of retrieving data in C with libpq or with PHP, Python and what not libraries or "adapters", and then have to transform that data for display to the user. However, IMHO this appears to be rather simplistic and not completely grounded in actual experience. In other words, Dmitry, have you written a TML parser or a libpq prototype that given a TML "query" produces the XML and tested it with datatypes other than integer and float. Or do you have feedback from actual average users that TML is indeed easier to use than SQL? Two other items to think about. One is that the scheme apparently requires that tables be defined in a special way, but no indication is given as to how this will be accomplished or ensured that it is correct. Second is how does TML handle relational manipulations such as restriction, projection or aggregation. It appears TML is primarily for joins. Lastly, Dmitry, I think you'll be better off discussing this in comp.databases.theory. Joe ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] We all are looped on Internet: request + transport = invariant
Hi Dmitry,

On Mon, 2007-04-23 at 11:20 +0300, Dmitry Turin wrote:
> >I would change your examples to use less abstract
> > data, like department/employee, customer/product/order/order_line
>
> This will not help.
> To my mind, forum of real database is place,

"What we got here is ... failure to communicate." I may be wrong, but I suspect most of the people in this forum take a look at

create table a (
  id num primary key;
  data float;
);
create table b (
  id num primary key;
  ref num references a(id);
  data float;
);
create table c (
  id num primary key;
  link num references b(id);
  data float;
);

or even

create table a (
  id integer primary key,
  data float);
create table b (
  id integer primary key,
  ref integer references a(id),
  data float);
create table c (
  id integer primary key,
  link integer references b(id),
  data float);

and your attempts to derive XML from them and ask questions like "how will this help me in my day-to-day job" and "what do a, b and c represent or are analogous to?" and all they'll see is a hierarchy of some sort (as evident by several of the responses) and they'll dismiss it because they have to deal with more complex relationships and the relational model allows them to express such connections.

OTOH, if you were to take a real-life example, and show you can generate XML from your proposed TML, I contend that then you'd find more people receptive to your ideas or at least able to criticize them from more concrete viewpoints. Take a look for example at Philip Greenspun's "SQL for Web Nerds". It's dated and its examples are based on Oracle syntax, but it's based on a real web application. Show us what TML can do for the users, bboard and classified_ads tables (http://philip.greenspun.com/sql/data-modeling.html), and how it can answer the simple and complex queries in the next two chapters (or as I said, pick a real-life example of your own) and then maybe we'll have a more fruitful conversation.

Joe

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] We all are looped on Internet: request + transport = invariant
Hi Dmitry, On Tue, 2007-04-24 at 15:31 +0300, Dmitry Turin wrote: > J> How do I see employees in just one department? > > department[id="1"].employee >>; > > or > > department[name="Technical"].employee >>; How is that any different or better than a standard SQL SELECT (or to use another context, a Tutorial D statement)? > J> How do I see a single employee? > > employee[id="31"] >>; > > or > > employee[name="Tomson"] >>; >-- maybe several employee with surname Tomson What if I want to see specific columns of the employee row or see the name of the department that Tomson works in or the name of Tomson's manager? > J> How do I actually retrieve SUM(salary) > > Declare field "sum" as not fictional. > > You can always declare field "sum" as not fictional and > put sign "#" before field to prevent output of field. This "fictional" attribute of yours implies a change to SQL DDL. > J> GROUP BY department? > > It is always "group by", because it is field of record "department" That is only in *your* view of the data. What if someone comes along and wants to do SELECT name, SUM(salary) FROM employee GROUP BY name? Or GROUP BY date_trunc('year', hire_date)? The point is that SQL has been refined (although some may disagree :-) over the past quarter century to answer this and many other questions and I don't see how TML improves on that (for an enhanced answer to SQL, take a look at Tutorial D). And XML is *not* the only answer to viewing or even "transporting" data. Some people prefer a tabular representation, others may prefer a hierarchical view, and yet others want to see a chart (and even the same person may prefer a different view for various purposes or at various times). And there's nothing wrong with Perl, PHP, Python and the myriad interface languages. That's what "free" software is all about. The POSTGRES UCB project had a language called PostQUEL, which may have been technically superior to SQL, but market forces (unfortunately not all "free") pushed Postgres95 and then PostgreSQL to adopt the latter in preference to PostQUEL. Maybe one day we'll have one (or two) interface languages to PostgreSQL that 80% of the users will use, but it will not be because you or I or even the PG Global Dev Group leaders say it's "better" or "it's necessary", but because thousands of users may agree and converge on those choices. Joe ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] We all are looped on Internet: request + transport = invariant
Hi Dmitry, On Wed, 2007-04-25 at 10:47 +0300, Dmitry Turin wrote: > J> And there's nothing wrong with Perl, PHP, Python and the myriad > J> interface languages. > > I said many times, what is wrong: > applied users can not join sql and perl, can not use libraries, > and can not adjust web-server. I strongly disagree. I have not taken any formal courses on PHP, HTML, Apache or Python, and I only took a couple of week-long courses on SQL ages ago (Perl I don't care for). Yet I've learned enough on my own to "join" them and use their libraries and put up a website. And I believe there are others on this list and elsewhere that have done so, to varying degrees. And yet others may require the assistance of a technical specialist or a full-time programming team, but what's wrong with that? > J> thousands of users may agree and converge on those choices. > > 1. Not users, but programmers. > 2. Needs are produced also, as goods and capital goods. > Karl Marks >For example, look at yourself. We are on diametrically opposed sides of that argument, but it's off-topic, so I'll leave it alone. Joe ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] We all are looped on Internet: request + transport = invariant
Dmitry, On Thu, 2007-04-26 at 11:33 +0300, Dmitry Turin wrote: > Joe, i speak not about you, but about statistics. Do you actually have statistics of how many people in the general population have learned SQL? And furthermore, how many of those people didn't already know or didn't want to bother learning another computer-related language? > J> there are others on this list and elsewhere that have done so > > Second question: why? > For what people must learn php, etc, > if they need only output data from database. If people only need to examine or visualize data, they can use any number of "user friendly" tools: PgAdmin, Excel with ODBC, etc. (and these may not be the best examples). However, someone will still have to learn SQL and C, PHP, Python, etc., to create an application to input and maintain the data. > >others may require the assistance of a > J> technical specialist or a full-time programming team, but what's wrong > J> with that? > > expenses of money, time, nerve to explain task to other man I believe your worldview is affected by the people you interact with. The average man or woman on the street doesn't want to know about SQL, relational databases, third normal form or, with all due respect, TML. If they have Internet access, they want to enter a query string into something like Google or Amazon and look at the results. Do you think if Amazon gave raw read access to their underlying database, many people would jump to query it with SQL, TML or any computer language? And even those who did, when they found a book of their choice they would still need APPLICATION code to enter their order. OTOH, I think you're dealing with a select group of scientists who have these data and they would love to "mine" it and analyze it to the nth degree but they don't want to jump through (too many) hoops to do so. And maybe TML is just what they need. And there are others (business people, for example) who also need to analyze the data they have and they don't mind paying for programmers, DBAs, etc. (even if sometimes it may be tough for them explaining *what* they want). For those, TML may or may not be a solution. The marketplace will decide. Joe ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] We all are looped on Internet: request + transport = invariant
Hi Dmitry,

On Wed, 2007-05-02 at 08:05 +0300, Dmitry Turin wrote:
> J> The average man or woman on the street
>
> For what you say about street ?
> Average people, which you can meet on street, make physical job.

That is an American colloquialism to refer to just about anyone, regardless of what kind of work they do. The point is that --using the Pareto principle-- 80% (probably much more) of the people don't know SQL, nor are they fluent in other programming languages, and they don't want to be bothered with *any* of it except to use the products and services that are made possible through them.

According to the U.S. Bureau of Labor Statistics (BLS) there were 455,000 programming jobs in 2004. Even if you raise that by an order of magnitude you're still talking about less than 2% of the U.S. population. The BLS estimates there were 16,000 physicists/astronomers and 77,000 biological scientists. So the software/programming and scientist populations may be roughly comparable. While the ratios may be better in some other countries, I doubt that they're much off.

The bottom line is: the markets for PHP/etc. and TML are not too large, but you seem to be having a hard time convincing those of us who've taken even a mild interest in TML that it's really needed or is a better solution than what exists today.

Joe

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Argentinian timezone change at the last moment. How to change pgsql tz db?
Scott Marlowe wrote: I've been reading up on zic and wondering if it's a reasonable thing to try and update the pg tz db to include the new argentinian DST change. Where is the tz info stored in postgres? In the catalog? Typically in /usr/share/postgresql/timezone or maybe /usr/local/share/postgresql/timezone. Joe ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [SQL] UTF8 encoding and non-text data types
Tom Lane wrote: "Medi Montaseri" <[EMAIL PROTECTED]> writes: insert into t1 (c1, cost) values ('tewt', 1234) this works find insert into t1 (c1, cost) values ('شد', ۱۲۳۴) DBD::Pg::db do failed: ERROR: syntax error at or near ";" at character 59, Well, you've got two problems there. The first and biggest is that &#NNN; is an HTML notation, not a SQL notation; no SQL database is going to think that that string in its input is a representation of a single Unicode character. The other problem is that even if this did happen, code points 1777 and nearby are not digits; they're something or other in Arabic, apparently. Precisely. 1777 through 1780 decimal equate to code points U+06F1 through U+06F4, which correspond to the Arabic numerals 1 through 4. Joe ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] UTF8 encoding and non-text data types
Tom Lane wrote: Oh? Interesting. But even if we wanted to teach Postgres about that, wouldn't there be a pretty strong risk of getting confused by Arabic's right-to-left writing direction? Wouldn't be real helpful if the entry came out as 4321 when the user wanted 1234. Definitely seems like something that had better be left to the application side, where there's more context about what the string means. The Arabic language is written right-to-left, except ... when it comes to numbers. http://www2.ignatius.edu/faculty/turner/arabic/anumbers.htm I agree that it's application specific. The HTML/Perl script ought to convert to Western numerals. Joe ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] Select into
Gavin 'Beau' Baumanis wrote:

> The copy is inside the same table, so I don't understand why it (the required query) would require any joins. Ie. I want to copy the contents of a row (but for the id column - of course) into a record in the same table.

I think what you want is something like this:

Given (col1 being the id or PK):

 col1 | col2 | col3
------+------+---------------
    1 |  123 | first record
    2 |  456 | second record
    3 |  789 | third record

then

update t1
   set col2 = t1copy.col2, col3 = t1copy.col3
  from t1 as t1copy
 where t1.col1 = 1 and t1copy.col1 = 3;

will result in:

 col1 | col2 | col3
------+------+---------------
    1 |  789 | third record
    2 |  456 | second record
    3 |  789 | third record

So, it is a join ... of a table with a virtual copy of itself.

Joe

-- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Select into
Gurjeet Singh wrote:

> Except that it doesn't work... Did you try to execute that query; I am assuming not.

Of course I did, do you think I create results by editing them into my email?

The script:

delete from t1;
insert into t1 values (1, 123, 'first record');
insert into t1 values (2, 456, 'second record');
insert into t1 values (3, 789, 'third record');
select * from t1;
update t1
   set col2 = t1copy.col2, col3 = t1copy.col3
  from t1 as t1copy
 where t1.col1 = 1 and t1copy.col1 = 3;
select * from t1;
select version();

The output:

DELETE 3
INSERT 0 1
INSERT 0 1
INSERT 0 1
 col1 | col2 | col3
------+------+---------------
    1 |  123 | first record
    2 |  456 | second record
    3 |  789 | third record
(3 rows)

UPDATE 1
 col1 | col2 | col3
------+------+---------------
    2 |  456 | second record
    3 |  789 | third record
    1 |  789 | third record
(3 rows)

 version
------------------------------------------------------------------------
 PostgreSQL 8.1.9 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.3
(1 row)

And BTW, I also tried your UPDATE SET (salary, name) but that only works on PG 8.2 and above. I don't see why my query would fail in subsequent releases.

Joe

-- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Protection from SQL injection
Ivan Sergio Borgonovo wrote: It'd be nice to have a wrapper that let you write prepared statements this way: "select a.id, b.name from a join b on a.id=b.id where a.status=$variable1 and b.id>$variable2 etc... but that's a pretty good change to any language parser. Python already supports something like that. See PEP 249 (http://www.python.org/dev/peps/pep-0249/), under Module Interface, the description of the paramstyle parameter. Psycopg2 supports both the "format" (C printf) and "pyformat" styles. See the last section on this page for an example using the pyformat style: http://www.devx.com/opensource/Article/29071/0/page/3. Joe -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
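For reference, the same kind of parameter binding is also available at the SQL level via server-side prepared statements; a minimal sketch reusing the query from the quoted message (statement name and parameter types are illustrative):

PREPARE find_rows(int, int) AS
    SELECT a.id, b.name
    FROM a JOIN b ON a.id = b.id
    WHERE a.status = $1 AND b.id > $2;

EXECUTE find_rows(1, 100);   -- parameter values travel separately from the SQL text
DEALLOCATE find_rows;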
Re: [SQL] Protection from SQL injection
Ivan Sergio Borgonovo wrote: That's better than nothing but it is still a lot of code duplication. You've to write column names in the sql statement and in the array and... column values are not contextual to the statement. The apparent duplication in the example stems from its tutorial nature. In a real program, the namedict "array" (it's actually a Python tuple--an immutable array) would normally be constructed programmatically from user or other input. Note also that although Joshua chose to use dictionary keys named identically to the PG column names, they could be named differently, like "first" and "last". That's easy... while what I wrote above does look as requiring a really special parser. Furthermore from the example it looks as if all this is going to miss the scope to prevent sql injection since it doesn't support prepared statements. I assume you didn't check the PEP 249 (http://www.python.org/dev/peps/pep-0249/). The execute() and executemany() Cursor object methods are precisely to prepare and execute database operations. Joe -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] bcp.exe Fix Triggers
yazicivo wrote: But executing this procedure fails for some reason I couldn't understand. > SELECT public.create_bcp_fix_triggers('commsrv'); ERROR: syntax error at or near "AS $" LINE 4: RETURNS "trigger" AS $bcp-fix$ ^ Use an underscore (_) instead of a minus (-), i.e., $bcp_fix$. Joe -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
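A minimal sketch of a dollar-quote tag the parser will accept (the function name and body here are only placeholders):

CREATE OR REPLACE FUNCTION bcp_fix_trigger() RETURNS "trigger" AS $bcp_fix$
BEGIN
    -- real fix-up logic goes here
    RETURN NEW;
END;
$bcp_fix$ LANGUAGE plpgsql;

Dollar-quote tags follow the rules for unquoted identifiers, so letters, digits, and underscores are fine, but a minus sign is not.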
Re: [SQL] Cursor
Xavier Bermeo wrote: Hi, guys... I have problems with cursosrs. Anyone have an example complete the how load and read each position of a cursor? I wait your answer Thanks...guys Assuming you're using ECPG, there are a couple of examples in the documentation, e.g., http://www.postgresql.org/docs/8.3/static/ecpg-commands.html (see "Select using cursors") and http://www.postgresql.org/docs/8.3/static/ecpg-variables.html#AEN33442. The essence is that you DECLARE the cursor, OPEN it, FETCH rows in some kind of loop, and finish by CLOSEing the cursor. To exit the loop, you can check sqlca.sqlcode for 100 (NOT FOUND) or a SQLSTATE of 02000. Joe -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
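The same pattern at the plain SQL level, for anyone not using ECPG (a minimal sketch; table and cursor names are illustrative):

BEGIN;
DECLARE mycur CURSOR FOR SELECT id, name FROM mytable ORDER BY id;
FETCH 10 FROM mycur;    -- repeat until FETCH returns no more rows
CLOSE mycur;
COMMIT;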
[SQL] ORDER BY collation order
Hi, I just found that two identical queries on two PG 8.2.7 databases with the same data and same encoding, one running on Debian and the other on FreeBSD, returned rows in a different order, even though both queries had an ORDER BY clause. Essentially, on FreeBSD a varchar starting with a double-quote character came first, ahead of a string starting with a number and a string starting with 'A', whereas on Debian the double-quote came last. Some research led to the following paragraph in the documentation: Character-string data is sorted according to the locale-specific collation order that was established when the database cluster was initialized. I guess that means the encoding of the respective template0 database is what determines the sort order (UTF8 on FreeBSD and LATIN1 on Debian), right? Unfortunately, I'm unable to change the FreeBSD cluster since it's shared with others at our hosting provider. Is there some way to override the cluster setting, or plans to allow for database-specific collation orders? Joe -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] ORDER BY collation order
Hi Scott, Scott Marlowe wrote: no, not encoding, locale, such as en_US or C determine sort order. OK, so I guess you're saying that whatever was in the LC_COLLATE environment variable at the time the template0 database was created determines the collation/sort order? Is that stored and visible somewhere? You can use varchar_pattern_ops and ~*~ operator. Search for those in the docs. What I found (http://www.postgresql.org/docs/8.2/static/indexes-opclass.html), talks about creating an index with varchar_pattern_ops but that presumably won't affect an ORDER BY result. I'm not quite sure where to find the "~*~" operator, although I did find similar ones in 9.7 Pattern Matching. In any case, I'm not sure how an operator helps in changing an ORDER BY result from "quoted" 123 Abc to 123 Abc "quoted" It's even trickier than this simple example, because on Debian which is using the en_US locale, the double quotes are disregarded for ordering purposes, e.g., Medical "Meet" Message Joe -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] ORDER BY collation order
Hi Scott, Scott Marlowe wrote: Sorry, I mentioned the wrong operator before, it's ~>~ and ~<~ (asc versus desc): smarlowe=# create table col_test (a text); CREATE TABLE smarlowe=# insert into col_test (a) values ('"quoted"'),('Abc'),('123'); INSERT 0 3 smarlowe=# select * from col_test order by a; a -- 123 Abc "quoted" (3 rows) smarlowe=# select * from col_test order by a using ~<~; a -- "quoted" 123 Abc (3 rows) smarlowe=# select * from col_test order by a using ~>~; a -- Abc 123 "quoted" (3 rows) Those operators give me "C"-style collation in the database that is using "en_US" collation, but what I would really prefer is the reverse. BTW, where are those operators documented? Neither Google nor Yahoo nor postgresql.org search return anything. Joe -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] grouping/clustering query
Steve Midgley wrote: # (invoiceid, txid) (A, 1) (A, 3) (B, 1) (B, 2) (C, 5) (D, 6) (D, 7) (E, 8) (F, 8) For journalling, I need to group/cluster this together. Is there a SQL query that can generate this output: # (journal: invoiceids, txids) [A,B] , [1,2,3] [C], [5] [D], [6,7] [E,F], [8] Hi Dave, I'm not following the logic here. A has 1,3 and B has 1,2. So why does the first line print: [A,B] , [1,2,3] What's the rule that tells the query to output this way? Is it that all of B's values are between A's values? From a purely accounting standpoint, since transaction 1 was applied to both invoices A and B, you need to group the invoices so that you can compare total invoiced against total paid. Joe -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] CREATE USER through SQL possible?
I am trying to make a administration web page for postgreSQL users. The main purpose of this web page is to add , remove and modify pgsql users. To do this I am connecting to a database under the postgres user. The following is some of the code being used. $dataSource="dbi:Pg:dbname=alidb"; $dbh = DBI->connect($dataSource, "postgres", "password"); if($dbh) { $statement = "CREATE USER oadd WITH PASSWORD rspassword"; #prepare and execute the statment $sth = $dbh->prepare($statement); $rc = $sth->execute; print "USER HAS BEEN ADDED$ADDstatement"; } Has anyone done this before? If so I could use some guidence because I can't seem to get it working. Joe Adams
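One likely problem in the snippet above is that the password has to be given as a quoted string literal; a corrected statement would look something like this (sketch):

CREATE USER oadd WITH PASSWORD 'rspassword';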
[SQL] psql question
Hello, I've been searching off-and-on for an answer to this question, but I haven't found anything but fleeting references. Here's the problem/question: On machines where I've installed PostgreSQL 7.0.2 from RPM, psql allows use of the up arrow key for history and the escape/tab key for command completion, but on my remote web host (webpipe.net) those keys don't work. What do I need to do to get these features working on my remote web host? I've seen reference to .psqlrc in the psql man file, but not much else (no syntax, etc). Thanks in advance! Joe
Re: [SQL] psql question
> > On machines where I've installed PostgreSQL 7.0.2 from RPM, psql allows use > > of the up arrow key for history and the escape/tab key for command > > completion, but on my remote web host (webpipe.net) those keys don't work. > > What do I need to do to get these features working on my remote web host? > > I've seen reference to .psqlrc in the psql man file, but not much else (no > > syntax, etc). > > If you compiled from source, make sure you had the readline-devel package > installed. > Thanks for your responses, Peter & Mathijs. Actually, since the issue is on a hosted system, I don't have access to recompile. I was hoping that maybe there would be something I could configure from my own (non-root and non-postgres) account. My question actaully took several days just to make it to the list, so in the meantime I found my own work-around -- I copied local copies of psql and libpq up to my account on the web host. Then I put my own account folder to the front of the PATH and added a LD_LIBRARY_PATH. Now when I run 'psql' it works just like on my own machine! Thanks again for the response. Joe
Re: [PHP] Re: [PHP-DB] Re: [SQL] a script that queries database periodically
> Because PHP is supposed to solve web development problems. And this is > one of them. It's very useful. Why solve one that is already solved? PHP isn't here to reinvent the wheel - get crontab and quit crying. --Joe > > -Roberto > -- > Computer Science Utah State University > Space Dynamics Laboratory Web Developer > USU Free Software & GNU/Linux Clubhttp://fslc.usu.edu > My home page - http://www.brasileiro.net/roberto > > -- > PHP General Mailing List (http://www.php.net/) > To unsubscribe, e-mail: [EMAIL PROTECTED] > For additional commands, e-mail: [EMAIL PROTECTED] > To contact the list administrators, e-mail: [EMAIL PROTECTED] ==== = Joe Stump[EMAIL PROTECTED] http://www.miester.org = ==== Need a programmer? http://www.google.com/search?q=joe+stump+resume
[SQL] Fw: Optimization recommendations request
Well, this message never made it through, but I managed to answer my own question -- I never ran vacuum analyze which caused a table scan instead of an index scan. After running vacuum analyze the query returns immediately. I would still be interested in any hints to optimize performance for very large tables (>10M tuples). Thanks, Joe > Hello, > > I'm working on an application where I need to design for one table to grow > to an extremely large size. I'm already planning to partition the data into > multiple tables, and even possibly multiple servers, but even so each table > may need to grow to the 10 - 15 million tuple range. This table will be used > for a keyed lookup and it is very important that the query return in well > under a second. I've done a small test using a dual ppro 200 server with 512 > MB RAM, running RedHat Linux 6.1, and a late November 7.1 pre-beta snapshot > of PostgreSQL (PGOPTS="-D/opt/pgsql/data -i -B 25000"). I used a test table > with about 5 million tuples. > > Details: > > CREATE TABLE foo( > guid varchar(20) not null, > ks varchar(20) not null > ); > > --> insert 5,000,000 rows, both guid and ks are 20 byte sha-1 hashes > -- tried this first > -- create index foo_idx1 on foo(guid); > -- then tried > create index foo_idx1 on foo using HASH (guid); > > SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d'; > > The query currently takes in excess of 40 seconds. I would appreciate any > suggestions for optimizing to bring this down substantially. > > Thanks in advance, > > Joe Conway >
[SQL] Optimization recommendations request
Hello, I'm working on an application where I need to design for one table to grow to an extremely large size. I'm already planning to partition the data into multiple tables, and even possibly multiple servers, but even so each table may need to grow to the 10 - 15 million tuple range. This table will be used for a keyed lookup and it is very important that the query return in well under a second. I've done a small test using a dual ppro 200 server with 512 MB RAM, running RedHat Linux 6.1, and a late November 7.1 pre-beta snapshot of PostgreSQL (PGOPTS="-D/opt/pgsql/data -i -B 25000"). I used a test table with about 5 million tuples. Details: CREATE TABLE foo( guid varchar(20) not null, ks varchar(20) not null ); --> insert 5,000,000 rows, both guid and ks are 20 byte sha-1 hashes -- tried this first -- create index foo_idx1 on foo(guid); -- then tried create index foo_idx1 on foo using HASH (guid); SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d'; The query currently takes in excess of 40 seconds. I would appreciate any suggestions for optimizing to bring this down substantially. Thanks in advance, Joe Conway
Re: [SQL] Optimization recommendations request
Subject: Re: [SQL] Optimization recommendations request > > What does explain show for your query? > I sent this a week ago using the wrong (i.e. not the one I signed up with) reply-to address, so it didn't make it to the list until just now. In the meantime I ran explain and noticed that the index wasn't being used. So I ran vacuum analyze and now I'm getting the expected performance (and explain shows the index being used). If I understand the logged statistics correctly, I'm getting results returned in anywhere from about 3 to 45 ms, depending on cache hit rate. I also repeated my test with 15 million records with similar results. Not bad at all! I am still interested in any generic optimization tips for very large tables. Thanks for taking the time to reply! Joe
[SQL] single byte unsigned integer datatype
Hello, I was looking for a datatype to represent a single byte unsigned integer. The closest thing I can find looking through the online manual is a one byte char. Are there any side-effects of using a char datatype for this purpose? Is there a better datatype to use? Thanks in advance, Joe
[SQL] current host and dbname info
Hi, I've been searching the docs and been unable to find the answer to this -- is there a way to get the current database server host tcpip address, postmaster port, and database name from a SQL query? I'd like to access those from within a plpgsql function without having to create and populate some sort of identification table. Thanks, Joe
[SQL] Fw: C function for use from PLpgSQL trigger
Hello all, I posted this (see below) Friday to the interfaces list with no response. Does anyone know if what I'm trying to do is possible, or should I just write the entire thing in a C function trigger? The latter would be unfortunate because I think it would be nice to be able to extend PLpgSQL using C functions like this. Anyway, any help or advice will be much appreciated! Thanks, Joe > Hi, > > I'm trying to create a C function that I can call from within a PLpgSQL > trigger function which will return a list of all the values in the NEW > record formatted suitably for an insert statement. I can't come up with a > way to do this directly from within PLpgSQL (i.e. iterate through an > arbitrary number of NEW.attribute). > > Can anyone tell me how I can either pass the NEW record to the C function > (this produces an error message 'NEW used in a non-rule query') or gain > access to the trigger tuple from within my C function? It seems that > although PLpgSQL is called as a trigger, the C function is called as a > regular function (makes sense) and thus has no access to the trigger tuple > (my problem). > > Any help or guidance is greatly appreciated! > > Thanks, > > Joe Conway >
Re: [SQL] RE: C function for use from PLpgSQL trigger
> You could send the column name directly into your c function. For example: > c_function_name(NEW.col1, NEW.col2, NEW.col3). Otherwise I am not sure how > to send NEW into a C function. You could try declaring NEW in your C > function as a tuple. Thanks for your reply. I was hoping that I could avoid hardcoding NEW.col1, etc, so that the function could be used for multiple relations. I've also tried to declare the input parameter to the function as a tuple, but PLpgSQL never gets that far -- it doesn't seem to support passing NEW as a parameter. Oh, well. I will probably just write all of my logic into a C function and skip PLpgSQL entirely. That's too bad because it would be far simpler (and preferrable IMHO) to write a generic trigger function in PLpgSQL and call C functions for only certain operations that PLpgSQL does not directly support. Joe
Re: [SQL] Oracle to PostgreSQL help: What is (+) in Oracle select?
> A select is done across two tables, however when joining the foreign > key, the right hand side of the equallity has (+) appended > > SELECT o.* from one o, two t where o.key = t.key(+) > > Does anyone know what this does and how I can reproduce the select in > PostgreSQL? Hi Chris, The (+) in Oracle is for an outer join. See http://www.postgresql.org/devel-corner/docs/postgres/sql-select.html , in the join-type description, left outer join. Outer joins are only available in PostgreSQL 7.1, which is currently in the late stages of beta testing. Hope this helps, Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
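Assuming the intent of the Oracle query is to keep every row of table "one" whether or not it has a match in "two", the explicit-join equivalent for PostgreSQL 7.1 would be (a sketch):

SELECT o.*
FROM one o
LEFT OUTER JOIN two t ON o.key = t.key;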
Re: [SQL] Help
> I am a user postgresql. I want to update a table > automatically when we reach monthend. i.e i want to > update some table on 31 of every month automatically > without any user attention. I want to know how to do > this. If anyone knows how to do this please mail me. i > will be ever thankfull to him Probably the easiest way to do this is to write a script and run it from cron. For example, if your update query is in a file called $HOME/bin/monthend.sql: insert into mymonthendtable(f1, f2, f3) values(123,'03/31/2001',12345.67); your script (call it $HOME/bin/monthend.sh) might look like: #!/bin/sh psql -U postgres mydatabasename < $HOME/bin/monthend.sql then run (see "man 5 crontab" for more on cron) crontab -e and add an entry like # run at 2:15 AM on the 30th of every month 15 2 30 * * $HOME/bin/monthend.sh Hope this helps, Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
Re: [SQL] Help
> Thanks for your valuable information. I tried the > cron. i typed >cron -e > and entereed into the input area. but i don't know how > to save the cron file. I pressed ctrl+z and came out > from cron. but i edit the cron file i found nothing on > it.(i.e using pico filename.) Please tell me some > description how to save the file in cron and to achive > this. I will be thankful to you. > Instead of "ctrl+z", press ":wq" (colon for command mode, w for write, q for quit). This assumes that vi is your default editor. Joe ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Problems handling errors in PL/pgSQL
> However, not all types of errors are so trapped. The most problematic > un-trapped error is referential integrity: if an INSERT or UPDATE fails > because of a referential integrity violation, the PL/pgSQL function will > still see the statement as a success and not error out. Example: > I'm not sure if this is what you're looking for, but in 7.1 you can do something like: INSERT INTO bar(barpk,foopk) VALUES(barpkval,foopkval); GET DIAGNOSTICS rows = ROW_COUNT; -- do something based on rows -- See "24.2.5.4. Obtaining other results status" at http://postgresql.readysetnet.com/users-lounge/docs/7.1/postgres/plpgsql-description.html. Hope this helps, Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
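A minimal, self-contained sketch wrapping the INSERT from the reply above (the function name is illustrative):

CREATE FUNCTION insert_bar(integer, integer) RETURNS integer AS '
DECLARE
    rows integer;
BEGIN
    INSERT INTO bar(barpk, foopk) VALUES ($1, $2);
    GET DIAGNOSTICS rows = ROW_COUNT;
    -- rows is 1 if the row went in, 0 otherwise
    RETURN rows;
END;
' LANGUAGE 'plpgsql';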
Re: [SQL] How can we match a condition among 2 diff. tables?
> > Now, i need to get the details of all employees who did > > receive NONE of the salesorders. ie.. i wish to select the > > records of table 'employee' whose 'emp_id' are not > > there in table 'salesorder'. > > > > I need to accompolish in a single query! This should work: select e.emp_id from employee as e left join salesorder as s on e.emp_id = s.emp_id where s.emp_id is null; -- Joe ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
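An equivalent anti-join written with NOT EXISTS (a sketch over the same two tables) is:

SELECT e.emp_id
FROM employee AS e
WHERE NOT EXISTS (
    SELECT 1 FROM salesorder AS s WHERE s.emp_id = e.emp_id
);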
Re: [SQL] Big table - using wrong index - why?
> phones=# \d i_pl_pseq > Index "i_pl_pseq" > Attribute | Type > ---+-- > entity| character varying(3) > pseq | bigint > btree > > phones=# explain select * from phonelog where entity = '001' and pseq >= > 9120 and pseq <= 9123; > NOTICE: QUERY PLAN: > > Index Scan using i_pl_loadtimestamp on phonelog (cost=0.00..209247.39 > rows=607 width=137) > > EXPLAIN > > phones=# \d i_pl_loadtimestamp > Index "i_pl_loadtimestamp" > Attribute | Type > ---+-- > entity| character varying(3) > loaddate | date > loadtime | time > btree Just a guess, but what happens if you build i_pl_pseq(pseq, entity), i.e. reverse the key fields? Also, has the table been vacuum analyzed? -- Joe ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
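A sketch of that suggestion (the new index name is illustrative), followed by the statistics refresh:

CREATE INDEX i_pl_pseq2 ON phonelog (pseq, entity);
VACUUM ANALYZE phonelog;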
Re: [SQL] Fuzzy matching?
> And the fuzzy_match should return True if the two phrases are no more > than that number of characters different. Thus, we should get: > > fuzzy_match('Thornton','Tornton',1) = TRUE > fuzzy_match('Thornton','Torntin',1) = FALSE > fuzzy_match('Thornton','Torntin',2) = TRUE > > Unfortunately, I cannot think of a way to make this happen in a function > without cycling through all the possible permutations of characters for > both words or doing some character-by-character comparison with > elaborate logic for placement. Either of these approaches would be very > slow, and completely unsuitable for column comparisons on large tables. > > Can anyone suggest some shortcuts here? Perhaps using pl/perl or > something similar? Sounds like you want something along the lines of soundex or metaphone? I don't see either function in PostgreSQL, but take a look at the PHP manual to see examples: http://www.php.net/manual/en/function.soundex.php , http://www.php.net/manual/en/function.metaphone.php I looked at the soundex function in the PHP source, and it looks like it would be fairly easy to port to a Postgres C function. The algorithm itself comes from Donald Knuth in "The Art Of Computer Programming, vol. 3: Sorting And Searching", Addison-Wesley (1973), pp. 391-392. HTH, -- Joe ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
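Assuming a levenshtein() edit-distance function like PHP's were available inside the database (one was later added to contrib, as the rest of this thread discusses), the requested fuzzy_match could be a thin wrapper (a sketch):

CREATE FUNCTION fuzzy_match(text, text, integer) RETURNS boolean AS '
    SELECT levenshtein($1, $2) <= $3;
' LANGUAGE 'sql';

-- fuzzy_match('Thornton', 'Tornton', 1)  -> true  (distance 1)
-- fuzzy_match('Thornton', 'Torntin', 1)  -> false (distance 2)
-- fuzzy_match('Thornton', 'Torntin', 2)  -> true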
Re: [SQL] Fuzzy matching?
> > Sounds like you want something along the lines of soundex or metaphone? I > > don't see either function in PostgreSQL, but take a look at the PHP manual > > to see examples: http://www.php.net/manual/en/function.soundex.php , > > http://www.php.net/manual/en/function.metaphone.php > > > > See /contrib/soundex. Sorry, missed that -- I only looked in the Documentation :( I guess it's not there because it is a contrib. FWIW, both Oracle and MSSQL have a built-in soundex function. In any case, metaphone is reportedly more accurate (at least for English words) than soundex, and levenshtein offers an entirely different and interesting approach. Any interest in having all three of these in the backend? -- Joe ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] Fuzzy matching?
> > > > Actually, this may even be closer to what you want: > > http://www.php.net/manual/en/function.levenshtein.php > > Hey, that's terrific! I didn't know that those programs existed > outside fo expensive proprietary software. > > Now, who can I talk into porting them (metaphone, levenstein) to > Postgres? Hey, GreatBridge folks? (this would be a significant value > enhancement for Postgres) > > -Josh I wouldn't mind doing it if the core team agrees. It will probably be a couple of weeks before I can get to it though -- not sure if that's soon enough to make it into 7.2. Should it be a contrib, or in the backend? -- Joe ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [SQL] Outer Join Syntax
Subject: [SQL] Outer Join Syntax > I'm doing a feasability study on porting our flagship product to Postgres > (from MS_SQL). I have run across a few snags, the largest of which is the > outer join syntax. MS has some nice syntactical sugar with the *=/=* > operators that Postgres dosen't seem to support. I am confused on how > to replicate the behavior however. We often link together many tables via See http://www.postgresql.org/idocs/index.php?queries.html You also might want to take a look at http://www.postgresql.org/idocs/index.php?explicit-joins.html HTH, --Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
Re: [SQL] Name Alike Challenge
> The "Name Alike" PL/pgSQL function has been posted on Roberto Mello's > cookbook: > > http://www.brasileiro.net/postgres/cookbook/view-one-recipe.adp?recipe_id=96 7 > > This function requires Joe Conway's port of the Metaphone and > Levenshtein functions to PostgreSQL, available from /contrib on CVS as > well as another site where /contrib stuff is available (anyone know > where this is?). Thanks again, Joe! I sent it as a tgz to Justin for techdocs.postgresql.org. Here's the link: http://techdocs.postgresql.org/source.php#ffuzzy Note that the lastest source in cvs has soundex included, which this tar file does not, but soundex is not needed for Josh's function and it was in previous contribs anyway. -- Joe ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL]
> im new in postgresql (actually came from SQL Server) and i was trying a > script like this > > insert into table1(field1,field2) values (select field1, field2 from table > 2); > > i dont know if this is possible (inserting a set of entries via resultset > from a select stmt in one command). If anyone has any answers, or > workarounds pls do email me Well, that syntax doesn't work on SQL Server either. I think what you want is: insert into table1(field1,field2) select field1, field2 from table2; HTH, -- Joe ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [SQL] Auto Increment
create table mytable ( myfield serial primary key, myotherfield integer); Using the SERIAL data type automatically creates the sequence and default statements for you. -Original Message- From: Mayuresh Kadu [mailto:[EMAIL PROTECTED]] Sent: Monday, October 22, 2001 2:36 AM To: [EMAIL PROTECTED] Subject: [SQL] Auto Increment hi all, could anyone tell me how to make a primary key to AUTO INCREMENT. The document is not exactly very explainatory about it :) Thankx in advance ... Mayuresh ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] How to use BYTEA type?
Christopher Sawtell wrote: > Greetings folks, > > Please could a kind soul tell me how to extract data from a BYTEA type of > field into a file, or better still tell me where I can find some doco? Bruce has already written some minimal documentation which is in the 7.2 beta. I'm hoping to add to that prior to the 7.2 release. > > This is the table:- > > create table fax_info ( id serial, comment text, image bytea) ; > > This appears to work ( no error messages ):- > > chris=# insert into fax_info ( comment, image ) values > ( 'Faking it with a computer', byteain ('picture.pgm.gz')); > INSERT 18772 1 > > Is this correct? No. You'll end up with literally the text 'picture.pgm.gz' in image. What you need to do is escape 3 special characters in your application, and then insert the escaped string directly. How exactly you do that will vary depending on your application layer language. For example, in PHP, you would do something like: $image_name = "/tmp/myimage.jpg"; $fp = fopen($image_name,"r"); $image = fread($fp, filesize($image_name)); fclose($fp); $esc_string = bytea_esc($image); $sql = "insert into fax_info(comment,image) "; $sql .= "values ('my image comment','$esc_string')"; $rs = pg_exec($conn, $sql); where bytea_esc() is the function that escapes the special characters. The three are ASCII 0, ASCII 39 (single quote), and ASCII 92 (single backslash). In 7.2 there is a libpq function which can be called from your C program to do the escaping, but for now, and in other programming environments, you may have to write your own. I have seen posts indicating that the Perl DBI library for Postgres does have this function already. The escaping is a little tricky, and again varies depending on your programming environment. When the string *reaches PostgreSQL*, it needs to be escaped like this: ASCII 0 ==> \\000 ASCII 39 ==> \' or \\047 ASCII 92 ==> \\\\ or \\134 So an input string like 'hello<NUL>world' would wind up being inserted like this (where <NUL> is a single zero byte): insert into foo(mybytea) values('hello\\000world'); As I said, the escaped string in your programming environment may need to be different. In PHP for example, one set of backslashes is stripped by the PHP language parser (so \\ becomes \), so the actual function I use looks like: function bytea_esc($ct) { $buf = ""; for ($i = 0; $i < strlen($ct); $i++) { if (ord($ct[$i]) == 0) $buf .= "\\\\000"; else if (ord($ct[$i]) == 39) $buf .= "\\\\047"; else if (ord($ct[$i]) == 92) $buf .= "\\\\134"; else $buf .= $ct[$i]; } return $buf; } > > Now, how do I get my picture out again? > To get it back out, you query it out the same as any other field. The catch is that all "non-printable" characters (which is quite a few more than the three above) are returned to you escaped, i.e. ASCII 255 will be returned as '\377'. So again you need to unescape the returned string using your application programming language. In PHP there is a native function which works great: stripcslashes(). So to complete the PHP example: $sql = "select image from fax_info "; $sql .= "where id = 1"; $rs = pg_exec($conn, $sql); $image = stripcslashes(pg_result($rs,0,0)); header("content-type: image/jpeg"); echo $image; Hope this helps, Joe ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Search by longitude/latitude
> Hi all, > > I need to implement "Find all hospitals in a 5 mile radius". Say I have all > the coordinates on them stored in a table with the fields longitude and > latitude. Has anybody some sample code for that? > > Best regards, > Chris > Here's a plpgsql function that will do what you need. You might also look at the earthdistance code in contrib if you'd rather have a C function. HTH, Joe geodist.sql Description: Binary data ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] ROUND function ??
> select ROUND(0.5) ; returns 0 > select ROUND(1.5) ; returns 2; > select ROUND(2.5) ; returns 2; > select ROUND(3.5) ; returns 4; and so on. > I'm sure you would have figured out what's happening !!! Why ?? > How do I get to approximate any number x.5 as x+1 ?? Looks like a bug to me: test=# select * from pg_proc where proname = 'round'; (3 rows; output abridged -- one entry whose prosrc is "dround", taking a float8 argument; one whose prosrc is "select round($1,0)", taking a numeric argument; and one whose prosrc is "numeric_round", taking a numeric argument plus an integer scale) test=# select round(2.5); round --- 2 (1 row) test=# select round(2.5,0); round --- 3 (1 row) test=# select round(2.5::numeric); round --- 3 (1 row) When doing "select round(2.5)" the 2.5 gets cast as a float and the "dround" function is used. When doing "select round(2.5,0)", or "select round(2.5::numeric)", the 2.5 gets cast as a numeric and the function "numeric_round" is used, producing a different result. It looks like "dround" simply calls the rint system function, so I'd guess the issue is really there (and maybe platform dependent?). I do recall at least one interpretation of rounding that calls for rounding a 5 to the even digit (ASTM), so the rint behavior may not be, strictly speaking, a bug -- but certainly having two different interpretations is. In any case, use "select round(2.5,0)" for now. Hope this helps, Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Please help me out on this insert error
Vernon Wu wrote: > Command: > > Insert into profile (userid, haveChildren)values('id98', 'No'); > > Error: > > ERROR: Relation 'profile' does not have attribute 'havaChildren' ^^^ From the error message, looks like you spelled haveChildren wrong. HTH, Joe ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] SQL Challenge: Skip Weekends
Josh Berkus wrote: > Folks, > > Hey, I need to write a date calculation function that calculates the > date after a number of *workdays* from a specific date. I pretty > much have the "skip holidays" part nailed down, but I don't have a > really good way to skip all weekends in the caluclation. Here's the > ideas I've come up with: How about this (a bit ugly, but I think it does what you want -- minus the holidays, which you said you already have figured out): create or replace function get_future_work_day(timestamp, int) returns timestamp as ' select case when extract(dow from future_work_date) = 6 then future_work_date + ''2 days'' when extract(dow from future_work_date) = 0 then future_work_date + ''1 day'' else future_work_date end from ( select $1 + (($2 / 5)::text || '' weeks'') + (($2 % 5)::text || '' days'') as future_work_date ) as t1 ' language sql; CREATE testslv=# select get_future_work_day('2002-06-20',27); get_future_work_day 2002-07-29 00:00:00-07 (1 row) HTH, Joe ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Returning rows from functions
David Durst wrote: > I was wondering if there was a way of returning a complete row from a > function, in reading the documentation of CREATE FUNCTION. I was under the > impression that you could return a row by using setof, but this does not > seem to be true. > Can anyone help? The short answer is "yes, but..."; see the thread at: http://archives.postgresql.org/pgsql-interfaces/2002-06/msg00042.php for a recent discussion about this. HTH, Joe ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
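For reference, a minimal 7.3-style example of a function returning whole rows (a sketch; the foo table is illustrative):

CREATE TABLE foo (fooid int, fooname text);
INSERT INTO foo VALUES (1, 'Joe');

CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS '
    SELECT * FROM foo WHERE fooid = $1;
' LANGUAGE sql;

SELECT * FROM getfoo(1);   -- returns complete rows, one column per table column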
Re: [SQL] contrib/dblink suggestion
Bhuvan A wrote: > I am using postgresql 7.2.1. > I badly require to interconnect between databases. contrib/dblink seems to > be handy and ofcourse it well suits my requirement. But while browsing > across, i heard that it is not advicable to use it. So i wish to know > someone's experience in using dblink and how handy it is. > > Will contrib/dblink be available with future postgresql releases? Valuable > suggestions are very welcome. > I've heard of at least two people who seem to be using dblink fairly heavily without problems. One recently reported something like 500 million records transferred without error. I use it myself, but not in what I'd call heavy use. If you are aware of any specific problems, please point me to them, and I'll fix them before the next release. Thanks, Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
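For reference, a typical call against the 7.3 version of contrib/dblink looks roughly like this (connection string, remote query, and column list are illustrative):

SELECT *
FROM dblink('dbname=otherdb host=127.0.0.1',
            'SELECT id, name FROM remote_table')
     AS t(id integer, name text);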
Re: [SQL] VACUUM not doing its job?
Kristian Eide wrote: > This is not good as the database is in use 24/7, and without the indexes > everything comes to a screeching halt. This means I probably will have to > stop the programs using the database for the time it takes to re-create the > indexes; this is better than having to dump/restore everything however :) Try REINDEX. I don't think that precludes (at least) read access. http://www.postgresql.org/idocs/index.php?sql-reindex.html You should also search the archives for threads on free space maps. You most likely need to increase yours. In particular, see: http://archives.postgresql.org/pgsql-general/2002-07/msg00972.php HTH, Joe ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] VACUUM not doing its job?
Kristian Eide wrote: > Thanks, very helpful, although there does not seem to be much description of > what the two free space map options in postgresql.conf actually do. Doing a > VACUUM ANALYZE VERBOSE on my largest table gives me: > > NOTICE: Pages 43681: Changed 208, Empty 0; Tup 1921387: Vac 61669, Keep 0, > UnUsed 1362341. > > I will try to increase the 'max_fsm_pages' option from 1 to 50 and > see if that helps. > Note that you'll need to do a vacuum full *first* to recover the lost space, since the free space map is populated as the tuples are actually freed, I believe. After that you can adjust 'max_fsm_pages' and your vacuum frequency to achieve an equilibrium. Joe ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
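A sketch of that maintenance sequence (the table name and setting value are illustrative):

-- one-time recovery of the space that has already leaked
VACUUM FULL ANALYZE bigtable;

-- in postgresql.conf (takes effect after a postmaster restart):
--   max_fsm_pages = 500000

-- from then on, plain vacuums run often enough to keep the free space map populated
VACUUM ANALYZE bigtable;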
Re: [SQL] Hardware performance for large updates
Josh Berkus wrote: > The problem: The update series (done as a function) takes 10-15 > minutes. During this time, the CPU is never more than 31% busy, only > 256mb of 512 is in use, and the disk channel is only 25% - 50% > saturated.As such, is seems like we could run things faster. > > What does everybody suggest tweaking? > I think we'd need more information to be of any help -- schema, functions, explain output, etc. I do think you probably could increase Shared Buffers, as 256 is pretty small. There's been a lot of debate over the best setting. The usual guidance is start at 25% of physical RAM (16384 == 128MB if you have 512MB RAM), then tweak to optimize performance for your application and hardware. You might also bump sort mem up a bit (maybe to 2048). Again, I would test using my app and hardware to get the best value. Are you on a Linux server -- if so I found that fdatasync works better than (the default) fsync for wal_sync_method. HTH, Joe ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] Hardware performance for large updates
Josh Berkus wrote: > Particularly, the difficulty is that this application gets many small > requests during the day (100 simultaneous uses) and shares a server > with Apache. So I have to be concerned about how much memory each > connection soaks up, during the day. At night, the maintainence tasks > run a few, really massive procedures. > > So I should probably restart Postgres with different settings at night, > hey? Actually, if you can afford the twice daily changes, it sounds like a great idea. I think you can get new conf settings to take by sending a SIGHUP to the postmaster, so you don't even really need any downtime to do it. Yup, here it is: http://www.postgresql.org/idocs/index.php?runtime-config.html >>I do think you probably could increase Shared Buffers, as 256 is >>pretty small. There's been a lot of debate over the best setting. The >>usual guidance is start at 25% of physical RAM (16384 == 128MB if you >>have 512MB RAM), then tweak to optimize performance for your >>application and hardware. > > > Hmmm... how big is a shared buffer, anyway? I'm having trouble > finding actual numbers in the docs. By default it is 8K. It's mentioned here: http://www.postgresql.org/idocs/index.php?kernel-resources.html So, as I mentioned above, Shared Buffers of 16384 == 128MB if you have a default 8K block size. >>Are you on a Linux server -- if so I found that >>fdatasync works better than (the default) fsync for wal_sync_method. > > Yes, I am. Any particular reason why fdatasync works better? I can't remember the technical reason (although I've seen one on the list before), but I have determined it empirically true, at least for my setup. Ahh, here we go: http://archives.postgresql.org/pgsql-hackers/1998-04/msg00326.php Joe ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] writing a function to mimic mysql last_insert_id
Beth Gatewood wrote: > well, I know that I will have only a single sequence that will generate the > primary key per table. So basically, this type of function, for me needs > only to return the value of the primary key. > > I believe I mentioned in one of my posts the motivation behind not wanting > to use currval()...which was to trying to avoid having the developers make a > lot of sql revisions to their application. Maybe you could use the *same* sequence for the primary key of all the tables, say "my_global_seq" (it is bigint as of 7.2 I think), and then wrap a last_insert_id() (or whatever it is called) function around a call to currval('my_global_seq'). HTH, Joe ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
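A sketch of that approach (sequence, table, and function names are all illustrative):

CREATE SEQUENCE my_global_seq;

CREATE TABLE widgets (
    id   bigint DEFAULT nextval('my_global_seq') PRIMARY KEY,
    name text
);

CREATE FUNCTION last_insert_id() RETURNS bigint AS '
    SELECT currval(''my_global_seq'');
' LANGUAGE 'sql';

INSERT INTO widgets (name) VALUES ('anvil');
SELECT last_insert_id();   -- the id just assigned in this session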
Re: [SQL] [GENERAL] Latitude / Longitude
Tim Perdue wrote: > Hi Folks - > > I'm working on a global weather database for a client and have hit an issue > which I'm sure has been solved before. Unfortunately, the site search doesn't > pull up anything. > > Essentially, I've got two tables, one with city/county/lat/long and another > with lat/long/weather data. > > None of the lat/longs in the two tables match up directly, so I can't do a > simple join of the two tables. I need to join on closest proximity on the > lat/long fields. > > Any suggestions? It seems to me this will be pretty expensive on CPU resources > unless there's a really elegant trick uncovered. I see you've gotten some other help, but in case you're interested, I'll give you another alternative. Here's a plpgsql function I wrote a while ago based on the Haversine formula: CREATE FUNCTION "zipdist" (float8,float8,float8,float8 ) RETURNS float8 AS ' DECLARE lat1 ALIAS FOR $1; lon1 ALIAS FOR $2; lat2 ALIAS FOR $3; lon2 ALIAS FOR $4; dist float8; BEGIN dist := 0.621 * 6371.2 * 2 * atan2( sqrt(abs(0 + pow(sin(radians(lat2)/2 - radians(lat1)/2),2) + cos(radians(lat1)) * cos(radians(lat2)) * pow(sin(radians(lon2)/2 - radians(lon1)/2),2))), sqrt(abs(1 - (pow(sin(radians(lat2)/2 - radians(lat1)/2),2) + cos(radians(lat1)) * cos(radians(lat2)) * pow(sin(radians(lon2)/2 - radians(lon1)/2),2))))); return dist; END; ' LANGUAGE 'plpgsql'; I used the following PHP code to start looking for a match in a small circle, and then expand it if no matches were found: $dist = INIT_DIST; $cnt = 0; $cntr = 0; do { if ((! $zip == "") && (! $dist <= 0)) { $sql = get_zip_sql($lon1d,$lat1d,$dist,$numtoshow); $rs = connexec($conn,$sql); $rsf = rsfetchrs($rs); $dist *= 2; $cntr++; } else { $cntr = 10; } } while (count($rsf) < $numtoshow && $cntr < 10); Hopefully you get the idea. As was suggested, you can narrow the results using a box to make the query perform better, and then sort by distance to get the closest alternative. Here's the related part of get_zip_sql(): function get_zip_sql($lon1d,$lat1d,$dist,$numtoshow) { $sql = " SELECT DISTINCT <column list> FROM tbl_a AS a ,tbl_d AS d ,tbl_a_zipcodes AS az ,tbl_zipcodes as z WHERE abs(z.lat - $lat1d) * 60 * 1.15078 <= $dist and abs(z.long - $lon1d) * 60 * 1.15078 <= $dist and zipdist($lat1d,$lon1d,lat,long) <= $dist and z.zip = az.zipcode ORDER BY <distance expression> LIMIT $numtoshow; "; return $sql; } The "X * 60 * 1.15078" converts differences in degrees lat/long into rough distances in miles. The zipdist function returns a more-or-less exact distance using the Haversine formula. Hope this helps. Let me know if you want/need more explanation of any of this. Joe ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] function return multiply rows
Jeroen Olthof wrote: > What is going wrong here? > > An example of what I'm trying to do. > > vw_teams is a view but same problem when trying it on a single table > CREATE FUNCTION test() RETURNS SETOF vw_teams AS 'select * from vw_teams;' > LANGUAGE 'sql'; > > SELECT test(); > > results in > >test > --- > 137789256 > 137789256 > (2 rows) The capability to return composite types (multi-column rows) is limited in <= PostgreSQL 7.2.x. What you are seeing are pointers to the rows, not the rows themselves. Version 7.3, in beta testing now, will do what you are looking for. If you can, please give it a try. See: http://developer.postgresql.org/docs/postgres/xfunc-tablefunctions.html for more info and examples. HTH, Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] split function for pl/pgsql
Frederic Logier wrote: > Great ! have you some example for call a pl/perl function from a > pl/pgsql function ? I don't use PL/Perl, but I think you should be able to do: SELECT INTO var my_plperl_func(text_to_split); from within your PL/pgSQL function. > And could i use an int array in pl/pgsql returned by the pl/perl > function ? I don't know if it is possible to construct a PostgreSQL array in pl/perl, but I would imagine that should work. Any pl/perl users out there? Joe ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] Stored Procedures
[EMAIL PROTECTED] wrote: > Ok, if this does not apply to versions prior to 7.3beta > then what do I need to do if I am running 7.2.1? When I > try to use the SETOF to retrun a row set, I only get > one column. First, prior to 7.3 there is no SCHEMA support in Postgres. Everything lives in essentially one and the same schema. In 7.2.x and before, returning a composite type (i.e. multiple columns) gives you back one column of pointers (large integer values) to the actual row of data. You can access the individual columns, but it's ugly: test=# CREATE TABLE foo (fooid int, foosubid int, fooname text); CREATE test=# INSERT INTO foo VALUES(1,1,'Joe'); INSERT 304822 1 test=# CREATE FUNCTION getfoo(int) RETURNS foo AS ' test'# SELECT * FROM foo WHERE fooid = $1; test'# ' LANGUAGE SQL; CREATE test=# select fooid(getfoo(1)), foosubid(getfoo(1)), fooname(getfoo(1)); fooid | foosubid | fooname -------+--+- 1 |1 | Joe (1 row) Joe ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] split function for pl/pgsql
Frederic Logier wrote: > hi, > > i'm looking for a split function, like perl or php. > I need doing a pl/pgsql function with a split return an array. > I must do some loop with this array for mass inserting. > > I think of doing it with pl / perl but I need to do insert and I don't > find example with pl / perl and sql. There is no split function built in to PostgreSQL currently. You could write it yourself in PL/Perl and use it in the PL/pgSQL function. In 7.3 (currently in beta) there is a split_part() function which returns just one element. I will most likely write a split function for 7.4 to return an array, similar to perl and php. In 7.3, the following will do what you want (I think): CREATE OR REPLACE FUNCTION split(text, text) RETURNS text[] AS ' DECLARE i int := 0; word text; result text := ''{''; result_arr text[]; BEGIN LOOP i := i + 1; SELECT INTO word split_part($1, $2, i); IF word = '''' THEN EXIT; END IF; IF i > 1 THEN result := result || '',"'' || word || ''"''; ELSE result := result || ''"'' || word || ''"''; END IF; END LOOP; result := result || ''}''; result_arr := result; RETURN result_arr; END ' LANGUAGE 'plpgsql'; test=# select split('a,b,c',','); split - {a,b,c} (1 row) test=# select a[1] from (select split('a,b,c',',') as a) as t; a --- a (1 row) HTH, Joe ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Stored Procedures
david williams wrote: > Also, > > the table definition MUST be in the Public Schema. I use my own schema > names but in order for the table to be found by the function it ( the > table ) must be in the public schema. Although it can be empty. (Note: this discussion does not apply to PostgreSQL releases prior to 7.3 beta) Not true. You need to be sure the schema the table is in is in your search path, or you need to fully qualify the table reference. See below for an example: -- create a new schema CREATE SCHEMA s1; CREATE SCHEMA -- change to the new schema SET search_path='s1','$user','public'; SET select current_schema(); current_schema s1 (1 row) -- create the table CREATE TABLE foo (fooid int, foosubid int, fooname text); CREATE TABLE INSERT INTO foo VALUES(1,1,'Joe'); INSERT 794076 1 -- change back to public schema, but leave s1 in the search path SET search_path='$user','public','s1'; SET select current_schema(); current_schema public (1 row) \dt List of relations Schema | Name | Type | Owner +--+---+-- s1 | foo | table | postgres (1 row) CREATE FUNCTION getfoo(int) RETURNS foo AS ' SELECT * FROM foo WHERE fooid = $1; ' LANGUAGE SQL; CREATE FUNCTION \df getfoo List of functions Result data type | Schema | Name | Argument data types --+++- foo | public | getfoo | integer (1 row) -- this will work SELECT *, upper(fooname) FROM getfoo(1) AS t1; fooid | foosubid | fooname | upper ---+--+-+--- 1 |1 | Joe | JOE (1 row) -- now try again with table name qualified in the function DROP FUNCTION getfoo(int); DROP FUNCTION -- remove s1 from the search path SET search_path='$user','public'; SET select current_schema(); current_schema public (1 row) \dt No relations found. CREATE FUNCTION getfoo(int) RETURNS s1.foo AS ' SELECT * FROM s1.foo WHERE fooid = $1; ' LANGUAGE SQL; CREATE FUNCTION \df getfoo List of functions Result data type | Schema | Name | Argument data types --+++- s1.foo | public | getfoo | integer (1 row) -- this will work SELECT *, upper(fooname) FROM getfoo(1) AS t1; fooid | foosubid | fooname | upper ---+--+-+--- 1 |1 | Joe | JOE (1 row) HTH, Joe ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] rows in order
Camila Rocha wrote: > Is there a way to put in order the rows in a table? the problem is that i w= > ant to keep a "tree" in the db, but the leaves must be ordered... > does someone have an idea? If you don't mind trying 7.3 beta, there is a function called connectby() in contrib/tablefunc. It works like this: CREATE TABLE connectby_tree(keyid text, parent_keyid text); INSERT INTO connectby_tree VALUES('row1',NULL); INSERT INTO connectby_tree VALUES('row2','row1'); INSERT INTO connectby_tree VALUES('row3','row1'); INSERT INTO connectby_tree VALUES('row4','row2'); INSERT INTO connectby_tree VALUES('row5','row2'); INSERT INTO connectby_tree VALUES('row6','row4'); INSERT INTO connectby_tree VALUES('row7','row3'); INSERT INTO connectby_tree VALUES('row8','row6'); INSERT INTO connectby_tree VALUES('row9','row5'); SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'row2', 0, '~') AS t(keyid text, parent_keyid text, level int, branch text); keyid | parent_keyid | level | branch ---+--+---+- row2 | | 0 | row2 row4 | row2 | 1 | row2~row4 row6 | row4 | 2 | row2~row4~row6 row8 | row6 | 3 | row2~row4~row6~row8 row5 | row2 | 1 | row2~row5 row9 | row5 | 2 | row2~row5~row9 (6 rows) This allows completely dynamically generated trees. There is also a contrib/ltree, which I believe creates a persistent structure for the tree information, and gives you tools to manipulate it (but I have never used it, so my discription may not be completely accurate). HTH, Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Suggestion: md5/crypt functions in sql
Aasmund Midttun Godal wrote: > It would be very usefull to have these in sql, so that it is even easier to create >tables with encrypted passwords. > See contrib/pgcrypto Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
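For reference, a couple of calls that become available once contrib/pgcrypto is installed (a sketch; the digest and salt algorithms shown are illustrative choices):

-- one-way digest, hex encoded
SELECT encode(digest('mypassword', 'md5'), 'hex');

-- crypt()-style password hashing with a generated salt
SELECT crypt('mypassword', gen_salt('md5'));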
Re: [SQL] Get A Tree from a table
Scott Yaung wrote: > I like to do something like this:(build a tree from relationship) [snip] > How can i make it by sql , and sql functions > Thanks lot and regards. There have been quite a few discussions of this topic in the past, so I would suggest you search through the archives. In 7.3 (currently in beta) you can use contrib/ltree or contrib/tablefunc. Here's an example of using the connectby() function from contrib/tablefunc: CREATE TABLE nodes(parentid varchar(20), parenttype varchar(20), childid varchar(20), childtype varchar(20)); INSERT INTO nodes values('A1', 'A', 'B1', 'B'); INSERT INTO nodes values('A2', 'A', 'B2', 'B'); INSERT INTO nodes values('A1', 'A', 'B3', 'B'); INSERT INTO nodes values('B1', 'B', 'C1', 'C'); INSERT INTO nodes values('B1', 'B', 'C2', 'C'); INSERT INTO nodes values('C1', 'C', 'D1', 'D'); INSERT INTO nodes values('A1', 'A', 'B4', 'B'); INSERT INTO nodes values('B1', 'B', 'C5', 'C'); test=# SELECT * FROM connectby('nodes','childid','parentid','A1',0,'~') AS t(childid varchar, parentid varchar, level int, branch text); childid | parentid | level | branch -+--+---+- A1 | | 0 | A1 B1 | A1 | 1 | A1~B1 C1 | B1 | 2 | A1~B1~C1 D1 | C1 | 3 | A1~B1~C1~D1 C2 | B1 | 2 | A1~B1~C2 C5 | B1 | 2 | A1~B1~C5 B3 | A1 | 1 | A1~B3 B4 | A1 | 1 | A1~B4 (8 rows) test=# SELECT * FROM connectby('nodes','childid','parentid','B1',0,'~') AS t(childid varchar, parentid varchar, level int, branch text); childid | parentid | level | branch -+--+---+-- B1 | | 0 | B1 C1 | B1 | 1 | B1~C1 D1 | C1 | 2 | B1~C1~D1 C2 | B1 | 1 | B1~C2 C5 | B1 | 1 | B1~C5 (5 rows) HTH, Joe ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]