Re: [SQL] We all are looped on Internet: request + transport = invariant
Good day, Joe.

>> J> How do I see employees in just one department?
>> department[id="1"].employee >>;
>> department[name="Technical"].employee >>;

J> How is that any different or better than a standard SQL SELECT

By the absence of a gasket (PHP and its libraries; Perl and its libraries; etc).

J> XML is *not* the only answer to viewing or even "transporting" data.

Nothing is simpler: offer your solution!

J> others want to see a chart

Nothing is simpler: offer a format for the data from which the chart will be
created!

J> And there's nothing wrong with Perl, PHP, Python and the myriad
J> interface languages.

I have said many times what is wrong: applied users cannot join SQL and Perl,
cannot use libraries, and cannot adjust a web-server.

J> I don't see how TML improves

That characterizes you, not the situation being analysed.

J> Some people prefer a tabular representation

>> J> How do I see a single employee?
>> employee[id="31"] >>;
>> employee[name="Tomson"] >>;

J> I want to see specific columns of the employee row

Mark the other fields (columns) with the sign "#" to prevent them from being
output. At the moment there is no construction like employee#[need_field],
i.e. to forbid all fields and then allow only the necessary columns
("need_field"). Probably I will add that.

J> or see the name of the department that Tomson works

department.#employee[name="Tomson"] >>;
or
#employee[name="Tomson"].department >>;

J> or the name of Tomson's manager?

Suppose table "employee" has an additional field "manager":

create table employee {
  id num primary key,
  name string,
  salary num,
  department num references department (id),
  manager bool
};

The request is (see http://sql4.by.ru/site/sql40/en/author/determination_eng.htm,
paragraph "Buckle"):

(employee[manager=true].#department.#employee[name="Tomson"],
 employee[manager=true name="Tomson"]) >>;
or
(#employee[name="Tomson"].#department.employee[manager=true],
 employee[manager=true name="Tomson"]) >>;

If you suggest another data schema, I will write the query for it.
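For comparison, the "standard SQL SELECT" versions of these requests that Joe alludes to would read as follows. This is a sketch against the hypothetical employee/department schema above (including the hypothetical "manager" column), not anything from either party's actual system:

```sql
-- Employees in one department, by id or by department name:
SELECT e.* FROM employee e WHERE e.department = 1;

SELECT e.*
  FROM employee e
  JOIN department d ON d.id = e.department
 WHERE d.name = 'Technical';

-- Name of the department Tomson works in:
SELECT d.name
  FROM department d
  JOIN employee e ON e.department = d.id
 WHERE e.name = 'Tomson';

-- Tomson's manager, assuming one flagged manager per department:
SELECT m.name
  FROM employee m
  JOIN employee e ON e.department = m.department
 WHERE e.name = 'Tomson'
   AND m.manager = true;
```

The point of contention in the thread is not whether SQL can express these, but how the results reach the user without an application layer in between.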
J> This "fictional" attribute of yours implies a change to SQL DDL.

Are you worried about technical realization, or about ideological orthodoxy?
Or maybe about something else?

J> if someone comes along and wants to do
J> SELECT name, SUM(salary) FROM employee GROUP BY name?

create view x as SELECT name, SUM(salary) FROM employee GROUP BY name;
x >>;

J> thousands of users may agree and converge on those choices.

1. Not users, but programmers.
2. Needs are produced too, just like goods and capital goods. (Karl Marx)
   For example, look at yourself.

Dmitry Turin
http://html6.by.ru
http://sql4.by.ru
http://computer2.by.ru

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: Fwd: Re[2]: [SQL] We all are looped on Internet: request + transport = invariant
Don't forget to cc: the list.

Dmitry Turin wrote:
> Good day, Richard.
> http://sql4.by.ru/site/sql40/en/author/wave_eng.htm
>
> RH> example 2
> RH> 1. I can see how one flight might follow another, but not contained.
> RH> Do you not need some new object "flight_chain" or similar?
>
> No! A fork is possible: several flights might follow another, thus several
> flights should be inside another.

1. One flight is not "contained" within another though. I board one plane,
fly, get off, board another, fly, ... - they are sequential, not contained.
If you're trying to represent multiple options you should really have
something like: These three flights represent options. How do I get such a
structure from a TML query?

2. If you nest flights then you'll be forced to repeat data, surely?
Multiple routes could end up mentioning flight id=123 several times and each
time have to repeat all possible options from it.

> RH> 2. Alternatively, the needs to indicate to which it's
> RH> referring either as / or or similar.
>
> What is 'either as / or'?

Looking at the resultset, how do I know whether a mentioned city is:
1. the start point of a flight
2. the end point of a flight

> RH> 3. How am I constructing these queries?
>
> There are five documents on
> http://sql4.by.ru/site/sql40/en/author/index_eng.htm
>
> RH> The whole point was so I didn't have to learn SQL, yes?
>
> Nearly: sorting by the first section of the tree (maybe by the second
> section and so on after that) is not suggested now (the user can sort,
> for example, in the browser with XSL).

Hang on, XSL is *not* easier to understand than SQL.

> RH> syntax for these queries isn't obvious enough to me
>
> What is not obvious - the tree notation, or working with fields inside a
> section (record) in the tree? If both of them, which is less obvious?

The syntax of your queries is not obvious. I haven't read your guide, but
you did say that the whole point was *not* to have to learn languages. If
I'm going to learn only one query language, it'll be SQL because everything
uses it.

RH> 4.
> RH> what use it is to have XML without a schema.
>
> What is 'XML without a schema'? XML is not a stored thing, it is a
> transport form for the scheme.

Umm - XML schemas, that describe valid data for an XML document.
http://www.w3.org/XML/Schema
http://www.xml.com/pub/a/2001/12/12/schemacompare.html

> RH> How does this TML setup avoid me having to write any php/perl/etc?
>
> Any inquired tree

An "inquired tree" how? I'm sat here at my laptop, and I want to get a list
of flights into a spreadsheet I'm doing for my boss.
1. How do I create my TML query (assuming I haven't read the TML manual,
   because if I can read manuals then SQL is viable)?
2. How do I get my query from my laptop to my database server?
3. How do I get results into the spreadsheet?

> (in the reduction case - an inquired table or view), which is finished by
> '>>' at the end, is output automatically (port #80, HTTP protocol,
> http://sql4.by.ru/site/sql40/en/author/inout_eng.htm and
> http://sql4.by.ru/site/sql40/en/author/introduction_eng.htm#http).

You trimmed the first part of my question: "Let's say I want to build a
holiday website. How does this TML setup avoid me having to write any
php/perl/etc?"

I'm a small travel agent, and I want to let customers see what flights I can
offer them. We're going to need a form that lets people choose from a list
of start and end cities, pick a date and see a list of flights (sorted by
cost, or length, or some such). At present I'll build that with e.g.
PHP + SQL => PostgreSQL. What part of the process does TML make easier?

--
Richard Huxton
Archonet Ltd
[SQL] Which function transform x,y to latitude/longitude?
Hi,

For example, x = 38356.62, y = 42365.19. How do I transform that to latitude
1.399948, longitude 193.92644? Which function could I use? I don't know the
algorithm.

The initial data is:
("+proj=cass +a=6378137.0 +rf=298.257223563 +lat_0=1.287639n
+lon_0=103.8516e +x_0=3 +y_0=3").

Thanks a lot.

bill
Re: [SQL] We all are looped on Internet: request + transport = invariant
Hi Dmitry,

On Wed, 2007-04-25 at 10:47 +0300, Dmitry Turin wrote:
> J> And there's nothing wrong with Perl, PHP, Python and the myriad
> J> interface languages.
>
> I said many times, what is wrong:
> applied users can not join sql and perl, can not use libraries,
> and can not adjust web-server.

I strongly disagree. I have not taken any formal courses on PHP, HTML,
Apache or Python, and I only took a couple of week-long courses on SQL ages
ago (Perl I don't care for). Yet I've learned enough on my own to "join"
them and use their libraries and put up a website. And I believe there are
others on this list and elsewhere that have done so, to varying degrees.
And yet others may require the assistance of a technical specialist or a
full-time programming team, but what's wrong with that?

> J> thousands of users may agree and converge on those choices.
>
> 1. Not users, but programmers.
> 2. Needs are produced also, as goods and capital goods.
>    Karl Marx
> For example, look at yourself.

We are on diametrically opposed sides of that argument, but it's off-topic,
so I'll leave it alone.

Joe
Re: [SQL] plpgsql array looping
One problem (unless you intend to only look at every other element) is that
you are incrementing idxptr explicitly in your loop. The FOR loop does that
for you. This is the reason your output shows only even values.

John

On Apr 24, 2007, at 4:42 PM, Richard Albright wrote:

> for idxptr in 1 .. array_upper(p_idxarray, 1)
> loop
>     exit when p_idxarray[idxptr] >= p_idx;
>     idxptr := idxptr + 1;
>     raise notice 'idx ptr: %', idxptr;
> end loop;

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
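The skip-every-other-element behavior John describes is easy to reproduce outside plpgsql. A minimal Python sketch of the same mistake (the loop index advanced both by the loop itself and by hand), purely for illustration:

```python
def buggy_scan(values):
    """Mimics a plpgsql FOR loop whose index is *also* incremented by hand.

    The FOR loop increments the index once per iteration; the stray manual
    increment inside the body makes the observed index hit only even values.
    """
    seen = []
    i = 1  # plpgsql arrays are 1-based: FOR idxptr IN 1 .. array_upper(...)
    while i <= len(values):
        i += 1           # the stray "idxptr := idxptr + 1" in the body
        seen.append(i)   # what "RAISE NOTICE 'idx ptr: %', idxptr" reports
        i += 1           # the increment the FOR loop itself performs
    return seen

print(buggy_scan([10, 20, 30, 40, 50, 60]))  # [2, 4, 6] - even indexes only
```

Dropping the manual `i += 1` restores a normal 1, 2, 3, ... walk over the array, which is the fix John is pointing at.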
Re: [SQL] plpgsql array looping
Yeah, I noticed that this morning. It used to be a while loop; for some
reason (probably parser related) it keeps giving me an error on the exit
when statement in the loop.

On Wed, 2007-04-25 at 09:38 -0400, John DeSoi wrote:
> One problem (unless you intend to only look at every other element)
> is that you are incrementing idxptr explicitly in your loop. The FOR
> loop does that for you. This is the reason your output shows only
> even values.
>
> John
>
> On Apr 24, 2007, at 4:42 PM, Richard Albright wrote:
>
> > for idxptr in 1 .. array_upper(p_idxarray, 1)
> > loop
> >     exit when p_idxarray[idxptr] >= p_idx;
> >     idxptr := idxptr + 1;
> >     raise notice 'idx ptr: %', idxptr;
> > end loop;

--
Rick Albright
Senior Quantitative Analyst
Indie Research, LLC
254 Witherspoon Street
Princeton, NJ 08542
(609) 497-1030
[EMAIL PROTECTED]
Re: [SQL] Which function transform x,y to latitude/longitude?
On Wed, Apr 25, 2007 at 05:02:02PM +0800, Nemo Terry wrote:
> For example x= 38356.62 y= 42365.19. how to transform it to latitude
> 1.399948, longitude 193.92644?

Do you mean longitude 103.92644? In what datum are the lat/lon coordinates?
Where did you get the transformation in your example?

> Which function I could use? I don't know the algorithm.
>
> Initial data is: ("+proj=cass +a=6378137.0 +rf=298.257223563
> +lat_0=1.287639n +lon_0=103.8516e +x_0=3 +y_0=3").

What's the source of these parameters?

You can perform transformations outside the database with PROJ.4 or inside
the database with the PostGIS transform() function (which uses PROJ.4).

http://proj.maptools.org/
http://postgis.refractions.net/

I don't see any exact matches in the PostGIS spatial_ref_sys table for the
parameters you posted, so if you use PostGIS then you might have to insert a
row to create your own spatial reference system. However, I did a few tests
with your parameters and various datums for the lat/lon and couldn't get the
exact transformed values in your example. You might get more help on the
PROJ.4 and PostGIS mailing lists.

--
Michael Fuhr
Re: [SQL] Query Join Performance
Aaron Bono wrote:
> Performance tanks with this query - it takes over 120 seconds (that is
> where I set the timeout).
> BTW, on our Linux box the full query we run (which adds 3 more tables on
> the whole operation along with more filtering on the zip table) finishes
> in under 10 seconds. Problem is our development is on Windows and this is
> a real pain for developers to test.

So what's different between the systems? Obvious things to look at:
1. EXPLAIN ANALYSE outputs to show the plans (these presumably are
   different, but in what details, and why?)
2. Configuration (particularly memory/cost settings).
3. Hardware.
4. Locale/encoding - these can affect index usage and sorting.

--
Richard Huxton
Archonet Ltd
[SQL] sql wrapped plpgsql set returning function
I have figured out my looping issue, but am having difficulty wrapping my
set-returning plpgsql function getmovavgset with a getmovavg SQL function.
When I run the following:

select getmovavg(aggarray(trade_date), aggarray(close_price),
                 '2004-01-20', 5)
from
  ( select trade_date, close_price::numeric
    from quotedata
    where symbol='MSFT'
    and trade_date > '2004-01-01'
    order by trade_date desc) values

I get the following output:

NOTICE: v_rec: ("2004-01-20 00:00:00",27.6917)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-16 00:00:00",27.6183)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-15 00:00:00",27.6767)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-14 00:00:00",27.7883)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-13 00:00:00",27.8783)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-12 00:00:00",27.9967)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-09 00:00:00",27.9767)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-08 00:00:00",28.0400)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-07 00:00:00",28.0100)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-06 00:00:00",27.9433)
CONTEXT: SQL function "getmovavg" statement 1
NOTICE: v_rec: ("2004-01-05 00:00:00",27.7950)
CONTEXT: SQL function "getmovavg" statement 1

ERROR: set-valued function called in context that cannot accept a set

I am having difficulty determining if the error is in my getmovavgset or
getmovavg function. The notice msgs are coming from the getmovavgset func,
so it is iterating. I just don't know if the syntax is correct for the
generate_series statement in that func. What am I missing? Code is below.
CREATE TYPE resultset AS
  ("index" timestamp[],
   "values" numeric[]);

CREATE TYPE resultsetitem AS
  ("index" timestamp,
   value numeric);

CREATE AGGREGATE aggarray(
  BASETYPE=anyelement,
  SFUNC=array_append,
  STYPE=anyarray,
  INITCOND='{}'
);

CREATE OR REPLACE FUNCTION getmovavgarray(p_idxarray _timestamp,
    p_valarray _numeric, p_idx "timestamp", p_periods int8)
  RETURNS resultset AS
$BODY$
declare
    idxptr int8;
    idxendptr int8;
    offsetptr int8;
    begoffset int8;
    ar_idx timestamp[] := '{}';
    ar_values numeric[] := '{}';
    v_rec resultset%rowtype;
    v_rtn resultset%rowtype;
    v_sql text;
    v_index timestamp;
    v_value numeric;
    v_idx timestamp;
begin
    for offsetptr in 1 .. array_upper(p_idxarray, 1)
    loop
        --raise notice 'offset: %', offsetptr;
        begoffset := offsetptr;
        exit when p_idxarray[offsetptr]::timestamp <= p_idx;
    end loop;
    --raise notice 'offset: %, end: %', begoffset, array_upper(p_idxarray, 1);
    for idxptr in 1 .. (array_upper(p_idxarray, 1) - begoffset)
    loop
        idxendptr := idxptr + p_periods;
        v_index := p_idxarray[(idxptr + begoffset - 1)];
        v_value := movavg(p_valarray[ (idxptr + begoffset - 1) :
                                      (idxendptr + begoffset - 1) ]);
        ar_idx := array_append(ar_idx, v_index);
        ar_values := array_append(ar_values, v_value);
        --raise notice 'idx: %, avg: %', v_index, v_value;
    end loop;
    v_rtn := (ar_idx, ar_values);
    return v_rtn;
end
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;

CREATE OR REPLACE FUNCTION getmovavgset(p_idxarray _timestamp,
    p_valarray _numeric, p_idx "timestamp", p_periods int8)
  RETURNS SETOF resultsetitem AS
$BODY$
declare
    results resultset;
    v_rec record;
    v_rtn resultsetitem%rowtype;
    v_sql text;
    ar_idx timestamp[];
    ar_values numeric[];
begin
    --raise notice 'idxarray: %', p_idxarray;
    for results in
        select * from getmovavgarray(p_idxarray, p_valarray, p_idx, p_periods)
    loop
        ar_idx := results.index;
        ar_values := results.values;
    end loop;
    for v_rec in
        select (ar_idx)[s] as index, (ar_values)[s] as value
        from generate_series(1, array_upper(ar_idx, 1)) as s
    loop
        raise notice 'v_rec: %', v_rec;
        v_rtn := (v_rec.index, v_rec.value);
        --raise notice 'resultset: %', v_rtn;
        return next v_rtn;
    end loop;
end
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;

CREATE OR REPLACE FUNCTION getmovavg(p_idxarray _timestamp,
    p_valarray _numeric, p_idx "timestamp", p_periods int8)
  RETURNS SETOF resultsetitem as
$BODY$
    select * from getmovavgset($1, $2, $3, $4);
$BODY$
  LANGUAGE 'sql' volatile;
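Array plumbing aside, the computation inside getmovavgarray is just a trailing-window average over successive slices of the value array. A stdlib-only Python sketch of that core (assuming the undisclosed movavg() helper is a plain mean over its slice, which the thread never shows):

```python
def moving_averages(values, periods):
    """Trailing moving average: output i averages values[i : i + periods].

    Mirrors the plpgsql loop, which slices p_valarray[idx : idx + periods];
    windows that run off the end of the array are averaged over what remains,
    which is how an array slice past the upper bound behaves.
    """
    out = []
    for i in range(len(values)):
        window = values[i:i + periods]
        out.append(sum(window) / len(window))
    return out

# Same shape of input as the working call in the thread:
# getmovavg(array[...3 timestamps...], array[4.0, 3.0, 2.0], ..., 3)
print(moving_averages([4.0, 3.0, 2.0], 3))  # [3.0, 2.5, 2.0]
```

This is only a sketch of the arithmetic, not of the set-returning mechanics that the error message is actually about.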
Re: [SQL] sql wrapped plpgsql set returning function
Richard Albright <[EMAIL PROTECTED]> writes:
> I have figured out my looping issue, but am having difficulty wrapping
> my set returning plpgsql function getmovavgset with a getmovavg sql func
> when i run the following:

Hm, worksforme (see attached trivial example). What PG version are you
using?

regards, tom lane

regression=# create function foo() returns setof int8_tbl as $$
regression$# declare r record;
regression$# begin
regression$#   for r in select * from int8_tbl loop
regression$#     return next r;
regression$#   end loop;
regression$# end$$ language plpgsql;
CREATE FUNCTION
regression=# select * from foo();
        q1        |        q2
------------------+-------------------
              123 |               456
              123 |  4567890123456789
 4567890123456789 |               123
 4567890123456789 |  4567890123456789
 4567890123456789 | -4567890123456789
(5 rows)

regression=# select foo();
ERROR:  set-valued function called in context that cannot accept a set
CONTEXT:  PL/pgSQL function "foo" line 4 at return next
regression=# create function foowrap() returns setof int8_tbl as $$
regression$# select * from foo();
regression$# $$ language sql;
CREATE FUNCTION
regression=# select foowrap();
               foowrap
-------------------------------------
 (123,456)
 (123,4567890123456789)
 (4567890123456789,123)
 (4567890123456789,4567890123456789)
 (4567890123456789,-4567890123456789)
(5 rows)
Re: [SQL] sql wrapped plpgsql set returning function
It turns out that the from subselect is causing the error in:

select getmovavg(aggarray(trade_date), aggarray(close_price),
                 '2004-01-20', 5)
from
  ( select trade_date, close_price::numeric
    from quotedata
    where symbol='MSFT'
    and trade_date > '2004-01-01'
    order by trade_date desc) values

whereas

select * from getmovavg(array['2007-04-03', '2007-04-02',
    '2007-04-01']::timestamp[], array[4.0,3.0,2.0]::numeric[],
    '2007-04-03', 3)

will work. Anyone know why that would be?

On Wed, 2007-04-25 at 14:10 -0400, Richard Albright wrote:
> I have figured out my looping issue, but am having difficulty wrapping
> my set returning plpgsql function getmovavgset with a getmovavg sql func
> when i run the following:
>
> [snip - query, NOTICE output and function definitions quoted in full]
Re: [SQL] sql wrapped plpgsql set returning function
I narrowed it down further. Can someone explain the difference between
passing array[...] and passing an array using an aggregate array function
into the function?

On Wed, 2007-04-25 at 14:45 -0400, Richard Albright wrote:
> It turns out that the from subselect is causing the error in:
>
> select getmovavg(aggarray(trade_date), aggarray(close_price),
>                  '2004-01-20', 5)
> from
>   ( select trade_date, close_price::numeric
>     from quotedata
>     where symbol='MSFT'
>     and trade_date > '2004-01-01'
>     order by trade_date desc) values
>
> whereas
>
> select * from getmovavg(array['2007-04-03', '2007-04-02',
>     '2007-04-01']::timestamp[], array[4.0,3.0,2.0]::numeric[],
>     '2007-04-03', 3)
>
> will work. anyone know why that would be?
>
> [snip - original message quoted in full]
Re: [SQL] Query Join Performance
On 4/25/07, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Aaron Bono wrote:
> > Performance tanks with this query - it takes over 120 seconds (that is
> > where I set the timeout).
> > BTW, on our Linux box the full query we run (which adds 3 more tables
> > on the whole operation along with more filtering on the zip table)
> > finishes in under 10 seconds. Problem is our development is on Windows
> > and this is a real pain for developers to test.
>
> So what's different between the systems? Obvious things to look at:
> 1. EXPLAIN ANALYSE outputs to show the plans (these presumably are
>    different, but in what details, and why?)
> 2. Configuration (particularly memory/cost settings).
> 3. Hardware.
> 4. Locale/encoding - these can affect index usage and sorting.
>
> --
> Richard Huxton
> Archonet Ltd

Now I am having the same problem on the Linux box, so I doubt it is the
platform. The biggest problem I notice is when I add a join from a child
table (zip_city) to a parent table (zip). I have filtered the child table
down to about 650 records, but when I add the join to the parent, which has
over 800,000 records, performance tanks.
I was able to benchmark two queries last night on my Windows machine:

-- This runs in just over 2 seconds
select
    nearby_zip_city.zip_id,
    gps_distance(zip_city.longitude::numeric, zip_city.latitude::numeric,
        nearby_zip_city.longitude::numeric,
        nearby_zip_city.latitude::numeric) AS distance
from zip_city
inner join zip on ( zip.zip_id = zip_city.zip_id )
inner join zip_city as nearby_zip_city on (
    abs(zip_city.longitude - nearby_zip_city.longitude) <= (50.0 / 60.0)
    AND abs(zip_city.latitude - nearby_zip_city.latitude) <= (50.0 / 60.0)
)
where zip.zip_cd = '66105'

-- This takes over 48 seconds, and I just added a join from the zip_city
-- child to the zip parent table
select
    nearby_zip.zip_cd,
    gps_distance(zip_city.longitude::numeric, zip_city.latitude::numeric,
        nearby_zip_city.longitude::numeric,
        nearby_zip_city.latitude::numeric) AS distance
from zip_city
inner join zip on ( zip.zip_id = zip_city.zip_id )
inner join zip_city as nearby_zip_city on (
    abs(zip_city.longitude - nearby_zip_city.longitude) <= (50.0 / 60.0)
    AND abs(zip_city.latitude - nearby_zip_city.latitude) <= (50.0 / 60.0)
)
-->>> The next 3 lines are the main difference <<<--
inner join zip as nearby_zip on (
    nearby_zip_city.zip_id = nearby_zip.zip_id
)
-->>> End of difference <<<--
where zip.zip_cd = '66105'

-- Explain plan for faster/first query:

Nested Loop  (cost=45779.82..147990502.45 rows=451678770 width=40)
             (actual time=5404.943..20151.684 rows=653 loops=1)
  Join Filter: ((abs(("inner".longitude - "outer".longitude)) <= 0.833::double precision)
            AND (abs(("inner".latitude - "outer".latitude)) <= 0.833::double precision))
  ->  Seq Scan on zip_city nearby_zip_city  (cost=0.00..25514.19 rows=901719 width=24)
        (actual time=0.053..2311.547 rows=901719 loops=1)
  ->  Materialize  (cost=45779.82..45824.90 rows=4508 width=16)
        (actual time=0.003..0.009 rows=1 loops=901719)
        ->  Hash Join  (cost=8944.55..45775.31 rows=4508 width=16)
              (actual time=444.657..4490.901 rows=1 loops=1)
              Hash Cond: ("outer".zip_id = "inner".zip_id)
              ->  Seq Scan on zip_city  (cost=0.00..25514.19 rows=901719 width=24)
                    (actual time=0.004..2334.548 rows=901719 loops=1)
              ->  Hash  (cost=8933.90..8933.90 rows=4261 width=8)
                    (actual time=0.296..0.296 rows=1 loops=1)
                    ->  Bitmap Heap Scan on zip  (cost=32.91..8933.90 rows=4261 width=8)
                          (actual time=0.272..0.275 rows=1 loops=1)
                          Recheck Cond: ((zip_cd)::text = '66105'::text)
                          ->  Bitmap Index Scan on zip_zip_cd_key  (cost=0.00..32.91 rows=4261 width=0)
                                (actual time=0.250..0.250 rows=1 loops=1)
                                Index Cond: ((zip_cd)::text = '66105'::text)

-- Explain plan for slower/second query:

Nested Loop  (cost=75372.31..148056286.32 rows=451678770 width=43)
             (actual time=62688.188..69916.943 rows=653 loops=1)
  Join Filter: ((abs(("inner".longitude - "outer".longitude)) <= 0.833::double precision)
            AND (abs(("inner".latitude - "outer".latitude)) <= 0.833::double precision))
  ->  Hash Join  (cost=29592.49..91298.06 rows=901719 width=27)
        (actual time=17905.224..52279.151 rows=901719 loops=1)
        Hash Cond: ("outer".zip_id = "inner".zip_id)
        ->  Seq Scan on zip_city nearby_zip_city  (cost=0.00..25514.19 rows=901719 width=24)
              (actual time=0.044..2888.993 rows=901719 loops=1)
        ->  Hash  (cost=21634.79..21634.79 rows=852279 width=19)
              (actual time=13925.502..13925.502 rows=852279 loops=1)
              ->  Seq Scan on zip nearby_zip  (cost=0.00..21634.79 rows=852279 width=19)
                    (actual time=0.042..2535.742 rows=852279 loops=1)
  ->  Materialize  (cost=45779.82..45824.90 rows=4508 width=16)
        (actual time=0.002..0.009 rows=1 lo
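The gps_distance() function itself is never shown in the thread. Assuming it is a great-circle distance in miles, here is a stdlib-only Python sketch of that plus the cheap lat/lon bounding-box prefilter the join condition encodes (the 50.0/60.0 figure is degrees, roughly a 50-nautical-mile box at the equator); all names and the haversine choice are illustrative assumptions:

```python
import math

def gps_distance(lon1, lat1, lon2, lat2):
    """Great-circle distance in statute miles via the haversine formula.

    An assumption -- the thread never shows the real gps_distance() body.
    """
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby(lon1, lat1, lon2, lat2, degrees=50.0 / 60.0):
    """The join's prefilter: a square box in raw lon/lat degrees.

    Cheap to evaluate, but opaque to the planner's statistics, which is
    exactly the problem Tom Lane diagnoses in the next message.
    """
    return abs(lon1 - lon2) <= degrees and abs(lat1 - lat2) <= degrees

# Kansas City area vs. St. Louis: the box filter correctly rejects the pair.
print(nearby(-94.6, 39.1, -90.2, 38.6))  # False
print(gps_distance(-94.6, 39.1, -90.2, 38.6))
```

The box filter is a reasonable pre-screen before the exact distance computation, but expressed as `abs(a - b) <= k` in SQL it gives the planner no selectivity information, which is why the row estimates above are off by orders of magnitude.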
Re: [SQL] Query Join Performance
"Aaron Bono" <[EMAIL PROTECTED]> writes:
> The biggest problem I notice is when I add a join from a child table
> (zip_city) to a parent table (zip). I have filtered the child table down
> to about 650 records but when I add the join to the parent which has over
> 800,000 records, performance tanks.

It shouldn't surprise you that joining a 900k row table to an 800k row
table isn't cheap. It would certainly be better if the thing delayed the
join to nearby_zip until after it had done the restrictive join. Your
problem is it doesn't realize that that join condition is restrictive:

> Nested Loop  (cost=45779.82..147990502.45 rows=451678770 width=40)
                                            ^^^^^^^^^^^^^^
> (actual time=5404.943..20151.684 rows=653 loops=1)
> Join Filter: ((abs(("inner".longitude - "outer".longitude)) <=
> 0.833::double precision) AND (abs(("inner".latitude -
> "outer".latitude)) <= 0.833::double precision))

which is hardly surprising since the condition is phrased in a way that
isn't amenable to statistical analysis. You might want to look into using
PostGIS for this sort of thing --- it provides operators that are better
suited to the problem domain, and also allow some modicum of intelligence
in the rowcount estimates.

Another bad misestimation is here:

> ->  Bitmap Heap Scan on zip  (cost=32.91..8933.90 rows=4261 width=8)
>       (actual time=0.272..0.275 rows=1 loops=1)
>       Recheck Cond: ((zip_cd)::text = '66105'::text)
>       ->  Bitmap Index Scan on zip_zip_cd_key  (cost=0.00..32.91 rows=4261 width=0)
>             (actual time=0.250..0.250 rows=1 loops=1)
>             Index Cond: ((zip_cd)::text = '66105'::text)

The error of 4000x here contributes directly to the error in the top-level
row estimate; but this one is a simple scalar condition and I'd expect our
stats code to be able to deal with it. Are the stats on zip up-to-date?
Maybe you need to increase the stats target for it.

regards, tom lane
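Tom's two suggestions translate roughly into the following SQL. This is a sketch: the statistics target of 100 is an illustrative value, not something from the thread; the right number depends on the data distribution.

```sql
-- Refresh the planner's statistics on the zip table:
VACUUM ANALYZE zip;

-- Or raise the per-column statistics target before re-analyzing,
-- so the planner keeps a finer histogram for zip_cd:
ALTER TABLE zip ALTER COLUMN zip_cd SET STATISTICS 100;
ANALYZE zip;
```

With accurate statistics on zip_cd, the planner can see that `zip_cd = '66105'` matches about one row rather than 4261, which lets it perform the restrictive join first.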
Re: [SQL] Query Join Performance
On 4/25/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> The error of 4000x here contributes directly to the error in the
> top-level row estimate; but this one is a simple scalar condition and
> I'd expect our stats code to be able to deal with it. Are the stats on
> zip up-to-date? Maybe you need to increase the stats target for it.

Looks like a vacuum analyze did the trick. Performance is beautiful now. I should have tried that earlier.

I thought I had the auto vacuum turned on (PostgreSQL 8.1) but I guess it doesn't do analyze?

Anyway, I will schedule a vacuum analyze nightly - it is a low-usage time and won't cause any problems.

Thanks!

--
==
Aaron Bono
Aranya Software Technologies, Inc.
http://www.aranya.com
http://codeelixir.com
==
Re: [SQL] Query Join Performance
Aaron Bono wrote:
> Looks like a vacuum analyze did the trick. Performance is beautiful now. I
> should have tried that earlier.
>
> I thought I had the auto vacuum turned on (PostgreSQL 8.1) but I guess it
> doesn't do analyze?
>
> Anyway, I will schedule a vacuum analyze nightly - it is low usage and won't
> cause any problems.

It should have done an analyze at some point. Unless this is Windows, in which case there's a bug that precludes autovacuum from running at all.

--
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
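[The maintenance commands under discussion, as a minimal sketch in plain PostgreSQL; note that in 8.1 the autovacuum daemon is off unless explicitly enabled:]

```sql
-- Check whether the autovacuum daemon is enabled
-- (off by default in PostgreSQL 8.1):
SHOW autovacuum;

-- The manual equivalent of the nightly job being scheduled:
-- reclaim dead tuples and refresh the planner statistics in one pass.
VACUUM ANALYZE;
```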
Re: [SQL] Which function transform x,y to latitude/longitude?
I just know the correct data must be longitude 103.926669, latitude 0.111827. The x,y are from the Cassini system. Could you show me the source code for how you calculate it? Thanks a lot!

From: Michael Fuhr <[EMAIL PROTECTED]>
To: Nemo Terry <[EMAIL PROTECTED]>
CC: pgsql-sql@postgresql.org
Subject: Re: [SQL] Which function transform x,y to latitude/longitude?
Date: Wed, 25 Apr 2007 08:16:56 -0600

On Wed, Apr 25, 2007 at 05:02:02PM +0800, Nemo Terry wrote:
> For example x= 38356.62 y= 42365.19. How to transform it to latitude
> 1.399948, longitude 193.92644?

Do you mean longitude 103.92644? In what datum are the lat/lon coordinates? Where did you get the transformation in your example?

> Which function could I use? I don't know the algorithm.
>
> Initial data is: ("+proj=cass +a=6378137.0 +rf=298.257223563
> +lat_0=1.287639n +lon_0=103.8516e +x_0=3 +y_0=3").

What's the source of these parameters?

You can perform transformations outside the database with PROJ.4 or inside the database with the PostGIS transform() function (which uses PROJ.4).

http://proj.maptools.org/
http://postgis.refractions.net/

I don't see any exact matches in the PostGIS spatial_ref_sys table for the parameters you posted, so if you use PostGIS then you might have to insert a row to create your own spatial reference system. However, I did a few tests with your parameters and various datums for the lat/lon and couldn't get the exact transformed values in your example. You might get more help on the PROJ.4 and PostGIS mailing lists.

--
Michael Fuhr
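[The in-database route Michael describes might look roughly like this, assuming PostGIS is installed. The SRID 990001 is an arbitrary private placeholder, the srtext is left empty, and the proj4text is the parameter string as posted; whether these parameters produce the expected lat/lon is exactly the open question in the thread:]

```sql
-- Register a custom spatial reference system from the posted
-- Cassini parameters (SRID 990001 is a made-up private value).
INSERT INTO spatial_ref_sys (srid, auth_name, auth_srid, srtext, proj4text)
VALUES (990001, NULL, NULL, '',
  '+proj=cass +a=6378137.0 +rf=298.257223563 +lat_0=1.287639n +lon_0=103.8516e +x_0=3 +y_0=3');

-- Transform the sample point into WGS 84 lat/lon (SRID 4326):
SELECT AsText(transform(GeomFromText('POINT(38356.62 42365.19)', 990001), 4326));
```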
[SQL] Add constraint for number of chars
Hi,

This is seemingly simple, but I'm at a loss this early in the morning. It's best explained this way:

SAMPLE
---
id       | serial
username | varchar(100) | constraint username >= 8 and username <= 100

The problem is that username holds characters, not integers or numerics. Is my constraint correct?

Thanks for the help,
J
Re: [SQL] Add constraint for number of chars
PostgreSQL Admin <[EMAIL PROTECTED]> writes:
> username | varchar(100)| constraint username >=8 and username <=100

Perhaps you mean "length(username) >= 8" and so on?

			regards, tom lane
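[Spelled out as DDL, the suggestion might look like this sketch; the table name and constraint name are made up for illustration, and the upper bound is redundant with varchar(100) but kept for symmetry with the question:]

```sql
CREATE TABLE sample (
    id       serial PRIMARY KEY,
    -- Constrain the number of *characters*, not the value itself:
    username varchar(100) NOT NULL
             CONSTRAINT username_len_chk
             CHECK (length(username) >= 8 AND length(username) <= 100)
);
```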