Jack Kiss <[EMAIL PROTECTED]> writes:
> 1) Do you have a function which is similar to Oracle's DECODE?
Yes
> 2) Can you extract the day of week (Monday, etc.) from your date functions?
Yes
Check out the "Date/Time Functions and Operators" and the "Conditional
Expressions" sections of this:
http://
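For reference, the two answers look roughly like this (table and column names
here are made up for illustration):

    -- CASE is the portable equivalent of Oracle's DECODE
    SELECT CASE status
                WHEN 1 THEN 'active'
                WHEN 2 THEN 'disabled'
                ELSE 'unknown'
           END                        AS status_text,
           to_char(created, 'Day')    AS day_name,    -- 'Monday   ', 'Tuesday  ', ...
           extract(dow FROM created)  AS day_number   -- 0 = Sunday .. 6 = Saturday
      FROM accounts;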
Metnetsky <[EMAIL PROTECTED]> writes:
> It's for a class and my professor has a thing for brain teaser type
> questions.
Incidentally, TAs and Profs aren't stupid, and have been known to check on
newsgroups and mailing lists for students asking for people to do their
homework for them.
--
greg
One suggestion I'll make about your data model -- I'm not sure it would
actually help this query, but might help elsewhere:
WHERE ( C.Disabled > '2003-02-28'
OR C.Disabled IS NULL
)
Don't use NULL values like this. Most databases don't index NULLs (Oracle) or
even if they do, don't
> -> Merge Join (cost=6106.42..6335.30 rows=2679 width=265)
(actual time=859.77..948.06 rows=1 loops=1)
Actually another problem, notice the big discrepancy between the estimated row
and the actual rows. That's because you have the big OR clause so postgres
figures there's
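One way to act on that suggestion, sketched under the assumption that Disabled
is a plain date column and that a far-future sentinel is acceptable for "never
disabled" rows (write the real table name for the alias C):

    UPDATE customer SET Disabled = '9999-12-31' WHERE Disabled IS NULL;

    -- the filter then collapses to one indexable range test, with no OR:
    WHERE C.Disabled > '2003-02-28'

With the OR gone, the planner's row estimate also has a much better chance of
matching reality.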
Greg Stark <[EMAIL PROTECTED]> writes:
> Can I have a GiST index on (foo_id, attribute_set_array) and have it be just
> as fast at narrowing the search to just foo_id = 900 but also speed up the ~
> operation?
Hm, so if I understand what I'm reading I can do this if I
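On a PostgreSQL new enough to allow multicolumn GiST indexes, and with the
btree_gist and intarray contrib modules installed, the index being asked about
could be spelled roughly like this (gist__int_ops is the contrib/intarray
opclass):

    CREATE INDEX foo_id_attr_gist_idx
        ON foo USING gist (foo_id, attribute_set_array gist__int_ops);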
Tom Lane <[EMAIL PROTECTED]> writes:
> The SQL-standard way of writing this would presumably be either
>
> from G left join L on (G.SELID = L.SELID)
> left join C on (L.SELID = C.SELID and L.SELLEVEL = C.SELLEVEL)
I would think of it as this one.
> from G left join
> (
Josh Berkus <[EMAIL PROTECTED]> writes:
> Stefan,
>
> > I know the LEAST and GREATEST functions are not part
> > of standard SQL, but they sure were handy where I came
> > from (Oracle-land).
>
> Um, what's wrong with MAX and MIN, exactly?
MAX and MIN are single-parameter aggregate functions.
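LEAST and GREATEST, by contrast, compare values within a single row. Until they
exist natively they can be spelled with CASE; a two-argument sketch:

    SELECT CASE WHEN a <= b THEN a ELSE b END AS least_ab,     -- LEAST(a, b)
           CASE WHEN a >= b THEN a ELSE b END AS greatest_ab   -- GREATEST(a, b)
      FROM t;
    -- caveat: NULL inputs are not handled specially here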
To solve this problem efficiently you probably need the lead/lag analytic
functions. Unfortunately Postgres doesn't have them.
You could do it with something like:
update foo set c = a+b+(select c from foo as x
                         where seq < foo.seq
                         ORDER BY seq desc LIMIT 1)
or the more standard but likely to
"Viorel Dragomir" <[EMAIL PROTECTED]> writes:
> Anyway, in real life this update modifies only one row with a value which is
> different from null. It would be really handy if the ORDER option could be
> specified for the UPDATE command.
Are you hoping to produce a running total? That's very difficult in stand
Joe Conway <[EMAIL PROTECTED]> writes:
> BenLaKnet wrote:
> > I see connect by in Oracle
> > ??? is there an equivalent in PostgreSQL or not ??
>
> Someone is working on the SQL99 equivalent, but it isn't done yet. Perhaps for
> 7.5.
There's a connectby hack in the contrib/tablefunc directory. I
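For what it's worth, its calling convention looks roughly like this (the tree
table and its columns are hypothetical):

    SELECT *
      FROM connectby('tree', 'id', 'parent_id', '1', 0, '~')
           AS t(id int, parent_id int, level int, branch text);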
"Girish Bajaj" <[EMAIL PROTECTED]> writes:
> I can't possibly index all the cols in the table. So I thought I'd best manage
> the data by splitting up the table into multiple partitions and eventually
> depending on application logic, only scan those tables that are necessary to
> scan sequentially
Joe Conway <[EMAIL PROTECTED]> writes:
> Not possible in current releases, but it will be in 7.4 (about to start beta).
> It looks like this:
Well there is the int_array_aggregate function in the contrib/intagg
directory. It has to be compiled separately, and it has a few quirks (like the
arrays
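A sketch of using it, assuming contrib/intagg has been built and loaded and a
hypothetical table items(order_id int, item_id int):

    SELECT order_id, int_array_aggregate(item_id) AS item_ids
      FROM items
     GROUP BY order_id;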
Tom Lane <[EMAIL PROTECTED]> writes:
> NULL can be special, because it acts specially in comparisons anyway.
> But NaN is just a value of the datatype.
Does postgres intend to support all the different types of NaN? Do you
intend to have +Inf and -Inf and underflow detection and all the other g
"SZÛCS Gábor" <[EMAIL PROTECTED]> writes:
> > cannot see is that the float values are not actually exactly 0.5
>
> Yes I could guess that (floating point vs fixed), but is this a coincidence
> that both '0.5'::float and '-0.5'::float are closer to 0, whereas they could
> be closer to +/-1, as we
Tom Lane <[EMAIL PROTECTED]> writes:
> Returning to the original problem, it seems to me that comparing "pg_dump
> -s" output is a reasonable way to proceed.
I've actually started checking in a pg_dump -s output file into my CVS tree.
However I prune a few key lines from it. I prune the TOC O
Guillaume LELARGE <[EMAIL PROTECTED]> writes:
> Doing a "select currval() from my_table" after your insert should work.
That's "select currval('my_table_pkcol_seq')" actually.
The above would have called the currval() function for every record of the
table which isn't what you want and in any
ow <[EMAIL PROTECTED]> writes:
> My concern though ... wouldn't pgSql server collapse when faced with
> a transaction spanning 100M+ records?
The number of records involved really doesn't faze Postgres at all. However
the amount of time spent in the transaction could be an issue if there is
"Randolf Richardson, DevNet SysOp 29" <[EMAIL PROTECTED]> writes:
> For example, if I want to index on a date field but only have the index
> keep track of the most recent 30 days (and then create a secondary index for
> all dates) so as to improve performance on more heavily loaded syste
Tom Lane <[EMAIL PROTECTED]> writes:
> No, because the above represents a moving cutoff; it will (and should)
> be rejected as a non-immutable predicate condition. You could do
> something like
>
> CREATE INDEX my_Nov_03_index on my_table (create_date)
> WHERE (cre
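A fixed-cutoff partial index of that general shape might look like this (the
cutoff date is purely illustrative):

    CREATE INDEX my_nov_03_index ON my_table (create_date)
     WHERE create_date >= '2003-11-01';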
Joe Conway <[EMAIL PROTECTED]> writes:
> In 7.4 you could use an array. It would look like this:
Though note that 7.4 doesn't know how to optimize this form:
db=> explain select * from foo where foo_id in (1,2);
QUERY PLAN
[EMAIL PROTECTED] (Dmitri Bichko) writes:
> I am running into trouble with pagination here, somehow (rather naively) I
> assumed that when doing a LIMIT and OFFSET, the subselects on the records
> before the OFFSET would not be performed, which quite apparently is not the
> case. So, LIMIT 50 OFFSE
"Grace C. Unson" <[EMAIL PROTECTED]> writes:
> Why is it that my index for text[] data type is not recognized by the
> Planner?
>
> I did these steps:
>
> 1. create function textarr(text[]) returns text language sql as 'select
> $1[1]' strict immutable
> 2. create index org_idx on EmpData (text
Tom Lane <[EMAIL PROTECTED]> writes:
> Robert Creager <[EMAIL PROTECTED]> writes:
> > ... one piece of data I need is the last value for each GROUP BY
> > period. Alas, I cannot figure out how to do this.
>
> SELECT DISTINCT ON (rather than GROUP BY) could get this done for you.
Or if you need
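For reference, the DISTINCT ON shape Tom is referring to looks roughly like
this (column names are hypothetical):

    SELECT DISTINCT ON (period) period, value, ts
      FROM measurements
     ORDER BY period, ts DESC;    -- the last row of each period wins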
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> >QUERY PLAN
> >
> > Sort (cost=11824.16..11831.5
"Alexandra Birch" <[EMAIL PROTECTED]> writes:
> It works perfectly - thanks a million!
> Strangely the offset 0 does not seem to make any difference.
> Gotta read up more about subqueries :)
>
> explain analyze
> select code,order_date
> from (
> select code, order_date
>
Christoph Haller <[EMAIL PROTECTED]> writes:
> Why appear? If the Index Scan has a Total runtime: 2.46 msec and the Seq Scan
> a Total runtime: 46.19 msec, then the Index Scan is much faster.
> Or am I completely off the track reading the explain analyze output?
To estimate the relative costs
Josh Berkus <[EMAIL PROTECTED]> writes:
> Max() and Count() cannot use indexes for technical reasons. Browse through
> the archives of SQL, PERFORM, and HACKERS for about 12,000 discussions on the
> subject.
Please don't confuse the issue by throwing Max() and Count() into the same
basket.
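max() on an indexed column has a cheap index-friendly rewrite, while count(*)
does not. The classic workaround, assuming an index on tab(col):

    SELECT col FROM tab WHERE col IS NOT NULL
     ORDER BY col DESC LIMIT 1;   -- behaves like max(col), but can use the index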
Jeff Boes <[EMAIL PROTECTED]> writes:
> I headed off in the direction of groups of SELECTs and UNIONs, and quit when I
> got to something like four levels of "SELECT ... AS FOO" ...
four? wimp, that's nothing!
ok, seriously I think there's no way to do this directly with straight SQL.
You would
Josh Berkus <[EMAIL PROTECTED]> writes:
> Rod,
>
> > Something along the lines of the below would accomplish what you want
> > according to spec. ROW_NUMBER() is a spec defined function. (6.10 of
> > SQL200N)
>
> Great leaping little gods! They added something called "row number" to the
> spe
elein <[EMAIL PROTECTED]> writes:
> create or replace function pycounter(integer)
> returns integer as
> '
>     if args[0] == 0:
>         SD["nextno"] = 1
>         return SD["nextno"]
>     try:
>         SD["nextno"] += 1
>     except:
>         SD["nextno"] = 1
>     return SD["nextno"]
> ' language 'pl
Tom Lane <[EMAIL PROTECTED]> writes:
> Bret Hughes <[EMAIL PROTECTED]> writes:
> > FWIW I tried to use alter table but could never get the parser to accept
> > $1 as a constraint name. I used single and double quotes as well as a
> > lame attempt \$1.
>
> Hm, "$1" works for me ...
Hm, this rem
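For the record, the quoting that reportedly works is simply (the table name is
a placeholder):

    ALTER TABLE foo DROP CONSTRAINT "$1";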
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Is there a reason postgres goes out of its way to pick names that
> > will be harder to work with than necessary?
>
> If we use ordinary identifiers for system-generated names the
Marco Lazzeri <[EMAIL PROTECTED]> writes:
> SELECT
> p.name, ARRAY(SELECT nicknames FROM people WHERE people.id = p.id)
> FROM people AS p
> Any suggestions?
Something like:
db=> create aggregate array_aggregate (
         basetype = integer,
         sfunc = array_append,
         stype = integer[],
         initcond = '{}'
     );
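Hypothetical usage, with the nicknames normalized into their own integer-keyed
table (the aggregate above is declared over integers, so it collects ids):

    SELECT p.id, p.name, array_aggregate(n.id) AS nickname_ids
      FROM people p
      JOIN nicknames n ON n.person_id = p.id
     GROUP BY p.id, p.name;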
Christoph Haller <[EMAIL PROTECTED]> writes:
> Interesting feature, but I cannot find function array_append:
> ERROR: AggregateCreate: function array_append(integer[], integer) does not exist
It's new in Postgres 7.4
I think you could do this in 7.3 though, it would just be more awkward. Try |
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> On Thu, May 13, 2004 at 18:13:23 +,
> Jaime Casanova <[EMAIL PROTECTED]> wrote:
> > Hi all, is there a way to set the isolation level to something like the
> > sql standard dirty read.
>
> No. There will be a way to use the standard name in a
Marco Lazzeri <[EMAIL PROTECTED]> writes:
> Hi!
> I'm searching a better (quicker) way to retrieve data as I used to do
> using the following query...
>
> ==
>
> SELECT main.codice,
>other.value AS value_one,
>other.value AS value_two
> FROM main
> LEFT OUTER JOIN other
> Joseph Turner <[EMAIL PROTECTED]> writes:
> > I have a table with a decent number of rows (let's say for example a
> > billion rows). I am trying to construct a graph that displays the
> > distribution of that data. However, I don't want to read in the
> > complete data set (as reading a billi
Markus Bertheau <[EMAIL PROTECTED]> writes:
> oocms=# SELECT ARRAY(SELECT 1 WHERE FALSE);
> ?column?
> --
>
This one seems strange to me. Shouldn't it result in an empty array?
--
greg
Rich Hall <[EMAIL PROTECTED]> writes:
> "(anything) = NULL" is always Null, this cannot be what the coder intended.
I often have such things in my SQL. Consider what happens when you have SQL
constructed dynamically. Or more frequently, consider that many drivers still
don't use the new binary pl
Stephan Szabo <[EMAIL PROTECTED]> writes:
> IS TRUE and IS FALSE have a different effect from =true and =false when
> the left hand side is NULL. The former will return false, the latter will
> return NULL.
No, actually they both return false.
(But thanks, I didn't even realize they were specia
Ray Aspeitia <[EMAIL PROTECTED]> writes:
> I also would like to pass the delimiter to the aggregate as a parameter and
> I am not sure if it can handle that.
Currently aggregates that take multiple parameters are just not supported.
--
greg
Joe Conway <[EMAIL PROTECTED]> writes:
> This is very true. In fact, I get mildly annoyed when people *don't* include
> the direct reply to me, because I very actively filter/redirect my mail.
> Replies directly to me are pretty much guaranteed to be seen quickly, but the
> ones that go to the li
John DeSoi <[EMAIL PROTECTED]> writes:
> On Aug 10, 2004, at 10:57 AM, Bruce Momjian wrote:
>
> > I can't think of one, no. I think you will have to use one of the
> > server-side languages and call a sleep in there.
>
> This is no good in the real world since it pounds the CPU, but it worked
Dino Vliet <[EMAIL PROTECTED]> writes:
> x,0 and y,4 but how do I manage this in sql? I was
> hoping for a keyword LAST or so, where I can specify
> that when I've ordered my results with order by, I
> could only get the last of the subgroups (the first
> one is easy because I could use limit 1)
"Riccardo G. Facchini" <[EMAIL PROTECTED]> writes:
> After searching throught the list, I assume you mean this link:
> http://www.rbt.ca/autodoc/index.html
> by Rod Taylor.
>
> Looks promising, but still what I need is a proper CVS output, as I
> need to review the changes made to the specific da
Scott Gerhardt <[EMAIL PROTECTED]> writes:
> Hello, I am new to the list, my apology if this question is beyond the scope or
> charter of this list.
>
> My question is:
> What is the best method to perform an aggregate query to calculate sum() values
> for each distinct wid as in the example be
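The straightforward form of that aggregate would be along these lines (column
names are guesses):

    SELECT wid, sum(amount) AS total
      FROM transactions
     GROUP BY wid;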
Theo Galanakis <[EMAIL PROTECTED]> writes:
> error: btree item size 2744 exceeds maximum 2713.
>
> I assume I had to change some server settings to extend the maximum, however
I would guess the block size. But I'm just guessing.
> in the end this column holds content, and even applying a
Theo Galanakis <[EMAIL PROTECTED]> writes:
> I created the Index you specified, however it chooses to run a seq scan on
> the column rather than a Index scan. How can you force it to use that
> Index..
>
> CREATE INDEX idx_content_numeric ON botched_table(content) WHERE content ~
> '^[0-9]{1,9}$
Greg Stark <[EMAIL PROTECTED]> writes:
> Theo Galanakis <[EMAIL PROTECTED]> writes:
>
> > I created the Index you specified, however it chooses to run a seq scan on
> > the column rather than a Index scan. How can you force it to use that
> > Index..
> >
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > That's all well and good. But when I tried to make a version of your
> > situation that used a function I found it doesn't work so well with
> > functional indexes:
"Iain" <[EMAIL PROTECTED]> writes:
> Though, as far as I can tell, there is no way to have the notify activate a
> pl/pgsql function directly. I'll still need to write a client program to
> create a session and actually do the listening, that is if I haven't missed
> anything else...
Right, presu
stig erikson <[EMAIL PROTECTED]> writes:
> how can I specify an integer to be one byte or even 4 bits long?
> int1, int(1), tinyint are nowhere to be seen.
> smallest i can find is smallint that is 2 bytes.
There's a type called "char" (the double quotes are needed). It's used by
postgres s
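A sketch of declaring it (note the quotes; unquoted char is the ordinary padded
SQL type):

    CREATE TABLE flags (
        id   integer,
        kind "char"    -- the single-byte internal type
    );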
T E Schmitz <[EMAIL PROTECTED]> writes:
> I just dug out the PostgreSQL book again because I thought I might've garbled
> it:
>
> Quote: "PostgreSQL will not index NULL values. Because an index will never
> include NULL values, it cannot be used to satisfy the ORDER BY clause of a
> query that r
Tom Lane <[EMAIL PROTECTED]> writes:
> > The paragraph continues:
> > "If the SELECT command included the clause WHERE phone NOT NULL,
> > PostgreSQL could use the index to satisfy the ORDER BY clause.
> > An index that covers optional (NOT NULL) columns will not be used to
> > speed table join
T E Schmitz <[EMAIL PROTECTED]> writes:
> I want to select only those BRAND/MODEL combinations, where the MODEL has more
> than one TYPE, but only where one of those has TYPE_NAME='xyz'.
> I am not interested in MODELs with multiple TYPEs where none of them are called
> 'xyz'.
There are lots of
T E Schmitz <[EMAIL PROTECTED]> writes:
> SELECT
> BRAND_NAME,MODEL_NAME
...
> intersect
...
Huh, I never think of the set operation solutions. I'm curious how it compares
speed-wise.
--
greg
T E Schmitz <[EMAIL PROTECTED]> writes:
> ) as somealias on (model_fk = model_pk)
>
> (subquery in FROM must have an alias)
ARGH! This is one of the most annoying things about postgres! It bites me all
the time. Obviously it's totally insignificant since it's easy for me to just
throw an "AS x"
Tom Lane <[EMAIL PROTECTED]> writes:
> Fixing this properly is a research project, and I haven't thought of any
> quick-and-dirty hacks that aren't too ugly to consider :-(
Just thinking out loud here. Instead of trying to peek inside the CASE
couldn't the optimizer just wrap the non-strict expr
Tom Lane <[EMAIL PROTECTED]> writes:
> Karsten Hilbert <[EMAIL PROTECTED]> writes:
> > I am not convinced I'll need a SRF. I am not trying to
> > calculate something that isn't there yet. I am just trying to
> > join two views appropriately. I might have to employ some
> > variant of Celko's integ
"C. Bensend" <[EMAIL PROTECTED]> writes:
> INSERT INTO table ( column1, column2, column3 )
>SELECT column1, column2, column3
>WHERE NOT EXISTS (
> SELECT column1, column2, column3 FROM table WHERE
> column1 = $column1 AND
> column2 = $column2 AND
> column3
"C. Bensend" <[EMAIL PROTECTED]> writes:
> The risk of a cron gone wild is acceptable to me at this moment.
Gee, now I have images of late-night advertisements for bofh-porn video tapes
of Cron Jobs Gone Wild(tm) dancing through my head... thanks.
--
greg
Michelle Murrain <[EMAIL PROTECTED]> writes:
> The OUTER JOIN version is quite a bit more efficient (by an order of magnitude)
> than the option with WHERE NOT EXISTS subquery.
This is going to be heavily dependent on the version of postgres. IN/NOT IN
execution has improved a lot in 7.4 and later.
"Muhyiddin A.M Hayat" <[EMAIL PROTECTED]> writes:
> Dear all,
>
> I need to do something similar to a cross tabulation, but without any
> aggregation.
join your table to itself four times:
select *
from (select check_time::date as date, employee_id, check_time-check_time::date as
in from te
patrick ~ <[EMAIL PROTECTED]> writes:
> I noticed that a freshly created db with freshly inserted data (from
> a previous pg_dump) would result in quite fast results. However,
> after running 'vacuum analyze' the very same query slowed down about
> 1250x (Time: 1080688.921 ms vs Time: 864.522 ms
Tomasz Myrta <[EMAIL PROTECTED]> writes:
> > select * from table1 LIMIT x
> > gives me the first x row of the result.
> > After that, I save the last value, and next time, I adjust
> > the query as
> > select * from table1 where itemkey>:lastvalue LIMIT x
>
> Why do you complicate it so much? Ev
"Thomas F.O'Connell" <[EMAIL PROTECTED]> writes:
> select 2004-06-08;
> ?column?
> --
> 1990
>
> I'm not exactly sure how the bare string is converted internally, but it's
> clearly not a complete date like you're expecting.
What string? That's just integer arithmetic.
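That is, the parser sees subtraction; quoting or casting gives the intended
date:

    select 2004 - 06 - 08;      -- integer arithmetic: 1990
    select date '2004-06-08';   -- an actual date literal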
--
greg
[EMAIL PROTECTED] writes:
> But not possible for real at the moment?
>
> So, summarising:
> - Nested transactions is not (yet) supported
> - READ UNCOMMITTED isolation level is not (yet) supported
> - the EXECUTE plpgsql construct does not circumvent the transaction
Well nested transactions are
Ian Barwick <[EMAIL PROTECTED]> writes:
> What I still don't quite understand is why IN in a CHECK context is
> handled differently to say: select 1 where 'x' in (null,'a','b','c') ?
> This could be a bit of a gotcha for anyone constructing a constraint
> similar to the original poster's and not
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> You can set the sequence up to cycle (so once it gets to the end, it
> wraps around to the beginning again). The keyword is CYCLE at CREATE
> SEQUENCE time. It defaults to NO CYCLE.
>
> One potential problem, of course, are collisions on the table,
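The option being described, sketched:

    CREATE SEQUENCE ticket_seq MAXVALUE 999999 CYCLE;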
"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
> > If you're using this to provide "pages" of results, could you use a cursor?
> What do you mean by that? Cursor?
>
> Yes I'm using this to provide "pages", but If I jump to the last pages it goes
> very slow.
The best way to do pages for is not t
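A sketch of the cursor approach under discussion, everything inside one
transaction (names are illustrative):

    BEGIN;
    DECLARE page_cur CURSOR FOR
        SELECT * FROM orders ORDER BY order_date, id;
    FETCH 50 FROM page_cur;    -- page 1
    FETCH 50 FROM page_cur;    -- page 2
    MOVE 500 IN page_cur;      -- skip ahead without returning rows
    FETCH 50 FROM page_cur;    -- a later page
    COMMIT;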
Alex Turner <[EMAIL PROTECTED]> writes:
> I am also very interested in this very question. Is there any way to
> declare a persistent cursor that remains open between pg sessions?
> This would be better than a temp table because you would not have to
> do the initial select and insert into a f
"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> I also tried a simple select * from tblcase where clientum = 'SAKS'
Try:
explain analyze select * from tblcase where clientum = 'SAKS'
Send the output.
--
greg
"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> QUERY PLAN
> "Seq Scan on tblcase (cost=0.00..30066.21 rows=37401 width=996) (actual
> time=0.344..962.260 rows=22636 loops=1)"
> " Filter: ((clientnum)::text = 'SAKS'::text)"
> "Total runtime: 1034.434 ms"
Well that says it only took 1s. So it seems
"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> I tried the SET ENABLE_SEQSCAN=FALSE;
> And the result took 29 secs instead of 117.
>
> After playing around with the cache and buffers etc I see I am no longer
> doing any swapping (not sure how I got the 100 sec response might have been
> shared buff
Gary Stainburn <[EMAIL PROTECTED]> writes:
> > Alternatively: (a<>6),(a<>4),a
>
> Although this does exactly what I want, at first glance it should do
> exactly the opposite.
>
> I'm guessing that for each line it evaluates
> not (a=6) 0 for true else 1
Not really, "not a=6" is an expression t
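The relevant detail is that boolean false sorts before true, so those
expressions act as ranking keys:

    -- rows with a = 6 come first (a <> 6 is false there), then a = 4,
    -- then everything else ordered by a
    SELECT a FROM t ORDER BY (a <> 6), (a <> 4), a;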
Leon Stringer <[EMAIL PROTECTED]> writes:
> Hi,
>
> I wondered if anyone could answer the following question:
>
> If I have a table such as the one below:
>
> col1 col_order
> ---
> Apple 1
> Apple 2
> Orange 3
> Banana 4
> Apple 5
>
> Is there a way I can get the following result
James G Wilkinson <[EMAIL PROTECTED]> writes:
> I hope that this is some silly beginner's mistake. I have spent quite a bit of
> time reading the PostgreSQL documentation and cannot find my error. I have
> also scanned the PostgreSQL archive and the web for help, but I have not
> found anyt
[EMAIL PROTECTED] writes:
> So basically I want to ignore a single character (the apostrophe
> character), anywhere in the middle of my search word, in selecting
> results. How can I do this?
WHERE replace(name, '''', '') LIKE '%dont%'
Beware of quoting issues if "dont" is coming from user suppl
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
> >
> > So, what you're suggesting is that a restart of the webapp should make
> > vacuum
> > able to delete those dead rows?
>
> Yes, but that'll only solve your problem for now
Tom Lane <[EMAIL PROTECTED]> writes:
> We have to start the transaction no later than event #2 since there has
> to be something to hold the lock. But it'd be easy enough to decouple
> this from BEGIN, and it'd be good enough to solve the "COMMIT;BEGIN"
> problem.
Oh I think I finally figured ou
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> It does look like you can only ADD and DROP constraints, not directly
> alter or replace them. So making a reference deferrable is going to require
> a DROP and ADD which will need to recheck the constraint.
I asked the same question a few days ago on pgs
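The DROP-and-ADD sequence Bruno describes would look roughly like this (names
are placeholders):

    ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
    ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id)
        DEFERRABLE INITIALLY DEFERRED;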
Nick Fankhauser <[EMAIL PROTECTED]> writes:
> Alvaro Herrera wrote:
>
> > The order is not really guaranteed, though if this is a one-shot thing,
> > you may get away with turning off hashed aggregates.
> >
>
> When I read this, I assumed there was a runtime parameter I could set that was
> sim
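Presumably the setting meant is enable_hashagg:

    SET enable_hashagg = off;   -- planner falls back to sort-based grouping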
Markus Bertheau <[EMAIL PROTECTED]> writes:
> Offset for negative numbers means 0, as it seems. I think there is a
> sensible meaning for negative offset numbers and wondered, what
> arguments led to negative offsets being processed as 0 offset.
Frankly I'm surprised it's not a syntax error.
--
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> SELECT * FROM customers WHERE monthly_balance[6] = 0.00;
This, like the other poster said, can be accomplished with a set of simple
expression indexes.
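For instance, something along these lines (the index name is arbitrary, and the
extra parentheses around the expression are required):

    CREATE INDEX customers_balance6_idx ON customers ((monthly_balance[6]));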
> As an example ... or
>
> SELECT * FROM customers WHERE 0.00 = any (monthly_balance);
This w
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> In PostgreSQL, as everyone knows, a QUERY == a transaction, unless wrap'd in a
> BEGIN/END explicitly ... how does that work with a function? is there an
> implicit BEGIN/END around the whole transaction, or each QUERY within the
> function itself
Josh Berkus writes:
> Mark, Nathan,
>
> I'm moving this over to the PGSQL-SQL list, away from -hackers, as it's no
> longer a -hackers type discussion. Hope you don't mind!
>
> > On Wed, Sep 07, 2005 at 11:31:16AM -0700, Josh Berkus wrote:
> > > I'm also a little baffled to come up with any
Tom Lane <[EMAIL PROTECTED]> writes:
> Richard Huxton writes:
> > Ah, now I see what you're saying. You're quite right in your suspicions,
> > "MOVE..." isn't supported for plpgsql cursors. You could probably do
> > something with EXECUTE and returning a refcursor from a previous
> > function,
Stephan Szabo <[EMAIL PROTECTED]> writes:
> On Tue, 11 Oct 2005, Rick Schumeyer wrote:
>
> > I'm not sure what I was thinking, but I tried the following query in pg:
> >
> > SELECT * FROM t GROUP BY state;
> >
> > pg returns an error.
> >
> > Mysql, OTOH, returns the first row for each state. (T
Collin Peters <[EMAIL PROTECTED]> writes:
> I have a table that has some columns which store 'custom' fields so the
> content varies according to the user that the row belongs to. For one
> of the groups of users the field is a date (the type of the field is
> 'text' though). I'm trying to perfo
Scott Marlowe <[EMAIL PROTECTED]> writes:
> Hehe. When I turn on my windshield wipers and my airbag deploys, is it
> a documented "feature" if the dealership told me about this behaviour
> ahead of time?
Well it's more like my car where the dashboard dims when I turn on my
headlights which ann
"Anthony Molinaro" <[EMAIL PROTECTED]> writes:
> By changing the values in the select/group by you are changing
> Group! How can you arbitrarily add or exclude a column?
> You can't do it.
Go back and reread the previous posts again. You missed the whole point.
--
greg
--
"Anthony Molinaro" <[EMAIL PROTECTED]> writes:
> Greg,
> You'll have to pardon me...
>
> I saw this comment:
>
> "I don't see why you think people stumble on this by accident.
> I think it's actually an extremely common need."
>
> Which, if referring to the ability to have items in the sele
Scott Marlowe <[EMAIL PROTECTED]> writes:
> Sorry, but it's worse than that. It is quite possible that two people
> could run this query at the same time and get different data from the
> same set and the same point in time. That shouldn't happen accidentally
> in SQL, you should know it's comi
"Anthony Molinaro" <[EMAIL PROTECTED]> writes:
> More awkward? What *you're* suggesting is more awkward. You realize that
> right? How can syntax that is understood and accepted for years be more
> awkward?
Well gosh, I would say that that's something only a newbie could say about
SQL of all thi
Rob <[EMAIL PROTECTED]> writes:
> I would like complete control over this information -- so if sometime in the
> future it's decided to totally redesign the layout. Also, at some point a
> tool will be created so novice computer users can enter nicely formatted
> markup -- meaning you won't have
elein <[EMAIL PROTECTED]> writes:
> > Note that the above are not inverses because you changed the lefthand
> > input. You do get consistent results when you just add or omit NOT:
> Yes, you are right. I skipped the permutations to get down to the point.
Remember that NULL means "unknown". So "1
"Jesper K. Pedersen" <[EMAIL PROTECTED]> writes:
> Having checked the I/O format it seems that MS Access exports the
> values of a YESNO field as 0 and 1
If only Postgres's boolean type were as helpful.
--
greg
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > "Jesper K. Pedersen" <[EMAIL PROTECTED]> writes:
> >> Having checked the I/O format it seems that MS Access exports the
> >> values of a YESNO field as 0 and 1
Jan Danielsson <[EMAIL PROTECTED]> writes:
> select from_ip, count(from_ip) as entries, count(select * from log where
> ...) as tot_entries, max(ts)::timestamp(0) as last_access from log where
> to_port=22 and direction='in' group by from_ip
select from_ip,
count(from_ip) as entries,
"Ding Xiangguang" <[EMAIL PROTECTED]> writes:
> Hi, friend,
>
> Suppose there is table of daily transaction data with 5 fields,
>
> time(date), open(float8), high(float8), low(float8), close(float8)
>
> Is it possible to create a view of weekly data, i.e. open is the first
> day's open, high is t
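One way to build such a view, sketched against a hypothetical daily table with
the fields above (open/close picked per week with DISTINCT ON, high/low with
plain aggregates):

    CREATE VIEW weekly AS
    SELECT w.week, o.open, w.high, w.low, c.close
      FROM (SELECT date_trunc('week', time) AS week,
                   max(high) AS high, min(low) AS low
              FROM daily
             GROUP BY 1) AS w
      JOIN (SELECT DISTINCT ON (date_trunc('week', time))
                   date_trunc('week', time) AS week, open
              FROM daily
             ORDER BY date_trunc('week', time), time) AS o ON o.week = w.week
      JOIN (SELECT DISTINCT ON (date_trunc('week', time))
                   date_trunc('week', time) AS week, close
              FROM daily
             ORDER BY date_trunc('week', time), time DESC) AS c ON c.week = w.week;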