mands but they're only allowed to run against TABLES not
SEQUENCES - too bad - that would have been perfect.
I'm now starting to think that there's no way to solve this problem in
an "elegant manner" even in a stored procedure? Your method seems to be
as good as it
would
manifest, I'll post a warning on the RoR bug tracking site so that
people can at least understand that there's a potential bug here..
Thanks again,
Steve
At 08:42 PM 8/3/2007, Scott Marlowe wrote:
On 8/3/07, Steve Midgley <[EMAIL PROTECTED]> wrote:
> Hi Scott,
>
hope this helps someone else on the archives down the road. Thanks to
everyone for putting their time and attention on this problem. I'm very
grateful.
Sincerely,
Steve
At 08:00 AM 8/6/2007, Peter Childs wrote:
On 03/08/07, Michael Glaesemann
<[EMAIL PROTECTED]>
line off-list if you'd like some sample
Ruby code to read/write/transform your source.
Steve
At 06:33 AM 8/22/2007, [EMAIL PROTECTED] wrote:
Date: Wed, 22 Aug 2007 14:36:15 +1000
From: novice <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED], pgsql-sql@postgresql.org
Subject: raw data into ta
that the latter
(id=muni_city_id) is faster by a little - any voices of support or
other ideas in this regard? Caveats?
Thanks,
Steve
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
cting?
Or maybe someone could point me to the relevant docs that explains
order by behavior in more detail? I read this
http://www.postgresql.org/docs/8.2/interactive/sql-select.html#SQL-ORDERBY
and it didn't make things any clearer.
I'm quite interested in the power of "ex
st build your "entities" in the domain
specific language and it handles all the id
inserts and relational mapping for you. They can
even handle mapping many-to-many joined entities,
if you're careful in setting it up.
I hope this is helpful,
Steve
I see that I can create a table with a mixed case name as long as the name is
wrapped in quotes. Is there a setting to allow upper case and mixed case names
for database tables, fields, etc, without having to wrap each in quotes?
Thanks,
SteveE
y Ralph Kimball "The Data Warehouse Toolkit: The Complete
Guide to Dimensional Modeling (Second Edition)" - this book did more to
open my eyes to alternatives to traditional "normalized" modeling than
anything else. It also made me feel less guilty about building certain
non-normal st
rking? Does anyone have any knowledge of how "fixed" the columns and
values of this query are (i.e. are there a lot of internal and external
dependencies that make future Pg versions unlikely to break the above
code)?
Any other input on the above SQL - should I be doing this in another
Thanks Erik - that cleans things up significantly for me.
For the record, the SQL for finding all the table
names (alpha ordered) in the public schema using pg_tables view is:
select tablename from pg_tables where
schemaname='public' order by tablename
Steve
At 09:38 AM 12/18/
[user_id]" - I'm not sure how you'll pass user_id into this function,
maybe someone else can help with that?
Hopefully this is useful?
Steve
At 06:24 AM 1/8/2008, [EMAIL PROTECTED] wrote:
Date: Tue, 8 Jan 2008 14:24:22 +
From: "Jamie Tufnell" <[EMAIL PROTECTED]>
ss than 50 and a few
outliers with 60 or 70 rows before they get trimmed back down to 50..
Seems more reliable than a cron job, and solves your problem of an ever
growing table? You could adjust the random number test easily if you
change your mind of the balance of size of table vs. # of delete
s
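That random-number-on-insert idea might be sketched as a trigger. This is a hypothetical sketch: the table, column names, and the 1% probability are all invented for illustration.

```sql
-- Hypothetical sketch: on ~1% of inserts, trim a user's history back to
-- the newest 50 rows. Table/column names are invented.
CREATE OR REPLACE FUNCTION trim_history() RETURNS trigger AS $$
BEGIN
  IF random() < 0.01 THEN
    DELETE FROM history
    WHERE user_id = NEW.user_id
      AND id NOT IN (SELECT id FROM history
                     WHERE user_id = NEW.user_id
                     ORDER BY id DESC LIMIT 50);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trim_history_trg AFTER INSERT ON history
  FOR EACH ROW EXECUTE PROCEDURE trim_history();
```

Raising the 0.01 trims more aggressively (smaller outliers, more delete work per insert); lowering it does the opposite.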
ve control over this layer (that quotes your
values), then I'd say converting to ASCII would solve the problem. But
better to convert to numeric and not ship quoted strings at all.
I may be totally off-base and missing something fundamental and I'm
very open to c
On Jan 13, 2008 8:51 PM, Steve Midgley
<[EMAIL PROTECTED]> wrote:
At 02:22 PM 1/13/2008,
[EMAIL PROTECTED]
wrote:
>Date: Sat, 12 Jan 2008 14:21:00 -0800
>From: "Medi Montaseri" <<mailto:[EM
";" at character 59
Char 59, by the way, is the first occurrence of the semicolon in the statement,
which is being caught by the PG parser.
Medi
On Jan 14, 2008 12:18 PM, Steve Midgley
<[EMAIL PROTECTED]> wrote:
On Jan 13, 2008 8:51 PM, Steve Midgley
<<m
would help me to give you some ideas.
Without seeing a more formal schema and being able to toy with it, I'm
not sure I can give good advice. Others may have different opinions
which I would welcome.
Sincerely,
Steve
a little embarrassed to admit but
there you are.
If you're interested in this idea of precalculating values to optimize
your search, I'd be happy to discuss further. Also, Ralph Kimball's
Data Warehousing books are excellent on this subject (one of the few
authors
sions" table is simply not permitted. Inserts are allowed to
data_versions but nothing else..
I hope this idea is useful. If I haven't explained it well, drop me a
line and I'll try to clarify. Good luck with the project!
Steve
ment in them..
I'm wondering if there are any Bad Things that happen if I use negative
integers for primary key values in Postgres (v8.2)? My primary keys are
all bigserial type.
Any comments or advice?
Thanks!
Steve
encapsulated this and future-proofed it, that'd be much nicer.
Any assistance is appreciated! Thanks,
Steve
/AP/GL) at their core.
As you add bank accounts, complex investment instruments, depreciation
etc, things get considerably more complex of course.
I'll readily admit my limited experience, and I'm sure others on this
list have far better information. I hope this gets you started anyway.
At 06:54 PM 2/6/2008, Medi Montaseri wrote:
Thanks Steve...
And finally you mentioned that bank accounts are tricky...can you
expand on this please. After all I am under the impression that "bank
accounts" are a cornerstone of this whole bookkeeping...I
mean...bank accounts have
ten stored
procedure is the way to go? Not sure how to make WinAPI calls from PG -
perhaps someone on list has experience or references for that:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/rpc/rpc/uuidcreate.asp
HTH,
Steve
PostgreSQL. This often means that the product
has severely limited its use of appropriate PostgreSQL features in order
to remain compatible with the other backends.
Cheers,
Steve
port the "comment" keyword. Is there an
alternative?
You mean like:
COMMENT ON TABLE mytable IS 'This is my table. Mine, mine, mine';
You can also comment columns, databases, functions, schemas, domains, etc.
Cheers,
Steve
'/folder1/folder2/' || _chrom || '.txt'
(i.e. leaving off the "c:" part too), you may find that everything just
works a little cleaner / fewer unexpected surprises.
Like I said, I don't know if this is your issue (and Vista), but it's
been my experience with Win
use.org/science/2007/08/07/obtaining-a-block-of-ids-from-a-sequence-in-postgresql/
I hope this helps a little!
Steve
--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
speaking there is
one sequence per primary key, by default on each table. So if you reset
that key, then your table will start issuing keys at that new number.
Another way to be more safe is to "+5" your sequence, so that even if a
few inserts slip in, you're still ahead of the g
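That reset-with-a-margin might look like the following sketch, assuming a table `mytable` with a bigserial `id` backed by the default `mytable_id_seq`:

```sql
-- Restart the sequence just past the current max id, plus a small margin
-- in case a few inserts slip in while you're working.
SELECT setval('mytable_id_seq', (SELECT max(id) + 5 FROM mytable));
```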
+ 1) * group_size
) as person_days
group by country_id;
Add where-clauses to either for efficiency.
Cheers,
Steve
p_size
)
Basically remove the "+1" so we don't include both start and end dates
but move the start base back one day so anyone starting prior to Feb 1
gets the extra day added.
Cheers,
Steve
Aarni Ruuhimäki wrote:
Thanks Steve,
I'm not sure if I quite grasped this. It gives a bit funny results:
SELECT sum ((date_smaller(res_end_day, '2007-12-31'::date) -
date_larger(res_start_day, '2006-12-31'::date)) * group_size) AS
days_in_period,
c.country_name AS
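For reference, a complete version of that query might look like the sketch below. The schema (table and column names) is guessed from the fragments above, and the standard LEAST/GREATEST functions stand in for date_smaller/date_larger:

```sql
-- Person-days per country falling inside the reporting window
-- (2007-01-01 .. 2007-12-31); schema is assumed for illustration.
SELECT c.country_name,
       sum((LEAST(r.res_end_day, DATE '2007-12-31')
          - GREATEST(r.res_start_day, DATE '2006-12-31')) * r.group_size)
         AS days_in_period
FROM reservations r
JOIN countries c ON c.country_id = r.country_id
WHERE r.res_start_day <= DATE '2007-12-31'
  AND r.res_end_day   >  DATE '2006-12-31'
GROUP BY c.country_name;
```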
ly people here). This will
supplement your sparse index by detecting people who are scanning your
sparse index space and generating lots of "misses."
Hope that helps,
Steve
At 12:36 PM 3/18/2008, D'Arcy J.M. Cain wrote:
On Tue, 18 Mar 2008 12:23:35 -0700
Steve Midgley <[EMAIL PROTECTED]> wrote:
> 1) Create a second field (as someone recommend on this list) that
is an
> MD5 of your primary key. Use that as your "accessor" index from the
clears up what I was recommending! I didn't
anticipate it would stir up this much analysis and I hope the OP finds
your input and mine useful in coming up with a final answer to his
issue. Thanks for taking the time to consider the issue and I'll look
forward to any additi
ommand here");
I think you want something more like this pseudocode:
system.set_environment("PGPASSWORD", "pass1234");
system.exec("psql my command here");
I hope this helps,
Steve
es and rollbacks will
cause holes.
If you just need consecutive row-numbering on output (not in the table)
and if the row numbering doesn't need to match the same record each
time, you can create a temporary sequence and select
nextval('tempsequence'), * from yourtable.
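Spelled out, that temporary-sequence trick might look like this (table name assumed):

```sql
-- Output-only consecutive row numbers; the numbering is not stored and
-- won't necessarily match the same rows on the next run.
CREATE TEMPORARY SEQUENCE tempsequence;
SELECT nextval('tempsequence') AS row_number, t.*
FROM yourtable t;
DROP SEQUENCE tempsequence;
```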
Cheers
f you index ip's using a
custom data type and search/group for specific octets, you'll get much
better performance than just searching via a regular b-tree string
index..
http://www.postgresql.org/docs/8.3/static/datatype-net-types.html
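A minimal sketch of the inet approach (table name and addresses invented):

```sql
-- The built-in inet type understands network structure, so containment
-- tests can use network operators rather than string matching.
CREATE TABLE hits (ip inet);
INSERT INTO hits VALUES ('192.168.1.5'), ('192.168.1.77'), ('10.0.0.3');

-- Every address inside a given /24:
SELECT ip FROM hits WHERE ip << inet '192.168.1.0/24';
```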
Steve
his caused very poor performance on
large tables. EXPLAIN will not reveal this. You might want to set the
server to log all transactions and see what the app is really doing at
the server level.
Cheers,
Steve
gs in ODBC that
caused ODBC to do as little as possible. Just receive raw
sql, pipe it to my server and hand it back to me in a memory data
structure. All the data structure parsing was done by my middleware
connection wrapper to ODBC and none was done by ODBC itself. Once I got
r ADO driver and see if it gives you the same problems..
Just some more grist for the solution mill, hopefully!
Steve
into yourtable SELECT * from xxx;
DROP TABLE xxx;
If you are so close to out-of-space on your disk that you don't have the
room those 90,000 records will require, you may have to dump/restore
using another machine.
Cheers,
Steve
l_accents|short_accents...etc
I keep the country names with and without accents to make searching
easier across keyboards/locales.
I hope this helps too -- I think Craig has given you the lion's share
of good advice for sure - and I definitely follow the practices more or
less as he laid them
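A sketch of that with/without-accents lookup table, with invented column names:

```sql
-- One row per country: accented form for display, plain-ASCII form for
-- searching from keyboards/locales that can't easily type accents.
CREATE TABLE countries (
  country_name  text,  -- e.g. 'Curaçao'
  country_ascii text   -- e.g. 'Curacao'
);
```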
you there. I'm sure others have different
perspectives which are equally or more valid!
Best,
Steve
from nutrient_data where nutrient_no=203;
Cheers,
Steve
) for a given ndb_no:
select nutrient_no,nutrient_value from nutrient_data where ndb_no =
13473 order by nutrient_value limit 5;
Cheers,
Steve
op search / coding schemes that say
things like "find all records with chicken and skin, where the
following words do not appear between the words chicken and skin: beef,
pork, cow, pig, etc.."
Just some thoughts for you there.
Best,
Steve
might be a day or two. I am looking for a query that will list any
device having no variation in the recent events.
Cheers,
Steve
Allan Kamau wrote:
Hi Steve,
Am having difficulties (there is a chance I could be the only one)
trying to see how the results you've listed under "I would want to
get:" section can be generated from the information you have provided
in your implicit problem statement.
Remember
Steve Crawford wrote:
Allan Kamau wrote:
Hi Steve,
Am having difficulties (there is a chance I could be the only one)
trying to see how the results you've listed under "I would want to
get:" section can be generated from the information you have provided
in your implicit pr
orm against ANSI-92 (or any other std): if you do not regularly
test against a set of platforms, your solution will converge on
supporting only the platforms you do regularly test against.
I hope that helps,
Steve
of
the database was created. If you restored from a dump or otherwise
recreated the database for any reason (version upgrade, machine
migration, disaster recovery, etc.), the timestamps would represent the
time of the restore, not the time of the creation of the original database.
Cheers,
Steve
lication layer developers with a consistent set of
interfaces to obtain data that are not tied to the data tables
themselves. And allowing them to insert/update/manage tables via
structured interfaces as well.
Am I missing something?
Best,
Steve
is really important in general for the list.
There's a lot of good information out there and I think it's not so
great if this list were to limit itself only to public domain and open
copyright documentation for consideration.
Just two more cents from the peanut gallery on a Sat
ate_max" CASE is causing the
problem.
Thanks for any advice or suggestions on how to get this to run
correctly. Is this a bug?
Basically I'm doing this as an optimization - I can get much better
performance running the DISTINCT ON in some circumstances than using
DISTINCT, but the edge case above is breaking my tests and preventing
me from implementing the idea. The code is generated by an application
layer which is not really paying attention to whether or not the two
CASE statements apply to the same field or not (sometimes they do,
sometimes they don't).
Thanks!
Steve
At 07:29 AM 7/16/2008, Tom Lane wrote:
Steve Midgley <[EMAIL PROTECTED]> writes:
Interesting. You realize of course that sorting by the same
expression
twice is completely redundant? I haven't dug through the code yet but
Thanks Tom. Yeah, I was a little embarrassed to throw this
u could aggregate against your datetime stamp by N hours or
days as well. If this idea is of interest you can write back to the
list or off-list to me for more info.
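For the day-bucket case, the aggregation might be sketched as follows (table and column names assumed):

```sql
-- Collapse a timestamp column into day buckets and count events per day.
SELECT date_trunc('day', created_at) AS bucket, count(*) AS events
FROM events
GROUP BY date_trunc('day', created_at)
ORDER BY bucket;
```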
Steve
At 03:51 PM 7/31/2008, Tom Lane wrote:
Steve Midgley <[EMAIL PROTECTED]> writes:
> At 07:29 AM 7/16/2008, Tom Lane wrote:
>> I think what is happening is that ORDER BY knows that and gets rid
of
>> the duplicate entries while DISTINCT ON fails to do so.
> Of course rem
metimes in
middleware and it works pretty well. There's probably a pure-sql
solution in Pg as well but this method should work across any SQL
platform, which seems like one of your requirements.
Steve
t against that in WHERE clause? I could be
misunderstanding the whole thing though..
Steve
Sorry for the fairly long post.
I'm having a big problem trying to update one table from another in
PostgreSQL 8.3.1.
I have a lookup table called termgroup:
# select * from termgroup;
termgroupname | mindays | maxdays
---------------+---------+---------
1-30 days | 1 | 30
31-59 d
er heard someone say they've been bitten by using
an arbitrary surrogate key system, but I myself have been bitten and
have heard lots of stories of problems when using natural keys.
I hope this helps some,
Steve
great explanation as to the value of natural
keys! You haven't won me over, but you did teach me something - which I
appreciate.
Best,
Steve
uld be a bonus!
It seems like there's a clever way to do this without cursors but I
can't figure it out!
Thanks for any help!
Steve
aps a natural join like in Oliveiros' sql would do the
job?
Thanks for any advice on either of these solutions. I'm going to learn
a lot here if someone can pound it into my head.
Thanks,
Steve
It seems to be returning any records that have sequential id's
regardless
At 11:
At 05:38 PM 9/26/2008, Oliveiros Cristina wrote:
Howdy, Steve.
SELECT id
FROM dummy a
NATURAL JOIN (
SELECT fkey_id,name
FROM dummy
GROUP BY fkey_id,
At 09:50 PM 9/29/2008, Richard Broersma wrote:
On Mon, Sep 29, 2008 at 7:48 PM, Steve Midgley <[EMAIL PROTECTED]>
wrote:
> In my specific case it turns out I only had duplicates, but there
could have
> been n-plicates, so your code is still correct for my use-case
(though I
uld construct a rule or trigger to validate the
foreign key data on insert/update but that's out of my skill area.
Hope that helps a little,
Steve
morphic join" but I could be wrong
about
> that. I'd guess you could construct a rule or trigger to validate
the
> foreign key data on insert/update but that's out of my skill area.
Hi Steve,
So in your solution the f_table column is just text which needs to be
validated by
n you want
two columns of output, each column being a pg array?
I may not be the best person to answer the actual SQL question, but I
thought I'd clarify your requirements so the list members can have the
best chance of answering.
Steve
At 11:28 AM 10/23/2008, Joe wrote:
Steve Midgley wrote:
# (invoiceid, txid)
(A, 1)
(A, 3)
(B, 1)
(B, 2)
(C, 5)
(D, 6)
(D, 7)
(E, 8)
(F, 8)
For journalling, I need to group/cluster this together. Is there a
SQL
query that can generate this output:
# (journal: invoiceids, txids)
[A,B
having to look up the max value on each one (which
would also require that you shut off access to the table and for a much
longer time).
Hope that helps,
Steve
ue, use the coalesce
function:
select coalesce(my_column, 'a null value');
will return the string 'a null value' whenever my_column is null.
Cheers,
Steve
eople_id = p.people_id)
JOIN test_attributes b
ON ((b."people_id" = p."people_id")
WHERE
(a."attribute" = @firstAttr))
AND (b."attribute" = @secondAttr));
Also, any suggestions about how to figure out this on my own without
bugging the list in the future would be gre
t;Working with binary data and bytea
datatype" and try the example code there. Just feed it your images as
input and all should be good.
Cheers,
Steve
;
CREATE TABLE test
(
filename character varying(255) NOT NULL,
data bytea
);
insert into test (filename, data)
values (E'c:\\tmp\\tst.tif', '1234');
select replace(filename, E'\\', E''), data from test
Does this do it?
Steve
works even on the command line, which is helpful. You can type this
directly into the CMD prompt now:
dir "c:/temp"
All new programs I write on Windows (in Ruby) use forward slashes for
paths, and it works just fine. Not sure about VB or C#, but I'd guess
you can make it work. Might be simpler than all the escaping work..
Best,
Steve
d solutions, questions and ideas back to this list as
you go. I've forwarded this thread already to several people who work
with related issues, and they're very interested in some solutions as
well.
So stay in touch as you work on this, please.
Sincerely,
Steve
nce would be anything comparable either (I'd guess that using
the "regex" operators, like "~*", would be the way to go).
I thought I'd mention this other approach in case it was of interest
and you haven't run across it before.
Sincerely,
Steve
such a utility I would like it to be
reviewed/evaluated?
Thanks a lot guys.
--
"If you would take, you must first give, this is the beginning of
intelligence" -- Lao Tze, Tao Te Ching
Steve L. Nyemba
ors for this along
with a GiST index - it's not perfect for long distances - it assumes
the earth is flat, but it works great for small distances and is very
fast).
I hope this helps. Feel free to contact me on-list or off, if you want
to discuss more.
Steve
y'),
('Rob Roy', '(2,0)', 'today'),
('Rob Roy', '(2,0)', 'today');
-- SOLUTION QUERY HERE --
select count(user_tracker.id), locations.name from user_tracker
right outer join locations on user_tracker.location <@ locations.area
group by locations.name;
-- END SQL SCRIPT --
OUTPUT:
3;"Manhattan, NY"
0;"Talahassee, FL"
6;"Frankfurt, GE"
Steve
ot;table_name" and then looking at all
the "column_name" fields, building your column
list, excepting the column_names you wish to exclude..
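That information_schema query might be sketched as follows (table and excluded-column names invented):

```sql
-- All column names for one table, minus the ones you want to exclude,
-- in their defined order.
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'mytable'
  AND column_name NOT IN ('password', 'big_blob')
ORDER BY ordinal_position;
```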
Best,
Steve
rebuilding them manually later?
Thanks for any insight on that (and I hope my question helps the OP as
well - if this seems off topic let me know),
Steve
sks
these days, but for a read-only database, that seems like an option
(10gb of ram disk for your read-only data and 6 gb of ram for OS and
Pg).
Steve
reason.
How about this:
select round.*, stage.name from round
left join stage on stage.id = round.stage_id
ORDER BY round.score DESC;
Steve
3,4 for the
ordering, but then you have to mess with two records in order to swap
the positions of (say) item 2 and 3. Of course you can do this pretty
easily inside a transaction, and you don't have to worry about the mess
of moving PK's.
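The swap-inside-a-transaction idea might be sketched like this (table and column names assumed; -1 is a temporary parking value not used by real rows):

```sql
-- Swap the display positions of items 2 and 3 atomically.
BEGIN;
UPDATE items SET position = -1 WHERE position = 2;
UPDATE items SET position = 2  WHERE position = 3;
UPDATE items SET position = 3  WHERE position = -1;
COMMIT;
```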
Steve
both queries side-by-side along with the FK id's
in a single result set
Thanks for any assistance on this!
Steve
/*SQL STARTS*/
drop table if exists contact_log;
drop table if exists contact_property;
create table contact_log(id serial NOT null, src_contact_id integer,
log_type character
Erik Jones wrote:
On Apr 15, 2009, at 1:15 PM, Steve Midgley wrote:
I want to generate an analysis report that counts the values in two
separate tables. I've been able to accomplish what I want with two
separate queries that I then merge together in Excel. Essentially
what I need
Date: Wed, 15 Apr 2009 21:23:04 -0700
From: Steve Midgley
To: Erik Jones
Subject: Re: How to count from a second table in an aggregate query?
Message-ID: <49e6b2a8.5040...@misuse.org>
Erik Jones wrote:
>
> On Apr 15, 2009, at 1:15 PM, Steve Midgley wrote:
>
>>
here a way
to do the same with a single command? (The data is such that
the update will fail in the majority of cases.)
Thanks!
--
Steve Wampler- SOLIS Project, National Solar Observatory
[EMAIL PROTECTED]