Re: [SQL] it's not NULL, then what is it?

2009-06-30 Thread Edward W. Rouse
Just out of curiosity, did you try maf = 0?

 

Edward W. Rouse

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Tena Sakai
Sent: Tuesday, June 30, 2009 6:03 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] it's not NULL, then what is it?

 

Hi Everybody,

I have a table called gallo.sds_seq_reg_shw,
which is like:

  canon=# \d gallo.sds_seq_reg_shw
      Table "gallo.sds_seq_reg_shw"
        Column      |  Type   | Modifiers
  ------------------+---------+-----------
   name             | text    |
   response         | text    |
   n                | integer |
   source           | text    |
   test             | text    |
   ref              | text    |
   value            | real    |
   pvalue.term      | real    |
   stars.term       | text    |
   gtclass.test     | text    |
   fclass.test      | text    |
   gtclass.ref      | text    |
   fclass.ref       | text    |
   markerid         | integer |
   maf              | real    |
   chromosome       | text    |
   physicalposition | integer |
   id               | text    |
   ctrast           | text    |
 
I am interested in the column maf (which is real):

  canon=# select maf from gallo.sds_seq_reg_shw
  canon-#  order by maf asc;
   maf
  -------------
   0.000659631
   0.000659631
   0.000659631
   0.000659631
.
  (truncated in the interest of brevity)
.

Another way to look at this column is:

  canon=# select maf from gallo.sds_seq_reg_shw
  canon-#  order by maf desc;
   maf
  -------------
 
 
 
.
  (truncated in the interest of brevity)
.

The rows shown appear to be blank, as far as I can tell.
But...

  canon=# select count(maf) from gallo.sds_seq_reg_shw;
   count
  -------
   67284
  (1 row)
 
  canon=# select count(maf) from gallo.sds_seq_reg_shw
  canon-#  where maf ISNULL;
   count
  -------
   0
  (1 row)
 
  canon=#
  canon=# select count(maf) from gallo.sds_seq_reg_shw
  canon-#  where maf NOTNULL;
   count
  -------
   67284
  (1 row)

My confusion is that if they are real and not null,
what are they?  How would I construct a query to do
something like:

 select count(maf)
   from gallo.sds_seq_reg_shw
  where maf ISBLANK;
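A likely explanation, though it is not confirmed above: count(maf) counts
only non-NULL values by definition, so "count(maf) ... WHERE maf ISNULL" is
0 by construction, and psql prints NULL as an empty string by default (a
plain descending sort also places NULLs first, which matches the blanks
shown). A sketch of queries against the same table that would distinguish
the cases:

```sql
-- count(*) counts rows; count(maf) silently skips NULLs.
SELECT count(*)   AS total_rows,
       count(maf) AS non_null_maf
  FROM gallo.sds_seq_reg_shw;

-- Rows where maf really is NULL:
SELECT count(*)
  FROM gallo.sds_seq_reg_shw
 WHERE maf IS NULL;
```

If total_rows comes back larger than 67284, the "blank" rows are ordinary
NULLs and no ISBLANK is needed.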

Thank you in advance.

Regards,

Tena Sakai
tsa...@gallo.ucsf.edu 



[SQL] lost password

2009-11-17 Thread Edward W. Rouse
I have a database that has a "process" user. I need to move the database to
an additional server. When I restored it, I got a "user not found" error. So
I am trying to recreate this user, but no one knows what the password is
supposed to be. The processes connecting to it have encrypted versions of
the password. And what I can find in the database uses md5. Is there some
way to find out or copy over the users password? Trying to find every
process that uses this login will be very time consuming and we are
guaranteed to miss some and generate a lot of frustration and confusion as
processes that always worked cease. Not to mention that some stuff may be
failing for a while before someone catches it.

 

I was thinking of just taking the md5 from pg_authid:rolpassword and using
it to update the new server. Will this work? Or is there another way to
retrieve a forgotten password?
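For what it's worth, the hash can be carried across without ever learning
the plaintext, because a password string that already carries the md5 prefix
is assumed to be pre-hashed and stored verbatim. A sketch, with an invented
hash value:

```sql
-- On the old server (superuser only): read the stored hash.
SELECT rolname, rolpassword FROM pg_authid WHERE rolname = 'process';

-- On the new server: recreate the role and re-apply the hash as-is.
CREATE USER process;
ALTER USER process PASSWORD 'md5a0f2ab41e95d8ff8d3f7e0d83bc3e04e';
```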

 

Edward W. Rouse

Comsquared System, Inc.

770-734-5301

 



Re: [SQL] lost password

2009-11-17 Thread Edward W. Rouse
Well I quickly found out 2 things:

 

1.   You can copy the md5 over and have it work

2.   The same password can have different md5's

 

After I copied the md5 over, I played around a bit and found the password for
the process user. I then used the alter user sql statement and rechecked the
md5. It was different, but I could still use the same password to log in.
Needless to say, I am a bit confused by that, but it works and that's what I
need.

 

Edward W. Rouse

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Edward W. Rouse
Sent: Tuesday, November 17, 2009 4:21 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] lost password

 

I have a database that has a "process" user. I need to move the database to
an additional server. When I restored it, I got a "user not found" error. So
I am trying to recreate this user, but no one knows what the password is
supposed to be. The processes connecting to it have encrypted versions of
the password. And what I can find in the database uses md5. Is there some
way to find out or copy over the users password? Trying to find every
process that uses this login will be very time consuming and we are
guaranteed to miss some and generate a lot of frustration and confusion as
processes that always worked cease. Not to mention that some stuff may be
failing for a while before someone catches it.

 

I was thinking of just taking the md5 from pg_authid:rolpassword and using
it to update the new server. Will this work? Or is there another way to
retrieve a forgotten password?

 

Edward W. Rouse

Comsquared System, Inc.

770-734-5301

 



Re: [SQL] lost password

2009-11-18 Thread Edward W. Rouse
Well, the username and password are the same, but the md5 is different. But
it doesn't seem to matter because either one works the same. The fact that
you can have 2 different md5's yet still have the same user name and
password and have logins work is what I found to be interesting.

But my original problem is solved in either case, so it doesn't really
matter to me. I just found it to be ... interesting.

Edward W. Rouse

-Original Message-
From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Tom Lane
Sent: Tuesday, November 17, 2009 6:57 PM
To: Edward W. Rouse
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] lost password 

"Edward W. Rouse"  writes:
> After I copied the md5 over, I played around a bit and found the password
> for the process user. I then used the alter user sql statement and rechecked
> the md5. It was different, but I could still use the same password to log in.
> Needless to say, I am a bit confused by that, but it works and that's what
> I need.

The md5 will depend on both the actual password and the user's name
... does that help?

regards, tom lane
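Spelled out, the stored value is 'md5' followed by the md5 of the password
concatenated with the role name, which can be checked from psql itself
(values invented):

```sql
-- For a role named 'process' with password 'secret',
-- pg_authid.rolpassword would contain:
SELECT 'md5' || md5('secret' || 'process') AS rolpassword;
```

This is also why the same password produces different hashes for different
user names, and why renaming a role invalidates its stored password.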

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql




[SQL] Duplicate rows

2010-08-10 Thread Edward W. Rouse
Is there any way to remove a duplicate row from a table? Not my db but I
have to work with it. On version 7.4 right now.

 

Edward W. Rouse

Comsquared System, Inc.

770-734-5301

 



Re: [SQL] Duplicate rows

2010-08-10 Thread Edward W. Rouse
I am trying to test this but get an error.

select ctid, * from test where id < 300 order by id, ctid;
ERROR:  could not identify an ordering operator for type tid
HINT:  Use an explicit ordering operator or modify the query.

If I do a select I get this:

select ctid, * from test where id < 300 order by id;
   ctid   | id  | activated | wake_up_time
----------+-----+-----------+--------------
 (108,22) | 316 | f         |
  (36,17) | 316 | f         |
(only 2 rows shown, for brevity)

And when I tried max(ctid) I got:

ERROR:  function max(tid) does not exist
HINT:  No function matches the given name and argument types. You may need
to add explicit type casts.

I appreciate all the help and this feels like I'm almost there.
Thanks


Edward W. Rouse

-Original Message-
From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Andreas Kretschmer
Sent: Tuesday, August 10, 2010 3:45 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Duplicate rows

Edward W. Rouse  wrote:

> Is there any way to remove a duplicate row from a table? Not my db but I
have
> to work with it. On version 7.4 right now.
> 

How to select the right records?

You can try to use the ctid-column, see my simple example:

test=# select * from dups ;
 i
---
 1
 1
 1
 2
 2
 3
 4
(7 rows)

Time: 0.145 ms
test=*# delete from dups where (ctid, i) not in (select max(ctid), i from
dups group by i);
DELETE 3
Time: 0.378 ms
test=*# select * from dups ;
 i
---
 1
 2
 3
 4
(4 rows)



Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [SQL] Duplicate rows

2010-08-10 Thread Edward W. Rouse
Solved. Because this is a 7.4 version and tables were created WITH OIDS by
default, I can use the oids instead of the ctid to remove the duplicates.
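For the archives, the oid-based variant of Andreas's query would look
roughly like this (his dups example table, assuming it was created WITH
OIDS; unlike tid, oid has ordering operators, so max() works even on 7.4):

```sql
-- Keep the row with the highest oid for each value of i, delete the rest.
DELETE FROM dups
 WHERE oid NOT IN (SELECT max(oid) FROM dups GROUP BY i);
```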

Thanks.

Edward W. Rouse


-Original Message-
From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Edward W. Rouse
Sent: Tuesday, August 10, 2010 4:43 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Duplicate rows

I am trying to test this but get an error.

select ctid, * from test where id < 300 order by id, ctid;
ERROR:  could not identify an ordering operator for type tid
HINT:  Use an explicit ordering operator or modify the query.

If I do a select I get this:

select ctid, * from test where id < 300 order by id;
   ctid   | id  | activated | wake_up_time
----------+-----+-----------+--------------
 (108,22) | 316 | f         |
  (36,17) | 316 | f         |
(only 2 rows shown, for brevity)

And when I tried max(ctid) I got:

ERROR:  function max(tid) does not exist
HINT:  No function matches the given name and argument types. You may need
to add explicit type casts.

I appreciate all the help and this feels like I'm almost there.
Thanks


Edward W. Rouse

-Original Message-
From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Andreas Kretschmer
Sent: Tuesday, August 10, 2010 3:45 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Duplicate rows

Edward W. Rouse  wrote:

> Is there any way to remove a duplicate row from a table? Not my db but I
have
> to work with it. On version 7.4 right now.
> 

How to select the right records?

You can try to use the ctid-column, see my simple example:

test=# select * from dups ;
 i
---
 1
 1
 1
 2
 2
 3
 4
(7 rows)

Time: 0.145 ms
test=*# delete from dups where (ctid, i) not in (select max(ctid), i from
dups group by i);
DELETE 3
Time: 0.378 ms
test=*# select * from dups ;
 i
---
 1
 2
 3
 4
(4 rows)



Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [SQL] Issue with postgres connectivity

2011-01-21 Thread Edward W. Rouse
When was the last time you did a vacuum analyze?

Edward W. Rouse


-Original Message-
From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Arindam Hore
Sent: Friday, January 21, 2011 6:44 AM
To: Reinoud van Leeuwen; pgsql-sql@postgresql.org
Subject: Re: [SQL] Issue with postgres connectivity

We are accessing the database using an IP address.

-Original Message-
From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Reinoud van Leeuwen
Sent: Friday, January 21, 2011 4:27 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Issue with postgres connectivity

On Fri, Jan 21, 2011 at 10:45:37AM +, Arindam Hore wrote:
> Hello All,
>
> I am facing a big problem in postgres connectivity using php application.
> It is taking almost 10 sec for connection establishment. All my applications
> were working perfectly two days ago. Yesterday it just started giving
> problems. Using pg-admin it is also taking time to connect, as well as
> opening the server status window or the sql query window.
>
> Postgres is installed in a linux environment. Don't know whether it is due
> to some virus or something else.
>
> Please provide me with some guidelines. Ask me any queries.

this smells like DNS resolving...

Reinoud

--
__
"Nothing is as subjective as reality"
Reinoud van Leeuwenreinou...@n.leeuwen.net
http://reinoud.van.leeuwen.net kvk 27320762
__




Re: [SQL] Sorting Issue

2011-05-10 Thread Edward W. Rouse
Looks like the sort is ignoring the spaces:

cxh
cxlm
cxlp
etc...
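If so, that is the database locale at work: en_US-style collations give
letters more weight than spaces and punctuation, which produces exactly the
order shown. Forcing a plain byte-wise comparison restores the expected
grouping; a sketch (the COLLATE clause needs PostgreSQL 9.1 or later, and on
older versions the server would have to be initialized with the C locale):

```sql
SELECT VehicleTrimAbbreviated, VehicleTrimAbbreviatedId
  FROM VehicleTrimAbbreviated
 WHERE VehicleTrimAbbreviated LIKE 'CX%'
 ORDER BY VehicleTrimAbbreviated COLLATE "C";
```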

Edward W. Rouse


-Original Message-
From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Ozer, Pam
Sent: Monday, May 09, 2011 3:39 PM
To: em...@encs.concordia.ca
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Sorting Issue

That works. Why?

-Original Message-
From: Emi Lu [mailto:em...@encs.concordia.ca] 
Sent: Monday, May 09, 2011 12:38 PM
To: Ozer, Pam
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Sorting Issue

> I have the following query
>
> Select VehicleTrimAbbreviated, VehicleTrimAbbreviatedId
>
> from VehicleTrimAbbreviated
>
> Where vehicleTrimAbbreviated like 'CX%'
>
> order by VehicleTrimAbbreviated asc
>
> Results:
>
> 532;"CX Hatchback"
>
> 536;"CXL Minivan"
>
> 3255;"CXL Premium Sedan"
>
> 537;"CXL Sedan"
>
> 538;"CXL Sport Utility"
>
> 3319;"CXL Turbo Sedan"
>
> 533;"CX Minivan"
>
> 1959;"CX Plus Minivan"
>
> 534;"CX Sedan"
>
> 535;"CX Sport Utility"
>
> 539;"CXS Sedan"
>
> Why would this not sort correctly? All the CX should be first, then
CXL,
> Then CXS

Would you mind try:

Select VehicleTrimAbbreviated, VehicleTrimAbbreviatedId
from VehicleTrimAbbreviated
Where vehicleTrimAbbreviated like 'CX%'
order by

split_part(VehicleTrimAbbreviated, ' ', 1) asc,
split_part(VehicleTrimAbbreviated, ' ', 2) asc;





Re: [SQL] extracting location info from string

2011-05-23 Thread Edward W. Rouse
I would think that changing the location column to hold a FK to a location 
table, and setting up the location table with various columns for city, region, 
country and whatever else might be required would be the way to go. It reduces 
column bloat on the main table, provides reuse of location data and is easier 
to modify in the future.

Edward W. Rouse


-Original Message-
From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org] On 
Behalf Of Lew
Sent: Monday, May 23, 2011 12:25 AM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] extracting location info from string

On 05/22/2011 09:42 PM, Craig Ringer wrote:
> On 23/05/2011 9:11 AM, Andrej wrote:
>> On 23 May 2011 10:00, Tarlika Elisabeth Schmitz
>>  wrote:
>>> On Sun, 22 May 2011 21:05:26 +0100
>>> Tarlika Elisabeth Schmitz wrote:
>>>
>>>> A column contains location information, which may contain any of the
>>>> following:
>>>>
>>>> 1) null
>>>> 2) country name (e.g. "France")
>>>> 3) city name, region name (e.g. "Bonn, Nordrhein-Westfalen")
>>>> 4) city name, Rg. region name (e.g. "Frankfurt, Rg. Hessen")
>>>> 5) city name, Rg region name (e.g. "Frankfurt, Rg Hessen")
>>>
>>>
>>> I also need to cope with variations of COUNTRY.NAME and REGION.NAME.

That isn't a table structure, that's a freeform text structure.  You didn't 
state your question, Tarlika, but your database structure is terrible.  For 
example, "region" and "country" should be different columns.  Really!

How you get your raw data into those columns can be interesting.

> This is a hard problem. You're dealing with free-form data that might be
> easily understood by humans, but relies on various contextual information and
> knowledge that makes it really hard for computers to understand.
>
> If you want to do a good job of this, your best bet is to plug in 3rd party
> address analysis software that is dedicated to this task. Most (all?) such

These aren't really addresses, as the OP presents them.

> packages are commercial, proprietary affairs. They exist because it's really,
> really hard to do this right.
>
>> Another thing of great import is whether the city can occur in the
>> data column all by itself; if yes, it's next to impossible to distinguish
>> it from a country.
>
> Not least because some places are both, eg:
>
> Luxembourg
> The Vatican
> Singapore
>
> (The Grand Duchy of Luxembourg has other cities, but still serves as an 
> example).

And,of course, you have to distinguish the City of London from London.  New 
York City comprises five boroughs (counties), each of which is itself a city. 
  (Brooklyn is one of the largest cities in the world all by itself.) 
"Region" has different meanings in different areas - it can mean part of a 
county, or state / province, or nation, or continent.  "The Baltic region", 
"the Northeast", "upstate", "the North Country", "Europe" are all regions.

The OP should share more about the semantics of their problem domain and 
whether they really intend those table structures to be table structures.  
Really?

-- 
Lew
Honi soit qui mal y pense.
http://upload.wikimedia.org/wikipedia/commons/c/cf/Friz.jpg



Re: [SQL] Stuck Up In My Own Category Tree

2011-08-11 Thread Edward W. Rouse
How about SELECT cat_name, sum(amount) AS "amount" FROM category,
trans_details WHERE category.cat_id = trans_details.cat_id AND category.cat_id
in (select cat_id from category where lineage ~ '^1')

 

The IN clause picks out the cat_ids whose lineage starts with the prefix you
want: 1, 1-2, whatever the lineage is. Not sure about efficiency, or whether
you can put another subselect in there to build the lineage string to match,
but the basic idea should work.
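A fuller sketch of the same lineage-prefix idea, rolling every transaction
up to its root category (schema as described in the original post;
untested):

```sql
-- One output row per root category; descendants are matched by lineage
-- prefix, and a root with no children still matches itself.
SELECT r.cat_name, sum(t.amount) AS amount
  FROM category r
  JOIN category c
    ON c.lineage = r.lineage OR c.lineage LIKE r.lineage || '-%'
  JOIN trans_details t ON t.cat_id = c.cat_id
 WHERE r.parent_id IS NULL
 GROUP BY r.cat_name;
```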

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Don Parris
Sent: Thursday, August 11, 2011 11:39 AM
To: pgsql-sql@postgresql.org
Subject: [SQL] Stuck Up In My Own Category Tree

 

Hi all,

Note: I'm happy to read howtos, tutorials, archived messages - I just
haven't found anything that addresses this yet.  I found a related topic on
the novice list, but my post got no response there.  I've been struggling
with this for about a week now and need to figure out a solution.  Heck,
this may not even be the best approach to hierarchical structures, but it
sure seemed reasonable when I first read up on the subject.  Anyway...

I created a category table like so (I got the idea from a website somewhere
that used it in a different way, and did not discuss much about addressing
what I want to accomplish):
cat_id(serial) | cat_name(varchar) | parent_id(int) | lineage(varchar) | deep(int)
1              | root_cat_a        | Null           | 1                | 1
2              | sub_cat_1         | 1              | 1-2              | 2
3              | sub_sub_cat_a     | 2              | 1-2-3            | 3

I use this to categorize transactions, and use the most appropriate
subcategory for any given transation item in a table called trans_details.
I can easily show transaction amounts by sub-category  (SELECT cat_name,
sum(amount) AS "amount" FROM category, trans_details WHERE category.cat_id =
trans_details.cat_id):

cat_name                    | amount
----------------------------+--------
Transportation: Auto: Fuel  | $100
Transportation: Auto: Maint | $150
Transportation: Fares: Bus  | $40

but what I cannot figure out is how to create a summary where I show
cat_name|amount
Transportation: Auto |  $250

or, what I *really* want:
cat_name|amount
Transportation |  $290


Can anyone help me work through this?  Frankly, I'm not even sure where to
begin to solve the problem.  I have been trying the WITH RECURSIVE feature,
but I do not understand very well how to apply it.  The example query I have
brings up an empty result set and I don't have a clue how I could modify it
to make it work.  I'm not even sure it's the best method, but it seems like
a reasonable approach.  I thought about using regexp to try and match the
initial part of the lineage to the category_id of the parents, something
like:
WHERE c1.category_id = regexp_matches(c2.lineage, 'nnn-'), but lineage is a
text type column, rather than an int, and would need to be cast.

One of the problems I encounter is that a root category with no
sub-categories (naturally) won't show up in the category_id = parent_id
matches, since such a category has no children.

I found an e-mail from an old thread on this topic on the novice list.  The
author created a very similar table to mine, but talks more about how to
select the child categories, not the root.  And, frankly, his example sql
statements did not bring up the results I would expect.  The issue seems to
be somewhat common - I just don't yet have the experience to understand it
well yet.  :-)

id  info   parent_id  level  node_id
1   Name1  Null       1      1
2   Name2  1          2      2
3   Name3  2          3      3
4   Name4  3          4      4
5   Name5  4          5      5
6   Name5  1          2      6
7   Name6  6          3      7
8   Name7  1          2      8



-- 
D.C. Parris, FMP, LEED AP O+M, ESL Certificate
Minister, Security/FM Coordinator, Free Software Advocate

https://www.xing.com/profile/Don_Parris  |
http://www.linkedin.com/in/dcparris

GPG Key ID: F5E179BE

 



[SQL] intervals

2011-11-03 Thread Edward W. Rouse
I am writing a procedure and am having a little difficulty with an interval.
In the DECLARE section I have:

expire interval := '30 days'::interval;

which seems to work fine. In the BEGIN block I need to pull out the number
of days from a table and update the expire interval to that. But I can't
quite seem to get the replacement correct.

First I use "SELECT value into limit from" to get the new limit value. The
value is text, but I have also tried with "SELECT (value::integer) into limit
from" with the same result. The error occurs here:

expire := '%  days'::interval, limit;

Now I have tried several different ways to get a new interval, such as '' ||
limit || days''::interval; and other various differently quoted variations,
but haven't found the one that works yet. I am still trying various
combinations, but thought that someone on the list has probably done this
already. I may, in fact, be doing it entirely wrong. So if someone knows the
correct/better/easier way to create a dynamic interval I'd appreciate a
clue.

Thanks.


Edward W. Rouse






Re: [SQL] intervals

2011-11-03 Thread Edward W. Rouse
That was part of the problem. Thanks for the info.

> -Original Message-
> From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-
> ow...@postgresql.org] On Behalf Of Jonathan S. Katz
> Sent: Thursday, November 03, 2011 1:41 PM
> To: Edward W. Rouse
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] intervals
> 
> Hi Edward,
> 
> "LIMIT" is a keyword, which is where you are getting your errors.  Did
> you try a different variable name?
> 
> Best,
> 
> Jonathan
> 
> On Nov 3, 2011, at 1:29 PM, Edward W. Rouse wrote:
> 
> > I am writing a procedure and am having a little difficulty with an
> interval.
> > In the DECLARE section I have:
> >
> > expire interval := '30 days'::interval;
> >
> > which seems to work fine. In the BEGIN block I need to pull out the
> number
> > of days from a table and update the expire interval to that. But I
> can't
> > quite seem to get the replacement correct.
> >
> > First I use "SELECT value into limit from" to get the new limit
> value. The
> > value is text, but I have also tried with SELECT (value::integer)
> into limit
> > from" with the same result. The error occurs here:
> >
> > expire := '%  days'::interval, limit;
> >
> > Now I have tried several different ways to get a new interval, such
> as '' ||
> > limit || days''::interval; and other various differently quoted
> variations,
> > but haven't found the one that works yet. I am still trying various
> > combinations, but thought that someone on the list has probably done
> this
> > already. I may, in fact, be doing it entirely wrong. So if someone
> knows the
> > correct/better/easier way to create a dynamic interval I'd appreciate
> a
> > clue.
> >
> > Thanks.
> >
> >
> > Edward W. Rouse
> >
> >
> >
> >
> > --
> > Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-sql
> 
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql




Re: [SQL] intervals

2011-11-03 Thread Edward W. Rouse
In conjunction with Jonathan's suggestion, this has gotten me to the point where it works,
sort of. Now I just need to change it so that it does more than 1 at a time. 
Since it currently isn't in a loop, it affects one and quits. But I needed to 
get it to do that first, putting it in a loop should be the easy part.

Thanks.
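Putting both replies together, the working shape is roughly this (table and
column names invented for illustration; "limit" renamed to lim because LIMIT
is a reserved word):

```sql
DECLARE
    lim    integer;
    expire interval := '30 days';
BEGIN
    -- the stored value is text, so cast it before building the interval
    SELECT value::integer INTO lim
      FROM config
     WHERE name = 'expire_days';

    expire := lim * '1 day'::interval;
    RAISE NOTICE 'expire is now %', expire;
END;
```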

> -Original Message-
> From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-
> ow...@postgresql.org] On Behalf Of bricklen
> Sent: Thursday, November 03, 2011 1:44 PM
> To: Edward W. Rouse
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] intervals
> 
> On Thu, Nov 3, 2011 at 10:29 AM, Edward W. Rouse
>  wrote:
> > expire := '%  days'::interval, limit;
> 
> A couple ways spring to mind immediately. Using 10 as the example:
> 
> expire := 10 * '1 day'::INTERVAL;
> expire := ('10' || ' days')::INTERVAL;
> 
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql




[SQL] must appear in the GROUP BY clause or be used in an aggregate function problem

2012-01-31 Thread Edward W. Rouse
Attempting to get a list of items that we want to be grouped by id and date,
with a detail column.

Table a is a typical user table; id, first and last name are all that I am
using

Table b is a tracking table, combining data from 8+ tables into one for ease
of use for reporting purposes; id, userid, company and invoice_number are
all that I am using

Table c is an exception table; it has relations with the other 2 by both the
tracking and user ids but all I need from it is the exception date

So the data required is the users first and last name, the company, the
invoice number and the exception date. And the expected structure for the
report is:

User Name: Company: Date: Invoice Number

For those invoices that are in the exception table. The sql I can get to
work is:

SELECT MAX(a.name_first) AS name_first, MAX(a.name_last) AS name_last,
b.company, MAX(c.report_date) AS rDate, b.invoicenum
FROM resources a JOIN tracking b ON (a.id=b.resource_id)
JOIN except_detail c ON (b.id = c.tracking_id)
WHERE b.region = 'NE' AND b.state = 1
GROUP BY a.id, b.company, b.invoicenum
ORDER BY name_last, name_first, b.role_name, rDate

And in most cases this works fine. The problem arises when invoices get
added to the exception table due to there not being an invoice number. Even
though we join on the tracking id, the group by on invoicenum lumps the
different blank invoices into a single line, if the same user has more than
1. What we want is for each of the individual blank invoicenum entries to
have a separate line in the result. If I remove b.invoicenum from the group
by then I get the error in the subject line. If I try to use an aggregate
function (like I used MAX on the names) it's even worse. MAX works on the
names because they are all the same. MAX on the date doesn't seem to affect
the results that I can see other than if an invoice went into exception more
than once, and in that case we only want the latest one anyway.

Any hints as to how to get this to not lump all of the empty invoicenums for
a user into a single line? And I have mentioned putting in a dummy value
like the date for an invoicenum, but that works as well as I expected it
would (not at all).

Edward W. Rouse





Re: [SQL] must appear in the GROUP BY clause or be used in an aggregate function problem

2012-01-31 Thread Edward W. Rouse
I would love to remove most of the aggregate functions. I am trying to
update an existing query to provide better data and started with the
existing one (which is where the MAX for the names came from). I originally
tried to remove those, but I get the group by/aggregate function error if I
do. I guess I don't understand enough to know why these are being forced
into that category. I have written queries in the past that had several
columns that were not considered as requiring aggregation. Can someone
enlighten me on why these are being marked as requiring group/aggregation?

 

Something with the way the table are joined perhaps? Can I remove the joins
and put the a.id = c.id in the where clause and get rid of this?

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Samuel Gendler
Sent: Tuesday, January 31, 2012 3:57 PM
To: Edward W. Rouse
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] must appear in the GROUP BY clause or be used in an
aggregate function problem

 

On Tue, Jan 31, 2012 at 12:27 PM, Edward W. Rouse wrote:

And in most cases this works fine. The problem arises when invoices get
added to the exception table due to their not being an invoice number. Even
though we join on the tracking id, the group by on invoicenum lumps the
different blank invoices into a single line, if the same user has more than
1. What we want is for each of the individual blank invoicenum entries to
have a separate line in the result. If I remove b.invoicenum from the group
by then I get the error in the subject line. If I try to use an aggregate
function (like I used MAX on the names) it's even worse. MAX works on the
names because they are all the same. MAX on the date doesn't seem to effect
the results that I can see other than if an invoice went into exception more
than once, and in that case we only want the latest one anyway.

Any hints as to how to get this to not lump all of the empty invoicenums for
a user into a single line? And I have mentioned putting in a dummy value
like the date for an invoicenum, but that works as well as I expected it
would (not at all).

 

It seems like the use of the aggregation functions in your example isn't
absolutely necessary - though perhaps the date one is.  Best solution would
be to get rid of the aggregate columns so that you don't have this issue.

 

If that isn't possible, there are definitely some kludges you can use to get
it to work if you really don't want to union 2 queries together - one with
and one without valid invoice ids. Assuming invoice ids are generated via a
sequence, you can do the following to guarantee a unique 'id' for each empty
invoice - at the expense of incrementing your sequence unnecessarily:

 

COALESCE(invoiceid, nextval('invoiceid_sequence'))

 

A better version of that kludge would be to create a sequence just for this
purpose and set it to a very negative number.  All of your generated fake
ids will then be negative numbers (so easily identified by whatever is
consuming the query results) and you can reset the sequence back to the most
negative possible value whenever you get concerned about running out of ids,
since you won't be using the same sequence as the invoice table itself.
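For illustration, the effect of that kludge can be sketched outside Postgres. Here SQLite's rowid plays the role of the dedicated sequence, and the table and column names are invented; the point is only that giving each NULL a unique surrogate keeps the blank invoices on separate lines:

```python
import sqlite3

# Hypothetical schema: each exception row has a user and a possibly-NULL
# invoice id.  We substitute a unique negative surrogate for each NULL so
# that GROUP BY keeps the blank invoices on separate lines (SQLite's rowid
# stands in for the dedicated Postgres sequence).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE exceptions (id INTEGER PRIMARY KEY, usr TEXT, invoiceid INTEGER);
INSERT INTO exceptions (usr, invoiceid) VALUES
  ('alice', 100), ('alice', NULL), ('alice', NULL), ('bob', 200);
""")
rows = conn.execute("""
    SELECT usr, COALESCE(invoiceid, -id) AS inv, COUNT(*)
    FROM exceptions
    GROUP BY usr, inv
    ORDER BY usr, inv
""").fetchall()
# alice's two NULL invoices stay on separate lines (surrogate ids -2 and -3)
# instead of being lumped into one NULL group.
for r in rows:
    print(r)
```

With plain `GROUP BY usr, invoiceid` the two NULLs would collapse into one line; with the surrogate they stay apart, and the negative sign flags them as fakes to the consumer.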

There are probably lots of variations on that general concept.  You can
write a window function which will remember the ids already seen/generated
for each row and just generate an arbitrary id to be used in place of null
for grouping, though you'd have to worry about using an id that has not yet
appeared but which later does appear.  Assuming you can create a bucket of
space large enough that is guaranteed to not conflict with valid invoice
ids, you could make it work.  I don't know if you can have select max(x),
invoiceid group by func(invoiceid), so you may have to structure it as
"select max(x), func(invoiceid) group by 2" which would require that your
query results can deal with faked invoiceids - negative numbers again
providing a possible way to identify them.  

 

Doesn't postgres allow operator overloading?  Perhaps you could override the
equality operator for that type so that just a single value (the max value,
for example) always compares as unequal to itself and then use
COALESCE(invoiceid, 9223372036854775807).  All rows without a valid value
will have that value, but the overloaded equality operator will cause them
to not group together (I'm guessing.  I haven't tried this). That one makes
me nervous because of the potential for nasty side effects should something
legitimately have that value, but one could probably make the case for
having a bigger problem if a column has a value equal to max bigint.

 

Restructuring the query to separate valid invoice ids from invalid and/or
getting rid of the aggregation does seem like the best solution, though.



Re: [SQL] must appear in the GROUP BY clause or be used in an aggregate function problem

2012-01-31 Thread Edward W. Rouse
NM, I figured it out. The mere presence of an aggregate function and/or the
group by clause is what's causing all the hate for me. I will take a whack
at getting this to work without them. Thanks all.

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Edward W. Rouse
Sent: Tuesday, January 31, 2012 4:27 PM
To: 'Samuel Gendler'
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] must appear in the GROUP BY clause or be used in an
aggregate function problem

 

I would love to remove most of the aggregate functions. I am trying to
update an existing query to provide better data and started with the
existing one (which is where the MAX for the names came from). I originally
tried to remove those, but I get the group by/aggregate function error if I
do. I guess I don't understand enough to know why these are being forced
into that category. I have written queries in the past that had several
columns that were not considered to require aggregation. Can someone
enlighten me on why these are being marked as requiring group/aggregation?

 

Something to do with the way the tables are joined, perhaps? Can I remove the
joins and put the a.id = c.id condition in the where clause and get rid of this?

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Samuel Gendler
Sent: Tuesday, January 31, 2012 3:57 PM
To: Edward W. Rouse
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] must appear in the GROUP BY clause or be used in an
aggregate function problem

 

On Tue, Jan 31, 2012 at 12:27 PM, Edward W. Rouse 
wrote:

And in most cases this works fine. The problem arises when invoices get
added to the exception table due to their not having an invoice number. Even
though we join on the tracking id, the group by on invoicenum lumps the
different blank invoices into a single line if the same user has more than
one. What we want is for each of the individual blank invoicenum entries to
have a separate line in the result. If I remove b.invoicenum from the group
by then I get the error in the subject line. If I try to use an aggregate
function (like I used MAX on the names) it's even worse. MAX works on the
names because they are all the same. MAX on the date doesn't seem to affect
the results that I can see, other than if an invoice went into exception more
than once, and in that case we only want the latest one anyway.

Any hints as to how to get this to not lump all of the empty invoicenums for
a user into a single line? And I have mentioned putting in a dummy value
like the date for an invoicenum, but that works as well as I expected it
would (not at all).

 

It seems like the use of the aggregation functions in your example isn't
absolutely necessary - though perhaps the date one is.  Best solution would
be to get rid of the aggregate columns so that you don't have this issue.

 

If that isn't possible, there are definitely some kludges you can use to get
it to work if you really don't want to union 2 queries together - one with
and one without valid invoice ids. Assuming invoice ids are generated via a
sequence, you can do the following to guarantee a unique 'id' for each empty
invoice - at the expense of incrementing your sequence unnecessarily:

 

COALESCE(invoiceid, nextval('invoiceid_sequence'))

 

A better version of that kludge would be to create a sequence just for this
purpose and set it to a very negative number.  All of your generated fake
ids will then be negative numbers (so easily identified by whatever is
consuming the query results) and you can reset the sequence back to the most
negative possible value whenever you get concerned about running out of ids,
since you won't be using the same sequence as the invoice table itself.

There are probably lots of variations on that general concept.  You can
write a window function which will remember the ids already seen/generated
for each row and just generate an arbitrary id to be used in place of null
for grouping, though you'd have to worry about using an id that has not yet
appeared but which later does appear.  Assuming you can create a bucket of
space large enough that is guaranteed to not conflict with valid invoice
ids, you could make it work.  I don't know if you can have select max(x),
invoiceid group by func(invoiceid), so you may have to structure it as
"select max(x), func(invoiceid) group by 2" which would require that your
query results can deal with faked invoiceids - negative numbers again
providing a possible way to identify them.  

 

Doesn't postgres allow operator overloading?  Perhaps you could override the
equality operator for that type so that just a single value (the max value,
for example) always compares as unequal to itself and then use
COALESCE(invoiceid, 9223372036854775807).  All rows without a valid value
will have that value, but the overloaded equality operator will cause them
to n

Re: [SQL] must appear in the GROUP BY clause or be used in an aggregate function problem

2012-01-31 Thread Edward W. Rouse
Yeah, that's what I did. No more group by, the MAX is in a subquery. One of
the reasons I hate coming behind someone else and updating their stuff is
that too many people try to get cute with the code. I try to make everything
as plain and simple as possible unless performance issues require otherwise.
My code is boring, but easy to understand and maintain ;)

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Samuel Gendler
Sent: Tuesday, January 31, 2012 4:52 PM
To: Edward W. Rouse
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] must appear in the GROUP BY clause or be used in an
aggregate function problem

 

 

On Tue, Jan 31, 2012 at 1:43 PM, Edward W. Rouse 
wrote:

NM, I figured it out. The mere presence of an aggregate function and/or the
group by clause is what's causing all the hate for me. I will take a whack
at getting this to work without them. Thanks all.

 

All columns that are not in an aggregate function MUST be in the group by
clause if there is any column in an aggregate function.  If you can get rid
of all aggregation, then you won't have to have the group by, either.  You
could use a correlated subquery to get the most recent report date, rather
than using max.  That would allow you to ditch all of the other aggregation,
I suspect.  And unless there is any chance of different versions of the name
fields for a given id, it is harmless to drop the MAX() function call
and add them to the group by clause.  You get the same effect.
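The correlated-subquery idea mentioned above can be sketched like this (SQLite via Python stands in for Postgres, and all table and column names are invented): the most recent date is fetched per invoice with no GROUP BY at all.

```python
import sqlite3

# Instead of MAX(reported) plus a GROUP BY over every other column, a
# correlated subquery fetches the latest exception date per invoice row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoices (invoiceid INTEGER, usr TEXT);
CREATE TABLE exceptions (invoiceid INTEGER, reported TEXT);
INSERT INTO invoices VALUES (1, 'alice'), (2, 'bob');
INSERT INTO exceptions VALUES
  (1, '2012-01-10'), (1, '2012-01-30'), (2, '2012-01-15');
""")
rows = conn.execute("""
    SELECT i.invoiceid, i.usr,
           (SELECT MAX(e.reported)
              FROM exceptions e
             WHERE e.invoiceid = i.invoiceid) AS latest
    FROM invoices i
    ORDER BY i.invoiceid
""").fetchall()
```

Because the aggregation is confined to the subquery, the outer select can list as many columns as it likes without any of them having to appear in a GROUP BY clause.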

 

 

 

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Edward W. Rouse
Sent: Tuesday, January 31, 2012 4:27 PM
To: 'Samuel Gendler'


Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] must appear in the GROUP BY clause or be used in an
aggregate function problem

 

I would love to remove most of the aggregate functions. I am trying to
update an existing query to provide better data and started with the
existing one (which is where the MAX for the names came from). I originally
tried to remove those, but I get the group by/aggregate function error if I
do. I guess I don't understand enough to know why these are being forced
into that category. I have written queries in the past that had several
columns that were not considered to require aggregation. Can someone
enlighten me on why these are being marked as requiring group/aggregation?

 

Something to do with the way the tables are joined, perhaps? Can I remove the
joins and put the a.id = c.id condition in the where clause and get rid of this?

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Samuel Gendler
Sent: Tuesday, January 31, 2012 3:57 PM
To: Edward W. Rouse
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] must appear in the GROUP BY clause or be used in an
aggregate function problem

 

On Tue, Jan 31, 2012 at 12:27 PM, Edward W. Rouse 
wrote:

And in most cases this works fine. The problem arises when invoices get
added to the exception table due to their not having an invoice number. Even
though we join on the tracking id, the group by on invoicenum lumps the
different blank invoices into a single line if the same user has more than
one. What we want is for each of the individual blank invoicenum entries to
have a separate line in the result. If I remove b.invoicenum from the group
by then I get the error in the subject line. If I try to use an aggregate
function (like I used MAX on the names) it's even worse. MAX works on the
names because they are all the same. MAX on the date doesn't seem to affect
the results that I can see, other than if an invoice went into exception more
than once, and in that case we only want the latest one anyway.

Any hints as to how to get this to not lump all of the empty invoicenums for
a user into a single line? And I have mentioned putting in a dummy value
like the date for an invoicenum, but that works as well as I expected it
would (not at all).

 

It seems like the use of the aggregation functions in your example isn't
absolutely necessary - though perhaps the date one is.  Best solution would
be to get rid of the aggregate columns so that you don't have this issue.

 

If that isn't possible, there are definitely some kludges you can use to get
it to work if you really don't want to union 2 queries together - one with
and one without valid invoice ids. Assuming invoice ids are generated via a
sequence, you can do the following to guarantee a unique 'id' for each empty
invoice - at the expense of incrementing your sequence unnecessarily:

 

COALESCE(invoiceid, nextval('invoiceid_sequence'))

 

A better version of that kludge would be to create a sequence just for this
purpose and set it to a very negative number.  All of your generated fake
ids will then be negative numbers (so easily identified by whatever is
consuming the query results) and you can reset the sequence bac

[SQL] time interval math

2012-02-08 Thread Edward W. Rouse
I'm still working on getting this to work, but the summary is this:

I am getting several (many) intervals of hours, minutes and seconds. I need a
sum of the absolute values of these intervals, similar to the SUM(ABS())
function for numbers; and I need to divide this sum by an integer (bigint).
Getting the intervals is no problem, but I can't find built in functions for
the rest. Currently on 8.3, want to upgrade to 9.x but I can't until this is
finished.

Do these functions exist, or will I be forced to convert to seconds, do the
math and then convert back to hour-minute-second format (I am assuming from
current data that, after the divide, the result should be in the minute:
second range).

Edward W. Rouse
Comsquared System, Inc.
770-734-5301




-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] time interval math

2012-02-08 Thread Edward W. Rouse
Hehe, thanks, I played around and ended up with this:

round(SUM(extract('epoch' from (time_out - time_in))))

I will have to do the division outside of the query, but that's really a
minor issue. Knowing the total in seconds was the big roadblock. And
converting back is easier (a / 3600 + ":" + (a / 60) % 60 + ":" + a % 60)
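The convert-back step can be sketched as plain integer arithmetic; note the minutes term takes a modulo ((a / 60) % 60, not a bare a / 60), since a / 60 alone counts total minutes, not the minutes within the hour:

```python
# Convert a (possibly negative) total number of seconds back to H:M:S,
# the step done outside the query.  Pure integer arithmetic, no Postgres
# required; divmod splits off hours, then minutes-within-the-hour.
def seconds_to_hms(a: int) -> str:
    hours, rest = divmod(abs(a), 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"
```

For example, 3725 seconds comes back as "01:02:05" rather than the "1:62:5" a missing modulo would produce.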

> -Original Message-
> From: Steve Crawford [mailto:scrawf...@pinpointresearch.com]
> Sent: Wednesday, February 08, 2012 3:26 PM
> To: Edward W. Rouse
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] time interval math
> 
> On 02/08/2012 12:01 PM, Edward W. Rouse wrote:
> > I'm still working on getting this to work, but the summary is this:
> >
> > I am getting several (many) intervals of hours, minutes and seconds. I
> need a
> > sum of the absolute values of these intervals, similar to the SUM(ABS())
> > function for numbers; and I need to divide this sum by an integer
> (bigint).
> > Getting the intervals is no problem, but I can't find built in
> functions for
> > the rest. Currently on 8.3, want to upgrade to 9.x but I can't until
> this is
> > finished.
> >
> > Do these functions exist, or will I be forced to convert to seconds,
> do the
> > math and then convert back to hour-minute-second format (I am
> assuming from
> > current data that, after the divide, the result should be in the
> minute:
> > second range).
> 
> You will have to do some work on your own.
> 
> Time and intervals are tricky beasts and depend on the oddities of
> daylight saving rules. Even though you are only using
> hours/minutes/seconds the interval type also supports days and months.
> A
> day interval is probably 24 hours but could be 23 or 25 if it crosses a
> DST boundary. Months have different numbers of days. You have
> situations
> where adding and subtracting a month does not give the original date:
> 
> select '2011-03-31'::date - '1 month'::interval + '1 month'::interval;
>?column?
> -
>   2011-03-28 00:00:00
> 
> There is no abs(interval) function but, if you know that all your
> intervals are basic H:M:S and that you won't have any difficulty due to
> problems similar to the above you can mimic it with:
> ...case when myinterval < '0'::interval then '0'::interval - myinterval
> else myinterval end...
> 
> You are even allowed to sum that and divide it (though I suspect there
> are some interesting corner-cases waiting to be discovered):
> ...sum(case when myinterval < '0'::interval then '0'::interval -
> myinterval else myinterval end)/2...
> 
> Before you upgrade, be sure to read the release notes and test your
> calculations. The way intervals are handled, especially regarding
> intervals across DST boundaries, have changed over time. IIRC most of
> those changes were pre-8.3 but haven't looked recently.
> 
> Cheers,
> Steve




Re: [SQL] time interval math

2012-02-09 Thread Edward W. Rouse


> -Original Message-
> From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-
> ow...@postgresql.org] On Behalf Of Jasen Betts
> Sent: Thursday, February 09, 2012 6:37 AM
> To: pgsql-sql@postgresql.org
> Subject: Re: [SQL] time interval math
> 
> On 2012-02-08, Edward W. Rouse  wrote:
> > I'm still working on getting this to work, but the summary is this:
> >
> > I am getting several (many) intervals of hours, minutes and seconds. I
> need a
> > sum of the absolute values of these intervals, similar to the SUM(ABS())
> > function for numbers; and I need to divide this sum by an integer
> (bigint).
> > Getting the intervals is no problem, but I can't find built in
> functions for
> > the rest. Currently on 8.3, want to upgrade to 9.x but I can't until
> this is
> > finished.
> 
> the operation abs() is meaningless on the type interval
> eg: what is abs( '1 month - 32 days + 24 hours'::interval )

If you need to add 30 intervals together, then +- is not meaningless. 

> 
> however, since all your intervals are in seconds (postgres pretends
> that all
> hours are 3600 seconds long) converting to seconds is probably the
> best way to go.
> 
> > Do these functions exist, or will I be forced to convert to seconds,
> do the
> > math and then convert back to hour-minute-second format (I am
> assuming from
> > current data that, after the divide, the result should be in the
> minute:
> > second range).
> 
> Yeah, you need to do that, it's not hard,
> 
> select ( sum(abs(extract('epoch' from YOUR_INTERVAL ))) / SOMETHING ) *
> '1s'::interval
> from ...
> 
> --
> ⚂⚃ 100% natural
> 
> 
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql




[SQL] pg_dump - 8.3 - schemas

2012-02-16 Thread Edward W. Rouse
I am working with a database set up by someone else. They set it up so that
most tables are in the public schema, but they also have a reports schema;
database.public.tables and database.reports.tables.

If I do a pg_dump of the database, I only get the public schema. If I do a
dump with --schema=reports, I only get the reports schema. Is there a way to
get all the schemas from a single pg_dump or am I forced to use separate
ones? This is also for future issues where there may be more than 2.

Thanks

Edward W. Rouse




Re: [SQL] pg_dump - 8.3 - schemas

2012-02-16 Thread Edward W. Rouse
To answer the second question first, yes; both as the same user.

pg_dump -v -f $bkfile -F c -U $USER $DATABASE



> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@gmail.com]
> Sent: Thursday, February 16, 2012 2:22 PM
> To: Edward W. Rouse
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] pg_dump - 8.3 - schemas
> 
> On 02/16/2012 10:59 AM, Edward W. Rouse wrote:
> > I am working with a database set up by someone else. They set it up
> so that
> > most tables are in the public schema, but they also have a reports
> schema;
> > database.public.tables and database.reports.tables.
> >
> > If I do a pg_dump of the database, I only get the public schema. If I
> do a
> > dump with --schema=reports, I only get the reports schema. Is there a
> way to
> > get all the schemas from a single pg_dump or am I forced to use
> separate
> > ones? This is also for future issues where there may be more than 2.
> 
> The pg_dump should work.
> What is the exact command line statement you are using?
> Are doing both dumps as the same user?
> 
> >
> > Thanks
> >
> > Edward W. Rouse
> >
> >
> 
> 
> --
> Adrian Klaver
> adrian.kla...@gmail.com




Re: [SQL] pg_dump - 8.3 - schemas

2012-02-16 Thread Edward W. Rouse
Well, when I do a restore using the created file, reports isn't there. i.e.
the select from reports.table gives an error and, from psql, \l doesn't list
it and \dn doesn't show it. But that all does work on the original database.

> -Original Message-
> From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-
> ow...@postgresql.org] On Behalf Of Adrian Klaver
> Sent: Thursday, February 16, 2012 2:39 PM
> To: Edward W. Rouse
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] pg_dump - 8.3 - schemas
> 
> On 02/16/2012 11:31 AM, Edward W. Rouse wrote:
> > To answer the second question first, yes; both as the same user.
> >
> > pg_dump -v -f $bkfile -F c -U $USER $DATABASE
> 
> 
> So how are you determining that only the public schema is being dumped?
> One thing to check is the search_path setting in postgresql.conf. This
> can create the illusion that only one schema is available in a
> database.
> One way to check is to use the fully qualified name for a table you
> know
> to be in the reports schema. Ex:
> 
> select * from reports.some_table;
> 
> >
> >
> >
> >
> --
> Adrian Klaver
> adrian.kla...@gmail.com
> 
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql




[SQL] ignore case in where clause

2012-03-22 Thread Edward W. Rouse
I am currently using lower(column) = '' for matching case insensitive. I
know that there are ways to do this with regular expressions too. I recently
noticed that including even one lower() causes severe performance issues (from
290 ms to over 80,000 ms).

What is the best way, performance wise, to do case insensitive matching? I
could spend a few hours testing if I have to, but I'm hoping someone knows
off the top of their heads.

Using 8.3 currently. I am a programmer, not a database person; but we don't
really have a DBA here, so I do what I can.

Thanks.

Edward W. Rouse
Comsquared System, Inc.
770-734-5301






Re: [SQL] ignore case in where clause

2012-03-22 Thread Edward W. Rouse
That was exactly it, Thanks.
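The fix that worked here (an expression index on lower(column), per the reply quoted below) can be sketched in SQLite, which also supports expression indexes; the table and column names are invented:

```python
import sqlite3

# An expression index on lower(name) lets case-insensitive equality
# comparisons be satisfied from the index instead of a sequential scan.
# The Postgres equivalent: CREATE INDEX ... ON customers (lower(name));
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (name TEXT);
INSERT INTO customers VALUES ('Rouse'), ('ROUSE'), ('Smith');
CREATE INDEX customers_name_lower ON customers (lower(name));
""")
rows = conn.execute(
    "SELECT name FROM customers WHERE lower(name) = lower(?)", ("rouse",)
).fetchall()
```

The key point is that the query's expression (lower(name)) must match the indexed expression exactly for the planner to use the index.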

> -Original Message-
> From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-
> ow...@postgresql.org] On Behalf Of Guillaume Lelarge
> Sent: Thursday, March 22, 2012 4:39 PM
> To: Edward W. Rouse
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] ignore case in where clause
> 
> On Thu, 2012-03-22 at 16:26 -0400, Edward W. Rouse wrote:
> > I am currently using lower(column) = '' for matching case
> insensitive. I
> > know that there are ways to do this with regular expressions too. I
> recently
> > noticed that including even one lower causes severe performance
> issues (from
> > 290ms to over 80Kms).
> >
> 
> Probably because it cannot use the index anymore. Try creating an index
> on lower(column), and see if it helps.
> 
> 
> --
> Guillaume
> http://blog.guillaume.lelarge.info
> http://www.dalibo.com
> 
> 
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql




[SQL] copy users/groups

2013-01-08 Thread Edward W. Rouse
We have a database that was updated from 7.4.1 to 9.1.3. I've gotten
everything working, but now it seems that the users and groups weren't
restored. Probably because they weren't backed up. Is there a way to dump
just the users and groups, plus the passwords and permissions, and restore
them without overwriting what's been manually added to the new database?


Edward W. Rouse





Re: [SQL] removing duplicates and using sort

2013-09-16 Thread Edward W. Rouse
Change the order by to order by lastname, firstname, refid, appldate
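A side note: DISTINCT ON (refid) requires refid to lead the ORDER BY, so the name sort usually goes in an outer query wrapped around the deduplication. SQLite has no DISTINCT ON, but the same wrap-and-sort shape can be sketched with a window function (sample data invented from the thread's column names):

```python
import sqlite3

# Pick the newest appldate per refid in an inner query, then sort the
# survivors by lastname, firstname in the outer query - the same shape as
# wrapping a Postgres DISTINCT ON (refid) ... ORDER BY refid, appldate DESC
# subquery and ordering outside it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE appl (id INTEGER, refid INTEGER, lastname TEXT,
                   firstname TEXT, appldate TEXT);
INSERT INTO appl VALUES
  (1, 10, 'Williamson', 'Dan',  '2013-01-01'),
  (2, 10, 'Williamson', 'Dan',  '2013-06-01'),
  (3, 11, 'Williamson', 'Dave', '2013-03-01');
""")
rows = conn.execute("""
    SELECT id, refid, lastname, firstname, appldate FROM (
        SELECT appl.*,
               ROW_NUMBER() OVER (PARTITION BY refid
                                  ORDER BY appldate DESC) AS rn
        FROM appl
        WHERE lastname LIKE 'Williamson%' AND firstname LIKE 'd%'
    ) AS t
    WHERE rn = 1
    ORDER BY lastname, firstname
""").fetchall()
```

(SQLite's LIKE is case-insensitive for ASCII by default, standing in for ILIKE; window functions need SQLite 3.25 or later.)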

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Nathan Mailg
Sent: Saturday, September 14, 2013 10:36 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] removing duplicates and using sort

 

I'm using 8.4.17 and I have the following query working, but it's not quite
what I need:

 

SELECT DISTINCT ON (refid) id, refid, lastname, firstname, appldate
FROM appl WHERE lastname ILIKE 'Williamson%' AND firstname ILIKE
'd%'
GROUP BY refid, id, lastname, firstname, appldate ORDER BY refid,
appldate DESC;

 

I worked on this awhile and this is as close as I could get. So this returns rows
as you'd expect, except I need to somehow modify this query so it returns
the rows ordered by lastname, then firstname.

 

I'm using distinct so I get rid of duplicates in the table where refid (an
integer) is used as the common id that ties like records together. In other
words, I'm using it to get only the most recent appldate (a date) for each
group of refid's that match the lastname, firstname where clause.

 

I just need the rows returned from the query above to be sorted by lastname,
then firstname.

 

Hope I explained this well enough. Please let me know if you need more info.

 

Thanks!



[SQL] problem with join

2007-02-14 Thread Edward W. Rouse
I have 2 tables that look like this:
 
table1              table2
------------------  --------------------------
value1 | value2     value1 | value3 | value4
------------------  --------------------------
one    | a          one    | a      | jim
one    | b          one    | d      | bob
one    | c          two    | d      | bill
many   | d          two    | f      | sue
many   | e          three  | d      | mary
some   | f          three  | f      | jane
 
my query is like this:
 
select b.value1, value2, count(value4)
from table1 as a join table2 as b on (a.value2 = b.value3)
where (a.value1 in ('one', 'two', 'three', 'many') or a.value2 = 'f')
and b.value1 in ('one', 'two', 'three')
group by b.value1, value2 
order by b.value1, value2;
 
I get back correct results EXCEPT I don't get back empty counts.
 
The results that I need from the tables above should look like this:
 
one, a, 1
one, b, 0
one, c, 0
one, d, 1
one, e, 0
one, f, 0
two, d, 1 
two, e, 0 
two, f, 1
three, d, 1 
three, e, 0 
three, f, 1
 
What I get are all of the rows with counts, but none of the rows where
the count would be 0. I understand why this query works that way, but I
need to find a way to resolve the many-to-one relations that are 
backwards in this case. Can anyone come up with a query that will 
include the results that have counts of 0?
 
I tried using coalesce(count(value4)) and case count(value4) = 0
with no luck. I tried left and right joins and the right join gave me 
the same results while the left join gave me rows like:
 
many, d, 3
 
I'm beginning to wonder if this is even possible. 
 
 
 
Ed
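One general pattern for keeping the zero-count rows is to build the full grid of combinations with a CROSS JOIN and then LEFT JOIN the detail table, counting one of its columns. A sketch with this thread's sample data (SQLite via Python); note it produces every combination (18 rows here), which may still need filtering to match the exact output wanted above:

```python
import sqlite3

# Build every (value1, value2) combination first, then LEFT JOIN the detail
# rows onto the grid.  COUNT(t.value4) counts only matched rows, so
# combinations with no detail rows come back as 0 instead of vanishing.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (value1 TEXT, value2 TEXT);
CREATE TABLE table2 (value1 TEXT, value3 TEXT, value4 TEXT);
INSERT INTO table1 VALUES ('one','a'),('one','b'),('one','c'),
                          ('many','d'),('many','e'),('some','f');
INSERT INTO table2 VALUES ('one','a','jim'),('one','d','bob'),
                          ('two','d','bill'),('two','f','sue'),
                          ('three','d','mary'),('three','f','jane');
""")
rows = conn.execute("""
    SELECT g.v1, g.v2, COUNT(t.value4)
    FROM (SELECT DISTINCT b.value1 AS v1, a.value2 AS v2
          FROM table2 b CROSS JOIN table1 a) AS g
    LEFT JOIN table2 t ON t.value1 = g.v1 AND t.value3 = g.v2
    GROUP BY g.v1, g.v2
    ORDER BY g.v1, g.v2
""").fetchall()
```

The same shape works in Postgres; the filters from the original where clause can be pushed into the grid subquery so only the wanted combinations are generated.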


[SQL] group by day

2007-05-24 Thread Edward W. Rouse
I have an audit table from which I am trying to get a count of the number of distinct 
entries per day, keyed by the external table key field. I
can do a 
 
select count(distinct(id)) from audit where timestamp >= '01-may-2007'
 
and get a total count. What I need is a way to group on each day and get a 
count per day such that the result would be something
like
 
date         count
01-may-2007    107
02-may-2007    215
03-may-2007     96
04-may-2007      0
 
 
I would prefer the 0 entries be included but can live without them. Thanks.
 
Oh, postgres 7.4 by the way.
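The per-day grouping can be sketched like this (SQLite via Python; in Postgres 7.4 the bucket expression would be date_trunc('day', "timestamp") or a cast to date, and the zero-count days would additionally need a calendar table to LEFT JOIN against):

```python
import sqlite3

# Bucket audit rows by calendar day and count distinct external ids per day.
# SQLite's date() plays the role of Postgres's date_trunc('day', ...).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit (id INTEGER, ts TEXT);
INSERT INTO audit VALUES
  (1, '2007-05-01 09:00'), (1, '2007-05-01 10:00'),
  (2, '2007-05-01 11:00'), (3, '2007-05-02 08:00');
""")
rows = conn.execute("""
    SELECT date(ts) AS day, COUNT(DISTINCT id)
    FROM audit
    WHERE ts >= '2007-05-01'
    GROUP BY day
    ORDER BY day
""").fetchall()
```

Days with no rows simply don't appear, matching the "can live without the 0 entries" caveat; including them requires joining against a table of all dates in the range.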
 
Edward W. Rouse

ComSquared Systems, Inc.

770-734-5301

 

 


Re: [SQL] duplicate key violates unique constraint

2008-02-26 Thread Edward W. Rouse
Could it be that the insert statement itself is the problem? What does the 
table look like?
 
 
Edward W. Rouse

 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Bart Degryse
Sent: Tuesday, February 26, 2008 11:35 AM
To: pgsql-sql@postgresql.org; Shavonne Marietta Wijesinghe
Subject: Re: [SQL] duplicate key violates unique constraint


Shavonne,
You will probably always find someone on the list who will answer your 
questions, but you really should read the manual too!
In this case you could have found the answer by reading
http://www.postgresql.org/docs/8.2/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

>>> "Shavonne Marietta Wijesinghe" <[EMAIL PROTECTED]> 2008-02-26 17:05 >>>
Thank you. I tried as you said. But I get an ERROR:  syntax error at or near 
"INSERT" at character 9

BEGIN
INSERT INTO my_shevi values ('a', 4, 2, 2);
EXCEPTION
WHEN unique_violation THEN
INSERT INTO my_shevi values ('a', 5, 2, 2);
END;

I don't see anything wrong with the code..

Shavonne

- Original Message - 
From: "Teemu Torma" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, February 26, 2008 4:17 PM
Subject: Re: [SQL] duplicate key violates unique constraint


> On Tuesday 26 February 2008, Shavonne Marietta Wijesinghe wrote:
>> During an "INSERT INTO" I get an "Error - duplicate key violates
>> unique constraint"
>>
>> Is there any way, that i can test the error. Something like this??
>>
>> IF error = "duplicate key violates unique constraint" then
>>   do something
>> else
>>   insert into
>> end if
>
> insert into ...;
> exception when unique_violation then
>  do something;
>
> Teemu
>
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings 


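For the record, a bare BEGIN ... EXCEPTION block like the one above is only valid inside a PL/pgSQL function body, not as a top-level SQL statement, which is what produces the syntax error at "INSERT". The catch-and-retry logic itself can be sketched in Python against SQLite, with IntegrityError standing in for unique_violation:

```python
import sqlite3

# Mirror of the EXCEPTION WHEN unique_violation branch: try the insert,
# and on a unique-key violation fall back to the alternate row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_shevi (k TEXT, n INTEGER, a INTEGER, b INTEGER,"
             " PRIMARY KEY (k, n))")
conn.execute("INSERT INTO my_shevi VALUES ('a', 4, 2, 2)")

def insert_with_retry(conn, row, retry_row):
    try:
        conn.execute("INSERT INTO my_shevi VALUES (?, ?, ?, ?)", row)
    except sqlite3.IntegrityError:
        # Duplicate key: insert the fallback row instead.
        conn.execute("INSERT INTO my_shevi VALUES (?, ?, ?, ?)", retry_row)

insert_with_retry(conn, ('a', 4, 2, 2), ('a', 5, 2, 2))
count = conn.execute("SELECT COUNT(*) FROM my_shevi").fetchone()[0]
```

In Postgres the same logic would live inside a PL/pgSQL function (CREATE FUNCTION ... LANGUAGE plpgsql) so the BEGIN/EXCEPTION/END block has somewhere legal to sit.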




[SQL] design resource

2008-06-05 Thread Edward W. Rouse
I was wondering if there were any resources that have some table designs for 
common problems. Since that isn't very clear I will
give an example.
 
We have an internal app from years back that needs to be updated. One of the 
problems is that when it was originally created, the
company only had US customers. We now have international customers and need to 
support international addresses and phone numbers.
For the phone numbers that means adding a new column for international code or 
expanding the data field so that it's big enough to
hold the international prefix (still not sure which approach is best). But I 
haven't a clue as to how to set up for international
addresses. 
 
So I was hoping there would be a resource that I could check where these kinds 
of data sets have been 'solved' to ease the effort. I
have several books on design patterns for programming but I've not seen a 
design patterns book for common database problems. Thanks.
 
Edward W. Rouse

ComSquared Systems, Inc.

770-734-5301

 

 


[SQL] Join question

2008-08-15 Thread Edward W. Rouse
I have 2 tables, both have a user column. I am currently using a left join
from table a to table b because I need to show all users from table a even
those not having an entry in table b. The problem is I also have to include
items from table b that have a null user. There are some other criteria
as well that are simple where clause filters. So as an example:

 

Table a:

Org|user

A| emp1

B| emp1

B| emp2

B| emp3

C| emp2

 

Table b:

Org|user|color

A   |emp1|red

A   |emp1|blue

A   |null|pink

A   |null|orange

B   |emp1|red

B   |emp3|red

B   |null|silver

C   |emp2|avocado

 

If I:

 

select org, user, count(total)

from a left join b

on (a.org = b.org and a.user = b.user)

where a.org = 'A'

group by a.org, a.user

order by a.org, a.user

 

I get:

 

Org|user|count

A|emp1|2

A|emp2|0

A|emp3|0

 

But what I need is:

 

A|emp1|2

A|emp2|0

A|emp3|0

A|null|2

 

Thanks,

Edward W. Rouse



Re: [SQL] Join question

2008-08-15 Thread Edward W. Rouse
Sigh, I messed up the tables a bit when I typed the example: org A was
supposed to have entries for all 3 users in table a, just like org B does,
not just the one. Sorry for the confusion.

 

 

Edward W. Rouse

 

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Edward W. Rouse
Sent: Friday, August 15, 2008 12:48 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] Join question

 

I have 2 tables, both have a user column. I am currently using a left join
from table a to table b because I need to show all users from table a even
those not having an entry in table b. The problem is I also have to include
items from table b that have a null user. There are some other criteria
as well that are simple where clause filters. So as an example:

 

Table a:

Org|user

A| emp1

B| emp1

B| emp2

B| emp3

C| emp2

 

Table b:

Org|user|color

A   |emp1|red

A   |emp1|blue

A   |null|pink

A   |null|orange

B   |emp1|red

B   |emp3|red

B   |null|silver

C   |emp2|avocado

 

If I:

 

select org, user, count(total)

from a left join b

on (a.org = b.org and a.user = b.user)

where a.org = 'A'

group by a.org, a.user

order by a.org, a.user

 

I get:

 

Org|user|count

A|emp1|2

A|emp2|0

A|emp3|0

 

But what I need is:

 

A|emp1|2

A|emp2|0

A|emp3|0

A|null|2

 

Thanks,

Edward W. Rouse



Re: [SQL] Join question

2008-08-15 Thread Edward W. Rouse
I did try that, but I can't get both the values from table a with no entries
in table b and the values from table b with null entries to show up. It's
either one or the other.

Edward W. Rouse


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Richard Broersma
Sent: Friday, August 15, 2008 1:10 PM
To: Edward W. Rouse
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Join question

On Fri, Aug 15, 2008 at 9:48 AM, Edward W. Rouse <[EMAIL PROTECTED]>
wrote:

> The problem is I also have to include
> items from table b with that have a null user. There are some other
criteria
> as well that are simple where clause filters. So as an example:

instead of left join try FULL OUTER JOIN.

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug
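Applied to the example tables from the original post, the suggestion might look like the sketch below. The COALESCE calls are an assumption on my part: either side of a full join can be all-NULL, so both the output columns and the org filter have to tolerate that.

```sql
-- Sketch of the FULL OUTER JOIN variant on the thread's tables a and b.
-- "user" is double-quoted because it is a reserved word.
SELECT COALESCE(a.org, b.org)       AS org,
       COALESCE(a."user", b."user") AS "user",
       count(b.color)               AS total
FROM a
FULL OUTER JOIN b
  ON (a.org = b.org AND a."user" = b."user")
WHERE COALESCE(a.org, b.org) = 'A'
GROUP BY 1, 2
ORDER BY 1, 2;
```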

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql




Re: [SQL] Join question

2008-08-19 Thread Edward W. Rouse
I thought of that, but it does violate table constraints.

 

Edward W. Rouse

 

From: Oliveiros Cristina [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 18, 2008 2:00 PM
To: pgsql-sql@postgresql.org; [EMAIL PROTECTED]
Subject: Re: [SQL] Join question

 

I don't understand your count(total) expression... 

It doesn't work, because apparently you don't have any "total" column...

Apparently, you meant count(color)

 

The problem is that you are grouping by a.org, a.user, and in table "a" you 
actually don't have any "null" users...

 

Well, if it is to include "null" users, a quick and dirty solution I can think 
of would be to add a "dummy" null user to every different org in table a and then

substitute your LEFT OUTER JOIN condition by this one :

 

from a left  join b

 

on (a.org = b.org and (a.user = b.user OR (a.user is null and b.user is null )))

 

 

Now, I don't know if "null" users in table "a" will violate any constraints you 
may have (e.g. NOT NULL) ...

 

I know this is not a very elegant solution, but it seems to give the results you 
need.

 

Best,

Oliveiros

- Original Message - 

From: Daniel Hernandez <mailto:[EMAIL PROTECTED]>  

To: pgsql-sql@postgresql.org ; [EMAIL PROTECTED] 

Sent: Monday, August 18, 2008 5:30 PM

Subject: Re: [SQL] Join question

 

have you tried a right Join?


Daniel Hernández.
San Diego, CA.
"The more you learn, the more you earn".
Fax: (808) 442-0427


-Original Message-
From: "Edward W. Rouse" [EMAIL PROTECTED]
Date: 08/15/2008 09:48 AM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Join question




I have 2 tables, both have a user column. I am currently using a left join from 
table a to table b because I need to show all users from table a even those not 
having an entry in table b. The problem is I also have to include items from 
table b that have a null user. There are some other criteria as well that 
are simple where clause filters. So as an example:

Table a:

Org|user

A| emp1

B| emp1

B| emp2

B| emp3

C| emp2

Table b:

Org|user|color

A   |emp1|red

A   |emp1|blue

A   |null|pink

A   |null|orange

B   |emp1|red

B   |emp3|red

B   |null|silver

C   |emp2|avocado

If I:

select org, user, count(total)

from a left join b

on (a.org = b.org and a.user = b.user)

where a.org = 'A'

group by a.org, a.user

order by a.org, a.user

I get:

Org|user|count

A|emp1|2

A|emp2|0

A|emp3|0

But what I need is:

A|emp1|2

A|emp2|0

A|emp3|0

A|null|2

Thanks,

Edward W. Rouse



Re: [SQL] Join question

2008-08-19 Thread Edward W. Rouse
I have tried left, right outer and inner. 

 

Edward W. Rouse

 

From: Daniel Hernandez [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 18, 2008 12:30 PM
To: pgsql-sql@postgresql.org; [EMAIL PROTECTED]
Subject: Re: [SQL] Join question

 

have you tried a right Join?


Daniel Hernández.
San Diego, CA.
"The more you learn, the more you earn".
Fax: (808) 442-0427


-Original Message-----
From: "Edward W. Rouse" [EMAIL PROTECTED]
Date: 08/15/2008 09:48 AM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Join question

I have 2 tables, both have a user column. I am currently using a left join from 
table a to table b because I need to show all users from table a even those not 
having an entry in table b. The problem is I also have to include items from 
table b that have a null user. There are some other criteria as well that 
are simple where clause filters. So as an example:

 

Table a:

Org|user

A| emp1

B| emp1

B| emp2

B| emp3

C| emp2

 

Table b:

Org|user|color

A   |emp1|red

A   |emp1|blue

A   |null|pink

A   |null|orange

B   |emp1|red

B   |emp3|red

B   |null|silver

C   |emp2|avocado

 

If I:

 

select org, user, count(total)

from a left join b

on (a.org = b.org and a.user = b.user)

where a.org = 'A'

group by a.org, a.user

order by a.org, a.user

 

I get:

 

Org|user|count

A|emp1|2

A|emp2|0

A|emp3|0

 

But what I need is:

 

A|emp1|2

A|emp2|0

A|emp3|0

A|null|2

 

Thanks,

Edward W. Rouse



Re: [SQL] Join question

2008-08-19 Thread Edward W. Rouse
I was trying to do something like this, but couldn't get it to work. I am 
trying to follow the example you provided, but don't understand how id and oid 
relate to the example tables and which table is pr1 and pr2. Also my data has 
to match 2 constraints, not 1 (though I'm guessing that I could just add the 
other without changing anything else). And you have pr2.dato in the inner 
select but not the outer one. Is there a reason for that?

As of now I am thinking I will have to break this up into more than one 
statement.

Edward W. Rouse


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Lennin Caro
Sent: Tuesday, August 19, 2008 11:59 AM
To: 'Daniel Hernandez'; pgsql-sql@postgresql.org; Edward W. Rouse
Subject: Re: [SQL] Join question




--- On Tue, 8/19/08, Edward W. Rouse <[EMAIL PROTECTED]> wrote:

> From: Edward W. Rouse <[EMAIL PROTECTED]>
> Subject: Re: [SQL] Join question
> To: "'Daniel Hernandez'" <[EMAIL PROTECTED]>, pgsql-sql@postgresql.org
> Date: Tuesday, August 19, 2008, 1:35 PM
> I have tried left, right outer and inner. 
> 
>  
> 
> Edward W. Rouse
> 
>  
> 
> From: Daniel Hernandez [mailto:[EMAIL PROTECTED] 
> Sent: Monday, August 18, 2008 12:30 PM
> To: pgsql-sql@postgresql.org; [EMAIL PROTECTED]
> Subject: Re: [SQL] Join question
> 
>  
> 
> have you tried a right Join?
> 
> 
> Daniel Hernández.
> San Diego, CA.
> "The more you learn, the more you earn".
> Fax: (808) 442-0427
> 
> 
> -Original Message-
> From: "Edward W. Rouse" [EMAIL PROTECTED]
> Date: 08/15/2008 09:48 AM
> To: pgsql-sql@postgresql.org
> Subject: Re: [SQL] Join question
> 
> I have 2 tables, both have a user column. I am currently
> using a left join from table a to table b because I need to
> show all users from table a even those not having an entry
> in table b. The problem is I also have to include items from
> table b with that have a null user. There are some other
> criteria as well that are simple where clause filters. So as
> an example:
> 
>  
> 
> Table a:
> 
> Org|user
> 
> A| emp1
> 
> B| emp1
> 
> B| emp2
> 
> B| emp3
> 
> C| emp2
> 
>  
> 
> Table b:
> 
> Org|user|color
> 
> A   |emp1|red
> 
> A   |emp1|blue
> 
> A   |null|pink
> 
> A   |null|orange
> 
> B   |emp1|red
> 
> B   |emp3|red
> 
> B   |null|silver
> 
> C   |emp2|avacado
> 
>  
> 
> If I:
> 
>  
> 
> select org, user, count(total)
> 
> from a left join b
> 
> on (a.org = b.org and a.user = b.user)
> 
> where a.org = 'A'
> 
> group by a.org, a.user
> 
> order by a.org, a.user
> 
>  
> 
> I get:
> 
>  
> 
> Org|user|count
> 
> A|emp1|2
> 
> A|emp2|0
> 
> A|emp3|0
> 
>  
> 
> But what I need is:
> 
>  
> 
> A|emp1|2
> 
> A|emp2|0
> 
> A|emp3|0
> 
> A|null|2
> 
>  
> 
> Thanks,
> 
> Edward W. Rouse

also like this...


select id1,dato1, count(id2) from
(
select pr1.id as id1,pr1.dato as dato1,pr2.oid as id2,pr2.dato from pr1 right 
outer join pr2 on (pr1.id = pr2.oid)
) a group by id1,dato1







Re: [SQL] Join question

2008-08-19 Thread Edward W. Rouse
Finally got it to work. I used 2 separate selects and a union. So one of the 
selects was like my original left outer joined select and then I unioned it 
with one that got the missed nulls from the other table.
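Reconstructed from that description, the union might look like this (a sketch, not the poster's actual query; the predicates are assumed from the example tables earlier in the thread):

```sql
-- First select: every user in table a, with zero counts preserved by
-- the left join. Second select: table b's NULL-user rows, which the
-- join condition can never match.
SELECT a.org, a."user", count(b.color) AS total
FROM a
LEFT JOIN b ON (a.org = b.org AND a."user" = b."user")
WHERE a.org = 'A'
GROUP BY a.org, a."user"

UNION ALL

SELECT b.org, b."user", count(b.color)
FROM b
WHERE b.org = 'A' AND b."user" IS NULL
GROUP BY b.org, b."user"

ORDER BY 1, 2;
```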

Thanks for all the advice.

Edward W. Rouse


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Edward W. Rouse
Sent: Tuesday, August 19, 2008 2:04 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Join question

I was trying to do something like this, but couldn't get it to work. I am 
trying to follow the example you provided, but don't understand how id and oid 
relate to the example tables and which table is pr1 and pr2. Also my data has 
to match 2 constraints, not 1 (though I'm guessing that I could just add the 
other without changing anything else). And you have pr2.dato in the inner 
select but not the outer one. Is there a reason for that?

As of now I am thinking I will have to break this up into more than one 
statement.

Edward W. Rouse


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Lennin Caro
Sent: Tuesday, August 19, 2008 11:59 AM
To: 'Daniel Hernandez'; pgsql-sql@postgresql.org; Edward W. Rouse
Subject: Re: [SQL] Join question




--- On Tue, 8/19/08, Edward W. Rouse <[EMAIL PROTECTED]> wrote:

> From: Edward W. Rouse <[EMAIL PROTECTED]>
> Subject: Re: [SQL] Join question
> To: "'Daniel Hernandez'" <[EMAIL PROTECTED]>, pgsql-sql@postgresql.org
> Date: Tuesday, August 19, 2008, 1:35 PM
> I have tried left, right outer and inner. 
> 
>  
> 
> Edward W. Rouse
> 
>  
> 
> From: Daniel Hernandez [mailto:[EMAIL PROTECTED] 
> Sent: Monday, August 18, 2008 12:30 PM
> To: pgsql-sql@postgresql.org; [EMAIL PROTECTED]
> Subject: Re: [SQL] Join question
> 
>  
> 
> have you tried a right Join?
> 
> 
> Daniel Hernández.
> San Diego, CA.
> "The more you learn, the more you earn".
> Fax: (808) 442-0427
> 
> 
> -Original Message-
> From: "Edward W. Rouse" [EMAIL PROTECTED]
> Date: 08/15/2008 09:48 AM
> To: pgsql-sql@postgresql.org
> Subject: Re: [SQL] Join question
> 
> I have 2 tables, both have a user column. I am currently
> using a left join from table a to table b because I need to
> show all users from table a even those not having an entry
> in table b. The problem is I also have to include items from
> table b with that have a null user. There are some other
> criteria as well that are simple where clause filters. So as
> an example:
> 
>  
> 
> Table a:
> 
> Org|user
> 
> A| emp1
> 
> B| emp1
> 
> B| emp2
> 
> B| emp3
> 
> C| emp2
> 
>  
> 
> Table b:
> 
> Org|user|color
> 
> A   |emp1|red
> 
> A   |emp1|blue
> 
> A   |null|pink
> 
> A   |null|orange
> 
> B   |emp1|red
> 
> B   |emp3|red
> 
> B   |null|silver
> 
> C   |emp2|avacado
> 
>  
> 
> If I:
> 
>  
> 
> select org, user, count(total)
> 
> from a left join b
> 
> on (a.org = b.org and a.user = b.user)
> 
> where a.org = 'A'
> 
> group by a.org, a.user
> 
> order by a.org, a.user
> 
>  
> 
> I get:
> 
>  
> 
> Org|user|count
> 
> A|emp1|2
> 
> A|emp2|0
> 
> A|emp3|0
> 
>  
> 
> But what I need is:
> 
>  
> 
> A|emp1|2
> 
> A|emp2|0
> 
> A|emp3|0
> 
> A|null|2
> 
>  
> 
> Thanks,
> 
> Edward W. Rouse

also like this...


select id1,dato1, count(id2) from
(
select pr1.id as id1,pr1.dato as dato1,pr2.oid as id2,pr2.dato from pr1 right 
outer join pr2 on (pr1.id = pr2.oid)
) a group by id1,dato1







[SQL] RE: [SQL] Why *no* ambiguous complaint in select part?

2008-08-22 Thread Edward W. Rouse
Just a guess, but it seems to me that since the join is using col1 and col2
there is no ambiguity. They should be the same no matter which table they
come from.
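That matches PostgreSQL's documented behavior: USING (col1, col2) merges each pair of join columns into a single output column, so the unqualified names are unambiguous, whereas the equivalent ON form keeps both tables' copies. A small illustration using the names from the question:

```sql
-- Legal: USING merges the join columns, so col1 and col2 each
-- appear once in the join's output.
SELECT col1, col2
FROM table1
LEFT JOIN table2 USING (col1, col2);

-- The ON form keeps table1.col1 and table2.col1 as distinct columns;
-- the same unqualified select list fails with
--   ERROR:  column reference "col1" is ambiguous
SELECT col1, col2
FROM table1
LEFT JOIN table2 ON (table1.col1 = table2.col1
                 AND table1.col2 = table2.col2);
```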

Edward W. Rouse


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Emi Lu
Sent: Friday, August 22, 2008 4:12 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] Why *no* ambiguous complaint in select part?

Good morning,

Just notice one small thing, and need your information about select

select col1, col2
from table1
left join table2
using (col1, col2)

;

This query never complains about the ambiguous columns col1 and col2 in 
the select part.

My guess is:
(1) col1, col2 always return table1.col1, table1.col2
(2) because using (col1, col2)
 that's why, table name is not necessary in select part

Am I wrong? Please advise?

Thank you!






Re: [SQL] Problem with pg_connect() in PHP

2008-09-26 Thread Edward W. Rouse
Can I assume the missing '."' from the end of PG_PASSWORD is a cut-and-paste 
error?

 

Edward W. Rouse

 

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of James Kitambara
Sent: Friday, September 26, 2008 1:22 AM
To: pgsql-sql
Cc: Kenichiro Arakaki; Ken Arakaki
Subject: [SQL] Problem with pg_connect() in PHP

 


Dear Members of 

I have installed the Apache 2.0.61,  PHP 5.2.4 and PostgreSQL 8.1 on my local 
computer.

All three pieces of software were successfully tested. I changed 
";extension=php_pgsql.dll" to 

"extension=php_pgsql.dll" in the php.ini file in order to enable PostgreSQL 
in PHP.

The problem comes when I try to connect to the PostgreSQL Database using php 
function pg_connect

$dbconn = pg_connect("host=".PG_HOST_NAME." port=".PG_PORT_NUM."  
dbname=".PG_DB_NAME." user=".PG_USER." password=".PG_PASSWORD); 

All the arguments in the function pg_connect() are defined.

Unfortunately I am getting the Fatal error: "Call to undefined function 
pg_connect() in C:\Web\html\Staff_Management\example1.php on line 23"

C:\Web\html is my document root.

What could be the possible mistake? 

Anyone to assist me!

Best regards,

James Kitambara

 



Re: [SQL] Comparing two tables of different database

2009-04-30 Thread Edward W. Rouse
Can't you use this?

 

select name from database2.sr_1 where name not in (select name from
database2.pr_1);

 

My test database VM isn't running so I can't test it, but I seem to remember
that that's how I did it for a few queries of that type. This is assuming
the 2 databases are running on the same machine, like the way there is
template0 as the default and you add additional databases to the same
'instance'. If you are talking about 2 different database servers, then I
have no idea.

 

Edward W. Rouse

 

From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org]
On Behalf Of Nicholas I
Sent: Thursday, April 30, 2009 6:12 AM
To: Joshua Tolley
Cc: Adam Ruth; Pawel Socha; pgsql-sql@postgresql.org
Subject: Re: [SQL] Comparing two tables of different database

 

Hi All, 

  For example, 

There are two databases: database1 and database2.

 database1 has a table called pr_1 with the columns id, name and time.
 database2 has a table called sr_1 with the columns id, name and time.

 I would like to find out the differences, that is, find the names that
are not in sr_1 but are in pr_1.
 We can achieve this with the query:

 select name from sr_1 where name not in (select name from pr_1);
The above query will work in the case of two tables in the same database.


 But the problem is, these two tables are in different databases. I did
not understand the dblink part.

Are there any examples of dblink? Can we do it without using dblink?

-Nicholas I
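For the dblink part of the question, a minimal sketch follows. The connection string, the direction of the comparison (run while connected to database1, pulling sr_1 across), and the column list are all assumptions; the dblink module must be installed in the local database first.

```sql
-- The AS t(name text) alias is required because dblink(text, text)
-- returns SETOF record. Beware that NOT IN behaves surprisingly if
-- the subquery can return NULLs; NOT EXISTS is safer in that case.
SELECT name
FROM pr_1
WHERE name NOT IN (
    SELECT name
    FROM dblink('dbname=database2',
                'SELECT name FROM sr_1') AS t(name text)
);
```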

  

On Thu, Apr 30, 2009 at 9:07 AM, Joshua Tolley  wrote:

On Thu, Apr 30, 2009 at 08:20:02AM +1000, Adam Ruth wrote:
>The simple answer is to pg_dump both tables and compare the output with
>diff.
>Other than that, I think you'll need a custom program.

For all but the strictest definition of "identical", that won't work.
Tables may easily contain the same information, in different on-disk
order, and pg_dump will most likely give the data to you in an order
similar to its ordering on disk.

Something like a COPY (query) TO file, where the query includes an
ORDER BY clause, might give you a suitable result from both tables, on
which you could then take a checksum.

- Josh / eggyknap
