[GENERAL] Convert Simple Query into tsvector & tsquery format.

2011-03-17 Thread Adarsh Sharma

Dear all,

I have a simple query mentioned below :

select count(*) from page_content
where (content like '%Militant%' OR content like '%jihad%'
   OR content like '%Mujahid%' OR content like '%fedayeen%'
   OR content like '%insurgent%' OR content like '%terrorist%'
   OR content like '%cadre%' OR content like '%civilians%'
   OR content like '%police%' OR content like '%defence%'
   OR content like '%cops%' OR content like '%crpf%'
   OR content like '%dsf%' OR content like '%ssb%')
  AND (content like '%kill%' OR content like '%injure%');


I need to convert it into an equivalent form that uses tsvector & tsquery for
full-text searching.


I tried to convert it like this:

SELECT count(*) FROM page_content
WHERE publishing_date LIKE '%2010%'
  AND content_language = 'en'
  AND content IS NOT NULL
  AND isprocessable = 1
  AND to_tsvector('english', content) @@ to_tsquery('english',
      'Mujahid | jihad | Militant | fedayeen | insurgent | terrorist | cadre
       | civilians | police | cops | crpf | defence | dsf | ssb');


But I am not able to add the second condition, AND (content like '%kill%' OR
content like '%injure%'), to the converted query as well.


Please help me add this to the query.
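One way to fold that condition in is to combine the & (AND) and | (OR) operators inside a single tsquery. A sketch (keyword list shortened; names taken from the poster's query, not tested against the actual schema):

```sql
SELECT count(*)
FROM page_content
WHERE to_tsvector('english', content) @@ to_tsquery('english',
      '(Militant | jihad | Mujahid | fedayeen | insurgent) & (kill | injure)');
```

Because to_tsquery stems its input, 'kill' also matches "killed" and "killing", which is close to what LIKE '%kill%' matched.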

Thanks & best Regards,
Adarsh Sharma


Re: [GENERAL] regclass and search_path

2011-03-17 Thread Joe Abbate

Hi Tom,

On 03/18/2011 12:42 AM, Joe Abbate wrote:
For my immediate needs, the query was actually the target of a NOT IN 
subquery of a query against pg_index (trying to exclude tuples of 
indexes for UNIQUE constraints) and I've solved that by using conrelid 
in the subquery (and indrelid in the main query).  Nevertheless, I 
think regclass should probably be smarter and work with anything in 
pg_class (regardless of search_path).


On second thought, conname is just a "name": it is not unique and lacks 
schema/namespace info. As you said, a thinko.


Regards,

Joe

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] regclass and search_path

2011-03-17 Thread Joe Abbate

Hi Tom,

On 03/18/2011 12:17 AM, Tom Lane wrote:

Joe Abbate  writes:

I'm using this to validate a tool I'm building and I get an error on the
following query:



autodoc=>  SELECT conname::regclass FROM pg_constraint
autodoc->WHERE contype = 'u';
ERROR:  relation "product_product_code_key" does not exist


Ummm ... pg_constraint.conname contains a constraint name, not a table
name, so casting it to regclass is highly likely to fail.  This hasn't
got anything to do with search_path AFAICS, it's just a thinko.

Depending on what it is that you're hoping to do, any of conrelid,
confrelid, or conindid might be what you're after.  All of those columns
would contain pg_class OIDs that could usefully be cast to regclass.


Well, the pg_constraint.conname value exists as a relname in pg_class, 
and the query works with constraints that don't cross schemas as 
autodoc's does (or if you add all necessary schemas to your 
search_path). For example,


moviesdb=> alter table film add unique (title);
NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index 
"film_title_key" for table "film"

ALTER TABLE
moviesdb=> SELECT conname::regclass FROM pg_constraint WHERE contype = 'u';
    conname
----------------
 film_title_key
(1 row)

For my immediate needs, the query was actually the target of a NOT IN 
subquery of a query against pg_index (trying to exclude tuples of 
indexes for UNIQUE constraints) and I've solved that by using conrelid 
in the subquery (and indrelid in the main query).  Nevertheless, I think 
regclass should probably be smarter and work with anything in pg_class 
(regardless of search_path).


Regards,

Joe



Re: [GENERAL] Re: [GENERAL] Re: [GENERAL] Different encoding for string values and identifier strings? Or (select 'tést' as tést) returns different values for string and identifier...

2011-03-17 Thread Tom Lane
"Francisco Figueiredo Jr."  writes:
> Would it be possible that Postgresql would be using another encoding
> for the identifiers when they aren't wrapped by double quotes?

No.  I'm betting this is a client-side bug ... but you haven't told us
what the client-side code is.

regards, tom lane



Re: [GENERAL] regclass and search_path

2011-03-17 Thread Tom Lane
Joe Abbate  writes:
> I'm using this to validate a tool I'm building and I get an error on the 
> following query:

> autodoc=> SELECT conname::regclass FROM pg_constraint
> autodoc->   WHERE contype = 'u';
> ERROR:  relation "product_product_code_key" does not exist

Ummm ... pg_constraint.conname contains a constraint name, not a table
name, so casting it to regclass is highly likely to fail.  This hasn't
got anything to do with search_path AFAICS, it's just a thinko.

Depending on what it is that you're hoping to do, any of conrelid,
confrelid, or conindid might be what you're after.  All of those columns
would contain pg_class OIDs that could usefully be cast to regclass.
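For illustration, a sketch of the conrelid approach (the alias is assumed):

```sql
-- Sketch: list each UNIQUE constraint together with the table it belongs to
SELECT conname, conrelid::regclass AS table_name
FROM pg_constraint
WHERE contype = 'u';
```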

regards, tom lane



[GENERAL] why does extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) == 16?

2011-03-17 Thread bubba postgres
Is this the correct behavior? It seems like if I specify the UTC offset it
should be 0, not 16. It seems to be the opposite of the behavior of
extract(epoch).

select extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' ) as defhour,
extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'PST' ) as
psthour, extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone
'utc' ) as utchour, extract ( epoch FROM TIMESTAMP '2010-01-01 00:00:00' at
time zone 'utc' ) as utcepoch;

0,0,16,1262304000




@Test
public void testFoo() {
    TimeZone tz = TimeZone.getTimeZone("GMT");
    GregorianCalendar cal = new GregorianCalendar(tz);
    cal.set(2010, 0, 1, 0, 0, 0);
    cal.set(GregorianCalendar.MILLISECOND, 0);
    System.out.println("" + cal.getTimeInMillis());
    System.out.println("" + String.format("%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS.%1$tL", cal));
    System.out.println("" + cal.get(GregorianCalendar.HOUR_OF_DAY));
}

In Java:
126230400
2010-01-01 00:00:00.000 (UTC)
0
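For reference, this matches the documented AT TIME ZONE semantics: applied to a timestamp *without* time zone, it reinterprets the value as being in the named zone and converts it to the session time zone, which from the output appears to be US Pacific here. A sketch (assumes the session TimeZone is 'PST8PDT'):

```sql
SET TimeZone = 'PST8PDT';
SELECT TIMESTAMP '2010-01-01 00:00:00' AT TIME ZONE 'utc';
-- midnight UTC displays as 2009-12-31 16:00:00-08, hence extract(HOUR ...) = 16
```

extract(epoch ...) is unaffected because epoch seconds are absolute: 1262304000 is exactly 2010-01-01 00:00:00 UTC, matching the Java result.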


Re: [GENERAL] triggers and FK cascades

2011-03-17 Thread David Johnston
Don't know if this would work but could you check to see if the corresponding 
PK exists on A?  
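A sketch of that existence check, with assumed table and column names (tables a and b, with b.a_id referencing a.id): by the time B's AFTER DELETE row trigger fires during a cascade, the parent row in A is already gone, while a direct delete on B leaves it in place.

```sql
-- Sketch: distinguish a direct delete on B from an FK cascade off A
CREATE OR REPLACE FUNCTION b_after_delete() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM a WHERE a.id = OLD.a_id) THEN
        RAISE NOTICE 'direct delete on B';
    ELSE
        RAISE NOTICE 'delete cascaded from A (parent row gone)';
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_after_delete_trg
    AFTER DELETE ON b
    FOR EACH ROW EXECUTE PROCEDURE b_after_delete();
```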

It may also help to explain why you would want to do such a thing so that 
someone may be able to provide an alternative solution as opposed to simply 
responding to a generic feature question.

David J.

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Grzegorz Jaskiewicz
Sent: Thursday, March 17, 2011 6:41 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] triggers and FK cascades

Considering the following example.
Tables A and B.
Table A contains some data.
Table B refers to table A using an FK with 'on delete cascade'. Table B has a 
trigger on it, after delete, per row.

Now, is there any way I can tell in the trigger on table B that it has been 
called from a direct delete on that table, as opposed to the indirect (FK) 
delete on table A?

The trigger is a PL/pgSQL or C function.


--
GJ





Re: [GENERAL] Is PostgreSQL supported on RHEL6?

2011-03-17 Thread MauMau

Hello,

Thank you for your reply. I am much relieved. I understood the answer 
as follows:


[A1] Yes, it is safe to use PostgreSQL 8.3.12 on RHEL6.
It is recommended to rebuild it on RHEL6, however, it should be no problem 
to use it without rebuilding it.


[A2] N/A because the answer to Q1 is yes (safe).
Applying the newest update is always recommended. However, the newest update 
is not a must for RHEL6.


Tom, all, I'm sorry to ask again.

Do I need to set wal_sync_method to fdatasync in postgresql.conf if I use 
8.3.12 on RHEL6? From the release note, I got the impression that even 
open_datasync is not a problem unless the ext4 file system is mounted with 
data=journal.
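If it does turn out to need setting, it is a one-line change (a sketch; fdatasync is the traditional Linux default for this setting):

```
# postgresql.conf
wal_sync_method = fdatasync
```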


How about Q3?


Regards
Maumau


"Tom Lane"  wrote in message 
news:12396.1300373...@sss.pgh.pa.us...

"MauMau"  writes:
Now I'm trying to support both RHEL5 and RHEL6 with minimal effort (but with
safety). If possible, I want to continue to use PostgreSQL 8.3.12 built on
RHEL5 for a while. Then, I'd like to ask some questions:


I'd recommend rebuilding the executables on RHEL6 if possible, but
otherwise this should be no problem.

I could build 8.3.12 successfully with 167 compilation warnings that report
"variable not used" and "uninitialized variable is used" etc. Even if I
could run PostgreSQL, I'm not sure that it is safe.


You can reasonably assume those are cosmetic.  Newer compilers tend to
be pickier about that sort of thing than older ones, so we fix those
sorts of warnings when we see them.  If any of them had represented
actual bugs, we'd have back-patched the fixes into 8.3.x.


I searched the PostgreSQL mailing lists with "RHEL6" and found the
discussion regarding wal_sync_method and O_DSYNC/O_SYNC. The following fix
in 8.3.13 makes me wonder if I should update to 8.3.14, which is the latest
version of the 8.3 series.


That would be a good idea in any case.

regards, tom lane






[GENERAL] triggers and FK cascades

2011-03-17 Thread Grzegorz Jaśkiewicz
Considering the following example.
Tables A and B.
Table A contains some data.
Table B refers to table A using an FK with 'on delete cascade'. Table B
has a trigger on it, after delete, per row.

Now, is there any way I can tell in the trigger on table B that it has
been called from a direct delete on that table, as opposed to the
indirect (FK) delete on table A?

The trigger is a PL/pgSQL or C function.


-- 
GJ



Re: [GENERAL] Windows ODBC connection trouble? ISP issue?

2011-03-17 Thread Adrian Klaver
On Thursday, March 17, 2011 3:20:16 pm Joseph Doench wrote:
> I cannot think of any special reason why the cloud server would allow a
> connection from my home versus any other location.

I do, it is called a firewall :)  I would highly suggest checking what your 
firewall rules on your cloud server are. I use AWS, for instance, where the 
rules are accessed from the Management Console as Security Groups.

> 
> I will test another location or two.
> 
> 
> Regards,
> 

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Primary key vs unique index

2011-03-17 Thread Scott Marlowe
On Thu, Mar 17, 2011 at 12:59 PM, Voils, Steven M
 wrote:
> Thanks for the reply.  I should have mentioned in the first post that we do 
> delete significant amounts of the table which I thought was the cause of the 
> bloat.  We are already performing automatic vacuums nightly.

Automatic regular vacuums?  So you do or don't have autovac turned on?

What version of pg are you running (8.3 or before, 8.4 or later?)

Are your nightly vacuums FULL or regular vacuums?



[GENERAL] Re: [GENERAL] Re: [GENERAL] Different encoding for string values and identifier strings? Or (select 'tést' as tést) returns different values for string and identifier...

2011-03-17 Thread Francisco Figueiredo Jr.
Any ideas??

Would it be possible that Postgresql would be using another encoding
for the identifiers when they aren't wrapped by double quotes?
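The byte sequences quoted below can be sanity-checked on the server side: 0xC3 0xA9 is the valid two-byte UTF-8 encoding of 'é', while 0xE3 opens a three-byte sequence. A sketch (assumes a UTF8-encoded database and PostgreSQL 9.0's hex bytea input format):

```sql
SELECT convert_from('\xc3a9'::bytea, 'UTF8');   -- the character 'é'
-- convert_from('\xe3a9'::bytea, 'UTF8') would raise
-- "invalid byte sequence for encoding UTF8"
```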


On Tue, Mar 15, 2011 at 23:37, Francisco Figueiredo Jr.
 wrote:
> Now, I'm using my dev machine.
>
> With the tests I'm doing, I can see the following:
>
> If I use:
>
> select 'seléct' as "seléct";
>
> column name returns ok as expected.
>
> If I do:
>
> select 'seléct' as seléct;
>
>
> This is the sequence of bytes I receive from postgresql:
>
> byte1 - 115 UTF-8 for s
> byte2 - 101 UTF-8 for e
> byte3 - 108 UTF-8 for l
> byte4 - 227
> byte5 - 169
> byte6 - 99 UTF-8 for c
> byte7 - 116 UTF-8 for t
>
>
> The problem lies in the byte4.
> According to [1], the first byte defines how many bytes will compose
> the UTF-8 char. the problem is that 227 encodes to a binary value of
> 1110 0011 and so, the UTF-8 decoder will think there are 3 bytes in
> sequence when actually there are only 2! :( And this seems to be the
> root of the problem for me.
>
>
> For the select value the correct byte is returned:
>
> byte1 - 115 UTF-8 for s
> byte2 - 101 UTF-8 for e
> byte3 - 108 UTF-8 for l
> byte4 - 195
> byte5 - 169
> byte6 - 99 UTF-8 for c
> byte7 - 116 UTF-8 for t
>
>
> Where 195 is 1100 0011 which gives two bytes in sequence and the
> decoder can decode this to the U+00E9 which is the char "é"
>
> Do you think this can be related to my machine? I'm using OSX 10.6.6
> and I compiled postgresql 9.0.1 from source code.
>
> Thanks in advance.
>
>
>
>
> [1] - http://en.wikipedia.org/wiki/UTF-8
>
>
>
>
> On Tue, Mar 15, 2011 at 15:52, Francisco Figueiredo Jr.
>  wrote:
>> Hi,
>>
>> What would change the encoding of the identifiers?
>>
>> Because on my dev machine which unfortunately isn't with me right now
>> I can't get the identifier returned correctly :(
>>
>> I remember that it returns:
>>
>>  test=*# select 'tést' as tést;
>>   tst
>>  --
>>   tést
>>
>> Is there any config I can change at runtime in order to have it
>> returned correctly?
>>
>> Thanks in advance.
>>
>>
>> On Tue, Mar 15, 2011 at 15:45, Andreas Kretschmer
>>  wrote:
>>> Francisco Figueiredo Jr.  wrote:
>>>

 What happens if you remove the double quotes in the column name identifier?
>>>
>>> the same:
>>>
>>> test=*# select 'tést' as tést;
>>>  tést
>>> --
>>>  tést
>>> (1 Zeile)
>>>
>>>
>>>
>>> Andreas
>>> --
>>> Really, I'm not out to destroy Microsoft. That will just be a completely
>>> unintentional side effect.                              (Linus Torvalds)
>>> "If I was god, I would recompile penguin with --enable-fly."   (unknown)
>>> Kaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>>>
>>
>>
>>
>> --
>> Regards,
>>
>> Francisco Figueiredo Jr.
>> Npgsql Lead Developer
>> http://www.npgsql.org
>> http://fxjr.blogspot.com
>> http://twitter.com/franciscojunior
>>
>
>
>
> --
> Regards,
>
> Francisco Figueiredo Jr.
> Npgsql Lead Developer
> http://www.npgsql.org
> http://fxjr.blogspot.com
> http://twitter.com/franciscojunior
>



-- 
Regards,

Francisco Figueiredo Jr.
Npgsql Lead Developer
http://www.npgsql.org
http://fxjr.blogspot.com
http://twitter.com/franciscojunior



Re: [GENERAL] Windows ODBC connection trouble? ISP issue?

2011-03-17 Thread Joseph Doench
I cannot think of any special reason why the cloud server would allow a
connection from my home versus any other location.

I will test another location or two. 


Regards,

JPD



-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Thursday, March 17, 2011 6:00 PM
To: pgsql-general@postgresql.org
Cc: Joseph Doench
Subject: Re: [GENERAL] Windows ODBC connection trouble? ISP issue?

On Thursday, March 17, 2011 2:50:18 pm Joseph Doench wrote:
> My home ISP, our cloud server, and the office ISP are all separate
> entities.
> 
> I infer that the problem is with the office ISP - DSL provided by a phone
> company.

But you could not connect from two Wi-Fi locations either; that tends to rule
out the office ISP as the root of the problem. The common point in all the
connections is the cloud server. You can connect from home but not anywhere
else, which would seem to indicate that your cloud server (the server itself,
not the Postgres server) is only taking connections from your home.

> 
> 
> 
> Regards,
> 
> JPD
> 


-- 
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Windows ODBC connection trouble? ISP issue?

2011-03-17 Thread Adrian Klaver
On Thursday, March 17, 2011 2:50:18 pm Joseph Doench wrote:
> My home ISP, our cloud server, and the office ISP are all separate
> entities.
> 
> I infer that the problem is with the office ISP - DSL provided by a phone
> company.

But you could not connect from two Wi-Fi locations either; that tends to rule
out the office ISP as the root of the problem. The common point in all the
connections is the cloud server. You can connect from home but not anywhere
else, which would seem to indicate that your cloud server (the server itself,
not the Postgres server) is only taking connections from your home.

> 
> 
> 
> Regards,
> 
> JPD
> 


-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Windows ODBC connection trouble? ISP issue?

2011-03-17 Thread Joseph Doench
My home ISP, our cloud server, and the office ISP are all separate entities.

I infer that the problem is with the office ISP - DSL provided by a phone
company.



Regards,

JPD


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Thursday, March 17, 2011 5:44 PM
To: pgsql-general@postgresql.org
Cc: Joseph Doench
Subject: Re: [GENERAL] Windows ODBC connection trouble? ISP issue?

On Thursday, March 17, 2011 2:29:00 pm Joseph Doench wrote:
> I have set up a pg database server for my organization on a cloud server
> using PG 8.2;
> 
> I am trying to provide connections to the db for some members using MS
> Access.  I developed a small MS Access application using Windows ODBC - it
> works fine from my house.
> 
> I have this line in pg_hba.conf:
> 
> Host    all    mydbuserID    0.0.0.0/0    password
> 
> BUT. I cannot re-create the ODBC connection in our organization's offices!
> 
> 
> I have de-bugged by taking my laptop to the office - it will not connect 
> to the db there - but is ok at my house.
> (I have also checked 2 other locations with public wi-fi; could not
connect
> from either of them)
> 
> This seems to be related to the ISP blocking data - I have ruled out the
> router in the office.

When you say ISP do you mean the cloud provider or the service provider(s)
from 
the various sites? I am assuming that that your home, office and the public
Wi-Fi 
locations are not all using the same ISP. 

> 
> Is this a common SNAFU to encounter?

My guess is that the firewall rules on your cloud server is only allowing 
connections from your home site.

> 
> I spent an hour on the phone with tech support for the office's ISP;  the
> guy insisted it could not be a problem on their side!
> 
> Is there something I could be overlooking?
> 
> Any help or guidance would be greatly appreciated.
> 
> 
> Regards,
> 
> JPD

-- 
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Windows ODBC connection trouble? ISP issue?

2011-03-17 Thread Adrian Klaver
On Thursday, March 17, 2011 2:29:00 pm Joseph Doench wrote:
> I have set up a pg database server for my organization on a cloud server
> using PG 8.2;
> 
> I am trying to provide connections to the db for some members using MS
> Access.  I developed a small MS Access application using Windows ODBC - it
> works fine from my house.
> 
> I have this line in pg_hba.conf:
> 
> Host    all    mydbuserID    0.0.0.0/0    password
> 
> BUT. I cannot re-create the ODBC connection in our organization's offices!
> 
> 
> I have de-bugged by taking my laptop to the office - it will not connect 
> to the db there - but is ok at my house.
> (I have also checked 2 other locations with public wi-fi; could not connect
> from either of them)
> 
> This seems to be related to the ISP blocking data - I have ruled out the
> router in the office.

When you say ISP do you mean the cloud provider or the service provider(s) from 
the various sites? I am assuming that that your home, office and the public 
Wi-Fi 
locations are not all using the same ISP. 

> 
> Is this a common SNAFU to encounter?

My guess is that the firewall rules on your cloud server is only allowing 
connections from your home site.

> 
> I spent an hour on the phone with tech support for the office's ISP;  the
> guy insisted it could not be a problem on their side!
> 
> Is there something I could be overlooking?
> 
> Any help or guidance would be greatly appreciated.
> 
> 
> Regards,
> 
> JPD

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Windows ODBC connection trouble? ISP issue?

2011-03-17 Thread John R Pierce

On 03/17/11 2:29 PM, Joseph Doench wrote:


I have set up a pg database server for my organization on a cloud
server using PG 8.2;


I am trying to provide connections to the db for some members using MS
Access. I developed a small MS Access application using Windows
ODBC - it works fine from my house.


I have this line in pg_hba.conf:

Host all mydbuserID 0.0.0.0/0 password

BUT... I cannot re-create the ODBC connection in our organization's offices!

I have de-bugged by taking my laptop to the office - it will not
connect to the db there - but is ok at my house.


(I have also checked 2 other locations with public wi-fi; could not
connect from either of them)


This seems to be related to the ISP blocking data - I have ruled out the
router in the office.





is your home behind a home internet sharing router ? (this might be 
built into whatever 'modem' your ISP provided). is the Postgres port 
forwarded from the outside world to your server?


in general if your home server is on a private local network address 
like 192.168.x.y or 10.x.y.z, then it can't be directly reached from the 
internet unless the internet gateway is configured to forward the 
service port in question from real.ip.addr:port to local.ip.addr:port 
(Postgres uses port 5432/tcp by default)









[GENERAL] Windows ODBC connection trouble? ISP issue?

2011-03-17 Thread Joseph Doench
I have set up a pg database server for my organization on a cloud server
using PG 8.2;

I am trying to provide connections to the db for some members using MS
Access.  I developed a small MS Access application using Windows ODBC - it
works fine from my house.  

I have this line in pg_hba.conf:

Host    all    mydbuserID    0.0.0.0/0    password

BUT. I cannot re-create the ODBC connection in our organization's offices!


I have de-bugged by taking my laptop to the office - it will not connect  to
the db there - but is ok at my house.
(I have also checked 2 other locations with public wi-fi; could not connect
from either of them)

This seems to be related to the ISP blocking data - I have ruled out the
router in the office.

Is this a common SNAFU to encounter?  

I spent an hour on the phone with tech support for the office's ISP;  the
guy insisted it could not be a problem on their side!

Is there something I could be overlooking?

Any help or guidance would be greatly appreciated.


Regards,

JPD



Re: [GENERAL] PG COPY from version 8 to 9 issue with timezonetz

2011-03-17 Thread Radosław Smogura
Adrian Klaver  Thursday 17 March 2011 19:18:25
> On Thursday, March 17, 2011 10:10:49 am Brent Gulanowski wrote:
> > We use PG COPY to successfully in PG 8 to copy a database between two
> > servers. Works perfectly.
> > 
> > When the target server is PG 9, *some* fields of type timezonetz end up
> > garbled. Basically the beginning of the string is wrong:
> > 
> > 152037-01-10 16:53:56.719616-05
> > 
> > It should be 2011-03-16 or similar.
> > 
> > In this case, the source computer is running Mac OS X 10.6.6 on x86_64
> > (MacBook Pro Core i5), and the destination computer is running Debian
> > Lenny on Xeon (Core i7).
> > 
> > I looked at the documentation on the copy command, and the PG9 release
> > notes, but I didn't see anything that might explain this problem.
> > 
> > We are using the WITH BINARY option. It has been suggested to disable
> > that. What are the down sides of that? I'm guessing just performance
> > with binary columns.
> 
> I think the bigger downsides come from using it:) See below for more
> information:
> 
> http://www.postgresql.org/docs/9.0/interactive/sql-copy.html
> 
> "Binary Format
> The binary format option causes all data to be stored/read as binary format
> rather than as text. It is somewhat faster than the text and CSV formats,
> but a binary-format file is less portable across machine architectures and
> PostgreSQL versions. Also, the binary format is very data type specific;
> for example it will not work to output binary data from a smallint column
> and read it into an integer column, even though that would work fine in
> text format.
> The binary file format consists of a file header, zero or more tuples
> containing the row data, and a file trailer. Headers and data are in
> network byte order. "
Actually, binary mode is faster in some situations and slower in others; in
any case it should save space in backup files or during transmission (e.g. a
binary timestamptz takes 8 bytes; text takes more).

But this may be due to the encoding of timestamptz: your version 8 server
could be compiled with float timestamps and the version 9 server with integer
timestamps, or vice versa.

Regards,
Radek



Re: [GENERAL] query taking much longer since Postgres 8.4 upgrade

2011-03-17 Thread Tomas Vondra
Dne 17.3.2011 19:29, Davenport, Julie napsal(a):
> I have not yet had time to try Tomas' suggestion of bumping up the work_mem 
> first (trying to figure out how to do that from within a coldfusion script).  
> Many thanks for all your help guys!

Well, just execute this 'SQL query' just like the other ones

set work_mem='8MB'

and it will increase the amount of memory for that connection.

Tomas




Re: [GENERAL] query taking much longer since Postgres 8.4 upgrade

2011-03-17 Thread Davenport, Julie
FYI, I implemented Pavel's suggestion to use:

 course_begin_date::date IN ( 
'20100412','20100510','20100607','20100712','20100830','20100927','20101025','20101122','20101213','20110110','20110207','20110307'
 )

instead of 

to_char(course_begin_date,'YYYYMMDD') IN ( 
'20100412','20100510','20100607','20100712','20100830','20100927','20101025','20101122','20101213','20110110','20110207','20110307'
 )

and it did help significantly.  The overall script (where there are several 
queries like this one) was taking 7.5 mins on Postgres 8.0 and initially took 
20 mins on 8.4; but now after this change (::date) it only takes 14.9 mins.  
Progress!  I have not yet had time to try Tomas' suggestion of bumping up the 
work_mem first (trying to figure out how to do that from within a coldfusion 
script).  Many thanks for all your help guys!

Julie


-Original Message-
From: Pavel Stehule [mailto:pavel.steh...@gmail.com] 
Sent: Thursday, March 17, 2011 12:13 AM
To: Davenport, Julie
Cc: Tomas Vondra; pgsql-general@postgresql.org
Subject: Re: [GENERAL] query taking much longer since Postgres 8.4 upgrade

2011/3/16 Davenport, Julie :
> Yes, the column course_begin_date is a timestamp, so that would not work in 
> this instance, but I will keep that in mind for future use elsewhere.  I 
> agree, there are ways to rewrite this query, just wondering which is best to 
> take advantage of 8.4.
> Thanks much.
>
>

ok, sorry, do column_course_begin::date = ...

:)

Pavel

>
>
> -Original Message-
> From: Tomas Vondra [mailto:t...@fuzzy.cz]
> Sent: Wednesday, March 16, 2011 4:40 PM
> To: Pavel Stehule
> Cc: pgsql-general@postgresql.org; Davenport, Julie
> Subject: Re: [GENERAL] query taking much longer since Postgres 8.4 upgrade
>
> Dne 16.3.2011 22:31, Pavel Stehule napsal(a):
>> 2011/3/16 Tomas Vondra :
>>> Dne 16.3.2011 21:38, Davenport, Julie napsal(a):
 OK, I did the explain analyze on both sides (using a file for output 
 instead) and used the tool you suggested.

 8.0 - http://explain.depesz.com/s/Wam
 8.4 - http://explain.depesz.com/s/asJ
>>>
>>> Great, that's exactly what I asked for. I'll repost that to the mailing
>>> list so that the others can check it too.
>>>
 When I run the queries I get 59,881 rows on the 8.0 side and 59,880 on the 
 8.4 side, which is what I expect because 8.4 side was updated a couple 
 hours later and some minor changes make sense.
>>>
>>> Hm, obviously both versions got the row estimates wrong, but the 8.4
>>> difference (200x) is much bigger that the 8.0 (10x). This might be one
>>> of the reasons why a different plan is chosen.
>>
>> the expression
>>
>> to_char(course_begin_date, 'YYYYMMDD'::text) = '20101025'::text
>>
>> should be a problem
>>
>> much better is test on equality in date domain like:
>>
>> course_begin_date = to_date('20101025', 'YYYYMMDD')
>>
>> this is faster and probably better estimated
>
> Which is not going to work if the course_begin_date column is a
> timestamp, because of the time part.
>
> But yes, there are several ways to improve this query, yet it does not
> explain why the 8.4 is so much slower.
>
> Tomas
>



Re: [GENERAL] Primary key vs unique index

2011-03-17 Thread Voils, Steven M
Thanks for the reply, that's what I was looking for.  I just wasn't sure if 
there was another compelling advantage to use primary keys instead of a unique 
index.

-Original Message-
From: Scott Ribe [mailto:scott_r...@elevated-dev.com] 
Sent: Thursday, March 17, 2011 12:13 PM
To: Voils, Steven M
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Primary key vs unique index

On Mar 17, 2011, at 6:19 AM, Voils, Steven M wrote:

> Is there a fundamental difference between a primary key and a unique index?  
> Currently we have primary keys on tables that have significant amounts of 
> updates performed on them, as a result the primary key indexes are becoming 
> significantly bloated.  There are other indexes on the tables that also 
> become bloated as a result of this, but these are automatically rebuilt 
> periodically by the application (using the concurrently flag) when read usage 
> is expected to be very low. 
>  
> We don't want to remove the unique constraint that the primary key is 
> providing, but the space on disk will continue to grow unbounded so we must 
> do something.  Can we replace the primary key with a unique index that could 
> be rebuilt concurrently, or would this be considered bad design?  The 
> reasoning behind this would be that the unique index could be rebuilt 
> concurrently without taking the application down or exclusively locking the 
> table for an extended period of time.  Are there other advantages to a 
> primary key outside of a uniqueness constraint and an index? 

So, you can't use REINDEX because it lacks the CONCURRENTLY option, and that 
would lock too much for too long?

The only thing the primary key designation provides beyond not null & unique is 
the metadata about what is the primary key. Which for example in the db allows 
foreign key constraints to be created without specifying that column. And some 
ORM/apps/frameworks can automatically make use of the information as well. I 
like having them for clarity, but you really can do away with them if your 
deployment needs to do so.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] Primary key vs unique index

2011-03-17 Thread Voils, Steven M
Thanks for the reply.  I should have mentioned in the first post that we do 
delete significant amounts of the table which I thought was the cause of the 
bloat.  We are already performing automatic vacuums nightly.

-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com] 
Sent: Thursday, March 17, 2011 2:52 PM
To: Voils, Steven M
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Primary key vs unique index

On Thu, Mar 17, 2011 at 12:51 PM, Scott Marlowe  wrote:
> On Thu, Mar 17, 2011 at 6:19 AM, Voils, Steven M  
> wrote:
>> Is there a fundamental difference between a primary key and a unique index?
>> Currently we have primary keys on tables that have significant amounts of
>> updates performed on them, as a result the primary key indexes are becoming
>> significantly bloated.  There are other indexes on the tables that also
>> become bloated as a result of this, but these are automatically rebuilt
>> periodically by the application (using the concurrently flag) when read
>> usage is expected to be very low.
>
> If you're experiencing bloat, but not deleting huge chunks of your
> table at a time, then you're not vacuuming aggressively enough

Or you're on 8.3 or before and blowing out your free space map.



Re: [GENERAL] Primary key vs unique index

2011-03-17 Thread Scott Marlowe
On Thu, Mar 17, 2011 at 12:51 PM, Scott Marlowe  wrote:
> On Thu, Mar 17, 2011 at 6:19 AM, Voils, Steven M  
> wrote:
>> Is there a fundamental difference between a primary key and a unique index?
>> Currently we have primary keys on tables that have significant amounts of
>> updates performed on them, as a result the primary key indexes are becoming
>> significantly bloated.  There are other indexes on the tables that also
>> become bloated as a result of this, but these are automatically rebuilt
>> periodically by the application (using the concurrently flag) when read
>> usage is expected to be very low.
>
> If you're experiencing bloat, but not deleting huge chunks of your
> table at a time, then you're not vacuuming aggressively enough

Or you're on 8.3 or before and blowing out your free space map.



Re: [GENERAL] Primary key vs unique index

2011-03-17 Thread Scott Marlowe
On Thu, Mar 17, 2011 at 6:19 AM, Voils, Steven M  wrote:
> Is there a fundamental difference between a primary key and a unique index?
> Currently we have primary keys on tables that have significant amounts of
> updates performed on them, as a result the primary key indexes are becoming
> significantly bloated.  There are other indexes on the tables that also
>> become bloated as a result of this, but these are automatically rebuilt
> periodically by the application (using the concurrently flag) when read
> usage is expected to be very low.

If you're experiencing bloat, but not deleting huge chunks of your
table at a time, then you're not vacuuming aggressively enough



[GENERAL] Re: Is TimeZone applied with TIMESTAMP WITHOUT TIME ZONE and Extract( EPOCH ...)?

2011-03-17 Thread bubba postgres
ok got it.

select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME
ZONE at time zone 'utc' );


On Thu, Mar 17, 2011 at 11:32 AM, bubba postgres
wrote:

> no.. still confused.
> I assume it's storing everything in UTC.. did I need to specify a timezone
> when I inserted?
>
>
>
> On Thu, Mar 17, 2011 at 11:24 AM, bubba postgres  > wrote:
>
>> Looks like a quick search says I need to specify the timezone...
>>
>>
>> On Thu, Mar 17, 2011 at 11:22 AM, bubba postgres <
>> bubba.postg...@gmail.com> wrote:
>>
>>>
>>> I'm noticing some interesting behavior around timestamp and extract
>>> epoch, and it appears that I'm getting a timezone applied somewhere.
>>>
>>> Specifically, If I do:
>>> select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME
>>> ZONE ); == 1264924800
>>> select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME
>>> ZONE ); == 1270105200
>>>
>>> Now if I do something similar in Java.. using a GregorianCalendar, with
>>> "GMT" TimeZone.
>>> I get
>>> Hello:2010-01-31 00:00:00.000 (UTC)
>>> Hello:126489600
>>>
>>> Hello:2010-04-01 00:00:00.000 (UTC)
>>> Hello:127008000
>>>
>>> Which gives a difference of 8 and 7 hours respectively, so both a
>>> timezone and a DST shift are at work here.
>>>
>>> Is this the expected behavior of extract epoch, is there a way to get it
>>> to always be in GMT?
>>>
>>>
>>>
>>>
>>>
>>
>


[GENERAL] Re: Is TimeZone applied with TIMESTAMP WITHOUT TIME ZONE and Extract( EPOCH ...)?

2011-03-17 Thread bubba postgres
no.. still confused.
I assume it's storing everything in UTC.. did I need to specify a timezone
when I inserted?



On Thu, Mar 17, 2011 at 11:24 AM, bubba postgres
wrote:

> Looks like a quick search says I need to specify the timezone...
>
>
> On Thu, Mar 17, 2011 at 11:22 AM, bubba postgres  > wrote:
>
>>
>> I'm noticing some interesting behavior around timestamp and extract epoch,
>> and it appears that I'm getting a timezone applied somewhere.
>>
>> Specifically, If I do:
>> select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME
>> ZONE ); == 1264924800
>> select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME
>> ZONE ); == 1270105200
>>
>> Now if I do something similar in Java.. using a GregorianCalendar, with
>> "GMT" TimeZone.
>> I get
>> Hello:2010-01-31 00:00:00.000 (UTC)
>> Hello:126489600
>>
>> Hello:2010-04-01 00:00:00.000 (UTC)
>> Hello:127008000
>>
>> Which gives a difference of 8 and 7 hours respectively, so both a timezone
>> and a DST shift are at work here.
>>
>> Is this the expected behavior of extract epoch, is there a way to get it
>> to always be in GMT?
>>
>>
>>
>>
>>
>


[GENERAL] Re: Is TimeZone applied with TIMESTAMP WITHOUT TIME ZONE and Extract( EPOCH ...)?

2011-03-17 Thread bubba postgres
Looks like a quick search says I need to specify the timezone...

On Thu, Mar 17, 2011 at 11:22 AM, bubba postgres
wrote:

>
> I'm noticing some interesting behavior around timestamp and extract epoch,
> and it appears that I'm getting a timezone applied somewhere.
>
> Specifically, If I do:
> select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME
> ZONE ); == 1264924800
> select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME
> ZONE ); == 1270105200
>
> Now if I do something similar in Java.. using a GregorianCalendar, with
> "GMT" TimeZone.
> I get
> Hello:2010-01-31 00:00:00.000 (UTC)
> Hello:126489600
>
> Hello:2010-04-01 00:00:00.000 (UTC)
> Hello:127008000
>
> Which gives a difference of 8 and 7 hours respectively, so both a timezone
> and a DST shift are at work here.
>
> Is this the expected behavior of extract epoch, is there a way to get it to
> always be in GMT?
>
>
>
>
>


[GENERAL] Is TimeZone applied with TIMESTAMP WITHOUT TIME ZONE and Extract( EPOCH ...)?

2011-03-17 Thread bubba postgres
I'm noticing some interesting behavior around timestamp and extract epoch,
and it appears that I'm getting a timezone applied somewhere.

Specifically, If I do:
select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME
ZONE ); == 1264924800
select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME
ZONE ); == 1270105200

Now if I do something similar in Java.. using a GregorianCalendar, with
"GMT" TimeZone.
I get
Hello:2010-01-31 00:00:00.000 (UTC)
Hello:126489600

Hello:2010-04-01 00:00:00.000 (UTC)
Hello:127008000

Which gives a difference of 8 and 7 hours respectively, so both a timezone
and a DST shift are at work here.

Is this the expected behavior of extract epoch, is there a way to get it to
always be in GMT?
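The 8- and 7-hour gaps match a US/Pacific session TimeZone: EXTRACT(EPOCH FROM a timestamp WITHOUT time zone) interprets the naive value in the session time zone before converting. A Python sketch reproducing both sets of numbers — the Pacific offsets (PST = UTC-8 in January, PDT = UTC-7 in April) are an assumption about the poster's server, and the Java values in the post appear to have lost a trailing zero:

```python
from datetime import datetime, timezone, timedelta

def epoch_as_utc(y, m, d):
    # what Java's GregorianCalendar with GMT produces
    return int(datetime(y, m, d, tzinfo=timezone.utc).timestamp())

def epoch_with_offset(y, m, d, utc_offset_hours):
    # what Postgres produces when it treats the naive timestamp as local time
    tz = timezone(timedelta(hours=utc_offset_hours))
    return int(datetime(y, m, d, tzinfo=tz).timestamp())

print(epoch_as_utc(2010, 1, 31))           # 1264896000 (Java printed 126489600)
print(epoch_as_utc(2010, 4, 1))            # 1270080000 (Java printed 127008000)
print(epoch_with_offset(2010, 1, 31, -8))  # 1264924800 (Postgres, PST)
print(epoch_with_offset(2010, 4, 1, -7))   # 1270105200 (Postgres, PDT)
```

The `at time zone 'utc'` fix shown earlier in the thread forces the UTC interpretation and removes both the offset and the DST shift.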


Re: [GENERAL] PG COPY from version 8 to 9 issue with timestamptz

2011-03-17 Thread Adrian Klaver
On Thursday, March 17, 2011 10:10:49 am Brent Gulanowski wrote:
> We use PG COPY to successfully in PG 8 to copy a database between two
> servers. Works perfectly.
> 
> When the target server is PG 9, *some* fields of type timestamptz end up
> garbled. Basically the beginning of the string is wrong:
> 
> 152037-01-10 16:53:56.719616-05
> 
> It should be 2011-03-16 or similar.
> 
> In this case, the source computer is running Mac OS X 10.6.6 on x86_64
> (MacBook Pro Core i5), and the destination computer is running Debian Lenny
> on Xeon (Core i7).
> 
> I looked at the documentation on the copy command, and the PG9 release
> notes, but I didn't see anything that might explain this problem.
> 
> We are using the WITH BINARY option. It has been suggested to disable that.
> What are the down sides of that? I'm guessing just performance with binary
> columns.

I think the bigger downsides come from using it:) See below for more 
information:

http://www.postgresql.org/docs/9.0/interactive/sql-copy.html

"Binary Format
The binary format option causes all data to be stored/read as binary format 
rather than as text. It is somewhat faster than the text and CSV formats, but a 
binary-format file is less portable across machine architectures and PostgreSQL 
versions. Also, the binary format is very data type specific; for example it will 
not work to output binary data from a smallint column and read it into an 
integer column, even though that would work fine in text format. 
The binary file format consists of a file header, zero or more tuples containing 
the row data, and a file trailer. Headers and data are in network byte order."
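Given that, a portable transfer between the 8.x source and the 9.0 target can simply drop the BINARY option and use the default text format (table and file names here are hypothetical):

```sql
-- on the 8.x source: text format is portable across versions/architectures
COPY mytable TO '/tmp/mytable.dat';

-- on the 9.0 target
COPY mytable FROM '/tmp/mytable.dat';
```

Text format is somewhat slower to parse but avoids the cross-version binary representation problem entirely.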

-- 
Adrian Klaver
adrian.kla...@gmail.com


[GENERAL] regclass and search_path

2011-03-17 Thread Joe Abbate

Hi,

I'm using the autodoc regression database available at

http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/autodoc/autodoc/regressdatabase.sql?rev=1.2&content-type=text/x-cvsweb-markup

This has several schemas that have cross-schema foreign key constraints 
such as the following:


 autodoc=> \d product.product
                               Table "product.product"
        Column        |  Type   |                      Modifiers
 ---------------------+---------+------------------------------------------------------
  product_id          | integer | not null default nextval('product.product_product_id_seq'::regclass)
  product_code        | text    | not null
  product_description | text    |
Indexes:
"product_pkey" PRIMARY KEY, btree (product_id)
"product_product_code_key" UNIQUE, btree (product_code)
Check constraints:
"product_product_code_check" CHECK (product_code = upper(product_code))
Referenced by:
TABLE "store.inventory" CONSTRAINT "inventory_product_id_fkey" 
FOREIGN KEY (product_id) REFERENCES product.product(product_id) ON 
UPDATE CASCADE ON DELETE RESTRICT
TABLE "warehouse.inventory" CONSTRAINT "inventory_product_id_fkey" 
FOREIGN KEY (product_id) REFERENCES product.product(product_id) ON 
UPDATE CASCADE ON DELETE RESTRICT


I'm using this to validate a tool I'm building and I get an error on the 
following query:


autodoc=> SELECT conname::regclass FROM pg_constraint
autodoc->   WHERE contype = 'u';
ERROR:  relation "product_product_code_key" does not exist

The 8.4 documentation says:

The regclass input converter handles the table lookup according to the 
schema path setting, and so it does the "right thing" automatically.


My search path is the default "$user", public and I'm only able to avoid 
the error if I set the search_path to cover all the schemas, e.g.,


autodoc=> set search_path to "$user", public, product, store, warehouse;
SET
autodoc=> SELECT conname::regclass FROM pg_constraint
  WHERE contype = 'u';
               conname
 ------------------------------------
 product_product_code_key
 store_store_code_key
 warehouse_warehouse_code_key
 warehouse_warehouse_supervisor_key
(4 rows)

I would've thought that the "right thing" would have involved prepending 
the schema to the constraint name, e.g., 
product.product_product_code_key as is done for the table names in the 
\d output. Is this a bug or does regclass only do the "right thing" for 
tables and not for constraints?
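One way to get schema-qualified output without widening search_path is to join on the constraint's own namespace, or to cast the constraint's table (conrelid) instead of its name. A sketch:

```sql
-- conname is just a name; qualify it via the constraint's namespace,
-- and use conrelid (an oid) where a regclass is actually wanted
SELECT n.nspname || '.' || c.conname AS constraint_name,
       c.conrelid::regclass          AS on_table
FROM pg_constraint c
JOIN pg_namespace n ON n.oid = c.connamespace
WHERE c.contype = 'u';
```

regclass input resolves relation names through search_path by design, so it only finds objects in visible schemas; oid-to-regclass output, as with conrelid above, does not have that limitation.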


Joe



[GENERAL] PG COPY from version 8 to 9 issue with timestamptz

2011-03-17 Thread Brent Gulanowski
We use PG COPY to successfully in PG 8 to copy a database between two
servers. Works perfectly.

When the target server is PG 9, *some* fields of type timestamptz end up
garbled. Basically the beginning of the string is wrong:

152037-01-10 16:53:56.719616-05

It should be 2011-03-16 or similar.

In this case, the source computer is running Mac OS X 10.6.6 on x86_64
(MacBook Pro Core i5), and the destination computer is running Debian Lenny
on Xeon (Core i7).

I looked at the documentation on the copy command, and the PG9 release
notes, but I didn't see anything that might explain this problem.

We are using the WITH BINARY option. It has been suggested to disable that.
What are the down sides of that? I'm guessing just performance with binary
columns.

-- 
#pragma mark signature
[[self mailClient] send:[Mail messageWithText:@"From: Brent Gulanowski\nTo:
You"];


Re: [GENERAL] Primary key vs unique index

2011-03-17 Thread Scott Ribe
On Mar 17, 2011, at 6:19 AM, Voils, Steven M wrote:

> Is there a fundamental difference between a primary key and a unique index?  
> Currently we have primary keys on tables that have significant amounts of 
> updates performed on them, as a result the primary key indexes are becoming 
> significantly bloated.  There are other indexes on the tables that also 
> become bloated as a result of this, but these are automatically rebuilt 
> periodically by the application (using the concurrently flag) when read usage 
> is expected to be very low. 
>  
> We don’t want to remove the unique constraint that the primary key is 
> providing, but the space on disk will continue to grow unbounded so we must 
> do something.  Can we replace the primary key with a unique index that could 
> be rebuilt concurrently, or would this be considered bad design?  The 
> reasoning behind this would be that the unique index could be rebuilt 
> concurrently without taking the application down or exclusively locking the 
> table for an extended period of time.  Are there other advantages to a 
> primary key outside of a uniqueness constraint and an index? 

So, you can't use REINDEX because it lacks the CONCURRENTLY option, and that 
would lock too much for too long?

The only thing the primary key designation provides beyond not null & unique is 
the metadata about what is the primary key. Which for example in the db allows 
foreign key constraints to be created without specifying that column. And some 
ORM/apps/frameworks can automatically make use of the information as well. I 
like having them for clarity, but you really can do away with them if your 
deployment needs to do so.
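The swap described above might be sketched as follows (table and index names hypothetical; note that dropping the old constraint still takes a brief exclusive lock, just not one lasting the length of an index build):

```sql
-- build the replacement index without blocking writers
CREATE UNIQUE INDEX CONCURRENTLY orders_id_key ON orders (id);

-- then a short lock to drop the old, bloated primary key
ALTER TABLE orders DROP CONSTRAINT orders_pkey;

-- make sure NOT NULL stays explicitly enforced on the column
ALTER TABLE orders ALTER COLUMN id SET NOT NULL;
```

Foreign keys that referenced the table via its primary key would need to be pointed at the unique index's column explicitly, which is part of the metadata convenience being given up.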

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] Startup messages for socket protocol

2011-03-17 Thread Guillaume Yziquel
Le Thursday 17 Mar 2011 à 16:08:55 (+0100), Guillaume Yziquel a écrit :
> Le Thursday 17 Mar 2011 à 10:48:50 (-0400), Tom Lane a écrit :
> > Guillaume Yziquel  writes:
> 
> For now, when sending \000\003\000\000 and only this, the server seems
> to disconnect. The recv() call on the socket returns 0, which should
> mean that the server has dropped the connection.

Got it:

Sent: "\000\000\000\022\000\003\000\000user\000yziquel\000\000"
Read from socket:
"R\000\000\000\b\000\000\000\000S\000\000\000\025client_encoding\000UTF8\000S\000\000\000\023DateStyle\000ISO,
DMY\000S\000\000\000\025integer_datetimes\000on\000S\000\000\000\027IntervalStyle\000postgres\000S\000\000\000\021is_superuser\000off\000S\000\000\000\025server_encoding\000UTF8\000S\000\000\000\025server_version\0008.4.7\000S\000\000\000\"session_authorization\000yziquel\000S\000\000\000$standard_conforming_strings\000off\000S\000\000\000\023TimeZone\000localtime\000K\000\000\000\012\000\000|\197{\177\235?Z\000\000\000\005I"

Needed to prepend the length of the packet.
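Per the Frontend/Backend protocol documentation, the v3 StartupMessage is: an int32 length that counts the whole packet including itself, an int32 protocol version 196608 (3.0), then NUL-terminated key/value parameter pairs, ending with one extra NUL. A minimal Python sketch of building it:

```python
import struct

def startup_packet(**params):
    """Build a protocol-3.0 StartupMessage: int32 length (including
    itself), int32 version 196608 (0x00030000), then key\\0value\\0
    pairs and a final terminating \\0 byte."""
    body = struct.pack('!I', 196608)
    for key, value in params.items():
        body += key.encode() + b'\x00' + value.encode() + b'\x00'
    body += b'\x00'  # terminator after the last parameter pair
    return struct.pack('!I', len(body) + 4) + body

pkt = startup_packet(user='yziquel')
assert struct.unpack('!I', pkt[:4])[0] == len(pkt)  # length counts everything
assert pkt[4:8] == b'\x00\x03\x00\x00'              # protocol 3.0
assert b'user\x00yziquel\x00' in pkt
```

The server's "invalid length of startup packet" log entries quoted earlier are exactly what it reports when that leading length field is missing or wrong.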

Didn't appear very clearly in the docs. But this link got me more info:

http://blog.endpoint.com/2010/05/finding-postgresql-version-without.html

Thanks for your time.

-- 
 Guillaume Yziquel



Re: [GENERAL] query stuck at SOCK_wait_for_ready function call

2011-03-17 Thread tamanna madaan
Hi All

Now, I am using postgres-8.4.0 and psqlodbc-08.03.0400-1 and 
unixODBC-2.2.14-000.01 driver to connect
to the database. I am again having the same issue. One of the queries executed 
from my application has got stuck for an indefinite amount of time, causing my 
application to hang. So I cored the application. The core file gives a 
backtrace which shows it got stuck while waiting for a socket to get ready, as 
follows:

(gdb) bt
#0  0x7f1c3e5ed366 in poll () from /lib64/libc.so.6
#1  0x7f1c3a82d0a5 in SOCK_wait_for_ready (sock=0x7f1be67ff720, output=0, 
retry_count=1) at socket.c:531
#2  0x7f1c3a82d8c3 in SOCK_get_next_byte (self=0x7f1be67ff720, peek=0) at 
socket.c:940
#3  0x7f1c3a82db92 in SOCK_get_id (self=0x7f1be67ff720) at socket.c:696
#4  0x7f1c3a8090ca in CC_send_query_append (self=0x7f1be68bf430, 
query=, qi=0x0, flag=, stmt=0x0,
appendq=) at connection.c:2498
#5  0x7f1c3a818ae5 in PGAPI_Transact (henv=0x0, hdbc=0x7f1be68bf430, 
fType=0) at execute.c:1143
#6  0x7f1c3a8424ec in SQLEndTran (HandleType=, 
Handle=0x7f1be68bf430, CompletionType=-1) at odbcapi30.c:178
#7  0x7f1c3f62fa2b in SQLEndTran (handle_type=, 
handle=0x7f1beff16b90, completion_type=0) at SQLEndTran.c:360


One other thread of the same process was also stuck :

(gdb) bt
#0  0x7f1c3e5ed366 in poll () from /lib64/libc.so.6
#1  0x7f1c3a82d0a5 in SOCK_wait_for_ready (sock=0x2bcde60, output=0, 
retry_count=1) at socket.c:531
#2  0x7f1c3a82d8c3 in SOCK_get_next_byte (self=0x2bcde60, peek=0) at 
socket.c:940
#3  0x7f1c3a82db92 in SOCK_get_id (self=0x2bcde60) at socket.c:696
#4  0x7f1c3a8090ca in CC_send_query_append (self=0x2bd13a0, query=, qi=0x0, flag=, stmt=0x7f1bf766c380,
appendq=) at connection.c:2498
#5  0x7f1c3a836c94 in SC_execute (self=0x7f1bf766c380) at statement.c:1879
#6  0x7f1c3a81907e in Exec_with_parameters_resolved (stmt=0x7f1bf766c380, 
exec_end=0x7f1c2c59e4c0) at execute.c:386
#7  0x7f1c3a81a600 in PGAPI_Execute (hstmt=0x7f1bf766c380, flag=) at execute.c:1070
#8  0x7f1c3a83fd82 in SQLExecute (StatementHandle=0x7f1bf766c380) at 
odbcapi.c:374
#9  0x7f1c3f630c77 in SQLExecute (statement_handle=0x7f1be4b069e0) at 
SQLExecute.c:283


I had the same issue while using postgres-8.1.2 and was advised to upgrade 
postgres.


But upgrading postgres didn't resolve the issue.
There doesn't seem to be any locking issue.

Can anyone please shed some light on this issue .


Thanks...
Tamanna





From: Alban Hertroys [mailto:dal...@solfertje.student.utwente.nl]
Sent: Fri 12/31/2010 3:28 PM
To: tamanna madaan
Cc: Tom Lane; pgsql-general@postgresql.org
Subject: Re: [GENERAL] query stuck at SOCK_wait_for_ready function call



On 31 Dec 2010, at 5:14, tamanna madaan wrote:

> Moreover, it cant be waiting for a lock as
> other processes were able to update the same table at the same time.

That only means it wasn't waiting on a TABLE-lock, occurrences of which are 
quite rare in Postgres. But if, for example, an other update was updating the 
same row or if it was selected for update, then there would be a lock on that 
row.

> restarting the process which was stuck because of this query, also
> resolved the issue. That means after restart, the process was able to
> update the same table.

After it restarted, was it updating the same row? If not, there's your 
explanation.

> Had it been waiting for a lock before , it wouldn't
> have been able to update the table after restart either.

It would have been able to, unless the table was being altered (ALTER TABLE foo 
ADD bar text) or some-such.

Did you upgrade to the latest minor release yet? Upgrading should be one of 
your first priorities for solving this issue.

If you did and the problem still occurs; What is the query you were executing? 
From your backtrace it looks like you were executing  "SELECT RUMaster(2) AS 
call_proc_result". If so, what does that function do?

You appear to be running Postgres on a Windows machine? Are you sure you don't 
have some anti-virus package getting in the way locking files that are 
Postgres's?

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.








[GENERAL] Primary key vs unique index

2011-03-17 Thread Voils, Steven M
Is there a fundamental difference between a primary key and a unique index?  
Currently we have primary keys on tables that have significant amounts of 
updates performed on them, as a result the primary key indexes are becoming 
significantly bloated.  There are other indexes on the tables that also become 
bloated as a result of this, but these are automatically rebuilt periodically 
by the application (using the concurrently flag) when read usage is expected to 
be very low.

We don't want to remove the unique constraint that the primary key is 
providing, but the space on disk will continue to grow unbounded so we must do 
something.  Can we replace the primary key with a unique index that could be 
rebuilt concurrently, or would this be considered bad design?  The reasoning 
behind this would be that the unique index could be rebuilt concurrently 
without taking the application down or exclusively locking the table for an 
extended period of time.  Are there other advantages to a primary key outside 
of a uniqueness constraint and an index?


[GENERAL] pgwatch by Cybertec

2011-03-17 Thread bkwiencien
Does anyone have any experience using pgwatch from Cybertec? What is
your opinion of its functionality?



Re: [GENERAL] Startup messages for socket protocol

2011-03-17 Thread Guillaume Yziquel
Le Thursday 17 Mar 2011 à 10:48:50 (-0400), Tom Lane a écrit :
> Guillaume Yziquel  writes:
> > However, when I try to send this as the first thing, I get disconnected
> > by the server. Reading what the psql program does, I first get an 8 byte
> > message containing this:
> 
> > \000\000\000\008\004\210\022/
> 
> > This seems to work, but I'm at a loss pinpointing in the libpq source
> > code where that would fit in the protocol.
> 
> [ scratches head... ]  You should be getting either an ErrorResponse
> message or some AuthenticationXXX variant, and both of those would start
> with an ASCII character ('E' or 'R').

For now, when sending \000\003\000\000 and only this, the server seems
to disconnect. The recv() call on the socket returns 0, which should
mean that the server has dropped the connection.

> I'm not sure what the above could
> be, unless maybe you're somehow triggering an SSL startup handshake ---

For now, I simply have a INET socket. Just sending \000\003\000\000. No
SSL handshake.

> but the first returned byte should be an 'S' in that case.  Are you sure
> you've correctly identified what is payload data, versus what's TCP
> overhead or something like that?

Not sure how to identify that. What I identified was 'cannot read and
server disconnects client'. When I send \000\000\000\008\004\210\022/
(which is what psql sent when I looked it up), I get an "N", and
it waits for further data (i.e. not disconnected yet...). Then I get
disconnected after some timeout, I guess.

> It might also be enlightening to look into the server's log, especially
> if you were to crank log_min_messages way up so it logs debug stuff.

Not so familiar as to where to look, except for the file in
/var/log/postgresql:

2011-03-17 14:41:34 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 14:42:12 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 14:42:21 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 14:43:46 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 14:45:01 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 14:46:38 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 15:08:04 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 15:33:08 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 15:35:36 CET LOG:  longueur invalide du paquet de démarrage
2011-03-17 16:01:16 CET LOG:  longueur invalide du paquet de démarrage

In english: invalid length for startup packet.

>   regards, tom lane

What should a typical startup packet be? psql confuses me more than it
helps me.

-- 
 Guillaume Yziquel



Re: [GENERAL] Getting users/privs for tables.

2011-03-17 Thread Gauthier, Dave
information_schema.table_privileges has it.  Thanks !

From: Adrian Klaver [mailto:adrian.kla...@gmail.com]
Sent: Thursday, March 17, 2011 10:59 AM
To: pgsql-general@postgresql.org
Cc: Gauthier, Dave
Subject: Re: [GENERAL] Getting users/privs for tables.


On Thursday, March 17, 2011 7:48:37 am Gauthier, Dave wrote:

> Hi:
>
> I'm trying to determine who has what privs for what tables. Couldn't find
> any canned views for that (but may have missed it). Or is there a query
> that can get this from the metadata somehow?
>
> Thanks in Advance.

http://www.postgresql.org/docs/9.0/interactive/information-schema.html

role_table_grants
and/or
table_privileges

--
Adrian Klaver
adrian.kla...@gmail.com


Re: [GENERAL] Getting users/privs for tables.

2011-03-17 Thread Adrian Klaver
On Thursday, March 17, 2011 7:48:37 am Gauthier, Dave wrote:
> Hi:
> 
> I'm trying to determine who has what privs for what tables.  Couldn't find
> any canned views for that (but may have missed it).  Or is there a query
> that can get this from the metadata somehow?
> 
> Thanks in Advance.

http://www.postgresql.org/docs/9.0/interactive/information-schema.html

role_table_grants
and/or
table_privileges

-- 
Adrian Klaver
adrian.kla...@gmail.com


Re: [GENERAL] Getting users/privs for tables.

2011-03-17 Thread Tom Lane
"Gauthier, Dave"  writes:
> I'm trying to determine who has what privs for what tables.  Couldn't find 
> any canned views for that (but may have missed it).  Or is there a query that 
> can get this from the metadata somehow?

You could try using has_table_privilege() in a join between pg_class and
pg_authid ... might be a bit slow though.

regards, tom lane
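Tom's pg_class/pg_authid join might look like this (a sketch; the relkind filter, schema, and privilege list are illustrative, not prescribed):

```sql
-- one row per (role, table) pair; has_table_privilege does the check
SELECT a.rolname,
       c.relname,
       has_table_privilege(a.rolname, c.oid, 'SELECT') AS can_select,
       has_table_privilege(a.rolname, c.oid, 'UPDATE') AS can_update
FROM pg_class c
CROSS JOIN pg_authid a
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public';
```

As Tom notes, the cross join makes this slow on large catalogs; the information_schema views mentioned elsewhere in the thread are usually the more convenient route.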



Re: [GENERAL] Is PostgreSQL supported on RHEL6?

2011-03-17 Thread Tom Lane
"MauMau"  writes:
> Now I'm trying to support both RHEL5 and RHEL6 with minimal effort (but with 
> safety). If possible, I want to continue to use PostgreSQL 8.3.12 built on 
> RHEL5 for a while. Then, I'd like to ask some questions:

I'd recommend rebuilding the executables on RHEL6 if possible, but
otherwise this should be no problem.

> I could build 8.3.12 successfully with 167 compilation warnings that report 
> "variable not used" and "uninitialized variable is used" etc. Even if I 
> could run PostgreSQL, I'm not sure that it is safe.

You can reasonably assume those are cosmetic.  Newer compilers tend to
be pickier about that sort of thing than older ones, so we fix those
sorts of warnings when we see them.  If any of them had represented
actual bugs, we'd have back-patched the fixes into 8.3.x.

> I searched the PostgreSQL mailing lists with "RHEL6" and found the 
> discussion regarding wal_sync_method and O_DSYNC/O_SYNC. The following fix 
> in 8.3.13 makes me wonder if I should update with 8.3.14 which is the latest 
> version of 8.3 series.

That would be a good idea in any case.

regards, tom lane



Re: [GENERAL] Startup messages for socket protocol

2011-03-17 Thread Tom Lane
Guillaume Yziquel  writes:
> However, when I try to send this as the first thing, I get disconnected
> by the server. Reading what the psql program does, I first get an 8 byte
> message containing this:

>   \000\000\000\008\004\210\022/

> This seems to work, but I'm at a loss pinpointing in the libpq source
> code where that would fit in the protocol.

[ scratches head... ]  You should be getting either an ErrorResponse
message or some AuthenticationXXX variant, and both of those would start
with an ASCII character ('E' or 'R').  I'm not sure what the above could
be, unless maybe you're somehow triggering an SSL startup handshake ---
but the first returned byte should be an 'S' in that case.  Are you sure
you've correctly identified what is payload data, versus what's TCP
overhead or something like that?

It might also be enlightening to look into the server's log, especially
if you were to crank log_min_messages way up so it logs debug stuff.

regards, tom lane



[GENERAL] Getting users/privs for tables.

2011-03-17 Thread Gauthier, Dave
Hi:

I'm trying to determine who has what privs for what tables.  Couldn't find any 
canned views for that (but may have missed it).  Or is there a query that can 
get this from the metadata somehow?

Thanks in Advance.
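One canned starting point (a sketch against the standard information_schema; the schema filter is just an example) is the table_privileges view, which lists one row per grantee, table, and privilege:

```sql
-- Who has been granted what, on which table
SELECT grantee, table_schema, table_name, privilege_type
  FROM information_schema.table_privileges
 WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
 ORDER BY table_schema, table_name, grantee;
```

In psql, \dp (or \z) prints the same per-table ACLs, and has_table_privilege() answers a single user/table/privilege question as a boolean.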


[GENERAL] Startup messages for socket protocol

2011-03-17 Thread Guillaume Yziquel
Hi.

I've been trying to get a connection working to a PostgreSQL server
through the socket-level protocol. I've therefore been looking at
fe-connect.c and fe-protocol3.c in the src/interfaces/libpq folder.

Reading those sources, I understood, that the startup message should
begin with a request from the client with the protocol number.

In fe-protocol3.c, build_startup_message():

/* Protocol version comes first. */
if (packet)
{
ProtocolVersion pv = htonl(conn->pversion);

memcpy(packet + packet_len, &pv, sizeof(ProtocolVersion));
}
packet_len += sizeof(ProtocolVersion);

However, when I try to send this as the first thing, I get disconnected
by the server. Reading what the psql program does, I first get an 8 byte
message containing this:

\000\000\000\008\004\210\022/

This seems to work, but I'm at a loss pinpointing in the libpq source
code where that would fit in the protocol.

Enlightenment would be very welcome.
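For reference, both packets can be reproduced with a few lines of Python (a sketch only; the user/database values are placeholders). The 8 bytes quoted above are almost certainly an SSLRequest: an int32 length of 8 followed by the magic code 80877103, which psql sends before the startup packet proper to probe for SSL support, and which the server answers with a single 'S' or 'N':

```python
import struct

# SSLRequest: int32 length (8) + int32 magic code 80877103 (0x04D2162F)
ssl_request = struct.pack("!ii", 8, 80877103)

# StartupMessage: int32 length (including itself) + int32 protocol version
# 196608 (3 << 16, i.e. protocol 3.0) + NUL-terminated key/value pairs,
# closed by one extra NUL.  "postgres" here is just a placeholder.
params = b"user\x00postgres\x00database\x00postgres\x00\x00"
startup_message = struct.pack("!ii", 8 + len(params), 196608) + params
```

Sending startup_message by itself, with no preceding SSLRequest, is also valid; the server then replies with an AuthenticationXXX or ErrorResponse message.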

-- 
 Guillaume Yziquel



Re: [GENERAL] A join of 2 tables with sum(column) > 30

2011-03-17 Thread Alexander Farber
Thank you all for the replies -

On Wed, Mar 16, 2011 at 3:05 PM, Igor Neyman  wrote:
> Select id, sum(col1) from tab
> Where id > 10
> Group by id
> Having sum(col1) > 30;
>
> Spend some time reading basic SQL docs/books - it'll help you
> tremendously.

I have already read many SQL docs (really), and
I've done Perl, PHP, Java, C, ActionScript, etc. programming
at various points in time (for a living and/or for fun),
and SQL is still the most mind-boggling for me.

Regards
Alex
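For the record, here is that query running against a toy table (Python's built-in SQLite driver is used purely to keep the example self-contained; the SQL is identical in PostgreSQL). The point worth internalizing is that WHERE filters rows before grouping, while HAVING filters the groups afterwards:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (id INTEGER, col1 INTEGER)")
con.executemany("INSERT INTO tab VALUES (?, ?)",
                [(5, 100), (11, 20), (11, 25), (12, 10)])

# id 5 is dropped by WHERE; group 12 (sum 10) is dropped by HAVING
rows = con.execute("""
    SELECT id, SUM(col1)
      FROM tab
     WHERE id > 10
     GROUP BY id
    HAVING SUM(col1) > 30
""").fetchall()
# rows is [(11, 45)]
```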



[GENERAL] Is PostgreSQL supported on RHEL6?

2011-03-17 Thread MauMau

Hello,

I have some software products which support RHEL5 for x86 and x86_64. Each 
of them uses PostgreSQL 8.3.12 as a data repository. They all embed the same 
PostgreSQL binaries.


Now I'm trying to support both RHEL5 and RHEL6 with minimal effort (but 
safely). If possible, I want to continue using PostgreSQL 8.3.12 built on 
RHEL5 for a while. Then I'd like to ask some questions:


Q1: Is it safe to use PostgreSQL 8.3.12 on RHEL6? If it is not safe, what 
kind of problems might happen?
I could build 8.3.12 successfully, albeit with 167 compilation warnings that 
report "variable not used", "uninitialized variable is used", etc. Even if 
PostgreSQL runs, I'm not sure that it is safe. I wonder if running the 
regression tests would reveal problems.


I searched the PostgreSQL mailing lists with "RHEL6" and found the 
discussion regarding wal_sync_method and O_DSYNC/O_SYNC. The following fix 
in 8.3.13 makes me wonder if I should update to 8.3.14, which is the latest 
version of the 8.3 series. Is it safe to use 8.3.12 on RHEL6 by setting 
wal_sync_method to fdatasync?
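Concretely, the workaround being asked about is a single line in postgresql.conf, pinning the same value that the 8.3.13 fix quoted below forces as the compiled-in default on Linux:

```
# postgresql.conf
wal_sync_method = fdatasync
```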



8.3.13 release note
http://www.postgresql.org/docs/8.3/static/release-8-3-13.html
...
Force the default wal_sync_method to be fdatasync on Linux (Tom Lane, Marti 
Raudsepp)
The default on Linux has actually been fdatasync for many years, but recent 
kernel changes caused PostgreSQL to choose open_datasync instead. This 
choice did not result in any performance improvement, and caused outright 
failures on certain filesystems, notably ext4 with the data=journal mount 
option.




Q2: If 8.3.12 is not safe on RHEL6, is 8.3.14 safe? Do I need to use 9.0.3 
on RHEL6?
I want to avoid upgrading to a newer major version (9.0) because my software 
does not need the new features in 9.0 yet.



Q3: Doesn't PostgreSQL's performance degrade on RHEL6?
As stated above, by searching the PostgreSQL mailing lists and other web 
sites, I learned that O_SYNC was implemented in the Linux kernel and that 
fsync() got slower (on ext4 compared to ext3?). Does this mean that running 
PostgreSQL on RHEL6 is not yet appropriate?


Regards
MauMau

