Re: [GENERAL] date_trunc on date is immutable?

2009-12-24 Thread Scott Marlowe
On Thu, Dec 24, 2009 at 6:47 PM, Greg Stark  wrote:
> On Fri, Dec 25, 2009 at 12:56 AM, Scott Marlowe  
> wrote:
>> On Thu, Dec 24, 2009 at 4:36 PM, Kian Wright
>>  wrote:
>>> I'm trying to create an index on the month and year of a date field (in
>>> 8.3), and I'm getting the "functions in index expression must be marked
>>> IMMUTABLE" error message.
>>
>> If applied to a timestamp, it is immutable.  If it's a timestamp with
>> timezone it's not, because the timezone can change, which would change
>> the index.
>
> Put another way, a given point in time doesn't necessarily lie in a
> particular month or on a particular day because it depends what time
> zone the system is set to. So right now it's a day earlier or later in
> part of the globe.
>
> To do what you want define the index on date_trunc('month',
> appl_recvd_date at time zone 'America/Los_Angeles') or something like that.
>
> You'll have to make sure your queries have the same expression in them
> though :( It won't work if you just happen to have the system time
> zone set to the matching time zone.

Isn't it the client timezone, and not the system timezone, that actually
determines the zone a timestamptz is rendered in on retrieval?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date_trunc on date is immutable?

2009-12-24 Thread Greg Stark
On Fri, Dec 25, 2009 at 12:56 AM, Scott Marlowe  wrote:
> On Thu, Dec 24, 2009 at 4:36 PM, Kian Wright
>  wrote:
>> I'm trying to create an index on the month and year of a date field (in
>> 8.3), and I'm getting the "functions in index expression must be marked
>> IMMUTABLE" error message.
>
> If applied to a timestamp, it is immutable.  If it's a timestamp with
> timezone it's not, because the timezone can change, which would change
> the index.

Put another way, a given point in time doesn't necessarily lie in a
particular month or on a particular day because it depends what time
zone the system is set to. So right now it's a day earlier or later in
part of the globe.

To do what you want define the index on date_trunc('month',
appl_recvd_date at time zone 'America/Los_Angeles') or something like that.

You'll have to make sure your queries have the same expression in them
though :( It won't work if you just happen to have the system time
zone set to the matching time zone.
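
A concrete sketch of that approach, using the column and table names from Kian's original mail (the time zone itself is an assumption, and this presumes appl_recvd_date is timestamp with time zone):

```sql
-- AT TIME ZONE turns a timestamptz into a plain timestamp, so
-- date_trunc over the result is immutable and indexable: the zone is
-- pinned in the expression instead of read from the session setting.
CREATE INDEX enrollments_app_recvd_month_idx ON enrollments (
    date_trunc('month', appl_recvd_date AT TIME ZONE 'America/Los_Angeles')
);

-- Queries must repeat the exact same expression to use the index:
SELECT count(*)
FROM enrollments
WHERE date_trunc('month', appl_recvd_date AT TIME ZONE 'America/Los_Angeles')
      = timestamp '2009-12-01';
```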

-- 
greg



Re: [GENERAL] How to add month.year column validation

2009-12-24 Thread Scott Marlowe
On Thu, Dec 24, 2009 at 4:47 PM, Andrus  wrote:
> Scott,
>
>> You can use the regex I posted to get rid of the data easily, then go
>> back to the substr one for  a check constraint after that.
>
> regex is non-standard.
> How to implement this in standard SQL ?

I take it you need a way to scrub your data in various databases, not
just pgsql?  I'm not sure there is a simple SQL-standard way.  It's
likely that this one-time job will require various non-standard ways
of scrubbing your data.  You're going to have to figure out how to make
databases other than pgsql happy without me; the only one I'm even a
little familiar with is Oracle, and my Oracle-fu is rather rusty after a
three-year or so layoff from it.



Re: [GENERAL] date_trunc on date is immutable?

2009-12-24 Thread Scott Marlowe
On Thu, Dec 24, 2009 at 4:36 PM, Kian Wright
 wrote:
> I'm trying to create an index on the month and year of a date field (in
> 8.3), and I'm getting the "functions in index expression must be marked
> IMMUTABLE" error message.

If applied to a timestamp, it is immutable.  If it's a timestamp with
timezone it's not, because the timezone can change, which would change
the index.



[GENERAL] date_trunc on date is immutable?

2009-12-24 Thread Kian Wright
I'm trying to create an index on the month and year of a date field (in
8.3), and I'm getting the "functions in index expression must be marked
IMMUTABLE" error message.

I thought dates were immutable, and didn't think that DATE_TRUNC did
anything to change that.  These all fail:
  create index enrollments_app_recvd_month_idx on enrollments (
date_trunc('month', appl_recvd_date) );
  create index enrollments_app_recvd_month_idx on enrollments (
(date_trunc('month', appl_recvd_date) at time zone 'pst') );
  create index enrollments_app_recvd_month_idx on enrollments (
to_char(appl_recvd_date, 'MM') );
  create index enrollments_app_recvd_month_idx on enrollments (
(to_char(extract(year from appl_recvd_date), '') || to_char(extract(
month from appl_recvd_date), '00')) );

After much experimentation, I finally was able to get this to work:
  create index enrollments_app_recvd_month_idx on enrollments (
(cast(extract(year from appl_recvd_date) as text) || cast(extract(month from
appl_recvd_date) as text)) );

I am guessing to_char is mutable because the format string could use a
locale specific character, and PG doesn't bother to check the format string
when determining whether a function call is immutable. But I'm lost on why
date_trunc is mutable, especially after applying a specific time zone. Am I
missing something here?


Re: [GENERAL] How to add month.year column validation

2009-12-24 Thread Adrian Klaver
On Thursday 24 December 2009 3:47:23 pm Andrus wrote:
> Scott,
>
> >You can use the regex I posted to get rid of the data easily, then go
> >back to the substr one for  a check constraint after that.
>
> regex is non-standard.
> How to implement this in standard SQL ?
>
> Andrus.

Why should it matter? The initial data clean up is a one time event. Once the 
fields are cleaned up the check constraint will keep them that way.
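
For the constraint itself, here is a sketch that stays inside standard SQL (LIKE and SUBSTRING are both standard); the table name, column name, and 'MM.YYYY' layout are assumptions based on the thread's subject:

```sql
-- Two digits, a dot, four digits, with the month part between 01 and 12.
-- String comparison is not an airtight digit test, but comes close.
ALTER TABLE invoices ADD CONSTRAINT month_year_valid CHECK (
    month_year LIKE '__.____'
    AND SUBSTRING(month_year FROM 1 FOR 2) BETWEEN '01' AND '12'
    AND SUBSTRING(month_year FROM 4 FOR 4) BETWEEN '1900' AND '2199'
);
```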

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] How to add month.year column validation

2009-12-24 Thread Andrus

Scott,


You can use the regex I posted to get rid of the data easily, then go
back to the substr one for  a check constraint after that.


regex is non-standard.
How to implement this in standard SQL ?

Andrus.



Re: [GENERAL] Optimizing data layout for reporting in postgres

2009-12-24 Thread Scott Marlowe
On Thu, Dec 24, 2009 at 1:01 PM, Doug El  wrote:
> Hi,
>
> I have to summarize some data to be queried and there are about 5 million raw 
> records a day I need to summarize. In a nutshell I don't think I'm laying it 
> out in an optimal fashion, or not taking advantage of some postgres features 
> perhaps, I'm looking for feedback.
>
> The raw incoming data is in the form of
>
> ip string uint uint uint uint
>
> So for any given record say:
>
> 8.8.8.8 helloworld 1 2 3 4
>
> First, I need to be able to query how many total and how many unique requests 
> there were (unique by ip), over given time frame.
>
> So for the below data on the same day that's total two, but one unique
>
> 8.8.8.8 helloworld 1 2 3 4
> 8.8.8.8 helloworld 1 2 3 4
>
> Further for all fields (but ip which is not stored) I need to be able to 
> query and get total/unique counts based off any combination of criteria.
>
> So if I refer to them as columns A-E
>
> A       B     C     D     E
> string  uint  uint  uint  uint
>
> I need to be able to say how many rows have col A = 'helloworld' and,
> say, col C = 4. Or perhaps col E = 4 and col C < 3, etc.; any combination.
>
> The only way I could see to do this was to take the 5 million daily raw 
> records, sort them, then summarize that list with total and unique counts as 
> so:
>
> A     B         C     D     E     F      G      H
> date  stringid  uint  uint  uint  uint   total  unique
>
> Primary key is A-F (date stringid uint uint uint uint)
>
> This gives me a summary of about 900k records a day from the 4 million raw.
>
> I have things organized with monthly tables and yearly schemas. The string 
> column also has its own monthly lookup table, so there's just a string id 
> that's looked up.
>
> The database however is still quite huge and grows very fast, even simple 
> daily queries are fairly slow even on a fast server. I have a few indexes on 
> what I know are common columns queried against but again, any combination of 
> data can be queried, and  indexes do increase db size of course.
>
> I feel like there's got to be some better way to organize this data and make 
> it searchable.  Overall speed is more important than disk space usage for 
> this application.
>
> Perhaps there are some native features in postgres I'm not taking advantage 
> of here, that would tip the scales in my favor. I've done a fair amount of 
> research on the configuration file settings and feel like I have a fairly 
> optimized config for it as far as that goes, and have done the things 
> mentioned here: http://wiki.postgresql.org/wiki/SlowQueryQuestions
>
> Very much appreciate any suggestions, thank you in advance.

We run a nightly cron job that creates all the summary tables etc at
midnight.  On a fast machine it takes about 1 to 2 hours to run, but
makes the queries run during the day go from 10 or 20 seconds to a few
hundred milliseconds.

You might want to look into table partitioning and also materialized
views.  There's a great tutorial on how to roll your own at:

http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views
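
The roll-your-own version is conceptually just a summary table plus a scheduled refresh; a sketch against Doug's layout (all table and column names here are hypothetical):

```sql
-- One row per distinct (day, string, B..E) combination, with the
-- total and unique-by-ip counts precomputed from the raw records.
CREATE TABLE request_summary AS
SELECT day, stringid, b, c, d, e,
       count(*)           AS total,
       count(DISTINCT ip) AS unique_ips
FROM raw_requests
GROUP BY day, stringid, b, c, d, e;

-- The nightly cron job then appends yesterday's aggregation, e.g.:
-- INSERT INTO request_summary
-- SELECT day, stringid, b, c, d, e, count(*), count(DISTINCT ip)
-- FROM raw_requests WHERE day = current_date - 1
-- GROUP BY day, stringid, b, c, d, e;
```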



Re: [GENERAL] cross-database time extract?

2009-12-24 Thread Adrian Klaver
On Thursday 24 December 2009 1:44:58 pm Israel Brewster wrote:
> This is sort of a PostgreSQL question/sort of a general SQL question,
> so I apologize if this isn't the best place to ask. At any rate, I
> know in PostgreSQL you can issue a command like 'SELECT
> "time"(timestamp_column) from table_name' to get the time part of a
> timestamp. The problem is that this command for some reason requires
> quotes around the "time" function name, which breaks the command when
> used in SQLite (I don't know about MySQL yet, but I suspect the same
> would be true there). The program I am working on is designed to work
> with all three types of databases (SQLite, PostgreSQL, and MySQL) so
> it would be nice (save me some programing) if there was a single SQL
> statement to get the time portion of a timestamp that would work with
> all three. Is there such a beast? On a related note, why do we need
> the quotes around "time" for the function to work in PostgreSQL? the
> date function doesn't need them, so I know it's not just a general
> PostgreSQL formating difference. Thanks :)
> ---
> Israel Brewster
> Computer Support Technician II
> Frontier Flying Service Inc.
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7250 x293
> ---

As to the "time" issue see here;
http://www.postgresql.org/docs/8.4/interactive/sql-expressions.html
4.2.9. Type Casts

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] cross-database time extract?

2009-12-24 Thread Adrian Klaver
On Thursday 24 December 2009 1:44:58 pm Israel Brewster wrote:
> This is sort of a PostgreSQL question/sort of a general SQL question,
> so I apologize if this isn't the best place to ask. At any rate, I
> know in PostgreSQL you can issue a command like 'SELECT
> "time"(timestamp_column) from table_name' to get the time part of a
> timestamp. The problem is that this command for some reason requires
> quotes around the "time" function name, which breaks the command when
> used in SQLite (I don't know about MySQL yet, but I suspect the same
> would be true there). The program I am working on is designed to work
> with all three types of databases (SQLite, PostgreSQL, and MySQL) so
> it would be nice (save me some programing) if there was a single SQL
> statement to get the time portion of a timestamp that would work with
> all three. Is there such a beast? On a related note, why do we need
> the quotes around "time" for the function to work in PostgreSQL? the
> date function doesn't need them, so I know it's not just a general
> PostgreSQL formating difference. Thanks :)
> ---
> Israel Brewster
> Computer Support Technician II
> Frontier Flying Service Inc.
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7250 x293
> ---

select cast(timestamp_column as time) from table_name
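
One caveat worth adding: SQLite has no native time type, so CAST(... AS time) merely applies numeric affinity there; its built-in time() function is the closer equivalent. A sketch of the per-database variants:

```sql
-- PostgreSQL and MySQL both accept the standard cast:
SELECT CAST(timestamp_column AS time) FROM table_name;

-- SQLite (no time type, so the cast would just yield a number):
SELECT time(timestamp_column) FROM table_name;
```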

-- 
Adrian Klaver
akla...@comcast.net




Re: [GENERAL] How to get a list of tables that have a particular column value?

2009-12-24 Thread Adrian Klaver
On Thursday 24 December 2009 5:35:10 am Rajan, Pavithra wrote:
>  Hello - Yes, I need to find the column value like '%Volt%' in any
> column of data_type (character varying) of any table. Basically what I
> need to do is go through each column of all tables and find any entries
> that have Voltage followed by some number, e.g. 'Voltage 4.8000'. Then I
> need to use the regexp_replace function to curtail the precision to two
> digits after the decimal instead of four.
>
> Eg: table name 'app' has a column named description which has 4 entries
> like
>
>    | description            |
>    | character varying(50)  |
>    |------------------------|
>    | Voltage 2.4000         |
>    | Voltage 4.8000         |
>    | Voltgae 3.0509         |
>    | Voltage 1.0010         |
>
> Then I run a sql file with this command and many other UPDATE commands
> for other tables that have similar entries in various columns.
>
> UPDATE app SET description = regexp_replace(description,'4.8000','4.80')
> where description like 'Volt%';
>
> Hence I need to know all the tables and their column names ("data_type:
> character varying") that have this 4-digit extension.
>
> Thank you.
>
>

Would it not be easier to dump the data, do this against the text dump, and 
then restore the data? 
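
If staying inside the database is preferred, a single UPDATE per table with a capturing regexp would also avoid enumerating every literal value. A sketch for the 'Voltage n.nnnn' layout shown above (8.3 escape-string syntax):

```sql
-- Keep two decimals and drop the rest for every voltage in one pass.
UPDATE app
SET description = regexp_replace(description,
                                 E'(Voltage [0-9]+\\.[0-9]{2})[0-9]+',
                                 E'\\1')
WHERE description LIKE 'Volt%';
```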

-- 
Adrian Klaver
akla...@comcast.net



[GENERAL] cross-database time extract?

2009-12-24 Thread Israel Brewster
This is sort of a PostgreSQL question/sort of a general SQL question,
so I apologize if this isn't the best place to ask. At any rate, I
know in PostgreSQL you can issue a command like 'SELECT
"time"(timestamp_column) from table_name' to get the time part of a
timestamp. The problem is that this command for some reason requires
quotes around the "time" function name, which breaks the command when
used in SQLite (I don't know about MySQL yet, but I suspect the same
would be true there). The program I am working on is designed to work
with all three types of databases (SQLite, PostgreSQL, and MySQL) so
it would be nice (save me some programming) if there was a single SQL
statement to get the time portion of a timestamp that would work with
all three. Is there such a beast? On a related note, why do we need
the quotes around "time" for the function to work in PostgreSQL? The
date function doesn't need them, so I know it's not just a general
PostgreSQL formatting difference. Thanks :)
---
Israel Brewster
Computer Support Technician II
Frontier Flying Service Inc.
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7250 x293
---

Re: [GENERAL] Updating from 8.2 to 8.4

2009-12-24 Thread Adrian Klaver
On Thursday 24 December 2009 11:20:35 am Mark Morgan Lloyd wrote:
> I was hoping to finally get the servers updated from 8.2 to 8.4 over the
> festive season, but by now I think I've left things too tight.
>
> Is it necessary to update the (Windows) ODBC driver as well? I've got a
> couple of app servers still on NT4 with ODBC 8.02.0100 which I'm trying
> to be careful with lest I put something on them which requires a later
> OS and can't back out.
>
> The apps are written in a rather old version of Delphi with BDE which is
> fine with 8.2. Trying to run against 8.4 I get "Couldn't perform the
> edit because another user changed the record."- looking back through
> this and other MLs I see suggestion that this could be caused by an
> excessive number of decimal places in the data (current case only
> contains integers, timestamps and text) or possibly by a transaction
> isolation issue.

My experience is that this is related to timestamps with fractional-second 
precision. The other thing to note is that in 8.4 the default for datetimes is 
now 64-bit integer datetimes, not the previous float datetimes.
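
If fractional seconds are indeed the trigger, one possible server-side workaround (a sketch, untested against BDE; table and column names are hypothetical) is to limit the columns' precision so nothing fractional is ever returned:

```sql
-- timestamp(0) keeps whole seconds only, so the value BDE reads back
-- always matches what it wrote.
ALTER TABLE some_table
    ALTER COLUMN modified_at TYPE timestamp(0) without time zone;

-- Or per query:
SELECT modified_at::timestamp(0) FROM some_table;
```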

>
> I'll carry on hacking at this from the app side but is there anything
> obvious that I've missed?
>
> --
> Mark Morgan Lloyd
> markMLl .AT. telemetry.co .DOT. uk
>
> [Opinions above are the author's, not those of his employers or colleagues]



-- 
Adrian Klaver
akla...@comcast.net



[GENERAL] Any need to copy args before SPI C function callls SQL function?

2009-12-24 Thread J. Greg Davidson
I have some SPI C functions which dispatch to a selected PostgreSQL
function based on the first argument value.  I have 3 questions
and am very open to any other advice you might have:

1. If any of the arguments are bigger than a word, e.g. text or arrays,
do I need to copy them into upper memory or can I just reuse the Datum
value passed to the C function in calling the SQL function?

2. When the returned value is bigger than a word I need to copy it
before returning it - is there a way for the SPI C function to do this
without having to know the detailed type of the value returned?  I'm
hoping there might be some handy formula with macros here.

3. I'm doing the calling using saved query plans - this seems to be the
only way to call an arbitrary function using the documented SPI API.
However, I see other code using undocumented functions to "directly"
call PostgreSQL functions from C, which looks easier.  I want to use the
fastest method possible that will not break with future releases - what
do you advise?

Thanks for your answers to these questions and thanks for the excellent
responses to my earlier questions!

Happy New Year,

_Greg

J. Greg Davidson



[GENERAL] Optimizing data layout for reporting in postgres

2009-12-24 Thread Doug El
Hi,

I have to summarize some data to be queried and there are about 5 million raw 
records a day I need to summarize. In a nutshell I don't think I'm laying it 
out in an optimal fashion, or not taking advantage of some postgres features 
perhaps, I'm looking for feedback.

The raw incoming data is in the form of

ip string uint uint uint uint

So for any given record say:

8.8.8.8 helloworld 1 2 3 4

First, I need to be able to query how many total and how many unique requests 
there were (unique by ip), over given time frame.

So for the below data on the same day that's total two, but one unique

8.8.8.8 helloworld 1 2 3 4
8.8.8.8 helloworld 1 2 3 4

Further for all fields (but ip which is not stored) I need to be able to query 
and get total/unique counts based off any combination of criteria.

So if I refer to them as columns A-E

A       B     C     D     E
string  uint  uint  uint  uint

I need to be able to say how many rows have col A = 'helloworld' and,
say, col C = 4. Or perhaps col E = 4 and col C < 3, etc.; any combination.

The only way I could see to do this was to take the 5 million daily raw 
records, sort them, then summarize that list with total and unique counts as so:

A     B         C     D     E     F      G      H
date  stringid  uint  uint  uint  uint   total  unique

Primary key is A-F (date stringid uint uint uint uint)

This gives me a summary of about 900k records a day from the 4 million raw.

I have things organized with monthly tables and yearly schemas. The string 
column also has its own monthly lookup table, so there's just a string id 
that's looked up.

The database however is still quite huge and grows very fast, even simple daily 
queries are fairly slow even on a fast server. I have a few indexes on what I 
know are common columns queried against but again, any combination of data can 
be queried, and  indexes do increase db size of course.

I feel like there's got to be some better way to organize this data and make it 
searchable.  Overall speed is more important than disk space usage for this 
application. 

Perhaps there are some native features in postgres I'm not taking advantage of 
here, that would tip the scales in my favor. I've done a fair amount of 
research on the configuration file settings and feel like I have a fairly 
optimized config for it as far as that goes, and have done the things mentioned 
here: http://wiki.postgresql.org/wiki/SlowQueryQuestions

Very much appreciate any suggestions, thank you in advance.

Doug


  




[GENERAL] Installation problem

2009-12-24 Thread Alexander Solovkin
Hello! I have problems installing PostgreSQL 8.4.2 on Windows 7 x64.
The installation passes normally, but at the end there is an error message
with the following content: "Problem running post-install step. Installation
may not complete correctly. The database cluster initialisation failed."
What is the reason for this error? Thanks in advance for the help.


[GENERAL] Updating from 8.2 to 8.4

2009-12-24 Thread Mark Morgan Lloyd
I was hoping to finally get the servers updated from 8.2 to 8.4 over the 
festive season, but by now I think I've left things too tight.


Is it necessary to update the (Windows) ODBC driver as well? I've got a 
couple of app servers still on NT4 with ODBC 8.02.0100 which I'm trying 
to be careful with lest I put something on them which requires a later 
OS and can't back out.


The apps are written in a rather old version of Delphi with BDE which is 
fine with 8.2. Trying to run against 8.4 I get "Couldn't perform the 
edit because another user changed the record."- looking back through 
this and other MLs I see suggestion that this could be caused by an 
excessive number of decimal places in the data (current case only 
contains integers, timestamps and text) or possibly by a transaction 
isolation issue.


I'll carry on hacking at this from the app side but is there anything 
obvious that I've missed?


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



Re: [GENERAL] targetted array element modification or delete without knowing the index?

2009-12-24 Thread Richard Broersma
On Thu, Dec 24, 2009 at 10:01 AM, Gauthier, Dave
 wrote:
> Is there a way to modify or delete an element of an array with knowledge
> only of the element’s value?  Maybe an array index finder would help?

I haven't tried it myself, but would the array "contains" operator work for you?

http://www.postgresql.org/docs/8.4/static/functions-array.html#ARRAY-OPERATORS-TABLE
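
For locating the subscript itself, a sketch that should work on 8.4 (generate_subscripts is built in there; a native array_position only arrived in much later releases), using the hypothetical indexfinder name from Dave's mail:

```sql
-- Scan the array's subscripts and return the first one whose element
-- matches the given value (NULL if there is no match).
CREATE FUNCTION indexfinder(arr text[], val text) RETURNS int AS $$
    SELECT i
    FROM generate_subscripts($1, 1) AS i
    WHERE $1[i] = $2
    ORDER BY i
    LIMIT 1;
$$ LANGUAGE sql;

-- Untested sketch of the update from the original mail:
UPDATE foo SET arr[indexfinder(arr, 'b')] = 'z' WHERE name = 'joe';
```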


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



[GENERAL] targetted array element modification or delete without knowing the index?

2009-12-24 Thread Gauthier, Dave
Is there a way to modify or delete an element of an array with knowledge only 
of the element's value?  Maybe an array index finder would help?

For example

create table foo (name text, arr text[]);
insert into foo (name,arr) values ('joe',ARRAY['a','b','c']);


update foo set arr[indexfinder('b')] = 'z';

This would update arr[2], because that's the element with value 'b'

Thanks !


Re: [GENERAL] defining yuor own commands in PG ?

2009-12-24 Thread Israel Brewster


On Dec 21, 2009, at 9:34 AM, Pavel Stehule wrote:

> 2009/12/21 Israel Brewster :
>> On Dec 19, 2009, at 2:59 AM, Filip Rembiałkowski wrote:
>>> 2009/12/18 Gauthier, Dave 
>>>> Can you define your own commands in PG.
>>>
>>> In psql, yes:
>>>
>>> \set sel 'SELECT * FROM'
>>> :sel clients;
>>>
>>> \set desc '\\d'
>>> :desc table
>>>
>>>> E.g., if users from other DBs use “describe foo” to get the metadata for
>>>> foo, is there a way I can create a command “describe” to = “\d” ?
>>>
>>> But what's wrong with "\d"? For me, it's like the first thing people learn
>>> when practicing postgres.
>>> It is even faster to type than DESCRIBE, right?
>>
>> Just to put in my 2¢, I write a program that can use a MySQL, PostgreSQL, or
>> SQLite database as its backend, depending on user preference. As a result, I
>> spend time in all three databases each day. It can get a bit confusing at
>> times trying to remember "ok, in this terminal window I need to do a
>> 'describe foo', in that one I need to do a '\d foo' and in the third
>> '.schema foo'". So being able to simplify that at least somewhat is nice :)
>> That said, I'd be just as happy getting MySQL to recognize the \d command as
>> I would be getting postgres to recognize describe, perhaps more so.
>
> do you know gnome-db console?

That looks very handy. Of course, I'm using a Mac, not linux, so it may
not work for me, but I'll have to see if I can get it to compile and
run for me. Thanks for the pointer!

> http://blogs.gnome.org/vivien/2007/10/25/sql-console/
> http://library.gnome.org/devel/libgda/unstable/gda-sql-manual-run.html
>
> regards
> Pavel
>
>>> IMO, when you will bend postgres to mimic other DBMS, you will hide its
>>> real power from users.
>>>
>>> --
>>> Filip Rembiałkowski
>>> JID,mailto:filip.rembialkow...@gmail.com
>>> http://filip.rembialkowski.net/

---
Israel Brewster
Computer Support Technician II
Frontier Flying Service Inc.
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7250 x293
---






[GENERAL] /var/lib/pgsql/data/pg_xlog/000000010000000000000000,two process access it ?

2009-12-24 Thread Thomas
I guess PID 19045 writes data to the log file first when I insert data
into a table, but why did the writer process also access the log file?
Could someone tell me some details?
FYI:
postgres: writer process's PID is 18848.
postgres test [local] idle's PID is 19045.

[r...@localhost tmp]# lsof /var/lib/pgsql/data/pg_xlog/000000010000000000000000
COMMAND     PID     USER   FD   TYPE DEVICE     SIZE    NODE NAME
postmaste 18848 postgres    4u   REG  253,0 16777216 1770912 /var/lib/pgsql/data/pg_xlog/000000010000000000000000
postmaste 19045 postgres   37u   REG  253,0 16777216 1770912 /var/lib/pgsql/data/pg_xlog/000000010000000000000000


[r...@localhost tmp]# ps aux|grep postgre
postgres  2429  0.0  0.6  21044  3364 ?      S   07:32  0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
postgres  2434  0.0  0.1  10824   804 ?      S   07:32  0:00 postgres: logger process
root      9539  0.0  0.2   4908  1232 pts/3  S   15:31  0:00 su - postgres
postgres  9540  0.0  0.2   4528  1480 pts/3  S   15:31  0:00 -bash
postgres 18848  0.0  0.2  21180  1324 ?      S   17:23  0:00 postgres: writer process
postgres 18849  0.0  0.1  11824   780 ?      S   17:23  0:00 postgres: stats buffer process
postgres 18850  0.0  0.1  11056  1000 ?      S   17:23  0:00 postgres: stats collector process
postgres 19029  0.0  0.4   8292  2296 pts/3  S+  17:25  0:00 psql
postgres 19045  0.0  0.7  21888  4016 ?      S   17:25  0:00 postgres: postgres test [local] idle
root     19607  0.0  0.1   3912   696 pts/2  R+  17:31  0:00 grep postgre




Re: [GENERAL] postgres: writer process,what does this process actually do?

2009-12-24 Thread Thomas
On Dec 23, 3:44 pm, r...@iol.ie ("Raymond O'Donnell") wrote:
> On 23/12/2009 02:56, Thomas wrote:
>
> > And could you give me some info about postgres internals? Such as
> > ebooks or online articles.
>
> There's quite a bit in the manual:
>
>  http://www.postgresql.org/docs/8.4/interactive/internals.html
>
> Ray.
>
> --
> Raymond O'Donnell :: Galway :: Ireland
> r...@iol.ie
>

Tks man.
:)



Re: [GENERAL] How to get a list of tables that have a particular column value?

2009-12-24 Thread Rajan, Pavithra

 Hello - Yes, I need to find the column value like '%Volt%' in any
column of data_type (character varying) of any table. Basically what I
need to do is go through each column of all tables and find any entries
that have Voltage followed by some number, e.g. 'Voltage 4.8000'. Then I
need to use the regexp_replace function to curtail the precision to two
digits after the decimal instead of four.

Eg: table name 'app' has a column named description which has 4 entries
like

   | description            |
   | character varying(50)  |
   |------------------------|
   | Voltage 2.4000         |
   | Voltage 4.8000         |
   | Voltgae 3.0509         |
   | Voltage 1.0010         |

Then I run a SQL file with this command and many other UPDATE commands
for other tables that have similar entries in various columns.

UPDATE app SET description = regexp_replace(description, '4.8000', '4.80')
 WHERE description LIKE 'Volt%';
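Rather than one UPDATE per value, a single generic pattern could trim every over-precise number in one pass (this regex is an illustrative sketch, not from the original mail; the E'' form is used because 8.3 defaults to non-standard string escaping):

```sql
-- Trim any value like 'Voltage 4.8000' to two decimals in one statement.
-- The WHERE clause restricts the update to rows with 3+ decimal digits.
UPDATE app
   SET description = regexp_replace(description,
                       E'(\\d+\\.\\d\\d)\\d+', E'\\1', 'g')
 WHERE description ~ E'\\d+\\.\\d\\d\\d';
```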

Hence I need to know all the tables and their column names (data_type:
character varying) that have these 4-digit decimals.

Thank you.




-Original Message-
From: John R Pierce [mailto:pie...@hogranch.com] 
Sent: Wednesday, December 23, 2009 3:11 PM
To: Rajan, Pavithra 
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How to get a list of tables that have a
particular column value?

Rajan, Pavithra wrote:
>
> Hello - I would like to know if there is a way to find all the table
> names in a database that have a particular *column value*, e.g. "volt".
> I.e., given a column value (not a column name), how do I find which
> tables and their column names have it? Thank you.
>

Do you want to search for values 'volt' in -any- column of any table, or
just in certain columns if they exist?

you can enumerate the tables by querying information_schema.tables, and
you can enumerate the columns of a table via information_schema.columns,
so I'd imagine you would need a script or program that iterates through
the tables, and through the columns of each table, then constructs and
executes a query of that column of that table for your value.  You'll
probably want to check the column datatype first and not query numeric
fields.  
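The enumeration step can be sketched directly against the standard information_schema views (the schema filter is an assumption to skip system catalogs):

```sql
-- List candidate text columns to scan, skipping system schemas
SELECT table_schema, table_name, column_name
  FROM information_schema.columns
 WHERE data_type = 'character varying'
   AND table_schema NOT IN ('pg_catalog', 'information_schema')
 ORDER BY table_schema, table_name, column_name;
```

A driver script would then build one query per row of this result to probe each column for the target value.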

When you do these queries, just what is it you're looking for: the
simple existence of the value in table X, as a yes/no thing? Since each
table has its own structure, it's unclear what other useful info you
could extract.

On a large database this is going to be very time consuming, as it will
likely require sequentially scanning everything multiple times if you
want to look at every text column of every table.

Now, if this is a requirement to look for this value in a specific
column of various specific tables, perhaps that column should be its OWN
table (id serial, thing text), with the other tables referencing it as a
foreign key.





Re: [GENERAL] Session based transaction!!

2009-12-24 Thread Tino Wildenhain
Hi,

S Arvind schrieb:
> Hi Everyone,
> 
> In a web application running on Tomcat with Postgres 8.3 as the DB, I need
> to know whether my given task is feasible or not.
>    All the DB operations done by that web application must be
> rolled back at the end (when the session dies), and the DB data must be
> the same as at the start of the session. It's like virtually placing the
> data for that session alone and rolling the DB back to the original
> template. So whenever a user enters the web application, the initial data
> must be the template data only; he can perform any operation, with the
> changed data visible to that user alone, and when the user leaves
> (session destroyed) all the data changed in that time must be rolled
> back.
> 
> I thought of doing this as follows: when the session is created in the
> application, a transaction is started and all the activities are done on
> the DB, but the transaction is not committed or rolled back across
> requests; it is done across the session, by not destroying the connection
> and using it for all the work done by that session. When the session is
> destroyed, we can roll back the entire transaction.
> 
> Is there any other way to achieve these requirements in Postgres?
> Thanks in advance.

Isn't that the default? If not, you should handle your database
interaction with some high-priority handler which runs first and ends
last in your request, handles all exceptions bubbling up from other
activities inside it, and does a rollback in that case. (That's the way
Zope does it - but it has a transaction framework. Not sure if Tomcat
offers the same easy hooks, but there should be a way.)
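At the SQL level, the session-lifetime pattern being discussed is simply one long-lived transaction pinned to one connection (a sketch; note that connection pooling across sessions would defeat it):

```sql
-- On web-session start (this connection is reserved for the session):
BEGIN;

-- ...every statement the session issues runs inside this transaction;
-- its changes are visible to this connection only...

-- On web-session end:
ROLLBACK;  -- the database returns to its template state
```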

Regards
Tino




Re: [GENERAL] WARNING: nonstandard use of escape in a string literal

2009-12-24 Thread Alban Hertroys
On 23 Dec 2009, at 22:58, Patrick M. Rutkowski wrote:

> In that case, let me put it this way:
> 
> Is the query
> UPDATE my_table SET colname = NULL WHERE colname ~ '^\s*$'
> 
> already all correct and standard conforming. Such that all I need to
> do is turn on standard_conforming_strings to have it stop complaining
> at me?
> 
> In other words: I'm already doing it right, no?


Yes, for this query you are. You may have other queries that rely on 
non-standard escaping though, and those would break if you toggle that setting.
Alternatively you can just turn off the warning (escape_string_warning).

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] WARNING: nonstandard use of escape in a string literal

2009-12-24 Thread Albe Laurenz
Patrick M. Rutkowski wrote:
> Is the query
> UPDATE my_table SET colname = NULL WHERE colname ~ '^\s*$'
> 
> already all correct and standard conforming. Such that all I need to
> do is turn on standard_conforming_strings to have it stop complaining
> at me?

Precisely.

> In other words: I'm already doing it right, no?

If you define "right" as "standard compliant", yes.
If you define "right" as "in accordance with the default
behaviour of PostgreSQL", then no.
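Concretely, the two "right" choices discussed above look like this (a sketch of the alternatives, using the query from the thread):

```sql
-- Standard-compliant: backslashes are literal in plain strings
SET standard_conforming_strings = on;
UPDATE my_table SET colname = NULL WHERE colname ~ '^\s*$';

-- Default 8.x behaviour: spell out the escape-string form instead
UPDATE my_table SET colname = NULL WHERE colname ~ E'^\\s*$';
```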

Yours,
Laurenz Albe



[GENERAL] Session based transaction!!

2009-12-24 Thread S Arvind
Hi Everyone,

In a web application running on Tomcat with Postgres 8.3 as the DB, I need to
know whether my given task is feasible or not.
   All the DB operations done by that web application must be rolled
back at the end (when the session dies), and the DB data must be the same as
at the start of the session. It's like virtually placing the data for that
session alone and rolling the DB back to the original template. So whenever a
user enters the web application, the initial data must be the template data
only; he can perform any operation, with the changed data visible to that
user alone, and when the user leaves (session destroyed) all the data changed
in that time must be rolled back.

I thought of doing this as follows: when the session is created in the
application, a transaction is started and all the activities are done on the
DB, but the transaction is not committed or rolled back across requests; it
is done across the session, by not destroying the connection and using it for
all the work done by that session. When the session is destroyed, we can roll
back the entire transaction.

Is there any other way to achieve these requirements in Postgres?
Thanks in advance.

Arvind S


"Many of life's failures are people who did not realize how close they were
to success when they gave up."
-Thomas Edison


Re: [GENERAL] best way to manage indexes

2009-12-24 Thread Craig Ringer

On 24/12/2009 6:10 AM, Jamie Kahgee wrote:

what would be considered "best practice" for my situation?

I have a table *member*, with column *name* that I want to put an index
on, because it is searched quite frequently. When I create my SQL search
string, the name will consist only of alphanumeric characters and be
compared against lowercase matches.


As already noted, it's worth looking into full-text search.


SELECT *
   FROM member
  WHERE lower(regexp_replace(member_name, '[^[:alnum:]]', '', 'g')) ~*
'search_string'
 OR lower(metaphone(name, 4)) = lower(metaphone('search_string', 4));

is it better to create an index that matches my search?
create index member_name_idx on member (lower(regexp_replace(name,
'[^[:alnum:]]', '', 'g')));


You can't really build an index on the regex match expression ( ~* ) 
above, because the results of the expression depend on `search_string' 
via the non-btree-indexable operator ~* . Btree indexes can only be used 
for equality, inequality, greater-than or less-than operators.


If you can use one of "=", "!=", "<" or ">" as your test expression 
instead, then you could usefully build a functional index. In that case, 
you could wrap the expression that mangles 'member_name' in an immutable 
SQL function. You'd then create the index on that, and use that function 
in your queries, eg:


   WHERE simplify_member(member_name) = 'search_string'

The function makes life MUCH easier on the planner, since it can easily 
tell when your search expressions match the functional index. It also 
makes maintenance easier. If you decide to change the function (say, to 
add to the allowed char list) despite it being marked immutable, you 
will have to drop and re-create the index.
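The wrapper described above might look like this (the function body is an assumption built from the regex in the question; names are illustrative):

```sql
-- Immutable wrapper around the name-mangling expression
CREATE OR REPLACE FUNCTION simplify_member(text) RETURNS text AS $$
    SELECT lower(regexp_replace($1, '[^[:alnum:]]', '', 'g'));
$$ LANGUAGE sql IMMUTABLE;

-- Functional index on the wrapped expression
CREATE INDEX member_name_simple_idx ON member (simplify_member(name));

-- Queries that use the same function can hit the index:
SELECT * FROM member WHERE simplify_member(name) = 'searchstring';
```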


However, if you can't use a btree-indexable operator when comparing 
against your search string, the index can't be used anyway. You'd have 
to put something together using GiST, if it's even possible at all.



do I need two indexes?  one for both search parameters (regexp & metaphone)?


Yes.

--
Craig Ringer
