Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Joshua D. Drake

Steve Atkins wrote:

The one place where Compression is an immediate benefit is the wire. 
It is easy to forget that one of our number one bottlenecks (even at 
gigabit) is the amount of data we are pushing over the wire.


Wouldn't "ssl_ciphers=NULL-MD5" or somesuch give zlib compression over 
the wire?


I don't think so.

Joshua D. Drake
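For what it's worth, a minimal sketch of the settings involved, assuming stock OpenSSL on both ends: `ssl_ciphers` only selects cipher suites, while any TLS-level zlib compression is a separate feature negotiated by the OpenSSL libraries themselves (and only if both sides were built with zlib support):

```
# postgresql.conf (sketch -- values are illustrative, not a recommendation)
ssl = on
ssl_ciphers = 'NULL-MD5'   # null *encryption* with an MD5 MAC; this says
                           # nothing about compression, which OpenSSL
                           # negotiates independently of the cipher suite
```

So a NULL cipher avoids encryption overhead, but it does not by itself turn on compression over the wire.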



Cheers,
  Steve





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Steve Atkins


On Oct 29, 2008, at 10:43 PM, Joshua D. Drake wrote:


Steve Atkins wrote:

On Oct 29, 2008, at 9:50 PM, Grant Allen wrote:



One other thing I forgot to mention:  Compression by the DB trumps  
filesystem compression in one very important area -  
shared_buffers! (or buffer_cache, bufferpool or whatever your  
favourite DB calls its working memory for caching data).  Because  
the data stays compressed in the block/page when cached by the  
database in one of its buffers, you get more bang for your memory  
buck in many circumstances!  Just another angle to contemplate :-)
The additional latency added by decompression is reasonably small  
compared with traditional disk access time. It's rather large  
compared to memory access time.


The one place where Compression is an immediate benefit is the wire.  
It is easy to forget that one of our number one bottlenecks (even at  
gigabit) is the amount of data we are pushing over the wire.


Wouldn't "ssl_ciphers=NULL-MD5" or somesuch give zlib compression over  
the wire?


Cheers,
  Steve




Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Joshua D. Drake

Steve Atkins wrote:


On Oct 29, 2008, at 9:50 PM, Grant Allen wrote:



One other thing I forgot to mention:  Compression by the DB trumps 
filesystem compression in one very important area - shared_buffers! 
(or buffer_cache, bufferpool or whatever your favourite DB calls its 
working memory for caching data).  Because the data stays compressed 
in the block/page when cached by the database in one of its buffers, 
you get more bang for your memory buck in many circumstances!  Just 
another angle to contemplate :-)


The additional latency added by decompression is reasonably small 
compared with traditional disk access time. It's rather large compared 
to memory access time.


The one place where Compression is an immediate benefit is the wire. It 
is easy to forget that one of our number one bottlenecks (even at 
gigabit) is the amount of data we are pushing over the wire.
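To put a rough number on that, here is a small Python sketch (illustrative only; synthetic data, not the actual libpq wire format) showing what zlib does to a repetitive, result-set-like payload:

```python
import zlib

# Synthetic "result set": many rows of repetitive textual fields,
# similar in spirit to what a wide SELECT pushes over the wire.
rows = b"".join(
    b"ct1417659|ct1417659.fits.gz|unknown|2008-10-27 00:00:00\n"
    for _ in range(10_000)
)

compressed = zlib.compress(rows, level=6)

print(f"raw: {len(rows):,} bytes  compressed: {len(compressed):,} bytes")

# Round trip: decompression restores the exact original bytes.
assert zlib.decompress(compressed) == rows
```

Real query results are less redundant than this, so the ratio will be smaller in practice, but repetitive text columns routinely compress many times over.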


Joshua D. Drake



Cheers,
  Steve







Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Steve Atkins


On Oct 29, 2008, at 9:50 PM, Grant Allen wrote:



One other thing I forgot to mention:  Compression by the DB trumps  
filesystem compression in one very important area - shared_buffers!  
(or buffer_cache, bufferpool or whatever your favourite DB calls its  
working memory for caching data).  Because the data stays compressed  
in the block/page when cached by the database in one of its buffers,  
you get more bang for your memory buck in many circumstances!  Just  
another angle to contemplate :-)


The additional latency added by decompression is reasonably small  
compared with traditional disk access time. It's rather large compared  
to memory access time.


Cheers,
  Steve




[GENERAL] phone database schema

2008-10-29 Thread novice
Firstly, thank you very much for all the advice that has been passed
along.  These groups have taught me many new tricks, and I wish all
of you the best.

Now, I'm trying to create a schema for a mobile phone custodian and
billing system.  Any advice would be greatly appreciated; I've
included a schema that I've been working on and some sample data.
Personally I feel this schema can be improved considerably, but I need
some help in designing it.

-- start script --

-- mobile user --
create table mobile_user (
   user_id serial not null,
   first_name text,
   last_name text,
   department text,
   section text,
   CONSTRAINT mobile_user_pkey PRIMARY KEY (user_id)
);

INSERT INTO mobile_user(first_name, last_name, department, section)
VALUES ('fuser1', 'luser2', '106', 'driver');

-- mobile --
create table mobile_number (
   mobile_no text not null,
   sim_no text,
   pin_code text,
   puk_code  text,
   issue_date date,
   return_date date,
   status boolean,
   CONSTRAINT mobile_number_pkey PRIMARY KEY(mobile_no)
);

INSERT INTO mobile_number(mobile_no, sim_no, pin_code, puk_code,
issue_date, status)
VALUES ('09455225998', 'X1255849', '', '', '2008-10-10','y');


-- device --
create table accessory (
   accessory_id serial not null,
   make text,
   model text,
   serial text,
   price money DEFAULT '$0.00',
   CONSTRAINT accessory_id_pkey PRIMARY KEY (accessory_id)
);

INSERT INTO accessory(make, model, serial, price)
VALUES ('NOKIA', 'N70', '1234', '151.00');


-- custodian --
create table mobile_custodian (
   custodian_id serial not null,
   user_id int references mobile_user (user_id),
   mobile_no text references mobile_number (mobile_no),
   accessory_id int references accessory (accessory_id),
   issue_date date not null,
   return_date date,
   status boolean not null,
   CONSTRAINT mobile_custodian_id_pkey PRIMARY KEY (custodian_id)
);
INSERT INTO mobile_custodian(user_id, mobile_no, accessory_id,
issue_date, status)
VALUES (1, '09455225998', 1, '2008-10-11', 'y');


-- billing --
create table bill_period (
   bill_id text not null, -- eg. YYYY-MM
   start_date timestamp,
   end_date timestamp,
   CONSTRAINT bill_id_pkey PRIMARY KEY(bill_id)
);

INSERT INTO bill_period(bill_id, start_date, end_date)
VALUES ('2008-07', '2008-06-30 00:00', '2008-08-03 23:59');
INSERT INTO bill_period(bill_id, start_date, end_date)
VALUES ('2008-08', '2008-08-04 00:00', '2008-08-31 23:59');


create table call (
   call_id serial,
   bill_id text references bill_period (bill_id),
   mobile_no text references mobile_number (mobile_no),
   datetime timestamp,
   origin text,
   destination text,
   call_no text,
   duration interval,
   charge float,
   CONSTRAINT call_id_pkey PRIMARY KEY(call_id)
);

INSERT INTO call (bill_id, mobile_no, datetime, origin, destination,
call_no, duration, charge)
VALUES ('2008-07', '09455225998', '2007-07-10 10:00', 'london',
'new york', '12345632', '0:12:05', 5.28);

INSERT INTO call (bill_id, mobile_no, datetime, origin, destination,
call_no, duration, charge)
VALUES ('2008-08', '09455225998', '2007-08-12 13:27', 'rome',
'canada', '325699845','0:15:57', 3.15);



--  Also, here's a query that I'm playing around with, which I think is
-- going to be used a lot to produce individual reports.

select
mobile_user.first_name,
mobile_user.last_name,
call.mobile_no,
call.origin,
call.destination,
call.call_no,
call.duration,
call.charge

FROM (call INNER JOIN mobile_custodian ON call.mobile_no =
mobile_custodian.mobile_no) INNER JOIN mobile_user ON
mobile_custodian.user_id = mobile_user.user_id;

-- end script --


 first_name | last_name |  mobile_no  | origin | destination |  call_no  | duration | charge
------------+-----------+-------------+--------+-------------+-----------+----------+--------
 user1      | last1     | 09455225998 | rome   | canada      | 325699845 | 00:15:57 |    5.2
 user1      | last1     | 09455225998 | london | new york    | 12345632  | 00:12:05 |    5.2
(2 rows)
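The join logic above can be sanity-checked outside PostgreSQL; the following is a cut-down sketch using Python's built-in sqlite3 (types simplified, and only the columns the report touches):

```python
import sqlite3

# In-memory mock-up of the three tables the report touches
# (types simplified; SQLite has no serial, money, or interval types).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mobile_user (
    user_id INTEGER PRIMARY KEY,
    first_name TEXT, last_name TEXT);
CREATE TABLE mobile_custodian (
    custodian_id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES mobile_user(user_id),
    mobile_no TEXT);
CREATE TABLE call (
    call_id INTEGER PRIMARY KEY,
    mobile_no TEXT, origin TEXT, destination TEXT, charge REAL);

INSERT INTO mobile_user VALUES (1, 'fuser1', 'luser2');
INSERT INTO mobile_custodian VALUES (1, 1, '09455225998');
INSERT INTO call (mobile_no, origin, destination, charge) VALUES
    ('09455225998', 'london', 'new york', 5.28),
    ('09455225998', 'rome', 'canada', 3.15);
""")

# Same join shape as the report query: call -> custodian -> user.
report = conn.execute("""
    SELECT u.first_name, u.last_name, c.mobile_no,
           c.origin, c.destination, c.charge
    FROM call AS c
    JOIN mobile_custodian AS mc ON c.mobile_no = mc.mobile_no
    JOIN mobile_user AS u ON mc.user_id = u.user_id
""").fetchall()

for row in report:
    print(row)
```

One caveat visible from the schema: joining call to mobile_custodian on mobile_no alone will attribute calls to every custodian record for that number, so if a number is ever reassigned (a new custodian row with a later issue_date), the report would also need a filter on the custodianship dates or status.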


Many thanks in advance :)



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Grant Allen

Ron Mayer wrote:

Grant Allen wrote:
...warehouse...DB2...IBM is seeing typical storage savings in the 
40-60% range


Sounds about the same as what compressing file systems claim:

http://opensolaris.org/os/community/zfs/whatis/
 "ZFS provides built-in compression. In addition to
  reducing space usage by 2-3x, compression also reduces
  the amount of I/O by 2-3x. For this reason, enabling
  compression actually makes some workloads go faster."

I do note that Netezza got a lot of PR around their
compression release; claiming it doubled performance.
Wonder if they added that at the file system or higher
in the DB.



I just so happen to have access to a Netezza system :-) I'll see if I 
can find out.


One other thing I forgot to mention:  Compression by the DB trumps 
filesystem compression in one very important area - shared_buffers! (or 
buffer_cache, bufferpool or whatever your favourite DB calls its working 
memory for caching data).  Because the data stays compressed in the 
block/page when cached by the database in one of its buffers, you get 
more bang for your memory buck in many circumstances!  Just another angle 
to contemplate :-)


Ciao
Fuzzy
:-)


Dazed and confused about technology for 20 years
http://fuzzydata.wordpress.com/




Re: [GENERAL] Why Postgresql Public Schema Is Not Owned By The DB Owner By Default

2008-10-29 Thread Eus
Hi Ho!

--- On Thu, 10/30/08, Tom Lane <[EMAIL PROTECTED]> wrote:

> Because it'd be extremely difficult to do otherwise
> (given the way that
> CREATE DATABASE works)

Understood.

> and it's not at all clear that
> it'd be a good
> idea anyway.

Could that be clarified by looking at the kinds of security breaches that could 
be exploited by users who are not the owner of the DB when the public schema is 
owned by the DB owner (i.e., not a SUPERUSER) instead of by "postgres"?

I am hoping to get a list of Achilles' heels that I need to consider when 
reassigning ownership of a DB's public schema from the default "postgres" to a 
DB owner that is not a SUPERUSER.

>   regards, tom lane

Best regards,
Eus


  



Re: [GENERAL] Replication with slony-I

2008-10-29 Thread Abdul Rahman
Thank you so much, Ray!

I succeeded in setting up replication to an existing database. I tested on 
dummy databases, but I am sure I can apply it to a live project.

Regards,
Abdul Rehman. 



  

Re: [GENERAL] Why Postgresql Public Schema Is Not Owned By The DB Owner By Default

2008-10-29 Thread Tom Lane
Eus <[EMAIL PROTECTED]> writes:
> Why does Postgresql by default assign the ownership of the public schema of a 
> DB to "postgres" instead of the owner of the DB itself?

Because it'd be extremely difficult to do otherwise (given the way that
CREATE DATABASE works) and it's not at all clear that it'd be a good
idea anyway.

regards, tom lane



Re: [GENERAL] valid use of wildcard

2008-10-29 Thread Tom Lane
Klint Gore <[EMAIL PROTECTED]> writes:
> Surprisingly, '2008-10-27%' casts to a date in 8.3.3.

Yeah, the datetime input code is pretty willing to overlook unexpected
punctuation.  There are enough odd formats out there that I'm not sure
tightening it up would be a good idea.

regards, tom lane



Re: [GENERAL] valid use of wildcard

2008-10-29 Thread Klint Gore

Scott Marlowe wrote:

On Wed, Oct 29, 2008 at 5:04 PM, Irene Barg <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Is the following query a valid use of the 'wildcard' in (='2008-10-27%')?
>
>> [EMAIL PROTECTED] arcsoft]$ psql metadata
>> Password: Welcome to psql 8.1.9, the PostgreSQL interactive terminal.
>>
>> metadata=# SELECT * FROM viewspace.siap AS t WHERE
>> t."startDate"='2008-10-27%' AND t.prop_id LIKE '%' LIMIT 1000;
>
> Causes the %CPU to jump and process lingers for over an hour.

Bad move.  dates aren't strings, and their format can change based on
what you've got set for datestyle.

If you want a start date (that's a date or a timestamp) then use the
proper operators

where startDate='2008-10-27'

If startDate is a text / varchar type then you need to change it to a
date.  storing dates in strings is bad.
  


Surprisingly, '2008-10-27%' casts to a date in 8.3.3.  I was expecting 
the planner to cast the field to string to compare it (or throw an error 
about implicit casting), but the literal goes to the field type (see 
explain on a timestamp field below).  Does the % have any special 
meaning in casts to date/timestamp?


postgres=# select version();
                       version
------------------------------------------------------
 PostgreSQL 8.3.3, compiled by Visual C++ build 1400
(1 row)

postgres=# select '2008-10-27%'::date;
    date
------------
 2008-10-27
(1 row)

postgres=# explain select * from data where "timestamp" = '2008-10-27%';
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Seq Scan on data  (cost=0.00..504.68 rows=2 width=27)
   Filter: ("timestamp" = '2008-10-27 00:00:00'::timestamp without time zone)
(2 rows)

postgres=#
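PostgreSQL's datetime parser is deliberately forgiving about trailing punctuation; a strict parser behaves differently. A small Python illustration of the contrast (not PostgreSQL's algorithm, just a strict ISO-style parse for comparison):

```python
from datetime import datetime

def parse_strict(s: str) -> datetime:
    # Strict ISO-style parse: any trailing junk such as '%' raises ValueError.
    return datetime.strptime(s, "%Y-%m-%d")

print(parse_strict("2008-10-27"))      # a clean literal parses

try:
    parse_strict("2008-10-27%")        # the stray wildcard is rejected
except ValueError as exc:
    print("rejected:", exc)
```

PostgreSQL instead skips over the unexpected `%` and still produces 2008-10-27, which is why the literal quietly becomes a plain equality comparison rather than any kind of pattern match.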


klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]




Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Ron Mayer

Grant Allen wrote:
...warehouse...DB2...IBM is seeing typical 
storage savings in the 40-60% range


Sounds about the same as what compressing file systems claim:

http://opensolaris.org/os/community/zfs/whatis/
 "ZFS provides built-in compression. In addition to
  reducing space usage by 2-3x, compression also reduces
  the amount of I/O by 2-3x. For this reason, enabling
  compression actually makes some workloads go faster."

I do note that Netezza got a lot of PR around their
compression release; claiming it doubled performance.
Wonder if they added that at the file system or higher
in the DB.



[GENERAL] Why Postgresql Public Schema Is Not Owned By The DB Owner By Default

2008-10-29 Thread Eus
Hi Ho!

As a new user of Postgresql 8.3.3, I came across this common error message when 
restoring a database previously dumped from another machine:

15: ERROR:  must be owner of schema public

when it came to this line in the dump file:

COMMENT ON SCHEMA public IS 'Standard public schema';

And, also the following warning messages:

193842: WARNING:  no privileges could be revoked for "public"
193843: WARNING:  no privileges could be revoked for "public"
193844: WARNING:  no privileges were granted for "public"
193845: WARNING:  no privileges were granted for "public"
193846: WARNING:  no privileges were granted for "public"

For the following lines in the dump file:

REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM "my_role_1";
GRANT ALL ON SCHEMA public TO "my_role_1";
GRANT ALL ON SCHEMA public TO PUBLIC;
GRANT USAGE ON SCHEMA public TO "my_role_2";

All of which can be solved when the schema public is owned by the owner of the 
DB, which is "my_role_1", by issuing:

ALTER SCHEMA public OWNER TO my_role_1;

So, the question is:
Why does Postgresql by default assign the ownership of the public schema of a 
DB to "postgres" instead of the owner of the DB itself?

What does it entail when by default the ownership of the public schema is given 
to the owner of the DB (from security or other aspects)?

I have researched the archive of the mailing list with the following result:

1. http://archives.postgresql.org/pgsql-general/2008-04/msg00714.php
The same question was raised here, but not answered.

2. http://archives.postgresql.org/pgsql-admin/2008-01/msg00128.php
One had a work-around by temporarily making the owner of the DB become 
SUPERUSER.

3. http://archives.postgresql.org/pgsql-hackers/2008-01/msg00462.php
One tried to suppress the error message related to `COMMENT ON SCHEMA public IS 
'Standard public schema';'

But, they do not answer my question.

So, can someone provide me with the answers to the questions?

Particularly, what does it entail when by default the ownership of the public 
schema is given to the owner of the DB (from security or other aspects)?

Thank you very much.

Best regards,
Eus


  



Re: [GENERAL] valid use of wildcard

2008-10-29 Thread Scott Marlowe
On Wed, Oct 29, 2008 at 5:04 PM, Irene Barg <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Is the following query a valid use of the 'wildcard' in (='2008-10-27%')?
>
>> [EMAIL PROTECTED] arcsoft]$ psql metadata
>> Password: Welcome to psql 8.1.9, the PostgreSQL interactive terminal.
>>
>> metadata=# SELECT * FROM viewspace.siap AS t WHERE
>> t."startDate"='2008-10-27%' AND t.prop_id LIKE '%' LIMIT 1000;
>
> Causes the %CPU to jump and process lingers for over an hour.

Bad move.  dates aren't strings, and their format can change based on
what you've got set for datestyle.

If you want a start date (that's a date or a timestamp) then use the
proper operators

where startDate='2008-10-27'

If startDate is a text / varchar type then you need to change it to a
date.  storing dates in strings is bad.



Re: [GENERAL] Can't restart Postgres

2008-10-29 Thread Tom Lane
"Thom Brown" <[EMAIL PROTECTED]> writes:
> Actually I did "ps aux | grep post" just to cover all bases, but still
> nothing.. except of course the grep itself.

The overwhelming impression from here is of a seriously brain-dead
startup script.  It's spending all its effort on being chatty and none
on actually dealing with unusual cases correctly :-(.  Whose script
is it anyway?

My bet is that there's some discrepancy between what the script is
expecting and what your intended configuration is.  I'm not sure if
the discrepancy is exactly the PID-file location or if it's more subtle
than that, but anyway I'd suggest reading through that script carefully
to see what it's actually doing.

regards, tom lane



Re: FW: [GENERAL] Slow query performance

2008-10-29 Thread Kevin Galligan
I agree with the concept.  The issue is the application is an open ended
query tool.  So, pretty much whatever they feel like entering is valid.  I
totally understand the indexes aren't very selective.  I guess I just don't
know what the next step is.  There aren't a lot of assumptions I can make
about how the queries are run.

I'm in the uniquely bad situation that there exists something with very
similar data that works, so it's impossible to tell them "You can't run a
state query by itself with some other data.  You always need an age bound"
or whatever.  The other application works reasonably fast.  I also don't
know how it's built, which makes my life that much more difficult.

All of my experience is with small or medium-sized databases.  The front end
itself is done, but I've had the client on the hook for a while now trying
to sort this out.  Not sure what the next step is.

I tried the other extreme end.  Age in its own table.  Just 'account
integer' and 'fval smallint'.  fval is the age value.

explain analyze select count(*) from jage where fval between 22 and 33;
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1499316.66..1499316.67 rows=1 width=0) (actual
time=75985.403..75985.404 rows=1 loops=1)
   ->  Bitmap Heap Scan on jage  (cost=365374.78..1456268.39 rows=17219307
width=0) (actual time=7930.354..56879.811 rows=18016538 loops=1)
 Recheck Cond: ((fval >= 22) AND (fval <= 33))
 ->  Bitmap Index Scan on idx_tjage  (cost=0.00..361069.96
rows=17219307 width=0) (actual time=7084.535..7084.535 rows=18016538
loops=1)
   Index Cond: ((fval >= 22) AND (fval <= 33))
 Total runtime: 76015.215 ms


Still took over a minute to do the count.  It seems like the index scan
happens pretty fast, but the last steps go from 7 or 8 seconds to over a
minute.
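Since the common filters appear to be equality on state plus a range on age, one thing worth trying (a sketch, untested against this data set; `idx_state_age` is a made-up name) is a composite index matching that combination:

```sql
-- Hypothetical composite index for the recurring state + age filter.
-- Equality column first, range column second, so the index can satisfy
-- both conditions in a single scan range.
CREATE INDEX idx_state_age ON bigdatatable (state, age);

ANALYZE bigdatatable;

EXPLAIN ANALYZE
SELECT count(*)
FROM bigdatatable
WHERE state = 'NY' AND age BETWEEN 22 AND 23;
```

That won't help a bare unbounded range over 18 million matching rows (the count still has to visit them all), but it should do much better than the single-column index scan plus filter shown above for selective combinations.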

On Wed, Oct 29, 2008 at 7:52 PM, Dann Corbit <[EMAIL PROTECTED]> wrote:

>*From:* Kevin Galligan [mailto:[EMAIL PROTECTED]
> *Sent:* Wednesday, October 29, 2008 4:34 PM
> *To:* Dann Corbit
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: FW: [GENERAL] Slow query performance
>
>
>
> Sorry for the lack of detail.  Index on both state and age.  Not a
> clustered on both as the queries are fairly arbitrary (that's the short
> answer.  The long answer is that, at least with those columns, something
> like that MAY be an option later but I don't know enough now).
>
> I don't have the gui admin set up, as I'm doing this over ssh.  Will get
> the full table definition in a bit.  The short answer is simple btree
> indexes on the columns being searched.  I was applying simple indexes on all
> the columns, as there will be queries like "where [col] is not null",
> although in retrospect, that's fairly pointless unless the column has very
> little data.  Even then, maybe.
>
> Anyway, looking at the output, the time goes from 6727.848 to 387159.175
> during the bitmap heap scan (I was reading it wrong about the Aggregate
> line).  Considering the size involved, is this something that Postgres has
> decided is too big to be done in memory?  That would be my wild guess.
>
> Ran another query.  this one even simpler.  Still quite long...
>
> explain analyze select count(*) from bigdatatable where age between 22 and
> 23 and state = 'NY';
>                                   QUERY PLAN
> -------------------------------------------------------------------------------
>  Aggregate  (cost=37.41..37.42 rows=1 width=0) (actual
> time=217998.706..217998.707 rows=1 loops=1)
>->  Index Scan using idx_jage on bigdatatable  (cost=0.00..37.41 rows=1
> width=0) (actual time=247.209..217988.584 rows=10303 loops=1)
>  Index Cond: ((age >= 22) AND (age <= 23))
>  Filter: ((state)::text = 'NY'::text)
>  Total runtime: 217998.800 ms
>
> Abbreviated schema below.  The table is huge.  Originally I had a design
> with a main "anchor" table that had all records, and most of those columns
> were in other tables I would join for the search.  This didn't perform very
> well, so I decided to go the other way and see how it worked.  I did a count
> on all data, and anything with records in fewer than 5 percent of the rows,
> I put in their own table.  Everything else is in this big one.  The indexing
> strategy is a joke right now.  I just applied one to each.  This is still in
> the testing phase.
>
> I had set this up on mysql.  The joins on the full size db turned out to be
> terrible.  I'm currently setting up the "one large table" design on mysql to
> see how that works.
>
> The obvious answer would be that the table is huge, so when the query is
> running, it's grabbing all that data and not using much of it.  True.
> However, I'm not sure how to approach the design now.  It's rarely going
> to need data from anything other than a few columns, but joining across
> 10's or 100's of millions of records didn't seem that much fun either.
> Thoughts?

[GENERAL] valid use of wildcard

2008-10-29 Thread Irene Barg

Hi,

Is the following query a valid use of the 'wildcard' in (='2008-10-27%')?


[EMAIL PROTECTED] arcsoft]$ psql metadata
Password: 
Welcome to psql 8.1.9, the PostgreSQL interactive terminal.


metadata=# SELECT * FROM viewspace.siap AS t WHERE t."startDate"='2008-10-27%' 
AND t.prop_id LIKE '%' LIMIT 1000;


Causes the %CPU to jump and process lingers for over an hour.


Processes:  87 total, 3 running, 84 sleeping... 321 threads15:51:49
Load Avg:  0.28, 0.28, 0.24 CPU usage:  11.4% user, 9.1% sys, 79.5% idle
SharedLibs: num =  164, resident = 29.5M code, 4.52M data, 7.30M LinkEdit
MemRegions: num = 10409, resident =  311M + 13.8M private,  501M shared
PhysMem:   750M wired,  125M active, 1.42G inactive, 2.27G used, 1.73G free
VM: 13.2G + 97.3M   30039(0) pageins, 0(0) pageouts

  PID COMMAND  %CPU   TIME   #TH #PRTS #MREGS RPRVT  RSHRD  RSIZE  VSIZE
10637 postgres69.1%  0:17.43   1 952  7.60M-  433M  56.9M- 1.06G 
10635 psql 0.0%  0:00.00   11422   256K+  608K   728K+ 27.2M 
10634 top  9.1%  0:03.96   12120   492K   396K   976K  27.0M 
10633 bash 0.0%  0:00.00   11416   204K   792K   808K  27.1M

10632 sshd 0.0%  0:00.00   11145   116K  1.58M   516K  30.0M
10628 sshd 0.0%  0:00.09   11846   144K  1.58M  1.47M  30.1M
10562 postgres 0.0%  0:43.65   1 930  1.30M   433M  64.8M  1.05G
10559 psql 0.0%  0:00.03   11423   252K   608K   736K  27.2M


I do a 'reindexdb -d metadata' and re-run same query and get a response 
back quickly:



[EMAIL PROTECTED] arcsoft]$ psql metadata
Password: 
Welcome to psql 8.1.9, the PostgreSQL interactive terminal.



metadata=# SELECT * FROM viewspace.siap AS t WHERE t."startDate"='2008-10-27%' 
AND t.prop_id LIKE '%' LIMIT 1000;
 image_id  |     reference     | fits_extension | object  | prop_id |      startDate      | ...
-----------+-------------------+----------------+---------+---------+---------------------+-----
 ct1417659 | ct1417659.fits.gz |              1 | object  | noao    | 2008-10-27 00:00:00 | ...
 ct1417660 | ct1417660.fits.gz |              1 | unknown | smarts  | 2008-10-27 00:00:00 | ...
 ct1417661 | ct1417661.fits.gz |              1 | unknown | smarts  | 2008-10-27 00:00:00 | ...
(remaining columns, e.g. ra, dec, equinox, instrument, telescope, fileSize,
 bandpass_id, exposure, releaseDate, elided for width)




Why does reindexdb help?
How is WHERE t."startDate"='2008-10-27%' getting interpreted?

Thank you.
-- irene
-
Irene BargEmail:  [EMAIL PROTECTED]
NOAO/AURA Inc. http://www.noao.edu/noao/staff/ibarg
950 N. Cherry Ave.Voice:  520-318-8273
Tucson, AZ  85726 USA   FAX:  520-318-8360
-



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Grant Allen

Tom Lane wrote:

小波 顾 <[EMAIL PROTECTED]> writes:
  

[ snip a lot of marketing for SQL Server ]



I think the part of this you need to pay attention to is

  

Of course, nothing is entirely free, and this reduction in space and
time come at the expense of using CPU cycles.



We already have the portions of this behavior that seem to me to be
likely to be worthwhile (such as NULL elimination and compression of
large field values).  Shaving a couple bytes from a bigint doesn't
strike me as interesting.


Think about it on a fact table for a warehouse.  A few bytes per bigint 
multiplied by several billions/trillions of bigints (not an exaggeration 
in a DW) and you're talking some significant storage saving on the main 
storage hog in a DW.  Not to mention the performance _improvements_ you 
can get, even with some CPU overhead for dynamic decompression, if the 
planner/optimiser understands how to work with the compression index/map 
to perform things like range/partition elimination etc.  Admittedly this 
depends heavily on the storage mechanics and optimisation techniques of 
the DB, but there is value to be had there ... IBM is seeing typical 
storage savings in the 40-60% range, mostly based on boring, 
bog-standard int, char and varchar data.
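As a rough illustration of where savings like that come from (a sketch of the general idea, not DB2's actual row-compression algorithm), a variable-length integer encoding stores the small values that dominate a typical surrogate-key column in one to three bytes instead of a fixed eight:

```python
def varint_encode(n: int) -> bytes:
    """LEB128-style unsigned varint: 7 payload bits per byte,
    high bit set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def varint_decode(data: bytes) -> int:
    """Decode a single varint produced by varint_encode."""
    n = 0
    for shift, byte in enumerate(data):
        n |= (byte & 0x7F) << (7 * shift)
    return n

# A fact-table-ish column: small surrogate keys declared as bigint.
values = range(100_000)
encoded = b"".join(varint_encode(v) for v in values)

fixed_size = 8 * len(values)   # plain bigint: 8 bytes per value
print(f"fixed: {fixed_size:,} bytes  varint: {len(encoded):,} bytes")
```

On this distribution the variable-length form takes well under half the fixed-width space, which is the same order as the 40-60% savings quoted above; real engines add dictionary and prefix compression on top of tricks like this.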


The IDUG (so DB2 users themselves, not IBM's marketing) had a 
competition to see what was happening in the real world, take a look if 
interested: http://www.idug.org/wps/portal/idug/compressionchallenge


Other big benefits come with XML ... but that is even more dependent on 
the starting point.  Oracle and SQL Server will see big benefits in 
compression with this, because their XML technology is so 
mind-bogglingly broken in the first place.


So there's certainly utility in this kind of feature ... but whether it 
rates above some of the other great stuff in the PostgreSQL pipeline is 
questionable.


Ciao
Fuzzy
:-)


Dazed and confused about technology for 20 years
http://fuzzydata.wordpress.com/




Re: FW: [GENERAL] Slow query performance

2008-10-29 Thread Dann Corbit
From: Kevin Galligan [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 29, 2008 4:34 PM
To: Dann Corbit
Cc: pgsql-general@postgresql.org
Subject: Re: FW: [GENERAL] Slow query performance

 

Sorry for the lack of detail.  Index on both state and age.  Not a
clustered on both as the queries are fairly arbitrary (that's the short
answer.  The long answer is that, at least with those columns, something
like that MAY be an option later but I don't know enough now).

I don't have the gui admin set up, as I'm doing this over ssh.  Will get
the full table definition in a bit.  The short answer is simple btree
indexes on the columns being searched.  I was applying simple indexes on
all the columns, as there will be queries like "where [col] is not
null", although in retrospect, that's fairly pointless unless the column
has very little data.  Even then, maybe.

Anyway, looking at the output, the time goes from 6727.848 to 387159.175
during the bitmap heap scan (I was reading it wrong about the Aggregate
line).  Considering the size involved, is this something that Postgres
has decided is too big to be done in memory?  That would be my wild
guess.

Ran another query.  this one even simpler.  Still quite long...

explain analyze select count(*) from bigdatatable where age between 22
and 23 and state = 'NY';
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=37.41..37.42 rows=1 width=0) (actual
time=217998.706..217998.707 rows=1 loops=1)
   ->  Index Scan using idx_jage on bigdatatable  (cost=0.00..37.41
rows=1 width=0) (actual time=247.209..217988.584 rows=10303 loops=1)
 Index Cond: ((age >= 22) AND (age <= 23))
 Filter: ((state)::text = 'NY'::text)
 Total runtime: 217998.800 ms

Abbreviated schema below.  The table is huge.  Originally I had a design
with a main "anchor" table that held all records, with most of those
columns in other tables I would join for the search.  That didn't
perform very well, so I decided to go the other way and see how it
worked.  I did a count on all the data, and any column with values in
fewer than 5 percent of the rows went into its own table; everything
else is in this big one.  The indexing strategy is a joke right now: I
just applied one index to each column.  This is still in the testing
phase.

I had set this up on mysql.  The joins on the full size db turned out to
be terrible.  I'm currently setting up the "one large table" design on
mysql to see how that works.

The obvious answer would be that the table is huge, so when the query is
running, it's grabbing all that data and not using much of it.  True.
However, I'm not sure how to approach the design now.  It's rarely going
to need data from anything other than a few columns, but joining across
tens or hundreds of millions of records didn't seem like much fun either.
Thoughts?

CREATE TABLE bigdatatable (
account integer,
city character varying(20),
zip character(5),
dincome character(1),
sex character(1),
mob boolean,
religion character(1),
groupcd character(1),
lastdata character varying(4),
countycd character varying(3),
state character varying(2),
dutype character varying(1),
orders integer,
countysz character varying(1),
language character varying(2),
cbsacode character varying(5),
cbsatype character varying(1),
age smallint,
dob date,
ccard boolean,
lor integer,
bankcard boolean,
lastord date,
total integer,
lmob boolean,
homeown character varying(1),
ord1st date,
ordlast date,
married boolean,
deptcard boolean,
ordtotm smallint,
ordlastm date,
ord1stm date,
orddolm smallint,
pcuser boolean,
homeval character varying(1),
mailresp boolean,
lhomepc boolean,
dirrspby boolean,
mgift boolean,
lebuyer boolean,
payother smallint,
lhomdecr boolean,
driver boolean,
ordtote smallint,
ord1ste date,
ordlaste date,
orddole smallint,
mhmdecor boolean,
oddsnend smallint,
aptot smallint,
apwk smallint,
apdol smallint,
payccrd smallint,
landval smallint,
mfapparl boolean,
mgengift boolean,
homeblt smallint,
homebydt date,
educate character varying(1),
children boolean,
payvisa smallint,
hmfr smallint,
maghlth smallint,
homebypx integer,
gfhol smallint,
mbeauty boolean,
apwmtot smallint,
apwmwk smallint,
apwmdol smallint,
travlseg integer,
lhealth boolean,
lcharity boolean,
moutdoor boolean,
occupatn character varying(4),
fundrais boolean,
msports boolean,
hg smallint,
magfam smallint,
melectrc boolean,
lelectrc boolean,
bankcrd1 boolean,
lfoodck boolean,
mfood boolean,
finance boolean,
hmfrntot smallint,
hmfrnwk smalli

Re: [GENERAL] Group BY and Chart of Accounts

2008-10-29 Thread justin
There were a number of code mistakes in my examples, as I was just doing
it off the top of my head; I've now gone through it and got it all
working.

I had to change the function around, as it was double-dipping accounts.
Just run this and it works.


--

Create table coa (
   coa_id serial not null,
   parent_id int not null default 0,
   doIhaveChildren boolean default false,
   account_name text null );


Create Table general_ledger_transactions(
   transaction_id serial not null,
   coa_id integer,
   accounting_period integer,
   debit numeric(20,10) ,
   credit numeric(20,10),
   transaction_date timestamp);


Create table  accounting_periods (
   accounting_period serial not null,
   start_date date,
   end_date date,
   accounting_period_Open boolean);

Insert into coa values (10, default, True, 'ParentAccount1');
Insert into coa values (11, 10, True, 'ChildAccount1');
Insert into coa values (12, 11, false, 'ChildAccount2');
Insert into coa values (13, default, false, 'ChildAccount3');

Insert into Accounting_Periods values ( 1, '2008-10-01', '2008-10-31', 
true );
Insert into Accounting_Periods values ( 2, '2008-11-01', '2008-11-30', 
true );


Insert into general_ledger_transactions values(  default, 11,  1, 30.0, 
0.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 11,  1, 20.0, 
0.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 12,  1, 10.0, 
0.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 12,  1, 50.0, 
0.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 11,  1, 1.0, 
0.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 13,  1, 0.0, 
111.0, current_timestamp);



Insert into general_ledger_transactions values(  default, 11,  2, 0.0, 
30.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 11,  2, 0.0, 
20.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 12,  2, 0.0, 
10.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 12,  2, 0.0, 
50.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 11,  2, 0.0, 
1.0, current_timestamp);
Insert into general_ledger_transactions values(  default, 13,  2, 111.0, 
0.0, current_timestamp);



 


CREATE OR REPLACE FUNCTION GetChildAccountDebits(PassedAccountID
integer, PassedPeriodID integer) RETURNS NUMERIC AS
$FunctionCode$
begin
   -- Sum over every direct child of the passed account, recursing into
   -- children that themselves have children.  Wrapping the per-child
   -- expression in sum() keeps this correct when an account has more
   -- than one direct child (a bare scalar subquery would raise an
   -- error in that case).
   return (SELECT coalesce(sum(
       coalesce((select sum(t.debit)
                 from general_ledger_transactions t
                 where t.coa_id = coa.coa_id
                   and t.accounting_period = PassedPeriodID), 0)
       + CASE WHEN coa.doIhaveChildren
              THEN GetChildAccountDebits(coa.coa_id, PassedPeriodID)
              ELSE 0.0
         END), 0)
      FROM coa
      WHERE coa.parent_id = PassedAccountID);
end;

$FunctionCode$
LANGUAGE 'plpgsql' VOLATILE;

select 10, getchildaccountdebits(10,1)
union
select 11, getchildaccountdebits(11,1)
union
select 12, getchildaccountdebits(12,1);
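
For comparison, on versions with WITH RECURSIVE support (8.4 and later;
not available in 8.3) the same rollup can be done in plain SQL with no
PL/pgSQL recursion.  A sketch against the tables above:

```sql
WITH RECURSIVE subaccounts AS (
    SELECT coa_id FROM coa WHERE parent_id = 10     -- direct children
    UNION ALL
    SELECT c.coa_id                                 -- and their descendants
    FROM coa c
    JOIN subaccounts s ON c.parent_id = s.coa_id
)
SELECT coalesce(sum(t.debit), 0) AS total_debits
FROM general_ledger_transactions t
JOIN subaccounts s USING (coa_id)
WHERE t.accounting_period = 1;  -- should match getchildaccountdebits(10, 1)
```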


--

WaGathoni wrote:

Justin was recommending a solution to the Chart of Accounts Problem
posted by jamhitz:

MQUOTE>
One has you chart of Accounts
   Create table coa (
  coa_id serial not null,
  parent_id int not null default 0,
  doIhaveChildren boolean default false
   account_name text null )
primary key(coa_id)

Create Table general_ledger_transactions(
  transaction_id serial not null
  coad_id integer,
  accounting_period integer,
  debit numeric(20,10) ,
  credit numeric(20,10),
  transaction_date datestamp)
primary key (transaction_id)

...

Create table  accounting_periods (
   accounting_period serial not null,
   start_date date,
   end_date date,
   accounting_period_Open boolean)



Would someone please assist me.  Why is the following function:...


CREATE OR REPLACE  FUNCTION GetChildAccountDebits(PassedAccountID
integer, PassedPeriodID integer) RETURNS NUMERIC AS
$FunctionCode$
DECLARE retval NUMERIC :=0.0;
begin
SELECT Sum(gl_transactions.debit) +
CASE WHEN coa.doIhaveChildren THEN
GetChildAccountDebits(coa.coa_id, PassedPeriodID )
ELSE
   0.0
END
INTO retval
FROM gl_transactions, coa
WHERE gl_transactions.coa_id= coa.coa_id
AND coa.parent_id = PassedAccountID
AND gl_transactions.period_id = PassedPeriodID;

RETURN retval;
end;
$FunctionCode$
 LANGUAGE 'plpgsql' VOLATILE ;

failing with an error to the effect that coa.doIhaveChildren and
coa.coa_id must be included in the GROUP BY clause, and what is the
recommended course of action?

I have 

Re: [GENERAL] postgresql and Mac OS X

2008-10-29 Thread Tom Allison

Grzegorz Jaśkiewicz wrote:
I use postgresql on MBP, current head, for testing and development. Just 
from sources, it won't bite :)


you just have to add user postgres to your system, place $PGDATA 
wherever you feel you should, and you're done.




Yes.  I actually started using *nix with Slackware, which means, by 
definition, installation from scratch is "trivial".  I can see the value 
in doing an installation on your own, because you do have absolute 
control over the versions/options of the packages.


I guess my reluctance to compile is that I have little interest in 
tuning a development box and going through the nuances of 
configuration.  As such, plug & chug seems easy.


I think I found my answer though: DIY.  It's the control, and knowing I 
have all the binaries and source code I need.  Now, wish me luck! 
I might be back on the list really soon...


:)

- Tom

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] using plpgsql debuggers

2008-10-29 Thread Dave Page
On Wed, Oct 29, 2008 at 9:41 PM, Ravi Chemudugunta
<[EMAIL PROTECTED]> wrote:
> Thanks for the replies.
>
> I probably should have explained what it is that I am trying to do.
>
> I am trying to create a command line interface to the debugger rather than
> using the GUI (that'll eventually plug into a sort of testing harness) - so
> I am needing to figure out how it actually works.  In the code comments it
> mentions that any client can be used (psql) ... I don't mind all the extra
> typing =)
>
> Any thoughts on how I may do this?
>
> I did have a brief look at the code inside pgadmin's debugger module - but
> it was a bit hard to figure out how it kicked the whole process off...

Ah, well that code is pretty complex to read as it uses async queries
which fire events when they complete. There's no obvious flow to
follow just by reading.

My colleague posted an example of the API usage a while back - you can
find it at http://archives.postgresql.org/pgsql-hackers/2007-09/msg00241.php

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


FW: [GENERAL] Slow query performance

2008-10-29 Thread Dann Corbit
From: Kevin Galligan [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 29, 2008 3:16 PM
To: Dann Corbit
Subject: Re: [GENERAL] Slow query performance

 

Columns are as follows:

account  | integer   |
city | character varying(20) |
zip  | character(5)  |
dincome  | character(1)  |
sex  | character(1)  |
mob  | boolean   |
religion | character(1)  |
groupcd  | character(1)  |
lastdata | character varying(4)  |
countycd | character varying(3)  |
state| character varying(2)  |
dutype   | character varying(1)  |
orders   | integer   |
countysz | character varying(1)  |
ethnic   | character varying(2)  |
language | character varying(2)  |
cbsacode | character varying(5)  |
cbsatype | character varying(1)  |
age  | smallint  |
dob  | date  |
ccard| boolean   |
lor  | integer   |
bankcard | boolean   |
lastord  | date  |
total| integer   |
lmob | boolean   |
homeown  | character varying(1)  |
ord1st   | date  |
ordlast  | date  |
married  | boolean   |
deptcard | boolean   |
>>

You did not show us the indexes.

If you have pgadmin III, go to the table and copy/paste the actual
definition, including indexes.

<<
From here it's about another 100 columns with either booleans or
smallints, mostly null values.

I eventually killed the vacuum.  I will run it again, but was just going
through the indexes.  All were of this format...

"INFO:  index "idx_jordlast" now contains 265658026 row versions in
728409 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.88s/0.13u sec elapsed 90.38 sec."

An example of a slow query is...

select count(*) from bigdatatable where age between 22 and 40 and state
= 'NY';
>>

Is there an index on state and age?

Is there an index on state?

Is there an index on age?

That is important missing information.

If there is no index on either column, then you will do a table scan.

If all of your slow queries look like the above, then create a clustered
index on state,age
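
(PostgreSQL has no SQL Server-style maintained clustered index; the
closest analogue is a one-time physical reorder with CLUSTER.  A sketch,
hedged in that the order is not maintained for later writes, though that
should be fine for a read-only table:)

```sql
CREATE INDEX idx_state_age ON bigdatatable (state, age);

-- 8.3 syntax; older releases use: CLUSTER idx_state_age ON bigdatatable
CLUSTER bigdatatable USING idx_state_age;  -- rewrites the table in index order
ANALYZE bigdatatable;                      -- refresh planner statistics afterwards
```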

<<
I know count is not optimized on postgresql like it is on mysql due to
transaction isolation (at least that's what I've read.  Makes sense to
me).  I understand it'll take time to actually count the rows.  However,
here's the output of 'explain analyze select count(*) from bigdatatable
where age between 22 and 40 and state = 'NY';'

 Aggregate  (cost=5179639.55..5179639.56 rows=1 width=0) (actual
time=389529.895..389529.897 rows=1 loops=1)
   ->  Bitmap Heap Scan on bigdatatable  (cost=285410.65..5172649.63
rows=2795968 width=0) (actual time=6727.848..387159.175
rows=2553273 loops=1)
 Recheck Cond: ((state)::text = 'NY'::text)
 Filter: ((age >= 22) AND (age <= 40))
 ->  Bitmap Index Scan on idx_jstate  (cost=0.00..284711.66
rows=15425370 width=0) (actual time=6298.950..6298.950 ro
ws=16821828 loops=1)
   Index Cond: ((state)::text = 'NY'::text)
 Total runtime: 389544.088 ms

It looks like the index scans are around 6 seconds or so each, which is
fine.  then it looks like "Aggregate" suddenly jumps up to 6 minutes.

I know the database design is crude; it's really just a big flat table.
I didn't put too much into weeding out which columns should be indexed
and which shouldn't (I just slapped an index on each).  I'm happy to do
that work, but right now I'm in panic mode and just need to figure out
which way to start going.  I had a design on mysql which worked pretty
well at 10 to 20% of full size, but degraded quite a bit at full size.
Compounding this, there is another implementation we've seen that uses
the full size of similar data and returns actual results in seconds.
(I originally planned to use a 5%-size db for estimated results, then
the full size for getting the actual data.  This plan was rejected. :( )

Any thoughts?  It seemed to work OK when I had a table with 10 cols but
about the same data length.  That may have been an artificial test,
though.

Again, this is read-only once the data is set up.  The client wants to
run pretty much arbitrary queries, so it's hard to isolate certain
things for optimization, although there are some "knowns".

Will start the full vacuum process again.

Thanks in advance,
-Kevin

On Wed, Oct 29, 2008 at 4:52 PM, Dann Corbit <[EMAIL PROTECTED]> wrote:
>> -Original Message-
>> From: [EMAIL PROTECTED] [mailto:pgsql-general-
>> [EMAIL PROTECTED] On Behalf Of Kevin Galligan
>> Sent: Wednesday, October 29, 2008 1:18 PM
>> To: pgsql-general@postgresql.org
>> Subject: [GENERAL] Slow query performance
>>
>> I'm approaching the end of my rope here.  I have a large database.
>> 250 million rows (ish).  Each row has potentially about 500 pieces of
>> data, although most of the columns are sparsely populated.
>>
>> Wha

Re: [GENERAL] Slow query performance

2008-10-29 Thread Kevin Galligan
An example of a slow query is...

select count(*) from bigdatatable where age between 22 and 40 and state =
'NY';

explain analyze returned the following...

 Aggregate  (cost=5179639.55..5179639.56 rows=1 width=0) (actual
time=389529.895..389529.897 rows=1 loops=1)
   ->  Bitmap Heap Scan on bigdatatable  (cost=285410.65..5172649.63
rows=2795968 width=0) (actual time=6727.848..387159.175
rows=2553273 loops=1)
 Recheck Cond: ((state)::text = 'NY'::text)
 Filter: ((age >= 22) AND (age <= 40))
 ->  Bitmap Index Scan on idx_jstate  (cost=0.00..284711.66
rows=15425370 width=0) (actual time=6298.950..6298.950
rows=16821828 loops=1)
   Index Cond: ((state)::text = 'NY'::text)
 Total runtime: 389544.088 ms

It looks like the index scans are around 6 seconds or so each, but then the
bitmap heap scan and aggregate jump up to 6 minutes.

More detail on the table design and other stuff in a bit...


On Wed, Oct 29, 2008 at 6:18 PM, Scott Marlowe <[EMAIL PROTECTED]>wrote:

> On Wed, Oct 29, 2008 at 2:18 PM, Kevin Galligan <[EMAIL PROTECTED]>
> wrote:
> > I'm approaching the end of my rope here.  I have a large database.
> > 250 million rows (ish).  Each row has potentially about 500 pieces of
> > data, although most of the columns are sparsely populated.
>
> A couple of notes here.  PostgreSQL stores null values as a single bit
> in a bit field, making sparsely populated tables quite efficient as
> long as you store the non-existent values as null and not '' or some
> other real value.
>
> Have you run explain analyze on your queries yet?  Pick a slow one,
> run explain analyze on it and post it and we'll see what we can do.
>


Re: [GENERAL] Slow query performance

2008-10-29 Thread Scott Marlowe
On Wed, Oct 29, 2008 at 2:18 PM, Kevin Galligan <[EMAIL PROTECTED]> wrote:
> I'm approaching the end of my rope here.  I have a large database.
> 250 million rows (ish).  Each row has potentially about 500 pieces of
> data, although most of the columns are sparsely populated.

A couple of notes here.  PostgreSQL stores null values as a single bit
in a bit field, making sparsely populated tables quite efficient as
long as you store the non-existent values as null and not '' or some
other real value.
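
A quick way to see this for yourself, if you want to check it (a
sketch; this relies on pg_column_size over a whole-row value reporting
the size of the tuple datum):

```sql
CREATE TEMP TABLE t (a text, b text, c text);
INSERT INTO t VALUES (NULL, NULL, NULL), ('', '', '');

-- The all-NULL row is represented by bits in the tuple header's null
-- bitmap, while empty strings are stored as real varlena values, so
-- the first row should report fewer bytes than the second.
SELECT a IS NULL AS all_null, pg_column_size(t.*) AS row_bytes FROM t;
```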

Have you run explain analyze on your queries yet?  Pick a slow one,
run explain analyze on it and post it and we'll see what we can do.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Group BY

2008-10-29 Thread Scott Marlowe
On Wed, Oct 29, 2008 at 2:59 PM, WaGathoni <[EMAIL PROTECTED]> wrote:
> failing with an error to the effect that that that
> coa.doaIhaveChildren and coa.coa_id must be included in the GROUP BY
> clause and what is is the recommended course of action.

Generally the solution in PostgreSQL is its proprietary DISTINCT ON ()
extension:

select distinct on (field1, field2) field1, field2, field3 from 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] using plpgsql debuggers

2008-10-29 Thread Ravi Chemudugunta
Thanks for the replies.

I probably should have explained what it is that I am trying to do.

I am trying to create a command line interface to the debugger rather than
using the GUI (that'll eventually plug into a sort of testing harness) - so
I am needing to figure out how it actually works.  In the code comments it
mentions that any client can be used (psql) ... I don't mind all the extra
typing =)

Any thoughts on how I may do this?

I did have a brief look at the code inside pgadmin's debugger module - but
it was a bit hard to figure out how it kicked the whole process off...

-ravi

On Wed, Oct 29, 2008 at 9:54 PM, Dave Page <[EMAIL PROTECTED]> wrote:

> On Wed, Oct 29, 2008 at 6:48 AM, Ravi Chemudugunta
> <[EMAIL PROTECTED]> wrote:
> > hi all,
> >
> > has anyone here heard of / used:
> >
> > http://pgfoundry.org/projects/edb-debugger/
> >
> > This allows one to debug code, using breakpoints, single stepping etc.
> which
> > seems really great but I haven't figured out how to use it yet?
> >
> > I can't figure out how to invocate the server (target) side in order to
> then
> > get the proxy to connect to it.
>
> Run pgAdmin with the plugin enabled in your database, and right-click
> a function and select one of the debugging options.
>
>
> --
> Dave Page
> EnterpriseDB UK:   http://www.enterprisedb.com
>


[GENERAL] Group BY

2008-10-29 Thread WaGathoni
Justin was recommending a solution to the Chart of Accounts Problem
posted by jamhitz:

MQUOTE>
One has you chart of Accounts
   Create table coa (
  coa_id serial not null,
  parent_id int not null default 0,
  doIhaveChildren boolean default false
   account_name text null )
primary key(coa_id)

Create Table general_ledger_transactions(
  transaction_id serial not null
  coad_id integer,
  accounting_period integer,
  debit numeric(20,10) ,
  credit numeric(20,10),
  transaction_date datestamp)
primary key (transaction_id)

...

Create table  accounting_periods (
   accounting_period serial not null,
   start_date date,
   end_date date,
   accounting_period_Open boolean)



Would someone please assist me.  Why is the following function:...


CREATE OR REPLACE  FUNCTION GetChildAccountDebits(PassedAccountID
integer, PassedPeriodID integer) RETURNS NUMERIC AS
$FunctionCode$
DECLARE retval NUMERIC :=0.0;
begin
SELECT Sum(gl_transactions.debit) +
CASE WHEN coa.doIhaveChildren THEN
GetChildAccountDebits(coa.coa_id, PassedPeriodID )
ELSE
   0.0
END
INTO retval
FROM gl_transactions, coa
WHERE gl_transactions.coa_id= coa.coa_id
AND coa.parent_id = PassedAccountID
AND gl_transactions.period_id = PassedPeriodID;

RETURN retval;
end;
$FunctionCode$
 LANGUAGE 'plpgsql' VOLATILE ;

failing with an error to the effect that coa.doIhaveChildren and
coa.coa_id must be included in the GROUP BY clause, and what is the
recommended course of action?

I have limited Internet access, so forgive me when I raise the same
question 8 days later.

Thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with selecting the first day of the the week

2008-10-29 Thread Raymond O'Donnell
On 29/10/2008 20:11, x asasaxax wrote:

>I need a function to select the first day of the week. For example
> giving  today´s date(29/10/2008).

select 'Monday';

;-)


Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Slow query performance

2008-10-29 Thread Kevin Galligan
I'm approaching the end of my rope here.  I have a large database.
250 million rows (ish).  Each row has potentially about 500 pieces of
data, although most of the columns are sparsely populated.

What I'm trying to do is, essentially, search for sub-sets of that
data based on arbitrary queries of those data columns.  the queries
would be relatively simple ("dirbtl is not null and qqrq between 20
and 40").  After the database is built, it is read only.

So, I started with maybe 10-15 fields in a main table, as most records
have values for those fields, and had individual tables for the other
values.  The idea was that the percentage of rows with values drops off
significantly beyond those main fields, and each individual query looks
at probably 3 or 4 fields in total.  The performance of those queries
was pretty bad: they had to join large numbers of values, which didn't
really work out well.

So, went the other direction completely.  I rebuilt the database with
a much larger main table.  Any values with 5% or greater filled in
rows were added to this table.  Maybe 130 columns.  Indexes applied to
most of these.  Some limited testing with a smaller table seemed to
indicate that queries on a single table without a join would work much
faster.

So, I built that huge table.  Now query time is terrible: maybe a
minute or more for simple queries.

I'm running vacuum/analyze right now (which is also taking forever, BTW).

The box has 15 GB of RAM.  I set shared_buffers to 8 or 9 GB.  My first
question: what would be better to bump up to increase performance?  I
thought that was the setting to raise to improve query time and index
caching, but I've read conflicting advice.  The full 15 GB is available.
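
(A hedged starting point rather than a tuned answer: very large
shared_buffers values can backfire, since the OS page cache ends up
caching the same data twice.  Common-era advice is roughly a quarter of
RAM for shared_buffers, with effective_cache_size telling the planner
about the rest.  A postgresql.conf sketch for a 15 GB, read-mostly box,
assuming an 8.2+ release that accepts memory units:)

```
shared_buffers = 4GB          # ~25% of RAM often beats 8-9 GB here
effective_cache_size = 12GB   # planner hint: how much the OS will cache
work_mem = 256MB              # per sort/hash operation; raise with care
```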

I originally had this in mysql.  Also bad performance.  I understand
how to optimize that much better, but it just wasn't cutting it.

Anyway, help with tuning the settings would be greatly appreciated.
Advice on how best to lay this out would also be helpful (I know its
difficult without serious detail).

Thanks in advance,
-Kevin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with selecting the first day of the the week

2008-10-29 Thread Pavel Stehule
Hello

try

CREATE OR REPLACE FUNCTION public.fdow(date)
 RETURNS date
 LANGUAGE sql
 IMMUTABLE STRICT
AS $function$ select $1 - extract(dow from $1)::int $function$

postgres=# select fdow(current_date);
    fdow
------------
 2008-10-26
(1 row)
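
If the week is instead meant to start on Monday (an assumption about
the requirement), date_trunc's 'week' field is ISO-based and does that
directly:

```sql
select date_trunc('week', current_date)::date;  -- Monday of the current ISO week
```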

regards
Pavel Stehule

2008/10/29 x asasaxax <[EMAIL PROTECTED]>:
> Hi,
>
>I need a function to select the first day of the week. For example
> giving  today´s date(29/10/2008).
>
> Can anyone help´s me?
> Thanks!
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem with selecting the first day of the the week

2008-10-29 Thread x asasaxax
Hi,

   I need a function to select the first day of the week, given, for
example, today's date (29/10/2008).

Can anyone help me?
Thanks!


Re: [GENERAL] autovacuum

2008-10-29 Thread Matthew T. O'Connor

Noah Freire wrote:
<2008-10-29 11:09:03.453 PDT>DEBUG: 0: accounts: vac: 16697969 (threshold 6000050), anl: 16697969 (threshold 12048)
<2008-10-29 11:09:05.610 PDT>DEBUG: 0: accounts: vac: 16699578 (threshold 6000050), anl: 16699578 (threshold 12048)
<2008-10-29 11:10:03.563 PDT>DEBUG: 0: accounts: vac: 16735906 (threshold 6000050), anl: 16735906 (threshold 12048)


please check the first log message: the vacuum threshold is 6,000,050 
rows and the number of dead tuples is 16,697,969.  Even though the number 
of dead tuples is greater than the threshold, autovacuum is not being 
triggered for this table.  So, besides this condition (dead_tuples > 
threshold), what else is taken into account by autovacuum?


What version of PostgreSQL?  Is the table being excluded? (see the 
pg_autovacuum system table settings)  Are you sure that it's not getting 
processed? Perhaps one worker is / has been churning on this table for a 
 *LONG* time (that is a fairly big table).  What does it say for the 
most recent autovacuum?


SELECT relid, schemaname, relname, last_vacuum, last_autovacuum  from 
pg_stat_all_tables;


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] autovacuum

2008-10-29 Thread Noah Freire
Hello,

I have a table (accounts) with 600,000,000 rows. A heavy high-concurrent
workload that makes mostly updates on this table generates a lot of dead
tuples in its run, which is expected due to MVCC.
The problem is that even though autovacuum is enabled, the autovacuum worker
does not vacuum this table (I entered custom autovacuum settings for this
table in pg_autovacuum to try to force a situation). Autovacuum is working
for other smaller tables but not for accounts.

<2008-10-29 11:09:03.453 PDT>DEBUG: 0: accounts: vac: 16697969 (threshold 6000050), anl: 16697969 (threshold 12048)
<2008-10-29 11:09:05.610 PDT>DEBUG: 0: accounts: vac: 16699578 (threshold 6000050), anl: 16699578 (threshold 12048)
<2008-10-29 11:10:03.563 PDT>DEBUG: 0: accounts: vac: 16735906 (threshold 6000050), anl: 16735906 (threshold 12048)


please check the first log message: the vacuum threshold is 6,000,050 rows
and the number of dead tuples is 16,697,969.  Even though the number of
dead tuples is greater than the threshold, autovacuum is not being
triggered for this table.  So, besides this condition (dead_tuples >
threshold), what else is taken into account by autovacuum?
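
For reference, in 8.1 through 8.3 the per-table trigger point is
vac_base_thresh + vac_scale_factor * reltuples, using any pg_autovacuum
overrides where present.  A sketch for checking what autovacuum actually
sees (column names as in those releases; verify against your version):

```sql
-- e.g. 50 + 0.01 * 600,000,000 = 6,000,050 dead tuples for this table
SELECT vacrelid::regclass AS rel, enabled,
       vac_base_thresh, vac_scale_factor
FROM pg_autovacuum
WHERE vacrelid = 'accounts'::regclass;
```

If enabled is false there, the table is excluded outright, which is the
other common reason a single table never gets vacuumed.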

Thank you,

-Noah


Re: [GENERAL] Can't restart Postgres

2008-10-29 Thread Thom Brown
Actually I did "ps aux | grep post" just to cover all bases, but still
nothing... except, of course, the grep itself.

On Wed, Oct 29, 2008 at 6:38 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 29, 2008 at 9:43 AM, Thom Brown <[EMAIL PROTECTED]> wrote:
>> Hi,
>>  * Forcing it to shutdown which leads to a recover-run on next startup.
>> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not exist
>> Is server running?
>>  * Forced shutdown failed!!! Something is wrong with your system,
>> please take care of it manually.
>>   [ ok ]
>>  * Starting PostgreSQL ...
>> waiting for server to
>> start...could
>> not start server[ !! ]
>>  * The pid-file doesn't exist but pg_ctl reported a running server.
>>  * Please check whether there is another server running on the same
>> port or read the log-file.
>
> At this point did you do something like:
>
> ps ax|grep postgres
>
> ???
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't restart Postgres

2008-10-29 Thread Scott Marlowe
On Wed, Oct 29, 2008 at 9:43 AM, Thom Brown <[EMAIL PROTECTED]> wrote:
> Hi,
>  * Forcing it to shutdown which leads to a recover-run on next startup.
> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not exist
> Is server running?
>  * Forced shutdown failed!!! Something is wrong with your system,
> please take care of it manually.
>   [ ok ]
>  * Starting PostgreSQL ...
> waiting for server to
> start...could
> not start server[ !! ]
>  * The pid-file doesn't exist but pg_ctl reported a running server.
>  * Please check whether there is another server running on the same
> port or read the log-file.

At this point did you do something like:

ps ax|grep postgres

???

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Scott Marlowe
On Wed, Oct 29, 2008 at 10:09 AM, 小波 顾 <[EMAIL PROTECTED]> wrote:
>
> Data Compression
>
> The new data compression feature in SQL Server 2008 reduces the size of
> tables, indexes or a subset of their partitions by storing fixed-length data
> types in variable length storage format and by reducing the redundant data.
> The space savings achieved depends on the schema and the data distribution.
> Based on our testing with various data warehouse databases, we have seen a
> reduction in the size of real user databases up to 87% (a 7 to 1 compression
> ratio) but more commonly you should expect a reduction in the range of
> 50-70% (a compression ratio between roughly 2 to 1 and 3 to 1).

I'm sure this makes for a nice brochure or PowerPoint presentation,
but in the real world I can't imagine putting that much effort into it
when compressed filesystems seem the place to be doing this.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Weird problem concerning tsearch functions built into postgres 8.3, assistance requested

2008-10-29 Thread Andrew Edson
I've been tasked with maintaining a set of postgres databases created by my 
predecessor in this position.  The original databases several years back were 
version 8.1.3, and used the tsearch2 functions to enable some client-program 
searches.

We've recently begun preparing to shift to 8.3 (I believe the current starter 
box we're putting together for cloning is running 8.3.1), and I've been having 
a bit of trouble with the built-in searching.  I think I've got it mostly fixed 
after following a few leads online, but I've run across one last little bit of 
problem that I can't figure out how to get around, and that doesn't make much 
sense to me.  Admittedly, I don't know very much about tsearch.

One of the tables we're using in the 8.1.3 setups currently running includes 
phone numbers as a searchable field (fti_phone), with the results of a select 
on the field generally looking like this: 'MMM':2 'NNNN':3 'MMM-NNNN':1, where 
MMM is the first three digits and NNNN is the fourth through seventh.

The weird part is this: on the old systems running 8.1.3, I can look up a 
record by fti_phone using any of the three forms above: the first three digits, 
the last four, or the entire number including the dash.  On the new system 
running 8.3.1, I can do a lookup by the first three or the last four and get 
the results I'm after, but any attempt to do a lookup by the entire hyphenated 
form returns no records.
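
One way to narrow this down would be to compare how the 8.3 parser
tokenizes the full number against what the old tsearch2 configuration
produced.  A sketch with a made-up number (actual token classes may
differ on your setup):

```sql
SELECT alias, token, lexemes
FROM ts_debug('555-1234');
-- If the hyphenated form is split up or classed differently than under
-- the 8.1 tsearch2 parser, the stored full-number lexeme will never
-- match the query's lexemes, which would explain the missing results.
```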

I saw nothing concerning this while I was looking for information on how to get 
the search functions properly working in postgres 8.3.1, nor have I 
specifically seen anything since running across that problem.  The latter, 
however, may simply be because I don't know how to properly phrase my searches.

Does anyone have any information they would be willing to share regarding this 
issue, or a link to a website which discusses it?  I would greatly appreciate 
any advice I may be given.




Re: [GENERAL] Can't restart Postgres

2008-10-29 Thread Thom Brown
Permissions are identical to live.  I've checked the /tmp folder for a
PID reference, but it doesn't exist in live or dev.

What do you mean by "variables"?  How can I check?

I only have one postgresql database cluster on each server.

With regards to the config causing memory problems, the specs of both
environments are absolutely identical.  Same kernel, same disk space,
same distro, same memory.  In fact the development environment had to
be rebuilt so we cloned live and changed the necessary settings.  Is
that potentially an issue, bearing in mind it had been running until
now?

By virtual machine I mean the entire systems are running within
virtual machines on different physical machines.  Each is completely
unaware of its host, and localhost is a true localhost as far as it's
concerned.

The port numbers are only different because we also have a staging
virtual machine which is on the default port, and to remotely connect
to both we just changed the port on the development one to an unused
one.  This is the port it has been running on until now.

Thanks

Thom

On Wed, Oct 29, 2008 at 3:57 PM, Serge Fonville
<[EMAIL PROTECTED]> wrote:
> Hi,
> Did you check permissions?
> Do the pid files exist?
> What variables are set?
> Regards,
> Serge Fonville
> On Wed, Oct 29, 2008 at 4:43 PM, Thom Brown <[EMAIL PROTECTED]> wrote:
>>
>> Hi,
>>
>> I've got a development virtual server which matches live exactly
>> except for the fact that Postgres is running on a different port which
>> is not used by anything else.  Postgres was running fine until I
>> updated postgresql.conf to enhance logging and make better use of
>> system resources.
>>
>> Here's the problem:
>>
>> # /etc/init.d/postgresql-8.3 restart
>>  * Stopping PostgreSQL (this can take up to 90 seconds) ...
>> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not
>> exist
>> Is server running?
>>  * Some clients did not disconnect within 30 seconds.
>>  * Going to shutdown the server anyway.
>> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not
>> exist
>> Is server running?
>>  * Shutting down the server gracefully failed.
>>  * Forcing it to shutdown which leads to a recover-run on next startup.
>> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not
>> exist
>> Is server running?
>>  * Forced shutdown failed!!! Something is wrong with your system,
>> please take care of it manually.
>>   [ ok ]
>>  * Starting PostgreSQL ...
>> waiting for server to start...could not start server [ !! ]
>>  * The pid-file doesn't exist but pg_ctl reported a running server.
>>  * Please check whether there is another server running on the same
>> port or read the log-file.
>>
>>
>> If you're curious, the settings I changed in postgresql.conf are as
>> follows:
>>
>> OLD: shared_buffers = 24MB
>> NEW: shared_buffers = 128MB
>>
>> OLD: #log_destination = 'stderr'
>> NEW: log_destination = 'stderr'
>>
>> OLD: #logging_collector = off
>> NEW: logging_collector = on
>>
>> OLD: #log_directory = 'pg_log'
>> NEW: log_directory = '/var/log/pg_log'
>>
>> OLD: #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
>> NEW: log_filename = 'postgresql-%Y-%m-%d.log'
>>
>> OLD: #log_rotation_age = 1d
>> NEW: log_rotation_age = 1d
>>
>> OLD: #log_min_duration_statement = -1
>> NEW: log_min_duration_statement = 0
>>
>> OLD: #log_duration = off
>> NEW: log_duration = on
>>
>> OLD: #log_line_prefix = ''
>> NEW: log_line_prefix = '%t [%p]: [%l-1] '
>>
>> Note that the live and development configs are identical except for
>> the port number.
>>
>> Netstat data
>>
>> # netstat -a
>> Active Internet connections (servers and established)
>> Proto Recv-Q Send-Q Local Address           Foreign Address         State
>> tcp        0      0 localhost:mysql         *:*                     LISTEN
>> tcp        0      0 *:sunrpc                *:*                     LISTEN
>> tcp        0      0 *:39571                 *:*                     LISTEN
>> tcp6       0      0 [::]:http-alt           [::]:*                  LISTEN
>> tcp6       0      0 [::]:http               [::]:*                  LISTEN
>> tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
>> tcp6       0      0 [::]:https              [::]:*                  LISTEN
>> tcp6       0    732 linode-dev.prehisto:ssh 217.154.203.18:4244     ESTABLISHED
>> udp        0      0 *:779                   *:*
>> udp        0      0 *:32781                 *:*
>> udp        0      0 *:sunrpc                *:*
>> Active UNIX domain sockets (servers and established)
>> Proto RefCnt Flags       Type       State         I-Node   Path
>> unix  2      [ ACC ]     STREAM     LISTENING     7294     @/tmp/fam-root-
>> unix  2      [ ACC ]     STREAM     LISTENING     1745591  /var/run/cgisock.19119
>> unix  2      [ ]         DGRAM                    179      @/org/kernel/ude

Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Grzegorz Jaśkiewicz
I can imagine my big stats tables, with 300-400M rows, all bigints, that
mostly require that sort of length.  Gain: none; hassle: 100%.


Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread justin


小波 顾 wrote:


Data Compression MSSQL 2008 technotes: "Your results depend on 
your workload, database, and hardware."

Sounds cool, but I wonder what the real-world results are.

For I/O-bound systems, lots of pluses,
but for CPU-bound workloads it would suck.



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Tom Lane
小波 顾 <[EMAIL PROTECTED]> writes:
> [ snip a lot of marketing for SQL Server ]

I think the part of this you need to pay attention to is

> Of course, nothing is entirely free, and this reduction in space and
> time comes at the expense of using CPU cycles.

We already have the portions of this behavior that seem to me to be
likely to be worthwhile (such as NULL elimination and compression of
large field values).  Shaving a couple bytes from a bigint doesn't
strike me as interesting.

(Note: you could build a user-defined type that involved a one-byte
length indicator followed by however many bytes of the bigint you
needed.  So someone who thought this might be worthwhile could do it
for themselves.  I don't see it being a win, though.)

regards, tom lane



Re: [GENERAL] psql screen size

2008-10-29 Thread Alvaro Herrera
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:

> > I can confirm that when the pager is open, psql does not resize
> > properly.  Maybe psql is ignoring signals while the pager is open, or
> > something.
> 
> If the pager is running, psql's not going to do anything anyway, no?
> What behavior are you expecting?

I am expecting that when control gets back to it, it has updated its
notion of terminal size.  Obviously I don't want anything _visible_ to
happen at that point to psql.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread 小波 顾
Data Compression
The new data compression feature in SQL Server 2008 reduces the size of tables, 
indexes or a subset of their partitions by storing fixed-length data types in 
variable length storage format and by reducing the redundant data. The space 
savings achieved depends on the schema and the data distribution. Based on our 
testing with various data warehouse databases, we have seen a reduction in the 
size of real user databases up to 87% (a 7 to 1 compression ratio) but more 
commonly you should expect a reduction in the range of 50-70% (a compression 
ratio between roughly 2 to 1 and 3 to 1). 
SQL Server provides two types of compression as follows:
· ROW compression enables storing fixed length types in variable length 
storage format. So for example, if you have a column of data type BIGINT which 
takes 8 bytes of storage in fixed format, when compressed it takes a variable 
number of bytes—anywhere from 0 bytes to up to 8 bytes. Since column values are 
stored as variable length, an additional 4‑bit length code is stored for each 
field within the row. Additionally, zero and NULL values don’t take any storage 
except for the 4‑bit code. 
· PAGE compression is built on top of ROW compression. It minimizes 
storage of redundant data on the page by storing commonly occurring byte 
patterns on the page once and then referencing these values for respective 
columns. The byte pattern recognition is type-independent. Under PAGE 
compression, SQL Server optimizes space on a page using two techniques.
The first technique is column prefix. In this case, the system looks for a 
common byte pattern as a prefix for all values of a specific column across rows 
on the page. This process is repeated for all the columns in the table or 
index. The column prefix values that are computed are stored as an anchor 
record on the page and the data or index rows refer to the anchor record for 
the common prefix, if available, for each column.
The second technique is page level dictionary. This dictionary stores common 
values across columns and rows and stores them in a dictionary. The columns are 
then modified to refer to the dictionary entry.
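The page-dictionary technique can be sketched in a few lines.  This Python toy
(an assumption for illustration only, not SQL Server's actual on-disk format)
replaces each repeated cell value with an index into a per-page dictionary:

```python
def dictionary_compress(rows):
    """Replace repeated cell values with indices into a shared
    per-page dictionary, mimicking page-level dictionary compression."""
    dictionary = []   # distinct values, in order of first appearance
    index = {}        # value -> position in the dictionary
    encoded = []
    for row in rows:
        enc_row = []
        for cell in row:
            if cell not in index:
                index[cell] = len(dictionary)
                dictionary.append(cell)
            enc_row.append(index[cell])  # small int instead of the value
        encoded.append(enc_row)
    return dictionary, encoded
```

When the same value occurs in many rows or columns of a page, it is stored
once in the dictionary and every other occurrence shrinks to a small integer.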
Compression comes with additional CPU cost. This overhead is paid when you 
query or execute DML operations on compressed data. The relative CPU overhead 
with ROW is less than for PAGE, but PAGE compression can provide better 
compression. Since there are many kinds of workloads and data patterns, SQL 
Server exposes compression granularity at a partition level. You can choose to 
compress the whole table or index or a subset of partitions. For example, in a 
DW workload, if CPU is the dominant cost in your workload but you want to save 
some disk space, you may want to enable PAGE compression on partitions that are 
not accessed frequently while not compressing the current partition(s) that are 
accessed and manipulated more frequently. This reduces the total CPU cost, at a 
small increase in disk space requirements. If I/O cost is dominant for your 
workload, or you need to reduce disk space costs, compressing all data using 
PAGE compression may be the best choice. Compression can give many-fold 
speedups if it causes your working set of frequently touched pages to be cached 
in the main memory buffer pool, when it does not otherwise fit in memory. 
Preliminary performance results on one large-scale internal DW query 
performance benchmark used to test SQL Server 2008 show a 58% disk savings, an 
average 15% reduction in query runtime, and an average 20% increase in CPU 
cost. Some queries sped up by a factor of up to seven. Your results depend 
on your workload, database, and hardware.
The commands to compress data are exposed as options in CREATE/ALTER DDL 
statements and support both ONLINE and OFFLINE mode. Additionally, a stored 
procedure is provided to help you estimate the space savings prior to actual 
compression. 
Backup Compression
Backup compression helps you to save in multiple ways.
By reducing the size of your SQL backups, you save significantly on disk media 
for your SQL backups. While all compression results depend on the nature of the 
data being compressed, results of 50% are not uncommon, and greater compression 
is possible. This enables you to use less storage for keeping your backups 
online, or to keep more cycles of backups online using the same storage.
Backup compression also saves you time. Traditional SQL backups are almost 
entirely limited by I/O performance. By reducing the I/O load of the backup 
process, we actually speed up both backups and restores.
Of course, nothing is entirely free, and this reduction in space and time comes 
at the expense of using CPU cycles. The good news here is that the savings in 
I/O time offsets the increased use of CPU time, and you can control how much 
CPU is used by your backups at the expense of the rest of your workload by 
taking advantage of the Resource Governor.
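The trade-off described above is easy to reproduce with any general-purpose
compressor; zlib here is just a stand-in (SQL Server's actual backup
compression algorithm is not specified in the text above):

```python
import zlib

# A "backup" consisting of highly repetitive row data, as typical
# database pages often are.
page = b"INSERT INTO t VALUES (42, 'hello world');\n" * 1000

# Compressing trades CPU time for a much smaller amount of I/O.
compressed = zlib.compress(page, 6)
print(f"{len(page)} -> {len(compressed)} bytes")
```

On repetitive data like this the reduction is far more than the "results of
50%" quoted above; real backups with more varied content compress less.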
 
URL:http://ms

Re: [GENERAL] psql screen size

2008-10-29 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> How do you know that the pager is in the same process group?  Is the
> process group something that's inherited automatically on fork()?

It certainly should be.

> I can confirm that when the pager is open, psql does not resize
> properly.  Maybe psql is ignoring signals while the pager is open, or
> something.

If the pager is running, psql's not going to do anything anyway, no?
What behavior are you expecting?

regards, tom lane



Re: [GENERAL] Can't restart Postgres

2008-10-29 Thread Serge Fonville
Hi,
Did you check permissions?
Do the pid files exist?
What variables are set?

Regards,

Serge Fonville

On Wed, Oct 29, 2008 at 4:43 PM, Thom Brown <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I've got a development virtual server which matches live exactly
> except for the fact that Postgres is running on a different port which
> is not used by anything else.  Postgres was running fine until I
> updated postgresql.conf to enhance logging and make better use of
> system resources.
>
> Here's the problem:
>
> # /etc/init.d/postgresql-8.3 restart
>  * Stopping PostgreSQL (this can take up to 90 seconds) ...
> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not
> exist
> Is server running?
>  * Some clients did not disconnect within 30 seconds.
>  * Going to shutdown the server anyway.
> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not
> exist
> Is server running?
>  * Shutting down the server gracefully failed.
>  * Forcing it to shutdown which leads to a recover-run on next startup.
> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not
> exist
> Is server running?
>  * Forced shutdown failed!!! Something is wrong with your system,
> please take care of it manually.
>   [ ok ]
>  * Starting PostgreSQL ...
> waiting for server to start...could not start server [ !! ]
>  * The pid-file doesn't exist but pg_ctl reported a running server.
>  * Please check whether there is another server running on the same
> port or read the log-file.
>
>
> If you're curious, the settings I changed in postgresql.conf are as
> follows:
>
> OLD: shared_buffers = 24MB
> NEW: shared_buffers = 128MB
>
> OLD: #log_destination = 'stderr'
> NEW: log_destination = 'stderr'
>
> OLD: #logging_collector = off
> NEW: logging_collector = on
>
> OLD: #log_directory = 'pg_log'
> NEW: log_directory = '/var/log/pg_log'
>
> OLD: #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
> NEW: log_filename = 'postgresql-%Y-%m-%d.log'
>
> OLD: #log_rotation_age = 1d
> NEW: log_rotation_age = 1d
>
> OLD: #log_min_duration_statement = -1
> NEW: log_min_duration_statement = 0
>
> OLD: #log_duration = off
> NEW: log_duration = on
>
> OLD: #log_line_prefix = ''
> NEW: log_line_prefix = '%t [%p]: [%l-1] '
>
> Note that the live and development configs are identical except for
> the port number.
>
> Netstat data
>
> # netstat -a
> Active Internet connections (servers and established)
> Proto Recv-Q Send-Q Local Address           Foreign Address         State
> tcp        0      0 localhost:mysql         *:*                     LISTEN
> tcp        0      0 *:sunrpc                *:*                     LISTEN
> tcp        0      0 *:39571                 *:*                     LISTEN
> tcp6       0      0 [::]:http-alt           [::]:*                  LISTEN
> tcp6       0      0 [::]:http               [::]:*                  LISTEN
> tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
> tcp6       0      0 [::]:https              [::]:*                  LISTEN
> tcp6       0    732 linode-dev.prehisto:ssh 217.154.203.18:4244     ESTABLISHED
> udp        0      0 *:779                   *:*
> udp        0      0 *:32781                 *:*
> udp        0      0 *:sunrpc                *:*
> Active UNIX domain sockets (servers and established)
> Proto RefCnt Flags       Type       State         I-Node   Path
> unix  2      [ ACC ]     STREAM     LISTENING     7294     @/tmp/fam-root-
> unix  2      [ ACC ]     STREAM     LISTENING     1745591  /var/run/cgisock.19119
> unix  2      [ ]         DGRAM                    179      @/org/kernel/udev/udevd
> unix  2      [ ACC ]     STREAM     LISTENING     6073     /dev/log
> unix  2      [ ACC ]     STREAM     LISTENING     7275     /var/run/fail2ban/fail2ban.sock
> unix  2      [ ACC ]     STREAM     LISTENING     88239    /var/run/mysqld/mysqld.sock
> unix  3      [ ]         STREAM     CONNECTED     1799425  /dev/log
> unix  3      [ ]         STREAM     CONNECTED     1799424
> unix  3      [ ]         STREAM     CONNECTED     1799422
> unix  3      [ ]         STREAM     CONNECTED     1799421
> unix  3      [ ]         STREAM     CONNECTED     1746483  /dev/log
> unix  3      [ ]         STREAM     CONNECTED     1746482
> unix  3      [ ]         STREAM     CONNECTED     1746384  /dev/log
> unix  3      [ ]         STREAM     CONNECTED     1746382
> unix  3      [ ]         STREAM     CONNECTED     7427     /dev/log
> unix  3      [ ]         STREAM     CONNECTED     7426
> unix  3      [ ]         STREAM     CONNECTED     7298     @/tmp/fam-root-
> unix  3      [ ]         STREAM     CONNECTED     7295
> unix  3      [ ]         STREAM     CONNECTED     6511     /dev/log
> unix  3      [ ]         STREAM     CONNECTED     6508
>
> I'm sure I've had this problem before (a few months ago on my home PC)
> and never did solve it.
>

Re: [GENERAL] Can't restart Postgres

2008-10-29 Thread Richard Huxton
Thom Brown wrote:
> Hi,
> 
> I've got a development virtual server which matches live exactly
> except for the fact that Postgres is running on a different port

What do you mean by "virtual server"? And does it affect definitions of
localhost or shared-memory allocation?

> which
> is not used by anything else.  Postgres was running fine until I
> updated postgresql.conf to enhance logging and make better use of
> system resources.
> 
> Here's the problem:
> 
> # /etc/init.d/postgresql-8.3 restart
>  * Stopping PostgreSQL (this can take up to 90 seconds) ...
> pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not exist
> Is server running?

Your system isn't set up the way you think it is - the .pid file is
missing. Is it looking in the right place?

> waiting for server to start...could not start server [ !! ]
>  * The pid-file doesn't exist but pg_ctl reported a running server.

> If you're curious, the settings I changed in postgresql.conf are as follows:
> 
> OLD: shared_buffers = 24MB
> NEW: shared_buffers = 128MB

This can cause problems if your kernel doesn't allocate enough
shared-memory, but you should get a different error message.

> Note that the live and development configs are identical except for
> the port number.

Are they reading the right config files?

> Netstat data
> 
> # netstat -a
> Active Internet connections (servers and established)
[snip]

I don't see postgresql here at all. Mysql, fam, fail2ban but not PG.

> If anyone can offer some insight I'd be grateful.

If you've got two installations on the same machine having problems then
either:
1. They're *not* running on different ports with different data
directories (check you're using the correct config file for each)

2. They're having problems with shared memory (in which case you should
see a different error message).

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] psql screen size

2008-10-29 Thread Alvaro Herrera
Gregory Stark wrote:

> Hm, this Bash FAQ seems to indicate this shouldn't be a problem -- the whole
> process group is supposed to get the window size. Psql isn't doing the job
> control stuff the FAQ entry talks about so the pager ought to be in the same
> process group. So I'm puzzled.

How do you know that the pager is in the same process group?  Is the
process group something that's inherited automatically on fork()?  The
setsid() manpage says so, but maybe we're missing something else.

I can confirm that when the pager is open, psql does not resize
properly.  Maybe psql is ignoring signals while the pager is open, or
something.

Hmm, a quick experiment (8.3) reveals that psql doesn't seem to do
anything if I SIGWINCH it manually while the pager is open ...

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



[GENERAL] Can't restart Postgres

2008-10-29 Thread Thom Brown
Hi,

I've got a development virtual server which matches live exactly
except for the fact that Postgres is running on a different port which
is not used by anything else.  Postgres was running fine until I
updated postgresql.conf to enhance logging and make better use of
system resources.

Here's the problem:

# /etc/init.d/postgresql-8.3 restart
 * Stopping PostgreSQL (this can take up to 90 seconds) ...
pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not exist
Is server running?
 * Some clients did not disconnect within 30 seconds.
 * Going to shutdown the server anyway.
pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not exist
Is server running?
 * Shutting down the server gracefully failed.
 * Forcing it to shutdown which leads to a recover-run on next startup.
pg_ctl: PID file "/var/lib/postgresql/8.3/data/postmaster.pid" does not exist
Is server running?
 * Forced shutdown failed!!! Something is wrong with your system,
please take care of it manually.
   [ ok ]
 * Starting PostgreSQL ...
waiting for server to start...could not start server [ !! ]
 * The pid-file doesn't exist but pg_ctl reported a running server.
 * Please check whether there is another server running on the same
port or read the log-file.


If you're curious, the settings I changed in postgresql.conf are as follows:

OLD: shared_buffers = 24MB
NEW: shared_buffers = 128MB

OLD: #log_destination = 'stderr'
NEW: log_destination = 'stderr'

OLD: #logging_collector = off
NEW: logging_collector = on

OLD: #log_directory = 'pg_log'
NEW: log_directory = '/var/log/pg_log'

OLD: #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
NEW: log_filename = 'postgresql-%Y-%m-%d.log'

OLD: #log_rotation_age = 1d
NEW: log_rotation_age = 1d

OLD: #log_min_duration_statement = -1
NEW: log_min_duration_statement = 0

OLD: #log_duration = off
NEW: log_duration = on

OLD: #log_line_prefix = ''
NEW: log_line_prefix = '%t [%p]: [%l-1] '

Note that the live and development configs are identical except for
the port number.

Netstat data

# netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:mysql         *:*                     LISTEN
tcp        0      0 *:sunrpc                *:*                     LISTEN
tcp        0      0 *:39571                 *:*                     LISTEN
tcp6       0      0 [::]:http-alt           [::]:*                  LISTEN
tcp6       0      0 [::]:http               [::]:*                  LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
tcp6       0      0 [::]:https              [::]:*                  LISTEN
tcp6       0    732 linode-dev.prehisto:ssh 217.154.203.18:4244     ESTABLISHED
udp        0      0 *:779                   *:*
udp        0      0 *:32781                 *:*
udp        0      0 *:sunrpc                *:*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     7294     @/tmp/fam-root-
unix  2      [ ACC ]     STREAM     LISTENING     1745591  /var/run/cgisock.19119
unix  2      [ ]         DGRAM                    179      @/org/kernel/udev/udevd
unix  2      [ ACC ]     STREAM     LISTENING     6073     /dev/log
unix  2      [ ACC ]     STREAM     LISTENING     7275     /var/run/fail2ban/fail2ban.sock
unix  2      [ ACC ]     STREAM     LISTENING     88239    /var/run/mysqld/mysqld.sock
unix  3      [ ]         STREAM     CONNECTED     1799425  /dev/log
unix  3      [ ]         STREAM     CONNECTED     1799424
unix  3      [ ]         STREAM     CONNECTED     1799422
unix  3      [ ]         STREAM     CONNECTED     1799421
unix  3      [ ]         STREAM     CONNECTED     1746483  /dev/log
unix  3      [ ]         STREAM     CONNECTED     1746482
unix  3      [ ]         STREAM     CONNECTED     1746384  /dev/log
unix  3      [ ]         STREAM     CONNECTED     1746382
unix  3      [ ]         STREAM     CONNECTED     7427     /dev/log
unix  3      [ ]         STREAM     CONNECTED     7426
unix  3      [ ]         STREAM     CONNECTED     7298     @/tmp/fam-root-
unix  3      [ ]         STREAM     CONNECTED     7295
unix  3      [ ]         STREAM     CONNECTED     6511     /dev/log
unix  3      [ ]         STREAM     CONNECTED     6508

I'm sure I've had this problem before (a few months ago on my home PC)
and never did solve it.

If anyone can offer some insight I'd be grateful.

Thanks

Thom



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread Grzegorz Jaśkiewicz
2008/10/29 小波 顾 <[EMAIL PROTECTED]>

> 1. Small integers of types that took 8 bytes in the past now take only 4 or 2
> bytes if they are not so large.
>
So what actually happens if I have a table with a few million values that fit
in 2 bytes, but all of a sudden I am going to add another column with
something that requires 8 bytes?  An update on all columns?  I am actually even
against varchars in my databases, so something like that sounds at least
creepy.

-- 
GJ


Re: [GENERAL] How can I tell, in a trigger, if a value in a string is an integer.

2008-10-29 Thread Tomasz Myrta

Philip W. Dalrymple napisal 29.10.2008 15:26:


 set_config('session.sessionid','23',false);


I validate strings with regexp. This expression returns true:

select '23' ~'^-{0,1}[0-9]+$';

If it's true - you can cast it safely:
select cast('23' as integer)
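The same check translates directly to other languages.  A small Python sketch
(the function name is invented for the example; `-?` is shorthand for the
`-{0,1}` in Tomasz's pattern), including the fall-back to -1 that the original
question asks for:

```python
import re

# Same pattern as the SQL expression: an optional minus sign
# followed by one or more digits, anchored at both ends.
INT_RE = re.compile(r'^-?[0-9]+$')

def to_int_or_default(s: str, default: int = -1) -> int:
    """Return int(s) if s looks like an integer, else the default."""
    return int(s) if INT_RE.match(s) else default
```

Validating first and only then casting avoids the error a bare cast of a
non-numeric string would raise.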

--
Regards,
Tomasz Myrta



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-29 Thread 小波 顾
Sorry for following up so late; actually I mean compression features like what 
other commercial RDBMSs have, such as DB2 9.5 or SQL Server 2008.  In those 
databases, all data types in all tables can be compressed.  The following are 
two features we think are very useful:
1. Small integers of types that took 8 bytes in the past now take only 4 or 2 
bytes if they are not so large.
2. If two values have the same text or pattern, only one is stored, and that 
one is compressed with traditional compression methods too.



To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Are there plans to add data compression feature to postgresql?
From: [EMAIL PROTECTED]
Date: Mon, 27 Oct 2008 10:19:31

Note that most data stored in the TOAST table is compressed, i.e. a Text 
type with length greater than around 2K will be stored in the TOAST table.  By 
default data in the TOAST table is compressed; this can be overridden.  However 
I expect that compression will reduce the performance of certain queries. 
http://www.postgresql.org/docs/8.3/interactive/storage-toast.html

Out of interest, in what context did you want compression?



Ron Mayer <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
27/10/2008 07:34

To: 小波 顾 <[EMAIL PROTECTED]>
cc: "pgsql-general@postgresql.org"
Subject: Re: [GENERAL] Are there plans to add data compression feature to postgresql?

You might want to try using a file system (ZFS, NTFS) that does compression,
depending on what you're trying to compress.

Re: [GENERAL] Replication with slony-I

2008-10-29 Thread Raymond O'Donnell
On 29/10/2008 12:55, Abdul Rahman wrote:

>   11.  Got SUCCESS up to this point. Now I think this is *pgbench* which
>   is responsible for replicating tables. 

No - it's Slony which is responsible for the replication. pgbench is
just executing lots of SQL commands - SELECTs, UPDATEs, INSERTs, etc -
while Slony is replicating the changes from one set to the other.

> Now my question is that how to include the tables which are already
> present in an existing database

Just add more SET ADD TABLE commands to your slonik script. Use the
existing lines as a model, so you'll get something like:

  set add table (set id=1, origin=1, id=4, fully qualified name =
'public.new_table', comment='Another table');


Make sure that the id is different for each table.

HTH,

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Re: [GENERAL] Piping CSV data to psql when executing COPY .. FROM STDIN (Solved)

2008-10-29 Thread Sam Mason
On Wed, Oct 29, 2008 at 12:11:43PM +0200, Allan Kamau wrote:
> I am however unable to do the same successfully (the Java code simply 
> hangs, probably as a result of the second psql not getting the input to 
> it) from Java code using objects of ProcessBuilder and Process.

Why don't you use the JDBC driver?  You wouldn't have to worry about
file descriptors blocking at all that way.
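A common cause of the hang described above is the parent writing to the
child's stdin while nothing drains the child's output pipe, so both sides
block.  A Python sketch of the safe pattern (with `cat` standing in for
`psql -c "COPY ... FROM STDIN"`, which is an assumption for the example):

```python
import subprocess

csv_data = "1,alice\n2,bob\n"

# run() with input= uses communicate() under the hood: it writes stdin
# and reads stdout concurrently, then closes stdin so the child sees
# end-of-file.  Writing and reading sequentially by hand is what leads
# to the classic pipe deadlock.
result = subprocess.run(
    ["cat"],              # stand-in for the real psql COPY command
    input=csv_data,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout, end="")
```

The same principle applies to Java's ProcessBuilder: feed stdin and consume
stdout/stderr concurrently, and close the child's stdin when done.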


  Sam



Re: [GENERAL] [Help] Config Failure on Mac OSX: psqlodbc-08.03.0300

2008-10-29 Thread Tom Lane
Brent Austin <[EMAIL PROTECTED]> writes:
> I could swear that is what I did... or is it not? That is why I sent a
> copy/paste of that mess from my terminal:
> it showed that I do have pg_config installed and that I did set my path.
> That is why I am asking for help, because configure says I did neither, when
> it plainly shows I did (to the best of my knowledge).

Well, this is one of a number of reasons why it's a bad idea to sudo
right there ... presumably, sudo is resetting the environment to
something it considers safe.

regards, tom lane



[GENERAL] How can I tell, in a trigger, if a value in a string is an integer.

2008-10-29 Thread Philip W. Dalrymple
I have a trigger function that, for every UPDATE or INSERT on
a table, inserts a row in an "audit" table that contains some trace
data, the key of the first table, and the changed columns (unchanged
cols are NULL). I have this working with a plpgsql function except
for one of the trace fields (called the audit_session), which is an 
integer. The value of this should be -1 unless a variable is set
for the session using 

 set_config('session.sessionid','23',false);

which can be read with 

current_setting('session.sessionid')

This will allow the web server to link an action to a "session"
(data stored in another table, but starting from 1 and going up as 
an integer).

I was not able to make the trigger function work in plperl (where
forcing the invalid, unset, or unknown case to -1 would be easy).

Any ideas on the best way to handle this?

-- 
This email, and any files transmitted with it, is confidential 
and intended solely for the use of the individual or entity to 
whom they are addressed.  If you have received this email in error, 
please advise [EMAIL PROTECTED] .

New MDT Software Headquarters (As of July 1, 2008):
3480 Preston Ridge Road
Suite 450
Alpharetta, GA 30005


Philip W. Dalrymple III <[EMAIL PROTECTED]>
MDT Software - The Change Management Company
+1 678 297 1001
Fax +1 678 297 1003




Re: [GENERAL] [Help] Config Failure on Mac OSX: psqlodbc-08.03.0300

2008-10-29 Thread Brent Austin
I could swear that is what I did... or is it not? That is why I sent a 
copy/paste of that mess from my terminal:
it showed that I do have pg_config installed, and it showed that I did set my 
path. That is why I am asking for help: configure is saying I did neither when 
it plainly shows I did (to the best of my knowledge).

client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ 
/usr/local/pgsql/bin/pg_config 
BINDIR = /usr/local/pgsql/bin
DOCDIR = /usr/local/pgsql/doc
INCLUDEDIR = /usr/local/pgsql/include
PKGINCLUDEDIR = /usr/local/pgsql/include
INCLUDEDIR-SERVER = /usr/local/pgsql/include/server
LIBDIR = /usr/local/pgsql/lib
PKGLIBDIR = /usr/local/pgsql/lib
LOCALEDIR = 
MANDIR = /usr/local/pgsql/man
SHAREDIR = /usr/local/pgsql/share
SYSCONFDIR = /usr/local/pgsql/etc
PGXS = /usr/local/pgsql/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = 
CC = gcc -no-cpp-precomp
CPPFLAGS = 
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline 
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
CFLAGS_SL = 
LDFLAGS = 
LDFLAGS_SL = 
LIBS = -lpgport -lz -lreadline -lm 
VERSION = PostgreSQL 8.3.4
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ sudo ./configure
Password:
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for pg_config... no
configure: error: pg_config not found (set PG_CONFIG environment variable)
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ export 
PG_CONFIG=/usr/local/pgsql/bin/pg_config
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ sudo ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for pg_config... no
configure: error: pg_config not found (set PG_CONFIG environment variable)


From: Albe Laurenz <[EMAIL PROTECTED]>
To: Brent Austin  *EXTERN* <[EMAIL PROTECTED]>; Grzegorz Jaśkiewicz <[EMAIL 
PROTECTED]>
Cc: pgsql-general@postgresql.org
Sent: Wednesday, October 29, 2008 7:14:37 AM
Subject: Re: [GENERAL] [Help] Config Failure on Mac OSX: psqlodbc-08.03.0300

Brent Austin wrote:

> Configure still fails... I've tried everything I can figure out.

[...]

> configure: error: pg_config not found (set PG_CONFIG environment variable)

It's quite simple:

- Find out where pg_config is.
- If you don't have it, install the appropriate package.
- Make sure it's in your PATH or set PG_CONFIG.

Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



  

Re: [GENERAL] Decreasing WAL size effects

2008-10-29 Thread Greg Smith

On Tue, 28 Oct 2008, Jason Long wrote:

I also have to ship them off site using a T1 so setting the time to 
automatically switch files will just waste bandwidth if they are still going 
to be 16 MB anyway.


The best way to handle this is to clear the unused portion of the WAL file 
and then compress it before sending over the link.  There is a utility 
named pg_clearxlogtail, available at 
http://www.2ndquadrant.com/replication.htm, that handles the first part of 
that; you may find it useful here.


This reminds me yet again that pg_clearxlogtail should probably get added 
to the next commitfest for inclusion into 8.4; it's really essential for a 
WAN-based PITR setup and it would be nice to include it with the 
distribution.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum and relfrozenxid

2008-10-29 Thread Glyn Astill

> 
> If there's no update activity on that table, this is to
> be expected.
> 

Hmm, there is activity on the table, so I'm guessing I've not got autovacuum 
tuned aggressively enough.
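A quick way to see which tables are closest to the freeze threshold is a
plain catalog query like the one below (a generic sketch, not from this
thread):

```sql
-- Tables whose oldest unfrozen transaction id is largest, i.e. the
-- ones closest to vacuum_freeze_min_age.
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```

Comparing xid_age against the vacuum_freeze_min_age setting shows how
urgently each table needs a (auto)vacuum pass.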




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication with slony-I

2008-10-29 Thread Abdul Rahman




The detail of my work is as under:

As I mentioned earlier, the platform is Windows XP using PostgreSQL 8.2,
in which Slony-I is included, and all databases are at localhost.

After reading the document entitled "Replicating Your First Database",
which is written for Linux, I adapted the instructions for Windows and
performed the following tasks:

1. I didn't find any line like tcpip_socket=true in my postgresql.conf,
   because there is another option, listen_addresses = '*'. I changed it
   to listen_addresses = 'localhost'.

2. In pg_hba.conf, I changed the line

      host    all all 127.0.0.1/32  md5

   to

      host    all all 127.0.0.1/32  trust

3. Then I created 2 databases named master and slave and executed the
   command

      pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME

   after adapting it, i.e.:

      pgbench -i -s 1 -U postgres -h localhost master

4. Slony-I does not automatically copy table definitions from a master
   when a slave subscribes to it, so we need to import this data. I did
   this with pg_dump:

      pg_dump -s -U postgres -h localhost master > schema.sql
      psql -U postgres -h localhost slave < schema.sql

5. Executed the command:

      pgbench -s 1 -c 5 -t 1000 -U postgres -h localhost master

6. Adapted the given script (and used slonik to run it) as follows:

      cluster name = pgbench;

      node 1 admin conninfo = 'dbname=master host=localhost user=postgres';
      node 2 admin conninfo = 'dbname=slave host=localhost user=postgres';

      init cluster ( id=1, comment = 'Master Node');

      table add key (node id = 1, fully qualified name = 'public.history');

      create set (id=1, origin=1, comment='All pgbench tables');
      set add table (set id=1, origin=1, id=1, fully qualified name =
          'public.accounts', comment='accounts table');
      set add table (set id=1, origin=1, id=2, fully qualified name =
          'public.branches', comment='branches table');
      set add table (set id=1, origin=1, id=3, fully qualified name =
          'public.tellers', comment='tellers table');
      set add table (set id=1, origin=1, id=4, fully qualified name =
          'public.history', comment='history table', key = serial);

      store node (id=2, comment = 'Slave node');
      store path (server = 1, client = 2, conninfo='dbname=master
          host=localhost user=postgres');
      store path (server = 2, client = 1, conninfo='dbname=slave
          host=localhost user=postgres');
      store listen (origin=1, provider = 1, receiver = 2);
      store listen (origin=2, provider = 2, receiver = 1);

7. On MASTERHOST (localhost), executed the command to start replication:

      slon pgbench "dbname=master user=postgres host=localhost"

   On SLAVEHOST (localhost), executed the command to start replication:

      slon pgbench "dbname=slave user=postgres host=localhost"

8. To start replicating the 4 pgbench tables (set 1) from the master
   (node id 1) to the slave (node id 2), executed the following script:

      cluster name = pgbench;

      node 1 admin conninfo = 'dbname=master host=localhost user=postgres';
      node 2 admin conninfo = 'dbname=slave host=localhost user=postgres';

      subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);

Got SUCCESS up to this point. Now I think it is pgbench which is
responsible for the tables being replicated.

Now my question is: how do I include the tables that are already present
in an existing database?

Re: [GENERAL] [Help] Config Failure on Mac OSX: psqlodbc-08.03.0300

2008-10-29 Thread Tom Lane
Brent Austin <[EMAIL PROTECTED]> writes:
> Configure still fails... I've tried everything I can figure out.
> Last login: Wed Oct 29 02:58:10 on ttys000
> client-6X-1XX-17-XX4:~ brent1a$ cd /psqlodbc-08.03.0300 
> client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ sudo ./configure
> Password:
> checking for a BSD-compatible install... /usr/bin/install -c
> checking whether build environment is sane... yes
> checking for gawk... gawk
> checking whether make sets $(MAKE)... yes
> checking whether to enable maintainer-specific portions of Makefiles... no
> checking for pg_config... no
> configure: error: pg_config not found (set PG_CONFIG environment variable)
> client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ 
> /usr/local/pgsql/bin/pg_config 

Putting /usr/local/pgsql/bin in your PATH would probably do it.
Or you could take the script's advice and set PG_CONFIG.

Just as an aside, I wouldn't recommend building packages as root.
You might need to do the install step that way, if you are installing
into a location that only root can write to; but not configure or
build.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql screen size

2008-10-29 Thread Merlin Moncure
On Mon, Oct 27, 2008 at 3:59 AM, wstrzalka <[EMAIL PROTECTED]> wrote:
> I'm using psql mainly in putty window.
>
> I have a problem while resizing the window.
> When changing the window size (and those chars per row) psql output
> becomes mess, the only rescue is to exit and run the psql again. It
> looks like it's initializing the output params at startup and don't
> refresh it in runtime.
> Is there any way to deal with it? Or maybe some patch to psql could be
> applied? If shell console or vi can work this way why not psql?

I see this once in a while with ssh sessions from linux to linux.  But
usually everything is worse on windows :-).

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Help] Config Failure on Mac OSX: psqlodbc-08.03.0300

2008-10-29 Thread Albe Laurenz
Brent Austin wrote:

> Configure still fails... I've tried everything I can figure out.

[...]

> configure: error: pg_config not found (set PG_CONFIG environment variable)

It's quite simple:

- Find out where pg_config is.
- If you don't have it, install the appropriate package.
- Make sure it's in your PATH or set PG_CONFIG.

Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum and relfrozenxid

2008-10-29 Thread Tom Lane
Glyn Astill <[EMAIL PROTECTED]> writes:
> I've noticed age(relfrozenxid) of some of our tables approaching 
> vacuum_freeze_min_age, am I right in thinking this is nothing to worry about, 
> autovacuum will just get invoked for those tables?

If there's no update activity on that table, this is to be expected.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sum the text of a text field

2008-10-29 Thread Sim Zacks
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

It was easier than I thought.

I built a custom function because I wanted each field value to be on its
own line.

create or replace function textsum(text,text) returns text as
$$
    select coalesce($1,'') ||
           case when $1 is null then ''
                else case when $2 is not null then E'\n' else '' end
           end ||
           coalesce($2,'');
$$ language sql;

  create aggregate sum(text)
  (
sfunc=textsum,
stype=text
  );
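With the aggregate in place, a grouped query can concatenate the field
directly; the table and column names below are hypothetical:

```sql
-- Assumed table: t(grp integer, txt text)
SELECT grp, sum(txt) AS combined
FROM t
GROUP BY grp;   -- txt values in each group joined with newlines
```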

Sim

Sim Zacks wrote:
> Is there any way (aside from creating a new aggregate type) to sum the
> text in a text field. I would like to group on a query and concatenate
> all the values of a specific field in the result set.
> 
> This is a common practice and currently I find myself writing functions
> to iterate the data to basically build my own text sum function.
> 
> I'm going to look into building my own aggregate type, but if there is
> something out there that works it will make life simpler.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkkITc0ACgkQjDX6szCBa+qcawCg2N9Xt9gnX0tvhaYE7iDJE5Lt
pwYAoIRipArzjODonqzr00peTP933yPY
=DNJz
-END PGP SIGNATURE-

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UUID-OSSP Contrib Module Compilation Issue

2008-10-29 Thread Tom Lane
Bruce McAlister <[EMAIL PROTECTED]> writes:
>> Bizarre ... I've never heard of a Unix system that didn't consider that
>> a default place to look.  Unless this is a 64-bit machine and uuid
>> should have installed itself in /usr/lib64?

> It is a rather peculiar issue, I also assumed that it would check the
> standard locations, but I thought I would try it anyway and see what
> happens.

> The box is indeed a 64-bit system, but the packages being built are all
> 32-bit, and therefore all libraries are built in the standard
> locations.

Hmm ... it sounds like some part of the compile toolchain didn't get the
word about wanting to build 32-bit.  Perhaps the switch you really need
is along the lines of CFLAGS=-m32.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Autovacuum and relfrozenxid

2008-10-29 Thread Glyn Astill
Hi chaps,

I've noticed age(relfrozenxid) of some of our tables approaching 
vacuum_freeze_min_age, am I right in thinking this is nothing to worry about, 
autovacuum will just get invoked for those tables?

Even if it isn't, should I be tuning autovacuum so that those tables should 
have been vacuumed before it comes to this?

Glyn




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sum the text of a text field

2008-10-29 Thread A. Kretschmer
On Wed, 29 Oct 2008 at 13:20:59 +0200, Sim Zacks wrote the following:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Is there any way (aside from creating a new aggregate type) to sum the
> text in a text field. I would like to group on a query and concatenate
> all the values of a specific field in the result set.

Do you want to concatenate all rows into one string?

,----[  Column to string with delimiter, David Fetter  ]
| test=*# select * from w;
|    t
| --------
|  test
|  foo
|  bar
|  foobar
| (4 rows)
|
| test=*# SELECT array_to_string(ARRAY(SELECT t FROM w), ',');
|   array_to_string
| ---------------------
|  test,foo,bar,foobar
| (1 row)
`----



HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] sum the text of a text field

2008-10-29 Thread Sim Zacks
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Is there any way (aside from creating a new aggregate type) to sum the
text in a text field. I would like to group on a query and concatenate
all the values of a specific field in the result set.

This is a common practice and currently I find myself writing functions
to iterate the data to basically build my own text sum function.

I'm going to look into building my own aggregate type, but if there is
something out there that works it will make life simpler.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkkIRxsACgkQjDX6szCBa+pClACgi1gV9FsE4Hj5bdOueHENwq2l
lo0An09lAf160qz/nYMnb0LLCF6VU7Qs
=pkwu
-END PGP SIGNATURE-

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql screen size

2008-10-29 Thread Gregory Stark

Gregory Stark <[EMAIL PROTECTED]> writes:

> Could you define "messed up"? 
>
> What I see is that the query output is formatted correctly but readline still
> thinks the screen is the old size. (This is in CVS HEAD -- this code was
> definitely different in 8.3 and before so the behaviour may be different).
>
> Perhaps we need to tell readline whenever we run a subprocess and it may have
> missed screen resize signals.

Hm, this Bash FAQ seems to indicate this shouldn't be a problem -- the whole
process group is supposed to get the window size. Psql isn't doing the job
control stuff the FAQ entry talks about so the pager ought to be in the same
process group. So I'm puzzled.

http://tiswww.case.edu/php/chet/bash/FAQ

->  E11) If I resize my xterm while another program is running, why doesn't bash
->   notice the change?
->  
->  This is another issue that deals with job control.
->  
->  The kernel maintains a notion of a current terminal process group.  Members
->  of this process group (processes whose process group ID is equal to the
->  current terminal process group ID) receive terminal-generated signals like
->  SIGWINCH.  (For more details, see the JOB CONTROL section of the bash
->  man page.)
->  
->  If a terminal is resized, the kernel sends SIGWINCH to each member of
->  the terminal's current process group (the `foreground' process group).
->  
->  When bash is running with job control enabled, each pipeline (which may be
->  a single command) is run in its own process group, different from bash's
->  process group.  This foreground process group receives the SIGWINCH; bash
->  does not.  Bash has no way of knowing that the terminal has been resized.
->  
->  There is a `checkwinsize' option, settable with the `shopt' builtin, that
->  will cause bash to check the window size and adjust its idea of the
->  terminal's dimensions each time a process stops or exits and returns control
->  of the terminal to bash.  Enable it with `shopt -s checkwinsize'.


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's RemoteDBA services!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication with slony-I

2008-10-29 Thread Abdul Rahman
Ok Ray!

I will send the detail of my work soon.

Thanks to All participants. 



  

Re: [GENERAL] Replication with slony-I

2008-10-29 Thread Abdul Rahman
The link to the document entitled "Replicating Your First Database" is as 
follows:

http://slony1.projects.postgresql.org/slony1-1.2.6/doc/adminguide/firstdb.html





  

Re: [GENERAL] Replication with slony-I

2008-10-29 Thread Raymond O'Donnell
On 29/10/2008 11:00, Abdul Rahman wrote:

> But, the implementation on an existing database is not working for the
> tables that already exist in the database. The replication is happening
> only among the four tables created via the pgbench tool (accounts,
> history, branches, tellers).
> 
> What's going on?!

You'll need to tell us a *lot* more about what you've done, show us your
slonik scripts, etc etc - otherwise we're just guessing.

Also, it may be worth your while subscribing to the slony1-general
mailing list - details on www.slony.info.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication with slony-I

2008-10-29 Thread Abdul Rahman
I followed the document entitled "Replicating Your First Database" and got 
SUCCESS in implementing the given example. 

But the implementation on an existing database is not working for the tables 
that already exist in the database. The replication is happening only among 
the four tables created via the pgbench tool (accounts, history, branches, 
tellers). 

What's going on?!

--- On Tue, 10/28/08, Abdul Rahman <[EMAIL PROTECTED]> wrote:
From: Abdul Rahman <[EMAIL PROTECTED]>
Subject: Re: [GENERAL] Replication with slony-I
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Date: Tuesday, October 28, 2008, 9:05 AM


Thanks a lot  Ray!

I have found the document and going to follow this. If I face any difficulty 
then will prompt to archive. 

Regards,

Abdul.

--- On Mon, 10/27/08, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
From: Raymond O'Donnell <[EMAIL PROTECTED]>
Subject: Re: [GENERAL] Replication with slony-I
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Date: Monday, October 27, 2008, 7:17 PM

On 27/10/2008 11:17, Abdul Rahman wrote:
> May any one support step by step procedure for the replication with
> slony-I in windows xp.

There's a pretty good step-by-step guide in the Slony documentation - I
don't have it to hand, but it's entitled "Replicating Your First
Database" or something like that. As I recall, it's based on a *nix
setup, but Windows is mentioned too.

Off the top of my head, the only major difference is that instead of
running a separate slon daemon for each node, on Windows you run a
single slon as a service, and then invoke it with "/addengine" for each
node...it's all in the docs.

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



  


  

Re: [GENERAL] psql screen size

2008-10-29 Thread Gregory Stark

wstrzalka <[EMAIL PROTECTED]> writes:

> On 27 Oct, 13:16, [EMAIL PROTECTED] (Sam Mason) wrote:
>> On Mon, Oct 27, 2008 at 01:59:42AM -0700, wstrzalka wrote:
>>
>> > When changing the window size (and those chars per row) psql output
>> > becomes mess, the only rescue is to exit and run the psql again. It
>> > looks like it's initializing the output params at startup and don't
>> > refresh it in runtime.

At least in CVS HEAD it checks before every query output.

However...

>> Resizing the window when entering SQL works OK for me, but resizing when
>> inside my pager (normally the "less" utility, when you get more results
>> than will fit on the screen) causes psql to be confused when I return to
>> it.
>
> Yes. This is exactly the case. When I browse large paged query result
> and resize during that, the screen is messed up.

Could you define "messed up"? 

What I see is that the query output is formatted correctly but readline still
thinks the screen is the old size. (This is in CVS HEAD -- this code was
definitely different in 8.3 and before so the behaviour may be different).

Perhaps we need to tell readline whenever we run a subprocess and it may have
missed screen resize signals.

It's easy enough to work around, just resize the window again a little bit
once you're at the prompt. Readline notices that and adjusts.


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql and Mac OS X

2008-10-29 Thread Isak Hansen
On Tue, Oct 28, 2008 at 11:41 PM, Tom Allison <[EMAIL PROTECTED]> wrote:
> I was using macports but got into a cluster-F on versions and multiple
> installs.  After a spell I had all four versions 8.0 - 8.3 installed in
> order to use postgres, ruby, perl, and rails together.

I use apple's ruby, but have postgres, perl and a lot of other
packages/libraries installed from macports, and am quite happy with
it.
I had the same multiple-version-issues I think you're experiencing at
first, believing that all deactivated ports were still required as
dependencies, but eventually got tired of the mess and did a forced
uninstall of anything inactive. My apps still worked.

Installing a single app from source isn't that much of an issue, but I
have 100+ different ports installed on my dev box. Four(*) commands in
a terminal window keeps them all current.


Isak

*)
sudo port sync                   # update package info
sudo port selfupdate             # update macports
sudo port upgrade installed      # upgrade installed ports
sudo port uninstall -f inactive  # clean up after upgrade

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Piping CSV data to psql when executing COPY .. FROM STDIN (Solved)

2008-10-29 Thread Allan Kamau
Thanks Reid, Sam and others. Indeed the oversight was once again at my 
end. As rightly pointed out, I was using a tab as the delimiter where the 
copy command "COPY abc FROM STDIN WITH CSV HEADER" expects a comma 
character and will ignore the first line.
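The two variants can be contrasted directly; this is a sketch, with the
table abc assumed from the thread:

```sql
-- CSV mode: comma-delimited fields, first input line skipped as a header
COPY abc FROM STDIN WITH CSV HEADER;

-- Tab-delimited alternative (the non-CSV form has no header skipping)
COPY abc FROM STDIN WITH DELIMITER E'\t';
```

So tab-separated data needs the explicit DELIMITER form, or has to be
converted to comma-separated input first.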


It does work appropriately when I pipe the two psql commands at the 
command line.


The contents below are not important.
I am, however, unable to do the same successfully from Java code using 
ProcessBuilder and Process objects (the Java code simply hangs, probably 
as a result of the second psql not getting its input). I have used 
threads to consume the STDOUT and STDERR streams (I write the STDOUT 
stream to a file), do the waitFor(), then read the file contents and 
write them to the STDIN stream of the second call to psql.
I have therefore resorted to password-less ssh. So far all is well. I am 
writing to a CSV file which I scp to the remote server, then I issue 
another call to psql to connect to the remote server's PostgreSQL and 
execute an SQL script containing a COPY abc FROM ..


Allan.

Sam Mason wrote:

On Tue, Oct 28, 2008 at 03:11:05PM +0200, Allan Kamau wrote:
  
Sam, I have been unable to understand your shell script well enough to 
use it. Seems I'm slow this afternoon :-)



Don't worry, I've just spent an hour going through three computers
trying to figure out why they didn't work.  In the end just decided to
give them to someone else to fix.

  
On this list I saw a message detailing using COPY as illustrated below 
(see ). When I run this command I get the following output (see 
):




COPY abc FROM STDIN WITH CSV HEADER;
\.
1  qrsta
2   zvyb


As you can see, the \. is placed at the top instead of the bottom of the 
output. This does create some error when I run this output via psql.



I'd expect postgres to try and interpret "1 qrst a" as an SQL command,
which will obviously fail.

  
I then get a datatype error when I pass to psql the following (edited) 
sql from a text editor (see )



COPY abc FROM STDIN WITH CSV HEADER;
1   qrsta
2   zvyb
\.


The error reads as follows

psql:sql/some2.sql:7: ERROR:  invalid input syntax for integer: "1   
qrsta"

CONTEXT:  COPY item_major, line 1, column id: "1   qrsta"




Sorry, I put a "CSV" mode into there as well.  It's expecting commas
between fields, not tabs.  I'd only ever seen the "HEADER" option
supported with CSV mode before, so had assumed this is what you wanted.


  Sam

  



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] using plpgsql debuggers

2008-10-29 Thread Dave Page
On Wed, Oct 29, 2008 at 6:48 AM, Ravi Chemudugunta
<[EMAIL PROTECTED]> wrote:
> hi all,
>
> has anyone here heard of / used:
>
> http://pgfoundry.org/projects/edb-debugger/
>
> This allows one to debug code, using breakpoints, single stepping etc. which
> seems really great but I haven't figured out how to use it yet?
>
> I can't figure out how to invocate the server (target) side in order to then
> get the proxy to connect to it.

Run pgAdmin with the plugin enabled in your database, and right-click
a function and select one of the debugging options.


-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql and Mac OS X

2008-10-29 Thread Grzegorz Jaśkiewicz
I use postgresql on MBP, current head, for testing and development. Just
from sources, it won't bite :)

you just have to add user postgres to your system, place $PGDATA wherever
you feel you should, and you're done.


Re: [GENERAL] postgresql and Mac OS X

2008-10-29 Thread Niklas Johansson


On 28 okt 2008, at 23.41, Tom Allison wrote:

I can get postgresql installed in three flavors:
EnterpriseDB has a dmg package for Mac.
macports has their own package.
fink also has their own package.


You also have the fourth, most delicious flavor: build it yourself;  
PostgreSQL compiles nicely on Mac OS X. I've never had any reason to  
regret not using a package manager yet.


I was using macports but got into a cluster-F on versions and  
multiple installs.  After a spell I had all four versions 8.0 - 8.3  
installed in order to use postgres, ruby, perl, and rails together.


Do you mean that Macports installed different versions of Postgres  
because the other packages had different dependencies? Don't know if  
compiling from source would help you there, but surely there must be  
some way to tell the package manager that a certain dependency  
already exists, albeit somewhere else?





Sincerely,

Niklas Johansson




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Help] Config Failure on Mac OSX: psqlodbc-08.03.0300

2008-10-29 Thread Brent Austin
Configure still fails... I've tried everything I can figure out.


Last login: Wed Oct 29 02:58:10 on ttys000
client-6X-1XX-17-XX4:~ brent1a$ cd /psqlodbc-08.03.0300 
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ sudo ./configure
Password:
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for pg_config... no
configure: error: pg_config not found (set PG_CONFIG environment variable)
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ 
/usr/local/pgsql/bin/pg_config 
BINDIR = /usr/local/pgsql/bin
DOCDIR = /usr/local/pgsql/doc
INCLUDEDIR = /usr/local/pgsql/include
PKGINCLUDEDIR = /usr/local/pgsql/include
INCLUDEDIR-SERVER = /usr/local/pgsql/include/server
LIBDIR = /usr/local/pgsql/lib
PKGLIBDIR = /usr/local/pgsql/lib
LOCALEDIR = 
MANDIR = /usr/local/pgsql/man
SHAREDIR = /usr/local/pgsql/share
SYSCONFDIR = /usr/local/pgsql/etc
PGXS = /usr/local/pgsql/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = 
CC = gcc -no-cpp-precomp
CPPFLAGS = 
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline 
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
CFLAGS_SL = 
LDFLAGS = 
LDFLAGS_SL = 
LIBS = -lpgport -lz -lreadline -lm 
VERSION = PostgreSQL 8.3.4
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ sudo ./configure
Password:
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for pg_config... no
configure: error: pg_config not found (set PG_CONFIG environment variable)
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ export 
PG_CONFIG=/usr/local/pgsql/bin/pg_config
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ sudo ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for pg_config... no
configure: error: pg_config not found (set PG_CONFIG environment variable)
client-6X-1XX-17-XX4:psqlodbc-08.03.0300 brent1a$ 




From: Grzegorz Jaśkiewicz <[EMAIL PROTECTED]>
To: Brent Austin <[EMAIL PROTECTED]>
Cc: pgsql-general@postgresql.org
Sent: Tuesday, October 28, 2008 8:23:31 AM
Subject: Re: [GENERAL] [Help] Config Failure on Mac OSX: psqlodbc-08.03.0300




Just type 'pg_config' (without quotes) in a terminal and see if it runs. If 
not, you've got to find it, for instance by using:

find /usr -name pg_config

Then, if it does come up with the whereabouts of it, stick it into the 
PG_CONFIG env variable:
export PG_CONFIG=/path/path/pg_config
and rerun configure in the same terminal (important: the env variable PG_CONFIG 
will only be seen in that scope/terminal).


  

Re: [GENERAL] UUID-OSSP Contrib Module Compilation Issue

2008-10-29 Thread Bruce McAlister
> 
> Bizarre ... I've never heard of a Unix system that didn't consider that
> a default place to look.  Unless this is a 64-bit machine and uuid
> should have installed itself in /usr/lib64?
> 

It is a rather peculiar issue, I also assumed that it would check the
standard locations, but I thought I would try it anyway and see what
happens.

The box is indeed a 64-bit system, but the packages being built are all
32-bit, and therefore all libraries are built in the standard
locations.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general