Re: [GENERAL] Using complex PRIMARY KEY

2009-10-07 Thread A. Kretschmer
In response to Zsolt :
> 
> This is our first project using PostgreSQL, where I have a problem I can't
> solve in a neat way (I assume PGSQL should provide a nice solution...).
> 
> So we have an old xBase-based program we are trying to port to PostgreSQL,
> while we should keep the original data structure; especially the ID fields
> must be kept, as these IDs are already used in other systems.
> 
> The problem is with two tables: one stores the data of houses, the other
> the data of tenants in a given house.
> 
>  
> 
> For a given house I would like to start the numbering of tenants from 1. Each
> house could have tenant_ID=1, obviously in this case the house_ID will differ.
> The combination of tenant_ID and house_ID will be the unique identifier of 
> each
> tenant.

Do you have PostgreSQL 8.4?

If yes, you can use window functions for that: row_number().

Unfortunately, at the moment I don't have access to my database to create an
example; maybe later.
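A minimal sketch of the row_number() idea for the initial migration (the
legacy_tenant source table and its name column are made up for the example):

INSERT INTO tenant (tenant_id, house_id, name)
SELECT row_number() OVER (PARTITION BY house_id ORDER BY name),
       house_id,
       name
FROM   legacy_tenant;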


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help in spi_prepare errors

2009-10-07 Thread paresh masani
You were correct: the function below worked. Giving the reference for others.

CREATE OR REPLACE FUNCTION init() RETURNS TEXT AS $$
   # Column list and values for the insert
   my $raw_row = "(\"col1\", \"col2\")";
   my @new_row = ('5', '6');
   my @col_types = ("integer", "character varying");
   # spi_prepare expects the query followed by one type name per placeholder;
   # the arrays are flattened into separate arguments when passed like this.
   my $query = "INSERT INTO mytable $raw_row VALUES (\$1, \$2)";
   my $prepared = spi_prepare($query, @col_types);
   spi_exec_prepared($prepared, @new_row);
   return "success";
$$ LANGUAGE plperl;

Thanks, Paresh

On 10/7/09, Alvaro Herrera  wrote:
> Tom Lane escribió:
>
>> I'm not much of a Perl hacker, but I seem to recall that it's possible
>> to pass an array to a function in a way that will make the array
>> elements look like separate arguments.  If you really need a dynamic
>> list of types and values, maybe there's some solution in that direction.
>
> Actually any time you pass an array as a parameter, the list is
> flattened and the function sees it as a plain list.  If that doesn't
> seem to make sense, consider that if you pass two lists they will be
> flattened to a single list and you won't be able to tell where one
> ends and the other starts.
>
> If you really want two separate lists, you need to pass them as array
> references (i.e. \@myarray).  I guess you could also pass an element
> count but that's not very perlish.
>
> --
> Alvaro Herrera    http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to troubleshoot authentication failure?

2009-10-07 Thread Kynn Jones
Thank you all!  Someone else on our team found the problem (a missing user
on the failing server).
k


Re: [GENERAL] Using complex PRIMARY KEY

2009-10-07 Thread Sam Mason
On Wed, Oct 07, 2009 at 09:19:58PM +0200, Zsolt wrote:
> For a given house I would like to start the numbering of tenants from
> 1. Each house could have tenant_ID=1, obviously in this case the
> house_ID will differ. The combination of tenant_ID and house_ID will
> be the unique identifier of each tenant.

The term that tends to float around for this concept (in PG anyway) is
"gap-less sequences"; this looks like a reasonable link:

  http://www.varlena.com/GeneralBits/130.php
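
The usual shape of that approach, sketched with the column names from this
thread (the max_tenant_id counter column on house is an assumption; the
UPDATE's row lock is what serialises concurrent inserts for the same house):

BEGIN;
UPDATE house SET max_tenant_id = max_tenant_id + 1 WHERE house_id = 1;
INSERT INTO tenant (tenant_id, house_id)
    SELECT max_tenant_id, house_id FROM house WHERE house_id = 1;
COMMIT;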

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dumpall asking for password for each database

2009-10-07 Thread Krzysztof Barlik
> What version are you running?  IIRC it should remember the password
> between databases.

8.4.0 on Linux/x86_64. It does not, and the man page clearly says:

"pg_dumpall  needs  to  connect  several  times  to  the  
PostgreSQL  server (once per database). If you use password 
authentication it will ask for a password  each  time."

Is there any specific reason for this behaviour?

Thanks to Scott and Joshua - I read the documentation about
.pgpass and PGPASSWORD; I just thought it would be a bit
more convenient and possibly safer to do it without
storing the password in files or environment variables.
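
(For reference, the ~/.pgpass format being discussed is one entry per line,
and libpq ignores the file unless it is chmod 0600; the values below are
placeholders:)

hostname:port:database:username:password
localhost:5432:*:postgres:secret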

Thanks,
Krzysztof


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Using complex PRIMARY KEY

2009-10-07 Thread Zsolt

This is our first project using PostgreSQL, where I have a problem I can't
solve in a neat way (I assume PGSQL should provide a nice solution...).

So we have an old xBase-based program we are trying to port to PostgreSQL,
while we should keep the original data structure; especially the ID fields
must be kept, as these IDs are already used in other systems.

The problem is with two tables: one stores the data of houses, the other the
data of tenants in a given house.

Something like this:

CREATE TABLE house (
    house_id SERIAL,
    ...
    CONSTRAINT pk_house_id PRIMARY KEY(house_id)
) WITHOUT OIDS;

CREATE TABLE tenant (
    tenant_id SERIAL,
    house_id INTEGER REFERENCES house(house_id),
    ...
    CONSTRAINT pk_tenant_house_id PRIMARY KEY(tenant_id, house_id)
) WITHOUT OIDS;

For a given house I would like to start the numbering of tenants from 1. Each
house could have tenant_ID=1; obviously in this case the house_ID will differ.
The combination of tenant_ID and house_ID will be the unique identifier of
each tenant.

I'm just looking for the best way to insert new rows into the tenant table
without having to keep the above-mentioned logic in mind. Should I create a
stored procedure to add a new tenant, which calculates the new
house_id + tenant_id combination (only the house_id would be passed to the
stored procedure; the tenant_id would be calculated by it)? In that case, how
can I avoid two concurrent users trying to add records at the same time and
getting an exception for violating the pk_tenant_house_id constraint? Or
should I add a new field to the house table storing the last issued tenant_id
for the given house (max_tenant_id), which would be used while adding a new
record to tenant and updated by a trigger on the tenant table? Or am I
thinking about this the wrong way, and is there a better mechanism provided by
PostgreSQL for this problem?
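
For concreteness, a sketch of the max()-based variant; locking the parent
house row first (SELECT ... FOR UPDATE) is what stops two concurrent sessions
from computing the same tenant_id, and house_id = 1 is just an example:

BEGIN;
SELECT house_id FROM house WHERE house_id = 1 FOR UPDATE;
INSERT INTO tenant (tenant_id, house_id)
    SELECT COALESCE(MAX(tenant_id), 0) + 1, 1
      FROM tenant
     WHERE house_id = 1;
COMMIT;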

   

Any other suggestions would be greatly appreciated.

Thanks!

Zsolt



Re: [GENERAL] How to troubleshoot authentication failure?

2009-10-07 Thread Greg Smith

On Wed, 7 Oct 2009, Kynn Jones wrote:

Is there some way to have Postgres dump excruciatingly thorough details 
about every single step of the authentication sequence?


There's a postgresql.conf parameter named log_min_messages that you can 
crank up until you see the detail level you're looking for.  In general, 
"debug2" is usually a good place to start when trying to nail down 
mysterious database issues.
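
For example, the relevant postgresql.conf line would look like this
(followed by a reload):

log_min_messages = debug2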


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automated row deletion

2009-10-07 Thread Dave Huber
John, I got your previous post, but I think I misunderstood something. You 
didn't mean a disk partition. I think I get what you're describing now. I had 
previously missed the link in your earlier post, too. Please accept my 
apologies for not being more diligent in my reading. I'll look into this 
partitioned table bit.

Thanks,
Dave

-Original Message-
From: John R Pierce [mailto:pie...@hogranch.com]
Sent: Wednesday, October 07, 2009 12:01 PM
To: Dave Huber
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] automated row deletion

Dave Huber wrote:
>
> A colleague gave me the following query to run:
>
>
>
> DELETE FROM data_log_20msec_table WHERE (log_id IN (SELECT log_id FROM
> data_log_20msec_table ORDER BY log_id DESC OFFSET 1000))
>
> ...
>
> This query keeps the most recent 10 million rows and deletes the
> remaining ones. If I call this once a minute, it would be deleting
> 3000 rows each time. Is there a way to optimize this statement?
> Postgres was setup with default configuration. Is there anything we
> can change in the configuration to make this run more efficiently? The
> table is defined as below:
>
> ...
>
> Is there anything we can do here that can optimize the deletion of rows?
>
>
>

as I previously wrote...


I think you'll find row deletes would kill your performance.  For time-aged
data like that, we use partitioned tables; we typically do it by the week
(keeping 6 months of history), but you might end up doing it by N*1000 PK
values or some such, so you can use your PK to determine the partition.  With
a partitioning scheme, it's much faster to add a new partition and drop the
oldest at whatever interval you need.  See
http://www.postgresql.org/docs/current/static/ddl-partitioning.html


Based on the numbers you give above, I think I'd do it by 10 log_id values,
so you'd end up with 101 partition tables, and every half hour or so you'd
truncate the oldest partition and start a new one (reusing the previously
oldest in round-robin fashion).  TRUNCATE is thousands of times faster than
DELETE.










-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Loic d'Anterroches
Hi Josua,

On Wed, Oct 7, 2009 at 6:29 PM, Joshua D. Drake  wrote:
> On Wed, 2009-10-07 at 12:51 +0200, Loic d'Anterroches wrote:
>> Hello,
>
>> My problem is that the dump increased steadily with the number of
>> schemas (now about 20s from about 12s with 850 schemas) and pg_dump is
>> now ballooning at 120MB of memory usage when running the dump.
>>
>
> And it will continue to. The number of locks that need to be acquired
> will steadily increase the amount of time it takes to back up the
> database as you add schemas and objects. This applies whether you are
> running a single dump per schema or a global dump with -Fc.
>
> I agree with the other participants in this thread that it makes more
> sense for you to use -Fc, but your speed isn't going to change all that
> much overall.

If the speed of a full dump versus a series of per-schema dumps is not going
to be dramatically different, it is time to change to a more efficient
long-term way to do the job. Anyway, the benefits of having a WAL-powered
slave are interesting, as I can use it as a failover in another datacenter in
case of problems on the main DB server. It will also add no load to the main
server, which is a good thing.

Thanks to all of you for the detailed answers; I feel good using PostgreSQL
for all my production deployments. Being able to get such good answers in a
very short amount of time, when nothing is available on the net, is really
nice.

loïc

--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automated row deletion

2009-10-07 Thread John R Pierce

Dave Huber wrote:


A colleague gave me the following query to run:

 

DELETE FROM data_log_20msec_table WHERE (log_id IN (SELECT log_id FROM 
data_log_20msec_table ORDER BY log_id DESC OFFSET 1000))


...

This query keeps the most recent 10 million rows and deletes the 
remaining ones. If I call this once a minute, it would be deleting 
3000 rows each time. Is there a way to optimize this statement? 
Postgres was setup with default configuration. Is there anything we 
can change in the configuration to make this run more efficiently? The 
table is defined as below:


...

Is there anything we can do here that can optimize the deletion of rows?

 



as I previously wrote...


I think you'll find row deletes would kill your performance.  For time-aged
data like that, we use partitioned tables; we typically do it by the week
(keeping 6 months of history), but you might end up doing it by N*1000 PK
values or some such, so you can use your PK to determine the partition.  With
a partitioning scheme, it's much faster to add a new partition and drop the
oldest at whatever interval you need.  See
http://www.postgresql.org/docs/current/static/ddl-partitioning.html


Based on the numbers you give above, I think I'd do it by 10 log_id values,
so you'd end up with 101 partition tables, and every half hour or so you'd
truncate the oldest partition and start a new one (reusing the previously
oldest in round-robin fashion).  TRUNCATE is thousands of times faster than
DELETE.
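
A sketch of that layout in 8.x-style inheritance partitioning (partition
boundaries are illustrative; inserts still have to be routed to the current
child by the application or a trigger, and constraint_exclusion should be on):

CREATE TABLE data_log_20msec_p0 (
    CHECK (log_id >= 0 AND log_id < 100000)
) INHERITS (data_log_20msec_table);

CREATE TABLE data_log_20msec_p1 (
    CHECK (log_id >= 100000 AND log_id < 200000)
) INHERITS (data_log_20msec_table);

-- retiring the oldest chunk is then a cheap metadata operation:
TRUNCATE TABLE data_log_20msec_p0;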








--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automated row deletion

2009-10-07 Thread Dave Huber
A colleague gave me the following query to run:

DELETE FROM data_log_20msec_table WHERE (log_id IN (SELECT log_id FROM 
data_log_20msec_table ORDER BY log_id DESC OFFSET 1000))

log_id is the primary key (big serial)
data_log is the table described below

This query keeps the most recent 10 million rows and deletes the remaining 
ones. If I call this once a minute, it would be deleting 3000 rows each time. 
Is there a way to optimize this statement? Postgres was setup with default 
configuration. Is there anything we can change in the configuration to make 
this run more efficiently? The table is defined as below:

CREATE TABLE data_log_20msec_table
(
  log_id bigserial NOT NULL,
  timestamp_dbl double precision,
  data bytea,
  CONSTRAINT data_log_20msec_table_pkey PRIMARY KEY (log_id)
)
WITH (OIDS=FALSE);
ALTER TABLE data_log_20msec_table OWNER TO postgres;

-- Index: data_log_20msec_table_timestamp_index

-- DROP INDEX data_log_20msec_table_timestamp_index;

CREATE INDEX data_log_20msec_table_timestamp_index
  ON data_log_20msec_table
  USING btree
  (timestamp_dbl);

Is there anything we can do here that can optimize the deletion of rows?

Much thanks to anyone who can help us out.

Regards,
Dave


Original Post:
I am inserting 250 rows of data (~2kbytes/row) every 5 seconds into a table 
(the primary key is a big serial). I need to be able to limit the size of the 
table to prevent filling up the disk. Is there a way to setup the table to do 
this automatically or do I have to periodically figure out how many rows are in 
the table and delete the oldest rows manually?
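
One direction sometimes suggested for this kind of pruning, as a sketch that
reuses the table above and leans on the bigserial PK instead of sorting the
whole table (the 10 million figure is taken from the description; with gaps
in log_id it keeps roughly, not exactly, that many rows):

DELETE FROM data_log_20msec_table
 WHERE log_id < (SELECT max(log_id) - 10000000
                   FROM data_log_20msec_table);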








Re: [GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Joshua D. Drake
On Wed, 2009-10-07 at 12:51 +0200, Loic d'Anterroches wrote:
> Hello,

> My problem is that the dump increased steadily with the number of
> schemas (now about 20s from about 12s with 850 schemas) and pg_dump is
> now ballooning at 120MB of memory usage when running the dump.
> 

And it will continue to. The number of locks that need to be acquired will
steadily increase the amount of time it takes to back up the database as you
add schemas and objects. This applies whether you are running a single dump
per schema or a global dump with -Fc.

I agree with the other participants in this thread that it makes more sense
for you to use -Fc, but your speed isn't going to change all that much
overall.

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Loic d'Anterroches
Harald,

>>settings up each time. The added benefit of doing a per schema dump is
>>that I provide it to the users directly, that way they have a full
>>export of their data.
>
> you should try the timing with
>
> pg_dump --format=c > completedatabase.dmp
>
> and then generating the separate schemas in an extra step like
>
> pg_restore --schema=%s --file=outputfilename.sql completedatabase.dmp
>
> I found that even with maximum compression
>
> pg_dump --format=c --compress=9
>
> the pg_dump compression was quicker than dump + gzip/bzip/7z compression
> afterwards.
>
> And after the dumpfile is created, pg_restore will leave your database
> alone.
> (make sure to put completedatabase.dmp on a separate filesystem). You can
> even try to run more than one pg_restore --file in parallel.

Yummy! The speed of a full dump and the benefits of the per schema
dump for the users. I will try this one tonight when the load is low.
I will keep you informed of the results.

Thanks a lot for all the good ideas, pointers!
loïc

--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Loic d'Anterroches
On Wed, Oct 7, 2009 at 5:54 PM, Bill Moran  wrote:
> In response to "Loic d'Anterroches" :
>
>> On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane  wrote:
>> > "Loic d'Anterroches"  writes:
>> >> Each night I am running:
>> >> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
>> >> /path/to/backups/%s/%s-%s.sql.gz
>> >> this for each installation, so 1100 times. Substitution strings are to
>> >> timestamp and get the right schema.
>
> Have you tested the speed without the gzip?

This is the first thing I did but in that case I was not able to get
any significant improvement. The data to gzip is very small "per
schema" so this is not the bottleneck.

> We found that compressing the dump takes considerably longer than pg_dump
> does, but pg_dump can't release its locks until gzip has completely
> processed all of the data, because of the pipe.

Good tip, I keep that in mind for the future!

Thanks,
loïc

--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Massa, Harald Armin
Loic,

>settings up each time. The added benefit of doing a per schema dump is
>that I provide it to the users directly, that way they have a full
>export of their data.

you should try the timing with

pg_dump --format=c > completedatabase.dmp

and then generating the separate schemas in an extra step like

pg_restore --schema=%s --file=outputfilename.sql completedatabase.dmp

I found that even with maximum compression

pg_dump --format=c --compress=9

the pg_dump compression was quicker than dump + gzip/bzip/7z compression
afterwards.

And after the dumpfile is created, pg_restore will leave your database
alone.
(make sure to put completedatabase.dmp on a separate filesystem). You can
even try to run more than one pg_restore --file in parallel.

Best wishes,

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
%s is too gigantic of an industry to bend to the whims of reality


Re: [GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Bill Moran
In response to "Loic d'Anterroches" :

> On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane  wrote:
> > "Loic d'Anterroches"  writes:
> >> Each night I am running:
> >> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
> >> /path/to/backups/%s/%s-%s.sql.gz
> >> this for each installation, so 1100 times. Substitution strings are to
> >> timestamp and get the right schema.

Have you tested the speed without the gzip?

We found that compressing the dump takes considerably longer than pg_dump
does, but pg_dump can't release its locks until gzip has completely
processed all of the data, because of the pipe.

By doing the pg_dump in a different step than the compression, we were
able to eliminate our table locking issues, i.e.:

pg_dump --blobs --schema=%s --no-acl -U postgres indefero > 
/path/to/backups/%s/%s-%s.sql && gzip /path/to/backups/%s/%s-%s.sql

Of course, you'll need enough disk space to store the uncompressed
dump while gzip works.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Loic d'Anterroches
On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane  wrote:
> "Loic d'Anterroches"  writes:
>> Each night I am running:
>> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
>> /path/to/backups/%s/%s-%s.sql.gz
>> this for each installation, so 1100 times. Substitution strings are to
>> timestamp and get the right schema.
>
> This seems like a pretty dumb way to go at it.  Why don't you just do
> one -Fc dump for the whole database?  If you ever actually need to
> restore a single schema, there's a pg_restore switch for that.

Thank you for your fast answer.

This is the way I started to do the work, but then I ran into issues because
the number of tables to be soft-locked at the same time kept increasing, and
I had to push the max_locks_per_transaction setting up each time. The added
benefit of doing a per-schema dump is that I provide it to the users
directly; that way they have a full export of their data. I cannot keep
increasing max_locks_per_transaction as the number of schemas grows, no?

What is the problem if I set this setting to a high value (apart from the
memory overhead per connection, as far as I understood the docs)?

>> I think that pg_dump, when looking at the objects to dump, even when it
>> is limited to a given schema, is scanning the complete database in one of
>> those calls:
>
> Yes, it has to examine all database objects in order to trace
> dependencies properly.
>
>> Is there an option: "I know what I am doing, do not look outside of
>> the schema" available which can help in my case?
>
> No.

So it looks like I may need to go a different way. I can set up a WAL-based
backup server and dump the content of each schema in an application-specific
way (JSON export); this way I keep a ready-to-use backup with the slave (at
the moment a restore is just a series of imports, one per schema), and the
JSON dump will keep my customers happy.

If you know a better solution, I would be pleased to be guided in the
right direction.
loïc

--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: LF PostgreSQL virtual Linux host provider in India and/or Asia Pac

2009-10-07 Thread Alan McKay
OK, I did find this
http://www.postgresql.org/support/professional_hosting_asia

but does anyone have experience with any of them?


-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of "In Defense of Food"

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to troubleshoot authentication failure?

2009-10-07 Thread Alvaro Herrera
Kynn Jones escribió:
> I have two Linux servers that are pretty similar to each other, and both are
> running PostgreSQL servers, but in one server a certain Perl script succeeds
> in connecting to the localhost server whereas in the other one the same
> script fails.  The error on the second server is of the form "fe_sendauth:
> no password supplied".
> I have verified all the possible reasons that I can think of for why this
> script succeeds on one server but fails on the other.  In all these respects
> the two set ups are absolutely identical: same pg_hba.conf files, same
> script parameters, same versions of Postgres, same everything.

Maybe you have a .pgpass file storing a password for one of them and not
the other?

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to troubleshoot authentication failure?

2009-10-07 Thread Kynn Jones
I have two Linux servers that are pretty similar to each other, and both are
running PostgreSQL servers, but in one server a certain Perl script succeeds
in connecting to the localhost server whereas in the other one the same
script fails.  The error on the second server is of the form "fe_sendauth:
no password supplied".
I have verified all the possible reasons that I can think of for why this
script succeeds on one server but fails on the other.  In all these respects
the two set ups are absolutely identical: same pg_hba.conf files, same
script parameters, same versions of Postgres, same everything.

I have run out of ideas!

How can I troubleshoot this further?  Is there some way to have Postgres
dump excruciatingly thorough details about every single step of the
authentication sequence?

Any other suggestions would be greatly appreciated.

Thanks!

k


Re: [GENERAL] Need help in spi_prepare errors

2009-10-07 Thread Alvaro Herrera
Tom Lane escribió:

> I'm not much of a Perl hacker, but I seem to recall that it's possible
> to pass an array to a function in a way that will make the array
> elements look like separate arguments.  If you really need a dynamic
> list of types and values, maybe there's some solution in that direction.

Actually any time you pass an array as a parameter, the list is
flattened and the function sees it as a plain list.  If that doesn't
seem to make sense, consider that if you pass two lists they will be
flattened to a single list and you won't be able to tell where one
ends and the other starts.

If you really want two separate lists, you need to pass them as array
references (i.e. \@myarray).  I guess you could also pass an element
count but that's not very perlish.

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Tom Lane
"Loic d'Anterroches"  writes:
> Each night I am running:
> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
> /path/to/backups/%s/%s-%s.sql.gz
> this for each installation, so 1100 times. Substitution strings are to
> timestamp and get the right schema.

This seems like a pretty dumb way to go at it.  Why don't you just do
one -Fc dump for the whole database?  If you ever actually need to
restore a single schema, there's a pg_restore switch for that.

> I think that pg_dump, when looking at the objects to dump, even when it is
> limited to a given schema, is scanning the complete database in one of
> those calls:

Yes, it has to examine all database objects in order to trace
dependencies properly.

> Is there an option: "I know what I am doing, do not look outside of
> the schema" available which can help in my case?

No.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help in spi_prepare errors

2009-10-07 Thread Tom Lane
paresh masani  writes:
> Below function doesn't work: (I tried each combination mentioned with
> # but none of them working.)

I haven't tried it, but a look at the code makes me think that
spi_prepare wants each type name to appear as a separate argument.
It definitely won't work to smash them all into one string like that.

You're going to have the same problem at spi_exec_prepared --- it
thinks each actual value should be a separate argument.

I'm not much of a Perl hacker, but I seem to recall that it's possible
to pass an array to a function in a way that will make the array
elements look like separate arguments.  If you really need a dynamic
list of types and values, maybe there's some solution in that direction.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] current_query stat is idle

2009-10-07 Thread mezgani ali
I use pg_top to monitor my database, and I'm looking for a method to find out
what queries are currently being serviced by the database.
pg_top informs me that the queries are IDLE, and the result of SELECT * FROM
pg_stat_activity returns that current_query is always in IDLE status.
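
For reference, a query filtering out the idle backends (a sketch against the
8.x pg_stat_activity columns used in this thread):

SELECT procpid, usename, current_query
  FROM pg_stat_activity
 WHERE current_query NOT LIKE '<IDLE>%';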

Please, can you help?

-- 
Ali MEZGANI
Network Engineering/Security
http://securfox.wordpress.com/


[GENERAL] pg_dump with 1100 schemas being a bit slow

2009-10-07 Thread Loic d'Anterroches
Hello,

After a series of sessions searching the web for information, I am asking for
the help of people with a bit more knowledge of the internals of pg_dump to
try to solve a performance problem I have. I am running PostgreSQL version
8.3.8 for both server and pg_dump.

The context is farm hosting of a web application (http://www.indefero.net)
where each installation gets its own schema. Each table in the schema is a
"real" table, not a view of the same table in the public schema with a
"WHERE schema='currentschema'" clause. This setup allows me to easily run the
web application nearly unmodified between the downloadable version and the
"farm" version (schemas rock!). Now you have the background.

Each night I am running:
pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
/path/to/backups/%s/%s-%s.sql.gz
this for each installation, so 1100 times. Substitution strings are to
timestamp and get the right schema.

My problem is that the dump increased steadily with the number of
schemas (now about 20s from about 12s with 850 schemas) and pg_dump is
now ballooning at 120MB of memory usage when running the dump.

The thing is that my average schema size is a bit more than 1MB, with
the schema just after installation being 850kB.
Max size: 2.8MB
Min size: 0.85MB
Avg: 1.0MB
Total size: ~1GB

To get the sizes I run the following query and sum the "size" column:

SELECT relname, pg_total_relation_size(CAST(relname AS TEXT)) AS size
  FROM pg_class AS pgc, pg_namespace AS pgn
 WHERE pg_table_is_visible(pgc.oid) IS TRUE
   AND relkind = 'r'
   AND pgc.relnamespace = pgn.oid
   AND pgn.nspname NOT IN ('information_schema', 'pg_catalog');


I think that pg_dump, when looking at the objects to dump, even when it is
limited to a given schema, is scanning the complete database in one of those
calls:
http://doxygen.postgresql.org/pg__dump_8c-source.html#l00717

Is there an option like "I know what I am doing, do not look outside of
the schema" available which could help in my case? Because running SELECTs
is snappy, vmstat shows no swapping, and the complete system is basically
running very well with a load average below 1.

So, your help is very welcomed,
loïc

--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] problems with encoding

2009-10-07 Thread Karina Guardado
Hi,

I have a MapInfo file named map.TAB, and when I tried to export it to
Postgres I got an encoding error, so I used konwert to convert it to UTF-8
using the following statement:

konwert any/es-utf8 map.TAB -O

But this only converts the header name of each column, not the records in
each column, so I don't know what to do to make it work, and I get the
following error:

ERROR 1: INSERT command for new feature failed.
ERROR:  invalid byte sequence for encoding "UTF8": 0xe17465
HINT:  This error can also happen if the byte sequence does not match the
encoding expected by the server, which is controlled by "client_encoding".

Command: INSERT INTO "geocalderas" (wkb_geometry , "fnode_", "tnode_",
"lpoly_", "rpoly_", "length", "geo_lin_", "geo_lin_id", "fnode1", "tnode1",
"lpoly1", "rpoly1", "geo_lin1", "geo_lin__1", "fnode1_1", "tnode1_1",
"lpoly1_1", "cod", "iden", "descripció") VALUES
(GeomFromEWKT('SRID=32767;LINESTRING (447585.611617154267151
352945.605798984819558,447663.959493021073285
352903.167054586228915,447669.807996430899948
352846.719404269650113,447645.862237187044229
352814.281670245632995,447594.825768752663862
352805.778938578849193,447551.734437025908846
352808.138727543642744)'::TEXT) ,   291,   297, 0,
0, 281.033,   175,83, 0, 0, 0,
0, 0, 0, 0, 0, 0,600300,
12083, 'Cr�ter, caldera')
ERROR 1: Terminating translation prematurely after failed
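
Following the HINT, one thing worth trying before the load is to declare the
actual encoding of the incoming bytes (LATIN1 here is only an assumption
about the source file's encoding):

SET client_encoding TO 'LATIN1';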


any help please,

thanks

karina


[GENERAL] SSIS and Postgres

2009-10-07 Thread Jamie Lawrence-Jenner
Hi there,

We are having some problems using OLEDB PGNP and SSIS. This is a post we have
added to Experts Exchange, but we were wondering whether anyone here could
shed some light on it. We are also interested in how others manage ETL.

Cheers,

Jamie


Data Warehousing Postgres

We're considering using SSIS to maintain a PostgreSQL data warehouse. I've
used it before between SQL Servers with no problems, but am having a lot of
difficulty getting it to play nicely with Postgres. I'm using the evaluation
version of the OLEDB PGNP data provider (http://tiny.cc/qLoS2).

I wanted to start with something simple like UPSERT on the fact table
(10k-15k rows are updated/inserted daily), but this is proving very
difficult (not to mention I'll want to use surrogate keys in the future).

I've attempted http://tiny.cc/hOb6L and http://tiny.cc/uRF1f, which are
effectively the same (except I don't really understand the UNION ALL at the
end when I'm trying to upsert). But I run into the same problem with
parameters when doing the update using an OLE DB command, which I tried to
overcome using http://tiny.cc/8EmyM, but that just doesn't seem to work. I
get a validation error:

The external columns for the component are out of sync with the data source
columns... external column Param_2 needs to be removed from the external
columns.

(This error is repeated for the first two parameters as well. I never came
across this using the SQL connection, as it supports named parameters.)

Has anyone come across this?

AND:

The fact that this simple task is apparently so difficult to do in SSIS
suggests I'm using the wrong tool for the job - is there a better (and still
flexible) way of doing this? Or would another ETL package be better for use
between two Postgres databases? Other options include any listed on
http://tiny.cc/PbIO4. I could just go and write a load of SQL to do this for
me, but I wanted a neat and easily maintainable solution.