Re: [GENERAL] Questions regarding JSON processing

2017-04-26 Thread Pavel Stehule
2017-04-26 15:06 GMT+02:00 Glen Huang :

> @Pavel
>
> Thanks for bringing PLV8 to my attention. Wasn't aware of it. Sounds like
> the right tool for the job. I'll try it out. Do you think it makes sense
> to use PLV8 to also generate JSON? Can it beat SQL?
>

Hard to say - it probably depends on the actual case. I don't have any
benchmarks.

Regards

Pavel


>
> Good to know functions are executed inside a transaction; I think that
> should be enough for me.
>
> @John
>
> Only data is inside the JSON, but it does have keys like "added" and
> "updated" that contain objects to be added and updated. I think this kind
> of branching should be safe though?
>
> On Wed, Apr 26, 2017 at 12:41 PM, Pavel Stehule 
> wrote:
>
>>
>>
>> 2017-04-26 6:21 GMT+02:00 Glen Huang :
>>
>>> Hi all,
>>>
>>> I have a RESTful API server that sends and receives JSON strings. I'm
>>> wondering what might be the best way to leverage PostgreSQL's JSON
>>> capability.
>>>
>>> For sending JSON responses to clients, I believe the best way is to ask
>>> PostgreSQL to generate the JSON string and then pass that directly to
>>> clients, instead of making multiple queries to construct the JSON and then
>>> send it, which doesn't seem optimal. Is that the case?
>>>
>>> For updating the db using JSON requests from clients, I'm not so sure.
>>> Should I directly pass the request JSON to PostgreSQL and ask it to parse
>>> this JSON and execute a transaction all by itself, or should I parse it in
>>> the server and generate the transaction SQL and execute that on PostgreSQL?
>>> The former sounds optimal, but I'm not sure if PostgreSQL is able to walk a
>>> JSON structure and run a transaction along the way. Should I do it with
>>> PL/pgSQL? It seems functions can't execute a transaction?
>>>
>>
>> PL/pgSQL is a static language and is good for static processing of a JSON
>> doc, but it is unfit for iterating over an arbitrary nested document. You
>> can use PL/Perlu or PL/Pythonu. Lots of people use PLV8 for JSON processing.
>>
>> Functions in PostgreSQL are executed inside a transaction - you cannot
>> explicitly control transactions, but it is possible to handle them
>> implicitly with exception handling. There is a workaround via dblink
>> to emulate autonomous transactions.
>>
>> Regards
>>
>> Pavel
>>
>>
>>>
>>> Would like to hear some thoughts on this. Thanks.
>>>
>>> Glen
>>>
>>
>>
>


Re: [GENERAL] Questions regarding JSON processing

2017-04-26 Thread Glen Huang
@Pavel

Thanks for bringing PLV8 to my attention. Wasn't aware of it. Sounds like the 
right tool for the job. I'll try it out. Do you think it makes sense to use 
PLV8 to also generate JSON? Can it beat SQL?

Good to know functions are executed inside a transaction; I think that should 
be enough for me.

@John

Only data is inside the JSON, but it does have keys like "added" and "updated" 
that contain objects to be added and updated. I think this kind of branching 
should be safe though?
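
For fixed, known top-level keys like these, a minimal PL/pgSQL sketch (the
tli table and the payload shape are hypothetical, not from the thread) might
look like:

CREATE OR REPLACE FUNCTION apply_changes(payload jsonb) RETURNS void AS $$
BEGIN
  -- insert every object under the "added" key
  INSERT INTO tli (id, body)
  SELECT (e->>'id')::int, e
  FROM jsonb_array_elements(payload->'added') AS e;

  -- update rows from the objects under the "updated" key
  UPDATE tli t
  SET body = e
  FROM jsonb_array_elements(payload->'updated') AS e
  WHERE t.id = (e->>'id')::int;
END;
$$ LANGUAGE plpgsql;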

On Wed, Apr 26, 2017 at 12:41 PM, Pavel Stehule <pavel.steh...@gmail.com> wrote:


2017-04-26 6:21 GMT+02:00 Glen Huang <hey...@gmail.com>:
Hi all,

I have a RESTful API server that sends and receives JSON strings. I'm wondering 
what might be the best way to leverage PostgreSQL's JSON capability.

For sending JSON responses to clients, I believe the best way is to ask 
PostgreSQL to generate the JSON string and then pass that directly to clients, 
instead of making multiple queries to construct the JSON and then send it, 
which doesn't seem optimal. Is that the case?

For updating the db using JSON requests from clients, I'm not so sure. Should 
I directly pass the request JSON to PostgreSQL and ask it to parse this JSON 
and execute a transaction all by itself, or should I parse it in the server and 
generate the transaction SQL and execute that on PostgreSQL? The former sounds 
optimal, but I'm not sure if PostgreSQL is able to walk a JSON structure and 
run a transaction along the way. Should I do it with PL/pgSQL? It seems 
functions can't execute a transaction?

PL/pgSQL is a static language and is good for static processing of a JSON doc, 
but it is unfit for iterating over an arbitrary nested document. You can use 
PL/Perlu or PL/Pythonu. Lots of people use PLV8 for JSON processing.

Functions in PostgreSQL are executed inside a transaction - you cannot 
explicitly control transactions, but it is possible to handle them implicitly 
with exception handling. There is a workaround via dblink to emulate 
autonomous transactions.

Regards

Pavel
 

Would like to hear some thoughts on this. Thanks.

Glen




Re: [GENERAL] Questions regarding JSON processing

2017-04-25 Thread John R Pierce

On 4/25/2017 9:21 PM, Glen Huang wrote:
For updating the db using JSON requests from clients, I'm not so 
sure. Should I directly pass the request JSON to PostgreSQL and ask it 
to parse this JSON and execute a transaction all by itself, or should 
I parse it in the server and generate the transaction SQL and execute 
that on PostgreSQL? The former sounds optimal, but I'm not sure 
if PostgreSQL is able to walk a JSON structure and run a transaction 
along the way. Should I do it with PL/pgSQL? It seems functions can't 
execute a transaction?


what does "walk a JSON structure and run a transaction along the way"   
actual entail?Normally, the application starts a transaction, 
executes one or more SQL commands, then does a commit or rollback.   any 
JSON would be within this transaction. functions are called within a 
transaction.


If your JSON includes instructions as well as data, I'd be rather 
cautious of letting a remote client send that directly to the database 
server unless you can ensure that nothing hostile can be done with it, 
or completely trust all clients.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Questions regarding JSON processing

2017-04-25 Thread Pavel Stehule
2017-04-26 6:21 GMT+02:00 Glen Huang :

> Hi all,
>
> I have a RESTful API server that sends and receives JSON strings. I'm
> wondering what might be the best way to leverage PostgreSQL's JSON
> capability.
>
> For sending JSON responses to clients, I believe the best way is to ask
> PostgreSQL to generate the JSON string and then pass that directly to
> clients, instead of making multiple queries to construct the JSON and then
> send it, which doesn't seem optimal. Is that the case?
>
> For updating the db using JSON requests from clients, I'm not so sure.
> Should I directly pass the request JSON to PostgreSQL and ask it to parse
> this JSON and execute a transaction all by itself, or should I parse it in
> the server and generate the transaction SQL and execute that on PostgreSQL?
> The former sounds optimal, but I'm not sure if PostgreSQL is able to walk a
> JSON structure and run a transaction along the way. Should I do it with
> PL/pgSQL? It seems functions can't execute a transaction?
>

PL/pgSQL is a static language and is good for static processing of a JSON doc,
but it is unfit for iterating over an arbitrary nested document. You can use
PL/Perlu or PL/Pythonu. Lots of people use PLV8 for JSON processing.
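
A minimal PLV8 sketch of recursively walking an arbitrarily nested document,
which is the part that is awkward in PL/pgSQL (assumes the plv8 extension is
installed; the function itself is only illustrative):

CREATE EXTENSION IF NOT EXISTS plv8;

CREATE OR REPLACE FUNCTION count_keys(doc jsonb) RETURNS integer AS $$
  // Recursively count every key/index in a nested JSON value.
  function walk(node) {
    var n = 0;
    if (node !== null && typeof node === 'object') {
      for (var k in node) {
        n += 1 + walk(node[k]);
      }
    }
    return n;
  }
  return walk(doc);
$$ LANGUAGE plv8 IMMUTABLE;

SELECT count_keys('{"added": [{"id": 1}], "updated": {"id": 2}}'::jsonb);
-- returns 5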

Functions in PostgreSQL are executed inside a transaction - you cannot
explicitly control transactions, but it is possible to handle them implicitly
with exception handling. There is a workaround via dblink to emulate
autonomous transactions.
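
A minimal sketch of that implicit handling: a BEGIN ... EXCEPTION block in
PL/pgSQL runs as a subtransaction, so an error rolls back only the block's
own work (the audit_log table here is hypothetical):

CREATE OR REPLACE FUNCTION safe_log(v text) RETURNS void AS $$
BEGIN
  BEGIN  -- implicit subtransaction (savepoint)
    INSERT INTO audit_log (note) VALUES (v);
  EXCEPTION WHEN others THEN
    -- only the INSERT above is undone; the caller's work is untouched
    RAISE NOTICE 'logging failed: %', SQLERRM;
  END;
END;
$$ LANGUAGE plpgsql;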

Regards

Pavel


>
> Would like to hear some thoughts on this. Thanks.
>
> Glen
>


[GENERAL] Questions regarding JSON processing

2017-04-25 Thread Glen Huang
Hi all,

I have a RESTful API server that sends and receives JSON strings. I'm
wondering what might be the best way to leverage PostgreSQL's JSON
capability.

For sending JSON responses to clients, I believe the best way is to ask
PostgreSQL to generate the JSON string and then pass that directly to
clients, instead of making multiple queries to construct the JSON and then
send it, which doesn't seem optimal. Is that the case?
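
A minimal sketch of that single-query approach (the account/orders tables
are hypothetical, and json_build_object plus the aggregate FILTER clause
require 9.4 or later):

SELECT json_build_object(
         'id',     a.id,
         'name',   a.name,
         'orders', COALESCE(
                     json_agg(json_build_object('order_id', o.id,
                                                'total',    o.total))
                       FILTER (WHERE o.id IS NOT NULL),
                     '[]'::json)
       ) AS response
FROM account a
LEFT JOIN orders o ON o.account_id = a.id
WHERE a.id = 42
GROUP BY a.id, a.name;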

For updating the db using JSON requests from clients, I'm not so sure.
Should I directly pass the request JSON to PostgreSQL and ask it to parse
this JSON and execute a transaction all by itself, or should I parse it in
the server and generate the transaction SQL and execute that on PostgreSQL?
The former sounds optimal, but I'm not sure if PostgreSQL is able to walk a
JSON structure and run a transaction along the way. Should I do it with
PL/pgSQL? It seems functions can't execute a transaction?

Would like to hear some thoughts on this. Thanks.

Glen


Re: [GENERAL] questions about 2nd index on one column

2017-01-03 Thread Ravi Kapoor
> Please reply to list also.
apologies, my bad.

> It would seem that the index would not be rebuilt, assuming all
> conditions are the same.
Thanks for finding this. This is enough info for me to spend a day
experimenting. I did not want to waste a day if we knew upfront that it
won't work. But it looks like it will be worth the time.

regards


On Tue, Jan 3, 2017 at 12:09 PM, Adrian Klaver 
wrote:

On 01/03/2017 11:35 AM, Ravi Kapoor wrote:
Please reply to list also.
Ccing list.

> Yes I am aware of django EOL. However, our company is still using it, we
> have a migration plan later this year, however for now, I got to work
> with what we have.

Still, you are missing 14 patch releases to the 1.5 version.

>
> Correct, the index will be on one column.
>
> In Django, the model is described in model.py file, so to update it, I
> will simply change following line
> votes1 = models.CharField(default='', max_length=200)
> to following line
> votes1 = models.CharField(db_index=True, default='', max_length=200)
>
> and run Django migrations.

Hmm, from this:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/indexcmds.c;h=fdfa6ca4f5cd3be2db624e1c709ad037dc700b40;hb=c7681b2b9a115eb05048a485480826bc0efa6d3b

/*
 * CheckIndexCompatible
 *  Determine whether an existing index definition is compatible with a
 *  prospective index definition, such that the existing index storage
 *  could become the storage of the new index, avoiding a rebuild.
 */

It would seem that the index would not be rebuilt, assuming all conditions
are the same.

The part that has me questioning is this:

 * This is tailored to the needs of ALTER TABLE ALTER TYPE, ..

Someone who knows the internals better will have to verify this.

>
>
> On Tue, Jan 3, 2017 at 11:26 AM, Adrian Klaver
> <adrian.kla...@aklaver.com> wrote:
>
> On 01/03/2017 11:07 AM, Ravi Kapoor wrote:
>
>
> I have a bit strange question. I am trying to figure out how to
> avoid
> table locking while creating an index through Django (1.5.1) in
> Postgres
> 9.4.7
>
> Django 1.5.1 does not support concurrent indexing. So my thought
> is to
> first create a concurrent index using SQL prompt.
> Then try to update django model to add index, which will
effectively
> create 2 indexes on same column.
>
>
> I really need to read the entire subject. So in response to my own
> question,  yes it is on one column.
>
>
>
> So my questions are:
>
> If I create a 2nd index on one column,
> 1. Does postgres scan entire table to create index from scratch
> or does
> it simply copy the first index?
> 2. Does postgres lock the table to create this index?
>
> thanks
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com 
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: [GENERAL] questions about 2nd index on one column

2017-01-03 Thread Adrian Klaver
On 01/03/2017 11:35 AM, Ravi Kapoor wrote:
Please reply to list also.
Ccing list.

> Yes I am aware of django EOL. However, our company is still using it, we
> have a migration plan later this year, however for now, I got to work
> with what we have.

Still, you are missing 14 patch releases to the 1.5 version.

> 
> Correct, the index will be on one column.
> 
> In Django, the model is described in model.py file, so to update it, I
> will simply change following line
> votes1 = models.CharField(default='', max_length=200)
> to following line
> votes1 = models.CharField(db_index=True, default='', max_length=200)
> 
> and run Django migrations.

Hmm, from this:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/indexcmds.c;h=fdfa6ca4f5cd3be2db624e1c709ad037dc700b40;hb=c7681b2b9a115eb05048a485480826bc0efa6d3b

/*
 * CheckIndexCompatible
 *  Determine whether an existing index definition is compatible with a
 *  prospective index definition, such that the existing index storage
 *  could become the storage of the new index, avoiding a rebuild.
 */

It would seem that the index would not be rebuilt, assuming all conditions are 
the same.

The part that has me questioning is this:

 * This is tailored to the needs of ALTER TABLE ALTER TYPE, ..

Someone who knows the internals better will have to verify this.

> 
> 
> On Tue, Jan 3, 2017 at 11:26 AM, Adrian Klaver
> <adrian.kla...@aklaver.com> wrote:
> 
> On 01/03/2017 11:07 AM, Ravi Kapoor wrote:
> 
> 
> I have a bit strange question. I am trying to figure out how to
> avoid
> table locking while creating an index through Django (1.5.1) in
> Postgres
> 9.4.7
> 
> Django 1.5.1 does not support concurrent indexing. So my thought
> is to
> first create a concurrent index using SQL prompt.
> Then try to update django model to add index, which will effectively
> create 2 indexes on same column.
> 
> 
> I really need to read the entire subject. So in response to my own
> question,  yes it is on one column.
> 
> 
> 
> So my questions are:
> 
> If I create a 2nd index on one column,
> 1. Does postgres scan entire table to create index from scratch
> or does
> it simply copy the first index?
> 2. Does postgres lock the table to create this index?
> 
> thanks
> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] questions about 2nd index on one column

2017-01-03 Thread Adrian Klaver

On 01/03/2017 11:07 AM, Ravi Kapoor wrote:


I have a bit strange question. I am trying to figure out how to avoid
table locking while creating an index through Django (1.5.1) in Postgres
9.4.7

Django 1.5.1 does not support concurrent indexing. So my thought is to
first create a concurrent index using SQL prompt.
Then try to update django model to add index, which will effectively
create 2 indexes on same column.


I really need to read the entire subject. So in response to my own 
question,  yes it is on one column.




So my questions are:

If I create a 2nd index on one column,
1. Does postgres scan entire table to create index from scratch or does
it simply copy the first index?
2. Does postgres lock the table to create this index?

thanks




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] questions about 2nd index on one column

2017-01-03 Thread Adrian Klaver

On 01/03/2017 11:07 AM, Ravi Kapoor wrote:


I have a bit strange question. I am trying to figure out how to avoid
table locking while creating an index through Django (1.5.1) in Postgres
9.4.7


First, Django 1.5.x has been past end of life for 2.25 years.
Second, before it went EOL it was up to 1.5.12.



Django 1.5.1 does not support concurrent indexing. So my thought is to
first create a concurrent index using SQL prompt.
Then try to update django model to add index, which will effectively
create 2 indexes on same column.


Is the index going to be on a single column?

How are you going to update the model?



So my questions are:

If I create a 2nd index on one column,
1. Does postgres scan entire table to create index from scratch or does
it simply copy the first index?
2. Does postgres lock the table to create this index?

thanks




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] questions about 2nd index on one column

2017-01-03 Thread Ravi Kapoor
I have a bit of a strange question. I am trying to figure out how to avoid
table locking while creating an index through Django (1.5.1) on Postgres 9.4.7.

Django 1.5.1 does not support concurrent indexing. So my thought is to
first create a concurrent index using a SQL prompt,
then try to update the Django model to add an index, which will effectively
create 2 indexes on the same column.

So my questions are:

If I create a 2nd index on one column,
1. Does postgres scan entire table to create index from scratch or does it
simply copy the first index?
2. Does postgres lock the table to create this index?
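
A sketch of the two commands in play (table and index names are
hypothetical). As far as I know, a second index on the same column is always
built from scratch with a fresh table scan - PostgreSQL does not copy the
first index:

CREATE INDEX CONCURRENTLY votes1_idx ON myapp_model (votes1);
    -- builds without blocking writes
CREATE INDEX votes1_idx2 ON myapp_model (votes1);
    -- plain CREATE INDEX takes a SHARE lock: reads proceed, writes block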

thanks


Re: [GENERAL] Questions on Post Setup MASTER and STANDBY replication - Postgres9.1

2016-11-07 Thread Joanna Xu




Jim Nasby [mailto:jim.na...@bluetreble.com] wrote:

>On 11/2/16 2:49 PM, Joanna Xu wrote:
>> The replication is verified and works.  My questions are what's the
>> reason causing "cp: cannot stat
>> `/opt/postgres/9.1/archive/00010003': No such file or
>> directory" on STANDBY and how to fix it?

>What instructions/tools did you use to setup replication?

The following steps were involved in setting up the replication:

* Stop the STANDBY database.
* Start the hot backup on the MASTER database.
* Push data to the STANDBY database.
* Stop the hot backup on the MASTER database.
* Reload with the new configuration on MASTER.
* Set up the new configuration on the STANDBY database.
* Start the STANDBY database.
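
For reference, a minimal sketch of the 9.1 standby configuration these steps
imply (recovery.conf; host and paths are illustrative). The "cp: cannot stat"
message is just restore_command failing to find a WAL segment that was never
archived, which is harmless once streaming replication connects:

standby_mode = 'on'
primary_conninfo = 'host=master_host port=5432 user=replicator'
restore_command = 'cp /opt/postgres/9.1/archive/%f %p'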



Also, we see the message "LOG:  terminating walsender process due to 
replication timeout" in the logs on MASTER. Is this expected when the 
system is less busy, or is something else going on?
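
In 9.1 the walsender timeout involves two settings (values below are only
illustrative): the standby must report its status more often than the
master's timeout allows, or the master kills idle walsenders:

# postgresql.conf on the MASTER
replication_timeout = 60s
# postgresql.conf on the STANDBY
wal_receiver_status_interval = 10s   # keep well below replication_timeout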



Thanks,

Joanna





Re: [GENERAL] Questions on Post Setup MASTER and STANDBY replication - Postgres9.1

2016-11-02 Thread Jim Nasby

On 11/2/16 2:49 PM, Joanna Xu wrote:

The replication is verified and works.  My questions are what’s the
reason causing “cp: cannot stat
`/opt/postgres/9.1/archive/00010003': No such file or
directory” on STANDBY and how to fix it?


What instructions/tools did you use to setup replication?


Also, it seems the startup
process gets stuck on “recovering 00010004”; how do I resolve
it?


As far as I know that's normal while in streaming mode.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




[GENERAL] Questions on Post Setup MASTER and STANDBY replication - Postgres9.1

2016-11-02 Thread Joanna Xu
Hi All,

After setting up two nodes with MASTER and STANDBY replication, I see "cp: 
cannot stat `/opt/postgres/9.1/archive/00010003': No such file 
or directory" in the log on STANDBY, and the startup process is recovering 
"00010004", which does not exist in the archive directory.

The replication is verified and works.  My questions are: what's the reason 
causing "cp: cannot stat `/opt/postgres/9.1/archive/00010003': 
No such file or directory" on STANDBY and how to fix it? Also, it seems the 
startup process gets stuck on "recovering 00010004"; how do I 
resolve it?

Thank you!

On STANDBY node:

LOG:  entering standby mode
cp: cannot stat `/opt/postgres/9.1/archive/00010003': No such 
file or directory
LOG:  redo starts at 0/320
LOG:  record with zero length at 0/3B0
cp: cannot stat `/opt/postgres/9.1/archive/00010003': No such 
file or directory
LOG:  streaming replication successfully connected to primary
LOG:  consistent recovery state reached at 0/400
LOG:  database system is ready to accept read only connections

ls -rlt /opt/postgres/9.1/archive
-rw--- 1 postgres postgres 16777216 Oct 28 14:07 00010001
-rw--- 1 postgres postgres 16777216 Nov  2 19:00 00010002

ps -ef|grep startup|grep -v grep
postgres  9036  9020  0 19:00 ?00:00:00 postgres: startup process   
recovering 00010004

ps -ef|grep receiver|grep -v grep
postgres  9040  9020  0 19:00 ?00:00:00 postgres: wal receiver process  
 streaming 0/4000380

On MASTER node:

ls -rlt /opt/postgres/9.1/archive
-rw--- 1 postgres postgres 16777216 Oct 28 14:08 00010001
-rw--- 1 postgres postgres 16777216 Nov  2 19:00 00010002
-rw--- 1 postgres postgres 16777216 Nov  2 19:00 00010003
-rw--- 1 postgres postgres  270 Nov  2 19:00 
00010003.0020.backup

ps -ef|grep archiver |grep -v grep
postgres  9041  9035  0 18:57 ?00:00:00 postgres: archiver process   
last was 00010003.0020.backup

ps -ef|grep sender |grep -v grep
postgres  9264  9035  0 19:00 ?00:00:00 postgres: wal sender process 
postgres 192.168.154.106(64182) streaming 0/4000380

Cheers,

Joanna Xu
Senior Oracle DBA
Data Experience Solution BU

+1 613 595 5234



Re: [GENERAL] questions about how to implement a gist index

2016-06-28 Thread Riccardo Vianello
Hi Oleg,

On Tue, Jun 28, 2016 at 1:05 AM, Oleg Bartunov  wrote:

> On Tue, Jun 28, 2016 at 12:44 AM, Riccardo Vianello
>  wrote:
> > Could you please also help me understand the difference (if any) between
> > using the GIST_LEAF macro or the leafkey attribute of the GISTENTRY data
> > structure?
>
> Yes, this is confusing.
>
> GIST_LEAF is TRUE if the key is in a leaf page.
>
> bool leafkey, if TRUE, indicates that the key contains a value from the heap.
>

I think I wrongly assumed that internal and leaf nodes are homogeneously
filled with union and indexed values respectively. Is it correct that
leafkey is always TRUE for the entries that populate a leaf page, and may
be either TRUE or FALSE if GIST_LEAF is FALSE?

Thanks,
Riccardo


Re: [GENERAL] questions about how to implement a gist index

2016-06-27 Thread Oleg Bartunov
On Tue, Jun 28, 2016 at 12:44 AM, Riccardo Vianello
 wrote:
> Hi all,
>
> I'm trying to contribute some improvements to the implementation of a gist
> index that is part of an open source project and it would be really nice if
> anyone could help me answer some questions.
>
> I would like to use different data structures to represent the internal and
> leaf entries. I think I have found several examples in the postgresql source
> code where a variable length array is used to pack different data
> consistently with the value of a "type" attribute. Is this the suggested way
> to proceed, or would it be also possible to use two actually different data
> structures? I at first considered this latter approach, but some erroneous
> behavior suggested that maybe I am not fully clear about which data
> structure should be produced or expected by some parts of the
> implementation.
>
> Could you please also help me understand the difference (if any) between
> using the GIST_LEAF macro or the leafkey attribute of the GISTENTRY data
> structure?

Yes, this is confusing.

GIST_LEAF is TRUE if the key is in a leaf page.

bool leafkey, if TRUE, indicates that the key contains a value from the heap.

>
> Thank you very much in advance,
>
> Riccardo
>
>




[GENERAL] questions about how to implement a gist index

2016-06-27 Thread Riccardo Vianello
Hi all,

I'm trying to contribute some improvements to the implementation of a gist
index that is part of an open source project and it would be really nice if
anyone could help me answer some questions.

I would like to use different data structures to represent the internal and
leaf entries. I think I have found several examples in the postgresql
source code where a variable length array is used to pack different data
consistently with the value of a "type" attribute. Is this the suggested
way to proceed, or would it be also possible to use two actually different
data structures? I at first considered this latter approach, but some
erroneous behavior suggested that maybe I am not fully clear about which
data structure should be produced or expected by some parts of the
implementation.

Could you please also help me understand the difference (if any) between
using the GIST_LEAF macro or the leafkey attribute of the GISTENTRY data
structure?

Thank you very much in advance,

Riccardo


Re: [GENERAL] Questions on dynamic execution and sqlca

2014-08-07 Thread Guillaume Lelarge
2014-08-07 7:24 GMT+02:00 David Johnston :

>
>> > > - What are the differences among PL/SQL,  PL/PGSQL and pgScript.
>> >
>> > The first two are languages you write functions in.  pgScript is simply
>> an
>> > informal way to group a series of statements together and have them
>> execute
>> > within a transaction.
>> >
>>
>> AFAICT, this isn't true. Pgscript is a client-specific language. There is
>> a whole description of what it can do in pgAdmin's manual. This was
>> interesting when PostgreSQL didn't have the DO statement. Now that we do,
>> it's rather pointless.
>>
>>
>>
> ​Yeah, I probably should have either researched the answer or just left it
> alone.  I am not all that familiar with pgAdmin - I figured it was just a
> souped up script runner with maybe a couple of features like variables but
> otherwise allowing only SQL commands.
>
>
No problem :)

pgscript reference is available on
http://pgadmin.org/docs/1.18/pgscript.html.

Note that pgScript isn't specific to pgAdmin, AFAIK.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] Questions on dynamic execution and sqlca

2014-08-06 Thread David Johnston
>
>
> > > - What are the differences among PL/SQL,  PL/PGSQL and pgScript.
> >
> > The first two are languages you write functions in.  pgScript is simply
> an
> > informal way to group a series of statements together and have them
> execute
> > within a transaction.
> >
>
> AFAICT, this isn't true. Pgscript is a client-specific language. There is
> a whole description of what it can do in pgAdmin's manual. This was
> interesting when PostgreSQL didn't have the DO statement. Now that we do,
> it's rather pointless.
>
>
>
​Yeah, I probably should have either researched the answer or just left it
alone.  I am not all that familiar with pgAdmin - I figured it was just a
souped up script runner with maybe a couple of features like variables but
otherwise allowing only SQL commands.

David J.​


Re: [GENERAL] Questions on dynamic execution and sqlca

2014-08-06 Thread Guillaume Lelarge
On 6 August 2014 18:47, "David G Johnston"  wrote:
>
> Bill Epstein wrote
> > I've tried a variety of ways based on the on-line docs I've seen, but I
> > always get a syntax error on EXEC when I use only the line EXEC
 statement
>
> You likely need to use "EXECUTE" in PostgreSQL
>
>
> >INFO:  INSERT INTO UTILITY.BPC_AUDIT (COMPONENT, ACTIVITY,
AUDIT_LEVEL,
> >AUDIT_TIME, NOTE, SQL) VALUES ('Overpayment','Create
> >TLI','LOG','2014-08-06 10:44:23.933','Created TLI','INSERT INTO
> >TLIA...')
> >CONTEXT:  SQL statement "SELECT utility.LOG_MSG (p_log_yn,
p_component,
> >p_function, p_note, p_sql)"
> >PL/pgSQL function utility.logging_test() line 24 at PERFORM
> >ERROR:  INSERT has more expressions than target columns
> >LINE 3:  VALUES ($1, $2, $3, $4, $5, $6)
> > ^
> >QUERY:  PREPARE myinsert7 (text, text, text, timestamp, text, text)
AS
> > INSERT INTO UTILITY.BPC_AUDIT (COMPONENT, ACTIVITY,
> >AUDIT_LEVEL, NOTE, SQL)
> > VALUES ($1, $2, $3, $4, $5, $6)
> >CONTEXT:  PL/pgSQL function utility.log_msg
> >(character,text,text,text,text) line 48 at SQL statement
> >SQL statement "SELECT utility.LOG_MSG (p_log_yn, p_component,
> >p_function, p_note, p_sql)"
> >PL/pgSQL function utility.logging_test() line 24 at PERFORM
> >** Error **
> >
> >ERROR: INSERT has more expressions than target columns
> >SQL state: 42601
> >Context: PL/pgSQL function utility.log_msg
> >(character,text,text,text,text) line 48 at SQL statement
> >SQL statement "SELECT utility.LOG_MSG (p_log_yn, p_component,
> >p_function, p_note, p_sql)"
> >PL/pgSQL function utility.logging_test() line 24 at PERFORM
>
> Since "COMPONENT, ACTIVITY, AUDIT_LEVEL, NOTE, SQL" is only 5 columns and
> you are sending 6 it is not surprising that you are getting an error.
>
>
> > In the other function (log_error ), the problem I'm having is that I'm
> > trying to pull out the sqlca error code and description (as I've done in
> > the past w/ Oracle), in order to write that information in my log table.
> > The intent is that this function will only be called from within an
> > EXCEPTION block (as I do in my logging_test  function - I purposely run
a
> > bad query to trigger it).
>
> You still have to deal with the fact that PostgreSQL functions operate in
> the transaction context of the caller; they cannot set their own.
 Depending
> on how you write the function and the caller if you eventually ROLLBACK
you
> could lose the logging.
>
>
> > - What's the difference between hitting the Execute Query and Execute
> > PGScript buttons?  Both seem to compile the functions.
>
> Execute Query just sends the statement(s) to the server
> Execute PGScript wraps the statements in a transaction so that either they
> all succeed or all fail.
> Basically with Execute Query if a statement in the middle fails everything
> before it still commits (auto-commit)
>
> For a single statement there is no difference.
>
> > - What are the differences among PL/SQL,  PL/PGSQL and pgScript.
>
> The first two are languages you write functions in.  pgScript is simply an
> informal way to group a series of statements together and have them
execute
> within a transaction.
>

AFAICT, this isn't true. Pgscript is a client-specific language. There is a
whole description of what it can do in pgAdmin's manual. This was
interesting when PostgreSQL didn't have the DO statement. Now that we do,
it's rather pointless.

> > - I installed Postgres 9.3.4 and  I'm using PEM v4.0.2.  When I click on
> > the icon to "Execute arbitrary SQL queries", I notice that the icons on
> > the
> > window that opens are different from the pgAdmin PostgreSQL Tools window
> > that opens if I double-click on one of my .sql files.  Is there a
> > difference in these tools?
>
> No idea - but probably.  But there are likely many similarities too.
>
>
> > Attached are the relevant scripts:
> > (See attached file: create_bpc_audit.sql) - Create the log table
> > (See attached file: create_log_utilities.sql)- Code to create the two
> > logging functions
> > (See attached file: test_log_utilities.sql)- Code to exercise the msg
and
> > error logging functions
>
> Didn't even open these...
>
>
> David J.
>
>
>
>


Re: [GENERAL] Questions on dynamic execution and sqlca

2014-08-06 Thread Ray Stell

On Aug 6, 2014, at 12:28 PM, Bill Epstein  wrote:

> I'm very new to Postgres, but have plenty of experience developing stored 
> procs in Oracle.  
> 

I found this helpful:

http://www.amazon.com/PostgreSQL-Server-Programming-Hannu-Krosing-ebook/dp/B00DMYO2D2/ref=tmm_kin_swatch_0?_encoding=UTF8&sr=8-2&qid=1407345445
Krosing, Hannu; Mlodgenski, Jim; Roybal, Kirk (2013-06-25). PostgreSQL Server 
Programming (Kindle Locations 272-273). Packt Publishing. Kindle Edition. 





Re: [GENERAL] Questions on dynamic execution and sqlca

2014-08-06 Thread David G Johnston
Bill Epstein wrote
> I've tried a variety of ways based on the on-line docs I've seen, but I
> always get a syntax error on EXEC when I use only the line EXEC  statement

You likely need to use "EXECUTE" in PostgreSQL
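
A minimal sketch of dynamic SQL in PL/pgSQL (table and column names are
hypothetical): EXECUTE with format() and USING lets the log-table name be an
input parameter, which is what the EXEC attempts were aiming at:

CREATE OR REPLACE FUNCTION log_to(p_table text, p_component text, p_note text)
RETURNS void AS $$
BEGIN
  -- %I quotes the identifier safely; $1/$2 are bound via USING
  EXECUTE format(
    'INSERT INTO %I (component, note, audit_time) VALUES ($1, $2, now())',
    p_table)
  USING p_component, p_note;
END;
$$ LANGUAGE plpgsql;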


>INFO:  INSERT INTO UTILITY.BPC_AUDIT (COMPONENT, ACTIVITY, AUDIT_LEVEL,
>AUDIT_TIME, NOTE, SQL) VALUES ('Overpayment','Create
>TLI','LOG','2014-08-06 10:44:23.933','Created TLI','INSERT INTO
>TLIA...')
>CONTEXT:  SQL statement "SELECT utility.LOG_MSG (p_log_yn, p_component,
>p_function, p_note, p_sql)"
>PL/pgSQL function utility.logging_test() line 24 at PERFORM
>ERROR:  INSERT has more expressions than target columns
>LINE 3:  VALUES ($1, $2, $3, $4, $5, $6)
> ^
>QUERY:  PREPARE myinsert7 (text, text, text, timestamp, text, text) AS
> INSERT INTO UTILITY.BPC_AUDIT (COMPONENT, ACTIVITY,
>AUDIT_LEVEL, NOTE, SQL)
> VALUES ($1, $2, $3, $4, $5, $6)
>CONTEXT:  PL/pgSQL function utility.log_msg
>(character,text,text,text,text) line 48 at SQL statement
>SQL statement "SELECT utility.LOG_MSG (p_log_yn, p_component,
>p_function, p_note, p_sql)"
>PL/pgSQL function utility.logging_test() line 24 at PERFORM
>** Error **
> 
>ERROR: INSERT has more expressions than target columns
>SQL state: 42601
>Context: PL/pgSQL function utility.log_msg
>(character,text,text,text,text) line 48 at SQL statement
>SQL statement "SELECT utility.LOG_MSG (p_log_yn, p_component,
>p_function, p_note, p_sql)"
>PL/pgSQL function utility.logging_test() line 24 at PERFORM

Since "COMPONENT, ACTIVITY, AUDIT_LEVEL, NOTE, SQL" is only 5 columns and
you are sending 6 it is not surprising that you are getting an error.
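
For comparison, the prepared statement lines up once AUDIT_TIME (present in
the INFO line above but missing from the failing PREPARE) is restored to the
column list - six columns for six parameters:

PREPARE myinsert7 (text, text, text, timestamp, text, text) AS
    INSERT INTO UTILITY.BPC_AUDIT
        (COMPONENT, ACTIVITY, AUDIT_LEVEL, AUDIT_TIME, NOTE, SQL)
    VALUES ($1, $2, $3, $4, $5, $6);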


> In the other function (log_error ), the problem I'm having is that I'm
> trying to pull out the sqlca error code and description (as I've done in
> the past w/ Oracle), in order to write that information in my log table.
> The intent is that this function will only be called from within an
> EXCEPTION block (as I do in my logging_test  function - I purposely run a
> bad query to trigger it).

You still have to deal with the fact that PostgreSQL functions operate in
the transaction context of the caller; they cannot set their own.  Depending
on how you write the function and the caller, if you eventually ROLLBACK you
could lose the logging.
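
As for pulling out the error code and description (the sqlca question):
inside a PL/pgSQL EXCEPTION block they are available as SQLSTATE and SQLERRM
(or via GET STACKED DIAGNOSTICS). A minimal sketch:

DO $$
BEGIN
  PERFORM 1/0;  -- deliberately fail
EXCEPTION WHEN others THEN
  -- prints: code=22012, message=division by zero
  RAISE NOTICE 'code=%, message=%', SQLSTATE, SQLERRM;
END;
$$;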


> - What's the difference between hitting the Execute Query and Execute
> PGScript buttons?  Both seem to compile the functions.

Execute Query just sends the statement(s) to the server.
Execute PGScript wraps the statements in a transaction so that either they
all succeed or all fail.
Basically, with Execute Query, if a statement in the middle fails, everything
before it still commits (auto-commit).

For a single statement there is no difference.

> - What are the differences among PL/SQL,  PL/PGSQL and pgScript.

The first two are languages you write functions in.  pgScript is simply an
informal way to group a series of statements together and have them execute
within a transaction.


> - I installed Postgres 9.3.4 and  I'm using PEM v4.0.2.  When I click on
> the icon to "Execute arbitrary SQL queries", I notice that the icons on
> the
> window that opens are different from the pgAdmin PostgreSQL Tools window
> that opens if I double-click on one of my .sql files.  Is there a
> difference in these tools?

No idea - but probably.  But there are likely many similarities too.


> Attached are the relevant scripts:
> (See attached file: create_bpc_audit.sql) - Create the log table
> (See attached file: create_log_utilities.sql)- Code to create the two
> logging functions
> (See attached file: test_log_utilities.sql)- Code to exercise the msg and
> error logging functions

Didn't even open these...


David J.






[GENERAL] Questions on dynamic execution and sqlca

2014-08-06 Thread Bill Epstein


I'm very new to Postgres, but have plenty of experience developing stored
procs in Oracle.

I'm going to be creating Postgres stored procedures (functions actually,
since I discovered that in postgres, everything is a function) to do a
variety of batch-type processing.  These functions may or may not be called
by the .Net application that is being developed.  To support both my
Postgres function development and run-time monitoring, I wanted to develop
generic logging functions that would be called by other Postgres functions
to be developed in order to help trace through code and collect error
information.

The attached create_log_utilities.sql holds plsql for creating two logging
functions (one for logging status messages, and one for logging errors).
In the log_msg function, the various sets of EXEC and EXECUTE statements
are from my experimenting with dynamically generating SQL.  If I could get
it working, the intent is to be able to add a LogTableName as an input
parameter, thereby allowing individual developers to utilize their own
version of the log table (w/ the same columns).  I've been able to do this
sort of thing w/ Oracle before.

I've tried a variety of ways based on the on-line docs I've seen, but I
always get a syntax error on EXEC when I use a bare EXEC statement
(is there a setting I need to set in order to be able to include EXEC
directives?).  The closest I've come is the currently uncommented prepared
statement - it compiles, but I get the following error messages:

   INFO:  INSERT INTO UTILITY.BPC_AUDIT (COMPONENT, ACTIVITY, AUDIT_LEVEL,
   AUDIT_TIME, NOTE, SQL) VALUES ('Overpayment','Create
   TLI','LOG','2014-08-06 10:44:23.933','Created TLI','INSERT INTO
   TLIA...')
   CONTEXT:  SQL statement "SELECT utility.LOG_MSG (p_log_yn, p_component,
   p_function, p_note, p_sql)"
   PL/pgSQL function utility.logging_test() line 24 at PERFORM
   ERROR:  INSERT has more expressions than target columns
   LINE 3:  VALUES ($1, $2, $3, $4, $5, $6)
^
   QUERY:  PREPARE myinsert7 (text, text, text, timestamp, text, text) AS
INSERT INTO UTILITY.BPC_AUDIT (COMPONENT, ACTIVITY,
   AUDIT_LEVEL, NOTE, SQL)
VALUES ($1, $2, $3, $4, $5, $6)
   CONTEXT:  PL/pgSQL function utility.log_msg
   (character,text,text,text,text) line 48 at SQL statement
   SQL statement "SELECT utility.LOG_MSG (p_log_yn, p_component,
   p_function, p_note, p_sql)"
   PL/pgSQL function utility.logging_test() line 24 at PERFORM
   ** Error **

   ERROR: INSERT has more expressions than target columns
   SQL state: 42601
   Context: PL/pgSQL function utility.log_msg
   (character,text,text,text,text) line 48 at SQL statement
   SQL statement "SELECT utility.LOG_MSG (p_log_yn, p_component,
   p_function, p_note, p_sql)"
   PL/pgSQL function utility.logging_test() line 24 at PERFORM


In the other function (log_error ), the problem I'm having is that I'm
trying to pull out the sqlca error code and description (as I've done in
the past w/ Oracle), in order to write that information in my log table.
The intent is that this function will only be called from within an
EXCEPTION block (as I do in my logging_test  function - I purposely run a
bad query to trigger it).

To exercise the code, I'm just executing select utility.logging_test(); in
a query window.

A few other items I could use clarification on:
- What's the difference between hitting the Execute Query and Execute
PGScript buttons?  Both seem to compile the functions.

- What are the differences among PL/SQL,  PL/PGSQL and pgScript.

- I installed Postgres 9.3.4 and  I'm using PEM v4.0.2.  When I click on
the icon to "Execute arbitrary SQL queries", I notice that the icons on the
window that opens are different from the pgAdmin PostgreSQL Tools window
that opens if I double-click on one of my .sql files.  Is there a
difference in these tools?


Attached are the relevant scripts:
(See attached file: create_bpc_audit.sql) - Create the log table
(See attached file: create_log_utilities.sql)- Code to create the two
logging functions
(See attached file: test_log_utilities.sql)- Code to exercise the msg and
error logging functions


Thanks.
Bill

_
William Epstein
Consulting I/T Specialist
AIS ADM Information Management
US Federal
Office/Fax:  301-240-3887, Tie Line:  372-3887
International Business Machines (IBM) Corporation
Global Business Services (GBS)

create_bpc_audit.sql
Description: Binary data


create_log_utilities.sql
Description: Binary data


test_log_utilities.sql
Description: Binary data



Re: [GENERAL] Questions about daterange() function

2014-06-26 Thread Ken Tanzer
>
> > So here are my questions:
> >
> > 1)  Is there any way to control this behavior of daterange(), or is it
> just
> > best to (for example) add 1 to the upper bound argument if I want an
> > inclusive upper bound?
>
> See link for question #3; namely use the three-arg version of daterange
> (type,type,text)
>
>
> > 2)  This is purely cosmetic, but is there any way to control the output
> > formatting of a daterange to show the upper bound as inclusive?  So that
> > daterange(d1,d2) would display as [d1,d2-1] rather than [d1,d2)?
>
> Not easily - you could write a custom type with the desired canonical form.
>
>
> > 3)  I couldn't find this discussed in the documentation, and specifically
> > didn't find the daterange() function documented, including on this page
> > where I might have expected it:
> > http://www.postgresql.org/docs/9.3/static/functions-range.html.  Is it
> > somewhere else where I'm not finding it?
>
> Yes, the documentation could maybe use some work on this topic.  The
> relevant information is provided at:
>
> http://www.postgresql.org/docs/9.3/interactive/rangetypes.html
>
> See especially: 8.17.2 & 8.17.6
>
> David J.
>
>
> Thanks for your help!



>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Questions about daterange() function

2014-06-25 Thread David G Johnston
Ken Tanzer wrote
> Hi.  I've got lots of tables with start and end dates in them, and I'm
> trying to learn how to work with them as date ranges (which seem
> fantastic!).  I've noticed that the daterange() function seems to create
> ranges with an inclusive lower bound, and an exclusive upper bound.  For
> example:
> 
> SELECT
>  reg_spc_date,
>  reg_spc_date_end,
>  daterange(reg_spc_date,reg_spc_date_end)
> FROM reg_spc
> LIMIT 5;
> 
>  reg_spc_date | reg_spc_date_end |daterange
> --+--+-
>  2012-04-05   | 2013-10-21   | [2012-04-05,2013-10-21)
>  2013-10-28   |  | [2013-10-28,)
>  2013-11-01   |  | [2013-11-01,)
>  2012-10-19   | 2013-11-01   | [2012-10-19,2013-11-01)
>  2005-03-29   | 2013-10-31   | [2005-03-29,2013-10-31)
> (5 rows)
> 
> So here are my questions:
> 
> 1)  Is there any way to control this behavior of daterange(), or is it just
> best to (for example) add 1 to the upper bound argument if I want an
> inclusive upper bound?

See link for question #3; namely use the three-arg version of daterange
(type,type,text)
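
A short sketch of the three-arg form; note that for a discrete type like
date the result is still canonicalized to an inclusive-lower/exclusive-upper
range:

SELECT daterange('2012-04-05', '2013-10-21', '[]');
-- result: [2012-04-05,2013-10-22)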


> 2)  This is purely cosmetic, but is there any way to control the output
> formatting of a daterange to show the upper bound as inclusive?  So that
> daterange(d1,d2) would display as [d1,d2-1] rather than [d1,d2)?

Not easily - you could write a custom type with the desired canonical form.


> 3)  I couldn't find this discussed in the documentation, and specifically
> didn't find the daterange() function documented, including on this page
> where I might have expected it:
> http://www.postgresql.org/docs/9.3/static/functions-range.html.  Is it
> somewhere else where I'm not finding it?

Yes, the documentation could maybe use some work on this topic.  The
relevant information is provided at:

http://www.postgresql.org/docs/9.3/interactive/rangetypes.html

See especially: 8.17.2 & 8.17.6

David J.






Re: [GENERAL] Questions about daterange() function

2014-06-25 Thread Ken Tanzer
On Wed, Jun 25, 2014 at 6:12 PM, Adrian Klaver 
 wrote:

> On 06/25/2014 05:53 PM, Ken Tanzer wrote:
>
>> Hi.  I've got lots of tables with start and end dates in them, and I'm
>> trying to learn how to work with them as date ranges (which seem
>> fantastic!).  I've noticed that the daterange() function seems to create
>> ranges with an inclusive lower bound, and an exclusive upper bound.  For
>> example:
>>
>> SELECT
>>   reg_spc_date,
>>   reg_spc_date_end,
>>   daterange(reg_spc_date,reg_spc_date_end)
>> FROM reg_spc
>> LIMIT 5;
>>
>>   reg_spc_date | reg_spc_date_end |daterange
>> --+--+-
>>   2012-04-05   | 2013-10-21   | [2012-04-05,2013-10-21)
>>   2013-10-28   |  | [2013-10-28,)
>>   2013-11-01   |  | [2013-11-01,)
>>   2012-10-19   | 2013-11-01   | [2012-10-19,2013-11-01)
>>   2005-03-29   | 2013-10-31   | [2005-03-29,2013-10-31)
>> (5 rows)
>>
>> So here are my questions:
>>
>> 1)  Is there any way to control this behavior of daterange(), or is it
>> just best to (for example) add 1 to the upper bound argument if I want
>> an inclusive upper bound?
>>
>> 2)  This is purely cosmetic, but is there any way to control the output
>> formatting of a daterange to show the upper bound as inclusive?  So that
>> daterange(d1,d2) would display as [d1,d2-1] rather than [d1,d2)?
>>
>> 3)  I couldn't find this discussed in the documentation, and
>> specifically didn't find the daterange() function documented, including
>> on this page where I might have expected it:
>> http://www.postgresql.org/docs/9.3/static/functions-range.html.  Is it
>> somewhere else where I'm not finding it?
>>
>
> What version of Postgres are you using?
>

In this particular case 9.3, although it's also working in 9.2.  Both are the
current versions.



>
>
>> Thanks in advance!
>>
>> Ken
>>
>>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Questions about daterange() function

2014-06-25 Thread Adrian Klaver

On 06/25/2014 05:53 PM, Ken Tanzer wrote:

Hi.  I've got lots of tables with start and end dates in them, and I'm
trying to learn how to work with them as date ranges (which seem
fantastic!).  I've noticed that the daterange() function seems to create
ranges with an inclusive lower bound, and an exclusive upper bound.  For
example:

SELECT
  reg_spc_date,
  reg_spc_date_end,
  daterange(reg_spc_date,reg_spc_date_end)
FROM reg_spc
LIMIT 5;

  reg_spc_date | reg_spc_date_end |daterange
--+--+-
  2012-04-05   | 2013-10-21   | [2012-04-05,2013-10-21)
  2013-10-28   |  | [2013-10-28,)
  2013-11-01   |  | [2013-11-01,)
  2012-10-19   | 2013-11-01   | [2012-10-19,2013-11-01)
  2005-03-29   | 2013-10-31   | [2005-03-29,2013-10-31)
(5 rows)

So here are my questions:

1)  Is there any way to control this behavior of daterange(), or is it
just best to (for example) add 1 to the upper bound argument if I want
an inclusive upper bound?

2)  This is purely cosmetic, but is there any way to control the output
formatting of a daterange to show the upper bound as inclusive?  So that
daterange(d1,d2) would display as [d1,d2-1] rather than [d1,d2)?

3)  I couldn't find this discussed in the documentation, and
specifically didn't find the daterange() function documented, including
on this page where I might have expected it:
http://www.postgresql.org/docs/9.3/static/functions-range.html.  Is it
somewhere else where I'm not finding it?


What version of Postgres are you using?



Thanks in advance!

Ken



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Questions about daterange() function

2014-06-25 Thread Ken Tanzer
Hi.  I've got lots of tables with start and end dates in them, and I'm
trying to learn how to work with them as date ranges (which seem
fantastic!).  I've noticed that the daterange() function seems to create
ranges with an inclusive lower bound, and an exclusive upper bound.  For
example:

SELECT
 reg_spc_date,
 reg_spc_date_end,
 daterange(reg_spc_date,reg_spc_date_end)
FROM reg_spc
LIMIT 5;

 reg_spc_date | reg_spc_date_end |daterange
--+--+-
 2012-04-05   | 2013-10-21   | [2012-04-05,2013-10-21)
 2013-10-28   |  | [2013-10-28,)
 2013-11-01   |  | [2013-11-01,)
 2012-10-19   | 2013-11-01   | [2012-10-19,2013-11-01)
 2005-03-29   | 2013-10-31   | [2005-03-29,2013-10-31)
(5 rows)

So here are my questions:

1)  Is there any way to control this behavior of daterange(), or is it just
best to (for example) add 1 to the upper bound argument if I want an
inclusive upper bound?

2)  This is purely cosmetic, but is there any way to control the output
formatting of a daterange to show the upper bound as inclusive?  So that
daterange(d1,d2) would display as [d1,d2-1] rather than [d1,d2)?

3)  I couldn't find this discussed in the documentation, and specifically
didn't find the daterange() function documented, including on this page
where I might have expected it:
http://www.postgresql.org/docs/9.3/static/functions-range.html.  Is it
somewhere else where I'm not finding it?

Thanks in advance!

Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Questions about DB capabilities

2014-05-16 Thread John R Pierce

On 5/15/2014 1:52 PM, Diego Ramón Cando Díaz wrote:
Hi, I have a question: is Postgres capable of horizontal growth? I 
mean, in the case that I have a server that is reaching its full HD 
capacity, is there a way to add another server to use as an extension 
of the first one, like a cluster configuration? Do you know a 
configuration that is capable of doing this? Is there a way that I 
can handle this?


Not without extensive work to shard your data.  It's certainly not 
transparent.


You might look at the Skytools stuff and PgQ, also projects like 
Postgres-XC, which are under active development.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





[GENERAL] Questions about DB capabilities

2014-05-16 Thread Diego Ramón Cando Díaz
Hi, I have a question: is Postgres capable of horizontal growth? I mean, in 
the case that I have a server that is reaching its full HD capacity, is there a 
way to add another server to use as an extension of the first one, like a 
cluster configuration? Do you know a configuration that is capable of doing 
this? Is there a way that I can handle this?




Saludos cordiales/kind regards,
Diego Cando
Security Data / JR Electric Supply / Consorcio MTE Latinus
1800FIRMAS ext. 5021






Re: [GENERAL] general questions

2014-01-08 Thread Raghavendra
On Thu, Jan 9, 2014 at 5:04 AM, Tom Lane  wrote:

> CS DBA  writes:
> > 1) \d and schema's
> > - I setup 2 schema's (sch_a and sch_b)
> > - I added both schema's to my search_path
> > - I created 2 tables:  sch_a.test_tab and sch_b.test_tab
>
> > If I do a \d with no parameters I only see the first test_tab table
> > based on the order of my search_path.
> > I get that any queries will use the first found table if I don't specify
> > the schemaname but
> > if I'm looking for a full list (i.e. \d with no parameters) I would
> > think I should get a full list back
>
> > Is this intentional?
>
> Yes.  If you want to see stuff that's invisible in your current search
> path, use "\d *.*".  That's even documented somewhere ...
>
>
As Tom already said, I am adding a documentation pointer; you can find it
under "patterns":

http://www.postgresql.org/docs/9.3/static/app-psql.html


> > 3) Can I force unaligned mode AND no wrap for psql output?


For both unaligned AND no wrap, I guess you need to take the help of PAGER
and "psql -A", or "postgres=# \a", or "postgres=# \pset format unaligned".

I would try:

export PAGER='less -RSX'    // no wrap for the output
psql -A                     // unaligned

--
Raghav

EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


Re: [GENERAL] general questions

2014-01-08 Thread Tom Lane
CS DBA  writes:
> 1) \d and schema's
> - I setup 2 schema's (sch_a and sch_b)
> - I added both schema's to my search_path
> - I created 2 tables:  sch_a.test_tab and sch_b.test_tab

> If I do a \d with no parameters I only see the first test_tab table 
> based on the order of my search_path.
> I get that any queries will use the first found table if I don't specify 
> the schemaname but
> if I'm looking for a full list (i.e. \d with no parameters) I would 
> think I should get a full list back

> Is this intentional?

Yes.  If you want to see stuff that's invisible in your current search
path, use "\d *.*".  That's even documented somewhere ...

> 2) SET work_mem = x
> It seems that any user can run set work_mem = x in a session.

Yup.  If a user can issue arbitrary SQL, they can drive your server into
the ground with or without that, so I see little point in restricting it.
(Indeed, restricting it could be counterproductive, since too *small*
a value can be just as bad for performance as too large.)

> 3) Can I force unaligned mode AND no wrap for psql output?

[ shrug ... ]  Dunno, read the manual.

> 4) Is there a way to know for sure if a sql file was run in single 
> transaction mode (after the fact), i.e. something in the logs?

If you're logging individual statements, there will be BEGIN and
COMMIT logged.  If you're not, I doubt the log will even tell you
a sql file was run, let alone such details.

> 5) Is there a query that will show me the actual prepared SQL text for a 
> prepared query?

select * from pg_prepared_statements
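
A quick sketch of that in action (the statement name and query here are made
up):

PREPARE get_rel (text) AS SELECT oid FROM pg_class WHERE relname = $1;
SELECT name, statement, parameter_types FROM pg_prepared_statements;
-- one row: name = get_rel, statement = the original PREPARE text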

regards, tom lane




[GENERAL] general questions

2014-01-08 Thread CS DBA

Hi All;

I recently ran into the following, any thoughts?

Thanks in advance...

1) \d and schema's
- I setup 2 schema's (sch_a and sch_b)
- I added both schema's to my search_path
- I created 2 tables:  sch_a.test_tab and sch_b.test_tab

If I do a \d with no parameters I only see the first test_tab table 
based on the order of my search_path.
I get that any queries will use the first found table if I don't specify 
the schemaname but
if I'm looking for a full list (i.e. \d with no parameters) I would 
think I should get a full list back


Is this intentional?


2) SET work_mem = x
It seems that any user can run set work_mem = x in a session. While this 
is keen for
folks who know what they are doing, it may not be so keen for folks who 
tend to do
foolish things, especially if a team has several of these types of folks 
on board
i.e. I can Imagine 5 devs all setting work_mem to 5GB each and running 
giant runaway

queries all on a dev server with 8GB of RAM.

Is there a way to restrict this?

3) Can I force unaligned mode AND no wrap for psql output?

4) Is there a way to know for sure if a sql file was run in single 
transaction mode (after the fact), i.e. something in the logs?


5) Is there a query that will show me the actual prepared SQL text for a 
prepared query?








Re: [GENERAL] Questions: How to configure PostgreSQL at the beginning ....

2013-02-19 Thread Adrian Klaver

On 02/19/2013 03:07 AM, Tomas Pasterak wrote:

Dear Sir or Madam,

I have downloaded and installed the latest PostgreSQL version V9.2 to my
Windows 7 OS PC.
I want to have it running on my PC, as local host.

Now I am facing some problems.

1.) I do not know, how to fill in the properties tab for the server, as
name, host (what shall be host, shall
I put there localhost or 127.0.0.1?), service field, Maintenance DB
field (can it be named as postgres?)?
And, what shall I put as user name? And password? Is there something
predefined? Or how and where shall I change it?
In which conf file?
It does not accept what I add there 


Are you creating a new database or trying to change an existing one?

By default there should be postgres user set up with a password you 
supplied during the install. If you want more users then you will need 
to create them. Know that in Postgres users/groups are called roles, 
where a role with login privileges ~ user and one without ~ group. So in 
PgAdmin you would go Edit -> New Object -> New Login Role




I want to have it as simple as possible, I want to connect my PHP
scripts with the PostgreSQL DB on my PC.


2.) Via the psql console, I can not login myself  I want to create
tables etc., but I need to overcome the connection and
login procedure and again, I do not know what to put there, as it does
not accept what I enter there ...

Server [localhost]: What to put here?
Database [postgres]: What to put here?
Port [5432]: What to put here?
Username [postgres]: What to put here?
Password: What to put here?


You should just accept the defaults [the values in brackets] and supply the 
password you created for the postgres user on install. At that point you 
will be connected to the postgres database. On install Postgres creates 
three system databases: template0, template1 and postgres. At this point you 
should not be dealing with the template* databases. The postgres 
(Maintenance DB) database is basically empty and used primarily as a 
database to connect to and then do other things. One of those things 
would be to create your own database(s). You can do that using psql or 
PgAdmin. In PgAdmin that would happen under File -> Add Server.
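
For example, a minimal sketch from psql, once connected as postgres (the
database name myapp is made up):

CREATE DATABASE myapp;
\c myapp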




The message returned is for example this:
psql: could not translate host name "Tomas" to address: Unknown server error
Press any key to continue ...


Looks like you tried to use your name as the host name.



3.) There are also some .conf files, like postgresql.conf, pg_hba.conf,
pgpass.conf. What shall I put there?
I played with that, but it does not work, I want to have it simple.

Do you have any examples of configuration, what shall I put everywhere,
so that it works?


That is a big topic:

http://www.postgresql.org/docs/9.2/interactive/runtime-config.html
http://www.postgresql.org/docs/9.2/interactive/client-authentication.html

For now I would tend to leave the settings alone, except for those in 
pg_hba.conf, which are explained in the second link above.




Thank you very much in advance
Best regards
Tomas



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Questions: How to configure PostgreSQL at the beginning ....

2013-02-19 Thread Amit Kapila
On Tuesday, February 19, 2013 4:37 PM Tomas Pasterak wrote:

> I have downloaded and installed the latest PostgreSQL version V9.2 to my
Windows 7 OS PC.
> I want to have it running on my PC, as local host.

> Now I am facing some problems.

> 1.) I do not know, how to fill in the properties tab for the server, as
name, host (what shall be host, shall 
> I put there localhost or 127.0.0.1?), service field, Maintenance DB field
(can it be named as postgres?)?
> And, what shall I put as user name? And password? Is there something
predefined? Or how and where shall I change it? 
> In which conf file? 
> It does not accept what I add there 

> I want to have it as simple as possible, I want to connect my PHP scripts
with the PostgreSQL DB on my PC.


> 2.) Via the psql console, I can not login myself  I want to create
tables etc., but I need to overcome the 
> connection and 
> login procedure and again, I do not know what to put there, as it does not
accept what I enter there ...

> Server [localhost]: What to put here?
> Database [postgres]: What to put here?
> Port [5432]: What to put here?
> Username [postgres]: What to put here?
> Password: What to put here?

> The message returned is for example this: 
> psql: could not translate host name "Tomas" to address: Unknown server
error
> Press any key to continue ...

Have you tried psql.exe -d postgres?
If your server is started with the default configuration, it should work.

> 3.) There are also some .conf files, like postgresql.conf, pg_hba.conf,
pgpass.conf. What shall I put there?
> I played with that, but it does not work, I want to have it simple.

You don't need to change anything there unless you want something specific. 
After initdb, the server should start with
Pg_ctl.exe start -D data_dir

Where data_dir is the path where you have created your database.

With Regards,
Amit Kapila.






[GENERAL] Questions: How to configure PostgreSQL at the beginning ....

2013-02-19 Thread Tomas Pasterak
Dear Sir or Madam,

I have downloaded and installed the latest PostgreSQL version V9.2 to my
Windows 7 OS PC.
I want to have it running on my PC, as local host.

Now I am facing some problems.

1.) I do not know how to fill in the properties tab for the server: the name,
the host (what shall the host be, shall I put localhost or 127.0.0.1 there?),
the service field, the Maintenance DB field (can it be named postgres?).
And what shall I put as the user name? And password? Is there something
predefined? Or how and where shall I change it?
In which conf file?
It does not accept what I add there ...

I want to have it as simple as possible, I want to connect my PHP scripts
with the PostgreSQL DB on my PC.


2.) Via the psql console, I cannot log in ... I want to create
tables etc., but I need to get past the connection and
login procedure, and again I do not know what to put there, as it does not
accept what I enter there ...

Server [localhost]: What to put here?
Database [postgres]: What to put here?
Port [5432]: What to put here?
Username [postgres]: What to put here?
Password: What to put here?

The message returned is for example this:
psql: could not translate host name "Tomas" to address: Unknown server error
Press any key to continue ...

3.) There are also some .conf files, like postgresql.conf, pg_hba.conf,
pgpass.conf. What shall I put there?
I played with that, but it does not work, I want to have it simple.

Do you have any examples of configuration, what shall I put everywhere, so
that it works?

Thank you very much in advance
Best regards
Tomas


[GENERAL] Questions about 9.2 unique constraints

2013-01-08 Thread a...@hsk.hk
Hi,

In PostgreSQL 9.0.x we must define a constraint as DEFERRABLE in the "create 
table" statement; we cannot define DEFERRABLE in "create table as select". Does 
this restriction still exist in 9.2?

Also, in 9.2 can deferrable uniqueness be mixed with foreign keys?

Thanks





Re: [GENERAL] Questions of the privileges to use the pg_cancel_backend and pg_terminate_backend function. Thanks.

2012-04-04 Thread Aaron
When I need to give other users access to a function that someone must
be superuser to execute I write a security definer function.
See: http://www.postgresql.org/docs/9.1/static/sql-createfunction.html
Also, see the "Using the security definer" section of:
http://www.ibm.com/developerworks/opensource/library/os-postgresecurity/index.html

Think of it like sudo for a db.
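
A minimal sketch of that pattern, created as a superuser (the function and
role names here are mine):

CREATE FUNCTION cancel_backend(pid int) RETURNS boolean
LANGUAGE sql SECURITY DEFINER AS $$
    SELECT pg_cancel_backend($1);
$$;

REVOKE ALL ON FUNCTION cancel_backend(int) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION cancel_backend(int) TO junior_dba;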


Aaron Thul
http://www.chasingnuts.com



On Wed, Apr 4, 2012 at 8:39 AM, leaf_yxj  wrote:
> Hi Guys. I got one problem. I need to give some of the non-super users( kind
> of dba) to get the privileges
> to can cancel other users's query, DML.  After I granted the execute on
> pg_cancel_backend and pg_terminate_backend function to them, they still get
> the error message as follows when they call these two function :
>
> ERROR : must be superuser to signal other server processes.
>
> QUestion : is it possible to make the non superuser to have these two
> privileges??
>
> Thanks.
>
> Regards.
>
> Grace
>



[GENERAL] Questions of the privileges to use the pg_cancel_backend and pg_terminate_backend function. Thanks.

2012-04-04 Thread leaf_yxj
Hi guys. I have one problem. I need to give some of the non-superusers (kind
of DBAs) the privilege to cancel other users' queries and DML. After I granted
EXECUTE on the pg_cancel_backend and pg_terminate_backend functions to them,
they still get the following error message when they call these two functions:

ERROR: must be superuser to signal other server processes.

Question: is it possible for a non-superuser to have these two privileges?

Thanks.

Regards.

Grace




Re: [GENERAL] Questions about setting an array element value outside of the update

2011-12-04 Thread Tom Lane
David Johnston  writes:
>> Is there a less clumsy way to set array[position] to the new_value (not update 
>> but just change an element inside an array) than:
>> 
>> SELECT
>> _array[1:pos-1]
>> ||newval
>> ||_array[_pos+1:array_length(_array, 1)]

> I do not know if there is a cleaner way but regardless you should code
> your logic as a function.

Inside a plpgsql function, you could just do

array[pos] := newval;

so perhaps it'd be worth creating a helper function that's a wrapper
around that.
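
For instance, a minimal sketch of such a wrapper (the name array_set is made
up):

CREATE OR REPLACE FUNCTION array_set(arr anyarray, pos int, newval anyelement)
RETURNS anyarray AS $$
BEGIN
    arr[pos] := newval;   -- plpgsql subscripted assignment
    RETURN arr;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

SELECT array_set(ARRAY[1,2,3], 2, 99);   -- yields {1,99,3}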

regards, tom lane



Re: [GENERAL] Questions about setting an array element value outside of the update

2011-12-04 Thread David Johnston
On Dec 4, 2011, at 22:43, Maxim Boguk  wrote:

> Let's say I have a subquery which produces an array[], a position and a new_value.
> 
> Is there a less clumsy way to set array[position] to the new_value (not update 
> but just change an element inside an array) than:
> 
> SELECT
>_array[1:pos-1]
>||newval
>||_array[_pos+1:array_length(_array, 1)]
> FROM 
> (
>SELECT _array,
>   pos,
>   newval
> FROM
>   some_colmplicated_logic
> );
> 
> The: 
>_array[1:pos-1]
>||newval
>||_array[_pos+1:array_length(_array, 1)]
> part is very clumsy for my eyes.
> 
> PS: that is just small part of the complicated WITH RECURSIVE iterator in 
> real task.
> 
> -- 
> Maxim Boguk
> Senior Postgresql DBA.

My first reaction is that you should question whether you really want to deal 
with arrays like this in the first place.  Maybe describe what you want to 
accomplish and look for alternatives.

I do not know if there is a cleaner way, but regardless you should code your 
logic as a function.  If you devise a better way later, then changing the 
algorithm will be very simple.  And it should also make your inline SQL easier 
to follow.

David J.





[GENERAL] Questions about setting an array element value outside of the update

2011-12-04 Thread Maxim Boguk
Let's say I have a subquery which produces an array[], a position and a new_value.

Is there a less clumsy way to set array[position] to the new_value (not an
update, just changing an element inside the array) than:

SELECT
_array[1:pos-1]
||newval
||_array[_pos+1:array_length(_array, 1)]
FROM
(
SELECT _array,
   pos,
   newval
 FROM
   some_colmplicated_logic
);

The:
_array[1:pos-1]
||newval
||_array[_pos+1:array_length(_array, 1)]
part is very clumsy to my eyes.

PS: that is just a small part of a complicated WITH RECURSIVE iterator in a
real task.

-- 
Maxim Boguk
Senior Postgresql DBA.


[GENERAL] Questions about "EXPLAIN"

2011-11-15 Thread David Johnston
Hey,

PostgreSQL 9.0

1) While comparing a simple GROUP/COUNT query I noticed that TEXT and JSON
formats identify the Top-Level Plan Node differently (GroupAggregate vs.
Aggregate).  More curiosity than anything but I would have expected them to
match.

2) For the same query I was hoping to be able to get the defined alias for
the "COUNT" output column but instead the "OUTPUT" simply gives the
expression.  Is there some way to get EXPLAIN to output the final column
names or, assuming that this has been discussed previously (likely), could
someone link to one or more threads with the discussion as to why it has not
been done (or give a brief synopsis).

Thanks,

David J.

QUERY PLAN (TEXT) [Sample]
GroupAggregate  (cost=4.27..9.64 rows=2 width=64)
  Output: sqllibrary_query_name, count(sqllibrary_query_version)

QUERY PLAN (JSON) [Sample]
[
  {
"Plan": {
  "Node Type": "Aggregate", 
  "Strategy": "Sorted",
  "Startup Cost": 4.27,
  "Total Cost": 9.64,
  "Plan Rows": 2,
  "Plan Width": 64,
  "Output": ["sqllibrary_query_name",
"count(sqllibrary_query_version)"],






Re: [GENERAL] Questions about Partitioning

2011-04-21 Thread Craig Ringer
On 19/04/11 23:56, Phoenix Kiula wrote:
> While I fix some bigger DB woes, I have learned a lesson. Huge indexes
> and tables are a pain.
> 
> Which makes me doubly keen on looking at partitioning.
> 
> Most examples I see online are partitioned by date. As in months, or
> quarter, and so on. This doesn't work for me as I don't have too much
> logic required based on time.
> 
> The biggest, highest volume SELECT in my database happens through an
> "alias" column. This is an alphanumeric column. The second-biggest
> SELECT happens through the "userid" column -- because many users check
> their account every day.

If  user id -> alias and/or alias -> user id lookups are really "hot",
consider moving them to a subtable, so you don't have to worry about
whether to partition by user id or alias, and so that the table is
really small, easily cached, and fast to scan. For example:

CREATE TABLE user_alias (
   alias VARCHAR(42) PRIMARY KEY,
   user_id integer REFERENCES maintable(id)
);

If you like you can retain the "alias" column in "maintable", making
that a REFERENCE to user_alias(alias) so you force a 1:1 relationship
and don't have to JOIN on user_alias to get alias data for a user. The
downside of that is that the circular/bidirectional reference requires
you to use 'DEFERRABLE INITIALLY DEFERRED' on one or both references to
be able to insert, and that can cause memory use issues if you do really
big batch inserts and deletes on those tables.

> 1. Which column should I partition by -- the "alias" because it's the
> largest contributor of queries? This should be OK, but my concern is
> that when user_id queries are happening, then the data for the same
> user will come through many subtables that are partitioned by "alias"

See above: consider splitting the user-id-to-alias mapping out into
another table.

> 3. If I partition using "a%", "b%" etc up to "z%" as the partition
> condition, is this an issue

It might be worth examining the distribution of your data and
partitioning on constraints that distribute the data better. There'll be
a lot more "c"s than "z"s.

That said, it might not be worth the complexity and you'd have to check
if the constraint exclusion code was smart enough to figure out the
conditions. I don't have much experience with partitioning and have
never tried or tested partitioning on a LIKE pattern.

> 6. Triggers - how do they affect speed?

A constraint is not a trigger, they're different. SELECTs on partitioned
tables are not affected by triggers.

For INSERT, UPDATE and DELETE, where you're redirecting INSERTs into the
parent table into the appropriate partition, then speed might be a
concern. It probably doesn't matter. If you find it to be an issue, then
rather than re-writing the trigger in C, you're probably better off just
INSERTing directly into the appropriate subtable and thus bypassing the
trigger.
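
For illustration, a minimal redirect trigger in that style, reusing the
subtable_a/subtable_b names from this thread (a real version needs one branch
per partition):

CREATE OR REPLACE FUNCTION maintable_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.alias LIKE 'a%' THEN
        INSERT INTO subtable_a VALUES (NEW.*);
    ELSE
        INSERT INTO subtable_b VALUES (NEW.*);
    END IF;
    RETURN NULL;   -- keep the row out of the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER maintable_insert_trg
    BEFORE INSERT ON maintable
    FOR EACH ROW EXECUTE PROCEDURE maintable_insert();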

--
Craig Ringer



Re: [GENERAL] Questions about Partitioning

2011-04-19 Thread Steve Crawford

On 04/19/2011 08:56 AM, Phoenix Kiula wrote:

While I fix some bigger DB woes, I have learned a lesson. Huge indexes
and tables are a pain.

Which makes me doubly keen on looking at partitioning.
Before jumping into partitioning it would be useful to know specifically 
what pain you are having with your current tables and indexes. 
Maintenance? Performance? Other? Question zero is "What issues are 
currently causing you pain with large tables?" and after that, 
determining whether partitioning is an appropriate solution. There is 
pain associated with partitioning, as well, so you need to be sure that 
you will achieve a net pain reduction.


Carefully read 
http://www.postgresql.org/docs/current/static/ddl-partitioning.html, it 
has examples that answer several of your questions. Pay extra attention 
to 5.9.6 "Caveats".


Some places where partitioning work well:

1. The partition can substitute for an index and the resulting child 
tables will have somewhat comparable sizes. If you had contact 
information where state was typically required in queries you might 
partition the data into tables for each state so a typical query would 
only touch a smaller data set and the partitioning/child-table 
constraints substitute for an index on state.


2. You frequently drop data in bulk and can group that data in such a 
way that you can drop or truncate a child-table. Among the places I've 
used partitioning is for validation codes. I partition them by 
expiration date, and when the date arrives I just drop the partition with the 
expired codes - way faster than a DELETE FROM and the necessary follow-up 
maintenance when deleting millions of codes.


3. The nature of your data is such that it can be partitioned into a 
small part that is accessed frequently and parts that are relatively 
rarely accessed.



Most examples I see online are partitioned by date. As in months, or
quarter, and so on. This doesn't work for me as I don't have too much
logic required based on time.


Time-based data often satisfies all of the above (log data you can 
partition by month, typically only look at the current month and drop 
data that is a year old, for example) so that's what ends up being in 
most examples.



The biggest, highest volume SELECT in my database happens through an
"alias" column. This is an alphanumeric column. The second-biggest
SELECT happens through the "userid" column -- because many users check
their account every day.

A rough table definition can be considered as follows:

CREATE TABLE maintable
   id                SERIAL PRIMARY KEY
   alias             VARCHAR(42)
   ...
   user_id           VARCHAR(30)
   user_registered   BOOLEAN
   status            VARCHAR(1)


  My questions:

1. Which column should I partition by -- the "alias" because it's the
largest contributor of queries? This should be OK, but my concern is
that when user_id queries are happening, then the data for the same
user will come through many subtables that are partitioned by "alias"
-- will this happen automatically (presuming "constraint exclusion" is
on)? How does partitioning by one column affect queries on others.
Will there be subtable-by-subtable indexes on both alias and


Answer question zero, above, first. But beware - the primary key is not 
inherited. You run the risk of duplicating the primary key (or other 
unique identifier) across child tables unless you implement the 
appropriate constraints on the child tables to prevent this. It's also 
pointless to have a primary key on the parent table in most situations.



2. How does SERIAL type work with partitions? Will INSERT data go into
the respective partitions and yet maintain an overall sequence -- I
mean, the *same* overall sequence for the parent table distributed
automagically across subtables?
This depends on how you set up your triggers, constraints, child tables 
etc. but by default a basic "create table thechild () inherits 
(theparent);" will result in a child table that shares the same sequence 
as the parent.
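
A sketch of that default behavior, using the table names from Steve's example
(the payload column is made up):

CREATE TABLE theparent (id SERIAL PRIMARY KEY, payload text);
CREATE TABLE thechild () INHERITS (theparent);
INSERT INTO theparent (payload) VALUES ('a');   -- gets id 1
INSERT INTO thechild  (payload) VALUES ('b');   -- gets id 2, from theparent_id_seq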

3. If I partition using "a%", "b%" etc up to "z%" as the partition
condition, is this an issue -- are about 26 subtables too many
partitions? Mine are static partitions as in they will be the same
forever, unlike data-based partitions. And each partition will
continue to grow. If I include that "alias"es can begin with numbers
and allowed symbols too, then this may be 45 partitions? What's the
limit of partitions -- not only official limit, but practical limit in
terms of performance?


As always, the answer is "depends" but I wouldn't typically see 45 as 
too many. See primary-key warning above. It's less an absolute number of 
tables and more whether the design of your tables and queries results in 
execution efficiency gains that outweigh the additional planner costs.



4. Given that it's a wildcard LIKE condition (with a "%") will this
affect the index and subsequent SELECT speed? Are partition conditions
recommended to be "=" or "<" t

[GENERAL] Questions about Partitioning

2011-04-19 Thread Phoenix Kiula
While I fix some bigger DB woes, I have learned a lesson. Huge indexes
and tables are a pain.

Which makes me doubly keen on looking at partitioning.

Most examples I see online are partitioned by date. As in months, or
quarter, and so on. This doesn't work for me as I don't have too much
logic required based on time.

The biggest, highest volume SELECT in my database happens through an
"alias" column. This is an alphanumeric column. The second-biggest
SELECT happens through the "userid" column -- because many users check
their account every day.

A rough table definition can be considered as follows:

   CREATE TABLE maintable
      id                SERIAL PRIMARY KEY
      alias             VARCHAR(42)
      ...
      user_id           VARCHAR(30)
      user_registered   BOOLEAN
      status            VARCHAR(1)


 My questions:

1. Which column should I partition by -- the "alias" because it's the
largest contributor of queries? This should be OK, but my concern is
that when user_id queries are happening, then the data for the same
user will come through many subtables that are partitioned by "alias"
-- will this happen automatically (presuming "constraint exclusion" is
on)? How does partitioning by one column affect queries on others.
Will there be subtable-by-subtable indexes on both alias and

2. How does SERIAL type work with partitions? Will INSERT data go into
the respective partitions and yet maintain an overall sequence -- I
mean, the *same* overall sequence for the parent table distributed
automagically across subtables?

3. If I partition using "a%", "b%" etc up to "z%" as the partition
condition, is this an issue -- are about 26 subtables too many
partitions? Mine are static partitions as in they will be the same
forever, unlike data-based partitions. And each partition will
continue to grow. If I include that "alias"es can begin with numbers
and allowed symbols too, then this may be 45 partitions? What's the
limit of partitions -- not only official limit, but practical limit in
terms of performance?

4. Given that it's a wildcard LIKE condition (with a "%") will this
affect the index and subsequent SELECT speed? Are partition conditions
recommended to be "=" or "<" type operators only or is LIKE ok??

5. Does partitioning need to happen only through one column? Can I
have a condition containing two columns instead?

   CREATE TABLE subtable_a (
PRIMARY KEY (id)
CHECK ( user_id LIKE 'a%' and user_registered IS TRUE)
) INHERITS (maintable);

   CREATE TABLE subtable_b (
PRIMARY KEY (id),
CHECK ( user_id LIKE 'b%' and user_registered IS TRUE)
) INHERITS (maintable);

   ..etc


6. Triggers - how do they affect speed? Everything, insert, update,
select will happen through this conditional trigger. I will likely be
writing this in PLSQL, but I read in several websites that C triggers
are much faster than PLSQL triggers. Is this a concern?

7. "Constraint exclusion" - is it recommended to have this in the
pg.conf, or will I need to do this before every SQL? I prefer the
pg.conf way, but want to confirm that there are no downsides for other
regular SQL operations with this setting?

8. How will JOIN work? I have different tables JOINing with the parent
table now. With partitioned subtables, will constraint exclusion
automatically do what's needed and my SQL does not need to change? Or
will there be triggers required for each and every query I currently
have?


Eight questions is enough for my first post in this partitioning thread   :)

Thanks much!



Re: [GENERAL] Questions about octal vs. hex for bytea

2011-02-20 Thread Reuven M. Lerner
Thanks, everyone, for the swift and clear responses.  It's good to know 
that I did understand things correctly!


Reuven



Re: [GENERAL] Questions about octal vs. hex for bytea

2011-02-20 Thread Tom Lane
"Reuven M. Lerner"  writes:
> My client is concerned that the internal representation has changed, and 
> is asking me for a script that will change the representation, in order 
> to save space (since hex occupies less space than octal).

This is complete nonsense.  The internal representation is just bytes.

regards, tom lane



Re: [GENERAL] Questions about octal vs. hex for bytea

2011-02-20 Thread Radosław Smogura
"Reuven M. Lerner"  Sunday 20 February 2011 12:31:09
> Hi, everyone.  I've got a client who is planning to upgrade from
> PostgreSQL 8.3 to 9.0 in the coming weeks.  They use a lot of tables
> with bytea columns.  They're worried about the switch from octal to hex
> formats for bytea data.
> 
> 
> Based on everything I know and have read, the change is only for
> external representations, for input and output.  Output is now by
> default in hex, but can be changed with a parameter (bytea_output) that
> provides backward compatibility.  Input can be in either octal or hex,
> with no changes needed.
> 
> 
> My client is concerned that the internal representation has changed, and
> is asking me for a script that will change the representation, in order
> to save space (since hex occupies less space than octal).  I'm not aware
> of such a need, or even the possibility for this to be done; even if the
> internal representation has changed, it'll be handled in the upgrade
> process, and doesn't need to be done with anything external.
> 
> 
> So I've told them that I don't think that anything is necessary for
> either input or output, except (perhaps) to set bytea_output in its
> backward-compatibility mode.  But I wanted to check with people here,
> just to double-check my understanding.
> 
> 
> Thanks in advance,
> 
> 
> Reuven

There is no internal change in how bytea is kept; at least internally the data 
is not kept in an octal or hex representation. If your client uses JDBC it's 
better to download the newest driver, which should support receiving both hex 
and octal. It is safer to turn on backward compatibility. I do not know how 
pg_dump will handle migrating the data.

Regards,
Radek



Re: [GENERAL] Questions about octal vs. hex for bytea

2011-02-20 Thread David Johnston
Been using bytea heavily through JDBC.  In transitioning to 9.0 I've found the 
need to set the bytea_output parameter but otherwise everything else works the 
same as it did before.

As for the storage space concerns, I do not know for sure, but the numbers cannot 
be substantial enough to warrant changing an existing database unless large binary 
processing takes up a significant portion of the workload (as opposed to 
simple document storage and retrieval, which is what I am doing).

David J.

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Reuven M. Lerner
Sent: Sunday, February 20, 2011 6:31 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Questions about octal vs. hex for bytea

Hi, everyone.  I've got a client who is planning to upgrade from PostgreSQL 8.3 
to 9.0 in the coming weeks.  They use a lot of tables with bytea columns.  
They're worried about the switch from octal to hex formats for bytea data.


Based on everything I know and have read, the change is only for external 
representations, for input and output.  Output is now by default in hex, but 
can be changed with a parameter (bytea_output) that provides backward 
compatibility.  Input can be in either octal or hex, with no changes needed.


My client is concerned that the internal representation has changed, and is 
asking me for a script that will change the representation, in order to save 
space (since hex occupies less space than octal).  I'm not aware of such a 
need, or even the possibility for this to be done; even if the internal 
representation has changed, it'll be handled in the upgrade process, and 
doesn't need to be done with anything external.


So I've told them that I don't think that anything is necessary for either 
input or output, except (perhaps) to set bytea_output in its 
backward-compatibility mode.  But I wanted to check with people here, just to 
double-check my understanding.


Thanks in advance,


Reuven

--
Reuven M. Lerner -- Web development, consulting, and training
Mobile: +972-54-496-8405 * US phone: 847-230-9795
Skype/AIM: reuvenlerner




Re: [GENERAL] Questions about octal vs. hex for bytea

2011-02-20 Thread Andreas Kretschmer
Reuven M. Lerner  wrote:
> So I've told them that I don't think that anything is necessary for  
> either input or output, except (perhaps) to set bytea_output in its  
> backward-compatibility mode.  But I wanted to check with people here,  
> just to double-check my understanding.

You are right, there is no other problem. I have upgraded a customer's
database from 8.3 to 9.0; it was a DRUPAL database. Just backup &
restore and set the bytea_output parameter, that's all.
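
For reference, a sketch of the places bytea_output can be set back to the old
format (the database name mydb is made up):

SET bytea_output = 'escape';                      -- current session only
ALTER DATABASE mydb SET bytea_output = 'escape';  -- default for new sessions
-- or put bytea_output = 'escape' in postgresql.conf for the whole cluster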


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] Questions about octal vs. hex for bytea

2011-02-20 Thread Reuven M. Lerner
Hi, everyone.  I've got a client who is planning to upgrade from 
PostgreSQL 8.3 to 9.0 in the coming weeks.  They use a lot of tables 
with bytea columns.  They're worried about the switch from octal to hex 
formats for bytea data.



Based on everything I know and have read, the change is only for 
external representations, for input and output.  Output is now by 
default in hex, but can be changed with a parameter (bytea_output) that 
provides backward compatibility.  Input can be in either octal or hex, 
with no changes needed.



My client is concerned that the internal representation has changed, and 
is asking me for a script that will change the representation, in order 
to save space (since hex occupies less space than octal).  I'm not aware 
of such a need, or even the possibility for this to be done; even if the 
internal representation has changed, it'll be handled in the upgrade 
process, and doesn't need to be done with anything external.



So I've told them that I don't think that anything is necessary for 
either input or output, except (perhaps) to set bytea_output in its 
backward-compatibility mode.  But I wanted to check with people here, 
just to double-check my understanding.



Thanks in advance,


Reuven

--
Reuven M. Lerner -- Web development, consulting, and training
Mobile: +972-54-496-8405 * US phone: 847-230-9795
Skype/AIM: reuvenlerner




Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Pavel Stehule
2010/2/22 Jignesh Shah :
>>> set work_mem to '1MB'
>>> set search_path = 'public';
>
> Thanks for the example Pavel. I understood it. Are there any other SET
> options except above that I need to set to prevent security breach?
>

I am not sure - I know only search_path

Pavel

> Thanks,
> Jack
>
> On Mon, Feb 22, 2010 at 11:41 PM, Pavel Stehule 
> wrote:
>>
>> 2010/2/22 Jignesh Shah :
>> > Thanks a ton Laurenz and Pavel for your responses but I really didn't
>> > follow
>> > you. I am not master in PostGreSQL yet. Could you please give me some
>> > example?
>> >
>> > Basically, I want to know how many such SET options I should reset
>> > before
>> > executing my function and at the end it should also be restored to
>> > original
>> > settings.
>> >
>>
>> create or replace function foop()
>>  returns int as $$
>> select 10
>> $$ language sql
>> set work_mem to '1MB'
>> set search_path = 'public';
>> CREATE FUNCTION
>> postgres=#
>>
>> regards
>> Pavel Stehule
>>
>> > It would be really helpful if you could elaborate your response.
>> >
>> > Thanks guys.
>> > Jack
>> >
>> > On Mon, Feb 22, 2010 at 8:05 PM, Albe Laurenz 
>> > wrote:
>> >>
>> >> Jignesh Shah wrote:
>> >> > I have been writing a function with SECURITY DEFINER enabled.
>> >> > Basically, I am looking for ways to override the users SET
>> >> > option settings while executing my function to prevent the
>> >> > permissions breach. For example, to override "SET
>> >> > search_path", I am setting search path in my function before
>> >> > executing anything. Could any one please tell me what could
>> >> > be other SET options that I should take care?
>> >> >
>> >> > Moreover, how to revert back those settings just before
>> >> > returning from my function?
>> >>
>> >> You can use the SET clause of CREATE FUNCTION which does exactly
>> >> what you want.
>> >>
>> >> Yours,
>> >> Laurenz Albe
>> >
>> >
>
>



Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Jignesh Shah
>> set work_mem to '1MB'
>> set search_path = 'public';

Thanks for the example, Pavel. I understood it. Are there any other SET
options besides the above that I need to set to prevent a security breach?

Thanks,
Jack

On Mon, Feb 22, 2010 at 11:41 PM, Pavel Stehule wrote:

> 2010/2/22 Jignesh Shah :
> > Thanks a ton Laurenz and Pavel for your responses but I really didn't
> follow
> > you. I am not master in PostGreSQL yet. Could you please give me some
> > example?
> >
> > Basically, I want to know how many such SET options I should reset before
> > executing my function and at the end it should also be restored to
> original
> > settings.
> >
>
> create or replace function foop()
>  returns int as $$
> select 10
> $$ language sql
> set work_mem to '1MB'
> set search_path = 'public';
> CREATE FUNCTION
> postgres=#
>
> regards
> Pavel Stehule
>
> > It would be really helpful if you could elaborate your response.
> >
> > Thanks guys.
> > Jack
> >
> > On Mon, Feb 22, 2010 at 8:05 PM, Albe Laurenz 
> > wrote:
> >>
> >> Jignesh Shah wrote:
> >> > I have been writing a function with SECURITY DEFINER enabled.
> >> > Basically, I am looking for ways to override the users SET
> >> > option settings while executing my function to prevent the
> >> > permissions breach. For example, to override "SET
> >> > search_path", I am setting search path in my function before
> >> > executing anything. Could any one please tell me what could
> >> > be other SET options that I should take care?
> >> >
> >> > Moreover, how to revert back those settings just before
> >> > returning from my function?
> >>
> >> You can use the SET clause of CREATE FUNCTION which does exactly
> >> what you want.
> >>
> >> Yours,
> >> Laurenz Albe
> >
> >
>


Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Pavel Stehule
2010/2/22 Jignesh Shah :
> Thanks a ton Laurenz and Pavel for your responses but I really didn't follow
> you. I am not master in PostGreSQL yet. Could you please give me some
> example?
>
> Basically, I want to know how many such SET options I should reset before
> executing my function and at the end it should also be restored to original
> settings.
>

create or replace function foop()
 returns int as $$
select 10
$$ language sql
set work_mem to '1MB'
set search_path = 'public';
CREATE FUNCTION
postgres=#

regards
Pavel Stehule

> It would be really helpful if you could elaborate your response.
>
> Thanks guys.
> Jack
>
> On Mon, Feb 22, 2010 at 8:05 PM, Albe Laurenz 
> wrote:
>>
>> Jignesh Shah wrote:
>> > I have been writing a function with SECURITY DEFINER enabled.
>> > Basically, I am looking for ways to override the users SET
>> > option settings while executing my function to prevent the
>> > permissions breach. For example, to override "SET
>> > search_path", I am setting search path in my function before
>> > executing anything. Could any one please tell me what could
>> > be other SET options that I should take care?
>> >
>> > Moreover, how to revert back those settings just before
>> > returning from my function?
>>
>> You can use the SET clause of CREATE FUNCTION which does exactly
>> what you want.
>>
>> Yours,
>> Laurenz Albe
>
>



Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Jignesh Shah
Thanks a ton, Laurenz and Pavel, for your responses, but I really didn't follow
you. I am not a master of PostgreSQL yet. Could you please give me some
examples?

Basically, I want to know how many such SET options I should reset before
executing my function, and how at the end they can be restored to the original
settings.

It would be really helpful if you could elaborate your response.

Thanks guys.
Jack

On Mon, Feb 22, 2010 at 8:05 PM, Albe Laurenz wrote:

> Jignesh Shah wrote:
> > I have been writing a function with SECURITY DEFINER enabled.
> > Basically, I am looking for ways to override the users SET
> > option settings while executing my function to prevent the
> > permissions breach. For example, to override "SET
> > search_path", I am setting search path in my function before
> > executing anything. Could any one please tell me what could
> > be other SET options that I should take care?
> >
> > Moreover, how to revert back those settings just before
> > returning from my function?
>
> You can use the SET clause of CREATE FUNCTION which does exactly
> what you want.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Albe Laurenz
Jignesh Shah wrote:
> I have been writing a function with SECURITY DEFINER enabled. 
> Basically, I am looking for ways to override the users SET 
> option settings while executing my function to prevent the 
> permissions breach. For example, to override "SET 
> search_path", I am setting search path in my function before 
> executing anything. Could any one please tell me what could 
> be other SET options that I should take care?
>  
> Moreover, how to revert back those settings just before 
> returning from my function?

You can use the SET clause of CREATE FUNCTION which does exactly
what you want.
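
For instance, a minimal sketch (the function name and body are placeholders);
a parameter set this way is restored automatically when the function exits,
which also covers your second question:

CREATE FUNCTION secure_fn() RETURNS integer
    SECURITY DEFINER
    SET search_path = pg_catalog, pg_temp
    LANGUAGE sql
AS $$ SELECT 1 $$;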

Yours,
Laurenz Albe



Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Pavel Stehule
Hello

you can override the standard settings just for a function:

CREATE [ OR REPLACE ] FUNCTION
name ( [ [ argmode ] [ argname ] argtype [ { DEFAULT | = }
default_expr ] [, ...] ] )
[ RETURNS rettype
  | RETURNS TABLE ( column_name column_type [, ...] ) ]
  { LANGUAGE lang_name
| WINDOW
| IMMUTABLE | STABLE | VOLATILE
| CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
| [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
| COST execution_cost
| ROWS result_rows
| SET configuration_parameter { TO value | = value | FROM CURRENT } <<<===
| AS 'definition'
| AS 'obj_file', 'link_symbol'
  } ...
[ WITH ( attribute [, ...] ) ]

Regards
Pavel Stehule


2010/2/22 Jignesh Shah :
> Hello All,
>
> I have been writing a function with SECURITY DEFINER enabled. Basically, I
> am looking for ways to override the users SET option settings while
> executing my function to prevent the permissions breach. For example, to
> override "SET search_path", I am setting search path in my function before
> executing anything. Could any one please tell me what could be other SET
> options that I should take care?
>
> Moreover, how to revert back those settings just before returning from my
> function?
>
> Thanks, Jack



[GENERAL] Questions regarding SET option.

2010-02-22 Thread Jignesh Shah
Hello All,

I have been writing a function with SECURITY DEFINER enabled. Basically, I
am looking for ways to override the user's SET option settings while
executing my function, to prevent a permissions breach. For example, to
override "SET search_path", I set the search path in my function before
executing anything. Could anyone please tell me what other SET options I
should take care of?

Moreover, how do I revert those settings just before returning from my
function?

Thanks, Jack


Re: [GENERAL] questions about a table's row estimates

2010-02-11 Thread Tom Lane
Ben Chobot  writes:
> And unfortunately, Tom, we're not resetting stats counters. :(

Mph.  Well, the other thing that comes to mind is that n_live_tup
(and n_dead_tup) is typically updated by ANALYZE, but only to an
estimate based on ANALYZE's partial sample of the table.  If the
sample isn't very representative then it might be far off.
How does n_live_tup compare to an actual COUNT(*) count?
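
A sketch of a side-by-side check (substitute your real table name for
some_table):

SELECT s.n_live_tup, c.actual
FROM pg_stat_user_tables s,
     (SELECT count(*) AS actual FROM some_table) c
WHERE s.relname = 'some_table';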

regards, tom lane



Re: [GENERAL] questions about a table's row estimates

2010-02-11 Thread Ben Chobot
On Feb 10, 2010, at 10:28 PM, Greg Smith wrote:

> Ben Chobot wrote:
>> I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about 
>> n_live_tup. Shouldn't that be at least fairly close to (n_tup_ins - 
>> n_tup-del)? It doesn't seem to be, but I'm unclear why.
>>  
> Insert 2000 tuples.
> Delete 1000 tuples.
> vacuum
> Insert 1000 tuples. These go into the free space the deleted tuples used to 
> be in.
> analyze
> 
> n_tup_ins=3000
> n_tup_del=1000
> n_live_tup=3000

Say what? It's not documented terribly well 
(http://www.postgresql.org/docs/8.4/interactive/monitoring-stats.html is the 
best I can find), but I thought n_live_tup was basically the number of tuples 
visible to a new transaction. If my assumption is wrong, that might explain 
things.

And unfortunately, Tom, we're not resetting stats counters. :(

Re: [GENERAL] questions about a table's row estimates

2010-02-10 Thread Tom Lane
Greg Smith  writes:
> Ben Chobot wrote:
>> I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about 
>> n_live_tup. Shouldn't that be at least fairly close to (n_tup_ins - 
>> n_tup-del)? It doesn't seem to be, but I'm unclear why.
>> 
> Insert 2000 tuples.
> Delete 1000 tuples.
> vacuum
> Insert 1000 tuples. These go into the free space the deleted tuples used 
> to be in.
> analyze

> n_tup_ins=3000
> n_tup_del=1000
> n_live_tup=3000

Huh?

regression=# create table foo (f1 int);
CREATE TABLE
regression=# insert into foo select generate_series(1,2000);
INSERT 0 2000
regression=# select n_live_tup,n_tup_ins,n_tup_del from pg_stat_user_tables  
where relname = 'foo';
 n_live_tup | n_tup_ins | n_tup_del 
------------+-----------+-----------
       2000 |      2000 |         0
(1 row)

regression=# delete from foo where f1 > 1000;
DELETE 1000
regression=# select n_live_tup,n_tup_ins,n_tup_del from pg_stat_user_tables  
where relname = 'foo';
 n_live_tup | n_tup_ins | n_tup_del 
------------+-----------+-----------
       1000 |      2000 |      1000
(1 row)

regression=# insert into foo select generate_series(2001,3000);
INSERT 0 1000
regression=# select n_live_tup,n_tup_ins,n_tup_del from pg_stat_user_tables  
where relname = 'foo';
 n_live_tup | n_tup_ins | n_tup_del 
------------+-----------+-----------
       2000 |      3000 |      1000
(1 row)

regression=# 

The only easy explanation I can think of for Ben's complaint is if he
reset the stats counters sometime during the table's existence.

regards, tom lane



Re: [GENERAL] questions about a table's row estimates

2010-02-10 Thread Greg Smith

Ben Chobot wrote:

I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about n_live_tup. 
Shouldn't that be at least fairly close to (n_tup_ins - n_tup-del)? It doesn't 
seem to be, but I'm unclear why.
  

Insert 2000 tuples.
Delete 1000 tuples.
vacuum
Insert 1000 tuples. These go into the free space the deleted tuples used 
to be in.

analyze

n_tup_ins=3000
n_tup_del=1000
n_live_tup=3000

If there's any delete/vacuum/reuse churn here, there is no reason to believe the 
insert/delete and live counts will be close at all.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] questions about a table's row estimates

2010-02-08 Thread Ben Chobot
On Feb 5, 2010, at 12:14 PM, Ben Chobot wrote:

> I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about 
> n_live_tup. Shouldn't that be at least fairly close to (n_tup_ins - 
> n_tup-del)? It doesn't seem to be, but I'm unclear why.

Is everybody else unclear as well?


[GENERAL] questions about a table's row estimates

2010-02-05 Thread Ben Chobot
I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about n_live_tup. 
Shouldn't that be at least fairly close to (n_tup_ins - n_tup-del)? It doesn't 
seem to be, but I'm unclear why.


Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-05 Thread Dimitri Fontaine
Alvaro Herrera  writes:
> For example, perhaps there could be a new pair of functions
> pg_read_hba_file/pg_write_hba_file that would work even if the files are
> placed in other directories, but they (Debian) would need to propose
> it.

I don't remember that they had to provide those GUCs:

  http://www.postgresql.org/docs/8.4/static/runtime-config-file-locations.html

  hba_file (string)
Specifies the configuration file for host-based authentication
(customarily called pg_hba.conf). This parameter can only be set at
server start

The bug certainly is on PostgreSQL for providing the facility to
relocate the hba_file without providing any way for pgadmin and other
utilities to handle the situation?

Regards,
-- 
dim



Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread dipti shah
That makes sense.

Thanks,
Dipti

On Wed, Feb 3, 2010 at 12:08 PM, John R Pierce  wrote:

> dipti shah wrote:
>
>> I am connected to database as postgres user.
>>  '\!exec ..' doesn't work if I connect to the database from other host but
>> it does work if I connect to the database from server where I have
>> PostGreSQL installed. pg_read_file doesn't work in any case.
>>  Techdb=# \! exec cat /etc/postgresql/8.4/main/pg_hba.conf
>> cat: cannot open /etc/postgresql/8.4/main/pg_hba.conf
>>
>
> thats because psql runs the command on the LOCAL server that the user is
> running psql on.   would be all kinda security problems if a user could run
> commands on the remote server without having logged onto it as a regular
> user.
>
>
>


Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread John R Pierce

dipti shah wrote:

I am connected to database as postgres user.
 
'\!exec ..' doesn't work if I connect to the database from other host 
but it does work if I connect to the database from server where I have 
PostGreSQL installed. pg_read_file doesn't work in any case.
 
Techdb=# \! exec cat /etc/postgresql/8.4/main/pg_hba.conf

cat: cannot open /etc/postgresql/8.4/main/pg_hba.conf


That's because psql runs the command on the LOCAL machine that the user is 
running psql on.   There would be all kinds of security problems if a user could 
run commands on the remote server without having logged onto it as a 
regular user.






Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread dipti shah
I am connected to database as postgres user.

'\!exec ..' doesn't work if I connect to the database from another host, but it
does work if I connect to the database from the server where I have PostgreSQL
installed. pg_read_file doesn't work in any case.

Techdb=# \! exec cat /etc/postgresql/8.4/main/pg_hba.conf
cat: cannot open /etc/postgresql/8.4/main/pg_hba.conf
Techdb=# select pg_read_file('pg_hba.conf', 0, 8192);
ERROR:  could not open file "pg_hba.conf" for reading: No such file or
directory
Thanks,
Dipti



On Wed, Feb 3, 2010 at 12:14 AM, Tim Bruce - Postgres
wrote:

>  On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:
> > dipti shah escribió:
> >> Techdb=# show hba_file;
> >>hba_file
> >> --
> >>  /etc/postgresql/8.4/main/pg_hba.conf
> >> (1 row)
> >>
> >> Moreover, is there anyway to view content of this file from stored in
> >> above
> >> location "Techdb" command prompt itself.
> >>
> >> Techdb=# cat  /etc/postgresql/8.4/main/pg_hba.conf;
> >
> > Probably pg_read_file():
> >
> > select pg_read_file('pg_hba.conf', 0, 8192);
> >
> > Note that pg_read_file only allows paths relative to $PGDATA, which is
> > what you get from SHOW data_directory;
> >
> > --
> > Alvaro Herrera
> > http://www.CommandPrompt.com/ 
> > PostgreSQL Replication, Consulting, Custom Development, 24x7 support
> >
> >
>
> Since the pg_hba.conf file is located in the /etc path, the pg_read_file
> command fails with an error of
>
> "could not open file "pg_hba.conf" for reading: No such file or direcotry"
>
> It also won't allow use of the absolute path.
>
> The answer I found was to use the following command:
>
> postgres=# \! exec cat /etc/postgresql/8.3/main/pg_hba.conf
>
> Tim
> --
> Timothy J. Bruce
>
> Registered Linux User #325725
>
>


Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread Alvaro Herrera
Joshua D. Drake escribió:
> On Tue, 2010-02-02 at 16:09 -0300, Alvaro Herrera wrote:
> > Tim Bruce - Postgres escribió:
> > > On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:
> > 
> > > > Probably pg_read_file():
> > > >
> > > > select pg_read_file('pg_hba.conf', 0, 8192);
> > > >
> > > > Note that pg_read_file only allows paths relative to $PGDATA, which is
> > > > what you get from SHOW data_directory;
> > > 
> > > Since the pg_hba.conf file is located in the /etc path, the pg_read_file
> > > command fails with an error of
> > > 
> > > "could not open file "pg_hba.conf" for reading: No such file or direcotry"
> > 
> > Hmm, yeah, that's a shortcoming of the debian packaging, no doubt.  Does
> > it not install symlinks in the actual data directory?  If not, that
> > should be reported as a bug ...
> 
> I disagree. Debian/Ubuntu use the postgresql.conf facilities to have a
> different postgresql.conf.

That's all very good but in doing so they disabled the ability to edit
the files through pg_file_read and pg_file_write, so this patch needs
more thought or handling on their part.  It works fine in pristine
PostgreSQL, so it's not our bug.  I assume pgAdmin is unable to edit the
config files in Debian due to this problem.

For example, perhaps there could be a new pair of functions
pg_read_hba_file/pg_write_hba_file that would work even if the files are
placed in other directories, but they (Debian) would need to propose it.

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
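
Until such functions exist, one possible workaround is a superuser-defined
function in an untrusted language. A minimal sketch, assuming plperlu is
installed and the server account can read the file:

CREATE OR REPLACE FUNCTION read_hba_file() RETURNS text AS $$
  # plperlu is untrusted, so this runs with the server's filesystem access
  open(my $fh, '<', '/etc/postgresql/8.4/main/pg_hba.conf')
    or die "cannot open pg_hba.conf: $!";
  local $/;  # slurp mode: return the whole file at once
  return <$fh>;
$$ LANGUAGE plperlu;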



Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread Joshua D. Drake
On Tue, 2010-02-02 at 16:09 -0300, Alvaro Herrera wrote:
> Tim Bruce - Postgres wrote:
> > On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:
> 
> > > Probably pg_read_file():
> > >
> > > select pg_read_file('pg_hba.conf', 0, 8192);
> > >
> > > Note that pg_read_file only allows paths relative to $PGDATA, which is
> > > what you get from SHOW data_directory;
> > 
> > Since the pg_hba.conf file is located in the /etc path, the pg_read_file
> > command fails with an error of
> > 
> > "could not open file "pg_hba.conf" for reading: No such file or direcotry"
> 
> Hmm, yeah, that's a shortcoming of the debian packaging, no doubt.  Does
> it not install symlinks in the actual data directory?  If not, that
> should be reported as a bug ...

I disagree. Debian/Ubuntu use the postgresql.conf facilities to have a
different postgresql.conf.

This is a limitation within PostgreSQL. 

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use or Mr. or 
Sir.




Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread Alvaro Herrera
Tim Bruce - Postgres wrote:
> On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:

> > Probably pg_read_file():
> >
> > select pg_read_file('pg_hba.conf', 0, 8192);
> >
> > Note that pg_read_file only allows paths relative to $PGDATA, which is
> > what you get from SHOW data_directory;
> 
> Since the pg_hba.conf file is located in the /etc path, the pg_read_file
> command fails with an error of
> 
> "could not open file "pg_hba.conf" for reading: No such file or direcotry"

Hmm, yeah, that's a shortcoming of the debian packaging, no doubt.  Does
it not install symlinks in the actual data directory?  If not, that
should be reported as a bug ...


> It also won't allow use of the absolute path.

That's by design, yes.

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread Tim Bruce - Postgres
On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:
> dipti shah wrote:
>> Techdb=# show hba_file;
>>                hba_file
>> --------------------------------------
>>  /etc/postgresql/8.4/main/pg_hba.conf
>> (1 row)
>>
>> Moreover, is there any way to view the contents of this file, stored in
>> the above location, from the "Techdb" command prompt itself?
>>
>> Techdb=# cat  /etc/postgresql/8.4/main/pg_hba.conf;
>
> Probably pg_read_file():
>
> select pg_read_file('pg_hba.conf', 0, 8192);
>
> Note that pg_read_file only allows paths relative to $PGDATA, which is
> what you get from SHOW data_directory;
>
> --
> Alvaro Herrera
> http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>
>

Since the pg_hba.conf file is located in the /etc path, the pg_read_file
command fails with an error of

"could not open file "pg_hba.conf" for reading: No such file or direcotry"

It also won't allow use of the absolute path.

The answer I found was to use the following command:

postgres=# \! exec cat /etc/postgresql/8.3/main/pg_hba.conf

Tim
-- 
Timothy J. Bruce

Registered Linux User #325725




Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread Alvaro Herrera
dipti shah wrote:
> Techdb=# show hba_file;
>                hba_file
> --------------------------------------
>  /etc/postgresql/8.4/main/pg_hba.conf
> (1 row)
> 
> Moreover, is there any way to view the contents of this file, stored in the
> above location, from the "Techdb" command prompt itself?
> 
> Techdb=# cat  /etc/postgresql/8.4/main/pg_hba.conf;

Probably pg_read_file():

select pg_read_file('pg_hba.conf', 0, 8192);

Note that pg_read_file only allows paths relative to $PGDATA, which is
what you get from SHOW data_directory;

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
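
A worked example of the relative-path rule; the data directory shown is just
an assumed stock Debian layout:

postgres=# SHOW data_directory;
        data_directory
------------------------------
 /var/lib/postgresql/8.4/main
(1 row)

postgres=# SELECT pg_read_file('PG_VERSION', 0, 8192);   -- relative to $PGDATA, works
postgres=# SELECT pg_read_file('/etc/passwd', 0, 8192);  -- absolute path, rejected by design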



Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread dipti shah
Wow!!..that was too quick. Thanks Richard.

On Tue, Feb 2, 2010 at 3:29 PM, Richard Huxton  wrote:

> On 02/02/10 09:55, dipti shah wrote:
>
>> Thanks Richard and Alvaro. The "show hba_file" suggestion is a great
>> solution. Thanks a ton. Could you tell me where to find all such commands?
>>
>
> All the configuration settings are listed in Chapter 18:
> http://www.postgresql.org/docs/8.4/static/runtime-config.html
>
> You'll also find details in the SQL reference for SET and SHOW:
> http://www.postgresql.org/docs/8.4/static/sql-set.html
> http://www.postgresql.org/docs/8.4/static/sql-show.html
>
> "SHOW ALL" will list all the settings in one big table.
>
> --
>  Richard Huxton
>  Archonet Ltd
>


Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread Richard Huxton

On 02/02/10 09:58, dipti shah wrote:

Techdb=# show hba_file;
               hba_file
--------------------------------------
 /etc/postgresql/8.4/main/pg_hba.conf
(1 row)


Ah! You're running a Debian-based system by the look of it.


Moreover, is there any way to view the contents of this file, stored in the
above location, from the "Techdb" command prompt itself?

Techdb=# cat  /etc/postgresql/8.4/main/pg_hba.conf;
ERROR:  syntax error at or near "cat"
LINE 1: cat  /etc/postgresql/8.4/main/pg_hba.conf;
 ^


You would normally do so from the system shell rather than psql. 
However, you can "shell out" with a backslash command in psql:

\! cat /etc/...

You might not have permission to view that file.

A full list of psql backslash commands is available in the manual
(client applications), the psql man page, and by doing \? (or \h for SQL help).


HTH

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread Richard Huxton

On 02/02/10 09:55, dipti shah wrote:

Thanks Richard and Alvaro. The "show hba_file" suggestion is a great
solution. Thanks a ton. Could you tell me where to find all such commands?


All the configuration settings are listed in Chapter 18:
http://www.postgresql.org/docs/8.4/static/runtime-config.html

You'll also find details in the SQL reference for SET and SHOW:
http://www.postgresql.org/docs/8.4/static/sql-set.html
http://www.postgresql.org/docs/8.4/static/sql-show.html

"SHOW ALL" will list all the settings in one big table.

--
  Richard Huxton
  Archonet Ltd
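
The settings are also exposed as a view, which is handy for filtering; for
example, to find all the file-location settings:

SELECT name, setting FROM pg_settings WHERE name LIKE '%file%';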



Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread dipti shah
Techdb=# show hba_file;
               hba_file
--------------------------------------
 /etc/postgresql/8.4/main/pg_hba.conf
(1 row)

Moreover, is there any way to view the contents of this file, stored in the
above location, from the "Techdb" command prompt itself?

Techdb=# cat  /etc/postgresql/8.4/main/pg_hba.conf;
ERROR:  syntax error at or near "cat"
LINE 1: cat  /etc/postgresql/8.4/main/pg_hba.conf;
^
Thanks,
Dipti
On Tue, Feb 2, 2010 at 3:25 PM, dipti shah  wrote:

> Thanks Richard and Alvaro. The "show hba_file" suggestion is a great
> solution. Thanks a ton. Could you tell me where to find all such commands?
>
> Thanks,
> Dip
>   On Mon, Feb 1, 2010 at 9:43 PM, Alvaro Herrera <
> alvhe...@commandprompt.com> wrote:
>
>> dipti shah wrote:
>> > Thanks Richard. Those chapters are very useful. I got to know most of the
>> > concepts but didn't find the location of the pg_hba.conf file so that I can
>> > verify it. I have connected to my database using the "postgres" user. Could
>> > you tell me how to open the pg_hba.conf file?
>>
>> Run this:
>>SHOW hba_file;
>>
>> --
>> Alvaro Herrera
>> http://www.CommandPrompt.com/ 
>> The PostgreSQL Company - Command Prompt, Inc.
>>
>
>


Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-02 Thread dipti shah
Thanks Richard and Alvaro. The "show hba_file" suggestion is a great
solution. Thanks a ton. Could you tell me where to find all such commands?

Thanks,
Dip
On Mon, Feb 1, 2010 at 9:43 PM, Alvaro Herrera
wrote:

> dipti shah wrote:
> > Thanks Richard. Those chapters are very useful. I got to know most of the
> > concepts but didn't find the location of the pg_hba.conf file so that I can
> > verify it. I have connected to my database using the "postgres" user. Could
> > you tell me how to open the pg_hba.conf file?
>
> Run this:
>SHOW hba_file;
>
> --
> Alvaro Herrera
> http://www.CommandPrompt.com/ 
> The PostgreSQL Company - Command Prompt, Inc.
>


Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-01 Thread Alvaro Herrera
dipti shah wrote:
> Thanks Richard. Those chapters are very useful. I got to know most of the
> concepts but didn't find the location of the pg_hba.conf file so that I can
> verify it. I have connected to my database using the "postgres" user. Could
> you tell me how to open the pg_hba.conf file?

Run this:
SHOW hba_file;

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-01 Thread Richard Huxton

On 01/02/10 10:24, dipti shah wrote:

Thanks Richard. Those chapters are very useful. I got to know most of the
concepts but didn't find the location of the pg_hba.conf file so that I can
verify it. I have connected to my database using the "postgres" user. Could
you tell me how to open the pg_hba.conf file?


It should be with your other configuration files: postgresql.conf, 
pg_ident.conf.


Where will depend on how you installed it. If you're on Windows, it's 
probably in the main PostgreSQL folder on drive C:


If a package manager on Linux/Unix look in /etc/postgresql/...

If you compiled from source, probably something like 
/usr/local/postgresql/data/


It's a text file and you'll need to restart PostgreSQL to pick up your 
new settings.


--
  Richard Huxton
  Archonet Ltd
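
For pg_hba.conf changes specifically, a reload rather than a full restart is
normally enough; one way to trigger it from a superuser session:

SELECT pg_reload_conf();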



Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-01 Thread dipti shah
Thanks Richard. Those chapters are very useful. I got to know most of the
concepts but didn't find the location of the pg_hba.conf file so that I can
verify it. I have connected to my database using the "postgres" user. Could
you tell me how to open the pg_hba.conf file?

Thanks.

On Mon, Feb 1, 2010 at 3:06 PM, Richard Huxton  wrote:

> On 01/02/10 07:35, dipti shah wrote:
>
>>
>> Moreover, anyone can connect to the databases as the postgres user without
>> giving a password.
>>
>> I am not aware of how the above setup was made, but I want to get rid of it.
>> Could anyone please help me with the questions below?
>>
>
> You'll want to read Chapter 19 of the manuals followed by Chapter 20.
> http://www.postgresql.org/docs/8.4/static/client-authentication.html
> http://www.postgresql.org/docs/8.4/static/user-manag.html
>
>
> > 1. When a user connects to the TechDB database (or any other) as the
> > "postgres" user, it should ask for a password.
>
> I would guess your pg_hba.conf file is set to allow "trust" access. You
> will probably want "md5" passwords. You can also GRANT access to databases
> using the permissions system.
>
>
> > 2. Remove all the above users (listed with \du) and create some new users
> > that will have only table-creation privileges.
>
>
> You can DROP USER (or DROP ROLE) to remove users, but you'll want to
> reallocate any tables they own. You can GRANT and REVOKE various
> permissions, but I don't think you can just create tables without being able
> to access them afterwards. You could write a function that does that for you
> though.
>
> HTH
>
> --
>  Richard Huxton
>  Archonet Ltd
>


Re: [GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-01 Thread Richard Huxton

On 01/02/10 07:35, dipti shah wrote:


Moreover, anyone can connect to the databases as the postgres user without
giving a password.

I am not aware of how the above setup was made, but I want to get rid of it.
Could anyone please help me with the questions below?


You'll want to read Chapter 19 of the manuals followed by Chapter 20.
http://www.postgresql.org/docs/8.4/static/client-authentication.html
http://www.postgresql.org/docs/8.4/static/user-manag.html

>    1. When a user connects to the TechDB database (or any other) as the
> "postgres" user, it should ask for a password.

I would guess your pg_hba.conf file is set to allow "trust" access. You 
will probably want "md5" passwords. You can also GRANT access to 
databases using the permissions system.


>    2. Remove all the above users (listed with \du) and create some new users
> that will have only table-creation privileges.


You can DROP USER (or DROP ROLE) to remove users, but you'll want to 
reallocate any tables they own. You can GRANT and REVOKE various 
permissions, but I don't think you can just create tables without being 
able to access them afterwards. You could write a function that does 
that for you though.


HTH

--
  Richard Huxton
  Archonet Ltd
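
To make both suggestions concrete, a minimal sketch using the role names from
the \du output in the original mail; the network, auth method and new owner
are assumptions for illustration:

# pg_hba.conf: require an md5 password instead of trusting everyone
# TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
host    TechDB    all   192.168.0.0/24   md5

-- tighten who may connect, then retire an unwanted role
REVOKE CONNECT ON DATABASE "TechDB" FROM PUBLIC;
GRANT CONNECT ON DATABASE "TechDB" TO sonal;
REASSIGN OWNED BY sundar TO postgres;   -- hand over owned objects first
DROP ROLE sundar;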



[GENERAL] Questions on PostGreSQL Authentication mechanism...

2010-02-01 Thread dipti shah
Hi, we have the latest PostgreSQL setup and it allows everyone to connect. When
I do \du, it gives the following output and it is the same for all users.

TechDB=# \du
                               List of roles
 Role name | Superuser | Create role | Create DB | Connections |  Member of
-----------+-----------+-------------+-----------+-------------+-------------
 sonal     | no        | no          | no        | no limit    | {from_ldap}
 sundar    | no        | no          | no        | no limit    | {from_ldap}
 ...

Moreover, anyone can connect to the databases as the postgres user without
giving a password.

I am not aware of how the above setup was made, but I want to get rid of it.
Could anyone please help me with the questions below?

  1. When a user connects to the TechDB database (or any other) as the
"postgres" user, it should ask for a password.
  2. Remove all the above users (listed with \du) and create some new users
that will have only table-creation privileges.

Thanks.


Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Jayadevan M
Hi,
Regarding Pentaho - please keep in mind that Pentaho needs a significant
amount of memory. We had a lot of issues with Pentaho crashing with Java
out-of-memory errors. If you are using a 64-bit machine, you may be able to
give it sufficient RAM and keep it happy. If all you have is one 4 GB
machine to run PostgreSQL and the ETL tool, I have my doubts. It depends
on the volume of data - how many GBs, rather than the number of records.
Pentaho added a PostgreSQL bulk loader as an experimental component
recently. You can try that out. Talend can generate Java or Perl
components and was faster than Pentaho in our case. Since the Talend community
edition did not provide a shared development environment, we opted for
Pentaho.
If there are not a lot of complex 'transformations', you should be able to
manage fine with shell scripts.
Jayadevan




From:   Dino Vliet 
To: pgsql-general@postgresql.org
Date:   01/25/2010 09:57 PM
Subject:[GENERAL] general questions postgresql performance config
Sent by: pgsql-general-ow...@postgresql.org




Dear postgresql people,
 
Introduction
Today I've been given the task to proceed with my plan to use postgresql 
and other open source techniques to demonstrate to the management of my 
department the usefulness and the "cost savings" potential that lies 
ahead. You can guess how excited I am right now. However, I should plan 
and execute at the highest level because I really want to show results. 
I'm employed in the financial services.
 
Context of the problem
Given 25 million input records, transform and load 10 million records into a
single-table DB2 database already containing 120 million records (the
whole history).
 
The current process is done on the MVS mainframe while the SAS system is 
used to process the records (ETL like operations). The records of the two 
last months (so 20 million records) are also stored in a single SAS 
dataset, where users can access them through SAS running on their Windows 
PC's. With SAS PC's they can also analyse the historical records in the 
DB2 table on the mainframe.
 
These users are not tech savvy so this access method is not very 
productive for them but because the data is highly valued, they use it 
without complaining too much.
 
Currently it takes 5 to 6 hours before everything is finished.
 
Proof of concept
I want to showcase that a solution process like:
 
input-->Talend/Pentaho Kettle for ETL-->postgresql-->pentaho report 
designer, is feasible while staying in the 5~6 hours processing and 
loading time.
 
Input: flat files, position based
ETL: Pentaho Kettle or Talend to process these files
DBMS: postgresql 8 (on debian, opensuse, or freebsd)
Reporting: Pentaho report wizard
 
Hardware
AMD AM2 singlecore CPU with 4GB RAM
Two mirrored SATA II disks (raid-0)
 
Now that I have introduced my situation, I hope this list can give me some 
tips, advice, examples, pitfalls regarding the requirements I have.
 
Questions
1) Although this is not exactly rocket science, the sheer volume of the 
data makes it a hard task. Do you think my "solution" is 
viable/achievable?
 
2) What kind of OS would you choose for the setup I have proposed? I 
prefer FreeBSD with UFS2 as a filesystem, but I guess Debian with ext3 
filesystems or openSUSE with ext3 or Ubuntu server with ext3 would all be 
very good candidates too??
 
3) Would you opt for the ETL tools mentioned by me (pentaho and talend) or 
just rely on the unix/linux apps like gawk, sed, perl? I'm familiar with 
gawk. The ETL tools require java, so I would have to configure postgresql 
to not use all the available RAM otherwise risking the java out of memory 
error message. With that said, it would be best if I first configure my 
server to do the ETL processing and then afterwards configure it for 
database usage. 
 
4) what values would you advice for the various postgresql.conf values 
which can impact performance like shared buffers, temp_buffers, sort_mem, 
etc etc? Or is this more of like an "art" where I change and restart the 
db server, analyze the queries and iterate until I find optimal values?
 
5) Other considerations?
 
Thanks in advance,
 
Dino
 












Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Greg Smith

Andy Colson wrote:
I recall seeing someplace that you can avoid WAL if you start a 
transaction, then truncate the table, then start a COPY.


Is that correct?  Still hold true?  Would it make a lot of difference?


That is correct, still true, and can make a moderate amount of 
difference if the WAL is really your bottleneck.  More of a tweak for 
loading small to medium size things as I see it.  Once the database and 
possibly its indexes get large enough, the loading time starts being 
dominated by handling all that work, with its random I/O, rather than 
being limited by the sequential writes to the WAL.  It's certainly a 
useful optimization to take advantage of when you can, given that it's 
as easy as:


BEGIN;
TRUNCATE TABLE x;
COPY x FROM ... ;
COMMIT;

--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com
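
One caveat worth checking before relying on this: the WAL bypass applies only
when WAL archiving is off (8.3 and later), which is easy to verify:

SHOW archive_mode;   -- must be off for COPY to skip WAL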




Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Andy Colson

On 1/25/2010 8:12 PM, Craig Ringer wrote:

On 26/01/2010 12:15 AM, Dino Vliet wrote:


5) Other considerations?



Even better is to use COPY to load large chunks of data. libpq provides
access to the COPY interface if you feel like some C coding. The JDBC
driver (dev version only so far) now provides access to the COPY API, so
you can also bulk-load via Java very efficiently now.

--
Craig Ringer



I recall seeing someplace that you can avoid WAL if you start a 
transaction, then truncate the table, then start a COPY.


Is that correct?  Still hold true?  Would it make a lot of difference?

(Also, small plug: Perl supports the COPY API too)

-Andy




Re: [GENERAL] general questions postgresql performance config

2010-01-25 Thread Scott Marlowe
On Mon, Jan 25, 2010 at 9:15 AM, Dino Vliet  wrote:
>
> Introduction
> Today I've been given the task to proceed with my plan to use postgresql and 
> other open source techniques to demonstrate to the management of my 
> department the usefulness and the "cost savings" potential that lies ahead. 
> You can guess how excited I am right now. However, I should plan and execute 
> at the highest level because I really want to show results. I'm employed in 
> the financial services.
>
> Context of the problem
> Given 25 million input records, transform and load 10 million records into a
> single-table DB2 database already containing 120 million records (the whole
> history).

Are these rows pretty wide, or are they narrow? It matters a lot.
120 million records of ~100 or so bytes each are gonna load a lot
quicker than 120 million records of 2,000 bytes, which will be faster
than rows of 20,000 bytes, and so on.

> The current process is done on the MVS mainframe while the SAS system is used 
> to process the records (ETL like operations). The records of the two last 
> months (so 20 million records) are also stored in a single SAS dataset, where 
> users can access them through SAS running on their Windows PC's. With SAS 
> PC's they can also analyse the historical records in the DB2 table on the 
> mainframe.

This sounds like you're gonna want to look into partitioning your
PostgreSQL database. Follow the manual's advice to use triggers, not
rules, to implement it.
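
A minimal sketch of that trigger-based approach, with assumed table and
column names:

CREATE TABLE loads (logdate date, payload text);

CREATE TABLE loads_2010_01 (
    CHECK (logdate >= DATE '2010-01-01' AND logdate < DATE '2010-02-01')
) INHERITS (loads);

CREATE OR REPLACE FUNCTION loads_insert_trigger() RETURNS trigger AS $$
BEGIN
    INSERT INTO loads_2010_01 VALUES (NEW.*);   -- add a branch per partition
    RETURN NULL;                                -- keep the row out of the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_loads BEFORE INSERT ON loads
    FOR EACH ROW EXECUTE PROCEDURE loads_insert_trigger();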

> These users are not tech savvy so this access method is not very productive 
> for them but because the data is highly valued, they use it without 
> complaining too much.
>
> Currently it takes 5 to 6 hours before everything is finished.

The import or the user reports?  I assume the import process.

> Proof of concept
> I want to showcase that a solution process like:
>
> input-->Talend/Pentaho Kettle for ETL-->postgresql-->pentaho report designer, 
> is feasible while staying in the 5~6 hours processing and loading time.

Keep in mind that if a simple desktop PC can run this in 24 hours or
something like that, you can expect a server class machine with a
decent RAID array to run it in some fraction of that.

> Input: flat files, position based
> ETL: Pentaho Kettle or Talend to process these files
> DBMS: postgresql 8 (on debian, opensuse, or freebsd)
> Reporting: Pentaho report wizard

Make sure to step up to at LEAST PostgreSQL 8.3.latest.  8.4 doesn't
have tons of performance improvements, but it does have tons of
functional improvements that may make it worth your while to go to it
as well.

> Hardware
>
> AMD AM2 singlecore CPU with 4GB RAM
> Two mirrored SATA II disks (raid-0)

So, definitely a proof of concept on a workstation type machine.  Be
careful, if the workstation runs the import or reports in some
fractional percentage of time that the big machines do, it may become
a server on the spot. (it's happened to me before.)  So consider
making that RAID-1 up there in case it does.

> Questions
> 1) Although this is not exactly rocket science, the sheer volume of the data 
> makes it a hard task. Do you think my "solution" is viable/achievable?

Yes.  I've done similar on small workstation machines before and
gotten acceptable performance for reports that can run overnight.

> 2) What kind of OS would you choose for the setup I have proposed? I prefer 
> FreeBSD with UFS2 as a filesystem, but I guess Debian with ext3 filesystems 
> or openSUSE with ext3 or Ubuntu server with ext3 would all be very good 
> candidates too??

You should use the flavor of linux you're most familiar with the
pitfalls of.  They've all got warts, no need to learn new ones because
some other flavor is more popular.  OTOH, if you've got a row of
cabinets running RHEL and a few RHEL sysadmins around, you can appeal
to their vanity to get them to help tune the machine you're running.

> 3) Would you opt for the ETL tools mentioned by me (pentaho and talend) or 
> just rely on the unix/linux apps like gawk, sed, perl? I'm familiar with 
> gawk. The ETL tools require java, so I would have to configure postgresql to 
> not use all the available RAM otherwise risking the java out of memory error 
> message. With that said, it would be best if I first configure my server to 
> do the ETL processing and then afterwards configure it for database usage.

I'd use unix tools myself.  gawk, sed, p(erl/ython/hp) are all great
for tossing together something that works quickly.  If you need more
flexibility then look at ETL tools later, unless you're already
familiar enough with one to spend the time getting it setup and
running.

> 4) what values would you advice for the various postgresql.conf values which 
> can impact performance like shared buffers, temp_buffers, sort_mem, etc etc? 
> Or is this more of like an "art" where I change and restart the db server, 
> analyze the queries and iterate until I find optimal values?

Go here: http://www.westnet.com/~gsmith/content/post

Re: [GENERAL] general questions postgresql performance config

2010-01-25 Thread Craig Ringer

On 26/01/2010 12:15 AM, Dino Vliet wrote:


5) Other considerations?



To get optimal performance for bulk loading you'll want to do concurrent 
data loading over several connections - up to as many as you have disk 
spindles. Each connection will individually be slower, but the overall 
throughput will be much greater.


Just how many connections you'll want to use depends on your I/O 
subsystem and to a lesser extent your CPU capacity.


Inserting data via multiple connections isn't something every data 
loading tool supports, so make sure to consider this carefully.



Another consideration is how you insert the data. It's vital to insert
your data in large transaction-batched chunks (or all in one
transaction); even with synchronous_commit = off you'll still see
rather poor performance if you do each INSERT in its own transaction.
Doing your inserts as prepared statements where each INSERT is multi-row
valued will help too.


Even better is to use COPY to load large chunks of data. libpq provides 
access to the COPY interface if you feel like some C coding. The JDBC 
driver (dev version only so far) now provides access to the COPY API, so 
you can also bulk-load via Java very efficiently now.


If your data needs little/no transformation and is externally validated 
you can look into pg_bulkload as an alternative to all this.


--
Craig Ringer
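
A concrete illustration of the batching advice; the table name and file path
are made up:

BEGIN;
-- one multi-row INSERT replaces three single-statement transactions
INSERT INTO target (id, val) VALUES (1, 'a'), (2, 'b'), (3, 'c');
-- for large files, server-side COPY is faster still (or \copy from psql)
COPY target (id, val) FROM '/tmp/batch.dat';
COMMIT;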



[GENERAL] general questions postgresql performance config

2010-01-25 Thread Dino Vliet



Dear postgresql people,
 
Introduction
Today I've been given the task to proceed with my plan to use postgresql and 
other open source techniques to demonstrate to the management of my department 
the usefulness and the "cost savings" potential that lies ahead. You can guess 
how excited I am right now. However, I should plan and execute at the highest 
level because I really want to show results. I'm employed in the financial 
services.
 
Context of the problem
Given 25 million input records, transform and load 10 million records into a
single-table DB2 database already containing 120 million records (the whole history).
 
The current process is done on the MVS mainframe while the SAS system is used 
to process the records (ETL like operations). The records of the two last 
months (so 20 million records) are also stored in a single SAS dataset, where 
users can access them through SAS running on their Windows PC's. With SAS PC's 
they can also analyse the historical records in the DB2 table on the mainframe.
 
These users are not tech savvy so this access method is not very productive for 
them but because the data is highly valued, they use it without complaining too 
much.
 
Currently it takes 5 to 6 hours before everything is finished.
 
Proof of concept
I want to showcase that a solution process like:
 
input-->Talend/Pentaho Kettle for ETL-->postgresql-->pentaho report designer, 
is feasible while staying in the 5~6 hours processing and loading time.
 
Input: flat files, position based
ETL: Pentaho Kettle or Talend to process these files
DBMS: postgresql 8 (on debian, opensuse, or freebsd)
Reporting: Pentaho report wizard
 
Hardware

AMD AM2 singlecore CPU with 4GB RAM
Two mirrored SATA II disks (raid-0)
 
Now that I have introduced my situation, I hope this list can give me some 
tips, advice, examples, pitfalls regarding the requirements I have.
 
Questions
1) Although this is not exactly rocket science, the sheer volume of the data 
makes it a hard task. Do you think my "solution" is viable/achievable?
 
2) What kind of OS would you choose for the setup I have proposed? I prefer 
FreeBSD with UFS2 as a filesystem, but I guess Debian with ext3 filesystems or 
openSUSE with ext3 or Ubuntu server with ext3 would all be very good candidates 
too??
 
3) Would you opt for the ETL tools mentioned by me (pentaho and talend) or just 
rely on the unix/linux apps like gawk, sed, perl? I'm familiar with gawk. The 
ETL tools require java, so I would have to configure postgresql to not use all 
the available RAM otherwise risking the java out of memory error message. With 
that said, it would be best if I first configure my server to do the ETL 
processing and then afterwards configure it for database usage. 
 
4) what values would you advice for the various postgresql.conf values which 
can impact performance like shared buffers, temp_buffers, sort_mem, etc etc? Or 
is this more of like an "art" where I change and restart the db server, analyze 
the queries and iterate until I find optimal values?
 
5) Other considerations?
 
Thanks in advance,
 
Dino
 


  

Re: [GENERAL] Questions about connection clean-up and "invalid page header"

2010-01-25 Thread Greg Stark
On Mon, Jan 25, 2010 at 1:16 PM, Herouth Maoz  wrote:
> Well, I assume by the fact that eventually I get an "Unexpected end of file"
> message for those queries, that something does go in and check them. Do you
> have any suggestion as to how to cause the postgresql server to do so
> earlier?

No, Postgres pretty intentionally doesn't check because checking would
be quite slow.

If this is a plpgsql function looping you can put a RAISE NOTICE in
the loop periodically. I suppose you could write such a function and
add it to your query but whether it does what you want will depend on
the query plan.

-- 
greg
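
A sketch of the kind of loop Greg means; the table name and reporting
interval are placeholders:

CREATE OR REPLACE FUNCTION chatty_scan() RETURNS bigint AS $$
DECLARE
    n bigint := 0;
    r record;
BEGIN
    FOR r IN SELECT * FROM big_table LOOP
        n := n + 1;
        IF n % 100000 = 0 THEN
            -- each notice is written to the client socket, so a vanished
            -- client is detected here rather than at the end of the query
            RAISE NOTICE 'processed % rows', n;
        END IF;
    END LOOP;
    RETURN n;
END;
$$ LANGUAGE plpgsql;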



Re: [GENERAL] Questions about connection clean-up and "invalid page header"

2010-01-25 Thread Herouth Maoz

Greg Stark wrote:


On Mon, Jan 25, 2010 at 11:37 AM, Herouth Maoz  wrote:
  

The tcp_keepalive setting would only come into play if the remote
machine crashed or was disconnected from the network.


That's the situation I'm having, so it's OK. Crystal, being a Windows
application, obviously runs on a different server than the database itself,
so the connection between them is TCP/IP, not Unix domain sockets.



The unix socket api is used for both unix domain sockets and internet
domain sockets. The point is that in the api there's no way to find
out about a connection the other side has closed except for when you
write or read from it or when you explicitly check.


  

And
furthermore, that was exactly the problem as I described it - the fact that
the third party software, instead of somehow instructing Crystal to send a
cancel request to PostgreSQL, instead just kills the client process on the
Windows side.



Killing the client process doesn't mean the machine has crashed or
been disconnected from the network. I'm assuming Crystal isn't
crashing the machine just to stop the report... And even if it did and
tcp_keepalives kicked in the server *still* wouldn't notice until it
checked or tried to read or write to that socket.

  
Well, I assume by the fact that eventually I get an "Unexpected end of 
file" message for those queries, that something does go in and check 
them. Do you have any suggestion as to how to cause the postgresql 
server to do so earlier?


Herouth


Re: [GENERAL] Questions about connection clean-up and "invalid page header"

2010-01-25 Thread Greg Stark
On Mon, Jan 25, 2010 at 11:37 AM, Herouth Maoz  wrote:
> The tcp_keepalive setting would only come into play if the remote
> machine crashed or was disconnected from the network.
>
>
> That's the situation I'm having, so it's OK. Crystal, being a Windows
> application, obviously runs on a different server than the database itself,
> so the connection between them is TCP/IP, not Unix domain sockets.

The unix socket api is used for both unix domain sockets and internet
domain sockets. The point is that in the api there's no way to find
out about a connection the other side has closed except for when you
write or read from it or when you explicitly check.


> And
> furthermore, that was exactly the problem as I described it - the fact that
> the third party software, instead of somehow instructing Crystal to send a
> cancel request to PostgreSQL, instead just kills the client process on the
> Windows side.

Killing the client process doesn't mean the machine has crashed or
been disconnected from the network. I'm assuming Crystal isn't
crashing the machine just to stop the report... And even if it did and
tcp_keepalives kicked in the server *still* wouldn't notice until it
checked or tried to read or write to that socket.

-- 
greg



Re: [GENERAL] Questions about connection clean-up and "invalid page header"

2010-01-25 Thread Herouth Maoz

Greg Stark wrote:


On Mon, Jan 25, 2010 at 8:15 AM, Scott Marlowe  wrote:
  

Is there a parameter to set in the configuration or some other means to
shorten the time before an abandoned backend's query is cancelled?
  

You can shorten the tcp_keepalive settings so that dead connections
get detected faster.




This won't help. The TCP connection is already being closed (or I
think only half-closed). The problem is that in the Unix socket API
you don't find out about that unless you check or try to read or write
to it.

The tcp_keepalive setting would only come into play if the remote
machine crashed or was disconnected from the network.
  
That's the situation I'm having, so it's OK. Crystal, being a Windows 
application, obviously runs on a different server than the database 
itself, so the connection between them is TCP/IP, not Unix domain 
sockets. And furthermore, that was exactly the problem as I described it 
- the fact that the third party software, instead of somehow instructing 
Crystal to send a cancel request to PostgreSQL, instead just kills the 
client process on the Windows side.


Herouth


Re: [GENERAL] Questions about connection clean-up and "invalid page header"

2010-01-25 Thread Greg Stark
On Mon, Jan 25, 2010 at 8:15 AM, Scott Marlowe  wrote:
>> Is there a parameter to set in the configuration or some other means to
>> shorten the time before an abandoned backend's query is cancelled?
>
> You can shorten the tcp_keepalive settings so that dead connections
> get detected faster.
>

This won't help. The TCP connection is already being closed (or I
think only half-closed). The problem is that in the Unix socket API
you don't find out about that unless you check or try to read or write
to it.

The tcp_keepalive setting would only come into play if the remote
machine crashed or was disconnected from the network.



-- 
greg



Re: [GENERAL] Questions about connection clean-up and "invalid page header"

2010-01-25 Thread Herouth Maoz

Scott Marlowe wrote:


You can shorten the tcp_keepalive settings so that dead connections
get detected faster.
  

Thanks, I'll ask my sysadmin to do that.


Might be, but not very likely.  I and many others run pgsql in
production environments where it handles thousands of updates /
inserts per minute with no corruption.  We run on server class
hardware with ECC memory and large RAID arrays with no corruption.
  
Someone pointed out to me, though, that comparing data warehouse systems 
to production systems is like Apples and Oranges - we also have a 
production system that, as you say, makes millions of inserts and 
updates per hour. It works very well with PostgreSQL - a lot better than 
with Sybase with which we worked previously. But the reports system on 
which I work makes bulk inserts using calculations based on complicated 
joins and each transaction is long and memory-consuming, as opposed to 
the production system, where each transaction takes a few milliseconds 
and is cleared immediately.


So far this only happened to me in the development server, and if it 
really is a matter of hardware, I'm not worried. What I am worried is if 
there really is some sort of bug that may carry to our production 
reports system.

Have you run something as simple as memtest86+ on your machine to see
if it's got bad memory?
  

I'll tell my sysadmin to do that. Thank you.
  

We are currently using PostgreSQL v. 8.3.1 on the server side.



You should really update to the latest 8.3.x version (around 8.3.8 or
so).  It's simple and easy, and it's possible you've hit a bug in an
older version of 8.3.
  

OK, I'll also try to get that done.

Thanks for your help,
Herouth



Re: [GENERAL] Questions about connection clean-up and "invalid page header"

2010-01-25 Thread Scott Marlowe
On Sun, Jan 24, 2010 at 3:17 AM, Herouth Maoz  wrote:
> Hi Everybody.
>
> I have two questions.
>
> 1. We have a system that is accessed by Crystal Reports which is in turn
> controlled by another (3rd party) system. Now, when a report takes too long or
> the user cancels it, it doesn't send a cancel request to Postgres. It just
> kills the Crystal process that works on it.
>
> As a result, the query is left alive on the Postgres backend. Eventually I get
> the message "Unexpected End of file" and the query is cancelled. But this
> doesn't happen soon enough for me - these are usually very heavy queries, and
> I'd like them to be cleaned up as soon as possible if the client connection
> has ended.

The real solution is to fix the application.  But I understand
sometimes you can't do that.

> Is there a parameter to set in the configuration or some other means to
> shorten the time before an abandoned backend's query is cancelled?

You can shorten the tcp_keepalive settings so that dead connections
get detected faster.
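
The relevant knobs are the tcp_keepalives_* parameters in postgresql.conf;
the values below are only examples (0 means "use the OS default"):

tcp_keepalives_idle = 60        # seconds of inactivity before the first probe
tcp_keepalives_interval = 10    # seconds between unanswered probes
tcp_keepalives_count = 5        # lost probes before the connection is dropped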

> 2. I get the following message in my development database:
>
> vacuumdb: vacuuming of database "reports" failed: ERROR:  invalid page header
> in block 6200 of relation "rb"
>
> I had this already a couple of months ago. Looking around the web, I saw this
> error is supposed to indicate a hardware error. I informed my sysadmin, but
> since this is just the dev system and the data was not important, I did a
> TRUNCATE TABLE on the "rb" relation, and the errors stopped...
>
> But now the error is back, and I'm a bit suspicious. If this is a hardware
> issue, it's rather suspicious that it returned in the exact same relation
> after I did a "truncate table". I have many other relations in the system,
> ones that fill up a lot faster. So I suspect this might be a PostgreSQL issue
> after all. What can I do about this?

Might be, but not very likely.  I and many others run pgsql in
production environments where it handles thousands of updates /
inserts per minute with no corruption.  We run on server class
hardware with ECC memory and large RAID arrays with no corruption.

Have you run something as simple as memtest86+ on your machine to see
if it's got bad memory?

> We are currently using PostgreSQL v. 8.3.1 on the server side.

You should really update to the latest 8.3.x version (around 8.3.8 or
so).  It's simple and easy, and it's possible you've hit a bug in an
older version of 8.3.



[GENERAL] Questions about connection clean-up and "invalid page header"

2010-01-24 Thread Herouth Maoz
Hi Everybody.

I have two questions.

1. We have a system that is accessed by Crystal Reports which is in turn
controlled by another (3rd party) system. Now, when a report takes too long or
the user cancels it, it doesn't send a cancel request to Postgres. It just 
kills the Crystal process that works on it. 

As a result, the query is left alive on the Postgres backend. Eventually I get 
the message "Unexpected End of file" and the query is cancelled. But this 
doesn't happen soon enough for me - these are usually very heavy queries, and 
I'd like them to be cleaned up as soon as possible if the client connection 
has ended.

Is there a parameter to set in the configuration or some other means to 
shorten the time before an abandoned backend's query is cancelled?

2. I get the following message in my development database:

vacuumdb: vacuuming of database "reports" failed: ERROR:  invalid page header 
in block 6200 of relation "rb"

I had this already a couple of months ago. Looking around the web, I saw this 
error is supposed to indicate a hardware error. I informed my sysadmin, but 
since this is just the dev system and the data was not important, I did a 
TRUNCATE TABLE on the "rb" relation, and the errors stopped...

But now the error is back, and I'm a bit suspicious. If this is a hardware 
issue, it's rather suspicious that it returned in the exact same relation 
after I did a "truncate table". I have many other relations in the system, 
ones that fill up a lot faster. So I suspect this might be a PostgreSQL issue 
after all. What can I do about this?


We are currently using PostgreSQL v. 8.3.1 on the server side.

TIA,
Herouth



Re: [GENERAL] Questions On Tablespace

2009-09-27 Thread Tom Lane
Carlo Camerino  writes:
> I have questions regarding tablespaces. What happens when the disk
> that my tablespace is on fills up?

You start getting errors.

> How do I expand my tablespace? In Oracle there is a concept of
> datafiles; in PostgreSQL I specify a directory instead of a single
> file...

If you expect to need to expand the filesystem, you should be using
LVM or local equivalent so that you can add or remove disks from
the filesystem as needed.

Oracle's design dates from a time when filesystems tended to suck and so
Oracle felt it should reimplement all the filesystem-level functionality
for itself.  Postgres is not interested in reinventing the wheel, so we
don't do that.  You won't find any "raw disk access" functions in
Postgres either.

regards, tom lane
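
For the per-table half of the original question, the SQL side is simple; the
tablespace name, path and table are assumptions:

CREATE TABLESPACE bigdisk LOCATION '/mnt/disk2/pg_tblspc';
ALTER TABLE history SET TABLESPACE bigdisk;   -- physically moves the table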



[GENERAL] Questions On Tablespace

2009-09-27 Thread Carlo Camerino
Hi Everyone,

I have questions regarding tablespaces. What happens when the disk
that my tablespace is on fills up?
How do I expand my tablespace? In Oracle there is a concept of
datafiles; in PostgreSQL I specify a directory instead of a single
file...

For example, I have two tables and they both use a single tablespace.
After some time the tablespace fills up. How do I point one table to
use a new tablespace?
Or is there a way in which I can get my tablespace to increase size by
using another disk for this purpose?
What's the best approach to this situation?

Thanks A Lot
Carlo



Re: [GENERAL] Questions about encoding between two databases

2009-08-21 Thread Archibald Zimonyi


Hello,

iconv seemed to work fine. I converted the dump file from LATIN1 to UTF8,
kept the changed client_encoding setting in the dump file, and loaded
it all into the database.


No complaints. I still need to verify the result, but at least I got no
restore errors based on character encoding.


Thanks for the tips.

Archie
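
For reference, the per-database ENCODING option mentioned in the quoted
advice below looks like this; the database name is just an example:

CREATE DATABASE reports_utf8 ENCODING 'UTF8' TEMPLATE template0;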


Archibald Zimonyi wrote:


Hello,


Archibald Zimonyi  writes:

I went into the generated dump file and (more wish than anything else)
tried to simply change the encoding from LATIN1 to UTF8 and then load the
file, it did not complain about incorrect encoding setting for the load,
however it complained that the characters did not match true UTF8
characters (which was almost what I guessed would happen).


Indeed.  Do *not* change the client_encoding setting in the dump file.
You can edit the ENCODING options in the CREATE DATABASE commands
though.  (Didn't we explain this to you already?)




Well, I did send this query with an incorrect email address so it
got stuck and was never posted properly, so I have not seen any such
reply. Can you please explain again?


Search the archives: http://archives.postgresql.org/

--
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support





