2017-04-26 15:06 GMT+02:00 Glen Huang :
> @Pavel
>
> Thanks for bringing PLV8 to my attention. Wasn't aware of it. Sounds like
> the right tool for the job. I'll try it out. Do you think it makes sense
> to use PLV8 to also generate JSON? Can it beat SQL?
>
Hard to say -
Good to know functions are executed inside a transaction; I think that should
be enough
On 4/25/2017 9:21 PM, Glen Huang wrote:
For updating the db using JSON requests from clients, I'm not so
sure. Should I directly pass the request JSON to PostgreSQL and ask it
to parse the JSON and execute a transaction all by itself, or should
I parse it in the server and generate the
2017-04-26 6:21 GMT+02:00 Glen Huang :
> Hi all,
>
> I have a RESTful API server that sends and receives JSON strings. I'm
> wondering what might be the best way to leverage PostgreSQL's JSON
> capability.
>
> For sending JSON responses to clients, I believe the best way is to
Hi all,
I have a RESTful API server that sends and receives JSON strings. I'm
wondering what might be the best way to leverage PostgreSQL's JSON
capability.
For sending JSON responses to clients, I believe the best way is to ask
PostgreSQL to generate the JSON string and then pass that directly
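A minimal sketch of that approach, using the JSON aggregate functions available since 9.3 (the table and column names here are invented for illustration):

```sql
-- Aggregate a result set into a single JSON array, entirely in the database.
SELECT json_agg(row_to_json(b))
FROM (SELECT id, title, author FROM books WHERE author_id = 42) AS b;
```

The server then returns one value that can be written to the HTTP response as-is.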
> Please reply to list also.
apologies, my bad.
> It would seem that the index would not be rebuilt, assuming all
> conditions are the same.
Thanks for finding this. This is enough info for me to spend a day
experimenting. I did not want to waste a day if we knew upfront that it
won't work. But
On 01/03/2017 11:35 AM, Ravi Kapoor wrote:
Please reply to list also.
Ccing list.
> Yes I am aware of django EOL. However, our company is still using it, we
> have a migration plan later this year, however for now, I got to work
> with what we have.
Still, you are missing 14 patch releases to
On 01/03/2017 11:07 AM, Ravi Kapoor wrote:
I have a bit of a strange question. I am trying to figure out how to avoid
table locking while creating an index through Django (1.5.1) in Postgres
9.4.7
Django 1.5.1 does not support concurrent indexing. So my thought is to
first create a concurrent index
On 01/03/2017 11:07 AM, Ravi Kapoor wrote:
I have a bit of a strange question. I am trying to figure out how to avoid
table locking while creating an index through Django (1.5.1) in Postgres
9.4.7
First, Django 1.5.x has been past end-of-life for 2.25 years.
Second, before it went EOL it was up to
I have a bit of a strange question. I am trying to figure out how to avoid table
locking while creating an index through Django (1.5.1) in Postgres 9.4.7
Django 1.5.1 does not support concurrent indexing. So my thought is to
first create a concurrent index using SQL prompt.
Then try to update django
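The manual step being described might look like this (the index and table names are placeholders). CREATE INDEX CONCURRENTLY avoids the lock that blocks writes, but it cannot run inside a transaction block:

```sql
-- Run outside any transaction block; writes to the table continue meanwhile.
CREATE INDEX CONCURRENTLY myapp_mytable_col_idx ON myapp_mytable (col);
```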
To: Joanna Xu <joanna...@amdocs.com>; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Questions on Post Setup MASTER and STANDBY replication -
Postgres9.1
On 11/2/16 2:49 PM, Joanna Xu wrote:
> The replication is verified and works. My questions are what's the
> reason causing "cp:
On 11/2/16 2:49 PM, Joanna Xu wrote:
The replication is verified and works. My questions are what’s the
reason causing “cp: cannot stat
`/opt/postgres/9.1/archive/00010003': No such file or
directory” on STANDBY and how to fix it?
What instructions/tools did you use to setup
Hi All,
After setting up two nodes with MASTER and STANDBY replication, I see " cp:
cannot stat `/opt/postgres/9.1/archive/00010003': No such file
or directory" in the log on STANDBY and the startup process recovering
"00010004" which does not exist in the
Hi Oleg,
On Tue, Jun 28, 2016 at 1:05 AM, Oleg Bartunov wrote:
> On Tue, Jun 28, 2016 at 12:44 AM, Riccardo Vianello
> wrote:
> > Could you please also help me understand the difference (if any) between
> > using the GIST_LEAF macro or the
On Tue, Jun 28, 2016 at 12:44 AM, Riccardo Vianello
wrote:
> Hi all,
>
> I'm trying to contribute some improvements to the implementation of a gist
> index that is part of an open source project and it would be really nice if
> anyone could help me answer some
Hi all,
I'm trying to contribute some improvements to the implementation of a gist
index that is part of an open source project and it would be really nice if
anyone could help me answer some questions.
I would like to use different data structures to represent the internal and
leaf entries. I
2014-08-07 7:24 GMT+02:00 David Johnston david.g.johns...@gmail.com:
- What are the differences among PL/SQL, PL/PGSQL and pgScript.
The first two are languages you write functions in. pgScript is simply an
informal way to group a series of statements together and have them execute
I'm very new to Postgres, but have plenty of experience developing stored
procs in Oracle.
I'm going to be creating Postgres stored procedures (functions actually,
since I discovered that in postgres, everything is a function) to do a
variety of batch-type processing. These functions may or
Bill Epstein wrote
I've tried a variety of ways based on the on-line docs I've seen, but I
always get a syntax error on EXEC when I use only the line EXEC statement
You likely need to use EXECUTE in PostgreSQL
INFO: INSERT INTO UTILITY.BPC_AUDIT (COMPONENT, ACTIVITY, AUDIT_LEVEL,
On Aug 6, 2014, at 12:28 PM, Bill Epstein epste...@us.ibm.com wrote:
I'm very new to Postgres, but have plenty of experience developing stored
procs in Oracle.
I found this helpful:
Le 6 août 2014 18:47, David G Johnston david.g.johns...@gmail.com a
écrit :
Bill Epstein wrote
I've tried a variety of ways based on the on-line docs I've seen, but I
always get a syntax error on EXEC when I use only the line EXEC
statement
You likely need to use EXECUTE in PostgreSQL
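For reference, dynamic SQL in PL/pgSQL uses EXECUTE; a hypothetical sketch of the audit insert mentioned above, with format() doing the quoting (the function name is invented):

```sql
CREATE OR REPLACE FUNCTION log_audit(msg text) RETURNS void AS $$
BEGIN
    -- EXECUTE runs a dynamically built statement; %L quotes the literal.
    EXECUTE format('INSERT INTO utility.bpc_audit (activity) VALUES (%L)', msg);
END;
$$ LANGUAGE plpgsql;
```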
- What are the differences among PL/SQL, PL/PGSQL and pgScript.
The first two are languages you write functions in. pgScript is simply an
informal way to group a series of statements together and have them execute
within a transaction.
AFAICT, this isn't true. Pgscript is a
So here are my questions:
1) Is there anyway to control this behavior of daterange(), or is it
just
best to (for example) add 1 to the upper bound argument if I want an
inclusive upper bound?
See link for question #3; namely use the three-arg version of daterange
(type,type,text)
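The third argument is a bounds specifier, so an inclusive upper bound can be requested directly; note that daterange canonicalizes discrete ranges back to the [) form:

```sql
-- '[]' asks for inclusive bounds; the default two-argument form is '[)'.
SELECT daterange('2014-01-01'::date, '2014-12-31'::date, '[]');
-- Stored canonically as [2014-01-01,2015-01-01), which covers the same days.
```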
Hi. I've got lots of tables with start and end dates in them, and I'm
trying to learn how to work with them as date ranges (which seem
fantastic!). I've noticed that the daterange() function seems to create
ranges with an inclusive lower bound, and an exclusive upper bound. For
example:
SELECT
On 06/25/2014 05:53 PM, Ken Tanzer wrote:
Hi. I've got lots of tables with start and end dates in them, and I'm
trying to learn how to work with them as date ranges (which seem
fantastic!). I've noticed that the daterange() function seems to create
ranges with an inclusive lower bound, and an
On Wed, Jun 25, 2014 at 6:12 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:
On 06/25/2014 05:53 PM, Ken Tanzer wrote:
Hi. I've got lots of tables with start and end dates in them, and I'm
trying to learn how to work with them as date ranges (which seem
fantastic!). I've noticed that
Ken Tanzer wrote
Hi. I've got lots of tables with start and end dates in them, and I'm
trying to learn how to work with them as date ranges (which seem
fantastic!). I've noticed that the daterange() function seems to create
ranges with an inclusive lower bound, and an exclusive upper bound.
Hi, I have a question: is Postgres capable of horizontal growing? I mean, in
the case I have a server that is reaching its full HD capacity, is there a way
to add another server to use as an extension of the first one, like a cluster
configuration? Do you know a configuration that is
On 5/15/2014 1:52 PM, Diego Ramón Cando Díaz wrote:
Hi, I have a question: is Postgres capable of horizontal growing? I
mean, in the case I have a server that is reaching its full HD
capacity, is there a way to add another server to use as an extension
of the first one, like a cluster
Hi All;
I recently ran into the following, any thoughts?
Thanks in advance...
1) \d and schemas
- I set up 2 schemas (sch_a and sch_b)
- I added both schemas to my search_path
- I created 2 tables: sch_a.test_tab and sch_b.test_tab
If I do a \d with no parameters I only see the first
CS DBA cs_...@consistentstate.com writes:
1) \d and schemas
- I set up 2 schemas (sch_a and sch_b)
- I added both schemas to my search_path
- I created 2 tables: sch_a.test_tab and sch_b.test_tab
If I do a \d with no parameters I only see the first test_tab table
based on the order of
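To see both tables regardless of search_path order, the psql pattern can be schema-qualified:

```sql
\d sch_b.test_tab   -- a specific schema
\d *.test_tab       -- the table in every schema it exists in
```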
On Thu, Jan 9, 2014 at 5:04 AM, Tom Lane t...@sss.pgh.pa.us wrote:
CS DBA cs_...@consistentstate.com writes:
1) \d and schemas
- I set up 2 schemas (sch_a and sch_b)
- I added both schemas to my search_path
- I created 2 tables: sch_a.test_tab and sch_b.test_tab
If I do a \d with
Dear Sir or Madam,
I have downloaded and installed the latest PostgreSQL version V9.2 to my
Windows 7 OS PC.
I want to have it running on my PC, as local host.
Now I am facing some problems.
1.) I do not know how to fill in the properties tab for the server: name,
host (what shall be the host,
On Tuesday, February 19, 2013 4:37 PM Tomas Pasterak wrote:
I have downloaded and installed the latest PostgreSQL version V9.2 to my
Windows 7 OS PC.
I want to have it running on my PC, as local host.
Now I am facing some problems.
1.) I do not know how to fill in the properties tab for
On 02/19/2013 03:07 AM, Tomas Pasterak wrote:
Dear Sir or Madam,
I have downloaded and installed the latest PostgreSQL version V9.2 to my
Windows 7 OS PC.
I want to have it running on my PC, as local host.
Now I am facing some problems.
1.) I do not know how to fill in the properties tab for
Hi,
In PostgreSQL 9.0.x we must define a constraint as DEFERRABLE in CREATE
TABLE; we cannot define DEFERRABLE in CREATE TABLE AS SELECT. Has this
restriction changed in 9.2?
Also, in 9.2 can deferrable uniqueness be mixed with foreign keys?
Thanks
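I believe CREATE TABLE AS still has no constraint clause, so the usual workaround is to add the deferrable constraint afterwards (table and constraint names are examples):

```sql
CREATE TABLE t AS SELECT * FROM src;
ALTER TABLE t
    ADD CONSTRAINT t_id_key UNIQUE (id) DEFERRABLE INITIALLY DEFERRED;
```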
--
Sent via pgsql-general mailing
Hi guys, I have one problem. I need to give some non-superusers (kind of
DBAs) the privilege to cancel other users' queries and DML. After I granted
execute on the pg_cancel_backend and pg_terminate_backend functions to them,
they still get the error message as follows when they call
When I need to give other users access to a function that someone must
be superuser to execute I write a security definer function.
See: http://www.postgresql.org/docs/9.1/static/sql-createfunction.html
Also:
http://www.ibm.com/developerworks/opensource/library/os-postgresecurity/index.html
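A sketch of such a wrapper (the function and role names here are made up); because it runs with its owner's rights, the owner should be a superuser and EXECUTE should be granted narrowly:

```sql
CREATE OR REPLACE FUNCTION cancel_backend(pid integer) RETURNS boolean AS $$
    SELECT pg_cancel_backend($1);  -- runs with the function owner's privileges
$$ LANGUAGE sql SECURITY DEFINER;

REVOKE ALL ON FUNCTION cancel_backend(integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION cancel_backend(integer) TO junior_dba;
```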
Let's say I have a subquery which produces an array, a position and a new_value.
Is there a less clumsy way to set array[position] to the new_value (not an
update, just changing an element inside an array) than:
SELECT
_array[1:pos-1]
||newval
||_array[_pos+1:array_length(_array, 1)]
On Dec 4, 2011, at 22:43, Maxim Boguk maxim.bo...@gmail.com wrote:
Let's say I have a subquery which produces an array, a position and a new_value.
Is there a less clumsy way to set array[position] to the new_value (not an update,
just changing an element inside an array) than:
SELECT
David Johnston pol...@yahoo.com writes:
Is there a less clumsy way to set array[position] to the new_value (not an update,
just changing an element inside an array) than:
SELECT
_array[1:pos-1]
||newval
||_array[_pos+1:array_length(_array, 1)]
I do not know if there is a cleaner way but
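In PL/pgSQL, as opposed to a plain SELECT, the element can simply be assigned by subscript; a minimal sketch:

```sql
DO $$
DECLARE
    _array int[] := ARRAY[1, 2, 3, 4];
BEGIN
    _array[2] := 99;           -- direct in-place element assignment
    RAISE NOTICE '%', _array;  -- {1,99,3,4}
END;
$$;
```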
Hey,
PostgreSQL 9.0
1) While comparing a simple GROUP/COUNT query I noticed that TEXT and JSON
formats identify the Top-Level Plan Node differently (GroupAggregate vs.
Aggregate). More curiosity than anything but I would have expected them to
match.
2) For the same query I was hoping to be
On 19/04/11 23:56, Phoenix Kiula wrote:
While I fix some bigger DB woes, I have learned a lesson. Huge indexes
and tables are a pain.
Which makes me doubly keen on looking at partitioning.
Most examples I see online are partitioned by date. As in months, or
quarter, and so on. This
While I fix some bigger DB woes, I have learned a lesson. Huge indexes
and tables are a pain.
Which makes me doubly keen on looking at partitioning.
Most examples I see online are partitioned by date. As in months, or
quarter, and so on. This doesn't work for me as I don't have too much
logic
On 04/19/2011 08:56 AM, Phoenix Kiula wrote:
While I fix some bigger DB woes, I have learned a lesson. Huge indexes
and tables are a pain.
Which makes me doubly keen on looking at partitioning.
Before jumping into partitioning it would be useful to know specifically
what pain you are having
Hi, everyone. I've got a client who is planning to upgrade from
PostgreSQL 8.3 to 9.0 in the coming weeks. They use a lot of tables
with bytea columns. They're worried about the switch from octal to hex
formats for bytea data.
Based on everything I know and have read, the change is only
Reuven M. Lerner reu...@lerner.co.il wrote:
So I've told them that I don't think that anything is necessary for
either input or output, except (perhaps) to set bytea_output in its
backward-compatibility mode. But I wanted to check with people here,
just to double-check my understanding.
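The backward-compatibility setting being referred to is bytea_output, available since 9.0:

```sql
-- Emit bytea in the pre-9.0 escape (octal) format.
SET bytea_output = 'escape';  -- can also be set in postgresql.conf or per database
```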
...@postgresql.org] On Behalf Of Reuven M. Lerner
Sent: Sunday, February 20, 2011 6:31 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Questions about octal vs. hex for bytea
Hi, everyone. I've got a client who is planning to upgrade from PostgreSQL 8.3
to 9.0 in the coming weeks. They use a lot
Reuven M. Lerner reu...@lerner.co.il Sunday 20 February 2011 12:31:09
Hi, everyone. I've got a client who is planning to upgrade from
PostgreSQL 8.3 to 9.0 in the coming weeks. They use a lot of tables
with bytea columns. They're worried about the switch from octal to hex
formats for bytea
Reuven M. Lerner reu...@lerner.co.il writes:
My client is concerned that the internal representation has changed, and
is asking me for a script that will change the representation, in order
to save space (since hex occupies less space than octal).
This is complete nonsense. The internal
Thanks, everyone, for the swift and clear responses. It's good to know
that I did understand things correctly!
Reuven
Hello All,
I have been writing a function with SECURITY DEFINER enabled. Basically, I
am looking for ways to override the user's SET option settings while
executing my function, to prevent a permissions breach. For example, to
override SET search_path, I am setting the search path in my function
Hello
you can override standard settings per function
CREATE [ OR REPLACE ] FUNCTION
name ( [ [ argmode ] [ argname ] argtype [ { DEFAULT | = }
default_expr ] [, ...] ] )
[ RETURNS rettype
| RETURNS TABLE ( column_name column_type [, ...] ) ]
{ LANGUAGE lang_name
|
Jignesh Shah wrote:
I have been writing a function with SECURITY DEFINER enabled.
Basically, I am looking for ways to override the users SET
option settings while executing my function to prevent the
permissions breach. For example, to override SET
search_path, I am setting search path in
Thanks a ton Laurenz and Pavel for your responses, but I really didn't follow
you. I am not a master of PostgreSQL yet. Could you please give me some
example?
Basically, I want to know how many such SET options I should reset before
executing my function and at the end it should also be restored to
2010/2/22 Jignesh Shah jignesh.shah1...@gmail.com:
Thanks a ton Laurenz and Pavel for your responses, but I really didn't follow
you. I am not a master of PostgreSQL yet. Could you please give me some
example?
Basically, I want to know how many such SET options I should reset before
executing
set work_mem to '1MB'
set search_path = 'public';
Thanks for the example Pavel. I understood it. Are there any other SET
options besides the above that I need to set to prevent a security breach?
Thanks,
Jack
On Mon, Feb 22, 2010 at 11:41 PM, Pavel Stehule pavel.steh...@gmail.comwrote:
2010/2/22
2010/2/22 Jignesh Shah jignesh.shah1...@gmail.com:
set work_mem to '1MB'
set search_path = 'public';
Thanks for the example Pavel. I understood it. Are there any other SET
options besides the above that I need to set to prevent a security breach?
I am not sure - I know only search_path
Pavel
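For completeness, per-function SET clauses are attached to CREATE FUNCTION and are restored automatically when the function exits, so no manual save/restore is needed; a sketch (the function name is invented):

```sql
CREATE OR REPLACE FUNCTION secure_fn() RETURNS void AS $$
BEGIN
    PERFORM 1;  -- the body runs with the settings below
END;
$$ LANGUAGE plpgsql SECURITY DEFINER
   SET search_path = public
   SET work_mem = '1MB';
```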
On Feb 10, 2010, at 10:28 PM, Greg Smith wrote:
Ben Chobot wrote:
I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about
n_live_tup. Shouldn't that be at least fairly close to (n_tup_ins -
n_tup_del)? It doesn't seem to be, but I'm unclear why.
Insert 2000 tuples.
Delete
Ben Chobot be...@silentmedia.com writes:
And unfortunately, Tom, we're not resetting stats counters. :(
Mph. Well, the other thing that comes to mind is that n_live_tup
(and n_dead_tup) is typically updated by ANALYZE, but only to an
estimate based on ANALYZE's partial sample of the table. If
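The counters in question can be compared side by side (the table name is an example):

```sql
SELECT relname, n_tup_ins, n_tup_del, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'my_table';
```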
Ben Chobot wrote:
I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about n_live_tup.
Shouldn't that be at least fairly close to (n_tup_ins - n_tup_del)? It doesn't
seem to be, but I'm unclear why.
Insert 2000 tuples.
Delete 1000 tuples.
vacuum
Insert 1000 tuples. These go into
Greg Smith g...@2ndquadrant.com writes:
Ben Chobot wrote:
I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about
n_live_tup. Shouldn't that be at least fairly close to (n_tup_ins -
n_tup_del)? It doesn't seem to be, but I'm unclear why.
Insert 2000 tuples.
Delete 1000
On Feb 5, 2010, at 12:14 PM, Ben Chobot wrote:
I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about
n_live_tup. Shouldn't that be at least fairly close to (n_tup_ins -
n_tup_del)? It doesn't seem to be, but I'm unclear why.
Is everybody else unclear as well?
Alvaro Herrera alvhe...@commandprompt.com writes:
For example, perhaps there could be a new pair of functions
pg_read_hba_file/pg_write_hba_file that would work even if the files are
placed in other directories, but they (Debian) would need to propose
it.
I don't remember they had to provide
I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about n_live_tup.
Shouldn't that be at least fairly close to (n_tup_ins - n_tup_del)? It doesn't
seem to be, but I'm unclear why.
Thanks Richard and Alvaro. The SHOW hba_file command is a great solution.
Thanks a ton. Could you tell me where to find all such commands?
Thanks,
Dip
On Mon, Feb 1, 2010 at 9:43 PM, Alvaro Herrera
alvhe...@commandprompt.comwrote:
dipti shah escribió:
Thanks Richard, those chapters are very useful.
Techdb=# show hba_file;
hba_file
--
/etc/postgresql/8.4/main/pg_hba.conf
(1 row)
Moreover, is there any way to view the content of this file, stored in the
above location, from the Techdb command prompt itself?
Techdb=# cat
On 02/02/10 09:55, dipti shah wrote:
Thanks Richard and Alvaro. The SHOW hba_file command is a great solution.
Thanks a ton. Could you tell me where to find all such commands?
All the configuration settings are listed in Chapter 18:
http://www.postgresql.org/docs/8.4/static/runtime-config.html
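Besides the manual, a running server can list its own settings:

```sql
SHOW hba_file;
SHOW config_file;
-- or everything at once:
SELECT name, setting FROM pg_settings ORDER BY name;
```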
On 02/02/10 09:58, dipti shah wrote:
Techdb=# show hba_file;
hba_file
--
/etc/postgresql/8.4/main/pg_hba.conf
(1 row)
Ah! you're running a Debian-based system by the look of it.
Moreover, is there any way to view the content of this file from
Wow!!..that was too quick. Thanks Richard.
On Tue, Feb 2, 2010 at 3:29 PM, Richard Huxton d...@archonet.com wrote:
On 02/02/10 09:55, dipti shah wrote:
Thanks Richard and Alvaro. The SHOW hba_file command is a great solution.
Thanks a ton. Could you tell me where to find all such commands?
All
dipti shah escribió:
Techdb=# show hba_file;
hba_file
--
/etc/postgresql/8.4/main/pg_hba.conf
(1 row)
Moreover, is there any way to view the content of this file, stored in the
above location, from the Techdb command prompt itself?
Techdb=# cat
On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:
dipti shah escribió:
Techdb=# show hba_file;
hba_file
--
/etc/postgresql/8.4/main/pg_hba.conf
(1 row)
Moreover, is there any way to view the content of this file, stored in the
above
Tim Bruce - Postgres escribió:
On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:
Probably pg_read_file():
select pg_read_file('pg_hba.conf', 0, 8192);
Note that pg_read_file only allows paths relative to $PGDATA, which is
what you get from SHOW data_directory;
Since the
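So reading pg_hba.conf via pg_read_file only works when the file actually lives under the data directory (not, e.g., the Debian layout shown above):

```sql
SHOW data_directory;                          -- pg_read_file paths are relative to this
SELECT pg_read_file('pg_hba.conf', 0, 8192);  -- superuser only
```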
On Tue, 2010-02-02 at 16:09 -0300, Alvaro Herrera wrote:
Tim Bruce - Postgres escribió:
On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:
Probably pg_read_file():
select pg_read_file('pg_hba.conf', 0, 8192);
Note that pg_read_file only allows paths relative to $PGDATA,
Joshua D. Drake escribió:
On Tue, 2010-02-02 at 16:09 -0300, Alvaro Herrera wrote:
Tim Bruce - Postgres escribió:
On Tue, February 2, 2010 08:23, Alvaro Herrera wrote:
Probably pg_read_file():
select pg_read_file('pg_hba.conf', 0, 8192);
Note that pg_read_file only
I am connected to the database as the postgres user.
'\!exec ..' doesn't work if I connect to the database from another host, but it
does work if I connect to the database from the server where I have PostgreSQL
installed. pg_read_file doesn't work in either case.
Techdb=# \! exec cat
dipti shah wrote:
I am connected to the database as the postgres user.
'\!exec ..' doesn't work if I connect to the database from another host
but it does work if I connect to the database from the server where I have
PostgreSQL installed. pg_read_file doesn't work in either case.
Techdb=# \! exec cat
That makes sense.
Thanks,
Dipti
On Wed, Feb 3, 2010 at 12:08 PM, John R Pierce pie...@hogranch.com wrote:
dipti shah wrote:
I am connected to the database as the postgres user.
'\!exec ..' doesn't work if I connect to the database from another host but
it does work if I connect to the database from
Hi, we have the latest PostgreSQL setup and it allows everyone to connect.
When I do \du, it gives the following output, and it is the same for all users.
TechDB=# \du
List of roles
Role name | Superuser | Create role | Create DB | Connections | Member of
On 01/02/10 07:35, dipti shah wrote:
Moreover, anyone can connect to databases as the postgres user without giving
a password.
I am not aware of how the above setup was made, but I want to get rid of it.
Could anyone please help me with the questions below?
You'll want to read Chapter 19 of the manuals
Thanks Richard, those chapters are very useful. I got to know most of the
concepts but didn't find the location of the pg_hba.conf file so that I can
verify it. I have connected to my database using the postgres user. Could you
tell me how to open the pg_hba.conf file?
Thanks.
On Mon, Feb 1, 2010 at 3:06 PM,
On 01/02/10 10:24, dipti shah wrote:
Thanks Richard, those chapters are very useful. I got to know most of the
concepts but didn't find the location of the pg_hba.conf file so that I can
verify it. I have connected to my database using the postgres user. Could you
tell me how to open the pg_hba.conf file?
It
dipti shah escribió:
Thanks Richard, those chapters are very useful. I got to know most of the
concepts but didn't find the location of the pg_hba.conf file so that I can
verify it. I have connected to my database using the postgres user. Could you
tell me how to open the pg_hba.conf file?
Run this:
On 1/25/2010 8:12 PM, Craig Ringer wrote:
On 26/01/2010 12:15 AM, Dino Vliet wrote:
5) Other considerations?
Even better is to use COPY to load large chunks of data. libpq provides
access to the COPY interface if you feel like some C coding. The JDBC
driver (dev version only so far) now
Andy Colson wrote:
I recall seeing someplace that you can avoid WAL if you start a
transaction, then truncate the table, then start a COPY.
Is that correct? Still hold true? Would it make a lot of difference?
That is correct, still true, and can make a moderate amount of
difference if the
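The pattern being described, assuming no WAL archiving is configured, is roughly (the table name and path are examples):

```sql
BEGIN;
TRUNCATE my_table;  -- truncated in the same transaction as the load
COPY my_table FROM '/tmp/data.csv' WITH CSV;  -- may skip WAL in this case
COMMIT;
```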
To: pgsql-general@postgresql.org
Date: 01/25/2010 09:57 PM
Subject: [GENERAL] general questions postgresql performance config
Sent by: pgsql-general-ow...@postgresql.org
Dear postgresql people,
Introduction
Today I've been given the task to proceed with my plan to use
On Sun, Jan 24, 2010 at 3:17 AM, Herouth Maoz hero...@unicell.co.il wrote:
Hi Everybody.
I have two questions.
1. We have a system that is accessed by Crystal Reports, which is in turn
controlled by another (3rd party) system. Now, when a report takes too long or
the user cancels it, it
Scott Marlowe wrote:
You can shorten the tcp_keepalive settings so that dead connections
get detected faster.
Thanks, I'll ask my sysadmin to do that.
Might be, but not very likely. I and many others run pgsql in
production environments where it handles thousands of updates /
inserts per
On Mon, Jan 25, 2010 at 8:15 AM, Scott Marlowe scott.marl...@gmail.com wrote:
Is there a parameter to set in the configuration or some other means to
shorten the time before an abandoned backend's query is cancelled?
You can shorten the tcp_keepalive settings so that dead connections
get
Greg Stark wrote:
On Mon, Jan 25, 2010 at 8:15 AM, Scott Marlowe scott.marl...@gmail.com wrote:
Is there a parameter to set in the configuration or some other means to
shorten the time before an abandoned backend's query is cancelled?
You can shorten the tcp_keepalive settings so
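The server-side knobs are the tcp_keepalives_* settings; the values below are only illustrative:

```sql
-- Detect dead client connections after roughly a minute.
SET tcp_keepalives_idle = 60;      -- seconds of idle before the first probe
SET tcp_keepalives_interval = 10;  -- seconds between probes
SET tcp_keepalives_count = 3;      -- failed probes before the connection drops
```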
On Mon, Jan 25, 2010 at 11:37 AM, Herouth Maoz hero...@unicell.co.il wrote:
The tcp_keepalive setting would only come into play if the remote
machine crashed or was disconnected from the network.
That's the situation I'm having, so it's OK. Crystal, being a Windows
application, obviously
Greg Stark wrote:
On Mon, Jan 25, 2010 at 11:37 AM, Herouth Maoz hero...@unicell.co.il wrote:
The tcp_keepalive setting would only come into play if the remote
machine crashed or was disconnected from the network.
That's the situation I'm having, so it's OK. Crystal, being a Windows
On Mon, Jan 25, 2010 at 1:16 PM, Herouth Maoz hero...@unicell.co.il wrote:
Well, I assume by the fact that eventually I get an Unexpected end of file
message for those queries, that something does go in and check them. Do you
have any suggestion as to how to cause the postgresql server to do so
Dear postgresql people,
Introduction
Today I've been given the task to proceed with my plan to use PostgreSQL and
other open source techniques to demonstrate to the management of my department
the usefulness and the cost-savings potential that lies ahead. You can guess
how excited I am
On 26/01/2010 12:15 AM, Dino Vliet wrote:
5) Other considerations?
To get optimal performance for bulk loading you'll want to do concurrent
data loading over several connections - up to as many as you have disk
spindles. Each connection will individually be slower, but the overall
On Mon, Jan 25, 2010 at 9:15 AM, Dino Vliet dino_vl...@yahoo.com wrote:
Introduction
Today I've been given the task to proceed with my plan to use PostgreSQL and
other open source techniques to demonstrate to the management of my
department the usefulness and the cost-savings potential
Hi Everybody.
I have two questions.
1. We have a system that is accessed by Crystal Reports, which is in turn
controlled by another (3rd party) system. Now, when a report takes too long or
the user cancels it, it doesn't send a cancel request to Postgres. It just
kills the Crystal process
Hi Everyone,
I have questions regarding tablespaces. What happens when the disk my
tablespace is on fills up?
How do I expand my tablespace? In Oracle there is a concept of
datafiles; in PostgreSQL I specify a directory instead of a single
file...
For example I have two tables and they
Carlo Camerino carlo.camer...@gmail.com writes:
I have questions regarding tablespaces. What happens when the disk my
tablespace is on fills up?
You start getting errors.
How do I expand my tablespace? In Oracle there is a concept of
datafiles; in PostgreSQL I specify a directory
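For reference, a tablespace is created from a directory and assigned per object, so one way to grow onto a new disk is to move large tables there (the path and names are examples):

```sql
CREATE TABLESPACE big_disk LOCATION '/mnt/disk2/pgdata';
ALTER TABLE large_table SET TABLESPACE big_disk;
```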
Hello,
I am sitting on version 7.4.x and am going to upgrade to version 8.3.x.
From all I can read, I should have no problem with the actual format of the
pg_dump file (for dumping and restoring purposes), but I am
having problems with encoding (which I was fairly sure I would). I have