akp geek wrote:
We have been running Postgres 9.0.4 for a year. Replication is also set up and works fine.
On the master, the WAL files are growing and have filled almost 90% of the
disk.
How can I handle this situation? Can I delete the old files? Please advise.
Are the WAL files (the ones in pg_xlog)
Jerry LeVan wrote:
I recently modified one of my Fedora boxes by changing its name and IP.
I also disabled the internal wifi (connection speed was dropping to 1 Mb/sec)
and
configured a USB wifi stick (wow, 270~300 Mb/sec).
As I checked out the refurbished box, networking was OK and I
Dhiraj Gupta wrote:
I have created a database named 'ofbiz'; the default schema named
'public'
was created automatically. I want to create a schema named 'ofbiz' in the database
'ofbiz',
so that when I create a database named 'ofbiz' the schema 'ofbiz' is created
automatically.
How is it possible if
rahul143 wrote:
I'm receiving the following error on a PHP page, connecting to Postgres
7.4,
installed on Mandrake 10.0:
snip
Error Description: pg_connect(): Unable to connect to PostgreSQL
server:
FATAL: sorry, too many clients already.
/snip
Can anyone tell me what this means?
Is there a
rahul143 wrote:
We are currently using a 32-byte varchar for our primary keys. We tried to
reduce this down to 16 bytes, but varchar didn't seem to store this
correctly. I'd like to use bytea instead so we could use 16 bytes, but are
indexes used properly ? Does anyone have any other suggestions
David Greco wrote:
Need some help with UPDATE syntax. I am attempting to do something
like this:
WITH default_facility AS (
SELECT facility_id,
inkjetorlabel
FROM engagement_facility_defs
WHERE engagement_facility_def_id = 8
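The snippet above is cut off before the UPDATE itself. A minimal sketch of how such a CTE can drive an UPDATE (supported since PostgreSQL 9.1; the target table `facilities` and the matching column are assumptions for illustration only):

```sql
WITH default_facility AS (
    SELECT facility_id, inkjetorlabel
    FROM engagement_facility_defs
    WHERE engagement_facility_def_id = 8
)
-- join the CTE into the UPDATE via the FROM clause
UPDATE facilities f
SET inkjetorlabel = d.inkjetorlabel
FROM default_facility d
WHERE f.facility_id = d.facility_id;
```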
icholy wrote:
I'm having problem with libpq's *PQexec* function hanging on
intermittent
connections.
After searching around, the solution seems to be to use the
asynchronous
functions *PQsendQuery*/*PQgetResult* and implement your own timeout.
Now the issue I'm facing is that *PQgetResult*
Michael Giannakopoulos wrote:
I would like to ask if there is any way to make an aggregate function
to take a set of tuples as an
input variable. I know that an actual aggregate function receives each
tuple one at a time and processes
it on the fly. However, I want to store tuples in an
ivan marchesini wrote:
we are facing a strange efficiency problem.
We have two tables:
1) the first largest (3 rows) contains a timestamp field (named
tempo_calcolo) and other fields.
2) the second contains only a few rows (150), a PostGIS geometric POLYGON
field (named the_geom)
David Greco wrote:
[wants to use CTEs in an UPDATE]
Yeah that's good, but there are plenty of columns, was hoping to be
able to use (table.*) syntax
Is this a problem or do you just want to type
as little as possible?
You have to specify them in the SET clause anyway.
Yours,
Laurenz Albe
--
ivan marchesini wrote:
Concerning the problem of the same running time, I'm really surprised,
but I can confirm that the EXPLAIN ANALYZE time is different from the
query execution time.
Probably something like this happens:
wd wrote:
Logs are something like this:
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 9 0]LOG: recovery
has paused
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 10 0]HINT: Execute
pg_xlog_replay_resume() to continue.
Well, try
SELECT pg_xlog_replay_resume();
Yours,
Laurenz Albe
wd wrote:
Logs are something like this:
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 9 0]LOG: recovery
has paused
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 10 0]HINT:
Execute pg_xlog_replay_resume() to continue.
Well, try
SELECT pg_xlog_replay_resume();
I can't connect
wd wrote:
I've tried to restore Postgres to a specific time but failed.
My recovery.conf is as below:
restore_command='cp /t/xlog/%f %p'
recovery_target_time='2012-11-22 5:01:09 CST'
pause_at_recovery_target=true
recovery_target_inclusive=false
The basebackup was made at 2012-11-22 3:10 CST,
wd wrote:
I've tried to restore Postgres to a specific time but failed.
My recovery.conf is as below:
restore_command='cp /t/xlog/%f %p'
recovery_target_time='2012-11-22 5:01:09 CST'
pause_at_recovery_target=true
recovery_target_inclusive=false
The basebackup was made at 2012-11-22 3:10
Ranjeet Dhumal wrote:
When I am trying to query a table temp_table1 (sms_type
varchar(20), sms_info varchar(100), sms_id integer) with
Query :: select sms_type,count(*) from temp_table1 group by 1 order
by 2 desc;
then I got the following error; I don't know what's wrong with it.
ERROR: volatile
Ranjeet Dhumal wrote:
I got this error on two different tables; the first one is empty and the
second one has about 60k
records. I am using Postgres version 9.0.1.
And when I dropped both tables and recreated them with the same data and
structure, the query ran successfully.
I am not able to understand
Hao Wang wrote:
I installed PostgreSQL 8.3 on my Linux machine.
The cluster directory is /usr/local/data and I created three
databases
named db1, db2, and db3. db1 is
in the default tablespace 'pg_default'. db2 is in
'/home/tablespace/space1/' and db3 is in
'/home/tablespace/space2/'. I
Hao Wang wrote:
My purpose is not to do backup for my database.
I understood that. It was just a side comment.
I just want to copy the whole db3 database to another
machine and restore it. That database could be very large, so I think
a direct copy is more efficient
than pg_dump. So I'd
Vishalakshi Navaneethakrishnan wrote:
Can we create composite index for one text column and integer column?
Yes.
It would probably have been faster to try it than
to send this e-mail...
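For the record, a minimal sketch (the table and column names here are made up for illustration):

```sql
-- a composite index over a text column and an integer column
CREATE TABLE docs (title text, version integer);
CREATE INDEX docs_title_version_idx ON docs (title, version);
```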
Yours,
Laurenz Albe
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes
Clemens Park wrote:
Recently, during a performance improvement sweep for an application at
my company, one of the hotspots
that was discovered was pagination.
In order to display the correct pagination links on the page, the
pagination library we used (most
pagination libraries for that
Philippe Amelant wrote:
So i was thinking it was just a reconnect to the sender (and I can see
the standby trying to reconnect in the log)
Hmmm. I think I was too quick when I said no.
If you ship the WAL archives including the history file to the
standby, then the standby should be able
Hao Wang wrote:
I installed PostgreSQL 8.3 on my Linux machine.
The cluster directory is /usr/local/data and I created three databases
named db1, db2, and db3. db1 is
in the default tablespace 'pg_default'. db2 is in
'/home/tablespace/space1/' and db3 is in
'/home/tablespace/space2/'. I
Philippe Amelant wrote:
I'm setting up a 3-node cluster, and after some tests
I just discovered that the cascading slave does not recover.
Right, switching timeline over streaming replication
is not supported yet. There's a patch by Heikki in
the pipeline for this, so it will probably work in
Pawel Veselov wrote:
QUESTION: If multiple pgpools are running, and if there are no network
problems,
and configuration
files are identical, is there any guarantee that the same stand-by node
will be selected for
promotion? Concern here is that with configuration of (M-SB0-SB1) one
Pawel Veselov wrote:
From your later comments, it's also apparent that these archived WALs will
be useless after
failover (for the purpose of recovery), so there is no reason to send them
to all the nodes after all.
I obviously lost it here. The archives do need to be synchronized, for
Philippe Amelant wrote:
I'm setting up a 3-node cluster, and after some tests
I just discovered that the cascading slave does not recover.
Right, switching timeline over streaming replication
is not supported yet. There's a patch by Heikki in
the pipeline for this, so it will probably work in
高健 wrote:
Please let me dig it a little more:
I think that when a session is accessing a PostgreSQL table, it will be
influenced by the
following:
Really old data (needing to be vacuumed, e.g. an old image from one day ago).
Recent data (committed and uncommitted), because they are
I'll try to answer the questions I can.
Pawel Veselov wrote:
I've been struggling with understanding all the necessary pieces for
streaming replication. So I put
down the pieces as I did understand them, and would appreciate if you guys
could point out any of the
stuff I understood or have
Edson Richter wrote:
I have a bunch of databases that cannot be dropped under any circumstances.
I was wondering if it is possible to revoke the permission to drop a database
for all users, so that
even a superuser who wishes to drop a database would first need to
grant the drop database permission again.
I know
高健 wrote:
I have one question about the visibility of explain plan.
First, I was inserting data into a table. I used: [ insert into
ptest select * from
test02; ]
And test02 table has 10,000,000 records. And ptest is a parent table, which
has two distribution
child
Thalis Kalfigkopoulos wrote:
A tablespace is created successfully and so is a test table in it:
# CREATE TABLESPACE ts_name LOCATION '/path/to/tablespace/location';
# SET default_tablespace=ts_name;
# CREATE TABLE ts_test_table;
From the command line:
$ cd /path/to/tablespace/location
$
I wrote:
Magnus Hagander wrote:
I have streaming replication configured over SSL, and
there seems to be a problem with SSL renegotiation.
[...]
After that, streaming replication reconnects and resumes working.
Is this an oversight in the replication protocol, or is this
working as designed?
高健 wrote:
I have one question about the cache clearing.
If I use the following soon after database startup (or the first time I use it):
postgres=# explain analyze select id,deptno from gaotab where id=200;
The result is: the above explain analyze got a total runtime of 47 ms.
But if I
Magnus Hagander wrote:
I have streaming replication configured over SSL, and
there seems to be a problem with SSL renegotiation.
[...]
After that, streaming replication reconnects and resumes working.
Is this an oversight in the replication protocol, or is this
working as designed?
This
I have streaming replication configured over SSL, and
there seems to be a problem with SSL renegotiation.
This is from the primary's log:
2012-11-06 00:13:10.990 CET,replication,,5204,10.153.109.3:49889,509843df.1454,10,streaming 1E3/76D64000,2012-11-05 23:55:27 CET,4/0,0,LOG,08P01,SSL
Thomas Munro wrote:
I am using 9.1.6, and I've set up a partitioned table as described in
the manual, with partitions
based on a timestamptz column called 'time'. The exclusion
constraints work nicely when I select
ranges of times with literal constants. But why would a WHERE clause
like the
Magnus Hagander wrote:
I have streaming replication configured over SSL, and
there seems to be a problem with SSL renegotiation.
[...]
After that, streaming replication reconnects and resumes working.
Is this an oversight in the replication protocol, or is this
working as designed?
This
hari.fu...@gmail.com wrote:
I think the problem is that this + operator is implemented
by the function timestamptz_pl_interval, which is STABLE
but not IMMUTABLE.
I am not sure why this function cannot be IMMUTABLE, it
seems to me that it should be.
No: the result of e.g.
Ramkumar Yelai wrote:
[is worried that a database might become inconsistent if conflicting
INSERTs and DELETEs occur]
@Albe - I get your first point. I am a little skeptical about the second
point, because Postgres could have
avoided this lock by using MVCC. Please correct me if I am wrong.
Which
Jeff Janes wrote:
On Wed, Oct 31, 2012 at 7:24 AM, Jürgen Fuchsberger
juergen.fuchsber...@uni-graz.at wrote:
I have a problem with a corrupt backup; fortunately I was only testing, so I
did not lose any data. Unfortunately, what I did was follow the backup
guidelines in the
Yelai, Ramkumar IN BLR STS wrote:
Sent: Wednesday, October 31, 2012 12:40 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Parallel Insert and Delete operation
Hi All,
Please clarify me the following example.
I have 2 tables
Table1 - (it has one primary key and a few columns
高健 wrote:
I am new to PostgreSQL's SPI (Server Programming Interface).
I can understand PostgreSQL's example of using SPI, but I am not sure about
SPI_prepare's parameter.
void * SPI_prepare(const char * command, int nargs, Oid * argtypes)
Can somebody kindly give an example of using
[added CC to oracle-fdw-general, where this should be]
Papiernik Anna-AAP053 wrote:
I have Oracle with OS authentication.
And I have installed PostgreSQL and I want to migrate data from Oracle
to PostgreSQL database.
I have performed steps:
CREATE EXTENSION oracle_fdw;
CREATE
Rob wrote:
Some more info
Oracle Server:Oracle 11g R2 (11.2.0.2.0)
Client: 11.2
Was installed using Oracle Universal Installer
Ok.
I don't really want to post the full environment of the postmaster but
basically I could see no entry in there for ORACLE_HOME or TNS_ADMIN,
should
I?
Yes,
Hannes Erven wrote:
today I ran into an issue I believed to be a FAQ, but it doesn't seem
so, as I could not find any resources related to this... :-/
A misguided click in PGADMIN executed a TRUNCATE CASCADE on a rather
central table of my schema, which resulted in most important
chinnaobi wrote:
I have tested using cygwin rsync on Windows 2008 R2, just after
restarting the
server.
For 10 GB it took nearly 5 minutes to sync;
for 50 GB it took nearly 30 minutes -- too long, though there were no
big
changes.
My requirement is something less than 5 minutes. I am doing
[I'll CC the oracle_fdw mailing list where this should be.]
Rob wrote:
Environment: PostgreSQL 9.1.6
SLES 11 SP2
Oracle_fdw 0.9.7
I am trying to implement the use of oracle_fdw.
So far I have installed an oracle client on my postgres server. I can
connect to the oracle environment from
Oliver Kohll wrote:
I'm getting a problem where my application hangs in the process of
adding a field to a table. The app
adds the field, then creates an index on the field. It hangs for ages
(minutes) until I cancel the
query. My investigation so far has been
Look at current queries:
Oliver Kohll wrote:
In the example you show, some connections are idle in transaction.
Such connections can hold locks that block your CREATE INDEX.
Could you look at pg_locks if there is a lock that is not granted?
There could also be prepared transactions holding locks, if
you use that
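The suggested check against pg_locks can be sketched as follows (the `granted` column of the system view marks waiting lock requests):

```sql
-- show lock requests that are still waiting
SELECT locktype, relation::regclass, pid, mode
FROM pg_locks
WHERE NOT granted;
```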
Chris Angelico wrote:
- Set fsync=off and hope you don't crash.
Ouch. I might consider that for a bulk import operation or something,
but not for live usage. There's plenty else can be done without
risking data corruption.
I didn't mean that to be an alternative that anybody should
consider
Craig Ringer wrote:
Why? They can be serialized. The outcome would be exactly the same
if session 2 completed before session 1 began.
Hmm. Good point; so long as *either* ordering is valid it's fine, it's
only when *both* orderings are invalid that a serialization failure
would occur. For
Alvaro Herrera wrote:
Are there any particular settings or methods available to improve
Just
insert_table operations
The obvious ones:
- Wrap several of them in a transaction.
- Increase checkpoint_segments.
- Set fsync=off and hope you don't crash.
I think it would work to set
chinnaobi wrote:
I have been testing streaming replication in windows with postgres
9.1.1.
For a few scenarios I haven't found a solution. Please advise me.
1. Precautions before promoting standby server to primary manually
considering the dead primary server ??
You don't have to take any
Jayadevan M wrote:
I went through the documentation on replication and am trying to
understand the various options
available.
Does replication available in PostgreSQL always use WAL (i.e
irrespective of whether it is
synchronous/asynchronous, whether it is using archived logs or 'live'
logs
GMAIL wrote:
I'll describe the situation:
I have two PCs with PostgreSQL server:
- a main PC, with IP 192.168.5.1, turned on
- a backup PC, with IP 192.168.5.1, turned off
I want the main PC to save the database on the local hard drive and on
a
NAS in real time. When the main PC has a failure, I
Craig Ringer wrote:
In this SO question:
http://dba.stackexchange.com/questions/26905/how-do-i-implement-insert-if-not-found-for-transactions-at-serializable-isolatio/26909#26909
the author is running a series of queries that I'd expect to abort on
commit with a serialisation
failure.
chinnaobi wrote:
When the standby is promoted, it starts a new time line, so it
is on a different time line from the old master. The old
primary cannot be turned to a standby without a new base backup
(possibly via rsync to speed up things).
But taking 200 GB to 500 GB DB base backup is
Fathi Ben Nasr wrote:
Is there a PostgreSQL magazine like the one sent by Oracle to whom
requests it ?
If the answer is yes: how do I subscribe to it?
Else
I know this could cost a lot to print such magazines, but maybe a pdf
version could be affordable.
The main idea behind this is to
Arvind Singh wrote:
Are there any particular settings or methods available to improve Just
insert_table operations
The obvious ones:
- Wrap several of them in a transaction.
- Increase checkpoint_segments.
- Set fsync=off and hope you don't crash.
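The first suggestion can be sketched as follows, batching many inserts into a single transaction so they share one commit (the table name is illustrative):

```sql
BEGIN;
INSERT INTO insert_table VALUES (1, 'a');
INSERT INTO insert_table VALUES (2, 'b');
-- ... many more rows, all committed at once ...
COMMIT;
```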
Yours,
Laurenz Albe
--
kostika gorica wrote:
I have an application already running, accessing the database
(postgresql
8.1). I need a solution, that prints (on a file, or somehow) the
queries
being executed every time the app accesses the database.
Is there any ON DATABASE ACCESS trigger ?!
Any idea how that can be
urkpostenardr wrote:
Is this bug in Postgres ?
If yes, is it fixed in latest release ?
Second query should return 2 rows instead of 1 ?
create table t(i int);
insert into t values(1);
insert into t values(2);
insert into t values(3);
pgdb=# select i from t order by i limit
Divakar Singh wrote:
While making connection to PGSQL using libpq, is there any option to
mention the schema name?
Something similar exists in java, however not sure about libpq.
Use the options connection parameter, like this:
psql dbname=test user=laurenz port=5432 options='-c
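The value above is truncated; a hedged example of the same technique as a complete connection string (the schema name is hypothetical):

```
host=localhost dbname=test user=laurenz options='-c search_path=myschema'
```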
tigran2-postg...@riatest.com wrote:
Is there any other way to store large data in Postgres that allows
streaming and correctly works with
multiple schemas per database?
Large Objects and bytea are the only ways.
If you want to pg_dump only certain large objects, that won't work
as far as I
shehab adean sherif wrote:
Where should the pgsql_tmp directory of a specific
database be located? I've searched in
/var/lib/postgresql/9.1/main/base/<database OID> and I didn't find it;
should I create it myself?
This directory does not belong to a database, but to a tablespace.
For the
Yelai, Ramkumar wrote:
Anything exceeding a few hundred partitions is not considered a good
idea.
The system needs to keep track of all the tables, and query planning
for such a partitioned table
might be expensive.
1440 is probably pushing the limits, but maybe somebody with more
experience
Scot Kreienkamp wrote:
You could try to strace the postmaster during a connection attempt
and see what happens. Maybe that helps to spot the place where
things go wrong.
[Scot Kreienkamp]
I'm willing to give it a try, but I've never done it before. What do
I need to do?
man strace is your
Chris Travers wrote:
If it is truly idle in a transaction, maybe it has locks that are
holding up other transactions?
Locks are usually held until commit time, except advisory locks iirc
but those have to be explicitly
checked, so if you don't know if you are using them you probably
aren't.
Yelai, Ramkumar wrote:
1. Do I need to run REINDEX to reduce space, or will autovacuum handle
reindexing?
Autovacuum will not rebuild the index as REINDEX does.
It will only free index entries that can be reused later.
2. CLUSTER, REINDEX and VACUUM FULL lock the table; hence do we
need
Chris Curvey wrote:
1) Why do I get a warning when doubling a backslash?
2) What is the meaning of E syntax (E'fs1\\bar')?
3) If I have backslashes in my table, how can I get them back out?
4) I'd like to run an update to change the value '\\fs1\bar' to
'\\fs1\foo\bar'. What incantation
hartrc wrote:
Basically my question is:
Is there currently any way to avoid wal generation during data load
for
given tables and then have point in time recovery after that?
Background and blurb
The situation i'm referring to here is for a development environment.
I
require point in time
Alan Nilsson wrote:
Is it the case that extensions can be added to only one schema? If
so, what is the recommended
practice for accessing a function from an extension in multiple
schemas?
Yes, a (relocatable) extension can belong to only one schema.
There are two ways to access the functions:
Scot Kreienkamp wrote:
I have a problem that I've been struggling with for quite some time.
Every once in a while I will get
a connection that goes to idle in transaction on an in-house
programmed application that connects with
JDBC. That happens fairly regularly and the programmers are trying
Chris Travers wrote:
If one releases an extension with say a version number of 0.1 and then
releases one with important
changes at 0.2, how is the best way to manage these changes? I
couldn't find anything in the docs to
discuss this issue. Am I missing something?
Specifically for
hubert depesz lubaczewski wrote:
When I enabled normal stderr logging, with absurdly full
log_line_prefix, I got this:
a[psql] u[depesz] d[depesz] r[[local]] h[[local]] p[15444] t[2012-09-13
21:49:37 CEST] m[2012-09-13
21:49:37.840 CEST] i[SELECT] e[0] c[505238d0.3c54] l[4] s[2012-09-13
Arvind Singh wrote:
Our CSV Log contains lot of statements like the following THREE lines.
They appear exactly one after
the other.
And they number in thousands for a Session (more than ten thousand)
2011-11-11 12:41:31.484
IST,agent1,pem,524,localhost:2141,4ebccaa2.20c,754,idle,2011-11-
depesz wrote:
This makes automatic parsing of log files possible:
if a line starts with a tab, it is a continuation line.
The tab itself is not part of the message.
why the tab isn't appended for other multi-line messages? like queries
in LOG: duration: statement: lines?
I'm not sure what
Arvind Singh wrote:
We have been working on a CSV log parser application in C#.
We recently discovered that certain log entries or records can span
multiple lines,
in the sense that the same activity has more detail in subsequent
lines.
Not really, they are different log entries
Guilherme Rodrigues wrote:
I created one table like so:
CREATE TABLE clima (
city char(80),
cprc int,
);
And I have another table like so:
CREATE TABLE city (
namechar(80),
other_thing int,
);
These SQL statements have syntax errors (comma after the last
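For reference, a corrected version of the two statements (trailing commas dropped, and the missing space restored in `name char(80)`):

```sql
CREATE TABLE clima (
    city char(80),
    cprc int
);

CREATE TABLE city (
    name char(80),
    other_thing int
);
```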
François Beausoleil wrote:
What's the state of the art for foreign keys on child tables?
My use case is this:
CREATE TABLE parties(party_id serial primary key);
CREATE TABLE positions( PRIMARY KEY(party_id) ) INHERITS(parties);
CREATE TABLE organizations( PRIMARY KEY(party_id) )
Craig Ringer wrote:
On 09/18/2012 03:21 AM, Edson Richter wrote:
2012-09-17 14:25:33.150
BRT,thedb,thedb,15660,10.10.20.1:33485,5054d3ab.3d2c,1,SELECT,2012-09-15
16:14:51 BRT,10/204822,0,LOG,0,duração: 505.494 ms executar
unnamed: select A, B
from C join D
on C.id =
Rafal Pietrak wrote:
On Mon, 2012-09-17 at 14:31 -0500, Merlin Moncure wrote:
On Mon, Sep 17, 2012 at 7:24 AM, Rafal Pietrak ra...@zorro.isa-geek.com
wrote:
On Mon, 2012-09-17 at 19:58 +0800, Craig Ringer wrote:
On 09/17/2012 04:46 PM, Rafal Pietrak wrote:
[--]
There
Yvon Thoraval wrote:
You may want to set log_statement='all' in postgresql.conf
I did this change, restarted PostgreSQL and tested again, with the same error;
however, the postgres log
didn't report any error, as far as I understand the log:
2012-09-14 18:43:14 CEST LOG: received fast shutdown
Arvind Singh wrote:
I have come across a very peculiar situation.
We have a postgres installation 9.0. It was installed last year.
But we started implementation on it, just recently.
And therefore the need to develop a Logparser application.
During our preliminary parsing, what we
Arvind Singh wrote:
I have enabled the csvlog option.
There seem to be 24 columns in the Log. I have difficulty in
understanding the following columns
- virtual_transaction_id
Such a transaction ID is given to every transaction.
Can be used to track transactions in the log.
-
Yvon Thoraval wrote:
I have a strange problem when connecting to a database through PHP using PDO.
When connecting to the PHP script through a browser I get the following error:
HTTP 500 (Internal Server Error)
Not really informative; then I tried the PHP script from the command line
where
Dear Dave, dear Korry,
I tried to build the debugger
(http://pgfoundry.org/projects/edb-debugger/)
with PostgreSQL 9.2 and it failed.
The code seems to have bitrotted.
The last commit was 16 months ago.
Do you still maintain the project?
Has it moved somewhere else?
Yours,
Laurenz Albe
--
Fabrízio de Royes Mello wrote:
Has it moved somewhere else?
Look at the pldebugger git repository in [1].
Thanks!
Laurenz Albe
--
Gustav Potgieter wrote:
Hope you can assist and that I am posting to the right forum.
Sending the question twice with the same wording does not
make it clearer...
We currently have multiple PostgreSQL 9 instances running with warm
standby, and the replication works
wonderfully.
The problem
Kenaniah Cerny wrote:
I would first like to thank everyone involved for all of the hard work that
goes into Postgres and
the RPMs. I have a small request:
In the service script that gets installed to /etc/rc.d/init.d/, there is a
hard-coded value for
PGPORT. Would it be possible to
Arvind Singh wrote:
I am in a project that uses PostgreSQL v9.0. We are developing an
application in C# to parse the
PG server activity log installed on Windows 2003/XP or higher.
Our application will :
Scan the Log for a given search text and Post rows found
Produce statistics related
Roman Golis wrote:
I am trying to load data into a rather simple table:
CREATE TABLE public.files (
id SERIAL,
idchar CHAR(32) NOT NULL,
content BYTEA,
CONSTRAINT files_pkey PRIMARY KEY(id)
) WITHOUT OIDS;
with this command:
copy files (idchar, content) from '/data/1.dat'
Edson Richter wrote:
That's what I want to do now: I would like to consolidate these 4
separate databases into 1
database with 5 schemas:
- Main schema: will have all shared tables, that will be
read only most of time;
- Schema1 to Schema4: will have their own tables, read write.
Now
Sireesha Modumudi wrote:
I am using postgres 8.3.9 on SUSE 64 bit. By default max_connections
is 100, but I want to know if
this can be increased, if so, what should we take into consideration?
It can be increased, but you have to restart the server for
the change to take effect.
It is not a
Edson Richter wrote:
2) Is there a way to specify the default schema in JDBC url
(or command I can issue to change
the default schema at runtime, like set path...)?
SET search_path=schema1,schema2,public;
Problem is that my application uses JDBC and Connection Pooling. After
a
connection
John R Pierce wrote:
was this a client process or a postgres process? kill -9 on postgres
processes can easily trigger data corruption.
It definitely shouldn't cause data corruption, otherwise
PostgreSQL would not be crash safe.
Yours,
Laurenz Albe
--
Yvon Thoraval wrote:
On my computer I had a disk problem, so I had to reinstall the system
(Xubuntu 12.04).
I backed up some parts of the disk, namely /etc, /var and /home.
I wonder if I am able to recover my past databases from the
/var/lib/postgresql/9.1/ backup.
If yes, how?
If you
Himanshu Joshi wrote:
I am getting the strange error mentioned below while logging
into
Postgres from the Linux command prompt. It works fine with the
GUI (pgAdmin
III). I am using Postgres version 8.4SS.
Himanshu
# cd /opt/PostgresPlus/8.4SS/
# bin/psql -Upostgres -p
bin/psql: line 1:
Pedro Santos wrote:
I'm trying to install an application on my Linux box, but when I try to use
the command su postgresql it
asks for a password. What is the password? I tried my root password, but it doesn't work.
It may be that you never set that password.
Try grep postgresql /etc/shadow as root user.
Khangelani Gama wrote:
I need help turning off autovacuum in a database that's running
Postgres 9.1.2. I tried to
turn it off by setting it to off in the postgresql.conf file and restarted the
postmaster, but when I run the show
autovacuum; query, I still see autovacuum set to on inside the database.