Arvind,
It seems this is a firewall issue. On the server side (where PostgreSQL is
installed), the port (e.g. the default 5432) was not opened, so the Postgres
instance cannot be reached from client machines. Please stop the firewall on
the Windows 7 machine and try to connect from the client machine again.
Best Regards,
Chiru
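If stopping the firewall confirms the diagnosis, the longer-term fix is to open just the PostgreSQL port rather than disable the firewall. A sketch of the usual pieces, assuming the default port 5432 (the rule name and client subnet below are placeholders):

```shell
:: On the Windows 7 server (run as Administrator): allow inbound TCP 5432
netsh advfirewall firewall add rule name="PostgreSQL 5432" dir=in action=allow protocol=TCP localport=5432
```

The server must also listen on a non-local interface (`listen_addresses = '*'` in postgresql.conf) and have a matching pg_hba.conf entry such as `host all all 192.168.1.0/24 md5`, followed by a restart/reload.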
On Mon, May 27, 2013
Hello,
I set enable_seqscan=off and also accidentally dropped the only index
on a table (actually, DROP EXTENSION pg_bigm CASCADE did it) and observe
the following:
postgres=# explain select * from testdata where name like '%gi%';
QUERY PLAN
On Mon, May 27, 2013 at 12:42 AM, Amit Langote amitlangot...@gmail.com wrote:
I set enable_seqscan=off and also accidentally dropped the only index
[...]
Seq Scan on testdata (cost=100.00..101.10 rows=2 width=71)
[...]
Although, I suspect the (dropped index + enable_seqscan)
2013/5/27 Amit Langote amitlangot...@gmail.com
Although, I suspect the (dropped index + enable_seqscan) causes this,
is the cost shown in explain output some kind of default max or
something like that for such abnormal cases?
When you set enable_xxx = off, it does not actually disable the xxx
operation; it just sets the operation's start cost to a very high value.
When one uses “enable_” settings to adjust planner behavior, PostgreSQL
just sets really high costs for the operations affected (like the one you
see).
As a SeqScan is the only possible way to execute your query, it is still
chosen.
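The mechanism is visible directly in EXPLAIN. A sketch (in the 9.x sources the planner adds an internal disable_cost of 1.0e10 to paths discouraged this way; the cost figures shown are illustrative):

```sql
SET enable_seqscan = off;
EXPLAIN SELECT * FROM testdata WHERE name LIKE '%gi%';
-- With no usable index the seq scan is still chosen, but it carries
-- the huge penalty, e.g.:
--   Seq Scan on testdata  (cost=10000000000.00..10000000001.10 rows=2 width=71)
RESET enable_seqscan;
```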
I get it. Thanks!
--
Amit Langote
Hi there
I am looking for an open-source document management system (DMS) based on
PostgreSQL.
Does anyone have experience with such tools?
Thanks and cheers,
Peter
Even with that, some clients are being encouraged to switch to
PostgreSQL to lower their companies' technology costs, but very
often they ask whether there are success stories of PostgreSQL
implementations at companies in our region or around the world,
success stories (if possible) with
Thank you Wolfgang, just one question: what does "bio" mean? In the part
that says 69 bio EUR...
Regards.
Sent from my BlackBerry® wireless device from Telecom.
-Original Message-
From: Wolfgang Keller felip...@gmx.net
Sender: pgsql-general-owner@postgresql.org
Date: Mon, 27 May 2013 17:15:41
I have a PostgreSQL 9.0/9.2 which from time to time hits some memory issues.
I know the best approach is to monitor the DB performance and activity but
in the log files I see error messages similar to:
TopMemoryContext: 221952 total in 17 blocks; 7440 free (41 chunks); 214512
used
Hello,
We use the basebackup + WAL files strategy to backup our database i.e.
take a basebackup every day and copy WAL files to a remote server every
15 minutes. In case of a disaster on the master, we'd recover the
database on the slave. If this happens, I would like to tell the
customer
How can I show the value of search_path for the database, the user and the
session?
I ask because I cannot explain the following:
$ psql -U postgres -d ises
psql (9.1.4)
Type "help" for help.
postgres@moshe=devmain:ises=# show search_path;
search_path
---
public, audit_log
(1
On 05/27/2013 11:29 AM, Moshe Jacobson wrote:
How can I show the value of search_path for the database, the user and
the session?
I ask because I cannot explain the following:
$ psql -U postgres -d ises
psql (9.1.4)
Type "help" for help.
postgres@moshe=devmain:ises=# show
On Mon, May 27, 2013 at 2:47 PM, Adrian Klaver adrian.kla...@gmail.comwrote:
Is the below what you are looking for?
http://www.postgresql.org/docs/9.2/static/runtime-config-client.html
On Mon, May 27, 2013 at 3:07 PM, Moshe Jacobson mo...@neadwerx.com wrote:
I'd like to know how to see the search_path setting attached to a
particular user/role independent of the session.
Oh, and I'd also like to see the current setting of the database so I know
what will happen if I clear
On 05/27/2013 12:16 PM, Moshe Jacobson wrote:
On Mon, May 27, 2013 at 3:07 PM, Moshe Jacobson mo...@neadwerx.com wrote:
I'd like to know how to see the search_path setting attached to a
particular user/role independent
On Mon, May 27, 2013 at 12:16 PM, Moshe Jacobson mo...@neadwerx.com wrote:
Oh, and I'd also like to see the current setting of the database so I know
what will happen if I clear the user setting.
I think you can find some of what you are looking for here:
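For completeness, the per-role and per-database overrides live in the pg_db_role_setting catalog (psql's \drds lists them). A sketch, reusing the role and database names from this thread as examples:

```sql
-- Attach overrides (examples):
ALTER ROLE postgres SET search_path = public, audit_log;
ALTER DATABASE ises SET search_path = public;

-- Inspect what is stored, and for whom:
SELECT d.datname, r.rolname, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_database d ON d.oid = s.setdatabase
LEFT JOIN pg_roles r    ON r.oid = s.setrole;

-- Clearing the role setting falls back to the database setting,
-- then to postgresql.conf:
ALTER ROLE postgres RESET search_path;
```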
I have two distant servers between which I would like to configure async
replication.
Servers are running 9.2.4.
Since the 9.0 days I have used a script with rsync for the transfer. And
sometimes the servers get out of sync (due to large processing in the
master database and huge network latency), and I have to
Try this step-by-step instruction
https://code.google.com/p/pgcookbook/wiki/Streaming_Replication_Setup.
I constantly update it when discovering useful things, including low
bandwidth issues.
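For 9.2, the core of such a setup is only a few settings. A sketch (host name, replication user and the WAL retention figure are placeholders to adapt):

```
# primary, postgresql.conf (9.2-era parameters):
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 256      # generous retention helps over slow links

# primary, pg_hba.conf:
host  replication  replicator  10.0.0.2/32  md5

# standby, recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
```

With wal_keep_segments sized for the worst-case lag, the standby can fall behind during heavy processing on the master without requiring a full resync.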
On Mon, May 27, 2013 at 5:08 PM, Edson Richter edsonrich...@hotmail.com wrote:
Since 9.0 days I do use
On 05/27/2013 05:43 PM, Sergey Konoplev wrote:
On Mon, May 27, 2013 at 5:08 PM, Edson Richter
On 27/05/2013 21:43, Sergey Konoplev wrote:
Thanks. This is a good idea, of course!
I also have a
On 28/05/2013 00:03, Joshua D. Drake wrote:
On
Hello:
I created a table, and found that the file created for that table is about
10 times the size I estimated!
The following is what I did:
postgres=# create table tst01(id integer);
CREATE TABLE
postgres=#
postgres=# select oid from pg_class where relname='tst01';
oid
---
16384
(1 row)
* 高健 (luckyjack...@gmail.com) wrote:
So, is there any method to correctly estimate the disk space one table
will need, given the table's column data types and estimated record count?
The simplest might be to do exactly what you did: create the table and
then check the size with a subset of
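The 10x factor is mostly per-row overhead rather than mis-measurement: each heap row carries a 4-byte line pointer and a 23-byte tuple header (MAXALIGN-padded to 24), so a single int4 costs roughly 36 bytes on disk, not 4. A back-of-the-envelope sketch (it ignores TOAST, fillfactor and free-space effects):

```python
# Rough heap-size estimate from PostgreSQL's on-disk layout constants.
BLCKSZ = 8192          # default page size
PAGE_HEADER = 24       # bytes per page header
LINE_POINTER = 4       # bytes per item pointer
TUPLE_HEADER = 24      # 23-byte header, MAXALIGN(8)-padded

def maxalign(n, align=8):
    return (n + align - 1) // align * align

def estimate_heap_bytes(n_rows, data_width):
    tuple_bytes = maxalign(TUPLE_HEADER + data_width)  # 24 + 4 -> 32
    per_row = tuple_bytes + LINE_POINTER               # 36 for one int4
    rows_per_page = (BLCKSZ - PAGE_HEADER) // per_row  # 226
    pages = -(-n_rows // rows_per_page)                # ceiling division
    return pages * BLCKSZ

print(estimate_heap_bytes(1_000_000, 4))  # ~36 MB for 4 MB of raw ints
```

Comparing this estimate against pg_relation_size() on a small sample, as suggested above, is the most reliable check.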
Folks,
I was using PostgreSQL 8.x in development environment when one day I
started getting all kinds of low-level errors while running queries and
eventually had to reinstall. Maybe it was salvageable but since it was a
test database anyway it didn't matter.
We use PostgreSQL 9 on our
On Tue, May 28, 2013 at 9:48 AM, 高健 luckyjack...@gmail.com wrote:
Hello:
I created a table, and found the file created for that table is about 10
times of that I estimated!
The following is what I did:
postgres=# create table tst01(id integer);
CREATE TABLE
postgres=#
postgres=#
Nikhil,
* Nikhil G Daddikar (n...@celoxis.com) wrote:
We use PostgreSQL 9 on our production server and I was wondering if
there is a way to know when pages get corrupted.
It's not great, but there are a few options. First is to use pg_dump
across the entire database and monitor the PG
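One cheap way to apply that advice is a periodic dump to /dev/null, which forces a full read of every table; any read error surfaces as a non-zero exit status. A sketch (database name and error-log path are placeholders; this needs a reachable server):

```shell
#!/bin/sh
# Cron-able read-everything check against heap corruption.
if ! pg_dump mydb > /dev/null 2> /tmp/pgdump_check.err; then
    echo "pg_dump failed - possible page corruption, see /tmp/pgdump_check.err"
fi
```

Note this only proves the heap is readable; it says nothing about index corruption, for which a periodic REINDEX is one crude workaround.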
On 05/27/2013 08:13 PM, Edson Richter wrote:
I think the use of PITRTools is probably up your alley here.
JD
Assume I know nothing about PITRTools (which I really don't!), can
you elaborate a bit more on your suggestion?
It is an open source tool specifically for working with