I am going to install PostgreSQL 9.0 for my running application, which is
currently on 8.1.
My DB size is 3 GB.
Server Specification:
dual-core 4 cpu
RAM: 32 GB
OS: Centos
What will be good settings for DB parameters such as shared_buffers,
checkpoint_segments, etc.?
Any help, please.
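For reference, the usual community rules of thumb for a dedicated 32 GB box give a starting point roughly like the fragment below; the exact figures are assumptions to be validated with your own benchmarks, not settings taken from this thread.

```ini
# postgresql.conf -- assumed starting values for a dedicated 32 GB server
shared_buffers = 8GB            # ~25% of RAM is the common rule of thumb
effective_cache_size = 24GB     # ~75% of RAM; a planner hint, not an allocation
checkpoint_segments = 32        # raise from the default of 3 for bulk writes
work_mem = 16MB                 # per sort/hash and per connection; keep modest
maintenance_work_mem = 512MB    # speeds up VACUUM and CREATE INDEX
```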
Hi!
Tom Lane writes:
> So far as I can see offhand, the only way you'd get that error message
> is if to_tsquery were declared to take OID not regconfig as its first
> argument.
>
> > I suspect it has to do with the Tsearch2-compatibility modules from
> > contrib - these were compiled and insta
I have same question
My computer is running a POS with a Postgres 8.9.11 database
Ram : 16GB
OS : Windows 2008 R2
CPU XEON 2G
User : 50-60 users (~ 200 connections; I increased Windows
SharedSection=1024,20480,1024 for > 125 connections).
DISK : RAID 1
What will be good settings for DB parameter
Hello,
When reindexing, we keep the previous and new reltuples/relpages ratio
in a reference table in order to track the reindex efficiency.
We also have a maintenance job that compares this table with the stats
from pg_class and automatically reindexes the relations where the ratio
degraded t
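The degradation check described above can be sketched like this (a minimal illustration; the 0.7 threshold and the function names are my assumptions, not the actual maintenance job):

```python
def tuples_per_page(reltuples: float, relpages: int) -> float:
    """Density from pg_class; a value well below the post-reindex
    reference suggests index bloat."""
    return reltuples / max(relpages, 1)

def needs_reindex(ref_tuples, ref_pages, cur_tuples, cur_pages, factor=0.7):
    """Flag a relation when its current tuples-per-page density has fallen
    below `factor` times the density recorded just after the last reindex."""
    return (tuples_per_page(cur_tuples, cur_pages)
            < factor * tuples_per_page(ref_tuples, ref_pages))
```

For example, a relation whose page count doubled at a constant row count has halved its density and would be flagged.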
For now I just ignore the first exception.
BEGIN
  -- the first call may raise a spurious datatype mismatch; swallow it
  PERFORM * FROM dblink_get_result('myconn') AS x (t text);
EXCEPTION WHEN datatype_mismatch THEN NULL;
END;
-- the second call fetches the actual result
PERFORM * FROM dblink_get_result('myconn') AS x (t text);
thanks,
Marc Mamin
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh
SSD caveats are well described here:
http://www.postgresql.org/about/news.1249
HTH,
Marc Mamin
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Allan Kamau
Sent: Donnerstag, 11. November 2010 07:42
To: Postgres General P
Hello
I have Postgres 8.3.12 and I have the following issue:
I have a table
create table test(
table_id integer,
datetime timestamp,
MMSI integer,
lat real,
lng real
);
and I bulk insert data into this table with COPY.
A tiny portion of the data in the file is wrong. F
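One way to cope with a few bad rows (an illustration, not the approach chosen in the thread) is to pre-validate the file against the table's column types and feed COPY only the clean rows:

```python
import csv
from datetime import datetime

def is_valid(row):
    """Check one tab-separated row against the test table's columns:
    table_id int, datetime timestamp, MMSI int, lat real, lng real."""
    if len(row) != 5:
        return False
    try:
        int(row[0])
        datetime.strptime(row[1], "%Y-%m-%d %H:%M:%S")
        int(row[2])
        float(row[3])
        float(row[4])
    except ValueError:
        return False
    return True

def split_rows(lines):
    """Split input lines into COPY-ready rows and rejects to inspect later."""
    good, bad = [], []
    for row in csv.reader(lines, delimiter="\t"):
        (good if is_valid(row) else bad).append(row)
    return good, bad
```

The good rows can then be re-serialized and streamed to COPY, while the bad ones go to a reject file for inspection.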
Le 11/11/2010 13:01, Vangelis Katsikaros a écrit :
> Hello
>
> I have postrges 8.3.12 and I have the following issue:
>
> I have a table
> create table test(
> table_id integer,
> datetime timestamp,
> MMSI integer,
> lat real,
> lng real,
> );
>
> and I bulk insert data
On Wed, Nov 10, 2010 at 12:39 PM, John R Pierce wrote:
> My developers are complaining about the lack of support for 2 phase commit
> in this scenario. Can we get any mileage on PREPARE TRANSACTION in a
> dblink sort of environment like this?
>
Yes, that's an ideal case for this. We use it ou
On Wed, Nov 10, 2010 at 5:20 PM, Tom Lane wrote:
> Not in libpq. libpq is just a passive library, it can't cause actions
> to happen when you aren't calling it. So there's no point in a
> callback: you might as well just test for occurrences of a NOTIFY at
> times when you're prepared to handle
On Thu, Nov 11, 2010 at 8:05 AM, Guillaume Lelarge
wrote:
> Le 11/11/2010 13:01, Vangelis Katsikaros a écrit :
>> Hello
>>
>> I have postrges 8.3.12 and I have the following issue:
>>
>> I have a table
>> create table test(
>> table_id integer,
>> datetime timestamp,
>> MMSI integer
On Thu, Nov 11, 2010 at 2:59 AM, AI Rumman wrote:
> Server Specification:
> dual-core 4 cpu
> RAM: 32 GB
> OS: Centos
> What will be good settings for DB parameters such as shared_buffers,
> checkpoint_segment and etc.
I'll take this one ... :) On my 24GB quad-core Opteron servers
running
On Thu, Nov 11, 2010 at 3:30 AM, tuanhoanganh wrote:
> My Computer is running POS with Postgres 8.9.11 database
> Ram : 16GB
> OS : Windows 2008 R2
> CPU XEON 2G
> User : 50-60 user (connect ~ 200 connects, I increase Windows
> SharedSection=1024,20480,1024 for > 125 connects).
> DISK : RAID
On Thu, Nov 11, 2010 at 1:42 AM, Allan Kamau wrote:
> After googling I found little recent content (including survival
> statistics) on using SSDs in a write-intensive database environment.
>
We use the Texas Memory RAMSan-620 external disk units. It is
designed specifically to survive high writ
> On Thu, Nov 11, 2010 at 3:30 AM, tuanhoanganh wrote:
>> My Computer is running POS with Postgres 8.9.11 database
Not sure which version that is. There's nothing like 8.9.11 ...
>> Ram : 16GB
>> OS : Windows 2008 R2
>> CPU XEON 2G
>> User : 50-60 user (connect ~ 200 connects, I increase Win
Hello,
When choosing an SSD drive you need to consider:
* the number of writes a particular sector can take, about 100k to 200k,
after which the sector will fail
* in case the DB grows, the limited size of those drives.
> As part of datamining activity. I have some plpgsql functions
> (executed in parallel, up to 6
ZFS has an option to use an SSD as cache for the spinning drives. ZFS
under Solaris has turned in some really good IO numbers. The problem is
that, with the new Sun, I am not feeling so good about the open nature of
Solaris. ZFS performance under BSD, I have read, does not match ZFS under
Solaris.
On 11/11
On Thu, Nov 11, 2010 at 08:30:16AM -0500, Vick Khera wrote:
> On Thu, Nov 11, 2010 at 2:59 AM, AI Rumman wrote:
> > Server Specification:
> > dual-core 4 cpu
> > RAM: 32 GB
> > OS: Centos
> > What will be good settings for DB parameters such as shared_buffers,
> > checkpoint_segment and etc.
On 11/11/2010 03:05 PM, Guillaume Lelarge wrote:
You should look at pgloader, which will use COPY to put your data
in your table and find the lines in error. Of course, it takes time to
detect lines in error. But at least, all "good" lines will be in your
table, and all "bad" lines will b
A while back, I thought I remembered seeing a Mac OS X client for PostgreSQL
which had a feature where it would display a graphic schema of whatever
database you connect to but I can't seem to find it again (web searching.) I
did come across one post which said that this was a planned feature
Am 11.11.2010 16:40, schrieb David Siebert:
ZFS has an option to use an SSD as cache for the spinning drives. ZFS
under Solaris has turned in some really good IO numbers. The problem is
with the new Sun I am not feeling so good about the open nature of
Solaris. ZFS performance under BSD I have re
Hey Aram,
I recommend dbWrench by Nizana. It has nice synchronization capabilities,
forward/reverse engineering, and supports many built-in PostgreSQL types
and user-defined types as well.
NB: it is a commercial application.
Another alternative is MicroOLAP Database Designer.
Both tools a
On 11/11/2010 09:50 AM, Aram Fingal wrote:
> A while back, I thought I remembered seeing a Mac OS X client for PostgreSQL
> which had a feature where it would display a graphic schema of whatever
> database you connect to but I can't seem to find it again (web searching.)
> I did come acros
Le 11/11/2010 17:46, Vangelis Katsikaros a écrit :
> On 11/11/2010 03:05 PM, Guillaume Lelarge wrote:
>>
>> You should better look at pgloader which will use COPY to put your data
>> in your table and found the lines in error. Of course, it takes time to
>> detect lines in error. But at least, all
Le 11/11/2010 18:58, Rob Sargent a écrit :
>
>
> On 11/11/2010 09:50 AM, Aram Fingal wrote:
>> A while back, I thought I remembered seeing a Mac OS X client for PostgreSQL
>> which had a feature where it would display a graphic schema of whatever
>> database you connect to but I can't seem to f
Windows uses % as the zone (scope) ID delimiter in IPv6 link-local addresses.
Trying to use it like
create temp table test (test inet) on commit drop;
insert into test values('fe80::f1ea:f3f4:fb48:7155%10')
returns error
ERROR: invalid input syntax for type inet: "fe80::f1ea:f3f4:fb48:7155%10"
LINE 2: insert into test values('fe
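A client-side workaround (an assumption on my part, not something proposed in the thread) is to strip the Windows zone suffix before handing the address to the inet column:

```python
import ipaddress

def strip_zone(addr: str) -> str:
    """Drop a '%<zone>' suffix (Windows scope ID) so the address
    parses as a plain IPv6 literal acceptable to Postgres inet."""
    host = addr.split("%", 1)[0]
    ipaddress.ip_address(host)  # raises ValueError if still not a valid address
    return host
```

Note that the zone ID is lost, which matters if you need to distinguish interfaces.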
Hello,
may schemaspy help you ?
http://schemaspy.sourceforge.net/sample/relationships.html
HTH,
Marc Mamin
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Aram Fingal
Sent: Donnerstag, 11. November 2010 17:51
To: Post
On Thu, Nov 11, 2010 at 11:45 AM, David Fetter wrote:
>> max_prepared_transactions = 100 # guideline: same number as max_connections
>
> This should be either 0 (no 2PC) or bounded from below by
> max_connections.
>
In general, sure. I have one app that uses 2PC, and it makes maybe 2%
of the
On 11/11/10 5:17 AM, Vick Khera wrote:
On Wed, Nov 10, 2010 at 12:39 PM, John R Pierce wrote:
My developers are complaining about the lack of support for 2 phase commit
in this scenario. Can we get any mileage on PREPARE TRANSACTION in a
dblink sort of environment like this?
Yes, that's an
On 11/11/10 12:45, Marc Mamin wrote:
Hello,
may schemaspy help you ?
http://schemaspy.sourceforge.net/sample/relationships.html
HTH,
Marc Mamin
Thanks for this link! I've been looking for something that can run on
the command line for quite a while, and even better it outputs to html
and
Marc,
> may schemaspy help you ?
> http://schemaspy.sourceforge.net/sample/relationships.html
Thank you *VERY* much for suggesting this tool!
-- Gary Chambers
/* Nothing fancy and nothing Microsoft! */
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to yo
Thanks, each of you for all the suggestions on schema generating tools. I
haven't had a chance to evaluate them all yet but DBVisualizer looks pretty
good. In the meanwhile I found SQL Power Architect, which is also free/open
source, and can do this kind of diagramming but is not as good as DBV
Aram Fingal wrote on 11.11.2010 22:45:
I was thinking of reporting back to this forum with
advantages/disadvantages of each tool, as I see it, but realized that
I was rapidly getting too far off topic for a list focused
specifically on PostgreSQL.
I don't think this would be off-topic here if yo
Hello,
I am interested in sharing/replicating data between different databases, and
I'd like to ask if what I'd like to do is possible in postgresql. I have read a
fair amount of documentation and was looking forward to PostgreSQL 9, but I
don't think it will do for me what I want.
I have an a
Hi All !
I just started with Postgres and wish to use a Postgres DB as a logging
destination for my Apache webserver in my Debian environment [this was not
the reason to start with Postgres]. Does someone know if this works [there
is no package in my distribution]?
Any tips would be great!
br++mabra
Thomas Kellerer writes:
> Aram Fingal wrote on 11.11.2010 22:45:
>> I was thinking of reporting back to this forum with
>> advantages/disadvantages of each tool, as I see it, but realized that
>> I was rapidly getting too far off topic for a list focused
>> specifically on PostgreSQL.
> I don't t
On Thu, Nov 11, 2010 at 3:05 PM, Demitri Muna
wrote:
> Hello,
>
> I am interested in sharing/replicating data between different databases, and
> I'd like to ask if what I'd like to do is possible in postgresql. I have read
> a fair amount of documentation and was looking forward to PostgreSQL 9,
On 11/11/10 14:48, ma...@manfbraun.de wrote:
Hi All !
I just started with postgres and wish to use a postgres db as a logging
destination for my apache webserver in my debian environment [this was not
the reason to start with postgres]. Does someone, if this works [thre is no
package in my distr
Hi all,
I've got an odd problem. I've got a column with a non-standard
date/time in it that I carve up and re-cast as a timestamp. This works
fine. Now though, I am trying to check/restrict my results to dates
before now() and it's telling me that the column doesn't exist.
http://pastebin.com/
On Thu, Nov 11, 2010 at 2:26 PM, Digimer wrote:
> http://pastebin.com/sExiBynp
> Any hints on what I am doing wrong?
You can't refer to a column alias name in the WHERE clause,
unfortunately. You'll need to either nest this query in another query
to use the alias name, or you have to use a p
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Marc Mamin
> Sent: Thursday, November 11, 2010 11:46 AM
> To: Aram Fingal; Postgres-General General
> Subject: Re: [GENERAL] Schema tool
>
> Hello,
>
> may schemaspy
Search the PG performance mailing list archive. There have been some good
posts about SSD drives there related to PG use.
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Allan Kamau
Sent: Wednesday, November 10, 2010 11:
Greetings, PostgreSQL gurus.
From my googling I know this has been discussed before, and from what I have
read it seems that the consensus is you can re-compile Postgres and increase
the block size from 8k to 16k or 32k to double or quadruple this limit.
I re-compiled the latest 9.0.1 source rp
"Mark Mitchell" writes:
> Is there are hard limit of 1600 that you cannot get around?
Yes.
Generally, wanting more than a few dozen columns is a good sign that you
need to rethink your schema design. What are you trying to accomplish
exactly?
regards, tom lane
On 11/11/10 9:24 PM, Tom Lane wrote:
"Mark Mitchell" writes:
Is there are hard limit of 1600 that you cannot get around?
Yes.
Generally, wanting more than a few dozen columns is a good sign that you
need to rethink your schema design. What are you trying to accomplish
exactly?
ind
Hey Mark,
Yeah, I can't imagine an entity in a real project with more than 100
columns. It's a rare case.
But if entities (rows/tuples) of some class (table) can contain a variable
set of columns (properties), you can look at the hstore contrib module.
2010/11/12 John R Pierce
> On 11/11/10 9:2
On Fri, Nov 12, 2010 at 10:17:50AM +0300, Dmitriy Igrishin wrote:
> Hey Mark,
>
> Yeah, I can't imagine an entity in a real project even with more than 100
> columns. Its rare case.
> But if you entities (rows/tuples) of some class (table) can contains
> variable
> set of columns (properties) you
Hey Peter,
Unfortunately, there are no indexes on arrays (only on expressions).
With hstore we can easily create a GiST index for effective access.
2010/11/12 Peter Bex
> On Fri, Nov 12, 2010 at 10:17:50AM +0300, Dmitriy Igrishin wrote:
> > Hey Mark,
> >
> > Yeah, I can't imagine an entity in a re
On Fri, Nov 12, 2010 at 10:43:14AM +0300, Dmitriy Igrishin wrote:
> Hey Peter,
>
> Unfortunately, there is no indexes on arrays (only on expressions).
> With hstore we can easily create GiST index for effective access.
True. In my project I only ever needed to search on a particular key,
and I ma