Benjamin Arai wrote:
For the most part the updates are simple one-liners. I currently commit
in large batches to increase performance but it still takes a while, as
stated above. From evaluating the computer's performance during an
update, the system is thrashing both memory and disk. I am
Benjamin Arai wrote:
What is the current maximum number of tables per database? Also, does
having more tables slow down performance in any way?
For most cases, the answer is no. However, once you get near 6 figure
table counts, pg_catalog ends up being pretty massive. The problem is
that
Bob Pawley wrote:
Thank you.
What clause is acceptable?
INSERT has no conditional clause. You send INSERT, it goes to the
database -- the database either accepts it or rejects it (unique key
violations, bad data, etc).
If you want to INSERT if some value = 'xxx', wrap an IF THEN
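As a sketch (table and column names below are hypothetical), the IF THEN wrapper lives inside a PL/pgSQL function; a plain-SQL alternative puts the condition in an INSERT ... SELECT:

```sql
-- Inside a PL/pgSQL function: insert only when the condition holds.
IF some_value = 'xxx' THEN
    INSERT INTO items (code) VALUES (some_value);
END IF;

-- Plain SQL alternative: the SELECT supplies the condition, so the
-- statement inserts either one row or none.
INSERT INTO items (code)
SELECT 'xxx'
WHERE NOT EXISTS (SELECT 1 FROM items WHERE code = 'xxx');
```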
Johnny Ljunggren wrote:
William Yu wrote:
What I'm not sure about though is what would happen with 2 slaves if
main goes down AND the connection between Center 1 and 2 is also
disconnected. Others with more experience with the various
master/slave replication solutions might want to chime
Johnny Ljunggren wrote:
Hmm, okay. We're now looking at another, simpler, way to do it. The same
setup but the clients on Center 1/2 will connect directly to the Main
center (2Mb leased line). The databases on Center 1/2 will then just be
a replica of the Main databases.
The biggest issue
Johnny Ljunggren wrote:
1. Replication - multimaster
I'll try to explain the setup to the best of my ability:
Three centers:
Main center - database with a backup database
Center 1 - database with a backup database
Center 2 - database with a backup database (same as center 1)
Application on the
Andrus wrote:
Can you use Postgres savepoints from VFP ?
sqlexec('ROLLBACK TO mysavepoint') and even sqlexec('ROLLBACK') cause C5
error.
I have no experience w/ savepoints yet. However, I will say that
sometimes ODBC will hijack your commands and alter them so you may want
to try
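For reference, the server-side savepoint syntax is below (a sketch with a made-up table; whether it survives the ODBC driver's rewriting is the open question). Note ROLLBACK TO only works inside an explicit transaction block:

```sql
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT mysavepoint;
INSERT INTO t VALUES (2);          -- suppose this part must be undone
ROLLBACK TO SAVEPOINT mysavepoint; -- spelled-out form, in case the driver mangles the short one
COMMIT;                            -- the first insert survives, the second does not
```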
dll files (odbc32.dll, odbcint.dll, odbccp32.dll)
with application?
This is a lot simpler for end user, no manual config required.
William Yu [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
VFP will run in WINE up to VFP version 8. But the people I
Onyx wrote:
Hello Everyone,
Does anyone know if PostgreSQL supports running on dual processors, and
will efficiently utilize the dual processors for performance? More
specifically, will PostgreSQL utilize the dual processor performance of
using dual-core AMD chips? Any information would be
Mark Rae wrote:
On Wed, Oct 19, 2005 at 12:32:36AM -0700, William Yu wrote:
Expect to need to upgrade to later Linux kernels though. The previous
kernel on this server was 2.6.9+ (FC3 64-bit) -- promptly kernel panic'd upon
install of the DCs. FC3 installer did the same thing. Went to FC4
(2.6.11
Chris Travers wrote:
1) Efficiency of network throughput
2) Tolerance to attempts at repeat transactions before replication
(emptying an account multiple times)
3) Availability of a transaction.
We ended up having to give up #1. It's possible to have our transactions
routed to multiple
Our own personal IM :)
Chris Travers wrote:
The delay will by definition defeat any guarantee of financial integrity
if you are allowing read-write operations to the replica without
checking with some sort of central authority. At very least, the
central authority should look for suspicious
Chris Travers wrote:
Why not have the people who have rights to review this all write to the
master database and have that replicated back? It seems like latency is
not really an issue. Replication here is only going to complicate
What master database? Having a single master defeats the
Chris Travers wrote:
I guess I am thinking along different lines than you. I was thinking
that the simplest solution would be to have master/slave replication for
*approved* transactions only and no replication for initial commits
prior to approval. This makes the assumption that a single
for pseudo-multimaster replication, and what about
software like SQLRelay? Wouldn't these suites help to some extent?
Looking forward to being enlightened.
Cheers,
Aly.
William Yu wrote:
Carlos Henrique Reimer wrote:
I read some documents about replication and realized that if you plan
on using
On Thu, Aug 25, 2005 at 09:01:49AM +0200, William Yu wrote:
It provides pseudo relief if all your servers are in the same building.
Having a front-end pgpool connector pointing to servers across the world
is not workable -- performance ends up being completely decrepit due to
the high latency
swoop and
validating/posting it afterwards and (2) data can/will sit in pending
for a period of time until it is acted upon with somebody/some server
with the proper authority. Hence users aren't expecting results to pop
up on the screen the moment they press the submit button.
William Yu wrote:
Tino Wildenhain wrote:
Which is the problem we face. Great, you've got multiple servers for
failover. Too bad it doesn't do much good if your building gets hit
by fire/earthquake/hurricane/etc.
This would remove the application using that data too, or not? ;)
Yes and no. If your DB is an
David Goodenough wrote:
The most obvious one that does exactly this (generic multi-master
replication) is Lotus Domino. It is not a relational DB, but not sufficiently
far off to stop the analogy.
Domino marks each document with a binary value which identifies the
server (built from a hash of
Carlos Henrique Reimer wrote:
I read some documents about replication and realized that if you plan on
using asynchronous replication, your application should be designed from
the outset with that in mind because asynchronous replication is not
something that can be easily “added on” after the
There's nothing on the server side that needs to be configured for
Windows clients to access Postgres via ODBC. As long as TCP/IP is turned
on. Just add a data source and configure the server
ip/name/port/database/etc.
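For completeness, the two server-side settings that do need to be in place for any TCP/IP client are sketched below (addresses and names are examples; on 7.x the postgresql.conf knob is `tcpip_socket = true` rather than `listen_addresses`):

```
# postgresql.conf -- accept TCP/IP connections (8.x syntax)
listen_addresses = '*'
port = 5432

# pg_hba.conf -- example entry; subnet, database, and user are made up
host    mydb    myuser    192.168.0.0/24    md5
```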
[EMAIL PROTECTED] wrote:
Hi,
I have seen a bunch of different
My take on this. For a while I've tried to setup my own internal
documentation system for my company but it turned out to be half-baked,
unfriendly to use, etc. Mainly because I have to spend most of my time
doing product development and end up with zero time to work on the
documentation.
on the server
usage pattern.
Guy Rouillier wrote:
William Yu wrote:
On other note -- if you are thinking about SMP Opteron, you may
actually get better performance from 1x275 (Dual Core 2.2ghz) versus
2x248 (2.2ghz). Full duals have twice the bandwidth but without good
NUMA support, memory has
peter pilsl wrote:
I was now wondering which hardware would fit this server best. I think
about 2GB RAM, a fast Raid1 and now I'm not sure about the CPU.
I was considering 64-bit AMD: an Athlon 64 FX-51 or an Opteron 2.2GHz.
The hosting system will be a 64-Bit linux.
While the A64 and Opteron are
Depends for what purpose. I've found that if you're heavy into doing
user interface stuff on Windows, Delphi is great for that. Lots of
built-in widgets plus zillions of 3rd party addins makes it easy to drag
and drop your way to full-blown apps.
For data manipulation, my opinion is nothing
Aceshardware.com has a good UI for looking at Spec scores. They imported
all the results into their DB for easy comparisons between processors.
Single CPU (individual queries):
-in to their systems.
William Yu wrote:
The numbers don't have the latest dual core Opterons yet. (Don't see
them on spec.org yet either.) My random guess right now, 4x2 system
would probably be about 140 SpecINT_rate. It's looking like it's faster
to have a DC Opteron w/ 1 memory bank versus a Dual Opteron w
the fastest processor with crappy I/O.
i guess the my $0.02 is in order here :-)
UC
On Saturday 23 April 2005 01:06, William Yu wrote:
Looked on AMD's website. 132 for 4x875 on Windows, 126 on Linux.
(Probably Intel compiler on Windows, gcc on Linux.) That gets AMD into
the $100K 16+ processor Sun
32-proc IBM boxes and 100+ Sun boxes. :)
Ben wrote:
We don't generally purchase monster machines. Sure, there are some
mainframes, but they are few and far between. Everything else doesn't
really have anything more than 32 procs.
Guy Rouillier wrote:
NTPT wrote:
Will I have some advantages, better performance etc. using postgres
7.4 or postgres 8.x on an Athlon64 system with a 64-bit Linux distro?
Are there some benchmarks available, or does someone have personal experience?
Or should I stay on the 32-bit platform for a while?
PG 8
Since I've upgraded my development server to FC2/FC3 64-bit, I
occasionally get these weird PostgreSQL semi-lockups. Once in a while, a
query (usually a big update of some type) will just stop dead. Doing a
ps -ef | grep postgres, I will see something like:
root 17034 1 0 21:41 ?
Tom Lane wrote:
William Yu [EMAIL PROTECTED] writes:
Doing a ps -ef | grep postgres, I will see something like:
root     17034     1  0 21:41 ?  00:00:00 gdb -q -x /dev/stdin postgres 9131
postgres  9131  2712  0 20:31 ?  00:00:00 postgres: postgres netdata [local] VACUUM
So where
Max wrote:
That would be quite an investment. Interesting. I wonder if we should put
Windows or Linux on it, since PostgreSQL 8.0 works on Windows.
What OS are you running on your quad opteron ?
Well here's one thing that you can bank on. You can get 64-bit Linux now
while 64-bit Windows is still
Max wrote:
And you absolutely want a 64-bit OS with that much memory. Even on our
development server with just 3GB of RAM, we see quite big jumps in
performance after moving from 32-bit Linux to 64-bit Linux. I'd hate to
think about 64GB of RAM being swapped in and out using PAE.
What's PAE ?
With a Quad Opteron (4 memory slots per CPU), you could put 64GB of RAM
onto a single machine using 4GB DIMMs in every slot.
The other option is to explore static memory storage. It's probably too
expensive to put your entire DB onto such a device but moving just the
WAL there would give you a
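Relocating just the WAL is mechanically simple; a sketch (paths are examples, and the server must be stopped first):

```
# Stop the server, move pg_xlog to the fast device, symlink it back.
pg_ctl stop -D /var/lib/pgsql/data
mv /var/lib/pgsql/data/pg_xlog /mnt/fastdisk/pg_xlog
ln -s /mnt/fastdisk/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl start -D /var/lib/pgsql/data
```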
Without memory, the prices are roughly:
4x846 = $5500
4x848 = $6500
4x850 = $8000
Memory costs would be:
16GB (1GB DIMMs) = $3000
32GB (2GB DIMMs) = $7500
64GB (4GB DIMMs) = $24000
128GB (8GB DIMMs) = $6
The prices on the 4GB 8GB DIMMs are bleeding edge of course. I see a
wide range of for
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of William Yu
Sent: Thursday, January 27, 2005 12:41 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Splitting queries across servers
With a Quad Opteron (4 memory slots per CPU), you could put 64GB of RAM
onto
Ralph van Etten wrote:
Ok, but then I have the same problem. The display field has to be
unique, and it needs a SELECT MAX in the INSERT query, which gives the
same problem, only not with the primary key but with possible
duplicate values in the display field.
If you absolutely need the
Ralph van Etten wrote:
I agree that a serial would be better.
But I think there are situations where a serial isn't convenient,
like when you want a primary key which consists of the current
year and a sequence number, like ('05', 1), ('05', 2), ('05', 3) etc.
With a sequence you must write extra
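A sketch of such a composite key (table and sequence names are made up); the "extra" work with a plain sequence is that it must be reset by hand at each year boundary:

```sql
-- Composite key: 2-digit year plus a per-insert sequence value.
CREATE SEQUENCE invoice_seq;

CREATE TABLE invoice (
    yr   char(2) NOT NULL DEFAULT to_char(now(), 'YY'),
    seq  integer NOT NULL DEFAULT nextval('invoice_seq'),
    PRIMARY KEY (yr, seq)
);

-- At the start of each new year:
SELECT setval('invoice_seq', 1, false);
```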
Take a look at Ruby on Rails.
http://www.onlamp.com/pub/a/onlamp/2005/01/20/rails.html
I haven't used it myself yet but looking through the above walkthrough,
seems pretty easy to make data entry web forms.
Chris Green wrote:
On Fri, Jan 21, 2005 at 06:19:53PM -0600, Andrew L. Gould wrote:
On
You may also want to test data=journal for ext3. Most of the time, this
is slower but for databases with logging and mail servers, it can be faster.
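The mount option goes in /etc/fstab; a sketch (device and mount point are examples):

```
# /etc/fstab -- journal file data as well as metadata on this filesystem
/dev/sdb1  /var/lib/pgsql  ext3  noatime,data=journal  0 2
```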
Mage wrote:
Hello,
Gabor Szima asked us to translate the letter below.
I read that ext3 writeback mode is recommended for PostgreSQL. I
I've run into these quirks before using ODBC.
If a table is huge, the ODBC driver will croak if you try to grab the
entire table all at once. I end up needing to use LIMIT/OFFSET
to get chunks of tables and then piece the table together on the client.
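The chunked fetch looks like this (table and key names are hypothetical; ORDER BY on a unique key keeps the chunks stable and non-overlapping between queries):

```sql
-- Fetch a big table in fixed-size chunks.
SELECT * FROM bigtable ORDER BY id LIMIT 10000 OFFSET 0;
SELECT * FROM bigtable ORDER BY id LIMIT 10000 OFFSET 10000;
-- ...keep increasing OFFSET by the chunk size until fewer
-- than 10000 rows come back.
```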
For a transaction encompassing a
It seems that upon dump/restore, UPPER indexes either aren't recreated
correctly or aren't listed somewhere the query planner can see them.
I first encountered this problem doing an upgrade from
7.3.7 to 7.4.6. I encountered it again while replicating a server
(same 7.4.6
(actual time=0.032..0.039 rows=2 loops=1)
Index Cond: ((upper((name)::text) >= 'NBC'::text) AND
(upper((name)::text) < 'NBD'::text))
Filter: (upper((name)::text) ~~ 'NBC%'::text)
Total runtime: 0.096 ms
(4 rows)
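When the restored index is missing or unusable, recreating the functional index by hand and refreshing planner statistics gets plans like the one above back; a sketch with example names:

```sql
-- Recreate the functional index and refresh planner statistics.
CREATE INDEX companies_upper_name_idx ON companies (upper(name));
ANALYZE companies;
```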
Tom Lane wrote:
William Yu [EMAIL PROTECTED] writes:
It seems that upon dump restore, UPPER
??? As a longstanding reader of the pgsql-
mailinglists, (including via news.postgresql.org on
occasion), all I see is some outsiders trying to help
us fix a problem that does not exist. And yes, I
have read most of the messages that have passed by in
these threads. After all that, I still
My guess is that you will get better performance from a similarly priced
Dual Opteron for the following reasons:
1) OS-X is not 64-bit yet, 64-bit Linux/BSD OS's are available
2) GCC is far better tuned for x86 than PowerPC/Itanium/etc
3) Postgres *seems* to prefer Opteron's on-die memory
complicated
for queries though, since I use null to indicate an unknown end date
instead of the Y2K problem solution below.
-Ian
William Yu [EMAIL PROTECTED] 10/15/04 12:46 PM
Have you thought about unifying the audit + the current table and add
from/to datestamps for every record?
Example:
from_dt
Have you thought about unifying the audit + the current table and add
from/to datestamps for every record?
Example:
from_dt to_dt value
9/1/2004 9/30/2004 ABC
9/30/2004 10/5/2004 XYZ
10/6/2004 12/31/ 123
This would let you use the following query on the same table whether you
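The query presumably ranges over the datestamp pair; a sketch using the column names from the example (half-open intervals assumed, so each date matches exactly one row):

```sql
-- Value in effect on a given date, from the unified audit/current table.
SELECT value
  FROM history
 WHERE DATE '2004-10-01' >= from_dt
   AND DATE '2004-10-01' <  to_dt;
```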
HM wrote:
I would like to know why the database seems frozen.
...
The problem happened at 04:15 AM during a VACUUM FULL and could occur
at any moment
VACUUM FULL locks your tables.
Reece Hart wrote:
On Wed, 2004-03-10 at 18:23, William Yu wrote:
At this time, only Newisys offers a Quad Opteron box and it carries a hefty
premium. (Sun's upcoming 4X machine is a rebadged Newisys machine and
it's possible HP's will be also.)
There are several vendors with quad opterons