Say you have 8 different data models that are related enough to share
roughly 70% of their fields, but which fields are shared varies from
model to model. Also, within any given model, some fields can be empty.
The business logic is that data is pulled from all the data models and
put into a common
Any recommendations for vendors that can build custom servers?
Specifically, Opteron-based with SCSI RAID.
Chris
I'm pretty sure the answer to this is no, but just in case I've missed
something. Is there a way to configure the server so it only logs
for specific users?
Chris
I believe it is possible for a superuser to do something like
"ALTER USER victim SET log_min_messages = whatever", so that the
log verbosity is different for different users.

regards, tom lane
I'll try that and see how it works.
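For reference, Tom's suggestion looks roughly like this (the user name
and level here are placeholders; the per-user setting takes effect at
that user's next session, and must be issued by a superuser):

```sql
-- Raise log verbosity for one user only; other sessions keep the
-- server-wide default.
ALTER USER victim SET log_min_messages = debug1;

-- Return that user to the server default:
ALTER USER victim RESET log_min_messages;
```

In later releases ALTER ROLE is the preferred spelling of the same thing.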
On 10/7/05, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
http://lnk.nu/prnewswire.com/4dv.pl

--
Jim C. Nasby, Sr. Engineering Consultant    [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461
I remember a few months back when someone hit the emergency power
switch to the whole floor where we host at Internap. Subsequently
the backup power system had a cascading failure. Livejournal, who
also hosts there, was up all night and into the next day restoring
their mysql databases after a b
On 11/9/05, Nicolay A Vasiliev <[EMAIL PROTECTED]> wrote:
Hello there! I'd like to ask the PostgreSQL community about a
conceptual thing. We develop our web sites using MySQL. We like it for
its high speed and full-text search feature. But nowadays our projects
are growing fast and we're afraid our MySQ
I have a very strange issue that I'm not sure how to debug. This is
on PostgreSQL 8.0.0rc5, FreeBSD 5.4. Yes, I know I should be upgrading;
it's scheduled, but it can't happen for another week, and for all I
know this might still be an issue in current versions of PostgreSQL.
First
Do I need to do a full dump/restore when migrating from 8.0 rc5 to the
latest 8.0.3?
Chris
> > One other thing about our particular setup is that we use separate
> > schema's for all user data and the functions go in the public schema.
> > So before executing this function we issue something like 'set
> > search_path to username,public'.
>
> Mph. Are you expecting the function to work for mor
On 7/13/05, Matt McNeil <[EMAIL PROTECTED]> wrote:
> Greetings,
> I need to securely store lots of sensitive contact information and
> notes in a freely available database (e.g. PostgreSQL or MySQL) that will be
> stored on a database server which I do not have direct access to.
> This database will
I'm trying to run two database clusters on the same box. Both are
bound to their own IP but use the same port. I can't see a way to
change the location of the lockfile on a per cluster basis though. Is
there one?
Chris
On 8/5/05, snacktime <[EMAIL PROTECTED]> wrote:
> I'm trying to run two database clusters on the same box. Both are
> bound to their own ip but use the same port. I can't see a way to
> change the location of the lockfile on a per cluster basis though. Is
> there o
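In case it helps anyone searching the archives: the collision here is
usually the Unix-socket lock file, which is keyed by port in a shared
socket directory, so two clusters on the same port fight over
/tmp/.s.PGSQL.5432.lock even when bound to different IPs. One way
around it is to give each cluster its own socket directory. A sketch
(paths hypothetical; the parameter is unix_socket_directory in older
releases, unix_socket_directories from 9.3 on):

```ini
# postgresql.conf for the second cluster
listen_addresses = '192.168.0.2'         # this cluster's own IP
port = 5432                              # same port as the first cluster
unix_socket_directory = '/var/run/pg2'   # sidesteps the /tmp lock file collision
```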
I've been going back and forth on the best way to model this.
A user can have one to many bill and ship addresses.
An order can have one bill address and one to many ship addresses.
Let's assume I have a single address table, with an address_type
column that is a foreign key to the address_types t
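One common shape for this, sketched with hypothetical table and column
names (not taken from the original post) — a single address table typed
by a foreign key, with join tables carrying the one-to-many links:

```sql
CREATE TABLE address_types (
    type_code text PRIMARY KEY            -- e.g. 'bill', 'ship'
);

CREATE TABLE addresses (
    address_id   serial PRIMARY KEY,
    address_type text NOT NULL REFERENCES address_types (type_code),
    street       text,
    city         text
);

-- Join tables carry the one-to-many relationships:
CREATE TABLE user_addresses (
    user_id    integer NOT NULL,
    address_id integer NOT NULL REFERENCES addresses (address_id),
    PRIMARY KEY (user_id, address_id)
);

CREATE TABLE order_addresses (
    order_id   integer NOT NULL,
    address_id integer NOT NULL REFERENCES addresses (address_id),
    PRIMARY KEY (order_id, address_id)
);
```

Enforcing "exactly one bill address per order" would take something
extra, such as a partial unique index on the join table.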
I'm working through the architecture design for a new product. We
have a small group working on this. It's a web app that will be using
ruby on rails. The challenge I'm running into is that the latest
conventional wisdom seems to be that since obviously databases don't
scale on the web, you shou
I have an application that processes financial transactions. Each of
these transactions needs to be sent with a sequence number. It starts
at 1 and resets to 1 once it hits 8000. I'm trying to think of the
most elegant solution without having to create a sequence for each
user (there are hundr
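One alternative to creating a sequence per user is a single counters
table with an atomic wrap-around update. A sketch, with hypothetical
names (RETURNING requires 8.2 or later; the row lock serializes
concurrent transactions for the same user):

```sql
CREATE TABLE txn_counters (
    user_id integer PRIMARY KEY,
    seq     integer NOT NULL DEFAULT 0
);

-- Advance and fetch in one statement; wraps 8000 -> 1.
UPDATE txn_counters
   SET seq = CASE WHEN seq >= 8000 THEN 1 ELSE seq + 1 END
 WHERE user_id = 42
RETURNING seq;
```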
I have an application that processes credit card transactions, and
contains a table called authorizations. The authorizations table
contains information returned by the bank necessary to capture the
transaction. Nothing should block the application from inserting new
rows into the authorizations
What's a safe way to kill a specific connection to the database? I'm
testing some code that reconnects if a connection has timed out or
gone bad and I need to simulate a connection that has gone away.
Chris
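A sketch of one way to do this (the pg_stat_activity column is procpid
in the 8.x line and was renamed pid in 9.2; pg_terminate_backend
arrived in 8.4 — on older servers you would `kill <pid>` from the shell
instead; the pid shown is hypothetical):

```sql
-- Find the backend serving the connection you want to drop:
SELECT procpid, usename, current_query
  FROM pg_stat_activity
 WHERE usename = 'appuser';

-- 8.4+: terminate it from SQL as a superuser:
SELECT pg_terminate_backend(12345);
```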
I'm re evaluating a few design choices I made a while back, and one
that keeps coming to the forefront is data separation. We store
sensitive information for clients. A database for each client isn't
really workable, or at least I've never thought of a way to make it
workable, as we have several
On 9/29/06, Just Someone <[EMAIL PROTECTED]> wrote:
I am using a similar solution, and I tested it with a test containing
20K+ different schemas. Postgres didn't show slowness at all even
after the 20K (over 2 million total tables) were created. So I have a
feeling it can grow even more.
That's g
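The per-client layout being discussed is roughly the following
(schema, table, and column names hypothetical):

```sql
-- One schema per client, identically shaped tables inside each:
CREATE SCHEMA client_0001;

CREATE TABLE client_0001.orders (
    order_id serial PRIMARY KEY,
    total    numeric NOT NULL
);

-- Per connection, point at the right client's data:
SET search_path TO client_0001, public;
```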
First, thanks for all the feedback. After spending some more time
evaluating what we would gain by using slony I'm not sure it's worth
it. However I thought I would get some more feedback before
finalizing that decision.
The primary reason for looking at replication was to move cpu
intensive S
Sorry wrong list, this was meant for the slony list...
Chris
1. create table test (id int4, aaa int4, primary key (id));
2. insert into test values (0,1);
3. Execute "update test set aaa=1 where id=0;" in an endless loop
I just did the test on PostgreSQL 7.4.12 and MySQL 5.0.22 (MyISAM;
sorry, I had no InnoDB configured). Ubuntu 6.06, AMD64, 2GB, default
dat
I can't seem to find an example of how to add restrictions to the
where clause of an updateable view created via the rule system. For
example I don't want the update to complete if a where clause is
missing entirely, and in some cases I want to only allow the update if
the where clause specifies a
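Rules can carry their own qualification, which rewrites only the
updates that match it. A minimal sketch with hypothetical names — note
that a rule cannot detect that the client omitted a WHERE clause; it
can only constrain which underlying rows the rewritten update touches:

```sql
CREATE TABLE accounts (id integer PRIMARY KEY, balance numeric, active boolean);

CREATE VIEW active_accounts AS
    SELECT id, balance FROM accounts WHERE active;

-- Rewrite updates through the view, but only ever touch the
-- underlying row matching the view row being updated:
CREATE RULE active_accounts_upd AS
    ON UPDATE TO active_accounts
    DO INSTEAD
    UPDATE accounts
       SET balance = NEW.balance
     WHERE accounts.id = OLD.id
       AND accounts.active;
```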
Anyone have any tips for minimizing downtime when upgrading? So far
we have done upgrades during scheduled downtimes. Now we are getting
to the point where the time required for a standard dump/restore is
just too long. What have others done when downtime is critical? The
only solution we have
On 6/16/06, Richard Huxton wrote:
The other option would be to run replication, e.g. slony to migrate from
one version to another. I've done it and it works fine, but it will mean
slony adding its own tables to each database. I'd still do it one
merchant at a time, but that should reduce your d
Both connection pooling and using the superuser with SET SESSION
AUTHORIZATION have their uses. You might have an application
that processes some type of transaction and inserts data into a users
schema or table, but where there are no user credentials available.
Then you might have a web i
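The superuser-pool pattern described above looks roughly like this
(role, schema, and table names hypothetical):

```sql
-- Pooled connection was opened as a superuser; no end-user
-- credentials are available, so drop to the target user explicitly:
SET SESSION AUTHORIZATION merchant_a;
SET search_path TO merchant_a, public;

INSERT INTO transactions (amount) VALUES (42.00);

-- Back to the pool's superuser before the connection is reused:
RESET SESSION AUTHORIZATION;
```

The caveat is that the application must guarantee the RESET before
handing the connection to the next request, or one client's session
could act as another's.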
Right now we are running mysql as that is what was there when I
entered the scene. We might switch to postgres, but I'm not sure if
postgres makes this any easier.
We run a couple of popular games on social networking sites. These
games have a simple economy, and we need to be able to time warp t
On Thu, Nov 20, 2008 at 4:06 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 20, 2008 at 4:36 PM, snacktime <[EMAIL PROTECTED]> wrote:
>> Right now we are running mysql as that is what was there when I
>> entered the scene. We might switch to postgres, but
Where I work we use mysql for a fairly busy website, and I'd like to
eventually start transitioning to postgres if possible. The largest
obstacle is the lack of replication as a core feature. I'm well aware
of the history behind why it's not in core, and I saw a post a while
back saying it would