Put all your eggs in one basket, and WATCH THAT BASKET.
Better yet, pay someone more reliable than yourself to watch it.
Preferably a well-paid and happy fox.
Or _maybe_ put your eggs in an invisible super-basket?
Not trolling, just checking the analogy integrity field.
M
--
If you have 6 app servers it's just daft to stick 6 NICs in your DB
server.
While there might be some cases where that makes sense, it most likely isn't
something you would want to do. I believe the original motivation was to solve
bandwidth congestion rather than security.
Switches are not security devices. While it is harder to sniff packets on
switches, you can't count on them to prevent hostile machines on the switch
from playing games with the ARP protocol. Also, I believe that if a switch
doesn't remember where a particular MAC address is, it will send the packet
to every port.
You would assign a different subnet to the connection, and then tell the
servers to connect to the PG server's address on that subnet. No other changes
required. Very odd setup though. If you want a 'private' connection
then use a switch, rather than needing umpty NICs in the DB server.
Got any suggestions now?!? I was sort of looking for more information /
insight on my postgresql.conf file... but it seems we had to get the "IS HE
A MORON" question answered :P
Anyhow, again thank you for any help you can lend...
Well, "try not to SHOUT" is a good suggestion. Also, how about p
Hi again,
It seems I posted in HTML before, sorry about that...
It seems I'm trying to solve the same problem as Richard Emberson had a
while ago (thread here:
http://archives.postgresql.org/pgsql-general/2002-03/msg01199.php).
Essentially I am storing a large number of large objects in the DB
s basically, as comparing
the price of a terabyte of ATA mirrored disks and the same TB on SCSI
hardware RAID is enlightening.
M
-Original Message-
From: Bradley Kieser [mailto:[EMAIL PROTECTED]
Sent: 06 May 2004 11:03
To: Matt Clark
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: [
Hello all,
It seems I'm trying to solve the same problem as Richard Emberson had a while
ago (thread here: http://archives.postgresql.org/pgsql-general/2002-03/msg01199.php).
Essentially I am storing a large number of large objects in the DB (potentially tens or hundreds
> 1. a traffic table is read in, and loaded into a hash table that is
>    keyed by company_id, ip_id and port:
>
>    $traffic{$ip_rec{$ip}{'company_id'}}{$ip_id}{$port} += $bytes1 + $bytes2;
>
> 2. a foreach loop is run on that resultant list to do the updates to the
>    database:
>
>    foreac
> What's wrong with using a LoopAES filesystem? It protects against
> someone walking off with the server, or at least the hard disk, and
> being able to see the data.
Yes, but only if the password has to be entered manually [1] at boot time.
And it gives zero protection against someone who gains root
BEGIN;
DELETE FROM mytable;
-- !!! OOOPS
ROLLBACK;
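A hedged sketch of that habit in practice (the table and predicate are
invented, and this isn't necessarily what was advised later in the thread):

  BEGIN;
  DELETE FROM mytable WHERE id = 42;   -- hypothetical predicate
  SELECT count(*) FROM mytable;        -- sanity-check what is left
  COMMIT;                              -- or ROLLBACK; if the count looks wrong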
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Jeremy Smith
> Sent: 20 February 2004 06:06
> To: [EMAIL PROTECTED]
> Subject: [ADMIN] "DELETE FROM" protection
>
>
>
> This may be an all-time idio
> .. I can't _quite_ tell if you're serious or not ... :)
>
> If you are serious, are you saying to do something like:
>
> CREATE TABLE new_money (product text, dollars int4, cents int4);
Ha :-) That would not be serious. I'm pretty sure he meant to just store the
product cost in cents instead.
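A minimal sketch of the cents-only approach (table and column names are made
up for illustration, not taken from the thread):

  CREATE TABLE product_price (
      product     text PRIMARY KEY,
      price_cents integer NOT NULL    -- e.g. $12.34 is stored as 1234
  );

  -- convert back to dollars only when displaying
  SELECT product, price_cents / 100.0 AS price_dollars
  FROM product_price;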
The consensus from previous discussions (search for 'LVM' in the archives) is
essentially that it definitely *should* work, some people *do* use it
successfully, but that you *must* test it thoroughly in your own setup under
heavy write load before relying on it. PG will be
> ...some SQL command like MySQL's 'flush tables'. What's the equivalent in Postgres?
>
> regards
> bhartendu
>
> On Thu, 2003-12-11 at 16:56, Matt Clark wrote:
> > > I got all your points, thanks for such a great discussion. Now the last
> > > thing I wa
> I got all your points, thanks for such a great discussion. Now the last
> thing I want to know is: how can I close the data files and flush the cache
> into the data files? How can I do this in PostgreSQL?
pg_ctl stop
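As an aside (not something suggested in the thread): if the goal is only to
flush dirty buffers out to the data files rather than to close them, a
CHECKPOINT does that without shutting the server down:

  CHECKPOINT;  -- writes all dirty shared buffers out to the data files;
               -- the files are only actually closed on shutdown (pg_ctl stop)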
more constructive ;-)
M
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Marek Florianczyk
> Sent: 31 October 2003 13:20
> To: Jamie Lawrence
> Cc: Matt Clark; [EMAIL PROTECTED]
> Subject: Re: [ADMIN] performance problem - 10.000 databases
>
>
> In a message of Fri, 31-10-2003, at 12:25, Matt Clark writes:
> > Ooh, I see. That's a tricky one. Do you really need that level of separation?
>
> Well, if you talk with the clients, and they promise that they will not
> access other databases, and especially
Ooh, I see. That's a tricky one. Do you really need that level of separation?
> Because every virtual domain has its own database, username and
> password. So one client, domain1.com with db: domain1db user: domain1user,
> cannot access the second client's database, domain2.com db: domain2db user:
>
> I could make persistent connections, but with 10.000 clients it will kill
> the server.
But if they're virtual domains, why would you need one connection per domain?
You should only need one connection per Apache process...
It won't work.
You could instead have a separate boolean attribute called 'expired' for
each row. Set this to true whenever you expire the row, and create the
partial index using that attribute.
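A minimal sketch of that approach (table, column and index names are
hypothetical, not from the original message):

  ALTER TABLE tickets ADD COLUMN expired boolean NOT NULL DEFAULT false;

  -- set the flag explicitly whenever a row expires, rather than testing
  -- a volatile expression in the index predicate
  UPDATE tickets SET expired = true WHERE expiry_date < now();

  -- partial index that covers only the live rows
  CREATE INDEX tickets_live_idx ON tickets (id) WHERE NOT expired;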
Matt
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Jeff Bo
I rather like it actually. Cisco equipment has a 'show tech-support' command
that does exactly that: dumps all the config, HW/SW versions, current state,
you name it. If you have a problem you run that, attach the output to your
support email, and 99% of the time there's enough info there to solve it.
> > It is crashing the Linux box. Not rebooting, not a kernel panic, it just
> > stops responding. On the console, if I type reboot it will not reboot,
> > and so on. But it crashes only if I start intensive operations on PG.
If you can type 'reboot' then surely it hasn't stopped responding?
--
Morning all, bit of a general question here...
consider:
begin;
update a set col1 = 'p' where id = '1';
update a set col2 = 'q' where id = '1';
commit;
versus:
update a set col1 = 'p', col2 = 'q' where id = '1';
Does the first case generate any more dead tuples that will need vacuuming?
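One way to check empirically (a sketch, assuming the table really is called
'a' as in the example above):

  -- VACUUM VERBOSE reports how many dead row versions it found,
  -- so running it after each variant shows the difference directly.
  VACUUM VERBOSE a;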
Hi,
I've noticed that the cost estimates for a lot of my queries are consistently
far too high. Sometimes it's because the row estimates are wrong, like this:
explain analyze
select logtime from loginlog where
uid='Ymogen::YM_User::3e2c0869c2fdd26d8a74d218d5a6ff585d490560' and result =
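Not an answer from the thread, but a common first step when row estimates are
badly off is to raise the statistics target on the offending column and
re-ANALYZE (table and column names are taken from the truncated query above):

  ALTER TABLE loginlog ALTER COLUMN uid SET STATISTICS 500;
  ANALYZE loginlog;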
Well, I usually am under a misapprehension! Thanks for the explanation about LIMIT
too.
In that case then, I shall stop worrying and learn to love the planner.
M
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: 08 August 2003 16:15
> To: Matt Clark
A P3 1GHz is probably roughly equivalent to a P4 1.5GHz, so going from dual
P3 1GHz to single P4 2.4GHz would likely be slower in any case. Don't forget
that unless you're talking about the "Xeon MP", the whole "Xeon" tag is pretty
meaningless for the P4 range.
If you moved to a *dual
Thanks Murthy, that's exceptionally helpful!
Does anyone know what (in general) would cause the notices that Murthy spotted in the
logs as per the snippet below?
> The postmaster is started and stopped on the backup server, so that any
> problems can be identified right away. (Notice the "ReadR
Subject: RE: [ADMIN] LVM snapshots
Thanks all.
The conclusion there seemed to be that it ought to work just fine, but should
be tested. Well, I'll test it and see if anything interesting comes up.
If anything, LVM snapshots will be less tricky than NetApp snapshots as LVM
has access to the