via email or this list…
On Jan 26, 2012, at 4:03 PM, Robert Boyer wrote:
I can probably arrange for a tunneled v6 address - should be the same thing
at the end of the day…. how much time/mem you need?
RB
On Jan 26, 2012, at 2:10 PM, Steve Bertrand wrote:
Hi all!
I've been away
, at 10:45 AM, Steve Bertrand wrote:
On 2012.01.26 23:12, Robert Boyer wrote:
just an FYI - that VM that you logged into tonight now has verified access
via IPv6 from anywhere, is serving up the /64 block to my local devices via
route adverts, has route6d running, and appears to work locally
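The setup described above (advertising the /64 to the LAN plus running route6d) maps onto a handful of rc.conf knobs on FreeBSD. A hedged sketch - the interface name em0 is an assumption, adjust for your hardware:

```
# Hypothetical rc.conf fragment for the setup described above
ipv6_gateway_enable="YES"   # forward IPv6 between interfaces
rtadvd_enable="YES"         # send router advertisements for the /64
rtadvd_interfaces="em0"     # em0 is a placeholder LAN interface
route6d_enable="YES"        # RIPng routing daemon
```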
I can probably arrange for a tunneled v6 address - should be the same thing at
the end of the day…. how much time/mem you need?
RB
On Jan 26, 2012, at 2:10 PM, Steve Bertrand wrote:
Hi all!
I've been away for some time, but I'm now getting back into the full swing of
things.
I'm
most likely a ZFS label on the disk from before that you need to get rid of
before doing the install.
RB
On Jan 16, 2012, at 11:37 AM, Daniel Staal wrote:
I've got a weird problem... I was working on installing 9.0 w/zfs on my
laptop, messed up, rebooted, *formatted the drives* and
I agree - if you move drives that a particular ZFS install has not seen before, and
there is a ZFS label at the end, things can go pear-shaped - however…
if you blast just the end of the drive it should be fine.
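The "blast just the end of the drive" approach can be sketched as below. ZFS keeps two of its four vdev labels in the last 512 KiB of a device, which is why a stale tail label survives a quick reformat. The device name ada0 is hypothetical, and the demonstration writes to a throwaway file image so it is safe to run as-is:

```shell
# On FreeBSD with a recent ZFS, the clean way is:
#   zpool labelclear -f /dev/ada0    # ada0 is a hypothetical device
# By hand, zero the final 512 KiB. Shown here against a 64 MiB file
# image instead of a real disk:
truncate -s 64m /tmp/fakedisk.img
# 64 MiB / 512 KiB = 128 blocks; seek to the last one and overwrite it
dd if=/dev/zero of=/tmp/fakedisk.img bs=512k \
   seek=$(( (64 * 1024 / 512) - 1 )) count=1 conv=notrunc 2>/dev/null
```

Against a real disk you would substitute the device node for the file and compute the seek from `diskinfo`.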
RB
P.S. Maybe I'll title a book fun with zfs and glabel or cheap thrills with zfs,
To deal with this kind of traffic you will most likely need to set up a MongoDB
cluster of more than a few instances… much better. There should be A LOT of
info on how to scale Mongo to the level you are looking for, but most likely you
will find that on Ruby forums NOT on *NIX boards….
The OS
but they tend to have a very
narrow view of the world…. ;-)
RB
On Jan 2, 2012, at 4:21 PM, Robert Boyer wrote:
To deal with this kind of traffic you will most likely need to set up a MongoDB
cluster of more than a few instances… much better. There should be A LOT
of info on how to scale Mongo
Shard Chunks - MongoDB
enjoy….
RB
On Jan 2, 2012, at 5:38 PM, Robert Boyer wrote:
Sorry one more thought and a clarification….
I have found that it is best to run mongos with each app server instance; most
of the mongo interface libraries aren't intelligent about the way
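The per-app-server mongos layout recommended above looks roughly like this. Hostnames and ports are hypothetical, and this reflects the 2.x-era flag syntax rather than a drop-in config:

```
# One mongos router on each app server, pointed at the shared config servers.
# cfg1-cfg3 are placeholder hostnames; 27019 is the conventional config port.
mongos --configdb cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019 \
       --port 27017 --fork --logpath /var/log/mongos.log
```

The application then connects to localhost:27017 and the local router handles fan-out to the shards, which sidesteps the client libraries' weaknesses mentioned above.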
Sorry to see you are still having issues. I thought you were set when we fixed
your resolv last night.
Okay - let's start from scratch here
Are you sure you need a named? Are you actually serving DNS for your own IP
addresses, or are you using it as a caching server? Getting a new named
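If the caching case is all that's needed, a minimal caching-only named.conf is short. A hedged sketch - the directory path follows the FreeBSD layout, and the ACLs are assumptions for a single-machine resolver:

```
// Hypothetical caching-only named.conf fragment
options {
    directory "/etc/namedb/working";
    recursion yes;              // resolve and cache on behalf of clients
    listen-on { 127.0.0.1; };   // this machine only
    allow-query { localhost; };
};
```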
19, 2011 at 06:11:23PM -0500, Robert Boyer wrote:
Sorry to see you are still having issues. I thought you were set when we
fixed your resolv last night.
Okay - let's start from scratch here
Are you sure you need a named? Are you actually serving DNS for your own IP
addresses
I am in the process of moving all of my NAS from OpenSolaris to FreeBSD (I
hope?) and have run into a few speed bumps along the way. Maybe I am doing
something way, way wrong, but I cannot seem to find any info at all on some of my
issues. I hope this is the right list to ask - if not, please
I am running release 9.1 under VMware Fusion and it works great - except that
no USB connections on any USB bus work at all: the kernel sees the connect but
then encounters an error and disables the device immediately.
Searched around a bit but didn't find anything definitive. Seems like this
I am trying nanobsd for the first time under 8.1 and have two fairly basic
questions before I go about solving a few issues in my usual brute-force and
wrong way.
1) Using a box-stock system with a fresh install and the default nanobsd.sh with
the default configuration, everything looks like it
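For reference, overriding nanobsd.sh's defaults is done through a small config file passed with -c. A minimal sketch - every value here is illustrative, not from the thread:

```
# mycfg.nano - hypothetical minimal override file for nanobsd.sh
NANO_NAME=mybox              # image/work-directory name
NANO_SRC=/usr/src            # source tree to build from
NANO_MEDIASIZE=2000000       # target media size in 512-byte sectors
# build with: sh nanobsd.sh -c mycfg.nano
```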
I have been running FreeBSD for about a year and tracking the ZFS
implementation for almost as long. I am reasonably happy with the current
stable 8.1 ZFS configs that I have been running with a few TB of storage all
managed with an integrated SATA controller on my test machine. I am about to