aneip wrote:
I'm really new to ZFS and also to RAID.
I have 3 hard disks: 500GB, 1TB, 1.5TB.
On each disk I want to create a 150GB partition plus the remaining space.
I want to create a raidz from the 3x150GB partitions. This is for my documents + photos.
You should be able to create 150 GB slices on each drive, and then build a raidz pool from those three slices.
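A rough sketch of what that could look like, assuming the 150 GB slices (slice 0) have already been created with format/fdisk; all device names here are placeholders:

  # redundant pool for documents and photos, built from the three 150 GB slices
  zpool create docpool raidz c0t0d0s0 c0t1d0s0 c0t2d0s0

  # the leftover space on the larger disks can go into a separate,
  # non-redundant pool for the video library
  zpool create videopool c0t1d0s1 c0t2d0s1

Just keep in mind that a non-redundant pool has no protection at all: losing either of those disks loses the video pool.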
I'm really new to ZFS and also to RAID.
I have 3 hard disks: 500GB, 1TB, 1.5TB.
On each disk I want to create a 150GB partition plus the remaining space.
I want to create a raidz from the 3x150GB partitions. This is for my documents + photos.
As for the remaining space, I want to create my video library. This one doesn't need any
redundancy
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> Actually, I find this very surprising:
> Question posted:
> http://lopsa.org/pipermail/tech/2010-April/004356.html
As the thread unfolds, it appears that, although NetApp may
>-Original Message-
>From: Ross Walker [mailto:rswwal...@gmail.com]
>Sent: Friday, April 23, 2010 7:08 AM
>>
>> We are currently porting over our existing Learning Lab Infrastructure
>> platform from MS Virtual Server to VBox + ZFS. When students
>> connect into
>> their lab environment it
on 23/04/2010 04:22 BM said the following:
> On Tue, Apr 20, 2010 at 2:18 PM, Ken Gunderson wrote:
>> Greetings All:
>>
>> Granted there has been much fear, uncertainty, and doubt following
>> Oracle's takeover of Sun, but I ran across this in a FreeBSD mailing
>> list post dated 4/20/2010:
>>
>>
My use case for OpenSolaris is as a storage server for a VM environment (we
also use EqualLogic, and soon an EMC CX4-120). To that end, I run Iometer
within a VM, simulating my VM I/O activity, with some balance given to easy
benchmarking. We have about 110 VMs across eight ESX hosts. Here is wha
At the time we had it set up as 3 x 5-disk raidz, plus a hot spare. These 16
disks were in a SAS cabinet, and the slog was on the server itself. We are
now running 2 x 7-disk raidz2 plus a hot spare and slog, all inside the cabinet.
Since the disks are 1.5TB, I was concerned about resilver times.
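For reference, a pool with that kind of layout can be built in one shot; this is only an illustration, with made-up device names rather than the original hardware:

  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      spare  c2t6d0 \
      log    c3t0d0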
I was having this same problem with snv_134. I executed all the same commands
as you did. The cloned disk booted up to the "Hostname:" line and then died.
Booting with the "-kv" kernel option in GRUB, it died at a different point each
time, most commonly after:
"srn0 is /pseudo/s...@0"
What's
Dedup is a key element for my purpose, because I am planning a central
repository for roughly 150 Windows Server 2008 (R2) servers, which would take a lot
less storage if they dedup well.
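One way to sanity-check that before committing: zdb can simulate deduplication against data that is already in a pool, and the pool-wide ratio can be watched once dedup is enabled. The pool and dataset names below are only examples:

  # simulate dedup on existing data; nothing is modified
  zdb -S tank

  # enable dedup for the repository filesystem, then watch the pool-wide ratio
  zfs set dedup=on tank/winbackups
  zpool get dedupratio tank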
A few things come to mind...
1. A lot better than...what? Setting the recordsize to 4K got you some
deduplication but maybe the pertinent question is what were you
expecting?
2. Dedup is fairly new. I haven't seen any reports of experiments like
yours so...CONGRATULATIONS!! You're probably the first.
It was active all the time.
Made a new zfs filesystem with -o dedup=on, copied the files with the default recordsize and got no
dedup; deleted the files, set recordsize=4k, copied again, and got a dedup ratio of 1.29x.
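For anyone repeating the experiment, setting both properties when the filesystem is created avoids the delete-and-recopy step; the names here are just illustrative:

  zfs create -o dedup=on -o recordsize=4k tank/vhd
  cp /backup/*.vhd /tank/vhd/
  zfs get dedup,recordsize tank/vhd
  zpool get dedupratio tank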
You might note that dedup only dedupes data that is written after the flag is set.
It does not retroactively dedupe already-written data.
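In practice that means existing data has to be rewritten after enabling the property before it shows up in the dedup table; for example (dataset and paths are placeholders):

  zfs set dedup=on tank/data
  # rewrite the existing files so their blocks pass through the dedup code
  cp -rp /tank/data/old /tank/data/old.rewritten
  rm -rf /tank/data/old
  zpool get dedupratio tank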
Bogdan,
Thanks for pointing this out and passing along the latest news from Oracle.
Stamp out FUD wherever possible. At this point, unless it is said officially
(and Oracle generally keeps pretty tight-lipped about products and directions),
people should regard most things as hearsay.
Cheers,
Hi,
I have been playing with OpenSolaris for a while now. Today I tried to deduplicate the
backup VHD files Windows Server 2008 generates. I made a backup before and
after installing the AD role and copied the files to the share on OpenSolaris
(build 134). First I got a straight 1.00x, then I set the recordsize to 4k
Sunil wrote:
If you like, you can later add a fifth drive relatively easily by
replacing one of the slices with a whole drive.
How does this affect my available storage if I were to replace both of those
sparse 500GB files with a real 1TB drive? Will it be the same? Or will I have
expanded the pool?
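A rough sketch of that replacement, assuming the sparse files are currently vdevs in a pool called tank; the file path and device name are placeholders:

  # let the pool grow automatically once all devices in the vdev are bigger
  zpool set autoexpand=on tank
  # swap the file-backed vdev for the real 1TB disk
  zpool replace tank /files/sparse-500g-0 c2t0d0
  zpool status tank

Note that usable capacity only grows once every device in that top-level vdev has been replaced with a larger one; until then the extra space on the 1TB drive sits idle.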
I would have thought that moving a file from one filesystem to another within the
same pool would be almost instantaneous. Why does it go all the way to the platters for such a
move?
# time cp /tmp/blockfile /pcshare/1gb-tempfile
real    0m5.758s
# time mv /pcshare/1gb-tempfile .
real    0m4.501s
Both FSs are in the same pool.
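For what it's worth, a move across filesystem boundaries is always a copy followed by an unlink, because rename(2) cannot cross filesystems, even when both datasets live in the same pool; a move inside one dataset is a pure rename. That is easy to see with something like the following (dataset names are placeholders):

  # check which datasets back the two paths
  zfs list -o name,mountpoint
  # a move within a single dataset is metadata-only and returns immediately
  time mv /pcshare/1gb-tempfile /pcshare/1gb-tempfile.renamed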
On Apr 22, 2010, at 11:03 AM, Geoff Nordli wrote:
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Thursday, April 22, 2010 6:34 AM
On Apr 20, 2010, at 4:44 PM, Geoff Nordli
wrote:
If you combine the hypervisor and storage server and have students
connect to the VMs via RDP or VNC or
I can replicate this case: start new instance > attach EBS volumes > reboot
instance > data finally available.
I'm guessing it has something to do with the way the volumes/devices are "seen"
and then made available.
I've tried running various operations (offline/online, scrub) to see whether it
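If it is just a stale view of the devices, forcing ZFS to re-read them after the volumes are attached might avoid the extra reboot; something along these lines, with the pool name as a placeholder:

  # after the EBS volumes are attached to the new instance
  zpool export tank
  zpool import tank
  # or, if the pool was never cleanly exported on the old instance
  zpool import -f tank
  zpool status tank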
On 23/04/2010 13:38, Phillip Oldham wrote:
The instances are "ephemeral"; once terminated they cease to exist, as do all
their settings. Rebooting an image keeps any EBS volumes attached, but this isn't the
case I'm dealing with - it's when the instance terminates unexpectedly. For instance, if
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote:
> The instances are "ephemeral"; once terminated they cease to exist, as do all
> their settings. Rebooting an image keeps any EBS volumes attached, but this
> isn't the case I'm dealing with - it's when the instance terminates
> unexpectedly. For
On 23/04/2010 12:24, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of thomas
Someone on this list threw out the idea a year or so ago to just setup
2 ramdisk servers, export a ramdisk from each and create a mirror slog
One thing I've just noticed is that after a reboot of the new instance, which
showed no data on the EBS volume, the files return. So:
1. Start new instance
2. Attach EBS vols
3. `ls /foo` shows no data
4. Reboot instance
5. Wait a few minutes
6. `ls /foo` shows data as expected
Not sure if this
The instances are "ephemeral"; once terminated they cease to exist, as do all
their settings. Rebooting an image keeps any EBS volumes attached, but this
isn't the case I'm dealing with - it's when the instance terminates
unexpectedly. For instance, if a reboot operation doesn't succeed or if the
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote:
> I'm not actually issuing any when starting up the new instance. None are
> needed; the instance is booted from an image which has the zpool
> configuration stored within, so simply starts and sees that the devices
> aren't available, which beco
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> One last try. If you change the "real" directory structure, how are
> those
> changes reflected in the "snapshot" directory structure?
>
> Consider:
> echo "whee" > /a/b/c/d.txt
> [snapshot]
> mv /a/b /a/B
>
> What do
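A concrete way to see the answer, assuming /a is the mountpoint of a dataset called tank/a (names are made up for the example):

  echo "whee" > /a/b/c/d.txt
  zfs snapshot tank/a@before
  mv /a/b /a/B
  # the live filesystem shows the new name...
  ls /a                                   # -> B
  # ...while the snapshot still shows the tree as it was at snapshot time
  ls /a/.zfs/snapshot/before              # -> b
  cat /a/.zfs/snapshot/before/b/c/d.txt   # -> whee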
I'm not actually issuing any when starting up the new instance. None are
needed; the instance is booted from an image which has the zpool configuration
stored within, so it simply starts and sees that the devices aren't available;
they become available after I've attached the EBS device.
Before t
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of thomas
>
> Someone on this list threw out the idea a year or so ago to just setup
> 2 ramdisk servers, export a ramdisk from each and create a mirror slog
> from them.
Isn't the whole point of
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote:
>
> I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure
> already defined. Starting an instance from this image, without attaching the
> EBS volume, shows the pool structure exists and that the pool state is
> "UNAVAIL" (as
I'm trying to provide some "disaster-proofing" on Amazon EC2 by using a
ZFS-based EBS volume for primary data storage with Amazon S3-backed snapshots.
My aim is to ensure that, should the instance terminate, a new instance can
spin up, attach the EBS volume, and auto-/re-configure the zpool.
I'v
On 22 Apr 2010, at 20:50, Rich Teer wrote:
On Thu, 22 Apr 2010, Alex Blewitt wrote:
Hi Alex,
For your information, the ZFS project lives (well, limps really) on
at http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard
from there and we're working on moving forwards from the anci
> If you're lucky, the device will be marked as not being present, and then
> you can use the GUID.
> To find out, use the command "zdb -C" to dump out the configuration
> information. In the output, look for the offline disk (it should be under
> a heading "children[3]"). If the "not_present
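Roughly like this; the pool name, GUID, and replacement device below are illustrative, not taken from the original output:

  # dump the cached pool configuration and note the guid of the missing child
  zdb -C tank
  # then refer to the device by that guid, e.g. to replace it with a new disk
  zpool replace tank 12345678901234567 c1t3d0
  zpool status tank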