You can truncate a file:
Echo "" > bigfile
That will free up space without the 'rm'
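For example, a minimal sketch ('bigfile' is a placeholder path; note that on
ZFS any snapshot still referencing the old blocks will keep that space
allocated):

    : > bigfile       # truncate in place to zero bytes without removing the file
    ls -l bigfile     # confirm the size is now 0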
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
Sent: Wednesday, September 29, 2010 12:59 PM
To: zfs-discuss@opensolaris.org
We actually did some pretty serious testing with SATA SLC SSDs from Sun
directly hosting zpools (not as L2ARC). We saw some really bad performance, as
though something were wrong, but we couldn't find it.
If you search my name on this list you'll find the description of the problem.
--m
m a t
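For anyone comparing, the two configurations being contrasted would look
roughly like this (pool and device names are made up):

    zpool create ssdpool mirror c2t0d0 c2t1d0   # SSDs hosting the pool directly
    zpool add tank cache c2t0d0 c2t1d0          # vs. the same SSDs as L2ARC for an existing pool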
I note in your iostat data below that one drive (sd5) consistently performs
MUCH worse than the others, even when doing less work.
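A hedged sketch of how one might watch for that kind of outlier (standard
Solaris iostat; the 5-second interval is arbitrary):

    # A drive showing much higher asvc_t (average service time) and %b (busy)
    # than its peers while handling similar or fewer ops is the one to suspect:
    iostat -xn 5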
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of John J Balestrini
Sent: Tuesday, M
It probably put an EFI label on the disk. Try wiping the first AND last 2MB of
the disk.
--M
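A rough sketch of that wipe, assuming the disk is c0t1d0 (a placeholder;
double-check the device before pointing dd at it):

    # Zero the primary label area at the start of the disk:
    dd if=/dev/zero of=/dev/rdsk/c0t1d0p0 bs=1024k count=2
    # EFI keeps a backup label at the end of the disk, so zero the last 2MB too.
    # SIZE_MB is the whole-disk size in MB (a placeholder; derive it from format/prtvtoc):
    dd if=/dev/zero of=/dev/rdsk/c0t1d0p0 bs=1024k seek=$((SIZE_MB - 2)) count=2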
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of nich romero
Sent: Wednesday, May 05, 2010 1:00 PM
To: zfs-discuss@opensolaris.org
RAIDZ = RAID5, so lose 1 drive (1.5TB)
RAIDZ2 = RAID6, so lose 2 drives (3TB)
RAIDZ3 = RAID7(?), so lose 3 drives (4.5TB).
What you lose in usable space, you gain in redundancy.
-m
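As a worked example, assuming six of the 1.5TB drives (device names are
placeholders):

    zpool create tank raidz  c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # ~7.5TB usable, survives 1 failure
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # ~6.0TB usable, survives 2 failures
    zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # ~4.5TB usable, survives 3 failures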
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org]
[zfs-discuss] snapshots as versioning tool
|
| Matt Cowger writes:
|
| > zfs list | grep '@'
| >
| > zpool/f...@1154758324G - 461G -
| > zpool/f...@1208482 6.94G - 338G -
| > zpool/f...@daily.net
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Harry Putnam
Sent: Monday, March 22, 2010 2:23 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] snapshots as versioning tool
Matt Cowger writes:
> This is totally doable, and a reasonable use of zfs snapshots - we
> do some simil
This is totally doable, and a reasonable use of zfs snapshots - we do some
similar things.
You can easily determine if the snapshot has changed by checking the output of
zfs list for the snapshot.
--M
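A small sketch of that check (the snapshot name is made up):

    # USED on a snapshot is the space referenced only by that snapshot; a
    # non-zero, growing value means data present at snapshot time has since
    # been changed or removed in the live filesystem:
    zfs list -t snapshot -o name,used,referenced tank/home@monday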
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org]
On Mar 10, 2010, at 6:30 PM, Ian Collins wrote:
> Yes, noting the warning.
Is it safe to execute on a live, active pool?
--m
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Tuesday, March 09, 2010 3:53 PM
To: Roch Bourbonnais
Cc: Matt Cowger; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk
(70% drop)
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais
wrote:
>
a significant drain of CPU resource.
>
> -r
>
>
> On Mar 8, 2010, at 17:57, Matt Cowger wrote:
>
>> Hi Everyone,
>>
>> It looks like I've got something weird going on with ZFS performance on
>> a ramdisk... ZFS is performing not even a third of what UFS is doing.
Ross is correct: advanced OS features are not required here, just the ability
to store a file; we don't even need Unix-style permissions.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ross Walker
Sent: Tuesday, M
On Mar 8, 2010, at 6:31 PM, Bill Sommerfeld wrote:
>
> if you have an actual need for an in-memory filesystem, will tmpfs fit
> the bill?
>
> - Bill
Very good point, Bill - I just ran this test and started to get the numbers I
was expecting (1.3 GB
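For reference, a hedged sketch of the tmpfs variant (mount point and size are
assumptions):

    mkdir /ramfs
    mount -F tmpfs -o size=81920m swap /ramfs   # ~80GB in-memory filesystem; no zpool/newfs step needed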
On Mar 8, 2010, at 6:31 PM, Richard Elling wrote:
>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and mounting the
>> UFS forcedirectio (no point in using buffer cache memory for something
>> that’s already in memory)
>
> Did you also set primarycache=none?
> -- richard
Good
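For completeness, a minimal sketch of the two settings being compared (the
pool name 'ram' and ramdisk 'rd1' follow the examples in this thread and are
assumptions):

    zfs set primarycache=none ram                            # don't ARC-cache data that is already in RAM
    mount -F ufs -o forcedirectio /dev/ramdisk/rd1 /ufsram   # UFS equivalent (after newfs): bypass the buffer cache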
It can, but doesn't in the command line shown below.
M
On Mar 8, 2010, at 6:04 PM, "ольга крыжановская" wrote:
> Does iozone use mmap() for IO?
>
> Olga
>
> On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger
> wrote:
>> Hi Everyone,
>>
>>
>
Hi Everyone,
It looks like I've got something weird going on with ZFS performance on a
ramdisk... ZFS is performing not even a third of what UFS is doing.
Short version:
Create an 80+ GB ramdisk (ramdiskadm); the system has 96GB, so we aren't swapping
Create a zpool on it (zpool create ram)
Change zfs op
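A minimal sketch of those first two steps (the ramdisk name 'rd1' is a
placeholder, and the truncated "Change zfs op..." step is left out rather than
guessed at):

    ramdiskadm -a rd1 80g               # carve an ~80GB ramdisk out of RAM
    zpool create ram /dev/ramdisk/rd1   # build the pool directly on the ramdisk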
Is anyone willing to provide the modified kernel binaries for OpenSolaris 2008.05?
I can't believe it's almost a year later, with a patch provided, and this bug
is still not fixed.
For those of us who can't recompile the sources, it makes Solaris useless if we
want to use a FireWire drive.
--m