> Hello Marc,
>
> Sunday, July 29, 2007, 9:57:13 PM, you wrote:
>
> MB> MC eastlink.ca> writes:
> >>
> >> Obviously 7zip is far more CPU-intensive than anything in use with
> >> ZFS today. But maybe with all these processor cores coming down the
> >> road, a high-end compression system is
Hi,
I just noticed something interesting ... don't know whether it's
relevant or not (two commands run in succession during a 'nightly' run):
$ iostat -xnz 6
[...]
extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.3
I have begun a scrub on a 1.5 TB pool which holds 600 GB of data, and seeing
that it will take 11h47min I want to stop it. I invoked "zpool scrub -s pool"
and nothing happens. There is no message such as "scrub stopped". The cursor
just sits there on a new line and blinks with no output.
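For what it's worth, the silence is the usual Unix convention: "zpool scrub -s"
prints nothing when it succeeds. A minimal sketch of how to confirm the stop,
assuming a pool named "tank" (the pool name here is an assumption, and the
exact status wording varies between releases):

```shell
# Stop the in-progress scrub; success is silent, errors go to stderr.
zpool scrub -s tank

# Check the pool status; the "scrub:" line should report that the
# scrub was stopped/canceled rather than still in progress.
zpool status tank
```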
"Richard L. Hamilton" <[EMAIL PROTECTED]> wrote:
> * disks are probably cheaper than CPUs
>
> * it looks to me like 7z may also be RAM-hungry; and there are probably
> better ways to use the RAM, too
The main problem with the currently available 7z implementation is that
it has been written in C+
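To make the CPU-cost trade-off concrete: 7z's default algorithm is LZMA, and
Python ships both zlib (gzip-class) and lzma in its standard library, so the
difference in time and ratio can be sketched on any compressible buffer. The
test data and presets below are illustrative assumptions, not a rigorous
benchmark:

```python
# Rough sketch: compare gzip-class (zlib) and 7z-class (LZMA) compression
# on the same highly compressible buffer. Timings will vary by machine;
# LZMA typically compresses tighter but burns far more CPU time.
import lzma
import time
import zlib

data = b"The quick brown fox jumps over the lazy dog. " * 20000  # ~900 KB

for name, compress in (("zlib", lambda d: zlib.compress(d, 9)),
                       ("lzma", lambda d: lzma.compress(d, preset=9))):
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data)} -> {len(out)} bytes in {elapsed:.3f}s")
```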
I missed the video. Was it published?
On 5/2/07, Joy Marshall <[EMAIL PROTECTED]> wrote:
>
> Hi Brian,
>
> I can confirm that we will certainly be filming the presentation with a
> view to posting it online shortly afterwards for those people who can't
> attend in person.
>
> Regards,
> Joy
>
>
>
On Jul 31, 2007, at 5:44 AM, Orvar Korvar wrote:
> I have begun a scrub on a 1.5 TB pool which holds 600 GB of data, and
> seeing that it will take 11h47min I want to stop it. I invoked
> "zpool scrub -s pool" and nothing happens. There is no message such as
> "scrub stopped". The cur
I ran tests of hardware RAID versus all-software RAID and didn't see any
differences that made either a clear winner. For production platforms you're
just as well off with JBODs.
This was with bonnie++ on a V240 running Solaris 10u3. A 3511 array fully
populated with twelve 380 GB drives, single
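A sketch of such a comparison, assuming a ZFS pool mounted at /tank in the
JBOD case (the pool layout, mount point, test size, and user are all
assumptions, not the poster's actual setup):

```shell
# JBOD case: give ZFS the raw disks and let it handle redundancy
# (device names are placeholders).
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
mkdir /tank/bench

# Run the same bonnie++ workload against each configuration:
# -d test directory, -s file size in MB, -u user to run as.
bonnie++ -d /tank/bench -s 8192 -u nobody
```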
Hi All,
We have a problem running a scientific application, dCache, on ZFS.
dCache is Java-based software that stores huge datasets in pools.
One dCache pool consists of two directories, pool/data and pool/control. The
real data goes into pool/data/
For each file in pool/data/ the pool
I have a customer who is running into bug 6538387. This is a problem
with HP-UX clients accessing NFS mounts which are on a ZFS file
system. ZFS uses nanosecond timestamps, and the HP-UX client does not
support that amount of precision. This is not an issue
with ufs as it does no
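The precision mismatch is easy to demonstrate outside NFS entirely. A minimal
sketch, assuming a local filesystem that stores sub-second timestamps (the
file and timestamp values below are arbitrary illustrations of a
nanosecond-aware "server" view versus a second-granularity "client" view):

```python
# Show how a client limited to whole-second timestamps sees a different
# mtime than a server storing nanosecond precision.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

# Give the file a timestamp with a non-zero sub-second component.
ns = 1_185_900_000 * 10**9 + 123_456_789
os.utime(path, ns=(ns, ns))

st = os.stat(path)
server_view = st.st_mtime_ns                      # full nanosecond value
client_view = (st.st_mtime_ns // 10**9) * 10**9   # second-granularity client

# True when the filesystem stores sub-second timestamps: the client's
# truncated view no longer matches the server's value.
print(server_view != client_view)
os.remove(path)
```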
Sergey Chechelnitskiy <[EMAIL PROTECTED]> writes:
> Hi All,
>
> We have a problem running a scientific application dCache on ZFS.
> dCache is a java based software that allows to store huge datasets in
> pools. One dCache pool consists of two directories pool/data and
> pool/control. The real dat