The profit stuff has been under NDA for a while, but we started telling the
Street a while back and they seem to like the idea. :)
Selim Daoud wrote:
> wasn't that NDA info??
>
> s-
>
> On 10/18/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
>
>> MC wrote:
>>
>>> Sun's storage strategy:
>>>
>>>
wasn't that NDA info??
s-
On 10/18/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
> MC wrote:
> > Sun's storage strategy:
> >
> > 1) Finish Indiana and distro constructor
> > 2) (ship stuff using ZFS-Indiana)
> > 3) Success
>
> 4) Profit :)
Upon further thought, it was probably just defaulting to the first
thing grub could boot -- my laptop is partitioned (in order) XP,
recovery partition, Solaris. The grub menu, though, lists the options
in order: Solaris b62 (zfs /), Solaris b74 (also zfs /), recovery
partition, Windows. So my in
Scott Laird wrote:
> On 10/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
>>> So, the only way to lose transactions would be a crash or power loss,
>>> leaving outstanding transactions in the log, followed by the log
>>> device failing to start up on reboot? I assume that would be
>>> handled relatively cleanly?
Hi Bill,
Thinking about this a little more, would this provide the ability to
maintain B and G's data for a rollback followed by a possible roll
forward?
1.) Create a clone of snapshot_B (clone_B).
2.) Create a new current snapshot (snapshot_F).
3.) Create a clone of snapshot_F (clone_F).
4.) Pr
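In ZFS commands, steps 1 through 3 might look like this (pool and
filesystem names are hypothetical; a sketch only):

  # zfs clone tank/fs@snapshot_B tank/clone_B
  # zfs snapshot tank/fs@snapshot_F
  # zfs clone tank/fs@snapshot_F tank/clone_F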
Hi Bill,
You've got it 99%. I want to roll E back to, say, B, and keep G intact.
I really don't care about C, D, or F. Essentially, B is where I want to
roll back to, but in case B's data copy doesn't improve what I'm
trying to fix, I want to have a copy of G's data around so I can go back
to how it was.
M
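One hedged way to do that (names hypothetical): copy G's data out with
send/receive first, so it survives the rollback, since zfs rollback -r
destroys every snapshot newer than B:

  # zfs send tank/fs@snapshot_G | zfs recv tank/keep_G
  # zfs rollback -r tank/fs@snapshot_B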
On Thu, Oct 18, 2007 at 02:29:27PM -0600, Neil Perrin wrote:
>
> > So, the only way to lose transactions would be a crash or power loss,
> > leaving outstanding transactions in the log, followed by the log
> > device failing to start up on reboot? I assume that would be
> > handled relatively cleanly?
On 10/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
> > So, the only way to lose transactions would be a crash or power loss,
> > leaving outstanding transactions in the log, followed by the log
> > device failing to start up on reboot? I assume that would be
> > handled relatively cleanly?
Scott Laird wrote:
> On 10/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
>>
>> Scott Laird wrote:
>>> I'm debating using an external intent log on a new box that I'm about
>>> to start working on, and I have a few questions.
>>>
>>> 1. If I use an external log initially and decide that it was a
>>> mistake, is there a way to move back to the internal log without
>>> rebuilding the entire pool?
On Oct 18, 2007, at 13:26, Richard Elling wrote:
>
> Yes. It is true that ZFS redefines the meaning of available space.
> But
> most people like compression, snapshots, clones, and the pooling
> concept.
> It may just be that you want zfs list instead, df is old-school :-)
Exactly - I'm not
On Oct 18, 2007, at 11:57, Richard Elling wrote:
> David Runyon wrote:
>> I was presenting to a customer at the EBC yesterday, and one of the
>> people at the meeting said using df in ZFS really drives him crazy
>> (no,
>> that's all the detail I have). Any ideas/suggestions?
>
> Filter it. This is UNIX after all...
[warning: paradigm shifted]
Jonathan Edwards wrote:
> On Oct 18, 2007, at 11:57, Richard Elling wrote:
>
>> David Runyon wrote:
>>> I was presenting to a customer at the EBC yesterday, and one of the
>>> people at the meeting said using df in ZFS really drives him crazy (no,
>>> that's all the detail I have). Any ideas/suggestions?
On 10/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
>
>
> Scott Laird wrote:
> > I'm debating using an external intent log on a new box that I'm about
> > to start working on, and I have a few questions.
> >
> > 1. If I use an external log initially and decide that it was a
> > mistake, is there a way to move back to the internal log without
> > rebuilding the entire pool?
Scott Laird wrote:
> I'm debating using an external intent log on a new box that I'm about
> to start working on, and I have a few questions.
>
> 1. If I use an external log initially and decide that it was a
> mistake, is there a way to move back to the internal log without
> rebuilding the entire pool?
I'm debating using an external intent log on a new box that I'm about
to start working on, and I have a few questions.
1. If I use an external log initially and decide that it was a
mistake, is there a way to move back to the internal log without
rebuilding the entire pool?
2. What happens if th
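On question 1: at the time, removing a dedicated log device was not
supported, so moving back to the internal log meant recreating the pool.
Later pool versions added slog removal; a sketch, assuming a release
that supports it (device names hypothetical):

  # zpool add tank log c7t0d0     (attach an external intent log)
  # zpool remove tank c7t0d0      (detach it again; newer pool versions only)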
Brendan Gregg has put together a few dtrace scripts for looking at
various parts of the zfs subsystems recently. I don't find them in
the DTraceToolkit-0.99 release though.
-- richard
Nathan Kroenert wrote:
> Hey all -
>
> Time for my silly question of the day, and before I bust out vi and
>
David Runyon wrote:
> I was presenting to a customer at the EBC yesterday, and one of the
> people at the meeting said using df in ZFS really drives him crazy (no,
> that's all the detail I have). Any ideas/suggestions?
Filter it. This is UNIX after all...
-- richard
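For example, one hedged way to filter, assuming a pool named tank:

  $ df -h | grep -v '^tank'

or skip df entirely and use the native view:

  $ zfs list -o name,used,avail,refer,mountpoint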
MC wrote:
> Sun's storage strategy:
>
> 1) Finish Indiana and distro constructor
> 2) (ship stuff using ZFS-Indiana)
> 3) Success
4) Profit :)
Sun's storage strategy:
1) Finish Indiana and distro constructor
2) (ship stuff using ZFS-Indiana)
3) Success
zpool create t1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
zpool create t2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
zpool create t3 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0
zpool create t4 c5t1d0 c5t2d0 c5t3d0 c5t5d0 c5t6d0 c5t7d0
zpool create t5 c6t0d
On Thu, Oct 18, 2007 at 10:32:58AM -0500, Mike Gerdts wrote:
> On 10/18/07, Gary Mills <[EMAIL PROTECTED]> wrote:
> > What's the command to show cross calls?
>
> mpstat will show it on a system basis.
Thanks. This is on our T2000 Cyrus IMAP server with ZFS. It's
the second listing from `mpstat
I don't know of any way to observe IOPS per zvol and I believe
this would be tricky. Any writes/reads from individual datasets (filesystems
and zvols) will go through the pipeline and can fan out to multiple
mirrors or raidz or be striped across devices. Block writes will be
combined and pushed out
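Per-device counts, by contrast, are easy to get with the DTrace io
provider; the hard part is attributing them to a zvol. For example,
this one-liner counts I/Os per device, not per dataset:

  # dtrace -n 'io:::start { @[args[1]->dev_statname] = count(); }'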
I think this is an artifact of a manual setup. Ordinarily, if
booting from a zfs root pool, grub wouldn't even be able
to read the menu.lst if it couldn't interpret the pool format.
I'm not sure what the entire sequence of events is here,
so I'm not sure if there's a bug. Perhaps you could elaborate.
Hi,
From Sun Germany I got the info that the 2U JBODs will be officially announced
in Q1 2008 and the 4U JBODs in Q2 2008.
Both will have SAS connectors and support both SAS and SATA drives.
Regards,
Tom
Gary Mills wrote:
> What's the command to show cross calls?
mpstat
--
Michael Schuster        Sun Microsystems, Inc.
recursion, n: see 'recursion'
On Thu, 18 Oct 2007, Mike Gerdts wrote:
> On 10/18/07, Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
>> that sounds like a somewhat mangled description of the cross-calls done
>> to invalidate the TLB on other processors when a page is unmapped.
>> (it certainly doesn't happen on *every* update to a mapped file).
On 10/18/07, Gary Mills <[EMAIL PROTECTED]> wrote:
> What's the command to show cross calls?
mpstat will show it on a system basis.
xcallsbypid.d from the DTraceToolkit (ask google) will tell you which
PID is responsible.
--
Mike Gerdts
http://mgerdts.blogspot.com/
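If you don't have the toolkit handy, the script boils down to roughly
this one-liner against the sysinfo provider (a sketch):

  # dtrace -n 'sysinfo:::xcal { @[pid, execname] = count(); }'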
>
> What's the command to show cross calls?
>
mpstat(1M)
Example output:
$ mpstat 1
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0   16   0    0   416  316  485   16    0    0    0   618    7   3   0  90
  0    6   0    0   425  324  488    2    0    0    0   579    4   2   0  94
On Thu, Oct 18, 2007 at 10:16:52AM -0400, Bill Sommerfeld wrote:
> On Thu, 2007-10-18 at 08:04 -0500, Gary Mills wrote:
> > Here's a suggestion on the cause:
> >
> > The root problem seems to be an interaction between Solaris' concept
> > of global memory consistency and the fact that Cyrus spawns many
> > processes that all memory map (mmap) the same file.
On 10/18/07, Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
> that sounds like a somewhat mangled description of the cross-calls done
> to invalidate the TLB on other processors when a page is unmapped.
> (it certainly doesn't happen on *every* update to a mapped file).
I've seen systems running Veritas
On 10/18/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Which marketing documentation (not person) says that ?
It was a person giving a technology brief in the past 6 weeks or so.
It kinda went like "so long as they link against the bundled openssl
and not a private copy of openssl they will aut
Hi,
snv_74, x4500, 48x 500GB, 16GB RAM, 2x dual core
# zpool create test c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c4t0d0 c4t1d0 c4t2d0
c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t1d0 c5t2d0 c5t3d0 c5t5d0 c5t6d0 c5t7d0
c6t0d0 c6t1
On Thu, 2007-10-18 at 08:04 -0500, Gary Mills wrote:
> Here's a suggestion on the cause:
>
> The root problem seems to be an interaction between Solaris' concept
> of global memory consistency and the fact that Cyrus spawns many
> processes that all memory map (mmap) the same file. Whenever
Mike Gerdts wrote:
> On 10/18/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Unfortunately it doesn't yet because ssh can't yet use the N2 crypto -
>> because it uses OpenSSL's libcrypto without using the ENGINE API.
>
> Marketing needs to get in line with the technology. The word I
> received
On 10/18/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Unfortunately it doesn't yet because ssh can't yet use the N2 crypto -
> because it uses OpenSSL's libcrypto without using the ENGINE API.
Marketing needs to get in line with the technology. The word I
received was that any application tha
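A hedged way to check from the command line whether the bundled OpenSSL
actually reaches the hardware through the ENGINE API (the pkcs11 engine
on Solaris):

  $ openssl engine
  $ openssl speed -engine pkcs11 rsa1024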
On 10/17/07, Claus Guttesen <[EMAIL PROTECTED]> wrote:
> Thank you for the clarification. When mounting the same partitions
> from a windows-client I get r/w access to both the parent- and
> child-partition.
That is because the Windows clients are mounting via SMB
(Samba) and since Samba
Mike Gerdts wrote:
> On 10/18/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> zfs send | ssh -C | zfs recv
>
> I was going to suggest this, but I think (I could be wrong...) that
> ssh would then use zlib for compression and that ssh is still a
> single-threaded process. This has two effects:
>
Does anyone on this mailing list have an idea what went wrong with
ZFS and Cyrus IMAP? Here's an excerpt that explains the problem:
About a week before classes actually start is when all the kids start
moving back into town and mailing all their buds. We saw process
numbers go from 500-ish
On 10/18/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> zfs send | ssh -C | zfs recv
I was going to suggest this, but I think (I could be wrong...) that
ssh would then use zlib for compression and that ssh is still a
single-threaded process. This has two effects:
1) gzip compression instead of
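Spelled out with external compression instead of ssh's built-in zlib
(names hypothetical; gzip here is single-threaded too, so this is a
sketch rather than a cure):

  # zfs send tank/fs@snap | gzip -1 | ssh host 'gunzip | zfs recv tank/fs'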
Hi,
I see it's still not fixed. I've checked with snv_74 and the bug is still
there.
Is someone working on it? Do we have any ETA?
Vic Cornell wrote:
> Hi All,
>
> I went to storage expo in the UK yesterday. During a long train journey
> back to the west country my boss and I were discussing the joys of storage
> management in a production environment and where ZFS would be able to help.
> Whilst it would be great if ZFS were a clustered file-system and
Richard Elling wrote:
> Do not assume that a compressed file system will send compressed. IIRC, it
> does not.
>
> But since UNIX is a land of pipe dreams, you can always compress anyway :-)
> zfs send ... | compress | ssh ... | uncompress | zfs receive ...
zfs send | ssh -C | zfs recv
--
Hi All,
I went to storage expo in the UK yesterday. During a long train journey
back to the west country my boss and I were discussing the joys of storage
management in a production environment and where ZFS would be able to help.
Whilst it would be great if ZFS were a clustered file-system and
Hi,
I just reinstalled my machine with onnv build 75. When I try to import the
pool, I get an error. The pool was created with a slice, c1t1d0s7, and a whole
disk, c1t2d0s0. How do I fix the error? Below is the output from zpool and zdb.
# zpool import
pool: tank
id: 8219303556773256880
state: U
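A hedged first diagnostic step would be to read the vdev labels off each
device directly and compare the pool GUIDs (a sketch, not a fix):

  # zdb -l /dev/dsk/c1t1d0s7
  # zdb -l /dev/dsk/c1t2d0s0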