See http://www.apps.ietf.org/rfc/rfc3721.html#sec-2
and the CLI could be something like:
zfs set shareiscsi=on shareiscsiname=<local_name> tank
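For comparison, the existing iscsitadm(1M) interface already takes a human-readable local name as the target-label operand; a sketch of what that looks like today (the zvol path and the name "mytank-vol" are made up for illustration):

```shell
# Today: create a target backed by a zvol, giving it a readable
# local name ("mytank-vol") instead of relying on the raw IQN.
iscsitadm create target -b /dev/zvol/rdsk/tank/vol1 mytank-vol

# List targets to see the local name next to the generated IQN
iscsitadm list target -v
```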
Ced.
--
Cedric BRINER
Geneva - Switzerland
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
cedric briner wrote:
hello dear community,
Is there a way to have a ``local_name'' as defined in iscsitadm(1M) when
you shareiscsi a zvol? This way it would give an even easier
way to identify a device through its IQN.
Ced.
Okay, no reply from you so... maybe I didn't make myself clear:
Richard Elling wrote:
cedric briner wrote:
Hello ZFS community,
I do not have such a strong love for *probability*. And even less
love when probability characterizes the true, solid and tangible stuff
that I have to administer.
I started doing some math (don't get scared, I'm no mathematician) about:
- raidz groups of 6, 7, 8 or 9 disks
- ... and other things that I have not even thought about
Any thoughts?
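To give a flavour of the math involved, here is a back-of-the-envelope sketch under my own assumptions (not from the original mail): disks fail independently, each with probability p over the window of interest, and a single-parity raidz group of n disks loses data once a second disk fails:

```latex
% P(data loss) for a single-parity raidz group of n disks:
% the group survives 0 failures or exactly 1 failure;
% any outcome with 2 or more failed disks is data loss.
P_{\mathrm{loss}}(n,p) = 1 - (1-p)^{n} - n\,p\,(1-p)^{n-1}
```

For small p this is roughly $\binom{n}{2} p^2$, so going from a 6-disk to a 9-disk raidz group a bit more than doubles the loss probability.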
Ced.
--
Cedric BRINER
Geneva - Switzerland
... enabled, and with zil_enable it is, in that case, faster.
So why are you recommending that I disable the write cache?
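For the record, the ZIL switch being discussed was, on builds of that era, an /etc/system tunable; sketched below from memory, so verify the name on your release (and note it sacrifices synchronous semantics, which is unsafe for NFS clients on power failure):

```
* /etc/system fragment (historical OpenSolaris tunable):
* globally disable the ZFS intent log; fsync()'d data
* can then be lost on power failure.
set zfs:zil_disable = 1
```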
--
Cedric BRINER
Geneva - Switzerland
... just take ZFS and disable the ZIL?
Thanks in advance for your clarifications.
Ced.
P.S. Do some of you know the best way to send an email containing
many questions? Should I create a thread for each of them
next time?
--
Cedric BRINER
Geneva - Switzerland
... my situation.
I mean, I want to have a cheap and reliable NFS service. Why should I
buy an expensive `Complex Storage with NVRAM' and not just buy a machine
with 8 IDE HDs?
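The cheap-box setup I have in mind would be something like this (device names are hypothetical; raidz2 chosen so that two of the eight IDE disks can fail without data loss):

```shell
# Hypothetical: 8 IDE disks in one double-parity raidz2 group
zpool create tank raidz2 c1d0 c2d0 c3d0 c4d0 c5d0 c6d0 c7d0 c8d0

# Carve out a filesystem and export it over NFS
zfs create tank/export
zfs set sharenfs=on tank/export
```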
Ced.
--
Cedric BRINER
Geneva - Switzerland
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_and_Complex_Storage_Considerations
I wonder if there is a way to tell the OS to ignore the fsync flush
commands since they are likely to survive a power outage.
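If I remember right, later builds grew a tunable for exactly this case; sketched below from memory, so check the exact name on your release (it is only safe when the array's write cache really is non-volatile):

```
* /etc/system fragment: stop ZFS from issuing cache-flush
* commands to devices; intended only for arrays whose cache
* is battery-backed NVRAM.
set zfs:zfs_nocacheflush = 1
```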
Ced.
--
Cedric BRINER
Geneva - Switzerland
the thing is really fast, and quite reliable. Not to
mention the sexy blue lights that tell you it's hummin'
yeah right... quite sexy!
-Andy
Ced.
* (a MiB is a mebibyte, 2^20 bytes; ref: http://en.wikipedia.org/wiki/Mebibyte)
--
Cedric BRINER
Geneva - Switzerland
Is there a way to tell the Xserve Raid to ignore ``fsync''
requests?
After the announcement that zfs will be included in Tiger, I'd be
surprised if the Xserve Raid did not include such a configuration.
Ced.
--
Cedric BRINER
Geneva - Switzerland
... directly (without iscsi) to the node.
I'm not able to do a zpool import on the IDE HD. I tried to do a *loopback*
iscsi without success :( . This is sad, because I was thinking of moving
the z-node onto the i-node. But I won't be able to do this due to the
behaviour of iscsi.
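For anyone wanting to reproduce the loopback attempt, the initiator side would look roughly like this (the IQN is made up; syntax from iscsiadm(1M) as I remember it, so double-check on your build):

```shell
# On the same host that runs the target:
# point the initiator at localhost via static discovery
iscsiadm add static-config iqn.1986-03.com.sun:02:mytarget,127.0.0.1:3260
iscsiadm modify discovery --static enable

# Then check whether the LUN shows up on the initiator side
iscsiadm list target -S
```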
Ced.
... and that you have at least 2 iSCSI clients which will
consolidate this space with zfs. And suddenly, you can see with zpool
status that a disk is dead. So I have to be able to replace this disk,
and for this I have to know on which one of the 4 machines it resides
and which disk it is.
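One way to make that mapping, assuming each of the 4 boxes runs the Solaris iSCSI target (pool name hypothetical): take the IQN that zpool reports for the faulted device and match it against the target list on each node:

```shell
# On the ZFS head: find the faulted device and its IQN
zpool status tank

# On each of the 4 target machines: list local target names,
# their IQNs, and the backing store, to identify the physical disk
iscsitadm list target -v
```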
... I've
followed the advice of Roch to try the different types of iscsi target:
disk|raw|tape.
But unfortunately the only type that accepts
``iscsitadm create target -b /dev/dsk/c0d0s5'' is the type disk, which
doesn't work.
Any idea what I could do to improve this?
Thanks.