hi
IMHO, upgrade to Solaris 11 if possible
and use the COMSTAR-based iSCSI target
Sent from my iPad
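For reference, a minimal COMSTAR sketch for exporting a zvol as an iSCSI LUN (the pool, volume name, and size below are invented for illustration):
  svcadm enable stmf                                  # COMSTAR framework
  svcadm enable -r svc:/network/iscsi/target:default  # iSCSI target service
  zfs create -V 100G tank/backupvol                   # zvol to back the LUN
  stmfadm create-lu /dev/zvol/rdsk/tank/backupvol     # prints the LU GUID
  stmfadm add-view 600144F0...                        # use the GUID printed above
  itadm create-target                                 # create a default iSCSI target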
On Jan 26, 2012, at 23:25, Ivan Rodriguez ivan...@gmail.com wrote:
Dear fellows,
We have a backup server with a zpool size of 20 TB, we transfer
information using zfs snapshots every day (we have around 300
On Fri, Jan 27, 2012 at 03:25:39PM +1100, Ivan Rodriguez wrote:
We have a backup server with a zpool size of 20 TB, we transfer
information using zfs snapshots every day (we have around 300 fs on
that pool),
the storage is a dell md3000i connected by iscsi, the pool is
currently version 10,
Hi Ivan,
On Jan 26, 2012, at 8:25 PM, Ivan Rodriguez wrote:
Dear fellows,
We have a backup server with a zpool size of 20 TB, we transfer
information using zfs snapshots every day (we have around 300 fs on
that pool),
the storage is a dell md3000i connected by iscsi, the pool is
Dear fellows,
We have a backup server with a zpool size of 20 TB, we transfer
information using zfs snapshots every day (we have around 300 fs on
that pool),
the storage is a dell md3000i connected by iscsi, the pool is
currently version 10, the same storage is connected
to another server with a
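For context, the daily transfer in a setup like this is typically an incremental zfs send piped into zfs receive; a rough sketch (pool, dataset, snapshot, and host names are invented):
  zfs snapshot -r pool/data@2012-01-27
  zfs send -R -i @2012-01-26 pool/data@2012-01-27 | \
      ssh backuphost zfs receive -Fdu backuppool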
As it occasionally happens, this is not another whining question
from me but rather a statement. Or a progress report.
It recently occurred to me that since I fail to work around my
home NAS freezing while it tries to import the dcpool of my
setup, and the freeze seems to be in-kernel, I can try
Hi,
I'm crossposting this to zfs as i'm not sure which bit is to blame here.
I've been having this issue that i cannot really fix myself:
I have an OI 148 server, which hosts a lot of disks on SATA
controllers. Now it's full and some data-moving work needs to be done,
so i've acquired another
Hi friends,
i have a problem. I have a file server which initiates large volumes with the iscsi
initiator. The problem is that on the zfs side it shows no available space, but i am 100% sure
there is at least 5 TB of space. The problem is that because the zfs pool shows 0 available,
all iscsi connections got lost and all
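One way to see where the space went in a case like this (pool and volume names are hypothetical): zvols carry a refreservation equal to their full size by default, so the pool can report 0 bytes available even though the volumes themselves are far from full.
  zpool list tank                        # raw pool size and allocation
  zfs list -o space -r tank              # per-dataset usedbyrefreservation
  zfs get refreservation tank/iscsivol   # sparse volumes (zfs create -s -V ...) avoid this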
At the time we had it setup as 3 x 5 disk raidz, plus a hot spare. These 16
disks were in a SAS cabinet, and the slog was on the server itself. We are
now running 2 x 7 raidz2 plus a hot spare and slog, all inside the cabinet.
Since the disks are 1.5T, I was concerned about resilver times
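For illustration, a 2 x 7 raidz2 layout with a hot spare and a log device would be created roughly like this (disk names are invented):
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
      spare c1t14d0 \
      log c2t0d0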
Systems Analyst II
TAMU DRGS
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of JOrdan
Sent: Friday, April 16, 2010 2:42 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS for ISCSI ntfs backing store
For ease of administration with everyone in the department i'd prefer to keep
everything consistent in the windows world.
I have used build 124 in this capacity, although I did zero tuning. I had about
4T of data on a single 5T iSCSI volume over gigabit. The Windows server was a
VM, and the OpenSolaris box is a Dell 2950 with 16G of RAM, an X25-E for the ZIL, and no
L2ARC cache device. I used COMSTAR.
It was being used
I'm looking to move our file storage from Windows to Opensolaris/zfs. The
windows box will be connected through 10g for iscsi to the storage. The windows
box will continue to serve the windows clients and will be hosting
approximately 4TB of data.
The physical box is a sunfire x4240, single
hello
do you want to use it as an SMB file server or do you want to have other
windows services? if you want to use it as a file server only, i would suggest
using the built-in CIFS server.
iscsi will always be slower than the native CIFS server, and you get snapshots via
the Windows 'Previous Versions' property
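A minimal sketch of that built-in CIFS route (pool, dataset, and share names are made up):
  svcadm enable -r smb/server                    # kernel CIFS service
  zfs create -o casesensitivity=mixed tank/shares
  zfs set sharesmb=name=shares tank/shares       # publish as an SMB share
  smbadm join -w WORKGROUP                       # or join a domain with: smbadm join -u admin example.com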
-Original Message-
From: Arnaud Brand
Sent: Saturday, January 16, 2010 01:54
To: zfs-discuss@opensolaris.org
Subject: Zfs over iscsi bad status
I was testing zfs over iscsi (with COMSTAR over a zvol) and got some errors.
Target and initiator are on the same host.
I've copy-pasted
I was testing zfs over iscsi (with COMSTAR over a zvol) and got some errors.
Target and initiator are on the same host.
I've copy-pasted an excerpt of zpool status hereafter.
The pool (tank) containing the iscsi-shared zvol (tank/tsmvol) is healthy and
shows no errors.
But the zpool (tsmvol) on
I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got
nearly identical results to having the disks on iSCSI:
iSCSI
IOPS: 1003.8
MB/s: 7.8
Avg Latency (ms): 27.9
NFS
IOPS: 1005.9
MB/s: 7.9
Avg Latency (ms): 29.7
Interesting!
Here is how the pool was behaving during the
On Fri, 26 Jun 2009, Scott Meilicke wrote:
I ran the RealLife iometer profile on NFS based storage (vs. SW
iSCSI), and got nearly identical results to having the disks on
iSCSI:
Both of them are using TCP to access the server.
So it appears NFS is doing syncs, while iSCSI is not (See my
On Fri, Jun 26, 2009 at 6:04 PM, Bob
Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On Fri, 26 Jun 2009, Scott Meilicke wrote:
I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI),
and got nearly identical results to having the disks on iSCSI:
Both of them are using TCP to
if those servers are on physical boxes right now i'd do some perfmon
captures and add up the IOPS.
Using perfmon to get a sense of what is required is a good idea. Use the 95th
percentile to be conservative. The counters I have used are in the Physical
Disk object. Don't ignore the latency counters
sm == Scott Meilicke no-re...@opensolaris.org writes:
sm Some storage will flush their caches despite the fact that the
sm NVRAM protection makes those caches as good as stable
sm storage. [...] ZFS also issues a flush every time an
sm application requests a synchronous write
Isn't that section of the evil tuning guide you're quoting actually about
checking if the NVRAM/driver connection is working right or not?
Miles, yes, you are correct. I just thought it was interesting reading about
how syncs and such work within ZFS.
Regarding my NFS test, you remind me that
Hi,
i'm getting involved in a pre-production test and want to be sure of the
means i'll have to use.
Take 2 SunFire X4150s and 1 Cisco 3750 Gb switch,
1 private VLAN on the Gb ports of the switch.
1 X4150 is going to be the ESX4 aka vSphere server (1
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
In my tests ESX4 seems to work fine with this, but i haven't stressed it yet ;-)
Therefore, i don't know if the 1Gb FDuplex per port will be enough, and i
don't know either whether i have to put some sort of redundant access from ESX to
the SAN, etc.
The first 2 disks are a hardware mirror of 146GB with a Sol10 UFS filesystem on it.
The other 6 will be used as a raidz2 ZFS pool of 535G, with
compression and shareiscsi=on.
I'm going to CHAP-protect it soon...
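As a rough sketch of that layout with the legacy shareiscsi property (device and dataset names invented):
  zpool create tank raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
  zfs set compression=on tank
  zfs create -V 500g tank/esxvol
  zfs set shareiscsi=on tank/esxvol   # legacy iscsitgt target, pre-COMSTAR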
you're not going to get the random read/write performance you need
for a VM backend out
See this thread for information on load testing for vmware:
http://communities.vmware.com/thread/73745?tstart=0start=0
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare what
you get with what
Bottom line with virtual machines is that your IO will be random by
definition since it all goes into the same pipe. If you want to be
able to scale, go with RAID 1 (mirrored) vdevs. And don't skimp on the memory.
Our current experience hasn't shown a need for an SSD for the ZIL but
it might be
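For illustration, a pool of mirrored (RAID 1) vdevs that stripes writes across the mirrors; disk names are hypothetical:
  zpool create vmpool \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0
  # grow later by adding pairs: zpool add vmpool mirror c1t6d0 c1t7d0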
milosz wrote:
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare
what you get with what you need. Just because striping 3 mirrors *will* give
David Magda wrote:
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
In my tests ESX4 seems to work fine with this, but i haven't stressed it yet ;-)
Therefore, i don't know if the 1Gb FDuplex per port will be enough, and i
don't know
- the VMs will be mostly low-IO systems:
-- WS2003 with Trend OfficeScan, WSUS (for 300 XP) and RDP
-- Solaris 10 with SRSS 4.2 (Sun Ray server)
(File and DB servers won't move to VM+SAN in the near future)
I thought - but could be wrong - that those systems could afford a high
latency
On Jun 24, 2009, at 16:54, Philippe Schwarz wrote:
Out of curiosity, any reason why you went with iSCSI and not NFS? There
seems
to be some debate on which is better under which circumstances.
iSCSI instead of NFS ?
Because of the overwhelming difference in transfer rate between
them, In
Hello Richard,
Wednesday, October 15, 2008, 6:39:49 PM, you wrote:
RE Archie Cowan wrote:
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
exporting zvols as iscsi
Robert Milkowski wrote:
Hello Richard,
Wednesday, October 15, 2008, 6:39:49 PM, you wrote:
RE Archie Cowan wrote:
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
On Thu, Oct 16, 2008 at 03:50:19PM +0800, Gray Carper wrote:
Sidenote: Today we made eight network/iSCSI related tweaks that, in
aggregate, have resulted in dramatic performance improvements (some I
just hadn't gotten around to yet, others suggested by Sun's Mertol
Ozyoney)...
Gray,
Sidenote: Today we made eight network/iSCSI related tweaks that, in
aggregate, have resulted in dramatic performance improvements
(some I
just hadn't gotten around to yet, others suggested by Sun's Mertol
Ozyoney)...
- disabling the Nagle algorithm on the head node
-
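On Solaris of that vintage, Nagle is usually disabled with the tcp_naglim_def tunable; a sketch (takes effect immediately, not persistent across reboot):
  ndd -set /dev/tcp tcp_naglim_def 1   # a 1-byte limit effectively disables Nagle
  ndd -get /dev/tcp tcp_naglim_def     # verify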
Hey, Jim! Thanks so much for the excellent assist on this - much better than
I could have ever answered it!
I thought I'd add a little bit on the other four...
- raising ddi_msix_alloc_limit to 8
For PCI cards that use up to 8 interrupts, which our 10GbE adapters do. The
previous value of 2
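That limit is an /etc/system tunable; a minimal sketch (a reboot is needed for it to take effect):
  # /etc/system
  set ddi_msix_alloc_limit=8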
Some of that is very worrying, Miles. Do you have bug IDs for any of those
problems?
I'm guessing the problem of the device being reported ok after the reboot could
be this one:
http://bugs.opensolaris.org/view_bug.do?bug_id=6582549
And could the errors after the reboot be one of these?
Well obviously recovery scenarios need testing, but I still don't see it being
that bad. My thinking on this is:
1. Loss of a server is very much the worst case scenario. Disk errors are
much more likely, and with raid-z2 pools on the individual servers this should
not pose a problem. I
Howdy!
Very valuable advice here (and from Bob, who made similar comments - thanks,
Bob!). I think, then, we'll generally stick to 128K recordsizes. In the case
of databases, we'll stray as appropriate, and we may also stray with the HPC
compute cluster if we can demonstrate that it is worth
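For example (dataset names and block sizes are hypothetical), recordsize is set per filesystem, ideally before the data is written:
  zfs get recordsize tank                  # default is 128K
  zfs create -o recordsize=16k tank/db     # e.g. match a 16K database block size
  zfs create -o recordsize=64k tank/shares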
Miles makes a good point here, you really need to look at how this copes with
various failure modes.
Based on my experience, iSCSI is something that may cause you problems. When I
tested this kind of setup last year I found that the entire pool hung for 3
minutes any time an iSCSI volume went
Oops - one thing I meant to mention: We only plan to cross-site replicate
data for those folks who require it. The HPC data crunching would have no
use for it, so that filesystem wouldn't be replicated. In reality, we only
expect a select few users, with relatively small filesystems, to actually
r == Ross [EMAIL PROTECTED] writes:
r 1. Loss of a server is very much the worst case scenario.
r Disk errors are much more likely, and with raid-z2 pools on
r the individual servers
yeah, it kind of sucks that the slow resilvering speed enforces this
two-tier scheme.
Also if
[EMAIL PROTECTED] said:
It's interesting how the speed and optimisation of these maintenance
activities limit pool size. It's not just full scrubs. If the filesystem is
subject to corruption, you need a backup. If the filesystem takes two months
to back up / restore, then you need really
pNFS is NFS-centric of course, and it is not yet stable, is it? btw,
what is the ETA for pNFS putback?
On Thu, 2008-10-16 at 12:20 -0700, Marion Hakanson wrote:
[EMAIL PROTECTED] said:
It's interesting how the speed and optimisation of these maintenance
activities limit pool size. It's
On Thu, Oct 16, 2008 at 12:20:36PM -0700, Marion Hakanson wrote:
I'll chime in here with feeling uncomfortable with such a huge ZFS pool,
and also with my discomfort of the ZFS-over-ISCSI-on-ZFS approach. There
just seem to be too many moving parts depending on each other, any one of
which
[EMAIL PROTECTED] said:
In general, such tasks would be better served by T5220 (or the new T5440 :-)
and J4500s. This would change the data paths from:
client --net-- T5220 --net-- X4500 --SATA-- disks
to
client --net-- T5440 --SAS-- disks
With the J4500 you get the same storage
nw == Nicolas Williams [EMAIL PROTECTED] writes:
nw But does it work well enough? It may be faster than NFS if
You're talking about different things. Gray is using NFS period
between the storage cluster and the compute cluster, no iSCSI.
Gray's (``does it work well enough''): iSCSI
On Thu, Oct 16, 2008 at 04:30:28PM -0400, Miles Nordin wrote:
nw == Nicolas Williams [EMAIL PROTECTED] writes:
nw But does it work well enough? It may be faster than NFS if
You're talking about different things. Gray is using NFS period
between the storage cluster and the compute
nw == Nicolas Williams [EMAIL PROTECTED] writes:
mh == Marion Hakanson [EMAIL PROTECTED] writes:
nw I was replying to Marion's [...]
nw ZFS-over-iSCSI could certainly perform better than NFS,
better than what, ZFS-over-'mkfile'-files-on-NFS? No one was
suggesting that. Do you mean
On Oct 16, 2008, at 15:20, Marion Hakanson wrote:
For the stated usage of the original poster, I think I would aim
toward
turning each of the Thumpers into an NFS server, configuring the head-node
as a pNFS/NFSv4.1
It's a shame that Lustre isn't available on Solaris yet either.
[EMAIL PROTECTED] said:
but Marion's is not really possible at all, and won't be for a while with
other groups' choice of storage-consumer platform, so it'd have to be
GlusterFS or some other goofy fringe FUSEy thing or not-very-general crude
in-house hack.
Well, of course the magnitude of
On Wed, 15 Oct 2008, Gray Carper wrote:
be good to set different recordsize parameters for each one. Do you have any
suggestions on good starting sizes for each? I'd imagine filesharing might
benefit from a relatively small record size (64K?), image-based backup
targets might like a pretty
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
exporting zvols as iscsi targets and mirroring them for high availability
with T5220s.
Initially, our performance was also
Howdy, Brent!
Thanks for your interest! We're pretty enthused about this project over here
and I'd be happy to share some details with you (and anyone else who cares
to peek). In this post I'll try to hit the major configuration
bullet-points, but I can also throw you command-line level specifics
Archie Cowan wrote:
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
exporting zvols as iscsi targets and mirroring them for high availability
with T5220s.
In
Hi Gray,
You've got a nice setup going there, a few comments:
1. Do not tune ZFS without a proven test-case to show otherwise, except...
2. For databases. Tune recordsize for that particular FS to match DB recordsize.
Few questions...
* How are you divvying up the space ?
* How are you taking
Am I right in thinking your top level zpool is a raid-z pool consisting of six
28TB iSCSI volumes? If so that's a very nice setup, it's what we'd be doing if
we had that kind of cash :-)
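For reference, a single raidz vdev over six iSCSI LUNs would be built roughly like this (device names are invented; the LUNs appear as ordinary cXtYd0 devices once the initiator sees them):
  zpool create bigpool raidz \
      c3t0d0 c3t1d0 c3t2d0 \
      c3t3d0 c3t4d0 c3t5d0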
gc == Gray Carper [EMAIL PROTECTED] writes:
gc 5. The NAS head node has wrangled up all six of the iSCSI
gc targets
are you using raidz on the head node? It sounds like simple striping,
which is probably dangerous with the current code. This kind of sucks
because with simple striping
r == Ross [EMAIL PROTECTED] writes:
r Am I right in thinking your top level zpool is a raid-z pool
r consisting of six 28TB iSCSI volumes? If so that's a very
r nice setup,
not if it scrubs at 400GB/day, and 'zfs send' is uselessly slow. Also
I am thinking the J4500 Richard
On Wed, 15 Oct 2008, Marcelo Leal wrote:
Are you talking about what he had in the logic of the configuration at top
level, or you are saying his top level pool is a raidz?
I would think his top level zpool is a raid0...
ZFS does not support RAID0 (simple striping).
Bob
On 15 October, 2008 - Bob Friesenhahn sent me these 0,6K bytes:
On Wed, 15 Oct 2008, Marcelo Leal wrote:
Are you talking about what he had in the logic of the configuration at top
level, or you are saying his top level pool is a raidz?
I would think his top level zpool is a raid0...
So, there is no raid10 in a solaris/zfs setup?
I'm talking about no redundancy...
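For what it's worth, the usual ZFS equivalent of RAID 10 is simply a pool of mirrored vdevs (disk names below are placeholders):
  zpool create mypool mirror disk1 disk2 mirror disk3 disk4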
On Wed, 15 Oct 2008, Tomas Ögren wrote:
ZFS does not support RAID0 (simple striping).
zpool create mypool disk1 disk2 disk3
Sure it does.
This is load-share, not RAID0. Also, to answer the other fellow,
since ZFS does not support RAID0, it also does not support RAID 1+0
(10). :-)
With
Bob Friesenhahn wrote:
On Wed, 15 Oct 2008, Tomas Ögren wrote:
ZFS does not support RAID0 (simple striping).
zpool create mypool disk1 disk2 disk3
Sure it does.
This is load-share, not RAID0. Also, to answer the other fellow,
since ZFS does not support RAID0, it also does not support
Hey, all!
We've recently used six x4500 Thumpers, all publishing ~28TB iSCSI targets over
ip-multipathed 10GB ethernet, to build a ~150TB ZFS pool on an x4200 head node.
In trying to discover optimal ZFS pool construction settings, we've run a
number of iozone tests, so I thought I'd share
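As an illustration of the kind of iozone run involved (the parameters here are hypothetical, not the exact ones used):
  # sequential write/read plus random read/write, 8 GB per thread, 128 KB records, 4 threads
  iozone -i 0 -i 1 -i 2 -r 128k -s 8g -t 4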
Howdy!
Sounds good. We'll upgrade to 1.1 (b101) as soon as it is released, re-run
our battery of tests, and see where we stand.
Thanks!
-Gray
On Tue, Oct 14, 2008 at 8:47 PM, James C. McPherson [EMAIL PROTECTED]
wrote:
Gray Carper wrote:
Hello again! (And hellos to Erast, who has been a
Hey there, James!
We're actually running NexentaStor v1.0.8, which is based on b85. We haven't
done any tuning ourselves, but I suppose it is possible that Nexenta did. If
there's something specific you have in mind, I'd be happy to look for it.
Thanks!
-Gray
On Tue, Oct 14, 2008 at 8:10 PM,
Gray Carper wrote:
Hey there, James!
We're actually running NexentaStor v1.0.8, which is based on b85. We
haven't done any tuning ourselves, but I suppose it is possible that
Nexenta did. If there's something specific you'd like me to look for,
I'd be happy to.
Hi Gray,
So build 85
Just a random spectator here, but I think the artifacts you're seeing here are not
due to file size, but rather due to record size.
What is the ZFS record size ?
On a personal note, I wouldn't do non-concurrent (?) benchmarks. They are at
best useless and at worst misleading for ZFS
- Akhilesh.
On Tue, 14 Oct 2008, Gray Carper wrote:
So, how concerned should we be about the low scores here and there?
Any suggestions on how to improve our configuration? And how excited
should we be about the 8GB tests? ;
The level of concern should depend on how you expect your storage pool
to
James, all serious ZFS bug fixes are back-ported to b85, as well as the marvell
and other SATA drivers. Not everything is possible to back-port of
course, but I would say all critical things are there. This includes ZFS
ARC optimization patches, for example.
On Tue, 2008-10-14 at 22:33 +1000, James C.
On Tue, Oct 14, 2008 at 12:31 AM, Gray Carper [EMAIL PROTECTED] wrote:
Hey, all!
We've recently used six x4500 Thumpers, all publishing ~28TB iSCSI targets
over ip-multipathed 10GB ethernet, to build a ~150TB ZFS pool on an x4200
head node. In trying to discover optimal ZFS pool
Gray Carper wrote:
Hey, all!
We've recently used six x4500 Thumpers, all publishing ~28TB iSCSI
targets over ip-multipathed 10GB ethernet, to build a ~150TB ZFS pool on
an x4200 head node. In trying to discover optimal ZFS pool construction
settings, we've run a number of iozone tests, so I
Gray Carper wrote:
Hello again! (And hellos to Erast, who has been a huge help to me many,
many times! :)
As I understand it, Nexenta 1.1 should be released in a matter of weeks
and it'll be based on build 101. We are waiting for that with bated
breath, since it includes some very
Hello again! (And hellos to Erast, who has been a huge help to me many, many
times! :)
As I understand it, Nexenta 1.1 should be released in a matter of weeks and
it'll be based on build 101. We are waiting for that with bated breath,
since it includes some very important Active Directory
Erast Benson wrote:
James, all serious ZFS bug fixes are back-ported to b85, as well as the marvell
and other SATA drivers. Not everything is possible to back-port of
course, but I would say all critical things are there. This includes ZFS
ARC optimization patches, for example.
Excellent!
James
--
Hey there, Bob!
Looks like you and Akhilesh (thanks, Akhilesh!) are driving at a similar,
very valid point. I'm currently using the default recordsize (128K) on all
of the ZFS pools (those of the iSCSI target nodes and the aggregate pool on
the head node).
I should've mentioned something about
Hello all, sorry if somebody already asked this or not. I was playing today with
iSCSI and I was able to create a zpool and then via iSCSI I can see it on two
other hosts. I was curious if I could use zfs to have it shared on those two
hosts but apparently I was unable to do it for obvious
2007/10/12, Krzys [EMAIL PROTECTED]:
Hello all, sorry if somebody already asked this or not. I was playing today with
iSCSI and I was able to create a zpool and then via iSCSI I can see it on two
other hosts. I was curious if I could use zfs to have it shared on those two
hosts but apparently
I was curious if I could use zfs to have it shared on those two hosts
no, that's not possible for now.
but apparently I was unable to do it for obvious reasons.
you will corrupt your data!
On my Linux Oracle RAC I was using OCFS which works just as I need it
yes, because OCFS is built for
roland wrote:
Are there any solutions out there of this kind?
i'm not that deep into solaris, but iirc there isn't one for free.
veritas is quite popular, but you need to spend lots of bucks for this.
maybe SAM-QFS ?
We have lots of customers using shared QFS with RAC.
QFS is on the road to open
Hello Thomas,
Saturday, March 24, 2007, 1:06:47 AM, you wrote:
The problem is that the failure modes are very different for networks and
presumably reliable local disk connections. Hence NFS has a lot of error
handling code and provides well understood error handling semantics. Maybe
what
Hi Robert,
On Sun, 25 Mar 2007, Robert Milkowski wrote:
The problem is that the failure modes are very different for networks and
presumably reliable local disk connections. Hence NFS has a lot of error
handling code and provides well understood error handling semantics. Maybe
what you really
On Mar 25, 2007, at 06:14, Thomas Nau wrote:
We use a cluster ;) but in the backend it doesn't solve the sync
problem as you mention
The StorageTek Availability Suite was recently open-sourced:
http://www.opensolaris.org/os/project/avs/
Thomas Nau [EMAIL PROTECTED] wrote:
fflush(fp);
fsync(fileno(fp));
fclose(fp);
and check errors.
(It's remarkable how often people get the above sequence wrong and only
do something like fsync(fileno(fp)); fclose(fp);
Thanks for clarifying! Seems I really need to
On March 23, 2007 11:06:33 PM -0700 Adam Leventhal [EMAIL PROTECTED] wrote:
On Fri, Mar 23, 2007 at 11:28:19AM -0700, Frank Cusack wrote:
I'm in a way still hoping that it's an iSCSI-related problem, as
detecting dead hosts in a network can be a non-trivial problem and it
takes quite some time
On Sat, Mar 24, 2007 at 11:20:38AM -0700, Frank Cusack wrote:
iscsi doesn't use TCP, does it? Anyway, the problem is really transport
independent.
It does use TCP. Were you thinking UDP?
or its own IP protocol. I wouldn't have thought iSCSI would want to be
subject to the vagaries of
Dear all.
I've set up the following scenario:
Galaxy 4200 running OpenSolaris build 59 as iSCSI target; remaining
diskspace of the two internal drives with a total of 90GB is used as zpool
for the two 32GB volumes exported via iSCSI
The initiator is an up to date Solaris 10 11/06 x86 box
On Fri, 23 Mar 2007, Roch - PAE wrote:
I assume the rsync is not issuing fsyncs (and its files are
not opened O_DSYNC). If so, rsync just works against the
filesystem cache and does not commit the data to disk.
You might want to run sync(1M) after a successful rsync.
A larger rsync would
On March 23, 2007 6:51:10 PM +0100 Thomas Nau [EMAIL PROTECTED] wrote:
Thanks for the hints, but this would make our worst nightmares come
true. At least they could, because it means that we would have to check
every application handling critical data and I think it's not the apps
I'd tend to disagree with that. POSIX/SUS does not guarantee data makes
it to disk until you do an fsync() (or open the file with the right flags,
or other techniques). If an application REQUIRES that data get to disk,
it really MUST DTRT.
Indeed; want your data safe? Use:
Thomas Nau wrote:
Dear all.
I've setup the following scenario:
Galaxy 4200 running OpenSolaris build 59 as iSCSI target; remaining
diskspace of the two internal drives with a total of 90GB is used as
zpool for the two 32GB volumes exported via iSCSI
The initiator is an up to date Solaris 10
Dear Fran Casper
I'd tend to disagree with that. POSIX/SUS does not guarantee data makes
it to disk until you do an fsync() (or open the file with the right flags,
or other techniques). If an application REQUIRES that data get to disk,
it really MUST DTRT.
Indeed; want your data safe?
Richard,
Like this?
disk--zpool--zvol--iscsitarget--network--iscsiclient--zpool--filesystem--app
exactly
I'm in a way still hoping that it's an iSCSI-related problem, as detecting
dead hosts in a network can be a non-trivial problem and it takes quite
some time for TCP to time out and inform
Thanks for clarifying! Seems I really need to check the apps with truss or
dtrace to see if they use that sequence. Allow me one more question: why
is fflush() required prior to fsync()?
When you use stdio, you need to make sure the data is in the
system buffers prior to calling fsync().
fclose()
On Fri, Mar 23, 2007 at 11:28:19AM -0700, Frank Cusack wrote:
I'm in a way still hoping that it's an iSCSI-related problem, as detecting
dead hosts in a network can be a non-trivial problem and it takes quite
some time for TCP to time out and inform the upper layers. Just a
guess/hope here that
If you have questions about iSCSI, I would suggest sending them to
[EMAIL PROTECTED] I read that mail list a little more
often, so you'll get a quicker response.
On Feb 26, 2007, at 8:39 AM, cedric briner wrote:
devfsadm -i iscsi # to create the device on sf3
iscsiadm list target -Sv|
hello,
I'm trying to consolidate my HDs in a cheap but (I hope) reliable
manner. To do so, I was thinking of using zfs over iscsi.
Unfortunately, I'm having some issues with it, when I do:
# iscsi server (nexenta alpha 5)
#
svcadm enable iscsitgt
iscsitadm delete target --lun 0
On 2/26/07, cedric briner [EMAIL PROTECTED] wrote:
hello,
I'm trying to consolidate my HDs in a cheap but (I hope) reliable
manner. To do so, I was thinking of using zfs over iscsi.
Unfortunately, I'm having some issues with it, when I do:
# iscsi server (nexenta alpha 5)
#
svcadm
devfsadm -i iscsi # to create the device on sf3
iscsiadm list target -Sv| egrep 'OS Device|Peer|Alias' # not empty
Alias: vol-1
IP address (Peer): 10.194.67.111:3260
OS Device Name:
/dev/rdsk/c1t014005A267C12A0045E2F524d0s2
this is where my
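For context, a minimal sketch of the legacy (pre-COMSTAR) target/initiator flow; the pool, volume, and address below are only illustrative:
  # on the target host (iscsitgt):
  zfs create -V 32g tank/vol-1
  zfs set shareiscsi=on tank/vol-1       # or: iscsitadm create target -b /dev/zvol/rdsk/tank/vol-1 vol-1
  # on the initiator host:
  iscsiadm add discovery-address 10.194.67.111:3260
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi
  zpool create ipool c1t014005A267C12A0045E2F524d0   # device name from the iscsiadm listing above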
James W. Abendschan wrote:
It took about 3 days to finish
during which the T1000 was basically unusable. (during that time,
sendmail managed to syslog a few messages about how it
was skipping the queue run because the load was at 200 :-)
Glup!.