Hi,
Nexenta CP and NexentaStor have integrated COMSTAR with ZFS, which
provides a 2-3x performance gain over the userland SCSI target daemon. I've
blogged about it in more detail at
http://www.gulecha.org/2009/03/03/nexenta-iscsi-with-comstarzfs-integration/
Cheers,
Anil
http://www.gulecha.org
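For readers new to COMSTAR, a minimal sketch of exporting a ZFS volume as an iSCSI LUN might look like the following (the pool and volume names are made up, and the GUID is a placeholder you would copy from the sbdadm output):

```shell
# Create a 10 GB ZFS volume as backing store (names are examples)
zfs create -V 10g tank/iscsivol

# Register the zvol with the COMSTAR SCSI target framework
sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol

# Make the new LU visible to initiators (use the GUID printed above)
stmfadm add-view 600144F0XXXXXXXXXXXXXXXXXXXXXXXX

# Create an iSCSI target port so initiators can connect
itadm create-target
```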
I've turned off iSCSI sharing at the moment.
My first question is: how can ZFS report that available is larger than the
reservation on a ZFS volume? I also know that used should be larger
than 22.5 K. Isn't this strange?
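One way to investigate is to dump the volume's space-related properties side by side; on a zvol, available reflects what the parent pool can still provide, not the volume's own reservation, so it can legitimately exceed it. A sketch (the dataset name below is an example):

```shell
# Show the space accounting for an example zvol (replace Data/vol1)
zfs get volsize,reservation,refreservation,used,available Data/vol1

# Compare with the pool-wide view; the pool's 'avail' feeds the zvol's
# 'available' figure, which is why it can exceed the reservation
zfs list -o name,used,avail,refer Data
```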
Lars-Gunnar Persson
On 3 Mar 2009, at 00:38, Richard Elling wrote:
Hi,
I am soliciting input from the ZFS engineers and/or ZFS users on an
extension to "zfs list". Thanks in advance for your feedback.
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding
a new DMU object set type which is used on the pNFS data server to
On Tue, Mar 3, 2009 at 8:35 AM, Julius Roberts wrote:
> but previously using zfs-fuse (on Ubuntu 8.10), this was not possible.
> to look at a snapshot we had to clone the snapshot ala:
> sudo zfs clone zpoolname/zfsname@snapname_somedate
> zpoolname/zfsname_restore_somedate
> which works but it's
Hello everybody,
Is this the correct list to be talking about zfs-fuse 0.5.1-1ubuntu5
on Ubuntu 8.10?
If not, can anyone point me in the right direction? I've been
googling all morning and I'm at a bit of a loss.
Otherwise: on our Solaris machines (running ZFS pool version 10) I can cd into
.zfs/snap
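For comparison, the two ways of looking inside a snapshot on Solaris ZFS can be sketched as follows (pool, filesystem, and snapshot names are examples):

```shell
# Direct, read-only browsing via the hidden .zfs directory
ls /tank/myfs/.zfs/snapshot/mysnap

# Make .zfs show up in directory listings if it is hidden
zfs set snapdir=visible tank/myfs

# The clone-based workaround: writable, but needs cleanup afterwards
zfs clone tank/myfs@mysnap tank/myfs_restore
# ...inspect or copy files, then:
zfs destroy tank/myfs_restore
```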
On Mar 2, 2009, at 18:37, Miles Nordin wrote:
And I'm getting frustrated pointing out these issues for the 10th
time [...]
http://www.xkcd.com/386/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinf
On Mar 2, 2009, at 19:31, David wrote:
So nobody is interested in raidz grow support? I.e. you have 4 disks
in a raidz and you only have room for a 5th disk (physically), so you
add the 5th disk to the raidz. It would be a great feature for a home
server and it's the only thing stopping Solaris going on my home file
server.
So nobody is interested in raidz grow support? I.e. you have 4 disks in a
raidz and you only have room for a 5th disk (physically), so you add the 5th
disk to the raidz. It would be a great feature for a home server and it's the
only thing stopping Solaris going on my home file server.
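While adding a single disk to an existing raidz vdev is not supported, a commonly cited workaround is to replace each member with a larger disk, one at a time, letting the vdev resilver between steps; the extra capacity appears once the last disk is replaced. A sketch (device and pool names are made up):

```shell
# Replace each raidz member with a larger disk, waiting for resilver
zpool replace tank c0t1d0 c0t5d0
zpool status tank          # wait until the resilver completes
# ...repeat for each remaining member...

# Alternatively, add a whole second raidz vdev (needs several disks at once)
zpool add tank raidz c0t5d0 c0t6d0 c0t7d0 c0t8d0
```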
On Tue, Mar
When creating a ZFS pool, it seems the default format is striping. But is there
a way to create a pool of concatenated disks? That is, let's say I have 2 local
disks (SATA, 100GB each) and 1 iscsi partition (from remote Solaris server,
80GB).
So, if I issue a command:
# zpool create -f mypoo
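As a sketch of the behaviour being asked about: ZFS always stripes dynamically across its top-level vdevs and has no concat vdev type, so a command like the one below yields a (roughly) striped pool whose size is the sum of the devices (device names are examples; the iSCSI device name is a placeholder):

```shell
# Two local disks plus an iSCSI LUN become one dynamically striped pool
zpool create -f mypool c0t0d0 c0t1d0 c2t600144F0XXXXd0

# Total capacity is approximately the sum of all three devices
zpool list mypool
```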
> "cb" == C Bergström writes:
cb> ideas for good zfs GSoC projects, but wanted to stir some
cb> interest.
Read-only vdev support.
1. possibility to import a zpool on DVD. All filesystems within would
be read-only. DVD should be scrubbable: result would be a list of
files wit
On Fri, Feb 27, 2009 at 4:51 PM, Harry Putnam wrote:
> Can you say if it makes a noticeable difference to zfs. I'd noticed
> that option but didn't connect it to this conversation. Also, if I
> recall there is some warning about being an advanced user to use that
> option or something similar.
I know this problem has been mentioned a while back, but I haven’t seen or
found any usable solution. Sorry to bring it up again, especially if there IS a
solution.
Problem:
I have two servers: zeek and zed (both running nexenta flavor of Solaris x86).
On zed I created a zpool using two SATA dr
> "bh" == Brandon High writes:
bh> VMWare can give VMs direct access to the actual disks. This
bh> should avoid the overhead of using virtual disks.
maybe some of the ``overhead'' but not necessarily the write cache
sync bugs.
> "ma" == Matthew Ahrens writes:
ma> We will soon be changing the manpage to indicate that the zfs
ma> send stream will be receivable on all future versions of ZFS.
still not a strong enough statement for this case:
old system new system
1. zfs send --backup--->
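The old-system-to-new-system case in the diagram is typically driven by a pipeline like this (host and dataset names are examples):

```shell
# On the old system: snapshot, then stream the backup to the new host
zfs snapshot tank/home@backup1
zfs send tank/home@backup1 | ssh newhost zfs receive -d backup

# Incremental follow-up once a common snapshot exists on both sides
zfs snapshot tank/home@backup2
zfs send -i @backup1 tank/home@backup2 | ssh newhost zfs receive -d backup
```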
Lars-Gunnar Persson wrote:
Hey to everyone on this mailing list (since this is my first post)!
Welcome!
We've a Sun Fire X4100 M2 server running Solaris 10 u6 and after some
system work this weekend we have a problem with only one ZFS volume.
We have a pool called /Data with many file sys
> "dm" == David Magda writes:
dm> Yes, in its current state; hopefully that will change at some
dm> point in the future
I don't think it will or should. A replication tool and a backup tool
seem similar, but they're not similar enough.
With replication, you want an exact copy, and if
The Linux host can still see the device. I showed you the log from the
Linux host.
I tried the fdisk -l and it listed the iSCSI disks.
Lars-Gunnar Persson
On 2 Mar 2009, at 17:02, "O'Shea, Damien" wrote:
I could be wrong but this looks like an issue on the Linux side
A zpool status is
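For anyone following along, the Linux-side checks being discussed might look like this (assuming the open-iscsi initiator; output will obviously vary):

```shell
# List active iSCSI sessions and the devices they map to (open-iscsi)
iscsiadm -m session -P 3

# Confirm the kernel still sees the iSCSI disk and its partition table
fdisk -l

# Path-based device names often make the iSCSI LUNs easy to spot
ls -l /dev/disk/by-path/ | grep -i iscsi
```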
That is correct. It's a raid 6 disk shelf with one volume connected
via fibre.
Lars-Gunnar Persson
On 2 Mar 2009, at 16:57, Blake wrote:
It looks like you only have one physical device in this pool. Is
that correct?
On Mon, Mar 2, 2009 at 9:01 AM, Lars-Gunnar Persson
wrote:
Hey t
> "rb" == Roch Bourbonnais writes:
rb> If log devices goes away the system starts to behave as if no
rb> separate log was configured and the zil just uses the main
rb> storage pool.
Maybe you can continue running for a while, but if you reboot in this
situation, the pool will ref
smart trams wrote:
Hi All,
What I want is a way to disable the startup import process of ZFS, so that on every server reboot I can manually import the pools and mount them on the required mount points.
zpool attributes like mountpoint=legacy or canmount affect pool mounting
behavior and no com
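A hedged sketch of how this is usually approached: automatic import at boot is driven by the zpool.cache file, so keeping a pool out of that cache (or exporting it before reboot) makes the import manual. The pool name is an example, and the cachefile property requires a reasonably recent ZFS version:

```shell
# Option 1: export before reboot; an exported pool is not auto-imported
zpool export tank

# Option 2: keep the pool out of the default cache file entirely
zpool set cachefile=none tank

# After reboot, import by hand (this scans devices, as there is no cache entry)
zpool import tank
zfs set mountpoint=/required/mountpoint tank
```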
n...@jnickelsen.de said:
> As far as I know the situation with ATI is that, while ATI supplies
> well-performing binary drivers for MS Windows (of course) and Linux, there is
> no such thing for other OSs. So OpenSolaris uses standardized interfaces of
> the graphics hardware, which have comparativ
On Sat, Feb 28, 2009 at 09:45:12PM -0600, Mike Gerdts wrote:
> On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
> > Right, but normally each head in a cluster will have only one pool
> > imported.
>
> Not necessarily. Suppose I have a group of servers with a bunch of
> zones. Each zone represen
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
install to mirror fro
David Magda wrote:
Given the threads that have appeared on this list lately, how about
codifying / standardizing the output of "zfs send" so that it can be
backed up to tape? :)
We will soon be changing the manpage to indicate that the zfs send stream
will be receivable on all future versions
Excellent! I wasn't sure if that was the case, though I had heard rumors.
On Mon, Mar 2, 2009 at 12:36 PM, Matthew Ahrens wrote:
> Blake wrote:
>>
>> zfs send is great for moving a filesystem with lots of tiny files,
>> since it just handles the blocks :)
>>
>>
>>
>> I'd like to see:
>>
>> pool
that link suggests that this is a problem with a dirty export:
http://www.sun.com/msg/ZFS-8000-EY
maybe try importing on system A again, doing a 'zpool export', waiting
for completion, then moving to system B to import?
On Sun, Mar 1, 2009 at 2:29 PM, Kyle Kakligian wrote:
> What does it mean f
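The clean hand-off suggested above would look roughly like this (pool name and hosts are examples; -f on the last import is the force option the linked message warns about):

```shell
# On system A: re-import, then export cleanly
zpool import tank
zpool export tank

# On system B: a clean import should now work without force
zpool import tank

# Only if the pool still claims to be active elsewhere (use with care):
zpool import -f tank
```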
I could be wrong, but this looks like an issue on the Linux side.
A zpool status is returning the healthy pool.
What does format/fdisk show you on the Linux side? Can it still see the
iSCSI device that is being shared from the Solaris server?
Regards,
Damien O'Shea
Strategy & Unix Systems
Reve
Yes, most NVIDIA hardware will give you much better performance on
OpenSolaris (provided the card is fairly recent).
On Mon, Mar 2, 2009 at 6:18 AM, Juergen Nickelsen wrote:
> Juergen Nickelsen writes:
>
>> Solaris Bundled Driver: * vgatext/ ** radeon
>> Video
>> ATI Technologies Inc
>> R360 NJ [
It looks like you only have one physical device in this pool. Is that correct?
On Mon, Mar 2, 2009 at 9:01 AM, Lars-Gunnar Persson
wrote:
> Hey to everyone on this mailing list (since this is my first post)!
>
> We've a Sun Fire X4100 M2 server running Solaris 10 u6 and after some system
> wor
Hey to everyone on this mailing list (since this is my first post)!
We've a Sun Fire X4100 M2 server running Solaris 10 u6 and after some
system work this weekend we have a problem with only one ZFS volume.
We have a pool called /Data with many file systems and two volumes.
The status of my
Hi All,
What I want is a way to disable the startup import process of ZFS, so that on
every server reboot I can manually import the pools and mount them on the
required mount points.
zpool attributes like mountpoint=legacy or canmount affect pool mounting
behavior and no command found for disa
Juergen Nickelsen writes:
> Solaris Bundled Driver: * vgatext/ ** radeon
> Video
> ATI Technologies Inc
> R360 NJ [Radeon 9800 XT]
>
> I *think* this is the same driver used with my work laptop (which I
> don't have at hand to check, unfortunately), also with ATI graphics
> hardware.
Confirmed.