That screenshot looks very much like Nexenta 3.0 with different
branding. Elsewhere, The Register confirms it's OpenSolaris.
On 29 Apr 2010, at 07:35, Thommy M. Malmström wrote:
What operating system does it run?
--
This message posted from opensolaris.org
2010/4/29 Thommy M. Malmström :
> What operating system does it run?
Nexenta, I believe.
--
Regards,
Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> Why would you recommend a spare for raidz2 or raidz3?
> -- richard
A spare is there to minimize reconstruction time, because remember: a vdev cannot
start resilvering until a replacement disk is available. And with disks as big
as they are today, resilvering takes many hours. I'd rather have
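For reference, attaching a hot spare is a one-liner; a sketch, with pool and device names as made-up examples:

```shell
# Add a hot spare to an existing pool (device name is hypothetical)
zpool add tank spare c4t0d0

# Verify it appears under the "spares" section of the status output
zpool status tank
```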
Hi Mark,
I also had some SSD drives in this machine, but I have taken them out and
the problem still occurs...
Regarding the bug, well, it seems to be related to usage of xVM, and
since I don't use it, maybe it will not make any difference to this
particular server...
Anyway, thanks for the tip, an
What operating system does it run?
I'm looking for a way to back up my entire system, the rpool ZFS pool, to an
external HDD so that it can be recovered in full if the internal HDD fails.
Previously, with Solaris 10 using UFS, I would use ufsdump and ufsrestore, which
worked so well that I was very confident with it. Now ZFS doesn't have
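The rough ZFS equivalent of ufsdump is sending a recursive snapshot of the pool. A minimal sketch, assuming the external disk is empty; the pool, snapshot, and device names are all placeholders:

```shell
# Create a pool on the external disk (device name is hypothetical)
zpool create backup c5t0d0

# Take a recursive snapshot of the whole root pool
zfs snapshot -r rpool@backup1

# Send the pool and all its datasets, preserving properties,
# into the backup pool (-F rolls back, -d preserves dataset names)
zfs send -R rpool@backup1 | zfs receive -Fd backup
```

Note that restoring a bootable rpool also involves reinstalling the boot blocks, which this sketch does not cover.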
On Apr 28, 2010, at 9:48 PM, Jim Horng wrote:
>> 3 shelves with 2 controllers each. 48 drives per
>> shelf. These are Fibre Channel attached. We would like
>> all 144 drives added to the same large pool.
>
> I would do either a 12 or 16 disk raidz3 vdev and spread the disks out
> across controll
> 3 shelves with 2 controllers each. 48 drives per
> shelf. These are Fibre Channel attached. We would like
> all 144 drives added to the same large pool.
I would do either a 12- or 16-disk raidz3 vdev and spread the disks out across
controllers within the vdevs. You also may want to leave at least 1 spare
Today, Compellent announced their zNAS addition to their unified storage
line. zNAS uses ZFS behind the scenes.
http://www.compellent.com/Community/Blog/Posts/2010/4/Compellent-zNAS.aspx
Congrats Compellent!
-- richard
ZFS storage and performance consulting at http://www.RichardElling.com
ZFS tr
On 28/04/10 11:07 AM, Brad wrote:
What's the default size of the file system cache for Solaris 10 x86, and can it
be tuned?
I read various posts on the subject and it's confusing...
http://dlc.sun.com/osol/docs/content/SOLTUNEPARAMREF/soltuneparamref.html
should have all the answers you need.
Not
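On Solaris with ZFS, the ARC is the main file system cache, and by default it can grow to most of physical memory. A quick way to inspect it, plus the usual /etc/system tunable; the 4 GB cap here is only an example value:

```shell
# Current ARC size and target size, in bytes
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c

# To cap the ARC at 4 GB, add this line to /etc/system and reboot:
# set zfs:zfs_arc_max = 0x100000000
```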
[...]
> There is a way to do this kind of object to name
> mapping, though there's no documented public
> interface for it. See zfs_obj_to_path() function and
> ZFS_IOC_OBJ_TO_PATH ioctl.
>
> I think it should also be possible to extend it to
> handle multiple names (in case of multiple hardlinks)
Ian: Of course they expected answers to those questions here. It seems many
people do not read the forums or mailing list archives to see their
questions
previously asked (and answered) many many times over, or the flames that
erupt from them. It's scary how much people don't check historical re
On Wed, Apr 28, 2010 at 5:09 PM, Jim Horng wrote:
> This is not a performance issue. The rsync will hang hard, and one of the
> child processes cannot be killed (I assume it's
I've seen a similar issue on a b133 host that has a large DDT, but I
haven't waited very long to see if it completes. You
This is not a performance issue. The rsync will hang hard, and one of the child
processes cannot be killed (I assume it's the one running on the ZFS). By "the
commands get slower" I am referring to the output of the file system commands
(zpool, zfs, df, du, etc.) from a different shell. I left the
On Apr 29, 2010, at 3:03 AM, Edward Ned Harvey wrote:
>> From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
>> Sent: Wednesday, April 28, 2010 3:49 PM
>>
>> What indicators do you have that ONTAP/WAFL has inode->name lookup
>> functionality?
>
> I don't have any such indicator, and if that's the
On 04/29/10 11:02 AM, autumn Wang wrote:
One quick question: When will the next formal release be released?
Of what?
Does Oracle have a plan to support the OpenSolaris community as Sun did before?
What is the direction of ZFS in the future?
Do you really expect answers to those question here?
> From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
> Sent: Wednesday, April 28, 2010 3:49 PM
>
> What indicators do you have that ONTAP/WAFL has inode->name lookup
> functionality?
I don't have any such indicator, and if that's the way my words came out,
sorry for that. Allow me to clarify:
In
One quick question: when will the next formal release be released?
Does Oracle have a plan to support the OpenSolaris community as Sun did before?
What is the direction of ZFS in the future?
On Thu, 29 Apr 2010, Ian Collins wrote:
You can create pools and filesystems with older versions if you want them to
be backwards compatible. I have done this when I was sending data to a
backup server running an older Solaris version.
From the zpool manual page, it seems that it should be po
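If I recall correctly, this is done at creation time with explicit version properties; a sketch, where the pool name, device, and version numbers are examples (pick versions the older host actually supports):

```shell
# Create a pool at an older on-disk format version
zpool create -o version=14 tank c1t0d0

# Create a filesystem at an older dataset version
zfs create -o version=3 tank/backup
```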
On 04/29/10 10:21 AM, devsk wrote:
I had a pool which I created using zfs-fuse, which is using March code base
(exact version, I don't know; if someone can tell me the command to find the
zpool format version, I would be grateful).
Try [zfs|zpool] upgrade.
These commands will tell you th
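For reference, the version-reporting commands look like this (pool name is an example):

```shell
# Show pools whose on-disk format is older than the running software supports
zpool upgrade

# Show the exact version of a pool and of a dataset
zpool get version tank
zfs get version tank

# List every pool version this software release knows about
zpool upgrade -v
```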
Hi Mary Ellen,
We were looking at this problem and are unsure what the problem is...
To rule out NFS as the root cause, could you create and share a test ZFS
file system without any ACLs to see if you can access the data from the
Linux client?
Let us know the result of your test.
Thanks,
Ci
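A minimal way to set up such a test share; the filesystem name is a placeholder:

```shell
# Create a fresh filesystem with no ACLs applied, share it over NFS
zfs create rpool/nfstest
zfs set sharenfs=on rpool/nfstest
chmod 755 /rpool/nfstest

# Then mount it from the Linux client and check whether
# ownership and mode bits behave as expected
```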
On Wed, Apr 28, 2010 at 1:51 PM, Jim Horng wrote:
> I have now turned the dedup off on the pools and the rsync seems to be going
> further than before. Is this a known bug? Is there a workaround for this
> without rebooting the system? I am not a Solaris expert and I haven't worked
> on Solari
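Dedup can indeed be switched off at runtime, no reboot needed. Note it only affects newly written blocks; data that was already deduplicated keeps its DDT entries until rewritten or destroyed. Pool name is an example:

```shell
# Disable dedup for future writes on the pool's top-level dataset
zfs set dedup=off tank

# Confirm the setting
zfs get dedup tank
```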
I had a pool which I created using zfs-fuse, which is using March code base
(exact version, I don't know; if someone can tell me the command to find the
zpool format version, I would be grateful).
I exported it and now I tried to import it in OpenSolaris, which is running Feb
bits because it sa
On Wed, 28 Apr 2010, Jim Horng wrote:
I understand your point. However, in most production systems the
shelves are added incrementally, so it makes sense to relate it to the number
of slots per shelf. And in most cases, withstanding a shelf failure is
too much overhead on storage anyway. For example, in h
On Apr 28, 2010, at 8:00 PM, Freddie Cash wrote:
> Looks like I've hit this bug:
> http://bugs.opensolaris.org/view_bug.do?bug_id=6782540 However, none of the
> workaround listed in that bug, or any of the related bugs, works. :(
>
> Going through the zfs-discuss and freebsd-fs archives, I s
Sorry for the double post, but I think this was better suited for the zfs forum.
I am running OpenSolaris snv_134 as a file server in a test environment,
testing deduplication. I am transferring a large amount of data from our
production server using rsync.
The data pool is on a separate raidz1-0
On Wed, April 28, 2010 10:16, Eric D. Mudama wrote:
> On Wed, Apr 28 at 1:34, Tonmaus wrote:
>>> Zfs scrub needs to access all written data on all
>>> disks and is usually
>>> disk-seek or disk I/O bound so it is difficult to
>>> keep it from hogging
>>> the disk resources. A pool based on mirro
On 28 apr 2010, at 14.06, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> Look up the inode number of README. (for example, ls -i README)
>> (suppose it’s inode 12345)
>>
I understand your point. However, in most production systems the shelves are added
incrementally, so it makes sense to relate it to the number of slots per shelf. And in
most cases, withstanding a shelf failure is too much overhead on storage anyway.
For example, in his case he will have to configure 1+0 ra
3 shelves with 2 controllers each. 48 drives per shelf. These are Fibre Channel
attached. We would like all 144 drives added to the same large pool.
New to Solaris/ZFS and having a difficult time getting ZFS, NFS and ACLs
all working together properly. I am trying to access/use ZFS shared
filesystems on a Linux client. When I access the dirs/files on the Linux
client, my permissions do not carry over, nor do those on newly created
files, and I cannot
On Wed, 28 Apr 2010, Jim Horng wrote:
So on the point of not needing a migration back:
Even at 144 disks, they won't be in the same raid group. So figure
out the best raid group size for you, since ZFS doesn't support
changing the number of disks in a raidz yet. I usually use the number of
th
Sorry, I need to correct myself. Mirroring LUNs on the Windows side to switch
the storage pool under it is a great idea, and I think you can do this without
downtime.
> I took a snapshot of one of my oracle filesystems this week and when someone
> tried to add data to it it filled up.
>
> I tried to remove some data, but the snapshot seemed to keep reclaiming it as
> I deleted it. I had taken the snapshot days earlier. Does this make sense?
Snapshots are compl
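The usual way to see, and then reclaim, space held by a snapshot; dataset and snapshot names below are examples:

```shell
# Show how much space each snapshot is holding on to
zfs list -t snapshot -o name,used,referenced

# Deleting files in the live filesystem only shifts their blocks into
# the snapshot's accounting; destroying the snapshot is what frees them
zfs destroy tank/oracle@monday
```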
So on the point of not needing a migration back:
Even at 144 disks, they won't be in the same raid group. So figure out the
best raid group size for you, since ZFS doesn't support changing the number of
disks in a raidz yet. I usually use the number of slots per shelf, or a good
number is 7~10
I took a snapshot of one of my Oracle filesystems this week, and when someone
tried to add data to it, it filled up.
I tried to remove some data, but the snapshot seemed to keep reclaiming it as I
deleted it. I had taken the snapshot days earlier. Does this make sense?
For this type of migration, downtime is required. However, it can be reduced
to only a few hours or even a few minutes, depending on how much change needs
to be synced.
I have done this many times on a NetApp Filer, but it can be applied to ZFS as well.
The first thing to consider is to only do the migration once, so
adding on...
On Apr 28, 2010, at 8:57 AM, Tomas Ögren wrote:
> On 28 April, 2010 - Eric D. Mudama sent me these 1,6K bytes:
>
>> On Wed, Apr 28 at 1:34, Tonmaus wrote:
Zfs scrub needs to access all written data on all
disks and is usually
disk-seek or disk I/O bound so it is diff
On Apr 28, 2010, at 8:39 AM, Wolfraider wrote:
>> Mirrors are made with vdevs (LUs or disks), not
>> pools. However, the
>> vdev attached to a mirror must be the same size (or
>> nearly so) as the
>> original. If the original vdevs are 4TB, then a
>> migration to a pool made
>> with 1TB vdevs can
Hi Abdullah,
You can review the ZFS/MySQL presentation at this site:
http://forge.mysql.com/wiki/MySQL_and_ZFS#MySQL_and_ZFS
We also provide some ZFS/MySQL tuning info on our wiki,
here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsanddatabases
Thanks,
Cindy
On 04/28/10 03:42
On Wed, 28 Apr 2010, Richard Elling wrote:
the disk resources. A pool based on mirror devices will behave
much more nicely while being scrubbed than one based on RAIDz2.
The data I have does not show a difference in the disk loading while
scrubbing for different pool configs. All HDDs becom
Looks like I've hit this bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6782540 However, none of the
workaround listed in that bug, or any of the related bugs, works. :(
Going through the zfs-discuss and freebsd-fs archives, I see that others
have run into this issue, and managed to solve i
On 28 April, 2010 - Eric D. Mudama sent me these 1,6K bytes:
> On Wed, Apr 28 at 1:34, Tonmaus wrote:
>>> Zfs scrub needs to access all written data on all
>>> disks and is usually
>>> disk-seek or disk I/O bound so it is difficult to
>>> keep it from hogging
>>> the disk resources. A pool based
Hi Eric,
> While there may be some possible optimizations, I'm
> sure everyone
> would love the random performance of mirror vdevs,
> combined with the
> redundancy of raidz3 and the space of a raidz1.
> However, as in all
> systems, there are tradeoffs.
I think we all may agree that the topic he
> Mirrors are made with vdevs (LUs or disks), not
> pools. However, the
> vdev attached to a mirror must be the same size (or
> nearly so) as the
> original. If the original vdevs are 4TB, then a
> migration to a pool made
> with 1TB vdevs cannot be done by replacing vdevs
> (mirror method).
> --
> On Apr 28, 2010, at 6:37 AM, Wolfraider wrote:
> > The original drive pool was configured with 144 1TB
> > drives and a hardware raid 0 stripe across every 4
> > drives to create 4TB LUNs.
>
> For the archives, this is not a good idea...
Exactly. This is the reason I want to blow away all the old configu
On Apr 28, 2010, at 6:37 AM, Wolfraider wrote:
> The original drive pool was configured with 144 1TB drives and a hardware
> raid 0 stripe across every 4 drives to create 4TB LUNs.
For the archives, this is not a good idea...
> These LUNs were then combined into 6 raidz2 LUNs and added to the zf
On Apr 28, 2010, at 6:40 AM, Wolfraider wrote:
> We are running the latest dev release.
>
> I was hoping to just mirror the ZFS volumes and not the whole pool. The
> original pool is around 100TB in size. The spare disks I have come up
> with will total around 40TB. We only have 11TB of spa
On Wed, Apr 28 at 1:34, Tonmaus wrote:
Zfs scrub needs to access all written data on all
disks and is usually
disk-seek or disk I/O bound so it is difficult to
keep it from hogging
the disk resources. A pool based on mirror devices
will behave much
more nicely while being scrubbed than one base
On Apr 28, 2010, at 1:34 AM, Tonmaus wrote:
>> Zfs scrub needs to access all written data on all
>> disks and is usually
>> disk-seek or disk I/O bound so it is difficult to
>> keep it from hogging
>> the disk resources. A pool based on mirror devices
>> will behave much
>> more nicely while be
We are running the latest dev release.
I was hoping to just mirror the ZFS volumes and not the whole pool. The original
pool is around 100TB in size. The spare disks I have come up with will
total around 40TB. We only have 11TB of space in use on the original ZFS pool.
The original drive pool was configured with 144 1TB drives and a hardware raid
0 stripe across every 4 drives to create 4TB LUNs. These LUNs were then
combined into 6 raidz2 LUNs and added to the ZFS pool. I would like to delete
the original hardware raid 0 stripes and add the 144 drives directl
On 28.04.10 14:06, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Look up the inode number of README. (for example, ls -i README)
(suppose it’s inode 12345)
find /tank/.zfs/snapshot
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> Look up the inode number of README. (for example, ls -i README)
> (suppose it's inode 12345)
> find /tank/.zfs/snapshot -inum 12345
>
> Problem is, the f
Greetings All,
This might be an old question!!
Does anyone know how to use ZFS with MySQL, i.e. how to make MySQL use a ZFS
file system, how to point MySQL at tank/myzfs???
Thanks
--
Abdullah Al-Dahlawi
George Washington University
Department. Of Electrical & Computer Engineering
Check
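A common sketch for this: give MySQL its own filesystem and point the datadir at it. The filesystem name, mountpoint, and the 16K recordsize (matching InnoDB's page size) are the usual suggestions, not requirements:

```shell
# Dedicated filesystem for the data files, recordsize matched to
# InnoDB's 16K page size to avoid read-modify-write amplification
zfs create -o recordsize=16k -o mountpoint=/var/mysql tank/mysql

# Then point MySQL at it in my.cnf:
#   [mysqld]
#   datadir = /var/mysql
```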
> Zfs scrub needs to access all written data on all
> disks and is usually
> disk-seek or disk I/O bound so it is difficult to
> keep it from hogging
> the disk resources. A pool based on mirror devices
> will behave much
> more nicely while being scrubbed than one based on
> RAIDz2.
Experienc