resources
available that will show me how this is done?
You could try zdb.
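For example, as a generic starting point (the pool name here is only a placeholder,
and the useful flags depend on what exactly you want to inspect):
-- snip --
# zdb mypool          # walks the pool and prints its on-disk configuration
# zdb -dd mypool      # prints per-dataset object summaries in more detail
-- snip --
Keep in mind that zdb is not a committed interface, so its output can change
between builds.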
Thanks and regards,
Sanjeev
Moshe,
You might want to check if you have multiple paths to these disks.
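One quick way to check, assuming MPxIO is available (the device names below are
only placeholders):
-- snip --
# mpathadm list lu        # shows each logical unit and its path count
# iostat -En c7t0d0       # prints the serial number; two "disks" reporting the
# iostat -En c8t0d0       # same serial are really one device seen twice
-- snip --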
- Sanjeev
On Wed, Feb 17, 2010 at 07:59:28PM -0800, Moshe Vainer wrote:
I have another very weird one; it looks like a recurrence of the same issue, but
with the new firmware.
We have the following disks:
AVAILABLE
Abdullah,
On Thu, Feb 11, 2010 at 03:42:38PM -0500, Abdullah Al-Dahlawi wrote:
Hi Sanjeev
linking the application to the ARCSTAT_BUMP(arcstat_hits) is not
straightforward and is time-consuming, especially if I am running many
experiments.
Brendan has commented on the post by providing
if it causes a hit or a miss.
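For reference, and assuming the arc-hit/arc-miss sdt probes are present in your
build, a DTrace one-liner along these lines can count hits and misses per
executable without relinking anything (this is only a sketch, not Brendan's
script):
-- snip --
# dtrace -n 'sdt:::arc-hit { @hit[execname] = count(); } sdt:::arc-miss { @miss[execname] = count(); }'
-- snip --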
Thanks and regards,
Sanjeev
Your response is highly appreciated.
Thanks
--
Abdullah
dahl...@ieee.org
(IM) ieee2...@hotmail.com
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
--
Sanjeev Bagewadi
with the drop in safety, the performance will also drop
because of potential seek delays.
Thanks and regards,
Sanjeev
and
select ON gate and search. That should list all the files that were modified by
the fix for that bug.
Now, for each file you can go to the history and get a diff between the version where
the fix was integrated and the previous version.
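If you have a local Mercurial clone of the ON gate, roughly the same thing can be
done from the command line (the bug ID below is only an example, and -c needs a
reasonably recent Mercurial):
-- snip --
$ hg log -v -k 6596237      # find the changeset(s) whose comment mentions the bug ID
$ hg diff -c <revision>     # show the full diff introduced by that changeset
-- snip --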
Hope that helps.
Regards,
Sanjeev
--
Sanjeev
any recommendations for books on File Systems and/or
File Systems Programming?
== end ==
Solaris Internals has a chapter on Filesystems which talks about the VNODE/VFS
layer and how they interface with different filesystems. You might find this
useful as well.
Thanks and regards,
Sanjeev
and regards,
Sanjeev
--
Sanjeev Bagewadi
Solaris RPE
Bangalore, India
Chris,
On Wed, Aug 05, 2009 at 05:33:24AM -0700, Chris Baker wrote:
Sanjeev
Thanks for taking an interest. Unfortunately I did have failmode=continue,
but I have just destroyed/recreated and double confirmed and got exactly the
same results.
zpool status shows both drives mirror
of memory on the system.
If ZFS is not being used significantly, then the ARC should not grow. The ARC grows
based on usage (i.e. the amount of ZFS files/data accessed). Hence, if you are
sure that the ZFS usage is low, things should be fine.
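For example, you can watch the ARC size with either of these:
-- snip --
# kstat -p zfs:0:arcstats:size     # current ARC size in bytes
# echo "::arc" | mdb -k            # more detail, including the target and limits
-- snip --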
Hope that helps.
Regards,
Sanjeev
I'd followed the evil tuning
to the GUDs tool; please collect data using that.
We need to understand how ARC plays a role here.
Thanks and regards,
Sanjeev.
On Sat, Jul 04, 2009 at 02:49:05PM -0500, Bob Friesenhahn wrote:
On Sat, 4 Jul 2009, Jonathan Edwards wrote:
this is only going to help if you've got problems in zfetch .. you'd
6596237 we would see metaslab related routines on the top.
Thanks and regards,
Sanjeev
On Mon, Apr 13, 2009 at 07:13:03AM -0500, Gary Mills wrote:
On Mon, Apr 13, 2009 at 09:08:09AM +0530, Sanjeev wrote:
How full is the pool ?
Only 50%, but it started with two 500-gig LUNs initially. We
Gary,
How full is the pool ?
-- Sanjeev
On Sun, Apr 12, 2009 at 08:39:03AM -0500, Gary Mills wrote:
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with
about 1 TB of mailboxes on ZFS filesystems. Recently, when under
load, we've had incidents where IMAP operations became very
see in the
above output that it has only about 22.5K used. Is that correct ? I would
have expected it to be higher.
You should also check what 'zpool history -i <poolname>' says.
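With a real pool name substituted in, something like:
-- snip --
# zpool history -i mypool     # -i includes internally logged events as well
-- snip --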
Thanks and regards,
Sanjeev
I've also learned that the AVAIL column reports what's available in the
zpool and NOT what's
be listed.
Thanks and regards,
Sanjeev
The following is output from the modified command and reflects the
current mode of operation (i.e. zfs list lists filesystems, volumes
and pnfs datasets by default):
(pnfs-17-21:/home/lisagab):6 % zfs list
NAME                 USED   AVAIL
failure of links. But, that in itself is not
sufficient.
Thanks and regards,
Sanjeev
--
Sanjeev Bagewadi
Solaris RPE
Bangalore, India
Sriram,
On Mon, Feb 16, 2009 at 11:12:42AM +0530, Sriram Narayanan wrote:
On Mon, Feb 16, 2009 at 9:11 AM, Sanjeev sanjeev.bagew...@sun.com wrote:
Sendai,
On Fri, Feb 13, 2009 at 03:21:25PM -0800, Andras Spitzer wrote:
Hi,
When I read the ZFS manual, it usually recommends
%2Fsrc%2Futs%2Fcommon%2Fio%2Fscsi%2Ftargets%2Fsd.c%403169r1=%2Fonnv%2Fonnv-gate%2Fusr%2Fsrc%2Futs%2Fcommon%2Fio%2Fscsi%2Ftargets%2Fsd.c%403138
Thanks and regards,
Sanjeev.
Does anyone know how to push for resolution on this? USB is pretty
common, like it or not for storage purposes - especially
,
Sanjeev
#!/usr/sbin/dtrace -Cs
/* CDDL HEADER START
*
* The contents of this file are subject to the terms of the
* Common Development and Distribution License, Version 1.0 only
* (the License). You may not use this file except in compliance
* with the License.
*
* You can obtain a copy
,
Sanjeev
.
However, if you had configured it as RAIDZ-2 then it can sustain up to 2 disk
failures.
The other option would be to configure it as a mirrored pool. And this too can
sustain 2 disk failures (one in each mirror device).
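For illustration only (disk names are placeholders):
-- snip --
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0          # survives any 2 disk failures
# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0   # survives 2 failures, one per mirror
-- snip --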
Hope that helps.
Thanks and regards,
Sanjeev
-- Sriram
and hence they would
be shared as well.
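For example (dataset names are only placeholders):
-- snip --
# zfs set sharenfs=rw tank/vmware     # set it once on the parent
# zfs get -r sharenfs tank/vmware     # children show the value with SOURCE 'inherited'
-- snip --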
I guess that would help in your case.
Hope that helps.
Thanks and regards,
Sanjeev
On Sun, Feb 08, 2009 at 12:43:26AM +0530, Sriram Narayanan wrote:
An update:
I'm using VMWare ESX 3.5 and VMWare ESXi 3.5 as the NFS clients.
I'm using zfs set sharenfs
, it
will revert back to the previous state.
Another option would be to turn off atime if you are sure that you are not
planning to modify anything on the destination box.
But, like you mentioned above if you allow users to mess around with the FS
then -F seems to be a better option.
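For example, assuming the destination filesystem is called tank/backup:
-- snip --
# zfs set atime=off tank/backup               # avoid atime updates dirtying the received fs
# zfs receive -F tank/backup < incr.stream    # force a rollback to the latest snapshot on receive
-- snip --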
Regards,
Sanjeev
and regards,
Sanjeev.
On Thu, Jan 29, 2009 at 01:13:29PM +0100, Kevin Maguire wrote:
Hi
We have been using a Solaris 10 system (Sun-Fire-V245) for a while as
our primary file server. This is based on Solaris 10 06/06, plus
patches up to approx May 2007. It is a production machine, and until
about
is :
- Export the pool on the old box : zpool export poolname
- Connect the S1 to the new machine
- Import the pool : zpool import poolname
Hope that helps.
Thanks and regards,
Sanjeev
If I simply pull out the S1 storage box (on which ZFS has been configured for a
long time now, with data) and plug
Marcelo,
On Wed, Dec 31, 2008 at 02:17:37AM -0800, Marcelo Leal wrote:
Thanks a lot Sanjeev!
If you look at my first message you will see that discrepancy in zdb...
Apologies. Now, in hindsight, I understand why you gave the zdb details :-(
I should have read the mail carefully.
Thanks
then pass on the file : /tmp/rm.truss
This would show us which system call is failing and why. That would give
us a good idea of what
is going wrong.
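The exact command got snipped above; a typical invocation that produces such a
trace file would be something like (the path is only an example):
-- snip --
# truss -o /tmp/rm.truss rm /tank/fs/problem-file
-- snip --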
Thanks and regards,
Sanjeev.
Marcelo Leal wrote:
Hello all,
# zpool status
pool: mypool
state: ONLINE
scrub: scrub completed after 0h2m with 0
the directory contents and ascertain that
the file exists.
Can you please provide the directory listing (ls -l) of the directory
in question ?
Note that an ls -l would use fstat64 to get the stats of the files. So,
truss on ls -l would
also help.
Thanks and regards,
Sanjeev.
fstat64(2, 0x08046CE0
of next week and hence
may not be able to follow up.
I hope someone else will be able to follow it up from here.
Thanks and regards,
Sanjeev.
close(3)= 0
ioctl(1, TCGETA, 0x08046BBC)= 0
fstat64(1, 0x08046B20
of the directory :
- How many entries does it have ?
- Which filesystem (of the zpool) does it belong to ?
Thanks and regards,
Sanjeev.
snapshots which refer to the same set of blocks. So, even after deleting
one snapshot you might not see the space freed up. And this could be because
the second snapshot is still referring to some of the blocks.
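One way to see which snapshots are still holding space (assuming your release
supports -t snapshot):
-- snip --
# zfs list -t snapshot -r -o name,used,referenced tank
-- snip --
Space shared by several snapshots does not appear in any single snapshot's USED;
it is only charged there once one snapshot is the last to reference it.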
Hope that helps.
Thanks and regards,
Sanjeev
On Wed, Dec 03, 2008 at 12:26
behaviour can still set the listsnapshots
property accordingly.
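That is, something like:
-- snip --
# zpool set listsnapshots=on mypool    # 'zfs list' then shows snapshots by default again
-- snip --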
Hope that helps.
Regards,
Sanjeev.
Regards,
jel.
fault.
Thanks and regards,
Sanjeev
yuvraj wrote:
Hi Sanjeev,
I am herewith giving all the details of my zpool by
firing the 'zpool status' command on the command line. Please go through the same and
help me out.
Thanks in advance
Yuvraj,
Can you please post the details of the zpool ? 'zpool status' should
give you that.
You could pull out one of the disks.
Thanks and regards,
Sanjeev.
On Thu, Oct 16, 2008 at 11:22:43PM -0700, yuvraj wrote:
Hi Friends,
I have created my own zpool on Solaris 10 also
Anas,
Are both (IDE and SATA) disks plugged in ?
I had similar problems where the machine would just drop into GRUB and never
boot up despite giving the right GRUB commands.
I finally disconnected the IDE disk and things are fine now.
Thanks and regards,
Sanjeev.
On Mon, Oct 06, 2008 at 12:03
Detlef,
I presume you have about 9 filesystems. How many snapshots do you have ?
Thanks and regards,
Sanjeev.
On Mon, Sep 22, 2008 at 03:59:34PM +0200, Detlef [EMAIL PROTECTED] wrote:
With Nevada Build 98 I am seeing a slow zpool import of my pool which
holds my user and archive data on my
Maybee and am looking into it
right now.
Thanks and regards,
Sanjeev.
On Fri, Aug 29, 2008 at 06:59:07AM -0700, Michael Schuster wrote:
On 08/29/08 04:09, Tomas Ögren wrote:
On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes:
On 14 August, 2008 - Paul Raines sent me these 2,9K bytes
. But, as long as the box can publish its
LUNs as disks on a host it should work.
Regards,
Sanjeev.
Thanks in Advance.
Regards
Vikas
Tomas,
On Wed, Aug 27, 2008 at 11:57:09AM +0200, Tomas Ögren wrote:
On 27 August, 2008 - Sanjeev sent me these 1,1K bytes:
Vikas,
On Wed, Aug 27, 2008 at 01:29:49PM +0530, Vikas Kakkar wrote:
Hi,
Please help answering the following queries:
1. Can we reduce the size
Hi Vikas,
On Wed, Aug 27, 2008 at 03:38:27PM +0530, Vikas Kakkar wrote:
Thanks for your email Sanjeev!!
Actually the customer wants to reduce the pool size. I guess we cannot do this
today... there is a pending RFP on this.
You are right. I got confused because you were referring
.
Thanks and regards,
Sanjeev.
On Wed, Aug 20, 2008 at 04:04:59AM -0700, Ben Rockwood wrote:
Would someone in the know be willing to write up (preferably blog)
definitive definitions/explanations of all the arcstats provided via kstat?
I'm struggling with proper interpretation of certain values
+---
Changes (by sandervl73):
* status: new => closed
* resolution: => duplicate
Comment:
Duplicate and fixed in 1.6.2 (due out in a day or two)
-- snip --
Cheers,
Sanjeev.
Mike Gerdts wrote:
This is good for a chuckle.
# zpool status
pool: rpool
state: ONLINE
status: One
and get these IDRs.
Thanks and regards,
Sanjeev.
Lance wrote:
Any progress on a defragmentation utility? We appear to be having a severe
fragmentation problem on an X4500, vanilla S10U4, no additional patches.
500GB disks in 4 x 11 disk RAIDZ2 vdevs. It hit 97% full and fell off a
cliff
Scott,
This looks more like bug 6596237 "Stop looking and start ganging":
http://monaco.sfbay/detail.jsf?cr=6596237
What version of Solaris are the production servers running (S10 or
Opensolaris) ?
Thanks and regards,
Sanjeev.
Scott wrote:
Hello,
I have several ~12TB storage servers
if it is a
database-application they have a record-size
option when the DB is created (based on my limited knowledge about DBs).
Thanks and regards,
Sanjeev.
PS : Here is a simple script which just aggregates on the write size and
executable name :
-- snip --
#!/usr/sbin/dtrace -s
syscall::write:entry
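/*
 * The rest of the script was truncated above. A minimal clause matching the
 * description (aggregate write sizes per executable) would look roughly like:
 */
{
        @bytes[execname] = quantize(arg2);
}
-- snip --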
Carol,
Probably /mnt is already in use ie. some other filesystem is mounted
there.
Can you please verify ?
What is the original mountpoint of pool/zfs1 ?
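Both are easy to check:
-- snip --
# df -h /mnt                        # shows what, if anything, is mounted there
# zfs get mountpoint pool/zfs1      # shows the dataset's configured mountpoint
-- snip --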
Regards,
Sanjeev.
Caroline Carol wrote:
Hi all,
When I modify ZFS FS properties I get device busy
-bash-3.00# zfs set mountpoint
Michael,
If you don't call zpool export -f tank it should work.
However, it would be necessary to understand why you are using the above
command after creation of the zpool.
Can you avoid exporting after the creation ?
Regards,
Sanjeev
Michael Goff wrote:
Hi,
When jumpstarting s10x_u4_fcs
Thanks Robert ! I missed that part.
-- Sanjeev.
Michael Goff wrote:
Great, thanks Robert. That's what I was looking for. I was thinking
that I would have to transfer the state somehow from the temporary
jumpstart environment to /a so that it would be persistent. I'll test
it out tomorrow
Kanishk,
Directories are implemented as ZAP objects.
Look at the routines in that order :
- zfs_lookup()
- zfs_dirlook()
- zfs_dirent_lock()
- zap_lookup
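If you want to watch that call chain live rather than read the source, an fbt
one-liner like this prints the kernel stacks leading into zap_lookup() (purely
illustrative):
-- snip --
# dtrace -n 'fbt:zfs:zap_lookup:entry { @[stack()] = count(); }'
-- snip --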
Hope that helps.
Regards,
Sanjeev.
kanishk wrote:
I wanted to know how ZFS finds the entry for a file in its
directory object
export - BOX 2 ZFS.
In other words, can I set up a bunch of thin storage boxes with low CPU
and RAM instead of using SAS or FC to supply the JBOD to the ZFS server?
Should be feasible. Just that you would then need a robust LAN, and it
would be flooded.
Thanks and regards,
Sanjeev.
--
Solaris
is always synced by the txg threads. Not sure why you want it.
Regards,
Sanjeev.
Regards,
-Atul
--
Solaris Revenue Products Engineering
MC,
If you originally had 4 * 500 GB disks configured in RAID-Z, you cannot
add 1 single disk and grow
the capacity of the pool (with the same protection). This is not allowed.
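What you can do is add another complete RAID-Z top-level vdev, which grows the
pool while keeping the protection (disk and pool names are placeholders):
-- snip --
# zpool add pool01 raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
-- snip --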
Regards,
Sanjeev.
MC wrote:
Two conflicting answers to the same question? I guess we need someone to break
the tie
Atul,
libkstat(3LIB) is the library.
man -s 3KSTAT kstat should give a good start.
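For quick prototyping before writing the libkstat code, the kstat(1M) command
exposes the same counters; for example (using the ZFS ARC statistics just to show
the naming scheme):
-- snip --
# kstat -m zfs -n arcstats
-- snip --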
Regards,
Sanjeev.
Atul Vidwansa wrote:
Peter,
How do I get those stats programmatically? Any clues?
Regards,
_Atul
On 3/27/07, Peter Tribble [EMAIL PROTECTED] wrote:
On 3/27/07, Atul Vidwansa [EMAIL
and regards,
Sanjeev.
Bev Crair wrote:
Mike,
Take a look at
http://video.google.com/videoplay?docid=8100808442979626078q=CSI%3Amunich
Granted, this was for demo purposes, but the team in Munich is clearly
leveraging USB sticks for their purposes.
HTH,
Bev.
mike wrote:
I still haven't got
that they will not
change.
Thanks and regards,
Sanjeev.
Manoj Joseph wrote:
Hi,
I believe, ZFS, at least in the design ;) , provides APIs other than
POSIX (for databases and other applications) to directly talk to the DMU.
Are such interfaces ready/documented? If this is documented somewhere,
could you
Adrian,
Seems like a cool idea to me :-) Not sure if there is anything of this
kind being thought about...
Would be a good idea to file an RFE.
Regards,
Sanjeev
Adrian Saul wrote:
Not sure how technically feasible it is, but something I thought of while
shuffling some files around my home
. There is specific code to keep
the page cache (needed in the case of mmap'ed files) and the ARC caches
consistent.
Thanks and regards,
Sanjeev.
--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521
particular reason that you are pushing the filesystem to
the brim ?
Is this part of some test ? Please, help us understand what you are
trying to test.
Thanks and regards,
Sanjeev.
--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80
Robert,
Comments inline...
Robert Milkowski wrote:
Hello Jason,
Wednesday, January 10, 2007, 9:45:05 PM, you wrote:
JJWW Sanjeev Robert,
JJWW Thanks guys. We put that in place last night and it seems to be doing
JJWW a lot better job of consuming less RAM. We set it to 4GB and each of
JJWW
tuning
the ARC should
help in your case.
The zio_bufs that you referred to previously are the caches used by
the ARC for caching
various things (including the metadata and the data).
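On builds that have it, the cap is the zfs_arc_max tunable in /etc/system (older
builds needed arc.c_max set via mdb instead). For example, to cap the ARC at 4 GB
(followed by a reboot for /etc/system changes to take effect):
-- snip --
set zfs:zfs_arc_max = 0x100000000
-- snip --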
Thanks and regards,
Sanjeev.
Best Regards,
Jason
On 1/10/07, Robert Milkowski [EMAIL PROTECTED] wrote
Jason,
Apologies... I missed this mail yesterday...
I am not too familiar with the options. Someone else will have to answer
this.
Thanks and regards,
Sanjeev.
Jason J. W. Williams wrote:
Sanjeev,
Could you point me in the right direction as to how to convert the
following GCC compile
as the mirrors. I think the hot spares don't kick in
if there is a size mismatch.
If none of the above works then we will have a take a closer look at the
details :-)
Regards,
Sanjeev.
Rob wrote:
I physically removed a disk (c3t8d0 used by ZFS 'pool01') from a 3310 JBOD
connected to a V210 running
zpools during this operation.
Thanks and regards,
Sanjeev.
Jason J. W. Williams wrote:
Hello,
Is there a way to set a max memory utilization for ZFS? We're trying
to debug an issue where the ZFS is sucking all the RAM out of the box,
and it's crashing MySQL as a result, we think. Will ZFS reduce
Jim,
That is good news !! Let us know how it goes.
Regards,
Sanjeev.
PS : I am out of office a couple of days.
Jim Hranicky wrote:
OK, spun down the drives again. Here's that output:
http://www.cise.ufl.edu/~jfh/zfs/threads
I just realized that I changed the configuration, so
Jim,
James F. Hranicky wrote:
Sanjeev Bagewadi wrote:
Jim,
We did hit a similar issue yesterday on build 50 and build 45, although the
node did not hang.
In one of the cases we saw that the hot spare was not of the same
size... can you check
if this is true ?
It looks like they're all
.
Thanks and regards,
Sanjeev.
Jim Hranicky wrote:
OS: Nevada build 51 x86
I recently upgraded Sol10x86 6/6 to Nevada build 51. I'm testing out zfs
on a machine and set up a pool with a mirror of two drives and two hot
spares. I then spun down a drive in the mirror which caused the machine
to hang
that
zfs_vdev_cache_bshift is the one which would
control the amount that is read. Currently it is set to 16. So, we should
be able to modify this and reduce
the prefetch.
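If it does turn out to be safe, the change would be an /etc/system entry along
these lines (13 means 8 KB reads instead of the 64 KB implied by 16, and is only
an example value):
-- snip --
set zfs:zfs_vdev_cache_bshift = 13
-- snip --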
However, I will have to double check with more people and get back to you.
Thanks and regards,
Sanjeev.
/Tomas
Tomas,
comments inline...
Tomas Ögren wrote:
On 10 November, 2006 - Sanjeev Bagewadi sent me these 3,5K bytes:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it are
frequently looked over with rsync (over NFS). mdb
crunch
}
-- snip --
And as Niel pointed out we would probably need some way of limiting the
ARC consumption.
Regards,
Sanjeev.
Neil.