[zfs-discuss] Recover data from disk with zfs

2011-02-16 Thread Sergey
Hello everybody! Please help me!

I have a Solaris 10 x86_64 server with five 40 GB HDDs.
One HDD, holding / and /usr (and other partitions) on UFS, crashed. It is dead.
The other 4 HDDs (ZFS) were each used as a separate pool (created along the lines of
"zpool create disk1 c0t1d0", and so on).

I installed Solaris 10 x86_64 on a new disk and then imported (zpool import) the other 4
disks. Three imported successfully, but one does not import (I can create a new pool
on this disk, but it is empty).
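
For reference, what I run on the new install is roughly the following (the pool name
here is only an example, since each disk was its own pool):

# zpool import              (with no arguments: lists the pools found on the attached disks)
# zpool import -f disk2     (force-import a pool that was never cleanly exported)
# zpool status disk2

Three of the disks come back fine this way; the fourth is the problem.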

How can I import this disk or recover the data from it?
Sorry for my English.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS send-receive between remote machines as non-root user

2009-09-23 Thread Sergey
Hi list,

I have a question about setting up zfs send-receive functionality (between remote 
machines) as a non-root user.

"server1" - is a server where "zfs send" will be executed
"server2" - is a server where "zfs receive" will be executed.

I am using the following zfs structure:

[server1]$ zfs list -t filesystem -r datapool/data
NAME                 USED  AVAIL  REFER  MOUNTPOINT
datapool/data       2.05G   223G  2.05G  /opt/data
datapool/data/logs    35K   223G    19K  /opt/data/logs
datapool/data/db      18K   223G    18K  /opt/data/db


[server1]$ zfs list -t filesystem -r datapool2/data
NAME                       USED  AVAIL  REFER  MOUNTPOINT
datapool2/data              72K  6.91G    18K  /datapool2/data
datapool2/data/fastdb       18K  6.91G    18K  /opt/data/fastdb
datapool2/data/fastdblog    18K  6.91G    18K  /opt/data/fastdblog
datapool2/data/dblog        18K  6.91G    18K  /opt/data/dblog


ZFS delegated permissions setup on the sending machine:

[server1]$ zfs allow datapool/data
-------------------------------------------------------------
Local+Descendent permissions on (datapool/data)
        user joe atime,canmount,create,destroy,mount,receive,rollback,send,snapshot
-------------------------------------------------------------

[server1]$ zfs allow datapool2/data
-------------------------------------------------------------
Local+Descendent permissions on (datapool2/data)
        user joe atime,canmount,create,destroy,mount,receive,rollback,send,snapshot
-------------------------------------------------------------


The idea is to create a snapshot and send it to another machine with zfs using 
zfs send-receive.
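
For reference, the snapshots below were created along these lines (a recursive
snapshot, so all descendant filesystems get the same @rolling name):

[server1]$ zfs snapshot -r datapool/data@rolling-20090923140714
[server1]$ zfs snapshot -r datapool2/data@rolling-20090923140714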

So I create the snapshots:

[server1]$ zfs list -t snapshot -r datapool/data
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
datapool/data@rolling-20090923140714          48K      -  2.05G  -
datapool/data/logs@rolling-20090923140714     16K      -    18K  -
datapool/data/db@rolling-20090923140714         0      -    18K  -

[server1]$ zfs list -t snapshot -r datapool2/data
NAME                                              USED  AVAIL  REFER  MOUNTPOINT
datapool2/data@rolling-20090923140714                0      -    18K  -
datapool2/data/fastdb@rolling-20090923140714         0      -    18K  -
datapool2/data/fastdblog@rolling-20090923140714      0      -    18K  -
datapool2/data/dblog@rolling-20090923140714          0      -    18K  -


To send the snapshots I'm using the following command (shown for the "datapool" pool), and I get the following error:

[server1]$ zfs send -R datapool/data@rolling-20090923140714 | ssh server2 zfs receive -vd datapool/data_backups/`hostname`/datapool

receiving full stream of datapool/data@rolling-20090923140714 into datapool/data_backups/server1/datapool/data@rolling-20090923140714
received 2.06GB stream in 62 seconds (34.0MB/sec)
receiving full stream of datapool/data/logs@rolling-20090923140714 into datapool/data_backups/server1/datapool/data/logs@rolling-20090923140714
cannot mount 'datapool/data_backups/server1/datapool/data/logs': Insufficient privileges


It seems that user "joe" on the remote server ("server2") cannot mount the 
filesystem:

[server2]$ zfs mount datapool/data_backups/server1/datapool/data/logs
cannot mount 'datapool/data_backups/server1/datapool/data/logs': Insufficient 
privileges

ZFS delegated permissions on the receiving side look fine to me:

[server2]$ zfs allow datapool/data_backups/server1/datapool/data/logs
-------------------------------------------------------------
Local+Descendent permissions on (datapool/data_backups)
        user joe atime,canmount,create,destroy,mount,receive,rollback,send,snapshot
-------------------------------------------------------------
Local+Descendent permissions on (datapool)
        user joe atime,canmount,create,destroy,mount,receive,rollback,send,snapshot

"zfs receive" creates a mountpoint with "root:root" permissions:

[server2]$ ls -ld /opt/data_backups/server1/datapool/data/logs/
drwxr-xr-x   2 root     root           2 Sep 23 14:02 /opt/data_backups/server1/datapool/data/logs/

I've also tried to play with RBAC a bit:
[server2]$ id 
uid=750(joe) gid=750(prod)

[server2]$ profiles
File System Security
ZFS File System Management
File System Management
Service Management
Basic Solaris User
All

... but no luck - I still get the zfs mount error while receiving a snapshot.
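
(For completeness: the profiles above were added with something along the lines of
the following - exact command from memory, profile list may differ:

# usermod -P "ZFS File System Management,File System Management,File System Security" joe

but, as shown, that doesn't help with the mount either.)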

Both servers are running Solaris U7 x86_64, Generic_139556-08.

Is there any way to set up zfs send-receive for descendant zfs filesystems as a 
non-root user?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to do "zfs send" to remote tape drive without intermediate file?

2008-01-31 Thread Sergey
Hi list,

I'd like to be able to store zfs filesystems on a tape drive that is attached to
another Solaris U4 x86 server. The idea is to use "zfs send" together with tar in 
order to get the filesystems' snapshots stored on a tape as a list of named files, and 
to be able to perform a restore operation later. It's pretty nice to be able to use tar 
to list the contents of the tape in human-readable form.

How can I do this in one command, without an intermediate file?

The file itself can be created using "zfs send tank/[EMAIL PROTECTED] > 
/path/to/filesystem_snapshot.zfs", but it's painful to reserve a fairly large 
amount of disk space to store the intermediate .zfs file.

Of course, I can write to the remote tape over ssh using the command below, but I'd 
like to see some kind of meaningful names on the tape:

# zfs send tank/[EMAIL PROTECTED] | ssh remote_server "cat > /dev/rmt/0bn"
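
A variant of the same approach that I have in mind, with dd for sane tape block
sizes and mt to step over file marks afterwards (the snapshot name is only an
example), would be:

# zfs send tank/fs@snap1 | ssh remote_server "dd of=/dev/rmt/0bn obs=1048576"
# ssh remote_server "mt -f /dev/rmt/0bn fsf 1"    (skip forward over one file mark on the no-rewind device)

But this still doesn't give me meaningful names on the tape.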

Thanks,
Sergey
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Memory Usage

2007-09-14 Thread Sergey
I am running Solaris U4 x86_64.

It seems that something has changed regarding mdb:

# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc 
pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp 
crypto ptm ]
> arc::print -a c_max
mdb: failed to dereference symbol: unknown symbol name


> ::arc -a
{
hits = 0x6baba0
misses = 0x25ceb
demand_data_hits = 0x2f0bb9
demand_data_misses = 0x92bc
demand_metadata_hits = 0x2b50db
demand_metadata_misses = 0x14c20
prefetch_data_hits = 0x5bfe
prefetch_data_misses = 0x1d42
prefetch_metadata_hits = 0x10f30e
prefetch_metadata_misses = 0x60cd
mru_hits = 0x62901
mru_ghost_hits = 0x9dd5
mfu_hits = 0x545ea4
mfu_ghost_hits = 0xb9aa
deleted = 0xcb5a3
recycle_miss = 0x131fb
mutex_miss = 0x1520
evict_skip = 0x0
hash_elements = 0x1ea54
hash_elements_max = 0x40fac
hash_collisions = 0x138464
hash_chains = 0x92c7
[..skipped..]

How can I set/view arc.c_max now?
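
(For viewing, the arcstats kstat seems to still be there - e.g. "kstat -p zfs:0:arcstats" -
but what I'm after is setting c_max on the fly. The only other knob I know of is the
/etc/system tunable, something like

set zfs:zfs_arc_max = 0x40000000

which requires a reboot.)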
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problems in dCache

2007-08-01 Thread Sergey Chechelnitskiy
Hi All, 

Thank you for the answers. 
I am not really comparing anything. 
I have a flat directory with a lot of small files inside, and a Java application 
that reads all these files when it starts. If this directory is located on ZFS, the 
application starts fast (15 minutes) when the number of files is around 300,000 and 
very slowly (more than 24 hours) when the number of files is around 400,000. 

The question is: why? 
Let's set aside the question of why this application is designed this way.

I still needed to run this application, so I installed a Linux box with XFS, 
mounted the XFS directory on the Solaris box and moved my flat directory there. 
After that my application started fast (< 30 minutes) even when the number of files 
(in the Linux-hosted XFS directory mounted via NFS on the Solaris box) was 400,000 
or more. 
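
(The workaround mount on the Solaris side is nothing special, roughly:

# mount -F nfs linuxbox:/export/dcache/control /dcache/pool/control

with the hostname and paths above being illustrative.)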

Basically, what I want is to run this application on a Solaris box. Right now I 
cannot do that.

Thanks, 
Sergey

On August 1, 2007 08:15 am, [EMAIL PROTECTED] wrote:
> > On 01/08/2007, at 7:50 PM, Joerg Schilling wrote:
> > > Boyd Adamson <[EMAIL PROTECTED]> wrote:
> > >> Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
> > >> Linux? That doesn't seem to make sense since the userspace
> > >> implementation will always suffer.
> > >>
> > >> Someone has just mentioned that all of UFS, ZFS and XFS are
> > >> available on
> > >> FreeBSD. Are you using that platform? That information would be
> > >> useful
> > >> too.
> > >
> > > FreeBSD does not use what Solaris calls UFS.
> > >
> > > Both Solaris and FreeBSD started with the same filesystem code, but
> > > Sun started enhancing UFS in the late 1980's while BSD did not take over
> > > the changes. Later, BSD forked the filesystem code. Filesystem
> > > performance thus cannot be compared.
> >
> > I'm aware of that, but they still call it UFS. I'm trying to
> > determine what the OP is asking.
>
>   I seem to remember many daemons that used large groupings of files such as
> this changing to a split-out directory tree starting in the late 80's to
> avoid slow stat issues.  Is this type of design (tossing 300k+ files into
> one flat directory) becoming acceptable again?
>
>
> -Wade
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS problems in dCache

2007-07-31 Thread Sergey Chechelnitskiy
Hi All, 

We have a problem running a scientific application, dCache, on ZFS. 
dCache is Java-based software that stores huge datasets in pools.
One dCache pool consists of two directories, pool/data and pool/control. The 
real data goes into pool/data/. 
For each file in pool/data/, the pool/control/ directory contains two small 
files, one of 23 bytes and one of 989 bytes. 
When a dCache pool starts, it sequentially reads all the files in the control/ 
directory.
We run a pool on ZFS.

When we have approx. 300,000 files in control/, the pool startup time is about 
12-15 minutes. 
When we have approx. 350,000 files in control/, the pool startup time increases 
to 70 minutes. 
If we set up a new ZFS pool with the smallest possible blocksize and move 
control/ there, the startup time decreases to 40 minutes (in the 350,000-file 
case). 
But if we run the same pool on XFS, the startup time is only 15 minutes. 
Could you suggest how to reconfigure ZFS to decrease the startup time?
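
For reference, the smallest-blocksize variant mentioned above was set up roughly
like this (the pool name, disk and paths are illustrative, not our real ones):

# zpool create ctlpool c1t1d0
# zfs set recordsize=512 ctlpool        (512 bytes is the smallest recordsize ZFS accepts)
# mv /dcachepool/control /ctlpool/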

With approx. 400,000 files in control/, we were not able to start the pool 
within 24 hours. UFS did not work in this case either, but XFS did.

What could be the problem? 
Thank you,

-- 
--
Best Regards, 
Sergey Chechelnitskiy ([EMAIL PROTECTED])
WestGrid/SFU
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: NFS share problem with mac os x client

2007-02-08 Thread Sergey
The setup below works fine for me.

macmini:~ jimb$ mount | grep jimb
ride:/xraid2/home/jimb on /private/var/automount/home/jimb (nosuid, automounted)

macmini:~ jimb$ nidump fstab / | grep jimb
ride:/xraid2/home/jimb /home/jimb nfs rw,nosuid,tcp 0 0

NFS server: Solaris 10 11/06 x86_64 + patches, NFSv3.
The client runs the latest Mac OS X release + patches.
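
On the Solaris side the home filesystem is exported as a plain NFS share, something
along the lines of (dataset name illustrative):

# zfs set sharenfs=on xraid2/home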
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] HELP please!!!! zfs can't open drives!!!

2007-01-19 Thread Sergey
After BFUing from b37 to current, the zpool won't start; it fails with this error:
wis-2 ~ # zpool status -x
  pool: zstore
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
zstore  UNAVAIL  0 0 0  insufficient replicas
  raidz1UNAVAIL  0 0 0  insufficient replicas
c2t0d0  UNAVAIL  0 0 0  cannot open
c2t1d0  UNAVAIL  0 0 0  cannot open
c2t2d0  UNAVAIL  0 0 0  cannot open
c2t3d0  UNAVAIL  0 0 0  cannot open
c2t4d0  UNAVAIL  0 0 0  cannot open
c2t5d0  UNAVAIL  0 0 0  cannot open


But according to the log, the drives were found and initialized:
Jan 19 13:58:11 wis-2 pcplusmp: [ID 398438 kern.info] pcplusmp: pci1000,1960 
(lsimega) instance #1 vector 0x18 ioapic 0x9 intin 0x0 is bound to cpu 2
Jan 19 13:58:11 wis-2 pci_pci: [ID 370704 kern.info] PCI-device: pci8086,[EMAIL 
PROTECTED], lsimega1
Jan 19 13:58:11 wis-2 genunix: [ID 936769 kern.info] lsimega1 is /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]
Jan 19 13:58:11 wis-2 scsi: [ID 193665 kern.info] sd2 at lsimega1: target 0 lun 0
Jan 19 13:58:11 wis-2 genunix: [ID 936769 kern.info] sd2 is /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0
Jan 19 13:58:11 wis-2 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 (sd2):
Jan 19 13:58:11 wis-2   sd_get_write_cache_enabled: Mode Sense returned invalid 
block descriptor length
Jan 19 13:58:12 wis-2 krtld: [ID 469452 kern.info] NOTICE: ncrs: 64-bit driver 
module not found
Jan 19 13:58:12 wis-2 krtld: [ID 469452 kern.info] NOTICE: hpfc: 64-bit driver 
module not found
Jan 19 13:58:12 wis-2 krtld: [ID 469452 kern.info] NOTICE: adp: 64-bit driver 
module not found
Jan 19 13:58:12 wis-2 krtld: [ID 469452 kern.info] NOTICE: cadp: 64-bit driver 
module not found
Jan 19 13:58:12 wis-2 krtld: [ID 469452 kern.info] NOTICE: symhisl: 64-bit 
driver module not found
Jan 19 13:58:12 wis-2 scsi: [ID 193665 kern.info] sd36 at lsimega1: target 1 
lun 0
Jan 19 13:58:12 wis-2 genunix: [ID 936769 kern.info] sd36 is /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0
Jan 19 13:58:12 wis-2 scsi: [ID 193665 kern.info] sd37 at lsimega1: target 2 
lun 0
Jan 19 13:58:12 wis-2 genunix: [ID 936769 kern.info] sd37 is /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0
Jan 19 13:58:12 wis-2 scsi: [ID 193665 kern.info] sd38 at lsimega1: target 3 
lun 0
Jan 19 13:58:12 wis-2 genunix: [ID 936769 kern.info] sd38 is /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0
Jan 19 13:58:12 wis-2 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 (sd38):
Jan 19 13:58:12 wis-2   sd_get_write_cache_enabled: Mode Sense returned invalid 
block descriptor length
Jan 19 13:58:12 wis-2 scsi: [ID 193665 kern.info] sd39 at lsimega1: target 4 
lun 0
Jan 19 13:58:12 wis-2 genunix: [ID 936769 kern.info] sd39 is /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0
Jan 19 13:58:12 wis-2 scsi: [ID 193665 kern.info] sd40 at lsimega1: target 5 
lun 0
Jan 19 13:58:12 wis-2 genunix: [ID 936769 kern.info] sd40 is /[EMAIL 
PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Where is the ZFS configuration data stored?

2006-10-12 Thread Sergey
A little addition to the original question:

Imagine that you have a RAID array attached to a Solaris server, with ZFS on the RAID. 
One day you lose the server completely (fried motherboard, physical crash, ...). 
Is there any way to connect the RAID to another server and restore the ZFS layout 
(without losing all the data on the RAID)?
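
In other words, I am hoping that something like the following would work on the
replacement server once the RAID is attached (pool name illustrative):

# zpool import            (should discover the pool on the newly attached LUNs)
# zpool import -f tank    (-f because the dead server never got to export it)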
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS vs. Apple XRaid

2006-09-22 Thread Sergey
Please read also http://docs.info.apple.com/article.html?artnum=303503.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS vs. Apple XRaid

2006-09-21 Thread Sergey
I had the same problem. Read the following article -
http://docs.info.apple.com/article.html?artnum=302780

Most likely you have "Allow host cache Flushing" checked. Uncheck it and try 
again.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Some questions about how to organize ZFS-based filestorage

2006-09-20 Thread Sergey
Hi all,

I am trying to organize our small (and only) file storage, using and thinking in 
ZFS style. :)

So I have a SF X4100 (2 x dual-core AMD Opteron 280, 4 GB of RAM, Solaris 10 x86 
06/06 64-bit kernel + updates), a Sun Fibre Channel HBA card (QLogic-based) and an 
Apple Xraid with 7 TB (2 RAID controllers with 7 x 500 GB ATA disks per 
controller). Two internal SAS drives are in RAID1 mode using the built-in LSI 
controller.

The Xraid is configured as follows: 6 disks in HW RAID5 and one spare disk per 
controller.

So I have :

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c0t2d0 
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   1. c4t600039317312d0 
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   2. c5t60003931742Bd0 
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0


I need a place to keep multiple builds of our products (a huge number of small 
files). This will take about 2 TB, so it seems logical to dedicate the whole of 
"1." or "2." from the output above to it. What will be the best block size to 
supply to the "zfs create" command to get the most out of a filesystem that holds 
a huge number of small files?
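
To make the question concrete, what I have in mind is roughly the following
(pool/dataset names and the recordsize value are just placeholders):

# zpool create builds c4t600039317312d0
# zfs create builds/products
# zfs set recordsize=8k builds/products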

The other pool will host users' home directories, project files and other files.

Now I am thinking of creating two separate ZFS pools, with "1." and "2." as the 
only physical device in each pool.

Or would I be better off creating one ZFS pool that includes both "1." and "2."?
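
In other words, the choice is between two pools, one per LUN:

# zpool create tank1 c4t600039317312d0
# zpool create tank2 c5t60003931742Bd0

and a single pool spanning both LUNs (pool names are placeholders):

# zpool create tank c4t600039317312d0 c5t60003931742Bd0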

Later on I will use NFS to share this file storage among Linux, Solaris, 
OpenSolaris and Mac OS X hosts.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss