Re: [OpenIndiana-discuss] ZFS Snapshot question

2012-08-02 Thread Mark Creamer
Thanks Jan, that sounds easy enough. I'll try it with a non-critical one
that I use for testing, but your explanation looks like it will be more
straightforward than I expected. I appreciate the help.

On Thu, Aug 2, 2012 at 4:59 PM, Jan Owoc  wrote:

> Hi Mark,
>
> On Thu, Aug 2, 2012 at 2:53 PM, Mark Creamer  wrote:
> > What I should have done (I think) is set up the zfs file system for each
> VM
> > at the level below ../vmimages, that is:  "zfs create
> > datastore/vmimages/server1". That way, auto-snapshot would be creating
> > snapshots for each VM, rather than for the entire ../vmimages file system
> > each time.
> >
> > So, is there an easy way to make a separate zfs file system for each of
> the
> > existing directories below ../vmimages? I am able to take the VM guests
> > down to do this as needed.
>
> Yes, zfs filesystems can be nested as many times as you like. On a
> single pool, you could have "/", then "/home", then "/home/mark", and
> then even create subfilesystems for different kinds of data
> (compressible data, data that needs copies=2, etc.).
>
> Having said that, I believe they need to be mounted in empty
> directories, so you'd have to do:
> # mv datastore/vmimages/server1 datastore/vmimages/server1.bak
> # zfs create -o [options here] datastore/vmimages/server1
> # cp -a datastore/vmimages/server1.bak/* datastore/vmimages/server1
> # rm -r datastore/vmimages/server1.bak
>
> Jan
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>



-- 
Mark
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS Snapshot question

2012-08-02 Thread Jan Owoc
Hi Mark,

On Thu, Aug 2, 2012 at 2:53 PM, Mark Creamer  wrote:
> What I should have done (I think) is set up the zfs file system for each VM
> at the level below ../vmimages, that is:  "zfs create
> datastore/vmimages/server1". That way, auto-snapshot would be creating
> snapshots for each VM, rather than for the entire ../vmimages file system
> each time.
>
> So, is there an easy way to make a separate zfs file system for each of the
> existing directories below ../vmimages? I am able to take the VM guests
> down to do this as needed.

Yes, zfs filesystems can be nested as many times as you like. On a
single pool, you could have "/", then "/home", then "/home/mark", and
then even create subfilesystems for different kinds of data
(compressible data, data that needs copies=2, etc.).

Having said that, I believe they need to be mounted in empty
directories, so you'd have to do:
# mv datastore/vmimages/server1 datastore/vmimages/server1.bak
# zfs create -o [options here] datastore/vmimages/server1
# cp -a datastore/vmimages/server1.bak/* datastore/vmimages/server1
# rm -r datastore/vmimages/server1.bak
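
If you want to convert all twenty directories in one go, a loop along these
lines should work (an untested sketch: substitute your actual directory names,
it assumes the new datasets get their default mountpoints under
/datastore/vmimages, and note that a plain * glob skips any dotfiles):

# cd /datastore/vmimages
# for d in server1 server2 server3; do
    mv "$d" "$d.bak"
    zfs create datastore/vmimages/"$d"
    cp -a "$d.bak"/* "$d"
    rm -r "$d.bak"
  done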

Jan

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] ZFS Snapshot question

2012-08-02 Thread Mark Creamer
I'd like to see if I can easily go back and fix a design error I made. I
have a zpool called datastore, and under that a filesystem called vmimages.
I'm auto-snapshotting at that level, and that's all working fine. However,
there are twenty or so sub-directories under datastore/vmimages, each of
which has the files for a VMware guest. For example, datastore/vmimages/server1
has all the files for VM guest server1, etc.

What I should have done (I think) is set up the zfs file system for each VM
at the level below ../vmimages, that is:  "zfs create
datastore/vmimages/server1". That way, auto-snapshot would be creating
snapshots for each VM, rather than for the entire ../vmimages file system
each time.
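
In other words, something like this per guest (assuming the com.sun:auto-snapshot
user property is what the auto-snapshot service keys on; child datasets should
inherit it from datastore/vmimages anyway):

# zfs create datastore/vmimages/server1
# zfs set com.sun:auto-snapshot=true datastore/vmimages/server1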

So, is there an easy way to make a separate zfs file system for each of the
existing directories below ../vmimages? I am able to take the VM guests
down to do this as needed.

Thanks,

-- 
Mark
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Access to ZFS via CIFS from Windows regularly hangs.

2012-08-02 Thread John McEntee
I have an update on my ongoing CIFS and Windows saga.

Getting Samba installed and joined to the domain is a challenge and a half,
and it is still not authenticating (I'm trying it on a test server).

What I have done is stop using a DNS alias to connect to the OpenIndiana
server. Previously I used the server's real name to connect the H: drive (for
my documents etc.) and a DNS alias (CNAME) to connect the company-wide file
share on the S: drive (the intention being that I could fail over to a
different server and just update DNS). I had noticed that it was mainly the
S: drive that stopped responding while the H: drive would continue to work
(but not always). I changed my login scripts to use the OpenIndiana server's
real name for the S: drive as well, and have had no reported problems since.
I made this change just over a week ago.

Should OpenIndiana support a DNS alias like this? It requires a registry
change on a Windows server to work, but I believe the registry change only
affects that Windows server, and it is required for the alias to work at all.

Regards

John



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Hanging zfs commands

2012-08-02 Thread Rich
I'd suggest trying to get a trace of where in the kernel it's blocking
so that the deadlock can be found and fixed.
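
For example, assuming mdb is available, something like this (run as root
against one of the stuck PIDs, e.g. 26707) should show where it is blocked
in the kernel:

# echo "0t26707::pid2proc | ::walk thread | ::findstack -v" | mdb -k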

What version of OI is this?

- Rich

On Thu, Aug 2, 2012 at 5:19 AM, Maurilio Longo  wrote:
> Hans,
>
> I've seen the same problem when using zfs list with -d n, so that it only goes
> down to some depth (I was using 1 as well).
>
> I've solved it by doing a zfs list without -d and then sorting and grepping the
> result for what I need.
>
> Slower, but it did not hang anymore.
>
> You need to reboot, though; I've not found any other way to kill the hanging
> process.
>
> Maurilio.
>
> Hans Joergensen wrote:
>> Hey,
>>
>> Somehow I've hit some kind of lock on one of my NAS boxes.
>>
>> Output from ps:
>> root 26707 1   0 09:10:16 ?   0:00 /usr/sbin/zfs destroy 
>> datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
>> root 26705 1   0 09:10:16 ?   0:00 /usr/sbin/zfs destroy 
>> datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
>> root  2583 1   0 11:40:52 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 12079 1   0 15:40:35 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 26706 1   0 09:10:16 ?   0:00 /usr/sbin/zfs destroy 
>> datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
>> root 22359 1   0 22:21:17 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 26708 1   0 09:10:17 ?   0:00 /usr/sbin/zfs destroy 
>> datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
>> root 22374 1   0 22:21:24 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 16677 1   0 18:06:38 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 22335 1   0 22:21:03 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 27981 1   0 09:40:57 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 22386 1   0 22:21:28 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root  7165 1   0 13:40:48 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 29390 1   0 10:18:03 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -r datastore1/vmware-nfs
>> root  3637 1   0 12:06:27 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 15999 1   0 17:40:43 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 20089 1   0 20:40:52 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 17500 1   0 18:42:03 ?   0:00 /usr/sbin/zfs list -t 
>> snapshot -r datastore1/vmware-nfs
>>
>>
>> Any chance I can get around this without rebooting the machine? It's
>> a production system with lots of VMs on it, so that would be very
>> annoying.
>>
>> I've tried solving the problem by killing the processes that spawned
>> the zfs list and destroy commands; that's why they have 1 as parent
>> process.
>>
>> Could the lock have happened because PIDs 26707 and 26705 were running
>> at the same time?
>>
>> // Hans
>>
>> ___
>> OpenIndiana-discuss mailing list
>> OpenIndiana-discuss@openindiana.org
>> http://openindiana.org/mailman/listinfo/openindiana-discuss
>>
>
> --
>  __
> |  |  | |__| Maurilio Longo
> |_|_|_|| farmaconsult s.r.l.
>
>
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Intel 82852/855GM driver

2012-08-02 Thread DJG
Hello all,

I've got an older device with this Intel chip which I can't get to work. Is the
driver part of xorg-video-intel? Any config suggestions?
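
For reference, I'd guess the relevant xorg.conf Device section would look
something like this if xorg-video-intel does cover the chip (an untested
sketch on my part; the BusID is taken from the pci path below):

Section "Device"
    Identifier "Intel 855GM"
    Driver     "intel"
    BusID      "PCI:0:2:1"
EndSection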

Many thanks,
Dan

node name:  pci10cf,1215
Vendor: Intel Corporation
Device: 82852/855GM Integrated Graphics Device
Sub-Vendor: Fujitsu Limited.
binding name:   pci10cf,1215
devfs path: /pci@0,0/pci10cf,1215
pci path:   0,2,1
compatible name:
(pci8086,3582.10cf.1215.2)(pci8086,3582.10cf.1215)(pci10cf,1215)(pci8086,3582.2)(pci8086,3582)(pciclass,038000)(pciclass,0380)
driver name:unknown
assigned-addresses: c2001110
reg:1100
compatible: pci8086,3582.10cf.1215.2
model:  Video controller
power-consumption:  1
fast-back-to-back:  TRUE
devsel-speed:   0
max-latency:0
min-grant:  0
subsystem-vendor-id:10cf
subsystem-id:   1215
unit-address:   2,1
class-code: 38000
revision-id:2
vendor-id:  8086
device-id:  3582

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Hanging zfs commands

2012-08-02 Thread Maurilio Longo
Hans,

I've seen the same problem when using zfs list with -d n, so that it only goes
down to some depth (I was using 1 as well).

I've solved it by doing a zfs list without -d and then sorting and grepping the
result for what I need.
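
Something along these lines, using Hans' dataset names as an (abbreviated)
example:

# zfs list -t snapshot -H -o name -r datastore1/vmware-nfs | \
    grep '^datastore1/vmware-nfs/zfsnas4-clientstore@' | sort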

Slower, but it did not hang anymore.

You need to reboot, though; I've not found any other way to kill the hanging
process.

Maurilio.

Hans Joergensen wrote:
> Hey,
> 
> Somehow I've hit some kind of lock on one of my NAS boxes.
>
> Output from ps:
> root 26707 1   0 09:10:16 ?   0:00 /usr/sbin/zfs destroy 
> datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
> root 26705 1   0 09:10:16 ?   0:00 /usr/sbin/zfs destroy 
> datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
> root  2583 1   0 11:40:52 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 12079 1   0 15:40:35 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 26706 1   0 09:10:16 ?   0:00 /usr/sbin/zfs destroy 
> datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
> root 22359 1   0 22:21:17 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 26708 1   0 09:10:17 ?   0:00 /usr/sbin/zfs destroy 
> datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
> root 22374 1   0 22:21:24 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 16677 1   0 18:06:38 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 22335 1   0 22:21:03 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 27981 1   0 09:40:57 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 22386 1   0 22:21:28 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root  7165 1   0 13:40:48 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 29390 1   0 10:18:03 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -r datastore1/vmware-nfs
> root  3637 1   0 12:06:27 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 15999 1   0 17:40:43 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 20089 1   0 20:40:52 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
> root 17500 1   0 18:42:03 ?   0:00 /usr/sbin/zfs list -t 
> snapshot -r datastore1/vmware-nfs
> 
> 
> Any chance I can get around this without rebooting the machine? It's
> a production system with lots of VMs on it, so that would be very
> annoying.
>
> I've tried solving the problem by killing the processes that spawned
> the zfs list and destroy commands; that's why they have 1 as parent
> process.
>
> Could the lock have happened because PIDs 26707 and 26705 were running
> at the same time?
> 
> // Hans
> 
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
> 

-- 
 __
|  |  | |__| Maurilio Longo
|_|_|_|| farmaconsult s.r.l.



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] problems with iscsi/target going offline

2012-08-02 Thread Jonathan Loran

Hi List,

I've been fighting an intermittent bugaboo that's driving me crazy.  I haven't 
found anything on the list that shows anyone seeing anything like this.  I have 
a Solaris 10 server that mounts a zpool off of an OpenIndiana storage server 
using disks served by iscsi (COMSTAR).  Every so often (sometimes every day, 
then not for weeks) the iscsi target just goes south on the OpenIndiana box.  
There are scant messages on the OpenIndiana iscsi target, and the iscsi/target 
service claims to stay online.  But on the console of the Sol 10 box, there are 
messages like:

...
 Aug  1 18:00:29 core-bu scsi: WARNING: 
/scsi_vhci/disk@g600144f0f7c588004fc7a1df0015 (sd55):
 Aug  1 18:00:29 core-bu SCSI transport failed: reason 'tran_err': 
retrying command
 Aug  1 18:00:52 core-bu iscsi: NOTICE: iscsi connection(11) unable to connect 
to target iqn.2007-07.edu.berkeley.ssl:iscsi-9.target0
 Aug  1 18:00:53 core-bu iscsi: NOTICE: iscsi connection(7) unable to connect 
to target iqn.2007-07.edu.berkeley.ssl:iscsi-9.target0
 Aug  1 18:02:00 core-bu scsi: WARNING: 
/scsi_vhci/disk@g600144f0f7c588004fc7a1df0014 (sd54):
 Aug  1 18:02:00 core-bu SCSI transport failed: reason 'timeout': 
retrying command
 Aug  1 18:02:01 core-bu scsi: WARNING: 
/scsi_vhci/disk@g600144f0f7c588004fc7a1df000b (sd45):
...

and naturally the Sol 10 box hangs.  The Solaris 10 server is used for various 
backups, and is thus hammered pretty hard at times.  However, it seems like the 
failures are not load related, since they often happen at off-peak times.
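
For reference, when it next happens I can check the target side with the
standard COMSTAR/SMF tools, e.g.:

# svcs -xv svc:/network/iscsi/target:default
# stmfadm list-target -v
# stmfadm list-lu -v
# itadm list-target -v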

Has anyone seen a problem like this?  I'm not sure which end is the problem 
here: the OpenIndiana side or Sol 10.  Here are the versions:

Sol 10 box:
# cat /etc/release
Oracle Solaris 10 9/10 s10x_u9wos_14a X86
 Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
Assembled 11 August 2010

OpenIndiana box (iscsi server):
# cat /etc/release 
  OpenIndiana Development oi_151a X86
Copyright 2010 Oracle and/or its affiliates. All rights reserved.
Use is subject to license terms.
   Assembled 01 September 2011

Thanks,

Jon



- Jonathan Loran -
  IT Officer
  Space Sciences Laboratory, UC Berkeley
  (510) 643-5146
  jlo...@ssl.berkeley.edu




___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss