> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
> Sent: Friday, April 30, 2010 1:40 PM
>
> With the new Oracle policies, it seems unlikely that you will be able
> to reinstall the OS and achieve what you had before. An exact
> recovery method (dd of partition images or recreate pool
> From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
> Sent: Friday, April 30, 2010 10:46 AM
>
> Hi Ned,
>
> Unless I misunderstand what bare metal recovery means, the following
> procedure describes how to boot from CD, recreate the root pool, and
> restore the root pool snapshots:
>
>
I have no idea why I posted in zfs-discuss...ok, migration...I will post a
follow-up in help.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 05/ 1/10 03:09 PM, devsk wrote:
Looks like the X's vesa driver can only use 1600x1200 resolution and not the
native 1920x1200.
Asking these questions on the ZFS list isn't going to get you very far.
Try the opensolaris-help list.
--
Ian.
Looks like the X's vesa driver can only use 1600x1200 resolution and not the
native 1920x1200.
And if I pass -dpi to enforce 96 DPI, it just croaks.
With -dpi removed, I am inside X at 1600x1200 resolution.
Can anyone tell me how I can get the native 1920x1200 resolution working with
vesa
I am getting a strange reset as soon as I say startx from normal user's console
login.
How do I troubleshoot this? Any ideas? I removed the /etc/X11/xorg.conf before
invoking startx because that would have some PCI bus ids in there which won't
be valid in real hardware.
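For what it's worth, the vesa driver can only use modes that the card's VESA BIOS actually advertises, so 1920x1200 often simply isn't on offer. If you want to try forcing it anyway, a minimal xorg.conf sketch would be something like the following (section names are standard Xorg; the identifiers and mode list are assumptions about your setup):

```
Section "Screen"
    Identifier "Screen0"
    SubSection "Display"
        # vesa will pick the first mode here that the VESA BIOS supports
        Modes "1920x1200" "1600x1200"
    EndSubSection
EndSection
```

If the BIOS doesn't list 1920x1200, vesa silently falls back to the next mode; a native driver for your chipset is usually the real fix.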
I think I messed up just a notch!
When I did zfs send|recv, I used the flag -u (because I wanted it to not mount
at that time). But it set the fs property canmount to off for ROOT...YAY!
I booted into livecd, imported the mypool and fixed the mount points and
canmount property. And I am now in
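For anyone else who hits this, the repair from the live CD is roughly the
following (the pool name matches the post; the boot-environment dataset name
is a placeholder for whatever yours is called):

```shell
# from the live CD: import the pool without mounting any datasets
zpool import -f -N mypool

# restore the properties so the BE mounts correctly at boot
zfs set canmount=noauto mypool/ROOT/mybe
zfs set mountpoint=/ mypool/ROOT/mybe
```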
I had created a virtualbox VM to test out opensolaris. I updated to latest dev
build and set my things up. Tested pools and various configs/commands. Learnt
format/partition etc.
And then, I wanted to move this stuff to a solaris partition on the physical
disk. VB provides physical disk access.
Looks like I am hitting the same issue now
from the earlier post that you responded.
http://opensolaris.org/jive/thread.jspa?threadID=128532&tstart=15
Continued my test migration with dedup=off and synced a couple more file
systems.
I decided to merge two of the file systems together by copyi
On Wed, Apr 28, 2010 at 09:49:04PM +0200, Ragnar Sundblad wrote:
> On 28 apr 2010, at 14.06, Edward Ned Harvey wrote:
>
> What indicators do you have that ONTAP/WAFL has inode->name lookup
> functionality? I don't think it has any such thing - WAFL is pretty
> much a UFS/FFS that does COW instead
On Fri, April 30, 2010 13:44, Freddie Cash wrote:
> On Fri, Apr 30, 2010 at 11:35 AM, Bob Friesenhahn <
> bfrie...@simple.dallas.tx.us> wrote:
>
>> On Thu, 29 Apr 2010, Tonmaus wrote:
>>
>>> Recommending to not using scrub doesn't even qualify as a
>>> workaround, in my regard.
>>
>> As a d
> On Thu, 29 Apr 2010, Tonmaus wrote:
>
> > Recommending to not using scrub doesn't even qualify as a
> > workaround, in my regard.
>
> As a devoted believer in the power of scrub, I believe that after the
> OS, power supplies, and controller have been verified to function with
> a good scrub
On Fri, Apr 30, 2010 at 11:35 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Thu, 29 Apr 2010, Tonmaus wrote:
>
>> Recommending to not using scrub doesn't even qualify as a
>> workaround, in my regard.
>
> As a devoted believer in the power of scrub, I believe that after the OS
On Thu, 29 Apr 2010, Tonmaus wrote:
Recommending to not using scrub doesn't even qualify as a
workaround, in my regard.
As a devoted believer in the power of scrub, I believe that after the
OS, power supplies, and controller have been verified to function with
a good scrubbing, if there is m
On Thu, 29 Apr 2010, Edward Ned Harvey wrote:
This is why I suggested the technique of:
Reinstall the OS just like you did when you first built your machine, before
the catastrophe. It doesn't even matter if you make the same selections you
With the new Oracle policies, it seems unlikely that
Brandon,
You're probably hitting this CR:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924824
I'm tracking the existing dedup issues here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
Thanks,
Cindy
On 04/29/10 23:11, Brandon High wrote:
I tried destroying a
On Thu, April 29, 2010 17:35, Bob Friesenhahn wrote:
> In my opinion periodic scrubs are most useful for pools based on
> mirrors, or raidz1, and much less useful for pools based on raidz2 or
> raidz3. It is useful to run a scrub at least once on a well-populated
> new pool in order to validate
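For reference, starting and monitoring a scrub is a two-command affair (the
pool name "tank" is a placeholder):

```shell
zpool scrub tank        # start a scrub of the pool
zpool status -v tank    # show progress and any checksum errors found
```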
On 30 avr. 2010, at 13:47, Euan Thoms wrote:
> Well I'm so impressed with zfs at the moment! I just got steps 5 and 6 (from
> my last post) to work, and it works well. Not only does it send the increment
> over to the backup drive, the latest increment/snapshot appears in the
> mounted filesys
> From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
>
> Whilst it's trivially easy to get from the file to the list of
> directories containing that file, actually getting from one directory
> to its parent is less so: A directory containing N sub-directories has
> N+2 links. Whilst the
Hi Ned,
Unless I misunderstand what bare metal recovery means, the following
procedure describes how to boot from CD, recreate the root pool, and
restore the root pool snapshots:
http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view
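For readers who just want the shape of it, that procedure boils down to
something like the following (disk device, pool, boot environment, and
snapshot names are placeholders; follow the linked doc for the authoritative
steps):

```shell
# booted from the Solaris install/live CD, with the backup pool attached
zpool create -f -o failmode=continue -R /a -m legacy rpool c1t0d0s0

# restore the recursive root-pool snapshot stream from the backup
zfs send -R backup-pool/rpool@snap1 | zfs receive -Fdu rpool

# make the restored BE bootable again (SPARC shown; x86 uses installgrub)
zpool set bootfs=rpool/ROOT/mybe rpool
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
```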
I retest this process at every Solaris release.
Thanks,
Was wondering if any of you see any issues with the following in Solaris
10 u8 ZFS?
System Memory:
Physical RAM: 11042 MB
Free Memory : 5250 MB
LotsFree: 168 MB
ZFS Tunables (/etc/system):
ARC Size:
Current Size: 4309 MB (arcsize)
Target Size (Adaptive): 10018 MB (c)
Min Size (Hard Limit): 12
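If the goal is to keep the ARC from growing toward that 10 GB adaptive
target, the usual Solaris 10 knob is a line in /etc/system (the 4 GB value
below is only an example; it takes effect at the next reboot):

```
* Cap the ZFS ARC at 4 GB (value is in bytes)
set zfs:zfs_arc_max = 4294967296
```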
> From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
>
> I gather you are suggesting that the inode be extended to contain a
> list of the inode numbers of all directories that contain a filename
> referring to that inode.
Correct.
> [inodes] can have up to 32767 links [to them]. Wh
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Euan Thoms
>
> pfexec zfs send rp...@first | pfexec zfs receive -u backup-pool/rpool
> pfexec zfs send rpool/r...@first | pfexec zfs receive -u backup-
> pool/rpool/ROOT
> pfexec zfs send rpool/
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Euan Thoms
>
> My ideal solution would be to have the data accessible from the backup
> media (external HDD) as well as usable for a full system restore. Below
> is what I would consider ideal:
>
Brandon High wrote:
I'm seeing some weird behavior on b133 with 'zfs inherit' that seems
to conflict with what the docs say. According to the man page it
"clears the specified property, causing it to be inherited from an
ancestor" but that's not the behavior I'm seeing.
For example:
basestar:~$
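For comparison, the documented behavior is that `zfs inherit` clears a local
setting so the value comes from the ancestor again; a quick illustration
(dataset names are placeholders):

```shell
zfs set compression=on tank/home              # set locally on the parent
zfs set compression=off tank/home/bob         # local override on the child
zfs inherit compression tank/home/bob         # clear the override
zfs get compression tank/home/bob             # SOURCE column should now read "inherited"
```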
Well I'm so impressed with zfs at the moment! I just got steps 5 and 6 (from my
last post) to work, and it works well. Not only does it send the increment over
to the backup drive, the latest increment/snapshot appears in the mounted
filesystem. In nautilus I can browse an exact copy of my PC, f
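The incremental step described there is the -i form of zfs send; roughly
(snapshot and pool names are placeholders matching the style of the earlier
commands):

```shell
zfs snapshot -r rpool@second
zfs send -R -i rpool@first rpool@second | zfs receive -du backup-pool/rpool
```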
Hello Experts;
There was a CIFS share we were using, /export/cifs1. It got deleted
accidentally.
Is there any way I can recover this directory? We don't have a snapshot
of this directory.
regards;
Ashish
Thanks Edward, you understood me perfectly.
Your suggestion sounds very promising. I like the idea of letting the
installation CD set everything up; that way some hardware/drivers could
possibly be updated and it would still work. On top of a bare metal recovery, I
would like to leverage the incr
Thanks Cindy for the links.
I see that this could possibly be a replacement for ufsbackup/ufsrestore, but
unless a further snapshot can be appended to the file containing the recursive
rootpool snapshot, it would still regress from the incremental backup that
ufsbackup has. It would take a long