Do you use any form of compression?
I changed compression from none to gzip-9, got some message about changing
properties of the boot pool (or fs), copied and moved all files under /usr and /etc
to enforce compression, rebooted, and - guess what message I got.
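For reference, a minimal sketch of the sequence described, with an assumed
root-pool dataset name; ZFS compresses only blocks written after the property
is set, which is why the files had to be rewritten:

# Dataset name is an assumption; adjust to the actual boot environment.
zfs set compression=gzip-9 rpool/ROOT/opensolaris
# Existing data stays uncompressed until rewritten, hence the copy/move
# of /usr and /etc above; rewriting a live root this way can leave the
# boot archive inconsistent with the filesystem.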
Off the lists, someone suggested to me that the "Inconsistent
filesystem" message may refer to the boot archive and not the ZFS filesystem
(though I still don't know what's wrong with booting b99).
Regardless, I tried rebuilding the boot_archive with bootadm
update-archive -vf and verified it by mounting it.
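For anyone following along, a sketch of that rebuild-and-verify step; the
lofi device number is illustrative and the archive's format (UFS here)
varies by build:

# Rebuild the boot archive for the running system.
bootadm update-archive -vf
# Mount it read-only to inspect the contents (assumes a UFS image).
lofiadm -a /platform/i86pc/boot_archive      # prints e.g. /dev/lofi/1
mount -F ufs -o ro /dev/lofi/1 /mnt
ls /mnt
umount /mnt && lofiadm -d /dev/lofi/1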
Hi,
After a recent pkg image-update to OpenSolaris build 100, my system
booted once and now will no longer boot. After exhausting other
options, I am left wondering if there is some kind of ZFS issue a scrub
won't find.
The current behavior is that it will load GRUB, but trying to boot the
mo
Raw Device Mapping is a feature of ESX 2.5 and above that allows a guest OS to
access a LUN on a Fibre Channel or iSCSI SAN.
See http://www.vmware.com/pdf/esx25_rawdevicemapping.pdf for more details.
You may be able to do something similar with raw disks under Workstation;
see http://www.vmwar
Hi - I'm interested in your solution, as my current ZFS/VMware experiment is
stalled.
I have a 6-disk SCSI rack (6 @ 9 GB each) attached as raw disks to the VM
(Workstation 6), and have been getting ZFS pool corruption on reboot. VMware
is allowing the Solaris guest to write a disklabel that is (
Added a vdev using RDM, and that seems to be stable over reboots;
however, the pools based on a virtual disk now also seem to be stable after
doing an export and import -f.
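The sequence referred to, sketched with a hypothetical pool name:

# Export and force-import so the labels and cached device paths
# are rewritten (pool name is hypothetical).
zpool export tank
zpool import -f tank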
I am seeing the same problem using a separate virtual disk for the pool.
This is happening with Solaris 10 U3, U4 and U5.
SCSI reservations are known to be an issue with clustered Solaris:
http://blogs.sun.com/SC/entry/clustering_solaris_guests_that_run
I wonder if this is the same problem. Maybe w
Hi Ricardo,
I'll try that.
Thanks (Obrigado)
Paulo Soeiro
On 6/5/08, Ricardo M. Correia <[EMAIL PROTECTED]> wrote:
>
> On Ter, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote:
>
> 6) Removed and attached the USB sticks:
>
> zpool status
> pool: myPool
> state: UNAVAIL
> status: One or more devices
On Ter, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote:
> 6) Removed and attached the USB sticks:
>
> zpool status
> pool: myPool
> state: UNAVAIL
> status: One or more devices could not be used because the label is
> missing or invalid. There are insufficient replicas for the pool to
> continue functioning.
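For completeness, the usual recovery attempt in this state looks something
like the following sketch (pool name from the thread; whether export succeeds
on an UNAVAIL pool depends on its state):

# Re-scan the device labels after replugging the sticks.
zpool export myPool              # may fail if the pool is UNAVAIL
zpool import -d /dev/dsk myPool
zpool status myPool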
On Jun 3, 2008, at 18:34, Paulo Soeiro wrote:
> This test was done without the hub:
FWIW, I bought 9 microSD cards and 9 USB controller units for them from
NewEgg to replicate the famous ZFS demo video, and I had problems
getting them working with OpenSolaris (on VMware on OS X, in this case).
Af
This test was done without the hub:
On Tue, Jun 3, 2008 at 11:33 PM, Paulo Soeiro <[EMAIL PROTECTED]> wrote:
> Did the same test again and here is the result:
>
> 1)
>
> zpool create myPool mirror c6t0d0p0 c7t0d0p0
>
> 2)
>
> -bash-3.2# zfs create myPool/myfs
>
> -bash-3.2# zpool status
>
> pool:
Did the same test again and here is the result:
1)
zpool create myPool mirror c6t0d0p0 c7t0d0p0
2)
-bash-3.2# zfs create myPool/myfs
-bash-3.2# zpool status
pool: myPool
state: ONLINE
scrub: none requested
config:
NAME          STATE   READ WRITE CKSUM
myPool        ONLINE     0     0     0
  mirror      ONLINE     0     0     0
    c6t0d0p0  ONLINE     0     0     0
    c7t0d0p0  ONLINE     0     0     0
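A natural follow-up to this test, sketched with the pool name above, is a
scrub to confirm the mirror healed any damage from the interrupted write:

zpool scrub myPool
zpool status -v myPool           # CKSUM column shows repaired errors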
Justin Vassallo wrote:
> Thommy,
>
> If I read correctly, your post stated that the pools did not automount on
> startup, not that they would become corrupted. It seems to me that Paulo is
> actually experiencing a corrupt fs
Nah, I also had indications of "corrupted data" if you read my posts.
But the
Paulo Soeiro wrote:
> Greetings,
>
> I was experimenting with zfs, and I made the following test: I shut down
> the computer during a write operation
> in a mirrored USB storage filesystem.
Paulo Soeiro wrote:
> Greetings,
>
> I was experimenting with zfs, and I made the following test: I shut down
> the computer during a write operation
> in a mirrored USB storage filesystem.
>
> Here is my configuration
>
> NGS USB 2.0 Minihub 4
> 3 USB Silicom Power Storage Pens 1 GB each
>
> Th
Greetings,
I was experimenting with zfs, and I made the following test: I shut down the
computer during a write operation
in a mirrored USB storage filesystem.
Here is my configuration:
NGS USB 2.0 Minihub 4
3 USB Silicom Power Storage Pens 1 GB each
These are the ports:
hub devices
/---
Hello, I'm having the same exact situation on one VM, and not on another VM on
the same infrastructure.
The only difference is that on the failing VM I initially created the pool with
a name and then changed the mountpoint to another name.
Did you find a solution to the issue?
Should I consider
I have a test-bed S10U5 system running under VMware ESX that has a weird
problem.
I have a single virtual disk, with some slices allocated as UFS filesystems
for the operating system, and s7 as a ZFS pool.
Whenever I reboot, the pool fails to open:
May 8 17:32:30 niblet fmd: [ID 441519 daemon.e
We have the same issue (using dCache on Thumpers, data on ZFS).
A workaround has been to move the directory to a local UFS filesystem created
with a low nbpi parameter.
However, this is not a solution.
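For reference, a sketch of that workaround; the slice and mount point are
hypothetical. A low nbpi (bytes per inode) value gives UFS enough inodes for
very large numbers of small files:

# Create UFS with one inode per 2 KB of data space (device hypothetical).
newfs -i 2048 /dev/rdsk/c0t1d0s6
mount -F ufs /dev/dsk/c0t1d0s6 /pool/control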
Doesn't look like a threading problem, thanks anyway, Jens!
On Wed, Aug 01, 2007 at 09:49:26AM -0700, Sergey Chechelnitskiy wrote:
Hi Sergey,
>
> I have a flat directory with a lot of small files inside, and I have a Java
> application that reads all these files when it starts. If this directory is
> located on ZFS the application starts fast (15 mins) w
I think I am having the same problem using a different application (Windchill).
ZFS is consuming huge amounts of memory and the system (a T2000) is performing
poorly. Occasionally it will take a long time (several hours) to do a snapshot.
Normally a snapshot will take a second or two. The application
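A quick way to confirm it is the ZFS ARC that is consuming the memory, using
standard Solaris tools:

# Current ARC size in bytes.
kstat -p zfs:0:arcstats:size
# Kernel memory breakdown (as root).
echo ::memstat | mdb -k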
Hi All,
Thank you for the answers.
I am not really comparing anything.
I have a flat directory with a lot of small files inside, and I have a Java
application that reads all these files when it starts. If this directory is
located on ZFS the application starts fast (15 mins) when the number of fi
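To reproduce this outside the application, a sketch that builds a flat
directory of small files and times a cold re-read; the path and file count
are arbitrary:

# Create 100,000 one-byte files in a flat ZFS directory.
mkdir -p /tank/flatdir
i=0
while [ $i -lt 100000 ]; do
    printf x > /tank/flatdir/f$i
    i=$((i + 1))
done
# Time a cold read (reboot or export/import the pool first to drop the ARC).
time sh -c 'find /tank/flatdir -type f -exec cat {} + > /dev/null'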
> On 01/08/2007, at 7:50 PM, Joerg Schilling wrote:
> > Boyd Adamson <[EMAIL PROTECTED]> wrote:
> >
> >> Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
> >> Linux? That doesn't seem to make sense since the userspace
> >> implementation will always suffer.
> >>
> >> Someone h
On 01/08/2007, at 7:50 PM, Joerg Schilling wrote:
> Boyd Adamson <[EMAIL PROTECTED]> wrote:
>
>> Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
>> Linux? That doesn't seem to make sense since the userspace
>> implementation will always suffer.
>>
>> Someone has just mentioned th
Boyd Adamson <[EMAIL PROTECTED]> wrote:
> Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
> Linux? That doesn't seem to make sense since the userspace
> implementation will always suffer.
>
> Someone has just mentioned that all of UFS, ZFS and XFS are available on
> FreeBSD. Are
Sergey Chechelnitskiy <[EMAIL PROTECTED]> writes:
> Hi All,
>
> We have a problem running a scientific application dCache on ZFS.
> dCache is Java-based software that allows one to store huge datasets in
> pools. One dCache pool consists of two directories, pool/data and
> pool/control. The real dat
Hi All,
We have a problem running a scientific application, dCache, on ZFS.
dCache is Java-based software that allows one to store huge datasets in pools.
One dCache pool consists of two directories, pool/data and pool/control. The
real data goes into pool/data/.
For each file in pool/data/ the pool
Hello James,
Saturday, November 18, 2006, 11:34:52 AM, you wrote:
JM> as far as I can see, your setup does not meet the minimum
JM> redundancy requirements for a Raid-Z, which is 3 devices.
JM> Since you only have 2 devices you are out on a limb.
Actually, only two disks for raid-z is fine, and you
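Concretely, the two-device form is accepted (device names hypothetical):

# A raidz vdev with only two devices; roughly equivalent to a mirror,
# with parity computed instead of plain copies.
zpool create tank raidz c0t0d0 c0t1d0
zpool status tank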
David Dyer-Bennet wrote:
On 11/26/06, Al Hopper <[EMAIL PROTECTED]> wrote:
[4] I proposed this solution to a user on the [EMAIL PROTECTED]
list - and it resolved his problem. His problem - the system would reset
after getting about 1/2 way through a Solaris install. The installer was
simply a
On 11/26/06, Al Hopper <[EMAIL PROTECTED]> wrote:
[4] I proposed this solution to a user on the [EMAIL PROTECTED]
list - and it resolved his problem. His problem - the system would reset
after getting about 1/2 way through a Solaris install. The installer was
simply acting as a good system exe
On Sat, 25 Nov 2006 [EMAIL PROTECTED] wrote:
reformatted ...
> First thing is I would like to thank everyone for their replies/help.
> This machine has been running for two years under Linux, but for last
Uh oh - possible CPU fan "fatigue" time.
First thing is I would like to thank everyone for their replies/help. This
machine has been running for two years under Linux, but for the last two or three
months has had Nexenta Solaris on it. This machine has never once crashed. I
rebooted with a Knoppix disk in and ran memtest86. Within 30 minut
[ I've seen the response where one astute list participant noticed you're
running a 2-way raidz device, when the documentation clearly states that
the minimum raidz volume consists of 3 devices ]
Not very astute. The documentation clearly states that the minimum is
2 devices.
zpool(1M):
On Sat, 18 Nov 2006 [EMAIL PROTECTED] wrote:
> I'm new to this group, so hello everyone! I am having some issues with
Welcome!
> my Nexenta system I set up about two months ago as a zfs/zraid server. I
> have two new Maxtor 500GB Sata drives and an Adaptec controller which I
> believe has a Sili
On 18-Nov-06, at 2:01 PM, Bill Moore wrote:
Hi Michael. Based on the output, there should be no user-visible file
corruption. ZFS saw a bunch of checksum errors on the disk, but was
able to recover in every instance.
While 2-disk RAID-Z is really a fancy (and slightly more expensive,
CPU-wis
Hi Michael. Based on the output, there should be no user-visible file
corruption. ZFS saw a bunch of checksum errors on the disk, but was
able to recover in every instance.
While 2-disk RAID-Z is really a fancy (and slightly more expensive,
CPU-wise) way of doing mirroring, at no point should yo
On 11/18/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
...
scrub: scrub completed with 0 errors on Mon Nov 13 04:49:35 2006
config:
NAME      STATE     READ WRITE CKSUM
amber     ONLINE       0     0     0
  raidz1  ONLINE       0     0     0
    c4d0
I'm new to this group, so hello everyone! I am having some issues with my
Nexenta system I set up about two months ago as a zfs/zraid server. I have two
new Maxtor 500GB Sata drives and an Adaptec controller which I believe has a
Silicon Image chipset. Also I have a Seasonic 80+ power supply, so