Hello,
I have a problem regarding ZFS. After installing Solaris 10 x86, it worked for a
while, and then something went wrong with Solaris and it could not be loaded!
Even failsafe mode didn't resolve the problem. I put in an OpenSolaris CD and
booted from it, and I ran the below
On Tue, Apr 13, 2010 at 7:03 AM, Harry Putnam wrote:
> Apparently you are not disagreeing with Daniel C.'s comment above, so I
> guess you are talking about disk partitions here?
I'm not disagreeing, but the use case for a server is different from that
of a laptop that can only hold one drive.
> Curre
On Apr 14, 2010, at 2:42 AM, Ragnar Sundblad wrote:
>
> On 12 apr 2010, at 19.10, Kyle McDonald wrote:
>
>> On 4/12/2010 9:10 AM, Willard Korfhage wrote:
>>> I upgraded to the latest firmware. When I rebooted the machine, the pool
>>> was back, with no errors. I was surprised.
>>>
>>> I will
These are all good reasons to switch back to letting ZFS handle it. I did put
about 600GB of data on the pool as configured with RAID 6 on the card, verified
the data, and scrubbed it a couple of times in the process, and there are no
problems, so it appears that the firmware upgrade fixed my problems.
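For anyone following along, re-verifying the pool after this sort of firmware
change is just a scrub plus a status check (pool name hypothetical):

# scrub the pool, then check for checksum errors afterwards
zpool scrub tank
zpool status -v tank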
On Apr 13, 2010, at 9:52 PM, Cyril Plisko wrote:
> Hello!
>
> I've had a laptop that crashed a number of times during the last 24 hours
> with this stack:
>
> panic[cpu0]/thread=ff0007ab0c60:
> assertion failed: ddt_object_update(ddt, ntype, nclass, dde, tx) == 0,
> file: ../../common/fs/zfs/d
On 12 apr 2010, at 19.10, Kyle McDonald wrote:
> On 4/12/2010 9:10 AM, Willard Korfhage wrote:
>> I upgraded to the latest firmware. When I rebooted the machine, the pool was
>> back, with no errors. I was surprised.
>>
>> I will work with it more, and see if it stays good. I've done a scrub, s
Yesterday, Arne Jansen wrote:
Paul Archer wrote:
Because it's easier to change what I'm doing than what my DBA does, I
decided that I would put rsync back in place, but locally. So I changed
things so that the backups go to a staging FS, and then are rsync'ed
over to another FS that I take sna
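A minimal sketch of that staging-then-snapshot arrangement, with made-up
dataset names:

# backups land in the staging FS first, get rsync'ed across, then snapshotted
rsync -a --delete /tank/staging/ /tank/backups/
zfs snapshot tank/backups@$(date +%Y-%m-%d)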
> Hi all.
>
> I'm pretty new to the whole OpenSolaris thing; I've
> been doing a bit of research but can't find anything
> on what I need.
>
> I am thinking of making myself a home file server
> running OpenSolaris with ZFS and utilizing RAID-Z.
>
> I was wondering if there is anything I can get th
I realized I forgot to follow up on this thread. Just to be clear, I
have confirmed that I am seeing what to me is undesirable behavior
even with the ARC being 1500 MB in size on an almost idle system
(<0.5 MB/s read load, almost zero write load). Observe these recursive
searches through /usr/src/s
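If anyone wants to reproduce this, the ARC counters can be sampled while the
searches run; a quick sketch against the stock kstat interface:

# sample ARC size and hit/miss counters before and after the search
kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses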
I was finally able to generate syslog messages, thanks to the clue given by timh:

cat /usr/lib/fm/fmd/plugins/syslog-msgs.conf
setprop console true
setprop facility LOG_LOCAL0    # log as facility local0
setprop syslogd true

svcadm restart fmd             # restart fmd

Note that s
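To actually capture those local0 messages, syslogd needs a matching rule; a
sketch, where the log file name is my own choice (selector and action must be
tab-separated):

# /etc/syslog.conf -- route fmd's local0 messages to a file
local0.notice	/var/log/fmd.log

touch /var/log/fmd.log
svcadm restart system-log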
Hello!
I've had a laptop that crashed a number of times during the last 24 hours
with this stack:
panic[cpu0]/thread=ff0007ab0c60:
assertion failed: ddt_object_update(ddt, ntype, nclass, dde, tx) == 0,
file: ../../common/fs/zfs/ddt.c, line: 968
ff0007ab09a0 genunix:assfail+7e ()
ff0007
Thanks for the clue.
Still not successful, but some hope is there.
Hi folks,
At home I run OpenSolaris x86 with a 4-drive RAID-Z (4x1TB) zpool, and it's not
in great shape. A fan stopped spinning, and soon after, the top disk failed
(because, you know, heat rises). Naturally, OpenSolaris and ZFS didn't skip a
beat; I didn't even notice it was dead until I saw the
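For anyone in the same spot, the standard recovery once the failed drive is
physically swapped looks like this (pool and device names hypothetical):

# confirm which disk is faulted, then replace it in the same slot
zpool status -x tank
zpool replace tank c1t3d0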
On Tue, 13 Apr 2010, Christian Molson wrote:
> Now I would like to add my 4 x 2TB drives, I get a warning message
> saying that: "Pool uses 5-way raidz and new vdev uses 4-way raidz"
> Do you think it would be safe to use the -f switch here?
It should be "safe" but chances are that your new 2TB di
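For completeness, the override itself is a one-liner (pool and device names
hypothetical):

# force-add a 4-way raidz vdev to a pool whose existing vdev is 5-way
zpool add -f tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0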
On Tue, 13 Apr 2010, Joerg Schilling wrote:
> I believe you make a mistake with this assumption.
I see that you make some mistakes with your own assumptions. :-)
> - The SSD cannot know which blocks are currently not in use.
It does know that blocks in its spare pool are not in use.
-
On Mon, 12 Apr 2010, Eric D. Mudama wrote:
> The advantage of TRIM, even in high end SSDs, is that it allows you to
> effectively have additional "considerable extra space" available to
> the device for garbage collection and wear management when not all
> sectors are in use on the device.
For most use
Hi,
(Main questions at bottom of post)
I recently discovered the joys of ZFS. I have a home file server for
backups+media, also hosting some virtual machines (over the LAN).
I was wondering if I could get some feedback as to whether I have set things up
properly.
Drives:
20 x 1TB (mix of Seagate a
On Apr 13, 2010, at 5:22 AM, Tony MacDoodle wrote:
> I was wondering if any data was lost while doing a snapshot on a running
> system?
ZFS will not lose data during a snapshot.
> Does it flush everything to disk or would some stuff be lost?
Yes, all ZFS data will be committed to disk and then
Offhand, I'd say EON
http://sites.google.com/site/eonstorage/
This is probably the best answer right now. It will be even better when they
get a web administration GUI running. Some variant of FreeNAS on FreeBSD is
also possible.
OpenSolaris is missing a good opportunity to expand its user bas
Brandon High writes:
[...]
Harry wrote:
>> So having some data on rpool (besides the OS I mean) is not
>> necessarily a bad thing then?
Daniel C answered:
>> Not at all; laptops would be screwed otherwise.
Brandon H. responded:
> The pool will resilver faster if it's got less data on it, wh
If you're concerned about someone reading the charge level of a Flash cell to
infer the value of the cell before being erased, then overwrite with random
data twice before issuing TRIM (remapping in an SSD probably makes this
ineffective).
Most people needing a secure erase feature need it to s
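A sketch of that two-pass overwrite (raw device path hypothetical; as noted
above, FTL remapping probably defeats it anyway):

# two passes of pseudo-random data over the whole raw device
dd if=/dev/urandom of=/dev/rdsk/c1t0d0p0 bs=1024k
dd if=/dev/urandom of=/dev/rdsk/c1t0d0p0 bs=1024k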
A snapshot is a picture of the storage at a point in time, so
everything depends on the applications using the storage. If you're
running a db with lots of cache, it's probably a good idea to stop the
service or force a flush to disk before taking the snapshot to ensure
the integrity of the d
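A sketch of the stop-snapshot-restart variant under SMF (service and dataset
names made up):

# quiesce the database, snapshot atomically, then resume
svcadm disable -st mydb        # -s waits until the service is fully stopped
zfs snapshot tank/db@$(date +%Y%m%d-%H%M)
svcadm enable mydb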
I was wondering if any data was lost while doing a snapshot on a running
system? Does it flush everything to disk or would some stuff be lost?
Thanks
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Daniel
>
> I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of
> research but can't find anything on what I need.
>
> I am thinking of making myself a home file server runnin
Hi all,
Since Google can be your friend, and thanks to this good article at
http://cuddletech.com/blog/pivot/entry.php?id=965 by Ben Rockwood, I
have new information, and hopefully someone might be able to see
something interesting in this.
So based on what I can understand, a thread ff001f7f3c60 runni
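To pull the stack for that thread yourself, mdb on the live kernel is enough
(the thread address is the one above):

# print the suspect thread's kernel stack
echo "ff001f7f3c60::findstack -v" | mdb -k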
"David Magda" wrote:
> Given that ZFS probably would not have to go back to "old" blocks until
> it's reached the end of the disk, that should give the SSDs' firmware
> plenty of time to do block-remapping and background erasing--something
> that's done now anyway regardless of whether an SSD sup
Bob Friesenhahn wrote:
> Yes of course. Properly built SSDs include considerable extra space
> to support wear leveling, and this same space may be used to store
> erased blocks. A block which is "overwritten" can simply be written
> to a block allocated from the extra free pool, and the exi
Bob Friesenhahn wrote:
> On Sun, 11 Apr 2010, James Van Artsdalen wrote:
>
> > OpenSolaris needs support for the TRIM command for SSDs. This
> > command is issued to an SSD to indicate that a block is no longer in
> > use and the SSD may erase it in preparation for future writes.
>
> There doe
On 13.04.2010 10:12, Ian Collins wrote:
> On 04/13/10 05:47 PM, Daniel wrote:
>> Hi all.
>>
>> I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of
>> research but can't find anything on what I need.
>>
>> I am thinking of making myself a home file server running OpenSolaris
>> wit
Hi all,
Recently one of the servers, a Dell R710, attached to 2 J4400s, started
to crash quite often.
Finally I got a message in /var/adm/messages that might point to
something useful, but I don't have the expertise to start
troubleshooting this problem, so any help would be highly valuable.
B
Hi All,
I'm researching diskless boot over iSCSI. I want to add gPXE and dhcpd
to EON. Could you help me?
Thanks and regards
Tien Doan
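Not EON-specific, but the dhcpd half of gPXE iSCSI boot generally comes down
to handing the client a root-path; a sketch with placeholder MAC, addresses,
and IQN:

# ISC dhcpd.conf fragment -- point the gPXE client at its iSCSI root
host eon-client {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.1.50;
  option root-path "iscsi:192.168.1.10::::iqn.2010-04.org.example:eon-root";
}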
>> * Should I be able to import a degraded pool?
> In general, yes. But it is complaining about corrupted data, which can
> be due to another failure.
Any suggestions on how to discover what that failure might be?
>> * If not, shouldn't there be a warning when exporting a degraded pool?
> What sh
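One way to dig for the underlying failure (pool name hypothetical):

# see what the import code reports per device, then walk the metadata
zpool import             # lists importable pools and per-device state
zdb -e tank              # examine the exported pool for corruption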
On 04/13/10 05:47 PM, Daniel wrote:
Hi all.
I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of
research but can't find anything on what I need.
I am thinking of making myself a home file server running OpenSolaris
with ZFS and utilizing RAID-Z.
I was wondering if there is a
Now is probably a good time to mention that dedupe likes LOTS of RAM,
based on experiences described here. 8 GiB minimum is a good start. And
to avoid those obscenely long removal times due to updating the DDT, an
SSD-based L2ARC device seems to be highly recommended as well.
That is, of course,
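Both halves of that advice are cheap to act on (device name hypothetical):

# give the DDT an SSD cache to live in, and see how big it really is
zpool add tank cache c4t0d0
zdb -DD tank             # prints dedup table statistics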
Oops, I meant SHA256. My mind just maps SHA->SHA1, totally forgetting that ZFS
actually uses SHA256 (a SHA-2 variant).
More on ZFS dedup, checksums and collisions:
http://blogs.sun.com/bonwick/entry/zfs_dedup
http://www.c0t0d0s0.org/archives/6349-Perceived-Risk.html
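For reference, that checksum is what dedup keys on; the paranoid can also ask
for byte-for-byte verification when hashes match (pool name hypothetical):

zfs set dedup=sha256 tank
zfs set dedup=sha256,verify tank   # verify blocks whose hashes collide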
Hello all, I want to add a dual-port FC HBA (QLogic 2462) to my
OpenSolaris snv_133 box.
My purpose is to set one port to initiator mode and the other to target mode.
#luxadm -e port
/device/pci...@... connected
/device/pci...0,1...@.. connected
I know it is in initiator mode.
Then I update_drv -
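Assuming the goal is the usual COMSTAR rebinding, the next step is typically
to unbind qlc and bind the qlt target driver by PCI alias; the alias below is
a guess for a 2462 (check prtconf -pv for the real one), and note that it
rebinds every port that matches:

# unbind the initiator driver, bind the COMSTAR target driver, then reboot
update_drv -d -i '"pciex1077,2432"' qlc
update_drv -a -i '"pciex1077,2432"' qlt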
> Did you try with -f? I doubt it will help.
Yep, no luck with -f, -F or -fF.
> > * If I replace the 1TB dead disk with a blank disk, might
> > the import work?
>
> Only if the import is failing because the dead disk
> is nonresponsive in a way that makes the import hang.
> Otherwise, you'd import the