> Because then I have to compute yesterday's date to do the
> incremental dump.
snaps=15                        # number of daily snapshots to keep
today=`date +%j`                # day of year, zero-padded (001-366)
# strip the leading zeros from the day-of-year (e.g. 002 becomes 2)
today=`expr $today + 0`
nuke=`expr $today - $snaps`     # oldest snapshot, due for removal
yesterday=`expr $today - 1`
if [ $yesterday -lt 1 ] ; then
        yesterday=365           # wrap around at the start of the year (ignores leap years)
fi
if [
Darren J Moffat wrote:
> I know this isn't answering the question but rather than using "today"
> and "yesterday" why not just use dates?
Because then I have to compute yesterday's date to do the incremental dump.
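A minimal sketch of the date-based approach Darren suggests, assuming GNU
date is available (gdate on some Solaris installs, or any Linux box); stock
Solaris date(1) has no -d option, so this is illustrative only. The pool
name "tank" and the backup destination are placeholders.
today=`date +%Y-%m-%d`                 # e.g. 2008-03-06
yesterday=`date -d yesterday +%Y-%m-%d`
zfs snapshot tank@$today
zfs send -i tank@$yesterday tank@$today > /backup/tank.$today.incr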
I don't suppose I can create symlinks to snapshots in order to give them
mult
Stuart Anderson wrote:
> On Thu, Mar 06, 2008 at 11:51:00AM -0800, Stuart Anderson wrote:
>
>> I currently have an X4500 running S10U4 with the latest ZFS uber patch
>> (127729-07) for which "zpool scrub" is making very slow progress even
>> though the necessary resources are apparently available.
On Thu, Mar 06, 2008 at 05:55:53PM -0800, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
> > It is also interesting to note that this system is now making negative
> > progress. I can understand the remaining time estimate going up with time,
> > but what does it mean for the % complete number to go down after 6 hours of
> > work?
[EMAIL PROTECTED] said:
> It is also interesting to note that this system is now making negative
> progress. I can understand the remaining time estimate going up with time,
> but what does it mean for the % complete number to go down after 6 hours of
> work?
Sorry I don't have any helpful experi
> I have 4x500G disks in a RAIDZ. I'd like to repurpose one of them
> SYS1 124G 1.21T 29.9K /SYS1
This seems to be a simple task because RAID5/Z runs just fine when it is
missing one disk. Just format one disk any way that works (take the array
offline and do it with format or zpool, or boot in
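A rough sketch of that idea, using the pool name from the listing above and
hypothetical device names; check "zpool status" for the real layout before
pulling anything, since this leaves the pool with no redundancy:
zpool status SYS1                 # confirm the raidz layout, pick a disk, e.g. c1t3d0
zpool offline SYS1 c1t3d0         # pool keeps running, now degraded
# ... repurpose c1t3d0 however you like (format, another pool, etc.) ...
# later, to restore redundancy with a replacement disk:
zpool replace SYS1 c1t3d0 c1t4d0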
> 1. In zfs can you currently add more disks to an existing raidz? This is
> important to me as i slowly add disks to my system one at a time.
No, but Solaris and Linux RAID5 can do this (on Linux, grow with mdadm).
> 2. in a raidz do all the disks have to be the same size?
I think this one has
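For what it's worth, a sketch of the usual workaround for question 1 and the
size rule for question 2, with hypothetical pool and device names:
# 1. You cannot grow an existing raidz vdev, but you can add a whole new
#    top-level vdev to the pool; the pool then stripes across both vdevs:
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
# 2. Disks in a raidz may differ in size, but each member only contributes
#    as much space as the smallest disk in that vdev.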
Paul -
Don't substitute redundancy for backup...
if your data is important to you, for the love of steak, make sure you
have a backup that would not be destroyed by, say, a lightning strike,
fire or stray 747.
For what it's worth, I'm also using ZFS on 32 bit and am yet to
experience any sor
Brian D. Horn wrote:
> Take a look at CR 6634371. It's worse than you probably thought.
>
Actually, almost all of the problems noted in that bug are statistics-related.
- Bart
--
Bart Smaalders Solaris Kernel Performance
[EMAIL PROTECTED] http://blogs.sun.com/bart
On Thu, Mar 06, 2008 at 11:51:00AM -0800, Stuart Anderson wrote:
> I currently have an X4500 running S10U4 with the latest ZFS uber patch
> (127729-07) for which "zpool scrub" is making very slow progress even
> though the necessary resources are apparently available. Currently it has
It is also i
I currently have an X4500 running S10U4 with the latest ZFS uber patch
(127729-07) for which "zpool scrub" is making very slow progress even
though the necessary resources are apparently available. Currently it has
been running for 3 days to reach 75% completion, however, in the last 12
hours this
Insufficient data.
How big is the pool? How much stored?
Are the external drives all on the same USB bus?
I am switching to eSATA for my next external drive setup as both USB 2.0 and
FireWire are just too fricking slow for the large drives these days.
Take a look at CR 6634371. It's worse than you probably thought.
On Thu, Mar 6, 2008 at 10:22 AM, Brian D. Horn <[EMAIL PROTECTED]> wrote:
> ZFS is not 32-bit safe. There are a number of places in the ZFS code where
> it is assumed that a 64-bit data object is being read atomically (or set
> atomically). It simply isn't true and can lead to weird bugs.
>ZFS is not 32-bit safe. There are a number of places in the ZFS code where
>it is assumed that a 64-bit data object is being read atomically (or set
>atomically). It simply isn't true and can lead to weird bugs.
Where do you get that information?
(First I've heard of it and I have a hard
> ZFS is not 32-bit safe.
while this is kinda true, if the system has 2 GB or less of RAM
it shouldn't be an issue other than poor performance from lack of
ARC.
Rob
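If you are unsure which kernel you are actually running (and how much RAM it
sees), two quick checks on Solaris:
isainfo -kv              # reports whether the running kernel is 32-bit or 64-bit
prtconf | grep Memory    # total physical memory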
Brian D. Horn wrote:
> ZFS is not 32-bit safe. There are a number of places in the ZFS code where
> it is assumed that a 64-bit data object is being read atomically (or set
> atomically). It simply isn't true and can lead to weird bugs.
Bug numbers please.
--
Darren J Moffat
ZFS is not 32-bit safe. There are a number of places in the ZFS code where
it is assumed that a 64-bit data object is being read atomically (or set
atomically). It simply isn't true and can lead to weird bugs.
On 06 March, 2008 - Justin Vassallo sent me these 12K bytes:
> Hello,
>
> I ran a zpool scrub on two zpools: one located on internal SAS drives, the
> second on external USB SATA drives.
>
> The internal pool finished scrubbing in no time, while the external pool is
> taking incredibly long.
2008/3/6, Brian Hechinger <[EMAIL PROTECTED]>:
> On Thu, Mar 06, 2008 at 11:39:25AM +0100, [EMAIL PROTECTED] wrote:
> >
> > I think it's specifically problematic on 32-bit systems with large amounts
> > of RAM. Then you run out of virtual address space in the kernel quickly;
> > a small amount of RAM (I have one with 512MB) works fine.
On Thu, Mar 06, 2008 at 11:39:25AM +0100, [EMAIL PROTECTED] wrote:
>
> I think it's specifically problematic on 32-bit systems with large amounts
> of RAM. Then you run out of virtual address space in the kernel quickly;
> a small amount of RAM (I have one with 512MB) works fine.
I have a 32-bit
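On the kernel address-space point above, a commonly cited workaround (not
from this thread) for 32-bit or otherwise memory-constrained systems is to
cap the ARC in /etc/system and reboot; the value below is purely an example,
not a recommendation:
echo 'set zfs:zfs_arc_max = 0x20000000' >> /etc/system    # cap the ARC at 512 MB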
Hello,
I ran a zpool scrub on two zpools: one located on internal SAS drives, the
second on external USB SATA drives.
The internal pool finished scrubbing in no time, while the external pool is
taking incredibly long.
Typical data transfer rate to this external pool is 80MB/s.
Any he
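A few things worth capturing while the slow scrub is running, to see how far
it has got and what the USB pool actually sustains ("extpool" is a
placeholder for the external pool's name):
zpool status extpool          # shows "scrub in progress", percent done and errors
zpool iostat -v extpool 5     # per-vdev read/write bandwidth, sampled every 5 seconds
iostat -xn 5                  # per-device service times and %busy on the USB side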
>Ben wrote:
>> Hi,
>>
>> I know that it is not recommended by Sun
>> to use ZFS on 32-bit machines but,
>> what really are the consequences of doing this?
>
>Depends on what kind of performance you need.
>
>> I have an old dual-processor Xeon server (6 GB RAM, 6 disks),
>> and I would like to do a raidz
Hi,
MPxIO is basically a failover protocol. The way MPxIO will handle a storage
device is listed in /kernel/drv/scsi_vhci.conf.
If you have multiple paths to your device over Fibre Channel, or over IP for iSCSI, and
MPxIO is not configured correctly, is disabled, or is turned off on your storage array,
then at t
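Two quick ways to see whether MPxIO is actually managing the paths on
Solaris 10 (nothing array-specific is assumed here; the right
scsi_vhci.conf entries depend on the array):
stmsboot -L        # list devices whose names changed when MPxIO was enabled
mpathadm list lu   # show each multipathed logical unit and its path count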
Ben wrote:
> Hi,
>
> I know that it is not recommended by Sun
> to use ZFS on 32-bit machines but,
> what really are the consequences of doing this?
Depends on what kind of performance you need.
> I have an old dual-processor Xeon server (6 GB RAM, 6 disks),
> and I would like to do a raidz with 4 disks
Bill Shannon wrote:
> If I do something like this:
>
> zfs snapshot [EMAIL PROTECTED]
> zfs send [EMAIL PROTECTED] > tank.backup
> sleep 86400
> zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
> zfs snapshot [EMAIL PROTECTED]
> zfs send -I [EMAIL PROTECTED] [EMAIL PROTECTED] > tank.incr
>
> Am I g
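For completeness, a sketch of the receive side for the two streams in the
quoted sequence, with a hypothetical destination pool/filesystem
("backup/tank"); the full stream must be received before the incremental:
zfs receive backup/tank < tank.backup    # restore the full snapshot stream
zfs receive backup/tank < tank.incr      # then apply the incremental on top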