>> Hello
>> I'm proposing a solution based on our disks where replication and
>> storage management are done using only ZFS...
>> The test changes a few bytes in one file (10 bytes) and checks how many
>> bytes the source sends to the target.
>> The customer tried the replication between 2 volumes.
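A minimal sketch of such a test (pool, dataset, and file names here are
hypothetical):

    # take a baseline snapshot, overwrite 10 bytes in place, snapshot again
    zfs snapshot tank/data@before
    dd if=/dev/urandom of=/tank/data/testfile bs=1 count=10 conv=notrunc
    zfs snapshot tank/data@after

    # count the bytes in the incremental stream the source would send
    zfs send -i tank/data@before tank/data@after | wc -c

Expect the stream to be much larger than 10 bytes: ZFS sends whole modified
records plus stream headers, not individual changed bytes.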
It's simply a shell grokking issue: when you allow your (l)users to name
your files themselves, you will end up with spaces etc. in the filenames
(which breaks shell arguments). In this case the '[E]' is breaking your
command-line argument grokking. We have the same issue in our photos tree.
We have to use non
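The usual defenses, sketched for a hypothetical /photos tree (-print0/-0 are
GNU find/xargs extensions):

    # broken: unquoted expansion word-splits on spaces and re-globs '[E]'
    for f in $(ls /photos); do du -h $f; done

    # safer: glob directly and quote every expansion
    for f in /photos/*; do du -h "$f"; done

    # safest for arbitrary names: NUL-delimited pipelines
    find /photos -type f -print0 | xargs -0 du -h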
In case you're still interested, I did do a firmware build based on 3.2.7 that:
1) Allows 14 volumes (aka RAID groups) to be defined per tray (I had to limit
the tray count to 2 to avoid gobs of restructuring...)
BTW, this requires you to wipe the disk labels and you lose everything...but I
figure
Ok... So I was wrong. I was informed I had this backwards. It seems that this
NFS4.1 mirror mounts thing is really only nice for getting rid of a lot of
automount maps. You still have to share each filesystem :-( I hate it when I
think there is hope just to have it taken away. Sigh...
T
On Mon, 28 Jan 2008, Chris wrote:
> I did a little bit more digging and found some interesting things. NFS4
> Mirror mounts. This would seem to be the most logical option. In this
> scenario the client would connect to a single mount /tank/users but would be
> able to move through the individual user file systems underneath that mount.
You weren't dreaming, that's been a request for quite some time, and is
being worked on. I couldn't tell you the status though.
On 1/28/08, Tim Thomas <[EMAIL PROTECTED]> wrote:
>
> Hi
>
> my understanding is that you cannot remove a disk from a ZFS storage pool
> once you have added it...but I also think I saw an email from Jeff B saying
> that the ability to depopulate a disk so that it can be removed is being
> worked on...or was I dreaming?
I did a little bit more digging and found some interesting things. NFS4 Mirror
mounts. This would seem to be the most logical option. In this scenario the
client would connect to a single mount /tank/users but would be able to move
through the individual user file systems underneath that mount.
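A minimal sketch of that scenario, assuming mirror-mount support on both ends
and a hypothetical server named server:

    # server side: sharenfs is inherited, so one property covers every child
    zfs set sharenfs=on tank/users
    zfs create tank/users/alice    # inherits sharenfs=on

    # client side: a single NFSv4 mount at the top of the tree...
    mount -F nfs -o vers=4 server:/tank/users /tank/users

    # ...and crossing into a child filesystem triggers a mirror mount
    ls /tank/users/alice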
[EMAIL PROTECTED] wrote on 01/28/2008 09:11:53 AM:
> I too am having the same issues. I started out using the Solaris 10
> 8/07 release. I could create all the filesystems, 47,000
> filesystems, but if you needed to reboot, patch, or shut down, it was
> very bad. So then I read about sharemgr and how it was supposed to
> mitigate these issues.
I too am having the same issues. I started out using the Solaris 10 8/07
release. I could create all the filesystems, 47,000 filesystems, but if you
needed to reboot, patch, or shut down, it was very bad. So then I read about
sharemgr and how it was supposed to mitigate these issues. Well, after running
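For reference, the sharemgr approach looks roughly like this (group name and
paths are hypothetical; see sharemgr(1M) for the exact flags):

    # create a persistent NFS share group and add filesystems to it
    sharemgr create -P nfs users
    sharemgr add-share -s /tank/users/alice users
    sharemgr add-share -s /tank/users/bob users

    # inspect the group and its shares
    sharemgr show -vp users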
Hi
my understanding is that you cannot remove a disk from a ZFS storage
pool once you have added it...but I also think I saw an email from Jeff
B saying that the ability to depopulate a disk so that it can be
removed is being worked on...or was I dreaming?
What is the status
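For what it's worth, the distinction on releases of that era (device names
hypothetical): detaching one side of a mirror works, but evacuating and
removing a top-level data vdev is exactly the missing feature.

    # works: detach one side of a mirror
    zpool detach tank c1t3d0

    # fails: only hot spares and cache devices can be removed
    zpool remove tank c1t4d0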
Hello Christopher,
Friday, January 25, 2008, 5:49:14 PM, you wrote:
CG> Robert Milkowski wrote:
>> Hello Christopher,
>>
>> Friday, January 25, 2008, 5:37:58 AM, you wrote:
>>
>> CG> michael schuster wrote:
I assume you've ensured that there's enough space in /pond ...
can you try
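For example (assuming /pond is a pool of the same name):

    # verify free space at both the pool and dataset level
    zpool list pond
    zfs list -o name,used,avail,refer pond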
Thanks for all the information.
We'll try to escalate the issue :)