Chill. It's a filesystem. If you don't like it, don't use it.
Sincere Regards,
-Tim
can you guess? wrote:
>> can you guess? wrote:
>
> ...
>
>>> Most of the balance of your post isn't addressed in
>>> any detail because it carefully avoids the
>>> fundamental issues that I raised:
> can you guess? wrote:
>
> ...
>
>>> Most of the balance of your post isn't addressed in
>>> any detail because it carefully avoids the
>>> fundamental issues that I raised:
>>
>> Not true; and by selective quoting you have removed
>> my specific responses to most of these issues.
can you guess? wrote:
...
>> Most of the balance of your post isn't addressed in
>> any detail because it carefully avoids the
>> fundamental issues that I raised:
>
> Not true; and by selective quoting you have removed
> my specific responses to most of these issues.

While I'm natur
Mattias Pantzare wrote:
> As the fsid is created when the file system is created it will be the
> same when you mount it on a different NFS server. Why change it?
>
> Or are you trying to match two different file systems? Then you also
> have to match all inode-numbers on your files. That is not
can you guess? wrote:
>> can you guess? wrote:
>>
>>> This is a bit weird: I just wrote the following
>>> response to a dd-b post that now seems to have
>>> disappeared from the thread. Just in case that's a
>>> temporary aberration, I'll submit it anyway as a new
>>> post.
On Nov 10, 2007, at 3:49 PM, Mattias Pantzare wrote:
> 2007/11/10, asa <[EMAIL PROTECTED]>:
>> Hello all. I am working on an NFS failover scenario between two
>> servers. I am getting the stale file handle errors on my (linux)
> client which point to there being a mismatch in the fsid's of my two
> filesystems when the failover occurs.
> can you guess? wrote:
>> This is a bit weird: I just wrote the following
>> response to a dd-b post that now seems to have
>> disappeared from the thread. Just in case that's a
>> temporary aberration, I'll submit it anyway as a new
>> post.
>
> Strange things certainly happen here now and then.
can you guess? wrote:
>> I have to comment here. As a bloke with a bit of a
>> photography habit - I have a 10Mpx camera and I shoot
>> in RAW mode - it is very, very easy to acquire 1 TB of
>> image files in short order.
>
> So please respond to the question that I raised above (an
can you guess? wrote:
> This is a bit weird: I just wrote the following response to a dd-b post that
> now seems to have disappeared from the thread. Just in case that's a
> temporary aberration, I'll submit it anyway as a new post.
>
Strange things certainly happen here now and then.
The p
> can you guess? wrote:
...
>> If you include 'image files of various sorts', as he
>> did (though this also raises the question of whether
>> we're still talking about 'consumers'), then you also
>> have to specify exactly how damaging single-bit errors
>> are to those various 'sorts' (on
I used the Asus P5K WS motherboard with 1 PCI-X slot and an Intel
E2140 CPU (Core 2 Duo, 1.6 GHz, 64 bits, < 45W). It works fine. With
8 x 500 GB drives in a raidz2 array, I'm getting ~160 MB/sec writing
and 280 MB/sec reading.
See
http://scottstuff.net/blog/articles/2007/10/20/notes-from-insta
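For context on what those figures imply (a back-of-the-envelope sketch, not from the original post): an 8-drive raidz2 dedicates two drives' worth of space to parity, so usable capacity and per-data-drive write rate work out roughly as:

```python
# Rough raidz2 arithmetic for the setup above (8 x 500 GB, raidz2).
drives = 8
parity = 2                      # raidz2 tolerates two drive failures
size_gb = 500
data_drives = drives - parity   # 6 drives' worth of space holds data
usable_gb = data_drives * size_gb
write_mb_s = 160                # figure reported above
per_drive = write_mb_s / data_drives
print(usable_gb)                # 3000 (GB, before filesystem overhead)
print(round(per_drive, 1))      # 26.7 (MB/s per data drive)
```

That ~27 MB/s per data drive is well under what a 2007-era SATA disk can stream, which fits the poster's observation that the setup "works fine" even on a modest CPU.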
This is a bit weird: I just wrote the following response to a dd-b post that
now seems to have disappeared from the thread. Just in case that's a temporary
aberration, I'll submit it anyway as a new post.
> can you guess? wrote:
>> Ah - thanks to both of you. My own knowledge of
>> video format
2007/11/10, asa <[EMAIL PROTECTED]>:
> Hello all. I am working on an NFS failover scenario between two
> servers. I am getting the stale file handle errors on my (linux)
> client which point to there being a mismatch in the fsid's of my two
> filesystems when the failover occurs.
> I understand th
Hello all. I am working on an NFS failover scenario between two
servers. I am getting the stale file handle errors on my (linux)
client which point to there being a mismatch in the fsid's of my two
filesystems when the failover occurs.
I understand that the fsid_guid attribute which is then
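One common approach on Linux NFS servers (a general illustration, not something from this thread; the export path and fsid value here are made up) is to pin the same fsid on both failover heads in /etc/exports, so the file handle the client holds stays valid after failover:

```
# /etc/exports on BOTH failover servers (hypothetical path and fsid;
# any small integer works as long as it matches on both heads)
/export/data  *(rw,sync,fsid=17)
```

Run `exportfs -ra` after editing for the change to take effect. On the Solaris side, as noted elsewhere in the thread, the fsid is derived from the filesystem itself at creation time rather than set per-export.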
I would recommend the 64-bit system, but make sure your controller card
will work in it, first. The bottleneck will most likely be the incoming
network connection (~100 MB/s for gigabit) in any case. Assuming, of course, that you
have more than one disk. With the 64-bit system, you'll run into fewer
issues in
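The ~100 MB/s figure is consistent with gigabit Ethernet arithmetic (a rough sketch; the 20% protocol/framing overhead is an assumed round number, not a measured value):

```python
# Gigabit Ethernet: 1000 Mb/s line rate, 8 bits per byte.
line_mbit = 1000
raw_mb_s = line_mbit / 8          # 125 MB/s theoretical ceiling
overhead = 0.20                   # assumed TCP/IP + framing overhead
print(raw_mb_s * (1 - overhead))  # 100.0 (MB/s, the figure quoted above)
```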
Hi all,
I am currently planning a new home file server on a gigabit network that will
be utilizing ZFS (on SXDE). The files will be shared via samba as I have a
mixed OS environment. The controller card I will be using is the SuperMicro
SAT2-MV8 133MHz PCI-X card. I have two options for CPUs
Hey Bill:
what's an object here? or do we have a mapping between "objects" and
block pointers?
for example a zdb -bb might show:
th37 # zdb -bb rz-7
Traversing all blocks to verify nothing leaked ...
No leaks (block sum matches space maps exactly)
bp count: 47
> So I see no reason to change my suggestion that consumers just won't notice
> the level of increased reliability that ZFS offers in this area: not only
> would the difference be nearly invisible even if the systems they ran on were
> otherwise perfect, but in the real world consumers have oth
can you guess? wrote:
> Ah - thanks to both of you. My own knowledge of video format internals is so
> limited that I assumed most people here would be at least equally familiar
> with the notion that a flipped bit or two in a video would hardly qualify as
> any kind of disaster (or often even
On 9-Nov-07, at 3:23 PM, Scott Laird wrote:
> Most video formats are designed to handle errors--they'll drop a frame
> or two, but they'll resync quickly. So, depending on the size of the
> error, there may be a visible glitch, but it'll keep working.
>
> Interestingly enough, this applies to a
On 9-Nov-07, at 2:45 AM, can you guess? wrote:
>> Au contraire: I estimate its worth quite
>> accurately from the undetected error rates reported
>> in the CERN "Data Integrity" paper published last
>> April (first hit if you Google 'cern "data
>> integrity"').
While I have yet to see
Hi *,
It would seem that no matter how large the bag, you can put an unlimited
amount of stuff into it. Does this work in Nevada (I don't have a system
to test on at the moment)?
I know there is an old bug for this kind of thing but since it works in
SCU3 plus (nearly current) patches I think it
Hi *,
Missed it by that '' much... or something to brighten everyone's
Friday. This is just a heads up since I suspect others will run
into it.
bob
Consider the possibilities of a device named star.
I added devices from the global zone to the bug zone in the usual way and
rebooted the zone:
can you guess? <[EMAIL PROTECTED]> wrote:
>>> In case of a filesystem, I do not see why the
>>> filesystem could be a derived work from e.g. Linux.
>>
>> Indeed not, however AIUI the FSF do.
>
> My impression is that GPFS on Linux was (and may still be) provided as a
> binary proprietary