Due to a fsck file system repair I lost the content of a file
I consider important, but it hasn't been backed up yet. The
file name is still present, but no blocks are associated
(file size is zero). I hope the data blocks (which are now
probably marked "unused") are still intact, so I thought
I'd
On 4 Mar 2013, at 01:36, Polytropon wrote:
> Due to a fsck file system repair I lost the content of a file
> I consider important, but it hasn't been backed up yet. The
> file name is still present, but no blocks are associated
> (file size is zero). I hope the data blocks (which are now
> proba
On Mon, 4 Mar 2013 10:09:50 +0100, Damien Fleuriot wrote:
> Hey that's actually a pretty creative way of doing things ;)
It could be more optimal. :-)
My thought is that I could use a better bs= value to make
the whole thing run faster. I understand that for every
unit, a subprocess dd | grep is
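The per-unit dd | grep scan being discussed can be sketched roughly as
below. Everything here is illustrative, not from the thread: the image
name stands in for the raw device (e.g. /dev/ada0s1a), the pattern is a
made-up phrase from the lost file, and bs=1048576 is written out in
bytes because BSD dd spells 1 MB as "1m" while GNU dd wants "1M". The
first few lines only build a small demo "disk" so the loop has
something to scan.

```shell
# Demo scaffolding: a 4 MB stand-in image with the phrase buried in
# unit 2 (on a real system DEV would be the raw device).
DEV=disk.img
PATTERN="known phrase"
dd if=/dev/zero of="$DEV" bs=1048576 count=4 2>/dev/null
printf '%s' "$PATTERN" | dd of="$DEV" bs=1 seek=2500000 conv=notrunc 2>/dev/null

# The scan itself: one dd | grep per 1 MB unit; grep -q only reports
# via its exit status, so a hit prints the unit number.
SIZE=$(wc -c < "$DEV")
UNITS=$((SIZE / 1048576))
i=0
while [ "$i" -lt "$UNITS" ]; do
    if dd if="$DEV" bs=1048576 skip="$i" count=1 2>/dev/null | grep -q "$PATTERN"; then
        echo "match in unit $i"
    fi
    i=$((i + 1))
done
```

With the demo image above this prints "match in unit 2"; the loop
spawns one dd and one grep per unit, which is exactly the subprocess
cost the thread is worrying about.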
On 3/3/2013 6:36 PM, Polytropon wrote:
Due to a fsck file system repair I lost the content of a file
I consider important, but it hasn't been backed up yet. The
file name is still present, but no blocks are associated
(file size is zero). I hope the data blocks (which are now
probably marked "unu
On Mon, 04 Mar 2013 04:15:48 -0600, Joshua Isom wrote:
> I'd call bs= essential for speed. Any copying will be faster with
> something higher.
I thought about that. Narrowing down _whether_ something has
been found is easy, e. g. when the positive 1 MB unit is dd'ed
to a file, further work can easily be a
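A hedged sketch of that follow-up step (image name, unit number, and
offsets are invented for the demo): once unit N has tested positive,
copy just that one unit out to a file, then let grep -a -b -o report
the exact byte offset of the hit inside it.

```shell
# Demo scaffolding: a stand-in image with the phrase in unit 2.
IMG=scan.img
dd if=/dev/zero of="$IMG" bs=1048576 count=4 2>/dev/null
printf 'known phrase' | dd of="$IMG" bs=1 seek=2500000 conv=notrunc 2>/dev/null

# Extract the positive unit, then locate the hit within it:
# -a treats the binary data as text, -b prints the byte offset,
# -o prints only the matched string.
N=2
dd if="$IMG" bs=1048576 skip="$N" count=1 of=unit.bin 2>/dev/null
grep -a -b -o 'known phrase' unit.bin   # prints 402848:known phrase
```

Absolute position on the device is then N * 1048576 plus the reported
offset, which narrows further carving down to a few kilobytes.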
On Mon, 4 Mar 2013 12:15:24 +0100
Polytropon wrote:
> But I don't know how to do this. From reading "man dd"
> my impression (consistent with my experience) is that
> the option skip= operates in units of bs= size, so I'm
> not sure how to compose a command that reads units of
> 1 MB, but skips i
On Mon, 4 Mar 2013 11:29:00 +, Steve O'Hara-Smith wrote:
> On Mon, 4 Mar 2013 12:15:24 +0100
> Polytropon wrote:
>
> > But I don't know how to do this. From reading "man dd"
> > my impression (consistent with my experience) is that
> > the option skip= operates in units of bs= size, so I'm
>
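On the skip= question: both skip= and count= are measured in input
blocks (ibs=, which defaults to bs=), so an odd starting offset can be
expressed with a small ibs while count= is scaled to still cover a full
1 MB window. This is a sketch under invented names and offsets, not a
command from the thread.

```shell
# Demo scaffolding: a 3 MB image with a phrase placed 100 bytes past
# byte offset 1263616 (= 2468 * 512).
IMG=offset.img
dd if=/dev/zero of="$IMG" bs=1048576 count=3 2>/dev/null
printf 'known phrase' | dd of="$IMG" bs=1 seek=1263716 conv=notrunc 2>/dev/null

# Read exactly 1 MiB starting 2468 512-byte blocks into the image:
# skip= counts ibs-sized blocks, count=2048 * 512 bytes = 1 MiB.
dd if="$IMG" ibs=512 skip=2468 count=2048 of=window.bin 2>/dev/null
grep -a -b -o 'known phrase' window.bin   # prints 100:known phrase
```

The trade-off is that ibs=512 means many small reads; on a regular
file or raw device that is still far faster than bs=1, and the skip
granularity drops from 1 MB to one sector.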
Hi Polytropon & cc questions@
> Any suggestion is welcome!
Ideas:
A themed list: freebsd...@freebsd.org
There are a bunch of fs tools in /usr/ports/sysutils/
My http://www.berklix.com/~jhs/src/bsd/jhs/bin/public/slice/
slices large images such as tapes & disks
(also the slice names would give n
On Mon, Mar 4, 2013 at 1:36 AM, Polytropon wrote:
> Any suggestion is welcome!
How about crawling the metadata, locating each block
that is already allocated, and skipping those blocks when you
scan the disk? That could reduce the search space
significantly. blkls(1) et al. from the Sleuth Kit are
On Mon, 4 Mar 2013, Polytropon wrote:
The file size of the file I'm searching for is less than 10 kB.
It's a relatively small text file which received some
additions over the last few days, but hasn't been part of
the backup job yet.
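Since the target is a small text file surrounded by NUL-filled free
space, one hypothetical carving step needs only base tools: find the
byte offset of a remembered phrase, dd out a window a bit larger than
the file, and strip the padding with tr. All names, phrases, and
offsets below are invented for the demo.

```shell
# Demo scaffolding: 64 kB of zeros with a short text planted at 30000.
IMG=carve.img
dd if=/dev/zero of="$IMG" bs=1024 count=64 2>/dev/null
printf 'dear diary, recovered text' | dd of="$IMG" bs=1 seek=30000 conv=notrunc 2>/dev/null

# Locate the phrase, carve a 10 kB window from there (the file is
# known to be under 10 kB), and drop the NUL padding.
OFFSET=$(grep -a -b -o 'dear diary' "$IMG" | cut -d: -f1)
dd if="$IMG" bs=1 skip="$OFFSET" count=10240 2>/dev/null | tr -d '\000' > recovered.txt
cat recovered.txt   # prints: dear diary, recovered text
```

bs=1 is slow in general, but for a single 10 kB window it is
negligible, and it keeps the skip exact to the byte.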
There have been some good suggestions. I would use a large