Re: How long will this take?

2020-07-01 Thread David Wright
On Fri 26 Jun 2020 at 19:50:21 (-0700), David Christensen wrote:
> On 2020-06-26 18:25, David Wright wrote:
> > On Fri 26 Jun 2020 at 15:06:31 (-0700), David Christensen wrote:
> > > On 2020-06-26 06:07, David Wright wrote:
> 
> > > > On this slow machine with an oldish PATA disk,
> > > > I can get about 75% speed from urandom, 15MB/s vs 20MB/s on a 29GiB
> > > > partition (no encryption). There's a noticeable slowdown because,
> > > > I presume, the machine runs a bit short of entropy after a while.
> > > 
> > > I think you are noticing a slowdown when the Linux write buffer fills.
> > 
> > I'm not sure where these write buffers might be hiding: the
> > 2000-vintage PC has 512MB memory, and the same size swap partition,
> > though the latter is on a disk constructed one month earlier than the
> > target disk (Feb/Mar 2008). The target disk has 8MB of cache.
> > With a leisurely determination of dd's PID, my first USR1 poke
> > occurred no earlier than after 4GB of copying, over three minutes in.
> 
> I seem to recall that most of my EIDE interfaces and drives were 100
> MB/s.  (A few were 133 MB/s.)  So, bulk reads or writes can completely
> use an 8 MB cache in a fraction of a second.

This is IDE. The buses run at 100MHz, but I don't know where the
bottlenecks are. The idea was only to compare writing zeros and
random data. The machine, a 650MHz SE440BX-2 (Seattle 2), was selected
on the basis that it's presently housing a secondary drive with two
spare "root filesystem partitions" (which was in the Dell Optiplex
that died last month). It was doing nothing but running two ssh
sessions, one for dd and one for kill.

> top(1) reports memory statistics on line 4.  I believe "buff/cache" is
> the amount of memory being used for I/O write buffering and read
> caching.  Line 5 has statistics for swap.  I do not know if memory write
> buffer / read cache usage interacts with swap usage, but it would not
> surprise me.  top(1) should be able to show you.
> 
> Perhaps I misinterpreted your "slowdown" statement.  I assumed you ran
> a command similar to:
> 
> # dd if=/dev/urandom of=/dev/sdxn bs=1M status=progress

Close: I was running within a script command, so I just poked
the dd occasionally with   kill -USR1   to record its progress
in the typescript file.
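
(For anyone repeating this, the poke can be automated; a minimal sketch,
assuming a single dd is the only one running and that pgrep is available:

# while sleep 60; do kill -USR1 "$(pgrep -x dd)"; done

Each SIGUSR1 makes GNU dd print its byte count and average rate to stderr,
which ends up in the typescript.)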

> dd(1) is copying PRN data from the CPU to the kernel write buffer (in
> memory) and the kernel input/output stack is copying from the write
> buffer to the HDD (likely via direct memory access, DMA).  The
> 'status=progress' option will cause dd(1) to display the rate at which
> the write buffer is being filled.  I am not sure how to monitor the
> rate at which the write buffer is being drained.  Assuming the write
> buffer is initially empty, the filling process is "fast", and the
> draining process is "slow" when the above command is started, dd(1)
> should show fast throughput until the write buffer fills and then show
> slow throughput for the remainder of the transfer.  And, without a
> 'sync' option to dd(1), dd(1) will exit and the shell will display the
> next prompt as the final write buffer contents are being written to
> the HDD (e.g. the HDD will be busy for a short while after dd(1) is
> finished).
> 
> Another possibility -- magnetic disk drives have more sectors in outer
> tracks (lower sector number) than they have in inner tracks (higher
> sector number).  When filling an entire drive, I have seen the
> transfer rate drop by 40~50% over the duration of the transfer.  This
> is normal. Is this what you are referring to?

I tried to take account of these possibilities by using 29GB
partitions, much larger than the buffer sizes, and writing
two different partitions. But replicating the run a few times
didn't give me consistent enough timings to have confidence in
any conclusions. When I tried using sync to reduce the effect of
buffering, things slowed so much that I suspect there would be
no shortage of entropy anyway.
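
(A lighter-weight variant, if the aim is only that the final reported rate
include the drain: GNU dd's conv=fsync does a single sync at the end rather
than syncing every write, so something like

# dd if=/dev/urandom of=/dev/sdXn bs=1M status=progress conv=fsync

shouldn't slow the bulk of the run. The device name is a placeholder, and I
haven't timed this variant here.)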

Regardless, the loss of speed is not serious enough for me to change
my strategy from:
  urandom before cryptsetup,
  zero before encrypting swap,
  zero to erase disk at end of life/possession.
I *have* given up running badblocks.
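
For concreteness, the first item is something along the lines of the usual
LUKS sequence -- the exact cryptsetup options aren't pinned down in this
thread, so treat this as a sketch with placeholder names:

# dd if=/dev/urandom of=/dev/sdXn bs=1M status=progress
# cryptsetup luksFormat /dev/sdXn
# cryptsetup open /dev/sdXn crypt_home
# mkfs.ext4 /dev/mapper/crypt_home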

(The disk layout is:

Device         Start       End   Sectors   Size Type
/dev/sdb1       2048      8191      6144     3M BIOS boot
/dev/sdb2       8192   1023999   1015808   496M EFI System
/dev/sdb3    1024000   2047999   1024000   500M Linux swap
/dev/sdb4    2048000  63487999  61440000 29.3G Linux filesystem
/dev/sdb5   63488000 124927999  61440000 29.3G Linux filesystem
/dev/sdb6  124928000 976773119 851845120 406.2G Linux filesystem
)

Cheers,
David.



Re: How long will this take?

2020-06-27 Thread rhkramer
On Friday, June 26, 2020 09:41:26 PM Seeds Notoneofmy wrote:
> On 6/27/20 3:20 AM, rhkra...@gmail.com wrote:
> > This (the above) subject line is not very good, but at least it gives a
> > hint that it  probably is, or at least could be, computer related.
> 
> Would you please explain the computer related "hint" in "How long will
> this take?"

I'm really not inclined to do so.  It really doesn't matter if you used a 
poor subject line and somebody else did also -- no point arguing over which is 
worse.

I will say that apparently others were also able to see a hint in "how long 
will this take?".

To make a feeble attempt at explaining the hint I saw -- computers require 
time to perform tasks, so that was a hint to me.

"Have you seen this inside" provides no hint to me.

I would suggest that you take the criticism in stride and try to make better 
subjects in the future.  And, if you can suggest better subjects to someone 
else, that can be acceptable as long as you do it with good will.

> For starters, I could end that sentence in so many ways, just use your
> imagination.
> 
> In much the same way I could end,  'have you seen this inside...'
> 
> We can drop the subjective statements and look objectively at things.



Re: How long will this take?

2020-06-27 Thread Andrei POPESCU
On Sat, 27 Jun 20, 01:37:57, Seeds Notoneofmy wrote:
> 
> Recently I posted here with the subject line: "have you seen this inside..."
> 
> And I was lectured by no fewer than three people.
> 
> The subject line in this thread is: "How long will this take?"
> 
> I struggle to understand the difference between the two subject lines
> that merits their different treatment.

It seems to me you are complaining that the OP of this thread was not 
lectured as well.

Was this your intention?

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: How long will this take?

2020-06-26 Thread David Christensen

On 2020-06-26 18:25, David Wright wrote:

On Fri 26 Jun 2020 at 15:06:31 (-0700), David Christensen wrote:

On 2020-06-26 06:07, David Wright wrote:



On this slow machine with an oldish PATA disk,
I can get about 75% speed from urandom, 15MB/s vs 20MB/s on a 29GiB
partition (no encryption). There's a noticeable slowdown because,
I presume, the machine runs a bit short of entropy after a while.


I think you are noticing a slowdown when the Linux write buffer fills.


I'm not sure where these write buffers might be hiding: the
2000-vintage PC has 512MB memory, and the same size swap partition,
though the latter is on a disk constructed one month earlier than the
target disk (Feb/Mar 2008). The target disk has 8MB of cache.
With a leisurely determination of dd's PID, my first USR1 poke
occurred no earlier than after 4GB of copying, over three minutes in.


I seem to recall that most of my EIDE interfaces and drives were 100 
MB/s.  (A few were 133 MB/s.)  So, bulk reads or writes can completely 
use an 8 MB cache in a fraction of a second.



top(1) reports memory statistics on line 4.  I believe "buff/cache" is 
the amount of memory being used for I/O write buffering and read 
caching.  Line 5 has statistics for swap.  I do not know if memory write 
buffer / read cache usage interacts with swap usage, but it would not 
surprise me.  top(1) should be able to show you.



Perhaps I misinterpreted your "slowdown" statement.  I assumed you ran a 
command similar to:


# dd if=/dev/urandom of=/dev/sdxn bs=1M status=progress


dd(1) is copying PRN data from the CPU to the kernel write buffer (in 
memory) and the kernel input/output stack is copying from the write 
buffer to the HDD (likely via direct memory access, DMA).  The 
'status=progress' option will cause dd(1) to display the rate at which 
the write buffer is being filled.  I am not sure how to monitor the rate 
at which the write buffer is being drained.  Assuming the write buffer 
is initially empty, the filling process is "fast", and the draining 
process is "slow" when the above command is started, dd(1) should show 
fast throughput until the write buffer fills and then show slow 
throughput for the remainder of the transfer.  And, without a 'sync' 
option to dd(1), dd(1) will exit and the shell will display the next 
prompt as the final write buffer contents are being written to the HDD 
(e.g. the HDD will be busy for a short while after dd(1) is finished).
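
(One plausible way to watch the drain, for what it's worth -- Dirty and
Writeback are standard /proc/meminfo fields, though I have not scripted
this for the case above:

# watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

Dirty is data waiting to be written back; Writeback is data actively being
written to the device.)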



Another possibility -- magnetic disk drives have more sectors in outer 
tracks (lower sector number) than they have in inner tracks (higher 
sector number).  When filling an entire drive, I have seen the transfer 
rate drop by 40~50% over the duration of the transfer.  This is normal. 
Is this what you are referring to?



David



Re: How long will this take?

2020-06-26 Thread Seeds Notoneofmy



On 6/27/20 3:20 AM, rhkra...@gmail.com wrote:

This (the above) subject line is not very good, but at least it gives a hint
that it  probably is, or at least could be, computer related.


Would you please explain the computer related "hint" in "How long will
this take?"

For starters, I could end that sentence in so many ways, just use your
imagination.

In much the same way I could end,  'have you seen this inside...'

We can drop the subjective statements and look objectively at things.




Re: How long will this take?

2020-06-26 Thread David Wright
On Fri 26 Jun 2020 at 15:06:31 (-0700), David Christensen wrote:
> On 2020-06-26 06:07, David Wright wrote:
> > On Fri 19 Jun 2020 at 14:52:11 (-0700), David Christensen wrote:
> 
> > > Benchmark is one thing.  But, from a security viewpoint, writing zeros
> > > to an encrypted volume amounts to providing blocks of plaintext for
> > > corresponding blocks of cyphertext, thereby facilitating
> > > cryptanalysis.
> > 
> > So in view of the unlikelihood of badblocks actually logging something
> > more useful than SMART (where available) or normal disk write errors,
> > perhaps a compromise (for my use case) is to just write /dev/urandom
> > rather than /dev/zero.
> 
> Copying random data to a partition while creating an encrypted
> filesystem provides a high-entropy backdrop to conceal ciphertext
> blocks.  This is a form of steganography.  The Debian Installer manual
> partitioning page has an option to do this.

I presume you meet this option when you select "Configure encrypted volumes",
something that I've never done. Because currently I only encrypt /home
and swap, I set these up after installation, if they're not already there.

I must admit that I prefer to partition disks and set up encryption
outside the d-i, usually capturing the process with script.
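
(Nothing exotic -- the wrapper is just, with an illustrative filename:

# script wipe-sdb.typescript

and typing exit when finished; everything in between, including the
kill -USR1 output from dd, lands in that file.)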

> As the storage is used, the initial random blocks will be overwritten
> by ciphertext blocks.  Depending upon filesystem, encryption, volume
> management, and/or device details, the steganography degrades and may
> eventually disappear.
> 
> Copying random data to storage will add fresh nearly-random blocks on
> the device, improving the steganography.  (The canonical example is to
> copy /dev/urandom to a file until the filesystem fills up, and then
> delete the file.  But, this takes time and adds wear to the device.)

Yes, SSD caveat taken on board.

> > On this slow machine with an oldish PATA disk,
> > I can get about 75% speed from urandom, 15MB/s vs 20MB/s on a 29GiB
> > partition (no encryption). There's a noticeable slowdown because,
> > I presume, the machine runs a bit short of entropy after a while.
> 
> I think you are noticing a slowdown when the Linux write buffer fills.

I'm not sure where these write buffers might be hiding: the
2000-vintage PC has 512MB memory, and the same size swap partition,
though the latter is on a disk constructed one month earlier than the
target disk (Feb/Mar 2008). The target disk has 8MB of cache.
With a leisurely determination of dd's PID, my first USR1 poke
occurred no earlier than after 4GB of copying, over three minutes in.

Cheers,
David.



Re: How long will this take?

2020-06-26 Thread rhkramer
Quoted lines resequenced for my convenience in responding.

On Friday, June 26, 2020 07:37:57 PM Seeds Notoneofmy wrote:
> I struggle to understand the difference between the two subject lines
> that merits their different treatment.


> The subject line in this thread is: "How long will this take?"

This (the above) subject line is not very good, but at least it gives a hint 
that it  probably is, or at least could be, computer related.

> Recently I posted here with the subject line: "have you seen this
> inside..."

This gives no hint that it is computer related, and sounds very spammy, like 
the subject lines that many of us run into, trying to get us to open an email 
that has subject matter we are absolutely not interested in.
 
> And I was lectured by no fewer than three people.

Not I, said the spider to the fly



Re: How long will this take?

2020-06-26 Thread Seeds Notoneofmy

On 6/8/20 10:22 PM, Matthew Campbell wrote:


I bought a new 4 terabyte hard drive that is connected with a USB
cable using USB2. It took about 32 hours to read every sector on the
drive to look for bad sectors. I started blanking the sectors using
/dev/zero last Friday night. It still isn't done. Is there a way I can
find out how much data a particular process has written to the disk?
I'm using Debian 10.4.

dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646



Recently I posted here with the subject line: "have you seen this inside..."

And I was lectured by no fewer than three people.

The subject line in this thread is: "How long will this take?"

I struggle to understand the difference between the two subject lines
that merits their different treatment.

Thanks.



Re: How long will this take?

2020-06-26 Thread David Christensen

On 2020-06-26 06:07, David Wright wrote:

On Fri 19 Jun 2020 at 14:52:11 (-0700), David Christensen wrote:



Benchmark is one thing.  But, from a security viewpoint, writing zeros
to an encrypted volume amounts to providing blocks of plaintext for
corresponding blocks of cyphertext, thereby facilitating
cryptanalysis.


So in view of the unlikelihood of badblocks actually logging something
more useful than SMART (where available) or normal disk write errors,
perhaps a compromise (for my use case) is to just write /dev/urandom
rather than /dev/zero. 


Copying random data to a partition while creating an encrypted 
filesystem provides a high-entropy backdrop to conceal ciphertext 
blocks.  This is a form of steganography.  The Debian Installer manual 
partitioning page has an option to do this.



As the storage is used, the initial random blocks will be overwritten by 
ciphertext blocks.  Depending upon filesystem, encryption, volume 
management, and/or device details, the steganography degrades and may 
eventually disappear.



Copying random data to storage will add fresh nearly-random blocks on 
the device, improving the steganography.  (The canonical example is to 
copy /dev/urandom to a file until the filesystem fills up, and then 
delete the file.  But, this takes time and adds wear to the device.)
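
(A sketch of that canonical example, with a placeholder mount point; dd
exits with a "No space left on device" error when the filesystem fills,
which is expected here:

# dd if=/dev/urandom of=/mnt/point/fill bs=1M status=progress
# rm /mnt/point/fill
# sync)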




On this slow machine with an oldish PATA disk,
I can get about 75% speed from urandom, 15MB/s vs 20MB/s on a 29GiB
partition (no encryption). There's a noticeable slowdown because,
I presume, the machine runs a bit short of entropy after a while.


I think you are noticing a slowdown when the Linux write buffer fills.


David



Re: How long will this take?

2020-06-26 Thread David Wright
On Fri 19 Jun 2020 at 14:52:11 (-0700), David Christensen wrote:
> On 2020-06-18 19:13, David Wright wrote:
> > On Fri 12 Jun 2020 at 07:51:30 (-0400), Michael Stone wrote:
> > > On Thu, Jun 11, 2020 at 08:52:10PM -0500, David Wright wrote:
> > 
> > > > The only unaddressed point in my use case is the prevention of a
> > > > high-water mark, because zeroing the drive achieves precisely the
> > > > opposite. What ought I to be running, instead of badblocks -w -t random,
> > > > to achieve that goal?
> > > 
> > > Create the encrypted volume first, then write zeros to it. :)
> > 
> > Duh! That should work a treat. My posting that example bore me fruit.
> 
> Benchmark is one thing.  But, from a security viewpoint, writing zeros
> to an encrypted volume amounts to providing blocks of plaintext for
> corresponding blocks of cyphertext, thereby facilitating
> cryptanalysis.

So in view of the unlikelihood of badblocks actually logging something
more useful than SMART (where available) or normal disk write errors,
perhaps a compromise (for my use case) is to just write /dev/urandom
rather than /dev/zero. On this slow machine with an oldish PATA disk,
I can get about 75% speed from urandom, 15MB/s vs 20MB/s on a 29GiB
partition (no encryption). There's a noticeable slowdown because,
I presume, the machine runs a bit short of entropy after a while.

Cheers,
David.



Re: How long will this take?

2020-06-19 Thread David Christensen

On 2020-06-18 19:13, David Wright wrote:

On Fri 12 Jun 2020 at 07:51:30 (-0400), Michael Stone wrote:

On Thu, Jun 11, 2020 at 08:52:10PM -0500, David Wright wrote:



The only unaddressed point in my use case is the prevention of a
high-water mark, because zeroing the drive achieves precisely the
opposite. What ought I to be running, instead of badblocks -w -t random,
to achieve that goal?


Create the encrypted volume first, then write zeros to it. :)


Duh! That should work a treat. My posting that example bore me fruit.

Cheers,
David.



Benchmark is one thing.  But, from a security viewpoint, writing zeros 
to an encrypted volume amounts to providing blocks of plaintext for 
corresponding blocks of cyphertext, thereby facilitating cryptanalysis.



David



Re: How long will this take?

2020-06-18 Thread David Wright
On Fri 12 Jun 2020 at 07:51:30 (-0400), Michael Stone wrote:
> On Thu, Jun 11, 2020 at 08:52:10PM -0500, David Wright wrote:

> > The only unaddressed point in my use case is the prevention of a
> > high-water mark, because zeroing the drive achieves precisely the
> > opposite. What ought I to be running, instead of badblocks -w -t random,
> > to achieve that goal?
> 
> Create the encrypted volume first, then write zeros to it. :)

Duh! That should work a treat. My posting that example bore me fruit.

Cheers,
David.



Re: How long will this take?

2020-06-12 Thread Michael Stone

On Thu, Jun 11, 2020 at 08:52:10PM -0500, David Wright wrote:

If you were preserving the disk contents (imagine there were
proprietary encryption software on it), and performed a "read test"
or ran badblocks on it, would that be sufficient to test the disk's
performance, as it's merely reading the sectors. Or do you have to
actually write, with badblocks -r for example?


That really depends on whether you want to test read or write 
performance. If you mean that you want to test for correct operation 
rather than performance then you need a write test to fully exercise the 
disk. Some errors will only be found on write, and conversely some read 
errors will be corrected (remapped) on write.



The only unaddressed point in my use case is the prevention of a
high-water mark, because zeroing the drive achieves precisely the
opposite. What ought I to be running, instead of badblocks -w -t random,
to achieve that goal?


Create the encrypted volume first, then write zeros to it. :)
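
(Spelled out, assuming a LUKS volume -- the names here are placeholders:

# cryptsetup luksFormat /dev/sdX
# cryptsetup open /dev/sdX wipeme
# dd if=/dev/zero of=/dev/mapper/wipeme bs=1M status=progress

The zeros get encrypted on the way down, so the platters end up holding
ciphertext rather than a tell-tale high-water mark.)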



Re: How long will this take?

2020-06-11 Thread David Wright
On Wed 10 Jun 2020 at 14:51:32 (-0400), Michael Stone wrote:
> On Wed, Jun 10, 2020 at 12:02:13PM -0500, David Wright wrote:

[snipped the first part as it's covered elsewhere]

> > My use case for badblocks was closer to that of the OP, but still
> > different. Firstly, the disk contained personal data from unencrypted
> > use in the past. Secondly, I was intending to use it encrypted (as
> > mentioned) and prefer no high-watermark.  Thirdly, because of its
> > age (2011), I was interested in seeing how well it performed. I have
> > no idea whether the disk is "modern" in the sense you used, as I don't
> > follow the technology like some people on this list evidently do.
> > Fourthly, I don't make a habit of throwing away 2TB disks.
> 
> badblocks isn't particularly useful for achieving any of those goals
> vs just writing zeros. "modern" in this context means anything since
> probably the mid 90s but my memory is a bit fuzzy on the exact dates.
> certainly anything since the turn of the century.
> 
> > But, as you know about these things, a few questions:
> > 
> > . How does badblocks do its job in readonly mode, given that it
> >  doesn't know what any block's content ought to be.
> 
> you have to write the test data ahead of time
> 
> > . Why might the OP run badblocks, particularly non-destructively
> >  (as if to preserve something), and *then* zero the drive.
> 
> the only person I saw mention badblocks in this thread was you, but I
> guess I might have missed it

No, you're right, I brought it up, and I *am* conflating two things:
the OP running an unspecified "read test", reading every sector
looking for errors, and a hypothetical person running badblocks.

If you were preserving the disk contents (imagine there were
proprietary encryption software on it), and performed a "read test"
or ran badblocks on it, would that be sufficient to test the disk's
performance, as it's merely reading the sectors. Or do you have to
actually write, with badblocks -r for example?

> > . What's the easiest way of finding out about "consistent bad
> >  (not remappable) sectors" on a drive, as I soon will have to
> >  repeat this result (if not by this exact process) with a 3TB
> >  disk of 2013 vintage. (The good news: it has a USB3 connection.)
> 
> you'll get a bunch of errors while writing, and probably the drive
> will drop offline. you can use smartctl in the smartmontools package
> to see the status of retries & remapped sectors and get a health
> report on the drive, which you can use to decide whether to keep the
> drive in service even if it is currently working. (as a drive ages it
> will often record an increasing number of correctable errors, which
> typically will result in failure in the not-distant future.)

OK, so as far as the 2TB disk is concerned, writing anything over the
entire disk will provoke the reporting and/or remapping of any bad
sectors by SMART, so you can then check the statistics.

The only unaddressed point in my use case is the prevention of a
high-water mark, because zeroing the drive achieves precisely the
opposite. What ought I to be running, instead of badblocks -w -t random,
to achieve that goal?

> a confounding factor is that you might also get write errors and
> dropped disk if there's a USB issue, separate from whether the drive
> is working properly. smartctl may help you understand whether there's
> a physical drive issue, and you can try different USB adapters, ports,
> and cables.

Actually, one of the difficulties I have with the 3TB disk is reading
its SMART information. The disk claims to collect and retain it, but
the ?protocol/?device interface (in the container?) prevents my
reading it successfully. But I'll ask about that in a separate post
sometime.
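
(In case it helps anyone else: with USB enclosures smartctl can often be
told the bridge type explicitly, e.g.

# smartctl -d sat -a /dev/sdX

-d sat is for SCSI-to-ATA translation bridges, and smartctl supports other
-d types too; whether this particular enclosure honours any of them is of
course the open question.)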

Cheers,
David.



Re: How long will this take?

2020-06-10 Thread David Christensen

On 2020-06-10 07:00, Michael Stone wrote:

On Mon, Jun 08, 2020 at 08:22:39PM +, Matthew Campbell wrote:



dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646


This command line gets data in 4k chunks from /dev/zero and then writes 
them to the disk in 512 byte chunks. That's pretty much the worst 
possible case for writing to a disk. 


> # dd if=/dev/zero of=/dev/sdh ibs=4096 count=10000 conv=fdatasync
> 10000+0 records in
> 80000+0 records out
> 40960000 bytes (41 MB, 39 MiB) copied, 3.15622 s, 13.0 MB/s

Good catch.


You want "bs", not "ibs". I'd 
suggest dd if=/dev/zero of=/dev/sdb bs=64k


+1


(I do not recall having a need for 'ibs' or 'obs'.)


and I wouldn't bother trying to calculate a count if you're trying to 
overwrite the entire disk


+1


IME performance peaks at 16-64k. Beyond that things don't improve, and 
can potentially get worse or cause other issues.


I've run benchmarks over the years, usually on Linux.  I forget where 
the performance knees are, but do recall that bs=1M has always been in 
between.



I've been getting sustained USB2 disk writes in the low 40MB/s range for 
more than 15 years. I'd suggest either checking that you're using a 
reasonable block size or getting a better USB2 adapter. 25MB/s is 
definitely low.



# dd if=/dev/zero of=/dev/sdh bs=64k count=10000 conv=fdatasync
10000+0 records in
10000+0 records out
655360000 bytes (655 MB, 625 MiB) copied, 15.1168 s, 43.4 MB/s


I have Intel desktop and server motherboards, and Dell laptops and one 
server.  I believe they all have Intel USB chips.



Looking at a recent imaging script run of dd(1) with bs=1M over USB 2.0 
to a modern USB 3.0 external enclosure with a vintage SATA I HDD, the 
numbers were better than I was remembering:


13997441024 bytes (14 GB, 13 GiB) copied, 387 s, 36.2 MB/s


David



Re: How long will this take?

2020-06-10 Thread Anders Andersson
On Wed, Jun 10, 2020 at 4:00 PM Nicolas George  wrote:
>
> Anders Andersson (12020-06-10):
> > Because the police raiding my house for dealing drugs is not a
> > realistic threat. Looking at my drives for running Tor could be.
>
> I have tried to explain that your threat assessment is inadequate, you
> do not want to listen. Fine, keep wasting your time on your own private
> security theater.

It would only take a quick google to show that this is something that
actually happens. Maybe you want to believe it doesn't, but that
doesn't make it less true.



Re: How long will this take?

2020-06-10 Thread Michael Stone

On Wed, Jun 10, 2020 at 12:02:13PM -0500, David Wright wrote:

I tried to make clear that my use case differed from that of the OP,
in case you missed that. Just before lockdown (=lockout), I borrowed
an AIO computer and, to make room, returned a 2006 vintage tower that
would no longer pass its POST. I used /dev/zero to erase all the
information from the disk as there was little point in trying to put
Windows XP (licensed to a dead computer) back onto it. Quick, easy,
and quick to check with od. Both l0f4r0 and I have asked why the OP
is zeroing the drive, but no reply yet. Perhaps you can suggest an
answer.


I don't really care why the OP is doing it. I can think of several 
possibilities, but I don't see any need to argue with him over it. At 
some point it's reasonable to simply accept that someone is trying to do 
something and either help or not.



My use case for badblocks was closer to that of the OP, but still
different. Firstly, the disk contained personal data from unencrypted
use in the past. Secondly, I was intending to use it encrypted (as
mentioned) and prefer no high-watermark.  Thirdly, because of its
age (2011), I was interested in seeing how well it performed. I have
no idea whether the disk is "modern" in the sense you used, as I don't
follow the technology like some people on this list evidently do.
Fourthly, I don't make a habit of throwing away 2TB disks.


badblocks isn't particularly useful for achieving any of those goals vs 
just writing zeros. "modern" in this context means anything since 
probably the mid 90s but my memory is a bit fuzzy on the exact dates. 
certainly anything since the turn of the century.



But, as you know about these things, a few questions:

. How does badblocks do its job in readonly mode, given that it
 doesn't know what any block's content ought to be.


you have to write the test data ahead of time


. Why might the OP run badblocks, particularly non-destructively
 (as if to preserve something), and *then* zero the drive.


the only person I saw mention badblocks in this thread was you, but I 
guess I might have missed it



. What's the easiest way of finding out about "consistent bad
 (not remappable) sectors" on a drive, as I soon will have to
 repeat this result (if not by this exact process) with a 3TB
 disk of 2013 vintage. (The good news: it has a USB3 connection.)


you'll get a bunch of errors while writing, and probably the drive will 
drop offline. you can use smartctl in the smartmontools package to see 
the status of retries & remapped sectors and get a health report on the 
drive, which you can use to decide whether to keep the drive in service 
even if it is currently working. (as a drive ages it will often record 
an increasing number of correctable errors, which typically will result 
in failure in the not-distant future.)


a confounding factor is that you might also get write errors and dropped 
disk if there's a USB issue, separate from whether the drive is working 
properly. smartctl may help you understand whether there's a physical 
drive issue, and you can try different USB adapters, ports, and cables.
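
(the usual incantations, for reference -- device name is a placeholder:

# smartctl -H /dev/sdX
# smartctl -A /dev/sdX

-H gives the overall health verdict; -A lists the attributes, where
Reallocated_Sector_Ct and Current_Pending_Sector are the interesting ones
for this purpose)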




Re: How long will this take?

2020-06-10 Thread David Wright
On Wed 10 Jun 2020 at 10:14:02 (-0400), Michael Stone wrote:
> On Mon, Jun 08, 2020 at 10:01:13PM -0500, David Wright wrote:
> > On Mon 08 Jun 2020 at 20:22:39 (+0000), Matthew Campbell wrote:
> > > I bought a new 4 terabyte hard drive that is connected with a
> > > USB cable using USB2. It took about 32 hours to read every
> > > sector on the drive to look for bad sectors.
> > 
> > I recently ran
> > 
> > # badblocks -c 1024 -s -w -t random -v /dev/sdz
> > 
> > on a 2TB disk with a USB2 connection. The whole process, writing and
> > checking, took 33⅓ hours. (The disk now holds an encrypted ext4 filesystem.)
> 
> Yes, it's a slower process than just writing zeros. A modern drive
> will verify writes as they're made. badblocks is basically a relic of
> another age primarily intended to give a list of bad sectors to avoid
> when making a filesystem. Once upon a time, hard drives actually had a
> handwritten label on the top listing any bad sectors identified at the
> factory so you could avoid them. They don't have that anymore. If any
> modern hard drive has consistent bad (not remappable) sectors it
> should just be thrown away, because that means it is so far gone that
> it no longer has the ability to internally map bad sectors to reserved
> good sectors.
> 
> > > I started blanking the sectors using /dev/zero last Friday
> > > night. It still isn't done. Is there a way I can find out how
> > > much data a particular process has written to the disk? I'm
> > > using Debian 10.4.
> 
> > I'm not sure why you'd do that. I've only zeroed disks to erase them
> > before I return them to the owner. (They're inside loaned computers.)
> 
> Because it accomplishes what your badblocks run does, in less than
> half the time. :)

I tried to make clear that my use case differed from that of the OP,
in case you missed that. Just before lockdown (=lockout), I borrowed
an AIO computer and, to make room, returned a 2006 vintage tower that
would no longer pass its POST. I used /dev/zero to erase all the
information from the disk as there was little point in trying to put
Windows XP (licensed to a dead computer) back onto it. Quick, easy,
and quick to check with od. Both l0f4r0 and I have asked why the OP
is zeroing the drive, but no reply yet. Perhaps you can suggest an
answer.

My use case for badblocks was closer to that of the OP, but still
different. Firstly, the disk contained personal data from unencrypted
use in the past. Secondly, I was intending to use it encrypted (as
mentioned) and prefer no high-watermark.  Thirdly, because of its
age (2011), I was interested in seeing how well it performed. I have
no idea whether the disk is "modern" in the sense you used, as I don't
follow the technology like some people on this list evidently do.
Fourthly, I don't make a habit of throwing away 2TB disks.

But, as you know about these things, a few questions:

. How does badblocks do its job in readonly mode, given that it
  doesn't know what any block's content ought to be.

. Why might the OP run badblocks, particularly non-destructively
  (as if to preserve something), and *then* zero the drive.

. What's the easiest way of finding out about "consistent bad
  (not remappable) sectors" on a drive, as I soon will have to
  repeat this result (if not by this exact process) with a 3TB
  disk of 2013 vintage. (The good news: it has a USB3 connection.)

Cheers,
David.



Re: How long will this take?

2020-06-10 Thread Michael Stone

On Wed, Jun 10, 2020 at 05:53:17PM +0300, Andrei POPESCU wrote:

On Wed, 10 Jun 20, 10:00:48, Michael Stone wrote:


IME performance peaks at 16-64k. Beyond that things don't improve, and can
potentially get worse or cause other issues.


Even so, bs=1M is easy to remember and type ;)


I don't find it all that hard to remember 64k but YMMV.



Re: How long will this take?

2020-06-10 Thread Andrei POPESCU
On Wed, 10 Jun 20, 10:00:48, Michael Stone wrote:
> 
> IME performance peaks at 16-64k. Beyond that things don't improve, and can
> potentially get worse or cause other issues.

Even so, bs=1M is easy to remember and type ;)

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: How long will this take?

2020-06-10 Thread Michael Stone

On Mon, Jun 08, 2020 at 10:01:13PM -0500, David Wright wrote:

On Mon 08 Jun 2020 at 20:22:39 (+0000), Matthew Campbell wrote:
I bought a new 4 terabyte hard drive that is connected with a USB 
cable using USB2. It took about 32 hours to read every sector on the 
drive to look for bad sectors.


I recently ran

# badblocks -c 1024 -s -w -t random -v /dev/sdz

on a 2TB disk with a USB2 connection. The whole process, writing and
checking, took 33⅓ hours. (The disk now holds an encrypted ext4 filesystem.)


Yes, it's a slower process than just writing zeros. A modern drive will 
verify writes as they're made. badblocks is basically a relic of another 
age primarily intended to give a list of bad sectors to avoid when 
making a filesystem. Once upon a time, hard drives actually had a 
handwritten label on the top listing any bad sectors identified at the 
factory so you could avoid them. They don't have that anymore. If any 
modern hard drive has consistent bad (not remappable) sectors it should 
just be thrown away, because that means it is so far gone that it no 
longer has the ability to internally map bad sectors to reserved good 
sectors.


I started blanking the sectors using /dev/zero last Friday night. It 
still isn't done. Is there a way I can find out how much data a 
particular process has written to the disk? I'm using Debian 10.4.



I'm not sure why you'd do that. I've only zeroed disks to erase them
before I return them to the owner. (They're inside loaned computers.)


Because it accomplishes what your badblocks run does, in less than half 
the time. :)




Re: How long will this take?

2020-06-10 Thread Michael Stone

On Mon, Jun 08, 2020 at 08:22:39PM +, Matthew Campbell wrote:

I bought a new 4 terabyte hard drive that is connected with a USB cable using
USB2. It took about 32 hours to read every sector on the drive to look for bad
sectors. I started blanking the sectors using /dev/zero last Friday night. It
still isn't done. Is there a way I can find out how much data a particular
process has written to the disk? I'm using Debian 10.4.

dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646


This command line gets data in 4k chunks from /dev/zero and then writes 
them to the disk in 512 byte chunks. That's pretty much the worst 
possible case for writing to a disk. You want "bs", not "ibs". I'd 
suggest 
dd if=/dev/zero of=/dev/sdb bs=64k
and I wouldn't bother trying to calculate a count if you're trying to 
overwrite the entire disk (any human is likely to screw up the math and 
there's no actual benefit).


IME performance peaks at 16-64k. Beyond that things don't improve, and 
can potentially get worse or cause other issues.
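
(a quick way to check on a given adapter -- destructive to the target
device, and the fixed count means each size writes a different total, so
scale it per size if you want strictly comparable runs:

# for bs in 4k 16k 64k 256k 1M; do dd if=/dev/zero of=/dev/sdX bs=$bs count=4000 conv=fdatasync 2>&1 | tail -n1; done

conv=fdatasync makes each reported rate include the flush)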


On Mon, Jun 08, 2020 at 04:33:29PM -0400, Dan Ritter wrote:

Matthew Campbell wrote:

I bought a new 4 terabyte hard drive that is connected with a USB cable using 
USB2. It took about 32 hours to read every sector on the drive to look for bad 
sectors. I started blanking the sectors using /dev/zero last Friday night. It 
still isn't done. Is there a way I can find out how much data a particular 
process has written to the disk? I'm using Debian 10.4.

dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646


USB2 disks are good for about 25MB/s.


I've been getting sustained USB2 disk writes in the low 40MB/s range for 
more than 15 years. I'd suggest either checking that you're using a 
reasonable block size or getting a better USB2 adapter. 25MB/s is 
definitely low. That said, these days you'd be much better off with

USB3 because any modern disk is going to bottleneck on USB2.

On Mon, Jun 08, 2020 at 06:02:46PM -0400, Dan Ritter wrote:

deloptes wrote:

Dan Ritter wrote:

> USB2 disks are good for about 25MB/s.
>

Where do you have those numbers?

The USB 2.0 standard can theoretically transfer data at a very high 480
megabits per second (Mbps), or 60 megabytes per second (MBps) (for example,
in Wikipedia).


Yes, that's the theory. In years of running several USB 2.0
attached disks, I found that they were actually good for about
25MB/s long-term. Bursts to 37MB/s were not uncommon.


# dd if=/dev/zero of=/dev/sdh bs=64k count=10000 conv=fdatasync
10000+0 records in
10000+0 records out
655360000 bytes (655 MB, 625 MiB) copied, 15.1168 s, 43.4 MB/s


The OP's 4K writes will be particularly badly performing.


OP was doing 512 byte writes.

(same adapter/disk)
# dd if=/dev/zero of=/dev/sdh ibs=4096 count=10000 conv=fdatasync
10000+0 records in
80000+0 records out
40960000 bytes (41 MB, 39 MiB) copied, 3.15622 s, 13.0 MB/s

I didn't have the patience to write the same amount of data. :D



Re: How long will this take?

2020-06-10 Thread Nicolas George
Anders Andersson (12020-06-10):
> Because the police raiding my house for dealing drugs is not a
> realistic threat. Looking at my drives for running Tor could be.

I have tried to explain that your threat assessment is inadequate, you
do not want to listen. Fine, keep wasting your time on your own private
security theater.

Regards,

-- 
  Nicolas George




Re: How long will this take?

2020-06-10 Thread Anders Andersson
On Wed, Jun 10, 2020 at 3:33 PM Nicolas George  wrote:
>
> Anders Andersson (12020-06-10):
> > Except wiping a disk is trivial. Just start the job and come back
> > later to a clean disk. It's not like you have to wipe it by hand. I do
> > it routinely before I put a disk to use that's going to be used for a
> > couple of years.
>
> There is no "except" about it: define your threat model; if it requires
> wiping, wipe. If it does not, wiping is just a waste of time, little or
> lots, still a waste. And it is a waste of power too.
>
> There are many things that are trivial to do with a hard drive and could
> benefit security in far-fetched scenarios. Did you wipe the possible
> traces of cocaine? Did you weigh it to check it matches the specs? Did
> you take pictures of all angles? All these and many others are trivial.
> Why one but not the others?

Because the police raiding my house for dealing drugs is not a
realistic threat. Looking at my drives for running Tor could be.



Re: How long will this take?

2020-06-10 Thread Nicolas George
Anders Andersson (12020-06-10):
> Except wiping a disk is trivial. Just start the job and come back
> later to a clean disk. It's not like you have to wipe it by hand. I do
> it routinely before I put a disk to use that's going to be used for a
> couple of years.

There is no "except" about it: define your threat model; if it requires
wiping, wipe. If it does not, wiping is just a waste of time, little or
lots, still a waste. And it is a waste of power too.

There are many things that are trivial to do with a hard drive and could
benefit security in far-fetched scenarios. Did you wipe the possible
traces of cocaine? Did you weigh it to check it matches the specs? Did
you take pictures of all angles? All these and many others are trivial.
Why one but not the others?

Regards,

-- 
  Nicolas George




Re: How long will this take?

2020-06-10 Thread Anders Andersson
On Wed, Jun 10, 2020 at 1:14 PM Nicolas George  wrote:
>
> Anders Andersson (12020-06-10):
> > Too bad if you end up in a routine police investigation and they find
> > child pornography when scanning the disks for deleted files.
> >
> > "Must have been the previous owner" is a valid defense, but I'd rather
> > not end up having to use it.
>
> Ah, but maybe the previous owner had discovered a cheap cure for
> covid-19 and big pharma had them silenced. You would be wiping the last
> traces of their research!
>
> Seriously, first we were talking about hard drives straight from the
> factory in China, making the threat… industrial espionage, I suppose?
> And now we are talking about child pornography found in an unrelated
> seizure.
>
> So, for that to be relevant, you would need all the following
> conditions to be met:
>
> - the previous owner had child pornography on this disk;
>
> - unencrypted;
>
> - they gave away their disk in a way that makes it reusable;
>
> - without wiping it themselves;
>
> - cops show up at your door and take the drive to examine it;
>
> - they do it before regular use has wiped it.
>
> That is a fine Drake equation you got here, but maybe not a rational
> justification for spending days wiping a drive.
>
> For any security measure, it is easy to find afterwards a far-fetched
> scenario where it makes a difference. But that is how TV writers work,
> not security. For security, we must first define the attack model, and
> then search for defense. Otherwise we end up barricading the back door
> while the key to the front door is still under the mat.

Except wiping a disk is trivial. Just start the job and come back
later to a clean disk. It's not like you have to wipe it by hand. I do
it routinely before I put a disk to use that's going to be used for a
couple of years.



Re: How long will this take?

2020-06-10 Thread Nicolas George
Anders Andersson (12020-06-10):
> Too bad if you end up in a routine police investigation and they find
> child pornography when scanning the disks for deleted files.
> 
> "Must have been the previous owner" is a valid defense, but I'd rather
> not end up having to use it.

Ah, but maybe the previous owner had discovered a cheap cure for
covid-19 and big pharma had them silenced. You would be wiping the last
traces of their research!

Seriously, first we were talking about hard drives straight from the
factory in China, making the threat… industrial espionage, I suppose?
And now we are talking about child pornography found in an unrelated
seizure.

So, for that to be relevant, you would need all the following
conditions to be met:

- the previous owner had child pornography on this disk;

- unencrypted;

- they gave away their disk in a way that makes it reusable;

- without wiping it themselves;

- cops show up at your door and take the drive to examine it;

- they do it before regular use has wiped it.

That is a fine Drake equation you got here, but maybe not a rational
justification for spending days wiping a drive.

For any security measure, it is easy to find afterwards a far-fetched
scenario where it makes a difference. But that is how TV writers work,
not security. For security, we must first define the attack model, and
then search for defense. Otherwise we end up barricading the back door
while the key to the front door is still under the mat.

Regards,

-- 
  Nicolas George




Re: How long will this take?

2020-06-10 Thread Anders Andersson
On Tue, Jun 9, 2020 at 8:28 PM Nicolas George  wrote:
>
> Jude DaShiell (12020-06-09):
> > High security operations do this routinely.  They properly don't trust
> > parts are as labeled from manufacturers especially manufacturers that
> > send any of their stuff or get any of their stuff from China.
>
> There is no trust to have. The previous contents would be overwritten on
> the first actual write of a file.
>
> And if the filesystem reads a sector that has never been written, that's
> a serious bug in the operating system.

Too bad if you end up in a routine police investigation and they find
child pornography when scanning the disks for deleted files.

"Must have been the previous owner" is a valid defense, but I'd rather
not end up having to use it.



Re: How long will this take?

2020-06-09 Thread Christopher David Howie

On 6/9/2020 5:39 AM, Nicolas George wrote:

How do you add "status=progress" to a process that has already been
running for three days?


You can't, of course.  I was merely suggesting using this in future 
invocations.
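
For a dd that is already running there are still a couple of options: poke 
it with kill -USR1, as others in this thread do, or read the kernel's own 
position counter.  A sketch, with <pid> and <fd> to be filled in by hand:

# ls -l /proc/<pid>/fd
# grep ^pos: /proc/<pid>/fdinfo/<fd>

The first command shows which file descriptor is open on the target device; 
pos: in the second is the byte offset, i.e. how far dd has gotten.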


--
Chris Howie
http://www.chrishowie.com
http://en.wikipedia.org/wiki/User:Crazycomputers

If you correspond with me on a regular basis, please read this document: 
http://www.chrishowie.com/email-preferences/


PGP fingerprint: 2B7A B280 8B12 21CC 260A DF65 6FCE 505A CF83 38F5






Re: How long will this take?

2020-06-09 Thread Nicolas George
Jude DaShiell (12020-06-09):
> To search disk drives.

Are you still talking about binary search? To search disk drives for
what? Binary search is for sorted data. There is nothing sorted on a
hard drive. And binary search is when random access is fast; on
mechanical hard drives, random access is much slower than sequential
access.

Regards,

-- 
  Nicolas George




Re: How long will this take?

2020-06-09 Thread Jude DaShiell
To search disk drives.
On Tue, 9 Jun 2020, Nicolas George wrote:

> Date: Tue, 9 Jun 2020 14:27:46
> From: Nicolas George 
> Reply-To: debian-user@lists.debian.org
> To: Jude DaShiell 
> Cc: l0f...@tuta.io, Debian User 
> Subject: Re: How long will this take?
>
> Jude DaShiell (12020-06-09):
> > High security operations do this routinely.  They properly don't trust
> > parts are as labeled from manufacturers especially manufacturers that
> > send any of their stuff or get any of their stuff from China.
>
> There is no trust to have. The previous contents would be overwritten on
> the first actual write of a file.
>
> And if the filesystem reads a sector that has never been written, that's
> a serious bug in the operating system.
>
> > I'm thinking of the binary search method and am wondering if disk
> > operations of all sorts could be speeded up using it rather than
> > sequential searches.  Or is binary already used now?
>
> To search what?
>
> Regards,
>
>




Re: How long will this take?

2020-06-09 Thread Nicolas George
Jude DaShiell (12020-06-09):
> High security operations do this routinely.  They properly don't trust
> parts are as labeled from manufacturers especially manufacturers that
> send any of their stuff or get any of their stuff from China.

There is no trust to have. The previous contents would be overwritten on
the first actual write of a file.

And if the filesystem reads a sector that has never been written, that's
a serious bug in the operating system.

> I'm thinking of the binary search method and am wondering if disk
> operations of all sorts could be speeded up using it rather than
> sequential searches.  Or is binary already used now?

To search what?

Regards,

-- 
  Nicolas George




Re: How long will this take?

2020-06-09 Thread Jude DaShiell
High security operations do this routinely.  They properly don't trust
parts are as labeled from manufacturers especially manufacturers that
send any of their stuff or get any of their stuff from China.
I'm thinking of the binary search method and am wondering if disk
operations of all sorts could be speeded up using it rather than
sequential searches.  Or is binary already used now?

On Tue, 9 Jun 2020, l0f...@tuta.io wrote:

> Date: Tue, 9 Jun 2020 14:08:34
> From: l0f...@tuta.io
> To: Debian User 
> Subject: Re: How long will this take?
> Resent-Date: Tue,  9 Jun 2020 18:08:47 + (UTC)
> Resent-From: debian-user@lists.debian.org
>
> Hi,
>
> 8 June 2020 at 22:22 from treni...@pm.me:
>
> > I bought a new 4 terabyte hard drive that is connected with a USB cable 
> > using USB2.  It took about 32 hours to read every sector on the drive to 
> > look for bad sectors.  I started blanking the sectors using /dev/zero last 
> > Friday night.
> >
> Out of curiosity, what is the purpose of wiping a brand new HDD?
> Wouldn't formatting (or GPT overwrite) be sufficient?
>
> 9 June 2020 at 08:59 from dpchr...@holgerdanske.com:
>
> > Also as others have stated, writing zeros to an SSD may wear it out 
> > prematurely (depends upon internals of SSD).  The best approach is to do a 
> > "secure erase".
> >
> It seems to be a hard drive here ;)
> > Rather than wiping storage devices with GNU/Linux userland tools, your best 
> > bet is to use the manufacturer's diagnostic utility.  In the ideal case, 
> > the utility sends a command to the drive controller and everything gets 
> > done internally at maximum speed.  I prefer the bootable "Live" tools, if 
> > available.  Each manufacturer has their own toolkit.  Get the one for your 
> > drive brand.  For example, SeaTools Bootable:
> >
> > https://www.seagate.com/support/downloads/seatools/
> >
> Even more true for an SSD (and yet, I'm not sure we can say "secure" for sure 
> as those utilities are generally proprietary so we cannot verify what they do 
> exactly).
>
> Best regards,
> l0f4r0
>
>




Re: How long will this take?

2020-06-09 Thread l0f4r0
Hi,

8 June 2020 at 22:22 from treni...@pm.me:

> I bought a new 4 terabyte hard drive that is connected with a USB cable 
> using USB2.  It took about 32 hours to read every sector on the drive to look 
> for bad sectors.  I started blanking the sectors using /dev/zero last Friday 
> night.
>
Out of curiosity, what is the purpose of wiping a brand new HDD?
Wouldn't formatting (or GPT overwrite) be sufficient?
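
(For comparison, the formatting / GPT-overwrite route is a matter of
seconds -- these are the standard util-linux and gdisk tools, with a
placeholder device name:

# wipefs -a /dev/sdX
# sgdisk --zap-all /dev/sdX

wipefs removes filesystem and partition-table signatures; --zap-all
destroys the GPT and MBR structures. Neither touches the bulk of the old
data, of course.)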

9 June 2020 at 08:59 from dpchr...@holgerdanske.com:

> Also as others have stated, writing zeros to an SSD may wear it out 
> prematurely (depends upon internals of SSD).  The best approach is to do a 
> "secure erase".
>
It seems to be a hard drive here ;)
> Rather than wiping storage devices with GNU/Linux userland tools, your best 
> bet is to use the manufacturer's diagnostic utility.  In the ideal case, the 
> utility sends a command to the drive controller and everything gets done 
> internally at maximum speed.  I prefer the bootable "Live" tools, if 
> available.  Each manufacturer has their own toolkit.  Get the one for your 
> drive brand.  For example, SeaTools Bootable:
>
> https://www.seagate.com/support/downloads/seatools/
>
Even more true for an SSD (and yet, I'm not sure we can say "secure" for sure 
as those utilities are generally proprietary so we cannot verify what they do 
exactly).

Best regards,
l0f4r0



Re: How long will this take?

2020-06-09 Thread Nicolas George
Christopher David Howie (12020-06-08):
> I'd suggest simply adding "status=progress" which gives you a summary every
> second including bytes written, elapsed time, and average transfer rate.

How do you add "status=progress" to a process that has already been
running for three days?

Regards,

-- 
  Nicolas George




Re: How long will this take?

2020-06-09 Thread Andrei POPESCU
On Mon, 08 Jun 20, 20:09:54, Jude DaShiell wrote:
> > From: Dan Ritter 
> People are suing Western Digital for sneaking those SMR disks into their
> supply chain.  They're supposed to be red in color if what I read in the
> news is correct.
> >
> > https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/

According to that list it's only specific model numbers of WD Red and 
Blue.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: How long will this take?

2020-06-09 Thread David Christensen

On 2020-06-08 13:22, Matthew Campbell wrote:

I bought a new 4 terabyte hard drive that is connected with a USB cable using 
USB2. It took about 32 hours to read every sector on the drive to look for bad 
sectors. I started blanking the sectors using /dev/zero last Friday night. It 
still isn't done. Is there a way I can find out how much data a particular 
process has written to the disk? I'm using Debian 10.4.

dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646



Install 'nmon'.  Then start a terminal and run 'nmon'. Press 'd' to 
display the disk monitoring screen.  This will show read throughput, 
write throughput, and percent utilization.



Alternatively, if you are using the Xfce desktop, add a Disk Performance 
Monitor applet to the panel and configure it for the correct device node 
/dev/sdX:


Device  /dev/sdX
unchecked Label sdX
Update interval(s)  1.000
Monitor Busy time
checked Combine Read/Write data

Then hover your mouse pointer over the applet and it will show you read, 
write, and total statistics for both throughput and for busy time.



USB 2.0 ports have a maximum write speed around 25 MB/s.  eSATA ports 
(version 1) get much closer to their theoretical maximum of 150 MB/s. 
USB 3.0 beats them both.  Of course, you must have a fast drive and a 
fast program.



When using dd(1) to write blocks to a raw drive, use a block size of 1M 
(i.e. 1 mebibyte).  As others have stated, a small block size of 4K will 
significantly reduce throughput due to I/O overhead.



Also as others have stated, writing zeros to an SSD may wear it out 
prematurely (depends upon internals of SSD).  The best approach is to do 
a "secure erase".



Rather than wiping storage devices with GNU/Linux userland tools, your 
best bet is to use the manufacturer's diagnostic utility.  In the ideal 
case, the utility sends a command to the drive controller and everything 
gets done internally at maximum speed.  I prefer the bootable "Live" 
tools, if available.  Each manufacturer has their own toolkit.  Get the 
one for your drive brand.  For example, SeaTools Bootable:


https://www.seagate.com/support/downloads/seatools/


David



Re: How long will this take?

2020-06-08 Thread Christopher David Howie

On 6/8/2020 11:01 PM, David Wright wrote:

I, too, determine progress with
# kill -USR1 


I'd suggest simply adding "status=progress" which gives you a summary 
every second including bytes written, elapsed time, and average transfer 
rate.


--
Chris Howie
http://www.chrishowie.com
http://en.wikipedia.org/wiki/User:Crazycomputers

If you correspond with me on a regular basis, please read this document: 
http://www.chrishowie.com/email-preferences/


PGP fingerprint: 2B7A B280 8B12 21CC 260A DF65 6FCE 505A CF83 38F5






Re: How long will this take?

2020-06-08 Thread David Wright
On Mon 08 Jun 2020 at 20:22:39 (+), Matthew Campbell wrote:
> I bought a new 4 terabyte hard drive that is connected with a USB cable 
> using USB2. It took about 32 hours to read every sector on the drive to look 
> for bad sectors.

I recently ran

# badblocks -c 1024 -s -w -t random -v /dev/sdz

on a 2TB disk with a USB2 connection. The whole process, writing and
checking, took 33⅓ hours. (The disk now holds an encrypted ext4 filesystem.)

> I started blanking the sectors using /dev/zero last Friday night. It still 
> isn't done. Is there a way I can find out how much data a particular process 
> has written to the disk? I'm using Debian 10.4.

I'm not sure why you'd do that. I've only zeroed disks to erase them
before I return them to the owner. (They're inside loaned computers.)

> dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646

… And I'd be using bs=1M and no count. I, too, determine progress with
# kill -USR1 
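
Spelled out, assuming a single dd process (illustrative):

# dd if=/dev/zero of=/dev/sdX bs=1M &
# kill -USR1 $(pidof dd)   # dd prints bytes copied, elapsed time, rate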

Cheers,
David.



Re: How long will this take?

2020-06-08 Thread Dan Ritter
Jude DaShiell wrote: 
> Does an optimal formula exist, based on hard drive size, that
> minimizes the time needed for checking and blanking hard drives, as a
> function of the block size value?

If the disk firmware offers it, a SMART long read/verify test
should be close to optimal. Consult smartctl and the disk manufacturer
for details.
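
A rough smartctl sketch (device name illustrative; the test runs inside
the drive, so it does not tie up the host):

# smartctl -t long /dev/sdX       # start the extended self-test
# smartctl -c /dev/sdX            # recommended polling time = duration
# smartctl -l selftest /dev/sdX   # read the results once it finishes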

For conventional spinning hard disks, the optimal write size would be
a complete cylinder at a time. That varies across the radius of the disk,
and may not be made available to the OS. 

In lieu of knowing that, writes which are reasonable integer
multiples of the sector size are very good. 1 MB is probably
good for most drives.

For SMR spinning disks, the optimal write size is one complete
write zone. I've heard that this is standardizing at 256MB, but
I would want to confirm with the manufacturer. There are a lot
of interactions with PMR caches. 

For SSD, writing wears out the storage mechanism. A write-all
test won't test reliability; flaws will be detected and remapped
without letting the host know.

-dsr-



re: How long will this take?

2020-06-08 Thread Jude DaShiell
Does an optimal formula exist, based on hard drive size, that minimizes
the time needed for checking and blanking hard drives, as a function of
the block size value?






Re: How long will this take?

2020-06-08 Thread Jude DaShiell
People are suing Western Digital for sneaking those SMR disks into their
supply chain.  They're supposed to be red in color if what I read in the
news is correct.

On Mon, 8 Jun 2020, Dan Ritter wrote:

> Matthew Campbell wrote:
> > I bought a new 4 terabyte hard drive that is connected with a USB cable 
> > using USB2. It took about 32 hours to read every sector on the drive to 
> > look for bad sectors. I started blanking the sectors using /dev/zero last 
> > Friday night. It still isn't done. Is there a way I can find out how much 
> > data a particular process has written to the disk? I'm using Debian 10.4.
> >
> > dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646
>
> USB2 disks are good for about 25MB/s.
>
> 4 seconds gets you 100MB.
>
> 40 seconds gets you 1000MB.
>
> 4000 * 40 seconds is 160,000 seconds, so that's not quite two
> days.
>
> Is something wrong? Based on current news reports, I would say
> you accidentally purchased an SMR disk. (By accidentally, I mean
> that the box didn't say, the ad didn't say, and the manufacturer
> might even have lied to you for a while.)
>
> https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/
>
> Is it one of those?
>
> If so, return it. Tell the store that it's an unlabelled SMR
> drive. They'll take it back.
>
> -dsr-
>
>




Re: How long will this take?

2020-06-08 Thread Dan Ritter
deloptes wrote: 
> Dan Ritter wrote:
> 
> > USB2 disks are good for about 25MB/s.
> > 
> 
> Where do you have those numbers?
> 
> The USB 2.0 standard can theoretically transfer data at 480 megabits
> per second (Mbit/s), or 60 megabytes per second (MB/s) (see, for
> example, Wikipedia).

Yes, that's the theory. In years of running several USB 2.0
attached disks, I found that they were actually good for about
25MB/s long-term. Bursts to 37MB/s were not uncommon.

The USB mass storage protocol forces a queue depth of 1: one
request, one response, nothing else until it's done. 

The OP's 4K writes will perform particularly badly.

-dsr-



Re: How long will this take?

2020-06-08 Thread Nicolas George
Matthew Campbell (12020-06-08):
> Is that in bytes?

You can compare with the stats presented by USR1 to be sure.
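
(If it is bytes, then 877106917376 B is about 877 GB, roughly 22% of a
4 TB drive; dd's USR1 output should show the same figure.)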

> stdin and stderr both show a position of zero.

You can look in /proc/$PID/fd to see where the various fds point. I
guess 0 will point to /dev/zero, 1 to the hard drive, and 2 to the tty.
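
For example (with $PID being dd's process id; each fd shows up as a
symlink to whatever it points at):

$ ls -l /proc/$PID/fd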

Regards,

-- 
  Nicolas George




Re: How long will this take?

2020-06-08 Thread Matthew Campbell
# cat /proc/24283/fdinfo/1
pos: 877106917376
flags: 011
mnt_id: 21
#

Is that in bytes?

stdin and stderr both show a position of zero.


 Original Message 
On Jun 8, 2020, 1:32 PM, Nicolas George wrote:

> Matthew Campbell (12020-06-08):
> > I bought a new 4 terabyte hard drive that is connected with a USB
> > cable using USB2. It took about 32 hours to read every sector on the
> > drive to look for bad sectors. I started blanking the sectors using
> > /dev/zero last Friday night. It still isn't done. Is there a way I
> > can find out how much data a particular process has written to the
> > disk? I'm using Debian 10.4.
>
> Sending a USR1 signal to a running 'dd' process makes it print I/O
> statistics to standard error and then resume copying.
>
> From dd(1).
>
> Also, you can go read /proc/$(pidof dd)/fdinfo, it contains the
> information too.
>
> Note that it becomes much slower as it nears the center of the disk.
>
> Regards,
> --
> Nicolas George

Re: How long will this take?

2020-06-08 Thread deloptes
Dan Ritter wrote:

> USB2 disks are good for about 25MB/s.
> 

Where do you have those numbers?

The USB 2.0 standard can theoretically transfer data at 480 megabits
per second (Mbit/s), or 60 megabytes per second (MB/s) (see, for
example, Wikipedia).

But as you say, it slows down at some point. It does not run at the
theoretical maximum anyway; some people say it will not get above
20 MB/s.

I suggest adding status=progress to dd. From the dd documentation:

  The LEVEL of information to print to stderr; 'none' suppresses
  everything but error messages, 'noxfer' suppresses the final
  transfer statistics, 'progress' shows periodic transfer statistics

And honestly, I do not think 4 TB disks were meant to be used via
USB 2.0; think about using eSATA.







Re: How long will this take?

2020-06-08 Thread Dan Ritter
Matthew Campbell wrote: 
> I bought a new 4 terabyte hard drive that is connected with a USB cable 
> using USB2. It took about 32 hours to read every sector on the drive to look 
> for bad sectors. I started blanking the sectors using /dev/zero last Friday 
> night. It still isn't done. Is there a way I can find out how much data a 
> particular process has written to the disk? I'm using Debian 10.4.
> 
> dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646

USB2 disks are good for about 25MB/s.

4 seconds gets you 100MB.

40 seconds gets you 1000MB.

4000 * 40 seconds is 160,000 seconds, so that's not quite two
days.

Is something wrong? Based on current news reports, I would say
you accidentally purchased an SMR disk. (By accidentally, I mean
that the box didn't say, the ad didn't say, and the manufacturer
might even have lied to you for a while.)

https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/

Is it one of those?

If so, return it. Tell the store that it's an unlabelled SMR
drive. They'll take it back.

-dsr-



Re: How long will this take?

2020-06-08 Thread Nicolas George
Matthew Campbell (12020-06-08):
> I bought a new 4 terabyte hard drive that is connected with a USB
> cable using USB2. It took about 32 hours to read every sector on the
> drive to look for bad sectors. I started blanking the sectors using
> /dev/zero last Friday night. It still isn't done. Is there a way I can
> find out how much data a particular process has written to the disk?
> I'm using Debian 10.4.

   Sending a USR1 signal to a running 'dd' process makes it print I/O
   statistics to standard error and then resume copying.

From dd(1).

Also, you can go read /proc/$(pidof dd)/fdinfo, it contains the
information too.
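
For instance (a sketch assuming exactly one dd is running; fd 1 is its
standard output, i.e. the disk):

$ cat /proc/$(pidof dd)/fdinfo/1   # 'pos:' is the write offset in bytes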

Note that it becomes much slower as it nears the center of the disk.

Regards,

-- 
  Nicolas George




Re: How long should it take to extract 1GIG off Tape?

1997-02-18 Thread Dr. Andreas Wehler
My Wangtek QIC 150 tape (an old 5110 ES drive) runs at a rate of
5 MB/min, or 300 MB/h, which should give 1 GB in something near/below
4 h (if the 250 MB tapes are changed fast enough).  The streaming
throughput should be limited by the tape, not the host.

:  I'm just checking but whenever I extract the contents of my mirrored
: tapes it seems to take nearly all night to extract. At least 6 hours.
: That doesn't seem right to me. Is the kernel configured to work with all
: SCSI tape drives in an optimal manner? Is there the same kind of QIC
: interpretation problem like under Solaris here? ie. stconf.c?

-- 
Uni Wuppertal, FB Elektrotechnik, Tel/Fax: (0202) 439 - 3009
Dr. Andreas Wehler;  [EMAIL PROTECTED]

