Re: Migrating from hard drives to SSDs

2023-07-14 Thread gene heskett

On 7/14/23 09:34, Anssi Saari wrote:

gene heskett  writes:


One of the things apparently missing in today's support for the arm64
boards such as the bananapi-m5, is the lack of support for the nvme
memory on some of these devices. I have quite a few of them, all
booting and running from 64G micro-sd's.  Yet these all have, soldered
to the board, several gigs of nvme memory, more than enough to contain
a full desktop install with all the toys, but totally unused.


https://wiki.banana-pi.org/Banana_Pi_BPI-M5 says eMMC, not NVME. Same
page has a link to a video https://www.youtube.com/watch?v=q5I6pzWCTrg
which supposedly explains how to use it to install some software. I have
no idea if you can actually boot from eMMC on those boards.

I only have edible bananas here but on my Raspberry Pi Compute Module 3+
the eMMC was fully usable and in fact, I have no SD card on that
system. Although the eMMC solution on these boards isn't great because
they're slow. Apparently the Foundation went cheap (or clueless) on it
which is a pity.

Thanks Anssi. I'll go for the "both" option: cheap AND clueless.  I am 
blocked from their forum because I asked about realtime kernels for the 
3rd time.


One reply did give me a link to the src, which I was able to build, but 
their u-boot is custom, so I had to invent my own method of install, 
which amazed me because it worked the first time out of the gate.  I 
made a tarball out of it that ran to just over 27 megabytes.  Two 
directories in the tarball: put the u-sd card in a reader, mount it, and 
copy those 2 directories to the card. That's it; everything else it 
needs is done in raspi-config.
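
In other words, roughly this (the tarball name, mount point, and device
name are all illustrative, and I'm assuming the two directories unpack
straight onto the card's root):

  # u-sd card in a USB reader, showing up as /dev/sdX (assumed)
  mount /dev/sdX1 /mnt/card
  # unpack the ~27 MB tarball; its two directories land in place
  tar -C /mnt/card -xf rt-uboot-kernel.tar.gz
  umount /mnt/card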


Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: Migrating from hard drives to SSDs

2023-07-14 Thread Anssi Saari
gene heskett  writes:

> One of the things apparently missing in today's support for the arm64
> boards such as the bananapi-m5, is the lack of support for the nvme 
> memory on some of these devices. I have quite a few of them, all
> booting and running from 64G micro-sd's.  Yet these all have, soldered
> to the board, several gigs of nvme memory, more than enough to contain
> a full desktop install with all the toys, but totally unused.

https://wiki.banana-pi.org/Banana_Pi_BPI-M5 says eMMC, not NVME. Same
page has a link to a video https://www.youtube.com/watch?v=q5I6pzWCTrg
which supposedly explains how to use it to install some software. I have
no idea if you can actually boot from eMMC on those boards.
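
Whether the eMMC is even visible to Linux is easy to check from a
running system, e.g.:

  # a wired-up eMMC usually shows as a second mmcblk device
  lsblk -d -o NAME,SIZE,TYPE,MODEL

If something like mmcblk1 shows up alongside the SD card's mmcblk0, the
hardware is at least reachable; whether the boot ROM will boot from it
is a separate question.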

I only have edible bananas here but on my Raspberry Pi Compute Module 3+
the eMMC was fully usable and in fact, I have no SD card on that
system. Although the eMMC solution on these boards isn't great because
they're slow. Apparently the Foundation went cheap (or clueless) on it
which is a pity.



Re: Migrating from hard drives to SSDs

2023-07-12 Thread Jeffrey Walton
On Wed, Jul 12, 2023 at 6:40 AM gene heskett  wrote:
> [ ...]
> One of the things apparently missing in today's support for the arm64
> boards such as the bananapi-m5, is the lack of support for the nvme
> memory on some of these devices. I have quite a few of them, all booting
> and running from 64G micro-sd's.  Yet these all have, soldered to the
> board, several gigs of nvme memory, more than enough to contain a full
> desktop install with all the toys, but totally unused.
>
> So my question is, when do we get support for using it? It is reported
> to be several times faster than the u-sd's it's running from now. u-sd's
> are touted to do 100 MB/second, but generally can only do less than 20
> MB/second in actual practice. So what plans are in place to use this
> memory on the arms, that we users can look fwd to?

I don't follow. What does this have to do with raid and migrating?

It seems like it should be a new thread, but I want to make sure I am
not missing something obvious.

Jeff



Re: Migrating from hard drives to SSDs

2023-07-12 Thread gene heskett

On 7/11/23 21:39, David Christensen wrote:

On 7/11/23 13:18, Mick Ab wrote:
I am thinking of changing my storage from two 1TB hard drives in a
software RAID 1 configuration to two M.2 Nvme 1 TB SSDs. The two SSDs
would be put into a software RAID 1 configuration. Currently each hard
drive contains both the operating system and user data.

What steps would you recommend to achieve the above result and would
those steps be the quickest way?

One of the M.2 slots can operate at PCIe 4.0 and PCIe 3.0, while the
other slot can only operate at PCIe 3.0. If they are to be in a RAID 1
array, I guess that both slots should be operated at PCIe 3.0 speed.



I would backup the system configuration files and data, power down, 
remove the HDD's, install the NVMe drives, boot Debian installation 
media, do a fresh install, restore/ merge the system configuration 
files, and restore the data.



The above should be the most reliable approach and produce a "known 
good" Debian system instance.



AIUI Linux md RAID can deal with block device speed differences.


Taking a step back, you might want to re-think using two 1 TB devices in 
RAID1 for everything -- boot, swap, root, and data.  I put boot, swap, 
and root on a single 2.5" SATA SSD and keep the entire instance small 
enough to fit onto a "16 GB" device (you might want to target "32 GB", 
"64 GB", etc., if you install a lot of software).  I then put 2.5" SATA 
trayless bays in all of my computers.  This makes it easy to move OS 
instances to other machines (subject to BIOS/UEFI compatibility), to 
clone images to additional devices (USB flash drives, HDD's, SD cards), 
and to take and store images on a regular basis for disaster 
preparedness/ recovery.  I would then wipe the 1 TB HDD's and build a 
ZFS pool using the HDD's as a mirror.  A surplus of memory will help ZFS 
performance.  For further ZFS improvements, add small/ fast/ high 
endurance NVMe devices as ZFS cache and/or log devices.



David


One of the things apparently missing in today's support for the arm64 
boards such as the bananapi-m5, is the lack of support for the nvme 
memory on some of these devices. I have quite a few of them, all booting 
and running from 64G micro-sd's.  Yet these all have, soldered to the 
board, several gigs of nvme memory, more than enough to contain a full 
desktop install with all the toys, but totally unused.


So my question is, when do we get support for using it? It is reported 
to be several times faster than the u-sd's it's running from now. u-sd's 
are touted to do 100 MB/second, but generally can only do less than 20 
MB/second in actual practice. So what plans are in place to use this 
memory on the arms, that we users can look fwd to?
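
(The real figure is easy enough to measure with a raw sequential read;
the device name is assumed here:

  # drop the page cache, then read straight off the card
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/dev/mmcblk0 of=/dev/null bs=4M count=256 status=progress

Random I/O, which is most of what an OS does, will be far slower still.)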


Thank you.



Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: Migrating from hard drives to SSDs

2023-07-11 Thread David Christensen

On 7/11/23 13:18, Mick Ab wrote:

I am thinking of changing my storage from two 1TB hard drives in a software
RAID 1 configuration to two M.2 Nvme 1 TB SSDs. The two SSDs would be put
into a software RAID 1 configuration. Currently each hard drive contains
both the operating system and user data.

What steps would you recommend to achieve the above result and would those
steps be the quickest way?

One of the M.2 slots can operate at PCIe 4.0 and PCIe 3.0, while the other
slot can only operate at PCIe 3.0. If they are to be in a RAID 1 array, I
guess that both slots should be operated at PCIe 3.0 speed.



I would backup the system configuration files and data, power down, 
remove the HDD's, install the NVMe drives, boot Debian installation 
media, do a fresh install, restore/ merge the system configuration 
files, and restore the data.



The above should be the most reliable approach and produce a "known 
good" Debian system instance.



AIUI Linux md RAID can deal with block device speed differences.


Taking a step back, you might want to re-think using two 1 TB devices in 
RAID1 for everything -- boot, swap, root, and data.  I put boot, swap, 
and root on a single 2.5" SATA SSD and keep the entire instance small 
enough to fit onto a "16 GB" device (you might want to target "32 GB", 
"64 GB", etc., if you install a lot of software).  I then put 2.5" SATA 
trayless bays in all of my computers.  This makes it easy to move OS 
instances to other machines (subject to BIOS/UEFI compatibility), to 
clone images to additional devices (USB flash drives, HDD's, SD cards), 
and to take and store images on a regular basis for disaster 
preparedness/ recovery.  I would then wipe the 1 TB HDD's and build a 
ZFS pool using the HDD's as a mirror.  A surplus of memory will help ZFS 
performance.  For further ZFS improvements, add small/ fast/ high 
endurance NVMe devices as ZFS cache and/or log devices.
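
Roughly, assuming the pool name and device paths (use /dev/disk/by-id 
names in practice):

  # mirror the two wiped 1 TB spinners
  zpool create tank mirror /dev/sda /dev/sdb
  # optional: spare NVMe partitions as read cache and intent log
  zpool add tank cache /dev/nvme0n1p4
  zpool add tank log mirror /dev/nvme0n1p5 /dev/nvme1n1p5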



David




Re: Migrating from hard drives to SSDs

2023-07-11 Thread Dan Ritter
Nicolas George wrote: 
> Dan Ritter (12023-07-11):
> > mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
> > and so on for the other RAID pairs -- I'd call them md10, md11 and
> > md12, or so on.
> 
> So you… create new RAIDs on the new drives and just ignore the old data?
> That works, but that can hardly be called migrating, can it? Or maybe I
> am missing something?

A few lines later I'm pretty sure I wrote about copying the data
across.

I suppose I should have mentioned removing or repurposing the
original spinners, too.

-dsr-



Re: Migrating from hard drives to SSDs

2023-07-11 Thread Nicolas George
Dan Ritter (12023-07-11):
> mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
> and so on for the other RAID pairs -- I'd call them md10, md11 and
> md12, or so on.

So you… create new RAIDs on the new drives and just ignore the old data?
That works, but that can hardly be called migrating, can it? Or maybe I
am missing something?

Regards,

-- 
  Nicolas George




Re: Migrating from hard drives to SSDs

2023-07-11 Thread Dan Ritter
Mick Ab wrote: 
> I am thinking of changing my storage from two 1TB hard drives in a software
> RAID 1 configuration to two M.2 Nvme 1 TB SSDs. The two SSDs would be put
> into a software RAID 1 configuration. Currently each hard drive contains
> both the operating system and user data.
> 
> What steps would you recommend to achieve the above result and would those
> steps be the quickest way?

Let's say that the spinners are sda and sdb, and together they
form md0, md1 and md2.

Plug in the two new SSDs. We'll call them nvme0n1 and nvme1n1
though they might be different.

If you need /boot, EFI and/or swap partitions here, make them.
EFI can't be MD-RAIDed; /boot and swap can.

mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
and so on for the other RAID pairs -- I'd call them md10, md11 and
md12, or so on.

mkfs on your new md devices.

Figure out your bootloader and update it.

Copy over data.
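
Put together, a rough sketch of the whole sequence (partition layout, 
device names, and mount points are all assumptions; adjust to your 
setup):

  # partition both SSDs identically: EFI, /boot, root (layout assumed)
  sgdisk -n1:0:+512M -t1:ef00 -n2:0:+1G -t2:fd00 -n3:0:0 -t3:fd00 /dev/nvme0n1
  sgdisk --replicate=/dev/nvme1n1 /dev/nvme0n1
  sgdisk --randomize-guids /dev/nvme1n1

  # RAID1 for /boot and root; EFI stays un-raided on each disk
  mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
  mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3

  # filesystems
  mkfs.vfat -F32 /dev/nvme0n1p1
  mkfs.ext4 /dev/md10
  mkfs.ext4 /dev/md11

  # copy the running system (-x skips /proc, /sys, etc. on other mounts)
  mount /dev/md11 /mnt/new
  rsync -aHAXx / /mnt/new/

then chroot into /mnt/new, fix /etc/fstab and /etc/mdadm/mdadm.conf, and 
reinstall the bootloader.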

> One of the M.2 slots can operate at PCIe 4.0 and PCIe 3.0, while the other
> slot can only operate at PCIe 3.0. If they are to be in a RAID 1 array, I
> guess that both slots should be operated at PCIe 3.0 speed.

No need. Or you can just buy 2 PCIe 3.0 SSDs.

-dsr-



Re: Migrating from hard drives to SSDs

2023-07-11 Thread Nicolas George
Mick Ab (12023-07-11):
> I am thinking of changing my storage from two 1TB hard drives in a software
> RAID 1 configuration to two M.2 Nvme 1 TB SSDs. The two SSDs would be put
> into a software RAID 1 configuration. Currently each hard drive contains
> both the operating system and user data.
> 
> What steps would you recommend to achieve the above result and would those
> steps be the quickest way ?

Plug in the new drives and add them to the RAID. While you are waiting
for the synchronization, install the bootloader on the new drives,
preferably both. Once the RAID is synced, remove the old drives from it.
Shut down and finish the job.
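
As a rough sketch of that, assuming the existing mirror is /dev/md0 on
sda1 and sdb1, and matching partitions already exist on the SSDs:

  # add the SSD partitions, then grow so they sync as active members
  mdadm /dev/md0 --add /dev/nvme0n1p1 /dev/nvme1n1p1
  mdadm --grow /dev/md0 --raid-devices=4
  # watch the resync
  cat /proc/mdstat
  # once synced, drop the spinners and shrink back to a two-way mirror
  mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  mdadm --grow /dev/md0 --raid-devices=2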

> One of the M.2 slots can operate at PCIe 4.0 and PCIe 3.0, while the other
> slot can only operate at PCIe 3.0. If they are to be in a RAID 1 array, I
> guess that both slots should be operated at PCIe 3.0 speed.

Let the kernel decide.

Regards,

-- 
  Nicolas George



Re: Migrating from hard drives to SSDs

2023-07-11 Thread Jeffrey Walton
On Tue, Jul 11, 2023 at 4:18 PM Mick Ab  wrote:
>
> I am thinking of changing my storage from two 1TB hard drives in a software 
> RAID 1 configuration to two M.2 Nvme 1 TB SSDs. The two SSDs would be put 
> into a software RAID 1 configuration. Currently each hard drive contains both 
> the operating system and user data.
>
> What steps would you recommend to achieve the above result and would those 
> steps be the quickest way?
>
> One of the M.2 slots can operate at PCIe 4.0 and PCIe 3.0, while the other 
> slot can only operate at PCIe 3.0. If they are to be in a RAID 1 array, I 
> guess that both slots should be operated at PCIe 3.0 speed.

Are these the same machine or two different machines?

If this were me performing the task... I would add an external drive
on the old machine, boot to Clonezilla, and use Clonezilla to
backup/clone to the external drive. Then I would plug the external
drive into the new machine with the SSDs, and use Clonezilla to write
the backup from the external drive to the new machine.

Jeff



Migrating from hard drives to SSDs

2023-07-11 Thread Mick Ab
I am thinking of changing my storage from two 1TB hard drives in a software
RAID 1 configuration to two M.2 Nvme 1 TB SSDs. The two SSDs would be put
into a software RAID 1 configuration. Currently each hard drive contains
both the operating system and user data.

What steps would you recommend to achieve the above result and would those
steps be the quickest way?

One of the M.2 slots can operate at PCIe 4.0 and PCIe 3.0, while the other
slot can only operate at PCIe 3.0. If they are to be in a RAID 1 array, I
guess that both slots should be operated at PCIe 3.0 speed.