Re: Raid Array and Changing Motherboard

2023-07-02 Thread David Christensen

On 7/2/23 13:11, Mick Ab wrote:

On 19:58, Sun, 2 Jul 2023 David Christensen 

On 7/2/23 10:23, Mick Ab wrote:

I have a software RAID 1 array of two hard drives. Each of the two disks
contains the Debian operating system and user data.

I am thinking of changing the motherboard because of problems that might be
connected to the current motherboard. The new motherboard would be the same
make and model as the current motherboard.

Would I need to recreate the RAID 1 array for the new motherboard, i.e.
re-initialise the current RAID 1 disks and repopulate the disks with data,
or can I just set up the software RAID on the new motherboard without
affecting the current data on the RAID 1 drives?



Shut down the machine.  Boot using a live USB stick.  Type notes into a
text file.  Use script(1) to record console sessions.  Use dd(1) to take
an image of each disk to an external HDD (consider piping dd(1) to
gzip(1) to save space).  Shut down.  Take note of which HDD is cabled to
which motherboard port.  Replace the motherboard.  Connect the HDDs to the
same motherboard ports.  Boot.  It should "just work".


Post details if you have problems.


David




Thanks for your reply.

I don't quite understand what you are proposing.



Backup the RAID member drives first, then swap motherboards.



Do you mean the external HDDs would form a new RAID 1 array for the new
motherboard?



No.  For each existing RAID member drive, copy its blocks to a file on 
the external HDD.
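
For example, a minimal sketch of that imaging step (the device names and
the /mnt/backup mount point are assumptions; adjust them to your system):

  mount /dev/sdX1 /mnt/backup     # the external HDD
  dd if=/dev/sda bs=1M status=progress | gzip -c > /mnt/backup/sda.img.gz
  dd if=/dev/sdb bs=1M status=progress | gzip -c > /mnt/backup/sdb.img.gz
  # restore later, only if something goes wrong:
  # gunzip -c /mnt/backup/sda.img.gz | dd of=/dev/sda bs=1M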




What would happen to the original RAID 1 disks?



They stay in the chassis and are connected to the same ports on the new 
motherboard.



David



Re: Raid Array and Changing Motherboard

2023-07-02 Thread Alexander V. Makartsev

On 02.07.2023 22:23, Mick Ab wrote:


I have a software RAID 1 array of two hard drives. Each of the two 
disks contains the Debian operating system and user data.


I am thinking of changing the motherboard because of problems that 
might be connected to the current motherboard. The new motherboard 
would be the same make and model as the current motherboard.


Would I need to recreate the RAID 1 array for the new motherboard, i.e.
re-initialise the current RAID 1 disks and repopulate the disks with data,
or can I just set up the software RAID on the new motherboard without
affecting the current data on the RAID 1 drives?


It's hard to tell what exactly will happen, because it depends on the
BIOS/Firmware of the motherboard, even though there is a special
metadata record on each disk which contains the role of the disk and the
configuration of the RAID array. I see two possible outcomes:
1. The two disks connected to the new motherboard will be recognized by
the BIOS/Firmware right away after you switch the controller mode from
AHCI to RAID, and will appear as the existing RAID1 array.
2. The two disks connected to the new motherboard will appear as two
normal disks and won't be recognized as a RAID1 array, prompting you to
create/initialise an array.


In case #2 the data on the disks will be lost, so before you do any
manipulations, make and verify backups.
Usually BIOS RAID software is very basic and won't let you preserve the
current data on the disks, or select a role (primary/secondary) for the
disks, or create an incomplete RAID1 array using only one disk so that
you can copy the data over from the second disk.


If you happen to have two other old disks on hand, I suggest you
experiment with those on the current motherboard, i.e. create an
additional new RAID1 array and see whether that array stays intact after
a simulated disk "transfer".
You can simulate the transfer by powering off the computer, disconnecting
the test disks, and checking whether the test RAID1 array is still listed.
If the test array is still listed and reports the two test disks as
missing, then the array information is also recorded in the BIOS and that
information won't be present on a new motherboard.
However, if there is no information about the test array, then it should
reappear when you reconnect the test disks, and the data on the test
disks should be intact.


There may also be a manual available from the motherboard's manufacturer
which could give some clues about what is possible and what would happen.



--
With kindest regards, Alexander.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄

Re: Raid Array and Changing Motherboard

2023-07-02 Thread Mick Ab
On 19:58, Sun, 2 Jul 2023 David Christensen 
> On 7/2/23 10:23, Mick Ab wrote:
> > I have a software RAID 1 array of two hard drives. Each of the two disks
> > contains the Debian operating system and user data.
> >
> > I am thinking of changing the motherboard because of problems that might
> > be connected to the current motherboard. The new motherboard would be the
> > same make and model as the current motherboard.
> >
> > Would I need to recreate the RAID 1 array for the new motherboard, i.e.
> > re-initialise the current RAID 1 disks and repopulate the disks with data,
> > or can I just set up the software RAID on the new motherboard without
> > affecting the current data on the RAID 1 drives?
>
>
> Shut down the machine.  Boot using a live USB stick.  Type notes into a
> text file.  Use script(1) to record console sessions.  Use dd(1) to take
> an image of each disk to an external HDD (consider piping dd(1) to
> gzip(1) to save space).  Shut down.  Take note of which HDD is cabled to
> which motherboard port.  Replace the motherboard.  Connect the HDDs to
> the same motherboard ports.  Boot.  It should "just work".
>
>
> Post details if you have problems.
>
>
> David
>
>

Thanks for your reply.

I don't quite understand what you are proposing.

Do you mean the external HDDs would form a new RAID 1 array for the new
motherboard?

What would happen to the original RAID 1 disks?


Re: Raid Array and Changing Motherboard

2023-07-02 Thread David Christensen

On 7/2/23 10:23, Mick Ab wrote:

I have a software RAID 1 array of two hard drives. Each of the two disks
contains the Debian operating system and user data.

I am thinking of changing the motherboard because of problems that might be
connected to the current motherboard. The new motherboard would be the same
make and model as the current motherboard.

Would I need to recreate the RAID 1 array for the new motherboard, i.e.
re-initialise the current RAID 1 disks and repopulate the disks with data,
or can I just set up the software RAID on the new motherboard without
affecting the current data on the RAID 1 drives?



Shut down the machine.  Boot using a live USB stick.  Type notes into a
text file.  Use script(1) to record console sessions.  Use dd(1) to take
an image of each disk to an external HDD (consider piping dd(1) to
gzip(1) to save space).  Shut down.  Take note of which HDD is cabled to
which motherboard port.  Replace the motherboard.  Connect the HDDs to the
same motherboard ports.  Boot.  It should "just work".



Post details if you have problems.


David




Re: Raid Array and Changing Motherboard

2023-07-02 Thread Charles Curley
On Sun, 2 Jul 2023 18:23:31 +0100
Mick Ab  wrote:

> I am thinking of changing the motherboard because of problems that
> might be connected to the current motherboard. The new motherboard
> would be the same make and model as the current motherboard.
>
> Would I need to recreate the RAID 1 array for the new motherboard, i.e.
> re-initialise the current RAID 1 disks and repopulate the disks with
> data, or can I just set up the software RAID on the new motherboard
> without affecting the current data on the RAID 1 drives?

I believe that will depend on how you built the RAID array.

That assumes the new motherboard supports your current hard drives and
peripheral cards (as it should if it is the same make and model, and
the manufacturer did nothing stupid).

If you used mdadm (Linux software RAID), you should have no problem.
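
A quick sanity check after the swap might look like this (a sketch; md0,
sda1 and sdb1 are example names):

  cat /proc/mdstat                      # array assembled, both members [UU]?
  mdadm --detail /dev/md0               # array state and member devices
  mdadm --examine /dev/sda1 /dev/sdb1   # per-member superblock metadata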

Some hardware RAID systems on a card should be OK.

RAID in the firmware on the motherboard should be OK, unless the
manufacturer made an incompatible upgrade.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: RAID-1 and disk I/O

2021-07-18 Thread rhkramer
On Sunday, July 18, 2021 09:37:53 AM David wrote:
> On Sun, 18 Jul 2021 at 21:08,  wrote:
> > Interesting -- not surprising, makes sense, but something (for me, at
> > least) to keep in mind -- probably not a good idea to run on an old
> > drive that hasn't been backed up.
> 
> Sorry if my language was unclear. If you read the manpage context, it's
> explaining that drives can be tested without taking them out of service.
> So performance is only "degraded" while the test is running, compared
> to normal operation, because the drive is also busy testing itself.
> It doesn't mean permanent degradation.

Ahh, ok -- thanks for the clarification!



Re: RAID-1 and disk I/O

2021-07-18 Thread David Christensen

On 7/18/21 2:29 PM, Urs Thuermann wrote:

David Christensen  writes:


You should consider upgrading to Debian 10 -- more people run that and
you will get better support.


It's on my TODO list.  As well as upgrading the very old hardware.
Currently, it's a Gigabyte P35-DS3L with an Intel Core2Duo E8400 CPU
and 8 GB RAM.  It's only my private home server and performance is
still sufficient but I hope to reduce power consumption considerably.



I ran Debian on desktop hardware as a SOHO server for many years, but 
grew concerned about bit rot.  So, I migrated to low-end enterprise 
hardware and FreeBSD with ZFS.  The various SATA battles made things 
tougher than they should have been, but I fixed several problems and 
everything is now stable.




# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)



Why limit unified context to 20 lines?  You may be missing information
(I have not counted the differences, below).  I suggest '-U' alone.


20 lines are just enough to get it all.  You can see this because there
are fewer than 20 context lines at the beginning and end of the diff
and only one hunk.  GNU diff doesn't allow -U without a line count.



Sorry -- I do not use the -U option and misread the diff(1) man page.



Yes, the old Gigabyte mainboard has only 3 Gbps ports.  I wasn't aware
of this but have just looked up the specs.



SATA2 should be plenty for Seagate ST2000DM001 drives.  Two PCIe x1
SATA3 HBAs or one PCIe x2+ SATA3 HBA might improve performance slightly
under specific workloads, but I would just stay with the motherboard SATA2
ports (unless you find problems with them).




And the server is about 8 years old, initially with only 1 hard drive
which crashed while my backup was too small to hold everything.  This
meant a lot of work (and quite some money) to get everything running
again and to recover data which wasn't in the backup.



I think we have all been burned by trying to "make do" with inadequate 
backup devices.  I threw money at the problem after my last significant 
data loss, and now have backups several drives deep.  The funny thing 
is: when you're prepared, the gremlins know it and stay away.  ;-)




The smartctl(8) RAW_VALUE column is tough to read.  Sometimes it looks
like an integer.  Other times, it looks like a bitmap or big-endian/
little-endian mix-up.  The VALUE column is easier.  Both 119 and 117
are greater than 100, so I would not worry.


Hm, in some cases the RAW_VALUE somehow looked "more readable", and the
VALUE looked suspicious to me.  And here I found the explanation in the
smartctl(8) man page:

 Each Attribute has a "Raw" value, printed under the heading
 "RAW_VALUE", and a "Normalized" value printed under the
 heading "VALUE".
 [...]
 Each vendor uses their own algorithm to convert this "Raw"
 value to a "Normalized" value in the range from 1 to 254.
 [...]
 So to summarize: the Raw Attribute values are the ones that
 might have a real physical interpretation, such as
 "Temperature Celsius", "Hours", or "Start-Stop Cycles".



Thank you for the clarification.  As usual, I am guilty of inadequate 
RTFM...




Thanks for all your answers, hints, and suggestions.  With that, and
reading the man page more carefully (mostly motivated by your and
others' answers), I learned quite a lot of new things about SMART and
how to use/read it.



YW.  I am learning too.


David



Re: RAID-1 and disk I/O

2021-07-18 Thread Urs Thuermann
David Christensen  writes:

> You should consider upgrading to Debian 10 -- more people run that and
> you will get better support.

It's on my TODO list.  As well as upgrading the very old hardware.
Currently, it's a Gigabyte P35-DS3L with an Intel Core2Duo E8400 CPU
and 8 GB RAM.  It's only my private home server and performance is
still sufficient but I hope to reduce power consumption considerably.

> > the storage setup is as follows:
> > Two identical SATA disks with 1 partition on each drive spanning the
> > whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
> > /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
> 
> 
> ext4?  That lacks integrity checking.
> 
> 
> btrfs?  That has integrity checking, but requires periodic balancing.

Mostly ext4 for / /var /var/spool/news /usr /usr/local and /home file
systems.  The /usr/src file system is btrfs and some test file systems
also.  There are also 4 VMs, FreeBSD and NetBSD with their partitions
and slices and ufs file systems, one Linux VM with ext4 and one very
old Linux VM (kernel 2.4) with its own LVM in two LVs and 10 ext3 file
systems.

> Are both your operating system and your data on this array?  I always
> use a single, small solid-state device for the system drive, configure
> my hardware so that it is /dev/sda, and use separate drive(s) for data
> (/dev/sdb, /dev/sdc, etc.).  Separating these concerns simplifies
> system administration and disaster preparedness/ recovery.

Yes, everything is in the LVs on /dev/md0.  Except for some external
USB hard drives for backup (4 TB) and some other seldom-used stuff
(e.g. an NTFS drive with some old data from my wife's laptop; I cannot
persuade her to use Linux).

> > but I found the following with
> > smartctl:
> > --
> > # diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)
> 
> 
> Why limit unified context to 20 lines?  You may be missing information
> (I have not counted the differences, below).  I suggest '-U' alone.

20 lines are just enough to get it all.  You can see this because there
are fewer than 20 context lines at the beginning and end of the diff
and only one hunk.  GNU diff doesn't allow -U without a line count.

> You have a SATA transfer speed mismatch -- 6.0 Gbps drives running at
> 3.0 Gbps.  If your ports are 3 Gbps, fine.  If your ports are 6 Gbps,
> you have bad ports, cables, racks, docks, trays, etc..

Yes, the old Gigabyte mainboard has only 3 Gbps ports.  I wasn't aware
of this but have just looked up the specs.

> Seek_Error_Rate indicates those drives have seen better days, but are
> doing their job.
> 
> 
> Power_On_Hours indicates those drives have seen lots of use.

> Power_Cycle_Count indicates that the machine runs 24x7 for long
> periods without rebooting.

Yes, the server runs 24/7 except for kernel updates, and a power
outage 2 weeks ago (my UPS batteries also need replacement... )-:

And the server is about 8 years old, initially with only 1 hard drive
which crashed while my backup was too small to hold everything.  This
meant a lot of work (and quite some money) to get everything running
again and to recover data which wasn't in the backup.

This was almost 6 years ago, and I then bought 2 Seagate Barracuda
drives for RAID-1 and a larger backup drive.  One of the two Seagate
drives is still running and is /dev/sda.  The other drive, /dev/sdb,
crashed after only 9.5 months of operation and I got it replaced by
the dealer.  This was when I loved my decision to set up RAID-1.  With
no downtime I pulled the failed drive, returned it to the dealer, ran
the system a week or two with only one drive, got the replacement
drive from the dealer, hot-plugged it in, synced, and was happy :-)
Only a short time after this I also bought a 3.5" removable mounting
frame for 2 drives to swap drives even more easily.

> Runtime_Bad_Block looks acceptable.

> End-to-End_Error and Reported_Uncorrect look perfect.  The drives
> should not have corrupted or lost any data (other hardware and/or
> events may have).

OK.

> Airflow_Temperature_Cel and Temperature_Celsius are higher than I
> like. I suggest that you dress cables, add fans, etc., to improve
> cooling.

OK, I'll have a look at that.

> UDMA_CRC_Error_Count for /dev/sda looks worrisome, both compared to
> /dev/sdb and compared to reports for my drives.
> 
> 
> Total_LBAs_Written for /dev/sda is almost double that of
> /dev/sdb. Where those drives both new when put into RAID1?

Yes, see above.  But /dev/sdb was replaced after 9.5 months, so it has
a shorter lifetime.  Also, /dev/sda began to fail every couple of
months about a year ago.  I could always fix this by pulling the
drive, re-inserting it, and re-syncing it.  This also caused more
write traffic to /dev/sda.

> > SMART Extended Self-test Log Version: 1 (1 sectors)
> >   Num  Test_DescriptionStatus  Remaining  
> > LifeTime(hours)  LBA_of_first_error
> > -# 1  Short 

Re: RAID-1 and disk I/O

2021-07-18 Thread David Christensen

On 7/18/21 2:16 AM, Reco wrote:

Hi.

On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:

But much more noticable is the difference of data reads of the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
for this, dmesg didn't give me anything


Getting meaningful information from system monitoring tools is
non-trivial.  Perhaps 'iostat 600' concurrent with a run of bonnie++.
Or, 'iostat 3600 24' during normal operations.  Or, 'iostat' dumped to
a time-stamped output file run once an hour by a cron job.


iostat belongs to sysstat package.
sysstat provides sar, which, by default, gathers every detail of the
host resource utilization and a little more once per 10 minutes.

There's little need for the kludges you're describing, for one can simply
invoke "sar -pd -f /var/log/sysstat/sa...".

Reco



Yes, sar(1) looks useful.  :-)


David



Re: RAID-1 and disk I/O

2021-07-18 Thread mick crane

On 2021-07-18 14:37, David wrote:

On Sun, 18 Jul 2021 at 21:08,  wrote:

On Saturday, July 17, 2021 09:30:56 PM David wrote:



> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.


Interesting -- not surprising, makes sense, but something (for me, at least)
to keep in mind -- probably not a good idea to run on an old drive that hasn't
been backed up.


Sorry if my language was unclear. If you read the manpage context, it's
explaining that drives can be tested without taking them out of service.
So performance is only "degraded" while the test is running, compared
to normal operation, because the drive is also busy testing itself.
It doesn't mean permanent degradation.


I admit I had to look twice at "running a test". "What!". Oh, "a running test".


mick
--
Key ID4BFEBB31



Re: RAID-1 and disk I/O

2021-07-18 Thread David
On Sun, 18 Jul 2021 at 21:08,  wrote:
> On Saturday, July 17, 2021 09:30:56 PM David wrote:

> > The 'smartctl' manpage explains how to run and abort self-tests.
> > It also says that a running test can degrade the performance of the drive.

> Interesting -- not surprising, makes sense, but something (for me, at least)
> to keep in mind -- probably not a good idea to run on an old drive that hasn't
> been backed up.

Sorry if my language was unclear. If you read the manpage context, it's
explaining that drives can be tested without taking them out of service.
So performance is only "degraded" while the test is running, compared
to normal operation, because the drive is also busy testing itself.
It doesn't mean permanent degradation.



Re: RAID-1 and disk I/O

2021-07-18 Thread David Christensen

On 7/17/21 6:30 PM, David wrote:

On Sun, 18 Jul 2021 at 07:03, David Christensen
 wrote:

On 7/17/21 5:34 AM, Urs Thuermann wrote:



On my server running Debian stretch,
the storage setup is as follows:
Two identical SATA disks with 1 partition on each drive spanning the
whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.



--
# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)



-  9 Power_On_Hours  -O--CK   042   042   000-51289
+  9 Power_On_Hours  -O--CK   051   051   000-43740



   SMART Extended Self-test Log Version: 1 (1 sectors)
   Num  Test_DescriptionStatus  Remaining  LifeTime(hours)  
LBA_of_first_error
-# 1  Short offline   Completed without error   00% 21808 -
+# 1  Short offline   Completed without error   00% 14254 -


sda was last self-tested at 21808 hours and is now at 51289.
sdb was last self-tested at 14254 hours and is now at 43740.
And those were short (a couple of minutes) self-tests only.
So these drives have apparently only ever run one short self-test.


Thank you for the clarification.  :-)


David



Re: RAID-1 and disk I/O

2021-07-18 Thread rhkramer
On Saturday, July 17, 2021 09:30:56 PM David wrote:
> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.

Interesting -- not surprising, makes sense, but something (for me, at least) 
to keep in mind -- probably not a good idea to run on an old drive that hasn't 
been backed up.



Re: RAID-1 and disk I/O

2021-07-18 Thread Reco
Hi.

On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:
> > But much more noticable is the difference of data reads of the two
> > disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
> > from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
> > for this, dmesg didn't give me anything
> 
> Getting meaningful information from system monitoring tools is
> non-trivial.  Perhaps 'iostat 600' concurrent with a run of bonnie++.
> Or, 'iostat 3600 24' during normal operations.  Or, 'iostat' dumped to
> a time-stamped output file run once an hour by a cron job.

iostat belongs to sysstat package.
sysstat provides sar, which, by default, gathers every detail of the
host resource utilization and a little more once per 10 minutes.

There's little need for the kludges you're describing, for one can simply
invoke "sar -pd -f /var/log/sysstat/sa...".

Reco



Re: RAID-1 and disk I/O

2021-07-17 Thread David
On Sun, 18 Jul 2021 at 07:03, David Christensen
 wrote:
> On 7/17/21 5:34 AM, Urs Thuermann wrote:

> > On my server running Debian stretch,
> > the storage setup is as follows:
> > Two identical SATA disks with 1 partition on each drive spanning the
> > whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
> > /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.

> > --
> > # diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)

> > -  9 Power_On_Hours  -O--CK   042   042   000-51289
> > +  9 Power_On_Hours  -O--CK   051   051   000-43740

> >   SMART Extended Self-test Log Version: 1 (1 sectors)
> >   Num  Test_DescriptionStatus  Remaining  
> > LifeTime(hours)  LBA_of_first_error
> > -# 1  Short offline   Completed without error   00% 21808   
> >   -
> > +# 1  Short offline   Completed without error   00% 14254   
> >   -

sda was last self-tested at 21808 hours and is now at 51289.
sdb was last self-tested at 14254 hours and is now at 43740.
And those were short (a couple of minutes) self-tests only.
So these drives have apparently only ever run one short self-test.

I am a home user, and I run long self-tests regularly using
# smartctl -t long 
In my opinion these drives are due for a long self-test.
I have no idea if this will add any useful information,
but there's an obvious way to find out :)

A bit more info on self-tests:
https://serverfault.com/questions/732423/what-does-smart-testing-do-and-how-does-it-work

The 'smartctl' manpage explains how to run and abort self-tests.
It also says that a running test can degrade the performance of the drive.



Re: RAID-1 and disk I/O

2021-07-17 Thread David Christensen

On 7/17/21 5:34 AM, Urs Thuermann wrote:
On my server running Debian stretch, 



You should consider upgrading to Debian 10 -- more people run that and 
you will get better support.



I migrated to FreeBSD.



the storage setup is as follows:
Two identical SATA disks with 1 partition on each drive spanning the
whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.



ext4?  That lacks integrity checking.


btrfs?  That has integrity checking, but requires periodic balancing.


I use ZFS.  That has integrity checking.  It is wise to do periodic 
scrubs to check for problems.



Are both your operating system and your data on this array?  I always 
use a single, small solid-state device for the system drive, configure 
my hardware so that it is /dev/sda, and use separate drive(s) for data 
(/dev/sdb, /dev/sdc, etc.).  Separating these concerns simplifies system 
administration and disaster preparedness/ recovery.




The disk I/O shows very different usage of the two SATA disks:

 # iostat | grep -E '^[amDL ]|^sd[ab]'
 Linux 5.13.1 (bit)  07/17/21_x86_64_(2 CPU)
 avg-cpu:  %user   %nice %system %iowait  %steal   %idle
3.780.002.270.860.00   93.10
 Device:tpskB_read/skB_wrtn/skB_readkB_wrtn
 sdb   4.5472.1661.25   54869901   46577068
 sda   3.7235.5361.25   27014254   46577068
 md0   5.53   107.1957.37   81504323   43624519
 
The data written to the SATA disks is about 7% = (47 GB - 44 GB) / 44 GB
more than to the RAID device /dev/md0.  Is that the expected overhead
for RAID-1 meta data?

But much more noticable is the difference of data reads of the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
for this, dmesg didn't give me anything 



Getting meaningful information from system monitoring tools is 
non-trivial.  Perhaps 'iostat 600' concurrent with a run of bonnie++. 
Or, 'iostat 3600 24' during normal operations.  Or, 'iostat' dumped to a 
time-stamped output file run once an hour by a cron job.  Beware of 
using multiple system monitoring tools at the same time -- they may 
access the same kernel data structures and step on each other.
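
As a sketch of the cron variant (the path and schedule are assumptions;
iostat comes from the sysstat package):

  # /etc/cron.d/iostat-hourly
  0 * * * *  root  /usr/bin/iostat -x >> /var/log/iostat-$(date +\%F).log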




but I found the following with
smartctl:

--
# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)



Why limit unified context to 20 lines?  You may be missing information 
(I have not counted the differences, below).  I suggest '-U' alone.




--- /dev/fd/63  2021-07-17 12:09:00.425352672 +0200
+++ /dev/fd/62  2021-07-17 12:09:00.425352672 +0200
@@ -1,165 +1,164 @@
  smartctl 6.6 2016-05-31 r4324 [x86_64-linux-5.13.1] (local build)
  Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
  
  === START OF INFORMATION SECTION ===

  Model Family: Seagate Barracuda 7200.14 (AF)



I burned up both old desktop drives and new enterprise drives when I put
them into a server (Samba, CVS) for my SOHO network and ran them 24x7.
As my arrays had only one redundant drive (e.g. two drives in RAID1,
three drives in RAID5), I had the terrifying realization that I was at
risk of losing everything when a drive failed and I had not replaced it
yet.  I upgraded to all enterprise drives, bought a spare enterprise
drive and put it on the shelf, built another server, and now replicate
periodically to the second server and to tray-mounted old desktop drives
used like backup tapes (and rotated on/off site).  I should probably put
the spare drive into the live server and set it up as a hot spare.




  Device Model: ST2000DM001-1ER164
-Serial Number:W4Z171HL
-LU WWN Device Id: 5 000c50 07d3ebd67
+Serial Number:Z4Z2M4T1
+LU WWN Device Id: 5 000c50 07b21e7db
  Firmware Version: CC25
  User Capacity:2,000,397,852,160 bytes [2.00 TB]
  Sector Sizes: 512 bytes logical, 4096 bytes physical
  Rotation Rate:7200 rpm
  Form Factor:  3.5 inches
  Device is:In smartctl database [for details use: -P show]
  ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
  SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)



You have a SATA transfer speed mismatch -- 6.0 Gbps drives running at 
3.0 Gbps.  If your ports are 3 Gbps, fine.  If your ports are 6 Gbps, 
you have bad ports, cables, racks, docks, trays, etc..




  Local Time is:Sat Jul 17 12:09:00 2021 CEST
  SMART support is: Available - device has SMART capability.
  SMART support is: Enabled
  AAM feature is:   Unavailable
  APM level is: 254 (maximum performance)
  Rd look-ahead is: Enabled
  Write cache is:   Enabled
  ATA Security is:  Disabled, NOT FROZEN [SEC1]
  Wt Cache Reorder: Unavailable
  
  === START OF 

Re: RAID-1 and disk I/O

2021-07-17 Thread Andy Smith
Hi Urs,

Your plan to change the SATA cable seems wise - your various error
rates are higher than I have normally seen.

Also worth bearing in mind that Linux MD RAID 1 will satisfy all
read IO for a given operation from one device in the mirror. If
you have processes that do occasional big reads then by chance those
can end up being served by the same device leading to a big
disparity in per-device LBAs read.

You can do RAID-10 (even on 2 or 3 devices), which will stripe data
at the chunk size, resulting in even a single read operation being
striped across multiple devices, though overall this may not be more
performant than RAID-1, especially if your devices were
non-rotational. You would have to measure.
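
For reference, a sketch of creating such a 2-device RAID-10 (example
device names; the "far" layout shown here is one common choice for
read throughput):

  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
      /dev/sda1 /dev/sdb1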

I don't know about the write overhead you are seeing.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: RAID-1 and disk I/O

2021-07-17 Thread Bob Weber

On 7/17/21 08:34, Urs Thuermann wrote:

Here, the noticable lines are IMHO

 Raw_Read_Error_Rate (208245592 vs. 117642848)
 Command_Timeout (8 14 17 vs. 0 0 0)
 UDMA_CRC_Error_Count(11058 vs. 29)

Do these numbers indicate a serious problem with my /dev/sda drive?
And is it a disk problem or a transmission problem?
UDMA_CRC_Error_Count sounds like a cable problem for me, right?

BTW, for a year so I had problems with /dev/sda every couple of month,
where the kernel set the drive status in the RAID array to failed.  I
could always fix the problem by hot-plugging out the drive, wiggling
the SATA cable, re-inserting and re-adding the drive (without any
impact on the running server).  Now, I haven't seen the problem for
quite a while.  My suspect is that the cable is still not working very
good, but failures are not often enough to set the drive to "failed"
status.

urs

I switched from Seagate to WD Red years ago since I couldn't get the Seagates
to last more than a year or so.  I have one WD that is 6.87 years old with no
errors, well past the 5-year life expectancy.  In recent years WD has stirred
up a marketing controversy with their Red drives.  See:


https://arstechnica.com/gadgets/2020/06/western-digital-adds-red-plus-branding-for-non-smr-hard-drives/

So be careful to get the Pro version if you decide to try WD. I use the 
WD4003FFBX (4T) drives (Raid 1) and have them at 2.8 years running 24/7 with no 
problems.


If you value your data, get another drive NOW... they are already 5 and 5.8
years old!  Add it to the array, let it settle in (sync), and see what happens.
I hope your existing array can hold together long enough to add a 3rd drive.
I would have replaced those drives long ago, given all the errors reported.
You might want to get new cables also, since you have had problems in the past.
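
A sketch of adding that third drive as an extra mirror (assuming the array
is /dev/md0 and the new drive's partition is /dev/sdc1):

  mdadm /dev/md0 --add /dev/sdc1
  mdadm --grow /dev/md0 --raid-devices=3
  cat /proc/mdstat          # watch the resync progress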


I also run self tests weekly to make sure the drives are ok.  I run smartctl -a 
daily also.  I also run backuppc on a separate server to get backups of 
important data.


There are some programs in /usr/share/mdadm that can check an array but I would 
wait until you have a new drive added to the array before testing the array.  
Here is the warning that comes with another script I found:




DATA LOSS MAY HAVE OCCURRED.

This condition may have been caused by one or more of the following events:

. A LEGITIMATE write to a memory mapped file or swap partition backed by a
    RAID1 (and only a RAID1) device - see the md(4) man page for details.

. A power failure when the array was being written to.

. Data corruption by a hard disk drive, drive controller, cable etc.

. A kernel bug in the md or storage subsystems etc.

. An array being forcibly created in an inconsistent state using --assume-clean

This count is updated when the md subsystem carries out a 'check' or
'repair' action.  In the case of 'repair' it reflects the number of
mismatched blocks prior to carrying out the repair.

Once you have fixed the error, carry out a 'check' action to reset the count
to zero.

See the md (section 4) manual page, and the following URL for details:

https://raid.wiki.kernel.org/index.php/Linux_Raid#Frequently_Asked_Questions_-_FAQ

--

The problem is that if a mismatch count occurs, then which drive (RAID 1) is
the correct one?  I also run programs like debsums to check programs after an
update, so I know there is no bit rot in important programs, as explained above.
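
For reference, a sketch of triggering the 'check' action mentioned above
and reading the resulting count (md0 is an example name):

  echo check > /sys/block/md0/md/sync_action
  cat /sys/block/md0/md/mismatch_cnt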


Hope this helps.

--



*...Bob*

Re: RAID-1 and disk I/O

2021-07-17 Thread Nicholas Geovanis
I'm going to echo your final thought there: Replace the SATA cables with 2
NEW ones of the same model. Then see how it goes, meaning rerun the tests
you just ran. If possible, try to make the geometries of the cables as
similar as you can: roughly same (short?) lengths, roughly as straight and
congruent as you are able.

Keep in mind that the minor flaws on the drive surfaces are different, each
drive from the other. The list of known bad blocks will be different from
one drive to the other and that can affect performance of the filesystem
built on it.

On Sat, Jul 17, 2021, 7:42 AM Urs Thuermann  wrote:

> On my server running Debian stretch, the storage setup is as follows:
> Two identical SATA disks with 1 partition on each drive spanning the
> whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
> /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
>
> The disk I/O shows very different usage of the two SATA disks:
>
> # iostat | grep -E '^[amDL ]|^sd[ab]'
> Linux 5.13.1 (bit)  07/17/21_x86_64_(2 CPU)
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>3.780.002.270.860.00   93.10
> Device:tpskB_read/skB_wrtn/skB_readkB_wrtn
> sdb   4.5472.1661.25   54869901   46577068
> sda   3.7235.5361.25   27014254   46577068
> md0   5.53   107.1957.37   81504323   43624519
>
> The data written to the SATA disks is about 7% = (47 GB - 44 GB) / 44 GB
> more than to the RAID device /dev/md0.  Is that the expected overhead
> for RAID-1 meta data?
>
> But much more noticable is the difference of data reads of the two
> disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
> from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
> for this, dmesg didn't give me anything but I found the following with
> smartctl:
>
>
> --
> # diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)
> --- /dev/fd/63  2021-07-17 12:09:00.425352672 +0200
> +++ /dev/fd/62  2021-07-17 12:09:00.425352672 +0200
> @@ -1,165 +1,164 @@
>  smartctl 6.6 2016-05-31 r4324 [x86_64-linux-5.13.1] (local build)
>  Copyright (C) 2002-16, Bruce Allen, Christian Franke,
> www.smartmontools.org
>
>  === START OF INFORMATION SECTION ===
>  Model Family: Seagate Barracuda 7200.14 (AF)
>  Device Model: ST2000DM001-1ER164
> -Serial Number:W4Z171HL
> -LU WWN Device Id: 5 000c50 07d3ebd67
> +Serial Number:Z4Z2M4T1
> +LU WWN Device Id: 5 000c50 07b21e7db
>  Firmware Version: CC25
>  User Capacity:2,000,397,852,160 bytes [2.00 TB]
>  Sector Sizes: 512 bytes logical, 4096 bytes physical
>  Rotation Rate:7200 rpm
>  Form Factor:  3.5 inches
>  Device is:In smartctl database [for details use: -P show]
>  ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
>  SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
>  Local Time is:Sat Jul 17 12:09:00 2021 CEST
>  SMART support is: Available - device has SMART capability.
>  SMART support is: Enabled
>  AAM feature is:   Unavailable
>  APM level is: 254 (maximum performance)
>  Rd look-ahead is: Enabled
>  Write cache is:   Enabled
>  ATA Security is:  Disabled, NOT FROZEN [SEC1]
>  Wt Cache Reorder: Unavailable
>
>  === START OF READ SMART DATA SECTION ===
>  SMART overall-health self-assessment test result: PASSED
>
>  General SMART Values:
>  Offline data collection status:  (0x82)Offline data collection
> activity
> was completed without error.
> Auto Offline Data Collection:
> Enabled.
>  Self-test execution status:  (   0)The previous self-test
> routine completed
> without error or no self-test has
> ever
> been run.
>  Total time to complete Offline
> -data collection:   (   89) seconds.
> +data collection:   (   80) seconds.
>  Offline data collection
>  capabilities:   (0x7b) SMART execute Offline immediate.
> Auto Offline data collection
> on/off support.
> Suspend Offline collection upon new
> command.
> Offline surface scan supported.
> Self-test supported.
> Conveyance Self-test supported.
> Selective Self-test supported.
>  SMART capabilities:(0x0003)Saves SMART data before
> entering
> power-saving mode.
> Supports SMART auto save timer.
>  Error logging 

Re: Raid 1

2021-01-25 Thread David Christensen

On 2021-01-24 21:23, mick crane wrote:

On 2021-01-24 20:10, David Christensen wrote:



Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the OS.


I think I'll go with the first and last suggestion to just have 2 disks 
in raid1.
It seems that properly you'd want 2 disks in raid for the OS, 2 at least 
for the pool and maybe 1 for the cache.

Don't have anything big enough I could put 5 disks in.
I could probably get 3 disks in. Install the OS on one and then dd that 
to another and put that in a drawer and have another 2 disks as the zfs 
pool. I might have a fiddle about and see what goes on.



If you are short on hardware or money, one option is to install Debian 
onto a USB flash drive.   I ran desktop hardware as servers on USB flash 
drives for many years, and still keep a Debian 9 system on USB flash for 
maintenance purposes.  I have yet to wear one out.  If you feel the need 
for RAID, use two USB flash drives.



David



Re: Raid 1

2021-01-25 Thread Pankaj Jangid


Thanks Andy and Linux-Fan, for the detailed reply.



Re: Raid 1

2021-01-25 Thread Linux-Fan

Andy Smith writes:


Hi Pankaj,

Not wishing to put words in Linux-Fan's mouth, but my own views
are…

On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan  writes:
>
> > * OS data bitrot is not covered, but OS single HDD failure is.
> >   I achieve this by having OS and Swap on MDADM RAID 1
> >   i.e. mirrored but without ZFS.
>
> I am still learning.
>
> 1. By "by having OS and Swap on MDADM", did you mean the /boot partition
>and swap.

When people say, "I put OS and Swap on MDADM" they typically mean
the entire installed system before user/service data is put on it.
So that's / and all its usual sub-directories, and swap, possibly
with things later split off after install.


Yes, that is exactly how I meant it :)

My current setup has two disks each partitioned as follows:

* first   partition ESP  for /boot/efi (does not support RAID)
* second  partition MDADM RAID 1 for / (including /boot and /home)
* third   partition MDADM RAID 1 for swap
* fourth  partition ZFS mirror   for virtual machines and containers

Some may like to have /home separately. I personally prefer to store all my  
user-created data outside of the /home tree because many programs are using  
/home structures for cache and configuration files that are automatically  
generated and should (IMHO) not be mixed with what I consider important data.



> 2. Why did you put Swap on RAID? What is the advantage?

If you have swap used, and the device behind it goes away, your
system will likely crash.

The point of RAID is to increase availability. If you have the OS
itself in RAID and you have swap, the swap should be in RAID too.


That was exactly my reasoning, too. I can add that I did not use a ZFS  
volume for the swap mostly because of

https://github.com/openzfs/zfs/issues/7734
and I did not use it for the OS (/, /boot, /home) mainly because I wanted to  
avoid getting a non-booting system in case anything fails with the ZFS  
module DKMS build. The added benefit was a less complex installation
procedure, i.e. using the Debian installer was possible and all ZFS stuff
could be done from the installed and running system.


I would advise first-time RAID users against replicating my setup,
because restoring after a failed disk will require invoking the respective
restoration procedures of both technologies.



There are use cases where the software itself provides the
availability. For example, there is Ceph, which typically uses
simple block devices from multiple hosts and distributes the data
around.


Yes.

[...]


> How do you decide which partition to cover and which not?

For each of the storage devices in your system, ask yourself:

- Would your system still run if that device suddenly went away?

- Would your application(s) still run if that device suddenly went
  away?

- Could finding a replacement device and restoring your data from
  backups be done in a time span that you consider reasonable?

If the answer to those questions are not what you could tolerate,
add some redundancy in order to reduce unavailability. If you decide
you can tolerate the possible unavailability then so be it.


[...]

My rule of thumb: RAID 1 whenever possible i.e. on all actively relied-upon  
computers that are not laptops or other special form factors with tightly  
limited HDD/SSD options.


The replacement drive considerations are important for RAID setups, too. I  
used to have a "cold spare" HDD but given the rate at which the  
capacity/price ratio rises I thought it to be overly cautious/expensive to  
keep that scheme.


HTH
Linux-Fan

öö


pgp9PMRNZZV5y.pgp
Description: PGP signature


Re: Raid 1

2021-01-25 Thread deloptes
mick crane wrote:

> I think I'll go with the first and last suggestion to just have 2 disks
> in raid1.
> It seems that properly you'd want 2 disks in raid for the OS, 2 at least
> for the pool and maybe 1 for the cache.
> Don't have anything big enough I could put 5 disks in.
> I could probably get 3 disks in. Install the OS on one and then dd that
> to another and put that in a drawer and have another 2 disks as the zfs
> pool. I might have a fiddle about and see what goes on.

Hi,
I have not followed this thread closely, but my advice is to keep it as simple
as possible.
Very often people here overcomplicate things - geeks and freaks - in the
good sense - but still, if you do not know ZFS or cannot afford the
infrastructure for it, just leave it.

In my usecase I came with following solution:

md0 - boot disk (ext3)
md1 - root disk (ext4)
md2 - swap
md3 - LVM for user data (encrypted + xfs)

I have this on two disks that were replaced and "grown" from 200 GB to 1 TB
over the past 18 years. Some of the Seagates I used in the beginning died and
RAID1 paid off.

Planning to move to GPT next; md0 will be converted to an EFI partition (FAT32),
or I will just create one additional partition on each disk for the EFI stuff.
I'm not sure if I need it at all, so I would have to be really bored to touch this.








Re: Raid 1

2021-01-24 Thread Andy Smith
Hi Pankaj,

Not wishing to put words in Linux-Fan's mouth, but my own views
are…

On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan  writes:
> 
> > * OS data bitrot is not covered, but OS single HDD failure is.
> >   I achieve this by having OS and Swap on MDADM RAID 1
> >   i.e. mirrored but without ZFS.
> 
> I am still learning.
> 
> 1. By "by having OS and Swap on MDADM", did you mean the /boot partition
>and swap.

When people say, "I put OS and Swap on MDADM" they typically mean
the entire installed system before user/service data is put on it.
So that's / and all its usual sub-directories, and swap, possibly
with things later split off after install.

> 2. Why did you put Swap on RAID? What is the advantage?

If you have swap used, and the device behind it goes away, your
system will likely crash.

The point of RAID is to increase availability. If you have the OS
itself in RAID and you have swap, the swap should be in RAID too.

There are use cases where the software itself provides the
availability. For example, there is Ceph, which typically uses
simple block devices from multiple hosts and distributes the data
around.

A valid setup for Ceph is to have the OS in a small RAID just so
that a device failure doesn't take down a machine entirely, but then
have the data devices stand alone as Ceph itself will handle a
failure of those. Small boot+OS devices are cheap and it's so simple
to RAID them.

Normally Ceph is set up so that an entire host can be lost. If host
reinstallation is automatic and quick and there's so many hosts that
losing any one of them is a fairly minor occurrence then it could be
valid to not even put the OS+swap in RAID. Though for me it still
sounds like a lot more hassle than just replacing a dead drive in a
running machine, so I wouldn't do it personally.

>- I understood that RAID is used to detect disk failures early.

Not really. Although with RAID or ZFS or the like it is typical to
have a periodic (weekly, monthly, etc) scrub that reads all data and
may uncover drive problems like unreadable sectors, usually failures
happen when they will happen. The difference is that a copy of the
data still exists somewhere else, so that can be used and the
failure does not have to propagate to the application.

> How do you decide which partition to cover and which not?

For each of the storage devices in your system, ask yourself:

- Would your system still run if that device suddenly went away?

- Would your application(s) still run if that device suddenly went
  away?

- Could finding a replacement device and restoring your data from
  backups be done in a time span that you consider reasonable?

If the answer to those questions are not what you could tolerate,
add some redundancy in order to reduce unavailability. If you decide
you can tolerate the possible unavailability then so be it.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Raid 1

2021-01-24 Thread Andrei POPESCU
On Du, 24 ian 21, 23:21:38, Linux-Fan wrote:
> mick crane writes:
> 
> > On 2021-01-24 17:37, Andrei POPESCU wrote:
> 
> [...]
> 
> > > If you want to combine Linux RAID and ZFS on just two drives you could
> > > partition the drives (e.g. two partitions on each drive), use the first
> > > partition on each drive for Linux RAID, install Debian (others will have
> > > to confirm whether the installer supports creating RAID from partitions)
> > > and then use the other partitions for the ZFS pool.
> 
> I can confirm that this works. In fact, I always thought that to be the
> "best practice" for MDADM: To use individual partitions rather than whole
> devices. OTOH for ZFS, best practice seems to be to use entire devices. I am
> not an expert on this, though :)

ZFS is actually using GPT partitions and also automatically creates a 
"reserve" 8 MiB partition, just in case a replacement disk is not 
exactly the same size as the other disk(s) in a VDEV.

So far I haven't found a way around it (not that I care, as I prefer to 
partition manually and identify physical devices by partition label)

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Raid 1

2021-01-24 Thread Pankaj Jangid
Linux-Fan  writes:

> * OS data bitrot is not covered, but OS single HDD failure is.
>   I achieve this by having OS and Swap on MDADM RAID 1
>   i.e. mirrored but without ZFS.

I am still learning.

1. By "by having OS and Swap on MDADM", did you mean the /boot partition
   and swap.

2. Why did you put Swap on RAID? What is the advantage?

   - I understood that RAID is used to detect disk failures early. How
 do you decide which partition to cover and which not?



Re: Raid 1

2021-01-24 Thread mick crane

On 2021-01-24 20:10, David Christensen wrote:

On 2021-01-24 03:36, mick crane wrote:


Let's say I have one PC and 2 unpartitioned disks.


Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the OS.


David


I think I'll go with the first and last suggestion to just have 2 disks 
in raid1.
It seems that properly you'd want 2 disks in raid for the OS, 2 at least 
for the pool and maybe 1 for the cache.

Don't have anything big enough I could put 5 disks in.
I could probably get 3 disks in. Install the OS on one and then dd that 
to another and put that in a drawer and have another 2 disks as the zfs 
pool. I might have a fiddle about and see what goes on.


mick
--
Key ID4BFEBB31



Re: Raid 1

2021-01-24 Thread Linux-Fan

mick crane writes:


On 2021-01-24 17:37, Andrei POPESCU wrote:


[...]


If you want to combine Linux RAID and ZFS on just two drives you could
partition the drives (e.g. two partitions on each drive), use the first
partition on each drive for Linux RAID, install Debian (others will have
to confirm whether the installer supports creating RAID from partitions)
and then use the other partitions for the ZFS pool.


I can confirm that this works. In fact, I always thought that to be the  
"best practice" for MDADM: To use individual partitions rather than whole  
devices. OTOH for ZFS, best practice seems to be to use entire devices. I am  
not an expert on this, though :)



You might want to experiment with this in a VM first. For testing
purposes you can also experiment with ZFS on files instead of real
devices / partitions (probably with Linux RAID as well).

Kind regards,
Andrei


This is my problem: "where is the OS to be running the ZFS to put Debian on?"


You could use a live system, for instance. Beware that this route is  
complicated. I linked to the guide in a previous mail but am not sure if you  
were finally able to check it (you mentioned at least one of my links not  
being accessible, but not which one...).


All I want to do is back up PCs to another and have that have redundancy  
with 2 disks so if one gets borked I can still use the other and put things  
back together.

How do I do that?


My recommendation would be to keep it simple, stupid: let the installer set up
MDADM RAID 1 for the OS, swap and data and be done with it; avoid ZFS unless
there is some reason to need it :)


For sure MDADM lacks the bit rot protection, but it is easier to set up,
especially for the OS, and you can mitigate the bit rot (to some extent)
by running periodic backup integrity checks, which your software hopefully
supports.


HTH
Linux-Fan

öö

[...]


pgpsyBbo4j9TH.pgp
Description: PGP signature


Re: Raid 1

2021-01-24 Thread Andrei POPESCU
On Du, 24 ian 21, 17:50:06, Andy Smith wrote:
> 
> Once it's up and running you can then go and create a second
> partition that spans the rest of each disk, and then when you are
> ready to create your zfs pool:
> 
> > "zpool create tank mirror disk1 disk2"
> 
> # zpool create tank mirror /dev/disk/by-id/ata-DISK1MODEL-SERIAL-part2 
> /dev/disk/by-id/ata-DISK2MODEL-SERIAL-part2
> 
> The DISK1MODEL-SERIAL bits will be different for you based on what
> the model and serial numbers are of your disks. Point is it's a pair
> of devices that are partition 2 of each disk.

At this point I'd recommend to use GPT partition labels instead (not to 
be confused with file system labels). Assuming labels datapart1 and 
datapart2 the create becomes:

# zpool create tank mirror /dev/disk/by-partlabel/datapart1 
/dev/disk/by-partlabel/datapart2

Now the output of 'zpool status' and all other commands will show the 
human-friendly labels instead of the device ID.


Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Raid 1

2021-01-24 Thread David Christensen

On 2021-01-24 03:36, mick crane wrote:


Let's say I have one PC and 2 unpartitioned disks.


Please tell us why you must put the OS and the backup images on the same 
RAID mirror of two HDD's, and why you cannot add one (or two?) more 
devices for the OS.



David



Re: Raid 1

2021-01-24 Thread Marc Auslander
Andy Smith  writes:
>...
>So personally I would just do the install of Debian with both disks
>inside the machine, manual partitioning, create a single partition
>big enough for your OS on the first disk and then another one the
>same on the second disk. Mark them as RAID members, set them to
>RAID-1, install on that.
>...

You don't say if this is or will become a secure boot system, which
would require an EFI partition.  Leaving a bit of space just in case
seems a good idea.



Re: Raid 1

2021-01-24 Thread mick crane

On 2021-01-24 17:37, Andrei POPESCU wrote:

On Du, 24 ian 21, 11:36:09, mick crane wrote:


I know I'm a bit thick about these things, what I'm blocked about is where
is the OS.
Let's say I have one PC and 2 unpartitioned disks.
Put one disk in PC and install Debian on it.


Ok


Install headers and ZFS-utils.
I put other disk in PC, PC boots from first disk.


Ok.


"zpool create tank mirror disk1 disk2"


This will destroy all data already existing on disk1 and disk2 (though I
strongly suspect zpool will simply refuse to use disk1). Same with Linux
RAID.

Creating the RAID (Linux or ZFS) will overwrite any data already
existing on the disks / partitions used for the RAID.

If you want to have the OS on RAID it's probably easiest to let the
installer configure that for you. This implies *both* disks are
available during install (unless the installer can create a "degraded"
RAID).

Installing Debian on ZFS involves manual steps anyway, so it's basically
create the pool with just one disk, install Debian and then 'attach' the
other disk to the first one.

If you want to combine Linux RAID and ZFS on just two drives you could
partition the drives (e.g. two partitions on each drive), use the first
partition on each drive for Linux RAID, install Debian (others will have
to confirm whether the installer supports creating RAID from partitions)
and then use the other partitions for the ZFS pool.

You might want to experiment with this in a VM first. For testing
purposes you can also experiment with ZFS on files instead of real
devices / partitions (probably with Linux RAID as well).

Kind regards,
Andrei


This is my problem: "where is the OS to be running the ZFS to put Debian on?"
All I want to do is back up PCs to another and have that have redundancy
with 2 disks so if one gets borked I can still use the other and put
things back together.

How do I do that?
mick
--
Key ID4BFEBB31



Re: Raid 1

2021-01-24 Thread Andy Smith
Hi Mick,

On Sun, Jan 24, 2021 at 11:36:09AM +, mick crane wrote:
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.

Wherever you installed it.

> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.

I think you are fundamentally going about this the wrong way.

There are several concerns and I think you are mixing them up. If I
understand you correctly, you concerns are:

1. Your data and OS should be backed up.
2. Your data and OS should be available even if a disk dies

Concern #1 is totally separate from concern #2 and is achieved by
setting up a backup system, has very little to do with whether you
use RAID or ZFS or whatever. It is worth a separate thread because
it's separate project.

For concern #2, that being *availability* of data and OS, there's
many ways to do it. You seem to have settled upon ZFS for your data,
and OS separately by some other means. That's fine.

A ZFS mirror vdev is going to need two identically-sized devices.
And you want to keep your OS separate. This suggests that each of
your disks should have two partitions. The first one would be for
the OS, and the second one would be for ZFS.

If you are going to keep your OS separate, I don't see any reason
not to use mdadm RAID-1 for the OS even if you're going to use zfs
for your data. Yes you could just install the OS onto a single
partition of a single disk, but you have two disks so why not use
RAID-1? If a disk breaks, your computer carries on working, what's
not to like?

So personally I would just do the install of Debian with both disks
inside the machine, manual partitioning, create a single partition
big enough for your OS on the first disk and then another one the
same on the second disk. Mark them as RAID members, set them to
RAID-1, install on that.

Once it's up and running you can then go and create a second
partition that spans the rest of each disk, and then when you are
ready to create your zfs pool:

> "zpool create tank mirror disk1 disk2"

# zpool create tank mirror /dev/disk/by-id/ata-DISK1MODEL-SERIAL-part2 
/dev/disk/by-id/ata-DISK2MODEL-SERIAL-part2

The DISK1MODEL-SERIAL bits will be different for you based on what
the model and serial numbers are of your disks. Point is it's a pair
of devices that are partition 2 of each disk.
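
A quick, hedged way to see what those names look like on your system (the
model and serial strings are whatever the kernel reports for your drives):

# ls -l /dev/disk/by-id/ | grep part2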

> Can I then remove disk1 and PC will boot Debian from disk2 ?

This is only going to work if you have gone to the effort of
installing your OS on RAID. The easiest way to achieve that is to
have both disks in the machine when you install it and to properly
tell it that the first partition of each is a RAID member, create
them as a RAID-1 and tell the installer to install onto that.

As others mentioned, after it's installed you do have to manually
install the grub bootloader to the second device as well, as by
default it only gets installed on the first one.
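
For example (a hedged sketch assuming BIOS booting and that the second
disk is /dev/sdb; adjust the device name to your own setup):

# dpkg-reconfigure grub-pc
(select both disks when prompted; or run "grub-install /dev/sdb" by hand)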

A word of warning: RAID is quite a big topic for the uninitiated and
so is ZFS. You are proposing to take on both at once. You have some
learning to do. You may make mistakes, and this data seems precious
to you. I advise you to sort out the backups first. You might need
them sooner than you'd hoped.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Raid 1

2021-01-24 Thread Andrei POPESCU
On Sun, 24 Jan 21, 11:36:09, mick crane wrote:
> 
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.
> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.

Ok

> Install headers and ZFS-utils.
> I put other disk in PC, PC boots from first disk.

Ok.

> "zpool create tank mirror disk1 disk2"

This will destroy all data already existing on disk1 and disk2 (though I 
strongly suspect zpool will simply refuse to use disk1). Same with Linux 
RAID.

Creating the RAID (Linux or ZFS) will overwrite any data already 
existing on the disks / partitions used for the RAID.

If you want to have the OS on RAID it's probably easiest to let the 
installer configure that for you. This implies *both* disks are 
available during install (unless the installer can create a "degraded" 
RAID).

Installing Debian on ZFS involves manual steps anyway, so it's basically 
create the pool with just one disk, install Debian and then 'attach' the 
other disk to the first one.
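
A hedged sketch of that sequence (the pool and device names below are only
placeholders, not a tested recipe):

# zpool create rpool /dev/disk/by-id/ata-DISK1MODEL-SERIAL-part3
  (install Debian into the pool following the OpenZFS HOWTO, then later:)
# zpool attach rpool /dev/disk/by-id/ata-DISK1MODEL-SERIAL-part3 /dev/disk/by-id/ata-DISK2MODEL-SERIAL-part3

After the attach, ZFS resilvers only the blocks that are actually in use.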

If you want to combine Linux RAID and ZFS on just two drives you could 
partition the drives (e.g. two partitions on each drive), use the first 
partition on each drive for Linux RAID, install Debian (others will have 
to confirm whether the installer supports creating RAID from partitions) 
and then use the other partitions for the ZFS pool.

You might want to experiment with this in a VM first. For testing 
purposes you can also experiment with ZFS on files instead of real 
devices / partitions (probably with Linux RAID as well).
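
For instance, a throwaway pool on sparse files (paths are arbitrary; this
is only for experimenting, not for real data):

# truncate -s 1G /tmp/zdisk1 /tmp/zdisk2
# zpool create testpool mirror /tmp/zdisk1 /tmp/zdisk2
# zpool status testpool
# zpool destroy testpool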

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Raid 1

2021-01-24 Thread mick crane

On 2021-01-23 22:01, David Christensen wrote:

On 2021-01-23 07:01, mick crane wrote:

On 2021-01-23 12:20, Andrei POPESCU wrote:

On Fri, 22 Jan 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually before
invoking the raid thing ?


The "raid thing" is a separate layer below the partitions and file
systems.

Technically it is possible to create the mirror with just one device (I
believe mdadm calls this "degraded"), partition the md mirror device,
install, copy data to it, etc., add the second device later and let md
synchronize the two drives.

Because Linux RAID is a separate layer with no knowledge of the data
"above" it has to copy every single bit to the other drive as well
(similar to a dd device-to-device copy), regardless if actually needed
or not.

If you are really strapped for space and must do this ZFS can do it much
more efficiently, because it controls the entire "stack" and knows
exactly which blocks to copy (besides many other advantages over Linux
RAID).

Unfortunately ZFS is slightly more complicated from the packaging side,
and installing Debian on a ZFS root is difficult.

It still makes an excellent choice to manage your storage drives,
especially on a stable system, where there is less hassle with the dkms
module and it's amazingly simple to use once you familiarise yourself
with the basics.

Kind regards,
Andrei


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I 
think I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one 
disk.

install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) 
does zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like 
for like partitions?


If that's done and I've made a zpool called  "backup" from then on the 
ZFS is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"

I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to 
"my_backup_pc/backup/my_pc1" and ZFS mirrors the data to other disk in 
pool ?


If that's how it works I'll just need something on the backup_pc and 
the other PCs to automate the backing up.

Is that backup Ninja or something ?



RAID protects against storage device sectors going bad and against
entire storage devices going bad -- e.g. hard disk drives, solid state
drives, etc..


Backups protect against filesystem contents going bad -- e.g. files,
directories, metadata, etc..


While putting an operating system and backups within a single RAID can
be done, this will complicate creation of a ZFS pool and will
complicate disaster preparedness/ recovery procedures.  The following
instructions assume your OS is on one device and that you will
dedicate two HDD's to ZFS.


See "Creating a Mirrored Storage Pool":

https://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html


The above URL is good for concepts, but the virtual device names
('c1d0', 'c2d0') are for Solaris.  For Debian, you will want to
zero-fill both HDD's with dd(1) and then create the pool with zpool(8)
using device identity nodes:

/dev/disk/by-id/ata-...


Be extremely careful that you specify the correct devices!


ZFS will mark the drives and create a ZFS pool named 'tank' mounted at
'/tank'.  Note the parallel namespaces -- 'tank' is a ZFS namespace and
has no leading slash, while '/tank' is a Unix absolute path.


'/tank' is a ZFS filesystem that can do everything a normal Unix
directory can do.  So, you could create a directory for backups and
create directories for specific machines:

# mkdir /tank/backup

# mkdir /tank/backup/pc1

# mkdir /tank/backup/pc2


Or, you could create a ZFS filesystem for backups and create ZFS
filesystems for specific machines:

# zfs create tank/backup

# zfs create tank/backup/pc1

# zfs create tank/backup/pc2


Both will give you directories that you can put your backups into
using whatever tools you choose, but the latter will give you
additional ZFS capabilities.


David


I know I'm a bit thick about these things, what I'm blocked about is 
where is the OS.

Let's say I have one PC and 2 unpartitioned disks.
Put one disk in PC and install Debian on it.
Install headers and ZFS-utils.
I put other disk in PC, PC boots from first disk.

"zpool create tank mirror disk1 disk2"
Can I then remove disk1 and PC will boot Debian from disk2 ?

mick
--
Key ID

Re: Raid 1

2021-01-23 Thread David Christensen

On 2021-01-23 07:01, mick crane wrote:

On 2021-01-23 12:20, Andrei POPESCU wrote:

On Fri, 22 Jan 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually before
invoking the raid thing ?


The "raid thing" is a separate layer below the partitions and file
systems.

Technically it is possible to create the mirror with just one device (I
believe mdadm calls this "degraded"), partition the md mirror device,
install, copy data to it, etc., add the second device later and let md
synchronize the two drives.

Because Linux RAID is a separate layer with no knowledge of the data
"above" it has to copy every single bit to the other drive as well
(similar to a dd device-to-device copy), regardless if actually needed
or not.

If you are really strapped for space and must do this ZFS can do it much
more efficiently, because it controls the entire "stack" and knows
exactly which blocks to copy (besides many other advantages over Linux
RAID).

Unfortunately ZFS is slightly more complicated from the packaging side,
and installing Debian on a ZFS root is difficult.

It still makes an excellent choice to manage your storage drives,
especially on a stable system, where there is less hassle with the dkms
module and it's amazingly simple to use once you familiarise yourself
with the basics.

Kind regards,
Andrei


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I 
think I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one 
disk.

install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) 
does zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like for 
like partitions?


If that's done and I've made a zpool called  "backup" from then on the 
ZFS is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"

I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to 
"my_backup_pc/backup/my_pc1" and ZFS mirrors the data to other disk in 
pool ?


If that's how it works I'll just need something on the backup_pc and the 
other PCs to automate the backing up.

Is that backup Ninja or something ?



RAID protects against storage device sectors going bad and against 
entire storage devices going bad -- e.g. hard disk drives, solid state 
drives, etc..



Backups protect against filesystem contents going bad -- e.g. files, 
directories, metadata, etc..



While putting an operating system and backups within a single RAID can 
be done, this will complicate creation of a ZFS pool and will complicate 
disaster preparedness/ recovery procedures.  The following instructions 
assume your OS is on one device and that you will dedicate two HDD's to ZFS.



See "Creating a Mirrored Storage Pool":

https://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html


The above URL is good for concepts, but the virtual device names 
('c1d0', 'c2d0') are for Solaris.  For Debian, you will want to 
zero-fill both HDD's with dd(1) and then create the pool with zpool(8) 
using device identity nodes:


/dev/disk/by-id/ata-...


Be extremely careful that you specify the correct devices!
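
As a hedged sketch of those two steps (the by-id names are placeholders --
triple-check them against your real drives, since dd will happily destroy
the wrong disk):

# dd if=/dev/zero of=/dev/disk/by-id/ata-DISK1MODEL-SERIAL bs=1M status=progress
# dd if=/dev/zero of=/dev/disk/by-id/ata-DISK2MODEL-SERIAL bs=1M status=progress
# zpool create tank mirror /dev/disk/by-id/ata-DISK1MODEL-SERIAL /dev/disk/by-id/ata-DISK2MODEL-SERIAL

Zero-filling whole drives takes hours; some people settle for wiping just
the first and last few megabytes instead.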


ZFS will mark the drives and create a ZFS pool named 'tank' mounted at 
'/tank'.  Note the parallel namespaces -- 'tank' is a ZFS namespace and has 
no leading slash, while '/tank' is a Unix absolute path.



'/tank' is a ZFS filesystem that can do everything a normal Unix 
directory can do.  So, you could create a directory for backups and 
create directories for specific machines:


# mkdir /tank/backup

# mkdir /tank/backup/pc1

# mkdir /tank/backup/pc2


Or, you could create a ZFS filesystem for backups and create ZFS 
filesystems for specific machines:


# zfs create tank/backup

# zfs create tank/backup/pc1

# zfs create tank/backup/pc2


Both will give you directories that you can put your backups into using 
whatever tools you choose, but the latter will give you additional ZFS 
capabilities.
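
For example (hedged, using the dataset names above), only the second layout
lets you snapshot or cap each machine's backups independently:

# zfs snapshot tank/backup/pc1@2021-01-23
# zfs set quota=500G tank/backup/pc2
# zfs list -t snapshot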



David



Re: Raid 1

2021-01-23 Thread Linux-Fan

mick crane writes:


On 2021-01-23 17:11, Linux-Fan wrote:

mick crane writes:


[...]


Please note that "root on ZFS" is possible but quite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html


For my current system I actually used mdadm RAID 1 for OS+Swap and ZFS
mirrors for the actual data. This way, I can use the Debian Installer
for installation purposes and benefit from the bit rot protection for
the actually important data while maintaining basic redundancy for the
OS installation. YMMV.

Here are my notes on essential ZFS commands (in case they might be of help):
https://masysma.lima-city.de/37/zfs_commands_shortref.xhtml


[...]


link is not currently available.
what you seem to be doing there is backing up the data with ZFS but not  
backing up the OS, so I guess your raid is the backup for the OS ?

mick


Both open fine here, which of the links fails for you?

RAID is not Backup! Hence I have entirely separate programs for backup. The  
RAID is only the "quickest" layer -- solely responsible for catching  
problems with randomly failing HDDs and -- for non-OS-data -- bit rot.


My system works as follows

* OS data bitrot is not covered, but OS single HDD failure is.
  I achieve this by having OS and Swap on MDADM RAID 1
  i.e. mirrored but without ZFS.

* Actual data bitrot is covered, as is single HDD failure by
  means of ZFS mirrors for all data.

* Backups are separate. For instance, important data is copied to a
  separate computer upon shutdown. Less important data is part of
  manually-invoked backup tasks which use multiple programs to cope
  with different types of data...

HTH
Linux-Fan

öö




Re: Raid 1

2021-01-23 Thread mick crane

On 2021-01-23 17:11, Linux-Fan wrote:

mick crane writes:


On 2021-01-23 12:20, Andrei POPESCU wrote:

On Fri, 22 Jan 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually before
invoking the raid thing ?


[...]


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I 
think I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one 
disk.

install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) 
does zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like 
for like partitions?


If I get your scenario correctly you want to install Debian (without
ZFS i.e. not "root on ZFS") and then create a ZFS mirror?

If yes, then as a preparation you need either (a) two entire devices
of ~ same size to use with ZFS or (b) two partitions to use with ZFS.

Say you install as follows:

* sda1: OS
* sda2: Swap
* sda : XX GiB free

* sdb: XX+ GiB free

Then prepare two unformatted partitions:

* sda3: XX GiB "for ZFS"
* sdb1: XX GiB "for ZFS"

and use these devices for ZFS.

If that's done and I've made a zpool called  "backup" from then on the 
ZFS is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"


You can specify a mountpoint and it will be created automatically. No
need to pre-create the directory as with other file systems.


I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to 
"my_backup_pc/backup/my_pc1" and ZFS mirrors the data to other disk in 
pool ?


Yes. In case you are unsure check the output of `zpool status` to see
the structure as understood by ZFS.

If that's how it works I'll just need something on the backup_pc and 
the other PCs to automate the backing up.

Is that backup Ninja or something ?


I have never used backup Ninja. Depending on your use case anything
from simple rsync to borgbackup may serve :)

Please note that "root on ZFS" is possible but quite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html

For my current system I actually used mdadm RAID 1 for OS+Swap and ZFS
mirrors for the actual data. This way, I can use the Debian Installer
for installation purposes and benefit from the bit rot protection for
the actually important data while maintaining basic redundancy for the
OS installation. YMMV.

Here are my notes on essential ZFS commands (in case they might be of 
help):

https://masysma.lima-city.de/37/zfs_commands_shortref.xhtml

HTH
Linux-Fan

öö


link is not currently available.
what you seem to be doing there is backing up the data with ZFS but not 
backing up the OS, so I guess your raid is the backup for the OS ?

mick


--
Key ID 4BFEBB31



Re: Raid 1

2021-01-23 Thread Linux-Fan

mick crane writes:


On 2021-01-23 12:20, Andrei POPESCU wrote:

On Fri, 22 Jan 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually before
invoking the raid thing ?


[...]


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I think  
I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one disk.
install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) does  
zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like for like  
partitions?


If I get your scenario correctly you want to install Debian (without ZFS  
i.e. not "root on ZFS") and then create a ZFS mirror?


If yes, then as a preparation you need either (a) two entire devices of  
~ same size to use with ZFS or (b) two partitions to use with ZFS.


Say you install as follows:

* sda1: OS
* sda2: Swap
* sda : XX GiB free

* sdb: XX+ GiB free

Then prepare two unformatted partitions:

* sda3: XX GiB "for ZFS"
* sdb1: XX GiB "for ZFS"

and use these devices for ZFS.
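
A hedged sketch of what the mirror creation could then look like (ideally
with /dev/disk/by-id/...-partN paths rather than the short names shown here):

# zpool create backup mirror /dev/sda3 /dev/sdb1
# zpool status backup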

If that's done and I've made a zpool called  "backup" from then on the ZFS  
is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"


You can specify a mountpoint and it will be created automatically. No need 
to pre-create the directory as with other file systems.



I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to "my_backup_pc/backup/my_pc1"  
and ZFS mirrors the data to other disk in pool ?


Yes. In case you are unsure check the output of `zpool status` to see the  
structure as understood by ZFS.


If that's how it works I'll just need something on the backup_pc and the  
other PCs to automate the backing up.

Is that backup Ninja or something ?


I have never used backup Ninja. Depending on your use case anything from  
simple rsync to borgbackup may serve :)
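
As one hedged example, a plain rsync pull of pc1's home directories into
the dataset created above might look like this (host name and paths are
invented):

# rsync -a --delete pc1:/home/ /my_pc1/

followed by a ZFS snapshot of backup/my_pc1 if you want to keep history.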


Please note that "root on ZFS" is possible but quite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html

For my current system I actually used mdadm RAID 1 for OS+Swap and ZFS  
mirrors for the actual data. This way, I can use the Debian Installer for  
installation purposes and benefit from the bit rot protection for the  
actually important data while maintaining basic redundancy for the OS 
installation. YMMV.


Here are my notes on essential ZFS commands (in case they might be of help):
https://masysma.lima-city.de/37/zfs_commands_shortref.xhtml

HTH
Linux-Fan

öö




Re: Raid 1

2021-01-23 Thread mick crane

On 2021-01-23 12:20, Andrei POPESCU wrote:

On Fri, 22 Jan 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually before
invoking the raid thing ?


The "raid thing" is a separate layer below the partitions and file
systems.

Technically it is possible to create the mirror with just one device (I
believe mdadm calls this "degraded"), partition the md mirror device,
install, copy data to it, etc., add the second device later and let md
synchronize the two drives.

Because Linux RAID is a separate layer with no knowledge of the data
"above" it has to copy every single bit to the other drive as well
(similar to a dd device-to-device copy), regardless if actually needed
or not.

If you are really strapped for space and must do this ZFS can do it much
more efficiently, because it controls the entire "stack" and knows
exactly which blocks to copy (besides many other advantages over Linux
RAID).

Unfortunately ZFS is slightly more complicated from the packaging side,
and installing Debian on a ZFS root is difficult.

It still makes an excellent choice to manage your storage drives,
especially on a stable system, where there is less hassle with the dkms
module and it's amazingly simple to use once you familiarise yourself
with the basics.

Kind regards,
Andrei


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I 
think I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one 
disk.

install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) 
does zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like for 
like partitions?


If that's done and I've made a zpool called  "backup" from then on the 
ZFS is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"

I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to 
"my_backup_pc/backup/my_pc1" and ZFS mirrors the data to other disk in 
pool ?


If that's how it works I'll just need something on the backup_pc and the 
other PCs to automate the backing up.

Is that backup Ninja or something ?

mick


--
Key ID 4BFEBB31



Re: Raid 1

2021-01-23 Thread Andrei POPESCU
On Fri, 22 Jan 21, 22:26:46, mick crane wrote:
> hello,
> I want to tidy things up as suggested.
> Have one old PC that I'll put 2 disks in and tidy everything up so what's
> scattered about is on the running disks and this new/old one is just backup
> for them.
> Can I assume that Debian installer in some expert mode will sort out the
> raid or do I need to install to one disk and then mirror it manually before
> invoking the raid thing ?

The "raid thing" is a separate layer below the partitions and file 
systems.

Technically it is possible to create the mirror with just one device (I 
believe mdadm calls this "degraded"), partition the md mirror device, 
install, copy data to it, etc., add the second device later and let md 
synchronize the two drives.

Because Linux RAID is a separate layer with no knowledge of the data 
"above" it has to copy every single bit to the other drive as well 
(similar to a dd device-to-device copy), regardless if actually needed 
or not.

If you are really strapped for space and must do this ZFS can do it much 
more efficiently, because it controls the entire "stack" and knows 
exactly which blocks to copy (besides many other advantages over Linux 
RAID).

Unfortunately ZFS is slightly more complicated from the packaging side, 
and installing Debian on a ZFS root is difficult.

It still makes an excellent choice to manage your storage drives, 
especially on a stable system, where there is less hassle with the dkms 
module and it's amazingly simple to use once you familiarise yourself 
with the basics.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Raid 1

2021-01-22 Thread David Christensen

On 2021-01-22 15:10, David Christensen wrote:


A key issue with storage is bit rot.


I should have said "bit rot protection".


David



Re: Raid 1

2021-01-22 Thread David Christensen

On 2021-01-22 14:26, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so 
what's scattered about is on the running disks and this new/old one is 
just backup for them.
Can I assume that Debian installer in some expert mode will sort out the 
raid or do I need to install to one disk and then mirror it manually 
before invoking the raid thing ?



I would install a small SSD and do a fresh install of the OS onto that.


I would then install the two HDD's and set up a mirror (RAID 1).  Linux 
options include Multiple Device md(4), Linux Volume Manager lvm(8), and 
ZFS zfs(8).



A key issue with storage is bit rot.  btrfs and ZFS have it. 
dm-integrity (man page?) can provide it for Linux solutions without it. 
btrfs requires maintenance.  I did not do it, and my disks suffered. 
ZFS does not require maintenance and has many killer features.  I have 
not tried dm-integrity, but would be interested in reading a HOWTO for 
Debian.



Due to CDDL and GPL licensing conflicts, ZFS is not fully integrated 
into Debian.  ZFS can be installed and used on Debian, but ZFS-on-root 
is not supported by the Debian installer.



The CDDL and BSD licenses are compatible.  So, ZFS is fully integrated 
on FreeBSD, and the FreeBSD installer can do ZFS-on-root.  FreeBSD has 
other features I like.  I use FreeBSD and ZFS on my servers; including 
storage (Samba).



David



Re: Raid 1

2021-01-22 Thread Linux-Fan

mick crane writes:


hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's  
scattered about is on the running disks and this new/old one is just backup  
for them.
Can I assume that Debian installer in some expert mode will sort out the  
raid or do I need to install to one disk and then mirror it manually before  
invoking the raid thing ?


Debian Installer can create MDADM RAID volumes even in non-expert mode.

You need to explicitly select the right options in the installer  
partitioning screen i.e. create the partitions, then create MDADM RAID 1  
devices on top of them and finally let them be formatted with  
ext4/filesystem of choice and be the installation target.


AFAIK "Guided" installation modes do not automatically create RAID, i.e. I  
recommend using the manual partitioning mode.


In the few RAID installs I did, it worked out all of the time.

Only thing to do afterwards is to ensure that GRUB is installed on both of  
the respective devices (dpkg-reconfigure grub-pc assuming BIOS mode).


HTH
Linux-Fan

öö




[Solved] Re: RAID installation at boot questions

2020-11-20 Thread Charles Curley
On Sat, 14 Nov 2020 12:15:47 -0700
Charles Curley  wrote:

> Or (afterthought here) did I give it the wrong UUID?

A week later, I came back to this. It appears I did use the wrong UUID
in /etc/crypttab.

root@hawk:~# ll /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 180 Nov 20 10:25 ./
drwxr-xr-x 7 root root 140 Nov 20 10:24 ../
lrwxrwxrwx 1 root root  10 Nov 20 10:40 343ed59e-ae41-4733-8277-f1b77de67479 -> ../../sda5
lrwxrwxrwx 1 root root  10 Nov 20 10:36 52be92ca-795f-46ef-9c52-074fceedc53c -> ../../dm-1
lrwxrwxrwx 1 root root   9 Nov 20 10:41 57de8169-da6c-4952-b6ac-25e6c87dbf1a -> ../../md0
lrwxrwxrwx 1 root root  10 Nov 20 10:40 85936b4c-4088-4365-9c93-6a7cd8a025c6 -> ../../sda1
lrwxrwxrwx 1 root root  10 Nov 20 10:36 a3cc28cf-fb2f-4ef5-803e-2fcce7006d05 -> ../../dm-2
lrwxrwxrwx 1 root root  10 Nov 20 10:36 aef1882d-714d-41da-865e-f5cc161473e6 -> ../../dm-5
lrwxrwxrwx 1 root root  10 Nov 20 10:36 e297096c-1336-4f9f-8da0-025da7190d7b -> ../../dm-3
root@hawk:~# 

A quick experiment shows that this UUID for /dev/md0 is the one that
works. This is also the UUID that gparted shows for /dev/md0.

encryptedRaid UUID=57de8169-da6c-4952-b6ac-25e6c87dbf1a none luks

As previously noted, using the device name also works, with or without
a password file:

encryptedRaid /dev/md0 /root/raid.encrypt.password.txt luks

or

encryptedRaid /dev/md0 none luks


And that leaves the question of whether mdadm has a bug or whether it
is showing some other UUID. Which question I am not going to pursue.
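
For what it's worth, the two numbers appear to come from different layers:
mdadm --detail prints the md array's own superblock UUID, while crypttab
wants the UUID of the LUKS container sitting on top of /dev/md0. A hedged
way to read the latter directly:

# blkid /dev/md0
# cryptsetup luksUUID /dev/md0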


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: RAID installation at boot questions

2020-11-14 Thread Charles Curley
On Sat, 14 Nov 2020 23:00:35 +0100
Toni Mas Soler  wrote:

> I have more or less the same configuration. I am a no-systemd user
> (yet?) so I cannot show you the full example.
> You could verify:
> - Is there a mdraid1x module  in your grub menu entry?
> - If I am not wrong you made your RAID with mdadm metadata version 1.2. I
> think in this version the metadata is located in the first blocks; on the
> other hand, version 1.0 places it in the end blocks. Somewhere out there
> I read that bootable partitions could not use the 1.2 metadata version. Thus,
> a bootable (and EFI, if it exists) partition must be built with
> metadata version 1.0. I did that and it works. This could solve your
> problem.

Thank you. An interesting thought, but in my setup the RAID array is not
necessary to boot. That is handled on a different drive entirely.


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: RAID installation at boot questions

2020-11-14 Thread Toni Mas Soler
I have more or less the same configuration. I am a no-systemd user
(yet?) so I cannot show you the full example.
You could verify:
- Is there a mdraid1x module  in your grub menu entry?
- If I am not wrong you made your RAID with mdadm metadata version 1.2. I
think in this version the metadata is located in the first blocks; on the
other hand, version 1.0 places it in the end blocks. Somewhere out there
I read that bootable partitions could not use the 1.2 metadata version. Thus,
a bootable (and EFI, if it exists) partition must be built with
metadata version 1.0. I did that and it works. This could solve your
problem.

To force a specific metadata version, I used:
mdadm --create --metadata=1.0 --verbose /dev/md2
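
For a complete invocation the RAID level and member devices are needed as
well; a hedged example with placeholder partitions:

mdadm --create /dev/md2 --metadata=1.0 --level=1 --raid-devices=2 --verbose /dev/sdX1 /dev/sdY1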

Toni Mas

Message from Charles Curley  on
Sat, 14 Nov 2020 at 20:40:
>
> On Sat, 14 Nov 2020 08:12:41 +0100
> john doe  wrote:
>
> > >
> > > What do I do to automate that?
> > >
> >
> >
> >
> > Is your '/etc/crypttab' file properly populated?
>
> Well, I thought it was
>
> At first I got the UUID for the RAID device, /dev/md0:
>
> root@hawk:~# mdadm --detail /dev/md0
> /dev/md0:
>Version : 1.2
>  Creation Time : Thu Nov 12 12:06:28 2020
> Raid Level : raid1
> Array Size : 3906884416 (3725.90 GiB 4000.65 GB)
>  Used Dev Size : 3906884416 (3725.90 GiB 4000.65 GB)
>   Raid Devices : 2
>  Total Devices : 2
>Persistence : Superblock is persistent
>
>  Intent Bitmap : Internal
>
>Update Time : Sat Nov 14 11:52:39 2020
>  State : clean
> Active Devices : 2
>Working Devices : 2
> Failed Devices : 0
>  Spare Devices : 0
>
> Consistency Policy : bitmap
>
>   Name : hawk:0  (local to host hawk)
>   UUID : 0d3ec9c1:2bc5b3e8:24a27283:c0cad01b
> Events : 12270
>
> Number   Major   Minor   RaidDevice State
>0   8   330  active sync   /dev/sdc1
>1   8   491  active sync   /dev/sdd1
> root@hawk:~#
>
> and set that up as a line in /etc/crypttab:
>
> encryptedRaid UUID=0d3ec9c1-2bc5-b3e8-24a2-7283c0cad01b none luks
>
> Didn't work, and gave a 90 second timeout.
>
> Note that the UUID in crypttab is re-formatted to agree with the other
> UUIDs in that file, dashes rather than colons. Is that relevant?
>
> Or (afterthought here) did I give it the wrong UUID?
>
> root@hawk:~# ll /dev/disk/by-uuid/
> total 0
> drwxr-xr-x 2 root root 300 Nov 14 11:52 ./
> drwxr-xr-x 8 root root 160 Nov 14 11:51 ../
> lrwxrwxrwx 1 root root  10 Nov 14 11:52 343ed59e-ae41-4733-8277-f1b77de67479 -> ../../sda5
> lrwxrwxrwx 1 root root  10 Nov 14 11:52 52be92ca-795f-46ef-9c52-074fceedc53c -> ../../dm-1
> lrwxrwxrwx 1 root root   9 Nov 14 11:52 57de8169-da6c-4952-b6ac-25e6c87dbf1a -> ../../md0
> ...
> root@hawk:~#
>
> Anyway, I tried it by device name, and that worked.
>
> encryptedRaid /dev/md0 none luks
>
> Useful tip: that worked without a prompt because I gave /dev/md0's
> encryption the same passphrase I gave the other encrypted partitions.
>
> This also works:
>
> encryptedRaid /dev/md0 /root/raid.encrypt.password.txt luks
>
>
> --
> Does anybody read signatures any more?
>
> https://charlescurley.com
> https://charlescurley.com/blog/
>



Re: RAID installation at boot questions

2020-11-14 Thread Charles Curley
On Sat, 14 Nov 2020 08:12:41 +0100
john doe  wrote:

> >
> > What do I do to automate that?
> >  
> 
> 
> 
> Is your '/etc/crypttab' file properly populated?

Well, I thought it was

At first I got the UUID for the RAID device, /dev/md0:

root@hawk:~# mdadm --detail /dev/md0
/dev/md0:
   Version : 1.2
 Creation Time : Thu Nov 12 12:06:28 2020
Raid Level : raid1
Array Size : 3906884416 (3725.90 GiB 4000.65 GB)
 Used Dev Size : 3906884416 (3725.90 GiB 4000.65 GB)
  Raid Devices : 2
 Total Devices : 2
   Persistence : Superblock is persistent

 Intent Bitmap : Internal

   Update Time : Sat Nov 14 11:52:39 2020
 State : clean 
Active Devices : 2
   Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

Consistency Policy : bitmap

  Name : hawk:0  (local to host hawk)
  UUID : 0d3ec9c1:2bc5b3e8:24a27283:c0cad01b
Events : 12270

Number   Major   Minor   RaidDevice State
   0   8   330  active sync   /dev/sdc1
   1   8   491  active sync   /dev/sdd1
root@hawk:~# 

and set that up as a line in /etc/crypttab:

encryptedRaid UUID=0d3ec9c1-2bc5-b3e8-24a2-7283c0cad01b none luks

Didn't work, and gave a 90 second timeout.

Note that the UUID in crypttab is re-formatted to agree with the other
UUIDs in that file, dashes rather than colons. Is that relevant?

Or (afterthought here) did I give it the wrong UUID?

root@hawk:~# ll /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 300 Nov 14 11:52 ./
drwxr-xr-x 8 root root 160 Nov 14 11:51 ../
lrwxrwxrwx 1 root root  10 Nov 14 11:52 343ed59e-ae41-4733-8277-f1b77de67479 -> ../../sda5
lrwxrwxrwx 1 root root  10 Nov 14 11:52 52be92ca-795f-46ef-9c52-074fceedc53c -> ../../dm-1
lrwxrwxrwx 1 root root   9 Nov 14 11:52 57de8169-da6c-4952-b6ac-25e6c87dbf1a -> ../../md0
...
root@hawk:~#

Anyway, I tried it by device name, and that worked.

encryptedRaid /dev/md0 none luks

Useful tip: that worked without a prompt because I gave /dev/md0's
encryption the same passphrase I gave the other encrypted partitions.

This also works:

encryptedRaid /dev/md0 /root/raid.encrypt.password.txt luks


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: RAID installation at boot questions

2020-11-13 Thread john doe

On 11/14/2020 4:23 AM, Charles Curley wrote:

I've added RAID and two new hard drives to my desktop. The RAID appears
to work, once it is up and running. Alas, on boot it is not being
properly set up. Everything else comes up correctly.

I have two new four terabyte drives set aside for RAID. They are
partitioned, with one partition on each, and the two partitions
combined in a RAID1 array. So far so good.

On top of the RAID1 array is an encrypted device. This is where booting
seems to break down. I do not get the device files related to the
encrypted layer in /dev/mapper.

On top of the encrypted layer is an LVM2 physical volume (PV). Within
that is one logical volume, with an ext4 file system on it.

I see messages in syslog for the two drives originally in the system,
sda and sdb. However, I don't see any for the two new drives, sdc and
sdd. E.g:

systemd[1]: Started Cryptography Setup for sdb3_crypt.

I can manually complete the process after the system has booted:

cryptsetup luksOpen /dev/md0 encryptedRaid
mount /dev/mapper/hawk--vg--raid-crc2020

What do I do to automate that?





Is your '/etc/crypttab' file properly populated?

--
John Doe



Re: Raid 1 borked

2020-10-26 Thread Leslie Rhorer




On 10/26/2020 7:55 AM, Bill wrote:

Hi folks,

So we're setting up a small server with a pair of 1 TB hard disks 
sectioned into 5x100GB Raid 1 partition pairs for data,  with 400GB+ 
reserved for future uses on each disk.


	Oh, also, why are you leaving so much unused space on the drives?  One 
of the big advantages of RAID and LVM is the ability to manage storage 
space.  Unmanaged space on drives doesn't serve much purpose.




Re: Raid 1 borked

2020-10-26 Thread Leslie Rhorer

This might be better handled on linux-r...@vger.kernel.org

On 10/26/2020 10:35 AM, Dan Ritter wrote:

Bill wrote:

So we're setting up a small server with a pair of 1 TB hard disks sectioned
into 5x100GB Raid 1 partition pairs for data,  with 400GB+ reserved for
future uses on each disk.


That's weird, but I expect you have a reason for it.


	It does seem odd.  I am curious what the reasons might be.  Do you mean 
perhaps, rather than RAID 1 pairs on each disk, each partition  is 
paired with the corresponding partition on the other drive?


Also, why so small and so many?


I'm not sure what happened, we had the five pairs of disk partitions set up
properly through the installer without problems. However, now the Raid 1
pairs are not mounted as separate partitions but do show up as
subdirectories under /, ie /datab, and they do seem to work as part of the
regular / filesystem.  df -h does not show any md devices or sda/b devices,
neither does mount. (The system partitions are on an nvme ssd).


Mounts have to happen at mount points, and mount points are
directories. What you have is five mount points and nothing
mounted on them.



lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid
reveals that sda[1-5] and sdb[1-5] are still listed as
TYPE="linux_raid_member".

So first of all I'd like to be able to diagnose what's going on. What
commands should I use for that? And secondly, I'd like to get the raid
arrays remounted as separate partitions. How to do that?


Well, you need to get them assembled and mounted. I'm assuming
you used mdadm.

Start by inspecting /proc/mdstat. Does it show 5 assembled MD
devices? If not:

mdadm -A /dev/md0
mdadm -A /dev/md1
mdadm -A /dev/md2
mdadm -A /dev/md3
mdadm -A /dev/md4

And tell us any errors.


	Perhaps before that (or after), what are the contents of 
/etc/mdadm/mdadm.conf?  Try:


grep -v "#" /etc/mdadm/mdadm.conf


Once they are assembled, mount them:

mount -a

if that doesn't work -- did you remember to list them in
/etc/fstab? Put them in there, something like:

/dev/md0    /dataa    ext4    defaults    0    0

and try again.

-dsr-




Fortunately, there is no data to worry about. However, I'd rather not
reinstall as we've put in a bit of work installing and configuring things.
I'd prefer not to lose that. Can someone help us out?


	Don't fret.  There is rarely, if ever, any need to re-install a system 
to accommodate updates in RAID facilities.  Even if / or /boot are RAID 
arrays - which does not seem to be the case here - one can ordinarily 
manage RAID systems without resorting to a re-install.  I cannot think 
of any reason why a re-install would be required in order to manage a 
mounted file system.  Even if /home is part of a mounted file system 
(other than /, of course), the root user can handle any sort of changes 
to mounted file systems.  This would be especially true in your case, 
where your systems aren't even mounted, yet.  Even in the worst case - 
and yours is far from that - one should ordinarily be able to boot from 
a DVD or a USB drive and manage the system.




Re: Raid 1 borked

2020-10-26 Thread Mark Neyhart
On 10/26/20 4:55 AM, Bill wrote:

> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5].
> blkid reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
> 
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for that? And secondly, I'd like to get the raid
> arrays remounted as separate partitions. How to do that?
> 
    Bill

mdadm will give you some information about which partitions have been
configured as part of a raid device.

mdadm --examine /dev/sda1

It can also report on a raid device

mdadm --detail /dev/md1

If these commands don't report anything, you will need to define the
raid devices again.

Mark



Re: Raid 1 borked

2020-10-26 Thread R. Ramesh

Hi folks,

So we're setting up a small server with a pair of 1 TB hard disks
sectioned into 5x100GB Raid 1 partition pairs for data, with 400GB+
reserved for future uses on each disk. I'm not sure what happened; we
had the five pairs of disk partitions set up properly through the
installer without problems. However, now the Raid 1 pairs are not
mounted as separate partitions but do show up as subdirectories under
/, ie /datab, and they do seem to work as part of the regular /
filesystem. df -h does not show any md devices or sda/b devices,
neither does mount. (The system partitions are on an nvme ssd.) lsblk
reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid
reveals that sda[1-5] and sdb[1-5] are still listed as

TYPE="linux_raid_member".

So first of all I'd like to be able to diagnose what's going on. What
commands should I use for that? And secondly, I'd like to get the raid
arrays remounted as separate partitions. How to do that? Fortunately,
there is no data to worry about. However, I'd rather not reinstall as
we've put in a bit of work installing and configuring things. I'd
prefer not to lose that. Can someone help us out?

Thanks in advance,

Bill


Did you create the md raid1s after partitioning the disks?

Normally when you install mdadm or when you install the system from 
usb/.iso for the first time, the respective mds are assembled and 
appropriately set up if you have already created them.


If you added and partitioned the disks after the main system had been 
installed and running, you will have to create the md raid1s and enable 
automatic assembly through the /etc/mdadm.conf file. You may also need 
to update your initrd, but of this I am not sure. To access and use the 
md raid1s as file systems, you also need to add appropriate fstab 
entries to mount them.
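
A hedged sketch of those steps on Debian for freshly added, empty disks
(device names are placeholders, and note that Debian keeps the file at
/etc/mdadm/mdadm.conf):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u
# mkfs.ext4 /dev/md0
# echo '/dev/md0 /dataa ext4 defaults 0 0' >> /etc/fstab
# mount -a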


Hope I am not trivializing your issues.

Regards
Ramesh



Re: Raid 1 borked

2020-10-26 Thread Dan Ritter
Bill wrote: 
> So we're setting up a small server with a pair of 1 TB hard disks sectioned
> into 5x100GB Raid 1 partition pairs for data,  with 400GB+ reserved for
> future uses on each disk.

That's weird, but I expect you have a reason for it.

> I'm not sure what happened, we had the five pairs of disk partitions set up
> properly through the installer without problems. However, now the Raid 1
> pairs are not mounted as separate partitions but do show up as
> subdirectories under /, ie /datab, and they do seem to work as part of the
> regular / filesystem.  df -h does not show any md devices or sda/b devices,
> neither does mount. (The system partitions are on an nvme ssd).

Mounts have to happen at mount points, and mount points are
directories. What you have is five mount points and nothing
mounted on them.


> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid
> reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
> 
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for that? And secondly, I'd like to get the raid
> arrays remounted as separate partitions. How to do that?

Well, you need to get them assembled and mounted. I'm assuming
you used mdadm.

Start by inspecting /proc/mdstat. Does it show 5 assembled MD
devices? If not:

mdadm -A /dev/md0
mdadm -A /dev/md1
mdadm -A /dev/md2
mdadm -A /dev/md3
mdadm -A /dev/md4

And tell us any errors.

Once they are assembled, mount them:

mount -a

if that doesn't work -- did you remember to list them in
/etc/fstab? Put them in there, something like:

/dev/md0    /dataa    ext4    defaults    0    0

and try again.

-dsr-


> 
> Fortunately, there is no data to worry about. However, I'd rather not
> reinstall as we've put in a bit of work installing and configuring things.
> I'd prefer not to lose that. Can someone help us out?
> 
> Thanks in advance,
> 
>   Bill
> -- 
> Sent using Icedove on Debian GNU/Linux.
> 

-- 
https://randomstring.org/~dsr/eula.html is hereby incorporated by reference.
there is no justice, there is just us.



Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 09:25, Erwan RIGOLLOT wrote:


A raid 1 across 3 disks means all 3 disks hold the data, and therefore lets 
you lose up to 2 disks without losing data, but in that case all 3 disks will 
run permanently and wear out; on the other hand you will have no rebuild time 
if you lose a disk.


Yes there will: there will be a rebuild time in any case, when the failed 
disk is replaced.


PS: odd that this message arrived several hours after it was sent.




RE: Raid 1

2019-09-25 Thread Erwan RIGOLLOT
Hello,

If you choose to set the disk up as a spare, it will only start working when 
another one fails.
That is, it will not wear during normal operation, but it will mean a raid 
rebuild time (the data will have to be copied onto it).

A raid 1 across 3 disks means all 3 disks hold the data, and therefore lets 
you lose up to 2 disks without losing data, but in that case all 3 disks will 
run permanently and wear out; on the other hand you will have no rebuild time 
if you lose a disk.

It's a choice you have to make ...

Have a good day!

Erwan
-Original Message-
From: steve  
Sent: Wednesday, 25 September 2019 09:07
To: duf 
Subject: Raid 1

Hello,

I have three disks that I would like to set up in Raid 1.

I have the choice between creating an array of two disks plus a spare, or 
creating an array of three disks with no spare.

Which is best?

Thanks

Steve



Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 12:20, Pascal Hambourg wrote:

On 25/09/2019 at 11:39, steve wrote:


The argument of the spare that does no work, and therefore does not wear,
is a good argument, for example.


Yes. Or at least it wears less than if it were working, but more than if it 
were sitting on a shelf. It stays powered on and exposed to the heat of 
the machine.


Another argument is the rebuild time. With very high capacity disks (capacity 
grows faster than throughput), a rebuild takes longer and longer - several 
hours - and the risk that the only remaining active disk, which has suffered 
the same wear, fails in turn before the rebuild finishes is not negligible, 
all the more so because it is under heavier load during the rebuild than in 
normal operation.


And if performance matters, RAID 1 can spread read load across all the 
active disks.




Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 11:39, steve wrote:

On 25-09-2019, at 10:12:49 +0200, Pascal Hambourg wrote:


On 25/09/2019 at 09:07, steve wrote:

Hello,

I have three disks that I would like to set up in Raid 1.

I have the choice between creating an array of two disks plus a spare,
or creating an array of three disks with no spare.

Which is best?


Better for what?


For me.


I was talking about the optimization criterion you had in mind.


The argument of the spare that does no work, and therefore does not wear,
is a good argument, for example.


Yes. Or at least it wears less than if it were working, but more than if it 
were sitting on a shelf. It stays powered on and exposed to the heat of 
the machine.


Another argument is the rebuild time. With very high capacity disks (capacity 
grows faster than throughput), a rebuild takes longer and longer - several 
hours - and the risk that the only remaining active disk, which has suffered 
the same wear, fails in turn before the rebuild finishes is not negligible, 
all the more so because it is under heavier load during the rebuild than in 
normal operation.




Re: Raid 1

2019-09-25 Thread steve

On 25-09-2019, at 11:21:42 +0200, Jean-Michel OLTRA wrote:


On Wednesday 25 September 2019, steve wrote...

I have the choice between creating an array of two disks plus a spare or

I have been running like that for years. If a disk, or part of a disk, shows
weaknesses, the spare takes over. That leaves time to buy another disk to
rebuild the whole array.


Me too. However, a while ago I had an issue with my system and I thought the
cause was one of the disks in the array. So I took it out of the array, but
the issue persisted. I eventually found the problem, which had nothing at all
to do with that disk. Out of laziness I let the summer go by, and I have just
put the disk back into the array without remembering that it was originally
a spare.

So I now have an array of 3 disks and no spare any more. Hence my question.

Like you, I think the 2 disks + spare solution is a good option. Before
pulling one of the disks out of the array and putting it back as a spare,
I wanted to get the list's feeling on it.



Re: Raid 1

2019-09-25 Thread Jean-Michel OLTRA


Hello,


On Wednesday 25 September 2019, steve wrote...


> I have the choice between creating an array of two disks plus a spare or

I have been running like that for years. If a disk, or part of a disk, shows
weaknesses, the spare takes over. That leaves time to buy another disk to
rebuild the whole array.
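
A hedged illustration of that layout with mdadm (device names are
placeholders): two active mirror members plus one hot spare that only steps
in when a member fails:

mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdX1 /dev/sdY1 /dev/sdZ1

A three-way mirror would instead use --raid-devices=3 and no spare.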


-- 
jm



Re: Raid 1

2019-09-25 Thread steve

On 25-09-2019, at 10:12:49 +0200, Pascal Hambourg wrote:


On 25/09/2019 at 09:07, steve wrote:

Hello,

I have three disks that I would like to set up in Raid 1.

I have the choice between creating an array of two disks plus a spare,
or creating an array of three disks with no spare.

Which is best?


Better for what?


For me.


Best in absolute terms does not exist.


We agree on that; my question was an open one.

The argument of the spare that does no work, and therefore does not wear,
is a good argument, for example.





Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 10:14, kaliderus wrote:



If you want redundancy (hence Raid 1),


No. You can also have redundancy with RAID 4, 5, 6 or 10.


you need a parity disk.


No, not necessarily. RAID 1 and 10 have no parity. Only RAID 4 has a 
dedicated parity disk. RAID 5 and 6 have parity spread across all the 
active disks.



"an array of three disks with no spare" -- I don't really know what that
is; presumably Raid 0


You are confirming that you do not know what you are talking about, and 
you are talking nonsense.




RE: Raid 1

2019-09-25 Thread Erwan RIGOLLOT
Er, I don't agree with you.
A spare disk is an inactive disk.
You can do a raid 1 across 3 disks where all three are active and hold the 
data, and so there is no spare at all.

-Original Message-
From: kaliderus  
Sent: Wednesday, 25 September 2019 10:14
To: duf 
Subject: Re: Raid 1

On Wed, 25 Sept 2019 at 09:07, steve  wrote:
>
> Hello,
>
> I have three disks that I would like to set up in Raid 1.
>
> I have the choice between creating an array of two disks plus a spare 
> or creating an array of three disks with no spare.
If you want redundancy (hence Raid 1), you need a parity disk.

>
> Which is best?
Read the associated documentation in order to understand the different 
architectures :-) "an array of three disks with no spare" -- I don't really 
know what that is; presumably Raid 0, which is "redundant" in name only, and 
which in practice merely lumps 3 disks together into a single logical unit, 
so if you lose one disk you lose everything -- the antithesis of the notion 
of redundancy.

Have fun.



Re: Raid 1

2019-09-25 Thread Eric Degenetais
hello

On Wed, 25 Sept 2019 at 10:14, kaliderus  wrote:
>
> On Wed, 25 Sept 2019 at 09:07, steve  wrote:
> >
> > Hello,
> >
> > I have three disks that I would like to set up in Raid 1.
> >
> > I have the choice between creating an array of two disks plus a spare
> > or creating an array of three disks with no spare.
> If you want redundancy (hence Raid 1), you need a parity disk.
>
> >
> > Which is best?
> Read the associated documentation in order to understand the different
> architectures :-)
> "an array of three disks with no spare" -- I don't really know what that
> is; presumably Raid 0, which is "redundant" in name only, and
No: it can be raid1 (mirroring), which puts the emphasis on redundancy
(3 copies) at the cost of capacity (3 disks to store a volume of data
equal to the smallest of them, or to a single one of them if they are
identical).
> which in practice merely lumps 3 disks together into a single logical
> unit, so if you lose one disk you lose everything -- the antithesis of
So no: in that case you can lose up to two of them (leaving aside corruption
risks, for which you need to compare at least three disks against each
other).
> the notion of redundancy.
>
> Have fun.
>

Regards
__
Éric Dégenètais
Henix

http://www.henix.com
http://www.squashtest.org



Re: Raid 1

2019-09-25 Thread kaliderus
On Wed, 25 Sept 2019 at 09:07, steve  wrote:
>
> Hello,
>
> I have three disks that I would like to set up in Raid 1.
>
> I have the choice between creating an array of two disks plus a spare
> or creating an array of three disks with no spare.
If you want redundancy (hence Raid 1), you need a parity disk.

>
> Which is best?
Read the associated documentation in order to understand the different
architectures :-)
"an array of three disks with no spare" -- I don't really know what that
is; presumably Raid 0, which is "redundant" in name only, and which in
practice merely lumps 3 disks together into a single logical unit, so if
you lose one disk you lose everything -- the antithesis of the notion of
redundancy.

Have fun.



Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 09:07, steve wrote:

Hello,

I have three disks that I would like to set up in Raid 1.

I have the choice between creating an array of two disks plus a spare,
or creating an array of three disks with no spare.

Which is best?


Better for what?
Best in absolute terms does not exist.



Re: Raid 0

2018-11-06 Thread Dan Ritter
Eduardo M KALINOWSKI: 
> On ter, 06 nov 2018, Finariu Florin wrote:
> >  Hi,
> > Can somebody help me with some information about why I cannot see the
> > RAID0 created in the BIOS?
> > I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (SATA 2 x 4,
> > SATA 3 x 2) and Marvell SE9172 (SATA 3 x 2). I created a RAID0 in the BIOS on
> > the Marvell and another RAID0 on the Intel C602.
> > When I start the OS installation, the 'detect disk' step shows
> > me nothing and asks me to verify that the SSDs are connected. When I install the OS
> > with no RAID partition it sees all the SSDs I have plugged in. I verified all the
> > SSDs one by one, and all the cables too, but nothing...
> > So how can I see the RAIDs created in the BIOS? Is there something else I should
> > do to be able to see them? I tried RedHat, Fedora, CentOS and Kubuntu,
> > but it's the same thing!
> 
> It's the third time you've asked this. I'm assuming you're not subscribed to
> the list. You'd better subscribe in order to view the replies:
> https://lists.debian.org/debian-user/
> 
> Or at least look for replies in the web archives at that same address. But
> please don't keep reposting the same question.

He got good, similar answers each time, and also thanked me in
private email.

So I don't know what's up, but it's safe to ignore further
repeats.

-dsr-



Re: Raid 0

2018-11-06 Thread Eduardo M KALINOWSKI

On ter, 06 nov 2018, Finariu Florin wrote:

 Hi,
Can somebody help me with some information about why I cannot see the
RAID0 created in the BIOS?
I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (SATA 2 x 4,
SATA 3 x 2) and Marvell SE9172 (SATA 3 x 2). I created a RAID0 in the
BIOS on the Marvell and another RAID0 on the Intel C602.
When I start the OS installation, the 'detect disk' step shows me
nothing and asks me to verify that the SSDs are connected. When I
install the OS with no RAID partition it sees all the SSDs I have
plugged in. I checked all the SSDs one by one, and all the cables too,
but nothing...
So how can I see the RAIDs created in the BIOS? Is there something else
I should do to be able to see them? I tried RedHat, Fedora, CentOS and
Kubuntu, but it's the same thing!


It's the third time you've asked this. I'm assuming you're not  
subscribed to the list. You'd better subscribe in order to view the  
replies: https://lists.debian.org/debian-user/


Or at least look for replies in the web archives at that same address.  
But please don't keep reposting the same question.

--
Eduardo M KALINOWSKI
edua...@kalinowski.com.br




Re: Raid 0

2018-11-06 Thread gosho

On 2018-11-06 15:49, Finariu Florin wrote:

Hi,
Can somebody help me with some information about why I cannot see the
RAID0 created in the BIOS?
I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (SATA 2 x
4, SATA 3 x 2) and Marvell SE9172 (SATA 3 x 2). I created a RAID0 in
the BIOS on the Marvell and another RAID0 on the Intel C602.
When I start the OS installation, the 'detect disk' step shows
me nothing and asks me to verify that the SSDs are connected. When I
install the OS with no RAID partition it sees all the SSDs I have
plugged in. I verified all the SSDs one by one, and all the cables too,
but nothing...
So how can I see the RAIDs created in the BIOS? Is there something else
I should do to be able to see them? I tried RedHat, Fedora, CentOS and
Kubuntu, but it's the same thing!
So if you have any information about this, please help me!
Thank you!


Hi,

Your RAID controller is probably not supported. A possible workaround
could be to use LVM.


Kind regards
Georgi



Re: Raid 0

2018-11-06 Thread Roberto C . Sánchez
On Tue, Nov 06, 2018 at 01:49:32PM +, Finariu Florin wrote:
> Hi,
>    Can somebody help me with some information about why I cannot see the
>    RAID0 created in the BIOS?
>    I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (SATA 2 x 4,
>    SATA 3 x 2) and Marvell SE9172 (SATA 3 x 2). I created a RAID0 in the BIOS on
>    the Marvell and another RAID0 on the Intel C602.
>    When I start the OS installation, the 'detect disk' step shows me
>    nothing and asks me to verify that the SSDs are connected. When I install the OS with
>    no RAID partition it sees all the SSDs I have plugged in. I verified all the SSDs one
>    by one, and all the cables too, but nothing...
>    So how can I see the RAIDs created in the BIOS? Is there something else I should do
>    to be able to see them? I tried RedHat, Fedora, CentOS and Kubuntu, but it's the
>    same thing!
>    So if you have any information about this, please help me!
>Thank you! 

You probably need a special driver.  BIOS RAID is not real RAID; it
falls under the category of "fake RAID."

https://en.wikipedia.org/wiki/RAID#FAKE

Essentially, a BIOS RAID is really a crappy software RAID implementation
hosted on a chip.  It is the worst of everything.

If you need true RAID, then get a real RAID card (and be prepared to pay
a decent amount).  If you do not need real RAID, then set up your RAID as
a pure software RAID managed from within Linux.

If you use the BIOS fake RAID and your motherboard fails, there is a
good possibility that you will not be able to recover the data from
your disks without another identical or nearly identical motherboard to
which you can connect the drives.  Linux software RAID, on the other
hand, lets you plug the disks into any machine running Linux and assemble
the array.

Regards,

-Roberto

-- 
Roberto C. Sánchez
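
To illustrate that last point, a sketch of re-assembling an mdadm array on another Linux machine; the member names /dev/sdb1 and /dev/sdc1 are placeholders:

# Read the RAID superblocks on the members
mdadm --examine /dev/sdb1 /dev/sdc1

# Let mdadm find and assemble everything it recognises
mdadm --assemble --scan

# Or assemble one array explicitly
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
cat /proc/mdstat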



Re: Raid

2018-11-05 Thread Reco
Hi.

On Mon, Nov 05, 2018 at 02:53:34PM +, Finariu Florin wrote:
> I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (SATA2 x 4, SATA3
> x 2)

A fakeraid, aka Intel Matrix RAID. There should be some mdadm support
for this, but you might as well use mdadm to create your RAID from the SSDs
themselves.


> and Marvell SE9172 (SATA3 x 2).

A fakeraid too, but without sensible mdadm support. Again, the end result is
hardly distinguishable from an mdadm RAID.


>  When I start the OS installation, the 'detect disk' step shows me
> nothing and asks me to verify that the SSDs are connected. When I install the OS with no
> RAID partition it sees all the SSDs I have plugged in. I verified all the SSDs one by one,
> and all the cables too, but nothing...
>  So how can I see the RAIDs created in the BIOS?

I'd start by removing these 'RAID controllers' altogether and throwing
them onto the nearest garbage pile. Next I'd buy something from Adaptec or
LSI (hint - eBay). If it does not have a BBU it's not a real RAID
controller anyway.

But I suspect that it's not an option for you, so I'd start by trying to
detect mdadm drives in d-i.

Reco
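
A sketch of that last check from the installer or a live shell, with /dev/sda and /dev/sdb standing in for the SSDs:

# Show any RAID metadata on the disks, including Intel 'imsm' containers
mdadm --examine /dev/sda /dev/sdb

# Print config lines for whatever was found, then try to assemble it
mdadm --examine --scan
mdadm --assemble --scan
cat /proc/mdstat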



Re: Raid

2018-11-05 Thread Dan Ritter
Finariu Florin: 
>  Hi, can somebody help me with some information about why I cannot see the
> RAID0 created in the BIOS?
>  I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (SATA2 x 4, SATA3
> x 2) and Marvell SE9172 (SATA3 x 2). I created a RAID0 in the BIOS on the Marvell and
> another RAID0 on the Intel C602.
>  When I start the OS installation, the 'detect disk' step shows me
> nothing and asks me to verify that the SSDs are connected. When I install the OS with no
> RAID partition it sees all the SSDs I have plugged in. I verified all the SSDs one by one,
> and all the cables too, but nothing...
>  So how can I see the RAIDs created in the BIOS? Is there something else I should do to
> be able to see them? I tried RedHat, Fedora, CentOS and Kubuntu, but it's the same
> thing!
> So if you have any information about this, please help me!

Ah, this is because your BIOS RAID is what is referred to as
FakeRAID. It's neither a hardware controller (and thus
OS-independent) nor fully OS controlled (and thus transferable
to another system). While there used to be some support in
Linux, it's really a Windows thing -- and not all that well
supported in Windows.

My recommendation is to tell your BIOS they are all independent disks,
and use either Linux's mdadm to assemble them into RAIDs or, if you want
a little more excitement, use btrfs or ZFS instead.

-dsr-
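
A minimal sketch of that route, assuming the controllers are set back to plain AHCI and the two SSDs show up as /dev/sda and /dev/sdb (placeholder names; RAID0 still has no redundancy, so keep backups):

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0

# Record the array so the initramfs can assemble it at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u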



Re: RAID 5 array with journal device does not automatically assemble at boot

2017-11-09 Thread Tobx

On 8. Nov 2017, at 21:58, deloptes  wrote:
> 
> Tobx wrote:
> 
>> VERBOSE=false
> 
> perhaps set to true and see what it says.

The comment to this option states:
#   if this variable is set to true, mdadm will be a little more verbose e.g.
#   when creating the initramfs.

I tried that, but I did not see any difference.

On 8. Nov 2017, at 21:58, deloptes  wrote:
> According to the docs [1,2] that I skimmed, it is used only when creating an array.
> [3] says explicitly create, build or grow. For manage you should
> use --add-journal

Yes, I used the option only to create the array.

At linux-r...@vger.kernel.org I was told that this is most
probably a bug, and that the journaling feature is very new but should be
stable.

Now I have decided to work around that problem. I prevent mdadm from auto-assembling the
array with:

ARRAY /dev/md/test UUID=08ef559a:55c0f0ee:713cb429:c73d5a76

in /etc/mdadm/mdadm.conf and (not yet tested) try to assemble and mount the 
array in /etc/rc.local.

I hope this works.

Cheers,
Tobi
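
For what it is worth, such an /etc/rc.local workaround might look roughly like this; the member names and the mount point are assumptions, not a tested configuration:

#!/bin/sh -e
# Assemble the journaled RAID5 late, once all members (including the journal SSD) exist
mdadm --assemble /dev/md/test /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 || true
mount /dev/md/test /mnt/test || true
exit 0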


Re: RAID 5 array with journal device does not automatically assemble at boot

2017-11-08 Thread deloptes
Tobx wrote:

> RAID assembling at boot only works when no journal device is involved.
> 

I can't help much here, as I have nothing to compare against. I forgot to mention that the md
driver is compiled into the kernel in my case.

> VERBOSE=false

perhaps set to true and see what it says.

> 
> Options in /etc/mdadm/mdadm.conf are:
> 
> HOMEHOST 
> MAILADDR root
> ARRAY /dev/md/test  metadata=1.2 UUID=4f0448f6:fee2638c:a1c1b547:20358980
> name=debian:test spares=1

.. and I assume you double checked (blkid) the UUID.

No idea - just trying to help as it sounded similar to what I've
experienced. However in your case the "--write-journal=/dev/sde1" seems to
cause the issue.
According to the docs [1,2] that I skimmed, it is used only when creating an array.
[3] says explicitly create, build or grow. For manage you should
use --add-journal

regards

[1] https://lwn.net/Articles/665299/
[2] 2016_vault_write_journal_cache_v2.pdf
[3] https://man.cx/mdadm(8)
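
A sketch of the distinction, with throwaway device names (/dev/sdb1, /dev/sdc1 and /dev/sdd1 as data members, /dev/sde1 as the journal SSD):

# Journal declared at creation time
mdadm --create /dev/md/test --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --write-journal=/dev/sde1

# Adding (or replacing) a journal on an existing array in manage mode
mdadm --manage /dev/md/test --add-journal /dev/sde1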





Re: RAID 5 array with journal device does not automatically assemble at boot

2017-11-08 Thread Tobx
I was on 4.9.0-4 (Stretch), now tried with 4.13.0-0 but had no luck.

I also tried it again on a clean Ubuntu-Server 17.10 with Kernel 4.13.0-16 and 
had exactly the same issue:

RAID assembling at boot only works when no journal device is involved.

> On 7. Nov 2017, at 20:04, deloptes  wrote:
> 
> besides what do you have in /etc/default/mdadm

I did not touch /etc/default/mdadm. I ran dpkg-reconfigure mdadm once, but this 
did not help, options in /etc/default/mdadm are:

AUTOCHECK=true
START_DAEMON=true
DAEMON_OPTIONS="--syslog"
VERBOSE=false

Options in /etc/mdadm/mdadm.conf are:

HOMEHOST 
MAILADDR root
ARRAY /dev/md/test  metadata=1.2 UUID=4f0448f6:fee2638c:a1c1b547:20358980 
name=debian:test
  spares=1

regards


Re: RAID 5 array with journal device does not automatically assemble at boot

2017-11-07 Thread deloptes
Tobx wrote:

> What am I missing?

I don't know if it is related, and I use raid1 rather than raid5, but in
the past year or so I experienced something similar with our server. Now I run
4.12.10 and noticed in the changelog/release notes that there are a lot of
fixes in the md stack. The issues are gone now and it assembles on boot
automatically. I don't know when the problems started (with which kernel
version - I think 4.9 was OK, but I might be wrong)

besides what do you have in /etc/default/mdadm

regards



Re: RAID e LVM

2016-06-10 Thread Flavio Menezes dos Reis
Dutra,

Trophy for you, champ!

On 10 June 2016 at 07:33, Guimarães Faria Corcete DUTRA, Leandro <l...@dutras.org> wrote:



-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-10 Thread Guimarães Faria Corcete DUTRA , Leandro
2016-06-09 15:01 GMT-03:00 Flavio Menezes dos Reis :
> I agree with you that it is bold to say ZFS is better than RAID+LVM in
> any scenario, but what can you do if it's the truth.

Right, but on a technical list the ideal is to show in what way it would be better.
A generic claim like that lacks credibility.


> Can't we
> say that ext4 is better than FAT in any serious-use scenario?

And what is serious use?  For example, there are the human factors you have
been ignoring: FAT works in many situations where ext4 does not,
because the user lacks administrator privileges or the knowledge
to install ext4 support.


> It's an
> evolutionary process; nobody uses a PDP-11 in any scenario anymore

http://www.theregister.co.uk/2013/06/19/nuke_plants_to_keep_pdp11_until_2050/

See how generic claims tend to go wrong?


> now it's other equipment/technologies.

Once again, a confusion of levels and ambiguous words.


> Dutra, if the person asking has only two disks, how can you not use cloning or
> striping? Immaturity? Where did you get that from? ZFS On Linux is already
> considered production-ready.

A colleague has just shown otherwise.

Besides, you contradicted yourself: you had already admitted it is not mature.

And who considers it so?  That anonymous passive voice ('it is considered') lacks
credibility.  If this were Wikipædia: {{Citation needed |
date = June 2016}}.


> I would find it interesting to accept that some things come along to make life
> easier and end up replacing older technologies. It is certain that RAID+LVM
> will soon succumb to ZFS. Is that bold? I don't know, it is just so logical to
> expect it.

Logical for whom, and why?

Maybe I have a few more grey hairs than you.

I still remember when it was logical to expect that MS-WNT would replace Unix.
That one I believed.  Or when graph systems (each era called them
something different, from hierarchical to NoSQL, by way of network,
multivalue, until even LDAP joined the dance) would replace SQL
and/or the relational model; once I noticed this fad recurring periodically,
I saw it was fraud and/or lack of knowledge.  There was
ReiserFS, which was going to replace databases, and our infamous
compatriot Klaus saying Prevayler would kill SQL.  In all those
cases, those who knew better knew the promises were false; I was
young and ignorant enough for the first one to fool me,
but not the later ones.

And what I want to put on record on the list is not that RAID+LVM is
superior to, or as good as, ZFS.  But that ZFS implements RAID,
albeit with additions; that ZFS may not be the best option for
everyone, the far more important decision being to understand the user's
scenario and which RAID level suits it best; and that today ZFS
is not mature on Debian, as, again, a colleague has already shown
in this very discussion, albeit under a different subject line.  And,
incidentally, that it is important to distinguish concepts (/striping/,
/cloning/, mirroring, parity, RAID) from implementations (ZFS,
RAID+LVM…).

But I think you are more concerned with winning the argument and glorifying the
idol of your 'technological' devotions, so I will stop here unless
you bring solid information instead of mere repetition.  Barring
that case of new data, you can keep writing your reassertions and I
will no longer refute them.


-- 
skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
+55 (61) 3546 7191  gTalk: xmpp:leand...@jabber.org
+55 (61) 9302 2691ICQ/AIM: aim:GoIM?screenname=61287803
BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm



Re: RAID e LVM

2016-06-09 Thread Flavio Menezes dos Reis
I agree with you that it is bold to say ZFS is better than RAID+LVM in
any scenario, but what can you do if it's the truth. Can't we
say that ext4 is better than FAT in any serious-use scenario? It's an
evolutionary process; nobody uses a PDP-11 in any scenario anymore, now it's
other equipment/technologies.

Dutra, if the person asking has only two disks, how can you not use cloning or
striping? Immaturity? Where did you get that from? ZFS On Linux is already
considered production-ready.

I would find it interesting to accept that some things come along to make life
easier and end up replacing older technologies. It is certain that RAID+LVM
will soon succumb to ZFS. Is that bold? I don't know, it is just so logical to
expect it.



On 9 June 2016 at 14:45, Guimarães Faria Corcete DUTRA, Leandro <l...@dutras.org> wrote:



-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-09 Thread Flavio Menezes dos Reis
I don't even know why I keep up this pointless debate with you, but here goes;
in any case you seem to be the kind of person who is only satisfied by winning
an argument at the expense of the collective gain.

At no point did I exhaust ZFS's capabilities so as to say everything it
does or does not implement; I only said, and I repeat, that ZFS is better than RAID+LVM
in any scenario. If you think otherwise, refute the specific scenario.

And yes, I will always distinguish ZFS from RAID as everyone knows it, whether in
software or hardware.


On 9 June 2016 at 14:29, Guimarães Faria Corcete DUTRA, Leandro <l...@dutras.org> wrote:



-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-09 Thread Guimarães Faria Corcete DUTRA , Leandro
2016-06-09 14:39 GMT-03:00 Flavio Menezes dos Reis :
> I don't even know why I keep up this pointless debate with you

I don't think it is pointless.  We have someone asking about a specific
situation, which you seem to have ignored in your recommendation, and the
list archive remains available and people do consult it looking for
guidance.


> in any case you seem to be the kind of person who is only satisfied by winning
> an argument at the expense of the collective gain.

On the contrary, I am thinking precisely of the others and of
informing the public.


> At no point did I exhaust ZFS's capabilities so as to say everything it
> does or does not implement

I never accused you of that.


> I only said, and I repeat, that ZFS is better than RAID+LVM
> in any scenario

That would need to be demonstrated.  It is a very bold and
generic claim, and that kind of claim rarely holds up in
practice.


> if you think otherwise, refute the specific scenario.

The person asking has only two disks.  So he has practically no
use for either /cloning/ or /striping/, which were the two advantages
you put forward; ease of use, another one you also
mentioned, is offset by the non-standard
nomenclature, by the immaturity of the Linux port, and by the very
simplicity of the scenario.


> And yes, I will always distinguish ZFS from RAID as everyone knows it, whether in
> software or hardware.

And they are different, but they are also at different levels.  The problem was
never having to equate them, quite the opposite, but to distinguish the levels.  And
'as everyone knows' is a logical fallacy, not an argument, much less
proof.


-- 
skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
+55 (61) 3546 7191  gTalk: xmpp:leand...@jabber.org
+55 (61) 9302 2691ICQ/AIM: aim:GoIM?screenname=61287803
BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm



Re: RAID e LVM

2016-06-09 Thread Flavio Menezes dos Reis
Dutra,

I'm sorry, but I think you are overcomplicating this and dragging the subject out.
If it is 'to the detriment of', it is obviously an alternative: I don't use A to the
detriment of B, A is an alternative to B.

Again, RAID, whether in software or hardware, is one thing; ZFS is another,
so there is a clear distinction when I say to use ZFS instead of RAID,
whether software or hardware.

And the complication goes on: the fussiness of wanting to distinguish a
reference/concept from a technology.

And yes, I am saying that ZFS is better than using RAID (in
software or hardware) + LVM, regardless of the RAID level adopted, because
ZFS also does striping and cloning.

That's fine if you want to continue the philosophical discussion of the thing, whereas I
set out to offer a practical solution that is much simpler, in any
scenario, than using the RAID+LVM pair.

Cheers

On 9 June 2016 at 14:09, Guimarães Faria Corcete DUTRA, Leandro <l...@dutras.org> wrote:



-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-09 Thread Guimarães Faria Corcete DUTRA , Leandro
2016-06-09 14:24 GMT-03:00 Flavio Menezes dos Reis :
>
> I'm sorry, but I think you are overcomplicating this and dragging the subject out. If
> it is 'to the detriment of', it is obviously an alternative: I don't use A to the detriment of B,
> A is an alternative to B.

No, detriment means harm, damage, disparagement.  It has a
negative connotation that 'alternative' does not.


> Again, RAID, whether in software or hardware, is one thing; ZFS is
> another

No, ZFS implements RAID.  You are confusing different levels of
concepts, and that is terrible for communication.


> And yes, I am saying that ZFS is better than using RAID (in
> software or hardware) + LVM, regardless of the RAID level adopted, because
> ZFS also does striping and cloning.

At last you gave two reasons.  But the first is false (RAID
inherently does /striping/), and the second has other implementations
that may be more robust on Debian; in any case you did not even
say why that would be better for the person asking, and you still ignored the
question of maturity on Debian.


> That's fine if you want to continue the philosophical discussion of the thing, whereas I
> set out to offer a practical solution that is much simpler, in any
> scenario, than using the RAID+LVM pair.

There is no such thing as better in every scenario.  It's fine if you
cannot believe that; experience will teach you.  But until then at least
tolerate those who disagree.  And give arguments instead of merely rephrasing
assertions.


-- 
skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
+55 (61) 3546 7191  gTalk: xmpp:leand...@jabber.org
+55 (61) 9302 2691ICQ/AIM: aim:GoIM?screenname=61287803
BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm



Re: RAID e LVM

2016-06-09 Thread Guimarães Faria Corcete DUTRA , Leandro
2016-06-09 11:07 GMT-03:00 Flavio Menezes dos Reis :
>
> I think you did not understand when I was referring to RAID technology to the detriment
> of the scenario of the person asking.

Detriment?  I did not read it that way, just as an alternative.


> After all, what I said is that ZFS should be
> considered instead of RAID+LVM

And what I am saying is that ZFS is not 'instead of RAID', but 'an
implementation of RAID'.  And that it has its complexities too, so it
may not be suitable for lay users (like the person asking) or on
GNU/Linux (because of the immaturity of the implementation, on top of the
complexity).


> and then we started extrapolating the
> characteristics of RAID technology.

Look, I think the term 'technology' is very ambiguous.  ZFS does RAID;
RAID is a concept, not a specific implementation.  What you are
saying is that ZFS's RAID implementation is better than the others; and I
am saying (1) that you have to specify which RAID level is most
suitable for the situation of the person asking before recommending any
implementation, and (2) that a given implementation is not necessarily
better for everyone, so you have to explain in what way it is better
for his situation, at least potentially.
Saying it is (much) better clarifies nothing.


> Yes, as for the millions, we are talking about Serpro, so do not limit the technology to
> cheap disks.

Nobody limited anything.  I only said what the acronym means.


> With that I imagine the other points are
> cleared up; after all, we were discussing the technology and not the scenario of the
> person asking.

Only on the question of the name; for the rest I always have the person asking in mind.


-- 
skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
+55 (61) 3546 7191  gTalk: xmpp:leand...@jabber.org
+55 (61) 9302 2691ICQ/AIM: aim:GoIM?screenname=61287803
BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm



Re: RAID e LVM

2016-06-09 Thread Flavio Menezes dos Reis
Well, one detail: ZFS does indeed implement RAID, but a RAID different
from what is commonly done in either hardware or software; after all it is
superior, since it does not suffer from the "write hole" phenomenon (sources:
http://www.raid-recovery-guide.com/raid5-write-hole.aspx and
http://www.techforce.com.br/content/zfs-part-3-quick-presentation-sysadmins),
and so ZFS sets itself apart from the common RAID methods, so much so that its
nomenclature for the RAID5/RAID6 equivalents is raidz and raidz2.

Regards,

On 9 June 2016 at 11:07, Flavio Menezes dos Reis <flavio-r...@pge.rs.gov.br> wrote:




-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764
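
For concreteness, that nomenclature maps onto zpool commands roughly like this; the pool name 'tank' and the device names are placeholders:

# raidz ~ single parity (RAID5-like), raidz2 ~ double parity (RAID6-like)
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
# or, with double parity:
# zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status tank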


Re: RAID e LVM

2016-06-09 Thread Flavio Menezes dos Reis
Dutra,

I think you did not understand when I was referring to RAID technology to the detriment
of the scenario of the person asking. After all, what I said is that ZFS should be
considered instead of RAID+LVM, and then we started extrapolating the
characteristics of RAID technology.

Yes, as for the millions, we are talking about Serpro, so do not limit the technology to
cheap disks. With that I imagine the other points are
cleared up; after all, we were discussing the technology and not the scenario of the
person asking.

Regards,

On 9 June 2016 at 10:54, Leandro Guimarães Faria Corcete DUTRA <l...@dutras.org> wrote:



-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-09 Thread Leandro Guimarães Faria Corcete DUTRA
On 9 June 2016 10:00:01 GMT-03:00, Flavio Menezes dos Reis
wrote:
> Well, as for spending a
>few
>million on storage, from what I discussed with Andre, that was and is precisely
>that
>reality.

His reality, of course, but not that of the person asking, nor of most of us on this
list or in the community.


>Sorry if this is ignorance on my part, but I insist that RAID was
>used in the past with the aim of spending little; today that no longer applies

Applies to whom?


>after all, how do you get 120TB of storage, whether with expensive disks or
>cheap ones?

But who said anything about 120 TB?  Not the person asking, nor anyone else in this
discussion.  Nice that you have those resources, but that is irrelevant to this discussion.


> whether RAID or ZFS (or maybe some other technology I don't know)

ZFS implements RAID.  RAID is a concept implemented by several systems,
including ZFS.




-- 
skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
+55 (61) 3546 7191 (Net)gTalk: xmpp:leand...@jabber.org
+55 (61) 9302 2691 (Vivo) ICQ/AIM: aim:GoIM?screenname=61287803
BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm



Re: RAID e LVM

2016-06-09 Thread Flavio Menezes dos Reis
Ah yes, Dutra, IRAID was a typo. Well, as for spending a few
million on storage, from what I discussed with Andre, that was and is precisely
the reality. Because of ZFS's technical qualities, far superior to RAID
whether in software or even in hardware, they have been using it
enthusiastically.

Sorry if this is ignorance on my part, but I insist that RAID was
used in the past with the aim of spending little; today that no longer applies,
for after all, how do you get 120TB of storage, whether with expensive disks or cheap ones?
In my view you have to use some striping mechanism with
some redundancy. If you use cheap disks, less
reliability/performance; expensive disks (in the sense of quality), more
reliability/performance. So, either way, we need to use
either RAID or ZFS (or maybe some other technology I don't know)
in order to have large contiguous areas of storage.

Certainly my scenario of using ZFS with Debian is not demanding in
terms of performance, but it has been much better than my experiences
with RAID+LVM.

Regards,



On 8 June 2016 at 17:21, Guimarães Faria Corcete DUTRA, Leandro <l...@dutras.org> wrote:



-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-08 Thread Guimarães Faria Corcete DUTRA , Leandro
2016-06-08 16:44 GMT-03:00 Flavio Menezes dos Reis :
>
> As for IRAID

Iraid?  Something new, or just a typo?


> even if the meaning includes inexpensive, in the real world
> ALL the multi-million storage arrays use disks other than
> inexpensive ones.

But is that what we are talking about?  I imagine it is neither the experience of
the person asking nor of most of us.  Not least because if someone spent a few
million (even if only poor Brazilian reais) to buy a piece of equipment, they must
also have hired someone who does not need our humble list
to learn the ABCs of RAID and about BAARF.

Besides, with the dreadful crisis being what it is, 'cheap' gains new
relevance.  And that is the promise of RAID: not having to spend millions,
but getting at least the most fundamental benefits of the millions for mere
myriads.


> As of when I was able to talk to Andre at the last FISL about ZFS, they were
> using FreeBSD with OpenZFS in production.

Interesting.  It makes sense, especially if FreeBSD has already
solved the multiprocessing scalability problems and the
network stack inefficiency of a few years ago, which I imagine it has
done with flying colours.  Although I myself have a soft spot for OpenBSD and its
source code auditing.

Thanks!


-- 
skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
+55 (61) 3546 7191  gTalk: xmpp:leand...@jabber.org
+55 (61) 9302 2691ICQ/AIM: aim:GoIM?screenname=61287803
BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm



Re: RAID e LVM

2016-06-08 Thread Flavio Menezes dos Reis
DUTRA, I'll reverse the order of the answers a bit:

As for IRAID, even if the meaning includes inexpensive, in the real world
ALL the multi-million storage arrays use disks other than
inexpensive ones.

As of when I was able to talk to Andre at the last FISL about ZFS, they were
using FreeBSD with OpenZFS in production.

Regards,

On 8 June 2016 at 16:41, Leandro Guimarães Faria Corcete DUTRA <l...@dutras.org> wrote:



-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-08 Thread Flavio Menezes dos Reis
So that there is no doubt: I have been using ZFS on Debian, not on BSD
(the http://zfsonlinux.org/ project).

On 8 June 2016 at 16:26, Flavio Menezes dos Reis <flavio-r...@pge.rs.gov.br> wrote:




-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-08 Thread Flavio Menezes dos Reis
DUTRA,

Sorry if I did not go deeper with a lot more information; after all, I did not
intend to offer a course on ZFS in a simple post.

As for maturity, it is obviously much greater on BSD (not to mention
Solaris), since it has long been used there, ever since ZFS became free
(along with Solaris). In fact SERPRO has been using ZFS for many
years in preference to hardware RAID. Here is a presentation by André Felipe
Machado: http://assiste.serpro.gov.br/cisl/zfs.html.

And as for the array of cheap disks, that does not hold, because the choice of
disk quality will be tied directly to the quality of the storage
you want.

Regards,

On 8 June 2016 at 15:49, Rodolfo wrote:



-- 
Flávio Menezes dos Reis
Analista de Informática
Seção de Infraestrutura de Rede - Assessoria de Informática
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID e LVM

2016-06-08 Thread Leandro Guimarães Faria Corcete DUTRA
On 8 June 2016 16:26:52 GMT-03:00, Flavio Menezes dos Reis
wrote:
>
>Sorry if I did not go deeper with a lot more information; after all, I
>did not
>intend to offer a course on ZFS in a simple post.

No worries.  I wanted to make clear that the person asking seems to be just starting
out in this world, and things are a bit more complicated than they look at first sight.


> In fact SERPRO has been using ZFS for
>many
>years in preference to hardware RAID.

Are they on Debian, Solaris or BSD, do you know?


>And as for the array of cheap disks, that does not hold, because the choice of
>disk quality will be tied directly to the quality of the
>storage
>you want.

But that is the meaning of RAID, /redundant array of inexpensive disks/.
Because the original idea was an alternative to /mainframe/ disks.



-- 
skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
+55 (61) 3546 7191 (Net)gTalk: xmpp:leand...@jabber.org
+55 (61) 9302 2691 (Vivo) ICQ/AIM: aim:GoIM?screenname=61287803
BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm



Re: RAID e LVM

2016-06-08 Thread Rodolfo
I saw no problem at all with the advice the other colleague gave; researching
and deciding whether or not to follow the advice is up to the person interested
in the message, regardless of how little information was given. So I don't see
any problem with the advice given.

On 8 June 2016 at 14:45, Guimarães Faria Corcete DUTRA, Leandro <l...@dutras.org> wrote:



Re: RAID e LVM

2016-06-08 Thread Guimarães Faria Corcete DUTRA , Leandro
2016-06-08 14:41 GMT-03:00 Flavio Menezes dos Reis :
>
> Apologies to those who still like RAID+LVM, but nothing compares to ZFS. I have
> been using it for two years; besides the robustness, it is incredibly easy to use.

Let us be careful with advice on the list.  ZFS may be a fantastic
product, but what it does, among other things, is implement a
redundant array of [relatively] inexpensive disks, in other words,
RAID.

On Solaris it is perhaps the most recommended way of doing RAID today.
But I have my doubts whether that also holds for other OSes.

In any case, the least we should do is give enough information
for whoever is going to follow the advice given.  For example, the
equivalence between the RAID levels and ZFS's commercial names:
http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/ (I have a grudge against
people who keep renaming things, as ZFS does with the names of the
RAID levels).  And we should also say how mature
the proposed product is on the platform of the person asking.


-- 
skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
+55 (61) 3546 7191  gTalk: xmpp:leand...@jabber.org
+55 (61) 9302 2691ICQ/AIM: aim:GoIM?screenname=61287803
BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm



Re: RAID e LVM

2016-06-08 Thread Flavio Menezes dos Reis
Folks,

Apologies to those who still like RAID+LVM, but nothing compares to ZFS. I have
been using it for two years; besides the robustness, it is incredibly easy to use.

I recommend it to everyone who is thinking of creating yet another RAID+LVM.

Regards,

On 30 May 2016 at 15:46, Caio Ferreira wrote:

> Adriano
>
> Obrigado pela ajuda, funcionou.
>
> > On May 25, 2016, at 6:05 PM, Adriano Rafael Gomes 
> wrote:
> >
> > On Wed, May 25, 2016 at 05:19:10PM -0300, Caio Ferreira wrote:
> >> Aguém teria uma ideia de como fazer um teste em relação ao RAID 1?
> >
> > Segue uma cópia das anotações que eu fiz quando estudei RAID 1. São de
> > 2007, veja se ainda podem ser aplicadas:
> >
> > Exibir o status do array:
> >
> > cat /proc/mdstat
> >
> > sudo mdadm --query --detail /dev/md0
> >
> > Remover um disco hot-swap do array:
> >
> > mdadm --set-faulty /dev/md0 /dev/sda1
> > mdadm --set-faulty /dev/md1 /dev/sda2
> >
> > mdadm --remove /dev/md0 /dev/sda1
> > mdadm --remove /dev/md1 /dev/sda2
> >
> > To remove a non-hot-swap disk you do not need to run the commands
> > above; just shut down the server, pull the disk, and then power the
> > server back on.
> >
> > Re-add a disk to the array:
> >
> > mdadm --re-add /dev/md0 /dev/sda1
> > mdadm --re-add /dev/md1 /dev/sda2
> >
> > Add a new disk to the array:
> >
> > sfdisk -d /dev/sda | sfdisk /dev/sdb
> >
> > mdadm --zero-superblock /dev/sdb1
> > mdadm --zero-superblock /dev/sdb2
> >
> > mdadm --add /dev/md0 /dev/sdb1
> > mdadm --add /dev/md1 /dev/sdb2
>
>


-- 
Flávio Menezes dos Reis
IT Analyst
Network Infrastructure Section - IT Advisory Office
Procuradoria-Geral do Estado do RS
(51) 3288 1764


Re: RAID and LVM

2016-05-30 Thread Caio Ferreira
Adriano

Thanks for the help, it worked.

> On May 25, 2016, at 6:05 PM, Adriano Rafael Gomes  
> wrote:
> 
> On Wed, May 25, 2016 at 05:19:10PM -0300, Caio Ferreira wrote:
>> Would anyone have an idea of how to run a test on the RAID 1?
> 
> Here is a copy of the notes I took when I studied RAID 1. They are
> from 2007; check whether they still apply:
> 
> Display the array status:
> 
> cat /proc/mdstat
> 
> sudo mdadm --query --detail /dev/md0
> 
> Remove a hot-swap disk from the array:
> 
> mdadm --set-faulty /dev/md0 /dev/sda1
> mdadm --set-faulty /dev/md1 /dev/sda2
> 
> mdadm --remove /dev/md0 /dev/sda1
> mdadm --remove /dev/md1 /dev/sda2
> 
> To remove a non-hot-swap disk you do not need to run the commands
> above; just shut down the server, pull the disk, and then power the
> server back on.
> 
> Re-add a disk to the array:
> 
> mdadm --re-add /dev/md0 /dev/sda1
> mdadm --re-add /dev/md1 /dev/sda2
> 
> Add a new disk to the array:
> 
> sfdisk -d /dev/sda | sfdisk /dev/sdb
> 
> mdadm --zero-superblock /dev/sdb1
> mdadm --zero-superblock /dev/sdb2
> 
> mdadm --add /dev/md0 /dev/sdb1
> mdadm --add /dev/md1 /dev/sdb2



Re: RAID and LVM

2016-05-30 Thread Henrique de Moraes Holschuh
On Wed, May 25, 2016, at 10:45, Guimarães Faria Corcete DUTRA, Leandro
wrote:
> 2016-05-24 20:31 GMT-03:00 Lucas Castro :
> > it would not be worthwhile to do a RAID-10 the way you said you want to.
> > it makes little sense to do striping that will end up on the same disk.
> 
> Raid 10 with two disks is just Raid 1.

Not on Linux.  And it will only have the same layout if you use the "n2"
("near", 2 copies) mode of md-raid10 with two devices.  Which, by the way,
is almost never the best choice for performance (it may be the safest
choice for the boot RAID, though).

Some specific md-raid10 topologies will be identical to the more
traditional ones, but that is all.  That is, a two-disk RAID1 has a
topology equivalent to md-raid10 n2 on 2 disks, but the *implementation*
of that in Linux is different, and it does *not* behave the same way.

The difference between md-raid10 n2 with 2 disks (a topology identical to
traditional RAID1) and md-raid1 with two disks lies in how multiple
readers are handled by the md driver: md-raid1 uses at most one component
disk per thread, limiting the maximum throughput to that of a single
component device.  md-raid10, on the other hand, will stripe the accesses
when possible (and nearly double the throughput).

In other words, md-raid1 "shares" better when more than one process is
trying to access the disk, while md-raid10 will deliver >80% more
throughput when only one process is trying to access the disk (it will
access all the components in parallel as if it were a raid0).

In general, for 2 disks, md-raid1 << md-raid10 n2 << md-raid10 f2 in
terms of performance with few threads (a normal desktop).

It makes a brutal difference depending on the workload.  For a desktop,
have no doubts and go with md-raid10 in the case of Linux, even if it is
only for 2 devices (on SSD, use the "near" or "far" mode; on HDD, use
"far").  Remember that md-raid10 works with any number of devices above
1, odd counts included.
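
For example, a minimal sketch of creating such an array, assuming two
spare partitions sdb1 and sdc1 (hypothetical device names):

# 2-device md-raid10, "far" layout with 2 copies:
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sdb1 /dev/sdc1
# or the "near" layout instead:
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=2 /dev/sdb1 /dev/sdc1
# confirm which layout the array is using:
mdadm --detail /dev/md0 | grep -i layout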

On a server with multiple concurrent accesses, only testing will tell:
it depends on the typical workload.

On my desktop I use md-raid10 f2 across 3 HDDs.  And yes, that means it
spread 2 copies over 3 disks.  180% faster than the disk without raid(!)
or in md-raid1, even for writes (of large files).  And some 300% faster
than the worst case of md-raid5, since raid10 never incurs RMW
(read-modify-write to compute the parity).

By the way, md-raid10 f2 on 2 SSDs gives you 99% of the read performance
of a RAID0 across the SSDs (writes stay at >85%), duplicates the data
between the two (protection against the failure of 1 SSD), in exchange
for losing half the space (as one would expect).  Except that one of the
SSDs will be "inverted" because of the "far" layout.

I do not know why "far" is an advantage even on SSDs; by rights it should
make no difference at all, so it may be an implementation detail of the
Linux IO elevator.  Even if there were no difference, it would still be a
good idea to use "far" because it guarantees a different usage pattern
between two identical SSDs, which always helps to spread out an eventual
simultaneous failure when they come from the same batch.

> > if you have the money to buy one more disk, RAID-5 is nice,

More or less.  RAID5 is quite a bit more dangerous than RAID1 and RAID10;
it has some sordid details, on top of the traditional RMW problems, etc.
But it really does come out much "cheaper" in terms of "lost space".

Incidentally, do not use RAID5 for >8TB on disks with a typical error
rate (1E-14), and do not even think about it above >12TB: the chance of
data loss during the rebuild approaches a certainty.  That is what RAID6
is for.
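
A rough sketch of that, assuming four spare partitions sdb1 through sde1
(hypothetical device names):

# RAID6 survives the loss of any two members; usable space is N-2 disks:
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1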

PS: *be careful* with the "offset" layout of md-raid10: as far as I
remember, it can produce layouts with peculiar and often undesirable
characteristics...  draw the layout and check what happens when 1 or
more components fail, then compare it with the other layouts (near and
far).
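
One way to make that comparison, sketched with hypothetical throwaway
partitions sdb1/sdc1/sdd1 on a scratch array:

mdadm --create /dev/md9 --level=10 --layout=o2 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --detail /dev/md9           # reports the layout in use, e.g. "offset=2"
mdadm /dev/md9 --fail /dev/sdb1   # simulate losing one component
cat /proc/mdstat                  # check whether the array stays usable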

> With disk prices today, it is hard to justify any Raid other than 10
> unless there is a low write load, a very well-tuned backup routine,
> and a high tolerance for downtime and slowness.

I agree.

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique de Moraes Holschuh 



Re: RAID and LVM

2016-05-25 Thread Adriano Rafael Gomes
On Wed, May 25, 2016 at 05:19:10PM -0300, Caio Ferreira wrote:
> Would anyone have an idea of how to run a test on the RAID 1?

Here is a copy of the notes I took when I studied RAID 1. They are from
2007; check whether they still apply:

Display the array status:

cat /proc/mdstat

sudo mdadm --query --detail /dev/md0

Remove a hot-swap disk from the array:

mdadm --set-faulty /dev/md0 /dev/sda1
mdadm --set-faulty /dev/md1 /dev/sda2

mdadm --remove /dev/md0 /dev/sda1
mdadm --remove /dev/md1 /dev/sda2

To remove a non-hot-swap disk you do not need to run the commands
above; just shut down the server, pull the disk, and then power the
server back on.

Re-add a disk to the array:

mdadm --re-add /dev/md0 /dev/sda1
mdadm --re-add /dev/md1 /dev/sda2

Add a new disk to the array:

sfdisk -d /dev/sda | sfdisk /dev/sdb

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdb2
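
After the --add, one way to follow the rebuild (a sketch; both commands
are standard, assuming the array is /dev/md0):

watch -n1 cat /proc/mdstat    # live view of the resync/recovery progress
mdadm --wait /dev/md0         # returns only once the resync has finished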




Re: RAID and LVM

2016-05-25 Thread paulo bruck
Yes, the easiest way would be to shut down the server and pull one of
the HDs that are part of the RAID1.

Power the machine back on and run the command:

cat /proc/mdstat   and see how the HDs are doing.

If everything is still working and /proc/mdstat shows something like U_
or _U, it means that 1 of the HDs is out of the RAID.


After this test, shut down the server, reinsert the HD, and power the
server back on.


Check cat /proc/mdstat again.

If the HD is still missing you can add it back to the raid device, with
something like:

mdadm /dev/md1 --manage --add /dev/sdc1   (something along these lines).


To check how the rebuild is progressing:

cat /proc/mdstat

Have fun 80)
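
If you would rather rehearse all of this without touching a real disk,
here is a minimal sandbox sketch using loop devices (file and device
names are hypothetical; run as root):

truncate -s 1G /tmp/d0.img /tmp/d1.img   # two 1 GB backing files
L0=$(losetup -f --show /tmp/d0.img)      # attach them to free loop devices
L1=$(losetup -f --show /tmp/d1.img)
mdadm --create /dev/md9 --level=1 --raid-devices=2 "$L0" "$L1"
mdadm /dev/md9 --fail "$L1"              # simulate a failure; check /proc/mdstat
mdadm /dev/md9 --remove "$L1"
mdadm /dev/md9 --add "$L1"               # re-add and watch the resync
mdadm --stop /dev/md9                    # tear the sandbox down
losetup -d "$L0" "$L1"
rm /tmp/d0.img /tmp/d1.img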

On May 25, 2016 at 17:19, Caio Ferreira  wrote:

> List
>
> I did the following to set up my RAID and LVM. A computer with three
> hard disks of 1GB each: one for the OS and the other two for the RAID and LVM.
>
> ###
>
> 1. Installation
> sudo apt-get install mdadm lvm2 dmsetup xfsprogs
>
> 2. RAID
>
> # disks
> /dev/sdb
> /dev/sdc
>
> # create the partitions on disks sdb and sdc, setting the partition type to fd
> sudo fdisk /dev/sdb
> sudo fdisk /dev/sdc
>
> # create the raid 1 with the two hard disks
> sudo mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 /dev/sd[bc]1
> # information about the raid 1
> sudo mdadm --detail /dev/md0
> # raid 1 configuration file
> /etc/mdadm/mdadm.conf
>
> # append the raid 1 to the configuration file
> sudo su
> mdadm -Es | grep md >> /etc/mdadm/mdadm.conf
> exit
>
> # check the raid 1 mount point
> sudo df -h
>
> # check the raid 1 status
> sudo mdadm --detail /dev/md0
> 3. LVM
>
> # partition the disk, setting the partition type to 8e (Linux LVM)
> $ sudo fdisk /dev/sdX
> Hex code (type L to list codes): <-- 8e
>
> # Prepare the physical volume
> $ sudo pvcreate /dev/md0p1
>
> # Show information about the physical volume
> $ sudo pvdisplay
>
> # Create the volume group
> $ sudo vgcreate lvm_volume /dev/md0p1
>
> # Show information about the volume group
> $ sudo vgdisplay
>
> # Create the logical volumes
> $ sudo lvcreate --name share --size 500M lvm_volume
> $ sudo lvcreate --name backup --size 500M lvm_volume
>
> # Format
> $ sudo mkfs.ext4 /dev/lvm_volume/share
> $ sudo mkfs.ext4 /dev/lvm_volume/backup
>
> # Mount
> $ sudo mkdir /mnt/share
> $ sudo mkdir /mnt/backup
> $ sudo mount /dev/lvm_volume/share /mnt/share
> $ sudo mount /dev/lvm_volume/backup /mnt/backup
>
> # Fstab
> $ sudo vi /etc/fstab
> /dev/lvm_volume/share    /mnt/share   ext4   rw,noatime   0 0
> /dev/lvm_volume/backup   /mnt/backup  ext4   rw,noatime   0 0
> ###
>
> Would anyone have an idea of how to run a test on the RAID 1?
>
> Thank you in advance for your attention.
>
>  .''`.   Caio Abreu Ferreira
> : :'  :  abreuf...@gmail.com
> `. `'`   Debian User
>   `-
>
> 2016-05-25 10:45 GMT-03:00 Guimarães Faria Corcete DUTRA, Leandro <
> l...@dutras.org>:
>
>> 2016-05-24 20:31 GMT-03:00 Lucas Castro :
>> > it would not be worthwhile to do a RAID-10 the way you said you want to.
>> > it makes little sense to do striping that will end up on the same disk.
>>
>> Raid 10 with two disks is just Raid 1.
>>
>>
>> > if you have the money to buy one more disk, RAID-5 is nice,
>> > you get 75% of the array's capacity and can still lose one disk just
>> > like with raid-1.
>>
>> It is not that simple; I would leave the third disk as a spare for the
>> mirror (Raid 1 with a hot spare).  Raid 5 carries many risks —
>> usually two of the at-least-three disks come from the same batch, and
>> after one fails there is a high probability that the second will fail
>> during the recovery, which is extremely slow and long, on top of
>> writes being slower in normal operation —, see Baarf .
>>
>> With disk prices today, it is hard to justify any Raid other than 10
>> unless there is a low write load, a very well-tuned backup routine,
>> and a high tolerance for downtime and slowness.
>>
>>
>> --
>> skype:leandro.gfc.dutra?chat  Yahoo!: ymsgr:sendIM?lgcdutra
>> +55 (61) 3546 7191  gTalk: xmpp:leand...@jabber.org
>> +55 (61) 9302 2691ICQ/AIM: aim:GoIM?screenname=61287803
>> BRAZIL GMT−3  MSN: msnim:chat?contact=lean...@dutra.fastmail.fm
>>
>>
>


-- 
Paulo Ricardo Bruck, consultant
tel 011 3596-4881/4882  011 98140-9184 (TIM)
http://www.contatogs.com.br
http://www.protejasuarede.com.br
gpg AAA59989 at wwwkeys.us.pgp.net

