On Sun, Mar 1, 2015 at 9:03 PM, Valeri Galtsev
wrote:
> There may be a more expensive route. Depending on how valuable the data
> are, you may think of contacting professional recovery services. They
> usually take about a month, and they are expensive. Decent ones will be on the
> order of $1000 if i
On Sun, Mar 1, 2015 at 8:07 PM, Khemara Lin wrote:
> Dear Chris, James, Valeri and all,
>
> Sorry not to have responded; I'm still struggling with the recovery,
> with no success.
>
> I've been trying to set up a new system with the exact same scenario (4 2TB
> hard drives and remove the 3rd o
On Sun, March 1, 2015 9:07 pm, Khemara Lin wrote:
> Dear Chris, James, Valeri and all,
>
> Sorry not to have responded; I'm still struggling with the recovery,
> with no success.
>
> I've been trying to set up a new system with the exact same scenario (4
> 2TB hard drives and remove the 3rd on
Dear Chris, James, Valeri and all,
Sorry not to have responded; I'm still struggling with the recovery,
with no success.
I've been trying to set up a new system with the exact same scenario (4
2TB hard drives and remove the 3rd one afterwards). I still cannot recover.
We did have a back
On Sat, Feb 28, 2015 at 5:59 PM, James A. Peltier wrote:
> There is no difference between a single disk system and a multi-disk system
> in terms of being able to dynamically resize volumes that reside on a volume
> group. Having the ability to resize a volume to be either larger or smaller
>
- Original Message -
| On Sat, Feb 28, 2015 at 4:28 PM, James A. Peltier wrote:
|
| > People who understand how to use the system do not suffer these problems.
| > LVM adds a bit of complexity for a bit of extra benefits. You can't
| > blame LVM for user error. Not having monitoring in
On Sat, Feb 28, 2015 at 4:29 PM, Valeri Galtsev
wrote:
> You are implying that firmware of hardware RAID cards is somehow buggier
> than software of software RAID plus Linux kernel (sorry if I
> misinterpreted your point).
"Drives, and hardware RAID cards are subject to firmware bugs, just as we
On Sat, Feb 28, 2015 at 4:28 PM, James A. Peltier wrote:
> People who understand how to use the system do not suffer these problems.
> LVM adds a bit of complexity for a bit of extra benefits. You can't blame
> LVM for user error. Not having monitoring in place or backups is a user
> proble
On Sat, February 28, 2015 4:22 pm, Chris Murphy wrote:
> On Sat, Feb 28, 2015 at 1:26 PM, Valeri Galtsev
> wrote:
>> Indeed. That is why: no LVMs in my server room. Even no software RAID.
>> Software RAID relies on the system itself to fulfill its RAID function;
>> what if kernel panics before so
- Original Message -
| On Fri, 27 Feb 2015 19:24:57 -0800
| John R Pierce wrote:
| > On 2/27/2015 4:52 PM, Khemara Lyn wrote:
| > >
| > > What is the right way to recover the remaining PVs left?
| >
| > take a filing cabinet packed full of 10s of 1000s of files of 100s of
| > pages each
On Sat, Feb 28, 2015 at 1:26 PM, Valeri Galtsev
wrote:
> Indeed. That is why: no LVMs in my server room. Even no software RAID.
> Software RAID relies on the system itself to fulfill its RAID function;
> what if kernel panics before software RAID does its job? Hardware RAID
> (for huge filesystems
On Fri, February 27, 2015 10:00 pm, Marko Vojinovic wrote:
> On Fri, 27 Feb 2015 19:24:57 -0800
> John R Pierce wrote:
>> On 2/27/2015 4:52 PM, Khemara Lyn wrote:
>> >
>> > What is the right way to recover the remaining PVs left?
>>
>> take a filing cabinet packed full of 10s of 1000s of files of
On Fri, Feb 27, 2015 at 9:00 PM, Marko Vojinovic wrote:
> And this is why I don't like LVM to begin with. If one of the drives
> dies, you're screwed not only for the data on that drive, but even for
> data on remaining healthy drives.
It has its uses, just like RAID0 has uses. But yes, as the nu
And then Btrfs (no LVM).
mkfs.btrfs -d single /dev/sd[bcde]
mount /dev/sdb /mnt/bigbtr
cp -a /usr /mnt/bigbtr
Unmount. Poweroff. Kill 3rd of 4 drives. Poweron.
mount -o degraded,ro /dev/sdb /mnt/bigbtr  ## degraded,ro is required or mount fails
cp -a /mnt/bigbtr/usr/ /mnt/btrfs          ## copy to a d
On Fri, Feb 27, 2015 at 9:44 PM, John R Pierce wrote:
> On 2/27/2015 8:00 PM, Marko Vojinovic wrote:
>>
>> And this is why I don't like LVM to begin with. If one of the drives
>> dies, you're screwed not only for the data on that drive, but even for
>> data on remaining healthy drives.
>
>
> with
On 2/27/2015 8:00 PM, Marko Vojinovic wrote:
And this is why I don't like LVM to begin with. If one of the drives
dies, you're screwed not only for the data on that drive, but even for
data on remaining healthy drives.
with classic LVM, you were supposed to use raid for your PV's. The new
LV
OK so ext4 this time, with new disk images. I notice at mkfs.ext4 that
each virtual disk goes from 2MB to 130MB-150MB each. That's a lot of
fs metadata, and it's fairly evenly distributed across each drive.
Copied 3.5GB to the volume. Unmount. Poweroff. Killed 3rd of 4. Boot.
Mounts fine. No error
On Fri, Feb 27, 2015 at 8:24 PM, John R Pierce wrote:
> On 2/27/2015 4:52 PM, Khemara Lyn wrote:
>>
>> I understand; I tried it in the hope that I could activate the LV again
>> with a new PV replacing the damaged one. But I still could not activate
>> it.
>>
>> What is the right way to recover t
On Fri, 27 Feb 2015 19:24:57 -0800
John R Pierce wrote:
> On 2/27/2015 4:52 PM, Khemara Lyn wrote:
> >
> > What is the right way to recover the remaining PVs left?
>
> take a filing cabinet packed full of 10s of 1000s of files of 100s of
> pages each, with the index cards interleaved in the fi
On 2/27/2015 4:52 PM, Khemara Lyn wrote:
I understand; I tried it in the hope that I could activate the LV again
with a new PV replacing the damaged one. But I still could not activate
it.
What is the right way to recover the remaining PVs left?
take a filing cabinet packed full of 10s of 100
https://lists.fedoraproject.org/pipermail/users/2015-February/458923.html
I don't see how the VG metadata is restored with any of the commands
suggested thus far. I think that's vgcfgrestore. Otherwise I'd think
that LVM has no idea how to do the LE to PE mapping.
In any case, this sounds like a
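[A minimal sketch of the vgcfgrestore step mentioned above, assuming the vg_hosting name from this thread; the archive filename is a placeholder — LVM keeps automatic metadata backups under /etc/lvm/archive, and the actual name on the affected host will differ:]

```shell
# List the archived metadata backups LVM keeps for this VG
# (by default under /etc/lvm/archive).
vgcfgrestore --list vg_hosting

# Dry-run the restore against a chosen archive file first,
# then drop --test to apply it for real.
vgcfgrestore --test -f /etc/lvm/archive/vg_hosting_00042.vg vg_hosting
vgcfgrestore -f /etc/lvm/archive/vg_hosting_00042.vg vg_hosting
```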
Ok, sorry about that.
On Sat, February 28, 2015 9:13 am, Chris Murphy wrote:
> OK. It's extremely rude to cross post the same question across multiple
> lists like this at exactly the same time, and without at least showing the
> cross posting. I just replied to the one on Fedora users before I saw
OK. It's extremely rude to cross post the same question across multiple
lists like this at exactly the same time, and without at least showing
the cross posting. I just replied to the one on Fedora users before I
saw this post. This sort of thing wastes people's time. Pick one list
based on the best
On Sat, 2015-02-28 at 07:25 +0700, Khemara Lyn wrote:
> I have tried with the following:
>
> 1. Removing the broken PV:
>
> # vgreduce --force vg_hosting /dev/sdc1
> Physical volume "/dev/sdc1" still in use
Next time, try "vgreduce --removemissing" first.
In my experience, any lvm command u
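[For reference, the suggested sequence would look roughly like this — a sketch only, using the vg_hosting name from this thread; the --force variant discards the segments of any LV that spanned the missing PV, so it is destructive:]

```shell
# Drop the missing PV's metadata from the VG. Without --force
# this refuses if any LV still uses extents on the missing PV.
vgreduce --removemissing vg_hosting

# If LVs span the missing PV, this variant removes those LV
# segments as well -- the data on them is lost:
vgreduce --removemissing --force vg_hosting

# Then try to reactivate whatever remains:
vgchange -ay vg_hosting
```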
Hello James and All,
For your information, here's the listing looks like:
[root@localhost ~]# pvs
  PV         VG         Fmt  Attr PSize PFree
  /dev/sda1  vg_hosting lvm2 a--  1.82t    0
  /dev/sdb2  vg_hosting lvm2 a--  1.82t    0
  /dev/sdc1  vg_hosting lvm2 a--  1.82t    0
  /dev/sdd1  vg_ho
Dear John,
I understand; I tried it in the hope that I could activate the LV again
with a new PV replacing the damaged one. But I still could not activate
it.
What is the right way to recover the remaining PVs left?
Regards,
Khem
On Sat, February 28, 2015 7:42 am, John R Pierce wrote:
> On 2/2
On 2/27/2015 4:37 PM, James A. Peltier wrote:
| I was able to create a new PV and restore the VG Config/meta data:
|
| # pvcreate --restorefile ... --uuid ... /dev/sdc1
|
oh, that step means you won't be able to recover ANY of the data that
was formerly on that PV.
--
john r pierce
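[The step being warned about, spelled out as a sketch — the backup path is LVM's default and the UUID is a placeholder for the old PV's UUID as recorded in that backup. As John notes, pvcreate writes a fresh PV label, so this only rebuilds LVM metadata; nothing formerly stored on the failed disk comes back:]

```shell
# Recreate the PV label on the replacement disk, reusing the old
# PV's UUID from the metadata backup, then restore the VG metadata.
pvcreate --restorefile /etc/lvm/backup/vg_hosting \
         --uuid <old-pv-uuid> /dev/sdc1
vgcfgrestore -f /etc/lvm/backup/vg_hosting vg_hosting
```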
Dear James,
Thank you for being quick to help.
Yes, I could see all of them:
# vgs
# lvs
# pvs
Regards,
Khem
On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
>
>
> - Original Message -
> | Dear All,
> |
> | I am in desperate need of LVM data rescue for my server.
> | I have
Thank you, John for your quick reply.
That is what I hope. But how to do it? I cannot even activate the LV with
the remaining PVs.
Thanks,
Khem
On Sat, February 28, 2015 7:34 am, John R Pierce wrote:
> On 2/27/2015 4:25 PM, Khemara Lyn wrote:
>
>> Right now, the third hard drive is damaged; and t
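[One way to attempt what Khem is asking, activating the VG with a PV missing, is LVM's partial activation — a sketch, assuming the lv_home/vg_hosting names from this thread; reads that land on the missing PV will still fail, so mount read-only and salvage what you can:]

```shell
# Older lvm2 spells this "vgchange -ay --partial vg_hosting";
# newer releases use --activationmode.
vgchange -ay --activationmode partial vg_hosting
mount -o ro /dev/vg_hosting/lv_home /mnt/rescue
```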
- Original Message -
| Dear All,
|
| I am in desperate need of LVM data rescue for my server.
| I have a VG called vg_hosting consisting of 4 PVs each contained in a
| separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
| And this LV: lv_home was created to use all the
On 2/27/2015 4:25 PM, Khemara Lyn wrote:
Right now, the third hard drive is damaged; and therefore the third PV
(/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
your data is spread across all 4 drives, and yo
Dear All,
I am in desperate need of LVM data rescue for my server.
I have a VG called vg_hosting consisting of 4 PVs each contained in a
separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
And this LV: lv_home was created to use all the space of the 4 PVs.
Right now, the third h