Re: f20 lvm - inactive LV

2014-03-12 Thread Michal Kopacki


- Original Message -
From: Chris Murphy <li...@colorremedies.com>
To: mkopa...@gmail.com, Community support for Fedora users <users@lists.fedoraproject.org>
Sent: Monday, March 10, 2014 3:29:13 AM
Subject: Re: f20 lvm - inactive LV


cut

I would start with:

journalctl -b -x -o short-monotonic --no-pager

And then start searching for some of the above items, like pvscan, to see if it's 
scanning for PVs, whether it finds anything, whether it activates anything, and if 
not, why not. The status output can also be helpful.
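
For example, to pull just the LVM-related lines out of that, something like 
this should work (a simple grep sketch, adjust the pattern as needed):

journalctl -b -x -o short-monotonic --no-pager | grep -E 'pvscan|lvm|device-mapper'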

Following your advice, I reviewed the logs and found a few disturbing and 
unclear errors:

pvscan[792]: device-mapper: suspend ioctl on  failed: Invalid argument
pvscan[792]: Unable to suspend rootvg-varlv (253:9)
lvm[796]: Monitoring RAID device rootvg-varlv for events.
pvscan[774]: device-mapper: suspend ioctl on  failed: Invalid argument
pvscan[774]: Unable to suspend rootvg-tmplv (253:14)
lvm[796]: Monitoring RAID device rootvg-tmplv for events.
pvscan[792]: device-mapper: suspend ioctl on  failed: Invalid argument
pvscan[792]: Unable to suspend rootvg-usrlv (253:20)
lvm[796]: Monitoring RAID device rootvg-usrlv for events.
pvscan[774]: device-mapper: suspend ioctl on  failed: Invalid argument
pvscan[774]: Unable to suspend rootvg-rootlv (253:25)
pvscan[774]: rootvg: refresh before autoactivation failed.
lvm[796]: Monitoring RAID device rootvg-rootlv for events.
pvscan[792]: rootvg: refresh before autoactivation failed.
lvm[690]: 1 logical volume(s) in volume group datavg monitored
lvm[690]: 26 logical volume(s) in volume group rootvg monitored
lvm[690]: 4 logical volume(s) in volume group lxcvg monitored
systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd 
or progress polling.
-- Subject: Unit lvm2-monitor.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit lvm2-monitor.service has finished starting up.
--
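
A quick way to check whether any of those devices was actually left suspended 
(dmsetup ships with device-mapper; rootvg-varlv is just one of the devices 
taken from the log above):

dmsetup info rootvg-varlv | grep State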

What does "suspend" mean in this case?

-- 
regards,
Michal



  


f20 lvm - inactive LV

2014-03-07 Thread Michal Kopacki
Hello everyone, this is my first post to this list.

After a break of a few years, I've decided to take Fedora for a test drive (the 
last Fedora I used was around 11; I use CentOS/Red Hat on a daily basis), and I 
must say there are many changes here, both good and bad. Anyway, here is my 
problem:

There is one thing about LVM that I can't understand. I created a few VGs and 
LVs and added them to fstab (with default options), but after that strange 
things started to happen during boot: not all LVs get activated (with a degree 
of randomness to it). The solution I arrived at is that every LV I want mounted 
by default has to be listed as a kernel parameter (i.e. 
rd.lvm.lv=rootvg/rootlv).
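
In practice that means something like the following (standard Fedora BIOS 
paths assumed; the varlv entry is just an example, and on EFI systems the 
grub.cfg lives under /boot/efi/EFI/fedora/ instead):

# in /etc/default/grub, one rd.lvm.lv= entry per LV to activate:
GRUB_CMDLINE_LINUX="rhgb quiet rd.lvm.lv=rootvg/rootlv rd.lvm.lv=rootvg/varlv"

# then regenerate the grub configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg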

I can live with that for the system LVs, but the problem is that I have many 
non-OS LVs on that machine, and listing all of them as kernel parameters 
doesn't seem right.

1. Am I doing something wrong?
2. Is there a new method other than the default fstab entries?
3. What is rd.lvm.lv for?

-- 
regards,
Michal