Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger

hello again ... no one interested? ;-)

I understand in a way ...

Maybe I have something in the kernel misconfigured ...

Right now I get these messages again:

[ 1998.118658] hpet1: lost 1 rtc interrupts

Should I disable HPET in the BIOS and/or via kernel command line?

I never know how to set the timer-related kernel options, especially
with KVM hosting.
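
If I read the kernel docs right, booting with

hpet=disable clocksource=tsc

on the kernel command line should rule HPET out (untested here, and I am
not sure how wise pinning the clocksource is on a KVM host).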

Stefan



Re: [gentoo-user] N failed logins since your last login

2014-06-11 Thread Florian HEGRON

Is there a way to display that 'failed logins' message without using
gdm/kdm/xdm?


Hello,

See this: http://linux.die.net/man/8/faillog

I am not on my Gentoo machine so I don't know if the faillog file is 
really present.



With this, you just have to make a script with Bash / faillog / awk.
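
A minimal sketch, untested and with the column layout assumed from the man
page:

#!/bin/bash
# list each user with recorded login failures and the count
faillog | awk 'NR > 1 { print $1 ": " $2 " failed logins" }'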


Regards,



Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 05/27/2014 02:03 PM, Stefan G. Weichinger wrote:
 I think I have some IO-topic going on ... very likely some mismatch of
 block sizes ... the hw-raid, then LVM, then the snapshot on top of
 that ... and a filesystem with properties as target ... oh my. Choosing
 noop as IO-scheduler helps a bit but maybe I have to roll back and
 rebuild one of the HW-RAID-Arrays with a different blocksize. Stefan 

Hi Stefan,
block size / stripe size mismatches only really penalise random io, if
you are trying to use dd and have slow speeds this would suggest
something else is awry. 
I don't know the c600 raid chip personally, but in trying to google it, it
appears to be a motherboard based raid device?  is it a real raid or
fakeraid?

I'm a little confused over your setup, which makes it hard to help. i'm
sorry if there is duplication but it would be useful to have all the info
in one hit rather than trying to piece it together from all your messages.
1. please can you list your hardware raid config.  I'm looking for the
physical disk sizes, the virtual disks and their raid types. do you have
cache enabled on the raid card, is there a background scrub or anything
like that running? do you have active seek / prefetch configured?
parity size being 50% of total size is just odd to me - but i guess
these are mirrors? but it says raid-level 3 --- just odd. most setups
use raid0 (stripe, not really raid), raid1 (mirror), raid5 (parity stripe),
raid6 (double parity stripe) or combinations like raid50.. raid3 allocates
a single disk to parity but is very rarely used.
2. how many other devices are actively doing IO? do you have any other
raid cards/io cards of note that might be clashing on the board.
3. do you have active I/O when doing your performance tests? if you have
several virtual machines running depending on what they are doing they
will crucify your access.
4. are you using any type of CGroups ?
5. i'm also confused over your LVM config.  please can you send the
output of vgs, pvs and lvs -a -o +devices
6. please also send the output of mount
7. do you have atop or iotop that you can use to monitor performance?
specifically we are looking for disk ios per device and disk latency per
device, both before and while you are trying to run your backup.

this should give us a better idea of where the problems lie.
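
(if atop/iotop are not installed, sysstat's iostat is a rough stand-in --
a sketch, assuming sysstat is available:
# iostat -x 2
and watch r/s, w/s and await per device before and during the backup)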




Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 10:34 AM, Stefan G. Weichinger wrote:
 Am 11.06.2014 11:19, schrieb thegeezer:

 Hi Stefan,
 block size / stripe size mismatches only really penalise random io, if
 you are trying to use dd and have slow speeds this would suggest
 something else is awry. 
 I don't know the c600 raid chip personally, but in trying to google it, it
 appears to be a motherboard based raid device?  is it a real raid or
 fakeraid?

 I'm a little confused over your setup, which makes it hard to help. i'm
 sorry if there is duplication but it would be useful to have all the info
 in one hit rather than trying to piece it together from all your messages.
 OK, will do ...


 1. please can you list your hardware raid config.  I'm looking for the
 physical disk sizes, the virtual disks and their raid types. do you have
 cache enabled on the raid card, is there a background scrub or anything
 like that running? do you have active seek / prefetch configured?
 parity size being 50% of total size is just odd to me - but i guess
 these are mirrors? but it says raid-level 3 --- just odd. most setups
 use raid0 (stripe, not really raid), raid1 (mirror), raid5 (parity stripe),
 raid6 (double parity stripe) or combinations like raid50.. raid3 allocates
 a single disk to parity but is very rarely used.
 Basically 3 RAID-6 hw-raids over 6 SAS hdds.

OK so i'm confused again.   RAID6 requires minimum of 4 drives.
if you have 3 raid6's then you would need 12 drives (coffee hasn't quite
activated in me yet so my maths may not be right)
or do you have essentially the first part of each of the six drives be
virtual disk 1, the second part of each of the six drives virtual disk 2
and the third part be virtual disk 3 -- if this is the case bear in mind
that the slowest part of the disk is the end of the disk -- so you are
essentially hobbling your virtual disk3 but only a little, instead of
being around 150MB/sec it might run at 80.

you might also like to try a simple test of the following (yes lvs count
as block devices)
# hdparm -t /dev/sda
# hdparm -t /dev/sdb
# hdparm -t /dev/sdc
# hdparm -t /dev/vg01/winserver_disk0
# hdparm -t /dev/vg01/amhold

 I don't know where this RAID-3 term comes from -

 #  megacli -LDInfo -Lall -aALL


 Adapter 0 -- Virtual Drive Information:
 Virtual Drive: 0 (Target Id: 0)
 Name:root
 RAID Level  : Primary-6, Secondary-3, RAID Level Qualifier-3
 Size: 500.0 GB
 Sector Size : 512
 Is VD emulated  : No
 Parity Size : 250.0 GB
 State   : Optimal
 Strip Size  : 256 KB
 Number Of Drives: 6
 Span Depth  : 1
 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if
 Bad BBU
 Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if
 Bad BBU
 Default Access Policy: Read/Write
 Current Access Policy: Read/Write
 Disk Cache Policy   : Disabled
 Encryption Type : None
 Bad Blocks Exist: No
 Is VD Cached: No


 Virtual Drive: 1 (Target Id: 1)
 Name:swap
 RAID Level  : Primary-6, Secondary-3, RAID Level Qualifier-3
 Size: 8.0 GB
 Sector Size : 512
 Is VD emulated  : No
 Parity Size : 4.0 GB
 State   : Optimal
 Strip Size  : 64 KB
 Number Of Drives: 6
 Span Depth  : 1
 Default Cache Policy: WriteBack, ReadAheadNone, Cached, No Write Cache
 if Bad BBU
 Current Cache Policy: WriteBack, ReadAheadNone, Cached, No Write Cache
 if Bad BBU
 Default Access Policy: Read/Write
 Current Access Policy: Read/Write
 Disk Cache Policy   : Disabled
 Encryption Type : None
 Bad Blocks Exist: No
 Is VD Cached: No


 Virtual Drive: 2 (Target Id: 2)
 Name:lvm
 RAID Level  : Primary-6, Secondary-3, RAID Level Qualifier-3
 Size: 1.321 TB
 Sector Size : 512
 Is VD emulated  : No
 Parity Size : 676.5 GB
 State   : Optimal
 Strip Size  : 64 KB
 Number Of Drives: 6
 Span Depth  : 1
 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if
 Bad BBU
 Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if
 Bad BBU
 Default Access Policy: Read/Write
 Current Access Policy: Read/Write
 Disk Cache Policy   : Disabled
 Encryption Type : None
 Bad Blocks Exist: No
 Is VD Cached: No




 2. how many other devices are actively doing IO? do you have any other
 raid cards/io cards of note that might be clashing on the board.
 The Intel C600 Controller seems to only run the LTO-4-drive in the
 server while the

 LSI Logic / Symbios Logic MegaRAID SAS 2108

 runs the 6 hard disks.

 # lspci
 00:00.0 Host bridge: Intel Corporation Xeon E5/Core i7 DMI2 (rev 07)
 00:01.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express
 Root Port 1a (rev 07)
 00:01.1 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express
 Root Port 1b (rev 07)
 00:03.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express
 Root Port 3a in PCI Express Mode (rev 

Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 11:14 AM, thegeezer wrote:
 just some extra thoughts 
*cough* yeah i meant to keep typing!

the extra thoughts are that the better way of doing this would be to
create:
RAID1: physical disks 1+2
RAID6: physical disks 3,4,5,6

then put lvm on there as vg01 with two PVs, one on the raid1 virtualdisk
and one on the raid6 virtualdisk.
you can then create new LVs and choose to put them on fast or slow
(raid1 or raid6).
you can then have system and archive data on raid6 and VMs on fast.
of course you could always have all 6 disks set up for raid0+1 and then
it would all be very fast.

the general gist of what i'm trying to say is have the hardware raid
card do the hardware raid across the disks as required, then have LVM do
the partitioning of the storage.   at the moment you have the hardware
raid doing the partitioning.
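
as a rough sketch, with made-up device names -- substitute whatever the
controller exposes the two virtual disks as:
# pvcreate /dev/sdX /dev/sdY
# vgcreate vg01 /dev/sdX /dev/sdY
# put a VM disk explicitly on the fast (raid1) PV:
# lvcreate -n winserver_disk0 -L 100G vg01 /dev/sdX
# and archive data on the slow (raid6) PV:
# lvcreate -n amhold -L 500G vg01 /dev/sdY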

also i'd always recommend you have _inside the case_ a hotspare
configured as a global spare



Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger
Am 11.06.2014 12:14, schrieb thegeezer:

 Basically 3 RAID-6 hw-raids over 6 SAS hdds.
 
 OK so i'm confused again.   RAID6 requires minimum of 4 drives.
 if you have 3 raid6's then you would need 12 drives (coffee hasn't quite
 activated in me yet so my maths may not be right)
 or do you have essentially the first part of each of the six drives be
 virtual disk 1, the second part of each of the six drives virtual disk 2
 and the third part be virtual disk 3 -- if this is the case bear in mind
 that the slowest part of the disk is the end of the disk -- so you are
 essentially hobbling your virtual disk3 but only a little, instead of
 being around 150MB/sec it might run at 80.


I'd be happy to see 80 !

Ran atop now while dd-ing stuff to an external disk and got ~1MB/s for
2.5GB of data.

(this is even too slow for USB ...)

I am unsure what to post here from atop ... ?


To the initial question:

Yes, imagine the six disks split or partitioned at the level of the
hardware raid controller (as you described above).

 you might also like to try a simple test of the following (yes lvs count
 as block devices)
 # hdparm -t /dev/sda
 # hdparm -t /dev/sdb
 # hdparm -t /dev/sdc
 # hdparm -t /dev/vg01/winserver_disk0
 # hdparm -t /dev/vg01/amhold

everything around 380 MB/s ... only ~350 MB/s for
/dev/vg01/winserver_disk0 (which still is nice)

 i notice the core i7 only now.  have you disabled turbo boost in the bios ?
 this is great for a desktop but awful for a server as it disables all
 those extra cores for a single busy thread

I checked BIOS settings yesterday and don't remember a turbo boost
option. I will check once more.

 cgroups are a great way of limiting or guaranteeing performance. by
 default i believe systemd will aim for user interactivity, but you want
 to change that to be more balanced.
 maybe someone else can suggest how best to configure systemd cgroups.
 meanwhile can you
 # tree /sys/fs/cgroup/

# !tr
tree /sys/fs/cgroup/
/sys/fs/cgroup/
├── cpu -> cpu,cpuacct
├── cpuacct -> cpu,cpuacct
├── cpu,cpuacct
│   ├── cgroup.clone_children
│   ├── cgroup.event_control
│   ├── cgroup.procs
│   ├── cgroup.sane_behavior
│   ├── cpuacct.stat
│   ├── cpuacct.usage
│   ├── cpuacct.usage_percpu
│   ├── cpu.shares
│   ├── notify_on_release
│   ├── release_agent
│   └── tasks
├── cpuset
│   ├── cgroup.clone_children
│   ├── cgroup.event_control
│   ├── cgroup.procs
│   ├── cgroup.sane_behavior
│   ├── cpuset.cpu_exclusive
│   ├── cpuset.cpus
│   ├── cpuset.mem_exclusive
│   ├── cpuset.mem_hardwall
│   ├── cpuset.memory_migrate
│   ├── cpuset.memory_pressure
│   ├── cpuset.memory_pressure_enabled
│   ├── cpuset.memory_spread_page
│   ├── cpuset.memory_spread_slab
│   ├── cpuset.mems
│   ├── cpuset.sched_load_balance
│   ├── cpuset.sched_relax_domain_level
│   ├── machine.slice
│   │   ├── cgroup.clone_children
│   │   ├── cgroup.event_control
│   │   ├── cgroup.procs
│   │   ├── cpuset.cpu_exclusive
│   │   ├── cpuset.cpus
│   │   ├── cpuset.mem_exclusive
│   │   ├── cpuset.mem_hardwall
│   │   ├── cpuset.memory_migrate
│   │   ├── cpuset.memory_pressure
│   │   ├── cpuset.memory_spread_page
│   │   ├── cpuset.memory_spread_slab
│   │   ├── cpuset.mems
│   │   ├── cpuset.sched_load_balance
│   │   ├── cpuset.sched_relax_domain_level
│   │   ├── machine-qemu\x2dotrs.scope
│   │   │   ├── cgroup.clone_children
│   │   │   ├── cgroup.event_control
│   │   │   ├── cgroup.procs
│   │   │   ├── cpuset.cpu_exclusive
│   │   │   ├── cpuset.cpus
│   │   │   ├── cpuset.mem_exclusive
│   │   │   ├── cpuset.mem_hardwall
│   │   │   ├── cpuset.memory_migrate
│   │   │   ├── cpuset.memory_pressure
│   │   │   ├── cpuset.memory_spread_page
│   │   │   ├── cpuset.memory_spread_slab
│   │   │   ├── cpuset.mems
│   │   │   ├── cpuset.sched_load_balance
│   │   │   ├── cpuset.sched_relax_domain_level
│   │   │   ├── emulator
│   │   │   │   ├── cgroup.clone_children
│   │   │   │   ├── cgroup.event_control
│   │   │   │   ├── cgroup.procs
│   │   │   │   ├── cpuset.cpu_exclusive
│   │   │   │   ├── cpuset.cpus
│   │   │   │   ├── cpuset.mem_exclusive
│   │   │   │   ├── cpuset.mem_hardwall
│   │   │   │   ├── cpuset.memory_migrate
│   │   │   │   ├── cpuset.memory_pressure
│   │   │   │   ├── cpuset.memory_spread_page
│   │   │   │   ├── cpuset.memory_spread_slab
│   │   │   │   ├── cpuset.mems
│   │   │   │   ├── cpuset.sched_load_balance
│   │   │   │   ├── cpuset.sched_relax_domain_level
│   │   │   │   ├── notify_on_release
│   │   │   │   └── tasks
│   │   │   ├── notify_on_release
│   │   │   ├── tasks
│   │   │   ├── vcpu0
│   │   │   │   ├── cgroup.clone_children
│   │   │   │   ├── cgroup.event_control
│   │   │   │   ├── cgroup.procs
│   │   │   │   ├── cpuset.cpu_exclusive
│   │   │   │   ├── cpuset.cpus
│   │   │   │   ├── cpuset.mem_exclusive
│   │   │   │   ├── cpuset.mem_hardwall
│   │   │   │   ├── cpuset.memory_migrate
│   │   │   │   ├── cpuset.memory_pressure
│   │   │   │   ├── cpuset.memory_spread_page
│   │ 

Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 11:34 AM, Stefan G. Weichinger wrote:
 Am 11.06.2014 12:14, schrieb thegeezer:

 Basically 3 RAID-6 hw-raids over 6 SAS hdds.
 OK so i'm confused again.   RAID6 requires minimum of 4 drives.
 if you have 3 raid6's then you would need 12 drives (coffee hasn't quite
 activated in me yet so my maths may not be right)
 or do you have essentially the first part of each of the six drives be
 virtual disk 1, the second part of each of the six drives virtual disk 2
 and the third part be virtual disk 3 -- if this is the case bear in mind
 that the slowest part of the disk is the end of the disk -- so you are
 essentially hobbling your virtual disk3 but only a little, instead of
 being around 150MB/sec it might run at 80.

 I'd be happy to see 80 !

 Ran atop now while dd-ing stuff to an external disk and got ~1MB/s for
 2.5GB of data.

 (this is even too slow for USB ...)

 I am unsure what to post here from atop ... ?


 To the initial question:

 Yes, imagine the six disks split or partitioned at the level of the
 hardware raid controller (as you described above).

 you might also like to try a simple test of the following (yes lvs count
 as block devices)
 # hdparm -t /dev/sda
 # hdparm -t /dev/sdb
 # hdparm -t /dev/sdc
 # hdparm -t /dev/vg01/winserver_disk0
 # hdparm -t /dev/vg01/amhold
 everything around 380 MB/s ... only ~350 MB/s for
 /dev/vg01/winserver_disk0 (which still is nice)


OK here is the clue.
if the LVs are also showing such fast speed, then please can you show
your command that you are trying to run that is so slow ?



Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger
Am 11.06.2014 12:41, schrieb thegeezer:

 everything around 380 MB/s ... only ~350 MB/s for
 /dev/vg01/winserver_disk0 (which still is nice)
 
 
 OK here is the clue.
 if the LVs are also showing such fast speed, then please can you show
 your command that you are trying to run that is so slow ?

I originally noticed that virt-backup was slow so I looked into it and
found some dd-command.

My tests right now are like this:



booze ~ # dd if=/dev/vg01/winserver_disk0 bs=1M of=/dev/null
^C25+0 records in
24+0 records out
25165824 bytes (25 MB) copied, 13.8039 s, 1.8 MB/s

booze ~ # dd if=/dev/vg01/winserver_disk0 bs=4M of=/dev/null
^C6+0 records in
5+0 records out
20971520 bytes (21 MB) copied, 12.5837 s, 1.7 MB/s

booze ~ # dd if=/dev/vg01/winserver_disk0 of=/dev/null
^C55009+0 records in
55008+0 records out
28164096 bytes (28 MB) copied, 12.611 s, 2.2 MB/s

So no copy from-to same disk here ... should be just plain reading, right?

virt-backup does some ionice-stuff as well, but as you see, my
test-commands don't.

# cat /sys/block/sdc/queue/scheduler
[noop] deadline cfq

- noop scheduler to let the controller do its own scheduling
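
(switching on the fly for comparison is just, e.g.:

# echo deadline > /sys/block/sdc/queue/scheduler

though as mentioned the scheduler choice only helps a bit here)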



thanks, Stefan





Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 11:49 AM, Stefan G. Weichinger wrote:
 Am 11.06.2014 12:41, schrieb thegeezer:

 everything around 380 MB/s ... only ~350 MB/s for
 /dev/vg01/winserver_disk0 (which still is nice)

 OK here is the clue.
 if the LVs are also showing such fast speed, then please can you show
 your command that you are trying to run that is so slow ?
 I originally noticed that virt-backup was slow so I looked into it and
 found some dd-command.

 My tests right now are like this:



 booze ~ # dd if=/dev/vg01/winserver_disk0 bs=1M of=/dev/null
 ^C25+0 records in
 24+0 records out
 25165824 bytes (25 MB) copied, 13.8039 s, 1.8 MB/s

 booze ~ # dd if=/dev/vg01/winserver_disk0 bs=4M of=/dev/null
 ^C6+0 records in
 5+0 records out
 20971520 bytes (21 MB) copied, 12.5837 s, 1.7 MB/s

 booze ~ # dd if=/dev/vg01/winserver_disk0 of=/dev/null
 ^C55009+0 records in
 55008+0 records out
 28164096 bytes (28 MB) copied, 12.611 s, 2.2 MB/s

 So no copy from-to same disk here ... should be just plain reading, right?

 virt-backup does some ionice-stuff as well, but as you see, my
 test-commands don't.

 # cat /sys/block/sdc/queue/scheduler
 [noop] deadline cfq

 - noop scheduler to let the controller do its own scheduling



 thanks, Stefan



yeah this is very very odd.
firstly there should not be such a discrepancy between hdparm -t and dd if=
secondly you would imagine that the first dd would be cached and so
would be faster the second time round.
please check for the turbo boost disable; i'll have a closer look at the
cgroups



Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 11:49 AM, Stefan G. Weichinger wrote:
 Am 11.06.2014 12:41, schrieb thegeezer:

 everything around 380 MB/s ... only ~350 MB/s for
 /dev/vg01/winserver_disk0 (which still is nice)

 OK here is the clue.
 if the LVs are also showing such fast speed, then please can you show
 your command that you are trying to run that is so slow ?
 I originally noticed that virt-backup was slow so I looked into it and
 found some dd-command.

 My tests right now are like this:



 booze ~ # dd if=/dev/vg01/winserver_disk0 bs=1M of=/dev/null
 ^C25+0 records in
 24+0 records out
 25165824 bytes (25 MB) copied, 13.8039 s, 1.8 MB/s

 booze ~ # dd if=/dev/vg01/winserver_disk0 bs=4M of=/dev/null
 ^C6+0 records in
 5+0 records out
 20971520 bytes (21 MB) copied, 12.5837 s, 1.7 MB/s

 booze ~ # dd if=/dev/vg01/winserver_disk0 of=/dev/null
 ^C55009+0 records in
 55008+0 records out
 28164096 bytes (28 MB) copied, 12.611 s, 2.2 MB/s

 So no copy from-to same disk here ... should be just plain reading, right?

 virt-backup does some ionice-stuff as well, but as you see, my
 test-commands don't.

 # cat /sys/block/sdc/queue/scheduler
 [noop] deadline cfq

 - noop scheduler to let the controller do its own scheduling



 thanks, Stefan




just out of curiosity, what happens if you do
# dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
# dd if=/dev/sdc of=/dev/null bs=1M count=100





Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger
Am 11.06.2014 13:01, schrieb thegeezer:

 yeah this is very very odd.
 firstly there should not be such a discrepancy between hdparm -t and dd if=
 secondly you would imagine that the first dd would be cached and so
 would be faster the second time round.
 please check for the turbo boost disable; i'll have a closer look at the
 cgroups

no turbo boost found.

only a power management menu ... I disabled it again.

It was disabled by default; the other options are "efficient" and "custom"
... so I understand it as performant when it is disabled.

"custom" brings up several options; I had tried the performance-oriented
options already without success.

Right now I get around 18-20 MB/s for the /dev/null test.







Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger
Am 11.06.2014 13:18, schrieb thegeezer:

 just out of curiosity, what happens if you do
 # dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
 # dd if=/dev/sdc of=/dev/null bs=1M count=100



booze ~ # dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.71368 s, 61.2 MB/s

booze ~ # dd if=/dev/sdc of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 2.40518 s, 43.6 MB/s




[gentoo-user] [OT] Intel(R) Celeron(R) CPU J1800 drivers

2014-06-11 Thread Francisco Ares
Hi,

I am trying to install Gentoo on an x64 system with this processor, which,
as far as I understand, has the chipset embedded, so the buses for video,
PCI Express, USB, etc. come out of the processor chip.

Kernels from the 3.10 series were not able to correctly handle this
processor, at least not the video driver (not sure about the rest), but the
new stable one, gentoo-sources-3.12.21-r1, is OK; now I have the framebuffer
splash.

But no X11 for now.  I have added ~amd64 keywords to
x11-drivers/xf86-video-intel, but, for now, I get only a black screen, with
no clue in the log file /var/log/Xorg.0.log (which is the latest one).

In /etc/portage/make.conf, I have the line:

VIDEO_CARDS="intel i915 i965 modesetting"

Did I miss something?

Thanks.
Francisco


[gentoo-user] Re: [OT] Intel(R) Celeron(R) CPU J1800 drivers

2014-06-11 Thread Francisco Ares
P.S.:

here is the output of lspci -k

00:00.0 Host bridge: Intel Corporation ValleyView SSA-CUnit (rev 0c)
Subsystem: Biostar Microtech Int'l Corp Device 
00:02.0 VGA compatible controller: Intel Corporation ValleyView Gen7 (rev
0c)
Subsystem: Biostar Microtech Int'l Corp Device 
Kernel driver in use: i915
00:13.0 IDE interface: Intel Corporation ValleyView 4-Port SATA Storage
Controller (rev 0c)
Subsystem: Biostar Microtech Int'l Corp Device 521d
Kernel driver in use: ata_piix
00:14.0 USB controller: Intel Corporation ValleyView USB xHCI Host
Controller (rev 0c)
Subsystem: Biostar Microtech Int'l Corp Device 6403
Kernel driver in use: xhci_hcd
00:1a.0 Encryption controller: Intel Corporation ValleyView SEC (rev 0c)
Subsystem: Biostar Microtech Int'l Corp Device 310e
00:1b.0 Audio device: Intel Corporation ValleyView High Definition Audio
Controller (rev 0c)
Subsystem: Biostar Microtech Int'l Corp Device 821e
Kernel modules: snd_hda_intel
00:1c.0 PCI bridge: Intel Corporation ValleyView PCI Express Root Port (rev
0c)
Kernel driver in use: pcieport
00:1c.1 PCI bridge: Intel Corporation ValleyView PCI Express Root Port (rev
0c)
Kernel driver in use: pcieport
00:1c.2 PCI bridge: Intel Corporation ValleyView PCI Express Root Port (rev
0c)
Kernel driver in use: pcieport
00:1c.3 PCI bridge: Intel Corporation ValleyView PCI Express Root Port (rev
0c)
Kernel driver in use: pcieport
00:1f.0 ISA bridge: Intel Corporation ValleyView Power Control Unit (rev 0c)
Subsystem: Biostar Microtech Int'l Corp Device 310e
00:1f.3 SMBus: Intel Corporation ValleyView SMBus Controller (rev 0c)
Subsystem: Biostar Microtech Int'l Corp Device 310e
Kernel modules: i2c_i801
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
Subsystem: Device 1c6c:0123
Kernel driver in use: r8169


Thanks again,
Francisco


2014-06-11 8:28 GMT-03:00 Francisco Ares fra...@gmail.com:

 Hi,

 I am trying to install Gentoo on an x64 system with this processor, which,
 as far as I understand, has the chipset embedded, so the buses for video,
 PCI Express, USB, etc. come out of the processor chip.

 Kernels from the 3.10 series were not able to correctly handle this
 processor, at least not the video driver (not sure about the rest), but the
 new stable one, gentoo-sources-3.12.21-r1, is OK; now I have the framebuffer
 splash.

 But no X11 for now.  I have added ~amd64 keywords to
 x11-drivers/xf86-video-intel, but, for now, I get only a black screen, with
 no clue in the log file /var/log/Xorg.0.log (which is the latest one).

 In /etc/portage/make.conf, I have the line:

 VIDEO_CARDS="intel i915 i965 modesetting"

 Did I miss something?

 Thanks.
 Francisco



Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 12:21 PM, Stefan G. Weichinger wrote:
 Am 11.06.2014 13:18, schrieb thegeezer:

 just out of curiosity, what happens if you do
 # dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
 # dd if=/dev/sdc of=/dev/null bs=1M count=100


 booze ~ # dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
 100+0 records in
 100+0 records out
 104857600 bytes (105 MB) copied, 1.71368 s, 61.2 MB/s

 booze ~ # dd if=/dev/sdc of=/dev/null bs=1M count=100
 100+0 records in
 100+0 records out
 104857600 bytes (105 MB) copied, 2.40518 s, 43.6 MB/s



ok baffling.
sdc i already said would be slower but not this much slower
it certainly should not be slower than the lvm that sits on top of it!
i can't see anything in the cgroups that stands out, maybe someone else
can give a better voice to this.

all i can think is there is other IO happening.
in atop, if you can, highlight any line that begins with LVM, CPU or DSK and
paste it into a reply - with no virtual machines running and no dd or
anything.
then run a dd as before and highlight the lines in atop while it is
running (maybe increase count to 1000 to give yourself a chance) and
paste those in here too
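
if it helps, atop can also record to a file and replay it, which makes
grabbing those lines easier -- a sketch from memory, check its man page:
# atop -w /tmp/atop.raw 2 30
# atop -r /tmp/atop.raw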





Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger
Am 11.06.2014 13:52, schrieb thegeezer:

 ok baffling.
 sdc i already said would be slower but not this much slower
 it certainly should not be slower than the lvm that sits on top of it!
 i can't see anything in the cgroups that stands out, maybe someone else
 can give a better voice to this.
 
 all i can think is there is other IO happening.
 in atop, if you can, highlight any line that begins with LVM, CPU or DSK and
 paste it into a reply - with no virtual machines running and no dd or
 anything.
 then run a dd as before and highlight the lines in atop while it is
 running (maybe increase count to 1000 to give yourself a chance) and
 paste those in here too


I did a test with a sysresccd from 2013 (that is kernel 3.4.52 ... phew)

Just booted, vgchange -ay and then the dd-test from LV to /dev/null

- with or without count=500 I get around 340-350 MB/s !

So my kernel-config seems buggy or I should downgrade to something older?

Aside from that I checked the firmware of the controller, it has the
latest release.

Stefan







Re: [gentoo-user] chown - not permited

2014-06-11 Thread J. Roeleveld
On 10 June 2014 21:33:28 CEST, Joseph syscon...@gmail.com wrote:
On 06/10/14 22:50, the wrote:
On 06/10/14 22:37, Joseph wrote:
 I mount a USB stick from a camera and I cannot change ownership (I'm
  logged in as root)
 
 drwxr-xr-x 9 root root 32768 Nov 18  2013 DCIM -rwxr-xr-x 1 root
 root 4 Nov 21  2013 _disk_id.pod drwxr-xr-x 2 root root 32768
 Aug 14  2013 LOST.DIR
 
 I can read and write another USB stick but others I can not.  How
 to control it?
 
What filesystem does it contain and what mount options are you using?
Depending on the filesystem it can be possible to mount with
user/group permissions.

One USB stick was ext2, the other was a DOS filesystem.  I have the problem
with DOS.
I have commented this out in fstab:
/dev/sdb1   /media/stick   auto   noauto,rw,user   0 0

and let udisks manage it.  It works.
Except that now I have ugly long names, for ext2 I get:
/run/media/joseph/2f5fc53e-4f4c-4e74-b9c4-fca316b47fea

for dos I get:
/run/media/joseph/3136-3934

with fstab entry they all were mounted under: 
 /media/stick

Joseph.

If you give the filesystem a label, then udisks will use that instead of the
UUID string.

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



[gentoo-user] Problem with power management of SATA hard drives

2014-06-11 Thread Ralf
Hi there,

I'm using Gentoo ~amd64 on my NAS.

This is my setup:
Mainboard - Asus E35M1
CPU - AMD E350
HDD - 1x 500GiB WD Caviar Green WD5000AADS (root)
HDD - 4x 3TiB WD Caviar Green WD30EZRX (Raid10)

As these hard drives are desktop hard drives and not designed for 24/7
purposes, I want to spin them down when they are not in use.
(And in fact, they will probably be idling most of the time, so let's
save energy)

I'm able to force those drives to spin down by using hdparm -y; hdparm -C
then tells me that they switched from active/idle to standby.
Setting the standby timeout using hdparm -S also seems to work fine:

hdparm -S 10 /dev/sdb

/dev/sdb:
 setting standby to 10 (50 seconds)

But the drive does not go into standby after 50 seconds. So I tried to set
the Advanced Power Management level:

hdparm -B 5 /dev/sdb

/dev/sdb:
 setting Advanced Power Management level to 0x05 (5)
 HDIO_DRIVE_CMD failed: Input/output error
 APM_level  = not supported


Obviously my system does not support APM, which I can hardly believe...
So I tried to enable APM, but my kernel configuration doesn't allow me to
enable APM support as long as I use a 64-bit kernel - the APM option is only
available for 32-bit kernels.

What am I doing wrong? My hardware is *relatively* new and I don't
believe that it doesn't support those power management features.

But besides that, does anyone have further tips or tricks to protect
hard drives? E.g. try to minimize Load Cycle Count, ...

Output of hdparm -I: http://pastebin.com/RyAU6u8T

Cheers,
  Ralf


Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 01:41 PM, Stefan G. Weichinger wrote:
 Am 11.06.2014 13:52, schrieb thegeezer:

 ok baffling.
 sdc i already said would be slower but not this much slower
 it certainly should not be slower than the lvm that sits on top of it!
 i can't see anything in the cgroups that stands out, maybe someone else
 can give a better voice to this.

 all i can think is there is other IO happening.
 in atop, if you can, highlight any line that begins with LVM, CPU or DSK and
 paste it into a reply - with no virtual machines running and no dd or
 anything.
 then run a dd as before and highlight the lines in atop while it is
 running (maybe increase count to 1000 to give yourself a chance) and
 paste those in here too

 I did a test with a sysresccd from 2013 (that is kernel 3.4.52 ... phew)

 Just booted, vgchange -ay and then the dd-test from LV to /dev/null

 - with or without count=500 I get around 340-350 MB/s !

 So my kernel-config seems buggy or I should downgrade to something older?

I suspect that in your fully running system somethingelse(tm) is
stealing the activity.   can you start up with no services enabled and
do the test ?

 Aside from that I checked the firmware of the controller, it has the
 latest release.

 Stefan









Re: [gentoo-user] Problem with power management of SATA hard drives

2014-06-11 Thread thegeezer
On 06/11/2014 02:12 PM, Ralf wrote:
 Hi there,

 I'm using Gentoo ~amd64 on my NAS.

 This is my setup:
 Mainboard - Asus E35M1
 CPU - AMD E350
 HDD - 1x 500GiB WD Caviar Green WD5000AADS (root)
 HDD - 4x 3TiB WD Caviar Green WD30EZRX (Raid10)

 As these hard drives are desktop hard drives and not designed for 24/7
 purposes, I want to spin them down when they are not in use.
 (And in fact, they will probably be idling most of the time, so let's
 save energy)

 I'm able to force those drives to spin down by using hdparm -y; hdparm -C
 then tells me that they switched from active/idle to standby.
 Setting standby-time using hdparm -S also seems to work fine:

 hdparm -S 10 /dev/sdb

 /dev/sdb:
  setting standby to 10 (50 seconds)

 But the drive does not go into standby after 50 seconds. So I tried to set
 the Advanced Power Management level:

 hdparm -B 5 /dev/sdb

 /dev/sdb:
  setting Advanced Power Management level to 0x05 (5)
  HDIO_DRIVE_CMD failed: Input/output error
  APM_level  = not supported


 Obviously my system does not support APM, which I can hardly believe...
 So I tried to enable APM, but my kernel configuration doesn't allow me
 to enable APM support as long as I use a 64-bit kernel - the APM option is
 only available for 32-bit kernels.

 What am I doing wrong? My hardware is *relatively* new and I don't
 believe that it doesn't support those power management features.

 But besides that, does anyone have further tips or tricks to protect
 hard drives? E.g. try to minimize Load Cycle Count, ...

 Output of hdparm -I: http://pastebin.com/RyAU6u8T

 Cheers,
   Ralf

50 seconds is a very small timeout; be wary of spinup/spindown cycles,
which imho are worse than always spinning.

depending on what is accessing /dev/sdb you might find that it sleeps
then immediately is woken.  lsof is your friend here.
this is how I do it (my time is ten mins)

# /etc/conf.d/hdparm
# or, you can set hdparm options for all drives
all_args="-S120"


then..
# /etc/init.d/hdparm start






Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger
Am 11.06.2014 15:32, schrieb thegeezer:

 So my kernel-config seems buggy or I should downgrade to something older?
 
 I suspect that in your fully running system somethingelse(tm) is
 stealing the activity.   can you start up with no services enabled and
 do the test ?

hm, yes. although I had deactivated most of it already.


Right now I compile a 3.10.x kernel with a config pulled from the
sysresccd  ... way more stuff compiled in, but maybe a step ...




Re: [gentoo-user] chown - not permited

2014-06-11 Thread Joseph

On 06/11/14 11:33, J. Roeleveld wrote:

On 10 June 2014 21:33:28 CEST, Joseph syscon...@gmail.com wrote:

On 06/10/14 22:50, the wrote:

On 06/10/14 22:37, Joseph wrote:

I mount a USB stick from a camera and I cannot change ownership (I'm
logged in as root)

drwxr-xr-x 9 root root 32768 Nov 18  2013 DCIM -rwxr-xr-x 1 root
root 4 Nov 21  2013 _disk_id.pod drwxr-xr-x 2 root root 32768
Aug 14  2013 LOST.DIR

I can read and write another USB stick but others I can not.  How
to control it?


What filesystem does it contain and what mount options are you using?
Depending on the filesystem it can be possible to mount with
user/group permissions.


One USB stick was ext2, the other was a DOS filesystem.  I have the problem
with DOS.
I have commented this out in fstab:
/dev/sdb1   /media/stick   auto   noauto,rw,user   0 0

and let udisks manage it.  It works.
Except that now I have ugly long names, for ext2 I get:
/run/media/joseph/2f5fc53e-4f4c-4e74-b9c4-fca316b47fea

for dos I get:
/run/media/joseph/3136-3934

with fstab entry they all were mounted under:
/media/stick


Joseph.

If you give the filesystem a label, then udisks will use that instead of the
UUID string.

--
Joost


Thanks.
What is the best way to edit USB Label?

--
Joseph



Re: [gentoo-user] chown - not permited

2014-06-11 Thread Stroller

On Wed, 11 June 2014, at 2:52 pm, Joseph syscon...@gmail.com wrote:
...
 
 If you give the filesystem a Label. Then udisks will use that instead of the 
 UUID string.
 
 Thanks.
 What is the best way to edit USB Label?

$ apropos label
e2label (8)  - Change the label on an ext2/ext3/ext4 filesystem

I think you may be able to give DOS filesystems a label at creation time.
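
Probably something like this at creation time (an untested sketch, and it
reformats the stick, so mlabel would be the non-destructive route):

$ mkfs.vfat -n CAMERA /dev/sdb1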

Stroller.




Re: [gentoo-user] Problem with power management of SATA hard drives

2014-06-11 Thread Ralf
On 06/11/2014 03:40 PM, thegeezer wrote:
 50 seconds is a very small timeout; be wary of spinup/spindown cycles,
 which imho are worse than always spinning.
For sure, I know, this was only for testing purposes, to see if it
works. I don't want to wait ten minutes, or even an hour to see that it
actually does not work :-)

 depending on what is accessing /dev/sdb you might find that it sleeps
 then immediately is woken.  lsof is your friend here.
 this is how I do it (my time is ten mins)
Nope, the filesystem isn't even mounted.

 # /etc/conf.d/hdparm
 # or, you can set hdparm options for all drives
 all_args="-S120"


 then..
 # /etc/init.d/hdparm start
And nope, it does not spin down.

It only spins down if I force it with hdparm -y

Cheers,
  Ralf



Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger
Am 11.06.2014 15:44, schrieb Stefan G. Weichinger:
 Am 11.06.2014 15:32, schrieb thegeezer:
 
 So my kernel-config seems buggy or I should downgrade to something older?

 I suspect that in your fully running system somethingelse(tm) is
 stealing the activity.   can you start up with no services enabled and
 do the test ?
 
 hm, yes. although I had deactivated most of it already.
 
 
 Right now I compile a 3.10.x kernel with a config pulled from the
 sysresccd  ... way more stuff compiled in, but maybe a step ...

That definitely helped.

Faster booting and now the bottleneck is gone somewhere.

dd-tests look good now, and I am already running a first backup via
virt-backup (which runs a dd with bs=4M under the hood ... and I pipe that
through pigz ...)

Now I migrate and slim down this kernel config for the (gentoo-)stable
kernel linux-3.12.21-gentoo-r1 ... we'll see!
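
(maybe running

# make localmodconfig

against this fat config is a shortcut -- as far as I understand it strips
everything not currently loaded, assuming all needed drivers are in use as
modules right now)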

Thanks @thegeezer for the help so far!

Stefan




Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 03:15 PM, Stefan G. Weichinger wrote:
 Am 11.06.2014 15:44, schrieb Stefan G. Weichinger:
 Am 11.06.2014 15:32, schrieb thegeezer:

 So my kernel-config seems buggy or I should downgrade to something older?
 I suspect that in your fully running system somethingelse(tm) is
 stealing the activity.   can you start up with no services enabled and
 do the test ?
 hm, yes. although I had deactivated most of it already.


 Right now I compile a 3.10.x kernel with a config pulled from the
 sysresccd  ... way more stuff compiled in, but maybe a step ...
 That definitely helped.

 Faster booting and now the bottleneck is gone somewhere.

 dd-tests look good now, and I am already running a first backup via
 virt-backup (which runs a dd with bs=4M under the hood ... and I pipe that
 through pigz ...)

 Now I migrate and slim down this kernel config for the (gentoo-)stable
 kernel linux-3.12.21-gentoo-r1 ... we'll see!

 Thanks @thegeezer for the help so far!

 Stefan


it will be interesting to diff the previous.config and the
current.config to see what the difference is!
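
iirc the kernel tree even ships a helper for exactly that:

# scripts/diffconfig previous.config current.config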



[gentoo-user] Re: problem with v86d

2014-06-11 Thread Nikos Chantziaras

On 11/06/14 08:14, cov...@ccs.covici.com wrote:

Hi.  Does anyone have a clue as to why v86d should suddenly start being
very cpu intensive on my computer?  When I first boot it's fine (using
either systemd or openrc), but after a while -- maybe a day or two -- it
starts using up lots of cpu and definitely increases the load average
and slows things down.  I notice this has not changed in several years,
so I am wondering if it is not working as it used to?

Thanks in advance for any ideas.


It's probably not v86d itself, but whoever is using it. But I don't know 
how to find out for sure.


I didn't notice anything like that myself though. But that might be 
because my machine isn't running for that long (I turn off my PC when I 
don't need it.)





[gentoo-user] Re: problem with v86d

2014-06-11 Thread James
 covici at ccs.covici.com writes:


 Hi.  Does anyone have a clue as to why v86d should suddenly start being
 very cpu intensive on my computer?  When I first boot it's fine (using
 either systemd or openrc), but after a while -- maybe a day or two -- it
 starts using up lots of cpu and definitely increases the load average
 and slows things down.  I notice this has not changed in several years,
 so I am wondering if it is not working as it used to?
 Thanks in advance for any ideas.


Ok so the first thing I noticed:


http://dev.gentoo.org/~spock/projects/uvesafb/
You don't have permission to access /~spock/projects/uvesafb/ on this server.

So you may need to drop the gentoo dev a line about where to find his
sources.

Now looking at the flags {debug x86emu} I see:


sys-apps/v86d: Use x86emu for Video BIOS calls

If you've been reading the gentoo user list, you can see
much has changed with frame buffers and video drivers recently
in the kernel. The best place to start reading is the posting on
25 May 2014 by Greg Turner.


My best guess is that changes in the kernel affect your emulation,
and you'll have much digging to do if the gentoo dev that
develops/maintains that code does not drop a hint onto your
questions as to what's up with x86emu.

Are there any notes when you compile it?  News?   Read the comments
in the ebuild as to new problems?

good hunting.


hth,
James







Re: [gentoo-user] chown - not permited

2014-06-11 Thread Neil Bothwick
On Wed, 11 Jun 2014 07:52:23 -0600, Joseph wrote:

 What is the best way to edit USB Label?

For the DOS filesystem, mlabel, part of sys-fs/mtools.


-- 
Neil Bothwick

Top Oxymorons Number 19: Passive aggression




Re: [gentoo-user] Re: problem with v86d

2014-06-11 Thread covici
James wirel...@tampabay.rr.com wrote:

  covici at ccs.covici.com writes:
 
 
  Hi.  Does anyone have a clue as to why v86d should suddenly start being
  very cpu intensive on my computer?  When I first boot it's fine (using
  either systemd or openrc), but after a while -- maybe a day or two -- it
  starts using up lots of cpu and definitely increases the load average
  and slows things down.  I notice this has not changed in several years,
  so I am wondering if it is not working as it used to?
  Thanks in advance for any ideas.
 
 
 Ok so the first thing I noticed:
 
 
 http://dev.gentoo.org/~spock/projects/uvesafb/
 You don't have permission to access /~spock/projects/uvesafb/ on this server.
 
 So you may need to drop the gentoo dev a line about where to find his
 sources.
 
 Now looking at the flags {debug x86emu} I see:
 
 
 sys-apps/v86d: Use x86emu for Video BIOS calls
 
 If you've been reading the gentoo user list, you can see
 much has changed with frame buffers and video drivers recently
 in the kernel. The best place to start reading is the posting on
 25 May 2014 by Greg Turner.
 
 
 My best guess is that changes in the kernel affect your emulation,
 and you'll have much digging to do if the gentoo dev that
 develops/maintains that code does not drop a hint onto your
 questions as to what's up with x86emu.
 
 Are there any notes when you compile it?  News?   Read the comments
 in the ebuild as to new problems?
 
 good hunting.

Thanks.  I have a fairly old kernel for other reasons, and I installed
v86d in 2011 and it has not changed since.  I use uvesafb because I
want a frame buffer so I can get a lot more than 80x25 in a virtual
console; I get 64x160.  I also need something which will let the nvidia
driver work, since this is the card I have.  I did try the nouveau driver,
but it did not give me as large a screen, and the nvidia driver did not
like that driver.  I can't remember what it complained about, but it
means no X at all.

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do
you spend it?

 John Covici
 cov...@ccs.covici.com



[gentoo-user] Re: problem with v86d

2014-06-11 Thread Nikos Chantziaras

On 11/06/14 17:49, cov...@ccs.covici.com wrote:

Thanks.  I have a fairly old kernel for other reasons, and I installed
v86d in 2011 and it has not changed since.  I use uvesafb because I
want a frame buffer so I can get a lot more than 80x25 in a virtual
console; I get 64x160.  I also need something which will let the nvidia
driver work, since this is the card I have.  I did try the nouveau driver,
but it did not give me as large a screen, and the nvidia driver did not
like that driver.  I can't remember what it complained about, but it
means no X at all.


If you're not booting in EFI mode, then you can use vesafb instead. This 
doesn't require v86d and doesn't even require an initrd.


uvesafb is mostly for non-PC or generally platforms where a BIOS is not 
available (EFI on a PC also lacks BIOS), and it achieves that through 
v86d. vesafb uses the BIOS directly, so v86d is not needed.
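
If you want to try it, a minimal sketch (assuming CONFIG_FB_VESA is built
in): drop the uvesafb video= parameter and boot with something like

vga=791

on the kernel command line, 791 being 1024x768 at 16bpp in the usual VESA
mode table.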





Re: [gentoo-user] chown - not permited

2014-06-11 Thread Mike Gilbert
On Wed, Jun 11, 2014 at 10:49 AM, Neil Bothwick n...@digimed.co.uk wrote:
 On Wed, 11 Jun 2014 07:52:23 -0600, Joseph wrote:

 What is the best way to edit USB Label?

 For the DOS filesystem, mlabel, part of sys-fs/mtools.


Or fatlabel, from sys-fs/dosfstools.



Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread Stefan G. Weichinger

looks promising:

virt-backup dumps and packs a 12 GB image-file within ~145 seconds to a
non-compressing btrfs subvolume:

a) does an LVM snapshot

b) dd with bs=4M and through pigz to the target file

The bigger LV with ~250GB is running right now.

The system feels snappier than with the old kernel ... I wonder if there
is more to tune as right now I am using the rather generic config which
is not tuned for the specific CPU, for example (which might even have
helped? ;-) ).

That was good progress today ... but I might consider re-configuring the
RAIDs as mentioned.

As I run backups via amanda I have to provide a so-called holding disk
as an intermediate place for dumps on their way to the tape drive.

This means copying around stuff within the same hardware raid array.

One big fat hw-RAID10 might be better?
But losing the wrong 2 drives makes it crash again ... afaik.

time for a break here.

Greets, Stefan



Re: [gentoo-user] chown - not permited

2014-06-11 Thread Joseph

On 06/11/14 11:31, Mike Gilbert wrote:

On Wed, Jun 11, 2014 at 10:49 AM, Neil Bothwick n...@digimed.co.uk wrote:

On Wed, 11 Jun 2014 07:52:23 -0600, Joseph wrote:


What is the best way to edit USB Label?


For the DOS filesystem, mlabel, part of sys-fs/mtools.



Or fatlabel, from sys-fs/dosfstools.


Thanks, I tried mtools' mlabel but couldn't get it to work.
fatlabel worked perfectly.

--
Joseph



[gentoo-user] Re: problem with v86d

2014-06-11 Thread James
Nikos Chantziaras realnc at gmail.com writes:


  like that driver.  I can't remember what it complained about, but it
  means no X at all.
 
 If you're not booting in EFI mode, then you can use vesafb instead. This 
 doesn't require v86d and doesn't even require an initrd.
 
 uvesafb is mostly for non-PC or generally platforms where a BIOS is not 
 available (EFI on a PC also lacks BIOS), and it achieves that through 
 v86d. vesafb uses the BIOS directly, so v86d is not needed.

Spock is Michał Januszewski

A physics type with keen interests in chipsets; loads of Frame
Buffer info on his blog. He'd be a keen resource for you. Seems 
he has vanished from the gentoo scene?

sp...@gentoo.org

http://mjanusz.wordpress.com/

http://scholar.google.com/citations?user=XSjXVbQJ&hl=en

http://mjanusz.github.io/homepage/

I'd rather think he's one of those really sharp but hidden
physics folks, who very much likes Gentoo and privacy.


hth,
James




Re: [gentoo-user] Intel(R) C600 SAS Controller

2014-06-11 Thread thegeezer
On 06/11/2014 07:57 PM, Stefan G. Weichinger wrote:
 looks promising:


awesome.  i did have a look through the diff: there are lots of scsi
drivers selected and storage (block) cgroups, but i think the crucial factor
was that HZ was set at 100 previously and 1000 now.  i guess it has
helped kernel io, though maybe a kernel hacker in here might give a more
authoritative answer

 One big fat hw-RAID10 might be better?
 But losing the wrong 2 drives makes it crash again ... afaik.
yeah you could argue with raid6 you can _only_ lose two disks, whereas
if you lose the right disks with raid01 you can lose 3 and still rebuild.
raid 0+1 (as opposed to raid10, slightly different) gives you great
speed and at least one drive you can lose.
however, you are not protected against silent bit corruption, but then you
are using btrfs elsewhere.
myself i would use lvm to partition and then at least you can move
things around later; btrfs lets you do the same afaiu
_always_ have your hotspare in the system, then it takes less time to
come back up to 100%
nothing is quite as scary as having a system waiting on the post and a
screwdriver before rebuild can even start

 time for a break here.
i'd strongly recommend having monitoring software such as munin running
-- this way you can watch trends like io times increasing over time and
act on them before things start feeling sluggish

well earned break :)

 Greets, Stefan





Re: [gentoo-user] Re: problem with v86d

2014-06-11 Thread covici
James wirel...@tampabay.rr.com wrote:

 Nikos Chantziaras realnc at gmail.com writes:
 
 
   like that driver.  I can't remember what it complained about, but it
   means no X at all.
  
  If you're not booting in EFI mode, then you can use vesafb instead. This 
  doesn't require v86d and doesn't even require an initrd.
  
  uvesafb is mostly for non-PC or generally platforms where a BIOS is not 
  available (EFI on a PC also lacks BIOS), and it achieves that through 
  v86d. vesafb uses the BIOS directly, so v86d is not needed.
 
 Spock is Michał Januszewski
 
 A physics type with keen interests in chipsets; loads of Frame
 Buffer info on his blog. He'd be a keen resource for you. Seems 
 he has vanished from the gentoo scene?
 
 sp...@gentoo.org
 
 http://mjanusz.wordpress.com/
 
 http://scholar.google.com/citations?user=XSjXVbQJ&hl=en
 
 http://mjanusz.github.io/homepage/
 
 I'd rather think he's one of those really sharp but hidden
 physics folks, who very much likes Gentoo and privacy.

I will check him out, I have been using uvesafb for years.

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do
you spend it?

 John Covici
 cov...@ccs.covici.com



[gentoo-user] Re: N failed logins since your last login

2014-06-11 Thread walt
On 06/11/2014 01:56 AM, Florian HEGRON wrote:
 Is there a way to display that 'failed logins' message without using
 gdm/kdm/xdm?
 
 Hello,
 
 See that : http://linux.die.net/man/8/faillog
 
 I am not on my Gentoo machine so I don't know if the faillog file is really 
 present.

Very good clue, thanks.  After several hours of poking around in /etc
I know a lot more and understand less :)

I enabled a few settings in /etc/login.defs that *should* have worked
(according to the man pages) but had no effect at all.

I found some appropriate failed login messages in /var/log/auth.log,
as specified by this line in /etc/syslog.conf:

#grep -r auth.log /etc
syslog.conf:auth,authpriv.* /var/log/auth.log

I should confess that I'm running systemd instead of openrc and I'm
using my own hacked config files in /etc/systemd/ to run syslogd:

#cat /etc/systemd/system/sklogd.service 
[Unit]
Description=The syslogd half of sysklogd

[Service]
Type=forking
EnvironmentFile=/etc/init.d/sysklogd
ExecStart=/usr/sbin/syslogd -m 0

[Install]
WantedBy=multi-user.target


Maybe failed logins should be logged by journalctl now instead of
sys-apps/shadow?  I see entries from systemd-logind about successful
logins but nothing about failed logins.  (I've deliberately caused
many failed logins just for the purpose of spamming the system logs.)
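
One thing I haven't tried yet: authpriv should be syslog facility 10, so

#journalctl SYSLOG_FACILITY=10

might show them, assuming the journal captures what syslogd sees.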

Any additional clues would be much appreciated, thanks.