[CentOS] RAID controller recommendations that are supported by RHEL/CentOS 8?

2019-10-10 Thread Dennis Jacobfeuerborn
Hi,
I'm currently looking for a RAID controller with BBU/CacheVault and
while LSI MegaRaid controllers worked well in the past apparently they
are no longer supported in RHEL 8:
https://access.redhat.com/discussions/3722151

Does anybody have recommendations for hardware controllers with
cache that should work in both CentOS 7 and 8 out of the box?

Regards,
  Dennis
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 8.0 1905 is now available for download

2019-09-24 Thread Dennis Jacobfeuerborn
Already bummed that the 4.18 kernel is too old for /proc/pressure :(
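For context: upstream, the Pressure Stall Information (PSI) files under /proc/pressure appeared in kernel 4.20, so a 4.18-based kernel may not expose them. A minimal hedged check (the file layout follows the upstream PSI interface):

```shell
# PSI landed upstream in kernel 4.20; on kernels without it the
# /proc/pressure files simply do not exist.
if [ -r /proc/pressure/cpu ]; then
    # Typical line: "some avg10=0.00 avg60=0.00 avg300=0.00 total=12345"
    cat /proc/pressure/cpu /proc/pressure/memory /proc/pressure/io
else
    echo "PSI not available on this kernel"
fi
```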


Re: [CentOS] Any alternatives for the horrible reposync

2018-02-28 Thread Dennis Jacobfeuerborn
On 27.02.2018 16:45, Stephen John Smoogen wrote:
> On 27 February 2018 at 06:11, Dennis Jacobfeuerborn
> <denni...@conversis.de> wrote:
>> Hi,
>> I'm currently trying to mirror a couple of yum repositories and the only
>> tool that seems to be available for this is reposync.
>> Unfortunately reposync for some inexplicable reason seems to use the yum
>> config of the local system as a basis for its work which makes no sense
>> and creates all kinds of problems where cache directories and metadata
>> gets mixed up.
>> Are there any alternatives? Some repos support rsync but not all of them
>> so I'm looking for something that works for all repos.
>>
> 
> It is not 'inexplicable'. reposync was primarily built for a user to
> sync down the repositories they are using to be local.. so using
> yum.conf makes sense. The fact that it can be used for a lot of other
> things is built into various configs which the man page covers. As
> John Hodrien mentioned, you can use the -C flag to point it to a
> different config file. This is the way to use it if you are wanting to
> download other files and data. Tools like cobbler wrap the reposync in
> this fashion.

I've been trying to use a custom config file and the -t option to
separate the operation of reposync from the system's repositories,
but this does not seem to work.
When I tried to copy a repository, reposync reported that it already had
a more current repomd.xml than the one offered by the repository.
An investigation with strace revealed that reposync was still checking
/var/cache/yum for cached files even though I tried both -t and
explicitly creating a new directory and using --cachedir.
Reposync looks into that cache directory and doesn't find the repomd.xml,
which is fine, but then it also checks /var/cache/yum, where it finds a
repomd.xml and uses that.

What I mean by inexplicable is that it would make more sense to make
reposync a generic tool for syncing yum repos and then simply provide
the option to use /etc/yum.conf, rather than hard-coding (as is
apparently the case) these system-specific behaviors.

For now what seems to work is using a unique repo name in the config
file to make it impossible for reposync to find a matching file in
/var/cache/yum but that's more of a hack than a fix for the issue.
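A minimal sketch of that workaround: a dedicated config with a unique repo id and its own cachedir, so nothing already in /var/cache/yum can match. The repo id, URL, and paths below are made-up example values, not ones from the thread:

```shell
# Standalone yum config whose repo id exists nowhere else on the
# system (all names and the baseurl are example values).
mkdir -p /tmp/mirror-cache /tmp/mirror-repos
cat > /tmp/mirror.conf <<'EOF'
[main]
cachedir=/tmp/mirror-cache
reposdir=/dev/null

[mirror-epel-x2f9]
name=EPEL mirror (unique repo id)
baseurl=https://example.org/pub/epel/7/x86_64/
enabled=1
EOF
# Then sync using only that config (requires network, so shown here
# as a comment):
#   reposync -c /tmp/mirror.conf --repoid=mirror-epel-x2f9 \
#            --download_path=/tmp/mirror-repos
grep '^\[' /tmp/mirror.conf
```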

Regards,
  Dennis


[CentOS] Any alternatives for the horrible reposync

2018-02-27 Thread Dennis Jacobfeuerborn
Hi,
I'm currently trying to mirror a couple of yum repositories and the only
tool that seems to be available for this is reposync.
Unfortunately reposync for some inexplicable reason seems to use the yum
config of the local system as the basis for its work, which makes no sense
and creates all kinds of problems where cache directories and metadata
get mixed up.
Are there any alternatives? Some repos support rsync but not all of them
so I'm looking for something that works for all repos.

Regards,
  Dennis


Re: [CentOS] RAID questions

2017-02-19 Thread Dennis Jacobfeuerborn
On 15.02.2017 03:10, TE Dukes wrote:
> 
> 
>> -Original Message-
>> From: CentOS [mailto:centos-boun...@centos.org] On Behalf Of John R
>> Pierce
>> Sent: Tuesday, February 14, 2017 8:13 PM
>> To: centos@centos.org
>> Subject: Re: [CentOS] RAID questions
>>
>> On 2/14/2017 5:08 PM, Digimer wrote:
>>> Note; If you're mirroring /boot, you may need to run grub install on
>>> both disks to ensure they're both actually bootable (or else you might
>>> find yourself doing an emergency boot off the CentOS ISO and
>>> installing grub later).
>>
>> I left that out because the OP was talking about booting from a seperate
> SSD,
>> and only mirroring his data drive.
>>
> Thanks!!
> 
> I'm only considering a SSD drive due to the lack of 3.5 drive space. I have
> unused 5.25 bays but I'd have to get an adapter.
> 
> I probably don't need to go the RAID 10 route. I just need/would like some
> kind of redundancy for backups. This is a home system but over the years due
> to HD, mainboard, power supply failures, I have lost photos, etc, that can
> never be replaced. Backing up gigabytes/terabytes of data to cloud storage
> would be impractical due to bandwidth limitations.
> 
> Just looking for a solution better than what I have. A simple mirror is more
> than I have now. I'd like to add another drive for redundancy and go from
> there.
> 
> What should I do?

RAID is *not* a backup. If a virus, a buggy program, or an accidental "rm
-rf *" in the wrong directory deletes files on a RAID, then those files
are obviously gone on the replicas as well.
If you want to protect against the loss of files, you should instead add a
second disk to the system and simply back up your data to that disk on a
daily basis.
A RAID array is not the appropriate way to go for the scenario you describe
above.
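As an illustration of that advice, a minimal daily-snapshot sketch. All paths here are demo examples; on a real system $DEST would be the mount point of the second disk, and unchanged files are hard-linked against yesterday's snapshot so they cost no extra space:

```shell
#!/bin/sh
# Daily snapshot sketch (paths are examples, not from the thread):
SRC=/tmp/demo-data          # the data to protect
DEST=/tmp/demo-backup       # mount point of the second disk in real use
mkdir -p "$SRC" "$DEST"
echo "family-photo" > "$SRC/photo.txt"
TODAY=$(date +%F)
# Find the most recent previous snapshot, if any.
LAST=$(ls -1d "$DEST"/20* 2>/dev/null | tail -1)
if command -v rsync >/dev/null 2>&1 && [ -n "$LAST" ]; then
    # Hard-link unchanged files against the previous snapshot.
    rsync -a --link-dest="$LAST" "$SRC"/ "$DEST/$TODAY"/
else
    # First run (or no rsync): plain copy.
    mkdir -p "$DEST/$TODAY"
    cp -a "$SRC"/. "$DEST/$TODAY"/
fi
ls "$DEST/$TODAY"
```

Run from cron once a day; deleting a file on the RAID no longer destroys the copies in older snapshot directories.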

Regards,
  Dennis




Re: [CentOS] RHEL 7.3 released

2016-11-04 Thread Dennis Jacobfeuerborn
On 04.11.2016 15:29, Johnny Hughes wrote:
> On 11/04/2016 09:15 AM, Mark Haney wrote:
>> That's all well and good, but how about you actually include the minor
>> number AND the release date?  I.e. 7.3-1104 for CentOS 7.3 released today,
>> for example.   I'm all for the SIGs to keep track of their own upstreams,
>> but surely there's a better way to do this that doesn't annoy the heck out
>> of us Joe-Blows out here.  A lot of us don't have the time (or inclination)
>> to deal with oddball version discrepancies when there really doesn't need
>> to be.
>>
>> I mean, there are dozens of Ubuntu distros and they all use the same basic
>> versioning schemes.  (Maybe not a completely fair example, but still.)
>>  Isn't the idea with CentOS to be a method of generating a larger testing
>> base and interest in RHEL and it's products?  If not, that's how I've
>> always seen it, incorrect or not.
> 
> I said on the tree it will be 7.3.1611 .. and I don't get to make the
> call on this.
> 
> This was battle was fought two years ago.
> 
> We don't have to like it.
> 
> We also don't need to fight it again.
> 
> I do what I am told, and I have been told what to do ...

I don't really mind any particular version scheme getting used but why
not use it consistently? Right now the ISOs are named like this:

CentOS-7-x86_64-NetInstall-1511.iso

Why isn't that name consistent with the tree versioning e.g.:

CentOS-7.2.1511-x86_64-NetInstall.iso

That would make things less ambiguous.

Regards,
  Dennis


Re: [CentOS] gigE -> 100Mb problems

2016-10-12 Thread Dennis Jacobfeuerborn
On 12.10.2016 06:26, John R Pierce wrote:
> On 10/11/2016 9:03 PM, Ashish Yadav wrote:
>> Please test that if both the server are communicating with each other at
>> 1Gbps or not via "iperf" tool.
>>
>> If above gives result of 1Gbps then it will eliminate the NICs problem
>> then
>> you know that it is a problem with cisco switch only.
> 
> after they forced the cisco ports to gigE, I was seeing 200-400Mbps in
> iPerf, which was odd.  servers were both very lightly loaded.
> 
> BUT...   the switch ports kept going offline on us.   Note I have no
> admin access to the switch, its managed by IT so I have to go through
> channels to get anything.   I asked what error codes were causing the
> ports to go offline but haven't heard back.   as of right now, both
> servers are offline, (I can reach their IPMI management controller, and
> remotely log onto the console just fine, but the ports show no link).   
> When I was in the DC yesterday, I switched ports, same problem, I also
> switched the network cable with a different (HP) server, it had no
> problems on the same cable+port thats giving these supermicro servers
> problems.
> 
> I'd chalk it up to a bad NIC, but two identical servers with two nic's
> each all have this problem,  so its got to be something else, some
> weirdness with the 82574L as implemented on these SuperMicro X8DTE-F
> servers running CentOS 6.7 ?!?In our old DC, these servers ran rock
> solid for several years without any network issues at all, in that rack
> I had a Netgear JGS524

A while back there was an issue with this NIC chipset and CentOS, but I'm
not sure if it still applies to CentOS 6.7:
https://blog.andreas-haerter.com/2013/02/11/intel-82574l-network-nic-aspm-bug-e1000-linux-rhel-centos-sl-6.3

If this is your problem, then adding "pcie_aspm=off" to the kernel
command line should fix it.
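For reference, the parameter goes on the kernel command line: on CentOS 6 that means the kernel lines in /boot/grub/grub.conf, on CentOS 7 the GRUB_CMDLINE_LINUX variable in /etc/default/grub followed by regenerating the grub config. A sketch of the CentOS 7 style edit, demonstrated on a scratch copy rather than the real file:

```shell
# Append pcie_aspm=off to GRUB_CMDLINE_LINUX. Demonstrated on a demo
# file; on the real system edit /etc/default/grub and then run
#   grub2-mkconfig -o /boot/grub2/grub.cfg     (as root)
f=/tmp/grub-default-demo
printf 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > "$f"
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 pcie_aspm=off"/' "$f"
cat "$f"
```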

Regards,
   Dennis



Re: [CentOS] [CENTOS ]IPTABLES - How Secure & Best Practice

2016-06-29 Thread Dennis Jacobfeuerborn
On 29.06.2016 12:00, Leon Vergottini wrote:
> Dear Members
> 
> I hope you are all doing well.
> 
> I am busy teaching myself iptables and was wondering if I may get some
> advise.  The scenario is the following:
> 
> 
>1. Default policy is to block all traffic
>2. Allow web traffic and SSH
>3. Allow other applications
> 
> I have come up with the following:
> 
> #!/bin/bash
> 
> #  RESET CURRENT RULE BASE
> iptables -F
> service iptables save
> 
> #  DEFAULT FIREWALL POLICY
> iptables -P INPUT DROP
> iptables -P FORWARD DROP
> iptables -P OUTPUT DROP
> 
> #  --
> #  INPUT CHAIN RULES
> #  --
> 
> #  MOST COMMON ATTACKS
> iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
> iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
> iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
> 
> #  LOOPBACK, ESTABLISHED & RELATED CONNECTIONS
> iptables -A INPUT -i lo -j ACCEPT
> iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> 
> #  SSH
> iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
> 
> #  WEB SERVICES
> iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
> iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
> iptables -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
> 
> #  EMAIL
> iptables -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT
> iptables -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT
> 
> #  OTHER APPLICATIONS
> iptables -A INPUT -p tcp -m tcp --dport X -j ACCEPT
> iptables -A INPUT -p tcp -m tcp --dport X -j ACCEPT
> 
> 
> #  --
> #  OUTPUT CHAIN RULES
> #  --
> #  UDP
> iptables -A OUTPUT -p udp -j DROP
> 
> #  LOOPBACK, ESTABLISHED & RELATED CONNECTIONS
> iptables -A OUTPUT -i lo -j ACCEPT
> iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> 
> #  SSH
> iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
> 
> #  WEB SERVICES
> iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
> iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
> iptables -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
> 
> #  EMAIL
> iptables -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT
> iptables -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT
> 
> #  OTHER APPLICATIONS
> iptables -A INPUT -p tcp -m tcp --dport 11009 -j ACCEPT
> iptables -A INPUT -p tcp -m tcp --dport 12009 -j ACCEPT
> 
> 
> 
> #  --
> #  SAVE & APPLY
> #  --
> 
> 
> service iptables save
> service iptables restart
> 
> To note:
> 
> 
>1. The drop commands at the beginning of each chain is for increase
>performance.  It is my understanding that file gets read from top to bottom
>and applied accordingly.  Therefore, applying them in the beginning will
>increase the performance by not reading through all the rules only to apply
>the default policy.
>2. I know the above point will not really affect the performance, so it
>is more of getting into a habit of structuring the rules according to best
>practice, or at least establishing a pattern for myself.
> 
> 
> How secure is this setup?  Is there any mistakes or things that I need to
> look out for?

You shouldn't script iptables like this; instead, use iptables-save
and iptables-restore to activate the rules atomically and with some
error checking.
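A minimal sketch of the atomic approach. The ruleset below is a trimmed illustrative example (ports and file path are examples), not a full rewrite of the original script; note it also puts the OUTPUT rules on the OUTPUT chain, where the "OUTPUT CHAIN RULES" section of the original script accidentally appended them to INPUT, and uses -o (not -i) for loopback output:

```shell
# Write the complete ruleset to a file, then load it atomically:
# either the whole file applies or none of it does.
cat > /tmp/iptables.rules <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
EOF
# As root:  iptables-restore < /tmp/iptables.rules
grep -c '^-A' /tmp/iptables.rules
```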

Regards,
  Dennis


Re: [CentOS] Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2

2016-05-25 Thread Dennis Jacobfeuerborn
What is the HBA the drives are attached to?
Have you done a quick benchmark on a single disk to check if this is a
raid problem or further down the stack?
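To make the "benchmark a single disk" suggestion concrete, a sketch: the device name is an example, and the raw-device lines need root, so the runnable part below exercises the same dd invocation against a scratch file instead:

```shell
# Sequential-read sanity check on one member disk (run as root;
# /dev/sda is an example device name):
#   hdparm -t /dev/sda
#   dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct
# Demonstration of the dd syntax against a scratch file:
dd if=/dev/zero of=/tmp/bench.img bs=1M count=64 2>/dev/null
dd if=/tmp/bench.img of=/dev/null bs=1M 2>&1 | tail -1
```

If a single drive already shows poor throughput, the problem is below the md layer (HBA, cabling, or the drive itself) rather than the RAID check logic.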

Regards,
  Dennis

On 25.05.2016 19:26, Kelly Lesperance wrote:
> [merging]
> 
> The HBA the drives are attached to has no configuration that I’m aware of.  
> We would have had to accidentally change 23 of them ☺
> 
> Thanks,
> 
> Kelly
> 
> On 2016-05-25, 1:25 PM, "Kelly Lesperance"  wrote:
> 
>> They are:
>>
>> [root@r1k1 ~] # hdparm -I /dev/sda
>>
>> /dev/sda:
>>
>> ATA device, with non-removable media
>>  Model Number:   MB4000GCWDC 
>>  Serial Number:  S1Z06RW9
>>  Firmware Revision:  HPGD
>>  Transport:  Serial, SATA Rev 3.0
>>
>> Thanks,
>>
>> Kelly
> 
> 
> On 2016-05-25, 1:23 PM, "centos-boun...@centos.org on behalf of 
> m.r...@5-cent.us"  
> wrote:
> 
>> Kelly Lesperance wrote:
>>> I’ve posted this on the forums at
>>> https://www.centos.org/forums/viewtopic.php?f=47=57926=244614#p244614
>>> - posting to the list in the hopes of getting more eyeballs on it.
>>>
>>> We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
>>>
>>> 2x E5-2650
>>> 128 GB RAM
>>> 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
>>> Dual port 10 GB NIC
>>>
>>> The drives are configured as one large RAID-10 volume with mdadm,
>>> filesystem is XFS. The OS is not installed on the drive - we PXE boot a
>>> CentOS image we've built with minimal packages installed, and do the OS
>>> configuration via puppet. Originally, the hosts were running CentOS 6.5,
>>> with Kafka 0.8.1, without issue. We recently upgraded to CentOS 7.2 and
>>> Kafka 0.9, and that's when the trouble started.
>> 
>> One more stupid question: could the configuration of the card for how the
>> drives are accessed been accidentally changed?
>>
>>  mark
>>
>> ___
>> CentOS mailing list
>> CentOS@centos.org
>> https://lists.centos.org/mailman/listinfo/centos
> 
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
> 



Re: [CentOS] tune2fs: Filesystem has unsupported feature(s) while trying to open

2016-04-30 Thread Dennis Jacobfeuerborn
Then you either made a mistake or ran into a bug. Both "normal" disk
partitions and logical volumes are regular block devices, and tune2fs or
any other tool operating on block devices will see no difference between
them and will treat them identically.
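A quick way to convince yourself of this, sketched against an image file instead of real devices (the device names in the comment are examples; requires e2fsprogs, so the block skips itself if mkfs.ext4 is missing):

```shell
# A partition (/dev/sda1) and an LV (/dev/vg0/lv0) are both plain
# block devices; tune2fs reads the same ext superblock from either.
# Demonstrated here on an ext4 filesystem inside a regular file:
img=/tmp/demo-ext4.img
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
if command -v mkfs.ext4 >/dev/null 2>&1; then
    mkfs.ext4 -q -F "$img"          # -F: target is not a block device
    tune2fs -l "$img" | grep 'Filesystem features'
else
    echo "e2fsprogs not installed; run tune2fs -l on the real device"
fi
```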

On 30.04.2016 12:42, Rob Townley wrote:
> Not in my testing especially about the time of 6.4.
> On Apr 22, 2016 5:16 PM, "Gordon Messmer"  wrote:
> 
>> On 04/22/2016 01:33 AM, Rob Townley wrote:
>>
>>> tune2fs against a LVM (albeit formatted with ext4) is not the same as
>>> tune2fs against ext4.
>>>
>>
>> tune2fs operates on the content of a block device.  A logical volume
>> containing an ext4 system is exactly the same as a partition containing an
>> ext4 filesystem.
>>
>> ___
>> CentOS mailing list
>> CentOS@centos.org
>> https://lists.centos.org/mailman/listinfo/centos
>>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
> 



Re: [CentOS] About mysql upgrade

2016-04-28 Thread Dennis Jacobfeuerborn
On 28.04.2016 17:58, Lamar Owen wrote:
> On 04/28/2016 08:45 AM, Sergio Belkin wrote:
>> I've found some issues upgrading mysql, some people recommends run
>> mysql_upgrade. I wonder why such a script is not run from scriptlet of
>> mysql-server rpm.
> Back in the Dark Ages of the PostgreSQL RPMset (PostgreSQL 6.5), early
> in my time as RPM maintainer for the community PostgreSQL.org RPMset, I
> asked a very similar question of some folks, and I got a canonical
> answer from Mr. RPM himself, Jeff Johnson.
> 
> The answer is not very complex, but it was spread across a several
> message private e-mail thread.  The gist of it is that the RPM
> scriptlets are very very limited in what they can do.  Trying to do
> something clever inside an RPM scriptlet is almost never wise.  The key
> thing to remember is that the scriptlet has to be able to be run during
> the OS install phase (back when upgrades were actually supported by the
> OS installer that is now known as anaconda). Quoting this introductory
> section:
> 
> On August 18, 1999, Jeff Johnson wrote:
>> The Red Hat install environment is a chroot. That means no daemons,
>> no network, no devices, nothing. Even sniffing /proc can be problematic
>> in certain cases.
> 
> Now, I realize that that is OLD information; however, anaconda is still
> doing the same basic chrooted install, just with a prettier face.  You
> cannot start a daemon in the chroot, since many things are simply not
> available to the scriptlets when installed/upgraded by anaconda. 
> Scriptlets have to work in an environment other than 'yum update.'  And
> also note that this is a very different situation than Debian packages
> live in; RPM scriptlets are essentially forbidden from interactivity
> with the user; Debian's equivalent are not so hindered.  At least that
> was the rule as long as I was an active packager.
> 
> Further reference a WayBack Machine archive of a page I wrote long ago:
> https://web.archive.org/web/20010122090200/http://www.ramifordistat.net/postgres/rpm_upgrade.html
> 
> 
> And leaving you with this thought, again from Jeff Johnson:
> 
> On August 18, 1999, Jeff Johnson wrote:
>> Good. Now you're starting to think like a packager  Avoiding MUD is
>> *much* more important than attempting magic.
> 
> 
>> The bottom line is you shouldn't attempt a database conversion as
>> part of the package install. The package, however, should contain
> programs
>> and procedures necessary to do the job.

The real reason something like mysql_upgrade isn't run automatically
doesn't really have much to do with the technicalities of RPM; it's the
fact that mysql_upgrade might kill your server if you don't know what you
are doing.
If, for example, you have a big table that takes up 70% of your storage
and mysql_upgrade needs to convert it to a new format, it will create a
copy of that table in the new format first and only then delete the old
one...which of course would require 140% of the disk space to work.

So, independent of the packaging technology used, major changes like these
should never be done automatically and should instead always be handled by
an admin who knows what he is doing.
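A hedged pre-flight check along those lines; the datadir path and file layout below are fabricated for the demo, while a real check would point at /var/lib/mysql and the actual largest table file:

```shell
# Will the filesystem holding the datadir absorb a full copy of the
# largest table during a rebuild? (demo paths, not real ones)
DATADIR=/tmp/demo-mysql             # normally /var/lib/mysql
mkdir -p "$DATADIR"
dd if=/dev/zero of="$DATADIR/big.ibd" bs=1M count=4 2>/dev/null
largest=$(du -sb "$DATADIR"/*.ibd | sort -n | tail -1 | cut -f1)
free=$(df -B1 --output=avail "$DATADIR" | tail -1)
if [ "$free" -gt "$largest" ]; then
    echo "enough free space to rebuild the largest table"
else
    echo "NOT enough space - a table rebuild could fill the disk"
fi
```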

Regards,
  Dennis




Re: [CentOS] VPN suggestions centos 6, 7

2016-04-05 Thread Dennis Jacobfeuerborn
How is IPSec "not recommended solution nowdays"?

I tend to use IPSec for site-to-site connections i.e. the ones that run
24/7 and only require two experienced people to set up (the admins at
both endpoints).
For host-to-site setups I prefer OpenVPN, since explaining to end users
how to set up an IPsec connection is nigh impossible, whereas with
OpenVPN I can simply tell them to install the software, unzip an
archive into a directory, and they are done.

Regards,
  Dennis

On 05.04.2016 09:07, Eero Volotinen wrote:
> IPSec is not recommended solution nowdays. OpenVPN runs top of single udp
> or tcp port, so it usually works on strictly firewalled places like in
> hotels and so on.
> 
> --
> Eero
> 
> 2016-04-04 23:18 GMT+03:00 Gordon Messmer :
> 
>> On 04/04/2016 10:57 AM, david wrote:
>>
>>> I have seen discussions of OpenVPN, OpenSwan, LibreVPN, StrongSwan (and
>>> probably others I haven't noted).  I'd be interested in hearing from anyone
>>> who wishes to comment about which to use, with the following requirements:
>>>
>>
>> I recommend l2tp/ipsec.  It's supported out of the box on a wide variety
>> of client platforms, which means significantly less work to set up the
>> clients.
>>
>> OpenVPN is a popular choice, and it's fine for most people.  It's more
>> work to set up than l2tp/ipsec, typically.  We used it for quite a while at
>> my previous employer, though ultimately dropped it because the Windows GUI
>> requires admin rights to run, and we didn't want to continue giving admin
>> rights to the users we supported.
>>
>> ___
>> CentOS mailing list
>> CentOS@centos.org
>> https://lists.centos.org/mailman/listinfo/centos
>>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
> 



Re: [CentOS] www.centos.org/forums/

2016-03-26 Thread Dennis Jacobfeuerborn
On 25.03.2016 17:29, Eero Volotinen wrote:
>> @Eero: IMHO you are missing some points here. There are more and more
>> browsers that are unable to use SSL{2,3} as well as TLS1.0, not just
>> disabled via config, but this decission was made at compile time.
>> Newer Android and Apple-iOS devices for example.
>>
>>
> This is not true. it works fine with latest android and ios. I just tested
> it.

The latest version of Android is Marshmallow, which is currently
installed on only 2.3% of the devices out there:
http://developer.android.com/about/dashboards/index.html

You cannot just support the latest version of a client if your site is
accessed by regular users out there.

> 
>> And the point is not that the site supports TLS1.0, but that it does
>> not support TLS1.1 and/or TLS 1.2, and as such is incassessible
>> to devices that ask for TLS1.1 as minimum for HTTPS.
>>
>> But that is for the admins/webmasters of the servers to resolve.
> 
> 
> Many sites are still using centos 5 and clones and cannot support tls 1.2
> and tls 1.1 without upgrade.

Then they might be forced to upgrade to a newer CentOS version. If you
only run your personal blog, then of course you can do whatever you want,
but if you run a commercial site, then the OS you can run depends on what
the clients support and not the other way around.

Regards,
  Dennis




Re: [CentOS] hosted VMs, VLANs, and firewalld

2016-03-21 Thread Dennis Jacobfeuerborn
On 21.03.2016 16:57, Gordon Messmer wrote:
> On 03/20/2016 08:51 PM, Devin Reade wrote:
>> In a CentOS 7 test HA cluster I'm building I want both traditional
>> services running on the cluster and VMs running on both nodes
> 
> On a purely subjective note: I think that's a bad design.  One of the
> primary benefits of virtualization and other containers is isolating the
> applications you run from the base OS.  Putting services other than
> virtualization into the system that runs virtualization just makes
> upgrade more difficult later.
> 
>> A given VM will be assigned a single network interface, either in
>> the DMZ, on vlan2, or on vlan3.  Default routes for each of those
>> networks are essentially different gateways.
> 
> What do you mean by "essentially"?
> 
>>   On the DMZ side, the physical interface is eno1 on which is layered
>>   bridge br0.
> ...
>>   On the other network side, the physical interface is enp1s0, on
>>   which is layered bridge br2, on which is layered VLAN devices
>>   enp1s0.2 and enp1s0.3.
> 
> That doesn't make any sense at all.  In what way are enp1s0.2 and
> enp1s0.3 layered on top of the bridge device?
> 
> Look at the output of "brctl show".  Are those two devices slaves of
> br2, like enp1s0 is?  If so, you're bridging the network segments.
> 
> You should have individual bridges for enp1s0, enp1s0.2 and enp1s0.3. 
> If there were any IP addresses needed by the KVM hosts, those would be
> on the bridge devices, just like on br0.
> 

As a side note, it is actually possible now to have one bridge manage
multiple independent VLANs. Unfortunately this is basically undocumented
(at least I can't find any decent documentation about it).
One user of this is Cumulus Linux:
https://support.cumulusnetworks.com/hc/en-us/articles/204909397-Comparing-Traditional-Bridge-Mode-to-VLAN-aware-Bridge-Mode

Apparently you can manage this with the "bridge" command. Here is what I
get on my Fedora 22 system:

0 dennis@nexus ~ $ bridge fdb
01:00:5e:00:00:01 dev enp4s0 self permanent
33:33:00:00:00:01 dev enp4s0 self permanent
33:33:ff:ef:69:e6 dev enp4s0 self permanent
01:00:5e:00:00:fb dev enp4s0 self permanent
01:00:5e:00:00:01 dev virbr0 self permanent
01:00:5e:00:00:fb dev virbr0 self permanent
52:54:00:d3:ca:6b dev virbr0-nic master virbr0 permanent
52:54:00:d3:ca:6b dev virbr0-nic vlan 1 master virbr0 permanent
01:00:5e:00:00:01 dev virbr1 self permanent
52:54:00:a6:af:5d dev virbr1-nic vlan 1 master virbr1 permanent
52:54:00:a6:af:5d dev virbr1-nic master virbr1 permanent
0 dennis@nexus ~ $ bridge vlan
port         vlan ids
virbr0       1 PVID Egress Untagged
virbr0-nic   1 PVID Egress Untagged
virbr1       1 PVID Egress Untagged
virbr1-nic   1 PVID Egress Untagged

I'm not sure if the CentOS 7 kernel is recent enough to support this but
I thought I'd mention this anyway to make people aware that the "one
bridge per vlan" model is no longer the only one in existence.
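For the curious, a sketch of setting up such a bridge with iproute2; the bridge name and VLAN ids are examples, the vlan_filtering creation option requires a reasonably recent iproute2/kernel, and the block needs root, so it skips itself otherwise:

```shell
# One VLAN-aware bridge carrying VLANs 2 and 3 (names are examples;
# requires root and a kernel with bridge VLAN filtering support).
if [ "$(id -u)" -eq 0 ] && \
   ip link add name br0-demo type bridge vlan_filtering 1 2>/dev/null; then
    bridge vlan add dev br0-demo vid 2 self   # allow VLAN 2 on the bridge
    bridge vlan add dev br0-demo vid 3 self   # allow VLAN 3 on the bridge
    bridge vlan show dev br0-demo
    ip link del br0-demo                      # clean up the demo bridge
else
    echo "skipped: needs root and VLAN-aware bridge support"
fi
```

Slave ports would then be attached with `ip link set ethX master br0-demo` and given their own `bridge vlan add` entries instead of one bridge per VLAN.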

Regards,
  Dennis




Re: [CentOS] /tmp full with systemd-private*

2016-02-09 Thread Dennis Jacobfeuerborn
On 09.02.2016 17:05, Kai Bojens wrote:
> CentOS: 7.1.1503
> 
> I have a problem with systemd which somehow manages to fill /tmp up with a 
> lot of
> files. These files obviously are from the Apache server and don't pose a 
> problem
> per se. The problem is that these files don't get removed daily:
> 
> du -hs systemd-private-*
> 7,7G  systemd-private-mpg7rm
> 0 systemd-private-olXnby
> 0 systemd-private-qvJJ5o
> 0 systemd-private-Rs2nBv
> 
> It was my understanding that these temp-files should have been removed daily 
> as
> it is stated here:
> 
> $: grep -v '^#' /usr/lib/systemd/system/systemd-tmpfiles-clean.timer
> 
> [Unit]
> Description=Daily Cleanup of Temporary Directories
> Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)
> 
> [Timer]
> OnBootSec=15min
> OnUnitActiveSec=1d
> 
> Am I missing something? Is there a better way with a systemd based systemd to
> have these files removed daily?

Have you checked which process creates the files and apparently doesn't
clean them up properly, for example by examining their contents?
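One way to start that investigation: a systemd-private-* directory belongs to a service started with PrivateTmp=yes, and its contents usually identify the owner. The directory and file below are demo stand-ins; a real run would target the /tmp/systemd-private-mpg7rm directory from the post:

```shell
# Demo stand-in for a leaked PrivateTmp directory (fabricated names):
d=/tmp/systemd-private-demo/tmp
mkdir -p "$d"
: > "$d/apr-upload-demo"        # fabricated example of a leftover file
find "$d" -type f               # file names usually reveal the culprit
# On the real system, also check which processes hold files open:
#   lsof +D /tmp/systemd-private-mpg7rm
# and confirm a suspect unit uses a private /tmp:
#   systemctl show httpd -p PrivateTmp
```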

Regards,
  Dennis



Re: [CentOS] NFS problem after 7.2

2016-01-01 Thread Dennis Jacobfeuerborn
On 01.01.2016 12:23, Gordon Messmer wrote:
> On 01/01/2016 01:55 AM, Mark wrote:
>> The command 'yum downgrade nfs-utils' just
>> returns nothing to do. I've searched centos.org to find the previous
>> package nfs-utils-1.3.0-0.8.el7.x86_64.rpm to manually download and
>> install, but haven't found it. What is actually the best way to do a
>> downgrade?
> 
> http://centos.s.uw.edu/centos/7.1.1503/os/x86_64/Packages/nfs-utils-1.3.0-0.8.el7.x86_64.rpm
> 
> 
> "yum downgrade" will typically take you back to the version included
> with a release.
> 
> I thought that old packages were available on http://vault.centos.org/,
> but after a brief look I only see source packages.  Does anyone know
> where old binaries are kept, if anywhere?

At least this mirror has all the 7.1.1503 files available:
http://mirror.netcologne.de/centos/7.1.1503/os/x86_64/Packages/nfs-utils-1.3.0-0.8.el7.x86_64.rpm

Regards,
  Dennis



Re: [CentOS] CentOS 7 (1511) is released

2015-12-15 Thread Dennis Jacobfeuerborn
On 15.12.2015 03:22, John R Pierce wrote:
> On 12/14/2015 3:46 PM, Wes James wrote:
>> I just updated to 7.2 from 7.1.  I did lsb_release -a and it says
>> 7.2.1511.  I haven’t rebooted yet, which items would run with new
>> binaries, anything that isn’t running yet? Ssay I had apache running,
>> it wouldn’t pick up new apache until a reboot, right?
> 
> most service updates will restart the service

Will they? That sounds like a pretty terrible idea.

Regards,
  Dennis




Re: [CentOS-virt] win2008r2 update on centos 6 host made system unbootable

2015-12-09 Thread Dennis Jacobfeuerborn
On 09.12.2015 13:47, Patrick Bervoets wrote:
> 
> 
> Op 09-12-15 om 01:00 schreef Dennis Jacobfeuerborn:
>> On 09.12.2015 00:39, NightLightHosts Admin wrote:
>>> On Tue, Dec 8, 2015 at 5:26 PM, Dennis Jacobfeuerborn
>>> <denni...@conversis.de> wrote:
>>>> Hi,
>>>> today we ran into a strange problem: When performing a regular Windows
>>>> 2008r2 update apparently among other things the following was
>>>> installed:
>>>> "SUSE - Storage Controller - SUSE Block Driver for Windows"
>>>>
>>>> [...]
>>>> [...]
>>>> What worries me is that I want to update other win2008r2 guests as well
>>>> but now fear that they will all be rendered unbootable by such an
>>>> update.
>>>>
>>>> Regards,
>>>>Dennis
>>>>
>>>> ___
>>>> CentOS-virt mailing list
>>>> CentOS-virt@centos.org
>>>> https://lists.centos.org/mailman/listinfo/centos-virt
>>>>
> 
> Which virtualization are you using? KVM?
> 
> How did you get that update offered?
> 
> I can't reproduce it, but then my servers are on a patch management
> software.
> And I can't check on WU because I don't want to install the new update
> client.
> 
> Anyway, I would uncheck that patch when updating the other guests if I
> were you. And work on a copy / snapshot.

Yes, this is a CentOS 6 host using regular libvirt-based virtualization.
The SUSE driver is apparently an optional update that gets delivered
through the regular Microsoft update mechanism.
It's hard to believe that they didn't catch a completely broken driver
during QA, so my hypothesis is that the new virtio driver is perhaps
incompatible only with the older kernel of CentOS 6 and that this wasn't
properly tested. To verify this one could check whether the same thing
happens on a CentOS 7 host, but at the moment I'm too busy to check this.

Regards,
  Dennis


___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] win2008r2 update on centos 6 host made system unbootable

2015-12-08 Thread Dennis Jacobfeuerborn
On 09.12.2015 00:39, NightLightHosts Admin wrote:
> On Tue, Dec 8, 2015 at 5:26 PM, Dennis Jacobfeuerborn
> <denni...@conversis.de> wrote:
>> Hi,
>> today we ran into a strange problem: When performing a regular Windows
>> 2008r2 update apparently among other things the following was installed:
>> "SUSE - Storage Controller - SUSE Block Driver for Windows"
>>
>> Previously the disk drive was using the Red Hat virtio drivers which
>> worked just fine but after the reboot after the update I just get a blue
>> screen indicating that Windows cannot find a boot device.
>>
>> Does anyone understand what is going on here? Why is the windows update
>> installing a Suse driver that overrides the Red Hat driver even though
>> it is apparently incompatible with the system?
>>
>> Regards,
>>   Dennis
> 
> Did you roll back the driver and did it work after that?

I can't roll back the driver for that device because I can't boot the
system. The only way I can boot into the system is by changing the disk
type to IDE but then I cannot roll back the driver because the entire
device changed. As far as I can tell the Suse version of the virtio
block driver is incompatible with the system, but right now I see no
way to tell Windows "uninstall the driver completely for the entire
system" so that on the next boot it would fall back to the old virtio
driver from Red Hat.
I tried installing the current stable drivers from this URL:
https://fedoraproject.org/wiki/Windows_Virtio_Drivers

But Windows refuses and says the driver is already up-to-date.

What worries me is that I want to update other win2008r2 guests as well
but now fear that they will all be rendered unbootable by such an update.

Regards,
  Dennis

___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] firewalld being stupid

2015-11-17 Thread Dennis Jacobfeuerborn
On 17.11.2015 15:18, James B. Byrne wrote:
> 
> On Mon, November 16, 2015 16:39, Nick Bright wrote:
>> On 11/6/2015 3:58 PM, James Hogarth wrote:
>>> I have a couple of relevant articles you may be interested in ...
>>>
>>> On assigning the zone via NM:
>>> https://www.hogarthuk.com/?q=node/8
>>>
>>> Look down to the "Specifying a particular firewall zone" bit ...
>>> remember that if you edit the files rather than using nmcli you must
>>> reload NM (or do nmcli reload) for that to take effect.
>>>
>>> If you specify a zone in NM then this will override the firewalld
>>> configuration if the zone is specified there.
>>>
>>> Here's some firewalld stuff:
>>> https://www.hogarthuk.com/?q=node/9
>>>
>>> Don't forget that if you use --permanent on a command you need to do
>>> a
>>> reload for it to read the config from disk and apply it.
>> Thanks for the articles, they're informative.
>>
>> Here's what's really irritating me though.
>>
>> firewall-cmd --zone=internal --change-interface=ens224 --permanent
>>
>> ^^ This command results in NO ACTION TAKEN. The zone IS NOT CHANGED.
>>
>> firewall-cmd --zone=internal --change-interface=ens224
>>
>> This command results in the zone of ens224 being changed to internal,
>> as
>> desired. Of course, this is not permanent.
>>
>> As such, firewall-cmd --reload (or a reboot, ect) will revert to the
>> public zone. To save the change, one must execute firewall-cmd
>> --runtime-to-permanent.
>>
>> This is very frustrating, and not obvious. If --permanent doesn't work
>> for a command, then it should give an error - not silently fail
>> without doing anything!
>>
> 
> This behaviour is congruent with SELinux. One utility adjusts the
> permanent configuration, the one that will be applied at startup.
> Another changes the current running environment without altering the
> startup config.  From a sysadmin point of view this is desirable since
> changes to a running system are often performed for empirical testing.
> Leaving ephemeral state changes permanently fixed in the startup
> config could, and almost certainly would eventually, lead to serious
> problem during a reboot.
> 
> Likewise, immediately introducing a state change to a running system
> when reconfiguring system startup options is just begging for an
> operations incident report.
> 
> It may not be intuitive to some but it is certainly the logical way of
> handling this.
> 

The better way is to explicitly allow the user to dump the runtime
configuration as the persistent configuration, as that makes it much
harder to end up with subtly diverging configurations due to user
error. On network switches you often find something like "copy
running-config startup-config" for exactly this purpose.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] firewalld being stupid

2015-11-17 Thread Dennis Jacobfeuerborn
On 17.11.2015 17:51, m.r...@5-cent.us wrote:
> Nick Bright wrote:
>> On 11/17/2015 8:18 AM, James B. Byrne wrote:
>>> This behaviour is congruent with SELinux. One utility adjusts the
>>> permanent configuration, the one that will be applied at startup.
>>> Another changes the current running environment without altering the
>>> startup config. From a sysadmin point of view this is desirable since
>>> changes to a running system are often performed for empirical testing.
>>> Leaving ephemeral state changes permanently fixed in the startup
>>> config could, and almost certainly would eventually, lead to serious
>>> problem during a reboot. Likewise, immediately introducing a state
>>> change to a running system when reconfiguring system startup options
>>> is just begging for an operations incident report. It may not be
>>> intuitive to some but it is certainly the logical way of handling this.
>>
>> I certainly don't disagree with this behavior.
>>
>> What I disagree with is documented commands _*not working and failing
>> silently*_.
>>
> I agree, and it seems to be the way systemd works, as a theme, as it were.
> I restart a service... and it tells me *nothing* at all. I have to run a
> second command, to ask the status. I've no idea why it's "bad form" to
> tell me progress, and final result. You'd think they were an old New
> Englander.

Systemd has better mechanisms to report feedback compared to SysV
scripts but if the creators of the service files and the daemons don't
make use of these that's hardly systemd's fault. The best way is to use
"Type=notify" which allows a daemon to actually report to systemd when
it is done initializing. If the daemon doesn't support this then you can
still use ExecStartPost to specify a command that verifies that the
daemon indeed did start up correctly (and no the binary returning a code
of 0 does not mean the service is actually up-and-running properly).
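As a rough illustration of the two approaches mentioned above (the unit
and binary names here are made up for the example; this is a sketch,
not a drop-in file):

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example
[Unit]
Description=Example daemon with startup feedback

[Service]
# Preferred: the daemon itself calls sd_notify(READY=1) once it has
# finished initializing, so systemd knows exactly when it is up.
Type=notify
ExecStart=/usr/sbin/mydaemon

# Fallback for daemons without sd_notify support: run a check command
# after ExecStart; if it exits non-zero the unit is considered failed.
#ExecStartPost=/usr/local/bin/check-mydaemon-is-up

[Install]
WantedBy=multi-user.target
```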

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] firewalld being stupid

2015-11-16 Thread Dennis Jacobfeuerborn
On 16.11.2015 22:58, Gordon Messmer wrote:
> On 11/16/2015 01:39 PM, Nick Bright wrote:
>> This is very frustrating, and not obvious. If --permanent doesn't work
>> for a command, then it should give an error - not silently fail
>> without doing anything! 
> 
> But --permanent *did* work.
> 
> What you're seeing is the documented behavior:
>--permanent
>The permanent option --permanent can be used to set options
>permanently. These changes are not effective immediately, only
>after service restart/reload or system reboot. Without the
>--permanent option, a change will only be part of the runtime
>configuration.
> 
>If you want to make a change in runtime and permanent
>configuration, use the same call with and without the
> --permanent
>option.

That's fairly annoying behavior as it creates the potential for
accidentally diverging configurations.
Why not do the same as virsh and have two options for this? When I
attach a device I can specify --config to update the persistent
configuration, --live to update the runtime configuration, and both
flags if I want to change both. That's a much better API in my opinion.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] strange diskspace consuming

2015-10-20 Thread Dennis Jacobfeuerborn
On 20.10.2015 12:11, Götz Reinicke - IT Koordinator wrote:
> Hi,
> 
> I do a tgz-backup some maildir-folders with n*1000 off files and a lot
> of GB in storage. The backuped maildirs are removed after the tar.
> 
> My assumption was, that the free diskspace should be bigger after that,
> but from what I get with df, it looks like I'm loosing space.
> 
> Currently the tgz is saved on the same disk/mountpoint.
> 
> Any hint, why removing the maildirs dont free diskspace as expected?
> 
> It is still an ext3 filesystme

The files might still be in use by some process and in that case the
space will not be freed until that process closes the files.

Try this:
lsof -nn|grep deleted

That shows all files that are still in use by a process but are marked
as deleted on the filesystem.
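A small self-contained demonstration of the mechanism (the path is
whatever mktemp returns; a sketch of the effect, not of the original
poster's setup):

```shell
# Show that an unlinked file stays allocated while a process holds it open.
tmp=$(mktemp)
exec 3>"$tmp"              # keep file descriptor 3 open on the file
echo "still referenced" >&3
rm "$tmp"                  # unlink it: the space is NOT freed yet
readlink /proc/$$/fd/3     # target now ends in "(deleted)"
exec 3>&-                  # closing the descriptor finally frees the blocks
```

Once you know the PID from lsof, cleanly stopping or restarting that
process is what actually releases the space.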

Regards,
 Dennis

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Help with systemd

2015-09-23 Thread Dennis Jacobfeuerborn
On 24.09.2015 00:06, John R Pierce wrote:
> On 9/23/2015 2:58 PM, Jonathan Billings wrote:
>> 1.) why 'cd /greenstone/gs3 && ant start' when you could just run
>> '/greenstone/gs3/ant start'.
> 
> thats *not* equivalent, unless ant is in /greenstone/gs3 *and* . is in
> the path, and even then,  ant looks for build.xml in the current path
> when its invoked, so the cd /path/to/build/  is appropriate.

Mind you I only work with ant very rarely but what should work is this:
/path/to/ant -buildfile /greenstone/gs3/build.xml -Dbasedir=/greenstone/gs3

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] Using STP in kvm bridges

2015-09-16 Thread Dennis Jacobfeuerborn
On 16.09.2015 12:18, C.L. Martinez wrote:
> On 09/16/2015 10:15 AM, Dmitry E. Mikhailov wrote:
>> On 09/16/2015 03:02 PM, C.L. Martinez wrote:
>>>   What advantages and disadvantages have??  If I will want to install
>>> some kvm guests that use multicast address for certain services, is it
>>> recommended to enable STP?
>> STP has nothing to do with multicast as it's an Ethernet protocol.
>> It's developed to provide loop-free redundancy links to Ethernet-based
>> networks.
>>
>> I can't imagine any legitimate use of STP within virtualized environment
>> except when BOTH a) you don't trust the person who manages VM's (like in
>> VPS providing) AND b) you provide more then one network interface to the
>> virtual machine.
>>
>> Otherwise STP can be used to prevent traffic storm because of malicious
>> bridging of vNIC's inside VM.
>>
>> Best regards,
>>  Dmitry Mikhailov
> 
> Thanks Dmitry... Uhmm, but my case is: "b) you provide more then one
> network interface to the virtual machine". I have several kvm guests
> with 3 or more network interfaces ... In this case, do you recommends to
> enable STP??

You should always enable STP on a bridge unless you have a very specific
reason not to.

Regards,
  Dennis

___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] Upgrade of CentOS 6.6 to 6.7

2015-09-03 Thread Dennis Jacobfeuerborn
Specifically the output of "ip r" and "ip a" would be useful both with
the working and non-working kernel for comparison.

On 09/03/2015 10:52 PM, Jim Perrin wrote:
> Can you provide a bit more detail? What hardware are you using, what
> network driver is in use, etc..
> 
> On 09/03/2015 02:52 PM, John Tebbe wrote:
>> After the upgrade, I was encountering a "network unreachable" error.
>> This happens on kernel versions -> 2.6.32-573.1 .1-el6.x86_64 and
>> 2.6.32-573.3.1-el6.x86_64. If I revert back to
>> 2.6.32.504.30.3.el6.x86_64, the problem goes away. I've been searching
>> frantically without much luck. I have found references to the same error
>> on bugs.centos.org but no resolution as of yet. Has anyone else
>> encountered this? If so, what did you do to resolve it?
>>
>> TIA,
>> John
>>
> 

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 7.1 NFS Client Issues - rpc.statd / rpcbind

2015-08-31 Thread Dennis Jacobfeuerborn
On 08/31/2015 05:48 AM, Mark Selby wrote:
> That is the thing - rpc.statd does have rpcbind a pre-req. It looks like
> systemd is not handling this correctly. Just wondering if anyone knows a
> good way to fix.
> 
> root@ls2 /usr/lib/systemd/system 110#  grep Requires rpc-statd.service
> Requires=nss-lookup.target rpcbind.target

This is a bug in the NFS service configuration files. You need to copy
rpc-statd.service over to /etc/systemd/system and change the
"rpcbind.target" to "rpcbind.service". Don't forget the "systemctl
daemon-reload" afterwards if you don't reboot.
See this bug for details:
https://bugzilla.redhat.com/show_bug.cgi?id=1171603
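The edit itself is a one-line substitution; a self-contained sketch
(the sample line mimics the Requires= entry quoted above, since
touching the real unit file needs root):

```shell
# On the real system:
#   cp /usr/lib/systemd/system/rpc-statd.service /etc/systemd/system/
#   sed -i 's/rpcbind\.target/rpcbind.service/g' /etc/systemd/system/rpc-statd.service
#   systemctl daemon-reload
# Demonstration of the substitution on the quoted Requires= line:
printf 'Requires=nss-lookup.target rpcbind.target\n' \
  | sed 's/rpcbind\.target/rpcbind.service/g'
# -> Requires=nss-lookup.target rpcbind.service
```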

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Shutdown hangs on "Unmounting NFS filesystems"

2015-08-31 Thread Dennis Jacobfeuerborn
On 08/31/2015 02:15 AM, Robert Nichols wrote:
> On 08/30/2015 04:45 PM, John R Pierce wrote:
>> On 8/30/2015 2:20 PM, Robert Nichols wrote:
>>> Once the system gets into this state, the only remedy is a forced
>>> power-off.  What seems to be happening is that an NFS filesystem that
>>> auto-mounted over a WiFi connection cannot be unmounted because the
>>> WiFi connection is enabled only for my login and gets torn down when
>>> my UID is logged off.
>>>
>>> Any suggestions on how I can configure things to avoid this?  I
>>> really don't want to expose my WPA2 key by making the connection
>>> available to all users.
>>
>> my experience is A) NFS doesn't like unreliable networks, and B) WiFi
>> isn't very reliable.
>>
>> perhaps using the 'soft' mount option will help, along with intr ?
> 
> Making use of the "intr" option would require that the umount process
> have the console as its controlling tty.  AFAICT, having been invoked
> from the init process, it has _no_ controlling tty.  Hard to send a
> SIGINT that way.

The "intr" option is no longer available. See the nfs man page:
"This option is provided for backward compatibility.  It is ignored
after kernel 2.6.25."

You should be able to kill -9 the process though.

Regards,
  Dennis
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] UMask=0002 ignored in httpd service file

2015-08-20 Thread Dennis Jacobfeuerborn
On 08/20/2015 06:54 PM, Dennis Jacobfeuerborn wrote:
 Hi,
 I'm trying to get Apache httpd to create new files with the group
 writable bit set and thought adding the directive UMask=0002 to the
 service section of the service file would be enough (after copying it to
 /etc/systemd/system). But after a systemctl daemon-reload followed by a
 service restart files are still created as -rw-r--r-- instead of the
 expected -rw-rw-r--.
 Does anyone have an idea what is missing here or how I can debug why the
 directive is apparently ignored?

Never mind I was just being dumb. The directive belonged in the php-fpm
service file of course and not the httpd one.
Sorry for the noise.
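For anyone who finds this thread later, the kind of override that ended
up working here looks roughly like this (a sketch; on CentOS 7 the
stock unit lives under /usr/lib/systemd/system, and a daemon-reload
plus service restart is needed afterwards):

```ini
# /etc/systemd/system/php-fpm.service (copy of the stock unit, edited)
[Service]
UMask=0002
```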

Regards,
  Dennis


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] UMask=0002 ignored in httpd service file

2015-08-20 Thread Dennis Jacobfeuerborn
Hi,
I'm trying to get Apache httpd to create new files with the group
writable bit set and thought adding the directive UMask=0002 to the
service section of the service file would be enough (after copying it to
/etc/systemd/system). But after a systemctl daemon-reload followed by a
service restart files are still created as -rw-r--r-- instead of the
expected -rw-rw-r--.
Does anyone have an idea what is missing here or how I can debug why the
directive is apparently ignored?

Regards,
  Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 6.7

2015-08-07 Thread Dennis Jacobfeuerborn
On 08/07/2015 01:56 PM, Farkas Levente wrote:
 On 08/07/2015 01:04 PM, Johnny Hughes wrote:
 6.7 is there most places ... since we have more than 500 external 
 mirrors (right now 593) not all of them are updated.  (looks like
 4% still are not completely updated)
 
 what about the src.rpms? it seems http://vault.centos.org/6.7/os/ and
 http://vault.centos.org/6.7/cr/Source/ is empty and while
 http://vault.centos.org/6.7/updates/Source/SPackages/ also seems to
 very outdated.

I think it would make more sense to wait for the actual release
announcement first before asking about missing files.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Semi-OT: configuring mongodb for sharding

2015-07-29 Thread Dennis Jacobfeuerborn
On 07/29/2015 04:43 PM, m.r...@5-cent.us wrote:
 Anyone know about this? Googling, all I can find is mongodb's 3.x manual,
 nothing for the 2.4 we get from epel.
 
 What I need to do, CentOS 6.6, is start it as a service, not a user, and
 have it do sharding. I see examples of how to start it as a user... but I
 can't find if there's a syntax for /etc/mongodb.conf to tell it that, and
 I don't want to have to edit /etc/init.d/mongod
 
 Clues for the poor?

Use the packages from the official MongoDB repo and not the packages
from epel. MongoDB is rather buggy and you always want to run recent
versions. The last version I ran in a sharded setup was 2.6.5 and that
contained some rather ugly bugs that resulted in no proper balancing
happening between the shards and replica sets becoming confused about
the number of servers that were members of a set.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Backups solution from WinDoze to linux

2015-07-16 Thread Dennis Jacobfeuerborn
On 16.07.2015 11:36, Leon Fauster wrote:
 Am 16.07.2015 um 02:22 schrieb Valeri Galtsev galt...@kicp.uchicago.edu:

 On Wed, July 15, 2015 7:05 pm, Michael Mol wrote:
 On Tue, Jul 14, 2015, 10:37 AM  m.r...@5-cent.us wrote:

 My manager just tasked me at looking at this, for one team we're
 supporting. Now, he'd been thinking of bacula, but I see their Windows
 binaries are now not-free, so I'm looking around. IIRC, Les thinks highly
 of backuppc; comments on that, or other packaged solutions?


 We use Bareos extensively. By default, Bareos is Bacula-compatible. We use
 Bareos extensively.

 What is the story between bareos and bacula? And why you prefer bareos as
 opposed to bacula. Just curios: I use bacula (it is bacula 5, server is
 FreeBSD, clients are CentOS 5,6,7, FreeBSD 9,10, Windows 7). Thanks for
 your insights!
 
 
 I personally prefer bacula. For more informations about the case above look 
 at:
 
 http://blog.bacula.org/category/kerns-blog/
 http://blog.bacula.org/category/status-reports/
 http://sourceforge.net/p/bacula/mailman/message/33199834/


I've tried bacula/bareos and they are horribly outdated in how they
approach backups and only really useful if you use tape backups (because
that's the only target they were designed for).

I've found obnam to be a good solution as it is lightweight, does
de-duplication (no full/incremental/differential nonsense) and can
back up any sftp source. It's not perfect but it's the best tool I've
found so far.

Regards,
  Dennis


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] nfs-server.service start delayed 60 seconds asking for password?

2015-06-18 Thread Dennis Jacobfeuerborn
On 16.06.2015 04:34, Dennis Jacobfeuerborn wrote:
 Hi,
 when I start nfs-server.service it takes 60 seconds until the nfsd
 finally is up. Looking at the process list I see a process
 'systemd-tty-ask-password-agent' running which goes away after the 60
 seconds the startup requires.
 
 Does anyone have an idea what the reason could be for this and how I can
 get rid of this delay?

For anyone who also runs into this:

This is a bug in nfs-server.service. This fix is to copy
nfs-server.service to /etc/systemd/system and replace all occurrences of
rpcbind.target with rpcbind.service.

See this bug for reference:
https://bugzilla.redhat.com/show_bug.cgi?id=1171603

Regards,
 Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] nfs-server.service start delayed 60 seconds asking for password?

2015-06-15 Thread Dennis Jacobfeuerborn
Hi,
when I start nfs-server.service it takes 60 seconds until the nfsd
finally is up. Looking at the process list I see a process
'systemd-tty-ask-password-agent' running which goes away after the 60
seconds the startup requires.

Does anyone have an idea what the reason could be for this and how I can
get rid of this delay?

Regards,
  Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two partitions with samd UUID??

2015-06-15 Thread Dennis Jacobfeuerborn
On 15.06.2015 18:13, John Hodrien wrote:
 On Mon, 15 Jun 2015, jd1008 wrote:
 
 Thanx for the update
 but what about non-gpt and non lvm partitions?
 What is used as input to create a universally unique id?

 (Actually, for an id to be universally unique, one would almost
 need knowledge of all existing id's.
 So, I do not have much credence in this universal uniqueness.)
 
 Sufficiently random gets you there, since you're not connecting billions of
 filesystems to a single system.  If you really want to generate them by
 hand,
 feel free, as mkfs.ext4 lets you specify the filesystem UUID.

Or if you are in the situation of the original poster and made a copy
using dd use tune2fs -L and -U to modify label and uuid of the copy.
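A sketch of that, with the device name purely an example; the fresh
UUID can come straight from the kernel:

```shell
# Generate a fresh random UUID (works on any Linux, no uuidgen needed):
new_uuid=$(cat /proc/sys/kernel/random/uuid)
echo "$new_uuid"
# Then, on the dd'ed copy (example device name, run as root):
#   tune2fs -U "$new_uuid" /dev/sdb1
#   tune2fs -L backup-copy /dev/sdb1
```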

Regards,
  Dennis


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Effectiveness of CentOS vm.swappiness

2015-06-05 Thread Dennis Jacobfeuerborn
On 05.06.2015 19:47, Greg Lindahl wrote:
 On Fri, Jun 05, 2015 at 09:33:11AM -0700, Gordon Messmer wrote:
 On 06/05/2015 03:29 AM, Markus Shorty Uckelmann wrote:
 some (probably unused) parts are swapped out. But, some of
 those parts are the salt-minion, php-fpm or mysqld. All services which
 are important for us and which suffer badly from being swapped out.

 Those two things can't really both be true.  If the pages swapped
 out are unused, then the application won't suffer as a result.
 
 No.
 
 Let's say the application only uses the page once per hour. If there
 is also I/O going on, then it's easy to see that the kernel could
 decide to page the page out after 50 minutes, leaving the application
 having to page it back in 10 minutes later.

That's true, but it also means that if you lock that page so it cannot
be swapped out then this page is not available for the page cache, so
you incur the i/o hit either way and it's probably going to be worse
because the system no longer has the option to optimize its memory
management.
I wouldn't worry about it until there's actually permanent swap activity
going on and then you have to decide if you want to add more ram to the
system or maybe find a way to tell e.g. Bacula to use direct i/o and not
pollute the page cache.
For application that do not allow to specify this a wrapper could be
used such as this one:
http://arighi.blogspot.de/2007/04/how-to-bypass-buffer-cache-in-linux.html

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Effectiveness of CentOS vm.swappiness

2015-06-05 Thread Dennis Jacobfeuerborn
On 06.06.2015 04:48, Dennis Jacobfeuerborn wrote:
 On 05.06.2015 19:47, Greg Lindahl wrote:
 On Fri, Jun 05, 2015 at 09:33:11AM -0700, Gordon Messmer wrote:
 On 06/05/2015 03:29 AM, Markus Shorty Uckelmann wrote:
 some (probably unused) parts are swapped out. But, some of
 those parts are the salt-minion, php-fpm or mysqld. All services which
 are important for us and which suffer badly from being swapped out.

 Those two things can't really both be true.  If the pages swapped
 out are unused, then the application won't suffer as a result.

 No.

 Let's say the application only uses the page once per hour. If there
 is also I/O going on, then it's easy to see that the kernel could
 decide to page the page out after 50 minutes, leaving the application
 having to page it back in 10 minutes later.
 
 That's true but it also means that if you lock that page so it cannot be
 swapped out then this page is not available for the page cache so you
 incur the i/o hit either way and it's probably going to be worse because
 the system has no longer an option to optimize the memory management.
 I wouldn't worry about it until there's actually permanent swap activity
 going on and then you have to decide if you want to add more ram to the
 system or maybe find a way to tell e.g. Bacula to use direct i/o and not
 pollute the page cache.
 For application that do not allow to specify this a wrapper could be
 used such as this one:
 http://arighi.blogspot.de/2007/04/how-to-bypass-buffer-cache-in-linux.html

Actually I found better links:
https://code.google.com/p/pagecache-mangagement/
http://lwn.net/Articles/224653/

It is to address the waah, backups fill my memory with pagecache and
the waah, updatedb swapped everything out and the waah, copying a DVD
gobbled all my memory problems.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Effectiveness of CentOS vm.swappiness

2015-06-04 Thread Dennis Jacobfeuerborn
On 04.06.2015 22:18, Markus Shorty Uckelmann wrote:
 Hi all,
 
 This might not be CentOS related at all. Sorry about that.
 
 I have lots of C6  C7 machines in use and all of them have the default
 swappiness of 60. The problem now is that a lot of those machines do
 swap although there is no memory pressure. I'm now thinking about
 lowering swappiness to 1. But I'd still like to find out why this
 happens. The only common thing between all those machines is that there
 are nightly backups done with Bacula. I once came across issues with the
 fs-cache bringing Linux to start paging out. Any hints, explanations and
 suggestions would be much appreciated.

If I'd have to venture a guess then I'd say there are memory pages that
are never touched by any processes and as a result the algorithm has
decided that it's more effective to swap out these pages to disk and use
the freed ram for the page-cache.
Swap usage isn't inherently evil and what you really want to check for
is the si/so columns in the output of the vmstat command. If the
system is using swap space but these columns are mostly 0 then that
means memory has been swapped out in the past but there is no actual
swap activity happening right now and there should be no performance
impact. If however these numbers are consistently larger than 0 then
that means the system is under acute memory pressure and has to
constantly move pages between ram and disk and that will have a large
negative performance impact on the system. This is the moment when swap
usage becomes bad.
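A quick way to watch just those two columns (the vmstat output below is
a canned sample so the pipeline is self-contained; on a live system you
would feed it `vmstat 1 5` instead):

```shell
# si/so are columns 7 and 8 of vmstat's data lines.  A canned sample
# follows; live vmstat prints two header lines, so use NR>2 there.
sample=' r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0  20480 123456   7890 345678    0    0     5    10  100  200  5  2 92  1  0'
printf '%s\n' "$sample" | awk 'NR>1 { print "si="$7, "so="$8 }'
# -> si=0 so=0
```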

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Upgrading to CentOS 7

2015-05-19 Thread Dennis Jacobfeuerborn
On 19.05.2015 16:37, Stephen Harris wrote:
 On Tue, May 19, 2015 at 09:25:30AM -0500, Jim Perrin wrote:
 If you have a good config management environment set up, rolling out a
 new build to replace older systems is much easier than walking through
 an update on each system. I really recommend people use ansible, chef,
 puppet.. whatever they're comfortable with to do some basic automation.
 
 Just do lots of testing, first :-)  There are sufficient differences
 between major OS releases (5, 6, 7) that you may need different rules
 for each type.
 
 For example, postfix is different version on each so main.cf and master.cf
 are different and have version specific differences.
 Apache is sufficiently the same between 5 and 6, but 7 has a totally
 new way of doing things
 And, of course, sysvinit vs upstart vs systemd!
 
 Config managementis a great way of rebuilding a new copy of an existing
 version, but it's not a panacea when changing versions.

It's a good way to keep track of what makes your system unique though.
Kind of a diff between the core installation and the final production
system.

For a lot of people the biggest problem seems to be identifying what
they need to migrate to get things running again; adapting that to new
versions isn't actually that big an issue. Sure, you remember to copy
/etc/httpd, but did you also copy that script you wrote that tweaks
some queue settings in /sys, or that maintenance script you stored
under /usr/local or /opt or wherever that you haven't had to use in a
year? If you have the discipline to put all that into a configuration
management system then you don't have to search for these things.

Regards,
  Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 7 and qemu-kvm

2015-05-09 Thread Dennis Jacobfeuerborn
On 09.05.2015 15:26, Jerry Geis wrote:
 Still trying to migrate to CentOS 7.
 
 I used to use qemu-kvm on centos 6. tried to compile on
 centos 7 and get error about undefined reference to timer_gettime
 searching for that says basically use virt-manager


Why are you trying to compile it yourself and not use the version that
comes with the OS?

Regards,
  Dennis


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] nfs (or tcp or scheduler) changes between centos 5 and 6?

2015-04-29 Thread Dennis Jacobfeuerborn
 You may want to look at NFSometer and see if it can help.
 
 Haven't seen that, will definitely give it a try!

Try nfsstat -cn on the clients to see if any particular NFS operations
occur more or less frequently on the C6 systems.

Also look at the lookupcache option found in man nfs:

lookupcache=mode
Specifies how the kernel manages its cache of directory entries for a
given mount point. mode can be one of all, none, pos, or positive.  This
option is supported in kernels 2.6.28 and later.
(there is more text in the man page)

Since C5 came with 2.6.18 and C6 with 2.6.32 this might have something
to do with it.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] C7 systemd and network configuration

2015-04-21 Thread Dennis Jacobfeuerborn
On 21.04.2015 16:46, Johnny Hughes wrote:
 On 04/21/2015 08:54 AM, Jonathan Billings wrote:
 On Tue, Apr 21, 2015 at 03:46:52PM +0200, Dennis Jacobfeuerborn wrote:
 Networking isn't really controlled by systemd but by NetworkManager. I
 usually just yum remove NetworkManager* and then everything works just
 as it did in CentOS 6.

 Note:  NetworkManager is in CentOS6 too, and is part of the default
 workstation install.  The NM in CentOS7 is a bit more polished than
 the NM in CentOS6, but it is configured in the same way, using files
 in /etc/sysconfig/network-scripts/ (using the ifcfg-rh NetworkManager
 plugin).  In both cases, you can remove NM and use the 'network'
 service instead.

 
 You can disable NetworkManager for now in CentOS-7 and use the network
 service .. but in reality I am not sure how long that is going to be
 100% true.  In fact, things like dnsmsq and even libvirt/qemu are
 becoming much harder to configure to work via the network service and
 are pre-configured to work with NetworkManager. (Don't yell at me, not
 my decision :D)
 
 I have decided it is likely better to bite the bullet and learn how to
 use and configure Network Manager if you are going to do anything other
 than very simple things with your network .. at least on CentOS-7 or
 higher (ie, Fedora  18, etc.).
 
 Again, one CAN still use the network service .. but most documentation
 available now assumes instead that Network Manager is being used.

systemd-networkd is becoming increasingly capable and popular though so
NetworkManager might not actually stay around for too long.

Regards,
  Dennis



Re: [CentOS] C7 systemd and network configuration

2015-04-21 Thread Dennis Jacobfeuerborn
On 21.04.2015 14:10, Mihamina Rakotomandimby wrote:
 Hi all,
 
 I used to manage network through /etc/sysconfig/network-scripts/ifcfg-*
 Most of my use case are vlans (ie: eth0.1) an aliases (ie: eth1:3)
 My context in headless VMs (no DE, no Xorg, no GUI)
 
 With CentOS7 and systemd: is it still managed with
 /etc/sysconfig/network-scripts/ifcfg-* ?
 
 For the mount component, I found that systemd kind of sources
 /etc/fstab and converts it to something for it (so, no worry about
 fstab), but how about networking?

Networking isn't really controlled by systemd but by NetworkManager. I
usually just yum remove NetworkManager* and then everything works just
as it did in CentOS 6.

Regards,
  Dennis



Re: [CentOS] Centos 5 tls v1.2, v1.1

2015-04-17 Thread Dennis Jacobfeuerborn
The cheapest sollution is probably compiling a private openssl somewhere
on the system and then compiling apache using that private openssl
version instead of the default system-wide one.

Regards,
  Dennis

On 17.04.2015 13:20, Eero Volotinen wrote:
 Yep, maybe using ssl offloading devices like (BigIP) that receives tls1.2
 and tlsv1.2 and then re-encrypts traffic with tls1.0 might be cheapest
 solution.
 
 --
 Eero
 
 2015-04-17 14:15 GMT+03:00 Johnny Hughes joh...@centos.org:
 
 On 04/16/2015 05:00 PM, Eero Volotinen wrote:
 in fact: modgnutls provides easy way to get tlsv1.2 to rhel 5

 --
 Eero


 If you do that, then you are at the mercy of Mr. Bergmann to provide
 updates for all security issues for openssl.  Has he updated his RPMs
 since 2014-11-19 23:57:58?  Does his patch work on the latest
 RHEL/CentOS EL5 openssl-0.9.8 package?

 The answer right now for him providing newer packages is, I have no
 idea.  His repo
 (
 http://www.tuxad.de/blog/archives/2014/12/07/yum_repository_for_rhel__centos_5/index.html
 )
 does not seem to be available:
 
 Attempted reposync:

 Error setting up repositories: failure: repodata/repomd.xml from tuxad:
 [Errno 256] No more mirrors to try.
 http://www.tuxad.com/repo/5/x86_64/tuxad/repodata/repomd.xml: [Errno 14]
 HTTP Error 404 - Not Found
 

 Red Hat chose not to turn on those cyphers in RHEL-5 (the ones in his
 patches) .. doing so is not at all certified as safe, nor has it been
 tested by anyone that I can see (other than in that blog entry).  It
 might be fine .. it might not be.

 People can make any choice that they want, but I would be looking to
 upgrade to at least CentOS-6 at this point if I wanted newer TLS support
 and not depending on one person to provide packages (or patches) of this
 importance for all my EL5 machines.  But, that is just me.

 Please note, I have no idea who Mr. Bergmann is and I am not in any way
 being negative about those packages and patches .. they are extremely
 nice and seem to work.  However, I can not see the rest of his repo
 right now and I would not trust MY production machines to a one person
 operation with something as important as openssl.

 Thanks,
 Johnny Hughes



 2015-04-16 21:02 GMT+03:00 Eero Volotinen eero.voloti...@iki.fi:

 well. this hack solution might work:

 http://www.tuxad.de/blog/archives/2014/11/19/openssl_updatesenhancements_for_rhel__centos_5/index.html

 --
 Eero

 2015-04-16 17:30 GMT+03:00 Leon Fauster leonfaus...@googlemail.com:

 Am 16.04.2015 um 11:46 schrieb Leon Fauster 
 leonfaus...@googlemail.com:
 Am 16.04.2015 um 11:43 schrieb Eero Volotinen eero.voloti...@iki.fi
 :
 Is there any nice way to get tlsv1.2 support to centos 5?
 upgrading os to 6 is not option available.


 Unfortunately not.


 https://bugzilla.redhat.com/show_bug.cgi?id=1066914

 --
 LF





 



Re: [CentOS] Update to 1503 release problem

2015-04-16 Thread Dennis Jacobfeuerborn
On 16.04.2015 12:51, Alessandro Baggi wrote:
 From freedesktop.org:
 
 
 Q: I want to change a service file, but rpm keeps overwriting it in
 /usr/lib/systemd/system all the time, how should I handle this?
 
 A: The recommended way is to copy the service file from
 /usr/lib/systemd/system to /etc/systemd/system and edit it there. The
 latter directory takes precedence over the former, and rpm will never
 overwrite it. If you want to use the distributed service file again you
 can simply delete (or rename) the service file in /etc/systemd/system
 again.
 
 
 This is the way?

Yes. The files under /usr/lib/systemd/system are defaults that systemd
only uses if a file of the same name does not exist in /etc/systemd/system.
The default files themselves should never be edited.
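The lookup order can be demonstrated with a small shell sketch (the
directories and the unit name here are stand-ins, not the real systemd
paths):

```shell
# Toy simulation of systemd's unit lookup order using scratch directories:
# demo/etc stands in for /etc/systemd/system,
# demo/usrlib for /usr/lib/systemd/system.
mkdir -p demo/etc demo/usrlib
echo 'packaged default' > demo/usrlib/foo.service
echo 'local override'   > demo/etc/foo.service

# systemd searches the /etc directory first and stops at the first match:
for d in demo/etc demo/usrlib; do
    if [ -f "$d/foo.service" ]; then
        cat "$d/foo.service"   # prints: local override
        break
    fi
done
```

With the real directories the workflow is: copy the unit from
/usr/lib/systemd/system to /etc/systemd/system, edit the copy, then run
systemctl daemon-reload so systemd picks up the change.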

Regards,
  Dennis


Re: [CentOS] systemd private tmp dirs

2015-04-16 Thread Dennis Jacobfeuerborn
On 16.04.2015 04:15, Les Mikesell wrote:
 On Wed, Apr 15, 2015 at 9:00 PM, John R Pierce pie...@hogranch.com wrote:
 On 4/15/2015 6:52 PM, Les Mikesell wrote:

 Mostly I'm interested in avoiding surprises and having code that isn't
 married to the weirdness of any particular version of any particular
 distribution.  And I found this to be pretty surprising, given that I
 could see the file in /tmp and could read the code that was looking
 there.   So, from the point of view of writing portable code, how
 should something handle this to run on any unix-like system?


 you sure this had nothing to do with selinux not letting perl running as the
 http user write there?

 
 No, systemd actually remaps /tmp from apache - and apparently most
 other daemons - to private directories  below /tmp with configs as
 shipped.  The command line tool wrote the file to /tmp as expected.
 The perl code running under httpd reading what it thought was /tmp was
 actually looking under /tmp/systemd-private-something.  I'm beginning
 to see why so much of EPEL isn't included in epel7 yet.

The issue here really isn't systemd or the PrivateTmp feature but the
fact that some applications don't properly distinguish between temporary
files and data files.
Temporary files are files the application generates temporarily for
internal processing and that are not to be touched by anybody else.
If as in the twiki backup case the files generated are to be used by
somebody else after twiki is done generating them then these are regular
data files and not temporary files.
The application should have a configuration option to set its data
directory and it should default to /var/lib/application-name.
In cases where this option is not available and the application abuses
the tmp directory as a data directory there is probably no other option
than to set PrivateTmp=false in the service file.
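On systemd versions that support drop-in snippets (an assumption for the
system at hand), the flag can also be overridden without copying the
whole unit, e.g. in a hypothetical
/etc/systemd/system/httpd.service.d/tmp.conf:

```
[Service]
PrivateTmp=false
```

followed by systemctl daemon-reload and a restart of the service.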

Regards,
  Dennis


Re: [CentOS] Modifying files of NFS

2015-04-16 Thread Dennis Jacobfeuerborn
On 16.04.2015 02:12, Steven Tardy wrote:
 
 I have an NFS storage system and want to run jpegoptim on several GB's
 of jpeg images and I'm wondering what the best approach is.
 Is it ok to run this operation on the Server itself while the clients
 have it mounted or will this lead to problems like e.g. the dreaded
 stale filehandle?
 
 Stale file handles won't happen if the file modified time stamp is updated. 
 Add a simple 'touch $file' after updating each file.

If I'm not mistaken, jpegoptim uses the usual approach here: it writes
the new version to a temporary file and then renames that to the
original filename, replacing/overwriting it. The modified time therefore
changes automatically, but so does, for example, the inode of the file,
since technically it is a new file.
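Assuming jpegoptim really does replace files this way (a guess on my
part), the pattern looks like this in shell (file names are made up):

```shell
# Write-to-temp-then-rename: the name stays the same, the inode changes.
printf 'original' > photo.jpg        # stand-in for the existing image
tmp=$(mktemp photo.XXXXXX)           # temporary file, i.e. a new inode
printf 'optimized' > "$tmp"          # stand-in for the optimized output
mv "$tmp" photo.jpg                  # atomic rename over the original
cat photo.jpg                        # prints: optimized
```

The rename is atomic within one filesystem, which is why readers never
see a half-written image; the temporary file must therefore be created
on the same filesystem as the target.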

The question really is whether NFS picks up on this and just keeps
serving the new file normally, or whether it gets confused because the
change was made in the underlying filesystem on the server and not
through NFS from one of the clients.
I've done this in the past and didn't see any errors but I was wondering
if this is intended to work by design or if I was just lucky in the past.

Regards,
  Dennis



[CentOS] Modifying files of NFS

2015-04-15 Thread Dennis Jacobfeuerborn
Hi,
I have an NFS storage system and want to run jpegoptim on several GB's
of jpeg images and I'm wondering what the best approach is.
Is it ok to run this operation on the Server itself while the clients
have it mounted or will this lead to problems like e.g. the dreaded
stale filehandle?

Regards,
  Dennis


Re: [CentOS] Update to 1503 release problem

2015-04-15 Thread Dennis Jacobfeuerborn
On 15.04.2015 12:41, Alessandro Baggi wrote:
 Hi there,
 Yesterday I've updated from 7 to 7.1 and today I've noticed on 2 server
 that postgresql systemd file was replaced with default values. This make
 postgres to no start and webserver give me problem. This problem was
 fixed and now all works good. It's normal that on major update I can get
 this problem? If so, I've ridden release change but I have not ridden
 about postgresql problem.
 
 Someone had the same issue?

What is the full path of the file that changed?

Regards,
  Dennis



Re: [CentOS] Explanation please?

2015-04-03 Thread Dennis Jacobfeuerborn
On 04.04.2015 02:32, James B. Byrne wrote:
 I am seeing log file entries like this:
 
 IN=eth0 OUT=eth1 SRC=109.74.193.253 DST=x.y.z.34 LEN=122 TOS=0x00
 PREC=0x00 TTL=48 ID=49692 PROTO=ICMP TYPE=3 CODE=3 [SRC=x.y.z.34
 DST=109.74.193.253 LEN=94 TOS=0x00 PREC=0x00 TTL=53 ID=41330 PROTO=UDP
 SPT=34679 DPT=53 LEN=74 ]
 
 This is found on our gateway host.  eth0 is the WAN i/f, eth1 is the
 LAN i/f.  Our netblock is x.y.z.0/24.  Can somebody tell me what this
 record is?
 
 

IN=eth0 OUT=eth1 SRC=109.74.193.253 DST=x.y.z.34 LEN=122 TOS=0x00
PREC=0x00 TTL=48 ID=49692 PROTO=ICMP TYPE=3 CODE=3

This is an ICMP destination-unreachable message, specifically port
unreachable (type 3, code 3), from host 109.74.193.253.

[SRC=x.y.z.34 DST=109.74.193.253 LEN=94 TOS=0x00 PREC=0x00 TTL=53
ID=41330 PROTO=UDP SPT=34679 DPT=53 LEN=74 ]

This is probably the cause of the above message: SRC=x.y.z.34 sent a
DNS query (UDP, destination port 53) to DST=109.74.193.253, but nothing
was listening on that port there, hence the ICMP port-unreachable reply.

Regards,
  Dennis


Re: [CentOS] KVM guest not running but cannot stop either

2015-03-17 Thread Dennis Jacobfeuerborn
On 17.03.2015 20:45, James B. Byrne wrote:
 These are the messages that I get when trying to attach a virtio disk
 to or perform a shutdown of the problem vm guest.
 
 
 Add hardware
 
 This device could not be attached to the running machine. Would you
 like to make the device available after the next guest shutdown?
 
 Requested operation is not valid: cannot do live update a device on
 inactive domain
 
 Traceback (most recent call last):
   File /usr/share/virt-manager/virtManager/addhardware.py, line
 1095, in add_device
 self.vm.attach_device(self._dev)
   File /usr/share/virt-manager/virtManager/domain.py, line 756, in
 attach_device
 self._backend.attachDevice(devxml)
   File /usr/lib64/python2.6/site-packages/libvirt.py, line 403, in
 attachDevice
 if ret == -1: raise libvirtError ('virDomainAttachDevice()
 failed', dom=self)
 libvirtError: Requested operation is not valid: cannot do live update
 a device on inactive domain
 
 
 Shutdown domain:
 
 Error shutting down domain: Requested operation is not valid: domain
 is not running
 
 Traceback (most recent call last):
   File /usr/share/virt-manager/virtManager/asyncjob.py, line 44, in
 cb_wrapper
 callback(asyncjob, *args, **kwargs)
   File /usr/share/virt-manager/virtManager/asyncjob.py, line 65, in
 tmpcb
 callback(*args, **kwargs)
   File /usr/share/virt-manager/virtManager/domain.py, line 1106, in
 shutdown
 self._backend.shutdown()
   File /usr/lib64/python2.6/site-packages/libvirt.py, line 1566, in
 shutdown
 if ret == -1: raise libvirtError ('virDomainShutdown() failed',
 dom=self)
 libvirtError: Requested operation is not valid: domain is not running
 
 
 
 They seem to be mutually exclusive and yet occur together nonetheless.
 

Have you tried restarting the libvirtd service?

Regards,
  Dennis



Re: [CentOS] Masquerading (packet forwarding) on CentOS 7

2015-02-19 Thread Dennis Jacobfeuerborn
On 19.02.2015 11:58, Niki Kovacs wrote:
 Hi,
 
 I just migrated my office's server from Slackware64 14.1 to CentOS 7. So
 far everything's running fine, I just have a few minor details to work out.
 
 I removed the firewalld package and replaced it by a simple Iptables
 script:
 
 
 --8
 #!/bin/sh
 #
 # firewall-lan.sh
 
 IPT=$(which iptables)
 MOD=$(which modprobe)
 SYS=$(which sysctl)
 SERVICE=$(which service)
 
 # Internet
 IFACE_INET=enp2s0
 
 # Réseau local
 IFACE_LAN=enp3s0
 IFACE_LAN_IP=192.168.2.0/24
 
 # Relais des paquets (yes/no)
 MASQ=yes
 
 # Tout accepter
 $IPT -t filter -P INPUT ACCEPT
 $IPT -t filter -P FORWARD ACCEPT
 $IPT -t filter -P OUTPUT ACCEPT
 $IPT -t nat -P PREROUTING ACCEPT
 $IPT -t nat -P POSTROUTING ACCEPT
 $IPT -t nat -P OUTPUT ACCEPT
 $IPT -t mangle -P PREROUTING ACCEPT
 $IPT -t mangle -P INPUT ACCEPT
 $IPT -t mangle -P FORWARD ACCEPT
 $IPT -t mangle -P OUTPUT ACCEPT
 $IPT -t mangle -P POSTROUTING ACCEPT
 
 # Remettre les compteurs à zéro
 $IPT -t filter -Z
 $IPT -t nat -Z
 $IPT -t mangle -Z
 
 # Supprimer toutes les règles actives et les chaînes personnalisées
 $IPT -t filter -F
 $IPT -t filter -X
 $IPT -t nat -F
 $IPT -t nat -X
 $IPT -t mangle -F
 $IPT -t mangle -X
 
 # Désactiver le relais des paquets
 $SYS -q -w net.ipv4.ip_forward=0
 
 # Politique par défaut
 $IPT -P INPUT DROP
 $IPT -P FORWARD ACCEPT
 $IPT -P OUTPUT ACCEPT
 
 # Faire confiance à nous-même
 $IPT -A INPUT -i lo -j ACCEPT
 
 # Ping
 $IPT -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
 $IPT -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
 $IPT -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
 
 # Connexions établies
 $IPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
 
 # SSH local
 $IPT -A INPUT -p tcp -i $IFACE_LAN --dport 22 -j ACCEPT
 
 # SSH limité en provenance de l'extérieur
 $IPT -A INPUT -p tcp -i $IFACE_INET --dport 22 -m state \
  --state NEW -m recent --set --name SSH
 $IPT -A INPUT -p tcp -i $IFACE_INET --dport 22 -m state \
  --state NEW -m recent --update --seconds 60 --hitcount 2 \
  --rttl --name SSH -j DROP
 $IPT -A INPUT -p tcp -i $IFACE_INET --dport 22 -j ACCEPT
 
 # DNS
 $IPT -A INPUT -p tcp -i $IFACE_LAN --dport 53 -j ACCEPT
 $IPT -A INPUT -p udp -i $IFACE_LAN --dport 53 -j ACCEPT
 
 # DHCP
 $IPT -A INPUT -p udp -i $IFACE_LAN --dport 67:68 -j ACCEPT
 
 # Activer le relais des paquets
 if [ $MASQ = 'yes' ]; then
  $IPT -t nat -A POSTROUTING -o $IFACE_INET -s $IFACE_LAN_IP \
-j MASQUERADE
  $SYS -q -w net.ipv4.ip_forward=1
 fi
 
 # Enregistrer les connexions refusées
 $IPT -A INPUT -j LOG --log-prefix +++ IPv4 packet rejected +++
 $IPT -A INPUT -j REJECT
 
 # Enregistrer la configuration
 $SERVICE iptables save
 --8
 
 As you can see, the script is also supposed to handle IP packet
 forwarding (masquerading).
 
 Once I run firewall-lan.sh manually, everything works as expected.
 
 When I restart the server, Iptables rules are still the same. The only
 thing that's not activated is IP forwarding. So as far as I can tell,
 iptables rules are stored, but packet forwarding returns to its pristine
 state (not activated).
 
 What would be an orthodox way of handling this? Put
 net.ipv4.ip_forward=1 in /etc/sysctl.conf? Something else?

Hi,
on CentOS 7 you probably want to take advantage of the ability to put
multiple config files in /etc/sysctl.d. For example this is what
/etc/sysctl.d/50-network.conf looks like on one of my routers:

# cat /etc/sysctl.d/50-network.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.eth1.promote_secondaries = 1

The other thing i would recommend is to replace the iptables script with
the iptables-service package. That package uses iptables-restore to load
the iptables rules from /etc/sysconfig/iptables on boot and you can use
iptables-save to store the iptables rules there when you make changes.

The advantage of using iptables-save/restore is that it's more robust.
When you have a typo in your script then you end up with a
half-initialized firewall but when you use iptables-restore it parses
the specified file into a new kernel structure and then simply flips a
pointer to make that the active firewall configuration and deletes the
old one. That means if there is a problem with parsing the file
iptables-restore simply never switches to the new config i.e. during the
whole process the active firewall never gets touched and is never in an
half-initialized state. It also means that the switch is atomic, i.e. the
complete old configuration stays active until the moment the pointer is
flipped, at which point the whole new configuration becomes active. The same
mechanism is also available for ipset via ipset save and ipset restore.
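For reference, /etc/sysconfig/iptables holds the rules in iptables-save
format; a minimal illustrative file (the rules are invented for this
example) looks like:

```
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

After changing rules live, iptables-save writes the current rules back
to this file, and iptables-restore (which the iptables service runs at
boot) loads them atomically as described above.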

Regards,
  Dennis


Re: [CentOS] CentOS 7: software RAID 5 array with 4 disks and no spares?

2015-02-19 Thread Dennis Jacobfeuerborn
On 19.02.2015 06:28, Chris Murphy wrote:
 On Wed, Feb 18, 2015 at 4:20 PM,  m.r...@5-cent.us wrote:
 Niki Kovacs wrote:
 Le 18/02/2015 23:12,
 
 close, but then, for mysterious reasons, Red Hat decided to cripple it
 into oblivion. Go figure.

 One word: desktop. That's what they want to conquer next.
 
 OK well there's a really long road to get to that pie in the sky. I
 don't see it happening because it seems there's no mandate to
 basically tell people what they can't have, instead it's well, we'll
 have a little of everything.
 
 Desktop OS that are the conquerers now? Their installers don't offer
 100's of layout choices. They offer 1-2, and they always work rock
 solid, no crashing, no user confusion, essentially zero bugs. The code
 is brain dead simple, and that results in stability.
 
 *shrug*
 
 Long road. Long long long. Tunnel. No light. The usability aspects are
 simply not taken seriously by the OS's as a whole. It's only taken
 seriously by DE's and they get loads of crap for every change they
 want to make. Until there's a willingness to look at 16 packages as a
 whole rather than 1 package at a time, desktop linux has no chance.
 The very basic aspects of how to partition, assemble, and boot and
 linux distro aren't even agreed upon. Fedora n+1 has problems
 installing after Fedora n. And it's practically a sport for each
 distro to step on an existing distros installer. This is
 technologically solved, just no one seems to care to actually
 implement something more polite.
 
 OS X? It partitions itself, formats a volume, sets the type code,
 writes some code into NVRAM, in order to make the reboot automatically
 boot the Windows installer from a USB stick. It goes out of it's way
 to invite the foreign OS.
 
 We can't even do that with the same distro, different version. It
 should be embarrassing but no one really cares enough to change it.
 It's thankless work in the realm of polish. But a huge amount of
 success for a desktop OS comes from polish.

I think the problem is that you simply have to draw a distinction
between technology and product.
The rise of the Linux desktop will never happen because Linux is not a
product but a technology and as a result has to be a jack of all trades.
The reason Apple is so successful I believe is because they understood
more than others that people don't care about technology but want one
specific, consistent experience. They don't care how the hard disk is
partitioned.
So I can see the rise of the X desktop, but only if X is willing to
have its own identity and eschew the desire to be compatible with
everything else or cater to both casual users and hard-core admin types.
In other words the X Desktop would have to be a very opinionated
product rather than a highly flexible technology.

 We also pretty much don't use any drives under 1TB. The upshot is we had
 custom scripts for  500GB, which made 4 partitions - /boot (1G, to fit
 with the preupgrade), swap (2G), / (497G - and we're considering
 downsizing that to 250G, or maybe 150G) and the rest in another partition
 for users' data and programs. The installer absolutely does *not* want to
 do what we want. We want swap - 2G - as the *second* partition. But if we
 use the installer, as soon as we create the third partition, of 497GB, for
 /, it immediately reorders them, so that / is second.
 
 I'm open to having my mind changed on this, but I'm not actually
 understanding why it needs to be in the 2nd slot, other than you want
 it there, which actually isn't a good enough reason. If there's a good
 reason for it to be in X slot always, for everyone, including
 anticipating future use, then that's a feature request and it ought to
 get fixed. But if it's a specific use case, well yeah you get to
 pre-partition and then install.
 

When I was younger I cared about where exactly each partition was
positioned, but nowadays I refer to all my file systems by UUID, so I
don't really care anymore whether / is the second or the fifth partition.
The same is true for network interfaces. These days I mostly deal with
physical interfaces on hypervisors, where I am more interested in the
bridges than in the NICs themselves, so I couldn't care less whether the
interface is named eth0 or enp2something. I tend to think more in terms
of logical resources these days rather than physical ones.
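For illustration, referring to a filesystem by UUID looks like this in
/etc/fstab (the UUID below is made up; blkid shows the real one for each
device):

```
UUID=3f1b2a6c-8d4e-4f0a-9c7b-1e2d3c4b5a69  /  xfs  defaults  0 0
```

Because the entry names the filesystem rather than a device path, it
keeps working even if the partition order or device name changes.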

Regards,
  Dennis


Re: [CentOS] CentOS 7: software RAID 5 array with 4 disks and no spares?

2015-02-19 Thread Dennis Jacobfeuerborn
On 19.02.2015 19:41, Chris Murphy wrote:
 On Thu, Feb 19, 2015 at 5:47 AM, Dennis Jacobfeuerborn
 denni...@conversis.de wrote:
 I think the problem is that you simply have to draw a distinction
 between technology and product.
 The rise of the Linux desktop will never happen because Linux is not a
 product but a technology and as a result has to be a jack of all trades.
 
 I'm unconvinced. True, Chromebooks uses the linux kernel, and thus it
 qualifies, sorta, as Linux desktop. But this is something analogous to
 OS X using a FOSS kernel and some other BSD stuff, but the bulk of it
 is proprietary. Maybe Chrome isn't quite that proprietary, but it's
 not free either. And Chrome OS definitely is not jack of all trades.
 What it can run is very narrow in scope right now.
 
 
 
 The reason Apple is so successful I believe is because they understood
 more than others that people don't care about technology but want one
 specific consistent experience. They don't core how the harddisk is
 partitioned.
 So I can see the rise of the X desktop but only if X is willing to
 have its own identity an eschew the desire to be compatible with
 everything else or cater to both casual users and hard-core admin types.
 In other words the X Desktop would have to be a very opinionated
 product rather than a highly flexible technology.
 
 Hmm, well Apple as a pretty good understanding what details are and
 aren't important to most people. That is, they discriminate. People do
 care about technologies like disk encryption, but they don't care
 about the details of how to enable or manage it. Hence we see both iOS
 and Android enable it by default now. Change the screen lock password,
 and it also changes the encryption unlock password *while removing*
 the previous password all in one step. On all conventional Linux
 distributions, this is beyond confusing and is totally sysadmin
 territory. I'd call it a bad experience.
 
 OK so that's mobile vs desktop, maybe not fair. However, OS X has one
 button click full disk encryption as opt in post-install (and opt out
 after). This is done with live conversion. The user can use the
 computer normally while conversion occurs, they can put the system to
 sleep, and even reboot it, and will resume conversion when the system
 comes back up. Decrypt conversion works the same way. They are poised
 to make full disk encryption a default behavior, without having
 changed the user experience at all, in the next major release of the
 software. I don't know whether they'll do it, but there are no
 technical or usability impediments.
 
 Linux distros experience on this front is terrible. Why? Linux OS's
 don't have a good live conversion implementation (some people have
 tried this and have hacks, but no distro has adopted this); but Ok the
 installer could just enable it by default, obviating conversion. But
 there's no one really looking at the big picture, looking at dozens of
 packages, how this affects them all from the installer password
 policy, to Gnome and KDE. You'd need the add user GUI tools to be able
 to change both user login and encryption passphrase passwords, to keep
 them in sync, and remove the old one. And currently LUKS has this 8
 slot limit, which is probably not a big problem, but might be a
 sufficient barrier in enough cases that this needs extending.

I'm not sure why you seem to disagree with what I wrote (unconvinced)
and then basically say what I was saying.

Linux with a thousand knobs is never going to become popular. Instead
somebody has to go and create an opinionated system where most knobs are
removed and replaced by sane/good/useful defaults, like Google did with
its Chromebooks.

Regards,
  Dennis



Re: [CentOS] Centos 7.0 and mismatched swap file

2015-02-15 Thread Dennis Jacobfeuerborn
On 15.02.2015 16:49, Gregory P. Ennis wrote:
 Everyone,
 
 I am putting together a new mail server for our firm using a SuperMicro
 with Centos 7.0.  When performed the install of the os, I put 16 gigs of
 memory in the wrong slots on the mother board which caused the
 SuperMicro to recognize 8 gigs instead of 16 gigs.  When I installed
 Centos 7.0, this error made the swap file 8070 megs instead of what I
 would have expected to be a over 16000 megs.
 
 I am using the default xfs file system on the other partitions.  Is
 there a way to expand the swap file?  If not, then is this problem
 sufficiently bad enough for me to start over with a new install.  I do
 not want to start over unless I need to.

8G of swap should be more than enough for a 16G system unless you plan
to severely over-commit memory.

Regards,
  Dennis


Re: [CentOS] C5 C6 : useradd

2015-01-24 Thread Dennis Jacobfeuerborn
On 25.01.2015 04:54, Always Learning wrote:
 
 On Sat, 2015-01-24 at 22:45 -0500, Stephen Harris wrote:
 
 On Sun, Jan 25, 2015 at 03:43:06AM +, Always Learning wrote:
 
 Should the 'correct' entry be:-

 fred:x:504:504:::/sbin/nologin  ?

 No; that's invalid.  There must be an entry in the home directory field.
 
 Thanks Stephen and Dennis for the helpful explanation.
 
 I will use:useradd -d /dev/null -s /sbin/nologin snowman

You can add the -M option too which should get rid of the warning
messages (though I have not tested this).

Regards,
  Dennis


Re: [CentOS] C5 C6 : useradd

2015-01-24 Thread Dennis Jacobfeuerborn
On 25.01.2015 04:30, Always Learning wrote:
 
 useradd --help
 
  -d, --home-dir HOME_DIR  home directory for the new user account
  -M, do not create user's home directory 
 yet
  useradd -M -s /sbin/nologin FRED
 
 produces in /etc/passwd
 
  fred:x:504:504::/home/fred:/sbin/nologin
 
 Trying again with
 
  useradd -d /dev/null -s /sbin/nologin doris
 
 gives a CLI message
 
  useradd: warning: the home directory already exists.
  Not copying any file from skel directory into it.
 
 and in /etc/password
 
  doris:x:505:505::/dev/null:/sbin/nologin
 
 QUESTION
 
 What is the 'official' method of creating a user with no home directory
 and no log-on ability ?

Your first invocation looks fine. What result did you expect to get?
Every user needs a home directory entry in /etc/passwd even if the
directory itself doesn't exist.

Regards,
  Dennis



Re: [CentOS] VLAN issue

2015-01-24 Thread Dennis Jacobfeuerborn
Hi Boris,
what I'd like to know is the actual VLAN configuration of the switch
port (link-type and tagged and untagged VLANs). When I look at the
switchport coniguration here I get (among other things):

...
 Port link-type: trunk
  Tagged   VLAN ID : 8, 1624
  Untagged VLAN ID : 10
...

Here is my suspicion:
Your ports have an access link-type with an untagged VLAN ID of 48. That
would explain why the moment you configure an IP from that VLAN on eth0
you get connectivity because then the packets the Linux box sends are
untagged as the switch would expect them to be. If you only put an
address on eth0.48 then the packets get tagged by Linux but if the
switch port is not configured to receive the packets for VLAN 48 as
tagged then it will simply drop these packets and you will not get
connectivity.

So getting the actual VLAN config of the switch port would help to
determine if the switch actually expects to receive the packets the way
you send them from the Linux box.
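For comparison, the tagged-VLAN setup on the Linux side usually looks
like this (VLAN ID 48 is taken from this thread, the host IP is made up)
in /etc/sysconfig/network-scripts/ifcfg-eth0.48:

```
DEVICE=eth0.48
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.48.10
NETMASK=255.255.255.0
```

This only works if the switch port actually sends VLAN 48 tagged toward
the host; on an access port carrying VLAN 48 untagged, the address
belongs on plain eth0 instead.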

Regards,
  Dennis

On 24.01.2015 13:35, Boris Epstein wrote:
 Do you need the whole configuration? On the switch end, we have the
 relevant VLAN (VLAN 48) with the assigned IP address of 192.168.48.101 and
 the range of ports (Gi1/0/1 - Gi1/0/8) assigned to that VLAN.
 
 Seems - and acts - like a legitimate setup and works fine, except for this
 particular instance.
 
 Thanks.
 
 Boris.
 
 On Fri, Jan 23, 2015 at 8:54 PM, Dennis Jacobfeuerborn 
 denni...@conversis.de wrote:
 
 We have lots of servers with a similar setup (i.e. tagged vlans and no
 ip on eth0) and this works just fine.

 What is the actual vlan configuration on your switchport?

 Regards,
   Dennis

 On 24.01.2015 01:34, Boris Epstein wrote:
 Steve,

 Thanks, makes sense.

 I just don't see why I have to effectively waste an extra IP address to
 get
 my connection established.

 Boris.


 On Fri, Jan 23, 2015 at 7:16 PM, Stephen Harris li...@spuddy.org
 wrote:

 On Fri, Jan 23, 2015 at 07:10:57PM -0500, Boris Epstein wrote:

 This makes two of us. I've done everything as you have described and it
 simply does not work.

 Are you actually seeing VLAN tagged traffic, or is the cisco switch
 just providing a normal stream?

 At work we have hundreds of VLANs, but the servers don't get configured
 for this; we just configure them as normal; ie eth0.  The network
 infrastructure does the VLAN decoding, the server doesn't have to.

 Try configuring the machine as if it was a real LAN and forget about
 the VLAN.

 If that doesn't work then what does 'tcpdump -i eth0' show you?

 --

 rgds
 Stephen




 



Re: [CentOS] VLAN issue

2015-01-23 Thread Dennis Jacobfeuerborn
We have lots of servers with a similar setup (i.e. tagged vlans and no
ip on eth0) and this works just fine.

What is the actual vlan configuration on your switchport?

Regards,
  Dennis

On 24.01.2015 01:34, Boris Epstein wrote:
 Steve,
 
 Thanks, makes sense.
 
 I just don't see why I have to effectively waste an extra IP address to get
 my connection established.
 
 Boris.
 
 
 On Fri, Jan 23, 2015 at 7:16 PM, Stephen Harris li...@spuddy.org wrote:
 
 On Fri, Jan 23, 2015 at 07:10:57PM -0500, Boris Epstein wrote:

 This makes two of us. I've done everything as you have described and it
 simply does not work.

 Are you actually seeing VLAN tagged traffic, or is the cisco switch
 just providing a normal stream?

 At work we have hundreds of VLANs, but the servers don't get configured
 for this; we just configure them as normal; ie eth0.  The network
 infrastructure does the VLAN decoding, the server doesn't have to.

 Try configuring the machine as if it was a real LAN and forget about
 the VLAN.

 If that doesn't work then what does 'tcpdump -i eth0' show you?

 --

 rgds
 Stephen

 



Re: [CentOS] restart after yum update (6.6)?

2015-01-16 Thread Dennis Jacobfeuerborn
Hi,
you don't *have* to reboot the server. If you don't, there are two
factors you need to consider:

1. The updated components are not all active without a reboot

The kernel, for example, will obviously not be running without a
reboot, and the same may be true for other components. For most
applications you should be fine if you just restart the application so
it can load the new libraries.
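To see which of your applications are affected, you can look for processes that still map a file marked deleted, i.e. a library an update replaced on disk. A minimal sketch (not from the original mail; run as root for a system-wide view):

```shell
# Report PID and name of processes still mapping a deleted file,
# i.e. a library that an update replaced on disk.
for m in /proc/[0-9]*/maps; do
    if grep -q '(deleted)' "$m" 2>/dev/null; then
        pid=${m#/proc/}; pid=${pid%/maps}
        printf '%s\t%s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)"
    fi
done
```

The needs-restarting tool from the yum-utils package does the same job more thoroughly.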

2. If you reboot later, issues after the reboot may become more
difficult to debug

If you reboot, say, six months after an update and the system doesn't
boot properly, you will most likely have forgotten about the update and
look for more recent causes. If you reboot immediately and the system
doesn't come back up, you'll know the update most likely has something
to do with it.

So if you don't reboot, the system should keep working normally, but
for the above reasons you might want to reboot anyway; if not right
away, then at least in the not too distant future.

Regards,
  Dennis

On 16.01.2015 14:25, Mateusz Guz wrote:
 Someone have updated it without my knowledge, now i have to make a choice: 
 -don’t reboot and wait for errors
 -reboot (which im trying to avoid)
 
 What about (g)libc package, anyone encountered similar situation ?
 
 
 -Original Message-
 From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf 
 Of Nathan Duehr
 Sent: Thursday, January 15, 2015 10:08 PM
 To: CentOS mailing list
 Subject: Re: [CentOS] restart after yum update (6.6)?
 
 
 
 On Jan 15, 2015, at 12:36, Mateusz Guz mateusz@itworks.pl wrote:
 
 according to this :

 http://unix.stackexchange.com/questions/28144/after-yum-update-is-it-a-good-idea-to-restart-the-server

 i should reboot my server after updating packages i.e: kernel, glibc, libc.
 Maybe it's a silly question, but Is it necessary if I don't use graphical 
 environment ? (and don't want to use the latest kernel yet)
 
 If you don’t want the kernel to update, just use —exclude=kernel* on yum or 
 whatever.  Why update it if you aren’t going to use it?
 
 Might as well be deliberate and know you’re purposefully skipping something.
 
 Nate
 



Re: [CentOS] NTP Vulnerability?

2014-12-19 Thread Dennis Jacobfeuerborn
On 20.12.2014 03:42, listmail wrote:
 I just saw this:
 
 https://ics-cert.us-cert.gov/advisories/ICSA-14-353-01
 
 which includes this:
  A remote attacker can send a carefully crafted packet that can overflow a
 stack buffer and potentially allow malicious code to be executed with the
 privilege level of the ntpd process. All NTP4 releases before 4.2.8 are
 vulnerable.
 
 This vulnerability is resolved with NTP-stable4.2.8 on December 19, 2014.
 
 I guess no one has had time to respond yet. Wonder if I should shut down my
 external NTP services as a precaution?

From the description in the Red Hat advisory and this link
http://www.kb.cert.org/vuls/id/852879 it seems the buffer overflow
issues can only be exploited with specific authentication settings that
are not part of the default configuration, or am I interpreting this wrong?

Regards,
  Dennis



[CentOS] CentOS 7 not installable using KVM-over-IP System

2014-11-19 Thread Dennis Jacobfeuerborn
Hi,
I just tried to install CentOS 7 using a Lantronix Spider KVM-over-IP
System and its virtual media feature and to my surprise this did not work.
The installation using the netinstall iso seems to work for a while (I
see some dracut boot messages) but when the first stage of the boot is
finished I get dropped into an emergency shell with the error message
that /dev/root does not exist.

I tried this on a Supermicro system and a Gen8 HP ProLiant server, both
with the same result.

Using CentOS 6 instead worked fine and I could install the Systems
without issues.

Any idea what could be going wrong? Given that the iso is passed through
as a USB storage device I'm not sure where the problem could be.

Regards,
   Dennis


Re: [CentOS] CentOS 7 - Firewall always allows outgoing packets?

2014-08-11 Thread Dennis Jacobfeuerborn
On 11.08.2014 15:43, Tom Bishop wrote:
 You and 4 other guys are moving things from Linux to FreeBSD.

 The rest of the world is moving things from UNIX and Windows to Linux.

 CentOS-7 rebuild RHEL sources and most all of the important Enterprise
 Linux things are moving to RHEL.

 RHEL runs the stock exchanges, the banks, etc.

 Free BSD is fine and people can use it if they like ... but if you want
 real Enterprise grade software, it needs to be RHEL based, that is just
 the way it is.

 Keep in mind that EL 7.0 is a 'dot zero release' and some of the
 features need work.  It works for the majority of use cases, but some
 features will need to be enhanced, and Red Hat will enhance it.  When
 they do, we will build the source code and it will be in CentOS.


 
 I hear you Johnny, I'm a big RH fan, but there is several things that
 they have shifted to in RHEL 7 that just chafes a little.
 
 I am dual hat guy, network and IS and when iptables with firewalld, at
 a minimum I would like the ability to be able to accomplish the same
 things I accomplished with iptables. I read about firewalld the pros
 and cons and I understand the shift and reason.
 
 But I do have heartburn when they call something a firewall and you
 cannot drop all the packets. It's not like they didn't know about it
 since I read about it in fedora and it's not clear if it will be
 addressed.  There are lots of use cases where I want to control all of
 the packets coming and going from a box, I see this becoming more so
 moving forward.
 
 Hopefully this will be addressed in a future release, trying to figure
 out where I can go to now and keep up to date with the latest
 firewalld info, just to stay clued in.

While I am also disappointed with firewalld, I think the whole
situation is not as terrible as people claim; after all, you can easily
go back to iptables as it was in CentOS 6:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html#sec-Using_iptables

It's strange that people threaten to move to FreeBSD simply because the
defaults are not to their liking; that's not exactly a rational way to
look at things.

Regards,
  Dennis



Re: [CentOS] when will docker 1.1.2 for rhel7 be released?

2014-08-11 Thread Dennis Jacobfeuerborn
On 11.08.2014 15:42, 彭勇 wrote:
 there are some bugs in docker-0.11.1-22.el7, when will latest version
 docker be relased for el7?

What bugs are you referring to? Since Red Hat backports patches, these
bugs may already be fixed even though the version number suggests
otherwise.

Regards,
  Dennis



Re: [CentOS] when will docker 1.1.2 for rhel7 be released?

2014-08-11 Thread Dennis Jacobfeuerborn
On 12.08.2014 01:51, 彭勇 wrote:
 *https://bugzilla.redhat.com/show_bug.cgi?id=1119042
 https://bugzilla.redhat.com/show_bug.cgi?id=1119042*
 *https://bugzilla.redhat.com/show_bug.cgi?id=1109039
 https://bugzilla.redhat.com/show_bug.cgi?id=1109039*
 *https://github.com/docker/docker/issues/6770
 https://github.com/docker/docker/issues/6770*
 *https://access.redhat.com/solutions/964923
 https://access.redhat.com/solutions/964923*
 
 
 On Tue, Aug 12, 2014 at 1:13 AM, Dennis Jacobfeuerborn 
 denni...@conversis.de wrote:
 
 On 11.08.2014 15:42, 彭勇 wrote:
 there are some bugs in docker-0.11.1-22.el7, when will latest version
 docker be relased for el7?

 What bugs are you referring to? Since Red Hat backports patches these
 bugs might already be fixed event though the version number might
 suggest otherwise.


Looks like docker-io-1.0.0 is available in EPEL:
http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/repoview/docker-io.html

If you really want to use the latest version of docker you cannot rely
on the RHEL packages, though, as they only receive important fixes and
usually only in point releases (unless it's a security fix).

Regards,
  Dennis


Re: [CentOS] Centos 7 - iptables service failed to start

2014-08-10 Thread Dennis Jacobfeuerborn
On 10.08.2014 05:30, Neil Aggarwal wrote:
 Hey everyone:
 
 The process /usr/local/bin/firewall.start could not be executed 
 and failed.
 
 I just realized I forgot to put #!/bin/sh at the top of my firewall
 scripts.  I added that and it is working perfectly fine now.
 
 Sorry for any trouble.

You might want to look into using the regular iptables service instead
of custom firewall scripts. The service uses iptables-save and
iptables-restore, which are designed to install all iptables rules
atomically.
A typo in a custom script leaves you with a partially initialized
firewall, but iptables-restore first parses the entire rule set and
doesn't touch the current rules at all if it finds an error, making the
process much more robust.
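As a sketch of that pattern (the ruleset below is only an example policy and the path is arbitrary), iptables-restore can parse-check a file with --test before committing it:

```shell
# Write an iptables-save style ruleset (example policy only).
cat > /tmp/ruleset.v4 <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
EOF

# --test parses the whole file without committing anything, so the
# live rules are only replaced if the ruleset is valid. Applying it
# needs root, hence the guard.
if [ "$(id -u)" = "0" ] && command -v iptables-restore >/dev/null 2>&1; then
    iptables-restore --test < /tmp/ruleset.v4 && iptables-restore < /tmp/ruleset.v4
fi
```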

Regards,
  Dennis



Re: [CentOS] Install php-imap using yum or any on CentOS 7

2014-07-30 Thread Dennis Jacobfeuerborn
On 30.07.2014 14:53, Giles Coochey wrote:
 On 30/07/2014 13:34, Vivek Patil wrote:
 [epel]
 name=Extra Packages for Enterprise Linux 7 - $basearch
 #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
 mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7arch=$basearch

 failovermethod=priority
 enabled=1
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

 [epel-debuginfo]
 name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
 #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
 mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7arch=$basearch

 failovermethod=priority
 enabled=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
 gpgcheck=1

 [epel-source]
 name=Extra Packages for Enterprise Linux 7 - $basearch - Source
 #baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
 mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7arch=$basearch

 failovermethod=priority
 enabled=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
 gpgcheck=1


 On 7/30/2014 5:46 PM, Reindl Harald wrote:
 [epel]
 name=Extra Packages for Enterprise Linux 7 - $basearch
 # baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
 mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7arch=$basearch

 failovermethod=priority
 enabled=1
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

 Your links are wrong, change them to
 whateveryourmirroris/pub/epel/beta/7/$basearch

Don't edit the repo files manually at all; instead install the release
package for the repo:

http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm

That way, if anything changes in the repo config, you get those changes
with the next update.

Regards,
  Dennis


Re: [CentOS] LVM - VG directory not being created

2014-07-25 Thread Dennis Jacobfeuerborn
On 25.07.2014 12:12, Kyle Thorne wrote:
 Hi all,
 
 I'm not sure if this is the right place to ask, but it's worth a shot.
 
 I have installed CentOS 6.5 on one of our servers, and have just installed
 SolusVM.
 
 I have also set up LVM, with a PV on /dev/sda4 (which is GPT formatted, and
 3.12TB is size).
 
 The problem I'm having is that when I create the VG, it will not show up
 under /dev/VG-name, which it's supposed to according to Red Hat guides on
 LVM.
 
 On a CentOS 5.9 server, with an almost identical set up, the VG showed up
 under /dev/ correctly, so I'm wondering if the VG folder is stored
 elsewhere in CentOS 6? I have tried the obvious solutions, such as a
 reboot, vgscan, pvscan, and even partprobe. I have even tried running
 'find' in search of the VG name, which returned no results.
 
 Any help is much appreciated. :)

Have you created any logical volumes yet? The directory will only be
created once you create the first volume in that volume group.

Regards,
  Dennis



[CentOS] CentOS7+kickstart+thinpool = error/exception

2014-07-22 Thread Dennis Jacobfeuerborn
Hi,
I'm trying to create a kickstart file that uses a thinly provisioned lvm
volume as root but I've run into trouble. I installed a System manually
using this option and this is the anaconda file produced:

part /boot --fstype=xfs --ondisk=vda --size=500
part pv.10 --fstype=lvmpv --ondisk=vda --size=7691
volgroup centos_centos7 --pesize=4096 pv.10

logvol   --fstype=None --grow --size=1232 --thinpool --name=pool00
--vgname=centos_centos7
logvol swap  --fstype=swap --size=819 --name=swap --vgname=centos_centos7
logvol /  --fstype=xfs --grow --maxsize=51200 --size=1024 --thin
--poolname=pool00 --name=root --vgname=centos_centos7

The problem is that when I use this as the basis for a kickstart,
Anaconda complains that I need to specify a mount point for the logvol
line defining the thinpool. So I looked at the kickstart documentation
here: http://fedoraproject.org/wiki/Anaconda/Kickstart#logvol
There it says --thinpool: Create a thin pool logical volume. (Use a
mountpoint of none), so I went ahead and specified a mount point of
none. The result is that Anaconda now crashes with an exception.

Does somebody know how I can install CentOS 7 using a thinpool? The
features of thinly provisioned LVM volumes make them highly desirable,
for database backups for example.

Regards,
  Dennis


Re: [CentOS] kickstart partition without home

2014-07-22 Thread Dennis Jacobfeuerborn
On 22.07.2014 23:56, Matthew Sweet wrote:
 I am trying to finish off a kickstart file for a computer lab on CentOS 6.5
 machines. I don't want to have a separate /home as I'm going to add an
 entry in fstab for it to nfs mount /home from a server.
 
 Is there a way to have it autopart the rest of the file system without
 /home? Wanting to keep autopart for size since not all hard drives across
 the labs are the same.

Don't use auto-partitioning at all; instead create a boot partition
with a fixed size, a swap partition with a fixed size, and lastly a root
partition with --size=1 --grow. That way the root partition will use the
rest of the available disk space.
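A minimal sketch of such a kickstart fragment (sizes in MiB and the filesystem type are illustrative assumptions, not from the original mail):

```
part /boot --fstype=ext4 --size=500
part swap --size=2048
# --size=1 --grow: start tiny, then grow to fill the remaining disk
part / --fstype=ext4 --size=1 --grow
```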

Regards,
  Dennis



Re: [CentOS] journalctl and log server

2014-07-11 Thread Dennis Jacobfeuerborn
On 11.07.2014 10:47, Mauricio Tavares wrote:
 On Fri, Jul 11, 2014 at 3:00 AM, James Hogarth james.hoga...@gmail.com 
 wrote:
 On 10 Jul 2014 23:26, Matthew Miller mat...@mattdm.org wrote:
 (In
 fact, you can even turn off persistent journald if you like.) Or, you can
 use 'imjournal' for more sophisticated integration if you like -- see
 http://www.rsyslog.com/doc/imjournal.html.

   Is it me who have not had coffee yet or that assumes you have to
 have rsyslog installed in the machine running systemd/journald? For
 the sake of this discussion, let's say that is not an option for
 whatever reason, so you must make journald talk to the rsyslog server.
 What would need to be done in both ends?

That's a bit like saying you must make MySQL talk to the Apache web
server. The journal has its own mechanism, systemd-journal-remote, but
that hasn't been included in CentOS 7 because it's fairly new.


 In fact in EL7 the default behaviour is no persistent journald since the
 logging is set to auto and there is no /var/log/journal ...

 The default behaviour is to have journald collect the logs and forward them
 all to rsyslog to then be stored on disk or filtered or forwarded just the
 same as in EL6 ...

 On a related note this does mean that if you want persistent journald
 logging you must remember to create that directory...
 
   Now, let's say we are trying to prove journald is superior to
 rsyslog, so we must not use rsyslog in this machine (only in the
 syslog server since it is up and has to deal with others)

In this scenario you would set up systemd-journal-remote on the server
in addition to rsyslog, so syslog clients can keep using the rsyslog
endpoint and journal clients can use the journal-remote one. On the
server you could then forward the data to the local rsyslog to have
everything in one place and format.

The whole remote logging story is still pretty dodgy right now, though,
so I would stick with rsyslog for now.

Regards,
  Dennis


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Dennis Jacobfeuerborn
On 08.07.2014 09:12, Ljubomir Ljubojevic wrote:
 On 07/08/2014 03:41 AM, Always Learning wrote:

 On Mon, 2014-07-07 at 21:34 -0400, Scott Robbins wrote:

 No systemd in FreeBSD.  It isn't Linux, and like any O/S, has its own
 oddities.  

 It would take more adjustment, IMHO, to go from CentOS 6.x to FreeBSD than
 to go to 7.x.  (I'm saying this as someone who uses both FreeBSD and
 Fedora which has given a hint of what we'll see in CentOS 7.)

 Thanks. I've deployed C 5.10 and C 6.5. Thought I'll play with C 7.

 I notice, from http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7, the
 apparent replacement of IPtables by firewalld

 https://fedoraproject.org/wiki/FirewallD


 
 Check Static_Firewall Chapter:
 https://fedoraproject.org/wiki/FirewallD#Static_Firewall_.28system-config-firewall.2Flokkit.29
 
 and one below it. You can have iptables rules and also rules from
 system-config-firewall
 

If you want to avoid firewalld for now you can uninstall it and instead
install the iptables-services package. This replaces the old init
scripts and provides an iptables systemd unit file that starts and
stops iptables, and if you need the old service iptables save command
you can reach it via /usr/libexec/iptables/iptables.init.

Also, if you want to keep NetworkManager on a server you can install
the NetworkManager-config-server package. It only contains a config
chunk with two settings:
no-auto-default=*
ignore-carrier=*

With this package installed you get more traditional network handling:
interfaces don't get shut down when the cable is pulled, unconfigured
interfaces are not configured automatically, and configuration files are
not reloaded automatically (the last one doesn't require the package and
is now the NetworkManager default behaviour).
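For completeness, that chunk is a small ini fragment along these lines (the [main] section is where NetworkManager.conf keys live; the exact file path is an assumption and may vary between releases):

```
# e.g. /etc/NetworkManager/conf.d/00-server.conf
[main]
no-auto-default=*
ignore-carrier=*
```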

Regards,
  Dennis



Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Dennis Jacobfeuerborn
On 08.07.2014 13:57, Scott Robbins wrote:
 On Mon, Jul 07, 2014 at 06:50:21PM -0700, Russell Miller wrote:

 On Jul 7, 2014, at 6:34 PM, Scott Robbins scot...@nyc.rr.com wrote:

 No systemd in FreeBSD.  It isn't Linux, and like any O/S, has its own
 oddities.  

 It would take more adjustment, IMHO, to go from CentOS 6.x to FreeBSD than
 to go to 7.x.  (I'm saying this as someone who uses both FreeBSD and
 Fedora which has given a hint of what we'll see in CentOS 7.)


 That's a good point.  Systemd may be the abomination of desolation that
 causes me to finally start moving to a BSD variant.  Or at least start 
 looking at one.
 
 Y'know, I was considered a troll when I said on Fedora forums that systemd
 going into server systems might start driving people away from RH to the
 BSDs.  (And to be honest, I was being trollish there, in a friendly way--in
 the same way at work I'll say something about Arch loudly enough for our
 Arch lover to hear.)  
 
 Now that it's insinuated itself in the RHEL system, I do wonder if it is
 going to start driving people away.  In many ways, IMHO, RH has become the
 Windows of Linux, with no serious competitors, at least here in the US.
 Sure, some companies use something else, but when I had to job hunt last
 year, 90-95 percent of the Linux admin jobs were for RedHat/CentOS/OEL/SL
 admins.

That presumes your conservative attitude is the majority opinion,
though. Systemd is one of the features I have been looking forward to in
CentOS 7 because of the new capabilities it provides. While it will
surely drive some people away, it will also attract others, and if you
think this will lead to some sort of great exodus, I think you are
mistaken. Not everybody is this uncomfortable with change.

Regards,
  Dennis


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Dennis Jacobfeuerborn
On 08.07.2014 14:35, David Both wrote:
 I still prefer IPTables, so in Fedora I simply disabled firewalld and enabled 
 IPTables. No need to uninstall. I have read that IPTables will continue to be 
 available alongside firewalld for the unspecified future.

Be careful with this, though. A while ago I tried it on a system that
also had libvirtd running and ran into the problem that libvirt detected
the presence of firewalld and tried to use it even though it was
disabled. It took a while to figure this out; once I actually
uninstalled firewalld and restarted libvirtd, it started to use
iptables. This may have been fixed by now, but keep it in mind when you
run into firewall trouble: some software may mistakenly assume that just
because firewalld is present it must also be in active use.

 Note that IPTables rule syntax and structure have evolved so your ruleset may 
 need to be updated. I did find that the current version of IPTables will 
 actually convert old rulesets on the fly, at least as far as the syntax of 
 the 
 individual rules is concerned. From there you can simply use iptables-save to 
 save the converted ruleset.
 
 One of the items on my tudo list is to learn firewalld. The switch from 
 ipchains 
 took a bit of learning and I expect this switch will as well.

There was a discussion a while ago on fedora-devel that the current
handling of firewalld and zones is not ideal and that there might be
changes in store for the future. This will probably not hit CentOS 7,
but you might want to keep an ear out in case some deeper structural
changes happen. It's always good to be ahead of the curve.

 One of the stated reasons for firewalld is that dynamic rule changes do not 
 clear the old rules before loading the new ones, to paraphrase, where 
 IPTables 
 does. If true, that would leave a very small amount of time in which the 
 host 
 would be vulnerable. I have no desire to peruse the source code to determine 
 the 
 veracity of that statement, so if there is someone here who could verify that 
 changing the rules in IPTables, whether using the iptables command or the 
 iptables-restore command, I would be very appreciative. No need to go to any 
 trouble to locate that answer as I am merely curious.

iptables-restore is atomic. It builds completely new tables and then
just tells the kernel to swap the old version for the new one.
Depending on the timing, packets are handled either by the complete old
rule set or by the complete new rule set. There is never a moment where
no rules are applied or only half of the new rules are inserted.

The problem firewalld tries to solve is that nowadays you often want to
insert temporary rules that should only be active while a certain
application is running. This collides a bit with the way iptables works.
For example, libvirt dynamically inserts specific rules when you define
networks for virtualization. If you now do an iptables-save these rules
get saved, and on the next boot, when the rules are restored, they exist
again; but then libvirt adds them dynamically a second time.

Firewalld is simply a framework built around iptables that allows
applications to register rules with additional information, such as
"this rule is static" or "this rule should only be active while
application X is running". Then there is of course the handling of
zones, a concept iptables by itself does not know about.
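A hedged sketch of that dynamic behaviour with firewall-cmd (the --timeout flag is available in newer firewalld releases; the demo is guarded so it is a no-op where firewalld isn't running, and adding the rule needs root):

```shell
# Only run the demo where firewalld is installed and active.
if command -v firewall-cmd >/dev/null 2>&1 && firewall-cmd --state >/dev/null 2>&1; then
    firewall-cmd --get-active-zones || true
    # A runtime-only rule that firewalld removes again after 60 seconds:
    # the kind of temporary, application-scoped rule plain iptables has
    # no concept of.
    firewall-cmd --zone=public --add-service=http --timeout=60 || true
    demo=ran
else
    demo=skipped
fi
echo "demo: $demo"
```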

Regards,
  Dennis




Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Dennis Jacobfeuerborn
On 08.07.2014 14:58, Adrian Sevcenco wrote:
 On 07/08/2014 04:22 AM, Always Learning wrote:

 On Mon, 2014-07-07 at 20:46 -0400, Robert Moskowitz wrote:

 On 07/07/2014 07:47 PM, Always Learning wrote:
 Reading about systemd, it seems it is not well liked and reminiscent of
 Microsoft's put everything into the Windows Registry (Win 95 onwards).

 Is there a practical alternative to omnipresent, or invasive, systemd ?

 So you are following the thread on the Fedora list?  I have been 
 ignoring it.

 No. I read some of
 http://www.phoronix.com/scan.php?page=news_topicq=systemd

 The systemd proponent, advocate and chief developer? wants to
 abolish /etc and /var in favour of having the /etc and /var data
 in /usr.
 err.. what? even on that wild fedora thread this did not come up!!!
 
 i will presume that you understood well your information source and you
 are actually know what you are referring to ... so, could you elaborate
 more about this?(with some references)
 i use systemd for some time (and i keep myslef informed about it) and i
 would need to know in time about this kind of change..

There are no plans to abolish /etc and /var.

The idea is that rather than, say, proftpd shipping a default config
file /etc/proftpd.conf that you then have to edit for your needs, it
will ship the default config somewhere in /usr and let the config in
/etc override the one in /usr. That way, if you want to factory reset
the system, you can basically clear out /etc and be back at the
defaults. The same applies to /var. /etc and /var become site-local
directories that only contain the configuration you actually changed
from the defaults on this system.

Since you already have experience with systemd you are familiar with
this scheme: systemd stores its unit files in /usr/lib/systemd, and if
you want to change one of them you copy it to /etc/systemd and edit it
there. Same principle.
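The lookup order behind that override scheme can be sketched like this (foo.service is a hypothetical unit; the first match wins):

```
/etc/systemd/system/foo.service      # local admin override
/run/systemd/system/foo.service      # runtime-generated units
/usr/lib/systemd/system/foo.service  # packaged default
```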

/etc and /var will stay as valid as ever though and are not being
abolished.

Regards,
  Dennis


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Dennis Jacobfeuerborn
On 08.07.2014 15:22, Steve Clark wrote:
 On 07/08/2014 08:09 AM, Dennis Jacobfeuerborn wrote:
 On 08.07.2014 13:57, Scott Robbins wrote:
 On Mon, Jul 07, 2014 at 06:50:21PM -0700, Russell Miller wrote:
 On Jul 7, 2014, at 6:34 PM, Scott Robbinsscot...@nyc.rr.com  wrote:
 No systemd in FreeBSD.  It isn't Linux, and like any O/S, has its own
 oddities.

 It would take more adjustment, IMHO, to go from CentOS 6.x to
 FreeBSD than
 to go to 7.x.  (I'm saying this as someone who uses both FreeBSD and
 Fedora which has given a hint of what we'll see in CentOS 7.)

 That's a good point.  Systemd may be the abomination of desolation
 that
 causes me to finally start moving to a BSD variant.  Or at least
 start looking at one.
 Y'know, I was considered a troll when I said on Fedora forums that
 systemd
 going into server systems might start driving people away from RH to the
 BSDs.  (And to be honest, I was being trollish there, in a friendly
 way--in
 the same way at work I'll say something about Arch loudly enough for our
 Arch lover to hear.)

 Now that it's insinuated itself in the RHEL system, I do wonder if it is
 going to start driving people away.  In many ways, IMHO, RH has
 become the
 Windows of Linux, with no serious competitors, at least here in the US.
 Sure, some companies use something else, but when I had to job hunt last
 year, 90-95 percent of the Linux admin jobs were for
 RedHat/CentOS/OEL/SL
 admins.
 That presumes that your conservative attitude is the majority opinion
 though. Systemd is one of the features that I have been looking forward
 to in CentOS 7 because of the new capabilities it provides so while this
 will surely drive some people away it will actually attract others and
 if you think that this will lead to some sort of great exodus then I
 think you are mistaken. Not everybody is this uncomfortable with change.

 Regards,
Dennis
 
 My concern it that it is a massive change with a large footprint. How
 secure is it really? It has arguably become
 the second kernel it touches and handles so many things.

I agree, but that is a change you actively have to opt into. CentOS 6
will receive updates for many years to come, so you don't have to
migrate everything over in a rush. Also, systemd is hardly new at this
point; it has been available for years and has had quite some time to
mature. Red Hat would not have made it the core of its enterprise OS if
it didn't think it would be very reliable.

 Maybe on desktops it makes sense - but I fail to see any positives for
 servers that once started run for months at a time
 between reboots.

The ability to jail services and restrict their resources is one big
plus for me. Also, the switch from messy bash scripts to declarative
configuration makes things easier once you get used to the syntax. Then
there is the fact that services are actually monitored and can be
restarted automatically if they fail or crash, and that they run in a
sane environment where stdout is redirected into the journal, so all
output is captured, which can be useful for debugging.
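Those features map to a handful of unit-file directives; a minimal sketch (mydaemon is a hypothetical binary, and the exact set of supported directives depends on the systemd version):

```
[Unit]
Description=Example supervised daemon

[Service]
ExecStart=/usr/local/bin/mydaemon
# supervise: restart automatically if the daemon crashes
Restart=on-failure
# "jail": give the service its own private /tmp
PrivateTmp=yes
# cgroup-based resource restriction
MemoryLimit=512M
# stdout/stderr end up in the journal
StandardOutput=journal

[Install]
WantedBy=multi-user.target
```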

It's certainly a change one needs to get used to, but as mentioned
above I don't think it's a bad change, and you don't have to jump to it
immediately if you don't want to.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Dennis Jacobfeuerborn
On 08.07.2014 15:53, Ned Slider wrote:
 On 08/07/14 14:14, Dennis Jacobfeuerborn wrote:
 On 08.07.2014 14:58, Adrian Sevcenco wrote:
 On 07/08/2014 04:22 AM, Always Learning wrote:

 On Mon, 2014-07-07 at 20:46 -0400, Robert Moskowitz wrote:

 On 07/07/2014 07:47 PM, Always Learning wrote:
 Reading about systemd, it seems it is not well liked and reminiscent of
 Microsoft's put everything into the Windows Registry (Win 95 onwards).

 Is there a practical alternative to omnipresent, or invasive, systemd ?

 So you are following the thread on the Fedora list?  I have been 
 ignoring it.

 No. I read some of
 http://www.phoronix.com/scan.php?page=news_topic&q=systemd

 The systemd proponent, advocate and chief developer? wants to
 abolish /etc and /var in favour of having the /etc and /var data
 in /usr.
 err.. what? even on that wild fedora thread this did not come up!!!

 i will presume that you understood well your information source and you
 are actually know what you are referring to ... so, could you elaborate
 more about this?(with some references)
 i use systemd for some time (and i keep myself informed about it) and i
 would need to know in time about this kind of change..

 There are no plans to abolish /etc and /var.

 The idea is that rather than, say, proftpd shipping a default config file
 /etc/proftpd.conf that you then have to edit for your needs, instead it
 will ship the default config somewhere in /usr and let the config in
 /etc override the one in /usr. That way if you want to factory-reset
 the system you can basically clear out /etc and you are back to the
 defaults. The same applies to /var.
 The idea is that /etc and /var become site-local directories that only
 contain the config you actually changed from the defaults for this system.

 Since you already have experience with systemd you are already familiar
 with this system where it stores its unit files in /usr/lib/systemd and
 if you want to change some of them you copy them to /etc/systemd and
 change them there. Same principle.

 /etc and /var will stay as valid as ever though and are not being
 abolished.

 
 That's not always true.
 
 Some configs that were under /etc on el6 must now reside under /usr on el7.
 
 Take modprobe blacklists for example.
 
 On el5 and el6 they are in /etc/modprobe.d/
 
 On el7 they need to be in /usr/lib/modprobe.d/
 
 If you install modprobe blacklists to the old location under el7 they
 will not work.
 
 I'm sure there are other examples, this is just one example I've
 happened to run into.

You might want to report this as a bug. The modprobe and modprobe.d man
pages explicitly reference /etc/modprobe.d/*.conf for the configuration.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Large file system idea

2014-05-17 Thread Dennis Jacobfeuerborn
On 17.05.2014 19:00, Steve Thompson wrote:
 On Sat, 17 May 2014, SilverTip257 wrote:
 
 Sounds like you might be reinventing the wheel.
 
 I think not; see below.
 
 DRBD [0] does what it sounds like you're trying to accomplish [1].
 Especially since you have two nodes A+B or C+D that are RAIDed over iSCSI.
 It's rather painless to set up two-nodes with DRBD.
 
 I am familiar with DRBD, having used it for a number of years. However, I 
 don't think this does what I am describing. With a conventional two-node 
 DRBD setup, the drbd block device appears on both storage nodes, one of 
 which is primary. In this case, writes to the block device are done from 
 the client to the primary, and the storage I/O is done locally on the 
 primary and is forwarded across the network by the primary to the 
 secondary.
 
 What I am describing in my experiment is a setup in which the block device 
 (/dev/mdXXX) appears on neither of the storage nodes, but on a third node. 
 Writes to the block device are done from the client to the third node and 
 are forwarded over the network to both storage servers. The whole setup 
 can be done with only packages from the base repo.
 
 I don't see how this can be accomplished with DRBD, unless the DRBD 
 two-node setup then iscsi-exports the block device to the third node. With 
 provision for failover, this is surely a great deal more complex than the 
 setup that I have described.
 
 If DRBD had the ability for the drbd block device to appear on a third 
 node (one that *does not have any storage*), then it would perhaps be 
 different.

Why specifically do you care about that? With both your solution and the
DRBD one the clients only see an NFS endpoint, so what does it matter
that this endpoint is placed on one of the storage systems?
Also, while streaming performance with your solution may be OK, latency
is going to be fairly terrible due to the round-trips and synchronicity
required, so this may be a nice setup for e.g. a backup storage system
but is not really suited as a more general-purpose solution.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] EFI and RAID questions

2014-05-10 Thread Dennis Jacobfeuerborn
On 10.05.2014 18:36, CS_DBA wrote:
 Hi All;
 
 I have a new server we're setting up that supports EFI or Legacy in the bios
 
 I am a solid database guy but my SA skills are limited to what I need to 
 get by
 
 1) I used EFI because I wanted to create a raid 10 array with 6 4TB 
 drives and apparently I cannot setup gpt partitions via parted in legacy 
 mode (at least that's what I've read - is this true?)

When you say legacy mode do you mean BIOS or the CSM (Compatibility
Support Module) of the UEFI firmware?

A BIOS cannot boot from a GPT-partitioned disk, but the CSM mode of the
UEFI firmware should be able to. You really want to go with plain UEFI,
though, if your system supports it.

 2) I installed the OS on 2 500GB drives, I used to do all my installs 
 with software RAID (mirrored) without LVM as follows:
 - create 2 raid partitions (one on each drive)  for swap, /boot and /
 - create a raid1 device for each set of partitions above
 
 The installer would not let me proceed without a /boot/efi partition I 
 tried to create a raid partition on each drive for this and create a 
 /boot/efi raid disk but when I do it this way in the installer I no 
 longer see the EFI SYSTEM Partition as an option for the filesystem 
 type so this did not work either.
 
 I ended up doing hardware raid for the OS drives and software raid for 
 the 6 4TB data drives. It works but I prefer to do software raid for 
 everything so we ca have standard methods of monitoring for bad drives.
 
 Is there a way to setup software raid with EFI?

No. The UEFI firmware needs access to this partition before it can boot
the OS, so anything that requires the OS to be running (like software
raid) cannot work.
What you can do is create the partition on both disks, point the
installer to only the first disk and then later copy the files over to
the partition on the other disk so that if the first disk dies you can
still boot using the second one.
The partition can be tiny (just a couple of megabytes), should be the
first partition on the disk (though I think this is not strictly
necessary), should be formatted as FAT32, and should be given a type
GUID of C12A7328-F81F-11D2-BA4B-00A0C93EC93B which means EFI System
partition so that the UEFI firmware can find it.

You *should* be able to create the other partitions (including /boot) as
software raid though I've not done this myself yet so I'm not 100%
certain on that.
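A sketch of that layout from a shell, with placeholder device names; the functions are only defined, not run, since the commands are destructive. On a GPT disk, parted's "boot" flag is what sets the EFI System partition type GUID mentioned above:

```shell
# Create a small EFI System Partition at the start of a disk (destructive!).
setup_esp() {
    disk=$1                                         # e.g. /dev/sda (placeholder)
    parted -s "$disk" mklabel gpt
    parted -s "$disk" mkpart ESP fat32 1MiB 201MiB  # ~200MB is more than enough
    parted -s "$disk" set 1 boot on                 # marks the ESP on a GPT disk
    mkfs.vfat -F 32 "${disk}1"
}

# Copy the installed EFI files to the second disk's ESP so the machine
# stays bootable if the first disk dies.
copy_esp() {
    mount "${1}1" /mnt                              # e.g. copy_esp /dev/sdb
    cp -a /boot/efi/EFI /mnt/
    umount /mnt
}
```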

 Do I need to add a /boot/efi partition only to one of the 2 OS drives?
 If so how do I recover if we loose the drive with the /boot/efi partition?
 
 Is it required to use LVM to do this?
 
 Thanks in advance
 
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos
 

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] EFI and RAID questions

2014-05-10 Thread Dennis Jacobfeuerborn
On 10.05.2014 19:17, Dennis Jacobfeuerborn wrote:
 On 10.05.2014 18:36, CS_DBA wrote:
 Hi All;

 I have a new server we're setting up that supports EFI or Legacy in the bios

 I am a solid database guy but my SA skills are limited to what I need to 
 get by

 1) I used EFI because I wanted to create a raid 10 array with 6 4TB 
 drives and apparently I cannot setup gpt partitions via parted in legacy 
 mode (at least that's what I've read - is this true?)
 
 When you say legacy mode do you mean BIOS or the CSM (Compatibility
 Support Module) of the UEFI firmware?
 
 BIOS cannot boot from GPT partition but the CSM mode of the UEFI
 firmware should be able to. You really want to go with plain UEFI though
 if your system supports it.
 
 2) I installed the OS on 2 500GB drives, I used to do all my installs 
 with software RAID (mirrored) without LVM as follows:
 - create 2 raid partitions (one on each drive)  for swap, /boot and /
 - create a raid1 device for each set of partitions above

 The installer would not let me proceed without a /boot/efi partition I 
 tried to create a raid partition on each drive for this and create a 
 /boot/efi raid disk but when I doit this way in the installer I no 
 longer see the EFI SYSTEM Partition as an option for the filesystem 
 type so this did not work either.

 I ended up doing hardware raid for the OS drives and software raid for 
 the 6 4TB data drives. It works but I prefer to do software raid for 
 everything so we ca have standard methods of monitoring for bad drives.

 Is there a way to setup software raid with EFI?
 
 No. The UEFI firmware needs access to this partition before it can boot
 the OS, so anything that requires the OS to be running (like software
 raid) cannot work.
 What you can do is create the partition on both disks, point the
 installer to only the first disk and then later copy the files over to
 the partition on the other disk so that if the first disk dies you can
 still boot using the second one.
 The partition can be tiny (just a couple of megabytes), should be the
 first partition on the disk (though I think this is not strictly
 necessary), should be formatted as FAT32, and should be given a type
 GUID of C12A7328-F81F-11D2-BA4B-00A0C93EC93B which means EFI System
 partition so that the UEFI firmware can find it.
 
 You *should* be able to create the other partitions (including /boot) as
 software raid though I've not done this myself yet so I'm not 100%
 certain on that.

Just to give you an idea what a couple of megabytes means this is what
is stored on my EFI partition right now:

[root@nexus EFI]# du -csh /boot/efi/EFI/*
658K    /boot/efi/EFI/Boot
7,8M    /boot/efi/EFI/fedora
244K    /boot/efi/EFI/fedora15
18M     /boot/efi/EFI/Microsoft
247K    /boot/efi/EFI/redhat
27M     total

That's with four different OS installations so for a non-dual-boot
system something like 50-100MB should be plenty.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] how to replace a raid drive with mdadm

2014-05-10 Thread Dennis Jacobfeuerborn
On 10.05.2014 19:06, Keith Keller wrote:
 On 2014-05-10, CS_DBA cs_...@consistentstate.com wrote:

 If we loose a drive in a raid 10 array (mdadm software raid) what are 
 the steps needed to correctly do the following:
 - identify which physical drive it is
 
 This is controller dependent.  Some support blinking the drive light to
 identify it, others do not.  If yours does not you need to jury-rig
 something (e.g., either physically label the drive slot/drive, or send
 some dummy data to the drive to get it to blink).
 

This can also be inverted, especially if you cannot send data to the
drive anymore because it died completely: create lots of disk I/O with a
command like "grep -nri test /usr" and all drives except the broken one
should show activity.

Another way is to write down the serial numbers of the disks, the slots
you put the disks in and then use hdparm -I /dev/sdX to find which
device shows which serial number. That way once sdX dies you can check
the list to find which slot the disk for the failed device was put in.
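That bookkeeping can be scripted. A small sketch (the awk pattern matches the "Serial Number" line that `hdparm -I` prints; nothing here is specific to this thread):

```shell
# Extract the serial number from `hdparm -I` output read on stdin.
extract_serial() {
    awk -F': *' '/Serial Number/ {print $2; exit}'
}

# Print "device: serial" for every disk present; prints nothing when no
# /dev/sd* devices exist (e.g. in a container).
for dev in /dev/sd[a-z]; do
    [ -b "$dev" ] || continue
    echo "$dev: $(hdparm -I "$dev" 2>/dev/null | extract_serial)"
done
```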

Regards,
  Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Format big drives (4TB) in the installer?

2014-05-07 Thread Dennis Jacobfeuerborn
On 07.05.2014 17:31, CS_DBA wrote:
 
 On 05/07/2014 09:14 AM, Bob Marcan wrote:
 On Wed, 07 May 2014 08:41:23 -0600
 CS_DBA cs_...@consistentstate.com wrote:

 Hi all;

 I cross posted this to the fedora list since we use Fedora as a test bed
 from time to time, however given this is a production server we'll
 likely be running CentOS.

 we've just ordered a new server
 (http://www.spectrumservers.com/ssproducts/pc/viewPrd.asp?idcategory=26&idproduct=787)


 Originally I tried to simply upgrade an older server with more drive
 space, I installed six (6)  4TB drives and did a new CentOS 6.5 install
 but the OS would not allow me to configure more than 2TB per drive.

 Subsequent research leads me to conclude that if the bios supports UEFI
 and the installer boots as such then the installer should see 4TB drives
 without any issues.  I'm also assuming that any server I order today
 (i.e. a more modern server) should ship with UEFI support in the bios.

 Are my conclusions above per UEFI correct?


 Thanks in advance
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos
 Use parted and make GPT label.
 BR, Bob
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos
 Thanks for the advice, can someone point me to a good step by step how 
 to per setting up a RAID 10 volume per the parted & GPT tools?

Unless your server supports UEFI it will probably not boot from a GPT
partitioned disk. RAID controllers usually support splitting off a part
of the array as a boot disk. I recently did this with an old server with
a 3ware controller and 3TB disks. I created a RAID-10 and then in the
advanced settings I told it to use 50G as a boot disk. The result was
that I got a 50G /dev/sda which I could partition with a DOS label and
2.95T /dev/sdb which I let anaconda put a GPT label on.

Anyway if you are using a BIOS instead of UEFI you need to provide a
disk with a DOS partition label to boot from.

Regards,
  Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] BAD disk I/O performance

2014-05-04 Thread Dennis Jacobfeuerborn
On 04.05.2014 12:58, Luca Gervasi wrote:
 Hello,
 
 i'm trying to convert my physical web servers to a virtual guest. What i'm
 experiencing is a poor disk i/o, compared to the physical counterpart
 (having strace telling me that each write takes approximately 100 times the
 time needed on physical).
 
 Tested hardware is pretty good (HP Proliant 360p Gen8 with 2xSAS 15k rpm 48
 Gb Ram).
 
 The hypervisor part is a minimal Centos 6.5 with libvirt.
 The guest is configured using: VirtIO as disk bus, qcow2 storage format
 (thick allocation), cache mode: none (needed for for live migration - this
 could be changed if is the bottleneck), IO mode: default.
 
 Is someone willing to give me some adivices? :)

Have you tried using a raw image just for testing? I've seen some
pretty nasty performance degradation with qcow2, but unfortunately I was
never able to track down what exactly caused it. Switching to raw
images fixed the issue for me.

Regards,
  Dennis

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] Replace failed disk in raid

2014-03-05 Thread Dennis Jacobfeuerborn
On 05.03.2014 15:31, Nikos Gatsis - Qbit wrote:

 On 5/3/2014 3:59 μμ, Reindl Harald wrote:
 Am 05.03.2014 14:55, schrieb Nikos Gatsis - Qbit:
 A disk, part of a raid failed and I have to replace it.
 My problem is the swap partition which is in raid0. The rest partitions
 are in raid1 and I successfully removed them.
 The partition in swap cant removed because is probably active.
 How can I stop swap and remove partition?
 After replacing the faulty disk and rebuilt how I start swap again?
 man swapoff

 I have run swapoff and swap stop, but I cant still remove partition from
 raid0.
 Should I stop md also?
 Thank you.

You need to be more precise when describing what you are doing and what 
the result is.

I have run swapoff and swap stop

What were the exact commands and arguments you used and what was the 
output? Did you verify that the swap was actually disabled after running 
these commands?

cant still remove partition from raid0

What commands and arguments did you run to remove the drive? What was 
the result?

Regards,
   Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] gnutls bug

2014-03-05 Thread Dennis Jacobfeuerborn
On 05.03.2014 22:19, Michael Coffman wrote:
 I am running centos6.4.   Where do I find the updated gnutls packages?I
 see the updated source file here:
 http://vault.centos.org/6.5/updates/Source/SPackages/

 But I don't see the correct version of the packages in the 6.4 tree here:
 http://vault.centos.org/6.4/updates/x86_64/Packages/

 Where should I be looking for the updated package for 6.4?

There never will be any: 6.4 and 6.5 are not independent installations 
of the system, and you simply have to upgrade to 6.5 to get the fixes.

Regards,
   Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] transparent_huge_pages problem (again?)

2014-02-21 Thread Dennis Jacobfeuerborn
Hi,
I've been experiencing problems with 6.5 guests on a 6.4 host when 
running hadoop with transparent_huge_pages enabled. As soon as I disable 
that feature everything returns to normal.

I'm posting here because this issue cam up in the past:
http://bugs.centos.org/view.php?id=5716

That bug was closed as resolved in EL6.4, but now the problem seems to 
have returned.

Apparently there exists an old bug (presumably resolved) upstream here:
https://bugzilla.redhat.com/show_bug.cgi?id=805593

Since I don't have access to it I'm not sure what the current state is.

What are my options here? Should I reopen the CentOS bug above or should 
I file a new one upstream?

Regards,
   Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS-virt] disk io in guests causes soft lockups in guests and host processes

2014-02-20 Thread Dennis Jacobfeuerborn
Hi,
I have a strange phenomenon that I cannot readily explain, so I wonder if 
anyone here can shed some light on it.

The host system is a Dell r815 with 64 cores and 256G ram and has centos 
6 installed. The five guests are also running centos 6 and are running 
as a hadoop cluster. The problem is that I see disk I/O spikes in the 
VMs which then cause soft lockups in the guests, but I also see hanging 
processes on the host, as if the entire machine locks up for 30-60 seconds.

Now I know that having all cluster members running on the same system 
isn't efficient and that I cannot expect good performance, but what I was 
not expecting is that a guest can make host processes hang.
Does anyone have an idea what the issue could be here, or how I can find 
out what the cause of this behavior is?

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Problem with lvm disks assigned to kvm guests

2014-02-06 Thread Dennis Jacobfeuerborn
On 06.02.2014 11:45, C. L. Martinez wrote:
 Hi all,

   I have a strange problem when I use lvm disks to expose to virtual
 guests (host is CentOS 6.5 x86_64). If I remove a kvm guest and all
 lvm disks attached to it, and I create a new kvm with another lvm
 disks that use the same disk space previously assigned to the previous
 kvm guest, this new guest sees all partitions and data. Creating new
 lvm volumes with different names for this new kvm doesn't resolve the
 problem.

 Any idea why??

When you delete a volume the data isn't cleared; only the metadata is 
removed. So if you later create a new volume that ends up using the same 
area on disk, you will see the old data, as expected.
If you don't want this to happen then you need to overwrite the volume 
before you delete it.

This is a general issue in virtualization/clouds that you need to take 
into account for security reasons. See for example:
https://github.com/fog/fog/issues/2525

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Problem with lvm disks assigned to kvm guests

2014-02-06 Thread Dennis Jacobfeuerborn
On 06.02.2014 12:05, C. L. Martinez wrote:
 On Thu, Feb 6, 2014 at 11:01 AM, Dennis Jacobfeuerborn
 denni...@conversis.de wrote:
 On 06.02.2014 11:45, C. L. Martinez wrote:
 Hi all,

I have a strange problem when I use lvm disks to expose to virtual
 guests (host is CentOS 6.5 x86_64). If I remove a kvm guest and all
 lvm disks attached to it, and I create a new kvm with another lvm
 disks that use the same disk space previously assigned to the previous
 kvm guest, this new guest sees all partitions and data. Creating new
 lvm volumes with different names to this new kvm doesn't resolves the
 problem.

 Any idea why??

 When you delete a volume the data isn't cleared only the metadata
 removed so if you later create a new volume that ends up using the same
 area on disk then you will see the old data as expected.
 If you don't want this to happen then you need to overwrite the volume
 before you delete it.

 This is a general issue in virtualization/clouds that you need to take
 into account for security reasons. See for example:
 https://github.com/fog/fog/issues/2525

 Regards,
 Dennis


 Many thanks Dennis ... Then if I do:

 dd if=/dev/zero of=/dev/sdc1 bs=1M (it is a 1TiB disk), will erase all
 data and partitions created by the kvm guest??

That should work, although if you want to be really safe you should 
probably use /dev/urandom instead of /dev/zero, as random data is a 
better way to deal with the problem of data remanence:

http://en.wikipedia.org/wiki/Data_remanence#Overwriting
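As a sketch, with a placeholder volume path (the function is only defined here, never invoked, because it irreversibly destroys the volume's contents):

```shell
# Overwrite a logical volume with random data, then remove it.
wipe_and_remove_lv() {
    lv=$1                               # e.g. /dev/vg0/guest-disk (placeholder)
    dd if=/dev/urandom of="$lv" bs=1M   # slower than /dev/zero, but better
                                        # against data remanence
    lvremove -f "$lv"
}
```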

Regards,
   Dennis

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] HA cluster - strange communication between nodes

2014-01-15 Thread Dennis Jacobfeuerborn
On 16.01.2014 00:29, Leon Fauster wrote:
 Am 15.01.2014 um 11:56 schrieb Martin Moravcik cen...@datalock.sk:

 Thanks for your interest and for your help.
 Here is the output from command (pcs config show)

 [root@lb1 ~]# pcs config show
 Cluster Name: LB.STK
 Corosync Nodes:

 Pacemaker Nodes:
   lb1.asol.local lb2.asol.local

 Resources:
   Group: LB
Resource: LAN.VIP (class=ocf provider=heartbeat type=IPaddr2)
 Attributes: ip=172.16.139.113 cidr_netmask=24 nic=eth1
 Operations: monitor interval=15s (LAN.VIP-monitor-interval-15s)
Resource: WAN.VIP (class=ocf provider=heartbeat type=IPaddr2)
 Attributes: ip=172.16.139.110 cidr_netmask=24 nic=eth0
 Operations: monitor interval=15s (WAN.VIP-monitor-interval-15s)
Resource: OPENVPN (class=lsb type=openvpn)
 Operations: monitor interval=20s (OPENVPN-monitor-interval-20s)
 start interval=0s timeout=20s (OPENVPN-start-timeout-20s)
 stop interval=0s timeout=20s (OPENVPN-stop-timeout-20s)

 Stonith Devices:
 Fencing Levels:

 Location Constraints:
 Ordering Constraints:
 Colocation Constraints:

 Cluster Properties:
   cluster-infrastructure: cman
   dc-version: 1.1.10-14.el6_5.1-368c726
   stonith-enabled: false


 When I start cluster after reboot of both nodes, everythings looks fine.
 But when shoot command pcs resource delete OPENVPN from node lb1 in
 the log starts to popup these lines:
 Jan 15 13:56:37 corosync [TOTEM ] Retransmit List: 202
 Jan 15 13:57:08 corosync [TOTEM ] Retransmit List: 202 203
 Jan 15 13:57:38 corosync [TOTEM ] Retransmit List: 202 203 204
 Jan 15 13:58:08 corosync [TOTEM ] Retransmit List: 202 203 204 206
 Jan 15 13:58:38 corosync [TOTEM ] Retransmit List: 202 203 204 206 208
 Jan 15 13:59:08 corosync [TOTEM ] Retransmit List: 202 203 204 206 208 209

 I also noticed, that these retransmit entries starts to appear even
 after some time (7 minutes) from fresh cluster start without doing any
 change or manipulation with cluster.


 there exist multicast issues on virtual nodes - therefore your bridged
 network will for sure not operate reliably out of the box for HA setups.

 try

 echo 1 > /sys/class/net/YOURDEVICE/bridge/multicast_querier

For a two-node cluster, using unicast is probably the easier and less 
error-prone way.
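For the cman-based stack shown in this thread, switching to unicast is, to my understanding, a single attribute on the cman element in /etc/cluster/cluster.conf (a fragment sketch only, not a complete config):

```shell
# Fragment of /etc/cluster/cluster.conf -- tell corosync to use UDP unicast:
#   <cman transport="udpu"/>
# Restart the cluster stack on both nodes afterwards.
```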

Regards,
   Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LogScape as a Splunk alternative?

2014-01-13 Thread Dennis Jacobfeuerborn
On 13.01.2014 17:06, zGreenfelder wrote:

 I searched for a Splunk alternative and found LogScape. Have anyone worked
 with it?
 There is no documentation available only some very brief installation
 instructions and there is almost no information in google about successful
 deployments in linux environments. From my current perspective it is a
 quite small and not widely used product, am I right?
 Also videos about search capabilities show that in comparison with Splunk
 it gives rather limited search functionality.
 Overall what do you think about LogScape?


 I have not, but I found this link not so long ago:
 http://docs.fluentd.org/articles/free-alternative-to-splunk-by-fluentd
 and had thoughts about trying it out.  not sure how commited you are
 to logscrape.


There is also logstash:
http://logstash.net/

Regards,
   Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM thinpool snapshots broken in 6.5?

2014-01-11 Thread Dennis Jacobfeuerborn
On 09.01.2014 23:54, Dennis Jacobfeuerborn wrote:
 Hi,
 I just installed a CentOS 6.5 System with the intention of using thinly
 provisioned snapshots. I created the volume group, a thinpool and then a
 logical volume. All of that works fine but when I create a snapshot
 mysnap then the snapshot volume gets displayed in the lvs output
 with the correct information but apparently no device nodes are created
 under /dev/mapper/ or /dev/(volume_group_name).
 Any ideas what might be going on here?

For the people who run into this as well:
This is apparently a feature and not a bug. Thinly provisioned snapshots 
are no longer automatically activated, and a "skip activation" flag is 
set during creation by default. One has to add the -K option to 
"lvchange -ay snapshot-volume" to have lvchange ignore this flag and 
activate the volume for real. -k can be used on lvcreate to not add 
this flag to the volume. See man lvchange/lvcreate for more details.
/etc/lvm/lvm.conf also contains an auto_set_activation_skip option now 
that controls this.

Apparently this was changed in 6.5 but the changes were not mentioned in 
the release notes.
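In command form (assuming a volume group "vg0" with a thin LV "vg0/data"; the names are examples and the function is only defined, not executed):

```shell
thin_snapshot_demo() {
    # The snapshot is created inactive, with the "skip activation" flag set:
    lvcreate -s --name mysnap vg0/data
    # -K (--ignoreactivationskip) is required to activate it anyway:
    lvchange -ay -K vg0/mysnap
    # Or avoid setting the flag at creation time with -kn (--setactivationskip n):
    lvcreate -kn -s --name mysnap2 vg0/data
}
```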

Regards,
   Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] LVM thinpool snapshots broken in 6.5?

2014-01-09 Thread Dennis Jacobfeuerborn
Hi,
I just installed a CentOS 6.5 System with the intention of using thinly 
provisioned snapshots. I created the volume group, a thinpool and then a 
logical volume. All of that works fine but when I create a snapshot 
mysnap then the snapshot volume gets displayed in the lvs output 
with the correct information but apparently no device nodes are created 
under /dev/mapper/ or /dev/(volume_group_name).
Any ideas what might be going on here?

Regards,
   Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Disappearing directory

2014-01-03 Thread Dennis Jacobfeuerborn
On 03.01.2014 15:37, Leon Fauster wrote:
 Am 03.01.2014 um 15:04 schrieb Ken Smith k...@kensnet.org:
 Leon Fauster wrote:
 Am 03.01.2014 um 08:57 schrieb Mauricio Tavaresraubvo...@gmail.com:

 On Thu, Jan 2, 2014 at 10:19 PM, Leon Fauster
 leonfaus...@googlemail.com  wrote:

 {snip}
   Even though it is a workaround - I myself like to use
 /export/backup -- I do not think that solves the original question. At
 work our fileserver, an ubuntu box, mounts its backup drive into
 /mnt/backup just like Ken wants to do. And it works exactly as he
 wants. I wonder if something is doing housecleaning in /mnt.


 thats why i suggest to try it in backup. Thats not a solution, it is more
 a heuristic way to get close to the problem (after evaluating the results).



 I tried it in /media. Same result. It's as if umount is doing a rm -rf


 please try /backup, /test or /random or something that is not /mnt or /media.
 The latter dirs are common to be under control by some processes.

Also try to use /bin/umount instead of just umount. That way you prevent 
a potential alias for umount from running instead of the actual command.

Regards,
   Dennis

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Hangs after inactivity

2013-12-30 Thread Dennis Jacobfeuerborn
On 31.12.2013 03:00, Arvind Nagarajan wrote:
 Hi,

 My linux machine hangs after some period of inactivity (8+ hrs).
 Anyone experiencing similar issue.

 CentOS
 Release 6.5(Final)
 Kernel Linux 2.6.32-431.el6_x86_64
 GNOME 2.28.2

 Processor (0 - 7):
 Intel(R) Core(TM) i7-4770 CPU @ 3.40 GHz

 Any pointers will be helpful.

You can try running the machine with the kernel option pcie_aspm=off 
and see if that helps, and also try to disable C-states or power saving 
altogether in the BIOS.
More Details here:

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Power_Management_Guide/ASPM.html

http://lists.us.dell.com/pipermail/linux-poweredge/2011-June/044901.html

I've experienced lock-ups in the past because of these issues: the 
network device hung because it tried and failed to go into power-saving 
mode using ASPM, and whole servers locked up after being idle for a 
while because of a bug in Intel processors regarding the deeper power 
saving modes (C-states), although those issues are supposed to be fixed 
since 6.3. Hope this helps.
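A quick way to see what the kernel is currently doing about ASPM (the sysfs file only exists on kernels built with PCIe ASPM support; the bracketed entry is the active policy):

```shell
# Prints something like "[default] performance powersave"; does nothing
# quietly if the kernel lacks ASPM support.
[ -r /sys/module/pcie_aspm/parameters/policy ] && \
    cat /sys/module/pcie_aspm/parameters/policy || true
```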

Regards,
   Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] strange speed of mkfs Centos vs. Debian

2013-11-13 Thread Dennis Jacobfeuerborn
On 13.11.2013 11:36, Götz Reinicke - IT Koordinator wrote:
 Hi,

 I'm testing a storage system and different network settings and I'm
 faced with a strange phenomen.

 Doing a mkfs.ext4 on the centos server lasts 11 minutes.

 The same mkfs.ext4 command on the debian installation is done in 20 seconds.

 It is formatting a 14 TB 10Gbit ISCSI Target.

 It is the same server. Centos and debian are installed on different
 internal harddisks.

 Any explanations why debian is so f*** fast? Any hint?

Is it possible that you are using a relatively recent debian version 
compared to the older centos 5/6 versions available? If so you might want 
to look into the lazy_itable_init option for mkfs.ext4, which is probably 
used in the debian case but not the centos case (it will probably be used 
in RHEL7, though).
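The option can be forced either way to compare (device path is a placeholder; the function is defined only, not run):

```shell
mkfs_lazy_demo() {
    dev=$1                                  # e.g. /dev/sdX1 (placeholder)
    # Defer inode table initialization to first mount (fast mkfs):
    mkfs.ext4 -E lazy_itable_init=1 "$dev"
    # Force full initialization up front (the old, slow behaviour):
    mkfs.ext4 -E lazy_itable_init=0 "$dev"
}
```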

Regards,
   Dennis
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [OT] Building a new backup server

2013-11-05 Thread Dennis Jacobfeuerborn
On 05.11.2013 14:35, SilverTip257 wrote:
 On Tue, Nov 5, 2013 at 8:09 AM, Sorin Srbu sorin.s...@orgfarm.uu.se wrote:

 -Original Message-
 From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On
 Behalf Of Dennis Jacobfeuerborn
 Sent: den 4 november 2013 22:30
 To: centos@centos.org
 Subject: Re: [CentOS] [OT] Building a new backup server

 In that case it might be better to switch to XFS which is supported by
 Red Hat up to 100TB so up to that capacity should work well. With RHEL 7
 XFS will become the default Filesystem anyway so now is the time to get
 used to it.

 Yeah? That sounds really interesting. Is this listed on the RHEL website?


 In my brief search I didn't mind to find anything on Red Hat's web site,
 but I did find the below articles.

 https://www.suse.com/communities/conversations/xfs-the-file-system-of-choice/
 http://searchdatacenter.techtarget.com/news/2240185580/Red-Hat-discloses-RHEL-roadmap


Take a look at the first comment here:
https://access.redhat.com/site/discussions/476563

You are not going to get any more official information on what is going 
to happen in RHEL 7 until it's actually out the door.

There was a video on the Red Hat Summit YouTube channel where they spoke 
in great detail about what is planned for RHEL 7, but strangely this 
video has been removed.

Regards,
   Dennis


Re: [CentOS] [OT] Building a new backup server

2013-11-05 Thread Dennis Jacobfeuerborn
On 05.11.2013 16:06, Sorin Srbu wrote:
 -Original Message-
 From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On
 Behalf Of m.r...@5-cent.us
 Sent: den 5 november 2013 15:35
 To: CentOS mailing list
 Subject: Re: [CentOS] [OT] Building a new backup server

 According to Wikipedia RHEL 7 is scheduled for release 2Q 2013;

 http://www.pcworld.com/article/255629/red_hat_preps_rhel_7_for_second_half_of_2013.html.

 I think you meant (2Q++)++ g

 Well, maybe. 8-}

 Let's not diss our upstream provider. I'm pretty sure they do a good job,
 judging from how good CentOS works. ;-)

That's the reason why Red Hat refuses to make any official announcements 
before the actual release: it might turn out that some planned features 
are not ready in time and have to be removed, or the release itself has 
to be delayed if there are features that need a little more baking but 
are considered strategically important.
They try to give you an idea of what is likely to be included in the 
next release, but unfortunately there are always people who then jump on 
the "but you promised!" bandwagon when some of the features don't 
materialize.

Regards,
   Dennis


Re: [CentOS] [OT] Building a new backup server

2013-11-04 Thread Dennis Jacobfeuerborn
On 04.11.2013 18:05, m.r...@5-cent.us wrote:
 Sorin Srbu wrote:
 Guys,

 I was thrown a cheap OEM-server with a 120 GB SSD and 10 x 4 TB
 SATA-disks for the data-backup to build a backup server. It's built
 around an Asus
 Z87-A
 that seems to have problems with anything Linux unfortunately.

 Anyway, BackupPC is my preferred backup-solution, so I went ahead to
 install another favourite, CentOS 6.4 - and failed.

 The raid controller is a Highpoint RocketRAID 2740 and its driver is
 suggested to be loaded prior to starting Anaconda by way of
 ctrl-alt-f2, at which point Anaconda freezes.

 I've come so far as installing Fedora 19 and having it see all the
 hard-drives, but it refuses to create any partition bigger than approx.
 16 TB with ext4.

 I've never had to deal with this big raid-arrays before and am a bit
 stumped.

 Any hints as to where to start reading up, as well as hints on how to
 proceed (another motherboard, ditto raidcontroller?), would be greatly
 appreciated.

 Several. First, see if CentOS supports that card. The alternative is
 to go to Highpoint's website, and look for the driver. You *might* need to
 get the source and build it - I had to do that a few months ago, on an old
 2260 (I think it is) card, and had to hack the source - they're *not* good
 about updates. If you're lucky, they'll have a current driver or source.

 Second, on our HBR's (that's a technical term - Honkin' Big RAIDS... g),
 we use ext4, and RAID 6. Also, for about two years, I keep finding things
 that say that although ext4 supports gigantic filesystems, the tools
 aren't there yet. The upshot is that I make several volumes and partition
 them into 14TB-16TB filesystems.

In that case it might be better to switch to XFS, which is supported by 
Red Hat up to 100TB, so up to that capacity should work well. With RHEL 7 
XFS will become the default filesystem anyway, so now is the time to get 
used to it.
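A minimal sketch of what that switch looks like for a big array (the device name and mount point are examples, not from the thread):

```shell
# Format the RAID volume with XFS; no 16TB ceiling as with ext4 on EL6
mkfs.xfs -f /dev/sdb1

# On EL6, mount large XFS filesystems with inode64 so inodes are not
# all forced into the first 1TB of the device
mount -o inode64 /dev/sdb1 /srv/backup

# Show the geometry mkfs chose (AG count, block size, etc.)
xfs_info /srv/backup
```

Note the inode64 option changes where inode numbers land, which can upset old 32-bit NFS clients, so check your consumers before enabling it.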

Regards,
   Dennis

