[CentOS] send existing root email to another user

2009-11-30 Thread Lists
Hi all,

I'm running Centos 5.4
I have a pile of emails in my root user's /var/spool/mail/root file.
I need to send all of these to another address (preferably external, but 
local would possibly do).

I have searched a lot but can't find any way to do this. I have set it 
so future emails get forwarded to my external address but I also need to 
shift the existing emails.

Thanks
Kate



Re: [CentOS] send existing root email to another user

2009-11-30 Thread Lists
Mathew S. McCarrell wrote:
> On Mon, Nov 30, 2009 at 9:32 PM, Lists <li...@rheel.co.nz> wrote:
>
> Hi all,
>
> I'm running Centos 5.4
> I have a pile of emails in my root user's /var/spool/mail/root file
> I need to send all of these to another address (preferably external but
> local would possibly do)
>
> I have searched a lot but can't find any way to do this. I have set it
> so future emails get forwarded to my external address but I also
> need to
> shift the existing emails.
>
>
> Kate,
>
> You need to modify /etc/aliases to have something like:
>
> root:  u...@gmail.com
>
>
> You should then run /usr/bin/newaliases to make these changes active.
>
> Matt
Hi Matt,

I have done this and it works for new mail arriving, but I also need to 
send on the existing mail.

Thanks
Kate


Re: [CentOS] send existing root email to another user

2009-11-30 Thread Lists

Mathew S. McCarrell wrote:
On Mon, Nov 30, 2009 at 9:42 PM, Lists <li...@rheel.co.nz> wrote:


>
> Kate,
>
> You need to modify /etc/aliases to have something like:
>
> root:  u...@gmail.com
>
>
> You should then run /usr/bin/newaliases to make these changes active.
>
> Matt
Hi Matt,

I have done this and it works for new mail arriving, but I also need to 
send on the existing mail.


Ah, I misread what you were looking to do.  You should be able to do 
the following then.


cat /var/spool/mail/root | mail -s "Old Root Emails" u...@gmail.com


Hope that helps.
Matt

You are a legend Thanks so much.
Kate


Re: [CentOS] send existing root email to another user

2009-11-30 Thread Lists
Jason Pyeron wrote:
>> -Original Message-
>> From: Lists
>> Sent: Monday, November 30, 2009 21:33
>> To: centos@centos.org
>> Subject: [CentOS] send existing root email to another user
>>
>> Hi all,
>>
>> I'm running Centos 5.4
>> I have a pile of emails in my root users /var/spool/mail/root 
>> file I need to send this all to another address (preferrably 
>> external but local would possibly do)
>>
>> 
>
> You want to bounce the messages. Tools like Pine can do it through a fancy 
> UI.
> Or you can:
>
> formail -s procmail < mbox
>
> Assuming you set it up to be delivered correctly now.
>
> http://googleit1st.com/search?q=redeliver+mbox
>   
Hmm, I tried that but nothing seemed to happen. I used Matt's suggestion 
of cat /var/spool/mail/root | mail -s "mail" u...@domain.com, which has 
sent it all as one email.
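
For reference, a minimal sketch of the formail route - an aside, assuming 
procmail's formail is installed and a working MTA provides 
/usr/sbin/sendmail; the address is the same placeholder as above. 
formail -s runs the command once per message, so each mail is resent 
individually with its MIME structure, attachments included, intact:

# formail -s /usr/sbin/sendmail -i u...@domain.com < /var/spool/mail/root

If nothing seems to happen, the messages may be sitting in the local 
queue; the MTA's mail log is the place to check.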

Thanks
Kate


Re: [CentOS] send existing root email to another user

2009-11-30 Thread Lists

Mathew S. McCarrell wrote:
Ah, I misread what you were looking to do.  You should be able to do 
the following then.


cat /var/spool/mail/root | mail -s "Old Root Emails" u...@gmail.com


Hope that helps.
Matt
Argh, spoke too soon - it's mostly all good, apart from the attachments 
on the emails, which don't come through as attachments.
Is there any way I can send them so that they are received with the 
attachments still intact?


Kate


Re: [CentOS] send existing root email to another user

2009-12-01 Thread Lists

John R Pierce wrote:
Argh, spoke too soon - it's mostly all good, apart from the attachments 
on the emails, which don't come through as attachments.
Is there any way I can send them so that they are received with the 
attachments still intact?



install mutt if you haven't already.  (yum install mutt)

as root, go into mutt.  you should see all your messages, or at least 
the first screenful of them.


tag all the messages by...
T .* 
(no spaces; just shift-T, then .* to tag all, and enter to make the tag 
happen)
Now each message header should show a *


;b

to 'bounce' all messages that are tagged.  give it an address to 
'bounce' them to, and voila.   they remain in the mailbox, so...


;d

and yes to delete all.   q to quit (it won't actually purge the deleted 
messages til you quit)


in general, ; means the next command is executed on all tagged messages.

I'm sure there's probably a more elegant way to do this, but this is 
the quickest way I know.


  

Thank you this worked perfectly.


[CentOS] PHP 5 bug?

2012-01-04 Thread Lists
I'm using EL6 with all updates applied and getting bit by a PHP5 bug 
that was fixed a year and a half ago...

https://bugs.php.net/bug.php?id=52534

EL6 ships with php 5.3.3, which was released prior to the bug fix. What 
are the chances that this fixed bug can be reported/fixed upstream at 
the prominent North American Linux Vendor? Here's sample code that 
demonstrates the problem:

function CheckBug52534() {
    // On affected PHP builds, var_export() mangles negative integer
    // array keys, so the eval'd copy loses the -1 entry.
    $check = array(1 => 'a', -1 => 'b');
    $str = var_export($check, true);
    $str = "\$a = $str;";
    eval($str);
    return !isset($a[-1]);
}

echo CheckBug52534() ? "has it" : "not found";
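
A workaround sketch for anyone stuck on the stock 5.3.3 (not part of the 
original report): serialize()/unserialize() round-trips negative integer 
keys correctly, unlike var_export()+eval() on affected builds. A quick 
check from a shell:

php -r '$c = array(1 => "a", -1 => "b");
        $copy = unserialize(serialize($c)); // negative key survives
        var_dump(isset($copy[-1]));         // bool(true), even on 5.3.3'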




Re: [CentOS] PHP 5 bug?

2012-01-04 Thread Lists
> I found the following existing bugzillas.
> https://bugzilla.redhat.com/show_bug.cgi?id=695251
> https://bugzilla.redhat.com/show_bug.cgi?id=700724
> However, both seem to be for 5 only. If you think this applies to 6 too, 
> consider filing a bug request yourself.


Thanks, I did. https://bugzilla.redhat.com/show_bug.cgi?id=771738




[CentOS] Making a clone of an LVM-based EL5 install

2013-03-18 Thread Lists
Does anybody here have any idea how to make an exact copy of a drive 
that has LVM partitions? I'm having trouble using dd to do this for an 
EL5 server.

We're trying to diagnose a software problem of some kind and would like 
an exact, perfect copy of the software running so that we can see 
exactly what the problem is without disturbing our production copy. It's 
been admin practice to set up a single server with the ideal image as 
desired, and then use dd to replicate that image to multiple hosts. 
We've done this a large number of times. The exact procedure followed is:

-) Perform typical file-level backups with rsync
-) Install an additional HDD (hereafter sdb) at least as big as the HDD to 
be duplicated.
-) Boot off a CentOS install DVD with "linux rescue"
-) Where /dev/sda is the original drive and /dev/sdb is the new drive, run:
 dd bs=8192 if=/dev/sda of=/dev/sdb;
-) shutdown the server upon completion of the previous command and 
remove 2nd drive.

Hereafter, using systems NOT running LVM, we have a bootable drive that 
can be installed into a 2nd machine and generally expect things to "just 
work" after sorting out a few driver issues. (e.g. eth# device names change)
Following this exact procedure with partitions on LVM does NOT result in 
a booting system!

-) When booting from the newly imaged drive, it starts the boot just 
fine but quits at:

Activating logical volumes
   Volume group "VolGroup00" not found
Trying to resume from /dev/VolGroup00/LogVol01
Unable to access resume device (/dev/VolGroup00/LogVol01)
Creating root device.
Mounting root filesystem.
mount: could not find filesystem '/dev/root'
--SNIP--
switchroot: mount failed: no such file or directory
Kernel panic - not syncing: Attempted to kill init!



-) Running Linux rescue, at the shell:

# pvscan
   PV /dev/sda2   VG VolGroup00   lvm2 [153.28 GiB / 0 free]
   Total: 1 [153.28 GiB] / in use: 1 [153.28 GiB] / in no VG: 0 [0   ]
# vgscan
   Reading all physical volumes. This may take a while...
   Found volume group "VolGroup00" using metadata type lvm2
# lvscan
   inactive   '/dev/VolGroup00/LogVol00' [151.31 GiB] inherit
   inactive   '/dev/VolGroup00/LogVol01' [1.97 GiB] inherit
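
A hedged aside: the LVs above show "inactive". From the rescue shell they 
can usually be activated and mounted by hand, which at least confirms the 
data survived the dd; the volume names below are taken from the lvscan 
output above:

# vgchange -ay VolGroup00
# mount /dev/VolGroup00/LogVol00 /mnt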


-) Running the EL5 installer with the "Find distro" option mounts the 
install at /tmp/sysimage without issue.

What am I missing?!?!


Re: [CentOS] Making a clone of an LVM-based EL5 install

2013-03-19 Thread Lists
Thanks!

Your reply, in conjunction with a Google search that found the website 
below, resolved this completely!

http://wiki.centos.org/TipsAndTricks/CreateNewInitrd

The final line being something like
mkinitrd --with sata_nv initrd-2.6.18-194.32.1.el5.img 2.6.18-194.32.1.el5
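
The same fix works without recloning, per the reply below; a sketch, 
assuming the rescue environment mounted the install at /mnt/sysimage:

# chroot /mnt/sysimage
# mkinitrd --with sata_nv /boot/initrd-2.6.18-194.32.1.el5.img 2.6.18-194.32.1.el5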

-Ben

On 03/19/2013 08:37 AM, Gordon Messmer wrote:
> On 03/18/2013 03:36 PM, Lists wrote:
>> -) When booting from the newly imaged drive, it starts the boot just
>> fine but quits at:
>> 
>> Activating logical volumes
>>  Volume group "VolGroup00" not found
> The only reason that I can think of that would cause this is an initrd
> that doesn't contain the driver for whatever adapter the disk is
> attached to.
>
> Boot the rescue image and identify the adapter module.  When you've
> identified it, go back to the live system and make a new initrd using
> "--with ".  Don't replace the existing initrd, just
> create a new one in /boot.  If you then clone the disk, you should be
> able to boot the cloned disk to grub.  Edit the kernel definition and
> change the path to the initrd, selecting the one you've created for the
> new system.  It should boot properly, at which point you can replace the
> standard initrd path or fix grub's configuration file.
>
> ...and if you don't want to clone the system again, you can just boot
> the rescue environment, chroot to the sysimage, and make the initrd there.



[CentOS] Trying to get a kernel dump

2013-05-30 Thread Lists
Trying to debug a database server getting cpu softlocks causing SSHD to 
hang and not let anybody log in. Figured that a good first step would be 
to get some kernel dumps when the problem occurs. According to what I 
read at the following web site, I can get dumps for softlock problems 
too. Here's the site:
http://blog.kreyolys.com/2011/03/17/no-panic-its-just-a-kernel-panic/

Using a throwaway 32 bit dev server, I followed the directions exactly 
with the below modifications.

 A) I got the kernel-debuginfo and kernel-debuginfo-common by enabling
/etc/yum.repos.d/CentOS-Debuginfo.repo and then installing with yum.

 B) One other change: I had to use an alternate partition 
instead of the one holding "/var", because the server was set up with 
LVM. The partition that I used ("/dev/sdb3") is actually mounted at 
/alt/home.


So when I simulate a crash by echoing a "c" into /proc/sysrq-trigger, I see 
the following in /var/log/messages:
May 30 13:37:24 norman kernel: SysRq : HELP : loglevel(0-9) reBoot Crash 
terminate-all-tasks(E) memory-full-oom-kill(F) kill-all-tasks(I) 
thaw-filesystems(J) saK show-backtrace-all-active-cpus(L) 
show-memory-usage(M) nice-all-RT-tasks(N) powerOff show-registers(P) 
show-all-timers(Q) unRaw Sync show-task-states(T) Unmount 
show-blocked-tasks(W) dump-ftrace-buffer(Z)

But nothing else. No files appear in /var/crash or /alt/home/var/crash 
(the directory exists). Nothing happens at all when I run:
# echo 1 > /proc/sys/kernel/softlockup_panic

I've tried tweaking a few settings in /etc/kdump.conf, restarting after 
each time (since it creates a new boot img) to no avail. What am I missing?

Here's a bunch of hopefully relevant info:

kernel line in grub.conf:
kernel /vmlinuz-2.6.32-358.6.2.el6.i686 ro 
root=/dev/mapper/VolGroup-lv_root nomodeset rd_NO_LUKS LANG=en_US.UTF-8 
rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 rhgb 
crashkernel=auto quiet rd_LVM_LV=VolGroup/lv_root  KEYBOARDTYPE=pc 
KEYTABLE=us rd_NO_DM crashkernel=128M nmi_watchdog=1

# grep -v "#" /etc/kdump.conf
ext4 /dev/sdb3
path /var/crash
core_collector makedumpfile -c -d 31
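
A few sanity checks worth running at this point; a sketch, not from the 
original post, using interfaces that exist on EL6 (note the kernel line 
above passes crashkernel twice, once as auto and once as 128M):

# cat /sys/kernel/kexec_crash_loaded    # 1 = capture kernel is loaded
# service kdump status
# grep -o 'crashkernel=[^ ]*' /proc/cmdline

If kexec_crash_loaded reads 0, the capture kernel never loaded and no 
vmcore will ever be written, regardless of kdump.conf.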


# cat /proc/meminfo
MemTotal:         379352 kB
MemFree:          232748 kB
Buffers:           22496 kB
Cached:            79040 kB
SwapCached:            0 kB
Active:            60512 kB
Inactive:          60152 kB
Active(anon):      20028 kB
Inactive(anon):     4640 kB
Active(file):      40484 kB
Inactive(file):    55512 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:             0 kB
HighFree:              0 kB
LowTotal:         379352 kB
LowFree:          232748 kB
SwapTotal:       1048568 kB
SwapFree:        1048568 kB
Dirty:                48 kB
Writeback:             0 kB
AnonPages:         19124 kB
Mapped:            14320 kB
Shmem:              5556 kB
Slab:              16216 kB
SReclaimable:       8444 kB
SUnreclaim:         7772 kB
KernelStack:         816 kB
PageTables:         2188 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1238244 kB
Committed_AS:     164964 kB
VmallocTotal:     506396 kB
VmallocUsed:        5344 kB
VmallocChunk:     491856 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        9692 kB
DirectMap2M:      514048 kB




// IN sysctl.conf
kernel.unknown_nmi_panic=1
kernel.panic_on_unrecovered_nmi=1
vm.panic_on_oom=1
kernel.sysrq = 1

# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 15
model   : 1
model name  : Intel(R) Pentium(R) 4 CPU 1.50GHz
stepping: 2
cpu MHz : 1495.192
cache size  : 256 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm up pebs bts
bogomips: 2990.38
clflush size: 64
cache_alignment : 128
address sizes   : 36 bits physical, 32 bits virtual
power management:

# dmesg:
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-358.6.2.el6.i686 
(mockbu...@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red 
Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
KERNEL supported cpus:
   Intel GenuineIntel
   AMD AuthenticAMD
   NSC Geode by NSC
   Cyrix CyrixInstead
   Centaur CentaurHauls
   Transmeta GenuineTMx86
   Transmeta TransmetaCPU
   UMC UMC UMC UMC
BIOS-provided physical RAM map:
  BIOS-e820:  - 000a (usable)
  BIOS-e820: 000f - 0010 (reserved)
  BIOS-e820: 0010 - 1ff77000 (usable)
  BIOS-e820: 1ff77000 - 1ff79000 (ACPI NVS)
  BIOS-e820: 1ff79000 - 2000 (reserved)
  BIOS-e820: fec0 - fec1 (r

Re: [CentOS] SSD support in C5 and C6

2013-07-18 Thread Lists
On 07/15/2013 07:33 AM, John Doe wrote:
> I do not think CentOS 5 supports TRIM (unless back-ported from 
> 2.6.33)... 
> http://kernelnewbies.org/Linux_2_6_33#head-b9b8a40358aaef60a61fcf12e9055900709a1cfb
>  
> JD

AFAIK, EL5 doesn't support trim. We've been using SSDs for DB servers 
for almost 2 years and love 'em. Even conservative performance estimates 
were off the charts: PostgreSQL queries showed a 95% reduction in query 
latency, even after extended, continuous use. Haven't seen trim make 
much difference in actual performance.

Main thing is DO NOT EVEN THINK OF USING CONSUMER GRADE SSDs. SSDs are a 
bit like a salt shaker: they have only a certain number of shakes, and 
when the writes run out, well, the salt shaker is empty. Spend the 
money and get a decent Enterprise SSD. We've been conservatively using 
the (spendy) Intel drives with good results.


Re: [CentOS] CentOS and LessFS

2012-01-23 Thread Lists
This thread has been beaten to death, so perhaps my $0.02 isn't so 
meaningful, but I wrote a set of rsync scripts in PHP that I've used for 
years to manage terabytes of backups going back years. It's called 
TINBackupBuddy and you can get it at 
http://www.effortlessis.com/thisisnotbackupbuddy/ - it is a set of 
scripts that let you manage and back up numerous hosts, called via 
cron on a regular, graceful-failure basis, via rsync. It de-duplicates 
files that have not changed between backup sets, so depending on the 
churn on your servers, you can fit an astonishing number of backups onto 
a single drive...
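
The de-duplication is presumably rsync's --link-dest mechanism (an 
assumption on my part): unchanged files are hardlinked against the 
previous backup set, so only changed files consume space. A sketch with 
illustrative paths:

rsync -a --link-dest=/backups/host/2012-01-22 \
    root@host:/data/ /backups/host/2012-01-23/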

I've managed backups for a rather large cluster (now over 200 schools 
and school districts) of data automatically, on a 24 hour basis using 
these scripts, for years, so they really do work. And for our 
development team, we recover from these backups in order to replicate 
reported issues, so these backups are verified numerous times per day.

Get a computer with some big disks in it. (We have about 20 TB of disk 
space on our backup server right now.) Set up TinBackupBuddy and point it 
at the big disks, using symlinks where it makes sense. Set a few options, 
call bbbackup.php via cron, and you're golden. Been doing it for close 
to 10 years now.

Good luck!

On 01/16/2012 03:50 PM, Hugh E Cruickshank wrote:
> Hi All:
>
> We have been looking at implementing deduplication on a backup server.
> From what I have been able to find, the available documentation is
> pretty thin. I ended up trying to install LessFS on this CentOS 5.7
> box but we have now encountered problems with fuse version.
>
> Has anyone out there been able to get LessFS running on CentOS 5.7 and
> can provide some pointers?
>
> If not LessFS can you suggest an alternate deduplication software?
>
> TIA
>
> Regards, Hugh
>





[CentOS] Terminal settings with Putty?

2012-02-10 Thread Lists
Trying to use Putty 0.62 (the most current version I can find) and when 
I use the C compiler, I see a bunch of terminal codes that obfuscate the 
output of the compiler. (Teaching my son some programming)

Anybody know what I should be setting to what? It's pretty much a 
CentOS6 server set up with defaults. You can see what I'm talking about

http://i.imgur.com/dpwLg.png

Thanks in advance...

-Ben


[CentOS] RESOLVED: Terminal settings with Putty?

2012-02-10 Thread Lists
Found the problem several Google searches later... need to use UTF-8 
encoding instead of ISO-8859-1, as found on this website: 
http://turbulentsky.com/cygwin-funny-characters-man-pages-putty.html
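
(If memory serves, the setting lives under Window -> Translation -> 
"Remote character set" in PuTTY; on the server side, running locale 
should report a matching UTF-8 LANG.)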

On 02/10/2012 08:07 PM, Lists wrote:
> Trying to use Putty 0.62 (the most current version I can find) and 
> when I use the C compiler, I see a bunch of terminal codes that 
> obfuscate the output of the compiler. (Teaching my son some programming)
>
> Anybody know what I should be setting to what? It's pretty much a 
> CentOS6 server set up with defaults. You can see what I'm talking about
>
> http://i.imgur.com/dpwLg.png
>
> Thanks in advance...
>
> -Ben



Re: [CentOS] download.fedora.redhat.com down? moved?

2012-03-02 Thread Lists
On 03/02/2012 01:26 PM, Rob Del Vecchio wrote:
> Hello all,
>
> I'm trying to install mod_python from the normal channels via RPM,
> though it seems that download.fedora.redhat.com is down, or moved?
> DNS reports it does not exist.
>

Genuinely curious why you didn't run:

# yum install mod_python;

(It's been a very, very long time since I downloaded distro packages 
manually)


Re: [CentOS] cron job not running

2012-03-05 Thread Lists
On 03/04/2012 07:25 PM, Tim Dunphy wrote:
> hello list,
>
>   I am attempting to backup a centos 5.4 (x86_64) server running mysql
> with a cron job. Here's how the cron job looks:
>
>   [root@cloud:/home/bluethundr/backupdb] #crontab -l
> * 3 * * * /usr/bin/mysqldump jfwiki>
> /home/bluethundr/backupdb/wiki-$(date +%Y%m%d).sql
>
> However if I run the command from the command line it seems to work fine.
>
>
> If I grep syslog for cron this is what I've found:
>
> Mar  4 22:12:13 domU-12-31-38-07-49-CC syslog-ng[1174]: Log
> statistics; processed='center(queued)=16141',
> processed='center(received)=16141', processed='destination(d_boot)=0',
> processed='destination(d_auth)=3',
> processed='destination(d_cron)=131',
> processed='destination(d_mlal)=0', processed='destination(d_kern)=0',
> processed='destination(d_mesg)=200',
> processed='destination(d_cons)=0', processed='destination(d_spol)=0',
> processed='destination(d_mail)=15807', processed='source(s_sys)=16141'
>
> I've tried bouncing the crond service but even if I set the cron job
> to run every minute the backups don't run.
>
> I was hoping someone out there might have some advice n how to solve
> this problem.
>
> thanks

I'll start by suggesting that you really don't want to back up every 
minute - as soon as the data size grows bigger than what can be dumped 
in a minute, you're going to run into some serious problems. We back up 
all our databases hourly to avoid these types of problems, and a typical 
backup time is about 15-20 minutes.

I suggest taking your command and putting it into its own shell script. 
Call every command explicitly, with the full path as returned by which, 
e.g. /usr/bin/mysqldump instead of just mysqldump and /bin/date instead 
of date, since cron runs with different environment variables. The 
crontab entry should also reference the full path to the script, e.g. 
/usr/local/bin/mysql_backup.sh instead of mysql_backup.sh. This also 
provides some security benefits: if you have the permissions straight, 
you won't inadvertently run the wrong script (perhaps containing an 
exploit) due to an unexpected problem with either the environment or 
inadvertent write permissions on the wrong directory.

Also, don't run this as root. Not that you can't, it just causes the 
script to run with more privileges than are strictly necessary. Add a 
user "mysqldump" or something and run the script in that user's crontab. 
I'd recommend that you don't give this user a password so it can't be 
inadvertently used against you.

Lastly, have your backup script produce some output at the end, e.g. echo 
"Script complete", and then check the email of the user running the cron job.

My $0.02, good luck!


[CentOS] High load averages copying USB

2012-04-19 Thread Lists
Problem as follows:

1) Plug in an external USB drive.

2) Mount it anywhere. Doesn't matter how.

3) Copy a few GB of data to the drive from a non-USB disk.

4) Watch the load average "climb" to 5.x, sometimes 10.x or more. Why? 
This is on an otherwise unloaded system. Doesn't matter how many cores, 
how much RAM, 32/64 bit, etc.

Why should copying some files to a USB drive cause load averages to 
climb so high? (and network monitors to freak out?)


Re: [CentOS] High load averages copying USB

2012-04-26 Thread Lists
On 04/20/2012 05:24 AM, Giovanni Tirloni wrote:
> On Apr 20, 2012 2:42 AM, "Lists"  wrote:
>> Problem as follows:
>>
>> 1) Plug in an external USB drive.
>>
>> 2) Mount it anywhere. Doesn't matter how.
>>
>> 3) Copy a few GB of data to the drive from a non-USB disk.
>>
>> 4) Watch the load average "climb" to 5.x, sometimes 10.x or more. Why?
>> This on an otherwise unloaded system. Doesn't matter how many cores, how
>> much RAM, 32/64 bit, etc.
>>
>> Why should copying some files to a USB drive cause load averages to
>> climb so high? (and network monitors to freak out?)
> It's just a number. Is the system any slower?
>
> Linux adds I/O wait time to the load average calculation.

The problem isn't so much actual "speed" as network monitors freaking 
out over the "high" load average when performing backups. I can make 
exceptions for servers doing backups, but then I don't get notifications 
when the load is legitimately high. I can make exceptions only during 
backup times, but that increases complexity.

Seems silly that load average would climb to 2.x or more copying some 
files on an otherwise lightly loaded server.
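
To confirm it really is I/O wait inflating the number, a quick sketch 
with standard tools:

vmstat 1 5                           # watch the 'wa' column during a copy
ps -eo stat,comm | awk '$1 ~ /^D/'   # uninterruptible sleepers count toward load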


[CentOS] Detect if a drive is USB?

2012-09-21 Thread Lists
I'm updating a script to work with EL6 (previously it worked on EL5) and am 
stumped; google fu is failing me. Part of the script is to detect USB 
drives and mount them. Previously, it worked something like:

isUsbDevice() {
    if [ -f "/sys/block/$1/usb" ] ; then
        # do stuff
        :
    fi
}

but on EL6 the "usb" file/directory is nowhere to be found. I've tried 
the output of lsusb and lsusb -v, but in no case have I been able to 
match anything in /sys/block to anything in lsusb's output.

Given an available drive (e.g. /dev/sdk), how can I reliably tell its 
interface type? (USB/SATA/PATA/SCSI ?)




Re: [CentOS] Detect if a drive is USB?

2012-09-21 Thread Lists
Answering my own message for posterity's sake: This line will output "E: 
ID_BUS=usb" for any block
device connected to a USB bus:

udevadm info --query=all --name=$file 2>/dev/null | grep -i BUS=usb;

The basic idea is to
1) use udevadm to get all info on device $file (where $file is a string 
like "sde")
 a) send all errors to /dev/null.
2) grep the output of udevadm for the string "BUS=usb" which appears on 
USB devices.

Note that you must have $file defined with a device name which I got by 
listing the contents of /sys/block.

Good luck! Also, if anybody has a better solution, please reply. The 
above appeared to be "good enough"
to finish porting my admin script.
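
An alternative sketch, since the sysfs path for a USB disk runs through 
the USB bus ("sde" here is illustrative, as above):

if readlink -f /sys/block/sde | grep -q '/usb'; then
    echo "sde is on the USB bus";
fi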

-Ben

On 09/21/2012 11:36 AM, Lists wrote:
> I'm updating a script to work with EL6 (previously worked on EL5) and am
> stumped, google fu is failing me. Part of the script is to detect USB
> drives and mount them. Previously, It worked something like
>
> isUsbDevice() {
>     if [ -f "/sys/block/$1/usb" ] ; then
>         # do stuff
>         :
>     fi
> }
>
> but on EL6 the "usb" file/directory is nowhere to be found. I've tried
> the output of lsusb and lsusb -v, but in no case have I been able to
> match anything in /sys/block to anything in lsusb's output.
>
> Given an available drive (e.g. /dev/sdk), how can I reliably tell its
> interface type? (USB/SATA/PATA/SCSI ?)



Re: [CentOS] Newer version of ffmpeg for EL6?

2014-06-03 Thread Lists
On 05/24/2014 05:13 AM, Nux! wrote:
> Otherwise, you can try some of the static builds people offer, e.g.:
> http://ffmpeg.gusari.org/static/  (download and run, handy in some
> scenarios)
This reply is late - I was having an issue getting email for a while. This is 
exactly the solution we've been moving forward with, thanks!

-Ben


Re: [CentOS] Mother board recommendation

2014-06-03 Thread Lists
On 05/16/2014 11:23 AM, m.r...@5-cent.us wrote:
> hardware doesn't support ECC.
> 
> Oh, right, *all* the servers here use ECC DIMMs. And you really, REALLY
> don't want to go there: a) price, b) n/s is not buffered is not
> registered, none of the above compatible in the same bank, and oh, yes,
> dual rank is *not* compatible with single rank or quad rank... I kid you
> not. I've had servers simply not boot by mixing two of those, and let's
> not forget not fitting in the slot, and c) see a).
>

ECC is such a horrible pain in the rear. If you don't have things like 
"SLA" in your casual vocabulary, pretty much any desktop board works 
fine for CentOS 6. For spare/personal/backup servers, I use whatever old 
hardware sits in the junk room.

Anything using ECC is such a pain to match up correctly that I tend to 
buy motherboard/RAM/CPU from a vendor as a package unit so it's 
warranted to work together. Registered/Unregistered, CAS timing, 
single/double/quad ranked, never mind voltages, and making sure your CPU 
supports it!

For all the promises of better uptimes, I've had far more trouble with 
mis-matched ECC than I've ever experienced with bad non-ECC RAM. Truly, 
this is a sorry showing for ECC.

Ben


Re: [CentOS] Mother board recommendation

2014-06-03 Thread Lists
On 06/03/2014 11:52 AM, Rainer Duffner wrote:
> It’s also a bit of a sorry showing for the admin putting together the system.
Perhaps, perhaps not. Remember the old saw about simplicity and 
reliability? ECC ignores that saw completely, resulting in a complex, 
error-prone hardware landscape fraught with incompatibilities.

With desktop systems, it's pretty straightforward to get memory that 
"just works". Yes, with ECC, all the specs are in the motherboard 
manual, but there are far more variables to consider. And, even when you 
buy exactly to spec, I've had issues with systems that worked fine for 
days at high loads and passed extensive Memtest, but locked up 
sporadically (every few weeks), at a tremendous cost to diagnose. The 
only way I've found to be confident that a purchase will work is to buy 
only hardware on a specific hardware compatibility list, and typically I 
buy the hardware together. Most of our hardware is in this category.

I'll consider my irreverent point of view, that this complexity is 
unnecessary and counter-productive, as somewhat unpopular and leave it 
at that.

-Ben

PS: As an aside, we've had extremely good results with SuperMicro - we 
shop for them almost exclusively. The only problem we've had was with the 
onboard SAS controller on a specific model on EL6 - it worked fine on EL5. 
We ended up disabling SAS and going with onboard SATA with no further 
issues.


Re: [CentOS] Mother board recommendation

2014-06-03 Thread Lists
On 06/03/2014 01:53 PM, Rainer Duffner wrote:
> That’s why you replace both.
> Or, if you build your own servers in significant quantities, you’ve got to do 
> you’re own stock-keeping.
> Need 24 hard drives? Buy 30!
> Need 12 PSUs for 6 servers? Buy 16.
This may be industry standard, and I understand that. I just think that 
it's a poor showing that this is what you have to do to take advantage 
of technology meant to be more reliable than "consumer grade" stuff, 
which somehow manages to be quite reliable even if you mix and match.

So, the "better" one is, in practice, less reliable unless you follow 
extremely narrow guidelines, and not only pay more per unit, but buy 
more units than needed! Usually, the cheap one is flaky and unreliable, 
and the expensive one "just works". See: shovels, auto parts, shoes, 
kitchen knives, mechanic tools, bread machines, and virtually every 
other product category on the market.

My complaining on an obscure list won't change anything, but this is 
still a sorry state for ECC.

Ben


[CentOS] USB blues

2014-06-11 Thread Lists
I have a freshly built, updated EL6 system and am having problems with 
USB stability - at boot everything works fine but within a few hours, 
USB devices start disappearing randomly. At first I thought the USB 
devices were suspect, but removing the suspect devices and an accessory 
PCIe USB card hasn't changed anything. As of now, a single USB device is 
working. (which is lucky, it hosts the OS) I've rebooted the server 
several times trying to diagnose the problem. After a reboot, everything 
works great - for a while. Is this a driver issue or just a bum 
motherboard/chipset?

The system is built based on the SUPERMICRO MBD-X9SCM-F-O, available here:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182253

According to the OS compatibility matrix, the most recent CentOS 6.5 is 
supported:
http://www.supermicro.com/support/resources/OS/C204.cfm
and all yum updates have been applied.

I have the output of /var/log/messages since the last reboot here:
http://hal.schoolpathways.com/lastboot.txt

Notes: Everything was working after the boot. Devices read/wrote fine.

ata9.00 HD errors are known, it's the old ATA O/S drive.

Jun 11 01:07:36: Problem begins shortly after leaving for the day.

Jun 11 03:35:44 sdi goes offline, it's one of the two USB boot devices 
and is plugged into the USB port directly on the main board. (IE: It's 
not a bad cable)

Jun 11 15:53:43  b2012 goes offline, it's mounted USB.


Here's the output of lspci (Note, this is when the system is having a 
problem)
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core 
processor DRAM Controller (rev 09)
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network 
Connection (rev 05)
00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset 
Family USB Enhanced Host Controller #2 (rev 05)
00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset 
Family PCI Express Root Port 1 (rev b5)
00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset 
Family PCI Express Root Port 5 (rev b5)
00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset 
Family USB Enhanced Host Controller #1 (rev 05)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
00:1f.0 ISA bridge: Intel Corporation C204 Chipset Family LPC Controller 
(rev 05)
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset 
Family SATA AHCI Controller (rev 05)
00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family 
SMBus Controller (rev 05)
01:00.0 SATA controller: JMicron Technology Corp. JMB363 SATA/IDE 
Controller (rev 03)
01:00.1 IDE interface: JMicron Technology Corp. JMB363 SATA/IDE 
Controller (rev 03)
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network 
Connection
03:03.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA 
G200eW WPCM450 (rev 0a)

And here is the output of lsusb: (Again, after the problem appears)
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 003: ID 0557:2221 ATEN International Co., Ltd Winbond Hermon
Bus 001 Device 004: ID 13fe:5200 Kingston Technology Company Inc.
Bus 001 Device 005: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB

I plan on verifying/updating the BIOS tonight. Is there any other 
information that I could provide to help diagnose this?
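
One thing that may be worth ruling out first - an assumption on my part, 
not something established above: USB autosuspend. The usbcore parameter 
below exists on EL6 kernels, and -1 disables autosuspend globally:

# cat /sys/module/usbcore/parameters/autosuspend
# echo -1 > /sys/module/usbcore/parameters/autosuspend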

-Ben


[CentOS] How to enable EDAC kernel module for checking ECC memory?

2014-06-25 Thread Lists
In order to support ZFS, we upgraded a backups server with a new, ECC 
motherboard. We're running CentOS 6 with ZFS on Linux, recently patched. 
Now, I want to enable EDAC so we can check for memory errors (and maybe 
PCI errors as well) but so far, repeatedly pounding on the Google hasn't 
yielded exactly what I need to do to enable EDAC.

One howto was covering PCI and edac, but "modprobe edac_mc" didn't work. 
Here's some information below, How do I get edac up and running? Many 
howtos cover how to use edac-ctl and edac-util, but none seem to cover 
how to determine what module to load into the kernel.

[root@hume ~]# modprobe edac_mc
FATAL: Module edac_mc not found.
[root@hume ~]# lsmod | grep edac
[root@hume ~]# cat /proc/version
Linux version 2.6.32-431.11.2.el6.x86_64 
(mockbu...@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red 
Hat 4.4.7-4) (GCC) ) #1 SMP Tue Mar 25 19:59:55 UTC 2014
[root@hume ~]# modprobe edac_mce
FATAL: Module edac_mce not found.
[root@hume ~]# edac-ctl --mainboard
edac-ctl: mainboard: Supermicro X9SCL/X9SCM
[root@hume ~]# edac-ctl --status
edac-ctl: drivers not loaded.
[root@hume ~]# lsmod
Module                 Size  Used by
ext3                 240013  1
jbd                   80858  1 ext3
nfsd                 309196  13
lockd                 73662  1 nfsd
nfs_acl                2647  1 nfsd
auth_rpcgss           44949  1 nfsd
sunrpc               262864  20 nfsd,lockd,nfs_acl,auth_rpcgss
exportfs               4236  1 nfsd
bnx2fc                90539  0
fcoe                  23298  0
libfcoe               56791  2 bnx2fc,fcoe
8021q                 25349  0
garp                   7152  1 8021q
stp                    2218  1 garp
libfc                108670  3 bnx2fc,fcoe,libfcoe
llc                    5546  2 garp,stp
scsi_transport_fc     55299  3 bnx2fc,fcoe,libfc
scsi_tgt              12077  1 scsi_transport_fc
ipt_REJECT             2351  2
nf_conntrack_ipv4      9506  15
nf_defrag_ipv4         1483  1 nf_conntrack_ipv4
iptable_filter         2793  1
ip_tables             17831  1 iptable_filter
ip6t_REJECT            4628  2
nf_conntrack_ipv6      8337  2
nf_defrag_ipv6        11156  1 nf_conntrack_ipv6
xt_state               1492  17
nf_conntrack          79758  3 nf_conntrack_ipv4,nf_conntrack_ipv6,xt_state
ip6table_filter        2889  1
ip6_tables            18732  1 ip6table_filter
iTCO_wdt               7115  0
iTCO_vendor_support    3056  1 iTCO_wdt
zfs                 1152935  53
zcommon               44698  1 zfs
znvpair               80460  2 zfs,zcommon
zavl                   6925  1 zfs
zunicode             323159  1 zfs
spl                  260832  5 zfs,zcommon,znvpair,zavl,zunicode
zlib_deflate          21629  1 spl
i2c_i801              11359  0
i2c_core              31084  1 i2c_i801
ses                    6475  0
enclosure              8438  1 ses
sg                    29350  0
lpc_ich               12803  0
mfd_core               1895  1 lpc_ich
shpchp                32778  0
ext4                 374902  3
jbd2                  93427  1 ext4
mbcache                8193  2 ext3,ext4
raid1                 32045  2
usb_storage           49068  5
sd_mod                39069  27
crc_t10dif             1541  1 sd_mod
ata_generic            3837  0
pata_acpi              3701  0
pata_jmicron           2813  2
video                 20674  0
output                 2409  1 video
e1000e               267701  0
ptp                    9614  1 e1000e
pps_core              11458  1 ptp
ahci                  42247  8
xhci_hcd             148886  0
dm_mirror             14384  0
dm_region_hash        12085  1 dm_mirror
dm_log                 9930  2 dm_mirror,dm_region_hash
dm_mod                84209  2 dm_mirror,dm_log
be2iscsi              99578  0
bnx2i                 48096  0
cnic                  57079  2 bnx2fc,bnx2i
uio                   10462  1 cnic
ipv6                 317829  56 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6,cnic
cxgb4i                28361  0
cxgb4                104882  1 cxgb4i
cxgb3i                24491  0
libcxgbi              52202  2 cxgb4i,cxgb3i
cxgb3                152922  1 cxgb3i
mdio                   4769  1 cxgb3
libiscsi_tcp          17020  3 cxgb4i,cxgb3i,libcxgbi
qla4xxx              257114  0
iscsi_boot_sysfs       9458  2 be2iscsi,qla4xxx
libiscsi              49836  7 be2iscsi,bnx2i,cxgb4i,cxgb3i,libcxgbi,libiscsi_tcp,qla4xxx
scsi_transport_iscsi  84241  5 be2iscsi,bnx2i,libcxgbi,qla4xxx,libiscsi


[root@hume ~]# rpm -qi edac-utils
Name: edac-utils   Relocations: (not relocatable)
Version : 0.9   Vendor: CentOS
Release : 14.el6Build Date: Wed 20 Jul 2011 
11:13:34 AM UTC
Install Date: Wed 25 Jun 2014 09:27:40 PM UTC  Build Host: 
c6b6.bsys.dev.centos.org
Group   : System Environment/Base   Source RPM: 
edac-utils-0.9-14.el6.src.r

Re: [CentOS] How to enable EDAC kernel module for checking ECC memory?

2014-06-26 Thread Lists
See below

On 06/26/2014 08:11 AM, Leon Fauster wrote:
> On 26.06.2014 at 00:08, Lists wrote:
>> In order to support ZFS, we upgraded a backups server with a new, ECC
>> motherboard. We're running CentOS 6 with ZFS on Linux, recently patched.
>> Now, I want to enable EDAC so we can check for memory errors (and maybe
>> PCI errors as well) but so far, repeatedly pounding on the Google hasn't
>> yielded exactly what I need to do to enable EDAC.
>>
>> One howto was covering PCI and edac, but "modprobe edac_mc" didn't work.
>> Here's some information below, How do I get edac up and running? Many
>> howtos cover how to use edac-ctl and edac-util, but none seem to cover
>> how to determine what module to load into the kernel.
>>
>> [root@hume ~]# modprobe edac_mc
>> FATAL: Module edac_mc not found.
> it seems to be compiled into the kernel.
>
> $ grep -i -E 'mce|edac' /boot/config-2.6.32-431.11.2.el6.x86_64
>
>
>> [root@hume ~]# lsmod | grep edac
>> [root@hume ~]# cat /proc/version
>> Linux version 2.6.32-431.11.2.el6.x86_64
>
> check the available modules
>
> $ find /lib/modules/2.6.32-431.11.2.el6.x86_64/ | grep -i -E 'edac'

[root@hume ~]# find /lib/modules/2.6.32-431.11.2.el6.x86_64/ | grep -i 
-E 'edac'
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i5400_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i5000_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i82975x_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i7core_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/amd64_edac_mod.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i3000_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i7300_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/x38_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i3200_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i5100_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/edac_core.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/e752x_edac.ko
/lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/sb_edac.ko

Question is: which one do I use?

-Ben


Re: [CentOS] How to enable EDAC kernel module for checking ECC memory?

2014-07-01 Thread Lists
On 06/27/2014 10:27 PM, lee wrote:
> Lists  writes:
>
>>> $ find /lib/modules/2.6.32-431.11.2.el6.x86_64/ | grep -i -E 'edac'
>> [root@hume ~]# find /lib/modules/2.6.32-431.11.2.el6.x86_64/ | grep -i
>> -E 'edac'
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i5400_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i5000_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i82975x_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i7core_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/amd64_edac_mod.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i3000_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i7300_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/x38_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i3200_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/i5100_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/edac_core.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/e752x_edac.ko
>> /lib/modules/2.6.32-431.11.2.el6.x86_64/kernel/drivers/edac/sb_edac.ko
>>
>> Question is: which one do I use?
> Try edac_core, it might load the module which is needed for the hardware
> you have.
>

Tried that, no luck so far...

[root@hume bin]# modprobe edac_core
[root@hume bin]# lsmod | grep -i edac
edac_core  46581  0
[root@hume bin]# edac-ctl --status
edac-ctl: drivers not loaded.


[root@hume bin]# cat /proc/cpuinfo
processor: 0
vendor_id: GenuineIntel
cpu family: 6
model: 58
model name: Intel(R) Pentium(R) CPU G2020 @ 2.90GHz
stepping: 9
cpu MHz: 2900.103
cache size: 3072 KB
physical id: 0
siblings: 2
core id: 0
cpu cores: 2
apicid: 0
initial apicid: 0
fpu: yes
fpu_exception: yes
cpuid level: 13
wp: yes
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall 
nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology 
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 
ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer xsave 
lahf_lm arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept 
vpid fsgsbase smep erms
bogomips: 5800.20
clflush size: 64
cache_alignment: 64
address sizes: 36 bits physical, 48 bits virtual
power management:
-- SNIP --


What might I search for to find the right driver?


Re: [CentOS] How to enable EDAC kernel module for checking ECC memory?

2014-07-01 Thread Lists
On 07/01/2014 03:42 PM, Alexander Dalloz wrote:
> Try the i7core_edac module. If that does not fit there is no EDAC
> support for your Ivy Bridge generation CPU with the build-in memory
> controller.
>

Thank you very much. Now the module is loaded but I still don't see any 
devices.

[root@hume bin]# modprobe i7core_edac
[root@hume bin]# edac-ctl --status
edac-ctl: drivers are loaded.
[root@hume bin]# ls -s /sys/devices/system/edac/mc
total 0

Also, where would I go to find out what driver I should be using for 
other systems? The best introduction I've found so far doesn't seem to 
cover setup:
 http://www.admin-magazine.com/Articles/Monitoring-Memory-Errors

And the best documentation I've found so far doesn't mention the driver 
I just installed on its "supported hardware" chart.
 http://buttersideup.com/edacwiki/Main_Page

Am I being optimistic to think that I should be generally able to 
identify and/or log ECC error correction events with EL6?

Thanks,

Ben


Re: [CentOS] How to enable EDAC kernel module for checking ECC memory?

2014-07-02 Thread Lists
On 07/01/2014 06:41 PM, Lists wrote:
> Am I being optimistic to think that I should be generally able to
> identify and/or log ECC error correction events with EL6?

I've found the answer to my question; replying for future reference. 
EDAC really only applies to older systems. Use mcelog for newer (e.g. 64 
bit) systems where the CPU has a built-in memory controller.
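
A minimal mcelog sketch for EL6, assuming the stock package; corrected 
ECC events should end up in /var/log/mcelog:

# yum install mcelog
# mcelog --daemon                      # or rely on the package's hourly cron job
# grep -i corrected /var/log/mcelog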

-Ben


[CentOS] block level changes at the file system level?

2014-07-02 Thread Lists
I'm trying to streamline a backup system using ZFS. In our situation, 
we're writing pg_dump files repeatedly, each file being highly similar 
to the previous file. Is there a file system (e.g. ext4? xfs?) that, when 
re-writing a similar file, will write only the changed blocks and not 
rewrite the entire file to a new set of blocks?

Assume that we're writing a 500 MB file with only 100 KB of changes. 
Other than a utility like diff, is there a file system that would only 
write 100KB and not 500 MB of data? In concept, this would work 
similarly to using the 'diff' utility...

-Ben


Re: [CentOS] block level changes at the file system level?

2014-07-03 Thread Lists
On 07/02/2014 12:57 PM, m.r...@5-cent.us wrote:
> I think the buzzword you want is dedup.
dedup works at the file level. Here we're talking about files that are 
highly similar but not identical. I don't want to rewrite an entire file 
that's 99% identical to the new file form, I just want to write a small 
set of changes. I'd use ZFS to keep track of which blocks change over time.

I've been asking around, and it seems this capability doesn't exist 
*anywhere*.

-Ben


Re: [CentOS] block level changes at the file system level?

2014-07-03 Thread Lists
On 07/03/2014 12:19 PM, John R Pierce wrote:
> you do realize, adding/removing or even changing the length of a single
> line in a block of that pg_dump file will change every block after it as
> the data will be offset ?

Yes. And I guess this is probably where the conversation should end. I'm 
used to the capabilities of Mercurial DVCS as well as ZFS snapshots, and 
was thinking/hoping that this type of capability might exist in a file 
system. Perhaps it just doesn't belong there.

On 07/03/2014 12:23 PM, Les Mikesell wrote:
> But, since this is about postgresql, the right way is probably just to
> set up replication and let it send the changes itself instead of doing
> frequent dumps.

Whatever we do, we need the ability to create a point-in-time history. 
We commonly use our archival dumps for audit, testing, and debugging 
purposes. I don't think PG + WAL provides this type of capability. So at 
the moment we're down to:

A) run PG on a ZFS partition and snapshot ZFS (a sketch follows below).
B) Keep making dumps (as now) and use lots of disk space.
C) Cook something new and magical using diff, rdiff-backup, or related 
tools.
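
For option A the moving parts are small; a sketch, with an illustrative 
dataset name, where each snapshot costs only the blocks that changed 
since the previous one:

# zfs snapshot tank/pgdata@$(date +%Y%m%d-%H%M)
# zfs list -t snapshot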

-Ben


[CentOS] Convert "bare partition" to RAID1 / mdadm?

2014-07-24 Thread Lists
I have a large disk full of data that I'd like to upgrade to SW RAID 1 
with a minimum of downtime. Taking it offline for a day or more to rsync 
all the files over is a non-starter. Since I've mounted SW RAID1 drives 
directly with "mount -t ext3 /dev/sdX" it would seem possible to flip 
the process around, perhaps change the partition type with fdisk or 
parted, and remount as SW RAID1?

I'm not trying to move over the O/S, just a data partition with LOTS of 
data. So far, Google pounding has resulted in howtos like this one 
that's otherwise quite useful, but has a big "copy all your data over" 
step I'd like to skip:

http://sysadmin.compxtreme.ro/how-to-migrate-a-single-disk-linux-system-to-software-raid1/

But it would seem to me that a sequence roughly like this should work 
without having to recopy all the files.

1) umount /var/data;
2) parted /dev/sdX
 (change type to fd - Linux RAID auto)
3) Set some volume parameters so it's seen as a RAID1 partition 
"Degraded". (parted?)
4) ??? Insert mdadm magic here ???
5) Profit! `mount /dev/md1 /var/data`

Wondering if anybody has done anything like this before...

-Ben


Re: [CentOS] smartctl question

2014-07-24 Thread Lists
On 07/24/2014 07:00 PM, SilverTip257 wrote:
> On Thu, Jul 24, 2014 at 2:29 PM,  wrote:
>
>> SilverTip257 wrote:
>>> On Thu, Jul 24, 2014 at 10:52 AM,  wrote:
 m.r...@5-cent.us wrote:
> I had a drive failing on a server. I've replaced it. The old drive was
> /dev/sdb; the new one's /dev/sdd. The manpage and googling are failing
> me- anyone know how to tell smartd to stop trying to read /dev/sdb
> (which I've pulled, and is sitting here on my desk)?  I don't need the
> garbage in the logs, and no, I can't just reboot the server, since
>> it's
> a home directory server
>
 Following myself up, I just tried a simple service smartd restart, and
 I'll see if that worked. But if anyone's got other ideas - I do *not*
 want to have this last after a reboot, though, because I assume sdx
 will change.
>>> Based on your comments about drive naming (sdX) ... you're using software
>>> RAID.
>> Nope. No RAID involved, just /dev/sda, /dev/sdb, etc.
>>
> Ah, thanks for clarifying.
> In hindsight there was a pretty good chance I was wrong about you possibly
> using soft RAID.  Anyways...
>
> BUT removing drives from the SCSI subsystem could still have been helpful
> to you (since the drives got different block device names).

Just a guess... have you looked at /etc/smartd.conf?
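
For reference, a smartd.conf sketch; DEVICESCAN and explicit per-device 
lines are both valid directives, and listing devices explicitly stops 
smartd from probing a pulled drive:

DEVICESCAN -a
# ...or instead, enumerate them:
/dev/sda -a
/dev/sdd -a

A "service smartd reload" should make smartd re-read the file.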

-Ben


Re: [CentOS] Convert "bare partition" to RAID1 / mdadm?

2014-07-24 Thread Lists
On 07/24/2014 06:07 PM, Les Mikesell wrote:
> On Thu, Jul 24, 2014 at 7:11 PM, Lists  wrote:
>> I have a large disk full of data that I'd like to upgrade to SW RAID 1
>> with a minimum of downtime. Taking it offline for a day or more to rsync
>> all the files over is a non-starter. Since I've mounted SW RAID1 drives
>> directly with "mount -t ext3 /dev/sdX" it would seem possible to flip
>> the process around, perhaps change the partition type with fdisk or
>> parted, and remount as SW RAID1?
>>
>> I'm not trying to move over the O/S, just a data paritition with LOTS of
>> data. So far, Google pounding has resulted in howtos like this one
>> that's otherwise quite useful, but has a big "copy all your data over"
>> step I'd like to skip:
>>
>> http://sysadmin.compxtreme.ro/how-to-migrate-a-single-disk-linux-system-to-software-raid1/
>>
>> But it would seem to me that a sequence roughly like this should work
>> without having to recopy all the files.
>>
>> 1) umount /var/data;
>> 2) parted /dev/sdX
>>   (change type to fd - Linux RAID auto)
>> 3) Set some volume parameters so it's seen as a RAID1 partition
>> "Degraded". (parted?)
>> 4) ??? Insert mdadm magic here ???
>> 5) Profit! `mount /dev/md1 /var/data`
>>
>> Wondering if anybody has done anything like this before...
>>
> Even if I found the magic place to change to make the drive think it
> was a raid member, I don't think I would trust getting it right with
> my only copy of the data.  Note that you don't really have to be
> offline for the full duration of an  rysnc to copy it.  You can add
> another drive as a raid with a 'missing' member, mount it somewhere
> and rsync with the system live to get most of the data over.  Then you
> can shut down all the applications that might be changing data for
> another rsync pass to pick up any changes - and that one should be
> fast.   Then move the raid to the real mount point and either (safer)
> swap a new disk, keeping the old one as a backup or (more dangerous)
> change the partition type on the original and add it into the raid set
> and let the data sync up.
I would, of course, have backups. And the machine being upgraded is one 
of several redundant file stores, so the risk is near zero of actual 
data loss even if it should not work. :)

And I've done what you suggest: rsync "online", take apps offline, 
rsync, swap, and bring it all back up. But the data set in question is 
about 100 million small files (PDFs) and even an rsync -van takes a day 
or more, downtime I'd like to avoid. A sibling data store is running 
LVM2 so the upgrade without downtime is underway, another sibling is 
using ZFS which breezed right through the upgrade so fast I wasn't sure 
it had even worked!

So... is it possible to convert an EXT4 partition to a RAID1 partition 
without having to copy the files over?
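
For what it's worth, one recipe that floats around for exactly this - a 
sketch only, untested here, and worth a verified backup first: mdadm's 
1.0 metadata puts its superblock at the END of the partition, so an 
existing filesystem can be wrapped into a degraded RAID1 in place, 
provided the filesystem is first shrunk slightly to leave room:

# umount /var/data
# e2fsck -f /dev/sdX1
# resize2fs /dev/sdX1 <size slightly below the partition size>
# mdadm --create /dev/md1 --level=1 --metadata=1.0 --raid-devices=2 /dev/sdX1 missing
# mount /dev/md1 /var/data
# mdadm --manage /dev/md1 --add /dev/sdY1   # later, to sync in the mirror

The size placeholder is deliberate; it has to be computed from the 
partition size minus the superblock's reserved space.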


[CentOS] Testing "dark" SSL sites

2014-10-21 Thread lists
So, with all the hubbub around POODLE and SSL, we're preparing a new load 
balancer using HAProxy.

We have a set of unit tests written using PHPUnit, and we're having trouble 
validating certificates. How do you test/validate an SSL cert for a prototype 
"foo.com" server if it's not actually active at the IP address that matches 
DNS for foo.com? 

For non-ssl sites, I can specify the url like http://1.2.3.4/path and pass an 
explicit "host: foo.com" http header but that fails for SSL certificate 
validation. 
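
For example, something like

curl -H "Host: foo.com" http://1.2.3.4/path

works for plain HTTP (assuming curl; wget would be similar), but the same 
trick over https fails the certificate name check.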

You can set a hosts file entry, but that's also rather painful. Is there a 
better option? 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Testing "dark" SSL sites

2014-10-21 Thread lists
On Tuesday, October 21, 2014 06:07:29 PM Stephen Harris wrote:
> On Tue, Oct 21, 2014 at 02:57:42PM -0700, li...@benjamindsmith.com wrote:
> > So we have a set of unit tests written using PHPUnit, having trouble
> > validating certificates. How do you test/validate an SSL cert for a
> > prototype "foo.com" server if it's not actually active at the IP address
> > that matches DNS for foo.com?
> 
> openssl s_client -connect ip.ad.dr.ess:443
> then decode the cert
> 
> e.g.
> $ openssl s_client -connect 1.2.3.4:443 < /dev/null >| cert
> 
> Now you can use the "x509" to look at various things
> eg
> $ openssl x509 -in cert -subject -noout
> subject=
> /description=foobar/C=US/CN=ssl.example.com/emailAddress=f...@example.com
> 
> "man x509"

The issue is that I wouldn't consider myself qualified to make sense of this 
output. Curl noticed when an intermediate SSL cert wasn't installed correctly, 
so if possible I'd really like to use a CLI "browser" such as curl or wget. 
I've already confirmed, for example, that using openssl s_client as you mention 
above doesn't actually check the certs, just lists them. 

Thus, the recent issues with Firefox and intermediate certs would be tough to 
look for.
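
What I'd really want is a check like this (a sketch; the CA bundle path is 
the CentOS default, and newer curl builds - 7.21.3+ - also have a --resolve 
option that maps a name to an IP so normal certificate validation applies):

openssl s_client -connect 1.2.3.4:443 -servername foo.com \
    -CAfile /etc/pki/tls/certs/ca-bundle.crt < /dev/null | grep "Verify return code"

A missing intermediate should show up there as a non-zero verify code 
instead of "0 (ok)".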
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Testing "dark" SSL sites

2014-10-21 Thread lists
On Tuesday, October 21, 2014 05:02:53 PM Travis Kendrick wrote:
> On 10/21/2014 04:57 PM, li...@benjamindsmith.com wrote:
> > So, with all the hubbub around POODLE and ssl, we're preparing a new load
> > balancer using HAProxy.
> > 
> > So we have a set of unit tests written using PHPUnit, having trouble
> > validating certificates. How do you test/validate an SSL cert for a
> > prototype "foo.com" server if it's not actually active at the IP address
> > that matches DNS for foo.com?
> > 
> > For non-ssl sites, I can specify the url like http://1.2.3.4/path and pass
> > an explicit "host: foo.com" http header but that fails for SSL
> > certificate validation.
> > 
> > You can also set a hosts file entry, but that's also rather painful. Is
> > there a better option?
> > ___
> > CentOS mailing list
> > CentOS@centos.org
> > http://lists.centos.org/mailman/listinfo/centos
> 
> I just disabled SSLv3 altogether on my server and just use TLS. On my
> site I only use TLS 1.2 and not earlier versions or SSL so I was never
> affected by POODLE.

As far as I can tell, this comment is not related to the question I asked... 
at all. 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] prefork vs worker mpm in apache

2015-11-03 Thread lists
On Tuesday, November 03, 2015 10:17:22 AM Tim Dunphy wrote:
> Hey guys,
> 
> We had to recompile apache 2.4.12 because we needed to disable thread
> safety in php (ZTS).  Because for some reason when compiling php with the
> --disable-maintainer-zts with the worker mpm model and checking the php
> info page, it was saying that thread safety was still enabled.
> 
> So when we recompiled apache to use the prefork mpm instead of
> worker, the php info page was showing that thread safety was disabled.
> 
> But after that change apache processes spiked from around 11 processes per
> machine to well over 250 processes at any given time.
> 
> These are the tuning settings we have in apache:
> 
> StartServers 10
> 
> #MinSpareServers 10
> 
> #MaxSpareServers 25
> 
> ServerLimit 250
> 
> MaxRequestWorkers 250
> 
> MaxConnectionsPerChild 1000
> 
> KeepAlive On
> 
> KeepAliveTimeout 30
> 
> EnableSendfile Off
> 
> 
> So I was just wondering how this change could've cause this problem of
> having the number of apache processes spike. And if there are any other
> changes we can make to apache to bring the process count down?

There isn't near enough info here to respond coherently. A few questions that 
might be useful to ask: 

How many hits/second are you serving? How long does a hit take to serve? Why 
do you need to disable thread safety? What's the system load? What compile 
options did you use? What does iostat report? Does the server work? What does 
top report? What's your memory utilization? What are your php.ini settings? 
Are you using zend optimizer or APC? 

... etc... 

Often, when trying to get details to anticipated questions together for 
posting to a list, I find the answer. 
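
One concrete place to start, assuming the apache processes are named 
httpd: compare the per-process memory footprint to available RAM, since 
with prefork every concurrent request holds a whole process - 250 workers 
at, say, 50 MB each need 12+ GB.

ps -C httpd -o rss= | awk '{s+=$1; n++} END {printf "%d procs, %.0f MB avg\n", n, s/n/1024}'
free -m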

Good luck! 
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Howto: Extremely tight security rsync shell for backups

2013-09-23 Thread Lists
We've been using rsync since forever to back up all our servers and it's 
worked without a problem. But in a recent security review, we noted that 
our specific rsync backup host is using root keys to access the server, 
meaning that if the keys on the backup server were leaked/compromised in 
any fashion, that would provide r00t access to the servers being backed up.

Since this doesn't seem to be readily documented, I thought I'd provide 
it to the community.

After some playing around, we've found it is possible to set up 
rsync/ssh so that the connecting server can ONLY run rsync with a 
predetermined set of options.

// OBJECTIVES //
1) Make it so the backup server's SSH key can only be used:
     A) for a read-only rsync (no write capability to the production 
webserver);
     B) against a predetermined set of directories, with a 
predetermined set of options.
2) Limit the power of the webserver/backup account so that it cannot do 
anything other than a read-only rsync.
3) Limit access to the backup account to a specific IP address, EG: only 
the backup server can access the account.


# ON WEBSERVER (note: normal account!)
-
[root] # adduser backupaccount;


# ON WEBSERVER
in /home/backupaccount/.ssh/authorized_keys ("backupserver" must be the 
remote, publicly visible IP address of the backup server)
-
from="backupserver" ssh-dss B3NzaC1kc3MAAACBAKLv/SNIP/ 
root@backupserver
-


# ON BACKUPSERVER
Verify that root@backupserver can log in normally via ssh without password:
-
[root@backupserver] # ssh backupaccount@webserver


# ON WEBSERVER
In /etc/passwd. (change the shell at the end of the line)
-
backupaccount:x:514:514::/home/backupaccount:/usr/local/bin/backupaccount.sh 



# ON WEBSERVER
in /etc/sudoers
-
backupaccount ALL=NOPASSWD: /usr/bin/rsync
Defaults:backupaccount !requiretty
-


# ON WEBSERVER
And in /usr/local/bin/backupaccount.sh
-
#! /bin/sh
# look in this file to see what options were passed, if the rsync 
doesn't work.
echo $* > /home/backupaccount/options.passed.sh
# rsync -va backupaccount@WEBSERVER:/home/ /mnt/backup/home/
/usr/bin/sudo /usr/bin/rsync --server --sender -vlogDtpre.iLs . /home/
-


Note that in this solution, it doesn't matter what the backup server 
specifies as the backup location: it will ALWAYS get /home/. It may well 
break if you change the options at all, EG: using "z" for compression. 
If this happens, look in options.passed.sh to see what rsync on the 
backup server tried to pass and, if it makes sense, modify the 
backupaccount.sh script with these options, or modify backupaccount.sh 
so that it can take any of a number of source directories after 
appropriate validation.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Howto: Extremely tight security rsync shell for backups

2013-09-23 Thread Lists
On 09/23/2013 01:02 PM, m.r...@5-cent.us wrote:
> It does have to
> run as root, though, on both, to preserve ownership of home and project
> directories, etc.

Depending on how you interpret this statement, my documented process may 
present a (mild) improvement.

It has the backup account on the public server being a non-privileged 
account only able to run a (tightly controlled) shell script which 
contains the sudo call. In this way, even if the backup account is 
compromised, it can't be used to "take down" the web server, only 
provide access to the data. Technically, the rsync command *is* being 
run as (sudo) root, but nothing else is, and the backup account has no 
ability to change the parameters of the rsync command.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Howto: Extremely tight security rsync shell for backups

2013-09-23 Thread Lists
On 09/23/2013 01:50 PM, Les Mikesell wrote:
> Is there something that convinces you that sudo is better at handling
> the command restriction than sshd would be?

In the context of a production server, the idea is to remove any ability 
from another host (EG: backup server) to run local arbitrary code or 
change local files. (read-only)

There is one (small) benefit to not using SSHD options: Even if the 
account is somehow accessed locally, (eg via password prompt) it still 
cannot be used for anything but a read-only rsync command. And by using 
a (read only) script to replace the normal shell and sudo, I'm able to 
not only limit the command being run (in this case rsync) but also limit 
all options passed to it.

You can disable the password on the backup account to achieve a similar 
effect using an SSHD option. If there's a better/simpler way to do this 
via SSHD option I'd love to hear about it!

Thanks,

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Howto: Extremely tight security rsync shell for backups

2013-09-23 Thread Lists
On 09/23/2013 02:44 PM, m.r...@5-cent.us wrote:
> Lists wrote:
>> On 09/23/2013 01:50 PM, Les Mikesell wrote:
>>> Is there something that convinces you that sudo is better at handling
>>> the command restriction than sshd would be?
>> In the context of a production server, the idea is to remove any ability
>> from another host (EG: backup server) to run local arbitrary code or
>> change local files. (read-only)
> 
>> You can disable the password on the backup account to achieve a similar
>> effect using an SSHD option. If there's a better/simpler way to do this
>> via SSHD option I'd love to hear about it!
>>
> Sure. You disable password authentication, and allow keys only, in
> /etc/ssh/sshd_config.
>

This prohibits SSH logins via password, but does not strictly enforce 
which commands (and which options) a specific key is allowed to run, 
which is what I was looking for.

Having done a bit more research, it does appear that you could use the 
"ForceCommand" option and disable passwords altogether for a user to 
achieve a similar effect with SSHD.
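
From the sshd_config man page, my understanding is that it would look 
something like this (a sketch, untested on my end):

Match User backupaccount
    ForceCommand /usr/local/bin/backupaccount.sh
    PasswordAuthentication no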

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Old hardware, newer kernels

2013-09-24 Thread Lists
On 09/19/2013 08:26 PM, Ashley M. Kirchner wrote:
> It works for what it does.  And I'm completely prepared to freeze it as far
> as software goes.  I was just curious what may have happened after that
> particular version of the kernel, and whether there's something else I can
> do, or call it done, slap a red sticker on it that read, "DON'T EVER
> UPGRADE ANYMORE" and call it done.
>

For what it's worth, we have a single-core P3 running the latest 
Centos6/32 without issue. I realize that from a processing power stand 
point my SIP phone is probably faster, but like Ashley, it does a job 
and very well at that. (network monitor)

[root@edison ~]# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 7
model name  : Pentium III (Katmai)
stepping: 3
cpu MHz : 498.456
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 mtrr pge mca cmov 
pse36 mmx fxsr sse up
bogomips: 996.91
clflush size: 32
cache_alignment : 32
address sizes   : 36 bits physical, 32 bits virtual
power management:

[root@edison ~]# cat /proc/version
Linux version 2.6.32-358.18.1.el6.i686 
(mockbu...@c6b10.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red 
Hat 4.4.7-3) (GCC) ) #1 SMP Wed Aug 28 14:27:42 UTC 2013

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Port knocking and DNAT rules

2013-10-10 Thread Lists
So I found an excellent port knocking tutorial using ONLY iptables rules 
that looks to be among the best I've ever seen. (warning: techno music and 
a tough-to-read screen, but you don't need to type it in because I post a 
link to the script below)

http://www.youtube.com/watch?v=0zFQocf7C_0

It works fabulously for simply opening a port to a locally managed 
service, but I can't seem to get it to work for a PREROUTING/DNAT rule. 
I've posted the shell script I'm trying to get to work; it should be 
self-documented. 

http://chico.benjamindsmith.com/iptables.txt

I've confirmed that the logs correctly show port knocking 2, 3, and 4 in 
/var/log/messages so everything seems to be working golden all the way 
up to the last line. There are no errors reported when I run this 
script. The result that I get is that it acts as though packets are 
being dropped for 15 seconds, then I get connection refused.
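
For reference, the recent-module pattern from the video boils down to 
something like this (ports here are hypothetical; my script does the same 
but ends in a DNAT in the nat PREROUTING chain instead of an ACCEPT):

iptables -A INPUT -p tcp --dport 7000 -m recent --name KNOCK --set -j DROP
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 15 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP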

What am I doing wrong?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SSH login from user with empty password

2013-10-10 Thread Lists
On 10/10/2013 03:12 PM, David C. Miller wrote:
> SSH by default will use a key pair if found but then drops back to 
> login password. It will also fall back to password if the keypair has 
> a passphrase and you just hit retrun without type it in. SSH won't 
> allow you to connect because the password in the shadow file is blank. 
> Basically if you don't have a password it should not allow you to 
> login regardless. From a security standpoint it makes sense to never 
> allow blank passwords. Just give the account a long 25 character 
> random password and then setup SSH key pairs.

From what I read, it sounds like you are saying that you can't log in 
with keypairs unless a password has been set. If so, this appears to be 
incorrect, at least as of CentOS 6. To test this, I did the following:

[root@norman ~]# adduser testnopw
[root@norman ~]# su - testnopw
[testnopw@norman ~]$ mkdir .ssh && chmod 700 .ssh;
[testnopw@norman ~]$ nano .ssh/authorized_keys
< - pasted id_dsa.pub from another account ->
[testnopw@norman ~]$ chmod 600 .ssh/authorized_keys


Now, as another account on the same server:

[bens@norman] ssh testnopw@localhost
Enter passphrase for key '/home/bens/.ssh/id_dsa':
[testnopw@norman ~]$

Never, in the above script, was a password set.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] setuid or other ideas

2013-10-14 Thread Lists
On 10/14/2013 02:31 PM, Gregory P. Ennis wrote:
> Everyone,
>
> I am working on a Centos 5.9 system.  I have an need to be able to
> activate a piece of software from /etc/smrsh that is activated when
> sendmail delivers the e-mail to this piece of software.  I would like
> this piece of software to take on the user and group identities that are
> different than 'mail' which is what happens now.  I want to use a user
> and group that is not root), so that the piece of software will be able
> to write (concatenate) to a file.
>
> I have never used setuid, but it appears that this will only allow a
> piece of software to be set to root.  I really do not want to give that
> kind of privilege to this piece of software.
>
> Any ideas?

I've done lots of operations from /etc/smrsh under sendmail. I can't say 
I've ever used setuid for this type of work; it may well suffice. Now in 
my case with sendmail, the scripts run as the user receiving the email 
locally, so I don't need to do any of the below. I simply define the 
account that I want to run the script as the recipient of the message 
and it's all done.

I'd suggest to run sudo and make an entry in /etc/sudoers. You want to 
be paranoid around any publicly visible service like email but an entry 
like this might work in /etc/sudoers:

mail    ALL=(user2) NOPASSWD: /usr/local/script.to.run.sh
Defaults:mail !requiretty

Again, I'm not sure why you are seeing this run as the "mail" user 
unless that is the name of the local account, sendmail runs these kinds 
of scripts as the user receiving the messages. In which case, if my user 
was "taxinfo" it would look like

taxinfo    ALL=(user2) NOPASSWD: /usr/local/script.to.run.sh
Defaults:taxinfo !requiretty

Note that the last line (Defaults...)  is probably needed because 
there's not an actual terminal involved when processing a background 
script. Try without and see if it works. Then, in /etc/smrsh/received.sh 
you have

#! /bin/sh
/usr/bin/sudo -u taxinfo /usr/local/script.to.run.sh;


And in your .forward file: (don't forget to chmod 600 this file)
| /etc/smrsh/received.sh

Good luck!

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] ZFS on Linux in production?

2013-10-24 Thread Lists
We are a CentOS shop, and have the lucky, fortunate problem of having 
ever-increasing amounts of data to manage. EXT3/4 becomes tough to 
manage when you start climbing, especially when you have to upgrade, so 
we're contemplating switching to ZFS.

As of last spring, it appears that ZFS On Linux http://zfsonlinux.org/ 
calls itself production ready despite a version number of 0.6.2, and 
being acknowledged as unstable on 32 bit systems.

However, given the need to do backups, zfs send sounds like a godsend 
over rsync which is running into scaling problems of its own. (EG: 
Nightly backups are being threatened by the possibility of taking over 
24 hours per backup)

Was wondering if anybody here could weigh in with real-life experience? 
Performance/scalability?

-Ben

PS: I joined their mailing list recently, will be watching there as 
well. We will, of course, be testing for a while before "making the 
switch".

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux in production?

2013-10-24 Thread Lists
On 10/24/2013 01:59 PM, John R Pierce wrote:


> 1) you need a LOT of ram for decent performance on large zpools. 1GB ram
> above your basic system/application requirements per terabyte of zpool
> is not unreasonable.

That seems quite reasonable to me. Our existing equipment has far more 
than enough RAM to make this a comfortable experience.

> 2) don't go overboard with snapshots.   a few 100 are probably OK, but
> 1000s (*) will really drag down the performance of operations that
> enumerate file systems.

Our intended use for snapshots is to enable consistent backup points, 
something we're simulating now with rsync and its hard-link option. We 
haven't figured out the best way to do this, but in our backup clusters 
we have rarely more than 100 save points at any one time.
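
Roughly what that simulation looks like, with hypothetical paths - 
unchanged files hard-link back to the previous night's save point:

rsync -a --link-dest=/backups/2013-10-23 server:/data/ /backups/2013-10-24/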

> 3) NEVER let a zpool fill up above about 70% full, or the performance
> really goes downhill.

Thanks for the tip!

> (*) ran into a guy who had 100s of zfs 'file systems' (mount points),
> per user home directories, and was doing nightly snapshots going back
> several years, and his zfs commands were taking a long long time to do
> anything, and he couldn't figure out why.  I think he had over 10,000
> filesystems * snapshots.

Wow. Couldn't he have the same results by putting all the home 
directories on a single ZFS partition?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] which kernel do people use?

2013-10-24 Thread Lists
On 10/24/2013 03:48 PM, Karanbir Singh wrote:
> On 10/23/2013 10:30 AM, Morgan Cox wrote:
>> If you want SSD + MDRAID you need to use a 3.8+ kernel to have TRIM.
>>
>> The speed difference between the stock 2.6.32 -> 3.10 kernel with SSD +
>> MDRAID is insane.
> has someone quantified what this 'insane' amounts to ?

Going from HDD to SSD's gave us better than a 95% reduction in query 
times for complex queries using PostgreSQL on otherwise identical 
hardware. I wouldn't have believed it had I not seen it directly, for 
myself.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux in production?

2013-10-24 Thread Lists
On 10/24/2013 02:47 PM, SilverTip257 wrote:
> You didn't mention XFS.
> Just curious if you considered it or not.

Most definitely. There are a few features that I'm looking for:

1) MOST IMPORTANT: STABLE!

2) The ability to make the partition bigger by adding drives with very 
minimal/no downtime.

3) The ability to remove an older, (smaller) drive or drives in order to 
replace with larger capacity drives without downtime or having to copy 
over all the files manually.

4) The ability to create snapshots with no downtime.

5) The ability to synchronize snapshots quickly and without having to 
scan every single file. (backups)

6) Reasonable failure mode. Things *do* go south sometimes. Simple is 
better, especially when it's simpler for the (typically highly stressed) 
administrator.

7) Big. Basically all filesystems in question can handle our size 
requirements. We might hit a 100 TB partition in the next 5 years.

I think ZFS and BTRFS are the only candidates that claim to do all the 
above. Btrfs seems to have been "stable in a year or so" for as long as 
I could keep a straight face around the word "Gigabyte", so it's a 
non-starter at this point.

LVM2/Ext4 can do much of the above. However, horror stories abound, 
particularly around very large volumes. Also, LVM2 can be terrible in 
failure situations.

XFS does snapshots, but don't you have to freeze the volume first? 
Xfsrestore looks interesting for backups, though I don't know if there's 
a consistent "freeze point". (what about ongoing writes?) Not sure about 
removing HDDs in a volume with XFS.

Not as sure about ZFS' stability on Linux (those who run direct Unix 
derivatives seem to rave about it) and failure modes.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] which kernel do people use?

2013-10-24 Thread Lists

> Hi all,
>
> I'm doing a very informal and unscientific poll: which kernel do you
> use on your CentOS machines?
> 

We're all stock, all the way. Figure 30 servers configured like this, 
including dev/test and embedded servers. We'll soon have a true 
"Disaster Recovery" setup with another 8 servers. Workstations run 
Fedora or Ubuntu.



[bens@hal ~]$ rpm -qi kernel-2.6.32-358.18.1.el6.x86_64
Name: kernel   Relocations: (not relocatable)
Version : 2.6.32Vendor: CentOS
Release : 358.18.1.el6  Build Date: Wed 28 Aug 2013 
06:28:07 PM UTC
Install Date: Sat 07 Sep 2013 12:38:03 AM UTC  Build Host: 
c6b10.bsys.dev.centos.org
Group   : System Environment/Kernel Source RPM: 
kernel-2.6.32-358.18.1.el6.src.rpm
Size: 121423079License: GPLv2
Signature   : RSA/SHA1, Wed 28 Aug 2013 07:02:55 PM UTC, Key ID 
0946fca2c105b9de
Packager: CentOS BuildSystem 
URL : http://www.kernel.org/
Summary : The Linux kernel
Description :
The kernel package contains the Linux kernel (vmlinuz), the core of any
Linux operating system.  The kernel handles the basic functions
of the operating system: memory allocation, process allocation, device
input and output, etc.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux in production?

2013-10-24 Thread Lists
On 10/24/2013 05:29 PM, Warren Young wrote:
> On 10/24/2013 17:12, Lists wrote:
>> 2) The ability to make the partition  bigger by adding drives with very
>> minimal/no downtime.
> Be careful: you may have been reading some ZFS hype that turns out not
> as rosy in reality.
>
> Ideally, ZFS would work like a Drobo with an infinite number of drive
> bays.  Need to add 1 TB of disk space or so?  Just whack another 1 TB
> disk into the pool, no problem, right?
>
> Doesn't work like that.
>
> You can add another disk to an existing pool, but it doesn't instantly
> make the pool bigger.  You can make it a hot spare, but you can't tell
> ZFS to expand the pool over the new drive.
>
> "But," you say, "didn't I read that"   Yes, you did.  ZFS *can* do
> what you want, just not in the way you were probably expecting.
>
> The least complicated *safe* way to add 1 TB to a pool is add *two* 1 TB
> disks to the system, create a ZFS mirror out of them, and add *that*
> vdev to the pool.  That gets you 1 TB of redundant space, which is what
> you actually wanted.  Just realize, you now have two separate vdevs
> here, both providing storage space to a single pool.
>
> You could instead turn that new single disk into a non-redundant
> separate vdev and add that to the pool, but then that one disk can take
> down the entire pool if it dies.

We have redundancy at the server/host level, so even if we have a 
fileserver go completely offline, our application retains availability. 
We have an API in our application stack that negotiates with the 
(typically 2 or 3) file stores.

> Another problem is that you have now created a system where ZFS has to
> guess which vdev to put a given block of data on.  Your 2-disk mirror of
> newer disks probably runs faster than the old 3+ disk raidz vdev, but
> ZFS isn't going to figure that out on its own.  There are ways to
> "encourage" ZFS to use one vdev over another.  There's even a special
> case mode where you can tell it about an SSD you've added to act purely
> as an intermediary cache, between the spinning disks and the RAM caches.
Performance isn't so much an issue - we'd partition our cluster and 
throw a few more boxes into place if it became a bottleneck.

> The more expensive way to go -- which is simpler in the end -- is to
> replace each individual disk in the existing pool with a larger one,
> letting ZFS resilver each new disk, one at a time.  Once all disks have
> been replaced, *then* you can grow that whole vdev, and thus the pool.
Not sure enough of the vernacular, but let's say you have 4 drives in a 
RAID 1 configuration: one set of 1 TB drives and another set of 2 TB drives.

A1 <-> A2 = 2x 1TB drives, 1 TB redundant storage.
B1 <-> B2 = 2x 2TB drives, 2 TB redundant storage.

We have 3 TB of available storage. Are you suggesting we add a couple of 
4 TB drives:

A1 <-> A2 = 2x 1TB drives, 1 TB redundant storage.
B1 <-> B2 = 2x 2TB drives, 2 TB redundant storage.
C1 <-> C2 = 2x 4TB drives, 4 TB redundant storage.

Then wait until ZFS moves A1/A2 over to C1/C2 before removing A1/A2? If 
so, that's the capability I'm looking for.

> But, XFS and ext4 can do that, too.  ZFS only wins when you want to add
> space by adding vdevs.

The only way I'm aware of ext4 doing this is with resize2fs, which is 
extending a partition on a block device. The only way to do that with 
multiple disks is to use a virtual block device like LVM/LVM2 which (as 
I've stated before) I'm hesitant to do.

>> 3) The ability to remove an older, (smaller) drive or drives in order to
>> replace with larger capacity drives without downtime or having to copy
>> over all the files manually.
> Some RAID controllers will let you do this.  XFS and ext4 have specific
> support for growing an existing filesystem to fill a larger volume.

LVM2 will let you remove a drive without taking it offline. Can XFS do 
this without some block device virtualization like LVM2? (I didn't think 
so)

>> 6) Reasonable failure mode. Things *do* go south sometimes. Simple is
>> better, especially when it's simpler for the (typically highly stressed)
>> administrator.
> I find it simpler to use ZFS to replace a failed disk than any RAID BIOS
> or RAID management tool I've ever used.  ZFS's command line utilities
> are quite simply slick.  It's an under-hyped feature of the filesystem,
> if anything.
>
> A lot of thought clearly went into the command language, so that once
> you learn a few basics, you can usually guess the right command in any
> given situation.  That sort of good design doesn't happen by itself.
>
> All other disk management tools I've used seem to have jus

Re: [CentOS] which kernel do people use?

2013-10-25 Thread Lists
On 10/25/2013 04:38 AM, Nicolas Thierry-Mieg wrote:
> interesting datapoint for HDD vs SSD, but what about kernel versions? 
> When using SSDs, did you need to use 3.8+ kernels as suggested in the 
> quoted post, or do you use stock? thanks

I've taken some flak for being off-topic regarding my comments on SSDs, 
though it would seem relevant to know that we're using the stock CentOS 
kernel, and that we aren't using TRIM because performance was already so 
much better than spinning disks that we didn't seem to need it. Our use 
case is a worst-case read/write scenario. The SSD partitions we use for 
the DB servers are mirrored software RAID1, which, to my understanding, 
makes the use of TRIM impossible.

Where we've had performance problems we've always found something else 
to be the bottle neck, typically a poorly constructed query. (PostgreSQL 
really, REALLY hates left outer joins)

It's possible that we could eke out more performance with a suggested 
kernel update. If/when we do performance testing on that front, I'll 
make it a point to submit my results.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux in production?

2013-10-25 Thread Lists
On 10/24/2013 11:18 PM, Warren Young wrote:
> - vdev, which is a virtual device, something like a software RAID.  It is one 
> or more disks, configured together, typically with some form of redundancy.
>
> - pool, which is one or more vdevs, which has a capacity equal to all of its 
> vdevs added together.

Thanks for the clarification of terms.

> You would have 3 TB *if* you configured these disks as two separate vdevs.
>
> If you tossed all four disks into a single vdev, you could have only 2 TB 
> because the smallest disk in a vdev limits the total capacity.
>
> (This is yet another way ZFS isn't like a Drobo[*], despite the fact that a 
> lot of people hype it as if it were the same thing.)

Two separate vdevs is pretty much what I was after. Drobo: another 
interesting option :)

>> Are you suggesting we add a couple of
>> 4 TB drives:
>>
>> A1 <-> A2 = 2x 1TB drives, 1 TB redundant storage.
>> B1 <-> B2 = 2x 2TB drives, 2 TB redundant storage.
>> C1 <-> C2 = 2x 4TB drives, 4 TB redundant storage.
>>
>> Then wait until ZFS moves A1/A2 over to C1/C2 before removing A1/A2? If
>> so, that's capability I'm looking for.
> No.  ZFS doesn't let you remove a vdev from a pool once it's been added, 
> without destroying the pool.
>
> The supported method is to add disks C1 and C2 to the *A* vdev, then tell ZFS 
> that C1 replaces A1, and C2 replaces A2.  The filesystem will then proceed to 
> migrate the blocks in that vdev from the A disks to the C disks. (I don't 
> remember if ZFS can actually do both in parallel.)
>
> Hours later, when that replacement operation completes, you can kick disks A1 
> and A2 out of the vdev, then physically remove them from the machine at your 
> leisure.  Finally, you tell ZFS to expand the vdev.
>
> (There's an auto-expand flag you can set, so that last step can happen 
> automatically.)
>
> If you're not seeing the distinction, it is that there never were 3 vdevs at 
> any point during this upgrade.  The two C disks are in the A vdev, which 
> never went away.

I see the distinction about vdevs vs. block devices. Still, the process 
you outline is *exactly* the capability that I'm looking for, despite 
the distinction in semantics.
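
For the archives, my reading of that sequence at the command line - a 
sketch with hypothetical pool/disk names, not something I've run yet:

zpool replace tank sda1 sdc1
zpool replace tank sdb1 sdd1
zpool status tank             # wait for the resilvers to complete
zpool set autoexpand=on tank  # then the vdev grows into the new disks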

> Yes, implicit in my comments was that you were using XFS or ext4 with some 
> sort of RAID (Linux md RAID or hardware) and Linux's LVM2.
>
> You can use XFS and ext4 without RAID and LVM, but if you're going to compare 
> to ZFS, you can't fairly ignore these features just because it makes ZFS look 
> better.

I've had good results with Linux' software RAID+Ext[2-4].  For example, 
I *love* that you can mount a RAID partitioned drive directly in a 
worst-case scenario. LVM2 complicates administration terribly. The 
widely touted, simplified administration of ZFS is quite attractive to me.

I'm just trying to find the best tool for the job. That may well end up 
being Drobo!

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux in production?

2013-10-30 Thread Lists
On 10/25/2013 11:14 AM, Chuck Munro wrote:
> To keep the two servers in sync I use 'lsyncd' which is essentially a
> front-end for rsync that cuts down thrashing and overhead dramatically
> by excluding the full filesystem scan and using inotify to figure out
> what to sync.  This allows almost-real-time syncing of the backup
> machine.  (BTW, you need to crank the resources for inotify way up
> for large filesystems with a couple million files.)

Playing with lsyncd now, thanks for the tip!

One question though: why did you opt to use lsyncd rather than using ZFS 
snapshots/send/receive?

Thanks,

Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Installing CentOS via USB thumbdrive

2013-11-08 Thread Lists
On 11/08/2013 10:04 AM, Joseph Spenner wrote:
> Has anyone successfully installed via USB?
> I remember reading some multi part instructions where the USB drive is 
> formatted with some special tools, often involving Windows, and various files 
> need to be copied to the USB drive.  But I was hoping we were passed that by 
> now.
> But then again, Dell firmware updates still want me to make a DOS bootable 
> floppy.  So, I'm usually not surprised when I hear something like this.   :)

I did an install from USB disk a while back. It mostly worked. The only 
thing I had issues with is that the USB disk occupied a slot and I had 
to tell grub which partition to boot from.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Install to internal USB?

2013-11-08 Thread Lists
Saw a trick today, wondering if anybody else had done/tried this? Assume 
you have a 1U rackmount with 4 front-accessed drive bays, and you want 
all four bays for a 4-disk RAID5 storage.

The idea is to use an internal USB adapter and a couple of bigger USB 
thumb drives to install to, RAID 1 style, freeing up all your external 
drive bays. At first, I didn't think that a thumb drive would hold 
enough for the O/S, but in actual production use for a file server with 
14 TB of redundant storage, the OS actually uses less than 6 GB!

Here's the internal USB adapter specifically mentioned:
http://www.amazon.com/gp/product/B007PODI1W

I'd be concerned about getting a higher quality drive than the $10 
giveaways at Staples; anybody here ever tried this?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Install to internal USB?

2013-11-08 Thread Lists
On 11/08/2013 02:06 PM, John R Pierce wrote:
> On 11/8/2013 12:57 PM, Lists wrote:
>> Saw a trick today, wondering if anybody else had done/tried this? Assume
>> you have a 1U rackmount with 4 front-accessed drive bays, and you want
>> all four bays for a 4-disk RAID5 storage.
>>
>> The idea is to use an internal USB adapter and a couple of bigger USB
>> thumb drives to install to, RAID 1 style, freeing up all your external
>> drive bays. At first, I didn't think that a thumb drive would hold
>> enough for the O/S, but in actual production use for a file server with
>> 14 TB of redundant storage, the OS actually uses less than 6 GB!
> USB thumb drives are really not that suitable for anything doing random
> writes, lots of small files, etc.
>
Agreed! In the case of a file store, there isn't a whole lot going on 
with the O/S drive, just the drives that the O/S is hosting. The 
original post recommended that /var/log be run off the big partition 
being hosted (on spinning disks) to minimize writes to the flash drives.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS 6 and Intel vPro?

2013-11-15 Thread Lists
Needed to set up a cluster where horsepower and cost were paramount, so 
I thought this would be a good opportunity to try out Intel's business 
class "vPro AMT" remote administration technology, and compare it to 
IPMI, which I've used for years on servers. From a feature standpoint, 
it seems quite impressive, going so far as to use standards-based VNC!

Unfortunately, setup is quite a bear: most of a day spent, and while I 
can remote power cycle the machine via a web interface, VNC 
support is still not to be found. I have no intention of buying Windows 
licenses for all these machines just so I can enable AMT. (the VNC 
remote desktop solution)

So far, it would seem that the Intel approach is very Windows-oriented, 
to the point where I've been unable to find any official documentation 
at all as to how to make it work with *nix. After some googling and 
trying to make sense of the "enterprise reference documentation" I came 
across http://blog.yarda.eu/2012/09/intel-amt-quickstart-guide.html 
which apparently was working as recently as October of this year. (see 
the comments)

However, it would seem that Intel has pulled support for some of this, 
as pages like
  http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData
and
  http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_KVMRedirectionSAP
simply resolve to 404, numerous attempts to google 
"IPS_KVMRedirectionSettingData" and such haven't (so far) returned 
anything useful.

From what it seems, Intel wants everybody to be installing PKI AMT 
certificates on vPro motherboards via Windows.

Does anybody have any information that might be useful about how to 
enable unencrypted VNC remote administration for CentOS based servers? I 
have a used KVM switch, but that's always been a sub-optimum solution, 
and it really makes a mess out of your server cages.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6 and Intel vPro?

2013-11-15 Thread Lists
On 11/15/2013 07:30 PM, Wes James wrote:
> What type of admin are you trying to do on your cluster?  First, to get the
> most power out of your cluster, shouldn't you be running nodes in run level
> 3 (non gui).
>
> You can set up ssh keys so you can create some scripts and perform your
> commands  (from your admin machine) on each of your cluster nodes.  (i.e.,
> running a command over ssh without a password being needed).
>
> You may also be interested in esysman that allows for doing many admin
> tasks from a web interface on your admin box:
>
> https://github.com/comptekki/esysman
>
> Just this last week I updated it to work with CentOS (or LSB/RPM based
> systems).  It currently supports MS Windows, OS X, Xubuntu (and probably
> other LSB/ubuntu/debian based systems) and now CentOS.

There's no GUI installed, and I'm quite familiar with SSH, and most 
daily administration is heavily scripted/automated.

The capabilities I'm looking for include the ability to force restart 
the server remotely, even when the O/S is locked up (EG: kernel panic), 
access the BIOS remotely, do an O/S install remotely, etc., all of which 
can be done with IPMI. From what I've read, Intel's vPro allows for all of 
these possibilities, although it does seem to be heavily Windows oriented.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6 and Intel vPro?

2013-11-16 Thread Lists
On 11/16/2013 04:14 AM, Daniel Bird wrote:
> On 16/11/2013 04:26, Lists wrote:
>>   From what I've read, Intel's vPro allows for all of
>> these possibilities, although it does seem to be heavily Windows oriented.
> Does this help?
>
> https://communities.intel.com/community/vproexpert/blog/2011/11/03/intel-setup-and-configuration-service-72-designed-for-linux
>
> https://communities.intel.com/community/vproexpert/blog/2012/01/19/configuring-intel-vpro-with-linux-in-user-control-mode
>
> https://communities.intel.com/community/vproexpert/blog/2012/02/13/configuring-intel-vpro-with-linux-in-admin-control-mode
>
> http://software.intel.com/en-us/articles/download-the-latest-intel-amt-open-source-drivers/
>
>

Thanks, this looks very good! I will try Monday morning and report 
results here.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Postfix relay on Comcast

2013-11-21 Thread Lists
How to get postfix working on CentOS 6 and Comcast. Recently, they've 
changed their policies regarding email relay and require authentication 
even to send email. (they no longer use IP address ranges, presumably in 
an attempt to curb outgoing SPAM)

I didn't see an updated howto anywhere on the Interwebs, so I thought I'd 
point out what I had to do. The part that had me stumped for longer than 
I care to admit was having to install the cyrus-sasl-plain rpm - EL5 
apparently had that installed as part of the cyrus-sasl package.

1) yum install postfix cyrus-sasl-plain;
# note that cyrus-sasl-plain is NOT installed by default but is needed 
by this config.

2) Create file /etc/postfix/passwords. Replace "USERNAME" with your user 
name, and
"password" with your password. Note: your username is typically your 
email address without the domain name.
#
smtp.comcast.net:587 USERNAME:password
smtp.comcast.net usern...@comcast.net:password
#

3) postmap /etc/postfix/passwords;

4) Edit /etc/postfix/main.cf
#
relayhost = [smtp.comcast.net]:587
smtpd_sasl_auth_enable = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/passwords
# note that with no security options on, we're using sasl-plain 
installed above.
smtp_sasl_security_options =
# You might need this, you may not.
#sender_canonical_maps = regexp:/etc/postfix/sender_rewrite
#

5) Create file /etc/postfix/sender_rewrite. Note that not all Comcast 
customers need this; I didn't when I authenticated as above. Obviously, 
replace "USERNAME" with your user name.
#
/^([^@]*)@.*$/  usern...@comcast.net
#

6) service postfix stop; sleep 5; service postfix start
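
7) Optional smoke test. The recipient below is a placeholder (this assumes 
the mailx package provides /bin/mail); watch the log for "status=sent".
#
echo "relay test" | mail -s "relay test" user@example.com
tail -n 20 /var/log/maillog
#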


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Unsupported Hardware that works fine?

2013-11-25 Thread Lists
I recently purchased a set of ASRock Intel i5 MB/CPU combos for a budget 
compute cluster. Every time we load up a system and try to boot with a 
recent EL6/64 ISO, we get a message that reads:

 > This hardware (or a combination thereof) is not supported by CentOS. For
 > more information on supported hardware, please refer to
 > http://www.centos.org/hardware

Not only does the hardware *seem* to work to expectations, but the url 
referenced goes to 404!

Having loaded CentOS6 on many systems without ever seeing this message, 
I have to ask how to determine what might be triggering it and whether 
or not I should be concerned?

Thanks
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Unsupported Hardware that works fine?

2013-11-26 Thread Lists
On 11/25/2013 05:04 PM, John R Pierce wrote:
> On 11/25/2013 4:42 PM, Lists wrote:
>> I recently purchased a set of ASRock Intel i5 MB/CPU combos for a budget
>> compute cluster. Every time we load up a system and try to boot with a
>> recent EL6/64 ISO, we get a message that reads:
>>
>>> This hardware (or a combination thereof) is not supported by CentOS. For
>>> more information on supported hardware, please refer to
>>> http://www.centos.org/hardware
>>
>> Not only does the hardware *seem* to work to expectations, but the url
>> referenced goes to 404!
>>
>> Having loaded CentOS6 on many systems without ever seeing this message,
>> I have to ask how to determine what might be triggering it and whether
>> or not I should be concerned?
> what chipset and which core i5 (there's at least 3, maybe 4 generations
> of 'core i5' processors now)
Chipset is Intel Q87. i5 4670, 3.4 GHz, LGA1150 socket. ASRock Q87 vPro 
MB, 32 GB DDR3/1600 RAM.

> do you still get that error after a `yum update -y`  and a reboot ?

Error only shows during the install, just before loading X11 for the 
setup. Again, I've had these systems torture tested for my needs for 
days without issue.

Just wondering if there's a way to figure out what caused the message, 
and/or if I should expect something to *not* work?

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux testing effort

2013-12-03 Thread Lists
Andrew,

We've been testing ZFS since about 10/24, see my original post (and 
replies) asking about its suitability "ZFS on Linux in production" on 
this list. So far, it's been rather impressive. Enabling compression 
better than halved the disk space utilization in a low/medium bandwidth 
(mainly archival) usage case.

Dealing with many TB of data in a "real" environment is a very slow, 
conservative process; our ZFS implementation has, so far, been limited 
to a single redundant copy of a file system on a server that only backs 
up other servers.

Our next big test is to try out ZFS filesystem send/receive in lieu of 
our current backup processes based on rsync. Rsync is a fabulous tool, 
but is beginning to show performance/scalability issues dealing with the 
many millions of files being backed up, and we're hoping that ZFS 
filesystem replication solves this.
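
The flow we intend to test looks roughly like this (dataset, snapshot and 
host names are hypothetical):

zfs snapshot tank/data@nightly-20140101
zfs send -i tank/data@nightly-20131231 tank/data@nightly-20140101 | \
    ssh backuphost zfs receive backup/data

Each nightly run then ships only the blocks changed since the previous 
snapshot, with no file-tree scan required.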

This stage of deployment is due to be in place by about 1/2014.

-Ben

On 11/30/2013 06:20 AM, Andrew Holway wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.
>
> A filesystem or Volume sits within a zpool
> a zpool is made up of vdevs
> vdevs are made up of block devices.
>
> zpool is similar to LVM volume
> vdev is similar to raid set
>
> devices can be files.
>
> Thanks,
>
> Andrew
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux testing effort

2013-12-13 Thread Lists
On 12/04/2013 06:05 AM, John Doe wrote:
> Not sure if I already mentioned it but maybe have a look at: 
>  http://code.google.com/p/lsyncd/

We checked lsyncd out and it's most certainly a very interesting tool. 
I *will* be using it in the future!

However, we found that it has some issues scaling up to really big file 
stores that we haven't seen (yet) with ZFS.

For example, the first thing it has to do when it comes online is a 
full rsync of the watched file area. This makes sense; you need to do 
this to ensure integrity. But if you have a large file store, EG: many 
millions of files and dozens of TB then this first step can take days, 
even if the window of downtime is mere minutes due to a restart. Since 
we're already at this stage now (and growing rapidly!) we've decided to 
keep looking for something more elegant and ZFS appears to be almost an 
exact match. We have not tested the stability of lsyncd managing the 
many millions of inode write notifications in the meantime, but just 
trying to satisfy the write needs for two smaller customers (out of 
hundreds) with lsyncd led to crashes and the need to modify kernel 
parameters.

As another example, lsyncd solves a (highly useful!) problem of 
replication, which is a distinctly different problem than backups. 
Replication is useful, for example as a read-only cache for remote 
application access, or for disaster recovery with near-real-time 
replication, but it's not a backup. If somebody deletes a file 
accidentally, you can't go to the replicated host and expect it to be 
there. And unless you are lsyncd'ing to a remote file system with its 
own snapshot capability, there isn't an easy way to version a backup 
short of running rsync (again) on the target to create hard links or 
something - itself a very slow, intensive process with very large 
filesystems. (days)

I'll still be experimenting with lsyncd further to evaluate its real 
usefulness and performance compared to ZFS and report results. As said 
before, we'll know much more in another month or two once our next stage 
of roll out is complete.

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Why the huge shmmax default setting?

2013-12-17 Thread Lists
Fresh load of CentOS6/64 from a new ISO (downloaded 2 weeks ago?). While 
getting set up with PostgreSQL, one of the typical steps is to increase 
shmmax from its normal, conservative value (eg: 32 MB or something) to 
something far more aggressive.

But in recent installs of CentOS 6, this value is generally huge, 
typically larger than the RAM installed on the machine! For example, 
fresh installs on systems with 32 GB of RAM, this is already set to 
68719476736 (64 GB)
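
For reference, I'm reading the value with:

cat /proc/sys/kernel/shmmax
# or equivalently: sysctl kernel.shmmax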

This large value doesn't really make sense to me - can somebody explain 
why the change to such a large value?

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux testing

2013-12-17 Thread Lists
On 12/14/2013 08:50 AM, Chuck Munro wrote:
> Hi Ben,
>
> Yes, the initial replication of a large filesystem is *very* time
> consuming!  But it makes sleeping at night much easier.  I did have to
> crank up the inotify kernel parameters by a significant amount.
>
> I did the initial replication using rsync directly, rather than asking
> lsyncd to do it.  I notice that if I reboot the primary server, it takes
> a while for the inotify tables to be rebuilt ... after that it's smooth
> sailing.

I may be being presumptuous, and if so, I apologize in advance...

It sounds to me like you might consider a disk-to-disk backup solution. 
I could suggest dirvish, BackupPC, or our own home-rolled rsync-based 
solution that works rather well: http://www.effortlessis.com/backupbuddy/

Note that with these solutions you get multiple save points that are 
deduplicated with hardlinks so you can (usually) keep dozens of save 
points in perhaps 2x the disk space of a single copy. Also, because of 
this, you can go back a few days / weeks / whatever when somebody 
deletes a file. In our case, we make the backed up directories available 
via read-only ftp so that end users can recover their files.

I don't know if dirvish offers this, but backupbuddy also allows you to 
run pre and post backup shell scripts, which we use (for example) for 
off-site archiving to permanent storage since backup save points expire.

-Ben

> If you want to prevent deletion of files from your replicated filesystem
> (which I do), you can modify the rsync{} array in the lsyncd.lua file by
> adding the line 'delete = false' to it.  This has saved my butt a few
> times when a user has accidentally deleted a file on the primary server.
>
> I agree that filesystem replication isn't really a backup, but for now
> it's all I have available, but at least the replicated fs is on a
> separate machine.
>
> As a side note for anyone using a file server for hosting OS-X Time
> Machine backups, the 'delete' parameter in rsync{} must be set to 'true'
> in order to prevent chaos should a user need to point their Mac at the
> replicate filesystem (which should be a very rare event).  I put all TM
> backups in a separate ZFS sub-pool for this reason.
>
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux testing

2013-12-18 Thread Lists
On 12/18/2013 07:50 AM, Les Mikesell wrote:
> I've always considered backuppc to be one of those rare things that
> you set up once and it takes care of itself for years.  If you have
> problems with it, someone on the backuppc mail list might be able to
> help.   It does tend to be slower than native rsync and especially bad
> at handling huge directories, but sometimes you can split up large
> filesystems into smaller subdirectory runs and if necessary you can
> use the ClientAlias feature to make it look like a single large host
> is several different systems so you can skew the full and incremental
> runs of different areas to different days.

BackupPC is a great product, and if I knew of it and/or it was available 
when I started, I would likely have used it instead of cutting code. Now 
that we've got BackupBuddy working and integrated, we aren't going to be 
switching as it has worked wonderfully for a decade with very few issues 
and little oversight.

I would differentiate BackupBuddy in that there is no "incremental" and 
"full" distinction. All backups are "full" in the truest sense of the 
word, and all backups are stored as native files on the backup server. 
This works using rsync's hard-link option to minimize wasted disk space. 
This means that the recovery process is just copying the files you need. 
Also, graceful recovery for downtime and optimistic disk space usage are 
both very nice (it will try to keep as many backup save points as disk 
space allows).
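
A quick way to see the hard-link sharing at work - the same inode number 
across save points means a single copy on disk (paths hypothetical):

ls -li /backups/2013-12-17/etc/hosts /backups/2013-12-18/etc/hosts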

I'm evaluating ZFS and will likely include some features of ZFS into 
BBuddy as we integrate these capabilities into our backup processes. 
We're free to do this in part because we have redundant backup sets, so 
a single failure wouldn't be catastrophic in the short/medium term.

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux testing

2013-12-18 Thread Lists
On 12/18/2013 03:04 PM, Les Mikesell wrote:
> For the people who don't know, backuppc builds a directory tree for
> each backup run where the full runs are complete and the incrementals
> normally only contain the changed files. However, when you access the
> incremental backups through the web interface or the command line
> tools, the backing full is automatically merged so you don't have to
> deal with the difference - and when using rsync as the xfer method,
> deletions are tracked correctly.
Should I read this as "BackupPC now has its own filesystem driver"? If 
so, wow. Or do you mean that there are command line tools to read/copy 
BackupPC save points?



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux testing effort

2014-01-06 Thread Lists
On 11/30/2013 06:20 AM, Andrew Holway wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.
>

Andrew,

I want to run /var on zfs, but when I try to move /var over it won't 
boot thereafter, with errors about /var/log missing. Reading the ubuntu 
howto for ZFS indicates that while it's possible to even boot from zfs, 
it's a rather long and complicated process.

I don't want to boot from ZFS, but it appears that grub needs to be set 
up to support ZFS in order to be able to mount zfs filesystems, and it's 
possible that EL6's grub just isn't new enough. Is there a howto or any 
instructions for setting up zfs on CentOS 6 so that it's available at boot?

Thanks,

Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Can we trust RedHAt encryption tools?

2014-01-10 Thread Lists
On 01/09/2014 02:52 PM, m.r...@5-cent.us wrote:
> Not quite - anyone mandated to POSIX standards are effectively mandated to
> use the compromised algorithms, as I understand it.

That's news to me. Citation?

Recently, there was a discussion amongst BSD devs and they concluded 
that they don't trust hardware RNG either, deciding instead to add their 
randomness to other sources before going to /dev/random.

http://arstechnica.com/security/2013/12/we-cannot-trust-intel-and-vias-chip-based-crypto-freebsd-developers-say/

Lastly, we should all thank this neckbeard who's been banging the gong 
all along, and was right:

http://schestowitz.com/Weblog/archives/2013/07/15/

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Booting Software RAID

2014-01-24 Thread Lists
On 01/24/2014 09:25 AM, Matt wrote:
> # file -s /dev/sda
> /dev/sda: x86 boot sector; GRand Unified Bootloader, stage1 version
> 0x3, boot drive 0x80, 1st sector stage2 0x849fc, GRUB version 0.94;
> partition 1: ID=0xee, starthead 0, startsector 1, 4294967295 sectors,
> extended partition table (last)\011, code offset 0x48
>
> # file -s /dev/sdb
> /dev/sdb: x86 boot sector; GRand Unified Bootloader, stage1 version
> 0x3, boot drive 0x80, 1st sector stage2 0x849fc, GRUB version 0.94;
> partition 1: ID=0xee, starthead 0, startsector 1, 4294967295 sectors,
> extended partition table (last)\011, code offset 0x48
>
> I am guessing this is saying its present on both?  I did nothing to
> copy it there so it must have done it during the centos 6.x install
> process.

It would appear so. But I'd recommend simply yanking out one drive, 
booting, and then swapping the drives to try booting again. You can 
resync the RAID arrays trivially after the test, and then you know for 
sure. I've made this a matter of course for any servers where the root 
fs is on RAID1.
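
The resync after the test is a one-liner; a sketch assuming the root 
array is md0 and the pulled member is sdb1:

# re-add the member after the boot test; the mirror rebuilds online
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat    # watch the rebuild progress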

-Ben

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Single sign-on for CentOS-6

2014-01-29 Thread Lists
On 01/29/2014 06:51 AM, James B. Byrne wrote:
>> I would have to ask why you're doing such a thing in the first place?  You
>> have a perfectly good working Active Directory setup, that people are already
>> familiar with, I suspect with existing MS clients which integrate fully (and
>> "properly") and you want to replace it with a Samba based setup.  Unless you
>> have a relatively simple setup, I would say don't change.  However, if you 
>> are
>> looking to move to something else, then do that.  Why fix to Samba?  Why not
>> go with a full on Kerberos/LDAP environment?
>>
>> FWIW, we use CentOS 6 with Active Directory Authorization.  Things have 
>> worked
>> fine for us for about 1 year.  It took a VERY long time to get setup and
>> working, but it is now.
> The main reason is the age of the equipment and software.  The current domain
> controller host is from c.2004 and the software is Microsoft Advanced Server
> 2000.  The Windows 7 workstations work with this AD but there are a few
> quirks.
>
> As the equipment is well past its best before date we need to replace it. We
> have virtualised just about everything else saving only the desktop
> workstations and this is another candidate for virtualisation.
>
> As a company we are moving everything we can to FOSS and away from proprietary
> interests.  Therefore the combination of moving from MS-AS2000 and a dedicated
> host to Samba4 running on a virtualised guest seems an attractive option,
> provided that it works.  Thus my question.

As a CentOS/Linux shop serving clients who are primarily Windows-based, 
this is also attractive to us. However, initial research indicates that 
while it probably can work, it's by no means trivial.

EG: http://news.idg.no/cw/art.cfm?id=07B0DED3-A627-9A9A-C05097D23C5FD44F

Our intention (round tuit, etc.) at this point is probably to work with 
Windows Live in more of a "client" role for SSO, though we haven't 
started; it's a second-level priority right now. Personally, I'd love 
to see a website/project put together to document the needs and 
solutions of corporate/enterprise-level Samba4 users, but I'm not aware 
of one already existing.

Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Installing on USB Flash Drive

2014-01-29 Thread Lists
I'm doing exactly this on a trial basis with production servers. So far, 
it's working great. Some tips:


1) Flash drives are less reliable than HDDs. Software RAID1 is the way 
to go.
 A) Use two different makes of USB drive so that you have different 
failure characteristics. If either fails, replace it immediately. You 
can set things up so that if a thumb drive fails, you can yank it and 
install its replacement without downtime. Practice this and document 
how you did it.
 B) When you replace the failed USB drive and set it up, you can 
manually remove it from the RAID array and test it on a different 
(offline) machine, without causing downtime, to verify that grub etc. 
are set up properly. Do this. (A sketch of the drill follows this list.)
 C) Write speed is important! USB drives have an incredibly diverse 
range of performance. Don't cripple yourself trying to save $5.
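
A sketch of the drill from A) and B), assuming a two-member mirror md0 
built from sda1 and sdb1 (names made up):

# fail and pull one member; the mirror keeps running, no downtime
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# ...test the pulled stick on another box, then add its replacement...
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat    # watch the rebuild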


2) Flash drives typically have weak write endurance. Think of write 
capacity like a disposable salt shaker: once it's run out of salt, it's 
game over and you can't recharge it.
 A) Find and root out any remaining regular write activity on the 
thumb drives and move the affected partitions to something else.
 B) Mount all partitions on the flash drive RAID with noatime.
 C) Move your /tmp and /var partitions to spinning disks or proper SSDs.
 D) Never put swap space on the thumb drives.
 E) Verify the lack of write activity with iostat. (See the sketch below.)
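
And a sketch of B) and E); the device, mount point, and fs type are 
examples:

# /etc/fstab -- noatime keeps routine reads from generating writes
/dev/md0  /  ext4  defaults,noatime  1 1

# watch for writes; the write columns should sit at or near zero
iostat -dk 5 /dev/md0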


Good luck.

On 01/24/2014 03:06 PM, Matt wrote:
> Is it possible to install CentOS on a USB Flash Drive.  Have boot
> sector, / and /boot on USB drive then put /home, etc on a software
> raid array of the physical drives.
>
> Thought there used to be motherboards with SDHC slots that you could
> use to boot off.
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Booting Software RAID

2014-01-29 Thread Lists
On 01/29/2014 08:15 AM, Matt wrote:
> If I am putting both 4TB drives in a single RAID1 array for /vz would
> there be any advantage to using LVM on it?

My (sometimes unpopular) advice is to set up the partitions on servers 
into two categories:

1) OS
2) Data

OS partitions don't really grow much. Most of our servers' OS partitions 
total less than 10 GB of used space after years of 24x7 use. I recommend 
keeping things *very* *simple* here: avoid LVM. I use plain software 
RAID1 with bare partitions.

Data partitions, by definition, would be much more flexible. As your 
service becomes more popular, you can get caught in a double bind that 
can be very hard to escape: On one hand, you need to add capacity 
without causing downtime because people are *using* your service 
extensively, but on the other hand you can't easily absorb a day or so of 
downtime to transfer TBs of data because people are *relying* on your service 
extensively. To handle these cases you need something that gives you the 
ability to add capacity without (much) downtime.

LVM can be very useful here, because you can add/upgrade storage without 
taking the system offline, and although there *is* some downtime when 
you have to grow the filesystem (EG when using Ext* file systems) it's 
pretty minimal.
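
A sketch of the grow-in-place workflow, assuming a volume group vg_data, 
a logical volume lv_data carrying XFS mounted at /data, and a freshly 
partitioned new disk sdc1 (all names made up):

pvcreate /dev/sdc1                     # make the new disk an LVM PV
vgextend vg_data /dev/sdc1             # add it to the volume group
lvextend -L +2T /dev/vg_data/lv_data   # grow the logical volume
xfs_growfs /data                       # XFS grows while mounted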

So I would strongly recommend using something to manage large amounts of 
data with minimal downtime if/when that becomes a likely scenario.

Comparing LVM+XFS to ZFS, ZFS wins IMHO. You get all the benefits of LVM 
and the file system, along with the almost magical properties that you 
can get when you combine them into a single, integrated whole. Some of 
ZFS' data integrity features (See RAIDZ) are in "you can do that?" 
territory. The main downside is the slightly higher risk that ZFS on 
Linux's "non-native" status can cause problems, though in my case that's 
no worry, since we'll be testing any updates carefully prior to rollout.

In any event, realize that any solution like this (LVM + XFS/Ext, ZFS, 
or BTRFS) will have a significant learning curve. Give yourself *time* 
to understand exactly what you're working with, and use that time 
carefully.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Installing on USB Flash Drive

2014-01-29 Thread Lists
On 01/24/2014 11:09 PM, Emmanuel Noobadmin wrote:
> However, note that there might be an issue with anaconda and big USB
> storage. The boot partition anaconda creates will not boot past grub.
> I needed to manually create the partition to start on sector 63 for
> grub to see it. Happens on my 16GB sticks but not on small 1GB sticks.

FWIW, I have not seen this issue with a recent SuperMicro system and 
32 GB sticks.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Booting Software RAID

2014-01-29 Thread Lists
On 01/29/2014 01:10 PM, Les Mikesell wrote:
> How so, unless you are adding disk heads to the mix or localizing
> activity during your test?

Just ran into this: I did a grep on what seemed to be a lightly loaded 
server, and the load average suddenly spiked. It turned out the box was 
performing over 130 write ops/second on a 7200 RPM drive! Splitting the 
data across partitions would have had no effect in this case.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Understanding iostat

2014-01-29 Thread Lists
We have a load balancer/session server that manages sessions in small 
files. I did a grep on the directory of session files and the server 
load went from 0.50 to 10.x; for all intents and purposes we were down 
until I canceled the grep.

According to this article at 
http://www.thattommyhall.com/2011/02/18/iops-linux-iostat/ tps is 
roughly analogous to IOPS. Running iostat on the device reports a tps 
that sometimes hits as high as 2000. Given a fairly standard 7200 RPM 
SATA drive, with potential IOPS perhaps as high as 100, how is this even 
remotely possible? Is it due to sequential writes?
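
One way to dig in (a sketch; iostat is from the sysstat package, and 
the device name is an example):

# -x adds extended stats; avgrq-sz (average request size, in sectors)
# hints at sequential I/O, which a drive completes far faster than
# random seeks
iostat -dxk 5 /dev/sda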

Thanks,

Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Monitoring (was Re: Booting Software RAID)

2014-01-30 Thread Lists
On 01/30/2014 07:25 AM, Jeffrey Hass wrote:
> You could EASILY write a script (anyone do that here?) -- to monitor
> that /OS file system ~ and send
> alerts based on thresholds and triggers (notify the monitoring people
> before they get even notified.. it's
> alot of fun!) -- and put it in the crontab - // cron // - get yourself
> some...
>
> Better living through monitoring..

Why not just download xymon or nagios? 100x the features, 1/100th the 
maintenance. I don't believe in NIH.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Booting Software RAID

2014-01-30 Thread Lists
On 01/30/2014 12:28 AM, Sorin Srbu wrote:
>>> My (sometimes unpopular) advice is to set up the partitions on servers
>>> into two categories:
>>>
>>> 1) OS
>>> 2) Data
>> Absolutely. I have been doing this, without problems, for 5 years.
>> Keeping the two distinct is best, in my opinion.
> Exactly. Why would this be an unpopular piece of advice?
>
> It might even be better to keep the OS by itself on one disk (with /boot, /
> and swap) and have the data on a separate disk.
>
> Please enlighten me!

I think the somewhat unpopular part is to recommend *against* using LVM 
for the OS partitions, voting instead to KISS, and only use LVM / Btrfs 
/ ZFS for the "data" part. Some people actually think LVM should be used 
everywhere.

And for clarity's sake, I'm not suggesting literal /os and /data 
partitions, simply that there are areas of the filesystem used to store 
operating system stuff (EG: /bin, /boot, /usr) and areas used to store 
data (EG: /home, /var, /tmp).

Keep the OS stuff as simple as possible, RAID1 against bare partitions, 
etc. because when things go south, the last thing you want is *another* 
thing to worry about.

Keep the data part such that you can grow as needed without (much) 
downtime. EG: LVM+XFS, ZFS, etc. (And please verify your backups regularly)
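
A hypothetical layout in that spirit (device and VG names made up):

/dev/md0 (RAID1, bare partitions)  ->  /boot
/dev/md1 (RAID1, bare partitions)  ->  /
/dev/md2 (RAID1)  ->  LVM PV  ->  vg_data  ->  /home, /var, ... (grow as needed)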

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] idea: "hybrid" iso images?

2014-01-30 Thread Lists
On 01/30/2014 12:58 PM, Joseph Spenner wrote:
> i definitely had the same experience back then.  Anybody had luck with
> simply dd a current CentOS iso.  I wonder if RedHat supports
> ISOHybrid?
Nope. I succeeded once by manually installing grub after an install. I 
have an external DVD R/W that I basically only use for installing CentOS.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Experience with BTRFS?

2014-02-04 Thread Lists
Was wondering if anyone here could weigh in on using BTRFS for CentOS 6 
in a "near production" environment?

I've been using ZFS on Linux and am very happy with the results so far, 
but don't particularly want to put all my eggs in one basket. Our 
application architecture allows us to have multiple, concurrent 
filesystems in mirror so I have the option of running a system under 
production-like environment without risking actual loss of customer 
data. In any event, we will be triple redundant at the application 
level, with ZFS on one file store, RAID/LVM/EXT4 on another, and 
possibly BTRFS on the third.

Of course, I would want a filesystem of AT LEAST "release candidate" 
quality, but the ability of ZFS and BTRFS to check for and fix errors 
without downtime holds a tremendous amount of weight. The file stores are 
big enough that it is a days-long process to bring an offline file store 
back online. It would seem that BTRFS is slightly more flexible than 
ZFS, EG the ability to add RAID levels for improved redundancy after 
initial creation without taking the system(s) offline.

Anybody using BTRFS for real, sustained-load, 24x7 environment?

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Experience with BTRFS?

2014-02-05 Thread Lists
On 02/04/2014 12:24 PM, Fernando Cassia wrote:

> Indeed. Check this out
> A tour of BTRFS
> http://www.youtube.com/watch?v=hxWuaozpe2I

From 2012! Lots of cool stuff, but it doesn't cover the current state of 
the art...

> And this
> Dec 2012: SUSE says BTRFS is ready to rock
> https://www.linux.com/news/enterprise/systems-management/677226-suse-linux-says-btrfs-is-ready-to-rock

* Half of the features I'm interested in aren't included.

> And this
> BTRFS improvements in kernel 3.14
> http://www.phoronix.com/scan.php?page=news_item&px=MTU4ODA

So that would mean running a non-stock kernel. Looks like BTRFS *may* be 
an option for EL7? It's at least a "technology preview", which means it 
*may* become available in a point update. (sighs, stamps feet in a 
childish way)
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] And then there was one (browser)

2014-02-07 Thread Lists
On 02/06/2014 08:41 AM, Phelps, Matt wrote:
> Of course we already have notified Google.
>
> I was hoping for a little more granularity. Google is a large place; as is
> Red Hat I know. There was word that Red Hat was working with Google on a
> solution, and I was hoping to hear if there was any movement.
>
> I can't ask Red Hat since we don't pay for it, but perhaps the new CentOS
> relationship with them can offer a channel of communication for the
> Community.

I don't mean to poke, but is there a reason you aren't using Chromium? 
It looks, acts, and works like Google Chrome except that:

1) It doesn't spy on you.
2) It uses standard flash that you install separately.
3) The icon looks a little different.

I use Chromium on Fedora, so I can't comment on CentOS, which I only use 
for headless servers. But this might be a starting point:

http://www.if-not-true-then-false.com/2013/install-chromium-on-centos-red-hat-rhel/

Good luck!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Migration from 32 to 64 bits

2014-02-25 Thread Lists
On 02/25/2014 10:32 AM, Les Mikesell wrote:
> On Tue, Feb 25, 2014 at 12:28 PM, Steve Clark  wrote:
>> On 02/25/2014 01:17 PM, Fabrizio Di Carlo wrote:
>>> Hello to all,
>>>
>>> currently I have CentOS 6.4 32 bit, very simple setup on my notebook,
>>> I want to migrate it from 32 to 64 bits cause I want to play with some
>>> VMs etc etc.
>>>
>>> Do you have some suggestions on how to do backup of folders (mainly I
>>> have 1 user) or just copy the user folder and stop?
>>>
>>> Fabrizio
>>>
>> Hmmm... we have 32bit CentOS 6.3 running both 32bit and 64bit VirtualBox VMs 
>> just fine.
> Yes, if you want to use KVM you'd need a 64-bit install, but for
> VirtualBox or VMware Player it should only depend on the CPU
> capability, not the host OS.

While technically true, I can't imagine wanting to do much with VMs 
without the extra RAM space afforded by a 64-bit OS. Really, 2 (OK, 3) GB 
of RAM is *not enough* to do serious work. I'd strongly suggest starting 
with at least 8 GB. You can do it with the 2 or so that you have, but I 
wanted to scream using VirtualBox with 2 GB for Windows compatibility 
testing.

8 GB lets me run a host OS (Fedora) and 2 VMs without too much trouble. 
3 VMs starts to slow down noticeably, no matter how I tweak the memory 
split.

-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] - automation script

2014-03-01 Thread Lists
On 03/01/2014 08:01 AM, Les Mikesell wrote:
> On Sat, Mar 1, 2014 at 3:23 AM, Paolo De Michele
>  wrote:
>> Hi everybody,
>>
>> I have many server in production but I would verify this:
>>
>> ex.: I have many domain in /var/www/html/ (domain1 / domain2 / domain3)
>>
>> now,
>>
>> How I do check if the files are added or changed?
>> I should apply this "politic" for all domains but I don't know how
>>
>> It is possible send me an email with the modification?
>> so, exist diff command with some option but I don't understand how to apply 
>> it
>
> If you installed with an rpm you can use 'rpm -q -V packagename' to
> check that files haven't changed since the install.
>
> If you have a master copy on some other server, you can  use 'rsync
> -avn  --delete dir  user@production_host:/path/above/dir' to see a
> list of files with differences.  And if you remove the n from the
> arguement list it will fix any differences (but be very sure you are
> in the right place if you use --delete without -n).
>
> Ultimately, you should probably be using subversion or git to maintain
> this master copy. If you only have one or a few production servers you
> can just use the version control tools to update them directly, but I
> prefer a staging copy where you can test the changes before rsyncing
> to production.
>

All good suggestions. I'd prefer Mercurial to either Subversion (central 
repo? Blagh!) or git (Mercurial has easier syntax), but that's minor.

One more (super simple) thing to try:

$ touch releasetime.txt
# do something else for a day or three
$ find /var/www -newer releasetime.txt
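
And a hypothetical cron-able variant that mails the changes, since 
that's what was asked for; the stamp path and address are made up:

# prime it once with: touch /var/tmp/www.stamp
changed=$(find /var/www -type f -newer /var/tmp/www.stamp)
[ -n "$changed" ] && echo "$changed" | mail -s "Changed files on $(hostname)" admin@example.com
touch /var/tmp/www.stamp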


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] - automation script

2014-03-01 Thread Lists
On 03/01/2014 11:20 AM, Les Mikesell wrote:
> On Sat, Mar 1, 2014 at 1:14 PM, Lists  wrote:
>> All good suggestions. I'd prefer mercurial to either of Subversion
>> (central repo? Blagh!)
> A central repo is exactly what you want when you want one
> authoritative copy and you have a network to reach it.
>
Oh, it's often very useful to have an "authoritative" copy. We have a 
few such beasts that are authoritative within their context.

When you are using a DCVS, you can take any arbitrary copy you want and 
call it the "Central" repo. You can always commit to and pull from the 
"Central" repo, just as you would with SVN. The bonus of using a DCVS is 
that when you pull, you get the entire repo, so if your "Central" repo 
server dies/crashes, you can just start using another one without 
skipping a beat. Another neat trick is to daisy chain repos, where A is 
the master of B, which is the master of C. B gets the changes from A, but 
not from C; C gets changes from A and B. This makes it trivial to try out 
new features in a fork for a while before pushing changes in C back to B 
or A.
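
The daisy chain in Mercurial commands, sketched (URL and paths made up):

hg clone http://central.example.com/repo repo-B   # B tracks A
hg clone repo-B repo-C                            # C tracks B
cd repo-C
hg pull ../repo-B && hg update    # C picks up A's changes via B
hg push                           # push C's work back to B when ready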

That you want to have a "Central" or "Master" repo is no reason to set 
things up so you have no ability to change your mind, IMHO. I cannot 
think of any significant feature that SVN offers that Mercurial does 
not. We started with SVN before switching to Mercurial, and there's no 
way we're going back.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] suggestions for large filesystem server setup (n * 100 TB)

2014-03-01 Thread Lists
On 02/28/2014 06:30 AM, Mauricio Tavares wrote:
> On Fri, Feb 28, 2014 at 8:55 AM, Phelps, Matt  wrote:
>> I'd highly recommend getting a NetApp storage device for something that big.
>>
>> It's more expensive up front, but the amount of heartache/time saved in the
>> long run is WELL worth it.
>>
>My vote would be for a ZFS-based storage solution, be it
> homegrown or appliance (like nextenta). Remember, as far as ZFS (and
> similar filesystems whose acronyms are more than 3 letters) is
> concerned, a petabyte is still small fry.

Ditto on ZFS! I've been experimenting with it for about 5-6 months and 
it really is the way to go for any filesystem greater than about 10 GB 
IMHO. We're in the process of transitioning several of our big data 
pools to ZFS because it's so obviously better.

Just remember that ZFS isn't casual! You have to take the time to 
understand what it is and how it works, because if you make the wrong 
mistake, it's curtains for your data. ZFS has a few maddening 
limitations** that you have to plan for. But it is far and away the 
leader in Copy-On-Write, large scale file systems, and once you know how 
to plan for it, ZFS capabilities are jaw-dropping. Here are a few off 
the top of my head:

1) Check for and fix filesystem errors without ever taking it offline.
2) Replace failed HDDs from a raidz pool without ever taking it offline.
3) Works best with inexpensive JBOD drives - it's actually recommended 
to not use expensive HW raid devices.
4) Native, built-in compression: double your usable disk space for free.
5) Extend (grow) your zfs pool without ever taking it offline.
6) Create a snapshot in seconds that you can keep or expire at any time. 
(snapshots are read-only, and take no disk space initially)
7) Send a snapshot (entire filesystem) to another server. Binary perfect 
copies in a single command, much faster than rsync when you have a large 
data set.
8) Ability to make a clone - a writable copy of a snapshot in seconds. A 
clone of a snapshot is writable, and snapshots can be created of a 
clone. A clone initially uses no disk space, and as you use it, it only 
uses the disk space of the changes between the current state of the 
clone and the snapshot it's derived from.
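
A few of these as commands; a sketch, with made-up pool and dataset names:

zpool scrub tank                            # 1) check/repair online
zfs snapshot tank/data@2014-03-01           # 6) instant snapshot
zfs send tank/data@2014-03-01 | ssh backup1 zfs recv backup/data   # 7)
zfs clone tank/data@2014-03-01 tank/data-test    # 8) writable clone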


** Limitations? ZFS? Say it isn't so! But here they are:

1) You can't add redundancy after creating a vdev in a zfs pool. So if 
you make a ZFS vdev and don't make it raidz at the start, you can't add 
more drives later to get raidz. You also can't "add" redundancy to an 
existing raidz partition. Once you've made it raidz1, you can't add a 
drive to get raidz2. I've found a workaround, where you create a "fake" 
drive with a sparse file, and add the fake drive(s) to your RAIDZ pool 
upon creation, and immediately remove them. But you have to do this on 
initial creation!
http://jeanbruenn.info/2011/01/18/setting-up-zfs-with-3-out-of-4-discs-raidz/
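
Sketched, with made-up sizes and names (don't write data before 
offlining the fake member):

truncate -s 2T /var/tmp/fake1.img          # sparse stand-in disk
zpool create tank raidz2 sda sdb sdc /var/tmp/fake1.img
# pool now runs degraded (one "missing" disk) with raidz1-level safety
zpool offline tank /var/tmp/fake1.img      # take the fake out immediately
zpool replace tank /var/tmp/fake1.img sdd  # later: swap in a real disk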

2) Zpools are built from vdevs, which you can think of as a block 
device made from 1 or more HDs. You can add vdevs without issue, but you 
can't remove them. EVER. Combine this fact with #1 and you had better be 
planning carefully when you extend a file system. See the "Hating your 
data" section in this excellent ZFS walkthrough:
http://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/

3) Like any COW file system, ZFS tends to fragment. This cuts into 
performance, especially when you have less than about 20-30% free space. 
This isn't as bad as it sounds: you can enable compression to double 
your usable space.

Bug) ZFS on Linux has been quite stable in my testing, but as of this 
writing, has a memory leak. The workaround is manageable but if you 
don't do it ZFS servers will eventually lock up. The workaround is 
fairly simple, google for "zfs /bin/echo 3 > /proc/sys/vm/drop_caches;"
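
For reference, the workaround boils down to periodically dropping the 
page cache, e.g. from cron; the path and interval here are a judgment 
call:

# /etc/cron.d/zfs-drop-caches
0 * * * * root /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches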


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ZFS on Linux testing effort

2014-03-01 Thread Lists
I had promised to weigh in on my experiences using ZFS in a production 
environment. We've been testing it for a few months now, and confidence 
is building. We started using it in production about a month ago, after 
months of non-production testing.

I'll append my thoughts in a cross-post from another thread because I 
think it's an excellent summary for anybody looking for an Enterprise 
scale file system.

--- ORIGINAL POST 

Ditto on ZFS! I've been experimenting with it for about 5-6 months and
it really is the way to go for any filesystem greater than about 10 GB
IMHO. We're in the process of transitioning several of our big data
pools to ZFS because it's so obviously better.

Just remember that ZFS isn't casual! You have to take the time to
understand what it is and how it works, because if you make the wrong
mistake, it's curtains for your data. ZFS has a few maddening
limitations** that you have to plan for. But it is far and away the
leader in Copy-On-Write, large scale file systems, and once you know how
to plan for it, ZFS capabilities are jaw-dropping. Here are a few off
the top of my head:

1) Check for and fix filesystem errors without ever taking it offline.
2) Replace failed HDDs from a raidz pool without ever taking it offline.
3) Works best with inexpensive JBOD drives - it's actually recommended
to not use expensive HW raid devices.
4) Native, built-in compression: double your usable disk space for free.
5) Extend (grow) your zfs pool without ever taking it offline.
6) Create a snapshot in seconds that you can keep or expire at any time.
(snapshots are read-only, and take no disk space initially)
7) Send a snapshot (entire filesystem) to another server. Binary perfect
copies in a single command, much faster than rsync when you have a large
data set.
8) Ability to make a clone - a writable copy of a snapshot in seconds. A
clone of a snapshot is writable, and snapshots can be created of a
clone. A clone initially uses no disk space, and as you use it, it only
uses the disk space of the changes between the current state of the
clone and the snapshot it's derived from.


** Limitations? ZFS? Say it isn't so! But here they are:

1) You can't add redundancy after creating a vdev in a zfs pool. So if
you make a ZFS vdev and don't make it raidz at the start, you can't add
more drives later to get raidz. You also can't "add" redundancy to an
existing raidz partition. Once you've made it raidz1, you can't add a
drive to get raidz2. I've found a workaround, where you create a "fake"
drive with a sparse file, and add the fake drive(s) to your RAIDZ pool
upon creation, and immediately remove them. But you have to do this on
initial creation!
http://jeanbruenn.info/2011/01/18/setting-up-zfs-with-3-out-of-4-discs-raidz/

2) Zpools are built from vdevs, which you can think of as a block
device made from 1 or more HDs. You can add vdevs without issue, but you
can't remove them. EVER. Combine this fact with #1 and you had better be
planning carefully when you extend a file system. See the "Hating your
data" section in this excellent ZFS walkthrough:
http://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/

3) Like any COW file system, ZFS tends to fragment. This cuts into
performance, especially when you have less than about 20-30% free space.
This isn't as bad as it sounds: you can enable compression to double
your usable space.

Bug) ZFS on Linux has been quite stable in my testing, but as of this
writing, has a memory leak. The workaround is manageable but if you
don't do it ZFS servers will eventually lock up. The workaround is
fairly simple, google for "zfs /bin/echo 3 > /proc/sys/vm/drop_caches;"


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] - automation script

2014-03-01 Thread Lists
On 03/01/2014 06:15 PM, Les Mikesell wrote:
> The biggest thing for us is subversion's ability to use svn 'external'
> properties at any point in a tree to reference any other svn URL.
> Checkouts and updates automatically pull in those other locations into
> your working copy.  That lets each component of a large project have
> its own release schedule by tagging versions and the consuming
> projects can each advance the versions they use as they are ready
> simply by changing their external references.And an automated
> build system like jenkins will see anything new and rebuild as needed
> simply by polling the top level project.

See Merc's "subrepos": 
http://mercurial.selenic.com/wiki/Subrepository?action=show&redirect=subrepos

(Or not. If SVN's working well for you, more power to you.)
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] NFS Mount: files owned by nobody

2014-03-17 Thread Lists
This is one of those simple-been-doing-this-forever things that, for 
some reason, has me stumped today.

When I try to NFS (v4) mount a directory, the user/group ownership shows 
up as user "nobody" even though /etc/passwd has values for the correct 
user names. How do I get it to mount with the correct user IDs?

Hume is the server, running CentOS 6, all updates applied, maybe a week 
or two ago
Bender is the client, running CentOS 6, all updates applied, maybe a 
week or two ago.

### HUME ###

Hume has a backup saved in /home/spfs.450, the directory that I'm trying 
to export.
[root@hume ~]# ll /home/spfs.450/
drwxr-xr-x  3 apache apache  3 Oct  8  2009 y.spfs
drwxr-xr-x  3 apache apache  3 Feb  1  2010 yts.spfs
--SNIP--

Hume is exporting with /etc/exports
/home/spfs.450 
192.168.254.0/255.255.255.0(rw,async,no_subtree_check,mp,no_acl,insecure,no_root_squash)

Hume has appropriate /etc/passwd entries:
[root@hume ~]# grep -i apache /etc/passwd
apache:x:48:48:Apache:/var/www:/sbin/nologin

To be sure, the files are numerically id'd as 48:
[root@hume ~]# ls -ln /home/spfs.450/
drwxr-xr-x  3 48 48  3 Oct  8  2009 y.spfs
drwxr-xr-x  3 48 48  3 Feb  1  2010 yts.spfs
--SNIP--



Nothing shows in /var/log/messages either when I enable the export or 
when it's mounted by bender.
[root@bender ~]# tail -f /var/log/messages


### BENDER ###

Bender is mounting the backup with the following command:
[root@bender ~]# /bin/mount -t nfs 192.168.254.9:/home/spfs.450 
/home/spfs.450

Files show up with the wrong user name.
[root@bender ~]# ll /home/spfs.450/
drwxr-xr-x  3 nobody nobody  3 Oct  8  2009 y.spfs
drwxr-xr-x  3 nobody nobody  3 Feb  1  2010 yts.spfs
--SNIP--

Bender has appropriate /etc/passwd entries:
[root@bender ~]# grep -i apache /etc/passwd
apache:x:48:48:Apache:/var/www:/sbin/nologin

Nothing shows in /var/log/messages when I mount the export on Bender.
[root@bender ~]# tail /var/log/messages


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] NFS Mount: files owned by nobody

2014-03-19 Thread Lists
Alas, this doesn't seem to have resolved the issue. (See the results 
shown below.) Your notes closely mirror the results of my Google 
searches. Is there a way to make the NFS server/client very verbose and 
log where the error is occurring?
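
(One way to get more detail, though I'm going from memory on the flags: 
stop the service and run the mapping daemon in the foreground with 
verbosity turned up, then watch it while mounting:

service rpcidmapd stop
rpc.idmapd -f -vvv

idmapd.conf also accepts a Verbosity setting in its [General] section.)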

-Ben


On 03/17/2014 03:20 PM, Frank Cox wrote:
> On Mon, 17 Mar 2014 15:10:28 -0700
> Lists wrote:
>
>> When I try to NFS (v4) mount a directory, the user/group ownership shows
>> up as user "nobody" even though /etc/passwd has values for the correct
>> user names. How do I get it to mount with the correct user IDs?
> Hello Mr. Lists:
My name is Ben :)

>
> Here are my notes that cover this occurrence.
>
> If all users come up as nobody on a nfs mount:
>
> Add nfs server name to the Domain = line in /etc/idmapd.conf on both the 
> server and the clients, i.e. Domain = example.com

On the server:
[root@hume ~]# grep Domain /etc/idmapd.conf
Domain = hume.mycompany.com

And the client:
[root@bender ~]#  grep Domain /etc/idmapd.conf
#Domain = local.domain.edu
Domain = hume.mycompany.com


>
> /sbin/service rpcidmapd restart
> /sbin/service nfslock restart
> /sbin/service nfs restart

Server:
[root@hume ~]# /sbin/service rpcidmapd restart
Shutting down RPC idmapd:  [  OK  ]
Starting RPC idmapd:   [  OK  ]
[root@hume ~]# /sbin/service nfslock restart
Stopping NFS locking:  [  OK  ]
Stopping NFS statd:[  OK  ]
Starting NFS statd:[  OK  ]
[root@hume ~]# /sbin/service nfs restart
Shutting down NFS daemon:  [  OK  ]
Shutting down NFS mountd:  [  OK  ]
Shutting down NFS services:[  OK  ]
Shutting down RPC idmapd:  [  OK  ]
Starting NFS services: [  OK  ]
Starting NFS mountd:   [  OK  ]
Starting NFS daemon:   [  OK  ]
Starting RPC idmapd:   [  OK  ]


And the client: (nfs service wasn't previously running)
[root@bender ~]# /sbin/service rpcidmapd restart
Shutting down RPC idmapd:  [  OK  ]
Starting RPC idmapd:   [  OK  ]
[root@bender ~]# /sbin/service rpcidmapd restart
Shutting down RPC idmapd:  [  OK  ]
Starting RPC idmapd:   [  OK  ]
[root@bender ~]# /sbin/service nfslock restart
Stopping NFS statd:[  OK  ]
Starting NFS statd:[  OK  ]
[root@bender ~]# /sbin/service nfs restart
Shutting down NFS daemon:  [FAILED]
Shutting down NFS mountd:  [FAILED]
Shutting down NFS quotas:  [FAILED]
Shutting down RPC idmapd:  [  OK  ]
Starting NFS services: [  OK  ]
Starting NFS quotas:   [  OK  ]
Starting NFS mountd:   [  OK  ]
Starting NFS daemon:   [  OK  ]
Starting RPC idmapd:   [  OK  ]



> Also, the complete hostname as specified (example.com) must be in /etc/hosts 
> on the nfs clients as well as the server
>
>

[root@hume ~]# grep hume /etc/hosts
192.168.254.9    hume.mycompany.com hume
[root@hume ~]# exportfs -ra
[root@hume ~]# hostname
hume.mycompany.com
[root@hume ~]#


[root@bender ~]# grep hume /etc/hosts
192.168.254.9    hume.mycompany.com hume
[root@bender ~]# umount /home/spfs.450
[root@bender ~]# /bin/mount -t nfs hume.mycompany.com:/home/spfs.450 
/home/spfs.450
[root@bender ~]# ls -ln /home/spfs.450/
drwxr-xr-x  3 99 99  3 Oct  8  2009 y.spfs
drwxr-xr-x  3 99 99  3 Feb  1  2010 yts.spfs
--SNIP--


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Linux malware attack

2014-03-19 Thread Lists
On 03/19/2014 10:35 AM, Mike McCarthy wrote:
> Years ago I moved sshd off port 22, disabled password logins and use
> certificates after noticing my logs filling up with numerous daily
> attempts at hacking into sshd.
>

Not only do I not use port 22, no passwords, and keys with passphrases, 
the port I use for SSH is also protected by port knocking. Hey, just 
because I'm paranoid doesn't mean they aren't out to get me! Though few, 
there have been remote exploits against SSHd.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] NFS Mount: files owned by nobody

2014-03-19 Thread Lists
On 03/19/2014 12:28 PM, Frank Cox wrote:
> Do your user numeric id's match between the nfs server and client?

Yes. And despite restarting all services manually, only a SIMULTANEOUS 
cold reboot of both client and server "resolved" the issue. (I've 
already rebooted both the client and server multiple times trying to 
sort it all out.) So now something ELSE has broken in the process: the 
client mounted a different directory on a different server via NFS, and 
now it is mounting as nobody.

Based on setting the "Domain" parameter in idmapd.conf on the client, is 
it supposed to be the case that a client cannot mount NFS shares from 
more than one server?


-Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] RESOLVED: NFS Mount: files owned by nobody

2014-03-19 Thread Lists
On 03/19/2014 02:44 PM, Ljubomir Ljubojevic wrote:
>> >
> It is very strange that client can mount directory on DIFFERENT SERVER?
> It looks like you have DNS/IP issues on your network?
>
> I used autofs and IP address to point it to desired server, to avoid
> possible DNS problems.

I've resolved this issue in a way that uses the server names (instead 
of IPs), avoids DNS problems, and allows you to use shares from multiple 
servers. Every instruction I read indicated that the hostname value was 
important. Careful scrutiny of the config file reveals that there are 
really two values that matter, so here's my rough instruction:

1) /etc/idmapd.conf
# Set Domain to the domain name shared by your NFS servers.
Domain = mycompany.com
# Set Local-Realms to the names of the NFS servers you'll be using. THIS
# WASN'T MENTIONED ELSEWHERE.
Local-Realms = nfs1.mycompany.com,nfs2.mycompany.com
# Make the above changes on all the servers in question.

2) /etc/hosts: list all the NFS servers you specified in Local-Realms 
above. This way DNS errors can't leave your servers hung:
1.2.3.4    nfs1.mycompany.com
1.2.3.5    nfs2.mycompany.com

3) Make sure you synchronize your /etc/passwd files so that the account 
IDs match up or you'll get very strange results.

4) Reboot EVERYTHING. Restarting services (in my case) was not enough. 
For documentation's sake, I restarted
rpcidmapd, nfslock, and nfs, but didn't get the correct permissions 
until reboot. It doesn't seem important to run the nfs service on the 
clients.

5) Client mount:
# CLI
/bin/mount -t nfs nfs1.mycompany.com:/path/to/share /local/mount/point

# /etc/fstab
nfs1.mycompany.com:/path/to/share  /local/mount/point  nfs  ro,nolock  0 0

# mount -a
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CVE-2014-0160 CentOS 5.x openssl heartbleed workaround

2014-04-08 Thread Lists
On 04/08/2014 10:37 AM, Phil Wyett wrote:
> If you: rpm -qa | grep openssl
>
> If you have: openssl-1.0.1e-16.el6_5.4.0.1
>
> You have the package with affected elements disabled. These were made
> until the final fixes could be brought in and applied.
>
> If you have: openssl-1.0.1e-16.el6_5.7
>
> You have the package with the upstream fix(es) applied and supersedes
> the openssl-1.0.1e-16.el6_5.4.0.1 packages.

What packages do I look for on EL5.X?

Thanks,

Ben
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos

