Re: XFS vs Ext4

2023-12-05 Thread Paul Robert Marino
XFS is a from-the-ground-up journaling filesystem, whereas EXT4 is still a
filesystem with a journal tacked on.
That said, EXT4 has caught up to XFS and in some specific
cases exceeds its performance, but not in all.

The short version is that it depends on what you are doing; both have pros and
cons.

Here are some short examples:
If I'm doing a Gluster object store I will always use XFS, because Gluster
is optimized for XFS.
If it's an NFS volume then EXT4 is usually better, because of inode
compatibility issues which can be worked around in XFS but tank the
performance.
If I'm using it for temp space for compiles or an ETL, XFS will probably do
better because it will reduce the IOPS around the handling of inodes.
For extremely large volumes (100TB+ on a SAN or in a RAID) I will always use
XFS.
For a desktop, pick the one you know best, but if undelete is important to you
use EXT4 (XFS has no mechanism for undeleting files).
If you are dealing with an embedded device that boots off of an SD card or
eMMC, use XFS if possible, because how it handles inodes puts less wear on the
card over time.
If there is a high risk of file corruption due to unreliable power or hardware, always XFS.
The long version
There is a huge efficiency gain when deleting files in XFS. It's noticeably
faster when freeing up space from deleted files; this is because instead
of having an inode per block it only creates one inode at the beginning of
every file, as needed for legacy application backward support.
Another positive side effect of this is that formatting an XFS filesystem is
faster, because it never pre-creates inodes during the formatting process; in
fact the only thing mkfs.xfs does is create a couple of redundant journals.
The one inode per file is also the reason XFS puts less wear on SSDs. This
difference won't make a noticeable impact on the life of a business-class
SSD, and probably not on a decent consumer-grade SSD, but for SD cards and
eMMC being used as a boot device this impact can be huge, especially if that
device isn't using tmpfs for /tmp or is storing /var on the SSD and is
writing a lot of logs and temp files.
For risk of filesystem corruption due to unplanned power outages, always XFS,
because it keeps multiple copies of the journal, ignores the inodes for
recovery and generally is self-healing. While they both do an amazing job
at recovering from this kind of situation, EXT4 still trusts the inodes
over its single copy of the journal and as a result is more likely to have
significant file corruption issues sooner. fsck.xfs normally only does a
comparison of the copies of the journal to fix journal corruption issues
and will usually roll back file changes that were incomplete at the time of
the power loss. By the way, XFS does this automatically when the volumes are
mounted if the checksums of the copies of the journal don't match, so
essentially it's a placebo command which you should never need to
run unless you are dealing with bad sectors in the physical media. EXT4
handles things very differently on mount, and as a result you may need to
manually run fsck.ext4 after a power loss to recover, and it may not
recover as well.
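For the rare case where an XFS volume really won't mount after a crash, the
tool that does the actual work is xfs_repair, not fsck.xfs; a minimal sketch
(the device name here is made up):

    umount /dev/sdb1          # the filesystem must be unmounted first
    xfs_repair -n /dev/sdb1   # -n = dry run, just report what it finds
    xfs_repair /dev/sdb1      # real repair; if it complains about a dirty log,
                              # mount and unmount once to replay it, and only
                              # use -L as a last resort since it discards the log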
Essentially, if you want to create and delete files quickly XFS is
definitely the way to go. The easiest way you can see this is by deleting a
large multi-GB file and running the df command multiple times: EXT4 will
take a few seconds to free the space, while XFS frees it immediately.
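A rough way to see it for yourself, assuming you have one filesystem of each
type mounted (the mount points below are hypothetical):

    dd if=/dev/zero of=/mnt/xfs/big.img bs=1M count=4096
    rm -f /mnt/xfs/big.img && df -h /mnt/xfs              # space is freed right away
    dd if=/dev/zero of=/mnt/ext4/big.img bs=1M count=4096
    rm -f /mnt/ext4/big.img && watch -n1 df -h /mnt/ext4  # takes a few seconds to drop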
XFS can also be tuned to optimize the performance in general on RAID
volumes if it is told some details about the RAID when the filesystem is
created.
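The links below cover the details; as a hedged sketch, the invocation looks
something like this for an imaginary array with a 64k chunk size and 4 data
disks (your geometry will differ):

    # su = stripe unit (chunk size), sw = number of data disks in the stripe
    mkfs.xfs -d su=64k,sw=4 /dev/sdc1
    # equivalently with sunit/swidth, which are given in 512-byte sectors:
    #   sunit = 64k / 512 = 128,  swidth = sunit * 4 data disks = 512
    # mkfs.xfs -d sunit=128,swidth=512 /dev/sdc1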
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-storage-xfs
https://xfs.org/index.php/XFS_FAQ#Q:_How_to_calculate_the_correct_sunit.2Cswidth_values_for_optimal_performance
Also XFS has incremental backup capabilities built in; it's rudimentary but
it's there:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/xfsbackuprestore
 
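A minimal sketch of the level-0/level-1 cycle that doc describes (paths and
labels here are made up):

    # level 0 = full dump, level 1 = everything changed since the last level 0
    xfsdump -l 0 -L weekly-full -M media0 -f /backup/home.0.xfsdump /home
    xfsdump -l 1 -L daily-incr  -M media0 -f /backup/home.1.xfsdump /home
    # restore the full dump first, then apply the incremental on top of it
    xfsrestore -f /backup/home.0.xfsdump /home
    xfsrestore -f /backup/home.1.xfsdump /home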
With NFS always go with EXT4, because NFS isn't compatible with 64-bit
inodes, so with XFS you need to disable the "inode64" flag, which means on
files over 2GB XFS will need to create multiple

Re: Back UP

2021-08-09 Thread Paul Robert Marino
well if cron is broken you could take the sledgehammer approach and
install JobScheduler
https://www.sos-berlin.com/en/jobscheduler-downloads
  lol
seriously though, it's a bit much if you are just replacing cron, but if
you are doing enterprise-scale automation it's awesome.

On Mon, Aug 9, 2021 at 5:29 PM Jon Pruente  wrote:
>
>
>
> On Mon, Aug 9, 2021 at 3:36 PM Larry Linder 
> <0dea520dd180-dmarc-requ...@listserv.fnal.gov> wrote:
>>
>> Have friends and relatives buy a MAC.
>
>
> I know this is a silly nit to pick in what you are posting about, but it 
> reminded me that I tend to see it most often from technical types. Why do 
> people use MAC when referring to a Macintosh? MAC should be for something 
> like a MAC address. We don't call people named Joseph JOE when their name 
> gets shortened. However, technical types seem to do it all the time for Macs. 
> Just a habit from dealing with MAC addresses all the time?


Re: ctrl + alt + F1 issue

2021-04-28 Thread Paul Robert Marino
Keep going through the function keys (Ctrl + Alt + F2, Ctrl + Alt + F3,
etc.); eventually you will find the right one. This is a common problem on
systems with Wayland as opposed to X11.
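If cycling through the key combinations gets you nowhere and you can still
reach the box over SSH, you can also switch the virtual terminal from a shell;
which VT owns the graphical session varies by setup (often 1 or 2 on Wayland,
7 on older X11), so the numbers below are just a guess:

    fgconsole    # as root: prints the number of the currently active VT
    chvt 1       # switch to VT 1; try 2, 7, etc. until the desktop comes back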

On Wed, Apr 28, 2021, 6:17 PM Chen, Zhenhang  wrote:

> Hello Everyone,
>
> On my scientific linux desktop, I pressed ctrl + alt + F1 shortcut keys
> and got a black screen only with a cursor that can be moved. How can I go
> back to the desktop screen ? Thank you !
>
>
> Zhenhang
>


Re: LSI MegaRAID management

2018-10-18 Thread Paul Robert Marino
LSI got split up a few years back; the SAN products went to NetApp and the
RAID products went to Broadcom. You should be able to find the Linux tools here:
https://www.broadcom.com/support/download-search/?pg=&pf=&pn=&pa=&po=&dk=MegaRAID+9266-4i

As for monitoring you have a few options: the MegaRAID CLI is a nice tool,
Storage Manager has a nice GUI if I remember correctly, and for remote
monitoring look at the SMI-S provider.
SMI-S is an industry standard for storage monitoring that works very well,
and there are a number of tools and APIs that support it. The downside is
the API isn't documented well for consumption by the general public. I wrote
some documentation about it for a Perl implementation of a client some
years back here:
https://github.com/prmarino1/Lib-CIM-Perl/blob/master/LCP::Query.md
The documentation on how the queries work should work for any WBEM/CIM client
and for any subset, including SMI-S and WMI.
When you get down to it, CIM (the transport protocol for SMI-S) is kind of
like a sane version of SNMP where you can get just the raw data but you can
also get the MIB from the device you are querying.
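For the CLI route, a hedged sketch of the usual health checks, assuming the
64-bit binary lands in /opt/MegaRAID/MegaCli the way Broadcom's RPM usually
puts it:

    # logical drive (array) state - look for "State: Optimal"
    /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL
    # physical disk state and error counters
    /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep 'Slot|Firmware state|Error Count'
    # battery backup unit status
    /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL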



On Thu, Oct 18, 2018 at 4:28 PM Alec Habig  wrote:

> LSI puts out an rpm, the one I've got laying around from an old install
> is MegaCli-8.01.06-1.i386.rpm
>
> Current version is either at your vendor's drivers page, or presumably
> from LSI's own website.
>
> I run the attached cronjob nightly to get an email report of the array's
> health:
>
>   Checking RAID status on lepton.d.umn.edu
>   Controller a0:  LSI MegaRAID SAS 9280-16i4e
>   No of Physical disks online : 6
>   Degraded : 0
>   Failed Disks : 0
>
> --
>Alec Habig
>  University of Minnesota Duluth
>  Dept. of Physics and Astronomy
> ha...@neutrino.d.umn.edu
>
> https://urldefense.proofpoint.com/v2/url?u=http-3A__neutrino.d.umn.edu_-7Ehabig_=DwIFAw=gRgGjJ3BkIsb5y6s49QqsA=gd8BzeSQcySVxr0gDWSEbN-P-pgDXkdyCtaMqdCgPPdW1cyL5RIpaIYrCn8C5x2A=HyUzo1cZ5rJfgEG6gRB-VkwQyw2SU4ItLpmGKJt0coU=y03GGtHkL3OIhqqltEKkMq9SXQz_hTXMYjC7F4USR90=
>


Re: is the disk failing ?

2018-10-18 Thread Paul Robert Marino
Radha,
Over the decades of my dealing with hundreds of thousands of disks in data
centers, my experience comes down to this:
1) if SMART says it's going to die, trust it; it's rarely wrong about failures.
2) if SMART says it's fine, but you are getting IO errors, use badblocks
and/or fdisk to verify. 60% of the time it will be a file system problem,
39% of the time it will be something wrong SMART didn't detect, and the rest
will be something else, like (in no particular order) a bad kernel version,
bad BIOS revision, bad controller, or bad cable. By the way, this happened
to me this year on one of my personal laptops with a Toshiba drive: SMART
said it was fine but badblocks revealed it had bad sectors and more were
going bad by the day.
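A minimal sketch of that verification pass (the device name is made up, and
the read-only badblocks scan can take many hours on a big disk):

    smartctl -a /dev/sda        # full SMART attribute dump
    smartctl -t long /dev/sda   # kick off the drive's own extended self-test
    badblocks -sv /dev/sda      # read-only surface scan, prints any bad sectors
    # badblocks -n is a more thorough non-destructive read-write test;
    # never use -w on a disk that still has data you care about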

Lastly, anyone who would like to discuss how the RAID controllers for HP
servers and Dell servers work, and related subjects like SMI-S, feel free to
contact me off the list, but I won't engage any further with a flame war
based on the uninformed opinions of people on an open list. Frankly I can
back up what I say with published, proven facts by reputable experts, but
many people do not respond well when presented with real facts based on
evidence.


On Wed, Oct 17, 2018 at 8:11 PM Konstantin Olchanski 
wrote:

> On Wed, Oct 17, 2018 at 11:57:34PM +, Hinz, David (GE Healthcare)
> wrote:
> > I'd like to submit an opposing viewpoint.
> > If SMART disk analysis says it's going to break, replace it.
> > Nothing is worth risking lost data.
>
>
> I second this.
>
> My only case of false positive (SMART reports complete failure while
> disk still seems to work) has been a worn out 2 TB "green" WD disk.
> By "worn out" I mean that it was (a) heavily used and (b) all it's mates
> of same vintage, age and heavy use have already failed (with i/o errors,
> etc).
>
>
> K.O.
>
>
>
>
> >
> >
> > On 10/17/18, 4:50 PM, "owner-scientific-linux-us...@listserv.fnal.gov
> on behalf of Konstantin Olchanski" <
> owner-scientific-linux-us...@listserv.fnal.gov on behalf of
> olcha...@triumf.ca> wrote:
> >
> > >
> > > # smartctl -a /dev/sda
> > > ...
> > > Device Model: TOSHIBA MG03ACA100
> > > ...
> >
> > Thank you for posting your data, here is my reading of smartctl data:
> >
> > >
> > > === START OF READ SMART DATA SECTION ===
> > > SMART overall-health self-assessment test result: PASSED
> > >
> >
> > this you can ignore, I have held in my hands disks that reported
> "PASSED"
> > but were dead, could not read, could not write anything. Also had
> > disks that worked perfectly but reported "FAILED" here.
> >
> >
> > Next goes the meat of the data:
> >
> > > ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE
> UPDATED  WHEN_FAILED RAW_VALUE
> > >   4 Start_Stop_Count0x0032   100   100   000Old_age
> Always   -   26
> >
> > Your disk is brand new, only ever saw 26 power cycles.
> >
> > >   9 Power_On_Hours  0x0032   051   051   000Old_age
> Always   -   19725
> >
> > Your disk is brand new, 19725 hours is 2.2 years.
> >
> > > 194 Temperature_Celsius 0x0022   100   100   000Old_age
> Always   -   32 (Min/Max 20/37)
> >
> > You have good cooling, temperature is 32C, as high as 40C is usually
> okey, above 50C means the cooling fans are dead.
> >
> > >   5 Reallocated_Sector_Ct   0x0033   100   100   050Pre-fail
> Always   -   0
> > > 196 Reallocated_Event_Count 0x0032   100   100   000Old_age
> Always   -   0
> > > 198 Offline_Uncorrectable   0x0030   100   100   000Old_age
> Offline  -   0
> > > 199 UDMA_CRC_Error_Count0x0032   200   200   000Old_age
> Always   -   0
> >
> > Your disk does not report any problems reading or writing data to
> the magnetic media.
> >
> > Conclusion: healthy as a bull.
> >
> > --
> > Konstantin Olchanski
> > Data Acquisition Systems: The Bytes Must Flow!
> > Email: olchansk-at-triumf-dot-ca
> > Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3,
> Canada
> >
> >
>
> --
> Konstantin Olchanski
> Data Acquisition Systems: The Bytes Must Flow!
> Email: olchansk-at-triumf-dot-ca
> Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
>


Re: is the disk failing ?

2018-10-16 Thread Paul Robert Marino
To be clear, I wasn't saying SMART is useless, just that smartctl doesn't
always tell you everything, so you shouldn't rely on it as a definitive answer
on all issues on all disks.

As for RAID controllers, that's a very long conversation. There are good
reasons the enterprise ones do not, at least not directly in a way you can
extract using the smartctl command; instead they have more advanced checks
available through the drivers and additional monitoring tools provided by
the manufacturer of the RAID controller.

As for the predictive nature of SMART, that's actually in its
specification: it predicts errors based on indicators.
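For what it's worth, on some controllers smartctl can still reach the member
disks if you tell it the pass-through type; a hedged example for a
MegaRAID-style controller (the disk ID is controller specific):

    # N is the controller's device ID for the member disk, often 0..n
    smartctl -a -d megaraid,0 /dev/sda
    # 3ware controllers have a similar pass-through syntax:
    #   smartctl -a -d 3ware,0 /dev/twa0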

On Tue, Oct 16, 2018 at 7:55 PM Konstantin Olchanski 
wrote:

> On Tue, Oct 16, 2018 at 04:20:03PM -0400, Paul Robert Marino wrote:
> >
> > smart is predictive and doesn't catch all errors its also not compatible
> > with all disks and controllers especially raid capable controllers.
> >
>
>
> Do not reject SMART as useless, it correctly reports many actual disk
> failures:
>
> a) overheating (actual disk temperature is reported in degrees Centigrade)
> b) unreadable sectors (data on these sectors is already lost) - disk model
> dependant
> c) "hard to read" sectors (WD specific - "raw read error rate")
> d) sata link communication errors ("CRC error count")
>
> even more useful actual (*not* predictive) stuff is reported for SSDs
> (again, model dependant)
>
> it is true that much of this information is disk model dependant and
> one has to have some experience with the SMART data to be able
> to read it in a meaningful way.
>
> as for raid controllers that prevent access to disk SMART data,
> they are as safe to use a car with a blank dashboard (no fuel level,
> no engine temperature, no speedometer, etc).
>
>
> --
> Konstantin Olchanski
> Data Acquisition Systems: The Bytes Must Flow!
> Email: olchansk-at-triumf-dot-ca
> Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
>


Re: gaming laptop compatibility

2018-10-15 Thread Paul Robert Marino
By the way, one more thing. Conventional wisdom for gamers is to use Nvidia
cards; the reason for this is that the high-end Nvidia cards (1080 and 2080
series) are faster than current AMD GPUs. However, I've never seen a gaming
laptop with those GPUs, or even a 1070, in them without being severely
thermally constrained. That means on laptops right now the field between AMD
and Nvidia GPUs is actually level.

Also AMD does have an advantage on Linux: the amdgpu driver is GPL and part of
the kernel. This driver is good for the most part, with a couple of exceptions
for which you need to install the amdgpu-pro driver, which is not a kernel
driver; instead it actually communicates with the amdgpu driver in the kernel,
which means no compiles of proprietary kernel drivers on kernel updates. The
two exceptions are as follows.
1) the amdgpu-pro driver is required for CrossFire (multi GPU) support. This
may not sound important, but it is if the laptop has a Ryzen mobile CPU (they
are all actually APUs with their own GPU built in).
2) if you do a lot of work with OpenCL, the amdgpu driver can do it but the
amdgpu-pro driver is faster for executing OpenCL applications.

There is also a limitation to the amdgpu-pro driver: it is pre-compiled and
they only make packages for RHEL, Ubuntu LTS, and SuSE (no Fedora support :( ).
This shouldn't be a problem for Scientific Linux because the RHEL version
should work, but it is something to keep in mind.

By the way, the latest Ryzen 7 Mobile 2700U APU should be playable in most
games if they need a budget gaming laptop.

Re: gaming laptop compatibility

2018-10-15 Thread Paul Robert Marino
I have an Asus ROG Strix GL702ZC. It's on the high-end side and the battery
life is terrible, but it's got a desktop Ryzen 7 CPU with 8 cores and 16 threads, a
4GB AMD RX 580 video card, a 17" FreeSync screen, 32GB of RAM, an SSD, and
a hard drive.
The thing I like about it is I can easily play games at higher frame
rates than I've ever seen on a laptop (100FPS+ in most games) and it is
also a powerhouse for running VMs and containers under Linux. I can
easily set up mini clouds on it when I want to test or develop network
applications. It also is surprisingly well cooled and I had no issues
installing Linux.
I've tested it with RHEL and Fedora, and I know someone else who has one
running Ubuntu. The only bad thing I would say about it running Linux is
Asus has not signed on to LVFS yet, so firmware updates are not automatic,
but they can still be done at boot via a USB thumb drive via the BIOS, so it's
safe to blow away the Windows partitions :) .
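For what it's worth, on vendors that do participate in LVFS the firmware
updates normally come through the fwupd client; a minimal sketch, which
obviously doesn't apply to this particular Asus yet:

    fwupdmgr refresh       # pull the current firmware metadata from LVFS
    fwupdmgr get-updates   # list devices that have an update pending
    fwupdmgr update        # apply them (usually staged for the next reboot)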
By the way, don't listen to anyone who says Intel CPUs are better for
games on a laptop; they really can't handle a video card big enough for the
slight extra speed on the Intel CPUs to have any noticeable effect. The new
stuff Valve funded the development of for gaming on Linux, like the DirectX 11
to Vulkan translation, utilizes the extra cores on the AMD CPUs much better
than the same game on Windows. Most games using DirectX 11 (on Windows or
Wine) can only use 2 cores, and 4 cores if they are DirectX 12, but Vulkan can
use more, and the AMD video card supports Vulkan on Linux even in the GPL
driver now included with newer Linux kernels.

That said, it's huge and heavy; I often joke about it being more like a Compaq
Portable than a laptop, lol. Also the battery life off the charger is less
than an hour, so it does have some downsides.

On Mon, Oct 15, 2018 at 2:04 PM Yasha Karant  wrote:

> Please see the list below.  These come from a popular press article, but
> I cannot post the URL as the university that provides this email
> rewrites all URLs, and thus I have no certainty that any URL I post (or
> is embedded in any thread to which I respond) will not be corrupted.
>
> At the end of the popular press account, there are mentions of specific
> laptop models.  As I do not have the time to research this, but a number
> of students want to know, which if any of these are SL 7 compatible
> (meaning, all hardware is "supported")?  I assume that a larger number
> are Ubuntu supported, in that Ubuntu keeps closer to the "bleeding edge"
> of Linux hardware support.
>
> Thanks for any specific information.
>
> Yasha Karant
>
> Excerpt:
>
> How to buy a gaming laptop
> They're cheaper, lighter and more powerful than ever before.
> Devindra Hardawar
>
> If your priority is smooth gameplay, I'd recommend a laptop with a
> 15.6-inch 1080p screen and either NVIDIA's GTX 1060 or 1070 Max-Q GPU.
> The former will run most games well at 60fps and beyond, while the 1070
> will let you reach even higher frame rates and better-quality graphics
> settings. Mid-range machines like HP's Omen and some of Dell's Alienware
> models are a good start. If you've got a slightly bigger budget, you
> should consider laptops with high-refresh-rate screens: MSI's GS65
> Stealth Thin, Gigabyte's Aero 15X, Razer's Blade and pricier Alienware
> configuration.
>
> But if you're on a budget, stick to machines with the GTX 1050, 1050Ti
> or 1060 Max-Q, like Dell's G3 and G5 series. You won't get
> high-refresh-rate monitors with these, but they'll have enough
> horsepower to reach a silky 60fps. They're ideal if you're mainly
> playing MOBA titles and undemanding games like Overwatch.
>
> It's easy to get overwhelmed by the number of options today, but that
> variety is ultimately a good thing. What was once a category filled with
> huge, ugly monstrosities now includes genuinely gorgeous machines that
> aren't much heavier than a MacBook Pro.
>


Re: Trouble with MySQL Server

2018-05-16 Thread Paul Robert Marino
So, first things first, to get you back up and running run the following command:
"setenforce 0"
This will set SELinux into permissive mode; then restart MySQL.
The next step is to relabel your file system; a quick Google search can tell
you how to do this.
The next steps are a little more complicated, but you should start with the
audit2why command.
Sorry I'm not giving you a full how-to right now, but I'm answering from my cell
phone so I don't have the full set of commands available to me at the moment to
give you proper examples.
What I can tell you is SELinux isn't that hard to deal with once you know the
basic tools.
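Now that I'm at a keyboard, here is a hedged sketch of the usual permanent fix
for a custom MySQL tmpdir like the /home/mysqltmp mentioned below; the
mysqld_db_t context type is my assumption, and audit2allow is the fallback if
a simple relabel isn't enough:

    # label the directory so mysqld is allowed to write there, then relabel it
    semanage fcontext -a -t mysqld_db_t "/home/mysqltmp(/.*)?"
    restorecon -Rv /home/mysqltmp
    setenforce 1                     # back to enforcing once it works
    # fallback: build a local policy module straight from the logged denials
    grep mysqld /var/log/audit/audit.log | audit2allow -M mysqltmp
    semodule -i mysqltmp.pp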


  Original Message  
From: eric.lofg...@wsu.edu
Sent: May 16, 2018 8:34 PM
To: pa...@tchpc.tcd.ie
Cc: scientific-linux-users@fnal.gov
Subject: Re: Trouble with MySQL Server

It looks like we have a winner. chown doesn’t work, but checking selinux, I get 
a number of denied {write} notices for that directory.

It looks like selinux is preventing writing to that directory. Is there a way 
to change that? I confess selinux is utterly opaque to me.

Eric

> On May 16, 2018, at 2:36 AM, Paddy Doyle  wrote:
> Maybe instead of the chmod, just make the dir owned by the mysql user:
> 
>  chown mysql.mysql /home/mysqltmp
> 
> Or check if selinux is enabled and is preventing writing to that directory.
> 
>  getenforce
>  grep mysqltmp /var/log/audit/audit.log
> 
> Paddy



Re: SIGTERM?

2017-10-27 Thread Paul Robert Marino
You understand the global SIGTERM correctly, but there is a problem with
relying on that. While it is true that a global SIGTERM is issued, it is
followed shortly afterward by a global SIGKILL. What that means is it may not
give the database sufficient time to shut down before killing it. Whenever
databases are involved you cannot count on the global SIGTERM to shut it
down correctly in time.
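One way to bolt a clean stop onto a unit you don't control is a systemd
drop-in; a minimal sketch with a made-up unit name and shutdown command:

    # /etc/systemd/system/vendordb.service.d/stop.conf  (unit name is hypothetical)
    [Service]
    ExecStop=/opt/vendordb/bin/db_shutdown
    TimeoutStopSec=300
    KillMode=mixed
    # then run: systemctl daemon-reload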

On Fri, Oct 27, 2017 at 3:57 PM, ToddAndMargo  wrote:

> Dear List,
>
> In the situation I am facing, a database is not shutdown by the
> systemd script that started it at boot. (Its start point was
> actually hacked into a related bash file called by another
> systems script without a shutdown hack.)  There is no "ExecStop"
> line.   NO, IT WAS NO  MY DOING !!!
>
> I am not saying which (proprietary) database as I don’t want to
> get into any legal cross hairs.  Anyway, someone else is using
> the database.  The database works fine.
>
> The vendor is not systemd literate and keeps complaining about
> it only works under SysV.  And no, they won’t give me the SysV
> rc.d scripts and let me convert it for them.  And, yes, I know,
> you can still use SysV if you must.  But, again, as I said,
> it is not my doing.
>
> I am thinking there is a possibility of data corruptions.
>
> Question: does the general shutdown take care of this issue?
> Am I presuming too much to think this is handled by the general
> shutdown global SIGTERM?  The database does properly respond
> to SIGTERM.
>
> Do I understand the global SIGTERM correctly?
>
> Many thanks,
> -T
>


Re: clock skew too great ** EXTERNAL **

2017-10-19 Thread Paul Robert Marino
Here is a question: was it using preauth?
In other words, is there a keytab file in /etc?
The other question: is NTP set to sync the time on shutdown to the BIOS?

There are a couple of reasons why I can think this might happen. The first
involves how NTP corrects the time and how it may interact with an option in
the MIT Kerberos client that the article discusses. There is an incorrect
statement in that article about disabling the time sync: it's not that the
option disables the time sync, it just corrects for it when the ticket is
created to mask the issue. The problem with that is NTP usually doesn't
sync the time in one shot; by default it only corrects it in less-than-1-second
increments so it doesn't break time-dependent things like cron jobs.
That flag, in combination with the default behavior of NTP, can cause an
artificial clock skew issue later.
Now you can set an option in the NTP settings in /etc/sysconfig to tell it
to do an initial full sync on boot before starting ntpd, but it is not the
default behavior in 6 if I remember correctly. If a ticket had been created
and the clock had been more than 5 minutes out of sync, you would have
gotten a clock skew error after the clock had corrected itself, because it
would still have been compensating for the initial skew. In this case
kdestroy would clear the skew correction and the new key would be
unaffected.
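If memory serves, the knobs on EL6 look roughly like this; treat the exact
flags and defaults as something to verify on your own box:

    # /etc/sysconfig/ntpd - the -g flag lets ntpd make one large initial step
    OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"

    # or do a one-shot hard sync at boot before ntpd starts: put your servers
    # in /etc/ntp/step-tickers and enable the ntpdate service
    chkconfig ntpdate on
    service ntpdate start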

The other possibility is that if preauth is being used there could have been
something wrong in how the service credential was created on the Kerberos
server, which is quite common if the server is an AD server, and sometimes
happens with Heimdal Kerberos servers too. Essentially, the other
possibility is it may have a max ticket renewal set on the principal, in
which case a kdestroy may force it to redo the preauth and then create a
new ticket. Usually you can correct this on the Kerberos server if your
Kerberos admin really knows it well; sadly most AD admins don't :(. I've had
to show more than a few of them over the years articles on Microsoft
TechNet, and tell them to just do that and stop insisting it can't be done.

By the way, that article is right about one thing: the DNS reverse lookup in
MIT Kerberos can be problematic, because it can't support the use of CNAMEs
in the forward lookup, and it is not specified anywhere in any of the
ratified RFCs (in fact it was proposed and rejected by committee on that
basis), so it causes more problems, especially when it interacts with other
Kerberos implementations or is implemented in the cloud. It's also the only
implementation of Kerberos 5 that does it. It's not the only place where
MIT Kerberos violates the RFCs, and those violations are the reason why you
can't use it for a Samba version 4 AD server, and why most Linux-based
appliances that support Kerberos use Heimdal Kerberos.


On Oct 19, 2017 11:48 AM, "Stephen Isard" <7p03xy...@sneakemail.com> wrote:

> On Thu, 19 Oct 2017 09:09:32 -0500, Pat Riehecky 
> wrote:
>
> >If memory serves, SL7 has "Less Brittle Kerberos"[1] where as SL6 does
> >not.  This could account for why one works and the other does not.
> >
> >Pat
> >
> >[1] https://fedoraproject.org/wiki/Features/LessBrittleKerberos
>
> That looks promising as an explanation.
>
> The problem has been "solved", or at least it has gone away, although I
> don't really understand why.  Without any clear hypothesis as to why it
> might help, I decided to run "kdestroy -A" on the affected machine to clear
> expired tickets out of my local cache.  That did it.  No more clock skew
> messages.  So it looks as if it was a kerberos issue, rather than an ntp
> one, and the error message wasn't really explaining what was wrong.
>
> Thanks to everyone for their advice.
>
> Stephen Isard
> >
> >On 10/18/2017 07:10 PM, Stephen Isard wrote:
> >> On Wed, 18 Oct 2017 17:12:46 -0400, R P Herrold 
> wrote:
> >>
> >>> On Wed, 18 Oct 2017, Howard, Chris wrote:
> >>>
>  Is it possible the two boxes are talking to two different servers?
> >>> as the initial post mentioned and showed it was using remote
> >>> host lists to a pool alias, almost certainly --
> >> Oh, I took the question to be about the kerberos server.  Yes, you are
> right,
> >> ntpd -q returns different results on the two machines.  However, as I
> said in the original post, the time on the two machines is the same to
> within a very small amount., well within the five minute tolerance used by
> kerberos.  So I don't understand why it should matter that the two machines
> have arrived at the same time by syncing with different servers.
> >>
> >>> as a way around, set up ONE unit to act as the local master,
> >>> and then sync against it, to get 'site coherent' time
> >> Could you tell me how to do this, or point me at a document that does?
> >>
> >> Thanks.
> >>
> >>> [a person with more than one clock is never quite _sure_ what
> >>> time is correct ;) ]
> >>>
> >>>
> >>> for extra geek points, spend $25 on AMZN, and get 

Re: [SCIENTIFIC-LINUX-USERS] clock skew too great ** EXTERNAL **

2017-10-19 Thread Paul Robert Marino
As I said, they probably have a different setting for the allowed clock skew,
so I would check the time on the Kerberos server.
Note that in MIT Kerberos, in the krb5.conf file, this can be set via the
'clockskew' option in the "libdefaults" section. It is specified in seconds
and usually defaults to 300 seconds; check out the krb5.conf man page for
details. There is also an option to allow the client to compensate for it
and detect the actual skew, but I don't recommend tinkering with it because
it can cause issues.
Also note that if your Kerberos server is an AD server, Windows clients
usually use their AD server as their default NTP source; otherwise they go
to Microsoft's pool of NTP servers.
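For reference, the setting looks like this (300 seconds being the usual
default):

    # /etc/krb5.conf
    [libdefaults]
        clockskew = 300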

On Oct 19, 2017 10:09 AM, "Pat Riehecky"  wrote:

> If memory serves, SL7 has "Less Brittle Kerberos"[1] where as SL6 does
> not.  This could account for why one works and the other does not.
>
> Pat
>
> [1] https://fedoraproject.org/wiki/Features/LessBrittleKerberos
>
> On 10/18/2017 07:10 PM, Stephen Isard wrote:
>
>> On Wed, 18 Oct 2017 17:12:46 -0400, R P Herrold 
>> wrote:
>>
>> On Wed, 18 Oct 2017, Howard, Chris wrote:
>>>
>>> Is it possible the two boxes are talking to two different servers?

>>> as the initial post mentioned and showed it was using remote
>>> host lists to a pool alias, almost certainly --
>>>
>> Oh, I took the question to be about the kerberos server.  Yes, you are
>> right,
>> ntpd -q returns different results on the two machines.  However, as I
>> said in the original post, the time on the two machines is the same to
>> within a very small amount., well within the five minute tolerance used by
>> kerberos.  So I don't understand why it should matter that the two machines
>> have arrived at the same time by syncing with different servers.
>>
>> as a way around, set up ONE unit to act as the local master,
>>> and then sync against it, to get 'site coherent' time
>>>
>> Could you tell me how to do this, or point me at a document that does?
>>
>> Thanks.
>>
>> [a person with more than one clock is never quite _sure_ what
>>> time is correct ;) ]
>>>
>>>
>>> for extra geek points, spend $25 on AMZN, and get a GPS USB
>>> dongle; run a local top strata server (the first three
>>> lintes of the following)
>>>
>>> [root@router etc]# ntpq -p
>>>  remote   refid  st t when poll reach   delay
>>> offset  jitter
>>> 
>>> =
>>> GPS_NMEA(0) .GPS.0 l-   1600.000
>>> 0.000   0.000
>>> SHM(0)  .GPS.0 l-   1600.000
>>> 0.000   0.000
>>> SHM(1)  .PPS.0 l-   1600.000
>>> 0.000   0.000
>>> +ntp1.versadns.c .PPS.1 u  665 1024  377   51.817
>>> -12.510  19.938
>>> *tock.usshc.com  .GPS.1 u  294 1024  377   34.608
>>> -8.108  10.644
>>> +clmbs-ntp1.eng. 130.207.244.240  2 u  429 1024  377   31.520
>>> -5.674   7.484
>>> +ntp2.sbcglobal. 151.164.108.15   2 u  272 1024  377   23.117
>>> -6.825  10.479
>>> +ntp3.tamu.edu   165.91.23.54 2 u 1063 1024  377   63.723
>>> -3.319  16.813
>>> [root@router etc]#
>>>
>>>
>>> configuring ntp.conf is not all that hard
>>>
>>> -- Russ herrold
>>>
>>
> --
> Pat Riehecky
>
> Fermi National Accelerator Laboratory
> www.fnal.gov
> www.scientificlinux.org
>


Re: clock skew too great ** EXTERNAL **

2017-10-19 Thread Paul Robert Marino
No difference there; the time in both protocols is based on the epoch, which
means it is always using GMT regardless of the OS settings.
So that should make no difference in this case, though without NTP involved
that has been a known issue for encryption in general, so it's not a bad
question, just most likely irrelevant in this case.


On Thu, Oct 19, 2017 at 9:42 AM, Howard, Chris  wrote:

> How about timezone?
>
>
> -Original Message-
> From: owner-scientific-linux-us...@listserv.fnal.gov [mailto:
> owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Stephen Isard
> Sent: Wednesday, October 18, 2017 5:43 PM
> To: scientific-linux-us...@listserv.fnal.gov
> Subject: Re: clock skew too great ** EXTERNAL **
>
> On Wed, 18 Oct 2017 21:02:53 +, Howard, Chris 
> wrote:
>
> >Is it possible the two boxes are talking to two different servers?
>
> Thanks for the idea, but no.  The admin_server entry in /etc/krb5.conf is
> the same on both machines, and the host command returns the same ip address
> for that machine name on both machines.
>
>
>
> >-Original Message-
> >From: owner-scientific-linux-us...@listserv.fnal.gov [mailto:
> owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Stephen Isard
> >Sent: Wednesday, October 18, 2017 2:47 PM
> >To: scientific-linux-us...@listserv.fnal.gov
> >Subject: clock skew too great ** EXTERNAL **
> >
> >Hello,
> >
> >I have two laptops side by side, one running SL6, the other SL7, both up
> >to date.  According to the date command, their times agree to within a
> >small fraction of a second.
> >
> >On both machines, I normally run kinit to get a kerberos ticket in the
> >same realm.  Today, the SL7 machine gets its ticket normally, but the
> >SL6 one shows an error message "Clock skew too great while getting
> >initial credentials".  Since the clocks of the two machines appear to
> >agree, I would have expected that either both should produce the error
> >or neither.  From what I have read on the web, the standard tolerance
> >for clock skew is 5 minutes, and the agreement between the times on the
> >two machines is well within that.
> >
> >Both machines have ntpd running, using the time servers
> >[0-3].rhel.pool.ntp.org.  Powering off the SL6 machine and rebooting
> >does not restore sanity.  The problem just arose today.  There have been
> >no system updates on the SL6 machine since it successfully got its
> >ticket yesterday.
> >
> >Any suggestions for what to try?
> >
> >Stephen Isard
> >
> >
> > *** This email is from an EXTERNAL sender ***
> >Use caution before responding. DO NOT open attachments or click links
> from unknown senders or unexpected email. If this email appears to be sent
> from a Platte River Power Authority employee or department, verify its
> authenticity before acting or responding. Contact the IT Help Desk with any
> questions.
>


Re: clock skew too great ** EXTERNAL **

2017-10-19 Thread Paul Robert Marino
Well, the clock skew allowed is a client-side setting and may be different
on the two; the real question is what is the time on the Kerberos server?
The clock skew is probably there, not on the clients. The clock skew allowed is
set in the /etc/krb5.conf file by default (and also has a default value if
not specified), however it may be overridden by the library call, so in other
words the PAM module may also override the defaults. Sometimes they have
been known to change the default for clock skew in the MIT Kerberos
libraries between major releases, so that's probably why you are seeing this.
Looking at differences in the upstream NTP servers is only something you
should consider when all of the other possibilities are exhausted, because
the public ones are usually getting their time from the same upstream
source (usually GPS, which is fed by NORAD's atomic clock, or NTP from
NIST's or CERN's atomic clocks or another similar authoritative source, all of
which synchronize with each other regularly), and therefore are very
unlikely to have more than a few milliseconds difference between them, which
is not enough to cause such an error.

In short, look at your Kerberos server, not the clients' NTP servers, as that
is most likely where the real clock drift issue is.

The next thing to check is the firewalls, because NTP works in a very
unusual way compared to other protocols on the internet; for example,
in netfilter firewalls you need to load a special conntrack helper module
to support it, otherwise it breaks due to the blocking of related
connections the source server tries to make back to the client
during the synchronization process.



On Wed, Oct 18, 2017 at 8:10 PM, Stephen Isard <7p03xy...@sneakemail.com>
wrote:

> On Wed, 18 Oct 2017 17:12:46 -0400, R P Herrold 
> wrote:
>
> >On Wed, 18 Oct 2017, Howard, Chris wrote:
> >
> >> Is it possible the two boxes are talking to two different servers?
> >
> >as the initial post mentioned and showed it was using remote
> >host lists to a pool alias, almost certainly --
>
> Oh, I took the question to be about the kerberos server.  Yes, you are
> right,
> ntpd -q returns different results on the two machines.  However, as I said
> in the original post, the time on the two machines is the same to within a
> very small amount., well within the five minute tolerance used by
> kerberos.  So I don't understand why it should matter that the two machines
> have arrived at the same time by syncing with different servers.
>
> >as a way around, set up ONE unit to act as the local master,
> >and then sync against it, to get 'site coherent' time
>
> Could you tell me how to do this, or point me at a document that does?
>
> Thanks.
>
> >[a person with more than one clock is never quite _sure_ what
> >time is correct ;) ]
> >
> >
> >for extra geek points, spend $25 on AMZN, and get a GPS USB
> >dongle; run a local top strata server (the first three
> >lintes of the following)
> >
> >[root@router etc]# ntpq -p
> > remote   refid  st t when poll reach   delay
> >offset  jitter
> > 
> =
> > GPS_NMEA(0) .GPS.0 l-   1600.000
> >0.000   0.000
> > SHM(0)  .GPS.0 l-   1600.000
> >0.000   0.000
> > SHM(1)  .PPS.0 l-   1600.000
> >0.000   0.000
> >+ntp1.versadns.c .PPS.1 u  665 1024  377   51.817
> >-12.510  19.938
> >*tock.usshc.com  .GPS.1 u  294 1024  377   34.608
> >-8.108  10.644
> >+clmbs-ntp1.eng. 130.207.244.240  2 u  429 1024  377   31.520
> >-5.674   7.484
> >+ntp2.sbcglobal. 151.164.108.15   2 u  272 1024  377   23.117
> >-6.825  10.479
> >+ntp3.tamu.edu   165.91.23.54 2 u 1063 1024  377   63.723
> >-3.319  16.813
> >[root@router etc]#
> >
> >
> >configuring ntp.conf is not all that hard
> >
> >-- Russ herrold
>


Re: [EXTERNAL] Re: scanner

2017-10-12 Thread Paul Robert Marino
Don't use a smartphone camera for it; a decent dedicated digital camera will
correct for that optically in the lens, but you are right that it is an issue
for smartphone cameras due to the physical lens size.

Sent from my BlackBerry - the most secure mobile device

From: miles.on...@cirrus.com
Sent: October 12, 2017 5:52 PM
To: prmari...@gmail.com; jason.bron...@gmail.com; scientific-linux-users@fnal.gov
Subject: Re: [EXTERNAL] Re: scanner

On 10/12/2017 04:42 PM, Paul Robert Marino wrote:

Interestingly my father threw me for a loop on this: nowadays a low-grade
digital camera actually has higher resolution than most scanners, so he uses
one on a photo copy stand and then just copies the one file to his computer
via a Bluetooth-enabled SD card, which is faster than any scanner on the
market. Then he uses GIMP or Photos, depending on which computer he is using,
to crop it and convert the format if needed.
He told me this actually works faster and easier than any scanner he has ever
used and gets higher resolution as well, and requires no drivers. All you need
is a photo copy stand, which is a rig you attach your camera to that holds it
level to a surface.
It's a fascinating idea and I'm sure he is right about it, and it's probably
the way I'm going to do it in the future.

That seems like it would introduce distortion, as the document edges
would be farther from the camera lens than the document center. Kind
of like most selfies make your nose look bigger.
  



Re: scanner

2017-10-12 Thread Paul Robert Marino
No, it would not produce parallax; that is the point of using a photo copy
stand. That is how professional photographers have been copying photographs
without the negatives since the beginning of photography; it's a rig designed
to prevent exactly that.
Now, the downside is a good photo copy stand is expensive ($400+), but the one
he is using actually originally belonged to his father. Really good ones are a
one-time high-quality investment that will outlast you, but you can get cheap
ones (less than $50) that will last a decade or more of constant use too.

Sent from my BlackBerry - the most secure mobile device

From: positiv...@gmx.com
Sent: October 12, 2017 5:51 PM
To: prmari...@gmail.com
Cc: jason.bron...@gmail.com; scientific-linux-users@fnal.gov
Subject: Re: scanner

A single point of imaging would produce parallax -- if that matters.

Otherwise, an efficient 'hack'.

On the plus side, could 'scan' a non-flat object.

Sent: Thursday, October 12, 2017 at 5:42 PM
From: "Paul Robert Marino" <prmari...@gmail.com>
To: "Jason Bronner" <jason.bron...@gmail.com>, scientific-linux-users <scientific-linux-users@fnal.gov>
Subject: Re: scanner

Interestingly my father threw me for a loop on this: nowadays a low-grade
digital camera actually has higher resolution than most scanners, so he uses
one on a photo copy stand and then just copies the one file to his computer
via a Bluetooth-enabled SD card, which is faster than any scanner on the
market. Then he uses GIMP or Photos, depending on which computer he is using,
to crop it and convert the format if needed.

He told me this actually works faster and easier than any scanner he has ever
used and gets higher resolution as well, and requires no drivers. All you need
is a photo copy stand, which is a rig you attach your camera to that holds it
level to a surface.

It's a fascinating idea and I'm sure he is right about it, and it's probably
the way I'm going to do it in the future.

 


Sent from my BlackBerry - the most secure mobile device





			
From: jason.bron...@gmail.com
Sent: October 12, 2017 5:26 PM
To: scientific-linux-users@fnal.gov
Subject: Re: scanner

I'm currently using an older Epson Perfection with a reasonable degree of success. HP is probably going to be your best bet for any kind of stable use and long term support, though. It'll function correctly on about anything until the unit dies from mechanical failure.

 


 
On Thu, Oct 12, 2017 at 1:50 PM, David Sommerseth <sl+us...@lists.topphemmelig.net> wrote:
On 12/10/17 17:31, ToddAndMargo wrote:

On 12/10/17 17:31, ToddAndMargo wrote:
> Dear List,
>
>    Anyone have a favorite flat bed scanner that is SL friendly?

I've only had MFPs the last 10 years or so, with printer and scanner
integrated.  These are my general experiences on a few brands

- Canon
  Horrendous Linux support, network scanning basically impossible.  USB
  scanning may work reasonably okay.

- Brother
  Functional Linux drivers (also on RHEL), cumbersome setup but once
  done even network scanning works reasonably well.

- HP
  One of the best driver packages I've used.  Newest hardware can be
  tricky and may require building hplip package manually.  But the web
  page is quite good at listing which driver version is required.
  Network scanning works very well, even AFP with duplex scanning.  And
  for USB scanning, this works also very well.

  Downside: requires a binary plug-in to be installed post driver
  install.  This is basically a required closed source/proprietary
  firmware to enable scanning.  This can be installed both via the hplip
  command line and GUI tools.

I'd recommend a HP MFP device any time.  If you can find an older model
on sale, you'll get big bang for the bucks with a big chance it will
work out of the box once the hplip packages are installed.


--
kind regards,

David Sommerseth










Re: scanner

2017-10-12 Thread Paul Robert Marino
Interestingly my father threw me for a loop on this: nowadays a low-grade
digital camera actually has higher resolution than most scanners, so he uses
one on a photo copy stand and then just copies the one file to his computer
via a Bluetooth-enabled SD card, which is faster than any scanner on the
market. Then he uses GIMP or Photos, depending on which computer he is using,
to crop it and convert the format if needed.
He told me this actually works faster and easier than any scanner he has ever
used and gets higher resolution as well, and requires no drivers. All you need
is a photo copy stand, which is a rig you attach your camera to that holds it
level to a surface.
It's a fascinating idea and I'm sure he is right about it, and it's probably
the way I'm going to do it in the future.

Sent from my BlackBerry - the most secure mobile device

From: jason.bron...@gmail.com
Sent: October 12, 2017 5:26 PM
To: scientific-linux-users@fnal.gov
Subject: Re: scanner

I'm currently using an older Epson Perfection with a reasonable degree of
success. HP is probably going to be your best bet for any kind of stable use
and long term support, though. It'll function correctly on about anything
until the unit dies from mechanical failure.
On Thu, Oct 12, 2017 at 1:50 PM, David Sommerseth  wrote:
On 12/10/17 17:31, ToddAndMargo wrote:
> Dear List,
>
>    Anyone have a favorite flat bed scanner that is SL friendly?

I've only had MFPs the last 10 years or so, with printer and scanner
integrated.  These are my general experiences on a few brands

- Canon
  Horrendous Linux support, network scanning basically impossible.  USB
  scanning may work reasonably okay.

- Brother
  Functional Linux drivers (also on RHEL), cumbersome setup but once
  done even network scanning works reasonably well.

- HP
  One of the best driver packages I've used.  Newest hardware can be
  tricky and may require building hplip package manually.  But the web
  page is quite good at listing which driver version is required.
  Network scanning works very well, even AFP with duplex scanning.  And
  for USB scanning, this works also very well.

  Downside: requires a binary plug-in to be installed post driver
  install.  This is basically a required closed source/proprietary
  firmware to enable scanning.  This can be installed both via the hplip
  command line and GUI tools.

I'd recommend a HP MFP device any time.  If you can find an older model
on sale, you'll get big bang for the bucks with a big chance it will
work out of the box once the hplip packages are installed.


--
kind regards,

David Sommerseth



Re: How do I access mtp from the command line?

2017-10-08 Thread Paul Robert Marino

MTP is barely a protocol; its implementation actually differs widely for each
device that uses it. I remember years ago I needed it for an old hard-drive MP3
player I had, and it was always annoying to get working, to say the least. That
said, if you need support for a recent device that uses it you should always get
the latest from source, because no two devices implement it the same way, so
it's a race between the hardware manufacturers and the maintainers of libmtp
which the maintainers can never really win, but they do their best to keep up.
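If you just need to pull files off a device, the FUSE route mentioned below
(jmtpfs from EPEL) is usually the least painful; a minimal sketch, with the
mount point and folder names being arbitrary and device dependent:

    mkdir -p ~/phone
    jmtpfs ~/phone                     # mounts the first MTP device it finds
    cp -r ~/phone/*/DCIM ~/Pictures/   # storage and folder names vary by device
    fusermount -u ~/phone              # unmount when done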

  Original Message  
From: toddandma...@zoho.com
Sent: October 7, 2017 3:57 AM
To: SCIENTIFIC-LINUX-USERS@fnal.gov
Subject: Re: How do I access mtp from the command line?

On 10/07/2017 12:27 AM, Jos Vos wrote:
> On Fri, Oct 06, 2017 at 03:44:51PM -0700, ToddAndMargo wrote:
> 
>> With a lot of help from Vladimir, here is my write up:
>>
>> SL 7.4: how to operate MTP devices from the command line;
>>
>> First download and install libmtp and libmtp-examples from:
>> http://people.redhat.com/bnocera/libmtp-rhel-7.5/
> 
> EPEL has ready-to-use libmtp packages, as well ass jmtpfs (FUSE
> and libmtp based filesystem).  Never used it myself.
> 
> Sounds like a much simpler way to go.
> 


The current libmtp did not recognize my wife's tablet.
Red Hat fixed that and posted it on the link I gave.  See

https://bugzilla.redhat.com/show_bug.cgi?id=1356288


Re: reiserfs?

2017-07-18 Thread Paul Robert Marino

OK, well reiserfs is actually EXT2 with a journal slapped on top of it, just like
EXT3, so you can try mounting it as read-only EXT2. Though admittedly I haven't
tried it, it should work in theory, and it certainly can't hurt if you try it in
read-only mode.

  Original Message  
From: toddandma...@zoho.com
Sent: July 18, 2017 8:50 PM
To: SCIENTIFIC-LINUX-USERS@fnal.gov
Subject: Re: reiserfs?

On 07/18/2017 05:33 PM, Nico Kadel-Garcia wrote:
> On Fri, Jul 14, 2017 at 9:04 PM, ToddAndMargo  wrote:
>> Hi All,
>>
>> I need to read a reiserfs partition on a flash drive.
>> Any words of wisdom?
> 
> *Why* ? reiserfs has languished since the arrest of Hans Reiser for
> murdering his wife. And much like ReiserFS, Hans claimed complete
> innocence until actually looking at evidence proved that her sudden
> absence was entirely his fault.
> 

Hi Niko,

Ya, no fooling!  :-)

I was trying to read the reiserfs partition on my
Knoppix Live USB drive.

I eventuality qemu-kvm boot the flash drive and used
cifs to import the data I wanted from my Samba server

-T


Re: selinux preventing access to directory net

2017-07-17 Thread Paul Robert Marino


It looks like you may be right that it's /proc/net.

Have you tried using the Python audit tools, such as audit2why and audit2allow,
to analyze them? They can make it a lot easier to understand what's going on,
though they usually don't tell you if there is a bool you can flip to fix it.
That tool still needs to be written :)
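A hedged sketch of the audit2allow route against the denial quoted below;
review the generated .te file before loading it, since there may be a cleaner
fix such as an existing boolean:

    # summarize the denials and generate a local policy module from them
    grep 'comm="exim"' /var/log/audit/audit.log | audit2why
    grep 'comm="exim"' /var/log/audit/audit.log | audit2allow -M exim_procnet
    cat exim_procnet.te          # check what it actually wants to allow
    semodule -i exim_procnet.pp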
  Original Message  
From: 7p03xy...@sneakemail.com
Sent: July 17, 2017 2:16 PM
To: scientific-linux-us...@listserv.fnal.gov
Subject: selinux preventing access to directory net

On two SL7.3 systems where I have set exim as my mta alternative, I am 
getting a lot of entries in /var/log/messages saying "SELinux is 
preventing /usr/bin/exim from search access on the directory net", with 
the usual accompanying "if you believe that exim should be allowed..." 
stuff, but the logs don't explain what call to exim triggered the 
messages.

Sealert -l tells me

Raw Audit Messages
type=AVC msg=audit(1500313603.937:268): avc:  denied { search } for 
pid=3097 comm="exim" name="net" dev="proc" ino=7154 
scontext=system_u:system_r:exim_t:s0 
tcontext=system_u:object_r:sysctl_net_t:s0 tclass=dir

type=SYSCALL msg=audit(1500313603.937:268): arch=x86_64 syscall=open 
success=no exit=EACCES a0=7ff03baef4b0 a1=8 a2=1b6 a3=24 items=0 
ppid=781 pid=3097 auid=4294967295 uid=0 gid=93 euid=0 suid=0 fsuid=0 
egid=93 sgid=93 fsgid=93 tty=(none) ses=4294967295 comm=exim 
exe=/usr/sbin/exim subj=system_u:system_r:exim_t:s0 key=(null)

which doesn't seem to be much help.

Searches turn up two Centos 7 reports,
https://bugs.centos.org/view.php?id=13247 and 
https://bugs.centos.org/view.php?id=12913 that look as if they might be 
the same thing with different mta alternatives, but no response to 
either.

All that the mta is supposed to be doing on these systems is reporting 
the output of cron jobs, and that appears to be happening correctly, so 
I am puzzled as to what this is about.  I'm not even sure what net 
directory is being referred to.  /proc/net?  Does an mta need to look in 
that directory?  I can send mail internally, to and from my local user 
and root, and that doesn't provoke selinux messages in the logs.

Any suggestions for where to look?

Thanks,

Stephen Isard


Re: is it possible to update kernel on out-of-data SL5?

2017-06-21 Thread Paul Robert Marino
Not trustworthy ones

On Jun 21, 2017 6:08 PM, "WILLIAM J LUTTER"  wrote:

> Recently there has been the "stack-clash" exploit that impacts several OS
> including linux
>
> (CVE-2017-1000364).   Unfortunately, I maintain several old SL5 PCs.   For
> instance, one of them is 5.7 with a 2.6.18-419 kernel.
>
>
> I suppose that kernels for SL/Centos/Redhat kernels that would be
> compatible with say SL5.7 are not maintained, so when exploits get too bad,
> then time to install SL7?
>
>
> Are there kernels that are kept up to date that could be installed for
> older SL5 via rpmfind or some such repo/download site?
>
>
> Bill Lutter
>


Re: Perl 6 just hit

2016-12-28 Thread Paul Robert Marino
Perl 5 isn't going anywhere anytime soon.
Surprisingly, you can actually get working Perl 4 RPMs for SL7.
Besides, there is too much C code linked to libperl 5, like Git for example.

On Wed, Dec 28, 2016 at 4:24 PM, Natalia Ratnikova <nata...@fnal.gov> wrote:
> Hi All,
> With Perl 6 on the scene, is Perl 5 expected to continue to get full support
> within SL?
> Thanks.
>Natalia.
>
>
> On 12/28/16 3:09 PM, Paul Robert Marino wrote:
>>
>> Well I'm hoping my multi-threaded code will actually be able to use
>> multiple CPU cores on Linux. its worked on Solaris for a long time but
>> for some reason its always been CPU core bound on Linux.
>> Also I would like to start a local Perl 6 work group for Perl 5
>> programmers looking to port their code. there is one for active Perl 6
>> projects but they don't want any one who doesn't already have an
>> active Perl 6 project to attend. I asked them very politely for a
>> clarification on their policy and didn't not get a response. I didn't
>> get a reply but I know other Perl 5 programmers who showed up looking
>> to get porting tips, and were asked to leave because they weren't
>> currently Perl 6 programmers, which is a very poor approach to take if
>> you really want to rebuild the Perl community.
>>
>> On Tue, Dec 27, 2016 at 8:55 PM, ToddAndMargo <toddandma...@zoho.com>
>> wrote:
>>>>
>>>> On Tue, Dec 27, 2016 at 8:28 PM, ToddAndMargo <toddandma...@zoho.com>
>>>> wrote:
>>>>>
>>>>> Hi All,
>>>>>
>>>>> Perl 6 just hit EPEL: rakudo-star.x86_64 0:0.0.2016.11-1.el7
>>>>>
>>>>> -T
>>>>>
>>>>> --
>>>>> ~~
>>>>> Computers are like air conditioners.
>>>>> They malfunction when you open windows
>>>>> ~~
>>>
>>>
>>> On 12/27/2016 05:53 PM, Paul Robert Marino wrote:
>>>>
>>>> Cool
>>>> I guess that means I really should start writing in Perl 6
>>>>
>>> I am looking forward to the improved way of passing variables to
>>> subroutines.  :-)
>>>
>>>
>>> --
>>> ~~
>>> Computers are like air conditioners.
>>> They malfunction when you open windows
>>> ~~


Re: Perl 6 just hit

2016-12-28 Thread Paul Robert Marino
Well, I'm hoping my multi-threaded code will actually be able to use
multiple CPU cores on Linux. It's worked on Solaris for a long time, but
for some reason it's always been bound to a single CPU core on Linux.
Also, I would like to start a local Perl 6 work group for Perl 5
programmers looking to port their code. There is one for active Perl 6
projects, but they don't want anyone who doesn't already have an
active Perl 6 project to attend. I asked them very politely for a
clarification on their policy and never got a response, but I know
other Perl 5 programmers who showed up looking for porting tips and
were asked to leave because they weren't currently Perl 6 programmers,
which is a very poor approach to take if you really want to rebuild
the Perl community.

On Tue, Dec 27, 2016 at 8:55 PM, ToddAndMargo <toddandma...@zoho.com> wrote:
>> On Tue, Dec 27, 2016 at 8:28 PM, ToddAndMargo <toddandma...@zoho.com>
>> wrote:
>>>
>>> Hi All,
>>>
>>> Perl 6 just hit EPEL: rakudo-star.x86_64 0:0.0.2016.11-1.el7
>>>
>>> -T
>>>
>>> --
>>> ~~
>>> Computers are like air conditioners.
>>> They malfunction when you open windows
>>> ~~
>
>
> On 12/27/2016 05:53 PM, Paul Robert Marino wrote:
>>
>> Cool
>> I guess that means I really should start writing in Perl 6
>>
>
> I am looking forward to the improved way of passing variables to
> subroutines.  :-)
>
>
> --
> ~~
> Computers are like air conditioners.
> They malfunction when you open windows
> ~~


Re: Perl 6 just hit

2016-12-27 Thread Paul Robert Marino
Cool
I guess that means I really should start writing in Perl 6

On Tue, Dec 27, 2016 at 8:28 PM, ToddAndMargo  wrote:
> Hi All,
>
> Perl 6 just hit EPEL: rakudo-star.x86_64 0:0.0.2016.11-1.el7
>
> -T
>
> --
> ~~
> Computers are like air conditioners.
> They malfunction when you open windows
> ~~


Re: Regarding latest Linux level 3 rootkits

2016-09-08 Thread Paul Robert Marino
This thread raises some interesting points, but I've seen a few
misconceptions in it too.

First let me clear up the misconceptions.
1) BusyBox is meant to make the footprint of an appliance or live
"CD" type distro smaller; it is not for security. Rootkits that replace
BusyBox have been seen in the wild on appliances in the past, so it's
not necessarily a good answer for preventing rootkits.
2) you really don't need root to be writable in production on any
system, as long as you've laid out your logical volumes or partitions
correctly and add a few simple operational procedures when you
update or modify configuration files. As far as updates go, you can
still do them by remounting (mount -o remount) the filesystem as
read-write first, running your updates, then remounting again as read-only
(see the example after this list). I've been doing this on secure systems
for over 20 years and it's quite effective, preventing about 40% of
rootkits from infecting a system. In fact, in the cloud you should make
your instances as stateless and read-only as possible. You do this by only
doing updates on the VM you use to create your image and not on the
production VMs; instead you should just replace them when you are ready to
update. This works well with things like AWS autoscaling and other similar
solutions. In reality, on most systems only /var, /tmp (preferably
tmpfs), and maybe /home need to be writable in production.
3) SELinux secures system processes but not user-executed commands; for
that you need AppArmor, which is not implemented in RHEL yet.
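
As a rough sketch of that read-only root update procedure (assuming / is
set read-only in fstab):

  mount -o remount,rw /    # temporarily make the root filesystem writable
  yum -y update            # run your updates
  mount -o remount,ro /    # put it back to read-only when you are done
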

Now some suggestions.
A simple solution for file integrity checking is AIDE
(https://sourceforge.net/projects/aide/); an RPM for it has been included
in all versions of RHEL and Fedora since the first versions.
If you want better, more enhanced rootkit protection, look at Samhain
(http://la-samhna.de/samhain/); here is a SANS article from a couple of
years ago which can walk you through the install:
https://www.sans.org/reading-room/whitepapers/detection/samhain-host-based-intrusion-detection-file-integrity-monitoring-34567
If you want to get really into it, look at OSSEC, but be warned: I've
found OSSEC has issues with the cloud, specifically when autoscaling is
involved, because it's very difficult to remove retired instances.

In the cloud I have found that simpler solutions like AIDE work well,
because you don't need to register and unregister instances; also, you
can pre-create the database on the VM you use to create the image, so no
post-launch scripts need to be added to cloud-init.
That said, I haven't tried Samhain in the cloud yet, but I think it may
have some potential there; the implementation of it for the cloud would
just need a lot more forethought than a standard install.
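
As a rough example of baking that AIDE baseline into the image (the package
name and database paths are the EL defaults and may differ on your build):

  yum -y install aide
  aide --init                                                # build the initial database on the image VM
  mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz   # promote it to the active baseline
  aide --check                                               # run later on the instances to compare against the baseline
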

On a side note, affecting the load order of libraries is not a new
method for rootkits, and many of them have been known to install
themselves in users' home directories and modify the user's profile
and/or .bashrc.
Also, I know it hasn't been updated in a long time, but if you want to
know more about securing a Linux box I highly suggest you take a look
at what Bastille Linux does (http://bastille-linux.sourceforge.net/).
Even though it's out of date (it looks for config files in the wrong
places or expects old syntax), a lot of the base concepts of what
it did are correct and very good to follow if you want to secure a system.

On Thu, Sep 8, 2016 at 6:44 AM, Vladimir Mosgalin
 wrote:
> Hi Steven J. Yellin!
>
>  On 2016.09.07 at 19:03:32 -0700, Steven J. Yellin wrote next:
>
>> Are rpm and the check sum tools statically linked?  If not, hiding
>> copies of them might not help if libraries have been compromised.  But
>> busybox is statically linked, and it looks like it can be easily used to
>> replace most commands used to check security without going to the trouble of
>> pulling files from it.  For example, 'ln -s busybox md5sum' allows use of
>> busybox's md5sum and 'ln -s busybox vi' allows use of its vi. See
>> https://busybox.net/FAQ.html#getting_started .
>
> Statically linked rpm won't help you at all. This malware in question
> doesn't modify any system files or libraries, it installs new (non
> system-managed) library and creates extra config file for linker, it has
> random name and is treated as non system-managed as well. This library
> preloads itself for any non-statically linked binary and replaces libc
> functions.
>
> rpm has absolutely nothing to do with non-system files, you can do as
> many verify passes as you want, using statically linked rpm binary if
> you prefer, and it won't show you that anything is wrong.
>
> --
>
> Vladimir


Re: Red Hat's new virtualization

2016-08-27 Thread Paul Robert Marino
I ran RHEV in production in a previous job; to give you an idea, it's
similar to VMware vSphere. It allows you to have a single host manage
your virtualization environment, including live migrations. In addition,
it can monitor the virtual machines and hardware, so if VMs crash
unexpectedly due to hardware failures it can relaunch them on a
different host, but it requires the management host to have access to
iLOs, DRACs, or similar devices to control the power switches of the
servers, and the use of SAN or NAS storage, to fully work correctly. It
can also manage Gluster storage clusters; however, it assumes that the
Gluster clusters are dedicated to RHEV for use by the virtual machines
and nothing else.
There are two features vSphere has that oVirt does not:
1) Last I worked with it there was no virtual switch option, although I
know there is at least a plan to integrate Open vSwitch in the future.
2) It cannot bring a VM back online in the identical running state if
the hardware crashed. VMware does this by mirroring the RAM to a
ramdisk on the SAN; the developers of oVirt consider this to be an edge
case that is often misused, and I think they are right. It slows down
writes to the VM's RAM significantly and eats the cache on the SAN,
slowing down everything else on it.
oVirt is the upstream project.
There is one main difference between oVirt and RHEV: with RHEV you can
use a stripped-down appliance image on the servers running the VMs.
The appliance is tiny, just a few hundred MB, and unless it's in
maintenance mode (all of the VMs have been automatically migrated off
of it and the management console has temporarily removed it from the
pool of usable servers) it runs with most of its volumes mounted
read-only, including the one where it stores its configs, so it's
considerably hardened. In fact, I've run the whole thing off a bootable
SD card slot on a motherboard before, in an HP DL385 with no drives or
RAID controller. I just loaded it with 8-core CPUs, lots of RAM, and a
high quality SD card in the slot on the motherboard, and it was good to
go.
For storage I suggest using NFS, Gluster (with a minimum of 3 nodes for
quorum), or a SAN; also keep in mind NFS is required for ISO image
storage and data center migrations. While iSCSI is supported, I don't
recommend using it, because last I worked with it, it had some nasty
race conditions which can stop the system from working until you dig
deep into the database to fix it. Red Hat's support cannot help you
with it if that happens; they say just to drop the database and reload
from backups. That said, I have fixed it before by manually deleting
the frozen tasks from the table and triggering the stored procedure to
release the lock, but it took me an hour or two to figure it out, and
the only reason I was able to is that I used to be a PostgreSQL DBA and
could read and understand the PL/pgSQL procedures.
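
For example, the kind of 3-node replicated Gluster volume I mean could be
created roughly like this (host and brick names are just placeholders):

  gluster peer probe host2.example.com
  gluster peer probe host3.example.com
  gluster volume create vmstore replica 3 \
      host1.example.com:/bricks/vmstore \
      host2.example.com:/bricks/vmstore \
      host3.example.com:/bricks/vmstore
  gluster volume start vmstore
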
On a side note, if you are looking at RHEV and CloudForms, it's also a
good idea to look at cloud-init as well.


On Sat, Aug 27, 2016 at 10:01 AM, Steven Miano  wrote:
> The upstream of CloudForms is actually: http://manageiq.org/
>
> On Sat, Aug 27, 2016 at 6:16 AM, David Sommerseth
>  wrote:
>>
>> On 27/08/16 09:23, ToddAndMargo wrote:
>> > Hi All,
>> >
>> > Will we be seeing any of this?
>> >
>> >
>> > http://www.infoworld.com/article/3111908/virtualization/red-hat-virtualization-4-woos-vmware-faithful.html
>> >
>> >
>> > And does it have anything to do with qemu-kvm?
>> >
>>
>> AFAIK, Red Hat Virtualization (RHV) is building upon libvirt and
>> qemu-kvm.  The difference is that it comes with a far more powerful
>> management tool than virsh and virt-manager and the host OS is a scaled
>> down RHEL installation fine-tuned for being a virtualization host.
>>
>> Right now I've forgotten what the upstream project of RHV is named, but
>> it should exist such a project.
>>
>> You also have CloudForms, which is an even wider scoped management tool
>> capable of managing more than just libvirt/qemu-kvm virt hosts.  The
>> upstream project for this is called oVirt, IIRC.
>>
>>
>> --
>> kind regards,
>>
>> David Sommerseth
>
>
>
>
> --
> Miano, Steven M.
> http://stevenmiano.com


Re: Perl window question

2015-11-23 Thread Paul Robert Marino
Sure, I'll send you some stuff when I get home tonight.

On Mon, Nov 23, 2015 at 5:58 PM, ToddAndMargo  wrote:
>>>Original Message
>>> From: ToddAndMargo
>>> Sent: Sunday, November 22, 2015 23:00
>>> To: scientific-linux-users@fnal.gov
>>> Subject: Perl window question
>>>
>>> Hi All,
>>>
>>> I am trying to teach myself Perl. I am also trying to
>>> get away from Zenity.
>>>
>>> Any of you guys have a favorite method of creating a
>>> windows from Perl that is SL7 friendly (meaning the
>>> modules are available)?
>>>
>>> Many thanks,
>>> -T
>>>
>>
>
> On 11/23/2015 07:04 AM, prmari...@gmail.com wrote:
>>
>> By window I assume you mean X11
>> In that case look at Perl/TK there are several great modules that can help
>> you, that's the classic method although most people just do web interfaces
>> now‎.
>> Also if you would like I could suggest some books to read that would help
>> you a lot.
>> I'm a pretty heavy Perl programmer my self and am always happy to help any
>> one who wants to learn Perl.‎
>> One thing I will advise it seems pretty abstract at first but learn how
>> anonymous references work in Perl because they are really the key to
>> unlocking the true power of the whole language, I always consider it as the
>> key piece of knowledge that separates Perl scripters from a Perl programmer.
>>
>
> Hi Prmarino,
>
> I will look up Perl/TK.
>
> "anonymous references"? Is that the mysterious "->" thingy?
>
>  $ua = LWP::UserAgent->new;
>  $ua->timeout ( MaxTime1 );
>  $ua->show_progress;
>
> Can you point me to a good web page to "finally" figure out
> what that is?
>
> Thank you for helping me with this!
> -T
>
>


Re: how to add packages that are not in the repo?

2015-06-02 Thread Paul Robert Marino
By the way, for future reference, yum provides '*/kalarm' would have
told you if it was in a package with a different name than you expect.
Also, I always check rpm.pbone.net; it's very useful for finding RPMs in
3rd party repos.
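
For example (quote the glob so the shell doesn't expand it; the file names
here are only illustrative):

  yum provides '*/kalarm'          # which package owns a file matching */kalarm
  yum provides '*bin/kaffeine'     # same idea for the kaffeine binary discussed below
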

On Tue, Jun 2, 2015 at 2:42 PM, Mark Stodola stod...@pelletron.com wrote:
 That is a source rpm.  It contains the source files and settings to build
 the binary packages for a target system.  Installing it would give you the
 files necessary to run it through rpmbuild.  You would then need to install
 the resulting package, assuming you have all of the required libraries and
 development tools needed for a successful build.  Many people use a tool
 called mock to help accomplish this.


 On 06/02/2015 12:35 PM, t...@telekon.org wrote:

 this version is independent,
 http://pkgs.repoforge.org/kaffeine/kaffeine-0.8.7-1.rf.src.rpm

 can it be installed with yumex? or should i use one of these,

 su yum localinstall kaffeine-0.8.7-1.rf.src.rpm

 yum install kaffeine-0.8.7-1.rf.src.rpm

 su -c 'yum -y install kaffeine-0.8.7-1.rf.src.rpm'

 (i'm about ready to give up)

 --- Original Message ---
 From: Mark Stodola stod...@pelletron.com
 To: scientific-linux-users@fnal.gov scientific-linux-users@fnal.gov
 Subject: Re: how to add packages that are not in the repo?
 Date: Tue, 2 Jun 2015

 On 06/02/2015 11:01 AM, Tini wrote:

 can you make a repo install the version that you want,
 instead of the latest version?

 i want this: http://pkgs.repoforge.org/kaffeine/ (ver. 0.8)

 but instead i keep getting version 1.0

 -Tini


 If you look carefully at that repo, you will notice that 0.8 is for EL 5
 and below. If you are getting 1.1, I am guessing you are on EL 6.

 There are ways to specify versions/roll back, but in this case I don't
 think it is a good idea to mix packages like that.

 If you are feeling adventurous you could try it I guess.
 Just include the full release info in the package name I think.
 kaffeine-0.8.7-1.el5.rf.x86_64

 Alternately, you can just download by hand and toss it in with yum or
 rpm. If you have automated yum updates, it will get caught and upgraded
 unless you exclude/blacklist it.




Re: write permission error on a shared drive

2014-11-29 Thread Paul Robert Marino
Mahmood
you will also probably need to learn about the setgid bit.
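
As a rough sketch, on the server a group-writable shared directory using the
setgid bit would look something like this (the group name is just an example):

  groupadd datausers
  chgrp datausers /data
  chmod 2775 /data                 # the leading 2 is the setgid bit, so new files inherit the group
  usermod -aG datausers mahmood    # add each user who needs write access
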



On Sat, Nov 29, 2014 at 12:23 PM, Nico Kadel-Garcia nka...@gmail.com wrote:
 On Sat, Nov 29, 2014 at 10:56 AM, Mahmood N nt_mahm...@yahoo.com wrote:
 Hi
 A server and a client both run SL6.3. On server, I have exported a disk with
 the following property
/data 192.168.1.0/24(rw,sync,no_root_squash)

 and on the client side, I wrote this entry in the fstab
192.168.1.5:/data   /data   nfs defaults 0 0

 However on the client side, I am not able to create folders.

 [mahmood@client data]$ mkdir afolder
 mkdir: cannot create directory `afolder': Permission denied

 However, root has the write permission.

 [root@client data]# mkdir a
 [root@client data]#

 How can I grant the write permission tot he user?

 Regards,
 Mahmood

 You need to learn about uid, gid, and file system permissions.
 The user and the group that own a file are stored, on the NFS
 server's file system, as numbers. Those numbers are tied to group and
 owner as far as the login name and login user's groups by
 /etc/passwd, /etc/group, and lots of different network tools that
 can also do that.

 If the user name on the client *has the same uid and group gid
 memberships* as the server expects, then they'll typically have
 permission to write to those directories. This is much like file
 ownership on a local directory. If someone else owns the directory,
 *and did not allow write access to others*, others will not be able to
 write there.

 In this case, I would do ls -al /data and see who owns it. Then I'd
 look up the man pages for chown and chgrp and chmod to get a
 handle on what you want to allow and prevent.


Re: Final Solution to Chinese Break in

2014-10-05 Thread Paul Robert Marino
That is because the Secret Service is part of the Treasury Department, oddly enough. Even though they are most known for protecting the president, they are actually a law enforcement agency.

-- Sent from my HP Pre3

On Oct 5, 2014 2:54 AM, jdow j...@earthlink.net wrote:

If credit card fraud was involved you might check with the Secret Service. At
least in the mid 80s credit card fraud was investigated by the Secret Service. 
I've no freaking idea why; but, during an investigation about some online 
stalking featuring me as one of the victims credit card fraud was involved. I 
was interviewed about it by a Secret Service agent and an FBI agent 
concurrently. Both were just a touch out of their depth. Sigi Kluger was 
ultimately prosecuted for the CC fraud, not the stalking, not the death threats, 
not the bodily harm (chop me up and feed me to his dog) but CC fraud that was 
small amounts over a full year. VERY few women stayed with McGraw-Hill's "BIX" 
or "Byte Information eXchange" through that year. I was too damn stubborn to be 
run out. But - damn - CC fraud was the Secret Service's domain? Washington DC 
was hopelessly screwed up even then.

{^_^}   Joanne

On 2014-10-04 21:26, Paul Robert Marino wrote:
 you may be right Interpol's economic crimes division might be the
 right way to go; I've never considered that before.


 On Sat, Oct 4, 2014 at 8:56 PM, Nico Kadel-Garcia nka...@gmail.com wrote:
 On Sat, Oct 4, 2014 at 9:26 PM, Bill Maidment b...@maidment.me wrote:
 There used to be an organisation called Interpol to deal with international crime. I haven't heard anything recent about them; do they still exist?

 Regards
 Bill Maidment

 Interpol still exists, they've a web site at
 https://www.interpol.int/. Since we've gone way off this mailing
 list's announced purpose, I'll stop here.


Re: Final Solution to Chinese Break in

2014-10-05 Thread Paul Robert Marino
Well, it looks like in 2003 it was transferred to the then-new DHS. It originally was not a waste of money, because the FBI didn't exist yet, so it was vital at the time. Can it be consolidated now? Yes, but that doesn't mean a new conglomerate agency would do a good job.

All things said and done, outside of the protection of the president and other dignitaries, the Secret Service has a long history of being very good and efficient at their originally mandated job, even in the internet age, which is really to deal with finance-related crimes, originally meaning preventing the circulation of fake currency.

I think a new agency needs to be developed with a strict mandate of international internet crimes; however, I don't trust modern politicians to do a great job of designing it to work in our best interests. Honestly, I don't think they know enough about the subject to even know if they are or are not doing the right thing.

-- Sent from my HP Pre3

On Oct 5, 2014 2:04 PM, Jason Bronner jason.bron...@gmail.com wrote:

It's under DHS now and has been under DHS for quite a while. Initially that's who is tasked with investigating counterfeiting US currency and is why they were a division of the Treasury Dept. Had a nice chat with one of their reps when I was still in management and someone shot a fake 100 through my safe and I filled out the paperwork on it at the bank.

On Sun, Oct 5, 2014 at 11:54 AM, jdow j...@earthlink.net wrote:

Like I say, our government is totally disorganized and overbloated. Law enforcement should be part of DoJ not Treasury. But, we gotta waste money somehow so we get a mishmash of a hodgepodge. nuff said - except to note that I wanted a chance to live enough that I illegally carried a .38 special for most of that year and slept with it under the pillow. Surrender is for victims. I gave up being a victim and liked the feeling.

{o.o}

On 2014-10-05 06:27, Paul Robert Marino wrote:

That is because the secret service is part of the treasury department oddly
enough. Even though they are most known for protecting the president they are
actually a law enforcement agency.



-- Sent from my HP Pre3


On Oct 5, 2014 2:54 AM, jdow j...@earthlink.net wrote:

If credit card fraud was involved you might check with the Secret Service. At
least in the mid 80s credit card fraud was investigated by the Secret Service.
I've no freaking idea why; but, during an investigation about some online
stalking featuring me as one of the victims credit card fraud was involved. I
was interviewed about it by a Secret Service agent and an FBI agent
concurrently. Both were just a touch out of their depth. Sigi Kluger was
ultimately prosecuted for the CC fraud, not the stalking, not the death threats,
not the bodily harm (chop me up and feed me to his dog) but CC fraud that was
small amounts over a full year. VERY few women stayed with McGraw-Hill's "BIX"
or "Byte Information eXchange" through that year. I was too damn stubborn to be
run out. But - damn - CC fraud was the Secret Service's domain? Washington DC
was hopelessly screwed up even then.

{^_^} Joanne

On 2014-10-04 21:26, Paul Robert Marino wrote:
  you may be right Interpol's economic crimes division might be the
  right way to go; I've never considered that before.
 
 
  On Sat, Oct 4, 2014 at 8:56 PM, Nico Kadel-Garcia nka...@gmail.com wrote:
  On Sat, Oct 4, 2014 at 9:26 PM, Bill Maidment b...@maidment.me wrote:
  There used to be an organisation called Interpol to deal with international
crime. I haven't heard anything recent about them; do they still exist?
 
  Regards
  Bill Maidment
 
  Interpol still exists, they've a web site at
  https://www.interpol.int/. Since we've gone way off this mailing
  list's announced purpose, I'll stop here.
 




Re: Final Solution to Chinese Break in

2014-10-04 Thread Paul Robert Marino
One other problem is the FBI can only investigate criminals operating within the United States; they really can't do anything if the criminal is operating out of another country, due to their mandated scope of enforcement.

In fact, internet crimes are really difficult for any law enforcement agency, because they are usually international and therefore exceed their jurisdiction. The laws that limit jurisdiction are meant to protect our rights and prevent any one law enforcement agency from having enough power to threaten the government; however, this makes it nearly impossible for any one of them to truly investigate internet crimes. What is needed is a new agency whose jurisdiction is international internet crimes; however, that also presents its own risks, because if you think the NSA is bad about respecting our rights, just wait to see what an international agency tasked with tracking internet crimes would do.

-- Sent from my HP Pre3

On Oct 3, 2014 12:30 AM, Nico Kadel-Garcia nka...@gmail.com wrote:

On Thu, Oct 2, 2014 at 4:02 PM, Larry Linder
larry.lin...@micro-controls.com wrote:
 on May 22 Our server was broken into by some one in China.   How it happened
 is that we had had a hole in our firewall so employees could access out
 server from the field.   This had worked pretty well - until the AT Motorola
 modem died and they install two new ones and left the port to the ssh open.

*Ouch*. Dude, you've my sympathies. This sort of thing is precisely
why I argue with people about the concept of "we have a firewall, so
we don't need to be so rigorous about our internal network security".
And oh, yes, the old standby "who would want to hack us?"

 The people who did this job had more than a working knowledge of networks,
 Linux and files systems.   We were wondering how they could create a
 directory at end of file system was a puzzle.   They had root privilege, ssh,
 and with access to bash they were in.

And the kernel. Don't forget that with that level of access, they can
manipulate the modules in your kernel.

 How did they covered their tracks so well?  "messages" was there but filled
 with nonsense and file in /var/log that tells you who and what was sent was
 touched was now missing.   "security" was there and you could see the

And since they owned root, they could replace core system libraries,
even corrupting compilers. *nothing* rebuilt on that host can be
trusted.

 repeated access attempts to break in again.  "cron" was changed so daily
 backups were done after they down loaded all new files.   "crontab -e" no
 longer worked.
 We made a copy of the OS onto old disk and removed disk from the system.
 There were so many charges to the OS and files in /etc that we did not even
 try to repair it.   There were 1000's of differences between new install and
 copy of old system.

 I personally think the bash problem is overblown because they have to get
 through the modem, firewall, and ssh before they can use "bash".

That is *one* instance, and not really relevant to the circumstances
you described. In fact, many systems expose SSH to the Internet at
large for "git" repository access, and for telecommuting access to
firewalls and routers. The big problem with "shellshock" was that
attempts to restrict the available commands for such access, for
example inside "ForceCommands" controlled SSH "authrozed_keys" files,
could now broken out of and allow full local shell access. Once you
have *that* on a critical server, your hard crunch outershell is
cracked open and your soft chewy underbelly exposed.

 One question remains and that is what code and script did they use to run the
 system??

Gods only know. there are so *many* rootkits in the wild, and so much
theft of private SSH keys and brute force attacks or theft of
passwords, it's hard to know how they got in.

 If anyone wants details and IP's I will send it to them on an individual
 basis.

 We contacted the FBI and after a telephone interview,  they were sort of
 interested but I think the problem is so big they don't have time to work
 little stuff.

My personal experience with the FBI and computer crime is that they
are simply not competent. They accept information eagerly and do
nothing at all helpful with it. They have a very poor track record of
getting crackers to turn each other in and abusing the resulting
immunity from prosecution, and not actually investigating or
prosecuting more than the tiniest fraction of crimes reported.

 This is a little disjointed because it happened over a long time.

 Larry Linder

As I mentioned, you have my sympathies. It's a good reminder to keep
your internal systems updated from known attack vectors.

Re: about realtime system

2014-08-24 Thread Paul Robert Marino
Nico
Depending on the role of the particular system and/or which company I
was working for at the time, I've needed one, the other, or both.
In my current role in the broadcast industry, precision with
predictable latency is more important for most of my systems.
That said, when I worked in the financial industry it changed based on
what part of the industry I was working for.
The stock exchanges I've worked for cared about precision, because it
was more important to them to make sure everyone had the same latency
and our logging was accurate for audit purposes.
When I used to work for a managed systems vendor for hedge funds, they
were all about low latency, because how fast they got data in and out
of the exchanges often determined whether they had an edge over their
competitors or not. Quite literally, an extra millisecond could cost
them millions of dollars a second due to the nature of high frequency
trading.

Generally I don't think of real-time kernels when I am thinking about
low latency, because often they increase the latency when dealing with
multiple operations; however, the reverse can be true if you have a
box doing only one specific task, but that is rarely the case.

By the way, one of those stock exchanges is where the VMware engineers
told us never to use their product in production. In fact, some of
our applications would actually detect the clock instability in the
VMware clocks and would shut themselves down rather than produce
inaccurate audit logs, so we found we had trouble even using VMware in
our development environments.

By the way, Red Hat only told me recently about guaranteeing the
microsecond precision of the clocks in KVM on RHEV, and said they have
been doing it in the financial industry for over a year. There are
conditions, though, such as needing to turn off support for overbooking
the CPU cores. Last I checked, VMware still says do not use their product
anywhere you need millisecond-accurate clocks.

Furthermore, I don't know about the statement "Anyways, KVM will not
handle latency any better than Vmware." The article you pointed out
talks about vCPUs going in and out of halted states, which is normal
and completely expected in VMware, because they always assume you are
going to overbook your CPU cores. There is a slight difference when
you talk about KVM in paravirtualized mode with overbooking disabled:
it directly maps the CPU cores to the VM, so as long as you don't have
power management enabled the CPUs are always operating at full speed.
Furthermore, you can directly map PCIe bus addresses to the VM
(essentially assigning a card on your bus directly to the VM to be
completely managed by its kernel) to reduce latency to the hardware in
other ways if you need to.

On Sun, Aug 24, 2014 at 2:02 PM, Nico Kadel-Garcia nka...@gmail.com wrote:
 On Sun, Aug 24, 2014 at 12:57 PM, John Lauro
 john.la...@covenanteyes.com wrote:
 Why spread FUD about Vmware.  Anyways, to hear what they say on the subject:
 http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf

 Anyways, KVM will not handle latency any better than Vmware.

 - Original Message -
 From: Paul Robert Marino prmari...@gmail.com
 To: Nico Kadel-Garcia nka...@gmail.com
 Cc: Brandon Vincent brandon.vinc...@asu.edu, llwa...@gmail.com, 
 SCIENTIFIC-LINUX-USERS@FNAL.GOV
 scientific-linux-users@fnal.gov
 Sent: Sunday, August 24, 2014 12:26:17 PM
 Subject: Re: about realtime system

 Wow I don't know how VMware got mentioned in this string but VMware
 is
 not capable of real time operation and if you ask the senior
 engineers
 at VMware they will tell you they don't want you even trying it on
 their product because they know it wont work. The reason is VMware
 plays games with the clock on the VM so the clocks can never be 100%
 accurate.
 It should be possible to do real time in KVM assuming you don't
 overbook your CPU Cores or RAM. Apparently Red Hat has been doing
 VM's
 with microsecond accurate clocks with PTP running on the
 visualization


 I mentioned that I hope they were using real servers, not VM's. I'd
 had people try to run real-time systems in virtualization,
 specifically with VMware, and it wasn't workable for their needs.

 Also, high precision is not the same as low latency, although both
 are often grouped together for real-time operations. I'm curious if
 Paul needs both.


Re: about realtime system

2014-08-24 Thread Paul Robert Marino
PS. That last paragraph was intended to respond to John not Nico.

On Sun, Aug 24, 2014 at 3:27 PM, Paul Robert Marino prmari...@gmail.com wrote:
 Nico
 Depending on the role of the particular system and or which company I
 was working for at the time I've need one the other or both.
 In my current role in the broadcast industry precision with
 predictable latency is more important for most of my systems.
 That said when I worked in the financial industry it changed based on
 what part of the industy I was working for.
 The stock exchanges I've worked for cared about precision because it
 was more important to them to make sure every one had the same latency
 and our logging was accurate for audit purposes.
 When I used to work for a managed systems vender for hedge funds they
 were all about low latency because the faster they got data in and out
 of the exchanges often determined if they had an edge over their
 competitors or not. quite literally an extra millisecond could cost
 them millions of dollars a second due to the nature of high frequency
 trading.

 Generally I don't think of real time kernels when I am thinking about
 low latency because oftent they increase the latency when dealing with
 multiple operations. however the reverse can true if you only have a
 box doing one specific task only but that rarely is the case.

 By the way one of those stock exchanges is where the VMware engineers
 told us never to use their product in production. In fact we had huge
 problems with VMware in our development environments because some of
 our applications would actually detect the clock instability in the
 VMware clocks and would shut themselves down rather than have
 inaccurate audit logs. as a result we found we had trouble even using
 it in our development environments.

 By the way Red hat only told me recently about guaranteeing the
 microsecond precision of the clocks in KVM on RHEV and said they have
 been doing it in financial for over a year. there are conditions
 though such as you need to turn off support for overbooking the CPU
 cores. last I checked VMware still says do not use their product
 anywhere where you need millisecond accurate clocks.

 Further more I dont know about that statement Anyways, KVM will not
 handle latency any better than Vmware. the article you pointed out
 talks about VCPU's going in and out of halted states, which is normal
 and completely expected in VMware because they always assume you are
 going to overbook your CPU cores. there is a slight difference when
 you talk about KVM in paravirtualized mode with overbooking disabled
 it directly maps the CPU cores the the VM so as long as you don't have
 power management enabled the CPU's are always operating at full speed
 further more you can directly map PCIe bus address to the VM
 (essentially assigning a card on your bus directly to the VM to be
 completely managed by its kernel) to reduce latency in other ways to
 hardware if you need too.

 On Sun, Aug 24, 2014 at 2:02 PM, Nico Kadel-Garcia nka...@gmail.com wrote:
 On Sun, Aug 24, 2014 at 12:57 PM, John Lauro
 john.la...@covenanteyes.com wrote:
 Why spread FUD about Vmware.  Anyways, to hear what they say on the subject:
 http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf

 Anyways, KVM will not handle latency any better than Vmware.

 - Original Message -
 From: Paul Robert Marino prmari...@gmail.com
 To: Nico Kadel-Garcia nka...@gmail.com
 Cc: Brandon Vincent brandon.vinc...@asu.edu, llwa...@gmail.com, 
 SCIENTIFIC-LINUX-USERS@FNAL.GOV
 scientific-linux-users@fnal.gov
 Sent: Sunday, August 24, 2014 12:26:17 PM
 Subject: Re: about realtime system

 Wow I don't know how VMware got mentioned in this string but VMware
 is
 not capable of real time operation and if you ask the senior
 engineers
 at VMware they will tell you they don't want you even trying it on
 their product because they know it wont work. The reason is VMware
 plays games with the clock on the VM so the clocks can never be 100%
 accurate.
 It should be possible to do real time in KVM assuming you don't
 overbook your CPU Cores or RAM. Apparently Red Hat has been doing
 VM's
 with microsecond accurate clocks with PTP running on the
 visualization


 I mentioned that I hope they were using real servers, not VM's. I'd
 had people try to run real-time systems in virtualization,
 specifically with VMware, and it wasn't workable for their needs.

 Also, high precision is not the same as low latency, although both
 are often grouped together for real-time operations. I'm curious if
 Paul needs both.


Re: about realtime system

2014-08-24 Thread Paul Robert Marino
John
Reread the first and third paragraphs of my previous email.
Trading firms care about low latency but never cared about the
millisecond accuracy of the clocks. Stock exchanges, on the other hand,
want predictable latency, not necessarily low latency, but absolutely
require millisecond and, if possible, microsecond accurate clocks.
The reason for this is trading firms are worried about getting quotes,
bids, executions, etc. to the exchange's gateways as fast as possible;
however, the exchange has to be able to prove to both the member firms
and the regulators that everyone is treated fairly once they put an
order into the gateway.

Yes, VMware says that under very specific configurations, with 3/4ths
of their features disabled and special network cards which offload
their virtual switch's work, VMware can handle low latency, but they
still cannot handle clocks accurate to the millisecond and certainly
can't handle it to the microsecond.
Furthermore, I find this article highly suspect, because it's talking
about reducing the latency overhead in their virtualization stack to
the point where it becomes less noticeable, not necessarily true low
latency. This makes it acceptable for small hedge funds which have
staff and equipment budget constraints, but not really good enough for
the big boys if they are smart. I would advise you to be careful with
VMware's technical marketing docs and blogs in this area, because the
sales people will tell you anything to get you to buy it; their high
level engineers will actually tell you the truth if they know what you
are using it for. In true real-time and high-precision situations
their senior engineers will tell the sales department to wave you off
of using their product, if your employer is a big enough name for
the real senior engineers (not sales engineers) to look at your
design prior to sale.

If you dive deep into that article it says you need:
   1) very specific hardware support, specifically network cards
   2) to turn off vMotion and all of the other fault tolerance features
   3) to have very specific features turned on
   4) (it makes a strongly implied suggestion but doesn't state flat out)
that for best performance you need to align the number of cores you
assign to the layout of the cache in your CPU, so you don't get
multiple VMs sharing CPU cache, even if that means assigning more
vCPUs than you need
   5) a separate physical network card for each VM
   6) to disable memory overcommitting (same as KVM)
   7) to disable CPU overcommitting (same as KVM)

Even with all of that, you still do not get down to 10 microseconds of
latency jitter in the network stack, and the accuracy of the clocks
is still not guaranteed to the millisecond. Nowhere in that
article or the blog is clock accuracy mentioned at all.
All they are talking about is better response-time latency, not real precision.




On Sun, Aug 24, 2014 at 3:46 PM, John Lauro john.la...@covenanteyes.com wrote:
 The recommendation changed with 5.5.
 http://blogs.vmware.com/performance/2013/09/deploying-extremely-latency-sensitive-applications-in-vmware-vsphere-5-5.html

 ... However, performance demands of latency-sensitive applications with very 
 low latency requirements such as distributed in-memory data management, stock 
 trading, and high-performance computing have long been thought to be 
 incompatible with virtualization.
 vSphere 5.5 includes a new feature for setting latency sensitivity in order 
 to support virtual machines with strict latency requirements.


 - Original Message -
 From: Paul Robert Marino prmari...@gmail.com
 To: Nico Kadel-Garcia nka...@gmail.com
 Cc: John Lauro john.la...@covenanteyes.com, Brandon Vincent 
 brandon.vinc...@asu.edu, Lee Kin
 llwa...@gmail.com, SCIENTIFIC-LINUX-USERS@FNAL.GOV 
 scientific-linux-users@fnal.gov
 Sent: Sunday, August 24, 2014 3:27:39 PM
 Subject: Re: about realtime system
 ...
 By the way one of those stock exchanges is where the VMware engineers
 told us never to use their product in production. In fact we had huge
 problems with VMware in our development environments because some of
 our applications would actually detect the clock instability in the
 VMware clocks and would shut themselves down rather than have
 inaccurate audit logs. as a result we found we had trouble even using
 it in our development environments.


Re: about realtime system

2014-08-24 Thread Paul Robert Marino
Seriously, let's take the high frequency trading thing off this list; anyone else who wants to talk to me about it, I'm perfectly happy to explain it. Furthermore, I'm willing to explain the real problems with the world financial system, but not on this list and certainly not on this thread.

-- Sent from my HP Pre3

On Aug 24, 2014 8:55 PM, Nico Kadel-Garcia nka...@gmail.com wrote:

On Sun, Aug 24, 2014 at 7:50 PM, jdow j...@earthlink.net wrote:
 The stock exchange could remove most of the problem, meaning high
 frequency trades, by placing a purely random 0 to 1 second latency
 on all incoming data and all outgoing data. The high frequency trading
 reads to me as just another means of skimming now that they're not
 allowed to round down fractional pennies and pocket the change. It's
 time to give mere mortals some practical access to the exchanges. And
 this interest in microsecond clocks would simply vanish from the
 exchanges.

The whole high frequency trading, low latency mess is due to leave the
Linux world, at least for the hosts directly receiving the data, due
to the availability of FPGA's that can live on fiber optic
connections, physically adjacent to the stock market. I can say that
based on interviews I did some years back, without a signed NDA for
the interviews, and based on press articles on the technology.

Generating the rules for the FPGA's, now, *that* is an interesting
potential Linux market, and I can point people to job ads for
precisely this. Tying it back to Scientific Linux, I'd *love* to see
them using Scientific Linux for this due to the support available from
the Scientific Linux world for oddball scientific computing
requirements. And the Scientific Linux built-in integration for 3rd
party repositories like EPEL and ATrpms is invaluable.

 On a different point, the word I can find is that the free version of
 VMWare does not support this "high latency sensitivity" setting.

 {o.o}   Joanne, Just sayin'

Cool  thanks for looking into it!

Re: XFS and dump?

2014-08-03 Thread Paul Robert Marino
Look at CXFS; your dreams of using it on other operating systems are actually possible, at least on a SAN.

-- Sent from my HP Pre3

On Aug 2, 2014 10:53 PM, Brent L. Bates blbates1...@gmail.com wrote:

I'm sorry, but the proven, reliable, and fast file system is XFS,
NOT ext4.  ext4 is the new kid on the block.  XFS has been around for
probably 20 YEARS, if not longer.  Half of that time also under Linux.
ext4 hasn't been around nearly that long.  XFS is the tried and true,
dependable, reliable, resilient, and fast file system.  I've seen it
survive hardware crashes and flaky disk drives and keep on going.
I've used it under both 32bit (not huge disk drives) and 64bit Linux
with no problems.  I would not use any other file system under Linux
and if I could, I'd use it under other OS's as well.  It is just that
good, fast, and reliable.

-- 
Brent L. Bates
Email: blbates1...@gmail.com

Re: Clarity on current status of Scientific Linux build

2014-06-23 Thread Paul Robert Marino
Well, what I don't understand here is that all of the RHEL SRPMs are on a
web server and can be downloaded if you have an entitlement.
All you need is:
1) the CA cert, located at /usr/share/rhn/RHNS-CA-CERT on any Red Hat host.
2) the entitlement cert from subscription manager, which you can get
off of access.redhat.com: go to Subscriptions - Subscription
Management - Units, then click on the subscription you would like to
use. You will see a Download button on the top left side of the
screen.
3) on the page where you downloaded the certificate there is a sub tab
called Content Set; take the URLs listed there and prefix them with
https://cdn.redhat.com

If you connect with a browser you can see it's just a standard yum repo
which uses the certificates for authentication, so most yum mirroring
tools will work just fine as long as they can supply the PKI
(entitlement) cert to the web server.
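
For example, a yum repo definition along these lines should work for
mirroring; the repo id, baseurl path, and certificate file names are only
placeholders you would take from the Content Set tab and from
/etc/pki/entitlement on your own system:

  [rhel-7-server-source-rpms]
  name=RHEL 7 Server source RPMs from the CDN
  baseurl=https://cdn.redhat.com/<path-from-the-Content-Set-tab>
  enabled=1
  gpgcheck=1
  sslverify=1
  sslcacert=/usr/share/rhn/RHNS-CA-CERT
  sslclientcert=/etc/pki/entitlement/1234567890.pem
  sslclientkey=/etc/pki/entitlement/1234567890-key.pem

With that in place, something like "reposync --repoid=rhel-7-server-source-rpms"
from yum-utils should mirror the packages.
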



On Mon, Jun 23, 2014 at 9:54 AM, Steven Timm t...@fnal.gov wrote:
 I was at the HEPiX meeting at which those slides were presented
 and there was further discussion during the course of the week
 as to what would happen.  RedHat/CentOS was also represented at that
 meeting in the person of Karanbir Singh.  You should not presume
 that the presentations given at that meeting are the final word.

 You notice that nobody with a cern.ch or fnal.gov E-mail address
 has responded to this thread up until now.  When they have
 something concrete they will respond with the details.

 Steve Timm




 It seems it is more likely that Scientific LInux 7 will become a Special
 Interest Group (SIG) of CentOS 7. See the presentations at the Hepix
 meeting
 in Annecy Le Vieux, last May, on SL 10 years, notably the ones from Connie
 Sieh and Jarek Polok:
 http://indico.cern.ch/event/274555/session/11/#20140519

 Alain

 --
 Administrateur Système/Réseau
 Laboratoire de Photonique et Nanostructures (LPN/CNRS - UPR20)
 Centre de Recherche Alcatel Data IV - Marcoussis
 route de Nozay - 91460 Marcoussis
 Tel : 01-69-63-61-34



 --
 Steven C. Timm, Ph.D  (630) 840-8525
 t...@fnal.gov  http://home.fnal.gov/~timm/
 Fermilab Scientific Computing Division, Scientific Computing Services Quad.
 Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing


Re: bootable USB flash drive questions

2014-03-23 Thread Paul Robert Marino
Hi Urs

  Here are a couple of notes on the subject. I haven't done it in a few
years, but my past experience may be useful.

1) On the subject of the motherboard not supporting booting off the
USB drive: it doesn't matter, because GRUB supports it, so if you install
GRUB as the boot loader on your hard drive it can handle it for you.

2) It's not the USB bandwidth you need to worry about for speed.
Most USB flash drives are far slower than the USB bus; some of them
measure in the hundreds of kilobytes per second, so don't expect speed
unless you buy a very high end flash drive. That said, RAM helps a lot,
and there are some other hybrid options you can look at: SuSE's live
(DV|C)Ds traditionally allowed booting off of the CD and merging it
with persistent data in disk images on a FAT32 or NTFS drive. That can
give you more speed, but at the cost of shortening the life span of
your DVD drive because of the constant use. So there are alternative
options you could evaluate.

3) More RAM will help; the more you can keep in RAM, the better your
performance will be.

4) Do not put a swap partition on your flash drive!
It will eat through the total lifetime writes very quickly; however, if you
absolutely need to, then reduce the swappiness (see the example below).

5) Use tmpfs as much as possible.
Use it for /tmp and, if you can, for /var/run and /var/log; it will eat
some RAM, but it will significantly increase the life span of the flash
drive.
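
For example, the fstab entries and swappiness tuning from points 4 and 5
could look roughly like this (sizes are just examples, and anything on
tmpfs is lost at reboot):

  tmpfs   /tmp       tmpfs   defaults,size=512m   0 0
  tmpfs   /var/run   tmpfs   defaults,size=64m    0 0
  tmpfs   /var/log   tmpfs   defaults,size=128m   0 0

and, if you really must keep a swap partition on the flash drive:

  echo 'vm.swappiness = 1' >> /etc/sysctl.conf
  sysctl -p
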




On Sun, Mar 23, 2014 at 5:00 AM, ToddAndMargo toddandma...@zoho.com wrote:
 On 03/22/2014 04:09 AM, Urs Beyerle wrote:

 3) 1x DVD read is 1.3 MB/s (10.5 Mbit/s), so a 24X DVD drive will be
 about 30 MB/s. I guess there is no speedup using a USB 2 flash drive.


 Hi Urs,

   Just playing around with it in KVM, I'd say it
 boot about 4 times faster than a CD on one of my
 customer's computers


 -T


 --
 ~~
 Computers are like air conditioners.
 They malfunction when you open windows
 ~~


Re: Wanted LDAP server configuration documents

2014-03-23 Thread Paul Robert Marino
That's a good but out of date document.

The big question is: do you want to do LDAP version 2 or 3? The big difference is Kerberos in 3 or not, and whether you are planning to use no encryption or SSL as in version 2, or TLS as in version 3, which is similar but has some additional DNS requirements.

Also, that document refers to OpenLDAP, which I used for many years, but I would advise you now to look at 389 server, which was one of the original LDAP servers written by Netscape Security Solutions and is now owned by Red Hat (also sold as RHDS) and is the basis of Oracle's Directory Server, due to a commercial fork agreement negotiated between Sun and AOL after AOL bought Netscape. It's not perfect, but it's GPL and in many ways better than OpenLDAP as a server platform, and truly the good parts existed before Netscape died, to the point where the first time I saw the Java GUI a couple of years ago I literally fell on the floor because of the nasty flashbacks of supporting the SCO version and the NT 4 domain controller sync plugin in the late 90s.

-- Sent from my HP Pre3

On Mar 23, 2014 19:45, Steven Miano mian...@gmail.com wrote:

This is a very good starting point:
http://www.tldp.org/HOWTO/LDAP-HOWTO/installing.html

On Sun, Mar 23, 2014 at 7:41 PM, Pritam Khedekar pritamkhedek...@gmail.com wrote:

Dear All,

Please send me some LDAP config documents if available. ASAP.
--  Miano, Steven M. http://stevenmiano.com





Re: Sharing users among few hosts

2014-02-17 Thread Paul Robert Marino
TLS/SSL won't work correctly if you use the /etc/hosts file. That is the real constraint with LDAP and DNS.

But it's not that severe: all you need to be able to do is forward and reverse lookup the host name and match it to the IP address. You do not really need the SRV records. As long as the name in the cert matches the DNS A record for the hostname(s), and the reverse lookup of the resulting IP also matches the hostname(s) in the cert, you are good.

One other option: you don't really need the passwords in the LDAP database. You can put them in Kerberos; then you don't have to worry about clear text passwords at all, and there are no DNS requirements. It takes about 15 minutes to set up a Kerberos server and only about an hour to set up 389 server (a.k.a. Red Hat Directory Server, a.k.a. Netscape Directory Server) from scratch to use Kerberos auth. Then in your client configs you specify the IP addresses instead of the host names.

-- Sent from my HP Pre3

On Feb 17, 2014 9:09, Tam Nguyen tam8gu...@gmail.com wrote:

If you wanted to avoid DNS, then you can *temporarily* achieve that on RH Identity Management by updating the /etc/hosts files on the server and client nodes.

-Tam
On Mon, Feb 17, 2014 at 6:57 AM, צביקה הרמתי haramaty.zv...@gmail.com wrote:
Hi. I want to have several hosts sharing the same user accounts database,
i.e., user John will be able to seamlessly log in to host1 or to host2, without having to manually configure John's credentials on each machine.
Nothing more than that... LDAP seems like the solution; however, I tried to find an easy tutorial and understood that maybe it's a little bit overkill for my humble requirements.

I've read about RH Identity Management (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/index.html)

It seemed interesting, but its DNS requirements are a little bit too complicated for my scenario (having the IDM server's public IP properly configured in a DNS record).

Am I missing something? There must be a simpler way...

Thanks,
Zvika



Re: Exchange server alternative?

2014-02-09 Thread Paul Robert Marino
You know what you also can't do with Gmail? Create a SOX-compliant export for regulators if you get audited.

So if there is reason to believe that your company's emails contain data pertinent to the financial transactions of your company and your company gets audited, you are in deep trouble. It is also the legal responsibility of the person or people in charge of maintaining the email system to ensure the compliant backups are taken and made available upon request. That's why most large and/or financial companies in the United States won't use it.

And sometimes the regulators are the ones who are actually asking for the tap, via a compliance officer, on someone's emails without managerial approval, and it's really bad if they can't do that. You can thank Enron for that.

-- Sent from my HP Pre3

On Feb 9, 2014 1:25, Nico Kadel-Garcia nka...@gmail.com wrote:

On Sat, Feb 8, 2014 at 6:53 PM, Paul Robert Marino prmari...@gmail.com wrote:
 info sec is not the problem it's a record keeping issue.

Info sec for email is *always* a problem. It's also critical to the
record keeping: the ability to re-route, or delete, or backfill email
needs to be handled. (Do not get me *started* on desires to backfill
email that never happened, or to silently tap people's email with or
without managerial permission! Switching to GMail made it possible for
me to say "can't be done!!!" and avoid responsibility for such
abuses.)

Fortunately, modern IMAP based email systems give easy ways to
transfer or replicate email wholesale ti alternative servers,
wholesale. Thank you, *gods* for the migration from POP based
services, which have mostly disappeared but stored all the folder
information on the client. For most clients, they can pretty much cut
and paste their old Exchange folders to a new IMAP environment.
*Migrating* from Exchange to the new environment takes time and
effort.

For folks recommending Zimbra or Zarafa, I'd be very curious how they
migrate data from Exchange clients. I'm sure that part's not just
"drop-in replacement".

Re: Exchange server alternative?

2014-02-08 Thread Paul Robert Marino
Have you looked at OpenChange? http://www.openchange.org/index.html
It's been a few years since I looked at it, but the goal is to create an
Exchange server replacement.

On Sat, Feb 8, 2014 at 11:34 AM, Nico Kadel-Garcia nka...@gmail.com wrote:
 Whoops, sorry! I thought you were looking for an AD replacement, not
 an Exchange replacement.

 To replace Exchange, run, do not walk, to Google Apps for Business. It
 works very well, you don't have to maintain your own expert IT
 infrastructure to deal with the vagaries and backups and security of
 email handling, and their uptime exceeds that of any internal business
 email setup I've ever seen or helped run. You lose Outlook based
 calendar functionality, but you gain document sharing and
 collaboration to replace emailing bulky email documents around. And it
 plays very, very well with Linux clients such as Scientific Linux and
 even cell phones, unlike the Exchange suite. The spam filtering is
 also *very* good.

 Unless you've got some very large demands for customized internal
 services or security far beyond that of most small shops, don't burn
 your time on setting up your own messaging or collaborative suite.
 Between managing high reliability services, backups, denial-of-service
 attacks, system security, customization requests, and migrating users
 to new tool suites,  you'll burn up any benefits from having it in
 house with months, if not within minutes, of first running it in house
 in a small environment.

 If you *have* to continue with Outlook based clients, especially for
 calendaring, look at Microsoft's Online365 services.

 I'm afraid I can't recommend the locally installed, Linux based,
 messaging suite replacements for Exchange. I tried Zimbra under
 RHEL/CentOS some years back, and rejected it as too bulky and far too
 expensive in engineering time to maintain. I hope it's gotten better
 since, but it suffered from the same problem as Exchange: awkward
 integration of conflicting components and their requirements. I don't
 expect the mentioned Zarafa tool suite to do it any better, but I'd
 be curious to see more recent experience with either.

   Nico Kadel-Garcia nka...@gmail.com


Re: What happened to adobe repository ?

2014-01-15 Thread Paul Robert Marino
FYI, they also did the same thing with the Shockwave Flash player for Linux. Apparently Adobe doesn't care about the Linux user market share any more.

-- Sent from my HP Pre3

On Jan 15, 2014 10:56, Graham Allan al...@physics.umn.edu wrote: On 1/15/2014 4:20 AM, Urs Beyerle wrote:

 Adobe discontinued the Adobe Reader 9 for Linux in June 2013 and has not
 fixed and will not fix any further security issues in it. Therefore it
 makes total sense to remove it from their repo.

I'm not disagreeing with you but it's still a breathtakingly crappy way 
of handling it. Acroread for linux is still available as a regular web 
download, so it's not remotely obvious that it's desupported unless you 
follow the news independently. For example it might have been worth a 
final mention on Adobe's "acrobat for unix" blog 
http://blogs.adobe.com/acroread/ rather than leaving that abandoned 
since 2010!

Graham

RE: Fedora Scientific Spin

2014-01-15 Thread Paul Robert Marino
I'm not touching this question lol.

-- Sent from my HP Pre3

On Jan 15, 2014 22:40, Jean-Victor Côté jean-v.c...@sympatico.ca wrote: There could be a Long Term Support (LTS) option for Fedora Scientific, built from the latest stable release and tested by the builders. This sounds a bit like Ubuntu, which might also benefit from a scientific version. Further upstream, there is Debian Science, which can be built upon directly and which even has versions for many branches of science:
https://wiki.debian.org/DebianScience/

Jean-Victor Côté

 Date: Wed, 15 Jan 2014 19:06:39 -0800 From: ykar...@csusb.edu To: scientific-linux-users@fnal.gov Subject: Re: Fedora Scientific Spin

 On 01/15/2014 04:36 PM, Andrew Z wrote:
  Would it not be sufficient to have a "scientific applications" group in the installer?
  On Jan 15, 2014 7:29 PM, "Jean-Victor Côté" jean-v.c...@sympatico.ca wrote:
   They have included interesting IDEs: https://fedoraproject.org/wiki/Scientific_Spin
   Collaboration between the two projects might prove fruitful, who knows?
   Jean-Victor Côté, M.Sc.(Sciences économiques), (CPA, CMA), Post MBA
   I have also passed other exams, including the CFA exams.
   I have a brief Viadeo profile: http://www.viadeo.com/fr/profile/jean-victor.cote
   I also have a LinkedIn profile: http://www.linkedin.com/profile/view?id=2367003trk=tab_pro

 Whether or not a "scientific spin" is placed on Fedora, such an approach does not address the fundamental issue. Fedora is an enthusiast, perpetually alpha or beta distribution, never designed as a stable, "bulletproofed", production distribution. For many EL users, clone or TUV, the reason is stability. I do not need nor use beta environments except for testing, or for those situations in which I am forced to use a Microsoft product (e.g., MS Win under VirtualBox under Linux). Thus, any Fedora environment simply does not address the needs of my work.

 Yasha Karant


Re: RedHat CentOS acquisition: stating the obvious

2014-01-14 Thread Paul Robert Marino
Well, in general my company uses SL, or depending on the business unit CentOS, for non-critical systems, and Red Hat on everything mission critical; not because they think it works better, just because of appearances. If there is an outage on a critical system that affects the bottom line, the first question the director of the department will be asked by the board of directors is what Linux distro it was running on, and if the answer isn't Red Hat with a current support agreement then the board knows who to make their scapegoat. If the director answers Red Hat and we have support, then they look elsewhere for a scapegoat. Also, market analysts look at the distro when they evaluate your projected stock value, and they tend to give higher estimates if you can say all your Linux boxes run Red Hat.

-- Sent from my HP Pre3

On Jan 14, 2014 18:01, John Lauro john.la...@covenanteyes.com wrote: Your first assumption, although largely correct as a generality, is not entirely accurate, and at a minimum is not the sole purpose.  That is why companies have mission statements.  They rarely highlight the purpose of making money, although that is often the main purpose even if not specified.  What is Red Hat's mission?  It is listed as:
To be the catalyst in communities of customers, contributors, and partners creating
better technology the open source way.

Making things exceedingly difficult would go against the stated mission.  In my opinion it would also go against making money, as it would kill the ecosystem of vendors that support Red Hat Enterprise Linux for their applications.

There are so many distributions out there that the biggest way for them to not make money is to become insignificant.  Having free
alternatives like CentOS keeps the market share of the EL product high and ensures compatibility and a healthy ecosystem.  If there were no open clones of EL, then Ubuntu or something else would take over as the main supported platform for enterprise applications, and then the large enterprises that pay for Red Hat support contracts would move completely off.

Having people use CentOS or Scientific Linux might not directly help the bottom line, but for Red Hat it's a lot better than having people use Ubuntu or SUSE.  Oracle not being free could pose a bigger threat, but either Red Hat remains on top as the main source for good support, or they do not and Oracle will have to pick up the slack after driving Red Hat out of business, and what's left of Red Hat would have to start using Oracle as TUV...  I don't see too many switching to Oracle besides those that are already Oracle shops.

- Original Message -
 From: "Patrick J. LoPresti" lopre...@gmail.com
 To: scientific-linux-users@fnal.gov
 Sent: Tuesday, January 14, 2014 12:45:01 PM
 Subject: RedHat CentOS acquisition: stating the obvious
 
 RedHat is a company. Companies exist for the sole purpose of making
 money. Every action by any company -- literally every single action,
 ever -- is motivated by that goal.
 
 The question you should be asking is: How does Red Hat believe this
 move is going to make them money?
 
 Those were statements of fact. What follows is merely my opinion.
 
 Right now, anybody can easily get for free the same thing Red Hat
 sells, and their #1 competitor is taking their products, augmenting
 them, and reselling them. If you think Red Hat perceives this as
 being
 in their financial interest, I think you are out of your mind.
 
 SRPMs will go away and be replaced by an ever-moving git tree. Red
 Hat
 will make it as hard as legally possible to rebuild their commercial
 releases. The primary target of this move is Oracle, but Scientific
 Linux will be collateral damage.
 
 I consider all of this pretty obvious, but perhaps I am wrong. I hope
 I am.
 
  - Pat
 

Re: Centos vs. SL

2014-01-09 Thread Paul Robert Marino
Well, correction: that was one of the original goals of LTS (Long Term Support Linux), which was the name of one of the two efforts that were combined to create SL. Since then TUV, a.k.a. Red Hat, has changed their lifecycle policy and made it much longer than it had been prior to RHEL 4. I'm sure, though, that if Red Hat decided to go back to a two or three year life cycle then SL's policy would change back to providing security patches over a longer period of time.

-- Sent from my HP Pre3

On Jan 9, 2014 18:39, Ian Murray murra...@yahoo.co.uk wrote:
On 09/01/14 23:13, Paul Robert Marino
  wrote:

SL is an exact match to RHEL with only a few variations: they removed the client for Red Hat's support site integration and added a few things like AFS because their labs need it. The differences are well documented in the release notes and it's a short list.
In addition, SL guarantees long term patch availability even if Red Hat is no longer supporting that release.

This wasn't my understanding. According to this page

https://www.scientificlinux.org/distributions
...
"

* We plan on
  following the TUV Life Cycle. Provided TUV continues to make the
  source rpms publicly available."
  ... which disagrees with your statement. At least the way I read
  it.



  CentOS tends to do things like update the PHP libraries to make it
  easier for web developers. And as a result they take longer to ship
  many security patches, because they occasionally hit dependency
  issues due to the packages they have updated.
  

I am pretty sure the base release does not do this kind of thing by
default. It would be a major deviation from being "binary
compatible" with upstream vendor, which is how I recall their stated
goal to be. It may be optional, however.




  
-- Sent from my HP Pre3

On Jan 9, 2014 13:17, Orion Poplawski or...@cora.nwra.com wrote:

  On 01/09/2014 05:54 AM, Adrian Sevcenco wrote:
   What technical differences would be between CentOS + scientific repo and SL?

   Just a personal thought, but maybe this would free some human resources
   for maintaining a lot of scientific (and IT/grid related) packages in
   well established repos (like epel, fedora/rpmfusion)

   Thanks!
   Adrian

  Well, for me the main difference between CentOS and SL is that with SL you can
  stay on EL point releases. That would require a major change in the CentOS
  infrastructure to support it. Worth exploring though...

  --
  Orion Poplawski
  Technical Manager 303-415-9701 x222
  NWRA, Boulder/CoRA Office FAX: 303-415-9702
  3380 Mitchell Lane or...@nwra.com
  Boulder, CO 80301 http://www.nwra.com


  



Re: DNS Servers

2014-01-09 Thread Paul Robert Marino
BIND works well, period!

That said, one of my favorite DNS appliances uses PowerDNS under the hood and it works very well too if you configure it correctly. The others I really can't speak to because I've never used them.

It really comes down to this: you need to balance your budget against man hours. I tend to use appliances for my core DNS servers wherever possible, because there are a lot of really good ones and I have support staff time limitations, but I also use BIND 9 slave servers to handle most of the actual query traffic because it reduces my support and equipment costs. That said, if you are more concerned about the initial upfront cost and support cost than man hours, BIND is the safest bet because it's the standard that all the others are based on.

-- Sent from my HP Pre3

On Jan 9, 2014 19:28, Steven Haigh net...@crc.id.au wrote: On 10/01/2014 11:16 AM, Jeremy Wellner wrote:
 I've been using BIND on RHEL5 for years and it's come time to overhaul
 those venerable DNS boxes.
 
 I've seen a lot of alternatives like NSD, PowerDNS, YADIFA, and others
 but I'm wondering what experience has been with going to something other
 than BIND.
 
 Having a database backend is very attractive, but so is having a
 manageable GUI for those in the department that work with adding devices
 and are scared of text files and the black of the terminal.

Use bind. DNS is all about reliability - not pretty or GUIs...

-- 
Steven Haigh

Email: net...@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299


Re: DNS Servers

2014-01-09 Thread Paul Robert Marino
In theory I would like webmin for this in a fast-and-dirty development environment, but it still has too many infosec problems for my taste for production. In the past, when I had the time and work-driven focus to harden webmin for an appliance with only custom modules which all used sudo, I was able to reconcile my issues, but stock webmin as-is is risky in production. Many of these concerns could be handled by SELinux now, but the webmin developers are still behind the ball on writing the appropriate rules, or even requiring module writers to include the prerequisite rules, so I still wouldn't consider it in production.

-- Sent from my HP Pre3

On Jan 9, 2014 19:50, Nico Kadel-Garcia nka...@gmail.com wrote: BIND for the server, "webmin" for the configuration tool, and my
presentation at SVNday a few years ago if you want notes on how to put
it under source control.

Don't forget "mkrdns" for generating your reverse DNS reliably: the
RPM building tools are at
https://github.com/nkadel/repoforge-rpms-nkadel-dev/tree/master/specs/mkrdns/

On Thu, Jan 9, 2014 at 7:26 PM, Steven Haigh net...@crc.id.au wrote:
 On 10/01/2014 11:16 AM, Jeremy Wellner wrote:
 I've been using BIND on RHEL5 for years and it's come time to overhaul
 those venerable DNS boxes.

 I've seen a lot of alternatives like NSD, PowerDNS, YADIFA, and others
 but I'm wondering what experience has been with going to something other
 than BIND.

 Having a database backend is very attractive, but so is having a
 manageable GUI for those in the department that work with adding devices
  and are scared of text files and the black of the terminal.

 Use bind. DNS is all about reliability - not pretty or GUIs...

 --
 Steven Haigh

 Email: net...@crc.id.au
 Web: https://www.crc.id.au
 Phone: (03) 9001 6090 - 0412 935 897
 Fax: (03) 8338 0299


Re: Centos / Redhat announcement

2014-01-09 Thread Paul Robert Marino
Absolutely right. Red Hat is only obliged to provide source code to those with whom they have shared the software, nor are they required to package the software and their patches in an easy-to-compile format like source RPM packages. Now, there is absolutely nothing that prevents someone who pays for Red Hat 'support' from re-sharing it, but Red Hat has always gone above and beyond the requirements of the GPL. There is also nothing in the GPL that requires them to make it easy, which they do anyway.

There are plenty of companies I've worked for that license software they write as GPL but don't share it with anyone else but their subsidiaries, and based on their employment contracts the employees who use the software as part of their job are not technically covered under the "shared with" clause of the GPL, so it's highly unlikely you will ever see any of it on a public web server. The GPL is far more subtle in legal terms than most programmers or its users really understand.

That said... as I've said before, can we please stop this speculation train; it's giving me a migraine and I want to get off, lol.

-- Sent from my HP Pre3

On Jan 9, 2014 20:46, zxq9 z...@zxq9.com wrote: On Friday 10 January 2014 01:14:02 Ian Murray wrote:
 On 10/01/14 00:16, jdow wrote:
  Don't forget that GPL means you must have the sources available when
  asked for. 

And this obligation only applies to Red Hat's customers, not to us.

 I have been struggling with this myself tbh. If RH adds a line in a GPL
 program that says "Welcome to Red Hat", releases the binary as RHEL and
 then modifies it for CentOS to read "Welcome to CentOS" and only
 releases the source that says "Welcome to CentOS", then they are in
 technical violation of the GPL, I would say. (IANAL).

No, if you received the CentOS binaries you are only entitled to receive the 
sources to those binaries (not the Red Hat ones).

GPL does not mandate that sources get released publicly, only to parties to 
whom a program has been directly distributed. Folks who are not Red Hat 
customers have not received programs from Red Hat, we've received the same 
programs from other places (CentOS, SL, or to be more legally accurate, mirror 
locations) and it is those other projects/providers who are obliged to make 
programs available in source form.

The fact that the GPL and related licenses also guarantee that any customer 
can distribute the source (but not a copy of the binary) to anyone they want 
means it's almost impossible to can or gag a successful piece of GPL software. 
As a business it is better to control that release process than to be 
blindsided by it, so Red Hat has fully embraced the open source community idea 
and always provided public access to source -- but they are not obligated to 
do so.

Re: DNS Servers

2014-01-09 Thread Paul Robert Marino
It's doable to have BIND be your DNS for AD; it just takes some work and planning. The primary thing is to make sure dynamic DNS works properly. The big catches are making sure you have the right service (SRV) entries and ensuring dynamic DNS works correctly. By the way, neither of these are AD-specific requirements; they actually stem from the RFCs that describe LDAP 3 and the RFCs which describe TLS and Kerberos V, which the LDAP 3 RFCs reference. Essentially AD is Microsoft's implementation of LDAP 3, and since Windows Server 2008 it's very RFC compliant, with some Microsoft-Windows-specific optimizations and automation. (A minimal delegated-zone sketch follows below the quoted question.)

-- Sent from my HP Pre3

On Jan 9, 2014 21:38, Jeremy Wellner jwell...@stanwood.wednet.edu wrote: That's a resounding "stay the course" and I don't mind that one bit.  It's been rock solid and I've been happy with it. So as a secondary question, we are planning on adding Active Directory into our network and I know that it is very particular about its DNS.  Will AD be happy with being given a delegated domain to have as its sandbox, or does that throw my BIND install out the window?
Thank you all for the advice!! :)
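For illustration only, a minimal named.conf sketch of the delegated, dynamically updatable zone Paul describes; the zone name, file path, and domain controller addresses are made up for the example:

// a sub-zone the (hypothetical) domain controllers are allowed to update
zone "ad.example.org" IN {
    type master;
    file "dynamic/ad.example.org.db";
    // let the DCs register their own A and SRV records via dynamic DNS
    allow-update { 192.0.2.10; 192.0.2.11; };
};

Once the DCs have registered themselves, something like "dig @localhost _ldap._tcp.ad.example.org SRV" is a quick way to confirm the SRV records Paul mentions are really there.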


Re: Still having problem with epel

2014-01-06 Thread Paul Robert Marino
Sounds like a bad mirror or an SSL issue.
Here are possible things on your host or network that can cause SSL
not to work correctly:
1) your system clock is severely incorrect (off by a few hours or days);
this can cause an error because one of the systems thinks it's a replay
attack.
2) you must have full forward and reverse lookup of the DNS host names
and IPs of the servers, and the name returned by the reverse lookup of
the IP must match; a stale answer here can be caused by a caching DNS
server which ignores TTLs.
(A quick sketch of how to check both follows.)
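A quick sketch of checking both of those from the affected host; the mirror hostname and the address fed back in are placeholders:

# 1) is the system clock sane?  -q only queries, it changes nothing
ntpdate -q pool.ntp.org
date
# 2) do forward and reverse DNS agree for the mirror you are hitting?
host dl.fedoraproject.org
host 198.51.100.10    # placeholder: feed back one of the addresses returned above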



On Fri, Jan 3, 2014 at 2:21 AM, Mahmood Naderan nt_mahm...@yahoo.com wrote:
 As stated by Zvika, it seems that I have a problem with https connections. So
 replacing https with http in epel.repo temporarily fixed the issue.


 Regards,
 Mahmood




Re: Network dies unexpectedly

2013-12-26 Thread Paul Robert Marino
This was caused by an internal hardware watchdog built into Intel
network cards: it detected an error and disabled the interface at the
hardware level until you rebooted and the card's memory was cleared. It
looks like the card may have lost clock sync with its neighbor, which
is odd; that basically means it wasn't sending out the 5 volt signal
used for frequency sync. I've worked with Intel cards for probably
over a decade and I've never seen this exact error before.

Try rolling back to the previous kernel version; however, this looks
more like it may be a physical hardware issue.
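For reference, a minimal sketch of both checks on an EL6 box with legacy grub; the interface name and the grub index are assumptions:

# what driver and firmware is the NIC actually running?
ethtool -i eth0
# list installed kernels, then see which grub entry is the default
rpm -q kernel
grep -E '^(default|title)' /boot/grub/grub.conf
# "default" is a 0-based index into the title entries; 1 normally selects the previous kernel
sed -i 's/^default=0/default=1/' /boot/grub/grub.conf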

On Thu, Dec 26, 2013 at 12:40 PM, Galante, Nicola
ngala...@cfa.harvard.edu wrote:
 Greetings,

 I administer a web server for my institution and last night we had a
 problem.  The server is a 1U Intel Xeon E5620 machine.  The on-board network
 interface is an Intel 82574L Gigabit Controller.  Scientific Linux 6.4,
 kernel 2.6.32-431.1.2.el6.x86_64.  At some point last night the network
 interface stopped working giving a backtrace on dev_watchdog.  I could not
 restart the service network, it complained that the interface eth0 was not
 available.  I tried to reconfigure it with NetworkManager, unsuccessfully.
 A full system reboot fixed the problem, although I couldn't identify the
 problem.  I do not know if this matters, but this problem never occurred
 before the last yum update.  Here below the portion of /var/log/messages
 that relates to the problem

 =
 Dec 25 20:01:52 veritasm xinetd[1966]: EXIT: nrpe status=0 pid=20943
 duration=0(sec)
 Dec 25 20:02:21 veritasm xinetd[1966]: START: nrpe pid=20947
 from=:::199.104.151.131
 Dec 25 20:02:21 veritasm xinetd[1966]: EXIT: nrpe status=0 pid=20947
 duration=0(sec)
 Dec 26 02:18:37 veritasm kernel: [ cut here ]
 Dec 26 02:18:37 veritasm kernel: WARNING: at net/sched/sch_generic.c:261
 dev_watchdog+0x26b/0x280() (Not tainted)
 Dec 26 02:18:37 veritasm kernel: Hardware name: X8DTL
 Dec 26 02:18:37 veritasm kernel: NETDEV WATCHDOG: eth0 (e1000e): transmit
 queue 0 timed out
 Dec 26 02:18:37 veritasm kernel: Modules linked in: autofs4 8021q sunrpc
 garp stp llc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT
 nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT
 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter
 ip6_tables ipv6 microcode iTCO_wdt iTCO_vendor_support sg i2c_i801 i2c_core
 lpc_ich mfd_core e1000e ptp pps_core ioatdma dca i7core_edac edac_core
 shpchp ext4 jbd2 mbcache raid1 sr_mod cdrom sd_mod crc_t10dif pata_acpi
 ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded:
 scsi_wait_scan]
 Dec 26 02:18:37 veritasm kernel: Pid: 130, comm: kipmi0 Not tainted
 2.6.32-431.1.2.el6.x86_64 #1
 Dec 26 02:18:37 veritasm kernel: Call Trace:
 Dec 26 02:18:37 veritasm kernel: IRQ  [81071e27] ?
 warn_slowpath_common+0x87/0xc0
 Dec 26 02:18:37 veritasm kernel: [81071f16] ?
 warn_slowpath_fmt+0x46/0x50
 Dec 26 02:18:37 veritasm kernel: [8147b75b] ?
 dev_watchdog+0x26b/0x280
 Dec 26 02:18:37 veritasm kernel: [8105dd5c] ?
 scheduler_tick+0xcc/0x260
 Dec 26 02:18:37 veritasm kernel: [8147b4f0] ?
 dev_watchdog+0x0/0x280
 Dec 26 02:18:37 veritasm kernel: [81084b07] ?
 run_timer_softirq+0x197/0x340
 Dec 26 02:18:37 veritasm kernel: [810ac905] ?
 tick_dev_program_event+0x65/0xc0
 Dec 26 02:18:37 veritasm kernel: [8107a8e1] ?
 __do_softirq+0xc1/0x1e0
 Dec 26 02:18:37 veritasm kernel: [810ac9da] ?
 tick_program_event+0x2a/0x30
 Dec 26 02:18:37 veritasm kernel: [8100c30c] ?
 call_softirq+0x1c/0x30
 Dec 26 02:18:37 veritasm kernel: [8100fa75] ? do_softirq+0x65/0xa0
 Dec 26 02:18:37 veritasm kernel: [8107a795] ? irq_exit+0x85/0x90
 Dec 26 02:18:37 veritasm kernel: [815310ba] ?
 smp_apic_timer_interrupt+0x4a/0x60
 Dec 26 02:18:37 veritasm kernel: [8100bb93] ?
 apic_timer_interrupt+0x13/0x20
 Dec 26 02:18:37 veritasm kernel: EOI  [8152a367] ?
 _spin_unlock_irqrestore+0x17/0x20
 Dec 26 02:18:37 veritasm kernel: [812e7790] ?
 ipmi_thread+0x70/0x1c0
 Dec 26 02:18:37 veritasm kernel: [812e7720] ?
 ipmi_thread+0x0/0x1c0
 Dec 26 02:18:37 veritasm kernel: [8109af06] ? kthread+0x96/0xa0
 Dec 26 02:18:37 veritasm kernel: [8100c20a] ? child_rip+0xa/0x20
 Dec 26 02:18:37 veritasm kernel: [8109ae70] ? kthread+0x0/0xa0
 Dec 26 02:18:37 veritasm kernel: [8100c200] ? child_rip+0x0/0x20
 Dec 26 02:18:37 veritasm kernel: ---[ end trace fc057a7fca6eff49 ]---
 Dec 26 02:18:37 veritasm kernel: e1000e :06:00.0: eth0: Reset adapter
 unexpectedly
 Dec 26 02:18:37 veritasm NetworkManager[1724]: info (eth0): carrier now
 OFF (device state 8, deferring action for 4 seconds)
 Dec 26 02:18:38 veritasm kernel: e1000e :06:00.0: eth0: Timesync Tx
 Control register not set as expected
 Dec 26 02:18:41 

Re: Expansion plans

2013-12-15 Thread Paul Robert Marino
Look at the stock virtualization under SL6 with oVirt as a manager, especially if you have any real-time requirements (as long as you don't overbook the CPU cores) or if you need PCIe pass-through of hardware cards; it's a much better choice than VMware. For authentication look at FreeIPA. And for unified storage, Gluster with Samba 4 and CTDB will integrate your storage nicely.

-- Sent from my HP Pre3

On Dec 15, 2013 15:37, Jeff Siddall n...@siddall.name wrote: On 12/15/2013 03:17 PM, Larry Linder wrote:
 New project for next January.
 We are getting ready to expand our lab and plan to install a terminal with
 large displays at each bench.   The bench will support one or two projects
 and I hate the thought of setting up 6 new SL 6.4's so there is access to
 schematics, parts lists, layouts, and drawings.   In the past we have used
 NFS to do this.   I have thought about setting up a VMware server and running
 SL 5.10 and Windows under it.  The problem is how to tie it all to one or two
 servers in the shop without this turning into a big dog performance wise.
 Since we plan to use the same hardware we need to mod all SL6.4 so that
 Ethernet works correctly.  SL5.10 works out of the box.  We really like the
 Gigabyte boards and AMD quad cores. We quit buying other brands because of
 bad brown capacitors that bulge and start leaking before the board /
 processor fails.   When you start replacing 50 or so boards it is a real
 problem.

 While we are at it we need to set up other boxes for shop, receiving and
 shipping with printers.  I hate to say this, but we are still running a "sneaker
 net".

 We are basically an electronic engineering company that is looking more like a
 factory.

Sounds like the perfect use for LTSP:

https://fedorahosted.org/k12linux/

Basically a couple of packages to install on a normal SL system plus a 
client image to install and you are good to go.  There may even be a 
live CD/USB if you want to try it out that way.

The terminals (clients) can be very lightweight and are typically 
diskless (boot off the network with PXE).  I mostly use Atom all-in-one 
systems.  Even the server doesn't have to be too big if you aren't doing 
anything particularly CPU or RAM intensive.

It's a bit of configuring and messing around to get it all the way you 
want it up front but having only one "box" to maintain is a sysadmin 
dream and is highly worth the effort in my experience.

Jeff

Re: Unexplained Kernel Panic / Hung Task

2013-12-04 Thread Paul Robert Marino
Yup, that's a hardware problem. It may be bad firmware on the controller; I would check the firmware version first and see if there is a patch. I've seen this kind of thing with Dell OEMed RAID controllers enough over the years that that's almost always the first thing I try.

-- Sent from my HP Pre3

On Dec 4, 2013 8:21, ~Stack~ i.am.st...@gmail.com wrote: Greetings,

I have a test system I use for testing deployments and when I am not
using it, it runs Boinc. It is a Scientific Linux 6.4 fully updated box.
Recently (last ~3 weeks) I have started getting the same kernel panic.
Sometimes it will be multiple times in a single day and other times it
will be days before the next one (it just had a 5 day uptime). But the
kernel panic looks pretty much the same. It is a complaint about a hung
task plus information about the ext4 file system. I have run the
smartmon tool against both drives (2 drives setup in a hardware RAID
mirror) and both drives checkout fine. I ran a fsck against the /
partition and everything looked fine (on this text box there is only /
and swap partitions). I even took out a drive at a time and had the same
crashes (though this could be an indicator that both drives are bad). I
am wondering if my RAID card is going bad.

When the crash happens I still have the SSH prompt, however, I can only
do basic things like navigating directories and sometimes reading files.
Writing to a file seems to hang, using tab-autocomplete will frequently
hang, running most programs (even `init 6` or `top`) will hang.

It crashed again last night, and I am kind of stumped. I would greatly
appreciate others thoughts and input on what the problem might be.

Thanks!
~Stack~

Dec  4 02:25:09 testbox kernel: INFO: task jbd2/cciss!c0d0:273 blocked
for more than 120 seconds.
Dec  4 02:25:09 testbox kernel: "echo 0 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 02:25:09 testbox kernel: jbd2/cciss!c0 D  0
 273  2 0x
Dec  4 02:25:09 testbox kernel: 8802142cfb30 0046
8802138b5800 1000
Dec  4 02:25:09 testbox kernel: 8802142cfaa0 81012c59
8802142cfae0 810a2431
Dec  4 02:25:09 testbox kernel: 880214157058 8802142cffd8
fb88 880214157058
Dec  4 02:25:09 testbox kernel: Call Trace:
Dec  4 02:25:09 testbox kernel: [81012c59] ? read_tsc+0x9/0x20
Dec  4 02:25:09 testbox kernel: [810a2431] ?
ktime_get_ts+0xb1/0xf0
Dec  4 02:25:09 testbox kernel: [810a2431] ?
ktime_get_ts+0xb1/0xf0
Dec  4 02:25:09 testbox kernel: [81119e10] ? sync_page+0x0/0x50
Dec  4 02:25:09 testbox kernel: [8150e953] io_schedule+0x73/0xc0
Dec  4 02:25:09 testbox kernel: [81119e4d] sync_page+0x3d/0x50
Dec  4 02:25:09 testbox kernel: [8150f30f] __wait_on_bit+0x5f/0x90
Dec  4 02:25:09 testbox kernel: [8111a083]
wait_on_page_bit+0x73/0x80
Dec  4 02:25:09 testbox kernel: [81096de0] ?
wake_bit_function+0x0/0x50
Dec  4 02:25:09 testbox kernel: [8112f115] ?
pagevec_lookup_tag+0x25/0x40
Dec  4 02:25:09 testbox kernel: [8111a4ab]
wait_on_page_writeback_range+0xfb/0x190
Dec  4 02:25:09 testbox kernel: [8125d42d] ? submit_bio+0x8d/0x120
Dec  4 02:25:09 testbox kernel: [8111a56f]
filemap_fdatawait+0x2f/0x40
Dec  4 02:25:09 testbox kernel: [a004de59]
jbd2_journal_commit_transaction+0x7e9/0x1500 [jbd2]
Dec  4 02:25:09 testbox kernel: [8100975d] ?
__switch_to+0x13d/0x320
Dec  4 02:25:09 testbox kernel: [81081b5b] ?
try_to_del_timer_sync+0x7b/0xe0
Dec  4 02:25:09 testbox kernel: [a0054148]
kjournald2+0xb8/0x220 [jbd2]
Dec  4 02:25:09 testbox kernel: [81096da0] ?
autoremove_wake_function+0x0/0x40
Dec  4 02:25:09 testbox kernel: [a0054090] ?
kjournald2+0x0/0x220 [jbd2]
Dec  4 02:25:09 testbox kernel: [81096a36] kthread+0x96/0xa0
Dec  4 02:25:09 testbox kernel: [8100c0ca] child_rip+0xa/0x20
Dec  4 02:25:09 testbox kernel: [810969a0] ? kthread+0x0/0xa0
Dec  4 02:25:09 testbox kernel: [8100c0c0] ? child_rip+0x0/0x20
Dec  4 02:25:09 testbox kernel: INFO: task master:1058 blocked for more
than 120 seconds.
Dec  4 02:25:09 testbox kernel: "echo 0 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 02:25:09 testbox kernel: masterD  0
1058  1 0x0080
Dec  4 02:25:09 testbox kernel: 88021535d948 0082
88021535d8d8 81065c75
Dec  4 02:25:09 testbox kernel: 880028216700 88021396b578
880214336ad8 880028216700
Dec  4 02:25:09 testbox kernel: 88021396baf8 88021535dfd8
fb88 88021396baf8
Dec  4 02:25:09 testbox kernel: Call Trace:
Dec  4 02:25:09 testbox kernel: [81065c75] ?
enqueue_entity+0x125/0x410
Dec  4 02:25:09 testbox kernel: [810a2431] ?
ktime_get_ts+0xb1/0xf0
Dec  4 02:25:09 testbox kernel: [811b62b0] ? sync_buffer+0x0/0x50
Dec  4 

Re: Unexplained Kernel Panic / Hung Task

2013-12-04 Thread Paul Robert Marino
Well, I tend to discount the driver idea because of another problem he has involving multiple machines that I think are identical. Also, any problems I've ever had with the cciss driver were usually firmware related, and an update or rollback usually corrects them. Besides, based on what I've heard this is low-budget equipment and ProLiants aren't cheap. If I had to guess we are talking about Dells.

-- Sent from my HP Pre3

On Dec 4, 2013 18:36, David Sommerseth sl+us...@lists.topphemmelig.net wrote: On 04/12/13 14:21, ~Stack~ wrote: Greetings,
 
  I have a test system I use for testing deployments and when I am not
  using it, it runs Boinc. It is a Scientific Linux 6.4 fully updated box.
  Recently (last ~3 weeks) I have started getting the same kernel panic.
  Sometimes it will be multiple times in a single day and other times it
  will be days before the next one (it just had a 5 day uptime). But the
  kernel panic looks pretty much the same. It is a complaint about a hung
  task plus information about the ext4 file system. I have run the
  smartmon tool against both drives (2 drives setup in a hardware RAID
  mirror) and both drives checkout fine. I ran a fsck against the /
  partition and everything looked fine (on this text box there is only /
  and swap partitions). I even took out a drive at a time and had the same
  crashes (though this could be an indicator that both drives are bad). I
  am wondering if my RAID card is going bad.
 
  When the crash happens I still have the SSH prompt, however, I can only
  do basic things like navigating directories and sometimes reading files.
  Writing to a file seems to hang, using tab-autocomplete will frequently
  hang, running most programs (even `init 6` or `top`) will hang.
 
  It crashed again last night, and I am kind of stumped. I would greatly
  appreciate others thoughts and input on what the problem might be.
 
  Thanks!
  ~Stack~
 
  Dec  4 02:25:09 testbox kernel: INFO: task jbd2/cciss!c0d0:273 blocked
  for more than 120 seconds.
  Dec  4 02:25:09 testbox kernel: "echo 0 
  /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  Dec  4 02:25:09 testbox kernel: jbd2/cciss!c0 D  0
273  2 0x
  Dec  4 02:25:09 testbox kernel: 8802142cfb30 0046
  8802138b5800 1000
  Dec  4 02:25:09 testbox kernel: 8802142cfaa0 81012c59
  8802142cfae0 810a2431
  Dec  4 02:25:09 testbox kernel: 880214157058 8802142cffd8
  fb88 880214157058
  Dec  4 02:25:09 testbox kernel: Call Trace:
  Dec  4 02:25:09 testbox kernel: [81012c59] ? read_tsc+0x9/0x20

This looks like some locking issue to me, triggered by something around the 
TSC timer.

This is either a buggy driver (most likely the cciss driver) or a related 
firmware issue (read the complete boot log carefully and look for firmware warnings). 
  Or it's a really unstable TSC clock source.  Try switching from TSC to HPET 
(or in the really worst case acpi_pm).  See this KB for some related info: 
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_MRG/2/html/Realtime_Reference_Guide/chap-Realtime_Reference_Guide-Timestamping.html
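For what it's worth, a quick sketch of checking and switching the clock source at runtime on an EL6 guest (the runtime change does not survive a reboot):

# what the kernel is using now, and what it could use
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# switch to HPET on the fly
echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource
# to make it permanent, add clocksource=hpet to the kernel line in /boot/grub/grub.conf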

But my hunch tells me it's a driver-related issue with some bad locking. 
There seem to be several filesystem operations happening on two or more CPU 
cores in a certain order, which seems to trigger a deadlock.


--
kind regards,

David Sommerseth

Re: Unexplained Kernel Panic / Hung Task

2013-12-04 Thread Paul Robert Marino
If not, down-rev it to the same version as the one that works. It isn't hard to do with their utilities, because those of us who work in mission-critical environments have hammered it into their heads that it's an absolute requirement.

-- Sent from my HP Pre3

On Dec 4, 2013 19:12, ~Stack~ i.am.st...@gmail.com wrote: On 12/04/2013 05:51 PM, Paul Robert Marino wrote:
 Well I tend to discount the driver idea because of an other problem he
 has involving multiple what I think are identical machines . Also any
 problems I've ever had with the ccsis driver were usually firmware
 related an a update or roll back usually corrects them.
 Besides the based on what I've heard this is low budget equipment and
 ProLiants aren't cheap. If I had to guess we are talking about Dells. 

You are right, in that I am experiencing two different issues and the
vast majority of my test lab is older cast-away parts. The difference is
that both issues are on very different systems.

The DHCP problem is on a bunch of similar generic Dells. This particular
problem is on an HP ProLiant DL360 G4, whose twin (same hardware specs,
and thanks to Puppet should be dang-near identical in terms of software)
so far has not displayed this problem.

Because the twin isn't having this problem and the problem only started
~3 weeks ago is why I thought for the last few weeks it was a disk drive
problem.

I am looking up the firmware versions for this box now. I am not hopeful
that I will find a newer firmware for this old of a system though.
Still, totally worth the try! :-)
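For reference, a quick sketch of where that information shows up from the OS on a cciss-driven Smart Array box; the controller index and the use of dmidecode here are assumptions:

# the cciss driver exposes controller details, including the firmware revision
cat /proc/driver/cciss/cciss0
# board/BIOS revisions, handy for comparing the twin machines
dmidecode -t bios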

Thanks!
~Stack~


RE: issue in CD not ejecting at the end of OS installation

2013-10-21 Thread Paul Robert Marino
Eject is a valid kickstart directive; you don't need to put it in a %post. Additionally, if you don't include the eject package in your package list, then running it in a %post statement won't work, because %post by default executes chrooted into the installed OS. (A sketch of both approaches follows the quoted kickstart below.)

-- Sent from my HP Pre3

On Oct 21, 2013 8:13, Edison, Arul (GE Healthcare) aruljeyananth.jamesedi...@ge.com wrote: Hi All,
	In the kickstart file, I have mentioned eject in the %post section at the end. Still the installation DVD is not ejecting. Suggestions on what needs to be configured are welcome
 
#platform=x86, AMD64, or Intel EM64T
#version=DEVEL
# Firewall configuration
firewall --disabled
install
cdrom
graphical
firstboot --disable
keyboard us
lang en_US
selinux --disabled
logging --level=info
.

%packages
%post
eject
%end
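For illustration, a minimal sketch of the two usual fixes, assuming an EL6 anaconda that supports the --eject flag on reboot:

# option 1: let anaconda eject the media itself at the end of the install
reboot --eject

# option 2: run eject outside the chroot, since %post is chrooted by default
%post --nochroot
eject
%end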

-Original Message-
From: owner-scientific-linux-us...@listserv.fnal.gov [mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Edison, Arul (GE Healthcare)
Sent: Monday, October 21, 2013 3:54 PM
To: scientific-linux-users@fnal.gov
Subject: issue in CD not ejecting at the end of OS installation

HI,
I am porting my application from Red Hat to Scientific Linux 6.3.
	 In Scientific Linux the CD is not ejecting at the end of installation though I have added eject at the end of my kickstart file.  Can someone point out how to fix this issue?
	
Thanks,
Arul

Re: NFTables To Replace iptables In the Linux Kernel

2013-10-21 Thread Paul Robert Marino
Well, I've heard this before: nf-HiPAC (http://www.hipac.org/) was at one
time slated to be the next big thing. I'm not holding my breath on
this one, and if it does replace all of the existing tools, expect it to
be a few years before you see it in production anywhere.


On Mon, Oct 21, 2013 at 1:47 PM, Jos Vos j...@xos.nl wrote:
 On Mon, Oct 21, 2013 at 08:34:58AM -0700, Yasha Karant wrote:

 [...] -- the actual name
 of TUV, that evidently is taboo on this list, [...]

 Evidently?  It's complete nonsense to NOT use the name of TUV here
 or at whatever place, as long as you don't say things about SL in
 relation to TUV and/or their products that are illegal.

 I thought only the CentOS community was (for no good reason) paranoid.

 In the time I created and released X/OS Linux, being also rebuild of
 the Red Hat Enterprise Linux 3/4/5 sources, I did mention Red Hat,
 RHEL, etc., but only as far as appropriate and allowed of course.

 Nonetheless, once a major change (e.g., NFTables replacing iptables) is
 done in the base source, the production enterprise version must reflect
 the change -- and in less than a decade.  Why less than a decade?
 Unless there is a fully backward compatible set of APIs, new
 applications and revisions typically use the current not historical
 APIs.  Presumably, there will be NFTables features that application
 developers will use that have no iptables backport.

 Thus -- how long is the delay?  Typically, are two major releases (e.g.,
 NFTables in EL8) the usual delay?  Does anyone have historical data from
 EL/TUV?

 Fedora 20 seems to use the 3.11 kernel, so it won't have a kernel with
 NFTables.  RHEL 7 is already being developed (and in alpha stage as far
 as I've heard) and will most likely have a kernel <= 3.11, so this makes
 the statement that EL7 probably won't either very trustworthy.  There
 are no statistics about delays etc. needed to just see that RHEL 7
 won't use a kernel that supports NFTables.

 So, there is no artificial delay created by RH to postpone things in
 RHEL, it's just common sense when someone says this about NFTables in
 relation to RHEL 7.

 --
 --Jos Vos j...@xos.nl
 --X/OS Experts in Open Systems BV   |   Phone: +31 20 6938364
 --Amsterdam, The Netherlands| Fax: +31 20 6948204


Re: how to change the CD mount point

2013-10-15 Thread Paul Robert Marino
Um, well, that's not a porting issue, that's a basic sysadmin issue. If the CD isn't being automounted by a GUI, as they usually are nowadays, then look at '/etc/fstab'.

-- Sent from my HP Pre3

On Oct 15, 2013 8:07, Edison, Arul (GE Healthcare) aruljeyananth.jamesedi...@ge.com wrote: HI,
I am porting my application from Redhat to Scientific Linux 6.3 
In Scientific Linux, the CD is mounted to mount point /media/CDROM_
I would like to change the mount point location to /mnt/cdrom
Any idea what is the configuration to change this?

Thanks,
Arul

Re: how to change the CD mount point

2013-10-15 Thread Paul Robert Marino
The automount tools in the GUI usually use the label of the CD as the mount point, so the only way to ensure the name is the same regardless of the label is to specify it in the /etc/fstab file. And yes, that line should work in SL6.

-- Sent from my HP Pre3

On Oct 15, 2013 17:09, Steven J. Yellin yel...@slac.stanford.edu wrote:  Here's a line from /etc/fstab on an SL5 machine.  It might work for 
SL6, too, assuming you have a /mnt/cdrom directory:

/dev/cdrom   /mnt/cdrom   udf,iso9660   noauto,owner,kudzu,ro   0 0
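For completeness, a minimal sketch of putting that line to use; the mount point has to exist before the entry can be used:

mkdir -p /mnt/cdrom
# with the fstab line above in place, mounting the disc is then just:
mount /mnt/cdrom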

Or maybe it would work to instead make /media/CDROM_ into a symbolic 
link to /mnt/cdrom before loading the CD.

Steven Yellin

On Tue, 15 Oct 2013, Edison, Arul (GE Healthcare) wrote:

 HI,
I am porting my application from Redhat to Scientific Linux 6.3
 In Scientific Linux, the CD is mounted to mount point /media/CDROM_
 I would like to change the mount point location to /mnt/cdrom
 Any idea what is the configuration to change this?

 Thanks,
 Arul


Re: Finding the files in a package

2013-10-02 Thread Paul Robert Marino
I always use rpm.pbone.net in advanced search mode for that.

-- Sent from my HP Pre3

On Oct 2, 2013 10:18, Loris Bennett loris.benn...@fu-berlin.de wrote: Bruno Pereira brunopereir...@gmail.com
writes:

 Please notice that apt-file is not installed by default on a clean Debian
 system.

 dpkg -S foo_file will give you the necessary package name without installing
 further software.

Yes, but only for installed packages, whereas apt-file will search in
all packages in the configured sources.

Regards

Loris

 Regards


 On Wed, Oct 2, 2013 at 3:13 PM, Loris Bennett loris.benn...@fu-berlin.de
 wrote:

 Francesco Minafra
 francesco.mina...@gmail.com writes:
 
  Hi Loris,
   you can use the command:
 
  yum provides */libmpi_cxx.so.1
 
  Cheers,
  Francesco.
 
 
  On Wed, Oct 2, 2013 at 1:01 PM, Loris Bennett
 loris.benn...@fu-berlin.de
  wrote:
 
      Dear List,
 
      I would like to know which version of the package "openmpi" contains
 the
      library "libmpi_cxx.so.1".
 
      On the Debian website, the information about individual packages
      includes a link to a file list for each available architecture.
 
      How would I go about finding this information for SL?
 
      Cheers,
 
      Loris
 
      --
      This signature is currently under construction.
 
 
 
 Thanks Francesco (small world ;-)) and Bluejay,
 
 It seems that "provides" and "whatprovides" are synonyms.
 
 BTW, instead of messing about on the website, on Debian
 
 apt-file search libmpi_cxx.so
 
 is possible.
 
 Cheers,
 
 Loris
 
 --
 This signature is currently under construction.
 


-- 
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin Email loris.benn...@fu-berlin.de
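Pulling the suggestions in this thread together, a short sketch of the SL-side equivalents; the openmpi file path is only illustrative:

# which installed package owns a file
rpm -qf /usr/lib64/openmpi/lib/libmpi_cxx.so.1
# which package, installed or not, provides it (searches the configured repos)
yum provides '*/libmpi_cxx.so.1'
# list the files a package would ship, installed or not (repoquery is in yum-utils)
repoquery -l openmpi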

Re: How a user can execute a file from anothe user

2013-09-30 Thread Paul Robert Marino
Warning: running commands out of another user's home directory is ill
advised and should be avoided at all costs.
By changing the user's home directory permissions you may cause
problems as a side effect. For example, if the user logs in via SSH and
uses a key for authentication, it may fail due to the home directory
permissions being insecure, which is a very common side effect of doing
this kind of change.
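As an aside, a minimal sketch of two safer alternatives to loosening the home directory mode; the user and path names are taken from the example further down, and the ACL approach is an assumption rather than something suggested in the thread:

# alternative 1: install the script somewhere world-readable instead
install -m 0755 /home/user2/shared/script1 /usr/local/bin/script1

# alternative 2: grant just user1 access with ACLs, leaving the mode bits alone
# (requires ACL support on the filesystem, which is the usual EL6 ext4 default)
setfacl -m u:user1:x /home/user2
setfacl -m u:user1:rx /home/user2/shared
setfacl -m u:user1:rx /home/user2/shared/script1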




On Fri, Sep 27, 2013 at 2:13 AM, Mahmood Naderan nt_mahm...@yahoo.com wrote:

 Sorry, I just saw the mistake, I forgot to mention that you need to
 grant access to the your home directory as mentioned by Mark.

 chmod o+rx /home/mahmood (I added read as the user didn't have
 permission to access the directory.

If the filename is known (no requirement to do a ls on the directory), then
 execute is sufficient.  If you give read, then all the filenames in your
 directory are revealed (but not necessarily the contents).

 Yes, thank you. It is now solved and the execute permission was good tip


 Regards,
 Mahmood

 
 From: John Lauro john.la...@covenanteyes.com
 To: Earl Ramirez earlarami...@gmail.com
 Cc: scientific-linux-users@fnal.gov; Mahmood Naderan nt_mahm...@yahoo.com
 Sent: Friday, September 27, 2013 12:30 AM

 Subject: Re: How a user can execute a file from anothe user

 One minor note,

 Read isn't needed on the directories if the user/script/etc knows the path.
 If the filename is known (no requirement to do a ls on the directory), then
 execute is sufficient.  If you give read, then all the filenames in your
 directory are revealed (but not necessarily the contents).

 - Original Message -
 From: Earl Ramirez earlarami...@gmail.com
 To: Mahmood Naderan nt_mahm...@yahoo.com
 Cc: scientific-linux-users@fnal.gov
 Sent: Thursday, September 26, 2013 4:43:31 PM
 Subject: Re: How a user can execute a file from anothe user

 ...
 Sorry, I just saw the mistake, I forgot to mention that you need to
 grant access to the your home directory as mentioned by Mark.

 chmod o+rx /home/mahmood (I added read as the user didn't have
 permission to access the directory.

 You should now be able to execute the script as another user.

 For your reference:

 I created a folder named shared in user2 home directory

 @lab19 ~]# ls -la /home/user2
 total 40
 drwx---r-x. 5 user2 user2  4096 Sep 26 15:57 .
 drwxr-xr-x. 5 root  root  4096 Sep 26 15:53 ..
 -rw---. 1 user2 user2  1387 Sep 26 16:27 .bash_history
 -rw-r-. 1 user2 user218 Feb 21  2013 .bash_logout
 -rw-r-. 1 user2 user2  176 Feb 21  2013 .bash_profile
 -rw-r-. 1 user2 user2  124 Feb 21  2013 .bashrc
 drwxr-x---. 2 user2 user2  4096 Nov 11  2010 .gnome2
 drwxr-x---. 4 user2 user2  4096 Dec 20  2012 .mozilla
 drwxrws---. 2 user2 public 4096 Sep 26 15:57 shared
 -rw---. 1 user2 user2  641 Sep 26 15:57 .viminfo

 Created the script and was able to execute it from the user name
 user1

 @lab19 ~]# ls -la /home/user2/shared/
 total 12
 drwxrws---. 2 user2 public 4096 Sep 26 15:57 .
 drwx---r-x. 5 user2 user2  4096 Sep 26 15:57 ..
 -rwxrwx---. 1 user2 public  18 Sep 26 15:57 script1

 user1@lab19 ~]$ /home/user2/shared/script1
 FilesystemSize  Used Avail Use% Mounted on
 /dev/mapper/vg_lab11-lv_root
  5.5G  2.8G  2.5G  54% /
 tmpfs504M  232K  504M  1% /dev/shm
 /dev/vda1485M  92M  369M  20% /boot
 /dev/md1272.0G  100M  1.9G  5% /home/labs




 --


 Kind Regards
 Earl Ramirez
 GPG Key: http://trinipino.com/PublicKey.asc





Re: About the time sync for linux guest under vmware of windows xp host

2013-09-01 Thread Paul Robert Marino
It sounds like your network on the VMware side isn't configured correctly. Also, you should always use VMware Tools to sync your time instead of NTP on VMware virtual machines. The VMware hypervisor plays games with the clock on purpose and can cause a VM with NTP enabled to behave erratically.

-- Sent from my HP Pre3

On Sep 1, 2013 8:44, taozhijiang taozhiji...@gmail.com wrote: Hello, I just installed my SL Carbon 6.4 under a VMware environment for
kernel study, but found the system time runs much faster than the host.
I want the kernel to be pure, so I do not take account of vmware-tools.
I set up an ntpd on my Windows XP host, but when the guest SL updates the time,
it complains like this:

[user@workstation ~]$ sudo ntpdate -dv 192.168.17.1
 1 Sep 20:31:55 ntpdate[1479]: ntpdate 4.2.4p8@1.1612-o Fri Feb 22
 03:55:28 UTC 2013 (1)
Looking for host 192.168.17.1 and service ntp
host found : 192.168.17.1
transmit(192.168.17.1)
transmit(192.168.17.1)
transmit(192.168.17.1)
transmit(192.168.17.1)
transmit(192.168.17.1)
192.168.17.1: Server dropped: no data
server 192.168.17.1, port 123
stratum 0, precision 0, leap 00, trust 000

...


 1 Sep 20:32:00 ntpdate[1479]: no server suitable for synchronization
 found
[user@workstation ~]$ 


I am confused and do not know how to handle it.
Anyone? Any idea would be much, much appreciated!!!
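For reference, a minimal sketch of the VMware-Tools-based approach Paul recommends, assuming the tools (or open-vm-tools) are actually installed in the guest, which the original poster wanted to avoid:

# let the hypervisor own timekeeping
vmware-toolbox-cmd timesync enable
vmware-toolbox-cmd timesync status
# and stop ntpd from fighting it on the guest
service ntpd stop
chkconfig ntpd off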

Re: [SCIENTIFIC-LINUX-USERS] Any rumors on rhel 7?

2013-08-18 Thread Paul Robert Marino
I somewhat agree; however, I still think systemd needs another year or two to bake before it's enterprise ready.

1) The scripts are still too new and most of them haven't been thoroughly thought out yet. There are a lot of things that change when you switch from simple start and stop scripts to using an active process manager, which take time for the developers and system administrators to wrap their heads around. Frankly they haven't had enough time to do it well, and I will have issues all over the place with 3rd-party applications when RHEL 7 is released because of it.

2) systemd isn't the first of its kind on Linux; in fact Gentoo Linux has been doing something similar for years with its startup scripts.

3) In many ways the design of systemd is very desktop centric, which is great for a desktop or laptop but horrible in an enterprise. Frankly I'm horrified by the idea that if an inexperienced sysadmin does a default install instead of our standard nobase install, someone may come along and stick a WiFi dongle in a box and create a loop or security hole, because it was immediately detected and the services to auto-configure it were automatically started without an authorized sysadmin's intervention. By the way, that is a scenario I've seen users attempt before, because they needed access from their desktop and didn't want to wait for, or were just too lazy to request, a firewall change. For that matter, I had a consultant just this past week accidentally create a loop on my network because he had made a mistake in the network configuration and NetworkManager decided to bridge several interfaces (I never thought I'd hear myself say this, but thank god for spanning tree). So auto starting and restarting services based on things like hardware events is scary to me for good reasons. Additionally, if I have a service that occasionally crashes due to a bug or misconfiguration but systemd keeps relaunching it, I may never know I have a problem. I'd rather the process crash and get a one-time complaint or trouble ticket from the user and fix it, than have users grumbling about how my systems suck because they keep having problems while the guy in the NOC sees all green on his screen when the user calls and keeps dismissing their complaints without further investigation.

-- Sent from my HP Pre3

On Aug 18, 2013 9:29, Tom H tomh0...@gmail.com wrote: On Tue, Aug 13, 2013 at 12:30 AM, zxq9 z...@zxq9.com wrote:


 * The old init system was complicated (in that the defaults aren't uniform).
 Familiarity with the system triumphed over lack of clear implementation and
 lack of documentation.

All Linux users and developers were victims of laziness and inertia
when we stuck with the mess that sysvinit scripts are for as long as
we did, especially after Solaris and OS X showed the way with SMF and
launchd. We should've at least moved to declarative init files with
one bash/dash/sh script to start and stop daemons; we didn't and we've
fortunately gone beyond that with systemd.


 * systemd is a huge effort that isn't doing anything to remedy the
 situation.

One or two years after the release of EL-7, everyone'll wonder what
all the anti-systemd fuss was about...

Re: [SCIENTIFIC-LINUX-USERS] Any rumors on rhel 7?

2013-08-18 Thread Paul Robert Marino
I totally disagree. I love NM on my laptop but it's a pox upon my servers. It causes far more problems for servers than it fixes.

For one thing, if I have a process bound to an IP and NM stops that interface due to a transient issue with the network, such as a switch rebooting or someone accidentally unplugging the wrong cable in the patch panel for a split second, the problem is it brings the interface down when link is lost. However, the file handle for the network socket stays in a "DELETED" state until the service is restarted, or until the service detects an issue with the socket because the programmer was better than average and thought about the scenario. Now the link comes back, but the socket is still broken because it still points to the deleted file handle for the old link, so it appears as though my service is working but in reality it can't hear new connections coming in. By the way, this scenario doesn't apply to things bound to the default 0.0.0.0.

Also, given that I saw it decide on its own to bridge several interfaces together last week on a CentOS 6.4 box because a consultant made a mistake, I just don't trust it.

In fact I've been thinking I may just scrap all the Red Hat network tools and write a ground-up replacement alongside the firewall tools I'm writing right now (yes, they will be released under the GPL when they are ready). I'm thinking I'll do something similar to what I did with Fedora 4 back in the day, when I worked for a network security appliance company, with an XML-based config for the network interfaces. But I think I can get away with a simple set of small scripts that mostly just do XSLT transforms to create the appropriate commands for iproute2; this would mean that adding or modifying features would simply be a matter of updating a DTD or schema and the XSLT file. Also I'm thinking of possibly integrating iptables and ipset into it as well, since I have already successfully compiled and tested ipset on RHEL 6.x and already have the tools and SysV init scripts written for them, based on a slightly modified (I added to it but didn't change anything already in it) version of the XML dumped by the ipset command.

-- Sent from my HP Pre3

On Aug 18, 2013 12:42, Tom H tomh0...@gmail.com wrote: On Mon, Aug 12, 2013 at 10:24 AM, Paul Robert Marino
prmari...@gmail.com wrote:
 On Wed, Jul 31, 2013 at 10:21 PM, zxq9 z...@zxq9.com wrote:
 On 07/31/2013 11:57 PM, Tom H wrote:
 On Tue, Jul 30, 2013 at 5:12 PM, zxq9z...@zxq9.com  wrote:
 On 07/30/2013 10:26 PM, Tom H wrote:

 I was only commenting on the more complex and unreadable spec files.
 Otherwise I'm happy about systemd and journald. In short, the kernel
 has evolved, the applications have evolved, why not the init system?

 Its not that the init system can't do with some modernization, its that the
 new system has a severe case of featuritis that is spawning little eddies of
 nonlocalized complexity all over the place. Modernizing a system and tossing
 everything that's come before in the interest of a deliberately incompatible
 rewrite are different things. Remember HAL?

 Well, that's mostly due to the fact that it's new and far more complex.
 There was a mad rush for everyone to rewrite their startup scripts, and
 quite a few of them weren't done very well and others weren't fully
 thought out.

 What I find worse is that they did a ground-up rewrite and the
 network configuration portion wasn't touched at all.

 The network scripts are limited and problematic if you want to do
 anything advanced. For example (it's a long story why), on one device, a
 bridge, I have to add a static ARP entry. iproute2 has been able to do
 this for as long as I can remember, but there was no clean way to do it;
 the only way to get it to work was to hack the network scripts in order
 to add the functionality.

 Really the network scripts need to have hooks added so user-
 defined scripts can be called at various points of the startup and
 shutdown of an interface. But more than that, they mostly date back to
 the 2.0 kernel, and Linux's network capabilities have changed
 significantly since then, but for the most part these scripts keep
 people stuck in the 90's.
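For what it's worth, a one-line sketch of the static ARP entry described above, done directly with iproute2; the address, MAC, and device names are made up:

# pin a permanent neighbor (ARP) entry on the bridge
ip neigh replace 192.0.2.50 lladdr 00:11:22:33:44:55 nud permanent dev br0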

(Couldn't you top-post like everyone else?)

There's been more than one hint on fedora-devel that the reason that
the "/etc/sysconfig/network-scripts/scripts" haven't been adapted to
systemd is that no one wants to do the (large amount of) work that
would be required when the eventual goal is to default to NM and only
use NM; and that that goal's ever closer. (As a Fedora user, I
sometimes wish that TUV weren't so involved with GNOME and NM and that
netctl were packaged for Fedora - and in the future for EL-7 - because
it's integrated into systemd; but NM's slowly getting there for
servers.)

EL-6.4's network scripts mostly use iproute (although I don't think
that you can use "PREFIX=24" instead of "NETMASK=255.255.255.0" as you
can on Fedora).

The following command returns not

Re: UDP message not recevied at port 8100 with Scientific Linux 6.3

2013-08-18 Thread Paul Robert Marino
-- Sent from my HP Pre3

On Aug 16, 2013 14:36, Konstantin Olchanski olcha...@triumf.ca wrote: On Fri, Aug 16, 2013 at 11:24:36AM -0700, Konstantin Olchanski wrote:
 On Fri, Aug 16, 2013 at 02:01:20PM +, Edison, Arul (GE Healthcare) wrote:
  Hi all,
  The application that I run on Scientific Linux 6.3 is to receive the UDP message at port 8100. However I found that port 8100 is used by xprint-server
  Is there a way to disable the xprint-server?
 
 
 
 You can use "lsof" to find out who is consuming UDP packets sent to port 8100.
FYI, in most cases netstat is a lot easier for getting this info, for example: netstat -lunp
Add a -t if you want to see TCP as well.
 (Also check that your packets are not rejected by your own firewall, "iptables -L -v").
 


Also SL 6.3 shipped with a defective libtirpc which discards UDP broadcast packets.

The bug makes NIS not work.

May bite you, too.

SL 6.2 or 6.1 was okey, I think, SL 6.4 not sure.

I have posted this bug on this mailing list and on RH bugzilla, it is fixed in errata packages.


-- 
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada


Re: New Install of 6.4 - no Internet step 2, 3

2013-08-12 Thread Paul Robert Marino
I use XFCE on Fedora. I would like to see it on EL 7. It's a good balance of usability without the bloat that Gnome and KDE have developed over the years.

-- Sent from my HP Pre3

On Aug 12, 2013 13:42, Jeffrey Anderson jdander...@lbl.gov wrote: On Mon, Aug 12, 2013 at 7:41 AM, Larry Linder larry.lin...@micro-controls.com wrote:

Comment:
Fedora 19 with the latest Gnome desktop is written for a tablet.   It only
takes 6 mouse clicks to do anything useful - Hope RH 7 will allow a person to
select an older version of KDE or Gnome.
I would hate to think of the number of mouse-itis injuries we could have.
Are there any simpler desktops available?
The new KDE is useless, Gnome in Fedora 19 is terrible, so where do people who
need a no-BS computer go?

I agree with you about Gnome3. I don't mind the latest KDE (4.10), but agree that it may be too heavy for some.

Fedora 19 shipped with MATE as an option, though I don't think it is installed by default. It is a fork of Gnome 2 and may be suitable.

yum groupinstall "MATE Desktop"   to get it.

LXDE and IceWM are still available through simple yum updates from the standard Fedora repo as well.  Jeff

--
Jeffrey Anderson | jdander...@lbl.gov
Lawrence Berkeley National Laboratory | Office: 50A-5104E | Mailstop 50A-5101
Phone: 510 486-4208 | Fax: 510 486-4204



Re: Bug in yum-autoupdate

2013-08-04 Thread Paul Robert Marino
Hear, hear.

-- Sent from my HP Pre3

On Aug 4, 2013 15:13, Mark Stodola stod...@pelletron.com wrote: Can we please end this discussion?  It has become extremely off topic at
this point and no longer has _any_ direct impact on Scientific Linux.

Re: Bug in yum-autoupdate

2013-08-01 Thread Paul Robert Marino
Seriously, are we still beating this dead horse? While I admit I was the one who took this conversation on a tangent in the first place, every valid point of view on this has been covered from both sides. No resolution will come of it! From here it's an intellectual pissing contest; let's end it!

-- Sent from my HP Pre3

On Aug 1, 2013 20:08, Steven Haigh net...@crc.id.au wrote: On 02/08/13 09:59, Vincent Liggio wrote:
 On 08/01/2013 06:07 PM, Steven Haigh wrote:

 If you really do have 1200 systems to worry about, I'd be looking at
 things like satellite. I have ~20-25 systems and yum-autoupdate is
 fantastic. It does what it says on the box and relieves me of having to
 watch / check for updates every day. I get an email in the morning that
 tells me what was updated and if there were any problems.

 Guess none of you have to deal with third party applications, device
 drivers, change management, etc. Simple servers are easy to patch, and
 yes, I've done that for years. But take a system running anything
 graphical (especially with video and audio device drivers) and try to
 randomly patch it, and see how long that lasts!

I hate to say it, but now you've shifted the goal posts. You talk about 
blade servers, now you talk about graphics drivers and audio - which I 
assume would be desktop use.

Even on the desktop though, the kernel doesn't auto-update - so any 
graphics drivers that are installed against a specific kernel version 
will continue to work until you upgrade the kernel manually - at which 
time you will be required to build the kernel modules again (nvidia / 
ATI etc).

 (and yes, I really do have 1200+ systems to worry about. And I sleep
 very happily knowing tomorrow they won't be any different than they were
 today)

Unless, for lack of updates, you leave a security hole open and, because of
that same lack of updates, you never pick up on it. My 16 years of experience says
that this is a dangerous attitude for system admins to adopt. And no, in 
16 years I have never had a security breach (touch wood).

 Its hardly hidden - and if you don't like it, don't install the package
 - its purely in your control.

 It installs by default. I certainly can uninstall it, or set it to not
 autoupdate, which I shall.


And this may work for you - and that's great for you. It shouldn't
however mean that the default should be changed to disable this in the
entire distro.

In fact, if you *really* want to disable auto-updates globally, then 
you're better off using a single line sed command that you can run via 
SSH to all systems you control to disable it. That way it is rapidly 
deployed to all your systems with a simple bash script loop.
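
A minimal sketch of that, assuming SL's yum-autoupdate reads an ENABLED flag from /etc/sysconfig/yum-autoupdate (check the exact file and variable name on your release) and that hosts.txt and root SSH keys are already in place:

#!/bin/bash
# disable yum-autoupdate on every host listed in hosts.txt (sketch only)
while read -r host; do
    ssh "root@$host" \
        'sed -i "s/^ENABLED=.*/ENABLED=\"false\"/" /etc/sysconfig/yum-autoupdate'
done < hosts.txt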

Re: Upgrade SLC5 to SLC6 ?

2013-07-30 Thread Paul Robert Marino
While that's somewhat true, the real concern is unresolvable conflicts between packages during the update; also you may experience SELinux issues, among other things.

-- Sent from my HP Pre3

On Jul 30, 2013 12:05, Anton Starikov ant.stari...@gmail.com wrote: I would expect that amount of problems depends on what is installed. And how much /etc differs from default.
It's one story if it is, let's say, a minimal install with a dhcp server, another story if it is a host filled with all possible software in active use.

Anton.

On Jul 30, 2013, at 5:58 PM, Paul Robert Marino prmari...@gmail.com wrote:

 Generally it's possible to upgrade from one minor release to the next
 but not between major releases, so 5.1 to, say, 5.9 is possible but 5.x to 6.x is
 not. You can attempt it through yum but you will have tons of
 problems.
 
 With Fedora you usually can upgrade versions, but even Fedora has version
 cutoffs where you have to do a scratch install; for example 16 to 17
 was one of those cutoffs.
 
 
 On Mon, Jul 29, 2013 at 4:08 AM, Tom H tomh0...@gmail.com wrote:
 On Thu, Jul 25, 2013 at 12:48 PM, Lamar Owen lo...@pari.edu wrote:
 On 07/25/2013 01:28 AM, David G.Miller wrote:
 
 
 The problem of upgrading from FC-n to FC-n+1 is basically the same as
 upgrading EL-n to EL-n+1.
 
 No; upgrading ELx to ELx+1 is like upgrading Fn to Fn+k(x), where k(x) is an
 element of an array of integer constants; x is the starting EL release, so
 k(3)=3 [RHEL3 was based on what I'm going to call 'Fedora Core 0,' which was
 the pre-fedora RHL 10 beta; see footnote 1]; k(4)=3, k(5)=6 (or 7, since
 some F13 packages showed up in EL6), and k(6) will probably be 7 or so.
 
 Doing this without going stepwise through the Fedora releases is a
 challenge. I forget how large of an increment preupgrade can do, but I
 remember doing it F12 to F13 to F14, and it had issues even going Fn to
 Fn+1, especially if any part of the massive yum transaction fails for any
 reason (it leaves the system with a half completed yum transaction that
 yum-complete-transaction simply won't deal with, and then you have to finish
 the upgrade manually and manually remove the older packages) I have done
 this twice on two separate machines, one had issues going from F12 to F13
 and the other one had issues going from F13 to F14. The Fn to Fn+1 upgrade
 path is somewhat expected to work; Fn to Fn+2 probably won't work correctly,
 especially if major changes are in both releases.
 
 In theory, preupgrade can upgrade a system to the latest Fedora
 release; I assume from a still supported release so it's Fn to Fn+2. I
 have a colleague who went on his laptop from 12 to 14 and you can find
 people skipping a release on mailing lists and forums. But they're
 there because they have a problem. :) Both preupgrade and fedup are
 the source of quite a few list and forum posts.
 
 A more usable upgrade system would be one where you could snapshot a
 system transparently before an upgrade and fallback to the original in
 case of failure, like Solaris with its "Boot Environment".
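
For what it's worth, a crude version of that is already possible when the root filesystem sits on LVM; a sketch (the volume group, LV names and snapshot size are placeholders, and the snapshot must be large enough to absorb the upgrade's writes):

# before the upgrade: snapshot the root logical volume
lvcreate --size 10G --snapshot --name root_preupgrade /dev/vg0/root

# if the upgrade goes badly: merge the snapshot back into the origin
# (for an in-use root LV the merge completes on the next activation/boot)
lvconvert --merge /dev/vg0/root_preupgrade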
 
 
 In the Ubuntu world, this is like taking Ubuntu LTS 6.06 straight to 8.04,
 or worse. I've done the 6.06 to 8.04 thing, by the way, and have no desire
 to repeat it.
 
 Ubuntu/Canonical support LTS-to-LTS upgrades (6.06 to 8.04 to 10.04 to
 12.04) where intermediate versions are skipped (so they must be QAd
 quite extensively; and companies with support contracts must be
 on-hand for all phases of the upgrade). I've tested an 8.04 to 10.04
 upgrade (as I've tested preupgrade and fedup upgrades) but I've never
 used Linux upgrades in a live/production setting.


RE: Large filesystem recommendation

2013-07-24 Thread Paul Robert Marino
Although, that said, EXT4 is still an inode-centric file system with a journal added, so moving the journal to a faster volume won't have as big an effect as it does on ground-up designed journaling file systems. So while that feature may speed up the journal for EXT4, it's still limited by the speed of the in-filesystem inodes regardless of where the journal is located.

The difference is that XFS, JFS, ZFS, and a few others primarily rely on the journal and write the inodes as needed after the fact, for backwards compatibility with older low-level binaries. XFS also uses them with the xfs_repair tool as a DR backup in case of the very rare event of the journal getting corrupted (usually due to a hardware issue like a RAID controller backplane meltdown), but even in that case XFS only thin-provisions (creates) the inodes it really needs the first time they are written to. Which is why the mkfs.xfs tool is so fast.

EXT4 still pre-allocates all of the possible inodes during formatting and writes to the inodes before the journal.

-- Sent from my HP Pre3

On Jul 25, 2013 1:17, Paul Robert Marino prmari...@gmail.com wrote: That's cool, I've never noticed that in the documentation but I'll look for it.

-- Sent from my HP Pre3

On Jul 24, 2013 18:41, Scott Weikart scot...@benetech.org wrote:

 Though I will admit the being able to move your journal to a
 separate faster volume to increase performance is very cool
 and that's only a feature I've seen in XFS and ZFS.

ext4 supports that.

-scott
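
For anyone who wants to try either route, a rough sketch of pointing each filesystem at an external journal/log device (device names are placeholders, and these commands destroy whatever is on them):

# ext4: create a dedicated journal device, then build the filesystem against it
mke2fs -O journal_dev /dev/fast1
mkfs.ext4 -J device=/dev/fast1 /dev/big1

# XFS: put the log on a separate device at mkfs time and mount with logdev=
mkfs.xfs -l logdev=/dev/fast2 /dev/big2
mount -o logdev=/dev/fast2 /dev/big2 /mnt/data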


From: owner-scientific-linux-us...@listserv.fnal.gov owner-scientific-linux-us...@listserv.fnal.gov on behalf of Paul Robert Marino prmari...@gmail.com
Sent: Wednesday, July 24, 2013 3:36 PM
To: Brown, Chris (GE Healthcare); Graham Allan; John Lauro
Cc: scientific-linux-users
Subject: RE: Large filesystem recommendation


ZFS is a performance nightmare if you plan to export it via NFS, because of a core design conflict between NFS locking and the ZIL journal in ZFS. It's not just a Linux issue; it affects Solaris and BSD as well. My only experience with ZFS was on a Solaris NFS server, and we had to get a dedicated flash-backed RAM drive for the ZIL to fix our performance issues, and let me tell you, Sun charged us a small fortune for the card.

Aside from that, most of the cool features are available in XFS if you dive deep enough into the documentation, though most of them, like multi-disk spanning, can be handled now by LVM or MD but are, at least in my opinion, handled better by hardware RAID. Though I will admit that being able to move your journal to a separate faster volume to increase performance is very cool, and that's only a feature I've seen in XFS and ZFS.




-- Sent from my HP Pre3
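
On the ZIL point above, the usual fix on any ZFS platform is exactly that kind of dedicated log device; a sketch (pool and device names are placeholders, and the device should be fast and power-loss safe):

# add a separate intent-log (SLOG) device to an existing pool
zpool add tank log /dev/fastlog0
zpool status tank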



On Jul 24, 2013 16:53, Brown, Chris (GE Healthcare) christopher.br...@med.ge.com wrote:


ZFS on Linux will provide you all the goodness that it brought to Solaris and BSD.

Check out: 
http://listserv.fnal.gov/scripts/wa.exe?A2=ind1303L=scientific-linux-usersT=0P=21739


http://listserv.fnal.gov/scripts/wa.exe?A2=ind1303L=scientific-linux-usersT=0P=21882


http://listserv.fnal.gov/scripts/wa.exe?A2=ind1307L=scientific-linux-usersT=0P=4752


- Chris 


-Original Message- 
From: owner-scientific-linux-us...@listserv.fnal.gov [mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Graham Allan

Sent: Wednesday, July 24, 2013 3:46 PM 
To: John Lauro 
Cc: scientific-linux-users 
Subject: Re: Large filesystem recommendation 

XFS seems like the most obvious and maybe safest choice. FWIW, we use it on SL5 and SL6. Ultimately any issues we've had with it turned out to be hardware-related.


ZFS has some really nice features, and we are using it for larger filesystems than we have XFS, but so far only on BSD rather than Linux...


On Wed, Jul 24, 2013 at 01:59:03PM -0400, John Lauro wrote: 
 What is recommended for a large file system (40TB) under SL6? 
 
 In the past I have always had good luck with jfs. Might not be the 
 fastest, but very stable. It works well with being able to repair 
 huge filesystems in reasonable amount of RAM, and handle large 
 directories, and large files. Unfortunately jfs doesn't appear to be 
 supported in 6? (or is there a repo I can add?) 
 
 
  Besides support of a 40TB filesystem, we also need support of files > 4TB, and directories with hundreds of thousands of files. What do people recommend?


-- 
- 
Graham Allan 
School of Physics and Astronomy - University of Minnesota 
- 







Re: Problem with errata dates?

2013-07-08 Thread Paul Robert Marino
This is the wrong list for this question; it should be on the Spacewalk list. But to answer your question, it's a known issue with the repo-sync process. What you will also notice is that the version numbers on the errata are incrementing in Spacewalk every time you do a repo sync. There has been some discussion about fixing this on the developers list, but as far as I know no one has seriously started work on it yet, because there doesn't seem to be quorum on the dev list as to how it should be handled.

-- Sent from my HP Pre3

On Jul 4, 2013 2:53 AM, Ree, Jan-Albert van j.a.v@marin.nl wrote: This morning our Spacewalk service sent out some emails, among them an errata mail regarding SLSA-2013:0957-1
What I noticed in Spacewalk is that the Issued date is set at 1/1/70

Same is true for several others,

Bug Fix Advisory    SLBA-2013:0835-1  selinux-policy bug fix update                  1   1/1/70
Security Advisory   SLSA-2013:0911-1  Important: kernel security update              4   1/1/70
Bug Fix Advisory    SLBA-2013:0893-1  selinux-policy bug fix update                  1   1/1/70
Bug Fix Advisory    SLBA-2013:0909-1  selinux-policy bug fix update                  4   1/1/70
Bug Fix Advisory    SLBA-2013:1000-1  selinux-policy bug fix update                  4   1/1/70
Security Advisory   SLSA-2013:0983-1  Moderate: curl security update                 4   1/1/70
Security Advisory   SLSA-2013:0942-1  Moderate: krb5 security update                 4   1/1/70
Security Advisory   SLSA-2013:0957-1  Critical: java-1.7.0-openjdk security update   4   1/1/70
Security Advisory   SLSA-2013:0981-1  Critical: firefox security update              4   1/1/70

Is there a reason these are set at 1/1/70 instead of their normal issue dates?
All these are coming from the sl6 security repository
ftp://ftp.scientificlinux.org/linux/scientific/6rolling/x86_64/updates/security/

Regards,


Jan-Albert van Ree
Linux System Administrator
MARIN Support Group
E mailto:j.a.v@marin.nl
T +31 317 49 35 48


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl

Re: Application severs cease to start after latest kernel upgrade

2013-06-13 Thread Paul Robert Marino
Check your audit logs (a quick first pass is sketched at the end of this message).

-- Sent from my HP Pre3

On Jun 13, 2013 1:53 PM, x...@gmx.com x...@gmx.com wrote:
Dear All, 

We upgraded our kernel with a new one (2.6.32-358.11.1.el6)
today and since then we have not been able to run any of our
Application Server (Jboss-eap-6, Apache Geronimo-3 and IBM
Webspher-CE-3). 

All of these application servers were working and used in
development until this morning after upgrading Scientific Linux.
Every time we try to start Apache Geronimo-3 or IBM
Webspher-CE-3, we get the following error:


Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.


And none of the three servers would start. We have spent a whole
day, googling for solutions but to no avail. We suspect that the
problem is caused by the kernel upgrade as it was the only
software we added to our systems first thing this morning before
all servers refused to start. We are all really pissed off for
spending a whole day on this problem and still can't find the
problem even after reinstalling all of the application servers.


Any help or hint or both would be greatly appreciated.

Thanks in advance.
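
Following the audit-log suggestion at the top of this message, a first pass might look like this (a sketch; it assumes auditd and the audit tools are installed and running):

# recent SELinux denials, if any
ausearch -m avc -ts recent
grep -i denied /var/log/audit/audit.log | tail

# memory limits are also worth a look for "could not reserve enough space"
free -m
ulimit -a
sysctl vm.overcommit_memory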

  
  



Re: New person publishing security errata

2013-06-12 Thread Paul Robert Marino
Welcome, Bonnie



On Wed, Jun 12, 2013 at 11:36 AM, Connie Sieh cs...@fnal.gov wrote:
 I am glad to announce that we have added Bonnie King as one of the people
 who will be publishing security errata.  She will be publishing security
 errata this week.

 -Connie Sieh


has any one else noticed all the duplicate packages in EPEL

2013-06-09 Thread Paul Robert Marino
In the last few weeks I have noticed a lot of duplicate packages in EPEL
which are included in the base OS; this breaks kickstarts and updates
through Spacewalk.

here are just 2 tickets I put in today

https://bugzilla.redhat.com/show_bug.cgi?id=972490

https://bugzilla.redhat.com/show_bug.cgi?id=972490

Has anyone else noticed these or any others? If so, let's get them all
reported.

I don't know about anyone else, but I would hate to have to delete packages
every morning in perpetuity to ensure all of my kickstarts and updates work.
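
A rough way to spot the overlaps before they break a kickstart (a sketch; the repo ids are placeholders for whatever your yum configuration calls the base and EPEL repos):

# package names present in both repos
repoquery --repoid=sl -a --qf '%{name}' | sort -u > base.txt
repoquery --repoid=epel -a --qf '%{name}' | sort -u > epel.txt
comm -12 base.txt epel.txt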




Re: advice on using latest firefox from mozilla

2013-06-06 Thread Paul Robert Marino
Todd and Margo (I'm never sure who I'm addressing with you, lol),

There is a long-standing security reason non-root users can't update software which affects all users on the system. Remember, the overall *nix design is based on a multi-user model where only people granted root access, by password or even better sudo access, can affect all users. This is a good thing; it was done in response to computer viruses in the 70s.

-- Sent from my HP Pre3

On Jun 6, 2013 7:40 PM, Todd And Margo Chester toddandma...@gmail.com wrote: On 06/06/2013 01:18 PM, Todd And Margo Chester wrote:
 I have had no
 problems at all, except that the updates can not be installed
 by the users -- you have to fire up Firefox as root.

I just filed this: Linux upgrade required root privileges.
And, it's looking good for implementation:

https://bugzilla.mozilla.org/show_bug.cgi?id=880504

"Ya All" please vote for it!

-T

Re: Netrwork dropping transmitted packages on fresh SL 6.4 install

2013-05-04 Thread Paul Robert Marino
ELrepo is your problem; they often push untested updated drivers. Use the drivers that came with the kernel; they may not have all the features, but they should work. Do not install kmod-r8169. (A quick way to check which driver is actually in use is sketched below Bill's message.)

-- Sent from my HP Pre3

On May 4, 2013 4:11 AM, Bill Maidment b...@maidment.vu wrote:
Hi again

I've just rebuilt a desktop machine with a GA-970A-D3 motherboard using SL 6.4 64-bit isos and the network adaptor is dropping all transmitted packets. The onboard adapter is a Realtek RTL8168/8111. Using a Fedora 18 Live disk the network adapter works OK and is recognised as RTL8168evl/8111evl. I have other spare network cards, but they are all RTL8168 or RTL8169 and all show the same symptoms in SL 6.4, yet they all work in Fedora 18. Can anyone help me to get the drivers updated (ELrepo kmod-r8168 and kmod-r8169 did not help)?
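
A quick way to confirm which driver is actually bound to the NIC before and after removing the ELrepo kmods (the interface name is a placeholder):

# driver and version currently bound to the interface
ethtool -i eth0

# what the kernel knows about the device and its modules
lspci -k | grep -A 3 -i ethernet

# drop the ELrepo kmods mentioned above, then reboot onto the stock driver
yum remove kmod-r8168 kmod-r8169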



Re: Issues with the recent kernel and proprietary nvidia drivers

2013-03-25 Thread Paul Robert Marino
Um, well. Frankly the proprietary driver is never up to date with the kernel, and it is, well, luck if it ever works with a new version of the kernel after you have reinstalled it (recompiled the module with code you can't see against the new code). If you have a problem with the proprietary driver, take it up with Nvidia. In theory you pay them to make it work correctly, right? If you don't pay them for support, then find a card that doesn't use proprietary code.

-- Sent from my HP Pre3

On Mar 25, 2013 9:59 PM, Jeff Siddall n...@siddall.name wrote: On 03/25/2013 12:41 PM, Yasha Karant wrote:
 We are forced to use the Nvidia proprietary driver for two reasons:

 1.  We use the switched stereoscopic 3D mode of "professional" Nvidia
 video cards with the external Nvidia 3D switching emitter for the
 stereoscopic 3D "shutter glass" mode of various applications that
 display stereoscopic 3D images (both still and motion).

 2.  We need to load Nvidia CUDA in order to use the CUDA computational
 functions of Nvidia GPU compute cards in our GPU based compute engines.
   The Nvidia CUDA system appears to require the proprietary Nvidia driver.

Yup, I run the proprietary driver for VDPAU support.  If anyone knows 
how to get that from the open source driver I would like to know.

Jeff

Re: A silly question

2013-03-20 Thread Paul Robert Marino
Well, this is sort of a question I answer at work all the time, so I can tell you, and I know there are sites and even Linux Journal articles that explain it.

Essentially both labs already had their own in-house compiled versions of RHEL, for slightly different reasons, but CERN's was called LTS (Long Term Support) Linux, and their original goal was to keep doing security patches for older RHEL versions after Red Hat declared EOL (End Of Life) on them. That was because there were essentially appliances built for labs where it was difficult to migrate the apps to newer versions of RHEL, and at the time Red Hat only provided patches for a version of RHEL for about 2 years, if I remember correctly.

The problem is, when you install something in a facility connected directly, or by proximity with fewer than two firewalls in between, to a secure US government facility, it must have all security patches for any installed software within a few months of the creation of the fix for the security hole. Also, every new version of any OS needs to be evaluated for security prior to being connected. So for CERN, since so many US government agencies already used RHEL, and at the time it was so popular in the US that anyone in the US who knew Linux had used Red Hat at some point, it was really the only choice.

As a matter of fact, I can remember Red Hat in the late 90s being so synonymous with Linux in the US that, when I was having a problem compiling a program due to a Red Hat-only bug caused by a patch they put into gcc, I went into 4 different software stores asking if they had any Linux distro other than Red Hat. The first three stores told me no; the 4th store told me "yeah, we have plenty" and then walked me over to a wall filled floor to ceiling with various different Red Hat (box set v5.x, pre-RHEL) box sets with various different support add-ons, like the "secure webserver" version that included a script on an additional 3.5" floppy to set up an OpenSSL CA for you, but they were all Red Hat.

Fermilab's motivation to choose RHEL over SuSE I'm not sure of, but I suspect, since they are funded by multiple countries and given the nature of their research, they may have also run into the US government security rules, and it's just easier in that case to go with the flow than deal with the long drawn out process of getting a different distro certified.

-- Sent from my HP Pre3

On Mar 21, 2013 12:11 AM, Yasha Karant ykar...@csusb.edu wrote: This is perhaps a silly question, but I would appreciate a URL or some
other explanation.

A faculty colleague and I were discussing the differences between a 
supported enterprise Linux and any of a number of "beta" or "enthusiast" 
linuxes (including TUV Fedora).  A question arose for which I have no 
answer:  why did SL -- that has professional paid personnel at Fermilab 
and CERN -- select to use the present TUV instead of SuSE enterprise 
that is RPM (but yast, not yum) based, and has to release full source 
(not binaries/directly useable) for the OS environment under the same 
conditions as TUV of SL?  SuSE is just as stable, but typically 
incorporates more current versions of applications and libraries than 
does the TUV chosen.  Any insight would be appreciated.  If SuSE had 
been chosen (SuSE originally was from the EU and thus a more natural 
choice for CERN), what would we be losing over SL?
To the best of my knowledge, there is no SuSE Enterprise clone 
equivalent to the SL or CentOS clones of TUV EL.

Yasha Karant

Re: Power management with ATI Radeon cards using the radeon driver.

2013-03-18 Thread Paul Robert Marino
Steven,

Wow, thanks for sharing that; it's certainly useful information about the kernel Radeon driver that I didn't know. I wonder if it's true for the AMD Fusion as well, or does it scale based on the CPU frequency since they are on the same die? Looks like I have some experiments to do later.

-- Sent from my HP Pre3

On Mar 18, 2013 3:47 AM, Steven Haigh net...@crc.id.au wrote: Hi all,

I've been on a path of discovery lately regarding the state of play for 
ATI graphics cards. I started off using the ATI binary driver due to the 
high fan speed (resulting from high power usage) of the open source driver.

I decided to take a different approach today and stick with the open 
source 'radeon' driver. I managed to find that by default, the OSS 
driver keeps the card in a 'high power / performance' state.

This can be changed by using the sysfs entries exposed.

I found that using the following puts the card in low power mode:
	echo profile > /sys/class/drm/card0/device/power_method
	echo low > /sys/class/drm/card0/device/power_profile

Now, this is great to shut the fan up, and works on multi-head systems 
(more than one screen).

If you only use one screen, then you're in luck.
	echo dynpm > /sys/class/drm/card0/device/power_method

The "dynpm" method dynamically changes the clocks based on the number of 
pending fences, so performance is ramped up when running GPU  intensive 
apps, and ramped down when the GPU is idle. The reclocking is attemped 
during vertical blanking periods, but due to the timing of the 
reclocking functions, doesn't not always complete in the blanking 
period, which can lead to flicker in the display. Due to this, dynpm 
only works when a single head is active.

If you are like me and have multiple screens, you have the following 
options to get power_profile to:

"default" uses the default clocks and does not change the power state. 
This is the default behavior.

"auto" selects between "mid" and "high" power states based on the 
whether the system is on battery power or not. The "low" power state are 
selected when the monitors are in the dpms off state.

"low" forces the gpu to be in the low power state all the time. Note 
that "low" can cause display problems on some laptops; this is why auto 
does not use "low" when displays are active.

"mid" forces the gpu to be in the "mid" power state all the time. The 
"low" power state is selected when the monitors are in the dpms off state.

"high" forces the gpu to be in the "high" power state all the time. The 
"low" power state is selected when the monitors are in the dpms off state.

I've found that the 'low' setting seems to work fine in every day 
desktop tasks - and it certainly causes the fan to be much, much quieter 
than the default profile.
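
If you want the quiet profile to survive a reboot, one simple approach is to repeat the same writes from /etc/rc.d/rc.local (assuming card0 is the right card on your system):

# appended to /etc/rc.d/rc.local (sketch)
echo profile > /sys/class/drm/card0/device/power_method
echo low > /sys/class/drm/card0/device/power_profile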

References:
* http://www.x.org/wiki/RadeonFeature

-- 
Steven Haigh

Email: net...@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299

Re: nfs+xfs - was Re: SL6.3 2.6.32-358.2.1.el6.x86_64 kernel panic

2013-03-18 Thread Paul Robert Marino
I've used XFS for over a decade now. It's the most reliable, crash-resistant filesystem I've ever used according to all my tests and experience. But I have had a few bad patches on older versions of RHEL (before Red Hat started supporting it) where it didn't work well; historically it has worked perfectly on every non-Red-Hat-based distro.

By the way, I know why the performance goes down on NFS4: it's mostly due to the fact that it supports xattribs natively and ext3 does not, unless you explicitly turn it on when you mount the file system.

By the way, I currently have several production servers running Gluster on top of XFS serving both Gluster native and NFS 3 clients, and in several clusters it works perfectly for me.

Oh, and on the earlier confusion: NetApps are BSD UNIX boxes, and just like Unix, Linux can serve NFS volumes; NetApp didn't invent NFS, nor are they even the best implementation. Give me a Linux box with a SAN or a good RAID controller any day; they are faster, and in the case of a SAN they are cheaper to scale.

-- Sent from my HP Pre3

On Mar 18, 2013 3:45 AM, Sergio Ballestrero sergio.ballestr...@cern.ch wrote:

On 18 Mar 2013, at 08:37, Steven Haigh wrote:
On 03/18/2013 06:34 PM, Nico Kadel-Garcia wrote:
On Mon, Mar 18, 2013 at 2:59 AM, Dr Andrew C Aitchison a.c.aitchi...@dpmms.cam.ac.uk wrote:
On Sun, 17 Mar 2013, Nico Kadel-Garcia wrote:

Also, *why* are you mixing xfs and nfs services in the same environment? And what kind of NFS and XFS servers are you using?

Out of curiosity, why not? In theory the choice of disk filesystem and network file sharing protocol should be independent. How different is the practice?

I had some bad, bad experience with XFS and haven't used it since. It completely destabilized my bulk storage environment: things may have changed. I've deliberately and effectively kept my file systems below the 16 TB range and worked well with ext4. I've occasionally used larger scale commercial storage servers such as NetApp's for larger NFS environments since then.

I use XFS on a small RAID6 array (its 2Tb - not huge), and I mount it via NFS to other systems. I haven't had a kernel crash as yet. We use XFS for some heavily loaded "buffer storage" systems, and we haven't had an issue - but no NFS there. We also have an NFS server using XFS (mostly because of the "project quota" feature we needed on some shares) and that's also working fine with NFSv3, serving about 200 clients; NFSv4 performance on XFS is disappointing compared to ext3 but we are not in a hurry to migrate.

Cheers, Sergio
--
Sergio Ballestrero - http://physics.uj.ac.za/psiwiki/Ballestrero
University of Johannesburg, Physics Department
ATLAS TDAQ sysadmin team - Office: 75282 OnCall: 164851



Re: nfs+xfs - was Re: SL6.3 2.6.32-358.2.1.el6.x86_64 kernel panic

2013-03-18 Thread Paul Robert Marino
It's mostly due to the uid and gid mapping to names instead of numbers introduced in NFS 4; by default, if possible, a backup is saved as an extended attribute, and that can also compound the atime update speed issue.

As for JFS, it's been a long time since I tested it, but I had the reverse issue.

Oh, and I know the issue you ran into with XFS; it's rare but has been known to happen, and I've hit it once myself on a laptop. It's a journal problem, and fsck isn't the tool to use. There is a specific XFS repair tool that can fix the journal or rebuild it from the backup inodes (a short sketch of it is at the end of this message).

-- Sent from my HP Pre3

On Mar 18, 2013 10:53 AM, Vladimir Mosgalin mosga...@vm10124.spb.edu wrote: Hi Paul Robert Marino!

 On 2013.03.18 at 08:55:39 -0400, Paul Robert Marino wrote next:

 I've used XFS for over a decade now. Its the most reliable crash resistant
 filesystem I've ever used according to all my tests and experience. But I have

This might be true, but it's not the case for all. I've experienced very bad
corruptions on xfs myself, resulting in lots of non-accessible fake
files (random size, attributes etc) with random filenames including
non-printable characters - and there was no way to remove them, fsck
refused to fix them, too. Filesystem was in total mess and producing
various errors - it's fortunate that I was able to copy all real data
without corruption from it, though. Since then I try not to approach xfs
without serious reason.

I'd rather use JFS for huge filesystems, which I've been using for many
years until ext4 appeared. But for fs > 16 Tb jfs is still the best option,
I believe (far more stable in my experience compared to xfs, though
might be not as fast).

For several reasons most people don't consider JFS, but I used it on tons
of servers for filesystems > 1 Tb (ext3 was a bad choice for huge
filesystems for various reasons) and never had a single issue with it.

At most, after multiple power failures during heavy write access I had
errors which remounted it into R/O mode and fsck always fixed it.

 By the way I know why the performance goes down on NFS4 its mostly due to the
 fact that it supports xattribs natively and ext3 does not unless you explicitly
 turn it on when you mount the file system.

I don't really understand your implication: xfs is slower *due* to xattr
support? So if I will mount ext4 with user_xattr option, NFS4 from it
will become slower? How come?


-- 

Vladimir
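
For reference, the XFS repair tool mentioned above is xfs_repair; it must be run on an unmounted filesystem (the device name is a placeholder):

# dry run: report problems without changing anything
xfs_repair -n /dev/sdb1

# normal repair
xfs_repair /dev/sdb1

# last resort when the log itself is corrupt: zero the log, then repair
# (recent metadata updates that were only in the log are lost)
xfs_repair -L /dev/sdb1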

Re: need command line support of zoho

2013-03-15 Thread Paul Robert Marino
Well, that depends. If it's clear text and you have the right flags set, it will show you all of the raw data. Wireshark can in many cases decode it further. However, if it's SSL/TLS encrypted, there is a tool, much to most infosec people's dismay (and joy when it's useful), called ssldump that can take a tcpdump capture of the full conversation and decode it. But the short answer is no, not out of the box. (A sketch is at the end of this message.)

-- Sent from my HP Pre3

On Mar 15, 2013 10:27 PM, jdow j...@earthlink.net wrote: On 2013/03/15 19:14, Todd And Margo Chester wrote:
 On 03/15/2013 02:17 PM, Todd And Margo Chester wrote:
 Hi All,

 The connection just times out.  Does anyone know what I am
 doing wrong here?  This is Linux and the nail program.
 (The account does work from Thunderbird.)

 #!/bin/bash
 echo "nail test" | \
  nail -v \
 -S smtp-use-starttls \
 -S from=taperepo...@.com \
 -S smtp-auth=login \
 -S ssl-verify=ignore \
 -S smtp-auth-user=taperepo...@.com \
 -S smtp-auth-password=zz \
 -S smtp=smtp.zoho.com:465 \
 -s `dnsdomainname`" zoho smtp test subject" y...@zoho.com


 Many thanks,
 -T


 Okay, I've have gotten a little further along.  I am able to test
 with gmail but not yet with zoho:

 #!/bin/bash
 echo "nail test" | nail -v -s `dnsdomainname`" zoho smtp test subject" \
 -S smtp-use-starttls \
 -S smtp-auth=plain \
 -S ssl-verify=ignore \
 -S smtp=smtps://smtp.zoho.com:465 \
 -S from=x...@zoho.com \
 -S smtp-auth-user= \
 -S smtp-auth-password="hahahahaha" \
 -S nss-config-dir=/home/linuxutil/mailcerts/ \
 yy...@zoho.com


 Gives me:

 250 AUTH LOGIN PLAIN
 STARTTLS
 220 Ready to start TLS
 SSL/TLS handshake failed: Unknown error -5938.

 Anyone know what causes this?

 Many thanks,
 -T


 Okay.  I figured it out.  I commented out "-S smtp-use-starttls".
 Go figure.

 [editorial comment] AAHH!![/editorial comment]

 -T

Out of curiosity does tcpdump show the plain text login and message
transfer or is it encrypted?

{O.O}
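
A sketch of the capture-and-inspect approach mentioned at the top of this message. Note that ssldump can only decrypt a session if you hold the server's RSA private key and the session used plain RSA key exchange, so it is useful against your own test server, not against a provider like zoho (host and file names are placeholders):

# capture the full conversation
tcpdump -i eth0 -s 0 -w smtp.pcap host smtp.example.com and port 465

# clear-text protocols: just read the capture back
tcpdump -A -r smtp.pcap

# TLS, when you hold the server key: decode the application data
ssldump -r smtp.pcap -k server.key -d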

Re: Installing on a new laptop

2013-02-26 Thread Paul Robert Marino
I have an X120e as well and simply changing the hard drive doesn't fix
the UEFI issue.
The first answer to this thread is correct, with two caveats: Red Hat got
two signed certs, one for RHEL and the other for Fedora. Apparently the
process was a nightmare, but they will work with Secure Boot. For that
reason I run Fedora as my primary OS on my laptop, and if I have to do
any Scientific Linux testing I run it in a VM
(and yes, an AMD Fusion chip can run a single VM surprisingly well).


On Tue, Feb 26, 2013 at 2:04 PM, Ken Teh t...@anl.gov wrote:
 I never boot a new laptop into Windows.  I replace the original hard drive
 with a new one and install Linux on it.  This way I can put the original
 disk
 back in and never void my warranty.  You can then even sell it in its
 original state.

 Of course, this works only if you don't plan to use Windows.

 I use a $500 Lenovo X120e netbook.



 On 02/26/2013 11:26 AM, Scott_Gates wrote:

 OK, If I needed a desktop, I'd just roll my own. Probably starting with

 something bare-bones from TigerDirect.

 I'm thinking of buying a new laptop, rather than just recycling old ones,

 like I have been.

 I have HEARD there are issues with trying to install on computers with

 Windows8 already installed--the only source I have of CHEAP laptops.

 Basically Wal-mart or Best-buy boxes that I can get in the $250-$400 range.

 Does anybody have experience with this?  Yeah, I know I'll be Voiding the

 Warranty--but, I need a laptop for real work--not socializing or net
 flicking.  You know what I mean.


Re: puppet

2013-02-22 Thread Paul Robert Marino
The only problem I ever had with cfengine is the documentation was
never all that great but it is stable and scales well.
That being said, puppet is not perfect: many of the stock recipes for it
you find on the web don't scale well, and to get it to scale you really
need to be a Ruby programmer. My other issue with puppet is it doesn't
provide you with a great amount of control over the timing of the
deployment of changes unless you go to significant lengths.
Essentially it's good for an Agile development model environment, which
is popular with many web companies; however it's a nightmare for
mission-critical 24x7x365 environments which require changes to be
scheduled in advance.

These days I'm using Spacewalk for most of what I would have used
cfengine or puppet for in the past. The only thing that it doesn't do
out of the box is make sure that particular services are running or
not running at boot, but there are a myriad of other simple ways to do
that which require very little work (one trivial sketch is below), and if
I really wanted to I could get Spacewalk to do that as well via its APIs.
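
One of those "simple ways" on EL6 can be as small as a loop over chkconfig and service (the service list is a placeholder):

#!/bin/bash
# make sure a handful of services are enabled at boot and running (EL6, sketch)
for svc in ntpd sshd crond; do
    chkconfig "$svc" on
    service "$svc" status >/dev/null 2>&1 || service "$svc" start
done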


On Fri, Feb 22, 2013 at 11:04 AM, Graham Allan al...@physics.umn.edu wrote:
 On 2/21/2013 4:13 PM, Natxo Asenjo wrote:

 On Thu, Feb 21, 2013 at 9:38 PM, Graham Allan al...@physics.umn.edu
 wrote:


 Also cfengine, though that seems to be getting less fashionable... We
 still use it, no compelling reasons to change so far!


 we take our decisions based on functionality, not fashion.

 Cfengine is just fine. Good performance, little dependencies, good
 security record (not unimportant for your infrastructure management
 tool and oh what a start of the year for ruby it was), and it has in
 place editing instead of requiring you to use yet another tool
 (augeas).

 But puppet/chef are good products too, just not good enough to justify
 a downgrade from the better one ;-)


 Totally agree, I just meant that puppet does have more mindshare these days
 and you'll probably find more people familiar with it. We have used cfengine
 for 10+ years, not that we haven't discovered flaws over time but I'm
 certainly very happy with it and see no reason to change. We have had
 student sysadmins come in, have to learn cfengine, they also look at puppet,
 and comment that cfengine was a good choice.

 Just as we here still write most of our support scripts etc in perl, that is
 also unfashionable now, doesn't mean it's not the best tool for the job (fx:
 throws bomb and runs away... :-)

 Graham