Re: [CentOS] Linux on touch screen device

2012-03-29 Thread  


--- On Fri, 2012/3/30, Nataraj incoming-cen...@rjl.com wrote:

 I have poked around in google and have seen a number of youtube videos,
 but my question is whether anyone really has linux running on any kind
 of tablet or tablet PC device in such a way that the touch screen can be
 used productively and it won't take a month to get it running? 
 Initially the two applications that are of most interest to me would be
 a good web browser (maybe chromium) and thunderbird.  I would also like
 to have a decent on screen keyboard which could be used to ssh to
 servers in an emergency.
 
 I've seen instructions for booting linux on various devices, but many
 people doing this are using keyboards and not touchscreens.
 
 Do applications like thunderbird have to be modified in order to work
 well with a touch screen or is just getting a working driver for the
 touchpad sufficient?
 
 If anyone has any experience with this I would appreciate knowing what
 hardware you're running on and what linux distro/desktop environment you
 use.  I've been interested in devices like the ASUS EP121 which is a
 dual core I5, so it wouldn't be necessary to have an ARM distribution. 
 Also the newest Asus transformer prime (arm) which I think is about 2
 months away sounds interesting.

Lots of people do this and lots of (most?) commercial tablet/smartphone systems 
are based on Linux or a close cousin (Android and iOS come to mind...).

As far as non-commercial DIY tablet distros, there are distros and special 
interest groups within larger distros that focus on this type of deployment.

But none of them are CentOS, so I'm not sure why you pinged this mailing
list -- though I think you'd probably find that CentOS installs just fine
in most cases; just remember to build whatever graphics driver you need or
your experience might not be good.

Go ask over at Fedora, Ubuntu and maybe Mint. Also check out MeeGo and whatnot.

As a side note, there is nothing magical about a touchscreen. Touchscreens are 
just pointing devices like mice and touchpads as far as Linux is concerned, but 
in this case it is a touchpad that you can see through to a screen on the other 
side (there is a special case of location logic, of course, so the pointer 
doesn't continue from last location, but this is a normal case handled by X). 
So nothing special happens in an application to make it work with a 
touchscreen, because a touchscreen just creates mouse events the same way 
your normal mouse would. The only problem with touchscreens is that small 
icons are smaller than your finger (well, mine anyway) and so you have to make 
the desktop a little cartoony to make things work right. Gnome Shell in Fedora 
is actually not too bad to use with a touchscreen, though it sucks horribly 
with a mouse IMO, and KDE with large widgets is pretty easy as well.

-IY


Re: [CentOS] SELinux and access across 'similar types'

2012-01-11 Thread  
On 01/11/2012 11:07 AM, Les Mikesell wrote:
 On Tue, Jan 10, 2012 at 3:50 PM, Daniel J Walshdwa...@redhat.com  wrote:

 That is not the way it works.  SELinux Reference policy is a database
 of rules that govern the default ways applications run.

 Yes, but it is application developers that know what their
 applications need to do.  Is there a way for them to express that?

This gets back to the core problem of people often just not learning 
SELinux in the first place. It's not RH or Fedora's problem if upstream 
developers trying to port Windows apps to Fedora don't, for example, 
understand Unix permissions. Various distros deal with such problems in 
various ways. This family's response was to write troubleshooting tools, 
start writing docs and get packagers on-board with it -- other distros' 
responses varied but tended toward "scary, this thing... let's just 
quietly include binaries but not turn the scary spinning thing on... 
heaven forbid we learn anything new."

These rules
 that have been written for Fedora/RHEL are public and are being moved
 upstream.

 There has to be a better approach than letting the Fedora guys
 second-guess where application components should live, then
 second-guess what the application needs to do.   In fact, that sounds
 like a recipe for years of problems for everyone who uses the results.

Honestly, SELinux is far less of a complication -- be it within a distro 
or across the Unixcape (new word?) -- than the current proliferation of 
different init subsystems. There is, for example, no tool that can even 
give you reasonable hints to get your daemons from SysV to Upstart to 
Systemd. Trying just to package for Upstart between distros can be a 
nightmare. With SELinux, by comparison, it is far easier to develop a 
basic, reasonably sane (if not as tight as it could be) policy to 
distribute per package.

   Different Distributions can choose to use these policies or
 write there own.

 So after the Fedora version of second-guessing, that gets pushed off
 to other distributions to likely make it even worse?

I'm assuming this is a joke. Fedora already ate (almost) all the babies. 
Seriously. I lived through it and now I find SELinux a breeze, and 
audit2allow is *really* quite a livable tool to work with. You're 
talking like other distros' burden somehow belongs to Fedora. Fedora's 
burdens aren't even RH's problem -- RH only picks what it finds as 
useful for its customer base out of Fedora, and SELinux is very high on 
the list of incredibly useful things. But just like PostgreSQL, 
Sendmail, Apache2, Bash, Awk, Sed (the list is long) all the really 
powerful stuff, new or old, requires some study.

You're expecting to get a system-wide mandatory access control policy 
system across every distro based solely on Fedora's efforts (as if 
Fedora is actively pushing SELinux to other distros) without the 
users/sysadmins needing to do some reading? When was the last time you 
ever heard of a safe, secure, all-encompassing web (or otherwise) CMS, 
for example, that didn't require at least some configuration and know-how?

   Out of the Reference Policy you can build your own
 version of targeted or MLS policy or you can write your policy from
 scratch.

 But is there a way that these can originate from the group that
 manages the application, and appear automatically as a result in
 distributions that include the application or if you compile from the
 source distribution?

Upstream projects can certainly assist enormously by learning about 
SELinux and writing guidelines ahead of time, and some of the larger 
projects have such people (Postgres is a good example of this), but just 
like startup scripts, installation, $PATH decisions and compile-time 
library linking, SELinux policy is *heavily* the responsibility of the 
packager/distro maintainer, *not* the upstream itself. Each distro has 
its own quirks and nearly every package on every system has to work 
around these as constraints.

SELinux policy is not the hardest thing a packager has to deal with, 
IMO. It's one of those things that, if a project has sane guidelines 
covering it (which many, perhaps most, distros lack for many things), is 
not too hard to deal with, but it must be learned through experience at 
play and study, just like RPM.

 The place that SELinux breaks applications is when an application does
 something that SELinux did not expect.

 Well, of course.   The issue is how SELinux is supposed to learn from
 the person who does know what the application is going to do.  I don't
 run an OS distribution to do what a distribution does, I run it so it
 does what the application is supposed to do.  That is, the application
 is the point, not what SELinux guesses it was supposed to do.

That's what the auditing tools can help you out with. Outside of that, 
you're asking for Microsoft-style defaults, which is ridiculous.

There is no way, for example, that you could consider it a sane default 
to permit 

Re: [CentOS] SELinux and access across 'similar types'

2012-01-11 Thread  
On 01/11/2012 07:19 PM, Bennett Haselton wrote:

 Well there is already a beginner-friendly introduction:
 http://wiki.centos.org/HowTos/SELinux
 The problem I had with it is that there are several statements that are
 unclear, missing, or just wrong. That's not necessarily the fault of the
 author; if I had to write an intro to something that I knew a lot about,
 I'd probably also make a few statements that were unclear or wrong.

Tell me about it. I constantly find myself really great at writing docs 
for systems the audience is already expert in, but somewhat lacking at 
writing for complete beginners. Really, the principal problem is one of 
prereqs. Teaching people on this list about SELinux is a lot easier than 
teaching professional diesel mechanics about it, and a bit harder than 
teaching a certain breed of security researchers about it. So at what 
level is it appropriate to begin the explanation? This is tricky.

 The cure for that is to show it to 10 people whose intelligence you are
 reasonably confident about, but who *don't already know* what the
 document is trying to teach, and ask them to suggest edits: anything
 that tells the user to do something without saying how, or is unclear,
 or doesn't work when they try it. Then when the documentation has been
 tweaked enough that it no longer has too many of those problem areas,
 then it's ready.

This sounds very much like the way open source development works. And it's 
the process you're engaging in, actually. See below...

 (If I were a volunteer, some of my suggested edits to that page would be:
 - Near the beginning the doc says the machine should be rebooted and
 the filesystem relabeled, without telling the user how to actually do
 that. Have a forward-reference telling the user where to read how to
 relabel the filesystem.
 - The sentence about "Access is only allowed between similar types" is
 apparently wrong (and meaningless anyway if it doesn't explain what
 "similar types" means). I would just go ahead and say that there's no
 way to know for sure what process types will be allowed to access what
 file types, and all you can do is make educated guesses based on the
 similarity of the names, and then look at error logs afterwards to see
 if you were right.
 - Explain that files in /tmp/ aren't relabeled after rebooting. (If
 indeed that is the case. We never did figure out why my /tmp/ files
 weren't being relabeled.)
 - The genhomedircon command gives an error if SELinux is enforcing;
 switch to permissive before running that command.
 - The doc says httpd runs in the httpd_t security context. This is only
 true if it's started silently; if the user starts it from the command
 line, it runs in a different context.)

And you should *really* cc this bit to the author. Anyway, you said it 
is a wiki -- so why don't you get to wikifying instead of writing on a 
mailing list? That's the heart of the process! This is a system under 
development, and as such needs your help. How great would it be for you 
to document your trouble spots in learning and contribute that back? 
Most of the best tutorials on the web started that way -- as a "how to 
learn systemX based on my personal experience" type of document. A 
roadmap for learning is never more accurate than the one written by a 
learner himself.

There is a secondary benefit to this -- it forces you as a learner to 
really understand your subject, which makes the learning more complete 
for you. It's a win-win, give it a spin! If nobody did that we wouldn't 
even have a kernel, by the way...

 It doesn't take that much work to turn so-so documentation into really
 useful documentation, but you have to start with the assumption that
 there is room for improvement. The main obstacle is the attitude of
 people like John Dennison, who assume the documentation is fine the way
 it is, and that any problems are therefore the fault of the user: If
 people would bother to spend some time _reading_ _documentation_ on the
 systems they are attempting to admin they might find that subsystems
 such as selinux aren't quite as complex as they make them out to be...
 Blaming selinux itself for creating what you perceive as a problem
 because you won't make a rudimentary attempt at learning to properly
 manage it is ludicrous. (Even though it subsequently came out that I
 was in fact following the instructions on the wiki, and there were steps
 missing in the instructions.)

He's right in principle, if not in detail. You're also right in 
principle, and correct in the specific details of practice as things 
stand right now. Every system we use is a moving target. JD was probably 
dead-on correct a few months ago. Things have changed, and the 
documentation likely doesn't take into account the specific version of 
the distro you're running, so some things could be missing. This happens 
a LOT, and it's not really anyone's fault, per se -- that is entirely the 
wrong way of looking at it, 

Re: [CentOS] SELinux blocking cgi script from writing to socket (httpd_t)

2012-01-11 Thread  
On 01/12/2012 03:18 AM, Bennett Haselton wrote:
 Is this really supposed to get easier over time? :)  Now my audit.log
 file shows that SELinux is blocking my cgi script, index.cgi (which is
 what's actually served when the user visits the front page of one of our
 proxy sites like sugarsurfer.com) from having 'read write to socket
 (httpd_t)'.  I have no idea what that means, except that I thought that
 cgi scripts were supposed to be able to write to stdout so that the web
 server could send the data via a socket connection to the end user's
 browser, so I don't know why a CGI script would be blocked from writing
 to a socket with security context httpd_t.

Your cgi script isn't allowed to write to the socket file. The context 
httpd_t isn't touchable by your executable. Is your index.cgi script 
custom or something from a distro package?

In any case you can find a way to deliberately make this audit message 
show up (sounds like you have), try to vary it a few times to get a good 
base of audit information, and then use audit2allow to inspect the last 
several lines of your audit file.

 From this you can have audit2allow create and package a new policy that 
explicitly permits just this access and nothing else. See if you get 
any more AVC denials (9 out of 10 times this will be the end of it). If 
there are no more AVC notices and everything works well, turn SELinux 
back to enforcing and test a bit.

Usually for custom scripts (of any depth, really) this is all that's needed.
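For reference, a minimal sketch of that workflow (the module name 
"mycgi" is just an example, not anything standard):

  # Build a local policy module from the recent httpd-related denials:
  grep httpd /var/log/audit/audit.log | audit2allow -M mycgi
  # Read mycgi.te before loading -- it should permit only the access
  # you actually saw denied:
  cat mycgi.te
  # Install the module, return to enforcing, and retest:
  semodule -i mycgi.pp
  setenforce 1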

I have to do the same thing every time I decide to do something weird 
like run a new version of Django on top of Postgres 9.1 from source with 
all my other custom database-using apps asking for things from other 
servers, etc. Even with that much messaging going around it's pretty easy 
to pin the situation down.

You will need policycoreutils-python installed and you'll want to read 
over the manpages for audit2allow and any other tool that you find 
interesting, but it's pretty easy now that you've got sane output to go 
by, know where/what your audit file is, and what a security context is.

 The only clue that might narrow it down is the line "Target
 Objects: socket [ udp_socket ]".  The sockets that the cgi
 scripts usually send output to are of course tcp sockets, so why would
 it say udp?  The only time one of my cgi scripts might use udp would be
 if it were doing a hostname lookup via dns, but the index.cgi script
 doesn't do that at any point.

No idea why it would need to talk to a UDP socket if you're absolutely 
certain that it doesn't need to, but it could be something else related 
that needs to do a lookup (Apache set to resolve names for logs, for 
example?).

 What would the pros do at this point?

Aside from checking the policy that the audit tools come up with, it 
would be reasonable to run a test on the script in a clean environment 
(minimal install of the OS with just enough web server (Apache) setup to 
let the script execute) and see what is happening in the audit log (and 
anywhere else you're curious about -- database, other processes, file 
system access, etc.).

If you're really paranoid (and a pro is paid to be, so...) it would be 
good to check the contents of the packets being passed in and out of the 
test environment to make sure you fully understand what is expected, 
then compare that against what you see coming from the production server.

In *any* case, putting together a quick SELinux policy that lets your 
app run and lets you turn SELinux back on is better in the interim than 
simply letting the system sit with a permissive policy while you do your 
homework.


Re: [CentOS] SELinux blocking cgi script from writing to socket (httpd_t)

2012-01-11 Thread  
On 01/12/2012 03:48 AM, Daniel J Walsh wrote:

 In Fedora we currently dontaudit this leak.

 audit2allow -i /tmp/t


 #= httpd_sys_script_t ==
 # This avc has a dontaudit rule in the current policy

 allow httpd_sys_script_t httpd_t:udp_socket { read write };

Pow. Reasonable answer, and it isn't so hard to run that command -- it's 
just difficult to understand why it's necessary if you don't know 
anything about the environment, and mystifying if you know the command 
but nothing about what's going on.
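As an aside, if you ever suspect a dontaudit rule like this one is hiding 
denials from you, there is a quick way to check (a sketch, assuming a 
reasonably recent policycoreutils):

  # Rebuild the policy with dontaudit rules disabled:
  semodule -DB
  # ...reproduce the problem and watch /var/log/audit/audit.log...
  # Then rebuild normally to silence the "leaky" denials again:
  semodule -B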


Re: [CentOS] SELinux and access across 'similar types'

2012-01-11 Thread  
On 01/12/2012 04:49 AM, Les Mikesell wrote:
 On Wed, Jan 11, 2012 at 1:23 PM, Lamar Owenlo...@pari.edu  wrote:
 On Wednesday, January 11, 2012 01:22:05 PM Les Mikesell wrote:
 I don't think of myself as a 'normal user', but I still don't
 appreciate it when a distribution goes out of its way to arbitrarily
 modify and break what application developers spent years designing and
 writing.

 SELinux does not 'go out of its way' to 'break' anything; rather, SELinux 
 enforces a deny by default 'need to access' policy.

 Yes, the breakage came from having someone who didn't understand the
 needs define that policy.

I think you are misunderstanding how SELinux policies are formed and how 
they work. It's a *lot* less complicated and mysterious than you're 
making it sound. For most applications it's really, really easy to do this.

 If you need to special-case stuff, then you need to do an analysis of the 
 special cases you need to create; this is what a testing server running 
 SELinux in permissive mode is for, as there is no better analysis of what 
 SELinux needs than SELinux in permissive mode logging what your application 
 is using.  Get the logs and run audit2allow and package that as a piece of 
 your applications' SELinux policies.

 So if an application only needs to do something once at some future
 time, what happens?  If you write an application that will need to do
 something at some rare future time, what is the standard way to tell
 distribution packaging systems and system administrators to permit it?

I'm trying to think of a single example (that isn't a worm) that fits 
this description.

Can you think of any examples? (again, worms don't count... actually, 
that is sort of the point here...)

 That is new, but it isn't very hard.

 Doesn't that really depend on what the application needs to do?

No, there are tools that do almost all the work for you. It's much easier 
than learning how to write a spec file in the first place. At this point 
it sounds like you're just arguing against something you're refusing to 
find out more about -- which is the standard human policy towards 
SELinux, so you're in good company (it used to be the standard human 
policy toward ipchains back in the day, too).
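As a sketch of how little packaging glue is involved (paths and names 
here are hypothetical, not from any particular package):

  # RPM spec fragment: install a prebuilt SELinux policy module
  # along with the package.
  %post
  if [ -x /usr/sbin/semodule ]; then
      /usr/sbin/semodule -i %{_datadir}/selinux/packages/myapp/myapp.pp || :
  fi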

You can just turn it off if it bothers you so much.


Re: [CentOS] SELinux and access across 'similar types'

2012-01-10 Thread  
On 01/11/2012 05:04 AM, Les Mikesell wrote:
 On Tue, Jan 10, 2012 at 1:46 PM, Daniel J Walshdwa...@redhat.com  wrote:

 On Tue, Jan 10, 2012 at 7:47 AM, Daniel J Walsh
 dwa...@redhat.com  wrote:

 Now if only more people used RHEL we could further enhance
 the products.  :^)


 Why isn't it accepted as more of a standard?

 I don't understand the question.

 Why is it vendor-specific to RHEL?

 I was talking Money, not vendor specific. The question, meant as a jab,
 was if more people used RHEL instead of CentOS, we could pay more
 developers.  I thought the @redhat.com would signify why I would want
 that.  :^)

 OK, I can understand why you would want that.  I don't understand why
 you think anyone else would want even more nonstandard variations in
 linux distributions.   And if this isn't intended to be
 vendor-specific, why isn't it an independent upstream project or
 included in the kernel?

The code behind SELinux isn't specific to RH, not by a long shot. 
(Of course, RH may wind up doing some way un-Unixy/very-vendor-specific 
things in the near future, but that has nothing to do with SELinux.)
http://userspace.selinuxproject.org/trac
http://www.gentoo.org/proj/en/hardened/selinux/
https://wiki.ubuntu.com/SELinux
...

But the difficult thing about SELinux isn't how it works, it's the detail 
required for each policy to wrap each program up correctly without 
denying useful functionality in the process, not to mention deploying 
policies with packages, and dealing with the whole new universe of 
inaccurate bug reports SELinux has spawned...

*That* is very hard -- and that is what Red Hat has been so good about 
over the last while. In the process Fedora has spawned a slew of new 
tools to make SELinux policy easier to deal with -- and in the process 
of doing that Fedora acquired/affirmed its reputation for eating babies.

SELinux exists all over the place, and there are binaries for it in 
nearly every distro -- but nearly everyone has decided that it's "too 
hard" so it's just a set of accessory packages almost nobody installs, 
and if installed not activated, and if activated quickly de-activated 
(the #1 "fix your frustrations" web-server advice for noobs is still 
"disable SELinux, it sux").

Honestly, though, at this point the tools really are there. A packager 
that wants to publish an SELinux policy with his package finds it easy 
if the tools are understood -- what is really lacking now is just a very 
public, beginner-friendly introduction to the core concepts of SELinux 
which includes a nice intro to the somewhat arbitrary jargon that 
surrounds access policy concepts.

Minds are very slowly changing and I am beginning to see a lot more 
functionality in non-Fedora-derived distros, but it takes a long time to 
turn the tide of several years' worth of mailing archive, newsgroup, blog 
and forum advice *against* learning SELinux and for turning it off instead 
-- and of course the biggest problem with that advice for those new to 
SELinux is that it often produces instant gratification.


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-30 Thread  
On 12/30/2011 02:33 AM, Ljubomir Ljubojevic wrote:
 I like to use serial numbers from MB, HDD, etc., as passwords. I never
 use normal words for my passwords, and few other users (with ssh/cli
 access) are carefully checked for their passwords.

 If this formula is true (1/2 · 2^54 · 1s / 10) for a 9-character *random*
 password, then 0.5 * 18014398509481984 / 10 gives
 900719925474099 seconds to crack it, or 10424999137 days per attacker.

 If you use denyhosts or fail2ban, attacker needs 10,000 attack PC's that
 never attacked any denyhosts or fail2ban server in recent time.

 So for army of 10,000 attacker PC's, bruteforce ssh needs 1042499 days,
 or 2856 years to crack it. Is this correct figure?


Unfortunately, no, it is not a correct number.

There are a few situational variables that have to be considered to 
really assess the security of a password, and theoretical best-case 
entropy is only a small part of that.

If, for example, you login remotely using a password that has to cross 
an untrusted network you should expect that it is being sniffed. Now 
this is less of a problem because it should be encrypted. The situation is 
the same for local Windows or Linux logins -- all systems I know of 
hash passwords for storage by default, which isn't much different 
from how simple password encryption works across the network.

There is a problem with this, however. The password, however long, is 
reduced to a fixed-length hash in most password encryption schemes. 
Because of the wonders of modulo mathematics the total set of possible 
hashes is a lot less than the total set of possible passwords. What this 
means is we can start attacking the algorithm itself, in preparation for 
trying to decrypt intercepted data (of any type that falls under a 
signature/hash type scheme, not just passwords).

We can start a 10,000 computer botnet (or, more realistically, a 10m 
computer botnet these days, and this is a technique used right now) 
working on the problem of assembling an index table that enumerates 
every possible valid hash said algorithm can produce and assigns a 
plaintext to each -- a precomputed lookup table, in essence.

Essentially, we can move the computing cost up-front by assuming that we 
indeed *do* have to try *every* possible password, which means computing 
done 5 years ago applies to your brand new password today.
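Concretely (openssl used purely for illustration; 'hunter2' is a 
stand-in password):

  # A password reduced to a fixed-length digest, as stored or transmitted:
  echo -n 'hunter2' | openssl dgst -sha1
  # A precomputed table is millions of lines of exactly this pairing,
  #   <digest>  ->  hunter2
  # built once and reused against every capture that uses the same scheme.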

Something weird about the way these algorithms tend to work is that 
as you move through the list of possible hashes you find large spans of 
values that are impossible to actually arrive at given the set of data a 
user can enter. You can move those to the side, and this massively 
reduces the work load (but it's something you can't discover until 
you're well into building the hash index, which might be a year or so).

You can also start with targeted hash indexing, meaning you first run 
through every possible dictionary word, then every variant of those, then 
every possible combination of them, then every possible combination with 
l33t substitutions, then any data set that you can scan from external 
sources (meaning things people might see and decide to use as a password 
that isn't in the dictionary, which is the category your S/N scheme falls 
into...), all of the above with numeric insertions (4, 2 and scattered 
single digits are most common, so focus on those), names of companies, 
etc. Going this route you can crack about 99% of average user passwords in 
about a month. This is how John the Ripper works, in a nutshell -- it 
has an index of pre-checked phrases hashed in a myriad of different 
schemes and just checks the hashed password against the index to locate 
the list of possible correct ones, which is a pretty short list.

Anyway, to keep from getting into too much math, just consider that 
password cracking is not only based on the entropy of the password, and 
the concept of passwords hashed for transit or storage has been around 
long enough that hash tables exist for a vast number of common algorithms.

If you are only logging on locally *and* you are nearly certain that 
nobody has access to your password storage, then you're just fine. Most 
users who don't spend time using IE to surf unsavory websites and click 
on everything those websites have to offer are safe from this. People who 
log in remotely, however, face a different challenge. People who login 
to a web interface using a password have a SERIOUS problem, in my 
opinion, because HTTP cannot be secured, and some websites (even banks!) 
sometimes don't have a public HTTPS entry point, but force the user 
through an HTTP-to-HTTPS redirect, which leaves the initial exchange 
open to interception and makes securing it effectively impossible.

Blah blah. I'm glossing over some details here and there are as many 
different cracking scenarios which involve their own weaknesses as there 
are systems.

In short, keys, man, keys. It's not perfect, but it is much stronger than 
passwords and in my experience FAR less hassle.
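In case it nudges anyone reading along, the basic key setup is only a 
few commands (host and user names below are placeholders):

  # Generate a passphrase-protected key pair on your workstation:
  ssh-keygen -t rsa -b 4096
  # Push the public half to the server:
  ssh-copy-id user@server.example.com
  # Once key login works, disable passwords in /etc/ssh/sshd_config:
  #   PasswordAuthentication no
  # ...and reload sshd:
  service sshd reload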

-Iwao

Re: [CentOS] Need help in writing a shell/bash script

2011-12-30 Thread  
On 12/30/2011 09:00 PM, ankush grover wrote:
 Hi Friends,

 I am trying to write a shell script which can merge the 2 columns into
 3rd one on Centos 5. The file is very long around 31200 rows having
 around 1370 unique groups and around 12000 unique user-names.
 The 1st column is the groupname and then 2nd column is the user-name.

 1st Column (Groupname)2nd Column (username)
  admin  ankush
  admin   amit
  powerusers   dinesh
  powerusers   jitendra




 The desired output should be like this

 admin:   ankush, amit
 powerusers:  dinesh, jitendra


 There are commands available but not able to use it properly to get
 the desired output. Please help me

Hi Ankush,

This will do what you want. But please read the comments in the code.
As a side note, this sort of thing is way more natural in Postgres. That 
will become more apparent as the file contents grow. In particular, the 
concept of appending tens of thousands of names to a single line in a 
file is a little crazy, as most text editors will start choking on 
display without a \n in there somewhere to relieve the way most of them 
read and display text.

###BEGIN collator.sh
#! /bin/bash
#
# collator.sh
#
# Invocation:
#   If executable and in $PATH (~/bin is a good idea):
#   collator.sh input-filename output-filename
#   If not executable, not in $PATH, but in present working directory:
#   sh ./collator.sh input-filename output-filename
#
# WARNING: There is NO serious attempt at error checking implemented.
#  This means you should check the contents of OUTFILE before
#  using it for anything important.

INFILE=${1:?"Input filename missing, please read script comments."}
OUTFILE=${2:?"Output filename missing, please read script comments."}

# One "groupname: " line per group (input arrives grouped, so uniq is enough):
awk '{print $1 ": "}' "$INFILE" | uniq > "$OUTFILE"
# Insert each member just after its group prefix ("&" re-inserts the match):
for GROUP in `cat "$OUTFILE" | cut -d ':' -f 1`
 do for NAME in `cat "$INFILE" | grep "$GROUP" | awk '{print $2}'`
     do sed -i "s/^$GROUP: /&$NAME, /" "$OUTFILE"
 done
done
###END collator.sh


Re: [CentOS] Need help in writing a shell/bash script

2011-12-30 Thread  
On 12/31/2011 01:41 AM, m.r...@5-cent.us wrote:
 Hey, supergiantpotato (and btw, this list is plain text, not unicode, and
 most of us don't read Japanese...),

Thanks for the info

 This is really complicated and fiddly. Look at the one awk script that was
 posted, which is *far* simpler, and uses awk the way it's intended to be
 used, not as a replacement for cut

I tried it before writing that.
It starts printing names on newlines after the second name in a group. 
Not so good. It also has variable output when the group names are not 
sorted prior to input. Etc.
Given that, I'd say it is more fragile than what I wrote. But whatev. 
Let the OP decide which one is more useful.

Easy to fix, yes.

And perhaps you don't like awk being used that way. Fine. It can be 
substituted -- but awk is an old habit of mine.

The whole script could have been written in just one or two blazingly 
complex sed commands... but that sucks even more for the OP if he has to 
debug it later...
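For completeness, here is a sketch of an awk-only version that doesn't 
care about input ordering (untested against the OP's full data, so treat 
it as a starting point rather than the script mark mentioned):

  awk '{
      if ($1 in g)
          g[$1] = g[$1] ", " $2          # append member to a known group
      else {
          order[++n] = $1                # remember first-seen group order
          g[$1] = $2
      }
  }
  END {
      for (i = 1; i <= n; i++)
          print order[i] ": " g[order[i]]
  }' input-file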


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-30 Thread  
On 12/31/2011 01:19 AM, Marko Vojinovic wrote:

 On Friday 30 December 2011 19:40:55 夜神 岩男 wrote:
 [snip]
 We can start a 10,000 computer botnet (or, more realistically, a 10m
 computer botnet these days, and this is a technique used right now)
 working on the problem of assembling a new index table that orders and
 assigns every possible valid hash said algorithm can produce, and start
 assigning values.

 Essentially, we can move the computing cost up-front by assuming that we
 indeed *do* have to try *every* possible password, which means computing
 done 5 years ago applies to your brand new password today.
 [snip]
 In short, keys, man, keys. Its not perfect, but it is much stronger than
 passwords and in my experience FAR much less hassle.

 You are basically saying that, given enough resources, you can precalculate
 all hashes for all possible passwords in advance.

 Can the same be said for keys? Given enough resources, you could precalculate
 all possible public/private key combinations, right?

 Please don't get me wrong --- I'm not saying that the resources needed are
 equal (or even comparable) for the two cases.

 But theoretically, both keys and passwords rely on the assumption that the
 inverse operation  (be it calculating a password from a hash or factoring a
 large integer into primes) is too expensive to be feasible. But given enough
 time and resources, you could in principle have prebuilt tables for both,
 right?

 Just asking... :-) ...while waiting for the first successful build of a 
 quantum
 computer, which will fundamentally redefine all current concepts of 
 security...
 ;-)

Yes, theoretically it is possible to precalculate the hashes of 
everything against everything. Seriously. Of course, the only groups 
with the current resources to actually build hash indexes against 
serious keys are governments, and there are limits even there.

The cost is what prevents this, which is why cryptographic security can 
never, ever sit still.

And you're right about quantum computing changing the game. In fact, it 
can change the game so much that physical and information security will 
once again become one and the same, period [1].

Considering this and how close we are to quantum computing, I find the 
rush to the cloud for business and personal data storage laughably 
shortsighted.

-Iwao

1. Ok, there is actually a way around this which relies on what I believe 
is called quantum key distribution in English. It depends on the idea 
that you can only observe some pieces of data a single time before the 
act of observation forces an alteration of state: in other words it does 
nothing to encrypt the data, but rather you can know 100% if the data has 
been intercepted at all. But it's ridiculously finicky right now because 
it's so new, so don't expect this for a long time.


Re: [CentOS] Need help in writing a shell/bash script

2011-12-30 Thread  
On 12/31/2011 01:56 AM, Craig White wrote:

 On Dec 30, 2011, at 9:52 AM, m.r...@5-cent.us wrote:

 Craig White wrote:
 looked like English to me...

 On Dec 30, 2011, at 9:41 AM, m.r...@5-cent.us wrote:

 Hey, supergiantpotato (and btw, this list is plain text, not unicode,
 and most of us don't read Japanese...),

 夜神 岩男 wrote:
^^  doesn't look like English, or ASCII, to me.

 MVNCH

 mark
 
 let me see if I get this straight... you are objecting to him using his real 
 name?

 Craig

It's ok, I'm totally about to change my private email address header for 
one guy on one mailing list. And anyway, shame on me for trying to help 
someone on a list with a quick script. What was I thinking!


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-29 Thread  
On 12/30/2011 12:41 AM, Marc Deop wrote:
 On Thursday 29 December 2011 14:59:14 Reindl Harald wrote:
 the hughe difference is: while having the same password (for the key)
 it can not be used directly for brute-force und you need the password
 and at least one time access to the key file

 Explain to me how having a key protected by a password avoids brute forcing if 
 you lose the usb stick holding that key?

 Technology is developing at a scary pace, have a look at this:
 http://mytechencounters.wordpress.com/2011/04/03/gpu-password-cracking-crack-a-windows-password-using-a-graphic-card/

 And this is with a simple card, imagine what you can do with a system with 
 multiple parallel cards...


 Just to be clear: I'm not arguing which system is better/more secure. I'm 
 just pointing out one downside of having the key in a usb memory.

 And bruteforcing against ssh servers are really difficult as some others have 
 commented (and even more difficult if you limit failed connections...)


My IC card fries itself after 10 unsuccessful attempts.

That is one way.

The military CACs fry themselves after 3.

They are not just disks, they are tiny 8-bit systems embedded in the 
chip. The key never actually leaves the card. The benefit is that your 
key is never exposed, even in an encrypted state. The downside is that 
signing really huge things can take a few seconds (like ~5 secs for, 
say, signing a decent sized RPM or email attachment, 15 secs or so for 
signing the a kernel RPM) because the card processor, not the host 
system, is doing the signing.

I don't know about the security of USB dongles. I've never used them 
before, but I'm sure that secured versions of them are much more than 
simple USB drives with a directory full of keys -- rather, discrete USB 
devices which probably operate in the same way. I'm speculating, but I 
can't imagine this isn't the case with good USB systems.


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-29 Thread  
On 12/30/2011 01:33 AM, m.r...@5-cent.us wrote:
 Marko Vojinovic wrote:
 On Thursday 29 December 2011 14:59:14 Reindl Harald wrote:
 Am 29.12.2011 14:21, schrieb Marko Vojinovic:
 so explain me why discuss to use or not to use the best
 currently availbale method in context of security?

 Using the ssh key can be problematic because it is too long and too
 random to be memorized --- you have to carry it on a usb stick (or
 wherever). This provides an additional point of failure should your
 stick get lost or stolen. Human brain is still by far the most secure
 information-storage device. :-)
 this is bullshit
 most people have their ssh-key on a usb-stick

 And how are you going to access your servers if the stick gets broken or
 lost? I guess you would have to travel back to where the server is
 hosted, in order to copy/recreate the key.

 Um, yep: you're SOL, same as if you spilled coffee on your laptop, or
 whatever. And if you lose it, you should then create a new one.

 I did not argue that the key is not more secure than a password. I was
 just pointing out that sometimes it can be more inconvenient.

 All security is inconvenient. What's implemented is a balance between
 convenience and security - really secure is a system not connected to any
 network, and with no USB ports, that runs off a DVD

...at the bottom of the ocean...


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-29 Thread  
On 12/29/2011 05:17 PM, Bennett Haselton wrote:
 On Wed, Dec 28, 2011 at 6:10 AM, Johnny Hughesjoh...@centos.org  wrote:
 On 12/27/2011 10:42 PM, Bennett Haselton wrote:
 2.  Why have password logins at all?  Using a secure ssh key only for
 logins makes the most sense.


 Well that's something that I'm curious about the reasoning behind -- if
 you're already using a completely random 12-character password, why would
 it be any more secure to use an ssh key?  Even though the ssh key is more
 random, they're both sufficiently random that it would take at least
 hundreds of years to get in by trial and error.

I'm almost afraid to see the responses to this comment...

If you believe that passwords are as secure as SSH2 keys, then you've 
got some homework to do before second-guessing anyone's security policy. 
I don't say that as a jab, I'm being totally serious.

The good side of this conversation is that you may become motivated to 
learn about security as a hobby after this. It's a lot more interesting 
than watching TV after work (but a lot less interesting than playing 
with real people (friends, kids, wife, whatever)).
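To put rough numbers on it (my figures, not Bennett's; assuming a 
62-character alphabet and the usual key-strength equivalences):

  # Bits of entropy in a 12-char random [a-zA-Z0-9] password:
  echo 'l(62^12)/l(2)' | bc -l    # ~71.4 bits
  # A 2048-bit RSA key is commonly rated around 112 bits of equivalent
  # symmetric strength -- and, unlike a password, the private key never
  # crosses the wire during authentication.

That second point is the one that matters in practice: entropy is only 
part of the story.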

 3.  Please do not top post.


 My bad.  Gmail default. :)

It is the devil.


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-29 Thread  
On 12/30/2011 12:00 AM, m.r...@5-cent.us wrote:
 夜神 岩男 wrote:
 On 12/29/2011 10:21 PM, Marko Vojinovic wrote:
 On Thursday 29 December 2011 13:07:56 Reindl Harald wrote:
 Am 29.12.2011 12:56, schrieb Leonard den Ottolander:
 On Thu, 2011-12-29 at 12:29 +0100, Reindl Harald wrote:
 Am 29.12.2011 09:17, schrieb Bennett Haselton:
 Even though the ssh key is more
 random, they're both sufficiently random that it would take at least
 hundreds of years to get in by trial and error.

 if you really think your 12-chars password is as secure
 as a ssh-key protcected with this password you should
 consider to take some education in security
 snip
 It is very inconvenient for people who need to login to their servers
 from random remote locations (ie. people who travel a lot or work in
 hardware-controlled environment).

 Besides, it is essentially a question of overkill. If password is not
 good enough, you could argue that the key is also not good enough ---
 two keys (or a larger one) would be more secure. Where do you draw the
 line?
 snip
 When traveling I log in to my home server and work servers with my
 laptop. Its really a *lot* easier than using a bunch of pasword schemes.
 snip
 Ah, that brings to mind another issue with only passwords:
 synchronization. I worked as a subcontractor for a *huge* US co a few
 years ago. I've *never* had to write passwords down... but for there, I
 had a page of them! Our group's, the corporate test systems, the corporate
 *production* systems, and *each* had their own, along with their own
 password aging (there was *no* single sign-on), the contracting co's

 mark

Ah, forgot about that because it's no longer a problem for me. Never 
using the same password on two systems is a religiously-to-be-observed 
rule that *most* users violate.

I can put my public keys on any system and not worry about it. Hitting 
the number pad for my digits is a lot faster than typing in a password, 
a lot more convenient than remembering a bunch of them (and a big 
motivator to buy laptops with full-blown 10-keys, which is common now 
anyway, as are internal card readers...).


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-29 Thread  
On 12/29/2011 10:21 PM, Marko Vojinovic wrote:
 On Thursday 29 December 2011 13:07:56 Reindl Harald wrote:
 Am 29.12.2011 12:56, schrieb Leonard den Ottolander:
 Hello Reindl,

 On Thu, 2011-12-29 at 12:29 +0100, Reindl Harald wrote:
 Am 29.12.2011 09:17, schrieb Bennett Haselton:
 Even though the ssh key is more
 random, they're both sufficiently random that it would take at least
 hundreds of years to get in by trial and error.

 if you really think your 12-chars password is as secure
 as a ssh-key protected with this password you should
 consider to take some education in security

 Bennett clearly states that he understands the ssh key is more random,
 but wonders why a 12 char password (of roughly 6 bits entropy per byte
 assuming upper  lower case characters and numbers) wouldn't be
 sufficient.

 so explain me why discuss to use or not to use the best
 currently available method in context of security?

 Using the ssh key can be problematic because it is too long and too random to
 be memorized --- you have to carry it on a usb stick (or wherever). This
 provides an additional point of failure should your stick get lost or stolen.
 Human brain is still by far the most secure information-storage device. :-)

 It is very inconvenient for people who need to login to their servers from
 random remote locations (ie. people who travel a lot or work in hardware-
 controlled environment).

 Besides, it is essentially a question of overkill. If password is not good
 enough, you could argue that the key is also not good enough --- two keys (or
 a larger one) would be more secure. Where do you draw the line?

 Best, :-)
 Marko

Hi Marko!
What about IC cards? I use that a lot, and it's reduced my need for a 
password to something tiny (6 numbers) and requires a physical key (my 
card). I have the root certificates, private keys, etc. stored offline 
just in case my card goes nuts, which has happened before, but I've 
never had a problem with this.

When traveling I log in to my home server and work servers with my 
laptop. It's really a *lot* easier than using a bunch of password schemes. 
I was initially worried that I'd run into a situation where I'd either 
lose my card traveling, or it would get crushed, or whatever -- but that 
hasn't happened in 5 years. What has happened in 5 years of doing this 
is intermittent network outages, work server crashing, web applications 
failing, database corruption, etc.

So from experience (mine and coworkers', at least), it is a lot more 
likely that problems will arise from totally different vectors than 
from ssh keys and IC cards making life complicated -- because from 
this user's perspective it's made things a LOT simpler.

But it requires a bit of study. Which most people don't do. More to the 
point most people don't even read popups on the screens, even the big 
red scary ones, so...


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-28 Thread  
On 12/28/2011 02:01 PM, Bennett Haselton wrote:
 Yeah I know that most break-ins do happen using third-party web apps;
 fortunately the servers I'm running don't have or need any of those.

 But then what about what my friend said:
 For example, there was a while back ( ~march ) a kernel exploit that
 affected CentOS / RHEL. The patch came after 1-2 weeks of the security
 announcement. The initial
 announcement provided a simple work around until the new version is
 released.
 Is that an extremely rare freak occurrence?  Or are you just saying it's
 rare *compared* to breakins using web apps?  Or am I misunderstanding what
 my friend was referring to in the above paragraph?

Yes, that is rare. There *are* holes in nearly everything, though, and 
there are workarounds and patches for nearly all of those holes.

But not all holes are equal. Not nearly so. For example, the vast 
majority of the security announcements for RHEL are rated as very minor, 
despite the enormous scrutiny Linux is subjected to. That we can find SO 
MANY tiny holes is a testament to the thoroughness of the community 
approach to common component development (which is a bit different from 
the dynamic found in niche applications development, despite what the 
RHSs of the world have to say).

It is important to ask your friend two things:

1- Was the vendor involved in the announcement, and if so, was the 
workaround explained thoroughly in the announcement, and did it permit 
reconfiguration into a still-functional system?

Sometimes people want to make a name for themselves by finding a hole 
in the Linux kernel and try to announce things without notifying the 
vendor, in which case the bad guys and good guys have a race to see who 
will develop first, the patchers or the exploiters.

Even IBM can get caught off-guard by things like this with Big Adult 
systems like z/OS. Being caught off-guard is the problem Google tries to 
solve by both paying and stroking the egos of people who find 
security problems with their infrastructure. Preventing the malicious 
use of such information is what the whole Full Disclosure concept is 
about (though the mailing list of the same name is often nothing 
more than trollville).

2- Did the security hole, when exploited, grant root access? Without the 
ability to root the machine, the picture is a lot less grim. 
Understanding iptables, SELinux, what apps are installed, what Apache 
modules aren't necessary (quite a few), etc. can go a long way to 
providing intermediate barriers against a big scary hole in the kernel. 
Consider that the kernel has one huge hole by design called root. 
Getting access to it is the key, and the vast majority of security 
announcements permit marginal, not root, system access.


To answer your original question, the announcement in March is not 
anything I heard of. Or more correctly it isn't something I remember in 
particular, and I tend to keep up with things. I hear about *lots* of 
security holes in lots of different software daily. Most of it is 
patched before the announcement, or patched along with the announcement. 
The overwhelming majority of the announcements I see are XSS and SQL 
injections against web frameworks -- or various ways of re-verbing 
existing problems with new buzzwords.

As far as what exact % of the time -- that is impossible to determine 
until you at the very least put a threshold on the severity of a 
security issue. And when it comes to some issues, frankly, what some 
people consider a needed feature another may consider a security hole. 
Take FTP and Telnet, for example. "Holy crap, wotmud.org is WIDE 
OPEN to incoming telnet requests!" would be a ridiculous thing to 
proclaim, but I've seen it done. I've also seen people say "Ubuntu is 
WIDE OPEN because they have a new guest account by default with a 
consistent name!" -- as if names were equivalent to passwords.


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-28 Thread  
On 12/28/2011 04:40 PM, Bennett Haselton wrote:
 On Tue, Dec 27, 2011 at 10:17 PM, Rilindo Fosterrili...@me.com  wrote:
 On Dec 27, 2011, at 11:29 PM, Bennett Haseltonbenn...@peacefire.org

 What was the nature of the break-in, if I may ask?


 I don't know how they did it, only that the hosting company had to take the
 server offline because they said it was sending a DOS attack to a remote
 host and using huge amounts of bandwidth in the process.  The top priority
 was to get the machine back online so they reformatted it and re-connected
 it, so there are no longer any logs showing what might have happened.
 (Although of course once the server is compromised, presumably the logs can
 be rewritten to say anything anyway.)

Stopping right there, it sounds like the hosting company doesn't know 
their stuff.

Logs should always be replicated remotely in a serious production 
environment, and I would say that any actual hosting company -- being a 
group whose profession it is to host things -- would define that category.

Yes, logs can get messed with. But everything up to the moment of 
exploit should be replicated remotely for later investigation, whether 
or not the specific, physical machine itself is wiped. The only way to 
get around that completely is to compromise the remote logger, and if 
someone is going to that much trouble, especially across custom setups 
and tiny spins (I don't know many people who use standard full-blown 
installs for remote logging machines...?) then they are good enough to 
have had your goose anyway.
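Setting that up is cheap, too. A minimal rsyslog sketch (stock rsyslog 
directives; the loghost name is a placeholder):

  # On the loghost, in /etc/rsyslog.conf -- accept syslog over UDP:
  $ModLoad imudp
  $UDPServerRun 514

  # On each production box -- forward a copy of everything:
  *.*   @loghost.example.com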

My point is, I think server management is at least as much to blame as 
any specific piece of software involved here.

If that were not the case, why didn't my servers start doing the same thing?

 Well that's what I'm trying to determine.  Is there any set of default
 settings that will make a server secure without requiring the admin to
 spend more than, say, 30 minutes per week on maintenance tasks like reading
 security newsletters, and applying patches?  And if there isn't, are there
 design changes that could make it so that it was?

 Because if an OS/webserver/web app combination requires more than, say,
 half an hour per week of maintenance, then for the vast majority of
 servers and VPSs on the Internet, the maintenance is not going to get
 done.  It doesn't matter what our opinion is about whose fault it is or
 whether admins should be more diligent.  The maintenance won't get done
 and the machines will continue to get hacked.  (And half an hour per week
 is probably a generous estimate of how much work most VPS admins would be
 willing to do.)

 On the other hand, if the most common causes of breakins can be identified,
 maybe there's a way to stop those with good default settings and automated
 processes.  For example, if exploitable web apps are a common source of
 breakins, maybe the standard should be to have them auto-update themselves
 like the operating system.  (Last I checked, WordPress and similar programs
 could *check* if updates were available, and alert you next time you signed
 in, but they didn't actually patch themselves.  So if you never signed in
 to a web app on a site that you'd forgotten about, you might never realize
 it needed patching.)

You just paraphrased the entire market position of professional hosting 
providers, the security community, China's (correct) assumptions for 
funding a cracking army, the reason browser security is impossible, etc.


Re: [CentOS] Plymouth Failed to read image

2011-12-27 Thread  
On 12/27/2011 11:32 PM, 夜神 岩男 wrote:
 I'm trying to learn more about Plymouth, but am having trouble finding
 sufficient documentation on it.
...
 Perhaps the error message is just confusing me.

 If it is just the background image, then what is not valid about the
 splash.xpm.gz now? I've reduced it to 14 indexed colors, 640x480
 resolution (which I thought were the criteria?).

A little more information.

It seems the image issue really is with visual images, not the data sort 
(i.e., not the initramfs).

The problem I'm having is that the background cannot be updated. At all. 
For some reason the screen will now redraw only the foreground.

-So the grub splash cannot be drawn.

+But the Plymouth theme can run correctly.

-Then the gdm splash cannot be drawn (leaves a frozen image of whatever 
the last Plymouth loading image was)

+But then a desktop can be loaded and drawn just fine (but it's slower to 
load than previously)

-Then if the screen is locked the lock screen (blank) will never get 
overdrawn at all

+But entering a password blind brings a mouse pointer back on the black 
screen, and you can see the pointer change as it passes over items known 
to be on the desktop.

-Other ttys can be accessed, but not seen when Ctrl+Alt+F# is used.

Has anyone ever experienced this sort of behavior with gdm, plymouth or 
X in general? I'm confused, but at least the problem is narrowed down to 
whatever controls the splash/gdm-background/lock layer of display.


[CentOS] Plymouth Failed to read image

2011-12-27 Thread  
I'm trying to learn more about Plymouth, but am having trouble finding 
sufficient documentation on it.

After a rebuild of Plymouth with a few theme changes, I am getting an 
error message on boot Failed to read image and then it gives me the 
grub screen to boot one of the three kernels installed.

Boot works fine and I actually see the proper splash once I select a 
kernel. Changing themes works, etc. The single problem is that weird 
message about image read failure.

So my question: Since Plymouth actually is working fine after the 5 
second delay, just what image is it that can't be read? Is this a 
message about, say, the background image for the menu (the screen 
background *is* black, actually) or the ramfs boot image which 
apparently works just fine after a moment?

Perhaps the error message is just confusing me.

If it is just the background image, then what is not valid about the 
splash.xpm.gz now? I've reduced it to 14 indexed colors, 640x480 
resolution (which I thought were the criteria?).


Re: [CentOS] what percent of time are there unpatched exploits against default config?

2011-12-27 Thread  
On 12/28/2011 01:29 PM, Bennett Haselton wrote:
 On Tue, Dec 27, 2011 at 8:33 PM, Gilbert Sebenste
 seben...@weather.admin.niu.edu  wrote:

 On Tue, 27 Dec 2011, Bennett Haselton wrote:

 Suppose I have a CentOS 5.7 machine running the default Apache with no
 extra modules enabled, and with the yum-updatesd service running to
 pull
 down and install updates as soon as they become available from the
 repository.

 So the machine can still be broken into, if there is an unpatched exploit
 released in the wild, in the window of time before a patch is released
 for
 that update.

 Roughly what percent of the time is there such an unpatched exploit in
 the
 wild, so that the machine can be hacked by someone keeping up with the
 exploits?  5%?  50%?  95%?

 There's no way to give you an exact number, but let me put it this way:

 If you've disable as much as you can (which by default, most stuff is
 disabled, so that's good), and you restart Apache after each update,
 your chances of being broken into are better by things like SSH brute
 force attacks. There's always a chance someone will get in, but when you
 look at the security hole history of Apache, particularly over the past
 few years, there have been numerous CVE's, but workarounds and they aren't
 usually earth-shattering. Very few of them have. The latest version that
 ships with 5.7 is as secure as they come. If it wasn't, most web sites
 on the Internet would be hacked by now, as most run Apache


 I was asking because I had a server that did get broken into, despite
 having yum-updatesd running and a strong password.  He said that even if
 you apply all latest updates automatically, there were still windows of
 time where an exploit in the wild could be used to break into a machine; in
 particular he said:

 For example, there was a while back ( ~march ) a kernel exploit that
 affected CentOS / RHEL. The patch came after 1-2 weeks of the security
 announcement. The initial announcement provided a simple work around until
 the new version is released.

 Was this a sufficiently high-profile incident that you know what he's
 referring to?  If this kind of thing happens once a year or more, than
 surely this is a much greater threat than brute forcing the SSH
 password?  That's what I'm talking about -- how often does this sort of
 thing happen, where you need to be subscribed to be a security mailing list
 in order to know what workaround to make to stay safe, as opposed to simply
 running yum-updatesd to install latest patches automatically.

Nearly every time servers get broken into they are web servers, and web 
servers serving applications are the greatest percentage of those. The web, 
never having been intended as an applications platform, provides a huge 
number of attack vectors which are entirely separate from the OS layer.

For example, a perfectly secure operating system running a perfectly 
secure Apache configuration on a perfectly secure MySQL deployment could 
be running an application that permits injection of arbitrary SQL 
commands into the database. The server itself may not be compromised (or 
it may, depending on what else that SQL command can touch/be referenced 
by) in the sense that someone can open a shell, but in most cases there 
is nothing of interest on a web server anyway. What is interesting is 
what is in the database or lives within the application being served, 
and that is an application/database layer problem, not an OS, web-server 
or kernel problem.
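
To sketch the classic case (the endpoint and parameter here are 
hypothetical -- the point is the technique, not the URL):

# Hypothetical vulnerable endpoint: the app pastes the q parameter
# straight into  SELECT ... WHERE name = '<q>'
# The encoded payload below decodes to:  x' OR '1'='1
curl "http://example.com/search?q=x%27%20OR%20%271%27%3D%271"
# The WHERE clause becomes always-true and the query returns every row.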

With the vast majority of web applications being developed on frameworks 
like Drupal, Django and Plone, the overwhelming majority of server 
hacks with regard to the web have to do with attacking these structures, 
not the actual OS layer directly at the outset.

Compare this with email server software, which, if the OS layer were the 
inherent problem, would be heard about every day -- much more often than 
web-related cracks. But email server software is mature and just as 
secure as Apache is. However, web-based email is a common target, and 
for a good reason. http is inherently insecure, and bouncing someone 
from http to https is just as insecure because the initial http link and 
DNS can be attacked, both being deliberately insecure, public protocols.

Blah blah. My point is, the OS is rarely attacked directly in 
web-related cracks. A good cracker tries to discover flaws in young, 
fast-changing web frameworks which require privileged access to things 
like MySQL instead of trying to attack Apache or an SE-enabled OS layer 
directly.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Mystery of email authentication

2011-12-26 Thread  
On 12/26/2011 09:45 PM, Timothy Murphy wrote:
 夜神 岩男 wrote:


 The hard part is developing an initial understanding of how certificates
 are interpreted and managed -- and where insecurity in the system can
 arise. Key and certificate management is, in fact, the hardest part of
 staying cryptographically secure at the present time. Unfortunately this
 seems to be too much trouble for most large system administrators, even
 at enormously connected places like universities, so it just gets
 ignored and MitM attacks are more commonplace than most people realise.
 The effects of such are generally minimal enough that most people don't
 even know they've been snooped, however, which is a testament to how
 unimportant most of our private data/lives really are anyway.

 Thanks again for your lucid explanation.

 I do feel there is a serious lack of what I would call low-level
 documentation in RedHat/CentOS/Fedora on authentication.

There is a lack, in a grand sense, but a lot of it is considered 
non-OS-specific specialist knowledge and is covered in other places 
thoroughly. To that end the Fedora and RedHat docs tend to include 
extensive references to security texts elsewhere. Understanding Kerberos 
and TLS, for example, requires at least a lay understanding of both 
public key and symmetric key encryption, and how those structures can be 
combined to make such encryption sub-systems work. Explaining those 
things is beyond the scope of the Fedora/RH docs, but the references to 
Kerberos docs, and the Kerberos references to general encryption docs do 
cover the subject in detail -- but most people (even system operators) 
tend to not follow the chain of references nearly so far as to learn all 
that. (Instead they get the 4-day certification version for "...the 
low, low seminar-only price of just $4,300! Come on today, bring a 
friend for a $1,000 cash-back voucher! Impress your manager and totally 
snow government contract hiring departments into thinking you know your 
stuff!")

 On your last point, I do agree that many people seem to elevate
 their personal security to an absurd level,
 as though there are people in China who are desperate to find out
 their secrets.
 Apart from credit card and bank account details
 I don't think most of us have anything of interest to declare.

Generally, no. Besides, finding a single person in all the mess is 
itself another mess -- which is why it happens a lot less than people 
fear. On the other hand the principle is what is important here. I don't 
want you reading mail between me and my mother. Why? No real reason. But 
just because it's my life, not yours.

Of course, if we really cared about that we'd go back to remembering that 
http is a broadcast, deliberately insecure protocol and can't be made 
secure via redirects. Period. And then maybe we'd suddenly remember that 
the web was never intended to be an applications development 
environment as much as it naturally *is* a massively linked bulletin 
system... and maybe we'd even remember that World of Warcraft is, in 
very real terms, cloud computing... blah blah blah. There are many 
places the current market is way off base today. And that's not going to 
change anytime soon...

 Speaking of China, I do find that according to logwatch/shorewall
 the majority of people trying to enter my system
 seem to live in that country.
 Maybe it is just that there are so many of them?
 Or are chinese naturally more inquisitive?

No, the Chinese really do have a massive, concerted government cracking 
program to crack literally everything. They conduct what is known as 
mosaic intelligence, where no collected piece is considered individually 
important and targeted intelligence is considered infeasible, but enough 
non-sensitive data collected in a wide enough arc can be assembled in 
such a way as to predict whatever the really sensitive data should be. 
And this is workable with a program as large as theirs.

This used to be a specific area of speciality/concern for me for 
professional reasons (more on the human collection side, not signal 
collections, though) and it really is a concern. But it is a general 
threat, not a specific one, and doesn't generally need to alarm an 
individual as much as it should alarm large organizations and governments.

Blah blah. This is getting pretty OT, so I'll end this chain of thought 
here.

Back on your email question...
I did push some requests at your school server out of curiosity, and 
the --ssl option alone works but does give a warning (a "this 
certificate is worthless, but I'm continuing anyway" sort of message). 
If you get a chance it might be a good thing to talk to the 
admin about that -- he might not even know the situation, perhaps not 
being the one who set things up to begin with.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Mystery of email authentication

2011-12-24 Thread  
On 12/24/2011 08:54 PM, Timothy Murphy wrote:
 夜神 岩男 wrote:

 I'm trying to setup sendmail/dovecot on a new server running CentOS-6
 (well, CentOS-6.2 now).
 Everything seems to go well, but when I run fetchmail I get this warning:
 
 [tim@grover ~]$ fetchmail imap.maths.tcd.ie
 fetchmail: Warning: the connection is insecure, continuing anyways.
 (Better use --sslcertck!)
 

 If I do add --sslcertck (as suggested) I get the response:
 
 [tim@grover ~]$ fetchmail --sslcertck imap.maths.tcd.ie
 fetchmail: Server certificate verification error: self signed certificate
 fetchmail: This means that the root signing certificate (issued for
 /C=IE/ST=Dublin/L=Dublin/O=School of Mathematics, Trinity College,
 Dublin./OU=Automatically-generated IMAP SSL
 key/CN=imap.maths.tcd.ie/emailAddress=postmaster-
 k8gv5eydmbcyfdswbdo...@public.gmane.org)
 is not in the trusted CA certificate locations, or that c_rehash needs to
 be run on the certificate directory. For details, please see the
 documentation of -- sslcertpath and --sslcertfile in the manual page.
 139925738739528:error:14090086:SSL
 routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify
 failed:s3_clnt.c:1063:
 fetchmail: SSL connection failed.
 fetchmail: socket error while fetching from
 t...@imap.maths.tcd.ie fetchmail: Query
 status=2 (SOCKET)
 

 It's just healthier, more detailed warnings than what you got before.

 SSL/TLS relies on a third party verification of a certificate. This
 means a third party's signature on the certificate of the site you are
 connecting to. If, on the other hand, the site you're connecting to
 signed their own certificate themselves, then you have no way of knowing
 if they are really themselves because nobody outside of the 2-party
 connection is validating that the system you're talking to today is the
 same system you were talking to yesterday.

 Thanks very much for your explanation,
 which throws some light on the subject.

 What I still find a little puzzling is that
 fetchmail --sslcertck imap.maths.tcd.ie
 tells me the SSL connection failed,
 yet fetchmail imap.maths.tcd.ie seems to work.

 Also, I'm not clear if SSL will look at all the crt's
 in /etc/pki/tls/certs , or just ca-bundle.crt?


--sslcertck is a switch that specifically demands certificate checking, 
and by strict standards a self-signed certificate is no good. The logic 
being that while technically nobody between you and the system you are 
directly connecting to can read your traffic, you have no guarantee that 
the system you're connecting to is authentic or is an imposter executing 
a man-in-the-middle attack. Most attacks involve silently passing 
traffic through anyway, so there is no way to arouse the suspicions of 
the victims, because they really do connect to their mail account (or 
whatever resource) and things function correctly. So --sslcertck is 
doing its job, denying you a chance at getting cracked.

On the other hand you'll probably have no trouble connecting using 
fetchmail --ssl imap.maths.tcd.ie because it merely requires that some 
form of SSL connection be used, but doesn't care about authentication 
(which is worthless from a security perspective). It also permits a few 
other things considered simply not good enough, and should be avoided.

The canonically correct way to use this is:
fetchmail --sslproto 'SSL3' --sslcertck hostname.domain
This forces the use of SSL3, as SSL2 isn't considered good enough anymore, 
and requires a correct certificate that is signed by a certificate 
authority (someone with a root certificate, which can be your 
university's systems operator if he knows what he's about).

As far as certs in /etc/pki/tls/certs/ ...
Yes, they are all read. Anything with the extension .crt or .pem (and if 
.pem it needs to actually be in pem format, which is not always the 
case) will be scanned and cached by the system. Internally my company 
has a private root CA, and we just add our certificates as separate 
files in the directory -- and LDAP, websites, email, etc work just fine. 
It's actually pretty easy once you learn a bit about what's going on.
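
For example, adding a private root CA on an EL-style box looks roughly 
like this (the file name is hypothetical; c_rehash ships with the 
openssl tooling):

# Copy the private CA certificate (PEM format) into the system cert dir
cp ourcorp-root-ca.crt /etc/pki/tls/certs/
# Rebuild the hashed symlinks OpenSSL uses to look up certs by subject
c_rehash /etc/pki/tls/certs/
# Sanity check: verify a server certificate against the now-trusted CA
openssl verify -CApath /etc/pki/tls/certs/ imap-server.crt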

The hard part is developing an initial understanding of how certificates 
are interpreted and managed -- and where insecurity in the system can 
arise. Key and certificate management is, in fact, the hardest part of 
staying cryptographically secure at the present time. Unfortunately this 
seems to be too much trouble for most large system administrators, even 
at enormously connected places like universities, so it just gets 
ignored and MitM attacks are more commonplace than most people realise. 
The effects of such are generally minimal enough that most people don't 
even know they've been snooped, however, which is a testament to how 
unimportant most of our private data/lives really are anyway.

BTW, Merry Christmas, list

Re: [CentOS] Mystery of email authentication

2011-12-23 Thread  
On 12/24/2011 11:34 AM, Timothy Murphy wrote:
 I'm trying to setup sendmail/dovecot on a new server running CentOS-6
 (well, CentOS-6.2 now).
 Everything seems to go well, but when I run fetchmail I get this warning:
 
 [tim@grover ~]$ fetchmail imap.maths.tcd.ie
 fetchmail: Warning: the connection is insecure, continuing anyways. (Better
 use --sslcertck!)
 

 I should say that everything runs fine on a CentOS-5.7 server,
 and as far as I can see the setup on the new server is the same.
 Under CentOS-5.7 I don't get the same warning:
 
 [tim@helen ~]$ fetchmail imap.maths.tcd.ie
 fetchmail: No mail for tim at imap.maths.tcd.ie
 

 If I do add --sslcertck (as suggested) I get the response:
 
 [tim@grover ~]$ fetchmail --sslcertck imap.maths.tcd.ie
 fetchmail: Server certificate verification error: self signed certificate
 fetchmail: This means that the root signing certificate (issued for
 /C=IE/ST=Dublin/L=Dublin/O=School of Mathematics, Trinity College,
 Dublin./OU=Automatically-generated IMAP SSL
 key/CN=imap.maths.tcd.ie/emailAddress=postmas...@maths.tcd.ie) is not in the
 trusted CA certificate locations, or that c_rehash needs to be run on the
 certificate directory. For details, please see the documentation of --
 sslcertpath and --sslcertfile in the manual page.
 139925738739528:error:14090086:SSL
 routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify
 failed:s3_clnt.c:1063:
 fetchmail: SSL connection failed.
 fetchmail: socket error while fetching from t...@imap.maths.tcd.ie
 fetchmail: Query status=2 (SOCKET)
 
 That is on the new server.
 On the old server (where the fetchmail command works)
 I get much the same warning, though briefer.
 
 [tim@helen ~]$ fetchmail --sslcertck imap.maths.tcd.ie
 fetchmail: Server certificate verification error: self signed certificate
 11316:error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate
 verify failed:s3_clnt.c:915:
 fetchmail: SSL connection failed.
 fetchmail: socket error while fetching from t...@imap.maths.tcd.ie
 fetchmail: Query status=2 (SOCKET)
 

 I must admit I've never been very clear on SSL authentication

It's just healthier, more detailed warnings than what you got before.

SSL/TLS relies on a third party verification of a certificate. This 
means a third party's signature on the certificate of the site you are 
connecting to. If, on the other hand, the site you're connecting to 
signed their own certificate themselves, then you have no way of knowing 
if they are really themselves because nobody outside of the 2-party 
connection is validating that the system you're talking to today is the 
same system you were talking to yesterday.

Deeper explanation is beyond an email list, but suffice to say the 
warnings are accurate. As for "how does a third party verify a 
certificate without a simultaneous connection in the session?" -- just 
don't worry about it. It's magic. It will remain magical until you do a 
lot of reading about exactly how the algorithms act as (nearly) one-way 
functions.

The proper solution to this situation would be for whoever Lord Sysop is 
at your school to generate a real root certificate for the school and 
place it somewhere for download so that everyone can include it in their 
public certificate packs in browsers and email programs (check `less 
/etc/pki/tls/certs/ca-bundle.crt` or something similar to see the 
standard bundle). Then Lord Sysop can sign the certificates of all the 
school's official servers and users can rest reasonably well assured 
that they are really talking to actual school servers -- so long as the 
private part of the key used to generate the root certificate remains 
secret and NOT available for download (I've seen things done horribly 
wrong before, so it's worth mentioning...).
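
A rough sketch of the sysop's side of that (file names here are 
hypothetical, and a real deployment would want a proper CA config with 
extensions and sane key handling):

# Generate the root key (keep it offline and secret!) and a
# self-signed root certificate good for ~10 years
openssl genrsa -out school-ca.key 2048
openssl req -new -x509 -days 3650 -key school-ca.key \
    -out school-ca.crt -subj '/C=IE/O=School of Maths/CN=School Root CA'
# Sign a server's certificate signing request with that root
openssl x509 -req -days 365 -in imap-server.csr \
    -CA school-ca.crt -CAkey school-ca.key -CAcreateserial \
    -out imap-server.crt
# Publish school-ca.crt (never school-ca.key) for users to download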

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Incorrect evince password request

2011-12-07 Thread  
On 12/08/2011 12:14 AM, Johnny Hughes wrote:
 On 12/07/2011 09:09 AM, m.r...@5-cent.us wrote:
 Lucian wrote:
 On 7 December 2011 14:03, Reynolds McClatcheyr...@saf.com  wrote:

 Any workaround or do I just need to use adobe on WinXP?

 Nobody should need to use windows.

 http://lmgtfy.com/?q=evince+password

 Or, least best answer, acroread runs jes' fine on Linux.

 except that they don't have an x86_64 version (unless it is fairly new)
 and I refuse to install i386 libraries to run acroread.

Slight digression, but I always forget to ask this:

Why are people so against installing 32-bit libraries? I've never 
understood this -- some people even opt for virtualization of a 32-bit 
release of their entire OS within which to run a few key 32-bit apps 
instead of just installing the 32-bit compatibility libraries.

What gives? Is there a technical argument against this?
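
For reference, on EL6 the whole exercise is usually no more than this 
(package names assume the stock multilib repos; the binary path is 
hypothetical):

# Pull in the 32-bit C/C++ runtime alongside the x86_64 packages
yum install glibc.i686 libstdc++.i686
# See which 32-bit libraries a given binary still wants
ldd /path/to/acroread | grep 'not found'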
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] e-mail serving

2011-08-03 Thread  
On 08/04/2011 03:20 AM, Todd wrote:
 Hi John,

 what are you doing with this email when you recieve it, beyond just
 saving it?


 I plan to analysis the mail to group into e-mails on the same topic and
 create a comprehensive answer to the topics. Along the lines of a FAQ
 for topics that are continually being asked over and over as well as
 more advanced, obscure topics that people may want to chime into.

 If I had $500 to spend, not counting money for hard disks, could I even
 get a machine for that? or do I really need to be scraping more cash
 together?
 -Jason

From what was stated previously by RPH, who did the breakdown and 
shared his 750k/day experience, I'd say you could easily afford to build 
a system yourself minus the drives. Your problem may be affording the 
bandwidth to sustain the experiment, depending on where you live (here 
in Japan fat bandwidth is cheap, but we have trouble connecting to some 
specific places at high speeds sometimes, for example -- but 
domestically it is really amazing).

Of course, that addresses receipt of the messages; what sort of computer 
would be required to do the parsing and scanning in real time, on the 
other hand, depends entirely on the sort of routines you want to run. The 
cheap route is to collect cheaply over a period and stock the messages, 
and then switch to processing the collected data with whatever resources 
you have available once you've hit the point of diminishing returns on 
whatever storage solution you wind up building. In this way you can 
afford cheap processors if you are willing to pay in time instead of cash.

-Iwao

PS: Of course, if you don't mind dealing with dodgy Russians you could 
probably find a sponsor for just such an effort...
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two ftp clients? Why?

2011-08-02 Thread  
On 08/03/2011 06:41 AM, Les Mikesell wrote:

 But back to the original problem, why would anyone use ftp in this
 century when rsync or http(s) are so much easier to manage?

Do we have Kerberized rsync yet? Or Globus rsync?

If so... please post a link and... (^.^)

Anyway, that sort of gets to the heart of just why we have several (not 
just two) ftp options. ftp, vsftp, Kerberized ftp, gridftp, etc... It's a 
pretty common tool and in some specific cases scripting the niche ones 
is necessary due to a lack of alternatives to match a given environment 
-- though if security isn't an issue (bringing in signed, public 
packages from a repo, say)... then yeah, rsync; though some people view 
it as just one more thing to have to learn and never discover the 
benefits.

-Iwao
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] running X as root in centos 6

2011-07-27 Thread  
On 07/27/2011 11:39 PM, Jerry Geis wrote:

 I do this for a reason as a post install step, then the system reboots
 and it never happens again...

And so you will never be asked again, it seems.

 I am trying to find how to set this checkbox which says never ask me
 again and move on...

But it isn't going to have a chance to ask you again, is it?

Anyway, I'm just being silly above. The gconf key for this is:

/apps/gnome-session/options/show_root_warning

It accepts a bool value, so something along the lines of the following 
command should work from within a kickstart or post-install 
script/firstrun kludge if that is your intention (and I assume it is as 
the above two statements must not be quite what you meant):

gconftool-2 --direct --config-source \
xml:readwrite:/etc/gconf/gconf.xml.defaults \
--type bool --set /apps/gnome-session/options/show_root_warning false

Not sure if the format length turned out correctly in email, but I think 
you get the idea.

Have fun!

-Iwao

PS: If anyone knows anything better than the above sort of commands, 
please pipe up. I've been doing a *lot* of gconftool-2 scripted 
customizations lately and some of the options are pretty hard to 
research. Things like setting default colors for gnome-terminal or 
changing icons defaults, etc. are a fruitful source of irritating 
mistakes. Any better ideas are welcome -- thanks in advance.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] running X as root in centos 6

2011-07-27 Thread  
On 07/28/2011 12:47 AM, Jerry Geis wrote:

 Anyway, I'm just being silly above. The gconf key for this is:

 /apps/gnome-session/options/show_root_warning

 That's awesome... I knew the rest about setting values - I just didn't know
 the name.
 Thanks,

I've become a wizard at finding those things.
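
(For anyone else hunting: it's mostly brute force, something like the 
following.)

# Recursively list every key and value under a gconf subtree
gconftool-2 -R /apps/gnome-session
# Or dump the lot and grep for a likely keyword
gconftool-2 --dump / | grep -i warning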

Unfortunately you didn't write back with what I was hoping for -- 
something along the lines of:

gconftool-2 customizations? pheh, bonehead, everybody knows that's just:
# read-your-mind-for-preferences.sh --exec 'gconftool-2'
duh

But no luck there, eh? (.)

-Iwao

In other news, the Practical Joke of the Year is dconf! (or maybe that's 
2nd place to systemd error output...)
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Upgrading from CentOS 5.6 to 6.0

2011-07-25 Thread  
On 07/26/2011 01:32 PM, R P Herrold wrote:
 On Tue, 26 Jul 2011, Mike Burger wrote:

 If IBM can make this happen for their OS, and Red Hat certainly supports
 such a process in the Fedora line of releases (including the ability to
 list additional repositories for remote installation as part of the
 process), they could certainly make it a supportable option for the RHEL
 line.

 The upstream supports nothing as to Fedora, and indeed,
 members of that project regularly (and seem to gleefully)
 break forward compatability

 But you are missing the point -- WHY spend the engineering
 effort on trying to support such Major 'upgradeany's?  A new
 deployment takes mere minutes for a commercial shop, and by
 NOT supporting such explicitly, the upstream avoids much
 support and engineering load.

 [I say this having done an 'upgradeany' and run into a later
 'nss' in C5 than the C6 initial media provides, that required
 some head scratching, and a nasty workaround, to solve over
 the weekend]

RPH is definitely right about gleefully breaking forward compatibility.

It is easier to control compatibility backward and forward when you're 
deploying a closed (or at least tightly controlled) system as opposed to 
one that boils with change the way the Fedora upstream does.

Example: systemd

Find a way to make a transition from 6x to 7.0 seamless by way of a 
simple yum update once SysV init goes away and all system services 
must grow configuration files and drop init scripts. That's just one 
subsystem, there are other huge changes as well (Gnome3...).
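
To make the scale concrete, an init script's hundred-odd lines of shell 
collapse into a declarative unit, roughly like this (a sketch; 
"exampled" is a hypothetical daemon):

# Write a declarative unit file in place of the init script
cat > /etc/systemd/system/exampled.service <<'EOF'
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/exampled --no-daemonize
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# No start/stop/status shell boilerplate -- systemd handles it
systemctl daemon-reload
systemctl enable exampled.service
systemctl start exampled.service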

IBM and the tightly controlled (and decades-long) OS/360-to-z/OS process 
or their linear passes through AIX Ver.n to Ver.n+i do not compare to 
Red Hat's situation. Anyway, compatibility is often complex enough for 
IBM to address by quietly including emulators for their previous systems 
instead of shooting for base compatibility.

The key to Red Hat's success has been its hands-off approach to the 
Fedora Project. If Red Hat ever desires to implement something they must 
first present working implementations for acceptance by FESCo -- which 
implies promising, working implementations. This forces a lot of unique 
situations, but the primary effects are:
  * Advances occur at a rate difficult to compare to other projects
  * Entire subsystems can be marked obsolete if a working 
implementation demonstrates superior function (systemd ousting the 
venerated SysV init is an example of this "nothing sacred" attitude)
  * Technical debate about anything/everything crosses company, private 
and personal lines in ways difficult to interpret from a traditional 
development perspective
  * The chaos level is high (marked by the inability for any one person 
to be an expert on everything at a given time -- by the time one thing 
is thoroughly understood something else has changed)
  * Absolute forward and backward compatibility requires too much 
effort, so the concept of compatibility moves up two levels to the 
data layer[1]

IBM, on the other hand, has a long-term compatibility program they 
consider to be at the core of their business model (System/360 history 
is interesting here). They plan their changes around a few subsystems 
they consider to be sacred. If you want to change something sacred you 
have to plan it out through the high priest in charge of that subsystem 
-- and it is acceptable for major system changes to take several years.

The whole thought process is entirely different -- as are their target 
markets. Red Hat is a good value for large- to huge-sized businesses, 
and IBM is a better value (sometimes with a mix of Red Hat in some 
areas/departments) for titanic- to ZOMG-sized businesses.

I apologize for the long message. I didn't have time to write a short one.

-Iwao

[1] I've been thinking about this a bit and I've come to think that 
there are roughly three layers to compatibility -- so I'll define them 
here since I referenced my own definition:

1- Absolute forward and backward compatibility. Code builds and runs in 
exactly the same way on any system in the series.

This is the level IBM shoots for. System upgrades and downgrades are 
clean, reliable and easy to recommend and support.

In Linux terms this would mean you could load, say, RHEL 3 and yum 
upgrade to RHEL 6.1 -- whether that is through a yum-initiated upgrade 
chain or a one-step upgrade is of no concern to the user.

2- Configuration compatibility. Implementations change in radical ways, 
but the interpretation, format and semantics of configuration files are 
absolutely respected between versions and often between competing 
implementations of a single standard. OpenLDAP's move to cn=config while 
retaining the ability for slapd.conf to be read and converted to a 
cn=config loadable set of LDIFs is an example of this.
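
That conversion, for instance, is roughly a one-liner (paths are the 
usual EL defaults):

# Convert a legacy slapd.conf into the cn=config directory layout
slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
chown -R ldap:ldap /etc/openldap/slapd.d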

This is roughly what Microsoft used to aim for (somewhere on the road 
between XP and 8 they seem to have totally quit the idea, though).

In Linux terms this would 

Re: [CentOS] Upgrading from CentOS 5.6 to 6.0

2011-07-25 Thread  
On 07/26/2011 02:07 PM, Mike Burger wrote:

 But you are missing the point -- WHY spend the engineering
 effort on trying to support such Major 'upgradeany's?  A new
 deployment takes mere minutes for a commercial shop, and by
 NOT supporting such explicitly, the upstream avoids much
 support and engineering load.

 Quite simply, because the customer base, which is paying the upstream for
 support, is requesting that such a process be supported.

And this would be a sensible argument, were it not being made on the 
CentOS list. Folks here aren't paying anyone anything.

This is more like an extension of the Fedora community, in a way -- free 
testers and freeloaders. Big deal. Red Hat doesn't *need* to do anything 
for us, come to think of it they're already doing quite a bit, so I see 
no point in complaining.

-Iwao
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] OT: Linus Torvalds delays Linux 3.0 launch due to a subtle bug (fwd)

2011-07-21 Thread  
On 07/21/2011 09:26 PM, Geoff Galitz wrote:

 And more over, there is nothing earth-shatteringly new in the 3.0 kernel.
 Linus said during the last kernel summit he wanted to change the versioning
 scheme to make it easier for various developers in different realms to track
 version changes.   Don't expect anything super-cool for us on the
 sysadmin/user side as a direct result of the 3.x kernel.

 Of course, incremental changes are usually welcome for stability and device
 driver support.

He just wrote everyone this morning that he's pushing 3.0 on Monday.

Sometimes I wonder if the CentOS list-land is more about drama and FUD 
than getting anything done or being correct.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SPAM on the List

2011-07-18 Thread
On Mon, 2011-07-18 at 04:04 +0100, Always Learning wrote:
 On Sun, 2011-07-17 at 22:37 -0400, Stephen Harris wrote:
 
  On Sun, Jul 17, 2011 at 09:07:38PM -0500, Les Mikesell wrote:
   There is no requirement for the greeting name to match any IP, and isn't 
   likely 
 
  RFC2821 says:
 -  The domain name given in the EHLO command MUST BE either a primary
host name (a domain name that resolves to an A RR) or, if the host
has no name, an address literal as described in section 4.1.1.1.
  
  So, pretty much, HELO or EHLO greeting _must_ match to an IP.
  
  (RFC821 actually wanted the HELO to match the connecting host, but
  2821 just says it must be an A record or an address literal).

 It seems spammers have successfully hacked Rupert Murdock's London Times
 newspaper and copied hundreds of thousands of email addresses or has a
 member of staff sold the email addresses to spammers to make some money?

Though it is certainly possible that a breach of some sort is
responsible for your spam, sniffing for email headers on high activity
parts of a network would be sufficient to collect a large number of
active email addresses to try (sniffing at Tor gateways could provide
interesting results, come to think of it). Another big winner for
mailbox collection is to not crack the information provider's site, but
to instead crack the email service provider and obtain a list of all
active accounts on that server (which would likely span multiple
domains).

Getting a hold of email accounts can happen any number of ways, most of
them uncontrollable by the account holder. It's a mailbox -- an open
destination for the world to send you stuff. You can't be too surprised
when the world does in fact send you stuff.

Traditional solutions include hiring a secretary to screen your mail
(today this would be setting up SpamAssassin) or ignoring all but
personal messages on verified stationery (today this would be digitally
signed mail) and instead going out to retrieve your information at need
instead of having it sent to you at availability.
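
The secretary version today amounts to a couple of procmail recipes in 
front of SpamAssassin -- a minimal sketch, assuming spamc and local 
procmail delivery:

cat >> ~/.procmailrc <<'EOF'
# Run each message through SpamAssassin's client
:0fw
| /usr/bin/spamc

# File anything it tagged into a separate spam folder
:0:
* ^X-Spam-Status: Yes
spam/
EOF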

The difference between deposit/fetch and send/receive is profound. This
is part of why I'm surprised that newsreaders and forums have fallen
from favor amongst technical discussion groups. The "logging into forums
is a PITA" or "setting up another client is a PITA" arguments obviously
won the debate -- though I think spam is a lot deeper into PITA
territory than either at the present time.

-Iwao


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SPAM on the List

2011-07-18 Thread
On Mon, 2011-07-18 at 15:00 +0100, Always Learning wrote:

 In the example I mentioned, it was a specially created single purpose
 email SMTP address (no POP etc.) used just once about 5? months ago. It
 is easy for me to block it as the mail server (MTA Mail Transfer Agent)
 which I have done.

An address can get snagged once after a single use and spammed months
later. Creating a plethora of special use email accounts is, in my
opinion, simply too much effort when the original design of email was to
create a public address that anyone can contact the owner through. 

 We are no so liberal with mailboxes. Some can be accessed only by prior
 approved senders. Others, because they are single purpose email
 addresses, can be permanently blocked after the first unwanted email.
 Some email addresses are created with sub-domains that can be dropped at
 the first abuse then replaced by new sub-domains.

Getting too creative with email protections reduces the primary
functionality it was invented to provide.

  The difference between deposit/fetch and send/receive is profound. This
  is part of why I'm surprised that newsreaders and forums have fallen
  from favor amongst technical discussion groups. The "logging into forums
  is a PITA" or "setting up another client is a PITA" arguments obviously
  won the debate -- though I think spam is a lot deeper into PITA
  territory than either at the present time.
 
 The problems with forums are, in my personal opinion:-
 
 (1) Spyware : logging every access with Google the USA's international
 spying operation.

Mailing lists do not avoid this (if they do, please explain how),
particularly now that Google has people using its own parallel DNS
service (!o!) and runs infrastructure that most of these tech mailing
lists touch at some point. At this point I doubt that there is a message
sent that doesn't touch or at least bounce toward a Google-owned server
somewhere.

 (2) Advertisements

On a bad forum, yes. On good ones, no. The advertising thing is
ridiculous and a symptom of our community not realizing how easy it is
to self-host forums for free (or newsgroups -- but more on that later).

 (3) tiny text difficult to read

On bad forums, maybe. I haven't been to a site I found difficult to
read, come to think of it, but I'm sure there are some administrators
out there who don't understand the concepts of usability. Anyway, you
can generally control your mail display settings and forums would
present a potentially mixed bag unless the community settled on a rough
standard, so I can see your point here.

 (4) Pop-up windows

On unbearably crap forums, maybe. I've never experienced this (Firefox
always saved me or there just were never pop ups? No idea), but if any
official project decided to use forums as a primary communication means
and put not just ads, but *pop-up ads* on their site -- wow...

 (5) Layout not conducive to easy and quick reading.

The free-form layout of mailing lists (top/bottom/mid posting all
mishmashed) is far less conducive to organized eye movements, in my
experience. Obviously, you and I may have adapted differently, though I
find neither difficult.

 (6) Having to visit a web site and then log-on if one wants to respond.

I keychain the logins (I think most browsers have a function like this
now -- I think even elinks does, and elinks is a great way to browse
forums, btw) and don't worry too much with it after that.

I find this to be a *lot* less trouble than twisting my email setup into
something email was never intended to be.

 Conversely:-
 
 Email Lists are quick, easy, immediate (certainly for my set-up),
 require no extra effort.  Should the address get spammed, then its one
 quick and simple change:-
 
 (a) replacement DNS sub-domain
 
 (b) update Mailman
 
 (c) change email address in email client.

Again, far too much trouble.

So... what is wrong with newsreaders? In my experience they provide all
the benefits of email (speed, uniform interface, etc.) that you listed
as well as all the benefits of a post/fetch paradigm that I get from
forums without any of the hassles of either.

That we aren't communicating through a newsgroup has always been
puzzling to me for the exact reasons that you and I both listed. If we
were to design a new protocol to solve both problems it would likely
turn out to be very like newsgroups -- yet we don't use them and they
exist and are easy to set up.

Anyway, interesting response. Cheers.

-Iwao

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SPAM on the List

2011-07-18 Thread
On Mon, 2011-07-18 at 22:17 +0800, Christopher Chan wrote:
 On Monday, July 18, 2011 09:19 PM, Stephen Harris wrote:
 
 SPAM-L is that way == oh wait, it's dead...
 
 Maybe we can keep discussions about blackhat, incompetent networks, 
 about SMTP, open proxies/relays, honeypots and what have you off this list?
 
 Just limit it to sendmail/postfix/exim configuration if you have to 
 discuss these things but please leave everything else outside in 
 NANAE/your favourite spitting pot.

You wouldn't be insinuating that [CentOS] SPAM on the list has become
SPAM on the list now, would you?

-Iwao

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SPAM on the List

2011-07-18 Thread
On Mon, 2011-07-18 at 10:54 -0500, Les Mikesell wrote:
 On 7/18/2011 10:27 AM, 夜神 岩男 wrote:
 
  (6) Having to visit a web site and then log-on if one wants to respond.
 
  I keychain the logins (I think most browsers have a function like this
  now -- I think even elinks does, and elinks is a great way to browse
  forums, btw) and don't worry too much with it after that.
 
 So do you typically provide helpful answers to forum questions sooner 
 after they are posted when you have to forum-hop than you would if they 
 land in your inbox or later?

Obviously some level of activity must be maintained within a community
to ensure decent response times, but newer communities such as Ubuntu
have found forums to be a fairly useful thing. The forum community there
is doing well and questions get answered at a reasonable pace -- with
the added benefit that when someone goes on vacation they have no box
that needs filtering, unsubscribing, setting in a vacation state, etc.
to protect from lists or spam. Outside of the tech world forums have
proven themselves durable and usable for help and feedback purposes --
overwhelmingly so.

  I find this to be a *lot* less trouble than twisting my email setup into
  something email was never intended to be.
 
 Email wasn't intended for receiving messages and replying?  Hmmm...

It was designed precisely to do those things. What the previous poster
described were time-consuming contrivances with the specific intent
of limiting the receipt of messages -- which is exactly half of the
specification as you stated it.

  So... what is wrong with newsreaders? In my experience the provide all
  the benefits of email (speed, uniform interface, etc.) that you listed
  as well as all the benefits of a post/fetch paradigm that I get from
  forums without any of the hassles of either.
 
 Interesting that you bring this up in the context of spam.  The problem 
 with net news is that all of the servers stopped handling it because of 
 the porn and copyright-infringing binaries postings that overwhelm it.

Newsreaders require a news server. News servers can be run by anyone; it
doesn't require a global cabal to serve news. In the later days of
usenet it was overwhelmed by crap, largely because of the enormous
number of groups created by people who didn't have time to maintain
them, had a blanket anonymous-publish policy, and eventually never
showed back up to take care of their lists. Lists such as that got
swamped, and so did the servers, which made the whole system unwieldy
(though news server networks are still run today and moderation via user
validation is still an option).

What I am describing is the running of a newsgroup server specific to a
project or interest, say news.centos.org (or whatever for whatever).
Initial validation would be required (not unusual for mailing lists) for
initial posting, and after that unmoderated publication would be
permitted by a validated user. This is a simple system. Disabling
attachments and/or setting file/message size limits is trivial and is an
action which occurs in just one place (the server) and doesn't bother
the users.

From an anti-spam/security perspective a post/fetch system is simply
more suitable for noise-free discourse than email. That we have
forgotten that is likely more due to the timing of the web explosion in
the early 90's and the tech/generation gap it produced than anything
else.

 A news service with censorship might be OK.  Until they censor something 
 that you wanted to say or see.  Forums with rss feeds might be a middle 
 ground to centralize the reading side but there's still the issue of 
 standardizing the forum interfaces so you don't have to figure out how 
 to reply again for every interesting topic.

You have just described properly run newsgroups -- and why I am
suggesting them as a reasonable course of action which would resolve
spam issues not just within the list, but limit everyone's exposure to spam
in their general mailboxes.

-Iwao

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Switch from SL - Centos

2011-07-12 Thread
On Mon, 2011-07-11 at 14:17 +0100, James Hogarth wrote:
 
  Downloaded centos-release-6-0.el6.centos.5.x86_64.rpm and
  redhat-logos-60.0.14-10.el6.noarch.rpm from CentOS repo
 
  rpm -e --nodeps sl-release redhat-logos
  rpm -hiv redhat-logos-60.0.14-10.el6.noarch.rpm
  centos-release-6-0.el6.centos.5.x86_64.rpm
 
  yum update
 
  reboot, and voilà
 
 
 The above would only update a package if the centos repos had a higher
 version number than the installed SL one I would strongly suggest
 something akin to yum reinstall \* and leave it to chug away (backups
 first naturally) for a while to refresh all the packages and teh rpm
 database to be in sync with the centos build. requires matching,
 same build options for sure etc etc
 
 In the event something crops up it at least eliminates an odd untested
 mix for certain fundamental packages like glibc etc

An idle question:

What is the advantage of switching to CentOS 6 if you already are
running SL6? Or at least... what is the purpose? I'm not really clear on
the difference (other than CentOS is the noisier bit of the party).

-Iwao

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6 system-config-network missing

2011-07-12 Thread
On Mon, 2011-07-11 at 16:00 -0700, Emmett Culley wrote:
 On 07/11/2011 03:26 PM, b.j. mcclure wrote:
  On Mon, 2011-07-11 at 14:35 -0700, Emmett Culley wrote:
  The network configuration GUI is not to be found on any of the CentOS 
  repos or on EPEL.  I am not interested in having NetworkManager installed 
  on a server.  Is there an application that takes the place of 
  system-config-network?
 
  Emmett
  
  There was much discussion about this on the RHEL 6 beta list several
  months ago.  Many complaints but nothing came of it as far as I know.  I
  just edit the config files in /etc/sysconfig/network-scripts/.
  
  B.J.
  
  RHEL 6.0, Linux 2.6.32-131.2.1.el6.x86_64
 
 I guess I'll have to do that as well.  I couldn't manage bridge network via 
 the GUI anyway.  I might try installing NetworkManager and disabling NM 
 control for the bridged devices, but for now it seems easier to just edit the 
 files in /etc/sysconfig/network-scripts.
 
 After all, they shouldn't be changing all that often on servers anyway...
 
 Emmett

Having had to deal with this a *lot* across a variety of Fedora-based
systems for several months now, the most reliable replacement for
NetworkManager I have found truly is vi.
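
Which, for a typical static server interface, means a file along these 
lines (addresses are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
DNS1=192.0.2.53
# ...then: service network restart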

-Iwao

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Switch from SL - Centos

2011-07-12 Thread
On Tue, 2011-07-12 at 10:46 +0100, James Hogarth wrote:
 
  An idle question:
 
  What is the advantage of switching to CentOS 6 if you already are
  running SL6? Or at least... what is the purpose? I'm not really clear on
  the difference (other than CentOS is the noisier bit of the party).
 
 Plus when you have many systems (read 100+) to manage it is far better
 to keep things consistent. The 'test' SL6 boxes that were used for
 simulation of how C6 was likely to be will be replaced here so that
 only CentOS channels are in Spacewalk and no SL etc it also
 eliminates package confusion in Spacewalk (for instance) due to
 identical NVREA between the channels...

But that was my point, why the switch in the first place if SL6 was
consistent?

I may be missing the point. We have a large environment to manage (lots
of systems distributed across several organizations) but the application
requirements are fairly uniform and many userland packages get rebuilt
and managed by us anyway. I suppose that makes our deployment a niche
spin of its own -- only the very core is really unadulterated SL6.
Perhaps incongruencies simply get squashed by our rebuild process so I
may have never noticed awkwardness in SL6 that others find intolerable
(is it awkward for some?).

We would have fielded CentOS 6 with partial rebuilds but for the delay
(and a few other issues -- the level of decision-making mystery
surrounding it being another). The wait was intolerable, and it came down to
simply getting an RHEL contract (hard to balance against some expenses
which we just can't eliminate through external support), internally
forking an LTS version of Fedora (which may still happen at some point,
though not soon), or getting SL6 and doing partial rebuilds where necessary.

Now that a consistent environment exists based on SL6 I'm wondering what
are the merits, if any, of attempting a lateral (and somewhat backward)
migration to CentOS 6? It seems it may not be worth the effort in my
present environment, but in other environments what would be the benefit
other than reverting to a more familiar name?

-Iwao

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] [OT] FOSS marketing problems (WAS Re: Celebrating Centos 6.0 Day World-wide)

2011-07-09 Thread
On Sat, 2011-07-09 at 16:21 -0500, John R. Dennison wrote:
 On Sat, Jul 09, 2011 at 10:14:28PM +0100, Always Learning wrote:
  
  Its time for the world to drift away from the M$ Windoze expensive
  nightmare. Centos is a very good alternative.
 
 While that might be true, the reality of the situation is different.
 Until you can provide a seamless drop-in replacement for Windows that
 does not require a change in work-flow habits learned over the course
 of, for some, many years such a switchover will _never_ happen en masse.

[Iwao clears his throat, arranges his large stack of soap-boxes...]

Microsoft has decided to give alternatives a chance recently, by forcing
customers into expensive upgrade cycles which provide no new real
features but do require a series of awkward changes in workflow.
Consider that OOo got a big boost when MS Office introduced the ribbon
menu bar. Also consider the headway Apple is making -- and a transition
to OS X workflows is a lot more traumatic to the lay user than
transitioning to Gnome 2x, KDE or XFCE-ish environments. But it doesn't
matter.

There is more to the market than chasing drop-in replacements (once
again, look at Apple).

The biggest problem is that very few on this side of the fence are
any good at marketing -- and part of that is due to the fact that we
tend to chase absolute truth and argue it at length, whereas the real
heart of marketing is selling dreams in spite of reality (which is a
little different than just flat out lying -- but FOSSy folks have
difficulty with that concept). This is assisted by the technical nature
of our arguments, which provides some level of unambiguous reference for
discussion. The problem with talking to real customers, however, is that
they don't really care about the underlying tech, they care about the
end result. So we talk one language to them and they think in another. 

Consider that whenever we say "we have developed a superior video codec
and container mechanism -- and it's free for the world! Technical
progress! Freedom!" the guy sitting across the table is usually quietly
thinking something along the lines of "...but does that mean it
will put naked ladies on my screen or let me play
UltraAwesome3DKnockoffSpaceFight III?" This scenario plays out in
various ways depending on the market you're pitching to, of course, but
this is roughly what happens. Microsoft, on the other hand, just says
"Cool pixels! Look, now the menu bar FADES IN! Did you SEE that?!? And
bikini girls at the presentation announcements! And Steve Ballmer
walking out from BEHIND FOG! Wow! What a great new OS we have! Yep,
email, web browser! Stuff! Cloud! Chat with your grandmother! Movies!
Games! We have it all! And... security! Interoperation with all the
security vendors you want!" Compare that glitz and marketing sex with,
say, the frank and practical expert discussions you can have at the Red
Hat booth at a trade show -- we lose unless we are marketing to fellow
engineers, which is why Red Hat has cornered that specific market and
chases nothing else.

The color of their discussions is entirely different from the way FOSS
developers think about things. We have a customer culture problem, not a
problem with achieving identical workflows to Windows. Once again, OS X
is a good example of just how much of a difference people are willing to
accept if the product is superior *and* it is conveyed to them in their
language.

Microsoft's genius is that they just tell people what they want to hear,
not what is accurate or even true in most cases. Microsoft hypes, FOSS
whines. Look at how far they have come with absolutely inferior
technology -- to this day! It's amazing and the brilliance underlying
that is completely missed by the majority of the FOSS community --
probably because the open source movement is all about truth, and that
makes FOSS hype far less titillating than MS hype. MS's discovery was not
better technology -- it was that technology that is merely good enough
can destroy the market position of superior products if conveyed in a
manner that consumers can digest as opposed to the way the creators
understand their engineering product.

-Iwao

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Power-outage

2011-07-02 Thread
On Sat, 2011-07-02 at 03:03 +0800, Emmanuel Noobadmin wrote:
 On 7/1/11, Timothy Murphy gayle...@eircom.net wrote:
  It seems to me that it should be possible
  to have a simple, torch-battery operated, system
  which will keep the machine alive long enough
  to make a graceful exit.
  A full-blown UPS would be excessive, I think,
  as I only want the machine to re-boot
  when the current comes back on.
 
 Like others have suggested, a cheap UPS is the way to go. The problem
 with your idea is that you'll need a DC to AC inverter that can handle
 the output current required by your server and something to hold the
 batteries (you'll need more than one because attempting to draw a huge
 current from a normal battery will either kill it or at the very least
 cause it to have a shorter than expected capacity) and everything
 together, it's probably going to cost more in both money and time to
 have this thing.

You will also need to have the device signal the OS to shut down cleanly
and be set to reboot when the power comes back on.

And once you've added those features, you will have created a UPS --
likely at an expense in time/money that exceeds simply having bought
one. Specifically, a 300W UPS can be had for less than $40 -- that's
0.5~4 hours of overtime or side work depending on your job. You are
likely to expend a lot more than 4 hours putting your homemade solution
together and achieve a far less reliable result. Unless the experience
of amateur electrical engineering is what you are craving (it *is* fun)
buy a UPS and be done with it -- just read the docs so you fully
understand how to make it tell your computer to start shutting down or
booting up, etc. They aren't magic and require a little setup to be
fully utilized.
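
For the curious, with NUT (packaged as "nut" in EPEL, if memory serves) 
the signaling half is only a few lines -- a sketch with placeholder 
names and password:

# Define the attached UPS for the NUT driver (USB example)
cat >> /etc/ups/ups.conf <<'EOF'
[myups]
    driver = usbhid-ups
    port = auto
EOF
# Have upsmon shut the box down cleanly when the battery runs out
cat >> /etc/ups/upsmon.conf <<'EOF'
MONITOR myups@localhost 1 upsmon somepass master
SHUTDOWNCMD "/sbin/shutdown -h +0"
EOF

The reboot-on-power-return half is normally a BIOS setting (restore on 
AC power loss) rather than anything the OS does.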


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos