On Fri, Mar 21, 2014 at 10:33:07PM -0700, Keith Lofstrom wrote:
Indeed. By big, slow stuff I mean jobs that run for days on end. I've
run programs that used both processors of a dual-core CPU for over a
week. Perhaps VirtualBox can use two processors, too; otherwise I
take a 2x performance hit,
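For what it's worth, VirtualBox can indeed assign more than one virtual CPU to a guest. A minimal sketch with the VBoxManage CLI (the VM must be powered off; "sl6-legacy" is a hypothetical VM name, not one from this thread):

```shell
# Give an existing VirtualBox guest two virtual CPUs
VBoxManage modifyvm "sl6-legacy" --cpus 2

# Confirm the setting took effect
VBoxManage showvminfo "sl6-legacy" | grep -i 'Number of CPUs'
```

Whether the guest actually gets 2x throughput still depends on the workload and on hardware virtualization (VT-x/AMD-V) being enabled.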
On Fri, 21 Mar 2014, Keith Lofstrom wrote:
However, I do a lot of science/technical/medical work. These oddball
programs do a particular task and there are not many alternatives. The
author is usually a subject specialist rather than a programmer, with bits
of contributed code, borrowed
That seems like a great reason to run a virtual machine and sandbox a more
modern Linux to run those specific apps.
On Thu, Mar 20, 2014 at 8:58 PM, Keith Lofstrom kei...@gate.kl-ic.com wrote:
On Tue, Mar 18, 2014 at 04:43:53PM -0700, Tim wrote:
Your software distribution does you a grand
Yeah, it seems to me that running a VM for legacy incompatible stuff
is far easier.
I typically stay away from distros that are very task-specific. While
it's convenient and all to have a certain set of software installed
and configured by default, the folks managing those distros have very
I run a Red Hat Enterprise clone, Scientific Linux 6.5. It is
stodgy (equivalent to Fedora 12, a 4-year-old base kernel) - but
it will get security updates until 2023, supported by Fermilab
even if Red Hat goes away (or worse, gets bought by Larry
Ellison). The problem is, almost all
On 03/21/14 14:45, Ali Corbin wrote:
I run a Red Hat Enterprise clone, Scientific Linux 6.5. It is
stodgy (equivalent to Fedora 12, a 4-year-old base kernel) - but
it will get security updates until 2023, supported by Fermilab
even if Red Hat goes away (or worse, gets bought by Larry
Keith wrote:
... The problem is, almost all recent third party not-in-
the-standard-distro packages want later major revs of libraries
with new features; I can't compile or run those because the
dependencies collide.
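One way to see such a collision coming, before a package fails to run: dynamically linked binaries record which versioned library symbols they need, and objdump can list them. A sketch, assuming binutils is installed (/bin/ls stands in for any third-party binary you'd actually check):

```shell
# List the glibc symbol versions a binary demands. If any version here
# is newer than what the installed libc exports, the program won't run
# on this system - that is the dependency collision in concrete form.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u
```

Running the same command against the system's own libc.so.6 shows what the distribution actually provides, so the two lists can be compared directly.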
On Fri, Mar 21, 2014 at 02:45:34PM -0700, Ali Corbin wrote:
Are the
I meant to sound off earlier about many things in this thread, such as:
- not all dependencies are libraries (need gnome >= 2.0 because its
settings manager is how you change the color scheme in program Z, or
must have an MTA so that the disk-health-checker can notify you of
impending failures) -
On Fri, Mar 21, 2014 at 11:52 PM, Keith Lofstrom kei...@gate.kl-ic.com wrote:
... For the big, slow stuff, I'll have to keep tweaking C code, sigh.
On Sat, Mar 22, 2014 at 12:03:02AM -0500, chris (fool) mccraw wrote:
... If you run code for days on end, maybe this makes sense. If your runs
On Tue, Mar 18, 2014 at 04:43:53PM -0700, Tim wrote:
Your software distribution does you a grand service by managing this
for you. Use a distro that does it right, buy into it and use their
framework, and many of the headaches you describe become minor.
On Wed, Mar 19, 2014 at 08:31:38AM
The justification for dependencies in software packages is that
they can be shared, saving RAM and disk space. But disks and
RAM are growing very large, while not much is actually shared.
Besides many instances of the same program sharing the runtime
code, do programs really need to share
Bynoe
From: plug-boun...@lists.pdxlinux.org [plug-boun...@lists.pdxlinux.org] on
behalf of Keith Lofstrom [kei...@gate.kl-ic.com]
Sent: Tuesday, March 18, 2014 4:26 PM
To: PLUG
Subject: [PLUG] Are dependencies obsolete?
The justification for dependencies in software packages is that
they can be shared, saving RAM and disk space. But disks and
RAM are growing very large, while not much is actually shared.
Besides many instances of the same program sharing the runtime
code, do programs really need to share
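The "how much is actually shared" question can be answered empirically on a running system: every process's memory map is visible under /proc, so counting how many processes currently map the shared C library shows the sharing that dynamic linking really buys. A quick sketch:

```shell
# Count the processes on this machine that currently map libc -
# each one is reusing the same in-memory copy of the library.
# (Maps of other users' processes may be unreadable; errors ignored.)
grep -l 'libc' /proc/[0-9]*/maps 2>/dev/null | wc -l
```

On a typical desktop that count is large for libc but drops to one or two for most other libraries, which is roughly the point being argued above.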
Most of you probably know this, but what you are talking about is the
difference between static and dynamic linking. When using static
linking, all the libraries needed by the program are inserted into the
executable code. Depending upon the environment on the target machine,
statically linked