On 13/02/2015 23:08, James wrote:
> Alan McKinnon <alan.mckinnon <at> gmail.com> writes:
> 
> 
>> I doubt dpkg and rpm are going to be much use to you, unless you
>> really want to run two package managers. Besides, neither is
>> especially useful without the front ends apt* and yum.
> 
> I'd just use those to unpack and maybe preprocess some of the codes.
> 
> Agreed. I do not want a full-blown deb or rpm package manager, just
> a way to install and evaluate some of those codes before beginning a more
> arduous and comprehensive task. Maybe I should just put up a RH/centos box
> and evaluate codes there. It seems *everything* I want to test and look at
> in the cluster and hpc world comes as an rpm or deb package; so I'm looking
> for a time saver, to surf thru the myriad of codes I'm getting; many look
> very cool from the outside, but once I run them, they are pigs.......
> 
> Then a slick way to keep them secure and clean them out. Maybe I need chroot
> jails too? I spend way too much time managing codes rather than actually
> writing code. I feel confused often and cannot seem to master this
> git_thingy.... I have not coded seriously in a long time and now it is
> becoming an obsession, but the old ways are draining my constitutional
> powers.....


I see you are doing more than I thought you were doing :-)

rpms are cpio archives and debs are ar archives wrapping tarballs, so the
easy way is to unpack them and see what's going on:

rpm2cpio name.rpm | cpio -iv --make-directories
dpkg -x somepackage.deb ~/temp/
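
If you don't even have dpkg installed, plain binutils `ar` is enough to see
that a deb is just an archive of tarballs. A quick sketch that builds and
unpacks a toy deb (the package name, paths, and contents are all made up for
the demo; a real deb from dpkg-deb has stricter member ordering):

```shell
# Build a tiny fake .deb to show the layout: an ar archive holding
# debian-binary, control.tar.gz and data.tar.gz.
mkdir -p demo/data/usr/local/bin
echo 'echo hello' > demo/data/usr/local/bin/hello
tar -C demo/data -czf demo/data.tar.gz .
mkdir -p demo/control
printf 'Package: demo\nVersion: 1.0\n' > demo/control/control
tar -C demo/control -czf demo/control.tar.gz .
echo '2.0' > demo/debian-binary
ar rc demo/demo.deb demo/debian-binary demo/control.tar.gz demo/data.tar.gz

# Peek inside without installing anything:
ar t demo/demo.deb                  # list the members
mkdir -p demo/out
(cd demo/out && ar x ../demo.deb && tar -tzf data.tar.gz)
```

Same idea as rpm2cpio on the rpm side: you never have to let the package
manager near your live system just to read what's in the payload.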

Considering the size of what you are doing, you are probably better off
running a Centos and Debian system to evaluate the code and discard the
rubbish. Once you've isolated the interesting ones, you can evaluate
them closer and maybe write ebuilds for them.

> 
> 
>> Any special reason why you don't instead download the sources and build
>> them yourself with PREFIX=/usr/local ?
> 
> Lots of errant codes flying everywhere, so you have to pull a code audit
> to see what's in the raw tarballs before building. That takes way too much
> time. I'm working on setting up several more workstations for coding, to
> isolate them from my main system. The approach you suggest is error prone,
> takes too much time, and I'm lazy and sometimes even stupid.
> I need a structured methodology to be a one-man extreme_hack_prolific
> system that prevents me from doing stupid things whilst I'm distracted.
> 
> 
> Maybe I should just put up VM resources on the net, blast tons
> of tests thru the vendors' hardware and let them worry about the
> security ramifications?  Part of it is that these codes are based on
> 'functional languages' and I just do not trust what I do not fully
> understand. Stuff (files etc) goes everywhere and that makes me cautiously
> nervous. I have /usr/local for manual work and /usr/local/portage for
> overlays (layman), but it's becoming a mess. Then where do I put the work
> that results from repoman? Those codes often seem to be parallel projects,
> when the code I'm evaluating needs to be cleaned up or extended to properly
> test. Furthermore I have a growing collection of files that result
> from kernel profiling via trace-cmd, valgrind, systemtap etc etc.
> As soon as I delete something, I need to re-generate it for one
> reason or another...... I just hope that this repo.conf effort
> helps me get more structurally organized?
> 
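
On the repo.conf question: a minimal sketch of what a local-overlay entry
usually looks like, matching the /usr/local/portage path you mention (the
[local] repo name is just an example, and I'm writing it under /tmp here so
nothing real gets touched; the live file would go in /etc/portage/repos.conf/):

```shell
# Hypothetical repos.conf entry for a hand-managed local overlay.
mkdir -p /tmp/etc-portage/repos.conf
cat > /tmp/etc-portage/repos.conf/local.conf <<'EOF'
[local]
location = /usr/local/portage
masters = gentoo
auto-sync = no
EOF
cat /tmp/etc-portage/repos.conf/local.conf
```

Keeping each overlay in its own small file like this is what tends to stop
the /usr/local/portage sprawl from becoming a mess.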
> 
> Did you see/test 'travis-ci' yet? [1] I'm not sure it's the same
> on github [2], but some of the devs are using it there.
> 
> 
> 
> James
> 
> [1] http://docs.travis-ci.com/
> 
> [2] https://github.com/travis-ci/travis-ci
> 


-- 
Alan McKinnon
alan.mckin...@gmail.com

