Hey Pete.

Thanks for the reply (I guess!). I asked for all replies, so I can't complain!

My goal isn't to roll my own tool; it's to make sure I'm not going
down a hole that ends in a dead end, costing time and effort with
little to show for it. I don't have assumptions about the
implementation, so I'm not sure where you got that I do -- but that
said, thanks for your reply.

On Tue, Nov 15, 2016 at 12:09 PM, Pete Travis <li...@petetravis.com> wrote:
> On Nov 15, 2016 09:40, "bruce" <badoug...@gmail.com> wrote:
>>
>> Hey!
>>
>> -- This is off-topic for the Fedora list --
>>
>> Thanks to all who've commented on the SSH/screen issue! I think I've
>> got that part resolved.
>>
>> Now, in order to use the IP addresses that have been generated for
>> the droplets, I need opinions/thoughts on apps to handle connecting
>> to the varying number of droplets/instances for the project.
>>
>> The following is a rough overview of what we think needs to be done.
>>
>> We're looking at clusterSSH (even though it appears to need to be
>> built from source!). Other thoughts/opinions on apps that might
>> suffice would be welcome.
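>>
>> From what I can tell from the cssh docs (treat this as a sketch --
>> the tag names and addresses are made up), groups are defined in a
>> clusters file (/etc/clusters or ~/.clusterssh/clusters), one tag per
>> line:
>>
>>   fetchers root@10.0.0.11 root@10.0.0.12 root@10.0.0.13
>>   parsers  root@10.0.1.21 root@10.0.1.22
>>
>> and then "cssh fetchers" should open one term per host in that
>> group, with typed input mirrored to all of them.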
>>
>> Any tool we use will need to handle ~50 droplets/instances initially
>> and scale to ~500-1000.
>>
>>
>> The tool should:
>> - be able to fire up group(s) of servers based on IP address (type of
>> droplet - fetch/parse/etc.)
>> - be able to send a command to a single term or to multiple terms
>> (rough sketch below)
>> - be able to display a single term or a group of terms, selected by
>> IP address
>> - be able to group the terms by IP address -- one or more terms per
>> group
>> - be able to switch between the groups, which in turn displays the
>> terms for that group
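>>
>> As a rough sketch of the grouping/send-command items above (the file
>> name, key name, and IPs are all made up), the crawler could write
>> one hosts file per group:
>>
>>   # fetchers.txt - one IP per line, written by the crawler
>>   10.0.0.11
>>   10.0.0.12
>>
>> and then a plain shell loop pushes one command to the whole group:
>>
>>   while read ip; do
>>     ssh -i ~/.ssh/crawler_key root@"$ip" 'uptime'
>>   done < fetchers.txt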
>>
>> The use case:
>> - the crawler spins up a number of droplets/instances
>> - the crawler collects the resulting IP addresses and "groups" them
>> based on fetch/parse use
>>
>> - All the "instances" are generated/cloned with the required apps
>> already on the server for fetching or parsing - there's no need to
>> upload/scp files to the remote instances.
>>
>> Tool Requirements:
>> - Nice if the termManagerApp is able to use external config files to
>> handle the IP addresses and create groups as required
>> - Nice if the termManagerApp is able to display the terms in a given
>> group
>> - The app has to handle an external pub/priv key; all terms (cloned)
>> have the same key (example below)
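>>
>> For the shared key, a ~/.ssh/config block ought to cover it (the key
>> path and address range are made up):
>>
>>   Host 10.0.*
>>       User root
>>       IdentityFile ~/.ssh/crawler_key
>>
>> so anything that shells out to plain ssh picks the key up for free.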
>>
>> The termManagerApp needs to be able to display the terms from the
>> selected group.
>>
>> The termManagerApp needs to be able to send the same command to all
>> terms in the displayed group.
>> The termManagerApp needs to be able to select a given term and send
>> commands to that term only.
>> The terms being displayed should show "realtime" window updates.
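>>
>> One option I've been reading about: tmux can do the send-to-all part
>> with synchronized panes. A rough, untested sketch (the session name
>> and hosts file are made up):
>>
>>   tmux new-session -d -s crawl     # pane 0 stays a local shell
>>   while read ip; do
>>     tmux split-window -t crawl "ssh root@$ip"
>>     tmux select-layout -t crawl tiled
>>   done < fetchers.txt
>>   tmux set-window-option -t crawl synchronize-panes on
>>   tmux attach -t crawl
>>
>> With synchronize-panes on, keystrokes go to every pane at once;
>> turning it off lets you drive one pane by itself, and the panes
>> update in realtime.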
>>
>> It would be nice if the termManagerApp could display 20-50 terms
>> simultaneously.
>>
>> Basically, the tool/app will let the project manage the multiple
>> instances/VMs being run for the crawl.
>>
>> The project needs to support:
>>  - Running commands on remote servers
>>  - Checking on the progress of the remote processes via screen (see
>> the sketch below)
>>  - Starting up/shutting down remote servers as needed
>>  - TBD..
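>>
>> For the screen check, since every instance runs its work under
>> screen, a loop over a group's hosts file lists the sessions (the
>> file and session names are made up):
>>
>>   while read ip; do
>>     echo "== $ip =="
>>     ssh root@"$ip" screen -ls
>>   done < fetchers.txt
>>
>> and "ssh -t root@10.0.0.11 screen -r crawl" reattaches to a named
>> session interactively.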
>>
>>
>> Thanks for any/all comments/thoughts!!
>
> It increasingly appears that you're trying to roll your own orchestration
> software.  Have you looked at existing technologies like ansible, salt,
> chef, or puppet?  I'm reasonably confident that if you drop your assumptions
> about implementation and learn one, you can meet your deployment goals
> without reinventing the wheel.  If you do insist on doing it all from
> scratch, relying on the list to walk you through it may not be the best
> approach.
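>
> For example, with ansible the grouping you describe is just an
> inventory file (group names, addresses, and key path made up):
>
>   [fetchers]
>   10.0.0.11
>   10.0.0.12
>
>   [parsers]
>   10.0.1.21
>
> and a single ad-hoc command fans out to a whole group:
>
>   ansible fetchers -i inventory.ini -u root \
>       --private-key ~/.ssh/crawler_key -m command -a "uptime"
>
> You don't get a terminal per host, but it covers run-on-many and
> scales well past your 500-1000 target.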
>
> -- Pete
>
_______________________________________________
users mailing list -- users@lists.fedoraproject.org
To unsubscribe send an email to users-le...@lists.fedoraproject.org
