Agreed. A colleague and I have a disagreement about how to specify a container. Because 
he works almost exclusively in Python, he uses Python-based specifications. That tracks 
with the data scientist in charge of the locked-down portal who has one container for R 
work and another for Python, both wired to reach out to host OS volumes even for 
"local" data. Because my stack is a gooey mess of tools including things like 
sed, awk, and node, as well as using both Python and R at the same time, I tend to build 
my containers as if they're full-blown VMs. I go inside, intra-container. Their 
containers are fine-grained and the gooey bits are inter-container. I suppose if/when 
there's a chance to parallelize the workflows, their way is better. But when it's mostly 
serial, my way maintains repeatability better. Plus it allows me to organically arrive at 
automation through upstream manual repetition, rather than somewhat idealistic planning 
with error-correcting iterations.
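
For concreteness, here's roughly what a Python-based specification might look like, as a 
minimal sketch using the Docker SDK for Python (docker-py); the image tag, volume paths, 
and script name are made-up placeholders:

    # Fine-grained, Python-specified style: one single-purpose container
    # per tool, with "local" data bind-mounted from host OS volumes.
    # (Sketch only -- image tag, paths, and script name are hypothetical.)
    import docker

    client = docker.from_env()
    output = client.containers.run(
        image="rocker/r-base:latest",          # single-purpose R container
        command=["Rscript", "/work/clean.R"],  # one pipeline step, then gone
        volumes={"/home/me/data": {"bind": "/work", "mode": "rw"}},
        remove=True,                           # throwaway; nothing persists inside
    )
    print(output.decode())

My way is the opposite: build one fat image, docker run -it, shell in, and poke around 
like it's a VM until the manual repetition tells me what to automate.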

On 6/9/22 09:04, Marcus Daniels wrote:
A reason to run locally is to know what you have and to be able to control 
everything. My sense is that people waste a lot of time maintaining containers.

On Jun 9, 2022, at 6:50 AM, glen <geprope...@gmail.com> wrote:

The End of Localhost
https://dx.tips/the-end-of-localhost#heading-the-potential-of-edge-compute

On the heels of the "Get off my lawn!" AOL thread, that localhost article 
reminded me of Firefox's new tool:

https://addons.mozilla.org/en-US/firefox/addon/firefox-translations/

I don't yet understand how it works. But assuming it works as claimed, I like the idea that the translator robot runs on 
localhost. But it also invokes two problems I currently have: 1) coworkers who won't share their premature/broken 
works in progress and 2) the opacity of computation that happens elsewhere. If you read the Hacker News thread 
<https://news.ycombinator.com/item?id=31669762>, you see lots of yapping about "developers" who do 
front-end stuff without understanding back-end stuff, yadda yadda. And that's fine; gatekeepers are everywhere. But 
there are serious "openness" issues with relying on compute elsewhere. And it's not merely supply-chain 
problems like not knowing what version they're running back there. One data portal my clients want/expect me to use prevents 
any traffic in or out, for data privacy reasons. But many of the workflows we use to knead data call out to online 
APIs, in my case so that you "don't have to worry about" what version of whatever lies on the other side. 
So, obviously, I have to convert all the outreach to localhost, either by simulating the servers or by installing large 
blocks of software into the container and refactoring the network calls into local calls. That bloats my container, of course, 
slowing the development process. Well-simulated data becomes important so I can tighten the dev loop on localhost 
before sending the bloated container to the portal to test on real data.
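
To make that refactor concrete, here's a minimal sketch of the conversion I mean, with 
both paths behind the same interface (the endpoint, env var, and function names are all 
hypothetical):

    # Sketch: swap an online API call for a local call without touching
    # the rest of the workflow. Endpoint, env var, and names are made up.
    import json
    import os
    import urllib.request

    def translate_remote(text):
        """Call the online API -- forbidden inside the locked-down portal."""
        req = urllib.request.Request(
            "https://api.example.com/translate",
            data=json.dumps({"q": text}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["translation"]

    def translate_local(text):
        """Stand-in that runs against whatever big block got baked into the container."""
        ...  # e.g., load /opt/models/translator and run it here

    # One switch flipped at build time; downstream code never knows
    # whether it's on localhost or calling out.
    translate = translate_local if os.getenv("INSIDE_PORTAL") else translate_remote

The point of the single switch is that the bloat stays in the image while the workflow 
code stays identical in both worlds.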

I'm no longer sure where I'm going with this. Sorry. Were I intelligent, I'd 
delete my commentary and just send along the links. Maybe SteveS has finally 
infected me. 8^D


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .