Hi Rainer,

On 13/12/21 11:50 AM, Rainer Duffner wrote:


On 10.12.2021 at 13:01, Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

On 10/12/21 1:24 PM, o1bigtenor wrote:


On Fri, Dec 10, 2021 at 3:24 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

    Hi
    we are running some 140 remote servers (in the 7 seas via satellite 
connections), and in each one of them we run:
    - jboss
    - postgresql
    - uucp (not as a daemon)
    - gpsd
    - samba
    - and possibly some other services

    Hardware and software upgrades are very hard, since trained personnel have no physical access to those servers, and there is also a diversity of software versions.

    The idea for future upgrades is to containerize certain aspects of the software. The questions are (I am not skilled in docker, only minimal contact with lxd):
    - is this a valid use case for containerization?
    - are there any gotchas around postgresql, or the reliability of the system?
    - since we are talking about 4+ basic services (pgsql, jboss, uucp, samba), is docker a good fit or should we be looking into lxd as well?
    - are there any success stories of others who have followed a similar path?


Thanks
My experience with LXD is that, once installed, you are on a regular update plan that is impossible to change.
Ehhmmm, we are already running some old versions there (jboss, pgsql); LXD would not differ in this regard.
What do you mean? That the updates for LXD are huge? Closely spaced / very frequent?
Can you please elaborate some more on that?


IIRC, you can’t really control which updates are installed for LXD (and snap).
You can’t create a local mirror.

IIRC, you can delay snap updates, but you can’t really reject them.

Maybe you can these days, with Landscape server?
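
From what I remember, the most you get is knobs for postponing the auto-refresh, roughly like this (just a sketch; the exact options depend on the snapd version and I have not verified them against yours):

    # confine auto-refresh to a narrow window (e.g. last Friday of the month, at night)
    sudo snap set system refresh.timer="fri5,23:00-01:00"
    # or push refreshes out to a given date (snapd limits how far ahead you can hold)
    sudo snap set system refresh.hold="2022-03-01T00:00:00Z"
    # see when snapd plans to refresh next
    snap refresh --time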

(insert the usual rant about Enterprise != Ubuntu here)

I don’t know about LXD, but as it only runs on Ubuntu and is apparently developed by a single guy (who might or might not work for Canonical - sorry, too lazy to check), I wouldn’t hold my breath as to its long-term viability.

Ubuntu will probably morph into a container-only, cloud-only OS sooner rather than later - the writing is on the wall (IMHO).
All notes taken, thank you.


This means that your very expensive data connection will be preempted for updates at the whim of the Canonical crew. I suggest not using it (most people using it over wireless connections seem to have found the resulting issues less than wonderful: cost (on the data connection) being #1, and the inability to achieve solid reliability a close #2).
The crew has its own paid service. The business connection is for business, not for crew.


The word „crew“ was meant to say „employees of Canonical“ - I’m sure the 
allegory was not meant to mess with you...


What I am interested in is: could docker be of any use in the above scenario? Containerization in general?
The guys (admins/mgmt) here seem to be dead set on docker, but I have to guarantee some basic data safety requirements.
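
My rough understanding so far (only a sketch, with made-up names, assuming the stock postgres image from Docker Hub) is that at minimum the data would have to live on a volume outside the container, so that replacing or upgrading the container does not touch the cluster:

    # create a named volume and mount it at the image's data directory
    docker volume create pgdata
    docker run -d --name pg \
        -e POSTGRES_PASSWORD=... \
        -v pgdata:/var/lib/postgresql/data \
        postgres:12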



I know very little about docker, but IMO, for ultimate stability, you could 
switch to RHEL and use their certified images:

https://catalog.redhat.com/software/containers/search?q=PostgreSQL%2012&p=1


My coworker says he re-packages all his Docker images himself (with RPMs from his own mirror), so that he understands what’s really in them.
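
Something along these lines, I believe (just a sketch; the base image, repo file and package names are placeholders for whatever his mirror actually carries):

    # Dockerfile: rebuild the image from a known base, installing only RPMs from the in-house mirror
    FROM registry.access.redhat.com/ubi8/ubi
    COPY internal-mirror.repo /etc/yum.repos.d/internal-mirror.repo
    RUN rm -f /etc/yum.repos.d/ubi.repo && \
        dnf -y install postgresql-server && \
        dnf clean all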


The big problem that I see with your use-case and docker is that docker implies 
frequent, small updates to the whole stack - including docker itself (unless 
you pay for the LTS version).

This is not what you do right now, I reckon?


Our setup has been open source since forever, so licenses for something that used to be free for ages would be hard to introduce.
So Docker is NOT free? Please share your thoughts; I am a complete noob.
Those servers I am talking about have no internet connectivity. And the 
satellite connection costs are high.
(although I think we pay a fixed amount for a certain total data transfer size).

The question is: do you want to get there?
Maybe your developers want to get there, because they don’t want to learn about software packaging (anymore) - but is that what the business wants?

Those servers live for years; the objective is to facilitate upgrades.

https://www.docker.com/blog/how-carnival-creates-customized-guest-experiences-with-docker/
Thanks for the link; I didn't quite understand what they do with docker (video included).
120 docker containers in two data centers on the ship? Ours will be just a single Linux box with limited connectivity (in some seas, no connectivity) to the internet/shore.

(That was pre-pandemic…)

I would make an educated guess that you'd need to have the whole docker infrastructure on each ship (build server, repository, etc.) to minimize sat-com traffic.

Hmm, I don't know about that. The hardware is given (existing) and limited.
You are the second person to warn that comms can be an issue with docker/containers.

Can't someone have a free docker setup inside a Linux server and run the containers (also free) until he/she decides to upgrade either docker or (more frequently) one of the images?
Is Docker backwards compatible, meaning can new Docker versions run old images?
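
From what I have read so far, images can apparently be moved around as plain tar files, so in theory they could ride over the same channels we already use (uucp) instead of being pulled from a registry over the satellite link. Just a sketch, file and image names made up:

    # ashore: pull or build the image once, then export it to a tarball
    docker save -o pg12.tar postgres:12
    # transfer pg12.tar over the existing file-transfer channel, then on the ship:
    docker load -i pg12.tar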


I mean, it looks like it could be done. But this is where the „dev“ part in the „devops“ world has to take a step back and the „ops“ guys need to come forward.

Can you please explain in more detail?




Rainer


--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt
