; anything, so can we postpone this discussion to systemd conference?
>>
>> Lukas
Now that the systemd conference has been a success, I wanted to ask
whether you've had a chance to look into it?
Cheers,
Tobias Florek
Thank you for your heroic effort to make docker containers better
citizens! It is very much appreciated.
Is there some work underway (or planned) to run systemd with a non-zero
pid? That additional isolation would benefit e.g. OpenShift
tremendously.
Cheers,
Tobias Florek
Gluster.
Thank you for your support,
Tobias Florek
practices regarding it?
Thank you in advance,
Tobias Florek
Hi,
> atomic install helloapache
> -->
I would prefer it if the id started with the name of the atomic app
being installed (similar to how pods are named in kubernetes/openshift).
That way auto-completion makes the app instance easily discoverable by
the admin.
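To make the suggestion concrete, a sketch of the naming scheme I have in
mind (hypothetical, not current atomic behaviour): the generated
instance id keeps the app name as a prefix with a random suffix, so tab
completion groups instances by app.

```shell
# Hypothetical id scheme: app name first, random suffix second, so
# shell completion on the id immediately narrows down to one app.
name=helloapache
suffix=$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
instance_id="${name}-${suffix}"
echo "$instance_id"
```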
Cheers,
Tobias Florek
.
>
How can an spc work seamlessly in that case? Wouldn't it at least need
a `/host` in front of every path you might want to manage in ansible?
Is there some deep python hacking still feasible so we don't have to do
that?
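A minimal sketch of what that prefixing would look like; the helper name
is my own, the only assumption from the thread is that the spc sees the
host filesystem under `/host`.

```shell
# Inside an spc the host filesystem appears under /host, so every path
# ansible should manage on the host needs rewriting. Hypothetical helper:
host_path() {
    # prepend /host to an absolute host path
    printf '/host%s' "$1"
}
host_path /etc/resolv.conf    # -> /host/etc/resolv.conf
```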
Cheers,
Tobias Florek
python-libs-2.7.10-8.fc22.x86_64
python-lxml-3.3.6-1.fc22.x86_64
PyYAML-3.11-9.fc22.x86_64
rpm-python-4.12.0.1-12.fc22.x86_64
yum-3.4.3-508.fc22.noarch
Cheers,
Tobias Florek
ansible core module imports of category packaging commands inventory system:
apt apt.debfile apt_pkg aptsources.distro
Could a containerized python 2 work? I don't see how it could, so is
the only option to roll your own atomic host?
Cheers,
Tobias Florek
tob
/var/home/tob unconfined_u:object_r:user_home_dir_t:s0
BTW: Of course restorecon did work correctly, but I forgot to use `-v`.
Sorry about that.
The atomic host is now running flawlessly in enforcing mode. The
resolv.conf problem is gone too.
Thank you for your help.
Tobias Florek
hat's
reasonable.
Cheers,
Tobias Florek
for showing me the way to fix it.
Cheers,
Tobias Florek
t) how to force it to
redownload the current version.
I hope it figures itself out whenever the new fedora atomic release
comes out.
Cheers,
Tobias Florek
have
system_u:object_r:sshd_exec_t:s0 as expected.
> Should be running as sshd_t not kernel_t? Are you doing this into the
> systemd-nspawn container, or
> is the sshd_t native on atomic?
Native on atomic.
Should I let that machine stay around for debugging further, or
should I just install a new machine?
Thanks in advance,
Tobias Florek
comm="sshd" scontext=system_u:system_r:kernel_t:s0
> > tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
> > permissive=0
> Looks like sshd is running as kernel_t, which indicates to me the system
> needs to be relabeled.
>
> touch /.autorelabel; reboot
>
> Should fix the labels.
Unfortunately it's an atomic host, so I can't touch that file. I assume
even the autorelabeler won't be able to change the labels. A
restorecon -nR /
does not report anything wrong.
Cheers,
Tobias Florek
process permissive=0
[many more of the last]
type=AVC msg=audit(1442214925.923:172): avc: denied { sigchld } for
pid=1 comm="systemd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:kernel_t:s0 tclass=process permissive=1
Thanks in advance,
Tobias Florek
argue for having the storage
> drivers directly on the host, which the rpm-ostree package layering model
> solves.
Systemd won't know when kubernetes mounts a volume by itself, regardless
of whether a container performs the mount, or am I missing something?
Cheers,
Tobias Florek
-
(8), which will need to find
a mount.glusterfs helper script (on the host) that will call
mount.glusterfs in the container. The easiest way is to have a writeable
/sbin/fs.d that containers can install into.
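To make the idea concrete, a sketch of such a host-side helper. The
container name gluster-client is an assumption of mine, not something
from this thread, and the delegation via `docker exec` is one possible
mechanism.

```shell
# Hypothetical /sbin/fs.d-style helper installed on the host. It only
# delegates: the real glusterfs client lives inside the container.
mount_glusterfs_via_container() {
    # mount(8) passes device, mountpoint and options through unchanged
    docker exec gluster-client /sbin/mount.glusterfs "$@"
}
# e.g.: mount_glusterfs_via_container server:/vol /mnt/gluster
```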
Cheers,
Tobias Florek
onnect me to other people doing that, they might have
other/better ideas than I have. When I was looking for it a few weeks
ago I did not find any mention of work being done.
my repository is at
https://github.com/ibotty/atomic-gluster-server/
Cheers,
Tobias Florek
Hi,
is there a way to specify dependencies for (spc) containers that provide
some service? Is nulecule the way to do it? If so, is there some
"atomic" provider that installs all containers that a nulecule specifies?
Cheers,
Tobias Florek
/.
That way, a container does not have to install an overlayfs on top of
/sbin (or similar) to install the helper scripts.
Thank you for your consideration,
Tobias Florek
C "/var/lib/machines/${NAME}"
> /usr/bin/docker rm "$DOCKER_CONTAINER_ID"
which works but is inefficient and depends on (the atomic tool) redoing
these steps whenever the container is updated.
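Spelled out, the quoted workflow looks roughly like this; the image and
machine names are placeholders, not values from the thread.

```shell
# Flatten a docker image into a tree systemd-nspawn can boot.
# Inefficient, as noted above: it must be redone after every image update.
export_to_machines() {
    image=$1 name=$2
    cid=$(docker create "$image")
    mkdir -p "/var/lib/machines/$name"
    # `docker export` streams the container's merged filesystem as a tar
    docker export "$cid" | tar -x -C "/var/lib/machines/$name"
    docker rm "$cid"
}
# e.g.: export_to_machines myimage mymachine
```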
Thanks,
Tobias Florek
blems
It might be nice to have a tracking bug (or label; I don't know how to
do that with github) that depends on all of these bugs, so one can
follow their progress.
Anyway, thank you for your answer,
Tobias Florek
volumeMounts:
- mountPath: /config
  name: config
  readOnly: true
volumes:
- name: config
  secret:
    secretName: test
and the following secret
apiVersion: v1
kind: Secret
metadata:
  name: test
type: Opaque
data:
  key: dmFsdWUtMg0K
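As a side note, the data field above can be decoded directly; it ends in
a CRLF, which is easy to miss and can trip up whatever consumes the
mounted secret.

```shell
# Decode the secret's value; od -c makes the trailing \r \n visible.
printf 'dmFsdWUtMg0K' | base64 -d | od -c
```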
Cheers,
Tobias Florek
Hi,
> Check /etc/passwd for the rpc user. I suspect you're trying to run
the services directly on the Atomic host?
Yes, it's failing to run on the atomic host. And yes, of course I
already checked whether rpc is there; it is in getent passwd.
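A small diagnostic sketch (my own, not from the thread) to tell apart a
user that only exists in a remote NSS source from one in the local
/etc/passwd, since a service can see the former via getent yet still
fail to resolve the uid itself.

```shell
# Report where an account resolves from; a user visible to getent but
# missing from /etc/passwd comes from another NSS source (e.g. sssd).
check_user() {
    name=$1
    getent passwd "$name" >/dev/null || { echo "$name: not in NSS"; return 1; }
    if grep -q "^$name:" /etc/passwd; then
        echo "$name: local (/etc/passwd)"
    else
        echo "$name: non-local NSS source only"
    fi
}
# e.g.: check_user rpc
```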
Cheers,
Tobias Florek
Hi,
that has happened to a few folks on #atomic. My version is
22.56 a73c889cea fedora-atomic:fedora-atomic/f22/x86_64/docker-host
The exact error message is
cannot get uid of 'rpc': Success
Cheers,
Tobias Florek
ces vs firewalld, libreswan vs strongswan vs racoon,
flannel vs weave vs calico.
Thanks for your consideration,
Tobias Florek
n some fundamental software (also flanneld, as done
by core os) in a container, so it can be installed on demand?
If so, we will need something like early docker before runC is ready.
Cheers,
Tobias Florek
container.
What do you think?
Cheers,
Tobias Florek
twork (all containers use host-net) would enable that.
What do you think?
Cheers,
Tobias Florek