How can you write a Docker daemon that can kick off other Docker containers as needed?
I've seen several options for how a Docker container can communicate directly with the host system, and they all seem kind of sneaky. For instance, it appears one can start a container and bind (using -v) the host's docker executable into the container. One can also send messages to the host over a networking protocol. It appears the --privileged flag might work as well.
Each of these methods appears to have drawbacks and security concerns, and the bigger question is whether this architecture is the best approach at all.
Our goal is to have a daemon process running (in a Docker container), polling a database that is being used as a queue. (I know this is frowned upon in some ways, but our traffic is low and internal, and the performance of this sort of queue is not an issue.) When the daemon detects there is work to be done, it kicks off a Docker container to handle the work. The container dies when it's finished. Each container belongs to a "system" and runs a load on that system, and each system can have only one container running a load on it at a time.
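For concreteness, here is a minimal sketch of the kind of daemon I have in mind, assuming the docker Python SDK and a SQLite "jobs" table with (id, system, status) columns; the table schema and the "worker-image" name are placeholders for illustration:

import sqlite3
import time

import docker

client = docker.from_env()
running = {}  # system name -> Container; enforces one container per system


def poll_once(conn):
    # Reap finished containers so their systems become free again.
    for system, container in list(running.items()):
        container.reload()
        if container.status == "exited":
            container.remove()
            del running[system]

    # Claim one pending job whose system is not already busy.
    busy = tuple(running) or ("",)
    placeholders = ",".join("?" * len(busy))
    row = conn.execute(
        "SELECT id, system FROM jobs WHERE status = 'pending' "
        f"AND system NOT IN ({placeholders}) LIMIT 1",
        busy,
    ).fetchone()
    if row is None:
        return
    job_id, system = row
    conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (job_id,))
    conn.commit()

    # Kick off a short-lived worker container; it exits when the work is done.
    running[system] = client.containers.run(
        "worker-image",  # hypothetical image that handles a single job
        command=["--job-id", str(job_id)],
        detach=True,
    )


if __name__ == "__main__":
    conn = sqlite3.connect("queue.db")
    while True:
        poll_once(conn)
        time.sleep(5)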
Is this a paradigm that makes sense? Would the daemon be better off as a host-level process (a Python script, for instance) instead of a Docker container? Is Docker meant to be used this way? Am I missing the place in the Docker documentation that tells me how to do this? Are the "sneaky" ideas above not so sneaky after all?
I understand there is room for opinion here. I am looking for concise best practices.
Thanks in advance!
The preferred solution I've seen is to install the docker binaries inside the container and mount /var/run/docker.sock into the container. The Dockerfile would look similar to:
FROM upstream:latest
ARG DOCKER_GID=999
USER root
# Install docker
RUN curl -sSL https://get.docker.com/ | sh
# App setup goes here
# Configure user access to docker
RUN groupmod -g ${DOCKER_GID} docker && \
    usermod -aG docker appuser
USER appuser
And it's run with:
docker run -d --name myapp -v /var/run/docker.sock:/var/run/docker.sock myapp
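Note that the DOCKER_GID build arg in the Dockerfile above needs to match the GID that owns /var/run/docker.sock on the host (typically the host's docker group, which you can check with getent group docker); otherwise appuser won't have permission to use the mounted socket. Pass it at build time with --build-arg DOCKER_GID=<host gid>.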
This is the most efficient solution since it removes any network bandwidth overhead. It also removes a class of network vulnerabilities: there is no open port to expose, and no TLS cert inside the container that could accidentally leak from a lost backup.
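With the socket mounted this way, code inside the container talks to the host's Docker daemon directly over the Unix socket. A quick sanity check in Python (a sketch, assuming the docker SDK is installed in the image):

# Connect to the bind-mounted host socket; no TCP port, no TLS material needed.
import docker

client = docker.DockerClient(base_url="unix:///var/run/docker.sock")
print(client.containers.list())  # these are the *host's* containers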