HOW CONTAINERS SUPPORT SERVICE REGISTRATION + DISCOVERY / DOCKER

SIX MSA PATTERNS.
The most common microservice patterns are Message-Oriented, Event-Driven, Isolated State, Replicating State, Fine-Grained (SOA), and Layered APIs.
Web Services and Business Processes were long complicated by the issue of State. By their nature, Business Processes are Stateful: something changes after each step is performed, and the moment of each event is measured as the clock ticks. In the early days, Web Services were always Stateless. The gap was slowly closed with non-proprietary standards such as Business Process Model and Notation (BPMN) and the Business Process Execution Language (BPEL). Yet some Web Services execute or expose computing functions while others execute business processes; in some cases a clock matters, and in other cases it does not.
Kristopher Sandoval, writing for Nordic APIs, sees a reason for people to be confused, noting that “stateless services have managed to mirror a lot of the behavior of stateful services without technically crossing the line.” He explains, “When the state is stored by the server, it generates a session. This is stateful computing. When the state is stored by the client, it generates some kind of data that is to be used for various systems — while technically ‘stateful’ in that it references a state, the state is stored by the client so we refer to it as stateless.”
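The distinction is easier to see in code. Below is a minimal sketch in Python (hypothetical handlers, not tied to any particular framework or to Sandoval's examples): the stateful variant keeps a session on the server, while the stateless variant hands the client a signed token that carries the state and must be presented with every request.

import base64
import hashlib
import hmac
import json
import uuid

SECRET = b"server-signing-key"   # hypothetical key used to sign client-held state
SESSIONS = {}                    # server-side session store (the stateful approach)

def stateful_login(username):
    # Stateful: the server remembers the session; the client holds only an opaque ID.
    session_id = str(uuid.uuid4())
    SESSIONS[session_id] = {"user": username}
    return session_id

def stateful_request(session_id):
    # The response depends on state the server kept between requests.
    return SESSIONS[session_id]

def stateless_login(username):
    # "Stateless": the state travels with the client as a signed token.
    payload = base64.urlsafe_b64encode(json.dumps({"user": username}).encode())
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + signature

def stateless_request(token):
    # The server verifies and reads the state from the token; it stores nothing itself.
    payload, signature = token.split(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("invalid token")
    return json.loads(base64.urlsafe_b64decode(payload))
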
STATEFUL PATTERNS AND EVI.
Traditional system design favors consistent data queries and mutating state, which is not how distributed architectures work. To avoid unexpected results or data corruption, state must either be explicitly declared or each component must be autonomous. Event-driven patterns provide standards that avoid the side effects of explicitly declaring state. Message-oriented systems simply pass messages over a queue, while event-based systems also set and enforce standards ensuring that every message carried over the queue has a timestamp. The receiving service can then replay the events in order and reconstruct a materialized view of the state. This makes the event-based pattern ideal for EVI.
Any pattern that records timestamps is suitable. Therefore, an index of microservices should also attempt to classify whether a Service carries a timestamp, using whether it is stateful as a key predictor. To start, “service discovery” is required. Enterprise Value Integration (EVI) can then leverage the inventory of services to track the inputs, transformations, and outputs associated with each customer and each process they consume. This is vital for improving profitability while delivering more consistently on brand promises.
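As a concrete illustration of the replay idea above, here is a minimal sketch in Python (the event fields and customer identifiers are hypothetical): every message carries a timestamp, and the receiving service rebuilds a materialized view of current state by applying the events in time order, even if they arrived out of order over the queue.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    timestamp: datetime   # stamped when the event is produced
    entity_id: str        # e.g. a customer or process identifier
    kind: str             # e.g. "deposit" or "withdrawal"
    amount: float

def materialize(events):
    """Rebuild a view of current balances by replaying events in time order."""
    view = {}
    for e in sorted(events, key=lambda ev: ev.timestamp):
        delta = e.amount if e.kind == "deposit" else -e.amount
        view[e.entity_id] = view.get(e.entity_id, 0.0) + delta
    return view

# The events below arrive out of order; the timestamps restore the true sequence.
log = [
    Event(datetime(2024, 3, 2, tzinfo=timezone.utc), "cust-42", "withdrawal", 25.0),
    Event(datetime(2024, 3, 1, tzinfo=timezone.utc), "cust-42", "deposit", 100.0),
]
print(materialize(log))   # {'cust-42': 75.0}
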
EVI is the only brand consulting firm in the world to recognize the importance of containers and serverless computing – and consistently re-engineer organizations to deliver more value. 
Docker is one of the leading companies and platforms for developers and sysadmins to develop, ship, and run applications in containers. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code.

DOCKER

Docker offers a free open source version and paid enterprise versions, and it is considered a simpler and more flexible container platform than Kubernetes. Kubernetes, although also open source, comes from Google and is considered more intricate; some say overly complicated. The simplicity of Docker is illustrated by its approach to service discovery: as long as the container name is used as the hostname, a container can always discover other containers on the same stack. Docker Cloud uses directional links recorded in environment variables to provide basic service discovery.
The complexity of Kubernetes is illustrated by its approach to service registration and discovery. Kubernetes is predicated on the idea that a Service is a REST object, which means a Service definition can be POSTed to the apiserver to create a new instance. Kubernetes offers the Endpoints API, which is updated whenever a Service changes. Google explains, “for non-native applications, Kubernetes offers a virtual-IP-based bridge to Services which redirects to the backend Pods.” A group of containers that are deployed together on the same host is a Kubernetes Pod. Kubernetes supports Domain Name System (DNS) discovery as well as environment variables, and Google strongly recommends the DNS approach.
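Both discovery styles can be sketched briefly in Python. The Docker half assumes a hypothetical peer container named "redis" reachable on the same user-defined network; the Kubernetes half assumes the official kubernetes Python client and a hypothetical "orders" application, creating a Service (which the client effectively POSTs to the apiserver) and then resolving it through its cluster DNS name.

import socket

# Docker: on a shared network, the container name doubles as the hostname.
redis_ip = socket.gethostbyname("redis")   # "redis" is a hypothetical peer container
print("redis resolves to", redis_ip)

# Kubernetes: a Service is a REST object, so creating one amounts to a POST to the
# apiserver; the official Python client wraps that call.
from kubernetes import client, config

config.load_kube_config()                  # or load_incluster_config() inside a Pod
v1 = client.CoreV1Api()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="orders"),
    spec=client.V1ServiceSpec(
        selector={"app": "orders"},        # Pods labeled app=orders back this Service
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)

# DNS-based discovery (the approach Google recommends): any Pod in the cluster can
# now reach the Service by its generated DNS name.
print(socket.getaddrinfo("orders.default.svc.cluster.local", 80))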

CONTAINERS (by Docker)

- David Cardozo

Dive into a video transcription and chat project that leverages the GenAI Stack, along with seamless integration provided by Docker, to streamline video content processing and understanding.

- Savannah Ostrowski

Learn what containerd is, how Docker and containerd work together, and how their combined strengths can improve developer experience.

- Laurent Goderre

Find out what makes an image distroless, tools that make the creation of distroless images practical, and security benefits of this approach.

- Scott Johnston

Eleven years ago, Solomon Hykes walked onto the stage at PyCon 2013 and revealed Docker to the world for the first time. The world of application software development changed forever.

DOCKER (by Reddit)

- /u/Pleasant-Guidance110

Hello, I am honestly confused right here.

I had Docker and Portainer running on a little N100 Mini PC and everything was going fine. I then put Proxmox onto it, and I am now running Docker and Portainer in an Ubuntu VM within Proxmox. I've got most of my containers running again, but for some reason some images are just not wanting to run on the system now. Images like Nginx and PaperlessNGX are refusing to even run, giving me error codes along the lines of "exec /docker-entrypoint.sh: exec format error". The CPU is set to host within Proxmox. Does anyone have an idea why it is behaving this way?

- /u/zorgul66

I'm trying to set the container VM on Mac to an external drive. The application lets me set it when I hit apply. However, if I quit the config and re-enter, it goes back to the internal drive (the original location).

Does anyone have any idea? I'd like to move the container VM to an external drive to have more space.

Docker Desktop for Mac: 4.28

MacOS: Sonoma 14.4.1

thanks

- /u/michaelclaw

I forgot to set up persistent storage on my Portainer stack, and upon resetting my swarm leader, all my stacks show limited control. I was planning on backing up the compose files I modified from my setup but forgot to. I have set up storage now, but I'm unable to view the compose files. I'm very new to Docker Swarm and would appreciate the help.

Is there any way to view the compose files for these stacks?

Also, I noticed some stacks show up under Services in Portainer and some are on the Containers tab; what is the difference?

- /u/Greenhousesanta

I have tried almost everything I can find online including chmod and changing the file route but nothing seems to be working. Here is the error I get:

assets directory not writable. check assets directory permissions & docker user

And here is my .yaml

version: "2"
services:
  homer:
    image: b4bz/homer
    container_name: homer
    volumes:
      - /home/user/docker/homer:/www/assets
    ports:
      - "8000:8000"
    user: "411:411"
    environment:
      - INIT_ASSETS=1
    restart: unless-stopped   # note: hyphenated; "unless_stopped" is not a valid restart policy

- /u/Flat_Needleworker157

I have an issue with my Django app running in a container through Docker Compose. I have a page, and when I make a GET/POST request to it, the view should print out a line (run code in general to the terminal), but it does not do that. The terminal shows the request was made/received, but the view code will not run until the site is reloaded/refreshed. This can be done by making a text change to any file, then saving the changes. Once the site reloads the new files, the requests that originally went through are then processed through views.py, which then prints the output from the view as it should have with each request. I'll try to show what I am talking about in the screenshot. I do not have this issue if I run the server locally. Anyone have any ideas as to what is going on?

def passwordreset(request):
    if request.method == "GET":
        print("GET FROM VIEW")
        return render(request, 'passwordreset.html')
    if request.method == "POST":
        print("POST FROM VIEW")
        username = request.POST.get('useraccount')
        print(username)
        return render(request, 'passwordreset.html')

#Terminal Output

localhost | Watching for file changes with StatReloader
localhost | [28/Mar/2024 15:49:41] "GET /statsadmin/passwordreset/ HTTP/1.1" 200 4814
localhost | [28/Mar/2024 15:49:44] "GET /statsadmin/passwordreset/ HTTP/1.1" 200 4814
localhost | /app/statsadmin/views.py changed, reloading.
localhost | Performing system checks...
localhost |
localhost | System check identified no issues (0 silenced).
localhost | March 28, 2024 - 15:49:34
localhost | Django version 3.2.25, using settings 'basesite.settings'
localhost | Starting development server at http://0.0.0.0:8000/
localhost | Quit the server with CONTROL-C.
localhost | GET FROM VIEW
localhost | GET FROM VIEW
localhost | Watching for file changes with StatReloader

- /u/Turnspit

I'm running Docker on a couple of Ubuntu VMs, and it has come to my attention that by default Docker completely bypasses anything configured via UFW because of its own iptables rules.

With some (and only some) services running behind a reverse proxy (like Bookstack and Guacamole) and UFW enabled, I am unable to reach them anymore, whilst others (like Nextcloud or TS3) are accessible without problems despite UFW being enabled.

How can this be?

- /u/Tiny-Entertainer-346

We need to deploy Docker containers on edge devices which won't have Internet access. These devices occasionally connect to a network, and one of the devices on that network (let's call it H) will have Internet access. So I want to know how we can update Docker containers in such a scenario. I imagine the following two approaches:

1. Create a tar of the image, copy it to the edge device (say over USB), and then update the image on the edge device.
2. Create a local registry on device H, pull the updated image onto H, and have the edge device pull only the updated layers from this local registry on H.

Q1. Machine H has an x86 architecture and the edge device runs an ARM architecture. Running the docker pull command on H won't pull the ARM image from the public Docker registry. How can we pull the ARM image on the x86 machine, so that we can follow either of the two approaches?

Q2. Our Docker image tar is 300 MB and it may grow, so we would need to copy the whole 300 MB image tar from device H to the edge device. I believe we can only create a tar of the whole image. Is it, by any chance, possible to create a tar of only the updated layers, or to reduce the size of the tar by any means? (Our image is minimal, built by copying only binaries into a scratch Docker image.)

Q3. Is there any other better approach? What is usually followed in the industry?

- /u/ambulocetus_

I was asked by my boss to look into Dockerizing some internal services. We have a Django app that I discovered today runs on Microsoft IIS, a technology which unfortunately I know less than nothing about. In fact I have basically no experience developing software that touches Windows-based tools in any capacity.

I've deployed apps and services with Docker using gunicorn but that was pretty simple with linux-based images.

My specific question is whether I need to use an IIS-based Docker image or if I can spin up a simple linux-based Docker image and connect to the IIS server - and if so, how do I even do that? From a couple hours of googling it looks like deploying things on IIS is not straightforward.

There was another post today complaining about Windows-based Docker images, and I had a quick exchange with a fellow who suggested a linux-based image would likely work, but I'd like to get a little more input if possible. Searching google, my use case seems to be nonexistent, and almost all results revolve around "deploying IIS within Docker" rather than the opposite which is basically what I'm wanting to do.

- /u/waubers

Trying to get the IPVLAN L3 mode driver working in my test Docker environment. Striking out pretty hard, and I have no idea why. (Apologies, YAML seems to be messing with the Reddit editor.)

Environment info:

Ubuntu 22.04 inside an ESXi 7 VM
Docker host IP: 192.168.200.12
Internal DNS server IP: 192.168.100.2
Docker v25.0.5
Portainer v2.19.4

network:
  ethernets:
    ens160:
      addresses:
        - 192.168.200.12/24
      nameservers:
        addresses:
          - 192.168.100.2
          - 2.2.2.2
        search:
          - home.myfqdn.com
      routes:
        - to: default
          via: 192.168.200.1
  version: 2

I created an IPVLAN L3 network on the Docker host:

docker network create -d ipvlan \
  --subnet=192.168.201.0/24 \
  -o ipvlan_mode=l3 \
  -o parent=ens160 \
  IPVLAN_L3_201

I have a Ubiquiti UDM-Pro w/a static route for 192.168.201.0/24 pointing at 192.168.200.12:

If I SSH to the docker host and test networking, specifically NSLOOKUP, this is what I get (everything works):

waubers@dockerhost:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=52.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=43.2 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=38.6 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 38.635/44.828/52.666/5.844 ms
waubers@dockerhost:~$ nslookup google.com
Server:         127.0.0.53
Address:        127.0.0.53#53
Non-authoritative answer:
Name:   google.com
Address: 142.250.191.110
Name:   google.com
Address: 2607:f8b0:4009:803::200e
waubers@dockerhost:~$ nslookup google.com 192.168.100.2
Server:         192.168.100.2
Address:        192.168.100.2#53
Non-authoritative answer:
Name:   google.com
Address: 142.250.191.110
Name:   google.com
Address: 2607:f8b0:4009:803::200e

No issues here. I see both DNS queries hitting my DNS server w/o issue.

So, now let's go inside a container. I made a quick little nginx container:

version: "3"
services:
  nginx_test:
    container_name: nginx_test
    image: linuxserver/nginx:latest
    networks:
      IPVLAN_L3_201:
        ipv4_address: 192.168.201.118
networks:
  IPVLAN_L3_201:
    external: true

Container starts up just fine, and if I hop inside of it to run the same tests as from the host, this is what I get:

root@20e97f08c7db:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=113 time=39.784 ms
64 bytes from 8.8.8.8: seq=1 ttl=113 time=42.886 ms
64 bytes from 8.8.8.8: seq=2 ttl=113 time=44.664 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 39.784/42.444/44.664 ms
root@20e97f08c7db:/# nslookup google.com
Server:         127.0.0.11
Address:        127.0.0.11:53
** server can't find google.com: SERVFAIL
** server can't find google.com: SERVFAIL
root@20e97f08c7db:/# nslookup google.com 192.168.100.2
Server:         192.168.100.2
Address:        192.168.100.2:53
Non-authoritative answer:
Name:   google.com
Address: 142.250.191.110
Non-authoritative answer:
Name:   google.com
Address: 2607:f8b0:4009:803::200e

Also, I tested pinging into the container from elsewhere (192.168.100.0/24 subnet) in my network and it works fine.

So, to see if this was specific to the ipvlan l3 driver mode, I created another container using the same image but using the default bridge networking:

version: "3"
services:
  nginx_test:
    container_name: nginx_bridge
    image: linuxserver/nginx:latest

Once again same tests from inside the container:

root@90b63fdd7260:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=56 time=43.751 ms
64 bytes from 8.8.8.8: seq=1 ttl=56 time=42.334 ms
64 bytes from 8.8.8.8: seq=2 ttl=56 time=39.053 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 39.053/41.712/43.751 ms
root@90b63fdd7260:/# nslookup google.com
Server:         127.0.0.11
Address:        127.0.0.11:53
Non-authoritative answer:
Name:   google.com
Address: 142.250.191.110
Non-authoritative answer:
Name:   google.com
Address: 2607:f8b0:4009:80b::200e
root@90b63fdd7260:/# nslookup google.com 192.168.100.2
Server:         192.168.100.2
Address:        192.168.100.2:53
Non-authoritative answer:
Name:   google.com
Address: 142.250.191.110
Non-authoritative answer:
Name:   google.com
Address: 2607:f8b0:4009:80b::200e

I'm baffled. I didn't see anything in the documentation for the IPVLAN L3 driver that explains this or hints at something I missed configuring.

What am I missing here? I know the upstream networking/routing/dns is fine.

I really don't want to abandon the IPVLAN L3 driver (unless we know of a bug or something). I have other things going on in my network that make having each container routable hugely useful, and it has dramatically simplified some other aspects of my lab.

I'll likely cross-post this on the docker forums, and update if I get an answer there. Thanks!

- /u/DeosnDennis

Hello guys,

I am using a Docker container to run a Firefox instance. I put a proxy in the env variables, but when I do a check at whatsmyip.com I still get my own IP address. How can I fix this? I know Firefox has an integrated proxy option, but it's way easier for me to define proxies in the env than to go into every container.

firefox10:
  image: jlesage/firefox:latest
  container_name: firefox10
  restart: unless-stopped
  ports:
    - "58010:5800"
  network_mode: bridge
  environment:
    - FF_OPEN_URL=https://google.com/
    - DISPLAY_WIDTH=1920
    - DISPLAY_HEIGHT=1080
    - SECURE_CONNECTION_VNC_METHOD=SSL
    - KEEP_APP_RUNNING=1
    - http_proxy=http://192.116.136.71:8800
    - https_proxy=http://192.116.136.71:8800
  volumes:
    - '/home/user/firefox/firefox10:/config'

Thanks for every reply
