[–] Rowern link

Traefik is really cool for most usage. I tried to use it in production and here are some of the shortcomings I found (the first two issues are on GitHub):

- If you use DNS for a blue-green deployment, Traefik does not honor DNS TTLs (because it uses Go's DNS resolver and Go might do some caching), so when you do the switch you might still end up on the "old" environment

- A bug in one of the cool features: serving an error page when your backend returns certain HTTP codes

- Some configuration options are not documented (but are easily found using GitHub search)

I still love this software and I will keep looking at it.

reply

[–] dward link

By default, Go compiles binaries linked against libc and they use the system DNS resolver. Go does re-implement a DNS resolver, but it's only used if CGO is disabled at compile time.

reply

[–] axaxs link

That's only half right. It does link against libc, but the default behavior on Linux is to use the Go resolver unless certain conditions are met.

reply

[–] dward link

Use Go for what? DNS resolution? The default behavior is to use the system DNS resolver. The Go resolver will be used if the system resolver is not available (e.g. if the binary is compiled as pure Go) or if the net.Resolver has the PreferGo flag set (which is false by default).

https://github.com/golang/go/blob/541f9c0345d4ec52d9f4be5913...

reply

[–] mathnmusic link

Is it possible to dynamically configure Traefik for non-Docker environments? I would like to run a Traefik instance which acts as an HTTPS proxy for arbitrary domains, hosts and ports - without having to restart at all.

reply

[–] tylerjl link

As long as you're using one of the dynamic backends that Traefik hooks into, such as Consul, you can do this pretty easily.

I do this exact thing with Traefik, Consul, and various miscellaneous services that don't fall under the dynamically-discovered umbrella the way they would if they ran under k8s or Nomad. I write a value to Consul with the desired Host: match and a backend server to route requests to, and Traefik handles routing to any backend server it can reach, without restarts or interrupting service.
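
Roughly, the idea looks like this with the Consul KV store (this follows the Traefik 1.x key layout; the names and address are made up, so check the KV backend docs for the exact keys):

    consul kv put traefik/backends/myapp/servers/server1/url http://10.0.0.5:8080
    consul kv put traefik/frontends/myapp/backend myapp
    consul kv put traefik/frontends/myapp/routes/main/rule "Host:app.example.com"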

reply

[–] drewmol link

I would ALSO like to run a Traefik instance which acts as a https-proxy for arbitrary domains,hosts and ports - without having to restart at all.

reply

[–] fornowiamhere link

Yep, you can do that using the file backend: https://docs.traefik.io/configuration/backends/file
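
A rough sketch of such a rules file (Traefik 1.x file-backend syntax; hostnames and addresses are made up) - with the file provider's watch option enabled, Traefik picks up changes without a restart:

    cat > rules.toml <<'EOF'
    [backends]
      [backends.myapp]
        [backends.myapp.servers.server1]
          url = "http://10.0.0.5:8080"
    [frontends]
      [frontends.myapp]
        backend = "myapp"
        passHostHeader = true
        [frontends.myapp.routes.main]
          rule = "Host:app.example.com"
    EOF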

reply

[–] docsapp_io link

We are using Traefik with k8s in production with great success. They've started to provide commercial support, which is great for enterprises.

reply

[–] snorremd link

Træfik works pretty well as an automated HTTPS proxy as well: https://traefik.io/ It is still missing a caching feature, though, so it might not be a good fit for everyone. It has a Docker backend which works with Docker labels (much the same way the https-portal project uses environment vars).
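
In practice the Docker backend boils down to labels like these (Traefik 1.x label names; the image and hostname are just examples):

    docker run -d \
      -l traefik.enable=true \
      -l traefik.frontend.rule=Host:whoami.example.com \
      -l traefik.port=80 \
      emilevauge/whoami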

reply

[–] ripdog link

Why shouldn't I? If I was just going to use this to host a single website, what's wrong with running docker-compose?

reply

[–] tjbiddle link

A single node swarm is incredibly simple to set up:

    docker swarm init
    docker stack deploy -c docker-compose.yml mystack

Sure, you could use Compose, but it's like driving a nail with a brick instead of a hammer. Swarm is simply the better tool for production deployments and Compose the better tool for development - and since Swarm is so simple on a single node, there's no reason not to use it, and you're then better set up for multi-node deployments and the best practices surrounding them down the road.

As another commenter mentioned, there's often confusion about using Docker in development vs. production. I think this comes up a lot because people always say "use the same image everywhere!" when that's not always the case. In development you may be using bind mounts to attach your working directory into the container so you can actually edit your files, you may be running the application under nodemon (or your language's equivalent) to watch for changes, or doing whatever else your application needs; maybe you even keep a separate, slimmed-down Dockerfile for the production image.

The idea is to bring dev/production parity closer, not to make them identical. You can ensure all your teams are developing against the same version of your language, that things are configured exactly the same, and that when the image is built it runs exactly the same on your CI server as it does on your production node. It allows for more consistency, but things aren't necessarily 100% the same between development and production.
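
For example, a dev-only override might look roughly like this (docker-compose reads docker-compose.override.yml automatically; the paths and command are made up):

    cat > docker-compose.override.yml <<'EOF'
    version: "3.7"
    services:
      web:
        volumes:
          - ./:/usr/src/app        # bind-mount the working tree for live edits
        command: npx nodemon server.js
    EOF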

reply

[–] bfirsh link

Nothing at all, as long as you don’t mind the unreliability of a single server. (I am the author of Compose.)

Honestly though, I would probably use Heroku for deploying little apps like this. The Compose CLI’s sweet spot is development environments.

Edit: There are even official docs for this https://docs.docker.com/compose/production/

reply

[–] Already__Taken link

I have always been confused by examples of things that are deployed with docker-compose. And here I am with baby's first CoreOS cluster, wondering how to mesh the two.

reply

[–] gexla link

I see Docker for production and Docker for development as two different worlds. I think containers can work well for everyone in development; not everyone needs them in production. If you are using Docker Compose for production, then you probably don't need the advantages Docker offers in production (even though it's still useful for development). If you Google around for production setups of applications, you'll see what a typical flow looks like. I think the key points are automation and spreading out your resources, and Docker Compose doesn't address those.

reply

[–] ckocagil link

Why not if it works for their use case?

reply

[–] tjbiddle link

docker-compose is not meant to be used in Production.

Compose files can be; however, they're then used in conjunction with Docker Swarm - and when this is done, certain features become available while others are not. Networks would be used in this case rather than the `link` directive.
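
A rough sketch of what that looks like (the image name is a placeholder; the `deploy:` section is only honored by `docker stack deploy`, and an overlay network takes the place of `link`):

    cat > docker-compose.yml <<'EOF'
    version: "3.7"
    services:
      web:
        image: myapp:latest        # placeholder image
        ports:
          - "80:80"
        networks:
          - appnet
        deploy:
          replicas: 2              # honored by swarm, ignored by docker-compose up
      db:
        image: postgres:10
        networks:
          - appnet
    networks:
      appnet:
        driver: overlay
    EOF
    docker stack deploy -c docker-compose.yml myapp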

reply

[–] dvfjsdhgfv link

I'm sorry, I really don't get it. What do you need Docker for? Let's Encrypt and Nginx give you practically full cert automation. Maybe there's some crucial bit of information I'm missing here?
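
(To be concrete, "practically full cert automation" means something along these lines, assuming nginx already serves the domain - certbot's nginx plugin edits the vhost and sets up renewal:)

    sudo certbot --nginx -d example.com -d www.example.com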

reply

[–] rapnie link

I am running RancherOS and Rancher, where I have everything as Docker images - easily manageable and updateable. I'm using docker-compose automation, but in my setup with nginx as a reverse proxy I have a lot of configuration options, and I'm a programmer at heart, not a DevOps guy. I don't know yet if I can use this out of the box, but it will certainly save me a lot of time having it all together.

reply

[–] sureaboutthis link

I don't get it either, even if one uses Docker. I use jails on FreeBSD, but otherwise I have the same setup that does the same thing, just by configuring what comes with all that, so I don't understand the point of this.

reply

[–] aaaaaaaaaab link

Can you have a self-contained pre-built FreeBSD jail that you can fire up without doing additional setup? Like a prebuilt Docker container with all the necessary stuff already installed.

reply

[–] sureaboutthis link

I'm a professional. I don't need someone else to configure my system tools for me.

reply

[–] ric2b link

Being a professional doesn't mean you have to do everything yourself; this saves time so you can do the important stuff that can't be automated.

reply

[–] sureaboutthis link

You only do it once. Then this tool needs everything done to it, too; you adjust as necessary. When you do your own config, you know what's going on and how to fix and adjust it. And it's one less tool you need to learn and maintain.

reply

[–] dagenix link

That's pretty rude.

reply

[–] kodablah link

For differing values of "configuration", yes you do - via the scripts you rely on. A build process based on other people's scripts is the same philosophy.

reply

[–] sureaboutthis link

I think you're talking about something else. I'm talking about the setup and configuration of these tools and servers. Other people's scripts may read those configuration files and apply them, but the config itself is something I do on my own. Even then, I sometimes edit those build scripts if they don't do what I want and I need something customized. If some other tool set all that up for me, I'd never know how to do it, or I'd have to learn it anyway, negating the need for this tool in the first place.

reply

[–] geggam link

This ^ gets you hired.

reply

[–] twovarsishard link

Setting a few labels on a docker container is hard? Good god we set the bar low these days.

reply

[–] SquareWheel link

I've flagged this comment. Creating a throwaway just to mock somebody is not appropriate.

reply

[–] bovermyer link

It should also be punishable by more than just flagging a comment. Is there a way to ban IPs?

reply

[–] dspillett link

IP blocking is ineffective[1] and can have noticeable collateral damage[2]. As has been discussed at length in discussions about tracking people for copyright protection/enforcement reasons, IP addresses are not a suitable method of identifying/locating/tracking people.

[1] they could just connect from somewhere else next time: mobile instead of home network, coffee shop wireless, day-job guest wireless, train/bus wireless, friend's wireless, tethered to friends' mobile connection, ... - in fact the poster may have used an alternate location this time just-in-case[3].

[2] if a public address/range is blocked, or a range used by a large employer or educational establishment, that could affect many innocent users while not affecting the problem user.

[3] people determined enough to be dicks that they take time to sign up for a throwaway account are likely determined enough to post from another location, and dickish enough not to care if other people using that location get blocked because of their actions.

reply

[–] nothrabannosir link

Agreed, and thanks for mentioning it. This type of comment gives everyone a bad name.

reply

[–] Sohcahtoa82 link

I'd call it cowardly.

reply

[–] dvfjsdhgfv link

While I don't appreciate the form and agree there is no place for it on HN, there is some merit in that comment.

reply

[–] thecatspaw link

It could be written in a non-inflammatory tone, however.

reply

[–] throwaway8879 link

They didn't say it was hard, they said "involved process", which I agree with.

reply

[–] rapnie link

Thank you. My case involves more than the few labels. But you may enlighten others with your solution and help raise the bar again.

reply

[–] rapnie link

This is a great project, and exactly what I was looking for. I'm currently doing manual cert renewal, and I looked into using jwilder's docker-gen to automate it, but that was an involved process. This brings it all together. Thank you!

reply

[–] stephenr link

Sure, if you want to play "guess which crazy decision the author makes will bite users in the ass next".

The project has a very poor history of bad decisions.

reply

[–] y4mi link

I know of one example, which was admittedly beyond silly (Caddy wouldn't start when Let's Encrypt had an outage), and they backpedaled very quickly after it blew up and provided a fix within hours, IIRC.

Your comment, however, paints a different picture... if you know of other examples, please enlighten us.

reply

[–] stephenr link

The issue you're aware of is when I became aware of the project (because of the outages and ensuing coverage here on HN).

My issues are:

- That problem was caused not by a 'bug' but by a deliberate decision that the author made and defended. I wouldn't use the term 'backpedaled' so much, though: even after relenting and changing the allowed expiry window, the behaviour was essentially the same, albeit with a smaller failure window. AFAIK Caddy to this day won't start with a valid certificate if it can't renew it and the expiry is within 7 days.

- Before the TLS-SNI-01 challenge was disabled on the LE side, Caddy's behaviour was to randomly select a challenge to use. When I questioned the author about this, his response was "oh well, that would be too much load for LE". The problem there is that the ACME protocol handles this exact scenario - you query the ACME server to find out which of the challenges it will allow you to continue with. As I pointed out at the time, I personally wasn't aware of this before the TLS-SNI issue, but as demonstrated by his response, neither was he. If his project's entire reason for being is registering certs via ACME, wouldn't you expect him to know the basic information/control flow of requesting a cert?

- The author has no apparent concept or understanding of 'separation of concerns' - apparently anything that isn't a single-file Go binary is too much for the world to bear, regardless of the disastrous results his approach has given the world. I'm all for competing approaches to solving a problem, but when someone essentially ignores any competing approach that doesn't fit their existing narrative of "I have the sole solution to this problem", it's not a sign of someone who is really interested in solving the problem; it's someone interested in pushing their solution. That's fine for a salesman, I guess. Not so much for an 'engineer'.

Edit:

- The 'ads via headers' stunt was just plain weird and creepy, and shows that the author and his partners have no real sense for business. This is demonstrated in their 'Basic' paid support, which states: "We usually respond within 1-2 business days." USUALLY!? You're paying $25 a month per instance, so you... I dunno, don't have to install certbot once, and your support response timeframe includes the term "usually". Sweet fucking Jesus.

reply

[–] andmarios link

I'd say that $25/month to get support from the core development team within a couple of business days is an excellent deal, especially for open-source software.

reply

[–] aurieh_ link

Not OP, but: opt-out telemetry (https://github.com/mholt/caddy/pull/2079) phoning home to a non-libre, non-open-source server, for one. Not to mention the fact that you couldn't opt out without changing the build until someone else (not the author!) made it a CLI flag (https://github.com/mholt/caddy/pull/2191).

reply

[–] graup link

Looks cool, but it's not free for commercial use. Instead of paying $25/instance/mo I can set it up myself with the https-portal docker config.

reply

[–] bpizzi link

It's not free if you use the binaries built/downloaded from caddyserver.com. It's still free if you build the binaries yourself from source [0].

[0] https://caddyserver.com/products/licenses

reply

[–] morpheuskafka link

And there is a freely licensed Docker Hub image built from source: https://github.com/abiosoft/caddy-docker

reply

[–] andmarios link

Binaries from GitHub should also be free, though they do not have external plugins compiled in.

reply

[–] edwinyzh link

Great! I didn't know about that.

reply

[–] mpranjic link

Caddy is free. Caddy binaries are not free, which is different. You're free to build it yourself (or use a prebuilt Docker image).

reply

[–] mortond link

Or just build Caddy from source?

reply

[–] cabraca link

Their GitHub source is still Apache 2.0 licensed. Just compile it yourself and you're good to go.
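
Roughly, assuming a working Go toolchain (the repo's README is the authoritative source for the current build and plugin instructions):

    go get github.com/mholt/caddy/caddy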

reply

[–] dexterbt1 link

Same here - they're my go-to dockerized automated HTTPS reverse proxy setup. Takes a few minutes to set up.

Recently, though, I grew tired of handling two or more Docker images, and then I discovered https://github.com/Valian/docker-nginx-auto-ssl. One image, sets up with a single command, without the volume-sharing complexity of jwilder/nginx-proxy. One caveat is the larger CPU overhead (from Lua) when handling high volume or a high number of reqs/sec.

reply

[–] TheGrumpyBrit link

I've been using https://github.com/jwilder/nginx-proxy and https://github.com/JrCs/docker-letsencrypt-nginx-proxy-compa... to achieve this for a year or two now. I prefer it because the domain configuration is set on the backend container rather than on the proxy image itself, which means you don't have to worry about cleaning anything up when you remove a container.
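
The nice part is that adding a site is just environment variables on the backend container, roughly like this (the hostname, email and image are examples):

    docker run -d \
      -e VIRTUAL_HOST=mysite.example.com \
      -e LETSENCRYPT_HOST=mysite.example.com \
      -e LETSENCRYPT_EMAIL=admin@example.com \
      nginx:alpine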

reply

[–] throwaway9d0291 link

Quite a few reasons:

- SSL: You can put a bunch of things behind a reverse proxy and you can put all your SSL stuff in one place, which makes it a lot easier to deal with, secure and manage.

- Load balancing: If your application needs more than one host, you need some way of distributing requests to multiple hosts. A reverse proxy is one of the easiest ways of achieving this (and it comes with the above SSL handling as well)

- Caching: They can be very good for caching dynamic but rarely changing resources like news articles. They can also take care of requests for static assets so that your application servers don't have to.

- Multiple apps on a single IP: At the other end of the spectrum, if you have for example a home server, only one application can listen on a given port and you might want to run multiple applications responding to different hostnames. A reverse proxy lets you do this.
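
For that last case, the nginx version of the idea is just a couple of server blocks routing by hostname - a rough sketch with made-up hostnames and ports:

    sudo tee /etc/nginx/conf.d/apps.conf >/dev/null <<'EOF'
    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    server {
        listen 80;
        server_name app2.example.com;
        location / {
            proxy_pass http://127.0.0.1:4000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    EOF
    sudo nginx -s reload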

reply

[–] dspillett link

> Why are reverse proxies so popular?

From the point of view of SSL, they make it easy to bolt SSL support onto existing infrastructure with minimal changes to everything else.

In some cases the separation of duties is useful too: if I change the back-end of one of my applications from Apache+PHP on one server to a new implementation in node.js elsewhere, I don't have to worry about implementing SSL (and caching if the proxy is also used for that purpose) in the new implementation or even needing to change DNS and other service pointers, I can just direct the proxy to the new end-point(s).

For larger organisations (or individual projects) this separation of responsibility might also be beneficial for human resource deployment and access control: keeping the proxy within the security/infrastructure team for the most part and the app deployment/development with specific teams.

> They are unnecessary in most cases

I agree. But that doesn't necessarily mean they are not beneficial in a number of cases.

> and bring more complexity

Though they also spread out that complexity, which can help in managing the scaling up and in maintaining the larger scale once you're there.

Obviously the utility of this very much depends on the project/team/organisation - it is very much a "your mileage will vary" situation.

reply

[–] blablabla123 link

I've often pondered whether it is useful to use one or not. I came to the conclusion that it's always a nice add-on, and eventually it always helps. You get highly efficient static file serving, adding HTTP headers is a no-brainer, and the same goes for typical stuff like HTTP->HTTPS redirects, basic logging, and error pages independent of your app... Also, don't forget that it's an additional layer in the setup that adds extra security because it's an extra layer. ;-)

It would be nice to have one in Go or Rust if it had all of nginx's features, performance and documentation/community support.

reply

[–] jenscow link

Another reason: binding to a privileged port (< 1024).

I trust nginx to do the right thing, more than some other application.

reply

[–] dullgiulio link

No, you don't need to run the server with special privileges to bind to a low port; just set the right capability on the binary:

# setcap cap_net_bind_service=+ep /usr/sbin/httpd

You should never run servers with privileges, period.

reply

[–] teleclimber link

Worth noting that this allows the httpd executable itself to bind to a low port, meaning that any invocation of httpd now has this privilege. A reverse proxy is much more targeted.

For httpd this may not matter as much because it's only used as a server. But if you use Node, giving every Node script the ability to bind to a low port is uncomfortable.

reply

[–] acdha link

I agree with you, but it's worth acknowledging that setting capabilities and making sure they persist across updates (e.g. your example breaks the first time a package update is installed) isn't always trivial, especially in bureaucratic enterprise IT environments. And although the risk is lower, an attacker could still potentially find interesting things to do with other low ports unless you've also set up something like authbind to limit it to just ports 80/443.
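
One way to sidestep the persistence problem on systemd-based distros is to grant the capability in the service unit instead of on the binary - a sketch, assuming a hypothetical myapp.service running as a non-root user:

    sudo systemctl edit myapp.service
    # add a drop-in along these lines:
    #   [Service]
    #   AmbientCapabilities=CAP_NET_BIND_SERVICE
    sudo systemctl restart myapp.service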

reply

[–] nickpsecurity link

On top of the other comments, I'll add that you can do them more securely. High-assurance security often used proxies to add support for legacy apps, since the proxies could be clean-slated using rigorous techniques. The legacy systems were often closed-source, way too complicated, or (e.g. Microsoft) deliberately obfuscated. Others already mentioned SSL/TLS. Another example is putting a crypto proxy in front of Microsoft Outlook that communicates with a mail guard: it can scan, encrypt, etc., email with little or no work on the client.

"Can do (improvements here) with little to no change to (existing app)" is the recurring pattern.

reply

[–] bpicolo link

They tend to be more performant at connection handling/request queueing, HTTPS termination, and serving static files, among other things.

reply

[–] BjoernKW link

They're popular because, for better or worse, Microservices are popular.

If you want to serve Microservices from a common host name as if they were a single application, e.g. for public users of an API, you need some sort of mapping between internal and external URLs.

Service discovery is another approach but that's probably only applicable to internal service usage.

reply

[–] aequitas link

They were popular way before microservices, only less dynamic. Mostly, from an operational perspective, they give a single entry point which can be better secured and monitored. They also let you easily decouple unstable backends from your users, so even when functionality is broken (404), the user experience doesn't have to suffer: serve a stale cache, or respond with a properly branded error page that helps the user forward, instead of a cryptic app-specific error page.

reply

[–] fulafel link

How about just returning the service URL along with the API auth token? That would enable load balancing and failover too.

reply

[–] BjoernKW link

Yes, but that'd require a more complex process on the client.

At least you'd have to send an additional request, e.g. instead of just calling

https://someserviceprovider.com/serviceA

you'd first have to call

https://someserviceprovider.com/serviceA

which for example would return

{ "url": "https://someserviceprovider.com:8081", "auth_token": "..." }

and only then could you call the actual service under https://someserviceprovider.com:8081

reply

[–] lixtra link

From the readme:

> By using it, you can run any existing web application over HTTPS, with only one extra line of configuration.

You can do authentication, ssl, authorization etc. all in one place.

Downsides:

- difficult to scale

- no defense in depth

- CSRF exposure because applications are not separated by domain

I’m having good experiences with this approach

reply

[–] nailer link

Because they're used to terminate SSL and do load balancing. More recently they do HTTP/2, Brotli and other newer tech that non-specialist HTTP servers don't yet do.

reply

[–] a012 link

Can you name a reverse proxy that is written in a safe language?

reply

[–] lixtra link

Can you name a safe language?

reply

[–] drngdds link

Virtually anything other than C or C++?

reply

[–] fulafel link

Why are reverse proxies (in memory-unsafe languages, no less) so popular? They are unnecessary in most cases, and they bring more complexity and hinder transparency more than the alternatives.

reply

[–] DanielDent link

This seems like it might be a new "hello world" for devops-inclined people.

I've authored a similar Docker image with less features: https://github.com/DanielDent/docker-nginx-ssl-proxy

(Although lately I've been finding my cookie-or-IP-or-HTTP-basic-auth feature extremely useful in development, which this project doesn't seem to have, judging from the README.)

It hasn't been updated in a while, but I've also got an automatic, service-discovery-based version of this for Rancher 1.6:

https://github.com/DanielDent/rancher-nginx-active-lb

reply

[–] nothrabannosir link

Example: this company had part of their site built by some abstruse static site generator with a million dependencies, written in a language that no one at the company had installed by default or knew (I think it was Ruby). We put it all in Docker, and the README changed from 10 lines of "install x, y, z at version foo to /random" to just "run docker build, then docker run." Most of the people working on those docs didn't need to know about Ruby or gems or lock files or any of that.

A year later I overheard someone say "thank god that thing was dockerized, because I was absolutely not looking forward to installing all those dependencies just for a typo fix."

That’s one of the areas where docker shines: app delivery.

reply

[–] Fredej link

I'm also still not on the Docker wagon and would like to know what I'm missing out on. Most of my stuff is Python. Why is Docker better than virtualenv with a requirements.txt file?

From this description it seems to solve the same issue.

I could imagine something like environment variables, but on the other hand that's something I've learned not to keep in version control, and putting it in docker would be exactly that, no?

reply

[–] anyzen link

> Why is docker better than virtualenv with a requirements.txt file?

It's not; it solves a different issue. If Python is your only dependency, a requirements.txt is fine (well, the user needs to install the correct versions of Python, pip and virtualenv/pipenv, but that's doable). But as soon as your app is actually composed of nginx/Apache, Python, some background process in Rust, bash scripts for cron jobs... then you have a problem with app delivery, which Docker solves nicely. Just package everything in a Dockerfile and distribute the image. Bonus point: you can now test it locally, with the same installation.
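
A minimal sketch of what "package everything" looks like (file names and the port are hypothetical):

    cat > Dockerfile <<'EOF'
    FROM python:3.7-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]
    EOF
    docker build -t myapp .
    docker run -p 8000:8000 myapp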

I jumped on the Docker wagon very early for this exact reason. I don't care about hype, but it does solve these kinds of problems.

reply

[–] undefined link
[deleted]

reply

[–] Ralfp link

I'm also a Python dev and I'm embracing Docker for delivering my app to users.

> Why is docker better than virtualenv with a requirements.txt file

One of the issues that `docker-compose build` solves better is that venv is super confusing to non-Python users. I've had people:

- skip it and do `python setup.py install` instead

- skip it and install requirements globally with `pip install -r requirements.txt`, because that's how it worked for them the last time they used Python

- version control their `venv` directory

- back up and upload their `venv` directory to the new server when moving

Now they just run `./appctl setup`, which asks them a few things, writes a `setup.env` file for docker-compose, and then runs `docker-compose build`.

But this is only the beginning. Thanks to `docker-compose`, that `build` step also installs:

- PostgreSQL database

- Redis for caching and message queue

- UWSGI

- Celery

- Nginx proxy

reply

[–] SteveLTN link

> Why is docker better than virtualenv with a requirements.txt file?

You can't put Postgres, memcached, or ImageMagick in requirements.txt, can you?

reply

[–] nickjj link

You might want to read this blog post as it walks through setting up a dev environment for Python with and without Docker:

https://nickjanetakis.com/blog/setting-up-a-python-developme...

The TL;DR is that a typical web app is often a lot more than just running Python, and there's a huge amount of value in being able to run the same code across different operating systems without any installation changes.

reply

[–] cvakiitho link

Not every dependency is written in Python.

reply

[–] bpizzi link

Or to put it differently: "that's one of the areas where interpreted languages' ecosystems stink: app delivery".

reply

[–] dbdjfjrjvebd link

The OP says app delivery, but they actually talk about development builds. Dev builds are another area where Docker/containerisation helps. Dev builds are just as annoying with Golang as Ruby or Python. In fact, build awkwardness is orthogonal to static vs. dynamic typing.

reply

[–] bpizzi link

> Dev builds are just as annoying with Golang as Ruby or Python.

Hm, I'm pretty sure I can take any Go project out there and build it in a matter of minutes on my current OS (depending on the build time, of course). Just a matter of setting the environment on the right compiler binary and retrieving dependencies (which just got better two weeks ago with 1.11's modules).

I'm pretty sure too I'll very quickly run into dependency hell by picking random Ruby/Python programs. Just take a look at that SO question: https://stackoverflow.com/questions/2812471/is-there-a-pytho....

Interpreted languages have their uses, of course, but there's a clear difference in both ease of deployment and development between the realms of interpreted languages (Ruby/Python/PHP/etc.), VM/IL/JIT-based languages (JVM/.NET/etc.) and 'plain old' compiler-based languages (C/C++/Rust/Go/etc.).

> In fact build awkwardness is orthogonal to static vs dynamic typing.

I said "interpreted languages' ecosystems", not "dynamically typed languages". I think you're mixing concepts here (or just maybe you're adding info not directly related to my comment).

reply

[–] icebraining link

> Just a matter of setting the environment on the right compiler binary and retrieving dependencies

Which is exactly how it works in Python.

> I'm pretty sure too I'll very quickly run into dependency hell by picking random Ruby/Python programs. Just take a look at that SO question:

That SO question is about a tool for "setting the environment on the right compiler binary and retrieving dependencies". But you can also do it manually just fine; that's how I usually did it, in fact. Just retrieve the dependencies to a directory and call Python with PYTHONPATH set to it.
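
For example (vendoring the dependencies into a local directory; app.py is a placeholder):

    pip install --target ./deps -r requirements.txt
    PYTHONPATH=./deps python3 app.py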

Pyenv is essentially the same as Go's new modules tool; it's just not called through the same binary as the interpreter/compiler.

> Interpreted languages have their uses, of course, but there's a clear difference in both ease of deployment and development between the realms of interpreted languages (Ruby/Python/PHP/etc.), VM/IL/JIT-based languages (JVM/.NET/etc.) and 'plain old' compiler-based languages (C/C++/Rust/Go/etc.).

Yes; with Python, you can both run directly from source without wasting time compiling after each change, and produce a standalone binary (dependent only on libc) for distribution, using a tool like PyInstaller. Plain old compiled languages are much more limited.
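
E.g. (app.py is a placeholder; the result lands in dist/):

    pip install pyinstaller
    pyinstaller --onefile app.py
    ./dist/app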

reply

[–] bpizzi link

> Yes; with Python, you can both run directly from source without wasting time compiling after each change

As I said, interpreted languages have their uses, of course. My initial point was about delivery to end users.

reply

[–] icebraining link

And my point, which you cut off early, is that with Python you can have both.

reply

[–] bpizzi link

Yes, you can have everything in every language. After all it's turtles all the way down.

What makes us successful as software engineers, in the end (and IMHO), is the efficiency with which we reach our goals. To my eyes, when deliverability is the goal, a language where native (cross-)compilation is a first-class citizen is the most efficient path. But don't get me wrong: Python is really great and still a good choice for that scenario (I'm thinking of Dropbox, for example).

reply

[–] ric2b link

But setting up a build system is a tiny part of the time it takes to finish a project, so why would it be a major factor when choosing a language?

reply

[–] bpizzi link

Mainly because I currently work on long-term projects, building enterprise-grade systems. There you want simple, dumb technologies for which you'll still be able to stand up a build OS years from now.

reply

[–] dbdjfjrjvebd link

I can build any Python project in 10 minutes (and not pollute my global environment). I can't do that with Go or C or C++ or Ruby or PHP. I can if they are dockerised.

reply

[–] bpizzi link

Sorry I can't follow you (no offense really).

You said at first "Dev builds are just as annoying with Golang as Ruby or Python" and now you're saying "I can build any python [...] I can't do that with [...] Ruby"?

reply

[–] dbdjfjrjvebd link

I mean that I am knowledgable about Python builds. You are knowledgable about Golang. Neither skill is special. With Docker I don't need to learn as much about FooLang's build process to contribute to a project using FooLang.

reply

[–] bpizzi link

Thanks, that makes it clearer. As a matter of fact, I'm knowledgeable about building things in Python and Go, and Node and PHP too. And others. But admittedly, for 'big' things I tend to lean on Go nowadays (edit: mainly because I need to deliver on-premise, OS-aware applications).

It seems like you think there's a "build process" with Go because you're accustomed to python/pyenv (or whatever your env manager of choice is), in which case you need to know that there's no such thing with Go.

The following two courses of action are equivalent:

- "apt-get install docker; docker pull {project-url}; docker run {project-url}"

- "apt-get install golang; git clone {project-url}; go run *.go" (the latest Go 1.11 will take care of dependencies at build time)

You can't as easily do "apt-get install python; git clone {url}; python *.py", unless the program is really simple, because there are questions like "python 2 or 3?" or "pyenv, virtualenv or anaconda?" - obligatory XKCD here: https://xkcd.com/1987.

But it's totally OK not to bother learning the compilation commands of a given project's language and to rely on Docker instead - really, I'm fine with that; I do it myself from time to time. My initial point was about app delivery to end users.

reply

[–] ric2b link

You're right that the multiple solutions for Python make things confusing and prevent you from using the same method for every project. Hopefully we'll start to standardize on pipenv, which makes the process:

"apt install python3 python3-pipenv; git pull {url}; pipenv sync; python3 .py"

`pipenv sync` sets up a virtualenv and pulls in all the dependencies.

reply

[–] simongr3dal link

I get that Docker makes this simpler, but it's not really a feat only Docker could accomplish; a makefile or bash script could have accomplished the same thing.

reply

[–] jarfil link

A bash script would have to automatically install all the dependencies. Over time, there is a growing chance that some of the required versions will conflict with whatever is already installed on the machine, and someone will have to go in and fix them... and then fix them in a way that works on everybody's machines.

With docker, you can just ignore all that. As long as there is a single person capable of updating the dependencies on a single machine with docker, it'll work the same everywhere, always.

reply

[–] louiskottmann link

A bash script would be nowhere near as practical. First of all, it would be much more complicated to deal with various environments, and in practice docker run/docker stop is much easier to upgrade.

reply

[–] v_lisivka link

If everybody on your team uses the same distribution of Linux (say the latest Fedora, for example), then all dependencies can be packaged into RPMs or installed from the system repository. RPM and Docker are very similar in their general idea: both are an image of the result of installing a program (RPM) or a system (Docker).

Every member of the team will need to run the same version of Fedora, directly or in Vagrant or in a (ta-da) Docker container.

From my experience, for small Dockerfiles RPM packages are an unnecessary burden, so usually I just start with plain Docker. But later, when the Dockerfile grows, it becomes much harder to track dependencies and installation instructions when they are interleaved, so it's much easier to return to individually packaged programs and replace almost the whole content of the Dockerfile with a single "dnf install meta-package" command.

reply

[–] johnchristopher link

You could also move bits with a magnet over the circuits but it's more error prone /s.

reply

[–] martin-adams link

I was like that and ignored Docker for a number of years. The app I'm working on has one database, one queue, one API server and one web front end. Some of these will scale in production, but for development it's quite simple. I used to just run them at the command line and get on with development.

But using docker has been an eye-opener. I can now get my front end developer to have a fully working database, queue, API server running with a simple `npm run docker`. They are free to develop the UI without having to worry about the underlying stack. I am free to change that stack without having to worry about telling them what commands to run on their laptop to get it up to date.

As we add more services, such as email sending, image processing and video compression, it won't require anything special to configure from the front-end perspective.

We're now moving it into the cloud and are seeing the benefits of Kubernetes with our Docker containers: blue/green deployments, and not worrying so much about what the servers are doing but focusing on the deployments themselves. This is very clever stuff and will reduce our hosting costs without having to duplicate infrastructure.

So yeah, I didn't get Docker for a long time. It does depend on the types of apps you're building, though. My use case fits it perfectly.

reply

[–] SteveLTN link

Docker is surely not for everyone. If you don't get why you want to use Docker, chances are you don't need it.

Different people use Docker differently. I see Docker being useful for several purposes:

1) Scalability. This is the main selling point for Docker, I guess. It enables your stateless application to scale automatically.

2) Being able to deploy without caring about dependencies. There are times we want to deploy something onto customers' servers. We have no control over their machines' environments, but with Docker the only thing I need to make sure of is that they have it installed.

3) Infrastructure as code. All our source code is checked into git and version controlled nicely; when we need to run it again, we just clone from git. How nice! But our infrastructure, meaning server formation and service relationships? Not so much. Before using Docker, we basically need to ssh into servers and install dependencies according to the documents. That was not only slow but unreliable, because we all know how documents can lie. With Docker and docker-compose, the infrastructure can be written in code and checked into git. Most importantly, code doesn't lie.

You can surely use other tools to achieve those. I'm not saying Docker is the only way. But I believe it's a good tool to consider when you have those needs.

reply

[–] pvg link

> Before using Docker, we basically need to ssh into servers and install dependencies according to the documents.

There's an entire cottage industry of not-docker tools for that. Not to take anything away from docker, but the options aren't 'docker' and 'the pre-docker dark ages where you ssh'ed into servers one by one and carefully typed commands from a faded printout'.

reply

[–] mmt link

> There's an entire cottage industry of not-docker tools for that.

Not the least of which are the distros' own packaging systems (e.g. rpm/yum or dpkg/apt). Full investment in the complexity of the ecosystem isn't even necessary for basic dependency handling.

reply

[–] SteveLTN link

Yes, I know. I just personally didn't learn to use Chef or Ansible, etc. back then. Now I use Ansible as well as Docker. Like it so far.

reply

[–] cm2187 link

On 2: perhaps I've misunderstood how Docker works, but don't you need to deploy a Docker image that is binary compatible with the type and version of the host OS? Then don't you just move the problem? (Two customers with different types of OS require you to recreate the Docker images.) And what if these customers need to upgrade their version of Linux - do they need to contact all their software vendors to reissue new Docker images?

I never understood how creating that tight dependency between the host and the Docker image is not a problem.

reply

[–] SteveLTN link

Well, you need a Docker image that is binary compatible with the hardware, but frankly almost everyone uses x86-64 now, so this is not usually a problem. If these customers upgrade their version of Linux, the new version usually comes with Docker, and you don't need to update the image.

reply

[–] cm2187 link

OK, so maybe I misunderstood. I thought the version of the host Linux had to match the version of the Linux in the Docker image exactly.

reply

[–] pythonaut_16 link

I believe running containers share the host's kernel, so if you were doing certain low-level, kernel-specific stuff, maybe matching kernel versions would matter?

reply

[–] davedx link

Another old timer here.

I'm a front-end dev, and at my latest contract I wanted to help out our back-end developers with some .NET stuff. Can you imagine trying to set up a full .NET stack on a MacBook a few years ago? This time, all I had to do was:

* install Docker

* install Visual Studio Community and .NET Core

* run docker-compose ... up

Then I had their entire .NET software stack up and running, including MSSQL, ElasticSearch, Kibana, redis, all with one command.

I don't like doing DevOps personally, but I definitely appreciate how Docker helps me in situations like this!

reply

[–] jarfil link

You don't need docker to run a single program on a single computer... but the moment you either want to run multiple copies, or distribute it to multiple computers, particularly if they're not your computers, then being able to encapsulate stuff and reduce the number of dependencies is a huge benefit.

For example, it lets you run software on your computer without worrying about installing stuff that might affect other software, and without even having any interactions with other software aside from what you allow it.

reply

[–] zmmmmm link

Old timer to old timer: it's how the rest of the computing ecosystem re-invented Java after it got uncool to use Java. Just slam all your dependencies into an image and it can run anywhere [-]

[-] Until it has an external dependency and then it breaks, which is where docker turns out to be rather fragile by comparison.

reply

[–] anyzen link

Old timer to 2 other old timers: Docker does solve a problem, and this is quite close to it... We had a solution that we needed to install on servers for multiple clients. Before Docker, we had to make sure the libraries were correct (on CentOS, Debian, Ubuntu,... you name it), which was a nightmare. We were literally debugging on customers' installations to see why it broke there.

Docker solves this nicely. Apart from the kernel you take all dependencies with you.

In this case, it provides an easy installation of all the dependencies in a single command, while still allowing for customizations, and all in a standard way. Not to mention that certificates and similar are all solved.

Forget about hype and just try Docker. It's a tool like any other. It can be abused, sure, but it's useful too and it's here to stay.

EDIT: difference between Java and Docker is that containers depend on kernel capabilities, not on some 3rd party JAR maintainers. In my experience Docker simply works unless you are doing something really weird.

reply

[–] zmmmmm link

I've been using Docker for lots of things. To be honest, the dependency part has been helpful but still weak: we inevitably end up finding that our containers are built from things that go stale (apt-get sources, pip, broken URLs, stale SSL certs, etc.). I guess there is a skill to doing it well so that this doesn't happen, but there you are - it didn't really solve the problem; it's now an extra skill we need to have in order to solve a problem we had already mostly solved.

What has really sold me is more the orchestration side, with docker-compose allowing us to easily launch a whole fleet of independent services. I assume Swarm, Kubernetes etc. make this even better. It's really this that lets us decompose our software into finer-grained independent components, which has a number of other benefits. I don't think we could manage it otherwise.

reply

[–] unilynx link

You still need to do a frequent build to detect broken dependencies, but at least setting up a daily docker build in CI is so much simpler and faster (and cross-platform!) than doing this with vagrant or live systems. At least you'll catch problems a lot sooner.

If you want to protect yourself from breaking dependencies, mirror them locally, and use a network-isolated docker build to verify you've actually got everything you need.

reply

[–] ric2b link

The build process might break because of stale links, like you say, but already built images should continue to function everywhere unless you're doing something really weird.

reply

[–] hardwaresofton link

Remember that docker = lxd/lxc =~ bsd jails/solaris zones =~~ enhanced sandboxing/isolation for processes (containerization).

When you think about it that way, it's the pre-Docker world that seems unnecessarily dangerous - why would you just let some random process run on your system with all its dependencies, possibly causing havoc, without containerizing it?

reply

[–] zaarn link

lxd/lxc containers are a tad more flexible than docker in my experience, notably because you tend to run a full distro in there instead of just a single app.

reply

[–] hardwaresofton link

I agree -- I'm personally super excited to try my hand at using LXD[0] on Ubuntu, in particular I want to see just how awesome/well-isolated system images are. If everything I'm reading is right, lxd/lxc system containers are going to make running VMs seem outdated.

Unfortunately it's a little annoying because the LXD docs don't make it easy to find what they define as a "system image" - do you have links to some good layouts? I actually found a good talk from DebConf '17[1].

[0]: https://help.ubuntu.com/lts/serverguide/lxd.html

[1]: https://debconf17.debconf.org/talks/53/

reply

[–] zaarn link

I mostly rely on system images from Proxmox, they're fairly okayish to run even without PM.

I still need VMs for some things, especially when the kernel of the host is too old or missing modules I need or when I need to mount things via FUSE (or similar ops).

reply

[–] cabraca link

Containers keep all that junk away from my system. How many apps need Node.js or Ruby and stuff just to preprocess their CSS? Ever managed some legacy PHP app? Instead of putting that junk directly on my machine, I just put it in a container. Don't need it anymore? Cool - docker rmi that shit.

reply

[–] johnchristopher link

Yeah, well, maybe - but some Docker config files imply that you know how to configure the Postgres, Redis, node package manager, nginx and SSL chain that live in the container, by means of a YAML config file that is, in the end, another custom layer of abstraction over the original config files. Of course, some Docker-based projects are better than others.

reply

[–] vultour link

I like to keep my servers free of garbage. Installing a bajillion dependencies for every application you run kind of violates that. Bonus points for Python module management.

reply

[–] amaccuish link

Yeah, I don't use it for scale. It's great just to have a declarative file which is easy to read and understand. I guess I use it the way people use Puppet and Ansible; I just find it better for my use case. And I don't always have to rely on what's in my OS's package manager, so I get a nice stable core OS and up-to-date apps.

reply

[–] ckocagil link

Have you used a Windows program that comes with all its dependencies (e.g. DLLs)? Have you used OS-level isolation to deny some resources to some processes?

reply

[–] locusm link

I wish lxc/lxd got more love. I think a machine container makes more sense for non-scale stuff.

reply

[–] stephenr link

With 'built-in' imaging in LXD, I think a lot of the use cases LXC isn't quite as fluid for will become more realistic (i.e. just download this image and run it).

For local dev environments I still think Vagrant (and pick whatever underlying provider you want) is a better solution.

reply

[–] unilynx link

Docker is basically a reproducible lxc/lxd, and you can get very close to actually running lxc-like machines with Docker.

For me, Docker actually solves the problem that lxc/openvz were trying to solve, in a better way. We already had the data on separate volumes, and Docker is a much nicer interface than bootstrapping and then updating a chroot for lxc.

reply

[–] rawoke083600 link

I don't get Docker... I get it for things at scale, but for the everyday stuff? Maybe I'm just still old!

reply

[–] undefined link
[deleted]

reply

[–] enriquto link

What is the difference between "automated", "fully automated", and nothing? Aren't all algorithms automatic?

reply

[–] JoshuaAshton link

Or just use traefik https://traefik.io/

reply