A little more explanation of what that means from the docs:
In NGINX 1.9.11 onwards a new way of loading modules dynamically has been introduced. This means that selected modules can be loaded into NGINX at runtime based on configuration files. They can also be unloaded by editing the configuration files and reloading NGINX.
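For reference, loading a module at runtime is a one-line directive in the main context of nginx.conf (the module name below is a hypothetical example):

```nginx
# nginx.conf, main context: load the module before the events/http blocks
load_module modules/ngx_http_example_module.so;
```

Removing the load_module line and reloading nginx unloads the module again.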
Hugely important. Compiling your own Nginx has never been trivial in modern development pipelines, and until now that's been the only way to use external modules.
I don't like installing packages outside of the apt/aptitude system. Creating a custom .deb requires a repo to store our fork of the code (or the code + our configuration parameters), plus a Jenkins or similar job to build it, plus a place to store the build artifacts to make them available to our fleet of boxes, which also need to be configured to pull packages from said repository. There are many potential points of failure now. Rarely do I have package failures with my current infrastructure.
I'd rather trust the package/security team of my favorite distribution to do this for me.
When you live in the modern world of cattle not pets you optimize for things like this.
That being said our current nginx solution lives in a Docker container on top of Kubernetes so it would be fairly trivial at this stage to build our own nginx and bake it in... but I have to pick my battles.
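A sketch of what "baking it in" could look like, assuming a hypothetical third-party module source tree in ./my_module (versions, paths, and package lists here are illustrative, not a tested build):

```dockerfile
# Build stage: compile nginx with a third-party dynamic module
FROM debian:stable-slim AS build
RUN apt-get update && apt-get install -y build-essential libpcre3-dev zlib1g-dev curl
RUN curl -sO https://nginx.org/download/nginx-1.9.11.tar.gz && tar xzf nginx-1.9.11.tar.gz
COPY my_module /my_module
WORKDIR /nginx-1.9.11
RUN ./configure --add-dynamic-module=/my_module && make && make install

# Runtime stage: ship only the installed server
FROM debian:stable-slim
RUN apt-get update && apt-get install -y libpcre3 zlib1g
COPY --from=build /usr/local/nginx /usr/local/nginx
CMD ["/usr/local/nginx/sbin/nginx", "-g", "daemon off;"]
```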
Indeed - but having built an nginx module, it's hard to convince others to use it. Often, nginx users are simply using the binary packages from their Linux distribution, and if they compile their own, they'll have to remember that they have a version that's compiled from source on the system and not the packaged one.
Exactly - and then when it is time to patch nginx you can't just install a new binary and reload; you have to have documented exactly the right set of modules that were previously compiled in. Depending on the modules this can be time consuming and error prone. Not impossible, but it adds additional automation requirements.
You can reduce re-compilation time by using ccache compiler caching. I do this for my Centmin Mod LEMP stack auto installer, and re-compiling via ccache is up to 60-80% faster than without it. https://ccache.samba.org/ :)
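The simplest way to wire that up is to point `CC` at ccache before running nginx's configure script (a sketch; assumes ccache is installed and on PATH):

```shell
# Route compiles through ccache so unchanged files are served from cache on rebuild
export CC="ccache cc"
# ./configure --with-http_stub_status_module ... && make   # first build warms the cache
echo "$CC"   # confirm the compiler wrapper is set
```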
This is fantastic news. I've wanted to use the StatsD plugin for nginx for a while now at the load balancer level but didn't want to compile my own and have to continuously re-compile to get patches.
We'll take a read through the comments here and are looking forward to answering questions and receiving feedback. You may also email the NGINX development mailing list (info in the blog post).
We support nginx in our products, due to high demand for it, but we don't really like the way it has historically handled modules. Having to custom-build to do interesting things is very far away from the "plug and play" experience that our users expect and can get with Apache.
While I'm not on the bandwagon that says all Apache installations would be better on nginx (and I still run Apache on all of our servers, except nginx test machines), I do think this makes the case quite a bit more compelling, particularly for people who package and distribute the web server (OS vendors, control panel maintainers like me, container builders, etc.).
Nginx doesn't really have a module API. While lua reaches deep inside, loadable modules already have to be compiled for the specific nginx version, which means this module isn't in a different situation from any of the others.
Because bugs happen. It's likely a change that reaches into the core of the product. Also, would give a psychological boost to upgrading to 2.0, while also subtly warning that the code will be immature until a few point releases are complete.
Unfortunately, it looks like it may not be something we will be able to work with. Instead of an ABI like Apache uses, Nginx went with signature checking. To load a precompiled module, the signature has to match. This means we would have to compile many different binary modules for every configuration people might have, and for every version of Nginx. Like several dozen.
As it stands, though, the module system would be very helpful for OS packagers, because they don't have to worry about signature mismatch.
Actually, it's still a pain for package maintainers. Take a look at mod_wsgi for apache as an example, it's maintained as a completely separate package (the rpmspec and dist-git trees are totally independent from apache) - when the httpd package gets upgraded mod_wsgi doesn't have to be rebuilt at all. This is important, in fact, because rebuilding a package means updating the spec file, bumping the release, and submitting a new build to koji - not something you want to do for loadable modules every time httpd is updated.
This is going to create headaches for anyone wanting to package nginx, either they will have to compile all the modules when building nginx (creating many sub-packages) or there will have to be coordination between package maintainers to do a mass rebuild of all loadable modules every time the nginx package gets bumped. Neither situation is desirable, and I hope they release a stable ABI because until then nobody is going to waste time packaging loadable modules.
Pretty much. Like I said, technically you could just have the build for nginx also include all of the modules and just split them up into sub-packages, but that defeats the entire point of having loadable modules that can be compiled independently.
1. Don't use a million dependencies that aren't already included in the majority of distributions. As a packager, I have no desire to maintain 20 complicated libraries you decided to depend on just to include your application in the package collection. A couple is fine, and the flatter your dependency tree is the less painful it is for a package maintainer to deal with. If you must include a dependency then please keep the transitive ones under control!
2. Use a standard build/packaging system. autotools, cmake, scons, setuptools/distutils, maven, gem (read: not bundler), cpan, etc. Whatever is considered a "standard" way of building and releasing software in the language you are writing it in is acceptable, using hacked together Makefiles and build scripts is generally a quick way to make packagers hate you and not want to waste time getting your software built.
3. Don't vendor dependencies - if you do, then support the use of system-installed libraries in their place. I won't package software that insists on vendoring dependencies; I'm happy to package an extra library or two, but I don't have that option if you don't support it.
4. Please support a standard installation mechanism if at all possible. I can relocate your build artifacts to the buildroot if necessary, but then I have to maintain it as the upstream changes - and distributions may be inconsistent with each other. If you are using a standard build system this should come for practically free unless you've done some nasty hacks, in which case please reconsider the modifications you have made.
Debian packaging is pretty simple, but I find the decision to use Makefiles for debian/rules to be a little annoying. rpmspecs are a lot clearer, and rpm macros are a lot easier to intuit, since you can easily use rpmspec -e to see what they expand to, versus the black box that is the debhelper scripts sometimes.
If you are new to packaging, seriously take a look at making RPMs your first stop; rpmdev-newspec has a lot of templates for spec files that make getting started with any project that uses a standard build system super easy (automake, cmake, python, ruby, perl, etc.).
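For a project using a standard build system, the spec is mostly boilerplate - a skeleton along the lines of what rpmdev-newspec emits (every name and path here is a placeholder):

```
Name:           example
Version:        1.0
Release:        1%{?dist}
Summary:        Example autotools project
License:        MIT
URL:            https://example.com
Source0:        %{name}-%{version}.tar.gz
BuildRequires:  gcc, make

%description
Placeholder description.

%prep
%autosetup

%build
%configure
%make_build

%install
%make_install

%files
%{_bindir}/example
```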
1. As a developer, I couldn't care less about making your life easier at the expense of making mine more difficult.
3. I couldn't care less what you won't package. Especially so if it's due to prioritizing your own needs over everyone else's. Take it or leave it.
4. Distributions are already woefully inconsistent anyway. If you're going to package something, the entire burden is on you.
Distributions have had too much influence over upstream projects for way too long. Thank god people are finally starting to reject their diva-like, our-way-or-the-highway attitudes. I have been a package maintainer for both Fedora and Debian, and frankly, for things outside of the core OS/libs/compilers/tools, using deb/rpm packaging is very, very overrated. Likewise for dynamic linking.
> 1. As a developer, I couldn't care less about making your life easier at the expense of making mine more difficult.
Great, and nobody said you had to. If someone likes your application enough and they want to get it packaged, they will. Personally, however, ease of packaging is something that makes or breaks whether I "like" something - which is why, as much as I love Mumble, I'm not putting any effort into reviving its package in Fedora, because ICE is a huge pain in the butt to build and I just don't want to waste time on it.
> 3. I couldn't care less what you won't package. Especially so if it's due to prioritizing your own needs over everyone else's. Take it or leave it.
Outside of commercial distributions this is literally how everything works. I package things that are useful to me that I hope would be useful to other people, but I don't know of many package maintainers that spend time on software they personally have no use for.
> Distributions have had too much influence over upstream projects for way too long. Thank god people are finally starting to reject their diva-like, our-way-or-the-highway attitudes.
If you don't want distributions to have so much influence over packaging your software then do it yourself, nothing makes me happier than seeing .spec files or debian/ folders inside a repository of some application I'm looking at - 99% of the work is already done outside of maybe cleaning the scripts and metadata up to meet packaging standards.
> and frankly, for things outside of the core OS/libs/compilers/tools, using deb/rpm packaging is very, very overrated.
I too at one point believed this, but then I remembered how crappy package management on Windows is as a result of this philosophy. They even HAVE a package manager (MSI/Windows Installer) that nobody uses properly! And you know what, good luck hoping all your applications have kept every dependency and every transitive dependency they bundled in up to date - when you get your package included in a distribution that's done for you by the package maintainer that is supporting your package.
> Likewise for dynamic linking.
Strongly disagree. I hope you have fun downloading update bits for every application that statically links against openssl next time some stupid bug like heartbleed comes along. I still think Google made the wrong call with Go, same goes with Mozilla and Rust.
Dynamic linking in Rust is as easy as passing in a single compiler flag, which is what I expect distro package managers will do when building packages written in Rust.
Big problem is Rust's name-mangling, it makes it extremely hard to provide a consistent ABI so dynamic linking doesn't even work half the time without having to rebuild everything - last I checked, maybe it's gotten better?
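For the record, the flag in question is `-C prefer-dynamic` on rustc; via Cargo it's a crate-type setting (sketch below). The mangling/ABI caveat above still applies - Rust has no stable ABI, so a dylib only works against binaries built by the exact same compiler version:

```toml
# Cargo.toml: emit a dynamically linked library instead of the default rlib
[lib]
crate-type = ["dylib"]
```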
I would think that containers would help obviate such concerns, since it would be rather trivial to keep even a hundred images in place for an automated compilation run.
Of course, serving the output would be harder, but still not impossible.
It gives you one image per configuration, instead of one build machine, or one stored VM. It makes it easier for a single machine to iterate over dozens or hundreds of build environments with a minimum of overhead.
For example, one Ubuntu machine with Docker can create build environments for every Linux based distro (and version thereof) that exists, and iterate through all of them when it comes time to build. The setup is not trivial, though it is easier than creating a similar environment with VMs or physical machines.
Every distribution on the planet handles this with chroots for doing builds and it works out just fine without needing to do gymnastics with docker. Fedora, for example, has a farm of build hosts that are architecture specific (so there is no cross-compiling outside of bootstrapping a new build target), from there a new chroot gets created for every build and the build runs isolated within it. They've been using this same system to maintain packages for up to 7 different releases at once (counting the EPEL repositories) with no issues, I fail to see any benefit docker brings to the table here - and really, nginx should just provide a stable ABI.
Not just for them. This also helps a lot for all OS packagers, because right now they have to ship multiple custom nginx packages. As a user you either don't get what you want or you get way too much.
Plus, if there's a custom module you'd really like to have in your nginx you're practically forced into building your own because there's no way to get that module loaded into the distro nginx.
Yes. There's some overhead to dynamic modules. Yes. There's a (minuscule) performance advantage to static linking.
But having the ability to ship modules independently of the web server core is a very important usability feature which, frankly, outweighs the performance overhead for most installs.
If you're really CPU bound on that nginx instance where you need the additional modules and you removing that overhead gives you that little bit extra performance you need, then you always have the option of hard-linking them.
> There's a (minuscule) performance advantage to static linking.
Is that actually true? I would have thought networking latency overshadows any memory latency introduced in a `call foo@PLT` v. `call *(foo@GOT)`.
Are you aware of anyone who has benchmarked this? i.e. not a microbenchmark e.g. inside something like nginx?
If so, it might be worth fixing the dynamic linker: the cost difference is absolutely fixable, as it's a tradeoff for sharing pages with other processes.
> This also helps a lot for all OS packagers because right now they have to ship multiple custom nginx packages. As a user you either don't get what you want or you get way too much.
I guess it allows us to split the nginx-included modules into separate packages, but it does nothing to aid with packaging extra modules until nginx provides a stable ABI for modules to use.
Well, I've almost got the nginx 1.9.11 stack installer dynamic-module ready, with ngx_brotli fixed and the lua nginx module patched https://community.centminmod.com/posts/26128/ - just missing ngx_pagespeed compatibility and I'd be happy as a pig in mud :)
> This module is not built by default, it should be enabled with the --with-http_stub_status_module configuration parameter
I had to compile Nginx myself, because that module was absent. As you can read also on various community pages, boards, etc.
Nginx is the only server software that I have to compile myself just to get a basic status feature. It's a way to upsell people to their NGINX Plus, which offers a lot more status data in the binary.
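For anyone landing here, the directive itself is tiny once the module is compiled in (the location name and the loopback-only restriction below are just common convention):

```nginx
# Expose the basic status page on a locked-down location
location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    deny all;
}
```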
Compiling a module is still a lot more straight-forward than recompiling the whole of Nginx, which is probably the main obstacle to people using it (at least it is in my case).
https://www.nginx.com/resources/wiki/extending/converting/