Nginx 1.9.11 with Dynamic Modules (nginx.org)
201 points by Nekit1234007 on Feb 9, 2016 | 59 comments


A little more explanation of what that means from the docs:

From NGINX 1.9.11 onwards, a new way of loading modules dynamically has been introduced. This means that selected modules can be loaded into NGINX at runtime based on configuration files. They can also be unloaded by editing the configuration files and reloading NGINX.

https://www.nginx.com/resources/wiki/extending/converting/
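As a concrete sketch of what that looks like (the module named here is one of the stock optional modules; the path is resolved relative to the nginx prefix):

```nginx
# nginx.conf: load_module directives go at the top level, before events/http.
# Removing the line and reloading nginx unloads the module again.
load_module modules/ngx_http_geoip_module.so;

events { }

http {
    # directives provided by the loaded module are now available here
}
```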


Hugely important. Compiling your own Nginx has never been trivial in modern development pipelines, and until now that's been the only way to use external modules.


Honest question: Why do you think it was non-trivial? What is/was complex?


I don't like installing packages outside of the apt/aptitude system. Creating a custom .deb requires a repo to store our fork of the code (or the code plus our configuration parameters), plus a Jenkins or similar job to build it, plus a place to store the build artifacts to make them available to our fleet of boxes, which also need to be configured to pull packages from said repository. There are many potential points of failure now. Rarely do I have package failures with my current infrastructure.

I'd rather trust the package/security team of my favorite distribution to do this for me.

When you live in the modern world of cattle not pets you optimize for things like this.

That being said our current nginx solution lives in a Docker container on top of Kubernetes so it would be fairly trivial at this stage to build our own nginx and bake it in... but I have to pick my battles.


I found that nginx actually "just works"; however, not all software is easy to compile, and many of us shy away from compiling for various reasons.


Compiling nginx is surprisingly easy and straightforward; a perfect example to get started with packaging for yourself.


Indeed - but having built an nginx module, it's hard to convince others to use it. Often, nginx users are simply using the binary packages from their Linux distribution, and if they compile their own, they'll have to remember that they have a version that's compiled from source on the system and not the packaged one.


Exactly, and then when it is time to patch nginx you can't just install a new binary and reload - you have to have documented exactly the right set of modules that were previously compiled in. Depending on the modules this can be time-consuming and error-prone. Not impossible, but it adds additional automation requirements.
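One mitigation, for what it's worth: the binary records its own configure flags, retrievable with `nginx -V`, so the module set can be recovered before rebuilding. A sketch (the `nginx -V` output is simulated below for illustration, and the flags shown are just examples):

```shell
# On a real host the first step would be:  nginx -V 2>&1
# Here we simulate that output so the extraction step can be shown.
v_output='nginx version: nginx/1.9.11
configure arguments: --prefix=/etc/nginx --with-http_stub_status_module --add-module=../headers-more-nginx-module'

# Extract just the flag list, ready to replay against a new source tree
# with ./configure when building the patched version.
flags=$(printf '%s\n' "$v_output" | sed -n 's/^configure arguments: //p')
echo "$flags"
```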


Compilation time is one issue. If you just wanted to add a module you would have to recompile everything. Likewise if you want to remove one.

Although one advantage of requiring compilation is the user is probably more likely to use the latest version.


You can reduce recompilation time by using ccache compiler caching. I do this for my Centmin Mod LEMP stack auto-installer, and recompiling via ccache is up to 60-80% faster than without it: https://ccache.samba.org/ :)
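A minimal sketch of how that hooks in, assuming ccache is installed (nginx's configure script accepts `--with-cc` to override the compiler):

```shell
# Route the nginx build through ccache; the first build populates the cache,
# and later rebuilds with the same flags reuse the cached object files:
#   ./configure --with-cc="ccache gcc" <your usual flags>
#   make
# The equivalent environment override, which nginx's configure also honors:
CC="ccache gcc"
echo "$CC"
```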


This is fantastic news. I've wanted to use the StatsD plugin for nginx for a while now at the load balancer level but didn't want to compile my own and have to continuously re-compile to get patches.


We've just posted a blog post "Introducing Dynamic Modules in NGINX 1.9.11." https://www.nginx.com/blog/dynamic-modules-nginx-1-9-11/

We'll take a read through the comments here and are looking forward to answering questions and receiving feedback. You may also email the NGINX development mailing list (info in the blog post).

(I work @ NGINX).


Hey :)

I've just created a separate HN submission pointing to your blog post.

I hope you don't mind.

Keep up the amazing work!


We support nginx in our products, due to high demand for it, but we don't really like the way it has historically handled modules. Having to custom-build to do interesting things is very far away from the "plug and play" experience that our users expect and can get with Apache.

While I'm not on the bandwagon that says all Apache installations would be better on nginx (and I still run Apache on all of our servers, except nginx test machines), I do think this makes the case quite a bit more compelling, particularly for people who package and distribute the web server (OS vendors, control panel maintainers like me, container builders, etc.).


If I could get mod_pagespeed, mod_security and mod_cloudflare configured without recompiling/accidentally breaking everything, I'd be so ecstatic.


Good for them.

It's worth noting that Tengine, an nginx fork by Taobao, has had a similar feature for some time:

http://tengine.taobao.org/index.html


Yes, and the only reason Taobao forked it was that nginx did not allow such a feature upstream.



Is this going to make it easier to get luajit on a stock nginx server?

I just see a single bullet point regarding "Dynamic modules", anywhere I can get more info?


Probably not. The lua module hooks into internal symbols not considered part of the module API.


Nginx doesn't really have a module API. While lua reaches deep inside, loadable modules already have to be compiled for the specific nginx version, which means this module isn't in a different situation from any of the others.


    load webserver.nlm
(a reference for the old-timers here)

Seems an odd feature for 1.9.11; why not wait for 2.0?


The 1.9 series is unstable (mainline) while 1.8 is stable. Classic odd/even unstable/stable versioning.


This is the answer, I had forgotten about this strategy.


As long as it doesn't break API compatibility, why wait?


Because bugs happen; it's likely a change that reaches into the core of the product. Shipping it in 2.0 would also give a psychological boost to upgrading, while subtly warning that the code will be immature until a few point releases are complete.


Looking good so far. I have enabled a few nginx dynamic modules, including OpenResty's set-misc, echo, and headers-more modules: https://community.centminmod.com/posts/26535/

  load_module "modules/ngx_http_image_filter_module.so";
  load_module "modules/ngx_http_headers_more_filter_module.so";
  load_module "modules/ngx_http_set_misc_module.so";
  load_module "modules/ngx_http_echo_module.so";
  load_module "modules/ngx_http_geoip_module.so";
  load_module "modules/ngx_stream_module.so";


This is great news for the pagespeed people.


Unfortunately, it looks like it may not be something we will be able to work with. Instead of an ABI like Apache uses, nginx went with signature checking: to load a precompiled module, the signature has to match. This means we would have to compile many different binary modules for every configuration people might have, and for every version of nginx. Like several dozen.
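To illustrate the mechanism (this is a simplified sketch, not nginx's actual code): the core binary and each dynamic module embed a string encoding their build-time options, and loading fails unless the strings match exactly.

```shell
# Hypothetical signature strings encoding type sizes and build options;
# any difference in configure flags or nginx version changes the string.
core_sig="8,4,8,0011"      # what the running nginx binary was built with
module_sig="8,4,8,0011"    # what the module was built with

if [ "$core_sig" = "$module_sig" ]; then
    echo "compatible"
else
    echo "module is not binary compatible" >&2
    exit 1
fi
```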

As it stands, though, the module system would be very helpful for OS packagers, because they don't have to worry about signature mismatch.


Actually, it's still a pain for package maintainers. Take a look at mod_wsgi for Apache as an example: it's maintained as a completely separate package (the rpmspec and dist-git trees are totally independent from apache), and when the httpd package gets upgraded mod_wsgi doesn't have to be rebuilt at all. This is important, in fact, because rebuilding a package means updating the spec file, bumping the release, and submitting a new build to koji - not something you want to do for loadable modules every time httpd is updated.

This is going to create headaches for anyone wanting to package nginx, either they will have to compile all the modules when building nginx (creating many sub-packages) or there will have to be coordination between package maintainers to do a mass rebuild of all loadable modules every time the nginx package gets bumped. Neither situation is desirable, and I hope they release a stable ABI because until then nobody is going to waste time packaging loadable modules.


Wow. That basically means their current module scheme is useless for package maintainers.


Pretty much. Like I said, technically you could just have the build for nginx also include all of the modules and just split them up into sub-packages, but that defeats the entire point of having loadable modules that can be compiled independently.


This is off topic, but I think there should be more tutorials for developers along the lines of "how to make life easier for packagers".

Sidenote: creating Debian packages is an intimidating task.


1. Don't use a million dependencies that aren't already included in the majority of distributions. As a packager, I have no desire to maintain 20 complicated libraries you decided to depend on just to include your application in the package collection. A couple is fine, and the flatter your dependency tree is the less painful it is for a package maintainer to deal with. If you must include a dependency then please keep the transitive ones under control!

2. Use a standard build/packaging system: autotools, cmake, scons, setuptools/distutils, maven, gem (read: not bundler), cpan, etc. Whatever is considered a "standard" way of building and releasing software in the language you are writing it in is acceptable; using hacked-together Makefiles and build scripts is generally a quick way to make packagers hate you and not want to waste time getting your software built.

3. Don't vendor dependencies - and if you do, then support the use of system-installed libraries in their place. I won't package software that insists on vendored dependencies; I'm happy to package an extra library or two, but I don't have that option if you don't support it.

4. Please support a standard installation mechanism if at all possible. I can relocate your build artifacts to the buildroot if necessary, but then I have to maintain it as the upstream changes - and distributions may be inconsistent with each other. If you are using a standard build system this should come for practically free unless you've done some nasty hacks, in which case please reconsider the modifications you have made.

Debian packaging is pretty simple, but I find the decision to use Makefiles for debian/rules to be a little annoying. rpmspecs are a lot clearer, and rpm macros are a lot easier to intuit since you can easily use rpmspec -e to see what they expand to, versus the black box that is the debhelper scripts sometimes.

If you are new to packaging, seriously take a look at making RPMs your first stop; rpmdev-newspec has a lot of templates for spec files that make getting started with any project that uses a standard build system super easy (automake, cmake, python, ruby, perl, etc).


1. As a developer, I couldn't care less about making your life easier at the expense of making mine more difficult.

3. I couldn't care less what you won't package. Especially so if it's due to prioritizing your own needs over everyone else's. Take it or leave it.

4. Distributions are already woefully inconsistent anyway. If you're going to package something, the entire burden is on you.

Distributions have had too much influence over upstream projects for way too long. Thank god people are finally starting to reject their diva-like, our-way-or-the-highway attitudes. I have been a package maintainer for both Fedora and Debian, and frankly, for things outside of the core OS/libs/compilers/tools, using deb/rpm packaging is very, very overrated. Likewise for dynamic linking.


> 1. As a developer, I couldn't care less about making your life easier at the expense of making mine more difficult.

Great, and nobody said you had to. If someone likes your application enough and they want to get it packaged they will - personally, however, ease of packaging is something that makes or breaks whether I "like" something - which is why as much as I love Mumble I'm not putting any effort in to revive its package in Fedora, because ICE is a huge pain in the butt to build and I just don't want to waste time on it.

> 3. I couldn't care less what you won't package. Especially so if it's due to prioritizing your own needs over everyone else's. Take it or leave it.

Outside of commercial distributions this is literally how everything works. I package things that are useful to me that I hope would be useful to other people, but I don't know of many package maintainers that spend time on software they personally have no use for.

> Distributions have had too much influence over upstream projects for way too long. Thank god people are finally starting to reject their diva-like, our-way-or-the-highway attitudes.

If you don't want distributions to have so much influence over packaging your software then do it yourself, nothing makes me happier than seeing .spec files or debian/ folders inside a repository of some application I'm looking at - 99% of the work is already done outside of maybe cleaning the scripts and metadata up to meet packaging standards.

> and frankly, for things outside of the core OS/libs/compilers/tools, using deb/rpm packaging is very, very overrated.

I too at one point believed this, but then I remembered how crappy package management on Windows is as a result of this philosophy. They even HAVE a package manager (MSI/Windows Installer) that nobody uses properly! And you know what, good luck hoping all your applications have kept every dependency and every transitive dependency they bundled in up to date - when you get your package included in a distribution that's done for you by the package maintainer that is supporting your package.

> Likewise for dynamic linking.

Strongly disagree. I hope you have fun downloading update bits for every application that statically links against openssl next time some stupid bug like heartbleed comes along. I still think Google made the wrong call with Go, same goes with Mozilla and Rust.


  > same goes with Mozilla and Rust.
Dynamic linking in Rust is as easy as passing in a single compiler flag, which is what I expect distro package managers will do when building packages written in Rust.


The big problem is Rust's name mangling: it makes it extremely hard to provide a consistent ABI, so dynamic linking often doesn't work without having to rebuild everything. Last I checked, anyway - maybe it's gotten better?


That's not a problem if you're dynamically linking to a C library or at least one that provides a C ABI.


So does nginx still have to know about all of the modules at compile-time? Or can I compile nginx, then later compile a module for that nginx?


That's fine, and is something you'd be able to do with PageSpeed.

(But I'd really love a way to distribute binary modules anyone could just use, with no additional compilation.)


I would think that containers would help obviate such concerns, since it would be rather trivial to keep even a hundred images in place for an automated compilation run.

Of course, serving the output would be harder, but still not impossible.


Generating all those binaries would still be a pain, but you're right that containers could help.

We'd also need to write a script that looked at your nginx install and figured out what would be the right precompiled module to download.


How do containers help? Building a hundred different nginx binaries should be neither harder nor easier whether you stuff the binary in a container or not.


It gives you one image per configuration, instead of one build machine, or one stored VM. It makes it easier for a single machine to iterate over dozens or hundreds of build environments with a minimum of overhead.

For example, one Ubuntu machine with Docker can create build environments for every Linux based distro (and version thereof) that exists, and iterate through all of them when it comes time to build. The setup is not trivial, though it is easier than creating a similar environment with VMs or physical machines.
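As a sketch under those assumptions (image tag, package list, and module path are all illustrative):

```dockerfile
# One such Dockerfile per target distro/version; the host just iterates
# over them at build time.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y build-essential libpcre3-dev zlib1g-dev
COPY nginx-1.9.11/ /src/nginx/
COPY ngx_some_module/ /src/ngx_some_module/
WORKDIR /src/nginx
# Build only the dynamic module (.so) for this environment, not the full server
RUN ./configure --add-dynamic-module=/src/ngx_some_module && make modules
```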


Every distribution on the planet handles this with chroots for doing builds and it works out just fine without needing to do gymnastics with docker. Fedora, for example, has a farm of build hosts that are architecture specific (so there is no cross-compiling outside of bootstrapping a new build target), from there a new chroot gets created for every build and the build runs isolated within it. They've been using this same system to maintain packages for up to 7 different releases at once (counting the EPEL repositories) with no issues, I fail to see any benefit docker brings to the table here - and really, nginx should just provide a stable ABI.


Not just for them. This also helps a lot for all OS packagers, because right now they have to ship multiple custom nginx packages. As a user you either don't get what you want or you get way too much.

Plus, if there's a custom module you'd really like to have in your nginx you're practically forced into building your own because there's no way to get that module loaded into the distro nginx.

Yes. There's some overhead to dynamic modules. Yes. There's a (minuscule) performance advantage to static linking.

But having the ability to ship modules independently of the web server core is a very important usability feature which, frankly, outweighs the performance overhead for most installs.

If you're really CPU-bound on that nginx instance where you need the additional modules, and removing that overhead gives you that little bit of extra performance you need, then you always have the option of statically linking them.


> There's a (minuscule) performance advantage to static linking.

Is that actually true? I would have thought networking latency overshadows any memory latency introduced in a `call foo@PLT` v. `call *(foo@GOT)`.

Are you aware of anyone who has benchmarked this? i.e. not a microbenchmark e.g. inside something like nginx?

If so, it might be worth fixing the dynamic linker: the cost difference is absolutely fixable, as it's a tradeoff for sharing pages with other processes.


> This also helps a lot for all OS packagers because right now they have to ship multiple custom nginx packages. As a user you either don't get what you want or you get way too much.

I guess it allows us to split the nginx-included modules into separate packages, but it does nothing to aid with packaging extra modules until nginx provides a stable ABI for modules to use.


Well, got the nginx 1.9.11 stack installer almost dynamic-module ready, with ngx_brotli fixed and the lua nginx module patched (https://community.centminmod.com/posts/26128/). Just missing ngx_pagespeed compatibility and I'd be happy as a pig in mud :)


Is this going to make it easier to enable stub_status module without compiling Nginx? (or buying NginxPlus)

http://nginx.org/en/docs/http/ngx_http_stub_status_module.ht...


What problems are you having using stub_status?

I'm using the nginx mainline PPA and didn't have a problem enabling it.
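For reference, with a binary that includes the module, turning it on is just a location block (addresses here are illustrative):

```nginx
server {
    listen 127.0.0.1:8080;

    location /nginx_status {
        stub_status on;    # plain "stub_status;" also works on 1.7.5+
        allow 127.0.0.1;   # keep the status page private
        deny all;
    }
}
```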


  This module is not built by default, it should be enabled 
  with the --with-http_stub_status_module configuration 
  parameter
I had to compile Nginx myself, because that module was absent, as you can also read on various community pages, boards, etc.

Nginx is the only server software that I have to compile myself just to get a basic status feature. It's a way to upsell people to their NGINX Plus, which offers a lot more status data in the binary.


http://nginx.org/en/linux_packages.html#arguments

  Configure arguments common for nginx binaries
  from pre-built packages for stable version: 
  ...
  --with-http_stub_status_module
  ...


Will dynamic modules help to enable mod_pagespeed easily?



Compiling a module is still a lot more straight-forward than recompiling the whole of Nginx, which is probably the main obstacle to people using it (at least it is in my case).


Cool. We're working now to make this possible.

(And, in fact, to work at all when compiling against 1.9.11, since the addition of dynamic modules broke us: https://github.com/pagespeed/ngx_pagespeed/issues/1110 )


Piotr is working his magic for ngx_pagespeed 1.10 (https://github.com/pagespeed/ngx_pagespeed/issues/1114); just need a patch for the 1.9 branch too.



