Search for a Simple, Lightweight, and Easy-to-Use Process Manager

  • Author: Lukas Charvat
  • Published: 4/20/2024
  • Tags: #process #manager #daemon #init

Not only on our servers but also on our workstations, we often need to keep long-running processes around. Keeping them running smoothly and reliably can be a challenge, especially when some components of the ecosystem are flaky or under development. What I lack is a simple tool that would allow me to auto-start, auto-restart, monitor, and manage the lifecycle of my services/processes/daemons.

Motivation

Before going deeper, let's discuss the use case that led me to write this article. My friends recently asked me whether we could throw a retro gaming party like we did back in the nineties (mostly FPS shooters with a bunch of custom plugins and maps). The problem is that this would no longer be a co-located LAN party (since we now live scattered all across Europe). Thus, someone had to spin up one or more public game servers.

I did not want to do this at home, so I decided to sign up for an Azure cloud trial. Here, Microsoft offers a free AMD-based B2ats v2 VM for 12 months. This is great, but the machines come "only" with 512MB-1GB of RAM, which, together with the additional applications running on the system, leaves enough room for 3-4 game servers (other cloud providers like Amazon Web Services, Hostinger, or DigitalOcean offer very similar options).

Further, the customizations that we like to play are rather flaky---they seem to have memory leaks, gradually consuming all available memory and eventually crashing the game server. I did not feel comfortable leaving all those processes running in tmux or screen terminal sessions and checking on them regularly. Nor did I want to write some ad-hoc bash scripts to restart the daemons, since such scripts are usually very unstable. And so my search for a suitable process manager began.

Potential Solutions

There are quite a few popular process management solutions out there, but many of them have drawbacks that can make them less than ideal choices, especially if one considers the constraints mentioned above. Let's have a look at some options and situations when you might want to utilize them.

Systemd

No discussion of process management on Linux would be complete without mentioning systemd. This controversial (at least according to some Linux purists) but widely adopted successor of the System V init is a system and service manager that aims to provide a standard way of controlling and supervising system daemons and user processes. From a process management perspective, systemd offers very robust monitoring, logging (with native journalctl integration), and lifecycle management capabilities. While it is extremely powerful and feature-rich, systemd has been criticized by some for its complexity. Services are defined through so-called unit files that specify dependencies, environment variables, resource limits, and more. Unit files use the INI format and must follow a certain directory structure. The following snippet shows a simple example of a unit file (typically placed in the /etc/systemd/system directory).

$ cat /etc/systemd/system/hlds.service
[Unit]
Description=Counter-Strike Dedicated Server
After=network.target

[Service]
Type=exec
Restart=always
WorkingDirectory=/opt/hlds
Environment="LD_LIBRARY_PATH=/opt/hlds"
ExecStart=/opt/hlds/hlds_linux -dev 5 -game cstrike +exec server.cfg

[Install]
WantedBy=multi-user.target
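
Once the unit file is in place, the service is enabled and inspected with the standard systemctl and journalctl commands (run as root or via sudo); for example:

$ systemctl daemon-reload       # let systemd pick up the new unit file
$ systemctl enable --now hlds   # enable at boot and start it right away
$ systemctl status hlds         # check the current state and recent log lines
$ journalctl -u hlds -f         # follow the service's log output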

Even though this is the solution I ended up using (at least for the time being), systemd requires a non-trivial effort to configure and manage---especially for a "casual" user who simply wants to set up monitoring for a few services. Further, because systemd plays the role of the init process with PID 1 (at least on the most popular Linux distributions like Ubuntu or Fedora), activities associated with it usually require root privileges. In conclusion, for those already familiar with the systemd ecosystem on their Linux distribution, it can make sense to use it as an all-in-one process supervision solution. But for slimmer use cases that do not require all of systemd's capabilities (as in our case), its size and complexity can be overkill.

Runit and s6

The runit and s6 systems were built to serve as potential replacements for process 1. That means they come with capabilities beyond basic lifecycle management, such as monitoring and logging (for instance, to syslog). Many widespread Linux distros (including Debian and Ubuntu) package runit as an alternative init scheme.

Furthermore, s6, with its minimal hardware requirements, has been designed with embedded environments in mind. However, being capable of running as process 1 comes with a downside: the complexity of the setup. Both tools require service directories with a non-trivial structure described in their documentation.
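
For illustration, a runit-style service directory for our game server could look roughly as follows (locations such as /etc/sv and /var/service are common defaults and vary between distributions; s6 uses a similar, though not identical, layout):

$ tree /etc/sv/hlds
/etc/sv/hlds
├── run          # starts the game server itself
└── log
    └── run      # starts a logger (e.g., svlogd) fed by the service's stdout

$ ln -s /etc/sv/hlds /var/service/   # activating the service
$ sv status hlds                     # querying its supervisor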

In the end, I could successfully fulfill my use case using s6, but, as in the previous case, it seems to be overkill. In our use case (and, I believe, in many others), it is sufficient to just specify a working directory for the process, provide the path to its binary, and (optionally) set a few environment variables.

Daemon Tools and Perp

The daemontools package is a collection of tools for managing UNIX services. It is a pioneer of process supervision that does not directly aim to replace init. Setting up a service in daemontools means creating a directory with a run script that starts the service. Once started, daemontools makes sure that the service is restarted whenever it dies.
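
A daemontools run script is just an executable that execs the service in the foreground; assuming svscan watches the conventional /service directory, a sketch for our game server might look like this:

$ cat /service/hlds/run
#!/bin/sh
# supervise runs this script and restarts it whenever it exits.
cd /opt/hlds
exec env LD_LIBRARY_PATH=/opt/hlds \
  ./hlds_linux -dev 5 -game cstrike +exec server.cfg

$ svstat /service/hlds   # show up/down status and uptime
$ svc -d /service/hlds   # send the service down
$ svc -u /service/hlds   # bring it up again (and keep it up)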

The persistent process manager (perp) could be considered a spiritual successor of daemontools that provides modern touches such as easier configuration---for instance, everything is located in one place, and there is no need to create symlinks when a service is activated in place.

More precisely, the overall structure of the base directory upon which the perp control daemon operates takes the following form:

$ tree /opt/perp/hlds
/opt/perp/hlds
├── rc.log
└── rc.main

The perp control daemon first inspects the service subdirectory of each service (hlds in our tiny example). If the optional file named rc.log exists, perp creates a child process to run it. The main goal is to allow setting up a pipe between the stdout of the main service (started later) and the stdin of the logger. perp then invokes rc.log with the following arguments:

/opt/perp/hlds/rc.log start hlds
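
A minimal rc.log could simply append whatever arrives on its stdin to a file; the following is only an illustrative sketch (perp ships its own tinylog utility, which is the usual choice when proper log rotation is needed):

$ cat /opt/perp/hlds/rc.log
#!/bin/bash
start() {
  # The main service's stdout arrives on our stdin; just append it to a file.
  exec cat >> /var/log/hlds.log
}
reset() {
  exit 0
}
eval $1 "$@"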

After starting the logger, perp spawns another child process to run the file named rc.main. rc.main is invoked with the following arguments:

/opt/perp/hlds/rc.main start hlds

That is, the first argument is the literal string start followed by the service name (hlds in our case). The content of the rc.main script itself can then take the following form:

$ cat /opt/perp/hlds/rc.main
#!/bin/bash
exec 2>&1

start() {
  exec \
    /opt/hlds/hlds_linux -dev 5 -game cstrike +exec server.cfg
}

reset() {
  case $3 in
    'exit')
      echo "hlds exited with status $4"
      ;;
    'signal')
      echo "hlds killed by signal $5"
      ;;
    *)
      echo "hlds stopped: $3"
      ;;
  esac
  exit 0
}

eval $1 "$@"

As one can see, while both tools are quite lightweight, they are composed of several small executable binaries rather than a single program, which can make them inconvenient to deploy for inexperienced users.

PM2 Process Manager and Supervisor

PM2 is a popular choice for managing node.js applications. However, unlike the programs above, it is not meant to run as a substitute for init as process 1. pm2 is packed with features like load balancing, clustering, and easy deployment. Services can be defined using configuration files, which take the following form.

$ cat /opt/hlds/hlds.config.js
module.exports = {
  apps: [
    {
      name: "hlds",
      script: "hlds_linux",
      cwd: "/opt/hlds",
      args: "-dev 5 -game cstrike +exec server.cfg",
      env: {
        LD_LIBRARY_PATH: "/opt/hlds",
      },
    },
  ],
};
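
With this file in place, the typical pm2 workflow is to start the app from the configuration, check on it, and persist the process list; for example:

$ pm2 start /opt/hlds/hlds.config.js   # start the app described above
$ pm2 status                           # list managed processes and their memory usage
$ pm2 logs hlds                        # tail the captured stdout/stderr
$ pm2 save                             # persist the current process list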

The application can even be started and added for monitoring directly from the command line as follows:

$ pm2 start sleep -- 10

This ease of use is really what sets pm2 apart from other tools.

However, since pm2 itself is a node.js application, one can easily guess its major downside: a hefty memory footprint. The so-called God process of pm2 easily consumes over 60MB of RAM. Moreover, with the pm2-logrotate module, which (as its name suggests) takes care of log rotation, this can easily add up to 80MB. Such a large amount represents a significant overhead for VMs with less than 1GB of RAM.

From the user's point of view, supervisor is very similar to pm2. It is cross-platform, written in Python, and it also does not aim to replace process number 1. Instead, it is meant to control the processes related to a particular project. The configuration is held in INI files of the following form.

$ cat /opt/hlds/hlds.conf
[program:hlds]
directory=/opt/hlds
environment=LD_LIBRARY_PATH="/opt/hlds"
command=/opt/hlds/hlds_linux -dev 5 -game cstrike +exec server.cfg
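
Assuming the file is placed (or pulled in via the [include] section of supervisord.conf) where supervisord can find it, the service is then managed through supervisorctl:

$ supervisorctl reread         # pick up new or changed program definitions
$ supervisorctl update         # apply them (starts hlds, since autostart defaults to true)
$ supervisorctl status hlds    # check the process state
$ supervisorctl restart hlds   # restart it manually if needed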

Even though supervisor is more lightweight than pm2, it still maintains a resident memory footprint of over 8MB when idle (since it is written in an interpreted language).

Conclusion

In our exploration of process managers, we have seen they tend to fall into two broad categories:

  • Low-level init systems like systemd or runit that are designed to be run as process 1 to manage all system services. While extremely powerful, these tools are often complex and require root access.
  • User-friendly supervisors like pm2 or supervisor that provide convenient interfaces for monitoring and controlling processes. However, these often consume more memory than desired because they are implemented in interpreted languages.

Clearly, there is a divide between the capabilities and resource usage of the process management tools available today. Admins and DevOps engineers are left to choose between complex but robust system-level supervisors and more user-friendly but heavyweight solutions.

This divide points to an unmet need---a process manager that combines the best of both worlds. A tool that offers a clean, easy-to-use interface for monitoring and controlling processes, but with an emphasis on simplicity and a tiny memory footprint suitable for modern containerized environments. Such a tool could provide the convenience of user-friendly process supervision without the bloat and resource overhead. By carefully scoping features and dependencies, I believe it may be possible to create a process manager that truly delivers simple, lightweight lifecycle management.