The default installation is suitable for most simple and common cases, but
Polemarch is a highly configurable system. If you need something more advanced
(scalability, a dedicated DB, custom cache, logging or directories) you can
always configure Polemarch deeply by tweaking its settings.ini file.
This manual does not aim to describe every configuration option in detail,
because settings.ini has a comment for every option which makes its purpose
and possible values clear. But here is a brief overview of the most important
settings to give you the big picture: what you can actually customize in
Polemarch.
We advise you to read the Polemarch clustering overview if you want to set up a cluster of Polemarch nodes for reliability or to speed things up. It will give you an understanding of the services which are included in Polemarch and how to distribute them between the nodes to reach your goal.
Polemarch clustering overview
Polemarch actually consists of two services: web-server and worker. Polemarch
uses the worker for long-running tasks (such as ansible-playbook runs, repo
synchronizations and so on). Those services are designed as systemd services
which you can control using regular distro tools for service manipulation.
You can run more than one server with those services. In the default configuration
those services use the local file system to keep data and exchange it with each
other, but for multiple nodes they must be configured to use a shared
client-server database, a cache server and a network filesystem (for multiple
workers). All those settings are described in the appropriate sections of this
documentation. It is up to you to make sure that the configuration is identical on
every node to prevent discordant behaviour of the nodes. If you have multiple
web-servers, don't forget to set up HAProxy or something similar to balance the
load between them.
Let's assume that you want to create 2 servers with the web part of Polemarch to keep your admin panel reliable, and 4 servers with workers to prevent denial of service due to overload. Then briefly (this is more an example than a general how-to) you must do the following steps:
Install Polemarch from PyPI on every server with worker and web-server roles, following the installation instructions. We recommend installing the virtual environment in /opt/polemarch and setting its owner to the user polemarch (which needs to be created).
Set up a network filesystem (NFS, Samba, GlusterFS or Ceph), NFS for example. Mount it in the same directory on all worker-intended nodes. Write this directory in Main settings. Example:
[main]
projects_dir = /mnt/mystorage/projects
hooks_dir = /mnt/mystorage/hooks
Set up an http-balancer, HAProxy for example. Point it to the web-intended nodes.
You can set up SSL for the Polemarch endpoints in this step.
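As a sketch of such a balancer, a minimal HAProxy configuration fragment might look like this (node addresses, ports and names are hypothetical examples, not values from this manual):

```haproxy
frontend polemarch_front
    # Accept plain HTTP here; SSL can also be terminated at this point
    bind *:80
    default_backend polemarch_web

backend polemarch_web
    # Round-robin between the web-intended nodes
    balance roundrobin
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check
```

The `check` keyword makes HAProxy probe each node so a failed web server is taken out of rotation automatically.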
Create polemarch systemd service:
First, create the file /etc/systemd/system/polemarch.service with the following content:
[Unit]
Description=Polemarch Service HTTP Server
After=network.target remote-fs.target nss-lookup.target redis.service mysql.service

[Service]
Type=forking
ExecStart=/opt/polemarch/bin/polemarchctl webserver
ExecReload=/opt/polemarch/bin/polemarchctl webserver reload=/opt/polemarch/pid/web.pid
ExecStop=/opt/polemarch/bin/polemarchctl webserver stop=/opt/polemarch/pid/web.pid
PIDFile=/opt/polemarch/pid/web.pid
User=polemarch
Group=polemarch
KillSignal=SIGCONT
Restart=always
# Uncomment this if used privileged ports
# Capabilities=CAP_NET_BIND_SERVICE+ep
# AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
Note that the user and group 'polemarch' must exist in your system. If they don't exist, create them.
Reload the systemd daemon:
systemctl daemon-reload
Add polemarch.service to autoload:
systemctl enable polemarch.service
systemctl start polemarch.service
Repeat all steps on the other nodes and connect them to the same DB, cache, MQ and storage.
You don't need to migrate the database on each node. This is needed only once, when you install or update the first node in the cluster.
Don't forget to stop all Polemarch services when updating the polemarch package.
This section is for settings related to the whole of Polemarch (both worker and web). Here you can specify the verbosity level of Polemarch, which can be useful for troubleshooting problems (logging level etc.). There are also settings for changing the timezone of the whole app and the directory where Polemarch stores ansible projects cloned from repositories.
If you want to use the LDAP protocol, you should add the following settings in the [main] section:

[main]
ldap-server = ldap://server-ip-or-host:port
ldap-default-domain = domain.name
ldap-auth_format = cn=<username>,ou=your-group-name,<domain>
ldap-default-domain is an optional argument, aimed at making user authorization easier (no need to input the domain name).
ldap-auth_format is an optional argument, aimed at customizing LDAP authorization. Default value: cn=<username>,<domain>
So in this case the authorization logic will be the following:
The system checks the login:password combination in the database;
The system checks the login:password combination in LDAP:
if a domain was mentioned, it will be set during authorization (if the user enters a login without a domain name);
if authorization was successful and a user with the mentioned login exists in the database, the server creates a session for them.
debug - Enable debug mode.
allowed_hosts - Comma separated list of domains which are allowed to be served.
ldap-server - LDAP server connection.
ldap-default-domain - Default domain for auth.
timezone - Timezone of web-application.
log_level - Logging level.
projects_dir - Path where projects will be stored.
hooks_dir - Path where hook scripts stored.
executor_path - Path for polemarch-ansible wrapper binary.
enable_django_logs - Enable or disable Django logger output. Useful for debugging.
enable_user_self_remove - Enable or disable user self-removing.
auth-cache-user - Enable or disable user instance caching. It increases session performance on each request but saves the model instance in unsafe storage (the default django cache). The instance is serialized to a string using the
standard python module pickle and then encrypted with a Vigenère cipher.
Here you can change settings related to the database system which Polemarch will
use. Polemarch supports all databases supported by django. Supported out of
the box: SQLite (default choice), MySQL, Oracle, or PostgreSQL. Configuration
details can be found in the Django database documentation.
If you run Polemarch on multiple nodes (clustering), you should
use a client-server database (SQLite is not suitable) shared by all nodes.
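For example, a hedged sketch of a shared MySQL/MariaDB connection in settings.ini might look like this (host, credentials and database name are placeholders you must replace):

```ini
[database]
; Django-style database settings shared by every node in the cluster
engine = django.db.backends.mysql
name = polemarch
user = polemarch
password = polemarch_password
host = db.example.internal
port = 3306
```

Every node must point at this same server, per the clustering requirement above.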
If you use MySQL, there is a list of required settings that you should apply for the database to work correctly.
First, if you use MariaDB and have set a timezone different from "UTC", you should run the following command:
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql
Second, for correct MariaDB operation you should set the following options in settings.ini:
[database.options]
connect_timeout = 10
init_command = SET sql_mode='STRICT_TRANS_TABLES', default_storage_engine=INNODB, NAMES 'utf8', CHARACTER SET 'utf8', SESSION collation_connection = 'utf8_unicode_ci'
Finally, you should add some options to MariaDB configuration:
[client]
default-character-set=utf8
init_command = SET collation_connection = @@collation_database

[mysqld]
character-set-server=utf8
collation-server=utf8_unicode_ci
You can find more database options in Databases settings.
This section is for settings related to the cache backend used by Polemarch. Polemarch supports all cache backends that Django supports: filesystem, in-memory and memcached out of the box, and many more via additional plugins. You can find details about cache configuration in the Django caches documentation. In a clustering scenario we advise sharing the cache between nodes using a client-server cache implementation to speed up their work. We recommend using Redis in production environments.
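Assuming a Redis server shared by all nodes, a cache section might be sketched like this (the backend path follows Django's built-in Redis cache backend; the address is a placeholder):

```ini
[cache]
; Client-server cache shared between all Polemarch nodes
backend = django.core.cache.backends.redis.RedisCache
location = redis://redis.example.internal:6379/1
```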
Locks is a system that Polemarch uses to prevent damage from parallel actions working on the same thing simultaneously. It is based on the Django cache, so there is another set of the same settings as in Cache. Why a separate section for them, you may ask? Because the cache backend used for locking must provide guarantees which are not required of a usual cache: it MUST be shared between all Polemarch threads and nodes. The in-memory backend, for example, is not suitable. In a clustering setup we strongly recommend using Redis or Memcached as the backend for this purpose. The cache and locks backends can be the same, but don't forget the requirement stated above.
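Following the requirement above, the locks backend can point at the same shared Redis server, for example in a separate database index (a sketch with a placeholder address):

```ini
[locks]
; Must be shared by ALL Polemarch threads and nodes
backend = django.core.cache.backends.redis.RedisCache
location = redis://redis.example.internal:6379/2
```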
Session cache settings
Polemarch stores sessions in the database (see Database settings), but for better performance a cache-based session backend is used. It is based on the Django cache, so it takes the same set of settings as Cache. By default, these settings are taken from Cache.
Polemarch uses Celery for long-running tasks (such as ansible-playbook
runs, repo synchronizations and so on). Celery is based on the message queue concept,
so there must be some kind of message broker (RabbitMQ or similar) between the
web service and the workers running under Celery. These settings relate to this broker
and to Celery itself: broker backend, number of
worker processes per node and some settings used for troubleshooting
server-broker-worker interaction problems.
connection - Celery broker connection. Read more: Broker Settings.
concurrency - Number of Celery worker threads.
heartbeat - Interval between sending heartbeat packages, which says that connection still alive.
enable_worker - Enable or disable worker with webserver.
clone_retry_count - Retries count on project sync operation.
You can find more RPC options in Rpc settings.
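Put together, an [rpc] section using a RabbitMQ broker might look like this (the connection URL and numbers are placeholder examples, not recommendations):

```ini
[rpc]
; AMQP broker shared between the web service and all workers
connection = amqp://polemarch:polemarch_password@rabbitmq.example.internal:5672/
concurrency = 4
heartbeat = 5
enable_worker = true
clone_retry_count = 3
```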
Celery worker start-up options. Useful settings:
loglevel - Celery worker logging level. Default: from main section
pidfile - Celery worker pidfile.
autoscale - Options for autoscaling. Two comma separated numbers: max,min.
beat - Enable or disable celery beat scheduler.
Other settings can be obtained from the command
celery worker --help.
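For example, a [worker] section combining the settings above might be sketched as follows (the pidfile path and autoscale numbers are illustrative):

```ini
[worker]
loglevel = INFO
pidfile = /run/polemarch/worker.pid
; max,min worker processes for autoscaling
autoscale = 4,1
beat = true
```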
Here are settings related to the web server, such as session_timeout, static_files_url or the pagination limit.
session_timeout - Session life-cycle time.
Default: 2w (two weeks).
rest_page_limit - Default limit of objects in API list.
public_openapi - Allow public access to the OpenAPI schema.
You can find more Web options in Web settings.
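A minimal [web] section with the options above might look like this (the values are examples, not recommendations):

```ini
[web]
session_timeout = 2w
rest_page_limit = 1000
public_openapi = false
```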
Centrifugo client settings
To install the app with the centrifugo client, the
[centrifugo] section must be set.
Centrifugo is used by the application to auto-update page data.
When a user changes some data, other clients get a notification
with the model label and primary key. Without this service all GUI clients poll page data
every 5 seconds (by default). Centrifugo server v3 is supported.
address - Centrifugo server address.
api_key - API key for clients.
token_hmac_secret_key - API key for jwt-token generation.
timeout - Connection timeout.
verify - Connection verification.
These settings also add parameters to the OpenApi schema and change how the auto-update system works in the GUI.
token_hmac_secret_key is used for jwt-token generation (based on the
session expiration time). The token will be used by the Centrifugo-JS client.
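Putting the options above together, a hypothetical [centrifugo] section could look like this (the address and keys are placeholders; the real values must match your Centrifugo server configuration):

```ini
[centrifugo]
address = https://centrifugo.example.internal:8000/api
api_key = example-api-key
token_hmac_secret_key = example-hmac-secret
timeout = 10
verify = true
```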
Options for git commands. See the available options in
git fetch --help or
git clone --help.
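As an illustration only, here is a hedged sketch of such options; the per-command section names [git.fetch] and [git.clone] are assumptions about how the options are grouped, and the option names come from git's own --help output:

```ini
; Assumed section naming: one section per git sub-command
[git.fetch]
force = true

[git.clone]
depth = 1
```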
Here you can specify settings used by archive (e.g. TAR) projects.
max_content_length - Maximum download file size. Format: 30<unit>, where unit is one of b, kb, mb, gb, tb.
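As a hedged example limiting archive downloads to 30 megabytes (the section name [archive] here is an assumption for the archive settings described above):

```ini
[archive]
max_content_length = 30mb
```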
Production web settings
Here are settings related to the web server used by Polemarch in production (for deb and rpm packages by default). Most of them relate to system paths (logging, PID-file and so on).
More settings in Configuring uWSGI.
In production, it is recommended to use Centrifugo in order to reduce the load on the backend from automatic page updates.
This section contains additional information for configuring additional elements.
If you need to set up https for your web settings, you can do it using HAProxy or Nginx, or configure it in the [uwsgi] section:
[uwsgi]
https = 0.0.0.0:8443,foobar.crt,foobar.key
addrport = 127.0.0.1:8080
We strongly recommend not running the web server as root. Use an HTTP proxy to serve on privileged ports.
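As one way to follow that advice, a minimal Nginx fragment could terminate SSL on port 443 and proxy to Polemarch on an unprivileged port (certificate paths, names and ports are placeholder examples):

```nginx
server {
    listen 443 ssl;
    server_name polemarch.example.internal;
    ssl_certificate     /etc/ssl/certs/foobar.crt;
    ssl_certificate_key /etc/ssl/private/foobar.key;

    location / {
        # Polemarch web server listening on an unprivileged port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```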
If you need more options, you can find them in the Configuration manual in the official vstutils documentation.