Playout Redundancy

Introduction to Playout Redundancy

Icareus Playout can be installed as a redundant system with two to four hardware servers to ensure a high service level.
There are several redundancy architectures depending on the head-end and the desired usage. These options are described in the following chapters to give the reader a full understanding of building a redundant Icareus Playout deployment. If you have not installed Playout yet, go to the page Playout Software Installation.
Info
In this document we assume that the IP address of the main server's interface used for connecting to Playout Web Console is 192.168.1.105, and that the backup's is 192.168.1.108. For the eth2 interfaces, which are used for the Slony-based copy config functionality, the main server uses a separate interface with IP address 172.17.59.128 and the backup uses 172.17.59.129.

Approaches for Playout Redundancy

Icareus Playout enables building various redundancy architectures, such as:
  1. Manually importing and exporting configurations using Playout Management Console or Playout Web Console
  2. A 1+1 system based on replicating the database manually or automatically from the main to the backup server
  3. Playout Web Redundancy allows configuration changes to propagate immediately to all servers in a cluster
  4. Clustering Icareus Playout with separate Admin and Player machines.

Issues to consider

  1. Manual importing and exporting of configurations will require service restarts that may have to be done manually on both the main and backup servers.
  2. Transport Stream outputs are not replicated; these have to be configured separately on each server.
  3. How will datacasting content/files be updated on the servers? (One possible approach is sketched below.)
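Regarding item 3, the copy config script described later on this page copies carousel, AV and TS files to the backup during replication. If datacasting files need to be synchronized outside of copy config runs, a simple rsync job is one possible approach. This is only a sketch: the source path is an assumption and should be replaced with your actual content directory, and it relies on the passwordless root SSH that is set up in the SSH configuration section below.
  1. # Sketch: one-way sync of datacasting files from the main to the backup server
  2. # The path /opt/playout/carousels is an assumption; replace with your content directory.
  3. rsync -av --delete /opt/playout/carousels/ root@192.168.1.108:/opt/playout/carousels/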

Playout Web Redundancy

When Playout Web Console has been installed and is being used, redundancy can be set up so that when one server is updated using the web console, the other server is updated at the same time.

As noted in the Playout Software Installation document, when Playout API is installed it creates some configuration files (spoolers.json and replicate.conf in the /opt/playout/api/config directory). The installer asks for the management interface IP and writes the files.

Info
Here we assume that Playout and Playout Web Console have been installed on a main and a backup server, and that both servers have a valid license. We also assume that an admin user for Playout Web has been created on both servers and that you have verified you can log in using the Web Console on both servers.

Editing Playout Web configuration files to achieve redundancy

The Playout API installation script places the IP address mentioned above into the file /opt/playout/api/config/spoolers.json. For example, if the IP address is 192.168.1.105, the contents of the file will be:
  1. [{"name":"master","address":"192.168.1.105","type":"master"}]
Edit spoolers.json to contain both the main and the backup server, for example using the command nano /opt/playout/api/config/spoolers.json:
  1. [
  2.   {"name":"main","address":"192.168.1.105","type":"master"},
  3.   {"name":"backup","address":"192.168.1.108","type":"spooler"}
  4. ]
Note that you need a comma between the two objects and the types have to be as specified above. The names of the servers can be main and backup, or something else if you like.
Add the second server to the /opt/playout/api/config/replicate.conf file:
  1. 192.168.1.105
  2. 192.168.1.108
When you've finished editing, restart the API:
  1. systemctl restart playout-api.service
Check the service status:
  1. systemctl status playout-api.service
The output should show that the API is active (running), for example:
  1. ● playout-api.service - Icareus Playout JSON API
  2.    Loaded: loaded (/usr/lib/systemd/system/playout-api.service; enabled; vendor preset: disabled)
  3.    Active: active (running) since Wed 2025-02-05 14:23:43 EET; 55s ago
  4. ...
You can take a look in the log file to see that everything went ok:
  1. tail -10 /var/log/playout-api.log
You should see output that includes the IP addresses you've just configured, e.g.
  1. ...
  2. 2025-02-05T12:23:44.016Z info: spoolers file: /opt/playout/api/config/spoolers.json
  3. 2025-02-05T12:23:44.017Z info: spoolers: [{"name":"main","address":"192.168.1.105","type":"master"},{"name":"backup","address":"192.168.1.108","type":"spooler"}]
  4. 2025-02-05T12:23:44.018Z info: ipAddresses: ["192.168.1.105","192.168.1.108"]
  5. 2025-02-05T12:23:44.025Z info: Running in production mode.
  6. 2025-02-05T12:23:44.286Z info: Server has started on port 8081
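To pick out just the configured addresses from the log, a grep such as the following also works:
  1. grep ipAddresses /var/log/playout-api.log | tail -1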

Troubleshooting

If Playout API doesn't start after editing the configuration files, check that the JSON format of the spoolers.json is correct. You can do this for example at https://jsonformatter.curiousconcept.com/
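The JSON can also be validated locally, for example with Python's built-in json.tool module (assuming python3 is installed on the server); it prints the parsed JSON, or an error message if the file is invalid:
  1. python3 -m json.tool /opt/playout/api/config/spoolers.json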
If the format is wrong, edit spoolers.json and restart API again.
If Playout API doesn't start at all after editing the files, install Playout API again using the install-api.sh script as documented on the Playout Software Installation page.


Info
Making configuration changes on the main server using Playout Web Console now propagates the changes to the backup.
Because the backup server's type in spoolers.json is "spooler", selecting the backup server from the dropdown menu and
changing its configuration applies the changes only to the backup server.

Testing redundant updates on the main server

Log in to the main server using Playout Web Console.

At this point we want to make a configuration change on the main server and see that it propagates to the backup.

For example, in the sidebar click on DVB Services. Add a service, for example one named "Icareus 5". Click Add Service and fill in the information, for example Service Id 5, Service name Icareus 5 and Service provider name Icareus and press OK. The service should appear.

Now check that the service has also appeared on the backup machine. Log in to the web console of the backup machine as admin and check that the change you made on the main server has also appeared there, e.g. that the Icareus 5 service is listed under DVB Services.

Editing Playout Web configuration files to achieve redundancy from backup to main

For redundant updates to work from the backup to the main server (in case the configuration is edited on the backup side), we have to edit the configuration of the backup server.
In the example case where the main server has IP address 192.168.1.105 and the backup server has 192.168.1.108, we change the backup server's config files as follows.
Edit the backup's /opt/playout/api/config/spoolers.json to contain:
  1. [
  2.   {"name":"backup","address":"192.168.1.108","type":"master"},
  3.   {"name":"main","address":"192.168.1.105","type":"spooler"}
  4. ]
Edit the backup's /opt/playout/api/config/replicate.conf to contain:
  1. 192.168.1.108
  2. 192.168.1.105
Note that here the type of the backup is set to master and the type of the main server is set to spooler. This is because when the configuration is edited on the backup, the backup takes the role of master. The redundancy is thus symmetric: whichever server is being edited takes the role of master, irrespective of what the servers have been named.

After editing, restart the API on the backup, log in to the backup web console, make a change (e.g. add a service Icareus 6) and check that the change takes effect on the main server.

Notes
It is also possible to configure both the main and backup servers to act as a "master". This means that if you log in to e.g. the main
server, select the backup server from the dropdown menu in the upper left corner and configure that server, both servers are updated.
If this symmetry is desired, the spoolers.json on both servers should be configured so that both servers have type "master".
Here is an example of this kind of spoolers.json on the main server:
  1. [
  2.   {"name":"main","address":"192.168.1.105","type":"master"},
  3.   {"name":"backup","address":"192.168.1.108","type":"master"}
  4. ]
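Correspondingly, the backup server's spoolers.json would follow the same pattern; this mirror image is our assumption based on the symmetric setup described above:
  1. [
  2.   {"name":"backup","address":"192.168.1.108","type":"master"},
  3.   {"name":"main","address":"192.168.1.105","type":"master"}
  4. ]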

Manually Exporting and Importing configurations

Exporting and Importing configurations using Web Console

Export and Import are possible via the Web Console. The export generates a .zip file that contains all the necessary files. The same .zip file should be imported to the other server.
Alert
The service uses the browser's memory to transfer data. Browsers have limited memory reserved for this use (for example: Opera 500 MiB, IE 600 MiB, Firefox 800 MiB and Chrome 2 GB). If the zipped version of the transferred files exceeds this size, the download might fail.
Import and Export take place under the Server menu.

Exporting server settings:

Importing server settings:

Exporting and Importing configurations using PMC


The export/import configuration option allows saving Playout settings to a file. This file can be used to restore the settings on any Playout server.
Info
Only server configuration settings are processed using this option. EPG information is not handled.
PMC (Playout Management Console) can be used to export/import the Playout configuration.
Info
Export configuration settings
  1.  Select the Server/Settings/Export menu item
  2.  Choose a file to export the Playout configuration to and press the OK button
Info
Import configuration settings
  1. Select the Server/Settings/Import menu item
  2. Choose a Playout configuration file and press the OK button

1+1 Redundancy via Database replication

A third-party tool called Slony is used to perform database replication from the main (master) to the backup (slave) machine.

The database replication is done using a system-level script that is run on the main server and performs:
  1. Database replication from main to backup
  2. Restarting of necessary services on backup server
A powerful Slony cluster

Slony Installation

Slony should already be installed on both the main and backup servers if you have completed the Rocky 8 Operating System Configuration.

System configuration

There are two main components in the system: the main and backup machines. The following configuration files should be updated on both machines to allow Slony-I to perform replication.
Connect an Ethernet cable between the eth2 interfaces of the main and backup machines.

Interfaces configuration


NICs configuration

eth0 - data output
eth1 - data output (if needed)
eth2 - redundancy
eth3 - management (Internet connection)
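As a minimal sketch, the eth2 redundancy interfaces could be assigned their static addresses with nmcli (assuming Rocky 8 with NetworkManager; the /24 prefix length is an assumption, use the netmask of your deployment):
  1. # On the main server: static address for the eth2 redundancy link
  2. nmcli con add type ethernet ifname eth2 con-name eth2 ipv4.method manual ipv4.addresses 172.17.59.128/24
  3. nmcli con up eth2
  4. # On the backup server, use 172.17.59.129/24 instead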

Postgres host-based authentication configuration

The PostgreSQL host-based authentication configuration file /opt/playout/db/pg_hba.conf should contain permission declarations for the main and backup machines.
Edit the pg_hba.conf file on both servers, adding the IP addresses of both hosts at the bottom of the file.

  1. # TYPE  DATABASE        USER            ADDRESS                 METHOD
  2. # "local" is for Unix domain socket connections only
  3. local   all             all                                     trust
  4. # IPv4 local connections:
  5. host    all             all             127.0.0.1/32            trust
  6. # IPv6 local connections:
  7. host    all             all             ::1/128                 trust
  8. # Allow replication connections from localhost, by a user with the
  9. # replication privilege.
  10. #local   replication     postgres                                trust
  11. #host    replication     postgres        127.0.0.1/32            trust
  12. #host    replication     postgres        ::1/128                 trust
  13. host    all         all         172.17.59.128/32         trust
  14. host    all         all         172.17.59.129/32         trust

Firewall configuration

The firewall configuration should allow listening for incoming connections on port 5432. This applies to both the main and backup machines.
Port 5432 should already be open, since installing Playout runs the /opt/playout/bin/setup-firewall.sh script.
Check that 5432/tcp is listed under ports with the command:
  1. firewall-cmd --list-all
If not, run the following commands:
  1. firewall-cmd --permanent --zone=public --add-port=5432/tcp
  2. firewall-cmd --reload
  3. firewall-cmd --list-all
The following commands can also be useful to restart the firewall service and check its status:
  1. systemctl restart firewalld
  2. systemctl status firewalld


Postgresql configuration file

Edit /opt/playout/db/postgresql.conf on both servers.

The PostgreSQL configuration file should allow listening for incoming connections on port 5432.
This applies to both the main and backup machines. Edit the CONNECTIONS AND AUTHENTICATION section of the configuration file.


Info
You have to change listen_addresses from 'localhost' to '*' to allow the other machine to connect during replication.

  1. #------------------------------------------------------------------------------
  2. # CONNECTIONS AND AUTHENTICATION
  3. #------------------------------------------------------------------------------
  4. # - Connection Settings -
  5. listen_addresses = '*'                  # what IP address(es) to listen on;
  6.                                         # comma-separated list of addresses;
  7.                                         # defaults to 'localhost'; use '*' for all
  8.                                         # (change requires restart)
  9. port = 5432                             # (change requires restart)
  10. # Note: In RHEL/Fedora installations, you can't set the port number here;
  11. # adjust it in the service file instead.
  12. max_connections = 100                   # (change requires restart)
  13. # Note:  Increasing max_connections costs ~400 bytes of shared memory per
  14. # connection slot, plus lock space (see max_locks_per_transaction).
  15. superuser_reserved_connections = 3      # (change requires restart)
  16. unix_socket_directories = '/var/run/postgresql, /tmp'   # comma-separated list of directories
  17.                                         # (change requires restart)
  18. unix_socket_group = ''                  # (change requires restart)
  19. unix_socket_permissions = 0777          # begin with 0 to use octal notation
  20.                                         # (change requires restart)
  21. #bonjour = off                          # advertise server via Bonjour
  22.                                         # (change requires restart)
  23. bonjour_name = ''                       # defaults to the computer name
  24.                                         # (change requires restart)
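If you prefer to make the listen_addresses change non-interactively, a sed one-liner such as the following could work (a sketch; check the resulting file afterwards):
  1. # Set listen_addresses = '*' in the Playout PostgreSQL configuration
  2. sed -i "s/^#\?listen_addresses.*/listen_addresses = '*'/" /opt/playout/db/postgresql.conf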


Restart PostgreSQL

  1. systemctl restart postgresql
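To verify that the other machine now accepts connections, a quick test from the main server could look like this (the database name playout is taken from the slon invocation shown later on this page; adjust it if your deployment differs):
  1. # From the main server, test a remote connection to the backup's PostgreSQL
  2. psql -h 172.17.59.129 -U postgres -d playout -c 'SELECT 1;'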


SSH configuration

It is necessary to generate authentication keys for BOTH the main and backup machines to allow the replication scripts to work without entering a password. The instructions to generate the keys are as follows.

Note that with default settings, SSH without a password won't work with root rights. Edit the file /etc/ssh/sshd_config to contain
  1. PermitRootLogin yes
on both the main and backup servers before the next steps. After this, restart sshd:
  1. systemctl restart sshd

First log in on the main server as root and generate a pair of authentication keys. Do not enter a passphrase; there is no need to input any data, just press 'Enter' at every prompt.
  1. cd
  2. ssh-keygen -t rsa
Output:
  1. Generating public/private rsa key pair.
  2. Enter file in which to save the key (/root/.ssh/id_rsa):
  3. ...
Create the directory /root/.ssh on both machines:
  1. mkdir -p /root/.ssh
Append the new public key to the backup's /root/.ssh/authorized_keys by issuing the following command on the main server.
The command copies the contents of the id_rsa.pub file to the backup server's authorized_keys file. Enter the root password of the backup server when prompted.
  1. cat /root/.ssh/id_rsa.pub | ssh root@192.168.1.108 'cat >> /root/.ssh/authorized_keys'
From now on you can log in as root from the main server to the backup without a password. Test on the main server:
  1. ssh root@192.168.1.108
The login should work without entering the password of the backup server.
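As an alternative to the cat pipeline above, the ssh-copy-id helper shipped with OpenSSH performs the same append and also sets the file permissions for you:
  1. # Copies the default public key to the backup's authorized_keys
  2. ssh-copy-id root@192.168.1.108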

To enable login without a password from the backup to the main server, perform the same steps as above on the backup machine.
The commands you need to run on the backup server are:
  1. cd
  2. ssh-keygen -t rsa
  3. mkdir -p /root/.ssh
  4. cat /root/.ssh/id_rsa.pub | ssh root@192.168.1.105 'cat >> /root/.ssh/authorized_keys'
Test logging in to the main server from the backup server. You should be able to log in without entering the password.
  1. ssh root@192.168.1.105


Using Copy Config from a terminal

Copy config can be run in a terminal; it takes the main and backup host addresses as arguments.
The script uses Slony to copy the Playout configuration from the main to the backup machine.
  1. /opt/playout/bin/copy_config.sh 172.17.59.128 172.17.59.129
On failure the output might include:
  1. /opt/playout/bin/copy_config.sh: Create replication system... returned 255
  2. /opt/playout/bin/copy_config.sh: Failure, setting replication status to stopped.
You can look in the log file in the case of errors:
  1. cat /var/log/playout-copy-config.log
Other possible output might be:
  1. ./copy_config.sh: warning: no managers specified
  2. ./copy_config.sh: Checking own status...
  3. ./copy_config.sh: Checking if replication is in progress...
  4. ./copy_config.sh: Starting at Fri May 14 12:33:05 CEST 2021.
  5. ./copy_config.sh: Checking failover on slave... returned 127
  6. ./copy_config.sh: Failure, setting replication status to stopped.
In this case we see that the failover check did not return 0 (it returned 127). To fix this, run check_failover.sh on the backup machine with the main server's IP as argument:
  1. /opt/playout/bin/check_failover.sh 172.17.59.128
Output should be:
  1. ./check_failover.sh: warning: no managers specified
  2. autoupdate inactive: 1
  3. /opt/playout/bin/check_peer_status.sh: peer host is 172.17.59.128
  4. Peer active
Now we can try to run copy_config.sh again on the master:
  1. /opt/playout/bin/copy_config.sh 172.17.59.128 172.17.59.129
Output is similar to:
  1. /opt/playout/bin/copy_config.sh: Wed Feb  5 16:28:48 EET 2025: Copy config starting on 192.168.1.105
  2. /opt/playout/bin/copy_config.sh: Primary host: 192.168.1.105
  3. /opt/playout/bin/copy_config.sh: Secondary host: 192.168.1.108
  4. /opt/playout/bin/copy_config.sh: Checking if replication is in progress...
  5. /opt/playout/bin/replication_started.sh: Updating the replication status
  6. /opt/playout/bin/replication_started.sh: Updating the replication start time
  7. /opt/playout/bin/copy_config.sh: Starting at Wed Feb  5 16:28:49 EET 2025.
  8. /opt/playout/bin/copy_config.sh: Remove replication system...
  9. /opt/playout/bin/copy_config.sh: Export DSMCC config on secondary... returned 0
  10. /opt/playout/bin/copy_config.sh: Create replication system... returned 0
  11. /opt/playout/bin/copy_config.sh: Stop playout services on secondary... /opt/playout/bin/copy_config.sh: Start slon process for Primary
  12. /opt/playout/bin/copy_config.sh: Start slon process for Secondary
  13. /opt/playout/bin/copy_config.sh: Start replication... returned 0
  14. /opt/playout/bin/copy_config.sh: REPLICATION IS IN PROGRESS
  15. /opt/playout/bin/copy_config.sh: Please wait...
  16. /opt/playout/bin/copy_config.sh: Removing secondary old carousel files... returned 0
  17. /opt/playout/bin/copy_config.sh: Copying carousel files to secondary... returned 0
  18. /opt/playout/bin/copy_config.sh: Changing carousel files owner on secondary... returned 0
  19. /opt/playout/bin/copy_config.sh: Removing secondary old av files... returned 0
  20. /opt/playout/bin/copy_config.sh: Copying av files to secondary... returned 0
  21. /opt/playout/bin/copy_config.sh: Changing av files owner on secondary... returned 0
  22. /opt/playout/bin/copy_config.sh: Removing secondary old ts files... returned 0
  23. /opt/playout/bin/copy_config.sh: Copying ts files to secondary... returned 0
  24. /opt/playout/bin/copy_config.sh: Changing ts files owner on secondary... returned 0
  25. /opt/playout/bin/copy_config.sh: Replication completed
  26. /opt/playout/bin/copy_config.sh: Killall slon...
  27. /opt/playout/bin/copy_config.sh: line 304: 36722 Killed                  slon playout "dbname=playout user=postgres host=$PRIMARY_HOST" > "$PRIMARY_LOGFILE" 2>&1
  28. /opt/playout/bin/copy_config.sh: line 304: 36725 Killed                  slon playout "dbname=playout user=postgres host=$SECONDARY_HOST" > "$SECONDARY_LOGFILE" 2>&1
  29. /opt/playout/bin/copy_config.sh: Killall slon returned 0
  30. /opt/playout/bin/copy_config.sh: Remove replication system...
  31. /opt/playout/bin/copy_config.sh: Import DSMCC config on secondary... returned 0
  32. /opt/playout/bin/copy_config.sh: Restart playout services on secondary... returned 0
  33. /opt/playout/bin/copy_config.sh: Setting replication status to stopped... /opt/playout/bin/replication_stopped.sh: Updating the replication status
  34. /opt/playout/bin/replication_stopped.sh: Updating the replication stop time
  35. returned 0
  36. /opt/playout/bin/copy_config.sh: Replication finished successfully at Wed Feb  5 16:29:03 EET 2025.
Depending on the deployment, the database replication script can be modified to meet specific requirements.

The script can also be executed manually on the main server, e.g. if the backup system was down for some time while the configuration was updated on the main server and the changes should be copied to the backup server.

When using Playout Web Console (and not the legacy PMC) to make updates, and provided that Playout Web Console redundancy (see above) is enabled, it's not necessary to run copy config after every configuration change on the main server.
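If regular replication is nevertheless desired, for example as a nightly safety net, the script could be scheduled with cron. This is only a sketch using the eth2 addresses from this document; the schedule and log path are illustrative. Note that copy config restarts Playout services on the backup (see the output above), so schedule it outside critical hours.
  1. # Illustrative root crontab entry: replicate main -> backup nightly at 03:00
  2. 0 3 * * * /opt/playout/bin/copy_config.sh 172.17.59.128 172.17.59.129 >> /var/log/playout-copy-config-cron.log 2>&1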

If we want to run copy_config.sh on the backup server, replicating to the main server, we put the addresses in the reverse order, indicating that the backup acts as the primary server for the purpose of that copy config run:
  1. /opt/playout/bin/copy_config.sh 172.17.59.129 172.17.59.128

Output might be:
  1. ./copy_config.sh: warning: no managers specified
  2. ./copy_config.sh: Checking own status...
  3. ./copy_config.sh: Checking if replication is in progress...
  4. ./copy_config.sh: Skip replication: replication is already in progress
  5. ./copy_config.sh: You can reset the replication status by running ./replication_stopped.sh
  6. ./copy_config.sh: Don't reset the replication if it is still running.
  7. ./copy_config.sh: After resetting replication you can run ./copy_config.sh again.
In this case, run replication_stopped.sh and then run the copy_config.sh command given above again:

  1. /opt/playout/bin/replication_stopped.sh
  2. /opt/playout/bin/copy_config.sh 172.17.59.129 172.17.59.128

Using Copy Config from Playout Web Console

Playout Web Console uses the configuration file /opt/playout/api/config/replicate.conf to obtain the two server addresses to use for copy config.
To achieve redundancy for Playout Web updates from the main server to the backup, edit replicate.conf (e.g. using the nano editor) so that it contains, in addition to the main server's eth2 IP address, also the backup's eth2 IP address.

Edit /opt/playout/api/config/replicate.conf with e.g. nano:
  1. nano /opt/playout/api/config/replicate.conf
On the main server edit replicate.conf to contain:
  1. 172.17.59.128
  2. 172.17.59.129
On the backup server, the addresses should be reversed so that copy config treats the backup as the primary server when run from there.

On the backup server edit replicate.conf to contain:
  1. 172.17.59.129
  2. 172.17.59.128
After the changes, restart Playout API on both servers.
  1. systemctl restart playout-api.service
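Thanks to the passwordless SSH configured earlier, you can verify the files on both servers from the main server without logging in to each web console separately, for example:
  1. # Show the local and the backup's replicate.conf to check the address order
  2. cat /opt/playout/api/config/replicate.conf
  3. ssh root@192.168.1.108 cat /opt/playout/api/config/replicate.conf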
To run copy config replication in Playout Web Console, log in to the main server's web console and navigate to Monitoring > Cluster status. You should see your backup server listed there.
Click the button Synchronize all servers to run copy config from the web.



Icareus Playout Clusters

Icareus Playout supports clusters where the admin and outputting functionalities (a Player server) are separated. In this architecture, Player servers are used only to output the configuration that is managed on the Admin servers. The architecture, ports and configurations are described in the following chapter.

Example of clustering architecture



Connections drawn with green lines use the HTTP protocol; port 80 (the default port in browsers) is used for these connections.
Connections drawn with orange lines use the TCP protocol; ports 22, 2001 and 5555 are used for these connections.
Connections drawn with black lines use the UDP protocol; ports 161 and 162 are used for these connections.
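If these ports are not already open on the servers, they could be added with firewall-cmd, mirroring the firewall section above (a sketch; verify which ports each machine in your deployment actually needs):
  1. # Open the cluster ports listed above, then reload the firewall
  2. firewall-cmd --permanent --zone=public --add-port=80/tcp
  3. firewall-cmd --permanent --zone=public --add-port=22/tcp
  4. firewall-cmd --permanent --zone=public --add-port=2001/tcp
  5. firewall-cmd --permanent --zone=public --add-port=5555/tcp
  6. firewall-cmd --permanent --zone=public --add-port=161/udp
  7. firewall-cmd --permanent --zone=public --add-port=162/udp
  8. firewall-cmd --reload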




    • Related Articles
    • Updating Icareus Playout
    • Playout Software Installation
    • Playout Release Notes
    • Introduction to Icareus Playout Installation
    • Managing Playout OS Services