AzuraCast – Simple, open-source self-hosted web radio

For many years now I have hosted internet radio and internet radio events using a very simple stack of just IceCast + EZStream, tied together with a few simple scripts and cron jobs. This has always worked incredibly reliably and kept these setups minimal, with fewer parts in the chain to potentially have an issue or drive up resource usage. The negative has always been a lack of flexibility. In the past I looked into software such as Centova Cast, however that comes at a cost and isn’t open source – and I will generally opt for open-source solutions where possible.

Introducing AzuraCast! I have been experimenting with AzuraCast for the last few months and it certainly shows a lot of promise. Rather than repeating specifics about it myself – here is a copy-paste directly from their GitHub.

AzuraCast is a self-hosted, all-in-one web radio management suite. Using its easy installer and powerful but intuitive web interface, you can start up a fully working web radio station in a few quick minutes.

Features

For Radio Stations

  • Rich Media Management: Upload songs, edit metadata, preview songs and organize music into folders from your browser.
  • Playlists: Add music to standard-rotation playlists (in sequential or shuffled playback order), schedule a playlist to play at specific times, or play it once per x songs/minutes/etc.
  • Live DJs: Set up individual DJ/streamer accounts and see who’s currently streaming from your station’s profile page.
  • Web DJ: Broadcast live directly from your browser, with no extra software needed, with AzuraCast’s built-in Web DJ tool.
  • Public Pages: AzuraCast includes embeddable public pages that you can integrate into your existing web page or use as the basis for your own customized player.
  • Listener Requests: Let your listeners request specific songs from your playlists, both via an API and a simple public-facing listener page.
  • Remote Relays: Broadcast your radio signal (including live DJs) to any remote server running Icecast or SHOUTcast.
  • Web Hooks: Integrate your station with Slack, Discord, TuneIn, Twitter and more by setting up web hooks that connect to third-party services.
  • Detailed Analytics and Reports: Keep track of every aspect of your station’s listeners over time. View reports of each song’s impact on your listener count. You can also generate a report that’s compatible with SoundExchange for US web radio royalties.

For Server Administrators

  • Role-based User Management: Assign global and per-station permissions to a role, then add users to those roles to control access.
  • Custom Branding: Modify every aspect of both the internal and public-facing AzuraCast pages by supplying your own custom CSS and JavaScript.
  • Authenticated RESTful API: Individual users in the system can create API keys which have the same permissions they have in the system. The AzuraCast API is a powerful and well-documented tool for interacting with installations.
  • Web Log Viewing: Quickly diagnose problems affecting any part of the AzuraCast system through the system-wide web log viewer.
  • Automatic Radio Proxies: Many users can’t connect directly to radio station ports (e.g. 8000) by default, so AzuraCast includes an automatic nginx proxy that lets listeners connect via the http (80) and https (443) ports. These proxies are also compatible with services like CloudFlare.

AzuraCast is still in beta, however the developers seem very active on the project and updates are regular. This is definitely a long-awaited alternative to the commercial/paid software solutions for managing internet radio. I’m very excited to watch how this project develops. For more information, check out their website over at – https://www.azuracast.com/

Installing Rocket.Chat on Ubuntu Xenial 16.04 via Snap

This is a simple tutorial to get Rocket.Chat running on an Ubuntu Xenial 16.04 server (you’ll likely be perfectly fine running through the same process on a different Ubuntu version such as 18.04 if you’d prefer). In this case we’re installing on a fresh server, installing Rocket.Chat as a Snap and using Caddy as a reverse proxy. Caddy will also deal with issuing SSL certificates via Let’s Encrypt. With this you’ll be able to get Rocket.Chat up and running within ~10 minutes; from there you can go on to make further server configuration changes for security and so on, as well as configure Rocket.Chat in more depth – which won’t be covered within the scope of this tutorial.


Let’s first start with some updates.

apt-get update
apt-get upgrade


Basic UFW setup

Let’s setup a basic firewall using UFW. First install UFW if it’s not installed –

apt-get install ufw


Setup the default access rules –

ufw default deny incoming
ufw default allow outgoing


Setup the firewall rules that we’ll want –

ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp


Enable the firewall –

ufw enable


You can check the status of ufw with –

ufw status


If you add or remove rules you should reload ufw with –

ufw reload


If you need to disable ufw you can do so with –

ufw disable


Install Fail2Ban

apt-get install fail2ban


Install Rocket.Chat as a Snap

Install Snap if it’s not already installed –

apt-get install snapd


Install Rocket.Chat –

snap install rocketchat-server


At this point the Rocket.Chat service will have automatically started; you can check that it’s running with –

service snap.rocketchat-server.rocketchat-server status


Configure Caddy and SSL

Initial configuration –

snap set rocketchat-server caddy-url=https://<your-domain-name>
snap set rocketchat-server caddy=enable
snap set rocketchat-server https=enable
rocketchat-server.initcaddy


Assuming you didn’t have any errors, restart Rocket.Chat and Caddy –

systemctl restart snap.rocketchat-server.rocketchat-server.service
systemctl restart snap.rocketchat-server.rocketchat-caddy.service


You can check Caddy’s logs with the following command

journalctl -r | grep caddy | less


Redirect HTTP to HTTPS

Redirecting HTTP to HTTPS is handled in the Caddy configuration by omitting the http:// or https:// prefix from the site address. For instance, you should have something like this inside /var/snap/rocketchat-server/current/Caddyfile –

your-domain-name.com {
  proxy / localhost:3000 {
    websocket
    transparent
  }
}


Restart Caddy once again after saving your changes –

systemctl restart snap.rocketchat-server.rocketchat-caddy


Onto Rocket.Chat itself!

At this point you’ll have a working Rocket.Chat installation. You can browse to https://yourserver.com and you should be presented with the Setup Wizard screen to create the first user, who will be the Admin by default.

Once logged in, you may get a pop-up stating something along the lines of – The setting Site URL is configured to http://localhost and you are accessing from https://yourserver.com - Do you want to change to https://yourserver.com ? – You’ll want to click YES.

At this stage you’ll want to setup Rocket.Chat itself, so please refer to their documentation here – https://rocket.chat/docs


~Extra~

You can install a Discord style dark theme using this here! https://github.com/0x0049/Rocket.Chat.Dark


Backup and restore or migrate a Snap based installation of Rocket.Chat

This is a simple tutorial to backup and restore, or backup and migrate a Snap based installation of Rocket.Chat.


Stop the Rocket.Chat server

First you’ll need to stop the Rocket.Chat server.

service snap.rocketchat-server.rocketchat-server stop

Note that we’re only stopping the rocketchat-server service, not the MongoDB service, which should still be running. Check with –

service snap.rocketchat-server.rocketchat-mongo status | grep Active
Active: active (running) (…)


Create a backup.

snap run rocketchat-server.backupdb

You should see output similar to this –

[+] A backup of your data can be found at /var/snap/rocketchat-server/common/backup/rocketchat_backup_<timestamp>.tar.gz

Download that backup file (over SFTP, for instance), or transfer it to the server you’re migrating your Rocket.Chat installation to.
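If you script this step, a small sketch like the following picks out the newest archive so it can be copied off-server. The BACKUP_DIR override is purely for illustration; the real path is the one shown in the backup output above.

```shell
#!/bin/sh
# Sketch: locate the newest Rocket.Chat snap backup archive so it can be
# copied off-server (e.g. with scp). BACKUP_DIR defaults to the snap's
# backup path; override it only when testing elsewhere.
BACKUP_DIR="${BACKUP_DIR:-/var/snap/rocketchat-server/common/backup}"
newest=$(ls -t "$BACKUP_DIR"/rocketchat_backup_*.tar.gz 2>/dev/null | head -n 1)
if [ -n "$newest" ]; then
  echo "Newest backup: $newest"
else
  echo "No backups found in $BACKUP_DIR" >&2
fi
```

From there, something like scp "$newest" user@newserver:/var/snap/rocketchat-server/common/ would move it to the destination server.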

Your Rocket.Chat server will still be stopped at this point, so if you just wanted to create a backup for your existing installation, you can start the server back up with –

service snap.rocketchat-server.rocketchat-server start


Migrate (or restore) from backup

Now, if we’re going to migrate our Rocket.Chat installation, you’ll want to have already installed Rocket.Chat as a Snap on the server we’re migrating to. Once done, upload the *.tar.gz backup file from earlier to /var/snap/rocketchat-server/common/ on the destination server.

Once again, stop the rocketchat-server service, but not the MongoDB service –

service snap.rocketchat-server.rocketchat-server stop

service snap.rocketchat-server.rocketchat-mongo status | grep Active
Active: active (running) (…)

Restore using the *.tar.gz backup that we created –

snap run rocketchat-server.restoredb /var/snap/rocketchat-server/common/rocketchat_backup_<timestamp>.tar.gz

*** ATTENTION ***
* Your current database WILL BE DROPPED prior to the restore!
* Would you like to make a backup of the current database before proceeding?
* (y/n/Q)>

Y

[*] Extracting backup file...
[*] Restoring data...
[*] Preparing database...
[+] Restore completed! Please restart the snap.rocketchat services to verify.

Start the Rocket.Chat server at this point, and your installation will now be running based on the Rocket.Chat Snap backup that was performed!

service snap.rocketchat-server.rocketchat-server start

Installing Rocket.Chat on Ubuntu Xenial 16.04 via Snap with an NGINX reverse proxy

Please note that Rocket.Chat Snaps now come with Caddy to deal with the reverse proxy and free SSL certificates via Let’s Encrypt – so you may wish to refer to my newer post here. NGINX won’t have websockets configured if you use this guide – which are required if you intend to use the mobile Rocket.Chat apps.

This is a simple tutorial to get Rocket.Chat running on an Ubuntu Xenial 16.04 server (you’ll likely be perfectly fine running through the same process on a different Ubuntu version such as 18.04 if you’d prefer). In this case we’re installing on a fresh server, installing Rocket.Chat as a Snap and using NGINX as a reverse proxy, as well as setting up an SSL certificate via Let’s Encrypt. With this you’ll be able to get Rocket.Chat up and running within ~10 minutes; from there you can go on to make further server configuration changes for security and so on, as well as configure Rocket.Chat in more depth – which won’t be covered within the scope of this tutorial.


Let’s first start with some updates.

apt-get update
apt-get upgrade


Basic UFW setup

Let’s setup a basic firewall using UFW. First install UFW if it’s not installed –

apt-get install ufw


Setup the default access rules –

ufw default deny incoming
ufw default allow outgoing


Setup the firewall rules that we’ll want –

ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp


Enable the firewall –

ufw enable


You can check the status of ufw with –

ufw status


If you add or remove rules you should reload ufw with –

ufw reload


If you need to disable ufw you can do so with –

ufw disable


Install Fail2Ban

apt-get install fail2ban


Install Rocket.Chat as a Snap

Install Snap if it’s not already installed –

apt-get install snapd


Install Rocket.Chat –

snap install rocketchat-server


At this point the Rocket.Chat service will have automatically started; you can check that it’s running with –

service snap.rocketchat-server.rocketchat-server status


Install and configure NGINX to use as a reverse proxy + SSL setup

Install NGINX –

apt install nginx
systemctl start nginx
systemctl enable nginx


Remove the default NGINX site –

rm /etc/nginx/sites-enabled/default


Create the NGINX config for Rocket.Chat

vim /etc/nginx/sites-available/rocketchat.conf


Once inside vim, you should have the following (edit “yourserver.com” to be your actual domain that you’re going to use for this server) –

server {
    listen 80;

    server_name yourserver.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
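As noted at the top of this post, this minimal proxy config doesn’t handle websockets, which the mobile Rocket.Chat apps require. For reference, a websocket-capable version of the same config would add the standard NGINX upgrade headers – along these lines (a sketch, not tested within this guide):

```nginx
server {
    listen 80;

    server_name yourserver.com;

    location / {
        proxy_pass http://localhost:3000/;
        # Pass websocket upgrade requests through to Rocket.Chat
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```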


Enable the new configuration by symlinking it into /etc/nginx/sites-enabled/ –

ln -s /etc/nginx/sites-available/rocketchat.conf /etc/nginx/sites-enabled/


Test the configuration –

nginx -t


Assuming no errors are reported, reload the NGINX config with –

nginx -s reload


SSL Setup using Let’s Encrypt + Certbot

Install Certbot and run it –

apt-get install software-properties-common
add-apt-repository ppa:certbot/certbot
apt-get update
apt-get install python-certbot-nginx
certbot --nginx


Follow the prompts on screen and you should be issued a valid Let’s Encrypt SSL certificate. Make sure you choose to force an HTTPS redirect when prompted.

Certbot will automatically deal with SSL certificate renewal; a cron job will automatically be created under /etc/cron.d/certbot. You can test the renewal process as a dry run via –

certbot renew --dry-run


Certbot will have updated the NGINX configuration; test that the config is still valid with –

nginx -t


Assuming no errors are reported, reload the NGINX config with –

nginx -s reload


Onto Rocket.Chat itself!

At this point you’ll have a working Rocket.Chat installation. You can browse to https://yourserver.com and you should be presented with the Setup Wizard screen to create the first user, who will be the Admin by default.

Once logged in, you should get a pop-up stating something along the lines of – The setting Site URL is configured to http://localhost and you are accessing from https://yourserver.com - Do you want to change to https://yourserver.com ? – You’ll want to click YES.

At this stage you’ll want to setup Rocket.Chat itself, so please refer to their documentation here – https://rocket.chat/docs


~Extra~

You can install a Discord style dark theme using this here! https://github.com/0x0049/Rocket.Chat.Dark


FreeNAS CPU core and disk temperature check script

This is a nice simple script to display CPU core and disk temperatures on a FreeNAS server. Tested with FreeNAS 9.10.2-U1.


Create a file with the following content – Mine is just called “temps_check_script.sh” as an example.

#!/bin/sh

# Write some general information
echo "System Temperatures - $(date)"
uptime | awk '{ print "\nSystem Load:",$10,$11,$12,"\n" }'

# Write CPU temperatures
echo "CPU Temperature:"
sysctl -a | grep -E "cpu\.[0-9]+\.temp"

# Write HDD temperatures and status
echo "HDD Temperature:"
for i in $(sysctl -n kern.disks | awk '{for (i=NF; i!=0; i--) if (match($i, "da")) print $i}'); do
  echo "$i: $(smartctl -a -n standby /dev/$i | awk '/Temperature_Celsius/{DevTemp=$10;} /Serial Number:/{DevSerNum=$3}; /Device Model:/{DevVendor=$3; DevName=$4} END {printf "%s C - %s %s (%s)", DevTemp, DevVendor, DevName, DevSerNum}')"
done


Make the script executable with chmod +x temps_check_script.sh, then run it with ./temps_check_script.sh – the output looks like this:

System Temperatures - Wed Jun 21 07:53:16 AEST 2017

System Load: 0.17

CPU Temperature:
dev.cpu.3.temperature: 40.0C
dev.cpu.2.temperature: 44.0C
dev.cpu.1.temperature: 40.0C
dev.cpu.0.temperature: 42.0C

HDD Temperature:
ada0: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada1: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada2: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada3: 28 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada4: 28 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada5: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
da0: 32 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da1: 31 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da2: 29 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da3: 30 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da4: 31 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da5: 30 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
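The awk program at the end of the smartctl pipeline (reassembled onto one line here) is what produces those per-disk lines. You can see exactly what it extracts by feeding it a canned smartctl-style excerpt – the model, serial and temperature below are made up:

```shell
# Canned smartctl-style output piped through the script's awk program.
# It picks up the model ($3/$4 of the "Device Model:" line), the serial
# ($3 of "Serial Number:") and the temperature ($10 of the SMART
# attribute line), then prints them in the script's output format.
printf '%s\n' \
  'Device Model:     TOSHIBA MD04ACA400' \
  'Serial Number:    SERIAL123' \
  '194 Temperature_Celsius 0x0002 100 100 000 Old_age Always - 30' |
awk '/Temperature_Celsius/{DevTemp=$10;} /Serial Number:/{DevSerNum=$3}; /Device Model:/{DevVendor=$3; DevName=$4} END {printf "%s C - %s %s (%s)\n", DevTemp, DevVendor, DevName, DevSerNum}'
```

This prints 30 C - TOSHIBA MD04ACA400 (SERIAL123).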

4RU Rosewill RSV-L4500 ~22TB usable storage FreeNAS Build

This is my current storage/labbing box, which originally started in a Fractal R3 Define case, and had slowly been upgraded over the years.

  • Rosewill RSV-L4500 Chassis
  • Seasonic S12II 620W Power Supply
  • Front fans – 3x Corsair SP120 Performance Edition ~2350 RPM (these are way louder than the stock fans!)
  • Center bracket fans – 3x Corsair SP120 Quiet Edition ~1450 RPM
  • Rear fans – 2x stock 80mm fans that come with the case
  • CPU Fan – Noctua NH-L9i
  • Motherboard – ASUS P8B-X
  • CPU – Xeon E3-1220 – 3.1GHz (LGA1155)
  • RAM – 2x – Kingston Technology ValueRAM 8GB 1333MHz DDR3 ECC CL9 DIMM (16GB total)
  • LSI 9220-8i in IT Mode + motherboard SATA for disks
  • 6x 4TB Toshiba MD04ACA400 in RAIDZ2
  • 6x 2TB Hitachi (5200rpm model) in RAIDZ2

It’s pretty damn messy to cable when you have no backplane!


The front and rear fans are powered using chained Molex adaptors, which is certainly messy. The center bracket fans are plugged straight into the motherboard. I’m considering one of those fan controllers with something like 8x 3-pin fan headers powered by a single Molex, which could then either float in the case or be mounted somewhere. I’m not sure yet if I should replace the rear 80mm fans or not. There is a single empty fan header near that end of the motherboard; running one off that would be neater than the Molex.

It took quite some time to choose a power supply for this build. This PSU was chosen based on how I could distribute the rail amperage for the disks. You’ll find it’s not really possible to use chained SATA power due to how close together the drives sit when bending the cable. A bunch of SATA power and SATA>Molex splitters have been used, with everything distributed so that no single cable carries too many drives and no rail is overloaded.

Ambient temperatures are pretty high here. I’ve yet to do a full load/temp test on the CPU since changing all of the cooling from stock – previously, when the ambient temp was ~27c, the CPU at ~90% load was hitting ~65c on all cores – not great considering the Intel rated max for this CPU is apparently 69c.

I think if you live somewhere with an ambient temperature that sits around 20c, you’d be totally fine using all of the stock cooling for this case.

Schedule commands in LFTP using “at”

You can schedule downloads and uploads in LFTP in a simple manner by using its “at” command, as the examples below show.


Mirror/download a remote folder two hours from the current time, using 10 connections + segmentation

at now + 2 hours -- queue mirror --use-pget-n=10 yourdirectory/


Mirror/download a remote folder at 1AM tomorrow, using 10 connections + segmentation

at 1:00 tomorrow -- queue mirror --use-pget-n=10 yourdirectory/

Segmented SFTP downloading using LFTP

If you’ve ever tried saturating a fast connection using FTP/SFTP you may have run into problems where you can only achieve limited download speeds using a single thread. Segmented downloading can often be a solution. Bear in mind that segmented FTP/SFTP will open many sessions to the server you are connecting to. Depending on the situation this might not be ideal, however if you’re sure you have sufficient resources (without generating too much load if your server is within a shared environment), then it can work very well.


You’ll need to install lftp – I run it on a Raspberry Pi.

sudo apt-get install lftp


Login to your server using lftp

lftp sftp://[email protected]


Change into the directory with files you want to download

cd /hdd01/downloads


Start a segmented download

A pget command using segmentation is used for single files.

pget -n 15 somefile.iso #where 15 is the number of segments

A mirror command using segmentation is for downloading whole directories.

mirror --use-pget-n=15 SomeDirectory #where 15 is the number of segments

You’ll need to experiment with the number of segments – it’s best to use as few as you can while still getting as much speed as you need. I tend to use 8–15 at the absolute maximum.

lftp has queue support, which can also be pretty useful. Essentially you can queue up a bunch of different transfers and pull up their status later on. You simply need to add queue to the start of your command. To check the queue you can use jobs -v.
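For example, inside an lftp session you might queue a couple of transfers and then check on them. These are commands entered at the lftp prompt, not a standalone shell script, and the file and directory names are hypothetical:

```text
queue pget -n 10 somefile.iso                # queue a segmented single-file download
queue mirror --use-pget-n=10 SomeDirectory   # queue a segmented directory mirror
jobs -v                                      # show queued/running jobs and their progress
```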