Installing Rocket.Chat on Ubuntu Xenial 16.04 via Snap

This is a simple tutorial to get Rocket.Chat running on an Ubuntu Xenial 16.04 server (you’ll likely be perfectly fine running through the same process on a different Ubuntu version such as 18.04 if you’d prefer). In this case we’re installing on a fresh server, installing Rocket.Chat as a Snap, and using Caddy as a reverse proxy. Caddy will also deal with issuing SSL certificates via Let’s Encrypt. With this you’ll be able to get Rocket.Chat up and running within ~10 minutes. From there you can go on to make further server configuration changes for security and so on, as well as configure Rocket.Chat in more depth – neither of which is covered within the scope of this tutorial.


Let’s first start with some updates.

sudo apt-get update
sudo apt-get upgrade


Basic UFW setup

Let’s set up a basic firewall using UFW. First, install UFW if it’s not already installed –

sudo apt-get install ufw


Set up the default access rules –

sudo ufw default deny incoming
sudo ufw default allow outgoing


Set up the firewall rules that we’ll want –

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp


Enable the firewall –

sudo ufw enable


You can check the status of ufw with –

sudo ufw status


If you add or remove rules, you should reload ufw with –

sudo ufw reload
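
For example, removing a rule you no longer need (a hypothetical example – say you’re closing HTTP) and applying the change looks like this –

sudo ufw delete allow 80/tcp
sudo ufw reload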


If you need to disable ufw you can do so with –

sudo ufw disable


Install Fail2Ban

sudo apt-get install fail2ban
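
On Ubuntu, Fail2Ban should protect SSH out of the box. If you’d like to tweak the defaults, a minimal sketch of a local override looks like this – the file path is the standard Fail2Ban convention, and the values here are just example assumptions –

# /etc/fail2ban/jail.local – example sshd jail override
[sshd]
enabled  = true
port     = 22
maxretry = 5
bantime  = 3600

Restart Fail2Ban to pick up any changes –

sudo service fail2ban restart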


Install Rocket.Chat as a Snap

Install Snap if it’s not already installed –

sudo apt-get install snapd


Install Rocket.Chat –

sudo snap install rocketchat-server
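
You can confirm what got installed with –

snap list rocketchat-server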


At this point the Rocket.Chat service will have started automatically. You can check whether it’s running with –

sudo service snap.rocketchat-server.rocketchat-server status


Configure Caddy and SSL

Initial configuration –

sudo snap set rocketchat-server caddy-url=https://<your-domain-name>
sudo snap set rocketchat-server caddy=enable
sudo snap set rocketchat-server https=enable
sudo rocketchat-server.initcaddy
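
You can read the settings back with snap get to confirm they were applied –

sudo snap get rocketchat-server caddy-url
sudo snap get rocketchat-server caddy
sudo snap get rocketchat-server https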


Assuming you didn’t have any errors, restart Rocket.Chat and Caddy –

sudo systemctl restart snap.rocketchat-server.rocketchat-server.service
sudo systemctl restart snap.rocketchat-server.rocketchat-caddy.service


You can check Caddy’s logs with the following command –

journalctl -r | grep caddy | less
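
Alternatively, you can query the Caddy unit’s logs directly, which avoids the grep –

sudo journalctl -u snap.rocketchat-server.rocketchat-caddy -r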


Redirect HTTP to HTTPS

Redirecting HTTP to HTTPS is handled in the Caddy configuration by omitting the http:// or https:// prefix from the site address. For instance, you should have something like this inside /var/snap/rocketchat-server/current/Caddyfile –

your-domain-name.com {
  proxy / localhost:3000 {
    websocket
    transparent
  }
}


Restart Caddy once again after saving your changes –

sudo systemctl restart snap.rocketchat-server.rocketchat-caddy


Onto Rocket.Chat itself!

At this point you’ll have a working Rocket.Chat installation. You can browse to https://yourserver.com and you should be presented with the Setup Wizard screen to create the first user, who will be the Admin by default.

Once logged in, you may get a pop-up stating something along the lines of – The setting Site URL is configured to http://localhost and you are accessing from https://yourserver.com - Do you want to change to https://yourserver.com ? – You’ll want to click YES.

At this stage you’ll want to setup Rocket.Chat itself, so please refer to their documentation here – https://rocket.chat/docs


~Extra~

You can install a Discord style dark theme using this here! https://github.com/0x0049/Rocket.Chat.Dark


Backup and restore or migrate a Snap based installation of Rocket.Chat

This is a simple tutorial to backup and restore, or backup and migrate a Snap based installation of Rocket.Chat.


Stop the Rocket.Chat server

First you’ll need to stop the Rocket.Chat server.

sudo service snap.rocketchat-server.rocketchat-server stop

Note that we’re only stopping the rocketchat-server service, not the MongoDB service, which should still be running. Check with –

sudo service snap.rocketchat-server.rocketchat-mongo status | grep Active
Active: active (running) (…)


Create a backup.

sudo snap run rocketchat-server.backupdb

You should see output similar to this –

[+] A backup of your data can be found at /var/snap/rocketchat-server/common/backup/rocketchat_backup_<timestamp>.tar.gz

Download that backup file over SFTP for instance, or transfer it to the server you’re migrating your Rocket.Chat installation to.
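
For example, a quick scp sketch for pulling the backup onto a new server (hypothetical hostname and user – adjust to suit) –

scp root@old-server:/var/snap/rocketchat-server/common/backup/rocketchat_backup_<timestamp>.tar.gz /var/snap/rocketchat-server/common/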

Your Rocket.Chat server will still be stopped at this point, so if you just wanted to create a backup for your existing installation, you can start the server back up with –

sudo service snap.rocketchat-server.rocketchat-server start


Migrate (or restore) from backup

Now, if we’re going to migrate our Rocket.Chat installation, you’ll want to have already installed Rocket.Chat as a Snap on the server we’re migrating to. Once done, upload the *.tar.gz backup file from earlier to /var/snap/rocketchat-server/common/ on the destination server.

Once again, stop the rocketchat-server service, but not the MongoDB service –

sudo service snap.rocketchat-server.rocketchat-server stop

sudo service snap.rocketchat-server.rocketchat-mongo status | grep Active
Active: active (running) (…)

Restore using the *.tar.gz backup that we created –

sudo snap run rocketchat-server.restoredb /var/snap/rocketchat-server/common/rocketchat_backup_<timestamp>.tar.gz

*** ATTENTION ***
* Your current database WILL BE DROPPED prior to the restore!
* Would you like to make a backup of the current database before proceeding?
* (y/n/Q)>

Y

[*] Extracting backup file...
[*] Restoring data...
[*] Preparing database...
[+] Restore completed! Please restart the snap.rocketchat services to verify.

Start the Rocket.Chat server at this point, and your installation will now be running based on the Rocket.Chat Snap backup that was performed!

sudo service snap.rocketchat-server.rocketchat-server start

Installing Rocket.Chat on Ubuntu Xenial 16.04 via Snap with an NGINX reverse proxy

Please note that Rocket.Chat Snaps now come with Caddy to deal with the reverse proxy and free SSL certificates via Let’s Encrypt – so you may wish to refer to my newer post here. NGINX won’t have websockets configured if you follow this guide as-is – they are required if you intend to use the mobile Rocket.Chat apps (see the websocket sketch further down).

This is a simple tutorial to get Rocket.Chat running on an Ubuntu Xenial 16.04 server (you’ll likely be perfectly fine running through the same process on a different Ubuntu version such as 18.04 if you’d prefer). In this case we’re installing on a fresh server, installing Rocket.Chat as a Snap, and using NGINX as a reverse proxy, as well as setting up an SSL certificate via Let’s Encrypt. With this you’ll be able to get Rocket.Chat up and running within ~10 minutes. From there you can go on to make further server configuration changes for security and so on, as well as configure Rocket.Chat in more depth – neither of which is covered within the scope of this tutorial.


Let’s first start with some updates.

sudo apt-get update
sudo apt-get upgrade


Basic UFW setup

Let’s set up a basic firewall using UFW. First, install UFW if it’s not already installed –

sudo apt-get install ufw


Set up the default access rules –

sudo ufw default deny incoming
sudo ufw default allow outgoing


Set up the firewall rules that we’ll want –

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp


Enable the firewall –

sudo ufw enable


You can check the status of ufw with –

sudo ufw status


If you add or remove rules, you should reload ufw with –

sudo ufw reload


If you need to disable ufw you can do so with –

sudo ufw disable


Install Fail2Ban

sudo apt-get install fail2ban


Install Rocket.Chat as a Snap

Install Snap if it’s not already installed –

sudo apt-get install snapd


Install Rocket.Chat –

sudo snap install rocketchat-server


At this point the Rocket.Chat service will have started automatically. You can check whether it’s running with –

sudo service snap.rocketchat-server.rocketchat-server status


Install and configure NGINX to use as a reverse proxy + SSL setup

Install NGINX –

sudo apt install nginx
sudo systemctl start nginx
sudo systemctl enable nginx


Remove the default NGINX site –

sudo rm /etc/nginx/sites-enabled/default


Create the NGINX config for Rocket.Chat

sudo vim /etc/nginx/sites-available/rocketchat.conf


Once inside vim, you should have the following (edit “yourserver.com” to be the actual domain you’re going to use for this server) –

server {
    listen 80;

    server_name yourserver.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
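
As mentioned at the top, this minimal config doesn’t proxy websockets, which the mobile Rocket.Chat apps require. If you do need them, a sketch of the same server block with the standard NGINX websocket upgrade headers added would look like this (standard NGINX directives – adjust to your setup) –

server {
    listen 80;

    server_name yourserver.com;

    location / {
        proxy_pass http://localhost:3000/;
        # Websocket support – upgrade the connection when the client asks for it
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Pass through the original host and client details
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}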


Enable the new configuration by creating a link to it in /etc/nginx/sites-enabled/ –

sudo ln -s /etc/nginx/sites-available/rocketchat.conf /etc/nginx/sites-enabled/


Test the configuration –

sudo nginx -t


Assuming no errors are reported, reload the NGINX config with –

sudo nginx -s reload


SSL Setup using Let’s Encrypt + Certbot

Install Certbot and run it –

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx
sudo certbot --nginx


Follow the prompts on screen and you should be issued a valid Let’s Encrypt SSL certificate. Make sure you choose to force an HTTPS redirect when prompted.

Certbot will automatically deal with SSL certificate renewal; a cron job will automatically be created under /etc/cron.d/certbot. You can test the renewal process as a dry run via –

sudo certbot renew --dry-run


Certbot will have updated the NGINX configuration. Test that the config is valid with –

sudo nginx -t


Assuming no errors are reported, reload the NGINX config with –

sudo nginx -s reload


Onto Rocket.Chat itself!

At this point you’ll have a working Rocket.Chat installation. You can browse to https://yourserver.com and you should be presented with the Setup Wizard screen to create the first user, who will be the Admin by default.

Once logged in, you should get a pop-up stating something along the lines of – The setting Site URL is configured to http://localhost and you are accessing from https://yourserver.com - Do you want to change to https://yourserver.com ? – You’ll want to click YES.

At this stage you’ll want to setup Rocket.Chat itself, so please refer to their documentation here – https://rocket.chat/docs


~Extra~

You can install a Discord style dark theme using this here! https://github.com/0x0049/Rocket.Chat.Dark


FreeNAS CPU core and disk temperature check script

This is a nice simple script to display CPU core and disk temperatures on a FreeNAS server. Tested with FreeNAS 9.10.2-U1.


Create a file with the following content – Mine is just called “temps_check_script.sh” as an example.

#!/bin/sh

# Write some general information
echo "System Temperatures - `date`"
# The load averages are always the last three fields of uptime's output
uptime | awk '{ print "\nSystem Load:",$(NF-2),$(NF-1),$NF,"\n" }'

# Write CPU temperatures
echo "CPU Temperature:"
sysctl -a | egrep "dev\.cpu\.[0-9]+\.temperature"

# Write HDD temperatures and status
echo "HDD Temperature:"
for i in $(sysctl -n kern.disks | awk '{ for (i=NF; i!=0; i--) if ($i ~ /da/) print $i }'); do
echo $i: `smartctl -a -n standby /dev/$i | awk '/Temperature_Celsius/{DevTemp=$10;} /Serial Number:/{DevSerNum=$3}; /Device Model:/{DevVendor=$3; DevName=$4} END {printf "%s C - %s %s (%s)", DevTemp,DevVendor,DevName,DevSerNum }'`;
done


Make it executable with chmod +x temps_check_script.sh, then run it with ./temps_check_script.sh – the output looks like this:

System Temperatures - Wed Jun 21 07:53:16 AEST 2017

System Load: 0.17

CPU Temperature:
dev.cpu.3.temperature: 40.0C
dev.cpu.2.temperature: 44.0C
dev.cpu.1.temperature: 40.0C
dev.cpu.0.temperature: 42.0C

HDD Temperature:
ada0: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada1: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada2: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada3: 28 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada4: 28 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada5: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
da0: 32 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da1: 31 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da2: 29 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da3: 30 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da4: 31 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da5: 30 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)

My 4RU Rosewill RSV-L4500 ~22TB usable storage FreeNAS Build

This is my current storage box, which originally started in a Fractal R3 Define case, and had slowly been upgraded over the years.

  • Rosewill RSV-L4500 Chassis
  • Seasonic S12II 620W Power Supply
  • Front fans – 3x Corsair SP120 Performance Edition ~2350 RPM (these are way louder than the stock fans!)
  • Center bracket fans – 3x Corsair SP120 Quiet Edition ~1450 RPM
  • Rear fans – 2x stock 80mm fans that come with the case
  • CPU Fan – Noctua NH-L9i
  • Motherboard – ASUS P8B-X
  • CPU – Xeon E3-1220 – 3.1GHz (LGA1155)
  • RAM – 2x – Kingston Technology ValueRAM 8GB 1333MHz DDR3 ECC CL9 DIMM (16GB total)
  • LSI 9220-8i in IT Mode + motherboard SATA for disks
  • 6x 4TB Toshiba MD04ACA400 in RAIDZ2
  • 6x 2TB Hitachi (5200rpm model) in RAIDZ2

It’s pretty damn messy to cable when you have no backplane!


The front fans and the rear fans are powered using chained molex adaptors, which is certainly messy. The center bracket fans are plugged straight into the motherboard. I’m considering one of those fan controllers that have something like 8x 3-pin fan headers powered by a single molex, which can then either float in the case or be mounted somewhere. Not sure yet if I should replace the rear 80mm fans or not. There is a single empty fan header near that end of the motherboard; running one off that would be neater than the molex.

It took quite some time to choose a power supply for this build. This PSU was chosen based on how I could distribute the rail amperage for the disks. You’ll find it’s not really possible to use the chained SATA power connectors due to how close the drives are together when bending the cable. A bunch of SATA power and SATA-to-molex splitters have been used, however everything is distributed so that no single cable has too many drives and the rails are not overloaded.

This box lives under my bed. It’s not running 24/7 (not even close, actually), but rather gets spun up to back up and snatch files to/from perhaps 2-3 times a month for ~4 hours each time. Ambient temperatures are pretty high here. I’m yet to do a full load/temp test on the CPU since changing all of the cooling from stock – previously, when the ambient temp was ~27°C, the CPU at ~90% load was hitting ~65°C on all cores – not great considering the Intel rated max for this CPU is apparently 69°C.

I think if you live somewhere with an ambient temperature that sits around 20°C, you’d be totally fine using all of the stock cooling for this case.

~2TB of free space is left on both arrays at this point. I would like to move to fewer drives and a smaller form factor in future, once drive sizes increase and drive prices come down.

Schedule commands in LFTP using “at”

You can schedule downloads and uploads in LFTP in a simple manner by using the “at” command. The general form is at <when> -- <command>, as shown in the examples below.



Mirror/download a remote folder two hours from the current time, using 10 connections + segmentation

at now + 2 hours -- queue mirror --use-pget-n=10 linuxisos/


Mirror/download a remote folder at 1AM tomorrow, using 10 connections + segmentation

at 1:00 tomorrow -- queue mirror --use-pget-n=10 linuxisos/
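
Scheduled jobs sit in lftp’s queue until the timer fires – you can check on them from within lftp with –

jobs -v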

Install Tor relay on CentOS 7

This is a quick guide to running up a Tor relay on a CentOS 7 server. Firewall config has been omitted; check out these links if you need help with the OS firewall config.
How to setup a firewall using firewalld on CentOS 7
How to migrate from firewalld to iptables on CentOS 7

It’s worth noting that you can score a Tor t-shirt if you run an exit node or relay that satisfies certain criteria:
Tor T-Shirt for contributing!

“Operate a fast Tor relay that’s been running for the past two months: you are eligible if you allow exits to port 80 and you average 250 KBytes/s traffic, or if you’re not an exit but you average 500 KBytes/s traffic.”

Let’s get started.


Create the .repo file below.

vim /etc/yum.repos.d/torproject.repo

[tor]
name=Tor repo
enabled=1
baseurl=https://deb.torproject.org/torproject.org/rpm/el/7/$basearch/
gpgcheck=1
gpgkey=https://deb.torproject.org/torproject.org/rpm/RPM-GPG-KEY-torproject.org.asc

[tor-source]
name=Tor source repo
enabled=1
autorefresh=0
baseurl=https://deb.torproject.org/torproject.org/rpm/el/7/SRPMS
gpgcheck=1
gpgkey=https://deb.torproject.org/torproject.org/rpm/RPM-GPG-KEY-torproject.org.asc


Install Tor through yum.

yum -y install tor


Edit the config file for Tor.

vim /etc/tor/torrc

SOCKSPort 0
Log notice file /var/log/tor/notices.log
RunAsDaemon 1
DataDirectory /var/lib/tor

#Listen port
ORPort 443

#IP Address or DNS name of your relay.
Address relay.torserver12312312.com
#The name of your relay.
Nickname rltrs12312312

#If you're worried about spam then you really don't want to format the email address like I have here.
ContactInfo oh boy suddenly all this spam is going to - [email protected]
DirPort 9058

# no exits allowed.
ExitPolicy reject *:*


Verify the config to make sure there are no issues.

tor -f /etc/tor/torrc --verify-config


Run Tor.

/etc/init.d/tor start
Starting tor...done.
/etc/init.d/tor status
tor (pid 3666) running
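
CentOS 7 is systemd based, so depending on how the package installs its service you may prefer managing Tor via systemctl – assuming the package ships a unit file, enabling it at boot would look like this –

sudo systemctl enable tor
sudo systemctl start tor
sudo systemctl status tor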


Check the log file to make sure everything is running smoothly.

tail -f /var/log/tor/notices.log

Aug 28 04:19:43.000 [notice] I learned some more directory information, but not enough to build a circuit: We need more descriptors: we have 5382/6917, and can only build 50% of likely paths. (We have 77% of guards bw, 79% of midpoint bw, and 81% of exit bw = 50% of path bw.)
Aug 28 04:19:43.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Aug 28 04:19:44.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Aug 28 04:19:44.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Aug 28 04:19:45.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Aug 28 04:19:45.000 [notice] Bootstrapped 100%: Done
Aug 28 04:19:45.000 [notice] Now checking whether ORPort 163.172.170.23:443 and DirPort 163.172.170.23:9058 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Aug 28 04:19:45.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.
Aug 28 04:19:45.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent.
Aug 28 04:19:46.000 [notice] Performing bandwidth self-test...done.

After a couple of hours you should be able to see your relay on one of the various index sites!

Running fsck via Leaseweb FreeBSD Rescue 2.1 on UFS partitions

I’ve been running a FreeBSD 10 based dedicated server with Leaseweb NL for a little over a year now. This morning I noticed the server was down.

Unfortunately Leaseweb don’t seem to provide any KVM style access, or in any case I don’t have that functionality with this server from them. I rebooted the server via the Leaseweb panel, without any success. My suspicion was that the filesystem might be dirty, and FreeBSD was stuck on a screen waiting for fsck to be launched.

Using Leaseweb’s panel I booted into their “FreeBSD Rescue 2.1”. I tried to run fsck across my partitions, but I would constantly get the error:

fsck: Could not determine filesystem type

For this particular server I am still using UFS rather than ZFS. It turns out you have to specify the filesystem type in the fsck command.

In the end I did the following:

ls /dev/ad* # to list out all partitions
fsck -y -t ufs /dev/ad3s1 # ran this same command across every partition
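
If you have a bunch of partitions to get through, a small sketch of the same thing as a loop (the partition names here are hypothetical – use whatever the ls above shows) –

for p in /dev/ad3s1a /dev/ad3s1d /dev/ad3s1e; do
    fsck -y -t ufs $p
done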

It was the /usr partition that was marked as “dirty”. After running the above fsck command across it, I rebooted the server and everything came back as normal.

Segmented SFTP downloading using LFTP

If you’ve ever tried saturating a fast connection using FTP/SFTP you may have run into problems where you can only achieve limited download speeds using a single thread. Segmented downloading can often be a solution. Bear in mind that segmented FTP/SFTP will open many sessions to the server you are connecting to. Depending on the situation this might not be ideal, however if you’re sure you have sufficient resources to do it (without pissing anyone off if the server is in a shared environment), then it can work very well. For example – from my home connection I can usually only pull ~800KB/s on a single-thread SFTP download from a dedicated server based in the Netherlands. Using segmented downloading I can easily max out my connection (~7MB/s). I’ve found that other software such as Bitkinex and CuteFTP on Windows are not able to match the speeds I get when using lftp.


You’ll need to install lftp – I run it on my Raspberry Pi.

sudo apt-get install lftp


Login to your server using lftp

lftp sftp://[email protected]


Change into the directory with files you want to download

cd /hdd01/downloads


Start a segmented download

A pget command using segmentation is used for single files.

pget -n 15 somefile.iso #where 15 is the number of segments

A mirror command using segmentation is used for downloading whole directories.

mirror --use-pget-n=15 SomeDirectory #where 15 is the number of segments

You’ll need to experiment with the number of segments – it’s best to use as few as you can while still getting as much speed as you need. I tend to use 8-15 at the absolute maximum.

lftp has queue support, which can also be pretty useful. Essentially you can queue up a bunch of different transfers and pull up their status later on. You simply need to add queue to the start of your command. To check the queue you can use jobs -v.
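
As a quick sketch, queueing both of the earlier examples and then checking on them looks like this –

queue pget -n 15 somefile.iso
queue mirror --use-pget-n=15 SomeDirectory
jobs -v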