FreeNAS CPU core and disk temperature check script

This is a nice simple script to display CPU core and disk temperatures on a FreeNAS server. Tested with FreeNAS 9.10.2-U1.

Create a file with the following content – Mine is just called “temps_check_script.sh” as an example.


#!/bin/sh

# Write some general information
echo "System Temperatures - `date`"
uptime | awk '{ print "\nSystem Load:",$10,$11,$12,"\n" }'

# Write CPU temperatures
echo "CPU Temperature:"
sysctl -a | egrep "cpu\.[0-9]+\.temperature"

# Write HDD temperatures and status
echo "HDD Temperature:"
for i in $(sysctl -n kern.disks | awk '{ for (i=NF; i!=0; i--) if ($i ~ /da/) print $i }'); do
    echo $i: `smartctl -a -n standby /dev/$i | awk '/Temperature_Celsius/{DevTemp=$10} /Serial Number:/{DevSerNum=$3} /Device Model:/{DevVendor=$3; DevName=$4} END {printf "%s C - %s %s (%s)", DevTemp, DevVendor, DevName, DevSerNum}'`
done

Make the script executable with chmod +x temps_check_script.sh, then run it with ./temps_check_script.sh. The output looks like this:

System Temperatures - Wed Jun 21 07:53:16 AEST 2017

System Load: 0.17

CPU Temperature:
dev.cpu.3.temperature: 40.0C
dev.cpu.2.temperature: 44.0C
dev.cpu.1.temperature: 40.0C
dev.cpu.0.temperature: 42.0C

HDD Temperature:
ada0: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada1: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada2: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada3: 28 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada4: 28 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
ada5: 27 C - Hitachi HDS5C3020ALA632 (SERIALNUMBERGOESHERE)
da0: 32 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da1: 31 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da2: 29 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da3: 30 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da4: 31 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
da5: 30 C - TOSHIBA MD04ACA400 (SERIALNUMBERGOESHERE)
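
If you want a rolling history of temperatures rather than running it by hand, one option is to schedule the script, either through the FreeNAS GUI's cron job feature or with a root crontab entry. The entry below is only a sketch; the script and log paths are hypothetical, so adjust them to wherever you keep the script.

# Hypothetical crontab entry: append the output to a log at the top of every hour
0 * * * * /mnt/tank/scripts/temps_check_script.sh >> /mnt/tank/scripts/temps.log 2>&1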

My 4RU Rosewill RSV-L4500 ~22TB usable storage FreeNAS Build

This is my current storage box, which originally started out in a Fractal Design Define R3 case and has slowly been upgraded over the years.

  • Rosewill RSV-L4500 Chassis
  • Seasonic S12II 620W Power Supply
  • Front fans – 3x Corsair SP120 Performance Edition ~2350 RPM (these are way louder than the stock fans!)
  • Center bracket fans – 3x Corsair SP120 Quiet Edition ~1450 RPM
  • Rear fans – 2x stock 80mm fans that come with the case
  • CPU Fan – Noctua NH-L9i
  • Motherboard – ASUS P8B-X
  • CPU – Xeon E3-1220 – 3.1GHz (LGA1155)
  • RAM – 2x – Kingston Technology ValueRAM 8GB 1333MHz DDR3 ECC CL9 DIMM (16GB total)
  • LSI 9220-8i in IT Mode + motherboard SATA for disks
  • 6x 4TB Toshiba MD04ACA400 in RAIDZ2
  • 6x 2TB Hitachi (5200rpm model) in RAIDZ2

As you can see, it’s pretty damn messy to cable when you have no backplane!

[Photos of the build]

The front fans and the rear fans are powered using chained Molex adaptors, which is certainly messy. The center bracket fans are plugged straight into the motherboard. I’m considering one of those fan controllers that have something like 8x 3-pin fan headers powered by a single Molex connector, which can then either float in the case or be mounted somewhere. I’m not sure yet whether I should replace the rear 80mm fans. There is a single empty fan header near that end of the motherboard; running one fan off that would be neater than the Molex.

It took quite some time to choose a power supply for this build. This PSU was chosen based on how I could distribute the rail amperage for the disks. You will find it’s not really possible to use the chained SATA power connectors due to how close together the drives sit when bending the cable. A bunch of SATA power and SATA-to-Molex splitters have been used, but everything is distributed so that no single cable carries too many drives and no rail is overloaded.

This box lives under my bed. It’s not running 24/7 (not even close actually); it only gets powered on to back up and grab files perhaps 2-3 times a month for ~4 hours each time. Ambient temperatures are pretty high here. I’ve yet to do a full load/temperature test on the CPU since changing all of the cooling from stock. Previously, when the ambient temp was ~27c, the CPU at ~90% load was hitting ~65c on all cores, which is not great considering the Intel rated max for this CPU is apparently 69c.

I think if you live somewhere with an ambient temperature that sits around 20c, you’d be totally fine using all of the stock cooling for this case.

~2TB of free space is left on both arrays at this point. I would like to move to fewer drives and a smaller form factor in future, once drive sizes increase and drive prices come down.

Schedule commands in LFTP using “at”

You can schedule downloads and uploads in LFTP in a simple manner by using the “at” command. The general form is at <time-spec> -- <command>, and anything following that syntax should work.

For example –

Mirror/download a remote folder two hours from the current time, using 10 connections + segmentation
at now + 2 hours -- queue mirror --use-pget-n=10 linuxisos/

Mirror/download a remote folder at 1AM tomorrow, using 10 connections + segmentation
at 1:00 tomorrow -- queue mirror --use-pget-n=10 linuxisos/
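
The scheduled command can be anything LFTP understands, so at combines with queue and pget as well. The line below is just a sketch, with somefile.iso standing in for whatever you actually want to grab.

Queue a segmented download of a single file, starting 30 minutes from now
at now + 30 minutes -- queue pget -n 10 somefile.iso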

Install Tor relay on CentOS 7

This is a quick guide to standing up a Tor relay on a CentOS 7 server. Firewall config has been omitted; check out these links if you need help with the OS firewall config.
How to setup a firewall using firewalld on CentOS 7
How to migrate from firewalld to iptables on CentOS 7
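
If all you need is to open the two ports used by the torrc further down (ORPort 443 and DirPort 9058), a minimal firewalld sketch might look like the following. This assumes the stock firewalld setup and default zone, so adjust for your environment.

firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=9058/tcp
firewall-cmd --reload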

It’s worth noting that you can score a Tor t-shirt if you run an exit node or relay that satisfies a set of criteria:
Tor T-Shirt for contributing!

“Operate a fast Tor relay that’s been running for the past two months: you are eligible if you allow exits to port 80 and you average 250 KBytes/s traffic, or if you’re not an exit but you average 500 KBytes/s traffic.”

Let’s get started.

Create the .repo file below.

vim /etc/yum.repos.d/torproject.repo

[tor]
name=Tor repo
enabled=1
baseurl=https://deb.torproject.org/torproject.org/rpm/el/7/$basearch/
gpgcheck=1
gpgkey=https://deb.torproject.org/torproject.org/rpm/RPM-GPG-KEY-torproject.org.asc

[tor-source]
name=Tor source repo
enabled=1
autorefresh=0
baseurl=https://deb.torproject.org/torproject.org/rpm/el/7/SRPMS
gpgcheck=1
gpgkey=https://deb.torproject.org/torproject.org/rpm/RPM-GPG-KEY-torproject.org.asc
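
Before installing, you can optionally confirm that yum picks up the new repo; the repo id should show up as “tor”.

yum repolist enabled | grep -i tor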

Install Tor through yum.


yum -y install tor


Edit the config file for Tor.


vim /etc/tor/torrc

SOCKSPort 0
Log notice file /var/log/tor/notices.log
RunAsDaemon 1
DataDirectory /var/lib/tor
#Listen port
ORPort 443
#IP Address or DNS name of your relay.
Address cheddar.cheese.sexy
#The name of your relay.
Nickname chsxy
#If you're worried about spam then you really don't want to format the email address like I have here.
ContactInfo oh boy suddenly all this spam is going to - [email protected]
DirPort 9058
# no exits allowed.
ExitPolicy reject *:*

Verify the config to make sure there are no issues.

tor -f /etc/tor/torrc --verify-config

Run Tor.

/etc/init.d/tor start
Starting tor...done.
/etc/init.d/tor status
tor (pid 3666) running
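
On CentOS 7 the tor package may also ship a systemd unit; if it does on your system, you can manage the service with systemctl instead of the init script (and enable it so it survives reboots).

systemctl enable tor
systemctl start tor
systemctl status tor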

Check the log file to make sure everything is running smoothly.

tail -f /var/log/tor/notices.log

Aug 28 04:19:43.000 [notice] I learned some more directory information, but not enough to build a circuit: We need more descriptors: we have 5382/6917, and can only build 50% of likely paths. (We have 77% of guards bw, 79% of midpoint bw, and 81% of exit bw = 50% of path bw.)
Aug 28 04:19:43.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Aug 28 04:19:44.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Aug 28 04:19:44.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Aug 28 04:19:45.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Aug 28 04:19:45.000 [notice] Bootstrapped 100%: Done
Aug 28 04:19:45.000 [notice] Now checking whether ORPort 163.172.170.23:443 and DirPort 163.172.170.23:9058 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Aug 28 04:19:45.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.
Aug 28 04:19:45.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent.
Aug 28 04:19:46.000 [notice] Performing bandwidth self-test...done.

After a couple of hours you should be able to see your relay on one of the various index sites!
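
If you'd rather search by fingerprint than by nickname, Tor writes it to a file in the DataDirectory (path taken from the torrc above).

cat /var/lib/tor/fingerprint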

Here’s mine.

This particular relay is hosted over at Scaleway.

Running fsck via Leaseweb FreeBSD Rescue 2.1 on UFS partitions

I’ve been running a FreeBSD 10 based dedicated server with Leaseweb NL for a little over a year now. This morning I noticed the server was down.

Unfortunately Leaseweb don’t seem to provide any KVM-style access, or at least I don’t have that functionality with this server. I rebooted the server via the Leaseweb panel, without any success. My suspicion was that the filesystem might be dirty, and that FreeBSD was stuck on a screen waiting for fsck to be launched.

Using Leaseweb’s panel I booted into their “FreeBSD Rescue 2.1”. I tried to run fsck across my partitions; however, I would constantly get the error:
fsck: Could not determine filesystem type

For this particular server I am still using UFS rather than ZFS. It turns out you have to specify the filesystem type in the fsck command.

In the end I did the following:
ls /dev/ad* #to list out all partitions
fsck -y -t ufs /dev/ad3s1 #ran this same command across every partition
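
If ls /dev/ad* doesn't make the disk layout obvious, gpart (part of the FreeBSD base system, so it should be available in the rescue environment) can print the partition scheme for each disk.

gpart show #list partitioning for all disks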

It was the /usr partition that was marked as “dirty”. After running the above fsck command across it, I rebooted the server and everything came back as normal.

Segmented SFTP downloading using LFTP

If you’ve ever tried saturating a fast connection using FTP/SFTP you may have run into problems where you can only achieve limited download speeds using a single thread. Segmented downloading can often be a solution. Bear in mind that segmented FTP/SFTP will open many sessions to the server you are connecting to. Depending on the situation this might not be ideal; however, if you’re sure you have sufficient resources to do it (without pissing anyone off if the server is in a shared environment), then it can work very well. For example, from my home connection I can usually only pull ~800KB/s on a single-threaded SFTP download from a dedicated server based in the Netherlands. Using segmented downloading I can easily max out my connection (~7MB/s). I’ve found that other software such as BitKinex and CuteFTP on Windows is not able to match the speeds I get when using lftp.

You’ll need to install lftp – I run it on my Raspberry Pi.
sudo apt-get install lftp

Log in to your server using lftp
lftp sftp://[email protected]

Change into the directory with files you want to download
cd /hdd01/downloads

Start a segmented download

A pget command using segmentation is used for single files.
pget -n 15 somefile.iso #where 15 is the number of segments

A mirror command using segmentation is for downloading whole directories.
mirror --use-pget-n=15 SomeDirectory #where 15 is the number of segments

You’ll need to experiment with the number of segments. It’s best to use as few as you can while still getting as much speed as you need. I tend to use 8 to 15 at the absolute maximum.

lftp has queue support, which can also be pretty useful. Essentially you can queue up a bunch of different transfers and pull up their status later on. You simply need to add queue to the start of your command. To check the queue you can use jobs -v.
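
As a quick sketch, the following queues a directory mirror and a single-file download in the same session, then checks on them.

queue mirror --use-pget-n=10 SomeDirectory
queue pget -n 10 somefile.iso
jobs -v #show queued and running transfers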