
Unlock and Root Your Nexus Using Linux


Introduction
I had a lot of trouble gathering information on how to unlock and root my Nexus 6 phone (having never done anything like this before). The information on how to do this is definitely out there, but it’s all over the place in bits and pieces. That’s where this blog comes in: I’ve grouped together everything I learned after successfully rooting my phone and installing TWRP on it.

This diagram explains very loosely how the upgrades work; it will make more sense as you read on.
[Diagram: Nexus Upgrade In a Nutshell]
Here are the topics of interest:

  • Installing the Android SDK and preparing a useful (Linux) Environment: This step is important in order to be able to interact with your device and update it via a USB cable attached to your PC. It specifically focuses on enabling the use of the adb and fastboot tools. This entire blog is useless to you if you don’t have this part set up.
  • Enabling Debug (Developer) Mode on your Android device: This step pairs with the SDK. The SDK allows your PC to communicate with your Android device while this step allows your device to be communicated with. Each device you plan on unlocking or rooting will have to have Debug Mode turned on.
  • Unlock Your Bootloader: You cannot root your device unless your bootloader is unlocked. This action opens your device up to a world of tweaks and hacks. Note: this step can also void your warranty.
  • Root Your Device: Once your bootloader is unlocked, you can root your device. This step focuses on using the excellent work of Chainfire‘s (CF-)Auto-Root boot.img and SuperSU application.
  • Upgrade Your Rooted Device: When Google puts out a new Over The Air (OTA) upgrade, you’ll notice that rooted devices won’t successfully get upgraded. Don’t worry; nothing breaks, the update simply fails. I identify the steps here that let you play along and still upgrade your rooted device.

A few other things worth sharing:

  • Fantastic Android (Root Required) Apps: Got your Android device rooted? Here are some amazing applications worth installing on it.
  • TWRP Setup: TWRP allows you to try out custom firmware. This is one of the fantastic apps I had to make a separate section for, since its setup can be a bit confusing if you’ve never done it before.
  • Backups and Restores: Just some tips I found on creating backups and their accompanying restore operations (in a time of need).

I performed all of the steps below on a Fedora 20 Linux box, but the instructions should work on any variation of Linux. Please note that these steps are specifically geared towards a Nexus 6, but I’m sure any competent person can adjust what is identified here to work with their own Android device (phone/tablet) too. :)

Android SDK Preparation
This step is vital for two-way communication with your device from your PC. I did the following using Fedora 20, but the rules don’t change for most distributions.

  1. As root, make sure the proper libraries are installed and available to your environment:
    # You'll need the 32-bit version of libstdc and the JDK 
    yum install libstdc++.i686 java-1.7.0-openjdk -y
    
  2. Download the full Android SDK here (scroll to the bottom of the page>DOWNLOAD FOR OTHER PLATFORMS>SDK Tools Only). At the time I did this, the file was called: android-sdk_r24.0.2-linux.tgz.
    # Alternatively, you can fetch it this way
    # (as long as the site doesn't change their paths around):
    wget http://dl.google.com/android/android-sdk_r24.0.2-linux.tgz
    
  3. Install its contents into a workable directory; the following prepared my development environment, placing everything in ~/Development/Android/SDK:
    mkdir -p ~/Development/Android/SDK
    # Extract contents of the SDK into our newly created directory
    # the --strip 1 eliminates the parent directory (called android-sdk-linux)
    # we just want the root of this to go into the SDK directory instead
    tar xvfz android-sdk_r24.0.2-linux.tgz -C ~/Development/Android/SDK --strip 1
    
  4. The following will allow regular users to utilize the fastboot command (run it as root). Without this step, you must switch to root to utilize the command.

    cat << _EOF > /etc/udev/rules.d/51-android.rules
    # Nexus 6 ID: 18d1
    SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0666", GROUP="plugdev" 
    _EOF
    chmod 644 /etc/udev/rules.d/51-android.rules
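    # (Assumption: on many systems the new rule won't take effect
    # until udev reloads its rules and the device is re-plugged)
    udevadm control --reload-rules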
    
  5. Now we need to add a few more tools to complete our setup. This can be done through the tools/android binary.

    # Create an environment variable to simplify the commands
    ASDK="$HOME/Development/Android/SDK"
    # Change to our development directory
    pushd $ASDK
    # Update our path
    export PATH="$PATH:$ASDK/tools:$ASDK/platform-tools"
    
    # Install (the following were in the SDK r24.0.2 package)
    # they may be different in yours.
    #  1- Android SDK Platform-tools
    #  2- Android SDK Build-tools
    # 57- Android Support Library
    #
    # The following command lists all of the packages you can install:
    #    > android list sdk
    #
    # If you want to be certain you grab the right packages and
    # are using a different release, here is a cheat that will
    # fetch this information for you:
    PACKAGES=$(tools/android list sdk --extended | \
                egrep -B2 -i 'Android (Support Library|SDK Platform-tools|SDK Build-tools)' | \
                egrep '^id:' | \
                sed -e 's/[^"]\+"\([^"]\+\)"/\1/g' | \
                cut -d- -f1 | tr -s '[:space:]' , | \
                sed -e 's/,\+$//g')
    
    # Now install our packages! (if there are any to install)
    android update sdk -u -a -t $PACKAGES
    
    # if you get the following error, it's because you didn't install
    # libstdc++ as identified in an earlier step:
    #   ...
    #   Stopping ADB server failed (code 127).
    #   Starting ADB server failed (code 127).
    #   Done. 3 packages installed.
    #
    # To fix this, make sure you have installed the 32-bit libstdc++ (as root):
    #    yum install libstdc++.i686
    #
    # After it's installed, you can restart the adb server with the
    # following command:
    #     adb start-server
    

    Alternatively, you can just launch the tools/android binary and use the GUI to set things up. You will need to install the following (3) packages (minimally):

    1. Tools >> Android SDK Tools
    2. Tools >> Android SDK Platform-tools
    3. Extras >> Android Support Library
  6. You can test to see if you’ve got everything working by performing the following:
    # Tests if the adb-server is successfully running
    adb version
    
    # NOTE:
    # If it doesn't work, don't panic (yet); try killing the server
    # (it's possible it's not even running) and restarting it:
    adb kill-server
    adb start-server
    
    
  7. Now try testing the command identified earlier again:
    adb version

    If it displays Android Debug Bridge version x.x.x, then you’ve completed the first part successfully.

Enable Development / Debugging Mode on Device
You must log into your device and access the settings to enable Development / Debugging mode:

  1. Go into Settings.
  2. Under About, you’ll be able to locate your Build Number.
  3. Tap Build Number 7 times until you are notified that you have activated Developer options.
  4. Go into Developer Options, ensure it is enabled and check the Enable OEM Unlock checkbox.
  5. While in Developer Options, ensure the USB Debugging checkbox is checked too.
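With USB debugging enabled, plug the device into your PC and confirm the pairing right away (this assumes the PATH setup from the previous section; accept the authorization prompt that appears on the device the first time):

    # The device's serial number should appear in the list
    adb devices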

Unlock the Bootloader
When you unlock the bootloader, your Android device will warn you that it is about to erase everything.
Before proceeding with this step, make sure you’ve backed up everything important to you. Preferably you’re doing this step with a brand new device so you have nothing to lose at this time.

  1. Turn the device off (if it’s on already) and turn it back on while holding the volume down and the power button at the same time. This will start your device into the bootloader/fastboot mode.
  2. Plug the device into your PC (via a USB cable), then open a command prompt window and type the following:
    # Create an environment variable to simplify the commands
    ASDK="$HOME/Development/Android/SDK"
    # Change to our development directory
    pushd $ASDK
    # Update our path
    export PATH="$PATH:$ASDK/tools:$ASDK/platform-tools"
    
    # list the devices detected in fastboot mode:
    fastboot devices
    

    If your device’s serial number shows up, you are good to go! If not, you’ll need to figure this part out before continuing.
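    For reference, a detected device prints its serial number followed by the word fastboot, along these lines (the serial shown here is made up):

    # Example output of 'fastboot devices':
    #   0A1B2C3D4E5F    fastboot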

  3. This next step will completely wipe EVERYTHING off of your device. Make sure to back up anything worth saving before proceeding. If you’re having second thoughts, now is the time to back out. You can reboot your device again and it will return to how it was.

    Otherwise, for all intents and purposes, unlock the bootloader with the following command:

    fastboot oem unlock

    On the device (still attached to the computer), a screen should pop up asking whether or not you would like to unlock the bootloader. Use the volume rockers to highlight “Yes” then press power to confirm the action.
    After the above command has finished executing, run the following to restart your device:

    fastboot reboot

    The device will reboot at this time.

    Next you will be presented with a screen containing an Android logo as well as a progress bar. It can take about 8 minutes or so to finish.

    When it’s complete, it will restart and act as though it was the first time you’ve ever turned on the device. There is no sense in doing anything yet, we still have a few more steps to do. Just power off the device at this time and keep reading.

Root Your Android Device
Your bootloader must be unlocked for this step to work correctly. Once it is, you can root your device with the following actions:

  1. Download Chainfire‘s (CF-)Auto-Root tool, which automates everything for you, from here.
  2. Save the ZIP file to your Desktop; this makes it easy to access.
  3. Turn the device off (if it’s on already) and turn it back on while holding the volume down + power button at the same time. This will start your device into the bootloader/fastboot mode.
  4. Plug the device into your PC (via a USB cable), then open a command prompt window and type the following:
    # Create an environment variable to simplify the commands
    ASDK="$HOME/Development/Android/SDK"
    # Change to our development directory
    pushd $ASDK
    # Update our path
    export PATH="$PATH:$ASDK/tools:$ASDK/platform-tools"
    
    # Create a directory to place the Auto-Root contents in:
    mkdir -p "$ASDK/autoroot"
    
    # Uncompress contents to this directory:
    unzip ~/Desktop/CF-Auto-Root-shamu-shamu-nexus6.zip -d "$ASDK/autoroot"
    
    # Reboot your device into bootloader mode (power+volume down or this option):
    adb reboot bootloader
    
    # Now root your device with the following commands:
    IMAGE=$(find autoroot/image -type f -name '*.img')
    chmod ug+x autoroot/tools/fastboot-linux
    autoroot/tools/fastboot-linux boot "$IMAGE"
    

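Once the CF-Auto-Root image boots, it installs SuperSU, roots the device, and reboots on its own. A minimal way to verify the result afterwards (assuming the device has fully booted and USB debugging is still enabled; SuperSU may prompt you on the device to grant access):

    # If rooting succeeded, the output should include uid=0(root)
    adb shell su -c id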
Upgrade Your Rooted Device
When Android v5.1 came out, the OTA upgrade failed on my rooted device (it rebooted to an ‘error’ screen). I had to fetch the stock version I was currently running so I could flash just the 2 images that matter (system and recovery).

  1. Setup your development environment
    # Create an environment variable to simplify the commands
    ASDK="$HOME/Development/Android/SDK"
    # Change to our development directory
    pushd $ASDK
    # Update our path
    export PATH="$PATH:$ASDK/tools:$ASDK/platform-tools"
    
  2. The point of this section is that you want to update your Android without losing your data. [Image: Nexus 6 Settings Build Number]

    Download the stock firmware associated with the (rooted) version currently installed on your device here. The steps below will show you how you can access what is important from what you just downloaded.

    The best way to be sure you’re downloading the correct version is to go into the About settings of your device and pay attention to the Build number identified at the bottom. You just need to download and work with this version as your base.

    # 5.0.1 shamu Stock (Nexus 6); You will need to visit the following
    # website to get the right firmware for your device if it's not the
    # same version I'm using: 
    # - https://developers.google.com/android/nexus/images
    #
    wget https://dl.google.com/dl/android/aosp/shamu-lrx22c-factory-ff173fc6.tgz
    mkdir -p "$ASDK/firmware.nexus.6-shamu.5.0.1"
    tar xvfz shamu-lrx22c-factory-ff173fc6.tgz -C "$ASDK/firmware.nexus.6-shamu.5.0.1"  --strip 1
    
    # Change into our Firmware Directory
    pushd "$ASDK/firmware.nexus.6-shamu.5.0.1"
    
    ######################################
    # After previously running the update from 5.0.1 to 5.1.0
    # My Nexus was updated to LMY47D (seen from options)
    # So this can be set up as follows:
    wget https://dl.google.com/dl/android/aosp/shamu-lmy47d-factory-6c44d402.tgz
    mkdir -p "$ASDK/firmware.nexus.6-shamu.5.1.0"
    tar xvfz shamu-lmy47d-factory-6c44d402.tgz -C "$ASDK/firmware.nexus.6-shamu.5.1.0"  --strip 1
    
    # Change into our Firmware Directory
    pushd "$ASDK/firmware.nexus.6-shamu.5.1.0"
    
    ######################################
    # After the update from 5.1.0 to 5.1.1 was successful
    # I checked the version I was running for a recovery
    # point (in the future)
    # My Nexus was updated to LMY47Z (seen from options)
    # So this can be set up as follows:
    wget https://dl.google.com/dl/android/aosp/shamu-lmy47z-factory-33951732.tgz
    mkdir -p "$ASDK/firmware.nexus.6-shamu.5.1.1"
    tar xvfz shamu-lmy47z-factory-33951732.tgz -C "$ASDK/firmware.nexus.6-shamu.5.1.1"  --strip 1
    
    # Change into our Firmware Directory
    pushd "$ASDK/firmware.nexus.6-shamu.5.1.1"
    
    ######################################
    # The below applies to any firmware previously downloaded above
    
    # Extract the system images next
    unzip *.zip -d image
    
    # Reboot your device into bootloader mode (power+volume down or this option):
    adb reboot bootloader
    
    # Flash the 2 critical images that will allow your OTA to properly
    # update itself. You will lose 'root' by doing so.
    # Don't worry; you won't lose your data when you do this.
    fastboot flash system image/system.img
    fastboot flash recovery image/recovery.img
    
    # If you've got other boot modifications (mods) installed (such as TWRP
    # or BusyBox) running your Android device in an unencrypted mode,
    # you should additionally run the following:
    fastboot flash boot image/boot.img
    
    # If you have nothing to lose and just want to completely reset
    # your device to the version you downloaded, you can run the command:
    #    ./flash-all.sh
    # Otherwise, don't do this (to save your data)!! Just move on
    # to the next command identified below to reboot.
    
    # Your device no longer has root access at this point
    # you can reboot it now
    fastboot reboot
    
    popd
    
  3. After the reboot, you will have lost your root privileges. However, you can now upgrade using the OTA update Google provides. [Image: Nexus 6 OTA Update] Once the OTA upgrade is complete, you can re-root your device using the steps already identified above.
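    For convenience, here is a minimal recap of those re-rooting commands (run from $ASDK, assuming the CF-Auto-Root contents are still extracted in autoroot/):

    # Reboot into the bootloader and boot the Auto-Root image once more
    adb reboot bootloader
    IMAGE=$(find autoroot/image -type f -name '*.img')
    autoroot/tools/fastboot-linux boot "$IMAGE"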

Fantastic (Root Requiring) Apps to Install
Here are some great apps worth installing on your rooted device:

If you’re feeling brave and want to start playing with custom firmware:

If I forgot one, or you have one to share; please feel free to let me know and I’ll add it!

TWRP Setup
TWRP allows you to back up your firmware as well as flash and install/uninstall custom firmware. It requires (Stericson) BusyBox or BusyBox Pro to work correctly. Naturally, it also requires that your device is already rooted. Here are the devices TWRP supports. For the Nexus 6, here is the direct link.

You can think of this application as installing 2 key components onto your device:

  • The .apk (front end application), which provides you with some rudimentary control over its backend. This is a simple app you just download from the Google Play store here.

    It provides you with the ability to flash the backend for you. I’ll explain how to flash it manually below; one advantage of doing it manually is that you can use newer firmware that hasn’t made it into the .apk yet. Another is that you know for certain you’ll flash the right ROM, whereas using the .apk gives you the ‘chance’ to brick your device through accidental choice selections.

  • The backend itself. This is the beauty behind TWRP. It’s an image file you need to flash directly to your device (overwriting the stock recovery). Through this you can:
    1. Backup your entire device’s state (everything; including the firmware you’re running)
    2. Restore any previous backup you already created.
    3. Flash a new custom firmware you feel is worth trying without worry; if something goes wrong, you can just restore a backup. Note: It’s presumed that the first thing you do once you get this software installed is back up your current system so you have a good restore point.
  1. Setup your development environment
    # Create an environment variable to simplify the commands
    ASDK="$HOME/Development/Android/SDK"
    # Change to our development directory
    pushd $ASDK
    # Update our path
    export PATH="$PATH:$ASDK/tools:$ASDK/platform-tools"
    
  2. The point of this section is to install the TWRP recovery image onto your device without losing your data.

    Download the TWRP image associated with your device here and prepare its content. At the time I did this, the latest image for the Nexus 6 (shamu) was version 2.8.6.0.

    # 2.8.6.0 shamu Stock (Nexus 6); You will need to visit the following
    # website to get the right firmware for your device if it's not the
    # same version I'm using:
    # - http://dl.twrp.me/shamu/
    
    [ ! -d "$ASDK/twrp/" ] && mkdir -p "$ASDK/twrp/"
    pushd "$ASDK/twrp/"
    # At the time; the latest image was twrp-2.8.6.0-shamu.img
    curl --referer http://dl.twrp.me/shamu/twrp-2.8.6.0-shamu.img \
         -O http://dl.twrp.me/shamu/twrp-2.8.6.0-shamu.img
    
    # Reboot your device into bootloader mode (power+volume down or this option):
    adb reboot bootloader
    
    # Flash the recovery image with TWRP's version
    fastboot flash recovery twrp-2.8.6.0-shamu.img
    
    # You now have TWRP installed
    fastboot reboot
    
    popd
    
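    To confirm the flash took, you can boot straight into the new recovery; TWRP’s menu should come up in place of the stock recovery:

    adb reboot recovery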

Backup / Restore

  • TWRP allows you to perform full backups and restores of your device without the need of a PC connected to it. It doesn’t hurt to use this option. In order to do this, it needs to modify the recovery ROM of your device. But this isn’t really a big deal, since you can easily flash it back to where it was using the firmware provided on Google’s website. Note: This application requires that your device is already rooted.
  • Titanium Backup allows you to back up your applications and even clean out the ones you don’t want. This application is an absolute MUST for all rooted devices!
  • Android SDK supports backups and restores too. Here is a great forum post explaining the whole operation in great detail; but in short, the following commands will do the trick (assuming your device is powered on).
    # Create an environment variable to simplify the commands
    ASDK="$HOME/Development/Android/SDK"
    # Update our path
    export PATH="$PATH:$ASDK/tools:$ASDK/platform-tools"
    
    # Create Backup
    adb backup -f $(date +'%Y.%m.%d-nexus6.backup') -apk -shared -all
    
    # You can later restore this file with the command:
    adb restore YYYY.MM.DD-nexus6.backup
    

    The nice thing about the built-in Android backup and restore commands is that they don’t require a rooted device. Your device does have to be unlocked and connected to your PC (via USB) for this option to work though! Honestly, if your device is already rooted, then TWRP and Titanium Backup are the best solutions.

Reverting Everything Back
Sometimes you may want to revert everything back to how it was. This is an essential step if you have to send your device back to Google (or another carrier) to fulfill its warranty.

  1. Setup your development environment
    # Create an environment variable to simplify the commands
    ASDK="$HOME/Development/Android/SDK"
    # Change to our development directory
    pushd $ASDK
    # Update our path
    export PATH="$PATH:$ASDK/tools:$ASDK/platform-tools"
    
  2. Download the stock firmware associated with your device here and prepare its content.

    # 5.0.1 shamu Stock (Nexus 6); You will need to visit the following
    # website to get the right firmware for your device if it's not the
    # same version I'm using: 
    # - https://developers.google.com/android/nexus/images
    # 2015 Jan - Updated Nexus 6 to v5.1.0; Future flashes for myself
    #            will involve treating that version as my stock instead.
    wget https://dl.google.com/dl/android/aosp/shamu-lrx22c-factory-ff173fc6.tgz
    mkdir -p "$ASDK/firmware.nexus.6-shamu.5.0.1"
    tar xvfz shamu-lrx22c-factory-ff173fc6.tgz -C "$ASDK/firmware.nexus.6-shamu.5.0.1"  --strip 1
    
    # now run the firmware restore
    pushd "$ASDK/firmware.nexus.6-shamu.5.0.1"
    # This command does it all and restarts the device when it's done.
    ./flash-all.sh
    popd
    
  3. Now you need to re-lock the device. The easiest way to do this is power on the device with the power and volume down button held down.

    Note: if you are in a unique position where the volume down button doesn’t work (making it impossible to do the bootloader combination), do the following:

    • Log back into your device using any user (it really doesn’t matter); don’t bother putting your real information in, otherwise you’ll have to wait while your device downloads and re-installs all of your apps. The main thing is that you need to enable Developer mode and Debugging Mode again. You will be prompted to authorize the connection within your device (make sure to do so).
    • Once Debugging Mode is enabled, you can connect your device up to your laptop and type the following:
      adb reboot bootloader
    • You can safely relock your device with the following command:
      fastboot oem lock
      

      After the above command has finished executing, run the following to restart your device:

      fastboot reboot

      The device will reboot at this time.

    Next you will be presented with a screen containing an Android logo as well as a progress bar. It can take about 8 minutes or so to finish.

    When it’s complete, it will restart and act as though it was the first time you’ve ever turned on the device. You can safely power it off now. It’s good to be shipped back to Google!

Sources

Credit
This blog took me a very long time to put together and test! If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.


Permanently Ban Those Caught By Fail2Ban


Introduction
Fail2ban is probably one of the best intrusion-detection tools an administrator can deploy onto their system. This is especially the case if your system is connected to the internet. If you aren’t already using it, consider reading my blog entry here that talks about it.

In this blog, I provide a scripted solution that will capture the current list of banned users from Fail2Ban and make their ban permanent. This allows us to limit the constant emails one might receive about the same people trying to compromise our system(s). For those who aren’t using the emailing portion of Fail2Ban, this script still greatly takes the load off of Fail2Ban because it no longer has to manage the constant repeat offenders. Our logs are less cluttered too.

The script will address several things:

  1. It will handle multiple attacks from people within the same Class C type netmask.
  2. It will allow the permanent bans to stick even after your system is rebooted, unlike Fail2Ban’s list of blocked perpetrators, which is lost if the system (or iptables) is restarted.
  3. It will enforce the use of iptables’ DROP target instead of REJECT. This is a slightly more secure approach to handling those who aren’t ever allowed to come back.
  4. It will support the fact that over time, you may want to add and remove from this new ban list I keep speaking of. Basically you can re-run the steps outlined in this blog again (and again) and not lose the addresses you’ve already blocked.
  5. The script maintains a global list of addresses in a simple delimited format. You can then choose to share this list with other systems (or colleagues) to block the same unwanted users on these systems too.
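Before you run anything below, you can preview the addresses Fail2Ban currently has banned; this REJECT list is exactly what the script harvests:

# List the entries fail2ban is currently rejecting
iptables -L -n | egrep REJECT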

Script Dependencies and Requirements
For this script to work, you can use virtually any Linux/FreeBSD/Unix operating system, as long as you’re also using fail2ban in conjunction with iptables.

The script makes use of sed and gawk to massage the data. These tools are common and available to all operating systems, but not all of them necessarily have them installed by default. Make sure you’ve got them before proceeding:

# Fedora / CentOS 4,5,6 / RedHat 4,5,6
yum install -y gawk sed

# Ubuntu / Debian
apt-get install gawk sed

The Goods
Without further ado, the below code is documented quite heavily so that you can just copy the sections into your terminal screen as the system’s root user. Don’t start a section until you’re done with the previous one.

Although this code works for me, I still must caution you all the same: I will not be held liable for any loss, damage or expense as a result of the actions taken by my script. I’ve only used it without problem on CentOS 6.x. That said, its simplicity should make it work with any other *nix based OS as well.

# Author: Chris Caron
# Date: Tue, Apr 7th, 2015
########################################
# Environment Variables
########################################
# YOU MUST CORRECTLY SET THESE OR THE REMAINING FUNCTIONS OF
# THIS CODE WILL NOT WORK CORRECTLY.
#
# THIS SCRIPT ASSUMES YOU ARE RUNNING AS 'root'
#
# Public Ethernet Device
# For my home system, I have this set to ppp0. You'll want to
# set this to the Ethernet device that harm can venture from,
# such as the internet.
PUBDEV=eth0

# The name of the iptables chain to manage the permanent bans
# within. The name doesn't matter so long that it doesn't
# collide with a chain you're already managing.
# Also, do not change this value in the future because that
# will hinder the ability to upgrade/append new bans easily.
# It is doubtful that its current value will conflict
# with anything. Therefore just leave the name the way it is
# now:
FILTER=nuxref-perm-ban

# The script makes an effort to detect IP Addresses all
# coming from the same Class C type network.  Rather then
# have an entry per IP like fail2ban manages; we group
# them into 1 to speed the look-ups iptables preforms.
# You'll want to identify the minimum number of IP
# addresses matched within the same alike (Class C) network
# before this grouping takes place.
CLASSCGRP_MIN=2

# IP Tables configuration file read during your system
# startup.
IPTABLES_CFG=/etc/sysconfig/iptables

# IPTables Chain Index
# Identify where you want your ban list to be applied
# To reduce overhead, the banlist should be processed 'after'
# some core checks you do.  For example, I have a series of other
# checks I do first, such as allowing already established
# connections to pass through.  I didn't want the ban list
# being applied to these, so for my system, I set this to 11.
# Setting it to 1 is safe because it guarantees it's at least
# processed first on systems who don't actively maintain their
# own custom firewall list.
CHAINID=1

########################################
# Preparation
########################################

# First we build a massive sorted and unique list of
# already banned IP addresses.
# The below is clever enough to include previous content
# you've banned from before (if any exists):
(
   # carry over old master list
   iptables -L -n | awk "/Chain $FILTER/, \$NF ~ /RETURN/" | \
    egrep DROP | sed -e 's/^[^0-9.]*\([0-9]\+\.[0-9]\+\.[0-9]\+\)\.\([0-9/]\+\).*/\1 .\2/g'
   # update master list with new data
   iptables -L -n | egrep REJECT | \
    sed -e 's/^[^0-9.]*\([0-9]\+\.[0-9]\+\.[0-9]\+\)\.\([0-9]\+\).*/\1 .\2/g'
) | sort -n | uniq > list
 
# Now we build a separate list (in memory) that will track
# all of the ip addresses that met our $CLASSCGRP_MIN flag.
CLASSC_LIST=$(cat list | cut -f1 -d' ' | uniq -c | \
    sed 's/^ *//g' | sort -n | \
    awk "int(\$1)>=$CLASSCGRP_MIN" | cut -f2 -d' ')
 
# We eliminate the duplicates already pulled into memory
# from the master list of IPs
for NW in $CLASSC_LIST; do sed -i -e "/^$NW /d" list; done

# Now we scan our list and repopulate them into our master
# list. We place these entries at the head of the file so
# that they'll be added to our iptable ban chain first.
for NW in $CLASSC_LIST; do sed -i "1s/^/$NW .0\/24\n/" list; done

# Using our list of banned IP addresses, we now generate
# the actual iptable entries (into a file called
# 'commands':
(
   # Creates the chain
   echo iptables -N $FILTER
 
   # Build List of Addresses using our list file
   while read line; do           
      IP=$(echo $line | tr -d '[:space:]')
      echo iptables -A $FILTER -s $IP -j DROP;
   done < list
 
   # Allow future iptable processing to continue 
   # after reading through this chain by appending
   # the RETURN action.
   echo iptables -A $FILTER -j RETURN
 
   # Add chain to INPUT
   echo iptables -t filter -I INPUT $CHAINID -i $PUBDEV -j $FILTER
) > commands

########################################
# IPTables (Temporary Installation)
########################################

# Have a look at our commands if you want:
cat commands

# Apply these new rules now with the following command:
sh commands

# The commands generated in the 'commands' text file 
# are only temporary; they will be lost if your
# machine (or iptables) is ever restarted

########################################
# IPTables (Permanent Installation)
########################################
# Consider making a backup of your configuration in case you
# need to roll back
/bin/cp -f $IPTABLES_CFG $IPTABLES_CFG.backup

# Now we generate all of the commands needed to place
# into our iptables configuration file:
sed -e 's/^iptables[ ]*//g' commands | \
    egrep "^-A $FILTER -s " > commands-iptables
# Clean up old configuration
sed -i -e "/^:$FILTER -/d" "$IPTABLES_CFG"
sed -i -e "/^-A $FILTER -/d" "$IPTABLES_CFG"
 
# Now push all of our new ban entries into the iptables file
sed -i -e "/:OUTPUT .*/a:$FILTER - [0:0]" "$IPTABLES_CFG"
sed -i -e "/:$FILTER - .*/a-A INPUT -i $PUBDEV -j $FILTER" "$IPTABLES_CFG"
sed -i -e "/-A INPUT -i $PUBDEV -j $FILTER.*/r commands-iptables" "$IPTABLES_CFG"

# Now perform the following to reset all of the fail2ban
# jails you've got as well as load your new permanent ban setup
service fail2ban stop
service iptables restart
service fail2ban start

# If you have a problem; just roll back your backup you
# created and rerun the 3 commands above again. You can
# have a look at the table with the following command:
iptables -L -n -v

########################################
# IPTables (Optional Tidying)
########################################
# The above inserted the ban list at the top of the INPUT chain.
# The below corrects this by removing the entry(s) from INPUT
# and re-inserting the chain at the desired $CHAINID position.
(
   while [ 1 -eq 1 ]; do
      ID=$(iptables -nL --line-numbers | \
           egrep "^[0-9]+[ \t]+$FILTER " | \
           head -n 1 | \
           sed -e 's/^\([0-9]\+\).*/\1/g')
      [ -z "$ID" ] && break;
      iptables -D INPUT $ID
   done
   # Re insert at the correct $CHAINID
   iptables -t filter -I INPUT $CHAINID -i $PUBDEV -j $FILTER
)

# have another look if you want (if you tidied)
iptables -L -n -v

########################################
# Cleanup (Optional)
########################################
# Remove temporary files when you're done; or save them if you
# want to port this data to another server:
rm -f list commands commands-iptables

You can undo and destroy the new entries at any time using the following:

# disassociate filter from INPUT
while [ 1 -eq 1 ]; do
   ID=$(iptables -nL --line-numbers | \
        egrep "^[0-9]+[ \t]+$FILTER " | \
        head -n 1 | \
        sed -e 's/^\([0-9]\+\).*/\1/g')
   [ -z "$ID" ] && break;
   iptables -D INPUT $ID
done
# flush filter
iptables -F $FILTER
# remove filter
iptables -X $FILTER

Credit
This blog took some time to put together and test! If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

Sources
There were not many sources used to make this entry. Most of it is just shell scripting knowledge I’ve acquired over the years, mixed with some iptables commands. Here are some applicable sources anyway:


Handy Administration Tools & Tricks


Introduction
All administrators have tools they use to make their job easier. This blog will be an ongoing one where every time I find or create a handy tool, I’ll share it. I’ll do my best to provide a quick write up with it and some simple examples as to how it may be useful to you too.
Currently, this blog touches on the following:

Please note that most of these tools can be retrieved directly from the repository I’m hosting. You may wish to connect to it to make your life easier.


genssl: SSL/TLS Certificate Generator and Certificate Validation

genssl is a simple bash script I created to automatically generate private/public key pairs that can be used by all sorts of applications requiring SSL/TLS encryption. This could be used by any number of servers you’re hosting (Web, Mail, FTP, etc). It will also test previously created keys (validating them). It supports generating keys that are to be signed by a Certificate Authority, as well as self-signed and test keys too.

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref genssl

What You’ll Need To Know
It’s really quite simple: SSL/TLS keys are usually assigned to domain names, so a domain name is the only input to the tool in addition to the key type you want to generate (self-signed, test, or registered):

# Generate a simple self-signed Certificate Private/Public
# key pair for the domain example.com you would just type the
# following:
genssl -s example.com
# ^-- creates the following:
#   - example.com.key
#   - example.com.crt
#   - example.com.README

# You can specify as many domain names as you want to
# generate more than one:
genssl -s example.com nuxref.com myothersite.com
# ^-- creates the following:
#   - example.com.key
#   - example.com.crt
#   - example.com.README
#   - nuxref.com.key
#   - nuxref.com.crt
#   - nuxref.com.README
#   - myothersite.com.key
#   - myothersite.com.crt
#   - myothersite.com.README

You can verify the keys with the following command:

# The following scans the current directory
# for all .key, .crt, and/or .csr files and tests that
# they all correctly match against one another.
genssl -v

If you need a certificate signed by a registered Certificate Authority, genssl will get you half way there. It will generate your private key and an unsigned certificate signing request (.csr). You’ll then need to choose a Certificate Authority that works best for you (cost wise, features you need). You provide them the certificate request you just generated, and in return they’ll produce (and provide) the signed certificate you need (.crt). Signed (registered) certificates cost money because you’re paying for someone else to confirm the authenticity of your site when people visit it. This is by far the absolute best way to publicly host a secure website!

# Generate an unsigned certificate request and private key
# pair for the domain example.com with the following:
genssl -r example.com
# ^-- creates the following:
#   - example.com.key
#   - example.com.csr
#   - example.com.README

You’ll notice that you also generate a README file for every domain you create keys for. Some of us won’t generate keys often, so this README provides some additional information explaining how you can use the key. It lists a few Certificate Authorities you can contact, and provides examples of how you can install the key into your system. Depending on the key pair type you generate, the README will provide you with information specific to your cause. Have a look at the contents yourself after you generate a key! I find the information in it useful!

The last thing worth noting about this tool: to create a key pair, you usually provide information about yourself or the site you’re creating the key for. I’ve defaulted all these values to ones you’re probably not interested in using. You can override them by specifying additional switches on the command line. But the absolute easiest (and better) way of doing it is to just provide a global configuration file it can reference, which is /etc/genssl. This file is always sourced (if present). The next file that is checked and sourced is ~/.config/genssl and then ~/.genssl.
Set these with information pertaining to your hosting to ease the generation of the key details. Here is an example of what any one of these configuration files might look like:

# The country code is represented in its 2 letter abbreviated version:
# hence: CA, US, UK, etc
# Defaults to "7K" if none is specified
GENSSL_COUNTRY="7K"
# Your organization/company Name
# Defaults to "Lannisters" if none is specified
GENSSL_ORG="Lannisters"
# The Province and or State you reside in
# Defaults to "Westerlands" if none is specified
GENSSL_PROVSTATE="Westerlands"
# Identify the City/Town you reside in
# Defaults to "Casterly Rock" if none is specified
GENSSL_LOCATION="Casterly Rock"
# Define a department; this is loosely used. Some
# just leave it blank
# Defaults to "" (blank) if none is specified
GENSSL_DEPT=""

There are other variables you can override, but the above are the key ones. The default cipher strength is rsa:2048 and keys expire after 730 days (which equates to 2 years). This is usually the maximum time a Certificate Authority will sign your key for anyway (if registering with them).

You don’t need a configuration file at all for this script to work; there are switches for all of these variables too that you can pass in. It’s important to note that passed-in switches will always override the configuration file, but this is the same with most applications anyway.
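Since the files genssl produces are standard openssl material, you can also sanity-check a generated pair yourself with stock openssl commands (these are independent of genssl itself):

# Show who the certificate was issued to and when it expires
openssl x509 -in example.com.crt -noout -subject -dates

# Confirm the certificate and private key belong together by
# comparing their public key moduli; the two hashes must match
openssl x509 -noout -modulus -in example.com.crt | openssl md5
openssl rsa -noout -modulus -in example.com.key | openssl md5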

Some Further Reading


fencemon: A System Monitoring Tool

fencemon is a beautiful creation by Red Hat; the only thing I did was package it as an RPM. I don’t think it’s advertised or used enough by system administrators. It simply runs as a cron job and constantly gathers detailed system information: the programs currently running, the system memory remaining, network statistics, disk i/o, etc. It captures content every 10 seconds, but does so using tools that are not i/o intensive, so running it will not slow your machine down enough to justify not using it.

Where this information becomes vital is during a system failure (in the event one ever does occur). It will allow you to see exactly what the last state of the system was (processes in memory, etc.) prior to your issue. The detailed information can be used for everyday troubleshooting as well, such as identifying the process that goes crazy overnight and other anomalies that you’re never around to witness first hand.

Why is it called fencemon?
It got its name because its initial purpose was to run in clustered environments, which do a thing called ‘fencing’. Fencing happens when one cluster node sees that another is violating some of the simple cluster rules put in place, or simply isn’t responding anymore. Fencing is effectively the power cycling of the node (so an admin doesn’t have to). In almost all cases (unless there is seriously something wrong with the fenced node), it will reboot and rejoin the cluster correctly. This tool would have allowed an admin to see the final state of the cluster node before it was lost. But any server can crash or go crazy when deploying software in a production environment. We all hope it never happens, but it can. The logging put in place by fencemon can be a life saver!

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref fencemon

What You’ll Need To Know
The constant system capturing that is going on will report itself to /var/log/fencemon/.

The format of the log files is:
hostname-YYYYMMDD-HHMMSS-info-capturetype.log

Once an hour elapses, a new file is written to. Each file contains system statistics in 20 second bursts; as you can imagine, there is A LOT of information here.
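When you’re digging into an incident after the fact, the newest captures are usually the ones you want. A small sketch, assuming the log directory and naming convention described above:

# Show the five most recently written capture files
ls -t /var/log/fencemon/ | head -5

# Jump straight to the latest process snapshot
ls -t /var/log/fencemon/*-info-ps-wchan.log | head -1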

The following capturetype log files (with their associated commands) are gathered in 20 second increments:

  • hostname-YYYYMMDD-HHMMSS-info-ps-wchan.log:
    ps -eo user,pid,%cpu,%mem,vsz,rss,tty,stat,start,time,wchan:32,args
  • hostname-YYYYMMDD-HHMMSS-info-vmstat.log:
    vmstat
  • hostname-YYYYMMDD-HHMMSS-info-vmstat-d.log:
    vmstat -d
  • hostname-YYYYMMDD-HHMMSS-info-netstat.log:
    netstat -s
  • hostname-YYYYMMDD-HHMMSS-info-meminfo.log:
    cat /proc/meminfo
  • hostname-YYYYMMDD-HHMMSS-info-slabinfo.log:
    cat /proc/slabinfo
  • hostname-YYYYMMDD-HHMMSS-info-ethtool-$iface.log:
    # $iface will be whatever is detected on your machine
    # you can also define this variable in /etc/sysconfig/fencemon
    # ifaces="lo eth0"
    # The above definition would cause the following to occur:
    /sbin/ethtool -S lo >> \
       /var/log/fencemon/hostname-YYYYMMDD-HHMMSS-info-ethtool-lo.log
    /sbin/ethtool -S eth0 >> \
       /var/log/fencemon/hostname-YYYYMMDD-HHMMSS-info-ethtool-eth0.log
    

The log files are kept for about 2 days, which can occupy about 750MB of disk space. But hey, disk space is cheap nowadays. You should have no problem reserving a gig of disk space for this info. It’s really worth it in the long run!

Some Further Reading


datemath: A Date/Time Manipulation Utility

I won’t go too deep on this subject since I’ve already blogged about it before here. In a nutshell, it provides an easy way of manipulating time on the fly to ease scripting. This tool is not rocket science by any means, but it simplifies shell scripting a great deal when performing functionality that is time sensitive.

This is effectively an extension of (or the missing features to) the already existing date tool, which a lot of developers and admins use regularly.

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref datemath

What You’ll Need To Know
There are 2 key applications: datemath and dateblock. datemath can ease scripting by returning a time relative to when it was called; for example:

# Note: the current date/time is printed to the
#        screen to give you an idea how the math was
#        applied.
# date +'%Y-%m-%d %H:%M:%S' prints: 2014-11-14 21:49:42
# What will the time be in 20 hours from now?
datemath -o 20
# 2014-11-15 17:49:42

# Top of the hour 9000 minutes in the future:
datemath -n 9000 -f '%Y-%m-%d %H:00:00'
# 2014-11-21 03:00:00

If you use a negative sign, then it looks in the past. There are lots of reasons why you might want to do this; consider this little bit of code:

# Touch a file that is 30 seconds older than 'now'
touch -t $(datemath -s -30 -f '%Y%m%d%H%M.%S') reference_file

# Now we have a reference point when using the 'find'
# command. The below command will list all files that are
# at least 30 seconds old. This might be useful if you're
# hosting an FTP server and don't want to process files
# until they've arrived completely.
find -type f -newer reference_file
# You could move the fully delivered files to a new
# directory with a slight adjustment to the above line
find -type f -newer reference_file -exec mv -f {} /new/path \;

# Consider this for an archive clean-up for a system
# you maintain. Simply touch a file as being 31 days
# older than 'now'
touch -t $(datemath -d -31 -f '%Y%m%d%H%M.%S') reference_file

# now you can list all the files that are older than
# 31 days.  Add the -delete switch and you can clean
# up old content from a directory.
find /an/archive/directory -type f ! -newer reference_file

# You can easily script sql statements using this tool too
# consider that you need to just select yesterdays data
# for an archive:
START=$(datemath -d -1 -f '%Y-%m-%d 00:00:00')
FINISH=$(date +'%Y-%m-%d 00:00:00')
BACKUP_FILE=/absolute/path/to/put/backup/$START-backup.csv
# Now your SQL statement could be:
/usr/bin/psql -d postgres \
      -c "COPY (SELECT * FROM mysystem.data WHERE \
                mysystem.start_date >= '$START' AND \
                mysystem.finish_date < '$FINISH') \
          TO '$BACKUP_FILE';"
# Now that we've backed our data up, we can remove
# it from our database:
/usr/bin/psql -d postgres \
      -c "DELETE FROM mysystem.data  WHERE \
                mysystem.start_date >= '$START' AND \
                mysystem.finish_date < '$FINISH'"

Some Further Reading


dateblock: A Cron-Like Utility

dateblock allows us to mimic the functionality of sleep by blocking the process. The catch is that dateblock blocks until a specific time instead of for a duration of time.

This very subtle difference can prove to be very effective and powerful, especially when you want to execute something at a specific time. Crons are fine for most scenarios, but they aren’t ideal in a clustered environment, where this tool excels. In a clustered environment you may only want an action to occur on the node hosting the application, whereas a cron would require the action to fire on all nodes.
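As a sketch of that clustered use case, you can guard a recurring action with dateblock in a loop and add your own test for whether this node currently hosts the application (is_active_node below is a placeholder you would supply):

# Run an hourly task, but only on the active node
while true; do
   # block until the top of the next hour (minute == 0)
   dateblock -n 0
   # is_active_node is hypothetical; replace it with your
   # cluster's own check
   if is_active_node; then
      /path/to/your/hourly/task
   fi
done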

The Python and C++ libraries enable this tool’s usage even further, since you can integrate it with your application.

Just like datemath, dateblock is discussed in much more detail in my blog about it here.

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref dateblock

# Get the python library too if you want it
yum install -y --enablerepo=nuxref python-dateblock

What You’ll Need To Know
Consider the following:

# block for 30 seconds...
sleep 30

# however dateblock -s 30 would block until the next 30th second
# of the minute is reached. Consider the time:
# date +'%Y-%m-%d %H:%M:%S' prints: 2014-11-14 22:14:16
# dateblock would block until 22:14:30 (just 14 seconds later)
dateblock -s 30

Dateblock works just like a crontab does and supports the crontab format too. I wrote it specifically to provide accuracy for time sensitive applications. If you write your actions and commands in a script under a dateblock command, you can always guarantee your actions will be called at a precise time.

Since it supports cron entries you can feed it things like this:

# The following will unblock if the minute becomes
# divisible by 10.  so this would unblock at the
# :00, :10, :20, :30, :40, and :50 intervals
dateblock -n /10

# the above command could also be written like this:
dateblock -n 0,10,20,30,40,50

# You can mix and match switches to tweak the tool to
# always work in your favour.

A really nice switch that can help with your debugging and development is the –test (-t) switch. This will cause dateblock to ‘not’ block at all. Instead it will just spit out the current time, followed by the time it ‘would’ have unblocked at had you not put it into test mode. This is good for trying to tweak complicated arguments to get the tool to work in your favour.

# using the test switch, we can make sure we're going
# to unblock at time intervals we expect it to. In this
# example, I use the test switch and a cron formatted
# argument, asking it to unblock
# every 2 days at the start of each day (midnight with
# respect to the system's time)
dateblock -t -c "0 0 0 /2"
Current Time : 2014-11-14 22:24:06 (Fri)
Block Until  : 2014-11-16 00:00:00 (Sun)

# Note: there is an extra 0 above that may throw you
#       off... in the normal cron world, the first
#       argument is the 'minute' entry.  Well since
#       dateblock offers high resolution, the first
#       entry is actually 'seconds', then minute, etc.

There are man pages for both of these tools; you’ll get a lot more detail on the power they offer. Alternatively, if you’re even remotely interested, check out my blog entry on them.

The other cool thing is dateblock also has a python library I created.

# Import the module
from dateblock import dateblock

# Here is a similar example as above and we set
# block to false so we know how long were blocking until
unblock_dt = dateblock('0 0 0 /2', block=False)
print("Blocking until %s" % unblock_dt\
              .strftime('%Y-%m-%d %H:%M:%S'))
# by default, blocking is enabled, so now we can just
# block for the specified time
dateblock('0 0 0 /2')

Some Further Reading


nmap (Network Mapper)

nmap is a port scanner & network mapping tool. You can use it to find out all of the open ports a system has, and you can also use it to see who is on your network. This tool is awesome, but its intentions can be easily abused. It’s common sense to scan your own network to make sure that only the essential ports are open to the outside world, but it’s not kind to scan a system you do not own. Hence, this tool should be used to identify your own systems’ weaknesses so that you can fix them; it should not be used to identify another’s weaknesses. For this reason, if you do install nmap on your system, consider limiting permission so that only the root user has access to it.

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref-shared nmap

# Restrict its access (optional, but ideal)
chmod o-rwx /usr/bin/nmap

What You’ll Need To Know
Now you can do things like:

  • Scan your system to identify all open ports:
    # Assuming the server you are scanning is 192.168.1.10
    nmap 192.168.1.10
    
  • Scan an entire network to reveal everyone on it (including MAC Address information):
    # Assuming a class C network (24 masked bits) with
    # a network address of 192.168.1.0
    nmap -n -v -sP 192.168.1.0/24
    

Note: Don’t panic if your system appears to have more ports open than you had thought. Well, at least don’t panic until you determine where you are running your network map test from. You see, if you run nmap on the same system you are testing against, then your test might not prove accurate, since firewalls do not normally deny any local traffic. The best test is to use a second access point to test against the first. Also, if your server sits on more than one network, make sure to test it from a different server on each network it connects to!
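A sketch of that kind of external check, run from a second machine outside the network being tested (substitute your own public address; -Pn skips the ping probe in case ICMP is filtered):

# Check the well-known port range from the outside
nmap -Pn -p 1-1024 your.public.ip.address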

Some Further Reading

  • A blog with a ton of different examples identifying things you can do with this tool.

lsof (LiSt Open Files)

This tool is absolutely fantastic when it comes to performing diagnostics or handling emergency scenarios. This command is only available to the root user; this is intentional due to the power it offers system administrators.

I’m not hosting this tool, simply because it ships with most Linux flavors (Fedora, CentOS, Red Hat, etc). The following should haul it into your system if it’s not already installed:

# just type the following:
yum install -y lsof

# If this doesn't work, the rpm will be found
# on your DVD/CD containing your Linux Operating
# System.

What You’ll Need To Know
This tool, when executed on its own (without any options), simply lists every open file, device, and socket you have in one great big list. This can be useful when paired with grep if you’re looking for something specific.

# list everything opened
lsof

But its true power comes from its ability to filter through this list and fetch specific things for you.

# list all of the processes utilizing port 80 on your system:
lsof -i :80 -t

# or be more specific and list all of the processes accessing
# a specific service you are hosting on an IP listening on
# port 80:
# Note: the below assumes we're interested in those accessing
#       the ip address 192.168.0.1
lsof -i @192.168.0.1:80 -t

# If you pair this with the kill command, you can easily kick
# everyone off your server this way:
kill -9 $(lsof -i @192.168.0.1:80 -t)

But the tool is also great for local administration; let’s say there is a user account on your system with the login id of bob.

# find out what bob is up to and what he's accessing on
# your system this way (you may need to pair this with
# grep since the output can be quite detailed)
lsof -u bob

# Perhaps you just want to see all of the network activity
# bob is involved in. The following spits out a really
# easy to read list of all of the network related processes
# bob is dealing with.
lsof -a -u bob -i

# You can kill all of the TCP/IP processes bob is using with
# the following (if bob was violating the system, this might
# be a good course of action to take):
kill -9 $(lsof -t -u bob)

Perhaps you have a file on your system you use regularly; you can find out who else is using it with the following command:

# List who is accessing a file
lsof /path/to/a/file

Some Further Reading

  • lsof Survival Guide: A Stack Overflow post with some awesome tricks you can do with this tool.
  • More great lsof examples: someone’s blog written specifically about this tool, providing many more documented examples of its usage.

Credit
This blog took me a very long time to put together and test! If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.


Brogue: A Roguelike Game for CentOS 6


Introduction
I don’t intend to post much on Linux gaming; in fact this blog entry is quite uncharacteristic of me. But Brogue is a fantastic roguelike game for most operating systems (Linux, Mac OS X, and Windows). It’s completely open source (license and cost free) and available for anyone who wants it. The game has a huge following on its forum. It’s also worth pointing out that all the development credit for this game goes to the author Brian Walker.
[Image: Pixel Dungeon]
I was inspired to look up Brogue and build it properly for CentOS/Red Hat (including Fedora 20) after playing Pixel Dungeon (GitHub/Wiki), which is available for all Android devices. Pixel Dungeon credits Brogue for all the success it’s had; therefore it only felt right to help the Brogue community out and package it for Red Hat based systems.

How Does It Work?
You start off as a character in a dungeon, denoted by an at sign (@). Using the arrow keys on your keyboard and/or mouse, you simply navigate your way around. Your goal is to reach the 26th floor, where a prize awaits you (The Amulet of Yendor). You’ll face monsters, starvation, curses and a lot of luck the deeper you go. All of the levels, gear, and monsters are randomly generated, so each time you play the game you will have a different experience. Some experiences are hard, some harder, and some virtually impossible. Roguelike games (like this one) imply that you will die. Most games are short; but each time you play, you’ll learn new things and make it further into the game. It’s important not to get discouraged, but rather give the game a fair chance. In fact, since each experience is different for every single play-through, you won’t be bored with the repetitive nature of rinsing and repeating what you’ve already done over and over again. It’s quite the opposite actually. You’ll learn things that didn’t work for you last time; but you may not get to apply that knowledge on the next play-through since you’ll be in a completely different dungeon.
[Image: Brogue Brimstone Battle]
The in-game graphics could be considered impressive to some and not to others. But hopefully most people have come to realize that graphics don’t make a game; it’s the fun of it that truly fulfils the experience.

Again, I’ll state that when you first dive into this game it can be frustrating. You are going to die a lot until you grasp the concepts. The developer thought about those who just want to hack and slash and win every time they play the game. For this reason, he created a file called the seed catalog which dictates what will be available to you as you explore deeper into the dungeon. You can manipulate this catalogue and make your experience more enjoyable (that is… if enjoyable means easy). If you choose this route, perhaps over time as you pick up new strategies, you can limit the catalogue’s contents or just use its defaults. It’s up to you!

Note: If you want to update the catalogue, it becomes available after you start the program for the first time. It’s located at ~/.config/brogue/Brogue seed catalog.txt. You can adjust the content of a specific seed and choose that seed in the game by pressing <ctrl>-N on your keyboard. For example: pressing <ctrl>-N and entering 21 is known to give you a game that is fairly generous to new users. Apparently seed 1331532419 is a good one to start with too (source).
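
Since the catalogue is just a plain text file, you can peek at it (or edit it) with any editor; the quotes matter because of the spaces in the file name:

# Have a look at what each seed has in store for you
less "$HOME/.config/brogue/Brogue seed catalog.txt"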

How Do I Get Brogue?
I thought you’d never ask; I’ve packaged the whole thing along with all of its dependencies. The easiest way to get it is from the repository I host.

# Visit http://nuxref.com/repo/ and set up a link to the
# repository I host, then run the following:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           brogue

I went ahead and packaged this game for Fedora 20 users as well. Since this repository is somewhat incomplete and I don’t intend on completing it, it’s great to have around for the gamers:

# You'll need to be root to run any of the below commands.

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository.
# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enabled=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Fedora 20 - \$basearch
baseurl=http://repo.lead2gold.org/fedora/20/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

# Assumed to mirror the CentOS repo layout; adjust the path if needed
[nuxref-shared]
name=NuxRef Shared - Fedora 20 - \$basearch
baseurl=http://repo.lead2gold.org/fedora/20/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# Now you can easily install it as follows:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           brogue

But if you want the required RPMs directly from this blog you can get them here:

Getting Started
After you’ve installed the game, you’re ready to begin playing it. It’s truly worth reading the tips for new gamers posted on the Official Brogue Website.
Gnome Launcher
You can launch the game from the Menu bar in Gnome, or you can simply type brogue on the command line. A script I prepared will automatically start everything up for you.
Brogue Main Menu

The only other thing worth noting is that all of your configuration and high scores are stored in the ~/.config/brogue directory.
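
If you want to carry your scores and settings to another machine (or just keep them safe), a simple sketch using tar:

# Archive your Brogue settings and high scores
tar -czf brogue-backup.tar.gz -C ~/.config brogue

# ...and restore them later (or elsewhere)
tar -xzf brogue-backup.tar.gz -C ~/.config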

Good Luck!

Sources

  • Official Brogue Website: I can’t encourage you enough to check this link out. Especially for the invaluable tips and playing advice it provides to new users.
  • Brogue Online Forum: A great place to meet new people and discuss the game. You can use this location to report any issues or pick up strategies of your own.
  • Official Brogue Wiki: This is a fantastic resource to learn the environment and about all the things that reside in the game.

I also want to give credit for the graphic I photoshopped as part of my title for this blog. It came from an author who goes by 88grzes, who publicly posted it on DeviantART here.

Multicast File Transfer Solution

Mass File Distribution Using Multicasting


Introduction
Multicasting has its pros and cons just like everything else. But it is often an overlooked solution to a common business problem: How do I transfer a file to multiple (subscribed) locations at the same time?
Most administrators or developers will come up with a solution that involves sending this file to each site using a common protocol such as SFTP, SCP, RSYNC, FTP, RCP, etc. There is no doubt that these solutions will work. However, they require you to send the file to each location individually. Hence, if you need to send a 10MB file to 100 remote locations, you’ll need 1GB (10MB x 100) of local network bandwidth to do it.
Traditional Protocol File Transfers
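
To put numbers on it (a trivial check at the shell; the 10MB file and 100 sites are just the example figures from above):

# 10MB sent once per site, 100 sites
echo "$(( 10 * 100 ))MB total upstream traffic"   # => 1000MB, i.e. ~1GB
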
A Multicasting solution saves you this effort and bandwidth by allowing you to send the file once and have all 100 sites collectively store it onto their systems at (relatively) the same time. Now, with respect to the diagram below (and the rest of this blog), I’m focusing entirely on the amazing efforts Dennis Bush put into UFTP. It is this tool that makes all of this possible. UFTP is one of those diamonds in the rough that I don’t think gets enough attention for the value it brings with it.
Multicast File Transfer using UFTP

Multicast in a Nutshell
A multicast boils down to just being an IP address anywhere in the 224.0.0.0/4 network range. It uses the User Datagram Protocol (UDP) to communicate with anyone who chooses to read and write data from this address. Routers can be configured to share this address across networks so that everyone may join in. It's similar to a chat room on the internet: everyone joins together, and anything one person writes, everyone else in the channel can see too.
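
If you're curious which multicast groups your machine is currently subscribed to, you can check with standard tools (assuming the iproute and net-tools packages are installed):

# List multicast group memberships per interface
ip maddr show

# The older net-tools equivalent
netstat -g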

Multicasting no doubt has its drawbacks. But I figure it’s not worth dwelling on what you can’t do with the (multicast) protocol on its own, since UFTP (the tool this blog focuses on for mass file distribution) has solved many (if not all) of these problems for us.

Why Multicasting, Who Uses it?
Multicasting saves bandwidth. Think of your Cable TV provider; when you change a channel, you’re actually just subscribing to a new multicast address along with the 100+ million other subscribing customers. Multicasting allows Cable TV companies to broadcast every channel at once, and those who are interested in a specific channel will receive it. Regardless of what channel you change to, there is no additional load sustained by the cable provider.

System administrators may not even know they’re using it if they are managing a cluster. Most clusters use multicast as a way of passing their heartbeats to all other nodes in an effort to maintain quorum.

Now, with respect to UFTP: if you visit its official website, you’ll see that the Wall Street Journal used this tool in the past to send their newspapers to remote printing plants and other outlets across the United States.

UFTP in a Nutshell
UFTP is a file transfer solution that wraps itself around the multicast protocol while addressing the deficiencies that come with it. It is a well designed server/client application that allows one to easily transfer files from one location to as many clients as you want in one shot. It can drastically save your company on bandwidth and has been around long enough to be deemed a reliable business decision. It’s worth noting that UFTP was written in C, making it incredibly fast and lightweight on system resources. It works on all platforms, but I’ll specifically focus on CentOS and the rpms I’ve packaged.

To use the software, you simply pass it a file and it looks after all the dirty work of guaranteeing its delivery to all of the clients subscribed to the address it broadcasts on. The author thought of everything when developing his tool; encryption of the data is always available to you as well if you prefer. He developed 3 main tools that you can manipulate to feed your data anywhere.
A Simple UFTP Environment

  • uftp (The Server – Docs): A simple command line tool that takes a file and broadcasts it to all of the listening clients. This is the exact reverse of the traditional ftp, sftp, scp, etc. tools, where they act as the client and need to connect to a server to perform their tasks.
  • uftpd (The Client – Docs): A daemon that starts up and listens to a multicast address for new files being broadcasted by the server. Again, you’ll notice that with multicasting, roles are reversed from what you’d normally be used to: the traditional protocols (FTP, SFTP, SCP, etc.) usually have daemons hosting the server side of things, not the client. You’ll want to run this daemon at all locations you wish to receive content broadcasted by the server (uftp).
  • uftpproxyd (The Proxy – Docs): The proxy allows you to tunnel your uftp multicast across a network that doesn’t support multicasting (such as the internet). This allows clients in other controlled networks (separated by one you can’t control) to additionally be part of your file distribution.

    The UFTP Proxy has 3 main modes it operates as:

    • Server Mode: This is used for pushing content upstream. This would effectively sit on the same server you call ‘uftp’ from. It listens just like any other client would and passes all information it receives to connected UFTP Proxies configured with the client mode.
    • Client Mode: This communicates with a UFTP Proxy configured for Server Mode and mimics the ‘uftp’ by broadcasting the same data to the local multicast address. This allows all local UFTP Clients to retrieve the file(s).
    • Response Mode: This is used to take some of the load off the server when there are many clients. Although the file is only broadcasted once, there is a lot of handshaking that goes on at the start and end of the transmission to guarantee all data was delivered successfully. Depending on the different networks, their medium, and their reliability, a server may need some extra help with the handshaking if there are a lot of clients involved struggling to retrieve the data.

    Below shows an example of how the proxy can be utilized:
    A UFTP Proxy Configuration
    Note: Pinholes are used as a way to connect back to the UFTP Proxy Server, effectively requiring no firewall changes to be made at each client site (only at the server).

    Note: A single Proxy Server can only be configured to connect to a single Proxy Client; it’s a 1 to 1 mapping. If you have multiple sites you need to connect to, you’ll need to set up an individual Proxy Server for each Proxy Client you need to serve.

    If you’re interested in the proxy portion of UFTP, you can read the official documentation about it found here.

Hand Over Everything
I wouldn’t have it any other way:

  • uftp-4.5.1-1.el6.nuxref.x86_64.rpm: The server is just the uftp tool and is really easy to use. This assumes you’ve got clients configured somewhere listening though!
    # just type uftp file.you.want.to.send
    uftp mytestfile
    
    # You can also send multiple files by just adding them to the end of
    # the string:
    uftp mytestfile1 mytestfile2 mytestfile3
    
    # I also wrote a small script that works the same way and sends stuff encrypted
    uftpe mytestfile1 mytestfile2 mytestfile3
    
  • uftp-client-4.5.1-1.el6.nuxref.x86_64.rpm: The client listens for data sent from the uftp server. You can use the RC Script I prepared to greatly simplify this tool:
    # as root; use the RC Script I wrote to make running the
    # client really easy
    service uftpd start
    

    Filtering is optional; if you don’t specify any, then by default there are no restrictions. Sometimes this is satisfactory (especially in closed or isolated networks). I attempted to simplify UFTP’s built in server filtering for those who want to use it though. You see, not only can you encrypt the data you transmit from the server, but you can also restrict the client to only accept connections from a specific UFTP server residing at a specific host with a specific server id. You can even go as far as only accepting servers with a specific fingerprint (created by their private key).

    # simply drop a configuration file in /etc/uftpd/servers.d 
    # Examples of accepted entries (taken from uftpd man page):
    # 0x11112222|192.168.1.101|66:1E:C9:1D:FC:99:DB:60:B0:1A:F0:8F:CA:F4:28:27:A6:BE:94:BC
    # 0x11113333|fe80::213:72ff:fed6:69ca
    #
    # You can have as many files as you want in this directory with as many entries in each
    # as you want.  If you add or remove new files, you'll need to restart the uftpd service
    # since it's only read in at the start.
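    
    # For example, to accept only the first sample server above
    # (example.conf is a hypothetical file name; swap in your own
    # server's ID, address and, optionally, its fingerprint):
    echo '0x11112222|192.168.1.101|66:1E:C9:1D:FC:99:DB:60:B0:1A:F0:8F:CA:F4:28:27:A6:BE:94:BC' \
        > /etc/uftpd/servers.d/example.conf
    
    # Then restart the client so the new filter is read in
    service uftpd restart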
    

    So by adding this bit of complexity, I know you are asking yourself:

    • Q: How do I know what my Server ID is?
      A: By default (using the rpm I’ve packaged), every server has an ID of 0x00000001 (decimal value of 1 – one) if you use the uftpe script I wrote (for encrypted transfers). To change your server id, do the following:

      # Define a new id (other than 1)
      NEWID=2
      
      # Simply store this ID inside of the $HOME/.uftp config
      # file as it's HEX value:
      
      # Ensure the config directory exists
      [ ! -d $HOME/.uftp ] && mkdir -p  $HOME/.uftp
      
      # clear any old entry you may have set if you want:
      [ -f $HOME/.uftp/config ] && sed -i -e '/^UFTP_UID=/d' $HOME/.uftp/config
      
      # Set the new entry
      printf 'UFTP_UID=0x%.8x\n' $NEWID >> $HOME/.uftp/config
      

      If however you're just using the uftp tool on its own, it takes on the IP address of the host it’s running on (as its hex value) unless you explicitly specify -U 0x00000002 (or whatever ID you want it to assume). Here is a quick example of how you can convert an IP address to its hex value at the shell:

      # Define the address you want to convert
      IP_ADDR=192.168.1.128
      # Now convert it (don't forget the brackets!)
      (
         printf '0x'
         printf '%02X' $(echo "${IP_ADDR//./ }"); echo
      )
      # example above outputs: 0xC0A80180
      
    • Q: How do I know what my Server Fingerprint is?
      A: First off, uftp will not ever connect to this server with this option set if you choose ‘not’ to use the uftpe script I wrote or provide the necessary switches to enforce encryption. Setting a filter on your server's fingerprint is also a way of saying you only accept connections that are encrypted. This entry is completely optional. Your fingerprint ID is stored in $HOME/.uftp/uftp.key. Again, this key only exists if you're using the uftpe tool; otherwise the key is wherever you chose to store it. Fetch the id as follows (the below is intended to be called on the server where the uftp.key file exists):

      # fetch the details from the key:
      uftp_keymgt $HOME/.uftp/uftp.key
      

      Don’t panic if you don’t have a key; they are generated automatically when you first run the uftpe tool. The easiest way to pre-create one is just to call uftpe by itself (without any parameters). Yes, it’ll spit out an error telling you that you didn’t provide it enough options; but the script will still generate a key for you automatically.

    I tried to think of everything; logging and log rotation are already built in and included. You can locate the logs in /var/log/uftpd.log.
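    
    A quick way to keep an eye on incoming transfers:
    
    # Watch the client log as files arrive
    tail -f /var/log/uftpd.log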

  • uftp-proxy-4.5.1-1.el6.nuxref.x86_64.rpm: the proxy service can bridge two networks that don’t support multicasting together. I haven’t spent too much time with this area since my environment hasn’t required me to. If you have any information to share about it, feel free, and I can expand this area.
  • uftp-debuginfo-4.5.1-1.el6.nuxref.x86_64.rpm (Optional): This is only required if you are debugging this tool.
  • uftp-4.5.1-1.el6.nuxref.src.rpm (Optional): The source RPM for those interested in building the software for themselves.

Some Things To Consider:
When something is explained to be this easy, there is always a catch. I won’t lie, there are a few, which means the UFTP solution may not be for everyone. They are as follows:

  • Multicasting isn’t enabled on most routers by default. If your recipients reside in a network you don’t manage, you’ll want to ask the local administrator to make sure their routers have (Layer 2) multicasting enabled.
  • Multicasting can be a pain in the butt to troubleshoot; although newer routers won’t give you any grief, some of the older ones can fail to relay content correctly to clients who’ve connected to the same multicast address. With negativity aside though: when it works, it works great! My point is: you’ll need to make sure you’re using (networking) hardware that is relatively new (no older than 2010) where the firmware adequately supports Layer 2 multicasting.
  • The UFTP Proxy attempts to resolve the problem of linking your networks together when they're separated by ones you don’t control. But consider that unless there are multiple recipients located on each network you connect your proxy to, you aren’t saving any bandwidth by choosing this route.
  • Only uftp clients who are online will receive content sent by the uftp server. This can be a deal breaker for some especially if the product being delivered ‘MUST’ reach all of the clients. UFTP does not track who is online and who isn’t. It simply delivers content to those who are present at that time. It’s just like how you can’t watch a television show if you haven’t told your TV’s receiver what channel to be on.
  • If you’re using restrictive firewall settings (hopefully you are!), you’ll want to make sure multicasting is allowed into your client box with the following:
    iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT

    It would be worth adding this to your /etc/sysconfig/iptables file as well.
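    
    On CentOS 6, the stock init scripts can persist it for you (this
    writes the currently active rule set out to /etc/sysconfig/iptables):
    
    # After adding the rule above, save the running rules
    service iptables save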

Repository
Please note that all of these packages are also within my repository if that makes things easier for you and your deployment.

Sources

  • UFTP: Dennis Bush did a fantastic job with this tool. It is the key to making multicast file transmission powerful and reliable.
  • Multicasting: information in greater depth can be found here.
Usenet Solution

A Usenet Solution For CentOS 6


The Indexer
Google uses its own web crawlers to scan the entire World Wide Web just to create an index from it. For each website they find, they track its name, its content, the language it’s written in, and so on. The result of them doing this is: we get to use their fantastic search engine! A search engine that has made our lives incredibly easy by granting us fast and easily accessible information at our fingertips. Well… Usenet needs the same thing done to it too; and Google isn’t doing it for us! Usenet is a very big world of its own, and without indexing, it’s a lot harder to get around in (but not impossible). Thankfully Usenet is nowhere near the size of the World Wide Web, and indexing it is very possible. We can even index it with the personal computer(s) we run at home!

Since just about anyone can index Usenet, one has to think: why index Usenet ourselves if someone’s already doing it for us elsewhere? In fact, there are many sites (and tools) that have done all the indexing of Usenet (some better than others) and are willing to share it with others (us). But it’s important to know: it can take a lot of server power, disk space, and network consumption for these site administrators to constantly index Usenet for us. Since most (if not all) of the sites in this list are just hobbyists doing it for fun, it gets expensive for them to maintain things. For that reason some of them may charge or ask for a donation. If you want to use their services, you should respect their measly request of $8USD to $20USD for a lifetime membership. But don’t get discouraged, there are still a lot of free ones too! Some get by with just advertisement banners on their website or just index once every few days. Other indexers may run constantly (every 1-5 minutes), granting you the latest data available at all times.

Alternatively, we can go as far as running our own Usenet indexer (such as NewzNab), just as the hobbyists did (mentioned above). NewzNab will index Usenet on a regular basis. With your own indexer, you can choose to just index content that appeals to you. You can even choose to offer your services publicly if you want. Just keep in mind that Usenet is huge! If you do decide to go this route, you’ll find it a very CPU and network intensive operation. You may want to make sure you don’t exceed your Internet Service Provider's (ISP) download limits.

Now back to the Google analogy I started earlier: when you find a link on Google you like, you simply click on it and your browser redirects you to the website you chose; end of story. However, in the Usenet indexing world, once you find something of interest, the Usenet Indexer will provide you with an NZB file. An NZB file is effectively a map that identifies where your content can be specifically located on Usenet (but not the data itself). An NZB file is to Usenet what a Torrent file is to a BitTorrent client. Both NZB and Torrent files provide the blueprints needed to mine (acquire) your data. Both NZB and Torrent files require a Downloader to perform the actual data mining for you.

The Downloader
The Downloader takes an NZB file it’s provided and uses it to acquire the actual data the file maps to. This is the final piece of the puzzle!
Of the list below, you really only need to choose one Downloader. I just listed more than one to give you alternatives to work with. My personal preference is NZBGet because it is more flexible. But its flexibility can also be very confusing (only at first). Once you get over its learning curve (and especially the initial configuration), it’s a dream to work with. Alternatively, SABnzbd may be better for the novice if you're just starting off with Usenet and don’t want much more of a learning curve than you already have.

Either way, pick your poison:

Title Package Details
NZBGet rpm/src NZBGet is written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources.
Community / Manual
**Note: I created this patch in a recent update rebuild (Jul 17th, 2014) to fix a few directory paths so the compression tools (unrar and 7zip) can work right away. I also added these compression tools as dependencies to the package so they’ll just be present for you at the start.
**Note: I also created this patch in a recent update rebuild (Nov 9th, 2014) to allow the RC Script to take optional configuration defined in /etc/sysconfig/nzbget.

You can install NZBGet using the steps below:

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Install NZBGet
yum install -y nzbget \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Grab Template
cp /usr/share/nzbget/nzbget.conf ~/.nzbget

# Protect it
chmod 600 ~/.nzbget

# Start it Up (as a non-root user):
nzbget -D

# You should now be able to access it via: 
#     http://localhost:6789/
SABnzbd n/a SABnzbd is an Open Source Binary Newsreader written in Python.
Community / Manual
Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# There is no RPM installer for this one, we just
# fetch straight from their repository.
# Install git (if it's not already)
yum install -y git

# Grab a snapshot of SABnzbd
git clone https://github.com/sabnzbd/sabnzbd.git SABnzbd

# Start it Up (as a non-root user):
python SABnzbd/SABnzbd.py \
    --daemon \
    --pid $(pwd)/SABnzbd/sabnzbd.pid

# You should now be able to access it via: 
#     http://localhost:8080/

NZBGet Processing Scripts
One of NZBGet's strongest features is its ability to process content it downloads before and after it’s received. The Post Processing (PP) has specifically been one of NZBGet’s greatest features. It allows separation between the core function of NZBGet (which is to download the content identified in NZB files) and whatever you want to do with that content afterwards. Post Processing could do anything, such as cataloguing what was received and placing it into an SQL database. Post Processing could rename the content and sort it for you into separate directories depending on what it is. Post Processing could be as simple as just emailing you when the download completes, or posting on Facebook or Twitter. You’re not limited to just 1 PP Script either; you can chain them and run a whole slew of them one after another. The options are endless.

I’ve taken some of the popular PP Scripts from the NZBGet forum and packaged them in self installing RPMs as well to make life easy for those who want them. Some of these packages require many dependencies and ports to make the installation smooth. Although I link directly to the RPMs here, you are strongly advised to link to my repository with yum if you haven’t already done so.

Title Package Provides Details
Failure Link rpm/src FAILURELINK If a download fails, the script sends info about the failure to the indexer site, so a replacement NZB (same movie or TV episode) can be queued up if available. The indexer site must support the DNZB-Header “X-DNZB-FailureLink”.

Note: The integration works only for downloads queued via URL (including RSS). NZB-files queued from local disk don’t have enough information to contact the indexer site.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-failurelink \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 
nzbToMedia rpm/src DELETESAMPLES
RESETDATETIME
NZBTOCOUCHPOTATO
NZBTOGAMEZ
NZBTOHEADPHONES
NZBTOMEDIA
NZBTOMYLAR
NZBTONZBDRONE
NZBTOSICKBEARD
Provides an efficient way to handle post processing for
CouchPotatoServer, SickBeard, Sonarr, Headphones, and Mylar
when using NZBGet on low performance systems like a NAS.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-nzbtomedia \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this is that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of them all.

Subliminal rpm/src SUBLIMINAL Provides a wrapper that integrates subliminal (which fetches subtitles given a filename or filepath) with NZBGet. Subliminal uses the correct video hashes via the powerful guessit library to ensure you have the best matching subtitles. It also relies on enzyme to detect embedded subtitles and avoid retrieving duplicates.

Multiple subtitle services are available: opensubtitles, tvsubtitles, podnapisi, addic7ed, and thesubdb.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-subliminal \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

*Note: python-subliminal (what this PP Script is a wrapper to) had some issues I had to address. For one, I eliminated the entire PYPKG/subliminal/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this is that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of them all.
**Note: Subliminal was written using Dict Comprehensions (PEP 274), a feature that wasn’t introduced until Python 2.7. Unfortunately, the developers had no interest in resolving this and closed the issue with ‘Upgrade to Python 2.7 or Python v3.3’. For this reason, subliminal does ‘not’ work at all with stock CentOS or Red Hat 6.x. So… I fixed that. Now I can proudly tell you that the copy of subliminal I host on my repository is 100% compatible with python 2.6 (this includes a few backported Logging features too).

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

TidyIt rpm/src TIDYIT TidyIt integrates itself with NZBGet’s scheduling and is used
to perform basic house cleaning on a media library. TidyIt
removes orphaned meta information, empty directories, and unused
content. It’s the perfect OCD tool for those who want to eliminate
any unnecessary bloat on their filesystem and media library.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-tidyit \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

Video Sort rpm/src VIDEOSORT With post-processing script VideoSort you can automatically organize downloaded video files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-videosort \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this is that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of them all.

Automated Index Searchers
These tools search for already indexed content you’re interested in and can be configured to automatically download it for you when it’s found. They don't do the downloading themselves; instead, they automate the connection between your chosen Indexer and Downloader (such as NZBGet or SABnzbd). For this reason, these tools do not actually search Usenet at all and therefore have very little overhead on your system (or NAS drive).

Title Package Details
Sonarr
nzbdrone-icon
rpm/src Automatic TV Show downloader
Formerly known as NZBDrone, it has since been renamed Sonarr. Packaging this was only made possible because of the blog I wrote on mono v3.x.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbdrone \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

# Start it Up (as a non-root user):
nohup mono /opt/NzbDrone/NzbDrone.exe &

# You should now be able to access it via: 
#     http://localhost:8989/
Sick Beard
sickbeard-icon
n/a (Another) Automatic TV Show downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Sick Beard
# Note that we grab the master branch, otherwise we default
# to the development one.
git clone -b master https://github.com/midgetspy/Sick-Beard.git SickBeard

# Start it Up (as a non-root user):
python SickBeard/SickBeard.py \
   --daemon \
   --pidfile $(pwd)/SickBeard/sickbeard.pid

# You should now be able to access it via: 
#     http://localhost:8081/
CouchPotato
couchpotato-icon
n/a Automatic movie downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of CouchPotato
git clone https://github.com/RuudBurger/CouchPotatoServer.git CouchPotato

# Start it Up (as a non-root user):
python CouchPotato/CouchPotato.py \
   --daemon \
   --pid_file CouchPotato/couchpotato.pid

# You should now be able to access it via: 
#     http://localhost:5050/
Headphones
headphones-icon
n/a Automatic music downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Headphones
git clone https://github.com/rembo10/headphones Headphones

# Start it Up (as a non-root user):
python Headphones/Headphones.py \
   --daemon \
   --pidfile $(pwd)/Headphones/headphones.pid

# You should now be able to access it via: 
#     http://localhost:8181/
Mylar
mylar-icon
n/a Automatic Comic Book downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Mylar
git clone https://github.com/evilhero/mylar Mylar

# Start it Up (as a non-root user):
python Mylar/Mylar.py \
   --daemon \
   --pidfile $(pwd)/Mylar/Mylar.pid

# You should now be able to access it via: 
#     http://localhost:8090/

Mobile Integration
nzb360-logo
There are some fantastic apps out there that allow you to integrate your phone with the applications mentioned above, letting you manage your downloads from wherever you are. A special shout out to NZB 360, who recently had his app pulled from the Google Play Store for no apparent reason and had to set up shop outside of it. I can say first hand that his application is amazing! You should totally consider it if you have an Android phone.

Usenet Providers
For those who don’t have Usenet already, it does come at an extra cost and/or fee. The cost averages anywhere between $6 to $20 USD/month (anything more and you’re paying too much). The reason for this is that Usenet is a network completely isolated from the Internet. It’s comprised of a completely isolated set of interconnected servers. While the internet is comprised of hundreds of millions of servers all hosting specific content, each Usenet server hosts the entire Usenet database… it hosts everything. If anything is uploaded to Usenet, all of the interconnected servers update themselves with their own local copy of it (to serve us). For this to happen, their servers have to have petabytes of storage. The fee they charge you just goes to support their operational costs such as bandwidth, maintenance, and the regular addition of storage to their infrastructure. There is very little profit to be made for them at $8 a person. Here is a breakdown of a few providers (in alphabetical order) I’m aware of and support:

Provider | Server Location(s) | Notes | Average Cost
Astraweb | US & Europe | Retention: 2158 Days (5.9 Years) | $6.66USD/Month to $15USD/Month (see here for details)
Usenet Server | US | Retention: 2159 Days (5.9 Years); has a free 14 day trial | $13.33USD/Month to $14.95USD/Month (see here for details)

*Note: Table information was last updated on Jul 14th, 2014. Prices are subject to change as time goes on and this blog post isn’t updated.
**Note: If you have a provider that you would like to be added to this list… Or if you simply spot an error in pricing or linking, please feel free to contact me so I can update it right away.

Why do people use Usenet/Newsgroups?

  • Speed: It’s literally just you and another server; a simple 1 to 1 connection. Data transfer speeds will always be as fast as your ISP can carry your traffic to and from the Usenet Server you signed up with. Unlike torrents, content availability isn’t governed by how many seeders and leechers are sharing it. You never have to deal with upload/download ratios, maintain quotas, or sit idle in someone’s queue who will serve data to you eventually.
  • Security: You only deal with secure connections between you and your Usenet Provider; no one else! Torrents can have you maintaining thousands of connections to different systems and sharing data with them. With BitTorrent setups, trackers publicly advertise what you have to share and what you're trying to download. Your activity is visible to anyone using the same tracker that you’re connected to. Not only that, but most torrent connections are insecure as well, which allows virtually anyone to view what you’re doing.

Please know that I am not against torrents at all! In fact, now I’ll take the time to mention a few points where torrents excel over Usenet:

  • Cost: It doesn’t usually cost you anything to use the torrent network. It all depends on the tracker you're using of course (some private trackers charge for their usage). But if you’re just out to get the free public stuff made available to us, there are absolutely no costs at all to use this method!
  • Availability: Usenet is far from perfect. When someone uploads something to their Usenet Provider, by the time that new content propagates to all of the other Usenet Servers, there is a small chance the data will be corrupted. This happens on Usenet all of the time. To compensate for this, Usenet users anticipate corruption (sad but true). These people kindly post Parchive files to Usenet to complement whatever they previously uploaded. Parchive files work similar to how RAID works; they provide building blocks to reassemble data in the event it’s corrupted. Corruption never happens with torrents unless the person hosting decides to host corrupted data. Any other scenario would simply be because your BitTorrent client had a bug in it.
  • Retention: As long as someone is willing to seed something, or enough combined leechers can reconstruct what is being shared, the data will stay alive in the BitTorrent world. However with Usenet, the Usenet Server is hosting EVERYTHING, which means it has to maintain a lot of data on a lot of disk space! For this reason, a retention period is inevitably met: a time is eventually reached where the Usenet Server purges (erases) older content from its hard disks to make room for the new stuff showing up every day.

Honestly, at the end of the day, both Torrents and Usenet Servers have their pros and cons. Each of us will always weigh them differently. What’s considered the right choice for one person might not be the right one for another. Heck, just use both depending on your situation! :)

Update Jan 10th, 2015:
Provided RPMs on the repository (and referenced in this blog) updated to current stable versions.

Source

.NET Solution

Mono 3.x for CentOS 6


Introduction
Mono allows us to run Windows applications in our Linux environment. It is an open source implementation of Microsoft’s .NET Framework. The problem is, CentOS (and Red Hat) 6.x ship with Mono v2.4, which is a little outdated. You can’t take advantage of the newer apps .NET developers are writing. In fact, you can’t run anything that requires a version newer than v3.5 of the .NET runtime libraries.

In addition to that, Mono v3.4 grants your CentOS system support for .NET applications that weren’t otherwise available to you in v2.4:

Compatibility Mono v2.4 Mono v3.4
.NET 1.0 Yes No (dropped support)
.NET 2.0 Yes Yes
C# 3.0 Yes Yes
ASP.NET 2.0 Yes Yes
.NET 3.5 Partial Yes
.NET 4.0 No Yes
.NET 4.5 No Yes
C# 4.0 No Yes
ASP.NET 4.0 No Yes
C# 5.0 No Yes

What’s so special about the repackaging you did?
Well first of all… it’s actually an RPM package. It doesn’t require you to haul in a ton of development libraries and compile everything from scratch. Another point is that Mono v2.4 (shipped with CentOS & Red Hat 6) had many patches applied to it. These patches forced Mono to conform to the common directory structure used natively by our Operating System. It took me several hours to recreate all of these patches, forcing Mono v3.4 to comply with the same standards.

Finally (and at the time of writing this blog), this is the first package of Mono v3.4 that I’ve found that can be installed via an RPM and doesn't require you to recompile everything yourself. Hence you don’t even need to haul in any development libraries at all; Mono will just work as is. Since my repackaging was based off of the original, I tried to keep all of the external rpm packages the same. That said, I did get a little confused with all of the new packages and binary tools that ship with Mono v3.x. Since I’m not a Microsoft developer, I tried to sort these new packages as best I could. Please feel free to let me know how I can improve this package if you notice anything I’ve done wrong.

Just hand over all your work already!
Absolutely, here they are:

Binary Packages:

Note: Mono (v3) was a bit picky about the SQLite version it referenced. I had to update that package to a slightly newer version as well for everything to play nicely. Only if you intend to haul in the mono-data-sqlite-*.rpm package will you be required to haul in this newer version. I’ve already provided it on my repository, but for the sceptics who want to build it themselves, I’ll include those instructions too.

Source Packages:

Debug Info Packages:

You can get it from my repository too
Content/Repositories are grouped as follows:

  • nuxref – Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared – Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don’t maintain. It may not always be up to date, but it can be thought of as a baseline (where things worked for me with respect to this blog). If I get around to testing a new package and everything is still working fine, I’ll update the repositories here. I also re-sign everything with my own key to mark that I’ve tested it.

Install all of the necessary packages:

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository, in which I've rebuilt a few
# packages to support PostgreSQL as well as fixed bugs in
# others. This step will really make your life easy and lets
# us compare apples to apples with package versions. It also
# allows you to haul in a working setup right out of the box.

# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enabled=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# You should use yum priorities (if you're not already) to
# eliminate version problems later, but if you intend on 
# just pointing to my repository and not the others, then you'll
# be fine.
yum install -y yum-plugin-priorities

################################################################
# Install Mono
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-core
################################################################
# Install additional packages too if you wish (depending on your
# needs.
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-web
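
Once mono-core is installed, you can give it a quick smoke test. Here's a minimal sketch (hello.cs is just a scratch file; the dmcs compiler typically ships in the mono-devel package, so haul that in as well if you want to compile things yourself):

# Write a trivial C# program
cat << _EOF > hello.cs
using System;

public class Hello
{
    public static void Main()
    {
        // Prints a line to prove the runtime works
        Console.WriteLine("Hello from Mono v3.4!");
    }
}
_EOF

# Compile it (dmcs targets the 4.x profile)
dmcs hello.cs

# Run the resulting assembly
mono hello.exe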

I Never Trust Your Stuff; Let Me Do It Myself
Sure! First you’re going to need to fetch all the patches I had to create (plus the old ones carried forward from Mono v2.4):

You can additionally view the RPM SPEC file I created here.

First prepare our development environment with mock if you haven’t already:

# Install 'mock' into your environment if you don't have it already
# This step will require you to be the superuser (root) in your native
# environment.
yum install -y mock

# Grant your normal every day user account access to the mock group
# This step will also require you to be the root user.
usermod -a -G mock YourNonRootUsername
# Download the official mono packages from their official
# hosting site:
wget http://origin-download.mono-project.com/sources/mono/mono-3.4.0.tar.bz2

# Download all of the building blocks you'll need
wget --output-document=mono.spec https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAiqD2KjHhakweKY_mkLLPba/20140713/mono/mono.spec?dl=1
wget --output-document=monodir.c https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABZHv5NeWFICyAAAw--eiJoa/20140713/mono/monodir.c?dl=1
wget --output-document=mono.snk https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADoY6UvThpcQbUHhs7XUecsa/20140713/mono/mono.snk?dl=1
wget --output-document=lc https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACkja0kNxmO1ytHOIw523HTa/20140713/mono/lc?dl=1
wget --output-document=mono-3.4-ppc-threading.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAKrjdqRR826osJjbqZIu5la/20140713/mono/mono-3.4-ppc-threading.patch?dl=1
wget --output-document=mono-1.2.3-use-monodir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABHvxieGDqU8eDB24ghS_Dua/20140713/mono/mono-1.2.3-use-monodir.patch?dl=1
wget --output-document=mono-2.2-uselibdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABSjbMfIRj5JB7HKWydQVpja/20140713/mono/mono-2.2-uselibdir.patch?dl=1
wget --output-document=mono-2.0-monoservice.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADsL0DBfI0VixRAw6uI0Vkpa/20140713/mono/mono-2.0-monoservice.patch?dl=1
wget --output-document=mono-3.4-libgdiplusconfig.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAHvISVzxIPq9xCmw2m2tcPa/20140713/mono/mono-3.4-libgdiplusconfig.patch?dl=1
wget --output-document=mono-3.4-libdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACHrlv_iSp36jhSeOn4ki0fa/20140713/mono/mono-3.4-libdir.patch?dl=1
wget --output-document=mono-3.4-POSIX_ARG_MAX.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADSN5WhjyqTQptMoWthDnYHa/20140713/mono/mono-3.4-POSIX_ARG_MAX.patch?dl=1
wget --output-document=mono-3.4.xamarin.BZ18690.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAwWsEKkOlnzYZCshp29wuwa/20140713/mono/mono-3.4.xamarin.BZ18690.patch?dl=1

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install libpng-devel libjpeg-devel giflib-devel \
   libtiff-devel libexif-devel libX11-devel fontconfig-devel \
   gettext make gcc-c++ bison glib2-devel pkgconfig \
   libicu-devel libgdiplus-devel zlib-devel automake libtool \
   gettext-devel mono-core mediainfo mysql-devel \
   postgresql-devel sqlite-devel

# Copy our packages into our environment
mock -v -r epel-6-x86_64 --copyin mono.spec /builddir/build/SPECS
mock -v -r epel-6-x86_64 --copyin \
    *.patch \
    mono-3.4.0.tar.bz2 \
    mono.snk \
    lc \
    monodir.c \
    /builddir/build/SOURCES

# Shell into our environment
mock -v -r epel-6-x86_64 --shell

# Change to our build directory
cd builddir/build

# Enable Bootstrapping for the first time
# mono actually requires 'mono' (itself) to build. Weird Right?
# But still necessary! For this reason I prepared an easier
# way of enabling bootstrapping for your first build.
#
# Once you install the binaries created from your first build
# we can rebuild the package again (but this time without
# bootstrapping). The purpose of this is to ensure the mono
# binaries and packages we created are equivalent to the
# the bootstrapped content.
# So... on with the bootstrapping; Note: this will take
# 20 to 30 minutes depending on how fast your system is.
rpmbuild -ba --define "_with_bootstrap=1" SPECS/mono.spec

# Now that we've created mono from a bootstrap, we can
# install the package back into our virtual environment
# and rebuild it again. But this time we rebuild it
# without the bootstrap reference.
rpm -Uhi RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm
# Now rebuild the whole thing all over again to confirm
# your build was good; Note: This will take another 20 to 30 
# minutes again...
rpmbuild -ba SPECS/mono.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/mono-3.4.0-1.el6.nuxref.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-oracle-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-postgresql-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-sqlite-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-locale-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-reactive-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-wcf-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-winforms-3.4.0-1.el6.nuxref.x86_64.rpm .
# The debuginfo package will only exist if you successfully rebuilt
# everything without the bootstrap set
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-debuginfo-3.4.0-1.el6.nuxref.x86_64.rpm .

Upgrading SQLite
For me, I just visited pkgs.org and downloaded the Fedora 20 source (-src.rpm) release of SQLite. Then I extracted its contents as follows:

# I can't promise this link will work, as this package is always
# evolving, but if you do the search above, you'll get the idea
wget http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Everything/source/SRPMS/s/sqlite-3.8.1-2.fc20.src.rpm

# Alternatively, you can download the source rpm package I'm
# already hosting:
wget http://repo.lead2gold.org/centos/6/en/source/custom/sqlite-3.8.1-2.el6.src.rpm

# Then extracted it using this neat technique:
rpm2cpio sqlite-*.src.rpm | cpio -idmv

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install ncurses-devel \
    readline-devel glibc-devel autoconf /usr/bin/tclsh \
    tcl-devel

# You'll already have the block you need as nothing is
# changed with this package. We're just using it as is
mock -v -r epel-6-x86_64 --copyin sqlite-*.zip /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin *.patch /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin sqlite.spec /builddir/build/SPECS

# Shell into our environment
mock -v -r epel-6-x86_64 --shell
 
# Change to our build directory
cd builddir/build

# Build our packages (process doesn't take long ~2 min)
rpmbuild -ba SPECS/sqlite.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/sqlite-3.8.1-2.el6.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-devel-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-doc-3.8.1-2.el6.noarch.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/lemon-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-tcl-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-debuginfo-3.8.1-2.el6.x86_64.rpm .

Credit
This blog took me a very (very) long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository
This blog makes use of my own repository I loosely maintain. If you’d like me to continue to monitor and apply updates, as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!

Sources
The majority of my efforts came from the following sites: