
A Red Hat Usenet Solution

Introduction

The two top programs in the Usenet world are NZBGet and SABnzbd. Both of these tools have their strengths and weaknesses, and each has a strong community with its own preferences and reasons for choosing one over the other. But at the end of the day, both of them are amazing and well maintained by their developers and the communities that follow them.

I’ve recently begun packaging these tools, in addition to some of the plugins that go with them. My goal is simply to make those in the CentOS/Fedora/Red Hat world aware that their lives just got easier! At the time this blog was written, CentOS 8, Fedora 31, Fedora 32, and Fedora 33 were the current distributions.

The Setup and Target Audience

This blog is centered around the Nuxref repositories I maintain. The result is the cleanest CentOS/Red Hat RPM-based installation possible, with very little effort required. Simply:

  1. Connect to my repository.
  2. Install your desired packages (explained below).
  3. Enjoy!

The Nuxref Repositories

Before you proceed with the rest of this blog, you’ll first want to get yourself set up with the Nuxref repositories. This is where all of the packages live that will allow you to proceed with one (or both) of the solutions below.

The easiest way to do this is to visit the repository website and follow the instructions to get connected.

CentOS 8 users will want to connect to EPEL if they haven’t already done so:

# Connect to EPEL:
sudo dnf install -y epel-release

NZBGet Usenet Solution


The NZBGet solution comes with a systemd service, ready to go for those who want to tie it to their system’s startup and shutdown:

# Install NZBGet from the Nuxref repository
sudo dnf install -y nzbget

# Add NZBGet to the startup of your system
sudo systemctl enable nzbget.service

# Start it up now if you like too!
sudo systemctl start nzbget.service

You’ll now be able to access the web interface through your browser at:
http://localhost:6789.
Note: The default login/password is nzbget/tegbzn6789 when prompted.
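
You’ll probably want to change these defaults right away. Below is a minimal sketch of one way to do it, assuming the packaged configuration keeps NZBGet’s stock ControlUsername and ControlPassword options in /etc/nzbget.conf (you can also change them from the web interface under the Settings security section):

# Swap 'admin' and 'MySecretPassword' for your own values:
sudo sed -i \
    -e 's/^ControlUsername=.*/ControlUsername=admin/' \
    -e 's/^ControlPassword=.*/ControlPassword=MySecretPassword/' \
    /etc/nzbget.conf

# Restart NZBGet so the new credentials take effect
sudo systemctl restart nzbget.service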

Here is a breakdown of how the custom NZBGet package works:

  • A default configuration (/etc/nzbget.conf) is pre-prepared for you which sets up the logging directory to be /var/log/nzbget/nzbget.log (with log rotation on).
  • All of the variable data (file processing, etc.) is located in /var/lib/nzbget/*. This is a very important directory because, unless you configure things differently, all downloaded content will appear in /var/lib/nzbget/downloads.

To grant users on your system access to the data available through NZBGet, simply add them to the nzbget group:

# swap the USER with your username you want to add to the
# nzbget group:
sudo usermod -aG nzbget USER

You can optionally choose to just run nzbget -d from your regular account to launch a personal instance of NZBGet.
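
If you go the personal-instance route, a minimal sketch might look like the following (the template path is an assumption based on how the package lays files out; adjust it if yours differs):

# Copy the packaged template into your own home directory
mkdir -p ~/.config/nzbget
cp /usr/share/nzbget/nzbget.conf ~/.config/nzbget/nzbget.conf

# Keep it private; it will eventually contain credentials
chmod 600 ~/.config/nzbget/nzbget.conf

# Launch your own daemonized instance against your own configuration
nzbget -d -c ~/.config/nzbget/nzbget.conf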

NZBGet Notifications

NZBGet can keep you posted on what it’s doing by sending you emails when a download completes (or fails). But email isn’t the only medium it can use; it can also notify you through Gotify, Twitter, Telegram, Amazon SNS, Slack, and more.

To leverage the notification features, you just need to install the NZB-Notify Plugin:

# Install the notification plugin from the Nuxref repository
sudo dnf install -y nzbget-script-notify

# Reload NZBGet so it takes on the new configuration
sudo systemctl restart nzbget

To enable it, you simply need to access the Notifications tab from within the NZBGet Settings section.

NZBGet Notification Setup
  1. Select the SETTINGS entry at the top right.
  2. Select the NOTIFY entry on the left hand side
  3. Notifications have to be defined as Apprise URLs; you can learn how to construct the ones for the services you wish to use here.
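
To give you an idea of what those look like, here are a few illustrative Apprise URL formats (the tokens and IDs below are placeholders you would swap for your own):

# Example Apprise URLs (placeholders only):
#   Email:    mailto://myuser:mypass@gmail.com
#   Telegram: tgram://123456789:AAbbCCddEEff/987654321
#   Slack:    slack://TokenA/TokenB/TokenC/#downloads
#
# Multiple URLs can generally be supplied together, separated by commas.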

Finally it is a good idea to ensure the Notifications are triggered to run at the proper times:

NZBGet Settings – NZB-Notify Ordering
  1. Select the SETTINGS entry at the top right (if you’re not still there from the previous steps).
  2. Select the EXTENSION SCRIPTS entry on the left hand side.
  3. Make sure Notify.py is listed in both the Extensions and ScriptOrder sections.

NZBGet Video Sorting

Another great plugin for NZBGet is one that automatically sorts and renames all downloaded content into a very conventional format. The amazing tool that does this is VideoSort. I’ve gone ahead and packaged this too, so it’s easy to use in a CentOS/Fedora environment like everything else:

# Install the videosort plugin from the Nuxref repository
sudo dnf install -y nzbget-script-videosort

# VideoSort can be a bit confusing to set up the first time; the best way
# I can make this incredibly easy for you is to pre-load you with some good
# default configuration.  Here is how you can do it:

# First stop NZBGet:
sudo systemctl stop nzbget

# Backup the configuration file (so you can restore things the way they
# were if you don't like my approach):
sudo cp /etc/nzbget.conf /etc/nzbget.conf.backup

# Now throw in our default configuration
# (just copy and paste the below):
NZBGETCFG=/etc/nzbget.conf

# First tidy up any conflicting entries (don't worry, you backed this
# file up, remember; we can roll back if you're not happy afterwards):
sudo sed -i '/VideoSort/d' $NZBGETCFG
sudo sed -i '/Category[0-9]\+/d' $NZBGETCFG

# Now copy in our new settings:
( cat << _EOF
# Custom Categories
Category1.Name=movies
Category1.DestDir=\${DestDir}/Movies
Category1.Unpack=yes
Category1.Extensions=VideoSort.py, Notify.py
Category1.Aliases=movies*
Category2.Name=tv
Category2.DestDir=\${DestDir}/TVShows
Category2.Unpack=yes
Category2.Extensions=VideoSort.py, Notify.py
Category2.Aliases=hdtv, tv*, s??e??

# VideoSort Script
VideoSort.py:MoviesDir=\${DestDir}/Movies
VideoSort.py:SeriesDir=\${DestDir}/TVShows
VideoSort.py:DatedDir=\${DestDir}/TVShows
VideoSort.py:OtherTvDir=\${DestDir}/TVShows
VideoSort.py:TvCategories=tv
VideoSort.py:VideoExtensions=.mkv,.avi,.divx,.xvid,.mov,.wmv,.mp4,.mpg,.mpeg,.vob,.iso
VideoSort.py:SatelliteExtensions=.srt,.sub,.idx,.subs.rar,.rar,.nfo,.jpg
VideoSort.py:MinSize=100
VideoSort.py:MoviesFormat=%.t.(%y)/%.t.%y.%qss.%qf.%qvc-%qrg
VideoSort.py:SeriesFormat=%s.n/S%0s/%s.n.S%0sE%0e.%e.n.%qss.%qf.%qvc-%qrg
VideoSort.py:EpisodeSeparator=E
VideoSort.py:SeriesYear=yes
VideoSort.py:DatedFormat=%s.n/%s.n-%e.n-%y-%0m-%0d
VideoSort.py:OtherTvFormat=%t
VideoSort.py:LowerWords=the,of,and,at,vs,a,an,but,nor,for,on,so,yet
VideoSort.py:UpperWords=III,II,IV
VideoSort.py:DNZBHeaders=yes
VideoSort.py:PreferNZBName=yes
VideoSort.py:Overwrite=no
VideoSort.py:Cleanup=yes
VideoSort.py:Preview=no
VideoSort.py:Verbose=no
_EOF
) | sudo tee -a $NZBGETCFG

# Restart our NZBGet instance so it takes on the new configuration
sudo systemctl start nzbget
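
To give you a rough idea of what the settings above produce (this is just my reading of the VideoSort format specifiers; the exact names depend on what VideoSort can parse out of each release), downloads end up sorted roughly like this:

# Illustrative layout only (relative to your configured DestDir):
#
#   Movies/Some.Movie.(2016)/Some.Movie.2016.1080p.BluRay.x264-GROUP.mkv
#   TVShows/Some.Show/S01/Some.Show.S01E02.Episode.Name.720p.HDTV.x264-GROUP.mkv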

Firewall Configuration

The package provides the files needed to set up the firewall and make NZBGet available from other stations on your network; simply do the following:

# Assuming your network is set up to the `home` zone, the following
# sets you up to expose TCP Port 6789 on your network
sudo firewall-cmd --zone=home --add-service nzbget --permanent

# You can optionally expose your secure instance of NZBGet on TCP
# port 6791 as well if you want
sudo firewall-cmd --zone=home --add-service nzbget-secure --permanent

# Now reload your firewall to take on the new change:
sudo firewall-cmd --reload

SABnzbd Usenet Solution

You can install it and start it up with the following commands:

# Install SABnzbd from the Nuxref repository
sudo dnf install -y sabnzbd

# Add SABnzbd to the startup of your system
sudo systemctl enable sabnzbd.service

# Start it up now if you like too!
sudo systemctl start sabnzbd.service

You’ll now be able to access the web interface through your browser at:
http://localhost:9080.

Here is a breakdown of how the custom SABnzbd package works:

  • All of your log files will show up in /var/log/sabnzbd/sabnzbd.log.
  • All of your configuration gets written to /etc/sabnzbd/sabnzbd.conf.
  • All of the variable data (file processing, etc.) is located in /var/lib/sabnzbd/*. This is a very important directory because, unless you configure things differently, all downloaded content will appear in /var/lib/sabnzbd/complete.

To grant users on your system access to the data available through SABnzbd, simply add them to the sabnzbd group:

# swap the USER with your username you want to add to the
# sabnzbd group:
sudo usermod -aG sabnzbd USER

Firewall Configuration

The package provides the files needed to set up the firewall and make SABnzbd available from other stations on your network; simply do the following:

# Assuming your network is set up to the `home` zone, the following
# sets you up to expose TCP Port 9080 on your network
sudo firewall-cmd --zone=home --add-service sabnzbd --permanent

# You can optionally expose your secure instance of SABnzbd on TCP
# port 9090 as well if you want
sudo firewall-cmd --zone=home --add-service sabnzbd-secure --permanent

# Now reload your firewall to take on the new change:
sudo firewall-cmd --reload

SABnzbd Notifications

SABnzbd can keep you posted on what it’s doing by sending you emails when a download completes (or fails). But email isn’t the only medium it can use; it can also notify you through Gotify, Twitter, Telegram, Amazon SNS, Slack, and more.

To leverage the notification features, you just need to install the NZB-Notify Plugin:

# Install the notification plugin from the Nuxref repository
sudo dnf install -y sabnzbd-script-notify

To enable it, you simply need to access the Notifications tab from within the SABnzbd Configuration section.

SABnzbd Notification Setup
  1. Select the Enable notification script checkbox
  2. Select the Notify.py script from the dropdown list next to the Script category
  3. Next to the Parameter category, you must specify the URL(s) identifying which service(s) you want to notify. Depending on what you plan on alerting, the URL(s) you specify in the Parameter field will vary. You can get a better understanding of the URL options supported here.

NGinX Frontend

Regardless of which downloader you chose to set up from the instructions above, you should NEVER directly expose SABnzbd or NZBGet to the internet… EVER! ALWAYS use some sort of proxy to bridge the internet to the systems you run on your local network. NGinX (pronounced Engine-X) can solve this very thing for you. NGinX is actively maintained and focuses entirely on protecting your servers while allowing them to safely host their backend HTTP(S) solutions.

# Install NGinX and httpd-tools (if they're not there already)
sudo dnf install nginx httpd-tools

# Add nginx to the startup of your system
sudo systemctl enable nginx.service

# Start it up now if you like too!
sudo systemctl start nginx.service

# Expose port 80 and port 443
sudo firewall-cmd --zone=home --add-service http --permanent
sudo firewall-cmd --zone=home --add-service https --permanent

# If you're installing NGinX on the same server that you've installed
# SABnzbd or NZBGet on, make sure you no longer expose the ports
# associated with them anymore:
sudo firewall-cmd --zone=home --remove-service nzbget --permanent
sudo firewall-cmd --zone=home --remove-service nzbget-secure --permanent
sudo firewall-cmd --zone=home --remove-service sabnzbd --permanent
sudo firewall-cmd --zone=home --remove-service sabnzbd-secure --permanent

# Now reload your firewall to take on the new change:
firewall-cmd --reload

Now add the following into your /etc/nginx/default.d/usenet.conf file:

   # NZBGet wrapper
   location /nzbget/ {
      proxy_pass   http://localhost:6789;
      proxy_cache  off;

      # Simple Security
      auth_basic            "Restricted Usenet Area";
      auth_basic_user_file  usenet.htpasswd;
   }

   # SABnzbd wrapper
   location /sabnzbd/ {
      proxy_pass   http://localhost:9080;
      proxy_cache  off;

      # Simple Security
      auth_basic            "Restricted Usenet Area";
      auth_basic_user_file  usenet.htpasswd;
   }

Now let’s set up a simple password file we can use to secure our content.

# Generate a password for the user 'usenet'.  You might want to
# switch this for your own name if you like:
sudo htpasswd -c /etc/nginx/usenet.htpasswd usenet

# you will be prompted to enter a password to associate with your
# new user.  Go ahead and provide one! :)

# You can generate more accounts if you want to; just remove the
# -c for future calls. e.g.:
# sudo htpasswd /etc/nginx/usenet.htpasswd user2

# It wouldn't hurt to tighten up the security on our new password
# file (to keep it from prying eyes):
sudo chown root:nginx /etc/nginx/usenet.htpasswd
sudo chmod 640 /etc/nginx/usenet.htpasswd

# You're almost done!
# Make sure your configuration won't break anything
sudo nginx -t -c /etc/nginx/nginx.conf

# If the above passed okay, then it's safe to make your configuration
# live.
sudo systemctl reload nginx

Credit

Please note that this information took me several days to put together and test thoroughly. A tremendous effort was put in to make all versions of these programs compatible with all recent RPM-based Linux distributions. I may not blog often, but I want to assure you of the stability and testing I put into everything I intend to share.

If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.


NZBGet v17 for CentOS 7

Introduction

Recently NZBGet v17 was released, which involved a massive refactoring and overhaul of most of its back-end. Here is a quote taken from the ticket (#88) that makes up the bulk of this release:

Due to a wide variety of platforms we could not control which version of compiler is used. To make it compatible even with old compilers NZBGet was targeting old C++ versions.

In last years C++ has been evolved dramatically. New versions were standardized by ISO as C++11 and lately C++14. These new standards bring many interesting features to the language making it much more pleasure to work with.

The new (C++) standards are amazing, there is no doubt about that. But the problem is that not all (Stable) Linux distributions support these new standards. To put it in perspective, here is a chart of some Linux Distributions and the GCC standards they support (without any extra hacking/effort):

Compatibility          C++98  C++03  C++11  C++14  C++17
Red Hat 5 / CentOS 5   Yes    Yes    No     No     No
Red Hat 6 / CentOS 6   Yes    Yes    No     No     No
Red Hat 7 / CentOS 7   Yes    Yes    Yes    No     No
Ubuntu 12.x            Yes    Yes    No     No     No
Ubuntu 14.x            Yes    Yes    Yes    No     No
Ubuntu 16.x            Yes    Yes    Yes    Yes    No
Fedora 22              Yes    Yes    Yes    Yes    No
Fedora 23              Yes    Yes    Yes    Yes    Yes
Fedora 24              Yes    Yes    Yes    Yes    Yes

The point is: By making NZBGet require the C++14 standard, the list of supported (Linux) distributions drops significantly.
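
If you're not sure where your own system stands, a quick way to check is to ask the compiler to build a trivial program against each standard (a simple sketch; g++ must already be installed):

# Does the installed g++ accept the C++14 standard?
echo 'int main() { return 0; }' > /tmp/std_check.cpp
g++ -std=c++14 -o /tmp/std_check /tmp/std_check.cpp \
    && echo "C++14: supported" \
    || echo "C++14: NOT supported"

# Swap -std=c++14 for -std=c++11 (or -std=c++17) to test the others.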

Just use the NZBGet Binaries Already Provided

Yes, NZBGet allows you to install an already pre-compiled version of it, and if this satisfies your needs, then I’m not here to tell you to change.

In my personal opinion, there are several problems with going with this approach:

  1. Your program is statically linked to everything it uses. What does this mean? It means that any vulnerabilities discovered and patched on your system (via distributed system updates) will NOT patch NZBGet. The patches that get released don’t just resolve vulnerabilities; they add stability too.

    This isn’t to say NZBGet isn’t stable; it’s to point out that it can’t reap the benefits of added stability when it becomes available.
  2. There is no package management when using the binary provided on NZBGet’s website, and therefore no version control. Thus, you can’t automate its deployment through tools like yum, dnf, or apt-get (depending on your distribution). A quick way to see the linkage difference for yourself is shown in the sketch right after this list.
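
You can see the linkage difference with ldd: a distribution build reports the shared system libraries it was linked against, while a statically linked binary reports little or nothing. A small sketch (the path is an example; point it at wherever your copy of nzbget actually lives):

# An RPM-packaged build links against the system's shared libraries:
ldd /usr/bin/nzbget | head

# A statically linked binary instead prints something like
# "not a dynamic executable" or only a tiny handful of entries.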

Just to give you an idea of how many libraries NZBGet (v17) links to, here they are. I also identify the likelihood of a patch being released for each between now and the next 3 months (purely speculation based on recent releases):

  • glibc, libgcc & libstdc++ (Risk: MED): The system’s standard libraries (what this whole blog is really about). Ideally you don’t want to run programs linked to a different system core than the one your system depends on. This core is managed by your distribution and is key to its stability. It only makes sense to use it for everything else too!
  • gmp (Risk: LOW): Just a really powerful library for doing mathematical calculations. Exploits are rarely found here, but it’s worth noting that this is a dependency of NZBGet.
  • keyutils-libs (Risk: MED): A wrapper library for the key management facility system calls.
  • krb5-libs (Risk: MED): A network authentication system.
  • libselinux (Risk: LOW): An API for SELinux applications to get and set process and file security contexts and to obtain security policy decisions. This is very customized for each distribution; ideally you’ll always want to link to your managed library.
  • libxml2 (Risk: LOW): XML parser used for reading through your NZB files and RSS feeds.
  • nettle, trousers & openssl-libs (Risk: HIGH): The encryption between you and your Usenet provider lives here. This is also the encryption between you and your indexer. There are constantly new exploits and vulnerabilities detected in these encryption libraries. These libraries alone are the main reason you should not use the NZBGet binaries and should instead link to ones you can keep current.
  • xz-libs & zlib (Risk: LOW): Compression libraries for handling zip files (like zipped NZB files, for example, when bundled).
  • p11-kit, pcre, ncurses-libs, libtasn1, libffi & libcom_err (Risk: LOW): Some important libraries, but nothing with a high risk level associated with them. They’re still worth noting as they’re part of NZBGet’s requirements.

The point is that there are a lot of moving parts that make up NZBGet, and you should make sure it links to shared, actively maintained libraries that you have control over.

You Talk Too Much, Just Hand Over Whatever It Is You’re Selling!

Sure, here they are:

  • CentOS 7: rpm / src
    Note: This is the true bread and butter of this blog because this rpm contains the NZBGet v17 installable package completely modified to work with the C++11 standard. Thus you can take advantage of this beautiful piece of software while still using your shared libraries.
  • Fedora 22: rpm / src
  • Fedora 23: rpm / src
  • Fedora 24: rpm / src

The Fedora packages were all compiled using the C++14 standard; no patch was required for them.

You can also feel free to connect to my repository by following the information provided here and then download them using yum or dnf.

A worthy note about these packages:

  • It’ll automatically create a nzbget user (and group) for you so that you can better control access to your downloads.
    Adding users is as simple as adding them to the nzbget group:

    # Add a user to the nzbget group
    usermod -a -G nzbget myuser
    
  • Provides a systemd service ready to go for those who want to tie it to their system’s startup and shutdown.
    # Add NZBGet to the startup of your system
    systemctl enable nzbget.service
    
    # Start it up now if you like too!
    systemctl start nzbget.service
    
    # Now you can go ahead and access http://localhost:6789 with your browser
    # The default login/password is nzbget/tegbzn6789
    
  • A small separate patch in these RPMs changes:
    • All of the ~/downloads references to ~/Downloads.
    • All references to /usr/local/bin to /usr/bin.
    • Added ~/.config/nzbget/ to the default paths list (for those who just want to run nzbget -d as their own user). I personally prefer to tuck all my configuration files into .config/ instead of the root folder of my home directory (which still works too if you want).
  • A default configuration (/etc/nzbget.conf) is pre-prepared for you which
    • configures most of the basic settings out of the box such as unrar and 7za paths.
    • sets up the logging directory to be /var/log/nzbget/nzbget.log (with log rotation on).
    • sets up the $MainDir directory to /var/lib/nzbget which is pre-created already for you and protected by the nzbget user and group.

What About Everybody Else?

For those using other Linux Distributions like Ubuntu, you can download my patch file which back-ports all of the C++14 code to C++11 here.

Alternatively, you can follow these instructions, which should get it working for you (using shared libraries instead):

# Note: The below is only applicable to v17.0.
# Download the NZBGet v17 Source
wget https://github.com/nzbget/nzbget/releases/download/v17.0/nzbget-17.0-src.tar.gz

# Download the patch:
wget https://repo.nuxref.com/pub/patches/nzbget/nzbget-use.c++11.patch

# Extract the tarball
tar xfz nzbget-17.0-src.tar.gz

# Change into the directory it extracts to
pushd nzbget-17.0

# Apply the patch
patch -p1 < ../nzbget-use.c++11.patch

You’ll see output like this:

patching file config.h.in
patching file configure.ac
patching file daemon/connect/Connection.cpp
patching file daemon/connect/TlsSocket.cpp
patching file daemon/main/Scheduler.cpp
patching file daemon/queue/QueueEditor.cpp
patching file daemon/util/Observer.h
patching file daemon/util/Thread.cpp
patching file daemon/util/Thread.h
patching file daemon/util/Util.h
patching file linux/build-nzbget
patching file Makefile.in

You can now build NZBGet (assuming you have all the right libraries):

# first rebuild the configure script (part of what the patch prepares)
autoreconf -i
automake

# You may need to install the following packages
# Red Hat / CentOS 7:
#  sudo yum install libxml2-devel libpar2-devel \
#      ncurses-devel dos2unix \
#      openssl-devel autoconf libtool automake
#
# Ubuntu:
#  sudo apt-get install libxml2-dev libpar2-dev \
#      ncurses-dev dos2unix \
#      libssl-dev autoconf libtool automake

# Now you can run the rest of the setup:
./configure

# correct any problems you have here. The configure
# script will look after telling you if you're missing
# any libraries or not. If there are no problems then
# go ahead and type:
make

# Now you can install it into your system
make install

Now, the last commands don’t provide you with any kind of package management like RPM or DEB packages do. But there is enough information in this blog to create your own installable package here if you wanted. The main thing you accomplish is that you link a working copy of NZBGet to your stable (upgradable) system libraries.
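
If you want a little of that control back without writing a full RPM spec, one common trick (a sketch only; standard autotools behaviour is assumed here) is to stage the install into a scratch directory first, so you at least know exactly which files it would drop onto your system:

# Stage the install into a throwaway directory instead of /
make DESTDIR=/tmp/nzbget-staging install

# Review what would be installed (and keep the list for later removal)
find /tmp/nzbget-staging -type f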

Credit

Please note that this information took me several days to put together and test thoroughly. The C++11 patch alone took me several hours to test and create. I may not blog often, but I want to assure you of the stability and testing I put into everything I intend to share.

If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

Aug 5th, 2016 – Update

Since writing this blog, the amazing developer (hugbug) of NZBGet reached out to me to incorporate my patch (a much simplified version of it) into the next release of NZBGet (v17.1)! Here is a link to the pull request. This pretty much renders this blog entry obsolete, but it accomplished something in the end nonetheless! πŸ™‚ I still intend to package NZBGet (and all of its future releases) on my repository for those still interested.

Sept 10th, 2016 – Update

NZBGet v17.1 came out and the packages in this blog and the repository have been updated to reflect this. Those that still wish to build it manually can do so as follows:

# Download the latest version of NZBGet
wget https://github.com/nzbget/nzbget/releases/download/v17.1/nzbget-17.1-src.tar.gz

# You may need to install the following packages
# Red Hat / CentOS 7:
#  sudo yum install libxml2-devel libpar2-devel \
#      ncurses-devel dos2unix \
#      openssl-devel autoconf libtool automake
#
# Ubuntu:
#  sudo apt-get install libxml2-dev libpar2-dev \
#      ncurses-dev dos2unix \
#      libssl-dev autoconf libtool automake

# Now you can run the rest of the setup:
CXXFLAGS="-std=c++11 -O2 -s" ./configure --disable-cpp-check

# correct any problems you have here. The configure
# script will look after telling you if you're missing
# any libraries or not. If there are no problems then
# go ahead and type:
make

# Now you can install it into your system
make install


A Usenet Solution For CentOS 6

What Is Usenet

In a nutshell, it’s basically a bunch of (file) servers that host a ton of information people place onto them. We’re talking about petabytes (1000+ terabytes) of information. There is very little organization, but it does have a defined structure.

Content is sorted into groups, which act as containers for it to be stored in and retrieved from. You can think of a group like you might think of a directory on your computer at home. We create directories all the time in an effort to add order and structure to where we keep things (so we can find them later). The thing is, Usenet has no moderation, so you can place content in any group you want. As a result, it’s a lot like what you might expect someone’s hard drive to look like if you gave 5 million people access to it. Basically, there is just a ton of crap everywhere.

The World Wide Web is similar to this, but instead of groups, we sort things by URLs (web addresses) such as http://nuxref.com. Google uses its own web crawlers to scan the entire World Wide Web just to create an index from it. For each website they find, they track its name, its content, and the language it’s written in. The result of them doing this is that we get to use their fantastic search engine: a search engine that has made our lives incredibly easy by granting us fast and easily accessible information at our fingertips.

The Usenet Indexer

Usenet is a very big world of its own, and it’s a lot harder to get around in (but not impossible) without anything indexing it. Thankfully, Usenet is nowhere near the size of the World Wide Web, which makes indexing it very possible for a much larger audience! In fact, we can even index it with the personal computer(s) we run at home. By indexing it, we can easily search it for content we’re interested in (much like how we use Google for web page searching).

Since just about anyone can index Usenet, one has to think: why index Usenet ourselves if someone’s already doing it for us elsewhere? In fact, there are many sites (and tools) that have already done all the indexing (some better than others) of Usenet and are willing to share it with others (us). But it’s important to know: it can take a lot of server power, disk space, and network consumption for these site administrators to constantly index Usenet for us. Since most (if not all) of these sites are run by hobbyists doing it for fun, it gets expensive for them to maintain things. For that reason some of them may charge or ask for a donation. If you want to use their services, you should respect their measly request of $8USD to $20USD for a lifetime membership. But don’t get discouraged, there are still a lot of free ones too!

Just keep in mind that Usenet is constantly getting larger; people are constantly posting new content to it every second. You’ll find that the sites that charge a fee are already (relatively) aware of the new changes to Usenet every time you search with it. Others (the free ones) may only update their index a few times a day or so.

Alternatively (the free route), we can go as far as running our own Usenet indexer (such as NewzNab), just as the hobbyists mentioned above did. NewzNab will index Usenet on a regular basis. With your own indexer, you can choose to index just the content that appeals to you. You can even choose to offer your services publicly if you want. Just keep in mind that Usenet is huge! If you do decide to go this route, you’ll find it a very CPU- and network-intensive operation. You may want to make sure you don’t exceed your Internet Service Provider’s (ISP) download limits.

Now, back to the Google analogy I started earlier: when you find a link on Google you like, you simply click on it and your browser redirects you to the website you chose; end of story. However, in the Usenet indexing world, once you find something of interest, the Usenet Indexer will provide you with an NZB file. An NZB file is effectively a map that identifies where your content can be specifically located on Usenet (but not the data itself). An NZB file is to Usenet what a Torrent file is to a BitTorrent client. Both NZB and Torrent files provide the blueprints needed to mine (acquire) your data. Both NZB and Torrent files require a Downloader to perform the actual data mining for you.
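
To make that a little more concrete, here is what a stripped-down NZB file roughly looks like. This one is hand-written purely for illustration; real NZB files come from your indexer and reference real article IDs:

# Write a tiny, illustrative NZB file to disk:
cat > example.nzb << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="uploader@example.com" date="1405296000"
        subject="example.archive.r00 (1/1)">
    <groups>
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <segment bytes="512000" number="1">part1of1.abc123@news.example.com</segment>
    </segments>
  </file>
</nzb>
EOF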

The Downloader

The Downloader takes an NZB file it’s provided and uses it to acquire the actual data the file maps to. This is the final piece of the puzzle!
Of the list below, you really only need to choose one Downloader; I just listed more than one to give you alternatives to work with. My personal preference is NZBGet because it is more flexible. But its flexibility can also be very confusing (at first). Once you get over its learning curve, and especially the initial configuration, it’s a dream to work with. Alternatively, SABnzbd may be better for the novice if you’re just starting off with Usenet and don’t want much more of a learning curve than you already have.

Either way, pick your poison:

NZBGet (rpm/src)
NZBGet is written in C++ and designed with performance in mind to achieve maximum download speed while using very little system resources.
Community / Manual
**Note: I created this patch in a recent update rebuild (Jul 17th, 2014) to fix a few directory paths so the compression tools (unrar and 7zip) work right away. I also added these compression tools as dependencies of the package so they’ll be present for you from the start.
**Note: I also created this patch in a recent update rebuild (Nov 9th, 2014) to allow the RC Script to take optional configuration defined in /etc/sysconfig/nzbget.

You can install NZBGet using the steps below:

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/nuxref-repository/

# Install NZBGet
yum install -y nzbget \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Grab Template
cp /usr/share/nzbget/nzbget.conf ~/.nzbget

# Protect it
chmod 600 ~/.nzbget

# Start it Up (as a non-root user):
nzbget -D

# You should now be able to access it via: 
#     http://localhost:6789/
SABnzbd (n/a)
SABnzbd is an Open Source Binary Newsreader written in Python.
Community / Manual
Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# There is no RPM installer for this one, we just
# fetch straight from their repository.
# Install git (if it's not already)
yum install -y git

# Grab a snapshot of SABnzbd
git clone https://github.com/sabnzbd/sabnzbd.git SABnzbd

# Start it Up (as a non-root user):
python SABnzbd/SABnzbd.py \
    --daemon \
    --pid $(pwd)/SABnzbd/sabnzbd.pid

# You should now be able to access it via: 
#     http://localhost:8080/

Automated Index Searchers

These tools search for already-indexed content you’re interested in and can be configured to automatically download it for you when it’s found. The tool itself doesn’t do the downloading, but it will automate the connection between your chosen Indexer and Downloader (such as NZBGet or SABnzbd). For this reason, these tools do not actually search Usenet at all and therefore add very little overhead to your system (or NAS drive).

Sonarr (rpm/src)
Automatic TV Show downloader.
Formerly known as NZBDrone, it has since been renamed to Sonarr. Packaging it was only made possible because of the blog I wrote on mono v3.x.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/nuxref-repository/

# Installation of this plugin:
yum install -y sonarr \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Start it Up (as a non-root user):
nohup mono /opt/NzbDrone/NzbDrone.exe &

# You should now be able to access it via: 
#     http://localhost:8989/
Sick Beard (n/a)
(Another) Automatic TV Show downloader.

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Sick Beard
# Note that we grab the master branch, otherwise we default
# to the development one.
git clone -b master https://github.com/midgetspy/Sick-Beard.git SickBeard

# Start it Up (as a non-root user):
python SickBeard/SickBeard.py \
   --daemon \
   --pidfile $(pwd)/SickBeard/sickbeard.pid

# You should now be able to access it via: 
#     http://localhost:8081/
CouchPotato (n/a)
Automatic movie downloader.

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of CouchPotato
git clone https://github.com/RuudBurger/CouchPotatoServer.git CouchPotato

# Start it Up (as a non-root user):
python CouchPotato/CouchPotato.py \
   --daemon \
   --pid_file CouchPotato/couchpotato.pid

# You should now be able to access it via: 
#     http://localhost:5050/
Headphones (n/a)
Automatic music downloader.

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Headphones
git clone https://github.com/rembo10/headphones Headphones

# Start it Up (as a non-root user):
python Headphones/Headphones.py \
   --daemon \
   --pidfile $(pwd)/Headphones/headphones.pid

# You should now be able to access it via: 
#     http://localhost:8181/
Mylar (n/a)
Automatic Comic Book downloader.

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Mylar
git clone https://github.com/evilhero/mylar Mylar

# Start it Up (as a non-root user):
python Mylar/Mylar.py \
   --daemon \
   --pidfile $(pwd)/Mylar/Mylar.pid

# You should now be able to access it via: 
#     http://localhost:8090/

NZBGet Processing Scripts

For those who prefer SABnzbd, you can ignore this part of the blog. For those using NZBGet, one of its strongest features is its ability to process content it downloads both before and after it’s received. The Post Processing (PP) has specifically been one of NZBGet’s greatest features. It allows separation between the function of NZBGet (which is to download the content identified in NZB files) and what you want to do with that content afterwards. Post processing could be anything: cataloguing what was received and placing it into an SQL database, renaming the content and sorting it into separate directories depending on what it is, or something as simple as emailing you (or posting on Facebook or Twitter) when the download completes. You’re not limited to just one PP script either; you can chain them and run a whole slew of them one after another. The options are endless.
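
To give you a feel for how simple a PP script can be, here is a minimal, hypothetical sketch (loosely modelled on how the bundled example scripts are structured) that just logs the name of each completed download. NZBGet hands scripts their context through NZBPP_* environment variables and reads the result back from the exit code:

#!/bin/bash
##############################################################
### NZBGET POST-PROCESSING SCRIPT                          ###
# Log the name of every completed download (illustration only).
### NZBGET POST-PROCESSING SCRIPT                          ###
##############################################################

# Exit codes NZBGet recognizes from post-processing scripts:
POSTPROCESS_SUCCESS=93
POSTPROCESS_ERROR=94

# NZBGet exports details about the finished download as NZBPP_* variables:
if [ -z "${NZBPP_DIRECTORY}" ]; then
    echo "[ERROR] This script must be run from within NZBGet" >&2
    exit ${POSTPROCESS_ERROR}
fi

echo "[INFO] '${NZBPP_NZBNAME}' finished downloading into ${NZBPP_DIRECTORY}"
exit ${POSTPROCESS_SUCCESS}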

I’ve taken some of the popular PP scripts from the NZBGet forum and packaged them in self-installing RPMs as well, to make life easy for those who want them. Some of these packages required many dependencies and ports to make the installation smooth. Although I link directly to the RPMs here, you are strongly advised to link to my repository with yum if you haven’t already done so.

Failure Link (rpm/src) provides: FAILURELINK
If a download fails, the script sends info about the failure to the indexer site, so a replacement NZB (same movie or TV episode) can be queued up if available. The indexer site must support the DNZB-Header “X-DNZB-FailureLink”.

Note: The integration works only for downloads queued via URL (including RSS). NZB-files queued from local disk don’t have enough information to contact the indexer site.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-failurelink \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
nzbToMedia (rpm/src) provides: DELETESAMPLES, RESETDATETIME, NZBTOCOUCHPOTATO, NZBTOGAMEZ, NZBTOHEADPHONES, NZBTOMEDIA, NZBTOMYLAR, NZBTONZBDRONE, NZBTOSICKBEARD
Provides an efficient way to handle post processing for CouchPotatoServer, SickBeard, Sonarr, Headphones, and Mylar when using NZBGet on low performance systems like a NAS.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-nzbtomedia \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all.

Subliminal (rpm/src) provides: SUBLIMINAL
Provides a wrapper that integrates subliminal (which fetches subtitles given a filename or filepath) with NZBGet. Subliminal uses the correct video hashes through the powerful guessit library to ensure you have the best matching subtitles. It also relies on enzyme to detect embedded subtitles and avoid retrieving duplicates.

Multiple subtitle services are available: opensubtitles, tvsubtitles, podnapisi, addic7ed, and thesubdb.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-subliminal \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

*Note: python-subliminal (what this PP script is a wrapper to) had some issues I had to address. For one, I eliminated the entire PYPKG/subliminal/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all.
**Note: Subliminal was written using dict comprehensions (PEP 274), a feature that wasn’t introduced until Python 2.7. Unfortunately, the developers had no interest in resolving this and closed the issue with β€˜Upgrade to Python 2.7 or Python v3.3’. For this reason, subliminal does not work at all with CentOS or Red Hat 6.x out of the box. So… I fixed that. Now I can proudly tell you that the copy of subliminal I host on my repository is 100% compatible with Python 2.6 (this includes a bit of backported logging functionality too).

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

DirWatch (rpm/src) provides: DIRWATCH
DirWatch can watch multiple directories for NZB-Files and move them for processing by NZBGet. This tool is awesome if you have a DropBox account or a network share you want NZBGet to scan! Without this script, NZBGet can only be configured to scan one (and only one) directory for NZB-Files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-dirwatch \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

TidyIt (rpm/src) provides: TIDYIT
TidyIt integrates itself with NZBGet’s scheduling and is used to perform basic house cleaning on a media library. TidyIt removes orphaned meta information, empty directories, and unused content. It’s the perfect OCD tool for those who want to eliminate any unnecessary bloat on their filesystem and media library.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-tidyit \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

Notify (rpm/src) provides: NOTIFY
Notify provides a wrapper that can be integrated with NZBGet, allowing you to send notifications through just about any supported method today, such as email, KODI (XBMC), Prowl, Growl, PushBullet, NotifyMyAndroid, Toasty, Pushalot, Boxcar, Faast, Telegram, Join, and Slack. It also supports pushing information in HTTP POST requests via JSON or XML (SOAP structure).

The script can also be used as a standalone tool and called from the command line, allowing it to support a lot more tools besides NZBGet.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-notify \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

Password Detector (rpm/src) provides: PASSWORDETECTOR
Password Detector is a queue script that checks for passwords inside of every .rar file of a downloaded NZB. This means it can detect password-protected NZBs very early, before downloading is complete, allowing the NZB to be automatically deleted or paused. Detecting early saves data, time, resources, etc.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-passworddetector \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
Fake Detector (rpm/src) provides: FAKEDETECTOR
This is a queue script which runs during the download, after each file contained in the NZB-file (typically a rar-file) finishes downloading. The script lists the contents of the downloaded rar-files and tries to detect fake NZBs, saving your bandwidth by catching downloads whose contents fail to pass a series of validity checks.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-fakedetector \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
Video Sort (rpm/src) provides: VIDEOSORT
With the post-processing script VideoSort, you can automatically organize downloaded video files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-videosort \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all.

Mobile Integration

There are some fantastic apps out there that allow you to integrate your phone with the applications mentioned above, letting you manage your downloads from wherever you are. A special shout out to NZB 360, who recently had his app pulled from the Google Play Store for no apparent reason and had to set up shop outside of it. I can say first hand that his application is amazing! You should totally consider it if you have an Android phone.

Usenet Providers

For those who don’t already have Usenet access, it does come at an extra cost and/or fee. The cost averages anywhere between $6 and $20 USD/month (anything more and you’re paying too much). The reason for this is that Usenet is a completely isolated network from the Internet; it’s comprised of a completely isolated set of interconnected servers. While the internet is comprised of hundreds of millions of servers all hosting specific content, each Usenet server hosts the entire Usenet database… it hosts everything. If anything is uploaded to Usenet, all of the interconnected servers update themselves with their own local copy of it (to serve us). For this to happen, their servers have to have petabytes of storage. The fee they charge just goes to support operational costs such as bandwidth, maintenance, and the regular addition of storage to their infrastructure. There is very little profit to be made for them at $8 a person. Here is a breakdown of a few providers (in alphabetical order) I’m aware of and support:

  • Astraweb (servers in the US & Europe): Retention: 2158 Days (5.9 Years). Average cost: $6.66USD/Month to $15USD/Month; see here for details.
  • Usenet Server (servers in the US): Retention: 2159 Days (5.9 Years); has a free 14 day trial. Average cost: $13.33USD/Month to $14.95USD/Month; see here for details.

*Note: The provider information above was last updated on Jul 14th, 2014. Prices are subject to change as time goes on and this blog post may not be updated to reflect that.
**Note: If you have a provider that you would like to be added to this list… Or if you simply spot an error in pricing or linking, please feel free to contact me so I can update it right away.

Why do people use Usenet/Newsgroups?

  • Speed: It’s literally just you and another server; a simple 1-to-1 connection. Data transfer speeds will always be as fast as your ISP can carry your traffic to and from the Usenet server you signed up with. Unlike torrents, content isn’t governed by how many seeders and leechers have the content available to you. You never have to deal with upload/download ratios, maintain quotas, or sit idle in someone’s queue waiting for them to eventually serve data to you.
  • Security: You only deal with secure connections between you and your Usenet provider; no one else! Torrents can have you maintaining thousands of connections to different systems and sharing data with them. With BitTorrent setups, trackers publicly advertise what you have to share and what you’re trying to download. Your activity is visible to anyone using the same tracker you’re connected to. Not only that, but most torrent connections are insecure as well, which allows virtually anyone to view what you’re doing.

Please know that I am not against torrents at all! In fact, now I’ll take the time to mention a few points where torrents excel over Usenet:

  • Cost: It doesn’t usually cost you anything to use the torrent network. It all depends on the tracker you’re using, of course (some private trackers charge for their usage). But if you’re just out to get the free public stuff made available to us, there are absolutely no costs at all to use this method!
  • Availability: Usenet is far from perfect. When someone uploads something to their Usenet provider, by the time this new content propagates to all of the other Usenet servers, there is a small chance the data will be corrupted. This happens with Usenet all of the time. To compensate for this, Usenet users anticipate corruption (sad but true). These people kindly post Parchive files to Usenet to complement whatever they previously uploaded. Parchive files work similar to how RAID works; they provide building blocks to reassemble data in the event it’s corrupted. Corruption never happens with torrents unless the person hosting decides to host corrupted data. Any other scenario would simply be because your BitTorrent client had a bug in it.
  • Retention: As long as someone is willing to seed something, or enough combined leechers can reconstruct what is being shared, data will always stay alive in the BitTorrent world. However, with Usenet, the Usenet server is hosting EVERYTHING, which means it has to maintain a lot of data on a lot of disk space! For this reason, a retention period is inevitably met: a time is eventually reached where the Usenet server purges (erases) older content from its hard disks to make room for the new stuff showing up every day.

Honestly, at the end of the day, both Torrents and Usenet servers have their pros and cons, and we will each continue to weigh them differently. What’s considered the right choice for one person might not be the right one for another. Heck, just use both depending on your situation! πŸ™‚
