
UFTP: Mass File Distribution Using Multicasting

Introduction

Multicasting has its pros and cons just like everything else, but it is an often overlooked solution to a common business problem: how do I transfer a file to multiple (subscribed) locations at the same time?
Most administrators or developers will come up with a solution that involves sending the file to each site using a common protocol such as SFTP, SCP, RSYNC, FTP, or RCP. There is no doubt that these solutions work. However, they require you to send the file to each location individually. Hence, if you need to send a 10MB file to 100 remote locations, you’ll need 1GB (10MB x 100) of local network bandwidth to do it.
Traditional Protocol File Transfers
A multicasting solution saves you this effort and bandwidth by allowing you to send the file once and have all 100 sites collectively store it onto their systems at (relatively) the same time. Now, with respect to the diagram below (and the rest of this blog), I’m focusing entirely on the amazing effort Dennis Bush put into UFTP. It is this tool that makes all of this possible. UFTP is one of those diamonds in the rough that I don’t think gets enough attention for the value it brings.
Multicast File Transfer using UFTP

Multicast in a Nutshell

A multicast address boils down to just being an IP address anywhere in the 224.0.0.0/4 network range. It uses the User Datagram Protocol (UDP) to communicate with anyone who chooses to read and write data from this address. Routers can be configured to share this address across networks so that everyone may join in. It’s similar to a chat room on the internet: everyone joins together, and anything one person writes, everyone else in the channel can see too.
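If you’d like to see this behaviour first hand, here is a minimal sketch using socat (my own illustrative choice, nothing the rest of this blog depends on; the group address 239.1.2.3, port 5000, and interface eth0 are all assumptions you’d adjust for your network):

# On each receiver: join multicast group 239.1.2.3 and print anything
# that arrives on UDP port 5000 (eth0 is the receiving interface)
socat UDP4-RECVFROM:5000,ip-add-membership=239.1.2.3:eth0,fork -

# On the sender: a single datagram reaches every subscribed receiver
echo "hello subscribers" | socat - UDP4-DATAGRAM:239.1.2.3:5000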

Multicasting no doubt has its drawbacks. But I figure it’s not worth dwelling on what you can’t do with the (multicast) protocol on its own, since UFTP, the tool this blog focuses on for mass file distribution, has solved many (if not all) of these problems for us.

Why Multicasting, Who Uses it?

Multicasting saves bandwidth. Think of your Cable TV provider: when you change a channel, you’re actually just subscribing to a new multicast address along with the 100+ million other subscribing customers. Multicasting allows Cable TV companies to broadcast every channel at once, and those who are interested in a specific channel will receive it. Regardless of what channel you change to, there is no additional load sustained by the cable provider.

System administrators may not even know they’re using it if they are managing a cluster. Most clusters use multicast as a way of passing their heartbeats to all other nodes in an effort to maintain quorum.

Now, with respect to UFTP: if you visit its official website, you’ll see that the Wall Street Journal used this tool in the past to send their newspapers to remote printing plants and other outlets across the United States.

UFTP in a Nutshell

UFTP is a file transfer solution that wraps itself around the multicast protocol while addressing the deficiencies that come with it. It is a well designed server/client application that allows one to easily transfer files from one location to as many clients as you want in one shot. It can drastically save your company on bandwidth and has been around long enough to be deemed a reliable business decision. It’s worth noting that UFTP was written in C, making it incredibly fast and lightweight on system resources. It works on all platforms, but I’ll specifically focus on CentOS and the rpms I’ve packaged.

To use the software, you simply pass it a file and it looks after all the dirty work of guaranteeing its delivery to all of the clients subscribed to the address it broadcasts on. The author thought of everything when developing his tool; encryption of the data is always available to you as well if you prefer. He developed 3 main tools that you can manipulate to feed your data anywhere.
A Simple UFTP Environment

  • uftp (The Server – Docs): A simple command line tool that takes a file and broadcasts it to all of the listening clients. This is the exact reverse of the traditional ftp, sftp, scp, etc. tools, where they become the client and need to connect to a server to perform their tasks.
  • uftpd (The Client – Docs): A daemon that starts up and listens to a multicast address for new files being broadcasted by the server. Again, you’ll notice that with multicasting, roles are reversed from what you’d normally be used to. With the traditional protocols (FTP, SFTP, SCP, etc.), daemons usually host the server side of things, not the client. You’ll want to run this daemon at every location you wish to receive content broadcasted by the server (uftp).
  • uftpproxyd (The Proxy – Docs): The proxy allows you to tunnel your uftp multicast across a network that doesn’t support multicasting (such as the internet). This allows clients in other controlled networks (separated by one you can’t control) to additionally be part of your file distribution.

    The UFTP Proxy operates in 3 main modes:

    • Server Mode: This is used for pushing content upstream. It would effectively sit on the same server you call ‘uftp’ from. It listens just like any other client would and passes all information it receives to connected UFTP Proxies configured in client mode.
    • Client Mode: This communicates with a UFTP Proxy configured for Server Mode and mimics ‘uftp’ by broadcasting the same data to the local multicast address. This allows all local UFTP Clients to retrieve the file(s).
    • Response Mode: This is used to take some of the load off the server when there are many clients. Although the file is only broadcasted once, there is a lot of handshaking at the start and end of the transmission to guarantee all data was delivered successfully. Depending on the networks involved, their medium, and their reliability, a server may need some extra help with the handshaking if there are a lot of clients struggling to retrieve the data. (A rough sketch of launching each mode follows the proxy notes below.)

    Below shows an example of how the proxy can be utilized:
    A UFTP Proxy Configuration
    Note: Pinholes are used as a way to connect back to the UFTP Proxy Server, effectively requiring no firewall changes to be made at each client site (only the server).

    **Note: A single Proxy Server can only be configured to connect to a single Proxy Client; it’s a 1 to 1 mapping. If you have multiple sites you need to connect to, you’ll need to set up an individual Proxy Server for each Proxy Client you need to serve.

    If you’re interested in the proxy portion of UFTP, you can read the official documentation about it found here.
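    To make the three proxy modes a little more concrete, below is a rough sketch of how each one might be launched. The -s/-c/-r mode flags are based on my reading of the uftpproxyd documentation and the address is fabricated, so double-check the man page before relying on this:

    # Server mode: runs beside 'uftp', forwarding what it hears to the
    # downstream proxy (203.0.113.50 is a made-up public address)
    uftpproxyd -s 203.0.113.50

    # Client mode: runs inside the remote network and re-broadcasts the
    # data to the local multicast address for the local uftpd clients
    uftpproxyd -c

    # Response mode: runs near a large group of clients and aggregates
    # their handshaking responses on the server's behalf
    uftpproxyd -r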

Hand Over Everything

I wouldn’t have it any other way:

  • uftp-4.5.1-1.el6.nuxref.x86_64.rpm: The server is just the uftp tool and is really easy to use. This assumes you’ve got clients configured somewhere listening though!
    # just type uftp file.you.want.to.send
    uftp mytestfile
    
    # You can also send multiple files by just adding them to the end
    # of the command:
    uftp mytestfile1 mytestfile2 mytestfile3
    
    # I also wrote a small script that works the same way and sends stuff encrypted
    uftpe mytestfile1 mytestfile2 mytestfile3
    
  • uftp-client-4.5.1-1.el6.nuxref.x86_64.rpm: The client listens for data sent from the uftp server. You can use the RC Script I prepared to greatly simplify this tool:
    # as root; use the RC Script I wrote to make running the client
    # really easy
    service uftpd start
    

    Filtering is optional; if you don’t specify any, then by default there are no restrictions. Sometimes this is satisfactory (especially in closed or isolated networks). I attempted to simplify UFTP’s built-in server filtering for those who want to use it, though. Not only can you encrypt the data you transmit from the server, but you can also restrict the client to only accept connections from specific UFTP servers residing at a specific host with a specific server id. You can even go as far as only accepting servers with a specific fingerprint (created by their private key).

    # simply drop a configuration file in /etc/uftpd/servers.d 
    # Examples of accepted entries (taken from uftpd man page):
    # 0x11112222|192.168.1.101|66:1E:C9:1D:FC:99:DB:60:B0:1A:F0:8F:CA:F4:28:27:A6:BE:94:BC
    # 0x11113333|fe80::213:72ff:fed6:69ca
    #
    # You can have as many files as you want in this directory with as many entries in each
    # as you want.  If you add or remove new files, you'll need to restart the uftpd service
    # since it's only read in at the start.
    

    So, with this bit of added complexity, I know you are asking yourself:

    • Q: How do I know what my Server ID is?
      A: By default (using the rpm I’ve packaged) every server has an ID of 0x00000001 (decimal value of 1 – one) if you use the uftpe script I wrote (for encrypted transfers). To change your server id, do the following:

      # Define a new id (other than 1)
      NEWID=2
      
      # Simply store this ID inside of the $HOME/.uftp config
      # file as its HEX value:
      
      # Ensure the config directory exists
      [ ! -d $HOME/.uftp ] && mkdir -p  $HOME/.uftp
      
      # clear any old entry you may have set if you want:
      [ -f $HOME/.uftp/config ] && sed -i -e '/^UFTP_UID=/d' $HOME/.uftp/config
      
      # Set the new entry
      printf 'UFTP_UID=0x%.8x\n' $NEWID >> $HOME/.uftp/config
      

      If, however, you’re just using the uftp tool on its own, it takes on the IP address of the host it’s running on (as its hex value) unless you explicitly specify -U 0x00000002 (or whatever ID you want it to assume). Here is a quick example of how you can convert an IP address to its hex value at the shell:

      # Define the address you want to convert
      IP_ADDR=192.168.1.128
      # Now convert it (don't forget the brackets!)
      (
         printf '0x'
         printf '%02X' $(echo "${IP_ADDR//./ }"); echo
      )
      # example above outputs: 0xC0A80180
      
    • Q: How do I know what my Server Fingerprint is?
      A: First off, uftp will never connect to this server with this option set if you choose not to use the uftpe script I wrote (or don’t provide the necessary switches to enforce encryption). Setting a filter on your server’s fingerprint is also a way of saying you only accept connections that are encrypted. This entry is completely optional. Your fingerprint ID is stored in $HOME/.uftp/uftp.key. Again, this key only exists if you’re using the uftpe tool; otherwise the key is wherever you chose to store it. Fetch the id as follows (the below is intended to be called on the server where the uftp.key file exists):

      # fetch the details from the key:
      uftp_keymgt $HOME/.uftp/uftp.key
      

      Don’t panic if you don’t have a key; they are generated automatically when you first run the uftpe tool. The easiest way to pre-create one is just to call uftpe by itself (without any parameters). Yes, it’ll spit out an error telling you that you didn’t provide enough options, but the script will also generate a key for you automatically.

    I tried to think of everything, so logging and log rotation are already built in and included. You can find the logs in /var/log/uftpd.log.

  • uftp-proxy-4.5.1-1.el6.nuxref.x86_64.rpm: the proxy service can bridge together two networks that don’t support multicasting between them. I haven’t spent too much time with this area since my environment hasn’t required me to. If you have any information to share about it, feel free, and I can expand this area.
  • uftp-debuginfo-4.5.1-1.el6.nuxref.x86_64.rpm (Optional): This is only required if you are debugging this tool.
  • uftp-4.5.1-1.el6.nuxref.src.rpm (Optional): The source RPM for those interested in building the software for themselves.

Some Things To Consider:

When something is explained to be this easy, there is always a catch. I won’t lie, there are a few, which means the UFTP solution may not be for everyone. They are as follows:

  • Multicasting isn’t enabled on most routers by default. If your recipients reside in a network you don’t manage, you’ll want to ask the local administrator to make sure their routers have (Layer 2) multicasting enabled.
  • Multicasting can be a pain in the butt to troubleshoot; although newer routers won’t give you any grief, some of the older ones can fail to relay content correctly to clients who’ve also connected to the same multicast address. Negativity aside though, when it works, it works great! My point is: you’ll need to make sure you’re using (networking) hardware that is relatively new (no older than 2010) where the firmware adequately supports Layer 2 multicasting.
  • The UFTP Proxy attempts to resolve the problem of linking your networks together when they’re separated by ones you don’t control. But consider that unless there are multiple recipients located on each network you connect your proxy to, you aren’t saving any bandwidth by choosing this route.
  • Only uftp clients that are online will receive content sent by the uftp server. This can be a deal breaker for some, especially if the product being delivered MUST reach all of the clients. UFTP does not track who is online and who isn’t; it simply delivers content to those who are present at the time. It’s just like how you can’t watch a television show if you haven’t told your TV’s receiver what channel to be on.
  • If you’re using restrictive firewall settings (hopefully you are!), you’ll want to make sure multicasting is allowed into your client box with the following:
    iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT

    It would be worth adding this to your /etc/sysconfig/iptables file as well.
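    If you’re using the stock iptables service that ships with CentOS 6, persisting the rule can be as simple as the sketch below (this assumes the default init-script workflow rather than a hand-maintained ruleset):

    # Add the rule to the running firewall
    iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT

    # Write the active ruleset out to /etc/sysconfig/iptables so it
    # survives a reboot
    service iptables save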

Repository

Please note that all of these packages are also within my repository if that makes things easier for you and your deployment.

Sources

  • UFTP: Dennis Bush did a fantastic job with this tool. It is the key to making multicast file transmission powerful and reliable.
  • Multicasting: information in greater depth can be found here.

A Usenet Solution For CentOS 6

What Is Usenet

In a nutshell, it’s basically a bunch of (file) servers that host a ton of information people place onto them. We’re talking about petabytes (thousands of terabytes) of information. There is very little organization, but it does have a defined structure.

Content is sorted into groups which act as containers for it to be stored and retrieved from. You can think of a group like you might think of a directory on your computer at home. We create directories all the time in an effort to add order and structure to where we keep things (so we can find them later). The thing is, Usenet has no moderation, so you can place content in any group you want. As a result, it’s a lot like what you might expect someone’s hard drive to look like if you gave 5 million people access to it. Basically, there is just a ton of crap everywhere.

The World Wide Web is similar to this, but instead of groups, we sort things by URLs (web addresses) such as http://nuxref.com. Google uses its own web crawlers to scan the entire World Wide Web just to create an index from it. For each website they find, they track its name, its content, and the language it’s written in. The result of them doing this is that we get to use their fantastic search engine! A search engine that has made our lives incredibly easy by granting us fast and easily accessible information at our fingertips.

The Usenet Indexer

Usenet is a very big world of its own, and it’s a lot harder (but not impossible) to get around in without anything indexing it. Thankfully, Usenet is nowhere near the size of the World Wide Web, which makes indexing it very possible for a much larger audience! In fact, we can even index it with the personal computer(s) we run at home. By indexing it, we can easily search it for content we’re interested in (much like how we use Google for web page searching).

Since just about anyone can index Usenet, one has to think: why index Usenet ourselves if someone’s already doing it for us elsewhere? In fact, there are many sites (and tools) that have already done all the indexing of Usenet (some better than others) and are willing to share it with us. But it’s important to know: it can take a lot of server power, disk space, and network consumption for these site administrators to constantly index Usenet for us. Since most (if not all) of these sites are run by hobbyists doing it for fun, it gets expensive for them to maintain things. For that reason, some of them may charge or ask for a donation. If you want to use their services, you should respect their measly request of $8USD to $20USD for a lifetime membership. But don’t get discouraged; there are still a lot of free ones too!

Just keep in mind that Usenet is constantly getting larger; people are posting new content to it every second. You’ll find that the sites that charge a fee are already (relatively) aware of the new changes to Usenet every time you search with them. Others (the free ones) may only update their index a few times a day or so.

Alternatively (the free route), we can go as far as running our own Usenet indexer (such as NewzNab), just as the hobbyists mentioned above did. NewzNab will index Usenet on a regular basis. With your own indexer, you can choose to index just the content that appeals to you. You can even choose to offer your services publicly if you want. Just keep in mind that Usenet is huge! If you do decide to go this route, you’ll find it a very CPU and network intensive operation. You may want to make sure you don’t exceed your Internet Service Provider’s (ISP) download limits.

Now back to the Google analogy I started earlier: when you find a link on Google you like, you simply click on it and your browser redirects you to the website you chose; end of story. However, in the Usenet indexing world, once you find something of interest, the Usenet Indexer will provide you with an NZB File. An NZB file is effectively a map that identifies where your content can be specifically located on Usenet (but not the data itself). An NZB file is to Usenet what a Torrent file is to a BitTorrent Client. Both NZB and Torrent files provide the blueprints needed to mine (acquire) your data. Both NZB and Torrent files require a Downloader to perform the actual data mining for you.
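For the curious, an NZB file is just XML under the hood. The snippet below writes out a stripped down (and entirely fabricated) example so you can see the shape of the map: each file entry lists the newsgroup(s) it was posted to and the message-id of every segment that must be fetched and reassembled:

# A fabricated NZB file; the poster, group, and message-ids are made up
cat > example.nzb <<'EOF'
<?xml version="1.0" encoding="iso-8859-1"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="someone@example.com" date="1405000000" subject="my.backup.archive (1/2)">
    <groups>
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <segment bytes="512000" number="1">fake-id-1@news.example.com</segment>
      <segment bytes="512000" number="2">fake-id-2@news.example.com</segment>
    </segments>
  </file>
</nzb>
EOF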

The Downloader

The Downloader takes the NZB File it’s provided and uses it to acquire the actual data the file maps to. This is the final piece of the puzzle!
Of the list below, you really only need to choose one Downloader. I just listed more than one to give you alternatives to work with. My personal preference is NZBGet because it is more flexible. But its flexibility can also be very confusing (at first). Once you get over its learning curve, and especially the initial configuration, it’s a dream to work with. Alternatively, SABnzbd may be better for the novice if you’re just starting off with Usenet and don’t want much more of a learning curve than you already have.

Either way, pick your poison:

Title Package Details
NZBGet rpm/src NZBGet is written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources.
Community / Manual
**Note: I created this patch in a recent update rebuild (Jul 17th, 2014) to fix a few directory paths so the compression tools (unrar and 7zip) work right away. I also added these compression tools as dependencies of the package so they’ll be present for you from the start.
**Note: I also created this patch in a recent update rebuild (Nov 9th, 2014) to allow the RC Script to take optional configuration defined in /etc/sysconfig/nzbget.

You can install NZBGet using the steps below:

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/nuxref-repository/

# Install NZBGet
yum install -y nzbget \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Grab Template
cp /usr/share/nzbget/nzbget.conf ~/.nzbget

# Protect it
chmod 600 ~/.nzbget

# Start it Up (as a non-root user):
nzbget -D

# You should now be able to access it via: 
#     http://localhost:6789/
SABnzbd n/a SABnzbd is an Open Source Binary Newsreader written in Python.
Community / Manual
Note: I have not packaged this yet, but will probably get around to it eventually. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# There is no RPM installer for this one, we just
# fetch straight from their repository.
# Install git (if it's not already)
yum install -y git

# Grab a snapshot of SABnzbd
git clone https://github.com/sabnzbd/sabnzbd.git SABnzbd

# Start it Up (as a non-root user):
python SABnzbd/SABnzbd.py \
    --daemon \
    --pid $(pwd)/SABnzbd/sabnzbd.pid

# You should now be able to access it via: 
#     http://localhost:8080/

Automated Index Searchers

These tools search for already indexed content you’re interested in and can be configured to automatically download it for you when it’s found. The tool itself doesn’t do the downloading; it automates the connection between your chosen Indexer and Downloader (such as NZBGet or SABnzbd). For this reason, these tools don’t actually search Usenet at all and therefore have very little overhead on your system (or NAS drive).

Title Package Details
Sonarr
rpm/src Automatic TV Show downloader
Formerly known as NZBDrone, it has since been renamed to Sonarr. Packaging this was only made possible because of the blog I wrote on mono v3.x.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/nuxref-repository/

# Installation of this plugin:
yum install -y sonarr \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Start it Up (as a non-root user):
nohup mono /opt/NzbDrone/NzbDrone.exe &

# You should now be able to access it via: 
#     http://localhost:8989/
Sick Beard
n/a (Another) Automatic TV Show downloader

Note: I have not packaged this yet, but will probably get around to it eventually. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Sick Beard
# Note that we grab the master branch, otherwise we default
# to the development one.
git clone -b master https://github.com/midgetspy/Sick-Beard.git SickBeard

# Start it Up (as a non-root user):
python SickBeard/SickBeard.py \
   --daemon \
   --pidfile $(pwd)/SickBeard/sickbeard.pid

# You should now be able to access it via: 
#     http://localhost:8081/
CouchPotato
n/a Automatic movie downloader

Note: I have not packaged this yet, but will probably get around to it eventually. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of CouchPotato
git clone https://github.com/RuudBurger/CouchPotatoServer.git CouchPotato

# Start it Up (as a non-root user):
python CouchPotato/CouchPotato.py \
   --daemon \
   --pid_file CouchPotato/couchpotato.pid

# You should now be able to access it via: 
#     http://localhost:5050/
Headphones
n/a Automatic music downloader

Note: I have not packaged this yet, but will probably get around to it eventually. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Headphones
git clone https://github.com/rembo10/headphones Headphones

# Start it Up (as a non-root user):
python Headphones/Headphones.py \
   --daemon \
   --pidfile $(pwd)/Headphones/headphones.pid

# You should now be able to access it via: 
#     http://localhost:8181/
Mylar
n/a Automatic Comic Book downloader

Note: I have not packaged this yet, but will probably get around to it eventually. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Mylar
git clone https://github.com/evilhero/mylar Mylar

# Start it Up (as a non-root user):
python Mylar/Mylar.py \
   --daemon \
   --pidfile $(pwd)/Mylar/Mylar.pid

# You should now be able to access it via: 
#     http://localhost:8090/

NZBGet Processing Scripts

For those who prefer SABnzbd, you can ignore this part of the blog. For those using NZBGet, one of its strongest features is its ability to process content it downloads before and after it’s received. Post Processing (PP) has specifically been one of NZBGet’s greatest features. It allows separation between the function of NZBGet (which is to download the content NZB files point to) and what you want to do with that content afterwards. Post processing could catalogue what was received and place it into an SQL database. It could rename the content and sort it into separate directories depending on what it is. It could be as simple as emailing you when a download completes, or posting to Facebook or Twitter. You’re not limited to just one PP Script either; you can chain them and run a whole slew of them one after another. The options are endless.
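To give a feel for how simple a PP Script can be, here is a minimal, hypothetical sketch (the NZBPP_* environment variables and the 93/94/95 exit codes come from NZBGet’s scripting interface in v11 and later):

#!/bin/bash
# NZBGet hands us details of the completed download via NZBPP_* variables
echo "[INFO] finished: ${NZBPP_NZBNAME} (stored in ${NZBPP_DIRECTORY})"

# ... your own logic here: catalogue it, rename it, notify someone ...

# Tell NZBGet how things went:
#   93 = POSTPROCESS_SUCCESS, 94 = POSTPROCESS_ERROR, 95 = POSTPROCESS_NONE
exit 93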

I’ve taken some of the popular PP Scripts from the NZBGet forum and packaged them in self-installing RPMs as well, to make life easy for those who want them. Some of these packages require many dependencies and ports to make the installation smooth. Although I link directly to the RPMs here, you are strongly advised to link to my repository with yum if you haven’t already done so.

Title Package Provides Details
Failure Link rpm/src FAILURELINK If a download fails, the script sends info about the failure to the indexer site, so a replacement NZB (same movie or TV episode) can be queued up if available. The indexer site must support the DNZB-Header “X-DNZB-FailureLink”.

Note: The integration works only for downloads queued via URL (including RSS). NZB-files queued from local disk don’t have enough information to contact the indexer site.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-failurelink \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
nzbToMedia rpm/src DELETESAMPLES
RESETDATETIME
NZBTOCOUCHPOTATO
NZBTOGAMEZ
NZBTOHEADPHONES
NZBTOMEDIA
NZBTOMYLAR
NZBTONZBDRONE
NZBTOSICKBEARD
Provides an efficient way to handle post processing for
CouchPotatoServer, SickBeard, Sonarr, Headphones, and Mylar
when using NZBGet on low performance systems like a NAS.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-nzbtomedia \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

Note: This package removes the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of them all.

Subliminal rpm/src SUBLIMINAL Provides a wrapper that integrates subliminal (which fetches subtitles given a filename or filepath) with NZBGet. Subliminal computes the correct video hashes using the powerful guessit library to ensure you get the best matching subtitles. It also relies on enzyme to detect embedded subtitles and avoid retrieving duplicates.

Multiple subtitle services are available: opensubtitles, tvsubtitles, podnapisi, addic7ed, and thesubdb.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-subliminal \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

*Note: python-subliminal (what this PP Script is a wrapper to) had some issues I had to address. For one, I eliminated the entire PYPKG/subliminal/libs directory. I replaced all of the dependencies previously defined there with global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of them all.
**Note: Subliminal was written using Dict Comprehensions (PEP 274), a feature that wasn’t introduced until Python 2.7. Unfortunately, the developers had no interest in resolving this and closed the issue with ‘Upgrade to Python 2.7 or Python v3.3’. For this reason, subliminal does not work at all with stock CentOS or Red Hat 6.x. So… I fixed that. Now I can proudly tell you that the copy of subliminal I host on my repository is 100% compatible with Python 2.6 (this includes some backported logging functionality too).

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

DirWatch rpm/src DIRWATCH DirWatch can watch multiple directories for NZB-Files and move them for processing by NZBGet. This tool is awesome if you have a DropBox account or a network share you want NZBGet to scan! Without this script, NZBGet can only be configured to scan one (and only one) directory for NZB-Files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-dirwatch \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

TidyIt rpm/src TIDYIT TidyIt integrates itself with NZBGet’s scheduling and is used to perform basic housecleaning on a media library. TidyIt removes orphaned meta information, empty directories, and unused content. It’s the perfect OCD tool for those who want to eliminate any unnecessary bloat on their filesystem and media library.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-tidyit \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

Notify rpm/src NOTIFY Notify provides a wrapper that can be integrated with NZBGet, allowing you to send notifications by just about any method supported today, such as email, KODI (XBMC), Prowl, Growl, PushBullet, NotifyMyAndroid, Toasty, Pushalot, Boxcar, Faast, Telegram, Join, and Slack. It also supports pushing information in an HTTP POST request via JSON or XML (SOAP structure).

The script can also be used as a standalone tool and called from the command line, allowing it to support a lot more tools besides NZBGet.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-notify \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

Password Detector rpm/src PASSWORDDETECTOR Password Detector is a queue script that checks for passwords inside of every .rar file of an NZB download. This means it can detect password-protected NZBs very early, before downloading is complete, allowing the NZB to be automatically deleted or paused. Detecting early saves data, time, resources, etc.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-passworddetector \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
Fake Detector rpm/src FAKEDETECTOR This is a queue script executed during download, after each file contained in the NZB (typically a rar-file) completes. The script lists the contents of downloaded rar-files and tries to detect fake NZBs, saving your bandwidth by flagging downloads whose contents fail to pass a series of validity checks.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-fakedetector \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
Video Sort rpm/src VIDEOSORT With the post-processing script VideoSort, you can automatically organize downloaded video files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-videosort \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

Note: This package removes the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS; a lot of other packages all share the same libraries, so it just didn’t make sense to maintain a duplicate of them all.

Mobile Integration

There are some fantastic apps out there that allow you to integrate your phone with the applications mentioned above, letting you manage your downloads from wherever you are. A special shout out to NZB 360, who recently had his app pulled from the Google Play Store for no apparent reason and had to set up shop outside of it. I can say first hand that his application is amazing! You should totally consider it if you have an Android phone.

Usenet Providers

For those who don’t have Usenet already, it does come at an extra cost and/or fee. The cost averages anywhere between $6 and $20 USD/month (anything more and you’re paying too much). The reason for this is that Usenet is a completely isolated network from the Internet, comprised of a completely isolated set of interconnected servers. While the internet is comprised of hundreds of millions of servers each hosting specific content, each Usenet server hosts the entire Usenet database… it hosts everything. If anything is uploaded to Usenet, all of the interconnected servers update themselves with their own local copy of it (to serve us). For this to happen, their servers have to have petabytes of storage. The fee they charge just goes to support operational costs such as bandwidth, maintenance, and the regular addition of storage to their infrastructure. There is very little profit to be made at $8 a person. Here is a breakdown of a few providers (in alphabetical order) I’m aware of and support:

  • Astraweb – Server Location(s): US & Europe; Retention: 2158 days (5.9 years); Average Cost: $6.66USD/month to $15USD/month (see here for details)
  • Usenet Server – Server Location(s): US; Retention: 2159 days (5.9 years); has a free 14 day trial; Average Cost: $13.33USD/month to $14.95USD/month (see here for details)

*Note: Table information was last updated on Jul 14th, 2014. Prices are subject to change as time goes on and this blog post isn’t updated.
**Note: If you have a provider that you would like to be added to this list… Or if you simply spot an error in pricing or linking, please feel free to contact me so I can update it right away.

Why do people use Usenet/Newsgroups?

  • Speed: It’s literally just you and another server; a simple 1-to-1 connection. Data transfer speeds will always be as fast as your ISP can carry your traffic to and from the Usenet server you signed up with. Unlike torrents, content isn’t governed by how many seeders and leechers have it available to you. You never have to deal with upload/download ratios, maintain quotas, or sit idle in someone’s queue who will serve data to you eventually.
  • Security: You only deal with secure connections between you and your Usenet provider; no one else! Torrents can have you maintaining thousands of connections to different systems and sharing data with them. With BitTorrent setups, trackers publicly advertise what you have to share and what you’re trying to download. Your activity is visible to anyone using the same tracker you’re connected to. Not only that, but most torrent connections are insecure as well, which allows virtually anyone to view what you’re doing.

Please know that I am not against torrents at all! In fact, now I’ll take the time to mention a few points where torrents excel over Usenet:

  • Cost: It doesn’t usually cost you anything to use the torrent network. It all depends on the tracker you’re using, of course (some private trackers charge for their usage). But if you’re just out to get the free public stuff made available to us, there are absolutely no costs at all to use this method!
  • Availability: Usenet is far from perfect. When someone uploads something to their Usenet provider, by the time this new content propagates to all of the other Usenet servers, there is a small chance the data will be corrupted. This happens with Usenet all of the time. To compensate, Usenet users anticipate corruption (sad but true). These people kindly post Parchive files to Usenet to complement whatever they previously uploaded. Parchive files work similarly to how RAID works; they provide building blocks to reassemble data in the event it’s corrupted. Corruption never happens with torrents unless the person hosting decides to host corrupted data. Any other scenario would simply be because your BitTorrent client had a bug in it.
  • Retention: As long as someone is willing to seed something, or enough combined leechers can reconstruct what is being shared, data will stay alive in the BitTorrent world forever. However with Usenet, the Usenet server is hosting EVERYTHING, which means it has to maintain a lot of data on a lot of disk space! For this reason, a retention period is inevitable: a time is eventually reached where the Usenet server purges (erases) older content from its hard disks to make room for the new stuff showing up every day.

Honestly, at the end of the day, both torrents and Usenet servers have their pros and cons, and we will each always weigh them differently. What’s considered the right choice for one person might not be the right one for another. Heck, just use both depending on your situation! 🙂


Mono 3.x Packages for CentOS 6

Introduction

Mono allows us to run Windows applications in our Linux environment. It is an open source implementation of Microsoft’s .NET Framework. The problem is, CentOS (and Red Hat) 6.x ship with Mono v2.4, which is a little outdated. You can’t take advantage of the newer apps .NET developers are writing. In fact, you can’t run anything that requires v3.5 (or newer) of the .NET runtime libraries.

In addition, Mono v3.4 grants your CentOS system support for .NET applications that weren’t otherwise available to you in v2.4:

Compatibility   Mono v2.4   Mono v3.4
.NET 1.0        Yes         No (dropped support)
.NET 2.0        Yes         Yes
C# 3.0          Yes         Yes
ASP.NET 2.0     Yes         Yes
.NET 3.5        Partial     Yes
.NET 4.0        No          Yes
.NET 4.5        No          Yes
C# 4.0          No          Yes
ASP.NET 4.0     No          Yes
C# 5.0          No          Yes

What’s so special about the repackaging you did?

Well, first of all… it’s actually an RPM package. It doesn’t require you to haul in a ton of development libraries and compile everything from scratch. Another point is that Mono v2.4 (shipped with CentOS & Red Hat 6) had many patches applied to it. These patches forced Mono to conform to the common directory structure used natively by our operating system. It took me several hours to recreate all of these patches, forcing Mono v3.4 to comply with the same standards.

Finally (at the time of writing this blog), this is the first package of Mono v3.4 I’ve found that can be installed via an RPM and doesn’t require you to recompile everything yourself. Hence you don’t even need to haul in any development libraries at all; Mono will just work as is. Since my repackaging was based off of the original, I tried to keep all of the external rpm packages the same. That said, I did get a little confused with all of the new packages and binary tools that ship with Mono v3.x. Since I’m not a Microsoft developer, I tried to sort these new packages as best as I could. Please feel free to let me know how I can improve this package if you notice anything I’ve done wrong.

Just hand over all your work already!

Absolutely, here they are:

Binary Packages:

Note: Mono (v3) was a bit picky about the SQLite version it referenced. I had to update that package to a slightly newer version as well for everything to play nicely. Only if you intend to haul in the mono-data-sqlite-*.rpm package will you be required to haul in this newer version. I’ve already provided it in my repository, but for the sceptics who want to build it themselves, I’ll include those instructions too.

Source Packages:

Debug Info Packages:

Alternatively, you can get everything from my repository too (this is the best and easiest way). The below instructions assume you’ve set yourself up with it.

# Make sure you're hooked up with my repository for this to
# work: http://nuxref.com/nuxref-repository/
################################################################
# Install Mono
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-core
################################################################
# Install additional packages too if you wish (depending on your
# needs.
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-web
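
With the packages in place, a quick smoke test never hurts. This is purely an illustrative sketch (hello.cs is a made-up file name, and the mcs compiler ships in the mono-devel package, so haul that in too if you want to compile):

# Write a trivial C# program
cat > hello.cs <<'EOF'
using System;
class Hello {
    static void Main() {
        Console.WriteLine("Hello from Mono!");
    }
}
EOF

# Compile and run it
mcs hello.cs && mono hello.exe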

I’ll Never Trust Your Stuff; Let Me Do It Myself

Sure! First you’re going to need to fetch all the patches I had to create (plus the old ones carried forward from Mono v2.4):

You can additionally view the RPM SPEC file I created here.

First prepare our development environment with mock if you haven’t already:

# Install 'mock' into your environment if you don't have it already
# This step will require you to be the superuser (root) in your native
# environment.
yum install -y mock

# Grant your normal every day user account access to the mock group
# This step will also require you to be the root user.
usermod -a -G mock YourNonRootUsername
# Download the official mono packages from their official
# hosting site:
wget http://origin-download.mono-project.com/sources/mono/mono-3.4.0.tar.bz2

# Download all of the building blocks you'll need
wget --output-document=mono.spec https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAiqD2KjHhakweKY_mkLLPba/20140713/mono/mono.spec?dl=1
wget --output-document=monodir.c https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABZHv5NeWFICyAAAw--eiJoa/20140713/mono/monodir.c?dl=1
wget --output-document=mono.snk https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADoY6UvThpcQbUHhs7XUecsa/20140713/mono/mono.snk?dl=1
wget --output-document=lc https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACkja0kNxmO1ytHOIw523HTa/20140713/mono/lc?dl=1
wget --output-document=mono-3.4-ppc-threading.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAKrjdqRR826osJjbqZIu5la/20140713/mono/mono-3.4-ppc-threading.patch?dl=1
wget --output-document=mono-1.2.3-use-monodir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABHvxieGDqU8eDB24ghS_Dua/20140713/mono/mono-1.2.3-use-monodir.patch?dl=1
wget --output-document=mono-2.2-uselibdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABSjbMfIRj5JB7HKWydQVpja/20140713/mono/mono-2.2-uselibdir.patch?dl=1
wget --output-document=mono-2.0-monoservice.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADsL0DBfI0VixRAw6uI0Vkpa/20140713/mono/mono-2.0-monoservice.patch?dl=1
wget --output-document=mono-3.4-libgdiplusconfig.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAHvISVzxIPq9xCmw2m2tcPa/20140713/mono/mono-3.4-libgdiplusconfig.patch?dl=1
wget --output-document=mono-3.4-libdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACHrlv_iSp36jhSeOn4ki0fa/20140713/mono/mono-3.4-libdir.patch?dl=1
wget --output-document=mono-3.4-POSIX_ARG_MAX.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADSN5WhjyqTQptMoWthDnYHa/20140713/mono/mono-3.4-POSIX_ARG_MAX.patch?dl=1
wget --output-document=mono-3.4.xamarin.BZ18690.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAwWsEKkOlnzYZCshp29wuwa/20140713/mono/mono-3.4.xamarin.BZ18690.patch?dl=1

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install libpng-devel libjpeg-devel \
   giflib-devel libtiff-devel libexif-devel libX11-devel \
   fontconfig-devel gettext make gcc-c++ bison glib2-devel \
   pkgconfig libicu-devel libgdiplus-devel zlib-devel \
   automake libtool gettext-devel mono-core mediainfo \
   mysql-devel postgresql-devel sqlite-devel

# Copy our packages into our environment
mock -v -r epel-6-x86_64 --copyin mono.spec /builddir/build/SPECS
mock -v -r epel-6-x86_64 --copyin \
    *.patch \
    mono-3.4.0.tar.bz2 \
    mono.snk \
    lc \
    monodir.c \
    /builddir/build/SOURCES

# Shell into our environment
mock -v -r epel-6-x86_64 --shell

# Change to our build directory
cd builddir/build

# Enable Bootstrapping for the first time
# mono actually requires 'mono' (itself) to build. Weird, right?
# But still necessary! For this reason I prepared an easier
# way of enabling bootstrapping for your first build.
#
# Once you install the binaries created from your first build
# we can rebuild the package again (but this time without
# bootstrapping). The purpose of this is to ensure the mono
# binaries and packages we created are equivalent to the
# the bootstrapped content.
# So... on with the bootstrapping; Note: this will take
# 20 to 30 minutes depending on how fast your system is.
rpmbuild -ba --define "_with_bootstrap=1" SPECS/mono.spec

# Now that we've created mono from a bootstrap, we can
# install the package back into our virtual environment
# and rebuild it again. But this time we rebuild it
# without the bootstrap reference.
rpm -Uhi RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm
# Now rebuild the whole thing all over again to confirm
# your build was good; Note: This will take another 20 to 30 
# minutes again...
rpmbuild -ba SPECS/mono.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/mono-3.4.0-1.el6.nuxref.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-oracle-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-postgresql-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-sqlite-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-locale-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-reactive-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-wcf-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-winforms-3.4.0-1.el6.nuxref.x86_64.rpm .
# The debuginfo package will only exist if you successfully rebuilt
# everything without the bootstrap set
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-debuginfo-3.4.0-1.el6.nuxref.x86_64.rpm .

Upgrading SQLite

For me, I just visited pkgs.org and downloaded the Fedora 20 source (.src.rpm) release of SQLite. Then I extracted its contents as follows:

# I can't promise this link will work, as this package is always
# evolving, but if you do the search above, you'll get the idea
wget http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Everything/source/SRPMS/s/sqlite-3.8.1-2.fc20.src.rpm

# Alternatively, you can download the source rpm package I'm
# already hosting:
wget http://repo.nuxref.com/centos/6/en/source/custom/sqlite-3.8.1-2.el6.src.rpm

# Then extracted it using this neat technique:
rpm2cpio sqlite-*.src.rpm | cpio -idmv

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install ncurses-devel \
    readline-devel glibc-devel autoconf /usr/bin/tclsh \
    tcl-devel

# You'll already have the files you need, as nothing is changed
# with this package. We're just using it as is.
mock -v -r epel-6-x86_64 --copyin sqlite-*.zip /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin *.patch /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin sqlite.spec /builddir/build/SPECS

# Shell into our environment
mock -v -r epel-6-x86_64 --shell
 
# Change to our build directory
cd builddir/build

# Build our packages (process doesn't take long ~2 min)
rpmbuild -ba SPECS/sqlite.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/sqlite-3.8.1-2.el6.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-devel-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-doc-3.8.1-2.el6.noarch.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/lemon-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-tcl-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-debuginfo-3.8.1-2.el6.x86_64.rpm .

Credit

This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository

This blog makes use of my own repository, which I loosely maintain. If you’d like me to continue to monitor and apply updates, as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!

Sources

The majority of my efforts came from the following sites: