What Is Usenet
In a nutshell, it’s basically a bunch of (file) servers that host a ton of information people place on them. We’re talking about petabytes (1000+ terabytes) of information. There is very little organization, but it does have a defined structure.
Content is sorted into groups, which act as containers it can be stored in and retrieved from. You can think of a group like you might think of a directory on your computer at home. We create directories all the time in an effort to add order and structure to where we keep things (so we can find them later). The thing is, Usenet has no moderation, so you can place content in any group you want. As a result, it’s a lot like what you might expect someone’s hard drive to look like if you gave 5 million people access to it. Basically, there is just a ton of crap everywhere.
The World Wide Web is similar to this, but instead of groups, we sort things by URLs (web addresses) such as http://nuxref.com. Google uses its own web crawlers to scan the entire World Wide Web just to create an index from it. For each website they find, they track its name, its content, and the language it’s written in. The result of them doing this is that we get to use their fantastic search engine! A search engine that has made our lives incredibly easy by granting us fast, easily accessible information at our fingertips.
The Usenet Indexer
Usenet is a very big world of its own, and it’s a lot harder to get around in (but not impossible) without anything indexing it. Thankfully Usenet is nowhere near the size of the World Wide Web, which makes indexing it very possible for a much larger audience! In fact, we can even index it with the personal computer(s) we run at home. By indexing it, we can easily search it for content we’re interested in (much like how we use Google for web page searching).
Since just about anyone can index Usenet, one has to wonder: why index Usenet ourselves if someone’s already doing it for us elsewhere? In fact, there are many sites (and tools) that have already done all the indexing of Usenet (some better than others) and are willing to share it with others (us). But it’s important to know: it can take a lot of server power, disk space, and network consumption for these site administrators to constantly index Usenet for us. Since most (if not all) of these sites are run by hobbyists doing it for fun, it gets expensive for them to maintain things. For that reason some of them may charge or ask for a donation. If you want to use their services, you should respect their modest request of $8 USD to $20 USD for a lifetime membership. But don’t get discouraged; there are still a lot of free ones too!
Just keep in mind that Usenet is constantly getting larger; people are posting new content to it every second. You’ll find that the sites that charge a fee usually keep their index (relatively) current with those changes by the time you search. Others (the free ones) may only update their index a few times a day or so.
Alternatively (the free route), we can go as far as running our own Usenet indexer (such as NewzNab), just as the hobbyists mentioned above did. NewzNab will index Usenet on a regular basis. With your own indexer, you can choose to index just the content that appeals to you. You can even choose to offer your services publicly if you want. Just keep in mind that Usenet is huge! If you do decide to go this route, you’ll find it a very CPU and network intensive operation. You may also want to make sure you don’t exceed your Internet Service Provider’s (ISP) download limits.
Now back to the Google analogy I started earlier: when you find a link on Google you like, you simply click on it and your browser redirects you to the website you chose; end of story. However, in the Usenet indexing world, once you find something of interest, the Usenet indexer will provide you with an NZB file. An NZB file is effectively a map that identifies where your content can specifically be located on Usenet (but not the data itself). An NZB file is to Usenet what a Torrent file is to a BitTorrent client. Both NZB and Torrent files provide the blueprints needed to mine (acquire) your data. Both NZB and Torrent files require a Downloader to perform the actual data mining for you.
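To make the “map” idea more concrete, here is a heavily abridged, illustrative NZB file. All names, message-IDs, and sizes below are made up; a real NZB simply lists the newsgroup(s) a post went to and the message-ID of every segment, so the Downloader knows exactly which articles to fetch:
<?xml version="1.0" encoding="UTF-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="uploader@example.invalid" date="1405555200" subject="example.part01.rar (1/2)">
    <groups>
      <!-- the group(s) the content was posted to -->
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <!-- one entry per article; the Downloader fetches these by message-ID -->
      <segment bytes="512000" number="1">part1of2.abc123@news.example.invalid</segment>
      <segment bytes="512000" number="2">part2of2.def456@news.example.invalid</segment>
    </segments>
  </file>
</nzb>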
The Downloader
The Downloader takes an NZB file it’s provided and uses it to acquire the actual data the file maps to. This is the final piece of the puzzle!
Of the list below, you really only need to choose one Downloader. I just listed more than one to give you alternatives to work with. My personal preference is NZBGet because it is more flexible. But its flexibility can also be very confusing (only at first). Once you get over its learning curve, and especially the initial configuration, it’s a dream to work with. Alternatively, SABnzbd may be better for the novice if you’re just starting off with Usenet and don’t want much more of a learning curve than you already have.
Either way, pick your poison:
Title | Package | Details |
---|---|---|
NZBGet | rpm/src | NZBGet is written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources. Community / Manual **Note: I created this patch in a recent update rebuild (Jul 17th, 2014) to fix a few directory paths so the compression tools (unrar and 7zip) can work right away. I also added these compression tools as dependencies to the package so they’ll just be present for you at the start. **Note: I also created this patch in a recent update rebuild (Nov 9th, 2014) to allow the RC Script to take optional configuration defined in /etc/sysconfig/nzbget. You can install NZBGet using the steps below: # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/nuxref-repository/ # Install NZBGet yum install -y nzbget --enablerepo=nuxref --enablerepo=nuxref-shared # Grab Template cp /usr/share/nzbget/nzbget.conf ~/.nzbget # Protect it chmod 600 ~/.nzbget # Start it Up (as a non-root user): nzbget -D # You should now be able to access it via: # http://localhost:6789/ |
SABnzbd | n/a | SABnzbd is an Open Source Binary Newsreader written in Python. Community / Manual Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows: # There is no RPM installer for this one, we just # fetch straight from their repository. # Install git (if it's not already) yum install -y git # Grab a snapshot of SABnzbd git clone https://github.com/sabnzbd/sabnzbd.git SABnzbd # Start it Up (as a non-root user): python SABnzbd/SABnzbd.py --daemon --pid $(pwd)/SABnzbd/sabnzbd.pid # You should now be able to access it via: # http://localhost:8080/ |
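Once NZBGet is up, the only thing it really needs before it can download anything is your Usenet provider’s details. The option names below come from the standard nzbget.conf template; the values are placeholders you would swap for your own account (a minimal sketch, not a complete configuration):
# Inside ~/.nzbget (or /etc/nzbget.conf)
ControlUsername=nzbget
ControlPassword=pick-something-strong
Server1.Host=news.yourprovider.example
Server1.Port=563
Server1.Encryption=yes
Server1.Username=your_account
Server1.Password=your_password
Server1.Connections=8
After saving the file, restart NZBGet (nzbget -Q stops the daemon, then nzbget -D starts it again) and the server will show up under Settings in the web interface.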
Automated Index Searchers
These tools search for already indexed content you’re interested in and can be configured to automatically download it for you when it’s found. They don’t do the downloading themselves; instead they automate the hand-off between your chosen Indexer and Downloader (such as NZBGet or SABnzbd). For this reason, these tools do not actually search Usenet at all and therefore add very little overhead to your system (or NAS drive).
Title | Package | Details |
---|---|---|
Sonarr | rpm/src | Automatic TV show downloader. Formerly known as NZBDrone; it has since been renamed to Sonarr. Packaging this was only made possible because of the blog I wrote on mono v3.x. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/nuxref-repository/ # Installation of this plugin: yum install -y sonarr --enablerepo=nuxref --enablerepo=nuxref-shared # Start it Up (as a non-root user): nohup mono /opt/NzbDrone/NzbDrone.exe & # You should now be able to access it via: # http://localhost:8989/ |
Sick Beard | n/a | (Another) Automatic TV show downloader. Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows: # Install git (if it's not already) yum install -y git # Grab a snapshot of Sick Beard # Note that we grab the master branch, otherwise we default # to the development one. git clone -b master https://github.com/midgetspy/Sick-Beard.git SickBeard # Start it Up (as a non-root user): python SickBeard/SickBeard.py --daemon --pidfile $(pwd)/SickBeard/sickbeard.pid # You should now be able to access it via: # http://localhost:8081/ |
CouchPotato | n/a | Automatic movie downloader. Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows: # Install git (if it's not already) yum install -y git # Grab a snapshot of CouchPotato git clone https://github.com/RuudBurger/CouchPotatoServer.git CouchPotato # Start it Up (as a non-root user): python CouchPotato/CouchPotato.py --daemon --pid_file CouchPotato/couchpotato.pid # You should now be able to access it via: # http://localhost:5050/ |
Headphones | n/a | Automatic music downloader. Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows: # Install git (if it's not already) yum install -y git # Grab a snapshot of Headphones git clone https://github.com/rembo10/headphones Headphones # Start it Up (as a non-root user): python Headphones/Headphones.py --daemon --pidfile $(pwd)/Headphones/headphones.pid # You should now be able to access it via: # http://localhost:8181/ |
Mylar | n/a | Automatic comic book downloader. Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows: # Install git (if it's not already) yum install -y git # Grab a snapshot of Mylar git clone https://github.com/evilhero/mylar Mylar # Start it Up (as a non-root user): python Mylar/Mylar.py --daemon --pidfile $(pwd)/Mylar/Mylar.pid # You should now be able to access it via: # http://localhost:8090/ |
NZBGet Processing Scripts
For those who prefer SABnzbd, you can ignore this part of the blog. For those using NZBGet, one of its strongest features is its ability to process content before and after it’s downloaded. Post Processing (PP) has specifically been one of NZBGet’s greatest features. It keeps the core function of NZBGet (downloading the content an NZB file points to) separate from whatever you want to do with that content afterwards. Post processing could catalogue what was received and place it into an SQL database. It could rename the content and sort it into separate directories depending on what it is. It can be as simple as emailing you when a download completes, or posting to Facebook or Twitter. You’re not limited to just one PP script either; you can chain them and run a whole slew of them one after another. The options are endless.
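To give a feel for how these scripts hook in, here is a minimal, purely illustrative post-processing script (the file name and log path are made up). NZBGet recognizes scripts by the signature block in the header, hands them job details through NZBPP_* environment variables, and reads the exit code (93 = success, 94 = error) to decide whether post-processing succeeded:
#!/bin/bash
##############################################################################
### NZBGET POST-PROCESSING SCRIPT                                          ###

# LogDownload.sh - toy example: append every finished download to a log file.

### NZBGET POST-PROCESSING SCRIPT                                          ###
##############################################################################

# Exit codes NZBGet recognizes from post-processing scripts
POSTPROCESS_SUCCESS=93
POSTPROCESS_ERROR=94

# NZBGet passes job details in via NZBPP_* environment variables
if [ -z "$NZBPP_DIRECTORY" ]; then
    echo "This script must be started by NZBGet" >&2
    exit $POSTPROCESS_ERROR
fi

echo "$(date '+%F %T') ${NZBPP_NZBNAME} -> ${NZBPP_DIRECTORY}" >> ~/nzbget-downloads.log
exit $POSTPROCESS_SUCCESS
Chaining amounts to nothing more than enabling several scripts like this; NZBGet runs each one in turn with the same environment.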
I’ve taken some of the popular PP scripts from the NZBGet forum and packaged them in self-installing RPMs as well, to make life easy for those who want them. Some of these packages require many dependencies and ports to make the installation smooth. Although I link directly to the RPMs here, you are strongly advised to link to my repository with yum if you haven’t already done so.
Title | Package | Provides | Details |
---|---|---|---|
Failure Link | rpm/src | FAILURELINK | If a download fails, the script sends info about the failure to the indexer site, so a replacement NZB (same movie or TV episode) can be queued up if available. The indexer site must support the DNZB header “X-DNZB-FailureLink”. Note: The integration works only for downloads queued via URL (including RSS). NZB files queued from local disk don’t have enough information to contact the indexer site. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-failurelink --enablerepo=nuxref --enablerepo=nuxref-shared |
nzbToMedia | rpm/src | DELETESAMPLES RESETDATETIME NZBTOCOUCHPOTATO NZBTOGAMEZ NZBTOHEADPHONES NZBTOMEDIA NZBTOMYLAR NZBTONZBDRONE NZBTOSICKBEARD | Provides an efficient way to handle post processing for CouchPotatoServer, SickBeard, Sonarr, Headphones, and Mylar when using NZBGet on low-performance systems like a NAS. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-nzbtomedia --enablerepo=nuxref --enablerepo=nuxref-shared Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all. |
Subliminal | rpm/src | SUBLIMINAL | Provides a wrapper that integrates subliminal (which fetches subtitles for a given filename or filepath) with NZBGet. Subliminal computes the correct video hashes using the powerful guessit library to ensure you get the best-matching subtitles. It also relies on enzyme to detect embedded subtitles and avoid retrieving duplicates. Multiple subtitle services are available: opensubtitles, tvsubtitles, podnapisi, addic7ed, and thesubdb. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-subliminal --enablerepo=nuxref --enablerepo=nuxref-shared *Note: python-subliminal (what this PP script is a wrapper to) had some issues I had to address. For one, I eliminated the entire PYPKG/subliminal/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all. I am the current maintainer of this plugin and it can be accessed from my GitHub page here. |
DirWatch | rpm/src | DIRWATCH | DirWatch can watch multiple directories for NZB files and move them for processing by NZBGet. This tool is awesome if you have a DropBox account or a network share you want NZBGet to scan! Without this script NZBGet can only be configured to scan one (and only one) directory for NZB files. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-dirwatch --enablerepo=nuxref --enablerepo=nuxref-shared I am the current maintainer of this plugin and it can be accessed from my GitHub page here. |
TidyIt | rpm/src | TIDYIT | TidyIt integrates itself with NZBGet’s scheduling and is used to perform basic housecleaning on a media library. TidyIt removes orphaned meta information, empty directories and unused content. It’s the perfect OCD tool for those who want to eliminate any unnecessary bloat on their filesystem and media library. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-tidyit --enablerepo=nuxref --enablerepo=nuxref-shared I am the current maintainer of this plugin and it can be accessed from my GitHub page here. |
Notify | rpm/src | NOTIFY | Notify provides a wrapper that can be integrated with NZBGet, allowing you to send notifications through just about any popular service today, such as email, KODI (XBMC), Prowl, Growl, PushBullet, NotifyMyAndroid, Toasty, Pushalot, Boxcar, Faast, Telegram, Join, and Slack. It also supports pushing information in an HTTP POST request via JSON or XML (SOAP structure). The script can also be used as a standalone tool and called from the command line. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-notify --enablerepo=nuxref --enablerepo=nuxref-shared I am the current maintainer of this plugin and it can be accessed from my GitHub page here. |
Password Detector | rpm/src | PASSWORDETECTOR | Password Detector is a queue script that checks for passwords inside every .rar file of an NZB as it downloads. This means it can detect password-protected NZBs very early, before downloading is complete, allowing the NZB to be automatically deleted or paused. Detecting early saves data, time, resources, etc. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-passworddetector --enablerepo=nuxref --enablerepo=nuxref-shared |
Fake Detector | rpm/src | FAKEDETECTOR | This is a queue script which is executed during download, after each file contained in the NZB (typically a rar file) finishes downloading. The script lists the contents of the downloaded rar files and tries to detect fake NZBs; it saves you bandwidth by flagging a download whose contents fail a series of validity checks. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-fakedetector --enablerepo=nuxref --enablerepo=nuxref-shared |
Video Sort | rpm/src | VIDEOSORT | With the post-processing script VideoSort you can automatically organize downloaded video files. # Note: You must link to the NuxRef repository for this to work! # See: http://nuxref.com/repo/ # Installation of this plugin: yum install -y nzbget-script-videosort --enablerepo=nuxref --enablerepo=nuxref-shared Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all. |
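After installing any of these, NZBGet still has to be told which scripts to run. As a rough sketch (the directory and script file names below are placeholders; check where the RPM actually placed the files and what they are called), the relevant nzbget.conf options are ScriptDir and PostScript:
# In ~/.nzbget (or /etc/nzbget.conf)
ScriptDir=/usr/share/nzbget/scripts
# Comma-separated list of scripts, run in the order listed:
PostScript=VideoSort.py, Subliminal.py
Once NZBGet sees the scripts in ScriptDir, they should also be selectable per category or per download from the web interface.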
Mobile Integration
There are some fantastic apps out there that let you integrate your phone with the applications mentioned above, so you can manage your downloads from wherever you are. A special shout out to NZB 360, whose developer recently had his app pulled from the Google Play Store for no apparent reason and had to set up shop outside of it. I can say first hand that his application is amazing! You should totally consider it if you have an Android phone.
Usenet Providers
For those who don’t have Usenet already, it does come at an extra cost and/or fee. The cost averages anywhere between $6 and $20 USD/month (anything more and you’re paying too much). The reason for this is that Usenet is a completely isolated network from the Internet. It’s made up of a completely separate set of interconnected servers. While the Internet is comprised of hundreds of millions of servers each hosting specific content, each Usenet server hosts the entire Usenet database… it hosts everything. If anything is uploaded to Usenet, all of the interconnected servers update themselves with their own local copy of it (to serve us). For this to happen, their servers have to have petabytes of storage. The fee they charge you just goes to support their operational costs such as bandwidth, maintenance and the regular addition of storage to their infrastructure. There is very little profit to be made for them at $8 a person. Here is a breakdown of a few providers (in alphabetical order) I’m aware of and support:
Provider | Server Location(s) | Notes | Average Cost |
---|---|---|---|
Astraweb | US & Europe | Retention: 2158 Days (5.9 Years) | $6.66USD/Month to $15USD/Month see here for details |
Usenet Server | US | Retention: 2159 Days (5.9 Years) Has a free 14 day trial | $13.33USD/Month to $14.95USD/Month see here for details |
*Note: Table information was last updated on Jul 14th, 2014. Prices are subject to change as time goes on and this blog post isn’t updated.
**Note: If you have a provider that you would like to be added to this list… Or if you simply spot an error in pricing or linking, please feel free to contact me so I can update it right away.
Why do people use Usenet/Newsgroups?
- Speed: It’s literally just you and another server; a simple 1-to-1 connection. Data transfer speeds will always be as fast as your ISP can carry your traffic to and from the Usenet server you signed up with. Unlike torrents, content isn’t governed by how many seeders and leechers have it available for you. You never have to deal with upload/download ratios, maintain quotas, and/or sit idle in someone’s queue who will serve data to you eventually.
- Security: You only deal with a secure connection between you and your Usenet provider; no one else! Torrents can have you maintaining thousands of connections to different systems and sharing data with them. With BitTorrent setups, trackers publicly advertise what you have to share and what you’re trying to download. Your activity is visible to anyone using the same tracker you’re connected to. Not only that, but most torrent connections are insecure as well, which allows virtually anyone to view what you’re doing.
Please know that I am not against torrents at all! In fact, now I’ll take the time to mention a few points where torrents excel over Usenet:
- Cost: It doesn’t usually cost you anything to use the torrent network. It all depends on the tracker you’re using of course (some private trackers charge for their usage). But if you’re just out to get the free public stuff made available to us, there are absolutely no costs at all to use this method!
- Availability: Usenet is far from perfect. When someone uploads something to their Usenet provider, by the time that new content has propagated to all of the other Usenet servers, there is a small chance the data will be corrupted. This happens with Usenet all of the time. To compensate for this, Usenet users anticipate corruption (sad but true). These people kindly post Parchive files to Usenet to complement whatever they previously uploaded. Parchive files work similarly to how RAID works; they provide building blocks to reassemble data in the event it’s corrupted (see the example after this list). Corruption never happens with torrents unless the person hosting decides to host corrupted data. Any other scenario would simply be because your BitTorrent client had a bug in it.
- Retention: As long as someone is willing to seed something, or enough combined leechers can reconstruct what is being shared, data will always stay alive in the BitTorrent world. However, with Usenet, the Usenet server is hosting EVERYTHING, which means it has to maintain a lot of data on a lot of disk space! For this reason, a retention period is inevitably met. A time is eventually reached where the Usenet server purges (erases) older content from its hard disks to make room for the new stuff showing up every day.
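For the curious, this is roughly what the Parchive workflow looks like from the command line with the par2cmdline tool (file names here are just placeholders); your Downloader normally does this for you automatically:
# Check a download against its parity set
par2 verify example.par2
# Rebuild damaged or missing blocks (only possible if enough parity blocks survived)
par2 repair example.par2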
Honestly, at the end of the day, both torrents and Usenet have their pros and cons. Each of us will always weigh them differently. What’s considered the right choice for one person might not be the right one for another. Heck, just use both depending on your situation! 🙂
Sources
- Wikipedia’s article explaining what Usenet is.
- NZBGet Official Website
- SABnzbd Official Website
- Sonarr Official Website and its GitHub repository.
- CouchPotato (Server) Official Website and its GitHub repository.
- Sick Beard Official Website and its GitHub repository
- Mylar has no (official) website other than its GitHub repository
- Headphones has no (official) website other than its GitHub repository
- NZB 360 is a fantastic Android Application that you can use to integrate with SABnzbd, NZBGet, CouchPotato, Sick Beard, Sonarr, and Headphones!
- NewzNab is a PHP (v5.3+) based indexer that can allow you to index Usenet (or portions of it) yourself.
- Subliminal NZBGet Post-Process & Scan Script I maintain. This script can also be used by people who don’t use Newsgroups or NZBGet. I documented how this can be done through a cron job on the GitHub link provided.
- NZBGet Scripting Framework I maintain. This provides the core of NZBGet’s version of Subliminal. It makes developing scripts for NZBGet really easy and is documented well explaining how to do so for those interested.
The repositories of the nzbget PP scripts are broken. Instead of […]nzbget-ppscript-[…] they are […]nzbget-script-[…]. Without the “pp”.
Yes, this is my fault, but it was changed for the better. I really don’t get a lot of hits on this blog entry, so I didn’t think anyone was even using the packages I prepared. I have since updated the blog to reflect the proper RPMs. Up until NZBGet v13, there were only ‘Post Processing’ scripts (hence the ppscripts id). But since the releases of v13 and v14, they (the NZBGet developers) changed the default directory name to ‘scripts‘, because there are now Queue, Scan, and Scheduler extension scripts too (in addition to Post Processing); I just made the same change to my RPMs. I also updated my nzbget package to pre-create a scripts directory instead of the ppscripts one.
You can easily switch over; just back up your current config (just in case), uninstall all the ppscript RPMs and reinstall their much more mature (and maintained) script replacements. You just need to update your config to look in the scripts directory now too. If the standards change again in the future, I’ll do a better job at handling the transition.
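Roughly, the switch looks like this (the two script packages shown are only examples; install whichever ones you actually use):
# Back up the existing configuration first
cp ~/.nzbget ~/.nzbget.bak
# Remove the old ppscript packages...
yum remove -y "nzbget-ppscript-*"
# ...and install their renamed replacements
yum install -y nzbget-script-videosort nzbget-script-subliminal --enablerepo=nuxref --enablerepo=nuxref-shared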
Thank you for the elaboration! I am surprised to hear that you think nobody is using your packages. It makes all the fragmented sites and blogs about creating a centos download-server redundant.
Are you planning to work on CentOS 7 packages? I haven’t decided which version of CentOS will be the base of my new ldap/download/file server yet. Any ideas?
Thanks for the compliments; I do eventually plan on switching to CentOS 7. I’ll probably wait until the 7.1 or 7.2 release before I start using it on any production system though. CentOS 6 will definitely be around easily for another 10 years just because of how stable it is. I’d strongly recommend you build your LDAP server there. If you want to use CentOS/Red Hat 7, at least wait until 7.1 or 7.2 when they fix the majority of the issues.
Thanks for the advice. CentOS 7 seems to be too big a leap for me now. Will install it as a VM in my lab just to get used to it.
While on the subject of LDAP: I’m looking for an all-in-one CentOS box to do ldap/samba/pdc/nfs together with the Usenet functionality described so nicely on your site. It seems a lot, but I’ve done it with ClearOS years ago. This three-year-old COS 5.2 server is outdated and I’m building a replacement server. CentOS seems the right choice for me.
I just finished setting up IPA server on 6.6. But it seems it doesn’t like Samba on the same machine. So back to plain OpenLDAP? What is your advise?
Honestly, you’re in one area I don’t use much. I certainly don’t have any blog entries on those topics! 🙂 I run a simple OpenLDAP server where I work for basic authentication for our ticketing system, but otherwise… that’s it. I’m a bit shocked to hear Samba isn’t working correctly on the same machine though; there really shouldn’t be any issue at all. I honestly can’t offer you any further advice other than to stick with prepackaged content (RPMs). Ideally keep to the RPMs you find from reputable sources (such as EPEL). pkgs.org is great too for looking for pre-built content. If you have any more questions, feel free to email me; this conversation has drifted far off of what this blog entry is focused on. 🙂
About configuring NZBGet. What I don’t understand is the config option ‘DaemonUsername=’ and your remark “Start it Up (as a non-root user)”. I have pointed the DaemonUsername to a non-root account. My config file is in ~/.nzbget/ of that user. When I start “nzbget -D” I get the error “No configuration-file found”. How come?
Another question: how should I start the daemon as a service? I’ve tried a few init scripts. None work flawlessly. The best one yet is this NZBDrone script: https://raw.githubusercontent.com/OnceUponALoop/RandomShell/master/NzbDrone-init/nzbdrone.init.centos
Any ideas?
Hansel,
Unless you’re running as the root user, the DaemonUsername entry will just be ignored (when you type nzbget -D). I personally don’t use nzbget as a global service for everyone (and therefore don’t launch it as root). I just run it as my own personal account. For my system, I just added the following to the end of my /etc/rc.local file:
# Start NZBGet (as your userid)
su - myusername -c "nzbget -D"
When it first starts up, it will look for your configuration file, which you can place globally in /etc/nzbget.conf, or locally in your home directory (I use this approach) as ~/.nzbget. Judging by your comment, you created a directory with that name; you want it to be the file itself instead. Hence:
# Grab Template
cp /usr/share/nzbget/nzbget.conf ~/.nzbget
# Protect it
chmod 600 ~/.nzbget
Ok, you’re starting it directly from rc.local. I’m used to starting sabnzbd & sickbeard from SysV init; like rc.local says:
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don’t
# want to do the full Sys V style init stuff.
I guess I want it to be full style V init.
SAB and SB are python based. Not sure how to build the V init for nzbget… I’m close though.
It shouldn’t be difficult to take the example I provided below and stick it in a System V RC script (place it in the start() function). Here is how I start my other services from the /etc/rc.local file as well, in case it helps. You can easily take what I’ve done and massage it to include Sickbeard and/or Sabnzbd too if you wanted. Just replace myusername with your actual userid.
# Start CouchPotato
CPROOT=/home/myusername/Downloads/CouchPotato
CPPID=$CPROOT/couchpotato.pid
su - myusername -c "(if [ -f $CPPID ]; then
PID=\$(cat $CPPID 2>/dev/null);
kill \$PID &>/dev/null;
sleep 1s;
kill -9 \$PID &>/dev/null;
rm -f $CPPID;
fi
python $CPROOT/CouchPotato.py \
--daemon \
--pid_file $CPPID;
)"
# Start Headphones
HPROOT=/home/myusername/Downloads/Headphones
HPPID=$HPROOT/headphones.pid
su - myusername -c "(if [ -f $HPPID ]; then
PID=\$(cat $HPPID 2>/dev/null);
kill \$PID &>/dev/null;
sleep 1s;
kill -9 \$PID &>/dev/null;
rm -f $HPPID;
fi
python $HPROOT/Headphones.py \
--daemon \
--pidfile $HPPID \
--port=7070;
)"
# Start Sonarr (NzbDrone, via mono)
su - myusername -c "nohup mono /opt/NzbDrone/NzbDrone.exe &>/dev/null" &
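If you’d rather have a proper service, a bare-bones SysV wrapper around the same idea could look like the sketch below (purely illustrative; nzbget -Q asks the daemon to quit, and you’d save it as /etc/rc.d/init.d/nzbget then run chkconfig --add nzbget). A real init script would also want status handling, lock files under /var/lock/subsys, and so on:
#!/bin/sh
# nzbget - minimal SysV init sketch (illustrative only)
# chkconfig: 345 99 01
# description: NZBGet daemon

NZBGET_USER=myusername

start() {
    echo -n "Starting nzbget: "
    su - "$NZBGET_USER" -c "nzbget -D" && echo OK || echo FAILED
}

stop() {
    echo -n "Stopping nzbget: "
    su - "$NZBGET_USER" -c "nzbget -Q" && echo OK || echo FAILED
}

case "$1" in
  start)   start ;;
  stop)    stop ;;
  restart) stop; sleep 2; start ;;
  *) echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac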
Almost got it running from V init. Seems the PID is not passed on correctly to the nzbget.pid file.
[root@hera ~]# service nzbget start
Starting nzbget...
nzbget started (pid ) [ OK ]
[root@hera ~]# pgrep -f nzbget
14271
[root@hera ~]# cat /var/run/nzbget.pid
[root@hera ~]# service nzbget stop
Stopping nzbget...
Now I get more than one PID:
[root@hera ~]# service nzbget start
Starting nzbget...
nzbget started (pid 18218 18223 18229 18232 18234 18233 18235 18236 18237)
[root@hera ~]# cat /var/run/nzbget.pid
18218 18223 18229 18232 18234 18233 18235 18236 18237
Hansel,
You’ve ventured beyond the scope of my blog. I can’t really troubleshoot an RC Script I haven’t seen by just the errors it outputs. If you like, you can email me what you’ve done (the script you wrote) and I can try and help you out. Alternatively, and for simplicity, consider just using the examples I’ve already provided. You might find it easier that way 🙂
I like to venture beyond scopes.. 😉
Something else; I installed SABnzbd as per your directions. It starts correctly but it doesn’t let me connect to my Astraweb account over SSL. It’s enabled in the .ini file but the option is grayed out in the webui. pyOpenSSL is installed
Package matching pyOpenSSL-0.10-2.el6.x86_64 already installed.
Same server information is working on another ClearOS 5.2 installation. BTW, what’s your e-mail address so I can contact you about nzbget init questions?
Hansel,
For SSL handling, I actually use stunnel instead.
yum install stunnel
Then you can create a configuration file that looks similar to this:
client = yes
[nntp]
accept = localhost:119
; assuming you're in the US and using Astraweb (which it appears you are)
; this is their secure server
connect = ssl.astraweb.com:563
You can launch your new stunnel instance (as root) by the following:
stunnel yourconfig.file
You can also just use the RC script in place and place your configuration file in /etc/stunnel/stunnel.conf (I think) and launch:
service stunnel start
Stunnel will look after all of the encryption for you; you just need to point your programs (Pan, Sabnzbd, NZBGet, etc.) at your local box now instead of the remote server (Astraweb). You’ll want to choose NOT to use SSL and point them to localhost (default port 119).
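For NZBGet, for instance, that means the news-server section of the config ends up looking something like this (a sketch using the standard nzbget.conf option names; the credentials are placeholders and still belong to your Astraweb account even though the traffic now goes through the local tunnel):
Server1.Host=localhost
Server1.Port=119
Server1.Encryption=no
Server1.Username=your_astraweb_user
Server1.Password=your_astraweb_pass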
Python (which SABnzbd uses) is slow… so having the encryption handled at the C level, using libraries that are updated more frequently, is not only faster but safer. Right now I think the Python SSL bindings are struggling with the fact that SSLv3 support was dropped by most remote locations. There are a few manual hacks you can do to make it work; but just don’t use it at all for now.
As per the email; I see that you’ve found me already; I received your email and will get back to you after I’ve had time to look over your scripts.
How do I make it so it launches on boot? All the init.d scripts I see don’t seem to work, I compiled nzbget via their documentation.
I had issues with the RC script that is packaged with NZBGet too, so I wrote my own which allows you to start it up as different users (it defaults to root). Try the package I have provided in my testing repository for now, here, which includes it, and see if that works for you.
Alternatively, a previous user had a similar question to yours not too long ago. If you scroll back just 2 or 3 comments, you can see how editing your /etc/rc.local file can be used to start up your services at boot.
Also, if you still plan on using your own compiled version and don’t want to use what I’ve packaged, I posted the RC Script I wrote here just now.
I used your script and this is the response I got:
http://imageshere.co.uk/f
Sorry for the late reply, I’ve been busy!
Kind Regards,
Jerome Haynes
Jerome,
It almost appears as though you’re missing the #!/bin/sh on the first line, causing ‘service’ to use some other shell interpreter. What do you get if you call the same script (as root) like so:
sh /etc/rc.d/init.d/nzbget
Either way… I submitted another copy of the script here. You can fetch it and place it into your system like so:
wget http://pastebin.com/raw.php?i=SxJctmGd -O /etc/rc.d/init.d/nzbget
chmod 755 /etc/rc.d/init.d/nzbget
Good luck!
This is the response I get: http://imageshere.co.uk/g
I’m not new to CentOS, it’s just getting a really good understanding of init.d and the boot process I’ve always found difficult.
Kind Regards,
Jerome Haynes
What does
# ls -al /etc/rc.d/init.d/nzb*
tell you?
Here’s the result: http://imageshere.co.uk/h
Hansel, that permissions system is on about selinux, which is disabled anyway.
Kind Regards,
Jerome Haynes
From what you’re displaying, you’re not doing anything wrong. That is really strange indeed.
What happens if you just launch the RC script manually? As root?
sh /etc/rc.d/init.d/nzbget start
sh /etc/rc.d/init.d/nzbget status
sh /etc/rc.d/init.d/nzbget stop
The service command itself is a shell script too (/sbin/service)?
What version of CentOS are you using? 6 i’m assuming?
Also, what is the output of the following:
# the RC Package contains the /sbin/services file; this will spit out the version:
rpm -q initscripts
# environment variables (incase something is over-riding your settings)
export
# also, since the script sources /etc/sysconfig/nzbget try removing it
# if present (or move it to a different name) to see if it's sourcing
# some strange configuration
mv /etc/sysconfig/nzbget /etc/sysconfig/nzbget.bak
I’m using CentOS 6
Init scripts:
initscripts-9.03.46-1.el6.centos.1.x86_64
This is the documentation I followed to compile nzbget:
“http://nzbget.net/Installation_on_POSIX”
To launch nzbget manually I run this command: nzbget -c /usr/local/etc/nzbget.conf -D
That command works fine for starting it until reboot (not that this server gets rebooted often; it’s more about having the knowledge to automate it so there’s less work on my part)
starting it as a service returns the following:
http://pastebin.com/uTt3sxF9
export returns:
http://pastebin.com/L59F64E0
This runs as the root user by the way.
There is no nzbget in that folder. Surely following that compilation documentation it should have put it in that folder? Or maybe compiling it puts it in a different one?
Kind Regards,
Jerome Haynes
Unfortunately no; if you compile it yourself, you need to keep track of where things get placed according to the options you compiled it with. I really encourage you to use the RPM I built. It will make your life a lot easier. If you don’t trust what I’ve done, at the very least download the src.rpm file and rebuild it yourself to see there is no foul or malicious intent.
I found an article here which identifies your issue as bad characters being present in the file. I’m not sure how this would be, since I don’t have the problem performing the same set of actions, but you could try it out. In short:
# strip DOS carriage returns (^M) from the script
sed -i -e 's/\r$//' /etc/rc.d/init.d/nzbget
# alternatively you could type:
yum install -y dos2unix
# then:
dos2unix /etc/rc.d/init.d/nzbget
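If you want to confirm the file is clean afterwards, a quick (bash-specific) check is to count the lines that still carry a carriage return; it should print 0:
grep -c $'\r' /etc/rc.d/init.d/nzbget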
Your rpm throws up: Error: Package: p7zip-9.20.1-1.el7.rf.x86_64 (rpmforge)
Requires: libc.so.6(GLIBC_2.14)(64bit)
I know that this is quite a central part of the OS; would upgrading it reap that much benefit? Is there a way around this? Would finding p7zip somewhere else work?
Kind Regards,
Jerome Haynes
I grabbed the latest rpm from http://pkgs.repoforge.org/p7zip/ and ran your rpm again, and it appears to have installed. The next issue is this:
http://imageshere.co.uk/l/
I haven’t uninstalled my compiled version; could that be causing the issue? Or is it something else? The compiled version is not running, just to let you know.
Kind Regards,
Jerome Haynes
I’m honestly not sure what’s going on with your setup. There is something wrong in general with it for sure; most likely something very small and simple to fix. But what this problem is has got me baffled. If you just launch the NZBGet executable (nzbget -D) manually, does that work?
But regardless, I think there is something wrong (at an OS level). You may have to look back in your history… Can you recall installing an application or removing one that causes you grief? Previous manual changes you made to your system? Hopefully there is something obvious you can just undo.
This is a brand new dedicated server we’re moving over to from our old ones, running Citrix XenServer. I’ve configured it differently so we have things more spread out across more VMs. The VM in question here is the main/master, so it has: Plex, nzbget, Sonarr, TCAdmin for our game servers, a TeamSpeak server, Apache (just for serving basic webpages) and MySQL. Trying to get all that running; as you can imagine, no matter how many times you’ve done it, small things always somehow get broken. As far as nzbget goes, on every server I’ve ever installed it on I’ve ALWAYS followed that documentation to the letter for compiling it and booted it with the command I stated previously; then every time I rebooted (which was rare) I would run it again. It’s not ideal, but it didn’t affect me much at all. But even on clean setups I always had pretty much these exact same results, so there must be something I’m doing wrong. I have a spare VM I use just for testing and throwing stuff at; I’ll probably give it an attempt on there, including with your rpm. Though CentOS does not seem to ship with the version of glibc mentioned in the error, as this was installed less than 6 days ago.
“Error: Package: p7zip-9.20.1-1.el7.rf.x86_64 (rpmforge)
Requires: libc.so.6(GLIBC_2.14)(64bit)
”
You may want to add it to dependencies or something.
Thank you for your help so far.
Kind Regards,
Jerome Haynes
The problem with your last entry is that you’re downloading the wrong RPM; if it’s installing, you might be cross-mixing RPMs onto your system. You want to download the ones with .el6 in the name. This one is safe to use.
I don’t doubt your instruction-following on NZBGet one bit; my concern is the fact your ‘service’ script isn’t even referencing the right locations. Something has been altered on your system and I’m not sure what that is at this time. My other concern is that you’re setting up a server and compiling everything yourself instead of just using the already prepackaged and tested version that deploys safely through an RPM. The only qualm with hauling all of your development libraries into a server and installing content through ‘./configure; make; make install’ is that it can leave your system in an unsafe/unstable state. It’s bad practice these days, IMO. ‘make install’ can sometimes install content you didn’t even need or overwrite something you did. It can pretty much void any packaging you’ve already installed on your system (through yum and/or rpm). I’m not saying this is your case, but it’s worth considering. I’ve sent you a private email. I think rather than fill this comment forum up with more troubleshooting, it might be easier to just take it offline.
Make sure your /etc/init.d is just a symbolic link to /etc/rc.d/init.d (that’s how it is on my Centos 6 system).
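A quick way to verify (the ln line is only needed if the link is genuinely missing, which would be unusual):
# Should show a symbolic link pointing at rc.d/init.d
ls -ld /etc/init.d
# Only if it is missing entirely (run as root):
# ln -s /etc/rc.d/init.d /etc/init.d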
http://nzbindex.in as an indexer alternative…
You’re awesome! thank you