
A Usenet Solution For CentOS 6

What Is Usenet?

In a nutshell, it's basically a bunch of (file) servers that host a ton of information people place onto them. We're talking about petabytes (a petabyte is 1,000 terabytes) of information. There is very little organization, but it does have a defined structure.

Content is sorted into groups, which act as containers it can be stored in and retrieved from. You can think of a group like you might think of a directory on your computer at home. We create directories all the time in an effort to add order and structure to where we keep things (so we can find them later). Group names follow a dotted hierarchy (for example, comp.lang.python or alt.binaries.pictures). The thing is, Usenet has no moderation, so you can place content in any group you want. As a result, it's a lot like what you might expect someone's hard drive to look like if you gave 5 million people access to it: basically, there is just a ton of crap everywhere.

The World Wide Web is similar to this, but instead of groups, we sort things by URLs (web addresses) such as http://nuxref.com. Google uses its own web crawlers to scan the entire World Wide Web just to create an index from it. For each website they find, they track its name, its content, and the language it's written in. The result of them doing this is that we get to use their fantastic search engine! A search engine that has made our lives incredibly easy by granting us fast and easily accessible information at our fingertips.

The Usenet Indexer

Usenet is a very big world of its own, and it's a lot harder (but not impossible) to get around in without anything indexing it. Thankfully, Usenet is nowhere near the size of the World Wide Web, which makes indexing it possible for a much larger audience! In fact, we can even index it with the personal computer(s) we run at home. By indexing it, we can easily search it for content we're interested in (much like how we use Google for web page searching).

Since just about anyone can index Usenet, one has to think: why index Usenet ourselves if someone's already doing it for us elsewhere? In fact, there are many sites (and tools) that have already done all the indexing of Usenet (some better than others) and are willing to share it with the rest of us. But it's important to know that it can take a lot of server power, disk space, and network consumption for these site administrators to constantly index Usenet for us. Since most (if not all) of these sites are run by hobbyists doing it for fun, it gets expensive for them to maintain things. For that reason, some of them may charge or ask for a donation. If you want to use their services, you should respect their measly request of $8 USD to $20 USD for a lifetime membership. But don't get discouraged; there are still a lot of free ones too!

Just keep in mind that Usenet is constantly getting larger; people are posting new content to it every second. You'll find that the sites that charge a fee are already (relatively) aware of the new changes to Usenet every time you search with them. Others (the free ones) may only update their index a few times a day or so.

Alternatively (the free route), we can go as far as running our own Usenet indexer (such as NewzNab), just as the hobbyists mentioned above did. NewzNab will index Usenet on a regular basis. With your own indexer, you can choose to index just the content that appeals to you. You can even choose to offer your services publicly if you want. Just keep in mind that Usenet is huge! If you do decide to go this route, you'll find it a very CPU- and network-intensive operation. You may want to make sure you don't exceed your Internet Service Provider's (ISP) download limits.

Now back to the Google analogy I started earlier: when you find a link on Google you like, you simply click on it and your browser redirects you to the website you chose; end of story. However, in the Usenet indexing world, once you find something of interest, the Usenet Indexer will provide you with an NZB file. An NZB file is effectively a map that identifies where your content can specifically be located on Usenet (but not the data itself). An NZB file is to Usenet what a Torrent file is to a BitTorrent client. Both NZB and Torrent files provide the blueprints needed to mine (acquire) your data, and both require a Downloader to perform the actual data mining for you.
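If you've never seen one before, here is a rough sketch of what the inside of an NZB file looks like; it's just XML, and every value below is made up for illustration (real NZB files typically reference hundreds of segments):

<?xml version="1.0" encoding="iso-8859-1"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="uploader@example.com" subject="Some.Content (1/10)">
    <groups>
      <!-- The group(s) the content was posted to -->
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <!-- Each segment identifies one article stored on Usenet -->
      <segment bytes="512000" number="1">part1.abc123@news.example.com</segment>
      <segment bytes="512000" number="2">part2.abc124@news.example.com</segment>
    </segments>
  </file>
</nzb>

Each segment names a Usenet article; the Downloader (covered next) simply fetches every article listed and stitches the pieces back together.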

The Downloader

The Downloader takes the NZB file it's provided and uses it to acquire the actual data the file maps to. This is the final piece of the puzzle!
Of the list below, you really only need to choose one Downloader; I just listed more than one to give you alternatives to work with. My personal preference is NZBGet because it is more flexible. But its flexibility can also be very confusing (only at first). Once you get over its learning curve, and especially the initial configuration, it's a dream to work with. Alternatively, SABnzbd may be better for the novice if you're just starting off with Usenet and don't want much more of a learning curve than you already have.

Either way, pick your poison:

Title Package Details
NZBGet rpm/src NZBGet is written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources.
Community / Manual
**Note: I created this patch in a recent update rebuild (Jul 17th, 2014) to fix a few directory paths so the compression tools (unrar and 7zip) work right away. I also added these compression tools as dependencies of the package so they'll be present for you from the start.
**Note: I also created this patch in a recent update rebuild (Nov 9th, 2014) to allow the RC script to take optional configuration defined in /etc/sysconfig/nzbget.

You can install NZBGet using the steps below:

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/nuxref-repository/

# Install NZBGet
yum install -y nzbget \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Grab Template
cp /usr/share/nzbget/nzbget.conf ~/.nzbget

# Protect it
chmod 600 ~/.nzbget

# Start it Up (as a non-root user):
nzbget -D

# You should now be able to access it via: 
#     http://localhost:6789/
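Before the daemon can download anything, you'll also want to point it at your Usenet provider. A minimal sketch (news.example.com and the credentials are placeholders for whatever your provider gave you; the Server1.* keys already exist in the template you just copied):

# Point NZBGet at your Usenet provider (placeholder values):
sed -i -e 's/^Server1.Host=.*/Server1.Host=news.example.com/' \
       -e 's/^Server1.Username=.*/Server1.Username=myuser/' \
       -e 's/^Server1.Password=.*/Server1.Password=mypass/' \
    ~/.nzbget

The same settings can also be changed from the web interface once it's up and running.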
SABnzbd n/a SABnzbd is an Open Source Binary Newsreader written in Python.
Community / Manual
Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# There is no RPM installer for this one, we just
# fetch straight from their repository.
# Install git (if it's not already)
yum install -y git

# Grab a snapshot of SABnzbd
git clone https://github.com/sabnzbd/sabnzbd.git SABnzbd

# Start it Up (as a non-root user):
python SABnzbd/SABnzbd.py \
    --daemon \
    --pid $(pwd)/SABnzbd/sabnzbd.pid

# You should now be able to access it via: 
#     http://localhost:8080/

Automated Index Searchers

These tools search for already-indexed content you're interested in and can be configured to automatically download it for you when it's found. The tool itself doesn't do the downloading; it automates the connection between your chosen Indexer and Downloader (such as NZBGet or SABnzbd). For this reason, these tools do not actually search Usenet at all and therefore put very little overhead on your system (or NAS drive).

Title Package Details
Sonarr
rpm/src Automatic TV Show downloader
Formerly known as NZBDrone, it has since been renamed Sonarr. This was only made possible because of the blog I wrote on mono v3.x.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/nuxref-repository/

# Installation of this plugin:
yum install -y sonarr \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Start it Up (as a non-root user):
nohup mono /opt/NzbDrone/NzbDrone.exe &

# You should now be able to access it via: 
#     http://localhost:8989/
Sick Beard
n/a (Another) Automatic TV Show downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Sick Beard
# Note that we grab the master branch, otherwise we default
# to the development one.
git clone -b master https://github.com/midgetspy/Sick-Beard.git SickBeard

# Start it Up (as a non-root user):
python SickBeard/SickBeard.py \
   --daemon \
   --pidfile $(pwd)/SickBeard/sickbeard.pid

# You should now be able to access it via: 
#     http://localhost:8081/
CouchPotato
n/a Automatic movie downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of CouchPotato
git clone https://github.com/RuudBurger/CouchPotatoServer.git CouchPotato

# Start it Up (as a non-root user):
python CouchPotato/CouchPotato.py \
   --daemon \
   --pid_file CouchPotato/couchpotato.pid

# You should now be able to access it via: 
#     http://localhost:5050/
Headphones
n/a Automatic music downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Headphones
git clone https://github.com/rembo10/headphones Headphones

# Start it Up (as a non-root user):
python Headphones/Headphones.py \
   --daemon \
   --pidfile $(pwd)/Headphones/headphones.pid

# You should now be able to access it via: 
#     http://localhost:8181/
Mylar
n/a Automatic Comic Book downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Mylar
git clone https://github.com/evilhero/mylar Mylar

# Start it Up (as a non-root user):
python Mylar/Mylar.py \
   --daemon \
   --pidfile $(pwd)/Mylar/Mylar.pid

# You should now be able to access it via: 
#     http://localhost:8090/

NZBGet Processing Scripts

For those who prefer SABnzbd, you can ignore this part of the blog. For those using NZBGet, one of its strongest features is its ability to process content it downloads before and after it's received. Post Processing (PP) has specifically been one of NZBGet's greatest features. It allows separation between the core function of NZBGet (downloading the content an NZB file describes) and whatever you want to do with that content afterwards. Post processing could catalogue what was received and place it into an SQL database. It could rename the content and sort it into separate directories depending on what it is. It can be as simple as emailing you when a download completes, or posting on Facebook or Twitter. You're not limited to just one PP script either; you can chain them and run a whole slew of them one after another. The options are endless.
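To give you an idea of just how simple one of these can be, here is a minimal sketch of a PP script (the header block is what NZBGet scans for when it registers scripts, and the 93/94 exit codes are how a script reports success or failure back to NZBGet; NZBPP_NZBNAME and NZBPP_DIRECTORY are two of the environment variables NZBGet hands to every PP script it calls):

#!/bin/bash
##############################################
### NZBGET POST-PROCESSING SCRIPT          ###

# Append the name and final location of every
# completed download to a simple log file.

### NZBGET POST-PROCESSING SCRIPT          ###
##############################################

# NZBGet shares the details of each finished download
# with us through NZBPP_* environment variables:
echo "$(date): ${NZBPP_NZBNAME} saved to ${NZBPP_DIRECTORY}" >> ~/pp-completed.log

# Exit code 93 tells NZBGet we succeeded; 94 would report a failure
exit 93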

I've taken some of the popular PP scripts from the NZBGet forum and packaged them in self-installing RPMs as well, to make life easy for those who want them. Some of these packages require many dependencies and ports to make the installation smooth. Although I link directly to the RPMs here, you are strongly advised to link to my repository with yum if you haven't already done so.

Title Package Provides Details
Failure Link rpm/src FAILURELINK If a download fails, the script sends info about the failure to the indexer site, so a replacement NZB (same movie or TV episode) can be queued up if available. The indexer site must support the DNZB header “X-DNZB-FailureLink”.

Note: The integration works only for downloads queued via URL (including RSS). NZB-files queued from local disk don’t have enough information to contact the indexer site.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-failurelink \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
nzbToMedia rpm/src DELETESAMPLES
RESETDATETIME
NZBTOCOUCHPOTATO
NZBTOGAMEZ
NZBTOHEADPHONES
NZBTOMEDIA
NZBTOMYLAR
NZBTONZBDRONE
NZBTOSICKBEARD
Provides an efficient way to handle post processing for
CouchPotatoServer, SickBeard, Sonarr, Headphones, and Mylar
when using NZBGet on low performance systems like a NAS.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-nzbtomedia \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn't make sense to maintain a duplicate of it all.

Subliminal rpm/src SUBLIMINAL Provides a wrapper that integrates subliminal (which fetches subtitles given a filename or filepath) with NZBGet. Subliminal determines the correct video hashes using the powerful guessit library to ensure you have the best-matching subtitles. It also relies on enzyme to detect embedded subtitles and avoid retrieving duplicates.

Multiple subtitle services are available: opensubtitles, tvsubtitles, podnapisi, addic7ed, and thesubdb.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-subliminal \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

*Note: python-subliminal (what this PP script is a wrapper to) had some issues I had to address. For one, I eliminated the entire PYPKG/subliminal/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn't make sense to maintain a duplicate of it all.
**Note: Subliminal was written using dict comprehensions (PEP 274), a feature that wasn't introduced until Python 2.7 (for example, {k: v for k, v in pairs} has to be rewritten as dict((k, v) for k, v in pairs) to run on older interpreters). Unfortunately, the developers had no interest in resolving this and closed the issue with 'Upgrade to Python 2.7 or Python v3.3'. For this reason, subliminal does 'not' work at all with stock CentOS or Red Hat 6.x. So… I fixed that. Now I can proudly tell you that the copy of subliminal I host on my repository is 100% compatible with Python 2.6 (this includes some backported logging functionality too).

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

DirWatch rpm/src DIRWATCH DirWatch can watch multiple directories for NZB files and move them for processing by NZBGet. This tool is awesome if you have a DropBox account or a network share you want NZBGet to scan! Without this script, NZBGet can only be configured to scan one (and only one) directory for NZB files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-dirwatch \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

TidyIt rpm/src TIDYIT TidyIt integrates itself with NZBGet's scheduling and is used to perform basic housecleaning on a media library. TidyIt removes orphaned meta information, empty directories, and unused content. It's the perfect OCD tool for those who want to eliminate any unnecessary bloat on their filesystem and media library.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-tidyit \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

Notify rpm/src NOTIFY Notify provides a wrapper that can be integrated with NZBGet, allowing you to send notifications by just about any supported method today, such as email, KODI (XBMC), Prowl, Growl, PushBullet, NotifyMyAndroid, Toasty, Pushalot, Boxcar, Faast, Telegram, Join, and Slack. It also supports pushing information in HTTP POST requests via JSON or XML (SOAP structure).

The script can also be used as a standalone tool and called from the
command line allowing it to support a lot more tools besides NZBGet.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-notify \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

I am the current maintainer of this plugin and it can be accessed from my GitHub page here.

Password Detector rpm/src PASSWORDDETECTOR Password Detector is a queue script that checks for passwords inside of every .rar file of an NZB download. This means that it can detect password-protected NZBs very early, before downloading is complete, allowing the NZB to be automatically deleted or paused. Detecting early saves data, time, resources, etc.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-passworddetector \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
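If you're curious what a check like this involves, you can do something similar by hand with unrar (a rough sketch; -p- tells unrar not to prompt for a password, so listing an archive whose headers are encrypted will simply fail):

# Try listing the archive while explicitly supplying no password
unrar l -p- /path/to/download.rar \
    || echo "This archive appears to be password protected"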
Fake Detector rpm/src FAKEDETECTOR This is a queue script which is executed during download, after each file contained in the NZB (typically a rar file) completes. The script lists the contents of the downloaded rar files and tries to detect fake NZBs, saving your bandwidth by flagging a download early when its contents fail a series of validity checks.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-fakedetector \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared
Video Sort rpm/src VIDEOSORT With the post-processing script VideoSort, you can automatically organize downloaded video files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-script-videosort \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn't make sense to maintain a duplicate of it all.

Mobile Integration

There are some fantastic apps out there that allow you to integrate your phone with the applications mentioned above, letting you manage your downloads from wherever you are. A special shout-out to NZB 360, who recently had his app pulled from the Google Play Store for no apparent reason and had to set up shop outside. I can say first-hand that his application is amazing! You should totally consider it if you have an Android phone.

Usenet Providers

For those who don't have Usenet already, it does come at an extra cost and/or fee. The cost averages anywhere between $6 and $20 USD/month (anything more and you're paying too much). The reason for this is that Usenet is a completely isolated network from the Internet; it's comprised of a completely isolated set of interconnected servers. While the Internet is comprised of hundreds of millions of servers, each hosting specific content, each Usenet server hosts the entire Usenet database… it hosts everything. If anything is uploaded to Usenet, all of the interconnected servers update themselves with their own local copy of it (to serve us). For this to happen, their servers have to have petabytes of storage. The fee they charge just goes to support their operational costs, such as bandwidth, maintenance, and the regular addition of storage to their infrastructure. There is very little profit to be made for them at $8 a person. Here is a breakdown of a few providers (in alphabetical order) I'm aware of and support:

Provider Server Location(s) Notes Average Cost
Astraweb US & Europe Retention: 2158 Days (5.9 Years) $6.66 USD/Month to $15 USD/Month (see here for details)
Usenet Server US Retention: 2159 Days (5.9 Years); has a free 14-day trial $13.33 USD/Month to $14.95 USD/Month (see here for details)

*Note: Table information was last updated on Jul 14th, 2014. Prices are subject to change as time goes on and this blog post isn't updated.
**Note: If you have a provider that you would like to see added to this list, or if you simply spot an error in pricing or linking, please feel free to contact me so I can update it right away.

Why do people use Usenet/Newsgroups?

  • Speed: It's literally just you and another server; a simple 1-to-1 connection. Data transfer speeds will always be as fast as your ISP can carry your traffic to and from the Usenet server you signed up with. Unlike torrents, content isn't governed by how many seeders and leechers have it available to you. You never have to deal with upload/download ratios, maintain quotas, or sit idle in someone's queue who will serve data to you eventually.
  • Security: You only deal with secure connections between you and your Usenet provider; no one else! Torrents can have you maintaining thousands of connections to different systems and sharing data with them. With BitTorrent setups, trackers publicly advertise what you have to share and what you're trying to download. Your privacy is public to anyone using the same tracker you're connected to. Not only that, but most torrent connections are insecure as well, which allows virtually anyone to view what you're doing.

Please know that I am not against torrents at all! In fact, I'll now take the time to mention a few points where torrents excel over Usenet:

  • Cost: It doesn't usually cost you anything to use the torrent network. It all depends on the tracker you're using, of course (some private trackers charge for their usage). But if you're just out to get the free public stuff made available to us, there are absolutely no costs at all to use this method!
  • Availability: Usenet is far from perfect. When someone uploads something to their Usenet provider, by the time this new content propagates to all of the other Usenet servers, there is a small chance the data will be corrupted. This happens with Usenet all of the time. To compensate, Usenet users anticipate corruption (sad but true). These people kindly post Parchive files to Usenet to complement whatever they previously uploaded. Parchive files work similarly to how RAID works: they provide building blocks to reassemble data in the event it's corrupted (see the par2 sketch after this list). Corruption never happens with torrents unless the person hosting decides to host corrupted data. Any other scenario would simply be because your BitTorrent client had a bug in it.
  • Retention: As long as someone is willing to seed something, or enough combined leechers can reconstruct what is being shared, data will stay alive in the BitTorrent world. However, with Usenet, the Usenet server is hosting EVERYTHING, which means it has to maintain a lot of data on a lot of disk space! For this reason, a retention period is inevitably met. A time is eventually reached where the Usenet server purges (erases) older content from its hard disks to make room for the new stuff showing up every day.
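If you've never worked with Parchive files before, the par2 command line tool (packaged as par2cmdline on most distributions) demonstrates the idea nicely. A rough sketch, assuming the PAR2 files sit in the same directory as the download they protect:

# Verify a download against its PAR2 recovery set
par2 verify download.par2

# If verification reports damage, rebuild the broken
# pieces from the recovery blocks
par2 repair download.par2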

Honestly, at the end of the day, both torrents and Usenet servers have their pros and cons. We will each continue to weigh them differently; what's considered the right choice for one person might not be the right one for another. Heck, just use both depending on your situation! 🙂

Source

Mono 3.x Packages for CentOS 6

Introduction

Mono allows us to run Windows applications in our Linux environment. It is an open source implementation of Microsoft's .NET Framework. The problem is, CentOS (and Red Hat) 6.x ship with Mono v2.4, which is a little outdated. You can't take advantage of the newer apps .NET developers are writing. In fact, you can't run anything that requires a version of the .NET runtime libraries newer than v3.5.

In addition to that, Mono v3.4 grants your CentOS system more support for .NET applications that weren't otherwise available to you in v2.4.

Compatibility Mono v2.4 Mono v3.4
.NET 1.0 Yes No (dropped support)
.NET 2.0 Yes Yes
C# 3.0 Yes Yes
ASP.NET 2.0 Yes Yes
.NET 3.5 Partial Yes
.NET 4.0 No Yes
.NET 4.5 No Yes
C# 4.0 No Yes
ASP.NET 4.0 No Yes
C# 5.0 No Yes

What’s so special about the repackaging you did?

Well, first of all… it's actually an RPM package. It doesn't require you to haul in a ton of development libraries and compile everything from scratch. Another point is that Mono v2.4 (shipped with CentOS and Red Hat 6) had many patches applied to it. These patches forced Mono to conform to the common directory structure used natively by our operating system. It took me several hours to recreate all of these patches, forcing Mono v3.4 to comply with the same standards.

Finally (at the time of writing this blog), this is the first package of Mono v3.4 I've found that can be installed via an RPM and doesn't require you to recompile everything yourself. Hence you don't even need to haul in any development libraries at all; Mono will just work as is. Since my repackaging was based off of the original, I tried to keep all of the external rpm packages the same. That said, I did get a little confused with all of the new packages and binary tools that ship with Mono v3.x. Since I'm not a Microsoft developer, I tried to sort these new packages as best as I could. Please feel free to let me know how I can improve this package if you notice anything I've done wrong.

Just hand over all your work already!

Absolutely, here they are:

Binary Packages:

Note: Mono (v3) was a bit picky about the SQLite version it referenced. I had to update that package to a slightly newer version as well for everything to play nicely. Only if you intend to haul in the mono-data-sqlite-*.rpm package will you be required to haul in this newer version. I've already provided it on my repository, but for the sceptics who want to build it themselves, I'll include those instructions too.

Source Packages:

Debug Info Packages

Alternatively, you can get it from my repository too (this is the best and easiest way). The below instructions assume you’ve set yourself up.

# Make sure you're hooked up with my repository for this to
# work: http://nuxref.com/nuxref-repository/
################################################################
# Install Mono
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-core
################################################################
# Install additional packages too if you wish (depending on
# your needs)
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-web
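
If you'd like to prove to yourself that the runtime actually works, a quick hello world makes a nice sanity check (a sketch; it assumes you also installed the mono-devel package, which provides the mcs compiler):

# Write a trivial C# program...
cat << _EOF > /tmp/hello.cs
class Hello {
    static void Main() {
        System.Console.WriteLine("Mono works!");
    }
}
_EOF

# ... then compile and run it
mcs /tmp/hello.cs -out:/tmp/hello.exe
mono /tmp/hello.exe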

I’ll Never Trust Your Stuff; Let Me Do It Myself

Sure! First you're going to need to fetch all the patches I had to create (plus the old ones carried forward from Mono v2.4):

You can additionally view the RPM SPEC file I created here.

First prepare our development environment with mock if you haven’t already:

# Install 'mock' into your environment if you don't have it already
# This step will require you to be the superuser (root) in your native
# environment.
yum install -y mock

# Grant your normal every day user account access to the mock group
# This step will also require you to be the root user.
usermod -a -G mock YourNonRootUsername
# Download the official mono packages from their official
# hosting site:
wget http://origin-download.mono-project.com/sources/mono/mono-3.4.0.tar.bz2

# Download all of the building blocks you'll need
wget --output-document=mono.spec https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAiqD2KjHhakweKY_mkLLPba/20140713/mono/mono.spec?dl=1
wget --output-document=monodir.c https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABZHv5NeWFICyAAAw--eiJoa/20140713/mono/monodir.c?dl=1
wget --output-document=mono.snk https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADoY6UvThpcQbUHhs7XUecsa/20140713/mono/mono.snk?dl=1
wget --output-document=lc https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACkja0kNxmO1ytHOIw523HTa/20140713/mono/lc?dl=1
wget --output-document=mono-3.4-ppc-threading.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAKrjdqRR826osJjbqZIu5la/20140713/mono/mono-3.4-ppc-threading.patch?dl=1
wget --output-document=mono-1.2.3-use-monodir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABHvxieGDqU8eDB24ghS_Dua/20140713/mono/mono-1.2.3-use-monodir.patch?dl=1
wget --output-document=mono-2.2-uselibdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABSjbMfIRj5JB7HKWydQVpja/20140713/mono/mono-2.2-uselibdir.patch?dl=1
wget --output-document=mono-2.0-monoservice.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADsL0DBfI0VixRAw6uI0Vkpa/20140713/mono/mono-2.0-monoservice.patch?dl=1
wget --output-document=mono-3.4-libgdiplusconfig.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAHvISVzxIPq9xCmw2m2tcPa/20140713/mono/mono-3.4-libgdiplusconfig.patch?dl=1
wget --output-document=mono-3.4-libdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACHrlv_iSp36jhSeOn4ki0fa/20140713/mono/mono-3.4-libdir.patch?dl=1
wget --output-document=mono-3.4-POSIX_ARG_MAX.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADSN5WhjyqTQptMoWthDnYHa/20140713/mono/mono-3.4-POSIX_ARG_MAX.patch?dl=1
wget --output-document=mono-3.4.xamarin.BZ18690.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAwWsEKkOlnzYZCshp29wuwa/20140713/mono/mono-3.4.xamarin.BZ18690.patch?dl=1

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install libpng-devel libjpeg-devel \
   giflib-devel libtiff-devel libexif-devel libX11-devel fontconfig-devel \
   gettext make gcc-c++ bison glib2-devel pkgconfig \
   libicu-devel libgdiplus-devel zlib-devel automake libtool \
   gettext-devel mono-core mediainfo \
   mysql-devel postgresql-devel sqlite-devel

# Copy our packages into our environment
mock -v -r epel-6-x86_64 --copyin mono.spec /builddir/build/SPECS
mock -v -r epel-6-x86_64 --copyin \
    *.patch \
    mono-3.4.0.tar.bz2 \
    mono.snk \
    lc \
    monodir.c \
    /builddir/build/SOURCES

# Shell into our environment
mock -v -r epel-6-x86_64 --shell

# Change to our build directory
cd builddir/build

# Enable Bootstrapping for the first time
# mono actually requires 'mono' (itself) to build. Weird Right?
# But still necessary! For this reason I prepared an easier
# way of enabling bootstrapping for your first build.
#
# Once you install the binaries created from your first build
# we can rebuild the package again (but this time without
# bootstrapping). The purpose of this is to ensure the mono
# binaries and packages we created are equivalent to the
# the bootstrapped content.
# So... on with the bootstrapping; Note: this will take
# 20 to 30 minutes depending on how fast your system is.
rpmbuild -ba --define "_with_bootstrap=1" SPECS/mono.spec

# Now that we've created mono from a bootstrap, we can
# install the package back into our virtual environment
# and rebuild it again. But this time we rebuild it
# without the bootstrap reference.
rpm -Uhi RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm
# Now rebuild the whole thing all over again to confirm
# your build was good; Note: This will take another 20 to 30 
# minutes again...
rpmbuild -ba SPECS/mono.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in. We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/mono-3.4.0-1.el6.nuxref.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-oracle-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-postgresql-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-sqlite-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-locale-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-reactive-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-wcf-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-winforms-3.4.0-1.el6.nuxref.x86_64.rpm .
# The debuginfo package will only exist if you successfully rebuilt
# everything without the bootstrap set
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-debuginfo-3.4.0-1.el6.nuxref.x86_64.rpm .

Upgrading SQLite

For me, I just visited pkgs.org and downloaded the Fedora 20 source (-src.rpm) release of SQLite. Then I extracted its contents as follows:

# I can't promise this link will work, as this package is always
# evolving, but if you do the search above, you'll get the idea
wget http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Everything/source/SRPMS/s/sqlite-3.8.1-2.fc20.src.rpm

# Alternatively, you can download the source rpm package I'm
# already hosting:
wget http://repo.nuxref.com/centos/6/en/source/custom/sqlite-3.8.1-2.el6.src.rpm

# Then extracted it using this neat technique:
rpm2cpio sqlite-*.src.rpm | cpio -idmv

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install ncurses-devel \
    readline-devel glibc-devel autoconf /usr/bin/tclsh \
    tcl-devel

# You'll already have the block you need as nothing is
# changed with this package. We're just using it as is
mock -v -r epel-6-x86_64 --copyin sqlite-*.zip /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin *.patch /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin sqlite.spec /builddir/build/SPECS

# Shell into our environment
mock -v -r epel-6-x86_64 --shell
 
# Change to our build directory
cd builddir/build

# Build our packages (process doesn't take long ~2 min)
rpmbuild -ba SPECS/sqlite.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/sqlite-3.8.1-2.el6.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-devel-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-doc-3.8.1-2.el6.noarch.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/lemon-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-tcl-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-debuginfo-3.8.1-2.el6.x86_64.rpm .

Credit

This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It's really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository

This blog makes use of my own repository that I loosely maintain. If you'd like me to continue to monitor and apply updates, as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!

Sources

The majority of my efforts came from the following sites:

Configuring a DNS Server on CentOS 6

Introduction

We have been relying on the Domain Name System (DNS) since the dawn of the internet. Simply put, it allows us to access information by a human-readable string or recognizable name, such as google.com or nuxref.com, instead of its actual IP address (which is not as easily memorized). If we didn't have the DNS, the internet would not have evolved as far as it has today. The DNS is built on a series of name servers that are all looking after their respective domain (or zone). Our Internet Service Provider (ISP) is lending us its DNS servers every day when we connect to it. It's our wireless router (at home or at work) that passes this server on to our tablet, phone, laptop, etc. when we connect to it.

Here is a simple DNS query taking place, illustrating how most of us are set up today.
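If you'd like to watch a lookup happen for yourself, the dig utility (part of the bind-utils package we install later in this tutorial) makes the round trip visible. A small example (the addresses returned will vary):

# Ask whatever DNS server you're currently configured with
# to resolve a hostname to its address(es)
dig nuxref.com A +short

# Or direct the same question at a specific name server
dig @8.8.8.8 nuxref.com A +short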
Managing our own authoritative DNS server allows us to catalog the personal devices we use daily with great ease. If you're publicly hosting content, an authoritative DNS server can even be used to distribute the traffic your servers receive, both geographically and in a distributed (load-balancing) approach. It gives us the ability to dynamically associate names with all of the devices on our network. It's great for the hobbyist and absolutely mandatory for any medium or larger-sized company.

PowerDNS is my preferred DNS server solution. I personally prefer it to its long-term predecessor, Berkeley Internet Name Domain (BIND). BIND has been around since 1984 and has gone through years of hacky patches to get to where it is today. PowerDNS is much younger (its first release was in 1999), but was written without all of the growing pains BIND suffered through from the start. In all fairness, BIND's developers were forced to deal with RFCs (Requests for Comments) as DNS continued to evolve into what it is today, whereas PowerDNS already had a stable set of requirements to work with from day one. Not to mention, PowerDNS can easily be configured to use alternative backend databases.

You are reading this blog because you want the following:

  • A fast and reliable Authoritative DNS server with a PostgreSQL database backend.
  • You want a central configuration point; you want everything to be easy to maintain after it’s all set up.
  • You want everything to just work the first time and you want to leave the figuring it out part to the end.
  • Package management and version control is incredibly important to you.
  • You want the ability to catalog your local network by assigning devices on it their own unique (easy to remember) hostnames.
  • You want to maintain the ability to surf the internet by forwarding on requests your DNS server doesn’t know to another that does.
The beauty of running your own authoritative DNS is that it grants you the ability to catalog and easily access everything on your local network by the hostname you assign.

Here is what my tutorial will be focused on:

  • PowerDNS (v3.x) configured to use a database backend (PostgreSQL), giving you central configuration. This tutorial focuses on PostgreSQL version 8.4 because that is what ships with CentOS and Red Hat, but most (if not all) of this tutorial should still work fine if you choose to use version 9.x of the database instead.
  • PowerDNS Recursor (v3.x) will be configured to handle any records we don't otherwise host or override
  • Security Considered
  • Poweradmin (v2.x) will provide administration of the DNS records we add via its simple web interface.

Please note the application versions identified above, as this tutorial focuses specifically on them. One big issue I found while researching how to set things up on the net was that some tutorials didn't really mention the versions they were using. Hence, when I would stumble across these old article(s) with new(er) software, it would make for quite a painful experience when things didn't work.

Please also note that other tutorials will imply that you set up one feature at a time, then test it to see if it worked correctly before moving on to the next step. That is no doubt the proper way to do things. However, I'm just going to give it all to you at once. If you stick with the versions and packages I provide, and you follow my instructions, it will just work for you the first time. Debugging on your end will be a matter of tracing back to see what step you missed.

I tried to make this tutorial as cookie-cutter(ish) as I could. Therefore, you can literally just copy and paste what I share right to your shell prompt and the entire setup will be automated for you.

Installation

The following four (4) steps will get you set up with your very own DNS server.

Step 1 of 4: Setup Your Environment

This is the key to my entire blog; it's going to make all of the remaining steps just work the first time for you. All (I repeat, all) of the steps below assume that you've set this environment up. You will need to re-set up your environment at least once before running through any of the remaining steps below or they will not work.

It’s also important to mention that you will need to be root to configure the DNS server. This applies to all of the steps identified below throughout this blog.

I re-hosted all of the packages I used to successfully pull this blog off. This allows me to host this information and pair it with the software it works against. Feel free to hook up to my repositories to speed up your setup.

Install all of the necessary packages:

# Connect to my repository, for which I've had to rebuild a few
# packages to support PostgreSQL as well as fix some bugs in
# others. This step will really make your life easy and lets
# us compare apples to apples with package versions. It also
# allows you to haul in a working setup right out of the box.
#
# Be sure you're connected to my repository for the below to work;
# visit: http://nuxref.com/nuxref-repository/

################################################################
# Install our required products
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           postgresql-server postgresql \
           php-pgsql php-imap php-mcrypt php-mbstring \
           pdns pdns-backend-postgresql pdns-recursor \
           poweradmin \
           nuxref-templates-pdns

# Also make sure these products are installed as well since we
# use them to manipulate and test some of the data
yum install -y awk sed bind-utils curl

# Choose between NginX or Apache
## NginX Option (a) - This one is my preferred choice:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           nginx php-fpm

## Apache Option (b):
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
            httpd php

# Setup Default Timezone for PHP. For a list of supported
# timezones you can visit here: http://ca1.php.net/timezones
TIMEZONE="America/Montreal"
sed -i -e "s|^[ \t]*;*\(date.timezone[ \t]*=\).*|\1 $TIMEZONE|g" \
    /etc/php.ini

# Ensure we're not using Strict PHP Handling
sed -i -e 's/^[ \t]*\(error_reporting\)[ \t]*=.*$/\1 = E_ALL \& ~E_STRICT/g' \
    /etc/php.ini

################################################################
# Setup PostgreSQL (v8.4)
################################################################
# The commands below should all work fine on a PostgreSQL v9.x
# database too; but your mileage may vary as I've not personally
# tested it yet. You can skip this section if you've already
# got a database running using one of my earlier tutorials.

# Only init the database if you haven't already. This command
# could otherwise reset things and you'll lose everything.
# If your database is already set up and running, then you can
# skip this line
service postgresql initdb

# Now that the database is initialized, configure it to trust
# connections from 'this' server (localhost)
sed -i -e 's/^[ \t]*\(local\|host\)\([ \t]\+.*\)/#\1\2/g' \
    /var/lib/pgsql/data/pg_hba.conf
cat << _EOF >> /var/lib/pgsql/data/pg_hba.conf
# Configure all local database access with trust permissions
local   all         all                               trust
host    all         all         127.0.0.1/32          trust
host    all         all         ::1/128               trust
_EOF

# Make sure PostgreSQL is configured to start up each time
# you start up your system
chkconfig --levels 345 postgresql on

# Start the database now too because we're going to need it
# very shortly in this tutorial
service postgresql start

To simplify your life, I've made the configuration of all the steps below reference a few global variables. The ones identified below are the only ones you'll probably want to change. May I suggest you paste the below information into your favourite text editor (vi, emacs, etc.) and adjust the variables to how you want them, making it easier to paste them back to your terminal screen.

# The following is only used for our SSL Key Generation.
# You can skip SSL Key generation if you've done so using an
# earlier tutorial
COUNTRY_CODE="7K"
PROV_STATE="Westerlands"
CITY="Lannisport"
SITE_NAME="NuxRef"

Now for the rest of the global configuration; there really should be no reason to change any of these values (but feel free to). It's important that you paste the above information (tailored to your likings) as well as the information below to the command line interface (CLI) of the server you wish to set up.

# PostgreSQL Database
PGHOST=localhost
PGPORT=5432
PGNAME=system_dns
PGRWUSER=pdns
PGRWPASS=pdns

# Identify the domain name of your server here
# I use the .local extension because I only intend to resolve
# internal addresses with my DNS server.  You may wish to use
# a different value.
DOMAIN=nuxref.local

# Configure a recursor; the recursor will cache your database hits
# and will greatly increase performance. Ideally you want to set
# the recursor address to the address you want to host your server
# on. This is the same IP address you will add to everyone's
# /etc/resolv.conf later. This is in fact your name server.
# If you leave this value at 127.0.0.1 your DNS will be restricted
# to just the server you're hosting on.
# If you aren't sure what your IP Address is, you can just type 'ifconfig'
#
# This command may also fetch your ip address:
# cat /etc/sysconfig/network-scripts/ifcfg-* | \
#    egrep '^IPADDR=' | egrep -v '127.0.0.1' | \
#    cut -f2 -d'=' | head -n1
NAMESERVER_ADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-* | \
    egrep '^IPADDR=' | egrep -v '127.0.0.1' | \
    cut -f2 -d'=' | head -n1)
# Alternatively, if you're not reading this and 'only' if the
# above failed, we'll just set the address to your local address.
NAMESERVER_ADDR=${NAMESERVER_ADDR:=127.0.0.1}

# The network that our DNS server resides on is important information
# for security purposes. We want to only allow recursion on this
# network alone and not have others hitting our server. The below
# looks kind of cryptic, but it's just a method of extracting
# the network information automatically if you don't already
# know it. It may or may not work; it will depend on whether you set
# a proper NAMESERVER_ADDR
SUBNET_ADDR=$(/sbin/ifconfig | egrep -m1 $NAMESERVER_ADDR | \
    sed -e 's/.*Mask:\([0-9.]\+\).*/\1/g')
NAMESERVER_PRFX=$(ipcalc -s -p $NAMESERVER_ADDR $SUBNET_ADDR | \
                   cut -f2 -d'=')
# Assign a default in case the above command failed.
NAMESERVER_PRFX=${NAMESERVER_PRFX:=24}

# Calculate our network
NAMESERVER_NWRK=$(ipcalc -s -n $NAMESERVER_ADDR $SUBNET_ADDR | \
                   cut -f2 -d'=')
# Assign a default in case the above command failed.
NAMESERVER_NWRK=${NAMESERVER_NWRK:=$(echo $NAMESERVER_ADDR | \
                   cut -f1,2,3 -d'.').0}

# Reverse Address Resolution Preparation
# This converts an IP Address of 1.2.3.4 to 3.2.1.in-addr.arpa
# We can use this later to create a reverse translation which
# PowerDNS can administrate for us also. The templates I created
# will set some early examples up for you.
NAMESERVER_ARPA=$(echo "$NAMESERVER_ADDR" | \
    awk -F"." '{print $3 "." $2 "." $1 ".in-addr.arpa"}')

# We now need the 4th octet of our Name Server Address to complete
# our ARPA address for the reverse lookup. For example, if your server
# ip is 2.4.8.16, we want the '16' defined here.  The below is just
# a cheat to go ahead and extract it from the address you specified
NAMESERVER_OCT4=$(echo "$NAMESERVER_ADDR" | \
    cut -f4 -d'.')

# This is where our templates get installed to make your life
# incredibly easy and the setup to be painless. These files are
# installed from the nuxref-templates-pdns RPM package you
# installed above. If you do not have this RPM package then you
# must install it or this blog simply won't work for you.
# > yum install --enablerepo=nuxref nuxref-templates-pdns
NUXREF_TEMPLATES=/usr/share/nuxref

I realize the above environment can seem a bit cryptic. I tried to simplify this DNS setup so that even a novice’s life would be easy. The environment variables attempt to detect everyones settings automatically. In some cases, I may have just made it worse for some (hopefully not). It would be a good idea to just echo the defined variables to your screen and confirm they are as you expect them to be. They really are the key to making all of the next steps work in this blog.

# Simple Check
# Note: grab the brackets too when you copy and paste the below
(
   for VAR in COUNTRY_CODE PROV_STATE CITY SITE_NAME \
              PGHOST PGPORT PGNAME PGRWUSER PGRWPASS \
              DOMAIN NAMESERVER_ADDR NAMESERVER_PRFX \
              NAMESERVER_NWRK NAMESERVER_ARPA \
              NAMESERVER_OCT4 NUXREF_TEMPLATES; do
      [ -z "$(eval "echo \$$VAR")" ] && echo "You must set the variable: $VAR"
   done
)
# Pretty Printing
# Note: grab the brackets too when you copy and paste the below
(
   echo "PostgreSQL:"
   echo -e "\tPGHOST=$PGHOST\n\tPGPORT=$PGPORT\n\tPGNAME=$PGNAME\n\tPGRWUSER=$PGRWUSER\n\tPGRWPASS=$PGRWPASS\n"
   echo "SSL:"
   echo -e "\tCOUNTRY_CODE='$COUNTRY_CODE'\n\tPROV_STATE='$PROV_STATE'\n\tCITY='$CITY'\n\tSITE_NAME='$SITE_NAME'\n"
   echo "Nameserver:"
   echo -e "\tDOMAIN=$DOMAIN\n\tNAMESERVER_ADDR=$NAMESERVER_ADDR\n\tNAMESERVER_NWRK=$NAMESERVER_NWRK\n\tNAMESERVER_PRFX=$NAMESERVER_PRFX"
   echo -e "\tNAMESERVER_ARPA=$NAMESERVER_ARPA\n\tNAMESERVER_OCT4=$NAMESERVER_OCT4\n"
   echo "NuxRef Templating"
   echo -e "\tNUXREF_TEMPLATES=$NUXREF_TEMPLATES"
   echo
)

Step 2 of 4: Setup PowerDNS

First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!
Database Configuration:

################################################################
# Configure PostgreSQL (for PowerDNS)
################################################################
# Optionally eliminate/reset any existing database.
/bin/su -c "/usr/bin/dropdb -h $PGHOST -p $PGPORT $PGNAME 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGRWUSER 2>&1" postgres &>/dev/null

# Create Read/Write User (our Administrator)
echo "Enter the role password of '$PGRWPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGRWUSER -P 2>&1" postgres

# Create our Database and assign it our Administrator as its owner
/bin/su -c "/usr/bin/createdb -h $PGHOST -p $PGPORT -O $PGRWUSER $PGNAME 2>&1" postgres 2>&1

# The below seems big, but will work fine if you just copy it
# as-is right to your terminal: this will prepare the SQL
# statement needed to build your DNS server's database backend
sed -e '/^--?/d' \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
        $NUXREF_TEMPLATES/pgsql.pdns.template.schema.sql > \
          /tmp/pgsql.pdns.schema.sql

# load DB
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.schema.sql

# This will get your database started with some working data to use.
# This part is optional, but since it's so easy to delete stuff later
# and there really isn't a whole lot taking place here, you should run
# this step. It becomes especially useful in debugging later.
sed -e "/^--?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" 
    -e "s/%NAMESERVER_ARPA%/$NAMESERVER_ARPA/g" 
    -e "s/%NAMESERVER_OCT4%/$NAMESERVER_OCT4/g" 
        $NUXREF_TEMPLATES/pgsql.pdns.template.data.sql > 
            /tmp/pgsql.pdns.data.sql

# load DB with our data
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.data.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.data.sql
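Before moving on, it doesn't hurt to confirm the schema and the sample data actually landed in the database. A quick sanity check (the domains and records tables come from the standard PowerDNS schema that the template is modelled on):

# List the tables our schema just created
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c '\dt' $PGNAME" postgres

# Peek at the sample records we loaded
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c 'SELECT name, type, content FROM records;' $PGNAME" postgres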

Server Configuration:

################################################################
# Configure PowerDNS
################################################################
# Create backup of configuration files
[ ! -f /etc/pdns/pdns.conf.orig ] && \
   cp /etc/pdns/pdns.conf /etc/pdns/pdns.conf.orig

# Install our configuration using the template
sed -e "/^#?/d" 
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" 
    -e "s/%NAMESERVER_NWRK%/$NAMESERVER_NWRK/g" 
    -e "s/%NAMESERVER_PRFX%/$NAMESERVER_PRFX/g" 
    -e "s/%PGRWUSER%/$PGRWUSER/g" 
    -e "s/%PGRWPASS%/$PGRWPASS/g" 
    -e "s/%PGHOST%/$PGHOST/g" 
    -e "s/%PGPORT%/$PGPORT/g" 
    -e "s/%PGNAME%/$PGNAME/g" 
        $NUXREF_TEMPLATES/pgsql.pdns.template.pdns.conf > 
          /etc/pdns/pdns.conf

# Protect our configuration since it has user/pass info
# inside of it.
chmod 640 /etc/pdns/pdns.conf
chown root.pdns /etc/pdns/pdns.conf

Step 3 of 4: Setup PowerDNS Recursor

################################################################
# Configure PowerDNS Recursor
################################################################
# Create backup of configuration files
[ ! -f /etc/pdns-recursor/recursor.conf.orig ] && \
   cp /etc/pdns-recursor/recursor.conf \
        /etc/pdns-recursor/recursor.conf.orig

# Install our configuration using the template
sed -e "/^#?/d" 
        $NUXREF_TEMPLATES/pgsql.pdns-recursor.template.recursor.conf > 
          /etc/pdns-recursor/recursor.conf

# Generate an up to date root.hints file, this allows recursion
# back out to the internet.
curl -u ftp:ftp 'ftp://ftp.rs.internic.net/domain/db.cache' \
    -o /etc/pdns-recursor/root.hints

# If the above command did not work, you can use the one I shipped
# with the nuxref-template.pdns packaging:
#   cp $NUXREF_TEMPLATES/root.hints /etc/pdns-recursor/root.hints

# Alternatively, PowerDNS is hard-coded with a default set of root hints.
# I personally just like seeing it as an external configuration instead.
# But... if all of this is cumbersome to you and you simply don't want
# to use the official root.hints (relying on the hard-coded one instead),
# you can do the following:
#
# sed -i -e '/^[ \t]*hint-file=/d' /etc/pdns-recursor/recursor.conf

# Start up all of our services
chkconfig pdns-recursor --level 345 on
chkconfig pdns --level 345 on
service pdns-recursor restart
service pdns restart

It’s important to take a time-out on this step just to make sure everything is working.
A few simple commands should work perfectly for you; otherwise we have an issue:

# The following command should output a bunch of Google's DNS servers
nslookup google.com $NAMESERVER_ADDR
# The following command should output the same list
nslookup -port=5300 google.com 127.0.0.1

# If you receive an error such as 
#      ** server can't find google.com: NXDOMAIN
# Then you need to revisit the above steps again

# Alternatively, if you receive an error such as:
#  ;; connection timed out; trying next origin
#  ;; connection timed out; trying next origin
#  ;; connection timed out; no servers could be reached
# Then you have most likely been restricted from accessing
# port 53 on the outside world. You're not really
# in a problem state at this point. Make sure the rest
# of the tests (Below) work and then make sure to follow
# the section of this blog entitled:
#     'Zone Forwarding Alternative'
# 
# 
# You should be able to resolve the domain
# poweradmin.$DOMAIN to this very server you're hosting
# it on:
nslookup poweradmin.$DOMAIN $NAMESERVER_ADDR

# You can even test reverse lookups using our data
# we loaded with the following command:
nslookup $NAMESERVER_ADDR $NAMESERVER_ADDR

# The above should resolve itself to hostmaster.your.domain
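
# (Optional; a minimal sketch.) If you have dig handy (from the
# bind-utils package), the same checks can be made with it:
# dig @$NAMESERVER_ADDR poweradmin.$DOMAIN
# dig @$NAMESERVER_ADDR -x $NAMESERVER_ADDR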

Step 4 of 4: Setup PowerAdmin

First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure PostgreSQL (for PowerAdmin)
################################################################

# Now we need to update our database with a schema for
# poweradmin to work with
sed -e "/^--?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
    -e "s/%PGRWUSER%/$PGRWUSER/g" 
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.schema.sql > 
            /tmp/pgsql.poweradmin.schema.sql

# Now we can load the file:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.poweradmin.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.poweradmin.schema.sql

# If you loaded the sample dataset for PowerDNS earlier, then you'll
# want to additionally load this file too to help PowerAdmin access it
sed -e "/^--?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" 
    -e "s/%NAMESERVER_ARPA%/$NAMESERVER_ARPA/g" 
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.data.sql > 
            /tmp/pgsql.poweradmin.data.sql

# load DB with our data
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.poweradmin.data.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.poweradmin.data.sql

################################################################
# Configure PowerAdmin (for PowerDNS Administration)
################################################################
# Create backup of configuration files
[ ! -f /etc/poweradmin/config.inc.php.orig ] && \
   cp /etc/poweradmin/config.inc.php \
        /etc/poweradmin/config.inc.php.orig

# Apply our configuration
sed -e "/^//?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
    -e "s/%PGHOST%/$PGHOST/g" 
    -e "s/%PGNAME%/$PGNAME/g" 
    -e "s/%PGPORT%/$PGPORT/g" 
    -e "s/%PGRWUSER%/$PGRWUSER/g" 
    -e "s/%PGRWPASS%/$PGRWPASS/g" 
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.config.inc.php > 
            /etc/poweradmin/config.inc.php

# Protect file since it contains passwords
chmod 640 /etc/poweradmin/config.inc.php
chown root.apache /etc/poweradmin/config.inc.php

# NginX Configuration
sed -e "/^#?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
        $NUXREF_TEMPLATES/nginx.poweradmin.template.conf > 
            /etc/nginx/conf.d/poweradmin.conf

################################################################
# Generate SSL Keys For Webpage Security
################################################################
# Generate SSL keys (if you don't have any already) that we
# will use to secure our PowerAdmin website.
openssl req -nodes -new -x509 -days 730 -sha256 -newkey rsa:2048 \
   -keyout /etc/pki/tls/private/$DOMAIN.key \
   -out /etc/pki/tls/certs/$DOMAIN.crt \
   -subj "/C=$COUNTRY_CODE/ST=$PROV_STATE/L=$CITY/O=$SITE_NAME/OU=IT/CN=$DOMAIN"

# Permissions; protect our Private Key
chmod 400 /etc/pki/tls/private/$DOMAIN.key

# Permissions; protect our Public Key
chmod 444 /etc/pki/tls/certs/$DOMAIN.crt
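
# (Optional; a minimal sketch.) Review what was just generated, e.g.
# the certificate's subject and expiry dates:
# openssl x509 -in /etc/pki/tls/certs/$DOMAIN.crt -noout -subject -dates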

At this point you should be able to start NginX. If it’s already running,
send it a reload or just run the commands below.

# If you chose the NginX approach you'll want to make sure it's
# set up to run correctly and restarts itself if the system is
# ever rebooted:

# Ensure NginX runs even after a reboot
chkconfig nginx --level 345 on
chkconfig php-fpm --level 345 on

# Restart the service if it isn't running already
service php-fpm restart
service nginx restart
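
If you’d like to quickly confirm the site is actually being served before moving on, a simple curl probe will do. A minimal sketch (it assumes the NginX template hosts PowerAdmin under the poweradmin.$DOMAIN virtual host; -k is needed because our certificate is self-signed):

# Expect an HTTP 200 (or a redirect to the login page)
curl -sk -H "Host: poweradmin.$DOMAIN" https://127.0.0.1/ \
    -o /dev/null -w "HTTP %{http_code}\n"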

Now, we’re almost done. We need to make sure our server is referencing our new DNS server. You may need to update your network settings, but the following will just cheat for the time being and set you up:

[ ! -f /etc/resolv.conf.orig ] && \
   cp /etc/resolv.conf \
       /etc/resolv.conf.orig

# Tell our server to use our new DNS server
cat << _EOF > /etc/resolv.conf
search $DOMAIN
nameserver $NAMESERVER_ADDR
_EOF

# Restore your old configuration like so
# if you need to:
#  /bin/mv -f /etc/resolv.conf.orig /etc/resolv.conf

You will want to additionally add the following rules to your iptables configuration (/etc/sysconfig/iptables):

#---------------------------------------------------------------
# DNS Traffic
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT

#---------------------------------------------------------------
# Web Traffic for PowerAdmin
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
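
After updating the file, reload the firewall so the new rules take effect (a minimal sketch; it assumes you manage your rules with the stock iptables init script):

# Apply the new firewall rules
service iptables restart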

You should now be able to visit https://poweradmin and see a login screen. You may have to accept the ‘untrusted key’ prompt. Don’t worry; it’s safe to do so! In fact, if you’re worried, just have a look at the key itself before accepting it (a quick check follows below); you’ll see that it’s just the one we generated earlier. The login is admin and the password is admin at the start. You will want to consider changing this right away after you log in, for precautionary sake.
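
If you would rather verify from the command line, you can inspect the certificate the server is actually presenting and compare it against the one we generated. A minimal sketch (it assumes poweradmin.$DOMAIN resolves to this server, as configured earlier):

# Show the subject and validity dates of the certificate being served
echo | openssl s_client -connect poweradmin.$DOMAIN:443 2>/dev/null | \
    openssl x509 -noout -subject -dates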

Your setup can now be illustrated by the model below. It’s virtually the same setup as you had before; however, now instead of querying your ISP’s DNS server, you query your very own local one. Now you can easily maintain your own local network and begin labeling the devices you use on it using PowerAdmin.

This illustration shows the PowerDNS Recursor (pdns-recursor) at work.
Your authoritative (Power)DNS server caches the location for you for a period of time, making subsequent requests to the same spot VERY fast.

Zone Forwarding Alternative

Up until now, our ISP was using its own root.hints file (or some alternative method) to look up our requests. But now it is our server that goes out directly into the big bad internet instead. Since DNS requests are not encrypted, it’s now possible for others to spy on the hostnames we’re resolving (and the places we’re visiting). Not only that, these same people can easily trace the requests back to us (all DNS requests originate from our IP now). For example, this could let someone learn exactly which online banking site you use. Prior to hosting your own DNS server, all the lookups you made were channeled privately between you and your ISP, so this was never a problem. It was our ISP who made the (recursive) requests for us instead of us doing them ourselves. What we looked up didn’t explicitly trace back to us; it traced back to our ISP. Previously, we actually had more privacy (depending on the contract we signed with our ISP).

Your ISP has thousands of clients making requests to its DNS servers constantly. As a result, it has probably already cached 90% of all the websites we intend to visit. Cached content means a very speedy response from the server. Meanwhile, our local DNS server’s cache will (probably) be empty most of the time (depending on how many people use it). Hence your ISP’s DNS server will be MUCH faster than yours.
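
You can actually watch the caching work for yourself: ask your server for the same (previously unvisited) domain twice and compare the reported query times. A minimal sketch using dig (from the bind-utils package):

# The first lookup walks out to the internet; the repeat is served
# from our server's cache and should report a near-zero query time
dig @$NAMESERVER_ADDR nuxref.com | grep 'Query time'
dig @$NAMESERVER_ADDR nuxref.com | grep 'Query time'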

When you signed up with your ISP, they would have given you (at least) 1 DNS server to use (most provide 2 – a primary and a backup). We can actually tell our DNS server to use these instead of our root.hints file when it finds a domain that needs to be further looked up. This way, you regain your secure pipe between you and your ISP. The trade-off is you’re adding one more hop to your recursive lookups. But in most scenarios, they will have already cached what you’re looking for, so the response will be immediate. The below diagram illustrates the worst case scenario:

A forwarding zone of ‘*’ (asterisk) tells the PowerDNS Recursor to forward all requests to a specific server. In our example we use our ISP’s DNS servers.

Here is how you can alter your configuration:

# Put your DNS servers below; the ones in place right
# now are the public ones offered by Google.
DNS_SERVERS="8.8.8.8 8.8.4.4"

# Remove any information that may conflict
sed -i -e '/^[ \t]*hint-file=/d' /etc/pdns-recursor/recursor.conf
sed -i -e '/^[ \t]*forward-zones=/d' /etc/pdns-recursor/recursor.conf

# Disable hint-file
echo 'hint-file=' >> /etc/pdns-recursor/recursor.conf

# Prepare Forwarding Zones for everything unmatched:
echo -n 'forward-zones=*=' >> /etc/pdns-recursor/recursor.conf
echo $(echo "$DNS_SERVERS" | 
    sed -e 's/^[ t]*//g' -e 's/[ t]*$//g' -e 's/[ t]+/, /g') >> 
     /etc/pdns-recursor/recursor.conf

# Now restart our recursor
service pdns-recursor restart
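
# (Optional; a minimal sketch.) Verify that forwarding works; the
# recursor (listening locally on port 5300) should now answer via
# your chosen upstream servers:
# nslookup -port=5300 google.com 127.0.0.1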

Got Old BIND Configuration You Need Imported?

This step is completely optional! If you’re not familiar with what BIND even is, or you know you’ve never used it, you can freely skip this section.
If you’re migrating from BIND to PowerDNS, then you may already have a setup in place. PowerDNS makes for an easy transition by providing a tool (zone2sql) that will scan your old BIND configuration and generate the SQL needed for an easy migration to PowerDNS.

################################################################
# Generate SQL content from all of your zone files
################################################################
# I just had 1 simple DNS zone, but you may have many.
# The below did all the work for me (BIND was configured to
# run in a chroot environment):
# zone2sql --gpgsql --zone=/var/chroot/var/named/data/zone.nuxref.local > \
#    /tmp/pgsql.pdns.zones.sql
# zone2sql --gpgsql --zone=/var/chroot/var/named/data/192.168.0 >> \
#     /tmp/pgsql.pdns.zones.sql

# You could even cheat and run all your files with a command like this.
# Please note that this is optional (and not part of the blog; it's just
# a simple conversion tool for those who already have a BIND configuration):
ZONE_DIR=/var/chroot/var/named/data/
[ -f /tmp/pgsql.pdns.zones.sql ] && /bin/rm -f /tmp/pgsql.pdns.zones.sql
for ZONE in $(find $ZONE_DIR -type f); do
   # Fetch ORIGIN/ZONE ID
   ZONE_ID=$(cat $ZONE | egrep '^[ \t]*\$ORIGIN' | \
              sed -e 's/^.*\$ORIGIN[ \t]\+\([^ \t]\+\).*/\1/g' \
                  -e 's/[. \t]*$//g')
   [ -z "$ZONE_ID" ] && echo "Error Parsing: $ZONE" && continue
   zone2sql --gpgsql --zone=$ZONE --zone-name=$ZONE_ID >> /tmp/pgsql.pdns.zones.sql
done

# Now before you load this file into your database, you may
# want to review it.  It doesn't hurt to scan it over and remove
# any entries you don't think would be useful.

# Under normal circumstances you would be done at this point; however, because
# we are additionally using PowerAdmin, we need to create a few zone entries
# based on the SQL file we just generated. Write them to a separate file
# first (appending awk's output to the very file it is still reading from
# would be asking for trouble), then fold them back in:
$NUXREF_TEMPLATES/import2zone.awk /tmp/pgsql.pdns.zones.sql > \
    /tmp/pgsql.pdns.zones.extra.sql
cat /tmp/pgsql.pdns.zones.extra.sql >> /tmp/pgsql.pdns.zones.sql
/bin/rm -f /tmp/pgsql.pdns.zones.extra.sql

# Then I just loaded the file straight into the database:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.zones.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.zones.sql
# You're done!

So… That’s it? Now I’m done?

Yes and no… This blog pretty much hands you a working DNS server with little to no extra configuration needed on your part.

No system is bulletproof; disaster can always strike when you’re least expecting it. To cover yourself, always consider backups of the following (a minimal sketch follows this list):

  • Your PostgreSQL Database: This is where all of your DNS configuration is stored. You definitely do not want to lose this. May I suggest you reference my other blog entry here where I wrote a really simple backup/restore tool for a PostgreSQL database.
  • /etc/poweradmin/*: Your PowerAdmin flat file configuration allowing you to centrally manage everything via a webpage.
  • /etc/pdns/*: Your PowerDNS flat file configuration which defines the core of your DNS Server. Its configuration allows you to centrally manage everything else through the PowerAdmin website.
  • /etc/pdns-recursor/*: Your PowerDNS Recursor flat file configuration which grants you the recursive functionality of your DNS Server.
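
As promised, here is a minimal backup sketch covering the list above (it assumes the environment variables from Step 1 are still set and that the commands run on the database host):

# Dump the PowerDNS/PowerAdmin database
/bin/su -c "/usr/bin/pg_dump -h $PGHOST -p $PGPORT $PGNAME" postgres > \
    /root/pdns.backup.sql

# Archive the flat file configuration
tar czf /root/pdns.config.backup.tgz \
    /etc/poweradmin /etc/pdns /etc/pdns-recursor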

What about Apache?

Apache is a perfectly fine alternative solution as well! I simply chose NginX because it is a much more lightweight approach. In fact, PowerAdmin already comes with an Apache configuration out of the box located in /etc/httpd/conf.d/. Thus, if you simply start up your Apache instance (service httpd start), you will be hosting its services right away. Please keep in mind that the default (Apache) configuration does not come with all the SSL and added security I provided with the NginX templates. Perhaps later on, I will update the template rpm to include a secure Apache setup as well.
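
If you do go the Apache route, the equivalent startup steps would look like this (a sketch; it assumes the stock PowerAdmin Apache configuration mentioned above):

# Ensure Apache runs even after a reboot
chkconfig httpd --level 345 on
# Start (or restart) the service
service httpd restart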

Credit

This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository

This blog makes use of my own repository that I loosely maintain. If you’d like me to continue to monitor and apply updates, as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!

Sources