Usenet Solution

A Usenet Solution For CentOS 6


The Indexer
Google uses its own web crawlers to scan the entire World Wide Web just to create an index from it. For each website they find, they track its name, its content, the language it's written in and so on. The result of them doing this is: we get to use their fantastic search engine! A search engine that has made our lives incredibly easy by granting us fast and easily accessible information at our fingertips. Well… Usenet needs the same thing done to it too; and Google isn't doing it for us! Usenet is a very big world of its own and without indexing, it's a lot harder to get around in (but not impossible). Thankfully Usenet is nowhere near the size of the World Wide Web and indexing it is very possible. We can even index it with the personal computer(s) we run at home!

Since just about anyone can index Usenet, one has to ask: why index Usenet ourselves if someone's already doing it for us elsewhere? In fact, there are many sites (and tools) that have already done the indexing of Usenet (some better than others) and are willing to share it with the rest of us. But it's important to know: it can take a lot of server power, disk space, and network consumption for these site administrators to constantly index Usenet for us. Since most (if not all) of the sites in this list are just hobbyists doing it for fun, it gets expensive for them to maintain things. For that reason some of them may charge or ask for a donation. If you want to use their services, you should respect their modest request of $8USD to $20USD for a lifetime membership. But don't get discouraged, there are still a lot of free ones too! Some get by with just advertisement banners on their website or only index once every few days. Other indexers may run constantly (every 1-5 minutes), granting you the latest data available at all times.

Alternatively, we can go as far as running our own Usenet indexer (such as NewzNab), just as the hobbyists mentioned above did. NewzNab will index Usenet on a regular basis. With your own indexer, you can choose to index just the content that appeals to you. You can even choose to offer your services publicly if you want. Just keep in mind that Usenet is huge! If you do decide to go this route, you'll find it a very CPU and network intensive operation. You may want to make sure you don't exceed your Internet Service Provider's (ISP) download limits.

Now back to the Google analogy I started earlier: when you find a link on Google you like, you simply click on it and your browser redirects you to the website you chose; end of story. However, in the Usenet indexing world, once you find something of interest, the Usenet Indexer will provide you with an NZB file. An NZB file is effectively a map that identifies where your content can be specifically located on Usenet (but not the data itself). An NZB file is to Usenet what a Torrent file is to a BitTorrent client. Both NZB and Torrent files provide the blueprints needed to mine (acquire) your data. Both NZB and Torrent files require a Downloader to perform the actual data mining for you.
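
To give you a feel for what these files actually contain, here is a minimal, illustrative NZB file; the newsgroup, subject, sizes, and message-ids below are all made up. A real NZB simply lists every Usenet article (segment) your content was split into:

<?xml version="1.0" encoding="UTF-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="uploader@example.com" date="1405209600" subject="My.Linux.ISO [1/2]">
    <groups>
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <segment bytes="512000" number="1">part1of2.abc123@example.com</segment>
      <segment bytes="512000" number="2">part2of2.def456@example.com</segment>
    </segments>
  </file>
</nzb>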

The Downloader
The Downloader takes an NZB file it's provided and uses it to acquire the actual data the file maps to. This is the final piece of the puzzle!
Of the list below, you really only need to choose one Downloader. I just listed more than one to give you alternatives to work with. My personal preference is NZBGet because it is more flexible. But its flexibility can also be very confusing (only at first). Once you get over its learning curve, and especially the initial configuration, it's a dream to work with. Alternatively, SABnzbd may be better for the novice if you're just starting off with Usenet and don't want much more of a learning curve than you already have.

Either way, pick your poison:

Title Package Details
NZBGet rpm/src NZBGet is written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources.
Community / Manual
*Note: I did create one small patch so that the Emailing Post Process script would conform to a few extra SMTP header standards to avoid causing issues with Google Mail and relaying mail to it.
**Note: I also created this patch in a recent update rebuild (Jul 17th, 2014) to fix a few directory paths so the compression tools (unrar and 7zip) work right away. I also added these compression tools as dependencies of the package so they'll be present for you from the start.

You can install NZBGet using the steps below:

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Install NZBGet
yum install -y nzbget \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Grab Template
cp /usr/share/nzbget/nzbget.conf ~/.nzbget

# Protect it
chmod 600 ~/.nzbget

# Start it Up (as a non-root user):
nzbget -D

# You should now be able to access it via: 
#     http://localhost:6789/
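
# Before starting a download, point NZBGet at your Usenet
# provider. The values below are placeholders (use the account
# details your provider gave you); edit ~/.nzbget and adjust
# the Server1 entries:
#
#   Server1.Host=news.yourprovider.com
#   Server1.Port=563
#   Server1.Username=yourusername
#   Server1.Password=yourpassword
#   Server1.Encryption=yes
#   Server1.Connections=8
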
SABnzbd n/a SABnzbd is an Open Source Binary Newsreader written in Python.
Community / Manual
Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# There is no RPM installer for this one, we just
# fetch straight from their repository.
# Install git (if it's not already)
yum install -y git

# Grab a snapshot of SABnzbd
git clone https://github.com/sabnzbd/sabnzbd.git SABnzbd

# Start it Up (as a non-root user):
python SABnzbd/SABnzbd.py \
    --daemon \
    --pid $(pwd)/SABnzbd/sabnzbd.pid

# You should now be able to access it via: 
#     http://localhost:8080/

NZBGet Processing Scripts
One of NZBGet's strongest features is its ability to process the content it downloads, both before and after it's received. The Post Processing (PP) has specifically been one of NZBGet's greatest features. It allows separation between the function of NZBGet (which is to download the content NZB files point to) and what you want to do with that content afterwards. Post Processing could do anything, such as catalogue what was received and place it into an SQL database. Post Processing could rename the content and sort it into separate directories for you depending on what it is. Post Processing can be as simple as just emailing you when a download completes, or posting to Facebook or Twitter. You're not limited to just one PP script either; you can chain them and run a whole slew of them one after another. The options are endless.
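
If you're wondering what a PP script actually looks like, below is a bare-bones sketch (not one of the packaged scripts; the log file path is just an example). NZBGet hands scripts their context through environment variables such as NZBPP_DIRECTORY (where the download landed) and NZBPP_NZBNAME, and reads the result back from the exit code: 93 reports success, 94 reports an error, and 95 reports that the script had nothing to do.

#!/bin/bash
# Minimal NZBGet post-processing sketch
POSTPROCESS_SUCCESS=93
POSTPROCESS_ERROR=94

# NZBGet exports details about the completed download:
if [ -z "$NZBPP_DIRECTORY" ]; then
   echo "This script must be called from NZBGet!"
   exit $POSTPROCESS_ERROR
fi

# Do something useful with the content; here we simply log it.
echo "$(date): finished '$NZBPP_NZBNAME' in $NZBPP_DIRECTORY" \
    >> ~/nzbget-completed.log
exit $POSTPROCESS_SUCCESS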

I've taken some of the popular PP scripts from the NZBGet forum and packaged them in self-installing RPMs as well, to make life easy for those who want them. Some of these packages require many dependencies and ports to make the installation smooth. Although I link directly to the RPMs here, you are strongly advised to link to my repository with yum if you haven't already done so.

Title Package Provides Details
Failure Link rpm/src FAILURELINK If a download fails, the script sends info about the failure to the indexer site, so a replacement NZB (same movie or TV episode) can be queued up if available. The indexer site must support the DNZB header "X-DNZB-FailureLink".

Note: The integration works only for downloads queued via URL (including RSS). NZB-files queued from local disk don’t have enough information to contact the indexer site.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-ppscript-failurelink \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 
nzbToMedia rpm/src DELETESAMPLES
RESETDATETIME
NZBTOCOUCHPOTATO
NZBTOGAMEZ
NZBTOHEADPHONES
NZBTOMEDIA
NZBTOMYLAR
NZBTONZBDRONE
NZBTOSICKBEARD
Provides an efficient way to handle post processing for
CouchPotatoServer, SickBeard, NZBDrone, Headphones, and Mylar
when using NZBGet on low performance systems like a NAS.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-ppscript-nzbtomedia \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this is that a lot of other packages all share the same libraries; it just didn't make sense to maintain a duplicate of them all.

Subliminal rpm/src SUBLIMINAL Provides a wrapper that integrates subliminal (which fetches subtitles given a filename or filepath) with NZBGet. Subliminal computes the correct video hashes using the powerful guessit library to ensure you get the best matching subtitles. It also relies on enzyme to detect embedded subtitles and avoid retrieving duplicates.

Multiple subtitles services are available using: Addic7ed, OpenSubtitles, SubsWiki, Subtitulos, TheSubDB, and TvSubtitles.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-ppscript-subliminal \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

*Note: python-subliminal (what this PP script is a wrapper to) had some issues I had to address. For one, I eliminated the entire PYPKG/subliminal/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this is that a lot of other packages all share the same libraries; it just didn't make sense to maintain a duplicate of them all.
**Note: Subliminal was written using Dict Comprehensions (PEP 274), a feature that wasn't introduced until Python 2.7. Unfortunately, the developers had no interest in resolving this and closed the issue with 'Upgrade to Python 2.7 or Python v3.3'. For this reason, subliminal does 'not' work at all with CentOS or Red Hat 6.x. So… I fixed that. Now, I can proudly tell you that the copy of subliminal I host on my repository is 100% compatible with Python 2.6 (this includes some backported Logging functionality too). For those interested in the patch content (so you can backport it to other systems) you can view it here:

patch #1 Just an update to enforce the correct version of guessit (v0.7.1) and babelfish (v0.5+) to support all of the packages included.
patch #2 The core backporting of python 2.4+ support (by eliminating Dict Comprehensions). This patch also includes support for babelfish v0.5.1.
patch #3 Add support for guessit v0.7.1
patch #4 subliminal would crash when there wasn’t enough details to detect the show or movie. This patch allows subliminal to gracefully handle these scenarios.
patch #5 New as of Aug 21st, 2014
Allows subliminal to download either hearing-impaired or non hearing-impaired subtitles rather than exclusively one or the other (while still fetching the best matched one).
patch #6 New as of Aug 24th, 2014
If subliminal matched more than one subtitle, it would download them all. This is presumed to be a bug because when I looked through the code, the extra downloads were never referenced. This patch has subliminal stop downloading subs after the first successfully matched one is downloaded.
patch #7 New as of Aug 24th, 2014
This is an extension to patch #5 above. If both hearing-impaired and non hearing-impaired subtitles were found, this patch allows you to prioritize which one is more important to you (in the event they both score high).
patch #8 New as of Aug 24th, 2014
If a provider was unavailable (their server was simply not online), subliminal would previously crash. This patch allows more graceful handling, which just involves moving on to the next identified provider instead.
Video Sort rpm/src VIDEOSORT With the post-processing script VideoSort you can automatically organize downloaded video files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-ppscript-videosort \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this is that a lot of other packages all share the same libraries; it just didn't make sense to maintain a duplicate of them all.

Automated Index Searchers
These tools search for already-indexed content you're interested in and can be configured to automatically download it for you when it's found. The tool itself doesn't do the downloading; it automates the connection between your chosen Indexer and Downloader (such as NZBGet or SABnzbd). For this reason, these tools do not actually search Usenet at all and therefore add very little overhead to your system (or NAS drive).

Title Package Details
NZBDrone
rpm/src Automatic TV Show downloader
This was only made possible because of the blog I wrote on mono v3.x.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbdrone \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

# Start it Up (as a non-root user):
nohup mono /opt/NzbDrone/NzbDrone.exe &

# You should now be able to access it via: 
#     http://localhost:8989/
Sick Beard
n/a (Another) Automatic TV Show downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Sick Beard
# Note that we grab the master branch, otherwise we default
# to the development one.
git clone -b master https://github.com/midgetspy/Sick-Beard.git SickBeard

# Start it Up (as a non-root user):
python SickBeard/SickBeard.py \
   --daemon \
   --pidfile $(pwd)/SickBeard/sickbeard.pid

# You should now be able to access it via: 
#     http://localhost:8081/
CouchPotato
n/a Automatic movie downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of CouchPotato
git clone https://github.com/RuudBurger/CouchPotatoServer.git CouchPotato

# Start it Up (as a non-root user):
python CouchPotato/CouchPotato.py \
   --daemon \
   --pid_file CouchPotato/couchpotato.pid

# You should now be able to access it via: 
#     http://localhost:5050/
Headphones
n/a Automatic music downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Headphones
git clone https://github.com/rembo10/headphones Headphones

# Start it Up (as a non-root user):
python Headphones/Headphones.py \
   --daemon \
   --pidfile $(pwd)/Headphones/headphones.pid

# You should now be able to access it via: 
#     http://localhost:8181/
Mylar
n/a Automatic Comic Book downloader

Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Mylar
git clone https://github.com/evilhero/mylar Mylar

# Start it Up (as a non-root user):
python Mylar/Mylar.py \
   --daemon \
   --pidfile $(pwd)/Mylar/Mylar.pid

# You should now be able to access it via: 
#     http://localhost:8090/

Mobile Integration
There are some fantastic apps out there that allow you to integrate your phone with the applications mentioned above, letting you manage your downloads from wherever you are. A special shout out to NZB 360, who recently had his app pulled from the Google Play Store for no apparent reason and had to set up shop outside of it. I can say first hand that his application is amazing! You should totally consider it if you have an Android phone.

Usenet Providers
For those who don't have Usenet already, it does come at an extra cost and/or fee. The cost averages anywhere between $6 and $20 USD/month (anything more and you're paying too much). The reason for this is that Usenet is a completely isolated network from the Internet. It's comprised of a completely isolated set of interconnected servers. While the Internet is comprised of hundreds of millions of servers each hosting specific content, every Usenet server hosts the entire Usenet database… it hosts everything. If anything is uploaded to Usenet, all of the interconnected servers update themselves with their own local copy of it (to serve us). For this to happen, their servers have to have petabytes of storage. The fee they charge just goes to support their operational costs such as bandwidth, maintenance and the regular addition of storage to their infrastructure. There is very little profit to be made for them at $8 a person. Here is a breakdown of a few providers (in alphabetical order) I'm aware of and support:

Provider | Server Location(s) | Notes | Average Cost
Astraweb | US & Europe | Retention: 2158 Days (5.9 Years) | $6.66USD/Month to $15USD/Month (see here for details)
Usenet Server | US | Retention: 2159 Days (5.9 Years); has a free 14 day trial | $13.33USD/Month to $14.95USD/Month (see here for details)

*Note: Table information was last updated on Jul 14th, 2014. Prices are subject to change as time goes on and this blog post isn’t updated.
**Note: If you have a provider that you would like to be added to this list… Or if you simply spot an error in pricing or linking, please feel free to contact me so I can update it right away.

Why do people use Usenet/Newsgroups?

  • Speed: It's literally just you and another server; a simple 1-to-1 connection. Data transfer speeds will always be as fast as your ISP can carry your traffic to and from the Usenet server you signed up with. Unlike torrents, content isn't governed by how many seeders and leechers have the content available to you. You never have to deal with upload/download ratios, maintain quotas, or sit idle in someone's queue who will serve data to you eventually.
  • Security: You only deal with secure connections between you and your Usenet provider; no one else! Torrents can have you maintaining thousands of connections to different systems and sharing data with them. With BitTorrent setups, trackers publicly advertise what you have to share and what you're trying to download. Your privacy is public to anyone using the same tracker you're connected to. Not only that, but most torrent connections are insecure as well, which allows virtually anyone to view what you're doing.

Please know that I am not against torrents at all! In fact, now I'll take the time to mention a few points where torrents excel over Usenet:

  • Cost: It doesn't usually cost you anything to use the torrent network. It all depends on the tracker you're using of course (some private trackers charge for their usage). But if you're just out to get the free public stuff made available to us, there are absolutely no costs at all to use this method!
  • Availability: Usenet is far from perfect. When someone uploads something to their Usenet provider, by the time this new content propagates to all of the other Usenet servers, there is a small chance the data will be corrupted. This happens with Usenet all of the time. To compensate for this, Usenet users anticipate corruption (sad but true). These people kindly post Parchive files to Usenet to complement whatever they previously uploaded. Parchive files work similar to how RAID works: they provide the building blocks to reassemble data in the event it's corrupted (see the short par2 example after this list). Corruption never happens with torrents unless the person hosting decides to host corrupted data. Any other scenario would simply be because your BitTorrent client had a bug in it.
  • Retention: As long as someone is willing to seed something, or enough combined leechers can reconstruct what is being shared, data will always stay alive in the BitTorrent world. However, with Usenet, the Usenet server is hosting EVERYTHING, which means it has to maintain a lot of data on a lot of disk space! For this reason, a retention period is inevitably met. A time is eventually reached where the Usenet server purges (erases) older content from its hard disks to make room for the new stuff showing up every day.
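
For the curious: your Downloader verifies and repairs with those Parchive files automatically, but you can also drive them by hand with the par2 command line tool (packaged as par2cmdline in most repositories). A quick sketch, assuming a download that came with an archive.par2 beside it:

# Verify a download against its parity data
par2 verify archive.par2

# Reconstruct any missing or damaged pieces (works as long as
# enough recovery blocks are present)
par2 repair archive.par2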

Honestly, at the end of the day, both torrents and Usenet servers have their pros and cons. Each of us will always weigh them differently. What's considered the right choice for one person might not be the right one for another. Heck, just use both depending on your situation! :)

Source

.NET Solution

Mono 3.x for CentOS 6


Introduction
Mono allows us to run Windows applications in our Linux environment. It is an open source implementation of Microsoft's .NET Framework. The problem is, CentOS (and Red Hat) 6.x ship with Mono v2.4, which is a little outdated. You can't take advantage of the newer apps .NET developers are writing. In fact, you can't run anything that requires a version of the .NET runtime libraries newer than v3.5.

In addition to that, Mono v3.4 grants your CentOS system support for .NET applications that weren't otherwise available to you with v2.4.

Compatibility | Mono v2.4 | Mono v3.4
.NET 1.0 | Yes | No (dropped support)
.NET 2.0 | Yes | Yes
C# 3.0 | Yes | Yes
ASP.NET 2.0 | Yes | Yes
.NET 3.5 | Partial | Yes
.NET 4.0 | No | Yes
.NET 4.5 | No | Yes
C# 4.0 | No | Yes
ASP.NET 4.0 | No | Yes
C# 5.0 | No | Yes

What’s so special about the repackaging you did?
Well first of all… it's actually an RPM package. It doesn't require you to haul in a ton of development libraries and compile everything from scratch. Another point is that Mono v2.4 (shipped with CentOS & Red Hat 6) had many patches applied to it. These patches forced Mono to conform to the common directory structure used natively by our operating system. It took me several hours to recreate all of these patches, forcing Mono v3.4 to comply with the same standards.

Finally (and at the time of writing this blog), this is the first package of Mono v3.4 I've found that can be installed via an RPM and does not require you to recompile everything yourself. Hence you don't even need to haul in any development libraries at all; Mono will just work as is. Since my repackaging was based on the original, I tried to keep all of the external rpm packages the same. That said, I did get a little confused with all of the new packages and binary tools that ship with Mono v3.x. Since I'm not a Microsoft developer, I tried to sort these new packages as best as I could. Please feel free to let me know how I can improve this package if you notice anything I've done wrong.

Just hand over all your work already!
Absolutely, here they are:

Binary Packages:

Note: Mono (v3) was a bit picky about the SQLite version it referenced. I had to update that package to a slightly newer version as well for everything to play nicely. Only if you intend to haul in the mono-data-sqlite-*.rpm package will you be required to haul in this newer version. I've already provided it on my repository, but for the sceptics who want to build it themselves, I'll include those instructions too.

Source Packages:

Debug Info Packages

You can get it from my repository too
Content/Repositories are grouped as follows:

  • nuxref - Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared - Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don't maintain. It may not always be up to date, but it can be thought of as a baseline (where things worked for me with respect to this blog). If I get around to testing a new package and everything is still working fine, I'll update the repositories here. I also resign everything with my own key to mark that I've tested it.

Install all of the necessary packages:

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository, in which I've had to rebuild a few
# packages to support PostgreSQL as well as fix some bugs in
# others. This step will really make your life easy and let
# us compare apples to apples with package versions. It also
# allows you to haul in a working setup right out of the box.

# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enable=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# You should use yum priorities (if you're not already) to
# eliminate version problems later, but if you intend on 
# just pointing to my repository and not the others, then you'll
# be fine.
yum install -y yum-plugin-priorities

################################################################
# Install Mono
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-core
################################################################
# Install additional packages too if you wish (depending on your
# needs.
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-web
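
Once installed, a quick smoke test never hurts. This is just a sanity check I suggest, not part of the packaging; the mcs compiler comes with the development packages, so skip the compile step if you only installed mono-core:

# Confirm the runtime is present and reports v3.4.x
mono --version

# (Optional) Compile and run a trivial program; requires mono-devel
cat << _EOF > hello.cs
class Hello {
    static void Main() {
        System.Console.WriteLine("Hello from Mono!");
    }
}
_EOF
mcs hello.cs && mono hello.exe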

I Never Trust Your Stuff; Let Me Do It Myself
Sure. First you're going to need to fetch all the patches I had to create (plus the old ones carried forward from Mono v2.4):

You can additionally view the RPM SPEC file I created here.

First prepare our development environment with mock if you haven’t already:

# Install 'mock' into your environment if you don't have it already
# This step will require you to be the superuser (root) in your native
# environment.
yum install -y mock

# Grant your normal every day user account access to the mock group
# This step will also require you to be the root user.
usermod -a -G mock YourNonRootUsername
# Download the official mono packages from their official
# hosting site:
wget http://origin-download.mono-project.com/sources/mono/mono-3.4.0.tar.bz2

# Download all of the building blocks you'll need
wget --output-document=mono.spec https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAiqD2KjHhakweKY_mkLLPba/20140713/mono/mono.spec?dl=1
wget --output-document=monodir.c https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABZHv5NeWFICyAAAw--eiJoa/20140713/mono/monodir.c?dl=1
wget --output-document=mono.snk https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADoY6UvThpcQbUHhs7XUecsa/20140713/mono/mono.snk?dl=1
wget --output-document=lc https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACkja0kNxmO1ytHOIw523HTa/20140713/mono/lc?dl=1
wget --output-document=mono-3.4-ppc-threading.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAKrjdqRR826osJjbqZIu5la/20140713/mono/mono-3.4-ppc-threading.patch?dl=1
wget --output-document=mono-1.2.3-use-monodir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABHvxieGDqU8eDB24ghS_Dua/20140713/mono/mono-1.2.3-use-monodir.patch?dl=1
wget --output-document=mono-2.2-uselibdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABSjbMfIRj5JB7HKWydQVpja/20140713/mono/mono-2.2-uselibdir.patch?dl=1
wget --output-document=mono-2.0-monoservice.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADsL0DBfI0VixRAw6uI0Vkpa/20140713/mono/mono-2.0-monoservice.patch?dl=1
wget --output-document=mono-3.4-libgdiplusconfig.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAHvISVzxIPq9xCmw2m2tcPa/20140713/mono/mono-3.4-libgdiplusconfig.patch?dl=1
wget --output-document=mono-3.4-libdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACHrlv_iSp36jhSeOn4ki0fa/20140713/mono/mono-3.4-libdir.patch?dl=1
wget --output-document=mono-3.4-POSIX_ARG_MAX.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADSN5WhjyqTQptMoWthDnYHa/20140713/mono/mono-3.4-POSIX_ARG_MAX.patch?dl=1
wget --output-document=mono-3.4.xamarin.BZ18690.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAwWsEKkOlnzYZCshp29wuwa/20140713/mono/mono-3.4.xamarin.BZ18690.patch?dl=1

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install libpng-devel libjpeg-devel \
   giflib-devel libtiff-devel libexif-devel libX11-devel fontconfig-devel \
   gettext make gcc-c++ bison glib2-devel pkgconfig \
   libicu-devel libgdiplus-devel zlib-devel automake libtool \
   gettext-devel mono-core mediainfo mysql-devel \
   postgresql-devel sqlite-devel

# Copy our packages into our environment
mock -v -r epel-6-x86_64 --copyin mono.spec /builddir/build/SPECS
mock -v -r epel-6-x86_64 --copyin \
    *.patch \
    mono-3.4.0.tar.bz2 \
    mono.snk \
    lc \
    monodir.c \
    /builddir/build/SOURCES

# Shell into our environment
mock -v -r epel-6-x86_64 --shell

# Change to our build directory
cd builddir/build

# Enable Bootstrapping for the first time
# mono actually requires 'mono' (itself) to build. Weird, right?
# But still necessary! For this reason I prepared an easier
# way of enabling bootstrapping for your first build.
#
# Once you install the binaries created from your first build
# we can rebuild the package again (but this time without
# bootstrapping). The purpose of this is to ensure the mono
# binaries and packages we created are equivalent to the
# the bootstrapped content.
# So... on with the bootstrapping; Note: this will take
# 20 to 30 minutes depending on how fast your system is.
rpmbuild -ba --define "_with_bootstrap=1" SPECS/mono.spec

# Now that we've created mono from a bootstrap, we can
# install the package back into our virtual environment
# and rebuild it again. But this time we rebuild it
# without the bootstrap reference.
rpm -Uvh RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm
# Now rebuild the whole thing all over again to confirm
# your build was good; Note: This will take another 20 to 30 
# minutes again...
rpmbuild -ba SPECS/mono.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/mono-3.4.0-1.el6.nuxref.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-oracle-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-postgresql-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-sqlite-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-locale-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-reactive-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-wcf-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-winforms-3.4.0-1.el6.nuxref.x86_64.rpm .
# The debuginfo package will only exist if you successfully rebuilt
# everything without the bootstrap set
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-debuginfo-3.4.0-1.el6.nuxref.x86_64.rpm .

Upgrading SQLite
For me, I just visited pkgs.org and downloaded the Fedora 20 source (-src.rpm) release of SQLite. Then I extracted its contents as follows:

# I can't promise this link will work, as this package is always
# evolving, but if you do the search above, you'll get the idea
wget http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Everything/source/SRPMS/s/sqlite-3.8.1-2.fc20.src.rpm

# Alternatively, you can download the source rpm package I'm
# already hosting:
wget http://repo.lead2gold.org/centos/6/en/source/custom/sqlite-3.8.1-2.el6.src.rpm

# Then extracted it using this neat technique:
rpm2cpio sqlite-*.src.rpm | cpio -idmv
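
# After extraction you should see (among other things) the
# sqlite.spec file, a handful of *.patch files, and the source
# archive (sqlite-*.zip); these are exactly what gets copied
# into the mock environment below.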

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install ncurses-devel \
    readline-devel glibc-devel autoconf /usr/bin/tclsh \
    tcl-devel

# You'll already have the bits you need, as nothing is
# changed with this package. We're just using it as is
mock -v -r epel-6-x86_64 --copyin sqlite-*.zip /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin *.patch /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin sqlite.spec /builddir/build/SPECS

# Shell into our environment
mock -v -r epel-6-x86_64 --shell
 
# Change to our build directory
cd builddir/build

# Build our packages (process doesn't take long ~2 min)
rpmbuild -ba SPECS/sqlite.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/sqlite-3.8.1-2.el6.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-devel-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-doc-3.8.1-2.el6.noarch.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/lemon-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-tcl-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-debuginfo-3.8.1-2.el6.x86_64.rpm .

Credit
This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It's really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository
This blog makes use of my own repository, which I loosely maintain. If you'd like me to continue to monitor and apply updates, as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!

Sources
The majority of my efforts came from the following sites:

Authoritative DNS Solution

Configuring a DNS Server on CentOS 6


Introduction
We have been relying on the Domain Name System (DNS) since the dawn of the internet. Simply put, it allows us to access information by a human readable string or recognizable name such as google.com or nuxref.com instead of its actual IP address (which is not as easily memorized). If we didn't have the DNS, then the internet would not have evolved as far as it has today. The DNS is built on a series of name servers that are each looking after their respective domain (or zone). Our Internet Service Provider (ISP) lends us their DNS servers every day when we connect to them. It's our wireless router (at home or at work) that passes this server on to our tablet, phone, laptop, etc… when we connect to it.

Here is a simple DNS query taking place, illustrating how most of us are set up today.
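
You can watch a resolution like this happen from any Linux box (the bind-utils package provides nslookup). Both commands below are safe to try; the addresses returned will vary:

# Ask whatever DNS server your system is configured with
# (usually handed to you by your router/ISP):
nslookup google.com

# Or direct the same question at a specific server; here,
# Google's public DNS:
nslookup google.com 8.8.8.8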

Managing our own Authoritative DNS server allows us to catalog the personal devices we use daily with great ease. If you're publicly hosting content, an Authoritative DNS server can even be used to distribute the traffic your servers receive, both geographically and evenly. It gives us the ability to dynamically associate names with all of the devices on our network. It's great for hobbyists and absolutely mandatory for any medium or larger sized company.

PowerDNS is my preferred DNS server solution. I personally prefer it to its long-term predecessor, Berkeley Internet Name Domain (BIND). BIND has been around since 1984 and has gone through years of hacky patches to get to where it is today. PowerDNS is much younger (its first release was in 1999), but was written without all of the growing pains BIND suffered through from the start. In all fairness, BIND developers were forced to deal with RFCs (Requests for Comments) as DNS continued to evolve to what it is today, whereas PowerDNS already had a stable set of requirements to work with from day one. Not to mention PowerDNS can be easily configured to use alternative backend databases.

You are reading this blog because you want the following:

  • A fast and reliable Authoritative DNS server with a PostgreSQL database backend.
  • You want a central configuration point; you want everything to be easy to maintain after it’s all set up.
  • You want everything to just work the first time and you want to leave the figuring it out part to the end.
  • Package management and version control is incredibly important to you.
  • You want the ability to catalog your local network by assigning devices on it their own unique (easy to remember) hostnames.
  • You want to maintain the ability to surf the internet by forwarding on requests your DNS server doesn’t know to another that does.
The beauty of running your own Authoritative DNS is that it grants you the ability to catalog and easily access everything on your local network by the hostname you assign it.

Here is what my tutorial will be focused on:

  • PowerDNS (v3.x) configured to use a database backend (PostgreSQL), giving you central configuration. This tutorial focuses on version 8.4 of the database because that is what ships with CentOS and Red Hat, but most (if not all) of this tutorial should still work fine if you choose version 9.x instead.
  • PowerDNS Recursor (v3.x) will be configured to handle any records we don't otherwise host or override
  • Security Considered
  • Poweradmin (v2.x) will provide administration of the DNS records we add via its simple web interface.

Please note the application versions identified above, as this tutorial focuses specifically on them. One big issue I found while researching how to set things up on the net was that some tutorials didn't really mention the version they were using. Hence, when I would stumble across these old article(s) with new(er) software, it would make for quite a painful experience when things didn't work.

Please also note that other tutorials will suggest that you set up one feature at a time, then test it to see if it worked correctly before moving on to the next step. This is no doubt the proper way to do things. However, I'm just going to give it all to you at once. If you stick with the versions and packages I provide… if you follow my instructions, it will just work for you the first time. Debugging on your end will be a matter of tracing back to see what step you missed.

I tried to make this tutorial as cookie cutter(ish) as I could. Therefore you can literally just copy and paste what I share right to your shell prompt and the entire setup will be automated for you.

Step 1 of 4: Setup Your Environment
This is the key to my entire blog; it's going to make all of the remaining steps just work the first time for you. All (I repeat, all) of the steps below (after this one) assume that you've set this environment up. You will need to re-set up your environment at least once before running through any of the remaining steps below, or they will not work.

It’s also important to mention that you will need to be root to configure the DNS server. This applies to all of the steps identified below throughout this blog.

I re-hosted all of the packages I used to successfully pull this blog off. This allows me to host this information and pair it with the software it works against. Feel free to hook up to my repositories to speed up your setup.

Content/Repositories are grouped as follows:

  • nuxref - Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared - Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don't maintain. It may not always be up to date, but it can be thought of as a baseline (where things worked for me with respect to this blog). If I get around to testing a new package and everything is still working fine, I'll update the repositories here. I also resign everything with my own key to mark that I've tested it.

Install all of the necessary packages:

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository, in which I've had to rebuild a few
# packages to support PostgreSQL as well as fix some bugs in
# others. This step will really make your life easy and let
# us compare apples to apples with package versions. It also
# allows you to haul in a working setup right out of the box.

# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enable=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# You should use yum priorities (if you're not already) to
# eliminate version problems later, but if you intend on 
# just pointing to my repository and not the others, then you'll
# be fine.
yum install -y yum-plugin-priorities

################################################################
# Install our required products
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           postgresql-server postgresql \
           php-pgsql php-imap php-mcrypt php-mbstring  \
           pdns pdns-backend-postgresql pdns-recursor \
           poweradmin \
           nuxref-templates-pdns

# Also make sure these products are installed as well since we
# use them to manipulate and test some of the data
yum install -y awk sed bind-utils curl

# Choose between NginX or Apache
## NginX Option (a) - This one is my preferred choice:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           nginx php-fpm

## Apache Option (b):
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
            httpd php

# Setup Default Timezone for PHP. For a list of supported
# timezones you can visit here: http://ca1.php.net/timezones
TIMEZONE="America/Montreal"
sed -i -e "s|^[ \t]*;*\(date\.timezone[ \t]*=\).*|\1 $TIMEZONE|g" \
    /etc/php.ini

# Ensure we're not using Strict PHP Handling
sed -i -e 's/^[ \t]*\(error_reporting\)[ \t]*=.*$/\1 = E_ALL \& ~E_STRICT/g' \
    /etc/php.ini 

################################################################
# Setup PostgreSQL (v8.4)
################################################################
# The commands below should all work fine on a PostgreSQL v9.x
# database too; but your mileage may vary as I've not personally
# tested it yet. You can skip this section if you've already
# got a database running using one of my earlier tutorials.

# Only init the database if you haven't already. This command
# could otherwise reset things and you'll lose everything.
# If your database is already setup and running, then you can
# skip this line
service postgresql initdb

# Now that the database is initialized, configure it to trust
# connections from 'this' server (localhost)
sed -i -e 's/^[ \t]*\(local\|host\)\([ \t]\+.*\)/#\1\2/g' \
    /var/lib/pgsql/data/pg_hba.conf
cat << _EOF >> /var/lib/pgsql/data/pg_hba.conf
# Configure all local database access with trust permissions
local   all         all                               trust
host    all         all         127.0.0.1/32          trust
host    all         all         ::1/128               trust
_EOF

# Make sure PostgreSQL is configured to start up each time
# you start up your system
chkconfig --levels 345 postgresql on

# Start the database now too because we're going to need it
# very shortly in this tutorial
service postgresql start
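
# (Optional) Sanity check: this should print a single row
# without prompting for a password, proving the 'trust'
# rules above took effect.
/bin/su -c "/usr/bin/psql -h 127.0.0.1 -c 'SELECT 1;'" postgres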

To simplify your life, I've made the configuration of all the steps below reference a few global variables. The ones identified below are the only ones you'll probably want to change. May I suggest you paste the below information into your favourite text editor (vi, emacs, etc.), adjust the variables to how you want them, and then paste them back to your terminal screen.

# The following is only used for our SSL Key Generation.
# You can skip SSL Key generation if you've done so using an
# earlier tutorial
COUNTRY_CODE="7K"
PROV_STATE="Westerlands"
CITY="Lannisport"
SITE_NAME="NuxRef"

Now for the rest of the global configuration; there really should be no reason to change any of these values (but feel free to). It's important that you paste the above information (tailored to your likings) as well as the information below into the command line interface (CLI) of the server you wish to set up.

# PostgreSQL Database
PGHOST=localhost
PGPORT=5432
PGNAME=system_dns
PGRWUSER=pdns
PGRWPASS=pdns

# Identify the domain name of your server here
# I use the .local extension because I only intend to resolve
# internal addresses with my DNS server.  You may wish to use
# a different value.
DOMAIN=nuxref.local

# Configure a recursor, the recursor will cache your database hits
# and will greatly increase performance.  Ideally you want to set
# the recursor address to the address you want to host your server
# on.  This is the same IP address you will add to everyone's
# /etc/resolv.conf later.  This is in fact your name server.
# If you leave this value at 127.0.0.1 your DNS will be restricted
# to just the server you're hosting on.
# If you aren't sure what your IP Address is, you can just type 'ifconfig'
#
# This command may also fetch your ip address:
# cat /etc/sysconfig/network-scripts/ifcfg-* | \
#    egrep '^IPADDR=' | egrep -v '127.0.0.1' | \
#    cut -f2 -d'=' | head -n1
NAMESERVER_ADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-* | \
    egrep '^IPADDR=' | egrep -v '127.0.0.1' | \
    cut -f2 -d'=' | head -n1)
# Alternatively, if you're not reading this and 'only' if the
# above failed we'll just set the address to your local address.
NAMESERVER_ADDR=${NAMESERVER_ADDR:=127.0.0.1}

# The Network that our DNS server resides on is important information
# for security purposes.  We want to only allow recursion on this
# network alone and not others hitting our server.  The below
# looks kind of cryptic, but it's just a method of extracting
# the network information automatically if you don't already
# know it. It may or may not work; it will depend on if you set
# a proper NAMESERVER_ADDR
SUBNET_ADDR=$(/sbin/ifconfig | egrep -m1 $NAMESERVER_ADDR | \
    sed -e 's/.*Mask:\([0-9.]\+\).*/\1/g')
NAMESERVER_PRFX=$(ipcalc -s -p $NAMESERVER_ADDR $SUBNET_ADDR | \
                   cut -f2 -d'=')
# Assign a default in case the above command failed.
NAMESERVER_PRFX=${NAMESERVER_PRFX:=24}

# Calculate our network
NAMESERVER_NWRK=$(ipcalc -s -n $NAMESERVER_ADDR $SUBNET_ADDR | \
                   cut -f2 -d'=')
# Assign a default in case the above command failed.
NAMESERVER_NWRK=${NAMESERVER_NWRK:=$(echo $NAMESERVER_ADDR | \
                   cut -f1,2,3 -d'.').0}

# Reverse Address Resolution Preparation
# This converts an IP address of 1.2.3.4 to 3.2.1.in-addr.arpa
# We can use this later to create a reverse translation which
# PowerDNS can administrate for us also.  The templates I created
# will set some early examples up for you.
NAMESERVER_ARPA=$(echo "$NAMESERVER_ADDR" | \
    awk -F"." '{print $3 "." $2 "." $1 ".in-addr.arpa"}')

# We now need the 4th octet of our Name Server Address to complete
# our ARPA address for the reverse lookup. For example, if your server
# ip is 2.4.8.16, we want the '16' defined here.  The below is just
# a cheat to go ahead and extract it from the address you specified
NAMESERVER_OCT4=$(echo "$NAMESERVER_ADDR" | \
    cut -f4 -d'.')

# This is where our templates get installed to make your life
# incredibly easy and the setup to be painless. These files are
# installed from the nuxref-templates-pdns RPM package you
# installed above. If you do not have this RPM package then you
# must install it or this blog simply won't work for you.
# > yum install --enablerepo=nuxref nuxref-templates-pdns
NUXREF_TEMPLATES=/usr/share/nuxref

I realize the above environment can seem a bit cryptic. I tried to simplify this DNS setup so that even a novice's life would be easy. The environment variables attempt to detect everyone's settings automatically. In some cases, I may have just made it worse for some (hopefully not). It would be a good idea to just echo the defined variables to your screen and confirm they are as you expect them to be. They really are the key to making all of the next steps in this blog work.

# Simple Check
# Note: grab the brackets too when you copy and paste the below
(
   for VAR in COUNTRY_CODE PROV_STATE CITY SITE_NAME \
              PGHOST PGPORT PGNAME PGRWUSER PGRWPASS \
              DOMAIN NAMESERVER_ADDR NAMESERVER_PRFX \
              NAMESERVER_NWRK NAMESERVER_ARPA \
              NAMESERVER_OCT4 NUXREF_TEMPLATES; do
      [ -z $(eval "echo \$$VAR") ] && echo "You must set the variable: $VAR"
   done
)
# Pretty Printing
# Note: grab the brackets too when you copy and paste the below
(
   echo "PostgreSQL:"
   echo -e "\tPGHOST=$PGHOST\n\tPGPORT=$PGPORT\n\tPGNAME=$PGNAME\n\tPGRWUSER=$PGRWUSER\n\tPGRWPASS=$PGRWPASS\n"
   echo "SSL:"
   echo -e "\tCOUNTRY_CODE='$COUNTRY_CODE'\n\tPROV_STATE='$PROV_STATE'\n\tCITY='$CITY'\n\tSITE_NAME='$SITE_NAME'\n"
   echo "Nameserver:"
   echo -e "\tDOMAIN=$DOMAIN\n\tNAMESERVER_ADDR=$NAMESERVER_ADDR\n\tNAMESERVER_NWRK=$NAMESERVER_NWRK\n\tNAMESERVER_PRFX=$NAMESERVER_PRFX"
   echo -e "\tNAMESERVER_ARPA=$NAMESERVER_ARPA\n\tNAMESERVER_OCT4=$NAMESERVER_OCT4\n"
   echo "NuxRef Templating"
   echo -e "\tNUXREF_TEMPLATES=$NUXREF_TEMPLATES"
   echo
)

Step 2 of 4: Setup PowerDNS
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!
Database Configuration:

################################################################
# Configure PostgreSQL (for PowerDNS)
################################################################
# Optionally eliminate (reset) any existing database.
/bin/su -c "/usr/bin/dropdb -h $PGHOST -p $PGPORT $PGNAME 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGRWUSER 2>&1" postgres &>/dev/null

# Create Read/Write User (our Administrator)
echo "Enter the role password of '$PGRWPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGRWUSER -P 2>&1" postgres

# Create our Database and assign it our Administrator as its owner
/bin/su -c "/usr/bin/createdb -h $PGHOST -p $PGPORT -O $PGRWUSER $PGNAME 2>&1" postgres 2>&1

# the below seems big, but will work fine if you just copy and
# paste it as is right to your terminal: This will prepare the SQL
# statement needed to build your DNS server's database backend
sed -e '/^--?/d' \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
        $NUXREF_TEMPLATES/pgsql.pdns.template.schema.sql > \
          /tmp/pgsql.pdns.schema.sql

# load DB
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.schema.sql

# This will get your database started with some working data to use.
# This part is optional, but since it's so easy to delete stuff later
# and there really isn't a whole lot taking place here, you should run
# this step. It becomes especially useful in debugging later.
sed -e "/^--?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" \
    -e "s/%NAMESERVER_ARPA%/$NAMESERVER_ARPA/g" \
    -e "s/%NAMESERVER_OCT4%/$NAMESERVER_OCT4/g" \
        $NUXREF_TEMPLATES/pgsql.pdns.template.data.sql > \
            /tmp/pgsql.pdns.data.sql

# load DB with our data
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.data.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.data.sql
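
# (Optional) Sanity check: the stock PowerDNS schema stores
# zones in a 'domains' table, so this should list the zone(s)
# the sample data just created:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c 'SELECT id, name, type FROM domains;' $PGNAME" postgres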

Server Configuration:

################################################################
# Configure PowerDNS
################################################################
# Create backup of configuration files
[ ! -f /etc/pdns/pdns.conf.orig ] && \
   cp /etc/pdns/pdns.conf /etc/pdns/pdns.conf.orig

# Install our configuration using the template
sed -e "/^#?/d" \
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" \
    -e "s/%NAMESERVER_NWRK%/$NAMESERVER_NWRK/g" \
    -e "s/%NAMESERVER_PRFX%/$NAMESERVER_PRFX/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
    -e "s/%PGRWPASS%/$PGRWPASS/g" \
    -e "s/%PGHOST%/$PGHOST/g" \
    -e "s/%PGPORT%/$PGPORT/g" \
    -e "s/%PGNAME%/$PGNAME/g" \
        $NUXREF_TEMPLATES/pgsql.pdns.template.pdns.conf > \
          /etc/pdns/pdns.conf

# Protect our configuration since it has user/pass info
# inside of it.
chmod 640 /etc/pdns/pdns.conf
chown root.pdns /etc/pdns/pdns.conf
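
If you're curious what the template just rendered, the heart of /etc/pdns/pdns.conf is the PostgreSQL backend definition. With the default environment from Step 1, the relevant entries should look roughly like this (values substituted from your variables):

launch=gpgsql
gpgsql-host=localhost
gpgsql-port=5432
gpgsql-dbname=system_dns
gpgsql-user=pdns
gpgsql-password=pdns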

Step 3 of 4: Setup PowerDNS Recursor

################################################################
# Configure PowerDNS Recursor
################################################################
# Create backup of configuration files
[ ! -f /etc/pdns-recursor/recursor.conf.orig ] && \
   cp /etc/pdns-recursor/recursor.conf \
        /etc/pdns-recursor/recursor.conf.orig

# Install our configuration using the template
sed -e "/^#?/d" \
        $NUXREF_TEMPLATES/pgsql.pdns-recursor.template.recursor.conf > \
          /etc/pdns-recursor/recursor.conf

# Generate an up to date root.hints file; this allows recursion
# back out to the internet.
curl -u ftp:ftp 'ftp://ftp.rs.internic.net/domain/db.cache' \
    -o /etc/pdns-recursor/root.hints

# If the above command did not work, you can use the one I shipped
# with the nuxref-template.pdns packaging:
#   cp $NUXREF_TEMPLATES/root.hints /etc/pdns-recursor/root.hints

# Alternatively, PowerDNS is hardcoded with a default set of root-hints.
# But I personally just like seeing it as an external configuration instead.
# If all of this is cumbersome to you and you simply don't want
# to use the official root.hints and would rather rely on the hard-coded
# defaults instead, you can do the following:
#
# sed -i -e '/^\([ \t]*hint-file=.*\)/d' /etc/pdns-recursor/recursor.conf

# Start up all of our services
chkconfig pdns-recursor --level 345 on
chkconfig pdns --level 345 on
service pdns-recursor restart
service pdns restart

It’s important to take a time-out at this step just to make sure everything is working.
A few simple commands should run perfectly for you; otherwise we have an issue:

# The following command should output a bunch of Google's DNS servers
nslookup google.com $NAMESERVER_ADDR
# The following command should output the same list
nslookup -port=5300 google.com 127.0.0.1

# If you receive an error such as 
#      ** server can't find google.com: NXDOMAIN
# Then you need to revisit the above steps again

# Alternatively, if you receive an error such as:
#  ;; connection timed out; trying next origin
#  ;; connection timed out; trying next origin
#  ;; connection timed out; no servers could be reached
# Then you have most likely been restricted from accessing
# port 53 to the outside world. You're not really
# in a problem state at this point. Make sure the rest
# of the tests (Below) work and then make sure to follow
# the section of this blog entitled:
#     'Zone Forwarding Alternative'
# 
# 
# You should be able to resolve the domain
# poweradmin.$DOMAIN to this very server you're hosting
# on:
nslookup poweradmin.$DOMAIN $NAMESERVER_ADDR

# You can even test reverse lookups using our data
# we loaded with the following command:
nslookup $NAMESERVER_ADDR $NAMESERVER_ADDR

# The above should resolve itself to hostmaster.your.domain
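
If you have the dig tool handy (it ships with bind-utils, alongside nslookup), it offers a terser way to run the same checks:

# Forward and reverse lookups via dig; both should return
# the same results as the nslookup commands above
dig @$NAMESERVER_ADDR google.com +short
dig @$NAMESERVER_ADDR -x $NAMESERVER_ADDR +short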

Step 4 of 4: Setup PowerAdmin
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure PostgreSQL (for PowerAdmin)
################################################################

# Now we need to update our database with a schema for
# poweradmin to work with
sed -e "/^--?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.schema.sql > \
            /tmp/pgsql.poweradmin.schema.sql

# Now we can load the file:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.poweradmin.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.poweradmin.schema.sql

# If you loaded the sample dataset for PowerDNS earlier, then you'll
# want to additionally load this file too to help PowerAdmin access it
sed -e "/^--?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" \
    -e "s/%NAMESERVER_ARPA%/$NAMESERVER_ARPA/g" \
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.data.sql > \
            /tmp/pgsql.poweradmin.data.sql

# load DB with our data
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.poweradmin.data.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.poweradmin.data.sql

################################################################
# Configure PowerAdmin (for PowerDNS Administration)
################################################################
# Create backup of configuration files
[ ! -f /etc/poweradmin/config.inc.php.orig ] && \
   cp /etc/poweradmin/config.inc.php \
        /etc/poweradmin/config.inc.php.orig

# Apply our configuration
sed -e "/^\/\/?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%PGHOST%/$PGHOST/g" \
    -e "s/%PGNAME%/$PGNAME/g" \
    -e "s/%PGPORT%/$PGPORT/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
    -e "s/%PGRWPASS%/$PGRWPASS/g" \
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.config.inc.php > \
            /etc/poweradmin/config.inc.php

# Protect file since it contains passwords
chmod 640 /etc/poweradmin/config.inc.php
chown root.apache /etc/poweradmin/config.inc.php

# NginX Configuration
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/nginx.poweradmin.template.conf > \
            /etc/nginx/conf.d/poweradmin.conf

################################################################
# Generate SSL Keys For Webpage Security
################################################################
# Generate SSL Keys (if you don't have any already) that we
# will use to secure our PowerAdmin webpage.
openssl req -nodes -new -x509 -days 730 -sha1 -newkey rsa:2048 \
   -keyout /etc/pki/tls/private/$DOMAIN.key \
   -out /etc/pki/tls/certs/$DOMAIN.crt \
   -subj "/C=$COUNTRY_CODE/ST=$PROV_STATE/L=$CITY/O=$SITE_NAME/OU=IT/CN=$DOMAIN"

# Permissions; protect our Private Key
chmod 400 /etc/pki/tls/private/$DOMAIN.key

# Permissions; protect our Public Key
chmod 444 /etc/pki/tls/certs/$DOMAIN.crt
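
You can also quickly inspect the certificate you just generated (an optional check of my own) to confirm the subject and expiry dates look right:

# Print the certificate's subject and validity period
openssl x509 -in /etc/pki/tls/certs/$DOMAIN.crt -noout -subject -dates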

At this point you should be able to start NginX. If it’s already running,
send it a reload or just run the below commands.

# If you chose the NginX approach you'll want to make sure it's
# set up to run correctly and restart itself if the system is
# ever restarted:

# Ensure NginX runs even after a reboot
chkconfig nginx --level 345 on
chkconfig php-fpm --level 345 on

# Restart the service if it isn't running already
service php-fpm restart
service nginx restart

Now, we’re almost done. We need to make sure our server is referencing our new DNS server. You may need to update your network settings, but the following cheats for the time being and sets you up:

[ ! -f /etc/resolv.conf.orig ] && \
   cp /etc/resolv.conf \
       /etc/resolv.conf.orig

# Tell our server to use our new DNS server
cat << _EOF > /etc/resolv.conf
search $DOMAIN
nameserver $NAMESERVER_ADDR
_EOF

# Restore your old configuration like so
# if you need to:
#  /bin/mv -f /etc/resolv.conf.orig /etc/resolv.conf

You will additionally want to add the following to your iptables configuration (/etc/sysconfig/iptables):

#---------------------------------------------------------------
# DNS Traffic
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT

#---------------------------------------------------------------
# Web Traffic for PowerAdmin
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
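
After saving your changes, reload the firewall so the new rules take effect:

# Apply the new firewall rules
service iptables restart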

You should now be able to visit https://poweradmin.<your.domain>/ and see a login screen. You may have to accept the ‘untrusted key’ prompt. Don’t worry; it’s safe to do so! In fact, if you’re worried, just have a look at the key itself before accepting it. You’ll see that it’s the one we generated earlier. The login is admin and the password is admin at the start. You will want to change this right away after you log in as a precaution.

Your setup can now be illustrated by the model below. It’s virtually the same setup as you had before; however, now instead of querying your ISP’s DNS Server, you query your very own local one. Now you can easily maintain your own local network and begin labeling the devices you use on it using PowerAdmin.

This illustration shows the PowerDNS Recursor (pdns-recursor).

Your Authoritative (Power)DNS Server caches the location for you for a period of time, making subsequent requests to the same spot VERY fast.

All Subsequent Requests are Cached for a Period of Time.

Zone Forwarding Alternative
Up until now, our ISP was using its own root.hints file (or some alternative method) to look up our requests. But now, it is our server that goes out directly into the big bad internet using this technique. Since DNS requests are not encrypted, it’s now possible for others to spy on the hostnames we’re resolving (and the places we’re visiting). Not only that, these same people can easily trace the source back to us (all DNS requests originate from our IP now). This can allow someone to know, for example, specifically which online banking site you use. Prior to hosting your own DNS server, all websites and servers you accessed were channeled privately between you and your ISP, so this was never a problem. It was our ISP who made the (recursive) requests for us instead of us doing them ourselves. What we looked up didn’t explicitly trace back to us; it traced back to our ISP. Previously, we actually had more privacy (depending on the contract we signed with our ISP).

Your ISP has thousands of clients making requests to its DNS servers constantly. As a result, it has probably already cached 90% of all the websites we intend to visit. Cached content means a very speedy response from its server. Meanwhile, our local DNS server’s cache will (probably) be empty most of the time (depending on how many people use it). Hence your ISP’s DNS Server will be MUCH faster than yours.

When you signed up with your ISP, they would have given you (at least) 1 DNS server to use (most provide 2 – a primary and a backup). We can actually tell our DNS Server to use these instead of our root.hints file when it finds a domain that needs to be further looked up. This way, you regain your secure pipe between you and your ISP. The trade-off is you’re adding one more hop to your recursive lookups. But in most scenarios, they will have already cached what you’re looking for, so the response will be immediate. The below diagram illustrates the worst case scenario:

A forwarding zone of ‘*’ (asterisk) tells the PowerDNS Recursor to forward all requests to a specific server. In our example we use our ISP’s DNS Servers.

Here is how you can alter your configuration:

# Put your DNS Servers below,  the ones in place right
# now are the public ones offered by Google.
DNS_SERVERS="8.8.8.8 8.8.4.4"

# Remove any information that may conflict
sed -i -e '/^\([ \t]*hint-file=.*\)/d' /etc/pdns-recursor/recursor.conf
sed -i -e '/^\([ \t]*forward-zones=.*\)/d' /etc/pdns-recursor/recursor.conf

# Disable hint-file
echo 'hint-file=' >> /etc/pdns-recursor/recursor.conf

# Prepare Forwarding Zones for everything unmatched:
echo -n 'forward-zones=*=' >> /etc/pdns-recursor/recursor.conf
echo $(echo "$DNS_SERVERS" | \
    sed -e 's/^[ \t]*//g' -e 's/[ \t]*$//g' -e 's/[ \t]\+/, /g') >> \
     /etc/pdns-recursor/recursor.conf

# Now restart our recursor
service pdns-recursor restart
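
A quick lookup through the recursor (still listening locally on port 5300 in our setup) should confirm the forwarding is working:

# This should now resolve via the DNS servers you forwarded to
nslookup -port=5300 google.com 127.0.0.1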

Got Old BIND Configuration You Need Imported?
This step is completely optional! If you’re not familiar with what BIND even is, or you know you’ve never used it, you can freely skip this section.
If you’re migrating from BIND to PowerDNS then you may already have a setup in place. The PowerDNS team makes for an easy transition by providing a tool (zone2sql) that will scan your old BIND configuration and generate the SQL needed for an easy migration to PowerDNS.

################################################################
# Generate SQL content from all of your zone files
################################################################
# I just had 1 simple DNS zone, but you may have many.
# The below did all the work for me (bind was configured to
# run in a chroot environment):
# zone2sql --gpgsql --zone=/var/chroot/var/named/data/zone.nuxref.local > \
#    /tmp/pgsql.pdns.zones.sql
# zone2sql --gpgsql --zone=/var/chroot/var/named/data/192.168.0 >> \
#     /tmp/pgsql.pdns.zones.sql

# You could even cheat and run all your files with a command like this.
# Please note that this is optional (and not part of the blog; it's just
# a simple conversion tool for those who already have a BIND configuration).
ZONE_DIR=/var/chroot/var/named/data/
[ -f /tmp/pgsql.pdns.zones.sql ] && /bin/rm -f /tmp/pgsql.pdns.zones.sql
for ZONE in $(find $ZONE_DIR -type f); do
   # Fetch ORIGIN/ZONE ID
   ZONE_ID=$(cat $ZONE | egrep '^[ \t]*\$ORIGIN' | \
              sed -e 's/^.*\$ORIGIN[ \t]\+\([^ \t]\+\).*/\1/g' \
                  -e 's/[. \t]*$//g')
   [ -z "$ZONE_ID" ] && echo "Error Parsing: $ZONE" && continue
   zone2sql --gpgsql --zone=$ZONE --zone-name=$ZONE_ID >> /tmp/pgsql.pdns.zones.sql
done

# Now before you load this file into your database, you may
# want to review it.  It doesn't hurt to scan it over and remove
# any entries you don't think would be useful.

# Under normal circumstances you would be done at this point, however because
# we are additionally using poweradmin, we need to create a few zone entries
# based on the SQL file we just generated.
# (We write to a second temporary file first; appending to the
#  same file awk is reading from could loop indefinitely.)
$NUXREF_TEMPLATES/import2zone.awk /tmp/pgsql.pdns.zones.sql > \
    /tmp/pgsql.pdns.zones.extra.sql
cat /tmp/pgsql.pdns.zones.extra.sql >> /tmp/pgsql.pdns.zones.sql
/bin/rm -f /tmp/pgsql.pdns.zones.extra.sql

# Then I just loaded the file straight into the database:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.zones.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.zones.sql
# You're done!
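
If you'd like to confirm what actually made it in (an optional check of my own), a couple of quick queries will count the imported zones and their entries; the PowerDNS schema stores these in the domains and records tables:

# Count the zones and records that were imported
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c 'SELECT COUNT(*) FROM domains;' $PGNAME 2>&1" postgres
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c 'SELECT COUNT(*) FROM records;' $PGNAME 2>&1" postgres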

So… That’s it? Now I’m done?
Yes and No… My blog pretty much hands over a working DNS server with little to no extra configuration needed on your part.

No system is bulletproof; disaster can always strike when you’re least expecting it. To cover yourself, always consider backups of the following:

  • Your PostgreSQL Database: This is where all of your DNS configuration is stored. You definitely do not want to lose this. May I suggest you reference my other blog entry here where I wrote a really simple backup/restore tool for a PostgreSQL database.
  • /etc/poweradmin/*: Your PowerAdmin flat file configuration allowing you to centrally manage everything via a webpage.
  • /etc/pdns/*: Your PowerDNS flat file configuration which defines the core of your DNS Server. Its configuration allows you to centrally manage everything else through the PowerAdmin website.
  • /etc/pdns-recursor/*: Your PowerDNS Recursor flat file configuration which grants you the recursive functionality of your DNS Server.

What about Apache?
Apache is a perfectly fine alternative solution as well! I simply chose NginX because it is a much more lightweight approach. In fact, PowerAdmin already comes with an Apache configuration out of the box located in /etc/httpd/conf.d/. Thus, if you simply start up your Apache instance (service httpd start), you will be hosting its services right away. Please keep in mind that the default (Apache) configuration does not come with all the SSL and added security I provided with the NginX templates. Perhaps later on, I will update the template rpm to include a secure Apache setup as well.

Credit
This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository
This blog makes use of my own repository that I loosely maintain. If you’d like me to continue to monitor and apply updates as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!

Sources

Password Management

KeePassX – A Password Manager and Why You Need One


Introduction
I haven’t posted in a while and I figured I was due. I usually blog specifically for Linux users, but this really applies to anyone on any operating system (Apple, Microsoft, Android, etc).

I want to encourage the use of a password manager to anyone not already using one. There are a lot of them out there, but I’m specifically interested in focusing the attention on KeePassX. There are a number of reasons why and I’ll eventually get to most of them. But for starters KeePassX is completely free (GPLv2) and as I mentioned above, it works on just about every platform!

Its sole purpose in life is to remember passwords for you so you don’t have to. It’s there to prevent you from using the same 1-3 passwords you’re using today for everything. Instead of using your birthday, engagement date, and/or pet’s name, you’ll use passwords like: JP83bW93wwrJsZ2x6tQy3aD6W or SLJUhoQWPBRtwTwmzsCxWXqOV. Again, I emphasize that the password manager does all the memorizing for you. Not only that, it’ll easily generate these passwords for you too.

KeePassX Interface

Now before you roll your eyes and decide you don’t need such a thing, just hear me out. Give me a chance to sell you the reasons why you seriously need to consider this. I’ll attempt to do this with respect to The 5 Stages of Loss and Grief. :)

Stage 1: Denial
What’s the point of a password manager? I never needed one before!
A password manager is kind of like what social media was to us back in February of 2004. The idea is the same: “You didn’t know you needed it until it was shown to you”. So trust me when I tell you that you need one now. The thing is, the internet is filled with people whose sole intent is to gain access to your identity and exploit it for their own personal gain. The passwords we’ve all used up to this date are too simple; I know this because we’ve all remembered/memorized them. Even the complicated ones we confidently memorized are no good.

Food for Thought

Most of us just acknowledge there is risk, but we do nothing about it. We believe that if no one has figured out our password yet, they never will. Well, the scary thing is: some of them might know our password(s) already. But these people don’t want to make that evident to us. Don’t expect someone to advertise themselves once they’ve gained access to something we had protected. Instead, you’re only going to find out once the damage has been done.

Stage 2: Anger
Why would I put all my passwords in one file? Something just sounds wrong with that statement.
You would do it for the same reason you trust a bank’s security with your money. The same reason you trust your wallet or purse with your personal identity. The same reason you go to work and leave all your belongings and pets secured in your home. We put all of our eggs into one basket every day, but we’re all smart about it. We all trust that our belongings will remain intact until we return to them. KeePassX works exactly the same way.

Both a Password and a Key File can be used to keep your password database safe.

The passwords that reside in your KeePassX database are encrypted, so no one will gain access to them unless you allow it to happen. The encryption means the password database is completely useless to anyone who has it without the keys required to access it. You can even go as far as to lock it using a keyfile (which KeePassX will generate for you) as well as a password.

There is no question that if a hacker gains access to your password database and has all the keys he/she needs to unlock it, then you’ve defeated its purpose. Your passwords are only as secure as the conscious effort you take to keep them that way. If you keep the keys to your house under the welcome mat, or your car keys in the vehicle they pair with, it’s just a matter of time before you suffer a theft in that regard too.

Stage 3: Bargaining
Can’t I just change the passwords after I read about the compromising?
The answer is No and here is why:
People are attempting to exploit us every day through phishing, viruses, spyware, false advertising, and even through everyday tasks we didn’t even know we were a victim of. Take Sony and eBay for example. Both are very reputable companies with an entire team of security experts working with them every day, yet they still fell prey when their systems were compromised by a hacker. As a result, their clients and customers became victims too. If you used the same password on these sites as you do everywhere else then you’re already at risk for identity theft. The point is, sh*t happens and sometimes it’s just out of our control.

We can’t expect every organization to be like eBay and Sony either. Not all of them will alert their customers of the threat they faced. Even the ones that do had often been exploited long before going public; our personal information was already at risk last week even though they only told us today. Some companies don’t want to admit to any privacy invasions to protect their own reputation. Meanwhile, others might not even know they’ve already been compromised. So if they don’t tell us, how will we ever know?

Therefore, consider taking the extra time to change every password on every site you visit once you get KeePassX installed.

Stage 4: Depression
I don’t want to change my password everywhere.
I don’t want the burden of another application just to manage my passwords.
I was stuck at this stage for a while. It takes a while to change your passwords everywhere. But after it’s done, remember that this program does absolutely everything in its power to make your life easy from that point on. You only need to add everything once, so the burden doesn’t linger.

How can such a complicated password make my life easier?

Ctrl+B and Ctrl+C are all you ever need to use once KeePassX is setup.

Your life ‘will’ be simpler with KeePassX because your everyday tools are now accessible by just a few keystrokes:

  1. Press ctrl-b in Password Manager: Copy the username into the clipboard
  2. Press ctrl-v in Web Browser or App: Paste the username (into the username field)
  3. Press ctrl-c in Password Manager: Copy the password into the clipboard
  4. Press ctrl-v in Web Browser or App: Paste the password (into the password field)

That’s it; don’t worry about the password (you just copied) being left in the clipboard. KeePassX looks after clearing the clipboard after a configurable number of idle seconds (the default is 20). The point of me explaining this is: instead of memorizing the secret passwords you use everywhere, you only need to remember the 4 key combinations identified above. The same 4 keystrokes are used for every site/app you ever use/visit from this point forward. Not to mention that each password is unique from the others and unguessable by any hacker. I can’t stress enough that the effort level decreases with a password manager. At the same time, your online privacy is more secure than ever.

Stage 5: Acceptance
What about my Phone? What about my other computer? Can I access my password file on other systems?
Yes! This is one of the most amazing features of KeePassX! People even place their password file on locations such as DropBox or their Google Drive so they can access it between systems. It’s compatible with Microsoft Windows, Mac OS (including Apple iOS), Android, and Linux (of course!).

So Where Do I Get It?
Microsoft Windows and Mac OS users can just download it directly from the download area at the official KeePassX website here.

If you’re using an Android device then you can find it here on the Google Play Store.

If you’re using an Apple device then you can find it here on the iTunes Store.

Linux users can download it from pkgs.org. It’s also available from the EPEL Repositories.

For the CentOS and/or Red Hat users: if for whatever reason the links above become unavailable or you want a fast approach: KeePassX can be retrieved from the repository I’m hosting.

Content/Repositories are grouped as follows:

  • nuxref - Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared - Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don’t maintain. It may not always be up to date, but it can be thought of as a baseline I use to support other blog entries I wrote. This repository isn’t required for KeePassX, but it is worth knowing about (to keep things common across other blog entries).

Install KeePassX in CentOS and/or Red Hat 6:

# You'll need to be root to run any of the below commands.

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository.
# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enable=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF
################################################################
# Install our required products
################################################################
yum install -y --enablerepo=nuxref keepassx
# You're Done!
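
As a quick follow-up (optional), you can confirm the package actually landed:

# Confirm KeePassX installed correctly
rpm -q keepassx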

KeePassX Tips

KeePassX Password Generator will keep your passwords diverse and complicated.

  • For obvious reasons, password protect your password database. This is the one password you ‘will’ need to remember. So I encourage you to use a new, original one; just don’t forget it!
  • Consider using a keyfile as well as a password for securing your password database. The nice thing about the keyfile is you can move it with you; keep it on a USB drive that stays with you or in another location. This way, if someone ever did get ahold of your password database, they can’t do anything without the key file, even if they know your password.
  • Configure KeePassX to lock itself after it’s been idle or minimized. By default it won’t, but it’s a simple option that will allow you to walk away from your desk and know that no one is snooping where they shouldn’t be.
  • Log onto your remote sites and consider changing your passwords every now and then. Even if it’s only once every year, that’s still better than never changing them at all! You can optionally configure KeePassX to remind you to change your passwords after they get to a certain age. Use the password generator; it’ll greatly simplify your life and keep your passwords complex.

Speaking of Passwords…
Consider that your neighbors (or even someone you don’t know) could be using your wireless network while you’re at work. They could be using it even when you’re around. Have you connected to your wireless router lately to see if you can account for everyone on it? Now would be a good time to change this password too!

Pay attention to the sites you use and make sure they use some form of login encryption. For example, logging into a website that isn’t secure means anyone can easily extract your username/password combination without your knowledge.

Privacy Compromised

Just do a Google search for +breach +password +hacked to get an idea of how many companies are getting hacked constantly and how easy it is for someone to figure out your password and re-use it elsewhere.

Final Notes
Sure, Password Managers aren’t for everyone; don’t worry, I get that. But if you truly want to prepare yourself and prevent having to deal with unnecessary online fraud or identity theft… If you truly want to avoid venturing through The 5 Stages of Loss and Grief which follow these awful crimes, then you should consider protecting yourself now. Take the plunge and get a password manager and diversify your passwords.

It’s better to be safe than sorry.

Credit
If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

Sources
Here are some other links you may find useful:

Mail Server Solution

Configuring a Secure Mail Server on CentOS 6


Introduction
As you know, Mail Server tutorials are all over the internet. So if you stumbled upon mine, then you were probably like me: trying to assemble one based on how others do it to save reading the manual. A mail server is NOT a walk in the park like some other servers are (such as Web, Database, etc). They can be very complicated to master; and frankly, when you’re short on time, it’s easier to just get one going right away. Don’t dismiss the need to learn how it works once you’ve set it up, otherwise you may not know how to recover from an unexpected situation. But instead, learn about how it works (and how to further tweak it) after the deadlines have been met. Besides, your boss will be impressed when you show him your successful results sooner rather than later.

You’re reading this blog because your needs were/are similar to what mine were:

  • You’re looking for an automated solution to installing a mail server with little effort because you’re on a tight deadline.
  • You want a central configuration point; you want everything to be easy to maintain after it’s all set up.
  • You want everything to just work the first time and you want to leave the figuring it out part to the end.
  • Package management and version control is incredibly important to you.

Here is what my tutorial will be focused on:

  • A Database Backend (PostgreSQL) giving you central configuration. This tutorial focuses on version 8.4 because that is what ships with CentOS and Red Hat. But most (if not all) of this tutorial should still work fine if you choose to use version 9.x of the database instead.
  • Spam Control (SpamAssassin v3.3.x, Amavisd-new v2.8, and Clam AntiVirus v0.98) gives you some spam and antivirus control. I’ve been looking into DSPAM but haven’t implemented it yet. I figure I’ll write a part 2 to this blog once I get around to working with it and mastering its setup.
  • Dovecot Pigeonhole v2.0.x provides you a way of routing mail into the user's inbox based on its content. You can key off of certain content within a received message and mark it as spam, or flag it as important because it was sent by a certain individual, etc. It basically gives you the ability to customize some of the post-processing of a received message that passed all the other checks (spam, antivirus, etc).
  • Security Considered
  • Mail Delivery Agent (MDA) (DoveCot v2.0.x):
    • Secure POP3 (POP3S) Access
    • Secure IMAP (IMAPS) Access
    • WebMail (RoundCube) Access
MDA Configuration

  • Mail Transport Agent (MTA) (PostFix v2.6.x): Secure SMTP (SMTPS)
MTA Configuration

  • Web Based Administration (PostFixAdmin v2.3.6). Life is made easier when you don’t have to do very much once everything has been set up. Run your server like an ISP would:
    • Virtual Domain Administration: Add/Remove as many as you want
    • Unlimited Virtual Aliases (or restrict if you want) per domain
    • Unlimited Mailboxes (or restrict if you want) per domain
    • Administrative Delegation: Grant enough privileges to another administrator who only has rights to his/her domain you granted them access to.
    • Away on Vacation Support (automatic replies)
Central Configuration

Please note the application versions identified above, as this tutorial focuses specifically on them. One big issue I found while researching how to set up a mail server was that the other tutorials I found didn’t really mention the software versions they were using. Hence, when I stumbled upon these old article(s) with new(er) software, it made for quite a painful experience when things didn’t work.

Please also note that other tutorials will imply that you setup one feature at a time. Then test it to see if it worked correctly before moving on to the next step. This is no doubt the proper way to do things. However, I’m just going to give it all to you at once. If you stick with the versions and packages I provide… If you follow my instructions, it will just work for you the first time. Debugging on your end will be a matter of tracing back to see what step you missed.

I tried to make this tutorial as cookie cutter(ish) as I could. Therefore you can literally just copy and paste what I share right to your shell prompt and the entire setup will be automated for you.

Hurdles
Just to let you know, I had many hurdles in order to pull this feat off. They were as follows:

  • postfix as shipped with CentOS and in the EPEL repository is not compiled with PostgreSQL support. I had to recompile this package as well just to enable this extra database support.
  • postfixadmin in the EPEL repository has quirks I wasn’t happy with. I needed to fix a php error that kept coming up. I needed to adjust the database schema and permissions as well as fix the Vacation Notification feature. I also did not want the mandatory MySQL dependency; so I removed that too.
  • perl Yes… that’s right, I had to patch perl :(. I had to recompile it due to a bug that I don’t believe was correctly addressed. In a nutshell, prior to my custom rebuild, perl-devel would haul in absolutely every development package including kernel headers and the GCC compiler. In the past the perl-devel package could be installed by itself, providing the tools spamassassin and amavisd-new depend on. You’re welcome to use the official version of perl over the one I recompiled; but be prepared to have a ton of compilation tools and source code in your production environment. This is not something I wanted at all. Interestingly enough, after the developers at RedHat said they wouldn’t tackle the issue of rolling their changes back, they seem to be entertaining this new guy’s request who’s asking for a similar alternative. So who knows, maybe newer versions of perl will accommodate mail servers again! :)
  • This blog itself was a massive hurdle. There are just so many configuration files and important points to account for that I found it easier to package my own rpm (nuxref-templates-mxserver) containing a series of configuration templates. The templates themselves took a while to construct in such a way that they could all be used together.

Step 1 of 7: Setup Your Environment
This is the key to my entire blog; it’s going to make all of the remaining steps just work the first time for you. All; I repeat: All of the steps below (after this one) assume that you’ve set this environment up. You will need to re-set up your environment at least once before running through any of the remaining steps below or they will not work.

It’s also important to mention that you will need to be root to configure this entire mail server. This applies to all of the steps identified below throughout this blog.

I started hosting some of these packages on one of my own servers to make it easier to do all the tasks. I can’t guarantee other packages will work as well; this is definitely the case for the packages I had to rebuild (identified above). If you don’t trust my packages, you can access their source files and rebuild them for yourself if you wish.

Content/Repositories are grouped as follows:

  • nuxref - Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared - Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don’t maintain. It may not always be up to date, but it can be thought of as a baseline (where things worked for me and with respect to this blog). If I get around to testing a new package and everything is still working fine, I’ll update the repositories here. I also resign everything with my own key to mark that I’ve tested it.

Install all of the necessary packages:

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository; I've had to rebuild a few packages
# to support PostgreSQL as well as to fix bugs in others.
# This step will really make your life easy and lets us compare
# apples to apples with package versions. It also
# allows you to haul in a working setup right out of the box.

# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enable=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# You should use yum priorities (if you're not already) to
# eliminate version problems later, but if you intend on 
# just pointing to my repository and not the others, then you'll
# be fine.
yum install -y yum-plugin-priorities

################################################################
# Install our required products
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           postgresql-server postgresql \
           php-pgsql php-imap php-mcrypt php-mbstring  \
           dovecot dovecot-pgsql dovecot-pigeonhole \
           clamav amavisd-new spamassassin \
           postfix postfix-perl-scripts \
           roundcubemail postfixadmin \
           nuxref-templates-mxserver

# Choose between NginX or Apache
## NginX Option (a) - This one is my preferred choice:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           nginx php-fpm

## Apache Option (b):
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
            httpd php

# Make sure your build of Postfix supports PostgreSQL
# execute the following to see: 
postconf -c /etc/postfix -m | \
   egrep -q '^pgsql$' && echo "Supported!" || \
      echo "Not Supported!"

# If it's not supported then you ran into the first of many
# problems I had. It also means you didn't haul in my version
# from the previous command.  This can happen if you already had
# postfix installed on your machine and it's of a newer version
# then what I'm hosting. Your options at this point is to either
# uninstall your copy of postfix and install mine or recompile 
# your version of postfix (but this time with PostgreSQL support).

# Setup Default Timezone for PHP. For a list of supported
# timezones you can visit here: http://ca1.php.net/timezones
TIMEZONE="America/Montreal"
sed -i -e "s|^[ \t]*;*\(date\.timezone[ \t]*=\).*|\1 $TIMEZONE|g" /etc/php.ini

################################################################
# Setup PostgreSQL (v8.4)
################################################################
# The commands below should all work fine on a PostgreSQL v9.x
# database too; but your mileage may vary as I've not personally
# tested it yet.

# Only init the database if you haven't already. This command
# could otherwise reset things and you'll lose everything.
# If your database is already set up and running, then you can
# skip this line
service postgresql initdb

# Now that the database is initialized, configure it to trust
# connections from 'this' server (localhost)
sed -i -e 's/^[ \t]*\(local\|host\)\([ \t]\+.*\)/#\1\2/g' \
    /var/lib/pgsql/data/pg_hba.conf
cat << _EOF >> /var/lib/pgsql/data/pg_hba.conf
# Configure all local database access with trust permissions
local   all         all                               trust
host    all         all         127.0.0.1/32          trust
host    all         all         ::1/128               trust
_EOF

# Make sure PostgreSQL is configured to start up each time
# you start up your system
chkconfig --levels 345 postgresql on

# Start the database now too because we're going to need it
# very shortly in this tutorial
service postgresql start
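
# (An optional check of my own) With the database now running,
# the following should connect without prompting for a password;
# this confirms the trust rules we just put in place:
psql -h localhost -U postgres -c 'SELECT version();'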

To simplify your life, I’ve made the configuration of all the steps below reference a few global variables. The ones identified below are the only ones you’ll probably want to change. May I suggest you paste the below information into your favourite text editor (vi, emacs, etc) and adjust the variables to how you want them, making it easier to paste them back into your terminal screen.

# Identify the domain name of your server here
DOMAIN=nuxref.com
# Setup what you want to be your Administrative email account
# Note: This does 'NOT' have to be of the same domain even though
#       that's how I set it up to be. Don't worry if the email
#       address doesn't exist, because when you're all done
#       following this blog, you'll be able to create it!
ADMIN=hostmaster@$DOMAIN

# The following is only used for our SSL Key Generation
COUNTRY_CODE="7K"
PROV_STATE="Westerlands"
CITY="Lannisport"
SITE_NAME="NuxRef"

Now for the rest of the global configuration; there really should be no reason to change any of these values (but feel free to). It’s important that you paste the above information (tailored to your likings) as well as the information below into the command line interface (CLI) of the server you wish to set up.

# PostgreSQL Database
PGHOST=localhost
PGPORT=5432
PGNAME=system_mail
# This is the Read Only access user (or very limited access)
PGROUSER=mailreader
PGROPASS=mailreader
# This is for administration
PGRWUSER=mailwriter
PGRWPASS=mailwriter

# VHost Mail Directory
VHOST_HOME=/var/mail/vhosts
VHOST_UID=5000
# No real reason to make them differ
# but define this variable anyway for
# the below configuration to work
VHOST_GID=$VHOST_UID

# RoundCube Configuration
MXHOST=$PGHOST
PGRQNAME=system_roundcube
PGRQUSER=roundcube_user
PGRQPASS=roundcube_pass

# This is where our templates get installed to make your life
# incredibly easy and the setup to be painless. These files are
# installed from the nuxref-templates-mxserver RPM package you
# installed above. If you do not have this RPM package then you
# must install it or this blog simply won't work for you.
# > yum install --enablerepo=nuxref nuxref-templates-mxserver
NUXREF_TEMPLATES=/usr/share/nuxref

Once all of this has been set you can proceed to do any of the below steps! Keep in mind that if you decide to change any of the variables above, you may need to redo every single step identified below.
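
Since you’ll be coming back to these variables across several shell sessions, one optional trick (my own suggestion; /root/mxserver.env is just an example path) is to stash them all in a file you can re-source later:

# Save the tailored variables to a file for future sessions;
# afterwards a simple 'source /root/mxserver.env' restores them
set | egrep '^(DOMAIN|ADMIN|PG[A-Z]+|VHOST_[A-Z_]+|MXHOST|NUXREF_TEMPLATES|COUNTRY_CODE|PROV_STATE|CITY|SITE_NAME)=' \
    > /root/mxserver.env
chmod 600 /root/mxserver.env
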
Step 2 of 7: System Preparation
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!
General:

# Create vmail user; this will be a secure user no one else has
# permissions to, so we can lock it down and keep our mail private
# from any prying eyes of people who have or gain access to our
# server.
useradd -u $VHOST_UID -d /var/mail/vhosts -M -s /sbin/nologin vmail
mkdir -p  /var/mail/vhosts/
chown vmail.vmail /var/mail/vhosts/
chmod 700 /var/mail/vhosts/

# Create a clam user we can perform our anti-virus scans as
usermod -G amavis clam

# Fix ClamD Sock Reference
sed -i -e 's|/var/spool/amavisd/clamd.sock|/var/run/clamav/clamd.sock|g' /etc/amavisd/amavisd.conf

# Fix Amavis Directory Permission
chmod 1770 /var/spool/amavisd/tmp/
# Amavis Domain Configuration
sed -i -e '/NuxRef BulletProofing/d' \
       -e "s/\(# \$myhostname.*\)/\1\n\$myhostname = 'mail.$DOMAIN'; # NuxRef BulletProofing/g" /etc/amavisd/amavisd.conf
sed -i -e "s/^\(\$mydomain[ \t]*=[ \t]*\).*$/\1'$DOMAIN';/g" /etc/amavisd/amavisd.conf

# Generate SSL Keys (if you don't have any already) that we
# will secure all our inbound and outbound mail as.
openssl req -nodes -new -x509 -days 730 -sha1 -newkey rsa:2048 \
   -keyout /etc/pki/tls/private/$DOMAIN.key \
   -out /etc/pki/tls/certs/$DOMAIN.crt \
   -subj "/C=$COUNTRY_CODE/ST=$PROV_STATE/L=$CITY/O=$SITE_NAME/OU=IT/CN=$DOMAIN"

# Permissions; protect our Private Key
chmod 400 /etc/pki/tls/private/$DOMAIN.key

# Permissions; protect our Public Key
chmod 444 /etc/pki/tls/certs/$DOMAIN.crt
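
To double check that the private key and certificate actually pair up (optional; just a habit of mine), compare their moduli; both commands should print the identical checksum:

# Both outputs below should match
openssl x509 -noout -modulus -in /etc/pki/tls/certs/$DOMAIN.crt | md5sum
openssl rsa -noout -modulus -in /etc/pki/tls/private/$DOMAIN.key | md5sum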

Create PostgreSQL Mail Database:

# Optionally Eliminate/Reset the Database.
/bin/su -c "/usr/bin/dropdb -h $PGHOST -p $PGPORT $PGNAME 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGRWUSER 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGROUSER 2>&1" postgres &>/dev/null

# Create Read/Write User (our Administrator)
echo "Enter the role password of '$PGRWPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGRWUSER -P 2>&1" postgres

# Create Read-Only User
echo "Enter the role password of '$PGROPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGROUSER -P 2>&1" postgres

# Create our Database and assign it our Administrator as its owner
/bin/su -c "/usr/bin/createdb -h $PGHOST -p $PGPORT -O $PGRWUSER $PGNAME 2>&1" postgres 2>&1

# Secure and protect a temporary file to work with
touch /tmp/pgsql.postfix.schema.sql
chmod 640 /tmp/pgsql.postfix.schema.sql
chown root.postgres /tmp/pgsql.postfix.schema.sql

# the below seems big; but will work fine if you just copy and
# paste it as is right to your terminal: This will prepare the SQL
# statement needed to build your mail server's database backend
sed -e '/^--?/d' \
    -e "s/%PGROUSER%/$PGROUSER/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/pgsql.postfix.template.schema.sql > \
          /tmp/pgsql.postfix.schema.sql

# load DB
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.postfix.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.postfix.schema.sql

Step 3 of 7: Setup our Mail Transfer Agent (MTA): Postfix
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure Postfix (MTA)
################################################################
# Create backup of configuration files
[ ! -f /etc/postfix/main.cf.orig ] && \
   cp /etc/postfix/main.cf /etc/postfix/main.cf.orig
[ ! -f /etc/postfix/master.cf.orig ] && \
   cp /etc/postfix/master.cf /etc/postfix/master.cf.orig

# Directory to store our configuration in
[ ! -d /etc/postfix/pgsql ] && \
    mkdir -p /etc/postfix/pgsql

# Secure this new directory since it will contain passwords
# information
chmod 750 /etc/postfix/pgsql
chown root.postfix /etc/postfix/pgsql

# Now using our templates, build our SQL files:
for FILE in relay_domains.cf \
            transport_maps.cf \
            virtual_alias_maps.cf \
            virtual_domains_maps.cf \
            virtual_mailbox_limit_maps.cf \
            virtual_mailbox_maps.cf
do
    sed -e "/^#?/d" \
        -e "s/%PGROUSER%/$PGROUSER/g" \
        -e "s/%PGROPASS%/$PGROPASS/g" \
        -e "s/%PGHOST%/$PGHOST/g" \
        -e "s/%PGNAME%/$PGNAME/g" \
            $NUXREF_TEMPLATES/pgsql.postfix.template.$FILE > \
                /etc/postfix/pgsql/$FILE
done

# main.cf
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s|%VHOST_HOME%|$VHOST_HOME|g" \
    -e "s/%VHOST_UID%/$VHOST_UID/g" \
    -e "s/%VHOST_GID%/$VHOST_GID/g" \
        $NUXREF_TEMPLATES/pgsql.postfix.template.main.cf > \
            /etc/postfix/main.cf

# master.cf
cat $NUXREF_TEMPLATES/pgsql.postfix.template.master.cf > /etc/postfix/master.cf

# Run the newaliases command to generate /etc/aliases.db
newaliases

# Vacation Support
echo "autoreply.$DOMAIN vacation:" > /etc/postfix/transport
postmap /etc/postfix/transport

# Update to latest Spam Assassin Rules/Filters
sa-update
# Update to latest Antivirus
freshclam
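
# (Optional sanity check of my own) the industry standard EICAR
# test string is a harmless way to confirm the virus scanner
# actually detects something:
echo 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/eicar.txt
clamscan /tmp/eicar.txt
/bin/rm -f /tmp/eicar.txt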

# Setup Auto-Cron Entries (so future antivirus updates
# can just be automatic).
sed -i -e '\|/etc/cron.hourly|d' /etc/crontab
sed -i -e '\|/etc/cron.daily|d' /etc/crontab
sed -i -e '\|/etc/cron.weekly|d' /etc/crontab
sed -i -e '\|/etc/cron.monthly|d' /etc/crontab
cat << _EOF >> /etc/crontab
  01 * * * * root run-parts /etc/cron.hourly 
  02 4 * * * root run-parts /etc/cron.daily 
  22 4 * * 0 root run-parts /etc/cron.weekly 
  42 4 1 * * root run-parts /etc/cron.monthly
_EOF

# Enable our Services On Reboot
chkconfig --levels 345 spamassassin on
chkconfig --levels 345 clamd on
chkconfig --levels 345 clamd.amavisd on
chkconfig --levels 345 amavisd on
chkconfig --levels 345 postfix on

# Start all of our other services if they aren't already
service spamassassin start
service clamd start
service amavisd start
service clamd.amavisd start

# Restart our postfix service to load our new configuration
service postfix restart
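
Postfix also ships with a built-in sanity check that is worth running at this point (an optional step I recommend); it prints nothing when the configuration is clean:

# Should produce no output if the configuration is sane
postfix check
# You can also confirm individual settings took effect, e.g.:
postconf virtual_mailbox_domains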

Step 4 of 7: Setup our Mail Delivery Agent (MDA): Dovecot
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure Dovecot (MDA)
################################################################
# Create backup of configuration files
[ ! -f /etc/dovecot/dovecot.conf.orig ] && \
   cp /etc/dovecot/dovecot.conf /etc/dovecot/dovecot.conf.orig

# dovecot.conf
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s|%VHOST_HOME%|$VHOST_HOME|g" \
    -e "s/%VHOST_UID%/$VHOST_UID/g" \
    -e "s/%VHOST_GID%/$VHOST_GID/g" \
        $NUXREF_TEMPLATES/pgsql.dovecot.template.dovecot.conf > \
            /etc/dovecot/dovecot.conf

# Create our Sieve Directories
[ ! -d /var/lib/sieve/users ] && \
   mkdir -p /var/lib/sieve/users
[ ! -d /var/lib/sieve/before.d ] && \
   mkdir -p /var/lib/sieve/before.d
[ ! -d /var/lib/sieve/after.d ] && \
   mkdir -p /var/lib/sieve/after.d
chown -R vmail.vmail /var/lib/sieve
chmod 750 /var/lib/sieve

# Dovecot PostgreSQL Configuration
for FILE in dovecot-sql.conf \
            dovecot-dict-user-quota.conf \
            dovecot-dict-domain-quota.conf
do
   sed -e "/^#?/d" \
       -e "s|%VHOST_HOME%|$VHOST_HOME|g" \
       -e "s/%VHOST_UID%/$VHOST_UID/g" \
       -e "s/%VHOST_GID%/$VHOST_GID/g" \
       -e "s/%PGROUSER%/$PGROUSER/g" \
       -e "s/%PGROPASS%/$PGROPASS/g" \
       -e "s/%PGHOST%/$PGHOST/g" \
       -e "s/%PGNAME%/$PGNAME/g" \
           $NUXREF_TEMPLATES/pgsql.dovecot.template.$FILE > \
               /etc/dovecot/$FILE
   # permissions
   chmod 640 /etc/dovecot/$FILE
   chown root.dovecot /etc/dovecot/$FILE
done

# Warning Message when mailbox is almost full
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/pgsql.dovecot.template.mail-warning.sh > \
            /usr/libexec/dovecot/mail-warning.sh

# Make Script Executable
chmod 755 /usr/libexec/dovecot/mail-warning.sh

# Ensure Dovecot starts after each system reboot:
chkconfig --levels 345 dovecot on

# Start Dovecot (otherwise restart it if it's already running)
service dovecot status && service dovecot restart || service dovecot start
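
To verify the MDA is happy (a couple of optional checks of my own): dovecot can dump its parsed configuration, and openssl can confirm the secure IMAP port is answering:

# Dump the live (parsed) configuration
dovecot -n
# Confirm IMAPS answers on port 993 (press Ctrl+C to exit)
openssl s_client -connect localhost:993 -quiet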

Step 5 of 7: Setup Postfix Admin
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure PostfixAdmin
################################################################
sed -e "/^\/\/?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%ADMIN%/$ADMIN/g" \
    -e "s/%PGHOST%/$PGHOST/g" \
    -e "s/%PGNAME%/$PGNAME/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
    -e "s/%PGRWPASS%/$PGRWPASS/g" \
        $NUXREF_TEMPLATES/pgsql.postfixadmin.template.config.local.php > \
            /etc/postfixadmin/config.local.php

# Protect file since it contains passwords
chmod 640 /etc/postfixadmin/config.local.php
chown root.apache /etc/postfixadmin/config.local.php

# Vacation Auto-respond Support
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%PGROUSER%/$PGROUSER/g" \
    -e "s/%PGROPASS%/$PGROPASS/g" \
    -e "s/%PGHOST%/$PGHOST/g" \
    -e "s/%PGNAME%/$PGNAME/g" \
        $NUXREF_TEMPLATES/pgsql.postfixadmin.template.vacation.conf > \
            /etc/postfixadmin/vacation.conf

# Protect file since it contains passwords
chmod 640 /etc/postfixadmin/vacation.conf
chown root.vacation /etc/postfixadmin/vacation.conf

# Log Rotation
cat << _EOF > /etc/logrotate.d/postfix-vacation
/var/log/vacation.log {
        missingok
        notifempty
        create 644 vacation root
}
_EOF

Now you can setup NginX to host your administration; in the below example, I set up https://postfixadmin.<your.domain>/ to go to the postfixadmin page.

# Create dummy favicon.ico for now (silences some log entries)
touch /usr/share/postfixadmin/favicon.ico

# PostfixAdmin NginX Configuration
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/nginx.postfixadmin.template.conf > \
            /etc/nginx/conf.d/postfixadmin.conf

# You may have to bump php-fpm to be safe (if it's not already running)
service php-fpm status 2>/dev/null && service php-fpm restart || service php-fpm start
# make sure it starts on every reboot too:
chkconfig php-fpm --level 345 on
# Restart NginX if it's not already
service nginx status 2>/dev/null && service nginx restart || service nginx start
chkconfig nginx --level 345 on

Once you’ve completed that, you’re now ready to access the administrator interface and set up a new account. Simply visit https://postfixadmin.<your.domain>/setup.php. The templates I provided will set the system password to admin. You’ll need to always enter this value prior to creating an account below.
PostfixAdmin Setup
Once you’re done creating an account, just change the setup.php script to read-only as a security precaution. You can perform every other action you’ll ever need using the account you already created.

################################################################
# Disable Future System Administrator Creation
################################################################
chmod 600 /usr/share/postfixadmin/setup.php

# later on, you can re-enable the setup.php file to create a new
# account in the distant future by just typing:
#
# chmod 644 /usr/share/postfixadmin/setup.php
#

You can simply access https://postfixadmin.<your.domain>/ now (without the setup.php) and log in using the new administrator you created.

Step 6 of 7: Setup Roundcube
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure RoundCube Mailer
################################################################
# RoundCube NginX Configuration
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/nginx.roundcubemail.template.conf > \
            /etc/nginx/conf.d/roundcubemail.conf

# Optionally Eliminate/Reset the RoundCube Database
/bin/su -c "/usr/bin/dropdb -h  $PGHOST -p $PGPORT $PGRQNAME 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGRQUSER 2>&1" postgres &>/dev/null

# Create RoundCube Administrator User
echo "Enter the role password of '$PGRQPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGRQUSER -P 2>&1" postgres 2>&1

# Create our Database and assign it our Administrator as its owner
/bin/su -c "/usr/bin/createdb -h $PGHOST -p $PGPORT -O $PGRQUSER $PGRQNAME 2>&1" postgres 2>&1
/usr/bin/psql -h $PGHOST -p $PGPORT -U $PGRQUSER $PGRQNAME -f /usr/share/doc/roundcubemail-0.9.5/SQL/postgres.initial.sql

# Configure Roundcube
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']db_dsnw['\"]\][ \t]*=\).*$|\1 'pgsql://$PGRQUSER:$PGRQPASS@$PGHOST/$PGRQNAME';|g" /etc/roundcubemail/db.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']default_host['\"]\][ \t]*=\).*$|\1 'ssl://$MXHOST:993';|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']smtp_server['\"]\][ \t]*=\).*$|\1 'tls://$MXHOST';|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']smtp_user['\"]\][ \t]*=\).*$|\1 '%u';|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']smtp_pass['\"]\][ \t]*=\).*$|\1 '%p';|g" /etc/roundcubemail/main.inc.php

# Some extra RoundCube options I preferred:
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']identities_level['\"]\][ \t]*=\).*$|\1 3;|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']quota_zero_as_unlimited['\"]\][ \t]*=\).*$|\1 true;|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']htmleditor['\"]\][ \t]*=\).*$|\1 1;|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']preview_pane['\"]\][ \t]*=\).*$|\1 true;|g" /etc/roundcubemail/main.inc.php

# You may have to bump php-fpm to be safe (if it's not already running)
service php-fpm status 2>/dev/null && service php-fpm restart || service php-fpm start
# make sure it starts on every reboot too:
chkconfig php-fpm --level 345 on
# Restart NginX if it's not already
service nginx status 2>/dev/null && service nginx restart || service nginx start
chkconfig nginx --level 345 on

You can now simply visit https://roundcube.<your.domain>/ and access any mailboxes you configured. Just remember to tell your users that they must specify their full email address as their username.

Every mailbox you create with PostfixAdmin will be accessible through your RoundCube webpage.
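
Before pointing users at the webmail page, it doesn’t hurt to confirm that the IMAP and SMTP endpoints RoundCube was configured against actually answer. A minimal sketch, assuming $MXHOST is still set from Step 1 (adjust the SMTP port to whatever your smtp_server setting implies; 25 is shown here):

# Test the SSL IMAP port RoundCube will connect to:
openssl s_client -connect $MXHOST:993 -quiet </dev/null

# Test the STARTTLS SMTP path:
openssl s_client -connect $MXHOST:25 -starttls smtp -quiet </dev/null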

Step 7 of 7: Security
If you aren’t familiar with Fail2Ban, now would be an excellent time to learn about it. I wrote a blog a while back about securing your CentOS system and encourage you to read it; at the very least, read the section on Fail2Ban. The below explains how you can protect yourself from brute force attacks.

# Monitor for multiple failed SASL Logins into postfix
cat << _EOF > /etc/fail2ban/filter.d/postfix-sasl.conf
[Definition]
failregex = (?i): warning: [-._\w]+\[<HOST>\]: SASL (?:LOGIN|PLAIN|(?:CRAM|DIGEST)-MD5) authentication failed(: [A-Za-z0-9+/ ]*)?$
ignoreregex =
_EOF
# Monitor for multiple failed postfixadmin logins
cat << _EOF > /etc/fail2ban/filter.d/postfixadmin-auth.conf
[Definition]
failregex = ^<HOST> -.*POST.*login.php HTTP[^ \t]+ 500\$
ignoreregex =
_EOF

# Now in /etc/fail2ban/jail.conf you will want the following:
cat << _EOF >> /etc/fail2ban/jail.conf
[dovecot-iptables]
enabled  = true
filter   = dovecot
backend  = polling
action   = iptables[name=dovecot, port=110,143,993,995, protocol=tcp]
           sendmail-whois[name=dovecot, dest=root, sender=fail2ban@$DOMAIN]
logpath  = /var/log/mail.log

[sasl-iptables]
enabled  = true
filter   = postfix-sasl
backend  = polling
action   = iptables[name=sasl, port=25,587, protocol=tcp]
           sendmail-whois[name=sasl, dest=root, sender=fail2ban@$DOMAIN]
logpath  = /var/log/mail.log

[roundcube-iptables]
enabled  = true
filter   = roundcube-auth
backend  = polling
action   = iptables[name=RoundCube, port="http,https"]
           sendmail-whois[name=RoundCube, dest=root, sender=fail2ban@$DOMAIN]
logpath  = /var/log/roundcubemail/errors

[pfixadmadmin-iptables]
enabled  = true
filter   = postfixadmin-auth
backend  = polling
action   = iptables[name=PostfixAdmin, port="http,https"]
           sendmail-whois[name=PostfixAdmin, dest=root, sender=fail2ban@$DOMAIN]
logpath  = /var/log/nginx/postfixadmin.access.log
_EOF
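
Just like the other services, Fail2Ban needs a restart before the new jails take effect; make sure it also comes back after a reboot:

# Restart (or start) Fail2Ban so the new jails are loaded
service fail2ban status 2>/dev/null && service fail2ban restart || service fail2ban start
chkconfig fail2ban --level 345 on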

Additionally, you will want to add the following to /etc/sysconfig/iptables:

#---------------------------------------------------------------
# Web Traffic (for PostfixAdmin and RoundCube)
#---------------------------------------------------------------
# Allow the non-encrypted port so we can redirect these users to
# the encrypted version.  Supporting the redirect is just a nicer
# effect
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT

#---------------------------------------------------------------
# SMTP / Message Transfer Agent Communication
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 465 -j ACCEPT

#---------------------------------------------------------------
# IMAP 
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 993 -j ACCEPT

#---------------------------------------------------------------
# POP3
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 995 -j ACCEPT
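
Reload the firewall once the rules are in place:

# Restart the firewall
service iptables restart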

Useful Commands When Running Your Mail Server

  • doveconf -a: Display Dovecot configuration
  • postconf -n: Display Postfix configuration
  • postqueue -p: Display mail queue information

Sometimes when first playing with quotas, you may want to recalculate them for a specific user (or everyone). This can be done as follows:

#  Recalculate a specific user's quota
doveadm quota recalc -u foobar@your.domain.com

# Or you can do this and recalculate ALL user quotas
doveadm quota recalc -A

# You will also want to run the following SQL if you decide
# to recalculate quotas for someone (or everyone) in your database:
# UPDATE domain_quota SET bytes=sq.sb, messages=sq.sm 
#   FROM (SELECT 'your.domain.com',
#           sum(bytes) as sb, sum(messages) as sm from quota2 WHERE
#            username like '%@your.domain.com') AS sq 
#   WHERE domain = 'your.domain.com';
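
If you do need to run that statement, a minimal sketch is to open a psql session as the postgres user and paste it in; the database name below ('mailserver') is just a placeholder for whatever name you gave your mail database earlier in this blog:

# Hypothetical example; substitute your actual mail database name:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT mailserver" postgres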

Note: If you delete a mailbox for a specified domain, remember to manually remove: /var/mail/vhosts/domain/user

So… That’s it? Now I’m done?
Yes and no… My blog pretty much hands over a working mail server with little to no extra configuration needed on your part. But to properly accept mail from other people around the world, you will need the following:

  1. This mail server must (ideally) be accessible from the internet via a static IP address. This means that if you’re hosting this at home, the IP address your ISP provides may not work (dynamic IP vs. static IP). That said, a lot of ISPs can offer you a static IP for little (to no) extra cost if you don’t already have one.
  2. Your own domain name (if you don’t have an official one already), because you can’t have an email@your.domain.com if your.domain.com isn’t publicly recognized.
  3. A Mail Exchange (MX) record that points to your new mail server via its internet-accessible IP. This is the only way the outside world will be able to send mail back to you; this step is not required if the only thing you’re doing is sending email out.

    Most domain registrars (GoDaddy.com, NameCheap.com) allow you to set your own MX record, which can be entered right from their webpage with little to no effort. If you’re paying someone else to host the websites for the domain you own, then most likely the MX record already points to them. You may need to open a support ticket (or call them) and ask for the MX record to be changed to point at your own server instead, or have them forward the mail to it.
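
Once your registrar has the record in place, you can confirm it propagated with dig (from the bind-utils package); a quick sketch using placeholder names:

# Confirm the MX record for your domain points at your mail server:
dig +short MX your.domain.com

# ... and that the host it names resolves to your static IP:
dig +short A mail.your.domain.com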

Please keep in mind that there are risks involved with running your own mail server. You can get yourself temporarily (or permanently) blacklisted if you’re not careful. Once you’re blacklisted, you’ll have a very hard time getting other mail servers on the internet to accept your emails for delivery. Blacklisting occurs when mail servers (which you will now be interacting with) detect abuse. In most cases, the mail server administrator (this is YOU) won’t even know the abuse is happening; it will start under your nose from emails that originated from your system by those you’ve given mailboxes to. Once your domain is blacklisted, it can be a pain in the @$$ to get de-listed later. A blacklisted domain’s emails usually never reach their intended recipients; instead, they are immediately flagged as spam and discarded by the remote server. The configuration I provided covers most of the cases, but you still need to consider:

  • Don’t create mailboxes for people that you know intend to use them for malicious purposes or for spamming others. Likewise, don’t allow users to send out hundreds of thousands of emails a day to a massive distribution on a regular basis, even if it’s meaningful mail. Consider that the same stuff you don’t like in your inbox is the same stuff nobody else likes in theirs either. :)
  • Don’t allow your mail server to relay mail from untrusted sources. That is, make sure only the users you create accounts for can send mail from your server.
  • Throttle all outbound mail delivery to each relay destination. With respect to the first point, even if you do have to send massive amounts of mail from your system on a regular basis, do it in small batches. This way you won’t overwhelm the remote servers accepting the mail you want delivered.

If you followed my blog and are using the settings I put in place, then you’re already configured for the last two points above. The first is governed by your own judgment.
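
For reference, if you ever need to adjust the throttling yourself, Postfix exposes a few knobs for it. The values below are illustrative assumptions only, not the ones my templates ship with:

# Pause between deliveries to any one destination and cap the
# number of parallel connections we open to it:
postconf -e 'default_destination_rate_delay = 1s'
postconf -e 'default_destination_concurrency_limit = 2'
# Apply the change:
service postfix reload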

No system is bulletproof; disaster can always strike when you’re least expecting it. To cover yourself, always consider backups of the following (a quick snapshot sketch follows the list):

  • Your PostgreSQL Database: This is where all of your mail configuration is for both your MTA and MDA. You definitely do not want to lose this. May I suggest you reference my other blog entry here where I wrote a really simple backup/restore tool for a PostgreSQL database.
  • /etc/postfixadmin/*: Your Postfix Admin flat file configuration allowing you to centrally manage everything via a webpage.
  • /etc/postfix/*: Your Postfix flat file configuration which defines the core of your MTA. Its configuration allows you to centrally manage everything else through the Postfix Administration website.
  • /etc/roundcubemail/*: Your RoundCube flat file configuration, which allows users to check their mail via a webpage you host.
  • /etc/dovecot/*: Your Dovecot flat file configuration which defines the core of your MDA. Its configuration allows you to centrally manage everything through the Postfix Administration website.
  • /var/mail/vhosts/*: All of your user’s mailboxes are defined here. This is a vast storage of unread and read mail that resides on your server.
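
Here’s the quick snapshot sketch mentioned above: a single tar of the flat file configuration. The database itself still needs a proper dump (see the PostgreSQL backup entry referenced in the first bullet):

# Snapshot all of the flat file configuration in one shot; add
# /var/mail/vhosts if you have the disk space to include the
# mailboxes themselves:
tar czf /root/mailserver-config-$(date +'%Y%m%d').tgz \
    /etc/postfixadmin /etc/postfix /etc/roundcubemail /etc/dovecot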

What about Apache?
Apache is a perfectly fine alternative as well! I simply chose NginX because it is a much more lightweight approach. In fact, PostfixAdmin and RoundCube already come with Apache configuration out of the box, located in /etc/httpd/conf.d/. Thus, if you simply start up your Apache instance (service httpd start), you will be hosting their services right away. Please keep in mind that the default (Apache) configuration does not come with all the SSL and added security I provided with the NginX templates. Perhaps later on, I will update the template rpm to include a secure Apache setup as well.

Credit
This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

Repository
This blog required me to set up my own repository, which I’m thinking some people might want me to continue to maintain; that is, fetching and applying the latest security updates after testing them first, for the continued promise of stability. Personally, I believe I may be opening a new can of worms for myself by offering this service, because bandwidth costs money and updates cost time. But I’ll share it publicly and see how things go anyway.

If you’d like me to continue to monitor and apply updates, as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!

Sources
This blog could not have been made possible without the many resources I used, including other people’s blogs, from which I picked and chose the approaches they took.


PostgreSQL Backup & Restore

Backing Up and Restoring Your PostgreSQL Database


Introduction
As a developer, or even a system administrator, experiencing database corruption is a scary thing. Usually when something you’re responsible for fails… it fails hard. It’s important to always make frequent backups of the database(s) you’re in charge of. The internet is filled with solutions and one-liners you can consider, but there is a lot of contradiction out there too, as each site tells you how it should be done. Personally, I’m not going to tell you whether the way I do it is the correct way or not. But I will tell you it works, and it’s saved me on numerous occasions.

I’m lazy, and when I look for solutions online, I always hope someone is just going to spell it right out for me. It’s so much easier when someone else does all the work; but usually that wasn’t the case for me. Stack Overflow is fantastic for getting the quick one-liners you need to build your solution, but I prefer automation. I don’t want to remember all of the different options one tool has versus another. I want simplicity, because when something fails, I don’t want to be learning complicated steps to do a database restoration.

As a strong supporter of PostgreSQL, I want to share the solution I use, to hopefully speed along someone else’s research in the future. I also can’t stress enough that if you aren’t taking regular backups of the databases you are responsible for, you really should reconsider and at least use the scripts defined below.

July 12th, 2014 Update
I packaged pgnux_backup, pgnux_restore and pgnux_kickusers into an RPM to simplify things:
RPM: pgnux-1.0.0-1.el6.nuxref.noarch.rpm
Source: pgnux-1.0.0-1.el6.nuxref.src.rpm

These RPMs can also be fetched from my nuxref repository.

Database Backup
Assuming you are the global system administrator responsible for a PostgreSQL database and have root privileges, here is a simple backup script you can use:

#!/bin/bash
# Author: Chris Caron <lead2gold at gmail.com>
# Name: pgnux_backup
# Description: Perform a backup of one or more PostgreSQL Database(s)
# Returns: 0 on success, a non-zero value if a failure occurred
# Syntax: pgnux_backup [db1 [db2 [db3 [...]]]]
#
# Note: If no database is specified to be backed up, then all of
#       databases hosted by the postgresql instance are used.

##################################
# Define some simple variables
##################################
PGENGINE=/usr/bin
PGPORT=5432
PGHOST=localhost
DUMPDIR=/var/pgbackup

##################################
# System Tweaks
##################################
# if COMPRESSION is enabled, then all database backups are
# gzipped to save space.  The trade off is more cpu power will
# be used to generate your backups and they will take longer
# to create.  But you can save yourself significant amounts of
# disk space.  Set this value to 0 if you do not wish to use
# compression. COMPRESSION can be set to a value between 0 and
# 9 where a setting of 0 is used to disable it.
# Please keep in mind also, that if you choose to use
# compression, the PostgreSQL restores will be a bit slower since
# the data needs to be first uncompressed before it can be
# reloaded.
COMPRESSION=6

# For SELinux we need to use 'runuser' not 'su'
if [ -x /sbin/runuser ]; then
    SU=/sbin/runuser
else
    SU=/bin/su
fi

# Safety Check
if [ $(whoami) != 'root' ]; then
   echo "Error: You must be 'root' to use this script."
   exit 1
fi

# List of databases you wish to back up by default.  If you
# leave this blank, then the system will automatically detect
# all of the databases hosted by the PostgreSQL instance.
#
# You can use Space, Tab or some form of white space delimiter
# between each database you specify if you need to specify more
# than one.
# If nothing is specified on the command line (when this script
# is called), then this is the default list that will be used in
# its place.
DBLIST=""

# If anything was specified on the command line; we over-ride it here
[ ! -z "$@" ] && DBLIST="$@"

# Capture the current time to make it easy to locate our backups later
SNAPTIME=$(date +'%Y%m%d%H%M%S')

# Build DBLIST if it isn't already defined:
[ -z "$DBLIST" ] && DBLIST=$($SU -l postgres -c "$PGENGINE/psql \
                          -p $PGPORT -h $PGHOST -lt | \
                   awk 'NF >=5 { print \$1}' | \
                   egrep -v '^(template[0|1])\$'")

# Create our backup directory if it doesn't already exist
[ ! -d $DUMPDIR ] && mkdir -p $DUMPDIR

# Ensure our directory is protected from prying eyes
chmod 770 $DUMPDIR
chown root.postgres $DUMPDIR

# Backup our Database Globals (exit on failure)
$SU -l postgres -c "$PGENGINE/pg_dumpall --globals-only \
   -p $PGPORT -h $PGHOST | \
   gzip -c > $DUMPDIR/${SNAPTIME}-globals.sql.gz"
if [ $? -ne 0 ]; then
   echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - ERROR - "
   echo "Failed to dump system globals; backup aborted."
   exit 2
fi

# Protect our backup and ensure the postgres user can manage
# this file if need be.
chmod ug+rw "$DUMPDIR/${SNAPTIME}-globals.sql.gz" &>/dev/null
chown postgres.root $DUMPDIR/${SNAPTIME}-globals.sql.gz &>/dev/null

FAIL_COUNT=0
# Iterate over our database list and perform our backup:
for DBNAME in $DBLIST; do
   echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - INFO - "
   echo "Dumping database '$DBNAME' ..."
   PGBKUP=
   PGFLAGS="-p $PGPORT -h $PGHOST --create --oids --format=t --verbose"
   if [ $COMPRESSION -gt 0 ]; then
      # Use Compression
      PGBKUP=$SNAPTIME-$DBNAME.db.gz
      $SU -l postgres -c "$PGENGINE/pg_dump $PGFLAGS $DBNAME | \
                 gzip -cq$COMPRESSION > $DUMPDIR/$PGBKUP"
      RETVAL=$?
   else
      # Do not compress (COMPRESSION is set to 0)
      PGBKUP=$SNAPTIME-$DBNAME.db
      $SU -l postgres -c "$PGENGINE/pg_dump $PGFLAGS \
                 --file=$DUMPDIR/$PGBKUP $DBNAME"
      RETVAL=$?
   fi
   if [ $RETVAL -eq 0 ]; then
      echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - INFO - "
      echo "Backup Complete: $PGBKUP"
      chmod ug+rw $DUMPDIR/$PGBKUP &>/dev/null
      chown postgres.root $DUMPDIR/$PGBKUP &>/dev/null
   else
      [ -f $DUMPDIR/$PGBKUP ] && \
          rm -f $DUMPDIR/$PGBKUP
      echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - ERROR - "
      echo "Backup Failed: $PGBKUP"
      let FAIL_COUNT+=1
   fi
done

if [ $FAIL_COUNT -ne 0 ]; then
   exit 1
fi
exit 0

Database Restore
Backups are one thing, but the ability to restore is another. This script pairs with the above and will allow you to restore different points in time.

#!/bin/bash
# Author: Chris Caron <lead2gold at gmail.com>
# Name: pgnux_restore
# Description: Perform a restore of one or more PostgreSQL Database(s)
# Returns: 0 on success, a non-zero value if a failure occurred
# Syntax: pgnux_restore <database_name> [snapshot_time]
#
# Note: This script will not work if people are connected to the
#       databases you are trying to restore.
 
##################################
# Define some simple variables
##################################
PGENGINE=/usr/bin
PGPORT=5432
PGHOST=localhost
DUMPDIR=/var/pgbackup
 
# How many backups in the past to list when no restore
# has taken place
HISTORY=10
 
# Specify a keyword that when in place of the <database_name>
# it triggers all databases for specified snapshot time to be
# restored including database system globals. Set this to
# 'nothing' if you want the restore to do a global restore every
# time (might be dangerous)
# The below allows you to type 'pgnux_restore --' to restore
# everything.
RESTORE_ALL_KEYWORD="--"

# For SELinux we need to use 'runuser' not 'su'
if [ -x /sbin/runuser ]; then
    SU=/sbin/runuser
else
    SU=/bin/su
fi

# Safety Check
if [ $(whoami) != 'root' ]; then
   echo "Error: You must be 'root' to use this script."
   exit 1
fi

# Not worth continuing if there is no dump directory
if [ ! -d $DUMPDIR ]; then
   echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - ERROR - "
   echo "Directory '$DUMPDIR' does not exist."
   exit 2
fi
  
# Capture the Database(s) and allow for delimiters to separate
# more than one. Keep in mind that the first database specified
# will be the one used to detect the SNAPTIME if one isn't specified
DBLIST=$(echo "$1" | sed -e 's/[,:+%!&]/ /g')
 
# Take the value specified on the command line (argument 2)
shift
_SNAPTIME=$(echo "$@" | sed -e 's/[^0-9]//g')
SNAPTIME=
if [ -z "$_SNAPTIME" ]; then
   # Fetch most recent backup time
   SNAPTIME=$(find -L $DUMPDIR -maxdepth 1 -mindepth 1 \
        -type f -regex ".*/[0-9]+-.*\.\(db\|db.gz\)$" \
        -exec basename {} \; | \
        sed -e 's/^\([0-9]\+\)-\(.*\)\.\(db\|db\.gz\)$/\1/g' | \
        sort -r -n | uniq | head -n1)
   if [ -z "$SNAPTIME" ]; then
      echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - ERROR - "
      echo "No backups detected."
      exit 2
   fi
fi
 
# Initialize Globals Loaded Flag 
GLOBALS_LOADED=1
if [ "$DBLIST" == "$RESTORE_ALL_KEYWORD" ]; then
   # Now we build a list of the databases based on the
   # snapshot time specified
   DBLIST=$(find -L $DUMPDIR -maxdepth 1 -mindepth 1 \
      -type f -regex ".*/$SNAPTIME-.*\.\(db\|db.gz\)\$" \
      -exec basename {} \; | \
      sed -e 's/^[0-9]\+-\(.*\)\.\(db\|db\.gz\)$/\1/g')
   GLOBALS_LOADED=0
fi
 
FAILLIST=""
RESTORE_COUNT=0
for DBNAME in $DBLIST; do
   # Toggle 'Globals Loaded Flag' if 'Restore All Keyword' found
   # This neat little trick allows you to force the restores of
   # globals associated with a specified date of the backup
   # hence: pgnux_restore --,foobar_database 20140101000000
   # would cause the globals taken at this time to be loaded
   # with the database.
   [ "$DBLIST" == "$RESTORE_ALL_KEYWORD" ] && GLOBALS_LOADED=0

   # If no Snapshot time was specified, we need to
   # keep detecting it based on the database specified
   [ -z "$_SNAPTIME" ] &&
      SNAPTIME=$(find -L $DUMPDIR -maxdepth 1 -mindepth 1 \
        -type f -regex ".*/[0-9]+-$DBNAME\.\(db\|db.gz\)$" \
        -exec basename {} \; | \
        sed -e 's/^\([0-9]\+\)-\(.*\)\.\(db\|db\.gz\)$/\1/g' | \
        sort -r -n | uniq | head -n1)
   PGBKUP=$DUMPDIR/$SNAPTIME-$DBNAME.db

   # The globals file name must match what pgnux_backup writes:
   # ${SNAPTIME}-globals.sql.gz
   if [ $GLOBALS_LOADED -eq 0 ] && \
      [ -f $DUMPDIR/$SNAPTIME-globals.sql.gz ]; then
      echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - INFO - "
      echo -n $"Restoring Database Globals ..."
         $SU -l postgres -c "gunzip \
             -c $DUMPDIR/$SNAPTIME-globals.sql.gz | \
             $PGENGINE/psql -a -p $PGPORT -h $PGHOST"
      # Toggle Flag
      GLOBALS_LOADED=1
   fi
 
   # Detect Compression
   if [ -f $PGBKUP.gz ]; then
      echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - INFO - "
      echo $"Uncompressing Database $DBNAME ($SNAPTIME)..."
      gunzip -c $PGBKUP.gz > $PGBKUP
      if [ $? -ne 0 ]; then
         FAILLIST="$FAILLIST $DBNAME"
         [ -f $PGBKUP ] && rm -f $PGBKUP
      fi
   fi
 
   [ ! -f $PGBKUP ] && continue
 
   echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - INFO - "
   echo $"Restoring Database $DBNAME ($SNAPTIME)..."
   # This action can fail for 2 reasons:
   #  1. The database doesn't exist (this is okay because we
   #     are restoring it anyway).
   #  2. Someone is accessing the database we are trying to
   #     drop.  You need to make sure you've stopped all
   #     processes that are accessing the particular database
   #     being restored or you will have strange results.
   $SU -l postgres -c "$PGENGINE/dropdb \
          -p $PGPORT -h $PGHOST $DBNAME" &>/dev/null
   # use -C & -d 'template1' so we can bootstrap off of the
   # 'template1' database. Note, that this means we are bound
   # by the character encoding of 'template1' (default is UTF-8)
   $SU -l postgres -c "$PGENGINE/pg_restore -e -v -p $PGPORT \
                   -C -d template1 -h $PGHOST $PGBKUP"
   [ $? -ne 0 ] && FAILLIST="$FAILLIST $DBNAME" || \
        let RESTORE_COUNT+=1
   [ -f $PGBKUP.gz ] && [ -f $PGBKUP ] && rm -f $PGBKUP
done
  
# Spit out a readable list of databases that were not recovered
# if required.
if [ ! -z "$FAILLIST" ]; then
   for DBNAME in $FAILLIST; do
      echo "Warning: $DBNAME ($SNAPTIME) was not correctly restored."
   done
   exit 1
elif [ $RESTORE_COUNT -eq 0 ]; then
   # Nothing was restored; now is a good time to display to the user
   # their options
   echo -n "$(date +'%Y-%m-%d %H:%M:%S') - $$ - WARNING - "
   echo $"There were no databases restored."
   echo
   # Display last X backups and the databases associated with each
   BACKUPS=$(find -L $DUMPDIR -maxdepth 1 -mindepth 1 \
        -type f -regex ".*/[0-9]+-.*\.\(db\|db.gz\)$" \
        -exec basename {} \; | \
        sed -e "s/^\([0-9]\+\)-\(.*\)\.\(db\|db\.gz\)$/\1|\2/g" | \
        sort -r -n | head -n$HISTORY)
   [ -z "$BACKUPS" ] && exit 1
 
   LAST_SNAPTIME=""
   printf "    %-16s %s\n" "TIMESTAMP" "DATABASE(S)"
   for BACKUP in $BACKUPS; do
      SNAPTIME=$(echo $BACKUP | cut -d'|' -f1)
      DBNAME=$(echo $BACKUP | cut -d'|' -f2)
      if [ "$LAST_SNAPTIME" != "$SNAPTIME" ]; then
          printf "\n%s: %s" $(echo "$SNAPTIME" | \
              sed -e 's/\([0-9]\{4\}\)\([0-9]\{2\}\)\([0-9]\{2\}\)\([0-9]\{2\}\)\([0-9]\{2\}\)\([0-9]\{2\}\)/\1-\2-\3_\4:\5:\6/g') "$DBNAME"
          LAST_SNAPTIME=$SNAPTIME
      else
         printf ", %s" "$DBNAME"
      fi
   done
   printf "\n"
   exit 1
fi
exit 0

Automation
Consider creating a few crontab entries to manage backups during off hours, as well as something to keep your backups from growing out of control on your hard drive. Feel free to adjust the numbers as you see fit. The below assumes you copied the 2 scripts above into /usr/bin.

cat << EOF > /etc/cron.d/nuxref_postgresql_backup
# Create a backup of all databases every hour
0  * * * * root /usr/bin/pgnux_backup &>/dev/null
# Only keep the last 30 days of backups (the check runs hourly at the 40 minute mark)
40 * * * * root find /var/pgbackup -maxdepth 1 -type f -mtime +30 -delete &>/dev/null
EOF

Usage
Backups are as simple as the following:

# Backup All databases
pgnux_backup

# Backup just the foobar_database:
pgnux_backup foobar_database

# Backup just the foobar_database and the barfoo_database:
pgnux_backup foobar_database barfoo_database

Restoring requires that no one is accessing the database (if it already exists). Otherwise it’s just as easy:

# Call the script with no parameters to see a nice list of the last
# 10 backups. This list can be really helpful because it also
# identifies the snapshot times they were taken at.  You need these
# times to perform restores with.
pgnux_restore

# Restore ALL databases that resided in the last database snapshot
# Caution: If you just backed up a single database on its own last
#          time, this might be a bit unsafe to run. If you backed
#          up other databases at a different time, then they won't
#          get included in this restore. The -- also forces the
#          system globals to get reloaded as well.
pgnux_restore --

# Restore the last snapshot taken of the foobar_database
pgnux_restore foobar_database

# Restore the last snapshot taken of the foobar_database and
# barfoo_database. Notice we can't use a space as a delimiter (like
# pgnux_backup does), only because the second argument identifies
# the snapshot time (in case you want your restore to be specific).
pgnux_restore foobar_database,barfoo_database

# With respect to the last point, this will restore a database
# snapshot further back in the past (not the latest one). This
# also assumes you have a backup at this point.  You can find
# this out by running pgnux_restore all by itself as the first
# example given above. The script will take care of the formatting
# of the time so long as you provide YYYYMMDDHHmmss in that format.
# You can stick in as many delimiters to make the time readable as
# you want. The below restores a specific database snapshot:
pgnux_restore foobar_database 2014-03-13_15:47:06

# You can restore more than one database at a specific time as
# well, using a comma (,) to delimit your databases and
# specifying the snapshot time at the end. Here is how to restore
# the foobar_database and the barfoo_database at a specific
# instance of time:
pgnux_restore foobar_database,barfoo_database 2014-03-13_15:47:06

# The -- also plays another role: it can be used to force the
# loading of the PostgreSQL globals again when it's paired with
# databases. Consider the above line but with this small tweak:
pgnux_restore --,foobar_database,barfoo_database 2014-03-13_15:47:06

Catastrophic Recoveries
If the worst-case scenario happens, and the restore process seems to be failing you, you’re not out of options. As long as you’ve been taking frequent backups, there is always a way back!

Remember that restores will fail if the database is being accessed by someone else. The below is a dirty little scripted trick that boots everyone accessing your system off. It’s somewhat dangerous to do, but if restoring is your final goal, then the danger is mitigated.

#!/bin/bash
# Author: Chris Caron <lead2gold at gmail.com>
# Name: pgnux_kickusers
# Description: Kicks all users off of a PostgreSQL instance being
#              accessed.
# Syntax: pgnux_kickusers

##################################
# Define some simple variables
##################################
PGENGINE=/usr/bin
PGPORT=5432
PGHOST=localhost

# For SELinux we need to use 'runuser' not 'su'
if [ -x /sbin/runuser ]; then
    SU=/sbin/runuser
else
    SU=/bin/su
fi

# Safety Check
if [ $(whoami) != 'root' ]; then
   echo "Error: You must be 'root' to use this script."
   exit 1
fi

# Get a list of all PIDS accessing our system
PIDS=$($SU -l postgres \
      -c "$PGENGINE/psql -h $PGHOST -p $PGPORT -e \
         -c \"SELECT procpid from pg_stat_activity;\"" \
            2>/dev/null | \
      egrep '^[ \t]+[0-9]+' | tr -s '[:space:]' ' ')

# Iterate over our list and attempt to kick our users off
# by sending each of them a SIGTERM.  Since we're impatient,
# we retry up to 3 times; a last-resort SIGKILL is left
# commented out at the bottom of the loop.
for PID in $PIDS; do
   RETRY=3
   while [ $RETRY -gt 0 ]; do
      kill $PID &>/dev/null
      ps -p $PID &>/dev/null
      if [ $? -ne 0 ]; then break; fi
      let RETRY-=1
   done
   # All PostgreSQL Documentation will tell you SIGKILL
   # is the absolute 'WORST' thing you can do.
   # The below code should only be uncommented in
   # catastrophic situations where a restore is inevitable
   # ps -p $PID &>/dev/null
   # [ $? -eq 0 ] && kill -9 $PID
done
exit 0

Restores can also fail if the PostgreSQL data directory contents have been corrupted. Corruption can occur:

  • From a system that is experiencing hard drive failures.
  • From another process accessing /var/lib/pgsql/data/* that shouldn’t be. Only the database itself should be modifying or accessing the contents of this directory, unless you know what you’re doing.
  • From just a simple power outage. A busy system that isn’t backed by a UPS device or redundant power sources can easily have its file system corrupted during a loss of power. What usually happens is PostgreSQL is writing content to the database when the power is pulled from the machine. At the end of the day, you’re left with a database that wasn’t correctly written to and, in some cases, can be seriously damaged as a result.

In some cases, the corruption can be so bad that the database instance (PostgreSQL) won’t even start. Evidently, you can’t restore a working copy of a database if the service itself isn’t running for you.

Sometimes attempting to repair the write-ahead log (WAL) can help you recover. Obviously this completely depends on the type of failure you’re experiencing; the contents of /var/lib/pgsql/data/pg_xlog/* can help shed some light on the subject as well. The below script assumes your instance of PostgreSQL is stopped.

# Make sure your database is NOT running if you intend to run the below
# command:
service postgresql stop

Here is the simple wrapper to the write-ahead log (WAL) repair tool:

#!/bin/bash
# Author: Chris Caron <lead2gold at gmail.com>
# Name: pgnux_repair
# Description: Simple wrapper to pg_resetxlog
# Syntax: pgnux_repair
 
##################################
# Define some simple variables
##################################
PGENGINE=/usr/bin
PGDATA=/var/lib/pgsql/data

# For SELinux we need to use 'runuser' not 'su'
if [ -x /sbin/runuser ]; then
    SU=/sbin/runuser
else
    SU=/bin/su
fi

# Safety Check
if [ $(whoami) != 'root' ]; then
   echo "Error: You must be 'root' to use this script."
   exit 1
fi

# Attempt to repair the WAL logs
$SU -l postgres -c "$PGENGINE/pg_resetxlog -f $PGDATA"

If it still doesn’t work, as a last resort consider these simple commands to bring your system back up again. Note that you should be the root user prior to performing these steps:

# Stop the database (if it isn't already):
service postgresql stop

# Backup your configuration files
mkdir -p /tmp/postgresql_conf
cp -a /var/lib/pgsql/data/*.conf /tmp/postgresql_conf

# Destroy the database data directory. Unless you changed the
# default location, the below command will do it.
# This is IRREVERSIBLE; be sure you have a recovery point to
# restore to before doing this, otherwise you will make your
# situation worse!
find /var/lib/pgsql/ -mindepth 1 -maxdepth 1 -exec rm -rf {} \;

# Initialize a new empty database; this will rebuild
# the /var/lib/pgsql/data directory for you
service postgresql initdb

# Restore all of your configuration you backed up above:
cp -af /tmp/postgresql_conf/*.conf /var/lib/pgsql/data/

# Just in case, for SELinux Users, make sure we're set okay
restorecon -R /var/lib/pgsql

# Start your database up
service postgresql start

# Now restore your databases using pgnux_restore and make sure you
# include the double dash '--' so that you load the system globals
# back as well:
#   pgnux_restore --,db1,db2,db3,db4,...
#
# If you've always been backing them all together, then you can
# restore them with a single command:
pgnux_restore --

# You're done! You should be back up and running now, hopefully
# with minimal downtime.

Disclaimer
I cannot be held responsible or liable for anything that goes wrong with your system. This blog is intended to be used as a resource only and will hopefully help you out in a time of need. These tools work for me and have been tested thoroughly with PostgreSQL v8.4 (which ships with CentOS 6.x). I have not tried any of these tools with PostgreSQL v9.x or higher and cannot promise you anything, but I honestly can’t see a reason why they wouldn’t work.

Credit
Please note that this information took me several days to put together and test thoroughly. I may not blog often; but I want you to be assured of the stability and testing I put into everything I intend to share.

If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

Secure FTP Solution

Configuring and Installing VSFTPD on CentOS 6


Introduction
FTP servers have been around for a very long time. One could easily argue that they aren’t always the best option to choose anymore. But for those dealing with legacy systems, the FTP protocol provides one of the easiest ways to share and distribute files across external systems. Despite the hate some might give the protocol, it still receives a lot of love from others just because it’s still very compatible with just about everything; heck, even web browsers such as Internet Explorer, Chrome, and Firefox (and many more) have the protocol built right into them.

In my case, I needed to set up an FTP server to help a client with some legacy software they use (and are familiar with). This blog is more or less just the steps I took to make it work in case anyone else is interested.

Security Concerns
FTP performs all of its transactions in ‘plain text’, including its authentication. This wasn’t a problem back in 1980 when online security wasn’t an issue. It also isn’t a problem for sites offering anonymous file hosting services. But for everyone else, it pretty much leaves you susceptible to privacy issues and possible intrusions.

FTPS (not to be confused with SFTP) is a way of securing the FTP protocol for the systems that require it, allowing you to eliminate the ‘plain text’ problem. But it requires that the client use software that can take advantage of it.

Some additional security concerns I wanted to consider:

  • Separate user accounts from the system ones. We don’t want people trying to guess our root password, or accessing anyone’s home directory, unless we specifically configure the server to allow it.

Setup VSFTPD
VSFTPD stands for Very Secure FTP Daemon and provides all the flexibility we need. Its official website can be found here.

The following will set up VSFTPD in your CentOS/RedHat environment in an isolated manner. You see, we want to disconnect the users that currently access your system from the users we create for VSFTPD, for more flexibility and control.

# First fetch the required packages
# db4:       The Berkeley Database (Berkeley DB) which is a
#             quick and dirty way of storing our user accounts.
# db4-utils: This provides a tool we'll use to build a small user
#             database with.  Technically you can uninstall (just)
#             this package afterwards for added security.
# openssl:   This is used to grant our FTP server FTPS support
# vsftpd:    The FTP/FTPS Daemon itself
yum -y install db4 db4-utils openssl vsftpd

# Create a password file: /etc/vsftpd/users.passwd
# This file contains all of the users you want to allow on the
# site in the structure:
# Line: Entry
#  1  | USERNAME1
#  2  | PASSWORD1
#  3  | USERNAME2
#  4  | PASSWORD2
# The below creates a simple user 'foobar' with a password
# of 'barfoo'.  This isn't the safest way to build your password
# file because it leaves a person's password visible in the
# shell history... But for the purpose of this tutorial:
echo foobar > /etc/vsftpd/users.passwd
echo barfoo >> /etc/vsftpd/users.passwd
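
# A safer (hypothetical) alternative is to prompt for the values
# interactively so neither ends up in your shell history:
#   read -p  "Username: " FTPUSER
#   read -sp "Password: " FTPPASS; echo
#   printf '%s\n%s\n' "$FTPUSER" "$FTPPASS" > /etc/vsftpd/users.passwd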

# Protect our password file now from prying eyes:
chmod 600 /etc/vsftpd/users.passwd
chown root.root /etc/vsftpd/users.passwd

# Prepare a directory we want the foobar user to send its files
# to. You can also just use a directory you already have in place
# or someone else's home directory.
mkdir -p /var/ftp/foobar

# Set a comfortable permission on this directory, granting the ftp
# user (or whichever user account you later assign to this virtual
# user) read/write access
chown root.ftp /var/ftp/foobar
chmod 775 /var/ftp/foobar

# Convert our plain-text password file into the Berkeley Database
# format. This is the only command that requires the db4-utils
# rpm package, which you can uninstall for security reasons
# after you run the below command.  You'll need to re-install
# it though if you ever want to add or update accounts
db_load -T -t hash \
   -f /etc/vsftpd/users.passwd /etc/vsftpd/virtual.users.db

# Protect our new (Berkeley) database:
chmod 600 /etc/vsftpd/virtual.users.db
chown root.root /etc/vsftpd/virtual.users.db

# Prepare our virtual user directory; this is where we can
# optionally place over-riding configuration for each user
# we created in the Berkeley database above.
mkdir -p /etc/vsftpd/virtual.users
chmod 700 /etc/vsftpd/virtual.users
chown root.root /etc/vsftpd/virtual.users

# Create PAM Module that points to our new database.
# Note: you do not provide the '.db' extension when creating
#       this file. The file is valid as you see it below.
cat << _EOF > /etc/pam.d/vsftpd-virtual
auth     required pam_userdb.so db=/etc/vsftpd/virtual.users
account  required pam_userdb.so db=/etc/vsftpd/virtual.users
session  required pam_loginuid.so
_EOF

# Protect our Module
chmod 644 /etc/pam.d/vsftpd-virtual
chown root.root /etc/pam.d/vsftpd-virtual

# Create an empty jail directory.  This is used for default
# configurations only. A well configured system won't even use
# this; but it's still good to have since we'll be referencing
# it in our configuration. This will become the default
# directory a user connects to if they aren't otherwise
# configured to go to another location.
mkdir -p /var/empty/vsftpd/
chown nobody.ftp /var/empty/vsftpd/
chmod 555 /var/empty/vsftpd/

# Now we want to allow FTPS support, we'll need an SSL key to
# do it with.  If you already have one, you can skip this step.
# Otherwise, the following will just generate you a self-signed
# key as a temporary solution.
openssl req -nodes -new -x509 -days 730 -sha1 -newkey rsa:2048 \
   -keyout /etc/pki/tls/private/nuxref.com.key \
   -out /etc/pki/tls/certs/nuxref.com.crt \
   -subj "/C=7K/ST=Westerlands/L=Lannisport/O=NuxRef/OU=IT/CN=nuxref.com"

# Protect our Keys
chmod 400 /etc/pki/tls/private/nuxref.com.key; # Private Key
chmod 444 /etc/pki/tls/certs/nuxref.com.crt; # Public Certificate

# Create ourselves a little banner we can use to at least alert
# human intruders that they are in fact being monitored.  This
# scare tactic may or may not work, but if you ever have a breach
# of security, you may need to reference that you gave the user
# ample warning that they were violating someone's rights by
# continuing.  Feel free to adjust the banner to your liking.
cat << _EOF > /etc/banner
* - - - - - - W A R N I N G - - - - - - - W A R N I N G - - - - - *
*                                                                 *
* The use of this system is restricted to authorized users. All   *
* information and communications on this system are subject to    *
* review, monitoring and recording at any time, without notice or *
* permission.                                                     *
*                                                                 *
* Unauthorized access or use shall be subject to prosecution.     *
*                                                                 *
* - - - - - - W A R N I N G - - - - - - - W A R N I N G - - - - - *
_EOF

# Protect our banner
chmod 640 /etc/banner

At this point we have our environment set up the way we want it. The next step is to create our VSFTPD configuration.

# Let's first back up the original configuration file
mv /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd.conf.orig

# Create new configuration
cat << _EOF > /etc/vsftpd/vsftpd.conf
# --------------------------------------------------------------
# Base Configuration
# --------------------------------------------------------------
anon_world_readable_only=NO
anonymous_enable=NO
chroot_local_user=YES
hide_ids=YES
listen=YES
local_enable=YES
max_clients=10
max_per_ip=3
nopriv_user=ftp
pasv_min_port=64000
pasv_max_port=64100
session_support=NO
user_config_dir=/etc/vsftpd/virtual.users
userlist_enable=YES
use_localtime=YES
xferlog_enable=YES
xferlog_std_format=NO
log_ftp_protocol=YES
pam_service_name=vsftpd-virtual
banner_file=/etc/banner
reverse_lookup_enable=NO
# --------------------------------------------------------------
# Secure Configuration (FTPS)
# --------------------------------------------------------------
ssl_enable=YES
virtual_use_local_privs=NO
allow_anon_ssl=NO
# Forcing SSL disables the plain FTP portion of your site; it
# will only operate as FTPS.  This may or may not be what you
# want.
force_local_data_ssl=NO
force_local_logins_ssl=NO
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
# Point to our certificates
rsa_cert_file=/etc/pki/tls/certs/nuxref.com.crt
rsa_private_key_file=/etc/pki/tls/private/nuxref.com.key
require_ssl_reuse=NO
ssl_ciphers=HIGH:!MD5:!ADH
# --------------------------------------------------------------
# FTP Configuration
# --------------------------------------------------------------
async_abor_enable=YES
ftp_data_port=20
connect_from_port_20=YES
# --------------------------------------------------------------
# Default Anonymous Restrictions (over-ride per virtual user)
# --------------------------------------------------------------
guest_enable=NO
guest_username=nobody
# Default home directory once logged in
local_root=/var/empty/vsftpd
# write_enable is required if the user is to make use of any of
# the anon_* commands below
write_enable=NO
# give the user the ability to make directories
anon_mkdir_write_enable=NO
# give the user the ability to delete and overwrite files
anon_other_write_enable=NO
# give the user the ability to upload new files
anon_upload_enable=NO
# Give the user permission to do simple directory listings
dirlist_enable=NO
# Give the user permission to download files
download_enable=NO
# if the user can upload or make new directories, then this
# will be the umask applied to them
anon_umask=0002
# delete failed uploads (speaks for itself)
delete_failed_uploads=YES
_EOF

# Protect our configuration
chmod 600 /etc/vsftpd/vsftpd.conf
chown root.root /etc/vsftpd/vsftpd.conf

Technically we’re done now, but because we intentionally specified very restrictive user access rights, the foobar user we created can only connect to the /var/empty/vsftpd directory with no access rights. Therefore, our final step is to create an additional configuration file for the foobar account, granting it read/write access to /var/ftp/foobar.

# The file you write to in /etc/vsftpd/virtual.users/ 'must' have
# the same name as the user whose permissions it over-rides!
cat << _EOF > /etc/vsftpd/virtual.users/foobar
# --------------------------------------------------------------
# User
# --------------------------------------------------------------
guest_enable=YES
# Set this to any system user you want
guest_username=ftp
local_root=/var/ftp/foobar
# --------------------------------------------------------------
# Permissions
# --------------------------------------------------------------
# write_enable is required if the user is to make use of any of
# the anon_* commands below
write_enable=YES
# give the user the ability to make directories
anon_mkdir_write_enable=YES
# give the user the ability to delete and overwrite files
anon_other_write_enable=YES
# give the user the ability to upload new files
anon_upload_enable=YES
# Give the user permission to do simple directory listings
dirlist_enable=YES
# Give the user permission to download files
download_enable=YES
# if the user can upload or make new directories, then this
# will be the umask applied to them
anon_umask=0002
# delete failed uploads (speaks for itself)
delete_failed_uploads=NO
_EOF

# Protect our foobar permission file
chmod 600 /etc/vsftpd/virtual.users/foobar
chown root.root /etc/vsftpd/virtual.users/foobar
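
Adding more virtual users later follows the exact same pattern; here’s a sketch for a hypothetical second user barfoo (with password foobar):

# Append the new credentials (username line, then password line)
echo barfoo >> /etc/vsftpd/users.passwd
echo foobar >> /etc/vsftpd/users.passwd

# Rebuild the Berkeley database (db4-utils must be installed)
db_load -T -t hash \
   -f /etc/vsftpd/users.passwd /etc/vsftpd/virtual.users.db

# Give the new user their own landing directory ...
mkdir -p /var/ftp/barfoo
chown root.ftp /var/ftp/barfoo
chmod 775 /var/ftp/barfoo

# ... and their own over-ride file (same name as the user)
cp /etc/vsftpd/virtual.users/foobar /etc/vsftpd/virtual.users/barfoo
sed -i -e 's|/var/ftp/foobar|/var/ftp/barfoo|g' \
    /etc/vsftpd/virtual.users/barfoo
chmod 600 /etc/vsftpd/virtual.users/barfoo

# Restart VSFTPD for the change to take effect
service vsftpd restart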

That’s it; you can now start VSFTPD at any time:

# Have VSFTPD start after each system reboot
chkconfig --level 345 vsftpd on

# Start VSFTPD if it isn't already running
service vsftpd status || service vsftpd start

It’s worth noting that if you ever change any of the configuration or add more, you will need to restart the VSFTPD server in order for the changes you made to take effect:

# Restarting the vsftpd server is simple:
service vsftpd restart

Firewall Configuration
The FTP firewall configuration can get complicated, especially since the ephemeral ports it chooses to open are random (when operating in passive mode). If you scan through the configuration file above, you’ll see that we’ve constrained this range to between 64000 and 64100.

In a nutshell, if you choose to use/enable FTPS (which I strongly recommend you do), the firewall configuration will look like this (/etc/sysconfig/iptables):

#....
#---------------------------------------------------------------
# FTP Traffic
#---------------------------------------------------------------
-A INPUT -p tcp -m state --state NEW --dport 21 -j ACCEPT
# The ip_conntrack_ftp module can look after these ports for
# non-encrypted ftp connections; however, for encrypted sessions
# it can't spy on the traffic, so we skip the module and just
# open the port range ourselves
-A INPUT -p tcp -m state --state NEW --dport 64000:64100 -j ACCEPT
#...

However, if (and only if) you choose not to use FTPS and strictly operate using FTP, then your configuration will look as follows:

  1. /etc/sysconfig/iptables
    #....
    #---------------------------------------------------------------
    # FTP Traffic
    #---------------------------------------------------------------
    -A INPUT -p tcp -m state --state NEW --dport 21 -j ACCEPT
    #...
  2. /etc/sysconfig/iptables-config
    This file requires you to add ip_conntrack_ftp to the IPTABLES_MODULES variable near the top of the file. You may also need to update /etc/sysconfig/ip6tables-config if you are using IPv6; the change is the same. This keeps the entire 64000 to 64100 port range closed by default; through packet sniffing, the ports are opened on demand.
#...
IPTABLES_MODULES="ip_conntrack_ftp"
#...

If you’re not running any other modules, you can use the following one-liners to update the files:

sed -i -e 's/^IPTABLES_MODULES=.*/IPTABLES_MODULES="ip_conntrack_ftp"/g' \
    /etc/sysconfig/iptables-config
sed -i -e 's/^IPTABLES_MODULES=.*/IPTABLES_MODULES="ip_conntrack_ftp"/g' \
    /etc/sysconfig/ip6tables-config

Be sure to reload your firewall configuration once you have these in place:

# Restart the firewall
service iptables restart

Fail2Ban Brute Force Configuration
The final thing you should consider, if your server will be available via the internet, is some brute force prevention. I really recommend you read my blog on Securing Your CentOS 6 System, specifically my blurb on Fail2Ban, which I think all systems should always have running. Fail2Ban allows you to track everyone hitting your FTP server and take immediate action to prevent further access by a potential intruder.

The configuration is as follows (remember to set the email to what you want where I’ve specified your@email.goes.here, so you can be notified of any intrusions that take place):

cat << _EOF >> /etc/fail2ban/jail.conf
[my-vsftpd-iptables]

enabled  = true
filter   = vsftpd
action   = iptables[name=VSFTPD, port=ftp, protocol=tcp]
           sendmail-whois[name=VSFTPD, dest=your@email.goes.here]
logpath  = /var/log/vsftpd.log
_EOF

Be sure to reload your Fail2Ban configuration once this is done:

# Restart Fail2Ban
service fail2ban restart

Test Our Configuration
Make sure you have an FTP tool installed in your environment, like lftp, or even a GUI-based tool like FileZilla that supports both FTP and FTPS. The old Linux tool ‘ftp’ will only allow you to test the unencrypted connection.

# Install lftp to keep things simple
yum -y install lftp

# Test our server (FTP)
[root@nuxref ~]# lftp ftp://foobar:barfoo@localhost
lftp foobar@localhost:~> pwd
ftp://foobar:barfoo@localhost
lftp foobar@localhost:~> exit

# Test our server (FTPS)
[root@nuxref ~]# lftp ftps://foobar:barfoo@localhost
lftp foobar@localhost:~> pwd
ftps://foobar:barfoo@localhost
lftp foobar@localhost:~> exit

Now let’s test the basic FTP portion (plain text) with the standard ftp client:

# Connect to our server
[root@nuxref ~]# ftp localhost
Connected to localhost (127.0.0.1).
220-* - - - - - - W A R N I N G - - - - - - - W A R N I N G - - - - - *
220-*                                                                 *
220-* The use of this system is restricted to authorized users. All   *
220-* information and communications on this system are subject to    *
220-* review, monitoring and recording at any time, without notice or *
220-* permission.                                                     *
220-*                                                                 *
220-* Unauthorized access or use shall be subject to prosecution.     *
220-*                                                                 *
220-* - - - - - - W A R N I N G - - - - - - - W A R N I N G - - - - - *
220 
Name (localhost:nuxref): foobar
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
227 Entering Passive Mode (127,0,0,1,250,83).
150 Here comes the directory listing.
226 Directory send OK.
FileZilla FTPS Configuration

If you plan on using FileZilla as your solution, you need to configure it to connect using the FTP protocol with the encryption set to Requires explicit FTP over TLS, similar to the screenshot I provided.

You may have to accept the certificate we created earlier in this blog afterwards.

FileZilla On Going FTPS Bug
The FileZilla Client is a pretty sweet application for those who like to work with a GUI instead of a command line. Those who choose to test their configuration with it should just know that there is an outstanding bug involving FileZilla and the FTPS protocol. Hence, if you’re using FileZilla to test your new VSFTPD server and it’s not working, it might not be your configuration at fault at the end of the day. The versions that trigger the bug seem to be hit and miss; there are reports of v3.5.2 working and v3.5.3 not. That all said, I’m using v3.7.3 and am not having a problem.

Here is ticket #7873, which identifies the problem. One thing mentioned there is that an earlier version of FileZilla (specifically v3.5.2) works perfectly fine; I’ve also had no problem with the current version (v3.7.3 at the time). I guess my main point is… don’t panic if you see this error; it’s not necessarily anything you’ve configured incorrectly. If you followed this blog, then you shouldn’t have any issue at all.

Credit
I may not blog often; but I want you to be assured of the stability and testing I put into everything I intend to share.

If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least; it’s really all that I ask of you.
