Brogue

Brogue: A Roguelike Game for CentOS 6


Introduction
I don’t intend to post much on Linux gaming; in fact, this blog entry is quite uncharacteristic of me. But Brogue is a fantastic roguelike game for most operating systems (Linux, Mac OS X, and Windows). It’s completely open source (license- and cost-free) and available to anyone who wants it. The game has a huge following on its forum. It’s also worth pointing out that all of the development credit for this game goes to its author, Brian Walker.
Pixel Dungeon
I was inspired to look up Brogue and build it properly for CentOS/Red Hat (including Fedora 20) after playing Pixel Dungeon (GitHub/Wiki), which is available for all Android devices. Pixel Dungeon credits Brogue for all the success it’s had, so it only felt right to help the Brogue community out and package the game for Red Hat based systems.

How Does It Work?
You start off as a character in a dungeon, denoted by an at sign (@). Using the arrow keys on your keyboard and/or your mouse, you simply navigate your way around. Your goal is to reach the 26th floor, where a prize awaits you (The Amulet of Yendor). You’ll face monsters, starvation, and curses, and you’ll need a lot of luck the deeper you go. All of the levels, gear, and monsters are randomly generated, so each time you play the game you will have a different experience. Some experiences are hard, some harder, and some virtually impossible. Roguelike games (like this one) imply that you will die. Most games are short; but each time you play, you’ll learn new things and make it further into the game. It’s important not to get discouraged, but rather to give the game a fair chance. In fact, since every single play-through is different, you won’t be bored by the repetitive nature of rinsing and repeating what you’ve already done over and over again. It’s quite the opposite, actually. You’ll learn things that didn’t work for you last time; but you may not get to apply that knowledge on the next play-through since you’ll be in a completely different dungeon.
Brogue Brimstone Battle
The in-game graphics may impress some and not others. But hopefully most people have come to realize that graphics don’t make a game; it’s the fun of it that truly fulfils the experience.

Again, I’ll state that when you first dive into this game it can be frustrating. You are going to die a lot until you grasp the concepts. The developer thought about those who just want to hack and slash and win every time they play the game. For this reason, he created a file called the seed catalogue which dictates what will be available to you as you explore deeper into the dungeon. You can manipulate this catalogue and make your experience more enjoyable (that is… if enjoyable means easy). If you choose this route, then perhaps over time, as you pick up new strategies, you can limit the catalogue’s contents or just use its defaults. It’s up to you!

Note: If you want to update the catalogue, it becomes available to you after you start the program up for the first time. It’s located at ~/.config/brogue/Brogue seed catalog.txt. You can adjust the content of a specific seed and choose that seed in the game by pressing <ctrl>-N on your keyboard. For example: pressing <ctrl>-N and entering 21 is known to give you a game that is fairly generous to new users. Apparently seed 1331532419 is a good one to start with too (source).
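
Since the catalogue is just a plain text file, you can also grep it from the shell before committing to a seed. A small sketch (the search term is purely illustrative; use whatever item you’re hunting for):

# List catalogue entries mentioning an item you'd like to find,
# with some surrounding context:
grep -i -B 20 "strength" ~/.config/brogue/"Brogue seed catalog.txt" | less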

How Do I Get Brogue?
I thought you’d never ask; I’ve packaged the whole thing along with all of its dependencies. The easiest way to get it is from the repository I host.

# Visit http://nuxref.com/repo/ and set up a link to the
# repository I host, then run the following:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           brogue

I went ahead and packaged this game for Fedora 20 users as well. Since this repository is somewhat incomplete and I don’t intend on completing it, it’s great to have around for the gamers:

# You'll need to be root to run any of the below commands.

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository.
# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enabled=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Fedora 20 - \$basearch
baseurl=http://repo.lead2gold.org/fedora/20/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

# Note: the nuxref-shared stanza below (needed by the install
# command that follows) is my assumption, mirroring the CentOS
# repo layout used later in this post; adjust the baseurl if
# your mirror differs.
[nuxref-shared]
name=NuxRef Shared - Fedora 20 - \$basearch
baseurl=http://repo.lead2gold.org/fedora/20/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# Now you can easily install it as follows:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           brogue

But if you want the required RPMs directly from this blog you can get them here:

Getting Started
After you’ve installed the game, you’re ready to begin playing it. It’s truly worth reading the tips for new gamers posted on the Official Brogue Website.
Gnome Launcher
You can launch the game from the Menu bar in Gnome, or you can simply type brogue on the command line. A script I prepared will automatically start everything up for you.
Brogue Main Menu

The only other thing worth noting is that all of your configuration and high scores are stored in the ~/.config/brogue directory.

Good Luck!

Sources

  • Official Brogue Website: I can’t encourage you enough to check this link out. Especially for the invaluable tips and playing advice it provides to new users.
  • Brogue Online Forum: A great place to meet new people and discuss the game. You can use this location to report any issues or pick up strategies of your own.
  • Official Brogue Wiki: This is a fantastic resource to learn the environment and about all the things that reside in the game.

I also want to give credit for the graphic I photoshopped as part of my title for this blog. It came from an author who goes by 88grzes, who publicly posted it on DeviantArt here.

Multicast File Transfer Solution

Mass File Distribution Using Multicasting


Introduction
Multicasting has its pros and cons just like everything else. But it is often an overlooked solution to a common business problem: How do I transfer a file to multiple (subscribed) locations at the same time?
Most administrators or developers will come up with a solution that involves sending the file to each site using a common protocol such as SFTP, SCP, RSYNC, FTP, RCP, etc. There is no doubt that these solutions work. However, they require you to send the file to each location individually. Hence, if you need to send a 10MB file to 100 remote locations, you’ll need to push 1GB (10MB x 100) of data across your local network to do it.
Traditional Protocol File Transfers
A multicasting solution saves you this effort and bandwidth by allowing you to send the file once and have all 100 sites collectively store it onto their systems at (relatively) the same time. Now, with respect to the diagram below (and the rest of this blog), I’m focusing entirely on the amazing effort Dennis Bush put into UFTP. It is this tool that makes all of this possible. UFTP is one of those diamonds in the rough that I don’t think gets enough attention for the value it brings.
Multicast File Transfer using UFTP

Multicast in a Nutshell
A multicast boils down to just being an IP address anywhere in the 224.0.0.0/4 network range. It uses the User Datagram Protocol (UDP) to communicate with anyone who chooses to read and write data from this address. Routers can be configured to share this address across networks so that everyone may join in. It’s similar to a chat room on the internet: everyone joins together, and anything one person writes, everyone else in the channel can see too.
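
If you’d like to see this behaviour for yourself before involving UFTP at all, a plain multicast pipe is easy to improvise. A minimal sketch, assuming socat is installed and the group address 239.255.0.1 is free on your LAN:

# On one or more receivers: join the multicast group and print
# whatever arrives on port 5000:
socat UDP4-RECVFROM:5000,ip-add-membership=239.255.0.1:0.0.0.0,fork -

# On the sender: one datagram is delivered to every listener at once:
echo "hello everyone" | socat - UDP4-DATAGRAM:239.255.0.1:5000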

Multicasting no doubt has its drawbacks. But I figure that it’s not worth dwelling on what you can’t do with the (multicast) protocol on its own, since UFTP (the tool this blog focuses on for mass file distribution) has solved many (if not all) of these problems for us.

Why Multicasting, Who Uses it?
Multicasting saves bandwidth. Think of your Cable TV provider; when you change a channel, you’re actually just subscribing to a new multicast address along with the 100+ million other subscribing customers. Multicasting allows Cable TV companies to broadcast every channel at once, and those who are interested in a specific channel will receive it. Regardless of what channel you change to, there is no additional load sustained by the cable provider.

System administrators may not even know they’re using it if they are managing a cluster. Most clusters use multicast as a way of passing their heartbeats to all other nodes in an effort to maintain quorum.

Now, with respect to UFTP: if you visit its official website, you’ll see that the Wall Street Journal used this tool in the past to send their newspapers to remote printing plants and other outlets across the United States.

UFTP in a Nutshell
UFTP is a file transfer solution that wraps itself around the multicast protocol while addressing the deficiencies that come with it. It is a well designed server/client application that allows one to easily transfer files from one location to as many clients as you want in one shot. It can drastically save your company on bandwidth and has been around long enough to be deemed a reliable business decision. It’s worth noting that UFTP is written in C, making it incredibly fast and lightweight on system resources. It works on all platforms, but I’ll specifically focus on CentOS and the rpms I’ve packaged.

To use the software, you simply pass it a file and it looks after all the dirty work of guaranteeing its delivery to all of the clients subscribed to the address it broadcasts on. The author thought of everything when developing his tool; encryption of the data is always available to you as well if you prefer. He developed 3 main tools that you can manipulate to feed your data anywhere.
A Simple UFTP Environment

  • uftp (The Server – Docs): A simple command line tool that takes a file and broadcasts it to all of the listening clients. This is the exact reverse of the traditional ftp, sftp, scp, etc. tools, where they become the client and need to connect to a server to perform their tasks.
  • uftpd (The Client – Docs): A daemon that starts up and listens on a multicast address for new files being broadcasted by the server. Again, you’ll notice that with multicasting, roles are reversed from what you’d normally be used to: the traditional protocols (FTP, SFTP, SCP, etc.) usually have daemons hosting the server side of things, not the client. You’ll want to run this daemon at all locations where you wish to receive content broadcasted by the server (uftp).
  • uftpproxyd (The Proxy – Docs): The proxy allows you to tunnel your uftp multicast across a network that doesn’t support multicasting (such as the internet). This allows clients in other controlled networks (separated by one you can’t control) to additionally be part of your file distribution.

    The UFTP Proxy has 3 main modes it operates as:

    • Server Mode: This is used for pushing content upstream. This would effectively sit on the same server you call ‘uftp’ from. It listens just like any other client would and passes all information it receives to connected UFTP Proxies configured with the client mode.
    • Client Mode: This communicates with a UFTP Proxy configured for Server Mode and mimics the ‘uftp’ by broadcasting the same data to the local multicast address. This allows all local UFTP Clients to retrieve the file(s).
    • Response Mode: This is used to help take some of the load off the server if there are many clients. Although the file is only broadcasted once, there is a lot of handshaking that goes on at the start and end of the transmission to guarantee all data was delivered successfully. Depending on the different networks, their medium, and their reliability, a server may need some extra help with the handshaking if there are a lot of clients struggling to retrieve the data.

    Below shows an example of how the proxy can be utilized:
    A UFTP Proxy Configuration
    Note: Pinholes are used as a way to connect back to the UFTP Proxy Server, effectively requiring no firewall changes to be made at each client site (only the server).

    Note: A single Proxy Server can only be configured to connect to a single Proxy Client; it’s a 1 to 1 mapping. If you have multiple sites you need to connect to, you’ll need to set up an individual Proxy Server for each Proxy Client you need to serve.

    If you’re interested in the proxy portion of UFTP, you can read the official documentation about it found here.

Hand Over Everything
I wouldn’t have it any other way:

  • uftp-4.5.1-1.el6.nuxref.x86_64.rpm: The server is just the uftp tool and is really easy to use. This assumes you’ve got clients configured somewhere listening though!
    # just type uftp file.you.want.to.send
    uftp mytestfile
    
    # You can also send multiple files by just adding them to the end of
    # the string:
    uftp mytestfile1 mytestfile2 mytestfile3
    
    # I also wrote a small script that works the same way and sends stuff encrypted
    uftpe mytestfile1 mytestfile2 mytestfile3
    
  • uftp-client-4.5.1-1.el6.nuxref.x86_64.rpm: The client listens for data sent from the uftp server. You can use the RC Script I prepared to greatly simplify this tool:
    # as root; use the RC Script I wrote to make running the
    # client daemon really easy
    service uftpd start
    

    Filtering is optional; if you don’t specify any, then by default there are no restrictions. Sometimes this is satisfactory (especially in closed or isolated networks). I attempted to simplify UFTP’s built-in server filtering for those who want to use it though. You see, not only can you encrypt the data you transmit from the server, but you can also restrict the client to only accept connections from specific UFTP servers residing at specific hosts with a specific server id. You can even go as far as only accepting servers with a specific fingerprint (created by their private key).

    # simply drop a configuration file in /etc/uftpd/servers.d 
    # Examples of accepted entries (taken from uftpd man page):
    # 0x11112222|192.168.1.101|66:1E:C9:1D:FC:99:DB:60:B0:1A:F0:8F:CA:F4:28:27:A6:BE:94:BC
    # 0x11113333|fe80::213:72ff:fed6:69ca
    #
    # You can have as many files as you want in this directory with as many entries in each
    # as you want.  If you add or remove new files, you'll need to restart the uftpd service
    # since it's only read in at the start.
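    #
    # A hedged example (the id and address below are made up;
    # swap in your own):
    echo '0x00000001|192.168.1.10' > /etc/uftpd/servers.d/myserver.conf
    service uftpd restart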
    

    So, having added this bit of complexity, I know what you’re asking yourself:

    • Q: How do I know what my Server ID is?
      A: By default (using the rpm I’ve packaged), every server has an ID of 0x00000001 (decimal value of 1 – one) if you use the uftpe script I wrote (for encrypted transfers). To change your server id, do the following:

      # Define a new id (other than 1)
      NEWID=2
      
      # Simply store this ID inside of the $HOME/.uftp config
      # file as it's HEX value:
      
      # Ensure the config directory exists
      [ ! -d $HOME/.uftp ] && mkdir -p  $HOME/.uftp
      
      # clear any old entry you may have set if you want:
      [ -f $HOME/.uftp/config ] && sed -i -e '/^UFTP_UID=/d' $HOME/.uftp/config
      
      # Set the new entry
      printf 'UFTP_UID=0x%.8x\n' $NEWID >> $HOME/.uftp/config
      

      If however you’re just using the uftp tool on its own, it takes on the IP address of the host it’s running on (as its hex value) unless you explicitly specify -U 0x00000002 (or whatever ID you want it to assume). Here is a quick example of how you can convert an IP address to its hex value at the shell:

      # Define the address you want to convert
      IP_ADDR=192.168.1.128
      # Now convert it (don't forget the brackets!)
      (
         printf '0x'
         printf '%02X' $(echo "${IP_ADDR//./ }"); echo
      )
      # example above outputs: 0xC0A80180
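      # And a little extra: reverse the conversion to sanity-check
      # the value (plain bash/printf, nothing uftp-specific):
      HEX=C0A80180
      printf '%d.%d.%d.%d\n' 0x${HEX:0:2} 0x${HEX:2:2} 0x${HEX:4:2} 0x${HEX:6:2}
      # example above outputs: 192.168.1.128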
      
    • Q: How do I know what my Server Fingerprint is?
      A: First off, with this option set, the client will never accept a session from your server unless you use the uftpe script I wrote (or provide the necessary switches to enforce encryption). Setting a filter on your server’s fingerprint is effectively a way of saying you only accept connections that are encrypted. This entry is completely optional. Your fingerprint ID is stored in $HOME/.uftp/uftp.key. Again, this key only exists if you’re using the uftpe tool; otherwise the key is wherever you chose to store it. Fetch the id as follows (the below is intended to be called on the server where the uftp.key file exists):

      # fetch the details from the key:
      uftp_keymgt $HOME/.uftp/uftp.key
      

      Don’t panic if you don’t have a key; one is generated automatically the first time you run the uftpe tool. The easiest way to pre-create it is to call uftpe by itself (without any parameters). Yes, it’ll spit out an error telling you that you didn’t provide enough options, but the script will still generate a key for you automatically.

    I tried to think of everything; logging and log rotation are already built in and included. You can find the log in /var/log/uftpd.log.

  • uftp-proxy-4.5.1-1.el6.nuxref.x86_64.rpm: The proxy service can bridge two networks that don’t support multicasting together. I haven’t spent too much time with this area since my environment hasn’t required me to. If you have any information to share about it, feel free to do so and I can expand this area.
  • uftp-debuginfo-4.5.1-1.el6.nuxref.x86_64.rpm (Optional): This is only required if you are debugging this tool.
  • uftp-4.5.1-1.el6.nuxref.src.rpm (Optional): The source RPM for those interested in building the software for themselves.

Some Things To Consider:
When something is explained to be this easy, there is always a catch. I won’t lie, there are a few, which means the UFTP solution may not be for everyone. They are as follows:

  • Multicasting isn’t enabled on most routers by default. If your recipients reside in a network you don’t manage, you’ll want to ask the local administrator to make sure their routers have (Layer 2) multicasting enabled.
  • Multicasting can be a pain in the butt to troubleshoot; although newer routers won’t give you any grief, some of the older ones can fail to relay content correctly to clients who’ve also connected to the same multicast address. Negativity aside though: when it works, it works great! My point is: you’ll need to make sure you’re using (networking) hardware that is relatively new (no older than 2010) where the firmware adequately supports Layer 2 multicasting.
  • The UFTP Proxy attempts to resolve the problem of linking your networks together when they’re separated by ones you don’t control. But consider that unless there are multiple recipients located on each network you connect your proxy to, you aren’t saving any bandwidth choosing this route.
  • Only uftp clients who are online will receive content sent by the uftp server. This can be a deal breaker for some especially if the product being delivered ‘MUST’ reach all of the clients. UFTP does not track who is online and who isn’t. It simply delivers content to those who are present at that time. It’s just like how you can’t watch a television show if you haven’t told your TV’s receiver what channel to be on.
  • If you’re using restrictive firewall settings (hopefully you are!), you’ll want to make sure multicasting is allowed into your client box with the following:
    iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT

    It would be worth adding this to your /etc/sysconfig/iptables file as well.
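
    For reference, here is a minimal sketch of where that rule could sit in /etc/sysconfig/iptables (your chains and surrounding policy will differ):

    *filter
    :INPUT ACCEPT [0:0]
    -A INPUT -m pkttype --pkt-type multicast -j ACCEPT
    COMMIT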

Repository
Please note that all of these packages are also within my repository if that makes things easier for you and your deployment.

Sources

  • UFTP: Dennis Bush did a fantastic job with this tool. It is the key to making multicast file transmission powerful and reliable.
  • Multicasting: information in greater depth can be found here.
Usenet Solution

A Usenet Solution For CentOS 6


The Indexer
Google uses its own web crawlers to scan the entire World Wide Web just to create an index of it. For each website they find, they track its name, its content, the language it’s written in, and so on. The result of them doing this is that we get to use their fantastic search engine! A search engine that has made our lives incredibly easy by granting us fast and easily accessible information at our fingertips. Well… Usenet needs the same thing done to it too; and Google isn’t doing it for us! Usenet is a very big world of its own, and without indexing it’s a lot harder to get around in (but not impossible). Thankfully, Usenet is nowhere near the size of the World Wide Web and indexing it is very possible. We can even index it with the personal computer(s) we run at home!

Since just about anyone can index Usenet, one has to think: why index Usenet ourselves if someone’s already doing it for us elsewhere? In fact, there are many sites (and tools) that have already indexed Usenet (some better than others) and are willing to share the results with us. But it’s important to know: it can take a lot of server power, disk space, and network consumption for these site administrators to constantly index Usenet for us. Since most (if not all) of these sites are run by hobbyists doing it for fun, it gets expensive for them to maintain things. For that reason some of them may charge or ask for a donation. If you want to use their services, you should respect their measly request of $8USD to $20USD for a lifetime membership. But don’t get discouraged, there are still a lot of free ones too! Some get by with just advertisement banners on their website or just index once every few days. Other indexers may run constantly (every 1-5 minutes), granting you the latest data available at all times.

Alternatively, we can go as far as running our own Usenet indexer (such as NewzNab), just as the hobbyists mentioned above did. NewzNab will index Usenet on a regular basis. With your own indexer, you can choose to index just the content that appeals to you. You can even choose to offer your services publicly if you want. Just keep in mind that Usenet is huge! If you do decide to go this route, you’ll find it a very CPU and network intensive operation. You may want to make sure you don’t exceed your Internet Service Provider’s (ISP) download limits.

Now back to the Google analogy I started earlier: when you find a link on Google you like, you simply click on it and your browser redirects you to the website you chose; end of story. However, in the Usenet indexing world, once you find something of interest, the Usenet indexer will provide you with an NZB file. An NZB file is effectively a map that identifies where your content can specifically be located on Usenet (but it does not contain the data itself). An NZB file is to Usenet what a Torrent file is to a BitTorrent client. Both NZB and Torrent files provide the blueprints needed to mine (acquire) your data. Both NZB and Torrent files require a Downloader to perform the actual data mining for you.
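
To give you an idea of what such a map looks like, here is a minimal, hand-written NZB skeleton (every value in it is made up for illustration; a real indexer fills these fields in for you):

<?xml version="1.0" encoding="UTF-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <!-- one <file> per Usenet post; the segments identify the articles
       (by message-id) that hold the pieces of that file -->
  <file poster="someone@example.com" date="1405209600" subject="example.bin (1/2)">
    <groups>
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <segment bytes="500000" number="1">made-up-id-1@news.example</segment>
      <segment bytes="500000" number="2">made-up-id-2@news.example</segment>
    </segments>
  </file>
</nzb>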

The Downloader
The Downloader takes an NZB file it’s provided and uses it to acquire the actual data the file maps to. This is the final piece of the puzzle!
Of the list below, you really only need to choose one Downloader. I just listed more than one to give you alternatives to work with. My personal preference is NZBGet because it is more flexible. But its flexibility can also be very confusing (at first). Once you get over its learning curve, and especially the initial configuration, it’s a dream to work with. Alternatively, SABnzbd may be better for the novice if you’re just starting off with Usenet and don’t want much more of a learning curve than you already have.

Either way, pick your poison:

  • NZBGet (rpm/src): NZBGet is written in C++ and designed with performance in mind to achieve maximum download speed while using very little system resources. (Community / Manual)
*Note: I did create one small patch so that the Emailing Post Process script would conform to a few extra SMTP header standards to avoid causing issues with Google Mail and relaying mail to it.
**Note: I also created this patch in a recent update rebuild (Jul 17th, 2014) to fix a few directory paths so the compression tools (unrar and 7zip) can work right away. I also added these compression tools as dependencies to the package so they’ll just be present for you at the start.

You can install NZBGet using the steps below:

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Install NZBGet
yum install -y nzbget \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared

# Grab Template
cp /usr/share/nzbget/nzbget.conf ~/.nzbget

# Protect it
chmod 600 ~/.nzbget

# Start it Up (as a non-root user):
nzbget -D

# You should now be able to access it via: 
#     http://localhost:6789/
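
Once the daemon is up, you can also hand it NZB files straight from the shell. A small sketch (the file path is hypothetical; -A is the append-to-queue switch in the NZBGet builds I’ve worked with):

# Queue a download from the command line:
nzbget -A /path/to/something.nzb
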
  • SABnzbd (n/a): SABnzbd is an Open Source Binary Newsreader written in Python. (Community / Manual)
    Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# There is no RPM installer for this one, we just
# fetch straight from their repository.
# Install git (if it's not already)
yum install -y git

# Grab a snapshot of SABnzbd
git clone https://github.com/sabnzbd/sabnzbd.git SABnzbd

# Start it Up (as a non-root user):
python SABnzbd/SABnzbd.py \
    --daemon \
    --pid $(pwd)/SABnzbd/sabnzbd.pid

# You should now be able to access it via: 
#     http://localhost:8080/

NZBGet Processing Scripts
One of NZBGet’s strongest features is its ability to process content before and after it’s downloaded. The Post Processing (PP) has specifically been one of NZBGet’s greatest features. It allows separation between the function of NZBGet (which is to download the content identified in NZB files) and what you want to do with that content afterwards. Post processing could do anything, such as cataloguing what was received and placing it into an SQL database. Post processing could rename the content and sort it into separate directories for you depending on what it is. Post processing can be as simple as just emailing you when the download completes, or posting on Facebook or Twitter. You’re not limited to just one PP script either; you can chain them and run a whole slew of them one after another. The options are endless.
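
To make this concrete, here is a minimal sketch of what a PP script can look like (it follows the script signature NZBGet v11+ expects as I understand it; treat the header format and variable names as illustrative and check the manual before relying on them):

#!/bin/bash
#######################################
### NZBGET POST-PROCESSING SCRIPT

# Log finished downloads (a toy example).

### NZBGET POST-PROCESSING SCRIPT
#######################################

# Exit codes NZBGet interprets:
POSTPROCESS_SUCCESS=93
POSTPROCESS_ERROR=94

# NZBGet exports the download details as NZBPP_* environment variables:
echo "Finished '${NZBPP_NZBNAME}' in '${NZBPP_DIRECTORY}'" >> /tmp/pp-example.log

exit $POSTPROCESS_SUCCESS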

I’ve taken some of the popular PP scripts from the NZBGet forum and packaged them in self-installing RPMs as well, to make life easy for those who want them. Some of these packages require many dependencies and ports to make the installation smooth. Although I link directly to the RPMs here, you are strongly advised to link to my repository with yum if you haven’t already done so.

  • Failure Link (rpm/src; provides FAILURELINK): If a download fails, the script sends info about the failure to the indexer site, so a replacement NZB (same movie or TV episode) can be queued up if available. The indexer site must support the DNZB-Header “X-DNZB-FailureLink”.

Note: The integration works only for downloads queued via URL (including RSS). NZB-files queued from local disk don’t have enough information to contact the indexer site.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-ppscript-failurelink \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 
  • nzbToMedia (rpm/src; provides DELETESAMPLES, RESETDATETIME, NZBTOCOUCHPOTATO, NZBTOGAMEZ, NZBTOHEADPHONES, NZBTOMEDIA, NZBTOMYLAR, NZBTONZBDRONE, NZBTOSICKBEARD): Provides an efficient way to handle post processing for CouchPotatoServer, SickBeard, NZBDrone, Headphones, and Mylar when using NZBGet on low performance systems like a NAS.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-ppscript-nzbtomedia \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all.

  • Subliminal (rpm/src; provides SUBLIMINAL): Provides a wrapper that integrates NZBGet with subliminal (which fetches subtitles given a filename or filepath). Subliminal computes the correct video hashes and uses the powerful guessit library to ensure you get the best matching subtitles. It also relies on enzyme to detect embedded subtitles and avoid retrieving duplicates.

Multiple subtitle services are available: Addic7ed, OpenSubtitles, SubsWiki, Subtitulos, TheSubDB, and TvSubtitles.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-ppscript-subliminal \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

*Note: python-subliminal (what this PP script is a wrapper to) had some issues I had to address. For one, I eliminated the entire PYPKG/subliminal/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all.
**Note: Subliminal was written using Dict Comprehensions (PEP 274), a feature that wasn’t introduced until Python 2.7. Unfortunately, the developers had no interest in resolving this and closed the issue with ‘Upgrade to Python 2.7 or Python v3.3’. For this reason, subliminal does ‘not’ work at all with CentOS or Red Hat 6.x out of the box. So… I fixed that. Now I can proudly tell you that the copy of subliminal I host on my repository is 100% compatible with python 2.6 (this includes a bit of backported logging functionality too). For those interested in the patch content (so you can backport for other systems), you can view them here:

  • patch #1: Just an update to enforce the correct version of guessit (v0.7.1) and babelfish (v0.5+) to support all of the packages included.
  • patch #2: The core backporting of python 2.4+ support (by eliminating Dict Comprehensions). This patch also includes support for babelfish v0.5.1.
  • patch #3: Add support for guessit v0.7.1.
  • patch #4: subliminal would crash when there weren’t enough details to detect the show or movie. This patch allows subliminal to gracefully handle these scenarios.
  • patch #5 (new as of Aug 21st, 2014): Allows subliminal to not exclusively download hearing-impaired or non-hearing-impaired subtitles, but to accept either (fetching the best matched one).
  • patch #6 (new as of Aug 24th, 2014): If subliminal matched more than one subtitle, it would download them all. This is presumed to be a bug because when I looked through the code, the extra downloads were never referenced. This patch has subliminal stop downloading subs after the first successfully matched one is downloaded.
  • patch #7 (new as of Aug 24th, 2014): This is an extension to patch #5 above. If both hearing-impaired and non-hearing-impaired subtitles are found, this patch allows you to prioritize which one is more important to you (in the event they both score high).
  • patch #8 (new as of Aug 24th, 2014): If a provider is unavailable (its server simply not online), subliminal would previously crash. This patch allows more graceful handling, which just involves moving on to the next provider instead.
  • Video Sort (rpm/src; provides VIDEOSORT): With the post-processing script VideoSort, you can automatically organize downloaded video files.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbget-ppscript-videosort \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

Note: This package includes the removal of the entire PYPKG/libs directory. I replaced all of the dependencies previously defined there with the global ones used by CentOS. The reason for this was that a lot of other packages all share the same libraries; it just didn’t make sense to maintain a duplicate of it all.

Automated Index Searchers
These tools search for already-indexed content you’re interested in and can be configured to automatically download it for you when it’s found. Such a tool doesn’t do the downloading itself, but it will automate the connection between your chosen Indexer and Downloader (such as NZBGet or SABnzbd). For this reason, these tools do not actually search Usenet at all and therefore put very little overhead on your system (or NAS drive).

  • NZBDrone (rpm/src): Automatic TV Show downloader.
    This was only made possible because of the blog I wrote on mono v3.x.

# Note: You must link to the NuxRef repository for this to work!
#      See: http://nuxref.com/repo/

# Installation of this plugin:
yum install -y nzbdrone \
    --enablerepo=nuxref \
    --enablerepo=nuxref-shared 

# Start it Up (as a non-root user):
nohup mono /opt/NzbDrone/NzbDrone.exe &

# You should now be able to access it via: 
#     http://localhost:8989/
  • Sick Beard (n/a): (Another) Automatic TV Show downloader.

    Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Sick Beard
# Note that we grab the master branch, otherwise we default
# to the development one.
git clone -b master https://github.com/midgetspy/Sick-Beard.git SickBeard

# Start it Up (as a non-root user):
python SickBeard/SickBeard.py \
   --daemon \
   --pidfile $(pwd)/SickBeard/sickbeard.pid

# You should now be able to access it via: 
#     http://localhost:8081/
  • CouchPotato (n/a): Automatic movie downloader.

    Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of CouchPotato
git clone https://github.com/RuudBurger/CouchPotatoServer.git CouchPotato

# Start it Up (as a non-root user):
python CouchPotato/CouchPotato.py \
   --daemon \
   --pid_file CouchPotato/couchpotato.pid

# You should now be able to access it via: 
#     http://localhost:5050/
  • Headphones (n/a): Automatic music downloader.

    Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Headphones
git clone https://github.com/rembo10/headphones Headphones

# Start it Up (as a non-root user):
python Headphones/Headphones.py \
   --daemon \
   --pidfile $(pwd)/Headphones/headphones.pid

# You should now be able to access it via: 
#     http://localhost:8181/
  • Mylar (n/a): Automatic comic book downloader.

    Note: I have not packaged this yet, but will probably eventually get around to it. For now it can be accessed from its repository on GitHub, or you can quickly set it up in your environment as follows:

# Install git (if it's not already)
yum install -y git

# Grab a snapshot of Mylar
git clone https://github.com/evilhero/mylar Mylar

# Start it Up (as a non-root user):
python Mylar/Mylar.py \
   --daemon \
   --pidfile $(pwd)/Mylar/Mylar.pid

# You should now be able to access it via: 
#     http://localhost:8090/

Mobile Integration
There are some fantastic apps out there that allow you to integrate your phone with the applications mentioned above, letting you manage your downloads from wherever you are. A special shout out to NZB 360, who recently had his app pulled from the Google Play Store for no apparent reason and had to set up shop outside of it. I can say first hand that his application is amazing! You should totally consider it if you have an Android phone.

Usenet Providers
For those who don’t have Usenet already, it does come at an extra cost and/or fee. The cost averages anywhere between $6 and $20 USD/month (anything more and you’re paying too much). The reason for this is that Usenet is a network completely isolated from the Internet, comprised of its own set of interconnected servers. While the internet is comprised of hundreds of millions of servers all hosting specific content, each Usenet server hosts the entire Usenet database… it hosts everything. If anything is uploaded to Usenet, all of the interconnected servers update themselves with their own local copy of it (to serve us). For this to happen, their servers have to have petabytes of storage. The fee they charge just goes to support their operational costs such as bandwidth, maintenance, and the regular addition of storage to their infrastructure. There is very little profit to be made for them at $8 a person. Here is a breakdown of a few providers (in alphabetical order) I’m aware of and support:

  • Astraweb (servers in US & Europe): Retention: 2158 days (5.9 years). Average cost: $6.66USD/month to $15USD/month; see here for details.
  • Usenet Server (servers in US): Retention: 2159 days (5.9 years); has a free 14 day trial. Average cost: $13.33USD/month to $14.95USD/month; see here for details.

*Note: Table information was last updated on Jul 14th, 2014. Prices are subject to change as time goes on and this blog post isn’t updated.
**Note: If you have a provider that you would like to be added to this list… Or if you simply spot an error in pricing or linking, please feel free to contact me so I can update it right away.

Why do people use Usenet/Newsgroups?

  • Speed: It’s literally just you and another server; a simple 1 to 1 connection. Data transfer speeds will always be as fast as your ISP can carry your traffic to and from the Usenet server you signed up with. Unlike torrents, content isn’t governed by how many seeders and leechers have it available to you. You never have to deal with upload/download ratios, maintain quotas, or sit idle in someone’s queue who will serve data to you eventually.
  • Security: You only deal with secure connections between you and your Usenet provider; no one else! Torrents can have you maintaining thousands of connections to different systems and sharing data with them. With BitTorrent setups, trackers publicly advertise what you have to share and what you’re trying to download. Your activity is visible to anyone using the same tracker that you’re connected to. Not only that, but most torrent connections are insecure as well, which allows virtually anyone to view what you’re doing.

Please know that I am not against torrents at all! In fact, now I’ll take the time to mention a few points where torrents excel over Usenet:

  • Cost: It doesn’t usually cost you anything to use the torrent network. It all depends on the tracker you’re using of course (some private trackers charge for their usage). But if you’re just out to get the free public stuff made available to us, there are absolutely no costs at all to use this method!
  • Availability: Usenet is far from perfect. When someone uploads something to their Usenet provider, by the time this new content propagates to all of the other Usenet servers, there is a small chance the data will be corrupted. This happens with Usenet all of the time. To compensate for this, Usenet users anticipate corruption (sad but true). These people kindly post Parchive files to Usenet to complement whatever they previously uploaded. Parchive files work similarly to how RAID works; they provide building blocks to reassemble data in the event it’s corrupted. Corruption never happens with torrents unless the person hosting decides to host corrupted data. Any other scenario would simply be because your BitTorrent client had a bug in it.
  • Retention: As long as someone is willing to seed something, or enough combined leechers can reconstruct what is being shared, data will stay alive in the BitTorrent world forever. However, with Usenet, the Usenet server is hosting EVERYTHING, which means it has to maintain a lot of data on a lot of disk space! For this reason, a retention period is inevitably met. A time is eventually reached where the Usenet server purges (erases) older content from its hard disks to make room for the new stuff showing up every day.

Honestly, at the end of the day, both torrents and Usenet servers have their pros and cons. Each of us will always weigh them differently. What’s considered the right choice for one person might not be the right one for another. Heck, just use both depending on your situation! :)


.NET Solution

Mono 3.x for CentOS 6


Introduction
Mono allows us to run Windows applications in our Linux environment. It is an open source implementation of Microsoft’s .NET Framework. The problem is, CentOS (and Red Hat) 6.x ship with Mono v2.4, which is quite outdated. You can’t take advantage of the newer apps .NET developers are writing. In fact, you can’t run anything that requires a version of the .NET runtime libraries newer than v3.5.

In addition to that, Mono v3.4 grants your CentOS system support for .NET applications that weren’t otherwise available to you with v2.4:

Compatibility    Mono v2.4    Mono v3.4
.NET 1.0         Yes          No (dropped support)
.NET 2.0         Yes          Yes
C# 3.0           Yes          Yes
ASP.NET 2.0      Yes          Yes
.NET 3.5         Partial      Yes
.NET 4.0         No           Yes
.NET 4.5         No           Yes
C# 4.0           No           Yes
ASP.NET 4.0      No           Yes
C# 5.0           No           Yes

What’s so special about the repackaging you did?
Well, first of all… it’s actually an RPM package. It doesn’t require you to haul in a ton of development libraries and compile everything from scratch. Another point is that Mono v2.4 (shipped with CentOS & Red Hat 6) had many patches applied to it. These patches forced Mono to conform to the common directory structure used natively by our operating system. It took me several hours to recreate all of these patches, forcing Mono v3.4 to comply with the same standards.

Finally (at the time of writing this blog), this is the first packaging of Mono v3.4 that I’ve found that can be installed via an RPM without requiring you to recompile everything yourself. Hence you don’t even need to haul in any development libraries at all; Mono will just work as is. Since my repackaging was based off of the original, I tried to keep all of the external rpm packages the same. That said, I did get a little confused by all of the new packages and binary tools that ship with Mono v3.x. Since I’m not a Microsoft developer, I tried to sort these new packages as best as I could. Please feel free to let me know how I can improve this package if you notice anything I’ve done wrong.

Just hand over all your work already!
Absolutely, here they are:

Binary Packages:

Note: Mono (v3) was a bit picky about the SQLite version it referenced. I had to update that package to a slightly newer version as well for everything to play nicely. Only if you intend to haul in the mono-data-sqlite-*.rpm package will you be required to haul in this newer version. I’ve already provided this on my repository, but for the sceptics who want to build it themselves, I’ll include those instructions too.

Source Packages:

Debug Info Packages

You can get it from my repository too
Content/Repositories are grouped as follows:

  • nuxref - Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared - Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don’t maintain. It may not always be up to date, but it can be thought of as a baseline (where things worked for me and with respect to this blog). If I get around to testing a new package and everything is still working fine, I’ll update the repositories here. I also resign everything with my own key to mark that I’ve tested it.

Install all of the necessary packages:

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository, in which I've had to rebuild a few
# packages to support PostgreSQL as well as fix some bugs in
# others. This step will really make your life easy and lets
# us compare apples to apples with package versions. It also
# allows you to haul in a working setup right out of the box.

# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enable=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# You should use yum priorities (if you're not already) to
# eliminate version problems later, but if you intend on 
# just pointing to my repository and not the others, then you'll
# be fine.
yum install -y yum-plugin-priorities

################################################################
# Install Mono
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-core
################################################################
# Install additional packages too if you wish (depending on your
# needs.
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-web
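
With mono-core in place, a quick smoke test never hurts. A minimal sketch (hello.cs is just a scratch file; the mcs compiler comes with the development packages, so haul those in too if it’s missing):

# Write a tiny C# program...
cat << _EOF > hello.cs
using System;
class Hello {
    static void Main() {
        Console.WriteLine("Hello from Mono v3.4!");
    }
}
_EOF

# ...compile it, then run it:
mcs hello.cs
mono hello.exe
# expected output: Hello from Mono v3.4!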

I Never Trust Your Stuff; Let Me Do It Myself
Sure! First, you’re going to need to fetch all the patches I had to create (plus the old ones carried forward from Mono v2.4):

You can additionally view the RPM SPEC file I created here.

First prepare our development environment with mock if you haven’t already:

# Install 'mock' into your environment if you don't have it already
# This step will require you to be the superuser (root) in your native
# environment.
yum install -y mock

# Grant your normal every day user account access to the mock group
# This step will also require you to be the root user.
usermod -a -G mock YourNonRootUsername
# Download the official mono packages from their official
# hosting site:
wget http://origin-download.mono-project.com/sources/mono/mono-3.4.0.tar.bz2

# Download all of the building blocks you'll need
wget --output-document=mono.spec https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAiqD2KjHhakweKY_mkLLPba/20140713/mono/mono.spec?dl=1
wget --output-document=monodir.c https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABZHv5NeWFICyAAAw--eiJoa/20140713/mono/monodir.c?dl=1
wget --output-document=mono.snk https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADoY6UvThpcQbUHhs7XUecsa/20140713/mono/mono.snk?dl=1
wget --output-document=lc https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACkja0kNxmO1ytHOIw523HTa/20140713/mono/lc?dl=1
wget --output-document=mono-3.4-ppc-threading.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAKrjdqRR826osJjbqZIu5la/20140713/mono/mono-3.4-ppc-threading.patch?dl=1
wget --output-document=mono-1.2.3-use-monodir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABHvxieGDqU8eDB24ghS_Dua/20140713/mono/mono-1.2.3-use-monodir.patch?dl=1
wget --output-document=mono-2.2-uselibdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABSjbMfIRj5JB7HKWydQVpja/20140713/mono/mono-2.2-uselibdir.patch?dl=1
wget --output-document=mono-2.0-monoservice.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADsL0DBfI0VixRAw6uI0Vkpa/20140713/mono/mono-2.0-monoservice.patch?dl=1
wget --output-document=mono-3.4-libgdiplusconfig.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAHvISVzxIPq9xCmw2m2tcPa/20140713/mono/mono-3.4-libgdiplusconfig.patch?dl=1
wget --output-document=mono-3.4-libdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACHrlv_iSp36jhSeOn4ki0fa/20140713/mono/mono-3.4-libdir.patch?dl=1
wget --output-document=mono-3.4-POSIX_ARG_MAX.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADSN5WhjyqTQptMoWthDnYHa/20140713/mono/mono-3.4-POSIX_ARG_MAX.patch?dl=1
wget --output-document=mono-3.4.xamarin.BZ18690.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAwWsEKkOlnzYZCshp29wuwa/20140713/mono/mono-3.4.xamarin.BZ18690.patch?dl=1

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install libpng-devel libjpeg-devel giflib-devel \
   libtiff-devel libexif-devel libX11-devel fontconfig-devel \
   gettext make gcc-c++ bison glib2-devel pkgconfig \
   libicu-devel libgdiplus-devel zlib-devel automake libtool \
   gettext-devel mono-core mediainfo \
   mysql-devel postgresql-devel sqlite-devel

# Copy our packages into our environment
mock -v -r epel-6-x86_64 --copyin mono.spec /builddir/build/SPECS
mock -v -r epel-6-x86_64 --copyin \
    *.patch \
    mono-3.4.0.tar.bz2 \
    mono.snk \
    lc \
    monodir.c \
    /builddir/build/SOURCES

# Shell into our environment
mock -v -r epel-6-x86_64 --shell

# Change to our build directory
cd builddir/build

# Enable Bootstrapping for the first time
# mono actually requires 'mono' (itself) to build. Weird, right?
# But still necessary! For this reason I prepared an easier
# way of enabling bootstrapping for your first build.
#
# Once you install the binaries created from your first build
# we can rebuild the package again (but this time without
# bootstrapping). The purpose of this is to ensure the mono
# binaries and packages we created are equivalent to the
# the bootstrapped content.
# So... on with the bootstrapping; Note: this will take
# 20 to 30 minutes depending on how fast your system is.
rpmbuild -ba --define "_with_bootstrap=1" SPECS/mono.spec

# Now that we've created mono from a bootstrap, we can
# install the package back into our virtual environment
# and rebuild it again. But this time we rebuild it
# without the bootstrap reference.
rpm -Uhi RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm
# Now rebuild the whole thing all over again to confirm
# your build was good; Note: This will take another 20 to 30 
# minutes again...
rpmbuild -ba SPECS/mono.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/mono-3.4.0-1.el6.nuxref.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-oracle-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-postgresql-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-sqlite-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-locale-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-reactive-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-wcf-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-winforms-3.4.0-1.el6.nuxref.x86_64.rpm .
# The debuginfo package will only exist if you successfully rebuilt
# everything without the bootstrap set
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-debuginfo-3.4.0-1.el6.nuxref.x86_64.rpm .

Upgrading SQLite
For me, I just visited pkgs.org and downloaded the Fedora 20 source (-src.rpm) release of SQLite. Then I extracted its contents as follows:

# I can't promise this link will work, as this package is always
# evolving, but if you do the search above, you'll get the idea
wget http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Everything/source/SRPMS/s/sqlite-3.8.1-2.fc20.src.rpm

# Alternatively, you can download the source rpm package I'm
# already hosting:
wget http://repo.lead2gold.org/centos/6/en/source/custom/sqlite-3.8.1-2.el6.src.rpm

# Then extracted it using this neat technique:
rpm2cpio sqlite-*.src.rpm | cpio -idmv

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install ncurses-devel \
    readline-devel glibc-devel autoconf /usr/bin/tclsh \
    tcl-devel

# You'll already have the block you need as nothing is
# changed with this package. We're just using it as is
mock -v -r epel-6-x86_64 --copyin sqlite-*.zip /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin *.patch /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin sqlite.spec /builddir/build/SPECS

# Shell into our environment
mock -v -r epel-6-x86_64 --shell
 
# Change to our build directory
cd builddir/build

# Build our packages (process doesn't take long ~2 min)
rpmbuild -ba SPECS/sqlite.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/sqlite-3.8.1-2.el6.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-devel-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-doc-3.8.1-2.el6.noarch.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/lemon-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-tcl-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-debuginfo-3.8.1-2.el6.x86_64.rpm .

Credit
This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository
This blog makes use of my own repository, which I loosely maintain. If you’d like me to continue to monitor and apply updates, as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!


Authoritative DNS Solution

Configuring a DNS Server on CentOS 6


Introduction
We have been relying on the Domain Name System (DNS) since the dawn of the internet. Simply put: it allows us to access information by a human readable string or recognizable name such as google.com or nuxref.com instead of its actual IP address (which is not as easily memorized). If we didn’t have the DNS, then the internet would not have evolved as far as it has today. The DNS was built on a series of Name Servers that each look after their respective domain (or zone). Our Internet Service Provider (ISP) lends us their DNS servers every day when we connect to them. It’s our wireless router (at home or at work) that passes this server on to our tablet, phone, laptop, etc. when we connect to it.

Here is a simple DNS query taking place, illustrating how most of us are set up today.
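
If you’re curious, you can watch a lookup like this happen from any Linux machine with dig (shipped in the bind-utils package we install later). This is just an illustration and not a required step:

# Ask whatever DNS server /etc/resolv.conf points at to resolve a name:
dig google.com +short

# Or watch the full recursive walk, starting from the root servers:
dig google.com +trace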

Managing our own Authoritative DNS Server allows us to catalog the personal devices we use daily with great ease. If you’re publicly hosting content, an Authoritative DNS server can even be used to distribute the traffic your servers receive, both geographically and evenly. It gives us the ability to dynamically associate names to all of the devices on our network. It’s great for hobbyists and absolutely mandatory for any medium or larger sized company.

PowerDNS is my preferred DNS server solution. I personally prefer it to its long-standing predecessor, Berkeley Internet Name Domain (BIND). BIND has been around since 1984 and has gone through years of hacky patches to get to where it is today. PowerDNS is much younger (its first release was in 1999), but was written without all of the growing pains BIND suffered through from the start. In all fairness, BIND developers were forced to deal with RFCs (Requests for Comments) as DNS continued to evolve into what it is today, whereas PowerDNS already had a stable set of requirements to work with from day one. Not to mention PowerDNS can be easily configured to use alternative backend databases.
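
To give you a feel for what that looks like, here is an illustrative excerpt of a PostgreSQL-backed PowerDNS configuration. The values here are placeholders only; Step 2 below generates the real /etc/pdns/pdns.conf from a template:

# /etc/pdns/pdns.conf (excerpt, placeholder values)
launch=gpgsql
gpgsql-host=localhost
gpgsql-port=5432
gpgsql-dbname=system_dns
gpgsql-user=pdns
gpgsql-password=pdns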

You are reading this blog because you want the following:

  • A fast and reliable Authoritative DNS server with a PostgreSQL database backend.
  • You want a central configuration point; you want everything to be easy to maintain after it’s all set up.
  • You want everything to just work the first time and you want to leave the figuring it out part to the end.
  • Package management and version control is incredibly important to you.
  • You want the ability to catalog your local network by assigning devices on it their own unique (easy to remember) hostnames.
  • You want to maintain the ability to surf the internet by forwarding on requests your DNS server doesn’t know to another that does.
The beauty of running your own Authoritative DNS is that it grants you the ability to catalog and easily access everything on your local network by the hostname you assign.

Here is what my tutorial will be focused on:

  • PowerDNS (v3.x) configured to use a Database Backend (PostgreSQL) giving you central configuration. This tutorial focuses on version 8.4 because that is what ships with CentOS and Red Hat. But most (if not all) of this tutorial should still work fine if you choose to use version 9.x of the database instead.
  • PowerDNS Recursor (v3.x) will be configured to handle any records we don’t otherwise host or override
  • Security Considered
  • Poweradmin (v2.x) will provide our administration of the DNS records we add via its simple web interface.

Please note the application versions identified above, as this tutorial focuses specifically on them. One big issue I found while researching how to set things up on the net was that some tutorials didn’t mention the versions they were using. Hence, when I would stumble across these old article(s) with new(er) software, it would make for quite a painful experience when things didn’t work.

Please also note that other tutorials will suggest that you set up one feature at a time, then test it to see if it worked correctly before moving on to the next step. This is no doubt the proper way to do things. However, I’m just going to give it all to you at once. If you stick with the versions and packages I provide… if you follow my instructions, it will just work for you the first time. Debugging on your end will be a matter of tracing back to see what step you missed.

I tried to make this tutorial as cookie cutter(ish) as I could. Therefore you can literally just copy and paste what I share right to your shell prompt and the entire setup will be automated for you.

Step 1 of 4: Setup Your Environment
This is the key to my entire blog; it’s going to make all of the remaining steps just work the first time for you. All; I repeat All of the steps below (after this one) assume that you’ve set this environment up. You will need to set your environment up again at least once before running through any of the remaining steps below or they will not work.

It’s also important to mention that you will need to be root to configure the DNS server. This applies to all of the steps identified below throughout this blog.

I re-hosted all of the packages I used to successfully pull this blog off. This allows me to host this information and pair it with the software it works against. Feel free to hook up to my repositories to speed up your setup.

Content/Repositories are grouped as follows:

  • nuxref - Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared - Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don’t maintain. It may not always be up to date, but it can be thought of as a baseline (where things worked for me and with respect to this blog). If I get around to testing a new package and everything is still working fine, I’ll update the repositories here. I also resign everything with my own key to mark that I’ve tested it.

Install all of the necessary packages:

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository, in which I've had to rebuild a few
# packages to support PostgreSQL as well as fix some bugs in
# others. This step will really make your life easy and let
# us compare apples to apples with package versions. It also
# allows you to haul in a working setup right out of the box.

# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enable=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# You should use yum priorities (if you're not already) to
# eliminate version problems later, but if you intend on 
# just pointing to my repository and not the others, then you'll
# be fine.
yum install -y yum-plugin-priorities

################################################################
# Install our required products
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           postgresql-server postgresql \
           php-pgsql php-imap php-mcrypt php-mbstring  \
           pdns pdns-backend-postgresql pdns-recursor \
           poweradmin \
           nuxref-templates-pdns

# Also make sure these products are installed as well since we
# use them to manipulate and test some of the data
yum install -y awk sed bind-utils curl

# Choose between NginX or Apache
## NginX Option (a) - This one is my preferred choice:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           nginx php-fpm

## Apache Option (b):
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
            httpd php

# Setup Default Timezone for PHP. For a list of supported
# timezones you can visit here: http://ca1.php.net/timezones
TIMEZONE="America/Montreal"
sed -i -e "s|^[ \t]*;*\(date\.timezone[ \t]*=\).*|\1 $TIMEZONE|g" \
    /etc/php.ini

# Ensure we're not using Strict PHP Handling
sed -i -e 's/^[ \t]*\(error_reporting\)[ \t]*=.*$/\1 = E_ALL \& ~E_STRICT/g' \
    /etc/php.ini 
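
# (Optional sanity check of my own; not part of the original flow
# and it requires the php CLI.) Confirm both php.ini edits took:
php -i | egrep 'date\.timezone|error_reporting'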

################################################################
# Setup PostgreSQL (v8.4)
################################################################
# The commands below should all work fine on a PostgreSQL v9.x
# database too; but your mileage may vary as I've not personally
# tested it yet. You can skip this section if you've already
# got a database running using one of my earlier tutorials.

# Only init the database if you haven't already. This command
# could otherwise reset things and you'll lose everything.
# If your database is already setup and running, then you can
# skip this line
service postgresql initdb

# Now that the database is initialized, configure it to trust
# connections from 'this' server (localhost)
sed -i -e 's/^[ \t]*\(local\|host\)\([ \t]\+.*\)/#\1\2/g' \
    /var/lib/pgsql/data/pg_hba.conf
cat << _EOF >> /var/lib/pgsql/data/pg_hba.conf
# Configure all local database access with trust permissions
local   all         all                               trust
host    all         all         127.0.0.1/32          trust
host    all         all         ::1/128               trust
_EOF

# Make sure PostgreSQL is configured to start up each time
# you start up your system
chkconfig --levels 345 postgresql on

# Start the database now too because we're going to need it
# very shortly in this tutorial
service postgresql start
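
# (Optional, my own sanity check.) Confirm the database is up and
# accepting local connections before continuing:
/bin/su -c "/usr/bin/psql -c 'SELECT version();'" postgres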

To simplify your life, I’ve made the configuration of all the steps below reference a few global variables. The ones identified below are the only ones you’ll probably want to change. May I suggest you paste the below information into your favourite text editor (vi, emacs, etc.), adjust the variables to how you want them, and then paste them back to your terminal screen.

# The following is only used for our SSL Key Generation.
# You can skip SSL Key generation if you've done so using an
# earlier tutorial
COUNTRY_CODE="7K"
PROV_STATE="Westerlands"
CITY="Lannisport"
SITE_NAME="NuxRef"

Now for the rest of the global configuration; there really should be no reason to change any of these values (but feel free to). It’s important that you paste the above information (tailored to your liking) as well as the information below to the command line interface (CLI) of the server you wish to set up.

# PostgreSQL Database
PGHOST=localhost
PGPORT=5432
PGNAME=system_dns
PGRWUSER=pdns
PGRWPASS=pdns

# Identify the domain name of your server here
# I use the .local extension because I only intend to resolve
# internal addresses with my DNS server.  You may wish to use
# a different value.
DOMAIN=nuxref.local

# Configure a recursor, the recursor will cache your database hits
# and will greatly increase performance.  Ideally you want to set
# the recursor address to the address you want to host your server
# on.  This is the same IP address you will add to everyone's
# /etc/resolv.conf later.  This is in fact your name server.
# If you leave this value at 127.0.0.1 your DNS will be restricted
# to just the server you're hosting on.
# If you aren't sure what your IP Address is, you can just type 'ifconfig'
#
# This command may also fetch your ip address:
# cat /etc/sysconfig/network-scripts/ifcfg-* | \
#    egrep '^IPADDR=' | egrep -v '127.0.0.1' | \
#    cut -f2 -d'=' | head -n1
NAMESERVER_ADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-* | \
    egrep '^IPADDR=' | egrep -v '127.0.0.1' | \
    cut -f2 -d'=' | head -n1)
# Alternatively, if you're not reading this and 'only' if the
# above failed we'll just set the address to your local address.
NAMESERVER_ADDR=${NAMESERVER_ADDR:=127.0.0.1}

# The Network that our DNS server resides on is important information
# for security purposes.  We want to only allow recursion on this
# network alone and not others hitting our server.  The below
# looks kind of cryptic, but it's just a method of extracting
# the network information automatically if you don't already
# know it. It may or may not work; it will depend on if you set
# a proper NAMESERVER_ADDR
SUBNET_ADDR=$(/sbin/ifconfig | egrep -m1 $NAMESERVER_ADDR | \
    sed -e 's/.*Mask:\([0-9.]\+\).*/\1/g')
NAMESERVER_PRFX=$(ipcalc -s -p $NAMESERVER_ADDR $SUBNET_ADDR | \
                   cut -f2 -d'=')
# Assign a default in case the above command failed.
NAMESERVER_PRFX=${NAMESERVER_PRFX:=24}

# Calculate our network
NAMESERVER_NWRK=$(ipcalc -s -n $NAMESERVER_ADDR $SUBNET_ADDR | \
                   cut -f2 -d'=')
# Assign a default in case the above command failed.
NAMESERVER_NWRK=${NAMESERVER_NWRK:=$(echo $NAMESERVER_ADDR | \
                   cut -f1,2,3 -d'.').0}

# Reverse Address Resolution Preparation
# This converts an IP Address of 1.2.3.4 to 3.2.1.in-addr.arpa
# We can use this later to create a reverse translation which
# PowerDNS can administrate for us also.  The templates I created
# will set some early examples up for you.
NAMESERVER_ARPA=$(echo "$NAMESERVER_ADDR" | \
    awk -F"." '{print $3 "." $2 "." $1 ".in-addr.arpa"}')

# We now need the 4th octet of our Name Server Address to complete
# our ARPA address for the reverse lookup. For example, if your server
# ip is 2.4.8.16, we want the '16' defined here.  The below is just
# a cheat to go ahead and extract it from the address you specified
NAMESERVER_OCT4=$(echo "$NAMESERVER_ADDR" | \
    cut -f4 -d'.')

# This is where our templates get installed to make your life
# incredibly easy and the setup to be painless. These files are
# installed from the nuxref-templates-pdns RPM package you
# installed above. If you do not have this RPM package then you
# must install it or this blog simply won't work for you.
# > yum install --enablerepo=nuxref nuxref-templates-pdns
NUXREF_TEMPLATES=/usr/share/nuxref

I realize the above environment can seem a bit cryptic. I tried to simplify this DNS setup so that even a novice’s life would be easy. The environment variables attempt to detect everyone's settings automatically. In some cases, I may have just made it worse for some (hopefully not). It would be a good idea to just echo the defined variables to your screen and confirm they are as you expect them to be. They really are the key to making all of the next steps in this blog work.

# Simple Check
# Note: grab the brackets too when you copy and paste the below
(
   for VAR in COUNTRY_CODE PROV_STATE CITY SITE_NAME \
              PGHOST PGPORT PGNAME PGRWUSER PGRWPASS \
              DOMAIN NAMESERVER_ADDR NAMESERVER_PRFX \
              NAMESERVER_NWRK NAMESERVER_ARPA \
              NAMESERVER_OCT4 NUXREF_TEMPLATES; do
      [ -z "$(eval "echo \$$VAR")" ] && echo "You must set the variable: $VAR"
   done
)
# Pretty Printing
# Note: grab the brackets too when you copy and paste the below
(
   echo "PostgreSQL:"
   echo -e "\tPGHOST=$PGHOST\n\tPGPORT=$PGPORT\n\tPGNAME=$PGNAME\n\tPGRWUSER=$PGRWUSER\n\tPGRWPASS=$PGRWPASS\n"
   echo "SSL:"
   echo -e "\tCOUNTRY_CODE='$COUNTRY_CODE'\n\tPROV_STATE='$PROV_STATE'\n\tCITY='$CITY'\n\tSITE_NAME='$SITE_NAME'\n"
   echo "Nameserver:"
   echo -e "\tDOMAIN=$DOMAIN\n\tNAMESERVER_ADDR=$NAMESERVER_ADDR\n\tNAMESERVER_NWRK=$NAMESERVER_NWRK\n\tNAMESERVER_PRFX=$NAMESERVER_PRFX"
   echo -e "\tNAMESERVER_ARPA=$NAMESERVER_ARPA\n\tNAMESERVER_OCT4=$NAMESERVER_OCT4\n"
   echo "NuxRef Templating"
   echo -e "\tNUXREF_TEMPLATES=$NUXREF_TEMPLATES"
   echo
)

Step 2 of 4: Setup PowerDNS
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!
Database Configuration:

################################################################
# Configure PostgreSQL (for PowerDNS)
################################################################
# Optionally drop (reset) any existing database and user first.
/bin/su -c "/usr/bin/dropdb -h $PGHOST -p $PGPORT $PGNAME 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGRWUSER 2>&1" postgres &>/dev/null

# Create Read/Write User (our Administrator)
echo "Enter the role password of '$PGRWPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGRWUSER -P 2>&1" postgres

# Create our Database and assign our Administrator as its owner
/bin/su -c "/usr/bin/createdb -h $PGHOST -p $PGPORT -O $PGRWUSER $PGNAME 2>&1" postgres 2>&1

# The below seems big; but it will work fine if you just copy and
# paste it as is right to your terminal: This will prepare the SQL
# statement needed to build your DNS server's database backend
sed -e '/^--?/d' \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
        $NUXREF_TEMPLATES/pgsql.pdns.template.schema.sql > \
          /tmp/pgsql.pdns.schema.sql

# load DB
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.schema.sql

# This will get your database started with some working data to use.
# This part is optional, but since it's so easy to delete stuff later
# and there really isn't a whole lot taking place here, you should run
# this step. It becomes especially useful in debugging later.
sed -e "/^--?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" \
    -e "s/%NAMESERVER_ARPA%/$NAMESERVER_ARPA/g" \
    -e "s/%NAMESERVER_OCT4%/$NAMESERVER_OCT4/g" \
        $NUXREF_TEMPLATES/pgsql.pdns.template.data.sql > \
            /tmp/pgsql.pdns.data.sql

# load DB with our data
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.data.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.data.sql
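
# (Optional sanity check of my own.) Confirm the sample data landed;
# the domains/records tables are part of the standard PowerDNS schema:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c 'SELECT name, type, content FROM records LIMIT 10;' $PGNAME" postgres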

Server Configuration:

################################################################
# Configure PowerDNS
################################################################
# Create backup of configuration files
[ ! -f /etc/pdns/pdns.conf.orig ] && \
   cp /etc/pdns/pdns.conf /etc/pdns/pdns.conf.orig

# Install our configuration using the template
sed -e "/^#?/d" \
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" \
    -e "s/%NAMESERVER_NWRK%/$NAMESERVER_NWRK/g" \
    -e "s/%NAMESERVER_PRFX%/$NAMESERVER_PRFX/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
    -e "s/%PGRWPASS%/$PGRWPASS/g" \
    -e "s/%PGHOST%/$PGHOST/g" \
    -e "s/%PGPORT%/$PGPORT/g" \
    -e "s/%PGNAME%/$PGNAME/g" \
        $NUXREF_TEMPLATES/pgsql.pdns.template.pdns.conf > \
          /etc/pdns/pdns.conf

# Protect our configuration since it has user/pass info
# inside of it.
chmod 640 /etc/pdns/pdns.conf
chown root.pdns /etc/pdns/pdns.conf

Step 3 of 4: Setup PowerDNS Recursor

################################################################
# Configure PowerDNS Recursor
################################################################
# Create backup of configuration files
[ ! -f /etc/pdns-recursor/recursor.conf.orig ] && \
   cp /etc/pdns-recursor/recursor.conf \
        /etc/pdns-recursor/recursor.conf.orig

# Install our configuration using the template
sed -e "/^#?/d" \
        $NUXREF_TEMPLATES/pgsql.pdns-recursor.template.recursor.conf > \
          /etc/pdns-recursor/recursor.conf

# Generate an up to date root.hints file, this allows recursion
# back out to the internet.
curl -u ftp:ftp 'ftp://ftp.rs.internic.net/domain/db.cache' \
    -o /etc/pdns-recursor/root.hints

# If the above command did not work, you can use the one I shipped
# with the nuxref-template.pdns packaging:
#   cp $NUXREF_TEMPLATES/root.hints /etc/pdns-recursor/root.hints

# Alternatively, PowerDNS is hardcoded with a default set of root-hints.
# I personally just like seeing it as an external configuration instead.
# But... if all of this is cumbersome to you and you simply don't want
# to use the official root.hints (preferring the hard-coded one instead)
# you can do the following:
#
# sed -i -e '/^\([ \t]*hint-file=.*\)/d' /etc/pdns-recursor/recursor.conf

# Start up all of our services
chkconfig pdns-recursor --level 345 on
chkconfig pdns --level 345 on
service pdns-recursor restart
service pdns restart

It’s important to take a time-out on this step just to make sure everything is working.
A few simple commands should work perfectly for you; otherwise we have an issue:

# The following command should output a bunch of Google's DNS servers
nslookup google.com $NAMESERVER_ADDR
# The following command should output the same list
nslookup -port=5300 google.com 127.0.0.1

# If you receive an error such as 
#      ** server can't find google.com: NXDOMAIN
# Then you need to revisit the above steps again

# Alternatively, if you receive an error such as:
#  ;; connection timed out; trying next origin
#  ;; connection timed out; trying next origin
#  ;; connection timed out; no servers could be reached
# Then you have most likely been restricted from accessing
# port 53 on the outside world. You're not really
# in a problem state at this point. Make sure the rest
# of the tests (below) work and then make sure to follow
# the section of this blog entitled:
#     'Zone Forwarding Alternative'
# 
# 
# You should be able to resolve the domain
# poweradmin.$DOMAIN to this very server you're hosting
# on:
nslookup poweradmin.$DOMAIN $NAMESERVER_ADDR

# You can even test reverse lookups using our data
# we loaded with the following command:
nslookup $NAMESERVER_ADDR $NAMESERVER_ADDR

# The above should resolve itself to hostmaster.your.domain

Step 4 of 4: Setup PowerAdmin
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure PostgreSQL (for PowerAdmin)
################################################################

# Now we need to update our database with a schema for
# poweradmin to work with
sed -e "/^--?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.schema.sql > \
            /tmp/pgsql.poweradmin.schema.sql

# Now we can load the file:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.poweradmin.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.poweradmin.schema.sql

# If you loaded the sample dataset for PowerDNS earlier, then you'll
# want to additionally load this file too to help PowerAdmin access it
sed -e "/^--?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" \
    -e "s/%NAMESERVER_ARPA%/$NAMESERVER_ARPA/g" \
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.data.sql > \
            /tmp/pgsql.poweradmin.data.sql

# load DB with our data
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.poweradmin.data.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.poweradmin.data.sql

################################################################
# Configure PowerAdmin (for PowerDNS Administration)
################################################################
# Create backup of configuration files
[ ! -f /etc/poweradmin/config.inc.php.orig ] && \
   cp /etc/poweradmin/config.inc.php \
        /etc/poweradmin/config.inc.php.orig

# Apply our configuration
sed -e "/^\/\/?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%PGHOST%/$PGHOST/g" \
    -e "s/%PGNAME%/$PGNAME/g" \
    -e "s/%PGPORT%/$PGPORT/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
    -e "s/%PGRWPASS%/$PGRWPASS/g" \
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.config.inc.php > \
            /etc/poweradmin/config.inc.php

# Protect file since it contains passwords
chmod 640 /etc/poweradmin/config.inc.php
chown root.apache /etc/poweradmin/config.inc.php

# NginX Configuration
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/nginx.poweradmin.template.conf > \
            /etc/nginx/conf.d/poweradmin.conf

################################################################
# Generate SSL Keys For Webpage Security
################################################################
# Generate SSL Keys (if you don't have any already) that we
# will use to secure our PowerAdmin web interface.
openssl req -nodes -new -x509 -days 730 -sha1 -newkey rsa:2048 \
   -keyout /etc/pki/tls/private/$DOMAIN.key \
   -out /etc/pki/tls/certs/$DOMAIN.crt \
   -subj "/C=$COUNTRY_CODE/ST=$PROV_STATE/L=$CITY/O=$SITE_NAME/OU=IT/CN=$DOMAIN"

# Permissions; protect our Private Key
chmod 400 /etc/pki/tls/private/$DOMAIN.key

# Permissions; protect our Public Key
chmod 444 /etc/pki/tls/certs/$DOMAIN.crt
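
# (Optional check; not in the original steps.) The two hashes printed
# below should match, proving the key and certificate belong together:
openssl x509 -noout -modulus -in /etc/pki/tls/certs/$DOMAIN.crt | openssl md5
openssl rsa -noout -modulus -in /etc/pki/tls/private/$DOMAIN.key | openssl md5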

At this point you should be able to start NginX. If it’s already running
send it a reload or just run the below command.

# If you chose the NginX approach you'll want to make sure it's
# setup to run correctly and restart itself if the system is
# ever restarted:

# Ensure NginX runs even after a reboot
chkconfig nginx --level 345 on
chkconfig php-fpm --level 345 on

# Restart the service if it isn't running already
service php-fpm restart
service nginx restart

Now, we’re almost done. We need to make sure our server is referencing our new DNS server. You may need to update your network settings, but the following will just cheat for the time being and set you up:

[ ! -f /etc/resolv.conf.orig ] && \
   cp /etc/resolv.conf \
       /etc/resolv.conf.orig

# Tell our server to use our new DNS server
cat << _EOF > /etc/resolv.conf
search $DOMAIN
nameserver $NAMESERVER_ADDR
_EOF

# Restore your old configuration like so
# if you need to:
#  /bin/mv -f /etc/resolv.conf.orig /etc/resolv.conf

You will want to additionally add the following to your iptables configuration (/etc/sysconfig/iptables):

#---------------------------------------------------------------
# DNS Traffic
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT

#---------------------------------------------------------------
# Web Traffic for PowerAdmin
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
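
Once the rules are saved, reload the firewall so they take effect. The below is a quick sketch assuming the stock iptables service that ships with CentOS 6:

service iptables restart

# Confirm the new rules are loaded:
iptables -L INPUT -n | egrep 'dpt:(53|80|443)'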

Use the login/pass as admin/admin when you log in for the first time. Consider changing this afterwards!

You should now be able to visit https://poweradmin and see a login screen. You may have to accept the ‘untrusted key’ prompt. Don’t worry; it’s safe to do so! In fact, if you’re worried, just have a look at the key itself before accepting it. You’ll see that it’s just the one we generated earlier. The login is admin and the password is admin at the start. You will want to consider changing this right away after you log in, as a precaution.
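
If you’d like to inspect the certificate from the command line before trusting it, something like the following will print its subject and validity dates (a sketch assuming the $DOMAIN variable from Step 1 is still set and that poweradmin.$DOMAIN resolves to this server):

echo | openssl s_client -connect poweradmin.$DOMAIN:443 2>/dev/null | \
    openssl x509 -noout -subject -dates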

Your setup can now be illustrated by the model below. It’s virtually the same setup as you had before; however, now instead of querying your ISP's DNS Server, you query your very own local one. Now you can easily maintain your own local network and begin labeling the devices you use on it using PowerAdmin.

This illustration shows the PowerDNS Recursor (pdns-recursor)

Your Authoritative (Power)DNS Server caches the location for you for a period of time, making subsequent requests to the same spot VERY fast.

All Subsequent Requests are Cached for a Period of Time.

Zone Forwarding Alternative
Up until now, our ISP was using its own root.hints file (or some alternative method) to look up your request. But now, it is our server that goes out directly into the big bad internet using this technique instead. Since DNS requests are not encrypted, it’s now possible for others to spy on the hostnames we’re resolving (and the places we’re visiting). Not only that, these same people can easily trace the source back to us (all DNS requests originate from our IP now). This can allow someone to know, as an example, the specific online banking site you use. Prior to hosting your own DNS server, all websites and servers you accessed were channeled privately between you and your ISP, so this was never a problem. It was our ISP who made the (recursive) requests for us instead of us doing them ourselves. Prior to now, what we looked up didn’t explicitly trace back to us; it traced back to our ISP. Previously, we actually had more privacy (depending on the contract we signed with our ISP).

Your ISP has thousands of clients making requests to its DNS servers constantly. As a result, it has probably already cached 90% of all the websites we intend to visit. Cached content means a very speedy response from their server. Meanwhile, our local DNS server’s cache will (probably) be empty most of the time (depending on how many people use it). Hence your ISP’s DNS Server will be MUCH faster than yours.

When you signed up with your ISP, they would have given you (at least) 1 DNS server to use (most provide 2 – a primary and a backup). We can actually tell our DNS Server to use these instead of our root.hints file when it finds a domain that needs to be further looked up. This way, you regain your secure pipe between you and your ISP. The trade-off is that you're adding one more hop to your recursive lookups. But in most scenarios, they will have already cached what you're looking for, so the response will be immediate. The below diagram illustrates the worst case scenario:

A forwarding zone of ‘*’ (asterisk) tells the PowerDNS Recursor to forward all requests to a specific server. In our example we use our ISP’s DNS Servers.


Here is how you can alter your configuration:

# Put your DNS Servers below,  the ones in place right
# now are the public ones offered by Google.
DNS_SERVERS="8.8.8.8 8.8.4.4"

# Remove any information that may conflict
sed -i -e '/^\([ \t]*hint-file=.*\)/d' /etc/pdns-recursor/recursor.conf
sed -i -e '/^\([ \t]*forward-zones=.*\)/d' /etc/pdns-recursor/recursor.conf

# Disable hint-file
echo 'hint-file=' >> /etc/pdns-recursor/recursor.conf

# Prepare Forwarding Zones for everything unmatched:
echo -n 'forward-zones=*=' >> /etc/pdns-recursor/recursor.conf
echo $(echo "$DNS_SERVERS" | \
    sed -e 's/^[ \t]*//g' -e 's/[ \t]*$//g' -e 's/[ \t]\+/, /g') >> \
     /etc/pdns-recursor/recursor.conf

# Now restart our recursor
service pdns-recursor restart
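
# (My own quick check.) Lookups should now be answered through the
# forwarders instead of via the root servers:
nslookup google.com $NAMESERVER_ADDR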

Got Old BIND Configuration You Need Imported?
This step is completely optional! If you're not familiar with what BIND even is, or you know you’ve never used it, you can freely skip this section.
If you're migrating from BIND to PowerDNS then you may already have a setup in place. PowerDNS makes for an easy transition by providing a tool (zone2sql) that will scan your old BIND configuration and generate the SQL needed for an easy migration to PowerDNS.

################################################################
# Generate SQL content from all of your zone files
################################################################
# I just had 1 simple DNS zone, but you may have many.
# The below did all the work for me (bind was configured to
# run in chroot environment):
# zone2sql --gpgsql --zone=/var/chroot/var/named/data/zone.nuxref.local > \
#    /tmp/pgsql.pdns.zones.sql
# zone2sql --gpgsql --zone=/var/chroot/var/named/data/192.168.0 >> \
#     /tmp/pgsql.pdns.zones.sql

# You could even cheat and run all your files with a command like this
# Please note that this is optional (and not part of the blog, it's just
# a simple conversion tool for those who already have bind configuration
ZONE_DIR=/var/chroot/var/named/data/
[ -f /tmp/pgsql.pdns.zones.sql ] && /bin/rm -f /tmp/pgsql.pdns.zones.sql
for ZONE in $(find $ZONE_DIR -type f); do
   # Fetch ORIGIN/ZONE ID
   ZONE_ID=$(cat $ZONE | egrep '^[ \t]*\$ORIGIN' | \
              sed -e 's/^.*\$ORIGIN[ \t]\+\([^ \t]\+\).*/\1/g' \
                  -e 's/[. \t]*$//g')
   [ -z "$ZONE_ID" ] && echo "Error Parsing: $ZONE" && continue
   zone2sql --gpgsql --zone=$ZONE --zone-name=$ZONE_ID >> /tmp/pgsql.pdns.zones.sql
done

# Now before you load this file into your database, you may
# want to review it.  It doesn't hurt to scan it over and remove
# any entries you don't think would be useful.

# Under normal circumstances you would be done at this point, however because
# we are additionally using poweradmin, we need to create a few zone entries
# based on the SQL file we just generated.
# (Write the generated entries to a second file first; appending awk
# output back onto the very file it is reading can loop forever.)
$NUXREF_TEMPLATES/import2zone.awk /tmp/pgsql.pdns.zones.sql > \
    /tmp/pgsql.pdns.zones.extra.sql
cat /tmp/pgsql.pdns.zones.extra.sql >> /tmp/pgsql.pdns.zones.sql
/bin/rm -f /tmp/pgsql.pdns.zones.extra.sql

# Then I just loaded the file straight into the database:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.zones.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.zones.sql
# You're done!

So… That’s it? Now I’m done?
Yes and No… My blog pretty much hands over a working DNS server with little to no extra configuration needed on your part.

No system is bulletproof; disaster can always strike when you’re least expecting it. To cover yourself, always consider backups of the following (a minimal backup sketch follows this list):

  • Your PostgreSQL Database: This is where all of your DNS configuration is stored. You definitely do not want to lose this. May I suggest you reference my other blog entry here where I wrote a really simple backup/restore tool for a PostgreSQL database.
  • /etc/poweradmin/*: Your PowerAdmin flat file configuration allowing you to centrally manage everything via a webpage.
  • /etc/pdns/*: Your PowerDNS flat file configuration which defines the core of your DNS Server. Its configuration allows you to centrally manage everything else through the PowerAdmin website.
  • /etc/pdns-recursor/*: Your PowerDNS Recursor flat file configuration which grants you the recursive functionality of your DNS Server.
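
Here is a minimal backup sketch, assuming the PG* variables from Step 1 are still set (the /root/backups path is just an example):

# Dump the DNS database:
mkdir -p /root/backups
/bin/su -c "/usr/bin/pg_dump -h $PGHOST -p $PGPORT $PGNAME" postgres > \
    /root/backups/$PGNAME.$(date +%Y%m%d).sql

# Archive the flat file configuration:
tar czf /root/backups/dns-config.$(date +%Y%m%d).tar.gz \
    /etc/poweradmin /etc/pdns /etc/pdns-recursor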

What about Apache?
Apache is a perfectly fine alternative solution as well! I simply chose NginX because it is a much more lightweight approach. In fact, PowerAdmin already comes with Apache configuration out of the box located in /etc/httpd/conf.d/. Thus, if you simply start up your Apache instance (service httpd start), you will be hosting its services right away. Please keep in mind that the default (Apache) configuration does not come with all the SSL and added security I provided with the NginX templates. Perhaps later on, I will update the template rpm to include a secure Apache setup as well.
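
If you go the Apache route, the bare minimum looks something like the below (a sketch only; again, the stock configuration is not SSL-enabled):

# Ensure Apache runs even after a reboot, then start it:
chkconfig httpd --level 345 on
service httpd start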

Credit
This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository
This blog makes use of my own repository I loosely maintain. If you’d like me to continue to monitor and apply updates as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! This would be greatly appreciated!

Password Management

KeePassX – A Password Manager and Why You Need One


Introduction
I haven’t posted in a while and I figured I was due. I usually blog specifically for Linux users, but this really applies to anyone on any operating system (Apple, Microsoft, Android, etc).

I want to encourage the use of a password manager to anyone not already using one. There are a lot of them out there, but I’m specifically interested in focusing the attention on KeePassX. There are a number of reasons why and I’ll eventually get to most of them. But for starters KeePassX is completely free (GPLv2) and as I mentioned above, it works on just about every platform!

Its sole purpose in life is to remember passwords for you so you don’t have to. It’s there to prevent you from using the same 1-3 passwords you’re using today for everything. Instead of using your birthday, engagement date, and/or pet's name, you’ll use passwords like: JP83bW93wwrJsZ2x6tQy3aD6W or SLJUhoQWPBRtwTwmzsCxWXqOV. Again, I emphasize that the password manager does all the memorizing for you. Not only that, but it’ll easily generate these passwords for you too.

KeePassX Interface

Now before you roll your eyes and decide you don’t need such a thing, just hear me out. Give me a chance to sell you the reasons why you seriously need to consider this. I’ll attempt to do this with respect to The 5 Stages of Loss and Grief. :)

Stage 1: Denial
What’s the point of a password manager? I never needed one before!
A password manager is kind of like what social media was to us back in February of 2004. The idea is the same: “You didn’t know you needed it until it was shown to you”. So trust me when I tell you that you need one now. The thing is, the internet is filled with people whose sole intent is to gain access to your identity and exploit it for their own personal gain. The passwords we’ve all used up to this date are too simple; I know this because we’ve all remembered/memorized them. Even the complicated ones we confidently memorized are no good.

Food for Thought

Most of us just acknowledge there is risk, but we do nothing about it. We believe that if no one has figured out our password yet, they never will. Well, the scary thing is: some of them might know our password(s) already. But these people don’t want to make that evident to us. Don’t expect someone to advertise themselves once they’ve gained access to something we had protected. Instead, you’re only going to find out once the damage has been done.

Stage 2: Anger
Why would I put all my passwords in one file? Something just sounds wrong with that statement.
You would do it for the same reason you trust a bank’s security with your money. The same reason you trust your wallet or purse with your personal identity. The same reason you go to work and leave all your belongings and pets secured behind in your home. We put all of our eggs into one basket every day, but we’re all smart about it. We all put trust in the decision that our belongings will remain intact until we return to them. KeePassX works exactly the same way.

Both a Password and a Key File can be used to keep your password database safe.

The passwords that reside in your KeePassX database are encrypted, so no one will gain access to them unless you allow it to happen. The encryption means the password database is completely useless to anyone who has it without the keys required to access it. You can even go as far as to lock it using a keyfile (which KeePassX will generate for you) as well as a password.

There is no question that if a hacker gains access to your password database and has all the keys he/she needs to unlock it, then you’ve defeated its purpose. Your passwords are only as secure as the conscious effort you take to keep them that way. If you keep the keys to your house under the welcome mat, or your car keys in the vehicle they pair with, it’s just a matter of time before you suffer a theft in that regard too.

Stage 3: Bargaining
Can’t I just change the passwords after I read about the compromise?
The answer is No and here is why:
People are attempting to exploit us every day through phishing, viruses, spyware, false advertising, and even through everyday tasks we didn’t even know we were a victim of. Take Sony and eBay for example. Both are very reputable companies with an entire team of security experts working with them every day, yet they still fell prey when their systems were compromised by a hacker. As a result, their clients and customers became the victims too. If you used the same password on these sites as you do everywhere else then you’re already at risk for identity theft. The point is, sh*t happens and sometimes it’s just out of our control.

We can’t expect every organization to be like eBay and Sony either. Not all of them will alert their customers of the threat they faced. Even the ones that do had long been exploited before going public: our personal information was at risk last week, but they only told us today. Some companies don’t want to admit to any privacy invasions, to protect their own reputation. Meanwhile, others might not even know they’ve already been compromised. So if they don’t tell us, how will we ever know?

Therefore, consider taking the extra time to change every password on every site you visit once you get KeePassX installed.

Stage 4: Depression
I don’t want to change my password everywhere.
I don’t want the burden of another application just to manage my passwords.
I was stuck at this stage for a while. It takes a while to change passwords everywhere. But after it’s done, you have to remember that this program does absolutely everything in its power to make your life easy from this point on. You only need to add everything once; so the burden doesn’t linger.

How can such complicated passwords make my life easier?

Ctrl+B and Ctrl+C are all you ever need to use once KeePassX is set up.

Your life ‘will’ be simpler with KeePassX because your everyday logins are now accessible with just a few keystrokes:

  1. Press ctrl-b in Password Manager: Copy the username into the clipboard
  2. Press ctrl-v in Web Browser or App: Paste the username (into the username field)
  3. Press ctrl-c in Password Manager: Copy the password into the clipboard
  4. Press ctrl-v in Web Browser or App: Paste the password (into the password field)

That’s it; don’t worry about the password (you just copied) being left in the clipboard. KeePassX looks after clearing the clipboard after a configurable number of idle seconds elapse (the default is 20). The point of me explaining this is: instead of memorizing the secret passwords you use everywhere, you only need to remember the 4 key combinations identified above. The same 4 keystrokes are used for all the sites/apps you ever use/visit from this point forward. Not to mention that each password is unique from the others and unguessable by any hacker. I can’t stress enough that the effort level decreases with a password manager. At the same time your online privacy is more secure than ever.

Stage 5: Acceptance
What about my Phone? What about my other computer? Can I access my password file on other systems?
Yes! This is one of the most amazing features of KeePassX! People even place their password file in locations such as DropBox or their Google Drive so they can access it across systems. It’s compatible with Microsoft Windows, Mac OS (including Apple iOS), Android, and Linux (of course!).

So Where Do I Get It?
Microsoft Windows and Mac OS users can just download it directly from the download area at the official KeePassX website here.

If you’re using an Android device then you can find it here on the Google Play Store.

If you’re using an Apple device then you can find it here on the iTunes Store.

Linux users can download it from pkgs.org. It’s also available from the EPEL Repositories.

For the CentOS and/or Red Hat users: if for whatever reason the links above become unavailable or you want a fast approach: KeePassX can be retrieved from the repository I’m hosting.

Content/Repositories are grouped as follows:

  • nuxref - Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared - Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don’t maintain. It may not always be up to date, but it can be thought of as a baseline I use to support other blog entries I wrote. This repository isn’t required for KeePassX, but it is worth knowing about (to keep things common across other blog entries).

Install KeePassX in CentOS and/or Red Hat 6:

# You'll need to be root to run any of the below commands.

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository.
# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enable=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF
################################################################
# Install our required products
################################################################
yum install -y --enablerepo=nuxref keepassx
# You're Done!

KeePassX Tips

The KeePassX Password Generator will keep your passwords diverse and complicated.

  • For obvious reasons, password protect your password database. This is the one password you ‘will’ need to remember. So I encourage you to use a new original one, but don’t forget it!
  • Consider using a keyfile as well as a password for securing your password database. The nice thing about the keyfile is that you can move it with you; keep it on a USB drive that stays with you or in another location. This way, if someone ever did get a hold of your password database, they can’t do anything without the key file even if they know your password.
  • Configure KeePassX to lock itself after it’s been idle or minimized. By default it won’t, but it’s a simple option that will allow you to walk away from your desk and know that no one is snooping where they shouldn’t be.
  • Log onto your remote sites and consider changing your passwords every now and then. Even if it’s only once a year, that’s still better than never changing them at all! You can optionally configure KeePassX to remind you to change your passwords after they reach a certain age. Use the password generator; it’ll greatly simplify your life and keep your passwords complex.

Speaking of Passwords…
Consider that your neighbors (or even someone you don’t know) could be using your wireless network while you’re at work. They could be using it even when you're around. Have you been connecting to your wireless router lately to see if you can account for everyone connected to it? Now would be a good time to change this password too!

Pay attention to the sites you use and make sure they use some form of login encryption. For example, logging into a website that isn’t secure means anyone can easily extract your username and password combination without your knowledge.

Privacy Compromised

Just do a Google search for +breach +password +hacked to get an idea of how many companies are getting hacked constantly and how easy it is for someone to figure out your password and re-use it elsewhere.

Final Notes
Sure, Password Managers aren’t for everyone; don’t worry, I get that. But if you truly want to prepare yourself and prevent having to deal with unnecessary online fraud or identity theft… if you truly want to avoid venturing through The 5 Stages of Loss and Grief which follow these awful crimes, then you should consider protecting yourself now. Take the plunge, get a password manager, and diversify your passwords.

It’s better to be safe than sorry.

Credit
If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.


Mail Server Solution

Configuring a Secure Mail Server on CentOS 6


Introduction
As you know, Mail Server tutorials are all over the internet. So if you stumbled upon mine, then you were like me: trying to assemble one based on how others do it, to save reading the manual. A mail server is NOT a walk in the park like some other servers are (such as Web, Database, etc). They can be very complicated to master; and frankly, when you’re short on time, it’s easier to just get one going right away. Don’t dismiss the need to learn how it works once you’ve set it up; otherwise you may not know how to recover from an unexpected situation. But instead, learn about how it works (and how to further tweak it) after the deadlines have been met. Besides, your boss will be impressed when you show him your successful results sooner rather than later.

You’re reading this blog because your needs were/are similar to what mine were:

  • You’re looking for an automated solution to installing a mail server with little effort because you’re on a tight deadline.
  • You want a central configuration point; you want everything to be easy to maintain after it’s all set up.
  • You want everything to just work the first time and you want to leave the figuring it out part to the end.
  • Package management and version control is incredibly important to you.

Here is what my tutorial will be focused on:

  • A Database Backend (PostgreSQL) giving you central configuration. This tutorial focuses on version 8.4 because that is what ships with CentOS and Red Hat. But most (if not all) of this tutorial should still work fine if you choose to use version 9.x of the database instead.
  • Spam Control (SpamAssassin v3.3.x, Amavisd-new v2.8, and Clam AntiVirus v0.98) gives you some spam and antivirus control. I’ve been looking into DSPAM but haven’t implemented it yet. I figure I’ll make a part 2 to this blog once I get around to working with it and mastering its setup.
  • Dovecot Pigeonhole v2.0.x provides you a way of routing mail in the user's inbox based on its content. You can key off certain content within a received message and mark it as spam, or flag it as important because it was sent by a certain individual, etc. It basically gives you the ability to customize some of the post-processing of a received message that passed through all the other checks (spam, antivirus, etc).
  • Security Considered
  • Mail Delivery Agent (MDA) (DoveCot v2.0.x):
    • Secure POP3 (POP3S) Access
    • Secure IMAP (IMAPS) Access
    • WebMail (RoundCube) Access
MDA Configuration

  • Mail Transport Agent (MTA) (PostFix v2.6.x): Secure SMTP (SMTPS)
MTA Configuration

  • Web Based Administration (PostFixAdmin v2.3.6). Life is made easier when you don’t have to do very much once everything has been set up. Run your server like an ISP would:
    • Virtual Domain Administration: Add/Remove as many as you want
    • Unlimited Virtual Aliases (or restrict if you want) per domain
    • Unlimited Mailboxes (or restrict if you want) per domain
    • Administrative Delegation: Grant enough privileges to another administrator who only has rights to the domain(s) you granted them access to.
    • Away on Vacation Support (automatic replies)
Central Configuration

Please note the application versions identified above as this tutorial focuses specifically on only them. One big issue I found while researching how to set up a mail server was that the other tutorials I found didn’t really mention the software versions they were using. Hence, when I stumbled upon these old article(s) with new(er) software, it made for quite a painful experience when things didn’t work.

Please also note that other tutorials will suggest that you set up one feature at a time, then test it to see if it worked correctly before moving on to the next step. This is no doubt the proper way to do things. However, I’m just going to give it all to you at once. If you stick with the versions and packages I provide… if you follow my instructions, it will just work for you the first time. Debugging on your end will be a matter of tracing back to see what step you missed.

I tried to make this tutorial as cookie cutter(ish) as I could. Therefore you can literally just copy and paste what I share right to your shell prompt and the entire setup will be automated for you.

Hurdles
Just to let you know, I had many hurdles in order to pull this feat off. They were as follows:

  • postfix as shipped with CentOS and in the EPEL repository is not compiled with PostgreSQL support. I had to recompile this package as well just to enable this extra database support.
  • postfixadmin in the EPEL repository has quirks I wasn’t happy with. I needed to fix a PHP error that kept coming up. I needed to adjust the database schema and permissions as well as fix the Vacation Notification feature. I also did not want the mandatory MySQL dependency; so I removed that too.
  • perl Yes… that’s right, I had to patch perl :(. I had to recompile it due to a bug that I don’t believe was correctly addressed. In a nutshell, prior to my custom rebuild, perl-devel would haul in absolutely every development package including kernel headers and the GCC compiler. In the past, the perl-devel package could be installed by itself, providing some tools that spamassassin and amavisd-new depend on. You’re welcome to use the official version of perl over the one I recompiled; but be prepared to have a ton of compilation tools and source code in your production environment. This is not something I wanted at all. Interestingly enough, after the developers at RedHat said they wouldn’t tackle the issue of rolling their changes back, they seem to be entertaining this new guy’s request, who’s asking for a similar alternative. So who knows, maybe newer versions of perl will accommodate mail servers again! :)
  • This blog itself was a massive hurdle. There are just so many configuration files and important points to account for that I found it easier to package my own rpm (nuxref-templates-mxserver) containing a series of configuration templates. The templates themselves took a while to construct in such a way that they could all be used together.

Step 1 of 7: Setup Your Environment
This is the key to my entire blog; it’s going to make all of the remaining steps just work the first time for you. All; I repeat All of the steps below (after this one) assume that you’ve set this environment up. You will need to set your environment up again at least once before running through any of the remaining steps below or they will not work.

It’s also important to mention that you will need to be root to configure this entire mail server. This applies to all of the steps identified below throughout this blog.

I started hosting some of these packages on one of my own servers to make it easier to do all the tasks. I can’t guarantee other packages will work as well; this is definitely the case for the packages I had to rebuild (identified above). If you don’t trust my packages, you can access their source files and rebuild them for yourself if you wish.

Content/Repositories are grouped as follows:

  • nuxref - Repo/Source: All custom packages rebuilt reside here (including those from past blog entries).
  • nuxref-shared - Repo/Source: This is really just a snapshot in time of packages I used from the different external repositories I don’t maintain. It may not always be up to date, but it can be thought of as a baseline (where things worked for me and with respect to this blog). If I get around to testing a new package and everything is still working fine, I’ll update the repositories here. I also resign everything with my own key to mark that I’ve tested it.

Install all of the necessary packages:

# I signed everything I host to add a level of security to
# all of the people following my blog and using the packages
# I'm hosting. You'll want to install that here:
rpm --import http://repo.lead2gold.org/pub/NUXREF-GPG-KEY

# Connect to my repository, in which I've rebuilt a few packages
# to support PostgreSQL and fixed bugs in others. This step will
# really make your life easy and lets us compare apples to apples
# with package versions. It also allows you to haul in a working
# setup right out of the box.

# Don't worry, we're defaulting it to a disabled state, for
# the paranoid.  (note that enable=0 below for both
# repositories)
cat << _EOF > /etc/yum.repos.d/nuxref.repo
[nuxref]
name=NuxRef Core - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/custom
enabled=0
priority=1
gpgcheck=1

[nuxref-shared]
name=NuxRef Shared - Enterprise Linux 6 - \$basearch
baseurl=http://repo.lead2gold.org/centos/6/en/\$basearch/shared
enabled=0
priority=1
gpgcheck=1
_EOF

# You should use yum priorities (if you're not already) to
# eliminate version problems later; but if you only intend to
# point at my repository and no others, then you'll be fine.
yum install -y yum-plugin-priorities

################################################################
# Install our required products
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           postgresql-server postgresql \
           php-pgsql php-imap php-mcrypt php-mbstring  \
           dovecot dovecot-pgsql dovecot-pigeonhole \
           clamav amavisd-new spamassassin \
           postfix postfix-perl-scripts \
           roundcubemail postfixadmin \
           nuxref-templates-mxserver

# Choose between NginX or Apache
## NginX Option (a) - This one is my preferred choice:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           nginx php-fpm

## Apache Option (b):
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
            httpd php

# Make sure your build of Postfix supports PostgreSQL
# execute the following to see: 
postconf -c /etc/postfix -m | \
   egrep -q '^pgsql$' && echo "Supported!" || \
      echo "Not Supported!"

# If it's not supported then you've run into the first of many
# problems I had. It also means you didn't haul in my version
# from the previous command. This can happen if you already had
# postfix installed on your machine and it's of a newer version
# than what I'm hosting. Your options at this point are to either
# uninstall your copy of postfix and install mine, or recompile
# your version of postfix (but this time with PostgreSQL support).

# Setup Default Timezone for PHP. For a list of supported
# timezones you can visit here: http://ca1.php.net/timezones
TIMEZONE="America/Montreal"
sed -i -e "s|^[ \t]*;*\(date\.timezone[ \t]*=\).*|\1 $TIMEZONE|g" /etc/php.ini
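
# (Optional) Double check the timezone took effect; this is only
# a sanity check and assumes the PHP CLI is installed:
php -i | grep 'date.timezone'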

################################################################
# Setup PostgreSQL (v8.4)
################################################################
# The commands below should all work fine on a PostgreSQL v9.x
# database too; but your mileage may vary as I've not personally
# tested it yet.

# Only init the database if you haven't already. This command
# could otherwise reset things and you'll lose everything.
# If your database is already set up and running, then you can
# skip this line
service postgresql initdb

# Now that the database is initialized, configure it to trust
# connections from 'this' server (localhost)
sed -i -e 's/^[ \t]*\(local\|host\)\([ \t]\+.*\)/#\1\2/g' \
    /var/lib/pgsql/data/pg_hba.conf
cat << _EOF >> /var/lib/pgsql/data/pg_hba.conf
# Configure all local database access with trust permissions
local   all         all                               trust
host    all         all         127.0.0.1/32          trust
host    all         all         ::1/128               trust
_EOF

# Make sure PostgreSQL is configured to start up each time
# you start up your system
chkconfig --levels 345 postgresql on

# Start the database now too because we're going to need it
# very shortly in this tutorial
service postgresql start
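
# (Optional) Confirm the database is up and accepting the local
# 'trust' connections we just configured:
/bin/su -c "/usr/bin/psql -c 'SELECT version();'" postgres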

To simplify your life, I’ve made all of the steps below reference a few global variables. The ones identified below are the only ones you’ll probably want to change. May I suggest pasting the block below into your favourite text editor (vi, emacs, etc.), adjusting the variables to your liking, and then pasting the result back into your terminal.

# Identify the domain name of your server here
DOMAIN=nuxref.com
# Setup what you want to be your Administrative email account
# Note: This does 'NOT' have to be of the same domain even though
#       that's how I set it up. Don't worry if the email
#       address doesn't exist yet, because when you're done
#       following this blog, you'll be able to create it!
ADMIN=hostmaster@$DOMAIN

# The following is only used for our SSL Key Generation
COUNTRY_CODE="7K"
PROV_STATE="Westerlands"
CITY="Lannisport"
SITE_NAME="NuxRef"

Now for the rest of the global configuration; there really should be no reason to change any of these values (but feel free to). It’s important that you paste the above information (tailored to your liking) as well as the block below into the command line interface (CLI) of the server you wish to set up.

# PostgreSQL Database
PGHOST=localhost
PGPORT=5432
PGNAME=system_mail
# This is the Read Only access user (or very limited access)
PGROUSER=mailreader
PGROPASS=mailreader
# This is for administration
PGRWUSER=mailwriter
PGRWPASS=mailwriter

# VHost Mail Directory
VHOST_HOME=/var/mail/vhosts
VHOST_UID=5000
# No real reason to make them differ,
# but define this variable anyway so the
# configuration below works
VHOST_GID=$VHOST_UID

# RoundCube Configuration
MXHOST=$PGHOST
PGRQNAME=system_roundcube
PGRQUSER=roundcube_user
PGRQPASS=roundcube_pass

# This is where our templates get installed to make your life
# incredibly easy and the setup to be painless. These files are
# installed from the nuxref-templates-mxserver RPM package you
# installed above. If you do not have this RPM package then you
# must install it or this blog simply won't work for you.
# > yum install --enablerepo=nuxref nuxref-templates-mxserver
NUXREF_TEMPLATES=/usr/share/nuxref

Once all of this has been set, you can proceed to any of the steps below! Keep in mind that if you decide to change any of the variables above, you may need to redo every single step identified below.
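If you’d rather not re-paste these variables every time you open a new shell, you can keep them in a small file and source it instead; the filename below is just a suggestion:

# Paste the variable blocks above into this file (any path works),
# then pull them into any new root shell with:
source ~/mxserver.env
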
Step 2 of 7: System Preparation
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!
General:

# Create the vmail user; this is a locked-down account that no
# one else has permissions to, keeping our mail private from the
# prying eyes of anyone who has (or gains) access to our
# server.
useradd -u $VHOST_UID -d /var/mail/vhosts -M -s /sbin/nologin vmail
mkdir -p  /var/mail/vhosts/
chown vmail.vmail /var/mail/vhosts/
chmod 700 /var/mail/vhosts/

# Add the clam user to the amavis group so our anti-virus scans
# can be performed
usermod -G amavis clam

# Fix ClamD Sock Reference
sed -i -e 's|/var/spool/amavisd/clamd.sock|/var/run/clamav/clamd.sock|g' /etc/amavisd/amavisd.conf

# Fix Amavis Directory Permission
chmod 1770 /var/spool/amavisd/tmp/
# Amavis Domain Configuration
sed -i -e '/NuxRef BulletProofing/d' \
       -e "s/\(# \$myhostname.*\)/\1\n\$myhostname = 'mail.$DOMAIN'; # NuxRef BulletProofing/g" /etc/amavisd/amavisd.conf
sed -i -e "s/^\(\$mydomain[ \t]*=[ \t]*\).*$/\1'$DOMAIN';/g" /etc/amavisd/amavisd.conf

# Generate SSL keys (if you don't have any already) that we
# will use to secure all of our inbound and outbound mail.
openssl req -nodes -new -x509 -days 730 -sha1 -newkey rsa:2048 \
   -keyout /etc/pki/tls/private/$DOMAIN.key \
   -out /etc/pki/tls/certs/$DOMAIN.crt \
   -subj "/C=$COUNTRY_CODE/ST=$PROV_STATE/L=$CITY/O=$SITE_NAME/OU=IT/CN=$DOMAIN"

# Permissions; protect our Private Key
chmod 400 /etc/pki/tls/private/$DOMAIN.key

# Permissions; protect our Public Key
chmod 444 /etc/pki/tls/certs/$DOMAIN.crt
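
# (Optional) Inspect the certificate we just generated to confirm
# the subject and expiry dates look right:
openssl x509 -in /etc/pki/tls/certs/$DOMAIN.crt -noout -subject -dates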

Create PostgreSQL Mail Database:

# Optionally eliminate/reset any existing database (these drops
# fail harmlessly on a fresh install).
/bin/su -c "/usr/bin/dropdb -h $PGHOST -p $PGPORT $PGNAME 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGRWUSER 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGROUSER 2>&1" postgres &>/dev/null

# Create Read/Write User (our Administrator)
echo "Enter the role password of '$PGRWPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGRWUSER -P 2>&1" postgres

# Create Read-Only User
echo "Enter the role password of '$PGROPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGROUSER -P 2>&1" postgres

# Create our Database and assign our Administrator as its owner
/bin/su -c "/usr/bin/createdb -h $PGHOST -p $PGPORT -O $PGRWUSER $PGNAME 2>&1" postgres 2>&1

# Secure and protect a temporary file to work with
touch /tmp/pgsql.postfix.schema.sql
chmod 640 /tmp/pgsql.postfix.schema.sql
chown root.postgres /tmp/pgsql.postfix.schema.sql

# the below seems big, but it will work fine if you just copy and
# paste it as-is right into your terminal. This prepares the SQL
# statements needed to build your mail server's database backend
sed -e '/^--?/d' \
    -e "s/%PGROUSER%/$PGROUSER/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/pgsql.postfix.template.schema.sql > \
          /tmp/pgsql.postfix.schema.sql

# load DB
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.postfix.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.postfix.schema.sql
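
# (Optional) Sanity check: list the tables the schema just created
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c '\dt' $PGNAME 2>&1" postgres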

Step 3 of 7: Setup our Mail Transfer Agent (MTA): Postfix
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure Postfix (MTA)
################################################################
# Create backup of configuration files
[ ! -f /etc/postfix/main.cf.orig ] && \
   cp /etc/postfix/main.cf /etc/postfix/main.cf.orig
[ ! -f /etc/postfix/master.cf.orig ] && \
   cp /etc/postfix/master.cf /etc/postfix/master.cf.orig

# Directory to store our configuration in
[ ! -d /etc/postfix/pgsql ] && \
    mkdir -p /etc/postfix/pgsql

# Secure this new directory since it will contain password
# information
chmod 750 /etc/postfix/pgsql
chown root.postfix /etc/postfix/pgsql

# Now using our templates, build our SQL files:
for FILE in relay_domains.cf \
            transport_maps.cf \
            virtual_alias_maps.cf \
            virtual_domains_maps.cf \
            virtual_mailbox_limit_maps.cf \
            virtual_mailbox_maps.cf
do
    sed -e "/^#?/d" \
        -e "s/%PGROUSER%/$PGROUSER/g" \
        -e "s/%PGROPASS%/$PGROPASS/g" \
        -e "s/%PGHOST%/$PGHOST/g" \
        -e "s/%PGNAME%/$PGNAME/g" \
            $NUXREF_TEMPLATES/pgsql.postfix.template.$FILE > \
                /etc/postfix/pgsql/$FILE
done

# main.cf
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s|%VHOST_HOME%|$VHOST_HOME|g" \
    -e "s/%VHOST_UID%/$VHOST_UID/g" \
    -e "s/%VHOST_GID%/$VHOST_GID/g" \
        $NUXREF_TEMPLATES/pgsql.postfix.template.main.cf > \
            /etc/postfix/main.cf

# master.cf
cat $NUXREF_TEMPLATES/pgsql.postfix.template.master.cf > \
   /etc/postfix/master.cf

# Run the newaliases command to generate /etc/aliases.db
newaliases

# Vacation Support
echo "autoreply.$DOMAIN vacation:" > /etc/postfix/transport
postmap /etc/postfix/transport

# Update to latest Spam Assassin Rules/Filters
sa-update
# Update to latest Antivirus
freshclam

# Setup Auto-Cron Entries (so future antivirus updates
# can just be automatic).
sed -i -e '/\/etc\/cron.hourly/d' /etc/crontab
sed -i -e '/\/etc\/cron.daily/d' /etc/crontab
sed -i -e '/\/etc\/cron.weekly/d' /etc/crontab
sed -i -e '/\/etc\/cron.monthly/d' /etc/crontab
cat << _EOF >> /etc/crontab
  01 * * * * root run-parts /etc/cron.hourly 
  02 4 * * * root run-parts /etc/cron.daily 
  22 4 * * 0 root run-parts /etc/cron.weekly 
  42 4 1 * * root run-parts /etc/cron.monthly
_EOF

# Enable our Services On Reboot
chkconfig --levels 345 spamassassin on
chkconfig --levels 345 clamd on
chkconfig --levels 345 clamd.amavisd on
chkconfig --levels 345 amavisd on
chkconfig --levels 345 postfix on

# Start all of our other services if they aren't already
service spamassassin start
service clamd start
service amavisd start
service clamd.amavisd start

# Restart our postfix service to take on our new configuration
service postfix restart
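
# (Optional) Let Postfix validate its own configuration; both
# commands below are stock tooling, nothing specific to this setup:
postfix check
# Keep an eye on the mail log while you test:
tail -f /var/log/maillog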

Step 4 of 7: Setup our Mail Delivery Agent (MDA): Dovecot
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure Dovecot (MDA)
################################################################
# Create backup of configuration files
[ ! -f /etc/dovecot/dovecot.conf.orig ] && \
   cp /etc/dovecot/dovecot.conf /etc/dovecot/dovecot.conf.orig

# dovecot.conf
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s|%VHOST_HOME%|$VHOST_HOME|g" \
    -e "s/%VHOST_UID%/$VHOST_UID/g" \
    -e "s/%VHOST_GID%/$VHOST_GID/g" \
        $NUXREF_TEMPLATES/pgsql.dovecot.template.dovecot.conf > \
            /etc/dovecot/dovecot.conf

# Create our Sieve Directories
[ ! -d /var/lib/sieve/users ] && \
   mkdir -p /var/lib/sieve/users
[ ! -d /var/lib/sieve/before.d ] && \
   mkdir -p /var/lib/sieve/before.d
[ ! -d /var/lib/sieve/after.d ] && \
   mkdir -p /var/lib/sieve/after.d
chown -R vmail.vmail /var/lib/sieve
chmod 750 /var/lib/sieve

# Dovecot PostgreSQL Configuration
for FILE in dovecot-sql.conf \
            dovecot-dict-user-quota.conf \
            dovecot-dict-domain-quota.conf
do
   sed -e "/^#?/d" \
       -e "s|%VHOST_HOME%|$VHOST_HOME|g" \
       -e "s/%VHOST_UID%/$VHOST_UID/g" \
       -e "s/%VHOST_GID%/$VHOST_GID/g" \
       -e "s/%PGROUSER%/$PGROUSER/g" \
       -e "s/%PGROPASS%/$PGROPASS/g" \
       -e "s/%PGHOST%/$PGHOST/g" \
       -e "s/%PGNAME%/$PGNAME/g" \
           $NUXREF_TEMPLATES/pgsql.dovecot.template.$FILE > \
               /etc/dovecot/$FILE
   # permissions
   chmod 640 /etc/dovecot/$FILE
   chown root.dovecot /etc/dovecot/$FILE
done

# Warning Message when mailbox is almost full
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/pgsql.dovecot.template.mail-warning.sh > \
            /usr/libexec/dovecot/mail-warning.sh

# Make Script Executable
chmod 755 /usr/libexec/dovecot/mail-warning.sh

# Ensure Dovecot starts after each system reboot:
chkconfig --levels 345 dovecot on

# Start Dovecot (otherwise restart it if it's already running)
service dovecot status && service dovecot restart || service dovecot start
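
# (Optional) Dump the non-default Dovecot settings to confirm our
# template was picked up:
doveconf -n
# ... and confirm IMAPS answers on port 993 (prints the TLS
# handshake and exits):
openssl s_client -connect localhost:993 < /dev/null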

Step 5 of 7: Setup Postfix Admin
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure PostfixAdmin
################################################################
sed -e "/^\/\/?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%ADMIN%/$ADMIN/g" \
    -e "s/%PGHOST%/$PGHOST/g" \
    -e "s/%PGNAME%/$PGNAME/g" \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
    -e "s/%PGRWPASS%/$PGRWPASS/g" \
        $NUXREF_TEMPLATES/pgsql.postfixadmin.template.config.local.php > \
            /etc/postfixadmin/config.local.php

# Protect file since it contains passwords
chmod 640 /etc/postfixadmin/config.local.php
chown root.apache /etc/postfixadmin/config.local.php

# Vacation Auto-respond Support
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
    -e "s/%PGROUSER%/$PGROUSER/g" \
    -e "s/%PGROPASS%/$PGROPASS/g" \
    -e "s/%PGHOST%/$PGHOST/g" \
    -e "s/%PGNAME%/$PGNAME/g" \
        $NUXREF_TEMPLATES/pgsql.postfixadmin.template.vacation.conf > \
            /etc/postfixadmin/vacation.conf

# Protect file since it contains passwords
chmod 640 /etc/postfixadmin/vacation.conf
chown root.vacation /etc/postfixadmin/vacation.conf

# Log Rotation
cat << _EOF > /etc/logrotate.d/postfix-vacation
/var/log/vacation.log {
        missingok
        notifempty
        create 644 vacation root
}
_EOF

Now you can set up NginX to host your administration console; in the example below, I set up https://postfixadmin.<your.domain>/ to go to the PostfixAdmin page.

# Create dummy favicon.ico for now (silences some log entries)
touch /usr/share/postfixadmin/favicon.ico

# PostfixAdmin NginX Configuration
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/nginx.postfixadmin.template.conf > \
            /etc/nginx/conf.d/postfixadmin.conf

# You may have to bump php-fpm to be safe (if it's not already running)
service php-fpm status 2>/dev/null && service php-fpm restart || service php-fpm start
# make sure it starts on every reboot too:
chkconfig php-fpm --level 345 on
# Restart NginX if it's not already
service nginx status 2>/dev/null && service nginx restart || service nginx start
chkconfig nginx --level 345 on

Once you’ve completed that, you’re ready to access the administrator interface and set up a new account. Simply visit https://postfixadmin.<your.domain>/setup.php. The templates I provided set the setup password to admin. You’ll need to enter this value each time before creating an account here.
PostfixAdmin Setup
Once you’re done creating an account, just change the setup.php script to read-only as a security precaution. You can perform every other action you’ll ever need using the account you just created.

################################################################
# Disable Future System Administrator Creation
################################################################
chmod 600 /usr/share/postfixadmin/setup.php

# later on, you can re-enable the setup.php file to create a new
# account in the distant future by just typing:
#
# chmod 644 /usr/share/postfixadmin/setup.php
#

You can simply access https://postfixadmin.<your.domain>/ now (without the setup.php) and log in using the new administrator you created.

Step 6 of 7: Setup Roundcube
First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure RoundCube Mailer
################################################################
# RoundCube NginX Configuration
sed -e "/^#?/d" \
    -e "s/%DOMAIN%/$DOMAIN/g" \
        $NUXREF_TEMPLATES/nginx.roundcubemail.template.conf > \
            /etc/nginx/conf.d/roundcubemail.conf

# Optionally eliminate/reset any existing RoundCube database
# (these drops fail harmlessly on a fresh install).
/bin/su -c "/usr/bin/dropdb -h  $PGHOST -p $PGPORT $PGRQNAME 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGRQUSER 2>&1" postgres &>/dev/null

# Create RoundCube Administrator User
echo "Enter the role password of '$PGRQPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGRQUSER -P 2>&1" postgres 2>&1

# Create our Database and assign our Administrator as its owner
/bin/su -c "/usr/bin/createdb -h $PGHOST -p $PGPORT -O $PGRQUSER $PGRQNAME 2>&1" postgres 2>&1
/usr/bin/psql -h $PGHOST -p $PGPORT -U $PGRQUSER $PGRQNAME -f /usr/share/doc/roundcubemail-0.9.5/SQL/postgres.initial.sql

# Configure RoundCube
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']db_dsnw['\"]\][ \t]*=\).*$|\1 'pgsql://$PGRQUSER:$PGRQPASS@$PGHOST/$PGRQNAME';|g" /etc/roundcubemail/db.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']default_host['\"]\][ \t]*=\).*$|\1 'ssl://$MXHOST:993';|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']smtp_server['\"]\][ \t]*=\).*$|\1 'tls://$MXHOST';|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']smtp_user['\"]\][ \t]*=\).*$|\1 '%u';|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']smtp_pass['\"]\][ \t]*=\).*$|\1 '%p';|g" /etc/roundcubemail/main.inc.php

# Some extra RoundCube options I preferred:
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']identities_level['\"]\][ \t]*=\).*$|\1 3;|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']quota_zero_as_unlimited['\"]\][ \t]*=\).*$|\1 true;|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']htmleditor['\"]\][ \t]*=\).*$|\1 1;|g" /etc/roundcubemail/main.inc.php
sed -i -e "s|\(^[ \t]*\$rcmail_config\[[']preview_pane['\"]\][ \t]*=\).*$|\1 true;|g" /etc/roundcubemail/main.inc.php

# You may have to bump php-fpm to be safe (if it's not already running)
service php-fpm status 2>/dev/null && service php-fpm restart || service php-fpm start
# make sure it starts on every reboot too:
chkconfig php-fpm --level 345 on
# Restart NginX if it's not already
service nginx status 2>/dev/null && service nginx restart || service nginx start
chkconfig nginx --level 345 on

You can simply access https://roundcube.<your.domain>/ now and log into any mailboxes you’ve configured. Just remember to tell your users that they must specify their full email address as their username.

Every mailbox you create using PostfixAdmin will be accessible through your Roundcube webpage.

Step 7 of 7: Security
If you aren’t familiar with Fail2Ban, now would be an excellent time to learn about it. I wrote a blog about securing your CentOS system a while back and encourage you to read it; at the very least, read the section on Fail2Ban. The configuration below explains how you can protect yourself from brute-force attacks.

# Monitor for multiple failed SASL Logins into postfix
cat << _EOF > /etc/fail2ban/filter.d/postfix-sasl.conf
[Definition]
failregex = (?i): warning: [-._\w]+\[<HOST>\]: SASL (?:LOGIN|PLAIN|(?:CRAM|DIGEST)-MD5) authentication failed(: [A-Za-z0-9+/ ]*)?$
ignoreregex =
_EOF
# Monitor for multiple failed postfixadmin logins
cat << _EOF > /etc/fail2ban/filter.d/postfixadmin-auth.conf
[Definition]
failregex = ^<HOST> -.*POST.*login.php HTTP[^ \t]+ 500\$
ignoreregex =
_EOF

# Now in /etc/fail2ban/jail.conf you will want the following:
cat << _EOF >> /etc/fail2ban/jail.conf
[dovecot-iptables]
enabled  = true
filter   = dovecot
backend  = polling
action   = iptables[name=dovecot, port=110,143,993,995, protocol=tcp]
           sendmail-whois[name=dovecot, dest=root, sender=fail2ban@$DOMAIN]
logpath  = /var/log/mail.log

[sasl-iptables]
enabled  = true
filter   = postfix-sasl
backend  = polling
action   = iptables[name=sasl, port=25,587, protocol=tcp]
           sendmail-whois[name=sasl, dest=root, sender=fail2ban@$DOMAIN]
logpath  = /var/log/mail.log

[roundcube-iptables]
enabled  = true
filter   = roundcube-auth
backend  = polling
action   = iptables[name=RoundCube, port="http,https"]
           sendmail-whois[name=RoundCube, dest=root, sender=fail2ban@$DOMAIN]
logpath  = /var/log/roundcubemail/errors

[pfixadmadmin-iptables]
enabled  = true
filter   = postfixadmin-auth
backend  = polling
action   = iptables[name=PostfixAdmin, port="http,https"]
           sendmail-whois[name=PostfixAdmin, dest=root, sender=fail2ban@$DOMAIN]
logpath  = /var/log/nginx/postfixadmin.access.log
_EOF

You will additionally want to add the following to your iptables configuration (/etc/sysconfig/iptables):

#---------------------------------------------------------------
# Web Traffic (for PostfixAdmin and RoundCube)
#---------------------------------------------------------------
# Allow non-encrypted port so we can redirect these users to the
# encrypted version.  It's just a nicer effect to support
# redirection
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT

#---------------------------------------------------------------
# SMTP /Message Transfer Agent Communication
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 465 -j ACCEPT

#---------------------------------------------------------------
# IMAP 
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 993 -j ACCEPT

#---------------------------------------------------------------
# POP3
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 995 -j ACCEPT

Useful Commands When Running Your Mail Server

  • doveconf -a: Display Dovecot configuration
  • postconf -n: Display Postfix configuration
  • postqueue -p: Display mail queue information (see the queue examples below)
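
If the queue ever backs up, the following examples may come in handy; they are all standard Postfix tooling, nothing specific to this setup:

# Flush the queue (attempt immediate delivery of everything queued)
postqueue -f

# Delete a single queued message by its queue ID (the ID is shown
# in the first column of postqueue -p; QUEUEID is a placeholder)
postsuper -d QUEUEID

# Or drop every deferred message (use with care!)
postsuper -d ALL deferred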

Sometimes when first playing with quotas, you may want to recalculate them for a given user. This can be done as follows:

#  Recalculate a specific user's quota
doveadm quota recalc -u foobar@your.domain.com

# Or you can do this and recalculate ALL user quotas
doveadm quota recalc -A

# You will also want to run the following SQL if you decide to
# recalculate quotas for someone (or everyone) in your database:
# UPDATE domain_quota SET bytes=sq.sb, messages=sq.sm 
#   FROM (SELECT 'your.domain.com',
#           sum(bytes) as sb, sum(messages) as sm from quota2 WHERE
#            username like '%@your.domain.com') AS sq 
#   WHERE domain = 'your.domain.com';

Note: If you delete a mailbox for a specified domain, remember to manually remove: /var/mail/vhosts/domain/user
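
For example (the domain and mailbox below are placeholders; substitute your own):

# Remove the on-disk maildir left behind after deleting the
# mailbox in PostfixAdmin:
rm -rf /var/mail/vhosts/example.com/johndoe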

So… That’s it? Now I’m done?
Yes and no… my blog pretty much hands over a working mail server with little to no extra configuration needed on your part. But to properly accept mail from other people around the world, you will need the following:

  1. This mail server (ideally) must be accessible from the internet via a static IP address. This means that if you’re hosting this at home, the IP address your ISP provides may not work (dynamic IP vs. static IP). That said, a lot of ISPs can offer you a static IP for little (to no) extra cost if you don’t already have one.
  2. Your own domain name (if you don’t have an official one already), because you can’t have an email@your.domain.com if your.domain.com isn’t publicly recognized.
  3. A Mail Exchange (MX) record that points to your new mail server via its internet-accessible IP. This is the only way the outside world will be able to send mail back to you; this step is not required if the only thing you’re doing is sending email out.

    Most domain registrars (GoDaddy.com, NameCheap.com) allow you to set your own MX record right from their webpage with little to no effort. If you’re paying someone else to host websites for the domain you own, then most likely they have the MX record pointing at themselves. You may need to open a support ticket (or call them) to have the MX record changed to point back to your own server instead, or have it forwarded; see the verification example below.
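
    Once the record is published, you can confirm the world sees it; dig ships with the bind-utils package:

    # Query the MX record for your domain (substitute your own)
    dig +short MX your.domain.com

    # Or, using host instead:
    host -t MX your.domain.com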

Please keep in mind that there are risks involved with running your own mail server. You can get yourself temporarily (or permanently) blacklisted if you’re not careful. Once you’re blacklisted, you’ll have a very hard time getting other mail servers on the internet to accept your emails for delivery. Blacklisting occurs when the mail servers you interact with detect abuse. In most cases, the mail server administrator (this is YOU) won’t even know you’re abusing other people’s servers; the abuse starts right under your nose, from emails originating on your system sent by the people you’ve given mailboxes to. In fact, once your domain is blacklisted, it can be a pain in the @$$ to get de-listed later. A blacklisted domain’s emails usually never reach their intended recipients; instead, they are immediately flagged as spam and sent to the deleted items folder by the remote server. The configuration I provided covers most of these cases, but you still need to consider:

  • Don’t create mailboxes for people you know intend to use them for derogatory purposes or to spam others. Likewise, don’t allow users to send hundreds of thousands of emails a day to a massive distribution list on a regular basis, even if it’s meaningful mail. Consider that the same stuff you don’t like in your inbox is the same stuff nobody else likes in theirs either. :)
  • Don’t allow your mail server to relay mail from untrusted sources. Hence, make sure you only allow the users you create accounts for to send mail from your server.
  • Throttle all outbound mail delivery to each relay destination. With respect to the first point: even if you do have to send large amounts of mail from your system on a regular basis, do it in small batches. This way you won’t overwhelm the remote servers accepting the mail you want delivered (see the sketch after this list).

If you followed my blog and are using the settings I put in place, then you’re already configured for the last two points above. The first point is governed by your own decisions.
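
For reference, throttling in Postfix comes down to a handful of main.cf parameters. The templates already set this up for you; the values below are illustrative only:

# Illustrative only; adjust to your own volume.
# Pause between deliveries to the same destination:
postconf -e 'smtp_destination_rate_delay = 1s'
# Cap the number of parallel connections per destination:
postconf -e 'smtp_destination_concurrency_limit = 5'
# Reload Postfix so the changes take effect:
service postfix reload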

No system is bulletproof; disaster can always strike when you’re least expecting it. To cover yourself, always consider backups of the following:

  • Your PostgreSQL Database: This is where all of your mail configuration is for both your MTA and MDA. You definitely do not want to lose this. May I suggest referencing my other blog entry here, where I wrote a really simple backup/restore tool for a PostgreSQL database.
  • /etc/postfixadmin/*: Your Postfix Admin flat file configuration allowing you to centrally manage everything via a webpage.
  • /etc/postfix/*: Your Postfix flat file configuration, which defines the core of your MTA. Its configuration is what allows everything else to be centrally managed through the PostfixAdmin website.
  • /etc/roundcubemail/*: Your Roundcube flat file configuration, which allows users to check their mail via a webpage you host.
  • /etc/dovecot/*: Your Dovecot flat file configuration, which defines the core of your MDA. Its configuration also ties into the central management provided by the PostfixAdmin website.
  • /var/mail/vhosts/*: All of your users’ mailboxes are defined here. This is a vast store of read and unread mail residing on your server.
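
As a starting point, a minimal nightly backup might look like the sketch below; the paths and naming are just assumptions, so adapt them to your own retention policy:

# Backup destination (example path)
BACKUP=/backups/mxserver
mkdir -p $BACKUP
# Dump the mail database (system_mail is the PGNAME used throughout)
/bin/su -c "/usr/bin/pg_dump system_mail" postgres | \
    gzip > $BACKUP/system_mail.$(date +%Y%m%d).sql.gz
# Archive the flat file configuration
tar -czf $BACKUP/mx-config.$(date +%Y%m%d).tar.gz \
    /etc/postfix /etc/postfixadmin /etc/dovecot /etc/roundcubemail
# Archive the mailboxes themselves (this can get large!)
tar -czf $BACKUP/mx-vhosts.$(date +%Y%m%d).tar.gz /var/mail/vhosts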

Oct 20th, 2014 Security Update: Handling the POODLE SSLv3 Exploit
Last week a security exploit was disclosed that specifically targets web hosting using the SSLv3 protocol (see here for details). Previously, the NginX templates I provided (residing in nuxref-templates-mxserver version 1.0.1 or less) did not protect you from this vulnerability; as a result, earlier readers of this blog entry may be susceptible to a man-in-the-middle attack. I’ve just posted an update to nuxref-templates-mxserver (v1.0.2) which automatically covers anyone building a new mail server from this blog. If you’re a previous reader, you just need to make 2 changes to protect yourself from this exploit:

  1. Open up /etc/nginx/conf.d/roundcubemail.conf and /etc/nginx/conf.d/postfixadmin.conf and change this:
       ssl_ciphers HIGH:!ADH:!MD5;
       ssl_prefer_server_ciphers on;
       ssl_protocols SSLv3;
       ssl_session_cache shared:SSL:1m;
       ssl_session_timeout 15m;
    

    to this:

       ssl_session_timeout  5m;
       ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
       ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:AES:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK';
       ssl_prefer_server_ciphers on;
       ssl_session_cache  builtin:1000  shared:SSL:10m;
    

    This information was based on a great blog entry on securing your NginX configuration found here.

  2. Finally you will want to reload your NginX configuration so it takes on the new updates you applied:
    # Reload NginX
    service nginx reload
    

What about Apache?
Apache is a perfectly fine alternative solution as well! I simply chose NginX because it is a much more lightweight approach. In fact, PostfixAdmin and RoundCube already ship with Apache configuration out of the box, located in /etc/httpd/conf.d/. Thus, if you simply start up your Apache instance (service httpd start), you will be hosting their services right away. Please keep in mind that the default (Apache) configuration does not come with all the SSL and added security I provided in the NginX templates. Perhaps later on I will update the template rpm to include a secure Apache setup as well.

Credit
This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all of my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

Repository
This blog required me to set up my own repository, which I’m thinking some people might want me to continue maintaining: fetching and applying the latest security updates after first testing them, for the continued promise of stability. Personally, I believe I may be opening a new can of worms for myself by offering this service, because bandwidth costs money and updates cost time. But I’ll share it publicly and see how things go anyway.

If you’d like me to continue to monitor and apply updates as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! It would be greatly appreciated!

Sources
This blog could not have been made possible without the tons of resources I used, including other people’s blogs from which I picked and chose approaches.

Software Resources