Tag Archives: RPM

Mono 3.x Packages for CentOS 6

Introduction

Mono allows us to run Windows applications in our Linux environment. It is an open source implementation of Microsoft’s .NET Framework. The problem is, CentOS (and Red Hat) 6.x ship with Mono v2.4, which is a little outdated. You can’t take advantage of the newer apps .NET developers are writing. In fact, you can’t run anything that requires a version of the .NET runtime libraries newer than v3.5.

In addition to that, Mono v3.4 grants your CentOS system support for .NET applications that simply weren’t available to you with v2.4.

Compatibility   Mono v2.4   Mono v3.4
.NET 1.0        Yes         No (dropped support)
.NET 2.0        Yes         Yes
C# 3.0          Yes         Yes
ASP.NET 2.0     Yes         Yes
.NET 3.5        Partial     Yes
.NET 4.0        No          Yes
.NET 4.5        No          Yes
C# 4.0          No          Yes
ASP.NET 4.0     No          Yes
C# 5.0          No          Yes

What’s so special about the repackaging you did?

Well, first of all… it’s actually an RPM package. It doesn’t require you to haul in a ton of development libraries and compile everything from scratch. Another point is that Mono v2.4 (shipped with CentOS & Red Hat 6) had many patches applied to it. These patches forced Mono to conform to the common directory structure used natively by our operating system. It took me several hours to recreate all of these patches so that Mono v3.4 complies with the same standards.

Finally (and at the time of writing this blog), this is the first packaging of Mono v3.4 that I’ve found that can be installed via RPM without requiring you to recompile everything yourself. Hence you don’t even need to haul in any development libraries at all. Mono will just work as is. Since my repackaging was based on the original, I tried to keep all of the external RPM packages the same. That said, I did get a little confused with all of the new packages and binary tools that ship with Mono v3.x. Since I’m not a Microsoft developer, I tried to sort these new packages as best I could. Please feel free to let me know how I can improve this package if you notice anything I’ve done wrong.

Just hand over all your work already!

Absolutely, here they are:

Binary Packages:

Note: Mono (v3) was a bit picky about the SQLite version it referenced. I had to update that package to a slightly newer version as well for everything to play nicely. Only if you intend to haul in the mono-data-sqlite-*.rpm package will you be required to haul in this newer version. I’ve already provided it in my repository, but for the sceptics who want to build it themselves, I’ll include those instructions too.

Source Packages:

Debug Info Packages

Alternatively, you can get it from my repository too (this is the best and easiest way). The below instructions assume you’ve set yourself up.

# Make sure you're hooked up with my repository for this to
# work: http://nuxref.com/nuxref-repository/
################################################################
# Install Mono
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-core
################################################################
# Install additional packages too if you wish (depending on your
# needs).
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           mono-web
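
If you want a quick sanity check after the install, something along these lines should do it. This is only a sketch: it assumes mono-core provides the mono runtime, and the C# compiler (mcs) may live in the mono-devel package, so pull that in as well if the compiler is missing.

# Confirm the runtime reports v3.4.x
mono --version

# Optionally compile and run a tiny test program
# (the mcs compiler may be packaged in mono-devel)
cat << _EOF > /tmp/hello.cs
class Hello {
   static void Main() {
      System.Console.WriteLine("Hello from Mono v3.4!");
   }
}
_EOF
mcs -out:/tmp/hello.exe /tmp/hello.cs
mono /tmp/hello.exe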

I’ll Never Trust Your Stuff; Let Me Do It Myself

Sure! First you’re going to need to fetch all the patches I had to create (plus the old ones carried forward from Mono v2.4):

You can additionally view the RPM SPEC file I created here.

First prepare our development environment with mock if you haven’t already:

# Install 'mock' into your environment if you don't have it already
# This step will require you to be the superuser (root) in your native
# environment.
yum install -y mock

# Grant your normal every day user account access to the mock group
# This step will also require you to be the root user.
usermod -a -G mock YourNonRootUsername
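
# Note: the new group membership only takes effect on your next
# login. If you'd rather not log out, you can optionally start a
# sub-shell with the mock group already active (the newgrp utility
# used here normally ships with shadow-utils):
newgrp mock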
# Download the official mono packages from their official
# hosting site:
wget http://origin-download.mono-project.com/sources/mono/mono-3.4.0.tar.bz2

# Download all of the building blocks you'll need
wget --output-document=mono.spec https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAiqD2KjHhakweKY_mkLLPba/20140713/mono/mono.spec?dl=1
wget --output-document=monodir.c https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABZHv5NeWFICyAAAw--eiJoa/20140713/mono/monodir.c?dl=1
wget --output-document=mono.snk https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADoY6UvThpcQbUHhs7XUecsa/20140713/mono/mono.snk?dl=1
wget --output-document=lc https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACkja0kNxmO1ytHOIw523HTa/20140713/mono/lc?dl=1
wget --output-document=mono-3.4-ppc-threading.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAKrjdqRR826osJjbqZIu5la/20140713/mono/mono-3.4-ppc-threading.patch?dl=1
wget --output-document=mono-1.2.3-use-monodir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABHvxieGDqU8eDB24ghS_Dua/20140713/mono/mono-1.2.3-use-monodir.patch?dl=1
wget --output-document=mono-2.2-uselibdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AABSjbMfIRj5JB7HKWydQVpja/20140713/mono/mono-2.2-uselibdir.patch?dl=1
wget --output-document=mono-2.0-monoservice.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADsL0DBfI0VixRAw6uI0Vkpa/20140713/mono/mono-2.0-monoservice.patch?dl=1
wget --output-document=mono-3.4-libgdiplusconfig.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAHvISVzxIPq9xCmw2m2tcPa/20140713/mono/mono-3.4-libgdiplusconfig.patch?dl=1
wget --output-document=mono-3.4-libdir.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AACHrlv_iSp36jhSeOn4ki0fa/20140713/mono/mono-3.4-libdir.patch?dl=1
wget --output-document=mono-3.4-POSIX_ARG_MAX.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AADSN5WhjyqTQptMoWthDnYHa/20140713/mono/mono-3.4-POSIX_ARG_MAX.patch?dl=1
wget --output-document=mono-3.4.xamarin.BZ18690.patch https://www.dropbox.com/sh/9dt7klam6ex1kpp/AAAwWsEKkOlnzYZCshp29wuwa/20140713/mono/mono-3.4.xamarin.BZ18690.patch?dl=1

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install libpng-devel libjpeg-devel \
   giflib-devel libtiff-devel libexif-devel libX11-devel fontconfig-devel \
   gettext make gcc-c++ bison glib2-devel pkgconfig \
   libicu-devel libgdiplus-devel zlib-devel automake libtool \
   gettext-devel mono-core mediainfo \
   mysql-devel postgresql-devel sqlite-devel

# Copy our packages into our environment
mock -v -r epel-6-x86_64 --copyin mono.spec /builddir/build/SPECS
mock -v -r epel-6-x86_64 --copyin \
    *.patch \
    mono-3.4.0.tar.bz2 \
    mono.snk \
    lc \
    monodir.c \
    /builddir/build/SOURCES

# Shell into our environment
mock -v -r epel-6-x86_64 --shell

# Change to our build directory
cd builddir/build

# Enable Bootstrapping for the first time
# mono actually requires 'mono' (itself) to build. Weird, right?
# But still necessary! For this reason I prepared an easier
# way of enabling bootstrapping for your first build.
#
# Once you install the binaries created from your first build
# we can rebuild the package again (but this time without
# bootstrapping). The purpose of this is to ensure the mono
# binaries and packages we created are equivalent to the
# bootstrapped content.
# So... on with the bootstrapping; Note: this will take
# 20 to 30 minutes depending on how fast your system is.
rpmbuild -ba --define "_with_bootstrap=1" SPECS/mono.spec

# Now that we've created mono from a bootstrap, we can
# install the package back into our virtual environment
# and rebuild it again. But this time we rebuild it
# without the bootstrap reference.
rpm -Uhi RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm
# Now rebuild the whole thing all over again to confirm
# your build was good; Note: This will take another 20 to 30 
# minutes again...
rpmbuild -ba SPECS/mono.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/mono-3.4.0-1.el6.nuxref.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-core-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-oracle-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-postgresql-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-data-sqlite-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/monodoc-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-locale-extras-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-nunit-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-reactive-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-wcf-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-web-devel-3.4.0-1.el6.nuxref.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-winforms-3.4.0-1.el6.nuxref.x86_64.rpm .
# The debuginfo package will only exist if you successfully rebuilt
# everything without the bootstrap set
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/mono-debuginfo-3.4.0-1.el6.nuxref.x86_64.rpm .
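
If you want to hand these freshly built packages to other machines without copying RPMs around, a rough sketch using createrepo looks like the following. The directory path and repository name here are just placeholders of my own choosing:

# Turn the copied-out packages into a simple yum repository
yum install -y createrepo
mkdir -p /var/www/html/mono-repo
cp *.rpm /var/www/html/mono-repo/
createrepo /var/www/html/mono-repo

# A client (or this same host) could then reference it with a
# repo file similar to this:
cat << _EOF > /etc/yum.repos.d/mono-local.repo
[mono-local]
name=Local Mono 3.4 Packages
baseurl=file:///var/www/html/mono-repo
enabled=0
gpgcheck=0
_EOF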

Upgrading SQLite

For me, I just visited pkgs.org and downloaded the Fedora 20 source (-src.rpm) release of SQLite. Then I extracted its contents as follows:

# I can't promise this link will work, as this package is always
# evolving, but if you do the search above, you'll get the idea
wget http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Everything/source/SRPMS/s/sqlite-3.8.1-2.fc20.src.rpm

# Alternatively, you can download the source rpm package I'm
# already hosting:
wget http://repo.nuxref.com/centos/6/en/source/custom/sqlite-3.8.1-2.el6.src.rpm

# Then extracted it using this neat technique:
rpm2cpio sqlite-*.src.rpm | cpio -idmv

# Initialize our Environment
mock -v -r epel-6-x86_64 --init

# Dependencies
mock -v -r epel-6-x86_64 --install ncurses-devel \
    readline-devel glibc-devel autoconf /usr/bin/tclsh \
    tcl-devel

# You'll already have the block you need as nothing is
# changed with this package. We're just using it as is
mock -v -r epel-6-x86_64 --copyin sqlite-*.zip /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin *.patch /builddir/build/SOURCES
mock -v -r epel-6-x86_64 --copyin sqlite.spec /builddir/build/SPECS

# Shell into our environment
mock -v -r epel-6-x86_64 --shell
 
# Change to our build directory
cd builddir/build

# Build our packages (process doesn't take long ~2 min)
rpmbuild -ba SPECS/sqlite.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

# We'll return to the directory we were previously in.  We can copy
# out the packages we just built at this point. Ignore the warning
# about SELinux if you get one. It doesn't impact our goals at this
# moment.
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/SRPMS/sqlite-3.8.1-2.el6.src.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-devel-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-doc-3.8.1-2.el6.noarch.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/lemon-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-tcl-3.8.1-2.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout \
   /builddir/build/RPMS/sqlite-debuginfo-3.8.1-2.el6.x86_64.rpm .
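
As mentioned earlier, the newer SQLite build only matters if you plan on using mono-data-sqlite. If that is your case, a rough install order on the target CentOS 6 machine might look like the below (adjust the file names if your build numbers differ):

# Upgrade SQLite first so the mono-data-sqlite dependency can be met
yum -y localinstall sqlite-3.8.1-2.el6.x86_64.rpm \
   sqlite-devel-3.8.1-2.el6.x86_64.rpm

# Then pull in the Mono SQLite bindings built earlier
yum -y localinstall mono-data-sqlite-3.4.0-1.el6.nuxref.x86_64.rpm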

Credit

This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository

This blog makes use of my own repository I loosely maintain. If you’d like me to continue to monitor and apply updates as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! This would be greatly appreciated!

Sources

The majority of my efforts came from the following sites:

Configuring a DNS Server on CentOS 6

Introduction

We have been relying on the Domain Name System (DNS) since the dawn of the internet. Simply put: it allows us to access information by a human readable string or recognizable name such as google.com or nuxref.com instead of its actual IP address (which is not as easy to memorize). If we didn’t have the DNS, the internet would not have evolved as far as it has today. The DNS is built on a series of Name Servers that are all looking after their respective domain (or zone). Our Internet Service Provider (ISP) lends us their DNS servers every day when we connect to them. It’s our wireless router (at home or at work) that passes this server along to our tablet, phone, laptop, etc. when we connect to it.

Here is a simple DNS query taking place illustrating how most of us are setup today.
Managing our own Authoritative DNS Server allows us to catalog the personal devices we use daily with great ease. If you’re publicly hosting content, an Authoritative DNS server can even be used to distribute the traffic your servers receive, both geographically and in a distributed (load balancing) approach. It gives us the ability to dynamically associate names with all of the devices on our network. It’s great for the hobbyist and absolutely mandatory for any medium or larger sized company.

PowerDNS is my preferred DNS server solution. I personally prefer it to its long-term predecessor, Berkeley Internet Name Domain (BIND). BIND has been around since 1984 and has gone through years of hacky patches to get to where it is today. PowerDNS is much younger (first released in 1999), but was written without all of the growing pains BIND suffered through from the start. In all fairness, BIND developers were forced to deal with RFCs (Requests for Comments) as DNS continued to evolve into what it is today, whereas PowerDNS already had a stable set of requirements to work with from day one. Not to mention PowerDNS can be easily configured to use alternative backend databases.

You are reading this blog because you want the following:

  • A fast and reliable Authoritative DNS server with a PostgreSQL database backend.
  • You want a central configuration point; you want everything to be easy to maintain after it’s all set up.
  • You want everything to just work the first time and you want to leave the figuring it out part to the end.
  • Package management and version control is incredibly important to you.
  • You want the ability to catalog your local network by assigning devices on it their own unique (easy to remember) hostnames.
  • You want to maintain the ability to surf the internet by forwarding on requests your DNS server doesn’t know to another that does.
The beauty of running your own Authoritative DNS is that it grants you the ability to catalog and easily access everything on your local network by the hostnames you assign.

Here is what my tutorial will be focused on:

  • PowerDNS (v3.x) configured to use a Database Backend (PostgreSQL) giving you central configuration. This tutorial focuses on version 8.4 because that is what ships with CentOS and Red Hat. But most (if not all) of this tutorial should still work fine if you choose to use version 9.x of the database instead.
  • PowerDNS Recursor (v3.x) will be configured to handle any records we don’t otherwise host or override.
  • Security Considered
  • Poweradmin (v2.x) will provide our administration of the DNS records we add via its simple web interface.

Please note the application versions identified above, as this tutorial focuses specifically on them. One big issue I found while researching how to set things up was that some tutorials didn’t really mention the versions they were using. Hence, when I would stumble across these old article(s) with newer software, it would make for quite a painful experience when things didn’t work.

Please also note that other tutorials will suggest that you set up one feature at a time, then test it to see if it worked correctly before moving on to the next step. This is no doubt the proper way to do things. However, I’m just going to give it all to you at once. If you stick with the versions and packages I provide and follow my instructions, it will just work for you the first time. Debugging on your end will be a matter of tracing back to see what step you missed.

I tried to make this tutorial as cookie-cutter(ish) as I could. Therefore you can literally just copy and paste what I share right into your shell prompt and the entire setup will be automated for you.

Installation

The following four (4) steps will get you set up with your very own DNS server.

Step 1 of 4: Setup Your Environment

This is the key to my entire blog; it’s going to make all of the remaining steps just work the first time for you. All (I repeat, all) of the steps below (after this one) assume that you’ve set this environment up. You will need to set up your environment again at least once before running through any of the remaining steps below or they will not work.

It’s also important to mention that you will need to be root to configure the DNS server. This applies to all of the steps identified below throughout this blog.

I re-hosted all of the packages I used to successfully pull this blog off. This allows me to host this information and pair it with the software it works against. Feel free to hook up to my repositories to speed up your setup.

Install all of the necessary packages:

# Connect to my repository, in which I've had to rebuild a few
# packages to support PostgreSQL as well as fix some bugs in
# others. This step will really make your life easy and let
# us compare apples to apples with package versions. It also
# allows you to haul in a working setup right out of the box.
#
# Be sure you're connected to my repository for the below to work
# visit: http://nuxref.com/nuxref-repository/

################################################################
# Install our required products
################################################################
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           postgresql-server postgresql \
           php-pgsql php-imap php-mcrypt php-mbstring \
           pdns pdns-backend-postgresql pdns-recursor \
           poweradmin \
           nuxref-templates-pdns

# Also make sure these products are installed as well since we
# use them to manipulate and test some of the data
yum install -y awk sed bind-utils curl

# Choose between NginX or Apache
## NginX Option (a) - This one is my preferred choice:
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
           nginx php-fpm

## Apache Option (b):
yum install -y \
       --enablerepo=nuxref \
       --enablerepo=nuxref-shared \
            httpd php

# Setup Default Timezone for PHP. For a list of supported
# timezones you can visit here: http://ca1.php.net/timezones
TIMEZONE="America/Montreal"
sed -i -e "s|^[ t]*;*(date.timezone[ t]*=).*|1 $TIMEZONE|g" 
    /etc/php.ini

# Ensure we're not using Strict PHP Handling
sed -i -e 's/^[ t]*(error_reporting)[ t]*=.*$/1 = E_ALL & ~E_STRICT/g' 
    /etc/php.ini 

################################################################
# Setup PostgreSQL (v8.4)
################################################################
# The commands below should all work fine on a PostgreSQL v9.x
# database too; but your mileage may vary as I've not personally
# tested it yet. You can skip this section if you've already
# got a database running using one of my earlier tutorials.

# Only init the database if you haven't already. This command
# could otherwise reset things and you'll lose everything.
# If your database is already setup and running, then you can
# skip this line
service postgresql initdb

# Now that the database is initialized, configure it to trust
# connections from 'this' server (localhost)
sed -i -e 's/^[ \t]*\(local\|host\)\([ \t]\+.*\)/#\1\2/g' \
    /var/lib/pgsql/data/pg_hba.conf
cat << _EOF >> /var/lib/pgsql/data/pg_hba.conf
# Configure all local database access with trust permissions
local   all         all                               trust
host    all         all         127.0.0.1/32          trust
host    all         all         ::1/128               trust
_EOF

# Make sure PostgreSQL is configured to start up each time
# you start up your system
chkconfig --levels 345 postgresql on

# Start the database now too because we're going to need it
# very shortly in this tutorial
service postgresql start
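
Before moving on, it doesn't hurt to confirm the database really is up and accepting the trust connections we just configured. A quick check (run as root) could be:

# Should print the PostgreSQL version banner if everything is okay
/bin/su -c "/usr/bin/psql -h localhost -p 5432 -c 'SELECT version();'" postgres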

To simplify your life, I’ve made the configuration in all the steps below reference a few global variables. The ones identified below are the only ones you’ll probably want to change. May I suggest you paste the below information into your favourite text editor (vi, emacs, etc.), adjust the variables to how you want them, and then paste them back into your terminal screen.

# The following is only used for our SSL Key Generation.
# You can skip SSL Key generation if you've done so using an
# earlier tutorial
COUNTRY_CODE="7K"
PROV_STATE="Westerlands"
CITY="Lannisport"
SITE_NAME="NuxRef"

Now for the rest of the global configuration; there really should be no reason to change any of these values (but feel free to). It’s important that you paste the above information (tailored to your liking) as well as the information below into the command line interface (CLI) of the server you wish to set up.

# PostgreSQL Database
PGHOST=localhost
PGPORT=5432
PGNAME=system_dns
PGRWUSER=pdns
PGRWPASS=pdns

# Identify the domain name of your server here
# I use the .local extension because I only intend to resolve
# internal addresses with my DNS server.  You may wish to use
# a different value.
DOMAIN=nuxref.local

# Configure a recursor; the recursor will cache your database hits
# and will greatly increase performance.  Ideally you want to set
# the recursor address to the address you want to host your server
# on.  This is the same IP address you will add to everyone's
# /etc/resolv.conf later.  This is in fact your name server.
# If you leave this value at 127.0.0.1 your DNS will be restricted
# to just the server you're hosting on.
# If you aren't sure what your IP Address is, you can just type 'ifconfig'
#
# This command may also fetch your ip address:
# cat /etc/sysconfig/network-scripts/ifcfg-* | 
#    egrep '^IPADDR=' | egrep -v '127.0.0.1' | 
#    cut -f2 -d'=' | head -n1
NAMESERVER_ADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-* | 
    egrep '^IPADDR=' | egrep -v '127.0.0.1' | 
    cut -f2 -d'=' | head -n1)
# Alternatively, if you're not reading this and 'only' if the
# above failed we'll just set the address to your local address.
NAMESERVER_ADDR=${NAMESERVER_ADDR:=127.0.0.1}

# The Network that our DNS server resides on is important information
# for security purposes.  We want to only allow recursion on this
# network alone and not others hitting our server.  The below
# looks kind of cryptic, but it's just a method of extracting
# the network information automatically if you don't already
# know it. It may or may not work; it will depend on if you set
# a proper NAMESERVER_ADDR
SUBNET_ADDR=$(/sbin/ifconfig | egrep -m1 $NAMESERVER_ADDR | 
    sed -e 's/.*Mask:\([0-9.]\+\).*/\1/g')
NAMESERVER_PRFX=$(ipcalc -s -p $NAMESERVER_ADDR $SUBNET_ADDR | 
                   cut -f2 -d'=')
# Assign a default in case the above command failed.
NAMESERVER_PRFX=${NAMESERVER_PRFX:=24}

# Calculate our network
NAMESERVER_NWRK=$(ipcalc -s -n $NAMESERVER_ADDR $SUBNET_ADDR | 
                   cut -f2 -d'=')
# Assign a default in case the above command failed.
NAMESERVER_NWRK=${NAMESERVER_NWRK:=$(echo $NAMESERVER_ADDR | 
                   cut -f1,2,3 -d'.').0}

# Reverse Address Resolution Preparation
# This converts an IP Address of 1.2.3.4 to 3.2.1.in-addr.arpa
# We can use this later to create a reverse translation which
# PowerDNS can administrate for us also.  The templates I created
# will set some early examples up for you.
NAMESERVER_ARPA=$(echo "$NAMESERVER_ADDR" | 
    awk -F"." '{print $3 "." $2 "." $1 ".in-addr.arpa"}')

# We now need the 4th octet of our Name Server Address to complete
# our ARPA address for the reverse lookup. For example, if your server
# ip is 2.4.8.16, we want the '16' defined here.  The below is just
# a cheat to go ahead and extract it from the address you specified
NAMESERVER_OCT4=$(echo "$NAMESERVER_ADDR" | 
    cut -f4 -d'.')

# This is where our templates get installed to make your life
# incredibly easy and the setup to be painless. These files are
# installed from the nuxref-templates-pdns RPM package you
# installed above. If you do not have this RPM package then you
# must install it or this blog simply won't work for you.
# > yum install --enablerepo=nuxref nuxref-templates-pdns
NUXREF_TEMPLATES=/usr/share/nuxref

I realize the above environment can seem a bit cryptic. I tried to simplify this DNS setup so that even a novice’s life would be easy. The environment variables attempt to detect everyones settings automatically. In some cases, I may have just made it worse for some (hopefully not). It would be a good idea to just echo the defined variables to your screen and confirm they are as you expect them to be. They really are the key to making all of the next steps work in this blog.

# Simple Check
# Note: grab the brackets too when you copy and paste the below
(
   for VAR in COUNTRY_CODE PROV_STATE CITY SITE_NAME \
              PGHOST PGPORT PGNAME PGRWUSER PGRWPASS \
              DOMAIN NAMESERVER_ADDR NAMESERVER_PRFX \
              NAMESERVER_NWRK NAMESERVER_ARPA \
              NAMESERVER_OCT4 NUXREF_TEMPLATES; do
      [ -z $(eval "echo \$$VAR") ] && echo "You must set the variable: $VAR"
   done
)
# Pretty Printing
# Note: grab the brackets too when you copy and paste the below
(
   echo "PostgreSQL:"
   echo -e "tPGHOST=$PGHOSTntPGPORT=$PGPORTntPGNAME=$PGNAMEntPGRWUSER=$PGRWUSERntPGRWPASS=$PGRWPASSn"
   echo "SSL:"
   echo -e "tCOUNTRY_CODE='$COUNTRY_CODE'ntPROV_STATE='$PROV_STATE'ntCITY='$CITY'ntSITE_NAME='$SITE_NAME'n"
   echo "Nameserver:"
   echo -e "tDOMAIN=$DOMAINntNAMESERVER_ADDR=$NAMESERVER_ADDRntNAMESERVER_NWRK=$NAMESERVER_NWRKntNAMESERVER_PRFX=$NAMESERVER_PRFX"
   echo -e "tNAMESERVER_ARPA=$NAMESERVER_ARPAntNAMESERVER_OCT4=$NAMESERVER_OCT4n"
   echo "NuxRef Templating"
   echo -e "tNUXREF_TEMPLATES=$NUXREF_TEMPLATES"
   echo
)

Step 2 of 4: Setup PowerDNS

First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!
Database Configuration:

################################################################
# Configure PostgreSQL (for PowerDNS)
################################################################
# Optionally Eliminate Reset Database.
/bin/su -c "/usr/bin/dropdb -h $PGHOST -p $PGPORT $PGNAME 2>&1" postgres &>/dev/null
/bin/su -c "/usr/bin/dropuser -h $PGHOST -p $PGPORT $PGRWUSER 2>&1" postgres &>/dev/null

# Create Read/Write User (our Administrator)
echo "Enter the role password of '$PGRWPASS' when prompted"
/bin/su -c "/usr/bin/createuser -h $PGHOST -p $PGPORT -S -D -R $PGRWUSER -P 2>&1" postgres

# Create our Database and assign our Administrator as its owner
/bin/su -c "/usr/bin/createdb -h $PGHOST -p $PGPORT -O $PGRWUSER $PGNAME 2>&1" postgres 2>&1

# The below seems big, but it will work fine if you just copy and
# paste it as is right into your terminal: This will prepare the SQL
# statement needed to build your DNS server's database backend
sed -e '/^--?/d' \
    -e "s/%PGRWUSER%/$PGRWUSER/g" \
        $NUXREF_TEMPLATES/pgsql.pdns.template.schema.sql > \
          /tmp/pgsql.pdns.schema.sql

# load DB
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.schema.sql

# This will get your database started with some working data to use.
# This part is optional, but since it's so easy to delete stuff later
# and there really isn't a whole lot taking place here, you should run
# this step. It becomes especially useful in debugging later.
sed -e "/^--?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" 
    -e "s/%NAMESERVER_ARPA%/$NAMESERVER_ARPA/g" 
    -e "s/%NAMESERVER_OCT4%/$NAMESERVER_OCT4/g" 
        $NUXREF_TEMPLATES/pgsql.pdns.template.data.sql > 
            /tmp/pgsql.pdns.data.sql

# load DB with our data
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.data.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.data.sql
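
If you loaded the sample data, you can peek at what actually landed in the backend. The queries below assume the template follows the stock PowerDNS PostgreSQL layout (a domains table and a records table):

# List the zones PowerDNS now knows about
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c 'SELECT id, name, type FROM domains;' $PGNAME" postgres

# Count the records that were seeded for those zones
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -c 'SELECT count(*) FROM records;' $PGNAME" postgres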

Server Configuration:

################################################################
# Configure PowerDNS
################################################################
# Create backup of configuration files
[ ! -f /etc/pdns/pdns.conf.orig ] && 
   cp /etc/pdns/pdns.conf /etc/pdns/pdns.conf.orig

# Install our configuration using the template
sed -e "/^#?/d" 
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" 
    -e "s/%NAMESERVER_NWRK%/$NAMESERVER_NWRK/g" 
    -e "s/%NAMESERVER_PRFX%/$NAMESERVER_PRFX/g" 
    -e "s/%PGRWUSER%/$PGRWUSER/g" 
    -e "s/%PGRWPASS%/$PGRWPASS/g" 
    -e "s/%PGHOST%/$PGHOST/g" 
    -e "s/%PGPORT%/$PGPORT/g" 
    -e "s/%PGNAME%/$PGNAME/g" 
        $NUXREF_TEMPLATES/pgsql.pdns.template.pdns.conf > 
          /etc/pdns/pdns.conf

# Protect our configuration since it has user/pass info
# inside of it.
chmod 640 /etc/pdns/pdns.conf
chown root.pdns /etc/pdns/pdns.conf
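
If you want to eyeball the result, you should see a gpgsql backend block in the generated file whose values line up with the Step 1 variables (this assumes the template uses the stock PowerDNS gpgsql option names such as launch, gpgsql-host and gpgsql-dbname):

# Spot check the backend settings the template filled in
grep -E '^(launch|gpgsql-|local-address)' /etc/pdns/pdns.conf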

Step 3 of 4: Setup PowerDNS Recursor

################################################################
# Configure PowerDNS Recursor
################################################################
# Create backup of configuration files
[ ! -f /etc/pdns-recursor/recursor.conf.orig ] && 
   cp /etc/pdns-recursor/recursor.conf \
        /etc/pdns-recursor/recursor.conf.orig

# Install our configuration using the template
sed -e "/^#?/d" 
        $NUXREF_TEMPLATES/pgsql.pdns-recursor.template.recursor.conf > 
          /etc/pdns-recursor/recursor.conf

# Generate an up to date root.hints file, this allows recursion
# back out to the internet.
curl -u ftp:ftp 'ftp://ftp.rs.internic.net/domain/db.cache' \
    -o /etc/pdns-recursor/root.hints

# If the above command did not work, you can use the one I shipped
# with the nuxref-template.pdns packaging:
#   cp $NUXREF_TEMPLATES/root.hints /etc/pdns-recursor/root.hints

# Alternatively, PowerDNS is hardcoded with a default set of root hints,
# but I personally just like seeing it as an external configuration instead.
# If all of this is cumbersome to you and you simply don't want
# to use the official root.hints (and prefer the hard-coded one instead),
# you can do the following:
#
# sed -i -e '/^\([ \t]*hint-file=.*\)/d' /etc/pdns-recursor/recursor.conf

# Start up all of our services
chkconfig pdns-recursor --level 345 on
chkconfig pdns --level 345 on
service pdns-recursor restart
service pdns restart

It’s important to take a time-out on this step just to make sure everything is working.
A few simple commands should work perfectly for you otherwise we have an issue:

# The following command should output a number of IP addresses for google.com
nslookup google.com $NAMESERVER_ADDR
# The following command should output the same list
nslookup -port=5300 google.com 127.0.0.1

# If you receive an error such as 
#      ** server can't find google.com: NXDOMAIN
# Then you need to revisit the above steps again

# Alternatively, if you receive an error such as:
#  ;; connection timed out; trying next origin
#  ;; connection timed out; trying next origin
#  ;; connection timed out; no servers could be reached
# Then you have most likely been restricted from accessing
# port 53 on the outside world. You're not really
# in a problem state at this point. Make sure the rest
# of the tests (below) work and then make sure to follow
# the section of this blog entitled:
#     'Zone Forwarding Alternative'
# 
# 
# You should be able to resolve the domain
# poweradmin.$DOMAIN to this very server you're hosting
# on:
nslookup poweradmin.$DOMAIN $NAMESERVER_ADDR

# You can even test reverse lookups using our data
# we loaded with the following command:
nslookup $NAMESERVER_ADDR $NAMESERVER_ADDR

# The above should resolve itself to hostmaster.your.domain
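
If you prefer dig (installed earlier as part of bind-utils) over nslookup, the same checks look like this:

# Forward lookup against our new server
dig @$NAMESERVER_ADDR poweradmin.$DOMAIN +short

# Reverse lookup using the sample data we loaded
dig @$NAMESERVER_ADDR -x $NAMESERVER_ADDR +short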

Step 4 of 4: Setup PowerAdmin

First off, make sure you’ve set up your environment correctly (defined in Step 1 above) or you will have problems with the outcome of this step!

################################################################
# Configure PostgreSQL (for PowerAdmin)
################################################################

# Now we need to update our database with a schema for
# poweradmin to work with
sed -e "/^--?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
    -e "s/%PGRWUSER%/$PGRWUSER/g" 
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.schema.sql > 
            /tmp/pgsql.poweradmin.schema.sql

# Now we can load the file:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.poweradmin.schema.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.poweradmin.schema.sql

# If you loaded the sample dataset for PowerDNS earlier, then you'll
# want to additionally load this file too to help PowerAdmin access it
sed -e "/^--?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
    -e "s/%NAMESERVER_ADDR%/$NAMESERVER_ADDR/g" 
    -e "s/%NAMESERVER_ARPA%/$NAMESERVER_ARPA/g" 
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.data.sql > 
            /tmp/pgsql.poweradmin.data.sql

# load DB with our data
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.poweradmin.data.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.poweradmin.data.sql

################################################################
# Configure PowerAdmin (for PowerDNS Administration)
################################################################
# Create backup of configuration files
[ ! -f /etc/poweradmin/config.inc.php.orig ] && 
   cp /etc/poweradmin/config.inc.php \
        /etc/poweradmin/config.inc.php.orig

# Apply our configuration
sed -e "/^//?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
    -e "s/%PGHOST%/$PGHOST/g" 
    -e "s/%PGNAME%/$PGNAME/g" 
    -e "s/%PGPORT%/$PGPORT/g" 
    -e "s/%PGRWUSER%/$PGRWUSER/g" 
    -e "s/%PGRWPASS%/$PGRWPASS/g" 
        $NUXREF_TEMPLATES/pgsql.poweradmin.template.config.inc.php > 
            /etc/poweradmin/config.inc.php

# Protect file since it contains passwords
chmod 640 /etc/poweradmin/config.inc.php
chown root.apache /etc/poweradmin/config.inc.php

# NginX Configuration
sed -e "/^#?/d" 
    -e "s/%DOMAIN%/$DOMAIN/g" 
        $NUXREF_TEMPLATES/nginx.poweradmin.template.conf > 
            /etc/nginx/conf.d/poweradmin.conf

################################################################
# Generate SSL Keys For Webpage Security
################################################################
# Generate SSL Keys (if you don't have any already) that we
# will use to secure the PowerAdmin web interface.
openssl req -nodes -new -x509 -days 730 -sha256 -newkey rsa:2048 \
   -keyout /etc/pki/tls/private/$DOMAIN.key \
   -out /etc/pki/tls/certs/$DOMAIN.crt \
   -subj "/C=$COUNTRY_CODE/ST=$PROV_STATE/L=$CITY/O=$SITE_NAME/OU=IT/CN=$DOMAIN"

# Permissions; protect our Private Key
chmod 400 /etc/pki/tls/private/$DOMAIN.key

# Permissions; protect our Public Key
chmod 444 /etc/pki/tls/certs/$DOMAIN.crt
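
You can confirm the certificate came out the way you intended before wiring it into the web server:

# Display the subject and validity window of the new certificate
openssl x509 -in /etc/pki/tls/certs/$DOMAIN.crt -noout -subject -dates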

At this point you should be able to start NginX. If it’s already running,
send it a reload or just run the commands below.

# If you chose the NginX approach you'll want to make sure it's
# setup to run correctly and restart itself if the system is
# ever restarted:

# Ensure NginX runs even after a reboot
chkconfig nginx --level 345 on
chkconfig php-fpm --level 345 on

# Restart the service if it isn't running already
service php-fpm restart
service nginx restart

Now, we’re almost done. We need to make sure our server is referencing our new DNS server. You may need to update your network settings, but the following will just cheat for the time being and set you up:

[ ! -f /etc/resolv.conf.orig ] && 
   cp /etc/resolv.conf \
       /etc/resolv.conf.orig

# Tell our server to use our new DNS server
cat << _EOF > /etc/resolv.conf
search $DOMAIN
nameserver $NAMESERVER_ADDR
_EOF

# Restore your old configuration like so
# if you need to:
#  /bin/mv -f /etc/resolv.conf.orig /etc/resolv.conf

You will want to additionally add the following to your iptables configuration (/etc/sysconfig/iptables):

#---------------------------------------------------------------
# DNS Traffic
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT

#---------------------------------------------------------------
# Web Traffic for PowerAdmin
#---------------------------------------------------------------
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
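
After editing /etc/sysconfig/iptables you still need to load the new rules; restarting the service re-reads that file. Alternatively, you can insert the same rules live without a restart:

# Re-read /etc/sysconfig/iptables
service iptables restart

# Or apply the DNS rules immediately without touching the saved file
iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
iptables -I INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT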

Use the login/pass as admin/admin when you log in for the first time. Consider changing this afterwards!
You should now be able to visit https://poweradmin (the search domain we added to /etc/resolv.conf fills in the rest of the hostname) and see a login screen. You may have to accept the ‘untrusted key’ prompt. Don’t worry; it’s safe to do so! In fact, if you’re worried, then just have a look at the key itself before accepting it. You’ll see that it’s just the one we generated earlier. The login is admin and the password is admin at the start. You will want to consider changing this right away after you log in, as a precaution.

Your setup can now be illustrated by the model below. It’s virtually the same setup as you had before; however, now instead of querying your ISP’s DNS Server, you query your very own local one. You can easily maintain your own local network and begin labeling the devices you use on it using PowerAdmin.

This illustration shows the PowerDNS Recursor (pdns-recursor)
Your Authoritative (Power)DNS Server caches the location for you for a period of time, making subsequent requests to the same spot VERY fast.
All Subsequent Requests are Cached for a Period of Time.

Zone Forwarding Alternative

Up until now, our ISP was using its own root.hints file (or some alternative method) to look up our requests. But now, it is our server that goes out directly into the big bad internet using this technique. Since DNS requests are not encrypted, it’s now possible for others to spy on the hostnames we’re resolving (and the places we’re visiting). Not only that, these same people can easily trace the requests back to us (all DNS requests originate from our IP now). This could allow someone to learn, as an example, which online banking site you use. Prior to hosting your own DNS server, all websites and servers you accessed were channeled privately between you and your ISP, so this was never a problem. It was our ISP who made the (recursive) requests for us instead of us doing them ourselves. Prior to now, what we looked up didn’t explicitly trace back to us; it traced back to our ISP. Previously, we actually had more privacy (depending on the contract we signed with our ISP).

Your ISP has thousands of clients making requests to its DNS servers constantly. As a result, it has probably already cached 90% of all the websites we intend to visit. Cached content means a very speedy response from its servers. Meanwhile, our local DNS server’s cache will (probably) be empty most of the time (depending on how many people use it). Hence your ISP’s DNS Server will be MUCH faster than yours.

When you signed up with your ISP, they would have given you (at least) one DNS server to use (most provide two: a primary and a backup). We can actually tell our DNS Server to use these instead of our root.hints file when it finds a domain that needs to be further looked up. This way, you regain your secure pipe between you and your ISP. The trade-off is you’re adding one more hop to your recursive lookups. But in most scenarios, they will have already cached what you’re looking for, so it would be an immediate response. The below diagram illustrates the worst case scenario:

A forwarding zone of ‘*’ (asterisk) tells the PowerDNS Recursor to forward all requests to a specific server. In our example we use our ISP’s DNS Servers.

Here is how you can alter your configuration:

# Put your DNS Servers below,  the ones in place right
# now are the public ones offered by Google.
DNS_SERVERS="8.8.8.8 8.8.4.4"

# Remove any information that may conflict
sed -i -e '/^\([ \t]*hint-file=.*\)/d' /etc/pdns-recursor/recursor.conf
sed -i -e '/^\([ \t]*forward-zones=.*\)/d' /etc/pdns-recursor/recursor.conf

# Disable hint-file
echo 'hint-file=' >> /etc/pdns-recursor/recursor.conf

# Prepare Forwarding Zones for everything unmatched:
echo -n 'forward-zones=*=' >> /etc/pdns-recursor/recursor.conf
echo $(echo "$DNS_SERVERS" | 
    sed -e 's/^[ t]*//g' -e 's/[ t]*$//g' -e 's/[ t]+/, /g') >> 
     /etc/pdns-recursor/recursor.conf

# Now restart our recursor
service pdns-recursor restart
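
A quick way to confirm the recursor picked up the change is to glance at the configuration it is now using and re-run an earlier lookup:

# The forward-zones entry should now list your chosen DNS servers
grep '^forward-zones=' /etc/pdns-recursor/recursor.conf

# And external lookups should still resolve as before
nslookup google.com $NAMESERVER_ADDR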

Got Old BIND Configuration You Need Imported?

This step is completely optional! If you’re not familiar with what BIND even is, or know you’ve never used it, you can freely skip this section.
If you’re migrating from BIND to PowerDNS then you may already have a setup in place. PowerDNS makes the transition easy by providing a tool (zone2sql) that will scan your old BIND configuration and generate the SQL needed for an easy migration to PowerDNS.

################################################################
# Generate SQL content from all of your zone files
################################################################
# I just had 1 simple DNS zone, but you may have many.
# The below did all the work for me (bind was configured to
# run in a chroot environment):
# zone2sql --gpgsql --zone=/var/chroot/var/named/data/zone.nuxref.local > \
#    /tmp/pgsql.pdns.zones.sql
# zone2sql --gpgsql --zone=/var/chroot/var/named/data/192.168.0 >> \
#     /tmp/pgsql.pdns.zones.sql

# You could even cheat and run all your files with a command like this
# Please note that this is optional (and not part of the blog, it's just
# a simple conversion tool for those who already have bind configuration
ZONE_DIR=/var/chroot/var/named/data/
[ -f /tmp/pgsql.pdns.zones.sql ] && /bin/rm -f /tmp/pgsql.pdns.zones.sql
for ZONE in $(find $ZONE_DIR -type f); do
   # Fetch ORIGIN/ZONE ID
   ZONE_ID=$(cat $ZONE | egrep '^[ \t]*\$ORIGIN' | \
               sed -e 's/^.*\$ORIGIN[ \t]\+\([^ \t]\+\).*/\1/g' \
                   -e 's/[. \t]*$//g')
   [ -z "$ZONE_ID" ] && echo "Error Parsing: $ZONE" && continue
   zone2sql --gpgsql --zone=$ZONE --zone-name=$ZONE_ID >> /tmp/pgsql.pdns.zones.sql
done

# Now before you load this file into your database, you may
# want to review it.  It doesn't hurt to scan it over and remove
# any entries you don't think would be useful.

# Under normal circumstances you would be done at this point, however because
# we are additionally using poweradmin, we need to create a few zone entries
# based on the SQL file we just generated.
$NUXREF_TEMPLATES/import2zone.awk /tmp/pgsql.pdns.zones.sql >> \
    /tmp/pgsql.pdns.zones.sql

# Then I just loaded the file straight into the database:
/bin/su -c "/usr/bin/psql -h $PGHOST -p $PGPORT -f /tmp/pgsql.pdns.zones.sql $PGNAME 2>&1" postgres 2>&1
# cleanup
/bin/rm -f /tmp/pgsql.pdns.zones.sql
# You're done!

So… That’s it? Now I’m done?

Yes and No… My blog pretty much hands over a working DNS server with little to no extra configuration needed on your part.

No system is bulletproof; disaster can always strike when you’re least expecting it. To cover yourself, always consider backups of the following (a minimal backup sketch follows this list):

  • Your PostgreSQL Database: This is where all of your DNS configuration is stored. You definitely do not want to lose this. May I suggest you reference my other blog entry here where I wrote a really simple backup/restore tool for a PostgreSQL database.
  • /etc/poweradmin/*: Your PowerAdmin flat file configuration allowing you to centrally manage everything via a webpage.
  • /etc/pdns/*: Your PowerDNS flat file configuration which defines the core of your DNS Server. Its configuration allows you to centrally manage everything else through the PowerAdmin website.
  • /etc/pdns-recursor/*: Your PowerDNS Recursor flat file configuration which grants you the recursive functionality of your DNS Server.
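
As a starting point, a very rough backup sketch covering the items above might look like the following (it assumes the Step 1 environment variables are still set in your shell; the output file names are just examples):

# Dump the PowerDNS/PowerAdmin database (run as root)
/bin/su -c "/usr/bin/pg_dump -h $PGHOST -p $PGPORT $PGNAME" postgres > \
    /root/system_dns.$(date +%Y%m%d).sql

# Archive the flat file configuration
tar czf /root/dns-config.$(date +%Y%m%d).tar.gz \
    /etc/poweradmin /etc/pdns /etc/pdns-recursor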

What about Apache?

Apache is a perfectly fine alternative solution as well! I simply chose NginX because it is a much more lightweight approach. In fact, PowerAdmin already comes with an Apache configuration out of the box located in /etc/httpd/conf.d/. Thus, if you simply start up your Apache instance (service httpd start), you will be hosting its services right away. Please keep in mind that the default (Apache) configuration does not come with all the SSL and added security I provided with the NginX templates. Perhaps later on, I will update the template RPM to include a secure Apache setup as well.

Credit

This blog took me a very, very long time to put together and test! The repository hosting alone now accommodates all my blog entries up to this date. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

I’ve tried hard to make this a complete working solution out of the box. Please feel free to email me or post comments below with any suggestions you have so I can ensure this blog is as complete as possible! Positive feedback is always welcome too!

Repository

This blog makes use of my own repository I loosely maintain. If you’d like me to continue to monitor and apply updates as well as host the repository for the long term, please consider donating or offering a mirror server to help me out! This would be greatly appreciated!

Sources

Configuring and Installing NRPE and NSCA into Nagios Core 4 on CentOS 6

Introduction

About a month ago I wrote (and updated) an article on how to install Nagios Core 4 onto your system. I’m a bit of a perfectionist, so I’ve rebuilt the packages a little to accommodate my needs. Now I thought it might be a good idea to introduce some of the powerful extensions you can get for Nagios.

For an updated solution, you may wish to check out the following:

  • NRDP for Nagios Core on CentOS 7.x: This blog explains how awesome NRDP really is and why it might become a vital asset to your own environment. This tool can be used to replace NSCA’s functionality. The blog also provides the first set of working RPMs (with SELinux support of course) of its kind to support it.
  • NRPE for Nagios Core on CentOS 7.x: This blog explains how to set up NRPE (v3.x) for your Nagios environment. At the time this blog was written, there was no packaging of its kind for this version.

RPM Solution

RPMs provide version control and an automated set of scripts to configure the system how I want it. The beauty of them is that if you disagree with something the tool you’re packaging does, you can feed the RPM patch files to accommodate it without obstructing the original author’s intention.

Now I won’t lie and claim I wrote these SPEC files from scratch because I certainly didn’t. I took the stock ones that ship with these products (NRPE and NSCA) and modified them to accommodate and satisfy my compulsive needs. 🙂

My needs required a bit more automation in the setup as well as including:

  • A previous Nagios requirement I had was an /etc/nagios/conf.d directory that behaves similar to how Apache works. I wanted to be able to drop configuration files in there and just have them work without re-adjusting other configuration files. In retrospect, these plugins are a perfect example of what can use this folder and work right out of the box.
  • These new Nagios plugins should adapt to the new nagioscmd permissions. The nagioscmd group permission was a Nagios requirement I had made in my previous blog specifically for plugin access.
  • NSCA should prepare some default configuration to make it easier on an administrator.
  • NSCA servers that don’t respond within a certain time should advance to a critical state. This should be part of the default (optional) configuration one can use.
  • Both NRPE and NSCA should plug themselves into Nagios silently without human intervention being required.
  • Both NRPE and NSCA should log independently to their own controlled log file that is automatically rotated by the system when required.

Nagios Enhancement Focus

The key things I want to share with you guys that you may or may not find useful for your own environment are the following:

  • Nagios Remote Plugin Executor (NRPE): NRPE (officially accessed here) provides a way to execute all of the Nagios monitoring tools on a remote server. These actions are all performed through a secure (private) connection to the remote server and then reported back to Nagios. NRPE can allow you to monitor servers that are spread over a WAN (even the internet) from one central monitoring server. This is truly the most fantastic extension of Nagios in my opinion.
    NRPE High Level Overview
  • Nagios Service Check Acceptor (NSCA): NSCA (officially accessed here) provides a way for external applications to report their status directly to the Nagios Server on their own. This solution still allows the remote monitoring of a system by taking the responsibility for the status checks off of Nagios. However, the fantastic features of Nagios still apply: you are still centrally monitoring your application, and Nagios will immediately take action in notifying you if your application stops responding or reports a bad status. This solution is really useful when working with closed systems (where opening ports to other systems is not an option).
    NSCA High Level Overview

Just give me your packaged RPMS

Here they are:

How do I make these packages work for me?

In all cases, the RPMs take care of just about everything for you, so there isn’t really much to do at this point. Some considerations however are as follows:

  • NRPE
    NRPE – Nagios Remote Plugin Executor

    In an NRPE setup, Nagios is always the client and all of the magic happens when it uses the check_nrpe plugin. Most of NRPE’s configuration resides on the remote server that Nagios will monitor. In a nutshell, NRPE provides the gateway to check a remote system’s status, but in a much more secure and restrictive manner than the check_ssh plugin which already comes with the nagios-plugins package. check_ssh requires you to create a remote user account it can connect with for remote checks. This can leave your system vulnerable to an attack since you can do a lot more damage with a compromised SSH account. However, check_nrpe uses the NRPE protocol and can only return what you let it; this makes it a MUCH safer choice than check_ssh!

    You’ll want to install nagios-plugins-nrpe on the same server you’re hosting Nagios on:

    # Download NRPE
    wget --output-document=nagios-plugins-nrpe-2.15-1.el6.x86_64.rpm http://repo.nuxref.com/centos/6/en/x86_64/custom/nagios-plugins-nrpe-2.15-4.el6.nuxref.x86_64.rpm
    
    # Now install it
    yum -y localinstall nagios-plugins-nrpe-2.15-1.el6.x86_64.rpm
    

    Again I must stress, the above setup will work right away presuming you chose to use my custom build of Nagios introduced in my blog that went with it.

    Just to show you how everything works, we’ll make the Nagios Server the NRPE Server as well. In a real-world scenario, this would not be the case at all! But feel free to repeat the setup example below on a remote system as well, because its configuration will be identical! 🙂

    # Install our NRPE Server
    wget --output-document=nrpe-2.15-1.el6.x86_64.rpm http://repo.nuxref.com/centos/6/en/x86_64/custom/nrpe-2.15-4.el6.nuxref.x86_64.rpm
    
    # Install some Nagios Plugins we can configure NRPE to use
    wget --output-document=nagios-plugins-1.5-1.x86_64.rpm http://repo.nuxref.com/centos/6/en/x86_64/custom/nagios-plugins-1.5-5.el6.nuxref.x86_64.rpm
    
    # Now Install it
    yum -y localinstall nrpe-2.15-1.el6.x86_64.rpm \
       nagios-plugins-1.5-1.x86_64.rpm
    # This tool requires xinetd to be running; start it if it isn't
    # already running
    service xinetd status || service xinetd start
    
    # Make sure our system will always start xinetd
    # even if it's rebooted
    chkconfig --level 345 xinetd on
    

    Now we can test our server by creating a test configuration:

    # Create a NRPE Configuration our server can accept
    cat << _EOF > /etc/nrpe.d/check_mail.cfg
    command[check_mailq]=/usr/lib64/nagios/plugins/check_mailq -c 100 -w 50
    _EOF
    
    # Create a temporary test configuration to work with:
    cat << _EOF > /etc/nagios/conf.d/nrpe_test.cfg
    define service{
       use                 local-service
       service_description Check Users
       host_name           localhost
       # check_users is already defined for us in /etc/nagios/nrpe.cfg
    	check_command		  check_nrpe!check_users
    }
    
    # Test our new custom one we just created above
    define service{
       use                 local-service
       service_description Check Mail Queue
       host_name           localhost
       # Use the new check_mailq we defined above in /etc/nrpe.d/check_mail.cfg
    	check_command		  check_nrpe!check_mailq
    }
    _EOF
    
    # Reload Nagios so it sees our new configuration defined in
    # /etc/nagios/conf.d/*
    service nagios reload
    
    # Reload xinetd so nrpe sees our new configuration defined in
    # /etc/nrpe.d/*
    service xinetd reload
    

    We can even test our connection manually by calling the command:

    # This is what the output will look like if everything is okay:
    /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_mailq
    OK: mailq is empty|unsent=0;50;100;0
    

    Another scenario you might see (when setting one up on your remote server) is:

    /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_mailq
    CHECK_NRPE: Error - Could not complete SSL handshake.
    

    Uh oh, “Could not complete SSL handshake”! What does that mean?
    This is the most common error people see with the NRPE plugin. If you Google it, you’ll get an overwhelming number of hits suggesting how you can resolve the problem. I found this link useful.
    That said, I can probably tell you right off the bat why it isn’t working for you. Assuming you’re using the packaging I provided, it’s most likely because your NRPE Server is denying the requests your Nagios Server is making to it.
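
    Before changing anything, a quick hedged troubleshooting sketch can help narrow the problem down; the log path below assumes the default CentOS syslog configuration and the host name is a placeholder for your own:

    # Is anything listening on the NRPE port (5666)? Under this setup
    # it will be xinetd answering on NRPE's behalf.
    netstat -tlnp | grep 5666

    # Calling check_nrpe without -c simply asks the daemon for its
    # version; if this works, the transport and access rules are fine.
    /usr/lib64/nagios/plugins/check_nrpe -H your.nrpe.server.example.com

    # xinetd logs refused connections via syslog
    grep -i nrpe /var/log/messages | tail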

    To fix this, access your NRPE Server and open up /etc/xinetd.d/nrpe in an editor of your choice. You need to allow your Nagios Server access by adding its IP address to the only_from entry. Or you can just type the following:

    # Set your Nagios Server IP here:
    NAGIOS_SERVER=192.168.192.168
    
    # If you want to keep your previous entries and append the server
    # you can do the following (spaces delimit the servers):
    sed -i -e "s|^(.*only_from[^=]+=)[ t]*(.*)|1 2 $NAGIOS_SERVER|g" 
       /etc/xinetd.d/nrpe
    
    # The below command is fine too if you just want to replace what is
    # there with the server of your choice (you can use either example)
    sed -i -e "s|^\(.*only_from[^=]*=\).*|\1 $NAGIOS_SERVER|g" \
       /etc/xinetd.d/nrpe
    
    # When you're done, reload xinetd so it picks up its new configuration
    service xinetd reload
    

    Those who didn’t receive the error I showed above: it’s only because you’re using your Nagios Server as your NRPE Server too (which the xinetd configuration is set up to accept by default). So please pay attention to this when you start installing the NRPE server remotely.

    You will want to install the nagios-plugins package on your NRPE Servers as well; it grants you access to all the same great monitoring tools that have already been proven to work and integrate perfectly with Nagios. This will save you a great deal of effort when setting up the NRPE status checks.
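
    For example (a hedged sketch; the thresholds are illustrative and the plugin paths are the ones used by the nagios-plugins package throughout this blog), you could expose a couple of extra checks on the NRPE side the same way we did with check_mailq earlier:

    # Expose a disk and a load check through NRPE
    cat << _EOF > /etc/nrpe.d/check_basics.cfg
    command[check_disk_root]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /
    command[check_load]=/usr/lib64/nagios/plugins/check_load -w 5,4,3 -c 10,8,6
    _EOF

    # Reload xinetd so NRPE picks up the new commands
    service xinetd reload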

    As a final note, you may want to make sure port 5666 is open on your NRPE Server’s firewall; otherwise the Nagios Server will not be able to perform remote checks.

    ## Open NRPE Port (as root)
    iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5666 -j ACCEPT
    
    # consider adding this change to your iptables configuration
    # as well so when you reboot your system the port is
    # automatically open for you. See: /etc/sysconfig/iptables
    # You'll need to add a similar line as above (without the
    # iptables reference)
    # -A INPUT -m state --state NEW -m tcp -p tcp --dport 5666 -j ACCEPT
    
  • NSCA
    NSCA – Nagios Service Check Acceptor

    Remember, NSCA is used for systems that connect to you remotely (instead of you connecting to them, which is what NRPE does). This makes it a perfect choice for remote systems you do not want to open ports on unnecessarily. That said, it means you need to open up ports on your Monitoring (Nagios) server instead.
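
    To make the direction of traffic concrete, here is a hedged sketch of what a remote system might run from cron once the nsca-client package (covered below) is installed on it; the host name, service name, and Nagios server address are all placeholders, and the service would still need a matching passive definition on the Nagios side like the one we create next:

    # A remote host pushes a simple heartbeat to the Nagios server
    # every 5 minutes (cron.d format: minute hour dom month dow user command)
    cat << '_EOF' > /etc/cron.d/nsca_heartbeat
    */5 * * * * root echo "myhost.example.com,Heartbeat,0,alive and well" | /usr/sbin/send_nsca -H nagios.example.com -d ',' -c /etc/nagios/send_nsca.cfg
    _EOF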

    You’ll want to install nsca on the same server you’re hosting Nagios on:

    # Download NSCA
    wget --output-document=nsca-2.7.2-9.el6.x86_64.rpm http://repo.nuxref.com/centos/6/en/x86_64/custom/nsca-2.7.2-10.el6.nuxref.x86_64.rpm
    
    # Now install it
    yum -y localinstall nsca-2.7.2-9.el6.x86_64.rpm
    
    # This tool requires xinetd to be running; start it if it isn't
    # already running
    service xinetd status || service xinetd start
    
    # Make sure our system will always start xinetd
    # even if it's rebooted
    chkconfig --level 345 xinetd on
    
    # SELinux Users may wish to turn this flag on if they intend to allow it
    # to call content as root (using sudo) which it must do for some status checks.
    setsebool -P nagios_run_sudo on
    

    The best way to test that everything is working is to also install the nsca-client on the same machine we just installed NSCA on (above). Then we can simply create a test passive service to experiment with. The below setup will work presuming you chose to use the custom build of Nagios introduced in the blog that accompanies this one.

    # First install our NSCA client on the same machine we just installed NSCA
    # on above.
    wget --output-document=nsca-client-2.7.2-9.el6.x86_64.rpm http://repo.nuxref.com/centos/6/en/x86_64/custom/nsca-client-2.7.2-10.el6.nuxref.x86_64.rpm
    
    # Now install it
    yum -y localinstall nsca-client-2.7.2-9.el6.x86_64.rpm
    
    # Create a temporary test configuration to work with:
    cat << _EOF > /etc/nagios/conf.d/nsca_test.cfg
    # Define a test service. Note that the service 'passive_service'
    # is already predefined in /etc/nagios/conf.d/nsca.cfg which was
    # placed when you installed my nsca rpm
    define service{
       use                 passive_service
       service_description TestMessage
       host_name           localhost
    }
    _EOF
    
    # Now reload Nagios so it reads in our new configuration
    # Note: This will only work if you are using my Nagios build
    service nagios reload
    

    Now that we have a test service set up, we can send it different Nagios statuses through the send_nsca binary that was made available to us after installing nsca-client.

    # Send a Critical notice to Nagios using our test service
    # and send_nsca. By default send_nsca uses a '<tab>' as its
    # delimiter, but that is hard to show in a blog (it can get mixed
    # up with spaces), so in the examples below I add a -d switch
    # to change the delimiter used in the message.
    # The syntax is simple:
    #    hostname,nagios_service,status_code,status_msg
    #
    # The 'host_name' and 'service_description' of the test service
    # we defined above make up our first 2 delimited columns below.
    # The status_code is as simple as:
    #       0 : Okay
    #       1 : Warning
    #       2 : Critical
    # The final delimited entry is just the human readable text
    # we want to pass along with the status.
    #
    # Here we'll send our critical message:
    cat << _EOF | /usr/sbin/send_nsca -H 127.0.0.1 -d ',' -c /etc/nagios/send_nsca.cfg
    localhost,TestMessage,2,This is a Test Error
    _EOF
    
    # Open your Nagios screen (http://localhost/nagios) at this point
    # and watch the status change (it can take 4 or 5 seconds or so for
    # the command above to register).
    
    # Cool?  Here is a warning message:
    cat << _EOF | /usr/sbin/send_nsca -H 127.0.0.1 -d ',' -c /etc/nagios/send_nsca.cfg
    localhost,TestMessage,1,This is a Test Warning
    _EOF
    
    # Check your warning in Nagios; when you're happy, here is your
    # OKAY message:
    cat << _EOF | /usr/sbin/send_nsca -H 127.0.0.1 -d ',' -c /etc/nagios/send_nsca.cfg
    localhost,TestMessage,0,Life is good!
    _EOF
    

    Since NSCA requires you to listen on a public port, you’ll need to know this last bit of information to complete your NSCA configuration. Up until now, the package I provide only grants access to localhost for security reasons. You’ll need to take the next step and allow your remote systems to talk to you.

    NSCA uses port 5667, so you’ll want to make sure your firewall has this port open using the following command:

    ## Open NSCA Port (as root)
    iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT
    
    # consider adding this change to your iptables configuration
    # as well so when you reboot your system the port is
    # automatically open for you. See: /etc/sysconfig/iptables
    # You'll need to add a similar line as above (without the
    # iptables reference)
    # -A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT
    

    Another security measure in place with the NSCA configuration you installed out of the box is that it is managed by xinetd. The configuration can be found here: /etc/xinetd.d/nsca. The restriction you’ll want to pay close attention to is line 16, which reads:

    only_from = 127.0.0.1 ::1

    If you remove this line, you’ll allow any system to connect to yours; that is a bit unsafe, but it is an option. Personally, I recommend that you individually add each remote system you want to monitor to this line. Use a space to separate more than one system.
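
    For example (the two addresses at the end are placeholders for your own remote systems), the edited entry would end up looking something like this:

    only_from = 127.0.0.1 ::1 192.168.1.50 192.168.1.51

    Remember to reload xinetd afterwards (service xinetd reload) so the change takes effect.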

    You can add more security by setting up an NSCA passphrase, which resides in /etc/nagios/nsca.cfg; place the same passphrase in every nsca-client you set up by updating /etc/nagios/send_nsca.cfg.

    Considering our example above, I can do the following to add a passphrase:

    # Configure Client
    sed -i -e 's/^#*password=/password=ABCDEFGHIJKLMNOPQRSTUVWXYZ/g' \
       /etc/nagios/send_nsca.cfg
    # Configure Server
    sed -i -e 's/^#*password=/password=ABCDEFGHIJKLMNOPQRSTUVWXYZ/g' \
       /etc/nagios/nsca.cfg
    # Reload xinetd so it rereads /etc/nagios/nsca.cfg
    service xinetd reload
    

I don’t trust you, I want to repackage this myself!

As always, I will provide you with a way to build the source code from scratch if you don’t want to use what I’ve already prepared. I use mock for everything I build so I don’t need to haul development packages into my native environment. You’ll need to make sure mock is set up and configured properly first:

# Install 'mock' into your environment if you don't have it already.
# This step will require you to be the superuser (root) in your native
# environment.
yum install -y mock

# Grant your normal every day user account access to the mock group
# This step will also require you to be the root user.
usermod -a -G mock YourNonRootUsername

At this point it’s safe to change from the ‘root‘ user back to the user account you granted the mock group privileges to in the step above. We won’t need the root user again until the end of this tutorial when we install our built RPM.

Just to give you a quick summary of what I did, here are the new spec files and patch files I created:

  • NSCA RPM SPEC File: Here is the enhanced spec file I used (enhancing the one already provided in the EPEL release found on pkgs.org). At the time I wrote this blog, the newest version of NSCA was v2.7.2-8, which is why I repackaged it as v2.7.2-9 to include my enhancements.
    nsca.xinetd.logrotate.patch was the only patch I needed to create; it pre-configures a working NSCA xinetd server right out of the box.
  • NRPE RPM SPEC File: Here is the enhanced spec file I used (enhancing the one already provided in the EPEL release found on pkgs.org). At the time I wrote this blog, the newest version of NRPE was v2.14-5; however, v2.15 was available off of the Nagios website, so I repackaged it as v2.15-1 to include my enhancements. I created 2 patches along with the spec file enhancements.
    nrpe.conf.d.patch provides a working NRPE configuration right out of the box (as soon as it is installed) and nrpe.xinetd.logrotate.patch pre-configures a working xinetd server entry.

Everything else packaged (patches and all) is carried forward unchanged from previous versions by their package maintainers.

Rebuild your external monitoring solutions:

Below shows the long way of rebuilding the RPMs from source.

# Perhaps make a directory and work within it so it's easy to find
# everything later
mkdir nagiosbuild
cd nagiosbuild
###
# Now we want to download all the requirements we need to build
###
# Prepare our mock environment
###
# Initialize Mock Environment
mock -v -r epel-6-x86_64 --init

# NRPE (v2.15)
wget --output-document=nrpe-2.15-1.el6.src.rpm http://repo.nuxref.com/centos/6/en/source/custom/nrpe-2.15-4.el6.nuxref.src.rpm
mock -v -r epel-6-x86_64 --copyin nrpe-2.15-1.el6.src.rpm /builddir/build

# NSCA (v2.7.2)
wget --output-document=nsca-2.7.2-9.el6.src.rpm http://repo.nuxref.com/centos/6/en/source/custom/nsca-2.7.2-10.el6.nuxref.src.rpm
mock -v -r epel-6-x86_64 --copyin nsca-2.7.2-9.el6.src.rpm /builddir/build

#######################
### THE SHORT WAY #####
#######################
# Now, the short way to rebuild everything is through these commands:
mock -v -r epel-6-x86_64 --resultdir=$(pwd)/results \
   --rebuild  nrpe-2.15-1.el6.src.rpm  nsca-2.7.2-9.el6.src.rpm

# You're done; You can find all of your rpms in a results directory
# in the same location you typed the above command in.  You can 
# alternatively rebuild everything the long way allowing you to
# inspect the content in more detail and even change it for your
# own liking

#######################
### THE LONG WAY  #####
#######################
# Install NRPE Dependencies
mock -v -r epel-6-x86_64 --install \
   autoconf automake libtool openssl-devel tcp_wrappers-devel

# Install NSCA Dependencies
mock -v -r epel-6-x86_64 --install \
   tcp_wrappers-devel libmcrypt-devel

###
# Build Stage
###
# Shell into our environment
mock -v -r epel-6-x86_64 --shell

# Change to our build directory
cd /builddir/build

# Install our SRPMS (within our mock jail)
rpm -Uvh nsca-*.src.rpm nrpe-*.src.rpm

# Now we'll have placed all our content in the SPECS and SOURCES
# directory (within /builddir/build).  Have a look to verify
# content if you like

# Build our RPMS
rpmbuild -ba SPECS/*.spec

# we're now done with our mock environment for now; Press Ctrl-D to
# exit or simply type exit on the command line of our virtual
# environment
exit

###
# Save our content that we built in the mock environment
###

#NRPE
mock -v -r epel-6-x86_64 --copyout /builddir/build/SRPMS/nrpe-2.15-1.el6.src.rpm .
mock -v -r epel-6-x86_64 --copyout /builddir/build/RPMS/nrpe-2.15-1.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout /builddir/build/RPMS/nagios-plugins-nrpe-2.15-1.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout /builddir/build/RPMS/nrpe-debuginfo-2.15-1.el6.x86_64.rpm .

#NSCA
mock -v -r epel-6-x86_64 --copyout /builddir/build/SRPMS/nsca-2.7.2-9.el6.src.rpm .
mock -v -r epel-6-x86_64 --copyout /builddir/build/RPMS/nsca-2.7.2-9.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout /builddir/build/RPMS/nsca-client-2.7.2-9.el6.x86_64.rpm .
mock -v -r epel-6-x86_64 --copyout /builddir/build/RPMS/nsca-debuginfo-2.7.2-9.el6.x86_64.rpm .

# *Note that all the commands that interact with mock I pass in 
# the -v which outputs a lot of verbose information. You don't
# have to supply it; but I like to see what is going on at times.

# **Note: You may receive this warning when calling the '--copyout'
# above:
#    WARNING: unable to delete selinux filesystems
#    (/tmp/mock-selinux-plugin.??????):
#    [Errno 1] Operation not permitted: '/tmp/mock-selinux-plugin.??????'
#
# This is totally okay and safe to ignore; the action you called
# still worked perfectly, so don't panic!

So where do I go from here?
NRPE and NSCA are both fantastic solutions that can allow you to tackle just about any monitoring problem you’ll ever have. In this blog I focus specifically on Linux, but these tools are available on Microsoft Windows as well. You can easily have one Nagios Server manage thousands of remote systems (of all operating system flavours). There are hundreds of fantastic tools to monitor all mainstream applications used today (databases, web servers, etc.). Even if you’re trying to support a custom application you wrote, as long as you can interface with it from the command line, Nagios can monitor it for you. You only need to write a small script with this in mind:

  • Your script should always exit with code 0 (zero) if everything is okay, 1 (one) if you want to raise a warning, and 2 (two) if you want to raise a critical alarm.
  • No matter what the exit code is, your script should also echo a short message so anyone reading the alert can easily tell what is going on.

There is enough information in this blog to do the rest for you (as far as creating a Nagios configuration entry for it goes). If you follow the 2 rules above, then everything should ‘just work’. It’s truly that easy and powerful; see the sketch below.
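
Purely as an illustration (the script name, the ‘myapp’ process it checks, and the NRPE command name are all hypothetical), here is what such a check script could look like on an NRPE-monitored server, along with how you might expose it:

# Write the (hypothetical) check script
cat << '_EOF' > /usr/local/bin/check_myapp.sh
#!/bin/bash
# Exit 0 (OK), 1 (Warning) or 2 (Critical) and always echo a message
if pgrep -x myapp >/dev/null 2>&1; then
   echo "OK: myapp is running"
   exit 0
else
   echo "CRITICAL: myapp is not running"
   exit 2
fi
_EOF
chmod +x /usr/local/bin/check_myapp.sh

# Expose it through NRPE (on the remote server) and reload xinetd
cat << _EOF > /etc/nrpe.d/check_myapp.cfg
command[check_myapp]=/usr/local/bin/check_myapp.sh
_EOF
service xinetd reload

On the Nagios side, the service definition would simply reference check_nrpe!check_myapp, just like the examples earlier; the same script works with NSCA too if you feed its output and exit code to send_nsca instead.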

How do I decide if I need NSCA or NRPE?

NRPE & NSCA High Level Overview
NRPE & NSCA High Level Overview

NRPE makes it Nagios’s responsibility to check your application, whereas NSCA makes it your application’s responsibility to report its status. Both have their pros and cons. NSCA could be considered the more secure approach because, at the end of the day, the only port that requires opening is the one on the Nagios server. NSCA does not use a completely secure connection (but there is encryption nonetheless). NRPE is very secure too and doesn’t require you to do much, since it simply works with the nagios-plugins already available; it literally just extends these existing local Nagios checks to remote ones. NSCA requires you to configure a cron job, or adjust your applications so that they frequently call the send_nsca command. NSCA can be a bit more difficult to set up, but it creates somewhat of a heartbeat between you and the system being monitored (which can be a good thing too). I pre-configured the NSCA server with a small tweak that will automatically set your application to a critical state if a send_nsca call is missed for an extended period of time; a sketch of how that looks is shown below.
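
The passive_service template shipped with my nsca package should already take care of that tweak for you, but conceptually it relies on Nagios’s service freshness checking; a hedged sketch (the service name, host name, and threshold are illustrative only) looks like this:

define service{
   use                  passive_service
   service_description  Heartbeat
   host_name            myhost.example.com
   check_freshness      1
   freshness_threshold  600    ; go critical if no result arrives within 10 minutes
}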

Always remember that the point of this blog is that you can use both at the same time, giving you total flexibility over all of the systems you need to monitor.

Credit

All of the custom packaging in this blog was done by me personally. I took the open source software available to me, rebuilt it into an easier-to-use solution, and decided to share it. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.

Sources

I referenced the following resources to make this blog possible:

  • The blog I wrote earlier that is recommended reading before this one: Configuring and Installing Nagios Core 4 on CentOS 6
  • Official NRPE download link; I used all of the official documentation to make the NRPE references on this blog possible.
  • A document identifying the common errors you might see and their resolution here.
  • Official NSCA download link; I used all of the official documentation to make the NSCA references on this blog possible.
  • The NRPE and NSCA images I’m reposting on this blog were taken straight from their official sites mentioned above.
  • Linux Packages Search (pkgs.org) was where I obtained the source RPMs as well as their old SPEC files; these served as my starting point before I expanded them.
  • A bit outdated, but a great (and simple) representation of how NSCA works with Nagios can be seen here.