Tag Archives: RPM

Pan: A Useful NewsReader for Linux

Introduction

Pan is a newsreader that has been around for ages. It allows you to sift through the massive clutter that Usenet has become, using a really fast interface loaded with tons of features!

Its development died off way back in 2012, but recently it has picked right back up again. Not only is this product feature rich and open source, but it's written purely in C++, which makes it incredibly lightweight (and thus very, very fast). Some of the subtle enhancements this product has seen in the past few months make it worthy of being in the spotlight again.

So What Can It Do?

  • Header Caching: Tell it the group(s) you want and how much of it you want to see, and it will save the headers it retrieves to a local cache file. This is awesome because you can now sift through this content offline.
    Cache Headers

  • Header Scoring: You can flag key aspects of articles with a score. By default every header retrieved has a score of zero (0) unless you start dabbling in this area.

    Anything that scores less than (or equal to) -9999 can be configured to not list itself at all. Some well set scores can greatly clean up your ability to locate content in groups.

    You can score content higher and/or lower based on the post's author, subject, size, age, etc. You can even apply scoring through regular expressions too!

    Scoring is very powerful when used properly! I’ll talk about it again a bit later in this blog once you’ve gotten set up. But if we were to apply scoring to the previous screenshot (above), it might look like this (all garbage cleaned up and content prioritized with color coding too):

    Header Scoring

  • Multiple Server Support: Got a block account? No problem, you can add it as a secondary server and only pull from it if the Primary one is unavailable.
  • NZB-File Support: The treasure maps of Usenet can be loaded into Pan too and downloaded through it. True automation of these comes through systems like NZBGet and SABnzbd, but it's still worth knowing that not only is this a newsreader, it can pass as a downloader as well!
  • Concurrent Connections: Like any great browser/downloader on any system, files are retrieved concurrently. This means you can just keep browsing and tagging content of interest seamlessly, without interruptions.
  • Header Compression Support: One of the new enhancements that surfaced with the renewed development of this project. This makes a world of difference when retrieving hundreds of thousands of headers from a Usenet group. Enabling this feature alone (if your Usenet provider supports it) will greatly reduce wait times!

Pan’s Disabled Features

The features page on Pan's website mentions a parent company (called ChimPanXi) that tries to sell this free product with added functionality. I guess the deal they have with the developers is to just disable a few features so that they can be re-enabled in the paid version (purely speculation)?

But since the (Pan) code is open source, the options are right there in front of us, just disabled. Quite honestly… of all this disabled functionality, only one is truly worth pointing out: Pan restricts you to just 4 allowable concurrent connections to your Usenet provider at a time. Here is a small patch I created which increases this number to 99. The build I provide in this blog already has this patch applied. Here are the rest of the missing features (with some of my comments as well); maybe some might see value in the others?

Pan's Missing Features

The Goods

Those hooked up to my repository are already set; just type the following:

# install the new version of Pan
yum install pan --enablerepo=nuxref

You can also reference this table for direct links:

Package: pan
Download: el7.rpm, fc22.rpm, fc23.rpm, fc24.rpm, fc25.rpm
Description: The Newsreader: This is the program that this blog focuses on.

Note: The source rpm can be obtained here which builds everything you see in the table above. It’s not required for the application to run, but might be useful for developers or those who want to inspect how I put the package together.

It’s also worth noting (again) that this build includes a small patch to increase the maximum allowable number of concurrent connections from 4 to 99.

Securing Your Connection

There is very little security built into Pan from a connection point of view. What little security is (normally) in place is built using GnuTLS. GnuTLS has a history of not keeping up with the security exploits and vulnerabilities that surface in encryption libraries. That doesn't make it unsafe; it just doesn't make it as reliable as its competition (OpenSSL and its crypto library). For this reason, the packages I provide are intentionally not built against GnuTLS.

It’s really not a problem at the end of the day because there are other ways of securing this connection (properly). The way I use (and recommend) is through Stunnel.

Stunnel allows you to take an unencrypted input (from Pan) and connect it to a secure (encrypted) endpoint (at your Usenet provider). The best thing about stunnel is that it links against your shared (OpenSSL) system libraries, libssl.so and libcrypto.so, which are actively maintained and patched! Basically, by attaching Pan to Stunnel you get the feature-rich usage of Pan and the ongoing (reliable) security of OpenSSL.

The following will get you set up with stunnel; you’ll want to be root before running the command below:

# Install stunnel (if it's not installed already)
# you'll need to be connected to either EPEL or NuxRef for this
# to work:
yum install stunnel

You can also reference this table for direct links:

Package: stunnel
Download: el7.rpm, fc22.rpm, fc23.rpm, fc24.rpm, fc25.rpm
Description: Secure Tunnel: for data encryption.

Note: This RPM is not required by PAN to run correctly. It does however offer you a safer and more secure method of encrypting your communication to (and from) your NNTP Server.
# You must have root permissions when setting up
# stunnel

# Create relay bound to local server only (semi-colons are for
# comments):
cat << _EOF > /etc/stunnel/stunnel.conf
; Use it for client mode
; This is the pass-through mode you need to encrypt
; your NNTP traffic:
client = yes
 
[nntp]
;
; --- IN ---
;
; local port to listen on (on this PC)
; You will configure PAN to connect here:
accept = 127.0.0.1:119

;
; --- OUT ---
;
; The Remote Usenet Server's (encrypted) connection to use:
; In this example, I'm just pointing to Astraweb, but you
; can provide any Usenet server you wish here. Just be sure
; to point it to their secure transport point!
connect = ssl.astraweb.com:563
_EOF

# The line below isn't strictly required, but it lets you revisit this
# blog entry and copy and paste these instructions again at a later
# time. It removes any previously added entries to prevent the
# creation of duplicate entries in your startup file.
# It's harmless to run at any point:
sed -i -e '/bin\/stunnel/d' /etc/rc.d/rc.local

# Configure stunnel to start after each boot
echo "# Start /usr/bin/stunnel on boot each time:" >> /etc/rc.d/rc.local
echo "/usr/bin/stunnel" >> /etc/rc.d/rc.local

# By default stunnel is configured to read 
# it's configuration from /etc/stunnel/stunnel.conf
# on startup:
stunnel
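
If you'd like to verify the tunnel before touching Pan, a quick sanity check might look like this (this assumes the ss utility from the iproute package, which ships with CentOS 7 by default):

# Confirm something is now listening on the local NNTP port (127.0.0.1:119):
ss -ltn | grep ':119'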

The next step is to update your Pan server configuration to point to your local server (localhost or 127.0.0.1) instead of the remote one you're accessing. Make sure to set the port to 119 too, like so:

Stunnel Pan Configuration


You’ll provide the same username and password you would have otherwise provided to your Usenet provider.

The end result is a secure connection between you and your Usenet provider like so:
Pan Setup With Stunnel

Scoring

Scoring articles can greatly ease your life when looking through all of the headers in front of you; it’s great for:

  • Eliminating SPAM
  • Filtering out potential malicious content (such as Trojans and Viruses)
  • Increasing the visibility of items of interest
  • Locating Authors of interest with ease

All scores can be optionally associated with a time limit too. When the limit expires, so does the score. This is useful when you only want to temporarily filter content. Otherwise the permanent scores will make up most of your configuration. To add a score, simply click Articles > Add a Scoring Rule…

Add a Scoring Rule

Here is an example of a rule you might add; this one greatly reduces the score of all entries that have potentially dangerous file extensions in the subject line:

Block Potentially Malicious Content


Pan’s built in filter field allows you to sift through all of the articles you found with keywords. Pairing this functionality with the scoring one really shows off the power of Pan.

All created scores are kept in ~/.pan2/Scores so don’t worry if you mess one up. You can just as easily open this file and fix it. Any manual changes to this file will however require you to exit out of Pan (if it’s open) and restart it.

Here are just a few entries of what you might have in your Scores file:

%BOS
% Greatly reduce score of potentially malicious content
[alt.bin*]
Score:: -9999
Subject: .*\.(exe|bat|vbs|cpl|msi|scr|vb(script)?|ws(f|h))[^A-Za-z0-9].*
%EOS

%BOS
% Moderately increase the score of compressed content
[alt.bin*]
Score:: 2500
Subject: .*\.(z(ip|[0-9]{2})|r(ar|[0-9]{2})|7z|iso)[^A-Za-z0-9]([0-9]{3}[^A-Za-z0-9])?.*
%EOS

%BOS
% Very slightly decrease the score of PAR content
% This allows it to not quite have the same spotlight as
% the item it matches up against. If it were a compressed file
% it would already have +2500 from the previous score entry
% identified above.  These will just sit at +2400 instead.
[alt.bin*]
Score:: -100
Subject: .*(\.vol[0-9]+\+[0-9]+)?\.(par2|sfv)[^A-Za-z0-9].*
%EOS

%BOS
% Very slightly increase the score of NZB-Files
[alt.bin*]
Score:: 250
Subject: .*\.(nzb)[^A-Za-z0-9].*
%EOS 

%BOS
% Mildly drop the score of cross-posted content
[alt.bin*]
Score:: -750
Xref: (.*:){2} % cross-posted to 2 or more groups 
%EOS

Wrapping It Up

I'm certainly not asking anyone to change from their existing system if it works for them. What I am pointing out, though, is that Pan is completely free, it's open source, and the features it offers are comparable to (if not better than) those of its competition. Although it works great on Linux, it also works on many other platforms, such as Windows and macOS.

It might not have a beautiful interface, but it wasn't built to fill your system's memory with bloated eye candy. It was built to be fast and effective… and truly, it really is.

The newer versions coming out are really great! If you haven't given it a try since its dated releases, you really should! If you're interested in seeing how Usenet is structured, then this is also a great tool to learn with. If you run an indexer (such as newznab or one of its many forks), you can practice your regular expressions (regexes) using Pan. For an indexer admin, this tool is especially great for debugging your regexes!

Credit

This blog took me a very long time to put together and test! The repository hosting alone accommodates all my blog entries up to this date. All of the custom packaging described here was done by me personally. I took the open source available to me and rebuilt it to make it an easier solution and decided to share it. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.


AWStats Setup on CentOS

Introduction

AWStats is a great tool for gathering statistics about your website. It acquires everything it needs to know about your site strictly through your website's log files. AWStats scans these logs line by line and presents the results in a fantastic report. This report can really help you make strategic decisions going forward, as well as spot any anomalies that might be taking place. The tool is smart enough to only scan the newer log entries (since it last ran), allowing you to run it again and again (as often as you want). Thus, once you set this tool up to run daily (or even hourly), you'll have detailed statistics about your website that you can call upon anytime.

AWStats Report

AWStats collects information such as:

  • Who is visiting your site.
  • How many visitors you're getting daily.
  • Where they're coming from (did a site link to you?)
  • Where the visitor is from (geographical location).
  • … and on and on.

The presentation of these collected statistics can be either via a website (HTML), XML and/or a PDF file. The PDF is especially useful since it combines all of the multiple HTML pages (as presented) into one great big report with a table of contents and hyperlinks throughout it! The PDF is also really easy to navigate and pass along to others who might be interested.

    Why Use AWStats over Google Analytics?

    Google Analytics Inaccuracy

    The number one reason is that AWStats is much, much more accurate! AWStats also just works without any changes to your website (literally, none at all). Google Analytics, however, requires you to add a small piece of JavaScript to every web page you want to track. Every time this tiny bit of JavaScript code executes, it passes the information along to Google. The problem is: if that little snippet of JavaScript doesn't execute, then Google doesn't track that user (and you'll never know), because it just won't get reported.

    It's really easy to prevent this chunk of JavaScript from running too; you just have to install something like Adblock Plus, Disconnect and/or uBlock into your web browser (such as Firefox or Chrome). These plugins specifically block these tracking techniques and eliminate most (if not all) advertising the website might have too.

    This doesn't mean that online analytic tools (like Google Analytics) are not good; no, not at all! It's just important to understand that they can't (and truly don't) report everything that's going on with your website and the traffic generated from it.

    Another point worth mentioning is that Google Analytics cannot monitor and report statistics on traffic generated by third-party tools. Therefore you can't use it to monitor any RESTful API services, because the programs accessing them will never call these JavaScript snippets of code.

    It’s worth pointing out now that if you use AWStats, you’ll have the full picture! You’ll be able to easily identify any anomalies and detect certain forms of malicious intent! You’ll be able to monitor all of your internal (web based) services you may manage. From the public standpoint, you might be very surprised at how much more traffic your website is getting despite what online analytic tools will tell you!

    Let’s Get Started

    First you'll want to install the proper packages. You should hook up to my repository and the EPEL repository as well! The EPEL repository hosts AWStats too, but mine is a newer version. We still need the EPEL repository for its GeoIP packages, since they get updated more often there:

    # CentOS 7 users can connect to EPEL this way:
    rpm -Uhi https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    
    # Similarly, you can hook up to my repository at http://nuxref.com
    # but here is a quick way of doing it (for CentOS/RedHat 7):
    rpm -Uhi http://repo.nuxref.com/centos/7/en/x86_64/custom/nuxref-release-1.0.0-4.el7.nuxref.noarch.rpm
    

    You should be good to go now; the following installs AWStats and a few extra tools to get the best out of it:

    # install awstats
    # install htmldoc too because it'll allow you to create a pdf
    # install geoip-geolite for the ability to track the IPs
    #        to countries
    # install perl-Geo-IP to look up the IP Addresses
    yum install awstats htmldoc geoip-geolite perl-Geo-IP
    
    

    AWStats in a Nutshell

    The steps below require the environment defined here. Obviously you'll want to change these environment variables to suit your own needs:

    # First define our website as a variable.
    # We will use this value to track and store in an
    # organized structure.
    
    # Those who host other websites for people can
    # change this value and re-run virtually everything
    # below to get stats for that site too!
    WEBSITE=nuxref.com
    
    # AWSTATS Variable Data
    DATADIR=/var/lib/awstats/$WEBSITE
    
    

    Configuring AWStats: Step 1 of 3

    AWStats works from configuration files you create in /etc/awstats/. But it also needs a directory it can work within (we use /var/lib/awstats/). I’ve provided documentation around each line so you know what’s going on:

    # Make sure our environment variables are defined
    # WEBSITE and DATADIR
    
    # First we need to setup our DATADIR; this is where
    # all our statistics and generated data will be placed
    # into:
    [ ! -d $DATADIR/static ] && \
        mkdir -p $DATADIR/static
    ln -snf /usr/share/awstats/wwwroot/icon \
       $DATADIR/static/icon
    ln -snf /usr/share/awstats/wwwroot/cgi-bin \
       $DATADIR/static/cgi-bin
    
    # Create a configuration file using our website
    # based on the awstats.model.conf example file that
    # ships with AWStats
    sed -e "s|localhost\.localdomain|$WEBSITE|g" \
    	/etc/awstats/awstats.model.conf > \
    		/etc/awstats/awstats.$WEBSITE.conf
    
    ########################################
    # Now update our new configuration
    ########################################
    # Update the LogFile with our access.log file we'll
    # reference. This path doesn't exist yet but
    # we'll be creating it soon enough; leave this entry
    # untouched (don't change it to your real log path!):
    sed -i -e "s|^\(LogFile\)=.*$|\1=\"$DATADIR/access.log\"|g" \
       /etc/awstats/awstats.$WEBSITE.conf
    
    # Disable DNS (for speed mostly)
    sed -i -e "s|^\(DNSLookup\)=.*$|\1=0|g" \
       /etc/awstats/awstats.$WEBSITE.conf
    
    # For PDF Generation we need to update the relative
    # paths for the icons.
    sed -i -e "s|^\(DirIcons\)=.*$|\1=\"icon\"|g" \
       /etc/awstats/awstats.$WEBSITE.conf
    sed -i -e "s|^\(DirCgi\)=.*$|\1=\"cgi-bin\"|g" \
       /etc/awstats/awstats.$WEBSITE.conf
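
    Before moving on, it doesn't hurt to confirm that the sed commands above actually took effect; a quick, optional peek at the directives we just changed:

    # Optional: verify the directives we just modified
    grep -E '^(LogFile|DNSLookup|DirIcons|DirCgi)=' \
       /etc/awstats/awstats.$WEBSITE.conf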
    
    

    Optionally Configuring GeoIP Updates

    The GeoLite data gives us a great set of (meta) data we can reference when looking up the IP addresses of people who visited our site and determining what part of the world they came from. This information is fantastic when putting together web traffic statistics the way AWStats does.

    First we want to configure AWStats to use the GEO IP Plugin:

    # Now configure our GEOIP Setup
    sed -i -e '/^LoadPlugin=.*/d' /etc/awstats/awstats.$WEBSITE.conf
    cat << _EOF >> /etc/awstats/awstats.$WEBSITE.conf
    LoadPlugin="geoip GEOIP_STANDARD /usr/share/GeoIP/GeoIP.dat"
    LoadPlugin="geoip_city_maxmind GEOIP_STANDARD /usr/share/GeoIP/GeoIPCity.dat"
    _EOF
    

    Next we want to set up our GEO IP to update itself with the latest meta data for us automatically (so we don’t have to worry about it):

    # downloads all of the latest GEO IP content to
    # /usr/share/GeoIP with this simple command:
    geoipupdate
    
    # This IP information changes often; so the next
    # thing you want to do is create a cronjob to have
    # this tool fetch regular updates automatically for
    # us to keep the GEO IP Content fresh and up to date!
    cat << _EOF > /etc/cron.d/geoipdate
    0 12 * * 3 root /usr/bin/geoipupdate &>/dev/null
    _EOF
    

    Apache Users: Step 2a of 3

    AWStats depends on log files to build its statistics from, so it's important we point it at the right directory. Apache logs have been pretty much standardized, and AWStats just works with them. If your web page is being hosted through Apache, then your log files are most likely being placed in /var/log/httpd. If you're using NginX (and not Apache), you can skip over this section and go to Step 2b of 3 instead.

    Make sure AWStats knows it’s dealing with Apache log files (make sure you’ve still got the $WEBSITE variable defined from above):

    # Make sure our environment variables are defined
    # WEBSITE and DATADIR
    ########################################
    # Apache Users Should run The Following
    ########################################
    # Now if you're logs are created from Apache you
    # need to run the following:
    # Log Format (Type 1 is for Apache)
    sed -i -e "s|^\(LogFormat\)=.*$|\1=1|g" \
       /etc/awstats/awstats.$WEBSITE.conf
    
    

    Now what we want to do is take all of the log files associated with our website in /var/log/httpd and build one great big (sorted) log file we can get all of our statistics out of:

    # logresolvmerge.pl is a fantastic tool that ships with
    # awstats and merges (and sorts) all of our logs. We
    # place the output into our $DATADIR (which we declared
    # earlier):
    /usr/share/awstats/tools/logresolvemerge.pl \
       /var/log/httpd/access.log \
       /var/log/httpd/access.log-????????.gz \
        > $DATADIR/access.log
    
    

    NginX Users: Step 2b of 3

    NginX logs have a slightly different format than Apache logs and therefore require a slightly different configuration to work. If your web page is being hosted through NginX, then your log files are most likely being placed in /var/log/nginx. If you're using Apache (and not NginX), you can skip over this section as long as you've already done Step 2a of 3.

    Make sure AWStats knows it’s dealing with NginX log files otherwise it won’t be able to interpret them. Also be sure to have your $WEBSITE variable defined:

    # Make sure our environment variables are defined
    # WEBSITE and DATADIR
    ########################################
    # NginX Users Should run The Following
    ########################################
    # If you're using NginX, you'll want to adjust
    # your awstat LogFormat entry as follows:
    sed -i -e "s|^\(LogFormat\)=.*$|\1=\"%host %other %logname %time1 %methodurl %code %bytesd %refererquot %uaquot\"|g" \
       /etc/awstats/awstats.$WEBSITE.conf
    

    Now we take all of the log files associated with our website in /var/log/nginx and build one great big (sorted) log file we can get all of our statistics out of:

    # logresolvmerge.pl is a fantastic tool that ships with
    # awstats and merges (and sorts) all of our logs. We
    # place the output into our $DATADIR (which we declared
    # earlier):
    /usr/share/awstats/tools/logresolvemerge.pl \
       /var/log/nginx/access.log \
       /var/log/nginx/access.log-????????.gz \
        > $DATADIR/access.log
    

    Statistic Generation: Step 3 of 3

    At this point we have all the information we need; the following generates (and/or updates) our statistics:

    # Make sure our environment variables are defined
    # WEBSITE and DATADIR
    ########################################
    # (Create) and/or Update our Stats
    ########################################
    /usr/share/awstats/wwwroot/cgi-bin/awstats.pl \
       -config=$WEBSITE
    
    # The following builds us a PDF file containing all
    # of our statistics in addition to a website we can
    # optionally host if we want.
    # The following would allow you to gather statistics for
    # a given year:
    #   /usr/share/awstats/tools/awstats_buildstaticpages.pl \
    #      -config=$WEBSITE -buildpdf \
    #      -month=all -year=$(date +'%Y') \
    #      -dir=$DATADIR/static \
    #      -buildpdf=/usr/bin/htmldoc
    
    # This will build statistics with all the information we have:
    /usr/share/awstats/tools/awstats_buildstaticpages.pl \
       -config=$WEBSITE -buildpdf \
       -dir=$DATADIR/static \
       -buildpdf=/usr/bin/htmldoc
    
    # - The main website will appear as:
    #      $DATADIR/static/awstats.$WEBSITE.html
    #    But this 'main' website links to several other websites
    #    that can also all be found in the $DATADIR/static
    #    directory
    # - The pdf file will appear as:
    #      $DATADIR/static/awstats.$WEBSITE.pdf
    

    Consider throwing the above into a script file and having it run from a cron job!
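
    A minimal sketch of such a wrapper is shown below. It simply strings together the commands from the previous steps; the script name, the schedule, and the Apache log layout are assumptions you should adjust to your own setup:

    #!/bin/bash
    # Hypothetical wrapper: /usr/local/bin/awstats-update.sh
    # Merges the logs and regenerates the statistics for one site.
    WEBSITE=nuxref.com
    DATADIR=/var/lib/awstats/$WEBSITE

    # Merge and sort the web server logs (Apache layout shown here):
    /usr/share/awstats/tools/logresolvemerge.pl \
       /var/log/httpd/access.log \
       /var/log/httpd/access.log-????????.gz \
        > $DATADIR/access.log

    # Update our statistics:
    /usr/share/awstats/wwwroot/cgi-bin/awstats.pl \
       -config=$WEBSITE

    # Rebuild the static pages and the PDF report:
    /usr/share/awstats/tools/awstats_buildstaticpages.pl \
       -config=$WEBSITE -buildpdf \
       -dir=$DATADIR/static \
       -buildpdf=/usr/bin/htmldoc

    Then make it executable and schedule it (as root):

    # The schedule (daily at 03:30) is just an example:
    chmod +x /usr/local/bin/awstats-update.sh
    cat << _EOF > /etc/cron.d/awstats-update
    30 3 * * * root /usr/local/bin/awstats-update.sh &>/dev/null
    _EOF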

    Hosting The Statistics

    This section is purely optional, but here are some simple configurations you can use if you want to access these generated statistics from your browser.

    Note: I intentionally keep things simple in this section. AWStats can be configured so that you can update your statistics via its very own website (see the AllowToUpdateStatsFromBrowser directive in the site configuration). However, I don't recommend this option and therefore do not document it below.

    NginX

    A simple NginX configuration might look like this:

    # Make sure our environment variables are defined
    # WEBSITE and DATADIR
    cat << _EOF > /etc/nginx/default.d/awstats.$WEBSITE.conf
       # Visit your statistics by browsing to:
       # if WEBSITE was equal nuxref.com, you'd visit the stats:
       # http://localhost/stats/nuxref.com/
       location /stats/$WEBSITE/ {
          # DATADIR already includes the website name, so the static
          # content we generated earlier lives directly in DATADIR/static:
          alias   $DATADIR/static/;
          index  awstats.$WEBSITE.html;

          ## Set 1.2.3.4 to your own IP address and uncomment
          ## the entries below to 'only' allow yourself access to
          ## these stats:
          # allow 1.2.3.4/32;
          # deny all;
       }

       # nginx won't accept a nested location that falls outside its
       # parent's prefix, so the shared css/icon paths get their own
       # top-level locations instead:
       location /stats/css/ {
           alias /usr/share/awstats/wwwroot/css/;
       }

       location /stats/icon/ {
           alias /usr/share/awstats/wwwroot/icon/;
       }
    _EOF
    

    Don’t forget to reload NginX so it takes on your new configuration (and makes that statistics page visible):

    # Reload NginX
    systemctl reload nginx.service
    

    Apache

    A simple Apache configuration might look like this:

    # Make sure our environment variables are defined
    # WEBSITE and DATADIR
    cat << _EOF > /etc/httpd/conf.d/awstats.$WEBSITE.conf
       # Visit your statistics by browsing to:
       # if WEBSITE was equal nuxref.com, you'd visit the stats:
       # http://localhost/stats/nuxref.com/
       Alias /stats/$WEBSITE/ "$DATADIR/static/"
       <Directory "$DATADIR/static/">
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
    
          ## Set 1.2.3.4 to your own IP address and uncomment
          ## the entries below to 'only' allow yourself access to
          ## these stats:
          # Order deny,allow
          # Deny from all
          # Allow from 1.2.3.4/255.255.255.255
       </Directory>
    _EOF
    

    Don’t forget to reload Apache so it takes on your new configuration (and makes that statistics page visible):

    # Reload Apache
    systemctl reload httpd.service
    

    Credit

    This blog took me a long time to put together and test! The repository hosting alone accommodates all my blog entries up to this date. I took the open source available to me and rebuilt it to make it an easier solution and decided to share it. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.


NRPE for Nagios Core on CentOS 7.x

Introduction

In continuation of part 1 and part 2 of this blog series… NRPE (Nagios Remote Plugin Executor) is yet another client/server plugin for Nagios (but it can work with other applications too). Unlike NRDP, Nagios isn't required for NRPE to function, which means you can harness the power of this tool and apply it to many other applications as well. It does, however, work 'very' well with Nagios and was originally designed for it.

If you read my blog on the NRDP protocol, then you're already familiar with its push architecture, where the applications are responsible for reporting in their status. NRPE works in the reverse fashion: it requires you to pull the information from the Application Server instead. The status-checking responsibility falls on the Nagios Server (rather than on the applications it monitors).

NRPE Overview


The diagram above illustrates how the paradigm works.

  • The A represents the Application Server. There is no limit to the number of these guys.
  • The N represents the Nagios Server. You’ll only ever need one Nagios server.

NRPE Query Overview

  1. The Nagios Server will make periodic status checks to the Application Server via the NRPE Client (check_nrpe).
  2. The Application Server will analyze the request it received and perform the status check (locally).
  3. When the check completes, it will pass the results back to the Nagios Server (via the same connection the NRPE Client started).
  4. Nagios will take the check_nrpe results and display them accordingly. If the check_nrpe tool can't establish a connection to the NRPE Server (running on the Application Server), then it will report a failure.

    If the call to the check_nrpe tool takes too long to get a response (or a result) back, it will be reported as a (timeout) failure. By default, check_nrpe will wait up to a maximum of 10 seconds before it times out and gives up. You can change this if you find it too short (or too long).
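
    For example, once the check_nrpe client (covered below) is installed, you can raise the limit with its -t switch; the host, token, and 30 second value here are just placeholders:

    # Wait up to 30 seconds (instead of the default 10) before giving up:
    /usr/lib64/nagios/plugins/check_nrpe -H 192.168.1.2 -c check_load -t 30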

Here is what you’re getting with this blog and the packaged rpms that go with it:

  • Nagios Core v4.x: updated RPM packaging which I continued to maintain and carry forward from my previous blogs.
    Note: You will need this installed in order to monitor applications with NRPE. See part 1 of this blog series for more information if you don’t already have it set up.
  • NRPE (Nagios Remote Plugin Executor) v3.x: My custom RPM packaging bringing NRPE v3.x to CentOS 7.x for the first time, especially since I couldn't find it available anywhere else (at the time of this blog entry). I had to make a few modifications to it so that it would be easier to use in our environment, such as:
    • I forwarded the useful patches from the old NRPE v2.x branch that were still applicable to the new NRPE v3.x branch.
    • I patched the SystemD startup script to work with CentOS/Red Hat systems.
    • A sudoers file is ready to go for those who want to use sudo with their NRPE remote calls.
    • There is a firewall configuration ready to use with FirewallD.

    Best of all, with my RPMs, you can run SELinux in full Enforcing mode for that extra peace of mind from a security standpoint!

    The Goods

    For those who really don't care about the details and just want to jump right in with the product: here you go!

    You can download the packages manually if you choose, or reference them using my repository:

    Package: nrpe
    Download: el7.rpm
    Description: The NRPE Server. This should be installed on any server you want to monitor. It allows the machine to respond to requests sent to it via the check_nrpe (Nagios) plugin.

    Package: nrpe-selinux
    Download: el7.rpm
    Description: An SELinux add-on package that allows the NRPE Server to operate under Enforcing Mode.
    Note: This RPM is not required by the NRPE server to run correctly.

    Package: nagios-plugins-nrpe
    Download: el7.rpm
    Description: Our NRPE Client; this is the Nagios plugin used to request information from the NRPE Server(s). You'll only need to install this on the Nagios server (for the purpose of this blog). The RPM provides a small tool called check_nrpe that gets installed into the Nagios Plugin Directory (/usr/lib64/nagios/plugins).

    Through some simple configuration, Nagios can use the check_nrpe tool to monitor anything you want, just as long as the NRPE server is (installed and) running at the other end.

    Note: The source rpm can be obtained here which builds everything you see in the table above. It’s not required for the application to run, but might be useful for developers or those who want to inspect how I put the package together.

    NRPE Server Side Configuration

    The NRPE Server is usually installed onto all of the Application Servers you want Nagios to monitor remotely. It provides a means of accepting a request (such as "what is your system load like?"), handling it locally, and returning the response.

    Assuming you hooked up to my repository here, the NRPE server can be easily installed with the following command:

    # Install NRPE (Server) on your Application Server
    yum install nrpe nrpe-selinux
    

    NRPE communicates over TCP port 5666, which means you may need to open this port in your firewall first to allow remote connections from Nagios. The commands below open up the service to everyone attached to your network; you should only run them if your Application Server will be running on a local private network:

    # Enable NRPE port (5666)
    firewall-cmd --permanent --add-service=nrpe
    firewall-cmd --add-service=nrpe
    

    If you’re opening this up to the internet, then you might want to just open the (NRPE) port exclusively for the Nagios Server. The following presumes you know the IP of the Nagios Server and will open access to ‘JUST’ that system:

    # Assuming 5.6.7.8 belongs to the Nagios Server you intend to
    # allow to monitor you; you might do the following:
    firewall-cmd --permanent --zone=public \
       --add-rich-rule='\
           rule family="ipv4" \
           source address="5.6.7.8/32" \
           port protocol="tcp" \
           port="5666" \
           accept'
    

    You’ll want to have a look at your NRPE Configuration as you will probably have to update a few lines. See /etc/nagios/nrpe.cfg and have a look for the following directives:

    Directive: allowed_hosts
    Details: This is added security for NRPE, but it can come back to haunt you if you ignore it (as nothing will work). You should specify the IP address of your Nagios server here and not allow anyone else! For example, if your Nagios server were 6.7.8.9, then you would put 6.7.8.9/32 here.

    Directive: dont_blame_nrpe
    Details: This allows you to pass arguments into your remote checks. I personally think this is awesome, but there is no question that, depending on the checks that accept arguments, it could expose content from your system you wouldn't have otherwise wanted to share. This is especially the case if you grant NRPE sudoers permission (discussed a bit lower). Set this value to one (1) to enable argument passing and zero (0) to disable it.

    Directive: allow_bash_command_substitution
    Details: Bash substitution is something like $(hostname) (which might return something like node01.myserver). Set this value to one (1) to enable bash command substitution and zero (0) to disable it. I don't personally use this and therefore have it disabled on my system.
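
    Putting those directives together, the relevant lines of /etc/nagios/nrpe.cfg might end up looking something like this (the Nagios server address is just an example, and the values shown keep both optional features disabled):

    # /etc/nagios/nrpe.cfg (excerpt)
    # Only localhost and our Nagios server (6.7.8.9 in this example) may connect:
    allowed_hosts=127.0.0.1,6.7.8.9/32
    # Leave remote argument passing disabled:
    dont_blame_nrpe=0
    # Leave bash command substitution disabled:
    allow_bash_command_substitution=0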

    Setting up NRPE Tokens

    Once your server is set up, there is one last thing you need to do. You need to associate tokens with some of the status checks you want to do. You see, NRPE won't just execute any program you tell it to; it will only execute programs in a specific way that you've allowed for. This is purely for security, and it makes sense to do so! It's really not that complicated; consider a token/command mapping like this (as an example):

    Token: check_load
    Command: /usr/lib64/nagios/plugins/check_load -w 10 -c 20
    (This checks the system load: warning if greater than 10.0, critical if greater than 20.0.)

    You pass this information into NRPE by creating a .cfg file in /etc/nrpe.d (assuming you're using my RPMs) with the syntax:

    command[token]=/path/to/mapped/command

    So with respect to the example we started; it would look like this:

    command[check_load]=/usr/lib64/nagios/plugins/check_load -w 10 -c 20

    You can specify as many tokens as you like (each one has to be unique from the others). If you have a look in /etc/nrpe.d/, you'll see one called common_checks.cfg, which has a handful of useful commands to start you off. You can add to this file or start another one if you like (in the same directory, with a .cfg extension). Each time you make changes to this configuration (or any other), you'll need to signal NRPE to have it load the new changes you provided:

    # Restart our NRPE Service
    systemctl restart nrpe.service
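
    For example, a small drop-in file of your own might look like this (the file name, the check_disk plugin, and its thresholds are purely illustrative; check_disk ships with the standard nagios-plugins packages). Remember to restart NRPE afterwards, as shown above:

    cat << '_EOF' > /etc/nrpe.d/my_checks.cfg
    # warn at 20% free space, critical at 10% free space on /
    command[check_disk_root]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /
    _EOF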
    

    Here is a conceptual diagram that will help illustrate what was just explained here using the check_load example above:

    NRPE check_load Example

    Sudoer’s Permission

    Substitute User Do (or sudo) allows you to run commands as other users. Most commonly, people use sudo to elevate their permissions to the root (superuser) privilege to execute a command. When the command has completed, they return back to their normal privileges.

    Some actions require you to have higher system authority to get anything good from them. This includes retrieving certain kinds of system/status information. For example, you can't check how much mail is spooled and ready for remote delivery (on a Mail Server) unless you reduce the privileges on all of your stored mail (making it accessible by anyone); not a very good idea! Alternatively, you can use sudo to grant one user permission to run just a command that fetches the number of mail items queued; this is a much better approach! Elevating a user's permissions for just "special status checking only" commands isn't so much of a risk. NRPE can be configured to have superuser permissions for its status checks, which can allow you to monitor a lot more things! Here is how to do it:

    # First make sure that the sudoer's configuration doesn't
    # require tty (teletype terminals) endpoints to use. You
    # do so by commenting out the line that reads 'Defaults requiretty'
    # in the /etc/sudoers file (access it with the command visudo)
    # You can also do this with the below one-liner too
    sed -i -e 's/^\(Defaults[ \t]\+requiretty.*\)$/#\1/g' \
             /etc/sudoers
    
    # By doing this, you allow pseudo-teletype (pty) endpoints
    # to use the sudo command too. NRPE would be a pty service
    # for example since it's not an actual person running the
    # command.
    
    # Update our configuration to use the sudo command
    sed -i -e "s|^[ \t#]*\(command_prefix\)=.*|\1=/usr/bin/sudo|g" \
          /etc/nagios/nrpe.cfg
    
    # Restart our server if it's running
    systemctl restart nrpe.service
    

    SELinux users will want to also do this:

    # Allow NRPE/Nagios calls to run /usr/bin/sudo
    setsebool -P nagios_run_sudo on
    

    NRPE Client Side Configuration

    The client side simply consists of a small application (called check_nrpe) which connects remotely to any NRPE Server you tell it to and hands it a token for processing (provided your NRPE Server is configured correctly). The NRPE Server will take this token and execute a command that was associated with it and return the results back to you.

    # You'll want to be connected to my repositories for this to work:
    # See: http://nuxref.com/repo for more information
    # Install NRPE on the same server running Nagios
    yum install nagios-plugins-nrpe
    

    There is configuration already in place for you if you're using my RPMs; it is located in /etc/nagios/conf.d and called nrpe.cfg. But feel free to create your own (or overwrite it) like so:

    # Note the quoted '_EOF' below: it stops the shell from expanding the
    # $HOSTADDRESS$/$ARG1$ macros before Nagios ever sees them.
    cat << '_EOF' > /etc/nagios/conf.d/nrpe.cfg
    ; simple wrapper to check_nrpe for remote calls to NRPE servers
    define command{
       command_name check_nrpe
       command_line /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    }
    
    ; check_nrpe with arguments enabled (enable the dont_blame_nrpe option for
    ; this to work properly; otherwise simply don't use it)
    define command{
       command_name check_nrpe_args
       command_line /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
    }
    _EOF
    

    You can now send command tokens to servers running NRPE via Nagios. A Nagios configuration might look like this:

    cat << '_EOF' > /etc/nagios/conf.d/my.application.server01.cfg
    ; first we define a host that we want to monitor.
    ; If you already have a host configured; you can skip this part
    define host{
    ; Name of host template to use. This host definition will inherit
    ; all variables that are defined in (or inherited by) the
    ; linux-server host template definition. You can find this
    ; in /etc/nagios/objects/templates.cfg if you're interested
            use                     linux-server
    
    ; Now we define the server we're going to monitor
            host_name               my-application-server01.nuxref.com
            alias                   my-application-server01.nuxref.com
            address                 192.168.1.2
    
            statusmap_image         redhat.png
            icon_image              redhat.png
            icon_image_alt          CentOS 7.x
    }
    
    ; Now we want to define our monitoring service that talks to our
    ; Application server with NRPE installed on it:
    define service{
    ; Name of service template to use. This service definition will inherit
    ; all variables that are defined in (or inherited by) the
    ; local-service service template definition. You can find this
    ; in /etc/nagios/objects/templates.cfg if you're interested
    	use				local-service
            host_name                       my-application-server01.nuxref.com
    	service_description		Our System Load
    
    ; The Exclamation mark (!) lets nagios know that check_load is the argument
    ; we want to pass into our check_nrpe command we defined earlier (as $ARG1$)
    	check_command			check_nrpe!check_load
    }
    _EOF
    

    If you want to make sure it’s going to work, before you go any further you can run the same command you just told Nagios to do (above):

    # The syntax 'check_nrpe!check_load' gets translated to:
    #  /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    #
    # Which we can further translate (for testing purposes) to:
    /usr/lib64/nagios/plugins/check_nrpe -H 192.168.1.2 -c check_load
    

    If everything worked okay you should see something similar to the output:

    OK – load average: 0.00, 0.00, 0.00|load1=0.000;10.000;20.000;0; load5=0.000;10.000;20.000;0; load15=0.000;10.000;20.000;0

    You’ll want to reload Nagios to pick up on your new configuration so it can start calling this command too:

    # But before you reload it; it doesn't hurt to just check and make
    # sure your configuration is okay; You can test it out with:
    nagios -v /etc/nagios/nagios.cfg
    
    # correct any errors that display and rerun the above command until
    # everything checks out okay!
    
    # Now reload Nagios so it reads in it's new configuration:
    systemctl reload nagios.service
    

    NRPE vs. NRDP

    Which one should you use? Both NRDP and NRPE have their pros and cons.

    Benefits of using NRPE:

    • You have central control over the checking periods of an application.
    • You don't need Nagios to use it. If used properly, it can provide a great way to access remote servers for information and even execute administrative and maintenance commands.

    Benefits of using NRDP:

    • You only ever need to open 1 TCP port; from a security standpoint, this is awesome!
    • You only need to manage the Nagios configuration when a new server is added.
    • It can send multiple status messages in one single (TCP) transaction.
    • It can reflect a remote application status change immediately, as opposed to NRPE, which won't reflect the change until the next status check is performed.

    It should be worth noting that nothing is stopping you from using both NRDP and NRPE at the same time. You might choose an NRDP strategy for remote systems while choosing an NRPE strategy for all your systems residing in your private network. NRPE works over the internet as well, but just exercise caution and be sure to have SELinux running in Enforcing mode on all of the server end points.

    Credit

    This blog took me a very (,very) long time to put together and test! The repository hosting alone accommodates all my blog entries up to this date. All of the custom packaging described here was done by me personally. I took the open source available to me and rebuilt it to make it an easier solution and decided to share it. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.
