Category Archives: Linux Tools

Pan: A Useful NewsReader for Linux

Introduction

Pan is a newsreader that has been around for ages. It lets you sift through the massive clutter that Usenet has become, with a really fast interface loaded with tons of features!

Its development died off back in 2012, but recently it has picked right back up again. Not only is this product feature-rich and open source, it’s written purely in C++, which makes it incredibly lightweight (and thus very, very fast). Some of the subtle enhancements the product has seen in the past few months make it worthy of the spotlight again.

So What Can It Do?

  • Header Caching: Tell it the group(s) you want and how much of them you want to see, and it will download the headers it retrieves to a local cache file. This is awesome because you can then sift through this content offline.
    Cache Headers

  • Header Scoring: You can flag key aspects of articles with a score. By default every header retrieved has a score of zero (0) unless you start dabbling in this area.

    Anything that scores at or below -9999 can be configured to not show up in the listing at all. A few well-placed scores can greatly clean up your ability to locate content in groups.

    You can score content higher and/or lower based on the post’s author, subject, size, age, etc. You can even apply scoring through regular expressions too!

    Scoring is very powerful when used properly! I’ll talk about it again a bit later in this blog once you’ve gotten set up. But if we were to apply scoring to the previous screenshot (above), it might look like this (all garbage cleaned up and content prioritized with color coding too):

    Header Scoring

  • Multiple Server Support: Got a block account? No problem, you can add it as a secondary server and only pull from it if the Primary one is unavailable.
  • NZB-File Support: The treasure maps of Usenet can be loaded into Pan and downloaded through it too. True automation of these comes through systems like NZBGet and SABnzbd, but it’s still worth knowing that not only is this a newsreader, it can pass as a downloader as well!
  • Concurrent Connections: Like any great browser/downloader on any system, files are retrieved concurrently. This means that you can just keep browsing and tagging content of interest seamlessly, without interruptions.
  • Header Compression Support: One of the new enhancements that surfaced with the renewed development of this project. This makes a world of difference when retrieving hundreds of thousands of headers from a Usenet group. Enabling this feature alone (if your Usenet provider supports it) will greatly reduce wait times!

Pan’s Disabled Features

The features page on Pan’s website mentions a parent company (called ChimPanXi) that tries to sell this free product with added functionality. I guess the deal they have with the developers is to just disable a few features so that they can be re-enabled in the paid version (purely speculation)?

But since the (Pan) code is open source, the options are right there in front of us, just disabled. Quite honestly… of all this disabled functionality, only one is truly worth pointing out: Pan restricts you to just 4 allowable concurrent connections to your Usenet provider at a time. Here is a small patch I created which increases this number to 99. The build I provide in this blog already has this patch applied. Here are the rest of the missing features (with some of my comments as well); maybe some will see value in the others?

Pan’s Missing Features

The Goods

For those hooked up to my repository, you’re already set; just type the following:

# install the new version of Pan
yum install pan --enablerepo=nuxref

You can also reference this table too for direct links:

Package:     pan
Download:    el7.rpm, fc22.rpm, fc23.rpm, fc24.rpm, fc25.rpm
Description: The Newsreader: This is the program that this blog focuses on.

Note: The source rpm can be obtained here which builds everything you see in the table above. It’s not required for the application to run, but might be useful for developers or those who want to inspect how I put the package together.

It’s also worth noting (again) that this build includes a small patch to increase the maximum allowable number of concurrent connections from 4 to 99.

Securing Your Connection

There is very little security built into Pan from a connection point of view. What little security is (normally) in place is built using GnuTLS. GnuTLS has a history of not keeping up with the security exploits and vulnerabilities that surface in encryption libraries. That doesn’t make it unsafe; it just doesn’t make it as reliable as its competition (OpenSSL and Crypto). For this reason, the packages I provide are intentionally not built against it (GnuTLS).

It’s really not a problem at the end of the day because there are other ways of securing this connection (properly). The way I use (and recommend) is through Stunnel.

Stunnel allows you to take an unencrypted input (from Pan) and connect it to an encrypted endpoint (at your Usenet provider). The best thing about stunnel is that it links against your (OpenSSL) shared system libraries, libssl.so and libcrypto.so, which are actively maintained and patched! Basically, what I’m saying is that by attaching Pan to Stunnel you get the feature-rich usage of Pan and the ongoing (reliable) security of OpenSSL.

The following will get you set up with stunnel; you’ll want to be root before running the command below:

# Install stunnel (if it's not installed already)
# you'll need to be connected to either EPEL or NuxRef for this
# to work:
yum install stunnel

You can also reference this table too for direct links:

Package:     stunnel
Download:    el7.rpm, fc22.rpm, fc23.rpm, fc24.rpm, fc25.rpm
Description: Secure Tunnel: for data encryption.

Note: This RPM is not required for Pan to run correctly. It does, however, offer you a safer and more secure method of encrypting your communication to (and from) your NNTP server.
# You must have root permissions when setting up
# stunnel

# Create relay bound to local server only (semi-colons are for
# comments):
cat << _EOF > /etc/stunnel/stunnel.conf
; Use it for client mode
; This is the pass-through mode you need to encrypt
; your NNTP traffic:
client = yes
 
[nntp]
;
; --- IN ---
;
; local port to listen on (on this PC)
; You will configure PAN to connect here:
accept = 127.0.0.1:119

;
; --- OUT ---
;
; The Remote Usenet Server's (encrypted) connection to use:
; In this example, I'm just pointing to Astraweb, but you
; can provide any Usenet server you wish here. Just be sure
; to point it to their secure transport point!
connect = ssl.astraweb.com:563
_EOF

# The line below isn't strictly necessary, but it lets you revisit
# this blog entry later and copy and paste these instructions again.
# It removes any previously added entries so duplicates aren't
# created in your startup file the next time around.
# It's harmless to run at any point:
sed -i -e '/bin\/stunnel/d' /etc/rc.d/rc.local

# Configure stunnel to start after each boot
echo "# Start /usr/bin/stunnel on boot each time:" >> /etc/rc.d/rc.local
echo "/usr/bin/stunnel" >> /etc/rc.d/rc.local

# By default stunnel is configured to read 
# its configuration from /etc/stunnel/stunnel.conf
# on startup:
stunnel
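
With stunnel running, it doesn’t hurt to confirm the relay is actually listening before pointing Pan at it. Here is a minimal sanity check (assuming the ss and nc utilities are present, which they normally are on CentOS/Fedora):

# Confirm something is listening on the local relay port
# we configured above (127.0.0.1:119):
ss -tlnp | grep ':119'

# Optionally talk to it directly; an NNTP server greets you
# with a 200 (or 201) banner if the tunnel is working:
nc 127.0.0.1 119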

The next step is to update your Pan server configuration to point to your local server (localhost or 127.0.0.1) instead of the remote one you’re accessing. Make sure to set the port to 119 too, like so:

Stunnel Pan Configuration


You’ll provide the same username and password you would have otherwise provided to your Usenet provider.

The end result is a secure connection between you and your Usenet provider like so:
Pan Setup With Stunnel

Scoring

Scoring articles can greatly ease your life when looking through all of the headers in front of you; it’s great for:

  • Eliminating SPAM
  • Filtering out potential malicious content (such as Trojans and Viruses)
  • Increasing the visibility of items of interest
  • Locating Authors of interest with ease

All scores can be optionally associated with a time limit too. When the limit expires, so does the score. This is useful when you only want to temporarily filter content. Otherwise the permanent scores will make up most of your configuration. To add a score, simply click Articles > Add a Scoring Rule…

Add a Scoring Rule

Here is an example of a rule you might add; this one greatly reduces the score of all entries that have potentially dangerous file extensions in the subject line:

Block Potentially Malicious Content


Pan’s built in filter field allows you to sift through all of the articles you found with keywords. Pairing this functionality with the scoring one really shows off the power of Pan.

All created scores are kept in ~/.pan2/Scores so don’t worry if you mess one up. You can just as easily open this file and fix it. Any manual changes to this file will however require you to exit out of Pan (if it’s open) and restart it.
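Since it’s just a plain text file, it doesn’t hurt to take a quick backup before hand-editing it; a minimal sketch using the default path noted above:

# Close Pan first, then back up your scoring rules:
cp ~/.pan2/Scores ~/.pan2/Scores.bak

# ... make your edits to ~/.pan2/Scores ...

# Restore the backup if you make a mess of things:
# cp ~/.pan2/Scores.bak ~/.pan2/Scores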

Here are just a few entries of what you might have in your Score file:

%BOS
% Greatly reduce score of potentially malicious content
[alt.bin*]
Score:: -9999
Subject: .*\.(exe|bat|vbs|cpl|msi|scr|vb(script)?|ws(f|h))[^A-Za-z0-9].*
%EOS

%BOS
% Moderately increase the score of compressed content
[alt.bin*]
Score:: 2500
Subject: .*\.(z(ip|[0-9]{2})|r(ar|[0-9]{2})|7z|iso)[^A-Za-z0-9]([0-9]{3}[^A-Za-z0-9])?.*
%EOS

%BOS
% Very slightly decrease the score of PAR content
% This allows it to not quite have the same spot light as
% the item it matches up against. If it were a compressed file
% it would already have +2500 from the previous score entry
% identified above.  These will just sit at +2400 instead.
[alt.bin*]
Score:: -100
Subject: .*(\.vol[0-9]+\+[0-9]+)?\.(par2|sfv)[^A-Za-z0-9].*
%EOS

%BOS
% Very slightly increase the score of NZB-Files
[alt.bin*]
Score:: 250
Subject: .*\.(nzb)[^A-Za-z0-9].*
%EOS 

%BOS
% Mildly drop the score of cross-posted content
[alt.bin*]
Score:: -750
Xref: (.*:){2} % cross-posted to 2 or more groups 
%EOS

Wrapping It Up

I’m certainly not asking anyone to change from their existing system if it works for them. What I am pointing out, though, is that Pan is completely free, it’s open source, and the features it offers are comparable to (if not better than) its competition. Although it works great on Linux, it also works on many other platforms, including those from Microsoft and Apple.

It might not have a beautiful interface, but it wasn’t built to fill your system’s memory with bloated eye candy. It was built to be fast and effective… and truly, it really is.

The newer versions coming out are really great! If you haven’t given it a try since its dated releases, you really should! If you’re interested in seeing how Usenet is structured, then this is also a great tool to learn with. If you run an indexer (such as newznab or one of its many forks) you can practice your regular expressions (regexes) using Pan. For an indexer admin, this tool is especially great for debugging your regexes!

Credit

This blog took me a very long time to put together and test! The repository hosting alone accommodates all of my blog entries to date. All of the custom packaging described here was done by me personally. I took the open source available to me, rebuilt it into an easier solution, and decided to share it. If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.



Install NodeJS v0.12 on CentOS 7

Introduction

NodeJS v0.10 presently ships on CentOS/Red Hat 7.x. If you Google around, there are some hacky ways to upgrade NodeJS on your system, but all of them (at least all of the ones I found) don’t manage these upgrades properly through RPMs.

Now, the good folks over at nodesource.com attempted to package this all into one RPM, but in doing so they unfortunately violate a number of standards that Red Hat tries to follow. It also causes conflicts with any existing (NodeJS) packages you may have been using, instead of simply upgrading them gracefully. This blog merely offers an alternative (more elegant) solution to the same problem nodesource.com already solved.

What is NodeJS?

For those of you who don’t know what it is: it’s a scripting language (like Python and PHP) that lets you write your code in JavaScript. That’s right, the same language web developers would otherwise use to write client-side code to enhance someone’s web experience. NodeJS, however, allows developers to write server-side code using this technology.

The tool is constantly evolving and becoming more widely used which is exactly why you need to upgrade your copy to take advantage of the new features it has to offer! 🙂

NodeJS Dependency Nightmare

I’ll be honest up front: NodeJS does suffer from a bit of a dependency nightmare. One component you install will require some other components to work, and they in turn will also require components (and so on and so forth). This is no different than other languages; however, with NodeJS the problem is that some dependencies don’t always connect up with others, and some dependencies conflict with one another. This prevents you from being able to fully experiment with some of the ongoing development and newer features easily. More importantly, NodeJS suffers from a lack of Semantic Versioning, which is the root cause of the dependency hell we have to work with.

The dependency issues that confront NodeJS are probably the biggest reason it’s not so easily integrated into Linux environments like the other scripting languages are.

The Extra Packages for Enterprise Linux (EPEL) team did do a good job of working out a ton of dependencies for us in the past, but no one seems to be keeping up with it. The packages they provide are old… I mean really old. Even the bleeding-edge Linux distributions (Fedora 23 and Ubuntu 15.10 at the time) still ship with NodeJS 0.10 and haven’t updated their NodeJS packages in years either.

The good news is, I’ve worked out a ton of these dependencies using many of the current packages available today, and I share them on my repository as well. So although I can’t satisfy the needs of everyone… I can certainly start with my own and share my work in case others are interested! 🙂

The Run Down

This blog focuses on properly providing installable RPMs that follow Red Hat standards. In addition to this, these packages will remain compatible with previous installations of NodeJS:

  • NodeJS v0.12 (from v0.10)
  • npm v3.8.x (from v1.3.x)
  • libuv v1.9 (from v0.10.x)

Not only that, but I ported over 360+ NodeJS packages into installable RPMs! They’re certainly not all tested, but it’s a heck of a lot farther than we were before in terms of what’s available to us.
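
If you’re curious what actually made it into the repository, you can browse it with yum once you’re connected (see the Installation section below). The search pattern here is only an assumption about the package naming; adjust it to taste:

# List everything NodeJS related that the repository offers:
yum --enablerepo=nuxref list available | grep -i node | less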

Installation

Make sure you’ve connected to my repository which is documented here. After that, it’s as simple as the following:

NodeJS

The bread and butter of this entire blog entry. This will haul in the latest libuv packaging as well:

# Run as root (or a user with sudoers permission)
yum install --enablerepo=nuxref nodejs

NPM

If you want to stray away from the 360+ packages I put together, you can use the latest version of npm to haul in your own.

# Run as root (or a user with sudoers permission)
yum install --enablerepo=nuxref npm
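
Once both packages are in place, a quick sanity check confirms the upgrade actually took; a minimal sketch:

# Confirm the interpreter and package manager versions:
node --version   # should now report a v0.12.x release
npm --version    # should now report a 3.8.x release

# And prove the runtime executes JavaScript:
node -e 'console.log("running on", process.version)'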

Credit

I may not blog often, but I want to reassure you of the stability and testing I put into everything I intend to share.

If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least; it’s really all that I ask of you.


Linux Administration Tools & Tricks

Introduction

All *nux administrators have tools they use to make their jobs easier. This blog will be an ongoing one: every time I find or create a handy tool, I’ll share it here. I’ll do my best to provide a quick write-up with it and some simple examples of how it may be useful to you too.
Currently, this blog touches on the following: genssl, fencemon, datemath, dateblock, nmap, and lsof.

Applications

Please note that most of these tools can be retrieved directly from the repository I’m hosting. You may wish to connect to it to make your life easier.


genssl: SSL/TLS Certificate Generator and Certificate Validation

genssl is a simple bash script I created to automatically generate private/public key pairs that can be used by all sorts of applications requiring SSL/TLS encryption. This could be any number of servers you’re hosting (web, mail, FTP, etc.). It will also test previously created keys (validating them). It supports generating keys that are to be signed by a Certificate Authority, as well as self-signed and test keys too.

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref genssl

What You’ll Need To Know
It’s really quite simple: SSL/TLS keys are usually assigned to domain names, so the domain is the only input to the tool, in addition to the key type you want to generate (self-signed, test, or registered):

# Generate a simple self-signed Certificate Private/Public
# key pair for the domain example.com you would just type the
# following:
genssl -s example.com
# ^-- creates the following:
#   - example.com.key
#   - example.com.crt
#   - example.com.README

# You can specify as many domain names as you want to
# generate more than one:
genssl -s example.com nuxref.com myothersite.com
# ^-- creates the following:
#   - example.com.key
#   - example.com.crt
#   - example.com.README
#   - nuxref.com.key
#   - nuxref.com.crt
#   - nuxref.com.README
#   - myothersite.com.key
#   - myothersite.com.crt
#   - myothersite.com.README

You can verify the keys with the following command:

# The following scans the current directory
# for all .key, .crt, and/or .csr files and tests that
# they all correctly match against one another.
genssl -v
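
If you’d rather double-check a pair by hand (or genssl isn’t installed on the machine holding the keys), the same validation can be done directly with openssl; the two digests below must match for the key and certificate to belong together. This is just a sketch using the file names generated in the earlier example:

# Compare the modulus of the private key against the certificate;
# matching MD5 digests mean the pair lines up:
openssl rsa  -noout -modulus -in example.com.key | openssl md5
openssl x509 -noout -modulus -in example.com.crt | openssl md5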

If you need a certificate signed by a registered Certificate Authority, genssl will get you half way there. It will generate your key and an unsigned certificate request (.csr). You’ll then need to choose a Certificate Authority that works best for you (cost-wise, features you need). You provide them the unsigned certificate request you just generated and, in return, they’ll produce (and provide) the signed certificate you need (.crt). Signed (registered) certificates cost money because you’re paying for someone else to confirm the authenticity of your site when people visit it. This is by far the absolute best way to publicly host a secure website!

# Generate a key and an unsigned certificate request
# pair for the domain example.com with the following:
genssl -r example.com
# ^-- creates the following:
#   - example.com.key
#   - example.com.csr
#   - example.com.README

You’ll notice that a README file is also generated for every domain you create keys for. Some of us won’t generate keys often, so this README provides some additional information explaining how you can use the key. It lists a few Certificate Authorities you can contact, and provides examples of how you can install the key into your system. Depending on the key pair type you generate, the README will provide you with information specific to your cause. Have a look at the contents yourself after you generate a key! I find the information in it useful!

The last thing worth noting about this tool is that, to create a key pair, you usually provide information about yourself or the site you’re creating the key for. I’ve defaulted all of these values to ones you’re probably not interested in using. You can override them by specifying additional switches on the command line. But the absolute easiest (and best) way of doing it is to just provide a global configuration file it can reference, which is /etc/genssl. This file is always sourced (if present). The next file that is checked and sourced is ~/.config/genssl, and then ~/.genssl.
Set these with information pertaining to your hosting to ease the generation of the key details. Here is an example of what any one of these configuration files might look like:

# The country code is represented in its 2-letter abbreviated form:
# hence: CA, US, UK, etc
# Defaults to "7K" if none is specified
GENSSL_COUNTRY="7K"
# Your organization/company Name
# Defaults to "Lannisters" if none is specified
GENSSL_ORG="Lannisters"
# The Province and or State you reside in
# Defaults to "Westerlands" if none is specified
GENSSL_PROVSTATE="Westerlands"
# Identify the City/Town you reside in
# Defaults to "Casterly Rock" if none is specified
GENSSL_LOCATION="Casterly Rock"
# Define a department; this is loosely used. Some
# just leave it blank
# Defaults to "" (blank) if none is specified
GENSSL_DEPT=""

There are other variables you can override, but the above are the key ones. The default cipher strength is rsa:2048 and keys expire after 730 days (which equates to 2 years). This is usually the maximum time Certificate Authorities will sign your key for anyway (if registering with them).

You don’t need a configuration file at all for this script to work; there are switches for all of these variables that you can pass in instead. It’s important to note that passed-in switches will always override the configuration file, but this is the same with most applications anyway.



fencemon: A System Monitoring Tool

fencemon is a beautiful creation by Red Hat; the only thing I did was package it as an RPM. I don’t think it’s advertised or used enough by system administrators. It simply runs from cron and constantly gathers detailed system information: the programs currently running, the system memory remaining, network statistics, disk I/O, etc. It captures content every 10 seconds, but does so using tools that are not I/O intensive, so running it will not slow your machine down enough to justify not using it.

Where this information becomes vital is during a system failure (in the event one ever does occur). It will allow you to see exactly what the last state of the system was (processes in memory, etc.) prior to your issue. The detailed information can be used for everyday troubleshooting as well, such as identifying a process that goes crazy overnight, and other anomalies that you’re never around to witness first hand.

Why is it called fencemon?
It got its name because its initial purpose was to run in clustered environments, which do a thing called ‘fencing’. Fencing is when one cluster node sees that another is violating some of the simple cluster rules put in place, or simply isn’t responding anymore. It is effectively the power cycling of that node (so an admin doesn’t have to). In almost all cases (unless there is seriously something wrong with the fenced node), it will reboot and rejoin the cluster correctly. This tool allows an admin to see the final state of the cluster node before it was lost. But any server can crash or go crazy when deploying software in a production environment. We all hope it never happens, but it can. The logging put in place by fencemon can be a life saver!

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref fencemon

What You’ll Need To Know
The constant system capturing that is going on will report itself to /var/log/fencemon/.

The format of the log files is:
hostnameYYYYMMDDHHMMSS-info-capturetype.log

Once an hour elapses, a new file is written to. Each file contains system statistics in 20 second bursts; as you can imagine, there is A LOT of information here.

The following capturetype log files (with their associated commands) are gathered in 20-second increments:

  • hostnameYYYYMMDDHHMMSS-info-ps-wchan.log:
    ps -eo user,pid,%cpu,%mem,vsz,rss,tty,stat,start,time,wchan:32,args
  • hostnameYYYYMMDDHHMMSS-info-vmstat.log:
    vmstat
  • hostnameYYYYMMDDHHMMSS-info-vmstat -d.log:
    vmstat -d
  • hostnameYYYYMMDDHHMMSS-info-netstat.log:
    netstat -s
  • hostnameYYYYMMDDHHMMSS-info-meminfo.log:
    cat /proc/meminfo
  • hostnameYYYYMMDDHHMMSS-info-slabinfo.log:
    cat /proc/slabinfo
  • hostnameYYYYMMDDHHMMSS-info-ethtool-$iface.log:
    # $iface will be whatever is detected on your machine
    # you can also define this variable in /etc/sysconfig/fencemon
    # ifaces="lo eth0"
    # The above definition would cause the following to occur:
    /sbin/ethtool -S lo >> \
       /var/log/fencemon/hostname-YYYYMMDD-HHMMSS-info-ethtool-lo.log
    /sbin/ethtool -S eth0 >> \
       /var/log/fencemon/hostname-YYYYMMDD-HHMMSS-info-ethtool-eth0.log
    

The log files are kept for about 2 days, which can occupy around 750MB of disk space. But hey, disk space is cheap nowadays; you should have no problem reserving a gig of disk space for this info. It’s really worth it in the long run!
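
When you actually need to dig into an incident, the newest captures are usually the ones you want. A small sketch (the paths and file naming follow the defaults described above):

# How much space are the captures using right now?
du -sh /var/log/fencemon/

# List the most recent capture files (newest first):
ls -lt /var/log/fencemon/ | head

# Peek at the last process snapshot that was taken:
tail -n 100 "$(ls -t /var/log/fencemon/*-info-ps-wchan.log | head -n 1)"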



datemath: A Date/Time Manipulation Utility

I won’t go too deep on this subject since I’ve already blogged about it before here. In a nutshell, it can easily provide you a way of manipulating time on the fly to ease scripting. This tool is not rocket science by any means, but it simplifies shell scripting a great deal when performing functionality that is time sensitive.

This is effectively an extension of (or the missing features to) the already existing date tool, which a lot of developers and admins use regularly.

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref datemath

What You’ll Need To Know
There are 2 key applications: datemath and dateblock. Datemath can ease scripting by returning a time relative to when it was called; for example:

# Note: the current date/time is printed to the
#        screen to give you an idea how the math was
#        applied.
# date +'%Y-%m-%d %H:%M:%S' prints: 2014-11-14 21:49:42
# What will the time be in 20 hours from now?
datemath -o 20
# 2014-11-15 17:49:42

# Top of the hour 9000 minutes in the future:
datemath -n 9000 -f '%Y-%m-%d %H:00:00'
# 2014-11-21 03:00:00

If you use a negative sign, then it looks in the past. There are lots of reasons why you might want to do this; consider this little bit of code:

# Touch a file that is 30 seconds older than 'now'
touch -t $(datemath -s -30 -f '%Y%m%d%H%M.%S') reference_file

# Now we have a reference point when using the 'find'
# command. The below command will list all files that
# at least 30 seconds old. This might be useful if your
# hosting an FTP server and don't want to process files
# until they've arrived completely.
find -type f -newer reference_file
# You could move the fully delivered files to a new
# directory with a slight adjustment to the above line
find -type f -newer reference_file -exec mv -f {} /new/path \;

# Consider this for an archive clean-up for a system
# you maintain. Simply touch a file as being 31 days
# older than 'now'
touch -t $(datemath -d -31 -f '%Y%m%d%H%M.%S') reference_file

# now you can list all the files that are older than
# 31 days.  Add the -delete switch and you can clean
# up old content from a directory.
find /an/archive/directory -type f ! -newer reference_file

# You can easily script sql statements using this tool too
# consider that you need to just select yesterdays data
# for an archive:
START=$(datemath -d -1 -f '%Y-%m-%d 00:00:00')
FINISH=$(date +'%Y-%m-%d 00:00:00')
BACKUP_FILE=/absolute/path/to/put/backup/$START-backup.csv
# Now your SQL statement could be:
/usr/bin/psql -d postgres \
      -c "COPY (SELECT * FROM mysystem.data WHERE 
                mysystem.start_date >= '$START' AND 
                mysystem.finish_date < '$FINISH') 
          TO '$BACKUP_FILE';"
# Now that we've backed our data up, we can remove
# it from our database:
/usr/bin/psql -d postgres \
      -c "DELETE FROM mysystem.data  WHERE 
                mysystem.start_date >= '$START' AND 
                mysystem.finish_date < '$FINISH'"



dateblock: A Cron-Like Utility

dateblock allows us to mimic the functionality of sleep by blocking the process. The catch is dateblock blocks until a specific time instead of for a duration of time.

This very subtle difference can prove to be very effective and powerful, especially when you want to execute something at a specific time. Crons are fine for most scenarios, but they aren’t ideal in a clustered environment, where this tool excels. In a clustered environment you may only want an action to occur on the node hosting the application, whereas a cron would require the action to fire on all nodes.

The Python binding to the (C++) library extends this tool’s usage even further, since you can integrate it directly with your own application.

Just like datemath, dateblock is discussed in much more detail in my blog about it here.

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref dateblock

# Get the python library too if you want it
yum install -y --enablerepo=nuxref python-dateblock

What You’ll Need To Know
Consider the following:

# block for 30 seconds...
sleep 30

# however dateblock -s 30 would block until the next 30th second
# of the minute is reached. Consider the time:
# date +'%Y-%m-%d %H:%M:%S' prints: 2014-11-14 22:14:16
# dateblock would block until 22:14:30 (just 14 seconds later)
dateblock -s 30

Dateblock works just like a crontab does and supports the crontab format too. I wrote it specifically to provide accuracy for time-sensitive applications. If you write your actions and commands in a script under a dateblock command, you can always guarantee your actions will be called at a precise time.
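
For example, a tiny wrapper script that fires a job at exactly the top of every 5th minute might look like this. This is only a sketch; /path/to/task.sh is a placeholder for whatever you want to run:

#!/bin/bash
# Unblock at :00, :05, :10, ... of every hour, then run our task.
while true; do
   dateblock -n /5
   /path/to/task.sh
done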

Since it supports cron entries you can feed it things like this:

# The following will unblock if the minute becomes
# divisible by 10.  so this would unblock at the
# :00, :10, :20, :30: 40: and :50 intervals
dateblock -n /10

# the above command could also be written like this:
dateblock -n 0,10,20,30,40,50

# You can mix and match switches to tweak the tool to
# always work in your favour.

A really nice switch that can help with your debugging and development is the --test (-t) switch. This will cause dateblock to not block at all. Instead, it will just spit out the current time, followed by the time it would have unblocked at had you not put it into test mode. This is great for tweaking complicated arguments until the tool works in your favour.

# using the test switch, we can make sure we're going
# to unblock at time intervals we expect it to. In this
# example, i use the test switch and a cron formatted
# argument.  In this example, I'm asking it to unblock
# ever 2 days at the start of each day (midnight with
# respect to the systems time)
dateblock -t -c "0 0 0 /2"
Current Time : 2014-11-14 22:24:06 (Fri)
Block Until  : 2014-11-16 00:00:00 (Sun)

# Note: there is an extra 0 above that may throw you
#       off.. in the normal cron world, the first
#       argument is the 'minute' entry.  Well since
#       dateblock offers high resolution, the first
#       entry is actually 'seconds', then minute, etc.

There are man pages for both of these tools; they give a lot more detail on the power they offer. Alternatively, if you’re even remotely interested, check out my blog entry on them.

The other cool thing is dateblock also has a python library I created.

# Import the module
from dateblock import dateblock

# Here is a similar example to the one above; we set
# block to False so we can see how long we're blocking for
unblock_dt = dateblock('0 0 0 /2', block=False)
print("Blocking until %s" % unblock_dt.strftime('%Y-%m-%d %H:%M:%S'))

# by default, blocking is enabled, so now we can just
# block until the specified time is reached
dateblock('0 0 0 /2')



nmap (Network Mapper)

nmap is a port scanner & network mapping tool. You can use it to find out all of the open ports a system has, and you can also use it to see who is on your network. This tool is awesome, but its intentions can easily be abused. It’s common sense to scan your own network to make sure that only the essential ports are open to the outside world, but it’s not kind to scan a system that you do not own. Hence, this tool should be used to identify your own systems’ weaknesses so that you can fix them; it should not be used to identify someone else’s. For this reason, if you do install nmap on your system, consider limiting permissions so that only the root user has access to it.

# Assuming you've connected to my repository,
# just type the following:
yum install -y --enablerepo=nuxref-shared nmap

# Restrict its access (optional, but ideal)
chmod o-rwx /usr/bin/nmap

What You’ll Need To Know
Now you can do things like:

  • Scan your system to identify all open ports:
    # Assuming the server you are scanning is 192.168.1.10
    nmap 192.168.1.10
    
  • Scan an entire network to reveal everyone on it (including MAC Address information):
    # Assuming a class C network (24 masked bits) with
    # a network address of 192.168.1.0
    nmap -n -v -sP 192.168.1.0/24
    

Note: Don’t panic if your system appears to have more ports open than you had thought; at least, don’t panic until you determine where you are running your network map test from. You see, if you run nmap on the same system you are testing against, then your test might not prove accurate, since firewalls do not normally deny local traffic. The best test is to use a second access point to test against the first. Also, if your server sits on more than one network, make sure to test it from a different server on each network it connects to!
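
When you do run the scan from that second machine, it can also help to narrow things down; a couple of quick examples (run these only against hosts you own):

# Scan just the well-known (privileged) ports:
nmap -p 1-1024 192.168.1.10

# Or check a single service, e.g. is SSH reachable from here?
nmap -p 22 192.168.1.10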

Some Further Reading

  • A blog with a ton of different examples identifying things you can do with this tool.

lsof (LiSt Open Files)

This tool is absolutely fantastic when it comes to performing diagnostics or handling emergency scenarios. This command is only available to the root user; this is intentional due to the power it offers system administrators.

I’m not hosting this tool simply because it ships with most Linux flavors (Fedora, CentOS, Red Hat, etc). Therefore, the following should haul it into your system if it’s not already installed:

# just type the following:
yum install -y lsof

# If this doesn't work, the rpm will be found
# on your DVD/CD containing your Linux Operating
# System.

What You’ll Need To Know
This tool, when executed on its own (without any options), simply lists every open file, device, and socket you have in one great big list. This can be useful when paired with grep if you’re looking for something specific.

# list everything opened
lsof

But its true power comes from its ability to filter through this list and fetch specific things for you.

# list all of the processes utilizing port 80 on your system:
lsof -i :80 -t

# or be more specific and list all of the processes accessing
# a specific service you are hosting on an IP listening on
# port 80:
# Note: the below assumes we're interested in those accessing
#       the ip address 192.168.0.1
lsof -i @192.168.0.1:80 -t

# If you pair this with the kill command, you can easily kick
# everyone off your server this way:
kill -9 $(lsof -i @192.168.0.1:80 -t)

But the tool is also great for even local administration; let’s say there is a user account on your system with the login id of bob.

# find out what bob is up to and what he's accessing on
# your system this way (you may need to pair this with
# grep since the output can be quite detailed
lsof -u bob

# Perhaps you just want to see all of the network activity
# bob is involved in. The following spits out a really
# easy to read list of all of the network related processes
# bob is dealing with.
lsof -a -u bob -i

# You can kill all of the TCP/IP processes bob is using with
# the following (if bob was violating the system, this might
# be a good course of action to take):
kill -9 $(lsof -t -u bob)

Perhaps you have a file on your system you use regularly; you can find out who else is using it with the following command:

# List who is accessing a file
lsof /path/to/a/file
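
Two more variations I find handy; both switches are standard lsof options:

# List every file a specific process has open (replace 1234
# with the PID you're interested in):
lsof -p 1234

# Find files that are deleted from disk but still held open;
# these can quietly hold onto your free space:
lsof +L1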

Some Further Reading

  • lsof Survival Guide: A Stack Overflow post with some awesome tricks you can do with this tool.
  • More great lsof examples: someone’s blog written specifically about this tool, providing many more documented examples of its usage.

Credit

This blog took me a very long time to put together and test! If you like what you see and wish to copy and paste this HOWTO, please reference back to this blog post at the very least. It’s really all I ask.