Tech musings and other things...

Southeast Puppet User's Group September



John Ray is bringing the Puppet + Docker goodness in his talk tonight: “Deploying Docker Containers with Puppet”.  Join us each month at the Shadow Soft offices for the latest in DEVOPS topics and information.  Always fun, lots of discussion and information surrounding Puppet topics and associated technologies.  There’s always pizza and beverages of all kinds, and we’ve finally moved into our new meeting/class rooms, so come on out.

The Toolbox Grows...


So far we’ve gotten our heads around some important things.  First and foremost, vim: our editor and companion for creating great code, giving us ways to see our code in action and determine at a glance whether our syntax is correct.  Also, we’ve looked at revision control, the single largest “CYA” ohmygodimgladivegotanoldercopytorestoreto sort of paradigm, where you can roll yourself back to previously “known good” revisions to save the day…besides that, it’s just darned good practice to keep your code externally saved, revision controlled, and accessible.

I’ve also talked about the importance of workflow clarity and quality.  If you implement a poor workflow, you just have an automated poor workflow.  The key word here is “poor”.

Next up on our browse through the “toolbox” is “Vagrant”.  What is this Vagrant, you ask?

Virtualization is paramount in today’s world in a number of ways and for a number of reasons: extending your server farms to handle even more application load, expanding your own desktop machines to test or try different operating systems, or even just rolling up an ad-hoc VM so you can try something without touching a “real” machine in your environment.

Some may disagree, but I’ve found virtualization to be one of the most powerful tools added to the toolbox in years.  Not only can you prototype systems or applications, but you can prototype entire environments.  This is where Vagrant shines, and especially in the context of Puppet (master + clients), allows you to create a fully functioning Puppet environment upon which to develop, prototype, and test without ever jeopardizing even the least important system of your infrastructure.  I count that as a “win”.  Let’s see what this tool can do.

What is Vagrant?

According to its website:

Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine.

There’s a lot there, but it’s just a fancy way of saying exactly what I said before.  Vagrant is essentially a framework system that wraps your virtualization engine to manage environments of VMs.  Here is where Vagrant will hold the power for us.


If Vagrant is the framework, then virtualization is the foundation.  Now, I’ve chosen to use VirtualBox for my virtualization technology, but VMware works every bit as well.  I am doing all my testing over VirtualBox, however, so YMMV.  VirtualBox is freely available from Oracle; you can download the appropriate version from the VirtualBox website.  I am running the latest version, 4.3.12 (as of this writing), and it serves the Vagrant system extremely well.


Next, you’ll need to install Vagrant on your system.  You can find all the right packages on the Vagrant downloads page.  I am currently running version 1.6.3 without errors.

[warning]I want to make a disclaimer here, since I’ve had an issue or two with Vagrant on a platform I don’t use: Windows.  I am a Mac & Linux user, and have had no issues using the Vagrant/VirtualBox combo on either of these.  However, literally every time I’ve used Vagrant over Windows, it’s just been a mess.  I’ve known one person (ONE!) who has gotten Vagrant to work over Windows, and it required his getting into the product, editing code, etc.  As such, I wouldn’t recommend it for those new to the platform.[/warning]

On the Mac platform, you get a .dmg file; extract it and run the installer.  Linux versions are available as RPM and Debian packages.  Once you’ve installed, let’s mess around a bit with Vagrant to see what we can do.

Getting Started

Vagrant is a unique tool in that it allows you to manage all these varied VMs, but adds a twist.  The big twist is that you don’t have to have the source materials for the VMs you’re installing.  In fact, the simplicity of turning up a new VM is astounding.  Take the following series of commands:

mkdir precise32
cd precise32
vagrant init hashicorp/precise32
vagrant up

If your Vagrant is installed correctly, a number of things start to happen.  First, Vagrant places a file in your cwd called “Vagrantfile”.  Your new Vagrantfile looks like this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at the public box catalog.
  config.vm.box = "hashicorp/precise32"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which is generally matched to a bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation
  # for more information.
  # config.push.define "atlas" do |push|
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end

Note that this is a long file with a lot of explanatory documentation.  In actuality, the most important part of your Vagrantfile can be summed up here:

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise32"
end

These are the lines that are uncommented, plus the top two declaratives that tell Vagrant what to do.  It’s a very simple file that does some very powerful things.  First, Vagrant checks your home directory in the ~/.vagrant.d location to see if you already have the “precise32” Vagrant source “box” (more on boxes later).  Next, if you do have it, Vagrant simply starts up a VM in your virtualization of choice with a randomized name.  For instance, mine is called “precise32_default_1402504453444_30545”.  Vagrant takes away the selection of an .iso image, connecting it to the virtual CD/DVD-ROM, starting an installer, etc.  It simply retrieves a pre-rolled image, places it in your .vagrant.d directory, provisions the VM to respond to Vagrant commands, and starts it up within VirtualBox.  Precise32 is simply a test scenario; Vagrant’s “ready-made” box discovery site hosts quite a number of varied and specially configured “box” files that you can use to prototype on.  You can install boxes with too many variations and differentiations to enumerate here, and that’s not really the point for our purposes… you may find these of great assistance in your own workplace, but let’s continue.

When you run the “vagrant init” command listed above, Vagrant places a Vagrantfile, and when you do a “vagrant up”, it automatically retrieves your box file, provisions the VM, and starts it.  Then, by simply running “vagrant ssh default”, you are logged into this virtual machine!  You also have full sudo to become root and do any sort of damage you may wish to do.  If you log out (“exit” or CTRL-D) and type “vagrant destroy”, the VM goes away and you have nothing in VirtualBox.
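Under the hood, “vagrant init” just drops that Vagrantfile into the current directory, so a minimal sketch of the same setup can be scripted by hand.  The directory and box names below are the ones used above; the lifecycle commands are shown as comments, since they boot and destroy a real VM:

```shell
# "vagrant init hashicorp/precise32" is equivalent to writing a minimal
# Vagrantfile yourself in a fresh directory.
mkdir -p precise32 && cd precise32

cat > Vagrantfile <<'EOF'
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise32"
end
EOF

# The lifecycle from here (comments only -- these start/stop a real VM):
#   vagrant up          # fetch the box (first run only) and boot the VM
#   vagrant ssh default # log in to the guest, with full sudo available
#   vagrant destroy     # tear the VM down again
```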

Were we to just stop here, the power inherent in being able to just have these “Vagrantfiles” (sort of like a “Makefile” for boxes) to spin up and down test scenarios at will is incredible.  But, let’s look at this in light of the Vagrantfile, what it can do and how you can customize it.  There is an entire descriptive language surrounding Vagrant PLUS Vagrant has a plugin infrastructure whereby developers can extend Vagrant’s capabilities.  We will capitalize on these later.

So, imagine a scenario where you can create a directory, copy a text file into it, run a single command, and it automatically provisions a 4-node Puppet Enterprise infrastructure, fully installed with a master and three agents, MCollective fully installed, PuppetDB installed and in use…  literally a full installation just like you would use for your infrastructure…  Now we get powerful.  NOW we have the ability to do some cool things.
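As a taste of how a single Vagrantfile can describe several machines at once, here is a minimal sketch using Vagrant’s multi-machine syntax.  This is not the full Puppet Enterprise build described above; the hostnames and box choice are illustrative placeholders only:

```shell
# Write a Vagrantfile defining one "master" and three "agent" VMs.
# Names and hostnames here are invented for the sketch.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise32"

  # one Puppet master...
  config.vm.define "master" do |node|
    node.vm.hostname = "master.example.vm"
  end

  # ...and three agents, defined in a loop
  (1..3).each do |i|
    config.vm.define "agent#{i}" do |node|
      node.vm.hostname = "agent#{i}.example.vm"
    end
  end
end
EOF

# "vagrant up" would then boot all four VMs; "vagrant up master" just one.
```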

Next time, that’s exactly what we’re going to do.

"Do's and Dont's" for Your Puppet Environment


IT Automation, like the features and functions offered by Puppet, is riddled with a number of pitfalls. Nothing dangerous or site-threatening in the near term, however evolving a bad plan can lead you down a painful path to re-trek when you ultimately need to demolish what you’ve done and re-tool, re-work, or even re-start from scratch.  Some simple suggestions can help smooth your integration, and also provide tools and methodologies that make changes in philosophy easy to test and implement as well as make the long road back from a disaster easy(-ier?) to navigate.

Here are some simple guidelines that can provide that foundation and framework:

DO Always Use Revision Control

It would seem this would be a foregone conclusion in this day and age, but you would be surprised just how many shops don’t have revision control of any kind in place.  A series of manifests or configurations might be tarred up and sent to the backup system, but aside from dated backups, there’s no real versioning…just monolithic archives to weed through in a time of disaster.

Revision control puts you one command away from restoring those configurations and manifests (and even your data vis-a-vis “Hiera”) to their original locations in the most recent state.
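In a tool like Git, that restore is literally one command.  A small sketch, with the repository, file name, and contents invented for the demo:

```shell
# Set up a throwaway repo with one "known good" commit.
cd "$(mktemp -d)"
git init -q .
git config user.name "Demo"
git config user.email "demo@example.com"

echo 'known good config' > site.pp
git add site.pp
git commit -q -m 'known good state'

echo 'disastrous edit' > site.pp   # the mistake...
git checkout -- site.pp            # ...and the one-command restore
cat site.pp                        # prints: known good config
```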

DO Rethink Your Environments

If you automate a bad workflow, you still have a bad workflow.  (albeit an automated one!)

Rethink how you do things and why.  Why do you promote code the way you do, and is there a better way to do it?  Why do you still have a manual portion to your procedure: is it entirely necessary, or can this be remanded to Puppet to do for you as well?  What things are you doing well?  How can they improve?

Try to think through all your procedures.  There are more than you think, and they’re often less optimized than they can be.  If you’re going to implement Puppet automation, it’s time to retool.

DO Implement Slowly and Methodically

Another pitfall a lot of shops wander into is that they try to do too much all at once, and do none of it well.  They implement too quickly, migrating a huge environment it took years to build (sometimes as much as a decade!) through a single professional services engagement, or at an otherwise unrealistic pace.  Automation is complex, but if you take the time to implement correctly, piece-by-piece and hand-in-hand with your rethinking of your environment referred to above, you can revolutionize the way you work and make the environment considerably more powerful, considerably easier to work with, and ultimately release yourself to work on much more interesting problems in your environment.  Take your time to build the environment you want.

DO Engage the Community

By using Puppet, you are the beneficiary of the greatest software development paradigm in history – the Open Source movement.  People all over the world have taken part in crafting the powerful tool you have before you.  If you are able to help in like manner, by all means contribute your code to the community. (With your data in Hiera, this is easier than ever!)  Join a Puppet Users Group.  Share your clever solutions to unique problems with the community via GitHub, the Puppet Forge, your website… give back.  The more you pour in, the more you get out, and something you solve may end up baked into the final product one day in the future.

DON’T Pit Teams Against Each Other

DON’T make this a DEV vs OPS paradigm.  This is a marriage of the best tools of both worlds.  Depending on how your culture breaks down, this could be an OPS-aware way of doing development, or a DEV-informed way of doing operations.  You need to remember one thing in all of it.  The marriage of these worlds is a teamwork effort.

I was averse to the term DEVOPS when it first started being used, because in the development world I was engaged with, it was used as a lever to cede root-level access to developers.  In a properly managed, secure environment, this is always a no-no.  Development personnel are rarely trained systems people.  By the same token, never ask your systems people to delve into core development, or to troubleshoot your developers' code.  They are not tooled for that work.

This does not say that one is better than the other, nor does it say they do not share a certain amount of core skills at the basest levels. Much like the differences between civil and mechanical engineers, each has a base level of knowledge that ties them together, but each is highly specialized.  You don’t want your civil engineer building machine tools just as much as you don’t want your mechanical engineer building bridges.  Each discipline is highly specialized and carries with it nuance and knowledge you only gain through experience…experience on the job.

Instead, find a culture and a paradigm that joins the forces of these two disciplines to build something unique and special rather than wasting time with dissension and argument.

DON’T Expect Automation to Solve Everything

I know, that sounds like sacrilege at this point, but it’s true.  No matter how automated your site becomes, how detailed your configuration elements are, or how much you’ve detailed your entire workflow, you still can never replace the element of human consideration and decision-making.

Automation, as I’ve said before, automates away the mundane to make time for you, DEVOPS person, to work on really interesting and curious work.  You can now write that entire new whiz bang gadget you’ve been conceptualizing for the last several years, but have never quite gotten there because you were too busy “putting out fires”.  Puppet automation is definitely a watershed in modern administration and development, but people are still needed.

Another “intangible” you may not readily think about when considering a DEVOPS infrastructure is one of culture.  The best places to work are always the best cultures brought about by the right collection of people, ideas, personalities, and management styles.  When you find that right mix of people and ideas, the workplace becomes a, forgive me, magical place to be.  Automation can never make that happen.

DON’T Starve Your Automation Environment

Automation solves a lot of things, but one thing it cannot do is feed itself.  This particular animal has a ton of needs over time.  From appropriate hardware to personnel, the environment needs time, attention, and consideration.  Remember that this is the “machine tool” of your whole company.  It is the thing that builds and maintains other things.  As such, its priority rises above that of the next web server or DNS system.

Always allocate enough resources (read: money, personnel, and time) to your environment.  If that means engineer time to work on a special project and to do the job right, that’s what it means.  And, yes, it’s more important than meeting an arbitrarily assigned “live date” for your new widget or site or application.  The environment comes first, and all else follows.  If you give your automation initiatives the resources and time they deserve, a number of years down the road you will look back and be amazed at the sheer amount of work your team was able to accomplish just by keeping this simple precept.

DON’T Stop Evolving

Never stop learning.  Never stop bettering yourself or your environment.  Always keep refactoring your code.  (i.e., if you wrote that Apache module 4 years ago, chances are good that what you’ve learned in the interim can go back into making that module even better.)  Always keep your people trained and engaged on the latest developments in Puppet and all the associated tools.  Never stop striving to be better and never stop reaching.  I may sound like your high school coach in this, but those principles he was trying to impart hold true.  If you continue to drive forward and reinvent yourself as a regular part of your forward pursuits, the endpoint of that evolution will benefit you personally, your team both vocationally and culturally, your company’s efficiency, and your environment’s impact on your bottom line.


If we keep a stronger eye on our environment and tools, one that rises above the simple concepts of “that software I bought” and “fit it in between all the other things you have to do”, and give Puppet its proper place in our company, it can truly revolutionize your workflow.  When properly placed culturally, and from a design, implementation, and workflow perspective, it can transform any shop on levels not readily observable when looking at the price tag or the resource requirements list.  DO let Puppet transform your environment and workflow, and DON’T be afraid to take the plunge.  It’s exciting, challenging, and can easily take your company to the “next level”.

GitHub, Git, and Just Plain Revision Control


One of the “bugaboos” in the sysadmin world for the longest time was the reluctance to use those “stinky developer tools” in our world for any reason.  I’m not sure of the impetus behind this, but my wager is on something akin to security, or yet another open port, or an “attack vector” if you will.  But today’s competent and conscientious systems admin (not to mention DEVOPS person) will use revision control as their go-to standard for collecting, versioning, backing up, and distributing all manner of things.

I’ve seen some shops use CVS as their choice, old though it is, just as a large “bucket” in which to throw things for safekeeping, with revisions and rollbacks available in case of some uncertain, as-yet-unencountered event.  Subversion was the next generation of revision control tools.  Darling of developers and bane of disk space, Subversion had many more features but performed essentially the same task.

Now, Git is the flavor of the month, and not only has it gained widespread acceptance as a standard way to “do” revision control, it’s the de-facto way to do DEVOPS in a Puppet world.  Granted, there are those brave souls out there who have tried to stick with the older tools, but the workflow, and the “glue” between all the various components therein, is built around Git.  Hence, this post.

What is Git, really?

Git was developed by Linus Torvalds for Linux Kernel collaboration.  He needed a new revision control system akin to the previously used BitKeeper software that was unencumbered by copyright and able to handle the unique distributed development needs of the Linux project.  So, rather than try and use someone else’s project, he collated what was needed and developed the project himself.

Now, Git is used both privately and publicly throughout the world for many projects.  Git is lightweight and works in a more efficient manner, moving changes via diffs rather than whole repositories, and it allows developers to maintain and manage an entire repository on their own systems, either connected to or disconnected from the Internet.  Then, they can “push” all their changes back to the central repository as needed.

Enter GitHub

For our purposes, we’ll specifically be working in GitHub.  GitHub is a project offering web-based hosting of your code that you can source from anywhere.  GitHub offers public and private hosting and a spate of other related services for development collaboration on the Internet.  If you do not have a GitHub account, you’ll need to surf on over to the site and sign up for one.  It’s free and it’s fast, and I’ll be using and sourcing it heavily as this series continues.

Basic Git

Git itself is available on most modern platforms and can easily hook into GitHub for our purposes.  I will be mostly referring to command-line usage of git, but you will find quite a bit in the way of tools, frontends, and “helper” apps for Git that you may or may not wish to leverage as you learn and incorporate Git into your workflow.  In the meantime, stick with me on command-line work.

When you install git on your unix-like platform, it will drop a few binaries.  The one we’re most interested in is the git binary itself.  It’s very simply designed and has a very straightforward set of options you can get from the command line by simply typing “git” with no options, or “git help”.  The output is below:

usage: git [--version] [--help] [-C <path>] [-c name=value]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p | --paginate | --no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]

These are common Git commands used in various situations:

start a working area (see also: git help tutorial)
   clone      Clone a repository into a new directory
   init       Create an empty Git repository or reinitialize an existing one

work on the current change (see also: git help everyday)
   add        Add file contents to the index
   mv         Move or rename a file, a directory, or a symlink
   reset      Reset current HEAD to the specified state
   rm         Remove files from the working tree and from the index

examine the history and state (see also: git help revisions)
   bisect     Use binary search to find the commit that introduced a bug
   grep       Print lines matching a pattern
   log        Show commit logs
   show       Show various types of objects
   status     Show the working tree status

grow, mark and tweak your common history
   branch     List, create, or delete branches
   checkout   Switch branches or restore working tree files
   commit     Record changes to the repository
   diff       Show changes between commits, commit and working tree, etc
   merge      Join two or more development histories together
   rebase     Reapply commits on top of another base tip
   tag        Create, list, delete or verify a tag object signed with GPG

collaborate (see also: git help workflows)
   fetch      Download objects and refs from another repository
   pull       Fetch from and integrate with another repository or a local branch
   push       Update remote refs along with associated objects

'git help -a' and 'git help -g' list available subcommands and some
concept guides. See 'git help <command>' or 'git help <concept>'
to read about a specific subcommand or concept.

We’re most interested in a small subset of commands for our purposes here.  They are add, commit, pull, push, branch, checkout, and clone.

I will be referencing one particular way to “do” git which works for me, but as with anything TMTOWTDI and YMMV.

GitHub Portion

I am going to assume you’ve created a GitHub account.  When you create your account, you’ll have a unique URL assigned to you based on your username; mine, for instance, incorporates my username.  The basic interface to GitHub is rather straightforward and looks like the following:


The interface keeps track of all projects you’re working on and the frequency with which you commit or otherwise use your repository, and (most importantly) provides a centralized server that stores those projects, which you can source from any internet-connected system.

Make a Repository

In the upper right-hand corner of your screen, you’ll notice a “+” symbol.  Let’s click that and create us a new repository.  You’ll be presented with a dialog to name and describe your new repo.  I’ll use the name “samplerepo” and the description “Sample Repo for my Tutorial” with no other options other than the defaults.  (We’ll go manually through those processes shortly.)  After creating the repository by clicking “Create Repository”, I’m presented with a page that has step-by-step instructions on what to do next.  I’ll include that here for you.


As you can see, you have your repo referenced at the top by <username>/<reponame>.  You have instructions on how to use the repo from both the GitHub desktop client and the command line, and some special instructions for if you already have the content locally on your system and are just now uploading it into this repository you’ve created to hold it.  We’re interested in the command line instructions.

A Place to Git

On my system (a Mac), I have Git installed by default and I have a directory in my home directory simply called “Projects”.  Under there, I have a “Git” directory.  ALL of my work in Git goes here.  This is not a hard/fast rule.  I just chose it as my location to place all my git work so it is centralized and all collected together.

What we’re going to do next is to configure Git, create a location for our repo, make a file to commit to the repo and then push that file up to GitHub to see how that workflow works.  Let’s get started.

Configuring Git

Since Git is personal to you as a user, you need to let Git know a few things about you.  This gives your git server (in our case GitHub) the information it needs when you’re pushing code (like your identity, default commit locations, etc).  First, your name and email:

git config --global user.name "John Doe"
git config --global user.email "johndoe@example.com"

You’ll only need to perform this once.  There are quite a few other options, and you can read up on those in the git-config documentation at your leisure.
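A quick sketch of those identity settings in action, run against a throwaway HOME directory so your real ~/.gitconfig is left untouched (the name and email are examples):

```shell
# Point HOME at a temp directory so --global writes to a scratch .gitconfig.
export HOME="$(mktemp -d)"

git config --global user.name "John Doe"
git config --global user.email "johndoe@example.com"

git config --global user.name   # prints: John Doe
```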

Next, create a location for your new repo.  I chose my aforementioned directory and created the location:

/Users/jsheets/Projects/Git/samplerepo

for demonstration purposes.  From here, though, we can take up with the instructions on the GitHub page displayed after creating your repo.  I’ll reproduce that here for reference:

cd /Users/jsheets/Projects/Git/samplerepo
git init
touch README.md
git add README.md
git commit -m "first commit"
git remote add origin git@github.com:<username>/samplerepo.git
git push -u origin master

If all has gone well, you have now created an empty README.md file, committed it to your local Git repository, and then subsequently pushed it up to GitHub.  You’ll note that we added “origin” as the remote, and then we pushed to “origin” in a thing called “master”.  What’s that all about?

GitHub (and Git) refer to their repo location as “origin”.  This becomes handy when you start pushing between remote repositories, and from remote to remote to GitHub, etc.  It makes sense to be able to refer to GitHub functionally as well as by its assigned domain name.  By saying “origin”, we’re making GitHub the de-facto standard center of everything we’re doing.
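Since “origin” is just a name for a remote, the whole flow can be sketched offline with a local bare repository standing in for GitHub.  Every path below is a throwaway temp directory, and the identity settings are demo values:

```shell
# A bare repository plays the role of GitHub ("origin").
work="$(mktemp -d)"
git init -q --bare "$work/origin.git"

# A working repository, just like the samplerepo above.
git init -q "$work/samplerepo"
cd "$work/samplerepo"
git config user.name "Demo"
git config user.email "demo@example.com"

touch README.md
git add README.md
git commit -q -m "first commit"
git branch -M master                     # ensure the branch is named master
git remote add origin "$work/origin.git"
git push -q -u origin master

git remote -v   # shows "origin" pointing at our stand-in server
```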

Next, we refer to “master”.  What is that?  Simply stated, we’re pushing to a “branch” called “master”.


Branching is a method by which you can have multiple code “branches” or “threads” in existence simultaneously, and Git is managing them all for you.  For instance, you may wish to have one code collection only for use in production systems while maintaining a separate one for development systems.  In fact, you can create a random branch with a bug name (bug1234, for instance), commit your changes to that, test it, and push it to origin, then pull it down to all your production hosts, solving a big problem in your site or codebase.  Better yet, if it all works great and you’re happy with it, you can “merge” that bug back into your main code repository, making it a permanent fixture in your code in whatever branch you like. (or even all of them!)
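That bug-branch flow can be shown in miniature with a local repository.  The bug number, file name, and contents below are invented for the demo:

```shell
# Throwaway repo with a baseline commit on master.
cd "$(mktemp -d)"
git init -q .
git config user.name "Demo"
git config user.email "demo@example.com"

echo 'stable code' > app.pp
git add app.pp
git commit -q -m 'baseline'
git branch -M master                # make sure the main branch is "master"

git checkout -q -b bug1234          # a branch just for the fix
echo 'stable code plus fix' > app.pp
git commit -q -a -m 'fix bug 1234'

git checkout -q master              # back to the main line...
git merge -q bug1234                # ...and merge the tested fix in
cat app.pp                          # prints: stable code plus fix
```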

When you first create your repo, GitHub makes a “main” branch for you automatically, and calls it “master”.  So, by utilizing the command above, we’re telling Git to push our code to our origin server (GitHub) and put it in the “master” branch.

While we’re on the topic, let’s create two more branches so we can get the full hang of this branching thing.  (Hint:  It’s core to how we integrate this into Puppet).

Makin' Branches

As I said before, GitHub creates a default “master” branch for you.  If, from your local repository location, you type “git branch”, Git will list a single branch for you:

git branch
* master

This tells us, simply, what branch we are currently in.  Now, let’s run two commands to create new branches to be tracked by Git.

git branch production
git branch development

Now.  Run “git branch” again:

git branch
  development
* master
  production

As you can see, your other branches are now visible when running the command.  If you have color, you may notice that the “master” branch is a different color than the others (based on your settings).  If you do not have color, the asterisk denotes what branch is active as well.

Checkout and Commit, Branch and Merge

We have our repository and we have our branches.  We have a single README.md in the current directory, and we are ready to roll committing code and pushing it into our repository.  Let’s perform a simple experiment to get the “hang” of how the branches work and how to switch between them as needed.  Since we’re in “master”, let’s edit our README.md to reflect that by placing a single word, “master”, in the file.  (Use vim, as discussed in our last tutorial.)

Once you’re done with your edit, you’ll see that the text is in the file.  You can edit it and you can cat the file and see the contents, but if you view the file up at GitHub, that content is not there yet.  Somehow, a mechanism must be used to put that data there.  There is such a process, and it is a two-part process.

Recall I mentioned that one of the features of Git is that you can have a complete repository local to your machine.  You can work on that repo and make all sorts of changes completely disconnected from your server (in our case GitHub… “origin” as it is named to Git).  Therefore, in reality you are dealing with not one, but two repositories: the local one on your machine and the remote one at origin.  (Remember the “git remote add origin” above?)

So, to finalize your changes locally, you must “commit” them to your local repository as “final”.  THEN, you can “push” those changes into your main server (in our case GitHub).   We did as much above with our procedure where we did the commit with a message, and then a push up to origin.  However, now that we’ve made changes locally, they are not yet reflected at GitHub.  Logic would dictate another commit is in order:

git commit
git commit -m 'Some message about your commit'

As you can see, there are two routes you can go.  If you simply supply “git commit” without any options, you will be brought into the text editor you (or your OS) have configured in the $EDITOR environment variable.  Most platforms use “vi” or “vim” for this, but I have also seen “pico” used in some distributions like Ubuntu Linux.  In any event, you can edit the file by placing your comments in.  After saving the content and exiting the editor, the commit will be complete.  If, however, you do not enter anything, Git will not commit the changes.  This is to enforce good coding practice by requiring some notes about what a committer is doing before making the changes.  It’s a highly recommended workflow to follow.

Once your commit is complete, phase 1 (the local commit) is over.  You can commit over and over, as many times as you like; you have a full, local repository.  In fact, I’d encourage many commits.  Commit when you think about it.  Commit before you walk away from your system.  Commit randomly for no reason in mid-workflow.  The more commits you have, the less likely you are to lose work.

Finally, to get the data up to GitHub, we need to “push” that data off your repository and into your “origin” repository. This is quite simple, and you’ve done it before:

git push -u origin master

Sometimes you may wish not to keep specifying the location you’re pushing to.  If so, you can set a default location for each branch.  Git will tell you just how to do that if you forget the “-u location branch” option.  Let’s say I’m in my master branch and I simply run a “git push”.  Git will tell me I did something wrong, but will also tell me how to eliminate that problem:

fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use

git push --set-upstream origin master

“fatal” seems a little melodramatic since Git gives you the answer as to what to do right there.  All you need to do is set the default target once with that last line, and from that point forward, you only need type “git push” when pushing to GitHub.  Hint:  I do this in ALL my branches at create time.  It saves a lot of typing over time, and like any good Sysadmin, I’m lazy.  :)
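That create-time habit can be sketched end-to-end.  The following is a self-contained demo in a throwaway directory: a local bare repository stands in for GitHub’s “origin”, so nothing here touches your real repo (it uses “git checkout” to hop between branches, which we’ll cover in a moment):

```shell
# Throwaway demo: a local bare repo plays the role of GitHub ("origin").
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git symbolic-ref HEAD refs/heads/master   # force the classic branch name
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git branch production
git branch development
# Set the upstream once per branch; afterwards a bare "git push" suffices.
for branch in master production development; do
  git checkout -q "$branch"
  git push -q --set-upstream origin "$branch"
done
git rev-parse --abbrev-ref master@{upstream}   # prints: origin/master
```

With the upstreams set, a plain “git push” from any of the three branches does the right thing from then on.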

So, now I’ve got multiple branches that need this setting, but I’m still stuck in “master”.  How do I get to “development” or “production” to perform the same tasks?

Git provides a “checkout” command.  What you’re saying with “checkout” is: “Git, I want to be working on branch ‘x’, and I want you to make that my current branch.  If there are any differences between that branch and the one I’m on, please make those changes on-disk for me so I can work exclusively in branch ‘x’.”  A little verbose, but you get the point.  So, to move to the next branch and do all the wonderful things we did in “master” above, we perform:

git checkout development
edit README to say different text
git commit -a -m 'editing README for development branch'
git push --set-upstream origin development
git push

If all has gone well, your README is now changed in the development branch and pushed up to GitHub.  What about “master”, though?  Well, let’s take a look:

git checkout master

If all has gone well, the contents of the README are back to what was in your “master” branch.  Check out “development” again, and it will change back to the new content there.  As a test, check out the “production” branch, change the file, commit it, set your upstream push target, and then push the contents to GitHub.
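If you want to convince yourself of this behavior without touching your real repository, the round trip can be reproduced in a scratch repo.  This is a hedged sketch; the file and branch names are just for illustration:

```shell
# Scratch repo: watch the on-disk README change as branches are checked out.
set -e
scratch=$(mktemp -d)
cd "$scratch"
git init -q
git symbolic-ref HEAD refs/heads/master
echo "master" > README
git add README
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'README on master'
git branch development
git checkout -q development
echo "development" > README
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -am 'README on development'
git checkout -q master
cat README   # prints: master
```

The working directory always reflects whichever branch is checked out; nothing is lost when you switch, it is simply stored in the other branch.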

Now you’re cooking with gas.


This is a simple tutorial to get you started with Git & GitHub.  There are MANY tutorials and books that can make you into a Git expert, but they are way outside the scope of this humble little blog.  Let me provide a few of those for you here:

Git Help
The Git Book (Pro Git)
GitHub Help

This documentation should be more than enough to get you moving and well underway with Git ins-and-outs for committing Puppet code and using r10k to interface with and distribute that code around your environment.

Why All the Vim?

| Comments

As we move on from the last post on Vim, you may ask yourself why I’m reaching all the way back to text editors.  Well, as you’ll see over time, this is all about workflow: building yourself a detailed workflow by which you can write code, syntax check, commit to revision control, deploy to your Puppet instances, and duplicate that workflow across all your environments.

The text editor itself, while important, is just a tiny part of a much larger picture I hope to cobble together over time.  So, let’s begin to push forward with our coverage of Vim.

Plugins and Syntax Highlighting

I went through the beginnings of Vim to give you a starting point and some basics in the event you have no experience in the Vim world.  One would assume that since you’re on a Puppet/Dev-Ops-y sort of page, all this is old news, but we do have completely “green” readers from time to time, and I didn’t want to leave them out.

The main goal in getting Vim into the picture was to bring you to the point where we start looking at our code and knowing what we’re dealing with at a glance.  As you begin to work in the field, whether coding in Perl, Python, Shell, or the Puppet DSL, there are conventions out there designed to help you and smooth your workflow.  Chief among these is syntax highlighting in code editors in general, and (for our purposes) in Vim specifically.

Take a look at this screen:


While the code is well formatted and everything seems ok, were there any issues in this document, you’d never know it.  From syntax issues to missing elements, none of this is automatically highlighted to you in any way.  Enter syntax highlighting…  Look again:


Much nicer, no?  Were there any missing elements, you’d see something amiss in the document: the colors would not be organized according to element type, and odd things would be displayed on the page.  Let me “break” the file for you…


You’ll notice that on line 4 something is amiss.  If you compare the two colored instances, you’ll see that your eye is drawn to where things begin to be different.  Best part for quick and easy glancing is that the entirety of the file after that one mistake now looks “wrong”.  Ease of view.

How Can This Help With Puppet?

As luck would have it, Vim has a plugin engine that allows you to have pre-built templates that syntax highlight code for you in a predetermined way.  It “recognizes” your code type and highlights accordingly.  The basic plugin structure for Vim lives in your home directory, in the “hidden” .vim directory.  Under this directory you can have a number of wide and varied add-ons to Vim.  We’re just going to talk about plugins.

By default, you don’t have anything in this directory.  You may have a .vimrc file and an empty .vim directory in your home directory location, but that’s about it.  The “magic” happens, though, when you add a few pieces.  Those would include a “.vimrc” file which will turn on syntax highlighting, and then a Puppet Vim plugin that sorts all the language elements and colorizes them for you.

In your home directory, if it doesn’t already exist, create a .vimrc file with a single entry:

syntax on
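If you want a little more than the bare minimum, a slightly fuller ~/.vimrc might look like the sketch below.  Only “syntax on” is required for this tutorial; the rest are common quality-of-life settings I’m adding as suggestions (the two-space indent matches Puppet style conventions):

```vim
syntax on                   " enable syntax highlighting (the required line)
filetype plugin indent on   " load filetype-specific plugins and indent rules
set number                  " show line numbers in the left margin
set expandtab               " insert spaces instead of literal tabs
set shiftwidth=2            " two-space indents, per Puppet style
set softtabstop=2           " backspace removes a full indent level
```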

This will instruct Vim to syntax highlight and to be aware of any highlighting plugins that may live in the .vim plugin folder.  Next, place the Puppet syntax file (puppet.vim, commonly distributed via the rodjek/vim-puppet project) into your .vim plugin directory.  If it does not yet exist, create it; it lives at ~/.vim/plugin.  Once you load up new Puppet manifests, Vim will recognize the file type and begin to highlight the code according to the convention defined in the puppet.vim file you just downloaded.  If you have any issues, consult the Vim plugin reference.

Now you’re ready to work with Puppet files like a pro!


| Comments

Old School

Why on earth am I starting with Vim?  (or “vi” for you old folks)

Vim is the modern “vi” implementation.  A full-screen text editor with a myriad of options and abilities far beyond anything I could ever cover here.  But Vim has one thing going for it that no other text editor has.  One simple fact about it puts it in the category of cameras.  You know the old saying?

“The best camera is the one you have with you.”

Thus it is with vi/vim.  It’s literally everywhere.

Every UNIX OS, commercial or not, streamlined or not, old or new, has vi or vim installed on it.  Emacs is a great product, but it just isn’t installed by default everywhere.  Regardless of the editor you refer to, ${INSERT_EDITOR_HERE} just isn’t as ubiquitous as Vi/Vim.

Since we’re talking about a modern pursuit and workflow (DEVOPS), we’ll be talking mostly about Vim’s capabilities and features.

What Vim is NOT

Vim is not a word processor.  You won’t be writing business letters with it.  Vim is not for writing resumes, making pretty newsletters, or for typesetting a magazine.  The die-hard Vi/Vim fan will tell you that you can do all of the above with it, but that falls into the same category as filling in the Grand Canyon with a teaspoon. You can do it, but why on earth would you want to?

What Vim Does Best

Vim edits text.  Plain…text.  Not pretty bold, italicized, with all sorts of alignment characters and strange paragraphs and pagination doohickeys… no, Vim just makes text files.  Text files that are SO devoid of bells and whistles, in fact, that when you open a Vim created file in your favorite WYSIWYG editor, you’ll see what appears to be a pile of letters and such all crammed together like you had nothing else on your keyboard but letters, numbers, and punctuation.

Vim allows you to eliminate all the cruft and get right down to the matter of creating plain, unencumbered text files.

Why Does it Matter?

When you’re logged onto your favorite Linux through two bastion hosts across several continents and have latency to boot, WordPad will not help.  You’ll need a lightweight text editor within which you can load, edit, and save the single most numerous type of item on a UNIX system… a text file.

Where Can I learn More About Vim?

Vim’s main project page can be found online.  The main page has links to documentation and various community links, as well as connections to various types of plugins and add-ons you can use with Vim for any number of tasks.  You can join online forums, mailing lists, and communities whose entire purpose is the extension and promotion of Vim.  But that’s not what we’re up to…

How We Will Use Vim

For our purposes, we will use Vim as a code editor.  No more, no less.  Vim’s abilities can help us see our code in ways that let us know when there’s an issue and can point us generally in the direction of our solutions.  So, let’s dive into a minor Vim tutorial.  (If you’re an advanced Vim user, stick with me…)

All Linux distributions and Mac OSX come with Vim pre-installed and ready for action.  Note that some distributions of Linux will have both Vim and Vi.  You will either need to get into the habit of running “vim” from the command line, or setup your shell aliases to load Vim every time you type “vi” instead.

When you launch Vim, you see a screen much like the following.  I use Mac OSX, but the effect is the same, regardless of platform.


I have several features turned on (including the line across the bottom that provides me a lot of information about the file I’m editing), but the main things we will talk about here are syntax highlighting and plugins after a short tutorial on how Vim works.

I Can’t DO Anything!!!

Most people’s frustrations begin right on this page.  From here, nothing seems familiar.  I can’t pull down a menu and I can’t really even choose “exit” from a list of things to do.  The only real hint I have is on the screen above:

type  :q               to exit

…which is quite a peculiar directive.  Why do I have to type a colon?  What does it mean?  And goodness help you if you already managed to type something into the screen.  The instructions you see above disappear, and without the right collection of keystrokes, you’re not getting out of Vim.  You’ll most likely just close the window and start looking for “notepad”.

Here’s where the tutorial starts.

Vim is what is known as a full-screen text editor.  It started back in the Amiga days, and was first released publicly in 1991.  Before Vi/Vim, to create text files there were line-based text editors that only allowed you to see one line of a file at a time.  So you really couldn’t work with huge files: it would be difficult to compare lines of a file to each other, look at the whole flow of a subroutine you had written, or even match wording or syntax from one section to another.

Enter the full-screen editor.

What the full-screen editor did was open a file on disk, reserving space in memory to hold the entire file, and then display to you a “window” into your file equal to your terminal’s display size.  For instance, I have a terminal right now that has a several-thousand-line file open.  Of that file, I can only see 25 lines of 80 columns each.  This “window” onto my file is something I personally configured in my terminal program (in my case, “iTerm”).  I can scroll up and down this file, sliding forward or backward within the file from the beginning to the end (just as you might an MS Word document), and can interact with/edit any character I can see on the screen.  (We’ll talk about search and replace and other such things later).

We are currently in what is known as “command mode”.  Many would ask why you don’t just call this “view” mode, and the reasons are very simple.  From this screen, you issue MANY commands to Vim and tell it how you want it to behave for you.  For our purposes, we will use “command” mode and “insert” mode mostly.

Well, How Do I Edit Something?

To edit your file, you enter into a mode known as “insert” mode.  From command mode there are several different modes you can enter, but “insert” mode is the easiest.  You simply type a single lowercase “i” to enter this mode.  When you do so, a cursor appears on the top line of the page you are viewing, and you are now able to type all you like.  Letters, keys, tabs… all normal typing idioms are available to you from this point.  How, you may ask, do you then save your work to disk?  I’m in this “insert” mode and don’t know how to save!

Think about what you wish to accomplish… you wish to issue the command “save” to Vim.  Command…  as in… “command mode”, perhaps?  Well, we have to go back into command mode, then, so we can issue some commands to Vim and exit the program.

Any time you are in Vim, your “saving grace” is your [esc] key.  Two taps on the escape key always brings you back to command mode from anywhere.  Go ahead and try it.

You’ll now notice you have returned to “command mode” just as you were when you first opened Vim.  The only difference is that everything you typed is now on your screen and has not gone away.  Your “edit buffer” is full of a file you now can do things with.  You can save it, delete it, save it out as a specific file name… a myriad of normal file operations you may be used to from other software packages.

In our case, we want to just save the file.  However, when we opened Vim, we didn’t specify a file name to edit, we just opened Vim.  So, for all intents and purposes, we have an open buffer full of “stuff” and no file name to associate with it.  What we want to do now is to “write” the file to disk.  To do so, we have to issue commands to Vim in Command mode.

Command Mode

To issue commands to Vim, we have to tell it we are issuing it a command, otherwise you may hit a letter that means something else.  Recall that simply by hitting a lowercase “i” we placed Vim into insert mode, and then by hitting the [esc] key twice, we left it.  Clearly, there’s more to this editor than we can readily see, so how do we save the file?

When you have a buffer with text you would like to save, you have to first hit a colon “:”.  You’ll notice that Vim places the colon on the bottom line of your screen to the left, awaiting a command.  While there are several commands we can perform here (as well as joining multiple commands together), we will simply write the file right now.  To do so, while at the “colon prompt”, we simply type:

**:w foo.txt**

and press the “enter” key.  You will receive a message on the last line that lets you know the file has been written to disk:

**"foo.txt" [New] 1L, 16C written**

But I’m still in Vim.  What do I do now?

Just like “w” is a command, exiting the program also is a command.  Guessably so, it is the letter “q” for “quit”.  So, as before, you hit the colon key, then the letter q, and then the enter key.  If all goes well, you’ll be back at the command line.
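For quick reference, here are a few more command-mode operations worth memorizing; all of these are standard Vi/Vim, shown with short notes:

```
i            enter insert mode at the cursor
[esc][esc]   return to command mode from anywhere
:w           write (save) the current buffer
:w foo.txt   write the buffer to the file foo.txt
:q           quit (refuses if there are unsaved changes)
:q!          quit and discard unsaved changes
:wq          write, then quit, in one command
```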

Much More to This

Were I to do this for all the features of Vim, I’d be writing a book.  However, fortunately for you, there are several tutorials and cheat sheets on Vi/Vim all over the Internet.  Here are a few of my favorites.

The Main Vim Tutorial
Vim Tutorial

As well as some cheat sheets for you to refer to for quick reference on the various commands available when using Vim:

One of My Favorites
Another Good Cheat Sheet

Take some time learning the basics of Vim before pressing on to the next article: “Customizing Vim”.

New Tools and Old Schools

| Comments

New Tools

Once you’re treading water in this whole DEVOPS thing, a lot of terms get thrown around and a lot of “newest bestest” gets offered up as the cure for everything including the kitchen sink… oh, and cancer.  However, I think when the hubbub is the loudest, that’s when I really take a step back and ask myself what we’re trying to do, and what’s the simplest, most repeatable and safest way to make it happen.

As with any sufficiently new technology, there’s a learning curve that accompanies such a shift, and new tools start to speckle your horizon such that you wonder where to start, what is imperative, and what is optional for you to occupy your time with.

Thus it is with Puppet-y stuff.

Given that the whole DEVOPS thing includes within it a heavy lean toward DEV, the regular rank-and-file sysadmin may find himself thrust into a world of development terms he may have only heard in passing in some random meeting or other.  The tools of the developer are as varied and arrayed as those for the operations guru and every bit as arcane (in some cases) as the esoteric shell command only installable from some guy in Russia’s repo, with 30 or more command line switches to get a single piece of data upon which to work your evil schemes.

Well… that’s why this blog.

I’m about to embark upon a coverage of a set of tools.  However, this set of tools I will not observe in a vacuum, but in light of a powerful workflow engine with which to empower you to become considerably more efficient in writing Puppet code.  The tools in particular I’d like to go over and, much in the same way I covered Puppet Open Source, instruct you step-by-step on how to install, configure, and use said tools are as follows:

Vagrant
r10k
GitHub
Vim (yes, Vim)
puppet-lint
Geppetto

This is by no means an exhaustive list, but what it does is collect together the best tools to assemble a workflow that will speed your work and not leave you spending all your time working on tools, but on Puppet code, which is our main focus and goal.

Old Schools

At the end of the day, the really important things are conceptual.  Whether you use a new whizbang tool to do the heavy lifting, several tools working together to achieve this goal, or you heavy lift all on your own, the process and the rules remain the same.

DO keep all your code in revision control.
DO syntax highlight and check your code validity.
DO build repeatable, consistent environments in a timely fashion.
DO have a method to share your work environments with others.
DO document heavily, both in and out of code segments.
DO have a solid end-to-end workflow that enables rapid iteration.

What of this is new?  Whether we’re talking about keeping all your shell scripts in CVS, deploying your script repo with SVN, or deploying versioned code caches with ${PACKAGE_MANAGEMENT_SYSTEM}, we’re talking about the same general good practices.  All Puppet and its supporting tools do is make this tightly integrated with management consoles.  Organizing and institutionalizing your workflow gives a steady underpinning to the cool DEVOPS tools, making everything repeatable, sharable, and collaborative.  This is where the power comes from.  THIS is where DEVOPS makes sense.

When your workflow is solid, your DEVOPS tools are strong, and your culture has bought in to both, mundane work becomes an afterthought and you get to work on the really interesting things that you’ve been letting slide while putting out fires that could’ve been best managed via configuration management anyhow.

My hope is to build the concept and context from the ground up to show you one tight, functional way this can be accomplished.  Let’s start next time with Vim.

Workflows, Tools, and a Myriad of Gobbledygook...

| Comments

Ok, so first let’s cover the gobbledygook.

I’ve had a lot of feedback on various parts of the blog here, and I thought I’d address a few of them here.

Q:  Why didn’t you just point at the appropriate Yum repos for installing Puppet?

A:  Easy.  I can do “yum install foo” and never know what’s going on behind the scenes just like anyone.  My goal here was to give a point-by-point installation guide so those who are interested could know what all the moving parts were and how they fit together.

Q:  Why are you using the dashboard and not Foreman?

A:  Also easy… at least at the time of this writing, The Foreman has been indicated as the next front end to become the “de facto” standard, but as of right now, the Enterprise Console is essentially a turbocharged version of the Dashboard.  As such, when I begin to talk about extending the Enterprise Console, the Dashboard is the analogue by which you can most closely obtain the same experience (short of installing Puppet Enterprise itself).

Q:  Why OSS and not PE?

A:  Well, to be frank… PE is about as simple as you can get.  You can learn a bit about which portions to install and how to connect them all together across machines (something I plan to cover), and you can learn about constructing an answer file (or several, for a large, complex installation), but the vast majority of your PE installations will be a Q&A interaction with the installer.

Workflows and Tools

One of the biggest changes in how I deal with Puppet has been my adoption and implementation of workflows, as well as my use of some modern tools the Puppet Labs folks have made me aware of.  Among these are Vagrant, r10k, GitHub, and many others that work together, are tightly integrated, and require a lot of configuration and setup to “make happen”.  I intend to cover those here.

So, what are all these things?


Vagrant is a tool that allows you to pre-configure a small virtualized environment on your host consisting of a Puppet Master and any number of agents for use in a dynamic, iterative fashion.  By bringing up a Vagrant environment, I have a miniature development environment from which I can actually test Puppet code I write in a full PE environment and work out kinks you don’t normally encounter when working independently on a single workstation.  From this environment, I can puppet-lint check all my code and then push all that code out of the Vagrant environment up to GitHub, then pull it down to my production instance as needed.  Further, it allows me to simulate all environments I have in my corporate setup (DEV, PROD, TEST, etc.) and commit those differences to the appropriate branches in GitHub.


The method by which I iterate, push/pull code, and deploy to various environments, both in my Vagrant instance and at my production site, is r10k.  A leap forward past “librarian-puppet”, r10k is the glue between GitHub and Vagrant, and between your GitHub instance and your corporate site.  It can tie to public GitHub as well as a corporate (private) GitHub.
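To make that concrete, r10k of this era is driven by a small YAML file, conventionally /etc/r10k.yaml, that maps a Git control repository onto a directory of Puppet environments (one environment per branch).  The sketch below is illustrative only; the repository URL and paths are made-up placeholders, so check the r10k documentation for the authoritative format:

```yaml
# Illustrative r10k.yaml sketch: each branch of the control repo
# becomes a Puppet environment directory under basedir.
:cachedir: '/var/cache/r10k'
:sources:
  :main:
    remote: 'git@github.com:example/puppet-control.git'
    basedir: '/etc/puppet/environments'
```

With this in place, “r10k deploy environment” pulls each branch down into its own directory under basedir, which is how the master/production/development branches from the Git tutorial become Puppet environments.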


GitHub is considerably more well known, but it’s important to the whole process, as you can read above.  GitHub is a hosted revision control service designed for rapid deployment, iteration, and isolated developer work with periodic pushes back to GitHub (distributed development).  I intend to do a simple tutorial on GitHub as well.

Look forward to piece-by-piece coverage of each of these and my thoughts as I prepare for and take the Puppet Labs Certification test.

Puppet V - Configuration and Scaling

| Comments


The name of this portion of our tutorial was difficult to determine.  This is another set of configurations, but we will also be scaling Puppet to handle production quality traffic, be an external node classifier (ENC), have a backend database, employ an enterprise class web server, turn up a console…  there’s a lot to do.  We are indeed configuring the backend, but also scaling Puppet to handle your environment…hence the name.

This portion assumes you’ve followed all previous tutorials from I-IV, have your certs signed, and are complete and ready to go with Puppet “as-is”; you simply have not installed any of the following add-ons.  As mentioned last time, you could begin to write manifests and modules right now, using Puppet “as-is”, never utilizing any of the other features.  However, the “out-of-the-box” configuration of Puppet is not ready for enterprise use.  Perfect for a small development environment of perhaps up to 25 hosts or so, the Puppet server as installed includes a small WEBrick server (Ruby-based) and is not intended to handle large site traffic profiles.

To make Puppet “enterprise-ready”, we need to do a few things.

  1. Install the Puppet Dashboard
  2. Install the Puppet DB
  3. Install passenger + Apache modules
  4. Install MySQL

The dashboard gives you a GUI configuration mechanism as well as an external node classifier.  PuppetDB is a centralized config storage mechanism for all node facts and configurations.  Passenger+Apache is the piece that replaces Puppet’s WEBrick server, and MySQL holds the database for the dashboard.

This will be the longest portion of the tutorial series, and all the separate & individual pieces will be interdependent, requiring us to do all the work first with configuration testing at the end.  Let’s get started.


Early in tutorial I, I had you install a number of packages.  Had you stopped after installment IV, you’d have had no need for a few of them, but I wanted all the packages to be on your system as prerequisites to avoid later installation headaches.  However, we do want to get the EPEL package set onto the system to add one prerequisite (and it’s a darned fine repo to have, should you need it for other things)

To install EPEL, run the following command:

sudo rpm -ivh <URL-to-the-EPEL-release-RPM>

The “Extra packages for Enterprise Linux” (EPEL) set is an important addition to any server set.  Especially to install the package prerequisites we need.

Installing Packages

Passenger: First, we will install Passenger.  The Passenger packages are in their own repository, not hosted at Puppet Labs.  Start by importing the GPG key:

sudo rpm --import <URL-to-the-Passenger-GPG-key>

And then install the passenger release repo:

sudo yum -y install <URL-to-the-Passenger-release-repo-RPM>

Finally, install the Passenger Apache module to tie everything together:

sudo yum -y install mod_passenger

Congratulations.  The groundwork for Passenger is now installed.

Dashboard: Since we have done so much preparatory work, the dashboard install is quite simple:

sudo yum -y install puppet-dashboard


PuppetDB: PuppetDB is installed a little differently, using Puppet itself to fetch and install the package:

sudo puppet resource package puppetdb ensure=latest

This procedure takes a bit of time, but when complete, the PuppetDB is now installed.

MySQL: MySQL is installed from the usual yum repos, but we will also turn it on and ready it for use, as well as create our users and remove unneeded, unsecured accounts from the system.

sudo yum -y install mysql-server
sudo /sbin/chkconfig mysqld on
sudo /sbin/service mysqld start

Let’s set the database root user’s password:

/usr/bin/mysqladmin -u root password '<new password>'
/usr/bin/mysqladmin -u root -h <FQDN> password '<new password>'
mysql -u root -p

A few words here, for those of you unfamiliar with MySQL.  We are setting the root user’s password to be able to administer the database.  The simplest way to set up this initial security is using the “mysqladmin” tool provided by MySQL.  Note that when I use <> in the commands above, this is where your site-specific information comes into play: replace <FQDN> with your server’s fully qualified domain name and <new password> with the root password you want.  With a password of “puppet”, the commands would look like so:

/usr/bin/mysqladmin -u root password 'puppet'
/usr/bin/mysqladmin -u root -h <FQDN> password 'puppet'
/usr/bin/mysql -u root -p

I just wanted to clarify this for you in the event my use of <> and ‘ ’ above caused any confusion.

Configuring MySQL

Once you run the above commands, MySQL will prompt you for the password you just set.  Enter that password, and you will find yourself at a mysql prompt that looks like so:


What this means is you have now logged into the MySQL database and are ready to set it up for use.  Following, I will list out all the commands you need to run as a set.  Note that these commands are each entered on a line, and you press [Enter] at the end of each line to enter the next command.  There is no output from MySQL when you enter these, so I’ll enumerate them all together here for your convenience.

mysql> create database dashboard character set utf8;
mysql> create user 'dashboard'@'localhost' identified by 'my_password';
mysql> create user 'dashboard'@'<FQDN>' identified by 'my_password';
mysql> grant all privileges on dashboard.* to 'dashboard'@'localhost';
mysql> grant all privileges on dashboard.* to 'dashboard'@'<FQDN>';
mysql> drop user ''@'localhost';
mysql> drop user ''@'<FQDN>';
mysql> drop database test;
mysql> flush privileges;
mysql> exit

As before, replace <FQDN> with the fully qualified hostname of your server and ‘my_password’ with the password you wish to set for the dashboard user.  A few notes:

  1. First, we created the dashboard database
  2. Next, we created the dashboard user for connecting from the localhost name
  3. Next, we created the dashboard user for connecting from the server FQDN
  4. The next two, we grant the dashboard user rights to the whole database from either location
  5. The following two lines delete the user ‘’ from the server (a null user w/o a password)
  6. Finally we drop the “test” database, flush all our privilege tables (to take effect immediately) and exit MySQL.
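It doesn’t hurt to verify your work before moving on.  Log back in with “mysql -u root -p” and run a couple of read-only checks; these are standard MySQL statements, though your output will vary with your hostname:

```sql
-- Confirm the dashboard database exists
show databases;

-- Confirm the grants for the dashboard user took effect
show grants for 'dashboard'@'localhost';
```

You should see “dashboard” in the database list and the GRANT statements you issued above echoed back for the dashboard user.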

The final steps in getting MySQL configured for production use is to tweak the settings in the database by editing the /etc/my.cnf file and restarting the database.  Open the /etc/my.cnf file and add a new line at the end of the file:

max_allowed_packet = 32M

Save the file and then run

sudo /sbin/service mysqld restart

for the changes to take effect.

Passenger The final piece is to get the appropriate passenger gems and the Apache module installed to handle Puppet Agent requests to the server.  Luckily, our previous prerequisite installs have made this easy for us.  First:

sudo gem install rack passenger

When this is done, install the Apache module, following the prompts as follows:

****sudo passenger-install-apache2-module Press Press At the end of the installation, Press

If we’ve done everything right up to this point, you should not need to supply any extra information, packages, or configuration; just continue to press Enter as prompted to complete the installation.
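When the installer finishes, it prints an Apache config snippet that the passenger.conf discussed later is built from.  It looks roughly like this (the paths follow the passenger-4.0.37 / ruby 1.8 layout referenced elsewhere in this tutorial; your exact version and paths may differ):

```apache
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-4.0.37/buildout/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-4.0.37
PassengerDefaultRuby /usr/bin/ruby
```

Keep a copy of what your installer actually prints; that output is authoritative for your system.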

Now comes the time to configure the various pieces…


Passenger

First, we need a number of directories and files to exist around the system, so let’s put those in place using the following commands:

sudo mkdir -p /usr/share/puppet/rack/puppetmasterd
sudo mkdir /usr/share/puppet/rack/puppetmasterd/public
sudo mkdir /usr/share/puppet/rack/puppetmasterd/tmp
sudo cp /usr/share/puppet/ext/rack/config.ru /usr/share/puppet/rack/puppetmasterd/
sudo chown puppet:puppet /usr/share/puppet/rack/puppetmasterd/config.ru
sudo chown puppet-dashboard:puppet-dashboard /usr/lib/ruby/gems/1.8/gems/passenger-4.0.37/buildout/agents/PassengerWatchdog

Next, we need to configure the Puppet Dashboard to connect to its database, and setup the tablespaces for use:

cd /usr/share/puppet-dashboard/config

Open database.yml in your editor.  Remove the last stanza of this file, which refers to the “test” database we removed above.  In both the production and development stanzas, change the “database:” line to read “dashboard” and the “password:” line to contain your dashboard password, so that they appear as follows:

database: dashboard
username: dashboard
password: my_password
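After saving the file, a quick grep can confirm both remaining stanzas now point at the dashboard database (a simple sanity check of my own devising, not a substitute for valid YAML):

```shell
# Count "database: dashboard" lines in a database.yml; after the edit
# we expect 2 (one each for production and development).
count_dashboard_stanzas() {
  grep -c 'database: dashboard' "$1"
}

# e.g.: count_dashboard_stanzas /usr/share/puppet-dashboard/config/database.yml
```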

Next, prepare the database for use as follows:

cd /usr/share/puppet-dashboard
sudo rake gems:refresh_specs
rake RAILS_ENV=production db:migrate

(Even though we’ve referenced both the production and development databases in the database config above, we’ll only be working in the production database in this tutorial)

At this point, we should be ready to test the Dashboard configuration to ensure we’re still on the right track.  To do so, run the following:

cd /usr/share/puppet-dashboard
sudo ./script/server -e production

Now, attempt to connect to the dashboard via web browser by pulling up the server at the following address: http://<FQDN>:3000.  If the dashboard displays correctly in your browser, we’re ready to continue.

Press CTRL-C to exit the server.
Configure Puppet for Dashboard

While we have already configured the dashboard itself, we have not told Puppet the dashboard exists.  To do so, edit the /etc/puppet/puppet.conf file and add the following.

In the [master] section of the puppet.conf, add the following lines:

reports = store,http
reporturl = http://<FQDN>:3000/reports/upload

Node Classification (Using Dashboard as an ENC)

node_terminus = exec
external_nodes = /usr/bin/env PUPPET_DASHBOARD_URL=http://localhost:3000 /usr/share/puppet-dashboard/bin/external_node
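For reference, an ENC such as the Dashboard’s external_node script replies to Puppet with YAML on stdout describing the node being classified.  A minimal example of the format (the class and parameter names here are made up for illustration):

```yaml
---
classes:
  - ntp
parameters:
  ntp_server: pool.ntp.org
environment: production
```

If the script exits non-zero, Puppet treats the node as unknown, which is why the Dashboard workers must be running before agents check in.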

Exit the puppet.conf file, saving your changes, and then set ownership on the Dashboard files and enable the worker processes like so:

sudo chown -R puppet-dashboard:puppet-dashboard /usr/share/puppet-dashboard
sudo /sbin/chkconfig puppet-dashboard-workers on
sudo /sbin/service puppet-dashboard-workers start


Next, we need to configure the Apache web server to process requests made by Puppet agents in your environment and hand them off to the Puppet server.  To do so, we need to create two files in /etc/httpd/conf.d.  The Passenger installation will have already created a passenger.conf there; just remove it before creating the following two files.


[snippet id=“30”]

And alongside it:


[snippet id=“31”]

Starting Everything Up for Testing & Operation

Once these configuration files are in place, it’s time to test Apache’s handoff to Puppet and to make a special SELinux module to allow Passenger handoffs to the various needed places in the filesystem.

First, make sure the puppetmaster process has been stopped:

sudo /sbin/service puppetmaster stop
sudo /sbin/chkconfig puppetmaster off

This assumes you’ve run the procedures in the previous tutorials, including (especially) the certificate signing and exchange between master and agent.  If you’ve done this, Passenger now has all the certs it needs to handle requests on behalf of Puppet, and no longer needs the Puppet server running.

Next, test the configuration to be sure you’ve made no typos:

sudo /sbin/service httpd configtest

If no errors are displayed, then at least the syntax of your Apache configs is correct.  Now, to generate SELinux entries in the audit log to build a custom Passenger SELinux module, you need to start Apache:

sudo /sbin/service httpd start

Turn off SELinux temporarily:

sudo setenforce 0

Restart Apache to generate the log entries:

sudo /sbin/service httpd restart

Test the Puppet Dashboard in a browser as before, by pulling up the dashboard address.

What you have done is put output into the audit log that can be used as input to the audit2allow tool to generate a policy file for import into SELinux, allowing Passenger to do its job.  To create that module, run:

grep httpd /var/log/audit/audit.log | audit2allow -M passenger

This creates a new policy file for SELinux in your current working directory called “passenger.pp”, which you can now import into SELinux.  To do so, simply load the module:

sudo semodule -i passenger.pp

…from the directory where the file resides (presumably your current working directory if you have not moved).
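audit2allow also writes a human-readable passenger.te next to passenger.pp, and it’s worth skimming before loading.  The contents depend entirely on what your audit log captured, but the shape is something like this (an illustrative sketch with made-up rules, not your actual policy):

```
module passenger 1.0;

require {
        type httpd_t;
        type puppet_var_lib_t;
        class dir { read search };
}

#============= httpd_t ==============
allow httpd_t puppet_var_lib_t:dir { read search };
```

If a rule looks broader than Passenger should need, prune it from the .te file and rebuild the module rather than loading it blindly.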

Finally, re-enable SELinux and begin to test your environment:

sudo setenforce 1


I know that’s a lot for a single entry, but I wanted to make sure to get all the additional pieces installed and configured before starting to cover how each of the pieces works and interacts.  I thought of making each piece its own article, but in every scenario I considered, you ended up with a not-completely-configured (i.e., “broken”) system.  I opted instead to do the complete configuration.

At this point, however, you should be all set and ready to go with Dashboard/Passenger configured and being run by Apache.  You should have MySQL and PuppetDB installed and configured, handling their own individual tasks, and you should have Puppet Master and Agent installed (with one agent node) all configured to perform their duties as a Puppet Infrastructure.