Questy.org

Scaling Puppet Enterprise – Part IIIa – Additional Compilers

| comments

You should have completed a split install before beginning this section. You can find the Split Installation documentation at Puppet’s Website, or the first installment of this tutorial here. If you try and begin here, you might find yourself lost.

Note also that the “Additional Compilers” docs come in two parts: one to install the Load Balancer and one to install the compilers.

First, Some Philosophy

The Puppet Enterprise documentation circa PE 2015.3.2 had some “issues”. Let me preface that, though. Puppet Labs’ documentation is by far some of the most voluminous and in many respects most complete vendor documentation out there. I don’t mean to disparage their work AT ALL. When it comes to the fact they even have documentation at this level, they’re the “bee’s knees”.

However, I’ve always written documentation to fit the “grandma rule”. My grandmother was a little 4-foot-nothing Cajun woman with English as her second language. She never used the first computer, still had a rotary phone when she passed away, and remained suspicious of anything technical. She was, however, a voracious reader, keenly intelligent, and understood considerably more than you’d expect on first glance. She was also a stickler for punctuation, grammar, and the like. In short, if my grandma couldn’t read the documentation and follow a step-by-step process to install Puppet successfully, then it’s just too complex, poorly formatted, or unclear, and needs to be simplified.

This causes a problem, of course. There are technologists out there that would become annoyed at repetition, verbosity around “understood” things, and spelling out each and every step along the way… even painfully. However, I feel it is the only proper way to document something. My rules are simple.

  • Leave nothing to question
  • Be as verbose and clear as possible
  • Make sure everything is in order, step-by-step

By following this simple guideline, I feel I’m doing more of a service to the reader than if I presumed on their level of sophistication with Puppet, Linux/UNIX, Windows, research capability, Google-foo or whatever.

So let’s dive in, shall we?

HAProxy

It may seem counterintuitive, but now that we’ve done a split install, I want to next install the HAProxy instance we will use as a Load Balancer in front of the additional compilers. By installing this first, we can utilize Puppet to install HAProxy and manage it automatically rather than doing a lot of ad-hoc work.

Also, by doing the proxy first, the prerequisites are satisfied in their proper order: the Load Balancer exists before the additional compilers are configured (so its name can be included in the compilers’ dns_alt_names), and GitLab is in place hosting the control_repo before Code Manager is turned on and configured.

Hardware

In the initial hardware list, I included a node called “Compile Master”; refer back to that article for its specifications.

This node may seem like overkill, but disk and memory are cheap. If you are scaling at this level, it’s better to not have to reinstall your Load Balancer later. Keep in mind, you don’t have to use HAProxy and can use a corporate Load Balancer here, but its configuration is outside the scope of this tutorial.

Once you’ve provisioned the load balancer, ssh to the node as the root user, and use the “frictionless installer” to add your Puppet agent.

curl -k https://master.example.com:8140/packages/current/install.bash | bash

When the client is fully installed, retrieve the Enterprise Console in your browser, navigate to Nodes | Classification | Unsigned Certificates, and select “Accept All”. Finally, ssh to the instance as the root user and run puppet agent -t to finish the setup.
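If you prefer to stay on the command line rather than use the console for this step, a rough equivalent on the master (assuming the load balancer’s certname is compiler.example.com) would be:

puppet cert list
puppet cert sign compiler.example.com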

Configure the Load Balancer

At this point, the node is provisioned and you have a Puppet agent running on it, but you have not yet configured the HAProxy Load Balancer for use in the environment. The load balancer needs to be in place prior to adding compile masters to your existing split installation. The following instructions guide you through setting up the HAProxy load balancer.

  1. SSH to the Puppet Master as root. (master.example.com in our list)
  2. Install the HAProxy Forge Module on the master
puppet module install puppetlabs-haproxy



NOTE: Leave your root console open while performing steps 3-6.

  3. Retrieve the Enterprise Console in your browser
  4. Select Nodes | Classification
  5. Create a New Classification Group called “Load Balancer”
  6. Select the new group from the list and pin the node “compiler.example.com” into the new group.
  7. In your open SSH session to master.example.com, create the profiles module to hold the configuration for HAProxy
cd /etc/puppetlabs/code/environments/production/modules

mkdir -p profiles/manifests

cd profiles/manifests
  8. Once you have changed to the profiles/manifests directory, create the loadbalancer.pp manifest.
  9. Follow the documentation here to configure HAProxy. When complete, the loadbalancer.pp manifest should resemble the following, with IPs corrected for your particular instance:
# Load Balancer Profile
class profiles::loadbalancer {

  class { 'haproxy': }

  # Main Proxy Listener
  haproxy::listen { 'compiler.example.com':
    collect_exported => false,
    ipaddress        => $::ipaddress,
    ports            => '8140',
  }

  # First Load balanced Compile Master
  haproxy::balancermember { 'compiler1.example.com':
    listening_service => 'compiler.example.com',
    server_names      => 'compiler1.example.com',
    ipaddress         => '10.0.1.24',
    ports             => '8140',
    options           => 'check',
  }

  # Second Load Balanced Compile Master
  haproxy::balancermember { 'compiler2.example.com':
    listening_service => 'compiler.example.com',
    server_names      => 'compiler2.example.com',
    ipaddress         => '10.0.1.25',
    ports             => '8140',
    options           => 'check',
  }
}

Once you have created this profile, retrieve the Puppet Enterprise Console in your browser and navigate to Nodes | Classification | Load Balancer.

  1. Select the Classes tab.
  2. Click the “refresh” button so the console will pick up your new loadbalancer.pp profile to classify your node with.
  3. Under the “Add new Class” heading, select profiles::loadbalancer from the list that drops down.
  4. Click “Add Class”.
  5. Select “Commit 1 Change” at the bottom right of the page.
  6. SSH back into compiler.example.com and run puppet agent -t to configure the Load Balancer.
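Assuming the agent run completes cleanly, you can sanity-check the result on compiler.example.com before moving on. On an EL7 host, something like the following should show HAProxy running and listening on port 8140 (commands are illustrative, not from the original post):

systemctl status haproxy
ss -tlnp | grep 8140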

Your Load Balancer is now prepared to balance traffic to the two catalog compilers (compiler1.example.com and compiler2.example.com) listed in the above configuration.

Notes

I noted when putting together the loadbalancer.pp profile above that I had previously used some REALLY ODD IP addresses in the balancer config. Why? For the life of me I cannot recall.


In my original implementation I had set the ipaddress fields to those odd IP addresses. For guidance on how to fill in those fields, the documentation gives some hints:

ipaddresses: Optional. Specifies the IP address used to contact the balancermember service. Valid options: a string or an array. If you pass an array, it must contain the same number of elements as the array you pass to the server_names parameter. For each pair of entries in the ipaddresses and server_names arrays, Puppet creates server entries in haproxy.cfg targeting each port specified in the ports parameter. Default: the value of the $::ipaddress fact.

Since I was originally setting these up in Digital Ocean, I used the IP space 159.203.x.x, which belongs to Digital Ocean; I am guessing these were the hard IPs on the instances I stood up. Since the documentation above states these fields are optional, you have two options here: either leave those lines out of your config altogether, or manually set them to the IP address of the instance you’re using. Try each and use whichever works for you.

Conclusion

Your HAProxy Load Balancer is now complete and ready to take traffic to the additional catalog compiler nodes. In installment IV, we’ll begin to add in more components along the way to a fully developed large environment infrastructure (LEI) of Puppet Enterprise.

Scaling Puppet Enterprise – Part II – Installation

| comments

Installing Puppet Enterprise has been made remarkably easier as time has gone on. The efforts of Puppet Labs (I still can’t get used to simply ‘Puppet’) to make the installation as seamless and powerful as possible, with the simplest of interfaces, have been highly successful.

Many changes have occurred over time, including the move from answer files to a HOCON-formatted pe.conf file containing the various configuration elements you may need to stand up an instance. I somewhat preferred the simple nature of the original answer files, but I can see the sense in moving to HOCON going forward.

Obtain Puppet

Needless to say, you’re going to need the Puppet Enterprise package to install from. Unlike Puppet Community, which offers repo-based installations via package management, the entire installer is provided as a tarball and requires a little bit of UNIX-y know-how to get started, as the Puppet Enterprise Server is only installable on Linux.

When you navigate to the Puppet Download page, you may be required to sign up for a free account if you haven’t already. The opening download page is found here.

You will be presented with a launch page that contains a “Download” button. Click the button, and one of two things will happen. Either you will be directed to a “Thank You” page or a page to sign up for an account. As you can see, the “Thank You” page means you already have an account and are signed in whereas the signup page is self-explanatory. Sign up for an account, and retry the download link.

Once you’ve made it to the “Thank You” page, there are three tabs containing “Puppet Enterprise Masters”, “Puppet Enterprise Agents”, and “Puppet Enterprise Client Tools”. As of this writing, the only supported Puppet Master platforms are RedHat 6 & 7, Ubuntu 12.04, 14.04, and 16.04, as well as SLES 11 and 12.

If you had intentions of running the Puppet Master server on any other platform, here is where you realign your expectations. 🙂 I have heard that people have hacked the server to run on other platforms, but since we’re dealing with Puppet Enterprise, why would you break support and eliminate warranty? Pick one of the three and download the tarball for your appropriate platform.

NOTE: If You need legacy versions of PE, you can download those here.

Installation

For the purposes of this scenario, we will be installing the Puppet Infrastructure for fictional super-mega huge company “example.com”. I am going to trust you have worked out the DNS/Host file naming structure, and can resolve everything everywhere. If you cannot, don’t comment on the post, as I will make fun of you publicly… you deserve it.

My assumed setup, using the hostnames that appear throughout the rest of this article, will be:

  • master.example.com (Puppet Master)
  • puppetdb.example.com (PuppetDB, with PostgreSQL)
  • console.example.com (PE Console)

Automated

The Puppet Enterprise Installer is a GUI web-browser based installer. Puppet has gone through the process of giving you a nice frontend to your installation, and making it dead-easy to perform a monolithic as well as split installation. For our purposes, though, we will be doing a “split” installation.

Stand up the three nodes above (master, PuppetDB, and console) with the specifications from the first article in the series.

In my experience, I’ve found it much easier to exchange root keys between all three of the above nodes to allow the installer to do all it needs to do on each node. You can, however, decide to set the root password to something temporary to hand to the installer as well (and many people opt for this) and then return root’s password to your site default. In any event, all the machines should be able to resolve themselves and each other by name and root should be able to freely ssh between them either via shared keys (easiest) or password.

Transfer the package to the Puppet Master node:

scp -rp puppet-enterprise-2015.3.2-el-7-x86_64.tar.gz root@master.example.com:/root/

Once the package is on the destination machine, you should connect to the machine to work with the package on-box:

ssh root@master.example.com

which places you in the root user’s home directory where you copied the package.

Extract the Package

tar -zxvf puppet-enterprise-2015.3.2-el-7-x86_64.tar.gz
cd puppet-enterprise-2015.3.2-el-7-x86_64

Run the Installer

./puppet-enterprise-installer

You will receive a text prompt that states:

??Install packages and guided install [Y,n]

Simply press “Y” or the [Enter] key and the GUI portion of the installation will begin.

GUI Installer

Once you have started the Installation, the Puppet Enterprise Installer will perform some preparatory steps and then launch an installation interface on your master node on port 3000. To access this interface, you can bring it up in the web browser of your choice at:

https://master.example.com:3000

Navigate to this interface in your Internet browser. When you first arrive at the GUI installer, simply click the “Let’s Get Started” button. On the next page, The Puppet Enterprise Installer will present you with a GUI questionnaire to fill out regarding your environment. The following is that process in order by section.

Puppet Master Component

  1. Enter the name of your Master in the Puppet Master FQDN text box. (e.g. master.example.com)
  2. Enter all appropriate names for your master in the Puppet Master DNS aliases text box, and leave the “Application orchestration” check box at its default.
  3. Change no other selections under the remainder of the items for this section.

PuppetDB Component

  1. Enter the hostname of your PuppetDB Node in the PuppetDB Hostname text box. (e.g. puppetdb.example.com)
  2. Change no other selections under the remainder of the items for this section.

PE Console Component

  1. Enter the hostname of your Puppet Console in the Console Hostname text box. (e.g. console.example.com)
  2. Change no other selections under the remainder of the items for this section.

Database Support

No changes are needed in this section. Simply leave “Install PostgreSQL on the PuppetDB host for me” selected.

Console ‘admin’ User

Enter the password you would like to use for the Puppet Enterprise console once your installation is complete in the final text box.

Final Considerations

After completing the final section, click the “Submit” button, and the Puppet Enterprise Installer will present you with a confirmation page for you to review before commencing the installation based on the configuration elements you just provided to the installer.

If everything is to your satisfaction, click the “Continue” button, and the installation progress summary will continue to update you as to the progress of the installation. If you would like to see logging “as it happens”, you can click the “log view” button to see that in real time, and switch back to the summary view whenever you like.

After what is roughly 10-15 minutes of installation and configuration, the installer will have completed all its work, and you will be presented with a button at the bottom of the progress screen you have been viewing that says: “Start Using Puppet Enterprise”. Click that button, and the installer will redirect you to the PE Console login screen. Enter the admin credentials you created earlier, and you are ready to begin working with the console as needed.

Scaling Puppet Enterprise

| comments

In my former life as a consultant, I had to install all manner of configurations of Puppet for clients. Some were small and some were large, but none were VERY large. One of the big things I was finding back then was there just wasn’t a lot of publicly available information regarding doing a full install and scaling it large.

So, I took some “research time” on my own and started to build out the configurations according to Puppet Labs’ (at the time… now just “Puppet”) documentation. The problem I was having was that the docs wouldn’t ever lead me to a successful install following a chronological set of steps. I had to click into subpages, jump over to sub-sub configurations, and then jump back to the main docs to follow yet another trail down until I reached the end…lather, rinse, repeat.

Some Caveats..

First, this is probably no longer a good “HOWTO” unless you’re installing an older Puppet Enterprise. It was created between 2015.2 and 2016.x, and likely has some amount of artifacting related to those versions.

Second, I’m going by docs I’ve recorded for my own use. I wrote these as mentioned above through prototyping, tearing it down, starting again, and literally doing the entire install over and over until it worked “as advertised”. A lot of this was really just ordering things the right way, and finding documentation for various pieces online at Puppet’s documentation site as well as blogs, conversations, and plain old trial and error. I certainly can’t warrant anything to anybody for any reason. As with most open source/creative commons assets, “it works for me, hope it works for you, and if it doesn’t, sorry about that.”

Finally, I hope to use this as the springboard to start brain-dumping all my old notes, conversations, ideas, and other prototyping I did in my home lab. There’s still a fair amount of documentation I cannot use or touch because they belong to my former employer or Puppet Labs, so some things may be less than clear and usually because I’m dancing around an NDA, noncompete, or just plain being a nice guy. If I inadvertently reveal something I shouldn’t, chances are it could disappear without a trace, but I’ll still make a note that I removed something, and try and replace whatever it is with published docs.

In short: I want to help the community, but I’m walking a tightrope here, so please be kind.

Format

I hope to start easy with a decision making process for installing Puppet, how to choose a method, think about scale, and will likely have quite an opinionated view at times. Once PE is installed, we’ll add compilers, scale postgres, etc. but for starters, I hope to just have the following:

  • PE Master (MoM)
  • Puppet DB
  • PE Console
  • HA Proxy Node for Compilers
  • Two Catalog Compilers
  • One ActiveMQ Hub
  • Two ActiveMQ Spokes
  • Two Agent Nodes for testing

I know that’s quite a number of nodes to get started with, but this is, after all, a large environment infrastructure, and we want to scale big.

Required Nodes

To put together all the required components for a good large installation, I’ve settled on the below specs. You can change those as you see fit, but note that some of the disk space requirements and related were due to Puppet’s documented requirements at the time. YMMV, of course, but this is what I consider to be a base level installation if you intend on scaling into the multiple tens of thousands of nodes. Be sure that if you’re going to size this down that you’re still meeting Puppet’s needs in regards to memory, cores, and disk. (for a current listing of Puppet’s requirements, you can look here for more information.)

In addition, you’ll need to be aware of firewall requirements for such an installation. Puppet has documentation regarding firewall configurations and needed ports at their website here; detailed charts and a “point-by-point” port and use list are available there to review.

In short, I’ve found it easiest to have all PE components on the same VLAN with no restrictions between them. If you are going to have a local firewall turned up on each node, you’ll need to manage all the above communications as you see fit, but for the serving infrastructure (if in a secure environment, of course) you can likely drop host firewalls in favor of corporate ones. In short, make it as easy on yourself as you see fit while balancing that toward your corporate security policy.

Finally, make sure DNS and NTP are all ready to go. I can’t tell you the number of times I’ve had major issues trying to get all this working, and NTP was off, or DNS didn’t propagate as expected (it’s always DNS, right?) or some other similar seemingly unrelated piece was not restarted or some such. Just make sure that all nodes resolve to their respective FQDN from all nodes. Obviously, the easiest way to do this is to simply put them all in DNS. You can manage the host files manually, but why would you want to do that?
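A quick sanity check along those lines might look like the following, using the hostnames assumed throughout this series (adjust for your own; ntpq requires ntpd to be installed):

# Verify every PE node resolves from this host
for h in master.example.com puppetdb.example.com console.example.com; do
  getent hosts "$h" || echo "DNS lookup failed for $h"
done

# Verify the clock is actually synchronized
ntpq -p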

If you’re at this point and all ready to go, look to the next entry to get started.

PuppetConf 2015

| comments

PuppetConf Portland

Ahh, Portland! What a great place to have PuppetConf this year. The home of Puppet Labs, with all its varied food, drink, and other unnamed consumables, Portland has a vibe like no other.

From the hipster eateries to the burger dives around town, Portland offered something for everyone.

My week began by arriving a tad early for the Puppet Certified Consultant training day. On the way in, I passed this little beauty right here:

I had forgotten just how beautiful the Pacific Northwest can be.

In our meetings before the conference began in earnest, we talked about things announced and things as yet unannounced, and essentially just learned how to be better consultants and puppeteers. It was nice to be able to ask the questions that arise from time to time of the “big boys” (Gary Larizza, Zak Smith, etc.) and get first-hand accounts on how to do better.

There were quite a number of really cool talks on various upcoming tech, and I had quite a bit of opportunity to take notes and build upon knowledge I’d already gained.

Keynotes!!

Next up were the PuppetConf keynotes for the first day, which usually contain Luke Kanies’ annual Puppet conversation. Where they’ve been, what they’re doing, and the roadmap forward were fodder for Luke’s talk, and you can find the complete keynote here:

The big synopsis I can give is all about application automation. For you Puppeteers out there, just think of the relationships between the File|Package|Service component “types” and apply that to application components (db, web, container, Java App, etc.) and you get the gist. Very cool, very powerful, and very near. Be looking for Puppet Enterprise 2015.3 to drop in the very near future. I’ll certainly have some blog things to say when that happens.
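For anyone who wants the analogy spelled out, here is the classic package/config/service relationship in Puppet DSL that the keynote builds on (an illustrative sketch only, using ntp as a stand-in service):

# Install the package, manage its config, and keep the service running.
# The config file requires the package and notifies the service, so a
# change to the config triggers a restart. Application automation applies
# the same relationship idea to application components.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
  notify  => Service['ntpd'],
}

service { 'ntpd':
  ensure => running,
  enable => true,
}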

I appreciate, once again, my employer Shadow Soft sending me out to PuppetConf to be the best engineer I can be, and learn all the new tricks and tools at my disposal while on the road.

Look for me to pick up where I left off with basic tools, and a revamp of my early-on configuration tutorial for Puppet Community with the new tools and features in Puppet 4 soon.

Some Software Releases and Forge Stuffs

| comments

Salutations

First of all, hello from sunny and hot New York. I’m on engagement for my company doing some Puppet goodness in the Northeast.

In my downtime at night, I’ve been wrapping up some work I’ve had on my plate for a while (my Puppet module destined for the Puppet Forge) and generally studying various topics, if for no other reason than to get better.

As I was preparing my new module, Puppet released the Puppet 4-based Enterprise release, which now follows a new versioning scheme:

Puppet Enterprise YYYY.VV

where “YYYY” is the year and “VV” is the version number within that year.

Puppet Enterprise

As many of you know, I’ve been maintaining a project for Puppet prototyping and working with development and testing over a Vagrant instance for some time. I created what I have because I needed something I could share with customers to help facilitate coding and iteration without their touching the production instance unless absolutely necessary.

Thus, my projects were born.

Vagrants. Vagrants Everywhere.

In a nutshell, I configure a Vagrant environment on a modern OS. I create 4 nodes. One is the Puppet Master itself and the remaining three are Puppet agents that check in with the master and sit in three faux environments: “Production”, “Testing”, and “Development”. As a result, you can code for DEV and iterate the heck out of it. Once you like it, you can merge up the tree into testing and finally to production.

These releases have been following the format:

[OSNAME]|[VERSION NUMBER]-[PUPPET][PUPPETVERSION]

So, for instance, a release for CentOS5 running Puppet Open Source 4.0 would look something like this:

centos5-po4

Make sense?

Well, as of today, my new release will be CentOS7 with Puppet Enterprise 2015.2. It can be found here.

Puppet Forge Module

I’ve also been working on two separate modules for the forge here recently. The other I’ve been working on longer, but this one was just ready first. I call it “PuppetDev”.

The idea behind the PuppetDev module is that sometimes a company that needs to do Puppet development has a corporate policy against using a tool like the Vagrant instances (i.e. virtualization on the desktop) mentioned above. As a result, the company often will set up a centralized development host that people can log in to for developing Puppet code.

Often times, though, when starting out, a user can feel overwhelmed at the simple command line in front of them, and even if they get into an editor, they may not know where to start with syntax highlighting and the like.

It was from this need PuppetDev was born. You simply apply the module to a node, and supply the user/group you want to apply the module to as parameters, and it whirls away and sets up their development environment.

Among the toys they get with the release are syntax highlighting, an easier to read colorscheme, a Vim plugin infrastructure, and all manageable by the Puppet Administrator for the site.

I know it’s a rather narrow use case, but there it is… feel free to check it out here, or if you’re so inclined, on your Puppet master you can simply run puppet module install cvquesty/puppetdev.

That’s all I have for this update, but hope to be a little more active here shortly with my next Forge module.

Organizing Your Hierarchy Equals Pain

| comments

The Pain Point

One would think after reading Gary Larizza’s blog that I would’ve come away with the idea that Hiera presents a few issues even as it solves a ton… but no. I had to go and think it was easy, fly off half-cocked and try to tackle a big issue or two, unprepared mind you, and here I am… re-discovering what humility should be like.

The Problem

Hiera looks simple. Disarmingly simple. However, the pain doesn’t come in just looking at a nice, default hiera.yaml:

:backends:
  - yaml
:hierarchy:
  - "%{clientcert}"
  - "%{environment}"
  - common
:yaml:
  :datadir: "/etc/puppetlabs/puppet/environments/%{environment}/hieradata"

We can simply look at this and see the wonderful simplicity of how the yaml file is laid out, the ease of adding more hierarchies from which to gain data, and even expand the model to include subdirectories and all sorts of interesting methods of organizing and abstracting our code into usable, organized chunks.

Not so fast.

The real issues begin when you’re dealing with a customer. All too often I find that even they aren’t entirely sure what’s going on in their very own environment, and pushed hard, will actually argue among themselves as to just how everything works. Scary.

The main thing we find ourselves doing is figuring out that last mile… What precisely is the role of machine X in this environment? Ask two people separate from each other, and they’ll likely give different (sometimes remarkably so) answers. Get them together and they may quibble a bit, but generally get to consensus.

This is REALLY, REALLY important. If the folks you’re trying to help aren’t 100% sure exactly:

  • What a machine does
  • How a machine is built
  • What its role in the organization is

…you’ve got some real issues.

This Doesn’t Suck

Often, engineers look at the Hiera configuration file in a vacuum (so much for the suck joke). Not permanently, but definitely independently at first. Then, as the engagement pushes on to defining Roles and Profiles for the site, you have this “oh crap” moment: you throw back to the hierarchy and start modifying lookup layers, then jump forward to the profiles where the lookups are (or should be, anyhow) and realize they assumed a different hierarchy. Then you jump back to the hiera.yaml, make changes, then back to the profiles, repeat the process, and finally get to the component modules and remediate any assumptions you made on everything you just changed. Uh oh. Wasted time.

I’ve started working through “all the things” and have come up with a mechanism that works well for me. Hopefully you can gain some mileage from it as well.

First Things First

Some would argue you should do the Hierarchy first while others would argue you do the Profiles first. However, I’ve found that parsing out all the business logic with the team gains a remarkable amount of runway for you to start. Why?

Systems engineers are techno-nerd types. The nuts & bolts, configs and the like are their prime concern, and more often than not, they view the site atomically. They can tell you with great detail precisely what each and every machine has installed on it (often…not always), and generally know what the IT ROLE of a node or collection of nodes is. Further, they can tell you who requested it, what kind of storage may be connected, and what business group out there it satisfies the needs of, but with a startling amount of frequency, they cannot tell you the BUSINESS ROLE of the node.

Q: What is it?
A: A web server.
Q: What’s its purpose?
A: To serve web documents, duh!
Q: No, no… what’s its business purpose?
A: To make money?
Q: No, no… If you were to give it one overarching purpose, one reason for existing, what would it be?
A: Oh… ummm… I never thought of that before.

You’d be surprised just how often you arrive there with pretty much everyone.

As a result, if you wait until mid-engagement to reach this point, (or at least the middle of the writing phase), you’ve got a fair amount of backtracking, and even refactoring to do before you regain some sense of normalcy and can push forward.

Most Specific to Least

As has been said many times and in many ways, your MOST specific designation should always come first. For instance, what is the MOST atomic level of abstraction? Well, the node itself, of course, so the %{clientcert} designator suffices for that.

Well, what’s next? That depends on you. You might have a location to think of (is this data center on the east or west coast, US or Asia). That’s highly broad… maybe not. It might be environment (such as DEV, TEST, PROD). Again, this is custom to you, and that might still be overly broad and you need to find a happy place between clientcert and environment. Only you can tell me that when I’m standing in front of you, so I generally refrain until I can get the layout of your site.

For instance, one customer had clientcert, then location, then environment. That way, items unique to the data center the nodes were in would get handled first, and then things that were environment unique (regardless of location – more broad) could get handled next. See? Custom to them and the way they do business or are arranged technologically.
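As a concrete illustration (a sketch, not that customer’s actual file), the hiera.yaml hierarchy for that arrangement might look like this, assuming a custom location fact:

:backends:
  - yaml
:hierarchy:
  - "%{clientcert}"
  - "%{location}"
  - "%{environment}"
  - common
:yaml:
  :datadir: "/etc/puppetlabs/puppet/environments/%{environment}/hieradata"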

I “borrowed” the name for this post from Bill Engvall to illustrate a point, that if you just run off to development with no prior knowledge of the things Hiera works with, you will encounter the pain of refactoring at what is most likely the component module level and then the profile level as well. If you’ve ever had to do it, and then do it on a time crunch, truly you have felt the pain.

Avoid it. Think before you act.

Puppets… Puppets Everywhere

| comments

3.8 is Here!!!

That’s right, kids. PE 3.8 has dropped, and it is quite tasty. Some highlights:

AWS Module Now a Supported Module

As simple as that sounds, it’s huge. Being able to stand up tens, even hundreds or thousands of servers in AWS at once with Puppet is a great thing, but to have the module supported by Puppet Labs Support is even better.

Docker Containers??

Indeed. The Node Manager now “gets” Docker containers and you can provision from bare metal as needed. Once the provisioning is done, it hands directly off to Puppet to execute the configuration portion of your run. Sweet, sweet sauce right there.

Bare Metal

You’ve always been able to foray into the world of bare metal provisioning, but now it too is supported for you. You can stand up OSes, hypervisors, and then hand those off into the config run using Razor. Razor is now core to PE and also supported by the Puppet Labs Support Team.

Code Management

A long time coming, you can also manage code deployment to your Puppet Master using r10k, installed by default. Newly dubbed the “Puppet Code Manager”, r10k remains a command line tool, but I hear rumblings there may be some GUI juice on the horizon for this.

Deprecations

As with any release, some Puppet Enterprise features are going the way of the Dodo Bird. Some expected, some surprising, Puppet Enterprise’s landscape is certainly changing.

Cloud Provisioner

Long decried as a weak part of the PE infrastructure, the newly announced AWS Supported Module renders it redundant, and as such is removed from the shipping product’s default installation. Of course, if you have a large infrastructure that leverages the Cloud Provisioner, you can continue to use it by installing it into PE separately.

Live Management

Live management, a long-standing feature of the Enterprise Console, is now also deprecated. Of course, with the new code management features “baked-in” to Puppet Enterprise through r10k, Live Management is somewhat redundant. However, Puppet Labs notes that they will be releasing improved resource management functionality in future releases. If you need Live Management, then just as you can with the Cloud Provisioner, you can turn it on as well in the 3.8.0 product.

Compatibility

Finally, some older versions of supported OSes are no longer so, and the list is as follows:

centos-5-i386
centos-5-x86_64
centos-6-i386
debian-6-i386
debian-6-x86_64
debian-7-i386
debian-7-x86_64
oracle-5-i386
oracle-5-x86_64
oracle-6-i386
redhat-5-i386
redhat-5-x86_64
redhat-6-i386
scientific-5-i386
scientific-6-i386
sles-11-i386
ubuntu-1004-i386
ubuntu-1004-x86_64
ubuntu-1204-i386
ubuntu-1404-i386

I’m sure you may have some of these in your infrastructure, but they’re usually the result of a vendor application’s supported platforms. If so, you may wish to communicate back upstream to your various vendors, because when you upgrade PE, these go away for you.

Try It Out

As usual, I’ve already created a Vagrant instance to allow you to test and work with the new PE, testing out your existing code on the new platform. Check it out on my GitHub here.

Let me know if you find any issues, and happy Puppeting!

Some Coding Work of Late…

| comments

I AM NOT A CODER

I know that sounds a little silly with all the Coderwall links over there in the sidebar, but I’m not.

I got into this business as a lowly PC building guy, and worked my way into systems administration through light consulting. A necessary evil of the day was tweaking an autoexec.bat & config.sys to release as much memory to the user as was possible for applications. (This was long before Win95, FYI)

As progress and learning would have it, I landed myself a systems administration job and began to grow. Here, 22 years later, back to consulting (but on a much larger scale), I look back on my professional career and see and realize that there’s a LOT of code behind me. Perl, BASH, ksh, HTML, PHP, light Ruby, old DOS debug scripts, Puppet DSL, Expect scripting… tons of it. All encountered and fleshed out in the context of systems engineering and/or management over the years as situations and needs arose.

Fast forward to today. The juxtaposition of Development, QA, and Operations into one big, hairy, hard-to-define (but getting clearer) term known as “DEVOPS” is the landscape a new admin comes into, and he or she learns from the very beginning the principles of placing infrastructure definition into code, and working as a developer to enhance and automate the hard infrastructure of the operations world.

I say all that to say this… I’ve got some new releases on my GitHub I’d like to share with you to help you out while navigating in this world of DEVOPS. If you have to call me a coder because of it, I may frown, but it is what it is.

Vagrant/Virtualbox Fun

As I’ve been slowly revealing through my series in past months, there are a lot of tools out there for working with Puppet, and a ton of ways to prototype your company’s environment. One of these is Vagrant, and it has the ability, in a huge way, to help you automate the setup and teardown of sample infrastructures to work with your Puppet code in. I’ve just updated and released a few of these, and I want to tell you about them.

Vagrant with CentOS 6.5 and PE 3.7.1

If you look here, you’ll find the current project I use with customers. This is a Vagrant instance that turns up a 4-VM environment including a PE Master and DEV, TEST, and PROD VMs running the PE agent, with the Enterprise Console, directory environments, and r10k pre-configured, and a simple set of Puppet modules to get you started.

Most commonly, I share those with customers, coworkers, and community folks to get them started coding right away, and to have a platform with which to teach them how to deploy, merge, and promote code through an environment in a smaller version of what they might already have in their company. This is the environment I spoke about at the Atlanta Puppet User’s Group last year in its current iteration.

Vagrant with CentOS7 and PE 3.7.2

Similarly, you can find a CentOS7 + PE 3.7.2 project here. Much like the above, you get the latest of PE with CentOS7 to help your prototyping over a more current OS.

Vagrant with CentOS7 and Puppet OSS 4

If you look up the word “experimental” in the dictionary, this project right here is linked as an example.

I’ve gotten a very rudimentary working setup of a Master and one agent to install completely and autosign, and haven’t even scratched the surface of all the new goodies in Puppet 4. As Puppet 4 is still in Beta, this is not recommended in any way for any reason at any time for you to use for any purpose. 🙂

My hope here is to prepare myself for the PE4 features long before they’re released. I hope to work on getting directory environments and r10k working for this only to have a base from which to rapidly develop for PE4 when it’s released. EXPECT THIS ONE TO GO AWAY IN FAVOR OF THE NEW PROJECT.

YOU HAVE BEEN WARNED

I hope these projects assist you in rapidly creating a platform and developing for Puppet. If you have any questions, don’t hesitate to contact me via jsheets@shadow-soft.com, quest@questy.org, or one of the many other social media nexii you have available to you.

As always, these are in active, deep development. If you’ve got some Vagrant chops and/or want to contribute in any way, feel free to do pull requests, and I’ll integrate changes as soon as I’m able between customer engagements and/or other duties I may have here at Shadow-Soft.

Building Your Toolbox

| comments

I know it’s been since June we’ve worked on the Puppet Development series, but as work goes, so go I, and as I go, so delays the blog. 🙂

Recap

We started our journey with a reintroduction to Vim. As strange as it sounds, often times we techno-guys take for granted that people coming into the DEVOPS space are well versed in all these things, and overlook remediating the basics. We covered Vim basics and switching between command and insert mode as well as linking you to some good cheat sheets to help you beef up your vim-fu.

Next, we covered Vim plugins for syntax highlighting and just general code visualization so we can see visually when our code editing has issues and needs to be fixed.

The next article centered on revision control in general, but Git in particular. In addition to using Git, we also covered that amazing tool GitHub and how to get registered for it, create your own repos, and how to work with repos from your local command line as well and I linked you some excellent resources on Git to expand your knowledge of the Git world and become proficient and fluent in its use.

Finally, we got around to Vagrant and I gave you a simple tour of Vagrant to be familiar with the tool and what it does for you. We stood up a Vagrant instance and saw how we could destroy and re-provision that precise same instance with only a shell command, and saw the power of automated provisioning at work right on our own node.
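If you want to repeat that exercise, the relevant commands, run from the directory containing the Vagrantfile, are simply:

vagrant destroy -f
vagrant up

The destroy -f tears the instance down without prompting, and up re-provisions the exact same instance from scratch.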

Reasons

Like the social media world will tell you, I did it because reasons. In short, I wanted you familiar with all these separate tools as we start to coalesce them into an integrated whole we can use as our development toolbox.

So, since we’ve got Vim, Git, and Vagrant as needed components, what else might we need to continue pressing forward?

We need to understand Puppet itself.

Puppet – The Product

Puppet, as I’m sure many of you are aware, is simply “configuration management software” produced by Puppet Labs, Inc. Puppet Labs was founded in 2005 by then systems administrator/engineer Luke Kanies to help automate the common, repetitive tasks he encountered in his regular work duties on a day-to-day basis.

After a few rounds of venture funding and explosive growth of the market segment known as “configuration management”, Puppet Labs has become a market leader in the space, and continues to develop and improve upon the product at a rather aggressive rate.

Configuration Management

If Puppet is “configuration management” software, what is this thing called “configuration management”?

Configuration management as a systems engineering process covers a lot of landscape in its purview. It can mean, speaking generally, a process for maintaining consistency of a product’s performance, and can become considerably complex, such as the methodology used to manage military weapons systems, IT service management, and other domain models covering civil and industrial engineering.

For the purposes of the technical field of Systems Administration, Engineering, and Automation, however, Configuration Management as a discipline is very well defined. Specifically, the model that covers these areas is Operating System Configuration Management. Certainly, when automating your site you step over into additional disciplines, but at its core, Configuration Management almost always implies that you are working with the primary target of Operating Systems Configuration Management.

IT Automation

Oftentimes, configuration management as it is expressed in the realm of operating systems begins to take on automation as the primary characteristic of the work performed. From provisioning to deployment, automation saves the most time and effort through modeling systems design in a modular fashion, thereby allowing you to apply systems configurations against classes of machines tooled to perform specific types of work.

This paradigm allows for many idioms to be used in the description of the destination machines, and is the primary domain within which products like Puppet operate.
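In Puppet terms, that idiom most often shows up as a role class assigned to an entire class of machines. A minimal, hypothetical sketch (the profile names are made up for illustration):

# Any node classified with this role converges to the same configuration,
# no matter how many of them you stand up.
class role::webserver {
  include profiles::base    # hypothetical site-wide baseline profile
  include profiles::apache  # hypothetical web server profile
}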

Puppet – the Product

As you peruse the main website for Puppet Labs, you start to see a much larger emphasis and prevalence of “IT Automation” throughout the website–specifically in the area of data center automation. It is important to note that Puppet prefers to work within this space, although it has abilities that stretch into the entire IT lifecycle workflow.

Puppet is comprised of two distinct products: “Puppet Enterprise” and “Open Source Puppet”. As is common in the Open Source space, the distinction between the two is defined primarily by way of the support options available for each.

Puppet Enterprise

The Puppet Labs flagship product is Puppet Enterprise. Puppet Enterprise is a fully supported, maintained, and actively developed, data-center-ready software package designed to enable you to model configurations for your site out of the box. It has an integrated installer smoothing the installation process, a series of Puppet apps available for use in the Enterprise Console (the Puppet GUI), and for-pay support and licensing options to meet the needs of your enterprise, regardless of size.

Open Source Puppet

Open Source Puppet is the core product found within Puppet Enterprise. It is the engine which drives the Puppet product and does the job of configuration modeling against your environment. It has several components, individually installed, and no shipped console. It generally leads the Enterprise version by several revisions, allowing early access to new features and benefits but lacks the additional features afforded by the Enterprise offering.

Make no mistake. Both products are the same software. However, the Enterprise product has much of the legwork done for you in the integrated installer, additional functionality in independently released “Puppet Apps”, and of course, enterprise-level 24×7 support availability. Add to that a vibrant community of development and expansion, the Puppet Forge, an annual conference, and a respected certification program, and Puppet Labs’ offerings, while similar, certainly shine on the Enterprise side of things.

Idempotency

The main concept upon which Puppet’s operation is founded is that of idempotency. Idempotence is the property of certain operations in mathematics and computer science that can be applied multiple times without changing the result beyond the initial application.
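A tiny Puppet example (hypothetical, not from the original post) makes the property concrete:

# Applying this resource once brings /etc/motd to the desired state.
# Applying it a second (or hundredth) time changes nothing, because the
# file already matches the declared content.
file { '/etc/motd':
  ensure  => file,
  content => "Managed by Puppet\n",
}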

As you can see, this characteristic means an operation has an effect on the target only if that effect has not already been applied. Subsequent applications have no effect, as the desired state of the target is already in place.

This is good news in that it allows us to think of our environment in new ways. Instead of thinking of all the changes we’re making or going to make to our environment, we begin to consider the desired state of our site and the maintenance of that target at all times. Then, “events” are no longer misconfigurations, but instances of deviation from the desired state, the “norm”.

Implications

Think of the ramifications of the shift in mindset this represents. Audit reports are no longer a series of weeks of review of the site. Your site’s state is described in code, and is a public record within your organization for all to see. Instead, those conversations become “We are ALWAYS compliant. Here is a report of the few times we weren’t compliant in the last year.”

Considerably different conversation to have.

Now, as you model more configuration state in code, the entire way you think of your site evolves. More things happen automatically, are automatically reported on, and are automatically remediated. Your reporting changes. Your audits change. Your compliance reporting changes. In fact, your entire internal culture changes, and all involved teams need to make adjustments in the way they think about the site and how to mold the way they’ve traditionally done their jobs (whether administration, audit, or compliance) into this new world of DEVOPS, automation, and the like.

Conclusion

Puppet Enterprise and Open Source promise to not only change how you view your systems and site, but how your entire organization functions. From automated configuration expression within the site, to how you originally model your configuration target, the tools provided by Puppet have changed the face of IT. Automation and configuration management bring culture change and ideological evolution to the enterprise, and step you into the next level of efficiency, compliance, and security management.

Management May Be Missing an Important Component of DEVOPS

| comments

As I travel around the country installing and training people in Puppet Enterprise, I’m noticing some characteristics of management’s perspective on the DEVOPS movement that are disconnected from implementation and reality. In short: managers are now hiring personnel in the field of DEVOPS for positions they may be highly qualified for, but with no institutional ability to execute on the tasks that will be assigned them in a modern infrastructure, especially one that has to meet governance criteria such as ITIL, PCI, SOX, HIPAA, and the various STIG requirements we see in governmental circles.

First a story, then an observation…

Early Rumblings of DEVOPS

A number of years ago I was a senior engineer in a large TV/Web property. The team was probably one of the best I’d ever worked with from an operational perspective. What I mean by that, is they not only knew how to do what they knew, but when confronted with requirements on something that did not exist, or had not been built yet, they just built it themselves. (handy to have in the days before ubiquitous workflow engines, automation tools, and deployment mechanisms!)

At the same time, the development team was mostly tiered… Entry-level personnel were basic coders, seniors were considerably more integrated into the nuts & bolts of the site, and the leads & managers could actually commit. Quite a well-organized protectionist strategy to keep the codebase clean and mostly devoid of errors. It was a great setup for a 2000-2002 era development shop. Problem was, it was 2005.

As business goes, eventually newer and more well rounded developers with experience in a new subset of tools and techniques began to be hired, and from their background they might have had elevated privileges in their past environments, the ability to commit at will (or continuously integrate?) and felt as though this somewhat “experienced” development model was archaic and slow. And it was.

Inevitably, one of these nice folks would make their way over to the operations side of the house, usually in despair, looking for ways to make their lives easier, which usually ended up in some sort of altercation over “root” level access to systems throughout the environment they had to touch. One could assume how that conversation would go; ultimately, operations could not find a business justification for such a level of access, and the request was denied. This would engender a certain amount of tension between teams, and life would roll on in much the same way.

Finally, one day, one of the best developers I personally have ever had the privilege of working with came on as a contractor. (He would ultimately come on board full time and then become the Sr. Architect for the team.) Everything he did would turn to gold. His development models and abilities were changing the way developers would think about what they could do, methods and procedures were changing, deployment technologies were being tried, and workflow engines were making the development side of the house quite modern by all measures.

However, the existing operational model continued along at the early-century norms, and would not/could not budge. Now, this wasn’t due to the fact that there were jerks in the department, no, quite the contrary. In scenarios where a team is so competent in what they do, they look for ways to script and automate away mundanity. The better the team, the stronger this backbone. The stronger the backbone, the more tendrils get attached to the core, until automation and development reach each and every part of the infrastructure. When the team builds to that level, each part hands off to the other. Centralized data stores provide the API for the site, and to touch any particular part of the infrastructure at the design level affects the entire system. So goes infrastructure architecture.

Before there was a “DEVOPS”

As you can see, in very real terms, this was a “pre-incarnate” DEVOPS infrastructure. A little more OPS than DEV, but nonetheless automated as was possible.

But these new upstart tools were going to ruin this! Yes, they had promise and could certainly replace large portions of the existing workflow, but it could take months or years to “undo” what had already been done to supplant existing mechanisms with newer, better tools.

And therein lies the road to DEVOPS.

The Climate Today

I tell the above story to illustrate the tensions existent before the rise of DEVOPS and the subsequent automation revolution we’re currently experiencing.

Many times one would love to implement their new tools, but the operational infrastructure would prevent it. Or, lesser-informed development teams would accept no less than the highest level of access into the environment, but modern compliance standards prevent that from happening as well.

The manager who has to navigate this particular problem when hiring or resourcing a need in their infrastructure has quite the task ahead of them. Why? DEVOPS has integrated the two fields at a vector point to a degree whereby it is incredibly difficult to determine where the DEV ends and the OPS begins. Sure, there are considerably more well-defined responsibilities on the extremities of the respective disciplines, but that joining point threatens to cause discord in the world of the IT infrastructure and many sleepless nights for the IT manager trying to nail down their talent needs.

Take the requirements for the “DEVOPS Application Operations Engineer” found on one of the major online employment sites posted just a few days ago for a major metro in the U.S.:

Minimum 4 years’ experience in scripting and or any development languages like C#,.NET, Python, Java, Shell, Ruby or any other open source languages.

Experience with HTML/XML and Java Script

Familiarity with Microsoft SCOM, SolarWinds Orion, Keynote, Nagios, Puppet, Chef or other monitoring, SaaS management solutions is desired

Proven experience debugging and troubleshooting software-related issues in a software development or advanced application support position

As you can see, this is a development-heavy position (that, IMO, is all over the map from a requirements perspective), but so goes job descriptions today. Read between the lines, though…

Someone needs a competent developer that isn’t completely freaked out when someone says “Puppet Environment”, “Monitoring”, or “SaaS”, that knows their way around deployment and automation and can get things done. That’s fine. Problem is, this assumes full lifecycle responsibility when the actuality is that the future employee has a hard-line stopping point beyond which he or she will never be “allowed” to tread due to compliance alone, and that is the breakpoint between DEV and OPS in the DEVOPS world. Consider this:

There are three clearly defined worlds here, all converging on a singular point known as DEVOPS. From the development skill and expertise of the developer, to the testing and assurance retrospect of the QA Engineer, to the security and compliance purview of the Operational Architect, DEVOPS is not a “one trick pony” with a singularity view. It is a methodology that brings together the three worlds in a clear developmental workflow to speed safe and secure deployment with minimal errors into serving infrastructure. As often as people try to push DEVOPS into a development position, or into an Operations or QA position when hiring, success will be limited, and frustration will be the result.

What, Then, Is the Manager Missing?

As has been heavily implied thus far, the manager may be missing the fact that DEVOPS is not a position but a way of doing things. DEVOPS is a methodology, not a granting of rights or abilities. And, if we’re talking about lines of demarcation within groups, DEVOPS is a superstructure of tools built, implemented and designed by the Operational Architectural team to move, implement, and regression test code and associated objects provided by the Development Architectural team with tests, regressions, and automated mechanisms specified by the Quality Assurance Architectural team in a specific fashion and after a specific methodology that has commonly become referred to as DEVOPS.

The manager has to realize that this is not a subset of bullet points on a resume, but a holistic approach to all of their environmental considerations that requires all teams to cooperate through to the end result: quality software products delivered in as short a cycle as possible, in an automated fashion, with as few errors and bugs as reasonably can be remediated before going “live”.

The manager who is looking for this methodology in a single position has already lost the battle before ever posting the position.