Questy.org

Tech musings and other things...

Some Housecleaning


Picking Up Where I Left Off

Sometimes life starts to get in the way of things. It happened to me. I’m no longer consulting in the Puppet space, but am still very much involved and active in the community. But, as happenstance dictates, we get busy in our daily lives, and things like blogs (that don’t generate revenue) and social media outlets start to take a “second seat” to the rest of work and life.

So, I’m going to reset, and will be cycling back through the previous instructional materials I’ve worked on, revamping my tutorials on PE, Community, Vagrant, etc. for modern editions. I want to stay up with modern versions, and I want to get deeper into PE and the new pe.conf (HOCON format), as well as better orchestration methods within the Vagrant framework for standing up new PE instances for use as a local development workstation (regardless of your platform).

Stick with me. We should have some fun changes here soon with all the new goodies Puppet has brought us over the last couple years.

Scaling Puppet Enterprise - Part VI - Code Manager


Recap

Let’s see where we are.

  1. We have a Puppet Enterprise Split Installation consisting of a Puppet Master, PuppetDB, and Puppet Enterprise Console.
  2. We have a Load Balancer with two compiler nodes behind it.
  3. We have an ActiveMQ Hub and an ActiveMQ Spoke and have removed ActiveMQ responsibilities from the Enterprise Master (MoM).
  4. We have built a GitLab instance to host our Control Repo and other items necessary to operation of our Puppet environment.

The final remaining piece is like a “glue” step where we pull all the various pieces together and generate SSH keys and deployment tokens. We also associate the catalog compilers to the Enterprise Master and coordinate the deployment of code across the masters. Needless to say, you will need to have already performed all the preceding steps, and have made everything ready to go for the following procedure. Failure to have done so will have unpredictable results. So, if you’re ready, let’s proceed.

Setup A Control Repository

The first and foremost piece is to have a repo whose job is to “control” the processing of modules and custom code, providing the “map” for deployment onto your Enterprise Master and catalog compilers. Puppet Labs has a suggested sample one here which has quite a number of nice features. However, when I first wrote this tutorial, it was considerable overkill for what I needed to do, so I opted to create my own very simple version of a control repository. Quite a few iterations have occurred since writing these instructions, so I will continue with my instructions.

The Control Repo

The control repo came about as a collaboration at Puppet Labs between employees, consultants, users, etc. It was originally named something else which escapes me at the moment, but eventually came to be named the “control repo” by virtue of the service it performs. In short, it contains the “map” between what you have in Git or at the Puppet Forge, and the deployment directories on your Puppet masters. The “map” itself is known as the “Puppetfile”. This file contains a listing of all the modules you want deployed to the server. The bonus is that for each Git branch you have within the repo, this specifies an “environment” to Puppet.
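To make the “map” idea concrete, here is a minimal Puppetfile sketch. The module names, versions, and Git URL are illustrative examples only, not the contents of any particular control repo:

# Puppetfile - minimal illustrative example
forge 'https://forge.puppet.com'

# Modules pulled from the Puppet Forge, pinned to specific versions
mod 'puppetlabs-stdlib', '4.12.0'
mod 'puppetlabs-ntp',    '4.2.0'

# A custom module pulled straight from Git, pinned to a branch or tag
mod 'profiles',
  :git => 'https://git.example.com/cmadmin/profiles.git',
  :ref => 'production'

Each branch of the control repo carries its own copy of this file, so the “development” branch can point at different module versions than “production” does.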

I won’t get into all the conversation around whether you should have a 1:1 mapping between Git branches and Puppet environments, and then from Puppet environments to application tiers… I’ll leave that to the Puppet folks. I have always mapped everything identically all the way through, and will cover that process here.

NOTE: The understood way of doing this these days is that if you have a new “thing” you want to create, you fork a feature branch, apply it to a few nodes for testing, then merge back into your master or production branch and deploy everywhere. I’d recommend learning this. In my own needs, I had a lot of governed environments (PCI, SOX, ITIL, etc) that needed absolute code separation, and the ability to demonstrate that no code in one environment had a chance of deploying to another. (principle of environment separation) As a result, I always opted for 1:1 correlation.

I have created a sample control_repo you can clone from here: https://github.com/cvquesty/control_repo.git

This is a bare repo with a collection of Forge modules populated into the Puppetfile, with a “development” and a “production” branch. This will trigger Puppet to create directories called “development” and “production” in /etc/puppetlabs/code/environments, which will contain the items you instruct it to deploy there from the Puppetfile.

First, clone the repo:

git clone https://github.com/cvquesty/control_repo.git

to a working directory of your choice on your local node. Next, change to the directory and view the remote:

cd control_repo
git remote -v

This should present you with the GitHub location of my sample repo:

origin  https://github.com/cvquesty/control_repo.git (fetch)
origin  https://github.com/cvquesty/control_repo.git (push)

This presents an issue in that you can’t push to my repo. What I always tell people to do is to move it to their own Git repo. This is well documented elsewhere, but I’ll give you an example process.

While in the control_repo directory, perform the following:

git checkout development
git remote rm origin 

This makes sure you’re in the development branch, and that the repo is unattached to my GitHub account. Next, you’ll need to create a control_repo in your Git server and set it as your own remote. First, log in to your Git server and create an empty repo to hold the code. Next, in the repo you cloned from mine, run the commands to switch repos like so:

git remote add origin https://<YOUR_GIT_SERVER>/<YOUR_ID>/control_repo.git

git add .

git commit -a -m 'Initial Commit'

git push origin development

git checkout production

git add .

git commit -a -m 'Initial Commit'

git push origin production

Now you have the repo local to you and pointing to your own Git repository, so you can edit and update the control_repo at will. (You might note extra steps there; they are included for those unfamiliar with Git. They may not all be necessary, but they give a good pattern for how to work with repos, and I want to establish good habits early.)

Generate SSH Keys

On the Enterprise Master, you will need to create locations and/or set permissions on files and directories used by Code Manager:

chown -R pe-puppet:pe-puppet /etc/puppetlabs/code
chown -R pe-puppet:pe-puppet /etc/puppetlabs/code-staging
mkdir -p /etc/puppetlabs/puppetserver/ssh

Next, you will need to generate a secret key to use with Code Manager setup. To create the secret key, perform the following on the Enterprise Master:

ssh-keygen -t rsa -b 4096 -C "SSH Deploy Keys"

When ssh-keygen asks you what to name the key, I usually give it a name I can remember, name it after the customer I am doing work for, or just name it for the Control Repo itself. In this case, let’s answer with the latter:

/etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa

Now, when configuring Code Manager in the Puppet Enterprise Console, you will use that key.
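If you would rather skip the interactive prompt, ssh-keygen can take the output file and an empty passphrase on the command line. A minimal sketch using the same key name as above (Code Manager needs a key without a passphrase):

ssh-keygen -t rsa -b 4096 -C "SSH Deploy Keys" -f /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa -N ""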

Next, ensure all associated files are owned by the PE user:

chown -R pe-puppet:pe-puppet /etc/puppetlabs/puppetserver/ssh

Create RBAC Code Manager User

Next, you’ll need a Code Manager user in the Enterprise Console. Use the following process for that:

  1. Create a new role named “Deploy Environments”
  2. Assign this role the following permissions:

    • Add the “Puppet Environment” type.
    • Set Permissions for this type to “Deploy Code”.
    • Set the Object for this type to All.
  3. Add the Tokens Type

    • Set Permissions for this type to Override Default Expiry.
  4. Create a local user to manage code deployments.
    • Click “Access Control | Users”.
    • On the Users page, in the Full Name field, type the User’s Full Name: (e.g. CM Admin)
    • In the Login field, type the name cmadmin.
    • Click Add Local User.
  5. Set the User’s password to “puppetlabs” (or whatever you’d like to use)
    • Select the user from the list.
    • Click “Generate password reset”.
    • Retrieve the link in a browser, and set the password to “puppetlabs”.
  6. Finally, add the user to the “Deploy Environments” role.
    • Click the “Deploy Environments” role.
    • Click the “Member Users” tab.
    • From the dropdown list in the User Name field, select the CM Admin user and click Add User.

Code Manager

Under the covers, Puppet Labs now uses r10k with the control repository to manage the deployment of code. Under this scenario, a few items are very important to remember:

  • Under no circumstances should you be manually editing code in /etc/puppetlabs/code any more. Any attempt to do so will be overwritten by Code Manager. ALL deployments to the system must come through editing the control_repo and pointing to either Forge modules or custom modules you have written to be deployed to your Enterprise Master (and sync'ed to the catalog compilers).
  • You must have a control repo branch for each environment you wish to represent in your Masters (production, testing, etc.)
  • You cannot shorten or live without the fully named “production” environment. Puppet hard-coded this environment name in the product, and shortening the name to “prd”, “prod”, etc. will not work.
  • Code Manager operates with a synchronization subdirectory that lives in /etc/puppetlabs/code-staging. When you’re pushing code via your control_repo, it goes here first; then Code Manager and Code Sync take over and publish the code to all compile masters at once. Once all masters have the code in code-staging, it gets copied to /etc/puppetlabs/code.

More information on this process can be found at Puppet’s Documentation site.
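If your PE version ships the puppet-code command-line tool (2016.x and later, per the testing section further down), you can also ask the service whether Code Manager and file sync consider all masters to be in sync, rather than eyeballing the directories; a quick check looks something like:

/opt/puppetlabs/bin/puppet-code status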

Configuring the Git Server

You should have a custom deployment user explicitly for pushing code into your master. I have settled on using “cmadmin” as a deploy user on the Git Server. This allows you to have a generic user on the GitLab server you created earlier that you can work with, configure web hooks for, and then leave the credentials for that user with your customer or place it into IDM for your company.

To setup the new user:

  1. Create a user in the admin area of the GitLab server named “cmadmin”. Next, select “Edit” in the upper right hand corner of the screen and set the password as you see fit. (I’ll use “puppetlabs”)
  2. Select “Impersonate” from the upper right hand section of the page to assume the identity of the “cmadmin” user.
  3. Select “New Project”.
  4. On the resulting page, create a new repo called “control_repo” and make it a public project.
  5. Click “Create Project”.
  6. Push the control repo from the previous section to this repo in the cmadmin space.
  7. Seeing as we are using GitLab, you are unable to use a full authenticated deploy token because GitLab server’s input buffer is too short to handle a full authentication token. NOTE: This has changed in later versions of GitLab. You may find success in just creating the token.
  8. Configure the Webhook:

  9. Connect to your Git server (e.g. http://git.example.com) and choose the “settings gear” from the bottom left hand side of the page.

  10. Once in the settings for the cmadmin user, there is a small icon on the left frame that looks like two links of chain and is labelled “Webhooks”.
  11. Next, add a webhook of the form https://master.example.com:8170/code-manager/v1/webhook?type=gitlab into the “URL” box.

The “prefix” section points to the name of the user, based on the way GitLab uses namespaces in the URL.

  • Also select the items you need from the list of options. I recommend selecting all items except “Build Events”, and de-select “Enable SSL Verification”.
  • Click “Add Webhook”.

Configuring the SSH Key

Finally, add the PUBLIC SSH key created on the Enterprise master located at /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa.pub to the SSH keys section for the CM Admin user in the GitLab Server.

  1. While still impersonating the “cmadmin” user in the GitLab GUI Interface, choose the “cmadmin” icon in the lower left of the browser. Next, choose “Profile Settings” in the left hand bar.
  2. Under the profile’s Settings, choose “SSH Keys” from the left hand bar.
  3. Paste in the PUBLIC KEY to the “Key” text box. The Title text box should populate automatically. (or, you can name it yourself.)
  4. Click “Add Key”.

Installing Code Manager

This process assumes you have followed this entire series from start to here in order. The final steps are to install and configure Code Manager itself. This is that process.

  1. At the Puppet Enterprise Console, navigate to Nodes | Classification | PE Master | Classes Tab | puppet_enterprise::profile::master.
  2. In the puppet_enterprise::profile::master class, you need to set the following parameters:

    • r10k_remote => the Git FQDN and path to the namespace/control_repo of this node (e.g. git@git.example.com:cmadmin/control_repo.git)
    • r10k_private_key => the full path to your deploy key on your Puppet Enterprise Master (e.g. /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa)
    • file_sync_enabled => true
    • code_manager_auto_configure => true

At this point, you also want to make sure your control_repo has a hieradata value set. If you cloned your repo from mine, you already have that value set in the common.yaml in the hieradata directory. That setting would be:

puppet_enterprise::master::code_manager::authenticate_webhook: false

NOTE: Recall that GitLab has changed dramatically since the original writing of this tutorial. Later versions allow you to authenticate the webhook. When I wrote this, I was working around technological limitations that are now gone. Feel free to complete this as needed, but I just wanted to explain the reasoning behind these earlier configuration steps.

Next, ensure the hiera.yaml lives in $confdir as needed for Code Manager:

  • Edit the /etc/puppetlabs/puppet/puppet.conf file to ensure there is a line in the “[main]” section: hiera_config = $confdir/hiera.yaml. For example:
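As a reference, a minimal sketch of what that portion of puppet.conf might look like (this assumes the default $confdir of /etc/puppetlabs/puppet and leaves any other settings you already have in place):

[main]
hiera_config = $confdir/hiera.yaml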

Finally, run the puppet agent to apply all the above configuration changes:

puppet agent -t

Test the hiera value on the command line to ensure Hiera has picked up your value:

hiera -c /etc/puppetlabs/puppet/hiera.yaml puppet_enterprise::master::code_manager::authenticate_webhook environment=production

You should get a return of “false”.

NOTE: Later versions of Hiera respond to the “lookup” command. The older “hiera” command-line utility functions only intermittently at this time, and it has been recommended on the Puppet Community Slack that “lookup” is the way to go.
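If you are on one of those later versions, a roughly equivalent check with lookup, run from the Enterprise Master, would look like the following; the --node value is simply the certname of the master in this example series, so adjust it to your environment:

/opt/puppetlabs/bin/puppet lookup puppet_enterprise::master::code_manager::authenticate_webhook --environment production --node master.example.com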

Generate Authentication Token

On the Puppet Enterprise Master, you must now generate an authentication token for the CM Admin deployment user to be authorized to push code. First, request the token:

/opt/puppetlabs/bin/puppet-access login --service-url https://console.example.com:4433/rbac-api --lifetime 180d

It will request a username and password. Use the credentials you created in the RBAC console (in my example, cmadmin / puppetlabs) and the system will write the token to /root/.puppetlabs/token.
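To sanity-check that the token was actually written (and to grab its value if you need it for the curl-based tests below), you can print it back out; a quick check assuming the default token location:

/opt/puppetlabs/bin/puppet-access show
cat /root/.puppetlabs/token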

Time to restart!!

Run the puppet agent on all compile masters in no particular order:

puppet agent -t

Now, let’s test!!

Prior to PE 2016.x.x, you could only fire the tests with curl commands against the API. Those would be as follows:

Deploy a Single Environment

/usr/bin/curl -k -X POST -H 'Content-Type: application/json' "https://localhost:8170/code-manager/v1/deploys?token=`cat ~/.puppetlabs/token`" -d '{"environments": ["ENVIRONMENTNAME"], "wait": true}'

Deploy All Code Manager Managed Environments

/usr/bin/curl -k -X POST -H 'Content-Type: application/json' "https://localhost:8170/code-manager/v1/deploys?token=`cat ~/.puppetlabs/token`" -d '{"deploy-all": true}'

On PE Versions 2016.x.x and later, a new tool known as puppet-code was created to ease the testing and firing of the deploys.

Deploy a Single Environment

/opt/puppetlabs/bin/puppet-code deploy {environmentname}

Deploy All Code Manager Managed Environments

/opt/puppetlabs/bin/puppet-code deploy --all
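Both puppet-code deploy forms queue the deployment and return; if you want the command to block until the deploy actually finishes (handy when scripting or testing), the versions of the tool I worked with also accept a --wait flag, for example:

/opt/puppetlabs/bin/puppet-code deploy production --wait
/opt/puppetlabs/bin/puppet-code deploy --all --wait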

Conclusion

At this point, you should see your code beginning to populate the /etc/puppetlabs/code-staging directory and then eventually the /etc/puppetlabs/code directory. Your final tests will include pushing code to the control_repo to test that the hook is working properly.

If all goes well, you should have code automatically deploy to the $codedir after a few seconds to a minute depending on a variety of factors.
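A simple end-to-end test of the webhook is to push a trivial change to the control_repo and watch the code directories update. A sketch of that test (any file will do; the README is just an example):

cd control_repo
git checkout development
echo "webhook test" >> README.md
git add README.md
git commit -m 'Test Code Manager webhook'
git push origin development

# then, on a master, watch for the change to arrive
ls -l /etc/puppetlabs/code-staging/environments/development
ls -l /etc/puppetlabs/code/environments/development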

Other Stuff

I wrote these tutorials, as I mentioned in the first article, to help coworkers complete the same process I was doing. I had to sanitize out a lot of internal info, and I had to change hostnames on the fly to make sure “all the things” were secret that needed to be, so the names in question have not been specifically tested end-to-end, but the principles are the same.

I worked on both 2015.x.x and 2016.x.x with this process, but newer versions of PE may have different features or setup options not covered here. As with any “Open” documentation, “Your mileage may vary” and “Use at your own risk.”

I hope this helps someone out there get Code Manager set up and functioning in a Large Environment Installation scenario, and that you can scale as large as you need to as a result of the footwork I’ve done here. Feel free to email me about errors you find, and I’ll fix ’em up right away!

Scaling Puppet Enterprise - Part V - GitLab


If you’ve been following for the past 5 installments, we’re nearing the end! Note that each of the prior articles required other things to have been completed before reading/performing the contained steps, but this article is a bit different. In all truth, you could do this process at any point, but I placed it here for one reason alone. “Why do this manually when I could get Puppet to do it for me?”

The importance of this particular step is that we need a place to hold our “control repo” (more on this later) and if you don’t already have Git installed in your environment, you’ll need it. So, before finishing up the installation and configuration of Code Manager, utilizing Puppet to install GitLab is a good test that everything is installed and configured properly, and all the components are communicating as expected.

Without further delay, let’s continue.


Create a Machine to Serve as the GitLab Server

Provision a new node according to our earlier chart to serve as your GitLab server. While I list specifications, you may find more mileage by scaling the Git server larger. If you will be expanding your Puppet team and will have dozens to hundreds of people developing for Puppet, scaling will be a consideration. Also, while outside the scope of this article, you will want to configure offsite backup and/or replication to a geographically separate location for your GitLab server. This is of paramount importance. If you lose this server, all configuration for all systems managed in all environments across your organization would be lost. This isn’t the end of the world in terms of business continuity, but trying to recreate all that code from the ground up would be prohibitive.

Yes, people will have recent copies of the repo on their local machines. Yes, with some nonzero level of effort, you should be able to get the repos back. No, it’s not fun, and you’ll have a bad time. Just back up your server, and if possible…replicate it elsewhere in your organization.

My initial suggested specifications for this server are:

GitLab Specs

I don’t specify disk for /opt and /var here, as each of these images carries ample disk with it. If you believe you will need additional storage for your Git instance, feel free to scale this as you see fit.

Once the server is installed, go ahead and install the Puppet Agent on it, pointing to the compiler VIP like so:

curl -k https://compile.example.com:8140/packages/current/install.bash | bash

Once the agent installation is complete, in the Puppet Enterprise Console, navigate to Nodes | Unsigned Certificates and accept the new cert request for the GitLab server. Once that is complete, SSH to the GitLab server, and run puppet agent -t to complete the initial configuration of the node.

Create a Profile to Manage the GitLab Installation

On the Puppet Enterprise Master, install the vshn-gitlab module.

puppet module install vshn-gitlab

NOTE: You will need to perform this on ALL catalog compilers in your infrastructure. If the GitLab server checks in and doesn’t find either the vshn-gitlab module or the profile you’re creating below on the master the load balancer refers it to, the catalog run will fail.

On the Puppet Enterprise Master (eg. master.example.com) create a new profile in $codedir/environments/production/modules/profiles/manifests/gitlab.pp.

(Puppet Enterprise has an internal variable for $codedir now. If you have made no modifications to this in the puppet.conf, the default location is /etc/puppetlabs/code.)

The profile you create should look like the following:

# Configure GitLab Server
class profiles::gitlab {

  class { 'gitlab':
    external_url => 'http://git.example.com',
  }
  
}

Save this as gitlab.pp.

In the Puppet Enterprise Console, create a new classification group.

  • Navigate to Nodes | Classification
  • Create a group called ‘GitLab’ with a parent of ‘All Nodes’ in the Production Environment
  • Pin the git.example.com node into the newly created GitLab group.
  • Choose the ‘Classes’ tab and click the ‘Refresh’ icon to pick up your newly created profile.
  • Add the profiles::gitlab class to the classification group.
  • Commit the changes.

Caveats

Since we’re mid-setup and have multiple compilers but do not have code sync enabled, we have to manually copy the new profile to all your compilers in the same location. This allows the agent on the GitLab server to pick up the profile regardless of where the load balancer sends the agent request.
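How you copy the profile around is up to you; a quick sketch using scp from the MoM to the compilers used throughout this series would be something like:

for compiler in compile1.example.com compile2.example.com; do
  ssh root@${compiler} "mkdir -p /etc/puppetlabs/code/environments/production/modules/profiles/manifests"
  scp /etc/puppetlabs/code/environments/production/modules/profiles/manifests/gitlab.pp root@${compiler}:/etc/puppetlabs/code/environments/production/modules/profiles/manifests/
done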

Once the profile is in place, run puppet agent -t on your GitLab server, and Puppet will then install the GitLab software onto the server. At this point, after a short delay, you should be able to retrieve your GitLab server in a browser (e.g. http://git.example.com) and login with the default credentials.

In our example, git.example.com is the server and the login would be automatically set to admin@example.com with a password of 5iveL!fe. These are defaults set by the GitLab installer.

Your GitLab server should now be up, running, and ready for action in your Puppet Environment. Look for the final installment to bring everything together and finish the installation.

Scaling Puppet Enterprise - Part IV - ActiveMQ Hub and Spokes


As in the previous installment, you need to have already completed a few steps before arriving at this post. You should have already completed a “split installation” (documented here). Also, your load balancer needs to be configured and running. The procedure for that portion can be found here. Finally, you should have the additional compilers installed and configured along with two example agent nodes as covered here and here. If you’ve completed all these portions, you are now ready to configure ActiveMQ for scaling MCollective.

Once the preceding items are performed, you may find it necessary to add ActiveMQ hubs and spokes to increase capacity for MCollective and/or the Code Sync and Code Manager functions of Puppet Enterprise. This installment documents how to install these additional components and tie them into the existing infrastructure.

Create an ActiveMQ Hub

  1. Go to the Puppet Enterprise Console in your browser.
  2. Select Nodes | Classification and create a new group called “PE ActiveMQ Hub”
  3. Stand up two new nodes for the ActiveMQ Hub and Spoke (in our example, activemq-hub.example.com and activemq-spoke.example.com) according to the following specifications:

Hub and Spoke Specs

Once your nodes have been provisioned, install the Puppet Agent on each node, making sure to point the installer DIRECTLY at the MoM (master.example.com) instead of at the compiler VIP.

curl -k https://master.example.com:8140/packages/current/install.bash | bash

and let the agent install complete in its entirety.

Next, from your browser, retrieve the Puppet Enterprise Console and select the “PE ActiveMQ Hub” group you created earlier. Pin the activemq-hub.example.com node into the PE ActiveMQ Hub group.

  1. Select the “Classes” tab and add a new class entitled “puppet_enterprise::profile::amq::hub”.
  2. Click “Add Class”.
  3. Under the Parameters drop-down, select “network_collector_spoke_collect_tag” and set its value to “pe-amq-network-connectors-for-activemq-hub.example.com”.
  4. Commit the changes.
  5. SSH to the activemq-hub.example.com and run puppet agent -t to make all your changes effective for the Hub node.

Create ActiveMQ Spoke (or “broker”)

  1. In the Puppet Enterprise Console, Select Nodes | Classification | PE ActiveMQ Broker
  2. Pin your new ActiveMQ broker into the PE ActiveMQ Broker group.
  3. Select the “Classes” tab.
  4. Under the puppet_enterprise::profile::amq::broker class, choose the activemq_hubname parameter and set it to the FQDN of the hub you just created. In our case, activemq-hub.example.com.
  5. SSH to the new broker (activemq-spoke.example.com) and run puppet agent -t.
  6. Finally, unpin master.example.com from the PE ActiveMQ Broker group.

Conclusion

At this point, you should have:

  • Puppet Master of Masters - master.example.com
  • PuppetDB - puppetdb.example.com
  • PE Console - console.example.com
  • HAProxy Node - compiler.example.com
  • 2 Catalog compilers - compile1.example.com and compile2.example.com
  • An ActiveMQ Hub - activemq-hub.example.com
  • An ActiveMQ Spoke - activemq-spoke.example.com
  • Two Agent Nodes - agent1.example.com and agent2.example.com

with their respective configurations. Your serving infrastructure is complete, and you are now ready to configure it for use.

Scaling Puppet Enterprise - Part IIIb - Additional Compilers


As in the previous installment, you need to have already completed a few steps before arriving at this post. You should have already completed a “split installation” (Documented here). Also, your load balancer needs to be configured and running. The procedure for this portion can be found here. If you’ve completed all these portions, you are now ready to configure and install the compilers themselves. If this is you, read on!


Once your Load Balancer and split install are in place and functioning, we need to add more compilers to the serving infrastructure. For the purposes of this tutorial, we will install two additional catalog compilers and register them with the currently existing master. Then, we will direct them to look to the “MoM”, or the “Master of Masters”, as the certificate authority. Further, we will install two agent nodes and connect them to the infrastructure.

You will need to install two compiler nodes and two agent nodes according to the following specifications.

Compilers and Agents

Once these four nodes are in place, we can connect the compilers to the Master of Masters (MoM) and then the agents to the “master” as they see it. Remember that, for our purposes, these nodes are named:

  • compile1.example.com
  • compile2.example.com
  • agent1.example.com
  • agent2.example.com

Installing the Compilers

SSH to the first compiler master (compile1.example.com for this post’s purposes) and install the Puppet agent as follows:

curl -k https://master.example.com:8140/packages/current/install.bash | bash -s main:dns_alt_names=compile1.example.com,compile.example.com,compile1,compile

What this does is simple. When this compiler goes behind the load balancer (compile.example.com), traffic may get directed to this node. When the request is made, the agent node will be asking for “compile.example.com”, but this node’s name is “compile1.example.com”. The additional options at the end of the curl line tell the agent, when it installs, to be aware of both names, and, when speaking to the MoM for the first time to request its cert, to represent all the comma-delimited names listed at the end of the above command.
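Before signing, you can confirm on the MoM that the pending certificate request actually carries the alternate names you asked for. With the puppet cert tooling of that era, listing the pending requests shows this; a quick sketch:

ssh master.example.com
puppet cert list
# pending requests are listed with their fingerprints and any DNS alt names they carry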

Next, SSH to your master node and accept the agent cert request as follows to allow for these names on the MoM you just set up in the previous step.

ssh master.example.com
puppet cert --allow-dns-alt-names sign compile1.example.com

NOTE THAT YOU CANNOT ACCEPT THIS CERT FROM THE CONSOLE. ALT_DNS IS NOT SUPPORTED FROM THE GUI

Finally, run the puppet agent on the first compiler (compile1.example.com) to configure the node:

puppet agent -t

Once the agent run is complete, you need to classify the catalog compiler in the console to make it ready for service.

Classify the Compiler

In the Puppet Enterprise Console:

Choose Nodes | Classification | PE Master

Add compile1.example.com and pin the node to the classification group and commit the change.

BE SURE TO COMPLETE THE NEXT STEPS IN ORDER AS FOLLOWS OR YOU WILL HAVE A BAD TIME

First: SSH to compile1.example.com and run puppet agent -t

Second: SSH to puppetdb.example.com and run puppet agent -t

Third: SSH to console.example.com and run puppet agent -t

Fourth: SSH to master.example.com and run puppet agent -t

BE SURE TO ALLOW EACH RUN TO COMPLETE FULLY BEFORE MOVING ON TO THE NEXT ONE

For All Subsequent Compile Node Installations

Follow the instructions above, as completed for compile1.example.com, for all subsequent compiler installations. This means that if you add compilers six months or a year from now, you go back to the previous procedure and duplicate it with the new node name, precisely as you did above. To recap:

  1. Install the agent as above with the alt_dns switches
  2. Accept the cert on the master with the alt_dns switches
  3. Classify the compiler in the console
  4. Run the Puppet agent in the above specified order, allowing each one to complete fully before moving on.

Configure Future Agent Installations to Point to the Load Balancer by Default

In the Puppet Enterprise Console, you must configure the system to point all future agent installations to the load balancer by default so you do not have to continue to make modifications and customizations after each agent install. To do so, perform the following steps:

  1. In the Puppet Enterprise Console, choose: Nodes | Classification | PE Master
  2. Select the “Classes” tab.
  3. Choose the “pe_repo” class.
  4. Under the parameters drop-down, choose “master” and set the text box to the name of your load balancer or VIP (in our case, “compile.example.com”)
  5. Commit the changes.

Point the New Compilers at the Master (MoM) for CA Authority

Create a new classification group called “PE CA pe_repo Override”

  • Go to Nodes | Classification in the Puppet Enterprise Console
  • Create a New Group
  • Name the new group “PE CA pe_repo Override”
  • From the “Parent Name” drop-down, choose the “PE Master” group.
  • Click “Add Group”.

  • Select your new group and pin master.example.com to the new group and click “Commit One Change”

  • Select the “Classes” tab.
  • Add “pe_repo” class.
  • From the parameter drop-down, select “Master”.
  • Enter the name of the MoM in the text box. (in this example, master.example.com)
  • Click “Add Parameter” and then “Commit 2 Changes”.

Test New Agents

The two agents you created at the beginning of the article are now able to be tested with this new group of compilers.

  1. Make sure agent1.example.com and agent2.example.com have been installed according to the system requirements covered in this series.
  2. Install the Puppet agent on each of these nodes, but this time, instead of pointing at the MoM, point to your load balancer VIP like so:
curl -k https://compile.example.com:8140/packages/current/install.bash | bash

In the PE Console, accept the new certificate request for Agent1. SSH to agent1.example.com and run puppet agent -t. Finally, repeat this process for agent2.example.com.

If you have completed all the above steps properly, the agents will reach out to the compile.example.com VIP and be ferried off to one of the catalog compilers. Regardless of which one, since we accepted all the alternate DNS names when creating the connection between them and the MoM, they will respond for compile.example.com, and deliver back to the agent the required information, catalog, etc. as Puppet would do under normal circumstances.

Conclusion

As you can see, we needed the Load Balancer in place to install the catalog compilers. We also needed all the DNS alt-naming to be in place so the load balancer could send traffic to either catalog compiler as needed, and still have it answer for the VIP name. Finally, we needed to refer requests to their appropriate destinations and also classify the new compilers as such with the MoM, and set up appropriate referral of certificate requests from the compilers back to the CA Master, which is the MoM.

The serving infrastructure is almost done; all we have left to do is to scale MCollective with an ActiveMQ Hub & Spoke, and remove that responsibility from the MoM. We will also install a GitLab server to hold our Control Repo and associated Roles & Profiles, and we will configure the Code Manager.

Scaling Puppet Enterprise - Part IIIa - Additional Compilers


You should have completed a split install before beginning this section. You can find the Split Installation documentation at Puppet’s Website, or the first installment of this tutorial here. If you try and begin here, you might find yourself lost.

Note also that the “Additional Compilers” docs come in two parts: one to install the load balancer and one to install the compilers.

First, Some Philosophy

The Puppet Enterprise documentation circa PE 2015.3.2 had some “issues”. Let me actually preface that, though. Puppet Labs' documentation is by far some of the most voluminous and in many respects most complete vendor documentation out there. I don’t mean to disparage their work AT ALL. When it comes to the fact they even have documentation at this level, they’re the “bees knees”.

However, I’ve always written documentation to fit the “grandma rule”. My grandmother was a little 4-foot-nothing Cajun woman with English as her second language. She never used the first computer, still had a rotary phone when she passed away, and remained suspicious of anything technical. She was, however, a voracious reader, keenly intelligent, and understood considerably more than you’d expect at first glance. She was also a stickler for punctuation, grammar, and the like. In short, if my grandma couldn’t read the documentation and follow a step-by-step process to install Puppet successfully, then it’s just too complex, poorly formatted, or unclear, and needs to be simplified.

This causes a problem, of course. There are technologists out there that would become annoyed at repetition, verbosity around “understood” things, and spelling out each and every step along the way… even painfully. However, I feel it is the only proper way to document something. My rules are simple.

  • Leave nothing to question
  • Be as verbose and clear as possible
  • Make sure everything is in order, step-by-step

By following this simple guideline, I feel I’m doing more of a service to the reader than if I presumed on their level of sophistication with Puppet, Linux/UNIX, Windows, research capability, Google-foo or whatever.

So let’s dive in, shall we?

HAProxy

It may seem counterintuitive, but now that we’ve done a split install, I want to next install the HAProxy instance we will use as a load balancer in front of the additional compilers. By installing this first, we can utilize Puppet to install HAProxy and manage it automatically rather than doing a lot of ad-hoc work.

Also, by doing the proxy first, the prerequisites are satisfied in their proper order: the load balancer exists before the additional compilers are configured (so the compilers can include the load balancer in their dns_alt_names), and GitLab is in place and hosting the control_repo before Code Manager is turned on and configured.

Hardware

In the initial hardware list, I included a node called “Compile Master”. This node looked like:

Compile Master Specs

This node may seem like overkill, but disk and memory are cheap. If you are scaling at this level, it’s better not to have to reinstall your load balancer later. Keep in mind, you don’t have to use HAProxy and can use a corporate load balancer here, but its configuration is outside the scope of this tutorial.

Once you’ve provisioned the load balancer, ssh to the node as the root user, and use the “frictionless installer” to add your Puppet agent.

curl -k https://master.example.com:8140/packages/current/install.bash | bash

When the client is fully installed, retrieve the Enterprise Console from your browser, and navigate to Nodes | Classification | Unsigned Certificates and select “Accept All”. Finally, ssh to the instance as the root user and run puppet agent -t to finish the setup.

Configure the Load Balancer

At this point, the node is provisioned and you have a Puppet agent running on it, but you have as of yet not configured the HAProxy Load Balancer for use in the environment. The load balancer will be necessary to have in place prior to adding compile masters to your existing split installation. The following instructions guide you through setting up the HAProxy load balancer.

  1. SSH to the Puppet Master as root. (master.example.com in our list)

  2. Install the HAProxy Forge module on the master

puppet module install puppetlabs-haproxy


leave your root console open while performing steps 3-6

  3. Retrieve the Enterprise Console in your browser

  4. Select Nodes | Classification

  5. Create a New Classification Group called “Load Balancer”

  6. Select the new group from the list and pin the node “compiler.example.com” into the new group.

  7. In your open SSH session to master.example.com, create the profiles module to hold the configuration for HAProxy

cd /etc/puppetlabs/code/environments/production/modules

mkdir -p profiles/manifests

cd profiles/manifests
  8. Once you have changed to the profiles/manifests directory, create the loadbalancer.pp manifest.

  9. Follow the documentation here to configure HAProxy. When complete, the loadbalancer.pp manifest should resemble the following, with IPs corrected for your particular instance:

# Load Balancer Profile
class profiles::loadbalancer {

  class { 'haproxy': }

  # Main Proxy Listener
  haproxy::listen { 'compiler.example.com':
    collect_exported => false,
    ipaddress        => $::ipaddress,
    ports            => '8140',
  }

  # First Load balanced Compile Master
  haproxy::balancermember { 'compiler1.example.com':
    listening_service => 'compiler.example.com',
    server_names      => 'compiler1.example.com',
    ipaddress         => '10.0.1.24',
    ports             => '8140',
    options           => 'check',
  }

  # Second Load Balanced Compile Master
  haproxy::balancermember { 'compiler2.example.com':
    listening_service => 'compiler.example.com',
    server_names      => 'compiler2.example.com',
    ipaddress         => '10.0.1.25',
    ports             => '8140',
    options           => 'check',
  }
}

Once you have created this profile, retrieve the Puppet Enterprise Console in your browser and navigate to Nodes | Classification | Load Balancer.

  1. Select the Classes tab.
  2. Click the “refresh” button so the console will pick up your new loadbalancer.pp profile to classify your node with.
  3. Under the “Add new Class” heading, select profiles::loadbalancer from the list that drops down.
  4. Click “Add Class”.
  5. Select “Commit 1 Change” at the bottom right of the page.
  6. SSH back into compiler.example.com and run puppet agent -t to configure the Load Balancer.

Your load balancer is now prepared to balance traffic to the two catalog compilers (compiler1.example.com and compiler2.example.com) listed in the above configuration.

Notes


I noted when putting together the loadbalancer.pp profile above that I had previously used some REALLY ODD IP addresses in the balancer config. Why? For the life of me I cannot recall.


In my original implementation, I set the ipaddress fields to some odd IP addresses. For information on how to fill those in, the documentation gives some hints:

ipaddresses: Optional. Specifies the IP address used to contact the balancermember service. Valid options: a string or an array. If you pass an array, it must contain the same number of elements as the array you pass to the server_names parameter. For each pair of entries in the ipaddresses and server_names arrays, Puppet creates server entries in haproxy.cfg targeting each port specified in the ports parameter. Default: the value of the $::ipaddress fact.

Since I was originally setting these up in Digital Ocean, I used the IP space 159.203.x.x, which belongs to Digital Ocean. I am guessing these were the hard IPs on the instances I stood up. Since the documentation above states these are optional, you have two options here: either leave those lines out of your config altogether, or manually set them to the IP address of the instance you’re using. Try each and use whichever works for you.
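As the quoted documentation notes, server_names and ipaddresses can also take matching arrays, so if you do set the addresses explicitly you could collapse the two member entries into a single resource. A sketch of that form (parameter name per the quoted documentation; older versions of the module may differ):

  # Both compile masters declared in one balancermember resource
  haproxy::balancermember { 'compile_masters':
    listening_service => 'compiler.example.com',
    server_names      => ['compiler1.example.com', 'compiler2.example.com'],
    ipaddresses       => ['10.0.1.24', '10.0.1.25'],
    ports             => '8140',
    options           => 'check',
  }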

Conclusion

Your HAProxy Load balancer is now complete and ready to take traffic to the additional catalog compiler nodes. In installment IV, we’ll begin to add in more components along the way to a fully developed LEI of Puppet Enterprise.

Scaling Puppet Enterprise - Part II - Installation


Installing Puppet Enterprise has been made remarkably easier as time has gone on. The efforts of Puppet Labs (I still can’t get used to simply ‘Puppet’) to make the installation as seamless and powerful as possible with the simplest of interfaces has been highly successful.

Many changes have occurred over time to include changing from answer files to a HOCON formatted pe.conf file containing the various configuration elements you may need to stand up an instance. I somewhat preferred the simple nature of the original answer files, but I can see the sense in moving to HOCON moving forward.

Obtain Puppet

Needless to say, you’re going to need the Puppet Enterprise package to install from. Unlike Puppet Community, the entire installer is provided as a tarball rather than as a repo-based installation via package management, and it requires a little bit of UNIX-y know-how to get started, as the Puppet Enterprise server is only installable on Linux.

When you navigate to the Puppet Download page, you may be required to sign up for a free account if you haven’t already. The opening download page is found here.

You will be presented with a launch page that contains a “Download” button. Click the button, and one of two things will happen. Either you will be directed to a “Thank You” page or a page to sign up for an account. As you can see, the “Thank You” page means you already have an account and are signed in whereas the signup page is self-explanatory. Sign up for an account, and retry the download link.

Once you’ve made it to the “Thank You” page, there are three tabs containing “Puppet Enterprise Masters”, “Puppet Enterprise Agents”, and “Puppet Enterprise Client Tools”. As of this writing, the only supported Puppet Master platforms are RedHat 6 & 7, Ubuntu 12.04, 14.04, and 16.04, as well as SLES 11 and 12.

If you had intentions of running the Puppet Master server on any other platform, here is where you realign your expectations. :) I have heard that people have hacked the server to run on other platforms, but since we’re dealing with Puppet Enterprise, why would you break support and eliminate warranty? Pick one of the three and download the tarball for your appropriate platform.

NOTE: If You need legacy versions of PE, you can download those here.

Installation

For the purposes of this scenario, we will be installing the Puppet Infrastructure for fictional super-mega huge company “example.com”. I am going to trust you have worked out the DNS/Host file naming structure, and can resolve everything everywhere. If you cannot, don’t comment on the post, as I will make fun of you publicly… you deserve it.

My assumed setup will be:

Example.com Node List

Automated

The Puppet Enterprise Installer is a web-browser-based GUI installer. Puppet has gone through the process of giving you a nice front end to your installation, making it dead easy to perform a monolithic as well as a split installation. For our purposes, though, we will be doing a “split” installation.

Stand up 3 Nodes with the specifications from the first article in the series as follows:

Split Node List

In my experience, I’ve found it much easier to exchange root keys between all three of the above nodes to allow the installer to do all it needs to do on each node. You can, however, decide to set the root password to something temporary to hand to the installer as well (and many people opt for this) and then return root’s password to your site default. In any event, all the machines should be able to resolve themselves and each other by name and root should be able to freely ssh between them either via shared keys (easiest) or password.
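A minimal sketch of the key exchange, run from the master as root, assuming root SSH is permitted between the nodes during installation (tighten this back down afterwards per your site policy):

# on master.example.com, as root
ssh-keygen -t rsa -b 4096        # accept the defaults; no passphrase needed for the install window
for host in puppetdb.example.com console.example.com; do
  ssh-copy-id root@${host}
done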

Transfer the package to the Puppet Master node:

scp -rp puppet-enterprise-2015.3.2-el-7-x86_64.tar.gz root@master.example.com:/root/

Once the package is on the destination machine, you should connect to the machine to work with the package on-box:

ssh root@master.example.com

which places you in the root user’s home directory where you copied the package.

Extract the Package

tar -zxvf puppet-enterprise-2015.3.2-el-7-x86_64.tar.gz 
cd puppet-enterprise-2015.3.2-el-7-x86_64

Run the Installer

./puppet-enterprise-installer

You will receive a text prompt that states:

?? Install packages and guided install [Y,n]

Simply press “Y” or the [Enter] key and the GUI portion of the installation will begin.

GUI Installer

Once you have started the Installation, the Puppet Enterprise Installer will perform some preparatory steps and then launch an installation interface on your master node on port 3000. To access this interface, you can bring it up in the web browser of your choice at:

https://master.example.com:3000

Navigate to this interface in your Internet browser. When you first arrive at the GUI installer, simply click the “Let’s Get Started” button. On the next page, select “Split” to begin the split installation. The Puppet Enterprise Installer will present you with a GUI questionnaire to fill out regarding your environment. The following is that process, in order, by section.

Puppet Master Component

  1. Choose the “Install on this Server” radio button.
  2. Enter the name of your Master in the Puppet Master FQDN text box. (e.g. master.example.com)
  3. Enter all appropriate names for your master in the Puppet Master DNS Aliases text box.
  4. Select the “Enable Application Orchestration” check box.

PuppetDB Component

  1. Enter the hostname of your PuppetDB Node in the PuppetDB Hostname text box. (e.g. puppetdb.example.com)

  2. Change no other selections under the remainder of the items for this section.

PE Console Component

  1. Enter the hostname of your Puppet Console in the Console Hostname text box. (e.g. console.example.com)
  2. Change no other selections under the remainder of the items for this section.

Database Support

No changes are needed to this section. Simply leave “Install PostgreSQL on the PuppetDB host for me” selected.

Console ‘admin’ User

Enter the password you would like to use for the Puppet Enterprise console once your installation is complete in the final text box.

Final Considerations

After completing the final section, click the “Submit” button, and the Puppet Enterprise Installer will present you with a confirmation page for you to review before commencing the installation based on the configuration elements you just provided to the installer.

If everything is to your satisfaction, click the “Continue” button and the Puppet Enterprise installation will begin. The installation progress summary will continue to update you as to the progress of the installation. If you would like to see logging “as it happens”, you can click the “log view” button to see that in real time. If you would like to switch back to the summary, simply click the Summary button.

After roughly 10-15 minutes of installation and configuration, the installer will have completed all its work, and you will be presented with a button at the bottom of the progress screen that says “Start Using Puppet Enterprise”. Click that button, and the installer will redirect you to the PE Console login screen. Enter the admin credentials you created earlier, and you are ready to begin working with the console as needed.

Scaling Puppet Enterprise


In my former life as a consultant, I had to install all manner of configurations of Puppet for clients. Some were small and some were large, but none were VERY large. One of the big things I was finding back then was there just wasn’t a lot of publicly available information regarding doing a full install and scaling it large.

So, I took some “research time” on my own and started to build out the configurations according to Puppet Labs’ (at the time… now just “Puppet”) documentation. The problem I was having was that the docs wouldn’t ever lead me to a successful install following a chronological set of steps. I had to click into subpages, jump over to sub-sub configurations, and then jump back to the main docs to follow yet another trail down until I reached the end… lather, rinse, repeat.

Some Caveats..

First, this is probably no longer a good “HOWTO” unless you’re installing an older Puppet Enterprise. It was created between 2015.2 and 2016.x, and likely has some amount of artifacting related to those versions.

Second, I’m going by docs I’ve recorded for my own use. I wrote these, as mentioned above, through prototyping, tearing it down, starting again, and literally doing the entire install over and over until it worked “as advertised”. A lot of this was really just ordering things the right way, and finding documentation for various pieces online at Puppet’s documentation site as well as blogs, conversations, and plain old trial and error. I certainly can’t warrant anything to anybody for any reason. As with most open source/creative commons assets, “it works for me, hope it works for you, and if it doesn’t, sorry about that.”

Finally, I hope to use this as the springboard to start brain-dumping all my old notes, conversations, ideas, and other prototyping I did in my home lab. There’s still a fair amount of documentation I cannot use or touch because they belong to my former employer or Puppet Labs, so some things may be less than clear and usually because I’m dancing around an NDA, noncompete, or just plain being a nice guy. If I inadvertently reveal something I shouldn’t, chances are it could disappear without a trace, but I’ll still make a note that I removed something, and try and replace whatever it is with published docs.

In short: I want to help the community, but I’m walking a tightrope here, so please be kind.

Format

I hope to start easy with a decision-making process for installing Puppet, how to choose a method, and how to think about scale, and I will likely have quite an opinionated view at times. Once PE is installed, we’ll add compilers, scale Postgres, etc., but for starters, I hope to just have the following:

  • PE Master (MoM)
  • PuppetDB
  • PE Console
  • HAProxy Node for Compilers
  • Two Catalog Compilers
  • One ActiveMQ Hub
  • Two ActiveMQ Spokes
  • Two Agent Nodes for testing

I know that’s quite a number of nodes to get started with, but this is, after all, a large environment infrastructure, and we want to scale big.

Required Nodes

To put together all the required components for a good large installation, I’ve settled on the below specs. You can change those as you see fit, but note that some of the disk space requirements and related items were due to Puppet’s documented requirements at the time. YMMV, of course, but this is what I consider to be a base-level installation if you intend on scaling into the multiple tens of thousands of nodes. Be sure that if you’re going to size this down, you’re still meeting Puppet’s needs in regard to memory, cores, and disk. (For a current listing of Puppet’s requirements, you can look here for more information.)

Puppet_Prerequisites

In addition, you’ll need to be aware of firewall requirements for such an installation. Puppet has documentation regarding firewall configurations and needed ports at their website here, but I’ll insert the image and recount the requirements here.

Firewall_Ports

In short:

Firewall_Ports

This is a close approximation to what you need to know. Detailed charts are found in the above links, and a “point-by-point” port and use list is available to review.

In short, I’ve found it easiest to have all PE components on the same VLAN with no restrictions between them. If you are going to have a local firewall turned up on each node, you’ll need to manage all the above communications as you see fit, but for the serving infrastructure (if in a secure environment, of course) you can likely drop host firewalls in favor of corporate ones. In short, make it as easy on yourself as you see fit while balancing that toward your corporate security policy.

Finally, make sure DNS and NTP are all ready to go. I can’t tell you the number of times I’ve had major issues trying to get all this working, and NTP was off, or DNS didn’t propagate as expected (it’s always DNS, right?) or some other similar seemingly unrelated piece was not restarted or some such. Just make sure that all nodes resolve to their respective FQDN from all nodes. Obviously, the easiest way to do this is to simply put them all in DNS. You can manage the host files manually, but why would you want to do that?
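A quick sanity loop run from each node (or at least from the master) will catch most of these problems before the installer does. A sketch, assuming the node names used throughout this series and that ntpstat is available (use chronyc tracking on chrony-based systems):

for host in master puppetdb console compiler compile1 compile2 activemq-hub activemq-spoke agent1 agent2; do
  getent hosts ${host}.example.com > /dev/null || echo "DNS lookup FAILED for ${host}.example.com"
done

# verify the local clock is actually synchronized
ntpstat || echo "NTP is not synchronized on $(hostname -f)"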

If you’re at this point and all ready to go, look to the next entry to get started.

Changes Are Afoot


After much thought and consideration, I’ve terminated my employ with ShadowSoft. I was travelling nearly every week all over the US, and not with my family as much as I’d like. Unfortunately, this role was a 70% travel role, and our youngest needed Daddy home.

PayPal

As luck would have it, a totally awesome telecommute FT/Perm option came up with PayPal, and I accepted rather excitedly, and began that role today. After some getting acquainted with my new duties, you should be seeing/hearing more from me on the Puppet front soon.

Travel Hiatus


Out

Just a quick update for you all. $work decided at some point that one of the things I was specifically hired for (blogging in the community) in some way “gives away the farm” in regards to Puppet, Puppet consultation, and related items. As such, I’ve been asked not to blog publicly regarding items we deliver as services.

:-(

It likely doesn’t matter that much, as I’m travelling more than I have in years, and am on-target to exceed 145k miles by Summer’s end. As a result, I will be laying off until I can regroup and find more time.

Sorry, but as they say “them’s the breaks”.

If you need me, you can always find me on the Puppet community Slack Channel #puppet, and my nick there is @cvquesty.

Look for interesting news from me soon.